An AI-powered bionic hand will know what it's grabbing

The hand uses computer vision to pick the right grip for the object in front of it

Image: Newcastle University

When the wearer of a prosthetic arm wants to grab something, there are a number of ways to communicate that intention. With a basic prosthetic, the grip mechanism might be controlled mechanically, with a cable attached to the opposite shoulder, for example. With most contemporary limbs, the signal to grip is detected by myoelectric sensors, which read muscle activity at the skin’s surface. And with the most advanced (and most expensive) prosthetics, sensors that measure nerve signals are implanted directly in the muscle. But what if the arm itself could see?
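For a sense of what a myoelectric trigger involves, the sketch below flags a grip signal when the rectified, smoothed EMG amplitude crosses a threshold. The sample values, window size, and threshold are illustrative, not taken from any particular device.

```python
import numpy as np

# Hypothetical sketch of a myoelectric trigger: a grip signal is detected
# when the rectified, smoothed EMG amplitude crosses a calibrated threshold.
# The window size and threshold are illustrative values only.

def detect_grip_intent(emg_samples, threshold=0.3, window=200):
    """Return True if the smoothed EMG envelope exceeds the threshold."""
    rectified = np.abs(emg_samples)  # full-wave rectification
    envelope = np.convolve(rectified, np.ones(window) / window, mode="valid")
    return bool(np.any(envelope > threshold))

# Example: a quiet baseline with one burst of simulated muscle activity
rng = np.random.default_rng(0)
signal = rng.normal(0.0, 0.05, 2000)
signal[1200:1600] += rng.normal(0.0, 0.5, 400)  # simulated contraction
print(detect_grip_intent(signal))  # True
```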

That’s the idea being developed by biomedical researchers at Newcastle University, who have built a prototype prosthetic limb with an AI-powered camera mounted on top. The camera uses the sort of computer vision technology that big tech companies have developed, with the researchers using deep learning to teach it to recognize some 500 objects. When the wearer of the limb moves to grab, say, a mug, the camera takes a picture of the object and the system moves the hand into a suitable “grasp type.” (For example, a pinching motion for picking up a pen, or a vertical grip for grabbing a soda can.) The user then confirms the grip action with a myoelectric signal.
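To make that pipeline concrete, here is a minimal sketch of the see-then-confirm loop. The grasp names, the object-to-grasp mapping, and the helper functions are hypothetical stand-ins; the real system maps camera frames to grasp types with a trained deep network.

```python
# A minimal sketch of the control loop described above. GRASP_FOR_OBJECT,
# classify, and emg_confirms are hypothetical placeholders, not the
# published system's components.

GRASP_FOR_OBJECT = {
    "mug": "vertical_wrap",
    "pen": "pinch",
    "soda_can": "vertical_wrap",
    "biscuit": "tripod",
}

def choose_grasp(detected_object):
    """Map a recognized object to one of a small set of grasp types."""
    return GRASP_FOR_OBJECT.get(detected_object, "power_grip")  # safe default

def grab_sequence(camera_frame, classify, emg_confirms):
    obj = classify(camera_frame)  # deep-learning object recognition
    grasp = choose_grasp(obj)     # preshape the hand for that object
    if emg_confirms():            # wearer confirms with a muscle signal
        return f"executing {grasp} for {obj}"
    return "grasp cancelled by wearer"

# Example with stubbed-out vision and EMG inputs
print(grab_sequence(None, classify=lambda _: "mug", emg_confirms=lambda: True))
```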

“Using computer vision, we have developed a bionic hand which can respond automatically — in fact, just like a real hand, the user can reach out and pick up a cup or a biscuit with nothing more than a quick glance in the right direction,” said Dr. Kianoush Nazarpour, a biomedical lecturer at Newcastle University, in a press statement. “The beauty of this system is that it’s much more flexible and the hand is able to pick up novel objects — which is crucial since in everyday life people effortlessly pick up a variety of objects that they have never seen before.”

The end result is a hand that is much quicker to use than contemporary prosthetics — up to 10 times faster than others on the market, say the researchers. The system is also cheap. The camera that allows the hand to “see” is just an ordinary Logitech webcam, and the AI software that recognizes objects can be trained cheaply. The whole training process is described in a study published in the Journal of Neural Engineering.
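One plausible reading of “trained cheaply” is transfer learning: reuse a pretrained vision backbone and retrain only its final layer on labeled webcam images. The PyTorch sketch below shows that approach as an assumption; the paper’s actual architecture and training procedure may differ.

```python
import torch
import torch.nn as nn
from torchvision import models

# Assumed approach, not the paper's actual procedure: freeze a pretrained
# backbone and retrain only the final layer. The class count is illustrative.

NUM_GRASP_CLASSES = 4  # e.g. pinch, tripod, vertical wrap, power grip

model = models.resnet18(weights=models.ResNet18_Weights.DEFAULT)
for param in model.parameters():
    param.requires_grad = False  # freeze the pretrained feature extractor
model.fc = nn.Linear(model.fc.in_features, NUM_GRASP_CLASSES)  # new head

optimizer = torch.optim.Adam(model.fc.parameters(), lr=1e-3)
loss_fn = nn.CrossEntropyLoss()
# A short training loop over labeled webcam images would go here; only the
# small final layer is updated, which keeps training fast and cheap.
```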

There are difficulties to overcome, of course. The neural network used to recognize objects wasn’t always correct (it had a success rate of roughly 80 to 90 percent), and amputees who tested the prosthetic had to be able to override its actions when necessary. If the system were deployed in the real world, there would also have to be a mechanism for adding new objects to the AI’s memory. The next step, say the researchers, is to add more sensors to prosthetics, so they can detect qualities like temperature and pressure. With this information, they’ll become more and more like organic limbs.
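One simple way such an override might work is a confidence gate, as in the sketch below: the hand trusts the camera only when the classifier is confident, and otherwise defers to the wearer’s manual choice. The threshold and veto flag are assumptions for illustration, not details from the study.

```python
# Illustrative sketch of the override: accept the vision system's suggestion
# only when classification confidence is high and the wearer does not veto.
# The 0.8 threshold and the veto flag are assumptions, not from the study.

CONFIDENCE_THRESHOLD = 0.8  # roughly in line with the reported success rate

def select_grasp(prediction, confidence, user_veto, manual_choice):
    """Fall back to the wearer's manual choice on low confidence or a veto."""
    if confidence < CONFIDENCE_THRESHOLD or user_veto:
        return manual_choice  # the wearer overrides the camera
    return prediction         # trust the vision system

print(select_grasp("pinch", 0.92, user_veto=False, manual_choice="power_grip"))
print(select_grasp("pinch", 0.55, user_veto=False, manual_choice="power_grip"))
```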

“It’s a stepping stone towards our ultimate goal,” said Dr. Nazarpour of the prototype arm. “But importantly, it’s cheap and it can be implemented soon because it doesn’t require new prosthetics — we can just adapt the ones we have.”