Intel’s Hands-Free UCL MotionInput Software Promises More ‘Equitable Computing’ For All
When people think of Santa Clara-based chipmaking giant Intel, accessibility doesn’t naturally spring to mind as something the company cares about. After all, what does fabricating computer processors have to do with making technology accessible to disabled people? As ever, the tie that binds these seemingly disparate worlds is people. Caring about accessibility is never merely a matter of focusing on the tech itself; the tech alone is soulless. It’s the people behind the tech who matter most. So, yes, fabricating computer chips has little direct bearing on accessibility, but that isn’t the point. The point is that accessibility truly does matter to the people, Intel’s workforce, who come together to create those chips. And it goes deeper.
Back in February, Intel’s director of accessibility Darryl Adams told me in an interview that the company is committed to playing its part in ensuring equitable digital access for the disability community. Leveraging its massive scale, Intel believes inclusive technology “is something [we] can put out into the world to make it a better place,” Adams said. The mission resonates so deeply with Adams because he has visual impairments himself; he personally benefits when technology is made more accessible and empathetic. For him, the implications are anything but abstract.
Intel is soldiering on with its mission to expand digital access with its MotionInput software. In a press release published last month, the company announced UCL MotionInput. Developed in collaboration with Microsoft and IBM, the software was built by students in the computer science department at University College London (UCL). Paired with a webcam, UCL MotionInput enables people to control their PC hands-free; the computer responds to gestures of the head, hands, or full body, as well as to speech. The software uses artificial intelligence and machine learning to analyze those movements and convert them into the trigger actions traditionally performed with a mouse and keyboard. Conceptually, UCL MotionInput is similar to Apple’s longstanding Switch Control feature for users who cannot operate their computers via conventional input methods. The key difference is that UCL MotionInput requires no additional hardware, whereas Switch Control requires physical switches. Intel and its partners like to describe UCL MotionInput as “multi-touch in the air.”
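The press release doesn’t detail the underlying pipeline, but the general pattern is easy to picture: camera frames come in, a model finds body landmarks, and those landmarks are translated into synthetic mouse and keyboard events. The sketch below illustrates that pattern only; it is not UCL MotionInput’s actual implementation, and the MediaPipe and PyAutoGUI libraries are stand-in choices of my own. It maps the tip of the index finger, as seen by a webcam, onto the system cursor.

```python
# Illustrative sketch of the "camera in, mouse events out" pattern the article
# describes. MediaPipe and PyAutoGUI are assumed stand-ins, not UCL MotionInput's
# actual stack.
import cv2
import mediapipe as mp
import pyautogui

screen_w, screen_h = pyautogui.size()
hands = mp.solutions.hands.Hands(max_num_hands=1, min_detection_confidence=0.7)
camera = cv2.VideoCapture(0)

while True:
    ok, frame = camera.read()
    if not ok:
        break
    frame = cv2.flip(frame, 1)  # mirror the image so movement feels natural
    results = hands.process(cv2.cvtColor(frame, cv2.COLOR_BGR2RGB))
    if results.multi_hand_landmarks:
        # Landmark 8 is the index fingertip; its coordinates are normalized 0..1,
        # so scaling by the screen size turns them into cursor coordinates.
        tip = results.multi_hand_landmarks[0].landmark[8]
        pyautogui.moveTo(int(tip.x * screen_w), int(tip.y * screen_h))
    cv2.imshow("preview", frame)
    if cv2.waitKey(1) & 0xFF == 27:  # press Esc to quit
        break

camera.release()
cv2.destroyAllWindows()
```

The same loop generalizes to head or full-body control: swap the hand-landmark model for a face or pose model and map a different landmark to the cursor.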
“Since the inception of the desktop PC, end users have had to learn to use computing input devices like a keyboard and mouse. We all wanted to unlock the potential of what if a computer could be aware of a range of your movements,” said Pippa Chick, Intel’s global account director on its health and life sciences team, in a recent interview conducted over email. “Now raise that further, in version 3, to include on-device speech alongside the movements. The software is thus meant for any user that wants a touchless interface to replace a keyboard, mouse and joypad with the software that they already use every day.”
UCL MotionInput was born of necessity, as a response to Covid-19. Two people, Dr. Atia Rafiq and Sheena Visram, were instrumental in “defining the clinical needs and parameters for touchless interactions in primary care, triage, hospital care, patient-side, surgical, and radiological use cases,” according to Chick. In the early days of the pandemic, as scientists and healthcare professionals tracked the virus to determine its spread, questions arose about shared computers in hospitals, care homes, and other settings, whose surfaces could harbor the virus as people touched them. Such considerations necessitated “an urgent and important review,” Chick said. Thus, those involved decided to find and build ways to control computers without touching anything at all.
Of course, what began as an effort to mitigate transmission of the coronavirus has morphed into something eminently usable by members of the disability community. This is yet another example of technology designed for something else being repurposed to address even more meaningful applications; as Chick told me, “the clinical examples are clear from a hygiene and safety perspective.” Chick cited several instances where UCL MotionInput could prove useful: a chef browsing a recipe on a tablet with their hands full or dirty, or someone who wants to play music by the pool but doesn’t want their device to get wet. From an accessibility perspective, Chick also noted that UCL MotionInput benefits users who cannot extend their arms forward to touch a screen, since they can control the interface via facial gestures and the like. Likewise, someone who can’t move their neck but can move their eyes can use the eye-gaze functionality to manipulate the pointer.
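Pointing hands-free is only half the problem; a user who can’t press a button also needs a hands-free way to click. A common technique in this space is dwell clicking, where a click fires once the pointer has held still long enough. The snippet below is a minimal sketch of that idea, again using PyAutoGUI as an assumed stand-in; the article doesn’t say how UCL MotionInput implements clicking.

```python
# Minimal dwell-click sketch: issues a click when the pointer has stayed within
# RADIUS pixels for DWELL_SECONDS. A common hands-free pattern, not necessarily
# UCL MotionInput's approach; PyAutoGUI is an assumed stand-in.
import math
import time
import pyautogui

DWELL_SECONDS = 1.5   # how long the pointer must hold still
RADIUS = 15           # drift, in pixels, that still counts as "holding still"

anchor = pyautogui.position()     # where the current dwell timer started
anchor_time = time.monotonic()

while True:
    x, y = pyautogui.position()
    if math.hypot(x - anchor.x, y - anchor.y) > RADIUS:
        # Pointer moved: restart the dwell timer at the new position.
        anchor = pyautogui.position()
        anchor_time = time.monotonic()
    elif time.monotonic() - anchor_time >= DWELL_SECONDS:
        pyautogui.click(x, y)
        anchor_time = time.monotonic()  # re-arm; holding longer clicks again
    time.sleep(0.05)  # poll ~20 times per second
```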
“The team lead architect, Sinead, did a fantastic job of this ‘pick and mix’ of modalities of use, and this is just the first generation of this work,” Chick said. “The team at UCL are actively seeking groups that want to trial and improve these features with them.”
Much of the external feedback on UCL MotionInput came from the ALS community. Chick explained that Catherine Cummings, executive director of the International Alliance of ALS/MND Associations, “played an instrumental part in distributing the software design ideas [for UCL MotionInput] to the ALS community for ideas and suggestions.” The feedback was both “wonderful and instructive,” Chick said; one example is the ability to easily change modalities when someone becomes fatigued from performing certain motions. Members of Cummings’ Alliance were “highly excited” by the breadth and depth of the movements possible with UCL MotionInput, Chick added, which make everyday tasks on one’s computer more accessible and enjoyable.
Looking toward the future, Chick said getting more feedback is a primary goal. “The students and the academics want to hear from industries, especially charitable organizations, to know what works, what to improve and what people want built next with it,” she said. “They are a super friendly bunch and genuinely want to hear from people. I know that accessibility in gaming is a big theme for all of us [on] the team, but also that there are so many possibilities to reach other industries with this technology.”
A video showing UCL MotionInput in action is available on YouTube.