Global First to Empower People with Disabilities: The Scott-Morgan Foundation Debuts Hyper-Realistic AI Avatar, Predictive LLM, Smart Wheelchair, Multimodal Communication Platform, Tongue-Operated Touchpad & EarSwitch Biometric Controls
THE SCOTT-MORGAN FOUNDATION | Jan. 9, 2024
Media Contact: LaVonne Roberts
Today, The Scott-Morgan Foundation, a non-profit leader in assistive technology, unveils a series of global firsts for people with disabilities, including the first hyper-realistic AI avatar deployed for accessibility, the first personal LLM dedicated to assistive communication, and the first multimodal interface converging eye tracking, ear and tongue controls, and more into one platform. By addressing some of the most complex human-technology challenges faced by people with limited mobility and speech, these collaborations aim to set a new standard for inclusive innovation. The proof of concept, showcased at CES 2024, integrates powerful technologies from an unprecedented set of collaborators recruited and led by the Foundation:
On-device, personal LLM for predictive, generative AI from Lenovo
Eye-gaze tracking and AAC communication platform converging multimodal inputs (eye tracking plus ear, tongue, and wheelchair controls) from IRISBOND
Smart independent mobility platform from LUCI Mobility
Circular keyboard interface from The Scott-Morgan Foundation
In-ear biometric control technology from EarSwitch
Tongue-operated, hands-free touchpad from Augmental
By combining these technologies, the Scott-Morgan Foundation aims to enable more autonomous communication and mobility for people with disabilities. During the CES demo, LUCI's self-driving wheelchair will display the hyper-realistic AI avatar of Erin Taylor on a vertical screen. Created by DeepBrain AI and Lenovo, the avatar captures Erin's personality and mannerisms with 96% accuracy and articulates her text in real time using IRISBOND's multimodal, eye gaze-powered AAC platform. The avatar serves as a virtual stand-in for Erin, who will be available for select interviews to discuss her vital role in designing these assistive technologies that preserve independence for people with disabilities.
Currently, 2.5 billion people need assistive technology worldwide. By 2050, this number will reach 3.5 billion. By addressing the intensive needs of people with severe disabilities—especially ALS, also known as MND in the UK, a neurodegenerative disease—the Scott-Morgan Foundation sets out to solve some of assistive technology's greatest challenges. The Foundation's solutions unlock new independence for millions of others living with disabilities worldwide. Focused on designing and democratizing assistive tech, the Foundation does visionary work that raises the bar on what is possible in empowering those with limited mobility and speech.
“We are building an ecosystem of complementary solutions to change what it means to live with a disability,” said Andrew Morgan, CEO of the Scott-Morgan Foundation. “This growing collaboration showcases the power of mission-driven companies coming together. Individually, their innovations change lives. Together, they disrupt the entire landscape.”
The work evolved in close collaboration with Erin Taylor, a 24-year-old woman recently diagnosed with ALS. Erin is helping test the different technologies to preserve her personality, independence, and mobility as the disease progresses.
“By driving public awareness of the possible, we hope to spark innovation that makes such assistive technologies accessible to all who need them,” said LaVonne Roberts, Executive Director of The Scott-Morgan Foundation. “This collaboration isn't merely a technological advancement—it's a powerful affirmation of human rights and inclusion.”
AI Avatar to Preserve Personality
The Scott-Morgan Foundation debuted the first hyper-realistic AI avatar deployed as assistive technology. Erin’s avatar preserves her unique personality, voice, mannerisms, and full-body likeness, which she can use to transform text into dynamic video. DeepBrain AI’s avatars replicate the look and sound of the real individual with staggering accuracy: over 96% visual and audible similarity to the human counterpart.
The avatar marks a major leap beyond traditional voice banking or other text-to-speech engines. The DeepBrain avatar can also be integrated with an LLM-based generative AI for live interaction, as showcased at a recent Lenovo Formula 1 event—an exciting frontier for the next generation of assistive avatars. Lenovo conceived and sponsored the avatar with ongoing support and processing generously donated by DeepBrain.
Personal, Predictive LLM
Lenovo also unveiled a personal, on-device LLM dedicated to reliably providing the power of generative AI to people with disabilities. By compressing a larger public LLM, the team at Lenovo’s AI Innovation Centre created a powerful predictive text tool optimized for people who cannot use a traditional keyboard.
The new LLM offers solutions to two major challenges faced by Erin and others using AAC:
Smarter, faster text generation with multiple output options. After each user input (a character or word), the Lenovo AI runs inference on the LLM and offers a set of the most likely next words for the user to select.
Offline reliability. Fast and accurate communication should not depend on Internet connectivity; here, the model runs entirely offline on a portable device.
The LLM currently runs on Erin’s Lenovo Yoga laptop and will later be ported to other devices. Future iterations may integrate more seamlessly into the IRISBOND platform and be customized to learn from Erin’s personal data.
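The prediction loop described above can be illustrated with a toy frequency model. This is purely a sketch, not Lenovo's actual on-device LLM (whose architecture is not described here); the class name, corpus, and bigram approach are all illustrative stand-ins for the real predictive model:

```python
from collections import Counter, defaultdict

class NextWordPredictor:
    """Toy stand-in for an on-device predictive model: after each word the
    user enters, offer the k most likely next words (here derived from
    bigram counts over a small offline corpus)."""

    def __init__(self, corpus: str):
        self.bigrams = defaultdict(Counter)
        words = corpus.lower().split()
        for prev, nxt in zip(words, words[1:]):
            self.bigrams[prev][nxt] += 1

    def suggest(self, last_word: str, k: int = 3) -> list[str]:
        # Return the most frequent continuations of the last word typed.
        return [w for w, _ in self.bigrams[last_word.lower()].most_common(k)]

corpus = "i would like some water i would like to rest i would love that"
predictor = NextWordPredictor(corpus)
print(predictor.suggest("would"))  # → ['like', 'love']
```

A production system would replace the bigram counts with LLM inference, but the interaction pattern — one input, several ranked candidates to select by gaze — is the same.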
Multimodal Input Interface
IRISBOND, a pioneer in eye-gaze technology, is building the first AAC platform of its kind that converges multiple assistive technologies—including eye tracking, tongue, ear, wheelchair, and avatar controls—into one seamless user experience.
Globally, the World Health Organization (WHO) estimates that more than 405 million people require assistive technologies like AAC, yet in lower-income countries only 10% of those in need have access to them. In the US alone, 85-90% of the 2.5 million people who need AAC technologies cannot obtain them due to high costs and limited coverage. The Scott-Morgan Foundation is changing that.
This new AI-powered AAC platform aims to be the most accessible, affordable, and empowering communication solution for those with severe disabilities by converging multimodal inputs, leveraging NVIDIA AI for fluid speech, lessening the need for dedicated external hardware, and pioneering autonomous communication even for those with profound speech and mobility limitations.
People with severe disabilities who cannot use their hands or voices often face barriers to conveying messages. The painstaking letter-by-letter process of eye-gaze typing can be life-changing but also time-consuming. IRISBOND's human-centric platform will leverage NVIDIA's AI to learn from the user and facilitate more natural conversations through intent prediction and subtle movement optimizations.
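The converged-input idea can be sketched in a few lines. This is only an illustration of the pattern; IRISBOND's actual architecture is not public, and every name below is hypothetical:

```python
from dataclasses import dataclass
from typing import Callable

@dataclass
class InputEvent:
    source: str   # e.g. "eye_gaze", "ear_switch", "tongue_pad"
    action: str   # e.g. "select", "move", "click"

class MultimodalHub:
    """Toy sketch of a converged input layer: each assistive device feeds
    events into one hub, which routes them to a single handler so the rest
    of the platform never needs to know which modality produced a command."""

    def __init__(self, handler: Callable[[InputEvent], None]):
        self.handler = handler

    def emit(self, source: str, action: str) -> None:
        self.handler(InputEvent(source, action))

log = []
hub = MultimodalHub(log.append)
hub.emit("eye_gaze", "move")
hub.emit("ear_switch", "click")
print([e.source for e in log])  # → ['eye_gaze', 'ear_switch']
```

The point of such a layer is that a wheelchair control, an ear click, and a gaze dwell all arrive as interchangeable events, which is what lets one platform serve users at very different stages of mobility.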
Circular Keyboard and New Input Technologies
The Scott-Morgan Foundation also developed an innovative circular keyboard that optimizes letter placement to minimize eye movements, decreasing typing time and allowing easy, extended communication. Based on research with real users, and a reimagining of the rectangular keyboard designed for fingers, the team arrived at the eye-centric design. Concurrently, predictive text and AI analyze conversations, suggesting relevant responses to reduce latency dramatically.
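The Foundation has not published the exact layout, but the underlying idea — place frequent letters close together so the eyes travel less — can be sketched as follows. The frequency ordering and alternating placement below are assumptions for illustration only:

```python
import math

# English letters roughly ordered by frequency (most common first).
FREQ_ORDER = "etaoinshrdlcumwfgypbvkjxqz"

def ring_layout(radius: float = 1.0) -> dict[str, tuple[float, float]]:
    """Place the most frequent letters nearest the ring's starting
    (rightmost) position, alternating clockwise and counter-clockwise,
    so common letters cluster and average eye travel shrinks."""
    n = len(FREQ_ORDER)
    positions = {}
    for rank, letter in enumerate(FREQ_ORDER):
        # Alternate sides of the start: rank 0 -> 0 steps, 1 -> +1,
        # 2 -> -1, 3 -> +2, 4 -> -2, ...
        step = (rank + 1) // 2 * (1 if rank % 2 else -1)
        angle = 2 * math.pi * step / n
        positions[letter] = (radius * math.cos(angle), radius * math.sin(angle))
    return positions

layout = ring_layout()
print(layout["e"])  # → (1.0, 0.0): the most frequent letter sits at the start
```

With this placement, 'e', 't', and 'a' end up as immediate neighbors, so the gaze rarely has to cross the whole ring between common letters.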
Augmental has created the world’s first hands-free tongue-operated touchpad called MouthPad^. This intraoral device converts subtle mouth gestures into input commands, serving as an invisible, always-available controller for personal electronics. Made of dental-grade materials, it empowers those with limited mobility through natural and expressive tip-of-the-tongue interaction.
EarSwitch earbuds, meanwhile, aim to make eye tracking faster, more accurate, and less tiring: by tensing a muscle in the ear, users can rapidly select letters on the keyboard and operate wheelchair controls with the “click of an ear.”
Here, the Scott-Morgan Foundation presents a vision for integrating these technologies so that people with different disabilities, or at different stages of disease progression, can thrive.
Clinical Trials and Scaling Up
The Scott-Morgan Foundation is taking a patient-first approach by collaborating with pioneering researcher Dr. David Putrino and Dr. Abbey Sawyer of The Abilities Research Center at the Icahn School of Medicine at Mount Sinai. With more than a decade of experience, Dr. Putrino and his team assess emerging assistive technologies through clinical trials and research. Their crucial guidance and expertise in rehabilitation innovation will optimize this platform's effectiveness and validate its real-world impact for people with disabilities.
Democratization of technology remains a core mission of the Scott-Morgan Foundation, and the new proof of concept is already inspiring more accessible and scalable options to be developed in the near future.
“Digital technologies rarely allow for effective communication with the outside world for people with severe disabilities, and certainly not with the same ease and efficiency as the technology available for non-disabled communities,” said Dr. Putrino of The Abilities Research Center at the Icahn School of Medicine at Mount Sinai. “This failure drives social isolation and loneliness, not to mention the potential safety impacts of a person with severe disability being unable to convey their immediate needs.”
Additional Quotes and Media Assets can be found HERE.
About The Scott-Morgan Foundation
The UK-based Scott-Morgan Foundation focuses on pioneering technology-driven solutions to empower people with severe disabilities. The Foundation develops both bold proofs of concept to inspire a brighter future and more immediate, scalable solutions.