AI-Powered Neckband Translates Subvocalized Speech
New Atlas | Locale: Republic of Korea

The Mechanics of Silent Speech
The core of this technology lies in the concept of subvocalization. Even when a person cannot produce audible sound, the brain still sends electrical signals to the muscles of the larynx, tongue, and throat whenever speech is attempted. These physiological movements, while invisible to the naked eye and silent to the ear, produce distinct patterns of vibration and muscle activity.
The POSTECH device utilizes a wearable neckband equipped with highly sensitive sensors. These sensors are positioned to capture the neuromuscular signals generated during the attempt to speak. Rather than relying on acoustic sound waves, the device focuses on the physical manifestations of the speech process. These raw signals are then fed into an artificial intelligence model trained to recognize specific patterns associated with different words and phrases.
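POSTECH has not published the details of its signal pipeline, so the following is only an illustrative sketch of the kind of preprocessing such a device might perform: the sensor count, sampling rate, and feature choices are all assumptions, and the features shown (per-channel RMS energy and zero-crossing rate) are generic ones commonly used for surface muscle signals.

```python
# Hypothetical sketch of turning raw neck-sensor samples into features
# for a recognition model. All parameters here are assumptions, not
# values from the POSTECH paper.
import numpy as np

SAMPLE_RATE = 1000   # Hz, assumed sensor sampling rate
WINDOW_MS = 200      # assumed analysis window length

def extract_features(signals: np.ndarray) -> np.ndarray:
    """Convert one window of multichannel sensor data into a feature vector.

    signals: array of shape (channels, samples) of raw sensor readings.
    Returns per-channel RMS energy followed by per-channel zero-crossing
    rate, concatenated into a single 1-D vector.
    """
    rms = np.sqrt(np.mean(signals ** 2, axis=1))
    zero_crossings = np.mean(np.abs(np.diff(np.sign(signals), axis=1)) > 0,
                             axis=1)
    return np.concatenate([rms, zero_crossings])

# Simulated 200 ms window from a hypothetical 4-channel neckband
window = np.random.default_rng(0).normal(
    size=(4, SAMPLE_RATE * WINDOW_MS // 1000))
features = extract_features(window)
print(features.shape)  # 2 features per channel -> (8,)
```

Feature vectors like this, computed over a sliding window, would then be the input to the trained recognition model described below.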
Bridging the Gap with Artificial Intelligence
The role of AI in this system is critical. The signals captured from the neck are complex and vary significantly between individuals. The AI acts as a translator, mapping the unique physiological signatures of a user's subvocalizations to a corresponding vocabulary of words. By training the model on a specific dataset of attempted speech, the system can achieve high accuracy in identifying the user's intent.
Unlike traditional speech recognition software, which requires a clear audio input, this system bypasses the need for the vocal folds to successfully vibrate and produce sound. This makes it a viable solution for patients who have lost the mechanical ability to speak but retain the neurological drive to communicate.
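Because the signals vary between individuals, the model is trained per user on examples of attempted speech. The actual architecture has not been disclosed; as a toy illustration of the mapping step, the sketch below uses a nearest-centroid decoder over a fixed vocabulary, with one learned template per word. The class name, vocabulary, and feature dimensions are all hypothetical.

```python
# Toy illustration of mapping feature vectors to vocabulary words.
# A nearest-centroid decoder stands in for whatever model the real
# system uses; nothing here is from the POSTECH publication.
import numpy as np

class SubvocalDecoder:
    """Stores one mean feature template per vocabulary word."""

    def __init__(self):
        self.templates = {}  # word -> mean feature vector

    def train(self, word: str, examples: np.ndarray):
        # examples: (n_trials, n_features), collected while the user
        # silently attempts to say `word`
        self.templates[word] = examples.mean(axis=0)

    def decode(self, features: np.ndarray) -> str:
        # Return the word whose template is closest in feature space
        return min(self.templates,
                   key=lambda w: np.linalg.norm(features - self.templates[w]))

rng = np.random.default_rng(1)
decoder = SubvocalDecoder()
decoder.train("water", rng.normal(0.0, 0.1, size=(20, 8)))
decoder.train("help",  rng.normal(1.0, 0.1, size=(20, 8)))
print(decoder.decode(np.full(8, 0.95)))  # closest to the "help" template
```

Per-user training of this kind is what lets the system cope with the individual variation in neck anatomy and muscle activity that the article highlights.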
Comparative Advantages over Existing Tech
To understand the significance of the POSTECH neckband, it must be compared to current gold standards in assistive communication:
- Eye-Tracking Systems: While effective, eye-tracking often requires the user to be positioned in front of a screen and involves a slow process of selecting letters or icons, which limits the speed of natural conversation.
- Brain-Computer Interfaces (BCIs): High-bandwidth BCIs can translate thoughts directly into text, but they typically require invasive neurosurgery to implant electrodes into the motor cortex. The neckband provides a non-invasive alternative that avoids the risks associated with brain surgery.
- Traditional AAC (Augmentative and Alternative Communication): Many AAC devices rely on manual input or pre-recorded phrases, which lack the fluidity and spontaneity of real-time speech.
Key Technical Details
- Developer: Researchers at Pohang University of Science and Technology (POSTECH).
- Input Method: Non-invasive sensors worn around the neck.
- Detection Target: Neuromuscular signals and vibrations associated with subvocalization.
- Processing: AI-driven pattern recognition to translate signals into text/speech.
- Primary Beneficiaries: Individuals with ALS, throat cancer, or vocal cord paralysis.
- Nature of Interface: Non-invasive wearable device.
Future Directions and Implications
The current iteration of the technology focuses on a controlled set of words and commands. However, the ultimate goal is to expand the vocabulary to allow for a full range of natural, fluid conversation. As the AI models become more sophisticated and the sensors more refined, the latency between the "attempted" word and the "spoken" output is expected to decrease, bringing the user closer to the speed of natural human speech.
Furthermore, the portability of the neckband suggests a future where users can integrate their communication tool seamlessly into their daily lives without being tethered to a computer or a specialized clinic setting. By shifting the focus from the output (sound) to the intent (neuromuscular activity), this research opens a new pathway for restoring agency and voice to those who have lost it.
Read the Full New Atlas Article at:
https://newatlas.com/wearables/postech-ai-neckband-words-speech/