The AlterEgo headset being developed at MIT would allow users to communicate with their devices completely hands- and voice-free.

Credit: Lorrie Lejeune/MIT

Regardless of whether your mouth is moving right now, you are talking to yourself.

As you read these words, the muscles in your larynx, jaw and face are fluttering with quick, imperceptible movements, sounding out the words so you can actually “hear” them in your head. This kind of silent speech is called “subvocalization,” and unless you’re a speed-reader who has trained yourself out of this habit, you’re doing it all day, every time you read or even imagine a word.

Now, MIT researchers want to use those subvocalizations to decode your internal monologue and translate it into digital commands, using a wearable “augmented intelligence” headset called AlterEgo.

According to a statement from the MIT Media Lab, the device would allow users to send silent commands to the headset simply by thinking of a word. A neural network would translate the muscle movements to speech and do the user’s bidding — totally hands- and voice-free.

“The motivation for this was to build an IA device — an intelligence-augmentation device,” Arnav Kapur, a graduate student at the MIT Media Lab and lead author of a paper describing the device, said in the statement. “Our idea was: Could we have a computing platform that’s more internal, that melds human and machine in some ways and that feels like an internal extension of our own cognition?”

A promotional video accompanying the news release shows a student (Kapur) going about his daily routine while wearing the headset, using silent commands to navigate through a TV menu, check the time, tally up prices in the supermarket and, apparently, cheat at the game Go. His opponent is none the wiser.

Let’s say you want to ask AlterEgo what time it is. First, you think the word “time.” As you do, muscles in your face and jaw make micro-movements to sound out the word in your head. Electrodes on the underside of the AlterEgo headset press against your face and record these movements, then transmit them to an external computer via Bluetooth. A neural network processes these signals the same way a speech-to-text program might, and responds by telling you the time — “10:45.”
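The pipeline described above — electrodes in, a neural network in the middle, a spoken reply out — can be sketched in a few lines of Python. Everything here is illustrative: the electrode count, the vocabulary and the toy nearest-template matcher standing in for the neural network are assumptions, not MIT's actual implementation.

```python
import math
import random

# Hypothetical sketch of an AlterEgo-style pipeline: electrode signals in,
# a classifier standing in for the neural network, a response out.
VOCAB = ["time", "up", "down", "select"]
random.seed(0)

# Pretend each word leaves a characteristic pattern across 7 electrodes.
TEMPLATES = {word: [random.gauss(0, 1) for _ in range(7)] for word in VOCAB}

def read_electrodes(word):
    """Simulate a noisy 7-channel recording of a subvocalized word."""
    return [x + random.gauss(0, 0.1) for x in TEMPLATES[word]]

def classify(signal):
    """Stand-in for the neural network: pick the closest template."""
    return min(VOCAB, key=lambda w: math.dist(signal, TEMPLATES[w]))

def respond(word):
    """Map the decoded command to a reply (fixed clock answer here)."""
    return "10:45" if word == "time" else f"command: {word}"

signal = read_electrodes("time")  # electrodes capture the micro-movements
print(respond(classify(signal)))  # the reply you would "hear": 10:45
```

In the real device the classifier is a trained neural network and the signals travel over Bluetooth to an external computer, but the flow — record, decode, respond — is the same.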

In another twist, AlterEgo includes no earbuds. Instead, a pair of “bone conduction headphones” resting against your head sends vibrations through your facial bones into your inner ear, effectively letting you hear AlterEgo’s responses inside your head. The effect is a completely silent conversation between you and your computer — no need to pull out a phone or laptop.

An early test of the technology showed promising results, MIT said. In a small study, 10 volunteers silently read a list of 750 randomly ordered digits while wearing AlterEgo headsets. According to the researchers, AlterEgo correctly interpreted which digits the participants were reading with an average accuracy of 92 percent. (For comparison, Google’s microphone-based speech-to-text translation service has an accuracy of about 95 percent, according to Recode.)
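That 92 percent figure is ordinary recognition accuracy: correct predictions divided by total trials. A quick sketch makes the arithmetic concrete — the session data below is made up for illustration; only the 92 percent figure comes from the study.

```python
def accuracy(predicted, actual):
    """Fraction of trials where the decoder matched the digit being read."""
    correct = sum(p == a for p, a in zip(predicted, actual))
    return correct / len(actual)

# A hypothetical session of 25 digit readings with two mistakes:
actual = [7, 1, 4, 0, 9] * 5
predicted = actual[:23] + [3, 3]
print(f"{accuracy(predicted, actual):.0%}")  # → 92%
```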

“We basically can’t live without our cellphones, our digital devices,” said Pattie Maes, an MIT professor and the paper’s senior author. “But at the moment, the use of those devices is very disruptive…. My students and I have for a very long time been experimenting with new form factors and new types of experience that enable people to still benefit from all the wonderful knowledge and services that these devices give us, but do it in a way that lets them remain in the present.”

The new paper describing the device was presented at the Association for Computing Machinery’s Intelligent User Interfaces (IUI) conference in March, and has yet to appear in a peer-reviewed journal.

Originally published on Live Science.

Kind of untangles some jumbled-up wires, huh?

So what’s the real takeaway here?