AlterEgo – MIT’s New AI Reads and Transcribes What You’re Thinking!

Pranav Dar Last Updated : 08 Apr, 2018
2 min read

Overview

  • AlterEgo comprises a device and a computing system that can read and transcribe your inner voice
  • It is built around a neural network that uses neuromuscular signals to decode internally vocalised words
  • The testing phase revealed an average transcription accuracy of 92%!

 

Introduction

Computers being able to pick up what a person is thinking has so far been a product of our imagination. We used to watch these otherworldly concepts in movies, or read about them in books, and let our imaginations soar.

Those fictional worlds are now inching closer to becoming real-life applications.

Researchers at MIT have developed a device and a computing system (together called AlterEgo) that picks up the words you don’t say aloud but vocalise internally. According to MIT’s blog post, “electrodes in the device pick up neuromuscular signals in the jaw and face that are triggered by internal verbalizations — saying words “in your head” — but are undetectable to the human eye”.
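
As a rough illustration of the signal side of this, the sketch below band-pass filters a raw electrode trace to isolate neuromuscular activity. The sampling rate, frequency band, and synthetic signal are all assumptions for illustration; the blog post does not publish these details.

```python
import numpy as np
from scipy.signal import butter, filtfilt

# Hypothetical acquisition parameters -- not from MIT's paper.
FS = 1000            # assumed sampling rate (Hz)
LOW, HIGH = 20, 450  # assumed EMG-style band of interest (Hz)

def bandpass(raw, fs=FS, low=LOW, high=HIGH, order=4):
    """Band-pass filter a raw electrode trace to keep neuromuscular activity."""
    nyq = fs / 2
    b, a = butter(order, [low / nyq, high / nyq], btype="band")
    return filtfilt(b, a, raw)

# Synthetic one-second trace standing in for a real jaw/face electrode reading.
t = np.linspace(0, 1, FS, endpoint=False)
raw = np.sin(2 * np.pi * 60 * t) + 0.1 * np.random.randn(FS)
clean = bandpass(raw)
```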

The researchers conducted several experiments on test subjects to see how the system performed in different situations. In one example, a subject silently reported a chess opponent’s moves and the system read off almost all of them!

So how did they go about building this system? The developers built a neural network that identifies correlations between neuromuscular signals and particular words. It can even be customised to a new user by retraining just the last two layers of the network.
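
A minimal sketch of that customisation step in PyTorch: freeze a pretrained signal-to-word classifier and retrain only its last two layers on a new user’s data. The architecture, layer sizes, and 20-word vocabulary here are hypothetical; the post doesn’t describe the actual network.

```python
import torch
import torch.nn as nn

# Hypothetical signal-to-word classifier; all sizes are illustrative only.
model = nn.Sequential(
    nn.Linear(256, 128), nn.ReLU(),  # feature extractor (kept frozen)
    nn.Linear(128, 64), nn.ReLU(),   # penultimate layer (retrained)
    nn.Linear(64, 20),               # output over an assumed 20-word vocabulary (retrained)
)

# Freeze everything, then unfreeze the last two linear layers for the new user.
for p in model.parameters():
    p.requires_grad = False
for layer in (model[2], model[4]):
    for p in layer.parameters():
        p.requires_grad = True

optimizer = torch.optim.Adam(
    (p for p in model.parameters() if p.requires_grad), lr=1e-3
)
loss_fn = nn.CrossEntropyLoss()

# One fine-tuning step on a (synthetic) batch of the new user's signals.
x, y = torch.randn(32, 256), torch.randint(0, 20, (32,))
optimizer.zero_grad()
loss = loss_fn(model(x), y)
loss.backward()
optimizer.step()
```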

The system was tested on 10 subjects, and the algorithm’s accuracy in reading and transcribing the inner words was a really impressive 92% on average. The system is still in its nascent stages, so it’s currently limited to simple tasks like calculating sums and holding short conversations.
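
To be clear about what that figure means: an average accuracy across 10 subjects is simply the mean of the per-subject transcription accuracies. The per-subject numbers below are made up purely to illustrate the calculation.

```python
# Hypothetical per-subject transcription accuracies (the post doesn't give
# the raw numbers); the reported figure is the mean across subjects.
per_subject = [0.95, 0.90, 0.93, 0.91, 0.94, 0.89, 0.92, 0.93, 0.90, 0.93]
average = sum(per_subject) / len(per_subject)
print(f"Average accuracy: {average:.0%}")  # -> Average accuracy: 92%
```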

Check out the device in action below:

 

Our take on this

Research on “mind reading” algorithms has been going on for a while, dating back to the 19th century! But with the recent boom in technology, the field has taken a huge leap forward.

The system will keep getting better as more and more data is accumulated to train the model. Can you imagine the vast uses of this system? If built and utilised properly, it could help deaf people understand what another person is saying, enable communication in loud places (like airport tarmacs and manufacturing plants), and serve the military, among many, many other applications.

 

Subscribe to AVBytes here to get regular data science, machine learning and AI updates in your inbox!

 

Senior Editor at Analytics Vidhya. Data visualization practitioner who loves reading and delving deeper into the data science and machine learning arts. Always looking for new ways to improve processes using ML and AI.

Responses From Readers


Jean-Claude KOUASSI

Nice! Very similar to a PhD thesis subject. One of the main struggles in the future will be how to differentiate the words you don’t say aloud but vocalize internally (saying words “in your head”) from those that do not depend on your will (thoughts, memory facts, rural injections, etc.).
