It’s 2023, and the world is rapidly drifting away from traditional methods of dream interpretation. With the advent of artificial intelligence, newer methods of reading the human mind are at play. In March, Japanese scientists were reported to have recreated high-resolution images from scans of brain activity using Stable Diffusion, and now another breakthrough appears to be in the offing.
A team of scientists from the University of Texas at Austin has developed an AI model that can read your thoughts. The noninvasive AI system, known as a semantic decoder, translates brain activity into a stream of text, according to a peer-reviewed study published in the journal Nature Neuroscience.
The research was led by Jerry Tang, a doctoral student in computer science, and Alex Huth, an assistant professor of neuroscience and computer science at UT Austin. Their study is based in part on a transformer model, similar to the ones that power Google Bard and OpenAI’s ChatGPT.
With their latest innovation, the scientists hope to assist people with paralysis or other forms of disability. The newly developed technology is essentially an AI-based decoder that translates brain activity into a stream of text, allowing a person’s thoughts to be read noninvasively, without the surgical brain implants that earlier decoding approaches have required.
As part of the study, three participants listened to stories while their brain activity was scanned in an MRI machine. In what can be called a major breakthrough, the scientists claim to have produced text from the participants’ thoughts without the help of any brain implant. It should be noted that the technology captured the gist of the participants’ thoughts rather than replicating them word for word.
“For a noninvasive method, this is a real leap forward compared to what’s been done before, which is typically single words or short sentences. We’re getting the model to decode continuous language for extended periods of time with complicated ideas,” Huth was quoted as saying in a report published on the UT Austin website.
According to the scientists, once the AI system is fully trained, it can generate a stream of text while a participant listens to or imagines a story. The researchers essentially deployed a technology akin to ChatGPT to interpret participants’ thoughts while they watched silent films or imagined telling a story. The study has also raised concerns about mental privacy.
Apart from Tang and Huth, Amanda LeBel, a former research assistant at the Huth Lab, and Shailee Jain, a computer science graduate student at UT Austin, are co-authors of the study.