Artificial intelligence is rapidly reshaping scientists’ ability to interpret the brain’s complex electrical signals, bringing researchers closer than ever to decoding human thoughts and inner speech.
In a recent breakthrough, a 52-year-old woman who lost her ability to speak clearly after a stroke nearly two decades ago was able to see her unspoken thoughts appear as text on a screen. The woman, identified only as participant T16, had a tiny array of electrodes surgically implanted in the front part of her brain. As she imagined speaking words, a computer system powered by artificial intelligence translated her neural signals into readable sentences in real time.
The experiment was conducted by researchers at Stanford University in the United States as part of a wider study involving patients with amyotrophic lateral sclerosis (ALS), a progressive neurodegenerative disease. Scientists described the achievement as the closest researchers have yet come to a form of “mind reading”.
The findings were unveiled in August 2025. Soon after, researchers in Japan reported another major advance, demonstrating a “mind captioning” technique that could generate detailed descriptions of images people were seeing or imagining, using non-invasive brain scans combined with multiple AI systems.
Experts say such breakthroughs are opening an unprecedented window into the inner workings of the human brain while offering new communication pathways for people who are unable to speak or move.
“In the next few years, we will begin to see these technologies being commercialised and deployed at scale,” said neuroengineer Maitreyee Wairagkar of the University of California, Davis, who works on brain-computer interfaces. Several companies, including Elon Musk’s Neuralink, are already pursuing commercial brain implants designed to move the technology from laboratories into everyday use.
Brain-computer interfaces, or BCIs, are not new. Scientists have been experimenting with direct brain communication since the late 1960s. For decades, BCIs have allowed users to control prosthetic limbs or computer cursors by decoding brain signals linked to movement. However, translating speech and complex thoughts has proven far more challenging.
Progress has accelerated in recent years, particularly for patients with severe communication impairments. In 2021, Stanford researchers showed that a paralysed man could form English sentences by imagining the hand movements of writing letters. More recently, Wairagkar’s team demonstrated a system that converted the attempted speech of an ALS patient into text at about 32 words per minute with nearly 98% accuracy.
These systems rely on tiny microelectrode arrays implanted in the brain’s outer layer, typically over regions involved in speech and movement. Machine-learning algorithms then analyse vast amounts of neural data, identifying patterns associated with different speech sounds, or phonemes. Researchers often compare the process to voice assistants such as Amazon Alexa, except that instead of interpreting sound waves, the AI decodes neural activity.
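In highly simplified terms, that decoding step is a pattern classifier trained on recorded firing rates. The Python sketch below illustrates the idea on synthetic data with an off-the-shelf linear classifier; the channel count, phoneme set and noise model are illustrative assumptions, not details of the Stanford or UC Davis systems, which rely on far larger recurrent neural networks trained on real recordings.

```python
# Minimal sketch (not the published pipeline): classify phonemes from
# binned neural firing rates. All data here is synthetic and illustrative.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)

n_channels = 256                       # electrodes in a hypothetical array
phonemes = ["AA", "B", "K", "S", "T"]  # small illustrative phoneme set
n_trials = 200                         # recordings per phoneme

# Simulate firing-rate patterns: each phoneme gets its own mean activity
# profile across channels, plus trial-to-trial noise.
X, y = [], []
for label, _ in enumerate(phonemes):
    profile = rng.normal(0, 1, n_channels)
    trials = profile + rng.normal(0, 2.0, (n_trials, n_channels))
    X.append(trials)
    y.extend([label] * n_trials)
X, y = np.vstack(X), np.array(y)

X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

# A linear classifier stands in for the much larger neural networks
# used in real speech BCIs.
clf = LogisticRegression(max_iter=1000).fit(X_train, y_train)
print(f"held-out phoneme accuracy: {clf.score(X_test, y_test):.2f}")
```

A linear model suffices here only because the synthetic classes are cleanly separable; real neural signals are noisier and change over time, which is one reason production decoders also model how activity evolves across a spoken sentence.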
A major challenge, however, is that patients usually need to actively attempt speech for accurate decoding, a process that can be tiring and slow. To address this, Stanford scientists explored whether “inner speech”—the words people silently say in their minds—could also be detected.
The results were promising but limited. When participants imagined specific sentences, the system achieved accuracy rates of up to 74% in real time. Performance dropped for more spontaneous thoughts, and open-ended prompts often produced meaningless output. Researchers said the findings suggest inner speech uses neural pathways similar to spoken speech, though the signals are weaker.
Beyond text, scientists are now pushing towards capturing the full richness of human speech. In 2025, Wairagkar’s lab showed it could decode not just words, but also tone, pitch and rhythm, allowing an ALS patient to convey emotion and emphasis. While only about 60% of the generated speech was judged clearly understandable, researchers say it points to a future where brain-driven speech sounds increasingly natural.
Further advances are expected as the technology improves. Current studies typically sample only a few hundred neurons, a tiny fraction of the brain’s roughly 86 billion. Expanding electrode coverage could significantly boost accuracy and speed, researchers say.
Meanwhile, other teams are using AI to reconstruct what people see or hear by analysing brain scans. By combining functional MRI data with image-generation tools such as Stable Diffusion, scientists have managed to recreate rough versions of images viewed by participants. Japanese researcher Yu Takagi of the Nagoya Institute of Technology says the work has revealed how different brain regions process visual information.
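In rough outline, such pipelines first learn a mapping from brain activity to the embedding space of an image model, then hand the predicted embedding to a generator. The Python sketch below covers only the mapping step, on synthetic data: the voxel count, embedding size and linear “brain model” are assumptions for illustration, not the published fMRI pipeline, which conditions a generator such as Stable Diffusion on embeddings predicted from real scans.

```python
# Minimal sketch of the mapping step behind fMRI-to-image reconstruction:
# learn a linear map from voxel activity to an image-embedding space.
# Everything here is synthetic; the "decoder" stops at the embedding.
import numpy as np
from sklearn.linear_model import Ridge

rng = np.random.default_rng(1)

n_images = 500
n_voxels = 4000   # visual-cortex voxels, illustrative size
embed_dim = 64    # stand-in for a CLIP-style image embedding

# Synthetic ground truth: voxel activity is a noisy linear function of
# the image embedding (the real relationship is far messier).
true_emb = rng.normal(0, 1, (n_images, embed_dim))
mixing = rng.normal(0, 1, (embed_dim, n_voxels))
voxels = true_emb @ mixing + rng.normal(0, 5, (n_images, n_voxels))

# Fit voxels -> embedding on one half of the data, test on the other.
half = n_images // 2
decoder = Ridge(alpha=100.0).fit(voxels[:half], true_emb[:half])
pred = decoder.predict(voxels[half:])

# Correlation between predicted and true embeddings as a rough quality check.
corr = np.mean([np.corrcoef(p, t)[0, 1]
                for p, t in zip(pred, true_emb[half:])])
print(f"mean per-image embedding correlation: {corr:.2f}")
```

Ridge regression is a common choice for this step because fMRI datasets typically contain far more voxels than training images, and the regularisation keeps the learned map from overfitting.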
Similar efforts are under way to reconstruct music from brain activity, using advanced algorithms developed by companies such as Google. Although results remain imperfect, researchers believe the approach could eventually help explain how the brain interprets sound, images and even dreams.
While experts caution that fully decoding unfiltered thoughts remains far off, many believe the rapid pace of progress signals a profound shift ahead. As AI continues to unlock the brain’s hidden signals, technologies once confined to science fiction are moving steadily closer to reality.
With inputs from BBC