Researchers Develop Images Using AI To Decode Brain Activity

Researchers from Osaka University have reconstructed images of what subjects were seeing by using AI to decode their brain activity.

Yu Takagi could hardly believe what he was seeing. Working alone at his desk on a Saturday in September, he watched in awe as artificial intelligence decoded a subject’s brain activity and created images of what that person was viewing on a screen.

“I still remember when I saw the first images,” said Takagi, an assistant professor of neuroscience at Osaka University. “I went into the bathroom and saw my face in the mirror and said, ‘Okay, that’s normal. I might not be crazy after all.’”

Takagi and his team used Stable Diffusion (SD), a deep-learning AI model developed in Germany in 2022, to analyse the brain scans of test subjects who were shown up to 10,000 images while inside an MRI machine.

Takagi and Shinji Nishimoto developed a straightforward model to “translate” or decode brain activity into an understandable format, and Stable Diffusion was able to produce high-fidelity images that were startlingly similar to the originals.

The AI accomplished this despite not having seen the images beforehand and without any training to manipulate the results.

“We really didn’t expect this kind of result,” Takagi said. He emphasised that the development does not currently amount to mind-reading, because the AI can only produce images that a person has already seen.

Takagi affirmed, “This is not mind-reading. Unfortunately, there are lots of misconceptions about our research. We don’t think this is realistic; we can’t decipher dreams or imaginations. However, there is obviously potential for the future.”

Nevertheless, in the midst of a wider discussion about the dangers posed by AI in general, the development has raised questions about how such technology might be used in the future.

Despite his enthusiasm, Takagi admits that there is reason for concern about mind-reading technology given the potential for abuse by those who have bad intentions or without permission.

“Privacy concerns are the most significant issue for us. It’s a very delicate matter if a government or other institution has the ability to read people’s minds,” Takagi said. “High-level discussions are required to ensure that this cannot occur.”

Takagi and Nishimoto’s research generated buzz in the tech industry, which has been electrified by rapid advancements in AI, including the release of ChatGPT, which produces human-like speech in response to user prompts.

Neural interfaces are still decades away from being able to accurately and reliably decode imagined visual experiences. Takagi and Nishimoto’s research required subjects to sit in an fMRI scanner for up to 40 hours, which was both costly and time-consuming.

Researchers at the Korea Advanced Institute of Science and Technology have found that conventional neural interfaces lack chronic recording stability due to the soft and complex nature of neural tissue.

Current recording techniques rely on electrical pathways to transfer the signal, which are susceptible to electrical noise from the surroundings. And while AI capabilities are advancing rapidly, Takagi is less optimistic about progress in brain-scanning technology itself.

Takagi and Nishimoto’s framework could be used with brain-scanning devices other than MRI, such as EEG, or with hyper-invasive technologies like the brain-computer implants being developed by Elon Musk’s Neuralink.

There is currently little practical application for their AI experiments, as the method cannot yet be transferred to novel subjects. However, Takagi sees a future in which it could be used for clinical, communication, or entertainment purposes.

Ricardo Silva, a professor of computational neuroscience at University College London and a research fellow at the Alan Turing Institute, believes the technology could one day yield a marker for detecting Alzheimer’s and evaluating its progression, by spotting persistent anomalies in images of visual-navigation tasks reconstructed from a patient’s brain activity.

However, he shares concerns about the ethics of technology that could one day be used for genuine mind-reading. Meanwhile, Takagi and Nishimoto are developing a better image-reconstruction technique, which could find uses in secondary applications such as marketing or legal cases.