Human brain works more like AI than expected

Published 3 hours ago
Source: vanguardngr.com

Groundbreaking research has revealed that the human brain understands spoken language through a sequential process that almost perfectly mirrors the inner workings of advanced Artificial Intelligence.

The study, published in Nature Communications, suggests that despite the biological and digital worlds being constructed differently, they have converged on the same structural “roadmap” for making sense of the world through speech.

Led by Dr. Ariel Goldstein of the Hebrew University, in collaboration with Google Research and Princeton University, the team used electrocorticography to record the brain activity of participants listening to a 30-minute podcast. By tracking neural signals in real time, they compared human brain waves to the layered processing of Large Language Models (LLMs) such as GPT-2 and Llama 2.

The findings were striking: the brain follows a structured, stepwise sequence. Much like an AI model, the brain first processes basic word features before moving deeper into “layers” that handle complex context, tone, and long-term meaning.

The researchers found that early neural signals matched the initial stages of AI processing. However, as the complexity of the story grew, the activity shifted to higher-level language regions, specifically Broca’s area.

In these regions, brain responses peaked later, aligning with the “deeper layers” of AI models where the most sophisticated understanding is formed.

“What surprised us most was how closely the brain’s temporal unfolding of meaning matches the sequence of transformations inside large language models. Both seem to converge on a similar step-by-step buildup toward understanding,” said Goldstein.

This discovery challenges long-standing “rule-based” theories of how humans comprehend language. To fuel further discovery, the team has released a public dataset, providing a powerful new toolkit for scientists to study how meaning is physically constructed in the human mind.

For many years, language was thought to rely mainly on fixed symbols and rigid hierarchies. These results challenge that view and instead point to a more flexible and statistical process in which meaning gradually emerges through context.

The researchers also tested traditional linguistic elements such as phonemes and morphemes. These classic features did not explain real-time brain activity as well as the contextual representations produced by AI models. This supports the idea that the brain relies more on flowing context than on strict linguistic building blocks.
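The comparison the article describes is, in broad strokes, an "encoding model" analysis: features from each layer of a language model are used to predict the recorded neural signal, and the layer whose features predict best is taken as the closest match. The sketch below illustrates that logic on simulated data only (the actual study used real electrocorticography recordings and real model embeddings); all names and numbers here are illustrative assumptions, not the authors' method.

```python
import numpy as np

rng = np.random.default_rng(0)
n_words, n_dims, n_layers = 200, 16, 6

# Simulated per-word feature vectors from each "layer" of a language model.
layers = [rng.standard_normal((n_words, n_dims)) for _ in range(n_layers)]

# Simulated neural response, constructed to depend on the deepest layer's
# features plus noise -- so the deepest layer should predict it best.
weights = rng.standard_normal(n_dims)
neural = layers[-1] @ weights + 0.5 * rng.standard_normal(n_words)

def encoding_score(X, y, alpha=1.0):
    """Fit ridge regression from features X to signal y; return the
    correlation between the model's prediction and the actual signal."""
    w = np.linalg.solve(X.T @ X + alpha * np.eye(X.shape[1]), X.T @ y)
    return np.corrcoef(X @ w, y)[0, 1]

scores = [encoding_score(X, neural) for X in layers]
best = int(np.argmax(scores))
print(f"best-matching layer: {best}, score: {scores[best]:.3f}")
```

By construction the deepest simulated layer wins here; in the study, the analogous finding was that deeper model layers aligned with later-peaking activity in higher-level language regions.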

