New research demonstrates that artificial intelligence (AI) systems can reproduce music simply by analyzing a person’s brain waves as they listen to a song. Scientists say this mind-reading technology could eventually lead to systems that generate original music based on a user’s thoughts.
The study was conducted by researchers at the University of California, San Francisco. They used electroencephalography (EEG) to record brain signals from volunteers as they listened to songs, including “Another Brick in the Wall (Part 2)” by Pink Floyd.
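For readers curious what handling such recordings involves, here is a minimal preprocessing sketch using the open-source MNE-Python library. The filename, filter band, and window length are illustrative assumptions, not details reported in the study.

```python
# Minimal EEG preprocessing sketch with MNE-Python.
# File name, filter band, and epoch length are assumptions for illustration.
import mne

# Load one (hypothetical) raw EEG recording from a listening session.
raw = mne.io.read_raw_fif("subject01_listening.fif", preload=True)

# Keep only EEG channels and band-pass filter to a range commonly
# used for auditory analyses (assumed 1-50 Hz here).
raw.pick("eeg")
raw.filter(l_freq=1.0, h_freq=50.0)

# Slice the continuous signal into overlapping fixed-length windows,
# yielding an (n_windows, n_channels, n_samples) array for a model.
epochs = mne.make_fixed_length_epochs(raw, duration=2.0, overlap=1.0)
X = epochs.get_data()
```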
The team then developed a deep learning model and trained it on the EEG data paired with time-coded lyrics for each song. When tested, the AI system generated music that mimicked the original songs from brainwaves alone.
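The article does not spell out the model’s architecture, but decoders in this line of work are often framed as a regression from a short window of multi-channel EEG to one frame of an audio spectrogram, which can later be turned back into sound. A minimal PyTorch sketch of that framing follows; the layer sizes, window dimensions, and dummy training step are illustrative assumptions, not the study’s published design.

```python
# Sketch of a regression decoder: EEG window in, mel-spectrogram frame out.
# All sizes below are assumptions chosen for illustration.
import torch
import torch.nn as nn

N_CHANNELS, N_SAMPLES = 64, 128   # assumed EEG window: 64 channels x 128 samples
N_MELS = 80                       # assumed spectrogram resolution

class EEGToSpectrogram(nn.Module):
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Flatten(),                           # (batch, 64 * 128)
            nn.Linear(N_CHANNELS * N_SAMPLES, 512),
            nn.ReLU(),
            nn.Linear(512, N_MELS),                 # one mel frame per window
        )

    def forward(self, x):
        return self.net(x)

model = EEGToSpectrogram()
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)
loss_fn = nn.MSELoss()

# One training step on random tensors standing in for aligned EEG/audio pairs.
eeg = torch.randn(32, N_CHANNELS, N_SAMPLES)   # batch of EEG windows
target = torch.randn(32, N_MELS)               # matching spectrogram frames
optimizer.zero_grad()
loss = loss_fn(model(eeg), target)
loss.backward()
optimizer.step()
```

Predicted frames can then be stitched into a full spectrogram and inverted back to audio, one plausible route to the kind of reconstructions described next.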
While the reproduced music sounded disjointed, the researchers say the neural patterns carried enough information for the AI to recover distinctive elements of the pitch, timbre, and melody of each song a volunteer heard.
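A common way to quantify how much pitch and timbre survive the round trip is to correlate the spectrogram of the decoded audio with that of the original recording. The sketch below shows that comparison using librosa; the sampling rate and mel parameters are assumed library defaults, not values from the study.

```python
# Score a reconstruction by correlating original vs. decoded mel spectrograms.
# A value near 1.0 means the spectro-temporal structure closely matches.
import numpy as np
import librosa

def spectrogram_correlation(original_wav, decoded_wav, sr=22050):
    orig, _ = librosa.load(original_wav, sr=sr)
    dec, _ = librosa.load(decoded_wav, sr=sr)
    n = min(len(orig), len(dec))                    # trim to a common length
    s_orig = librosa.feature.melspectrogram(y=orig[:n], sr=sr)
    s_dec = librosa.feature.melspectrogram(y=dec[:n], sr=sr)
    # Flatten both spectrograms and take the Pearson correlation.
    return np.corrcoef(s_orig.ravel(), s_dec.ravel())[0, 1]

# Example usage (hypothetical file names):
# print(spectrogram_correlation("original.wav", "decoded.wav"))
```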
“We have shown that it is possible to reconstruct music from the brain activity associated with listening to that song,” said Dr. Christian Herff, who led the study. “This is a promising step toward decoding inner experiences through brain activity.”
The findings hint at a future in which merely thinking about a song prompts an AI assistant to generate a unique mix of sounds resembling it. Such a brain-computer interface could enable new ways of producing music with the mind.
However, significant challenges remain. The model must still be trained on EEG recordings paired with elements of the songs themselves, and it cannot yet generate music from thoughts alone. Even so, the research takes an important step toward decoding human brain signals.
#AI #ArtificialIntelligence #Music