A study carried out by researchers from New York University’s Tandon School of Engineering has unveiled the remarkable effects of music and coffee on cognitive performance.
As reported in the Nature journal Scientific Reports, the investigation monitored brain activity during cognitive assessments using MINDWATCH, a cutting-edge brain monitoring technology. Participants were exposed to various stimuli, including music, coffee consumption, and perfume inhalation.
The findings demonstrated a notable increase in brain wave activity (beta band) associated with heightened cognitive performance during music listening and coffee consumption. Surprisingly, even AI-generated music exhibited a significant performance enhancement, paving the way for further exploration in tasks demanding focus and memory retention.
Pioneering MINDWATCH technology
Over the past six years, Professor Rose Peccia from New York University’s School of Biomedical Engineering has been refining the MINDWATCH brain monitoring technology. Its algorithm interprets an individual’s brain activity using data collected from wearable devices that monitor electrodermal activity (EDA): changes in the skin’s electrical conductivity caused by stress-induced sweat responses.
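The EDA-to-arousal idea can be sketched in a few lines of Python. This is a hypothetical illustration, not MINDWATCH’s actual algorithm: it splits a conductance trace into a slow (tonic) baseline and a fast (phasic) component, and flags samples where a phasic spike, the signature of a sweat response, exceeds a threshold. The window and threshold values here are invented for the sketch.

```python
def moving_average(signal, window):
    """Centered moving average used to estimate the slow (tonic) EDA level."""
    half = window // 2
    out = []
    for i in range(len(signal)):
        chunk = signal[max(0, i - half):i + half + 1]
        out.append(sum(chunk) / len(chunk))
    return out

def arousal_events(eda, window=5, threshold=0.3):
    """Flag samples where the fast (phasic) component exceeds a threshold.

    Sweat-driven conductance rises show up as spikes above the tonic
    baseline; those spikes serve as a crude proxy for arousal.
    """
    tonic = moving_average(eda, window)
    return [i for i, (x, t) in enumerate(zip(eda, tonic)) if x - t > threshold]

# Toy trace: flat baseline with one stress-induced conductance spike at index 4.
trace = [1.0, 1.0, 1.1, 1.0, 2.5, 1.1, 1.0, 1.0]
print(arousal_events(trace))  # → [4]
```

A real system would of course work on continuous sensor streams and calibrated units, but the tonic/phasic split is the standard first step in EDA analysis.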
Stimulating brain function and memory regulation
Study participants donned EDA monitoring bracelets and brain monitoring strips to undertake cognitive tests while exposed to music, coffee, and preferred perfumes. The same tests were then conducted without these stimuli.
The MINDWATCH algorithm unveiled significant alterations in participants’ brain arousal caused by music and coffee. These changes induced a distinct physiological “mindset,” potentially enhancing performance in memory-intensive tasks.
The researchers pinpointed specific stimuli that triggered heightened brain wave activity (beta range), correlating with improved cognitive performance. While perfumes had a minimal impact, the researchers suggested further exploration in this realm.
Countering the pandemic’s negative impact
Dr. Peccia explained the benefits of MINDWATCH, saying that the global pandemic has profoundly affected mental health worldwide and, in turn, underscored the urgency of tracking the detrimental effects of daily stressors on cognitive ability.
Although the MINDWATCH technology remains in development, its overarching objective is to empower individuals to monitor real-time cognitive arousal, enabling the identification of stress or cognitive disengagement moments.
Enhanced Work and Academic Achievements
Peccia emphasized the revolutionary nature of this technology, envisaging a future where individuals can accomplish tasks while enjoying music, fostering a positive mental state conducive to enhanced work and academic accomplishments.
There’s An AI That’s Making Music to Improve Your Brain Function
There’s an AI that’s making music. It’s known as brain.fm, and it could help decrease anxiety, relieve insomnia, and improve mental performance.
Founded by Adam Hewett and Junaid Kalmadi, brain.fm aims to alter your mind with sound. Or to be more specific, they hope to alter your mind with music…music that is composed by an AI (but more on that later). Hewett describes the work as “Auditory brain stimulation,” which is a mechanism that relies on something known as “brain entrainment” (also known as “neural entrainment”).
This is a novel—and somewhat unconventional—theory that is centered on how brain waves alter in response to acoustic stimulation (sound).
The basic idea is that certain sounds evoke very specific responses in the brain. In other words, listening to music can alter, or induce, certain neural oscillations, and those oscillations can achieve a host of things, such as altered mood, decreased anxiety, or improved focus. Notably, advocates of neural entrainment assert that these acoustically induced alterations can be observed and analyzed via electroencephalogram (EEG) measurements, which is, of course, where the science comes in.
Hewett succinctly sums up the process, and what Brain.fm is attempting to do with it: “It’s stimulating brain waves reliably with audio, and being able to reliably see that on an EEG to use it as a therapy. That’s what we’ve actually been doing for 13 years…now we’re working with universities and institutions to confirm this and innovate on it further.”
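As a concrete illustration of what entrainment audio can look like, one common construction in the literature is amplitude modulation: an audible carrier tone whose loudness is pulsed at the target brain-wave rate. The sketch below is a generic example with illustrative parameters, not Brain.fm’s proprietary method; a 15 Hz envelope sits in the beta band the first study associated with focus.

```python
import math

def modulated_tone(carrier_hz=440.0, mod_hz=15.0, seconds=1.0, rate=8000):
    """Sine carrier whose loudness is pulsed at a beta-band rate.

    The premise of entrainment audio is that the 15 Hz amplitude
    envelope, not the audible 440 Hz tone itself, is the rhythm the
    brain is meant to follow.
    """
    n = int(seconds * rate)
    return [
        (0.5 + 0.5 * math.sin(2 * math.pi * mod_hz * t / rate))  # slow envelope
        * math.sin(2 * math.pi * carrier_hz * t / rate)          # audible carrier
        for t in range(n)
    ]

samples = modulated_tone()
print(len(samples))  # 8000 samples: one second of audio at 8 kHz
```

Written to a WAV file, this sounds like a steady tone with a rapid tremolo; embedding such envelopes inside full musical arrangements is the harder compositional problem the article turns to next.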
The Research
Notably, the team works with neuroscientists in order to scientifically verify that the types of responses that they’re aiming for with their music are, in fact, the ones that they are generating.
For example, the team utilized the work of Dr. Benjamin Morillon, who researches dynamic attending theory, which, as Kalmadi explains, “is a new theory in neuroscience that helps explain how oscillations in music can affect oscillations in the brain.”
And they have completed pilot studies with Dr. Giovanni Santostasi, a neuroscientist at Northwestern University’s Feinberg School of Medicine, to see how the auditory stimulation can impact sleep and focus. Kalmadi notes, “We got some promising results on both of the studies, but that’s the preliminary studies, so we’re now focusing on building on top of that with the next layer of a more rigorous study.”
Kalmadi notes the significance of their current results, saying that the music patterns clearly line up with the EEG readouts: “The spikes we see on the EEG mimic the audio stimulus. The music frequency follows the EEG…literally, follows. It’s become that accurate.”
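The claim that the EEG “follows” the audio can be quantified, in the simplest case, as a correlation between the music’s slow envelope and the recorded brain signal. The sketch below uses a plain Pearson correlation on toy signals; it is a stand-in for the more rigorous analyses the team describes, not their actual pipeline.

```python
import math

def pearson(x, y):
    """Plain Pearson correlation between two equal-length signals."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = sum((a - mx) ** 2 for a in x) ** 0.5
    sy = sum((b - my) ** 2 for b in y) ** 0.5
    return cov / (sx * sy)

# Toy signals: a 15 Hz audio envelope and an EEG trace that tracks it
# at a smaller amplitude, as an entrained recording would.
audio = [math.sin(2 * math.pi * 15 * t / 500) for t in range(500)]
entrained = [0.8 * s for s in audio]
print(round(pearson(audio, entrained), 2))  # → 1.0
```

A correlation near 1 on band-limited envelopes is the kind of “literally follows” relationship Kalmadi describes; real EEG analyses additionally control for noise, lag, and chance alignment.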
And they’re also getting help from artificial intelligence.
Music Meets AI
Of course, it would take a long time to compose a diverse array of songs, ones that individuals could listen to for hours without experiencing monotony. Trying to create songs that effectively incorporate the various auditory components that generate a specific response in the brain and are also unique (and pleasing to listen to) is even more difficult.
So the team gave the work to artificial intelligence.
Kalmadi outlines the difficult and time-consuming nature of the process, and why the team needed the AI: “Every single 30-minute session used to take a day all the way up to a week to make. In order to make the first 20 sessions, when we were just getting brain.fm off the ground, it took us 4 months to put together. So it’s [the AI is] a necessity. How do we create like a massive amount of content that is fresh in terms of its diversity, in terms of its genre of music, and actually takes a ton of rules of what we understand between neuroscience and music to quite accurately entrain the brain?”
And it seems that the answer to this question rests in AI.
Kalmadi continues, “So the AI, we like to describe it on a very high level, has the brain of a neuroscientist and the heart of a musician. It actually composes all the music from the ground up by taking a bunch of rules within neuroscience music, and it makes these sessions, but it sounds like it’s made by humans.”
Hewett explains how the AI works: “We’re using what could most easily be described as an emergent AI, or emergence… Emergent AI is basically taking a set of kind of small instructions, or small little pieces, and then it expands and emerges into something beautiful.”
To break this down a bit, in relation to artificial intelligence, emergence is a process in which larger patterns (like a fluid song) emerge through interactions among smaller or simpler parts.
He continues: “We start out with what we might call a SongBot, this kind of overarching overlord, the composer of this kind of song. That bot, the SongBot, gets all kinds of instructions like ‘what kind of genre do I want?’, ‘what brain waves am I stimulating?’, ‘what’s the BPM?’. I can leave it open; I can have it do a minor key or a major key.

“You can give it instructions, and from there, it spawns off tens of thousands of other little individual bots, you could say.

“They’re really just little kind of pieces of code, little brains. And each of these will be a note or a drum beat, and they will have kind of a mind of their own, and they’ll have instructions. For example, a drum beat will want to be in the first part of the measure, or the middle of the measure.

“But that doesn’t always happen, and you have to understand that these bots are competing against each other. They form patterns, and subsequent bots that are propagated learn from the previous ones, and they try to emulate those patterns. So as a pattern emerges, the pattern becomes greater for subsequent iterations.”
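The mechanism Hewett describes, many small note bots with local preferences that imitate the patterns earlier bots established, can be caricatured in a short Python sketch. Everything here (the `song_bot` name, the imitation probability, the scale) is invented for illustration and is not Brain.fm’s actual system.

```python
import random

def song_bot(genre_scale, beats=16, bots=40, seed=7):
    """Toy 'emergence': each note bot picks a beat and a pitch; later
    bots mostly imitate choices earlier bots made, echoing them four
    beats later, so a repeating motif grows out of random starts.

    genre_scale stands in for the SongBot's high-level instructions.
    """
    rng = random.Random(seed)
    measure = {}  # beat index -> pitch settled so far
    for _ in range(bots):
        if measure and rng.random() < 0.7:
            # Imitate: reuse a pattern an earlier bot established.
            beat, pitch = rng.choice(list(measure.items()))
            beat = (beat + 4) % beats  # echo the motif later in the measure
        else:
            # Explore: an independent bot with its own preferences.
            beat, pitch = rng.randrange(beats), rng.choice(genre_scale)
        measure.setdefault(beat, pitch)  # first bot to claim a beat wins
    return [measure.get(b, "-") for b in range(beats)]

print(song_bot(["C", "D", "E", "G", "A"]))
```

Because imitation dominates exploration, pitches placed early tend to recur at regular offsets, which is the "pattern becomes greater for subsequent iterations" effect in miniature.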