
RESEARCHERS ARE DEVELOPING A BRAIN-COMPUTER INTERFACE THAT CREATES MUSIC BASED ON YOUR MOOD

The technology transforms emotions into “a musical representation that seamlessly and continuously adapts to the listener’s current emotional state.”

A brain-computer interface that renders the wearer's mood as music is currently in development, with the goal of helping users mediate their emotions.

The device was described in an interview published on EveryONE, the blog of the open-access scientific journal PLOS ONE. It was developed by Stefan Ehrlich of the Technische Universität München and Kat Agres of the National University of Singapore.

Because the music is tailored to the user's emotional state, the user can 'interact' with their emotions by actively listening and responding to it. Ehrlich describes it as "a device that translates a listener's brain activity, which corresponds to a specific emotional state, into a musical representation that seamlessly and continuously adapts to the listener's current emotional state."


The listener is made aware of their emotional state through the ever-changing music generated by the device, and it is through this awareness that they are able to mediate their emotions.
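The feedback loop described above — brain activity decoded into an emotional state, which in turn steers the music the listener hears — can be sketched in broad strokes. The article does not describe the actual implementation, so the valence/arousal emotion model, the function name, and the parameter mapping below are all illustrative assumptions, not the researchers' method.

```python
# Illustrative sketch only (NOT the authors' actual system). Assumes the
# BCI decoder emits an emotional state as valence/arousal values in [-1, 1].

def emotion_to_music(valence: float, arousal: float) -> dict:
    """Map a decoded emotional state to coarse musical parameters.

    valence: negative (sad)  .. positive (happy)
    arousal: low (calm)      .. high (excited)
    """
    tempo_bpm = 60 + 60 * (arousal + 1) / 2       # 60-120 BPM, rising with arousal
    mode = "major" if valence >= 0 else "minor"   # positive valence -> major key
    loudness = 0.3 + 0.5 * (arousal + 1) / 2      # louder when more aroused
    return {"tempo_bpm": tempo_bpm, "mode": mode, "loudness": round(loudness, 2)}

# A happy, moderately excited state yields a brisk major-key setting.
print(emotion_to_music(valence=0.8, arousal=0.5))
```

In a real closed-loop system this mapping would run continuously, so that as the listener's decoded state drifts, the generated music drifts with it — which is what lets the listener hear, and then steer, their own emotional state.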

The scientists have already tested their device with a group of young people suffering from depression, who, according to Agres, "actually think of their identity in part in terms of their music."

Although the group was divided on how easy the device was to use, Agres reported that "without instructing the listeners on how to gain control over the feedback […] all of them reported that they self-evoked emotions by recalling happy or sad moments from their lives" in order to bring their emotional state under control.

In a subsequent presentation, the duo discussed the challenges they faced while developing the project, including the difficulty of making music that is continuously generated from emotional state actually sound continuous, rather than like a disjointed series of sounds indicating mood. Agres emphasized the importance of adaptability — of reacting to changes in brain state in real time — stating that the device had to adapt "to their brain signals and sound continuous and musically cohesive."
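One standard way to keep a noisy, moment-to-moment brain-state estimate from producing jumpy-sounding music is to low-pass-filter it before mapping it to musical parameters. The exponential moving average below is a hedged illustration of that general idea, not a technique the article attributes to Ehrlich and Agres.

```python
# Hypothetical smoothing of a noisy decoded emotion signal, so that any
# musical parameters derived from it change gradually rather than jumping.
# alpha near 0 -> very smooth but slow to react; near 1 -> responsive but jumpy.

def smooth_states(raw_states, alpha=0.2):
    """Exponential moving average over a sequence of emotion estimates."""
    smoothed = []
    current = raw_states[0]  # seed with the first reading
    for x in raw_states:
        current = alpha * x + (1 - alpha) * current
        smoothed.append(round(current, 3))
    return smoothed

# A signal that flickers between 0 and 1 is turned into a gradual ramp.
noisy = [0.0, 1.0, 0.0, 1.0, 1.0, 1.0]
print(smooth_states(noisy))
```

The trade-off Agres describes lives in `alpha`: too much smoothing and the music lags the listener's actual state; too little and it stops sounding "continuous and musically cohesive."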

Agres and Ehrlich are currently preparing to begin the second round of testing for their automatic music generation system, which will include both healthy adults and patients suffering from major depressive disorder. If their tests are successful, the researchers hope to use the device to assist stroke patients who are suffering from depression.
