A brain-computer interface that translates its wearer's mood into music is currently being developed to help users mediate their emotions.

The device was described in an interview with EveryONE, the blog of the open-access scientific journal PLOS ONE. It was developed by Stefan Ehrlich of the Technische Universität München and Kat Agres of the National University of Singapore.

Because the music is tailored to the user’s emotional state, the user can ‘interact’ with their emotions by actively listening and responding to it. Ehrlich describes it as “a device that translates a listener’s brain activity, which corresponds to a specific emotional state, into a musical representation that seamlessly and continuously adapts to the listener’s current emotional state.”

The listener is made aware of their emotional state through the ever-changing music generated by the device, and it is through this awareness that they are able to mediate their emotions.
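The loop described above — brain activity is read, an emotional state is estimated, and music is generated that the listener then hears and reacts to — can be sketched roughly as follows. This is purely illustrative: the valence score, the parameter mappings, and all function names are assumptions, not the researchers' actual implementation.

```python
# A minimal sketch of the closed feedback loop described above, assuming a
# hypothetical emotional "valence" score in [-1, 1] estimated from an EEG
# window. Every name and mapping here is illustrative only.

def estimate_valence(eeg_window):
    # Placeholder: a real system would classify emotion from EEG features.
    # Here we simply average the window and clamp it to [-1, 1].
    return max(-1.0, min(1.0, sum(eeg_window) / len(eeg_window)))

def music_parameters(valence):
    # Map valence to simple musical controls: happier readings give a
    # faster tempo and major mode; sadder readings the opposite.
    tempo_bpm = 70 + 50 * (valence + 1) / 2   # 70–120 BPM
    mode = "major" if valence >= 0 else "minor"
    return {"tempo_bpm": round(tempo_bpm), "mode": mode}

def feedback_step(eeg_window):
    # One iteration of the loop: brain activity -> estimated emotion ->
    # music parameters, which the listener then hears, closing the loop.
    return music_parameters(estimate_valence(eeg_window))

print(feedback_step([0.4, 0.6, 0.5]))
```

In a running system this step would repeat continuously, so that the music the listener hears always reflects their most recent estimated state.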

The scientists have already put their device to the test with a group of young people suffering from depression, who, according to Agres, “actually think of their identity in part in terms of their music.”

Although the group was divided on how easy the device was to use, Agres reported that “without instructing the listeners on how to gain control over the feedback […] all of them reported that they self-evoked emotions by recalling happy or sad moments from their lives” in order to bring their emotional state under control.

In their subsequent presentation, the duo discussed the various challenges they faced while developing the project, including the difficulty of making music that is continuously generated from emotional state actually sound continuous, rather than like a disjointed series of sounds indicating mood. Agres emphasized the importance of adaptability in reacting to changes in brain state in real time, stating that the device had to adapt “to their brain signals and sound continuous and musically cohesive.”
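One common way to keep such output from jumping abruptly between moods is to smooth the emotion estimate before it drives the music, for example with exponential smoothing. The sketch below illustrates that general idea under assumed names and values; it is not the researchers' actual method.

```python
# Exponential smoothing of a hypothetical emotion estimate, so the musical
# parameters it drives drift gradually rather than jumping between moods.
# The alpha value and the example readings are illustrative assumptions.

def smooth(prev, new, alpha=0.1):
    # Blend the newest estimate with the running value; a small alpha
    # means slower, smoother changes in the resulting music.
    return (1 - alpha) * prev + alpha * new

estimate = 0.0
for raw in [1.0, 1.0, 1.0, 1.0]:   # a sudden shift to a "happy" reading
    estimate = smooth(estimate, raw)
    print(round(estimate, 3))      # rises gradually toward 1.0
```

The trade-off is latency: heavier smoothing makes the music more cohesive but slower to reflect a genuine change in the listener's state, which is one framing of the real-time adaptability challenge Agres describes.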

Agres and Ehrlich are currently preparing to begin the second round of testing for their automatic music generation system, which will include both healthy adults and patients suffering from major depressive disorder. If their tests are successful, the researchers hope to use the device to assist stroke patients who are suffering from depression.
