Combining design, software development, digital media and a love of music, programmer and music producer Joel Eaton‘s work sits at the forefront of music technology. The basic premise of his enterprise is taking traditional tools for composition out of the equation, and instead digging into people’s consciousnesses to form the music.
We initially came across him thanks to A Stark Mind, a recent project where brainwaves are tracked via a brain-computer music interface (BCMI) in order to produce a visual score which is then interpreted by musicians live on stage. This technology was initially developed in order to empower people with physical disabilities to engage in the music making process, and continues to yield exciting opportunities in this area.
Developed to answer the question, ‘what if music could adapt to bring the moods of the audience and the performer closer together?’, The Space Between Us was a project that also revolved around a BCMI, but brought an audience member into the fold. During the performance the audience reaction was tracked, and these reactions were reflected through the music via digitally implemented changes to the score.
Eaton recently completed his doctorate in developing brain-computer music interfaces at the University of Plymouth, and is currently working at technology developer ARM as a designer. Find out more about brainwave (the best microgenre since vaporwave? anybody?), or more specifically, brain waves and music, in our interview with him below…
How did you get into your branch of music-making? Do you come from a musical background or a computer science one?
Joel Eaton: A bit of both really. I play guitar and have played in a couple of bands over the years, and at the same time I’ve always been interested in music technology, be it the recording and production process or programming sounds from scratch and composing in software that I’ve made.
What first inspired you to work on brain-controlled interfaces, are there historical examples of this sort of work?
Music has always provided me with a space for a mental reboot, so to speak, be it through seeing it being performed, playing the guitar, or making songs for myself on my computer. The power that these interactions with music can hold is something we should never take for granted. A number of years ago I was interested in how technology can provide access to music for people with physical disabilities. I soon discovered that there was really nothing on offer for people who had severe motor restrictions, as interactions with music through instruments and interfaces all rely on some form of gestural input, even ones designed for people with physical disabilities.
At this time I met Professor Eduardo Miranda from Plymouth University, who had done some research looking into whether the ability to control brainwaves could be applied to music. He seemed a little despondent about the capabilities of the time, but was excited about a potential new area that could offer real-time music control without any physical movement needed, and asked if I’d be keen to get involved. Naturally, I jumped at the chance. This avenue gave us some groundbreaking results and led to my PhD and subsequent work in the field.
We first came across your A Stark Mind performance, where you appear to conduct live musicians using visual representations of brain activity. Can you tell us a little bit more about that performance and how it works?
The piece demonstrates the capabilities of the technology I built by allowing me to direct the musicians by choosing how the score is arranged, a bit like a conductor with the power to change the notes on the pages in front of his orchestra. In the performance, I’m using a brain-computer interface to control a graphical score that three musicians read and play. The score is projected on-stage for both the musicians and the audience to see. The system I built uses three methods of brainwave control to direct the musicians. The first method uses eight flashing panels of lights that elicit different responses in my brainwaves when I gaze at them. The second method is where I imagine squeezing or relaxing my hand. And the third method is where the system tries to determine changes in my emotional state through analysing my brainwaves. The outcomes of all of these things are mapped to changes in the score. In practice, it’s really fun to perform. The ‘right’ score is rehearsed with the musicians, so I do my best to surprise them and mix things up a bit.
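The three control channels Eaton describes — gazing at flashing panels, imagined hand movement, and estimated emotional state — can be pictured as three independent inputs, each mapped to a different property of the projected score. The sketch below is purely illustrative: the class, function names, and thresholds are assumptions for the sake of the example, not Eaton’s actual system.

```python
# Illustrative sketch (not Eaton's code): three brainwave control
# channels, each driving a different change to a live graphical score.
from dataclasses import dataclass


@dataclass
class Score:
    section: int = 0      # which rehearsed passage the musicians see
    dynamics: str = "mf"  # loudness marking shown on the score
    tempo_bpm: int = 100  # pulse of the projected score


def apply_ssvep(score: Score, panel_index: int) -> None:
    """Gazing at one of eight flashing panels selects a score section."""
    score.section = panel_index  # panels 0-7 map directly to sections


def apply_motor_imagery(score: Score, squeezing: bool) -> None:
    """Imagined hand squeezing vs. relaxing nudges the dynamics up or down."""
    levels = ["pp", "p", "mp", "mf", "f", "ff"]
    i = levels.index(score.dynamics)
    i = min(i + 1, len(levels) - 1) if squeezing else max(i - 1, 0)
    score.dynamics = levels[i]


def apply_affective_state(score: Score, arousal: float) -> None:
    """An estimated emotional arousal value (0.0-1.0) scales the tempo."""
    score.tempo_bpm = int(80 + 60 * arousal)


score = Score()
apply_ssvep(score, 3)
apply_motor_imagery(score, squeezing=True)
apply_affective_state(score, arousal=0.5)
print(score.section, score.dynamics, score.tempo_bpm)  # 3 f 110
```

In a real BCMI, each function would be fed by a signal-processing pipeline classifying EEG data in real time; here the point is simply that each channel maps cleanly onto one musical parameter.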
You’ve used this brain-computer interface to also assist people with disabilities to compose and conduct music with the Activating Memory project. Belfast’s Queen’s University has also done something along these lines with their Performance Without Barriers initiative. How far do you think interfaces like this could go with empowering people to make music who aren’t necessarily able to interact with traditional modes of music making?
I think that they can make a world of difference. Traditionally, the focus of assistive tech lies either in systems for traditional communication, i.e. tools that act as someone’s voice or their scribe, or tools that aid rehabilitation. Having worked with patients with severe physical conditions such as paralysis and locked-in syndrome, where physical rehabilitation is not a possibility, there is a real need for access to interacting with music or other art forms, purely for the sake of being creative. This simply helps improve quality of life, in the same way that painting a picture or practising an instrument offers intangible benefits beyond producing an end product.
There’s been a lot of noise recently about algorithm-created music. How do you think spontaneous brain-controlled music relates to a world which is increasingly being defined by formulations?
There will always be a healthy debate about the value of ‘art’ created by machines, but it’s important to remember that there’s always someone behind the algorithms. Algorithms are programmed with options, rules and conditions, and these all have to be defined by someone, so the line between the programmer and the music may be blurred, but it’s always there. Just like any other type of music, there’s good algorithmic music and bad algorithmic music, and what makes some of it good is the decisions of the person or people behind it. This exposes the real value of technology: it’s an aid for humans to produce music. Algorithms and technology on their own can’t replicate the emotional connections that music creates, but we can use them to enhance our music and as an extension of our own creativity.
You chaired a round table event in 2017 about distributing audiovisual projects online. What are the pitfalls and limitations related to audio-visual works at the moment and how could these be overcome/expanded?
The event was primarily aimed at collection managers in the cultural heritage sector, and the most limiting factor in sharing and accessing digital resources is rights issues. Until the Copyright, Designs and Patents Act (1988) is overhauled to cater for the modern world, rights issues will always be a major reason for collections not to share content. Fortunately, many people are thinking in pragmatic, forward-looking terms and are taking risk-managed decisions to share rich content online, and the more people who do, the greater the chance the law will follow good practice.
I’m friends with their singer, Dan, and have recorded previous bands of his and Stuart’s (the drummer) before. I wasn’t quite expecting to be as involved as I ended up being with the SS album, but after hearing them in the studio I knew I wanted to get my teeth into it. It’s difficult to capture something so raucous and intense with the clarity and focus it deserves, but essentially I wanted it to steal your full attention from the second you put the record on and not let it go until you turn it off. Thankfully, they seem to like it, which is the main thing.