It is 1965. A man in a plain brown suit sits on a stage, staring blankly into the crowd. The purple headband he’s wearing is the most out-of-place characteristic about him, but not for long.
His assistant approaches and begins attaching wired wet electrodes beneath the headband. The wires lead to a small brown box beside him, which in turn connects to a ring of percussion instruments that surround him on stage.
As the assistant moves off stage, the seated man moves his arm to touch the box with a kind of slow intensity that belies what he’s actually doing on that stage. Nothing. The silence in the performance hall is finally broken as the instruments begin to make sound, in spite of the fact that no human hand is touching them.
Alvin Lucier’s “Music for Solo Performer” (1965) was a groundbreaking musical composition, and also a functioning brain-computer interface.
Those electrodes picked up Lucier’s alpha brain waves (about 8–12 Hz) and sent them through an amplifier. After passing through a bandpass filter, the signals were routed to loudspeakers that activated the percussion instruments.
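Lucier’s rig was analog, but the same idea is easy to sketch digitally. Below is a minimal, hypothetical Python version: it measures the strength of a synthetic signal at the alpha frequency (using the Goertzel algorithm, a cheap single-frequency stand-in for a full bandpass filter) and “triggers” the instruments when alpha activity dominates. The sampling rate and the signal itself are made up for illustration.

```python
import math

def goertzel_power(samples, fs, freq):
    """Estimate signal power at one frequency (Goertzel algorithm)."""
    n = len(samples)
    k = int(0.5 + n * freq / fs)          # nearest DFT bin
    coeff = 2.0 * math.cos(2.0 * math.pi * k / n)
    s_prev = s_prev2 = 0.0
    for x in samples:
        s = x + coeff * s_prev - s_prev2
        s_prev2, s_prev = s_prev, s
    return s_prev2 ** 2 + s_prev ** 2 - coeff * s_prev * s_prev2

fs = 256                                   # assumed sampling rate, in Hz
t = [i / fs for i in range(fs)]            # one second of samples
# Synthetic "EEG": a 10 Hz alpha rhythm plus weaker 50 Hz mains hum
signal = [math.sin(2 * math.pi * 10 * ti) + 0.5 * math.sin(2 * math.pi * 50 * ti)
          for ti in t]

alpha = goertzel_power(signal, fs, 10)     # power at the alpha frequency
mains = goertzel_power(signal, fs, 50)     # power of the interference

# In Lucier's piece, sufficient alpha activity drove the loudspeakers:
if alpha > mains:
    print("alpha rhythm detected -> drive the percussion")
```

The filtering step matters because raw scalp recordings are dominated by noise and interference; isolating the 8–12 Hz band is what turns “brain activity” into a usable control signal.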
Ever since Elon Musk took to Twitter to tease an update on Neuralink, his company developing a mass-market brain-computer interface (BCI), there has been a lot of speculation about what the August 28th announcement will cover.
While we all wait, why not learn everything you want to know about BCIs in one convenient place: right here. Let’s go.
What is a brain-computer interface?
A brain-computer interface, or BCI, establishes communication between the human brain and an external computer or other device. Jacques Vidal is credited with using the term “brain-computer interface” first in his paper “Toward Direct Brain-Computer Communication” published in 1973.
How does a BCI work?
A brain-computer interface includes a mix of hardware and software components:
- A way to record brain activity
- A way to interpret that activity
- A way to convert the activity to input for the external device
- An external device that receives that input as a control command
The end result is that you can control something with a thought or intention. One example is a neural prosthesis, an artificial device that can replace a missing limb or organ. You could make a prosthetic hand open and close just by thinking about it, the same way you would move your natural hand.
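To make those four components concrete, here is a deliberately toy Python skeleton of that loop. Every function body is a stand-in (a real system would read from an EEG amplifier and drive real hardware); only the shape of the pipeline is the point.

```python
# Minimal skeleton of the four BCI components described above.
# All values and thresholds are illustrative stand-ins.

def record_brain_activity():
    """Component 1: acquire raw samples (here, a canned alpha burst)."""
    return [0.0, 0.8, 1.0, 0.8, 0.0, -0.8, -1.0, -0.8]

def interpret(samples):
    """Component 2: reduce raw samples to a meaningful feature."""
    return max(abs(s) for s in samples)    # crude amplitude estimate

def to_command(feature, threshold=0.5):
    """Component 3: convert the feature into a device command."""
    return "CLOSE_HAND" if feature > threshold else "OPEN_HAND"

def device(command):
    """Component 4: the external device acts on the command."""
    return f"prosthesis executes {command}"

print(device(to_command(interpret(record_brain_activity()))))
```

Real systems differ enormously in how each stage is implemented, but every BCI — from Lucier’s percussion rig to a robotic arm — runs some version of this record-interpret-convert-act loop.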
What is brain activity?
The human brain contains about 86 billion neurons. Not quite as many as stars in the sky, but it gets the job done. When neurons in the brain communicate with each other, they send and receive electrical signals, and you’re able to move, sense, speak, remember…all the things you expect your brain to do to keep you functioning.
Those electrical signals, or brain waves, can be picked up by electrodes placed on the scalp; a computer program or algorithm can then interpret them and convert them into input that an external device recognizes as a command.
How to record brain activity
Hans Berger was the first to record brain activity via electroencephalography (EEG) in 1924, although he didn’t publish his paper until 1929. By 1938, EEG was in use as a diagnostic tool in the medical community.
Other ways to record brain activity include functional magnetic resonance imaging (fMRI), which detects activity through the magnetic properties of oxygenated blood; and functional near-infrared spectroscopy (fNIRS), which tracks changes in blood hemoglobin levels in the brain.
fMRI maps out active areas by measuring blood flow: the more active a region is, the more energy it requires, and the more blood flows in to deliver oxygen. It can also show decreases in flow.
EEG is the least expensive and most portable, which is why it’s the most commonly used method for BCIs. Alvin Lucier used EEG for his musical performance via brain-computer interface.
How to interpret and convert brain activity
A computer program or algorithm interprets the electrical signals and converts them into input an external device can recognize as a command. A functioning BCI chains together several algorithmic stages, including signal processing, feature extraction, and pattern recognition/classification.
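Here is a sketch of those last two stages in Python. The feature vectors are invented “band power” numbers, not real EEG data, and the classifier is a deliberately simple nearest-centroid rule — real BCIs typically use more sophisticated machine learning — but the structure is the same: features in, command out.

```python
import math

# Hypothetical feature vectors per trial: [alpha_power, beta_power].
# Values are invented for illustration, not real EEG measurements.
training = {
    "relax":       [[9.1, 2.0], [8.7, 2.3], [9.5, 1.8]],   # high alpha
    "concentrate": [[3.2, 7.9], [2.8, 8.4], [3.5, 7.1]],   # high beta
}

def centroid(vectors):
    """Average the training examples for one mental state."""
    n = len(vectors)
    return [sum(v[i] for v in vectors) / n for i in range(len(vectors[0]))]

centroids = {label: centroid(vs) for label, vs in training.items()}

def classify(features):
    """Nearest-centroid classification: map features to a mental state."""
    return min(centroids, key=lambda label: math.dist(features, centroids[label]))

# A new trial with high alpha power is interpreted as "relax"
print(classify([8.9, 2.1]))
```

Each recognized state can then be mapped to a device command — “relax” might mean release the prosthetic grip, “concentrate” might mean close it.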
The opposite process is also possible, where an external device sends a signal into the brain that triggers neurons in the right location to fire in exactly the right way, creating a sensory experience (e.g. streaming music directly to your brain). We’ll circle back to this below when we talk about what BCIs are used for.
What types of BCIs are out there?
There are a few types of brain-computer interfaces, classified based on how the electrodes are placed: invasive, semi-invasive, and non-invasive.
Invasive BCIs require surgically implanting hardware directly into the cortex. A single-unit BCI records the signal from a single site (potentially a single neuron), while a multi-unit BCI records from multiple sites across the brain.
Neuralink’s BCI will involve implanting flexible threads of electrodes, designed to record from more neurons while causing less tissue damage than rigid probes.
Semi-invasive BCIs still require surgery, but the hardware is placed on the exposed surface of the brain rather than inserted into it. One example is the electrocorticography (ECoG) research conducted by Eric Leuthardt and Daniel Moran in the mid-2000s.
They worked with epilepsy patients who already had electrodes placed on the cortex, and gathered data about which areas of the brain were activated by certain tasks. Eventually, some of the patients were able to play Space Invaders or move a cursor in three-dimensional space, just by thinking.
Non-invasive BCIs include a number of techniques, none of which require surgery; the hardware or electrodes are placed on the scalp. Wet electrodes use a conductive gel, while dry electrodes attach without it. Of all the non-invasive options (fMRI, fNIRS, etc.), EEG is still the most widely used.
Neurable makes a non-invasive, everyday BCI that uses six dry electrodes and machine learning software. Companies like Bitbrain, Muse, and Emotiv are building solutions to record brain activity in non-invasive ways, which can be used for future BCI developments.
What will we use BCIs for?
There is huge potential for brain-computer interfaces in the medical industry. BCIs can be used to restore lost function for people with certain disabilities, since they offer a non-muscular route of communication.
They can even be used to change neural pathways in the brain to bypass damaged areas and restore motor function.
In 1988, Farwell and Donchin presented the first BCI-speller, one of the earliest applications that enabled the user to communicate with their environment with a graphical user interface displaying letters, numbers, and characters.
Rows and columns of an on-screen character matrix flashed in sequence; when the character the user was focusing on lit up, it evoked a measurable brain response (the P300 potential), which the system detected to identify the intended letter.
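In the Farwell–Donchin design, the system scores how strongly each row flash and each column flash evoked a response; the intended character sits at the intersection of the best-scoring row and column. A toy Python sketch — the grid layout and response scores here are illustrative, not taken from the original study:

```python
import string

# A 6x6 character matrix in the style of the Farwell-Donchin speller
# (the exact layout here is illustrative).
chars = string.ascii_uppercase + "1234567890"          # 36 characters
grid = [list(chars[r * 6:(r + 1) * 6]) for r in range(6)]

def select_letter(row_scores, col_scores):
    """Pick the row and column whose flashes evoked the strongest
    brain response, then return the character at their intersection."""
    r = row_scores.index(max(row_scores))
    c = col_scores.index(max(col_scores))
    return grid[r][c]

# Hypothetical per-row / per-column response scores from a classifier
row_scores = [0.1, 0.2, 0.9, 0.1, 0.2, 0.1]   # strongest: row 2
col_scores = [0.2, 0.1, 0.1, 0.8, 0.1, 0.2]   # strongest: column 3
print(select_letter(row_scores, col_scores))   # prints "P"
```

Spelling a word this way is slow — each letter takes many flash repetitions to score reliably — but for users with no muscular channel of communication, it was revolutionary.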
Although sometimes categorized separately as brain-machine interfaces, neural prostheses are closely related to BCIs because their function also depends on accurately reading, mapping, and interpreting brain activity.
A neural prosthesis restores function lost by neurological damage like stroke or spinal cord injury. This can include prosthetic limbs, or visual/auditory prostheses.
You might be familiar with one neural prosthesis already: the cochlear implant. A cochlear implant sends electrical impulses directly to the auditory nerve to restore hearing.
Not exactly streaming directly to the brain, but it’s still a shorter route than sound produced by in-ear earbuds, or even bone conduction headsets like Sentien Audio that send sound directly to the inner ear.
The cochlear implant creates a sensory experience by sending a signal from the device to the brain, rather than controlling an external device by sending a signal from the brain to the device.
Mass-market adoption of brain-computer interfaces will happen someday. When it does, it will include functionality beyond medical applications. BCIs will change the way we use technology, communicate with each other, consume content, and so much more.
The more we advance our technology, the more closely we align with human-computer interaction principles like J. C. R. Licklider’s vision of a future of man-computer symbiosis, or Mark Weiser’s principles for ubiquitous computing:
- The purpose of a computer is to help you do something else.
- The best computer is a quiet, invisible servant.
- The computer should extend your unconscious.
- Technology should create calm.
If you’re interested in reading more about human-computer interaction, check out this blog post.
What are the cons of BCIs?
In spite of the enormous progress made in developing this technology in recent years, there are still a lot of disadvantages that come with any iteration of a brain-computer interface.
Non-invasive BCIs are limited in how much information they can pass between the brain and the computer or device. The skull dampens the signals, reducing accuracy and stability. It also makes it harder to determine the origin of a signal, which is crucial to mapping out where to put electrodes and predicting intention.
This limits practical use cases for non-invasive brain-computer interfaces, which in turn limits funding for research and development. One potential exception is Openwater’s fNIRS system that could be used for more accurate reading of brain activity. Zero surgery required.
Semi-invasive and invasive BCIs come with a laundry list of complex problems. The hardware has to be small, safe to implant and keep long-term, reliable, stable, rechargeable...the list goes on. Another issue is that scar tissue can build up around the implanted hardware, further masking the signals and making them harder to read.
You can summarize the biggest obstacle in one sentence: Who wants to put a piece of hardware with unknown potential into their own brain? People worry about having their brains “hacked”, but there’s another possibility: using a brain-computer interface can potentially alter the brain of the user.
With so many unknowns, mass adoption and practical use of brain-computer interfaces is still years away. BCIs could become the Next Big Thing in technology—much the same way that the smartphone took over and disrupted multiple industries—just...not any time soon.
If you want an interface connecting you to your technology in a brand new way, don’t wait years for a brain-computer interface. Check out Sentien Audio, the first audio interface headset that you can wear all day without blocking your ears.
You get instant access to your voice assistant, podcasts, audiobooks, phone calls, notifications, and more. There when you need it, seamlessly blending in to your day when you don’t.
Explore Sentien Audio today. You can stream music directly to your...inner ear. Your cochlea is the new API.