If you still use Facebook after the Cambridge Analytica scandal, Libra, and more privacy and ethics violations than you and your extended family can count on their fingers and toes, you should have no ethical concerns over the brain-computer interface the company began developing two years ago. Now, the first fruit of those labors has arrived.
A Facebook-sponsored experiment at the University of California San Francisco successfully created an interface that translates brain signals into text, and the researchers published their results in Nature Communications. The software reads these signals to determine what you heard and what you said in response, without access to any audio of the conversation. The process utilizes high-density electrocorticography (ECoG), which requires sensors implanted in the brain, so there is no immediate concern for any non-consensual (literal) mind reading on Facebook’s part. Furthermore, it’s clear from the published research that the technology still has a long road ahead before it becomes both natural and practical to use:
Here we demonstrate real-time decoding of perceived and produced speech from high-density ECoG activity in humans during a task that mimics natural question-and-answer dialogue. While this task still provides explicit external cueing and timing to participants, the interactive and goal-oriented aspects of a question-and-answer paradigm represent a major step towards more naturalistic applications. During ECoG recording, participants first listened to a set of pre-recorded questions and then verbally produced a set of answer responses. These data served as input to train speech detection and decoding models. After training, participants performed a task in which, during each trial, they listened to a question and responded aloud with an answer of their choice. Using only neural signals, we detect when participants are listening or speaking and predict the identity of each detected utterance using phone-level Viterbi decoding. Because certain answers are valid responses only to certain questions, we integrate the question and answer predictions by dynamically updating the prior probabilities of each answer using the preceding predicted question likelihoods.
Essentially, participants provided live answers to pre-recorded questions, and the researchers used their brain signal data to train models to decode both what they heard and what they said. On average, the software correctly identified the perceived questions 76 percent of the time and the participants’ spoken answers at a lower rate of 61 percent. While it’s easy to concoct nefarious uses for this technology on Facebook’s behalf, the technology itself shows a lot of promise for communicating with people who are otherwise unable to, due to injury or neurodegenerative disorders.
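The context-integration trick the researchers describe, dynamically reweighting the prior over answers using the decoded question likelihoods, is simple to sketch. The toy below is not the study’s code: the questions, answers, probabilities, and function names are all illustrative, and the real system worked at the phone level with Viterbi decoding rather than whole-utterance scores. It only shows the Bayesian idea: answers that are valid responses to the likely question get boosted before the neural evidence is applied.

```python
# Toy sketch of question-conditioned answer decoding (illustrative numbers only).
# The context model says which answers plausibly follow which questions; decoded
# question likelihoods reshape the answer prior, which is then combined with the
# neural decoder's answer likelihoods.

answers = ["Bright", "Dark", "Water", "Juice"]

# p(answer | question): which answers are valid responses to which questions.
p_answer_given_question = {
    "How is your room currently?": {"Bright": 0.5, "Dark": 0.5, "Water": 0.0, "Juice": 0.0},
    "What do you want to drink?":  {"Bright": 0.0, "Dark": 0.0, "Water": 0.5, "Juice": 0.5},
}

def context_prior(question_likelihoods):
    """Marginalize over decoded questions to get an updated prior over answers."""
    prior = {a: 0.0 for a in answers}
    for q, lq in question_likelihoods.items():
        for a in answers:
            prior[a] += lq * p_answer_given_question[q][a]
    total = sum(prior.values())
    return {a: p / total for a, p in prior.items()}

def decode_answer(question_likelihoods, answer_likelihoods):
    """Combine the context prior with neural answer likelihoods; return the best answer."""
    prior = context_prior(question_likelihoods)
    posterior = {a: prior[a] * answer_likelihoods[a] for a in answers}
    total = sum(posterior.values())
    posterior = {a: p / total for a, p in posterior.items()}
    return max(posterior, key=posterior.get), posterior

# The decoder is fairly sure the drink question was asked, so even middling
# neural evidence for "Juice" beats stronger evidence for an invalid answer.
best, post = decode_answer(
    {"How is your room currently?": 0.1, "What do you want to drink?": 0.9},
    {"Bright": 0.2, "Dark": 0.2, "Water": 0.25, "Juice": 0.35},
)
print(best)  # prints "Juice"
```

The design point is that the context model zeroes out answers that make no sense for the likely question, which is a big part of why constrained question-and-answer dialogue is so much easier to decode than open-ended speech.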
While this research should continue in order to make new medical breakthroughs and help people, it should also continue to raise concerns when funded by a company that both wants to predict your future actions and, in some cases, already can. Will the company literally read minds in the near future? No, it has to revolutionize the global economy first, and a highly controlled 61 percent accuracy achieved through invasive brain sensor implants will take some time to become more precise and people-friendly. Nevertheless, we’ve seen how Facebook’s privacy problems can escalate significantly when concerns aren’t raised in advance.
Would you like your brain signals used for advertising? Facebook refused to deny it would use the technology for that purpose. Ads are highly manipulative and consumers don’t want them, whether they promote a gentle new body wash or a dubious political agenda. Nevertheless, advertising revenue nearly reached $105 billion in 2017. Imagine what companies will pay for actual thoughts.
Of course, Facebook insists their Brain API will only read the thoughts you want to share. Facebook spokesperson Ha Thai put it like this:
We are developing an interface that allows you to communicate with the speed and flexibility of voice and the privacy of text. Specifically, only communications that you have already decided to share by sending them to the speech center of your brain. Privacy will be built into this system, as every Facebook effort.
Cause for skepticism aside, consider how many times you’ve put your foot in your mouth or wished you hadn’t said something even as you were saying it. Consider having everything you ever say on record. Do you want Facebook to have that data? Do you want anyone to have it? If you don’t, now is a good time to start caring about Facebook’s research, because we already know what happens when we wait and see what Facebook will do with it.
Title image credit: Adam Dachis, Gan Khoon Lay, and Laymik.