Neurotechnology Is Blurring The Lines Around Mental Privacy

Neurotechnologies – devices that interact directly with the brain or nervous system – were once dismissed as the stuff of science fiction. Not anymore. Several companies are developing, and some are even testing, “brain-computer interfaces,” or BCIs, of which the most high-profile is likely Elon Musk’s Neuralink. On Jan. 29, 2024, Musk announced that the first human participant in the company’s clinical trials had received a brain implant.

Like other companies in the field, Neuralink aims first to improve autonomy for patients with severe paralysis or other neurological disorders.

But not all BCIs are envisioned for medical use. EEG headsets that sense electrical activity in the wearer’s brain already serve a wide range of applications, from entertainment and wellness to education and the workplace. Musk’s ambitions, however, go beyond both therapeutic and these nonmedical uses: Neuralink aims to eventually help people “surpass able-bodied human performance.”

Neurotechnology research and patents have soared at least twenty-fold over the past two decades, according to a United Nations report, and devices are getting more powerful. Newer devices have the potential to collect data from the brain and other parts of the nervous system more directly, with higher resolution, in greater amounts and in more pervasive ways.

However, these improvements have also raised concerns about mental privacy and human autonomy – questions I think about in my research on the ethical and social implications of brain science and neural engineering. Who owns the generated data, and who should get access? Could this type of device threaten individuals’ ability to make independent decisions?

In July 2023, the U.N. agency for science and culture held a conference on the ethics of neurotechnology, calling for a framework to protect human rights. Some critics have even argued that societies should recognize a new category of human rights, “neurorights.” In 2021, Chile became the first country whose constitution addresses concerns about neurotechnology.

Advances in neurotechnology do raise important privacy concerns. However, I believe these debates can overlook more fundamental threats to privacy.

A Glimpse Inside

Concerns about neurotechnology and privacy focus on the idea that an observer can “read” a person’s thoughts and feelings just from recordings of their brain activity.

It is true that some neurotechnologies can record brain activity with great specificity: for example, developments in high-density electrode arrays allow for high-resolution recording from multiple parts of the brain.

Researchers can make inferences about mental phenomena and interpret behavior based on this kind of information. However, “reading” the recorded brain activity is not straightforward. The data have already passed through filters and algorithms before the human eye ever sees the output.

Given these complexities, my colleague Daniel Susser and I wrote an article in the American Journal of Bioethics – Neuroscience asking whether some worries around mental privacy might be misplaced.

While neurotechnologies do raise significant privacy concerns, we argue that the risks are similar to those for more familiar data-collection technologies, such as everyday online surveillance: the kind most people experience through internet browsers and advertising, or wearable devices. Even browser histories on personal computers are capable of revealing highly sensitive information.

It is also worth remembering that a key aspect of being human has always been inferring other people’s behaviors, thoughts and feelings. Brain activity alone does not tell the full story; other behavioral or physiological measures are also needed to reveal this type of information, as well as social context. A certain surge in brain activity might indicate either fear or excitement, for example.

However, that is not to say there’s no cause for concern. Researchers are exploring new directions in which multiple sensors – such as headbands, wrist sensors and room sensors – can be used to capture multiple kinds of behavioral and environmental data. Artificial intelligence could be used to combine that data into more powerful interpretations.

Think For Yourself?

Another thought-provoking debate around neurotechnology deals with cognitive liberty. According to the Center for Cognitive Liberty & Ethics, founded in 1999, the term refers to “the right of each individual to think independently and autonomously, to use the full power of his or her mind, and to engage in multiple modes of thought.”

More recently, other researchers have resurfaced the idea, such as in legal scholar Nita Farahany’s book “The Battle for Your Brain.” Proponents of cognitive liberty argue broadly for the need to protect individuals from having their mental processes manipulated or monitored without their consent. They argue that greater regulation of neurotechnology may be required to protect individuals’ freedom to determine their own inner thoughts and to control their own mental functions.

These are important freedoms, and there are certainly specific features – like those of novel BCI neurotechnology and nonmedical neurotechnology applications – that prompt important questions. Yet I would argue that the way cognitive freedom is discussed in these debates treats each individual person as an isolated, independent agent, neglecting the relational aspects of who we are and how we think.

Thoughts do not simply spring out of nothing in someone’s head. For example, part of my mental process as I write this article is recollecting and reflecting on research from colleagues. I’m also reflecting on my own experiences: the many ways that who I am today is the combination of my upbringing, the society I grew up in, the schools I attended. Even the ads my web browser pushes on me can shape my thoughts.

How much are our thoughts uniquely ours? How much are my mental processes already being manipulated by other influences? And keeping that in mind, how should societies protect privacy and freedom?

I believe that acknowledging the extent to which our thoughts are already shaped and monitored by many different forces can help set priorities as neurotechnologies and AI become more common. Looking beyond novel technology to strengthen current privacy laws may give a more holistic view of the many threats to privacy, and what freedoms need defending.

(Published under Creative Commons from The Conversation. Read the original article here)
