UNESCO adopts global standards on ‘wild west’ field of neurotechnology


UNESCO has adopted a set of global standards on the ethics of neurotechnology, a field that has been described as “a bit of the Wild West.”

It is the latest initiative in a growing international effort to put guardrails around an emerging frontier: technologies that harness data from the brain and nervous system.

“There is no control,” said UNESCO’s head of bioethics, Dafna Feinholz. “We need to inform people about the risks, the potential benefits, the alternatives, so that they have the opportunity to say ‘I accept or I don’t accept’.”

She said the new standards were driven by two recent developments in neurotechnology: artificial intelligence (AI), which offers vast possibilities for decoding brain data, and the proliferation of consumer neurotechnology devices such as headphones that claim to read brain activity and glasses that track eye movements.

The standards define a new category of data, “neural data,” and provide guidelines governing its protection. A list of more than 100 recommendations ranges from rights-based concerns to scenarios that remain, at least for now, science fiction, such as companies using neurotechnology to market to people subliminally during their dreams.

“Neurotechnology has the potential to define the next frontier of human progress, but it is not without risks,” said UNESCO Director-General Audrey Azoulay. The new standards would “enshrine the inviolability of the human spirit,” she said.

Billions of dollars have been invested in neurotechnology projects in recent years, from Sam Altman’s August investment in Merge Labs, a competitor to Elon Musk’s Neuralink, to Meta’s recent unveiling of a wristband that lets users control their phone or AI Ray-Bans by reading the muscle movements of their wrist.

The wave of investment has been accompanied by growing pressure for regulation. The World Economic Forum released a paper last month calling for a privacy-focused framework, and US Senator Chuck Schumer introduced the MIND Act in September, following the lead of four states that have introduced laws to protect “neural data” since 2024.

Proponents of neurotechnology regulation emphasize the importance of protecting personal data. The UNESCO standards stress the need for “mental privacy” and “freedom of thought.”

Skeptics, however, say legislative efforts are often driven by dystopian anxieties and risk hindering vital medical advances.

“What’s happening with all of this legislation is fear. People are afraid of what this technology is capable of. The idea that neurotechnology reads people’s minds is scary,” said Kristen Mathews, a lawyer who works on mental privacy issues at the US law firm Cooley.

From a technical perspective, neurotechnology has been around for over 100 years. The electroencephalogram (EEG) was invented in 1924, and the first brain-computer interfaces were developed in the 1970s. The latest wave of investment, however, is driven by advances in AI that make it possible to decode large amounts of data, including, eventually, brain waves.

“What has allowed this technology to present perceived privacy issues is the introduction of AI,” Mathews said.

Some AI-based neurotechnology advances could have a transformative medical impact, helping to treat diseases ranging from Parkinson’s disease to amyotrophic lateral sclerosis (ALS).

A paper published in Nature this summer described an AI-powered brain-computer interface decoding the speech of a paralyzed patient. Other work suggests that AI might one day be able to “read” your thoughts – or at least reconstruct an image if you focus hard on it.

The hype around some of these advances has generated fears that Mathews said are often far removed from the actual dangers. The MIND Act, for example, asserts that AI and the “corporate vertical integration” of neurotechnology could lead to “cognitive manipulation” and an “erosion of personal autonomy.”

“I don’t know of any company that does this kind of thing. It’s not going to happen. Maybe in two decades,” she said.

Neurotechnology’s current frontier lies in improving brain-computer interfaces, which, despite recent advances, remain in their infancy, and in the proliferation of consumer-facing devices, which Mathews acknowledges could raise privacy concerns, a central worry of the UNESCO standards. She argues, however, that the concept of “neural data” is too broad an approach to the issue.

“Those are the kinds of things we would like to address. Monetization, behavioral advertising, use of neural data. But the current laws don’t address what we’re worried about. They’re more amorphous.”
