The dangers of so-called AI experts believing their own hype

Demis Hassabis, CEO of Google DeepMind and winner of a Nobel prize for his role in developing AlphaFold, the AI algorithm that predicts protein structures, made an astonishing assertion on the TV show 60 Minutes in April. With the help of AI like AlphaFold, he said, the end of all disease is within reach, “maybe within the next decade”. With that, the interview moved on.

To those who actually work on developing drugs and cures, this claim is laughable. According to the medicinal chemist Derek Lowe, who has worked in drug discovery for decades, Hassabis’s statements “make me want to spend some time looking silently out the window, mouthing unintelligible words”. But you don’t need to be an expert to recognise the hyperbole: the idea that all disease will end within a decade or so is absurd.

Some have suggested that Hassabis’s remarks are just another example of tech-boss bluster, perhaps intended to attract investors and funding. Isn’t it just like Elon Musk making silly forecasts about Martian colonies, or OpenAI’s Sam Altman saying that artificial general intelligence (AGI) is just around the corner? But while this cynical view may have some validity, it lets these experts off the hook and underestimates the problem.

It is one thing when apparent authorities make grand claims outside their field of expertise (see Stephen Hawking on AI, aliens and space travel). But it might seem that Hassabis is staying in his lane here. His Nobel citation mentions new pharmaceuticals as a potential benefit of AlphaFold’s predictions, and the algorithm’s release was accompanied by endless media headlines about a revolution in drug discovery.

Likewise, when his fellow 2024 Nobel laureate Geoffrey Hinton, a former AI adviser at Google, said that the large language models (LLMs) he helped create work in a way that resembles human learning, he seemed to be speaking from deep knowledge. Too bad about the cries of protest from those who study human cognition – and, in some cases, work on AI too.

What such cases seem to reveal is that, strangely, some of these AI experts have come to resemble their products: they are capable of producing remarkable outputs while having an understanding of them that is, at best, skin-deep and brittle.

Here is another example. Daniel Kokotajlo, a researcher who left OpenAI over concerns about its work towards AGI and is now executive director of the AI Futures Project in California, has said: “We are catching our AIs lying, and we are pretty sure they knew that what they were saying was false.” This anthropomorphic language of knowledge, intention and deception suggests that Kokotajlo has lost sight of what LLMs really are.

The dangers of assuming that these experts know best are illustrated by Hinton’s comment in 2016 that, thanks to AI, “people should stop training radiologists now”. Fortunately, radiology experts did not believe him, although some suspect a link between his remark and medical students’ growing worries about job prospects in radiology. Hinton has since walked back that assertion – but imagine how much more force it would have carried had he already received his Nobel. The same goes for Hassabis’s comments on disease: the idea that AI will do the heavy lifting could breed complacency, when we need the exact opposite, both scientifically and politically.

These “expert” prophets tend to face very little pushback in the media, and I can personally attest that even some smart scientists believe them. Many government leaders also give the impression of having swallowed the hype of tech CEOs and Silicon Valley gurus. But I suggest we start treating their pronouncements like the outputs of LLMs themselves: meeting their glib confidence with scepticism until the facts are verified.

Philip Ball is a science writer based in London. His latest book is How Life Works.
