Hannah Fry: ‘AI can do some superhuman things – but so can forklifts’


BBC/Curious Films/Rory Langdon
Chances are you’re thinking a lot more about artificial intelligence today than you were five years ago. Since the launch of ChatGPT in November 2022, we have become accustomed to interacting with AI in most spheres of life, from chatbots and smart home technologies to banking and healthcare.
But such rapid change brings unexpected problems – as mathematician and broadcaster Hannah Fry shows in Confidential AI with Hannah Fry, a new three-part BBC documentary in which she talks to people whose lives have been transformed by the technology. She spoke to New Scientist about how we should view AI, its role in modern mathematics – and why it will disrupt the global economy.
Bethan Ackerley: In the series, you explore what AI does to our relationships and our sense of reality. Part of this comes from “AI sycophancy” – the idea that these tools give us what we want to hear, not what we need to hear. How does this happen?
Hannah Fry: Earlier models were extremely sycophantic. Whatever you wrote, they would say, “Oh my God, you’re so amazing, you’re the best writer I’ve ever known.” They are a little better now, but there is this fundamental contradiction. We want them to be helpful, supportive, and make us feel important, which is what you get in a really good human relationship.
At the same time, a very good human relationship will allow you to say difficult things out loud. If you put too much into the AI, it stops being useful and starts being argumentative and not fun to be around. There’s also this group of people who have broken up with their partner because they were using the AI as a therapist and it was telling them, “Get rid of him.”
There are people who have given up their jobs. There are people who have tried to use AI to make money and lost fortunes because they believed too much in its capabilities. Once you start including all of these people, you get a very large group. I think we all know someone who has been affected by social media bubbles and radicalization. I think this is the new version of that.
Has witnessing these issues changed the way you use AI?
What it has changed is how I prompt it. So now, I regularly ask it to, for example, tell me what I’m not seeing, find my biases. Don’t be sycophantic, tell me the hard things.
If we don’t want AI to be like that, what do we want it to be like?
The answer probably depends on the situation. In scientific spaces, there are astonishing examples – I am thinking of AlphaFold [an AI that predicts protein structures]. In mathematics, incredible progress is being made, where algorithms have an intelligence that is not that of humans. But I don’t think you can have a good model of reasoning unless it has conceptual overlap with what humans understand the world to be. So I think it has to be more human.
It seems like every day there is a news story about a math problem that hasn’t been solved for years, but has now been solved thanks to AI. Does this excite you?
I like to think of it as if there is this big map of mathematics, and human mathematicians occupy particular territories and work within them. They don’t always see the connections with nearby things. Amazing mathematicians have found bridges between two regions of the map – the Taniyama-Shimura conjecture, for instance, where Japanese mathematicians found a bridge between two otherwise disconnected areas of mathematics. Then everything we knew here applied there, and vice versa.
I think AI is really good at saying, “Look around here, this looks like fertile territory that’s been underexplored,” and that’s really, really exciting. What AI isn’t so good at is pushing the boundaries further. And what it’s really not good at is total abstraction, building broader and broader theories. What people always say is that if you gave AI everything up to 1900, it wouldn’t come up with general relativity. So I’m still excited that we’re in this ideal situation where AI will make human mathematics faster, more efficient and more exciting, but it still needs us.
There are many misconceptions about AI. Which one would you dispel, if you could?
People imagine it as omniscient, almost omnipotent. “The AI said this; the AI told me to buy these stocks.” There are certain situations where AI can do superhuman things – but so can forklifts. We have long built tools that can do things humans can’t. That doesn’t mean they are godlike or possess infallible knowledge.
You are not going to give a forklift access to your bank account…
No! Exactly. I think that’s it – the framing of these things. Because they speak a language and they talk to us, they feel like a creature. We don’t have this problem with Wikipedia. It would be better to think of them as a really capable Excel spreadsheet rather than a creature.
Why do we tend to anthropomorphize AI?
Our brains are adapted for social cognition. We are an intelligent, social species, and here is an apparently intelligent, social entity. Of course we project a character onto it. There is nothing in our past, in our make-up, that would encourage us to do anything else.
Is there no way to guard against this anthropomorphic impulse?
I think it is unfair to place this responsibility on individuals. It’s a bit like saying that junk food is freely available and it’s your responsibility to make sure you don’t consume too much of it. The way these interfaces are designed, the conversations they have with you – we now have very good evidence that all of this is leading to more and more people falling into this trap. And I think it’s only in the design of these systems that you’ll ever be able to prevent people from falling down these rabbit holes.
AI highlights many social problems, such as people being very isolated and alone. But couldn’t AI help solve these problems?
If you say, “OK, you can’t talk to any chatbot if you’re lonely, let’s ban that,” then you still have lonely people. And of course it would be amazing if there were abundant human connections for everyone, but that doesn’t happen. So given that this is the world we live in, I think there are some situations where talking to a chatbot can alleviate some of the worst problems associated with loneliness. But these are delicate subjects. When you start using technology to answer truly human questions, it all becomes incredibly fragile.
Let’s talk about the distant future. We often think of extreme scenarios with AI – for example, a superintelligent AI designed to make paperclips turns us all into paperclips. How useful is it to think about this kind of apocalyptic scenario?
There was a moment when I thought these crazy, far-out scenarios were distracting from what really mattered, which was that decisions were being made by algorithms that affected people’s lives. I’ve changed my mind in recent years, because I think that only by worrying about these kinds of things can we put technical safety mechanisms in place to prevent this from happening.
So worrying is not useless – worrying really has power. AI can have real negative consequences, and the more honest we are about them, the more likely we are to be able to mitigate them. I want it to be like the year 2000, you know? I want this to be the thing we worried and worried about, and that’s why we did the work to prevent it from happening.
Do you think we will ever achieve artificial general intelligence?
We don’t really have a clear definition of what AGI is. But if we take AGI to mean at least as good as most humans at any task involving a computer, then, yes, we’re almost there, really. Some people consider AGI to exceed human capabilities in every possible task. That, I don’t know. But I think AGI is really not far off at all. I really think in the next five to ten years we’re going to see seismic changes.
What kind of changes?
I think that the economic models that we have been accustomed to throughout human history will undergo profound changes. I think there will be some really giant advances in science, which I’m really excited about, as well as in drug design. The whole structure of our society is based on the idea that you trade your labor and your knowledge and your human intelligence for money that you then use to buy things – I think there’s a certain fragility in that.
AI will certainly change our relationship to work. What do we need to do to ensure that AI leads us all to work less, rather than some becoming completely unemployed?
I have an answer for that – I can just see how much trouble I’ll get into if I say it out loud. OK, I’ll give you a version of it. There are just a few undeniable facts, right? Until now, society has been based on the exchange of work for money. Our tax system is based on the taxation of income and not wealth. I think both of those things are going to have to change.