What to read this week: The Laws of Thought by Tom Griffiths

Dwight Ellefsen/FPG/Archives

The Laws of Thought
Tom Griffiths, William Collins (UK); Macmillan (US)

FOR nearly 70 years, cognitive science researchers have been waging a civil war. On one side is computationalism, which argues that intelligence is best explained by rules, symbols, and logic that can be expressed as equations. On the other is connectionism, where intelligence emerges from vast connected networks modeled on the brain’s neurons, and no single component is intelligent but, somehow, the system as a whole is.

This battle has shaped everything from cognitive science to the artificial intelligence now transforming the global economy. This month, two new books arrive from opposite sides. For me, the standout is The Laws of Thought: The quest for a mathematical theory of mind, in which Princeton professor Tom Griffiths traces the long attempt to formalise thought into mathematical laws, explaining why modern AI is the way it is – and what the future may hold.

Griffiths frames the story around three competing and increasingly entangled mathematical ways of formalising thought: rules and symbols, neural networks, and probability. The first treats thinking as problem solving: dividing a task into goals and sub-goals, then working through it in formal steps. It powered early AI, but it also showed why human common sense is so hard to capture, with the number of rules an AI had to follow quickly ballooning into the tens of millions.

Neural networks trade explicit rules for learning from examples, building intelligence from many simple units whose interactions produce complex behaviour. This is (sort of) how humans work, but probability and statistics add a third ingredient: uncertainty. Minds don’t have access to perfect information, and what makes us human is how we weigh evidence and update our beliefs.

For Griffiths, none of the three frameworks is enough. Realistic stories about intelligence, whether human or mechanical, will blend all three. He sets out his historical perspective, examining how humans have attempted to map mental processes using mathematics, drawing on archives and interviews with researchers. As a result, his book is detailed and engaging, if a little heavy-handed.

A different approach is taken by neuroscientists Gaurav Suri and Jay McClelland in The Emerging Mind: How intelligence arises in people and machines, in which they argue that the mind is an emergent property of interacting neural networks, biological or artificial, capable of generating thoughts, emotions and decisions. The book draws on McClelland’s history as a pioneer of connectionism.

Both books offer interesting and contradictory perspectives on the generative AI revolution. For Griffiths, a large language model (LLM) confirms his hybrid vision: it is impressive, but it hallucinates and stumbles, and a symbolic layer will be needed to remedy this. For Suri and McClelland, the same LLM is a vindication: proof of how much reasoning can emerge from a single network.

The problem with The Emerging Mind is not so much its thesis as its presentation, with a tone that oscillates between folksy asides and awkward phrasing. Explaining the maths and science was always going to be tricky, and neither book fully delivers, though The Laws of Thought comes closest, because describing the history of AI means focusing on what each framework can and cannot explain.

The Emerging Mind offers the more provocative manifesto, with the authors seeing no fundamental obstacle to more autonomous, goal-driven AI emerging from purely neural architectures. As a result, it can feel less grounded in reality.

Griffiths’ book, however, leaves you with a clear sense of the “languages” we have to describe thought and why the future may well lie in messy overlaps.

Could this future even be a sign of peace between the two camps?

Two other great books on artificial intelligence

New scientist. Science news and long reads from expert journalists, covering scientific, technological, health and environmental developments on the website and in the magazine.

Algorithms to Live By
by Brian Christian and Tom Griffiths

This is a non-technical, lively tour of how ideas from computer science can inform everyday decisions, including how an algorithmic approach can improve human decision-making. Co-written by Griffiths ten years ago, before the ChatGPT revolution, it remains relevant today.


Rebooting AI
Building artificial intelligence we can trust
by Gary Marcus and Ernest Davis

Today’s neural networks can be impressive but fragile, this book argues. The authors make the case for hybrid systems that recapture the strengths of the rules-and-symbols approach – one of the three mathematical frameworks in Griffiths’ new book.

Chris Stokel-Walker is a technology writer based in Newcastle upon Tyne, UK.
