New Scientist changed the UK’s freedom of information laws in 2025


Our successful request for Peter Kyle’s ChatGPT logs stunned observers
Tada Images/Victoria Jones/Shutterstock
When I sent an email in early 2025, I didn’t intend to set a legal precedent for how the UK government handles its interactions with AI chatbots, but that’s exactly what happened.
It all started in January when I read an interview with then UK technology secretary Peter Kyle in PoliticsHome. Keen to show that he used the very technology his department was set up to regulate, Kyle said he often had conversations with ChatGPT.
This made me wonder: can I get his chat history? Freedom of information (FOI) laws are often deployed to obtain emails and other documents produced by public bodies, but previous precedent suggests that some private data, such as search queries, cannot be disclosed in this way. I was interested to see how the chatbot conversations would be classified.
The answer turned out to be a mix of both: many of Kyle’s interactions with ChatGPT were deemed personal and therefore exempt from disclosure under FOI law, but the conversations he had with the AI chatbot in his official capacity were not.
So in March, the Department for Science, Innovation and Technology (DSIT) provided a handful of conversations Kyle had had with the chatbot – which became the basis of our exclusive story revealing his use of the tool.
The publication of the chat logs came as a shock to data protection and FOI experts. “I’m surprised you got them,” Tim Turner, a data protection expert based in Manchester, UK, said at the time. Others were less diplomatic: they were simply stunned.
When the story was published, we explained how this release was a world first – and access to AI chatbot conversations has continued to generate international interest.
Researchers from several countries, including Canada and Australia, have contacted me for advice on writing their own requests to government ministers in the hope of obtaining the same kind of information. One subsequent FOI request of mine, in April, revealed that Feryal Clark, then the UK’s minister for artificial intelligence, had not used ChatGPT at all in her official role, despite having publicly touted its benefits. But many requests have proved unsuccessful, as governments lean more heavily on legal exemptions to disclosure.
I have personally noticed that the UK government has become much more guarded about freedom of information, particularly in relation to the use of AI, since my story for New Scientist. A subsequent FOI request I made for DSIT’s internal response to the article – including any emails or Microsoft Teams messages mentioning the story, as well as how DSIT arrived at its official comment on it – was refused.
The reason? The request was deemed vexatious, and separating the disclosable material from the rest would take too long. I was tempted to suggest the government use ChatGPT to summarise everything relevant, given how the then technology secretary had waxed lyrical about its prowess, but decided against it.
Overall, this release matters because governments are rapidly adopting AI. The UK government has already said that civil servants use ChatGPT-style tools in their daily work, claiming this saves up to two weeks a year per person through improved efficiency. But AI does not summarise information impartially or perfectly: hallucinations happen. That is why transparency about how these tools are used – for better or worse – is so important.
Topics:
- policy
- 2025 news review



