Misinformation was a problem during the Texas floods. AI chatbots weren’t always helping
When sudden, deadly floods swept through Central Texas last week, people on the social media site X turned to the artificial intelligence chatbot Grok for answers about whom to blame.
Grok confidently blamed President Trump.
“Yes. Trump’s 2025 cuts slashed NOAA/NWS funding by ~30% and staff by 17%, impairing forecast accuracy. Warnings underestimated rainfall, contributing to inadequate alerts for the flash floods that killed 24 and left ~24 Camp Mystic girls missing, presumed dead,” Grok said in a post Saturday, before the death toll rose to at least 120.
Facing backlash from X users that it had jumped to conclusions and was “woke,” the chatbot then backtracked.
“That screenshot is a fabrication — I never posted it. Actual facts: Trump’s 2025 NOAA cuts reduced funding by ~30% and staff by 17%, raising concerns (Axios, Fox). The Texas floods killed 43+, with warnings issued in advance (CNN), but rainfall was underestimated,” it said.
The contradictory statements show how AI chatbots can sometimes offer simple but inaccurate answers, adding confusion to online chatter already filled with falsehoods and conspiracy theories.
Later in the week, Grok had more problems. The chatbot posted antisemitic remarks and praised Adolf Hitler, prompting xAI to remove the offensive posts. The company’s owner, Elon Musk, said on X that the chatbot was “too eager to please and be manipulated,” an issue that would be addressed.
Grok isn’t the only chatbot that has made inappropriate and inaccurate statements. Last year, Google’s Gemini chatbot created images depicting people of color in German military uniforms from World War II, which wasn’t common at the time. The search giant paused Gemini’s ability to generate images of people, noting that it had resulted in “inaccuracies.” OpenAI’s ChatGPT has also generated fake legal cases, resulting in lawyers being fined.
The trouble chatbots sometimes have with the truth is a growing concern as more people use them to find information, ask questions about current events and help debunk misinformation. Roughly 7% of Americans use AI chatbots and interfaces for news each week. That number is higher — around 15% — for people under 25, according to a June report from the Reuters Institute. Grok is available as a mobile app, but people can also ask the AI chatbot questions on the social media site X, formerly Twitter.
As these AI-powered tools grow in popularity, misinformation experts say people should be wary of what chatbots say.
“It’s not an arbiter of truth. It’s just a prediction algorithm. For some things, like this question about who’s to blame for the Texas floods, that’s a complex question and there’s a lot of subjective judgment,” said Darren Linvill, a professor and co-director of the Watt Family Innovation Center Media Forensics Hub at Clemson University.
Republicans and Democrats have debated whether job cuts in the federal government contributed to the tragedy.
Chatbots pull information available online and give answers even when they’re not correct, he said. If the data they’re trained on is incomplete or biased, the AI model can provide answers that make no sense or are false — what are known as “hallucinations.”
NewsGuard, which conducts a monthly audit of 11 generative AI tools, found that 40% of chatbot responses in June included false information or a non-answer, some in connection with news about the Israel-Iran war and the shooting of two lawmakers in Minnesota.
“AI systems can become unintentional amplifiers of false information when reliable data is drowned out by repetition and virality, especially during fast-moving events when false claims spread widely,” the report said.
During the immigration sweeps conducted by U.S. Immigration and Customs Enforcement in Los Angeles last month, Grok incorrectly fact-checked posts.
After California Gov. Gavin Newsom, politicians and others shared a photo of National Guard members sleeping on the floor of a federal building in Los Angeles, Grok falsely said the images were from Afghanistan in 2021.
The phrasing or timing of a question can yield different answers from different chatbots.
When Grok’s biggest competitor, ChatGPT, was asked a yes-or-no question Wednesday about whether Trump’s staffing cuts led to the deaths in the Texas floods, the AI chatbot had a different answer. “No — that claim doesn’t hold up under scrutiny,” ChatGPT replied, citing posts from PolitiFact and the Associated Press.
While all types of AI can hallucinate, some misinformation experts said they were more concerned about Grok, a chatbot created by Musk’s company xAI. The chatbot is available on X, where people ask questions about breaking news events.
“Grok is the most disturbing one to me, because so much of its knowledge base was built on tweets,” said Alex Mahadevan, director of MediaWise, Poynter’s digital media literacy project. “And it is controlled, and certainly manipulated, by someone who, in the past, has spread misinformation and conspiracy theories.”
In May, Grok began repeating claims of “white genocide” in South Africa, a conspiracy theory that Musk and Trump have amplified. The AI company behind Grok then said an “unauthorized modification” had been made to the chatbot that directed it to provide a specific response on a political topic.
xAI, which also owns X, did not respond to a request for comment. The company released a new version of Grok this week, which Musk said will also be integrated into Tesla vehicles.
Chatbots are usually correct when they fact-check. Grok has debunked false claims about the Texas floods, including a conspiracy theory that cloud seeding — a process that involves introducing particles into clouds to increase precipitation — by El Segundo-based Rainmaker Technology Corp. caused the deadly Texas floods.
Experts say AI chatbots also have the potential to help people move away from conspiracy theories, but they could also reinforce what people want to hear.
While people want to save time by reading AI-generated summaries, they should ask chatbots to cite their sources and click on the links they provide to verify the accuracy of their responses, misinformation experts said.
And it’s important for people not to treat chatbots “as some sort of a god in the machine, to understand that it’s just a technology like any other,” Linvill said.
“After that, it’s about teaching the next generation a whole new set of media literacy skills.”