Chat & Ask AI app exposed 300 million messages due to misconfiguration

A popular mobile app called Chat & Ask AI has over 50 million users on the Google Play Store and Apple App Store. Now, an independent security researcher claims the app has exposed hundreds of millions of private online chatbot conversations.
The exposed messages reportedly include deeply personal and disturbing requests. Users asked questions like how to commit suicide painlessly, how to write suicide notes, how to make meth, and how to hack other apps.
These were not harmless prompts. These were complete chat histories linked to real users.
Sign up for my FREE CyberGuy Report
Get my best tech tips, urgent security alerts and exclusive offers straight to your inbox. Plus, you’ll get instant access to my Ultimate Scam Survival Guide – free when you join my CYBERGUY.COM bulletin.

Security researchers say Chat & Ask AI exposed hundreds of millions of private chatbot messages, including entire conversation histories linked to real users. (Neil Godwin/Getty Images)
What exactly was exposed
The problem was discovered by a security researcher known as Harry. He found that Chat & Ask AI had a misconfigured backend built on Google Firebase, a popular mobile app development platform. Because of this misconfiguration, third parties could easily gain authenticated access to the application's database. Harry says he was able to access around 300 million messages linked to more than 25 million users, and he analyzed a smaller sample of around 60,000 users and over a million messages to confirm the scope of the exposure.
The exposed data would have included:
- Full AI chat histories
- Timestamps for each conversation
- The personalized name that users gave to the chatbot
- How users configured the AI model
- Which AI model was selected
This is important because many users treat AI chats like private journals, therapists, or brainstorming partners.
How this AI app stores so much sensitive user data
Chat & Ask AI is not a standalone artificial intelligence model. It acts as a wrapper that lets users communicate with large language models built by larger companies. Users could choose between models from OpenAI, Anthropic and Google, including ChatGPT, Claude and Gemini. While those companies operate the underlying models, Chat & Ask AI manages the storage, and that is where things went wrong. Cybersecurity experts say this type of Firebase misconfiguration is a well-known weakness, and it is easy to find for anyone who knows what to look for.
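To illustrate what experts mean by this class of misconfiguration: Firebase databases are protected by developer-written security rules, and the most common mistake is a rule that grants access to any signed-in user rather than only to the data's owner. The app's actual configuration has not been published, so the following Firestore ruleset is a hypothetical sketch of the insecure pattern and a safer alternative, not the app's real rules:

```
// INSECURE (illustrative): any authenticated client can read and write
// every document, including other users' chat histories.
rules_version = '2';
service cloud.firestore {
  match /databases/{database}/documents {
    match /{document=**} {
      allow read, write: if request.auth != null;
    }
  }
}

// SAFER (illustrative): each user can access only documents under
// their own user ID.
rules_version = '2';
service cloud.firestore {
  match /databases/{database}/documents {
    match /users/{userId}/{document=**} {
      allow read, write: if request.auth != null
                         && request.auth.uid == userId;
    }
  }
}
```

A single rule like the first one is enough to expose an entire database to anyone who can create an account, which is consistent with the researcher's description of easily gaining authenticated access.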
We reached out to Codeway, which publishes the Chat & Ask AI app, for comment but did not receive a response prior to publication.

The exposed database reportedly included timestamps, model parameters, and the names users gave their chatbots, revealing much more than isolated prompts. (Elisa Schu/Getty Images)
Why it’s important for everyday users
Many people assume that their chats with AI tools are private. They write things they would never post publicly or even say out loud. When an application stores this data insecurely, it becomes a gold mine for attackers. Even without names, chat histories can reveal mental health issues, illegal behavior, workplace secrets, and personal relationships. Once exposed, this data can be copied, archived, and shared indefinitely.

Since the app handled data storage itself, a simple misconfiguration of Firebase made sensitive chats via AI accessible to outsiders, according to the researcher. (Édouard Berthelot/Getty)
Ways to stay safe when using AI apps
You don’t need to stop using AI tools to protect yourself. A few informed choices can reduce your risk while still letting you use these apps when they are useful.
1) Be attentive to sensitive topics
Chats with AI can feel private, especially when you’re stressed, curious, or looking for answers. However, not all apps handle conversations securely. Before sharing deeply personal struggles, medical issues, financial details, or questions that could pose legal risk if exposed, take the time to understand how the app stores and protects your data. If those protections are unclear, consider safer alternatives, such as trusted professionals or services with stricter privacy controls.
2) Search for the app before installing it
Look beyond download counts and star ratings. Check who operates the app, how long it has been available, and whether its privacy policy clearly explains how user data is stored and protected.
3) Assume conversations can be stored
Even when an app claims privacy, many AI tools record conversations for troubleshooting or model improvement. Treat chats as potentially permanent records rather than temporary messages.
4) Limit account associations and logins
Some AI apps allow you to sign in with Google, Apple, or an email account. While convenient, this can directly connect chat histories to your real identity. Where possible, avoid linking AI tools to primary accounts used for work, banking, or personal communications.
5) Check app permissions and data controls
AI applications may require access beyond what is necessary to function. Check permissions carefully and disable anything that is not essential. If the app offers options to delete chat history, limit data retention, or turn off syncing, enable those settings.
6) Use a data deletion service
Your digital footprint extends beyond AI applications. Anyone can find personal information about you with a simple Google search, including your phone number, home address, date of birth, and social security number. Marketers buy this information to target ads. In more serious cases, scammers and identity thieves hack data brokers, leaving personal data exposed or circulating on the dark web. Using a data removal service helps reduce what can be linked to you in the event of a breach.
Although no service can guarantee the complete removal of your data from the Internet, a data deletion service is definitely a wise choice. They’re not cheap, and neither is your privacy. These services do all the work for you by actively monitoring and systematically deleting your personal information across hundreds of websites. This is what gives me peace of mind and has proven to be the most effective way to erase your personal data from the Internet. By limiting the information available, you reduce the risk of fraudsters cross-referencing data from breaches with information they might find on the dark web, making it harder for them to target you.
Check out my top picks for data deletion services and get a free scan to find out if your personal information is already available on the web by visiting Cyberguy.com.
Kurt’s Key Takeaways
AI chat apps are evolving quickly, but security still lags behind. This incident shows how a single configuration error can reveal millions of deeply personal conversations. Until stronger protections become the norm, you should treat AI chats with caution and limit what you share. The convenience is real, but so is the risk.
Do you think your AI chats are private, or has this story changed what you’re willing to share with these apps? Let us know what you think by writing to us at Cyberguy.com.
Copyright 2026 CyberGuy.com. All rights reserved.



