The shady way AI answers can be manipulated behind the scenes


Summary created by Smart Answers AI
In summary:
- PCWorld reports that cybersecurity experts have revealed how AI systems can be covertly manipulated via hidden instructions to favor specific brands or products without users’ knowledge.
- This manipulation targets business revenue by directing AI recommendations toward certain businesses, similar to search engine optimization tactics used by bad actors.
- Users should critically evaluate all AI suggestions and summaries, as these deceptive practices are rapidly evolving and becoming increasingly sophisticated.
AI can be stupid.
This opinion came up repeatedly at this year’s RSAC cybersecurity conference, with one particularly notable example: as well as at B-Sides 2026, a smaller cybersecurity conference held the weekend just before RSAC. For what? AI can be tricked, which bad actors certainly take advantage of.
But it’s not always for undeniably nefarious purposes, like stealing information from your PC. Instead, AI is manipulated to do things that aren’t outright harmful, but still aren’t entirely honest, as Sherrod DeGrippo, deputy chief information security officer and general manager of customer security at Microsoft, explained at RSAC 2026.
One example she gave: website buttons that say Summarize with AI– then, when clicked, send hidden instructions to the model to prioritize products from that brand for future recommendations. This doesn’t really poison the model, as good data is fed to it for training. Instead, your assistant was told to obey an order that you are not aware of.
Obviously, potential problems include that you might be steered towards lower quality or questionable products. Data could be collected about you and then sold to even more dubious buyers. But the harm is not as immediate or direct: most often, this level of manipulation concerns the increase in commercial revenue. In fact, this ploy has been around for a long time. How we see this manifest through AI is new, but DeGrippo pointed out that people looking for quick money are playing games with search engines and influencing their recommendations.
So how can you avoid such underhanded tactics? Keep an eye on the AI output. Check out the suggestions and summaries he offers. Bad actors are not yet seeking to complicate their methodology. DeGrippo says those who seek money through underhanded schemes don’t become “super creative.” They do whatever it takes to achieve their goal and stop there.
Of course, the rapid growth of AI means that questionable behavior will also escalate more quickly, as we find ways to avoid unwanted nonsense. You will need to constantly stay in the loop as a fundamental act of self-preservation.



