ChatGPT quietly fixed its most annoying habit
I’m back working with ChatGPT after a months-long split, during which I had dalliances with Gemini and later Claude (I’m a serial AI subscriber, similar to how I juggle Netflix, HBO Max, and the like). Now that I’ve rekindled my relationship with ChatGPT—all business, I assure you—I’ve noticed something different about the chatbot… or rather, something it has stopped doing, to my great relief.
Back in the day, ChatGPT was all about the follow-ups: those questions it would pose at the bottom of its responses to invite further interaction. “Would you like to know more about how Tailscale could work with your Raspberry Pi setup?” was just one example, or “Would you like me to draw up a 10-week meal plan for your family?”
I don’t have a problem with follow-up questions when they’re relevant or flow naturally from a discussion. But with ChatGPT—and plenty of other popular AI chatbots—the follow-up prompts became persistent and borderline obsessive, with answers to practically all my questions arriving with “Would you like me to…” suggestions tacked onto the ends.
Of course, we all know the reason for these incessant follow-up prompts: the need for big AI providers to boost engagement, to keep us chatting with our AI companions for as long as possible, thus making us more likely to re-sub at the end of the month.
Personally, I found the continual “Would you like…” questions to be annoying, manipulative, and even a tad stressful. They made me feel compelled to reply, even if my answer was a simple “No, thanks.” And if I did have a follow-up question, I felt the need to redirect the conversation (“Instead of the 10-week plan, could you help me with a simple vinaigrette recipe for tonight?”), meaning more typing in the chatbox.
ChatGPT’s unending follow-up questions weren’t the sole reason I took a break, but they sure weren’t enticing me to stick around. Yet when I finally did return a couple of weeks ago, I noticed something different almost immediately: those “Would you like…” questions were gone.
Now, instead of a question, you get more of an “If you’d like”-style suggestion. “If you’re interested, I can show you 5 advanced Claude Cowork workflows that are really interesting,” ChatGPT teased in a recent conversation. Here’s another one: “If you want, I can tell you the one big NYC condo tax deduction many owners miss.”
It’s a subtle but key change. Instead of an in-your-face question that seems to demand an answer, the new follow-ups are simply there for the taking—and easily skippable if you wish.
At the same time, there’s also some Jedi-level mind manipulation going on with these new follow-ups. More often than not, I find myself asking ChatGPT to tell me more about those gotta-know Claude workflows and hidden condo tax deductions. Whatever’s in the secret sauce of these non-question follow-ups, it’s working.
I’ve asked OpenAI for more details about its less-pushy follow-up prompts and will update this story once I hear back.
I also polled the three big AI chatbots—ChatGPT, Gemini, and Claude—about the new style of non-question follow-ups, and they all said more or less the same thing: while the naggy questions can lead to “assistant fatigue,” the conditional “if you’d like” phrasing sets up a more “comfortable dynamic” that’s actually more inviting.
ChatGPT, naturally, followed up with an “if you’re interested” prompt about “a whole spectrum of ‘pressure levels’ in conversational prompts,” while the reliably to-the-point Claude just served up a typically thorough and academic answer without any kind of follow-up nag at the end.
Then there’s Gemini, which wrote nearly a dozen paragraphs about how “passive availability” prompts can be more effective than “active prompting” questions, and then immediately hit me with a “Would you like me to…” question at the end. Ugh.