Anthropic confirms it’s been ‘adjusting’ Claude usage limits


Summary created by Smart Answers AI
In summary:
- Anthropic has confirmed the adjustment of Claude AI usage limits during peak weekday hours, particularly affecting Pro tier subscribers who now hit session limits more frequently.
- PCWorld reports that about 7% of users are experiencing stricter daily limits despite unchanged weekly quotas, primarily between 5 a.m. and 11 a.m. Pacific.
- These changes reflect the growing demand driven by Claude’s million-token context window and the broader challenges AI providers face with flat-rate subscription models.
If you feel like you’ve reached your Claude usage limits a lot faster over the past week, you’re absolutely right.
Anthropic confirmed that it has “adjusted” the five-hour usage limits for Claude Free, Pro, and Max users during peak hours of 5:00 a.m. to 11:00 a.m. PT on weekdays, while leaving the overall weekly limits unchanged.
The news comes via a Reddit post from an Anthropic representative. I contacted Anthropic and confirmed the message was genuine.
Anthropic’s post doesn’t specify when the usage-limit adjustment took place, but my understanding is that the new limits went into effect on Monday.
Claude users have been complaining bitterly about how quickly they’ve reached their usage limits over the past week, with many suspecting a silent reduction in their five-hour usage allowances. It turns out they were right.
Anthropic says it has imposed adjusted usage limits to “manage the growing demand for Claude.”
“We’ve achieved many efficiencies to offset this, but approximately 7% of users will hit session limits they wouldn’t have had before, especially in Pro tiers,” Anthropic’s post continues. “If you’re running token-intensive background tasks, moving them to off-peak times will further extend your session boundaries.”
Anthropic acknowledged that adjusting limits “was frustrating” and that it “continues to invest in effective scaling.”
News of Anthropic’s usage-limit changes comes amid renewed interest in Claude following the company’s legal standoff with the Department of Defense, which sought to label Anthropic a “supply chain risk” after the company balked at signing a military contract. A judge recently suspended the Pentagon’s decision to apply the “supply chain risk” label.
Anthropic’s decision to adjust its five-hour usage limits speaks to a larger problem: how big AI providers treat subscribers on flat-rate plans.
In the past, AI users on “plus,” “pro,” or “max” plans (which cost between $10 and $250 per month, depending on the provider) rarely hit usage limits, because they simply chatted with models in a web interface.
But with the rise of agentic AI features such as vibe-coding applications and “computer use” capabilities, flat-rate AI subscribers are burning far more tokens than ever before, and large AI providers are struggling to keep up with demand.
The problem is exacerbated by Claude’s massive million-token context window, which rolled out earlier this month.
What’s happening now is that Anthropic and other AI companies are curbing flat-rate plan usage, sometimes silently.
And no, that’s not cool.



