CursorAI "unlimited" plan rug pull: Cursor AI silently changed their "unlimited" Pro plan to severely rate-limited without notice, locking users out after 3-7 requests
-
Someone just got the AWS bill.
-
Well shit, I've been on vacation, and I signed up with Cursor a month ago. Not allowed at work, but for side projects at home in an effort to "see what all the fuss is about".
So far, the experience has been rock solid, but I assume when I get home that I'll be unpleasantly surprised.
Has anyone here had rate limiting hit them?
-
Sounds like chargeback territory
-
Hopefully (?) this is the start of a trend, and people might begin to realize that all those products are not worth their price and that AI is an overhyped mess made to hook users before exploiting them...
-
Well shit, I've been on vacation, and I signed up with Cursor a month ago. Not allowed at work, but for side projects at home in an effort to "see what all the fuss is about".
So far, the experience has been rock solid, but I assume when I get home that I'll be unpleasantly surprised.
Has anyone here had rate limiting hit them?
I’ve primarily used claude-4-sonnet in Cursor and was surprised to see a message telling me it would start costing extra above and beyond my subscription. This was prolly after 100 queries or so. However, switching to “auto” instead of a specific model continues to not cost anything, and that still uses claude-4-sonnet when it thinks it needs to. The main difference I’ve noticed is that it’s actually faster, because it’ll sometimes hit cheaper/dumber APIs to address simple code changes.
It’s a nice toy that does improve my productivity quite a bit, and the $20/month is the right price for me, but I have no loyalty and will drop them without delay if it becomes unusable. That hasn’t happened yet.
-
I’ve primarily used claude-4-sonnet in Cursor and was surprised to see a message telling me it would start costing extra above and beyond my subscription. This was prolly after 100 queries or so. However, switching to “auto” instead of a specific model continues to not cost anything, and that still uses claude-4-sonnet when it thinks it needs to. The main difference I’ve noticed is that it’s actually faster, because it’ll sometimes hit cheaper/dumber APIs to address simple code changes.
It’s a nice toy that does improve my productivity quite a bit, and the $20/month is the right price for me, but I have no loyalty and will drop them without delay if it becomes unusable. That hasn’t happened yet.
"Prolly"
-
Someone just got the AWS bill.
That's got to be it. Cloud compute is expensive when you're not being funded in Azure credits. Once the dust settles from the AI bubble bursting, most of the AI we'll see will probably be specialized agents running small models locally.
-
"Prolly"
I mean yeah? I wasn’t counting in detail, it’s an estimate.
Previously you got 500 requests a month and then it’d start charging you, even on “auto.” So the current charging scheme seems to be encouraging auto use so they can use cheaper LLMs when they make sense (honestly a good thing).
-
I mean yeah? I wasn’t counting in detail, it’s an estimate.
Previously you got 500 requests a month and then it’d start charging you, even on “auto.” So the current charging scheme seems to be encouraging auto use so they can use cheaper LLMs when they make sense (honestly a good thing).
I was questioning the use of the word "prolly"
-
I was questioning the use of the word "prolly"
it means "probably"
-
I was questioning the use of the word "prolly"
Nah, you should find a new bone to pick.
-