Any limits for requests to Claude with the Pro subscription? Claude 2 instead of Claude 3.5?

During the trial period I liked Cody and took the Pro subscription. However, later, when working with Cody + Claude 3.5 Sonnet, I encountered catastrophic response quality. I decided to ask what version of Claude was being used, and the chat replied

Do you impose any daily or monthly limits?

Hey @ravecat

In fact, the latest Claude 3.5 Sonnet is used when selected.
You can’t rely on the answers of the model itself, because the model names and IDs are not included in the training data.

I hope that helps.

Rewriting this in the hopes of being more fair…

From my experience, what @ravecat reported seems to be correct. I came here to report the same.

After heavy usage, the quality fell below even basic usability thresholds. I work with Claude 3.5 on a daily basis, and it was obvious that something was very off.

I did some testing on this against the direct API. Across all new chats, the Cody model reports itself as Claude 2.1 (with 3.5 Sonnet selected). It also reports a training cutoff of 2023. This is inconsistent with the behaviour of the same Anthropic models via the API.

If I select 3.5 Haiku, it reports itself as 3 Opus.

Overall, I think it's fair to say that models may not know, or may decline to report, their version information. That said, there should be consistency between the direct API responses and Cody's implementation of them. Notably, from my testing, the newer models will not report a version number at any temperature, but the older ones seem to.
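For anyone who wants to reproduce this kind of check against the direct API, here is a minimal sketch, assuming the official `anthropic` Python SDK, an `ANTHROPIC_API_KEY` in the environment, and placeholder model IDs (self-reports are not authoritative; the point is only to compare the direct API's behaviour with what Cody returns):

```python
# Sketch: ask a model to self-identify at a few temperatures via the direct API.
# Assumes the official `anthropic` Python SDK and ANTHROPIC_API_KEY in the environment.
# Model IDs are placeholder aliases; adjust to whatever your account exposes.
import anthropic

client = anthropic.Anthropic()

MODELS = ["claude-3-5-sonnet-latest", "claude-3-5-haiku-latest"]  # placeholders
PROMPT = "Which Claude model and version are you, and what is your training cutoff?"

for model in MODELS:
    for temperature in (0.0, 0.5, 1.0):
        response = client.messages.create(
            model=model,
            max_tokens=200,
            temperature=temperature,
            messages=[{"role": "user", "content": PROMPT}],
        )
        print(f"{model} @ T={temperature}: {response.content[0].text.strip()}")
```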

The consistency of the version output from Cody is telling. It would be an odd hallucination, and beyond that, it shows that the newer versions' guardrails aren't there.

More importantly, the demonstrable gap in skill and ability compared with what the API or the Claude web app provides is what really calls this into question. What worked well in the first days of Cody simply doesn't anymore. Taken together, these facts make what I would consider a reasonable case that something is quietly happening. And, given usage rates, I can understand why.

My thought on this is:

I absolutely understand that usage costs are going to be high for some users, and I would be happy to pay what's reasonable to cover my usage. However, looking over the support history, I see a lot of users complaining about limits on what was advertised as unlimited. Now we are seeing this.

As a word of advice: at the end of the day, it makes more sense to change an unsustainable business model, and add transparency, than to quietly work around providing what's advertised.

To be clear, I don’t personally intend to seek any legal action, but FTC laws are no joke.

You guys have a good start on a product. Plenty of your users would be happy to pay for their actual usage. I hope to be able to use it in the future.

Thank you for your feedback.

Since direct access to the Anthropic models via the API differs from access through Cody's chat (IDE, web) - Cody includes a preset system prompt - it is of course difficult to make an exact comparison. The temperature is also fixed in Cody.
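For a fairer comparison, both of those can at least be set explicitly when calling the API directly. A minimal sketch, where the system prompt and temperature are placeholders rather than Cody's actual values:

```python
# Sketch: call the API with an explicit system prompt and a fixed temperature,
# to mimic the fact that Cody always sends both. The values below are
# placeholders, not Cody's actual system prompt or temperature.
import anthropic

client = anthropic.Anthropic()

response = client.messages.create(
    model="claude-3-5-sonnet-latest",  # placeholder model ID
    max_tokens=512,
    temperature=0.2,  # placeholder; Cody fixes its own value
    system="You are a coding assistant working inside an IDE.",  # placeholder
    messages=[{"role": "user", "content": "Explain what this function does: ..."}],
)
print(response.content[0].text)
```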

That said, you can see exactly what is happening internally in the Output panel. If I select the Claude 3.5 Haiku model in the chat, I can see it there (Fig. 1).

If I send a sample prompt, I can also see which model is being used (Fig. 2).

Do you have an example prompt you can share that performs badly, over a sample of, say, 20 prompts, so we can compare?
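If it helps to structure that comparison, here is a rough sketch of how such a sample could be collected from the direct API for side-by-side review against Cody's answers to the same prompts (assuming the official `anthropic` Python SDK; the prompt list and model ID are placeholders):

```python
# Sketch: run a fixed set of sample prompts against the direct API so the answers
# can be compared side by side with what Cody returns for the same prompts.
# Assumes the official `anthropic` Python SDK and ANTHROPIC_API_KEY; the prompts
# and model ID below are placeholders, not an official test set.
import json
import anthropic

client = anthropic.Anthropic()

MODEL = "claude-3-5-sonnet-latest"  # placeholder model ID
PROMPTS = [
    "Refactor this function to remove the duplicated branching: ...",
    "Explain what this regex matches and give three inputs it rejects: ...",
    # ... fill in ~20 prompts that showed degraded answers in Cody
]

results = []
for prompt in PROMPTS:
    response = client.messages.create(
        model=MODEL,
        max_tokens=1024,
        temperature=0.0,  # keep output as deterministic as possible for comparison
        messages=[{"role": "user", "content": prompt}],
    )
    results.append({"prompt": prompt, "api_answer": response.content[0].text})

# Save for side-by-side comparison with Cody's answers to the same prompts.
with open("api_sample.json", "w") as f:
    json.dump(results, f, indent=2)
```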