Cody: AI Code Assistant v1.93.1746816255 (pre-release)
I’m using vscode-remote, and I’m not sure if that’s causing the issue, but whenever I use Gemini 2.5 Pro Preview with the latest pre-release version (and the stable version), I often get incomplete, garbled responses, specifically from Gemini. It has not happened with Claude or OpenAI models.
I’ve been using Gemini 2.5 Pro Exp heavily for over a month; just today I got a really garbled response about a coding project taking up 600,000 tokens. That hasn’t happened before.
Same issue for me. I’ve been using it for a long time, and in the past couple of days the responses just seem to be cut off in the middle, and they are super short and incomplete.
I feel like truncation has been more of a problem in the last month. I don’t recall experiencing it before then, and my usage patterns have not changed much.
Hey @esafak, this could be related to an A/B test we ran with 50% of our users, giving them an enhanced context window. We rolled it back because there were anomalies in our metrics. If the metrics look good again in the future, we will consider rolling it out to more users gradually.
Yeah, the issue is that thinking tokens are counted together with output tokens, so if the model spends too many tokens on thinking, it eats into the output token limit and the visible response gets cut off.
This is unfortunate but the team is working on a solution.
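To illustrate the shared-budget behavior described above: this is a minimal sketch, assuming thinking tokens and visible output tokens both draw from a single maximum-output-token limit. The function name and the specific numbers are hypothetical, chosen only to show why heavy reasoning can leave almost nothing for the actual reply.

```python
# Hypothetical sketch: thinking tokens and visible output tokens share
# one output-token limit, so heavy reasoning truncates the visible reply.

def visible_token_budget(max_output_tokens: int, thinking_tokens: int) -> int:
    """Tokens left for the visible reply after thinking is deducted."""
    return max(0, max_output_tokens - thinking_tokens)

# With an 8,192-token output limit and 7,500 tokens spent thinking,
# only 692 tokens remain for the answer the user actually sees.
print(visible_token_budget(8192, 7500))  # 692
# If thinking alone exceeds the limit, nothing is left for the reply,
# which would appear as a cut-off or empty response.
print(visible_token_budget(8192, 9000))  # 0
```

This is the arithmetic behind the symptom in the thread: the responses aren’t being corrupted in transit, the model is simply running out of output budget mid-reply.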