Gemini 2.5 Pro Preview output token cutoff

The Gemini 2.5 Pro preview model that was added has a really short output limit. Is that something that can be fixed on your side?
Long script responses are being truncated every time.

Yes, this is expected: the output for this model is limited to 4K tokens. Please try prompting “continue” to request the next part of the response.

For reference, that model has a 1M input and 64K output token limit. Is Cody’s 4K limit advertised?

@esafak Yes, we have it documented here: Cody Input and Output Token Limits - Sourcegraph docs. The limit applies to all models, even those not listed.
This limit is evaluated based on daily user usage and the costs incurred relative to the number of messages available per day. You can configure Cody in VS Code to use your own API key for several vendors; you can read about this here: Installing Cody in VS Code - Sourcegraph docs. A sketch of such a configuration follows.
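Here is a minimal sketch of what that could look like in VS Code's settings.json, assuming the experimental `cody.dev.models` setting described in the docs linked above; the model identifier and API key are placeholders, and the exact field names may have changed, so verify against the current documentation before relying on this:

```jsonc
// settings.json (JSONC): point Cody at Gemini using your own API key.
// Assumes the experimental "cody.dev.models" setting; the field names and
// model identifier below are best-effort placeholders, not verified.
{
  "cody.dev.models": [
    {
      "provider": "google",               // model vendor
      "model": "gemini-2.5-pro-preview",  // assumed model identifier
      "inputTokens": 1000000,             // the model's 1M input limit
      "outputTokens": 64000,              // raise the output cap toward the model's 64K limit
      "apiKey": "<YOUR_GEMINI_API_KEY>"   // your own key
    }
  ]
}
```

Presumably this sidesteps the cost-motivated 4K cap, since usage is then billed against your own key rather than Sourcegraph's.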

I observe the same problem; I can’t have Gemini refactor files because of it. It works with Claude.

What works for me is to write “continue from the line: <PASTE_THE_LAST_THREE_LINES>”, keeping the leading whitespace intact (see the example below). Refactoring a very long function this way can still be tricky: if the function doesn’t fit in the context window at all, it will be cut off regardless. But that is usually a sign that the function should be shorter anyway.
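For instance, a follow-up prompt might look like the block below; the pasted lines are purely illustrative, and the point is that the indentation is preserved so the model resumes at exactly the right place:

```
continue from the line:
        total += item.price
        count += 1
    return total / count
```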

It’s a file, not a function. For example, I might ask it to refactor a function and then propagate the change to all of its call sites, which involves modifying whole files.

I think Cody should be smart enough to know when to loosen its self-imposed output token limit.