AI autocomplete model for Cody vs Cody Pro

The model for autocomplete is the same for Cody and Cody Pro, right? What model does it use by default?


That’s a good question… I installed Cody hoping to use Claude 3.5 Sonnet for completion. I’ve selected Claude 3.5 Sonnet in the dropdown of the Cody chat panel (JetBrains IDE), but I’m not sure whether that has any effect on what’s used for completion.


Based on this document, choosing an autocomplete model is an Enterprise-only feature. It also seems to be pretty limited:

Cody autocomplete works only with Anthropic’s Claude Instant model. Support for other models will be coming later.

Presumably there is more than one option, or the feature would be pointless, but I don’t know what they let Enterprise users customize.

In any case, autocomplete needs to be low-latency, so a large remote chat model isn’t likely to work well. There’s a wider set of model choices for other commands, like chat and edit.
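To make that latency point concrete, here is a minimal sketch of timing a completion request against an “instant feedback” budget. The endpoint, payload, and 300 ms budget are illustrative assumptions, not Cody internals:

```python
# Minimal latency check for a completion endpoint.
# The URL and LATENCY_BUDGET_MS are assumptions for illustration;
# they are not Cody's actual budget or API.
import time

import requests

LATENCY_BUDGET_MS = 300  # rough rule of thumb for "instant" UI feedback


def timed_completion(url: str, payload: dict) -> tuple[str, float]:
    """POST a completion request and return (body, latency in ms)."""
    start = time.perf_counter()
    response = requests.post(url, json=payload, timeout=5)
    elapsed_ms = (time.perf_counter() - start) * 1000
    return response.text, elapsed_ms


text, ms = timed_completion(
    "http://localhost:8000/v1/completions",  # hypothetical local server
    {"prompt": "def add(a, b):", "max_tokens": 16},
)
print(f"{ms:.0f} ms - {'within' if ms <= LATENCY_BUDGET_MS else 'over'} budget")
```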


LLMs used for code completion are normally trained specifically for the fill-in-the-middle (FIM) task, and they are usually models specialized for code. My understanding is that StarCoder(2), Codestral, and DeepSeek Coder are currently the best publicly available LLMs that support fill-in-the-middle. Further, as already mentioned, code completion needs low latency. For this reason, even ignoring the fact that it would be insanely expensive, models like Claude 3.5 Sonnet are normally not used for code completion. Continue also has an FAQ entry on whether you should use GPT-4 to get better completions.
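For anyone curious what fill-in-the-middle actually looks like, here is a minimal sketch using StarCoder via Hugging Face transformers. The `<fim_prefix>`/`<fim_suffix>`/`<fim_middle>` sentinel tokens come from the bigcode/starcoder release; everything else (the example snippet, generation settings) is an illustrative assumption, not how Cody invokes the model:

```python
# Minimal fill-in-the-middle (FIM) sketch with StarCoder.
# The sentinel tokens below are StarCoder's; other FIM-capable models
# (Codestral, DeepSeek Coder) use different prompt formats.
from transformers import AutoModelForCausalLM, AutoTokenizer

checkpoint = "bigcode/starcoder"  # assumes access to this checkpoint
tokenizer = AutoTokenizer.from_pretrained(checkpoint)
model = AutoModelForCausalLM.from_pretrained(checkpoint)

# The editor supplies the text before and after the cursor...
prefix = "def average(numbers):\n    total = "
suffix = "\n    return total / len(numbers)"

# ...and the model generates the span in between.
prompt = f"<fim_prefix>{prefix}<fim_suffix>{suffix}<fim_middle>"

inputs = tokenizer(prompt, return_tensors="pt")
outputs = model.generate(inputs.input_ids, max_new_tokens=16)

# Everything after the prompt tokens is the "middle" completion.
middle = outputs[0][inputs.input_ids.shape[1]:]
print(tokenizer.decode(middle, skip_special_tokens=True))
```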

In October 2023, Sourcegraph wrote in a blog post:

Going forward, Cody for community users will make use of a combination of proprietary LLMs from Anthropic and open source models like StarCoder (the CAR [completion acceptance rate] we report comes from using Cody with StarCoder).

I don’t know if this is still up-to-date.


Thanks for the info, folks! I pay for Pro and was wondering whether that would afford more options, but obviously it’s more complicated than that, as you say. It would be good if Sourcegraph were more explicit about which models are used in which circumstances.

Hey everyone,

Currently, we don’t offer the ability to select autocomplete models (outside of our local models through Ollama) for Cody Free or Pro users. The majority of users are on StarCoder for autocomplete, as it has shown the best balance between quality and latency. We regularly experiment with other models for autocomplete, so there is a chance you may fall into one of these experiments; they allow us to quantitatively select better models for autocomplete. However, I will take note of this request and raise it internally.
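For reference, the Ollama escape hatch mentioned above was an experimental VS Code setting at the time. A sketch of what that looked like in settings.json, with key names and the model tag taken from Sourcegraph’s experimental Ollama announcement (they may have changed since):

```json
{
  // Route Cody autocomplete to a local Ollama server instead of the
  // default remote model (experimental, VS Code only at the time).
  "cody.autocomplete.advanced.provider": "experimental-ollama",
  "cody.autocomplete.experimental.ollamaOptions": {
    "url": "http://localhost:11434",
    "model": "deepseek-coder:6.7b-base-q4_K_M"
  }
}
```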


It’s DeepSeek-V2 now; see the blog post “Improving Cody Autocomplete: Faster and Smarter”.
