Better completion please

I have been using GitHub Copilot, Cody, and Cursor to some extent. Among these, I prefer Cody over Cursor (I was using Cursor until recently). However, I believe that Cursor is the best tool for inline completion. It's hard to quantify this precisely, but my personal impression is that Cursor offers more accurate completions, a longer context, and a higher frequency of suggestions.

That said, I don’t want to use a forked version of VSCode, and I really like Cody’s UI/UX, so I would appreciate it if you could improve the inline completion feature somehow.

Additionally, as a feature request, sometimes the high frequency of suggestions can feel a bit overwhelming, so it would be great if users could adjust this frequency as needed. This is a feature that Cursor does not offer.


Not saying that it's impossible (I am personally hoping that Cody's autocomplete will improve too), but Cursor has some neat tricks and infrastructure that allow them to offer better code completions.

One of them is that they are indexing and remotely storing your whole codebase (at least as embeddings/vectors), so they can offer better-quality autocomplete suggestions with larger context awareness - because they can trigger processing on their remote index almost instantly.
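Neither vendor publishes their exact pipeline, but the general technique described here - embed code snippets once at indexing time, then rank them against an embedded query at completion time - can be sketched roughly like this. All names are illustrative, and the bag-of-words "embedding" is a toy stand-in for a real learned model:

```python
import math
import re
from collections import Counter

def embed(text: str) -> Counter:
    # Toy stand-in for a real embedding model: a bag-of-words vector.
    # Production systems use a learned neural embedding, but the
    # retrieval machinery around it looks the same.
    return Counter(re.findall(r"[a-z]+", text.lower()))

def cosine(a: Counter, b: Counter) -> float:
    dot = sum(a[t] * b[t] for t in a)
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

class CodebaseIndex:
    """Server-side index: snippets are embedded once, up front, so a
    completion request only has to embed the short query text and rank
    it against precomputed vectors - hence the near-instant lookups."""
    def __init__(self):
        self.entries: list[tuple[str, Counter]] = []

    def add(self, snippet: str) -> None:
        self.entries.append((snippet, embed(snippet)))

    def search(self, query: str, k: int = 2) -> list[str]:
        qv = embed(query)
        ranked = sorted(self.entries, key=lambda e: cosine(qv, e[1]),
                        reverse=True)
        return [snippet for snippet, _ in ranked[:k]]

index = CodebaseIndex()
index.add("def parse_config(path): ...")
index.add("def render_template(name, ctx): ...")
print(index.search("load the config file", k=1))
# → ['def parse_config(path): ...']
```

The point of the design is where the expensive work happens: embedding the whole codebase is done once on the server, so each keystroke only pays for embedding a few words of query.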

In Cody's case, your local code (actually, a small snippet of it) gets uploaded, so there is always added latency and extra processing each time.

Cursor has also merged with Supermaven - which previously offered fast completions and a 1M-token context window - so Cursor has an in-house completion model fine-tuned for this specific task.

As far as I know, Cody also has a fine-tuned completion model with some tweaks, but the underlying LLM is still StarCoder or something similar (it isn't trained from scratch specifically for Cody).

Also, Cursor's good autocomplete is the reason they put it behind the subscription paywall, whereas Cody offers unlimited autocompletions even on the free tier. So I guess if Cody does extra remote indexing and processing, and/or develops its own autocomplete model, there will (and should) be additional pricing.


Enough said!
Thank you for the information!

I'm wondering whether it is possible to get better results by installing the Cody extension in Cursor, so Cody can have the same kind of access as Cursor has. Wishful thinking, I guess, but maybe a collab between Cody & Cursor is not a bad idea…

The only thing I can do with Cursor is provide Anthropic (or OpenAI, or Google) API keys inside Cursor. If Cody could provide keys, then we could leverage them inside Cursor - maybe wishful thinking? I dunno… I'm just thinking there should be a better way.


Cursor has already "swallowed" some innovative competitors (like Supermaven), so I wouldn't like it to collab in any way with Cody.

The Cody extension itself would not have any access to Cursor's additional context, because Cursor indexes the user's codebase remotely while Cody's autocomplete primarily works locally first (unless they collab and Cursor gives access, but then this feature would only be available in Cursor; plus I doubt Cursor would gain anything from this, so this is daydreaming a bit).

About key providing - there is no general key for Cody's autocomplete, because Cody uses in-house fine-tuned open-source LLM models. Also, autocomplete/FIM (fill-in-the-middle) models are specialized and trained mostly for this task, so any generic OpenAI/Anthropic/Google API key would not work (or would work with lots of artifacts and performance degradation).


An additional note from my side - IMHO, Cody is doing excellent work with their autocomplete, taking into account the tools and resources available to them. If Cody works only with a local codebase index and leverages open-source LLM models (with some fine-tunes), then its speed and overall performance are hard to match for any other tool doing the same thing… Cody's autocomplete doesn't struggle in large files, where other tools take a long time to produce any generation or just throw a "red" status warning.

Also, the new auto-edit autocomplete feels smarter and faster than the standard autocomplete.


I also see potential in auto edit. Initially, I thought that auto edit was some sort of agent feature, so I kept using autocomplete. I only realized this today from a notification in the VSCode extension.

If what you’re saying is true, I don’t understand how it can deliver completions of Cursor’s quality using a different mechanism, but I’m looking forward to the beta release.

I still haven’t figured out exactly when auto edit kicks in, but there’s no doubt that the completion suggestions are excellent.