What is the context for autocomplete? Anthropic or StarCoder?
What is the size of the embedding? 7k? And how does that differ from the context window? For example, GPT-4 has a 128k context and Opus 200k, while ChatGPT's context is 32k.
Does this mean it will only take the last 7k tokens of my conversation as context, rather than 128k?
Another question: I only use Neovim. Is repository embedding available in Neovim?
Hi @luanlouzada
Autocomplete uses `fireworks/starcoder-hybrid` by default.
You can use Anthropic instead, but you must provide:

- `cody.autocomplete.advanced.serverEndpoint`
- `cody.autocomplete.advanced.accessToken`
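For reference, a minimal sketch of what that could look like in VS Code's settings.json; the endpoint URL and token below are placeholders, so substitute your own values:

```jsonc
{
  // Point autocomplete at an Anthropic-backed endpoint.
  // Both values below are placeholders, not real credentials.
  "cody.autocomplete.advanced.serverEndpoint": "https://example.sourcegraph.com",
  "cody.autocomplete.advanced.accessToken": "sgp_your-access-token"
}
```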
Right now Cody is limited to about ~~7k~~ 30k tokens, though we are actively working to increase that limit. Historically LLMs have had poor retrieval from large context windows, but we hear the feedback.
Not at this time. We only have a part-time maintainer for Neovim right now, and our focus is on VS Code and JetBrains this quarter, so I'm afraid I don't have an ETA for moving Neovim out of its experimental stage to get features like repository embeddings.