Cody Local Ollama: Autocomplete doesn't use Ollama (BUT IT CAN!)

Problem: Out of the box, Cody doesn't use Ollama offline for autocomplete, inline commands (Opt+K), or the Command Palette. But local autocomplete and inline commands do work if you fiddle with your JSON settings (see the fix below).

Note: To ensure you’re running against Ollama while testing, go offline.

This lack of automagic Ollama autocomplete support has downstream effects:

  1. Can’t use a local offline provider… offline. Yes, chat still works, but imagine you’re in the back of an Uber and fire up VSC. Your local chat is working, but… no autocomplete? The user isn’t going to understand why chat works offline but autocomplete doesn’t.
  2. If you start a new VSC session, Cody signs in (Pro users need connectivity, so this makes sense). But if you're offline when you start that session, Cody won't run at all because it can't sign in ("Connection Issues"). That means no local LLM at all, even with the fix below that enables local Ollama chat, autocomplete, and inline commands.

Fix to use autocomplete and inline commands right now!
To use Ollama offline for chat, autocomplete, and inline commands (Opt+K on Mac), you still have to add this to your settings (you manually specify which local model to use - I'm using llama3):

"cody.autocomplete.experimental.ollamaOptions": {
   "url": "http://localhost:11434",
     "model": "llama3"
},
"cody.experimental.ollamaChat": true,
"cody.autocomplete.advanced.provider": "experimental-ollama"