I am trying out the experimental Ollama autocomplete feature, but I'm having major issues.
Chat seems to work reasonably well for some models, although others just reply with “Hi I’m cody” without answering the prompt.
For autocomplete, nothing works: generic models complete natural language rather than code, and code models produce either nothing or garbage.
My test is to create a new file "fizzbuzz.py", type "def fizzbuzz(n):", and see what happens. So far not a single model has produced a reasonable completion.
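For reference, this is the kind of completion I'd expect any code model to produce from that prompt (a standard FizzBuzz; exact style will vary by model):

```python
def fizzbuzz(n):
    # Classic FizzBuzz: multiples of 3 -> "Fizz", of 5 -> "Buzz",
    # of both -> "FizzBuzz", otherwise the number itself.
    if n % 15 == 0:
        return "FizzBuzz"
    if n % 3 == 0:
        return "Fizz"
    if n % 5 == 0:
        return "Buzz"
    return str(n)
```

Even a rough variant of this would count as "reasonable" in my book.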
At one point I typed "if n" and it autocompleted "# path: fizzbuzz.py". Yeah, thanks.
So far I've tried:
- codellama: appears to output garbage from the system prompt
- starcoder2: no output
- deepseek-coder-v2: no output
- qwen2.5-coder: plain text completion
- codegemma: I can hear my GPU chirping away, but I never get a result
So I can get completions in principle; it's just that nothing sensible ever comes out.
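One way to narrow this down is to bypass the editor entirely and hit Ollama's `/api/generate` endpoint directly (assuming the default `localhost:11434` and that the model is already pulled; the model name here is just one from my list above):

```shell
# Ask the model to continue the snippet directly, bypassing the extension.
curl -s http://localhost:11434/api/generate -d '{
  "model": "qwen2.5-coder",
  "prompt": "def fizzbuzz(n):",
  "stream": false,
  "options": { "temperature": 0 }
}'
```

If the raw response contains a sensible continuation, the model itself is fine and the problem is presumably on the extension side, e.g. how it formats the prompt (code models often expect specific fill-in-the-middle tokens, which would explain garbage like the path comment above).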