Local LLMs support: Ollama and LM Studio

I saw the post today that Ollama support is now an experimental feature.
Could LM Studio be supported as well? LM Studio is an alternative to Ollama that lets users download LLMs directly from the Hugging Face Hub.
Link: https://lmstudio.ai/



Welcome @LatenPath to our support forum.

You’ve mentioned an intriguing use case. Given that both Ollama and LM Studio expose an OpenAI-compatible API, using LM Studio should be feasible as well.

You’ll need to change LM Studio’s local server port to match Ollama’s default, which is 11434. Make sure LM Studio’s server is running before starting Cody in VS Code.
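
If it helps, here’s a quick, untested sketch (Python standard library only) for checking that LM Studio’s OpenAI-compatible server is actually answering on Ollama’s port before launching Cody. The port 11434 and the /v1/chat/completions path are assumptions based on the setup described above and LM Studio’s OpenAI-style API; adjust the model name to whatever you have loaded.

```python
import json
import urllib.request

# Assumption: LM Studio's local server has been switched from its default
# port (1234) to Ollama's default port 11434, as described above.
BASE_URL = "http://localhost:11434"

# LM Studio serves an OpenAI-compatible chat completions endpoint.
payload = {
    "model": "local-model",  # placeholder; LM Studio answers with whichever model is loaded
    "messages": [{"role": "user", "content": "Say hello in one word."}],
    "max_tokens": 8,
}

request = urllib.request.Request(
    f"{BASE_URL}/v1/chat/completions",
    data=json.dumps(payload).encode("utf-8"),
    headers={"Content-Type": "application/json"},
    method="POST",
)

with urllib.request.urlopen(request, timeout=30) as response:
    body = json.load(response)
    print(body["choices"][0]["message"]["content"])
```

If that request succeeds, the server side of the setup is working and any remaining problems are on the Cody side.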

As I haven’t tested this setup, please note that this guide might not function correctly.

I have attempted to get this working myself, but it seems some kind of support would need to be added to Cody: on the LM Studio side you get “Unexpected endpoint or method. (GET /api/tags). Returning 200 anyway”, and on the Cody side you get “Cannot read properties of undefined (reading ‘usage’)”.
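
For what it’s worth, that first message suggests Cody is calling the endpoint Ollama uses to list models (GET /api/tags), which LM Studio doesn’t implement; LM Studio only serves the OpenAI-style routes such as GET /v1/models. Here’s a rough, untested sketch showing the mismatch; the paths are assumptions based on the error message and the two servers’ documented APIs.

```python
import json
import urllib.error
import urllib.request

BASE_URL = "http://localhost:11434"  # assumes LM Studio's server was moved to Ollama's port

def probe(path: str) -> None:
    """Print the HTTP status and the start of the raw body for a path, without raising."""
    try:
        with urllib.request.urlopen(f"{BASE_URL}{path}", timeout=10) as resp:
            print(path, resp.status, resp.read()[:200])
    except urllib.error.HTTPError as err:
        print(path, err.code, err.read()[:200])

# Ollama's model-listing endpoint, which the error message suggests Cody calls.
# LM Studio logs "Unexpected endpoint or method" but still returns 200, so Cody
# receives a body without the structure it expects and fails while parsing it.
probe("/api/tags")

# The OpenAI-style equivalent that LM Studio does serve.
probe("/v1/models")
```

If both probes respond, the difference in the two bodies is likely why Cody trips over the response shape later, so as you say, a real fix would need LM Studio-aware handling on the Cody side.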

Would be nice to have some support for LM Studio as well in the future.