VS Code: Continue – Configure self-hosted Ollama
By Eric Downing | Filed in AI, Editor, Editors, VS Code

Over the holiday break I decided to set up my own Ollama server on a Linux box I keep running for my own testing and as a media server. It was surprisingly simple to use the Ollama site to pick out a model and get a functioning LLM running. Then I turned to my VS Code editor: I wanted to see what the process was for hooking it up to the qwen3-coder LLM running on my server.
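For reference, the server-side setup reduces to a few Ollama CLI commands. This is a sketch, assuming the default API port (11434) and the same model name used below; the networked steps are shown as comments so you can run them on your own server:

```shell
#!/bin/sh
# Install Ollama via the official script (run on the Linux server):
#   curl -fsSL https://ollama.com/install.sh | sh
# Download the model weights:
#   ollama pull qwen3-coder
# To accept connections from other machines (like your editor), bind to
# all interfaces instead of the default loopback:
#   OLLAMA_HOST=0.0.0.0:11434 ollama serve

# Ollama's HTTP API defaults to port 11434:
PORT=11434
echo "Ollama API will listen on port ${PORT}"
```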
The file /home/<username>/.continue/config.yaml holds the configuration for any models you want to connect to your editor through the Continue extension. I added the model below to point at my local server.
```yaml
name: Local Config
version: 1.0.0
schema: v1
models:
  - name: qwen3-coder
    provider: ollama
    model: qwen3-coder
    apiBase: http://<your server IP or name>:11434
    roles:
      - chat
      - edit
      - apply
      - embed
      - autocomplete
```
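With the config saved, a quick way to confirm the editor's machine can actually reach the server is to hit the same apiBase with curl. The hostname here is a placeholder standing in for the server IP or name in the config above; the actual requests are shown as comments since they need a live server:

```shell
#!/bin/sh
# Placeholder: substitute your server's IP or hostname, as in apiBase above.
SERVER="ollama-server.local"
API_BASE="http://${SERVER}:11434"

# List the models the server has installed (Ollama's /api/tags endpoint):
#   curl "${API_BASE}/api/tags"
# Request a one-off completion via /api/generate:
#   curl "${API_BASE}/api/generate" \
#     -d '{"model": "qwen3-coder", "prompt": "hello", "stream": false}'
echo "apiBase: ${API_BASE}"
```

If /api/tags answers with JSON listing qwen3-coder, Continue should be able to use the endpoint.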
On the Mac, I had to make sure that the Security and Privacy settings allowed access to servers on the local network. Other than that, all the functionality was available, using my own server for privacy and without paying for any services.