I have been running a local LLM as a Gemini/ChatGPT replacement. Looking for a better coding experience, I turned to OpenCode – https://opencode.ai/ . The following config file lets OpenCode connect to my locally hosted server, keeping my data safe.
One issue I ran into when using OpenCode was the context size: I had to extend num_ctx for the Ollama model. When the context was too small, the agent was unable to open files and complained about not having access to the todo list.
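One way to raise the context window is to create a derived model with an Ollama Modelfile. This is a sketch, not a prescription: it assumes qwen3-coder:30b is already pulled, and 32768 is an example value you should tune to your hardware.

```
# Modelfile — derives a larger-context variant of qwen3-coder:30b.
# 32768 is an example num_ctx; pick what fits your VRAM/RAM.
FROM qwen3-coder:30b
PARAMETER num_ctx 32768
```

Build and register it with `ollama create qwen3-coder-32k -f Modelfile` (the name `qwen3-coder-32k` is arbitrary), then reference that model name in the models section of the config below.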
{
  "$schema": "https://opencode.ai/config.json",
  "provider": {
    "ollama": {
      "npm": "@ai-sdk/openai-compatible",
      "name": "Ollama (local)",
      "options": {
        "baseURL": "http://10.0.0.222:11434/v1"
      },
      "models": {
        "qwen3-coder:30b": {
          "name": "qwen3-coder:30b"
        }
      }
    }
  }
}


