Self-Hosted Models with Ollama
How to configure self-hosted models in a Curiosity Workspace
This guide walks you through setting up Ollama and connecting it to Curiosity so you can use self-hosted models seamlessly.
Install & Start Ollama
Download and install Ollama from the official site.
Start Ollama on your system.
In Ollama, open a new chat and select a model. Then start a chat; this triggers the model download automatically.
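If you want to confirm that Ollama is running and the model finished downloading, you can query its local REST API. A minimal sketch in Python, assuming Ollama's default port 11434 and the requests package:

```python
import requests

# List the models Ollama has downloaded so far.
# Assumes Ollama is running locally on its default port, 11434.
response = requests.get("http://localhost:11434/api/tags", timeout=5)
response.raise_for_status()

for model in response.json().get("models", []):
    print(model["name"])  # e.g. "deepseek-r1:8b"
```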
Configure Ollama Settings
Open Ollama’s settings.
Enable: Expose Ollama to the network. This makes Ollama accessible from Docker containers and other applications.
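To verify that a container can actually reach Ollama after enabling this setting, you can run a quick check from inside the container. A sketch assuming Docker Desktop, where host.docker.internal resolves to the host machine (on plain Linux Docker you may need to start the container with --add-host=host.docker.internal:host-gateway):

```python
import requests

# Run this inside the container that hosts Curiosity.
# host.docker.internal resolves to the host on Docker Desktop.
url = "http://host.docker.internal:11434/api/tags"
try:
    requests.get(url, timeout=5).raise_for_status()
    print("Ollama is reachable from this container.")
except requests.RequestException as exc:
    print(f"Ollama is not reachable: {exc}")
```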

Add an OpenAI Provider
In Curiosity (running inside Docker), add a new OpenAI-compatible AI provider with these settings:
Model
The model you selected in Ollama (e.g. deepseek-r1:8b)
API Key
ollama (Ollama ignores the key, but the field must not be empty)
Custom OpenAI Host
The URL where the container can reach Ollama, e.g. http://host.docker.internal:11434/v1 (Ollama's OpenAI-compatible endpoint on its default port)
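These are the same values any OpenAI-compatible client would use, so you can sanity-check them outside Curiosity first. A sketch using the official openai Python package against Ollama's OpenAI-compatible /v1 endpoint, with the example host and model from above:

```python
from openai import OpenAI

# Mirror the Curiosity provider settings: Ollama ignores the API key,
# but OpenAI-compatible clients require a non-empty value.
client = OpenAI(
    base_url="http://host.docker.internal:11434/v1",
    api_key="ollama",
)

reply = client.chat.completions.create(
    model="deepseek-r1:8b",  # the model you selected in Ollama
    messages=[{"role": "user", "content": "Say hello in one sentence."}],
)
print(reply.choices[0].message.content)
```

If you run this on the host rather than inside a container, replace host.docker.internal with localhost.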

Set Provider Priority
Either remove the other AI providers
Or set Ollama as the default provider
Alternatively, you can choose Ollama per chat session.

✅ That’s it! Curiosity (running inside Docker) should now be connected to your local Ollama instance, with your chosen model ready to use.
