Self-Hosted Models with Ollama

How to configure self-hosted models in a Curiosity Workspace

This is a preview feature. Contact us if you experience any issues!

This guide will walk you through setting up Ollama and connecting it to Curiosity so you can use self-hosted models seamlessly.

The Curiosity Workspace must be able to reach Ollama over the network. (Running Ollama in WSL on Windows is not currently supported.)

Install & Start Ollama

  • Download and install Ollama from the official site.

  • Start Ollama on your system.

  • In Ollama, open a new chat and select a model, then send a message. This first chat will trigger the model download automatically (you can confirm the download with the check below).
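If you want to confirm that the model finished downloading, a short sketch like the one below can query Ollama's local API. It assumes Ollama is listening on its default local port (11434); adjust if your setup differs.

```python
# Optional check: list the models that Ollama has downloaded locally.
# Assumes Ollama is running on its default local port (11434).
import json
import urllib.request

with urllib.request.urlopen("http://localhost:11434/api/tags") as resp:
    data = json.load(resp)

for model in data.get("models", []):
    print(model["name"])  # e.g. deepseek-r1:8b
```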

Configure Ollama Settings

  • Open Ollama’s settings.

  • Enable "Expose Ollama to the network". This makes Ollama accessible from Docker containers and other applications (you can verify this with the check below).
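To verify that Ollama is actually reachable over the network, you can run a quick check like the following from the container or machine that will connect to it. The hostname host.docker.internal (used from inside a Docker container) and the default port 11434 are assumptions; adjust them to your environment.

```python
# Optional check: confirm Ollama is reachable over the network.
# Run this from the container/machine that will connect to Ollama.
# host.docker.internal (from inside Docker) and port 11434 are assumptions
# based on Ollama's defaults; adjust them if your setup differs.
import urllib.request

with urllib.request.urlopen("http://host.docker.internal:11434/", timeout=5) as resp:
    print(resp.read().decode())  # expected: "Ollama is running"
```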

Add an OpenAI Provider

  • In Curiosity running inside Docker, add a new OpenAI-compatible AI provider with these settings:

  • Model: The model you selected in Ollama (e.g. deepseek-r1:8b)

  • API Key: ollama

💡 Note: Inside a Docker container, host.docker.internal points to the host machine’s localhost.
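If you want to sanity-check these provider settings outside of Curiosity, a minimal sketch like the one below calls Ollama's OpenAI-compatible endpoint directly. The base URL, port, and model name are assumptions based on Ollama's defaults; adjust them to match your setup.

```python
# Minimal sketch: call Ollama's OpenAI-compatible endpoint with the same
# settings the Curiosity provider uses. The base URL, port, and model name
# are assumptions (Ollama defaults); adjust them to your environment.
from openai import OpenAI

client = OpenAI(
    base_url="http://host.docker.internal:11434/v1",  # use http://localhost:11434/v1 outside Docker
    api_key="ollama",  # Ollama ignores the key, but the client requires a value
)

response = client.chat.completions.create(
    model="deepseek-r1:8b",  # the model you selected in Ollama
    messages=[{"role": "user", "content": "Say hello in one sentence."}],
)
print(response.choices[0].message.content)
```

A successful reply here means the same settings should work when entered as the provider configuration in Curiosity.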

Set Provider Priority

  • Either remove other AI providers.

  • Or set Ollama as the default provider.

  • Alternatively, you can choose Ollama per chat session.

✅ That’s it! Ollama is now running on your machine and connected to Curiosity, with your chosen model ready to use.
