Quickstart

Get up and running with SovereignAI and Ollama

This guide walks you through installing Ollama, downloading a model, and connecting SovereignAI to it. The whole process requires just a few commands.

1. Install Ollama

macOS

Download Ollama from ollama.com/download and drag it to your Applications folder. Alternatively, install with Homebrew:

brew install ollama

Linux

curl -fsSL https://ollama.com/install.sh | sh

The same install script works across distributions, including Ubuntu, Debian, and Fedora.

Windows

Download the installer from ollama.com/download and run it.
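Whichever platform you installed on, it's worth confirming the install before moving on. A minimal check (it only verifies that the ollama binary is on your PATH):

```shell
# Verify that the ollama binary is installed and on the PATH.
if command -v ollama >/dev/null 2>&1; then
  STATUS="ollama installed: $(ollama --version)"
else
  STATUS="ollama not found on PATH"
fi
echo "$STATUS"
```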

2. Pull the model and start the server

Open a terminal (or PowerShell on Windows) and download the Mistral Nemo 12B Instruct model:

ollama pull mistral-nemo:12b-instruct-2407-q4_K_M

This is roughly 7.5 GB and may take a while depending on your connection.
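Once the pull finishes, the model should show up in your local model list. A quick sketch (assumes ollama from step 1 is installed; the fallback message is printed if it is not):

```shell
# List locally downloaded models; the pulled model should appear in the output.
MODELS=$(ollama list 2>/dev/null || echo "ollama not available")
echo "$MODELS"
```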

Then start the server. Setting OLLAMA_HOST=0.0.0.0 makes Ollama listen on all network interfaces rather than only localhost, so your phone can connect:

OLLAMA_HOST=0.0.0.0 ollama serve

On Windows (PowerShell), set the variable first:

$env:OLLAMA_HOST="0.0.0.0"; ollama serve
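To confirm the server is up, you can probe Ollama's version endpoint from the same machine. A sketch using curl (the first branch succeeds only while ollama serve is running):

```shell
# Probe the local Ollama server; /api/version returns a small JSON payload.
VERSION=$(curl -fsS http://localhost:11434/api/version 2>/dev/null \
  || echo "server not reachable on port 11434")
echo "$VERSION"
```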

3. Find your server address

SovereignAI needs the local network IP of the machine running Ollama. Your iOS device and the machine must be on the same network.

To find your IP address:

macOS: ipconfig getifaddr en0 (use a different interface name if Wi-Fi is not en0)
Linux: hostname -I | awk '{print $1}'
Windows: ipconfig (look for the IPv4 Address of your active adapter)

Your server address will be http://<your-ip>:11434 (for example, http://192.168.1.42:11434).
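Before opening the app, you can confirm the server answers over the network. From any other machine on the same network, ask it for its model list (a sketch; replace the example IP with your own):

```shell
# Ask the server for its installed models; /api/tags returns them as JSON.
SERVER="http://192.168.1.42:11434"   # replace with your server address
TAGS=$(curl -fsS "$SERVER/api/tags" 2>/dev/null \
  || echo "no response; check the IP, the network, and that ollama serve is running")
echo "$TAGS"
```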

4. Configure SovereignAI

Open SovereignAI on your iPhone or iPad. If this is your first launch, the onboarding wizard will guide you. Otherwise, go to Settings > Server.

  1. Server URL — Enter your Ollama address (e.g. http://192.168.1.42:11434).
  2. Authentication — Select None. Ollama does not require authentication by default.
  3. Test Connection — Tap the test button. You should see a success message and the list of available models.
  4. Select Model — Choose mistral-nemo:12b-instruct-2407-q4_K_M from the list.

That's it. Start a new chat and send a message.
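If you want to see what a chat round-trip looks like outside the app, the same kind of request can be made with curl against Ollama's /api/chat endpoint (a sketch; replace the example IP with your server address — "stream": false returns one JSON object instead of a token stream):

```shell
# Send a single chat message to the model and print the JSON reply.
SERVER="http://192.168.1.42:11434"   # replace with your server address
REPLY=$(curl -fsS "$SERVER/api/chat" -d '{
  "model": "mistral-nemo:12b-instruct-2407-q4_K_M",
  "messages": [{"role": "user", "content": "Hello!"}],
  "stream": false
}' 2>/dev/null || echo "request failed; is the server running?")
echo "$REPLY"
```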

Troubleshooting

If Test Connection fails, make sure ollama serve is still running with OLLAMA_HOST=0.0.0.0, that your iOS device and the server are on the same network, and that no firewall is blocking port 11434.
