Get up and running with SovereignAI and Ollama
This guide walks you through installing Ollama, downloading a model, and connecting SovereignAI to it. The whole process requires just a few commands.
On macOS: Download Ollama from ollama.com/download and drag it to your Applications folder. Alternatively, install with Homebrew:

brew install ollama

On Linux: Install with the official script:

curl -fsSL https://ollama.com/install.sh | sh

On Windows: Download the installer from ollama.com/download and run it.
Open a terminal (or PowerShell on Windows) and download the Mistral Nemo 12B Instruct model:
ollama pull mistral-nemo:12b-instruct-2407-q4_K_M
This is roughly 7.5 GB and may take a while depending on your connection.
Then start the server so it accepts connections from your phone:
OLLAMA_HOST=0.0.0.0 ollama serve
On Windows (PowerShell), set the variable first:
$env:OLLAMA_HOST="0.0.0.0"; ollama serve
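If you want to confirm the server is answering before touching the app, Ollama's HTTP API exposes a version endpoint on port 11434. A minimal Python sketch using only the standard library (the helper name ollama_version is ours; GET /api/version is part of Ollama's API):

```python
import json
import urllib.request

def ollama_version(base_url: str, timeout: float = 3.0) -> str:
    """Return the Ollama server's version string, or raise if unreachable."""
    with urllib.request.urlopen(f"{base_url}/api/version", timeout=timeout) as resp:
        return json.load(resp)["version"]

# Example (replace with your machine's address):
# print(ollama_version("http://192.168.1.42:11434"))
```

If this raises a connection error, the server is not listening on that address and the app will not be able to reach it either.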
SovereignAI needs the local network IP of the machine running Ollama. Your iOS device and the machine must be on the same network.
To find your IP address:

- macOS: ipconfig getifaddr en0
- Linux: hostname -I
- Windows: run ipconfig and look for the IPv4 address

Your server address will be http://<your-ip>:11434 (for example, http://192.168.1.42:11434).
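If you are unsure whether the address you wrote down is in the right shape, here is a small sketch that normalizes a bare IP into the full address the app expects (the server_address helper is hypothetical; 11434 is Ollama's default port):

```python
from urllib.parse import urlparse

def server_address(host: str, port: int = 11434) -> str:
    """Build the http://<ip>:<port> address from a bare IP, hostname, or full URL."""
    if "://" not in host:
        host = f"http://{host}"
    parsed = urlparse(host)
    return f"{parsed.scheme}://{parsed.hostname}:{parsed.port or port}"

print(server_address("192.168.1.42"))  # prints http://192.168.1.42:11434
```

Either a bare IP or a full URL produces the same normalized result, so it does not matter which form you copied from the terminal.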
Open SovereignAI on your iPhone or iPad. If this is your first launch, the onboarding wizard will guide you. Otherwise, go to Settings > Server and enter your server address (for example, http://192.168.1.42:11434). That's it. Start a new chat and send a message.
If the app cannot connect, check the following:

- Make sure ollama serve is running with OLLAMA_HOST=0.0.0.0 and that you are using the correct IP address and port.
- Run ollama list to verify the model was downloaded. If not, run ollama pull mistral-nemo:12b-instruct-2407-q4_K_M again.
- Make sure port 11434 is not blocked by your firewall. On Fedora, you can open it with sudo firewall-cmd --add-port=11434/tcp.
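To tell a firewall problem apart from a server that simply is not running, a raw TCP probe is often quicker than the app's error message. A sketch using the standard library (port_open is our name, not part of Ollama):

```python
import socket

def port_open(host: str, port: int = 11434, timeout: float = 2.0) -> bool:
    """Return True if a TCP connection to host:port succeeds within the timeout."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False

# Example: port_open("192.168.1.42") — False usually means a firewall is
# blocking the port, or `ollama serve` is not running on that machine.
```

A True here with the app still failing points at the app-side address (typo, wrong scheme, wrong port) rather than the network.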