1. Install Ollama
macOS: brew install ollama
Windows / Linux: Visit ollama.com/download
2. Start the Ollama server
ollama serve
3. Pull a model
ollama pull llama3.2
4. Test connection
Click "🔌 Test Connection" above
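If the button reports a failure, you can check the server by hand. This is a minimal sketch that probes Ollama's `/api/tags` endpoint (the endpoint that lists locally pulled models) on the default port 11434; the function name and defaults are illustrative, not part of any particular tool.

```python
import urllib.request
import urllib.error

def ollama_is_up(base_url="http://localhost:11434", timeout=2.0):
    """Return True if an Ollama server answers at base_url."""
    try:
        # GET /api/tags lists the models pulled locally; any 200 means the
        # server is reachable.
        with urllib.request.urlopen(f"{base_url}/api/tags", timeout=timeout) as resp:
            return resp.status == 200
    except (urllib.error.URLError, OSError):
        # Connection refused / timeout: server not running or wrong address.
        return False
```

A `False` here usually means `ollama serve` is not running, or the app is pointed at the wrong host or port.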
Recommended models:
llama3.2 (3B) — Fast, great for summaries
llama3.1 (8B) — Balanced performance
mistral (7B) — Good all-round model
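Once a model is pulled, you can exercise it directly against Ollama's standard `/api/generate` REST endpoint. The sketch below builds a non-streaming request for one of the models above; the helper names are illustrative, and the timeout is an assumption (first generations can be slow while the model loads).

```python
import json
import urllib.request

def build_generate_payload(prompt, model="llama3.2"):
    """Request body for Ollama's /api/generate (non-streaming)."""
    return {"model": model, "prompt": prompt, "stream": False}

def generate(prompt, model="llama3.2", base_url="http://localhost:11434"):
    """Send one prompt to a local Ollama server and return the reply text."""
    body = json.dumps(build_generate_payload(prompt, model)).encode()
    req = urllib.request.Request(
        f"{base_url}/api/generate",
        data=body,
        headers={"Content-Type": "application/json"},
    )
    # Generous timeout: the model is loaded into memory on first use.
    with urllib.request.urlopen(req, timeout=120) as resp:
        return json.loads(resp.read())["response"]
```

For example, `generate("Summarize this in one line: ...")` returns the model's text once the server and model from steps 2 and 3 are in place.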