# Local LLMs with Ollama
VoidBox can use local models served by Ollama instead of the Anthropic API. The guest VM reaches Ollama through SLIRP networking — no API key required.
## 1. Prerequisites

- Install Ollama: ollama.com
- Pull a model: `ollama pull phi4-mini`
- Ensure Ollama is running: `ollama serve`
- Build the guest initramfs (see Architecture)
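Before launching a box, it can be useful to confirm that something is actually listening on Ollama's default port. A minimal connectivity probe in plain Rust, independent of VoidBox (the function name is illustrative):

```rust
use std::net::{SocketAddr, TcpStream};
use std::time::Duration;

/// Returns true if something accepts TCP connections on the default
/// Ollama port. This is purely a connectivity probe; it does not
/// verify that the listener really is Ollama.
fn ollama_reachable() -> bool {
    let addr: SocketAddr = "127.0.0.1:11434".parse().expect("valid address");
    TcpStream::connect_timeout(&addr, Duration::from_millis(500)).is_ok()
}

fn main() {
    if ollama_reachable() {
        println!("Ollama port is open; proceed with the example.");
    } else {
        println!("Nothing on 127.0.0.1:11434; run `ollama serve` first.");
    }
}
```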
## 2. How SLIRP networking works

The guest VM uses SLIRP usermode networking. The gateway IP 10.0.2.2 is transparently mapped to 127.0.0.1 on the host:

```
  Guest VM                          Host
┌───────────────┐              ┌──────────────┐
│ claude-code   │──SLIRP──────>│ Ollama:11434 │
│ (stream-json) │   10.0.2.2   │ (localhost)  │
└───────────────┘              └──────────────┘
```

Inside the guest, `ANTHROPIC_BASE_URL=http://10.0.2.2:11434` reaches the host's Ollama process.
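Because the SLIRP gateway address is fixed, the guest never needs to know the host's real IP. A small sketch of how the base URL is derived (the constants and function name are illustrative, not part of the VoidBox API):

```rust
/// Fixed gateway address that SLIRP maps to the host's 127.0.0.1.
const SLIRP_GATEWAY: &str = "10.0.2.2";
/// Default port Ollama listens on.
const OLLAMA_PORT: u16 = 11434;

/// Base URL the guest exports as ANTHROPIC_BASE_URL so that
/// claude-code's requests land on the host's Ollama process.
fn guest_base_url() -> String {
    format!("http://{}:{}", SLIRP_GATEWAY, OLLAMA_PORT)
}

fn main() {
    println!("ANTHROPIC_BASE_URL={}", guest_base_url());
}
```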
## 3. Code example

```rust
use void_box::agent_box::VoidBox;
use void_box::llm::LlmProvider;
use void_box::skill::Skill;

// Excerpt from the ollama_local example; runs inside an async fn
// that returns a Result.

// Pick the Ollama model from the environment, defaulting to phi4-mini.
let model = std::env::var("OLLAMA_MODEL")
    .unwrap_or_else(|_| "phi4-mini".into());

// Build a sandboxed agent backed by the local Ollama server.
let agent = VoidBox::new("ollama_demo")
    .llm(LlmProvider::ollama(&model))
    .skill(Skill::agent("claude-code"))
    .memory_mb(256)
    .prompt("Write a Python script that prints the first 10 Fibonacci numbers.")
    .build()?;

let result = agent.run(None).await?;
```
## 4. Running the example

```shell
OLLAMA_MODEL=phi4-mini \
VOID_BOX_KERNEL=/boot/vmlinuz-$(uname -r) \
VOID_BOX_INITRAMFS=/tmp/void-box-rootfs.cpio.gz \
cargo run --example ollama_local
```

## 5. Environment variables
- `OLLAMA_MODEL` — Ollama model name (e.g. `phi4-mini`, `qwen3-coder`)
- `VOID_BOX_KERNEL` — path to the host kernel image for KVM
- `VOID_BOX_INITRAMFS` — path to the guest initramfs built by `build_guest_image.sh`
Without `VOID_BOX_KERNEL` set, the example falls back to mock mode (no real VM).
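The fallback amounts to a simple environment check. The function below illustrates the selection logic only; it is not VoidBox's actual implementation:

```rust
use std::env;

/// Decide how the example runs: a real KVM micro-VM when a kernel
/// image path is configured, otherwise mock mode with no VM at all.
fn execution_mode(kernel: Option<&str>) -> &'static str {
    match kernel {
        Some(path) if !path.is_empty() => "kvm",
        _ => "mock",
    }
}

fn main() {
    let kernel = env::var("VOID_BOX_KERNEL").ok();
    println!("running in {} mode", execution_mode(kernel.as_deref()));
}
```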
## 6. Next
See Pipeline Composition to chain Ollama-backed boxes, or define specs with YAML.
