APPENDIX B // HARDWARE REFERENCE
Hardware Reference.
What you need under the hood. AIIP™ v1.0 is optimized for Apple Silicon. Local-inference performance scales with unified memory.
| Tier | Model | RAM | Storage | Notes |
|---|---|---|---|---|
| Recommended | Mac Studio M4 Max | 64 GB unified memory | 1 TB SSD min | Reference platform. Local Llama-70B-class viable. Headroom for sustained agent workloads + concurrent video work. |
| Recommended (laptop) | MacBook Pro M4 Pro / Max | 36–48 GB | 1 TB SSD | Mobile reference. Local 13B–34B class smooth. Slight tradeoff vs Studio on sustained loads under thermal envelope. |
| Minimum supported | MacBook Air M4 / Pro M3 | 24 GB | 512 GB SSD | Local 7B–13B class only. Lane 2 (Ollama) is constrained — Lanes 1/3/4 carry the bulk. |
| Constrained | M-series · 16 GB RAM | 16 GB | 256 GB+ | Functional but small models only. Lane 2 effectively reserved for tiny utility models. Treat Lanes 1/3 as primary. |
| Not supported (v1.0) | Intel Macs · Linux x86 · Windows | — | — | Out of scope for v1.0. Linux/x86 path documented for v2.0. Apple Silicon is the optimized reference platform. |
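The tiers above reduce to a memory-threshold lookup. A minimal sketch, assuming whole-GB inputs — the `aiip_tier` helper is illustrative, not part of AIIP itself (on a Mac, `sysctl hw.memsize` reports installed memory in bytes):

```python
def aiip_tier(unified_memory_gb: int) -> str:
    """Map installed unified memory (GB) to the AIIP v1.0 support tier.

    Thresholds mirror the table above; anything below 16 GB (or any
    non-Apple-Silicon machine) falls outside v1.0 support.
    """
    if unified_memory_gb >= 64:
        return "Recommended (desktop)"
    if unified_memory_gb >= 36:
        return "Recommended (laptop)"
    if unified_memory_gb >= 24:
        return "Minimum supported"
    if unified_memory_gb >= 16:
        return "Constrained"
    return "Not supported"
```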
Why Apple Silicon?
- Unified memory architecture. Local inference runs on the GPU without copying weights across PCIe — same RAM serves CPU, GPU, and Neural Engine. A 36 GB MacBook can run models that would require a $4k workstation tower elsewhere.
- Metal acceleration. Ollama, llama.cpp, MLX, and most modern local-inference runtimes are first-class on Apple Silicon. No CUDA dependency.
- Power profile. A MacBook-class machine runs agents overnight on battery, doesn't heat-throttle under sustained load, and keeps working through a power outage.
- Time Machine + iCloud Drive native. Operational resilience (Layer 06) ships in the OS — not bolted on.
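The unified-memory claim is back-of-envelope arithmetic: a model's weight footprint is roughly parameter count times bytes per parameter. A sketch (the function name is ours; KV cache and runtime overhead add to the real figure):

```python
def weights_gb(params_billion: float, bits_per_param: float) -> float:
    """Approximate size of model weights in GB.

    1B parameters at 8 bits/param is ~1 GB. Ignores KV cache,
    activations, and runtime overhead, so actual memory use is higher.
    """
    return params_billion * bits_per_param / 8

# A 70B model quantized to 4 bits is ~35 GB of weights: inside a
# 64 GB Mac Studio's unified memory, out of reach of a typical
# 24 GB discrete GPU without offloading.
```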
Storage Footprint
A typical AIIP™ install footprint: ~120 GB. Breakdown:
| Component | Size | Contents |
|---|---|---|
| Local LLM weights | 40–80 GB | Llama 3 8B + 70B-quant + DeepSeek + Qwen + Hermes |
| Dev toolchain | 8–15 GB | Xcode CLT, Node, Python, Rust, Homebrew packages |
| Vector memory + databases | 5–20 GB | LanceDB + DAG compaction history + Notion local cache |
| Agent runtimes + plugins | 2–5 GB | Claude Code, OpenCode, Codex CLI, OpenClaw, Aider, Hermes Agent |
| Working knowledge vaults | 5–15 GB | Obsidian PaperVault + iCloud-synced operational data |
| Headroom / scratch | 20+ GB | Build artifacts, container images, temporary downloads |
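The ~120 GB figure sits near the midpoint of the component ranges. A quick tally, treating the open-ended headroom entry as its 20 GB floor (the dictionary below just restates the breakdown above):

```python
# (low, high) bounds in GB for each component of the install footprint.
footprint_gb = {
    "Local LLM weights":          (40, 80),
    "Dev toolchain":              (8, 15),
    "Vector memory + databases":  (5, 20),
    "Agent runtimes + plugins":   (2, 5),
    "Working knowledge vaults":   (5, 15),
    "Headroom / scratch":         (20, 20),  # "20+ GB": floor only
}

low = sum(lo for lo, _ in footprint_gb.values())    # 80 GB
high = sum(hi for _, hi in footprint_gb.values())   # 155 GB
midpoint = (low + high) / 2                         # 117.5 GB, i.e. ~120 GB
```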