Your complete local AI environment. Run models, execute code in sandboxes, build Streamlit apps — all on your own hardware. Free, private, offline-capable.
Works with Ollama, LM Studio, and Docker. Available for macOS, Windows, and Linux.
Get started right now with pip. Full functionality — local AI inference, code execution, Streamlit apps, device pairing. Requires Python 3.10+.
pip install neura-runtime && neura-runtime start

Then run neura-runtime pair to link your device to your Neura account.
One-click install with system tray, auto-start, and zero configuration.
macOS 10.15+ (Catalina or later)
Universal (Intel + Apple Silicon)
Windows 10/11 (64-bit)
x86_64
Ubuntu 20.04+, Debian 11+, Fedora 38+
x86_64
Also available via pip install neura-runtime
Run Ollama and LM Studio models on your own hardware — Llama 3, Mistral, Phi, Qwen, and more. $0 cost, no token limits.
Execute Python in isolated Docker containers with strict sandboxing — network isolation, memory limits, read-only filesystem.
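Those three constraints map directly onto standard Docker CLI flags. A minimal sketch of how such an invocation could be assembled (the image name, memory limit, and PID cap are illustrative placeholders, not the Runtime's actual defaults):

```python
def sandbox_command(image: str, script: str,
                    memory: str = "512m") -> list[str]:
    """Build a `docker run` invocation with strict sandboxing.

    Flags shown are standard Docker CLI options; the specific
    values are placeholders, not Neura Runtime's real settings.
    """
    return [
        "docker", "run", "--rm",
        "--network", "none",      # network isolation
        "--memory", memory,       # hard memory limit
        "--read-only",            # read-only root filesystem
        "--pids-limit", "64",     # cap the number of processes
        image,
        "python", "-c", script,
    ]

cmd = sandbox_command("python:3.12-slim", "print('hello from the sandbox')")
```

The returned list can be handed to `subprocess.run(cmd)` on a machine with Docker installed.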
Agents generate and run interactive Streamlit dashboards locally. Permanent URLs, no cloud expiry, full GPU access.
Your data never leaves your device. No cloud processing, no data collection. Run fully offline.
Open Neura in your browser and your device appears automatically. Local models, Docker status — zero configuration.
Link your device to your Neura account with a 6-digit code. Secure WebSocket tunnel keeps your device connected.
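Short pairing codes like this are typically drawn from a cryptographically secure random source. A minimal sketch (not Neura's actual implementation):

```python
import secrets

def pairing_code() -> str:
    """Generate a 6-digit pairing code using a CSPRNG.

    Zero-padded, so codes such as 004217 are possible and
    every code has the same 1-in-1,000,000 chance.
    """
    return f"{secrets.randbelow(1_000_000):06d}"

code = pairing_code()
```

Using `secrets` rather than `random` matters here: the code briefly authorizes a device link, so it must be unpredictable.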
Run pip install neura-runtime in your terminal (or download the desktop app when available). One command, no configuration.
Install Ollama for local models (ollama pull llama3.2), Docker for code execution sandboxes. Both optional — use what you need.
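Because both dependencies are optional, a setup check only needs to see whether the binaries are on PATH. A sketch using the standard library (the tool names are the stock `ollama` and `docker` executables; this is not the Runtime's own detection logic):

```python
import shutil

def optional_deps() -> dict[str, bool]:
    """Report which optional backends are installed on this machine.

    shutil.which returns the executable's path if found, else None.
    """
    return {tool: shutil.which(tool) is not None
            for tool in ("ollama", "docker")}

status = optional_deps()
```

A missing entry simply means that feature (local models or sandboxed execution) is unavailable until the tool is installed.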
Open app.neura.ai — Runtime auto-connects. Your local models appear in the selector. Code runs in Docker on your machine. Streamlit apps get permanent local URLs. All at $0 cost.
| Requirement | Details |
| --- | --- |
| OS | macOS 10.15+, Windows 10+, or Ubuntu 20.04+ |
| RAM | 8 GB minimum (16 GB+ recommended for larger models) |
| Storage | ~45 MB for the Runtime, plus models (2-40 GB each) |
| GPU | Optional — NVIDIA (CUDA) or Apple Silicon (Metal) for faster inference |
| Docker | Optional — for the code execution sandbox and Streamlit apps |
| Python | 3.10+ (CLI install only) |
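The Python requirement in the table can be checked programmatically before a CLI install; a one-function sketch:

```python
import sys

def meets_python_requirement(minimum: tuple[int, int] = (3, 10)) -> bool:
    """True if the running interpreter satisfies the minimum version.

    Tuple comparison handles (major, minor) correctly,
    e.g. (3, 12) >= (3, 10).
    """
    return sys.version_info[:2] >= minimum
```

Running it under any Python 3.10+ interpreter returns True; older interpreters should install the desktop app instead.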
Free forever. Your hardware, your data, your models.