© 2026 Neura. All rights reserved.

Free Download

Neura Runtime

Your complete local AI environment. Run models, execute code in sandboxes, build Streamlit apps — all on your own hardware. Free, private, offline-capable.

Works with Ollama, LM Studio, and Docker. Available for macOS, Windows, and Linux.

Available Now

Install via CLI

Get started right now with pip. Full functionality — local AI inference, code execution, Streamlit apps, device pairing. Requires Python 3.10+.

pip install neura-runtime && neura-runtime start

Then run neura-runtime pair to link to your Neura account.

Desktop App

One-click install with system tray, auto-start, and zero configuration.

macOS

macOS 10.15+ (Catalina or later)

Universal (Intel + Apple Silicon)

Coming Soon

Windows

Windows 10/11 (64-bit)

x86_64

Coming Soon

Linux

Ubuntu 20.04+, Debian 11+, Fedora 38+

x86_64

Coming Soon

Also available via pip install neura-runtime

Everything runs on your device

Local AI Inference

Run Ollama and LM Studio models on your own hardware — Llama 3, Mistral, Phi, Qwen, and more. $0 cost, no token limits.
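Local inference goes through Ollama's standard REST API on port 11434. As a minimal sketch (not Neura's internal implementation), this builds a request against Ollama's default local endpoint; the model name `llama3.2` and prompt are illustrative:

```python
import json
import urllib.request

OLLAMA_URL = "http://localhost:11434/api/generate"  # Ollama's default local endpoint

def build_request(model: str, prompt: str) -> urllib.request.Request:
    # Non-streaming generation request, per Ollama's /api/generate schema
    payload = json.dumps({"model": model, "prompt": prompt, "stream": False}).encode()
    return urllib.request.Request(
        OLLAMA_URL, data=payload, headers={"Content-Type": "application/json"}
    )

req = build_request("llama3.2", "Explain local inference in one sentence.")
# To send it (requires a running Ollama server):
# with urllib.request.urlopen(req) as resp:
#     print(json.loads(resp.read())["response"])
```

Because everything stays on localhost, there is no per-token billing and no data leaves the machine.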

Code Execution Sandbox

Execute Python in isolated Docker containers with strict sandboxing — network isolation, memory limits, read-only filesystem.
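The isolation properties listed above map directly onto standard `docker run` flags. This sketch shows one plausible way to assemble such an invocation (a hypothetical illustration, not the actual Neura sandbox code):

```python
import subprocess

def sandbox_cmd(code: str) -> list[str]:
    # Build a docker run command with the isolation described above:
    # no network, a hard memory cap, and a read-only root filesystem.
    return [
        "docker", "run", "--rm",
        "--network", "none",    # network isolation
        "--memory", "256m",     # memory limit
        "--read-only",          # read-only filesystem
        "python:3.11-slim",
        "python", "-c", code,
    ]

cmd = sandbox_cmd("print(2 + 2)")
# To execute for real (requires Docker with the image pulled):
# result = subprocess.run(cmd, capture_output=True, text=True)
```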

Streamlit App Builder

Agents generate and run interactive Streamlit dashboards locally. Permanent URLs, no cloud expiry, full GPU access.
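To make this concrete, here is the shape of a dashboard an agent might generate — a hypothetical example, written to disk so it can be served with `streamlit run` (which requires `pip install streamlit`):

```python
from pathlib import Path

# Hypothetical app source an agent might emit; file name is illustrative.
app_source = '''\
import streamlit as st

st.title("Local GPU Dashboard")
st.line_chart({"utilization": [30, 55, 80, 65]})
'''

Path("dashboard.py").write_text(app_source)
# Serve it locally (permanent local URL, no cloud expiry):
#   streamlit run dashboard.py
```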

Completely Private

Your data never leaves your device. No cloud processing, no data collection. Run fully offline.

Auto-Connects

Open Neura in your browser and your device appears automatically. Local models, Docker status — zero configuration.

Secure Device Pairing

Link your device to your Neura account with a 6-digit code. Secure WebSocket tunnel keeps your device connected.

Get started in 3 steps

1

Install Neura Runtime

Run pip install neura-runtime in your terminal (or download the desktop app when available). One command, no configuration.

2

Set Up Your Environment

Install Ollama for local models (ollama pull llama3.2) and Docker for code execution sandboxes. Both are optional — use what you need.

3

Use in Neura

Open app.neura.ai — Runtime auto-connects. Your local models appear in the selector. Code runs in Docker on your machine. Streamlit apps get permanent local URLs. All at $0 cost.

System Requirements

OS: macOS 10.15+, Windows 10+, or Ubuntu 20.04+
RAM: 8 GB minimum (16 GB+ recommended for larger models)
Storage: ~45 MB for Runtime + model sizes (2-40 GB each)
GPU: Optional — NVIDIA (CUDA) or Apple Silicon (Metal) for faster inference
Docker: Optional — for code execution sandbox and Streamlit apps
Python: 3.10+ (CLI install only)

Ready to go local?

Free forever. Your hardware, your data, your models.

Get Started Free
Read the Docs