Orca

Container + Wasm orchestrator with AI ops
Fills the gap between Coolify and Kubernetes.



Orca is a single-binary orchestrator for teams that have outgrown one server but don't need Kubernetes. It runs containers and WebAssembly modules as first-class workloads, with built-in reverse proxy, auto-TLS, secrets management, health checks, and an AI operations assistant. Deploy with TOML configs that fit on one screen — no YAML empires.

Docker Compose ──> Coolify ──> Orca ──> Kubernetes
   (1 node)        (1 node)   (2-20)     (20-10k)

Quick Start

cargo install mallorca

# Option A: systemd (recommended — handles port binding automatically)
orca install-service
sudo systemctl start orca

# Option B: manual (requires setcap after each install/update)
sudo setcap 'cap_net_bind_service=+ep' $(which orca)
orca server --daemon

Add worker nodes:

# On the worker node:
orca install-service --leader <master-ip>:6880
sudo systemctl start orca-agent
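
To confirm the agent joined, `orca status` on the master can be used as a quick sanity check (assuming, as elsewhere in this README, that `status` reports cluster state):

```shell
# On the master, after the agent service starts:
orca status
```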

Create a service in services/web/service.toml and deploy:

[[service]]
name = "web"
image = "nginx:alpine"
replicas = 2
port = 80
domain = "example.com"
health = "/"

orca deploy && orca status

What's New in v0.2.3

  • Secrets in cluster.toml -- ${secrets.X} now resolves in ai.api_key, ai.endpoint, and network.setup_key, so cluster config is safe to commit (#22).
  • Per-service stats for remote nodes -- live CPU and memory for every container on every agent, streamed over the WS heartbeat (#13).
  • orca logs <svc> --summarize -- AI-summarised log digest with likely causes and next steps (#23).
  • Multi-arg CLI -- orca deploy svc1 svc2 svc3, orca redeploy svc1 svc2, orca stop svc1 svc2.
  • Shell completions -- orca completions <bash|zsh|fish|powershell>.
  • Find-up config resolution -- run orca from any subdirectory; it walks up to find cluster.toml and services/.
  • AMD ROCm GPU passthrough -- vendor = "amd" mounts /dev/kfd + /dev/dri with auto-detected video/render GIDs.
  • Remote placement fix -- placement.node = "<agent>" now resolves correctly over WS, so pinned services start without a master restart.
  • Proxy preserves Host header -- fixes apps that emit absolute URLs (e.g. LiteLLM /ui redirects).
  • orca webhooks add --secret --infra flags now wired through the CLI.
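
The multi-arg CLI and shell completions above combine into a typical workflow; a sketch, with service names as placeholders and the completion install path being one common choice:

```shell
# Deploy several services in one invocation (v0.2.3 multi-arg CLI)
orca deploy web api worker

# Redeploy or stop a subset the same way
orca redeploy web api
orca stop worker

# Install completions (bash shown; zsh, fish, and powershell are also supported)
orca completions bash > /etc/bash_completion.d/orca
```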

See CHANGELOG.md for the full history.

Features

Single Binary, Batteries Included

One static executable is the agent, control plane, CLI, and reverse proxy. scp it to a server and you have a production-ready orchestrator with auto-TLS, secrets, health checks, and Prometheus metrics.

Dual Runtime

Run Docker containers and WebAssembly modules side by side. Containers for existing images and databases (~3s cold start). Wasm for edge functions and API handlers (~5ms cold start, ~1-5MB memory).

Multi-Node Clustering

Raft consensus via openraft with embedded redb storage — no etcd. Bin-packing scheduler with GPU awareness. Nodes can span multiple cloud providers via NetBird WireGuard mesh.

Self-Healing

Watchdog restarts crashed containers in ~30s. Health checks with configurable thresholds. Stale route cleanup. Agent reconnection with exponential backoff. Services survive server restarts.

AI Operations

orca ask "why is the API slow?" — diagnoses issues using cluster context. Works with any OpenAI-compatible API (Ollama, LiteLLM, vLLM, OpenAI). Conversational alerts, config generation, and optional auto-remediation.
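
The `ai.endpoint`, `ai.api_key`, and `${secrets.X}` names below come from the v0.2.3 release notes; the secret names themselves are illustrative assumptions. A minimal `cluster.toml` sketch wiring the assistant to an OpenAI-compatible endpoint without committing credentials:

```toml
[ai]
endpoint = "${secrets.AI_ENDPOINT}"  # e.g. an Ollama or LiteLLM URL
api_key  = "${secrets.AI_API_KEY}"   # resolved from secrets, so this file is safe to commit
```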

Developer Experience

TOML config that fits on one screen. TUI dashboard with k9s-style navigation. Git push deploy via webhooks. One-click database creation. RBAC with admin/deployer/viewer roles.

Architecture

┌─────────────────────────────────────┐
│         CLI / TUI / API             │
└──────────────┬──────────────────────┘
               │
┌──────────────▼──────────────────────┐
│         Control Plane               │
│  Raft consensus (openraft + redb)   │
│  Scheduler (bin-packing + GPU)      │
│  API server (axum)                  │
│  Health checker + AI monitor        │
└──────────────┬──────────────────────┘
               │ WebSocket
    ┌──────────┼──────────┐
    ▼          ▼          ▼
┌────────┐ ┌────────┐ ┌────────┐
│ Node 1 │ │ Node 2 │ │ Node 3 │
│ Docker │ │ Docker │ │ Docker │
│ Wasm   │ │ Wasm   │ │ Wasm   │
│ Proxy  │ │ Proxy  │ │ Proxy  │
└────────┘ └────────┘ └────────┘

8 Rust crates | ~28k lines | ~130 tests | all files under 250 lines

Documentation

Full documentation is at mighty840.github.io/orca.

Contributing

We welcome contributions! See CONTRIBUTING.md for setup instructions and guidelines.

Key areas where help is wanted:

  • ACME/Let's Encrypt automation
  • Nixpacks integration for auto-detect builds
  • Service templates (WordPress, Supabase, etc.)
  • Preview environments (PR-based deploys)

License

AGPL-3.0. See LICENSE.
