GitOps for n8n. Version your workflows. Let AI agents create and manage automations.
n8n has no native workflow versioning. If your instance dies, your workflows die with it.
n8n-starter uses Git as source of truth — every workflow and credential is a file you can version, review, and restore. Two-way sync keeps your repo and n8n instance in lockstep.
```mermaid
graph LR
  subgraph Git Repository
    W[workflows/*.json]
    C[credentials/manifest.yml]
  end
  subgraph Docker
    I[n8n-init<br>import on boot] --> N[n8n]
    N -->|external hook| WS[watch-server]
    WK[n8n-worker] --> N
  end
  subgraph Infra
    PG[(PostgreSQL)]
    RD[(Redis)]
  end
  W --> I
  C --> I
  WS -->|write back| W
  N --> PG
  N --> RD
```
- **Boot** — `n8n-init` reads workflow JSON files and `credentials/manifest.yml` from the repo and imports them into n8n
- **Runtime** — every save in the n8n UI triggers an external hook that sends the workflow to `watch-server`, which writes it back to disk
- **Loop closes** — commit and push; your Git repo is always up to date
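Because every workflow in this loop is a plain JSON file on disk, you can lint the repo before committing; a minimal sketch, assuming `python3` is on `PATH`:

```shell
#!/bin/sh
# Check that every workflow file parses as valid JSON before committing
mkdir -p workflows
for f in workflows/*.json; do
  [ -e "$f" ] || continue                     # directory may be empty
  python3 -m json.tool "$f" > /dev/null || {
    echo "invalid JSON: $f" >&2
    exit 1
  }
done
echo "all workflow files parse"
```

A hook like this catches hand-edited files that would fail to import on the next boot.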
Any AI agent (Claude Code, GPT, Copilot, custom scripts) can create n8n workflows by writing a JSON file to workflows/ and committing. On next deploy, the init container picks it up automatically.
```bash
# Write a file → commit → deploy → done
echo '{ ... }' > workflows/my-automation.json
git add . && git commit -m "feat: add my automation"
docker compose up -d
```
- **Write a file** — drop a valid n8n workflow JSON into `workflows/` (subdirectories become n8n folders)
- **Commit and deploy** — `docker compose up -d` imports everything
- **Branch = environment** — `main` goes to production, feature branches to staging
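As a concrete sketch of step one, here is a minimal workflow file written from the shell. The file name and node are illustrative, and the `name`/`nodes`/`connections`/`settings` fields follow n8n's usual workflow export shape — export a workflow from your own instance for the authoritative schema:

```shell
#!/bin/sh
# Drop a minimal single-node workflow into workflows/ (hypothetical example)
mkdir -p workflows
cat > workflows/hello-world.json <<'EOF'
{
  "name": "Hello World",
  "nodes": [
    {
      "parameters": {},
      "name": "Manual Trigger",
      "type": "n8n-nodes-base.manualTrigger",
      "typeVersion": 1,
      "position": [250, 300]
    }
  ],
  "connections": {},
  "settings": {}
}
EOF
# then: git add workflows/hello-world.json && git commit -m "feat: add hello-world workflow"
# then: docker compose up -d   # the init container imports it on boot
```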
The `.claude/skills/n8n-skills/` directory contains the n8n-skills knowledge base (v2.2.0), giving AI agents full awareness of 545 n8n nodes and workflow patterns. Update it with `yarn skills:update`.
Both Claude Code and Gemini CLI are supported. Run the setup script to configure your preferred tooling:
```bash
./scripts/setup-ai.sh claude   # verify claude config (default, already in repo)
./scripts/setup-ai.sh gemini   # generate .gemini/ config + GEMINI.md
./scripts/setup-ai.sh all      # both
```

Gemini setup symlinks the existing skills and generates GEMINI.md from CLAUDE.md. Generated files are gitignored.
See CLAUDE.md (or GEMINI.md after setup) for detailed instructions on how AI agents should interact with this project.
```bash
cp .env.example .env
docker compose up -d
open http://localhost:12001
```

Default login: `admin@admin.local` / `password`
| Service | Port | Description |
|---|---|---|
| n8n | 12001 | n8n UI + API |
| n8n-worker | - | Queue-based workflow executor (scalable) |
| postgres | 12000 | Shared database |
| redis | - | Job queue (Bull) |
| watch-server | 3456 | Auto-export webhook receiver |
```bash
docker compose up -d --scale n8n-worker=5
```

Credentials are defined in `credentials/manifest.yml`. Two formats:
**Manual** — explicit env var mapping:

```yaml
credentials:
  - name: "My API"
    type: "httpHeaderAuth"
    env_mapping:
      name: "MY_HEADER_NAME"
      value: "MY_HEADER_VALUE"
```

**Auto-generated** — when you create a credential in the n8n UI, the watch-server fetches its schema and writes an `_autoCredentials` entry with `${ENV_VAR}` placeholders. Fill in the env vars in `.env` and they get bootstrapped on next startup.
Actual secret values live in `.env`, never in the manifest.
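For the manual mapping above, the matching `.env` entries might look like this (the values are purely illustrative):

```shell
# .env — referenced by credentials/manifest.yml, never committed
MY_HEADER_NAME=X-Api-Key
MY_HEADER_VALUE=s3cr3t-value
```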
```bash
docker compose -f docker-compose.prd.yml up -d
```

Differences from dev:
- No watch-server (import-only, no auto-export)
- Credentials mounted read-only
- Standard Docker volumes instead of local bind mounts
All env vars in `.env` are available to n8n services via `env_file`. See `.env.example` for available options.
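Inside a workflow, those variables can then be read with n8n's `$env` expression helper (available unless env access is disabled in your instance); a hypothetical node parameter:

```
{{ $env.MY_HEADER_VALUE }}
```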
Issues and PRs welcome. See CLAUDE.md for project conventions and context.
