An LLM-enabled, turn-based geopolitical simulation with a strict separation between true state and player-visible state. The runtime intentionally models incomplete information (confidence, fog, and biased signals) while the engine maintains a canonical world. The LLM layer is used for narrative synthesis and directive parsing, and is what makes this game possible.
This project was developed using X&Immersion's AI Assistant — a secure, private copilot that integrates directly into the studio workflow. X&Immersion builds tailored AI tools for game studios, matching the right open-source models to specific use cases rather than offering one-size-fits-all solutions. Their assistant supports code generation & debugging, internal design Q&A, game doc summarization, narrative assistance, worldbuilding, and NPC dialogue & roleplay.
The assistant was used throughout the entire development process — from architecture design and engine implementation to UI development, debugging, and iterative feature work. All code in this repository was written with the aid of the AI assistant.
We also used Neocortex — Virtual Assistants, a platform for deploying intelligent assistants that understand context and handle complex, multi-step interactions. Neocortex powered the game's interrogation and diplomacy flows, where context tracking across turns is critical. The integration is preserved in db/llm.ts (commented out); we switched to OpenAI for the deployed version because of available free credits.
We developed and tested extensively with Mistral AI during the hackathon. Mistral's fast inference and strong structured-output support (response_format: json_object) via their POST /v1/chat/completions endpoint made it our primary provider for narrative synthesis, directive parsing, and briefing generation. The Mistral integration is preserved in db/llm.ts (commented out) — we switched to OpenAI for deployment due to available free credits.
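The shape of those structured-output calls can be sketched as follows. This is an illustrative request builder, not the code preserved in db/llm.ts; the helper name and prompt text are hypothetical, while the endpoint URL and `response_format` field follow Mistral's documented chat-completions API:

```typescript
// Sketch of a Mistral chat-completions request with structured JSON output.
// `buildMistralRequest` and the prompt text are illustrative; the endpoint
// and `response_format` follow Mistral's documented API.
interface ChatMessage {
  role: "system" | "user";
  content: string;
}

function buildMistralRequest(directive: string, model = "mistral-small-latest") {
  return {
    url: "https://api.mistral.ai/v1/chat/completions",
    body: {
      model,
      messages: [
        { role: "system", content: "Reply with a single JSON object." },
        { role: "user", content: directive },
      ] as ChatMessage[],
      // Forces the model to emit valid JSON, which is then schema-validated.
      response_format: { type: "json_object" },
    },
  };
}

const req = buildMistralRequest("Open trade talks with the northern bloc.");
console.log(req.body.response_format.type); // "json_object"
```

Constraining the model to JSON at the API level, then validating the payload on our side, is what keeps the LLM layer from ever feeding malformed data into the engine.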
```bash
npm install
npm run db:setup
npm run dev
```

Open http://localhost:3000.

Reset state:
- UI: **Reset Simulation** on `/`
- API: `POST /api/game/reset`
Pure TypeScript. The engine owns canonical state and produces player-facing snapshots with noise.

- True state: `WorldState`
- Player view: `GameSnapshot` → `PlayerViewState` (confidence, partial visibility)
- Turn pipeline: briefing → events → actions → resolution → delayed consequences → drift → failure detection
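To make the true-state/player-view split concrete, here is a minimal sketch of how a noisy snapshot might be derived from canonical state. The field names (`stability`, `confidence`) and the noise model are illustrative, not the engine's actual types:

```typescript
// Illustrative sketch: derive a player-visible view from canonical state.
// Field names and the noise model are hypothetical, not the engine's real types.
interface TrueRegionState {
  stability: number; // canonical value, 0..100
}

interface PlayerRegionView {
  stability: number;  // noisy estimate shown to the player
  confidence: number; // how much the estimate can be trusted, 0..1
}

// Small deterministic PRNG (mulberry32) so fog is reproducible per seed.
function mulberry32(seed: number): () => number {
  return () => {
    seed = (seed + 0x6d2b79f5) | 0;
    let t = Math.imul(seed ^ (seed >>> 15), 1 | seed);
    t = (t + Math.imul(t ^ (t >>> 7), 61 | t)) ^ t;
    return ((t ^ (t >>> 14)) >>> 0) / 4294967296;
  };
}

function toPlayerView(
  truth: TrueRegionState,
  intelQuality: number, // 0..1; lower quality → wider noise, lower confidence
  seed: number
): PlayerRegionView {
  const rand = mulberry32(seed);
  const maxError = (1 - intelQuality) * 20;
  const noise = (rand() * 2 - 1) * maxError;
  return {
    stability: Math.min(100, Math.max(0, truth.stability + noise)),
    confidence: intelQuality,
  };
}
```

Because the noise is seeded, the same world seed always produces the same fog, which keeps the determinism guarantee intact.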
Key entrypoints:

- `engine/createNewGameWorld(seed)`
- `engine/submitTurnAndAdvance(gameId, world, actions)`
The database stores complete, immutable per-turn state, plus the latest player snapshot for fast UI hydration.
- Schema: `db/prisma/schema.prisma`
- Tables:
  - `Game`: latest `worldState` + `lastPlayerSnapshot`
  - `TurnLog`: full before/after world state, actions, consequences, artifacts
UI strictly renders player snapshots and never reads true state directly.
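The persistence shape above can be sketched as an in-memory model: an append-only turn log plus a cached latest snapshot for fast hydration. The types here are illustrative; the real schema lives in `db/prisma/schema.prisma`:

```typescript
// Sketch of the persistence shape: append-only TurnLog entries plus a cached
// latest snapshot on Game. Types are illustrative, not the real Prisma models.
interface TurnLogEntry {
  turn: number;
  worldBefore: unknown; // full canonical state before resolution
  worldAfter: unknown;  // full canonical state after resolution
  actions: unknown[];
}

interface GameRecord {
  worldState: unknown;         // latest canonical state
  lastPlayerSnapshot: unknown; // latest noisy player view, for fast UI hydration
  turnLogs: TurnLogEntry[];    // immutable history, one entry per turn
}

function recordTurn(game: GameRecord, entry: TurnLogEntry, snapshot: unknown): GameRecord {
  // Never mutate past logs: append only, return a fresh record.
  return {
    worldState: entry.worldAfter,
    lastPlayerSnapshot: snapshot,
    turnLogs: [...game.turnLogs, entry],
  };
}
```

Keeping full before/after state per turn is what makes the debug export and post-hoc balancing possible without replaying the simulation.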
Routes:

- `/start`: country profile
- `/game`: control room
- `/resolution`: after-action memo
- `/failure`: failure state
- Incoming events: `engine/events.ts`
- Briefing tone & structure: `engine/briefing.ts`
- Action effects + war logic: `engine/resolve.ts`, `engine/drift.ts`
- Failure thresholds: `engine/failure.ts`
```bash
ENABLE_DEBUG_EXPORT=true npm run dev
```

Then call:

```
GET /api/game/debug/export?gameId=...
```

Returns the canonical world state plus the complete turn history for diagnostics and balancing.
```bash
npm test
```

Includes a determinism smoke test that guarantees the same seed + the same actions → identical outcomes.
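The determinism guarantee can be smoke-tested along these lines. `runSimulation` here is a stand-in for the real engine entrypoints (`createNewGameWorld` + `submitTurnAndAdvance`), and the toy world logic is purely illustrative:

```typescript
// Sketch of a determinism smoke test. `runSimulation` stands in for the real
// engine; all randomness flows through one seeded PRNG (a simple LCG).
function lcg(seed: number): () => number {
  let s = seed >>> 0;
  return () => ((s = (1664525 * s + 1013904223) >>> 0) / 2 ** 32);
}

function runSimulation(seed: number, actions: string[]): number[] {
  const rand = lcg(seed);
  // Toy "world": each action perturbs a value using only the seeded PRNG.
  return actions.map((a) => a.length * rand());
}

// Same seed + same actions must yield byte-identical outcomes.
const a = runSimulation(7, ["mobilize", "negotiate"]);
const b = runSimulation(7, ["mobilize", "negotiate"]);
console.assert(JSON.stringify(a) === JSON.stringify(b), "determinism violated");
```

The property only holds if the engine never reads `Math.random()`, wall-clock time, or iteration order of unordered collections; every source of variation must be funneled through the seed.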
The LLM layer is intentionally thin and bounded. It generates narrative and translates directives; it does not mutate world state directly.
- Neocortex (`NEOCORTEX_API_KEY`) — Virtual Assistants, used during development (commented out)
- Mistral (`MISTRAL_API_KEY`) — hackathon primary provider (commented out)
- OpenAI (`OPENAI_API_KEY`) — current default (free credits)
- Gemini (`GEMINI_API_KEY`) — fallback
Neocortex Virtual Assistants were used during development for deploying intelligent assistants that understand context and handle complex, multi-step interactions. To re-enable:
```bash
export NEOCORTEX_API_KEY="YOUR_KEY"
export NEOCORTEX_ENDPOINT="https://api.neocortex.ai/v1"
# optional
export NEOCORTEX_MODEL="neocortex-va-v1"
```

Then uncomment the Neocortex blocks in `db/llm.ts`. We switched to OpenAI for deployment because we had free API credits.
Mistral was our primary LLM provider during the hackathon for all narrative synthesis and directive parsing. To re-enable:
```bash
export MISTRAL_API_KEY="YOUR_KEY"
# optional
export MISTRAL_MODEL="mistral-small-latest"
```

Then uncomment the Mistral blocks in `db/llm.ts`. We switched to OpenAI for deployment because we had free API credits.
```bash
export OPENAI_API_KEY="YOUR_KEY"
# optional
export OPENAI_MODEL="gpt-4.1-mini"
npm run dev
```

Gemini fallback:

```bash
export GEMINI_API_KEY="YOUR_KEY"
# optional
export GEMINI_MODEL="gemini-2.5-flash-lite"
npm run dev
```

All LLM calls live in `db/llm.ts` and are structured as strict JSON with explicit schema validation in `db/llmSchemas.ts`.
Main flows:
- Directive parsing → structured actions (validated/clamped)
- Briefing/event rewrites → tone + grounding only
- Resolution memo → short narrative synthesis
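The validate-and-clamp step on parsed directives can be sketched as below. The action shape, allowed types, and bounds are illustrative; the real schemas live in `db/llmSchemas.ts`:

```typescript
// Illustrative validate/clamp pass over LLM output before it reaches the
// engine. Action shape and bounds are hypothetical; real schemas are in
// db/llmSchemas.ts.
const ALLOWED_TYPES = ["diplomacy", "military", "economy"] as const;
type ActionType = (typeof ALLOWED_TYPES)[number];

interface ParsedAction {
  type: ActionType;
  intensity: number; // clamped to 0..10
  target: string;
}

function validateActions(raw: unknown): ParsedAction[] {
  if (!Array.isArray(raw)) return []; // malformed JSON → no actions, never a crash
  const out: ParsedAction[] = [];
  for (const item of raw) {
    if (typeof item !== "object" || item === null) continue;
    const { type, intensity, target } = item as Record<string, unknown>;
    if (!ALLOWED_TYPES.includes(type as ActionType)) continue; // drop unknown types
    if (typeof target !== "string" || target.length === 0) continue;
    const n = typeof intensity === "number" && Number.isFinite(intensity) ? intensity : 0;
    out.push({
      type: type as ActionType,
      intensity: Math.min(10, Math.max(0, n)), // clamp, don't reject
      target,
    });
  }
  return out;
}
```

Clamping rather than rejecting keeps the turn pipeline moving even when the model over-asserts; the engine never trusts raw LLM output.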
IMPORTANT: never hardcode keys or commit secrets.



