
The Unscripted World Order (MVP)

An LLM-enabled, turn-based geopolitical simulation with a strict separation between true state and player-visible state. The runtime intentionally models incomplete information (confidence, fog, and biased signals) while the engine maintains a canonical world. The LLM layer handles narrative synthesis and directive parsing; it is what makes the game's free-form directives and briefings possible.
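To illustrate the true-state / player-view split described above, here is a minimal TypeScript sketch. All type and field names (`TrueNationState`, `militaryStrength`, etc.) are hypothetical and do not reflect the engine's actual types; the real ones live in engine/.

```typescript
// Hypothetical sketch of the canonical-state vs. noisy-snapshot split.
interface TrueNationState {
  name: string;
  militaryStrength: number; // canonical value, never shown to the player
}

interface ObservedNationState {
  name: string;
  militaryStrength: number; // noisy estimate
  confidence: number;       // 0..1, how much the estimate can be trusted
}

// Deterministic pseudo-noise from a seed, so snapshots stay reproducible.
function seededNoise(seed: number): number {
  const x = Math.sin(seed) * 10000;
  return x - Math.floor(x); // fractional part, in 0..1
}

function observe(truth: TrueNationState, seed: number): ObservedNationState {
  const confidence = 0.5 + seededNoise(seed) * 0.5; // 0.5..1.0
  const error = (1 - confidence) * 0.2;             // lower confidence, more distortion
  const sign = seededNoise(seed + 1) < 0.5 ? -1 : 1;
  return {
    name: truth.name,
    militaryStrength: truth.militaryStrength * (1 + sign * error),
    confidence,
  };
}

const view = observe({ name: "Arcadia", militaryStrength: 100 }, 42);
```

The key design point: the player never sees `TrueNationState`, only a derived, distorted view plus a confidence score.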

Built with X&Immersion's AI Assistant

This project was developed using X&Immersion's AI Assistant — a secure, private copilot that integrates directly into the studio workflow. X&Immersion builds tailored AI tools for game studios, matching the right open-source models to specific use cases rather than offering one-size-fits-all solutions. Their assistant supports code generation & debugging, internal design Q&A, game doc summarization, narrative assistance, worldbuilding, and NPC dialogue & roleplay.

The assistant was used throughout the entire development process — from architecture design and engine implementation to UI development, debugging, and iterative feature work. All code in this repository was written with the aid of the AI assistant.

Built with Neocortex — Virtual Assistants

We also used Neocortex — Virtual Assistants for deploying intelligent assistants that understand context and handle complex, multi-step interactions. Neocortex was used for the game's interrogation and diplomacy flows, where context tracking across turns is critical. The integration is preserved in db/llm.ts (commented out) — we switched to OpenAI for the deployed version due to available free credits.

Built with Mistral AI

We developed and tested extensively with Mistral AI during the hackathon. Mistral's fast inference and strong structured-output support (response_format: json_object) via their POST /v1/chat/completions endpoint made it our primary provider for narrative synthesis, directive parsing, and briefing generation. The Mistral integration is preserved in db/llm.ts (commented out) — we switched to OpenAI for deployment due to available free credits.


Local dev (Node 20+)

npm install
npm run db:setup
npm run dev

Open http://localhost:3000.

Reset state:

  • UI: Reset Simulation on /
  • API: POST /api/game/reset

System architecture (engine → persistence → UI)

1) Simulation engine (engine/)

Pure TypeScript. The engine owns canonical state and produces player-facing snapshots with noise.

  • True state: WorldState
  • Player view: GameSnapshotPlayerViewState (confidence, partial visibility)
  • Turn pipeline:
    • briefing → events → actions → resolution → delayed consequences → drift → failure detection

Key entrypoints:

  • engine/createNewGameWorld(seed)
  • engine/submitTurnAndAdvance(gameId, world, actions)
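The shapes below sketch how those two entrypoints might look; the actual engine types differ, and the toy `WorldState`, `Action`, and stability arithmetic here are purely illustrative.

```typescript
// Illustrative entrypoint signatures; the real ones live in engine/.
interface WorldState {
  seed: number;
  turn: number;
  stability: number;
}

interface Action {
  kind: "invest" | "sanction";
  magnitude: number;
}

function createNewGameWorld(seed: number): WorldState {
  return { seed, turn: 0, stability: 50 };
}

// Pure function over (world, actions): the same inputs always yield the
// same output, which is what the determinism smoke test relies on.
// gameId is accepted but unused in this toy version.
function submitTurnAndAdvance(gameId: string, world: WorldState, actions: Action[]): WorldState {
  const delta = actions.reduce(
    (sum, a) => sum + (a.kind === "invest" ? a.magnitude : -a.magnitude),
    0,
  );
  return { ...world, turn: world.turn + 1, stability: world.stability + delta };
}

const w0 = createNewGameWorld(7);
const w1 = submitTurnAndAdvance("g1", w0, [{ kind: "invest", magnitude: 3 }]);
```

Note that the input world is not mutated; the engine returns a fresh state object per turn, which keeps before/after logging trivial.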

2) Persistence (db/, SQLite via Prisma)

The database stores complete, immutable per-turn state, plus the latest player snapshot for fast UI hydration.

  • Schema: db/prisma/schema.prisma
  • Tables:
    • Game: latest worldState + lastPlayerSnapshot
    • TurnLog: full before/after world state, actions, consequences, artifacts
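An in-memory sketch of that persistence shape follows. Column names here are guesses for illustration; the authoritative definitions are in db/prisma/schema.prisma.

```typescript
// Illustrative mirror of the Game and TurnLog tables.
interface GameRow {
  id: string;
  worldState: string;          // serialized canonical WorldState
  lastPlayerSnapshot: string;  // serialized player view, for fast UI hydration
}

interface TurnLogRow {
  gameId: string;
  turn: number;
  worldBefore: string;
  worldAfter: string;
  actions: string;
}

const games = new Map<string, GameRow>();
const turnLogs: TurnLogRow[] = [];

// Record a completed turn: append an immutable log row, then update the
// game's latest state. TurnLog rows are never edited after the fact.
function recordTurn(
  gameId: string,
  turn: number,
  before: object,
  after: object,
  actions: object[],
  snapshot: object,
): void {
  turnLogs.push({
    gameId,
    turn,
    worldBefore: JSON.stringify(before),
    worldAfter: JSON.stringify(after),
    actions: JSON.stringify(actions),
  });
  games.set(gameId, {
    id: gameId,
    worldState: JSON.stringify(after),
    lastPlayerSnapshot: JSON.stringify(snapshot),
  });
}

recordTurn("g1", 1, { turn: 0 }, { turn: 1 }, [], { fog: true });
```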

3) UI (Next.js App Router)

The UI renders player snapshots only and never reads true state directly.

Routes:

  • / (start)
  • /country (profile)
  • /game (control room)
  • /resolution (after-action memo)
  • /failure (failure state)

Core content surfaces

  • Incoming events: engine/events.ts
  • Briefing tone & structure: engine/briefing.ts
  • Action effects + war logic: engine/resolve.ts, engine/drift.ts
  • Failure thresholds: engine/failure.ts

Debug: export full true state (server-only)

ENABLE_DEBUG_EXPORT=true npm run dev

Then call:

  • GET /api/game/debug/export?gameId=...

Returns canonical world state + complete turn history for diagnostics/balancing.
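A minimal client sketch for that endpoint, assuming the dev server on localhost:3000 and `ENABLE_DEBUG_EXPORT=true`:

```typescript
// Build the debug export URL for a given game.
function debugExportUrl(gameId: string, base = "http://localhost:3000"): string {
  const url = new URL("/api/game/debug/export", base);
  url.searchParams.set("gameId", gameId);
  return url.toString();
}

// Fetch the canonical world state + turn history (Node 18+ has global fetch).
async function fetchDebugExport(gameId: string): Promise<unknown> {
  const res = await fetch(debugExportUrl(gameId));
  if (!res.ok) throw new Error(`export failed: ${res.status}`);
  return res.json();
}
```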

Tests

npm test

Includes a determinism smoke test to guarantee that the same seed and the same actions always produce identical outcomes.
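The essence of such a smoke test can be sketched as follows. `advance` here is a stand-in for the engine's turn pipeline, and `mulberry32` is just one common seeded PRNG; neither reflects the engine's actual internals.

```typescript
// A small seeded PRNG so "randomness" is reproducible from a seed.
function mulberry32(seed: number): () => number {
  let a = seed >>> 0;
  return () => {
    a = (a + 0x6d2b79f5) >>> 0;
    let t = Math.imul(a ^ (a >>> 15), 1 | a);
    t = (t + Math.imul(t ^ (t >>> 7), 61 | t)) ^ t;
    return ((t ^ (t >>> 14)) >>> 0) / 4294967296;
  };
}

// Stand-in for the turn pipeline: consumes seed + actions, emits outcomes.
function advance(seed: number, actions: string[]): number[] {
  const rng = mulberry32(seed);
  return actions.map((a) => a.length + rng());
}

// Run twice with identical inputs and compare serialized results.
const runA = JSON.stringify(advance(123, ["invest", "sanction"]));
const runB = JSON.stringify(advance(123, ["invest", "sanction"]));
```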

LLM subsystem (server-only, optional)

The LLM layer is intentionally thin and bounded. It generates narrative and translates directives; it does not mutate world state directly.

Provider selection (priority order)

  1. Neocortex (NEOCORTEX_API_KEY) — Virtual Assistants, used during development (commented out)
  2. Mistral (MISTRAL_API_KEY) — hackathon primary provider (commented out)
  3. OpenAI (OPENAI_API_KEY) — current default (free credits)
  4. Gemini (GEMINI_API_KEY) — fallback
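A sketch of that priority chain (the real logic is in db/llm.ts, where the Neocortex and Mistral branches are currently commented out):

```typescript
type Provider = "neocortex" | "mistral" | "openai" | "gemini" | null;

// First configured key wins, in the documented priority order.
function selectProvider(env: Record<string, string | undefined>): Provider {
  if (env.NEOCORTEX_API_KEY) return "neocortex";
  if (env.MISTRAL_API_KEY) return "mistral";
  if (env.OPENAI_API_KEY) return "openai";
  if (env.GEMINI_API_KEY) return "gemini";
  return null; // no key configured: the LLM layer stays disabled
}

const chosen = selectProvider({ OPENAI_API_KEY: "sk-test", GEMINI_API_KEY: "g" });
```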

Neocortex — Virtual Assistants (development, commented out)

Neocortex Virtual Assistants were used during development for deploying intelligent assistants that understand context and handle complex, multi-step interactions. To re-enable:

export NEOCORTEX_API_KEY="YOUR_KEY"
export NEOCORTEX_ENDPOINT="https://api.neocortex.ai/v1"
# optional
export NEOCORTEX_MODEL="neocortex-va-v1"

Then uncomment the Neocortex blocks in db/llm.ts. We switched to OpenAI for deployment because we had free API credits.

Mistral AI (hackathon primary, commented out)

Mistral was our primary LLM provider during the hackathon for all narrative synthesis and directive parsing. To re-enable:

export MISTRAL_API_KEY="YOUR_KEY"
# optional
export MISTRAL_MODEL="mistral-small-latest"

Then uncomment the Mistral blocks in db/llm.ts. We switched to OpenAI for deployment because we had free API credits.

OpenAI (current default)

export OPENAI_API_KEY="YOUR_KEY"
# optional
export OPENAI_MODEL="gpt-4.1-mini"
npm run dev

Gemini (fallback)

export GEMINI_API_KEY="YOUR_KEY"
# optional
export GEMINI_MODEL="gemini-2.5-flash-lite"
npm run dev

LLM usage in this codebase

All LLM calls live in db/llm.ts and are structured as strict JSON with explicit schema validation in db/llmSchemas.ts.

Main flows:

  • Directive parsing → structured actions (validated/clamped)
  • Briefing/event rewrites → tone + grounding only
  • Resolution memo → short narrative synthesis
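The validate-and-clamp step on parsed directives can be sketched like this. The field names (`kind`, `intensity`) and the 0..100 range are hypothetical; the real schemas live in db/llmSchemas.ts.

```typescript
interface ParsedAction {
  kind: string;
  intensity: number; // clamped to 0..100 before reaching the engine
}

function clamp(n: number, lo: number, hi: number): number {
  return Math.min(hi, Math.max(lo, n));
}

// Reject structurally invalid LLM output; clamp numeric fields so a
// hallucinated value can never push the simulation out of bounds.
function validateAction(raw: unknown): ParsedAction {
  if (typeof raw !== "object" || raw === null) throw new Error("not an object");
  const obj = raw as Record<string, unknown>;
  if (typeof obj.kind !== "string") throw new Error("missing kind");
  if (typeof obj.intensity !== "number") throw new Error("missing intensity");
  return { kind: obj.kind, intensity: clamp(obj.intensity, 0, 100) };
}

// LLM output is requested as strict JSON, then parsed and validated.
const action = validateAction(JSON.parse('{"kind":"sanction","intensity":250}'));
```

This is why the LLM layer cannot mutate world state directly: everything it emits passes through validation before the engine sees it.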

IMPORTANT: never hardcode keys or commit secrets.

About

a geopolitical simulation made for our fellow nerds <3
