A full-stack application for building and interacting with LLM-powered agents, teams, and flows using LlamaIndex.
```
┌─────────────────┐      ┌─────────────────┐      ┌─────────────────┐
│    Frontend     │─────▶│     Backend     │─────▶│     Phoenix     │
│   (React UI)    │      │    (FastAPI)    │      │ (Observability) │
│   Port: 3000    │      │   Port: 6001    │      │   Port: 6006    │
└─────────────────┘      └────────┬────────┘      └─────────────────┘
                                  │
                                  ▼
                         ┌─────────────────┐
                         │   PostgreSQL    │
                         │  (Memory Store) │
                         │   Port: 5432    │
                         └─────────────────┘
```
This repository contains comprehensive documentation for all aspects of the system:
- Architecture Overview: High-level system design, service interaction, and software patterns.
- Agentic Patterns: Deep dive into Single Agents, Multi-Agent Teams (Handoff/Orchestrator), and Event-Driven Flows.
- Backend API Service: API endpoints, code structure, and configuration.
- Frontend UI Service: React app structure, state management, and API integration.
- Development Guide: Setup guide for local backend and frontend development.
- Testing Guide: Unit testing strategy and manual testing scenarios.
- Deployment Guide: Docker build process, Nginx configuration, and production setup.
- Observability: Monitoring and tracing with Arize Phoenix.
- Single Agents - Individual LlamaIndex agents with tool access
- Teams - Multi-agent orchestration with handoffs (AgentWorkflow)
- Flows - Event-driven workflows with step-by-step execution
- Streaming - Real-time SSE streaming for all interactions
- HITL - Human-in-the-loop support for agent/team decisions
- Observability - Full trace visibility via Phoenix UI
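Since all interactions stream over SSE, a client ultimately has to split the raw stream into events. Here is a minimal pure-Python parser for the standard `data:`/blank-line wire format; the payload shapes in the example are hypothetical, not this backend's actual event schema:

```python
def parse_sse(lines):
    """Parse an iterable of raw SSE lines into event payload strings.

    Assumes the standard SSE wire format: an event's payload arrives on
    `data:` lines and a blank line terminates the event.
    """
    buffer = []
    for line in lines:
        if line.startswith("data:"):
            buffer.append(line[5:].strip())
        elif line == "" and buffer:
            yield "\n".join(buffer)
            buffer = []
    if buffer:  # flush a trailing event with no final blank line
        yield "\n".join(buffer)

# Example: two streamed chunks followed by a done marker (illustrative payloads)
stream = ['data: {"delta": "Hel"}', "", 'data: {"delta": "lo"}', "", "data: [DONE]", ""]
events = list(parse_sse(stream))
# events → ['{"delta": "Hel"}', '{"delta": "lo"}', '[DONE]']
```

In a real client the `lines` iterable would come from the HTTP response body of a streaming request to the backend.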
1. Configure environment:

   ```bash
   cp .env.example .env  # Edit .env with your API keys
   ```

2. Start all services:

   ```bash
   docker-compose up -d
   ```

3. Access the application:

   - Frontend: http://localhost:3000
   - Phoenix UI: http://localhost:6006
   - Backend API: http://localhost:6001
Backend:

```bash
cd agent
uv sync
cp .env.example .env
# Edit .env with your API keys
uv run uvicorn main:app --reload --port 6001
```

Frontend:

```bash
cd agent-ui
npm install
npm run dev
```

Environment variables:

| Variable | Required | Description |
|---|---|---|
| OPENAI_API_KEY | Yes | OpenAI API key |
| OPENAI_API_BASE | Yes | OpenAI API base URL |
| PERPLEXITY_API_KEY | No | Perplexity API key for web search |
| PERPLEXITY_API_BASE_URL | No | Perplexity API base URL |
| PHOENIX_ENABLED | No | Enable Phoenix tracing (default: true) |
| PHOENIX_ENDPOINT | No | Phoenix collector endpoint |
| MEMORY_DATABASE_URI | No | PostgreSQL connection string |
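As a sketch of how a service might validate and read these variables, here is a small helper that enforces the two required keys and applies the documented `PHOENIX_ENABLED` default. The function name and return shape are illustrative, not the actual `agent` service code:

```python
import os

def load_settings(env=os.environ):
    """Read the documented environment variables, applying defaults.

    Illustrative helper: raises if a required variable is missing and
    returns optional ones as None when unset.
    """
    missing = [k for k in ("OPENAI_API_KEY", "OPENAI_API_BASE") if k not in env]
    if missing:
        raise RuntimeError(f"required settings missing: {missing}")
    return {
        "openai_api_key": env["OPENAI_API_KEY"],
        "openai_api_base": env["OPENAI_API_BASE"],
        "perplexity_api_key": env.get("PERPLEXITY_API_KEY"),
        "perplexity_api_base_url": env.get("PERPLEXITY_API_BASE_URL"),
        # Documented default: Phoenix tracing is on unless disabled.
        "phoenix_enabled": env.get("PHOENIX_ENABLED", "true").lower() == "true",
        "phoenix_endpoint": env.get("PHOENIX_ENDPOINT"),
        "memory_database_uri": env.get("MEMORY_DATABASE_URI"),
    }

settings = load_settings({"OPENAI_API_KEY": "sk-test",
                          "OPENAI_API_BASE": "https://api.openai.com/v1"})
```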
This repository serves as a reference implementation for LlamaIndex's agentic framework. Note: The design patterns documented in docs/design-patterns/ are for learning and reference purposes only. For detailed explanations of the active architecture, see Agentic Patterns.
We implement three core patterns:
- Single Agents (ReAct): Tool-using agents (Math, Research, Market).
- Multi-Agent Teams:
- Handoff: Agents passing control to peers.
- Orchestrator: A manager agent delegating tasks to sub-agents.
- Event-Driven Flows: State machines with branching, looping, and human intervention (e.g., Story Critic Flow).
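The event-driven flow pattern — steps that branch and loop until a critic approves — can be sketched in plain Python. The real flows are LlamaIndex Workflow classes with typed events; every name below is illustrative:

```python
from dataclasses import dataclass

@dataclass
class Draft:
    text: str
    revisions: int = 0

def write(draft: Draft) -> Draft:
    """Writer step (stand-in for an LLM call): revise the draft."""
    return Draft(draft.text + "!", draft.revisions + 1)

def critic_approves(draft: Draft) -> bool:
    """Critic step (stand-in for an LLM judgment): approve when polished."""
    return draft.text.endswith("!!!")

def run_flow(prompt: str, max_loops: int = 5) -> Draft:
    draft = Draft(prompt)
    for _ in range(max_loops):      # looping: revise until approved or capped
        draft = write(draft)
        if critic_approves(draft):  # branching: exit on approval
            break
    return draft

result = run_flow("Once upon a time")
# → approved after 3 write/critique iterations
```

A human-in-the-loop step slots into the same shape: instead of `critic_approves`, the flow pauses and waits for a human decision event before continuing.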
The application supports multiple LLM providers via a factory pattern, but not all providers work with all features:
| Provider | Tool-Using Agents | Toolless Agents | Notes |
|---|---|---|---|
| OpenAI | ✅ Full support | ✅ Full support | Recommended for agents with tools |
| Anthropic | ❌ Broken | ✅ Works | ToolCallBlock not supported in LlamaIndex adapter |
| Gemini Vertex | ❌ Not implemented | ✅ Works | Custom LLM doesn't implement tool interface |
| Bedrock Gateway | ❌ Not implemented | ✅ Works | Custom LLM doesn't implement tool interface |
Recommendation: Use OpenAI as the default provider for full compatibility.
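A factory along these lines can also encode the compatibility table, refusing to build tool-using agents on providers whose adapters can't support them. This is a hedged sketch with illustrative names, not the repository's actual factory:

```python
from typing import Callable, Dict

class UnsupportedProvider(ValueError):
    pass

# Stand-ins for constructing real LLM clients (illustrative only).
def make_openai(model: str) -> str:
    return f"openai:{model}"

def make_anthropic(model: str) -> str:
    return f"anthropic:{model}"

_FACTORIES: Dict[str, Callable[[str], str]] = {
    "openai": make_openai,
    "anthropic": make_anthropic,
}

# Providers whose adapters cannot drive tool-using agents (per the table above).
_TOOLLESS_ONLY = {"anthropic", "gemini-vertex", "bedrock-gateway"}

def create_llm(provider: str, model: str, needs_tools: bool = False) -> str:
    if needs_tools and provider in _TOOLLESS_ONLY:
        raise UnsupportedProvider(f"{provider} cannot power tool-using agents")
    try:
        return _FACTORIES[provider](model)
    except KeyError:
        raise UnsupportedProvider(provider) from None
```

Failing fast at construction time keeps the incompatibility visible, rather than letting a tool call break mid-conversation.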
License: MIT