A comprehensive implementation of intelligent multi-agent systems using LangGraph and Google's Gemini AI. This project demonstrates various agent architectures including ReAct agents, RAG (Retrieval-Augmented Generation) agents, conversational agents, document drafting agents, and a supervisor-based multi-agent orchestration system.
- Quick Start
- Overview
- Architecture
- Project Structure
- Agents Breakdown
- Installation
- UV Package Manager
- Configuration
- Usage
- Technical Details
- Dependencies
- Troubleshooting
- Best Practices
- Contributing
- License
Get up and running in 5 minutes:
```bash
# 1. Clone the repository
git clone https://github.com/Abdul-Halim01/LangGraph-MultiAgents.git
cd LangGraph-MultiAgents

# 2. Install UV (if needed)
curl -LsSf https://astral.sh/uv/install.sh | sh   # macOS/Linux
# OR: pip install uv

# 3. Create virtual environment
uv venv
source .venv/bin/activate   # Windows: .venv\Scripts\activate

# 4. Install dependencies
uv pip install -e .

# 5. Set up environment variables
echo "GOOGLE_API_KEY=your_api_key_here" > .env

# 6. Run an agent
python agent.py
```

Get your Gemini API key from Google AI Studio.
This project showcases the power of LangGraph for building sophisticated multi-agent AI systems. Each agent is designed with a specific purpose and demonstrates different aspects of agent architectures:
- Tool-based agents that can execute code and manipulate data
- Conversational agents with memory and context management
- RAG agents that retrieve and synthesize information from documents
- Document manipulation agents with specialized workflows
- Supervisor systems that orchestrate multiple specialized agents
All agents leverage Google's Gemini AI models for natural language understanding and generation, showcasing enterprise-grade AI capabilities with cost-effective solutions.
The project implements several agent patterns:
```
┌─────────────────────────────────────────────────────────┐
│                  LANGGRAPH MULTIAGENTS                  │
├─────────────────────────────────────────────────────────┤
│                                                         │
│  ┌───────────────┐ ┌───────────────┐ ┌───────────────┐  │
│  │ Data Analysis │ │    Drafter    │ │   Lab Agent   │  │
│  │     Agent     │ │     Agent     │ │               │  │
│  └───────────────┘ └───────────────┘ └───────────────┘  │
│                                                         │
│  ┌───────────────┐ ┌───────────────┐ ┌───────────────┐  │
│  │   RAG Agent   │ │  ReAct Agent  │ │  Supervisor   │  │
│  │               │ │               │ │    System     │  │
│  └───────────────┘ └───────────────┘ └───────────────┘  │
│                                                         │
└─────────────────────────────────────────────────────────┘
```
- ReAct Pattern (Reasoning + Acting)
- RAG Pattern (Retrieval-Augmented Generation)
- State Management with LangGraph StateGraph
- Tool Integration with conditional routing
- Multi-Agent Orchestration with supervisor pattern
- Quality Validation with validator agent
```
              __start__
                  │
                  ▼
           ┌────────────┐
           │ Supervisor │◄───────┐
           └─────┬──────┘        │
                 │               │
     ┌───────────┼───────────┐   │
     ▼           ▼           ▼   │
┌─────────┐ ┌──────────┐ ┌────────────┐
│  Coder  │ │ Enhancer │ │ Researcher │
└────┬────┘ └────┬─────┘ └─────┬──────┘
     └───────────┼─────────────┘
                 ▼
           ┌───────────┐
           │ Validator │
           └─────┬─────┘
                 │
                 ▼
             __end__
```

Dotted lines (···) = Conditional routing
Solid lines (───) = Direct edges
```
LangGraph-MultiAgents/
│
├── agent.py                           # Data Analysis Agent with Python execution
├── Drafter.py                         # Document creation and editing agent
├── Lab_agent.py                       # Conversational agent with memory
├── RAG_Agent.py                       # RAG agent for PDF document retrieval
├── ReAct_Agent.py                     # ReAct pattern implementation
├── Supervisor.ipynb                   # Multi-agent supervisor orchestration
├── langgraph.json                     # LangGraph configuration
├── pyproject.toml                     # Project dependencies
├── uv.lock                            # Dependency lock file
├── .python-version                    # Python version specification
├── Stock_Market_Performance_2024.pdf  # Sample document for RAG
└── README.md                          # This file
```
Purpose: Execute Python code to analyze pandas DataFrames dynamically.
Key Features:
- Python code execution capability via custom tool
- Automatic data analysis from natural language queries
- Safe code execution environment
- Integration with pandas for data manipulation
Architecture:
StateGraph Flow:

```
START → agent → [has tool_calls?] ──yes──► tools → agent (loop)
                       │ no
                       ▼
                      END
```

How it Works:
1. LLM Setup: Uses Google's Gemini 2.5 Flash model with a low temperature (0.1) for deterministic responses

```python
model = ChatGoogleGenerativeAI(
    model="gemini-2.5-flash",
    temperature=0.1,
    max_tokens=500,
)
```

2. Sample Data: Creates a pandas DataFrame with sales and profit data

```python
df = pd.DataFrame({
    'date': pd.date_range('2024-01-01', periods=10),
    'sales': [1_000_000, 120, 115, 140, 160, 155, 180, 190, 185, 200],
    'profit': [20, 25, 23, 30, 35, 33, 40, 42, 41, 45]
})
```

3. Python Execution Tool:
   - Executes arbitrary Python code with access to `df` and `pd`
   - Expects results in a `result` variable
   - Safe execution with exception handling

```python
@tool
def execute_python(code: str) -> str:
    """Execute Python code to analyze DataFrame 'df'"""
    local_vars = {"df": df, "pd": pd}
    exec(code, {"__builtins__": __builtins__, "pd": pd}, local_vars)
    return str(local_vars.get("result", "Code executed"))
```
4. Graph Logic:
- Agent node calls the LLM with tool binding
- Conditional routing checks for tool calls
- Tools node executes Python code
- Loops back to agent for response synthesis
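Stripped of LangGraph, this agent ↔ tools loop is ordinary conditional routing. A dependency-free sketch of that control flow, where `fake_llm` and `run_tool` are illustrative stand-ins (not names from the repo) for the real model call and tool node:

```python
# Toy sketch of the agent ↔ tools loop in plain Python.
# fake_llm pretends to be the model: it requests a tool once, then answers.

def fake_llm(messages):
    if not any(m.get("role") == "tool" for m in messages):
        return {"role": "ai",
                "tool_calls": [{"name": "execute_python",
                                "args": {"code": "result = 6 * 7"}}]}
    return {"role": "ai", "tool_calls": [], "content": "The result is 42."}

def run_tool(call):
    local_vars = {}
    exec(call["args"]["code"], {}, local_vars)   # same idea as execute_python
    return {"role": "tool", "content": str(local_vars.get("result"))}

def run(messages):
    while True:
        response = fake_llm(messages)            # agent node
        messages.append(response)
        if not response["tool_calls"]:           # conditional edge: no tool call → END
            return response["content"]
        for call in response["tool_calls"]:      # tools node, then loop back to agent
            messages.append(run_tool(call))

print(run([{"role": "human", "content": "What is 6 * 7?"}]))
```

The real graph expresses exactly this `while` loop declaratively: the conditional edge is the `if not response["tool_calls"]` check.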
Use Cases:
- Automated data analysis from natural language
- Quick statistical computations
- Data exploration and insights generation
- Business intelligence queries
Example Interaction:
User: "What are the average sales?"
Agent: [Generates code] → [Executes: df['sales'].mean()] → "The average sales are 100,144.5"
Purpose: Interactive document creation and editing with tool-based workflow.
Key Features:
- Document content management via global state
- Update and save operations as tools
- Interactive CLI interface
- Conditional workflow termination
- Real-time document state tracking
Architecture:
StateGraph Flow:

```
agent → tools → [saved?] ──yes──► END
                   │ no
                   ▼
                 agent (loop)
```

How it Works:
1. Global Document State:

```python
document_content = ""  # Stores the current document
```

2. Tools:

   `update`: Updates the entire document content

```python
@tool
def update(content: str) -> str:
    global document_content
    document_content = content
    return f"Document updated: {document_content}"
```

   `save`: Saves the document to a text file and triggers workflow termination

```python
@tool
def save(filename: str) -> str:
    # Ensures .txt extension
    # Writes document_content to file
    # Returns success message
    ...
```

3. Agent State Management:

```python
class AgentState(TypedDict):
    messages: Annotated[Sequence[BaseMessage], add_messages]
```

4. Interactive Loop:
   - Agent prompts the user for input
   - LLM decides which tool to use based on user intent
   - System prompt guides the agent's behavior
   - Tools execute and return results
   - Conditional edge checks whether the document was saved (termination condition)

5. Conditional Termination Logic:

```python
def should_continue(state: AgentState) -> str:
    # Checks if the most recent ToolMessage contains "saved" and "document"
    # Returns "end" to terminate, "continue" to loop
    ...
```
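The termination check boils down to string matching on the last tool result. A minimal, dependency-free sketch of that logic, with message objects simplified to dicts (the real agent inspects `ToolMessage` instances):

```python
# Simplified termination check: scan messages newest-first for the last
# tool result, and end once it reports a successful save.

def should_continue(messages: list[dict]) -> str:
    for message in reversed(messages):
        if message.get("type") == "tool":
            text = message["content"].lower()
            if "saved" in text and "document" in text:
                return "end"        # Save confirmed → terminate the workflow
            return "continue"       # Last tool did something else → keep looping
    return "continue"               # No tool output yet → keep looping

history = [
    {"type": "human", "content": "Save it as meeting_notes"},
    {"type": "tool", "content": "Document saved to meeting_notes.txt"},
]
print(should_continue(history))  # → end
```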
Workflow:
- User starts the agent
- Agent asks what to create/modify
- User provides instructions
- Agent uses the `update` tool to modify the document
- User can continue editing or save
- Agent uses the `save` tool → workflow ends
Use Cases:
- Interactive document drafting
- Content creation with iterative refinement
- Automated report generation
- Note-taking and documentation
Example Interaction:
USER: Create a meeting summary
AI: [Uses update tool] Document updated with meeting summary
USER: Save it as "meeting_notes"
AI: [Uses save tool] Document saved to meeting_notes.txt
→ Workflow ends
Purpose: Simple conversational agent with persistent conversation history.
Key Features:
- Conversation memory across interactions
- Message history persistence to file
- Clean state management
- Interactive CLI interface
Architecture:
StateGraph Flow (per message):

```
START → process → END
```

How it Works:
1. State Definition:

```python
class AgentState(TypedDict):
    messages: List[Union[HumanMessage, AIMessage]]
```

2. LLM Configuration:
   - Uses Gemini 2.5 Flash Lite (lightweight, fast)
   - Temperature 0.1 for consistent responses
   - 500 token limit

3. Processing Node:

```python
def process(state: AgentState) -> AgentState:
    response = llm.invoke(state["messages"])
    state["messages"].append(AIMessage(content=response.content))
    return state
```

4. Conversation Loop:
   - Maintains a `conversation_history` list
   - Each turn appends a HumanMessage and an AIMessage
   - State is passed through the graph for each interaction
   - History persists across all interactions

5. Persistence:
   - Saves the entire conversation to `logging.txt`
   - Formats messages by type (Human/AI)
   - UTF-8 encoding for special characters
Workflow:
- User enters a message
- Message added to conversation history
- History passed to LLM via graph
- AI response added to history
- Loop continues until user types "exit"
- Full conversation saved to file
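Stripped of the LLM call, the loop above is just append-and-replay. A stand-alone sketch of the history handling and the Human/AI log format, where `fake_llm` is an illustrative placeholder for the Gemini call:

```python
# Toy conversation loop: accumulate (speaker, text) turns, then render
# the history in the Human/AI format the agent writes to logging.txt.

def fake_llm(history):
    return f"Echo: {history[-1][1]}"   # Placeholder for the real model reply

def chat_turn(history, user_text):
    history.append(("Human", user_text))           # user turn
    history.append(("AI", fake_llm(history)))      # model turn
    return history

def render_log(history):
    return "\n".join(f"{speaker}: {text}" for speaker, text in history)

history = []
chat_turn(history, "Hello!")
chat_turn(history, "Tell me a joke")
print(render_log(history))
```

Because the whole history is passed on every turn, the model always sees full context; the trade-off is that prompts grow with conversation length.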
Use Cases:
- Simple chatbot interfaces
- Conversation logging and analysis
- Testing conversational flows
- Customer support simulations
Example Conversation Flow:
#1
User: Hello, how are you?
AI: I'm doing well, thank you! How can I help you today?
#2
User: Tell me about Python
AI: Python is a high-level programming language...
exit → Saves to logging.txt
Purpose: Retrieval-Augmented Generation for question-answering from PDF documents.
Key Features:
- PDF document loading and processing
- Vector embeddings for semantic search
- ChromaDB for vector storage
- Context-aware question answering
- Free-tier components (Gemini + HuggingFace embeddings)
Architecture:
StateGraph Flow:

```
llm → [has tool_calls?] ──yes──► tools → llm (loop)
           │ no
           ▼
          END
```

How it Works:
1. Document Processing Pipeline:

```python
# Load PDF
loader = PyPDFLoader("Stock_Market_Performance_2024.pdf")
pages = loader.load()

# Split into chunks
splitter = RecursiveCharacterTextSplitter(
    chunk_size=1000,
    chunk_overlap=200,  # Maintains context between chunks
)
docs = splitter.split_documents(pages)
```

2. Embeddings:
   - Uses HuggingFace's `sentence-transformers/all-MiniLM-L6-v2`
   - Free, lightweight, and effective for semantic search
   - Creates vector representations of text chunks

3. Vector Store (ChromaDB):

```python
vectorstore = Chroma.from_documents(
    documents=docs,
    embedding=embeddings,
    persist_directory="./chroma_db",
    collection_name="stock_market",
)
```

   - Persists to disk (reusable across sessions)
   - Enables similarity search
   - Returns the top-k most relevant chunks

4. Retrieval Tool:

```python
@tool
def search_stock_pdf(query: str) -> str:
    """Search Stock Market Performance 2024 PDF"""
    results = retriever.invoke(query)  # Returns top 5 similar chunks
    return formatted_results
```

5. Agent Workflow:
   - User asks a question
   - LLM receives a system prompt emphasizing tool use
   - LLM generates a tool call with a search query
   - Tool retrieves relevant document chunks
   - LLM synthesizes an answer from the retrieved context
   - Cites sources from the documents

6. System Prompt:

```python
SYSTEM_PROMPT = """
You are a RAG assistant answering questions ONLY using the
Stock Market Performance 2024 PDF.
Always use the search tool before answering.
Cite information clearly from the documents.
"""
```
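The retrieval step ranks chunks by vector similarity. A dependency-free toy of that idea, with hand-made 3-dimensional "embeddings" standing in for real sentence-transformer vectors (all names and numbers here are invented for illustration):

```python
import math

# Toy semantic search: rank text chunks by cosine similarity of their vectors.
# The vectors are made-up stand-ins for real embedding model output.

def cosine(a, b):
    dot = sum(x * y for x, y in zip(a, b))
    norm = math.sqrt(sum(x * x for x in a)) * math.sqrt(sum(y * y for y in b))
    return dot / norm

chunks = {
    "Q1 2024 saw strong equity gains": [0.9, 0.1, 0.0],
    "The report was typeset in LaTeX": [0.0, 0.2, 0.9],
    "Tech stocks led Q1 performance":  [0.7, 0.4, 0.2],
}

query_vec = [0.85, 0.2, 0.05]  # Pretend embedding of "How did Q1 2024 perform?"

ranked = sorted(chunks, key=lambda c: cosine(query_vec, chunks[c]), reverse=True)
print(ranked[0])  # → Q1 2024 saw strong equity gains
```

ChromaDB does the same ranking at scale, over the persisted embeddings of all PDF chunks, and returns the top-k matches to the tool.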
RAG Pipeline:
Question → LLM (tool call) → Retrieve chunks → LLM (synthesis) → Answer
Use Cases:
- Document Q&A systems
- Knowledge base queries
- Research assistance
- Compliance and policy questions
Example Interaction:
User: "What was the stock market performance in Q1 2024?"
Agent: [Calls search_stock_pdf tool]
[Retrieves relevant chunks about Q1 performance]
[Synthesizes answer]
"According to the document, Q1 2024 showed strong performance with..."
Purpose: Implements the ReAct (Reasoning + Acting) pattern with tool integration.
Key Features:
- ReAct pattern implementation
- Tool calling with explanations
- Conditional routing based on tool usage
- System-prompted reasoning
Architecture:
StateGraph Flow:

```
our_model → [has tool_calls?] ──yes──► tools → our_model (loop)
                  │ no
                  ▼
                 END
```

How it Works:
1. State Definition:

```python
class AgentState(TypedDict):
    messages: Annotated[Sequence[BaseMessage], add_messages]
```

   - Uses the `add_messages` reducer to append messages automatically

2. Tool Definition:

```python
@tool
def add(a: int, b: int) -> str:
    """Addition function for 2 integers"""
    return f"Add tool return {a+b}"
```

3. LLM with Tools:

```python
llm = ChatGoogleGenerativeAI(
    model="gemini-2.5-flash-lite",
    max_tokens=200,
    temperature=0.1,
)
llm_with_tools = llm.bind_tools(tools)
```

4. Model Call Node:

```python
def model_call(state: AgentState) -> AgentState:
    system_prompt = SystemMessage(
        content="You are Alpo. Add explanation for your answer."
    )
    response = llm_with_tools.invoke([system_prompt] + state["messages"])
    return {"messages": [response]}
```

5. Conditional Routing:

```python
def should_continue(state: AgentState):
    last_message = state["messages"][-1]
    if last_message.tool_calls:
        return "continue"  # Route to tools
    return "end"           # Terminate
```

6. ReAct Pattern:
   - Reasoning: LLM analyzes the query
   - Acting: LLM decides to use a tool
   - Observing: Tool executes and returns the result
   - Reasoning: LLM incorporates the result and explains

7. Pretty Streaming:

```python
def print_stream(stream):
    for s in stream:
        message = s["messages"][-1]
        message.pretty_print()  # Formats output nicely
```
ReAct Flow:
```
User Query → LLM (Reason) → Tool Call (Act) → Execute Tool (Observe)
           → LLM (Reason with result) → Explain Answer
```
Use Cases:
- Mathematical computations
- Multi-step reasoning tasks
- Tool-augmented problem solving
- Interactive calculators
Example Interaction:
User: "Add 34 + 21 + 7"
AI (Reasoning): "I need to add these numbers together"
AI (Acting): [Calls add tool with appropriate arguments]
Tool: Returns "Add tool return 62"
AI (Reasoning + Explaining): "The sum is 62. I used the addition tool
because it provides accurate arithmetic computation."
Purpose: Orchestrates multiple specialized agents using a supervisor pattern.
Key Features:
- Multi-agent coordination
- Specialized agents for different tasks
- Dynamic routing based on query type
- Research, coding, and web search capabilities
- Hierarchical agent architecture
Architecture Overview:
The supervisor system implements a hierarchical multi-agent architecture where a supervisor agent routes queries to specialized workers, with a validator agent ensuring quality:
```
              ┌───────────┐
              │ __start__ │
              └─────┬─────┘
                    │
                    ▼
              ┌────────────┐
              │ Supervisor │
              │   Agent    │
              └─────┬──────┘
                    │
     ┌──────────────┼──────────────┐
     ▼              ▼              ▼
┌──────────┐  ┌───────────┐  ┌────────────┐
│  Coder   │  │ Enhancer  │  │ Researcher │
│  Agent   │  │   Agent   │  │   Agent    │
└────┬─────┘  └─────┬─────┘  └─────┬──────┘
     └──────────────┼──────────────┘
                    ▼
              ┌───────────┐
              │ Validator │
              │   Agent   │
              └─────┬─────┘
                    │
                    ▼
              ┌───────────┐
              │  __end__  │
              └───────────┘
```
How it Works (Based on notebook implementation):
1. Agent Definition:
   - Each worker agent is defined with specific capabilities and system prompts
   - Agents are registered in the supervisor's routing logic
   - Each agent has access to specialized tools
   - Agents communicate through shared state

2. Supervisor Logic:

```python
def supervisor(state: SupervisorState):
    # Analyzes the current state and messages
    # Determines which agent should handle the task
    # Returns a routing decision:
    #   {"next": "coder" | "enhancer" | "researcher" | "validator"}
    ...
```

   - Uses the LLM to intelligently route queries
   - Can route to multiple agents in sequence
   - Tracks conversation state and history

3. Agent State Management:

```python
class SupervisorState(TypedDict):
    messages: Annotated[Sequence[BaseMessage], add_messages]
    next: str  # Which agent to route to
```

4. Worker Agent Pattern — each worker follows this structure:

```python
def worker_agent(state: SupervisorState):
    # Process the task based on the agent's specialty
    # Use agent-specific tools if needed
    # Return results as messages
    return {"messages": [response]}
```

5. Validator Integration:

```python
def validator(state: SupervisorState):
    # Reviews outputs from worker agents
    # Checks quality, correctness, completeness
    # Can reject and send back for improvement
    # Final approval before END
    return {"messages": [validation_result]}
```

6. Graph Construction:

```python
graph = StateGraph(SupervisorState)

# Add all agents as nodes
graph.add_node("supervisor", supervisor)
graph.add_node("coder", coder_agent)
graph.add_node("enhancer", enhancer_agent)
graph.add_node("researcher", researcher_agent)
graph.add_node("validator", validator)

# Entry point
graph.set_entry_point("supervisor")

# Conditional routing from supervisor
graph.add_conditional_edges(
    "supervisor",
    lambda x: x["next"],
    {
        "coder": "coder",
        "enhancer": "enhancer",
        "researcher": "researcher",
        "FINISH": "validator",
    },
)

# All workers route to the validator
graph.add_edge("coder", "validator")
graph.add_edge("enhancer", "validator")
graph.add_edge("researcher", "validator")

# The validator can loop back or end
graph.add_conditional_edges(
    "validator",
    should_continue,
    {
        "continue": "supervisor",
        "end": END,
    },
)
```
7. Workflow Orchestration:
   - Supervisor receives the user query
   - Analyzes query intent and requirements
   - Routes to the appropriate specialist(s)
   - Worker(s) complete their tasks
   - Validator ensures quality
   - If approved → return to user
   - If rejected → loop back for improvements

8. Specialized Agents:
Coder Agent:
- Writes and debugs code
- Implements solutions in various programming languages
- Explains code functionality
- Provides optimized code solutions
Enhancer Agent:
- Improves and refines content
- Optimizes code or text quality
- Adds missing details and context
- Polishes outputs from other agents
Researcher Agent:
- Conducts in-depth research
- Gathers information from multiple sources
- Synthesizes comprehensive answers
- Provides fact-based insights
Validator Agent:
- Quality assurance for all outputs
- Verifies correctness and completeness
- Checks for errors or inconsistencies
- Ensures outputs meet requirements
- Final gatekeeper before returning to user
9. State Management:

```python
class SupervisorState(TypedDict):
    messages: Annotated[Sequence[BaseMessage], add_messages]
    next: str  # Which agent to route to next
```

10. Routing Logic:
    - Supervisor examines the query
    - Uses the LLM to determine the appropriate agent
    - Can chain multiple agents for complex queries

11. Workflow:

```
User Query → Supervisor → [Route Decision] → Worker Agent(s) → Validator → Result
```

Complete Flow:
- User submits query
- Supervisor analyzes and routes to appropriate agent(s)
- Worker agent(s) process the task
- Validator checks the output quality
- If valid β Return to user
- If invalid β Route back to worker or supervisor for improvement
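The routing decision itself is just "map query → worker name". The actual supervisor asks the LLM for this decision; as a toy stand-in, a keyword router makes the shape of the decision concrete (keywords and agent names below are illustrative):

```python
# Toy supervisor routing: pick a worker from keywords in the query.
# A real supervisor would delegate this classification to the LLM.

def route(query: str) -> str:
    q = query.lower()
    if any(word in q for word in ("code", "function", "debug", "python")):
        return "coder"
    if any(word in q for word in ("research", "find", "trends", "latest")):
        return "researcher"
    if any(word in q for word in ("improve", "polish", "optimize", "refine")):
        return "enhancer"
    return "validator"  # Nothing left to do → hand off for final checks

print(route("Create a Python sorting algorithm"))   # → coder
print(route("Find latest ML trends"))               # → researcher
```

Swapping the keyword check for an LLM call is exactly what makes the real supervisor able to chain agents for queries that need several skills.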
Key Notebook Components:
- Agent initialization and configuration
- Supervisor prompt engineering
- Graph construction with conditional edges
- Multi-agent communication protocol
- Result aggregation and synthesis
Detailed Agent Capabilities:
| Agent | Primary Role | Key Functions | Example Tasks |
|---|---|---|---|
| Supervisor | Orchestrator | Query analysis, routing, coordination | Determines workflow path |
| Coder | Software Development | Write code, debug, optimize, explain | "Create a Python sorting algorithm" |
| Enhancer | Quality Improvement | Refine outputs, add details, polish | "Improve this code's readability" |
| Researcher | Information Gathering | Research topics, synthesize data | "Find latest ML trends" |
| Validator | Quality Assurance | Check correctness, verify completeness | Ensures all outputs meet standards |
Agent Interaction Patterns:
1. Sequential Processing: `Researcher → Coder → Enhancer → Validator`
   Example: "Research topic, write code, optimize it"

2. Single Agent + Validation: `Coder → Validator`
   Example: "Write a simple function"

3. Iterative Improvement: `Coder → Validator → (fails) → Enhancer → Validator`
   Example: quality loop until standards are met

4. Parallel-to-Sequential (if supported): `Researcher + Coder → Enhancer → Validator`
   Example: combine research and code, then polish
Use Cases:
- Complex queries requiring multiple specialized skills
- Research + coding + optimization workflows
- Quality-assured content generation
- Multi-step problem solving with validation
- Comprehensive analysis requiring diverse agents
- Production-ready outputs with automatic QA
Example Flow:
```
User: "Research the latest AI trends and write optimized Python code to analyze them"

Supervisor: Analyzes query → "This needs Researcher, Coder, and Enhancer"
    ↓
Researcher: Finds information about AI trends
    ↓
Supervisor: Receives research → Routes to Coder
    ↓
Coder: Writes analysis code based on research
    ↓
Supervisor: Routes to Enhancer
    ↓
Enhancer: Optimizes code, adds documentation and best practices
    ↓
Validator: Checks if research is complete, code works, and quality is high
    ↓
Validator: Approves → Returns final result to user
```

Alternative Flow (if validation fails):

```
Validator: Finds issues → Routes back to Supervisor
    ↓
Supervisor: Re-routes to appropriate agent for fixes
    ↓
[Process repeats until validation passes]
```
Advanced Features:
- Agent memory and context sharing
- Parallel agent execution (if supported)
- Error handling and fallback strategies
- Agent communication protocols
- Python 3.10 or higher
- Google Gemini API key
- UV package manager (highly recommended - fast, modern Python package manager)
- Git
```bash
git clone https://github.com/Abdul-Halim01/LangGraph-MultiAgents.git
cd LangGraph-MultiAgents
```

On macOS and Linux:

```bash
curl -LsSf https://astral.sh/uv/install.sh | sh
```

On Windows:

```powershell
powershell -c "irm https://astral.sh/uv/install.ps1 | iex"
```

Or using pip:

```bash
pip install uv
```

Verify installation:

```bash
uv --version
```

```bash
# UV automatically creates and manages virtual environments
uv venv

# Activate the virtual environment
# On macOS/Linux:
source .venv/bin/activate
# On Windows:
.venv\Scripts\activate
```

UV makes dependency installation fast and reliable:

```bash
# Install all dependencies from pyproject.toml
uv pip install -e .

# Or install specific packages
uv pip install langgraph langchain langchain-core langchain-google-genai
uv pip install langchain-community langchain-chroma langchain-huggingface
uv pip install python-dotenv pandas jupyter

# UV also supports syncing dependencies
uv pip sync
```

Why UV?
- ⚡ 10-100x faster than pip
- 🔒 Reliable: generates lock files for reproducible installs
- 🎯 Smart: better dependency resolution
- 💾 Efficient: global cache reduces disk usage
Create a `.env` file in the project root:

```
GOOGLE_API_KEY=your_gemini_api_key_here
```

Get your Gemini API key:
1. Go to Google AI Studio
2. Create a new API key
3. Copy and paste it into `.env`
This project uses UV as the primary package manager for superior performance and reliability.
UV is a modern, extremely fast Python package installer and resolver written in Rust. It's designed to be a drop-in replacement for pip with significantly better performance.
| Feature | UV | pip |
|---|---|---|
| Speed | ⚡ 10-100x faster | Standard speed |
| Dependency Resolution | 🎯 Advanced resolver | Basic resolver |
| Lock Files | ✅ Built-in (uv.lock) | ❌ Requires pip-tools |
| Caching | 💾 Global cache | Limited caching |
| Reproducibility | 🔒 Guaranteed | Variable |
```bash
# Create virtual environment
uv venv

# Install from pyproject.toml
uv pip install -e .

# Install specific package
uv pip install package-name

# Install with extras
uv pip install "package-name[extra]"

# Sync dependencies (use lock file)
uv pip sync

# List installed packages
uv pip list

# Freeze dependencies
uv pip freeze

# Uninstall package
uv pip uninstall package-name
```

The uv.lock file ensures everyone on your team has exactly the same dependencies:

```bash
# Generate/update lock file
uv pip compile pyproject.toml -o requirements.txt

# Install from lock file
uv pip sync
```

If you're coming from pip, UV commands are nearly identical:

```bash
# pip                  →  uv
pip install package    →  uv pip install package
pip uninstall package  →  uv pip uninstall package
pip list               →  uv pip list
pip freeze             →  uv pip freeze
```

```json
{
  "dependencies": ["langgraph"],
  "graphs": {
    "agent": "./Drafter.py:app"
  }
}
```

This configuration file:
- Specifies LangGraph as a dependency
- Defines the Drafter agent as the default graph for LangGraph API deployment
Specified in .python-version:
3.12
Key dependencies:
- `langgraph`: Graph-based agent orchestration
- `langchain`: Core LLM framework
- `langchain-google-genai`: Gemini AI integration
- `langchain-community`: Community tools and integrations
- `langchain-chroma`: Vector store integration
- `langchain-huggingface`: HuggingFace embeddings
- `python-dotenv`: Environment variable management
- `pandas`: Data manipulation
```bash
python agent.py
```

Example queries:
- "What are the average sales?"
- "Show me the total profit"
- "What's the sales trend over time?"
```bash
python Drafter.py
```

Interactive session:
USER: Create a product description for a new smartphone
AI: [Updates document with description]
USER: Make it more technical
AI: [Updates with technical details]
USER: Save as product_desc
AI: [Saves to product_desc.txt]
```bash
python Lab_agent.py
```

Example:
Enter your message: Hello!
AI: Hi! How can I help you today?
Enter your message: Tell me a joke
AI: [Tells a joke]
Enter your message: exit
Conversation Saved to logging.txt!
```bash
python RAG_Agent.py
```

Ensure the PDF exists: `Stock_Market_Performance_2024.pdf`
Example queries:
- "What were the key market trends in 2024?"
- "Which sectors performed best?"
- "What caused the Q2 market volatility?"
```bash
python ReAct_Agent.py
```

Example:

```python
# Already has an example in the code
inputs = {"messages": [("user", "Add 34 + 21 + 7")]}
# Outputs the reasoning and calculation result
```

Installation:
```bash
# Install Jupyter with UV
uv pip install jupyter ipykernel

# Register the virtual environment as a Jupyter kernel
python -m ipykernel install --user --name=langgraph-multiagents
```

Run:

```bash
# Launch Jupyter
jupyter notebook Supervisor.ipynb

# Or use JupyterLab
uv pip install jupyterlab
jupyter lab Supervisor.ipynb
```

Then run all cells to initialize and interact with the multi-agent system.
Architecture:
- Supervisor: Routes tasks to specialized agents
- Coder: Handles programming tasks
- Enhancer: Improves and optimizes outputs
- Researcher: Gathers and synthesizes information
- Validator: Ensures output quality and correctness
All agents use TypedDict for type-safe state management:

```python
from typing import TypedDict, Annotated, Sequence
from langchain_core.messages import BaseMessage
from langgraph.graph.message import add_messages

class AgentState(TypedDict):
    messages: Annotated[Sequence[BaseMessage], add_messages]
```

The `add_messages` reducer automatically appends new messages to the message list.
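Reducer semantics in one toy function: instead of replacing the state field, updates are merged by appending. A stdlib-only illustration of that merge rule (not the actual LangGraph implementation, which also handles message IDs and in-place replacement):

```python
# Toy version of an append-style reducer: new values are concatenated
# onto the existing channel value rather than overwriting it.

def add_messages_toy(existing: list, update: list) -> list:
    return existing + update   # Merge by appending, never by replacement

state = {"messages": []}
for update in (["Hello"], ["Hi there!"], ["Tell me about Python"]):
    state["messages"] = add_messages_toy(state["messages"], update)

print(state["messages"])  # → ['Hello', 'Hi there!', 'Tell me about Python']
```

This is why node functions can return just `{"messages": [response]}`: the reducer, not the node, is responsible for folding the new message into the history.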
Standard pattern across agents:

```python
from langgraph.graph import StateGraph, START, END

# 1. Create graph with state type
graph = StateGraph(AgentState)

# 2. Add nodes
graph.add_node("agent", agent_function)
graph.add_node("tools", tool_node)

# 3. Set entry point
graph.set_entry_point("agent")

# 4. Add edges
graph.add_conditional_edges("agent", should_continue, {
    "continue": "tools",
    "end": END,
})
graph.add_edge("tools", "agent")

# 5. Compile
app = graph.compile()
```

Using the @tool decorator:
```python
from langchain_core.tools import tool

@tool
def my_tool(param: str) -> str:
    """Tool description for LLM"""
    # Tool logic
    return result
```

Important:
- The docstring is used by the LLM to understand the tool's purpose
- Type hints are required
- The return type should be a string for consistency
Pattern for decision-making in graphs:

```python
def should_continue(state: AgentState) -> str:
    last_message = state["messages"][-1]
    # Check for tool calls
    if hasattr(last_message, "tool_calls") and last_message.tool_calls:
        return "continue"  # Route to tools
    return "end"           # Terminate

# Use in graph
graph.add_conditional_edges(
    "node_name",
    should_continue,
    {"continue": "next_node", "end": END},
)
```

LangChain message types used:
- `HumanMessage`: User input
- `AIMessage`: LLM response
- `SystemMessage`: System prompts and instructions
- `ToolMessage`: Tool execution results
```python
from langchain_google_genai import ChatGoogleGenerativeAI

llm = ChatGoogleGenerativeAI(
    model="gemini-2.5-flash",  # Model version
    temperature=0.1,           # Creativity (0-1)
    max_tokens=500,            # Response length limit
    max_retries=2,             # Retry failed requests
)
```

Model Options:
- `gemini-2.5-flash`: Fast, efficient, good for most tasks
- `gemini-2.5-flash-lite`: Lightweight version
- `gemini-2.5-pro`: More capable, slower, higher cost
Document Processing:

```python
# Load
loader = PyPDFLoader(pdf_path)
pages = loader.load()

# Split
splitter = RecursiveCharacterTextSplitter(
    chunk_size=1000,    # Characters per chunk
    chunk_overlap=200,  # Overlap to maintain context
)
docs = splitter.split_documents(pages)

# Embed and store
vectorstore = Chroma.from_documents(
    documents=docs,
    embedding=embeddings,
    persist_directory="./chroma_db",
)

# Retrieve
retriever = vectorstore.as_retriever(
    search_type="similarity",
    search_kwargs={"k": 5},  # Top 5 results
)
```

Best practices implemented:

```python
try:
    # Tool execution
    result = tool.invoke(input)
except Exception as e:
    return f"Error: {str(e)}"
```

All tools include try-except blocks for graceful error handling.
```toml
[project]
dependencies = [
    "langgraph>=0.2.59",
    "langchain>=0.3.14",
    "langchain-core>=0.3.28",
    "langchain-google-genai>=2.0.8",
    "langchain-community>=0.3.14",
    "langchain-chroma>=0.2.0",
    "langchain-huggingface>=0.1.2",
    "python-dotenv>=1.0.0",
    "pandas>=2.0.0",
]
```

For enhanced functionality:
- `jupyter`: For running the Supervisor notebook
- `pypdf`: PDF processing (included in langchain-community)
- `chromadb`: Vector database
- `sentence-transformers`: Embeddings
```bash
# Quick install with UV (recommended)
uv pip install langgraph langchain langchain-google-genai python-dotenv

# Full installation with all features
uv pip install -e .
```

If you prefer traditional pip:

```bash
pip install langgraph langchain langchain-google-genai python-dotenv
pip install langchain-community langchain-chroma langchain-huggingface pandas
```

Note: UV is significantly faster and more reliable for dependency management.
A library for building stateful, multi-actor applications with LLMs. Key features:
- State management: TypedDict-based state
- Graph construction: Nodes and edges
- Conditional routing: Dynamic workflow control
- Tool integration: Seamless tool calling
Reasoning + Acting:
- LLM reasons about the problem
- LLM decides to use a tool (acting)
- Tool executes and returns observation
- LLM reasons with new information
- Repeat or provide final answer
Enhances LLM with external knowledge:
- User asks a question
- System retrieves relevant documents
- LLM generates answer using retrieved context
- Reduces hallucinations, provides citations
Multi-agent orchestration:
- Central supervisor coordinates worker agents
- Each agent specializes in specific tasks
- Supervisor routes queries to appropriate agents
- Enables complex, multi-step workflows
```python
@tool
def your_custom_tool(input: str) -> str:
    """Description for the LLM"""
    # Your logic here
    return result

# Add to the tools list (unpack the existing list rather than nesting it)
tools = [*existing_tools, your_custom_tool]
llm_with_tools = llm.bind_tools(tools)
```

Each agent has a system prompt you can customize:

```python
system_prompt = """
Your custom instructions here.
- Guideline 1
- Guideline 2
"""
```

```python
# Switch to a different Gemini model
llm = ChatGoogleGenerativeAI(
    model="gemini-1.5-pro",  # More capable
    temperature=0.7,         # More creative
    max_tokens=1000,         # Longer responses
)
```

To add a new agent:
1. Create a new Python file
2. Define state and tools
3. Create the graph with nodes and edges
4. Compile and test
5. Integrate with the supervisor (optional)
- Flash models: Fast, cost-effective for most tasks
- Pro models: Better for complex reasoning
- Lite models: Minimal latency, good for simple tasks
For RAG agents:
- Chunk size: 500-1500 characters (balance context vs. precision)
- Overlap: 10-20% of chunk size
- Smaller chunks: More precise retrieval, less context
- Larger chunks: More context, potentially less precise
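The size/overlap trade-off is easy to see with a toy character chunker (stdlib only; the project's real splitter is LangChain's `RecursiveCharacterTextSplitter`, which additionally respects separators like paragraphs and sentences):

```python
# Toy fixed-size chunker with overlap: each chunk repeats the tail of the
# previous one, so context spanning a chunk boundary is not lost.

def chunk(text: str, size: int, overlap: int) -> list[str]:
    step = size - overlap                 # How far the window advances each time
    return [text[i:i + size] for i in range(0, len(text), step)]

text = "abcdefghijklmnopqrstuvwxyz"
for piece in chunk(text, size=10, overlap=2):
    print(piece)
# abcdefghij
# ijklmnopqr
# qrstuvwxyz
# yz
```

With `size=1000, overlap=200` as in RAG_Agent.py, each retrieved chunk carries ~200 characters of shared context with its neighbor, which is what the "10-20% of chunk size" guideline above buys you.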
- ChromaDB: Good for local development, persists to disk
- Pinecone: Better for production, managed service
- FAISS: Fast, in-memory, no persistence
- Enable LLM caching for repeated queries
- Cache embeddings for frequently accessed documents
- Use persistent vector stores to avoid re-indexing
1. UV Not Found

```
-bash: uv: command not found
```

Solution: Install UV

```bash
# macOS/Linux
curl -LsSf https://astral.sh/uv/install.sh | sh

# Windows
powershell -c "irm https://astral.sh/uv/install.ps1 | iex"

# Or via pip
pip install uv
```

2. UV Installation Fails

```
error: externally-managed-environment
```

Solution: Use a virtual environment

```bash
uv venv
source .venv/bin/activate  # Activate first
uv pip install package-name
```

3. Import Errors

```
ModuleNotFoundError: No module named 'langgraph'
```

Solution: Install dependencies with UV

```bash
uv pip install langgraph langchain
# Or install everything
uv pip install -e .
```

4. Virtual Environment Not Activated

```bash
# Check if the venv is activated (you should see (.venv) in your prompt)
which python  # Should point to .venv/bin/python

# If not activated:
source .venv/bin/activate  # macOS/Linux
.venv\Scripts\activate     # Windows
```

7. ChromaDB Permission Issues

```
PermissionError: Cannot write to ./chroma_db
```

Solution: Check directory permissions

```bash
chmod 755 ./chroma_db
# Or change persist_directory in RAG_Agent.py
```

8. Memory Issues with Large PDFs

Solution:

```python
# Reduce chunk_size in RAG_Agent.py
splitter = RecursiveCharacterTextSplitter(
    chunk_size=500,     # Reduced from 1000
    chunk_overlap=100,
)
```

9. UV Cache Issues

```bash
# Clear the UV cache if experiencing issues
uv cache clean
```

- Environment Variables: Always use `.env` files for API keys
- Error Handling: Wrap tool execution in try-except blocks
- Type Hints: Use TypedDict for state management
- Logging: Implement logging for production deployments
- Testing: Test each agent independently before integration
- Documentation: Keep docstrings updated for tools
- Version Control: Don't commit `.env` files
- Resource Cleanup: Close connections and clean up resources
Contributions are welcome! Please:
1. Fork the repository
2. Create a feature branch (`git checkout -b feature/amazing-feature`)
3. Commit your changes (`git commit -m 'Add amazing feature'`)
4. Push to the branch (`git push origin feature/amazing-feature`)
5. Open a Pull Request
- Follow existing code style
- Add tests for new features
- Update documentation
- Ensure all agents still work
This project is licensed under the MIT License. See LICENSE file for details.
Abdul-Halim01
- GitHub: @Abdul-Halim01
- LangChain team for the excellent framework
- Google for Gemini AI
- LangGraph community for inspiration
- All contributors and users
Happy Building with LangGraph! 🚀