A library for building stateful, graph-based agent workflows where nodes are LLM calls or tools and edges are control flow decisions, enabling complex cyclical agent patterns.
LangChain chains are DAGs: the control flow goes in one direction. ReAct agents can loop, but the loop structure is implicit — it's encoded in the prompt, not the framework. This makes them hard to visualise, debug, and extend.
LangGraph makes the control flow explicit: you define a graph where each node is a computation (an LLM call, a tool, a human review step) and each edge is a transition rule (go to node B after node A; or go to B if condition is true, C otherwise). The graph can have cycles — that's the point.
Result: you can draw your agent as a diagram, see exactly where it is at any moment, and add branches and loops without touching the LLM prompt.
```python
from langgraph.graph import StateGraph, END
from typing import TypedDict, Annotated
import operator

# State: what persists across nodes
class AgentState(TypedDict):
    messages: Annotated[list, operator.add]  # list that accumulates
    step_count: int
    final_answer: str

# Node: a function that takes state and returns state updates
def planner_node(state: AgentState) -> dict:
    # ... call LLM ...
    return {"messages": [{"role": "assistant", "content": "Step 1: Search for X"}]}

def executor_node(state: AgentState) -> dict:
    # ... execute tool ...
    return {"messages": [{"role": "tool", "content": "Result: Y"}],
            "step_count": state["step_count"] + 1}

# Conditional edge: decide which node to go to next
def should_continue(state: AgentState) -> str:
    if state["step_count"] >= 5 or state.get("final_answer"):
        return "end"
    return "executor"  # loop back

# Build the graph
graph = StateGraph(AgentState)
graph.add_node("planner", planner_node)
graph.add_node("executor", executor_node)
graph.set_entry_point("planner")
graph.add_edge("planner", "executor")
graph.add_conditional_edges("executor", should_continue, {
    "executor": "executor",  # loop
    "end": END
})
app = graph.compile()
```
```python
from langgraph.graph import StateGraph, END
from langgraph.prebuilt import ToolNode
from langchain_anthropic import ChatAnthropic
from langchain.tools import tool
from typing import TypedDict, Annotated
import operator

@tool
def search(query: str) -> str:
    '''Search for information.'''
    return f"Results for: {query}"

tools = [search]
llm = ChatAnthropic(model="claude-3-5-sonnet-20241022").bind_tools(tools)

class State(TypedDict):
    messages: Annotated[list, operator.add]

def call_model(state: State) -> dict:
    response = llm.invoke(state["messages"])
    return {"messages": [response]}

def should_use_tools(state: State) -> str:
    last = state["messages"][-1]
    return "tools" if last.tool_calls else "end"

graph = StateGraph(State)
graph.add_node("model", call_model)
graph.add_node("tools", ToolNode(tools))
graph.set_entry_point("model")
graph.add_conditional_edges("model", should_use_tools, {
    "tools": "tools",
    "end": END
})
graph.add_edge("tools", "model")  # tool output goes back to model
app = graph.compile()

result = app.invoke({"messages": [{"role": "user", "content": "Search for Python web frameworks."}]})
print(result["messages"][-1].content)
```
```python
def route_based_on_intent(state: State) -> str:
    '''Route to a different agent based on query intent.'''
    last_msg = state["messages"][-1].content.lower()
    if any(w in last_msg for w in ["code", "python", "script", "function"]):
        return "coding_agent"
    elif any(w in last_msg for w in ["search", "find", "what is", "who is"]):
        return "research_agent"
    else:
        return "general_agent"

# coding_agent_fn, research_agent_fn, and general_agent_fn are node
# functions (state -> partial update) assumed to be defined elsewhere
graph = StateGraph(State)
graph.add_node("router", lambda s: s)  # pass-through router node
graph.add_node("coding_agent", coding_agent_fn)
graph.add_node("research_agent", research_agent_fn)
graph.add_node("general_agent", general_agent_fn)
graph.set_entry_point("router")
graph.add_conditional_edges("router", route_based_on_intent, {
    "coding_agent": "coding_agent",
    "research_agent": "research_agent",
    "general_agent": "general_agent",
})
# All paths converge to END
graph.add_edge("coding_agent", END)
graph.add_edge("research_agent", END)
graph.add_edge("general_agent", END)
```
```python
from langgraph.checkpoint.memory import MemorySaver

# Checkpointer persists state between runs
checkpointer = MemorySaver()  # or use SqliteSaver for durable persistence

# Assumes the graph being compiled defines a node named "human_review"
app = graph.compile(
    checkpointer=checkpointer,
    interrupt_before=["human_review"]  # pause before this node
)

# First run — executes up to human_review, then stops
config = {"configurable": {"thread_id": "thread-42"}}
result = app.invoke(
    {"messages": [{"role": "user", "content": "Draft a blog post about LangGraph."}]},
    config=config
)
print("Draft ready for review:")
print(result["messages"][-1].content)

# Human reviews, then resumes
human_feedback = "Good, but add a section on checkpointing."
result = app.invoke(
    {"messages": [{"role": "user", "content": human_feedback}]},
    config=config  # same thread_id = resume from checkpoint
)
```
```python
from langgraph.graph import StateGraph, END

# search_node, summarise_node, outline_node, write_node, and edit_node
# are node functions assumed to be defined elsewhere

# Build specialist subgraphs
def build_research_subgraph():
    sg = StateGraph(State)
    sg.add_node("search", search_node)
    sg.add_node("summarise", summarise_node)
    sg.set_entry_point("search")
    sg.add_edge("search", "summarise")
    sg.add_edge("summarise", END)
    return sg.compile()

def build_writing_subgraph():
    sg = StateGraph(State)
    sg.add_node("outline", outline_node)
    sg.add_node("write", write_node)
    sg.add_node("edit", edit_node)
    sg.set_entry_point("outline")
    sg.add_edge("outline", "write")
    sg.add_edge("write", "edit")
    sg.add_edge("edit", END)
    return sg.compile()

# Orchestrator graph uses compiled subgraphs as nodes
orchestrator = StateGraph(State)
orchestrator.add_node("research", build_research_subgraph())
orchestrator.add_node("writing", build_writing_subgraph())
orchestrator.set_entry_point("research")
orchestrator.add_edge("research", "writing")
orchestrator.add_edge("writing", END)
app = orchestrator.compile()
```
State schema must be consistent. All nodes must return updates that match the defined State TypedDict. A missing key or wrong type causes a runtime error that's hard to debug. Define your State carefully before building nodes.
Cycles require explicit termination conditions. A graph with a loop but no termination condition will run until it hits LangGraph's recursion limit (raising a GraphRecursionError) or exhausts your token budget. Always pair a cycle with a conditional edge that has an END path, and add a step counter as a safety net.
MemorySaver is not persistent. It stores state in RAM. For production, use SqliteSaver or PostgresSaver so checkpoints survive process restarts.
Streaming is different from invocation. app.invoke() returns the final state. app.stream() yields state after each node. For long-running agents, streaming lets you show progress to the user. Always use streaming for user-facing applications.
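The contract can be sketched in plain Python. This is a toy model, not the LangGraph API: `invoke` runs the whole pipeline and returns only the final state, while `stream` yields an accumulated-state snapshot after every node.

```python
from typing import Callable, Iterator

Node = Callable[[dict], dict]

def invoke(nodes: list[Node], state: dict) -> dict:
    """Sketch of app.invoke(): run every node, return only the final state."""
    for node in nodes:
        state = {**state, **node(state)}
    return state

def stream(nodes: list[Node], state: dict) -> Iterator[dict]:
    """Sketch of app.stream(): yield the accumulated state after each node."""
    for node in nodes:
        state = {**state, **node(state)}
        yield state

# Hypothetical two-node pipeline standing in for a compiled graph
plan = lambda s: {"plan": f"outline for {s['topic']}"}
write = lambda s: {"draft": s["plan"].upper()}

final = invoke([plan, write], {"topic": "langgraph"})
for i, snapshot in enumerate(stream([plan, write], {"topic": "langgraph"}), 1):
    print(f"after node {i}: {sorted(snapshot)}")
```

With the real API the loop body would render `snapshot` to the user as each node finishes, instead of waiting for `invoke` to return.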
LangGraph models LLM workflows as directed graphs where nodes represent computational steps (LLM calls, tool invocations, data transformations) and edges represent the flow of state between steps. Unlike linear chains, LangGraph supports conditional branching, cycles for iterative refinement, and parallel fan-out/fan-in, enabling the construction of stateful agent architectures that cannot be expressed as simple sequential pipelines.
| Pattern | Structure | Use Case | Key Mechanism |
|---|---|---|---|
| Linear chain | A → B → C | Sequential processing | Fixed edges |
| Conditional branch | A → B or C | Intent routing | Conditional edges |
| Cycle | A → B → A | Iterative refinement | Loop with termination condition |
| Fan-out/fan-in | A → [B,C] → D | Parallel tools | Parallel node execution |
| Subgraph | A → [Graph] → B | Modular agents | Nested graph nodes |
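The one pattern in the table that the examples above don't show is fan-out/fan-in. Here is a conceptual sketch in plain Python, not the LangGraph API: in LangGraph itself you get this behaviour by adding edges from one node to several successors and letting a reducer such as `operator.add` merge their partial updates when the branches rejoin.

```python
import operator
from concurrent.futures import ThreadPoolExecutor

# Two hypothetical branch nodes that each return a partial update
def search_web(state: dict) -> dict:
    return {"results": [f"web hit for {state['query']}"]}

def search_docs(state: dict) -> dict:
    return {"results": [f"doc hit for {state['query']}"]}

def fan_out_fan_in(state: dict, branches) -> dict:
    """Fan out: run every branch on the same input state in parallel.
    Fan in: merge the partial updates with the list-append reducer."""
    with ThreadPoolExecutor() as pool:
        updates = list(pool.map(lambda node: node(state), branches))
    merged = dict(state)
    for update in updates:
        merged["results"] = operator.add(merged.get("results", []), update["results"])
    return merged

merged_state = fan_out_fan_in({"query": "langgraph"}, [search_web, search_docs])
print(merged_state["results"])
```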
LangGraph's state management uses a TypedDict as the shared state object that flows through the entire graph. Each node receives the current state and returns a partial update; LangGraph merges updates using reducer functions declared in the state's Annotated fields (overwrite by default, or an accumulator such as operator.add for lists). This explicit state schema makes the data contract between nodes clear and enables the checkpointing system to serialize and restore execution state at any node boundary — the foundation for long-running agent tasks that survive process restarts.
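The merge rule can be illustrated with a toy sketch (this is not LangGraph's internal implementation; the simplified `State` mirrors the one used earlier): fields whose `Annotated` metadata carries a callable reducer are combined, plain fields are overwritten.

```python
import operator
from typing import Annotated, TypedDict, get_type_hints

class State(TypedDict):
    messages: Annotated[list, operator.add]  # reducer: accumulate
    step_count: int                          # no reducer: overwrite

def merge_update(state: dict, update: dict, schema: type) -> dict:
    """Toy sketch of the merge rule: apply a field's reducer if one is
    declared in its Annotated metadata, otherwise replace the value."""
    hints = get_type_hints(schema, include_extras=True)
    merged = dict(state)
    for key, value in update.items():
        metadata = getattr(hints.get(key), "__metadata__", ())
        reducer = next((m for m in metadata if callable(m)), None)
        merged[key] = reducer(merged[key], value) if reducer else value
    return merged

before = {"messages": [{"role": "user", "content": "hi"}], "step_count": 0}
node_update = {"messages": [{"role": "assistant", "content": "hello"}], "step_count": 1}
after = merge_update(before, node_update, State)
print(len(after["messages"]), after["step_count"])
```

Note that `messages` grows to two entries while `step_count` is simply replaced, which is why nodes can return partial updates without clobbering the conversation history.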
Conditional edges in LangGraph take a state evaluation function that returns the name of the next node to visit. This allows the graph to implement routing logic based on any aspect of the accumulated state — the content of the most recent tool call result, a counter tracking iteration number, a flag set by a quality evaluation node. The routing function runs after a node completes and determines which branch of the graph to traverse next, enabling dynamic control flow that responds to the intermediate results of the computation.
Checkpointing in LangGraph persists the full execution state after every node transition, enabling long-running agent tasks to be interrupted and resumed without restarting from the beginning. The checkpoint is stored in a configured checkpoint store — SQLite for local development, PostgreSQL or Redis for production deployments. When resuming from a checkpoint, LangGraph restores the exact state at the last saved node, skipping all previously completed steps. This makes LangGraph well-suited for multi-hour research agent tasks that cannot be completed in a single function call within a serverless request timeout.
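The resume semantics can be modelled in a few lines. This is a toy sketch under stated assumptions: a plain dict stands in for SqliteSaver or PostgresSaver, the "graph" is a linear list of named nodes, and resuming simply means starting past the last checkpointed step.

```python
def run_with_checkpoints(nodes, state, store, thread_id):
    """Toy sketch of checkpointed execution: persist the state after every
    node, and on a rerun with the same thread_id resume past completed steps."""
    start_step, state = store.get(thread_id, (0, state))
    for step in range(start_step, len(nodes)):
        _name, node = nodes[step]
        state = {**state, **node(state)}
        store[thread_id] = (step + 1, state)  # checkpoint after each node
    return state

store = {}  # stand-in for a durable checkpoint store
nodes = [
    ("search", lambda s: {"notes": "found 3 sources"}),
    ("write",  lambda s: {"draft": f"draft using {s['notes']}"}),
]
first = run_with_checkpoints(nodes, {"topic": "langgraph"}, store, "thread-42")
# Rerunning with the same thread_id starts past the last checkpoint,
# so no node executes again and the stored state comes back as-is
resumed = run_with_checkpoints(nodes, {"topic": "langgraph"}, store, "thread-42")
print(first["draft"], resumed == first)
```

A crashed or interrupted run would restart from whatever step was last written to the store, which is exactly what makes long tasks survive process restarts.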
Human-in-the-loop interrupts in LangGraph use a special interrupt mechanism that pauses graph execution at a designated node and surfaces the current state to an external interface for human review. The execution is serialized to the checkpoint store in an "interrupted" status; when the human provides their response (approval, modification, or rejection), the execution resumes from the interrupted node with the human feedback incorporated into the state. This pattern is implemented without polling — the graph simply waits in the checkpoint store until resumed, making it compatible with asynchronous human review workflows.
Multi-agent architectures in LangGraph use subgraph nodes that encapsulate entire agent graphs as single nodes in a parent graph. The supervisor agent, implemented as the parent graph, routes tasks to specialist subgraphs — a researcher, a writer, a code interpreter — by calling them as nodes. Each subgraph executes fully and returns its result to the supervisor's state. This hierarchical composition allows complex multi-agent systems to be built from independently testable subgraph modules with clear input/output contracts defined by the shared state TypedDict.