# LangGraph Quickstart
**Line:** Line 2 (LangGraph Self-Built) · **Benchmarked against:** Anthropic TypeScript V2 (preview)
**Prerequisites:** Python 3.11+, a `.venv` with LangGraph installed
Build a LangGraph agent from scratch that connects to SuperPortia's infrastructure.
## Step 1: Set up environment

```shell
# Create a virtual environment
python -m venv .venv
source .venv/bin/activate

# Install the LangGraph ecosystem
pip install langgraph langgraph-checkpoint-sqlite langchain-mcp-adapters
```
## Step 2: Define the state

```python
from langgraph.graph import StateGraph, MessagesState, START, END
from langchain_core.messages import HumanMessage

# MessagesState tracks conversation history as a list of messages
graph_builder = StateGraph(MessagesState)
```
## Step 3: Create the agent node

```python
from langchain_google_genai import ChatGoogleGenerativeAI

# Use Gemini for low-cost operations
llm = ChatGoogleGenerativeAI(model="gemini-2.5-flash")

def agent_node(state: MessagesState):
    """The agent decides what to do next."""
    # `llm_with_tools` is defined in Step 4; Python resolves the name
    # at call time, so the agent can emit tool calls once tools are bound
    response = llm_with_tools.invoke(state["messages"])
    return {"messages": [response]}

graph_builder.add_node("agent", agent_node)
```
## Step 4: Add tools

```python
import json

import requests
from langchain_core.tools import tool

@tool
def search_knowledge(query: str) -> str:
    """Search the Universal Brain for knowledge."""
    # This would connect to Cloud UB via its HTTP API
    response = requests.post(
        "https://your-worker.workers.dev/brain/search",
        json={"query": query},
        timeout=30,
    )
    response.raise_for_status()
    # The endpoint returns JSON; serialize it so the tool returns a string
    return json.dumps(response.json())

# Bind tools to the LLM so it can emit tool calls
tools = [search_knowledge]
llm_with_tools = llm.bind_tools(tools)
```
## Step 5: Add routing logic

```python
from langgraph.prebuilt import ToolNode

tool_node = ToolNode(tools)
graph_builder.add_node("tools", tool_node)

def should_use_tools(state: MessagesState):
    """Route to tools if the agent wants to use them."""
    last_message = state["messages"][-1]
    if hasattr(last_message, "tool_calls") and last_message.tool_calls:
        return "tools"
    return END

graph_builder.add_edge(START, "agent")
graph_builder.add_conditional_edges("agent", should_use_tools)
graph_builder.add_edge("tools", "agent")
```
## Step 6: Compile and run

```python
import sqlite3

from langgraph.checkpoint.sqlite import SqliteSaver

# Add memory for state persistence. In recent versions,
# SqliteSaver.from_conn_string returns a context manager,
# so construct the saver from a connection directly.
memory = SqliteSaver(sqlite3.connect(":memory:", check_same_thread=False))
app = graph_builder.compile(checkpointer=memory)

# Run the agent; thread_id keys the persisted conversation
config = {"configurable": {"thread_id": "session-001"}}
result = app.invoke(
    {"messages": [HumanMessage(content="What do we know about fleet architecture?")]},
    config=config,
)
print(result["messages"][-1].content)
## What you built

| Component | What it does |
|---|---|
| `StateGraph` | Manages conversation state |
| `agent_node` | LLM decides the next action |
| `ToolNode` | Executes tool calls |
| `SqliteSaver` | Persists state across turns |
| Conditional edges | Routes between agent and tools |
## Next steps
| Enhancement | How |
|---|---|
| Add more tools | MCP adapters for Cloud UB, file ops |
| Multi-agent | Use supervisor.py for orchestration |
| Better engine | Swap Gemini for Claude for complex tasks |
| Persistent checkpoints | Use file-based SQLite instead of in-memory |
## Related pages
| Page | Relationship |
|---|---|
| Python SDK | Full Python SDK reference |
| Agent SDK Overview | Dual lines context |
| Engine Overview | Available engines for LangGraph |