Welcome to Day 10 of #30DaysOfLangChain! We’ve spent the first week mastering LangChain Expression Language (LCEL), building RAG pipelines, and even creating autonomous agents. As we move into the more advanced stages, complex, multi-step LLM applications sometimes need more than linear chains or simple agent loops: they need explicit state management, cyclical execution, and the ability to define highly custom logic for different scenarios.

This is where LangGraph comes in. LangGraph is a powerful library built on top of LangChain, designed specifically for orchestrating stateful, multi-actor LLM applications as graphs. Its big addition over plain chains is support for cycles, but for today’s introduction we’ll model the simplest case: a Directed Acyclic Graph (DAG).

What is LangGraph? Why is it different?

Think of LangGraph as a canvas for designing sophisticated LLM workflows. While LCEL excels at defining sequences of operations, LangGraph allows you to:

  • Define State: Explicitly manage the shared information that flows through your application, enabling complex reasoning and decision-making over time.
  • Handle Cycles: Unlike basic LCEL chains (which are typically acyclic), LangGraph can define loops, allowing agents to repeatedly use tools, re-evaluate, or engage in multi-turn conversations.
  • Orchestrate Multiple Actors: Design systems where different “nodes” (e.g., an LLM, a tool, a human, another agent) take turns acting on the shared state.
  • Visualize and Debug: The graph structure often lends itself well to visualization, making complex flows easier to understand and debug.

Understanding the Directed Acyclic Graph (DAG) Concept

At its heart, LangGraph is about building a Directed Acyclic Graph (DAG).

  • Graph: A collection of nodes (or vertices) and edges (or connections between nodes).
  • Directed: Each edge has a specific direction, indicating the flow from one node to another.
  • Acyclic: There are no cycles in the graph (i.e., you can’t start at a node and follow the edges back to that same node). While the overall graph is acyclic for simple flows, LangGraph does support internal cycles for agentic behavior, which we’ll explore later! For now, we’ll focus on simpler, non-cyclical flows.

In LangGraph, nodes represent discrete steps or actors, and edges define how control and state transition between these steps.

Core Components of LangGraph

  1. StateGraph: This is the foundational class for defining your graph. You initialize it with a GraphState (or schema for your state), which dictates the type of data that will be passed between nodes.
  2. Graph State: This is the shared, mutable object that all nodes operate on. Each node receives the current GraphState, performs its operation, and then returns an update to this state. LangGraph merges these updates into the state, using any reducers you declare on individual keys. Defining your GraphState clearly is crucial for designing robust graphs.
  3. Nodes (add_node):
    • Nodes are the individual computational units or “steps” in your graph.
    • They are typically Python functions or LangChain Runnables.
    • Each node receives the current GraphState as input and must return a dictionary representing updates to that state.
    • You add nodes to your StateGraph using the add_node(name: str, runnable) method.
  4. Edges (add_edge):
    • Edges define the flow of execution between nodes.
    • add_edge(start_node: str, end_node: str) creates a direct transition from start_node to end_node.
    • You can also define conditional edges (which we’ll cover in depth later) that allow the flow to change based on the state or output of a node; a quick sketch follows just below.
  5. Entry and Finish Points:
    • set_entry_point(node_name: str): Specifies the first node that the graph will execute when it’s invoked.
    • set_finish_point(node_name: str): Designates a node where the graph execution should terminate and return its final state.

Once you’ve defined your nodes, edges, entry, and finish points, you compile() the StateGraph into a Runnable object that can then be invoke()d, similar to an LCEL chain.
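
We’ll dig into conditional edges properly on a later day, but here is a quick, self-contained taste. Everything domain-specific in this sketch is made up for illustration (the RouteState schema, the classify / short_node / long_node nodes, and the length-based router); only the LangGraph calls themselves (StateGraph, add_node, add_conditional_edges, add_edge, END, compile, invoke) are the library’s real API.

from typing import TypedDict
from langgraph.graph import StateGraph, END

# Hypothetical state for this sketch only (not today's project state).
class RouteState(TypedDict):
    text: str

def classify(state: RouteState) -> dict:
    # Placeholder first node; in a real graph this might call an LLM.
    return {"text": state["text"]}

def short_node(state: RouteState) -> dict:
    return {"text": state["text"] + " [handled as short]"}

def long_node(state: RouteState) -> dict:
    return {"text": state["text"] + " [handled as long]"}

def route_by_length(state: RouteState) -> str:
    # The router inspects the state and returns a key from the mapping below.
    return "short" if len(state["text"]) < 20 else "long"

sketch = StateGraph(RouteState)
sketch.add_node("classify", classify)
sketch.add_node("short_node", short_node)
sketch.add_node("long_node", long_node)
sketch.set_entry_point("classify")
# After 'classify' finishes, the router decides which node runs next.
sketch.add_conditional_edges(
    "classify",
    route_by_length,
    {"short": "short_node", "long": "long_node"},
)
sketch.add_edge("short_node", END)
sketch.add_edge("long_node", END)

print(sketch.compile().invoke({"text": "Hi"})["text"])  # -> "Hi [handled as short]"

The router receives the current state and returns a key; the mapping passed to add_conditional_edges translates that key into the name of the next node (or END).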

For more details, check out the official LangGraph documentation.


Project: A Minimal LangGraph DAG

We’ll create a very simple graph with two nodes and a single edge. The first node will receive an input string, and the second node will modify that string. This will demonstrate how state transitions occur within a LangGraph.

Before you run the code:

  • Ensure you have langgraph installed (pip install langgraph).

from typing import TypedDict, Annotated, List
from langgraph.graph import StateGraph, END
from langgraph.graph.message import add_messages
import operator

# --- Step 1: Define the Graph State ---
# This defines the schema of the state that will be passed between nodes.
# 'text' will store the string we're processing.
# 'messages' is a common state key for conversational graphs (though we won't use it today).
# Annotated[str, operator.add] attaches a reducer: whatever a node returns for 'text'
# is concatenated onto the existing value instead of replacing it.
class GraphState(TypedDict):
    """
    Represents the state of our graph.

    - text: string, the main text being processed.
    - messages: list of messages, for conversational context (unused in this simple demo).
    """
    text: Annotated[str, operator.add]  # string updates are concatenated onto the current value
    messages: Annotated[List[str], add_messages]  # add_messages appends; it is designed for chat-message lists

# --- Step 2: Define Nodes (as Python functions) ---
# Each node receives the current state and returns a *partial* update to the state.
# Because 'text' uses the operator.add reducer, each node returns only the piece
# it wants appended; LangGraph concatenates it onto the existing value.

def start_node(state: GraphState) -> dict:
    """
    The initial node that processes the incoming input text.
    """
    print("--- Node: start_node ---")
    current_text = state.get("text", "")
    print(f"Current text in state: '{current_text}'")
    # In a real app, this might do initial processing or validation.
    return {"text": " (processed by start_node)"}

def process_node(state: GraphState) -> dict:
    """
    A node that further processes the text by appending a string.
    """
    print("--- Node: process_node ---")
    current_text = state.get("text", "")
    print(f"Current text in state: '{current_text}'")
    # In a real app, this could be an LLM call, a tool use, etc.
    return {"text": " (appended by process_node)"}

# --- Step 3: Build the LangGraph ---
print("--- Building the LangGraph ---")
workflow = StateGraph(GraphState)

# Add nodes to the workflow
workflow.add_node("start_node", start_node)
workflow.add_node("process_node", process_node)

# Set the entry point of the graph
workflow.set_entry_point("start_node")

# Add a simple edge from start_node to process_node
workflow.add_edge("start_node", "process_node")

# Set the finish point of the graph
workflow.set_finish_point("process_node")

# Compile the graph into a runnable
app = workflow.compile()
print("Graph compiled successfully.\n")

# --- Step 4: Invoke the Graph ---
print("--- Invoking the Graph ---")

initial_input = "Hello LangGraph"
print(f"Initial input to graph: '{initial_input}'")

# When invoking, pass the initial state for the 'text' key.
# LangGraph will automatically initialize the state with this input.
final_state = app.invoke({"text": initial_input})

print("\n--- Final Graph State ---")
print(f"Final Text: {final_state['text']}")
print("-" * 50)

# You can also visualize the compiled graph as a Mermaid diagram.
# draw_mermaid_png() renders via the mermaid.ink web service by default,
# so it needs network access (or extra local rendering dependencies).
# from IPython.display import Image, display
# try:
#     display(Image(app.get_graph().draw_mermaid_png()))
# except Exception as e:
#     print(f"Could not generate graph visualization: {e}")
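
If everything is wired up correctly, running the script should print roughly the following (the wording comes straight from the print statements above):

--- Building the LangGraph ---
Graph compiled successfully.

--- Invoking the Graph ---
Initial input to graph: 'Hello LangGraph'
--- Node: start_node ---
Current text in state: 'Hello LangGraph'
--- Node: process_node ---
Current text in state: 'Hello LangGraph (processed by start_node)'

--- Final Graph State ---
Final Text: Hello LangGraph (processed by start_node) (appended by process_node)
--------------------------------------------------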

Code Explanation:

  1. GraphState Definition:
    • We use TypedDict to define the schema of our graph’s state. This provides type hints and clarity.
    • text: Annotated[str, operator.add] is important! It attaches a reducer to the text key: whenever a node returns a value for text, LangGraph combines it with the existing value using operator.add (for strings, that means +), appending rather than overwriting. That is why each node returns only the fragment it wants appended. Plain lists can use operator.add the same way; add_messages from langgraph.graph.message is the usual reducer for lists of chat messages.
  2. Node Functions:
    • start_node and process_node are simple Python functions.
    • Each function takes the current state as input.
    • Crucially, each function returns a dictionary containing only the updates to the state. LangGraph then merges these updates into the current state, applying any reducer declared for each key.
  3. StateGraph Initialization: We create an instance of StateGraph, passing our GraphState definition.
  4. add_node(): We add our two functions as nodes to the workflow, giving them unique string names ("start_node", "process_node").
  5. set_entry_point(): We specify that the graph execution should always begin with "start_node".
  6. add_edge(): We create a simple directed edge from "start_node" to "process_node". This means after start_node finishes, process_node will execute next.
  7. set_finish_point(): We tell the graph that after process_node completes, the graph execution is finished.
  8. workflow.compile(): This step compiles the defined graph into a Runnable object. This compiled object is what you can then invoke.
  9. app.invoke(): We invoke the compiled graph with an initial input. LangGraph takes this initial input and uses it to populate the GraphState before executing the entry point. The final returned value is the complete GraphState after execution.
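
If you’d rather watch the state evolve node by node instead of only seeing the final result, the compiled graph also exposes a stream() method. Here is a small sketch reusing the app compiled above; the chunk shapes shown in the comments are indicative of the "updates" stream mode:

# Stream per-node state updates instead of waiting for the final state.
for chunk in app.stream({"text": "Hello LangGraph"}, stream_mode="updates"):
    # With stream_mode="updates", each chunk is keyed by the node that just ran
    # and contains only the update that node returned.
    print(chunk)
# Roughly:
# {'start_node': {'text': ' (processed by start_node)'}}
# {'process_node': {'text': ' (appended by process_node)'}}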

This project is a foundational step into LangGraph, demonstrating the basic construction of a graph and how state flows between nodes. In upcoming days, we’ll build upon this to create more dynamic and powerful applications.

I’m Arpan

I’m a Software Engineer driven by curiosity and a deep interest in Generative AI Technologies. I believe we’re standing at the frontier of a new era—where machines not only learn but create, and I’m excited to explore what’s possible at this intersection of intelligence and imagination.

When I’m not writing code or experimenting with new AI models, you’ll probably find me travelling, soaking in new cultures, or reading a book that challenges how I think. I thrive on new ideas—especially ones that can be turned into meaningful, impactful projects. If it’s bold, innovative, and GenAI-related, I’m all in.

“The future belongs to those who believe in the beauty of their dreams.” – Eleanor Roosevelt

“Imagination is more important than knowledge. For knowledge is limited, whereas imagination embraces the entire world.” – Albert Einstein

This blog, MLVector, is my space to share technical insights, project breakdowns, and explorations in GenAI—from the models shaping tomorrow to the code powering today.

Let’s build the future, one vector at a time.

Let’s connect