Welcome to Day 11 of #30DaysOfLangChain! On Day 10, we got a foundational understanding of LangGraph, defining static flows with nodes and basic edges. Today, we’re unlocking LangGraph’s true power: Conditional Edges and the ability to create Loops. These features are what enable complex, adaptive, and intelligent behaviors in your LLM applications, especially for building sophisticated agents.

Conditional Edges: Making Your Graph Smart

In a real-world scenario, your LLM application often needs to make decisions. “Should I call a tool?” “Is the user’s question answered?” “Should I ask for more clarification?” Conditional edges allow your graph to dynamically choose the next step based on the outcome of a node’s execution or the current state of the graph.

  • How it works: Instead of a fixed add_edge(node_A, node_B), you use add_conditional_edges(). This method takes:
    • source: The node from which the conditional transitions originate.
    • path: A function (often called a “router” or “decider” function) that takes the current graph state as input and returns a string indicating which branch to follow next.
    • path_map: A dictionary mapping the possible return values of your path function to the corresponding target nodes.
  • The “Router” Function: This function is key. It’s typically defined by you and contains the logic to inspect the current GraphState (e.g., the LLM’s last response, whether a tool was called, etc.) and decide which path to take. The return value of this function must match one of the keys in your path_map dictionary (see the sketch just below).
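
For a concrete picture before the full project later in this post, here is a minimal sketch of a router plus add_conditional_edges(). The state fields and node names (decide, run_tool, answer_directly) are hypothetical, invented purely for this illustration:

from typing import TypedDict
from langgraph.graph import StateGraph, END

class State(TypedDict):
    question: str
    needs_tool: bool
    answer: str

def decide(state: State) -> State:
    # Stand-in for an LLM call that flags whether a tool is required.
    return {"needs_tool": "reverse" in state["question"].lower()}

def run_tool(state: State) -> State:
    # Stand-in for a tool-execution node.
    return {"answer": state["question"][::-1]}

def answer_directly(state: State) -> State:
    return {"answer": "No tool needed."}

def router(state: State) -> str:
    # The "path" function: inspect the state and return a key of the mapping below.
    return "tool" if state["needs_tool"] else "direct"

workflow = StateGraph(State)
workflow.add_node("decide", decide)
workflow.add_node("run_tool", run_tool)
workflow.add_node("answer_directly", answer_directly)
workflow.set_entry_point("decide")
workflow.add_conditional_edges(
    "decide",                                            # source node
    router,                                              # path (router) function
    {"tool": "run_tool", "direct": "answer_directly"},   # return value -> target node
)
workflow.add_edge("run_tool", END)
workflow.add_edge("answer_directly", END)

app = workflow.compile()
print(app.invoke({"question": "Please reverse this", "needs_tool": False, "answer": ""}))

Because the router returns "tool" for this input, the graph takes the run_tool branch; a different question would send it down the answer_directly branch instead.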

Looping / Cycles: The Heart of Agentic Behavior

A Directed Acyclic Graph (DAG) by definition contains no cycles, but LangGraph is not limited to DAGs: it lets you build agentic loops directly into the graph. How? By using conditional edges (together with regular edges) that lead back to a previous node.

Consider a typical agent’s thought process:

  1. Think/Call LLM: The LLM decides what to do.
  2. Act/Call Tool: If the LLM decides to use a tool, that tool is executed.
  3. Observe: The result of the tool is observed.
  4. Loop Back to Think/Call LLM: The agent needs to re-evaluate the situation with the new observation.

LangGraph facilitates this loop. A conditional edge from the “Call LLM” node might point to “Call Tool” if a tool is needed, and then a regular edge from “Call Tool” points back to “Call LLM,” forming a cycle that continues until the LLM determines a final answer has been reached, at which point it transitions to an END node.
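
To make the loop concrete, here is a purely illustrative plain-Python analogy of the control flow such a cycle encodes. The call_llm and run_tool helpers below are stubs invented for this sketch, not the real implementations from the project later in this post:

# Illustrative only: the while-loop that a call_llm -> call_tool -> call_llm cycle encodes.
def call_llm(messages):
    # Stub: a real node would invoke the model with the full message history.
    if not any("tool_result" in m for m in messages):
        return {"type": "tool_call", "tool": "word_reverser", "input": "LangGraph"}
    return {"type": "final_answer", "content": "hparGgnaL"}

def run_tool(name, tool_input):
    # Stub: dispatch to the named tool.
    return tool_input[::-1] if name == "word_reverser" else None

messages = ["user: Reverse the word 'LangGraph'."]
while True:
    decision = call_llm(messages)                             # Think / Call LLM
    if decision["type"] == "final_answer":                    # Conditional edge to END
        print(decision["content"])
        break
    result = run_tool(decision["tool"], decision["input"])    # Act / Call Tool
    messages.append(f"tool_result: {result}")                 # Observe, then loop back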

Revisiting Graph State for Agents

For agentic applications, your GraphState often needs to include messages. This list of messages typically represents the conversation history (user inputs, LLM responses, tool calls, tool outputs) that the LLM uses for context. LangGraph’s add_messages (used with Annotated[List[BaseMessage], add_messages]) automatically appends new messages to this list.
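
As a quick, minimal illustration of that behavior (the ChatState name and respond node below are made up for this snippet): because the field is annotated with add_messages, a node that returns {"messages": [new_message]} appends to the history instead of overwriting it.

from typing import Annotated, List, TypedDict
from langchain_core.messages import AIMessage, BaseMessage, HumanMessage
from langgraph.graph import StateGraph, END
from langgraph.graph.message import add_messages

class ChatState(TypedDict):
    messages: Annotated[List[BaseMessage], add_messages]

def respond(state: ChatState) -> ChatState:
    # Returning a one-element list: add_messages appends it to the existing history.
    return {"messages": [AIMessage(content="Hi there!")]}

graph = StateGraph(ChatState)
graph.add_node("respond", respond)
graph.set_entry_point("respond")
graph.add_edge("respond", END)
app = graph.compile()

result = app.invoke({"messages": [HumanMessage(content="Hello")]})
print(len(result["messages"]))  # 2: the original human message plus the appended AI reply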

For more details, check out the official LangGraph documentation.

Project: A Simple Looping Agent with Conditional Routing

We’ll build a minimal LangGraph agent that can decide whether to use a tool or provide a final answer. If it decides to use a tool, it executes the tool and then loops back to the LLM to re-evaluate the situation with the tool’s output.

Before you run the code:

  • Ensure Ollama is installed and running (ollama serve) if using Ollama.
  • Pull any necessary Ollama models (e.g., llama2).
  • Ensure your OPENAI_API_KEY is set if using OpenAI models.
  • Ensure langgraph is installed (pip install langgraph).
# Save this as day11-langgraph-conditional-looping.py
import os
from typing import TypedDict, Annotated, List, Union
from langchain_core.messages import BaseMessage, HumanMessage, AIMessage, SystemMessage, ToolMessage
from langchain_openai import ChatOpenAI
from langchain_ollama import ChatOllama
from langchain_core.tools import tool
from langgraph.graph import StateGraph, END
from langgraph.graph.message import add_messages


# --- Configuration ---
LLM_PROVIDER = os.getenv("LLM_PROVIDER", "openai").lower()
OLLAMA_MODEL_CHAT = os.getenv("OLLAMA_MODEL_CHAT", "llama2").lower()

# --- Step 1: Define Graph State ---
# The state will primarily contain a list of messages.
# Annotated[List[BaseMessage], add_messages] ensures new messages are appended.
class AgentState(TypedDict):
    messages: Annotated[List[BaseMessage], add_messages]

# --- Step 2: Define Custom Tools (reusing from Day 8/9) ---
@tool
def word_reverser(word: str) -> str:
    """Reverses a given word or string."""
    print(f"\n--- Tool Action: Executing word_reverser on '{word}' ---")
    return word[::-1]

@tool
def character_counter(text: str) -> int:
    """Counts the number of characters in a given string."""
    print(f"\n--- Tool Action: Executing character_counter on '{text}' ---")
    return len(text)

tools = [word_reverser, character_counter]
print(f"Available tools: {[tool.name for tool in tools]}\n")

# --- Step 3: Initialize LLM ---
def initialize_llm(provider, model_name=None, temp=0.7):
    if provider == "openai":
        if not os.getenv("OPENAI_API_KEY"):
            raise ValueError("OPENAI_API_KEY not set for OpenAI provider.")
        # Bind tools to OpenAI LLM for function calling
        return ChatOpenAI(model=model_name or "gpt-3.5-turbo", temperature=temp).bind_tools(tools)
    elif provider == "ollama":
        try:
            # For Ollama, tool calling is often handled by specific models or custom parsing.
            # Here we'll use a general model and rely on custom parsing in the node.
            llm = ChatOllama(model=model_name or OLLAMA_MODEL_CHAT, temperature=temp)
            llm.invoke("Hello!") # Test connection
            return llm
        except Exception as e:
            print(f"Error connecting to Ollama LLM or model '{model_name or OLLAMA_MODEL_CHAT}' not found: {e}")
            print("Please ensure Ollama is running and the specified model is pulled.")
            exit()
    else:
        raise ValueError(f"Invalid LLM provider: {provider}. Must be 'openai' or 'ollama'.")

llm = initialize_llm(LLM_PROVIDER)
print(f"Using LLM: {LLM_PROVIDER} ({llm.model_name if hasattr(llm, 'model_name') else OLLAMA_MODEL_CHAT})\n")


# --- Step 4: Define Graph Nodes ---

def call_llm(state: AgentState) -> AgentState:
    """
    Node to call the LLM and get its response.
    The LLM will decide whether to call a tool or give a final answer.
    """
    print("--- Node: call_llm ---")
    messages = state['messages']
    # If using Ollama, we need to explicitly inject tool definitions into the prompt
    if LLM_PROVIDER == "ollama":
        tool_names = ", ".join([t.name for t in tools])
        tool_descriptions = "\n".join([f"Tool Name: {t.name}\nTool Description: {t.description}\nTool Schema: {t.args_schema.schema() if t.args_schema else 'No schema'}" for t in tools])
        # A simple prompt hint for Ollama to use tools.
        # More robust tool calling with Ollama might require specific models (e.g., function-calling fine-tunes)
        # or more sophisticated parsing.
        system_message = (
            "You are a helpful assistant. You have access to the following tools: "
            f"{tool_names}.\n\n"
            f"Here are their descriptions and schemas:\n{tool_descriptions}\n\n"
            "If you need to use a tool, respond with a JSON object like: "
            "```json\n{{\"tool_name\": \"<tool_name>\", \"tool_input\": {{...}}}}\n```. "
            "Otherwise, respond with your final answer."
        )
        prompt = ChatPromptTemplate.from_messages([
            ("system", system_message),
            *messages # Pass all previous messages
        ])
        response = llm.invoke(prompt)
    else: # OpenAI handles tools automatically when bound
        response = llm.invoke(messages)

    # Return the LLM's response appended to the messages
    return {"messages": [response]}


def call_tool(state: AgentState) -> AgentState:
    """
    Node to execute a tool if the LLM has decided to call one.
    It takes the last AI message (which should contain tool calls) and executes them.
    """
    print("--- Node: call_tool ---")
    messages = state['messages']
    last_message = messages[-1]

    tool_outputs = []
    # OpenAI model with tool_calls
    if last_message.tool_calls:
        for tool_call in last_message.tool_calls:
            # tool_calls entries are dicts with "name", "args", and "id" keys
            tool_name = tool_call["name"]
            tool_input = tool_call["args"]
            print(f"Executing tool: {tool_name} with input: {tool_input}")
            # Find the tool by name and execute it
            selected_tool = next(t for t in tools if t.name == tool_name)
            output = selected_tool.invoke(tool_input)
            tool_outputs.append(ToolMessage(content=str(output), tool_call_id=tool_call["id"]))

    # Basic parsing for Ollama if it tried to output JSON tool call
    elif LLM_PROVIDER == "ollama" and isinstance(last_message.content, str) and "tool_name" in last_message.content:
        import json
        try:
            tool_call_data = json.loads(last_message.content.strip("`").strip("json").strip()) # Attempt to parse JSON
            tool_name = tool_call_data.get("tool_name")
            tool_input = tool_call_data.get("tool_input", {})
            print(f"Executing Ollama-parsed tool: {tool_name} with input: {tool_input}")
            selected_tool = next(t for t in tools if t.name == tool_name)
            output = selected_tool.invoke(tool_input)
            tool_outputs.append(AIMessage(content=f"Tool output: {output}")) # Represent as AI message for simplicity

        except (json.JSONDecodeError, StopIteration) as e:
            print(f"Ollama tool parsing failed or tool not found: {e}")
            tool_outputs.append(AIMessage(content=f"Error parsing tool call: {last_message.content}"))
    else:
        print("No tool calls detected or parsed for execution.")
        # If no tool calls, just return the state as is, or an error message
        # For simplicity, we assume an error or a direct answer was intended by LLM
        pass

    return {"messages": tool_outputs}


# --- Step 5: Define the Routing/Decider Function ---
def route_decision(state: AgentState) -> str:
    """
    Decides the next step based on the last message from the LLM.
    Returns 'tool_call' if a tool needs to be called, otherwise 'end'.
    """
    print("--- Decider: route_decision ---")
    last_message = state['messages'][-1]
    # Check if the LLM outputted a tool call
    if last_message.tool_calls: # For OpenAI's structured tool calls
        print("Decision: LLM wants to call a tool.")
        return "tool_call"
    # Basic check for Ollama's string output, might need more robust parsing for production
    if LLM_PROVIDER == "ollama" and isinstance(last_message.content, str) and "tool_name" in last_message.content:
        print("Decision: Ollama LLM seems to want to call a tool (based on string content).")
        return "tool_call"
    else:
        print("Decision: LLM has a final answer or no tool needed.")
        return "end"


# --- Step 6: Build the LangGraph with Conditional Edges ---
print("--- Building the LangGraph with Conditional Edges & Loops ---")
workflow = StateGraph(AgentState)

# Add nodes
workflow.add_node("call_llm", call_llm)
workflow.add_node("call_tool", call_tool)

# Set entry point
workflow.set_entry_point("call_llm")

# Add conditional edge from call_llm:
# If route_decision returns 'tool_call', go to 'call_tool'.
# If route_decision returns 'end', go to END.
workflow.add_conditional_edges(
    "call_llm", # Source node
    route_decision, # The function that decides the next step
    {
        "tool_call": "call_tool",
        "end": END # Use END to signify graph termination
    }
)

# Add a normal edge from call_tool back to call_llm
# This creates the loop: tool executes, then LLM re-evaluates with tool output
workflow.add_edge("call_tool", "call_llm")

# Compile the graph
app = workflow.compile()
print("LangGraph compiled successfully with conditional edges and looping logic.\n")

# --- Step 7: Invoke the Graph ---
print("--- Invoking the LangGraph (Verbose output below) ---")

# Question requiring a tool
print("\n=== Question 1: Reverse 'LangGraph' ===")
inputs_tool_req = {"messages": [HumanMessage(content="Reverse the word 'LangGraph'.")]}
result_tool_req = app.invoke(inputs_tool_req)
print(f"\nFinal State (Tool Req): {result_tool_req['messages'][-1].content}")


# Question not requiring a tool
print("\n=== Question 2: What is the capital of Japan? ===")
inputs_no_tool = {"messages": [HumanMessage(content="What is the capital of Japan?")]}
result_no_tool = app.invoke(inputs_no_tool)
print(f"\nFinal State (No Tool): {result_no_tool['messages'][-1].content}")

# Question requiring multiple steps (tool + follow-up)
print("\n=== Question 3: Reverse 'Python' and count characters in the reversed word ===")
inputs_multi_step = {"messages": [HumanMessage(content="Reverse the word 'Python' and then count characters in the reversed word.")]}
result_multi_step = app.invoke(inputs_multi_step)
print(f"\nFinal State (Multi-Step): {result_multi_step['messages'][-1].content}")

Code Explanation:

  1. AgentState: We define AgentState primarily with messages, using Annotated[List[BaseMessage], add_messages] to ensure new messages are always appended. This is the core communication channel for our agent.
  2. Tools & LLM Initialization:
    • We reuse word_reverser and character_counter from Day 8/9.
    • Crucially for OpenAI: When initializing ChatOpenAI, we use .bind_tools(tools). This tells OpenAI models about the available tools, allowing them to output structured tool_calls in their responses.
    • For Ollama: Tool calling is more complex for generic Ollama models. We add a basic system message to the prompt to hint at JSON output for tool calls and then try to parse it in call_tool. For robust Ollama tool calling, consider models specifically fine-tuned for function calling or more advanced parsing.
  3. Nodes (call_llm, call_tool):
    • call_llm: This node calls the LLM with the current messages from the state. For OpenAI, the LLM will automatically decide whether to call a tool or give a final answer. For Ollama, we provide a prompt that guides it to output a JSON string for tool calls. It then updates the state by appending the LLM’s response message.
    • call_tool: This node inspects the last_message from the state. If it contains tool_calls (for OpenAI) or can be parsed as a tool call (for Ollama, rudimentary), it executes the specified tool and appends a ToolMessage (or an AIMessage with the tool output for Ollama) to the state.
  4. Router Function (route_decision):
    • This is our conditional logic. It looks at the last_message from the call_llm node.
    • If the LLM’s response indicates a tool call (either via last_message.tool_calls for OpenAI or by parsing the content for Ollama’s JSON attempt), it returns "tool_call".
    • Otherwise, it returns "end", signaling that the LLM has provided a final answer.
  5. Graph Construction (StateGraph, add_conditional_edges, add_edge):
    • workflow.add_node(): Adds our call_llm and call_tool functions as nodes.
    • workflow.set_entry_point("call_llm"): The graph always starts by calling the LLM.
    • workflow.add_conditional_edges("call_llm", route_decision, {"tool_call": "call_tool", "end": END}): This is the core! After call_llm runs, route_decision is executed. If it returns "tool_call", the graph transitions to call_tool. If it returns "end", the graph terminates (END).
    • workflow.add_edge("call_tool", "call_llm"): This creates the loop! After a tool is executed (call_tool), the graph unconditionally transitions back to call_llm. This allows the LLM to process the tool’s output and decide the next step (e.g., provide a final answer or call another tool).

This project demonstrates the fundamental patterns for building dynamic, stateful agents with LangGraph, capable of decision-making and iterative tool use.


I’m Arpan

I’m a Software Engineer driven by curiosity and a deep interest in Generative AI Technologies. I believe we’re standing at the frontier of a new era—where machines not only learn but create, and I’m excited to explore what’s possible at this intersection of intelligence and imagination.

When I’m not writing code or experimenting with new AI models, you’ll probably find me travelling, soaking in new cultures, or reading a book that challenges how I think. I thrive on new ideas—especially ones that can be turned into meaningful, impactful projects. If it’s bold, innovative, and GenAI-related, I’m all in.

“The future belongs to those who believe in the beauty of their dreams.” – Eleanor Roosevelt

“Imagination is more important than knowledge. For knowledge is limited, whereas imagination embraces the entire world.” – Albert Einstein

This blog, MLVector, is my space to share technical insights, project breakdowns, and explorations in GenAI—from the models shaping tomorrow to the code powering today.

Let’s build the future, one vector at a time.

Let’s connect