Welcome to Day 18 of #30DaysOfLangChain – LangChain 0.3 Edition! After conceptually exploring multi-agent architectures yesterday, today we get hands-on. We’ll build a practical, collaborative multi-agent system using LangGraph, demonstrating how different AI “agents” can work together, iteratively refining a shared task.

The power of LangGraph truly shines here. Its graph-based structure, combined with flexible state management and conditional transitions, makes it an ideal framework for orchestrating complex interactions between multiple AI components. We’ll model a common real-world scenario: a Writer Agent producing content and an Editor Agent critiquing it, sending it back for revisions until the content meets a satisfactory standard.

The Power of LangGraph for Multi-Agent Systems

LangGraph’s core features are perfectly suited for multi-agent workflows:

  • Agents as Nodes: Each distinct agent (e.g., Writer, Editor, Researcher) or a specific sub-task of an agent can be represented as a node in the graph. This modularity makes the system easy to understand, build, and extend.
  • Passing Information via State: The shared state object acts as the central “blackboard” or “memory” for the entire multi-agent system. Agents communicate by reading from and writing to this shared state. For instance, the Writer writes to a draft field, and the Editor reads that draft and writes feedback back to the state.
  • Conditional Transitions for Iteration and Orchestration: This is critical for collaborative workflows. LangGraph’s add_conditional_edges allows the flow to change dynamically based on conditions in the state. In our project, the Editor will decide if the content is satisfactory (leading to END) or if it needs_revision (looping back to the Writer).

This architecture enables complex, adaptive behaviors that mimic human team collaboration, providing a robust solution for tasks requiring iterative refinement.

Project: Iterative Writer-Editor Collaboration Workflow

Our project for today will set up a LangGraph workflow with two key agents:

  1. Writer Agent: Responsible for generating an initial draft and then revising it based on feedback.
  2. Editor Agent: Responsible for reviewing the draft, providing constructive feedback if necessary, and deciding if the content is satisfactory.

The workflow will be iterative: The Writer creates a draft, the Editor reviews it and provides feedback, and if revisions are needed, the draft goes back to the Writer. This loop continues until the Editor is satisfied, at which point the workflow concludes.

Before you run the code:

  • Ensure all standard LangChain/LangGraph dependencies are installed.
  • You’ll need langchain-openai or langchain-ollama installed based on your LLM_PROVIDER.
  • If using OpenAI, set your OPENAI_API_KEY environment variable. If using Ollama, ensure Ollama is running and you’ve pulled the model (e.g., ollama pull llama2).
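Assuming a standard Python environment, the dependencies can be installed from PyPI roughly like this (pin versions as needed for the 0.3 line):

```shell
pip install langchain langchain-core langgraph python-dotenv

# Then one provider package, matching your LLM_PROVIDER:
pip install langchain-openai   # for the 'openai' provider
pip install langchain-ollama   # for the 'ollama' provider
```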
import os
from typing import TypedDict, Annotated, List
from langchain_core.messages import BaseMessage, HumanMessage, AIMessage
from langchain_core.prompts import ChatPromptTemplate
from langchain_openai import ChatOpenAI
from langchain_ollama import ChatOllama
from langgraph.graph import StateGraph, END
from langgraph.graph.message import add_messages
import json # For parsing editor's structured output

# Load environment variables from a .env file
from dotenv import load_dotenv
load_dotenv()

# --- Configuration ---
LLM_PROVIDER = os.getenv("LLM_PROVIDER", "openai").lower() # 'openai' or 'ollama'
OLLAMA_MODEL_CHAT = os.getenv("OLLAMA_MODEL_CHAT", "llama2").lower() # e.g., 'llama2', 'mistral'

# --- LLM Initialization ---
def initialize_llm(provider: str, model_name: str = None, temp: float = 0.7):
    """Initializes and returns the chat model for the given provider."""
    if provider == "openai":
        if not os.getenv("OPENAI_API_KEY"):
            raise ValueError("OPENAI_API_KEY not set for OpenAI provider.")
        return ChatOpenAI(model=model_name or "gpt-3.5-turbo", temperature=temp)
    elif provider == "ollama":
        try:
            llm_instance = ChatOllama(model=model_name or OLLAMA_MODEL_CHAT, temperature=temp)
            # Test connection to ensure Ollama is running and model is available
            llm_instance.invoke("Hello!")
            return llm_instance
        except Exception as e:
            print(f"Error connecting to Ollama LLM or model '{model_name or OLLAMA_MODEL_CHAT}' not found: {e}")
            print("Please ensure Ollama is running and the specified model is pulled (e.g., 'ollama pull llama2').")
            exit()
    else:
        raise ValueError(f"Invalid LLM provider: {provider}. Must be 'openai' or 'ollama'.")

# Initialize the chosen LLM
llm = initialize_llm(LLM_PROVIDER)
print(f"Using LLM: {LLM_PROVIDER} ({llm.model_name if hasattr(llm, 'model_name') else OLLAMA_MODEL_CHAT})\n")


# --- 1. Agent State Definition ---
class CollaborativeAgentState(TypedDict):
    """
    Represents the shared memory for the collaborative agents (Writer and Editor).
    """
    messages: Annotated[List[BaseMessage], add_messages] # Conversation history / internal messages log
    draft: str # The current content being written/revised by the Writer
    feedback: str # Editor's specific feedback for the Writer
    revision_count: int # Tracks how many times the draft has been revised
    status: str # Current state of the draft: "drafting", "reviewing", "revising", "completed", "error"
    topic: str # The original topic or task for the writer to address


# --- 2. Define Agent Nodes ---

# Node for the Writer Agent
def writer_node(state: CollaborativeAgentState) -> CollaborativeAgentState:
    """
    Writer Agent: Generates an initial draft or revises an existing one based on feedback.
    """
    print(f"\n--- Node: Writer Agent (Revision Count: {state['revision_count']}) ---")
    topic = state['topic']
    current_draft = state['draft']
    feedback = state['feedback']
    revision_count = state['revision_count']

    if revision_count == 0:
        # Initial draft generation
        prompt = ChatPromptTemplate.from_messages([
            ("system", f"You are a creative and engaging writer. Write a concise and clear paragraph about '{topic}'. Focus on capturing the reader's interest quickly."),
            ("human", f"Please write an initial draft about: {topic}")
        ])
        print("  Writing initial draft...")
    else:
        # Revise existing draft based on editor's feedback
        prompt = ChatPromptTemplate.from_messages([
            ("system", f"You are a meticulous writer. Revise the following draft about '{topic}' based on the specific feedback provided by the editor. Ensure you address the feedback comprehensively and improve the content's quality, clarity, and conciseness."),
            ("human", f"Current Draft:\n{current_draft}\n\nEditor's Feedback:\n{feedback}\n\nPlease provide a revised draft.")
        ])
        print(f"  Revising draft based on editor's feedback (Revision #{revision_count})...")

    # Format the template into messages before calling the LLM
    response = (prompt | llm).invoke({})
    new_draft = response.content.strip()
    
    # Update the state with the new draft, increment revision count, and set status to reviewing
    print(f"  New Draft (excerpt): {new_draft[:150]}...") # Print a snippet of the new draft
    return {
        "draft": new_draft,
        "revision_count": revision_count + 1,
        "status": "reviewing",
        "messages": [AIMessage(content=f"Writer: Draft created/revised (Revision #{revision_count + 1}). Ready for editor.")]
    }

# Node for the Editor Agent
def editor_node(state: CollaborativeAgentState) -> CollaborativeAgentState:
    """
    Editor Agent: Reviews the draft, provides feedback, and decides if it's satisfactory or needs revision.
    """
    print(f"\n--- Node: Editor Agent ---")
    topic = state['topic']
    current_draft = state['draft']
    
    prompt = ChatPromptTemplate.from_messages([
        ("system", """
        You are a highly critical and constructive editor. Your primary task is to review the provided draft about '{topic}'.
        
        Evaluate the draft's:
        - Clarity: Is the message clear and easy to understand?
        - Conciseness: Is there any unnecessary jargon or repetition?
        - Engagement: Does it capture interest and convey the topic effectively?
        - Adherence to Topic: Does it fully address the stated topic?
        
        Based on your evaluation, decide if the draft is 'SATISFACTORY' (no more revisions are needed, it's ready) or 'NEEDS_REVISION'.
        
        If you decide 'NEEDS_REVISION', you MUST provide specific, actionable, and constructive feedback for the writer. Guide them on exactly what needs improvement.
        
        Output your decision as a JSON object with two keys: 'decision' and 'feedback'.
        
        Example for SATISFACTORY: 
        {{"decision": "SATISFACTORY", "feedback": "Excellent work! The draft is clear, concise, and engaging. No revisions needed."}}
        
        Example for NEEDS_REVISION:
        {{"decision": "NEEDS_REVISION", "feedback": "The introduction is too general. Please make the hook more specific to '{topic}' and add a clear thesis statement. Also, shorten the second sentence for better flow."}}
        """),
        ("human", f"Topic: {topic}\n\nDraft to review:\n{current_draft}")
    ])
    
    # Fill the '{topic}' placeholder in the system message, then call the LLM
    response = (prompt | llm).invoke({"topic": topic})
    editor_output_raw = response.content.strip()
    
    decision = "NEEDS_REVISION" # Default to revision in case of parsing errors or ambiguity
    feedback = "Editor could not parse response or provided generic feedback. Please revise for clarity and specific improvements."

    try:
        if editor_output_raw.startswith("```json"):
            # Attempt to strip markdown code block if present
            editor_output_raw = editor_output_raw[7:-3].strip()
        editor_json = json.loads(editor_output_raw)
        
        decision = editor_json.get("decision", "NEEDS_REVISION").upper()
        feedback = editor_json.get("feedback", feedback)
        
    except json.JSONDecodeError:
        print(f"  Warning: Editor LLM returned non-JSON. Raw output: {editor_output_raw[:100]}... Defaulting to 'NEEDS_REVISION'.")
    except Exception as e:
        print(f"  An error occurred during editor's output processing: {e}. Defaulting to 'NEEDS_REVISION'.")
    
    print(f"  Editor Decision: {decision}")
    print(f"  Editor Feedback: {feedback}")

    # Update the state with editor's feedback and decision status
    return {
        "feedback": feedback,
        "status": decision.lower(), # "satisfactory" or "needs_revision"
        "messages": [AIMessage(content=f"Editor: Decision: {decision}. Feedback: {feedback}")]
    }

# --- 3. Routing Logic ---

def route_editor_decision(state: CollaborativeAgentState) -> str:
    """
    Router function based on the Editor's decision.
    Decides whether to send the draft back to the Writer or end the workflow.
    """
    print(f"\n--- Router: Editor Decision (Current Status: '{state['status']}') ---")
    if state['status'] == "satisfactory":
        print("  Decision: Draft is SATISFACTORY. Workflow will END.")
        return END
    elif state['status'] == "needs_revision":
        print("  Decision: Draft NEEDS_REVISION. Routing back to 'writer_node' for revisions.")
        return "writer_node"
    else:
        # Fallback for any unexpected status
        print(f"  Decision: Unexpected status '{state['status']}'. Ending workflow for safety.")
        return END

# --- 4. Build the LangGraph Workflow ---
print("--- Building the Collaborative Multi-Agent Workflow (Writer-Editor) ---")
workflow = StateGraph(CollaborativeAgentState)

# Add nodes representing our agents
workflow.add_node("writer_node", writer_node)
workflow.add_node("editor_node", editor_node)

# Set the entry point of the workflow. We start with the writer.
workflow.set_entry_point("writer_node")

# Define the edges (transitions between nodes)
# After the writer produces a draft, it always goes to the editor for review
workflow.add_edge("writer_node", "editor_node")

# From the editor, the flow is conditional based on the editor's decision
workflow.add_conditional_edges(
    "editor_node",
    route_editor_decision, # Use our custom routing function
    {
        "writer_node": "writer_node", # If 'needs_revision', loop back to writer
        END: END # If 'satisfactory', end the workflow
    }
)

# Compile the graph into a runnable application
collaborative_app = workflow.compile()
print("Collaborative Multi-Agent workflow compiled successfully.\n")

# --- 5. Invoke the Workflow ---
print("--- Invoking the Writer-Editor Collaboration ---")

# Define the initial topic for the collaboration
initial_topic = "the importance of lifelong learning in the 21st century"
print(f"Starting collaboration on topic: '{initial_topic}'")

# Set the initial state for the workflow
initial_input = {
    "messages": [HumanMessage(content=f"Start writing about: {initial_topic}")],
    "draft": "",          # Initial empty draft
    "feedback": "",       # Initial empty feedback
    "revision_count": 0,  # Start with 0 revisions
    "status": "drafting", # Initial status
    "topic": initial_topic # The topic for the writer
}

try:
    # Invoke the workflow. Set a recursion_limit to prevent infinite loops
    # if the editor continuously requests revisions.
    final_state = collaborative_app.invoke(
        initial_input,
        config={"recursion_limit": 10}, # Allow up to 10 node executions (supersteps)
    )
    print("\n" + "="*50)
    print("--- Final Collaboration State ---")
    print(f"Final Status: {final_state['status'].upper()}")
    print(f"Final Draft (after {final_state['revision_count'] - 1} revisions):")
    print(final_state['draft'])
    print(f"Last Feedback: {final_state['feedback']}")
    print("="*50 + "\n")

except Exception as e:
    print(f"\n!!! Workflow encountered an unexpected error: {e} !!!")
    if "recursion" in str(e).lower():
        print("The workflow likely hit the recursion limit. This means the editor kept requesting revisions without reaching a 'satisfactory' state within the limit.")
    print("Please check LLM responses and prompt engineering if this persists.")

print("Note: The quality and duration of revisions depend heavily on LLM capabilities, prompt clarity, and the complexity of the topic.")

Code Explanation:

  1. CollaborativeAgentState: This TypedDict is the shared memory between the Writer and Editor. It’s crucial for communication and maintaining the project’s progress. It holds the draft (the content being worked on), feedback (from the Editor to the Writer), revision_count (to track iterations), status (to guide the workflow’s next step), and the topic of the content.
  2. writer_node:
    • This function represents our Writer Agent.
    • It checks revision_count: If it’s 0, it means this is the initial draft, so it prompts the LLM to create new content based on the topic.
    • If revision_count is greater than 0, it’s a revision. The LLM is prompted to improve the current_draft specifically based on the feedback from the Editor.
    • After generating/revising, it updates the draft in the state, increments revision_count, and sets the status to “reviewing”, signaling that it’s ready for the Editor.
  3. editor_node:
    • This function represents our Editor Agent.
    • It takes the current_draft and the topic from the state.
    • It prompts an LLM to act as a critical editor, evaluating the draft for clarity, conciseness, engagement, and adherence to the topic.
    • Crucially, the LLM is instructed to output a JSON object with a decision (“SATISFACTORY” or “NEEDS_REVISION”) and specific feedback. This structured output is vital for our programmatic routing.
    • It updates the feedback and status fields in the state based on the LLM’s parsed decision.
  4. route_editor_decision (Router Function):
    • This is the core orchestration logic for our iterative loop.
    • This function is called by LangGraph after the editor_node completes.
    • It examines the status field in the CollaborativeAgentState (which the editor_node just updated).
    • If status is “satisfactory”, it returns END, stopping the workflow.
    • If status is “needs_revision”, it returns "writer_node", routing the flow back to the Writer for another round of revisions.
  5. LangGraph Construction:
    • A StateGraph is initialized with our CollaborativeAgentState.
    • workflow.add_node() registers our writer_node and editor_node functions as callable nodes in the graph.
    • workflow.set_entry_point("writer_node") ensures the process begins with the Writer.
    • workflow.add_edge("writer_node", "editor_node") defines a direct path: after the Writer, the draft always goes to the Editor.
    • workflow.add_conditional_edges("editor_node", route_editor_decision, {...}) implements the iterative loop. It tells LangGraph to use our route_editor_decision function to decide the next step after the editor_node runs.
  6. Invocation:
    • We define an initial_topic and an initial_input state.
    • collaborative_app.invoke() starts the workflow.
    • A recursion_limit in the config object is a safety measure against infinite loops if the editor is never satisfied: it caps the total number of node executions (supersteps) the graph may perform.

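One fragile spot worth noting: editor_node strips a leading Markdown code fence with fixed slicing ([7:-3]), which breaks if the model formats its reply differently. A more tolerant alternative is to pull the first JSON object out of the reply with a regex. This is a sketch assuming the same decision/feedback schema; parse_editor_reply is a hypothetical helper, not part of the listing above.

```python
import json
import re

def parse_editor_reply(raw: str) -> dict:
    """Extract the first JSON object from an LLM reply, ignoring markdown fences."""
    match = re.search(r"\{.*\}", raw, re.DOTALL)
    if match:
        try:
            parsed = json.loads(match.group(0))
            if "decision" in parsed:
                return parsed
        except json.JSONDecodeError:
            pass
    # Same safe default as editor_node: ask for another revision.
    return {"decision": "NEEDS_REVISION",
            "feedback": "Editor reply was not valid JSON; please revise for clarity."}

print(parse_editor_reply('```json\n{"decision": "SATISFACTORY", "feedback": "Good."}\n```'))
# → {'decision': 'SATISFACTORY', 'feedback': 'Good.'}
```

For production use, LangChain's structured-output features (e.g., binding a Pydantic schema to the model) are a sturdier option than hand-parsing, but the regex fallback keeps this example dependency-free.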
This project demonstrates a fundamental multi-agent pattern where agents directly collaborate by passing information and control via LangGraph’s shared state and conditional routing. This iterative feedback loop is incredibly powerful for refining outputs in many AI applications, from content generation to code review and beyond.


I’m Arpan

I’m a Software Engineer driven by curiosity and a deep interest in Generative AI Technologies. I believe we’re standing at the frontier of a new era—where machines not only learn but create, and I’m excited to explore what’s possible at this intersection of intelligence and imagination.

When I’m not writing code or experimenting with new AI models, you’ll probably find me travelling, soaking in new cultures, or reading a book that challenges how I think. I thrive on new ideas—especially ones that can be turned into meaningful, impactful projects. If it’s bold, innovative, and GenAI-related, I’m all in.

“The future belongs to those who believe in the beauty of their dreams.” – Eleanor Roosevelt

“Imagination is more important than knowledge. For knowledge is limited, whereas imagination embraces the entire world.” – Albert Einstein

This blog, MLVector, is my space to share technical insights, project breakdowns, and explorations in GenAI—from the models shaping tomorrow to the code powering today.

Let’s build the future, one vector at a time.

Let’s connect