Welcome to Day 26 of #30DaysOfLangChain – LangChain 0.3 Edition! We’ve built powerful LLM chains, complex LangGraph agents, and exposed them via interactive UIs and APIs. But as these applications grow, a critical question emerges: How do we understand what’s happening inside them? How do we debug when an LLM hallucinates, a tool fails, or an agent gets stuck in a loop?
This is where observability comes in, and for LangChain applications, LangSmith is the go-to tool. Today, we’ll integrate LangSmith into our existing FastAPI streaming agent to gain crucial insights into its runtime behavior.
The Debugging Dilemma of LLM Applications
Unlike traditional software, LLM applications are often non-deterministic, involving multiple steps, external API calls (to LLMs and tools), and complex reasoning.
- Black Box: It’s hard to tell why an LLM gave a particular answer or what intermediate steps an agent took.
- Non-deterministic: Small changes in prompts or model temperatures can lead to different outputs, making reproduction and debugging difficult.
- Multi-step Errors: A failure at one step might only manifest much later in the chain/graph, making root cause analysis challenging.
Without proper tools, you’re left sifting through logs or adding countless print() statements, which is unsustainable.
Enter LangSmith: Your Observability Hub for LLM Apps
LangSmith is LangChain’s platform designed specifically for building, debugging, monitoring, and evaluating LLM applications. It provides a centralized hub for:
- Tracing: Visualize the end-to-end execution flow of your LangChain chains and LangGraph agents. See every LLM call, every tool invocation, every prompt, and every response, along with latency and token usage.
- Debugging: Pinpoint exactly where issues occur. Inspect prompts, intermediate thoughts, tool inputs, and outputs at each step, allowing you to rapidly identify and fix problems.
- Monitoring: Keep track of key metrics like latency, token usage, and cost over time.
- Dataset & Evaluation: Create datasets from your traces, run test cases, and evaluate the performance of your models and chains against ground truth.
For today, our focus will be primarily on tracing and debugging.
Setting up LangSmith for Your Project
Enabling LangSmith for your LangChain/LangGraph application is surprisingly simple, requiring just a few environment variables.
Steps:
- Sign Up for LangSmith: If you don’t have an account, sign up at smith.langchain.com.
- Get Your API Key: Navigate to your settings or API keys section within LangSmith to generate an API key.
- Configure Environment Variables: Add the following to your .env file (or set them directly in your environment):
# .env
LANGCHAIN_TRACING_V2=true
LANGCHAIN_API_KEY="YOUR_LANGSMITH_API_KEY"
LANGCHAIN_PROJECT="30DaysOfLangChain-Day26-Agent" # Name your project
# Also ensure your LLM keys are here, e.g.:
# OPENAI_API_KEY="sk-..."
# LLM_PROVIDER="openai" # or "ollama"
# OLLAMA_MODEL_CHAT="llama2" # if using Ollama
Replace "YOUR_LANGSMITH_API_KEY" with the actual key you obtained. LANGCHAIN_PROJECT helps organize your runs in the LangSmith UI.
That’s it! When your LangChain code runs with these environment variables set, it will automatically send traces to your specified LangSmith project.
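If your FastAPI script doesn’t already load the .env file at startup, a minimal sketch using python-dotenv (assuming the package is installed) looks like this:

# at the top of your FastAPI script, before the agent is built
import os
from dotenv import load_dotenv

load_dotenv()  # copies the variables from .env into os.environ

# optional sanity check that tracing is configured
print("Tracing enabled:", os.getenv("LANGCHAIN_TRACING_V2"))
print("Project:", os.getenv("LANGCHAIN_PROJECT"))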
Project: LangSmith for the Streaming Agent
For this project, we will use the FastAPI streaming LangGraph agent from Day 25. This agent, with its tool calls and multi-step execution, provides an excellent canvas to demonstrate LangSmith’s tracing capabilities.
There are no code changes required in our day25-fastapi-streaming-agent.py script. The LangChain and LangGraph libraries are designed to automatically integrate with LangSmith when the environment variables are set.
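That said, if you ever want a plain Python helper to show up inside the same traces, the langsmith SDK (installed alongside LangChain) offers a traceable decorator. A minimal sketch; the word_count function is purely illustrative:

# optional: surface a plain Python function as its own run in LangSmith
from langsmith import traceable

@traceable(run_type="tool", name="word_count")  # shows up as a traced run
def word_count(text: str) -> int:
    return len(text.split())

word_count("LangSmith can trace plain functions too")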
Demonstration Steps:
- Ensure day25-fastapi-streaming-agent.py is ready: Make sure you have the code from Day 25.
- Configure .env: Create or update your .env file with your LangSmith API key and the tracing settings as shown above.
- Run the FastAPI App:
uvicorn day25-fastapi-streaming-agent:app --reload --host 0.0.0.0 --port 8000
- Make Requests: Send a few requests to your agent. Use curl --no-buffer, the provided HTML client from Day 25, or a small Python client (see the sketch after this list). Try questions that involve tool use, e.g., “What is the current time?”, and general questions, e.g., “Tell me a joke.”
- Observe in LangSmith:
  - Open your browser and navigate to smith.langchain.com.
  - Select your project (e.g., “30DaysOfLangChain-Day26-Agent”) from the dropdown menu.
  - You will see a list of “Runs.” Each request you made will correspond to a run.
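If you prefer scripting the “Make Requests” step, here is a minimal Python client sketch using httpx. The /chat route and payload shape are assumptions; adjust both to match the actual endpoint in your Day 25 script:

# hypothetical streaming client; route and payload must match your Day 25 app
import httpx

with httpx.stream(
    "POST",
    "http://localhost:8000/chat",                   # assumed route
    json={"message": "What is the current time?"},  # assumed payload shape
    timeout=60.0,
) as response:
    for chunk in response.iter_text():  # consume streamed tokens as they arrive
        print(chunk, end="", flush=True)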
Interpreting a Trace in LangSmith:
Click on any run to dive into its detailed trace. You’ll see a visual representation of the agent’s execution flow:
- Root Run (e.g., agent_app): The overall execution of your LangGraph agent.
- Nodes within the Graph: Each node (e.g., llm, tool) will appear as a separate step.
- LLM Calls: Within the llm node, you’ll see the actual LLM invocation. Click on it to inspect:
  - Prompt: The exact prompt sent to the LLM. This is invaluable for debugging prompt engineering issues.
  - Response: The raw response from the LLM.
  - Tokens/Latency: Performance metrics for that specific LLM call.
- Tool Invocations: If your agent called a tool, you’ll see a tool run. Click on it to view:
  - Tool Input: The arguments passed to the tool.
  - Tool Output: The result returned by the tool.
- Intermediate Steps: Observe how messages flow between nodes, how decisions are made at conditional edges, and how the state evolves.
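A habit worth forming once tracing is on: pass a run name, tags, and metadata through the standard config argument of invoke/astream so runs are easy to filter in the LangSmith UI. A minimal sketch, with a RunnableLambda standing in for the compiled LangGraph agent from Day 25:

# tag runs so they are easy to find and filter in the LangSmith UI
from langchain_core.runnables import RunnableLambda

echo = RunnableLambda(lambda x: x)  # stand-in for your compiled agent

result = echo.invoke(
    "What is the current time?",
    config={
        "run_name": "chat-request",            # display name of the root run
        "tags": ["day26", "fastapi"],          # filterable in the UI
        "metadata": {"user_id": "demo-user"},  # arbitrary key/value pairs
    },
)

In a FastAPI handler, putting something like a request ID into the metadata makes it trivial to jump from an application log line to the exact trace.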
LangSmith turns your LLM application from a black box into a transparent, inspectable system, making debugging and optimization dramatically easier.
Key Takeaway
Day 26 was a game-changer for debugging and understanding our AI applications! We integrated LangSmith into our existing LangGraph agent, enabling powerful tracing capabilities with just a few environment variables. Observing our agent’s multi-step decisions, LLM calls, and tool invocations in real time within the LangSmith UI is an indispensable skill for developing robust, production-ready Generative AI solutions. No more guessing, just clear, visual debugging!
