Welcome to #30DaysOfLangChain – LangChain 0.3 Edition! Over the next month, we’ll embark on a journey to demystify the modern LangChain ecosystem, focusing purely on the latest best practices, the powerful LangChain Expression Language (LCEL), and LangGraph.
Today, on Day 1, we’re laying the groundwork by diving into the single most important concept in LangChain 0.3: the Runnable interface. Once we grasp this foundation, the elegance of the LangChain Expression Language (LCEL) will become immediately apparent. Forget older, more verbose ways of chaining components; LCEL offers a concise, functional, and highly composable way to build your LLM applications, all thanks to Runnables.
Understanding the Runnable Interface: The Foundation
In LangChain 0.3, nearly every component—from a simple prompt template to a complex language model, an output parser, a retriever, or even an entire chain—implements the Runnable interface.
What does Runnable mean? It signifies that an object can be “run” or “invoked” to produce an output from a given input. Specifically, any Runnable object guarantees it has methods like:
- `.invoke()`: takes a single input and returns a single output.
- `.batch()`: takes a list of inputs and returns a list of outputs (runs them efficiently in parallel).
- `.stream()`: takes a single input and streams chunks of output (useful for real-time UIs).
This standardization is incredibly powerful because it means any Runnable can be seamlessly connected to any other Runnable. It’s the universal adapter of the LangChain world. You can dive deeper in the Runnable API reference in the official documentation.
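To make this concrete, here is a minimal sketch of those three methods on a tiny Runnable built from a plain Python function with `RunnableLambda` (the `shout` function is just an illustrative stand-in, not part of the example we build below):

```python
from langchain_core.runnables import RunnableLambda

# Wrap a plain Python function in a Runnable (illustrative example).
shout = RunnableLambda(lambda text: text.upper() + "!")

print(shout.invoke("hello"))       # single input -> single output: "HELLO!"
print(shout.batch(["hi", "hey"]))  # list of inputs -> list of outputs: ["HI!", "HEY!"]

# .stream() yields chunks; a plain function emits its whole result as one chunk,
# whereas chat models stream token-by-token.
for chunk in shout.stream("stream me"):
    print(chunk, end="")
```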
Introducing LCEL: The Language of Runnables
While Runnable defines what an object can do, LCEL defines how you elegantly combine them. LCEL (LangChain Expression Language) is a declarative way to compose these Runnable components into powerful, end-to-end chains.
The magic truly happens with the | operator (the pipe operator). This allows you to chain Runnable objects together in a highly intuitive and readable way, much like piping commands in a Unix shell. The output of one runnable becomes the input of the next, creating clean, functional pipelines with minimal boilerplate. You can learn more about LCEL in the official documentation.
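As a quick illustration of the pipe operator before we touch an LLM, here is a small sketch composing two toy Runnables (`add_one` and `double` are hypothetical names for this demo):

```python
from langchain_core.runnables import RunnableLambda

add_one = RunnableLambda(lambda x: x + 1)
double = RunnableLambda(lambda x: x * 2)

# The | operator composes Runnables into a new Runnable (a RunnableSequence).
pipeline = add_one | double

print(pipeline.invoke(3))  # (3 + 1) * 2 = 8
```

The composed `pipeline` is itself a `Runnable`, so `.invoke()`, `.batch()`, and `.stream()` all work on it exactly as they do on its parts.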
Setting Up Your Environment
1. First, ensure you have Python 3.9+ installed. We’ll primarily use `pip` for package management. Create and activate a virtual environment:

```bash
python -m venv .venv
source .venv/bin/activate
```
2. Install LangChain and OpenAI (or your preferred LLM provider):

```bash
pip install langchain-community langchain-openai python-dotenv
```
3. Set up your API key: Create a `.env` file in your project root (same directory as your Python script) and add your OpenAI API key:

```
OPENAI_API_KEY="your_openai_api_key_here"
```
Our First LCEL Pipeline: “Hello, LangChain!”
Let’s build a super simple LCEL chain. Our goal is to take a user’s input, pass it directly to an LLM, and get a response. This will demonstrate how Runnable objects are combined using LCEL.
```python
import os

from langchain_core.prompts import ChatPromptTemplate
from langchain_openai import ChatOpenAI
from langchain_core.output_parsers import StrOutputParser
from dotenv import load_dotenv  # Recommended for managing environment variables

# Load environment variables from .env file (if it exists)
load_dotenv()

if not os.getenv("OPENAI_API_KEY"):
    raise ValueError("OPENAI_API_KEY environment variable not set. Please set it or use a .env file.")

llm = ChatOpenAI(model="gpt-3.5-turbo", temperature=0.7)

prompt = ChatPromptTemplate.from_messages([
    ("system", "You are a friendly and helpful AI assistant. Respond concisely."),
    ("user", "{input}"),
])

output_parser = StrOutputParser()

# The output of one Runnable becomes the input of the next.
# Flow: User Input -> Prompt (formats input) -> LLM (generates response) -> Output Parser (extracts string)
chain = prompt | llm | output_parser


def run_chain_example(query: str):
    """Invokes the LCEL chain with a given query and prints the response."""
    print(f"\n--- User Query: '{query}' ---")
    response = chain.invoke({"input": query})  # .invoke() is the Runnable method being called
    print(f"AI Response: {response}")
    print("-" * (len(query) + 20))


if __name__ == "__main__":
    print("Day 1: Hello, LangChain! - Your First LCEL Pipeline")
    print("Understanding Runnables and LCEL for building robust GenAI applications.")

    # Example 1
    run_chain_example("What is the capital of Canada?")
    # Example 2
    run_chain_example("Tell me a short, interesting fact about the ocean.")
    # Example 3
    run_chain_example("Explain the concept of 'Runnable' in LangChain 0.3.")
```
Code Explanation (Emphasizing Runnables):
- `ChatOpenAI`, `ChatPromptTemplate`, `StrOutputParser`: Each of these objects (our LLM, our prompt, our parser) is an instance of `Runnable`. They can be invoked independently.
- `chain = prompt | llm | output_parser`: This is where LCEL shines! We are chaining these individual `Runnable` components. The `|` operator understands how to connect them because they all adhere to the `Runnable` interface.
  - The `prompt` (a `Runnable`) takes the user’s `{"input"}` and formats it into messages.
  - The formatted messages (output of `prompt`) are then passed (`|`) to the `llm` (another `Runnable`).
  - The `llm` generates an `AIMessage` response, which is then passed (`|`) to the `output_parser` (another `Runnable`).
  - The `output_parser` extracts the text content, giving us our final string response.
- `chain.invoke({"input": user_query})`: This executes the entire chain. `chain` is itself a `Runnable` (a composite one!), so we call its `.invoke()` method. We pass our user query as a dictionary because our prompt expects a key named `"input"`.
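Since every piece, including the composed `chain`, is a `Runnable`, you can also run each stage on its own and use the other standard methods on the whole pipeline. A short sketch, assuming the same `prompt`, `llm`, `output_parser`, and `chain` defined above:

```python
# Each component is a Runnable, so each stage can be invoked independently.
messages = prompt.invoke({"input": "Hi there"})  # -> formatted prompt value
ai_message = llm.invoke(messages)                # -> AIMessage from the model
text = output_parser.invoke(ai_message)          # -> plain string

# The composite chain supports .batch() and .stream() as well.
answers = chain.batch([
    {"input": "What is the capital of Canada?"},
    {"input": "What is the capital of Japan?"},
])  # the inputs are processed in parallel

for chunk in chain.stream({"input": "Write a haiku about pipelines."}):
    print(chunk, end="", flush=True)  # chunks print as the model generates them
```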
