Welcome to Day 22 of #30DaysOfLangChain – LangChain 0.3 Edition! We’ve spent the last few weeks building sophisticated AI agents and workflows. But what good are these powerful tools if users can’t interact with them easily? Today, we pivot from backend logic to frontend experience, learning how to create intuitive, interactive web applications for our LangChain projects using Streamlit.
Specifically, we’ll focus on building a classic chat interface, which is the most common and natural way for users to interact with Generative AI applications.
Streamlit: The Fast Lane for Python Web Apps
Streamlit is an open-source Python library that lets you create beautiful, custom web apps for machine learning and data science in minutes. Its philosophy is to make app development as simple as writing Python scripts. You don’t need any knowledge of HTML, CSS, or JavaScript.
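To get a feel for how little code that means in practice, here is a minimal, self-contained sketch (the file name `hello_app.py` and the widget labels are just illustrative):

```python
# hello_app.py -- a minimal Streamlit script (illustrative sketch)
import streamlit as st

st.title("Hello, Streamlit!")        # renders a page title
name = st.text_input("Your name")    # renders a text input widget
if name:                             # the script re-runs on every interaction
    st.write(f"Nice to meet you, {name}!")
```

Save it and run `streamlit run hello_app.py`; Streamlit serves the app and opens it in your browser.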
Why Streamlit for GenAI Apps?
- Python-Native: Build everything in Python, leveraging your existing data science and AI skills.
- Rapid Prototyping: Go from script to interactive app in minutes, perfect for quick iterations and demos.
- Simplicity: A clean API that’s easy to learn, even for those new to web development.
- Direct Data Integration: Seamlessly connect to data sources, models, and now, LangChain components.
- Chat Elements: Streamlit has dedicated components (`st.chat_message`, `st.chat_input`) designed specifically for building conversational interfaces.
Core Streamlit Concepts for Chat Interfaces
To build a functional chat bot in Streamlit, three core concepts are essential:
- `st.session_state`:
  - Streamlit apps re-run their entire script from top to bottom every time a user interacts with a widget or the app state changes. `st.session_state` is a dictionary-like object that lets you store and persist information across these reruns for a given user session.
  - For a chat app, this is crucial for storing the conversation history (`messages`) so that past interactions are not lost.
  - You typically initialize items in `st.session_state` at the beginning of your script, checking whether they already exist. E.g., `if "messages" not in st.session_state: st.session_state.messages = []`.
- `st.chat_message`:
  - This element displays messages from different “actors” in a chat interface (e.g., “user” and “assistant”), with built-in styling and avatars.
  - You use it in a `with` block to add content inside the message bubble:

    ```python
    with st.chat_message("user"):
        st.write("Hello, bot!")
    ```
- `st.chat_input`:
  - This is the input widget at the bottom of the chat interface where the user types their message.
  - When the user presses Enter or clicks the send button, `st.chat_input` returns the submitted text.
  - It’s typically placed at the end of your script, and its value is assigned to a variable (e.g., `prompt = st.chat_input("Your message:")`).
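Putting these three pieces together, here is a minimal echo bot (no LLM yet, just a sketch of the rerun and session-state pattern):

```python
import streamlit as st

# Persist the conversation across reruns
if "messages" not in st.session_state:
    st.session_state.messages = []

# Replay the stored history on every rerun
for msg in st.session_state.messages:
    with st.chat_message(msg["role"]):
        st.write(msg["content"])

# Input box pinned to the bottom of the page
if prompt := st.chat_input("Say something"):
    st.session_state.messages.append({"role": "user", "content": prompt})
    with st.chat_message("user"):
        st.write(prompt)

    reply = f"You said: {prompt}"  # placeholder for a real model call
    st.session_state.messages.append({"role": "assistant", "content": reply})
    with st.chat_message("assistant"):
        st.write(reply)
```

Everything the full project below adds is essentially swapping that placeholder `reply` for a LangChain model call.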
Project: A Simple LangChain Chat Bot in Streamlit
For today’s project, we will create a basic Streamlit application that hosts a conversational AI. The app will:
- Display a title and instructions.
- Maintain a history of user and AI messages using `st.session_state`.
- Show previous messages when the app re-runs.
- Take new user input via `st.chat_input`.
- Invoke a simple LangChain chat model (we’ll make it configurable for OpenAI or Ollama) with the user’s message.
- Display the AI’s response in the chat interface.
Before you run the code:
- Install Streamlit: `pip install streamlit`
- Ensure you have `langchain-openai` or `langchain-ollama` installed, depending on your chosen LLM provider.
- Install `python-dotenv` (`pip install python-dotenv`); the script uses it to load settings from a `.env` file.
- If using OpenAI, set your `OPENAI_API_KEY` environment variable.
- If using Ollama, ensure Ollama is running and you have pulled the required model (e.g., `ollama pull llama2`).
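For reference, a `.env` file for this script might look like the following (the values shown are simply the script’s defaults, and the API key is a placeholder):

```
# .env -- example configuration (values are the script's defaults)
# Provider: 'openai' or 'ollama'
LLM_PROVIDER=openai
# Required when LLM_PROVIDER=openai (placeholder shown)
OPENAI_API_KEY=sk-...
OPENAI_MODEL_CHAT=gpt-3.5-turbo
# Used when LLM_PROVIDER=ollama
OLLAMA_MODEL_CHAT=llama2
```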
```python
import streamlit as st
import os

from langchain_openai import ChatOpenAI
from langchain_ollama import ChatOllama
from langchain_core.messages import HumanMessage, AIMessage
from langchain_core.prompts import ChatPromptTemplate
from langchain_core.output_parsers import StrOutputParser

# Load environment variables (for OpenAI API key or Ollama model names)
from dotenv import load_dotenv
load_dotenv()

# --- Configuration for LLM ---
LLM_PROVIDER = os.getenv("LLM_PROVIDER", "openai").lower()  # 'openai' or 'ollama'
OLLAMA_MODEL_CHAT = os.getenv("OLLAMA_MODEL_CHAT", "llama2").lower()  # e.g., 'llama2', 'mistral'
OPENAI_MODEL_CHAT = os.getenv("OPENAI_MODEL_CHAT", "gpt-3.5-turbo")  # e.g., 'gpt-4o', 'gpt-3.5-turbo'

# --- LLM Initialization ---
def get_llm():
    """Initializes and returns the chat model for the configured provider."""
    if LLM_PROVIDER == "openai":
        if not os.getenv("OPENAI_API_KEY"):
            st.error("OPENAI_API_KEY not set for OpenAI provider. Please set it in your .env file or environment variables.")
            st.stop()  # Stop the app if the API key is missing
        return ChatOpenAI(model=OPENAI_MODEL_CHAT, temperature=0.7)
    elif LLM_PROVIDER == "ollama":
        try:
            llm_instance = ChatOllama(model=OLLAMA_MODEL_CHAT, temperature=0.7)
            # Test the connection (optional but good practice)
            llm_instance.invoke("test")
            return llm_instance
        except Exception as e:
            st.error(f"Error connecting to Ollama LLM '{OLLAMA_MODEL_CHAT}' or model not found: {e}")
            st.info(f"Please ensure Ollama is running and you have pulled the model: `ollama pull {OLLAMA_MODEL_CHAT}`")
            st.stop()  # Stop the app if Ollama fails
    else:
        st.error(f"Invalid LLM provider: {LLM_PROVIDER}. Must be 'openai' or 'ollama'.")
        st.stop()

llm = get_llm()

# --- Streamlit App Setup ---
st.set_page_config(page_title="LangChain Chatbot", page_icon="💬")
st.title("LangChain Chatbot")
st.markdown(f"*{LLM_PROVIDER.capitalize()} model: {OPENAI_MODEL_CHAT if LLM_PROVIDER == 'openai' else OLLAMA_MODEL_CHAT}*")
st.markdown("---")

# --- Initialize chat history in session state ---
# This ensures messages persist across reruns
if "messages" not in st.session_state:
    st.session_state.messages = []  # List of {"role": "user" or "assistant", "content": "message text"}

# --- Display chat messages from history ---
# Iterate through the messages stored in session state and display them
for message in st.session_state.messages:
    with st.chat_message(message["role"]):
        st.markdown(message["content"])

# --- Handle user input ---
# st.chat_input creates an input box at the bottom of the page
if prompt := st.chat_input("What can I help you with?"):
    # Add the user message to the chat history and display it
    st.session_state.messages.append({"role": "user", "content": prompt})
    with st.chat_message("user"):
        st.markdown(prompt)

    # Prepare messages for the LLM: LangChain chat models expect
    # BaseMessage objects, so convert the whole stored history
    langchain_messages = []
    for msg in st.session_state.messages:
        if msg["role"] == "user":
            langchain_messages.append(HumanMessage(content=msg["content"]))
        else:
            langchain_messages.append(AIMessage(content=msg["content"]))

    # Call the LLM
    with st.chat_message("assistant"):
        with st.spinner("Thinking..."):  # Indicate that the model is working
            # A simple chain: prompt -> LLM -> string output parser.
            # The prompt already contains the full chat history for context.
            chat_prompt = ChatPromptTemplate.from_messages(langchain_messages)
            chain = chat_prompt | llm | StrOutputParser()

            # Stream the response for a better UX
            full_response = ""
            response_container = st.empty()  # Placeholder for streaming text
            for chunk in chain.stream({}):  # No input vars needed; the prompt already holds full messages
                full_response += chunk
                response_container.markdown(full_response + "▌")  # Blinking-cursor effect
            response_container.markdown(full_response)  # Final response without the cursor

    # Add the assistant response to the chat history
    st.session_state.messages.append({"role": "assistant", "content": full_response})

# --- How to run this app ---
st.sidebar.markdown("### How to run")
st.sidebar.markdown("1. Save this code as `day22-streamlit-chat.py`")
st.sidebar.markdown("2. Open your terminal in the same directory.")
st.sidebar.markdown("3. Run the command: `streamlit run day22-streamlit-chat.py`")
st.sidebar.markdown("4. Your browser will open with the chat application.")
st.sidebar.markdown("---")
st.sidebar.markdown("#### LLM Configuration")
st.sidebar.markdown(f"**Provider:** `{LLM_PROVIDER.capitalize()}`")
if LLM_PROVIDER == "openai":
    st.sidebar.markdown(f"**Model:** `{OPENAI_MODEL_CHAT}`")
else:
    st.sidebar.markdown(f"**Model:** `{OLLAMA_MODEL_CHAT}`")
st.sidebar.markdown("*Set `LLM_PROVIDER` and model names in your `.env` file.*")
```
Code Explanation:
- Streamlit Setup (`st.set_page_config`, `st.title`):
  - Standard Streamlit app initialization, setting the page configuration and displaying a main title.
  - We also display which LLM provider and model are currently active, based on environment variables.
- LLM Initialization (`get_llm()`):
  - This function dynamically initializes either `ChatOpenAI` or `ChatOllama` based on the `LLM_PROVIDER` environment variable.
  - It includes robust error handling: if an API key is missing (for OpenAI) or the Ollama model isn’t found or running, it displays an error and stops the Streamlit app. This is crucial for a user-friendly experience.
- `st.session_state.messages` (The Memory):
  - `if "messages" not in st.session_state:`: This line is critical. It initializes `st.session_state.messages` as an empty list only if it doesn’t already exist, which is what lets the chat history persist across reruns.
  - Each message is stored as a dictionary: `{"role": "user" or "assistant", "content": "message text"}`.
- Displaying Previous Messages:
  - `for message in st.session_state.messages:`: This loop iterates through the entire chat history stored in `st.session_state.messages`.
  - `with st.chat_message(message["role"]):`: For each message, this creates a chat bubble; the `user` or `assistant` role automatically determines the styling.
  - `st.markdown(message["content"])`: Displays the message content inside the bubble.
- Handling User Input (`st.chat_input`):
  - `if prompt := st.chat_input("What can I help you with?"):`: The Python 3.8+ “walrus operator” assigns the value of `st.chat_input` to `prompt` and checks that `prompt` is non-empty (i.e., the user actually submitted a message).
  - When a user submits a message:
    - The user’s message is appended to `st.session_state.messages`.
    - It’s immediately displayed using `st.chat_message("user")`.
- Invoking the LLM and Displaying the Response:
  - Convert to LangChain Messages: The messages in `st.session_state` are converted into `HumanMessage` and `AIMessage` objects, which LangChain’s chat models expect.
  - Call the LLM: The `chat_prompt | llm | StrOutputParser()` chain is created and invoked.
  - Streaming for UX: `chain.stream({})` streams the LLM’s response, a much better user experience because the text appears token by token instead of only after the full response is ready. (A leaner alternative is sketched after this list.)
  - `st.empty()` and `response_container.markdown(full_response + "▌")` update the streamed text in place, creating a typing effect.
  - Update History: Once the full response is received, it’s appended to `st.session_state.messages`.
- Running the App:
  - Instructions are provided in a sidebar for convenience: save the file and run `streamlit run your_app_name.py` from your terminal.
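A side note on the streaming step: because `langchain_messages` already holds concrete message objects rather than a template, you could also skip `ChatPromptTemplate` and stream straight from the model. A small alternative sketch, using the same variable names as above:

```python
# Alternative: stream directly from the chat model; no prompt template needed
# because langchain_messages already contains HumanMessage/AIMessage objects.
full_response = ""
response_container = st.empty()
for chunk in llm.stream(langchain_messages):
    full_response += chunk.content  # chunks arrive as AIMessageChunk objects
    response_container.markdown(full_response + "▌")
response_container.markdown(full_response)
```

Note that without `StrOutputParser`, the chunks are `AIMessageChunk` objects rather than strings, so we accumulate `chunk.content` instead of the chunk itself.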
This project gives you a solid foundation for building interactive, production-ready GenAI applications. By mastering `st.session_state` and Streamlit’s chat elements, you can quickly bring your LangChain agents to life for your users.
Key Takeaway
Day 22 was all about closing the loop between our powerful LangChain backends and user interaction. We mastered Streamlit to create a dynamic, conversational UI, leveraging `st.session_state` for persistent chat history and `st.chat_message`/`st.chat_input` for a natural chat experience. This is crucial for rapid prototyping and deploying your GenAI applications, making them accessible and engaging for anyone!
