Custom Chains in LangChain
Last Updated: 04 Nov, 2025
In LangChain, chains act as step-by-step workflows that complete tasks in sequence. Each step can use an LLM to process or transform data and then interact with external tools if needed. Built-in chains are like ready-made templates for common tasks. They link together LLMs, prompts and tools using pre-defined logic, helping developers save time by avoiding manual setup for every step.
Common Built-in Chains:
- LLMChain: The most basic chain. It simply takes an input, formats it using a prompt and passes it to an LLM to get a response.
- Sequential Chains: These link multiple sub-chains together, where the output of one step automatically becomes the input for the next. This makes them useful for breaking a complex problem into smaller steps.
Example: Step 1 generates a list of travel ideas and Step 2 uses that list to create a detailed itinerary.
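The travel example above can be sketched in plain Python to show the data flow. The functions below stand in for LLM calls and the stub names are illustrative only; this is not LangChain API code:

```python
def generate_ideas(destination: str) -> str:
    """Stands in for Step 1: a chain that lists travel ideas."""
    return f"ideas for {destination}: beach, museum, food tour"

def build_itinerary(ideas: str) -> str:
    """Stands in for Step 2: a chain that expands ideas into an itinerary."""
    return f"Day-by-day plan based on [{ideas}]"

def simple_sequential_chain(steps, user_input):
    """Mimics a sequential chain: pipe each step's output into the next."""
    result = user_input
    for step in steps:
        result = step(result)
    return result

print(simple_sequential_chain([generate_ideas, build_itinerary], "Goa"))
```

The key idea is only the piping: each step receives exactly what the previous step returned, which is what SimpleSequentialChain automates.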
Limitations
- Less Flexible: Great for standard tasks but hard to modify for custom workflows.
- Can Be Slow: Workflows with many steps or external API calls may take longer to execute.
- Debugging is Hard: When something goes wrong in a long multi-step chain, figuring out where the issue is can be tricky.
Custom Chains
Custom Chains let developers design personalized workflows that connect LLMs, tools, and logic steps for advanced use cases. Unlike predefined chains, custom chains offer complete control over data flow and task execution.
- We can define how data moves between steps and what logic applies at each stage.
- Mix multiple LLM calls, Python functions and API integrations for added flexibility.
- Add branching logic to make decisions dynamically based on inputs or results.
- Combine LLMs with external tools like databases, APIs or computation modules for intelligent task handling.
Working of Custom Chains
The process begins with user input, which the LLM processes to understand the intent behind the query. The model then decides which tool to use: either a built-in tool for general operations or a custom tool for specialized logic.
1. User Input: The system receives input or a query from the user.
2. LLM Processing: The LLM interprets the intent and context of the request.
3. Tool Selection:
- Built-in Tool: Used for standard operations like data retrieval or text summarization.
- Custom Tool: Used for domain-specific or task-specific logic that requires custom computation or transformation.
4. Execution: The selected tool performs the required operation and sends the processed data back.
5. Output Generation: The final, processed information is returned as output, completing an adaptive and intelligent workflow.
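The five steps above can be sketched in plain Python, with a keyword check standing in for the LLM's intent detection and two stub functions standing in for a built-in and a custom tool. All names here are illustrative, not LangChain APIs:

```python
def builtin_summarize(text: str) -> str:
    """Stands in for a built-in tool (standard operation)."""
    return "summary: " + text[:30]

def custom_tax_calculator(text: str) -> str:
    """Stands in for a custom tool (domain-specific logic)."""
    return "tax owed: 18% of the stated amount"

def route(query: str):
    """Steps 2-3: interpret intent and select a tool."""
    if "tax" in query.lower():
        return custom_tax_calculator
    return builtin_summarize

def run_workflow(query: str) -> str:
    tool = route(query)      # Step 3: tool selection
    result = tool(query)     # Step 4: execution
    return result            # Step 5: output generation

print(run_workflow("Compute tax on an income of 5 lakh"))
print(run_workflow("Summarize this long article about space travel"))
```

In a real custom chain, the `route` step would itself be an LLM call; the control flow, however, is exactly this shape.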
Implementing Custom Chains
Let's see, step by step, how we can create custom chains in LangChain.
1. Python class wrapping built-in chains
- Here we set the Google API key for authentication and initialize the Gemini 2.5 Flash model with a temperature of 0.7 for balanced creativity.
- Two PromptTemplates are defined: one to generate a paragraph and another to summarize it.
- Each prompt is wrapped in an LLMChain for modular and reusable execution.
- A CustomChain combines both using SimpleSequentialChain, linking outputs and inputs.
- When executed, it generates and then summarizes content, showing smooth task chaining with LLMs.
Python
from langchain_core.prompts import PromptTemplate
from langchain.chains import LLMChain, SimpleSequentialChain
from langchain_google_genai import ChatGoogleGenerativeAI
import os

os.environ["GOOGLE_API_KEY"] = "API Key"

llm = ChatGoogleGenerativeAI(model="gemini-2.5-flash", temperature=0.7)

prompt1 = PromptTemplate(
    input_variables=["topic"],
    template="Write a short paragraph about {topic}."
)
prompt2 = PromptTemplate(
    input_variables=["text"],
    template="Summarize the following paragraph in one line:\n{text}"
)

chain1 = LLMChain(llm=llm, prompt=prompt1)
chain2 = LLMChain(llm=llm, prompt=prompt2)

class CustomChain:
    def __init__(self, chain1, chain2):
        self.chain1 = chain1
        self.chain2 = chain2
        self.pipeline = SimpleSequentialChain(chains=[chain1, chain2])

    def run(self, topic):
        """Run the full custom chain."""
        return self.pipeline.run(topic)

if __name__ == "__main__":
    custom_chain = CustomChain(chain1, chain2)
    result = custom_chain.run("Artificial Intelligence")
    print("Final Output:\n", result)
Output:
Artificial Intelligence (AI) is a rapidly evolving field creating machines capable of human-like intelligence, profoundly transforming modern life and prompting critical discussions about its future.
2. Subclassing Chain (or overriding its internal methods)
- Here we create a workflow that summarizes and then translates text into Hindi.
- Two PromptTemplates are defined: one for summarization and another for translation.
- The SummarizeTranslateChain class controls data flow between both steps.
- input_keys and output_keys handle input-output mapping.
- When executed, it summarizes English text and translates it into Hindi using a custom chain.
Python
import os
from langchain.chains.base import Chain
from langchain_core.prompts import PromptTemplate
from langchain.chains import LLMChain
from langchain_google_genai import ChatGoogleGenerativeAI

os.environ["GOOGLE_API_KEY"] = "API KEY"

llm = ChatGoogleGenerativeAI(model="gemini-2.5-flash", temperature=0.7)

summary_prompt = PromptTemplate(
    input_variables=["text"],
    template="Summarize the following paragraph in one line:\n{text}"
)
translate_prompt = PromptTemplate(
    input_variables=["summary"],
    template="Translate the following English sentence into Hindi:\n{summary}"
)

class SummarizeTranslateChain(Chain):
    summarize_chain: LLMChain
    translate_chain: LLMChain

    @property
    def input_keys(self):
        return ["text"]

    @property
    def output_keys(self):
        return ["translated_text"]

    def _call(self, inputs):
        """Custom logic: summarize, then translate."""
        text = inputs["text"]
        summary = self.summarize_chain.run(text)
        translated = self.translate_chain.run(summary)
        return {"translated_text": translated}

if __name__ == "__main__":
    summarize_chain = LLMChain(llm=llm, prompt=summary_prompt)
    translate_chain = LLMChain(llm=llm, prompt=translate_prompt)

    custom_chain = SummarizeTranslateChain(
        summarize_chain=summarize_chain,
        translate_chain=translate_chain
    )

    result = custom_chain.run(
        "Artificial Intelligence helps machines learn from data and make intelligent decisions, enabling them to perform tasks like understanding language, recognizing images, and predicting outcomes. It continues to evolve, making technology smarter and more adaptive to human needs."
    )
    print("Final Output:\n", result)
Output:
कृत्रिम बुद्धिमत्ता मशीनों को सीखने, बुद्धिमत्तापूर्ण निर्णय लेने और विभिन्न कार्य करने में सक्षम बनाती है, जो मानवीय आवश्यकताओं के लिए अधिक स्मार्ट तथा अधिक अनुकूलनीय तकनीक बनाने हेतु लगातार विकसित होती रहती है।
3. Custom Step with Runnable
- Here we create a number guessing game workflow.
- Two PromptTemplates are defined: one for guessing and one for revealing the number.
- A RunnableSequence connects both steps for data flow.
- Each step uses the pipe (|) operator to link the PromptTemplate with the LLM.
- When executed, it generates a random number and the model guesses it, showing custom step execution with Runnable.
Python
from langchain_core.prompts import PromptTemplate
from langchain_google_genai import ChatGoogleGenerativeAI
import random

llm = ChatGoogleGenerativeAI(model="gemini-2.5-flash", temperature=0.8)

guess_prompt = PromptTemplate(
    input_variables=["hint"],
    template="Guess a number close to {hint} between 1 and 10."
)
response_prompt = PromptTemplate(
    input_variables=["guess"],
    template="Player guessed {guess}. Reveal the actual number."
)

guess_chain = guess_prompt | llm
response_chain = response_prompt | llm

# Piping the steps together builds a RunnableSequence; the lambda maps the
# first step's message into the input key the second prompt expects.
custom_chain = guess_chain | (lambda msg: {"guess": msg.content}) | response_chain

number = random.randint(1, 10)
print(f"The actual number: {number}")

result = custom_chain.invoke({"hint": number})
print("\nFinal Output:\n", result.content)
Output:
The actual number: 7
Final Output: Okay, the player guessed 6.
What was the actual number? I need that information to tell you if the guess was correct or how close it was!
4. Dynamic (self-constructing) chains
- Here we build a Dynamic Math Quiz using LangChain and Gemini 2.5 Flash.
- Two PromptTemplates are created: one for checking the user's answer and another for generating the next question.
- Each question and check chain is built dynamically at runtime based on the player’s progress.
- The quiz adapts level by level, generating new challenges when the user answers correctly.
- This demonstrates dynamic chaining, where new LLM chains are constructed and executed on-the-fly instead of being predefined.
Python
from langchain_google_genai import ChatGoogleGenerativeAI
from langchain_core.prompts import PromptTemplate
from langchain.chains import LLMChain

class SimpleDynamicQuiz:
    def __init__(self, llm, max_levels=4):
        self.llm = llm
        self.level = 1
        self.max_levels = max_levels

    def create_check_chain(self, question):
        prompt = PromptTemplate(
            input_variables=["answer"],
            template=(
                f"You are a quiz checker at Level {self.level}.\n"
                f"Question: {question}\n"
                f"Player's answer: {{answer}}\n"
                "Respond only with 'Correct' or 'Incorrect'."
            )
        )
        return LLMChain(llm=self.llm, prompt=prompt)

    def create_next_question_chain(self):
        prompt = PromptTemplate(
            input_variables=["level"],
            template=(
                "You are a quiz generator AI.\n"
                "Generate one short and simple question for Level {level}.\n"
                "Make it a basic math question (like addition, subtraction, multiplication, or division).\n"
                "Do not include any story or description.\n"
                "Respond only with the question text."
            )
        )
        return LLMChain(llm=self.llm, prompt=prompt)

    def play(self, first_question):
        current_question = first_question
        print("Welcome to the 4-Level Dynamic Math Quiz!\n")
        while self.level <= self.max_levels:
            print(f"Level {self.level}: {current_question}")
            user_answer = input("Your answer: ").strip()
            check_chain = self.create_check_chain(current_question)
            result = check_chain.run({"answer": user_answer}).strip()
            print(f"Quiz Master: {result}")
            if "incorrect" in result.lower():
                print("Game over. Try again next time!")
                break
            if self.level == self.max_levels:
                print("Congratulations! You completed all 4 levels!")
                break
            next_q_chain = self.create_next_question_chain()
            next_question = next_q_chain.run({"level": str(self.level + 1)}).strip()
            self.level += 1
            current_question = next_question
5. Testing
Python
if __name__ == "__main__":
    llm = ChatGoogleGenerativeAI(model="gemini-2.5-flash", temperature=0.5)
    game = SimpleDynamicQuiz(llm, max_levels=4)
    game.play("What is 5 + 3?")
Output: The quiz runs interactively; the console output depends on the player's answers at each level.
Applications
- Multi-Step Automation: Automate sequential NLP tasks like summarization, translation, sentiment analysis using custom logic.
- Dynamic Workflow Creation: Build adaptive pipelines that modify steps based on user input or data type like text, code, resume, etc.
- Domain-Specific Assistants: Create specialized AI assistants like HR resume screener, job review analyzer or salary insight generator.
- Custom Decision Flows: Implement branching logic where each output determines the next step which is ideal for chatbots, feedback analyzers or content reviewers.
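As a minimal sketch of such a decision flow, the plain Python below branches on a stand-in classifier's output to pick the next step, as a feedback analyzer might. The function names and logic are illustrative, not LangChain APIs:

```python
def classify_feedback(text: str) -> str:
    """Stands in for an LLM sentiment classifier."""
    return "negative" if "bad" in text.lower() else "positive"

def draft_apology(text: str) -> str:
    return "Sorry to hear that - a support agent will follow up."

def draft_thanks(text: str) -> str:
    return "Thanks for the kind words!"

def feedback_chain(text: str) -> str:
    sentiment = classify_feedback(text)   # step 1: classify
    if sentiment == "negative":           # branch on the result
        return draft_apology(text)
    return draft_thanks(text)

print(feedback_chain("The checkout flow was bad and slow"))
print(feedback_chain("Great product, love it"))
```

In a real pipeline the classifier and both responders would each be LLM chains, but the branch-on-output structure stays the same.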