LangChain Study Notes: Querying the Weather Automatically with Agents

Table of Contents
1. LangChain tools
2. Python Agent
3. Custom tools
3.1 Querying the date
3.2 Querying the weather
Adapted from https://learn.deeplearning.ai/langchain

1. LangChain tools

from config import api_type, api_key, api_base, api_version, model_name
from langchain.chat_models import AzureChatOpenAI
from langchain.agents.agent_toolkits import create_python_agent
from langchain.agents import load_tools, initialize_agent
from langchain.agents import AgentType
from langchain.tools.python.tool import PythonREPLTool
from langchain.python import PythonREPL
from langchain.chat_models import ChatOpenAI

llm = AzureChatOpenAI(
    openai_api_base=api_base,
    openai_api_version=api_version,
    deployment_name=model_name,
    openai_api_key=api_key,
    openai_api_type=api_type,
    temperature=0.5,
)

tools = load_tools(["llm-math", "wikipedia", "terminal"], llm=llm)
agent = initialize_agent(
    tools,
    llm,
    agent=AgentType.CHAT_ZERO_SHOT_REACT_DESCRIPTION,
    handle_parsing_errors=True,
    verbose=True)

agent("300的四分之一是多少?")

A set of built-in tools is loaded here, and the agent decides on its own which one to use.

Asking it a simple division question produces the following output:

The math tool

> Entering new AgentExecutor chain...
Question: 300的四分之一是多少?
Thought: We can easily solve this problem using a calculator.
Action:
```
{
  "action": "Calculator",
  "action_input": "300/4"
}
```

Observation: Answer: 75.0
Thought:The calculator tool was able to solve the problem easily and gave us the answer.
Final Answer: 300的四分之一是75。

> Finished chain.
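The Action blob in the trace above is plain JSON. As a rough sketch of what the agent executor does with it (not LangChain's actual implementation), the Calculator step amounts to decoding the blob and evaluating the arithmetic expression:

```python
import json

# The action blob the agent emitted in the trace above.
raw = '{"action": "Calculator", "action_input": "300/4"}'

action = json.loads(raw)
assert action["action"] == "Calculator"

# The llm-math tool ultimately evaluates the arithmetic expression.
result = eval(action["action_input"])  # trusted demo input only
print(result)  # 75.0
```

This is why the observation in the log reads `Answer: 75.0`: the tool returns a float, and the model rounds it in its final answer.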
The terminal tool

question = "列出当前目录下的所有文件,应该使用什么命令?"
result = agent(question)
print(result)
Asking a shell-command question produces:

> Entering new AgentExecutor chain...
Question: 列出当前目录下的所有文件,应该使用什么命令?
Thought: 我需要使用一个能够列出当前目录下所有文件的命令。
Action:
```
{
  "action": "terminal",
  "action_input": "ls"
}
```
/opt/envs/rasa/lib/python3.8/site-packages/langchain/tools/shell/tool.py:33: UserWarning: The shell tool has no safeguards by default. Use at your own risk.
  warnings.warn(

Observation: chains.py
config.py
demo.py
goods.csv
langchain1.py
langchain2.py
langchain3.py
langchain4.py
langchain5.py
people.csv
__pycache__

Thought:我已经成功列出了当前目录下的所有文件。
Final Answer: 可以使用"ls"命令来列出当前目录下的所有文件。

> Finished chain.
{'input': '列出当前目录下的所有文件,应该使用什么命令?', 'output': '可以使用"ls"命令来列出当前目录下的所有文件。'}
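As the `UserWarning` in the log points out, the terminal tool has no safeguards: it hands the model-generated string straight to a shell. Conceptually (a simplified sketch, not LangChain's actual `ShellTool` code) the tool boils down to:

```python
import subprocess

def run_shell(command: str) -> str:
    """Run a shell command and return its stdout, as the terminal tool does."""
    completed = subprocess.run(
        command, shell=True, capture_output=True, text=True
    )
    return completed.stdout

# The agent above produced "ls" as the action input; any command works.
print(run_shell("echo hello"))
```

Because arbitrary model output is executed, this tool should only be enabled in a sandboxed environment.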
2. Python Agent

agent = create_python_agent(
    llm,
    tool=PythonREPLTool(),
    verbose=True
)
import langchain
langchain.debug=True

customer_list = [["Harrison", "Chase"],
                 ["Lang", "Chain"],
                 ["Dolly", "Too"],
                 ["Elle", "Elem"],
                 ["Geoff","Fusion"],
                 ["Trance","Former"],
                 ["Jen","Ayai"]
                ]
res = agent.run(f"""对这些顾客的姓名进行排序:先是姓,再是名字,并打印最终排序结果:{customer_list}""")
print(res)
Output:

/opt/bdp/data01/anaconda3/envs/rasa/bin/python /opt/bdp/data01/chenmingming/gpt/langchain5.py 
[chain/start] [1:chain:AgentExecutor] Entering Chain run with input:
{
  "input": "对这些顾客的姓名进行排序:先是姓,再是名字,并打印最终排序结果:[['Harrison', 'Chase'], ['Lang', 'Chain'], ['Dolly', 'Too'], ['Elle', 'Elem'], ['Geoff', 'Fusion'], ['Trance', 'Former'], ['Jen', 'Ayai']]"
}
[chain/start] [1:chain:AgentExecutor > 2:chain:LLMChain] Entering Chain run with input:
{
  "input": "对这些顾客的姓名进行排序:先是姓,再是名字,并打印最终排序结果:[['Harrison', 'Chase'], ['Lang', 'Chain'], ['Dolly', 'Too'], ['Elle', 'Elem'], ['Geoff', 'Fusion'], ['Trance', 'Former'], ['Jen', 'Ayai']]",
  "agent_scratchpad": "",
  "stop": [
    "\nObservation:",
    "\n\tObservation:"
  ]
}
[llm/start] [1:chain:AgentExecutor > 2:chain:LLMChain > 3:llm:AzureChatOpenAI] Entering LLM run with input:
{
  "prompts": [
    "Human: You are an agent designed to write and execute python code to answer questions.\nYou have access to a python REPL, which you can use to execute python code.\nIf you get an error, debug your code and try again.\nOnly use the output of your code to answer the question. \nYou might know the answer without running any code, but you should still run the code to get the answer.\nIf it does not seem like you can write code to answer the question, just return \"I don't know\" as the answer.\n\n\nPython_REPL: A Python shell. Use this to execute python commands. Input should be a valid python command. If you want to see the output of a value, you should print it out with `print(...)`.\n\nUse the following format:\n\nQuestion: the input question you must answer\nThought: you should always think about what to do\nAction: the action to take, should be one of [Python_REPL]\nAction Input: the input to the action\nObservation: the result of the action\n... (this Thought/Action/Action Input/Observation can repeat N times)\nThought: I now know the final answer\nFinal Answer: the final answer to the original input question\n\nBegin!\n\nQuestion: 对这些顾客的姓名进行排序:先是姓,再是名字,并打印最终排序结果:[['Harrison', 'Chase'], ['Lang', 'Chain'], ['Dolly', 'Too'], ['Elle', 'Elem'], ['Geoff', 'Fusion'], ['Trance', 'Former'], ['Jen', 'Ayai']]\nThought:"
  ]
}
[llm/end] [1:chain:AgentExecutor > 2:chain:LLMChain > 3:llm:AzureChatOpenAI] [5.13s] Exiting LLM run with output:
{
  "generations": [
    [
      {
        "text": "I need to sort the list of names by last name first, then by first name. I can use the sorted() function with a key parameter to specify the sorting order.\nAction: Python_REPL\nAction Input: sorted([['Harrison', 'Chase'], ['Lang', 'Chain'], ['Dolly', 'Too'], ['Elle', 'Elem'], ['Geoff', 'Fusion'], ['Trance', 'Former'], ['Jen', 'Ayai']], key=lambda x: (x[0], x[1]))",
        "generation_info": {
          "finish_reason": "stop"
        },
        "message": {
          "lc": 1,
          "type": "constructor",
          "id": [
            "langchain",
            "schema",
            "messages",
            "AIMessage"
          ],
          "kwargs": {
            "content": "I need to sort the list of names by last name first, then by first name. I can use the sorted() function with a key parameter to specify the sorting order.\nAction: Python_REPL\nAction Input: sorted([['Harrison', 'Chase'], ['Lang', 'Chain'], ['Dolly', 'Too'], ['Elle', 'Elem'], ['Geoff', 'Fusion'], ['Trance', 'Former'], ['Jen', 'Ayai']], key=lambda x: (x[0], x[1]))",
            "additional_kwargs": {}
          }
        }
      }
    ]
  ],
  "llm_output": {
    "token_usage": {
      "completion_tokens": 110,
      "prompt_tokens": 344,
      "total_tokens": 454
    },
    "model_name": "gpt-3.5-turbo"
  },
  "run": null
}
[chain/end] [1:chain:AgentExecutor > 2:chain:LLMChain] [5.13s] Exiting Chain run with output:
{
  "text": "I need to sort the list of names by last name first, then by first name. I can use the sorted() function with a key parameter to specify the sorting order.\nAction: Python_REPL\nAction Input: sorted([['Harrison', 'Chase'], ['Lang', 'Chain'], ['Dolly', 'Too'], ['Elle', 'Elem'], ['Geoff', 'Fusion'], ['Trance', 'Former'], ['Jen', 'Ayai']], key=lambda x: (x[0], x[1]))"
}
[tool/start] [1:chain:AgentExecutor > 4:tool:Python_REPL] Entering Tool run with input:
"sorted([['Harrison', 'Chase'], ['Lang', 'Chain'], ['Dolly', 'Too'], ['Elle', 'Elem'], ['Geoff', 'Fusion'], ['Trance', 'Former'], ['Jen', 'Ayai']], key=lambda x: (x[0], x[1]))"
[tool/end] [1:chain:AgentExecutor > 4:tool:Python_REPL] [1.4369999999999998ms] Exiting Tool run with output:
""
[chain/start] [1:chain:AgentExecutor > 5:chain:LLMChain] Entering Chain run with input:
{
  "input": "对这些顾客的姓名进行排序:先是姓,再是名字,并打印最终排序结果:[['Harrison', 'Chase'], ['Lang', 'Chain'], ['Dolly', 'Too'], ['Elle', 'Elem'], ['Geoff', 'Fusion'], ['Trance', 'Former'], ['Jen', 'Ayai']]",
  "agent_scratchpad": "I need to sort the list of names by last name first, then by first name. I can use the sorted() function with a key parameter to specify the sorting order.\nAction: Python_REPL\nAction Input: sorted([['Harrison', 'Chase'], ['Lang', 'Chain'], ['Dolly', 'Too'], ['Elle', 'Elem'], ['Geoff', 'Fusion'], ['Trance', 'Former'], ['Jen', 'Ayai']], key=lambda x: (x[0], x[1]))\nObservation: \nThought:",
  "stop": [
    "\nObservation:",
    "\n\tObservation:"
  ]
}
[llm/start] [1:chain:AgentExecutor > 5:chain:LLMChain > 6:llm:AzureChatOpenAI] Entering LLM run with input:
{
  "prompts": [
    "Human: You are an agent designed to write and execute python code to answer questions.\nYou have access to a python REPL, which you can use to execute python code.\nIf you get an error, debug your code and try again.\nOnly use the output of your code to answer the question. \nYou might know the answer without running any code, but you should still run the code to get the answer.\nIf it does not seem like you can write code to answer the question, just return \"I don't know\" as the answer.\n\n\nPython_REPL: A Python shell. Use this to execute python commands. Input should be a valid python command. If you want to see the output of a value, you should print it out with `print(...)`.\n\nUse the following format:\n\nQuestion: the input question you must answer\nThought: you should always think about what to do\nAction: the action to take, should be one of [Python_REPL]\nAction Input: the input to the action\nObservation: the result of the action\n... (this Thought/Action/Action Input/Observation can repeat N times)\nThought: I now know the final answer\nFinal Answer: the final answer to the original input question\n\nBegin!\n\nQuestion: 对这些顾客的姓名进行排序:先是姓,再是名字,并打印最终排序结果:[['Harrison', 'Chase'], ['Lang', 'Chain'], ['Dolly', 'Too'], ['Elle', 'Elem'], ['Geoff', 'Fusion'], ['Trance', 'Former'], ['Jen', 'Ayai']]\nThought:I need to sort the list of names by last name first, then by first name. I can use the sorted() function with a key parameter to specify the sorting order.\nAction: Python_REPL\nAction Input: sorted([['Harrison', 'Chase'], ['Lang', 'Chain'], ['Dolly', 'Too'], ['Elle', 'Elem'], ['Geoff', 'Fusion'], ['Trance', 'Former'], ['Jen', 'Ayai']], key=lambda x: (x[0], x[1]))\nObservation: \nThought:"
  ]
}
[llm/end] [1:chain:AgentExecutor > 5:chain:LLMChain > 6:llm:AzureChatOpenAI] [2.84s] Exiting LLM run with output:
{
  "generations": [
    [
      {
        "text": "The sorted() function sorted the list of names by last name first, then by first name. I need to print the final sorted list.\nAction: Python_REPL\nAction Input: print(sorted([['Harrison', 'Chase'], ['Lang', 'Chain'], ['Dolly', 'Too'], ['Elle', 'Elem'], ['Geoff', 'Fusion'], ['Trance', 'Former'], ['Jen', 'Ayai']], key=lambda x: (x[0], x[1])))",
        "generation_info": {
          "finish_reason": "stop"
        },
        "message": {
          "lc": 1,
          "type": "constructor",
          "id": [
            "langchain",
            "schema",
            "messages",
            "AIMessage"
          ],
          "kwargs": {
            "content": "The sorted() function sorted the list of names by last name first, then by first name. I need to print the final sorted list.\nAction: Python_REPL\nAction Input: print(sorted([['Harrison', 'Chase'], ['Lang', 'Chain'], ['Dolly', 'Too'], ['Elle', 'Elem'], ['Geoff', 'Fusion'], ['Trance', 'Former'], ['Jen', 'Ayai']], key=lambda x: (x[0], x[1])))",
            "additional_kwargs": {}
          }
        }
      }
    ]
  ],
  "llm_output": {
    "token_usage": {
      "completion_tokens": 104,
      "prompt_tokens": 459,
      "total_tokens": 563
    },
    "model_name": "gpt-3.5-turbo"
  },
  "run": null
}
[chain/end] [1:chain:AgentExecutor > 5:chain:LLMChain] [2.85s] Exiting Chain run with output:
{
  "text": "The sorted() function sorted the list of names by last name first, then by first name. I need to print the final sorted list.\nAction: Python_REPL\nAction Input: print(sorted([['Harrison', 'Chase'], ['Lang', 'Chain'], ['Dolly', 'Too'], ['Elle', 'Elem'], ['Geoff', 'Fusion'], ['Trance', 'Former'], ['Jen', 'Ayai']], key=lambda x: (x[0], x[1])))"
}
[tool/start] [1:chain:AgentExecutor > 7:tool:Python_REPL] Entering Tool run with input:
"print(sorted([['Harrison', 'Chase'], ['Lang', 'Chain'], ['Dolly', 'Too'], ['Elle', 'Elem'], ['Geoff', 'Fusion'], ['Trance', 'Former'], ['Jen', 'Ayai']], key=lambda x: (x[0], x[1])))"
[tool/end] [1:chain:AgentExecutor > 7:tool:Python_REPL] [0.641ms] Exiting Tool run with output:
"[['Dolly', 'Too'], ['Elle', 'Elem'], ['Geoff', 'Fusion'], ['Harrison', 'Chase'], ['Jen', 'Ayai'], ['Lang', 'Chain'], ['Trance', 'Former']]"
[chain/start] [1:chain:AgentExecutor > 8:chain:LLMChain] Entering Chain run with input:
{
  "input": "对这些顾客的姓名进行排序:先是姓,再是名字,并打印最终排序结果:[['Harrison', 'Chase'], ['Lang', 'Chain'], ['Dolly', 'Too'], ['Elle', 'Elem'], ['Geoff', 'Fusion'], ['Trance', 'Former'], ['Jen', 'Ayai']]",
  "agent_scratchpad": "I need to sort the list of names by last name first, then by first name. I can use the sorted() function with a key parameter to specify the sorting order.\nAction: Python_REPL\nAction Input: sorted([['Harrison', 'Chase'], ['Lang', 'Chain'], ['Dolly', 'Too'], ['Elle', 'Elem'], ['Geoff', 'Fusion'], ['Trance', 'Former'], ['Jen', 'Ayai']], key=lambda x: (x[0], x[1]))\nObservation: \nThought:The sorted() function sorted the list of names by last name first, then by first name. I need to print the final sorted list.\nAction: Python_REPL\nAction Input: print(sorted([['Harrison', 'Chase'], ['Lang', 'Chain'], ['Dolly', 'Too'], ['Elle', 'Elem'], ['Geoff', 'Fusion'], ['Trance', 'Former'], ['Jen', 'Ayai']], key=lambda x: (x[0], x[1])))\nObservation: [['Dolly', 'Too'], ['Elle', 'Elem'], ['Geoff', 'Fusion'], ['Harrison', 'Chase'], ['Jen', 'Ayai'], ['Lang', 'Chain'], ['Trance', 'Former']]\n\nThought:",
  "stop": [
    "\nObservation:",
    "\n\tObservation:"
  ]
}
[llm/start] [1:chain:AgentExecutor > 8:chain:LLMChain > 9:llm:AzureChatOpenAI] Entering LLM run with input:
{
  "prompts": [
"Human: You are an agent designed to write and execute python code to answer questions.\nYou have access to a python REPL, which you can use to execute python code.\nIf you get an error, debug your code and try again.\nOnly use the output of your code to answer the question. \nYou might know the answer without running any code, but you should still run the code to get the answer.\nIf it does not seem like you can write code to answer the question, just return \"I don't know\" as the answer.\n\n\nPython_REPL: A Python shell. Use this to execute python commands. Input should be a valid python command. If you want to see the output of a value, you should print it out with `print(...)`.\n\nUse the following format:\n\nQuestion: the input question you must answer\nThought: you should always think about what to do\nAction: the action to take, should be one of [Python_REPL]\nAction Input: the input to the action\nObservation: the result of the action\n... (this Thought/Action/Action Input/Observation can repeat N times)\nThought: I now know the final answer\nFinal Answer: the final answer to the original input question\n\nBegin!\n\nQuestion: 对这些顾客的姓名进行排序:先是姓,再是名字,并打印最终排序结果:[['Harrison', 'Chase'], ['Lang', 'Chain'], ['Dolly', 'Too'], ['Elle', 'Elem'], ['Geoff', 'Fusion'], ['Trance', 'Former'], ['Jen', 'Ayai']]\nThought:I need to sort the list of names by last name first, then by first name. I can use the sorted() function with a key parameter to specify the sorting order.\nAction: Python_REPL\nAction Input: sorted([['Harrison', 'Chase'], ['Lang', 'Chain'], ['Dolly', 'Too'], ['Elle', 'Elem'], ['Geoff', 'Fusion'], ['Trance', 'Former'], ['Jen', 'Ayai']], key=lambda x: (x[0], x[1]))\nObservation: \nThought:The sorted() function sorted the list of names by last name first, then by first name. I need to print the final sorted list.\nAction: Python_REPL\nAction Input: print(sorted([['Harrison', 'Chase'], ['Lang', 'Chain'], ['Dolly', 'Too'], ['Elle', 'Elem'], ['Geoff', 'Fusion'], ['Trance', 'Former'], ['Jen', 'Ayai']], key=lambda x: (x[0], x[1])))\nObservation: [['Dolly', 'Too'], ['Elle', 'Elem'], ['Geoff', 'Fusion'], ['Harrison', 'Chase'], ['Jen', 'Ayai'], ['Lang', 'Chain'], ['Trance', 'Former']]\n\nThought:"
  ]
}
[llm/end] [1:chain:AgentExecutor > 8:chain:LLMChain > 9:llm:AzureChatOpenAI] [3.69s] Exiting LLM run with output:
{
  "generations": [
    [
      {
        "text": "The final sorted list is [['Dolly', 'Too'], ['Elle', 'Elem'], ['Geoff', 'Fusion'], ['Harrison', 'Chase'], ['Jen', 'Ayai'], ['Lang', 'Chain'], ['Trance', 'Former']]\nFinal Answer: [['Dolly', 'Too'], ['Elle', 'Elem'], ['Geoff', 'Fusion'], ['Harrison', 'Chase'], ['Jen', 'Ayai'], ['Lang', 'Chain'], ['Trance', 'Former']]",
        "generation_info": {
          "finish_reason": "stop"
        },
        "message": {
          "lc": 1,
          "type": "constructor",
          "id": [
            "langchain",
            "schema",
            "messages",
            "AIMessage"
          ],
          "kwargs": {
            "content": "The final sorted list is [['Dolly', 'Too'], ['Elle', 'Elem'], ['Geoff', 'Fusion'], ['Harrison', 'Chase'], ['Jen', 'Ayai'], ['Lang', 'Chain'], ['Trance', 'Former']]\nFinal Answer: [['Dolly', 'Too'], ['Elle', 'Elem'], ['Geoff', 'Fusion'], ['Harrison', 'Chase'], ['Jen', 'Ayai'], ['Lang', 'Chain'], ['Trance', 'Former']]",
            "additional_kwargs": {}
          }
        }
      }
    ]
  ],
  "llm_output": {
    "token_usage": {
      "completion_tokens": 110,
      "prompt_tokens": 619,
      "total_tokens": 729
    },
    "model_name": "gpt-3.5-turbo"
  },
  "run": null
}
[chain/end] [1:chain:AgentExecutor > 8:chain:LLMChain] [3.69s] Exiting Chain run with output:
{
  "text": "The final sorted list is [['Dolly', 'Too'], ['Elle', 'Elem'], ['Geoff', 'Fusion'], ['Harrison', 'Chase'], ['Jen', 'Ayai'], ['Lang', 'Chain'], ['Trance', 'Former']]\nFinal Answer: [['Dolly', 'Too'], ['Elle', 'Elem'], ['Geoff', 'Fusion'], ['Harrison', 'Chase'], ['Jen', 'Ayai'], ['Lang', 'Chain'], ['Trance', 'Former']]"
}
[chain/end] [1:chain:AgentExecutor] [11.67s] Exiting Chain run with output:
{
  "output": "[['Dolly', 'Too'], ['Elle', 'Elem'], ['Geoff', 'Fusion'], ['Harrison', 'Chase'], ['Jen', 'Ayai'], ['Lang', 'Chain'], ['Trance', 'Former']]"
}
[['Dolly', 'Too'], ['Elle', 'Elem'], ['Geoff', 'Fusion'], ['Harrison', 'Chase'], ['Jen', 'Ayai'], ['Lang', 'Chain'], ['Trance', 'Former']]
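Stripped of the agent machinery, the code the Python agent converged on in the trace above is ordinary Python; the whole task reduces to a single `sorted` call:

```python
customer_list = [["Harrison", "Chase"], ["Lang", "Chain"], ["Dolly", "Too"],
                 ["Elle", "Elem"], ["Geoff", "Fusion"], ["Trance", "Former"],
                 ["Jen", "Ayai"]]

# Sort by the first element, then the second - exactly the key the agent chose.
sorted_customers = sorted(customer_list, key=lambda x: (x[0], x[1]))
print(sorted_customers)
# [['Dolly', 'Too'], ['Elle', 'Elem'], ['Geoff', 'Fusion'], ['Harrison', 'Chase'],
#  ['Jen', 'Ayai'], ['Lang', 'Chain'], ['Trance', 'Former']]
```

Note that the agent's thought says "sort by last name first" while its key `(x[0], x[1])` actually sorts by the first element of each pair; the tuple key is what determines the order.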
3. Custom tools
3.1 Querying the date

from langchain.agents import tool
from datetime import date


@tool
def time(text: str) -> str:
    """返回今天的日期,将其用于与今天的日期相关的任何问题。
    输入应始终为空字符串, 此函数将始终返回今天的date,任何日期的数学运算都应在此函数之外发生"""
    return str(date.today())


agent1 = initialize_agent(
    tools + [time],
    llm,
    agent=AgentType.CHAT_ZERO_SHOT_REACT_DESCRIPTION,
    handle_parsing_errors=True,
    verbose=True)
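3.2 Querying the weather

The original section 3.2 is missing from this copy of the article. Following the pattern of the date tool above, a weather tool can be sketched the same way. Everything in the block below is an assumption for illustration: the city-to-forecast lookup is a hard-coded stub standing in for a real weather API call, and in the article's setup the function would be decorated with `@tool` (from `langchain.agents`) and registered via `tools + [time, weather]`.

```python
# In the agent setup this function would be decorated with @tool, like the
# date tool above, so the LLM can select it from its docstring.
def weather(city: str) -> str:
    """返回指定城市的天气。Stub: a real tool would call a weather API here."""
    fake_forecasts = {          # hypothetical canned data for illustration
        "北京": "晴, 30°C",
        "上海": "多云, 27°C",
    }
    return fake_forecasts.get(city, "未知城市")

print(weather("北京"))  # 晴, 30°C
```

With the real API call in place, asking the agent "北京今天的天气怎么样?" would make it pick this tool by its docstring, exactly as it picked the date tool.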
