Personal Summary - LangChain4j Applications (Complete)
GitHub:
https://github.com/langchain4j/langchain4j/releases
Official documentation:
https://docs.langchain4j.dev/intro
Brief introduction:
LangChain4j is a framework designed to simplify the integration of large language models (LLMs) into Java applications.
Classification
-
Purpose: classify user questions into categories.
```java
import java.util.Arrays;
import java.util.List;
import java.util.Map;

import dev.langchain4j.classification.EmbeddingModelTextClassifier;
import dev.langchain4j.classification.TextClassifier;
import dev.langchain4j.model.embedding.AllMiniLmL6V2EmbeddingModel;
import dev.langchain4j.model.embedding.EmbeddingModel;

// Customer-service categories: order status, technical support, user feedback
enum CustomerServiceCategory { 订单状态, 技术支持, 用户反馈 }

public class ClassificationExample {

    public static void main(String[] args) {
        Map<CustomerServiceCategory, List<String>> examples = Map.of(
                CustomerServiceCategory.订单状态, Arrays.asList(
                        "我的订单在哪里?",      // "Where is my order?"
                        "更新一下发货状态。"),    // "Please update the shipping status."
                CustomerServiceCategory.技术支持, Arrays.asList(
                        "应用程序启动时崩溃。",   // "The app crashes on startup."
                        "连接服务器出现问题。"),  // "There is a problem connecting to the server."
                CustomerServiceCategory.用户反馈, Arrays.asList(
                        "服务很好,谢谢!",      // "Great service, thanks!"
                        "客服需要改进。"));      // "Customer support needs improvement."

        // Use a pre-trained embedding model
        EmbeddingModel embeddingModel = new AllMiniLmL6V2EmbeddingModel();

        // Create the classifier
        TextClassifier<CustomerServiceCategory> classifier =
                new EmbeddingModelTextClassifier<>(embeddingModel, examples);

        // Classify a new message ("Why is the app so slow?")
        List<CustomerServiceCategory> categories = classifier.classify("为什么应用程序这么慢?");

        // Print the result
        System.out.println(categories); // [技术支持]
    }
}
```
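To make the mechanism behind this concrete: an embedding-based classifier embeds each labeled example, embeds the query, and picks the category whose examples are most similar by cosine similarity. Below is a minimal plain-Java sketch of that idea (not the real `EmbeddingModelTextClassifier` implementation; the 2-D vectors are made-up stand-ins for real embedding-model output):

```java
import java.util.List;
import java.util.Map;

// Toy sketch of embedding-based classification: each category is represented
// by embeddings of its example sentences, and a query is assigned to the
// category whose examples have the highest mean cosine similarity.
public class NearestCategory {

    // Cosine similarity between two vectors of equal length.
    static double cosine(double[] a, double[] b) {
        double dot = 0, na = 0, nb = 0;
        for (int i = 0; i < a.length; i++) {
            dot += a[i] * b[i];
            na += a[i] * a[i];
            nb += b[i] * b[i];
        }
        return dot / (Math.sqrt(na) * Math.sqrt(nb));
    }

    // Return the category whose example vectors are most similar to the query.
    static String classify(double[] query, Map<String, List<double[]>> examples) {
        String best = null;
        double bestScore = Double.NEGATIVE_INFINITY;
        for (Map.Entry<String, List<double[]>> entry : examples.entrySet()) {
            double score = entry.getValue().stream()
                    .mapToDouble(v -> cosine(query, v))
                    .average().orElse(0);
            if (score > bestScore) {
                bestScore = score;
                best = entry.getKey();
            }
        }
        return best;
    }

    public static void main(String[] args) {
        Map<String, List<double[]>> examples = Map.of(
                "TECHNICAL_SUPPORT", List.of(new double[]{0.9, 0.1}, new double[]{0.8, 0.2}),
                "ORDER_STATUS", List.of(new double[]{0.1, 0.9}, new double[]{0.2, 0.8}));
        // A query vector close to the TECHNICAL_SUPPORT examples.
        System.out.println(classify(new double[]{0.85, 0.15}, examples)); // prints TECHNICAL_SUPPORT
    }
}
```

The real classifier does the same thing, except the vectors come from the embedding model rather than being hard-coded.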
Image Models
-
Generating images
```java
// Build the ImageModel
ImageModel model = OpenAiImageModel.builder()
        .apiKey(ApiKeys.OPENAI_API_KEY)
        .modelName(DALL_E_3)
        .build();

// Generate an image ("sunset at dusk, Japanese anime style")
Response<Image> response = model.generate("黄昏日落,日式动漫风格");
System.out.println(response.content().url());
```
-
Using an image as input
```java
ChatLanguageModel model = OpenAiChatModel.builder()
        .apiKey(System.getenv("OPENAI_API_KEY"))
        .modelName(GPT_4_VISION_PREVIEW)
        .maxTokens(50)
        .build();

UserMessage userMessage = UserMessage.from(
        TextContent.from("你看到了什么?"), // "What do you see?"
        ImageContent.from("https://upload.wikimedia.org/wikipedia/commons/4/47/PNG_transparency_demonstration_1.png"));

Response<AiMessage> response = model.generate(userMessage);
System.out.println(response.content().text());
```
Spring Boot Integration
-
Environment: requires Java 17 and Spring Boot 3.2.
-
Basic usage:
-
Add the dependency
```xml
<dependency>
    <groupId>dev.langchain4j</groupId>
    <artifactId>langchain4j-open-ai-spring-boot-starter</artifactId>
    <version>0.35.0</version>
</dependency>
```
-
Configurable properties (depending on the model provider):
```properties
# synchronous
langchain4j.open-ai.chat-model.api-key=${OPENAI_API_KEY}
# streaming
langchain4j.open-ai.streaming-chat-model.api-key=${OPENAI_API_KEY}
langchain4j.open-ai.chat-model.model-name=gpt-4o
langchain4j.open-ai.chat-model.log-requests=true
langchain4j.open-ai.chat-model.log-responses=true
```
-
The starter automatically creates and wires an OpenAiChatModel instance (an implementation of ChatLanguageModel):
```java
@RestController
public class ChatController {

    ChatLanguageModel chatLanguageModel;
    StreamingChatLanguageModel streamingChatLanguageModel;

    // Constructor injection
    public ChatController(ChatLanguageModel chatLanguageModel,
                          StreamingChatLanguageModel streamingChatLanguageModel) {
        this.chatLanguageModel = chatLanguageModel;
        this.streamingChatLanguageModel = streamingChatLanguageModel;
    }

    @GetMapping("/chat")
    public String model(@RequestParam(value = "message", defaultValue = "Hello") String message) {
        return chatLanguageModel.generate(message);
    }
}
```
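The injected StreamingChatLanguageModel is not used in the controller above; streaming models deliver the response through a callback handler that receives tokens as they are generated. Below is a minimal plain-Java sketch of that callback pattern (the `Handler` interface is a simplified stand-in for LangChain4j's streaming handler, and the "model" just replays canned tokens instead of calling an LLM):

```java
import java.util.List;

// Simplified stand-in for the streaming pattern: the caller passes a handler,
// the "model" invokes onNext for each token as it arrives and onComplete once
// the full response has been assembled.
public class StreamingSketch {

    interface Handler {
        void onNext(String token);
        void onComplete(String fullResponse);
    }

    // Fake "model" that replays canned tokens instead of calling an LLM.
    static void generate(List<String> tokens, Handler handler) {
        StringBuilder full = new StringBuilder();
        for (String t : tokens) {
            full.append(t);
            handler.onNext(t);   // deliver each token incrementally
        }
        handler.onComplete(full.toString()); // deliver the assembled response
    }

    public static void main(String[] args) {
        generate(List.of("Hello", ", ", "world"), new Handler() {
            @Override public void onNext(String token) { System.out.print(token); }
            @Override public void onComplete(String full) { System.out.println("\nDONE: " + full); }
        });
    }
}
```

In a real endpoint you would forward each `onNext` token to the client (e.g., via server-sent events) instead of printing it.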
-
Declarative AiServices
-
Add the dependency
```xml
<dependency>
    <groupId>dev.langchain4j</groupId>
    <artifactId>langchain4j-spring-boot-starter</artifactId>
    <version>0.35.0</version>
</dependency>
```
-
Define an AI interface annotated with @AiService; it is also treated as a Spring Boot @Service.
On application startup, the LangChain4j starter scans the classpath for all interfaces annotated with @AiService. For each AI Service found, it combines the LangChain4j components available in the application (such as StreamingChatLanguageModel, ChatMemoryProvider, ContentRetriever, RetrievalAugmentor) to create an implementation of the interface and registers it as a bean:

```java
@AiService
interface Assistant {

    @SystemMessage("You are a polite assistant")
    String chat(String userMessage);
}
```
-
Explicit Component Wiring
-
Purpose: when you have multiple AiServices and want to wire different LangChain4j components into each of them, use explicit wiring mode: @AiService(wiringMode = EXPLICIT).
-
Example:
-
Configuration
```properties
# OpenAI
langchain4j.open-ai.chat-model.api-key=${OPENAI_API_KEY}
langchain4j.open-ai.chat-model.model-name=gpt-4o-mini

# Ollama
langchain4j.ollama.chat-model.base-url=http://localhost:11434
langchain4j.ollama.chat-model.model-name=llama3.1
```
-
Explicitly specify all components
```java
@AiService(wiringMode = EXPLICIT, chatModel = "openAiChatModel")
interface OpenAiAssistant {

    @SystemMessage("You are a polite assistant")
    String chat(String userMessage);
}

@AiService(wiringMode = EXPLICIT, chatModel = "ollamaChatModel")
interface OllamaAssistant {

    @SystemMessage("You are a polite assistant")
    String chat(String userMessage);
}
```
-
Logging
-
LangChain4j uses SLF4J for logging.
-
Make sure an SLF4J implementation, such as Logback, is among your dependencies:
```xml
<dependency>
    <groupId>ch.qos.logback</groupId>
    <artifactId>logback-classic</artifactId>
    <version>1.5.8</version>
</dependency>
```
-
When building the model, set logRequests(true) and logResponses(true) to enable logging of every request to and response from the LLM:
```java
OpenAiChatModel.builder()
        ...
        .logRequests(true)
        .logResponses(true)
        .build();
```
-
Configuring logging in Spring Boot:
```properties
langchain4j.open-ai.chat-model.log-requests=true
langchain4j.open-ai.chat-model.log-responses=true
logging.level.dev.langchain4j=DEBUG
logging.level.dev.ai4j.openai4j=DEBUG
```
Observability
-
Definition: some implementations of ChatLanguageModel and StreamingChatLanguageModel allow configuring a ChatModelListener to listen for the following events: requests to the LLM, responses from the LLM, and errors.
-
These events include the following attributes:
- Request:Model、Temperature、Top P、Max Tokens、Messages、Tools
- Response:ID、Model、Token Usage、Finish Reason、Assistant Message
-
Example:
```java
ChatModelListener listener = new ChatModelListener() {

    @Override
    public void onRequest(ChatModelRequestContext requestContext) {
        ChatModelRequest request = requestContext.request();
        Map<Object, Object> attributes = requestContext.attributes();
        ...
    }

    @Override
    public void onResponse(ChatModelResponseContext responseContext) {
        ChatModelResponse response = responseContext.response();
        ChatModelRequest request = responseContext.request();
        Map<Object, Object> attributes = responseContext.attributes();
        ...
    }

    @Override
    public void onError(ChatModelErrorContext errorContext) {
        Throwable error = errorContext.error();
        ChatModelRequest request = errorContext.request();
        ChatModelResponse partialResponse = errorContext.partialResponse();
        Map<Object, Object> attributes = errorContext.attributes();
        ...
    }
};

ChatLanguageModel model = OpenAiChatModel.builder()
        .apiKey(System.getenv("OPENAI_API_KEY"))
        .modelName(GPT_4_O_MINI)
        // configure the listener
        .listeners(List.of(listener))
        .build();

model.generate("给我讲一个和Java有关的笑话"); // "Tell me a joke about Java"
```
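A common use of the attributes map shown above is carrying state from onRequest to onResponse, e.g., a start timestamp for measuring latency: the same mutable map is visible to all callbacks for one request. Here is a minimal plain-Java sketch of that pattern (the `Listener` interface and `invoke` driver are simplified stand-ins, not the real ChatModelListener API):

```java
import java.util.HashMap;
import java.util.Map;

// Sketch of the attributes-map pattern: onRequest stashes a start time, and
// onResponse reads it back to compute the latency of the "LLM call".
public class ListenerSketch {

    interface Listener {
        void onRequest(Map<Object, Object> attributes);
        void onResponse(Map<Object, Object> attributes);
    }

    // Drives one "LLM call": fires onRequest, runs the call, fires onResponse,
    // sharing a single attributes map across both callbacks.
    static Map<Object, Object> invoke(Listener listener, Runnable call) {
        Map<Object, Object> attributes = new HashMap<>();
        listener.onRequest(attributes);
        call.run();
        listener.onResponse(attributes);
        return attributes;
    }

    public static void main(String[] args) {
        Listener latencyListener = new Listener() {
            @Override public void onRequest(Map<Object, Object> attributes) {
                attributes.put("startNanos", System.nanoTime());
            }
            @Override public void onResponse(Map<Object, Object> attributes) {
                long start = (Long) attributes.get("startNanos");
                attributes.put("latencyNanos", System.nanoTime() - start);
            }
        };
        Map<Object, Object> attrs = invoke(latencyListener, () -> { /* pretend LLM call */ });
        System.out.println("latency (ns): " + attrs.get("latencyNanos"));
    }
}
```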
Integrations
- Definition: integrations with the APIs of various model providers; documentation: https://docs.langchain4j.dev/category/integrations
- Including:
- Language Models
- Embedding Models
- Embedding Stores
- Image Models
- Document Loaders
- Document Parsers
- Scoring (Reranking) Models
- Code Execution Engines
- Frameworks
- Web Search Engines