【LLMRG: Improving Recommendations through Large Language Model Reasoning Graphs】(AAAI24)

Abstract

Recommender systems aim to provide users with relevant suggestions, but they often lack interpretability and fail to capture the higher-level semantic relationships between user behaviors and profiles. To address this, the paper proposes a method that uses large language models to construct personalized reasoning graphs. These graphs link a user's attributes and behavior sequences through causal and logical inference, representing the user's interests in an interpretable way. The proposed LLM reasoning graph (LLMRG) consists of four components: chained graph reasoning, divergent extension, self-verification and scoring, and knowledge-base self-improvement. The resulting reasoning graph is encoded with a graph neural network, which serves as an additional input for improving a conventional recommender system without requiring extra user or item information. The approach demonstrates how LLMs can enable more logical and interpretable recommender systems through personalized reasoning graphs.

Model Framework

The framework has four components: 1) chained graph reasoning, 2) divergent extension, 3) self-verification and scoring, and 4) a self-improving knowledge base.
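The four components can be sketched as plain Python stubs. This is a minimal sketch assuming a generic `llm(prompt) -> str` callable; all function names and prompt wordings here are hypothetical illustrations, not from the paper's code.

```python
def chained_graph_reasoning(llm, behaviors):
    """Stage 1: link consecutive behaviors with LLM-generated causal edges."""
    edges = []
    for src, dst in zip(behaviors, behaviors[1:]):
        reason = llm(f"Why might '{src}' lead to '{dst}'?")
        edges.append((src, dst, reason))
    return edges


def divergent_extension(llm, edges):
    """Stage 2: expand each reached node with a plausible related interest."""
    extended = list(edges)
    for _src, dst, _reason in edges:
        hint = llm(f"Suggest an interest related to '{dst}'.")
        extended.append((dst, hint, "divergent"))
    return extended


def self_verify(llm, edges, threshold=0.5):
    """Stage 3: score each edge and keep only those the LLM judges plausible."""
    kept = []
    for edge in edges:
        score = float(llm(f"Score 0-1 the plausibility of: {edge}"))
        if score >= threshold:
            kept.append(edge)
    return kept


def update_knowledge_base(kb, edges):
    """Stage 4: cache verified edges so later reasoning can reuse them."""
    for src, dst, reason in edges:
        kb.setdefault((src, dst), reason)
    return kb
```

The stages compose left to right: verified edges from stage 3 form the reasoning graph, and stage 4 persists them so the next reasoning round starts from accumulated knowledge rather than from scratch.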

The workflow is as follows (using a single user as an example):

  1. The model takes the user's interaction sequence and the user's attributes as input.
  2. Branches: the base sequential-model module, …
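The two-branch design above ultimately fuses two user representations — one from the base sequential model, one from the GNN-encoded reasoning graph — before scoring items. A minimal sketch in pure Python; the names and dimensions are illustrative, not from the paper:

```python
def fuse(seq_repr, graph_repr):
    """Concatenate the sequential-model and reasoning-graph user vectors."""
    # List concatenation stands in for vector concatenation here.
    return seq_repr + graph_repr


def score_items(user_repr, item_reprs):
    """Dot-product scores between the fused user vector and each item vector."""
    return [
        sum(u * i for u, i in zip(user_repr, item))
        for item in item_reprs
    ]
```

Because the graph branch only appends extra dimensions to the user representation, a conventional recommender can consume it without any change to its item features — consistent with the abstract's claim that no additional user or item information is required.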
### Chain-of-Thought Prompting Mechanism in Large Language Models

In large language models, chain-of-thought prompting is a method for enhancing reasoning capability by guiding the model through a structured thought process. The approach breaks complex problems into simpler components and provides step-by-step guidance that mirrors human cognitive processing. Constructing such prompts typically involves selecting examples from training data, where each example demonstrates part of an overall problem-solving process.

By decomposing tasks into multiple steps, this technique encourages deeper understanding and more accurate predictions than direct prompting. For instance, on multi-hop question answering or logical-deduction challenges, such chains let a model not only produce the correct answer but also articulate the intermediate thoughts leading to it. This transparency improves interpretability while boosting performance on various NLP benchmarks.

```python
def create_chain_of_thought_prompt(task_description, examples):
    """
    Create a chain-of-thought prompt from a task description and examples.

    Args:
        task_description (str): Description of the task at hand.
        examples (list): Tuples of input-output pairs used as demonstrations.

    Returns:
        str: Formatted prompt combining the instructions and sample cases.
    """
    formatted_examples = "\n".join(
        f"Input: {ex[0]}, Output: {ex[1]}" for ex in examples
    )
    return f"""
Task: {task_description}

Examples:
{formatted_examples}

Now try solving similar questions following the above pattern.
"""


# Example usage
examples = [
    ("What color do you get mixing red and blue?", "Purple"),
    ("If it rains tomorrow, will we have our picnic?", "No"),
]
print(create_chain_of_thought_prompt("Solve logic puzzles", examples))
```