Multi-Head RAG: Solving Multi-Aspect Problems with LLMs

ETH Zurich · Cledar · BASF SE · Warsaw University of Technology
Abstract
1 Introduction
Large Language Models (LLMs) have transformed many machine learning tasks through their in-context learning abilities. They achieve this accuracy by leveraging an increasing number of parameters, which in recent models has grown to hundreds of billions, making LLM training expensive in terms of both time and resources. It also comes with the danger of leaking confidential data into model weights [28, 33, 40]. Additionally, continuous training through fine-tuning is necessary to keep LLMs up-to-date. Even when trained on the newest data, LLMs display an ongoing problem of hallucinations [13, 38, 44], providing factually incorrect information. Retrieval Augmented Generation (RAG) was proposed [11, 18] to address these and other issues and to make LLMs more trustworthy.
The key idea behind RAG is to enhance the generative model's capabilities by integrating a retrieval system that can fetch relevant documents or passages from a large corpus of data. In this setting, when a query is received, the retrieval system first identifies and retrieves pertinent information, which is fed into the generative model's context to produce a more accurate and relevant response. Instead of the model storing information within its weights, RAG effectively leverages external knowledge, reducing hallucinations.
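As a minimal sketch of this flow (embed, vector_store.search, and llm_generate below are hypothetical placeholders, not components of any particular system):

def answer_with_rag(query: str, vector_store, k: int = 5) -> str:
    # Embed the user query and retrieve the k most relevant text chunks.
    query_embedding = embed(query)
    chunks = vector_store.search(query_embedding, k=k)
    # Feed the retrieved chunks into the generative model's context.
    context = "\n\n".join(chunk.text for chunk in chunks)
    prompt = f"Context:\n{context}\n\nQuestion: {query}\nAnswer:"
    return llm_generate(prompt)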
Figure 1: An overview of the decoder architecture, and a comparison of how standard RAG and Multi-Head RAG embeddings are generated.

Such multi-aspect embeddings are then directly used for both data items and query representation. Considering multi-aspectuality explicitly comes with challenges, for example, how to assess the effectiveness of a RAG solution in retrieving data that indeed does
cover multiple aspects of a given domain. For this, we establish an evaluation methodology as
well as a full data construction and query processing pipeline that implements the multi-aspect
embedding idea (contribution 2). Our datasets facilitate broad evaluation by considering both
fully-automatically generated, synthetic data and analyzing specific industry use cases that show
the benefits of MRAG (contribution 3). Our evaluation illustrates the benefits in the relevance
of retrieved documents, for example 20% over a modern RAG baseline for fetching multi-aspect
Wikipedia articles (contribution 4). We also show how MRAG and its benchmarking principles can
be seamlessly integrated with both existing RAG solutions and benchmarking frameworks such as
RAGAS (contribution 5). MRAG's code is publicly available at https://github.com/spcl/MRAG.
Figure 2: Overview of the MRAG pipeline, consisting of two parts: data preparation (A, see Section 2.1.1) and query execution (B, see Section 2.1.2). A synthetic data generator and other data sources feed the pipeline, a synthetic query generator produces user queries (see Section 3), and replies are returned to the user. The embedding model (C) and the data store (D) are used by both parts. The data store contains text embeddings linking to text chunks reflecting three different aspects (cyan, magenta, yellow). Blocks marked by a star are a novelty of this work.
Algorithm 1 details the construction of importance scores. It is a heuristic based on extensive empirical evaluation; it gives high-quality results across the tested datasets and tasks. Intuitively, the score s_i of a given head h_i consists of two parts, a_i and b_i. a_i is the average of the L2 norms of all embeddings in the vector space i; it represents how important a given head is: the larger the norms, the more attention was given to this attention head. b_i is the average of cosine distances between all (or a randomly sampled subset, if the user wants to reduce pre-compute time) embeddings in vector space i. Intuitively, b_i is a proxy for measuring the "spread" of vector space i: the larger b_i, the larger the average angle between different embeddings in this space is. Deriving s_i as the product a_i · b_i ensures that we reward heads with high average attention and high average spread, but simultaneously penalize heads with lower average attention or with low average spread (both a_i and b_i are appropriately scaled).

Algorithm 1 Importance scores for heads.
for each head h_i do
    a_i ← 0; b_i ← 0
    count_a_i ← 0; count_b_i ← 0
    for each embedding e_ij in h_i do
        a_i ← a_i + ||e_ij||
        count_a_i ← count_a_i + 1
        for each embedding e_ih do
            b_i ← b_i + cosine-distance(e_ij, e_ih)
            count_b_i ← count_b_i + 1
        end for
    end for
    a_i ← a_i / count_a_i; b_i ← b_i / count_b_i
    s_i ← a_i · b_i
end for
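For reference, a minimal NumPy sketch of this heuristic (our own naming, not taken from the released code; the additional scaling of a_i and b_i mentioned above is omitted):

import numpy as np

def head_importance_scores(head_embeddings):
    # head_embeddings[i] is an (n_i, d_i) matrix whose rows are all embeddings of vector space i.
    scores = []
    for E in head_embeddings:
        norms = np.linalg.norm(E, axis=1)
        a_i = norms.mean()                    # average L2 norm: attention given to this head
        unit = E / norms[:, None]             # unit-length rows for cosine computation
        cosine_dist = 1.0 - unit @ unit.T     # pairwise cosine distances
        b_i = cosine_dist.mean()              # average "spread" of the vector space
        scores.append(a_i * b_i)              # s_i = a_i * b_i
    return np.array(scores)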
The voting strategy combines the constructed lists of text chunks from individual embedding spaces into a single list of top k chunks. The strategy is very simple (the corresponding Algorithm 2 is in the Appendix). Each text chunk on list i of vector space i has a certain position on this list, which we denote with p. We obtain a weight for this chunk as s_i · 2^(-p); s_i is the previously defined importance score of space i. Multiplying s_i with 2^(-p) exponentially lowers the significance of less relevant text chunks. Finally, all chunks from all lists are sorted by their weights and the top k chunks form the final list.
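A sketch of this voting step (one possible reading of the weighting; Algorithm 2 in the Appendix gives the exact formulation):

from collections import defaultdict

def vote_top_k(per_space_lists, importance_scores, k):
    # per_space_lists[i] is the ranked list of chunk IDs retrieved from vector space i.
    weights = defaultdict(float)
    for s_i, ranked_chunks in zip(importance_scores, per_space_lists):
        for p, chunk in enumerate(ranked_chunks):
            # Position p on list i exponentially discounts the chunk; chunks appearing
            # on several lists accumulate weight across them.
            weights[chunk] += s_i * 2.0 ** (-p)
    return sorted(weights, key=weights.get, reverse=True)[:k]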
2.3.1 Integration with Data Stores
MRAG can be seamlessly used with different classes of data stores and nearest neighbor (NN) search approaches. It can be combined with both exact and approximate NN search to find the matching (embedding, chunk) pairs. These two parts of the broader RAG processing pipeline are
orthogonal to MRAG.
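For illustration, each per-head embedding space can be backed by its own index in an off-the-shelf vector library; a sketch with FAISS (our choice for the example, any exact or approximate NN store can be used instead):

import faiss
import numpy as np

def build_head_indexes(head_embeddings, approximate=False):
    # head_embeddings[i]: (n, d_i) float32 matrix of chunk embeddings for head i.
    indexes = []
    for E in head_embeddings:
        d = E.shape[1]
        if approximate:
            quantizer = faiss.IndexFlatL2(d)
            index = faiss.IndexIVFFlat(quantizer, d, 64)  # approximate NN (inverted file index)
            index.train(E)
        else:
            index = faiss.IndexFlatL2(d)                  # exact NN search
        index.add(E)
        indexes.append(index)
    return indexes

def search_heads(indexes, query_head_embeddings, k):
    # For each head, return the IDs of the k nearest chunks in that head's vector space.
    return [index.search(np.asarray(q, dtype=np.float32).reshape(1, -1), k)[1][0]
            for index, q in zip(indexes, query_head_embeddings)]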
Figure 3: An example query used to evaluate different RAG strategies. The query consists of the prompt "Given a story, retrieve relevant documents that provide contextual information about topics brought up in the story." followed by a generated story that references ten articles. We mention the documents to be fetched in the text and then assess the success ratio of different RAG strategies in finding these documents and their categories. We mark exact document matches, category matches, documents that match a category multiple times, and text segments with no matching document. Finally, we show the weighted success ratio for each strategy, taking a 2:1 weighting (prioritizing the exact article matches). In this example, Standard RAG (SRAG) reaches a document-match success ratio of 2/10 and a category-match ratio of 3/10 (weighted 2:1 ratio: 0.23), while Multi-Head RAG (MRAG) reaches 5/10 and 7/10 (weighted 2:1 ratio: 0.56).
there is a case when a RAG scheme does not retrieve the exact desired document, but it still successfully retrieves some other document from the same category. To account for such cases, we use another measure, the Category Retrieval Success Ratio Ξ_c. It has the same form as Ξ(Q, n) above, with one difference: S(Q, n) is now the set of all retrieved documents that belong to categories of the ideal desired documents. Finally, to combine these two metrics, we use the Weighted Retrieval Success Ratio Ξ_w = (w · Ξ + Ξ_c) / (w + 1). By varying w, the user can adjust the relative importance of exact document matches and category matches. An example of using these metrics to assess how well MRAG and Standard RAG capture multi-aspectuality is shown in the bottom part of Figure 3.
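A sketch of these metrics for a single query (one plausible reading of the definitions in Section 3; the naming and the handling of repeated category matches are ours):

def retrieval_success(retrieved_docs, ideal_docs, doc_to_category, w=2.0):
    # retrieved_docs: top-k documents fetched by the RAG scheme; ideal_docs: ground-truth documents.
    n = len(ideal_docs)
    xi = len(set(retrieved_docs) & set(ideal_docs)) / n          # exact document matches
    ideal_categories = {doc_to_category[d] for d in ideal_docs}
    covered = {doc_to_category[d] for d in retrieved_docs
               if doc_to_category.get(d) in ideal_categories}
    xi_c = len(covered) / n                                      # ideal categories covered
    xi_w = (w * xi + xi_c) / (w + 1)                             # weighted retrieval success ratio
    return xi, xi_c, xi_w

With w = 2, this corresponds to the 2:1 weighting used in Figure 3.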
4 Evaluation
We now illustrate the advantages of MRAG over the state of the art.
Comparison Baselines We compare MRAG to two main baselines: Standard RAG and Split RAG.
The first represents a modern RAG pipeline in which each document uses the activations of the
last decoder layer as its embedding. The second is a blend between Standard RAG and MRAG.
Specifically, it splits the activation of the last decoder layer in the same way as MRAG and applies
a voting strategy. The purpose of Split RAG is to show that MRAG’s benefits come from using the
multi-head output as embedding and not merely using multiple embedding spaces. Additionally, we
consider Fusion RAG [29], an optional mechanism that we harness to further enhance the benefits of
MRAG at the cost of additional tokens (detailed in Section 4.2).
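To make the construction concrete, the following hedged sketch contrasts how the baselines form embeddings from a decoder model (the model name, last-token pooling, and head count are illustrative assumptions; MRAG additionally requires reading the per-head activations of the attention block itself, which we only indicate here):

import torch
from transformers import AutoModel, AutoTokenizer

MODEL = "intfloat/e5-mistral-7b-instruct"   # illustrative embedding model choice
tokenizer = AutoTokenizer.from_pretrained(MODEL)
model = AutoModel.from_pretrained(MODEL)

def baseline_embeddings(text, num_heads=32):
    inputs = tokenizer(text, return_tensors="pt")
    with torch.no_grad():
        hidden = model(**inputs, output_hidden_states=True).hidden_states[-1]
    vec = hidden[0, -1]                        # last-token activation of the last decoder layer
    standard_rag = vec                         # Standard RAG: a single d-dimensional embedding
    split_rag = vec.reshape(num_heads, -1)     # Split RAG: the same vector split into h parts
    # MRAG instead uses the h per-head activations of the multi-head attention block
    # (captured before the output projection, e.g., with a forward hook on that module).
    return standard_rag, split_rag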
We use queries and metrics introduced in Section 3. We use the weighted retrieval success ratio
with 2:1 weighting, which considers category matches as relevant but prioritizes the exact document
matches. Figure 3 shows an example query and metrics usage. Each query requires retrieving
a specific number of documents and the corresponding non-overlapping categories which define
the ground truth. We fetch the top k documents from a database, where k is the total number of documents fetched for a tested RAG scheme (potentially including mismatches). Among these k documents, we search for matches with the ground truth.
Samples & Summaries Each data point in our plots corresponds to 25 queries. We present the data
using standard boxplots to showcase the distribution. Our primary focus is on the average retrieval
performance among those 25 queries.
Figure 4: Comparison of the retrieval success ratio over 25 queries between MRAG and Standard RAG, where each query includes 10 different aspects. The upper part presents exact document matches while the lower part presents category-only matches (we explain the metrics used in Section 3). A histogram is presented for a specific sample to showcase the detailed distribution among the 25 queries (the number of documents fetched for each query is 30).
Figure 5: Relative retrieval improvement of MRAG over Standard RAG across queries with different numbers of aspects and different embedding models (SFR on the left, e5 on the right).
Table 1: Retrieval success ratio (the exact document match) for 25 queries with a single aspect.
Figure 6: Relative retrieval improvements of MRAG over Standard RAG for the SFR embedding model compared with Split RAG (the blue
plots), and the relative retrieval improvements of Fusion MRAG over both Fusion RAG and MRAG (the red plots).
We additionally show in Table 1 that MRAG performs on par with Standard RAG on queries from our multi-aspect dataset where only a single aspect is expected. Hence, our approach does not suffer from a significant decrease in performance on single-aspect tasks.
4.2 Further Improvements with Additional Tokens
We now show that MRAG can be seamlessly integrated with other RAG approaches: we combine MRAG with Fusion RAG, representing RAG schemes that use an LLM (at an additional token cost) for more accurate retrieval. Fusion RAG uses an LLM to create a fixed number of questions about the RAG query. Each question is separately passed through the embedding model and used for retrieval as in Standard RAG. We apply MRAG's approach to each of these questions and denote the combined scheme as Fusion MRAG. The red plots in Figure 6 show that both Fusion RAG and Fusion MRAG perform better than Standard RAG, on average gaining 10 to 30% in accuracy. Fusion MRAG performs consistently better than pure Fusion RAG, indicating that these optimizations can be combined. However, both Fusion strategies introduce greater variance than MRAG and additional costs in terms of compute, latency, and tokens.
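A high-level sketch of Fusion MRAG (the question-generation prompt, the helper callables llm and mrag_retrieve, and the final merge are illustrative; Fusion RAG [29] defines the exact fusion step):

def fusion_mrag_retrieve(query, llm, mrag_retrieve, num_questions=4, k=10):
    # 1. Spend extra tokens: ask an LLM for several questions about the RAG query.
    prompt = f"Generate {num_questions} search questions covering different aspects of: {query}"
    questions = [q for q in llm(prompt).splitlines() if q.strip()][:num_questions]
    # 2. Run MRAG's multi-head retrieval separately for each generated question.
    per_question_lists = [mrag_retrieve(q, k=k) for q in questions]
    # 3. Merge the per-question lists, here by a simple rank-based score.
    scores = {}
    for chunks in per_question_lists:
        for rank, chunk in enumerate(chunks):
            scores[chunk] = scores.get(chunk, 0.0) + 1.0 / (rank + 1)
    return sorted(scores, key=scores.get, reverse=True)[:k]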
4.3 Benefits from Multi-Head Attention Alone
We also compare MRAG to the Split RAG baseline in Figure 6. The blue plots show the relative weighted performance of MRAG and Split RAG over Standard RAG. MRAG performs better than Split RAG, illustrating that its high accuracy is due to the actual multi-head part, and not merely to partitioning the vector and using multiple embedding spaces.
4.4 Real-World Workloads
To further illustrate the advantages of MRAG, we also consider two real-world use cases from in-house industry data analytics projects, namely, the synthesis of legal documents and the analysis of causes of chemical plant accidents. The results are in Figure 7. In the former (the left side), the task is to create a document based on user requirements that may be related to different aspects, for example the law being considered (e.g., British or US law), the subject (e.g., energy or civil), the style of the document (e.g., aggressive or mild), etc. This task is executed with RAG that can fetch documents from a database. In the latter (the right side), the task is to discover the cause of an accident. Here, one also wants to retrieve documents from a database to be used in the LLM context to facilitate discovering the cause of the accident. The causes are grouped into categories such as utility impact due to severe weather, lack of preparedness and planning, incorrect installation of equipment, lack of maintenance, etc. Similarly to the previous analyses, we measure the retrieval success ratio
over corresponding databases. MRAG offers advantages over other schemes.
Figure 7: Average improvement of the retrieval success ratio of MRAG and Split RAG over Standard RAG for two real-world workloads
constructing legal documents (left) and discovering causes of industry accidents (right).
Figure 8: Evaluation of different voting strategies for MRAG and Split RAG.
5 Related Work
Our work touches on many areas which we now briefly discuss.
Many RAG schemes have appeared recently [10], using the output of the last decoder layer for embedding
generation. In contrast, MRAG leverages different embedding spaces of attention heads to focus
on different aspects of documents and queries. As such, it can be combined with other schemes to
further improve RAG pipelines.
Retrieval is sometimes enhanced by a cross-encoder reranking phase [9, 19, 22, 26, 27, 30]. In such solutions, after retrieving a set of relevant chunks, the chunks are typically re-ranked using specialized models. In this work, we focus solely on the first retrieval phase, so MRAG can be seamlessly used in conjunction with such cross-encoders.
Structure-enhanced RAG schemes employ different strategies for structuring text to improve
retrieval quality. A common idea is to construct a Knowledge Graph from text, which enables
retrieval amongst entities and relationships [3, 6, 16, 17, 37]. RAPTOR [31] generates multi-level
summaries for clusters of related chunks, building a tree of summaries with increasing levels of
abstraction to better capture the meaning of the text. Graph RAG [7] creates a Knowledge Graph,
and summarizes communities in the graph, which provide data at the different levels of abstraction.
All these systems try to improve RAG quality by utilizing additional structures that describe entity relationships or the inner organization of the text. They usually need a sophisticated preprocessing phase to prepare such structures. MRAG achieves its improvement solely based on the embedding model, has no additional storage requirements, and can be combined with any of these schemes.
6 Conclusion
Retrieval Augmented Generation (RAG) is pivotal for democratizing access to accurate and relevant
outputs from large language models (LLMs). Enhancing the precision and relevance of these outputs
is a critical goal, especially given the challenges posed by queries requiring the retrieval of multiple
documents with significantly different contents. These complex queries are common across various
domains, but existing RAG solutions struggle because the embeddings of the necessary documents
can be far apart in the embedding space, complicating their retrieval.
To address this gap, we introduced Multi-Head RAG (MRAG), a novel scheme that leverages the
activations from the multi-head attention layer of decoder models instead of the traditional feed-
forward layer. This approach is grounded in the insight that different attention heads can capture
distinct aspects of the data. By using these diverse activations, MRAG creates embeddings that better
represent the multifaceted nature of data items and queries, thus enhancing the retrieval accuracy for
complex, multi-aspect queries. The simplicity and versatility of this idea allow it to be seamlessly
integrated into any modern RAG pipeline or data analytics framework.
Our comprehensive evaluation methodology, including specific metrics, synthetic datasets, and real-
world use cases, demonstrates MRAG’s effectiveness. The results indicate a significant improvement
in the relevance of retrieved documents, with up to 20% better performance compared to modern
RAG baselines. This validates MRAG’s potential to handle the intricacies of multi-aspect queries
effectively.
Moreover, MRAG proves to be both cost-effective and energy-efficient. It does not require additional
LLM queries, multiple model instances, increased storage, or multiple inference passes over the
embedding model. This efficiency, combined with the enhanced retrieval accuracy, positions MRAG
as a valuable advancement in the field of LLMs and RAG systems. By addressing the challenges of
multi-aspectuality in queries, MRAG paves the way for more reliable and accurate LLM applications
across diverse industries.
References
[1] Abdelrahman Abdallah and Adam Jatowt. 2024. Generator-Retriever-Generator Approach for
Open-Domain Question Answering. arXiv:2307.11278
[2] Akari Asai, Zeqiu Wu, Yizhong Wang, Avirup Sil, and Hannaneh Hajishirzi. 2023. Self-RAG:
Learning to Retrieve, Generate, and Critique through Self-Reflection. arXiv:2310.11511
[3] Tuan Bui, Oanh Tran, Phuong Nguyen, Bao Ho, Long Nguyen, Thang Bui, and Tho Quan. 2024.
Cross-Data Knowledge Graph Construction for LLM-enabled Educational Question-Answering
System: A Case Study at HCMUT. arXiv:2404.09296
[4] Jiawei Chen, Hongyu Lin, Xianpei Han, and Le Sun. 2024. Benchmarking Large Language
Models in Retrieval-Augmented Generation. Proceedings of the AAAI Conference on Artificial
Intelligence 38, 16 (March 2024), 17754–17762. https://doi.org/10.1609/aaai.v38i16.29728
[5] Zhibo Chu, Shiwen Ni, Zichong Wang, Xi Feng, Chengming Li, Xiping Hu, Ruifeng Xu, Min
Yang, and Wenbin Zhang. 2024. History, Development, and Principles of Large Language
Models-An Introductory Survey. arXiv:2402.06853
[6] Julien Delile, Srayanta Mukherjee, Anton Van Pamel, and Leonid Zhukov. 2024. Graph-Based
Retriever Captures the Long Tail of Biomedical Knowledge. arXiv:2402.12352
[7] Darren Edge, Ha Trinh, Newman Cheng, Joshua Bradley, Alex Chao, Apurva Mody, Steven
Truitt, and Jonathan Larson. 2024. From Local to Global: A Graph RAG Approach to Query-
Focused Summarization. arXiv:2404.16130
[8] Shahul Es, Jithin James, Luis Espinosa-Anke, and Steven Schockaert. 2023. RAGAS: Auto-
mated Evaluation of Retrieval Augmented Generation. arXiv:2309.15217
[9] Luyu Gao, Zhuyun Dai, and Jamie Callan. 2021. Rethink Training of BERT Rerankers in
Multi-stage Retrieval Pipeline. In Advances in Information Retrieval: Proceedings of the 43rd
European Conference on IR Research, Part II (Virtual) (ECIR ’21). Springer-Verlag, Berlin,
Heidelberg, 280–286. https://doi.org/10.1007/978-3-030-72240-1_26
[10] Yunfan Gao, Yun Xiong, Xinyu Gao, Kangxiang Jia, Jinliu Pan, Yuxi Bi, Yi Dai, Jiawei Sun,
Meng Wang, and Haofen Wang. 2024. Retrieval-Augmented Generation for Large Language
Models: A Survey. arXiv:2312.10997
[11] Kelvin Guu, Kenton Lee, Zora Tung, Panupong Pasupat, and Ming-Wei Chang. 2020. REALM:
Retrieval-Augmented Language Model Pre-Training. arXiv:2002.08909
[12] Yucheng Hu and Yuxing Lu. 2024. RAG and RAU: A Survey on Retrieval-Augmented Language
Model in Natural Language Processing. arXiv:2404.19543
[13] Lei Huang, Weijiang Yu, Weitao Ma, Weihong Zhong, Zhangyin Feng, Haotian Wang, Qian-
glong Chen, Weihua Peng, Xiaocheng Feng, Bing Qin, and Ting Liu. 2023. A Survey on
Hallucination in Large Language Models: Principles, Taxonomy, Challenges, and Open Ques-
tions. arXiv:2311.05232
[14] Yizheng Huang and Jimmy Huang. 2024. A Survey on Retrieval-Augmented Text Generation
for Large Language Models. arXiv:2404.10981
[15] Huggingface. 2024. Massive Text Embeddings Benchmark Leaderboard. https://huggingface.co/spaces/mteb/leaderboard Accessed: 2024-05-18.
[16] Mohamed Manzour Hussien, Angie Nataly Melo, Augusto Luis Ballardini, Carlota Salinas Mal-
donado, Rubén Izquierdo, and Miguel Ángel Sotelo. 2024. RAG-based Explainable Prediction
of Road Users Behaviors for Automated Driving using Knowledge Graphs and Large Language
Models. arXiv:2405.00449
[17] Xinke Jiang, Ruizhe Zhang, Yongxin Xu, Rihong Qiu, Yue Fang, Zhiyuan Wang, Jinyi Tang,
Hongxin Ding, Xu Chu, Junfeng Zhao, and Yasha Wang. 2024. HyKGE: A Hypothesis
Knowledge Graph Enhanced Framework for Accurate and Reliable Medical LLMs Responses.
arXiv:2312.15883
[18] Patrick Lewis, Ethan Perez, Aleksandra Piktus, Fabio Petroni, Vladimir Karpukhin, Naman
Goyal, Heinrich Küttler, Mike Lewis, Wen-tau Yih, Tim Rocktäschel, Sebastian Riedel, and
Douwe Kiela. 2020. Retrieval-Augmented Generation for Knowledge-Intensive NLP Tasks.
In Proceedings of the Thirty-fourth Annual Conference on Neural Information Processing
Systems (NeurIPS ’20) (Virtual) (Advances in Neural Information Processing Systems, Vol. 33),
H. Larochelle, M. Ranzato, R. Hadsell, M.F. Balcan, and H. Lin (Eds.). Curran Associates,
Inc., New York, NY, USA, 9459–9474. https://proceedings.neurips.cc/paper_files/paper/2020/file/6b493230205f780e1bc26945df7481e5-Paper.pdf
[19] Canjia Li, Andrew Yates, Sean MacAvaney, Ben He, and Yingfei Sun. 2021. PARADE: Passage
Representation Aggregation for Document Reranking. arXiv:2008.09093
[20] Huayang Li, Yixuan Su, Deng Cai, Yan Wang, and Lemao Liu. 2022. A Survey on Retrieval-
Augmented Text Generation. arXiv:2202.01110
[21] Yuanjie Lyu, Zhiyu Li, Simin Niu, Feiyu Xiong, Bo Tang, Wenjin Wang, Hao Wu, Huanyong
Liu, Tong Xu, Enhong Chen, Yi Luo, Peng Cheng, Haiying Deng, Zhonghao Wang, and
Zijia Lu. 2024. CRUD-RAG: A Comprehensive Chinese Benchmark for Retrieval-Augmented
Generation of Large Language Models. arXiv:2401.17043
[22] Sean MacAvaney, Andrew Yates, Arman Cohan, and Nazli Goharian. 2019. CEDR: Con-
textualized Embeddings for Document Ranking. In Proceedings of the 42nd International
ACM SIGIR Conference on Research and Development in Information Retrieval (Paris,
France) (SIGIR ’19). Association for Computing Machinery, New York, NY, USA, 1101–1104.
https://doi.org/10.1145/3331184.3331317
[23] S. S. Manathunga and Y. A. Illangasekara. 2023. Retrieval Augmented Generation and Rep-
resentative Vector Summarization for large unstructured textual data in Medical Education.
arXiv:2308.00479
[24] Rui Meng, Ye Liu, Shafiq Rayhan Joty, Caiming Xiong, Yingbo Zhou, and Semih Yavuz.
2024. SFR-Embedding-Mistral: Enhance Text Retrieval with Transfer Learning. Salesforce
AI Research Blog. https://blog.salesforceairesearch.com/sfr-embedded-mistral/
Accessed: 2024-05-17.
[25] Grégoire Mialon, Roberto Dessi, Maria Lomeli, Christoforos Nalmpantis, Ramakanth Pasunuru,
Roberta Raileanu, Baptiste Roziere, Timo Schick, Jane Dwivedi-Yu, Asli Celikyilmaz, Edouard
Grave, Yann LeCun, and Thomas Scialom. 2023. Augmented Language Models: a Survey.
Transactions on Machine Learning Research (2023). https://openreview.net/forum?id=jh7wH2AzKK Survey Certification.
[26] Rodrigo Nogueira and Kyunghyun Cho. 2020. Passage Re-ranking with BERT.
arXiv:1901.04085
[27] Rodrigo Nogueira, Zhiying Jiang, and Jimmy Lin. 2020. Document Ranking with a Pretrained
Sequence-to-Sequence Model. arXiv:2003.06713
[28] Vaidehi Patil, Peter Hase, and Mohit Bansal. 2024. Can Sensitive Information Be Deleted
From LLMs? Objectives for Defending Against Extraction Attacks. In Proceedings of the
Twelfth International Conference on Learning Representations (Vienna, Austria) (ICLR ’24).
https://openreview.net/forum?id=7erlRDoaV8
[29] Zackary Rackauckas. 2024. RAG-Fusion: a New Take on Retrieval-Augmented Generation.
arXiv:2402.03367
[30] Guilherme Rosa, Luiz Bonifacio, Vitor Jeronymo, Hugo Abonizio, Marzieh Fadaee, Roberto
Lotufo, and Rodrigo Nogueira. 2022. In Defense of Cross-Encoders for Zero-Shot Retrieval.
arXiv:2212.06121
[31] Parth Sarthi, Salman Abdullah, Aditi Tuli, Shubh Khanna, Anna Goldie, and Christopher D.
Manning. 2024. RAPTOR: Recursive Abstractive Processing for Tree-Organized Retrieval.
arXiv:2401.18059
[32] Ashish Vaswani, Noam Shazeer, Niki Parmar, Jakob Uszkoreit, Llion Jones, Aidan N. Gomez,
Łukasz Kaiser, and Illia Polosukhin. 2017. Attention is All you Need. In Proceedings of the
Thirty-first Annual Conference on Neural Information Processing Systems (NIPS ’17) (Long
Beach, CA, USA) (Advances in Neural Information Processing Systems, Vol. 30), I. Guyon,
U. Von Luxburg, S. Bengio, H. Wallach, R. Fergus, S. Vishwanathan, and R. Garnett (Eds.).
Curran Associates, Inc., New York, NY, USA, 5998–6008. https://proceedings.neurips.cc/paper_files/paper/2017/file/3f5ee243547dee91fbd053c1c4a845aa-Paper.pdf
[33] Jeffrey G. Wang, Jason Wang, Marvin Li, and Seth Neel. 2024. Pandora’s White-Box: Increased
Training Data Leakage in Open LLMs. arXiv:2402.17012
[34] Liang Wang, Nan Yang, Xiaolong Huang, Binxing Jiao, Linjun Yang, Daxin Jiang, Rangan
Majumder, and Furu Wei. 2024. Text Embeddings by Weakly-Supervised Contrastive Pre-
training. arXiv:2212.03533
[35] Christopher Wewer, Florian Lemmerich, and Michael Cochez. 2021. Updating Embeddings for
Dynamic Knowledge Graphs. arXiv:2109.10896
[36] Guangzhi Xiong, Qiao Jin, Zhiyong Lu, and Aidong Zhang. 2024. Benchmarking Retrieval-
Augmented Generation for Medicine. arXiv:2402.13178
[37] Zhentao Xu, Mark Jerome Cruz, Matthew Guevara, Tie Wang, Manasi Deshpande, Xiaofeng
Wang, and Zheng Li. 2024. Retrieval-Augmented Generation with Knowledge Graphs for
Customer Service Question Answering. In Proceedings of the 47th International ACM SIGIR
Conference on Research and Development in Information Retrieval (Washington, DC, USA)
(SIGIR ’24). Association for Computing Machinery, New York, NY, USA, 5 pages. https://doi.org/10.48550/arXiv.2404.17723
[38] Ziwei Xu, Sanjay Jain, and Mohan Kankanhalli. 2024. Hallucination is Inevitable: An Innate
Limitation of Large Language Models. arXiv:2401.11817
[39] Zhipeng Xu, Zhenghao Liu, Yibin Liu, Chenyan Xiong, Yukun Yan, Shuo Wang, Shi Yu,
Zhiyuan Liu, and Ge Yu. 2024. ActiveRAG: Revealing the Treasures of Knowledge via Active
Learning. arXiv:2402.13547
[40] Biwei Yan, Kun Li, Minghui Xu, Yueyan Dong, Yue Zhang, Zhaochun Ren, and Xiuzhen
Cheng. 2024. On Protecting the Data Privacy of Large Language Models (LLMs): A Survey.
arXiv:2403.05156
[41] Hao Yu, Aoran Gan, Kai Zhang, Shiwei Tong, Qi Liu, and Zhaofeng Liu. 2024. Evaluation of
Retrieval-Augmented Generation: A Survey. arXiv:2405.07437
[42] Wenhao Yu, Hongming Zhang, Xiaoman Pan, Kaixin Ma, Hongwei Wang, and Dong Yu.
2023. Chain-of-Note: Enhancing Robustness in Retrieval-Augmented Language Models.
arXiv:2311.09210
[43] Huimin Zeng, Zhenrui Yue, Qian Jiang, and Dong Wang. 2024. Federated Recommendation
via Hybrid Retrieval Augmented Generation. arXiv:2403.04256
[44] Yue Zhang, Yafu Li, Leyang Cui, Deng Cai, Lemao Liu, Tingchen Fu, Xinting Huang, Enbo
Zhao, Yu Zhang, Yulong Chen, Longyue Wang, Anh Tuan Luu, Wei Bi, Freda Shi, and Shuming
Shi. 2023. Siren’s Song in the AI Ocean: A Survey on Hallucination in Large Language Models.
arXiv:2309.01219
[45] Penghao Zhao, Hailin Zhang, Qinhan Yu, Zhengren Wang, Yunteng Geng, Fangcheng Fu,
Ling Yang, Wentao Zhang, Jie Jiang, and Bin Cui. 2024. Retrieval-Augmented Generation for
AI-Generated Content: A Survey. arXiv:2402.19473
Appendix
A Model Design: Additional Details
A.1 Retrieval Strategies for Multi-Aspect Data

The following prompt template is used by the synthetic query generator to create the multi-aspect query stories from a set of selected articles (see Section 3):

Please create a story about the attached <number of articles> articles on the topics <list of titles>.
It is very important that each of the attached articles is relevant to the story, in a way that references
the content of the article, not just its title. But please also mention each title at least once. Please
make sure that all of the attached articles are relevant to your story, and that each article is referenced
in at least two sentences! They do not necessarily have to be referenced in the same order, but make
sure no article is forgotten.
Important: Output only the story, no additional text. And do not use bullet points, or paragraphs.
Articles:
———
Article <title>:
<body>
<...>
———
Again, make sure that you reference all the following topics in your story: <list of titles>