UniversalRAG: Retrieval-Augmented Generation over Corpora of Diverse Modalities and Granularities, by Woongyeong Yeo and 4 other authors
Abstract: Retrieval-Augmented Generation (RAG) has shown substantial promise in improving factual accuracy by grounding model responses in external knowledge relevant to queries. However, most existing approaches are limited to a text-only corpus, and while recent efforts have extended RAG to other modalities such as images and videos, they typically operate over a single modality-specific corpus. In contrast, real-world queries vary widely in the type of knowledge they require, which no single type of knowledge source can address. To address this, we introduce UniversalRAG, designed to retrieve and integrate knowledge from heterogeneous sources with diverse modalities and granularities. Specifically, motivated by the observation that forcing all modalities into a unified representation space derived from a single aggregated corpus causes a modality gap, in which retrieval tends to favor items of the same modality as the query, we propose modality-aware routing, which dynamically identifies the most appropriate modality-specific corpus and performs targeted retrieval within it; we further justify its effectiveness with a theoretical analysis. Moreover, beyond modality, we organize each modality into multiple granularity levels, enabling fine-grained retrieval tailored to the complexity and scope of the query. We validate UniversalRAG on 10 benchmarks spanning multiple modalities, showing its superiority over various modality-specific and unified baselines.
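The route-then-retrieve idea described in the abstract can be illustrated with a minimal sketch. Everything below is an assumption for illustration only: the corpus layout, the keyword-based router, and the word-overlap retriever are simple stand-ins for whatever router and retriever UniversalRAG actually uses, and none of the names come from the paper.

```python
# Illustrative sketch of modality-aware routing over modality- and
# granularity-specific corpora. NOT the authors' implementation: the
# router here is a toy keyword rule and the retriever is word overlap.

from dataclasses import dataclass


@dataclass
class CorpusItem:
    content: str
    modality: str      # "text", "image", or "video"
    granularity: str   # e.g. "paragraph"/"document", "clip"/"full_video"


# Toy heterogeneous corpora, keyed by (modality, granularity).
CORPORA = {
    ("text", "paragraph"):   [CorpusItem("Paris is the capital of France.", "text", "paragraph")],
    ("text", "document"):    [CorpusItem("A long article covering the history of France ...", "text", "document")],
    ("image", "image"):      [CorpusItem("photo_of_eiffel_tower.jpg", "image", "image")],
    ("video", "clip"):       [CorpusItem("cooking_pasta_tutorial.mp4#t=120,180", "video", "clip")],
    ("video", "full_video"): [CorpusItem("cooking_pasta_tutorial.mp4", "video", "full_video")],
}


def route(query: str) -> tuple[str, str]:
    """Pick one (modality, granularity) corpus for the query.

    Stand-in for the paper's router: naive keyword rules instead of a
    learned routing model."""
    q = query.lower()
    if "look like" in q or "show me" in q:
        return ("image", "image")
    if "step" in q or "how do i" in q:
        return ("video", "clip")
    if "history of" in q or "in detail" in q:
        return ("text", "document")
    return ("text", "paragraph")


def retrieve(query: str, corpus: list[CorpusItem], k: int = 1) -> list[CorpusItem]:
    """Toy retriever: rank corpus items by word overlap with the query."""
    q_words = set(query.lower().split())
    ranked = sorted(corpus, key=lambda item: -len(q_words & set(item.content.lower().split())))
    return ranked[:k]


def universal_rag_retrieve(query: str) -> list[CorpusItem]:
    modality, granularity = route(query)       # 1) route to one modality-specific corpus
    corpus = CORPORA[(modality, granularity)]
    return retrieve(query, corpus)             # 2) targeted retrieval within that corpus


if __name__ == "__main__":
    print(universal_rag_retrieve("What is the capital of France?"))
    print(universal_rag_retrieve("How do I cook pasta, step by step?"))
```

The key design point the sketch tries to convey is that retrieval is confined to the single routed corpus, so items from other modalities never compete in one shared embedding space, which is how the paper avoids the modality gap it describes.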
Submission history
From: Woongyeong Yeo
[v1] Tue, 29 Apr 2025 13:18:58 UTC (1,244 KB)
[v2] Mon, 19 May 2025 11:09:12 UTC (3,023 KB)
[v3] Tue, 6 Jan 2026 10:26:36 UTC (1,866 KB)