Insider Knowledge: How Much Can RAG Systems Gain from Evaluation Secrets?, by Laura Dietz and 5 other authors
Abstract: RAG systems are increasingly evaluated and optimized using LLM judges, an approach that is rapidly becoming the dominant paradigm for system assessment. Nugget-based approaches in particular are now embedded not only in evaluation frameworks but also in the architectures of RAG systems themselves. While this integration can lead to genuine improvements, it also creates a risk of faulty measurements due to circularity. In this paper, we investigate this risk through comparative experiments with nugget-based RAG systems, including Ginger and Crucible, against strong baselines such as GPT-Researcher. By deliberately modifying Crucible to generate outputs optimized for an LLM judge, we show that near-perfect evaluation scores can be achieved when elements of the evaluation, such as prompt templates or gold nuggets, are leaked or can be predicted. Our results highlight the importance of blind evaluation settings and methodological diversity to guard against mistaking metric overfitting for genuine system progress.
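To make the circularity risk concrete, below is a minimal, self-contained Python sketch, not the paper's code: the LLM judge is stood in for by simple string matching against gold nuggets, and all names (`nugget_recall`, `honest_system`, `leaky_system`, the example nuggets) are hypothetical. It illustrates how a system with access to the evaluation secrets can restate the gold nuggets verbatim and saturate the metric, while an honest system scores partial recall at best.

```python
# Illustrative sketch of nugget-based evaluation circularity.
# The "judge" here is a toy substring matcher, not a real LLM judge.

GOLD_NUGGETS = [
    "elevated sea temperatures",
    "expulsion of symbiotic algae",
    "ocean acidification",
]  # secret evaluation nuggets, normally hidden from systems under test


def nugget_recall(answer: str, nuggets: list[str]) -> float:
    """Fraction of gold nuggets covered by the answer (judge stand-in)."""
    hits = sum(1 for nugget in nuggets if nugget.lower() in answer.lower())
    return hits / len(nuggets)


def honest_system(question: str) -> str:
    # A real RAG system drafts an answer without seeing the nuggets.
    return "Coral bleaching occurs when elevated sea temperatures stress corals."


def leaky_system(question: str, leaked_nuggets: list[str]) -> str:
    # With leaked gold nuggets, the generator can simply restate them.
    return " ".join(leaked_nuggets)


if __name__ == "__main__":
    question = "What causes coral bleaching?"
    print("honest:", nugget_recall(honest_system(question), GOLD_NUGGETS))
    print("leaky: ", nugget_recall(
        leaky_system(question, GOLD_NUGGETS), GOLD_NUGGETS))  # -> 1.0
```

In this toy setup the leaky system reaches a perfect score without retrieving or synthesizing anything, which is the kind of metric overfitting that blind evaluation settings are meant to rule out.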
Submission history
From: Laura Dietz
[v1] Mon, 19 Jan 2026 17:03:20 UTC (437 KB)
[v2] Fri, 27 Mar 2026 12:50:12 UTC (462 KB)