Who Gets Cited Most? Benchmarking Long-Context Numerical Reasoning on Scientific Articles, by Miao Li and 3 other authors
Abstract: We introduce SciTrek, a diagnostic question-answering benchmark designed to probe long-context numerical reasoning in large language models (LLMs). Existing long-context benchmarks mostly focus on simple information retrieval, rely on artificial contexts, or leave numerical reasoning unexplored. SciTrek addresses these limitations through questions that require counting, sorting, aggregating, and comparing information across multiple full-text scientific articles. Questions are generated automatically by formulating them as SQL queries over a database constructed from article metadata (titles, authors, and references), with ground-truth answers obtained via query execution. This design provides verifiable reasoning traces for fine-grained error analysis and enables efficient scaling to longer contexts with minimal human supervision. Extensive experiments on thirteen frontier open-weight and proprietary LLMs reveal that SciTrek poses a significant challenge: even the best-performing model achieves only 46.5% exact match at 128K tokens, and performance declines as context length increases. Models particularly struggle with citation-related questions and compound logical conditions, including negation.
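The SQL-based question-generation design described in the abstract can be illustrated with a minimal sketch. The schema and data below are hypothetical (the paper's actual database layout is not given here): a tiny in-memory SQLite database holds article metadata, and a question such as "Which article is cited most?" is realized as a SQL query whose execution yields the verifiable ground-truth answer.

```python
import sqlite3

# Hypothetical miniature schema: articles(id, title) and
# citations(citing_id, cited_id), built from article metadata.
conn = sqlite3.connect(":memory:")
cur = conn.cursor()
cur.execute("CREATE TABLE articles (id INTEGER PRIMARY KEY, title TEXT)")
cur.execute("CREATE TABLE citations (citing_id INTEGER, cited_id INTEGER)")
cur.executemany("INSERT INTO articles VALUES (?, ?)",
                [(1, "Paper A"), (2, "Paper B"), (3, "Paper C")])
cur.executemany("INSERT INTO citations VALUES (?, ?)",
                [(1, 2), (1, 3), (3, 2)])  # A cites B and C; C cites B

# The natural-language question "Which article is cited most?"
# is formulated as a SQL query; executing it produces the
# ground-truth answer against which model outputs are scored.
cur.execute("""
    SELECT a.title, COUNT(*) AS n_citations
    FROM citations c JOIN articles a ON a.id = c.cited_id
    GROUP BY a.id ORDER BY n_citations DESC LIMIT 1
""")
answer = cur.fetchone()
print(answer)  # -> ('Paper B', 2)
```

Because the answer comes from query execution rather than human annotation, the query plan itself serves as a verifiable reasoning trace, and the same template scales to more articles (and thus longer contexts) with no extra supervision.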
Submission history
From: Miao Li
[v1] Thu, 25 Sep 2025 11:36:09 UTC (1,394 KB)
[v2] Tue, 27 Jan 2026 10:01:24 UTC (1,278 KB)
[v3] Sun, 1 Mar 2026 20:50:48 UTC (1,282 KB)