Token Reduction Should Go Beyond Efficiency in Generative Models — From Vision, Language to Multimodality, by Zhenglun Kong and 9 other authors
Abstract: In Transformer architectures, tokens (discrete units derived from raw data) are formed by segmenting inputs into fixed-length chunks. Each token is then mapped to an embedding, enabling parallel attention computation while preserving the input's essential information. Because the self-attention mechanism has quadratic computational complexity in sequence length, token reduction has primarily served as an efficiency strategy, especially in single-modality vision and language domains, where it balances computational cost, memory usage, and inference latency. Despite these advances, this paper argues that token reduction should transcend its traditional efficiency-oriented role in the era of large generative models. Instead, we position it as a fundamental principle of generative modeling that critically influences both model architecture and broader applications. Specifically, we contend that across vision, language, and multimodal systems, token reduction can (i) facilitate deeper multimodal integration and alignment, (ii) mitigate "overthinking" and hallucinations, (iii) maintain coherence over long inputs, and (iv) enhance training stability, among other benefits. By reframing token reduction as more than an efficiency measure, we outline promising future directions, including algorithm design, reinforcement learning-guided token reduction, token optimization for in-context learning, agentic framework design, and applications in broader machine learning and scientific domains.
Submission history
From: Zhenglun Kong [view email]
[v1] Fri, 23 May 2025 11:30:30 UTC (49 KB)
[v2] Mon, 28 Jul 2025 01:59:08 UTC (52 KB)
[v3] Mon, 12 Jan 2026 21:52:55 UTC (52 KB)