View a PDF of the paper titled Interpreting ResNet-based CLIP via Neuron-Attention Decomposition, by Edmund Bu and Yossi Gandelsman
Abstract: We present a novel technique for interpreting the neurons in CLIP-ResNet by decomposing their contributions to the output into individual computation paths. More specifically, we analyze all pairwise combinations of neurons and the attention heads that follow them in CLIP's attention-pooling layer. We find that each of these neuron-head pairs can be approximated by a single direction in CLIP-ResNet's joint image-text embedding space. Leveraging this insight, we interpret each neuron-head pair by associating it with text. Additionally, we find that only a sparse set of neuron-head pairs contributes significantly to the output, and that some neuron-head pairs, while polysemantic, represent sub-concepts of their corresponding neurons. We use these observations in two applications. First, we employ the pairs for training-free semantic segmentation, outperforming previous methods for CLIP-ResNet. Second, we use the contributions of neuron-head pairs to monitor dataset distribution shifts. Our results demonstrate that examining individual computation paths in neural networks uncovers interpretable units, and that such units can be utilized for downstream tasks.
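The decomposition the abstract describes can be sketched in a few lines. The following toy example (all dimensions, weights, and the einsum factorization are illustrative assumptions, not the paper's actual implementation or CLIP-ResNet's real sizes) shows the core idea: the attention-pooled output is a sum over (neuron, head) computation paths, each of which contributes a single vector in the embedding space, so paths can be ranked by contribution magnitude.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy dimensions (assumed for illustration only):
# N neurons, H attention heads, P spatial positions, D embedding dims
N, H, P, D = 8, 4, 16, 32

acts = rng.normal(size=(N, P))            # neuron activations per position
attn = rng.dirichlet(np.ones(P), size=H)  # per-head attention over positions
W = rng.normal(size=(H, N, D))            # per-head, per-neuron output map

# Attention-pool each neuron's activations under each head:
# pooled[n, h] = sum_p attn[h, p] * acts[n, p]
pooled = np.einsum('hp,np->nh', attn, acts)

# Contribution of each (neuron, head) pair to the output embedding,
# shape (N, H, D): one vector ("direction") per computation path.
contrib = pooled[..., None] * W.transpose(1, 0, 2)

# The full output is exactly the sum over all neuron-head paths.
output = contrib.sum(axis=(0, 1))

# Rank paths by contribution norm; in the paper's setting only a
# sparse set of pairs carries most of the output.
norms = np.linalg.norm(contrib, axis=-1)
```

Under this factorization, each pair's contribution is its pooled activation times a fixed per-pair vector, which is why a single embedding-space direction per pair is a natural approximation.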
Submission history
From: Edmund Bu [view email]
[v1] Wed, 24 Sep 2025 09:50:01 UTC (6,535 KB)
[v2] Thu, 25 Sep 2025 07:27:10 UTC (6,535 KB)
[v3] Sun, 30 Nov 2025 00:24:15 UTC (6,529 KB)

