Multiplicative learning from observation-prediction ratios, by Han Kim and 3 other authors
Abstract: Additive parameter updates, as used in gradient descent and its adaptive extensions, underpin most modern machine-learning optimization. Yet such additive schemes often demand numerous iterations and intricate learning-rate schedules to cope with the scale and curvature of loss functions. Here we introduce Expectation Reflection (ER), a multiplicative learning paradigm that updates parameters based on the ratio of observed to predicted outputs rather than their differences. ER eliminates the need for ad hoc loss functions or learning-rate tuning while maintaining internal consistency. Extending ER to multilayer networks, we demonstrate its efficacy in image classification, achieving optimal weight determination in a single iteration. We further show that ER can be interpreted as a modified gradient descent incorporating an inverse target-propagation mapping. Together, these results position ER as a fast and scalable alternative to conventional optimization methods for neural-network training.
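To make the additive-vs-multiplicative contrast concrete, the following is a minimal toy sketch, not the paper's actual ER algorithm: for a scalar linear model `y_pred = w * x`, a gradient-descent update moves `w` by a difference and needs a tuned learning rate, whereas a multiplicative update rescales `w` by the observation/prediction ratio and, in this trivial case, matches the target in a single step.

```python
# Toy contrast between additive (difference-based) and multiplicative
# (ratio-based) updates for a scalar model y_pred = w * x.
# This is an illustrative sketch only, not the paper's ER update rule.

x, w_true = 2.0, 3.0
y_obs = w_true * x  # observed output: 6.0

# Additive update: w <- w - lr * d(loss)/dw, with loss = 0.5*(y_pred - y_obs)^2.
# Requires a hand-chosen learning rate and many iterations.
w_add, lr = 1.0, 0.05
for _ in range(100):
    y_pred = w_add * x
    w_add -= lr * (y_pred - y_obs) * x

# Multiplicative update: w <- w * (y_obs / y_pred).
# No learning rate; exact in one iteration for this scalar case.
w_mul = 1.0
y_pred = w_mul * x
w_mul *= y_obs / y_pred

print(w_add, w_mul)  # both converge to 3.0; w_mul is exact after one step
```

The single-step exactness here is special to the one-parameter case, but it mirrors the abstract's claim that ratio-based updates can reach optimal weights without learning-rate tuning.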
Submission history
From: Junghyo Jo
[v1] Thu, 13 Mar 2025 08:14:00 UTC (117 KB)
[v2] Tue, 24 Mar 2026 04:34:25 UTC (1,081 KB)