Efficient Zero-Order Federated Finetuning of Language Models for Resource-Constrained Devices
Mohamed Aboelenien Ahmed and 4 other authors
Abstract: Federated fine-tuning offers a promising approach for tuning Large Language Models (LLMs) on edge devices while preserving data privacy. However, fine-tuning these models on edge devices remains challenging due to their high memory, communication, and computational demands. Zero-order optimization with task alignment offers a potential solution, enabling fine-tuning with inference-level memory requirements, but it incurs longer convergence times. In this paper, we propose a method that divides the network into two blocks and applies a different number of perturbations per block in a computationally efficient way, achieving faster convergence. Our evaluation shows a $1.6$–$3\times$ reduction in computational overhead compared to state-of-the-art zero-order techniques in federated learning.
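The abstract only sketches the idea at a high level. Below is a minimal illustrative sketch of block-wise zeroth-order gradient estimation with per-block perturbation budgets, the general technique the abstract describes. It is not the paper's implementation: the toy quadratic loss, the two-block split, and the perturbation counts are all assumptions made for illustration.

```python
# Hedged sketch: zeroth-order (central-difference) updates where each
# parameter block gets its own perturbation budget. Block names, the
# stand-in loss, and the budgets {4, 1} are illustrative assumptions,
# not the paper's actual method.
import numpy as np

rng = np.random.default_rng(0)

def loss(params):
    # Stand-in quadratic loss; a real client would run an LLM forward pass.
    return float(sum(np.sum(p ** 2) for p in params.values()))

def zo_step(params, num_perturbs, eps=1e-3, lr=1e-2):
    """One zeroth-order update; num_perturbs maps block name -> budget q."""
    grads = {name: np.zeros_like(p) for name, p in params.items()}
    for name, p in params.items():
        q = num_perturbs[name]
        for _ in range(q):
            z = rng.standard_normal(p.shape)          # random direction
            params[name] = p + eps * z
            l_plus = loss(params)
            params[name] = p - eps * z
            l_minus = loss(params)
            params[name] = p                          # restore block
            # Average of q directional-derivative estimates along z.
            grads[name] += (l_plus - l_minus) / (2 * eps * q) * z
    for name in params:
        params[name] -= lr * grads[name]
    return params

# Two-block split; the cheaper block gets more perturbations (assumed values).
params = {"block1": rng.standard_normal(8), "block2": rng.standard_normal(8)}
params = zo_step(params, num_perturbs={"block1": 4, "block2": 1})
```

Because only forward passes are needed, memory stays at inference level; spending more perturbations on one block than the other is one way the per-block budgets described in the abstract could trade computation for faster convergence.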
Submission history
From: Mohamed Aboelenien Ahmed
[v1] Fri, 14 Feb 2025 15:49:02 UTC (923 KB)
[v2] Wed, 17 Dec 2025 21:43:09 UTC (223 KB)

