On the Convergence of Stochastic Aggregated Gradient Method
Date
Authors
Journal Title
Journal ISSN
Volume Title
Publisher
Association of Mathematicians (MATDER)
Access Rights
info:eu-repo/semantics/openAccess
Abstract
The problem of minimizing the sum of a large set of convex functions arises in various applications. Methods such as the incremental gradient, stochastic gradient, and aggregated gradient methods are popular choices for solving such problems, as they do not require a full gradient evaluation at every iteration. In this paper, we analyze a generalization of the stochastic aggregated gradient method via an alternative technique based on the convergence of iterative linear systems. The technique provides a short proof of the O(??1) linear convergence rate in the quadratic case. We observe that the technique is rather restrictive in the general case and can yield only weaker results.
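For readers unfamiliar with the method, the sketch below illustrates the stochastic aggregated gradient idea on a least-squares instance of the finite-sum problem: a table of component gradients is kept, one entry is refreshed per iteration, and the step uses the average of the (possibly stale) stored gradients. The problem data, step size, and iteration count are illustrative assumptions and are not taken from the paper.

import numpy as np

# Minimal sketch (assumed setup): minimize f(x) = (1/n) * sum_i f_i(x)
# with f_i(x) = 0.5 * (a_i^T x - b_i)^2, using a stochastic aggregated
# gradient scheme.
rng = np.random.default_rng(0)
n, d = 200, 10
A = rng.standard_normal((n, d))
b = rng.standard_normal(n)

L_max = np.max((A ** 2).sum(axis=1))  # Lipschitz constant of each component gradient
step = 1.0 / (16.0 * L_max)           # conservative, illustrative step size

x = np.zeros(d)
grads = np.zeros((n, d))              # table of most recently evaluated component gradients
grad_sum = grads.sum(axis=0)          # running sum of the stored gradients

for _ in range(20000):
    i = rng.integers(n)
    # Gradient of the i-th component at the current iterate.
    g_new = (A[i] @ x - b[i]) * A[i]
    # Replace the stored gradient for component i and update the aggregate.
    grad_sum += g_new - grads[i]
    grads[i] = g_new
    # Step along the average of all stored gradients.
    x -= step * grad_sum / n

print("full gradient norm:", np.linalg.norm(A.T @ (A @ x - b)) / n)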
Description
Keywords
incremental methods, stochastic gradient, unconstrained optimization
Source
Turkish Journal of Mathematics and Computer Science
WoS Q Value
Scopus Q Value
Volume
15
Issue
1
