en:intro:publications [2025/09/09 17:59] (current) – Naoki SATO
==== Proceedings ====
  - Keisuke Kamo, [[..:iiduka:|Hideaki Iiduka]]: **Increasing Batch Size Improves Convergence of Stochastic Gradient Descent with Momentum**, Proceedings of the 17th Asian Conference on Machine Learning, PMLR ???: ????-???? (2025)
  - [[https://scholar.google.co.jp/citations?hl=ja&user=dejA0qcAAAAJ|Kanata Oowada]], [[..:iiduka:|Hideaki Iiduka]]: **Faster Convergence of Riemannian Stochastic Gradient Descent with Increasing Batch Size**, Proceedings of the 17th Asian Conference on Machine Learning, PMLR ???: ????-???? (2025)
  - [[https://scholar.google.com/citations?user=hdDU4Z4AAAAJ&hl=ja|Kento Imaizumi]], [[..:iiduka:|Hideaki Iiduka]]: **Both Asymptotic and Non-Asymptotic Convergence of Quasi-Hyperbolic Momentum using Increasing Batch Size**, Proceedings of the 17th Asian Conference on Machine Learning, PMLR ???: ????-???? (2025)
  - [[https://scholar.google.co.jp/citations?user=rNbGTIgAAAAJ&hl=ja|Naoki Sato]], [[..:iiduka:|Hideaki Iiduka]]: **[[https://ojs.aaai.org/index.php/AAAI/article/view/34234|Explicit and Implicit Graduated Optimization in Deep Neural Networks]]**, [[https://aaai.org/proceeding/aaai-39-2025/|Proceedings of the AAAI Conference on Artificial Intelligence]], 39 (19), 20283--20291 (2025) [[https://ojs.aaai.org/index.php/AAAI/article/view/34234|Open Access]]
==== Reward and Punishment ====
==== Conference Activities & Talks ====
  - Keisuke Kamo, [[..:iiduka:|Hideaki Iiduka]]: **Increasing Batch Size Improves Convergence of Stochastic Gradient Descent with Momentum**, [[https://www.acml-conf.org/2025/|The 17th Asian Conference on Machine Learning (ACML2025)]], Taipei, Taiwan (Dec. 9--12, 2025)
  - [[https://scholar.google.co.jp/citations?hl=ja&user=dejA0qcAAAAJ|Kanata Oowada]], [[..:iiduka:|Hideaki Iiduka]]: **Faster Convergence of Riemannian Stochastic Gradient Descent with Increasing Batch Size**, [[https://www.acml-conf.org/2025/|The 17th Asian Conference on Machine Learning (ACML2025)]], Taipei, Taiwan (Dec. 9--12, 2025)
  - [[https://scholar.google.com/citations?user=hdDU4Z4AAAAJ&hl=ja|Kento Imaizumi]], [[..:iiduka:|Hideaki Iiduka]]: **Both Asymptotic and Non-Asymptotic Convergence of Quasi-Hyperbolic Momentum using Increasing Batch Size**, [[https://www.acml-conf.org/2025/|The 17th Asian Conference on Machine Learning (ACML2025)]], Taipei, Taiwan (Dec. 9--12, 2025)
  - [[https://scholar.google.co.jp/citations?hl=ja&user=dejA0qcAAAAJ|Kanata Oowada]], [[..:iiduka:|Hideaki Iiduka]]: **Improvements of Computational Efficiency and Convergence Rate of Riemannian Stochastic Gradient Descent with Increasing Batch Size**, RIMS Workshop on Nonlinear Analysis and Convex Analysis, Research Institute for Mathematical Sciences, Kyoto University (Sep. 26, 2025)
  - [[https://scholar.google.co.jp/citations?user=rNbGTIgAAAAJ&hl=ja|Naoki Sato]], [[https://scholar.google.co.jp/citations?user=xx7O2voAAAAJ&hl=ja|Hiroki Naganuma]], [[en:iiduka:|Hideaki Iiduka]]: **Analysis of Muon's convergence and critical batch size**, The 2025 Fall National Conference of Operations Research Society of Japan, Higashi-Hiroshima campus, Hiroshima University (Sep. 12, 2025)
  - [[https://scholar.google.co.jp/citations?user=RXrwOgoAAAAJ&hl=ja|Hiroyuki Sakai]], [[en:iiduka:|Hideaki Iiduka]]: **A General Framework of Riemannian Adaptive Optimization Methods with a Convergence Analysis**, The 2025 Fall National Conference of Operations Research Society of Japan, Higashi-Hiroshima campus, Hiroshima University (Sep. 12, 2025)
  - [[https://scholar.google.co.jp/citations?hl=ja&user=dejA0qcAAAAJ|Kanata Owada]], [[..:iiduka:|Hideaki Iiduka]]: **Improving Computational Efficiency and Convergence Rate of Stochastic Gradient Descent on Riemannian Manifolds by Focusing on Batch Size**, Summer School on Continuous Optimization and Related Fields, The Institute of Statistical Mathematics (Aug. 28, 2025)
  - [[https://scholar.google.com/citations?hl=ja&user=s9l7NM8AAAAJ|Hikaru Umeda]], [[..:iiduka:|Hideaki Iiduka]]: **Acceleration of Stochastic Gradient Descent by Increasing Batch Size and Learning Rate**, The 2025 Spring National Conference of Operations Research Society of Japan, Seikei University (Mar. 6, 2025)
  - [[https://scholar.google.co.jp/citations?hl=ja&user=dejA0qcAAAAJ|Kanata Owada]], [[..:iiduka:|Hideaki Iiduka]]: **Batch Size and Learning Rate of Stochastic Gradient Descent on Riemannian manifold**, The 2025 Spring National Conference of Operations Research Society of Japan, Seikei University (Mar. 6, 2025)
  - [[https://scholar.google.com/citations?user=hdDU4Z4AAAAJ&hl=ja|Kento Imaizumi]], [[..:iiduka:|Hideaki Iiduka]]: **Convergence Analysis of SGD with Momentum with Increasing or Decaying Momentum Factor in Nonconvex Optimization**, The 2025 Spring National Conference of Operations Research Society of Japan, Seikei University (Mar. 6, 2025)
  - [[https://scholar.google.co.jp/citations?user=rNbGTIgAAAAJ&hl=ja|Naoki Sato]], [[en:iiduka:|Hideaki Iiduka]]: **Stochastic Frank Wolfe method for Constrained Nonconvex Optimization and its Application for Adversarial Attack**, The 2025 Spring National Conference of Operations Research Society of Japan, Seikei University (Mar. 6, 2025)