en:intro:publications [2025/09/01 20:10] – [Proceedings] Naoki SATO → en:intro:publications [2025/10/24 15:07] (current) – [Proceedings] Naoki SATO
  
==== Proceedings ====
  - Keisuke Kamo, [[..:iiduka:|Hideaki Iiduka]]: **Increasing Batch Size Improves Convergence of Stochastic Gradient Descent with Momentum**, Proceedings of the 17th Asian Conference on Machine Learning, PMLR 304: ????-???? (2025)
  - [[https://scholar.google.co.jp/citations?hl=ja&user=dejA0qcAAAAJ|Kanata Oowada]], [[..:iiduka:|Hideaki Iiduka]]: **Faster Convergence of Riemannian Stochastic Gradient Descent with Increasing Batch Size**, Proceedings of the 17th Asian Conference on Machine Learning, PMLR 304: ????-???? (2025)
  - [[https://scholar.google.com/citations?user=hdDU4Z4AAAAJ&hl=ja|Kento Imaizumi]], [[..:iiduka:|Hideaki Iiduka]]: **Both Asymptotic and Non-Asymptotic Convergence of Quasi-Hyperbolic Momentum using Increasing Batch Size**, Proceedings of the 17th Asian Conference on Machine Learning, PMLR 304: ????-???? (2025)
  - [[https://scholar.google.co.jp/citations?user=rNbGTIgAAAAJ&hl=ja|Naoki Sato]], [[..:iiduka:|Hideaki Iiduka]]: **[[https://ojs.aaai.org/index.php/AAAI/article/view/34234|Explicit and Implicit Graduated Optimization in Deep Neural Networks]]**, [[https://aaai.org/proceeding/aaai-39-2025/|Proceedings of the AAAI Conference on Artificial Intelligence]], 39 (19), 20283--20291 (2025) [[https://ojs.aaai.org/index.php/AAAI/article/view/34234|Open Access]]
  
==== Reward and Punishment ====
  - [[https://scholar.google.co.jp/citations?hl=ja&user=dejA0qcAAAAJ|Kanata Oowada]]: [[https://www.marubun-zaidan.jp/en/h_j_gaiyou.html|Marubun Research Promotion Foundation, International Exchange Grant Project]], **100,000 yen** (Travel expenses to ACML2025) (Oct. 21, 2025)
  - [[https://scholar.google.co.jp/citations?user=RXrwOgoAAAAJ&hl=ja|Hiroyuki Sakai]]: [[https://orsj.org/2025f/conference/student_award/|The Student Excellent Presentation Award of the 2025 Fall National Conference of the Operations Research Society of Japan]] (Sep. 26, 2025)
  - [[https://scholar.google.co.jp/citations?user=rNbGTIgAAAAJ&hl=ja|Naoki Sato]]: The Operations Research Society of Japan, the 43rd Student Paper Award (Jul. 28, 2025)
  - [[https://scholar.google.co.jp/citations?user=rNbGTIgAAAAJ&hl=ja|Naoki Sato]]: [[https://www.google.com/about/careers/applications/buildyourfuture/scholarships/google-conference-scholarships?ohl=default&hl=en_US|Google Conference Scholarship]], **$1500** (Travel expenses to AAAI-25) (Feb. 25, 2025)
==== Conference Activities & Talks ====
  - Keisuke Kamo, [[..:iiduka:|Hideaki Iiduka]]: **Increasing Batch Size Improves Convergence of Stochastic Gradient Descent with Momentum**, [[https://www.acml-conf.org/2025/|The 17th Asian Conference on Machine Learning (ACML2025)]], Taipei, Taiwan (Dec. 9--12, 2025)
  - [[https://scholar.google.co.jp/citations?hl=ja&user=dejA0qcAAAAJ|Kanata Oowada]], [[..:iiduka:|Hideaki Iiduka]]: **Faster Convergence of Riemannian Stochastic Gradient Descent with Increasing Batch Size**, [[https://www.acml-conf.org/2025/|The 17th Asian Conference on Machine Learning (ACML2025)]], Taipei, Taiwan (Dec. 9--12, 2025)
  - [[https://scholar.google.com/citations?user=hdDU4Z4AAAAJ&hl=ja|Kento Imaizumi]], [[..:iiduka:|Hideaki Iiduka]]: **Both Asymptotic and Non-Asymptotic Convergence of Quasi-Hyperbolic Momentum using Increasing Batch Size**, [[https://www.acml-conf.org/2025/|The 17th Asian Conference on Machine Learning (ACML2025)]], Taipei, Taiwan (Dec. 9--12, 2025)
  - [[https://scholar.google.co.jp/citations?hl=ja&user=dejA0qcAAAAJ|Kanata Oowada]], [[..:iiduka:|Hideaki Iiduka]]: **Improvements of Computational Efficiency and Convergence Rate of Riemannian Stochastic Gradient Descent with Increasing Batch Size**, RIMS Workshop on Nonlinear Analysis and Convex Analysis, Research Institute for Mathematical Sciences, Kyoto University (Sep. 26, 2025)
  - [[https://scholar.google.co.jp/citations?user=rNbGTIgAAAAJ&hl=ja|Naoki Sato]], [[https://scholar.google.co.jp/citations?user=xx7O2voAAAAJ&hl=ja|Hiroki Naganuma]], [[en:iiduka:|Hideaki Iiduka]]: **Analysis of Muon's convergence and critical batch size**, The 2025 Fall National Conference of the Operations Research Society of Japan, Higashi-Hiroshima Campus, Hiroshima University (Sep. 12, 2025)
  - [[https://scholar.google.co.jp/citations?user=RXrwOgoAAAAJ&hl=ja|Hiroyuki Sakai]], [[en:iiduka:|Hideaki Iiduka]]: **A General Framework of Riemannian Adaptive Optimization Methods with a Convergence Analysis**, The 2025 Fall National Conference of the Operations Research Society of Japan, Higashi-Hiroshima Campus, Hiroshima University (Sep. 12, 2025)
  - [[https://scholar.google.co.jp/citations?hl=ja&user=dejA0qcAAAAJ|Kanata Oowada]], [[..:iiduka:|Hideaki Iiduka]]: **Improving Computational Efficiency and Convergence Rate of Stochastic Gradient Descent on Riemannian Manifolds by Focusing on Batch Size**, Summer School on Continuous Optimization and Related Fields, The Institute of Statistical Mathematics (Aug. 28, 2025)