==== Proceedings ====
  - Keisuke Kamo, [[..:iiduka:|Hideaki Iiduka]]: **Increasing Batch Size Improves Convergence of Stochastic Gradient Descent with Momentum**, Proceedings of the 17th Asian Conference on Machine Learning, PMLR 304: ????-???? (2025)
  - [[https://scholar.google.co.jp/citations?hl=ja&user=dejA0qcAAAAJ|Kanata Oowada]], [[..:iiduka:|Hideaki Iiduka]]: **Faster Convergence of Riemannian Stochastic Gradient Descent with Increasing Batch Size**, Proceedings of the 17th Asian Conference on Machine Learning, PMLR 304: ????-???? (2025)
  - [[https://scholar.google.com/citations?user=hdDU4Z4AAAAJ&hl=ja|Kento Imaizumi]], [[..:iiduka:|Hideaki Iiduka]]: **Both Asymptotic and Non-Asymptotic Convergence of Quasi-Hyperbolic Momentum using Increasing Batch Size**, Proceedings of the 17th Asian Conference on Machine Learning, PMLR 304: ????-???? (2025)
  - [[https://scholar.google.co.jp/citations?user=rNbGTIgAAAAJ&hl=ja|Naoki Sato]], [[..:iiduka:|Hideaki Iiduka]]: **[[https://ojs.aaai.org/index.php/AAAI/article/view/34234|Explicit and Implicit Graduated Optimization in Deep Neural Networks]]**, [[https://aaai.org/proceeding/aaai-39-2025/|Proceedings of the AAAI Conference on Artificial Intelligence]], 39 (19), 20283--20291 (2025) [[https://ojs.aaai.org/index.php/AAAI/article/view/34234|Open Access]]

==== Awards and Grants ====
  - [[https://scholar.google.co.jp/citations?hl=ja&user=dejA0qcAAAAJ|Kanata Oowada]]: [[https://www.marubun-zaidan.jp/en/h_j_gaiyou.html|Marubun Research Promotion Foundation, International Exchange Grant Project]], **100,000 yen** (Travel expenses to ACML2025) (Oct. 21, 2025)
  - [[https://scholar.google.co.jp/citations?user=RXrwOgoAAAAJ&hl=ja|Hiroyuki Sakai]]: [[https://orsj.org/2025f/conference/student_award/|The Student Excellent Presentation Award of The 2025 Fall National Conference of Operations Research Society of Japan]] (Sep. 26, 2025)
  - [[https://scholar.google.co.jp/citations?user=rNbGTIgAAAAJ&hl=ja|Naoki Sato]]: The Operations Research Society of Japan, the 43rd Student Paper Award (Jul. 28, 2025)
  - [[https://scholar.google.co.jp/citations?user=rNbGTIgAAAAJ&hl=ja|Naoki Sato]]: [[https://www.google.com/about/careers/applications/buildyourfuture/scholarships/google-conference-scholarships?ohl=default&hl=en_US|Google Conference Scholarship]], **$1500** (Travel expenses to AAAI-25) (Feb. 25, 2025)

==== Conference Activities & Talks ====
  - Keisuke Kamo, [[..:iiduka:|Hideaki Iiduka]]: **Increasing Batch Size Improves Convergence of Stochastic Gradient Descent with Momentum**, [[https://www.acml-conf.org/2025/|The 17th Asian Conference on Machine Learning (ACML2025)]], Taipei, Taiwan (Dec. 9--12, 2025)
  - [[https://scholar.google.co.jp/citations?hl=ja&user=dejA0qcAAAAJ|Kanata Oowada]], [[..:iiduka:|Hideaki Iiduka]]: **Faster Convergence of Riemannian Stochastic Gradient Descent with Increasing Batch Size**, [[https://www.acml-conf.org/2025/|The 17th Asian Conference on Machine Learning (ACML2025)]], Taipei, Taiwan (Dec. 9--12, 2025)
  - [[https://scholar.google.com/citations?user=hdDU4Z4AAAAJ&hl=ja|Kento Imaizumi]], [[..:iiduka:|Hideaki Iiduka]]: **Both Asymptotic and Non-Asymptotic Convergence of Quasi-Hyperbolic Momentum using Increasing Batch Size**, [[https://www.acml-conf.org/2025/|The 17th Asian Conference on Machine Learning (ACML2025)]], Taipei, Taiwan (Dec. 9--12, 2025)
  - [[https://scholar.google.co.jp/citations?hl=ja&user=dejA0qcAAAAJ|Kanata Oowada]], [[..:iiduka:|Hideaki Iiduka]]: **Improvements of Computational Efficiency and Convergence Rate of Riemannian Stochastic Gradient Descent with Increasing Batch Size**, RIMS Workshop on Nonlinear Analysis and Convex Analysis, Research Institute for Mathematical Sciences, Kyoto University (Sep. 26, 2025)
  - [[https://scholar.google.co.jp/citations?user=rNbGTIgAAAAJ&hl=ja|Naoki Sato]], [[https://scholar.google.co.jp/citations?user=xx7O2voAAAAJ&hl=ja|Hiroki Naganuma]], [[en:iiduka:|Hideaki Iiduka]]: **Analysis of Muon's convergence and critical batch size**, The 2025 Fall National Conference of Operations Research Society of Japan, Higashi-Hiroshima campus, Hiroshima University (Sep. 12, 2025)
  - [[https://scholar.google.co.jp/citations?user=RXrwOgoAAAAJ&hl=ja|Hiroyuki Sakai]], [[en:iiduka:|Hideaki Iiduka]]: **A General Framework of Riemannian Adaptive Optimization Methods with a Convergence Analysis**, The 2025 Fall National Conference of Operations Research Society of Japan, Higashi-Hiroshima campus, Hiroshima University (Sep. 12, 2025)
  - [[https://scholar.google.co.jp/citations?hl=ja&user=dejA0qcAAAAJ|Kanata Owada]], [[..:iiduka:|Hideaki Iiduka]]: **Improving Computational Efficiency and Convergence Rate of Stochastic Gradient Descent on Riemannian Manifolds by Focusing on Batch Size**, Summer School on Continuous Optimization and Related Fields, The Institute of Statistical Mathematics (Aug. 28, 2025)
  - [[https://scholar.google.com/citations?hl=ja&user=s9l7NM8AAAAJ|Hikaru Umeda]], [[..:iiduka:|Hideaki Iiduka]]: **Acceleration of Stochastic Gradient Descent by Increasing Batch Size and Learning Rate**, The 2025 Spring National Conference of Operations Research Society of Japan, Seikei University (Mar. 6, 2025)
  - [[https://scholar.google.co.jp/citations?hl=ja&user=dejA0qcAAAAJ|Kanata Owada]], [[..:iiduka:|Hideaki Iiduka]]: **Batch Size and Learning Rate of Stochastic Gradient Descent on Riemannian manifold**, The 2025 Spring National Conference of Operations Research Society of Japan, Seikei University (Mar. 6, 2025)
  - [[https://scholar.google.com/citations?user=hdDU4Z4AAAAJ&hl=ja|Kento Imaizumi]], [[..:iiduka:|Hideaki Iiduka]]: **Convergence Analysis of SGD with Momentum with Increasing or Decaying Momentum Factor in Nonconvex Optimization**, The 2025 Spring National Conference of Operations Research Society of Japan, Seikei University (Mar. 6, 2025)
  - [[https://scholar.google.co.jp/citations?user=rNbGTIgAAAAJ&hl=ja|Naoki Sato]], [[en:iiduka:|Hideaki Iiduka]]: **Stochastic Frank Wolfe method for Constrained Nonconvex Optimization and its Application for Adversarial Attack**, The 2025 Spring National Conference of Operations Research Society of Japan, Seikei University (Mar. 6, 2025)