===== 2026 =====
==== Books ====
  - [[..:iiduka:|Hideaki Iiduka]]: **[[|??]]**, [[|??]] (2026)

==== Refereed Original Papers ====
  - Shun-ya Aoki, [[..:iiduka:|Hideaki Iiduka]]: **[[|Theoretical Analysis of Sharpness-Aware Minimization Algorithm with Diminishing Learning Rate Based on Variational Inequality]]**, [[https://cot.mathres.org/|Communications in Optimization Theory]]: Special Issue dedicated to Professor Terry Rockafellar on the occasion of his 90th birthday ?? (??): ???--??? (2026) [[|Open Access]]
==== Bulletins and Proceedings ====
  - [[https://scholar.google.co.jp/citations?user=rNbGTIgAAAAJ&hl=ja|Naoki Sato]], [[..:iiduka:|Hideaki Iiduka]]: **Lipschitz Multiscale Deep Equilibrium Models: A Theoretically Guaranteed and Accelerated Approach**, Proceedings of the 29th International Conference on Artificial Intelligence and Statistics, PMLR 300: ????--???? (2026)

==== Doctoral Dissertations ====
  - Hiroyuki Sakai: **Riemannian Adaptive Optimization Algorithms and Their Applications** (Japanese title: リーマン多様体上の適応的最適化アルゴリズムとその応用), Meiji University, 2026 {{intro:doctorial_sakai.pdf|PDF}}
  
==== Master's Theses ====
  - 櫛谷 圭介: **Convergence Analysis of DP-Clipped-SGD for Convex Functions**
  - 渡邉 翠: **Convergence Analysis of Stochastic Gradient Descent Using PoNoS Line Search**

==== Lectures and Oral Presentations ====
  - [[https://scholar.google.co.jp/citations?user=rNbGTIgAAAAJ&hl=ja|Naoki Sato]], [[..:iiduka:|Hideaki Iiduka]]: **Lipschitz Multiscale Deep Equilibrium Models: A Theoretically Guaranteed and Accelerated Approach**, [[https://virtual.aistats.org/|The 29th International Conference on Artificial Intelligence and Statistics (AISTATS)]], Tangier, Morocco (May 2--5, 2026)
  - [[https://scholar.google.co.jp/citations?user=rNbGTIgAAAAJ&hl=ja|Naoki Sato]], [[..:iiduka:|Hideaki Iiduka]]: **Acceleration of Deep Equilibrium Models Based on the Banach Fixed-Point Theorem**, IEICE Technical Committee on Information-Based Induction Sciences and Machine Learning (IBISML), 59th Meeting, Awagin Hall (March 25, 2026)
  
  
==== Bulletins and Proceedings ====
  - Keisuke Kamo, [[..:iiduka:|Hideaki Iiduka]]: **Increasing Batch Size Improves Convergence of Stochastic Gradient Descent with Momentum**, Proceedings of the 17th Asian Conference on Machine Learning, PMLR 304: ????--???? (2025)
  - [[https://scholar.google.com/citations?user=3U-XTE0AAAAJ&hl=ja|Kanata Oowada]], [[..:iiduka:|Hideaki Iiduka]]: **Faster Convergence of Riemannian Stochastic Gradient Descent with Increasing Batch Size**, Proceedings of the 17th Asian Conference on Machine Learning, PMLR 304: ????--???? (2025)
  - [[https://scholar.google.com/citations?user=hdDU4Z4AAAAJ&hl=ja|Kento Imaizumi]], [[..:iiduka:|Hideaki Iiduka]]: **Both Asymptotic and Non-Asymptotic Convergence of Quasi-Hyperbolic Momentum using Increasing Batch Size**, Proceedings of the 17th Asian Conference on Machine Learning, PMLR 304: ????--???? (2025)
  - [[https://scholar.google.co.jp/citations?user=rNbGTIgAAAAJ&hl=ja|Naoki Sato]], [[..:iiduka:|Hideaki Iiduka]]: **[[https://ojs.aaai.org/index.php/AAAI/article/view/34234|Explicit and Implicit Graduated Optimization in Deep Neural Networks]]**, [[https://aaai.org/proceeding/aaai-39-2025/|Proceedings of the AAAI Conference on Artificial Intelligence]], 39 (19), 20283--20291 (2025) [[https://ojs.aaai.org/index.php/AAAI/article/view/34234|Open Access]]
==== Awards ====
  - [[https://scholar.google.com/citations?user=hdDU4Z4AAAAJ&hl=ja|Kento Imaizumi]]: [[https://www.denkidenshi.or.jp/tokojyosei.html|公益財団法人電気電子情報学術振興財団]], **150,000 JPY** (travel expenses for ACML2025) (December 11, 2025)
  - [[https://scholar.google.com/citations?user=3U-XTE0AAAAJ&hl=ja|Kanata Oowada]], [[..:iiduka:|Hideaki Iiduka]]: **Faster Convergence of Riemannian Stochastic Gradient Descent with Increasing Batch Size**, ACML2025 Best Paper Runner-Up Award (December 10, 2025)
  - [[https://scholar.google.com/citations?user=3U-XTE0AAAAJ&hl=ja|Kanata Oowada]]: [[https://www.marubun-zaidan.jp/kokusai.html|一般社団法人丸文財団 国際交流助成]], **100,000 JPY** (travel expenses for ACML2025) (October 21, 2025)
  - [[https://scholar.google.co.jp/citations?user=RXrwOgoAAAAJ&hl=ja|Hiroyuki Sakai]]: **A General Framework and Convergence Analysis of Mini-Batch Adaptive Optimization Methods on Riemannian Manifolds**, [[https://orsj.org/2025f/conference/student_award/|Student Excellent Presentation Award, 2025 Fall National Conference of the Operations Research Society of Japan]] (September 26, 2025)
  - [[https://scholar.google.co.jp/citations?user=rNbGTIgAAAAJ&hl=ja|Naoki Sato]]: [[https://orsj.org/award-history|43rd Student Paper Award, the Operations Research Society of Japan]] (July 28, 2025) {{:intro:naoki_sato_master.pdf|PDF}}
==== Lectures and Oral Presentations (International) ====
  - Keisuke Kamo, [[..:iiduka:|Hideaki Iiduka]]: **Increasing Batch Size Improves Convergence of Stochastic Gradient Descent with Momentum**, [[https://www.acml-conf.org/2025/|The 17th Asian Conference on Machine Learning (ACML2025)]], Taipei, Taiwan (Dec. 9--12, 2025)
  - [[https://scholar.google.com/citations?user=3U-XTE0AAAAJ&hl=ja|Kanata Oowada]], [[..:iiduka:|Hideaki Iiduka]]: **Faster Convergence of Riemannian Stochastic Gradient Descent with Increasing Batch Size**, [[https://www.acml-conf.org/2025/|The 17th Asian Conference on Machine Learning (ACML2025)]], Taipei, Taiwan (Dec. 9--12, 2025)
  - [[https://scholar.google.com/citations?user=hdDU4Z4AAAAJ&hl=ja|Kento Imaizumi]], [[..:iiduka:|Hideaki Iiduka]]: **Both Asymptotic and Non-Asymptotic Convergence of Quasi-Hyperbolic Momentum using Increasing Batch Size**, [[https://www.acml-conf.org/2025/|The 17th Asian Conference on Machine Learning (ACML2025)]], Taipei, Taiwan (Dec. 9--12, 2025)
  - [[https://scholar.google.co.jp/citations?user=rNbGTIgAAAAJ&hl=ja|Naoki Sato]], [[..:iiduka:|Hideaki Iiduka]]: **Explicit and Implicit Graduated Optimization in Deep Neural Networks**, [[https://aaai.org/conference/aaai/aaai-25/|The 39th Annual AAAI Conference on Artificial Intelligence (AAAI-25)]], Pennsylvania Convention Center, Philadelphia, Pennsylvania, USA (Feb. 27 -- Mar. 4, 2025)