===== 2025 =====
==== Publications in Refereed Journals ====
  - [[https://scholar.google.com/citations?user=hdDU4Z4AAAAJ&hl=ja|Kento Imaizumi]], [[en:iiduka:|Hideaki Iiduka]]: **[[https://doi.org/10.1080/02331934.2024.2367635|Iteration and Stochastic First-order Oracle Complexities of Stochastic Gradient Descent using Constant and Decaying Learning Rates]]**, [[https://www.tandfonline.com/journals/gopt20|Optimization]]: Special Issue dedicated to Dr. Alexander J. Zaslavski on the occasion of his 65th birthday ?? (?):?? --?? (2025) [[https://www.tandfonline.com/doi/epdf/10.1080/02331934.2024.2367635?needAccess=true|Open Access]]
  - [[https://github.com/tsukaoyuki|Yuki Tsukada]], [[en:iiduka:|Hideaki Iiduka]]: **[[https://openreview.net/forum?id=pqZ6nOm3WF|Relationship between Batch Size and Number of Steps Needed for Nonconvex Optimization of Stochastic Gradient Descent using Armijo-Line-Search Learning Rate]]**, [[https://jmlr.org/tmlr/|Transactions on Machine Learning Research]] (2025) [[https://openreview.net/pdf?id=pqZ6nOm3WF|Open Access]]
  - [[https://scholar.google.com/citations?hl=ja&user=s9l7NM8AAAAJ|Hikaru Umeda]], [[en:iiduka:|Hideaki Iiduka]]: **[[https://openreview.net/forum?id=sbmp55k6iE|Increasing Both Batch Size and Learning Rate Accelerates Stochastic Gradient Descent]]**, [[https://jmlr.org/tmlr/|Transactions on Machine Learning Research]] (2025) [[https://openreview.net/pdf?id=sbmp55k6iE|Open Access]]
  
==== Proceedings ====
  - [[https://scholar.google.co.jp/citations?user=rNbGTIgAAAAJ&hl=ja|Naoki Sato]], [[..:iiduka:|Hideaki Iiduka]]: **[[https://ojs.aaai.org/index.php/AAAI/article/view/34234|Explicit and Implicit Graduated Optimization in Deep Neural Networks]]**, [[https://aaai.org/proceeding/aaai-39-2025/|Proceedings of the AAAI Conference on Artificial Intelligence]], 39 (19), 20283--20291 (2025)
  
==== Conference Activities & Talks ====