===== 2025 =====
==== Publications in Refereed Journals ====
  - [[https://scholar.google.com/citations?user=hdDU4Z4AAAAJ&hl=ja|Kento Imaizumi]], [[en:iiduka:|Hideaki Iiduka]]: **[[https://doi.org/10.1080/02331934.2024.2367635|Iteration and Stochastic First-order Oracle Complexities of Stochastic Gradient Descent using Constant and Decaying Learning Rates]]**, [[https://www.tandfonline.com/journals/gopt20|Optimization]]: Special Issue dedicated to Dr. Alexander J. Zaslavski on the occasion of his 65th birthday ?? (?):?? --?? (2025) [[https://www.tandfonline.com/doi/epdf/10.1080/02331934.2024.2367635?needAccess=true|Open Access]]
  - [[https://github.com/tsukaoyuki|Yuki Tsukada]], [[en:iiduka:|Hideaki Iiduka]]: **[[https://openreview.net/forum?id=pqZ6nOm3WF|Relationship between Batch Size and Number of Steps Needed for Nonconvex Optimization of Stochastic Gradient Descent using Armijo-Line-Search Learning Rate]]**, [[https://jmlr.org/tmlr/|Transactions on Machine Learning Research]] (2025) [[https://openreview.net/pdf?id=pqZ6nOm3WF|Open Access]]
  - [[https://scholar.google.com/citations?hl=ja&user=s9l7NM8AAAAJ|Hikaru Umeda]], [[en:iiduka:|Hideaki Iiduka]]: **[[https://openreview.net/forum?id=sbmp55k6iE|Increasing Both Batch Size and Learning Rate Accelerates Stochastic Gradient Descent]]**, [[https://jmlr.org/tmlr/|Transactions on Machine Learning Research]] (2025) [[https://openreview.net/pdf?id=sbmp55k6iE|Open Access]]
  - [[https://scholar.google.co.jp/citations?user=RXrwOgoAAAAJ&hl=ja|Hiroyuki Sakai]], [[..:iiduka:|Hideaki Iiduka]]: **[[https://openreview.net/forum?id=knv4lQFVoE|A General Framework of Riemannian Adaptive Optimization Methods with a Convergence Analysis]]**, [[https://jmlr.org/tmlr/|Transactions on Machine Learning Research]] (2025) [[https://openreview.net/pdf?id=knv4lQFVoE|Open Access]]

==== Proceedings ====
  - [[https://scholar.google.co.jp/citations?user=rNbGTIgAAAAJ&hl=ja|Naoki Sato]], [[..:iiduka:|Hideaki Iiduka]]: **[[https://ojs.aaai.org/index.php/AAAI/article/view/34234|Explicit and Implicit Graduated Optimization in Deep Neural Networks]]**, [[https://aaai.org/proceeding/aaai-39-2025/|Proceedings of the AAAI Conference on Artificial Intelligence]], 39 (19), 20283--20291 (2025)

==== Conference Activities & Talks ====
  - [[https://scholar.google.com/citations?hl=ja&user=s9l7NM8AAAAJ|Hikaru Umeda]], [[..:iiduka:|Hideaki Iiduka]]: **Acceleration of Stochastic Gradient Descent by Increasing Batch Size and Learning Rate**, The 2025 Spring National Conference of Operations Research Society of Japan, Seikei University (Mar. 6, 2025)
  - Kanata Owada, [[..:iiduka:|Hideaki Iiduka]]: **Batch Size and Learning Rate of Stochastic Gradient Descent on Riemannian Manifold**, The 2025 Spring National Conference of Operations Research Society of Japan, Seikei University (Mar. 6, 2025)
  - [[https://scholar.google.com/citations?user=hdDU4Z4AAAAJ&hl=ja|Kento Imaizumi]], [[..:iiduka:|Hideaki Iiduka]]: **Convergence Analysis of SGD with Momentum with Increasing or Decaying Momentum Factor in Nonconvex Optimization**, The 2025 Spring National Conference of Operations Research Society of Japan, Seikei University (Mar. 6, 2025)
  - [[https://scholar.google.co.jp/citations?user=rNbGTIgAAAAJ&hl=ja|Naoki Sato]], [[en:iiduka:|Hideaki Iiduka]]: **Stochastic Frank Wolfe Method for Constrained Nonconvex Optimization and its Application to Adversarial Attack**, The 2025 Spring National Conference of Operations Research Society of Japan, Seikei University (Mar. 6, 2025)
  - [[https://scholar.google.co.jp/citations?user=rNbGTIgAAAAJ&hl=ja|Naoki Sato]], [[..:iiduka:|Hideaki Iiduka]]: **Explicit and Implicit Graduated Optimization in Deep Neural Networks**, [[https://aaai.org/conference/aaai/aaai-25/|The 39th Annual AAAI Conference on Artificial Intelligence (AAAI-25)]], Pennsylvania Convention Center, Philadelphia, Pennsylvania, USA (Feb. 27 -- Mar. 4, 2025)

===== 2024 =====
==== Publications in Refereed Journals ====
  - [[https://scholar.google.co.jp/citations?user=rNbGTIgAAAAJ&hl=ja|Naoki Sato]], Koshiro Izumi, [[..:iiduka:|Hideaki Iiduka]]: **[[https://jmlr.org/papers/v25/22-0815.html|Scaled Conjugate Gradient Method for Nonconvex Optimization in Deep Neural Networks]]**, [[https://www.jmlr.org/|Journal of Machine Learning Research]] 25 (395): 1--37 (2024) [[https://jmlr.org/papers/volume25/22-0815/22-0815.pdf|Open Access]]
  - [[https://scholar.google.co.jp/citations?user=RXrwOgoAAAAJ&hl=ja|Hiroyuki Sakai]], [[en:iiduka:|Hideaki Iiduka]]: **[[http://manu71.magtech.com.cn/Jwk3_pjo/EN/10.61208/pjo-2024-005|Convergence of Riemannian Stochastic Gradient Descent on Hadamard Manifold]]**, [[http://manu71.magtech.com.cn/Jwk3_pjo/EN/home|Pacific Journal of Optimization]]: Special issue: Dedicated to Prof. Masao Fukushima on the occasion of his 75th birthday 20 (4): 743--767 (2024) [[http://www.yokohamapublishers.jp/online-p/PJO/vol20/pjov20n4p743.pdf|Open Access]]
  - [[https://scholar.google.co.jp/citations?user=RXrwOgoAAAAJ&hl=ja|Hiroyuki Sakai]], [[en:iiduka:|Hideaki Iiduka]]: **[[https://link.springer.com/article/10.1007/s10957-024-02449-8|Modified Memoryless Spectral-scaling Broyden Family on Riemannian Manifolds]]**, [[https://link.springer.com/journal/10957|Journal of Optimization Theory and Applications]] 202: 834--853 (2024) [[https://link.springer.com/article/10.1007/s10957-024-02449-8|Open Access]]
  - [[en:iiduka:|Hideaki Iiduka]]: **[[https://link.springer.com/article/10.1007/s11075-023-01575-0|Theoretical Analysis of Adam using Hyperparameters Close to One without Lipschitz Smoothness]]**, [[https://www.springer.com/journal/11075|Numerical Algorithms]] 95: 383--421 (2024) {{:iiduka:iiduka2023.pdf|PDF}} [[https://rdcu.be/df4ce|Springer Nature SharedIt]]