en:intro:researches:optimization [2020/02/21 11:23] – Hideaki IIDUKA
In particular, it is assumed that $C^{(i)}$ can be expressed as the **fixed point set** of a **nonexpansive mapping**. This implies the metric projection onto $C^{(i)}$ cannot be efficiently computed (e.g., $C^{(i)}$ is the intersection of many closed convex sets or the set of minimizers of a convex function).((The metric projection onto $C^{(i)}$, denoted by $P_{C^{(i)}}$, is defined for all $x\in H$ by $P_{C^{(i)}}(x) \in C^{(i)}$ and $\|x - P_{C^{(i)}}(x)\| = \inf_{y\in C^{(i)}} \|x-y\|$.)) Please see the [[en:intro:researches:fixedpoint|Fixed Point Algorithms]] page for the details of fixed point sets.\\
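For contrast, when a constraint set is "simple," its metric projection $P_C$ has a closed form. The following minimal Python sketch (the sets and function names are illustrative, not taken from this page) computes the projections onto a closed ball and a half-space:

```python
import numpy as np

def project_ball(x, center, radius):
    """Metric projection onto the closed ball B(center, radius)."""
    d = x - center
    dist = np.linalg.norm(d)
    if dist <= radius:
        return x  # x already lies in the ball
    return center + (radius / dist) * d  # nearest point on the boundary

def project_halfspace(x, a, b):
    """Metric projection onto the half-space {y : <a, y> <= b}."""
    violation = a @ x - b
    if violation <= 0.0:
        return x  # constraint already satisfied
    return x - (violation / (a @ a)) * a  # shift along the normal vector a
```

Projections like these are cheap building blocks; the projection onto an intersection of many such sets generally has no closed form, which is why the constraint sets here are treated as fixed point sets instead.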
Here, we divide the problem into the following four categories.
- **Smooth Convex Optimization Problem**:\\ It assumes that $f^{(i)}$ $(i\in \mathcal{I})$ is smooth and convex. The problem includes [[http://ieeexplore.ieee.org/xpl/login.jsp?tp=&arnumber=1206687&url=http%3A%2F%2Fieeexplore.ieee.org%2Fiel5%2F78%2F27152%2F01206687.pdf%3Farnumber%3D1206687|signal recovery problem]], [[http://ieeexplore.ieee.org/xpl/login.jsp?tp=&arnumber=4291862&url=http%3A%2F%2Fieeexplore.ieee.org%2Fiel5%2F78%2F4291841%2F04291862.pdf%3Farnumber%3D4291862|beamforming problem]], [[http://ieeexplore.ieee.org/xpl/login.jsp?tp=&arnumber=4604754&url=http%3A%2F%2Fieeexplore.ieee.org%2Fiel5%2F49%2F4604726%2F04604754.pdf%3Farnumber%3D4604754|storage allocation problem]], [[http://epubs.siam.org/doi/abs/10.1137/110850542|optimal control problem]], and [[http://epubs.siam.org/doi/abs/10.1137/120866877|bandwidth allocation problem]].
- **Nonsmooth Convex Optimization Problem**:\\ It assumes that $f^{(i)}$ $(i\in \mathcal{I})$ is nonsmooth and convex. The problem includes [[http://ieeexplore.ieee.org/xpl/articleDetails.jsp?arnumber=4407760&filter%3DAND%28p_IS_Number%3A4407756%29|signal recovery problem]], [[https://ieeexplore.ieee.org/document/8744480|ensemble learning]], [[http://link.springer.com/chapter/10.1007/978-1-4419-9569-8_17|minimal antenna-subset selection problem]], and [[https://ieeexplore.ieee.org/document/8584116|bandwidth allocation problem]].
- **Smooth Nonconvex Optimization Problem**:\\ It assumes that $f^{(i)}$ $(i\in \mathcal{I})$ is smooth and nonconvex. The problem includes practical applications such as [[http://link.springer.com/article/10.1007%2Fs10107-010-0427-x#|power control]] and [[http://epubs.siam.org/doi/abs/10.1137/110849456|bandwidth allocation]].
- **Nonsmooth Nonconvex Optimization Problem**:\\ It assumes that $f^{(i)}$ $(i\in \mathcal{I})$ is nonsmooth and nonconvex. The problem includes practical applications such as [[https://www.sciencedirect.com/science/article/abs/pii/S037722171400424X|fractional programming]].
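In every category, the underlying task is the same; a generic form consistent with the notation above (each user $i\in\mathcal{I}$ holds its own objective $f^{(i)}$ and constraint set $C^{(i)}$) is:

```latex
\min_{x \in \bigcap_{i\in\mathcal{I}} C^{(i)}} \; \sum_{i\in\mathcal{I}} f^{(i)}(x)
```

The four categories differ only in the smoothness and convexity assumed of the $f^{(i)}$.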
We focus on the following algorithms for solving the above problems.
==== Decentralized Optimization Algorithms ====
[[http://epubs.siam.org/doi/abs/10.1137/S1052623499362111|The incremental subgradient method]] and [[http://iopscience.iop.org/0266-5611/24/6/065014|the broadcast optimization method]] are useful for decentralized convex optimization. However, they can be applied only to the case where $C^{(i)} = C$ $(i\in \mathcal{I})$ and $C$ is simple in the sense that the projection onto $C$ can be easily computed (e.g., $C$ is a half-space). We have proposed decentralized optimization algorithms for solving convex optimization problems with more complicated $C^{(i)}$ (e.g., $C^{(i)}$ is the intersection of simple, closed convex sets), which include the [[http://epubs.siam.org/doi/abs/10.1137/120866877|bandwidth allocation problem]], the [[http://ieeexplore.ieee.org/xpl/login.jsp?tp=&arnumber=4604754&url=http%3A%2F%2Fieeexplore.ieee.org%2Fiel5%2F49%2F4604726%2F04604754.pdf%3Farnumber%3D4604754|storage allocation problem]], and [[https://ieeexplore.ieee.org/document/8744480|ensemble learning]].
* [[:en:iiduka:|H. Iiduka]]: [[https://ieeexplore.ieee.org/document/8744480|Stochastic Fixed Point Optimization Algorithm for Classifier Ensemble]], IEEE Transactions on Cybernetics (accepted)
* [[:en:iiduka:|H. Iiduka]]: [[https://link.springer.com/article/10.1007/s11081-019-09440-7|Decentralized Hierarchical Constrained Convex Optimization]], Optimization and Engineering (accepted)
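For intuition, here is a minimal Python sketch of the classical incremental subgradient baseline mentioned above, for minimizing $\sum_{i} f^{(i)}(x)$ over a single simple set $C$; the function names, step-size rule, and toy setting are assumptions for illustration, not the authors' implementation:

```python
import numpy as np

def incremental_subgradient(subgrads, project_C, x0, passes=200):
    """Incremental subgradient method for min sum_i f_i(x) over a simple set C.

    subgrads:  one subgradient oracle per user i (returns some g in the
               subdifferential of f_i at x).
    project_C: metric projection onto the common constraint set C
               (assumed cheap to compute, e.g., a box or half-space).
    Each pass visits the users one at a time with a diminishing step size.
    """
    x = x0
    for k in range(1, passes + 1):
        step = 1.0 / k  # diminishing step: sum diverges, step -> 0
        for g in subgrads:  # incremental update, one user per step
            x = project_C(x - step * g(x))
    return x
```

For example, with $f^{(i)}(x)=|x-a_i|$ and $C=[-10,10]$, the iterates approach the median of the $a_i$. The methods cited above remove this baseline's key restriction that all users share one simple set $C$.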
The following are the results of the algorithms based on the above methods.
==== Decentralized Optimization Algorithms ====
* K. Shimizu, K. Hishinuma, and [[:en:iiduka:|H. Iiduka]]: Parallel Computing Proximal Method for Nonsmooth Convex Optimization with Fixed Point Constraints of Quasi-nonexpansive Mappings, submitted
* H. Oishi, Y. Kobayashi, and [[:en:iiduka:|H. Iiduka]]: Incremental Proximal Method for Nonsmooth Convex Optimization with Fixed Point Constraints of Quasi-nonexpansive Mappings, Linear and Nonlinear Analysis (accepted)
* [[:en:iiduka:|H. Iiduka]]: [[https://ieeexplore.ieee.org/document/8584116|Distributed Optimization for Network Resource Allocation with Nonsmooth Utility Functions]], IEEE Transactions on Control of Network Systems (accepted)
* K. Hishinuma and [[en:iiduka:|H. Iiduka]]: Convergence Analysis of Incremental and Parallel Line Search Subgradient Methods in Hilbert Space, Journal of Nonlinear and Convex Analysis: Special Issue Dedicated to Wataru Takahashi on the Occasion of His 75th Birthday, Vol. 20, No. 9, pp.1937-1947, 2019.
* K. Hishinuma and [[:en:iiduka:|H. Iiduka]]: [[https://www.frontiersin.org/articles/10.3389/frobt.2019.00077/full|Incremental and Parallel Machine Learning Algorithms with Automated Learning Rate Adjustments]], Frontiers in Robotics and AI: Resolution of Limitations of Deep Learning to Develop New AI Paradigms, Vol. 6, Article 77, 2019.
* [[:en:iiduka:|H. Iiduka]]: [[http://www.tandfonline.com/doi/full/10.1080/10556788.2018.1425860|Two Stochastic Optimization Algorithms for Convex Optimization with Fixed Point Constraints]], Optimization Methods and Software, Vol. 34, No. 4, pp.731-757, 2019.
* K. Sakurai, T. Jimba, and [[:iiduka:|H. Iiduka]]: [[http://jnva.biemdas.com/archives/843|Iterative Methods for Parallel Convex Optimization with Fixed Point Constraints]], Journal of Nonlinear and Variational Analysis, Vol. 3, No. 2, pp.115-126, 2019.
* [[http://gyoseki1.mind.meiji.ac.jp/mjuhp/KgApp?kyoinId=ymkdgygyggy&Language=2|Y. Hayashi]] and [[:en:iiduka:|H. Iiduka]]: [[http://www.sciencedirect.com/science/article/pii/S0925231217313486|Optimality and Convergence for Convex Ensemble Learning with Sparsity and Diversity Based on Fixed Point Optimization]], Neurocomputing, Vol. 273, pp.367-372, 2018.
* [[en:iiduka:|H. Iiduka]]: [[http://www.tandfonline.com/doi/full/10.1080/02331934.2016.1252914|Almost Sure Convergence of Random Projected Proximal and Subgradient Algorithms for Distributed Nonsmooth Convex Optimization]], Optimization, Vol. 66, No. 1, pp.35-59, 2017.
==== Decentralized Optimization Algorithms ====
* K. Hishinuma and [[en:iiduka:|H. Iiduka]]: [[https://doi.org/10.1016/j.ejor.2019.09.037|Fixed Point Quasiconvex Subgradient Method]], European Journal of Operational Research, Vol. 282, No. 2, pp.428-437, 2020.
* [[en:iiduka:|H. Iiduka]]: [[http://www.ybook.co.jp/|Distributed Iterative Methods for Solving Nonmonotone Variational Inequality over the Intersection of Fixed Point Sets of Nonexpansive Mappings]], Pacific Journal of Optimization, Vol. 10, No. 4, pp. 691-713, 2014.