Let us consider the following problem:
\begin{align*}
\text{Minimize } f(x):= \sum_{i\in \mathcal{I}} f^{(i)} (x) \text{ subject to } x\in C := \bigcap_{i\in \mathcal{I}} C^{(i)},
\end{align*}
where $H$ is a real Hilbert space, $f^{(i)} \colon H \to \mathbb{R}$ $(i\in \mathcal{I} := \{1,2,\ldots,I\})$, and
$C^{(i)}$ $(\subset H)$ $(i\in \mathcal{I})$ is closed and convex with $C \neq \emptyset$.
In particular, it is assumed that each $C^{(i)}$ can be expressed as the fixed point set of a nonexpansive mapping. This covers the case where the metric projection onto $C^{(i)}$ cannot be computed efficiently (e.g., $C^{(i)}$ is the intersection of many closed convex sets or the set of minimizers of a convex function). Please see the Fixed Point Algorithms page for the details of fixed point sets.
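For instance, suppose $C^{(i)} = \bigcap_{k=1}^{K} D_k^{(i)}$, where each $D_k^{(i)}$ is a simple closed convex set whose metric projection $P_{D_k^{(i)}}$ can be computed easily (the sets $D_k^{(i)}$ and the number $K$ here are illustrative, not part of the problem above). Then a standard construction gives a computable nonexpansive mapping whose fixed point set is exactly $C^{(i)}$:
\begin{align*}
T^{(i)} := P_{D_1^{(i)}} P_{D_2^{(i)}} \cdots P_{D_K^{(i)}}, \quad \mathrm{Fix}\left(T^{(i)}\right) := \left\{ x\in H \colon T^{(i)}(x) = x \right\} = \bigcap_{k=1}^{K} D_k^{(i)} = C^{(i)},
\end{align*}
where the last equality holds because $C^{(i)} \supset C \neq \emptyset$. Similarly, the set of minimizers of a proper lower semicontinuous convex function coincides with the fixed point set of its (nonexpansive) proximity operator.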
Here, we divide the problem into four categories and focus on the following algorithms for solving it. Please see the Publications page, where the PDF files of our papers are available.
Many algorithms have been proposed for solving convex optimization problems (e.g., interior-point methods and projection methods). In 2001, Yamada proposed the hybrid steepest descent method for smooth convex optimization over the fixed point sets of nonexpansive mappings. We have presented algorithms based on the hybrid steepest descent method, together with their convergence analyses; the corresponding papers are listed on the Publications page.
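As an illustration, the following Python sketch implements the basic hybrid steepest descent iteration $x_{n+1} := T(x_n) - \lambda_{n+1}\mu \nabla f(T(x_n))$ on a toy problem; the choices of $f$, $T$, $\mu$, and the step sizes $\lambda_n$ are our own illustrative assumptions, not those used in the papers above.

```python
import numpy as np

def hybrid_steepest_descent(grad_f, T, x0, mu=0.5, n_iter=1000):
    """x_{n+1} = T(x_n) - lambda_{n+1} * mu * grad_f(T(x_n)) with
    diminishing, non-summable step sizes lambda_n."""
    x = x0
    for n in range(n_iter):
        lam = 1.0 / (n + 2)           # lambda_n -> 0, sum lambda_n = infinity
        y = T(x)                      # move toward the fixed point set Fix(T)
        x = y - lam * mu * grad_f(y)  # steepest descent correction
    return x

# Illustrative problem: minimize f(x) = ||x - b||^2 / 2 over the unit ball,
# represented as Fix(T) for the (nonexpansive) projection T onto the ball.
b = np.array([2.0, 1.0])
grad_f = lambda x: x - b
T = lambda x: x / max(1.0, np.linalg.norm(x))
print(hybrid_steepest_descent(grad_f, T, np.zeros(2)))  # approx b / ||b||
```

Note that the iterate approaches feasibility because $T$ is applied at every step, so no metric projection onto $\mathrm{Fix}(T)$ itself is ever required.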
The incremental subgradient method and the broadcast optimization method are useful for decentralized convex optimization. However, they can be applied only to the case where $C^{(i)} = C$ $(i\in \mathcal{I})$ and $C$ is simple in the sense that the projection onto $C$ can be computed easily (e.g., $C$ is a half-space). We have proposed decentralized optimization algorithms for solving convex optimization problems with more complicated $C^{(i)}$ (e.g., $C^{(i)}$ is the intersection of simple closed convex sets), with applications including the bandwidth allocation problem, the storage allocation problem, and ensemble learning.
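For concreteness, the following is a minimal sketch of the cyclic incremental subgradient iteration $x \leftarrow P_C(x - \alpha_k g^{(i)}(x))$ under the simple-constraint assumption above ($C$ a half-space); the objective, half-space, and step sizes are illustrative assumptions of ours.

```python
import numpy as np

def project_halfspace(x, a, beta):
    """Metric projection onto the half-space {x : <a, x> <= beta}."""
    s = a @ x - beta
    return x if s <= 0 else x - (s / (a @ a)) * a

def incremental_subgradient(subgrads, project, x0, n_epochs=200):
    """One pass per epoch: x <- P_C(x - alpha_k * g_i(x)), i = 1, ..., I."""
    x = x0
    for k in range(n_epochs):
        alpha = 1.0 / (k + 1)          # diminishing step size
        for g in subgrads:             # cycle through the components f_i
            x = project(x - alpha * g(x))
    return x

# Illustrative problem: f_i(x) = |x_1 - i| + |x_2| (i = 1, 2, 3)
# over the half-space {x : x_1 <= 1}; the solution is (1, 0).
subgrads = [lambda x, i=i: np.array([np.sign(x[0] - i), np.sign(x[1])])
            for i in (1.0, 2.0, 3.0)]
project = lambda x: project_halfspace(x, np.array([1.0, 0.0]), 1.0)
print(incremental_subgradient(subgrads, project, np.zeros(2)))
```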
Proximal point algorithms and subgradient methods are useful for solving nonsmooth convex optimization problems. We have developed algorithms based on these methods; a sketch of the basic proximal point iteration follows.
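Here is a minimal Python sketch of the proximal point iteration $x_{k+1} := \mathrm{prox}_{c f}(x_k)$ for a simple nonsmooth convex $f$; the choice $f(x) = \|x - b\|_1$, the vector $b$, and the parameter $c$ are illustrative assumptions, not taken from the papers below.

```python
import numpy as np

def soft_threshold(v, t):
    """prox of t * ||.||_1 evaluated at v (componentwise shrinkage)."""
    return np.sign(v) * np.maximum(np.abs(v) - t, 0.0)

def proximal_point(prox, x0, n_iter=50):
    """Proximal point algorithm: iterate x <- prox(x)."""
    x = x0
    for _ in range(n_iter):
        x = prox(x)
    return x

# Illustrative problem: f(x) = ||x - b||_1, whose prox is soft-thresholding
# shifted by b; the iterates converge to the minimizer b.
b = np.array([1.0, -2.0, 0.5])
c = 0.3                                        # proximal parameter c > 0
prox = lambda x: b + soft_threshold(x - b, c)  # prox_{c ||. - b||_1}(x)
print(proximal_point(prox, np.zeros(3)))
```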
The following are our previously reported results:
* K. Hishinuma and H. Iiduka: Fixed Point Quasiconvex Subgradient Method, European Journal of Operational Research, Vol. 282, No. 2, 428–437, 2020