Optimization Algorithms

Let us consider the following problem: \begin{align*} \text{Minimize } f(x):= \sum_{i\in \mathcal{I}} f^{(i)} (x) \text{ subject to } x\in C := \bigcap_{i\in \mathcal{I}} C^{(i)}, \end{align*} where $H$ is a real Hilbert space, $f^{(i)} \colon H \to \mathbb{R}$ $(i\in \mathcal{I} := \{1,2,\ldots,I\})$, and each $C^{(i)}$ $(\subset H)$ $(i\in \mathcal{I})$ is closed and convex with $C \neq \emptyset$. In particular, it is assumed that $C^{(i)}$ can be expressed as the fixed point set of a nonexpansive mapping. This covers the case in which the metric projection onto $C^{(i)}$ cannot be efficiently computed (e.g., $C^{(i)}$ is the intersection of many closed convex sets or the set of minimizers of a convex function).1) Please see the Fixed Point Algorithms page for details on fixed point sets.
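As a concrete illustration (hypothetical, not taken from this page), the following Python sketch expresses the intersection of two simple closed convex sets as the fixed point set of a nonexpansive mapping: for half-spaces $C^{(1)}$ and $C^{(2)}$ with $C^{(1)} \cap C^{(2)} \neq \emptyset$, the composition $T := P_{C^{(1)}} P_{C^{(2)}}$ of their metric projections is nonexpansive and satisfies $\mathrm{Fix}(T) = C^{(1)} \cap C^{(2)}$.

  import numpy as np

  def proj_halfspace(x, a, b):
      """Metric projection onto the half-space {y : <a, y> <= b}."""
      s = a @ x - b
      return x if s <= 0 else x - (s / (a @ a)) * a

  # Two hypothetical half-spaces: C1 = {x : x_1 <= 1}, C2 = {x : x_2 <= 2}.
  a1, b1 = np.array([1.0, 0.0]), 1.0
  a2, b2 = np.array([0.0, 1.0]), 2.0

  # T = P_C1 composed with P_C2 is nonexpansive and, since C1 and C2
  # intersect, Fix(T) = C1 \cap C2.
  def T(x):
      return proj_halfspace(proj_halfspace(x, a2, b2), a1, b1)

  # A Krasnosel'skii-Mann iteration x_{n+1} = (x_n + T(x_n))/2 finds a fixed point.
  x = np.array([5.0, 5.0])
  for _ in range(100):
      x = 0.5 * (x + T(x))
  print(x)  # tends to a point of C1 \cap C2, here [1., 2.]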

Here, we divide the problem into three categories.

  1. Smooth Convex Optimization Problem:
    This category assumes that each $f^{(i)}$ $(i\in \mathcal{I})$ is smooth and convex. It includes the signal recovery, beamforming, storage allocation, optimal control, and bandwidth allocation problems.
  2. Nonsmooth Convex Optimization Problem:
    This category assumes that each $f^{(i)}$ $(i\in \mathcal{I})$ is nonsmooth and convex. It includes the signal recovery, ensemble learning, minimal antenna-subset selection, and bandwidth allocation problems.
  3. Smooth Nonconvex Optimization Problem:
    This category assumes that each $f^{(i)}$ $(i\in \mathcal{I})$ is smooth and nonconvex. It includes practical problems such as power control and bandwidth allocation.

We focus on the following algorithms for solving the above problems.

  1. Centralized Optimization Algorithms:
    These algorithms can be implemented when the explicit forms of all the $f^{(i)}$s and $C^{(i)}$s are known, i.e., when $f:= \sum_{i\in \mathcal{I}} f^{(i)}$ and $C:= \bigcap_{i\in \mathcal{I}} C^{(i)}$ can be used directly.
  2. Decentralized Optimization Algorithms:
    These algorithms can be implemented through the cooperation of all users when each user $i$ $(i\in \mathcal{I})$ has its own private $f^{(i)}$ and $C^{(i)}$ and cannot use the private information of the other users.

Please see the Publications page, where the PDF files of our papers are available.

Many algorithms have been presented for solving convex optimization problems (e.g., interior-point methods and projection methods). In 2001, Yamada proposed the hybrid steepest descent method for smooth convex optimization over the fixed point sets of nonexpansive mappings. We have presented algorithms based on the hybrid steepest descent method, together with their convergence analyses; our previously reported results are listed below, after an illustrative sketch of the basic iteration.
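As a rough illustration only (the objective, the constraint, and all parameters are hypothetical), the sketch below runs one standard form of the hybrid steepest descent iteration, $x_{n+1} := T(x_n) - \mu \lambda_n \nabla f(T(x_n))$ with diminishing steps $\lambda_n := 1/n$, where $T$ is the metric projection onto the closed unit ball and $f(x) := \|x - d\|^2/2$.

  import numpy as np

  # Hypothetical constraint: T is the metric projection onto the closed unit
  # ball, a nonexpansive mapping whose fixed point set is the ball itself.
  def T(x):
      r = np.linalg.norm(x)
      return x if r <= 1.0 else x / r

  d = np.array([2.0, 0.0])
  grad_f = lambda x: x - d   # f(x) = ||x - d||^2 / 2: smooth, strongly convex

  mu = 1.0
  x = np.zeros(2)
  for n in range(1, 2001):
      y = T(x)
      x = y - (mu / n) * grad_f(y)  # x_{n+1} = T(x_n) - mu*lambda_n*grad f(T(x_n))
  print(x)  # tends to the minimizer of f over Fix(T), here [1., 0.]

With these choices, $\nabla f$ is Lipschitz continuous and strongly monotone, which is the standard setting in which the method converges to the unique minimizer of $f$ over $\mathrm{Fix}(T)$.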

The incremental subgradient method and the broadcast optimization method are useful for decentralized convex optimization. However, they can be applied only to the case where $C^{(i)} = C$ $(i\in \mathcal{I})$ and $C$ is simple in the sense that the projection onto $C$ can be computed easily (e.g., $C$ is a half-space). We have proposed decentralized optimization algorithms for solving convex optimization problems with more complicated $C^{(i)}$ (e.g., $C^{(i)}$ is the intersection of simple closed convex sets), which include the bandwidth allocation and storage allocation problems.
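To fix ideas, the following sketch runs the classical incremental subgradient method in the simple setting just described ($C^{(i)} = C$ for all $i\in \mathcal{I}$, with an easily computed projection onto $C$); the users' objectives and all numbers are hypothetical.

  import numpy as np

  # Hypothetical network of I = 3 users; user i privately holds f_i(x) = |x - c_i|.
  # The common constraint set C = [0, 10] is simple: its projection is a clamp.
  c = [1.0, 4.0, 6.0]
  subgrad = lambda x, ci: float(np.sign(x - ci))  # a subgradient of |x - c_i|
  P_C = lambda x: min(max(x, 0.0), 10.0)          # metric projection onto C

  x = 9.0
  for n in range(1, 1001):
      for ci in c:                                # pass the estimate from user to user
          x = P_C(x - (1.0 / n) * subgrad(x, ci))
  print(x)  # approaches the minimizer of sum_i |x - c_i| over C (the median, 4)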

Proximal point algorithms and subgradient methods are useful for solving nonsmooth convex optimization problems. The results of the algorithms based on these methods are given below, after a brief illustrative sketch.
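For instance (an illustrative choice, not one of the reported algorithms), the proximal point algorithm iterates $x_{n+1} := \mathrm{prox}_{\lambda f}(x_n)$; for $f = \|\cdot\|_1$ the proximity operator is componentwise soft thresholding, so the iteration can be sketched as follows.

  import numpy as np

  # prox of lambda*||.||_1 is componentwise soft thresholding.
  def prox_l1(x, lam):
      return np.sign(x) * np.maximum(np.abs(x) - lam, 0.0)

  # Proximal point iteration x_{n+1} = prox_{lam f}(x_n) for f = ||.||_1.
  x = np.array([3.0, -1.5, 0.2])
  for _ in range(20):
      x = prox_l1(x, 0.5)
  print(x)  # converges to the minimizer of ||.||_1, the zero vector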

The following are our previously reported results.


1)
The metric projection onto $C^{(i)}$, denoted by $P_{C^{(i)}}$, is defined for all $x\in H$ by $P_{C^{(i)}}(x) \in C^{(i)}$ and $\|x - P_{C^{(i)}}(x)\| = \inf_{y\in C^{(i)}} \|x-y\|$.