Department of Applied Mathematics & Physics, Kyoto University

Technical Report 2022-002 (May 20, 2022)

A proximal gradient method with Bregman distance in multi-objective optimization
by Kangming Chen, Ellen H. Fukuda, and Nobuo Yamashita



In the last two decades, many descent algorithms have been extended to solve multi-objective optimization problems. Recently, the multi-objective proximal gradient method was proposed for problems in which each objective function is written as the sum of a differentiable function and a proper convex, but not necessarily differentiable, one. However, it requires the differentiable part of each objective to have a Lipschitz continuous gradient, which limits its applicability. Moreover, the method solves subproblems using Euclidean distances only. In single-objective proximal gradient-type methods, the so-called Bregman scheme is common: the Euclidean distance is replaced by the more general and flexible Bregman distance, and, combined with the notion of relative smoothness, the resulting assumption is less demanding than Lipschitz continuity of the gradients. Thus, in this work, we propose a proximal gradient method with Bregman distance for multi-objective optimization. At each iteration, the method computes the search direction by solving a subproblem that involves the Bregman distance; depending on the choice of the Bregman kernel, this subproblem can be solved easily. We also consider two stepsize strategies: a constant stepsize and a backtracking procedure. In both cases, we prove convergence of the generated sequence to a Pareto stationary point and analyze the convergence rate through merit functions. Specifically, we obtain convergence rates of $O(\sqrt{1/k})$ for non-convex, $O(1/k)$ for convex, and $O(r^k)$ (for some $r \in (0, 1)$) for strongly convex problems.
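
As a rough sketch of the iteration described above (the precise formulation is given in the report; the form below follows the standard multi-objective proximal gradient construction, assuming each objective is written as $F_i = g_i + h_i$ with $g_i$ differentiable and $h_i$ proper convex, $\phi$ a strictly convex, differentiable Bregman kernel, and $\lambda_k > 0$ the stepsize): the Bregman distance induced by $\phi$ is

$$ D_\phi(y, x) = \phi(y) - \phi(x) - \langle \nabla \phi(x), \, y - x \rangle, $$

and the next iterate is obtained from a subproblem of the form

$$ x^{k+1} \in \operatorname*{argmin}_{y} \ \max_{1 \le i \le m} \big\{ \langle \nabla g_i(x^k), \, y - x^k \rangle + h_i(y) - h_i(x^k) \big\} + \frac{1}{\lambda_k} D_\phi(y, x^k). $$

With the Euclidean kernel $\phi(x) = \tfrac{1}{2}\|x\|^2$, the Bregman distance reduces to $\tfrac{1}{2}\|y - x\|^2$ and the subproblem of the earlier multi-objective proximal gradient method is recovered; relative smoothness of each $g_i$ with respect to $\phi$ (i.e., $L\phi - g_i$ convex for some $L > 0$, in the sense of the relative-smoothness literature) then plays the role of the Lipschitz-gradient assumption.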