Department of Applied Mathematics & Physics, Kyoto University
Technical Report 2016-001 (March 27, 2016)
A Forward-Backward Splitting Method with Component-wise Lazy Evaluation for Online Structured Convex Optimization
by Yukihiro Togari and Nobuo Yamashita
We consider large-scale optimization problems whose objective functions
are expressed as sums of convex functions. Such problems appear in
various fields, such as statistics and machine learning. One method for
solving such problems is the online gradient method, which exploits the
gradients of only a few of the component functions at each iteration.
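As a minimal sketch of the online gradient step described above (the function names and the diminishing step-size rule are our own illustration, not necessarily the report's choices):

```python
import numpy as np

def online_gradient_method(component_grads, x0, step=0.5):
    """Online gradient sketch: at iteration t, only the gradient of the
    single component function f_t is used, not the full objective."""
    x = x0.copy()
    for t, grad_ft in enumerate(component_grads, start=1):
        eta = step / np.sqrt(t)  # diminishing step size, a common choice
        x -= eta * grad_ft(x)    # update using one component's gradient
    return x
```

Each iteration thus touches one component function, which is what makes the method attractive when the objective is a sum of very many terms.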
In this paper, we focus on the sparsity of two vectors: the solution and
the gradient of each component function. To obtain a sparse solution,
one usually adds an L1 regularization term to the objective function.
Because the L1 regularization term involves all decision variables, the
usual online gradient methods update all variables even when each
gradient is sparse. To reduce the per-iteration computation over all
variables, we can employ lazy evaluation, which updates only the
variables corresponding to the nonzero components of the gradient,
ignoring the L1 regularization terms and evaluating the ignored terms
later. Because lazy evaluation does not exploit the information of all
terms at every iteration, an online algorithm using it may converge
slowly.
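As a rough sketch of lazy evaluation (the helper names and bookkeeping are our own illustration, not the report's notation), the L1 shrinkage a coordinate skipped while idle can be deferred and applied in one step when its gradient next becomes nonzero, using the fact that soft-thresholdings compose: S_a(S_b(v)) = S_{a+b}(v).

```python
import numpy as np

def soft_threshold(v, tau):
    """Proximal operator of tau * |v| (soft-thresholding)."""
    return np.sign(v) * np.maximum(np.abs(v) - tau, 0.0)

def lazy_l1_step(x, last_update, t, idx, grad_vals, eta, lam):
    """One lazy-evaluation step: only coordinates in `idx` (the nonzero
    gradient components at iteration t) are touched.  The L1 shrinkage a
    coordinate accumulated while idle is folded into a single
    soft-threshold, since S_a(S_b(v)) = S_{a+b}(v)."""
    for j, g in zip(idx, grad_vals):
        pending = t - last_update[j]   # iterations since j was last touched
        x[j] = soft_threshold(x[j] - eta * g, pending * eta * lam)
        last_update[j] = t
    return x
```

Coordinates outside `idx` are left untouched, so the per-iteration cost scales with the number of nonzero gradient components rather than the full dimension.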
In this paper, we describe a forward-backward splitting type method,
which exploits the L1 terms corresponding to the nonzero components of
the gradient at each iteration. We show that its regret bound is
O(\sqrt{T}), where T is the number of component functions in the
objective function. Moreover, we present numerical experiments with the
proposed method, which demonstrate that it gives promising results.
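For reference, the regret behind the O(\sqrt{T}) bound is presumably the standard online-learning quantity (this definition is our gloss, not quoted from the report): the cumulative regularized loss of the iterates x_t against the best fixed point x in the feasible set X,

```latex
% Regret after T rounds: iterates x_t versus the best fixed x in X
R(T) = \sum_{t=1}^{T} \bigl( f_t(x_t) + \lambda \|x_t\|_1 \bigr)
     - \min_{x \in X} \sum_{t=1}^{T} \bigl( f_t(x) + \lambda \|x\|_1 \bigr)
```

so an O(\sqrt{T}) bound means the average regret R(T)/T vanishes as T grows.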