The limited-memory BFGS (L-BFGS) algorithm is a popular method for
solving large-scale unconstrained minimization problems. Because L-BFGS
performs a line search satisfying the Wolfe conditions, it may require
many function evaluations on ill-posed problems. To overcome this
difficulty, we propose a method that combines L-BFGS with the
regularized Newton method. The per-iteration computational cost of the
proposed method is the same as that of the original L-BFGS method. We
show that the proposed method is globally convergent under the usual
assumptions. Moreover, we present numerical results demonstrating the
robustness of the proposed method.
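The core idea, an L-BFGS direction with a regularization term taking the place of a potentially expensive Wolfe line search, can be sketched as follows. This is a minimal illustration under stated assumptions, not the paper's exact algorithm: the function `two_loop_direction`, the choice of folding a fixed parameter `mu` into the initial Hessian scaling, and the unit-step update are all assumptions made for this sketch.

```python
import numpy as np

def two_loop_direction(g, s_list, y_list, mu):
    """L-BFGS two-loop recursion with a regularization parameter mu
    folded into the initial Hessian scaling (an illustrative choice;
    the paper's precise regularization scheme may differ)."""
    q = g.copy()
    rhos = [1.0 / y.dot(s) for s, y in zip(s_list, y_list)]
    alphas = []
    # Backward pass over stored (s, y) curvature pairs, newest first.
    for s, y, rho in zip(reversed(s_list), reversed(y_list), reversed(rhos)):
        a = rho * s.dot(q)
        alphas.append(a)
        q -= a * y
    # Initial scaling gamma = s^T y / y^T y; apply (1/gamma + mu)^{-1} I,
    # so larger mu shrinks the step, mimicking a regularized Newton step.
    gamma = (s_list[-1].dot(y_list[-1]) / y_list[-1].dot(y_list[-1])
             if s_list else 1.0)
    r = q * (gamma / (1.0 + mu * gamma))
    # Forward pass, oldest first.
    for (s, y, rho), a in zip(zip(s_list, y_list, rhos), reversed(alphas)):
        b = rho * y.dot(r)
        r += (a - b) * s
    return -r  # search direction

# Tiny demo: minimize f(x) = 0.5 * ||x||^2 (gradient g(x) = x),
# taking the regularized step directly with no line search.
x = np.array([1.0, -2.0])
s_list, y_list = [], []
for _ in range(50):
    g = x
    if np.linalg.norm(g) < 1e-10:
        break
    d = two_loop_direction(g, s_list, y_list, mu=0.1)
    x_new = x + d
    s, y = x_new - x, x_new - x   # for this quadratic, y = g_new - g = s
    if s.dot(y) > 1e-16:          # keep only curvature-positive pairs
        s_list.append(s)
        y_list.append(y)
        if len(s_list) > 5:       # limited memory: drop the oldest pair
            s_list.pop(0)
            y_list.pop(0)
    x = x_new
```

Because the step length is fixed by the regularization rather than searched for, each iteration needs a single gradient evaluation, which is the motivation for avoiding the Wolfe line search on problems where function evaluations are expensive.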