In this paper, we propose a regularized Newton method without line search. The proposed method controls a regularization parameter instead of a step size in order to guarantee global convergence. We demonstrate that it is closely related to the trust-region (TR) Newton method when the Hessian of the objective function is positive definite.
Moreover, it solves only linear equations, rather than nonconvex subproblems, at each iteration. Thus, the proposed algorithm can be regarded as a desired solution method
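As a hedged illustration (the precise subproblem and update rule may differ from those in the paper), a regularized Newton method of this type typically computes the search direction from a linear system in which the regularization parameter $\mu_k > 0$ plays the role that a step size or trust-region radius would otherwise play:
\[
  \bigl( \nabla^2 f(x_k) + \mu_k I \bigr)\, d_k = - \nabla f(x_k),
  \qquad x_{k+1} = x_k + d_k .
\]
Enlarging $\mu_k$ shortens and safeguards the step, while shrinking $\mu_k$ recovers the pure Newton direction.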
mentioned above. We show that the proposed algorithm has the following convergence properties. (a) It converges globally under appropriate
conditions. (b) It achieves a superlinear rate of convergence under the local error bound condition.
(c) Its global complexity bound, defined as an upper bound on the first iteration $k$ such that $\| \nabla f(x_k) \| \le
\epsilon$, is $O(\epsilon^{-2})$ when $f$ is nonconvex,
$O(\epsilon^{-\frac{5}{3}})$ when $f$ is convex,
and $O(\epsilon^{-1})$ when $f$ is strongly convex. Moreover, we report numerical results showing that the proposed algorithm is competitive with existing Newton-type methods, and hence is very promising.