Abstract
For a non-convex function f : R^n → R with gradient g and Hessian H, define a step vector p(μ,x) as a function of scalar parameter μ and position vector x by the equation (H(x) + μI)p(μ, x) = −g(x).
Under mild conditions on f, we construct criteria for selecting μ so as to ensure that the algorithm x := x + p(μ, x) descends to a second-order stationary point of f and avoids saddle points.
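The abstract does not spell out the paper's criteria for selecting μ, but the basic step it defines can be sketched directly. Below is a minimal illustration, assuming one common regularization choice (shifting by the most negative Hessian eigenvalue plus a small margin, in the Levenberg–Marquardt spirit); the `choose_mu` rule, the `margin` parameter, and the example function are illustrative assumptions, not the paper's method.

```python
import numpy as np

def regularized_newton_step(g, H, mu):
    # Solve (H + mu*I) p = -g for the step p, as in the abstract.
    n = g.shape[0]
    return np.linalg.solve(H + mu * np.eye(n), -g)

def choose_mu(H, margin=0.1):
    # Illustrative rule (an assumption, not the paper's criterion):
    # shift so H + mu*I is positive definite with smallest
    # eigenvalue at least `margin`.
    lam_min = np.linalg.eigvalsh(H)[0]
    return max(0.0, margin - lam_min)

# Example non-convex function: f(x, y) = x^4/4 - x^2/2 + y^2,
# with a saddle at the origin and minima at (+-1, 0).
def grad(x):
    return np.array([x[0]**3 - x[0], 2.0 * x[1]])

def hess(x):
    return np.array([[3.0 * x[0]**2 - 1.0, 0.0],
                     [0.0, 2.0]])

x = np.array([0.1, 0.5])          # starts near the saddle at (0, 0)
for _ in range(50):
    H, g = hess(x), grad(x)
    mu = choose_mu(H)
    x = x + regularized_newton_step(g, H, mu)
```

With this choice of μ the iterates move away from the saddle at the origin and converge to the minimizer (1, 0); near a minimizer the Hessian is positive definite, μ = 0, and the step reduces to the pure Newton step.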
Original language | English
---|---
Journal | Numerical Algorithms
Publication status | Published - 26 Aug 2022
Keywords
- Nonlinear optimization
- Newton-like methods
- Non-convex functions