Mathematics Source Library
C & ASM


## Damped Newton-Raphson

Given a function f : ℝⁿ → ℝ, the damped Newton-Raphson method attempts to locate a local minimum of f. It is an iterative method which, given a point x₀ ∈ ℝⁿ, seeks a critical point in the direction given by the Newton-Raphson procedure whenever the Hessian H of f is positive definite at x₀; otherwise it seeks a critical point in the direction given by the method of steepest descent. If the direction is given by the Newton-Raphson method, the (unnormalized) direction is d = H⁻¹ grad f; if the direction is given by the method of steepest descent, the (unnormalized) direction is d = grad f. In the Newton-Raphson case, set λ = 1. In the steepest-descent case, set λ = (grad f)ᵀ(grad f) / (grad f)ᵀH (grad f) provided that (grad f)ᵀH (grad f) > 0; otherwise set λ = (maximum step length used so far) / ||d||, or λ = 1 if no step has yet been taken. The function f is then evaluated at x₀ − λd. If f(x₀ − λd) > f(x₀), λ is halved and the test repeated; otherwise x₀ is replaced by x₀ − λd. The process is halted by a user-defined function of the displacement between successive iterates, the gradient evaluated at the current estimate, and the total number of iterations performed.

It is possible that the procedure converges to a saddle point, so the result should be checked to verify that it is in fact the location of a local minimum.

### Function List

• `int Damped_Newton_Raphson( double (*f)(double *), void (*df)(double*, double*), void (*ddf)(double*, double*, double*), int (*Stopping_Rule)(double*, double*, int, int), double* a, int n )`

This routine uses the damped Newton-Raphson method to approximately locate a local minimum of the function f(x). The user-supplied function df(x, dfx), the gradient of f(x), has two arguments: the first is the location x at which the gradient is evaluated, and the second is the array whose iᵗʰ component is the iᵗʰ component of grad f(x). The user-supplied function ddf(x, dfx, ddfx) corresponds to the Hessian of f(x) and has three arguments: the first is the location x at which the Hessian is evaluated; the second is the gradient, included to facilitate evaluating the Hessian in the event that the Hessian depends explicitly on the value of grad f(x); and the third is the n × n Hessian matrix whose (i, j) element is ∂²f / ∂xᵢ∂xⱼ evaluated at x. The arguments to the user-supplied Stopping_Rule are: the displacement vector d between two successive iterates, the gradient evaluated at the new estimate x, the total number of iterations performed, and n, the dimension of x. Stopping_Rule should return a non-zero integer, either positive or negative, to halt the procedure, and zero to continue iterating; the return value -1 is reserved for internal use by the damped Newton-Raphson routine. Upon exiting, the function returns either the value returned by the user-supplied Stopping_Rule or -1, which indicates that there was not enough dynamic heap memory. The argument a is the initial n-dimensional point on input, and on output it is the final estimate of the location of a local minimum.