Mathematics Source Library
C & ASM


## Steepest Descent

The method of (optimal) steepest descent is used to find the location of a local minimum of a real-valued function of n real variables. The method is primarily useful when the current estimate is far from the actual location of a local minimum. Near a local minimum, it is better to use the Newton-Raphson or damped Newton-Raphson methods, the quasi-Newtonian methods, Fletcher-Powell-Davidon or Broyden-Fletcher-Shanno, or the conjugate gradient methods, Fletcher-Reeves or Fletcher-Reeves-Polak-Ribiere.

The (optimal) steepest-descent method proceeds as follows:
• Initialization: Choose an initial estimate, x0, for the location of a local minimum of the predetermined function f : Rn → R.
• Determine the Search Direction d: Calculate the gradient of f evaluated at x0; the search direction is then opposite to that of the gradient, d = - grad f ( x0 ).
• Check if d vanishes: If d = 0, then the point x0 is a critical point of f (a local extremum or a saddle point) and the procedure terminates.
• The line search: Find the minimum of f ( x ) along the line x = x0 + t d, for t > 0. Let t0 be the value of t for which f ( x0 + t d ) is a minimum.
• Check if the stopping criterion is satisfied: The conventional stopping rules are: (1) Stop if f ( x0 ) - f ( x0 + t0 d ) < ε for some preassigned ε, (2) Stop if || t0 d || < ε for either the max-norm or the l2-norm, and/or (3) Stop if || d || < ε for either the max-norm or the l2-norm.
• Update the estimate for the location of a local minimum: Replace the initial estimate, x0, with x0 + t0 d and repeat the process.
If the initial point happens to be the location of a local maximum or a saddle point, the gradient of the function will vanish there and the procedure will fail. It is also possible for the procedure to converge to a saddle point; therefore, once steepest descent converges to a point, it should be verified that that point is not a saddle point.

### Function List

• int Steepest_Descent( double (*f)(double *), void (*df)(double *, double *), int (*stopping_rule)(double*, double, double*, double, double*, int, int), double a[], double *fa, double *dfa, double cutoff, double cutoff_scale_factor, double tolerance, int n )

This routine uses the method of steepest descent to approximately locate a local minimum of the user-supplied function f(x). The procedure starts at x = a with the value of f at a stored in fa. The gradient is calculated using the user-supplied function df(x, dfa), where dfa is the n-dimensional array whose ith component dfa[i] is ∂f(x)/∂xi. The parameters cutoff, cutoff_scale_factor, and tolerance are used for minimizing the function f ( x ) down the line in the direction opposite to that given by the gradient. See Minimize_Down_the_Line for a description of these parameters.

The arguments to the user-supplied stopping_rule are: the initial point x0, the function evaluated at the initial point f(x0), the new estimate x, the function evaluated at the new estimate f(x), the gradient of the function evaluated at the new point dfa, the total number of iterations performed, and the dimension n of x. The user-supplied stopping_rule should return a non-zero integer, either positive or negative, to halt the procedure and zero to continue iterating. The return values -1, -2, -3, -4, and -5 are intrinsic return values used by the steepest-descent routine.

Upon exiting, the function returns either the return value of the user-supplied stopping_rule or one of the following:
• 0: Success, in the case where the gradient of the function vanishes at a point other than the initial starting point.
• -1: During the line search for a minimum, the three points were collinear.
• -2: During the line search, the parabolic interpolation phase produced a local maximum.
• -3: During the line search, the initial three points of the parabolic interpolation phase failed to satisfy the condition for a local minimum to exist.
• -4: There was not enough dynamic heap memory.
• -5: The gradient of the function vanishes at the initial point, which implies that the initial point may be a local maximum, a local minimum, or a saddle point.
Upon return, a is set to the current estimate of the location of the minimum and fa is set to the value of the function evaluated there.
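A user-supplied stopping_rule matching the signature above might implement criterion (3), halting when the max-norm of the gradient at the new estimate falls below a tolerance. This is a hypothetical example; the tolerance 1.0e-8 and the iteration cap are illustrative choices, not library defaults.

```c
#include <math.h>

/* Hypothetical stopping rule for Steepest_Descent: return non-zero to
   halt, zero to continue.  Positive return values are used here so as
   not to collide with the routine's intrinsic codes -1 through -5. */
static int stopping_rule(double *x0, double fx0, double *x, double fx,
                         double *dfx, int iterations, int n) {
    double max_norm = 0.0;                      /* max-norm of gradient */
    for (int i = 0; i < n; i++)
        if (fabs(dfx[i]) > max_norm) max_norm = fabs(dfx[i]);
    if (max_norm < 1.0e-8) return 1;            /* criterion (3): ||d|| < eps */
    if (iterations > 500)  return 2;            /* safety cap on iterations */
    return 0;                                   /* continue iterating */
}
```

This function would be passed as the stopping_rule argument of Steepest_Descent; the unused parameters x0, fx0, x, and fx are available for implementing criteria (1) and (2) instead.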