Mathematics Source Library
C & ASM


Fletcher-Reeves

Given a function f : R^n → R, the Fletcher-Reeves method belongs to a group of methods, called conjugate gradient methods, which attempt to locate a local minimum of f.

Given a quadratic function f ( x ) = x^T A x + b^T x + c, a set of vectors { y_i ∈ R^n : i = 1,...,k } is a set of conjugate directions for f if < y_i | A | y_j > = y_i^T A y_j = 0 for i ≠ j.

The following lemma is due to Fletcher and Reeves: Given a quadratic function
f ( x ) = x^T A x + b^T x + c and a point x_1, set v_1 = - g_1 = - grad f ( x_1 ) and, for k ≥ 1, define
v_{k+1} = - g_{k+1} + ( g_{k+1}^T g_{k+1} / g_k^T g_k ) v_k, where g_j = grad f ( x_j ), and
x_{k+1} = x_k - ( g_k^T v_k / ( 2 < v_k | A | v_k > ) ) v_k. Then { v_k } forms a set of conjugate directions for f.
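
As a concrete check of the lemma, the short program below (a minimal sketch written for this page, not part of the library) builds the first two Fletcher-Reeves directions for a two-variable quadratic and verifies that they are conjugate, i.e. that < v_1 | A | v_2 > vanishes up to rounding. The matrix A, the vector b and the starting point x_1 are arbitrary choices made for the example.

#include <stdio.h>

/* Minimal sketch (not part of the library): check the lemma on a
   two-variable quadratic f(x) = x^T A x + b^T x + c.  The matrix A,
   the vector b and the starting point x1 are arbitrary example data. */

static void mat_vec(double A[2][2], const double x[2], double y[2])
{
    y[0] = A[0][0] * x[0] + A[0][1] * x[1];
    y[1] = A[1][0] * x[0] + A[1][1] * x[1];
}

static double dot(const double x[2], const double y[2])
{
    return x[0] * y[0] + x[1] * y[1];
}

/* gradient of f:  g(x) = 2 A x + b */
static void gradient(double A[2][2], const double b[2],
                     const double x[2], double g[2])
{
    double Ax[2];
    mat_vec(A, x, Ax);
    g[0] = 2.0 * Ax[0] + b[0];
    g[1] = 2.0 * Ax[1] + b[1];
}

int main(void)
{
    double A[2][2] = { {3.0, 1.0}, {1.0, 2.0} };   /* symmetric positive definite */
    double b[2]  = { -1.0, -2.0 };
    double x1[2] = {  2.0,  1.0 };
    double g1[2], g2[2], v1[2], v2[2], x2[2], Av[2];
    double lambda, beta;

    gradient(A, b, x1, g1);
    v1[0] = -g1[0];  v1[1] = -g1[1];               /* v1 = -g1 */

    /* Exact line search: lambda = -g1^T v1 / ( 2 v1^T A v1 ). */
    mat_vec(A, v1, Av);
    lambda = -dot(g1, v1) / (2.0 * dot(v1, Av));
    x2[0] = x1[0] + lambda * v1[0];
    x2[1] = x1[1] + lambda * v1[1];

    /* Fletcher-Reeves direction: v2 = -g2 + ( g2^T g2 / g1^T g1 ) v1. */
    gradient(A, b, x2, g2);
    beta = dot(g2, g2) / dot(g1, g1);
    v2[0] = -g2[0] + beta * v1[0];
    v2[1] = -g2[1] + beta * v1[1];

    /* Conjugacy check: < v1 | A | v2 > should vanish up to rounding. */
    mat_vec(A, v2, Av);
    printf("v1^T A v2 = %g\n", dot(v1, Av));
    return 0;
}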

In general, conjugate gradient methods proceed as follows (a minimal C sketch of the loop is given after the list):
  1. Initialization: Choose a point, x_0, which approximates the location of a local minimum. Set k = 0 and v_{-1} = 0.
  2. Check if the stopping criterion is satisfied: The conventional rule is to stop if || grad f ( x_k ) || < ε in either the max-norm or the l2-norm. If the stopping criterion is satisfied, halt; otherwise, continue.
  3. Calculate New Search Direction: Set α_k (see below). Define
    v_k = grad f ( x_k ) + α_k v_{k-1}.
  4. Determine the Displacement Vector Δx_k: Given x_k and v_k, let λ_k be the value of λ which minimizes f ( x_k - λ v_k ). Then set Δx_k = - λ_k v_k.
  5. Update the estimate for the location of a local minimum: Set x_{k+1} = x_k + Δx_k, then increment k and go to step 2.
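
The loop below is a minimal C sketch of steps 2 through 5 under a few assumptions: the names used here (cg_sketch, line_min, alpha) are illustrative and are not the library's interface, the stopping test is hard-coded to the l2-norm rather than a user-supplied rule, and the one-dimensional minimizer and the rule for α_k are left to the caller.

#include <math.h>
#include <stdlib.h>

/* Minimal sketch of steps 2 - 5 (illustrative only, not the library code).
   df       - user-supplied gradient: df(x, g) fills g with grad f(x).
   alpha    - user-supplied rule for the coefficient alpha_k (see below).
   line_min - user-supplied one-dimensional minimizer; returns the lambda
              that minimizes f( x - lambda v ).
   Returns 0 on convergence, -1 if the iteration limit is reached or if
   memory allocation fails. */
int cg_sketch(void (*df)(double *, double *),
              double (*alpha)(const double *, const double *, int, int),
              double (*line_min)(const double *, const double *, int),
              double *x, int n, double epsilon, int max_iter)
{
    double *g     = malloc(n * sizeof *g);      /* gradient at x_k              */
    double *g_old = malloc(n * sizeof *g_old);  /* gradient at x_{k-1}          */
    double *v     = calloc(n, sizeof *v);       /* search direction, v_{-1} = 0 */
    int i, k, status = -1;

    if (g == NULL || g_old == NULL || v == NULL) {
        free(g); free(g_old); free(v);
        return -1;
    }
    df(x, g);
    for (k = 0; k < max_iter; k++) {
        double norm2 = 0.0, a, lambda;

        /* Step 2: stop when || grad f(x_k) || < epsilon (l2-norm here). */
        for (i = 0; i < n; i++) norm2 += g[i] * g[i];
        if (sqrt(norm2) < epsilon) { status = 0; break; }

        /* Step 3: v_k = grad f(x_k) + alpha_k v_{k-1}. */
        a = (k == 0) ? 0.0 : alpha(g, g_old, k, n);
        for (i = 0; i < n; i++) v[i] = g[i] + a * v[i];

        /* Step 4: lambda_k minimizes f(x_k - lambda v_k), so dx_k = -lambda_k v_k. */
        lambda = line_min(x, v, n);

        /* Step 5: x_{k+1} = x_k + dx_k; save g_k and recompute the gradient. */
        for (i = 0; i < n; i++) {
            x[i] -= lambda * v[i];
            g_old[i] = g[i];
        }
        df(x, g);
    }
    free(g); free(g_old); free(v);
    return status;
}
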
The formula for updating α_k varies among the conjugate gradient methods. For the Fletcher-Reeves method the coefficient is
α_k = ( g_k^T g_k / g_{k-1}^T g_{k-1} ), where g_j = grad f ( x_j ), if k ≢ 0 (mod n),
and
α_k = 0 if k ≡ 0 (mod n), so that the method restarts with a pure gradient step every n iterations.
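
A direct transcription of this rule into C might look like the helper below; it is illustrative only (the library does not expose such a function) and matches the alpha callback assumed in the sketch above.

/* Illustrative Fletcher-Reeves coefficient (not a library function):
   alpha_k = ( g_k^T g_k ) / ( g_{k-1}^T g_{k-1} ),  reset to 0 whenever
   k is a multiple of n so that the method restarts down the gradient. */
double fletcher_reeves_alpha(const double *g, const double *g_old, int k, int n)
{
    double num = 0.0, den = 0.0;
    int i;

    if (k % n == 0) return 0.0;                /* periodic restart            */
    for (i = 0; i < n; i++) {
        num += g[i] * g[i];                    /* g_k^T g_k                   */
        den += g_old[i] * g_old[i];            /* g_{k-1}^T g_{k-1}           */
    }
    return (den != 0.0) ? num / den : 0.0;     /* guard against division by 0 */
}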


Function List

  • int Fletcher_Reeves( double (*f)(double *), void (*df)(double*, double*), int (*Stopping_Rule)(double*, double*, int, int), double *a, double *dfa, double line_search_cutoff, double line_search_cutoff_scale_factor, double line_search_tolerance, int n )

    This routine uses the Fletcher-Reeves method to approximately locate a local minimum of the user-supplied function f(x). The procedure starts at x = a. The gradient is calculated using the user-supplied function df(x, dfa), where dfa is the n-dimensional array whose ith component dfa[i] is ∂f(x)/∂x_i. The parameters line_search_cutoff, line_search_cutoff_scale_factor and line_search_tolerance are used for minimizing the function f ( x ) down the line in the direction opposite to that given by the gradient; see Minimize_Down_the_Line for a description of these parameters. The arguments to the user-supplied Stopping_Rule are: the new estimate of the location of the local minimum x, the gradient of f evaluated at the new estimate, the total number of iterations performed, and n, the dimension of x. The user-supplied Stopping_Rule should return a non-zero integer, either positive or negative, to halt the procedure and zero to continue iterating; -1 is the only return value used intrinsically by the Fletcher_Reeves routine, which returns it when there is not enough dynamic heap memory. Upon exiting, the function returns either the return value of the user-supplied Stopping_Rule or -1, indicating that there was not enough dynamic heap memory. In either case a is set to the current estimate of the location of the local minimum. A short example of calling the routine is given below.
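
The fragment below is a hedged example of how the routine might be called, based only on the prototype listed above. The objective (the two-variable Rosenbrock function), its gradient and the stopping rule are user-written, the three line search arguments are placeholder values whose meaning is documented under Minimize_Down_the_Line, and the prototype is repeated here in place of a header.

#include <stdio.h>
#include <math.h>

/* Prototype copied from the function list above (normally provided by a
   user-supplied declaration or header). */
int Fletcher_Reeves(double (*f)(double *), void (*df)(double *, double *),
                    int (*Stopping_Rule)(double *, double *, int, int),
                    double *a, double *dfa, double line_search_cutoff,
                    double line_search_cutoff_scale_factor,
                    double line_search_tolerance, int n);

/* Two-variable Rosenbrock function; its minimum is at (1, 1). */
static double f(double *x)
{
    double t = x[1] - x[0] * x[0];
    return 100.0 * t * t + (1.0 - x[0]) * (1.0 - x[0]);
}

static void df(double *x, double *dfa)
{
    double t = x[1] - x[0] * x[0];
    dfa[0] = -400.0 * x[0] * t - 2.0 * (1.0 - x[0]);
    dfa[1] =  200.0 * t;
}

/* Halt when the l2-norm of the gradient is small or after 1000 iterations. */
static int stop_rule(double *x, double *dfa, int iteration, int n)
{
    double norm2 = 0.0;
    int i;
    (void) x;
    for (i = 0; i < n; i++) norm2 += dfa[i] * dfa[i];
    return (sqrt(norm2) < 1.e-8 || iteration >= 1000) ? 1 : 0;
}

int main(void)
{
    double a[2] = { -1.2, 1.0 };     /* starting point                        */
    double dfa[2];                   /* receives the gradient at the estimate */
    int status;

    /* The three line search arguments are placeholder values only; see
       Minimize_Down_the_Line for how to choose them. */
    status = Fletcher_Reeves(f, df, stop_rule, a, dfa, 1.0, 10.0, 1.e-8, 2);

    if (status == -1)
        printf("Fletcher_Reeves failed: not enough dynamic heap memory.\n");
    else
        printf("estimate (%g, %g), stopping rule returned %d\n",
               a[0], a[1], status);
    return 0;
}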

C Source Code

  • The file, fletcher_reeves.c, contains the version of Fletcher_Reeves( ) written in C.