## One-Dimensional Optimization

A continuous real-valued function defined on a closed and bounded interval of the real line, *f*: [*a*, *b*] → *R* with *a* < *b*, attains its maximum and minimum. If the function *f* is differentiable throughout the open subinterval (*a*, *b*), then the extrema occur either at the endpoints *a* or *b*, or at points where the derivative *f′(x)* vanishes; in the latter case the routines for finding the roots of nonlinear equations may be useful (see Roots of Nonlinear Equations). If the derivative is defined throughout (*a*, *b*) with the possible exception of a finite number of points, then the interval may be partitioned into a finite number of subintervals with non-intersecting interiors, and the previous statement applies to each subinterval. There are, however, continuous real-valued functions defined on a closed and bounded interval that are almost nowhere differentiable.
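When the derivative is available and changes sign over a bracket, an interior extremum can be located as a root of *f′*. A minimal sketch using plain bisection; the test function and bracket are illustrative assumptions:

```python
def bisect_root(df, a, b, tol=1e-10):
    """Locate a root of df on [a, b], assuming df(a) and df(b) differ in sign."""
    fa = df(a)
    if fa * df(b) > 0:
        raise ValueError("df must change sign on [a, b]")
    while b - a > tol:
        m = 0.5 * (a + b)
        fm = df(m)
        if fa * fm <= 0:       # sign change in left half: shrink from the right
            b = m
        else:                  # sign change in right half: shrink from the left
            a, fa = m, fm
    return 0.5 * (a + b)

# Example (assumed): f(x) = x**2 - 2x has derivative 2x - 2, vanishing at x = 1,
# which is the minimizer of f on [0, 3].
x_star = bisect_root(lambda x: 2 * x - 2, 0.0, 3.0)
print(x_star)  # close to 1.0
```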

It is convenient to have routines that search for an extremum of a real-valued function of a real variable without requiring code for the derivative of the function. All of the following methods can be used to locate the extremum of a unimodal function; some routines can also be used to locate local extrema of multimodal functions.
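One derivative-free method of this kind is golden-section search, which brackets the minimum of a unimodal function and shrinks the bracket by a constant factor per evaluation. A minimal sketch, with an assumed test function and interval:

```python
import math

def golden_section_min(f, a, b, tol=1e-8):
    """Minimize a unimodal f on [a, b] by golden-section search (no derivatives)."""
    invphi = (math.sqrt(5) - 1) / 2   # 1/phi, about 0.618
    c = b - invphi * (b - a)
    d = a + invphi * (b - a)
    fc, fd = f(c), f(d)
    while b - a > tol:
        if fc < fd:                   # minimum lies in [a, d]
            b, d, fd = d, c, fc
            c = b - invphi * (b - a)
            fc = f(c)
        else:                         # minimum lies in [c, b]
            a, c, fc = c, d, fd
            d = a + invphi * (b - a)
            fd = f(d)
    return 0.5 * (a + b)

# Example (assumed): (x - 2)**2 is unimodal on [0, 5] with minimum at x = 2.
print(golden_section_min(lambda x: (x - 2) ** 2, 0.0, 5.0))
```

Only one new function evaluation is needed per iteration, since the surviving interior point is reused.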

Given a continuous function *f*: *R*^{n} → *R*, the same methods can be used to locate the extrema of the functions *g*: *R* → *R* given by *g*(*p*) = *f*(**x** + *p***v**) or *g*(*p*) = *f*(**x** − *p***v**), where **x**, **v** ∈ *R*^{n} and *p* ≥ 0. If *f* is smooth and **v** ≠ **0** is in the direction of the gradient of *f*, then maximizing *g*(*p*) = *f*(**x** + *p***v**) maximizes *f* along the line through **x** in the direction of the gradient of *f* evaluated at **x**, and minimizing *g*(*p*) = *f*(**x** − *p***v**) minimizes *f* along the line through **x** in the direction opposite to that of the gradient of *f* evaluated at **x**.
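The reduction of an *n*-dimensional problem to a one-dimensional line search can be sketched as follows. The quadratic test function, the starting point, and the use of golden-section search along the ray are all illustrative assumptions, not a prescribed implementation:

```python
import math

def line_search_min(f, grad, x, p_max=10.0, tol=1e-8):
    """Minimize g(p) = f(x - p*v) for p >= 0, where v = grad f(x) (steepest descent)."""
    v = grad(x)
    g = lambda p: f([xi - p * vi for xi, vi in zip(x, v)])
    # Golden-section search on [0, p_max], assuming g is unimodal there.
    invphi = (math.sqrt(5) - 1) / 2
    a, b = 0.0, p_max
    c, d = b - invphi * (b - a), a + invphi * (b - a)
    gc, gd = g(c), g(d)
    while b - a > tol:
        if gc < gd:
            b, d, gd = d, c, gc
            c = b - invphi * (b - a)
            gc = g(c)
        else:
            a, c, gc = c, d, gd
            d = a + invphi * (b - a)
            gd = g(d)
    p = 0.5 * (a + b)
    return [xi - p * vi for xi, vi in zip(x, v)]

# Example (assumed): f(x, y) = x**2 + y**2 with gradient (2x, 2y). From (3, 4)
# an exact line search along the negative gradient reaches the minimizer (0, 0).
f = lambda x: x[0] ** 2 + x[1] ** 2
grad = lambda x: [2 * x[0], 2 * x[1]]
print(line_search_min(f, grad, [3.0, 4.0]))
```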

## Table of One-Dimensional Optimization Routines