Unconstrained Optimization f:R^{n} → R
Given a differentiable real-valued function of n real variables, f : R^{n} → R, a point p ∈ R^{n} is called a critical point of f if df(p) = ∇f(p) · dx = 0, or equivalently if ∂f(p) / ∂x_{i} = 0 for i = 1, ..., n. Critical points are also called equilibrium points or sometimes stationary points.
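As a minimal sketch of the definition above, the following Python code (assuming NumPy is available; the function names `numerical_gradient` and `is_critical_point` are illustrative, not part of any library) tests whether all partial derivatives vanish at a point, using central differences:

```python
import numpy as np

def numerical_gradient(f, p, h=1e-6):
    """Central-difference approximation of grad f at the point p."""
    p = np.asarray(p, dtype=float)
    g = np.zeros_like(p)
    for i in range(p.size):
        e = np.zeros_like(p)
        e[i] = h
        g[i] = (f(p + e) - f(p - e)) / (2 * h)
    return g

def is_critical_point(f, p, tol=1e-4):
    """p is (approximately) a critical point if every partial
    derivative of f vanishes there."""
    return bool(np.all(np.abs(numerical_gradient(f, p)) < tol))

# Example: f(x, y) = x^2 + y^2 has its only critical point at the origin.
f = lambda x: x[0]**2 + x[1]**2
```

For this f, `is_critical_point(f, [0.0, 0.0])` holds, while `is_critical_point(f, [1.0, 0.0])` does not, since ∂f/∂x = 2x is nonzero there.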
Further, if f is of class C² and p ∈ R^{n} is a critical point of f, the symmetric matrix
H = ( ∂^{2}f(p) / ∂x_{i}∂x_{j} ) is called the Hessian of f at the critical point p. If the Hessian of f at the critical point p is non-singular, then the critical point is said to be a non-degenerate critical point. A non-degenerate critical point is a local minimum if the Hessian is a positive-definite symmetric matrix, a local maximum if the Hessian is a negative-definite symmetric matrix, and a saddle point if the Hessian is neither positive-definite nor negative-definite. Non-degenerate critical points are isolated, i.e. given a non-degenerate critical point p of f, there exists an open neighborhood of p which contains no critical point of f other than p. The index of the Hessian H of f at the critical point p is the maximum dimension of a vector subspace on which H is negative definite, and the nullity of H is the dimension of the subspace of vectors v such that wᵀHv = 0 for all vectors w.
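Since H is symmetric, its eigenvalues are real, and the classification above can be read off from their signs: the index is the number of negative eigenvalues and the nullity is the number of zero eigenvalues. A small Python sketch of this (the function name `classify_critical_point` and the tolerance are illustrative assumptions):

```python
import numpy as np

def classify_critical_point(H, tol=1e-10):
    """Classify a critical point from the eigenvalues of its
    (symmetric) Hessian H.  Returns (kind, index, nullity), where
    index = number of negative eigenvalues and nullity = number of
    zero eigenvalues."""
    w = np.linalg.eigvalsh(H)          # eigenvalues of a symmetric matrix
    index = int(np.sum(w < -tol))
    nullity = int(np.sum(np.abs(w) <= tol))
    if nullity > 0:
        kind = 'degenerate'            # H is singular
    elif index == 0:
        kind = 'minimum'               # H positive-definite
    elif index == len(w):
        kind = 'maximum'               # H negative-definite
    else:
        kind = 'saddle'                # H indefinite
    return kind, index, nullity
```

For example, diag(2, 3) is classified as a minimum with index 0, diag(-1, 2) as a saddle with index 1, and diag(0, 1) as degenerate with nullity 1.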
The procedures listed below are written to locate a local minimum of a user-supplied function f, with the exception of the Newton-Raphson method, which can be used to locate any critical point of f. In particular, if the user wants to locate a local maximum of a function f, then one needs to locate a minimum of g( x ) = - f( x ). It is possible for some of the routines to locate a saddle point instead of a local minimum; therefore the results should be checked.
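To illustrate the g(x) = -f(x) trick, here is a minimal sketch using plain fixed-step gradient descent as a stand-in for the library's (unspecified, more sophisticated) minimizers; the function name `gradient_descent` and all parameters are illustrative assumptions:

```python
import numpy as np

def gradient_descent(grad, x0, lr=0.1, steps=200):
    """Fixed-step gradient descent on a function with gradient `grad`.
    A stand-in minimizer for illustration only."""
    x = np.asarray(x0, dtype=float)
    for _ in range(steps):
        x = x - lr * grad(x)
    return x

# To maximize f(x) = -(x - 2)^2, minimize g(x) = -f(x) = (x - 2)^2.
grad_g = lambda x: 2 * (x - 2)        # gradient of g, not of f
x_max = gradient_descent(grad_g, np.array([0.0]))
```

Here the minimizer of g is x = 2, which is exactly the maximizer of f.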
All the routines require that the user program the gradient of the function f. All methods save the Newton-Raphson method also require that the user program the function f itself. Only the Newton-Raphson method and the damped Newton-Raphson method require that the user program the Hessian of f.
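The role of the Hessian in the Newton-Raphson method can be sketched as follows (assuming NumPy; the function name `newton_raphson` and the stopping tolerance are illustrative): each step solves H(x) dx = -∇f(x), so the iteration drives the gradient to zero and may converge to a minimum, maximum, or saddle point, which is why the result should be classified afterwards.

```python
import numpy as np

def newton_raphson(grad, hess, x0, tol=1e-10, max_iter=50):
    """Newton-Raphson iteration for a critical point of f:
    repeatedly solve H(x) dx = -grad(x) until the gradient
    vanishes.  Requires both the gradient and the Hessian."""
    x = np.asarray(x0, dtype=float)
    for _ in range(max_iter):
        g = grad(x)
        if np.linalg.norm(g) < tol:
            break
        x = x - np.linalg.solve(hess(x), g)
    return x

# Example: f(x, y) = x^2 - y^2 has a non-degenerate saddle at the origin.
grad = lambda x: np.array([2 * x[0], -2 * x[1]])
hess = lambda x: np.array([[2.0, 0.0], [0.0, -2.0]])
```

Started from any point, this converges to the saddle at (0, 0) (in one step here, since f is quadratic), illustrating that Newton-Raphson locates critical points rather than minima specifically.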
Table of Unconstrained Optimization Routines of a Real-Valued Function of n Real Variables: