derivative — approximate derivatives of a function

Calling Sequence
    derivative(F, x)
    [J [, H]] = derivative(F, x [, h, order, H_form, Q])
Arguments

F
    a Scilab function F: R^n --> R^m, or a
    list(F, p1, ..., pk), where F is a Scilab function of the form
    y = F(x, p1, ..., pk), p1, ..., pk being any Scilab objects
    (matrices, lists, ...).
x
    real column vector of dimension n.
h
    (optional) real, the step size used in the finite difference
    approximations.
order
    (optional) integer, the order of the finite difference formula used
    to approximate the derivatives (order = 1, 2, or 4; default is
    order = 2).
H_form
    (optional) string, the form in which the Hessian will be returned.
    Possible forms are:

    H_form = 'default'
        H is an m x (n^2) matrix; in this form, the k-th row of H
        corresponds to the Hessian of the k-th component of F, laid out
        as the row vector

        [d2F_k/dx1dx1, d2F_k/dx1dx2, ..., d2F_k/dxndxn]

        (grad(F_k) being a row vector).
    H_form = 'blockmat'
        H is an (m*n) x n block matrix: the classic Hessian matrices (of
        each component of F) are stacked by row (H = [H1; H2; ...; Hm]
        in Scilab syntax).
    H_form = 'hypermat'
        H is an n x n matrix for m = 1, and an n x n x m hypermatrix
        otherwise. H(:,:,k) is the classic Hessian matrix of the k-th
        component of F.
Q
    (optional) real orthogonal matrix (default is eye(n,n)). Q removes
    the arbitrariness of using the canonical basis to approximate the
    derivatives of a function. It must be orthogonal: recovering the
    derivatives requires the inverse of Q, and orthogonality makes that
    inverse simply Q' instead of inv(Q).
J
    approximated Jacobian.
H
    approximated Hessian.

Description

Numerical approximation of the first and second derivatives of a
function F: R^n --> R^m at the point x. The Jacobian is computed by
approximating the directional derivatives of the components of F in the
direction of the columns of Q. (For m = 1 and v = Q(:,k):
grad(F(x))*v = Dv(F(x)).) The second derivatives are computed by
composition of first order derivatives. If H is given in its default
form, the Taylor series of F at x up to second order terms is given by:
    F(x + dx) = F(x) + J(x)*dx + 1/2*H(x)*kron(dx, dx) + o(||dx||^2)

(with [J, H] = derivative(F, x, H_form='default'), J = J(x), H = H(x).)

Performances
If the problem is correctly scaled, increasing the accuracy reduces
the total error but requires more function evaluations.
The following list presents the number of function evaluations required to
compute the Jacobian depending on the order of the formula and the dimension of x,
denoted by n:
order=1, the number of function evaluations is n+1,
order=2, the number of function evaluations is 2n,
order=4, the number of function evaluations is 4n.
Computing the Hessian matrix requires the square of the number of function
evaluations needed for the Jacobian, as detailed in the following list.
order=1, the number of function evaluations is (n+1)^2,
order=2, the number of function evaluations is 4n^2,
order=4, the number of function evaluations is 16n^2.
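These counts can be checked empirically. The following sketch uses a hypothetical test function that increments a global evaluation counter; the exact counts observed may depend on the Scilab version.

```scilab
// Sketch: count how many times derivative() evaluates the function
global nev

function y = F(x)
    global nev
    nev = nev + 1;       // one more function evaluation
    y = sum(x.^2);
endfunction

n = 3;
x = ones(n, 1);
for ord = [1 2 4]
    nev = 0;
    J = derivative(F, x, order=ord);
    mprintf("order=%d: %d function evaluations\n", ord, nev);
end
```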
Remarks

The step size h must be small to get a low truncation error, but if it
is too small, floating point cancellation errors will dominate. As a
rule of thumb, do not change the default step size. To work around
numerical difficulties one may also change the order and/or choose a
different orthogonal matrix Q (the default is eye(n,n)), especially if
the approximate derivatives are used in optimization routines. All the
optional arguments may also be passed as named arguments, so that one
can use calls in the form:
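The original call examples were not preserved in this page; the following sketch shows what such named-argument calls could look like (the function f, the point x, and the chosen values are placeholders, not from the original):

```scilab
// Hypothetical illustrations of named-argument calls
J = derivative(f, x, order=4);                 // 4th-order formula
[J, H] = derivative(f, x, H_form='blockmat');  // Hessians stacked by row

// supply a custom step and a non-canonical orthogonal basis
[Qm, Rm] = qr(rand(3, 3));
[J, H] = derivative(f, x, h=1e-3, Q=Qm);
```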
Examples

Accuracy issues
The derivative function uses the same step h
whatever the direction and whatever the norm of x.
This may lead to a poor scaling with respect to x.
An accurate scaling of the step is not possible without many evaluations
of the function. Still, the user has the possibility to compare the results
produced by the derivative and the numdiff
functions. Indeed, the numdiff function scales the
step depending on the absolute value of x.
This scaling may produce more accurate results, especially if
the magnitude of x is large.
In the following Scilab script, we compute the derivative of a
univariate quadratic function. The exact derivative can be computed
analytically, and the relative error is measured.
In this rather extreme case, the derivative function
produces no significant digits, while the numdiff
function produces 6 significant digits.
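The script itself did not survive in this page; a sketch of such a comparison (the quadratic and the evaluation point below are illustrative choices, not necessarily the original ones) could be:

```scilab
// Sketch: compare derivative() and numdiff() on a quadratic with a
// large-magnitude argument, where a fixed step is poorly scaled
function y = F(x)
    y = x^2;
endfunction

x0    = 1e10;
exact = 2 * x0;                 // analytic derivative of x^2
Jd = derivative(F, x0);         // fixed step, independent of x0
Jn = numdiff(F, x0);            // step scaled with abs(x0)
mprintf("derivative: rel. error = %e\n", abs(Jd - exact) / abs(exact));
mprintf("numdiff:    rel. error = %e\n", abs(Jn - exact) / abs(exact));
```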
In a practical situation, we may not know the correct numerical
derivative in advance. Still, such a comparison warns us that numerical
derivatives should be used with caution in this specific case.
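As a further illustration, the default-form Hessian can be checked against the second-order Taylor expansion given in the Description. The function and point below are illustrative choices, not from the original page:

```scilab
// Sketch: verify J and H (default form) via the Taylor expansion
function y = F(x)
    y = [x(1)^2 + x(2); x(1) * x(2)];   // F: R^2 --> R^2
endfunction

x  = [1; 2];
dx = [1e-3; -1e-3];
[J, H] = derivative(F, x);              // H is m x n^2 in default form

// second-order prediction: F(x+dx) ~ F(x) + J*dx + 1/2*H*kron(dx,dx)
pred = F(x) + J * dx + 0.5 * H * kron(dx, dx);
mprintf("Taylor residual: %e\n", norm(F(x + dx) - pred));
```

The residual should be on the order of o(||dx||^2) if the returned J and H are accurate.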
See Also
interp
interp2d
splin
eval_cshep2d
diff
numdiff
derivat