1 <?xml version="1.0" encoding="UTF-8"?>
2 <refentry xmlns="http://docbook.org/ns/docbook" xmlns:xlink="http://www.w3.org/1999/xlink" xmlns:svg="http://www.w3.org/2000/svg" xmlns:ns4="http://www.w3.org/1999/xhtml" xmlns:mml="http://www.w3.org/1998/Math/MathML" xmlns:db="http://docbook.org/ns/docbook" xmlns:scilab="http://www.scilab.org" xml:id="derivative" xml:lang="en">
4 <refname>derivative</refname>
5 <refpurpose>approximate derivatives of a function</refpurpose>
8 <title>Calling Sequence</title>
9 <synopsis>derivative(F,x)
10 [J [,H]] = derivative(F,x [,h ,order ,H_form ,Q])
14 <title>Arguments</title>
a Scilab function F: <literal>R^n --> R^m</literal> or a
<literal>list(F,p1,...,pk)</literal>, where F is a Scilab function
of the form <literal>y=F(x,p1,...,pk)</literal>, p1, ..., pk being
any Scilab objects (matrices, lists, ...).
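For example, the list form can be used to pass extra parameters to F, as in
the following sketch (the function G and the value of its parameter p are
hypothetical):
<programlisting role="example"><![CDATA[
// G takes the evaluation point x plus one extra parameter p
function y=G(x, p)
  y = p*sum(x.^2);
endfunction
J = derivative(list(G, 2.5), ones(3,1))
]]></programlisting>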
30 <para>real column vector of dimension n.</para>
36 <para>(optional) real, the stepsize used in the finite difference
44 <para>(optional) integer, the order of the finite difference formula
45 used to approximate the derivatives (order = 1,2 or 4, default is
<para>(optional) string, the form in which the Hessian will be
returned. Possible forms are:
58 <term>H_form='default'</term>
H is an m x (<literal>n^2</literal>) matrix; in this
form, the k-th row of H corresponds to the Hessian of the k-th
component of F, given as the following row vector:
68 <imagedata align="center" fileref="../mml/derivative_equation_1.mml"/>
<para>(grad(F_k) being a row vector).</para>
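<para>For example, the classic n x n Hessian matrix of the k-th component of
F may be recovered by reshaping the k-th row of H, as in the following
sketch (n being the dimension of x; since the Hessian is symmetric, the
reshaping order does not matter):</para>
<programlisting role="example"><![CDATA[
[J, H] = derivative(F, x);       // default H_form
k = 1;
Hk = matrix(H(k, :), n, n);      // classic Hessian of the k-th component of F
]]></programlisting>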
<term>H_form='blockmat'</term>
<para>H is an (m*n) x n block matrix: the classic Hessian
matrices (of each component of F) are stacked by row (H = [H1
; H2 ; ... ; Hm] in Scilab syntax).
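The classic Hessian of the k-th component of F then occupies rows
(k-1)*n+1 through k*n, and may be extracted as in the following sketch
(k and n assumed to be defined):
<programlisting role="example"><![CDATA[
[J, H] = derivative(F, x, H_form='blockmat');
Hk = H((k-1)*n+1 : k*n, :);   // classic Hessian of the k-th component of F
]]></programlisting>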
<term>H_form='hypermat'</term>
<para>H is an n x n matrix for m=1, and an n x n x m hypermatrix
otherwise. H(:,:,k) is the classic Hessian matrix of the k-th
<para>(optional) real orthogonal matrix (default is eye(n,n)). Q allows the user to remove
the arbitrariness of using the canonical basis to approximate the derivatives of a function.
Orthogonality is not mandatory, but it makes recovering the derivatives cheaper, since the
inverse of an orthogonal matrix is simply its transpose (Q' instead of inv(Q)).
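For instance, an orthogonal Q may be generated from a random matrix via a QR
factorization, as in the following sketch (F and x as in the calling
sequence):
<programlisting role="example"><![CDATA[
n = 3;
[Q, R] = qr(rand(n, n));      // Q is orthogonal by construction
J = derivative(F, x, Q = Q);
]]></programlisting>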
109 <para>approximated Jacobian</para>
115 <para>approximated Hessian</para>
121 <title>Description</title>
122 <para>Numerical approximation of the first and second derivatives of a
123 function F: <literal> R^n --> R^m</literal> at the point x. The
124 Jacobian is computed by approximating the directional derivatives of the
components of F in the direction of the columns of Q (for m=1 and v=Q(:,k),
grad(F(x))*v = Dv(F(x))). The second derivatives are computed by
composition of first order derivatives. If H is given in its default form,
the Taylor series of F(x) up to second order terms is given by:
133 <imagedata align="center" fileref="../mml/derivative_equation_2.mml"/>
<para>(Here [J,H]=derivative(F,x,H_form='default'), J=J(x), H=H(x).)</para>
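<para>As an illustration, the first order approximation of the directional
derivative along the k-th column of Q may be sketched as follows (F, x, Q,
h and k are assumed to be defined, with m=1):</para>
<programlisting role="example"><![CDATA[
v = Q(:, k);
Dv = (F(x + h*v) - F(x)) / h;   // approximates grad(F(x))*v
]]></programlisting>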
140 <title>Performances</title>
If the problem is correctly scaled, increasing the order of the finite
difference formula reduces the total error, but requires more function evaluations.
144 The following list presents the number of function evaluations required to
145 compute the Jacobian depending on the order of the formula and the dimension of <literal>x</literal>,
146 denoted by <literal>n</literal>:
151 <literal>order=1</literal>, the number of function evaluations is <literal>n+1</literal>,
156 <literal>order=2</literal>, the number of function evaluations is <literal>2n</literal>,
161 <literal>order=4</literal>, the number of function evaluations is <literal>4n</literal>.
<para>Computing the Hessian matrix requires the square of the number of
function evaluations used for the Jacobian, as detailed in the following list.
171 <literal>order=1</literal>, the number of function evaluations is <literal>(n+1)^2</literal>,
176 <literal>order=2</literal>, the number of function evaluations is <literal>4n^2</literal>,
181 <literal>order=4</literal>, the number of function evaluations is <literal>16n^2</literal>.
187 <title>Remarks</title>
<para>The step size h must be small to get a low truncation error, but if it
is too small, floating point cancellation errors will dominate. As a rule
of thumb, do not change the default step size. To work around numerical
191 difficulties one may also change the order and/or choose different
192 orthogonal matrices Q (the default is eye(n,n)), especially if the
193 approximate derivatives are used in optimization routines. All the
194 optional arguments may also be passed as named arguments, so that one can
use calls of the form:
197 <programlisting><![CDATA[
derivative(F, x, H_form = "hypermat")
derivative(F, x, order = 4)
// etc.
203 <title>Examples</title>
204 <programlisting role="example"><![CDATA[
206 y=[sin(x(1)*x(2))+exp(x(2)*x(3)+x(1)) ; sum(x.^3)];
210 y=[sin(x(1)*x(2)*p)+exp(x(2)*x(3)+x(1)) ; sum(x.^3)];
214 [J,H]=derivative(F,x,H_form='blockmat')
217 // form an orthogonal matrix :
219 // Test order 1, 2 and 4 formulas.
221 [J,H]=derivative(F,x,order=i,H_form='blockmat',Q=Q);
222 mprintf("order= %d \n",i);
228 [J,H]=derivative(list(G,p),x,h,2,H_form='hypermat');
230 [J,H]=derivative(list(G,p),x,h,4,Q=Q);
233 // Taylor series example:
235 [J,H]=derivative(F,x);
239 F(x+dx)-F(x)-J*dx-1/2*H*(dx .*. dx)
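// (dx .*. dx is the Kronecker product of dx with itself, which matches
// the default m x n^2 form of H)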
242 function y=f(x,A,p,w)
// with Jacobian and Hessian given by J(x)=x'*(A+A')+p' and H(x)=A+A'.
250 [J,H]=derivative(list(f,A,p,w),x,h=1,H_form='blockmat')
// Since f(x) is quadratic in x, the finite difference approximations of
// order 2 or 4 should be exact for any h~=0. The apparent errors are caused by
// cancellation in the floating point operations, so a "big" h is chosen.
255 // Comparison with the exact matrices:
263 <title>Accuracy issues</title>
265 The <literal>derivative</literal> function uses the same step <literal>h</literal>
266 whatever the direction and whatever the norm of <literal>x</literal>.
267 This may lead to a poor scaling with respect to <literal>x</literal>.
268 An accurate scaling of the step is not possible without many evaluations
269 of the function. Still, the user has the possibility to compare the results
270 produced by the <literal>derivative</literal> and the <literal>numdiff</literal>
271 functions. Indeed, the <literal>numdiff</literal> function scales the
272 step depending on the absolute value of <literal>x</literal>.
273 This scaling may produce more accurate results, especially if
274 the magnitude of <literal>x</literal> is large.
In the following Scilab script, we compute the derivative of a
univariate quadratic function. The exact derivative can be
computed analytically, and the relative error is measured.
280 In this rather extreme case, the <literal>derivative</literal> function
281 produces no significant digits, while the <literal>numdiff</literal>
282 function produces 6 significant digits.
284 <programlisting role="example"><![CDATA[
285 // Difference between derivative and numdiff when x is large
286 function y = myfunction (x)
291 fp = derivative(myfunction,x);
293 mprintf("Relative error with derivative: %e\n",e)
294 fp = numdiff(myfunction,x);
296 mprintf("Relative error with numdiff: %e\n",e)
299 The previous script produces the following output.
301 <programlisting role="example"><![CDATA[
302 Relative error with derivative: 1.000000e+000
303 Relative error with numdiff: 7.140672e-006
In a practical situation, we may not know the correct value of the
derivative. Still, this example warns us that numerical derivatives
should be used with caution in such cases.
311 <refsection role="see also">
312 <title>See Also</title>
313 <simplelist type="inline">
315 <link linkend="numdiff">numdiff</link>
318 <link linkend="derivat">derivat</link>