1 <?xml version="1.0" encoding="UTF-8"?>
3 * Scilab ( http://www.scilab.org/ ) - This file is part of Scilab
4 * Copyright (C) 2008 - INRIA
6 * This file must be used under the terms of the CeCILL.
7 * This source file is licensed as described in the file COPYING, which
8 * you should have received as part of this distribution. The terms
9 * are also available at
10 * http://www.cecill.info/licences/Licence_CeCILL_V2-en.txt
13 <refentry version="5.0-subset Scilab" xml:id="optim" xml:lang="en"
14 xmlns="http://docbook.org/ns/docbook"
15 xmlns:xlink="http://www.w3.org/1999/xlink"
16 xmlns:svg="http://www.w3.org/2000/svg"
17 xmlns:ns3="http://www.w3.org/1999/xhtml"
18 xmlns:mml="http://www.w3.org/1998/Math/MathML"
19 xmlns:db="http://docbook.org/ns/docbook">
21 <pubdate>$LastChangedDate: 2008-04-28 09:36:26 +0200 (lun, 28 avr 2008)
26 <refname>optim</refname>
28 <refpurpose>non-linear optimization routine</refpurpose>
32 <title>Calling Sequence</title>
34 <synopsis>[f,xopt]=optim(costf,x0)
35 [f [,xopt [,gradopt [,work]]]]=optim(costf [,<contr>],x0 [,algo] [,df0 [,mem]] [,work] [,<stop>] [,<params>] [,imp=iflag])</synopsis>
39 <title>Parameters</title>
<para>external, i.e. a Scilab function, a list or a string
(<literal>costf</literal> is the cost function, that is, a Scilab
script, a Fortran 77 routine or a C function).</para>
56 <para>real vector (initial value of variable to be
65 <para>value of optimal cost
66 (<literal>f=costf(xopt)</literal>)</para>
74 <para>best value of <literal>x</literal> found.</para>
79 <term><contr></term>
<para>keyword representing the following sequence of arguments:
<literal>'b',binf,bsup</literal>, where <literal>binf</literal> and
<literal>bsup</literal> are real vectors with the same dimension as
<literal>x0</literal>. <literal>binf</literal> and
<literal>bsup</literal> are the lower and upper bounds on
<literal>x</literal>.</para>
97 <para><literal>'qn'</literal> : quasi-Newton (this is the
98 default solver)</para>
102 <para><literal>'gc'</literal> : conjugate gradient</para>
106 <para><literal>'nd'</literal> : non-differentiable.</para>
<para>Note that the non-differentiable solver does not accept
bounds on <literal>x</literal>.</para>
<para>real scalar. Estimated decrease of <literal>f</literal> at the
first iteration. (<literal>df0=1</literal> is the default
<para>integer, number of variables used to approximate the Hessian.
The default value is 10. This feature is available for the
conjugate gradient algorithm "gc" without constraints and the
non-smooth algorithm "nd" without constraints.</para>
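<para>As an illustrative sketch (assuming the cost function "cost" and
the initial guess "x0" defined as in the examples below), the mem
parameter is given after df0 in the calling sequence:</para>

<programlisting role="example"><![CDATA[
// Conjugate gradient with estimated first decrease df0=1 and
// a Hessian approximation based on mem=5 variables
[f,xopt]=optim(cost,x0,'gc',1,5)
]]></programlisting>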
137 <term><stop></term>
140 <para>keyword representing the sequence of optional parameters
141 controlling the convergence of the algorithm. <literal>'ar',nap
142 [,iter [,epsg [,epsf [,epsx]]]]</literal></para>
149 <para>reserved keyword for stopping rule selection defined as
158 <para>maximum number of calls to <literal>costf</literal>
159 allowed (default is 100).</para>
167 <para>maximum number of iterations allowed (default is
176 <para>threshold on gradient norm.</para>
<para>threshold controlling the decrease of
<literal>f</literal></para>
193 <para>threshold controlling variation of <literal>x</literal>.
194 This vector (possibly matrix) of same size as
195 <literal>x0</literal> can be used to scale
196 <literal>x</literal>.</para>
204 <term><params></term>
207 <para>keyword representing the method to initialize the arguments
208 <literal>ti, td</literal> passed to the objective function, provided
209 as a C or Fortran routine. This option has no meaning when the cost
210 function is a Scilab script. <params> can be set to only one
211 of the following values.</para>
<para>This mode allows Scilab to allocate memory in its internal
workspace so that the objective function can get arrays with the
required size without directly allocating the memory. "in"
stands for "initialization". In this mode, before the value and
derivative of the objective function are computed, a dialog takes
place between the optim Scilab primitive and the objective
function. In this dialog, the objective function is called twice,
with particular values of the "ind" parameter. The first
time, ind is set to 10 and the objective function is expected to
set the nizs, nrzs and ndzs integer parameters of the "nird"
229 <programlisting role = ""><![CDATA[
230 common /nird/ nizs,nrzs,ndzs
<para>This allows Scilab to allocate memory inside its internal
workspace. The second time the objective function is called, ind
is set to 11 and the objective function is expected to set the
ti, tr and td arrays. After this initialization phase, each time
it is called, the objective function is guaranteed that the ti, tr
and td arrays passed to it hold the values that were previously
initialized.</para>
243 <para>"ti",valti</para>
245 <para>In this mode, valti is expected to be a Scilab vector
246 variable containing integers. Whenever the objective function is
247 called, the ti array it receives contains the values of the
248 Scilab variable.</para>
252 <para>"td", valtd</para>
254 <para>In this mode, valtd is expected to be a Scilab vector
255 variable containing double values. Whenever the objective
256 function is called, the td array it receives contains the values
257 of the Scilab variable.</para>
261 <para>"ti",valti,"td",valtd</para>
<para>This mode combines the two previous ones.</para>
<para>The <literal>ti, td</literal> arrays may be used so that the
objective function can be computed. For example, if the objective
function is a polynomial, the ti array may be used to store the
coefficients of that polynomial.</para>
<para>Users should choose carefully between the "in" mode and the
"ti" and "td" modes, depending on whether the data is available as
Scilab variables. If the data is available as Scilab variables, then
the "ti", valti, "td", valtd mode should be chosen. If the data is
available directly from the objective function, the "in" mode should
be chosen. Notice that there is no "tr" mode, since, in Scilab, all
real values are of "double" type.</para>
<para>If neither the "in" mode nor the "ti", "td" mode is chosen,
that is, if <params> is not present as an option of the optim
primitive, the user should not assume that the ti, tr and td
arrays can be used: reading or writing these arrays may produce
unpredictable results.</para>
289 <term>"imp=iflag"</term>
<para>named argument used to set the trace mode. The possible values
for iflag are 0, 1, 2, >2 and <0. Use this option with caution: most
of these reports are written on the Scilab standard output.</para>
298 <para>iflag=0: nothing (except errors) is reported (this is the
303 <para>iflag=1: initial and final reports,</para>
307 <para>iflag=2: adds a report per iteration,</para>
<para>iflag>2: adds reports on linear search.</para>
315 <para>iflag<0: calls the cost function with ind=1 every -imp iterations.</para>
325 <para>gradient of <literal>costf</literal> at
326 <literal>xopt</literal></para>
<para>working array for hot restart of the quasi-Newton method. This
array is automatically initialized by <literal>optim</literal> when
<literal>optim</literal> is invoked. It can be used as an input
parameter to speed up the calculations.</para>
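<para>As a sketch of the hot restart feature (assuming a cost function
"cost" and an initial guess "x0" as in the examples below), the work
output of a first quasi-Newton run can be passed back as an input to a
second call:</para>

<programlisting role="example"><![CDATA[
// First run: retrieve the working array
[f,xopt,gradopt,work]=optim(cost,x0)
// Hot restart from xopt, reusing the working array
[f,xopt]=optim(cost,xopt,work)
]]></programlisting>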
344 <title>Description</title>
346 <para>Non-linear optimization routine for programs without constraints or
347 with bound constraints:</para>
349 <programlisting role = ""><![CDATA[
350 min costf(x) w.r.t x.
<para><literal>costf</literal> is an "external", i.e. a Scilab function, a
list or a string giving the name of a C or Fortran routine (see
"external"). This external must return the value <literal>f</literal> of
the cost function at the point <literal>x</literal> and the gradient
<literal>g</literal> of the cost function at the point
<literal>x</literal>.</para>
362 <term>- Scilab function case</term>
365 <para>If <literal>costf</literal> is a Scilab function, the calling
366 sequence for <literal>costf</literal> must be:</para>
368 <programlisting role = ""><![CDATA[
369 [f,g,ind]=costf(x,ind)
372 <para>Here, <literal>costf</literal> is a function which returns
373 <literal>f</literal>, value (real number) of cost function at
374 <literal>x</literal>, and <literal>g</literal>, gradient vector of
375 cost function at <literal>x</literal>. The variable
376 <literal>ind</literal> is described below.</para>
381 <term>- List case</term>
<para>If <literal>costf</literal> is a list, it should be of the
form <literal>list(real_costf, arg1,...,argn)</literal> with
<literal>real_costf</literal> a Scilab function with calling
sequence <literal>[f,g,ind]=real_costf(x,ind,arg1,...,argn)</literal>.
The <literal>x</literal>, <literal>f</literal>,
<literal>g</literal>, <literal>ind</literal> arguments have the same
meaning as above. The <literal>argi</literal> arguments can be used to
pass parameters to the function.</para>
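<para>For illustration, here is a minimal sketch of the list form; the
function myCost and its extra parameter a are hypothetical names:</para>

<programlisting role="example"><![CDATA[
// The extra argument a is passed through the list
function [f,g,ind]=myCost(x,ind,a)
  f=0.5*norm(x-a)^2;
  g=x-a;
endfunction
a=[1;2;3];
x0=[0;0;0];
[f,xopt]=optim(list(myCost,a),x0)
]]></programlisting>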
396 <term>- String case</term>
399 <para>If <literal>costf</literal> is a character string, it refers
400 to the name of a C or Fortran routine which must be linked to
405 <term>* Fortran case</term>
<para>The interface of the Fortran subroutine computing the
objective must be:</para>
411 <programlisting role = ""><![CDATA[
412 subroutine costf(ind,n,x,f,g,ti,tr,td)
415 <para>with the following declarations:</para>
417 <programlisting role = ""><![CDATA[
419 double precision x(n),f,g(n),td(*)
423 <para>The argument <literal>ind</literal> is described
<para>If ind = 2, 3 or 4, the inputs of the routine are:
<literal>x, ind, n, ti, tr, td</literal>.</para>

<para>If ind = 2, 3 or 4, the outputs of the routine are:
<literal>f</literal> and <literal>g</literal>.</para>
435 <term>* C case</term>
438 <para>The interface of the C function computing the objective
441 <programlisting role = ""><![CDATA[
442 void costf(int *ind, int *n, double *x, double *f, double *g, int *ti, float *tr, double *td)
445 <para>The argument <literal>ind</literal> is described
<para>The inputs and outputs of the function are the same as
in the Fortran case.</para>
457 <para>If <literal>ind=2</literal> (resp. <literal>3, 4</literal>),
458 <literal>costf</literal> must provide <literal>f</literal> (resp.
459 <literal>g, f</literal> and <literal>g</literal>).</para>
461 <para>If <literal>ind=1</literal> nothing is computed (used for display
462 purposes only).</para>
464 <para>On output, <literal>ind<0</literal> means that
465 <literal>f</literal> cannot be evaluated at <literal>x</literal> and
466 <literal>ind=0</literal> interrupts the optimization.</para>
470 <title>Example #1 : Scilab function</title>
<para>The following is an example with a Scilab function. Notice that, for
simplicity, the Scilab function "cost" in the following example
computes the objective function f and its derivative regardless of
the value of ind. This keeps the example simple. In practical
situations, though, the computation of "f" and "g" may raise performance
issues, so a direct optimization is to use the value of "ind" to
compute "f" and "g" only when needed.</para>
480 <programlisting role="example"><![CDATA[
481 // External function written in Scilab
484 function [f,g,ind] = cost(x,ind)
485 f=0.5*norm(x-xref)^2;
490 [f,xopt]=optim(cost,x0)
492 // By conjugate gradient - you can use 'qn', 'gc' or 'nd'
493 [f,xopt,gopt]=optim(cost,x0,'gc')
495 //Seen as non differentiable
496 [f,xopt,gopt]=optim(cost,x0,'nd')
498 // Upper and lower bounds on x
499 [f,xopt,gopt]=optim(cost,'b',[-1;0;2],[0.5;1;4],x0)
501 // Upper and lower bounds on x and setting up the algorithm to 'gc'
502 [f,xopt,gopt]=optim(cost,'b',[-1;0;2],[0.5;1;4],x0,'gc')
// Bound on the number of calls to the objective function
505 [f,xopt,gopt]=optim(cost,'b',[-1;0;2],[0.5;1;4],x0,'gc','ar',3)
// Set max number of calls to the objective function (3)
508 // Set max number of iterations (100)
509 // Set stopping threshold on the value of f (1e-6),
510 // on the value of the norm of the gradient of the objective function (1e-6)
// on the improvement of the parameters x_opt (1e-3;1e-3;1e-3)
512 [f,xopt,gopt]=optim(cost,'b',[-1;0;2],[0.5;1;4],x0,'gc','ar',3,100,1e-6,1e-6,[1e-3;1e-3;1e-3])
514 // Print information messages while optimizing
// Be careful, some messages are printed in a terminal. You must run
// Scilab from the command line to see these messages.
517 [f,xopt]=optim(cost,x0,imp=3)
519 // Use the 'derivative' function to compute the partial
520 // derivatives of the previous problem
521 deff('y=my_f(x)','y=0.5*norm(x-xref)^2');
522 deff('y=my_df(x)','y=derivative(my_f,x)');
523 deff('[f,g,ind]=cost(x,ind)','f=my_f(x); ...
527 xref=[1;2;3];x0=[1;-1;1]
528 [f,xopt]=optim(cost,x0)
533 <title>Example #2 : C function</title>
535 <para>The following is an example with a C function, where a C source code
536 is written into a file, dynamically compiled and loaded into Scilab, and
537 then used by the "optim" solver. The interface of the "rosenc" function is
538 fixed, even if the arguments are not really used in the cost function.
539 This is because the underlying optimization solvers must assume that the
540 objective function has a known, constant interface. In the following
541 example, the arrays ti and tr are not used, only the array "td" is used,
as a parameter of the Rosenbrock function. Notice that the contents of the
arrays ti and td are the same as those of the Scilab variables, as
546 <programlisting role="example"><![CDATA[
547 // External function written in C (C compiler required)
548 // write down the C code (Rosenbrock problem)
549 C=['#include <math.h>'
550 'double sq(double x)'
552 'void rosenc(int *ind, int *n, double *x, double *f, double *g, '
553 ' int *ti, float *tr, double *td)'
558 ' if (*ind==2||*ind==4) {'
560 ' for (i=1;i<*n;i++)'
561 ' *f+=p*sq(x[i]-sq(x[i-1]))+sq(1.0-x[i]);'
563 ' if (*ind==3||*ind==4) {'
564 ' g[0]=-4.0*p*(x[1]-sq(x[0]))*x[0];'
565 ' for (i=1;i<*n-1;i++)'
566 ' g[i]=2.0*p*(x[i]-sq(x[i-1]))-4.0*p*(x[i+1]-sq(x[i]))*x[i]-2.0*(1.0-x[i]);'
567 ' g[*n-1]=2.0*p*(x[*n-1]-sq(x[*n-2]))-2.0*(1.0-x[*n-1]);'
570 mputl(C,TMPDIR+'/rosenc.c')
572 // compile the C code
573 l=ilib_for_link('rosenc','rosenc.o',[],'c',TMPDIR+'/Makefile');
575 // incremental linking
581 [f,xo,go]=optim('rosenc',x0,'td',p)
586 <title>Example #3 : Fortran function</title>
588 <para>The following is an example with a Fortran function.</para>
590 <programlisting role="example"><![CDATA[
591 // External function written in Fortran (Fortran compiler required)
592 // write down the Fortran code (Rosenbrock problem)
593 F=[ ' subroutine rosenf(ind, n, x, f, g, ti, tr, td)'
594 ' integer ind,n,ti(*)'
595 ' double precision x(n),f,g(n),td(*)'
598 ' double precision y,p'
600 ' if (ind.eq.2.or.ind.eq.4) then'
603 ' f=f+p*(x(i)-x(i-1)**2)**2+(1.0d0-x(i))**2'
606 ' if (ind.eq.3.or.ind.eq.4) then'
607 ' g(1)=-4.0d0*p*(x(2)-x(1)**2)*x(1)'
610 ' g(i)=2.0d0*p*(x(i)-x(i-1)**2)-4.0d0*p*(x(i+1)-x(i)**2)*x(i)'
611 ' & -2.0d0*(1.0d0-x(i))'
614 ' g(n)=2.0d0*p*(x(n)-x(n-1)**2)-2.0d0*(1.0d0-x(n))'
619 mputl(F,TMPDIR+'/rosenf.f')
621 // compile the Fortran code
622 l=ilib_for_link('rosenf','rosenf.o',[],'f',TMPDIR+'/Makefile');
624 // incremental linking
630 [f,xo,go]=optim('rosenf',x0,'td',p)
635 <title>Example #4 : Fortran function with initialization</title>
637 <para>The following is an example with a Fortran function in which the
638 "in" option is used to allocate memory inside the Scilab environment. In
639 this mode, there is a dialog between Scilab and the objective function.
640 The goal of this dialog is to initialize the parameters of the objective
641 function. Each part of this dialog is based on a specific value of the
642 "ind" parameter.</para>
<para>At the beginning, Scilab calls the objective function with the ind
parameter equal to 10. This tells the objective function to initialize
the sizes of the arrays it needs by setting the nizs, nrzs and ndzs
integer parameters of the "nird" common. Then the objective function
returns. At this point, Scilab creates internal variables and allocates
memory for the variables izs, rzs and dzs. Scilab calls the objective
function back again, this time with ind equal to 11. This tells the
objective function to initialize the arrays izs, rzs and dzs. When the
objective function has done so, it returns. Then Scilab enters the actual
optimization mode and calls the optimization solver the user requested.
654 Whenever the objective function is called, the izs, rzs and dzs arrays
655 have the values that have been previously initialized.</para>
657 <programlisting role="example"><![CDATA[
659 // Define a fortran source code and compile it (fortran compiler required)
661 fortransource=[' subroutine rosenf(ind,n,x,f,g,izs,rzs,dzs)'
662 'C -------------------------------------------'
663 'c Example of cost function given by a subroutine'
664 'c if n<=2 returns ind=0'
665 'c f.bonnans, oct 86'
666 ' implicit double precision (a-h,o-z)'
668 ' double precision dzs(*)'
669 ' dimension x(n),g(n),izs(*)'
670 ' common/nird/nizs,nrzs,ndzs'
675 ' if(ind.eq.10) then'
681 ' if(ind.eq.11) then'
687 ' if(ind.eq.2)go to 5'
688 ' if(ind.eq.3)go to 20'
689 ' if(ind.eq.4)go to 5'
695 '10 f=f + dzs(2)*(x(i)-x(im1)**2)**2 + (1.0d+0-x(i))**2'
696 ' if(ind.eq.2)return'
697 '20 g(1)=-4.0d+0*dzs(2)*(x(2)-x(1)**2)*x(1)'
702 ' g(i)=2.0d+0*dzs(2)*(x(i)-x(im1)**2)'
703 '30 g(i)=g(i) -4.0d+0*dzs(2)*(x(ip1)-x(i)**2)*x(i) - '
704 ' & 2.0d+0*(1.0d+0-x(i))'
705 ' g(n)=2.0d+0*dzs(2)*(x(n)-x(nm1)**2) - 2.0d+0*(1.0d+0-x(n))'
708 mputl(fortransource,TMPDIR+'/rosenf.f')
// compile the Fortran code
711 libpath=ilib_for_link('rosenf','rosenf.o',[],'f',TMPDIR+'/Makefile');
713 // incremental linking
714 linkid=link(libpath,'rosenf','f');
720 [f,x,g]=optim('rosenf',x0,'in');
725 <title>Example #5 : Fortran function with initialization on Windows with
726 Intel Fortran Compiler</title>
<para>Under the Windows operating system with the Intel Fortran Compiler, one
must carefully design the Fortran source code so that the dynamic link
works properly. On Scilab's side, the optimization component is
dynamically linked and the symbol "nird" is exported out of the
optimization dll. On the cost function's side, which is also dynamically
linked, the "nird" common must be imported in the cost function
<para>The following example is a rewriting of the previous example, with
special attention to the Windows operating system with the Intel Fortran
compiler. In that case, we introduce additional compiler directives,
which allow the compiler to import the "nird"
742 <programlisting role="example"><![CDATA[
743 fortransource=['subroutine rosenf(ind,n,x,f,g,izs,rzs,dzs)'
744 'cDEC$ IF DEFINED (FORDLL)'
745 'cDEC$ ATTRIBUTES DLLIMPORT:: /nird/'
747 'C -------------------------------------------'
748 'c Example of cost function given by a subroutine'
749 'c if n<=2 returns ind=0'
750 'c f.bonnans, oct 86'
751 ' implicit double precision (a-h,o-z)'
757 <title>Example #6 : Logging features</title>
<para>The imp flag may take negative integer values, say k.
In that case, the cost function is called once every -k iterations.
This allows one to plot the function value or write a log file.
764 <para>In the following example, we solve the Rosenbrock test case.
765 For each iteration of the algorithm, we print the value of x, f and g.
768 <programlisting role="example">
771 function [f,g,ind] = cost(x,ind)
772 f=0.5*norm(x-xref)^2;
775 function [f,g,ind] = cost(x,ind)
776 if ind == 2 | ind == 4 then
777 f=0.5*norm(x-xref)^2;
779 if ind == 3 | ind == 4 then
783 mprintf("x = %s\n", strcat(string(x)," "))
784 mprintf("f = %e\n", f)
786 mprintf("g = %s\n", strcat(string(g)," "))
789 [f,xopt]=optim(cost,x0,imp=-1)
792 <para>In the following example, we solve the Rosenbrock test case.
793 For each iteration of the algorithm, we plot the current value of x
794 into a 2D graph containing the contours of Rosenbrock's function.
This allows us to see the progress of the algorithm while it
runs. We could as well write the
value of x, f and g into a log file if needed.
800 <programlisting role="example">
801 // 1. Define rosenbrock
802 function [ f , g , ind ] = rosenbrock ( x , ind )
803 if ((ind == 1) | (ind == 2) | (ind == 4)) then
804 f = 100.0 *(x(2)-x(1)^2)^2 + (1-x(1))^2;
806 if ((ind == 1) | (ind == 3) | (ind == 4)) then
807 g(1) = - 400. * ( x(2) - x(1)**2 ) * x(1) -2. * ( 1. - x(1) )
808 g(2) = 200. * ( x(2) - x(1)**2 )
811 plot ( x(1) , x(2) , "g." )
816 // 2. Draw the contour of Rosenbrock's function
825 stepx = (xmax - xmin)/nx;
826 xdata = xmin:stepx:xmax;
827 stepy = (ymax - ymin)/ny;
828 ydata = ymin:stepy:ymax;
829 for ix = 1:length(xdata)
830 for iy = 1:length(ydata)
831 x = [xdata(ix) ydata(iy)];
832 f = rosenbrock ( x , 2 );
833 zdata ( ix , iy ) = f;
836 contour ( xdata , ydata , zdata , [1 10 100 500 1000])
837 plot(x0(1) , x0(2) , "b.")
838 plot(xopt(1) , xopt(2) , "r*")
839 // 3. Plot the optimization process, during optimization
840 [ fopt , xopt ] = optim ( rosenbrock , x0 , imp = -1)
846 <title>Example #7 : Optimizing without derivatives</title>
<para>It is possible to optimize a problem without explicit
knowledge of the derivative of the cost function.
850 For this purpose, we can use the numdiff or derivative function
851 to compute a numerical derivative of the cost function.
854 <para>In the following example, we use the numdiff function to
855 solve Rosenbrock's problem.
859 <programlisting role="example">
860 function f = rosenbrock ( x )
861 f = 100.0 *(x(2)-x(1)^2)^2 + (1-x(1))^2;
863 function [ f , g , ind ] = rosenbrockCost ( x , ind )
864 if ((ind == 1) | (ind == 2) | (ind == 4)) then
865 f = rosenbrock ( x );
867 if ((ind == 1) | (ind == 3) | (ind == 4)) then
868 g= numdiff ( rosenbrock , x );
872 [ fopt , xopt ] = optim ( rosenbrockCost , x0 )
875 <para>In the following example, we use the derivative function to
solve Rosenbrock's problem. Since the step computation strategy
differs between numdiff and derivative, this might lead to improved
882 <programlisting role="example">
883 function f = rosenbrock ( x )
884 f = 100.0 *(x(2)-x(1)^2)^2 + (1-x(1))^2;
886 function [ f , g , ind ] = rosenbrockCost2 ( x , ind )
887 if ((ind == 1) | (ind == 2) | (ind == 4)) then
888 f = rosenbrock ( x );
890 if ((ind == 1) | (ind == 3) | (ind == 4)) then
891 g= derivative ( rosenbrock , x.' , order = 4 );
895 [ fopt , xopt ] = optim ( rosenbrockCost2 , x0 )
902 <title>See Also</title>
904 <simplelist type="inline">
905 <member><link linkend="external">external</link></member>
907 <member><link linkend="qpsolve">qpsolve</link></member>
909 <member><link linkend="datafit">datafit</link></member>
911 <member><link linkend="leastsq">leastsq</link></member>
913 <member><link linkend="numdiff">numdiff</link></member>
915 <member><link linkend="derivative">derivative</link></member>
917 <member><link linkend="NDcost">NDcost</link></member>
922 <title>References</title>
924 <para>The following is a map from the various options to the underlying
925 solvers, with some comments about the algorithm, when available.</para>
929 <term>"qn" without constraints</term>
<para>n1qn1: a quasi-Newton method with a Wolfe-type line
938 <term>"qn" with bounds constraints</term>
<para>qnbd: a quasi-Newton method with projection</para>
943 <para>RR-0242 - A variant of a projected variable metric method for
944 bound constrained optimization problems, Bonnans Frederic, Rapport
945 de recherche de l'INRIA - Rocquencourt, Octobre 1983</para>
950 <term>"gc" without constraints</term>
<para>n1qn3: a conjugate gradient method with BFGS.</para>
958 <term>"gc" with bounds constraints</term>
<para>gcbd: a BFGS-type method with limited memory and
967 <term>"nd" without constraints</term>
<para>n1fc1: a bundle method</para>
975 <term>"nd" with bounds constraints</term>
978 <para>not available</para>
984 <title>Author</title>
<para>The Modulopt library: J. Frederic Bonnans, Jean-Charles Gilbert, Claude Lemarechal</para>
<para>The interfaces to the Modulopt library: J. Frederic Bonnans</para>
<para>This help: Michael Baudin</para>