leastsq

Solves non-linear least squares problems

Calling Sequence

[fopt,[xopt,[grdopt]]]=leastsq(fun, x0)
[fopt,[xopt,[grdopt]]]=leastsq(fun, dfun, x0)
[fopt,[xopt,[grdopt]]]=leastsq(fun, cstr, x0)
[fopt,[xopt,[grdopt]]]=leastsq(fun, dfun, cstr, x0)
[fopt,[xopt,[grdopt]]]=leastsq(fun, dfun, cstr, x0, algo)
[fopt,[xopt,[grdopt]]]=leastsq([imp,] fun [,dfun] [,cstr], x0 [,algo] [,df0 [,mem]] [,stop])

Parameters

fopt
    value of the function f(x)=||fun(x)||^2 at xopt.

xopt
    best value of x found to minimize ||fun(x)||^2.

grdopt
    gradient of f at xopt.

fun
    a Scilab function or a list defining a function from R^n to R^m (see more details in DESCRIPTION).

x0
    real vector (initial guess of the variable to be minimized).

dfun
    a Scilab function or a string defining the Jacobian matrix of fun (see more details in DESCRIPTION).

cstr
    bound constraints on x. They must be introduced by the string keyword 'b', followed by the lower bound binf, then by the upper bound bsup (so cstr appears as 'b',binf,bsup in the calling sequence). These bounds are real vectors with the same dimension as x0 (-%inf and +%inf may be used for components which are unrestricted).

algo
    a string with possible values 'qn', 'gc' or 'nd'. These strings stand for quasi-Newton (default), conjugate gradient and non-differentiable, respectively. Note that 'nd' does not accept bounds on x.

imp
    scalar argument used to set the trace mode: with imp=0 nothing (except errors) is reported, imp=1 gives initial and final reports, imp=2 adds a report per iteration, and imp>2 adds reports on the linear search. Warning: most of these reports are written on the Scilab standard output.

df0
    real scalar. Guessed decrease of ||fun||^2 at the first iteration (df0=1 is the default value).

mem
    integer, number of variables used to approximate the Hessian (second derivatives) of f when algo='qn'. The default value is around 6.

stop
    sequence of optional parameters controlling the convergence of the algorithm. They are introduced by the keyword 'ar', the sequence being of the form 'ar',nap [,iter [,epsg [,epsf [,epsx]]]]

    nap
        maximum number of calls to fun allowed.

    iter
        maximum number of iterations allowed.

    epsg
        threshold on the gradient norm.

    epsf
        threshold controlling the decrease of f.

    epsx
        threshold controlling the variation of x. This vector (possibly matrix) of the same size as x0 can be used to scale x.

Description

fun being a function from
R^n to R^m, this routine tries to minimize, with respect to x, the function

    f(x) = ||fun(x)||^2 = sum_(i=1..m) fun_i(x)^2

which is the sum of the squares of the components of fun. Bound constraints may be imposed on x.

How to provide fun and dfun

fun can be either a usual Scilab function (case 1) or a Fortran or C routine linked to Scilab (case 2). For most problems the definition of fun will need supplementary parameters, and this can be done in both cases.

case 1: when fun is a Scilab function, its calling sequence must be y=fun(x [,opt_par1,opt_par2,...]). When fun needs optional parameters, it must appear as list(fun,opt_par1,opt_par2,...) in the calling sequence of leastsq.

case 2: when fun is defined by a Fortran or C
routine, it must appear as list(fun_name,m [,opt_par1,opt_par2,...]) in the calling sequence of leastsq, fun_name (a string) being the name of the routine, which must be linked to Scilab (see link). The generic calling sequences for this routine are:

    subroutine fun(m, n, x, params, y)                               (Fortran case)

    void fun(int *m, int *n, double *x, double *params, double *y)   (C case)

where n is the dimension of vector
x, m the dimension of vector y (which must store the evaluation of fun at x), and params is a vector which contains the optional parameters opt_par1, opt_par2, ... (each parameter may be a vector; for instance, if opt_par1 has 3 components, the description of opt_par2 begins at params(4) in the Fortran case and at params[3] in the C case, etc.). Note that even if fun does not need supplementary parameters, you must still write the code with a params argument (which is then unused in the routine body).

In many cases it is advised to provide the Jacobian matrix
dfun (dfun(i,j)=dfi/dxj) to the optimizer (which otherwise uses a finite difference approximation), and as for fun it may be given as a usual Scilab function or as a Fortran or C routine linked to Scilab.

case 1: when dfun is a Scilab function, its calling sequence must be y=dfun(x [, optional parameters]) (note that even if dfun needs optional parameters, it must appear simply as dfun in the calling sequence of leastsq).

case 2: when dfun is defined by a Fortran or C routine, it must appear as dfun_name (a string) in the calling sequence of leastsq (dfun_name being the name of the routine, which must be linked to Scilab). The calling sequences for this routine are nearly the same as for fun:

    subroutine dfun(m, n, x, params, y)                               (Fortran case)

    void dfun(int *m, int *n, double *x, double *params, double *y)   (C case)

In the C case, dfun(i,j)=dfi/dxj must be stored in y[m*(j-1)+i-1].

Remarks

Like datafit,
leastsq is a front end to the optim function. If you want to try the Levenberg-Marquardt method instead, use lsqrsolve.

A least squares problem may be solved directly with the optim function; in this case the function NDcost may be useful to compute the derivatives (see the NDcost help page, which provides a simple example of parameter identification for a differential equation).

Examples

// fit y = x(1)*exp(-x(2)*t) on weighted data, with fun and its Jacobian
// coded in C (the data below are illustrative values)
m = 10;
tm = linspace(0, 2, m)';     // measurement times
ym = 2*exp(-0.5*tm);         // measurements
wm = ones(m,1);              // weights
x0 = [1;1];                  // initial guess
// write the C code for fun and dfun
c_code = ["#include <math.h>"
"void myfunc(int *m,int *n, double *x, double *param, double *f)"
"{"
" /* param[i] = tm[i], param[m+i] = ym[i], param[2m+i] = wm[i] */"
" int i;"
" for ( i = 0 ; i < *m ; i++ )"
" f[i] = param[2*(*m)+i]*( x[0]*exp(-x[1]*param[i]) - param[(*m)+i] );"
" return;"
"}"
""
"void mydfunc(int *m,int *n, double *x, double *param, double *df)"
"{"
" /* param[i] = tm[i], param[m+i] = ym[i], param[2m+i] = wm[i] */"
" int i;"
" for ( i = 0 ; i < *m ; i++ )"
" {"
" df[i] = param[2*(*m)+i]*exp(-x[1]*param[i]);"
" df[i+(*m)] = -x[0]*param[i]*df[i];"
" }"
" return;"
"}"];
mputl(c_code,TMPDIR+'/myfunc.c')
// compile it. You need a C compiler!
names = ["myfunc" "mydfunc"]
clibname = ilib_for_link(names,"myfunc.o",[],"c",TMPDIR+"/Makefile");
// link it to Scilab (see the link help page)
link(clibname,names,"c")
// ready for the leastsq call
[f,xopt, gropt] = leastsq(list("myfunc",m,tm,ym,wm),"mydfunc",x0)
See Also

lsqrsolve, optim, NDcost, datafit, external, qpsolve