From 37885071a93a0fb12c5736242002ef2ae14851b6 Mon Sep 17 00:00:00 2001
From: =?utf8?q?Micha=EBl=20Baudin?=
Date: Tue, 1 Sep 2009 11:02:19 +0200
Subject: [PATCH] Finished help for fminsearch

 .../help/en_US/neldermead/fminsearch.xml           | 240 ++++++++++
 .../help/en_US/optimsimplex/optimsimplex.xml       | 145 ++++++++
 .../macros/optimsimplex/optimsimplex_pfeffer.sci   |   3 +
 .../neldermead/neldermead_kelley.dia.ref           | 127 ----
 4 files changed, 225 insertions(+), 290 deletions(-)
delete mode 100644 scilab/modules/optimization/tests/unit_tests/neldermead/neldermead_kelley.dia.ref
diff --git a/scilab/modules/optimization/help/en_US/neldermead/fminsearch.xml b/scilab/modules/optimization/help/en_US/neldermead/fminsearch.xml
index e5389f3..b95e51b 100644
--- a/scilab/modules/optimization/help/en_US/neldermead/fminsearch.xml
+++ b/scilab/modules/optimization/help/en_US/neldermead/fminsearch.xml
@@ -72,7 +72,8 @@
 options
- A struct which contains configurable options of the algorithm (see below for details).
+ A struct which contains configurable options of the algorithm
+ (see below for details).
@@ -114,10 +115,8 @@
 0

- The maximum number of function evaluations has been
- reached.

+ The maximum number of function evaluations has been
+ reached.
@@ -125,16 +124,12 @@
 1

- The tolerance on the simplex size and function value delta has been
- reached. This signifies that the algorithm has converged, probably to a solution
- of the problem.

+ The tolerance on the simplex size and function value
+ delta has been reached. This signifies that the algorithm has
+ converged, probably to a solution of the problem.


@@ -142,19 +137,16 @@
 output

- A struct which stores detailed information about the exit of the algorithm.
- This struct contains the following fields.

+ A struct which stores detailed information about the exit of
+ the algorithm. This struct contains the following fields.
 output.algorithm

- A string containing the definition of the algorithm used, i.e. 'Nelder-Mead simplex direct search'.

+ A string containing the definition of the algorithm
+ used, i.e. 'Nelder-Mead simplex direct search'.
@@ -162,9 +154,7 @@
 output.funcCount

- The number of function evaluations.

+ The number of function evaluations.
@@ -172,9 +162,7 @@
 output.iterations

- The number of iterations.

+ The number of iterations.
@@ -182,154 +170,158 @@
 output.message

- A string containing a termination message.

+ A string containing a termination message.



 Options

- In this section, we describe the options input argument which
- have an effect on the algorithm used by fminsearch.

- The options input argument is a data structure which drives the
- behaviour of fminsearch. It allows to handle several options in
- a consistent and simple interface, without the problem of
- managing many input arguments.

- The fminsearch function is sensitive to the following options.

+
+ In this section, we describe the options input argument, which
+ has an effect on the algorithm used by fminsearch.
+
+ The options input argument is a data structure which drives the
+ behaviour of fminsearch. It makes it possible to handle several
+ options in a consistent and simple interface, without the problem of
+ managing many input arguments.
+
+ The fminsearch function is sensitive to the following
+ options.
 options.MaxIter
+

- The maximum number of iterations. The default is 200 * n, where n is the
- number of variables.

+ The maximum number of iterations. The default is 200 * n,
+ where n is the number of variables.
 options.MaxFunEvals
+

- The maximum number of evaluations of the cost function. The default is 200 * n,
- where n is the number of variables.

+ The maximum number of evaluations of the cost function. The
+ default is 200 * n, where n is the number of variables.
 options.TolFun
+

- The absolute tolerance on function value. The default value is 1.e-4.

+ The absolute tolerance on function value. The default value is
+ 1.e-4.
 options.TolX
+

- The absolute tolerance on simplex size. The default value is 1.e-4.

+ The absolute tolerance on simplex size. The default value is
+ 1.e-4.


 Termination criteria

- In this section, we describe the termination criteria used by fminsearch.

- The criteria is based on the following variables:

+ In this section, we describe the termination criteria used by
+ fminsearch.
+
+ The criterion is based on the following variables:
+
 ssize
+

- the current simplex size,

+ the current simplex size,
 shiftfv
+

- the absolute value of the difference of function
- value between the highest and lowest vertices.

+ the absolute value of the difference of function value between
+ the highest and lowest vertices.

- If both the following conditions
+
+
+ If both the following conditions
 ssize < options.TolX

- and

+ and
 shiftfv < options.TolFun

- are true, then the iterations stop.

- The size of the simplex is computed using the "sigmaplus" method of the optimsimplex component.
- The "sigmamplus" size is the maximum length of the vector from each vertex to the first
- vertex. It requires one loop over the vertices.

+ are true, then the iterations stop.
+ The size of the simplex is computed using the "sigmaplus" method of
+ the optimsimplex component. The "sigmaplus" size is the maximum length of
+ the vector from each vertex to the first vertex. It requires one loop over
+ the vertices of the simplex.
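As an aside for readers of this patch, the termination test described above can be sketched in a few lines. This is an illustrative Python sketch, not the Scilab implementation; the function name, vertex layout, and default tolerances (1.e-4, matching the documented TolX/TolFun defaults) are chosen for the example only.

```python
import math

def has_converged(simplex, fv, tol_x=1.e-4, tol_fun=1.e-4):
    """Sketch of the termination test: stop when the simplex size and
    the spread of function values are both below their tolerances.

    simplex : list of vertices, first vertex first
    fv      : list of function values at the vertices
    """
    x0 = simplex[0]
    # "sigmaplus" simplex size: the maximum length of the vector from
    # each vertex to the first vertex (one loop over the vertices).
    ssize = max(math.dist(v, x0) for v in simplex[1:])
    # shiftfv: absolute difference of function value between the
    # highest and lowest vertices.
    shiftfv = abs(max(fv) - min(fv))
    # The iterations stop only when both conditions are true.
    return ssize < tol_x and shiftfv < tol_fun
```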
 The initial simplex

- TODO

+
+ The fminsearch algorithm uses a special initial simplex, which is a
+ heuristic depending on the initial guess. The strategy chosen by
+ fminsearch corresponds to the simplex0method flag of the neldermead
+ component, with the "pfeffer" method. It is associated with the
+ simplex0deltausual = 0.05 and simplex0deltazero = 0.0075 parameters.
+ Pfeffer's method is a heuristic which is presented in "Global
+ Optimization Of Lennard-Jones Atomic Clusters" by Ellen Fan. It is due to
+ L. Pfeffer at Stanford. See the help of optimsimplex for more
+ details.
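Pfeffer's heuristic, as described above, perturbs one variable per vertex: a relative step (deltausual) for nonzero components and a small absolute step (deltazero) for zero components. The following is an illustrative Python sketch of that construction, not the Scilab `optimsimplex_pfeffer` implementation; the function name and list-based layout are assumptions for the example.

```python
def pfeffer_simplex(x0, deltausual=0.05, deltazero=0.0075):
    """Sketch of Pfeffer's initial simplex heuristic.

    Vertex #1 is the initial guess x0 itself; vertex #(i+1) perturbs
    only the i-th component of x0.
    """
    simplex = [list(x0)]
    for i in range(len(x0)):
        v = list(x0)
        if v[i] != 0.0:
            # Relative perturbation for nonzero components.
            v[i] = v[i] * (1.0 + deltausual)
        else:
            # Absolute perturbation for zero components.
            v[i] = deltazero
        simplex.append(v)
    return simplex
```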
 The number of iterations

- TODO

+
+ In this section, we present the default values for the number of
+ iterations in fminsearch.
+
+ The options input argument is an optional
+ data structure which can contain the
+ options.MaxIter field. It stores the maximum number
+ of iterations. The default value is 200n, where n is the number of
+ variables. The factor 200 has not been chosen by chance, but is the result
+ of experiments performed against quadratic functions with increasing space
+ dimension.
+
+ This result is presented in "Effect of dimensionality on the
+ Nelder-Mead simplex method" by Lixing Han and Michael Neumann. This paper
+ is based on Lixing Han's PhD, "Algorithms in Unconstrained Optimization".
+ The study is based on numerical experiments with a quadratic function where
+ the number of terms depends on the dimension of the space (i.e. the number
+ of variables). Their study shows that the number of iterations required to
+ reach the tolerance criteria is roughly 100n. Most iterations are based on
+ inside contractions. Since each step of the Nelder-Mead algorithm only
+ requires one or two function evaluations, the number of required function
+ evaluations in this experiment is also roughly 100n.
 Example

- In the following example, we use the fminsearch function to compute
- the minimum of the Rosenbrock function. We first define the function
- "banana", and then use the fminsearch function to search the minimum,
- starting with the initial guess [-1.2 1.0].

+ In the following example, we use the fminsearch function to compute
+ the minimum of the Rosenbrock function. We first define the function
+ "banana", and then use the fminsearch function to search the minimum,
+ starting with the initial guess [-1.2 1.0].
 function y = banana (x)
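The "banana" function defined in the Scilab example is the classic Rosenbrock function. For reference, the same cost function can be sketched in Python as follows (illustrative only; the Scilab example in the patch remains the authoritative code):

```python
def banana(x):
    """Rosenbrock "banana" function; its minimum value is 0 at x = (1, 1)."""
    return 100.0 * (x[1] - x[0] ** 2) ** 2 + (1.0 - x[0]) ** 2
```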
@@ -342,27 +334,26 @@
 Bibliography

- "Sequential Application of Simplex Designs in Optimisation and
- Evolutionary Operation", Spendley, W. and Hext, G. R. and Himsworth, F.
- R., American Statistical Association and American Society for Quality,
- 1962

- "A Simplex Method for Function Minimization", Nelder, J. A. and
- Mead, R., The Computer Journal, 1965

- "Iterative Methods for Optimization", C. T. Kelley, SIAM Frontiers
- in Applied Mathematics, 1999

- "Algorithm AS47 - Function minimization using a simplex procedure",
- O'Neill, R., Applied Statistics, 1971

+ "Sequential Application of Simplex Designs in Optimisation and
+ Evolutionary Operation", Spendley, W. and Hext, G. R. and Himsworth, F.
+ R., American Statistical Association and American Society for Quality,
+ 1962
+
+ "A Simplex Method for Function Minimization", Nelder, J. A. and
+ Mead, R., The Computer Journal, 1965
+
+ "Iterative Methods for Optimization", C. T. Kelley, SIAM Frontiers
+ in Applied Mathematics, 1999
+
+ "Algorithm AS47 - Function minimization using a simplex procedure",
+ O'Neill, R., Applied Statistics, 1971
+
+ "Effect of dimensionality on the Nelder-Mead simplex method", Lixing
+ Han and Michael Neumann, Optimization Methods and Software, 21, 1, 1-16,
+ 2006.
+
+ "Algorithms in Unconstrained Optimization", Lixing Han, Ph.D., The
+ University of Connecticut, 2000.
@@ -370,4 +361,11 @@
 Michael Baudin, 2008-2009
+
+
+ Acknowledgements
+
+ Michael Baudin would like to thank Lixing Han, who kindly sent his
+ PhD thesis.
+
diff --git a/scilab/modules/optimization/help/en_US/optimsimplex/optimsimplex.xml b/scilab/modules/optimization/help/en_US/optimsimplex/optimsimplex.xml
index c90798a..f27c7a2 100644
--- a/scilab/modules/optimization/help/en_US/optimsimplex/optimsimplex.xml
+++ b/scilab/modules/optimization/help/en_US/optimsimplex/optimsimplex.xml
@@ -193,7 +193,7 @@ cen = optimsimplex_xbar ( this , iexcl )
- [ newobj , data ] = optimoptimsimplex_new ( coords , fun , data )
+ [ newobj , data ] = optimsimplex_new ( coords , fun , data )
Creates a new simplex object. All input arguments are
@@ -261,7 +261,7 @@ cen = optimsimplex_xbar ( this , iexcl )
- this = optimoptimsimplex_destroy (this)
+ this = optimsimplex_destroy (this)
Destroy the given object.
@@ -279,7 +279,7 @@ cen = optimsimplex_xbar ( this , iexcl )
- [ this , data ] = optimoptimsimplex_axes ( this , x0 , fun , len , data
+ [ this , data ] = optimsimplex_axes ( this , x0 , fun , len , data
 )
@@ -353,7 +353,7 @@ cen = optimsimplex_xbar ( this , iexcl )
- [ this , data ] = optimoptimsimplex_pfeffer ( this , x0 , fun , deltausual
+ [ this , data ] = optimsimplex_pfeffer ( this , x0 , fun , deltausual
 , deltazero , data )
@@ -435,7 +435,7 @@ cen = optimsimplex_xbar ( this , iexcl )
- [ this , data ] = optimoptimsimplex_randbounds ( this , x0 , fun ,
+ [ this , data ] = optimsimplex_randbounds ( this , x0 , fun ,
 boundsmin , boundsmax , nbpoints , data )
@@ -529,7 +529,7 @@ cen = optimsimplex_xbar ( this , iexcl )
- [ this , data ] = optimoptimsimplex_spendley ( this , x0 , fun , len ,
+ [ this , data ] = optimsimplex_spendley ( this , x0 , fun , len ,
 data )
@@ -602,7 +602,7 @@ cen = optimsimplex_xbar ( this , iexcl )
- this = optimoptimsimplex_setall ( this , simplex )
+ this = optimsimplex_setall ( this , simplex )
 Set all the coordinates and the function values of all the
@@ -647,7 +647,7 @@ cen = optimsimplex_xbar ( this , iexcl )
- this = optimoptimsimplex_setallfv ( this , fv )
+ this = optimsimplex_setallfv ( this , fv )
Set all the function values of all the vertices. The vertex #k
@@ -674,7 +674,7 @@ cen = optimsimplex_xbar ( this , iexcl )
- this = optimoptimsimplex_setallx ( this , x )
+ this = optimsimplex_setallx ( this , x )
Set all the coordinates of all the vertices. The vertex #k is
@@ -1700,6 +1700,40 @@ s1 = optimsimplex_destroy ( s1 );
+ Initial simplex strategies
+
+ In this section, we analyse the various initial simplexes which are
+ provided in this component.
+
+
+ It is known that direct search methods based on simplex designs are very
+ sensitive to the initial simplex. This is why the current component
+ provides various ways to create such an initial simplex.
+
+
+
+ The first historical simplex-based algorithm is the one presented in
+ "Sequential Application of Simplex Designs in Optimisation and
+ Evolutionary Operation" by W. Spendley, G. R. Hext and F. R. Himsworth.
+ The optimsimplex_spendley function creates the regular simplex which is
+ presented in this paper.
+
+
+
+
+ The method used in the optimsimplex_randbounds function is due to M.J. Box
+ in "A New Method of Constrained Optimization and a Comparison With
+ Other Methods".
+
+
+
+ Pfeffer's method is a heuristic which is presented in "Global
+ Optimization Of Lennard-Jones Atomic Clusters" by Ellen Fan. It is due to
+ L. Pfeffer at Stanford and it is used in fminsearch.
+
+
+
+
+
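The regular simplex of Spendley, Hext and Himsworth mentioned above has all edges of equal length. The construction can be sketched as follows; this is an illustrative Python sketch using the classic p/q offsets, not the Scilab `optimsimplex_spendley` implementation, and the function name is an assumption for the example.

```python
import math

def spendley_simplex(x0, length=1.0):
    """Sketch of the regular initial simplex of Spendley et al.

    Vertex #1 is x0; each remaining vertex adds the offset p to one
    coordinate and q to all the others, so that every edge of the
    resulting simplex has the given length.
    """
    n = len(x0)
    p = (n - 1.0 + math.sqrt(n + 1.0)) / (n * math.sqrt(2.0))
    q = (math.sqrt(n + 1.0) - 1.0) / (n * math.sqrt(2.0))
    simplex = [list(x0)]
    for i in range(n):
        v = [x0[j] + length * (p if j == i else q) for j in range(n)]
        simplex.append(v)
    return simplex
```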
Authors
Michael Baudin, 2008-2009
@@ -1708,48 +1742,75 @@ s1 = optimsimplex_destroy ( s1 );
Bibliography
 “Sequential Application of Simplex Designs in Optimisation and
 Evolutionary Operation”, Spendley, W. and Hext, G. R. and Himsworth,
 F. R., American Statistical Association and American Society for Quality,
 1962
+
+ “Sequential Application of Simplex Designs in Optimisation and
+ Evolutionary Operation”, Spendley, W. and Hext, G. R. and Himsworth,
+ F. R., American Statistical Association and American Society for Quality,
+ 1962
+
 "A Simplex Method for Function Minimization", Nelder, J. A. and
 Mead, R. The Computer Journal, January, 1965, 308313
+
+ "A Simplex Method for Function Minimization", Nelder, J. A. and
+ Mead, R., The Computer Journal, January, 1965, 308-313
+
 "A New Method of Constrained Optimization and a Comparison With
 Other Methods", M. J. Box, The Computer Journal 1965 8(1):4252, 1965 by
 British Computer Society
+
+ "A New Method of Constrained Optimization and a Comparison With
+ Other Methods", M. J. Box, The Computer Journal 1965 8(1):42-52, 1965 by
+ British Computer Society
+
 "Iterative Methods for Optimization", C.T. Kelley, 1999, Chapter 6.,
 section 6.2
+
+ "Iterative Methods for Optimization", C.T. Kelley, 1999, Chapter 6.,
+ section 6.2
+
 "Compact Numerical Methods For Computers  Linear Algebra and
 Function Minimization", J.C. Nash, 1990, Chapter 14. Direct Search
 Methods
+
+ "Compact Numerical Methods For Computers - Linear Algebra and
+ Function Minimization", J.C. Nash, 1990, Chapter 14. Direct Search
+ Methods
+
 "Sequential Application of Simplex Designs in Optimisation and
 Evolutionary Operation", W. Spendley, G. R. Hext, F. R. Himsworth,
 Technometrics, Vol. 4, No. 4 (Nov., 1962), pp. 441461, Section 3.1
+
+ "Sequential Application of Simplex Designs in Optimisation and
+ Evolutionary Operation", W. Spendley, G. R. Hext, F. R. Himsworth,
+ Technometrics, Vol. 4, No. 4 (Nov., 1962), pp. 441-461, Section 3.1
+
 "A New Method of Constrained Optimization and a Comparison With
 Other Methods", M. J. Box, The Computer Journal 1965 8(1):4252, 1965 by
 British Computer Society
+
+ "A New Method of Constrained Optimization and a Comparison With
+ Other Methods", M. J. Box, The Computer Journal 1965 8(1):42-52, 1965 by
+ British Computer Society
+
 “Detection and Remediation of Stagnation in the NelderMead
 Algorithm Using a Sufficient Decrease Condition”, SIAM J. on
 Optimization, Kelley,, C. T., 1999
+
+ “Detection and Remediation of Stagnation in the Nelder-Mead
+ Algorithm Using a Sufficient Decrease Condition”, SIAM J. on
+ Optimization, Kelley, C. T., 1999
+
 " MultiDirectional Search: A Direct Search Algorithm for Parallel
 Machines", by E. Boyd, Kenneth W. Kennedy, Richard A. Tapia, Virginia
 Joanne Torczon,, Virginia Joanne Torczon, 1989, Phd Thesis, Rice
 University
+
+ "Multi-Directional Search: A Direct Search Algorithm for Parallel
+ Machines", by E. Boyd, Kenneth W. Kennedy, Richard A. Tapia, Virginia
+ Joanne Torczon, 1989, PhD Thesis, Rice University
+
 "Grid Restrained NelderMead Algorithm", Árpád Bũrmen, Janez
 Puhan, Tadej Tuma, Computational Optimization and Applications, Volume 34
 , Issue 3 (July 2006), Pages: 359  375
+
+ "Grid Restrained Nelder-Mead Algorithm", Árpád Bűrmen, Janez
+ Puhan, Tadej Tuma, Computational Optimization and Applications, Volume 34,
+ Issue 3 (July 2006), Pages: 359-375
+
 "A convergent variant of the NelderMead algorithm", C. J. Price, I.
 D. Coope, D. Byatt, Journal of Optimization Theory and Applications,
 Volume 113 , Issue 1 (April 2002), Pages: 5  19,
+
+ "A convergent variant of the Nelder-Mead algorithm", C. J. Price, I.
+ D. Coope, D. Byatt, Journal of Optimization Theory and Applications,
+ Volume 113, Issue 1 (April 2002), Pages: 5-19
+
+
+
+ "Global Optimization Of Lennard-Jones Atomic Clusters",
+ Ellen Fan, Thesis, February 26, 2002, McMaster University
+
diff --git a/scilab/modules/optimization/macros/optimsimplex/optimsimplex_pfeffer.sci b/scilab/modules/optimization/macros/optimsimplex/optimsimplex_pfeffer.sci
index 38be04c..4abc7d5 100644
--- a/scilab/modules/optimization/macros/optimsimplex/optimsimplex_pfeffer.sci
+++ b/scilab/modules/optimization/macros/optimsimplex/optimsimplex_pfeffer.sci
@@ -18,6 +18,9 @@
 // deltausual : the absolute delta for nonzero values
 // deltazero : the absolute delta for zero values
 // data : user-defined data
+// References
+// "Global Optimization Of Lennard-Jones Atomic Clusters"
+// Ellen Fan, Thesis, February 26, 2002, McMaster University
//
function [ this , data ] = optimsimplex_pfeffer ( this , x0 , fun , deltausual , deltazero , data )
if (~isdef('deltausual','local')) then
diff --git a/scilab/modules/optimization/tests/unit_tests/neldermead/neldermead_kelley.dia.ref b/scilab/modules/optimization/tests/unit_tests/neldermead/neldermead_kelley.dia.ref
deleted file mode 100644
index 4c7e26b..0000000
--- a/scilab/modules/optimization/tests/unit_tests/neldermead/neldermead_kelley.dia.ref
+++ /dev/null
@@ -1,127 +0,0 @@
// Scilab ( http://www.scilab.org/ )  This file is part of Scilab
// Copyright (C) 20082009  INRIA  Michael Baudin
//
// This file must be used under the terms of the CeCILL.
// This source file is licensed as described in the file COPYING, which
// you should have received as part of this distribution. The terms
// are also available at
// http://www.cecill.info/licences/Licence_CeCILL_V2en.txt
tbxhome=getenv("SCILABTBX");
loader = tbxhome + filesep() + 'neldermead'+ filesep() + 'loader.sce';

 Start NelderMead Toolbox

 Load macros from :

 D:\Baudin\PROJET~1\TOOLBO~1\NELDER~1\macros\

 Load help
//
// assert_close 
// Returns 1 if the two real matrices computed and expected are close,
// i.e. if the relative distance between computed and expected is lesser than epsilon.
// Arguments
// computed, expected : the two matrices to compare
// epsilon : a small number
//
function flag = assert_close ( computed, expected, epsilon )
 if expected==0.0 then
 shift = norm(computed-expected);
 else
 shift = norm(computed-expected)/norm(expected);
 end
 if shift < epsilon then
 flag = 1;
 else
 flag = 0;
 end
 if flag <> 1 then bugmes();quit;end
endfunction
function y = rosenbrock (x)
//global index
//index = index + 1
y = 100*(x(2)-x(1)^2)^2 + (1-x(1))^2;
//mprintf("call #%d, x=(%f,%f), y=%f\n",index,x(1),x(2),y);
endfunction
//global index
//index = 0
x0 = [];
P0 = [1.1 1.1]';
x0(:,1) = P0;
x0(:,2) = P0 + 0.1 * [1 0]';
x0(:,3) = P0 + 0.1 * [0 1]';
x_opt = neldermead_kelley(x0, rosenbrock, tol=1.e-16, maxit=10000,budget=200,verbose=0,oshrink=0);
computed = x_opt(:,1);
assert_close ( computed, [1;1], 1e8 );
// Add verbose and see if it still works
x_opt = neldermead_kelley(x0, rosenbrock, tol=1.e-16, maxit=10000,budget=20,verbose=1,oshrink=0);
x0=
tol=0.000000
maxit=10000.000000
budget=20.000000
Step #1 : order
Step #2 : reflect
xbar=1.1 1.15
xrho=1 1.2, f(xrho)=4.000000
Step #4 : contract - outside
xc=1.05 1.175, f(xc)=0.528125
 > Accept xc
Step #1 : order
Step #2 : reflect
xbar=1.075 1.1875
xrho=1.05 1.275, f(xrho)=2.978125
Step #4 : contract - inside
xcc=1.0875 1.14375, f(xcc)=0.159026
 > Accept xcc
Step #1 : order
Step #2 : reflect
xbar=1.09375 1.171875
xrho=1.1375 1.16875, f(xrho)=1.585315
Step #4 : contract - inside
xcc=1.071875 1.1734375, f(xcc)=0.065296
 > Accept xcc
Step #1 : order
Step #2 : reflect
xbar=1.0859375 1.1867188
xrho=1.084375 1.2296875, f(xrho)=0.296761
Step #4 : contract - inside
xcc=1.0867188 1.1652344, f(xcc)=0.032242
 > Accept xcc
Step #1 : order
Step #2 : reflect
xbar=1.0933594 1.1826172
xrho=1.1148438 1.1917969, f(xrho)=0.274103
Step #4 : contract - inside
xcc=1.0826172 1.1780273, f(xcc)=0.010387
 > Accept xcc
Step #1 : order
Step #2 : reflect
xbar=1.0913086 1.1890137
xrho=1.0958984 1.212793, f(xrho)=0.023120
Step #4 : contract - outside
xc=1.0936035 1.2009033, f(xc)=0.011197
 > Accept xc
Step #1 : order
Step #2 : reflect
xbar=1.0881104 1.1894653
xrho=1.0762207 1.1789307, f(xrho)=0.048574
Step #4 : contract - inside
xcc=1.0940552 1.1947327, f(xcc)=0.009341
 > Accept xcc
Step #1 : order
Step #2 : reflect
xbar=1.0883362 1.18638
xrho=1.0830688 1.1718567, f(xrho)=0.007040
Step #3 : expand
xe=1.0778015 1.1573334, f(xe)=0.007922
 > Accept xr
Step #1 : order
Step #2 : reflect
xbar=1.088562 1.1832947
xrho=1.0945068 1.188562, f(xrho)=0.017736
Step #4 : contract - inside
xcc=1.0855896 1.180661, f(xcc)=0.007791
 > Accept xcc
Step #1 : order
computed = x_opt(:,1);
assert_close ( computed, [1;1], 1e0 );
-- 
1.7.9.5