* Scilab startup script of the binary is now automatically synchronized with
the one from the source tree.
+Optimization:
+=============
+
+* Improved the karmarkar linear optimization solver:
+ * Let the user configure an output function.
+ * Put the number of iterations as an output argument.
+ * Put x0 as an optional argument.
+ * Manage inequality constraints.
+ * Manage bounds.
+ * Compute the exitflag.
+ * Compute the Lagrange multipliers.
+ * Provide documentation for the rtolf and gam parameters.
+ * Provide more examples.
+ * Provide documentation for the stopping rule.
+
+* bug 7193 fixed - The karmarkar help page did not document the
+ parameters eps, gamma, and crit.
+
+* bug 8719 fixed - The karmarkar function printed unwanted messages.
+
+* bug 8720 fixed - The karmarkar function stopped too early in the iterations.
+
+* bug 8726 fixed - The karmarkar function produced a division-by-zero error.
+
+* bug 8727 fixed - The karmarkar function required the initial guess x0.
+
+* bug 8775 fixed - The karmarkar function did not detect unbounded problems.
Bug fixes:
==========
<!--
* Scilab ( http://www.scilab.org/ ) - This file is part of Scilab
* Copyright (C) 2008 - INRIA
+ * Copyright (C) 2010 - DIGITEO - Michael Baudin
*
* This file must be used under the terms of the CeCILL.
* This source file is licensed as described in the file COPYING, which
<refnamediv>
<refname>karmarkar</refname>
- <refpurpose>karmarkar algorithm</refpurpose>
+ <refpurpose>Solves a linear optimization problem.</refpurpose>
</refnamediv>
<refsynopsisdiv>
<title>Calling Sequence</title>
- <synopsis>[x1]=karmarkar(a,b,c,x0)</synopsis>
+ <synopsis>
+ xopt=karmarkar(Aeq,beq,c)
+ xopt=karmarkar(Aeq,beq,c,x0)
+ xopt=karmarkar(Aeq,beq,c,x0,rtolf)
+ xopt=karmarkar(Aeq,beq,c,x0,rtolf,gam)
+ xopt=karmarkar(Aeq,beq,c,x0,rtolf,gam,maxiter)
+ xopt=karmarkar(Aeq,beq,c,x0,rtolf,gam,maxiter,outfun)
+ xopt=karmarkar(Aeq,beq,c,x0,rtolf,gam,maxiter,outfun,A,b)
+ xopt=karmarkar(Aeq,beq,c,x0,rtolf,gam,maxiter,outfun,A,b,lb)
+ xopt=karmarkar(Aeq,beq,c,x0,rtolf,gam,maxiter,outfun,A,b,lb,ub)
+ [xopt,fopt] = karmarkar(...)
+ [xopt,fopt,exitflag] = karmarkar(...)
+ [xopt,fopt,exitflag,iter] = karmarkar(...)
+ [xopt,fopt,exitflag,iter,yopt] = karmarkar(...)
+ </synopsis>
</refsynopsisdiv>
<refsection>
<variablelist>
<varlistentry>
- <term>a</term>
+ <term>Aeq</term>
+ <listitem>
+ <para>
+ an n-by-p matrix of doubles, where <literal>n</literal> is the number
+ of linear equality constraints
+ and <literal>p</literal> is the number of unknowns,
+ the matrix in the linear equality constraints.
+ </para>
+ </listitem>
+ </varlistentry>
+ <varlistentry>
+ <term>beq</term>
<listitem>
- <para>matrix (n,p)</para>
+ <para>
+ an n-by-1 matrix of doubles,
+ the right-hand side of the linear equality constraints.
+ </para>
</listitem>
</varlistentry>
<varlistentry>
- <term>b</term>
+ <term>c</term>
+ <listitem>
+ <para>
+ a p-by-1 matrix of doubles,
+ the linear part of the objective function.
+ </para>
+ </listitem>
+ </varlistentry>
+ <varlistentry>
+ <term>x0</term>
<listitem>
- <para>n - vector</para>
+ <para>
+ a p-by-1 matrix of doubles, the initial guess (default x0=[]).
+ If x0==[], the karmarkar function automatically computes a feasible initial
+ guess.
+ </para>
</listitem>
</varlistentry>
<varlistentry>
- <term>c</term>
+ <term>rtolf</term>
+ <listitem>
+ <para>
+ a 1-by-1 matrix of doubles,
+ a relative tolerance on <literal>f(x)=c'*x</literal> (default rtolf=1.d-5).
+ </para>
+ </listitem>
+ </varlistentry>
+ <varlistentry>
+ <term>gam</term>
<listitem>
- <para>p - vector</para>
+ <para>
+ a 1-by-1 matrix of doubles, the scaling factor (default gam=0.5).
+ The scaling factor must satisfy <literal>0&lt;gam&lt;1</literal>.
+ </para>
</listitem>
</varlistentry>
<varlistentry>
- <term>x0</term>
+ <term>maxiter</term>
+ <listitem>
+ <para>
+ a 1-by-1 matrix of floating point integers, the maximum number of iterations (default maxiter=200).
+ The maximum number of iterations must be greater than 1.
+ </para>
+ </listitem>
+ </varlistentry>
+ <varlistentry>
+ <term>outfun</term>
<listitem>
- <para>initial vector</para>
+ <para>
+ a function or a list, the output function. See below for details.
+ </para>
</listitem>
</varlistentry>
<varlistentry>
- <term>eps</term>
+ <term>A</term>
+ <listitem>
+ <para>
+ an ni-by-p matrix of doubles, the matrix of linear inequality constraints.
+ </para>
+ </listitem>
+ </varlistentry>
+ <varlistentry>
+ <term>b</term>
<listitem>
- <para>threshold (default value : 1.d-5)</para>
+ <para>
+ an ni-by-1 matrix of doubles, the right-hand side of linear inequality constraints.
+ </para>
</listitem>
</varlistentry>
<varlistentry>
- <term>gamma</term>
+ <term>lb</term>
+ <listitem>
+ <para>
+ a p-by-1 matrix of doubles, the lower bounds.
+ </para>
+ </listitem>
+ </varlistentry>
+
+ <varlistentry>
+ <term>ub</term>
+ <listitem>
+ <para>
+ a p-by-1 matrix of doubles, the upper bounds.
+ </para>
+ </listitem>
+ </varlistentry>
+ <varlistentry>
+ <term>xopt</term>
<listitem>
- <para>descent step <literal>0<gamma<1</literal> , default
- value : 1/4</para>
+ <para>a p-by-1 matrix of doubles, the solution.</para>
</listitem>
</varlistentry>
<varlistentry>
- <term>x1</term>
+ <term>fopt</term>
+ <listitem>
+ <para>
+ a 1-by-1 matrix of doubles, the objective function
+ value at optimum, i.e. <literal>fopt=c'*xopt</literal>.
+ </para>
+ </listitem>
+ </varlistentry>
+ <varlistentry>
+ <term>exitflag</term>
<listitem>
- <para>solution</para>
+ <para>
+ a 1-by-1 matrix of floating point integers, the status of execution.
+ See below for details.
+ </para>
</listitem>
</varlistentry>
<varlistentry>
- <term>crit</term>
+ <term>iter</term>
+ <listitem>
+ <para>
+ a 1-by-1 matrix of floating point integers, the number of iterations.
+ </para>
+ </listitem>
+ </varlistentry>
+ <varlistentry>
+ <term>yopt</term>
<listitem>
- <para>value of c'*x1</para>
+ <para>
+ a <literal>struct</literal> containing the dual solution.
+ The structure yopt has four fields: ineqlin, eqlin, upper, lower.
+ The field yopt.ineqlin is the Lagrange multiplier for the inequality constraints.
+ The field yopt.eqlin is the Lagrange multiplier for the equality constraints.
+ The field yopt.upper is the Lagrange multiplier for the upper bounds.
+ The field yopt.lower is the Lagrange multiplier for the lower bounds.
+ </para>
</listitem>
</varlistentry>
+
</variablelist>
</refsection>
<refsection>
<title>Description</title>
- <para>Computes <literal>x</literal> which minimizes</para>
+ <para>
+ Computes the solution of linear programming problems.
+ </para>
+
+ <para>
+ This function has the following two modes.
+ </para>
+
+ <itemizedlist>
+ <listitem>
+ <para>
+ If no inequality constraints and no bounds are given
+ (i.e. if <literal>(A==[] &amp; lb==[] &amp; ub==[])</literal>),
+ the function ensures that the variable is nonnegative.
+ </para>
+ </listitem>
+ <listitem>
+ <para>
+ If any inequality constraint or bound is given
+ (i.e. if <literal>(A&lt;&gt;[] | lb&lt;&gt;[] | ub&lt;&gt;[])</literal>),
+ the function takes this inequality or bound into account
+ (and does not ensure that the variable is nonnegative).
+ </para>
+ </listitem>
+ </itemizedlist>
+
+ <para>
+ If no inequality constraints and no bounds are given
+ (i.e. if <literal>(A==[] &amp; lb==[] &amp; ub==[])</literal>),
+ solves the linear optimization problem:
+ </para>
+
+ <para>
+ <latex>
+ \begin{eqnarray}
+ \begin{array}{l}
+ \textrm{minimize } c^T \cdot x\\
+ A_{eq} x = b_{eq}\\
+ x \geq 0
+ \end{array}
+ \end{eqnarray}
+ </latex>
+ </para>
+
+ <para>
+ If any inequality constraint or bound is given
+ (i.e. if <literal>(A&lt;&gt;[] | lb&lt;&gt;[] | ub&lt;&gt;[])</literal>),
+ solves the linear optimization problem:
+ </para>
+
+ <para>
+ <latex>
+ \begin{eqnarray}
+ \begin{array}{l}
+ \textrm{minimize } c^T \cdot x\\
+ A_{eq} x = b_{eq}\\
+ A x \leq b\\
+ \ell_b \leq x \leq u_b
+ \end{array}
+ \end{eqnarray}
+ </latex>
+ </para>
+
+ <para>
+ Any optional parameter equal to the empty matrix <literal>[]</literal> is replaced by
+ its default value.
+ </para>
+
+ <para>
+ The <literal>exitflag</literal> output argument indicates why the algorithm
+ terminated.
+ </para>
+
+ <itemizedlist>
+
+ <listitem>
+ <para>
+ <literal>exitflag = 1</literal> if algorithm converged.
+ </para>
+ </listitem>
+
+ <listitem>
+ <para>
+ <literal>exitflag = 0</literal> if maximum number of iterations was reached.
+ </para>
+ </listitem>
+
+ <listitem>
+ <para>
+ <literal>exitflag = -1</literal> if no feasible point was found.
+ </para>
+ </listitem>
+
+ <listitem>
+ <para>
+ <literal>exitflag = -2</literal> if problem is unbounded.
+ </para>
+ </listitem>
+
+ <listitem>
+ <para>
+ <literal>exitflag = -3</literal> if search direction became zero.
+ </para>
+ </listitem>
+
+ <listitem>
+ <para>
+ <literal>exitflag = -4</literal> if algorithm stopped on user's request.
+ </para>
+ </listitem>
+
+ </itemizedlist>
+
+ <para>
+ The output function <literal>outfun</literal> must have the header
+ </para>
+
+ <programlisting role="example">
+ <![CDATA[
+stop = outfun ( xopt , optimValues , state )
+]]>
+ </programlisting>
+
+ <para>
+ where
+ <literal>xopt</literal> is a p-by-1 matrix of doubles representing
+ the current point,
+ <literal>optimValues</literal> is a <literal>struct</literal>,
+ <literal>state</literal> is a 1-by-1 matrix of strings.
+ Here, <literal>stop</literal> is a 1-by-1 matrix of booleans,
+ which is <literal>%t</literal> if the algorithm must stop.
+ </para>
+
+ <para>
+ <literal>optimValues</literal> has the following fields:
+ </para>
+
+ <itemizedlist>
+
+ <listitem>
+ <para>
+ <literal>optimValues.funccount</literal>: a 1-by-1 matrix of floating point integers,
+ the number of function evaluations
+ </para>
+ </listitem>
+
+ <listitem>
+ <para>
+ <literal>optimValues.fval</literal>: a 1-by-1 matrix of doubles, the best function value
+ </para>
+ </listitem>
+
+ <listitem>
+ <para>
+ <literal>optimValues.iteration</literal>: a 1-by-1 matrix of floating point integers,
+ the current iteration number
+ </para>
+ </listitem>
+
+ <listitem>
+ <para>
+ <literal>optimValues.procedure</literal>: a 1-by-1 matrix of strings, the type of step performed.
+ </para>
+ </listitem>
+
+ <listitem>
+ <para>
+ <literal>optimValues.dualgap</literal>: a 1-by-1 matrix of doubles, the duality gap, i.e.
+ <literal>abs(yopt'*beq - fopt)</literal>.
+ At optimum, the duality gap is zero.
+ </para>
+ </listitem>
+
+ </itemizedlist>
+
+ <para>
+ The <literal>optimValues.procedure</literal> field can have the following values.
+ </para>
+
+ <itemizedlist>
+
+ <listitem>
+ <para>
+ If <literal>optimValues.procedure="x0"</literal>, then the algorithm is computing the initial
+ feasible guess <literal>x0</literal> (phase 1).
+ </para>
+ </listitem>
+
+ <listitem>
+ <para>
+ If <literal>optimValues.procedure="x*"</literal>, then the algorithm is computing the solution
+ of the linear program (phase 2).
+ </para>
+ </listitem>
+
+ </itemizedlist>
+
+
+ <para>
+ It might happen that the output function requires additional arguments to be evaluated.
+ In this case, we can use the following feature.
+ The function <literal>outfun</literal> can also be the list <literal>(outf,a1,a2,...)</literal>.
+ In this case <literal>outf</literal>, the first element in the list, must have the header:
+ </para>
+
+ <programlisting role="example">
+ <![CDATA[
+ stop = outf ( xopt , optimValues , state , a1 , a2 , ... )
+]]>
+ </programlisting>
+
+ <para>
+ where the input arguments <literal>a1, a2, ...</literal>
+ will automatically be appended at the end of the calling sequence.
+ </para>
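+
+ <para>
+ As an illustration (this is a sketch: the function name
+ <literal>outfstop</literal> and its extra argument <literal>tol</literal>
+ are hypothetical, not part of Scilab), the following output function
+ receives an additional tolerance and requests the algorithm to stop as
+ soon as the duality gap falls below it.
+ </para>
+
+ <programlisting role="example">
+ <![CDATA[
+function stop = outfstop ( xopt , optimValues , state , tol )
+ // Stop the iterations when the duality gap falls below tol.
+ stop = ( optimValues.dualgap < tol )
+endfunction
+// The extra argument tol is passed through a list:
+xopt = karmarkar(Aeq,beq,c,[],[],[],[],list(outfstop,1.e-8))
+]]>
+ </programlisting>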
- <informalequation>
- <mediaobject>
- <imageobject>
- <imagedata align="center" fileref="../mml/karmarkar_equation_1.mml" />
- </imageobject>
- </mediaobject>
- </informalequation>
</refsection>
<refsection>
- <title>Examples</title>
+ <title>Stopping rule</title>
+
+ <para>
+ The stopping rule is based on the number of iterations,
+ the relative tolerance on the function value, the duality gap and
+ the user's output function.
+ </para>
+
+ <para>
+ In both phase 1 and phase 2 of the algorithm, we check the
+ duality gap and the boolean:
+ </para>
+
+ <programlisting role="example">
+ <![CDATA[
+dualgap > 1.e5 * dualgapmin
+]]>
+</programlisting>
+
+ <para>
+ where <literal>dualgap</literal> is the absolute value of the duality gap, and
+ <literal>dualgapmin</literal> is the minimum absolute duality gap measured during
+ this step of the algorithm.
+ The duality gap is computed from
+ </para>
+
+ <programlisting role="example">
+ <![CDATA[
+dualgap = abs(yopt'*beq - fopt)
+]]>
+</programlisting>
+
+ <para>
+ where <literal>yopt</literal> is the Lagrange multiplier, <literal>beq</literal>
+ is the right-hand side of the equality constraints and <literal>fopt</literal>
+ is the minimum function value attained in the current phase.
+ </para>
+
+ <para>
+ During the second phase of the algorithm (i.e. once <literal>x0</literal> is
+ determined), the termination condition for the function value is based on the boolean:
+ </para>
+
+ <programlisting role="example">
+ <![CDATA[
+ abs(fprev-fopt)<=rtolf*abs(fprev)
+]]>
+ </programlisting>
+
+ <para>
+ where <literal>fprev</literal> is the previous function value,
+ <literal>fopt</literal> is the current function value and
+ <literal>rtolf</literal> is the relative tolerance on the function value.
+ </para>
+
+ </refsection>
- <programlisting role="example"><![CDATA[
-n=10;p=20;
-a=rand(n,p);
+ <refsection>
+ <title>Implementation notes</title>
+
+ <para>
+ The implementation is based on the primal affine scaling algorithm, as
+ discovered by Dikin in 1967, and then re-discovered by Barnes and Vanderbei et al in 1986.
+ </para>
+
+ <para>
+ If the scaling factor <literal>gam</literal> is close to 1 (say <literal>gam=0.99</literal>,
+ for example), then the number of iterations may be lower.
+ Tsuchiya and Muramatsu proved that, if an optimal solution exists, then, for any
+ <literal>gam</literal> lower than 2/3, the sequence converges to a point in the relative interior of the optimal face.
+ Dikin proved convergence with <literal>gam=1/2</literal>.
+ Mascarenhas found two examples where the parameter <literal>gam=0.999</literal> makes the
+ algorithm converge to a vertex which is not optimal, when the initial guess is chosen
+ appropriately.
+ </para>
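+
+ <para>
+ For reference, one iteration of the primal affine scaling method on the
+ standard form problem (equality constraints and nonnegative variables)
+ can be sketched as follows. This is a simplified illustration of the
+ scaling and projection steps, not the actual implementation, which adds
+ the stopping tests described above.
+ </para>
+
+ <programlisting role="example">
+ <![CDATA[
+// One simplified primal affine scaling step, starting from a
+// strictly feasible point x with Aeq*x==beq and x>0.
+ax = Aeq .* (ones(size(Aeq,1),1)*x'); // Aeq*diag(x)
+xc = x .* c; // scaled cost
+y = ax' \ xc; // dual estimate
+d = -xc + ax'*y; // scaled descent direction
+alpha = -gam/min(d); // step back from the boundary
+x = x + alpha*(x.*d); // new strictly feasible point
+]]>
+ </programlisting>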
+
+ </refsection>
+
+ <refsection>
+ <title>Example #1</title>
+
+ <para>
+ In the following example, we solve a linear optimization problem with 2 linear
+ equality constraints and 3 unknowns.
+ The linear optimization problem is
+ </para>
+
+ <para>
+ <latex>
+ \begin{eqnarray}
+ \begin{array}{l}
+ \textrm{minimize } -x_1 -x_2\\
+ x_1 - x_2 = 0\\
+ x_1 + x_2 + x_3 = 2\\
+ x \geq 0
+ \end{array}
+ \end{eqnarray}
+ </latex>
+ </para>
+
+ <para>
+ The following script solves the problem.
+ </para>
+
+ <programlisting role="example">
+ <![CDATA[
+ Aeq = [
+ 1 -1 0
+ 1 1 1
+ ];
+ beq = [0;2];
+ c = [-1;-1;0];
+ [xopt,fopt,exitflag,iter,yopt]=karmarkar(Aeq,beq,c)
+ xstar=[1 1 0]'
+]]>
+ </programlisting>
+
+ <para>
+ The previous script produces the following output.
+ </para>
+
+ <programlisting role="example">
+ <![CDATA[
+-->[xopt,fopt,exitflag,iter,yopt]=karmarkar(Aeq,beq,c)
+ yopt =
+ ineqlin: [0x0 constant]
+ eqlin: [2x1 constant]
+ lower: [3x1 constant]
+ upper: [3x1 constant]
+ iter =
+ 68.
+ exitflag =
+ 1.
+ fopt =
+ - 1.9999898
+ xopt =
+ 0.9999949
+ 0.9999949
+ 0.0000102
+]]>
+ </programlisting>
+
+ <para>
+ We can explore the Lagrange multipliers by detailing
+ each field of the yopt structure.
+ </para>
+
+ <programlisting role="example">
+ <![CDATA[
+-->yopt.ineqlin
+ ans =
+ []
+-->yopt.eqlin
+ ans =
+ - 6.483D-17
+ 1.
+-->yopt.lower
+ ans =
+ - 2.070D-10
+ - 2.070D-10
+ 1.
+-->yopt.upper
+ ans =
+ 0.
+ 0.
+ 0.
+]]>
+ </programlisting>
+
+ <para>
+ We can also provide the initial guess x0, as in the following script.
+ </para>
+
+ <programlisting role="example">
+ <![CDATA[
+ Aeq = [
+ 1 -1 0
+ 1 1 1
+ ];
+ beq = [0;2];
+ c = [-1;-1;0];
+ x0 = [0.1;0.1;1.8];
+ [xopt,fopt,exitflag,iter,yopt]=karmarkar(Aeq,beq,c,x0)
+]]>
+ </programlisting>
+
+ <para>
+ In the case where we need more precision, we can reduce the
+ relative tolerance on the function value.
+ In general, reducing the tolerance increases the number of iterations.
+ </para>
+
+ <programlisting role="example">
+ <![CDATA[
+-->[xopt,fopt,exitflag,iter]=karmarkar(Aeq,beq,c,[],1.e-5);
+-->disp([fopt iter])
+ - 1.9999898 68.
+-->[xopt,fopt,exitflag,iter]=karmarkar(Aeq,beq,c,[],1.e-7);
+-->disp([fopt iter])
+ - 1.9999998 74.
+-->[xopt,fopt,exitflag,iter]=karmarkar(Aeq,beq,c,[],1.e-9);
+-->disp([fopt iter])
+ - 2. 78.
+]]>
+ </programlisting>
+
+ </refsection>
+
+ <refsection>
+ <title>Example #2</title>
+
+ <para>
+ In the following example, we solve a linear optimization problem with 10 random linear
+ equality constraints and 20 unknowns.
+ The initial guess is chosen at random in the [0,1]^p range.
+ </para>
+
+ <programlisting role="example">
+ <![CDATA[
+n=10;
+p=20;
+Aeq=rand(n,p);
c=rand(p,1);
-x0=abs(rand(p,1));
-b=a*x0;
-x1=karmarkar(a,b,c,x0);
- ]]></programlisting>
+x0=rand(p,1);
+beq=Aeq*x0;
+xopt=karmarkar(Aeq,beq,c,x0);
+// Check constraints
+norm(Aeq*xopt-beq)
+]]>
+ </programlisting>
+ </refsection>
+
+ <refsection>
+ <title>Inequality constraints</title>
+
+ <para>
+ Consider the following linear program with inequality constraints.
+ </para>
+
+ <para>
+ <latex>
+ \begin{eqnarray}
+ \begin{array}{l}
+ \textrm{minimize } - 20 x_1 - 24 x_2\\
+ 3 x_1 + 6 x_2 \leq 60 \\
+ 4 x_1 + 2 x_2 \leq 32 \\
+ \end{array}
+ \end{eqnarray}
+ </latex>
+ </para>
+
+ <programlisting role="example">
+ <![CDATA[
+c = [-20 -24]';
+A = [
+3 6
+4 2
+];
+b = [60 32]';
+xopt=karmarkar([],[],c,[],[],[],[],[],A,b)
+]]>
+ </programlisting>
+
+ <para>
+ The previous script produces the following output.
+ </para>
+
+ <programlisting role="example">
+ <![CDATA[
+-->xopt=karmarkar([],[],c,[],[],[],[],[],A,b)
+ xopt =
+ 3.9999125
+ 7.9999912
+]]>
+ </programlisting>
+
</refsection>
+
+ <refsection>
+ <title>With bounds</title>
+
+ <para>
+ Consider the following linear optimization problem.
+ </para>
+
+ <para>
+ <latex>
+ \begin{eqnarray}
+ \begin{array}{l}
+ \textrm{minimize } 2 x_1 + 5 x_2 - 2.5 x_3\\
+ x_1 + S4 x_3 \leq 5 \\
+ E2 x_1 - x_2 - x_3 \leq 0 \\
+ -2 \leq x_1 \leq 2 \\
+ 1 \leq x_2 \\
+ 0 \leq x_3 \leq 3 \\
+ \end{array}
+ \end{eqnarray}
+ </latex>
+ </para>
+ <para>
+ where <literal>S4 = sin(pi/4)/4</literal> and <literal>E2 = exp(2)</literal>.
+ </para>
+
+ <programlisting role="example">
+ <![CDATA[
+c = [ 2; 5; -2.5];
+S4 = sin(%pi/4)/4;
+E2 = exp(2);
+A = [
+1 0 S4
+E2 -1 -1
+];
+b = [ 5; 0];
+lb = [ -2; 1 ; 0 ];
+ub = [ 2; %inf; 3 ];
+xstar = [-2;1;3];
+[xopt,fopt,exitflag,iter,yopt]=karmarkar([],[],c,[],[],[],[],[],A,b,lb,ub)
+]]>
+ </programlisting>
+
+ <para>
+ The previous script produces the following output.
+ </para>
+
+ <programlisting role="example">
+ <![CDATA[
+-->[xopt,fopt,exitflag,iter,yopt]=karmarkar([],[],c,[],[],[],[],[],A,b,lb,ub)
+ yopt =
+ ineqlin: [2x1 constant]
+ eqlin: [0x0 constant]
+ lower: [3x1 constant]
+ upper: [3x1 constant]
+ iter =
+ 76.
+ exitflag =
+ 1.
+ fopt =
+ - 6.4999482
+ xopt =
+ - 1.9999914
+ 1.0000035
+ 2.9999931
+]]>
+ </programlisting>
+
+ </refsection>
+
+ <refsection>
+ <title>Configuring an output function</title>
+
+ <para>
+ It may be useful to configure a callback, so that we can customize the
+ printed messages or create a plot.
+ Consider the following linear optimization problem.
+ </para>
+
+ <para>
+ <latex>
+ \begin{eqnarray}
+ \begin{array}{l}
+ \textrm{minimize } - x_1 - x_2\\
+ 2 w x_1 + x_2 \leq 1+w^2, \quad w=0.0, 0.1, 0.2, ..., 1.0\\
+ x_1,x_2 \geq 0
+ \end{array}
+ \end{eqnarray}
+ </latex>
+ </para>
+
+ <para>
+ The following output function plots the current point
+ and prints the iteration number and the value of the objective
+ function.
+ </para>
+
+ <programlisting role="example">
+ <![CDATA[
+function stop = myoutputfunction ( xopt , optimValues , state )
+ localmsg = gettext("Iteration #%3.0f, state=%s, procedure=%s, fopt=%10.3e, x=[%s], dualgap=%10.3e\n")
+ xstr = strcat(msprintf("%10.3e\n",xopt)'," ")
+ mprintf(localmsg,optimValues.iteration,state,optimValues.procedure,optimValues.fval,xstr,optimValues.dualgap)
+ plot(xopt(1),xopt(2),"bo")
+ stop = %f
+endfunction
+]]>
+ </programlisting>
+
+ <para>
+ The following script defines the optimization problem and
+ runs the optimization.
+ </para>
+
+ <programlisting role="example">
+ <![CDATA[
+ n=11;
+ A = [2*linspace(0,1,n)',ones(n,1)];
+ b = 1 + linspace(0,1,n)'.^2;
+ c=[-1;-1];
+ // Plot the constraints
+ scf();
+ for i = 1 : n
+ plot(linspace(0,1,100),b(i)-A(i,1)*linspace(0,1,100),"b-")
+ end
+ // Run the optimization
+ xopt=karmarkar([],[],c,[],[],[],[],myoutputfunction,A,b);
+ // Plot the starting and ending points
+ plot(xopt(1),xopt(2),"k*")
+]]>
+ </programlisting>
+
+ <para>
+ The previous script produces the following output and creates a plot.
+ </para>
+
+ <programlisting role="example">
+ <![CDATA[
+-->xopt=karmarkar([],[],c,[],[],[],[],myoutputfunction,A,b)
+Iteration # 0, state=init, procedure=x0, fopt=1.000e+000, x=[0.000e+000 0.000e+000], dualgap=Inf
+Iteration # 0, state=init, procedure=x0, fopt=1.000e+000, x=[0.000e+000 0.000e+000], dualgap=Inf
+Iteration # 1, state=init, procedure=x0, fopt=5.000e-001, x=[2.201e-001 -4.313e-002], dualgap=3.676e-001
+Iteration # 2, state=init, procedure=x0, fopt=2.500e-001, x=[3.283e-001 -6.512e-002], dualgap=2.140e-001
+Iteration # 3, state=init, procedure=x0, fopt=1.250e-001, x=[3.822e-001 -7.634e-002], dualgap=1.161e-001
+Iteration # 4, state=init, procedure=x0, fopt=6.250e-002, x=[4.090e-001 -8.202e-002], dualgap=6.033e-002
+Iteration # 5, state=init, procedure=x0, fopt=3.125e-002, x=[4.225e-001 -8.488e-002], dualgap=3.072e-002
+Iteration # 6, state=init, procedure=x0, fopt=1.563e-002, x=[4.292e-001 -8.632e-002], dualgap=1.549e-002
+[...]
+Iteration # 50, state=init, procedure=x0, fopt=8.882e-016, x=[4.359e-001 -8.775e-002], dualgap=8.882e-016
+Iteration # 51, state=init, procedure=x0, fopt=4.441e-016, x=[4.359e-001 -8.775e-002], dualgap=4.441e-016
+Iteration # 52, state=init, procedure=x0, fopt=2.220e-016, x=[4.359e-001 -8.775e-002], dualgap=2.220e-016
+Iteration # 52, state=init, procedure=x*, fopt=-3.481e-001, x=[4.359e-001 -8.775e-002], dualgap=Inf
+Iteration # 52, state=iter, procedure=x*, fopt=-3.481e-001, x=[4.359e-001 -8.775e-002], dualgap=Inf
+Iteration # 53, state=iter, procedure=x*, fopt=-7.927e-001, x=[5.249e-001 2.678e-001], dualgap=5.098e-001
+[...]
+Iteration # 65, state=iter, procedure=x*, fopt=-1.250e+000, x=[5.005e-001 7.494e-001], dualgap=1.258e-004
+Iteration # 66, state=iter, procedure=x*, fopt=-1.250e+000, x=[5.005e-001 7.494e-001], dualgap=5.941e-005
+Iteration # 67, state=iter, procedure=x*, fopt=-1.250e+000, x=[5.005e-001 7.495e-001], dualgap=2.882e-005
+Iteration # 68, state=iter, procedure=x*, fopt=-1.250e+000, x=[5.005e-001 7.495e-001], dualgap=1.418e-005
+Iteration # 69, state=done, procedure=x*, fopt=-1.250e+000, x=[5.005e-001 7.495e-001], dualgap=7.035e-006
+ xopt =
+ 0.5005127
+ 0.7494803
+]]>
+ </programlisting>
+
+ </refsection>
+
+
+ <refsection>
+ <title>Infeasible problem</title>
+
+ <para>
+ Consider the following linear optimization problem.
+ It is extracted from "Linear Programming in Matlab"
+ Ferris, Mangasarian, Wright, 2008, Chapter 3, "The Simplex Method", Exercise 3-4-2 1.
+ </para>
+
+ <programlisting role="example">
+ <![CDATA[
+ // An infeasible problem.
+ // Minimize -3 x1 + x2
+ // - x1 - x2 >= -2
+ // 2 x1 + 2 x2 >= 10
+ // x >= 0
+ c = [-3;1];
+ A=[
+ -1 -1
+ 2 2
+ ];
+ A=-A;
+ b=[-2;10];
+ b=-b;
+ lb=[0;0];
+ [xopt,fopt,exitflag,iter,yopt]=karmarkar([],[],c,[],[],[],[],[],A,b,lb)
+]]>
+ </programlisting>
+
+ <para>
+ The previous script produces the following output.
+ </para>
+
+ <programlisting role="example">
+ <![CDATA[
+-->[xopt,fopt,exitflag,iter,yopt]=karmarkar([],[],c,[],[],[],[],[],A,b,lb)
+ yopt =
+
+ ineqlin: [0x0 constant]
+ eqlin: [0x0 constant]
+ lower: [0x0 constant]
+ upper: [0x0 constant]
+ iter =
+ 40.
+ exitflag =
+ - 1.
+ fopt =
+ []
+ xopt =
+ []
+]]>
+ </programlisting>
+
+ </refsection>
+
+
+ <refsection>
+ <title>Unbounded problem</title>
+
+ <para>
+ Consider the following linear optimization problem.
+ It is extracted from "Linear and Nonlinear Optimization"
+ Griva, Nash, Sofer, 2009, Chapter 5, "The Simplex Method", Example 5.3.
+ </para>
+
+ <programlisting role="example">
+ <![CDATA[
+ // An unbounded problem.
+ // Minimize -x1 - 2 x2
+ // -1 x1 + x2 <= 2
+ // -2 x1 + x2 <= 1
+ // x >= 0
+ c = [-1;-2];
+ A=[
+ -1 1
+ -2 1
+ ];
+ b=[2;1];
+ lb=[0;0];
+ [xopt,fopt,exitflag,iter,yopt]=karmarkar([],[],c,[],[],[],[],[],A,b,lb)
+]]>
+ </programlisting>
+
+ <para>
+ The previous script produces the following output.
+ Notice that the algorithm detects that the duality gap has increased much
+ more than expected, indicating a failure of the algorithm to find an
+ optimal point.
+ </para>
+
+ <programlisting role="example">
+ <![CDATA[
+-->[xopt,fopt,exitflag,iter,yopt]=karmarkar([],[],c,[],[],[],[],[],A,b,lb)
+ yopt =
+ ineqlin: [2x1 constant]
+ eqlin: [0x0 constant]
+ lower: [2x1 constant]
+ upper: [2x1 constant]
+ iter =
+ 59.
+ exitflag =
+ - 2.
+ fopt =
+ - 45374652.
+ xopt =
+ 15124883.
+ 15124885.
+]]>
+ </programlisting>
+
+ </refsection>
+
+
+ <refsection>
+ <title>References</title>
+ <para>
+ "Iterative solution of problems of linear and quadratic programming",
+ Dikin,
+ Doklady Akademii Nauk SSSR, Vol. 174, pp. 747-748, 1967
+ </para>
+
+ <para>
+ "A New Polynomial Time Algorithm for Linear Programming",
+ Narendra Karmarkar, Combinatorica, Vol 4, nr. 4, p. 373–395, 1984.
+ </para>
+
+ <para>
+ "A variation on Karmarkar’s algorithm for solving linear programming problems,
+ Earl R. Barnes, Mathematical Programming, Volume 36, Number 2, 174-182, 1986.
+ </para>
+
+ <para>
+ "A modification of karmarkar's linear programming algorithm",
+ Robert J. Vanderbei, Marc S. Meketon and Barry A. Freedman,
+ Algorithmica, Volume 1, Numbers 1-4, 395-407, 1986.
+ </para>
+
+ <para>
+ "Practical Optimization: Algorithms and Engineering Applications",
+ Andreas Antoniou, Wu-Sheng Lu, Springer, 2007,
+ Chapter 12, "Linear Programming Part II: Interior Point Methods".
+ </para>
+
+ <para>
+ "Global Convergence of a Long-Step Affine Scaling Algorithm for Degenerate Linear Programming Problems",
+ Takashi Tsuchiya and Masakazu Muramatsu,
+ SIAM J. Optim. Volume 5, Issue 3, pp. 525-551 (August 1995)
+ </para>
+
+ <para>
+ "The convergence of dual variables",
+ Dikin,
+ Tech. report, Siberian Energy Institute, Russia, 1991
+ </para>
+
+ <para>
+ "The Affine Scaling Algorithm Fails for Stepsize 0.999",
+ Walter F. Mascarenhas, SIAM J. Optim. Volume 7, Issue 1, pp. 34-46 (1997)
+ </para>
+
+ <para>
+ "A Primal-Dual Exterior Point Algorithm For Linear Programming Problems"
+ Nikolaos Samaras, Angelo Sifaleras, Charalampos Triantafyllidis
+ Yugoslav Journal of Operations Research
+ Vol 19 (2009), Number 1, 123-132
+ </para>
+
+ </refsection>
+
</refentry>
+
// Scilab ( http://www.scilab.org/ ) - This file is part of Scilab
// Copyright (C) INRIA
+// Copyright (C) 2010 - DIGITEO - Michael Baudin
//
// This file must be used under the terms of the CeCILL.
// This source file is licensed as described in the file COPYING, which
// are also available at
// http://www.cecill.info/licences/Licence_CeCILL_V2-en.txt
//
-function [x1,crit]=karmarkar(a,b,c,x0,eps,Gamma)
-maxiter=200;
-epsdef=1.d-5
+
+// xopt=karmarkar(Aeq,beq,c)
+// xopt=karmarkar(Aeq,beq,c,x0)
+// xopt=karmarkar(Aeq,beq,c,x0,rtolf)
+// xopt=karmarkar(Aeq,beq,c,x0,rtolf,gam)
+// xopt=karmarkar(Aeq,beq,c,x0,rtolf,gam,maxiter)
+// xopt=karmarkar(Aeq,beq,c,x0,rtolf,gam,maxiter,outfun)
+// xopt=karmarkar(Aeq,beq,c,x0,rtolf,gam,maxiter,outfun,A,b)
+// xopt=karmarkar(Aeq,beq,c,x0,rtolf,gam,maxiter,outfun,A,b,lb)
+// xopt=karmarkar(Aeq,beq,c,x0,rtolf,gam,maxiter,outfun,A,b,lb,ub)
+// [xopt,fopt] = karmarkar(...)
+// [xopt,fopt,exitflag] = karmarkar(...)
+// [xopt,fopt,exitflag,iter] = karmarkar(...)
+// [xopt,fopt,exitflag,iter,yopt] = karmarkar(...)
//
-[lhs,rhs]=argn(0)
-select rhs
-case 4 then
- Gamma=1/4,eps=epsdef
-case 5 then
- Gamma=1/4
-case 6,
-else
- error('[x1,crit]=karmarkar(a,b,c,x0 [,eps [,Gamma]])')
-end
-//verification des donnees
-[n,p]=size(a)
-w=size(b)
-if w(1)<>n then
- error(msprintf(gettext("%s: Wrong size for input argument #%d."),'karmarkar',2));
-end
-w=size(c)
-if w(1)<>p then
- error(msprintf(gettext("%s: Wrong size for input argument #%d."),'karmarkar',1));
-end
-w=size(x0)
-if w(1)<>p then
- error(msprintf(gettext("%s: Wrong size for input argument #%d."),'karmarkar',4));
-end
-if min(x0)<0|norm(a*x0-b)>eps then
- error(msprintf(gettext("%s: x0 is not feasible."),'karmarkar'));
-end
-//
-x1=x0;tc=c';
-crit=tc*x1;
-test=eps+1
-count=0
-while test>eps&count<=maxiter
- count=count+1;
-// ax=a*diag(x1);
- ax=a.*(ones(size(a,1),1)*x1');
- xc=x1.*c;
- y=ax'\xc;
-// y=(ax*ax')\(ax*xc)
- d=-xc+ax'*y;
- dk=x1.*d;
- if min(dk)>0 then
- error(msprintf(gettext("%s: Unbounded problem!"),'karmarkar'));
- end
- alpha=-Gamma/min(d);
- test=alpha*(norm(d)**2)/max(1,abs(crit));
- x1=x1+alpha*dk;
- crit=tc*x1;
- write(%io(2),[count,crit,test],'(f3.0,3x,e10.3,3x,e10.3)')
-end
-
+// exitflag = 1 if algorithm converged.
+// exitflag = 0 if maximum number of iterations was reached.
+// exitflag = -1 if no feasible point was found
+// exitflag = -2 if problem is unbounded.
+// exitflag = -3 if search direction became zero.
+// exitflag = -4 if algorithm stopped on user's request.
+// exitflag = -5 if duality gap became too large
+// exitflag = -%inf on internal error.
+
+function [xopt,fopt,exitflag,iter,yopt]=karmarkar(varargin)
+ function argin = argindefault ( rhs , vararglist , ivar , default )
+ // Returns the value of the input argument #ivar.
+ // If this argument was not provided, or was equal to the
+ // empty matrix, returns the default value.
+ if ( rhs < ivar ) then
+ argin = default
+ else
+ if ( vararglist(ivar) <> [] ) then
+ argin = vararglist(ivar)
+ else
+ argin = default
+ end
+ end
+ endfunction
+
+ //
+ [lhs,rhs]=argn(0)
+ if ( rhs<3 | rhs>12 | rhs==9 ) then
+ error(msprintf(gettext("%s: Wrong number of input arguments: %d to %d expected.\n"),"karmarkar",3,12));
+ end
+ //
+ Aeq = varargin(1)
+ beq = varargin(2)
+ c = varargin(3)
+ x0 = argindefault ( rhs , varargin , 4 , [] )
+ rtolf = argindefault ( rhs , varargin , 5 , 1.d-5 )
+ gam = argindefault ( rhs , varargin , 6 , 1/2 )
+ maxiter = argindefault ( rhs , varargin , 7 , 200 )
+ __karmarkar_outfun__ = argindefault ( rhs , varargin , 8 , [] )
+ A = argindefault ( rhs , varargin , 9 , [] )
+ b = argindefault ( rhs , varargin , 10 , [] )
+ lb = argindefault ( rhs , varargin , 11 , [] )
+ ub = argindefault ( rhs , varargin , 12 , [] )
+ //
+ // Check input arguments
+ //
+ //
+ // Check type
+ //
+ if ( typeof(Aeq) <> "constant" ) then
+ error(msprintf(gettext("%s: Wrong type for input argument #%d: Real matrix expected.\n"),"karmarkar",1));
+ end
+ if ( typeof(beq) <> "constant" ) then
+ error(msprintf(gettext("%s: Wrong type for input argument #%d: Real matrix expected.\n"),"karmarkar",2));
+ end
+ if ( typeof(c) <> "constant" ) then
+ error(msprintf(gettext("%s: Wrong type for input argument #%d: Real matrix expected.\n"),"karmarkar",3));
+ end
+ if ( typeof(x0) <> "constant" ) then
+ error(msprintf(gettext("%s: Wrong type for input argument #%d: Real matrix expected.\n"),"karmarkar",4));
+ end
+ if ( typeof(rtolf) <> "constant" ) then
+ error(msprintf(gettext("%s: Wrong type for input argument #%d: Real matrix expected.\n"),"karmarkar",5));
+ end
+ if ( typeof(gam) <> "constant" ) then
+ error(msprintf(gettext("%s: Wrong type for input argument #%d: Real matrix expected.\n"),"karmarkar",6));
+ end
+ if ( typeof(maxiter) <> "constant" ) then
+ error(msprintf(gettext("%s: Wrong type for input argument #%d: Real matrix expected.\n"),"karmarkar",7));
+ end
+ if ( __karmarkar_outfun__ <> [] ) then
+ if ( and(typeof(__karmarkar_outfun__) <> ["function" "list" "fptr"] ) ) then
+ error(msprintf(gettext("%s: Wrong type for input argument #%d: %s expected.\n"),"karmarkar",8,"function or list"));
+ end
+ if ( typeof(__karmarkar_outfun__)=="list" ) then
+ if ( and(typeof(__karmarkar_outfun__(1))<>["function" "fptr"] ) ) then
+ error(msprintf(gettext("%s: Wrong type for input argument #%d: %s expected.\n"),"karmarkar",8,"function"));
+ end
+ end
+ end
+ if ( typeof(A) <> "constant" ) then
+ error(msprintf(gettext("%s: Wrong type for input argument #%d: Real matrix expected.\n"),"karmarkar",9));
+ end
+ if ( typeof(b) <> "constant" ) then
+ error(msprintf(gettext("%s: Wrong type for input argument #%d: Real matrix expected.\n"),"karmarkar",10));
+ end
+ if ( typeof(lb) <> "constant" ) then
+ error(msprintf(gettext("%s: Wrong type for input argument #%d: Real matrix expected.\n"),"karmarkar",11));
+ end
+ if ( typeof(ub) <> "constant" ) then
+ error(msprintf(gettext("%s: Wrong type for input argument #%d: Real matrix expected.\n"),"karmarkar",12));
+ end
+ //
+ // Check size
+ //
+ [ne,pe]=size(Aeq)
+ [ni,pi]=size(A)
+ plb=size(lb,"*")
+ pub=size(ub,"*")
+ if ( Aeq <> [] ) then
+ p = pe
+ elseif ( A <> [] ) then
+ p = pi
+ elseif ( lb <> [] ) then
+ p = plb
+ elseif ( ub <> [] ) then
+ p = pub
+ else
+ error(msprintf(gettext("%s: At least one of Aeq, A, lb or ub must be non-empty."),"karmarkar"));
+ end
+ // The case where Aeq==[] and A==[] is treated below.
+ if ( ( Aeq <> [] & beq == [] ) | ( Aeq == [] & beq <> [] ) ) then
+ error(msprintf(gettext("%s: Wrong size for input argument #%d."),"karmarkar",2));
+ end
+ if ( Aeq <> [] ) then
+ if ( or ( size(beq) <> [ne 1] ) ) then
+ error(msprintf(gettext("%s: Wrong size for input argument #%d."),"karmarkar",2));
+ end
+ end
+ if ( or ( size(c) <> [p 1] ) ) then
+ error(msprintf(gettext("%s: Wrong size for input argument #%d."),"karmarkar",3));
+ end
+ if ( x0 <> [] ) then
+ if ( or ( size(x0) <> [p 1] ) ) then
+ error(msprintf(gettext("%s: Wrong size for input argument #%d."),"karmarkar",4));
+ end
+ end
+ if ( or ( size(rtolf) <> [1 1] ) ) then
+ error(msprintf(gettext("%s: Wrong size for input argument #%d."),"karmarkar",5));
+ end
+ if ( or ( size(gam) <> [1 1] ) ) then
+ error(msprintf(gettext("%s: Wrong size for input argument #%d."),"karmarkar",6));
+ end
+ if ( or ( size(maxiter) <> [1 1] ) ) then
+ error(msprintf(gettext("%s: Wrong size for input argument #%d."),"karmarkar",7));
+ end
+ if ( A <> [] ) then
+ if ( Aeq <> [] ) then
+ if ( pi <> pe ) then
+ error(msprintf(gettext("%s: Wrong size for input argument #%d."),"karmarkar",9));
+ end
+ end
+ end
+ if ( ( A <> [] & b == [] ) | ( A == [] & b <> [] ) ) then
+ error(msprintf(gettext("%s: Wrong size for input argument #%d."),"karmarkar",10));
+ end
+ if ( b <> [] ) then
+ if ( size(b) <> [ni 1] ) then
+ error(msprintf(gettext("%s: Wrong size for input argument #%d."),"karmarkar",10));
+ end
+ end
+ if ( lb <> [] ) then
+ if ( or ( size(lb) <> [p 1] ) ) then
+ error(msprintf(gettext("%s: Wrong size for input argument #%d."),"karmarkar",11));
+ end
+ end
+ if ( ub <> [] ) then
+ if ( or ( size(ub) <> [p 1] ) ) then
+ error(msprintf(gettext("%s: Wrong size for input argument #%d."),"karmarkar",12));
+ end
+ end
+ //
+ // Check content
+ //
+ if ( rtolf < 0 | rtolf > 1 ) then
+ error(msprintf(gettext("%s: Wrong value for input argument #%d. rtolf must be in [0,1]."),"karmarkar",5));
+ end
+ if ( gam < 0 | gam > 1 ) then
+ error(msprintf(gettext("%s: Wrong value for input argument #%d. gam must be in [0,1]."),"karmarkar",6));
+ end
+ if ( maxiter < 1 ) then
+ error(msprintf(gettext("%s: Wrong value for input argument #%d. maxiter must be greater than or equal to 1."),"karmarkar",7));
+ end
+ if ( floor(maxiter) <> maxiter ) then
+ error(msprintf(gettext("%s: Wrong value for input argument #%d. maxiter must be a floating point integer."),"karmarkar",7));
+ end
+ if ( lb == [] & ub == [] ) then
+ if ( x0 <> [] ) then
+ if ( min(x0)<0 ) then
+ error(msprintf(gettext("%s: Wrong value for input argument #%d. x0 has negative entries."),"karmarkar",4));
+ end
+ end
+ end
+ if ( lb <> [] & ub <> [] ) then
+ if ( or ( ub < lb ) ) then
+ error(msprintf(gettext("%s: Wrong value for input argument #%d. One entry of the upper bound ub is lower than the corresponding lower bound lb."),"karmarkar",12));
+ end
+ end
+ if ( lb <> [] & x0 <> [] ) then
+ if ( or ( lb > x0 ) ) then
+ error(msprintf(gettext("%s: Wrong value for input argument #%d. x0 is lower than the lower bound lb."),"karmarkar",4));
+ end
+ end
+ if ( ub <> [] & x0 <> [] ) then
+ if ( or ( ub < x0 ) ) then
+ error(msprintf(gettext("%s: Wrong value for input argument #%d. x0 is greater than the upper bound ub."),"karmarkar",4));
+ end
+ end
+ //
+ // Proceed
+ //
+ // Transform the general LP into standard form
+ [AAeq,bbeq,cc,xx0,pinit,newposvars] = karmarkar_preprocess ( Aeq , beq , c , A , b , lb , ub , x0 )
+ //
+ // Solve the standard form LP
+ outfun = list(__karmarkar_outfun__ , pinit , newposvars )
+ [xxopt,fopt,exitflag,iter,yyopt] = karmarkar_findStandardLP ( AAeq , bbeq , cc , xx0 , rtolf , gam , maxiter , outfun )
+ //
+ // Extract the solution from the initial problem
+ [xopt,fopt,yopt] = karmarkar_postprocess ( Aeq , beq , c , A , b , lb , ub , pinit , xxopt , yyopt , exitflag )
+endfunction
+
+function [AAeq,bbeq,cc,xx0,pinit,newposvars] = karmarkar_preprocess ( Aeq , beq , c , A , b , lb , ub , x0 )
+ // Transform the general LP into a standard LP.
+ //
+ // Parameters
+ // Aeq : a ne-by-p matrix of doubles, the matrix of linear equality constraints
+ // beq : a ne-by-1 matrix of doubles, the right hand side of linear equality constraints
+ // A : a ni-by-p matrix of doubles, the matrix of linear inequality constraints
+ // b : a ni-by-1 matrix of doubles, the right hand side of linear inequality constraints
+ // c : a p-by-1 matrix of doubles, the linear objective
+ // x0 : a p-by-1 matrix of doubles, the initial guess. If x0=[], this means that no initial guess is provided.
+ // AAeq : a m-by-q matrix of doubles, the matrix of linear equality constraints. We have m = ne + ni and q = 2*p + ni.
+ // bbeq : a m-by-1 matrix of doubles, the right hand side of linear equality constraints
+ // cc : a q-by-1 matrix of doubles, the linear objective
+ // xx0 : a q-by-1 matrix of doubles, the initial guess. If xx0=[], this means that no initial guess is provided.
+ // pinit : the initial value of p. The solution of the original general LP is xs(1:pinit), where xs is the solution of the modified standard LP.
+ // newposvars : a 1-by-1 matrix of booleans, %f if no positive variables have been introduced, %t if positive variables have been introduced.
+ //
+ // Description
+ // Transform the general linear program into a standard linear program.
+ //
+ // More precisely, if A <> [] transform the general linear program (L.P.):
+ //
+ // min c'*x
+ // A * x <= b
+ // Aeq * x = beq
+ //
+ // into the standard LP:
+ //
+ // min cc'*z
+ // AAeq * z = bbeq
+ // z >= 0
+ //
+ // where z is the new unknown, positive, which may contain slack variables.
+ //
+ // Each unrestricted variable x is transformed into x = xp - xn, where xp,xn >= 0.
+ // This turns the inequality constraint Ax <= b into [A -A][xp;xn] <= b.
+ // The inequality constraint is turned into an equality constraint by introducing
+ // slack variables si.
+ // This transforms the inequality into: [A -A][xp;xn] + s = b which can be written :
+ //
+ // [A -A I][xp;xn;s] = b
+ //
+ // Therefore, the number of variables increases from pinit to 2*pinit + ni.
+ // The original variable is recovered as z(1:pinit)-z(pinit+1:2*pinit).
+ // The same happens for the equality constraints, which are turned from
+ // Aeq*x=beq into [Aeq -Aeq 0][xp;xn;s]=beq.
+ // The cost function is turned from c'*x into [c;-c;0]'*[xp;xn;s].
+ //
+ newposvars = %f
+ //
+ [ne,pe]=size(Aeq)
+ [ni,pi]=size(A)
+ plb=size(lb,"*")
+ pub=size(ub,"*")
+ if ( Aeq <> [] ) then
+ pinit = pe
+ elseif ( A <> [] ) then
+ pinit = pi
+ elseif ( lb <> [] ) then
+ pinit = plb
+ elseif ( ub <> [] ) then
+ pinit = pub
+ end
+ //
+ if ( A == [] & lb == [] & ub == [] ) then
+ AAeq = Aeq
+ bbeq = beq
+ cc = c
+ xx0 = x0
+ return
+ end
+ //
+ // Process the lower bound : x >= lb.
+ // Move it as an inequality constraint: -I * x <= -lb.
+ // TODO : process the case lb <> [] by shifting x (instead of introducing positive variables).
+ if ( lb <> [] ) then
+ if ( A == [] ) then
+ A = -eye(plb,plb)
+ b = -lb
+ else
+ A = [A;-eye(plb,plb)]
+ b = [b;-lb]
+ end
+ end
+ //
+ // Process the upper bound : x <= ub.
+ // Move it as an inequality constraint : I * x <= ub.
+ if ( ub <> [] ) then
+ if ( A == [] ) then
+ A = eye(pub,pub)
+ b = ub
+ else
+ A = [A;eye(pub,pub)]
+ b = [b;ub]
+ end
+ end
+ //
+ // Remove constraints where b(i) = %inf.
+ // Such a A(i,:)*x <= %inf = b(i) will be satisfied anyway, but
+ // may cause failures in the algorithm.
+ iinf = find(b == %inf)
+ b(iinf) = []
+ A(iinf,:) = []
+ //
+ // Create the map from the initial constraints to the final constraints.
+ //
+ // Update the number of inequalities,
+ // given that the bounds have been updated.
+ [ni,pi]=size(A)
+ //
+ // Initialize AAeq, bbeq, cc and xx0.
+ //
+ // If ni inequality constraints are given, transform the problem by
+ // adding pinit positive variables and ni slack variables.
+ // The inequality is Ax <= b.
+ if ( A <> [] ) then
+ //
+ // Create the matrix
+ // AAeq = [
+ // Aeq -Aeq 0
+ // A -A I
+ // ]
+ AAeq (1:ne+ni,1:2*pinit+ni) = zeros(ne+ni,2*pinit+ni)
+ if ( Aeq <> [] ) then
+ AAeq (1:ne,1:pinit) = Aeq
+ AAeq (1:ne,pinit+1:2*pinit) = -Aeq
+ end
+ AAeq (ne+1:ne+ni,1:pinit) = A
+ AAeq (ne+1:ne+ni,pinit+1:2*pinit) = -A
+ AAeq (ne+1:ne+ni,2*pinit+1:2*pinit+ni) = eye(ni,ni)
+ bbeq = [beq;b]
+ cc = [c;-c;zeros(ni,1)]
+ newposvars = %t
+ if ( x0 == [] ) then
+ xx0 = []
+ else
+ xx0 = zeros(2*pinit+ni,1)
+ xx0(1:pinit) = max(x0,0)
+ xx0(pinit+1:2*pinit) = -min(x0,0)
+ s = b - A*x0
+ if ( min(s)<0 ) then
+ error(msprintf(gettext("%s: Wrong value for input argument #%d. x0 does not satisfy the inequality constraints."),"karmarkar",4));
+ end
+ xx0(2*pinit+1:2*pinit+ni) = s
+ end
+ end
+endfunction
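The reduction performed by the preprocessing step can be cross-checked outside Scilab. The following NumPy sketch (a hypothetical helper, not part of the Scilab sources) builds the block matrix [Aeq -Aeq 0; A -A I] for the case where both equality and inequality constraints are present and no right-hand side entry is infinite:

```python
import numpy as np

def to_standard_form(Aeq, beq, c, A, b):
    # Split the free variable x into x = xp - xn with xp, xn >= 0,
    # and add one slack variable per inequality row, so that
    #   [Aeq -Aeq 0 ; A -A I] @ [xp; xn; s] = [beq; b]
    # with [xp; xn; s] >= 0, matching the construction above.
    ne, p = Aeq.shape
    ni = A.shape[0]
    AAeq = np.block([
        [Aeq, -Aeq, np.zeros((ne, ni))],
        [A,   -A,   np.eye(ni)],
    ])
    bbeq = np.concatenate([beq, b])
    cc = np.concatenate([c, -c, np.zeros(ni)])
    return AAeq, bbeq, cc
```

For any x feasible for the original problem, the lifted point z = [max(x,0); -min(x,0); b - A x] satisfies the standard-form equalities and has the same objective value.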
+
+function [xopt,fopt,yopt] = karmarkar_postprocess ( Aeq , beq , c , A , b , lb , ub , pinit , xxopt , yyopt , exitflag )
+ // Transform the solution of the standard LP into the solution of the original general LP.
+ //
+ // Extract the solution from the initial problem
+ if ( pinit < size(xxopt,"*") ) then
+ xopt = xxopt(1:pinit) - xxopt(pinit+1:2*pinit)
+ else
+ xopt = xxopt
+ end
+ //
+ // Extract the dual solution.
+ [ne,pe]=size(Aeq)
+ [ni,pi]=size(A)
+ plb=size(lb,"*")
+ pub=size(ub,"*")
+ //
+ // Initialize
+ if ( yyopt == [] ) then
+ yopt.ineqlin = []
+ yopt.eqlin = []
+ yopt.lower = []
+ yopt.upper = []
+ else
+ yopt.ineqlin = []
+ yopt.eqlin = []
+ yopt.lower = zeros(pinit,1)
+ yopt.upper = zeros(pinit,1)
+ end
+ //
+ // Update depending on the presence of the options.
+ kstart = 1
+ if ( ne > 0 ) then
+ kstop = kstart + ne - 1
+ yopt.eqlin = -yyopt(kstart:kstop)
+ kstart = kstop + 1
+ end
+ if ( ni > 0 ) then
+ noninf = find(b<>%inf)
+ kinf = find(b==%inf)
+ kstop = kstart + ni - 1 - size(kinf,"*")
+ yopt.ineqlin(noninf) = -yyopt(kstart:kstop)
+ kstart = kstop + 1
+ end
+ if ( ni == 0 & plb == 0 & pub == 0 ) then
+ yopt.lower = c - Aeq'*yyopt
+ elseif ( plb > 0 ) then
+ noninf = find(lb<>-%inf)
+ kinf = find(lb==-%inf)
+ kstop = kstart + plb - 1 - size(kinf,"*")
+ yopt.lower(noninf) = -yyopt(kstart:kstop)
+ kstart = kstop + 1
+ end
+ if ( pub > 0 ) then
+ noninf = find(ub<>%inf)
+ kinf = find(ub==%inf)
+ kstop = kstart + pub - 1 - size(kinf,"*")
+ yopt.upper(noninf) = -yyopt(kstart:kstop)
+ kstart = kstop + 1
+ end
+endfunction
+
+function [xopt,fopt,exitflag,iter,yopt] = karmarkar_findStandardLP ( Aeq , beq , c , x0 , rtolf , gam , maxiter , outfun )
+ // Solves a linear problem in standard form (searches for x0 if necessary).
+ //
+ // Parameters
+ // x0 : a p-by-1 matrix of doubles, the initial guess, or the empty matrix if no initial guess is available.
+ //
+ // Given x0, uses a primal affine scaling (P.A.S.) algorithm to iteratively
+ // find the solution of a linear program in standard form:
+ //
+ // min c'*x
+ // Aeq*x = beq
+ // x >= 0
+ //
+ // If x0 is the empty matrix, compute a strictly feasible point.
+ //
+ [ne,p]=size(Aeq)
+ if ( x0 == [] ) then
+ //
+ // Compute a strictly feasible point.
+ //
+ // Add an extra variable x(p+1) and solve :
+ //
+ // min z(p+1)
+ // AAeq*z = beq
+ // z >= 0
+ //
+ // where z=[x(1) x(2) ... x(p) x(p+1)].
+ // AAeq has the same number of rows, but one column more than Aeq.
+ // The initial guess for the modified problem is z0=[1 1 ... 1]=e(p+1).
+ // The last column of AAeq is beq-Aeq*e(p) so that the initial guess z0 satisfies AAeq*z0=beq.
+ //
+ // References
+ // "A Primal-Dual Exterior Point Algorithm For Linear Programming Problems"
+ // Nikolaos Samaras, Angelo Sifaleras, Charalampos Triantafyllidis
+ // Yugoslav Journal of Operations Research
+ // Vol 19 (2009), Number 1, 123-132
+ //
+ // "A modification of karmarkar's linear programming algorithm",
+ // Robert J. Vanderbei, Marc S. Meketon and Barry A. Freedman,
+ // Algorithmica, Volume 1, Numbers 1-4, 395-407, 1986.
+ //
+ s = beq - Aeq*ones(p,1)
+ AAeq = [Aeq,s]
+ cc = [zeros(p,1);1]
+ xx0 = ones(p+1,1)
+ step = 1
+ xfeasmax = %eps
+ iterstart = 0
+ [xopt1,fopt1,exitflag1,iter1,yopt1] = karmarkar_findxopt ( AAeq , beq , cc , xx0 , rtolf , gam , maxiter , outfun , step , xfeasmax , iterstart )
+ if ( exitflag1 <> 1 ) then
+ // The algorithm did not converge.
+ xopt = []
+ fopt = []
+ exitflag = -1
+ iter = iter1
+ yopt = []
+ return
+ end
+ x0 = xopt1(1:p)
+ else
+ iter1 = 0
+ end
+ //
+ // Check that the initial guess is feasible.
+ if ( x0 <> [] ) then
+ if ( norm(Aeq*x0-beq)>sqrt(%eps)*norm(beq) ) then
+ error(msprintf(gettext("%s: Wrong value for input argument #%d. x0 does not satisfy the equality constraints."),"karmarkar",4));
+ end
+ end
+ //
+ // Find the optimum x
+ step = 2
+ xfeasmax = %nan
+ iterstart = iter1
+ [xopt,fopt,exitflag,iter,yopt] = karmarkar_findxopt ( Aeq , beq , c , x0 , rtolf , gam , maxiter , outfun , step , xfeasmax , iterstart )
+endfunction
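The phase-1 construction used above (append the column beq - Aeq*e so that the all-ones vector is feasible by construction, then minimize the artificial variable) can be verified with a small NumPy sketch (hypothetical helper, not part of the Scilab sources):

```python
import numpy as np

def phase1_problem(Aeq, beq):
    # Build the phase-1 LP used to find a strictly feasible point:
    # append one artificial column s = beq - Aeq @ e so that the
    # all-ones vector z0 satisfies AAeq @ z0 = beq, then minimize
    # the artificial variable z[p]; a solution with z[p] ~ 0
    # yields a feasible x0 = z[:p] for the original problem.
    ne, p = Aeq.shape
    s = beq - Aeq @ np.ones(p)
    AAeq = np.column_stack([Aeq, s])
    cc = np.concatenate([np.zeros(p), [1.0]])
    z0 = np.ones(p + 1)
    return AAeq, cc, z0
```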
+
+function [xopt,fopt,exitflag,iter,yopt] = karmarkar_findxopt ( Aeq , beq , c , x0 , rtolf , gam , maxiter , outfun , step , xfeasmax , iterstart )
+ // Solves a linear problem in standard form, given x0.
+ //
+ // Parameters
+ // step : a 1-by-1 matrix of floating point integers, the kind of algorithm performed. We have step=1 during the search for a feasible point x0, and step=2 during the search for x*.
+ // xfeasmax : a 1-by-1 matrix of double, the maximum value of the feasibility variable, during the search for a feasible point x0.
+ // iterstart : the initial number of iterations, including the iterations of the previous steps
+ //
+ // Description
+ // Given x0, uses a primal affine scaling (P.A.S.) algorithm to iteratively
+ // find the solution of a linear program in standard form:
+ //
+ // min c'*x
+ // Aeq*x = beq
+ // x >= 0
+ //
+ // We assume that x0 is strictly feasible, i.e. || Aeq*x0-beq || is small and x0 > 0.
+ //
+ // If step = 1, we stop when xopt($) is below the feasibility threshold xfeasmax.
+ // If step = 2, we stop when the objective function does not vary anymore, when the number
+ // of iterations exceeds the maximum, or when the user asks to stop.
+ //
+ // exitflag = 1 if algorithm converged.
+ // exitflag = 0 if maximum number of iterations was reached.
+ // exitflag = -2 if problem is unbounded.
+ // exitflag = -3 if search direction became zero.
+ // exitflag = -4 if step=2 and algorithm stopped on user's request.
+ // exitflag = -%inf on internal error.
+ //
+ // References
+ // "A variation on Karmarkar's algorithm for solving linear programming problems",
+ // Earl R. Barnes, Mathematical Programming, Volume 36, Number 2, 174-182, 1986.
+ //
+ // "A modification of karmarkar's linear programming algorithm",
+ // Robert J. Vanderbei, Marc S. Meketon and Barry A. Freedman,
+ // Algorithmica, Volume 1, Numbers 1-4, 395-407, 1986.
+ //
+ // "Practical Optimization: Algorithms and Engineering Applications",
+ // Andreas Antoniou, Wu-Sheng Lu, Springer, 2007,
+ // Chapter 12, "Linear Programming Part II: Interior Point Methods".
+ //
+ [ne,p]=size(Aeq)
+ xopt=x0
+ yopt = []
+ tc=c'
+ fopt=tc*xopt
+ dualgapmin = %inf
+ dualgap = %inf
+ funccount = 1
+ fprev = fopt+1
+ iter=iterstart
+ s = zeros(p,1)
+ stop = %f
+ exitflag = -%inf
+ firstloop = %t
+ //
+ state = "init"
+ if ( step == 1 ) then
+ procedure = "x0"
+ else
+ procedure = "x*"
+ end
+ optimValues = struct(...
+ "funccount" , funccount , ...
+ "fval" , fopt , ...
+ "iteration" , iter , ...
+ "procedure" , procedure, ...
+ "dualgap" , dualgap ...
+ );
+ stop = karmarkar_outfunDriver ( xopt , optimValues , state , outfun )
+ //
+ while ( %t )
+ if ( iter >= maxiter ) then
+ exitflag = 0
+ break
+ end
+ if ( step == 1 ) then
+ if ( xopt($) <= xfeasmax ) then
+ exitflag = 1
+ break
+ end
+ end
+ if ( step == 2 ) then
+ if ( abs(fprev-fopt)<=rtolf*abs(fprev) ) then
+ exitflag = 1
+ break
+ end
+ end
+ //
+ // Compute the duality gap
+ if ( dualgap > 1.e5 * dualgapmin ) then
+ // Unbounded problem.
+ exitflag = -2
+ break
+ end
+ //
+ // Calls back the output function
+ if ( step == 1 ) then
+ state = "init"
+ else
+ state = "iter"
+ end
+ if ( step == 1 ) then
+ procedure = "x0"
+ else
+ procedure = "x*"
+ end
+ optimValues = struct(...
+ "funccount" , funccount , ...
+ "fval" , fopt , ...
+ "iteration" , iter , ...
+ "procedure" , procedure, ...
+ "dualgap" , dualgap ...
+ );
+ stop = karmarkar_outfunDriver ( xopt , optimValues , state , outfun )
+ if ( stop ) then
+ exitflag = -4
+ break
+ end
+ iter=iter+1
+ // Compute B as B = Aeq*X where X = diag(xopt).
+ // The following method is equivalent, but faster.
+ xt = xopt'
+ B = Aeq.*xt(ones(ne,1),:)
+ v = xopt.*c
+ // y = inv(B*B') * (B*v), i.e. y is the solution of the normal equations (B*B') y = B*v.
+ // Equivalently, y is the least squares solution of B'*y = v, i.e.
+ yopt = B'\v
+ p = -v+B'*yopt
+ if ( min(p)==0 ) then
+ exitflag = -3
+ break
+ end
+ d = xopt.*p
+ if ( min(d)>0 ) then
+ // Unbounded problem.
+ exitflag = -2
+ break
+ end
+ alpha = -gam / min(p)
+ s = alpha*d
+ xopt=xopt+s
+ fprev = fopt
+ fopt=tc*xopt
+ funccount = funccount + 1
+ //
+ // Compute the duality gap
+ dualgap = abs(yopt'*beq - fopt)
+ if ( firstloop ) then
+ dualgapmin = dualgap
+ else
+ if ( dualgapmin > dualgap ) then
+ dualgapmin = dualgap
+ end
+ end
+ firstloop = %f
+ end
+ //
+ if ( step == 1 ) then
+ state = "init"
+ else
+ state = "done"
+ end
+ if ( step == 1 ) then
+ procedure = "x0"
+ else
+ procedure = "x*"
+ end
+ optimValues = struct(...
+ "funccount" , funccount , ...
+ "fval" , fopt , ...
+ "iteration" , iter , ...
+ "procedure" , procedure, ...
+ "dualgap" , dualgap ...
+ );
+ stop = karmarkar_outfunDriver ( xopt , optimValues , state , outfun )
+endfunction
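The update in the loop above (B = Aeq*diag(x), y = B'\v, a step along x.*p damped by gam) is the primal affine scaling iteration of Vanderbei, Meketon and Freedman. A compact NumPy sketch, assuming a strictly feasible x0 and using `np.linalg.lstsq` where the Scilab code uses backslash (the function name is hypothetical; the duality-gap and output-function machinery is omitted):

```python
import numpy as np

def affine_scaling(Aeq, c, x0, gam=0.5, rtolf=1e-5, maxiter=200):
    # Primal affine scaling for  min c'x  s.t.  Aeq x = beq, x >= 0,
    # starting from a strictly feasible x0: rescale by diag(x),
    # estimate the dual variables by least squares, and take a
    # damped step (gam in (0,1)) toward the positive-orthant boundary.
    x = x0.astype(float).copy()
    f = c @ x
    for _ in range(maxiter):
        B = Aeq * x                                  # B = Aeq @ diag(x)
        v = x * c
        y, *_ = np.linalg.lstsq(B.T, v, rcond=None)  # dual estimate
        r = -v + B.T @ y                             # projected cost
        if r.min() >= 0:
            break                                    # zero direction or unbounded
        x = x + (-gam / r.min()) * (x * r)           # feasibility preserved: B @ r = 0
        fprev, f = f, c @ x
        if abs(fprev - f) <= rtolf * abs(fprev):
            break                                    # objective has stalled
    return x, f
```

On the standard-form problem used in the non-regression tests below, the sketch converges to x near (4, 8, 0, 0) with objective near -272.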
+
+function stop = karmarkar_outfunDriver ( xopt , optimValues , state , outfun )
+ //
+ // The driver for the output function.
+ // outfun : a list where the first item is the output function.
+ __karmarkar_outfun__ = outfun (1)
+ pinit = outfun (2)
+ newposvars = outfun (3)
+ //
+ // Calls back the user's output function, if required.
+ // Reduce the size of the vectors, to account for potential slack variables.
+ if ( __karmarkar_outfun__ <> [] ) then
+ if ( newposvars ) then
+ xopt = xopt(1:pinit) - xopt(pinit+1:2*pinit)
+ else
+ xopt = xopt(1:pinit)
+ end
+ cbktype = typeof( __karmarkar_outfun__ )
+ if ( cbktype == "list" ) then
+ __karmarkar_outfun__f_ = __karmarkar_outfun__ (1)
+ stop = __karmarkar_outfun__f_ ( xopt , optimValues , state , __karmarkar_outfun__ (2:$))
+ elseif ( or(cbktype == ["function" "fptr"] ) ) then
+ stop = __karmarkar_outfun__ ( xopt , optimValues , state )
+ end
+ else
+ stop = %f
+ end
endfunction
+
+
--- /dev/null
+// =============================================================================
+// Scilab ( http://www.scilab.org/ ) - This file is part of Scilab
+// Copyright (C) 2010 - DIGITEO - Michael Baudin
+//
+// This file is distributed under the same license as the Scilab package.
+// =============================================================================
+// <-- JVM NOT MANDATORY -->
+//
+// <-- Non-regression test for bug 8719 -->
+//
+// <-- Bugzilla URL -->
+// http://bugzilla.scilab.org/show_bug.cgi?id=8719
+//
+// <-- Short Description -->
+// The karmarkar function prints unwanted messages.
+//
+function flag = assert_close ( computed, expected, epsilon )
+ if expected==0.0 then
+ shift = norm(computed-expected);
+ else
+ shift = norm(computed-expected)/norm(expected);
+ end
+ if shift < epsilon then
+ flag = 1;
+ else
+ flag = 0;
+ end
+ if flag <> 1 then bugmes();quit;end
+endfunction
+c = [-20 -24 0 0]';
+a = [
+3 6 1 0
+4 2 0 1
+];
+b = [60 32]';
+expected = [4 8 0 0]';
+x0 = [
+4.1128205
+7.7333333
+1.2615385
+0.0820513
+];
+xopt=karmarkar(a,b,c,x0);
+assert_close ( xopt , expected , 1.e-3 );
--- /dev/null
+// =============================================================================
+// Scilab ( http://www.scilab.org/ ) - This file is part of Scilab
+// Copyright (C) 2010 - DIGITEO - Michael Baudin
+//
+// This file is distributed under the same license as the Scilab package.
+// =============================================================================
+// <-- JVM NOT MANDATORY -->
+//
+// <-- Non-regression test for bug 8719 -->
+//
+// <-- Bugzilla URL -->
+// http://bugzilla.scilab.org/show_bug.cgi?id=8719
+//
+// <-- Short Description -->
+// The karmarkar function prints unwanted messages.
+//
+
+function flag = assert_close ( computed, expected, epsilon )
+ if expected==0.0 then
+ shift = norm(computed-expected);
+ else
+ shift = norm(computed-expected)/norm(expected);
+ end
+ if shift < epsilon then
+ flag = 1;
+ else
+ flag = 0;
+ end
+ if flag <> 1 then pause,end
+endfunction
+
+c = [-20 -24 0 0]';
+a = [
+3 6 1 0
+4 2 0 1
+];
+b = [60 32]';
+expected = [4 8 0 0]';
+x0 = [
+4.1128205
+7.7333333
+1.2615385
+0.0820513
+];
+xopt=karmarkar(a,b,c,x0);
+assert_close ( xopt , expected , 1.e-3 );
+
--- /dev/null
+// =============================================================================
+// Scilab ( http://www.scilab.org/ ) - This file is part of Scilab
+// Copyright (C) 2010 - DIGITEO - Michael Baudin
+//
+// This file is distributed under the same license as the Scilab package.
+// =============================================================================
+// <-- JVM NOT MANDATORY -->
+//
+// <-- Non-regression test for bug 8720 -->
+//
+// <-- Bugzilla URL -->
+// http://bugzilla.scilab.org/show_bug.cgi?id=8720
+//
+// <-- Short Description -->
+// The karmarkar function may stop too early in the iterations.
+//
+function flag = assert_close ( computed, expected, epsilon )
+ if expected==0.0 then
+ shift = norm(computed-expected);
+ else
+ shift = norm(computed-expected)/norm(expected);
+ end
+ if shift < epsilon then
+ flag = 1;
+ else
+ flag = 0;
+ end
+ if flag <> 1 then bugmes();quit;end
+endfunction
+c = 1.e-20 * [-20 -24 0 0]';
+a = [
+3 6 1 0
+4 2 0 1
+];
+b = [60 32]';
+x0 = [4.1128205 7.7333333 1.2615385 0.0820513]';
+expected = [4 8 0 0]';
+xopt = karmarkar(a,b,c,x0);
+assert_close ( xopt , expected , 1.e-4 );
--- /dev/null
+// =============================================================================
+// Scilab ( http://www.scilab.org/ ) - This file is part of Scilab
+// Copyright (C) 2010 - DIGITEO - Michael Baudin
+//
+// This file is distributed under the same license as the Scilab package.
+// =============================================================================
+// <-- JVM NOT MANDATORY -->
+//
+// <-- Non-regression test for bug 8720 -->
+//
+// <-- Bugzilla URL -->
+// http://bugzilla.scilab.org/show_bug.cgi?id=8720
+//
+// <-- Short Description -->
+// The karmarkar function may stop too early in the iterations.
+//
+
+function flag = assert_close ( computed, expected, epsilon )
+ if expected==0.0 then
+ shift = norm(computed-expected);
+ else
+ shift = norm(computed-expected)/norm(expected);
+ end
+ if shift < epsilon then
+ flag = 1;
+ else
+ flag = 0;
+ end
+ if flag <> 1 then pause,end
+endfunction
+
+c = 1.e-20 * [-20 -24 0 0]';
+a = [
+3 6 1 0
+4 2 0 1
+];
+b = [60 32]';
+x0 = [4.1128205 7.7333333 1.2615385 0.0820513]';
+expected = [4 8 0 0]';
+xopt = karmarkar(a,b,c,x0);
+assert_close ( xopt , expected , 1.e-4 );
+
--- /dev/null
+// =============================================================================
+// Scilab ( http://www.scilab.org/ ) - This file is part of Scilab
+// Copyright (C) 2010 - DIGITEO - Michael Baudin
+//
+// This file is distributed under the same license as the Scilab package.
+// =============================================================================
+// <-- JVM NOT MANDATORY -->
+//
+// <-- Non-regression test for bug 8726 -->
+//
+// <-- Bugzilla URL -->
+// http://bugzilla.scilab.org/show_bug.cgi?id=8726
+//
+// <-- Short Description -->
+// The karmarkar function may produce a division-by-zero error.
+//
+function flag = MY_assert_equal ( computed , expected )
+ if computed==expected then
+ flag = 1;
+ else
+ flag = 0;
+ end
+ if flag <> 1 then bugmes();quit;end
+endfunction
+Aeq = [
+1 -1 0
+1 1 1
+];
+beq = [0;2];
+c = [-1;-1;0];
+x0 = [1;1;0];
+xopt=karmarkar(Aeq,beq,c,x0);
+xexpected = [1;1;0];
+MY_assert_equal ( xopt , xexpected );
--- /dev/null
+// =============================================================================
+// Scilab ( http://www.scilab.org/ ) - This file is part of Scilab
+// Copyright (C) 2010 - DIGITEO - Michael Baudin
+//
+// This file is distributed under the same license as the Scilab package.
+// =============================================================================
+// <-- JVM NOT MANDATORY -->
+//
+// <-- Non-regression test for bug 8726 -->
+//
+// <-- Bugzilla URL -->
+// http://bugzilla.scilab.org/show_bug.cgi?id=8726
+//
+// <-- Short Description -->
+// The karmarkar function may produce a division-by-zero error.
+//
+
+function flag = MY_assert_equal ( computed , expected )
+ if computed==expected then
+ flag = 1;
+ else
+ flag = 0;
+ end
+ if flag <> 1 then pause,end
+endfunction
+
+
+Aeq = [
+1 -1 0
+1 1 1
+];
+beq = [0;2];
+c = [-1;-1;0];
+x0 = [1;1;0];
+xopt=karmarkar(Aeq,beq,c,x0);
+xexpected = [1;1;0];
+MY_assert_equal ( xopt , xexpected );
+
+
--- /dev/null
+// =============================================================================
+// Scilab ( http://www.scilab.org/ ) - This file is part of Scilab
+// Copyright (C) 2010 - DIGITEO - Michael Baudin
+//
+// This file is distributed under the same license as the Scilab package.
+// =============================================================================
+// <-- JVM NOT MANDATORY -->
+//
+// <-- Non-regression test for bug 8727 -->
+//
+// <-- Bugzilla URL -->
+// http://bugzilla.scilab.org/show_bug.cgi?id=8727
+//
+// <-- Short Description -->
+// The karmarkar function requires the initial guess x0.
+//
+//
+// assert_close --
+// Returns 1 if the two real matrices computed and expected are close,
+ // i.e. if the relative distance between computed and expected is less than epsilon.
+// Arguments
+// computed, expected : the two matrices to compare
+// epsilon : a small number
+//
+function flag = assert_close ( computed, expected, epsilon )
+ if ( expected == computed ) then
+ flag = 1
+ return
+ end
+ if ( expected==0.0 ) then
+ shift = norm(computed-expected);
+ else
+ shift = norm(computed-expected)/norm(expected);
+ end
+ if shift < epsilon then
+ flag = 1;
+ else
+ flag = 0;
+ end
+ if flag <> 1 then bugmes();quit;end
+endfunction
+n=11;
+Aeq = zeros(n,n+2);
+Aeq(:,1) = 2*linspace(0,1,n)';
+Aeq(:,2) = ones(11,1);
+Aeq(1:n,3:n+2) = eye(n,n);
+beq = 1 + linspace(0,1,n)'.^2;
+c=[-1;-1;zeros(n,1)];
+xopt=karmarkar(Aeq,beq,c);
+xstar = [
+ 0.5041940
+ 0.7457937
+ 0.2542063
+ 0.1633675
+ 0.0925287
+ 0.0416899
+ 0.0108511
+ 0.0000123
+ 0.0091735
+ 0.0383347
+ 0.0874959
+ 0.1566571
+ 0.2458183
+];
+assert_close ( xopt , xstar , 1.e-6 );
--- /dev/null
+// =============================================================================
+// Scilab ( http://www.scilab.org/ ) - This file is part of Scilab
+// Copyright (C) 2010 - DIGITEO - Michael Baudin
+//
+// This file is distributed under the same license as the Scilab package.
+// =============================================================================
+// <-- JVM NOT MANDATORY -->
+//
+// <-- Non-regression test for bug 8727 -->
+//
+// <-- Bugzilla URL -->
+// http://bugzilla.scilab.org/show_bug.cgi?id=8727
+//
+// <-- Short Description -->
+// The karmarkar function requires the initial guess x0.
+//
+
+//
+// assert_close --
+// Returns 1 if the two real matrices computed and expected are close,
+ // i.e. if the relative distance between computed and expected is less than epsilon.
+// Arguments
+// computed, expected : the two matrices to compare
+// epsilon : a small number
+//
+function flag = assert_close ( computed, expected, epsilon )
+ if ( expected == computed ) then
+ flag = 1
+ return
+ end
+ if ( expected==0.0 ) then
+ shift = norm(computed-expected);
+ else
+ shift = norm(computed-expected)/norm(expected);
+ end
+ if shift < epsilon then
+ flag = 1;
+ else
+ flag = 0;
+ end
+ if flag <> 1 then pause,end
+endfunction
+
+
+n=11;
+Aeq = zeros(n,n+2);
+Aeq(:,1) = 2*linspace(0,1,n)';
+Aeq(:,2) = ones(11,1);
+Aeq(1:n,3:n+2) = eye(n,n);
+beq = 1 + linspace(0,1,n)'.^2;
+c=[-1;-1;zeros(n,1)];
+xopt=karmarkar(Aeq,beq,c);
+
+xstar = [
+ 0.5041940
+ 0.7457937
+ 0.2542063
+ 0.1633675
+ 0.0925287
+ 0.0416899
+ 0.0108511
+ 0.0000123
+ 0.0091735
+ 0.0383347
+ 0.0874959
+ 0.1566571
+ 0.2458183
+];
+assert_close ( xopt , xstar , 1.e-6 );
+
--- /dev/null
+// =============================================================================
+// Scilab ( http://www.scilab.org/ ) - This file is part of Scilab
+// Copyright (C) 2010 - DIGITEO - Michael Baudin
+//
+// This file is distributed under the same license as the Scilab package.
+// =============================================================================
+// <-- JVM NOT MANDATORY -->
+//
+// <-- Non-regression test for bug 8775 -->
+//
+// <-- Bugzilla URL -->
+// http://bugzilla.scilab.org/show_bug.cgi?id=8775
+//
+// <-- Short Description -->
+// The karmarkar function might diverge toward a non-optimal point.
+//
+function flag = MY_assert_equal ( computed , expected )
+ if computed==expected then
+ flag = 1;
+ else
+ flag = 0;
+ end
+ if flag <> 1 then bugmes();quit;end
+endfunction
+// An unbounded problem.
+Aeq = [
+ 2 -2 -1 1 0
+-1 -4 1 0 1
+];
+beq = [-1;-1];
+c = [2;9;3;0;0];
+x0 = [0.2;0.7;1;1;1];
+[xopt,fopt,exitflag]=karmarkar(Aeq,beq,c,x0,0,0.999);
+MY_assert_equal ( exitflag , -2 );
--- /dev/null
+// =============================================================================
+// Scilab ( http://www.scilab.org/ ) - This file is part of Scilab
+// Copyright (C) 2010 - DIGITEO - Michael Baudin
+//
+// This file is distributed under the same license as the Scilab package.
+// =============================================================================
+// <-- JVM NOT MANDATORY -->
+//
+// <-- Non-regression test for bug 8775 -->
+//
+// <-- Bugzilla URL -->
+// http://bugzilla.scilab.org/show_bug.cgi?id=8775
+//
+// <-- Short Description -->
+// The karmarkar function might diverge toward a non-optimal point.
+//
+
+function flag = MY_assert_equal ( computed , expected )
+ if computed==expected then
+ flag = 1;
+ else
+ flag = 0;
+ end
+ if flag <> 1 then pause,end
+endfunction
+
+// An unbounded problem.
+Aeq = [
+ 2 -2 -1 1 0
+-1 -4 1 0 1
+];
+beq = [-1;-1];
+c = [2;9;3;0;0];
+x0 = [0.2;0.7;1;1;1];
+[xopt,fopt,exitflag]=karmarkar(Aeq,beq,c,x0,0,0.999);
+MY_assert_equal ( exitflag , -2 );
+
--- /dev/null
+// =============================================================================
+// Scilab ( http://www.scilab.org/ ) - This file is part of Scilab
+// Copyright (C) 2010 - DIGITEO - Michael Baudin
+//
+// This file is distributed under the same license as the Scilab package.
+// =============================================================================
+// <-- JVM NOT MANDATORY -->
+//
+// assert_close --
+// Returns 1 if the two real matrices computed and expected are close,
+// i.e. if the relative distance between computed and expected is less than epsilon.
+// Arguments
+// computed, expected : the two matrices to compare
+// epsilon : a small number
+//
+function flag = assert_close ( computed, expected, epsilon )
+ if ( expected == computed ) then
+ flag = 1
+ return
+ end
+ if ( expected==0.0 ) then
+ shift = norm(computed-expected);
+ else
+ shift = norm(computed-expected)/norm(expected);
+ end
+ if shift < epsilon then
+ flag = 1;
+ else
+ flag = 0;
+ end
+ if flag <> 1 then bugmes();quit;end
+endfunction
+//
+// assert_equal --
+// Returns 1 if the two real matrices computed and expected are equal.
+// Arguments
+// computed, expected : the two matrices to compare
+//
+function flag = assert_equal ( computed , expected )
+ if computed==expected then
+ flag = 1;
+ else
+ flag = 0;
+ end
+ if flag <> 1 then bugmes();quit;end
+endfunction
+// With slack variables:
+//
+// Minimize -20.x1 - 24.x2 subject to:
+// 3.x1 + 6.x2 + x3 = 60
+// 4.x1 + 2.x2 + x4 = 32
+// x >= 0
+c = [-20 -24 0 0]';
+Aeq = [
+3 6 1 0
+4 2 0 1
+];
+beq = [60 32]';
+xexpected = [4 8 0 0]';
+fexpected = -272;
+x0 = [
+4.1128205
+7.7333333
+1.2615385
+0.0820513
+];
+[xopt,fopt]=karmarkar(Aeq,beq,c,x0);
+assert_close ( xopt , xexpected , 1.e-3 );
+assert_close ( fopt , fexpected , 1.e-4 );
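The expected values xexpected and fexpected above can be confirmed independently of Scilab: at the optimum both slack variables x3 and x4 vanish, so (x1, x2) solves a 2x2 linear system. A minimal NumPy cross-check (NumPy is an assumption here; it is not used by the test itself):

```python
import numpy as np

# At the optimum of  min -20*x1 - 24*x2  s.t.  3*x1 + 6*x2 + x3 = 60,
# 4*x1 + 2*x2 + x4 = 32, x >= 0, the slacks x3 and x4 are zero,
# so (x1, x2) is the solution of the active 2x2 system.
A = np.array([[3.0, 6.0], [4.0, 2.0]])
rhs = np.array([60.0, 32.0])
x12 = np.linalg.solve(A, rhs)          # expected vertex (4, 8)
fopt = -20.0 * x12[0] - 24.0 * x12[1]  # expected objective -272
```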
+//
+// Configure the relative tolerance
+rtolf=1.e-6;
+[xopt,fopt]=karmarkar(Aeq,beq,c,x0,rtolf);
+assert_close ( xopt , xexpected , 1.e-4 );
+assert_close ( fopt , fexpected , 1.e-5 );
+//
+// Configure the gamma
+gam = 0.1;
+[xopt,fopt]=karmarkar(Aeq,beq,c,x0,[],gam);
+assert_close ( xopt , xexpected , 1.e-3 );
+assert_close ( fopt , fexpected , 1.e-4 );
+////////////////////////////////////////////////////////////
+//
+// Check new API (from Scilab v5.3.x).
+//
+// Check exit flag
+[xopt,fopt,exitflag]=karmarkar(Aeq,beq,c,x0);
+assert_close ( xopt , xexpected , 1.e-3 );
+assert_close ( fopt , fexpected , 1.e-4 );
+assert_equal ( exitflag , 1 );
+//
+// Check number of iterations
+[xopt,fopt,exitflag,iter]=karmarkar(Aeq,beq,c,x0);
+assert_close ( xopt , xexpected , 1.e-3 );
+assert_close ( fopt , fexpected , 1.e-4 );
+assert_equal ( exitflag , 1 );
+assert_equal ( iter>10 , %t );
+//
+// Check dual solution
+[xopt,fopt,exitflag,iter,yopt]=karmarkar(Aeq,beq,c,x0);
+lambda.ineqlin = [];
+lambda.eqlin = [28/9;8/3];
+lambda.upper = [0;0;0;0];
+lambda.lower = [0;0;28/9;8/3];
+assert_close ( xopt , xexpected , 1.e-3 );
+assert_close ( fopt , fexpected , 1.e-4 );
+assert_equal ( exitflag , 1 );
+assert_equal ( iter>10 , %t );
+assert_equal ( yopt.ineqlin , lambda.ineqlin );
+assert_close ( yopt.eqlin , lambda.eqlin , 1.e-8 );
+assert_close ( yopt.lower , lambda.lower , 1.e-8 );
+assert_equal ( yopt.upper , lambda.upper );
+//
+// Check number of iterations, with default options
+[xopt,fopt,exitflag,iter]=karmarkar(Aeq,beq,c,x0,[],[],10);
+assert_equal ( exitflag , 0 );
+assert_equal ( iter , 10 );
+//
+// Check output function
+function stop = myoutputfunction ( xopt , optimValues , state )
+ localmsg = gettext("Iteration #%3.0f, state=%s, procedure=%s, fopt=%10.3e, x=[%s], dualgap=%10.3e\n")
+ xstr = strcat(msprintf("%10.3e\n",xopt)'," ")
+ teststring = sprintf(localmsg,optimValues.iteration,state,optimValues.procedure,optimValues.fval,xstr,optimValues.dualgap)
+ stop = %f
+endfunction
+xopt=karmarkar(Aeq,beq,c,x0,[],[],[],myoutputfunction);
+assert_close ( xopt , xexpected , 1.e-3 );
+//
+// Check output function, without initial guess
+xopt=karmarkar(Aeq,beq,c,[],[],[],[],myoutputfunction);
+assert_close ( xopt , xexpected , 1.e-3 );
+//
+// Check that the output function can stop the algorithm
+function stop = myoutputfunctionStop ( xopt , optimValues , state )
+ stop = (optimValues.iteration >= 7)
+endfunction
+[xopt,fopt,exitflag,iter]=karmarkar(Aeq,beq,c,x0,[],[],[],myoutputfunctionStop);
+assert_close ( xopt , xexpected , 1.e-2 );
+assert_close ( fopt , fexpected , 1.e-3 );
+assert_equal ( exitflag , -4 );
+assert_equal ( iter , 7 );
+//
+// Check output function with additional arguments
+function stop = myoutputfunction2 ( xopt , optimValues , state , myAeq , mybeq , myc )
+ localmsg = gettext("Iteration #%3.0f, fopt=%10.3e, state=%s, ||Ax-beq||=%.3e\n")
+ teststring = sprintf(localmsg,optimValues.iteration,optimValues.fval,state,norm(myAeq*xopt-mybeq))
+ stop = %f
+endfunction
+xopt=karmarkar(Aeq,beq,c,x0,[],[],[],list(myoutputfunction2,Aeq,beq,c));
+assert_close ( xopt , xexpected , 1.e-3 );
+//
+// References
+// "Practical Optimization", Antoniou, Lu, 2007
+// Chapter 11, "Linear Programming Part I: The simplex method",
+// Example 11.9, p. 361
+// Chapter 12, "Linear Programming Part II: Interior point methods",
+// Example 12.2, p.382
+//
+// Minimize 2.x1 + 9.x2 + 3.x3
+// -2.x1 + 2.x2 + x3 - x4 = 1
+// x1 + 4.x2 - x3 - x5 = 1
+// x >= 0
+Aeq = [
+-2 2 1 -1 0
+1 4 -1 0 -1
+];
+beq = [1;1];
+c = [2;9;3;0;0];
+x0 = [0.2;0.7;1;1;1];
+gam = 0.9999;
+rtolf = 1.e-4;
+[xopt,fopt,exitflag,iter,yopt]=karmarkar(Aeq,beq,c,x0,rtolf,gam,[],myoutputfunction);
+xstar = [0 1/3 1/3 0 0]';
+fstar = 4;
+lambda.ineqlin = [];
+lambda.eqlin = [-3.5;-0.5];
+lambda.upper = [0;0;0;0;0];
+lambda.lower = [8.5;0;0;3.5;0.5];
+assert_close ( xopt , xstar , 1.e-4 );
+assert_close ( fopt , fstar , 1.e-4 );
+assert_equal ( exitflag , 1 );
+assert_equal ( iter , 4 );
+assert_equal ( yopt.ineqlin , lambda.ineqlin );
+assert_close ( yopt.eqlin , lambda.eqlin , 1.e-6 );
+assert_close ( yopt.lower , lambda.lower , 1.e-6 );
+assert_equal ( yopt.upper , lambda.upper );
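The Antoniou-Lu reference solution xstar = [0 1/3 1/3 0 0]' with fstar = 4 can likewise be sanity-checked by substituting xstar into the equality constraints and the objective. A small NumPy sketch (again an assumption, independent of the Scilab test):

```python
import numpy as np

# Feasibility and objective check for  min c'x  s.t.  Aeq x = beq, x >= 0
# with the reference solution from Antoniou & Lu, Example 12.2.
Aeq = np.array([[-2.0, 2.0,  1.0, -1.0,  0.0],
                [ 1.0, 4.0, -1.0,  0.0, -1.0]])
beq = np.array([1.0, 1.0])
c = np.array([2.0, 9.0, 3.0, 0.0, 0.0])
xstar = np.array([0.0, 1/3, 1/3, 0.0, 0.0])
residual = np.linalg.norm(Aeq @ xstar - beq)  # equality constraints hold
fstar = c @ xstar                             # expected objective 4
```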
+//
+// Minimize -x1 -x2
+// x1 - x2 = 0
+// x1 + x2 + x3 = 2
+// x >= 0
+//
+// Let karmarkar find a feasible x0.
+Aeq = [
+1 -1 0
+1 1 1
+];
+beq = [0;2];
+c = [-1;-1;0];
+[xopt,fopt,exitflag,iter,yopt]=karmarkar(Aeq,beq,c);
+xstar = [1;1;0];
+fstar = -2;
+lambda.ineqlin = [];
+lambda.eqlin = [0;1];
+lambda.upper = [0;0;0];
+lambda.lower = [0;0;1];
+assert_close ( xopt , xstar , 1.e-4 );
+assert_close ( fopt , fstar , 1.e-4 );
+assert_equal ( exitflag , 1 );
+assert_equal ( iter > 0 , %t );
+assert_equal ( yopt.ineqlin , lambda.ineqlin );
+assert_close ( yopt.eqlin , lambda.eqlin , 1.e-6 );
+assert_close ( yopt.lower , lambda.lower , 1.e-6 );
+assert_equal ( yopt.upper , lambda.upper );
+//
+// Give a linear inequality A*x <= b.
+//
+// Minimize -x1 -x2
+// x1 - x2 = 0
+// x1 + x2 <= 2
+//
+// Give x0.
+Aeq = [
+1 -1
+];
+beq = 0;
+c = [-1;-1];
+A = [1 1];
+b = 2;
+x0 = [0.1;0.1];
+[xopt,fopt,exitflag,iter,yopt]=karmarkar(Aeq,beq,c,x0,[],[],[],[],A,b);
+xstar=[1 1]';
+fstar = c'*xstar;
+lambda.ineqlin = 1;
+lambda.eqlin = 0;
+lambda.upper = [0;0];
+lambda.lower = [0;0];
+assert_close ( xopt , xstar , 1.e-4 );
+assert_close ( fopt , fstar , 1.e-4 );
+assert_equal ( exitflag , 1 );
+assert_equal ( iter > 0 , %t );
+assert_close ( yopt.ineqlin , lambda.ineqlin , 1.e-6 );
+assert_close ( yopt.eqlin , lambda.eqlin , 1.e-6 );
+assert_equal ( yopt.lower , lambda.lower );
+assert_equal ( yopt.upper , lambda.upper );
+//
+// Give a linear inequality A*x <= b
+//
+// Minimize -x1 -x2
+// x1 - x2 = 0
+// x1 + x2 <= 2
+//
+// Do not give x0.
+Aeq = [
+1 -1
+];
+beq = 0;
+c = [-1;-1];
+A = [1 1];
+b = 2;
+function stop = myoutputfunction2 ( xopt , optimValues , state )
+ assert_equal ( size(xopt) , [2 1] );
+ assert_equal ( or(state==["init","iter","done"]) , %t );
+ assert_equal ( or(optimValues.procedure==["x0","x*"]) , %t );
+ stop = %f
+endfunction
+Warning: redefining function: myoutputfunction2 . Use funcprot(0) to avoid this message
+
+[xopt,fopt,exitflag,iter,yopt]=karmarkar(Aeq,beq,c,[],[],[],[],myoutputfunction2,A,b);
+xstar=[1 1]';
+fstar = c'*xstar;
+lambda.ineqlin = 1;
+lambda.eqlin = 0;
+lambda.upper = [0;0];
+lambda.lower = [0;0];
+assert_close ( xopt , xstar , 1.e-4 );
+assert_close ( fopt , fstar , 1.e-4 );
+assert_equal ( exitflag , 1 );
+assert_equal ( iter > 0 , %t );
+assert_close ( yopt.ineqlin , lambda.ineqlin , 1.e-6 );
+assert_close ( yopt.eqlin , lambda.eqlin , 1.e-6 );
+assert_equal ( yopt.lower , lambda.lower );
+assert_equal ( yopt.upper , lambda.upper );
+//
+// Minimize -20.x1 - 24.x2 subject to:
+// 3.x1 + 6.x2 >= 60
+// 4.x1 + 2.x2 >= 32
+c = [-20 -24]';
+Aeq=[];
+beq=[];
+A = [
+3 6
+4 2
+];
+b = [60 32]';
+[xopt,fopt,exitflag,iter,yopt]=karmarkar(Aeq,beq,c,[],[],[],[],[],A,b);
+xstar = [4 8]';
+fstar = c'*xstar;
+lambda.ineqlin = [28/9;8/3];
+lambda.eqlin = [];
+lambda.upper = [0;0];
+lambda.lower = [0;0];
+assert_close ( xopt , xstar , 1.e-4 );
+assert_close ( fopt , fstar , 1.e-4 );
+assert_equal ( exitflag , 1 );
+assert_equal ( iter > 0 , %t );
+assert_close ( yopt.ineqlin , lambda.ineqlin , 1.e-6 );
+assert_close ( yopt.eqlin , lambda.eqlin , 1.e-6 );
+assert_equal ( yopt.lower , lambda.lower );
+assert_equal ( yopt.upper , lambda.upper );
+//
+// References
+// "Practical Optimization", Antoniou, Lu, 2007
+// Chapter 11, "Linear Programming Part I: The simplex method",
+// Example 11.9, p. 361
+// Chapter 12, "Linear Programming Part II: Interior point methods",
+// Example 12.2, p.382
+//
+// Minimize 2.x1 + 9.x2 + 3.x3
+// +2.x1 - 2.x2 - x3 <= -1
+// -x1 - 4.x2 + x3 <= -1
+// x >= 0
+//
+// Give x0
+Aeq=[];
+beq=[];
+A = [
+ 2 -2 -1
+-1 -4 1
+-1 0 0
+ 0 -1 0
+ 0 0 -1
+];
+b = [-1;-1;0;0;0];
+c = [2;9;3];
+x0 = [0.2;0.7;1];
+[xopt,fopt,exitflag,iter]=karmarkar(Aeq,beq,c,x0,[],[],[],[],A,b);
+xstar = [0 1/3 1/3]';
+fstar = c'*xstar;
+assert_close ( xopt , xstar , 1.e-4 );
+assert_close ( fopt , fstar , 1.e-4 );
+assert_equal ( exitflag , 1 );
+assert_equal ( iter > 0 , %t );
+//
+// Let x0 be found by the algorithm.
+[xopt,fopt,exitflag,iter]=karmarkar(Aeq,beq,c,[],[],[],[],[],A,b);
+xstar = [0 1/3 1/3]';
+fstar = c'*xstar;
+assert_close ( xopt , xstar , 1.e-4 );
+assert_close ( fopt , fstar , 1.e-4 );
+assert_equal ( exitflag , 1 );
+assert_equal ( iter > 0 , %t );
+/////////////////////////////////////////////////////
+// References
+// "Lipsol toolbox", Yin Zhang, Scilab port by Rubio Scola, example0.sce.
+//
+// Minimize 2.x1 + 5.x2 - 2.5.x3
+// x1 + S4 x3 <= 5
+// E2 x1 - x2 - x3 <= 0
+// x1 <= 2
+// x2 <= %inf
+// x3 <= 3
+// - x1 <= 2
+// - x2 <= -1
+// - x3 <= 0
+//
+// where
+// S4 = sin(pi/4)/4
+// E2 = exp(2)
+//
+S4 = sin(%pi/4)/4;
+E2 = exp(2);
+c = [ 2; 5; -2.5];
+A = [
+ 1 0 S4
+ E2 -1 -1
+ 1 0 0
+ 0 0 1
+ -1 0 0
+ 0 -1 0
+ 0 0 -1
+];
+b = [ 5; 0;2;3;2;-1;0];
+Aeq = [];
+beq = [];
+xstar = [-2;1;3];
+fstar = c'*xstar;
+lambda.ineqlin = [0;0;0;2.5;2;5;0];
+lambda.eqlin = [];
+lambda.upper = [0;0;0];
+lambda.lower = [0;0;0];
+[xopt,fopt,exitflag,iter,yopt]=karmarkar(Aeq,beq,c,[],[],[],[],[],A,b);
+assert_close ( xopt , xstar , 1.e-4 );
+assert_close ( fopt , fstar , 1.e-4 );
+assert_equal ( exitflag , 1 );
+assert_equal ( iter > 0 , %t );
+assert_close ( yopt.ineqlin , lambda.ineqlin , 1.e-9 );
+assert_equal ( yopt.eqlin , lambda.eqlin );
+assert_equal ( yopt.lower , lambda.lower );
+assert_equal ( yopt.upper , lambda.upper );
+//
+// Minimize 2.x1 + 9.x2 + 3.x3
+// 2 x1 - 2 x2 - x3 <= -1
+// -1 x1 - 4 x2 + x3 <= -1
+// - x1 <= -1
+// - x2 <= 0
+// - x3 <= 0
+//
+Aeq=[];
+beq=[];
+A = [
+ 2 -2 -1
+-1 -4 1
+-1 0 0
+ 0 -1 0
+ 0 0 -1
+];
+b = [-1;-1;-1;0;0];
+c = [2;9;3];
+[xopt,fopt,exitflag,iter,yopt]=karmarkar(Aeq,beq,c,[],[],[],[],[],A,b);
+xstar = [1;0.5;2];
+fstar = c'*xstar;
+lambda.ineqlin = [3.5;0.5;8.5;0;0];
+lambda.eqlin = [];
+lambda.upper = [0;0;0];
+lambda.lower = [0;0;0];
+assert_close ( xopt , xstar , 1.e-4 );
+assert_close ( fopt , fstar , 1.e-4 );
+assert_equal ( exitflag , 1 );
+assert_equal ( iter > 0 , %t );
+assert_close ( yopt.ineqlin , lambda.ineqlin , 1.e-9 );
+assert_equal ( yopt.eqlin , lambda.eqlin );
+assert_equal ( yopt.lower , lambda.lower );
+assert_equal ( yopt.upper , lambda.upper );
+//////////////////////////////////////////////////////////////////////////
+//
+// Set lower bound and do not give x0.
+Aeq=[];
+beq=[];
+A = [
+ 2 -2 -1
+-1 -4 1
+];
+b = [-1;-1];
+c = [2;9;3];
+lb = [1;0;0];
+[xopt,fopt,exitflag,iter,yopt]=karmarkar(Aeq,beq,c,[],[],[],[],[],A,b,lb);
+xstar = [1;0.5;2];
+fstar = c'*xstar;
+lambda.ineqlin = [3.5;0.5];
+lambda.eqlin = [];
+lambda.upper = [0;0;0];
+lambda.lower = [8.5;0;0];
+assert_close ( xopt , xstar , 1.e-4 );
+assert_close ( fopt , fstar , 1.e-4 );
+assert_equal ( exitflag , 1 );
+assert_equal ( iter > 0 , %t );
+assert_close ( yopt.ineqlin , lambda.ineqlin , 1.e-9 );
+assert_equal ( yopt.eqlin , lambda.eqlin );
+assert_close ( yopt.lower , lambda.lower , 1.e-9 );
+assert_equal ( yopt.upper , lambda.upper );
+//
+// Set lower bound and give x0.
+Aeq=[];
+beq=[];
+A = [
+ 2 -2 -1
+-1 -4 1
+];
+b = [-1;-1];
+c = [2;9;3];
+x0 = [1.337848;0.885225;2.535279];
+lb = [1;0;0];
+[xopt,fopt,exitflag,iter,yopt]=karmarkar(Aeq,beq,c,x0,[],[],[],[],A,b,lb);
+xstar = [1;0.5;2];
+fstar = c'*xstar;
+lambda.ineqlin = [3.5;0.5];
+lambda.eqlin = [];
+lambda.upper = [0;0;0];
+lambda.lower = [8.5;0;0];
+assert_close ( xopt , xstar , 1.e-4 );
+assert_close ( fopt , fstar , 1.e-4 );
+assert_equal ( exitflag , 1 );
+assert_equal ( iter > 0 , %t );
+assert_close ( yopt.ineqlin , lambda.ineqlin , 1.e-9 );
+assert_close ( yopt.eqlin , lambda.eqlin , %eps );
+assert_close ( yopt.lower , lambda.lower , 1.e-9 );
+assert_close ( yopt.upper , lambda.upper , 1.e-9 );
+// References
+// LIPSOL is a set of Linear-programming Interior-Point SOLvers written
+// by Yin Zhang.
+// The original Matlab-based code has been adapted to Scilab
+// by H. Rubio Scola.
+//
+// Minimize 2 x1 + 5 x2 - 2.5 x3
+// x1 + S4 x3 <= 5
+// E2 x1 - x2 - x3 <= 0
+// -2 <= x1 <= 2
+// 1 <= x2
+// 0 <= x3 <= 3
+// where:
+// S4 = sin(pi/4)/4
+// E2 = exp(2)
+//
+// Do not give x0.
+c = [ 2; 5; -2.5];
+S4 = sin(%pi/4)/4;
+E2 = exp(2);
+A = [
+ 1 0 S4
+ E2 -1 -1
+];
+b = [ 5; 0];
+lb = [ -2; 1 ; 0 ];
+ub = [ 2; %inf; 3 ];
+Aeq = [];
+beq = [];
+[xopt,fopt,exitflag,iter,yopt]=karmarkar(Aeq,beq,c,[],[],[],[],[],A,b,lb,ub);
+xstar = [-2;1;3];
+fstar = c'*xstar;
+lambda.ineqlin = [0;0];
+lambda.eqlin = [];
+lambda.upper = [0;0;2.5];
+lambda.lower = [2;5;0];
+assert_close ( xopt , xstar , 1.e-4 );
+assert_close ( fopt , fstar , 1.e-4 );
+assert_equal ( exitflag , 1 );
+assert_equal ( iter > 0 , %t );
+assert_close ( yopt.ineqlin , lambda.ineqlin , 1.e-9 );
+assert_close ( yopt.eqlin , lambda.eqlin , %eps );
+assert_close ( yopt.lower , lambda.lower , 1.e-9 );
+assert_close ( yopt.upper , lambda.upper , 1.e-9 );
+//
+// An unbounded problem.
+c = [-20 -24]';
+A = [
+-3 -6
+-4 -2
+];
+b = [-60 -32]';
+[xopt,fopt,exitflag,iter,yopt]=karmarkar([],[],c,[],[],[],[],[],A,b);
+assert_equal ( exitflag , -2 );
+//
+// "Linear Programming in Matlab"
+// Ferris, Mangasarian, Wright
+// 2008
+// Chapter 3, "The Simplex Method", Exercise 3-4-2 1.
+//
+// An infeasible problem.
+// Minimize -3 x1 + x2
+// - x1 - x2 >= -2
+// 2 x1 + 2 x2 >= 10
+// x >= 0
+c = [-3;1];
+A=[
+ -1 -1
+ 2 2
+];
+A=-A;
+b=[-2;10];
+b=-b;
+lb=[0;0];
+[xopt,fopt,exitflag,iter,yopt]=karmarkar([],[],c,[],[],[],[],[],A,b,lb);
+assert_equal ( xopt , [] );
+assert_equal ( fopt , [] );
+assert_equal ( exitflag , -1 );
+assert_equal ( iter > 0 , %t );
+assert_equal ( yopt.ineqlin , [] );
+assert_equal ( yopt.eqlin , [] );
+assert_equal ( yopt.lower , [] );
+assert_equal ( yopt.upper , [] );
+//
+// "Linear Programming in Matlab"
+// Ferris, Mangasarian, Wright
+// 2008
+// Chapter 3, "The Simplex Method", Exercise 3-4-2 2.
+//
+// An unbounded problem.
+// Minimize -x1 + x2
+// 2 x1 - x2 >= 1
+// x1 + 2 x2 >= 2
+// x >= 0
+c = [-1;1];
+A=[
+ 2 -1
+ 1 2
+];
+A=-A;
+b=[1;2];
+b=-b;
+lb=[0;0];
+[xopt,fopt,exitflag,iter,yopt]=karmarkar([],[],c,[],[],[],[],[],A,b,lb);
+assert_equal ( exitflag , -2 );
+//
+// "Linear and Nonlinear Optimization"
+// Griva, Nash, Sofer
+// 2009
+// Chapter 5, "The Simplex Method", Example 5.3
+//
+// An unbounded problem.
+// Minimize -x1 - 2 x2
+// -1 x1 + x2 <= 2
+// -2 x1 + x2 <= 1
+// x >= 0
+c = [-1;-2];
+A=[
+ -1 1
+ -2 1
+];
+b=[2;1];
+lb=[0;0];
+[xopt,fopt,exitflag,iter,yopt]=karmarkar([],[],c,[],[],[],[],[],A,b,lb);
+assert_equal ( exitflag , -2 );
+//
+// "Linear and Nonlinear Optimization"
+// Griva, Nash, Sofer
+// 2009
+// Chapter 5, "The Simplex Method", Example 5.6
+//
+// An infeasible problem.
+// Minimize -x1
+// - x1 - x2 <= -6
+// 2 x1 + 3 x2 <= 4
+// x >= 0
+c = [-1;0];
+A=[
+ -1 -1
+ 2 3
+];
+b=[-6;4];
+lb=[0;0];
+[xopt,fopt,exitflag,iter,yopt]=karmarkar([],[],c,[],[],[],[],[],A,b,lb);
+assert_equal ( exitflag , -1 );
+//
+// Example from the help page
+// Check that the output points remain feasible
+function stop = myoutputfunction3 ( xopt , optimValues , state , A , b )
+ assert_equal ( and(A*xopt<=b) , %t );
+ stop = %f
+endfunction
+n=11;
+A = [2*linspace(0,1,n)',ones(n,1)];
+b = 1 + linspace(0,1,n)'.^2;
+c=[-1;-1];
+xopt=karmarkar([],[],c,[],[],[],[],list(myoutputfunction3,A,b),A,b);
+xstar = [0.5005127;0.7494803];
+assert_close ( xopt , xstar , 1.e-4 );
--- /dev/null
+// =============================================================================
+// Scilab ( http://www.scilab.org/ ) - This file is part of Scilab
+// Copyright (C) 2010 - DIGITEO - Michael Baudin
+//
+// This file is distributed under the same license as the Scilab package.
+// =============================================================================
+
+// <-- JVM NOT MANDATORY -->
+
+//
+// assert_close --
+// Returns 1 if the two real matrices computed and expected are close,
+// i.e. if the relative distance between computed and expected is less than epsilon.
+// Arguments
+// computed, expected : the two matrices to compare
+// epsilon : a small number
+//
+function flag = assert_close ( computed, expected, epsilon )
+ if ( expected == computed ) then
+ flag = 1
+ return
+ end
+ if ( expected==0.0 ) then
+ shift = norm(computed-expected);
+ else
+ shift = norm(computed-expected)/norm(expected);
+ end
+ if shift < epsilon then
+ flag = 1;
+ else
+ flag = 0;
+ end
+ if flag <> 1 then pause,end
+endfunction
+//
+// assert_equal --
+// Returns 1 if the two real matrices computed and expected are equal.
+// Arguments
+// computed, expected : the two matrices to compare
+//
+function flag = assert_equal ( computed , expected )
+ if computed==expected then
+ flag = 1;
+ else
+ flag = 0;
+ end
+ if flag <> 1 then pause,end
+endfunction
+
+
+// With slack variables:
+//
+// Minimize -20.x1 - 24.x2 subject to:
+// 3.x1 + 6.x2 + x3 = 60
+// 4.x1 + 2.x2 + x4 = 32
+// x >= 0
+c = [-20 -24 0 0]';
+Aeq = [
+3 6 1 0
+4 2 0 1
+];
+beq = [60 32]';
+xexpected = [4 8 0 0]';
+fexpected = -272;
+x0 = [
+4.1128205
+7.7333333
+1.2615385
+0.0820513
+];
+[xopt,fopt]=karmarkar(Aeq,beq,c,x0);
+assert_close ( xopt , xexpected , 1.e-3 );
+assert_close ( fopt , fexpected , 1.e-4 );
+//
+// Configure the relative tolerance
+rtolf=1.e-6;
+[xopt,fopt]=karmarkar(Aeq,beq,c,x0,rtolf);
+assert_close ( xopt , xexpected , 1.e-4 );
+assert_close ( fopt , fexpected , 1.e-5 );
+//
+// Configure the gamma
+gam = 0.1;
+[xopt,fopt]=karmarkar(Aeq,beq,c,x0,[],gam);
+assert_close ( xopt , xexpected , 1.e-3 );
+assert_close ( fopt , fexpected , 1.e-4 );
+////////////////////////////////////////////////////////////
+//
+// Check new API (from Scilab v5.3.x).
+//
+// Check exit flag
+[xopt,fopt,exitflag]=karmarkar(Aeq,beq,c,x0);
+assert_close ( xopt , xexpected , 1.e-3 );
+assert_close ( fopt , fexpected , 1.e-4 );
+assert_equal ( exitflag , 1 );
+//
+// Check number of iterations
+[xopt,fopt,exitflag,iter]=karmarkar(Aeq,beq,c,x0);
+assert_close ( xopt , xexpected , 1.e-3 );
+assert_close ( fopt , fexpected , 1.e-4 );
+assert_equal ( exitflag , 1 );
+assert_equal ( iter>10 , %t );
+//
+// Check dual solution
+[xopt,fopt,exitflag,iter,yopt]=karmarkar(Aeq,beq,c,x0);
+lambda.ineqlin = [];
+lambda.eqlin = [28/9;8/3];
+lambda.upper = [0;0;0;0];
+lambda.lower = [0;0;28/9;8/3];
+assert_close ( xopt , xexpected , 1.e-3 );
+assert_close ( fopt , fexpected , 1.e-4 );
+assert_equal ( exitflag , 1 );
+assert_equal ( iter>10 , %t );
+assert_equal ( yopt.ineqlin , lambda.ineqlin );
+assert_close ( yopt.eqlin , lambda.eqlin , 1.e-8 );
+assert_close ( yopt.lower , lambda.lower , 1.e-8 );
+assert_equal ( yopt.upper , lambda.upper );
+//
+// Check number of iterations, with default options
+[xopt,fopt,exitflag,iter]=karmarkar(Aeq,beq,c,x0,[],[],10);
+assert_equal ( exitflag , 0 );
+assert_equal ( iter , 10 );
+//
+// Check output function
+function stop = myoutputfunction ( xopt , optimValues , state )
+ localmsg = gettext("Iteration #%3.0f, state=%s, procedure=%s, fopt=%10.3e, x=[%s], dualgap=%10.3e\n")
+ xstr = strcat(msprintf("%10.3e\n",xopt)'," ")
+ teststring = sprintf(localmsg,optimValues.iteration,state,optimValues.procedure,optimValues.fval,xstr,optimValues.dualgap)
+ stop = %f
+endfunction
+xopt=karmarkar(Aeq,beq,c,x0,[],[],[],myoutputfunction);
+assert_close ( xopt , xexpected , 1.e-3 );
+//
+// Check output function, without initial guess
+xopt=karmarkar(Aeq,beq,c,[],[],[],[],myoutputfunction);
+assert_close ( xopt , xexpected , 1.e-3 );
+//
+// Check that the output function can stop the algorithm
+function stop = myoutputfunctionStop ( xopt , optimValues , state )
+ stop = (optimValues.iteration >= 7)
+endfunction
+[xopt,fopt,exitflag,iter]=karmarkar(Aeq,beq,c,x0,[],[],[],myoutputfunctionStop);
+assert_close ( xopt , xexpected , 1.e-2 );
+assert_close ( fopt , fexpected , 1.e-3 );
+assert_equal ( exitflag , -4 );
+assert_equal ( iter , 7 );
+//
+// Check output function with additional arguments
+function stop = myoutputfunction2 ( xopt , optimValues , state , myAeq , mybeq , myc )
+ localmsg = gettext("Iteration #%3.0f, fopt=%10.3e, state=%s, ||Ax-beq||=%.3e\n")
+ teststring = sprintf(localmsg,optimValues.iteration,optimValues.fval,state,norm(myAeq*xopt-mybeq))
+ stop = %f
+endfunction
+xopt=karmarkar(Aeq,beq,c,x0,[],[],[],list(myoutputfunction2,Aeq,beq,c));
+assert_close ( xopt , xexpected , 1.e-3 );
+//
+// References
+// "Practical Optimization", Antoniou, Lu, 2007
+// Chapter 11, "Linear Programming Part I: The simplex method",
+// Example 11.9, p. 361
+// Chapter 12, "Linear Programming Part II: Interior point methods",
+// Example 12.2, p.382
+//
+// Minimize 2.x1 + 9.x2 + 3.x3
+// -2.x1 + 2.x2 + x3 - x4 = 1
+// x1 + 4.x2 - x3 - x5 = 1
+// x >= 0
+Aeq = [
+-2 2 1 -1 0
+1 4 -1 0 -1
+];
+beq = [1;1];
+c = [2;9;3;0;0];
+x0 = [0.2;0.7;1;1;1];
+gam = 0.9999;
+rtolf = 1.e-4;
+[xopt,fopt,exitflag,iter,yopt]=karmarkar(Aeq,beq,c,x0,rtolf,gam,[],myoutputfunction);
+xstar = [0 1/3 1/3 0 0]';
+fstar = 4;
+lambda.ineqlin = [];
+lambda.eqlin = [-3.5;-0.5];
+lambda.upper = [0;0;0;0;0];
+lambda.lower = [8.5;0;0;3.5;0.5];
+assert_close ( xopt , xstar , 1.e-4 );
+assert_close ( fopt , fstar , 1.e-4 );
+assert_equal ( exitflag , 1 );
+assert_equal ( iter , 4 );
+assert_equal ( yopt.ineqlin , lambda.ineqlin );
+assert_close ( yopt.eqlin , lambda.eqlin , 1.e-6 );
+assert_close ( yopt.lower , lambda.lower , 1.e-6 );
+assert_equal ( yopt.upper , lambda.upper );
+//
+// Minimize -x1 -x2
+// x1 - x2 = 0
+// x1 + x2 + x3 = 2
+// x >= 0
+//
+// Let karmarkar find a feasible x0.
+Aeq = [
+1 -1 0
+1 1 1
+];
+beq = [0;2];
+c = [-1;-1;0];
+[xopt,fopt,exitflag,iter,yopt]=karmarkar(Aeq,beq,c);
+xstar = [1;1;0];
+fstar = -2;
+lambda.ineqlin = [];
+lambda.eqlin = [0;1];
+lambda.upper = [0;0;0];
+lambda.lower = [0;0;1];
+assert_close ( xopt , xstar , 1.e-4 );
+assert_close ( fopt , fstar , 1.e-4 );
+assert_equal ( exitflag , 1 );
+assert_equal ( iter > 0 , %t );
+assert_equal ( yopt.ineqlin , lambda.ineqlin );
+assert_close ( yopt.eqlin , lambda.eqlin , 1.e-6 );
+assert_close ( yopt.lower , lambda.lower , 1.e-6 );
+assert_equal ( yopt.upper , lambda.upper );
+//
+// Give a linear inequality A*x <= b.
+//
+// Minimize -x1 -x2
+// x1 - x2 = 0
+// x1 + x2 <= 2
+//
+// Give x0.
+Aeq = [
+1 -1
+];
+beq = 0;
+c = [-1;-1];
+A = [1 1];
+b = 2;
+x0 = [0.1;0.1];
+[xopt,fopt,exitflag,iter,yopt]=karmarkar(Aeq,beq,c,x0,[],[],[],[],A,b);
+xstar=[1 1]';
+fstar = c'*xstar;
+lambda.ineqlin = 1;
+lambda.eqlin = 0;
+lambda.upper = [0;0];
+lambda.lower = [0;0];
+assert_close ( xopt , xstar , 1.e-4 );
+assert_close ( fopt , fstar , 1.e-4 );
+assert_equal ( exitflag , 1 );
+assert_equal ( iter > 0 , %t );
+assert_close ( yopt.ineqlin , lambda.ineqlin , 1.e-6 );
+assert_close ( yopt.eqlin , lambda.eqlin , 1.e-6 );
+assert_equal ( yopt.lower , lambda.lower );
+assert_equal ( yopt.upper , lambda.upper );
+//
+// Give a linear inequality A*x <= b
+//
+// Minimize -x1 -x2
+// x1 - x2 = 0
+// x1 + x2 <= 2
+//
+// Do not give x0.
+Aeq = [
+1 -1
+];
+beq = 0;
+c = [-1;-1];
+A = [1 1];
+b = 2;
+function stop = myoutputfunction2 ( xopt , optimValues , state )
+ assert_equal ( size(xopt) , [2 1] );
+ assert_equal ( or(state==["init","iter","done"]) , %t );
+ assert_equal ( or(optimValues.procedure==["x0","x*"]) , %t );
+ stop = %f
+endfunction
+[xopt,fopt,exitflag,iter,yopt]=karmarkar(Aeq,beq,c,[],[],[],[],myoutputfunction2,A,b);
+xstar=[1 1]';
+fstar = c'*xstar;
+lambda.ineqlin = 1;
+lambda.eqlin = 0;
+lambda.upper = [0;0];
+lambda.lower = [0;0];
+assert_close ( xopt , xstar , 1.e-4 );
+assert_close ( fopt , fstar , 1.e-4 );
+assert_equal ( exitflag , 1 );
+assert_equal ( iter > 0 , %t );
+assert_close ( yopt.ineqlin , lambda.ineqlin , 1.e-6 );
+assert_close ( yopt.eqlin , lambda.eqlin , 1.e-6 );
+assert_equal ( yopt.lower , lambda.lower );
+assert_equal ( yopt.upper , lambda.upper );
+//
+// Minimize -20.x1 - 24.x2 subject to:
+// 3.x1 + 6.x2 >= 60
+// 4.x1 + 2.x2 >= 32
+c = [-20 -24]';
+Aeq=[];
+beq=[];
+A = [
+3 6
+4 2
+];
+b = [60 32]';
+[xopt,fopt,exitflag,iter,yopt]=karmarkar(Aeq,beq,c,[],[],[],[],[],A,b);
+xstar = [4 8]';
+fstar = c'*xstar;
+lambda.ineqlin = [28/9;8/3];
+lambda.eqlin = [];
+lambda.upper = [0;0];
+lambda.lower = [0;0];
+assert_close ( xopt , xstar , 1.e-4 );
+assert_close ( fopt , fstar , 1.e-4 );
+assert_equal ( exitflag , 1 );
+assert_equal ( iter > 0 , %t );
+assert_close ( yopt.ineqlin , lambda.ineqlin , 1.e-6 );
+assert_close ( yopt.eqlin , lambda.eqlin , 1.e-6 );
+assert_equal ( yopt.lower , lambda.lower );
+assert_equal ( yopt.upper , lambda.upper );
+//
+// References
+// "Practical Optimization", Antoniou, Lu, 2007
+// Chapter 11, "Linear Programming Part I: The simplex method",
+// Example 11.9, p. 361
+// Chapter 12, "Linear Programming Part II: Interior point methods",
+// Example 12.2, p.382
+//
+// Minimize 2.x1 + 9.x2 + 3.x3
+// +2.x1 - 2.x2 - x3 <= -1
+// -x1 - 4.x2 + x3 <= -1
+// x >= 0
+//
+// Give x0
+Aeq=[];
+beq=[];
+A = [
+ 2 -2 -1
+-1 -4 1
+-1 0 0
+ 0 -1 0
+ 0 0 -1
+];
+b = [-1;-1;0;0;0];
+c = [2;9;3];
+x0 = [0.2;0.7;1];
+[xopt,fopt,exitflag,iter]=karmarkar(Aeq,beq,c,x0,[],[],[],[],A,b);
+xstar = [0 1/3 1/3]';
+fstar = c'*xstar;
+assert_close ( xopt , xstar , 1.e-4 );
+assert_close ( fopt , fstar , 1.e-4 );
+assert_equal ( exitflag , 1 );
+assert_equal ( iter > 0 , %t );
+//
+// Let x0 be found by the algorithm.
+[xopt,fopt,exitflag,iter]=karmarkar(Aeq,beq,c,[],[],[],[],[],A,b);
+xstar = [0 1/3 1/3]';
+fstar = c'*xstar;
+assert_close ( xopt , xstar , 1.e-4 );
+assert_close ( fopt , fstar , 1.e-4 );
+assert_equal ( exitflag , 1 );
+assert_equal ( iter > 0 , %t );
+/////////////////////////////////////////////////////
+// References
+// "Lipsol toolbox", Yin Zhang, Scilab port by Rubio Scola, example0.sce.
+//
+// Minimize 2.x1 + 5.x2 - 2.5.x3
+// x1 + S4 x3 <= 5
+// E2 x1 - x2 - x3 <= 0
+// x1 <= 2
+// x2 <= %inf
+// x3 <= 3
+// - x1 <= 2
+// - x2 <= -1
+// - x3 <= 0
+//
+// where
+// S4 = sin(pi/4)/4
+// E2 = exp(2)
+//
+
+S4 = sin(%pi/4)/4;
+E2 = exp(2);
+c = [ 2; 5; -2.5];
+A = [
+ 1 0 S4
+ E2 -1 -1
+ 1 0 0
+ 0 0 1
+ -1 0 0
+ 0 -1 0
+ 0 0 -1
+];
+b = [ 5; 0;2;3;2;-1;0];
+Aeq = [];
+beq = [];
+xstar = [-2;1;3];
+fstar = c'*xstar;
+lambda.ineqlin = [0;0;0;2.5;2;5;0];
+lambda.eqlin = [];
+lambda.upper = [0;0;0];
+lambda.lower = [0;0;0];
+[xopt,fopt,exitflag,iter,yopt]=karmarkar(Aeq,beq,c,[],[],[],[],[],A,b);
+assert_close ( xopt , xstar , 1.e-4 );
+assert_close ( fopt , fstar , 1.e-4 );
+assert_equal ( exitflag , 1 );
+assert_equal ( iter > 0 , %t );
+assert_close ( yopt.ineqlin , lambda.ineqlin , 1.e-9 );
+assert_equal ( yopt.eqlin , lambda.eqlin );
+assert_equal ( yopt.lower , lambda.lower );
+assert_equal ( yopt.upper , lambda.upper );
+
+//
+// Minimize 2.x1 + 9.x2 + 3.x3
+// 2 x1 - 2 x2 - x3 <= -1
+// -1 x1 - 4 x2 + x3 <= -1
+// - x1 <= -1
+// - x2 <= 0
+// - x3 <= 0
+//
+Aeq=[];
+beq=[];
+A = [
+ 2 -2 -1
+-1 -4 1
+-1 0 0
+ 0 -1 0
+ 0 0 -1
+];
+b = [-1;-1;-1;0;0];
+c = [2;9;3];
+[xopt,fopt,exitflag,iter,yopt]=karmarkar(Aeq,beq,c,[],[],[],[],[],A,b);
+xstar = [1;0.5;2];
+fstar = c'*xstar;
+lambda.ineqlin = [3.5;0.5;8.5;0;0];
+lambda.eqlin = [];
+lambda.upper = [0;0;0];
+lambda.lower = [0;0;0];
+assert_close ( xopt , xstar , 1.e-4 );
+assert_close ( fopt , fstar , 1.e-4 );
+assert_equal ( exitflag , 1 );
+assert_equal ( iter > 0 , %t );
+assert_close ( yopt.ineqlin , lambda.ineqlin , 1.e-9 );
+assert_equal ( yopt.eqlin , lambda.eqlin );
+assert_equal ( yopt.lower , lambda.lower );
+assert_equal ( yopt.upper , lambda.upper );
+
+//////////////////////////////////////////////////////////////////////////
+//
+// Set lower bound and do not give x0.
+Aeq=[];
+beq=[];
+A = [
+ 2 -2 -1
+-1 -4 1
+];
+b = [-1;-1];
+c = [2;9;3];
+lb = [1;0;0];
+[xopt,fopt,exitflag,iter,yopt]=karmarkar(Aeq,beq,c,[],[],[],[],[],A,b,lb);
+xstar = [1;0.5;2];
+fstar = c'*xstar;
+lambda.ineqlin = [3.5;0.5];
+lambda.eqlin = [];
+lambda.upper = [0;0;0];
+lambda.lower = [8.5;0;0];
+assert_close ( xopt , xstar , 1.e-4 );
+assert_close ( fopt , fstar , 1.e-4 );
+assert_equal ( exitflag , 1 );
+assert_equal ( iter > 0 , %t );
+assert_close ( yopt.ineqlin , lambda.ineqlin , 1.e-9 );
+assert_equal ( yopt.eqlin , lambda.eqlin );
+assert_close ( yopt.lower , lambda.lower , 1.e-9 );
+assert_equal ( yopt.upper , lambda.upper );
+//
+// Set lower bound and give x0.
+Aeq=[];
+beq=[];
+A = [
+ 2 -2 -1
+-1 -4 1
+];
+b = [-1;-1];
+c = [2;9;3];
+x0 = [1.337848;0.885225;2.535279];
+lb = [1;0;0];
+[xopt,fopt,exitflag,iter,yopt]=karmarkar(Aeq,beq,c,x0,[],[],[],[],A,b,lb);
+xstar = [1;0.5;2];
+fstar = c'*xstar;
+lambda.ineqlin = [3.5;0.5];
+lambda.eqlin = [];
+lambda.upper = [0;0;0];
+lambda.lower = [8.5;0;0];
+assert_close ( xopt , xstar , 1.e-4 );
+assert_close ( fopt , fstar , 1.e-4 );
+assert_equal ( exitflag , 1 );
+assert_equal ( iter > 0 , %t );
+assert_close ( yopt.ineqlin , lambda.ineqlin , 1.e-9 );
+assert_close ( yopt.eqlin , lambda.eqlin , %eps );
+assert_close ( yopt.lower , lambda.lower , 1.e-9 );
+assert_close ( yopt.upper , lambda.upper , 1.e-9 );
+// References
+// LIPSOL is a set of Linear-programming Interior-Point SOLvers written
+// by Yin Zhang.
+// The original Matlab-based code has been adapted to Scilab
+// by H. Rubio Scola.
+//
+// Minimize 2 x1 + 5 x2 - 2.5 x3
+// x1 + S4 x3 <= 5
+// E2 x1 - x2 - x3 <= 0
+// -2 <= x1 <= 2
+// 1 <= x2
+// 0 <= x3 <= 3
+// where:
+// S4 = sin(pi/4)/4
+// E2 = exp(2)
+//
+// Do not give x0.
+c = [ 2; 5; -2.5];
+S4 = sin(%pi/4)/4;
+E2 = exp(2);
+A = [
+ 1 0 S4
+ E2 -1 -1
+];
+b = [ 5; 0];
+lb = [ -2; 1 ; 0 ];
+ub = [ 2; %inf; 3 ];
+Aeq = [];
+beq = [];
+[xopt,fopt,exitflag,iter,yopt]=karmarkar(Aeq,beq,c,[],[],[],[],[],A,b,lb,ub);
+xstar = [-2;1;3];
+fstar = c'*xstar;
+lambda.ineqlin = [0;0];
+lambda.eqlin = [];
+lambda.upper = [0;0;2.5];
+lambda.lower = [2;5;0];
+assert_close ( xopt , xstar , 1.e-4 );
+assert_close ( fopt , fstar , 1.e-4 );
+assert_equal ( exitflag , 1 );
+assert_equal ( iter > 0 , %t );
+assert_close ( yopt.ineqlin , lambda.ineqlin , 1.e-9 );
+assert_close ( yopt.eqlin , lambda.eqlin , %eps );
+assert_close ( yopt.lower , lambda.lower , 1.e-9 );
+assert_close ( yopt.upper , lambda.upper , 1.e-9 );
+//
+// An unbounded problem.
+c = [-20 -24]';
+A = [
+-3 -6
+-4 -2
+];
+b = [-60 -32]';
+[xopt,fopt,exitflag,iter,yopt]=karmarkar([],[],c,[],[],[],[],[],A,b);
+assert_equal ( exitflag , -2 );
+//
+// "Linear Programming in Matlab"
+// Ferris, Mangasarian, Wright
+// 2008
+// Chapter 3, "The Simplex Method", Exercise 3-4-2 1.
+//
+// An infeasible problem.
+// Minimize -3 x1 + x2
+// - x1 - x2 >= -2
+// 2 x1 + 2 x2 >= 10
+// x >= 0
+c = [-3;1];
+A=[
+ -1 -1
+ 2 2
+];
+A=-A;
+b=[-2;10];
+b=-b;
+lb=[0;0];
+[xopt,fopt,exitflag,iter,yopt]=karmarkar([],[],c,[],[],[],[],[],A,b,lb);
+assert_equal ( xopt , [] );
+assert_equal ( fopt , [] );
+assert_equal ( exitflag , -1 );
+assert_equal ( iter > 0 , %t );
+assert_equal ( yopt.ineqlin , [] );
+assert_equal ( yopt.eqlin , [] );
+assert_equal ( yopt.lower , [] );
+assert_equal ( yopt.upper , [] );
+
+//
+// "Linear Programming in Matlab"
+// Ferris, Mangasarian, Wright
+// 2008
+// Chapter 3, "The Simplex Method", Exercise 3-4-2 2.
+//
+// An unbounded problem.
+// Minimize -x1 + x2
+// 2 x1 - x2 >= 1
+// x1 + 2 x2 >= 2
+// x >= 0
+c = [-1;1];
+A=[
+ 2 -1
+ 1 2
+];
+A=-A;
+b=[1;2];
+b=-b;
+lb=[0;0];
+[xopt,fopt,exitflag,iter,yopt]=karmarkar([],[],c,[],[],[],[],[],A,b,lb);
+assert_equal ( exitflag , -2 );
+
+//
+// "Linear and Nonlinear Optimization"
+// Griva, Nash, Sofer
+// 2009
+// Chapter 5, "The Simplex Method", Example 5.3
+//
+// An unbounded problem.
+// Minimize -x1 - 2 x2
+// - x1 + x2 <= 2
+// -2 x1 + x2 <= 1
+// x >= 0
+c = [-1;-2];
+A=[
+ -1 1
+ -2 1
+];
+b=[2;1];
+lb=[0;0];
+[xopt,fopt,exitflag,iter,yopt]=karmarkar([],[],c,[],[],[],[],[],A,b,lb);
+assert_equal ( exitflag , -2 );
+//
+// "Linear and Nonlinear Optimization"
+// Griva, Nash, Sofer
+// 2009
+// Chapter 5, "The Simplex Method", Example 5.6
+//
+// An infeasible problem.
+// Minimize -x1
+// - x1 - x2 <= -6
+// 2 x1 + 3 x2 <= 4
+// x >= 0
+c = [-1;0];
+A=[
+ -1 -1
+ 2 3
+];
+b=[-6;4];
+lb=[0;0];
+[xopt,fopt,exitflag,iter,yopt]=karmarkar([],[],c,[],[],[],[],[],A,b,lb);
+assert_equal ( exitflag , -1 );
+//
+// Example from the help page
+// Check that the output points remain feasible
+function stop = myoutputfunction3 ( xopt , optimValues , state , A , b )
+ assert_equal ( and(A*xopt<=b) , %t );
+ stop = %f;
+endfunction
+n=11;
+A = [2*linspace(0,1,n)',ones(n,1)];
+b = 1 + linspace(0,1,n)'.^2;
+c=[-1;-1];
+xopt=karmarkar([],[],c,[],[],[],[],list(myoutputfunction3,A,b),A,b);
+xstar = [0.5005127;0.7494803];
+assert_close ( xopt , xstar , 1.e-4 );
+