From 223dd36e730ca89e89298b0a376117637698b6ad Mon Sep 17 00:00:00 2001
From: Paul BIGNIER
Date: Tue, 11 Dec 2012 14:36:11 +0100
Subject: [PATCH] Xcos help : sorting the solvers and bringing minor
corrections
Renamed files to sort the pages and corrected typos
Change-Id: Ied7b7b1b2f10e69491157ad9ede7c48bee54b6c7

 .../modules/xcos/help/en_US/solvers/0LSodar.xml    | 208 ++++++++++++++
 scilab/modules/xcos/help/en_US/solvers/1CVode.xml  | 290 ++++++++++++++++++++
 .../xcos/help/en_US/solvers/2RungeKutta.xml        | 262 ++++++++++++++++++
 .../xcos/help/en_US/solvers/3DormandPrice.xml      | 249 +++++++++++++++++
 scilab/modules/xcos/help/en_US/solvers/7IDA.xml    | 198 +++++++++++++
 scilab/modules/xcos/help/en_US/solvers/CVode.xml   | 290 --------------------
 .../xcos/help/en_US/solvers/DormandPrice.xml       | 249 -----------------
 scilab/modules/xcos/help/en_US/solvers/IDA.xml     | 198 -------------
 scilab/modules/xcos/help/en_US/solvers/LSodar.xml  | 208 --------------
 .../xcos/help/en_US/solvers/RungeKutta.xml         | 262 ------------------
 10 files changed, 1207 insertions(+), 1207 deletions(-)
create mode 100644 scilab/modules/xcos/help/en_US/solvers/0LSodar.xml
create mode 100644 scilab/modules/xcos/help/en_US/solvers/1CVode.xml
create mode 100644 scilab/modules/xcos/help/en_US/solvers/2RungeKutta.xml
create mode 100644 scilab/modules/xcos/help/en_US/solvers/3DormandPrice.xml
create mode 100644 scilab/modules/xcos/help/en_US/solvers/7IDA.xml
delete mode 100644 scilab/modules/xcos/help/en_US/solvers/CVode.xml
delete mode 100644 scilab/modules/xcos/help/en_US/solvers/DormandPrice.xml
delete mode 100644 scilab/modules/xcos/help/en_US/solvers/IDA.xml
delete mode 100644 scilab/modules/xcos/help/en_US/solvers/LSodar.xml
delete mode 100644 scilab/modules/xcos/help/en_US/solvers/RungeKutta.xml
diff --git a/scilab/modules/xcos/help/en_US/solvers/0LSodar.xml b/scilab/modules/xcos/help/en_US/solvers/0LSodar.xml
new file mode 100644
index 0000000..81f3c4c
--- /dev/null
+++ b/scilab/modules/xcos/help/en_US/solvers/0LSodar.xml
@@ -0,0 +1,208 @@
+
+
+
+
+ LSodar
+
+ LSodar (short for Livermore Solver for Ordinary Differential equations, with Automatic method switching for stiff and nonstiff problems, and with Rootfinding) is a numerical solver providing an efficient and stable method to solve Ordinary Differential Equations (ODEs) Initial Value Problems.
+
+
+
+ Description
+
+ Called by xcos, LSodar (short for Livermore Solver for Ordinary Differential equations, with Automatic method switching for stiff and nonstiff problems, and with Rootfinding) is a numerical solver providing an efficient and stable variable-size step method to solve Initial Value Problems of the form :
+
+
+
+ \begin{eqnarray}
+ \dot{y} = f(t,y), \hspace{3 mm} y(t_0) = y_0, \hspace{3 mm} y \in R^N
+ \end{eqnarray}
+
+
+
+ LSodar is similar to CVode in many ways :
+
+
+ It uses variable-size steps,
+
+
+ It can potentially use BDF and Adams integration methods,
+
+
+ BDF and Adams being implicit stable methods, LSodar is suitable for stiff and nonstiff problems,
+
+
+ They both look for roots over the integration interval.
+
+
+
+
+ The main difference though is that LSodar is fully automated, and chooses between BDF and Adams itself, by checking for stiffness at every step.
+
+
+ If the step is considered stiff, then BDF (with max order set to 5) is used and the Modified Newton method 'Chord' iteration is selected.
+
+
+ Otherwise, the program uses Adams integration (with max order set to 12) and Functional iterations.
+
+
+ The stiffness detection is done by step size attempts with both methods.
+
+
+ First, if we are in Adams mode and the order is greater than 5, then we assume the problem is nonstiff and proceed with Adams.
+
+
+ The first twenty steps use Adams / Functional method.
+ Then LSodar computes the ideal step size of both methods. If the step size advantage is at least ratio = 5, then the current method switches (Adams / Functional to BDF / Chord Newton or vice versa).
+
+
+ After every switch, LSodar takes twenty steps, then starts comparing the step sizes at every step.
+
+
+ Such a strategy induces a minor computational overhead when the problem's stiffness is known, but is very effective on problems that require differing precision, for instance discontinuity-sensitive problems.
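The switching strategy described above can be sketched as a small decision rule. This is a hypothetical illustration of the heuristic (the function name, `RATIO`, and `SETTLE_STEPS` are this sketch's own names), not LSODAR's actual source:

```python
# Hypothetical sketch of the method-switching heuristic described above;
# names and constants mirror the text, not LSODAR's implementation.

RATIO = 5          # step-size advantage required to switch methods
SETTLE_STEPS = 20  # steps taken after a switch before comparing again

def choose_method(current, steps_since_switch, h_adams, h_bdf):
    """Return the method to use for the next step.

    current            -- "adams" or "bdf"
    steps_since_switch -- steps taken since the last switch
    h_adams, h_bdf     -- ideal step sizes each method could take now
    """
    if steps_since_switch < SETTLE_STEPS:
        return current  # settle period: keep the current method
    if current == "adams" and h_bdf >= RATIO * h_adams:
        return "bdf"    # stiffness detected: BDF can take much larger steps
    if current == "bdf" and h_adams >= RATIO * h_bdf:
        return "adams"  # problem became nonstiff again
    return current
```

For example, after the settle period a tenfold BDF step-size advantage triggers a switch from Adams / Functional to BDF / Chord Newton.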
+
+
+ Concerning precision, the two integration/iteration methods being close to CVode's, the results are very similar.
+
+
+
+ Examples
+
+
+
+
+
+
+
+
+
+
+
+ The integral block returns its continuous state; we can evaluate it with LSodar by running the example :
+
+
+
+
+
+ The Scilab console displays :
+
+
+
+ Now, in the following script, we compare the time difference between the methods by running the example with the five solvers in turn :
+
+ Open the script
+
+
+
+
+
+
+ These results show that on a nonstiff problem, for the same precision required, LSodar is significantly faster. Other tests prove the proximity of the results. Indeed, we find that the solution difference between LSodar and CVode is close to the order of the highest tolerance (
+
+ ||ylsodar - ycvode||
+
+ ≈ max(reltol, abstol) ).
+
+
+ Variable step-size ODE solvers are not appropriate for deterministic real-time applications because the computational overhead of taking a time step varies over the course of an application.
+
+
+
+ See Also
+
+
+ CVode
+
+
+ IDA
+
+
+ Runge-Kutta 4(5)
+
+
+ Dormand-Price 4(5)
+
+
+ ode
+
+
+ ode_discrete
+
+
+ ode_root
+
+
+ odedc
+
+
+ impl
+
+
+
+
+ Bibliography
+
+ ACM SIGNUM Newsletter, Volume 15, Issue 4, December 1980, Pages 10-11, LSode - LSodi
+
+
+ Sundials Documentation
+
+
+
+ History
+
+
+ 5.4.1
+ LSodar solver added
+
+
+
+
diff --git a/scilab/modules/xcos/help/en_US/solvers/1CVode.xml b/scilab/modules/xcos/help/en_US/solvers/1CVode.xml
new file mode 100644
index 0000000..7be923f
--- /dev/null
+++ b/scilab/modules/xcos/help/en_US/solvers/1CVode.xml
@@ -0,0 +1,290 @@
+
+
+
+
+ CVode
+
+ CVode (short for C-language Variable-coefficients ODE solver) is a numerical solver providing an efficient and stable method to solve Ordinary Differential Equations (ODEs) Initial Value Problems. It uses either BDF or Adams as implicit integration method, and Newton or Functional iterations.
+
+
+
+ Description
+
+ Called by xcos, CVode (short for C-language Variable-coefficients ODE solver) is a numerical solver providing an efficient and stable method to solve Initial Value Problems of the form :
+
+
+
+ \begin{eqnarray}
+ \dot{y} = f(t,y), \hspace{3 mm} y(t_0) = y_0, \hspace{3 mm} y \in R^N
+ \end{eqnarray}
+
+
+
+ Starting with
+
+ y0
+
+ , CVode approximates
+
+ yn+1
+
+ with the formula :
+
+
+
+ \begin{eqnarray}
+ \sum_{i=0}^{K_1} \alpha_{n,i} y_{n-i} + h_n\sum_{i=0}^{K_2} \beta_{n,i} \dot{y}_{n-i} = 0,\hspace{10 mm} (1)
+ \end{eqnarray}
+
+
+ with
+
+ yn
+
+ the approximation of
+
+ y(tn)
+
+ , and
+
+ hn
+
+ =
+
+ tn - tn-1
+
+ the step size.
+
+
+
+ These implicit methods are characterized by their respective order q, which indicates the number of intermediate points required to compute
+
+ yn+1
+
+ .
+
+
+ This is where the difference between BDF and Adams intervenes (Backward Differentiation Formula and Adams-Moulton formula) :
+
+
+
+ If the problem is stiff, the user should select BDF :
+
+
+
+
+ q, the order of the method, is set between 1 and 5 (automated),
+
+
+ K1 = q and K2 = 0.
+
+
+
+ In the case of nonstiffness, Adams is preferred :
+
+
+
+ q is set between 1 and 12 (automated),
+
+
+ K1 = 1 and K2 = q.
+
+
+
+ The coefficients are fixed, uniquely determined by the method type, its order, the history of the step sizes, and the normalization
+
+ αn, 0 = 1
+
+ .
+
+
+ For either choice and at each step, injecting this integration in (1) yields the nonlinear system :
+
+
+
+ G(y_n)\equiv y_n - h_n\beta_{n,0}f(t_n,y_n) - a_n = 0, \hspace{2 mm} where \hspace{2 mm} a_n\equiv \sum_{i>0} (\alpha_{n,i} y_{n-i} + h_n\beta_{n,i}\dot{y}_{n-i})
+
+
+
+ This system can be solved by either Functional or Newton iterations, described hereafter.
+
+
+ In both following cases, the initial "predicted"
+
+ yn(0)
+
+ is explicitly computed from the history data, by adding derivatives.
+
+
+
+
+ Functional : this method only involves evaluations of f, it simply computes
+
+ yn(0)
+
+ by iterating the formula :
+
+
+ y_{n(m+1)} = h_n \beta_{n,0} f(t_n,y_{n(m)}) + a_n
+
+
+ where \hspace{2 mm} a_n\equiv \sum_{i>0} (\alpha_{n,i} y_{n-i} + h_n\beta_{n,i}\dot{y}_{n-i})
+
+
+
+
+
+ Newton : here, we use an implemented direct dense solver on the linear system :
+
+
+ M[y_{n(m+1)}-y_{n(m)}]=-G(y_{n(m)}), \hspace{4 mm} M \approx I-\gamma J, \hspace{2 mm} J=\frac{\partial f}{\partial y}, \hspace{2 mm} and \hspace{2 mm} \gamma = h_n\beta_{n,0}
+
+
+
+
+
+ In both situations, CVode uses the history array to control the local error
+
+ yn(m) - yn(0)
+
+ and recomputes
+
+ hn
+
+ if that error is not satisfying.
+
+
+
+ The recommended choices are BDF / Newton for stiff problems and Adams / Functional for the nonstiff ones.
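The two iteration schemes above can be made concrete on the simplest BDF method (BDF1, i.e. backward Euler), where a_n reduces to y_{n-1} and β_{n,0} = 1. This is a minimal Python sketch on an assumed test problem y' = -y, not Sundials code:

```python
# Minimal sketch (not Sundials code) of one BDF1 (backward Euler) step on
# y' = f(t, y), solved with the two iteration schemes described above.

def f(t, y):
    return -y  # simple nonstiff test problem, y' = -y

def dfdy(t, y):
    return -1.0  # Jacobian df/dy of the test problem

def bdf1_functional(y_prev, t, h, iters=50):
    # Functional iteration: y_(m+1) = h*f(t, y_(m)) + a_n, only f evaluations
    y = y_prev
    for _ in range(iters):
        y = y_prev + h * f(t, y)
    return y

def bdf1_newton(y_prev, t, h, iters=5):
    # Newton iteration on G(y) = y - h*f(t, y) - a_n = 0,
    # with (scalar) M = 1 - gamma*J and gamma = h for BDF1
    y = y_prev
    for _ in range(iters):
        G = y - h * f(t, y) - y_prev
        M = 1.0 - h * dfdy(t, y)
        y -= G / M
    return y
```

Both converge to the same implicit solution (here 1/(1+h) for y_prev = 1); Newton needs fewer iterations but requires the Jacobian, which is why it is paired with BDF on stiff problems.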
+
+
+
+ The function is called in between activations, because a discrete activation may change the system.
+
+
+ Following the criticality of the event (its effect on the continuous problem), we either relaunch the solver with different start and final times as if nothing happened, or, if the system has been modified, we need to "coldrestart" the problem by reinitializing it anew and relaunching the solver.
+
+
+ On average, CVode accepts tolerances down to 10^-16. Beyond that, it returns a Too much accuracy requested error.
+
+
+
+ Examples
+
+
+
+
+
+
+
+
+
+
+
+ The integral block returns its continuous state; we can evaluate it with BDF / Newton by running the example :
+
+
+
+
+
+ The Scilab console displays :
+
+
+
+ Now, in the following script, we compare the time difference between the methods by running the example with the four solvers in turn :
+
+ Open the script
+
+
+
+ Results :
+
+
+
+ The results show that for a simple nonstiff continuous problem, Adams / Functional is fastest.
+
+
+
+ See Also
+
+
+ LSodar
+
+
+ IDA
+
+
+ Runge-Kutta 4(5)
+
+
+ Dormand-Price 4(5)
+
+
+ ode
+
+
+ ode_discrete
+
+
+ ode_root
+
+
+ odedc
+
+
+ impl
+
+
+
+
+ Bibliography
+
+ Sundials Documentation
+
+
+
diff --git a/scilab/modules/xcos/help/en_US/solvers/2RungeKutta.xml b/scilab/modules/xcos/help/en_US/solvers/2RungeKutta.xml
new file mode 100644
index 0000000..ca278f6
--- /dev/null
+++ b/scilab/modules/xcos/help/en_US/solvers/2RungeKutta.xml
@@ -0,0 +1,262 @@
+
+
+
+
+ Runge-Kutta 4(5)
+
+ Runge-Kutta is a numerical solver providing an efficient explicit method to solve Ordinary Differential Equations (ODEs) Initial Value Problems.
+
+
+
+ Description
+
+ Called by xcos, Runge-Kutta is a numerical solver providing an efficient fixed-size step method to solve Initial Value Problems of the form :
+
+
+
+ \begin{eqnarray}
+ \dot{y} = f(t,y), \hspace{3 mm} y(t_0) = y_0, \hspace{3 mm} y \in R^N
+ \end{eqnarray}
+
+
+
+ CVode and IDA use variable-size steps for the integration.
+
+
+ A drawback of that is the unpredictable computation time. With RungeKutta, we do not adapt to the complexity of the problem, but we guarantee a stable computation time.
+
+
+ As of now, this method is explicit, so it is not concerned with Newton or Functional iterations, and not advised for stiff problems.
+
+
+ It is an enhancement of the Euler method, which approximates
+
+ yn+1
+
+ by truncating the Taylor expansion.
+
+
+ By convention, to use fixedsize steps, the program first computes a fitting h that approaches the simulation parameter max step size.
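One plausible way to compute such a fitting h is to split the simulation interval into the fewest equal steps no larger than the max step size parameter. This is an illustrative sketch (function name and formula are this sketch's assumption, not Scilab's actual implementation):

```python
# Illustrative: derive a fixed step h from the "max step size" parameter by
# splitting [t0, tf] into the fewest equal steps with h <= max_step.
import math

def fitted_step(t0, tf, max_step):
    n_steps = math.ceil((tf - t0) / max_step)  # fewest steps that fit
    return (tf - t0) / n_steps                 # equal steps, h <= max_step
```

For example, with an interval of length 1 and a max step size of 0.3, this yields 4 steps of h = 0.25.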
+
+
+ An important difference of Runge-Kutta with the previous methods is that it computes up to the fourth derivative of y, while the others only use linear combinations of y and y'.
+
+
+ Here, the next value is determined by the present value
+
+ yn
+
+ plus the weighted average of four increments, where each increment is the product of the size of the interval, h, and an estimated slope specified by the function f(t,y) :
+
+
+ k1 is the increment based on the slope at the beginning of the interval, using
+
+ yn
+
+ (Euler's method),
+
+
+ k2 is the increment based on the slope at the midpoint of the interval, using
+
+ yn + h*k1/2
+
+ ,
+
+
+ k3 is again the increment based on the slope at the midpoint, but now using
+
+ yn + h*k2/2
+
+
+
+ k4 is the increment based on the slope at the end of the interval, using
+
+ yn + h*k3
+
+
+
+
+
+ We can see that with the ki, we progress in the derivatives of
+
+ yn
+
+ . So in k4, we are approximating
+
+ y(4)n
+
+ , thus making an error in
+
+ O(h5)
+
+ .
+
+
+ So the total error is
+
+ number of steps * O(h5)
+
+ . And since number of steps = interval size / h by definition, the total error is in
+
+ O(h4)
+
+ .
+
+
+ That error analysis baptized the method Runge-Kutta 4(5),
+
+ O(h5)
+
+ per step,
+
+ O(h4)
+
+ in total.
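The four increments above can be written out as the classic fixed-step RK4 update; the O(h^4) global-error claim can then be checked numerically by halving h. This is a textbook sketch, independent of the xcos implementation:

```python
# Classic fixed-step Runge-Kutta 4 step, matching the four increments
# k1..k4 described above (textbook sketch, not the xcos implementation).
import math

def rk4_step(f, t, y, h):
    k1 = f(t, y)                       # slope at the beginning of the interval
    k2 = f(t + h / 2, y + h * k1 / 2)  # slope at the midpoint, via k1
    k3 = f(t + h / 2, y + h * k2 / 2)  # midpoint again, via k2
    k4 = f(t + h, y + h * k3)          # slope at the end, via k3
    return y + h * (k1 + 2 * k2 + 2 * k3 + k4) / 6

def integrate(f, t0, tf, y0, n):
    h, t, y = (tf - t0) / n, t0, y0
    for _ in range(n):
        y = rk4_step(f, t, y, h)
        t += h
    return y

# Global error on y' = y, y(0) = 1 shrinks roughly 16x when h halves: O(h^4)
e1 = abs(integrate(lambda t, y: y, 0.0, 1.0, 1.0, 16) - math.e)
e2 = abs(integrate(lambda t, y: y, 0.0, 1.0, 1.0, 32) - math.e)
```

Halving the step divides the global error by about 2^4 = 16, consistent with the O(h^4) total error derived above.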
+
+
+ Although the solver works fine for max step size up to
+
+ 10^-3
+
+ , rounding errors sometimes come into play as we approach
+ 4*10^-4
+
+ . Indeed, the interval splitting cannot be done properly and we get capricious results.
+
+
+
+ Examples
+
+
+
+
+
+
+
+
+
+
+
+ The integral block returns its continuous state; we can evaluate it with Runge-Kutta by running the example :
+
+
+
+
+
+ The Scilab console displays :
+
+
+
+ Now, in the following script, we compare the time difference between Runge-Kutta and Sundials by running the example with the five solvers in turn :
+
+ Open the script
+
+
+
+
+
+
+ These results show that on a nonstiff problem, for roughly the same precision required and forcing the same step size, Runge-Kutta is faster.
+
+
+ Variable step-size ODE solvers are not appropriate for deterministic real-time applications because the computational overhead of taking a time step varies over the course of an application.
+
+
+
+ See Also
+
+
+ LSodar
+
+
+ CVode
+
+
+ IDA
+
+
+ Dormand-Price 4(5)
+
+
+ ode
+
+
+ ode_discrete
+
+
+ ode_root
+
+
+ odedc
+
+
+ impl
+
+
+
+
+ Bibliography
+
+ Sundials Documentation
+
+
+
+ History
+
+
+ 5.4.1
+ Runge-Kutta 4(5) solver added
+
+
+
+
diff --git a/scilab/modules/xcos/help/en_US/solvers/3DormandPrice.xml b/scilab/modules/xcos/help/en_US/solvers/3DormandPrice.xml
new file mode 100644
index 0000000..a23bd06
--- /dev/null
+++ b/scilab/modules/xcos/help/en_US/solvers/3DormandPrice.xml
@@ -0,0 +1,249 @@
+
+
+
+
+ Dormand-Price 4(5)
+
+ Dormand-Price is a numerical solver providing an efficient explicit method to solve Ordinary Differential Equations (ODEs) Initial Value Problems.
+
+
+
+ Description
+
+ Called by xcos, Dormand-Price is a numerical solver providing an efficient fixed-size step method to solve Initial Value Problems of the form :
+
+
+
+ \begin{eqnarray}
+ \dot{y} = f(t,y), \hspace{3 mm} y(t_0) = y_0, \hspace{3 mm} y \in R^N
+ \end{eqnarray}
+
+
+
+ CVode and IDA use variable-size steps for the integration.
+
+
+ A drawback of that is the unpredictable computation time. With Dormand-Price, we do not adapt to the complexity of the problem, but we guarantee a stable computation time.
+
+
+ As of now, this method is explicit, so it is not concerned with Newton or Functional iterations, and not advised for stiff problems.
+
+
+ It is an enhancement of the Euler method, which approximates
+
+ yn+1
+
+ by truncating the Taylor expansion.
+
+
+ By convention, to use fixedsize steps, the program first computes a fitting h that approaches the simulation parameter max step size.
+
+
+ An important difference of Dormand-Price with the previous methods is that it computes up to the seventh derivative of y, while the others only use linear combinations of y and y'.
+
+
+ Here, the next value is determined by the present value
+
+ yn
+
+ plus the weighted average of six increments, where each increment is the product of the size of the interval, h, and an estimated slope specified by the function f(t,y) :
+
+
+ k1 is the increment based on the slope at the beginning of the interval, using
+
+ yn
+
+ (Euler's method),
+
+
+ k2, k3, k4 and k5 are the increments based on the slope at respectively 0.2, 0.3, 0.8 and 0.9 of the interval, using combinations of each other,
+
+
+ k6 is the increment based on the slope at the end, also using combinations of the other ki.
+
+
+
+
+ We can see that with the ki, we progress in the derivatives of
+
+ yn
+
+ . In the computation of the ki, we deliberately use coefficients that yield an error in
+
+ O(h5)
+
+ at every step.
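The six increments above can be written out with the published Dormand-Prince coefficients (fifth-order weights). This is an illustrative sketch of one fixed step, independent of the xcos implementation:

```python
# One fixed step with the published Dormand-Prince 4(5) coefficients:
# slopes at 0, 1/5, 3/10, 4/5, 8/9 and 1 of the interval, combined with
# the fifth-order weights (illustrative sketch, not the xcos code).
import math

def dopri_step(f, t, y, h):
    k1 = f(t, y)                                   # beginning of the interval
    k2 = f(t + h/5,    y + h*(k1/5))               # 0.2 of the interval
    k3 = f(t + 3*h/10, y + h*(3*k1/40 + 9*k2/40))  # 0.3
    k4 = f(t + 4*h/5,  y + h*(44*k1/45 - 56*k2/15 + 32*k3/9))           # 0.8
    k5 = f(t + 8*h/9,  y + h*(19372*k1/6561 - 25360*k2/2187
                              + 64448*k3/6561 - 212*k4/729))            # ~0.9
    k6 = f(t + h,      y + h*(9017*k1/3168 - 355*k2/33
                              + 46732*k3/5247 + 49*k4/176
                              - 5103*k5/18656))                         # end
    # weighted average of the six increments (fifth-order solution)
    return y + h*(35*k1/384 + 500*k3/1113 + 125*k4/192
                  - 2187*k5/6784 + 11*k6/84)

# integrate y' = y from 0 to 1 with h = 0.1; result is close to e
y, t, h = 1.0, 0.0, 0.1
for _ in range(10):
    y = dopri_step(lambda t, y: y, t, y, h)
    t += h
```

Note how each ki reuses combinations of the previous increments, which is what the coefficient choice above exploits to reach the O(h^5) per-step error.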
+
+
+ So the total error is
+
+ number of steps * O(h5)
+
+ . And since number of steps = interval size / h by definition, the total error is in
+
+ O(h4)
+
+ .
+
+
+ That error analysis baptized the method Dormand-Price 4(5) :
+
+ O(h5)
+
+ per step,
+
+ O(h4)
+
+ in total.
+
+
+ Although the solver works fine for max step size up to
+
+ 10^-3
+
+ , rounding errors sometimes come into play as it approaches
+
+ 4*10^-4
+
+ . Indeed, the interval splitting cannot be done properly and we get capricious results.
+
+
+
+ Examples
+
+
+
+
+
+
+
+
+
+
+
+ The integral block returns its continuous state; we can evaluate it with Dormand-Price by running the example :
+
+
+
+
+
+ The Scilab console displays :
+
+
+
+ Now, in the following script, we compare the time difference between Dormand-Price and Sundials by running the example with the five solvers in turn :
+
+ Open the script
+
+
+
+
+
+
+ These results show that on a nonstiff problem, for roughly the same precision required and forcing the same step size, Dormand-Price's computational overhead (compared to Runge-Kutta) is significant and close to Adams / Functional's. However, its error against the solution is much smaller than regular Runge-Kutta 4(5)'s, for a small overhead in time.
+
+
+ Variable step-size ODE solvers are not appropriate for deterministic real-time applications because the computational overhead of taking a time step varies over the course of an application.
+
+
+
+ See Also
+
+
+ LSodar
+
+
+ CVode
+
+
+ IDA
+
+
+ Runge-Kutta 4(5)
+
+
+ ode
+
+
+ ode_discrete
+
+
+ ode_root
+
+
+ odedc
+
+
+ impl
+
+
+
+
+ Bibliography
+
+ Journal of Computational and Applied Mathematics, Volume 15, Issue 2, 2 June 1986, Pages 203-211, Dormand-Price Method
+
+
+ Sundials Documentation
+
+
+
+ History
+
+
+ 5.4.1
+ Dormand-Price 4(5) solver added
+
+
+
+
diff --git a/scilab/modules/xcos/help/en_US/solvers/7IDA.xml b/scilab/modules/xcos/help/en_US/solvers/7IDA.xml
new file mode 100644
index 0000000..1b129b0
--- /dev/null
+++ b/scilab/modules/xcos/help/en_US/solvers/7IDA.xml
@@ -0,0 +1,198 @@
+
+
+
+
+ IDA
+
+ IDA (short for Implicit Differential Algebraic solver) is a numerical solver providing an efficient and stable method to solve Differential Algebraic Equations (DAEs) Initial Value Problems.
+
+
+
+ Description
+
+ Called by xcos, IDA (short for Implicit Differential Algebraic solver) is a numerical solver providing an efficient and stable method to solve Initial Value Problems of the form :
+
+
+
+ \begin{eqnarray}
+ F(t,y,\dot{y}) = 0, \hspace{2 mm} y(t_0)=y_0, \hspace{2 mm} \dot{y}(t_0)=\dot{y}_0, \hspace{3 mm} y, \hspace{1.5 mm} \dot{y} \hspace{1.5 mm} and \hspace{1.5 mm} F \in R^N \hspace{10 mm} (1)
+ \end{eqnarray}
+
+
+
+
+ Before solving the problem, IDA runs an implemented routine to find consistent values for
+
+ y0
+
+ and
+
+ yPrime0
+
+ .
+
+ Starting then with those
+
+ y0
+
+ and
+
+ yPrime0
+
+ , IDA approximates
+
+ yn+1
+
+ with the BDF formula :
+
+
+
+ \begin{eqnarray}
+ \sum_{i=0}^{q} \alpha_{n,i} y_{n-i} = h_n\dot{y}_{n}
+ \end{eqnarray}
+
+
+ with, like in CVode,
+
+ yn
+
+ the approximation of
+
+ y(tn)
+
+ ,
+
+ hn
+
+ =
+
+ tn - tn-1
+
+ the step size, and the coefficients are fixed, uniquely determined by the method type, its order q ranging from 1 to 5 and the history of the step sizes.
+
+
+
+ Injecting this formula in (1) yields the system :
+
+
+
+ G(y_n) \equiv F \left( t_n, \hspace{1.5mm} y_n, \hspace{1.5mm} h_n^{-1}\sum_{i=0}^{q} \alpha_{n,i} y_{n-i} \right) = 0
+
+
+
+ To apply Newton iterations to it, we rewrite it into :
+
+
+
+ J \left[y_{n(m+1)}-y_{n(m)} \right] = -G(y_{n(m)})
+
+
+
+ with J an approximation of the Jacobian :
+
+
+
+ J = \frac{\partial{G}}{\partial{y}} = \frac{\partial{F}}{\partial{y}}+\alpha\frac{\partial{F}}{\partial{\dot{y}}}, \hspace{4 mm} \alpha = \frac{\alpha_{n,0}}{h_n},
+
+
+
+ α changes whenever the step size or the method order varies.
+
+
+ An implemented direct dense solver is used and we go on to the next step.
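The Newton iteration above can be made concrete in the scalar BDF1 case (α_{n,0} = 1, so α = 1/h_n), on an assumed implicit-form test problem F(t, y, y') = y' + y = 0. This is an illustrative sketch, not IDA's code:

```python
# Scalar sketch of IDA's Newton iteration on G(y_n), using BDF1 and the
# assumed implicit-form test problem F(t, y, yp) = yp + y = 0 (i.e. y' = -y).
# Illustrative only, not Sundials/IDA source.

def F(t, y, yp):
    return yp + y          # implicit form of y' = -y

def dF_dy(t, y, yp):
    return 1.0             # partial F / partial y

def dF_dyp(t, y, yp):
    return 1.0             # partial F / partial y'

def ida_bdf1_step(y_prev, t, h, iters=5):
    alpha = 1.0 / h                    # alpha_(n,0) / h_n for BDF1
    y = y_prev                         # predictor
    for _ in range(iters):
        yp = (y - y_prev) / h          # BDF1 approximation of y'
        G = F(t, y, yp)
        J = dF_dy(t, y, yp) + alpha * dF_dyp(t, y, yp)  # J = dF/dy + alpha*dF/dy'
        y -= G / J                     # Newton update: J * dy = -G
    return y

h = 0.1
y1 = ida_bdf1_step(1.0, h, h)  # for this linear F, matches backward Euler: 1/(1+h)
```

Because the test problem is linear, Newton converges in one iteration; on a genuine DAE the same J = ∂F/∂y + α ∂F/∂ẏ structure applies with matrix quantities.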
+
+
+ IDA uses the history array to control the local error
+
+ yn(m) - yn(0)
+
+ and recomputes
+
+ hn
+
+ if that error is not satisfying.
+
+
+ The function is called in between activations, because a discrete activation may change the system.
+
+
+ Following the criticality of the event (its effect on the continuous problem), we either relaunch the solver with different start and final times as if nothing happened, or, if the system has been modified, we need to "coldrestart" the problem by reinitializing it anew and relaunching the solver.
+
+
+ On average, IDA accepts tolerances down to 10^-11. Beyond that, it returns a Too much accuracy requested error.
+
+
+
+ Example
+
+ The 'Modelica Generic' block returns its continuous states; we can evaluate them with IDA by running the example :
+
+
+
+
+
+
+
+
+
+
+
+
+
+ See Also
+
+
+ LSodar
+
+
+ CVode
+
+
+ Runge-Kutta 4(5)
+
+
+ Dormand-Price 4(5)
+
+
+ ode
+
+
+ ode_discrete
+
+
+ ode_root
+
+
+ odedc
+
+
+ impl
+
+
+
+
+ Bibliography
+
+ Sundials Documentation
+
+
+
diff --git a/scilab/modules/xcos/help/en_US/solvers/CVode.xml b/scilab/modules/xcos/help/en_US/solvers/CVode.xml
deleted file mode 100644
index 23561e1..0000000
--- a/scilab/modules/xcos/help/en_US/solvers/CVode.xml
+++ /dev/null
@@ -1,290 +0,0 @@




 CVode

 CVode is a numerical solver providing an efficient and stable method to solve Ordinary Differential Equations (ODEs) Initial Value Problems. Called by xcos, it uses either BDF or Adams as implicit integration method, and Newton or Functional iterations



 Description

 CVode is a numerical solver providing an efficient and stable method to solve Initial Value Problems of the form :



 \begin{eqnarray}
 \dot{y} = f(t,y), \hspace{3 mm} y(t_0) = y_0, \hspace{3 mm} y \in R^N
 \end{eqnarray}



 Starting with

 y0

 , CVode approximates

 yn+1

 with the formula :



 \begin{eqnarray}
 \sum_{i=0}^{K_1} \alpha_{n,i} y_{ni} + h_n\sum_{i=0}^{K_2} \beta_{n,i} \dot{y}_{ni} = 0,\hspace{10 mm} (1)
 \end{eqnarray}


 with

 yn

 the approximation of

 y(tn)

 , and

 hn

 =

 tn  tn1

 the step size.



 These implicit methods are characterized by their respective order q, which indicates the number of intermediate points required to compute

 yn+1

 .


 This is where the difference between BDF and Adams intervenes (Backward Differenciation Formula and AdamsMoulton formula) :



 If the problem is stiff, the user should select BDF :




 q, the order of the method, is set between 1 and 5 (automated),


 K1 = q and K2 = 0.



 In the case of nonstiffness, Adams is preferred :



 q is set between 1 and 12 (automated),


 K1 = 1 and K2 = q.



 The coefficients are fixed, uniquely determined by the method type, its order, the history of the step sizes, and the normalization

 αn, 0 = 1

 .


 For either choice and at each step, injecting this integration in (1) yields the nonlinear system :



 G(y_n)\equiv y_nh_n\beta_{n,0}f(t_n,y_n)a_n=0, \hspace{2 mm} where \hspace{2 mm} a_n\equiv \sum_{i>0} (\alpha_{n,i} y_{ni} + h_n\beta_{n,i}\dot{y}_{ni})



 This system can be solved by either Functional or Newton iterations, described hereafter.


 In both following cases, the initial "predicted"

 yn(0)

 is explicitly computed from the history data, by adding derivatives.




 Functional : this method only involves evaluations of f, it simply computes

 yn(0)

 by iterating the formula :


 y_{n(m+1)} = h_n β_{n,0} f(t_n,y_{n(m+1)}) + a_n


 where \hspace{2 mm} a_n\equiv \sum_{i>0} (\alpha_{n,i} y_{ni} + h_n\beta_{n,i}\dot{y}_{ni})





 Newton : here, we use an implemented direct dense solver on the linear system :


 M[y_{n(m+1)}y_{n(m)}]=G(y_{n(m)}), \hspace{4 mm} M \approx I\gamma J, \hspace{2 mm} J=\frac{\partial f}{\partial y}, \hspace{2 mm} and \hspace{2 mm} \gamma = h_n\beta_{n,0}





 In both situations, CVode uses the history array to control the local error

 yn(m)  yn(0)

 and recomputes

 hn

 if that error is not satisfying.



 The recommended choices are BDF / Newton for stiff problems and Adams / Functional for the nonstiff ones.



 The function is called in between activations, because a discrete activation may change the system.


 Following the criticality of the event (its effect on the continuous problem), we either relaunch the solver with different start and final times as if nothing happened, or, if the system has been modified, we need to "coldrestart" the problem by reinitializing it anew and relaunching the solver.


 Averagely, CVode accepts tolerances up to 1016. Beyond that, it returns a Too much accuracy requested error.



 Examples











 The integral block returns its continuous state, we can evaluate it with BDF / Newton by running the example :





 The Scilab console displays :



 Now, in the following script, we compare the time difference between the methods by running the example with the four solvers in turn :

 Open the script



 Results :



 The results show that for a simple nonstiff continuous problem, Adams / Functional is fastest.



 See Also


 IDA


 LSodar


 RungeKutta 4(5)


 DormandPrice 4(5)


 ode


 ode_discrete


 ode_root


 odedc


 impl




 Bibliography

 Sundials Documentation



diff --git a/scilab/modules/xcos/help/en_US/solvers/DormandPrice.xml b/scilab/modules/xcos/help/en_US/solvers/DormandPrice.xml
deleted file mode 100644
index 99fa747..0000000
--- a/scilab/modules/xcos/help/en_US/solvers/DormandPrice.xml
+++ /dev/null
@@ -1,249 +0,0 @@




 DormandPrice 4(5)

 DormandPrice is a numerical solver providing an efficient explicit method to solve Ordinary Differential Equations (ODEs) Initial Value Problems.



 Description

 Called by xcos, DormandPrice is a numerical solver providing an efficient fixedsize step method to solve Initial Value Problems of the form :



 \begin{eqnarray}
 \dot{y} = f(t,y), \hspace{3 mm} y(t_0) = y_0, \hspace{3 mm} y \in R^N
 \end{eqnarray}



 CVode and IDA use variablesize steps for the integration.


 A drawback of that is the unpredictable computation time. With DoPri, we do not adapt to the complexity of the problem, but we guarantee a stable computation time.


 As of now, this method is explicit, so it is not concerned with Newton or Functional iterations, and not advised for stiff problems.


 It is an enhancement of the Euler method, which approximates

 yn+1

 by truncating the Taylor expansion.


 By convention, to use fixedsize steps, the program first computes a fitting h that approaches the simulation parameter max step size.


 An important difference of DoPri with the previous methods is that it computes up to the seventh derivative of y, while the others only use linear combinations of y and y'.


 Here, the next value is determined by the present value

 yn

 plus the weighted average of six increments, where each increment is the product of the size of the interval, h, and an estimated slope specified by the function f(t,y) :


 k1 is the increment based on the slope at the beginning of the interval, using

 yn

 (Euler's method),


 k2, k3, k4 and k5 are the increments based on the slope at respectively 0.2, 0.3, 0.8 and 0.9 of the interval, using combinations of each other,


 k6 is the increment based on the slope at the end, also using combinations of the other ki.




 We can see that with the ki, we progress in the derivatives of

 yn

 . In the computation of the ki, we deliberately use coefficients that yield an error in

 O(h5)

 at every step.


 So the total error is

 number of steps * O(h5)

 . And since number of steps = interval size / h by definition, the total error is in

 O(h4)

 .


 That error analysis baptized the method Dopri 4(5) :

 O(h5)

 per step,

 O(h4)

 in total.


 Althought the solver works fine for max step size up to

 103

 , rounding errors sometimes come into play as it approaches

 4*104

 . Indeed, the interval splitting cannot be done properly and we get capricious results.



 Examples











 The integral block returns its continuous state, we can evaluate it with DoPri by running the example :





 The Scilab console displays :



 Now, in the following script, we compare the time difference between DoPri and Sundials by running the example with the five solvers in turn :

 Open the script






 These results show that on a nonstiff problem, for relatively same precision required and forcing the same step size, DoPri's computational overhead is significant. Its error to the solution is althought much smaller than the regular RungeKutta 4(5), for a small overhead in time.


 Variable stepsize ODE solvers are not appropriate for deterministic realtime applications because the computational overhead of taking a time step varies over the course of an application.



 See Also


 LSodar


 CVode


 IDA


 RungeKutta 4(5)


 ode


 ode_discrete


 ode_root


 odedc


 impl




 Bibliography

 Journal of Computational and Applied Mathematics, Volume 15, Issue 2, 2 June 1986, Pages 203211 DormandPrice Method


 Sundials Documentation



 History


 5.4.1
 DormandPrice 4(5) solver added




diff --git a/scilab/modules/xcos/help/en_US/solvers/IDA.xml b/scilab/modules/xcos/help/en_US/solvers/IDA.xml
deleted file mode 100644
index 7de1c10..0000000
--- a/scilab/modules/xcos/help/en_US/solvers/IDA.xml
+++ /dev/null
@@ -1,198 +0,0 @@




 IDA

 IDA (Implicit Differential Algebraic) is a numerical solver providing an efficient and stable method to solve Differential Algebraic Equations (DAEs) Initial Value Problems. Called by xcos.



 Description

 IDA is a numerical solver providing an efficient and stable method to solve Initial Value Problems of the form :



 \begin{eqnarray}
 F(t,y,\dot{y}) = 0, \hspace{2 mm} y(t_0)=y_0, \hspace{2 mm} \dot{y}(t_0)=\dot{y}_0, \hspace{3 mm} y, \hspace{1.5 mm} \dot{y} \hspace{1.5 mm} and \hspace{1.5 mm} F \in R^N \hspace{10 mm} (1)
 \end{eqnarray}




 Before solving the problem, IDA runs an implemented routine to find consistent values for

 y0

 and

 yPrime0

 .

 Starting then with those

 y0

 and

 yPrime0

 , IDA approximates

 yn+1

 with the BDF formula :



 \begin{eqnarray}
 \sum_{i=0}^{q} \alpha_{n,i} y_{ni} = h_n\dot{y}_{n}
 \end{eqnarray}


 with, like in CVode,

 yn

 the approximation of

 y(tn)

 ,

 hn

 =

 tn  tn1

 the step size, and the coefficients are fixed, uniquely determined by the method type, its order q ranging from 1 to 5 and the history of the step sizes.



 Injecting this formula in (1) yields the system :



 G(y_n) \equiv F \left( t_n, \hspace{1.5mm} y_n, \hspace{1.5mm} h_n^{1}\sum_{i=0}^{q} \alpha_{n,i} y_{ni} \right) = 0



 To apply Newton iterations to it, we rewrite it into :



 J \left[y_{n(m+1)}y_{n(m)} \right] = G(y_{n(m)})



 with J an approximation of the Jacobian :



 J = \frac{\partial{G}}{\partial{y}} = \frac{\partial{F}}{\partial{y}}+\alpha\frac{\partial{F}}{\partial{\dot{y}}}, \hspace{4 mm} \alpha = \frac{\alpha_{n,0}}{h_n},



 α changes whenever the step size or the method order varies.


 An implemented direct dense solver is used and we go on to the next step.


 IDA uses the history array to control the local error

 yn(m)  yn(0)

 and recomputes

 hn

 if that error is not satisfying.


 The function is called in between activations, because a discrete activation may change the system.


 Following the criticality of the event (its effect on the continuous problem), we either relaunch the solver with different start and final times as if nothing happened, or, if the system has been modified, we need to "coldrestart" the problem by reinitializing it anew and relaunching the solver.


 Averagely, IDA accepts tolerances up to 1011. Beyond that, it returns a Too much accuracy requested error.



 Example

 The 'Modelica Generic' block returns its continuous states, we can evaluate them with IDA by running the example :













 See Also


 CVode


 LSodar


 Runge-Kutta 4(5)


 Dormand-Prince 4(5)


 ode


 ode_discrete


 ode_root


 odedc


 impl




 Bibliography

 Sundials Documentation



diff --git a/scilab/modules/xcos/help/en_US/solvers/LSodar.xml b/scilab/modules/xcos/help/en_US/solvers/LSodar.xml
deleted file mode 100644
index faa8d01..0000000
--- a/scilab/modules/xcos/help/en_US/solvers/LSodar.xml
+++ /dev/null
@@ -1,208 +0,0 @@




 LSodar

 LSODAR (short for Livermore Solver for Ordinary Differential equations, with Automatic method switching for stiff and non-stiff problems, and with Root-finding) is a numerical solver providing an efficient and stable method to solve Ordinary Differential Equations (ODEs) Initial Value Problems. Called by xcos.



 Description

 LSODAR (short for Livermore Solver for Ordinary Differential equations, with Automatic method switching for stiff and non-stiff problems, and with Root-finding) is a numerical solver providing an efficient and stable variable-size step method to solve Initial Value Problems of the form :



 \begin{eqnarray}
 \dot{y} = f(t,y), \hspace{3 mm} y(t_0) = y_0, \hspace{3 mm} y \in R^N
 \end{eqnarray}



 LSodar is similar to CVode in many ways :


 It uses variable-size steps,


 It can potentially use both BDF and Adams integration methods,


 BDF and Adams being implicit, stable methods, LSodar is suitable for stiff and non-stiff problems,


 They both look for roots over the integration interval.




 The main difference though is that LSodar is fully automated, and chooses between BDF and Adams itself, by checking for stiffness at every step.


 If the step is considered stiff, then BDF (with max order set to 5) is used and the Modified Newton method 'Chord' iteration is selected.


 Otherwise, the program uses Adams integration (with max order set to 12) and Functional iterations.


 The stiffness detection is done by step size attempts with both methods.


 First, if we are in Adams mode and the order is greater than 5, then we assume the problem is non-stiff and proceed with Adams.


 The first twenty steps use the Adams / Functional method.
 Then LSodar computes the ideal step size of both methods; if one method's ideal step size is at least five times larger (ratio = 5), then the current method switches (Adams / Functional to BDF / Chord Newton or vice versa).


 After every switch, LSodar takes twenty steps, then starts comparing the step sizes at every step.


 Such a strategy induces a minor computational overhead when the problem's stiffness is known in advance, but is very effective on problems whose stiffness changes during the simulation, for instance discontinuity-sensitive problems.


 Concerning precision, since the two integration/iteration methods are close to CVode's, the results are very similar.
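The same automatic Adams/BDF switching survives in SciPy's LSODA method, which descends from the same Hindmarsh/Petzold code family as LSodar (without the rootfinding). A small sketch on the Van der Pol oscillator, a standard stiffness test; the choice of mu, time span, and tolerances is illustrative:

```python
from scipy.integrate import solve_ivp

# Van der Pol oscillator: alternates between non-stiff phases and sharp,
# stiff relaxation transitions, so the solver must switch methods.
mu = 100.0
def vdp(t, y):
    return [y[1], mu * (1.0 - y[0] ** 2) * y[1] - y[0]]

# LSODA starts in Adams / Functional mode and switches to BDF when its
# stiffness test (comparing the ideal step sizes) says BDF would be cheaper.
sol = solve_ivp(vdp, (0.0, 200.0), [2.0, 0.0], method="LSODA",
                rtol=1e-6, atol=1e-8)
print(sol.success, sol.y[0, -1])
```

The user never selects a method: as the text says, the switching overhead is small, and the payoff is large on problems like this one whose stiffness varies.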



 Examples











 The integral block returns its continuous state; we can evaluate it with LSodar by running the example :





 The Scilab console displays :



 Now, in the following script, we compare the time difference between the methods by running the example with the five solvers in turn :

 Open the script






 These results show that on a non-stiff problem, for the same required precision, LSodar is significantly faster. Other tests confirm how close the results are. Indeed, we find that the difference between the LSodar and CVode solutions is close to the order of the highest tolerance ( |ylsodar - ycvode| ≈ max(reltol, abstol) ).
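A rough analogue of this comparison can be run in SciPy, pairing LSODA (LSodar's family) against BDF (CVode's family) on a simple decay problem; the test equation and tolerances are illustrative, not the Xcos benchmark above:

```python
import numpy as np
from scipy.integrate import solve_ivp

# Same IVP solved by two solver families; the pointwise gap between the two
# solutions should stay near the requested tolerances, as stated in the text.
def f(t, y):
    return [-0.5 * y[0]]

t_eval = np.linspace(0.0, 10.0, 50)
opts = dict(t_eval=t_eval, rtol=1e-8, atol=1e-8)
y_lsoda = solve_ivp(f, (0.0, 10.0), [1.0], method="LSODA", **opts).y[0]
y_bdf = solve_ivp(f, (0.0, 10.0), [1.0], method="BDF", **opts).y[0]
gap = float(np.max(np.abs(y_lsoda - y_bdf)))
print(gap)   # small, on the order of the requested tolerances
```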


 Variable step size ODE solvers are not appropriate for deterministic real-time applications, because the computational overhead of taking a time step varies over the course of an application.



 See Also


 CVode


 IDA


 Runge-Kutta 4(5)


 Dormand-Prince 4(5)


 ode


 ode_discrete


 ode_root


 odedc


 impl




 Bibliography

 A.C. Hindmarsh, "LSODE and LSODI, two new initial value ordinary differential equation solvers", ACM SIGNUM Newsletter, Volume 15, Issue 4, December 1980, Pages 10-11


 Sundials Documentation



 History


 5.4.1
 LSodar solver added




diff --git a/scilab/modules/xcos/help/en_US/solvers/RungeKutta.xml b/scilab/modules/xcos/help/en_US/solvers/RungeKutta.xml
deleted file mode 100644
index 1cfbd61..0000000
--- a/scilab/modules/xcos/help/en_US/solvers/RungeKutta.xml
+++ /dev/null
@@ -1,262 +0,0 @@




 Runge-Kutta 4(5)

 Runge-Kutta is a numerical solver providing an efficient explicit method to solve Ordinary Differential Equations (ODEs) Initial Value Problems. Called by xcos.



 Description

 Runge-Kutta is a numerical solver providing an efficient fixed-size step method to solve Initial Value Problems of the form :



 \begin{eqnarray}
 \dot{y} = f(t,y), \hspace{3 mm} y(t_0) = y_0, \hspace{3 mm} y \in R^N
 \end{eqnarray}



 CVode and IDA use variable-size steps for the integration.


 A drawback of that is the unpredictable computation time. With Runge-Kutta, we do not adapt to the complexity of the problem, but we guarantee a stable computation time.


 As of now, this method is explicit, so it is not concerned with Newton or Functional iterations, and it is not advised for stiff problems.


 It is an enhancement of the Euler method, which approximates yn+1 by truncating the Taylor expansion.


 By convention, to use fixed-size steps, the program first computes a fitting h that approaches the simulation parameter max step size.


 An important difference of Runge-Kutta with the previous methods is that it computes up to the fourth derivative of y, while the others only use linear combinations of y and y'.


 Here, the next value is determined by the present value yn plus the weighted average of four increments, where each increment is the product of the size of the interval, h, and an estimated slope specified by the function f(t,y) :


 k1 is the increment based on the slope at the beginning of the interval, using yn (Euler's method),


 k2 is the increment based on the slope at the midpoint of the interval, using yn + h*k1/2 ,


 k3 is again the increment based on the slope at the midpoint, but now using yn + h*k2/2 ,


 k4 is the increment based on the slope at the end of the interval, using yn + h*k3 .


 We can see that with the ki, we progress in the derivatives of yn. So in k4, we are approximating y(4)n, thus making an error in O(h^5).


 So the total error is number of steps * O(h^5). And since number of steps = interval size / h by definition, the total error is in O(h^4).


 This error analysis is what gives the method its name, Runge-Kutta 4(5) : O(h^5) per step, O(h^4) in total.
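The four-increment scheme and its O(h^4) total error can both be sketched in a few lines (a generic textbook RK4, not Xcos's solver; the test equation y' = y is illustrative):

```python
def rk4_solve(f, y0, t0, t1, n):
    """Fixed-step classical Runge-Kutta 4 over [t0, t1] with n steps."""
    h = (t1 - t0) / n
    y, t = y0, t0
    for i in range(n):
        k1 = f(t, y)                        # slope at the start (Euler)
        k2 = f(t + h / 2, y + h * k1 / 2)   # slope at the midpoint, from k1
        k3 = f(t + h / 2, y + h * k2 / 2)   # slope at the midpoint, from k2
        k4 = f(t + h, y + h * k3)           # slope at the end, from k3
        y += h * (k1 + 2 * k2 + 2 * k3 + k4) / 6   # weighted average
        t = t0 + (i + 1) * h
    return y

# y' = y on [0, 1], exact value e: halving h should divide the global error
# by about 2^4 = 16, which is the O(h^4) total error in action.
exact = 2.718281828459045
err_10 = abs(rk4_solve(lambda t, y: y, 1.0, 0.0, 1.0, 10) - exact)
err_20 = abs(rk4_solve(lambda t, y: y, 1.0, 0.0, 1.0, 20) - exact)
print(err_10 / err_20)   # close to 16
```

The observed ratio is slightly below 16 because the error constant itself varies a little with h, but it confirms the fourth-order total accuracy.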


 Although the solver works fine for max step size values down to 10^-3, rounding errors sometimes come into play as we approach 4*10^-4. Indeed, the interval splitting can no longer be done properly and we get erratic results.



 Examples











 The integral block returns its continuous state; we can evaluate it with Runge-Kutta by running the example :





 The Scilab console displays :



 Now, in the following script, we compare the time difference between Runge-Kutta and Sundials by running the example with the five solvers in turn :

 Open the script






 These results show that on a non-stiff problem, for comparable required precision and forcing the same step size, Runge-Kutta is faster.


 Variable step size ODE solvers are not appropriate for deterministic real-time applications, because the computational overhead of taking a time step varies over the course of an application.



 See Also


 CVode


 IDA


 LSodar


 Dormand-Prince 4(5)


 ode


 ode_discrete


 ode_root


 odedc


 impl




 Bibliography

 Sundials Documentation



 History


 5.4.1
 Runge-Kutta 4(5) solver added





1.7.9.5