Bug Fixes
=========
-* Bug #14433 fixed - acoth (which call atanh) crash scilab
+In 6.0.0:
+
+* Bug #9456 fixed - bench_run did not work on a path or in a toolbox
+
+* Bug #13869 fixed - bench_run with option nb_run=10 did not override the NB RUN tags
* Bug #14035 fixed - ndgrid did not manage all homogeneous data type (booleans, integers, polynomials, rationals, strings, [])
+* Bug #14423 fixed - bench_run did not have a return value, export file was not configurable
+
+* Bug #14433 fixed - acoth (which calls atanh) crashed Scilab
+
In 6.0.0 beta-1:
* Bug #6057 fixed - trailing space after minus sign has been removed from the display of values
<refentry xmlns="http://docbook.org/ns/docbook" xmlns:xlink="http://www.w3.org/1999/xlink" xmlns:svg="http://www.w3.org/2000/svg" xmlns:ns5="http://www.w3.org/1999/xhtml" xmlns:mml="http://www.w3.org/1998/Math/MathML" xmlns:db="http://docbook.org/ns/docbook" xmlns:scilab="http://www.scilab.org" xml:id="bench_run" xml:lang="en">
<refnamediv>
<refname>bench_run</refname>
- <refpurpose>Launch benchmark tests</refpurpose>
+ <refpurpose>Launches benchmark tests</refpurpose>
</refnamediv>
<refsynopsisdiv>
<title>Calling Sequence</title>
<synopsis>
- bench_run()
- bench_run(module[,test_name[,options]])
+ [modutests_names, elapsed_time, nb_iterations] = bench_run()
+        [modutests_names, elapsed_time, nb_iterations] = bench_run(module[, test_name[, options[, exportToFile]]])
+        [modutests_names, elapsed_time, nb_iterations] = bench_run(path_to_module[, test_name[, options[, exportToFile]]])
</synopsis>
</refsynopsisdiv>
<refsection>
<varlistentry>
<term>module</term>
<listitem>
- <para>a vector of string. It can be the name of a module or the absolute path of a toolbox.</para>
+                <para>a vector of strings. Contains the names of the Scilab modules to benchmark.</para>
+ </listitem>
+ </varlistentry>
+ <varlistentry>
+ <term>path_to_module</term>
+ <listitem>
+ <para>
+                    a vector of strings. Contains the paths to the directories of the modules to test. If <literal>"/path/to/directory"</literal> is given as an input parameter, tests are retrieved from the subdirectory
+ <literal>
+ /path/to/directory/<emphasis role="bold">tests/benchmarks</emphasis>
+ </literal>
+                    . Used for homemade benchmarks, e.g. in a toolbox.
+ </para>
</listitem>
</varlistentry>
<varlistentry>
<term>test_name</term>
<listitem>
- <para>a vector of string</para>
+            <para>a vector of strings. Contains the names of the tests to perform.</para>
+ <para>
+                The name of a test is its file name without the <literal>.tst</literal> extension. If several modules or directories are given as the first input parameter, tests are searched for in each of them.
+ </para>
</listitem>
</varlistentry>
<varlistentry>
<para>a vector of string</para>
<itemizedlist>
<listitem>
- <para>list : list of the benchmark tests available in a module</para>
+ <para>
+ <literal>"list"</literal>: list of the benchmark tests (<literal>test_name</literal>) available in a module
+ </para>
</listitem>
<listitem>
- <para>help : displays some examples of use in the Scilab console</para>
+ <para>
+ <literal>"help"</literal>: displays some examples of use in the Scilab console
+ </para>
</listitem>
<listitem>
- <para>nb_run=value : repeat the benchmark test value times</para>
+ <para>
+                        <literal>"nb_run=value"</literal>: runs each benchmark <literal>value</literal> times; by default, <function>bench_run</function> runs the code between the BENCH START and BENCH END tags 10000 times (see below). Overrides any <literal>BENCH NB RUN</literal> value specified in the benchmark test files.
+ </para>
</listitem>
</itemizedlist>
</listitem>
</varlistentry>
+ <varlistentry>
+ <term>exportToFile</term>
+ <listitem>
+ <para>a single string</para>
+ <para>
+                File path where the results of <function>bench_run</function> are exported in XML format. By default, or if <literal>""</literal>, <literal>"[]"</literal> or <literal>[]</literal> is given, the output directory is <literal>TMPDIR/benchmarks/</literal>.
+ </para>
+ <para>
+                If <literal>exportToFile</literal> is a directory, a timestamped output file is created in that directory; otherwise the file <literal>exportToFile</literal> is created. If the file cannot be created, a warning is issued and the file is created under <literal>TMPDIR/benchmarks/</literal> instead.
+ </para>
+ </listitem>
+ </varlistentry>
+ <varlistentry>
+ <term>modutests_names</term>
+ <listitem>
+ <para>a N-by-2 matrix of strings</para>
+ <para>
+ the first column lists the modules tested by <function>bench_run</function>, the second column lists the names of the benchmarks
+ </para>
+ </listitem>
+ </varlistentry>
+ <varlistentry>
+ <term>elapsed_time</term>
+ <listitem>
+ <para>a vector of doubles</para>
+ <para>the execution time for each benchmark</para>
+ </listitem>
+ </varlistentry>
+ <varlistentry>
+ <term>nb_iterations</term>
+ <listitem>
+ <para>a vector of doubles of size N</para>
+            <para>the number of iterations performed for each respective test</para>
+ </listitem>
+ </varlistentry>
</variablelist>
</refsection>
<refsection>
<title>Description</title>
<para>
- Search for .tst files in benchmark test library
- execute them, and display a report about execution time.
- The .tst files are searched in directories SCI+"/modules/*/tests/benchmark".
+ Performs benchmark tests, measures execution time and produces a report about benchmark tests.
+ </para>
+ <para>
+        Searches for .tst files in the benchmark test library, or under the <literal>tests/benchmarks</literal> subdirectory of each path given as input parameter,
+        executes them 10000 times by default and displays a report about execution time.
</para>
<para>
Special tags may be inserted in the .tst file, which help to
<itemizedlist>
<listitem>
<para>
- <-- BENCH NB RUN : 10 -->
- This test will be repeated 10 times.
+ <literal><-- BENCH NB RUN : 10 --></literal>
+ </para>
+ <para>
+                    With this tag, the test will be repeated 10 times, unless the <literal>"nb_run=###"</literal> option of <literal>bench_run(..)</literal> is used. The value given in the tag can be set to any integer.
</para>
</listitem>
<listitem>
+ <programlisting role="no-scilab-exec"><![CDATA[
+// <-- BENCH START -->
+[code to be executed]
+// <-- BENCH END -->
+]]></programlisting>
<para>
- <-- BENCH START -->
- <-- BENCH END -->
- The interesting part of the benchmark must be enclosed by these
- tags.
+                    Code between these tags is repeated. Code before the tags is executed once before the repetition, and code after them is executed once after it.
+                    If these tags are not present, the entire script is repeated.
</para>
</listitem>
</itemizedlist>
<para>Some simple examples of invocation of bench_run</para>
<programlisting role="example"><![CDATA[
// Launch all tests
-bench_run();
-bench_run([]);
-bench_run([],[]);
+// This may take some time...
+// bench_run();
+// bench_run([]);
+// bench_run([],[]);
// Test one or several module
bench_run('core');
// With options
bench_run([],[],'list');
bench_run([],[],'help');
-bench_run([],[],'nb_run=2000');
- ]]></programlisting>
+bench_run("string", [], 'nb_run=100');
+// results in an output file in the current directory
+bench_run("string", [], 'nb_run=100', 'my_output_file.xml');
+// results in an output directory, TMPDIR/benchmarks is the default
+bench_run("string", [], 'nb_run=100', TMPDIR);
+]]></programlisting>
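+        <para>
+            A minimal sketch of combining the output arguments with the XML export; the file name
+            <literal>my_benchs.xml</literal> is only an illustrative placeholder:
+        </para>
+        <programlisting role="example"><![CDATA[
+// Run the "string" benchmarks 100 times each, collect the results
+// and export them to a file under TMPDIR
+[names, times, runs] = bench_run("string", [], "nb_run=100", TMPDIR + "/my_benchs.xml");
+// names is an N-by-2 matrix [module, benchmark name],
+// times (in ms) and runs are the corresponding N-by-1 vectors
+]]></programlisting>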
<para>An example of a benchmark file. This file corresponds to the
file
SCI/modules/linear_algebra/tests/benchmarks/bench_chol.tst.
// <-- BENCH START -->
b = chol(a);
// <-- BENCH END -->
- ]]></programlisting>
+]]></programlisting>
<para>The result of the test</para>
- <programlisting role="example"><![CDATA[
+ <screen><![CDATA[
-->bench_run('linear_algebra','bench_chol')
- For Loop (as reference) ........................... 143.00 ms [ 1000000 x]
+For Loop (as reference) ........................... 33.20 ms [ 1000000 x]
- 001/001 - [linear_algebra] bench_chol ...................... 130.60 ms [ 10 x]
- ]]></programlisting>
+001/001 - [linear_algebra] bench_chol ...................... 1233.93 ms [ 10 x]
+ ]]></screen>
</refsection>
<refsection role="see also">
<title>See Also</title>
</member>
</simplelist>
</refsection>
+ <refsection role="history">
+ <title>History</title>
+ <revhistory>
+ <revision>
+ <revnumber>6.0</revnumber>
+ <revdescription>
+ <itemizedlist>
+ <listitem>
+ <literal>bench_run()</literal> can now return its results through the new
+ <literal>modutests_names</literal>, <literal>elapsed_time</literal>
+ and <literal>nb_iterations</literal> output parameters.
+ </listitem>
+ <listitem>
+ Exportation of results in XML is now possible
+ </listitem>
+ <listitem>
+ Global configuration settings mode(),
+ format(), ieee(), warning() and funcprot()
+ are now protected against tests.
+ </listitem>
+ </itemizedlist>
+ </revdescription>
+ </revision>
+ </revhistory>
+ </refsection>
</refentry>
--- /dev/null
+<?xml version="1.0" encoding="UTF-8"?>
+<!--
+ * Scilab ( http://www.scilab.org/ ) - This file is part of Scilab
+ * Copyright (C) INRIA
+ *
+ * Copyright (C) 2012 - 2016 - Scilab Enterprises
+ *
+ * This file is hereby licensed under the terms of the GNU GPL v2.0,
+ * pursuant to article 5.3.4 of the CeCILL v.2.1.
+ * This file was originally licensed under the terms of the CeCILL v2.1,
+ * and continues to be available under such terms.
+ * For more information, see the COPYING file which you should have received
+ * along with this program.
+ *
+ -->
+<refentry xmlns="http://docbook.org/ns/docbook" xmlns:xlink="http://www.w3.org/1999/xlink" xmlns:svg="http://www.w3.org/2000/svg" xmlns:ns5="http://www.w3.org/1999/xhtml" xmlns:mml="http://www.w3.org/1998/Math/MathML" xmlns:db="http://docbook.org/ns/docbook" xmlns:scilab="http://www.scilab.org" xml:id="bench_run" xml:lang="fr">
+ <refnamediv>
+ <refname>bench_run</refname>
+ <refpurpose>Lance les tests de performances</refpurpose>
+ </refnamediv>
+ <refsynopsisdiv>
+ <title>Syntaxe</title>
+ <synopsis>
+ [modutests_names, elapsed_time, nb_iterations] = bench_run()
+            [modutests_names, elapsed_time, nb_iterations] = bench_run(module[, test_name[, options[, exportToFile]]])
+            [modutests_names, elapsed_time, nb_iterations] = bench_run(path_to_module[, test_name[, options[, exportToFile]]])
+ </synopsis>
+ </refsynopsisdiv>
+ <refsection>
+ <title>Arguments</title>
+ <variablelist>
+ <varlistentry>
+ <term>module</term>
+ <listitem>
+ <para>Vecteur de chaînes de caractères. Noms des modules internes à Scilab à tester.</para>
+ </listitem>
+ </varlistentry>
+ <varlistentry>
+ <term>path_to_module</term>
+ <listitem>
+ <para>
+ Vecteur de chaînes de caractères. Contient les chemins des modules à tester. Si <literal>"/chemin/vers/module"</literal> est donné en argument d'entrée, les tests sont récupérés dans le sous répertoire
+ <literal>
+ /chemin/vers/module/<emphasis role="bold">tests/benchmarks</emphasis>
+ </literal>
+                    . À utiliser pour les tests de performance maison.
+ </para>
+ </listitem>
+ </varlistentry>
+ <varlistentry>
+ <term>test_name</term>
+ <listitem>
+ <para>Vecteur de chaînes de caractères. Contient les noms des tests à effectuer.</para>
+ <para>
+ Le nom d'un test est le nom du fichier sans <literal>.tst</literal>. Si plusieurs modules ou répertoires sont donnés comme premier argument d'entrée, recherche les tests dans chacun de ces modules ou répertoires.
+ </para>
+ </listitem>
+ </varlistentry>
+ <varlistentry>
+ <term>options</term>
+ <listitem>
+ <para>Vecteur de chaînes de caractères. Options parmi:</para>
+ <itemizedlist>
+ <listitem>
+ <para>
+ <literal>"list"</literal> : liste les tests de performance (<literal>test_name</literal>) présents dans un module
+ </para>
+ </listitem>
+ <listitem>
+ <para>
+ <literal>"help"</literal> : affiche quelques exemples d'utilisation en console
+ </para>
+ </listitem>
+ <listitem>
+ <para>
+                            <literal>"nb_run=value"</literal> : lance chaque test <literal>value</literal> fois ; par défaut <function>bench_run</function> exécute 10000 fois le code présent entre les balises BENCH START et BENCH END (voir ci-après). Remplace la valeur spécifiée dans la balise <literal>BENCH NB RUN</literal> des scripts de test.
+ </para>
+ </listitem>
+ </itemizedlist>
+ </listitem>
+ </varlistentry>
+ <varlistentry>
+ <term>exportToFile</term>
+ <listitem>
+ <para>une chaîne de caractères</para>
+ <para>
+                    Chemin du fichier d'export des résultats de <function>bench_run</function> au format XML. Par défaut, ou si <literal>""</literal>, <literal>"[]"</literal> ou <literal>[]</literal> est donné en paramètre d'entrée, le répertoire de sortie est <literal>TMPDIR/benchmarks/</literal>.
+ </para>
+ <para>
+ Si <literal>exportToFile</literal> est un répertoire, crée un fichier horodaté dans le répertoire, sinon crée le fichier <literal>exportToFile</literal>. Si ce fichier n'a pas pu être créé, un avertissement est affiché et le fichier est créé sous le répertoire <literal>TMPDIR/benchmarks/</literal>.
+ </para>
+ </listitem>
+ </varlistentry>
+ <varlistentry>
+ <term>modutests_names</term>
+ <listitem>
+ <para>matrice de chaînes de caractères de taille N-par-2</para>
+ <para>
+ La première colonne représente les modules et chemins vers les fichiers testés par <function>bench_run</function>, la seconde colonne représente les noms des tests de performance.
+ </para>
+ </listitem>
+ </varlistentry>
+ <varlistentry>
+ <term>elapsed_time</term>
+ <listitem>
+ <para>vecteur de décimaux</para>
+                <para>temps d'exécution pour chaque test de performance</para>
+ </listitem>
+ </varlistentry>
+ <varlistentry>
+ <term>nb_iterations</term>
+ <listitem>
+ <para>vecteur de décimaux de taille N</para>
+                <para>nombre de fois que chaque test a été exécuté</para>
+ </listitem>
+ </varlistentry>
+ </variablelist>
+ </refsection>
+ <refsection>
+ <title>Description</title>
+ <para>
+ Effectue des tests de performance, mesure les temps d'exécution et produit un rapport d'exécution pour ces tests.
+ </para>
+ <para>
+ Recherche tous les fichiers <literal>.tst</literal> sous le répertoire <literal>tests/benchmarks</literal> présent dans les modules internes scilab ou dans les chemins fournis en variable d'entrée, exécute ces fichiers 10000 fois et produit un rapport d'exécution.
+ </para>
+ <para>
+        Des balises présentes dans le fichier <literal>.tst</literal> permettent de contrôler le processus du test correspondant. Ces balises sont recherchées dans les commentaires du script.
+ </para>
+ <para>Les balises disponibles sont :</para>
+ <itemizedlist>
+ <listitem>
+ <para>
+ <literal><-- BENCH NB RUN : 10 --></literal>
+ </para>
+ <para>
+ Par défaut, le test sera répété 10 fois, sauf si l'option <literal>"nb_run=###"</literal> de <literal>bench_run(...)</literal> est utilisée. Toute valeur entière peut être donnée pour cette balise.
+ </para>
+ </listitem>
+ <listitem>
+ <para>
+ <programlisting role="no-scilab-exec"><![CDATA[
+// <-- BENCH START -->
+[code to be executed]
+// <-- BENCH END -->
+]]></programlisting>
+ </para>
+ <para>
+                Le code entre ces deux balises sera répété lors du test de performance.
+ Le code présent avant ces balises est exécuté avant la répétition, le code après ces balises est exécuté après.
+ Si ces balises sont absentes du code, le code entier sera répété.
+ </para>
+ </listitem>
+ </itemizedlist>
+ </refsection>
+ <refsection>
+ <title>Exemples</title>
+ <para>
+ Quelques exemples d'utilisation de <function>bench_run</function>
+ </para>
+ <programlisting role="example"><![CDATA[
+// Lance tous les tests
+// Cela peut prendre du temps...
+// bench_run();
+// bench_run([]);
+// bench_run([],[]);
+
+// Test d'un ou de plusieurs modules
+bench_run('core');
+bench_run('core',[]);
+bench_run(['core','string']);
+
+// Lance des tests spécifiques sur un module
+bench_run('core',['trycatch','opcode']);
+
+// Avec des options
+bench_run([],[],'list');
+bench_run([],[],'help');
+bench_run("string", [], 'nb_run=100');
+// Résultats dans un fichier dans le répertoire courant
+bench_run("string", [], 'nb_run=100', 'my_output_file.xml');
+// Résultats dans un répertoire, par défaut sous TMPDIR/benchmarks
+bench_run("string", [], 'nb_run=100', TMPDIR);
+ ]]></programlisting>
+ <para> Exemple de fichier de test
+ SCI/modules/linear_algebra/tests/benchmarks/bench_chol.tst.
+ </para>
+ <programlisting role="example"><![CDATA[
+// =============================================================================
+// Scilab ( http://www.scilab.org/ ) - This file is part of Scilab
+// Copyright (C) 2007-2008 - INRIA
+//
+// This file is distributed under the same license as the Scilab package.
+// =============================================================================
+
+//==============================================================================
+// Benchmark for chol function
+//==============================================================================
+
+// <-- BENCH NB RUN : 10 -->
+
+a = 0;
+b = 0;
+a = rand(900, 900, 'n');
+a = a'*a;
+
+// <-- BENCH START -->
+b = chol(a);
+// <-- BENCH END -->
+]]></programlisting>
+ <para>résultat du test</para>
+ <screen><![CDATA[
+-->bench_run('linear_algebra','bench_chol')
+
+For Loop (as reference) ........................... 33.20 ms [ 1000000 x]
+
+001/001 - [linear_algebra] bench_chol ...................... 1233.93 ms [ 10 x]
+ ]]></screen>
+ </refsection>
+ <refsection role="see also">
+ <title>Voir aussi</title>
+ <simplelist type="inline">
+ <member>
+ <link linkend="test_run">test_run</link>
+ </member>
+ </simplelist>
+ </refsection>
+ <refsection role="history">
+ <title>Historique</title>
+ <revhistory>
+ <revision>
+ <revnumber>6.0</revnumber>
+ <revdescription>
+ <itemizedlist>
+ <listitem>
+ <literal>bench_run()</literal> peut maintenant retourner les résultats des tests de performance via les nouveaux paramètres de sortie
+ <literal>modutests_names</literal>, <literal>elapsed_time</literal> et <literal>nb_iterations</literal>
+ </listitem>
+ <listitem>
+ L'export des résultats au format XML est désormais possible
+ </listitem>
+ <listitem>
+ Les paramètres de configuration globale
+ mode(),format(), ieee(), warning() et funcprot()
+ sont protégés lors des tests.
+ </listitem>
+ </itemizedlist>
+ </revdescription>
+ </revision>
+ </revhistory>
+ </refsection>
+</refentry>
<refsynopsisdiv>
<title>呼び出し手順</title>
<synopsis>
- bench_run()
- bench_run(module[,test_name[,options]])
+ [modutests_names, elapsed_time, nb_iterations] = bench_run()
+        [modutests_names, elapsed_time, nb_iterations] = bench_run(module[, test_name[, options[, exportToFile]]])
+        [modutests_names, elapsed_time, nb_iterations] = bench_run(path_to_module[, test_name[, options[, exportToFile]]])
</synopsis>
</refsynopsisdiv>
<refsection>
</listitem>
</varlistentry>
<varlistentry>
+ <term>path_to_module</term>
+ <listitem>
+ <para>
+                    a vector of strings. Contains the paths to the directories of the modules to test. If <literal>"/path/to/directory"</literal> is given as an input parameter, tests are retrieved from the subdirectory
+ <literal>
+ /path/to/directory/<emphasis role="bold">tests/benchmarks</emphasis>
+ </literal>
+                    . Used for homemade benchmarks, e.g. in a toolbox.
+ </para>
+ </listitem>
+ </varlistentry>
+ <varlistentry>
<term>test_name</term>
<listitem>
- <para>文字列ベクトル</para>
+            <para>a vector of strings. Contains the names of the tests to perform.</para>
+ <para>
+                The name of a test is its file name without the <literal>.tst</literal> extension. If several modules or directories are given as the first input parameter, tests are searched for in each of them.
+ </para>
</listitem>
</varlistentry>
<varlistentry>
<para>文字列ベクトル</para>
<itemizedlist>
<listitem>
- <para>list : モジュールで利用可能なベンチマークテストのリスト</para>
+ <para>"list" : モジュールで利用可能なベンチマークテストのリスト</para>
</listitem>
<listitem>
- <para>help : Scilabコンソールにいくつかの使用例を表示</para>
+ <para>"help" : Scilabコンソールにいくつかの使用例を表示</para>
</listitem>
<listitem>
- <para>nb_run=value : ベンチマークテストを指定回数反復実行</para>
+ <para>
+                        <literal>"nb_run=value"</literal>: runs each benchmark <literal>value</literal> times; by default, <function>bench_run</function> runs the code between the BENCH START and BENCH END tags 10000 times (see below). Overrides any <literal>BENCH NB RUN</literal> value specified in the benchmark test files.
+ </para>
</listitem>
</itemizedlist>
</listitem>
</varlistentry>
+ <varlistentry>
+ <term>exportToFile</term>
+ <listitem>
+ <para>a single string</para>
+ <para>
+                File path where the results of <function>bench_run</function> are exported in XML format. By default, or if <literal>""</literal>, <literal>"[]"</literal> or <literal>[]</literal> is given, the output directory is <literal>TMPDIR/benchmarks/</literal>.
+ </para>
+ <para>
+                If <literal>exportToFile</literal> is a directory, a timestamped output file is created in that directory; otherwise the file <literal>exportToFile</literal> is created. If the file cannot be created, a warning is issued and the file is created under <literal>TMPDIR/benchmarks/</literal> instead.
+ </para>
+ </listitem>
+ </varlistentry>
+ <varlistentry>
+ <term>modutests_names</term>
+ <listitem>
+ <para>a N-by-2 matrix of strings</para>
+ <para>
+ the first column lists the modules tested by <function>bench_run</function>, the second column lists the names of the benchmarks
+ </para>
+ </listitem>
+ </varlistentry>
+ <varlistentry>
+ <term>elapsed_time</term>
+ <listitem>
+ <para>a vector of doubles</para>
+ <para>the execution time for each benchmark</para>
+ </listitem>
+ </varlistentry>
+ <varlistentry>
+ <term>nb_iterations</term>
+ <listitem>
+ <para>a vector of doubles of size N</para>
+            <para>the number of iterations performed for each respective test</para>
+ </listitem>
+ </varlistentry>
</variablelist>
</refsection>
<refsection>
<title>説明</title>
<para>
- ベンチマークテストライブラリの .tst ファイルを探して実行し,実行時間に関する
- レポートを表示します.
- .tst ファイルはSCI+"/modules/*/tests/benchmark"ディレクトリで探されます.
+ Performs benchmark tests, measures execution time and produces a report about benchmark tests.
+ </para>
+ <para>
+        Searches for .tst files in the benchmark test library, or under the <literal>tests/benchmarks</literal> subdirectory of each path given as input parameter,
+        executes them 10000 times by default and displays a report about execution time.
</para>
<para>
- テスト処理を制御しやすくするために,.tstファイルに特殊なタグを挿入できます.
- これらのタグはScilabコメントとして記入します.
+ Special tags may be inserted in the .tst file, which help to
+ control the processing of the corresponding test. These tags
+ are expected to be found in Scilab comments.
</para>
- <para>利用可能なタグを以下に示します :</para>
+ <para>These are the available tags :</para>
<itemizedlist>
<listitem>
<para>
- <-- BENCH NB RUN : 10 -->
- このテストは10回反復実行されます.
+ <literal><-- BENCH NB RUN : 10 --></literal>
+ </para>
+ <para>
+                    With this tag, the test will be repeated 10 times, unless the <literal>"nb_run=###"</literal> option of <literal>bench_run(..)</literal> is used. The value given in the tag can be set to any integer.
</para>
</listitem>
<listitem>
+ <programlisting role="no-scilab-exec"><![CDATA[
+// <-- BENCH START -->
+[code to be executed]
+// <-- BENCH END -->
+]]></programlisting>
<para>
- <-- BENCH START -->
- <-- BENCH END -->
- ベンチマークの関心がある部分をこれらのタグで括りますThe
+                Code between these tags is repeated. Code before the tags is executed once before the repetition, and code after them is executed once after it.
+                If these tags are not present, the entire script is repeated.
</para>
</listitem>
</itemizedlist>
</refsection>
<refsection>
- <title>例</title>
- <para>bench_runを実行例をいくつか示します</para>
+ <title>Examples</title>
+ <para>Some simple examples of invocation of bench_run</para>
<programlisting role="example"><![CDATA[
-// 全てのテストを実行
-bench_run();
-bench_run([]);
-bench_run([],[]);
-// 1つまたは複数のモジュールをテスト
+// Launch all tests
+// This may take some time...
+// bench_run();
+// bench_run([]);
+// bench_run([],[]);
+
+// Test one or several module
bench_run('core');
bench_run('core',[]);
bench_run(['core','string']);
-// 指定したモジュールの1つまたは複数のテストを実行
+
+// Launch one or several test in a specified module
bench_run('core',['trycatch','opcode']);
-// オプションを指定
+
+// With options
bench_run([],[],'list');
bench_run([],[],'help');
-bench_run([],[],'nb_run=2000');
- ]]></programlisting>
+bench_run("string", [], 'nb_run=100');
+// results in an output file in the current directory
+bench_run("string", [], 'nb_run=100', 'my_output_file.xml');
+// results in an output directory, TMPDIR/benchmarks is the default
+bench_run("string", [], 'nb_run=100', TMPDIR);
+]]></programlisting>
<para>ベンチマークファイルの例. このファイルはファイル
SCI/modules/linear_algebra/tests/benchmarks/bench_chol.tstに対応します.
</para>
//
// This file is distributed under the same license as the Scilab package.
// =============================================================================
+
//==============================================================================
// Benchmark for chol function
//==============================================================================
+
// <-- BENCH NB RUN : 10 -->
+
a = 0;
b = 0;
a = rand(900, 900, 'n');
a = a'*a;
+
// <-- BENCH START -->
b = chol(a);
// <-- BENCH END -->
- ]]></programlisting>
+]]></programlisting>
<para>テストの結果</para>
- <programlisting role="example"><![CDATA[
+ <screen><![CDATA[
-->bench_run('linear_algebra','bench_chol')
- For Loop (as reference) ........................... 143.00 ms [ 1000000 x]
- 001/001 - [linear_algebra] bench_chol ...................... 130.60 ms [ 10 x]
- ]]></programlisting>
+
+For Loop (as reference) ........................... 33.20 ms [ 1000000 x]
+
+001/001 - [linear_algebra] bench_chol ...................... 1233.93 ms [ 10 x]
+ ]]></screen>
</refsection>
<refsection role="see also">
<title>参照</title>
</member>
</simplelist>
</refsection>
+ <refsection role="history">
+ <title>History</title>
+ <revhistory>
+ <revision>
+ <revnumber>6.0</revnumber>
+ <revdescription>
+ <itemizedlist>
+ <listitem>
+ <literal>bench_run()</literal> can now return its results through the new
+ <literal>modutests_names</literal>, <literal>elapsed_time</literal>
+ and <literal>nb_iterations</literal> output parameters.
+ </listitem>
+ <listitem>
+ Exportation of results in XML is now possible
+ </listitem>
+ <listitem>
+ Global configuration settings mode(),
+ format(), ieee(), warning() and funcprot()
+ are now protected against tests.
+ </listitem>
+ </itemizedlist>
+ </revdescription>
+ </revision>
+ </revhistory>
+ </refsection>
</refentry>
--- /dev/null
+<?xml version="1.0" encoding="UTF-8"?>
+<!--
+ * Scilab ( http://www.scilab.org/ ) - This file is part of Scilab
+ * Copyright (C) INRIA
+ *
+ * Copyright (C) 2012 - 2016 - Scilab Enterprises
+ *
+ * This file is hereby licensed under the terms of the GNU GPL v2.0,
+ * pursuant to article 5.3.4 of the CeCILL v.2.1.
+ * This file was originally licensed under the terms of the CeCILL v2.1,
+ * and continues to be available under such terms.
+ * For more information, see the COPYING file which you should have received
+ * along with this program.
+ *
+ -->
+<refentry xmlns="http://docbook.org/ns/docbook" xmlns:xlink="http://www.w3.org/1999/xlink" xmlns:svg="http://www.w3.org/2000/svg" xmlns:ns5="http://www.w3.org/1999/xhtml" xmlns:mml="http://www.w3.org/1998/Math/MathML" xmlns:db="http://docbook.org/ns/docbook" xmlns:scilab="http://www.scilab.org" xml:id="bench_run" xml:lang="pt">
+ <refnamediv>
+ <refname>bench_run</refname>
+ <refpurpose>Launches benchmark tests</refpurpose>
+ </refnamediv>
+ <refsynopsisdiv>
+ <title>Calling Sequence</title>
+ <synopsis>
+ [modutests_names, elapsed_time, nb_iterations] = bench_run()
+            [modutests_names, elapsed_time, nb_iterations] = bench_run(module[, test_name[, options[, exportToFile]]])
+            [modutests_names, elapsed_time, nb_iterations] = bench_run(path_to_module[, test_name[, options[, exportToFile]]])
+ </synopsis>
+ </refsynopsisdiv>
+ <refsection>
+ <title>Arguments</title>
+ <variablelist>
+ <varlistentry>
+ <term>module</term>
+ <listitem>
+                <para>a vector of strings. Contains the names of the Scilab modules to benchmark.</para>
+ </listitem>
+ </varlistentry>
+ <varlistentry>
+ <term>path_to_module</term>
+ <listitem>
+ <para>
+                    a vector of strings. Contains the paths to the directories of the modules to test. If <literal>"/path/to/directory"</literal> is given as an input parameter, tests are retrieved from the subdirectory
+ <literal>
+ /path/to/directory/<emphasis role="bold">tests/benchmarks</emphasis>
+ </literal>
+                    . Used for homemade benchmarks, e.g. in a toolbox.
+ </para>
+ </listitem>
+ </varlistentry>
+ <varlistentry>
+ <term>test_name</term>
+ <listitem>
+                <para>a vector of strings. Contains the names of the tests to perform.</para>
+ <para>
+                    The name of a test is its file name without the <literal>.tst</literal> extension. If several modules or directories are given as the first input parameter, tests are searched for in each of them.
+ </para>
+ </listitem>
+ </varlistentry>
+ <varlistentry>
+ <term>options</term>
+ <listitem>
+ <para>a vector of string</para>
+ <itemizedlist>
+ <listitem>
+ <para>
+ <literal>"list"</literal>: list of the benchmark tests (<literal>test_name</literal>) available in a module
+ </para>
+ </listitem>
+ <listitem>
+ <para>
+ <literal>"help"</literal>: displays some examples of use in the Scilab console
+ </para>
+ </listitem>
+ <listitem>
+ <para>
+                            <literal>"nb_run=value"</literal>: runs each benchmark <literal>value</literal> times; by default, <function>bench_run</function> runs the code between the BENCH START and BENCH END tags 10000 times (see below). Overrides any <literal>BENCH NB RUN</literal> value specified in the benchmark test files.
+ </para>
+ </listitem>
+ </itemizedlist>
+ </listitem>
+ </varlistentry>
+ <varlistentry>
+ <term>exportToFile</term>
+ <listitem>
+ <para>a single string</para>
+ <para>
+                    File path where the results of <function>bench_run</function> are exported in XML format. By default, or if <literal>""</literal>, <literal>"[]"</literal> or <literal>[]</literal> is given, the output directory is <literal>TMPDIR/benchmarks/</literal>.
+ </para>
+ <para>
+                    If <literal>exportToFile</literal> is a directory, a timestamped output file is created in that directory; otherwise the file <literal>exportToFile</literal> is created. If the file cannot be created, a warning is issued and the file is created under <literal>TMPDIR/benchmarks/</literal> instead.
+ </para>
+ </listitem>
+ </varlistentry>
+ <varlistentry>
+ <term>modutests_names</term>
+ <listitem>
+ <para>a N-by-2 matrix of strings</para>
+ <para>
+ the first column lists the modules tested by <function>bench_run</function>, the second column lists the names of the benchmarks
+ </para>
+ </listitem>
+ </varlistentry>
+ <varlistentry>
+ <term>elapsed_time</term>
+ <listitem>
+ <para>a vector of doubles</para>
+ <para>the execution time for each benchmark</para>
+ </listitem>
+ </varlistentry>
+ <varlistentry>
+ <term>nb_iterations</term>
+ <listitem>
+ <para>a vector of doubles of size N</para>
+                <para>the number of iterations performed for each respective test</para>
+ </listitem>
+ </varlistentry>
+ </variablelist>
+ </refsection>
+ <refsection>
+ <title>Description</title>
+ <para>
+ Performs benchmark tests, measures execution time and produces a report about benchmark tests.
+ </para>
+ <para>
+            Searches for .tst files in the benchmark test library, or under the <literal>tests/benchmarks</literal> subdirectory of each path given as input parameter,
+            executes them 10000 times by default and displays a report about execution time.
+ </para>
+ <para>
+ Special tags may be inserted in the .tst file, which help to
+ control the processing of the corresponding test. These tags
+ are expected to be found in Scilab comments.
+ </para>
+ <para>These are the available tags :</para>
+ <itemizedlist>
+ <listitem>
+ <para>
+ <literal><-- BENCH NB RUN : 10 --></literal>
+ </para>
+ <para>
+                    With this tag, the test will be repeated 10 times, unless the <literal>"nb_run=###"</literal> option of <literal>bench_run(..)</literal> is used. The value given in the tag can be set to any integer.
+ </para>
+ </listitem>
+ <listitem>
+ <programlisting role="no-scilab-exec"><![CDATA[
+// <-- BENCH START -->
+[code to be executed]
+// <-- BENCH END -->
+]]></programlisting>
+ <para>
+                    Code between these tags is repeated. Code before the tags is executed once before the repetition, and code after them is executed once after it.
+                    If these tags are not present, the entire script is repeated.
+ </para>
+ </listitem>
+ </itemizedlist>
+ </refsection>
+ <refsection>
+ <title>Examples</title>
+ <para>Some simple examples of invocation of bench_run</para>
+ <programlisting role="example"><![CDATA[
+// Launch all tests
+// This may take some time...
+// bench_run();
+// bench_run([]);
+// bench_run([],[]);
+
+// Test one or several module
+bench_run('core');
+bench_run('core',[]);
+bench_run(['core','string']);
+
+// Launch one or several test in a specified module
+bench_run('core',['trycatch','opcode']);
+
+// With options
+bench_run([],[],'list');
+bench_run([],[],'help');
+bench_run("string", [], 'nb_run=100');
+// results in an output file in the current directory
+bench_run("string", [], 'nb_run=100', 'my_output_file.xml');
+// results in an output directory, TMPDIR/benchmarks is the default
+bench_run("string", [], 'nb_run=100', TMPDIR);
+]]></programlisting>
+ <para>An example of a benchmark file. This file corresponds to the
+ file
+ SCI/modules/linear_algebra/tests/benchmarks/bench_chol.tst.
+ </para>
+ <programlisting role="example"><![CDATA[
+// =============================================================================
+// Scilab ( http://www.scilab.org/ ) - This file is part of Scilab
+// Copyright (C) 2007-2008 - INRIA
+//
+// This file is distributed under the same license as the Scilab package.
+// =============================================================================
+
+//==============================================================================
+// Benchmark for chol function
+//==============================================================================
+
+// <-- BENCH NB RUN : 10 -->
+
+a = 0;
+b = 0;
+a = rand(900, 900, 'n');
+a = a'*a;
+
+// <-- BENCH START -->
+b = chol(a);
+// <-- BENCH END -->
+]]></programlisting>
+ <para>The result of the test</para>
+ <screen><![CDATA[
+-->bench_run('linear_algebra','bench_chol')
+
+For Loop (as reference) ........................... 33.20 ms [ 1000000 x]
+
+001/001 - [linear_algebra] bench_chol ...................... 1233.93 ms [ 10 x]
+ ]]></screen>
+ </refsection>
+ <refsection role="see also">
+ <title>See Also</title>
+ <simplelist type="inline">
+ <member>
+ <link linkend="test_run">test_run</link>
+ </member>
+ </simplelist>
+ </refsection>
+ <refsection role="history">
+ <title>History</title>
+ <revhistory>
+ <revision>
+ <revnumber>6.0</revnumber>
+ <revdescription>
+ <itemizedlist>
+ <listitem>
+ <literal>bench_run()</literal> can now return its results through the new
+ <literal>modutests_names</literal>, <literal>elapsed_time</literal>
+ and <literal>nb_iterations</literal> output parameters.
+ </listitem>
+ <listitem>
+ Exportation of results in XML is now possible
+ </listitem>
+ <listitem>
+ Global configuration settings mode(),
+ format(), ieee(), warning() and funcprot()
+ are now protected against tests.
+ </listitem>
+ </itemizedlist>
+ </revdescription>
+ </revision>
+ </revhistory>
+ </refsection>
+</refentry>
--- /dev/null
+<?xml version="1.0" encoding="UTF-8"?>
+<!--
+ * Scilab ( http://www.scilab.org/ ) - This file is part of Scilab
+ * Copyright (C) INRIA
+ *
+ * Copyright (C) 2012 - 2016 - Scilab Enterprises
+ *
+ * This file is hereby licensed under the terms of the GNU GPL v2.0,
+ * pursuant to article 5.3.4 of the CeCILL v.2.1.
+ * This file was originally licensed under the terms of the CeCILL v2.1,
+ * and continues to be available under such terms.
+ * For more information, see the COPYING file which you should have received
+ * along with this program.
+ *
+ -->
+<refentry xmlns="http://docbook.org/ns/docbook" xmlns:xlink="http://www.w3.org/1999/xlink" xmlns:svg="http://www.w3.org/2000/svg" xmlns:ns5="http://www.w3.org/1999/xhtml" xmlns:mml="http://www.w3.org/1998/Math/MathML" xmlns:db="http://docbook.org/ns/docbook" xmlns:scilab="http://www.scilab.org" xml:id="bench_run" xml:lang="ru">
+ <refnamediv>
+ <refname>bench_run</refname>
+ <refpurpose>Launches benchmark tests</refpurpose>
+ </refnamediv>
+ <refsynopsisdiv>
+ <title>Calling Sequence</title>
+ <synopsis>
+ [modutests_names, elapsed_time, nb_iterations] = bench_run()
+            [modutests_names, elapsed_time, nb_iterations] = bench_run(module[, test_name[, options[, exportToFile]]])
+            [modutests_names, elapsed_time, nb_iterations] = bench_run(path_to_module[, test_name[, options[, exportToFile]]])
+ </synopsis>
+ </refsynopsisdiv>
+ <refsection>
+ <title>Arguments</title>
+ <variablelist>
+ <varlistentry>
+ <term>module</term>
+ <listitem>
+                <para>a vector of strings. Contains the names of the Scilab modules to benchmark.</para>
+ </listitem>
+ </varlistentry>
+ <varlistentry>
+ <term>path_to_module</term>
+ <listitem>
+ <para>
+                    a vector of strings. Contains the paths to the directories of the modules to test. If <literal>"/path/to/directory"</literal> is given as an input parameter, tests are retrieved from the subdirectory
+ <literal>
+ /path/to/directory/<emphasis role="bold">tests/benchmarks</emphasis>
+ </literal>
+                    . Used for homemade benchmarks, e.g. in a toolbox.
+ </para>
+ </listitem>
+ </varlistentry>
+ <varlistentry>
+ <term>test_name</term>
+ <listitem>
+                <para>a vector of strings. Contains the names of the tests to perform.</para>
+ <para>
+                    The name of a test is its file name without the <literal>.tst</literal> extension. If several modules or directories are given as the first input parameter, tests are searched for in each of them.
+ </para>
+ </listitem>
+ </varlistentry>
+ <varlistentry>
+ <term>options</term>
+ <listitem>
+ <para>a vector of string</para>
+ <itemizedlist>
+ <listitem>
+ <para>
+ <literal>"list"</literal>: list of the benchmark tests (<literal>test_name</literal>) available in a module
+ </para>
+ </listitem>
+ <listitem>
+ <para>
+ <literal>"help"</literal>: displays some examples of use in the Scilab console
+ </para>
+ </listitem>
+ <listitem>
+ <para>
+                            <literal>"nb_run=value"</literal>: runs each benchmark <literal>value</literal> times; by default, <function>bench_run</function> runs the code between the BENCH START and BENCH END tags 10000 times (see below). Overrides any <literal>BENCH NB RUN</literal> value specified in the benchmark test files.
+ </para>
+ </listitem>
+ </itemizedlist>
+ </listitem>
+ </varlistentry>
+ <varlistentry>
+ <term>exportToFile</term>
+ <listitem>
+ <para>a single string</para>
+ <para>
+                    File path where the results of <function>bench_run</function> are exported in XML format. By default, or if <literal>""</literal>, <literal>"[]"</literal> or <literal>[]</literal> is given, the output directory is <literal>TMPDIR/benchmarks/</literal>.
+ </para>
+ <para>
+                    If <literal>exportToFile</literal> is a directory, a timestamped output file is created in that directory; otherwise the file <literal>exportToFile</literal> is created. If the file cannot be created, a warning is issued and the file is created under <literal>TMPDIR/benchmarks/</literal> instead.
+ </para>
+ </listitem>
+ </varlistentry>
+ <varlistentry>
+ <term>modutests_names</term>
+ <listitem>
+ <para>a N-by-2 matrix of strings</para>
+ <para>
+ the first column lists the modules tested by <function>bench_run</function>, the second column lists the names of the benchmarks
+ </para>
+ </listitem>
+ </varlistentry>
+ <varlistentry>
+ <term>elapsed_time</term>
+ <listitem>
+ <para>a vector of doubles</para>
+ <para>the execution time for each benchmark</para>
+ </listitem>
+ </varlistentry>
+ <varlistentry>
+ <term>nb_iterations</term>
+ <listitem>
+ <para>a vector of doubles of size N</para>
+                <para>the number of iterations performed for each respective test</para>
+ </listitem>
+ </varlistentry>
+ </variablelist>
+ </refsection>
+ <refsection>
+ <title>Description</title>
+ <para>
+ Performs benchmark tests, measures execution time and produces a report about benchmark tests.
+ </para>
+ <para>
+            Searches for .tst files in the benchmark test library, or under the <literal>tests/benchmarks</literal> subdirectory of each path given as input parameter,
+            executes them 10000 times by default and displays a report about execution time.
+ </para>
+ <para>
+ Special tags may be inserted in the .tst file, which help to
+ control the processing of the corresponding test. These tags
+ are expected to be found in Scilab comments.
+ </para>
+ <para>These are the available tags :</para>
+ <itemizedlist>
+ <listitem>
+ <para>
+ <literal><-- BENCH NB RUN : 10 --></literal>
+ </para>
+ <para>
+                    With this tag, the test will be repeated 10 times, unless the <literal>"nb_run=###"</literal> option of <literal>bench_run(..)</literal> is used. The value given in the tag can be set to any integer.
+ </para>
+ </listitem>
+ <listitem>
+ <programlisting role="no-scilab-exec"><![CDATA[
+// <-- BENCH START -->
+[code to be executed]
+// <-- BENCH END -->
+]]></programlisting>
+ <para>
+                    Code between these tags is repeated. Code before the tags is executed once before the repetition, and code after them is executed once after it.
+                    If these tags are not present, the entire script is repeated.
+ </para>
+ </listitem>
+ </itemizedlist>
+ </refsection>
+ <refsection>
+ <title>Examples</title>
+ <para>Some simple examples of invocation of bench_run</para>
+ <programlisting role="example"><![CDATA[
+// Launch all tests
+// This may take some time...
+// bench_run();
+// bench_run([]);
+// bench_run([],[]);
+
+// Test one or several module
+bench_run('core');
+bench_run('core',[]);
+bench_run(['core','string']);
+
+// Launch one or several test in a specified module
+bench_run('core',['trycatch','opcode']);
+
+// With options
+bench_run([],[],'list');
+bench_run([],[],'help');
+bench_run("string", [], 'nb_run=100');
+// results in an output file in the current directory
+bench_run("string", [], 'nb_run=100', 'my_output_file.xml');
+// results in an output directory, TMPDIR/benchmarks is the default
+bench_run("string", [], 'nb_run=100', TMPDIR);
+]]></programlisting>
+ <para>An example of a benchmark file. This file corresponds to the
+ file
+ SCI/modules/linear_algebra/tests/benchmarks/bench_chol.tst.
+ </para>
+ <programlisting role="example"><![CDATA[
+// =============================================================================
+// Scilab ( http://www.scilab.org/ ) - This file is part of Scilab
+// Copyright (C) 2007-2008 - INRIA
+//
+// This file is distributed under the same license as the Scilab package.
+// =============================================================================
+
+//==============================================================================
+// Benchmark for chol function
+//==============================================================================
+
+// <-- BENCH NB RUN : 10 -->
+
+a = 0;
+b = 0;
+a = rand(900, 900, 'n');
+a = a'*a;
+
+// <-- BENCH START -->
+b = chol(a);
+// <-- BENCH END -->
+]]></programlisting>
+ <para>The result of the test</para>
+ <screen><![CDATA[
+-->bench_run('linear_algebra','bench_chol')
+
+For Loop (as reference) ........................... 33.20 ms [ 1000000 x]
+
+001/001 - [linear_algebra] bench_chol ...................... 1233.93 ms [ 10 x]
+ ]]></screen>
+ </refsection>
+ <refsection role="see also">
+ <title>See Also</title>
+ <simplelist type="inline">
+ <member>
+ <link linkend="test_run">test_run</link>
+ </member>
+ </simplelist>
+ </refsection>
+ <refsection role="history">
+ <title>History</title>
+ <revhistory>
+ <revision>
+ <revnumber>6.0</revnumber>
+ <revdescription>
+ <itemizedlist>
+ <listitem>
+ <literal>bench_run()</literal> can now return its results through the new
+ <literal>modutests_names</literal>, <literal>elapsed_time</literal>
+ and <literal>nb_iterations</literal> output parameters.
+ </listitem>
+ <listitem>
+ Exportation of results in XML is now possible
+ </listitem>
+ <listitem>
+ Global configuration settings mode(),
+ format(), ieee(), warning() and funcprot()
+ are now protected against tests.
+ </listitem>
+ </itemizedlist>
+ </revdescription>
+ </revision>
+ </revhistory>
+ </refsection>
+</refentry>
// Launch benchmarks
//-----------------------------------------------------------------------------
-function bench_run(varargin)
+function [modutests_names, elapsed_time, nb_iterations] = bench_run(varargin)
lhs = argn(1);
rhs = argn(2);
global test_count;
test_list = [];
+ modutests_names = "";
test_count = 0;
boucle_for_time = 0;
just_list_tests = %F;
print_help = %F;
nb_run = "10000";
+ nb_run_override = %f;
+
+ elapsed_time = [];
+ nb_iterations = [];
xml_str = "";
// =======================================================
if (rhs == 0) ..
- | ((rhs == 1) & (varargin(1)==[])) ..
- | (((rhs == 2)|(rhs == 3)) & (varargin(1)==[]) & (varargin(2)==[])) then
+ | ((rhs == 1) & (varargin(1)==[] | varargin(1)=="[]" | varargin(1) == "")) ..
+ | (((rhs >= 2)) & (varargin(1)==[] | varargin(1)=="[]" | varargin(1) == "") & (varargin(2)==[] | varargin(2)=="[]" | varargin(2) == "")) then
// No input argument
// bench_run()
end
elseif (rhs == 1) ..
- | ((rhs == 2) & (varargin(2)==[])) ..
- | ((rhs == 3) & (varargin(2)==[])) then
+    | ((rhs >= 2) & (varargin(2)==[] | varargin(2)=="[]" | varargin(2) == "")) then
// One input argument
// bench_run(<module_name>)
if( with_module(module_mat(i,j)) ) then
bench_add_module(module_mat(i,j));
else
- error(sprintf(gettext("%s is not an installed module"),module_mat(i,j)));
+ if isdir(module_mat(i,j)) then
+ bench_add_dir(module_mat(i,j));
+ else
+ error(msprintf(gettext("%s: %s is not an installed module"), "bench_run", module_mat(i,j)));
+ end
end
end
end
- elseif (rhs == 2) | (rhs == 3) then
+ elseif (rhs >= 2 & rhs <= 4) then
// Two input arguments
// bench_run(<module_name>,<test_name>)
// bench_run(<module_name>,[<test_name_1>,<test_name_2>] )
- // varargin(1) = <module_name> ==> string 1x1
- // varargin(2) = <test_name_1> ==> mat nl x nc
-
- module = varargin(1);
+ module_mat = varargin(1);
test_mat = varargin(2);
-
- if ((or(size(module) <> [1,1])) & (test_mat <> [])) then
- example = bench_examples();
- err = ["" ; gettext("error : Input argument sizes are not valid") ; "" ; example ];
- printf("%s\n",err);
- return;
+ bench_list_reduced = [];
+
+ // get module and test lists
+ bench_list = bench_list_tests(module_mat);
+ // only keep relevant tests
+ // after this loop bench_test_reduced contains the module and relevant tests
+ for i = 1:size(test_mat, "*")
+ found_tests = find(bench_list(:,2) == test_mat(i));
+ if ~isempty(found_tests)
+ bench_list_reduced = [bench_list_reduced; bench_list(found_tests, :)];
+ else
+ // At least one element in the test list is wrong
+ // this is an error
+ error(msprintf(_("%s: Wrong value for input argument #%d: test %s not found in the list of modules"), "bench_run", 2, test_mat(i)));
+ end
end
- [nl,nc] = size(test_mat);
-
- for i=1:nl
- for j=1:nc
-
- if (fileinfo(SCI+"/modules/"+module+"/tests/benchmarks/"+test_mat(i,j)+".tst")<>[]) then
- bench_add_onebench(module,test_mat(i,j));
- else
- error(sprintf(gettext("The test ""%s"" is not available from the ""%s"" module"),test_mat(i,j),module));
- end
-
- end
+ for i=1:size(bench_list_reduced, "r") //loops over each row of bench_list_reduced
+ bench_add_onebench(bench_list_reduced(i, 1), bench_list_reduced(i, 2));
end
else
- error(msprintf(gettext("%s: Wrong number of input argument(s): %d to %d expected.\n"), "bench_run", 0, 3));
+ error(msprintf(gettext("%s: Wrong number of input argument(s): %d to %d expected.\n"), "bench_run", 0, 4));
end
// =======================================================
// Gestion des options
// =======================================================
- if rhs == 3 then
+ if rhs >= 3 then
option_mat = varargin(3);
print_help = %T;
end
- if grep(option_mat,"nb_run=") <> [] then
- nb_run_line = grep(option_mat,"nb_run=");
- nb_run = strsubst(option_mat(nb_run_line),"nb_run=","");
+        nb_run_line = grep(option_mat,"/nb_run\s*=\s*/", "r");
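+        // matches both "nb_run=500" and "nb_run = 500"; blanks are stripped below before removing the "nb_run=" prefix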
+ if ~isempty(nb_run_line) then
+ nb_run_override = %t;
+ stripped_option = option_mat(nb_run_line);
+ idx_nonblank = strindex(stripped_option, "/[^ \t\b]/", "r");
+ stripped_option = part(stripped_option, idx_nonblank);
+ nb_run = strsubst(stripped_option, "nb_run=","");
end
end
// Test launch
// =======================================================
+ // Protect user modes during tests
+ saved_modes = mode();
+ saved_ieee = ieee();
+ saved_format = format();
+ saved_warning = warning("query");
+ saved_funcprot = funcprot();
+
printf("\n");
xml_str = [ xml_str ; "<benchmarks>" ];
printf(" For Loop (as reference) ........................... %4.2f ms [ 1000000 x]\n\n",boucle_for_time);
+ // Creation of return values the size of test_count
+
for i=1:test_count
// Display
printf(" ");
// Bench process
- [returned_time,nb_run_done] = bench_run_onebench(test_list(i,1),test_list(i,2),nb_run);
+ [returned_time, nb_run_done] = bench_run_onebench(test_list(i,1), test_list(i,2), nb_run);
+
+ // restore user modes inside the loop
+ // Protects from tests that modify those settings
+ mode(saved_modes);
+ ieee(saved_ieee);
+ format(saved_format([2 1]));
+ warning(saved_warning);
+ funcprot(saved_funcprot);
+
+ elapsed_time = [elapsed_time; returned_time];
+ nb_iterations = [nb_iterations; nb_run_done];
// Display
returned_time_str = sprintf("%4.2f ms",returned_time);
" </bench>" ];
end
-
end
+ modutests_names = test_list;
+ nb_iterations = eval(nb_iterations);
+
// XML management
+ // ==============
+ // exportToFile can be
+ // * "", "[]" or []: default behaviour, write the output file in the TMPDIR/benchmarks
+    // * path/to/directory/: export a timestamped xml file to that directory
+    // * path/to/directory/filename.xml: export filename.xml to that directory
+ // get the current date to create a timestamp
+
+ // Close the final tag for export
xml_str = [ xml_str ; "</benchmarks>" ];
- xml_file_name = SCI+"/bench_"+getversion()+"_"+date()+".xml";
- ierr = execstr("fd_xml = mopen(xml_file_name,''wt'');","errcatch");
- if ierr == 999 then
- xml_file_name = SCIHOME + "/bench_" + getversion() + "_" + date() +".xml";
- ierr = execstr("fd_xml = mopen(xml_file_name,''wt'');","errcatch");
+ if size(unique(modutests_names(:,1)), "r") == 1
+ module_name = tokens(pathconvert(modutests_names(1, 1), %f, %f, "u"), "/"); // name of the only module tested
+ module_name = module_name($);
+ else
+ module_name = "";
end
+ if (rhs == 4)
+ exportToFile = varargin(4);
+ if (isempty(exportToFile) | exportToFile == "[]")
+ exportToFile = "";
+ end
+ else
+ exportToFile = "";
+ end
+    [xml_file_name, ierr, fd_xml] = bench_file_output_path(exportToFile, module_name);
+
if ierr == 0 then
mputl(xml_str, fd_xml);
mclose(fd_xml);
clearglobal test_count;
clearglobal boucle_for_time;
+
endfunction
//-----------------------------------------------------------------------------
endfunction
+function [bench_list] = bench_list_tests(module_mat)
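+    // Returns bench_list as an N-by-2 matrix of strings: column 1 is the module name
+    // (or directory path), column 2 the benchmark name,
+    // e.g. ["core", "trycatch"; "core", "opcode"] (illustrative test names)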
+
+ module_test_dir = [];
+ bench_list= [];
+ for i = 1:size(module_mat, "*")
+ if with_module(module_mat(i))
+ // module_mat(i) is a scilab module
+ module_test_dir = [module_test_dir; SCI+"/modules/"+module_mat(i)+"/tests/benchmarks"];
+ else
+ // module_mat(i) is a directory
+ module_test_dir = [module_test_dir; module_mat(i) + "/tests/benchmarks"];
+ end
+ test_mat = gsort(basename(listfiles(module_test_dir(i) + "/*.tst")),"lr","i");
+ bench_list = [bench_list; [repmat(module_mat(i), size(test_mat, "*"), 1), test_mat]];
+ end
+endfunction
+
//-----------------------------------------------------------------------------
// Pierre MARECHAL
// Scilab team
// => Run one test
//-----------------------------------------------------------------------------
-function [returned_time,nb_run_done] = bench_run_onebench(module,test,nb_run)
-
+function [returned_time,nb_run_done] = bench_run_onebench(module, test, nb_run)
+ // runs the benchmark for module
returned_time = 0;
- fullPath = SCI+"/modules/"+module+"/tests/benchmarks/"+test;
+ if with_module(module)
+ fullPath = SCI+"/modules/"+module+"/tests/benchmarks/"+test;
+ else
+ fullPath = module + "/tests/benchmarks/" + test;
+ end
tstfile = pathconvert(fullPath+".tst",%f,%f);
scefile = pathconvert(TMPDIR+"/"+test+".sce",%f,%f);
nb_run_done = nb_run;
- if check_nb_run_line <> [] then
+ if (check_nb_run_line <> [] & ~nb_run_override) then
nb_run_line = txt(check_nb_run_line);
nb_run_start = strindex(nb_run_line,"<-- BENCH NB RUN :") + length("<-- BENCH NB RUN :");
nb_run_end = strindex(nb_run_line,"-->") - 1;
line_end = grep(txt,"<-- BENCH END -->");
// Get the context and the bench
- context = txt([1:line_start-1]);
- bench = txt([line_start+1:line_end-1]);
+ // Take the whole file as bench if the tags are not found
+ if isempty(line_start) | isempty(line_end)
+ context = "";
+ bench = txt;
+        after = "";
+ else
+ context = txt([1:line_start-1]);
+ bench = txt([line_start+1:line_end-1]);
+ after = txt([line_end:$]);
+ end
// Remove blank lines
context(find(context == "" )) = [];
bench;
"end";
"timing = toc();";
+ after;
"returned_time = timing * 1000;"]
mputl(tst_str,scefile);
example = [ sprintf("Examples :\n\n") ];
example = [ example ; sprintf("// Launch all tests\n") ];
- example = [ example ; sprintf("bench_run();\n") ];
- example = [ example ; sprintf("bench_run([]);\n") ];
- example = [ example ; sprintf("bench_run([],[]);\n") ];
+ example = [ example ; sprintf("// This may take some time...\n") ];
+ example = [ example ; sprintf("// bench_run();\n") ];
+ example = [ example ; sprintf("// bench_run([]);\n") ];
+ example = [ example ; sprintf("// bench_run([],[]);\n") ];
example = [ example ; "" ];
example = [ example ; sprintf("// Test one or several module\n") ];
example = [ example ; sprintf("bench_run(''core'');\n") ];
example = [ example ; sprintf("// With options\n") ];
example = [ example ; sprintf("bench_run([],[],''list'');\n") ];
example = [ example ; sprintf("bench_run([],[],''help'');\n") ];
- example = [ example ; sprintf("bench_run([],[],''nb_run=2000'');\n") ];
+ example = [ example ; sprintf("bench_run(""string"",[],''nb_run=100'');\n") ];
+ example = [ example ; sprintf("// results in an output file in the local directory\n") ];
+ example = [ example ; sprintf("bench_run(""string"",[],''nb_run=100'', ""my_output_file.xml"");\n") ];
+    example = [ example ; sprintf("// results in an output directory, TMPDIR/benchmarks/ is the default\n") ];
+ example = [ example ; sprintf("bench_run(""string"",[],''nb_run=100'', TMPDIR);\n") ];
example = [ example ; "" ];
endfunction
+
+function bench_add_dir(directory)
+ // Scans directory for tests/benchmarks and add the benchmarks
+ module_test_dir = directory + "/tests/benchmarks";
+ test_mat = gsort(basename(listfiles(module_test_dir+"/*.tst")),"lr","i");
+
+ nl = size(test_mat,"*");
+ for i=1:nl
+ bench_add_onebench(directory, test_mat(i));
+ end
+endfunction
+
+function [xml_file_name, ierr, fd_xml] = bench_file_output_path(exportPath, module_name)
+ if exportPath == ""
+ // Default for export is TMPDIR/benchmarks/
+ exportPath = TMPDIR + "/benchmarks";
+ if ~isdir(exportPath)
+ createdir(exportPath);
+ end
+ end
+
+ // Create timestamp and scilab short version
+ current_date = getdate();
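+    // getdate() fields used: 1=year, 2=month, 6=day of month, 7=hours, 8=minutes, 9=seconds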
+ current_date = msprintf("%d-%02d-%02d_%02d%02d%02d", current_date(1), current_date(2), current_date(6), current_date(7), current_date(8), current_date(9));
+ sciversion = getversion("scilab");
+ sciversion = string(sciversion);
+ sciversion = sciversion(1) + "." + sciversion(2) + "." + sciversion(3);
+
+ // Manage a single module name separation
+ if (module_name <> "")
+ module_name_sep = module_name + "_";
+ else
+ module_name_sep = "";
+ end
+
+ if isdir(exportPath)
+ // The exportPath is a directory
+        // build the output file name inside this directory
+ xml_file_name = exportPath + "/bench_" + module_name_sep + sciversion + "_" + current_date +".xml";
+ ierr = execstr("fd_xml = mopen(xml_file_name,''wt'');","errcatch");
+ else
+ // The exportPath is not a directory
+ xml_file_name = exportPath;
+ ierr = execstr("fd_xml = mopen(xml_file_name,''wt'');","errcatch");
+ end
+ if ierr <> 0 then
+ [xml_file_alt, ierr, fd_xml] = bench_file_output_path("", module_name);
+ msg = msprintf(_("%s: Cannot create file %s, created file %s instead.\n"), "bench_run", fullpath(xml_file_name), strsubst(fullpath(xml_file_alt), TMPDIR, "TMPDIR"));
+ warning(msg);
+ end
+endfunction
// Benchmark for fft function
//==============================================================================
+// <-- BENCH NB RUN : 100 -->
a = 0; b = 0;
a = rand(800000, 1, "n");
+++ /dev/null
-// ============================================================================
-// Scilab ( http://www.scilab.org/ ) - This file is part of Scilab
-// Copyright (C) 2009 - DIGITEO
-//
-// This file is distributed under the same license as the Scilab package.
-// ============================================================================
-
-//==============================================================================
-// Benchmark for launching editor with a binary file
-//==============================================================================
-
-editor('SCI/modules/core/macros/add_demo.bin')
// On retranche 1 si la valeur est inferieur à 0
- mask = (temp <= 0);
- Year(mask) = Year(mask)-1;
+ mask = find(temp <= 0);
+ if ~isempty(mask)
+ Year(mask) = Year(mask)-1;
- N(mask) = N(mask) - (365.0*Year(mask) + ceil(0.25*Year(mask)) - ceil(0.01*Year(mask)) + ceil(0.0025*Year(mask)));
- N(~mask) = temp(~mask);
+ N(mask) = N(mask) - (365.0*Year(mask) + ceil(0.25*Year(mask)) - ceil(0.01*Year(mask)) + ceil(0.0025*Year(mask)));
+ N(~mask) = temp(~mask);
+ else
+ N = temp;
+ end
// ... and the month
// =========================================================================
// construction de la matrice
month_day_mat = ones(nr,nc);
+ idx_leap_year = isLeapYear(Year);
- month_day_mat(isLeapYear(Year)) = leap_year(Month(isLeapYear(Year))+1);
- month_day_mat(~isLeapYear(Year)) = common_year(Month(~isLeapYear(Year))+1);
+ if ~isempty(Month(idx_leap_year))
+ month_day_mat(idx_leap_year) = leap_year(Month(idx_leap_year)+1);
+ end
+ if ~isempty(Month(~idx_leap_year))
+ month_day_mat(~idx_leap_year) = common_year(Month(~idx_leap_year)+1);
+ end
Month( N>month_day_mat ) = Month( N>month_day_mat )+1;
Day = ones(nr,nc);
- month_day_mat(isLeapYear(Year)) = leap_year(Month(isLeapYear(Year)));
- month_day_mat(~isLeapYear(Year)) = common_year(Month(~isLeapYear(Year)));
+ if ~isempty(Month(idx_leap_year))
+ month_day_mat(idx_leap_year) = leap_year(Month(idx_leap_year));
+ end
+ if ~isempty(Month(~idx_leap_year))
+ month_day_mat(~idx_leap_year) = common_year(Month(~idx_leap_year));
+ end
Day = N - month_day_mat;
Y = floor(D/365.2425);
temp = D - (365.0*Y + ceil(0.25*Y)- ceil(0.01*Y) + ceil(0.0025*Y));
- mask = (temp <= 0);
- Y(mask) = Y(mask) - 1;
- D(mask) = D(mask) - (365.0*Y(mask) + ceil(0.25*Y(mask)) - ceil(0.01*Y(mask)) + ceil(0.0025*Y(mask)));
- D(~mask) = temp(~mask)
+ mask = find(temp <= 0);
+ if ~isempty(mask)
+ Y(mask) = Y(mask) - 1;
+ D(mask) = D(mask) - (365.0*Y(mask) + ceil(0.25*Y(mask)) - ceil(0.01*Y(mask)) + ceil(0.0025*Y(mask)));
+ D(~mask) = temp(~mask);
+ else
+ D = temp;
+ end
M = int(D/29);
+ idx_leap_year = isLeapYear(Y);
- month_day_mat(isLeapYear(Y)) = leap_year(M(isLeapYear(Y))+1);
- month_day_mat(~isLeapYear(Y)) = common_year(M(~isLeapYear(Y))+1);
+ if ~isempty(M(idx_leap_year))
+ month_day_mat(idx_leap_year) = leap_year(M(idx_leap_year)+1);
+ end
+ if ~isempty(M(~idx_leap_year))
+ month_day_mat(~idx_leap_year) = common_year(M(~idx_leap_year)+1);
+ end
M( D>month_day_mat ) = M( D>month_day_mat )+1;
- month_day_mat(isLeapYear(Y)) = leap_year(M(isLeapYear(Y)));
- month_day_mat(~isLeapYear(Y)) = common_year(M(~isLeapYear(Y)));
+ if ~isempty(M(idx_leap_year))
+ month_day_mat(idx_leap_year) = leap_year(M(idx_leap_year));
+ end
+ if ~isempty(M(~idx_leap_year))
+ month_day_mat(~idx_leap_year) = common_year(M(~idx_leap_year));
+ end
d = D - month_day_mat;