deal.II has a testsuite that, at the time this article was last
      updated (April 2015), contains some 3,600 small programs (growing by
      roughly one per day) that we run every time we make a change to make
      sure that no existing functionality is broken. The expected output for
      every test is stored in an *.output file, and when you run a
      test you are notified if it produces different output.
    
These days, every time we add a significant piece of functionality, we add at least one new test to the testsuite, and we also do so if we fix a bug, in both cases to make sure that future changes do not break this functionality (again). Machines running the testsuite submit the results to a webpage showing the status of our regression tests.
      The testsuite is part of the development sources of deal.II and located
      under the tests subdirectory. The easiest way to obtain
      both of them is to check out the current development sources via git:
$ git clone https://github.com/dealii/dealii
To enable the testsuite for a given build directory, ensure that deal.II is successfully configured and built (installation is not necessary). After that you can set up the testsuite via the "setup_tests" target:
$ make setup_tests
This will set up all tests supported by the current configuration. The testsuite can now be run in the current build directory as described below.
The setup can be fine-tuned using the following commands:
$ make prune_tests - removes all testsuite subprojects
      In addition, when setting up the testsuite, the following environment
      variables can be used to override default behavior when
      calling make setup_tests:
TEST_TIME_LIMIT
  - The time limit (in seconds) a single test is allowed to take. Defaults
    to 180 seconds
TEST_PICKUP_REGEX
  - A regular expression to select only a subset of tests during setup.
    An empty string is interpreted as a catchall (this is the default).
For example,
TEST_PICKUP_REGEX="umfpack" make setup_testswill only enable tests which match the string "umfpack" in category or name.
      The testsuite can also be set up for an already installed library
      (starting with version 8.3). For this, create a build directory for
      the testsuite and run cmake pointing to the tests
      subdirectory, e.g.,
$ mkdir tests_for_installed_dealii
$ cd tests_for_installed_dealii
$ cmake -DDEAL_II_DIR=/path/to/installed/dealii /path/to/dealii_source/tests
After that the same configuration targets as described above are available.
The testsuite can now be run in the build directory via
$ ctest [-j N]
Here, N is the number of concurrent tests that should be
      run, in the same way as you can say make -jN. The testsuite
      is huge and will need around 12 hours on current computers
      when run single threaded.
    
    If you only want to run a subset of tests matching a regular expression, or if you want to exclude tests matching a regular expression, you can use
$ ctest [-j N] -R '<positive regular expression>'
$ ctest [-j N] -E '<negative regular expression>'
      Note:
      Not all tests succeed on every machine even if all computations are
      correct, because your machine may generate slightly different floating
      point output. To increase the number of tests that pass, install the
      numdiff tool, which compares stored and newly created output files
      based on floating point tolerances. To use it, simply make sure that
      the directory containing the numdiff executable is part of the
      PATH environment variable when running
      make setup_tests.
    
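For example, if numdiff is installed in a non-standard location (the path below is only illustrative), you could do:
$ export PATH=/path/to/numdiff/bin:$PATH
$ make setup_tests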
      A typical output of a ctest invocation looks like:
$ ctest -j4 -R "base/thread_validity"
Test project /tmp/trunk/build
      Start 747: base/thread_validity_01.debug
      Start 748: base/thread_validity_01.release
      Start 775: base/thread_validity_05.debug
      Start 776: base/thread_validity_05.release
 1/24 Test #776: base/thread_validity_05.release ...   Passed    1.89 sec
 2/24 Test #748: base/thread_validity_01.release ...   Passed    1.89 sec
      Start 839: base/thread_validity_03.debug
      Start 840: base/thread_validity_03.release
 3/24 Test #747: base/thread_validity_01.debug .....   Passed    2.68 sec
[...]
      Start 1077: base/thread_validity_08.debug
      Start 1078: base/thread_validity_08.release
16/24 Test #1078: base/thread_validity_08.release ...***Failed    2.86 sec
18/24 Test #1077: base/thread_validity_08.debug .....***Failed    3.97 sec
[...]
92% tests passed, 2 tests failed out of 24
Total Test time (real) =  20.43 sec
The following tests FAILED:
        1077 - base/thread_validity_08.debug (Failed)
        1078 - base/thread_validity_08.release (Failed)
Errors while running CTest
      If a test failed (like base/thread_validity_08.debug in the
      example output above), you might want to find out what exactly went wrong. To
      this end, you can search
      through Testing/Temporary/LastTest.log for the exact output
      of the test, or you can rerun this one test, specifying -V
      to select verbose output of tests:
$ ctest -V -R "base/thread_validity_08.debug"
[...]
test 1077
    Start 1077: base/thread_validity_08.debug
1077: Test command: [...]
1077: Test timeout computed to be: 600
1077: Test base/thread_validity_08.debug: RUN
1077: ===============================   OUTPUT BEGIN  ===============================
1077: Built target thread_validity_08.debug
1077: Generating thread_validity_08.debug/output
1077: terminate called without an active exception
1077: /bin/sh: line 1: 18030 Aborted [...]/thread_validity_08.debug
1077: base/thread_validity_08.debug: BUILD successful.
1077: base/thread_validity_08.debug: RUN failed. Output:
1077: DEAL::OK.
1077: gmake[3]: *** [thread_validity_08.debug/output] Error 1
1077: gmake[2]: *** [CMakeFiles/thread_validity_08.debug.diff.dir/all] Error 2
1077: gmake[1]: *** [CMakeFiles/thread_validity_08.debug.diff.dir/rule] Error 2
1077: gmake: *** [thread_validity_08.debug.diff] Error 2
1077:
1077:
1077: base/thread_validity_08.debug: ******    RUN failed    *******
1077:
1077: ===============================    OUTPUT END   ===============================
      So this specific test aborted in the RUN stage.
    
    
      The general output for a successful test <test> in
      category <category> for build type
      <build> is
xx: Test <category>/<test>.<build>: PASSED
xx: ===============================   OUTPUT BEGIN  ===============================
xx: [...]
xx: <category>/<test>.<build>: PASSED.
xx: ===============================    OUTPUT END   ===============================
And for a test that fails in stage <stage>:
xx: Test <category>/<test>.<build>: <stage>
xx: ===============================   OUTPUT BEGIN  ===============================
xx: [...]
xx: <category>/<test>.<build>: <stage> failed. [...]
xx:
xx: <category>/<test>.<build>: ******    <stage> failed    *******
xx: ===============================    OUTPUT END   ===============================
      Here, <stage> indicates the stage in which the
      test failed:
        CONFIGURE: only for tests in the "build_tests"
            category: the test project failed in the configuration stage
        BUILD: a compilation error occurred
        RUN: the test executable could not be run / aborted
        DIFF: the test output differs from the reference output
        PASSED: the test ran successfully

The testsuite can also be used to provide coverage information, i.e., data that shows which lines of the library are executed how many times by running through all of the tests in the testsuite. This is of interest in finding places in the library that are not covered by the testsuite and, consequently, are prone to the inadvertent introduction of bugs since existing functionality is not subject to existing tests.
To run the testsuite in this mode you essentially have to do three things: configure the library with coverage support, build the library and run the tests as usual, and finally gather and submit the profiling data. For the first step, configure with
cmake -DCMAKE_BUILD_TYPE=Debug -DDEAL_II_SETUP_COVERAGE=ON <...>
You can then build the library and run the tests as usual.
For the last point, one can in principle use whatever tool one wants. That said, the deal.II ctest driver already has built-in functionality to gather all profiling files and submit them to CDash, where we already gather testsuite results (see below). You can do so by invoking
ctest -DCOVERAGE=ON <...> -S ../tests/run_testsuite.cmake
when running the testsuite, or directly by
ctest <...> -S ../tests/run_coverage.cmake
At the end of all of this, results will be shown in a separate section "Coverage" on the deal.II CDash site.
The following outlines what you need to know if you want to understand how the testsuite actually works, for example because you may want to add tests along with the functionality you are currently developing.
      A test usually consists of a source file and an output file for
      comparison (under the testsuite directory tests):
category/test.cc
category/test.output
category will be one of the existing subdirectories
      under tests/, e.g., lac/, base/,
      or mpi/. Historically, we have grouped tests into the
      directories base/, lac/, and deal.II/ depending on their
      functionality, and bits/ if they were small unit tests, but
      in practice we have not always followed this rigidly. There are also
      more specialized directories such as trilinos/, petsc/,
      serialization/, and mpi/, whose meaning is more obvious.
      test.cc must be a regular executable (i.e., it must have an
      int main() routine). It will be compiled, linked, and
      run. The executable should not output anything to cout
      (at least under normal circumstances, i.e., when no error condition
      occurs); instead, it should write its output to a file named
      output in the current working directory. In practice, we rarely write
      source files completely from scratch, but we find an existing test that
      already does something similar and copy/modify it to fit our needs.
    
    
      For a normal test, ctest will typically run the following three
      stages:

        BUILD: The build stage generates an executable in
          BUILD_DIR/tests/<category>/<test>.
        RUN: The run stage then invokes the executable in
          the directory where it is located. By convention, each test
          either prints its test results directly to stdout,
          or writes them into a file called output (in the
          current working directory). The latter takes precedence.
          The output (via stdout or file) will then be located in
          BUILD_DIR/tests/<category>/<test>/output.
          If the run fails (e.g. because the program aborts with an error
          code), the file output is renamed to
          failing_output.
        DIFF: As a last stage the generated output file will
          be compared to
          SOURCE_DIR/tests/<category>/<test>[...].output,
          and the result of the comparison is stored in
          BUILD_DIR/tests/<category>/<test>/diff.
          If the diff fails, the file diff is renamed to
          failing_diff.
      Comparison files can actually be named in a more complex way than
      just category/test.output. In pseudo code:
category/test.[with_<string>(<=|>=|=|<|>)<on|off|version>.]*
              [mpirun=<x>.][expect=<y>.][binary.][<debug|release>.]output
      Normally, a test will be set up so that it runs twice, once in debug and
      once in release configuration.
      If a specific test can only be run in debug or release configuration but
      not in both, it is possible to restrict the setup by inserting
      .debug or .release directly before
      .output, e.g.:
category/test.debug.output
This way, the test will only be set up to build and run against the debug library. If a test should run in both configurations but, for some reason, produces different output (e.g., because it triggers an assertion in debug mode), then you can just provide two different output files:
category/test.debug.output
category/test.release.output
In a similar vein to build configurations, it is possible to restrict tests to specific feature configurations, e.g.,
category/test.with_umfpack=on.output, or
category/test.with_zlib=off.output
These tests will only be set up if the specified feature was configured. It is possible to provide different output files for disabled/enabled features, e.g.,
category/test.with_64bit_indices=on.output
category/test.with_64bit_indices=off.output
Furthermore, a test can be restricted to be run only if specific versions of a feature are available. For example
category/test.with_trilinos.geq.11.14.1.output
will only be run if (a) Trilinos is available, i.e.,
      DEAL_II_WITH_TRILINOS=TRUE, and (b) if Trilinos is at least
      of version 11.14.1. Supported operators are
      =, .le., .ge., .leq., and
      .geq..
    
    It is also possible to declare multiple constraints subsequently, e.g.
category/test.with_umfpack=on.with_zlib=on.output
      Note: The tests in some subdirectories of tests/ are
      automatically run only if some feature is enabled. In this case a
      feature constraint encoded in the output file name is redundant and
      should be avoided. In particular, this holds for the subdirectories
      distributed_grids, lapack,
      metis, petsc, slepc,
      trilinos, umfpack, gla, and
      mpi.
    
      If a test should be run with MPI in parallel, the number of MPI
      processes N with which a program needs to be run for
      comparison with a given output file is specified as follows:
category/test.mpirun=N.output
It is quite typical for an MPI-enabled test to have multiple output files for different numbers of MPI processes.
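For example, a test that is meant to be compared against reference output for two and four MPI processes might provide the following files (the names are only illustrative):
category/test.mpirun=2.output
category/test.mpirun=4.output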
      If a test produces binary output, add binary to the
      output file name to indicate this:
category/test.binary.output
The testsuite ensures that a diff tool suitable for comparing binary output files is used instead of the default diff tool, which (as in the case of
numdiff) might be unable to compare binary
      files.
    
    
    
      Sometimes it is necessary to provide multiple comparison files for a
      single test, for example because you want to test code on multiple
      platforms that produce different output files that, nonetheless, all
      should be considered correct. An example would be tests that use the
      rand() function that is implemented differently on
      different platforms. Additional comparison files have the same path
      as the main comparison file (in this case test.output)
      followed by a dot and a variant description:
category/test.output
category/test.output.2
category/test.output.3
category/test.output.4
The testsuite will try to match the output against all variants in alphabetical order starting with the main output file.
Warning: This mechanism is only meant as a last resort for tests where no alternative approach is viable.
      Note: The main comparison file (i.e., the one ending in
      output) is mandatory. Otherwise, no test will be
      configured.
    
      Normally a test is considered to be successful if all test stages
      could be run and the test reached the PASSED stage (see
      the output description section for details).
      If (for some reason) the test should succeed by ending at a specific
      test stage other than PASSED, you can specify it via
      expect=<stage>, e.g.:
category/test.expect=run.output
We typically add one or more new tests every time we add new functionality to the library or fix a bug. If you want to contribute code to the library, you should do this as well. Here's how: you need a testcase and a file with the expected output.
For the testcase, we usually start from one of the existing tests, copy and modify it to where it does what we'd like to test. Alternatively, you can also start from a template like this:
// ---------------------------------------------------------------------
//
// Copyright (C) 2015 - 2017 by the deal.II Authors
//
// This file is part of the deal.II library.
//
// The deal.II library is free software; you can use it, redistribute
// it, and/or modify it under the terms of the GNU Lesser General
// Public License as published by the Free Software Foundation; either
// version 2.1 of the License, or (at your option) any later version.
// The full text of the license can be found in the file LICENSE at
// the top level of the deal.II distribution.
//
// ---------------------------------------------------------------------
// a short (a few lines) description of what the program does
#include "../tests.h"
// all include files you need here
int main ()
{
  // Initialize deallog for test output.
  // This also reroutes deallog output to a file "output".
  initlog();
  // your testcode here:
  int i = 0;
  deallog << i << std::endl;
  return 0;
}
    This code opens an output file output in the current working
    directory and then writes all output you generate to it, through the
    deallog stream. The deallog stream works like
    any other std::ostream except that it does a few more
    things behind the scenes that are helpful in this context. In the
    above case, we only write a zero to the output file. Most tests of course
    write computed data to the output file to make sure that whatever we
    compute is what we got when the test was first written.
    
    There are a number of directories where you can put a new test.
    Extensive tests of individual classes or groups of classes
    have traditionally gone into the base/,
    lac/, deal.II/, fe/,
    hp/, or multigrid/ directories, depending on
    where the classes that are tested are located. More atomic tests often go
    into bits/. There are also
    directories for PETSc and Trilinos wrapper functionality.
    
In order to run your new test, copy it to an appropriate category and create an empty comparison file for it:
category/my_new_test.cc
category/my_new_test.output
Now, rerun
$ make setup_tests
so that your new test is picked up. After that it is possible to invoke it with
$ ctest -V -R "category/my_new_test"
      If you run your new test executable this way, the test should compile
      and run successfully but fail in the diff stage (because of the empty
      comparison file). You will get an output file
      BUILD_DIR/tests/category/my_new_test/output. Take a look at it to
      make sure that the output is what you had expected. (For complex tests,
      it may sometimes be impossible to say whether the output is correct, and
      in this case we sometimes just take it to make
      sure that future invocations of the test yield the same results.)
    
The next step is to copy and rename this output file to the source directory and replace the original comparison file with it:
category/my_new_test.output
At this point running the test again should be successful:
$ ctest -V -R "category/my_new_test"
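For reference, and assuming the build and source layout described earlier, the copy-and-rename step above might look like the following; the exact paths depend on your setup and are only illustrative:
$ cp BUILD_DIR/tests/category/my_new_test/output \
     SOURCE_DIR/tests/category/my_new_test.output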
      If you want to create a new category in the testsuite, create a new
      folder under the tests/ directory and add a
      CMakeLists.txt file to it containing
CMAKE_MINIMUM_REQUIRED(VERSION 2.8.8)
INCLUDE(../setup_testsubproject.cmake)
PROJECT(testsuite CXX)
INCLUDE(${DEAL_II_TARGET_CONFIG})
DEAL_II_PICKUP_TESTS()
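Putting it together, a new category could be set up roughly like this; the name my_category is only illustrative:
$ mkdir /path/to/dealii_source/tests/my_category
# add the CMakeLists.txt shown above and your tests to the new folder,
# then re-run, in the build directory:
$ make setup_tests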
    
    
      To submit test results to our CDash
      instance, just invoke ctest within a build directory (or designated
      build directory) with the -S option pointing to the
      run_testsuite.cmake script:
$ ctest [...] -V -S ../tests/run_testsuite.cmake
The script will run configure, build and ctest and submit the results to the CDash server. It does not matter whether the configure, build or ctest stages were run before that. Also in script mode, you can specify the same options for ctest as explained above.
    
    It is possible to run tests and submit results for an already installed library by
mkdir build && cd build
cp $DEAL_II_SOURCE_DIR/CTestConfig.cmake .
ctest \
  -DCTEST_SOURCE_DIRECTORY=$DEAL_II_SOURCE_DIR/tests \
  -DDEAL_II_DIR=$DEAL_II_DIR \
  [...] -S $DEAL_II_SOURCE_DIR/tests/run_testsuite.cmake -V
      Note: The default output in script mode is very minimal.
      Therefore, it is recommended to specify -V which will
      give the same level of verbosity as the non-script mode.
    
Note: The following variables can be set via
ctest -D<variable>=<value> [...]
to control the behaviour of the run_testsuite.cmake
      script:
CTEST_SOURCE_DIRECTORY
  - The source directory of deal.II
  - If unspecified, "../deal.II" and "../../" relative to the location
    of this script are tried. If this is not a source directory, an error
    is thrown.
CTEST_BINARY_DIRECTORY
  - The designated build directory (already configured, empty, or
    non-existent; see the information about TRACK below for what will happen)
  - If unspecified the current directory is used. If the current
    directory is equal to CTEST_SOURCE_DIRECTORY or the "tests"
    directory, an error is thrown.
CTEST_CMAKE_GENERATOR
  - The CMake Generator to use (e.g. "Unix Makefiles", or "Ninja", see
    $ man cmake)
  - If unspecified the current generator of a configured build directory
    will be used, otherwise "Unix Makefiles".
TRACK
  - The track the test should be submitted to. Defaults to "Experimental".
    Possible values are:
    "Experimental"     - all tests that are not specifically "build" or
                         "regression" tests should go into this track
    "Build Tests"      - Build tests that configure and build in a
                         clean directory and run the build tests
                         "build_tests/*"
    "Nightly"          - Reserved for nightly regression tests for
                         build bots on various architectures
    "Regression Tests" - Reserved for the regression tester
CONFIG_FILE
  - A configuration file (see docs/users/config.sample)
    that will be used during the configuration stage (invokes
    $ cmake -C ${CONFIG_FILE}). This only has an effect if
    CTEST_BINARY_DIRECTORY is empty.
DESCRIPTION
  - A string that is appended to CTEST_BUILD_NAME
COVERAGE
  - If set to ON deal.II will be configured with
    DEAL_II_SETUP_COVERAGE=ON, CMAKE_BUILD_TYPE=Debug and the
    CTEST_COVERAGE() stage will be run. Test results must go into the
    "Experimental" section.
MAKEOPTS
  - Additional options that will be passed directly to make (or ninja).
      Furthermore, the environment variables described above (such as
      TEST_TIME_LIMIT and TEST_PICKUP_REGEX) can also be
      set and will be automatically handed down to cmake.
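For example, a submission that sets some of these variables might be invoked as follows; the values shown are only illustrative:
$ ctest -DDESCRIPTION="gcc-mpi" -DMAKEOPTS="-j8" -V -S ../tests/run_testsuite.cmake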
    
    
    Build tests are used to check that deal.II can be compiled on different systems and with different compilers as well as different configuration options. Results are collected in the "Build Tests" track in CDash.
Running the build test suite is simple and we encourage deal.II
      users with configurations not found on the CDash page to
      participate. Assuming you checked out deal.II into the directory
      dealii, running it is as simple as:
mkdir dealii/build
cd dealii/build
ctest -j4 -S ../tests/run_buildtest.cmake
      What this does is compile and build deal.II in the directory
      build, try to configure and build (and run a subset of) all
      tutorial programs supported by the current configuration, and send the
      results to the CDash instance.
    
Note: Build tests require the designated build directory to be completely empty. If you want to specify a build configuration for cmake, use a configuration file to pre-seed the cache as explained above:
$ ctest -DCONFIG_FILE="[...]/config.sample" [...]
Build tests work best if they run automatically and periodically. There is a detailed example for such dedicated build tests on the wiki.