<a id="top"></a>
# Command line

**Contents**<br>
[Specifying which tests to run](#specifying-which-tests-to-run)<br>
[Choosing a reporter to use](#choosing-a-reporter-to-use)<br>
[Breaking into the debugger](#breaking-into-the-debugger)<br>
[Showing results for successful tests](#showing-results-for-successful-tests)<br>
[Aborting after a certain number of failures](#aborting-after-a-certain-number-of-failures)<br>
[Listing available tests, tags or reporters](#listing-available-tests-tags-or-reporters)<br>
[Sending output to a file](#sending-output-to-a-file)<br>
[Naming a test run](#naming-a-test-run)<br>
[Eliding assertions expected to throw](#eliding-assertions-expected-to-throw)<br>
[Make whitespace visible](#make-whitespace-visible)<br>
[Warnings](#warnings)<br>
[Reporting timings](#reporting-timings)<br>
[Load test names to run from a file](#load-test-names-to-run-from-a-file)<br>
[Specify the order test cases are run](#specify-the-order-test-cases-are-run)<br>
[Specify a seed for the Random Number Generator](#specify-a-seed-for-the-random-number-generator)<br>
[Identify framework and version according to the libIdentify standard](#identify-framework-and-version-according-to-the-libidentify-standard)<br>
[Wait for key before continuing](#wait-for-key-before-continuing)<br>
[Skip all benchmarks](#skip-all-benchmarks)<br>
[Specify the number of benchmark samples to collect](#specify-the-number-of-benchmark-samples-to-collect)<br>
[Specify the number of resamples for bootstrapping](#specify-the-number-of-resamples-for-bootstrapping)<br>
[Specify the confidence-interval for bootstrapping](#specify-the-confidence-interval-for-bootstrapping)<br>
[Disable statistical analysis of collected benchmark samples](#disable-statistical-analysis-of-collected-benchmark-samples)<br>
[Specify the amount of time in milliseconds spent on warming up each test](#specify-the-amount-of-time-in-milliseconds-spent-on-warming-up-each-test)<br>
[Usage](#usage)<br>
[Specify the section to run](#specify-the-section-to-run)<br>
[Filenames as tags](#filenames-as-tags)<br>
[Override output colouring](#override-output-colouring)<br>
[Test Sharding](#test-sharding)<br>
[Allow running the binary without tests](#allow-running-the-binary-without-tests)<br>
[Output verbosity](#output-verbosity)<br>

Catch works quite nicely without any command line options at all - but for those times when you want greater control the following options are available.
Click one of the following links to take you straight to that option - or scroll on to browse the available options.

<a href="#specifying-which-tests-to-run">               `    <test-spec> ...`</a><br />
<a href="#usage">                                       `    -h, -?, --help`</a><br />
<a href="#showing-results-for-successful-tests">        `    -s, --success`</a><br />
<a href="#breaking-into-the-debugger">                  `    -b, --break`</a><br />
<a href="#eliding-assertions-expected-to-throw">        `    -e, --nothrow`</a><br />
<a href="#invisibles">                                  `    -i, --invisibles`</a><br />
<a href="#sending-output-to-a-file">                    `    -o, --out`</a><br />
<a href="#choosing-a-reporter-to-use">                  `    -r, --reporter`</a><br />
<a href="#naming-a-test-run">                           `    -n, --name`</a><br />
<a href="#aborting-after-a-certain-number-of-failures"> `    -a, --abort`</a><br />
<a href="#aborting-after-a-certain-number-of-failures"> `    -x, --abortx`</a><br />
<a href="#warnings">                                    `    -w, --warn`</a><br />
<a href="#reporting-timings">                           `    -d, --durations`</a><br />
<a href="#input-file">                                  `    -f, --input-file`</a><br />
<a href="#run-section">                                 `    -c, --section`</a><br />
<a href="#filenames-as-tags">                           `    -#, --filenames-as-tags`</a><br />


<br />

<a href="#listing-available-tests-tags-or-reporters">   `    --list-tests`</a><br />
<a href="#listing-available-tests-tags-or-reporters">   `    --list-tags`</a><br />
<a href="#listing-available-tests-tags-or-reporters">   `    --list-reporters`</a><br />
<a href="#listing-available-tests-tags-or-reporters">   `    --list-listeners`</a><br />
<a href="#order">                                       `    --order`</a><br />
<a href="#rng-seed">                                    `    --rng-seed`</a><br />
<a href="#libidentify">                                 `    --libidentify`</a><br />
<a href="#wait-for-keypress">                           `    --wait-for-keypress`</a><br />
<a href="#skip-benchmarks">                             `    --skip-benchmarks`</a><br />
<a href="#benchmark-samples">                           `    --benchmark-samples`</a><br />
<a href="#benchmark-resamples">                         `    --benchmark-resamples`</a><br />
<a href="#benchmark-confidence-interval">               `    --benchmark-confidence-interval`</a><br />
<a href="#benchmark-no-analysis">                       `    --benchmark-no-analysis`</a><br />
<a href="#benchmark-warmup-time">                       `    --benchmark-warmup-time`</a><br />
<a href="#colour-mode">                                 `    --colour-mode`</a><br />
<a href="#test-sharding">                               `    --shard-count`</a><br />
<a href="#test-sharding">                               `    --shard-index`</a><br />
<a href="#no-tests-override">                           `    --allow-running-no-tests`</a><br />
<a href="#output-verbosity">                            `    --verbosity`</a><br />

<br />


<a id="specifying-which-tests-to-run"></a>
## Specifying which tests to run

<pre>&lt;test-spec> ...</pre>

By providing a test spec, you filter which tests will be run. If you call
Catch2 without any test spec, then it will run all non-hidden test
cases. A test case is hidden if it has the `[!benchmark]` tag, or any tag
starting with a dot, e.g. `[.]` or `[.foo]`.

There are three basic test specs that can then be combined into more
complex specs:

  * Full test name, e.g. `"Test 1"`.

    This allows only test cases whose name is "Test 1".

  * Wildcarded test name, e.g. `"*Test"`, or `"Test*"`, or `"*Test*"`.

    This allows any test case whose name ends with, starts with, or contains
    in the middle the string "Test". Note that the wildcard can only be at
    the start or end.

  * Tag name, e.g. `[some-tag]`.

    This allows any test case tagged with "[some-tag]". Remember that some
    tags are special, e.g. those that start with "." or with "!".


You can also combine the basic test specs to create more complex test
specs. You can:

  * Concatenate specs to apply all of them, e.g. `[some-tag][other-tag]`.

    This allows test cases that are tagged with **both** "[some-tag]" **and**
    "[other-tag]". A test case with just "[some-tag]" will not pass the filter,
    nor will a test case with just "[other-tag]".

  * Comma-join specs to apply any of them, e.g. `[some-tag],[other-tag]`.

    This allows test cases that are tagged with **either** "[some-tag]" **or**
    "[other-tag]". A test case with both will obviously also pass the filter.

    Note that commas take precedence over simple concatenation. This means
    that `[a][b],[c]` accepts tests that are tagged with either both "[a]" and
    "[b]", or tests that are tagged with just "[c]".

  * Negate the spec by prepending it with `~`, e.g. `~[some-tag]`.

    This rejects any test case that is tagged with "[some-tag]". Note that
    rejection takes precedence over other filters.

    Note that a negation always binds to the following _basic_ test spec.
    This means that `~[foo][bar]` negates only the "[foo]" tag and not the
    "[bar]" tag.

Note that when Catch2 is deciding whether to include a test, it first
checks whether the test matches any negative filters. If it does,
the test is rejected. After that, the behaviour depends on whether there
are positive filters as well. If there are no positive filters, all
remaining non-hidden tests are included. If there are positive filters,
only tests that match the positive filters are included.

You can also match test names with special characters by escaping them
with a backslash (`"\"`), e.g. a test named `"Do A, then B"` is matched
by the "Do A\, then B" test spec. Backslash also escapes itself.


### Examples

Given these TEST_CASEs,
```
TEST_CASE("Test 1") {}

TEST_CASE("Test 2", "[.foo]") {}

TEST_CASE("Test 3", "[.bar]") {}

TEST_CASE("Test 4", "[.][foo][bar]") {}
```

this is the result of these filters:
```
./tests                      # Selects only the first test, others are hidden
./tests "Test 1"             # Selects only the first test, others do not match
./tests ~"Test 1"            # Selects no tests. Test 1 is rejected, other tests are hidden
./tests "Test *"             # Selects all tests.
./tests [bar]                # Selects tests 3 and 4. Other tests are not tagged [bar]
./tests ~[foo]               # Selects test 1, because it is the only non-hidden test without the [foo] tag
./tests [foo][bar]           # Selects test 4.
./tests [foo],[bar]          # Selects tests 2, 3, 4.
./tests ~[foo][bar]          # Selects test 3. 2 and 4 are rejected due to having the [foo] tag
./tests ~"Test 2"[foo]       # Selects test 4, because test 2 is explicitly rejected
./tests [foo][bar],"Test 1"  # Selects tests 1 and 4.
./tests "Test 1*"            # Selects test 1, wildcard can match zero characters
```

_Note: Using a plain asterisk on a command line can cause issues with shell
expansion. Make sure that the asterisk is passed to Catch2 and is not
interpreted by the shell._


<a id="choosing-a-reporter-to-use"></a>
## Choosing a reporter to use

<pre>-r, --reporter &lt;reporter[::key=value]*&gt;</pre>

Reporters are how the output from Catch2 (results of assertions, tests,
benchmarks and so on) is formatted and written out. The default reporter
is called the "Console" reporter and is intended to provide relatively
verbose and human-friendly output.

Reporters are also individually configurable. To pass configuration options
to the reporter, you append `::key=value` to the reporter specification
as many times as you want, e.g. `--reporter xml::out=someFile.xml`.

The keys must either be prefixed with "X", in which case they are not parsed
by Catch2 and are only passed down to the reporter, or be one of the options
hardcoded into Catch2. Currently there are only two,
["out"](#sending-output-to-a-file) and ["colour-mode"](#colour-mode).

_Note that the reporter might still check the X-prefixed options for
validity, and throw an error if they are wrong._

> Support for passing arguments to reporters through the `-r`, `--reporter` flag was introduced in Catch2 3.0.1

There are multiple built-in reporters; you can see what they do by using the
[`--list-reporters`](command-line.md#listing-available-tests-tags-or-reporters)
flag. If you need a reporter providing a custom format outside of the already
provided ones, look at the ["write your own reporter" part of the reporter
documentation](reporters.md#writing-your-own-reporter).

This option may be passed multiple times to use multiple (different)
reporters at the same time. See the [reporter documentation](reporters.md#multiple-reporters)
for details on what the resulting behaviour is. Also note that at most one
reporter can be provided without the output-file part of the reporter spec.
This reporter will use the "default" output destination, based on
the [`-o`, `--out`](#sending-output-to-a-file) option.

> Support for using multiple different reporters at the same time was [introduced](https://github.com/catchorg/Catch2/pull/2183) in Catch2 3.0.1


_Note: There is currently no way to escape `::` in the reporter spec,
and thus the reporter names, or configuration keys and values, cannot
contain `::`. As `::` in paths is relatively obscure (unlike ':'), we do
not consider this an issue._


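As a sketch of combining these options (the binary name `./tests` here is a stand-in for your own test executable), the following run writes machine-readable JUnit XML to a file while keeping human-readable console output on stdout:

```
./tests --reporter junit::out=results.xml --reporter console
```

Because only one reporter may omit the `out` key, the console reporter here uses the default output destination.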
<a id="breaking-into-the-debugger"></a>
## Breaking into the debugger
<pre>-b, --break</pre>

Under most debuggers Catch2 is capable of automatically breaking on a test
failure. This allows the user to see the current state of the test at the
point of failure.

<a id="showing-results-for-successful-tests"></a>
## Showing results for successful tests
<pre>-s, --success</pre>

Usually you only want to see reporting for failed tests. Sometimes it's useful to see *all* the output (especially when you don't trust that the test you just added worked first time!).
To see successful, as well as failing, test results just pass this option. Note that each reporter may treat this option differently. The Junit reporter, for example, logs all results regardless.

<a id="aborting-after-a-certain-number-of-failures"></a>
## Aborting after a certain number of failures
<pre>-a, --abort
-x, --abortx [&lt;failure threshold>]
</pre>

If a ```REQUIRE``` assertion fails, the test case aborts, but subsequent test cases are still run.
If a ```CHECK``` assertion fails, even the current test case is not aborted.

Sometimes this results in a flood of failure messages and you'd rather just see the first few. Specifying ```-a``` or ```--abort``` on its own will abort the whole test run on the first failed assertion of any kind. Use ```-x``` or ```--abortx``` followed by a number to abort after that number of assertion failures.

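For example (again with `./tests` standing in for your test binary), to stop on the first failure or after the third failed assertion:

```
./tests -a      # abort the whole run on the first failed assertion
./tests -x 3    # abort the whole run after three assertion failures
```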
<a id="listing-available-tests-tags-or-reporters"></a>
## Listing available tests, tags or reporters
```
--list-tests
--list-tags
--list-reporters
--list-listeners
```

> The `--list*` options became customizable through reporters in Catch2 3.0.1

> The `--list-listeners` option was added in Catch2 3.0.1

`--list-tests` lists all registered tests matching the specified test spec.
Usually this listing also includes tags, and potentially also other
information, like source location, based on verbosity and the reporter's design.

`--list-tags` lists all tags from registered tests matching the specified test
spec. Usually this also includes the number of test cases they match and
similar information.

`--list-reporters` lists all available reporters and their descriptions.

`--list-listeners` lists all registered listeners and their descriptions.

The [`--verbosity` argument](#output-verbosity) modifies the level of detail provided by the default `--list*` options
as follows:

| Option             | `normal` (default)              | `quiet`             | `high`                                  |
|--------------------|---------------------------------|---------------------|-----------------------------------------|
| `--list-tests`     | Test names and tags             | Test names only     | Same as `normal`, plus source code line |
| `--list-tags`      | Tags and counts                 | Same as `normal`    | Same as `normal`                        |
| `--list-reporters` | Reporter names and descriptions | Reporter names only | Same as `normal`                        |
| `--list-listeners` | Listener names and descriptions | Same as `normal`    | Same as `normal`                        |

<a id="sending-output-to-a-file"></a>
## Sending output to a file
<pre>-o, --out &lt;filename&gt;
</pre>

Use this option to send all output to a file, instead of stdout. You can
use `-` as the filename to explicitly send the output to stdout (this is
useful e.g. when using multiple reporters).

> Support for `-` as the filename was introduced in Catch2 3.0.1

Filenames starting with "%" (percent symbol) are reserved by Catch2 for
meta purposes, e.g. using `%debug` as the filename opens a stream that
writes to a platform-specific debugging/logging mechanism.

Catch2 currently recognizes 3 meta streams:

* `%debug` - writes to platform-specific debugging/logging output
* `%stdout` - writes to stdout
* `%stderr` - writes to stderr

> Support for `%stdout` and `%stderr` was introduced in Catch2 3.0.1


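For example (with `./tests` standing in for your test binary), output can be sent to a regular file or to one of the meta streams:

```
./tests -o test-output.txt    # write all output to a file
./tests -o %debug             # write to the platform's debugging/logging facility
./tests -o -                  # explicitly write to stdout
```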
<a id="naming-a-test-run"></a>
## Naming a test run
<pre>-n, --name &lt;name for test run></pre>

If a name is supplied it will be used by the reporter to provide an overall name for the test run. This can be useful if you are sending to a file, for example, and need to distinguish different test runs - either from different Catch executables or runs of the same executable with different options. If not supplied, the name defaults to the name of the executable.

<a id="eliding-assertions-expected-to-throw"></a>
## Eliding assertions expected to throw
<pre>-e, --nothrow</pre>

Skips all assertions that test that an exception is thrown, e.g. ```REQUIRE_THROWS```.

These can be a nuisance in certain debugging environments that may break when exceptions are thrown (while this is usually optional for handled exceptions, it can be useful to have enabled if you are trying to track down something unexpected).

Sometimes exceptions are expected outside of one of the assertions that tests for them (perhaps thrown and caught within the code-under-test). The whole test case can be skipped when using ```-e``` by marking it with the ```[!throws]``` tag.

When running with this option any throw-checking assertions are skipped so as not to contribute additional noise. Be careful if this affects the behaviour of subsequent tests.

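As a sketch of the `[!throws]` tag mentioned above, a test case whose code-under-test throws outside of the throw-checking assertions can opt out of `-e` runs like this (`parse` and `ParseResult` are hypothetical names, not part of Catch2):

```
TEST_CASE( "Parses bad input by catching internally", "[!throws]" ) {
    // This whole test case is skipped under -e / --nothrow, because
    // parse() throws and catches exceptions internally, outside of
    // any *_THROWS assertion.
    REQUIRE( parse( "bad input" ) == ParseResult::Error );
}
```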
<a id="invisibles"></a>
## Make whitespace visible
<pre>-i, --invisibles</pre>

If a string comparison fails due to differences in whitespace - especially leading or trailing whitespace - it can be hard to see what's going on.
This option transforms tabs and newline characters into ```\t``` and ```\n``` respectively when printing.

<a id="warnings"></a>
## Warnings
<pre>-w, --warn &lt;warning name></pre>

You can think of Catch2's warnings as the equivalent of the `-Werror` (`/WX`)
flag for C++ compilers. It turns some suspicious occurrences, like a section
without assertions, into errors. Because these might be intended, warnings
are not enabled by default, but the user can opt in.

You can enable multiple warnings at the same time.

There are currently two warnings implemented:

```
    NoAssertions        // Fail test case / leaf section if no assertion
                        // (e.g. `REQUIRE`) is encountered.
    UnmatchedTestSpec   // Fail test run if any of the CLI test specs did
                        // not match any tests.
```

> `UnmatchedTestSpec` was introduced in Catch2 3.0.1.

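Both warnings can be enabled for a single run by repeating the flag (with `./tests` standing in for your test binary):

```
./tests -w NoAssertions -w UnmatchedTestSpec
```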

<a id="reporting-timings"></a>
## Reporting timings
<pre>-d, --durations &lt;yes/no></pre>

When set to ```yes``` Catch will report the duration of each test case, in milliseconds. Note that it does this regardless of whether a test case passes or fails. Note, also, that certain reporters (e.g. Junit) always report test case durations regardless of whether this option is set.

<pre>-D, --min-duration &lt;value></pre>

> `--min-duration` was [introduced](https://github.com/catchorg/Catch2/pull/1910) in Catch2 2.13.0

When set, Catch will report the duration of each test case that took more
than &lt;value> seconds, in milliseconds. This option is overridden by both
`-d yes` and `-d no`, so that either all durations are reported, or none
are.


<a id="input-file"></a>
## Load test names to run from a file
<pre>-f, --input-file &lt;filename></pre>

Provide the name of a file that contains a list of test case names,
one per line. Blank lines are skipped.

A useful way to generate an initial instance of this file is to combine
the [`--list-tests`](#listing-available-tests-tags-or-reporters) flag with
the [`--verbosity quiet`](#output-verbosity) option. You can also
use test specs to filter this list down to what you want first.

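A sketch of that workflow, assuming a test binary named `./tests`:

```
./tests --list-tests --verbosity quiet > my-tests.txt   # one test name per line
# edit my-tests.txt to keep only the tests you want, then:
./tests -f my-tests.txt
```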

<a id="order"></a>
## Specify the order test cases are run
<pre>--order &lt;decl|lex|rand&gt;</pre>

Test cases are ordered one of three ways:

### decl
Declaration order (this is the default order if no `--order` argument is provided).
Tests in the same translation unit are sorted using their declaration order,
while different TUs are sorted in an implementation (linking) dependent order.


### lex
Lexicographic order. Tests are sorted by their name; their tags are ignored.


### rand

Randomly ordered. The order is dependent on Catch2's random seed (see
[`--rng-seed`](#rng-seed)), and is subset invariant. What this means
is that as long as the random seed is fixed, running only some tests
(e.g. via tag) does not change their relative order.

> The subset stability was introduced in Catch2 v2.12.0

Since the random order was made subset stable, we promise that given
the same random seed, the order of test cases will be the same across
different platforms, as long as the tests were compiled against an identical
version of Catch2. We reserve the right to change the relative order
of test cases between Catch2 versions, but it is unlikely to happen often.


<a id="rng-seed"></a>
## Specify a seed for the Random Number Generator
<pre>--rng-seed &lt;'time'|'random-device'|number&gt;</pre>

Sets the seed for the random number generators used by Catch2. These are used,
e.g., to shuffle tests when the user asks for tests to be run in random order.

Using `time` as the argument asks Catch2 to generate the seed through a call
to `std::time(nullptr)`. This provides very weak randomness, and multiple
runs of the binary can generate the same seed if they are started close
to each other.

Using `random-device` asks for `std::random_device` to be used instead.
If your implementation provides a working `std::random_device`, it should
be preferred over `time`. Catch2 uses `std::random_device` by default.

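For example (with `./tests` standing in for your test binary), random ordering can be combined with an explicit numeric seed so that a shuffled run can be reproduced later:

```
./tests --order rand --rng-seed random-device   # fresh seed each run
./tests --order rand --rng-seed 1234567         # reproduce the order of a previous run
```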

<a id="libidentify"></a>
## Identify framework and version according to the libIdentify standard
<pre>--libidentify</pre>

See [the LibIdentify repo for more information and examples](https://github.com/janwilmans/LibIdentify).

<a id="wait-for-keypress"></a>
## Wait for key before continuing
<pre>--wait-for-keypress &lt;never|start|exit|both&gt;</pre>

Will cause the executable to print a message and wait until the return/enter key is pressed before continuing -
either before running any tests, after running all tests - or both, depending on the argument.

<a id="skip-benchmarks"></a>
## Skip all benchmarks
<pre>--skip-benchmarks</pre>

> [Introduced](https://github.com/catchorg/Catch2/issues/2408) in Catch2 3.0.1.

This flag tells Catch2 to skip running all benchmarks. Benchmarks in this
case mean code blocks in `BENCHMARK` and `BENCHMARK_ADVANCED` macros, not
test cases with the `[!benchmark]` tag.

<a id="benchmark-samples"></a>
## Specify the number of benchmark samples to collect
<pre>--benchmark-samples &lt;# of samples&gt;</pre>

> [Introduced](https://github.com/catchorg/Catch2/issues/1616) in Catch2 2.9.0.

When running benchmarks, a number of "samples" is collected. These samples are the base data for the later statistical analysis.
For each sample, a clock-resolution-dependent number of iterations of the user code is run; this iteration count is independent of the number of samples. Defaults to 100.

<a id="benchmark-resamples"></a>
## Specify the number of resamples for bootstrapping
<pre>--benchmark-resamples &lt;# of resamples&gt;</pre>

> [Introduced](https://github.com/catchorg/Catch2/issues/1616) in Catch2 2.9.0.

After the measurements are performed, statistical [bootstrapping] is performed
on the samples. The number of resamples for that bootstrapping is configurable
but defaults to 100000. Thanks to the bootstrapping it is possible to give
estimates for the mean and standard deviation. The estimates come with a lower
bound and an upper bound, and the confidence interval (which is configurable but
defaults to 95%).

 [bootstrapping]: http://en.wikipedia.org/wiki/Bootstrapping_%28statistics%29

<a id="benchmark-confidence-interval"></a>
## Specify the confidence-interval for bootstrapping
<pre>--benchmark-confidence-interval &lt;confidence-interval&gt;</pre>

> [Introduced](https://github.com/catchorg/Catch2/issues/1616) in Catch2 2.9.0.

The confidence-interval is used for statistical bootstrapping on the samples to
calculate the upper and lower bounds of the mean and standard deviation.
Must be between 0 and 1 and defaults to 0.95.

<a id="benchmark-no-analysis"></a>
## Disable statistical analysis of collected benchmark samples
<pre>--benchmark-no-analysis</pre>

> [Introduced](https://github.com/catchorg/Catch2/issues/1616) in Catch2 2.9.0.

When this flag is specified, no bootstrapping or any other statistical analysis is performed.
Instead the user code is only measured and the plain mean of the samples is reported.

<a id="benchmark-warmup-time"></a>
## Specify the amount of time in milliseconds spent on warming up each test
<pre>--benchmark-warmup-time</pre>

> [Introduced](https://github.com/catchorg/Catch2/pull/1844) in Catch2 2.11.2.

Configure the amount of time spent warming up each test.

<a id="usage"></a>
## Usage
<pre>-h, -?, --help</pre>

Prints the command line argument help to stdout.


<a id="run-section"></a>
## Specify the section to run
<pre>-c, --section &lt;section name&gt;</pre>

To limit execution to a specific section within a test case, use this option one or more times.
To narrow to sub-sections use multiple instances, where each subsequent instance specifies a deeper nesting level.

E.g. if you have:

<pre>
TEST_CASE( "Test" ) {
  SECTION( "sa" ) {
    SECTION( "sb" ) {
      /*...*/
    }
    SECTION( "sc" ) {
      /*...*/
    }
  }
  SECTION( "sd" ) {
    /*...*/
  }
}
</pre>

Then you can run `sb` with:
<pre>./MyExe Test -c sa -c sb</pre>

Or run just `sd` with:
<pre>./MyExe Test -c sd</pre>

To run all of `sa`, including `sb` and `sc`, use:
<pre>./MyExe Test -c sa</pre>

There are some limitations of this feature to be aware of:
- Code outside of the sections being skipped will still be executed - e.g. any set-up code in the TEST_CASE before the
start of the first section.<br />
- At the time of writing, wildcards are not supported in section names.
- If you specify a section without narrowing to a test case first, then all test cases will be executed
(but only matching sections within them).


<a id="filenames-as-tags"></a>
## Filenames as tags
<pre>-#, --filenames-as-tags</pre>

This option adds an extra tag to all test cases. The tag is `#` followed
by the unqualified filename the test case is defined in, with the _last_
extension stripped out.

For example, tests within the file `tests\SelfTest\UsageTests\BDD.tests.cpp`
will be given the `[#BDD.tests]` tag.

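For example, combining this flag with a tag filter (with `./tests` standing in for your test binary) runs only the tests defined in `BDD.tests.cpp`:

```
./tests -# "[#BDD.tests]"
```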

<a id="colour-mode"></a>
## Override output colouring
<pre>--colour-mode &lt;ansi|win32|none|default&gt;</pre>

> The `--colour-mode` option replaced the old `--colour` option in Catch2 3.0.1

Catch2 supports two different ways of colouring terminal output, and by
default it attempts to make a good guess about which implementation to use
(and whether to even use it, e.g. Catch2 tries to avoid writing colour
codes when writing the results into a file).

`--colour-mode` allows the user to explicitly select what happens.

* `--colour-mode ansi` tells Catch2 to always use ANSI colour codes, even
  when writing to a file
* `--colour-mode win32` tells Catch2 to use a colour implementation based
  on the Win32 terminal API
* `--colour-mode none` tells Catch2 to disable colours completely
* `--colour-mode default` lets Catch2 decide

`--colour-mode default` is the default setting.


<a id="test-sharding"></a>
## Test Sharding
<pre>--shard-count &lt;#number of shards&gt;, --shard-index &lt;#shard index to run&gt;</pre>

> [Introduced](https://github.com/catchorg/Catch2/pull/2257) in Catch2 3.0.1.

When `--shard-count <#number of shards>` is used, the tests to execute
will be split evenly into the given number of sets, identified by indices
starting at 0. The tests in the set given by
`--shard-index <#shard index to run>` will be executed. The default shard
count is `1`, and the default index to run is `0`.

_The shard index must be less than the number of shards. As the name suggests,
it is treated as the index of the shard to run._

Sharding is useful when you want to split test execution across multiple
processes, as is done with [Bazel test sharding](https://docs.bazel.build/versions/main/test-encyclopedia.html#test-sharding).

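As a sketch, splitting one run across three processes (with `./tests` standing in for your test binary) looks like:

```
./tests --shard-count 3 --shard-index 0   # first third of the tests
./tests --shard-count 3 --shard-index 1   # second third
./tests --shard-count 3 --shard-index 2   # final third
```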

<a id="no-tests-override"></a>
## Allow running the binary without tests
<pre>--allow-running-no-tests</pre>

> Introduced in Catch2 3.0.1.

By default, Catch2 test binaries return a non-zero exit code if no tests were run,
e.g. if the binary was compiled with no tests, the provided test spec matched no
tests, or all tests [were skipped at runtime](skipping-passing-failing.md#top). This flag
overrides that, so a test run with no tests still returns 0.

<a id="output-verbosity"></a>
## Output verbosity
```
-v, --verbosity <quiet|normal|high>
```

Changing the verbosity might change how many details Catch2's reporters output.
However, you should consider the verbosity level a _suggestion_.
Not all reporters support all verbosity levels, e.g. because the reporter's
format cannot meaningfully change. In that case, the verbosity level is
ignored.

Verbosity defaults to _normal_.


---

[Home](Readme.md#top)