This directory contains integration tests that test bitcoind and its
utilities in their entirety. It does not contain unit tests, which
can be found in [/src/test](/src/test), [/src/wallet/test](/src/wallet/test),
etc.

This directory contains the following sets of tests:

- [fuzz](/test/fuzz) A runner to execute all fuzz targets from
  [/src/test/fuzz](/src/test/fuzz).
- [functional](/test/functional) which test the functionality of
  bitcoind and bitcoin-qt by interacting with them through the RPC and P2P
  interfaces.
- [lint](/test/lint/) which perform various static analysis checks.

The fuzz tests, functional
tests and lint scripts can be run as explained in the sections below.

# Running tests locally

Before tests can be run locally, Bitcoin Core must be built. See the [building instructions](/doc#building) for help.

The following examples assume that the build directory is named `build`.

### Fuzz tests

See [/doc/fuzzing.md](/doc/fuzzing.md).

### Functional tests

#### Dependencies and prerequisites

The ZMQ functional test requires a Python ZMQ library. To install it:

- on Unix, run `sudo apt-get install python3-zmq`
- on macOS, run `pip3 install pyzmq`

The IPC functional test requires a Python IPC library. `pip3 install pycapnp` may work, but if not, install it from source:

```sh
git clone -b v2.2.1 https://github.com/capnproto/pycapnp
pip3 install ./pycapnp
```

If that does not work, try adding `-C force-bundled-libcapnp=True` to the `pip` command.
Depending on the system, it may be necessary to install and run in a venv:

```sh
python -m venv venv
git clone -b v2.2.1 https://github.com/capnproto/pycapnp
venv/bin/pip3 install ./pycapnp -C force-bundled-libcapnp=True
venv/bin/python3 build/test/functional/interface_ipc.py
```

The functional tests assume Python UTF-8 Mode, which is the default on most
systems.
On Windows, the `PYTHONUTF8` environment variable must be set to 1:

```cmd
set PYTHONUTF8=1
```
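
To confirm whether UTF-8 Mode is active in your interpreter, a quick check (this uses the standard `sys.flags` field, available since Python 3.7):

```python
import sys

# sys.flags.utf8_mode is 1 when Python UTF-8 Mode is enabled, 0 otherwise.
print(sys.flags.utf8_mode)
```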

#### Running the tests

Individual tests can be run by directly calling the test script, e.g.:

```
build/test/functional/feature_rbf.py
```

or can be run through the test_runner harness, e.g.:

```
build/test/functional/test_runner.py feature_rbf.py
```

You can run any combination (including duplicates) of tests by calling:

```
build/test/functional/test_runner.py <testname1> <testname2> <testname3> ...
```

Wildcard test names can be passed, if the paths are coherent and the test runner
is called from a `bash` shell or similar that does the globbing. For example,
to run all the wallet tests:

```
build/test/functional/test_runner.py test/functional/wallet*
functional/test_runner.py functional/wallet*  # (called from the build/test/ directory)
test_runner.py wallet*  # (called from the build/test/functional/ directory)
```

but not:

```
build/test/functional/test_runner.py wallet*
```

Combinations of wildcards can be passed:

```
build/test/functional/test_runner.py ./test/functional/tool* test/functional/mempool*
test_runner.py tool* mempool*
```

Run the regression test suite with:

```
build/test/functional/test_runner.py
```

Run all possible tests with:

```
build/test/functional/test_runner.py --extended
```

In order to run backwards compatibility tests, first run:

```
test/get_previous_releases.py
```

to download the necessary previous release binaries.

By default, up to 4 tests will be run in parallel by test_runner. To specify
how many jobs to run, append `--jobs=n`.

The individual tests and the test_runner harness have many command-line
options. Run `build/test/functional/test_runner.py -h` to see them all.

#### Speed up test runs with a RAM disk

If you have RAM to spare, you can create a RAM disk to use as the `cache` and `tmp` directories for the functional tests in order to speed them up.
The speed-up varies from system to system (depending on RAM speed and other variables), but a 2-3x speed-up is not uncommon.

**Linux**

To create a 4 GiB RAM disk at `/mnt/tmp/`:

```bash
sudo mkdir -p /mnt/tmp
sudo mount -t tmpfs -o size=4g tmpfs /mnt/tmp/
```

Configure the size of the RAM disk using the `size=` option.
The size of the RAM disk needed scales with the number of concurrent jobs the test suite runs.
For example, running the test suite with `--jobs=100` might need a 4 GiB RAM disk, but running with `--jobs=32` will only need a 2.5 GiB RAM disk.

To use it, run the test suite specifying the RAM disk as the `cachedir` and `tmpdir`:

```bash
build/test/functional/test_runner.py --cachedir=/mnt/tmp/cache --tmpdir=/mnt/tmp
```

Once finished with the tests and the disk, simply unmount it to free the RAM:

```bash
sudo umount /mnt/tmp
```

**macOS**

To create a 4 GiB RAM disk named "ramdisk" at `/Volumes/ramdisk/`:

```bash
diskutil erasevolume HFS+ ramdisk $(hdiutil attach -nomount ram://8388608)
```

Configure the RAM disk size, expressed as the number of 512-byte blocks, at the end of the command
(`4096 MiB * 2048 blocks/MiB = 8388608 blocks` for 4 GiB). To run the tests using the RAM disk:

```bash
build/test/functional/test_runner.py --cachedir=/Volumes/ramdisk/cache --tmpdir=/Volumes/ramdisk/tmp
```
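
The block count above can be derived with a couple of lines of Python (a sketch of the arithmetic, assuming 512-byte blocks, i.e. 2048 blocks per MiB):

```python
# Convert a desired RAM disk size in MiB to the block count hdiutil expects.
# hdiutil's ram:// size is given in 512-byte blocks, so 1 MiB = 2048 blocks.
def ram_disk_blocks(size_mib):
    return size_mib * 2048

print(ram_disk_blocks(4096))  # 4 GiB -> 8388608 blocks
```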

To unmount:

```bash
umount /Volumes/ramdisk
```

#### Troubleshooting and debugging test failures

##### Resource contention

The P2P and RPC ports used by the bitcoind nodes-under-test are chosen to make
conflicts with other processes unlikely. However, if there is another bitcoind
process running on the system (perhaps from a previous test which hasn't successfully
killed all its bitcoind nodes), then there may be a port conflict which will
cause the test to fail. It is recommended that you run the tests on a system
where no other bitcoind processes are running.
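
If you suspect a conflict, one quick way to check whether a given port is already in use is a short Python helper (a hypothetical diagnostic, not part of the test framework; the port number below is just an example):

```python
import socket

def port_in_use(port, host="127.0.0.1"):
    """Return True if something is accepting connections on host:port."""
    with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as s:
        s.settimeout(0.5)
        # connect_ex returns 0 on a successful connection, an errno otherwise.
        return s.connect_ex((host, port)) == 0

print(port_in_use(18444))  # 18444 is the default regtest P2P port
```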

On Linux, the test framework will warn if there is another
bitcoind process running when the tests are started.

If there are zombie bitcoind processes after a test failure, you can kill them
by running the following commands. **Note that these commands will kill all
bitcoind processes running on the system, so they should not be used if any non-test
bitcoind processes are being run.**

```bash
killall bitcoind
```

or

```bash
pkill -9 bitcoind
```

##### Data directory cache

A pre-mined blockchain with 200 blocks is generated the first time a
functional test is run and is stored in `build/test/cache`. This speeds up
test startup times since new blockchains don't need to be generated for
each test. However, the cache may get into a bad state, in which case
tests will fail. If this happens, remove the cache directory (and make
sure bitcoind processes are stopped as above):

```bash
rm -rf build/test/cache
killall bitcoind
```

##### Test logging

The tests contain logging at five different levels (DEBUG, INFO, WARNING, ERROR
and CRITICAL). From within your functional tests you can log to these different
levels using the logger included in the test_framework, e.g.
`self.log.debug(object)`. By default:

- when run through the test_runner harness, *all* logs are written to
  `test_framework.log` and no logs are output to the console.
- when run directly, *all* logs are written to `test_framework.log` and INFO
  level and above are output to the console.
- when run by [our CI (Continuous Integration)](/ci/README.md), no logs are output to the console. However, if a test
  fails, the `test_framework.log` and bitcoind `debug.log`s will all be dumped
  to the console to help with troubleshooting.
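
The "run directly" behavior above can be illustrated with Python's standard `logging` module (a minimal sketch; the test framework configures its logger along these lines, with handlers at different thresholds):

```python
import io
import logging

log = logging.getLogger("demo")
log.setLevel(logging.DEBUG)

# One handler captures every level, like test_framework.log
# (an in-memory buffer stands in for the file here).
file_buf = io.StringIO()
file_handler = logging.StreamHandler(file_buf)
file_handler.setLevel(logging.DEBUG)
log.addHandler(file_handler)

# A second handler passes only INFO and above, like the console.
console_buf = io.StringIO()
console_handler = logging.StreamHandler(console_buf)
console_handler.setLevel(logging.INFO)
log.addHandler(console_handler)

log.debug("only in the log file")
log.info("in the log file and on the console")
```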

These log files can be located under the test data directory (which is always
printed in the first line of test output):

- `<test data directory>/test_framework.log`
- `<test data directory>/node<node number>/regtest/debug.log`

The node number identifies the relevant test node, starting from `node0`, which
corresponds to its position in the nodes list of the specific test,
e.g. `self.nodes[0]`.
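
As a sketch, the mapping from a node's index in `self.nodes` to its `debug.log` path (the test data directory name below is hypothetical):

```python
# Build the debug.log path for a node, given the test data directory
# and the node's index in self.nodes.
def node_debug_log(test_data_dir, node_index):
    return f"{test_data_dir}/node{node_index}/regtest/debug.log"

print(node_debug_log("/tmp/test_abc123", 0))
```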

To change the level of logs output to the console, use the `-l` command line
argument.

`test_framework.log` and bitcoind `debug.log`s can be combined into a single
aggregate log by running the `combine_logs.py` script. The output can be plain
text, colorized text or html. For example:

```
build/test/functional/combine_logs.py -c <test data directory> | less -r
```

will pipe the colorized logs from the test into less.

Use `--tracerpc` to trace out all the RPC calls and responses to the console. For
some tests (e.g. any that use `submitblock` to submit a full block over RPC),
this can result in a lot of screen output.

By default, the test data directory will be deleted after a successful run.
Use `--nocleanup` to leave the test data directory intact. The test data
directory is never deleted after a failed test.

##### Attaching a debugger

A Python debugger can be attached to tests at any point. Just add the line:

```py
import pdb; pdb.set_trace()
```

anywhere in the test. You will then be able to inspect variables, as well as
call methods that interact with the bitcoind nodes-under-test.

If further introspection of the bitcoind instances themselves becomes
necessary, this can be accomplished by first setting a pdb breakpoint
at an appropriate location, running the test to that point, then using
`gdb` (or `lldb` on macOS) to attach to the process and debug.

For instance, to attach to `self.nodes[1]` during a run, you can get
the pid of the node within `pdb`:

```
(pdb) self.nodes[1].process.pid
```

Alternatively, you can find the pid by inspecting the temp folder for the specific test
you are running. The path to that folder is printed at the beginning of every
test run:

```bash
2017-06-27 14:13:56.686000 TestFramework (INFO): Initializing test directory /tmp/user/1000/testo9vsdjo3
```

Use the path to find the pid file in the temp folder:

```bash
cat /tmp/user/1000/testo9vsdjo3/node1/regtest/bitcoind.pid
```
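
The same lookup can be done from Python (a small sketch; the pid file contains the process id as text):

```python
from pathlib import Path

def read_pid(pidfile):
    """Read a process id from a pid file such as <datadir>/regtest/bitcoind.pid."""
    return int(Path(pidfile).read_text().strip())
```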

Then you can use the pid to start `gdb`:

```bash
gdb /home/example/bitcoind <pid>
```

Note: the gdb attach step may require `ptrace_scope` to be modified, or `sudo` to precede the `gdb` command.
See this link for considerations: https://www.kernel.org/doc/Documentation/security/Yama.txt

Often while debugging RPC calls in functional tests, the test might time out before the
process can return a response. Use `--timeout-factor 0` to disable all RPC timeouts for that particular
functional test, e.g. `build/test/functional/wallet_hd.py --timeout-factor 0`.

##### Profiling

An easy way to profile node performance during functional tests is provided
for Linux platforms using `perf`.

Perf will sample the running node and will generate profile data in the node's
datadir. The profile data can then be presented using `perf report` or a graphical
tool like [hotspot](https://github.com/KDAB/hotspot).

To generate a profile during test suite runs, use the `--perf` flag.

To render the output as text, run:

```sh
perf report -i /path/to/datadir/send-big-msgs.perf.data.xxxx --stdio | c++filt | less
```

For ways to generate more granular profiles, see the README in
[test/functional](/test/functional).

### Lint tests

See the README in [test/lint](/test/lint).

# Writing functional tests

You are encouraged to write functional tests for new or existing features.
Further information about the functional test framework and individual
tests can be found in [test/functional](/test/functional).