This directory contains integration tests that test bitcoind and its
utilities in their entirety. It does not contain unit tests, which
can be found in [/src/test](/src/test), [/src/wallet/test](/src/wallet/test),
etc.

This directory contains the following sets of tests:

- [fuzz](/test/fuzz) A runner to execute all fuzz targets from
  [/src/test/fuzz](/src/test/fuzz).
- [functional](/test/functional) which tests the functionality of
  bitcoind and bitcoin-qt by interacting with them through the RPC and P2P
  interfaces.
- [util](/test/util) which tests the utilities (bitcoin-util, bitcoin-tx, ...).
- [lint](/test/lint/) which performs various static analysis checks.

The util tests are run as part of the `make check` target. The fuzz tests,
functional tests and lint scripts can be run as explained in the sections below.

# Running tests locally

Before tests can be run locally, Bitcoin Core must be built. See the
[building instructions](/doc#building) for help.

## Fuzz tests

See [/doc/fuzzing.md](/doc/fuzzing.md)

## Functional tests

### Dependencies and prerequisites

The ZMQ functional test requires a Python ZMQ library. To install it:

- on Unix, run `sudo apt-get install python3-zmq`
- on macOS, run `pip3 install pyzmq`

On Windows the `PYTHONUTF8` environment variable must be set to 1:

```cmd
set PYTHONUTF8=1
```

### Running the tests

Individual tests can be run by directly calling the test script, e.g.:

```
test/functional/feature_rbf.py
```

or can be run through the test_runner harness, e.g.:

```
test/functional/test_runner.py feature_rbf.py
```

You can run any combination (incl. duplicates) of tests by calling:

```
test/functional/test_runner.py <testname1> <testname2> <testname3> ...
```

Wildcard test names can be passed, if the paths are coherent and the test runner
is called from a `bash` shell or similar that does the globbing. For example,
to run all the wallet tests:

```
test/functional/test_runner.py test/functional/wallet*
functional/test_runner.py functional/wallet*  (called from the test/ directory)
test_runner.py wallet*  (called from the test/functional/ directory)
```

but not

```
test/functional/test_runner.py wallet*
```

Combinations of wildcards can be passed:

```
test/functional/test_runner.py ./test/functional/tool* test/functional/mempool*
test_runner.py tool* mempool*
```

Run the regression test suite with:

```
test/functional/test_runner.py
```

Run all possible tests with:

```
test/functional/test_runner.py --extended
```

In order to run backwards compatibility tests, first run:

```
test/get_previous_releases.py -b
```

to download the necessary previous release binaries.

By default, up to 4 tests will be run in parallel by test_runner. To specify
how many jobs to run, append `--jobs=n`.

The individual tests and the test_runner harness have many command-line
options. Run `test/functional/test_runner.py -h` to see them all.

### Speed up test runs with a RAM disk

If you have spare RAM on your system, you can create a RAM disk to use as the
`cache` and `tmp` directories for the functional tests in order to speed them
up. The speed-up varies from system to system (depending on your RAM speed and
other variables), but a 2-3x speed-up is not uncommon.

**Linux**

To create a 4 GiB RAM disk at `/mnt/tmp/`:

```bash
sudo mkdir -p /mnt/tmp
sudo mount -t tmpfs -o size=4g tmpfs /mnt/tmp/
```

Configure the size of the RAM disk using the `size=` option.
The size of the RAM disk needed is relative to the number of concurrent jobs
the test suite runs. For example, running the test suite with `--jobs=100`
might need a 4 GiB RAM disk, while running with `--jobs=32` will only need a
2.5 GiB RAM disk.

To use it, run the test suite specifying the RAM disk as the `cachedir` and
`tmpdir`:

```bash
test/functional/test_runner.py --cachedir=/mnt/tmp/cache --tmpdir=/mnt/tmp
```

Once finished with the tests and the disk, simply unmount it to free the RAM:

```bash
sudo umount /mnt/tmp
```

**macOS**

To create a 4 GiB RAM disk named "ramdisk" at `/Volumes/ramdisk/`:

```bash
diskutil erasevolume HFS+ ramdisk $(hdiutil attach -nomount ram://8388608)
```

Configure the RAM disk size, expressed as the number of 512-byte blocks, at the
end of the command (`4096 MiB * 2048 blocks/MiB = 8388608 blocks` for 4 GiB).
To run the tests using the RAM disk:

```bash
test/functional/test_runner.py --cachedir=/Volumes/ramdisk/cache --tmpdir=/Volumes/ramdisk/tmp
```

To unmount:

```bash
umount /Volumes/ramdisk
```

### Troubleshooting and debugging test failures

#### Resource contention

The P2P and RPC ports used by the bitcoind nodes-under-test are chosen to make
conflicts with other processes unlikely. However, if there is another bitcoind
process running on the system (perhaps from a previous test run which hasn't
successfully killed all its bitcoind nodes), then there may be a port conflict
which will cause the test to fail. It is recommended that you run the tests on
a system where no other bitcoind processes are running.

On Linux, the test framework will warn if there is another
bitcoind process running when the tests are started.

If there are zombie bitcoind processes after a test failure, you can kill them
by running the following commands.
**Note that these commands will kill all
bitcoind processes running on the system, so they should not be used if any
non-test bitcoind processes are running.**

```bash
killall bitcoind
```

or

```bash
pkill -9 bitcoind
```

#### Data directory cache

A pre-mined blockchain with 200 blocks is generated the first time a
functional test is run and is stored in test/cache. This speeds up
test startup times since new blockchains don't need to be generated for
each test. However, the cache may get into a bad state, in which case
tests will fail. If this happens, remove the cache directory (and make
sure bitcoind processes are stopped as above):

```bash
rm -rf test/cache
killall bitcoind
```

#### Test logging

The tests contain logging at five different levels (DEBUG, INFO, WARNING, ERROR
and CRITICAL). From within your functional tests you can log to these different
levels using the logger included in the test framework, e.g.
`self.log.debug(object)`. By default:

- when run through the test_runner harness, *all* logs are written to
  `test_framework.log` and no logs are output to the console.
- when run directly, *all* logs are written to `test_framework.log` and INFO
  level and above are output to the console.
- when run by [our CI (Continuous Integration)](/ci/README.md), no logs are
  output to the console. However, if a test fails, the `test_framework.log`
  and bitcoind `debug.log`s will all be dumped to the console to help
  troubleshooting.

These log files can be located under the test data directory (which is always
printed in the first line of test output):

- `<test data directory>/test_framework.log`
- `<test data directory>/node<node number>/regtest/debug.log`
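The framework's logger is a standard Python `logging` logger, so the default behavior when running a test directly can be sketched with the stdlib alone. This is an illustrative stand-in, not the framework's actual configuration; the logger name and file name below are made up for the example:

```python
import logging

# Illustrative stand-in for the framework's logger: *all* levels go to the
# log file, while only INFO and above reach the console (the "run directly"
# behavior described above).
log = logging.getLogger("test_framework_demo")
log.setLevel(logging.DEBUG)

to_file = logging.FileHandler("test_framework_demo.log", mode="w")
to_file.setLevel(logging.DEBUG)      # every level is written to the file
to_console = logging.StreamHandler()
to_console.setLevel(logging.INFO)    # DEBUG messages are filtered out here
log.addHandler(to_file)
log.addHandler(to_console)

log.debug("appears in the file only")
log.info("appears in the file and on the console")
```

In the real framework the equivalent handlers are set up for you; inside a test you simply call `self.log.debug(...)`, `self.log.info(...)`, and so on.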
The node number identifies the relevant test node, starting from `node0`, which
corresponds to its position in the nodes list of the specific test,
e.g. `self.nodes[0]`.

To change the level of logs output to the console, use the `-l` command line
argument.

`test_framework.log` and bitcoind `debug.log`s can be combined into a single
aggregate log by running the `combine_logs.py` script. The output can be plain
text, colorized text or html. For example:

```
test/functional/combine_logs.py -c <test data directory> | less -r
```

will pipe the colorized logs from the test into less.

Use `--tracerpc` to trace out all the RPC calls and responses to the console.
For some tests (e.g. any that use `submitblock` to submit a full block over
RPC), this can result in a lot of screen output.

By default, the test data directory will be deleted after a successful run.
Use `--nocleanup` to leave the test data directory intact. The test data
directory is never deleted after a failed test.

#### Attaching a debugger

A Python debugger can be attached to tests at any point. Just add the line:

```py
import pdb; pdb.set_trace()
```

anywhere in the test. You will then be able to inspect variables, as well as
call methods that interact with the bitcoind nodes-under-test.

If further introspection of the bitcoind instances themselves becomes
necessary, this can be accomplished by first setting a pdb breakpoint
at an appropriate location, running the test to that point, then using
`gdb` (or `lldb` on macOS) to attach to the process and debug.

For instance, to attach to `self.nodes[1]` during a run you can get
the pid of the node within `pdb`:

```
(pdb) self.nodes[1].process.pid
```

Alternatively, you can find the pid by inspecting the temp folder for the
specific test you are running.
The path to that folder is printed at the beginning of every
test run:

```bash
2017-06-27 14:13:56.686000 TestFramework (INFO): Initializing test directory /tmp/user/1000/testo9vsdjo3
```

Use the path to find the pid file in the temp folder:

```bash
cat /tmp/user/1000/testo9vsdjo3/node1/regtest/bitcoind.pid
```

Then you can use the pid to start `gdb`:

```bash
gdb /home/example/bitcoind <pid>
```

Note: the gdb attach step may require `ptrace_scope` to be modified, or `sudo`
preceding the `gdb`. See this link for considerations:
https://www.kernel.org/doc/Documentation/security/Yama.txt

Often while debugging RPC calls in functional tests, the test might time out
before the process can return a response. Use `--timeout-factor 0` to disable
all RPC timeouts for that particular functional test,
e.g. `test/functional/wallet_hd.py --timeout-factor 0`.

#### Profiling

An easy way to profile node performance during functional tests is provided
for Linux platforms using `perf`.

Perf will sample the running node and will generate profile data in the node's
datadir. The profile data can then be presented using `perf report` or a
graphical tool like [hotspot](https://github.com/KDAB/hotspot).

To generate a profile during test suite runs, use the `--perf` flag.

To render the output as text, run:

```sh
perf report -i /path/to/datadir/send-big-msgs.perf.data.xxxx --stdio | c++filt | less
```

For ways to generate more granular profiles, see the README in
[test/functional](/test/functional).

## Util tests

Util tests can be run locally by running `test/util/test_runner.py`.
Use the `-v` option for verbose output.

## Lint tests

See the README in [test/lint](/test/lint).
# Writing functional tests

You are encouraged to write functional tests for new or existing features.
Further information about the functional test framework and individual
tests can be found in [test/functional](/test/functional).
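As a rough sketch of the shape of a functional test: the class and method names below follow the real `BitcoinTestFramework` interface, but the base class here is a minimal stand-in so the snippet is self-contained. A real test would instead do `from test_framework.test_framework import BitcoinTestFramework` and let the framework start the bitcoind nodes:

```python
# Hypothetical stand-in for the real base class, which lives in
# test/functional/test_framework/test_framework.py.
class BitcoinTestFramework:
    def main(self):
        self.set_test_params()
        self.run_test()

class ExampleTest(BitcoinTestFramework):
    def set_test_params(self):
        # Every test must declare how many bitcoind nodes it needs.
        self.num_nodes = 1

    def run_test(self):
        # In a real test, self.nodes[0] proxies bitcoind's RPC interface
        # (e.g. self.nodes[0].getblockcount()) and self.log provides the
        # leveled logging described in the "Test logging" section above.
        pass

if __name__ == "__main__":
    ExampleTest().main()
```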