Unit Testing

Unit testing for this module is performed using pytest, with asynchronous tests supported by pytest-asyncio. Unit test files should follow pytest conventions. Additionally, coverage is used to show the developer what the unit tests actually exercise, and what code remains untested. All of these packages are installed if the dev-setup.sh script is used as described in Development Environment.
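For illustration, a minimal test file might look like the following. This is a sketch only: the imported module and functions are hypothetical, not part of this project.

# test_example.py: an illustrative sketch; all names here are hypothetical.
import pytest

from mymodule import add, fetch_value  # hypothetical imports


def test_add():
    # Plain synchronous test: pytest collects files and functions named test_*.
    assert add(1, 2) == 3


@pytest.mark.asyncio
async def test_fetch_value():
    # pytest-asyncio runs coroutine tests under an event loop.
    assert await fetch_value() == 3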

To run the tests, use the following command:

pytest

pytest reads its configuration from pyproject.toml. coverage and pytest-cov are also installed as part of this project's requirements-dev.txt. As currently configured, running the unit tests as described above executes a subset of the parameterised tests (see the docstring in test/conftest.py). Not every combination of parameters is tested on every run, but each individual parameter is tested at least once.
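For reference, the relevant section of pyproject.toml looks something like this. This is an illustrative sketch only; the project's actual pyproject.toml is authoritative.

[tool.pytest.ini_options]
# Illustrative values; see the project's pyproject.toml for the real ones.
testpaths = ["test"]
asyncio_mode = "auto"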

If you'd like an HTML test-coverage report (at the expense of a slightly longer test run), execute pytest with the --cov flag.
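For example, assuming the HTML report type is set in the project's coverage configuration (if it is not, pass --cov-report=html explicitly):

pytest --cov

The resulting report can then be viewed by: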

xdg-open htmlcov/index.html

Or, if you are developing on a remote server:

cd htmlcov && python -m http.server 8089

If you are using VSCode, the editor will prompt you to open the link in a browser and will automatically forward the port to your local machine. If not, or if you'd prefer the old-fashioned way, point a browser at port 8089 on the machine you are developing on.

The results will look something like this:

[Image: coverage_screenshot.png, showing an example HTML coverage report]

The colour key is at the top of the page but, briefly: lines marked in green were executed by the tests; lines marked in red were not. Yellow lines indicate branches that were only partially covered, i.e. not all possible ways to branch were tested. In the cases shown, that is because only expected values were passed to the function in question: the unit tests did not pass invalid inputs to check that exceptions were raised appropriately.
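A partially-covered branch like that can be completed by a test which deliberately passes an invalid input. The sketch below assumes a hypothetical parse_mode function that raises ValueError on unknown input:

import pytest

from mymodule import parse_mode  # hypothetical function for illustration


def test_parse_mode_rejects_unknown_value():
    # Exercising the error path covers the branch direction that the
    # happy-path tests never take, turning a yellow line green.
    with pytest.raises(ValueError):
        parse_mode("not-a-real-mode")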

On the right-hand side, a context is displayed for each line that was executed:

[Image: coverage_screenshot_contexts.png, showing per-line contexts in the coverage report]

On the left side of the | is the static context, in this case showing information about the git commit that I ran the tests on. The right side shows the dynamic context: in this case, two different tests both executed this code during the course of their runs.

Note

coverage's “dynamic context” output is currently specified by pytest-cov to describe the test function which executed the line of code in question. If desired, it can instead be specified in coverage's configuration, as described in coverage's documentation. This produces slightly different output which conveys broadly similar information.
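In pyproject.toml, that coverage configuration is a one-line setting (a minimal sketch; "test_function" is one of the dynamic-context options supported by coverage):

[tool.coverage.run]
dynamic_context = "test_function"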

coverage's static context is harder to specify in a useful way. To generate the report above, I executed the following command:

coverage run --context=$(git describe --tags --dirty --always) -m pytest

This gives more useful information about exactly what code was run, and whether it was committed or dirty. Unfortunately, running coverage this way forgoes the features of pytest-cov. coverage supports specifying a static context either on the command line (as shown) or via its configuration file, which can read environment variables; however, the configuration file cannot evaluate arbitrary shell expressions the way the command line can.
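For example, a sketch using a hypothetical GIT_DESCRIBE environment variable, which must be exported before the test run (coverage expands ${...} references from the environment when reading its configuration):

[tool.coverage.run]
# GIT_DESCRIBE is hypothetical and must be set by the caller, e.g.
#   export GIT_DESCRIBE=$(git describe --tags --dirty --always)
context = "${GIT_DESCRIBE}"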

The package author suggests using a Makefile to generate an environment variable which the configuration can then use when generating a static context. This strikes me as a good solution, but I am reluctant to include yet another boilerplate file in the repository, so I leave it to the discretion of the individual developer.

Tip

That said, such a Makefile could also replace dev-setup.sh, allowing the developer to do something like

make develop  # to set up the environment
make test     # to actually run the tests
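A minimal sketch of such a Makefile follows. The target names and the GIT_DESCRIBE variable are illustrative; no such file exists in the repository.

# Illustrative Makefile sketch; not part of the repository.
GIT_DESCRIBE := $(shell git describe --tags --dirty --always)
export GIT_DESCRIBE

.PHONY: develop test

develop:
	./dev-setup.sh

test:
	pytest --cov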