Unit testing and code coverage in Python

What this guide covers

This guide provides a practical introduction to applying unit testing that you can use for the coursework.

Introduction to Python unit testing

What is a unit test and how is it structured?

The types of software testing are covered in one of the video presentations on Moodle.

A unit test checks that a single component of your app (typically a function or method) operates in the right way. Because each test exercises one unit in isolation, a failing unit test helps you pinpoint what is broken in your application and fix it faster.

A unit test typically follows the Arrange-Act-Assert model:

- Arrange: set up the objects, data and conditions the test needs
- Act: call the function or method being tested
- Assert: check that the result is what you expected

Optionally, a tear down step is used to remove any conditions that were temporarily set for the test.

A test should typically only include the actions necessary to test the object (or function) to get the desired result. The set up and tear down methods allow you to define instructions that will be executed before and after each test method. Typical uses of set up and tear down include creating test data or opening a database connection before each test, and removing temporary files or closing connections afterwards.
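The following sketch (illustrative only; it tests a built-in list rather than coursework code) shows how the three phases map onto a pytest-style test:

def test_append_adds_item_to_list():
    # Arrange: set up the data the test needs
    items = [1, 2]

    # Act: perform the action being tested
    items.append(3)

    # Assert: check the outcome is what we expected
    assert items == [1, 2, 3]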

Which Python test library to use

The standard Python library includes the unit testing library, unittest.

Another popular Python test library is pytest. To use pytest you need to install the library in your venv, e.g. pip install pytest.

You can use either unittest or pytest for your coursework.

There are differing opinions on which is the preferable library. This guide uses pytest as pytest is suggested in the Flask and Plotly Dash documentation.

The main differences between pytest and unittest are:

| Feature | pytest | unittest |
| --- | --- | --- |
| Installation | Third-party library, so requires installation | Included in the Python standard library |
| Test set up and tear down | Uses fixtures | Uses setUp() and tearDown() methods |
| Assertion format | Uses plain assert statements; expected exceptions are checked with pytest.raises | Uses a range of assert* style methods, e.g. assertEqual, assertTrue, assertIn |
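To see the difference in assertion style, here is the same check written both ways (a minimal sketch using a built-in function rather than coursework code):

import unittest


# pytest style: a plain assert statement
def test_sum_pytest_style():
    assert sum([1, 2, 3]) == 6


# unittest style: assert* methods on a TestCase subclass
class TestSum(unittest.TestCase):
    def test_sum_unittest_style(self):
        self.assertEqual(sum([1, 2, 3]), 6)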

You can use both libraries in an object-oriented (class-based) or functional coding style. The test examples given in this guide use a functional approach; you may want to investigate an object-oriented approach if that better suits the nature of your code.

Set up your IDE to support your chosen test runner

You will likely need to configure your project in your IDE to use a test runner for the library you are using, either unittest or pytest. Check the documentation for your IDE for instructions.

PyCharm help: Testing frameworks

Python testing in VS Code

Test naming convention

Many test runners look for a pattern in the naming and location of test files. Use the following naming conventions (these match pytest's default test discovery rules):

- place your tests in a 'tests' directory at the top level of the project
- name test files test_*.py (or *_test.py)
- prefix the names of test functions (and methods) with test_

There are more prescriptive test naming conventions that can be followed. For the project, use any that are appropriate for Python and the test library you are using.

Pick a naming convention and then adhere to it! This will make your tests easier to read and maintain by others.

See Conventions for Python test discovery in pytest

Create the test directory and test code files

Apply the naming convention you choose and create a test directory in your project structure. For example, it might look like this if you are creating a test for a car class:

├── .gitignore
├── README.md
├── src
│   └── car.py
├── tests
│   └── test_car.py
└── .venv

See Pytest documentation on choosing a test layout.

Consider if you need to create a conftest.py in your 'tests' directory

If you look at pytest examples you will often see a file called conftest.py at the top level of a 'tests' directory (and/or within each test subdirectory).

conftest.py is typically used to define elements that are shared across tests, such as set up and tear down code and helper functions. pytest calls these fixtures.

The advantage of adding fixture functions to conftest.py is that it makes them accessible across multiple test files/tests. Fixtures do not have to be in conftest.py; they can also be contained within the test files.

Consider adding a new Python file called conftest.py in the 'tests' directory.

Fixtures

Pytest fixtures are used to provide common functions that you may need for your tests. A fixture is defined using the @pytest.fixture decorator: the code before a yield statement performs the set up, and any code after the yield performs the tear down (finalisation).

Fixtures are established for a particular scope using the syntax @pytest.fixture(scope='module'). Options for scope are:

- function (the default): the fixture is created afresh for each test function
- class: created once per test class
- module: created once per test module (file)
- package: created once per package
- session: created once per test session

You may not need to make much use of fixtures for COMP0035, however you will need to use these when we move on to testing Flask and Dash apps in COMP0034.

Fixtures can be added to conftest.py to make them available to other test modules, or defined within a given test module (i.e. a single Python file that contains the tests that use those fixtures).
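For example, a fixture that creates a resource before the tests run and removes it afterwards might look like this (a minimal sketch; the temporary CSV file is illustrative, not coursework code):

import os

import pytest


@pytest.fixture(scope='module')
def data_file_path():
    # Set up: create a temporary data file before the tests in the module run
    path = 'temp_test_data.csv'
    with open(path, 'w') as file:
        file.write('value\n2\n4\n8\n20\n')
    # Pass the path to the tests that use this fixture
    yield path
    # Tear down: remove the file after the last test in the module finishes
    os.remove(path)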

Define the test

Before you start trying to code tests, you first need to decide what your tests should test for.

The Given-When-Then approach is useful to help determine tests before you attempt to automate them in code.

The idea is to break down a test into three sections:

- GIVEN: the initial context or state for the test
- WHEN: the action or event that is being tested
- THEN: the outcome you expect to observe

So for example:

GIVEN a User model
WHEN a new User is created
THEN check the email_address, hashed_password, and role fields are defined correctly

You could then consider that 'GIVEN' suggests set up, 'WHEN' suggests the calling of the method or function, and 'THEN' suggests the assertion and tear down. Source

Dan North, credited as the originator of 'GIVEN-WHEN-THEN', also suggests that you can convert user stories to a unit test using this structure. For example:

As a customer,
I want to withdraw cash from an ATM,
so that I don't have to wait in line at the bank.

Could lead to a test:

GIVEN the account is in credit
And the card is valid
And the dispenser contains cash
WHEN the customer requests cash
THEN ensure the account is debited
And ensure cash is dispensed
And ensure the card is returned

Source

It isn't mandatory to work out the tests using Given-When-Then, but it is useful, and the Given-When-Then text is often used as the docstring in test cases. The following example by Patrick Kennedy shows this used for testing a Flask app route:

def test_home_page_with_fixture(test_client):
    """
    GIVEN a Flask application configured for testing
    WHEN the '/' page is requested (GET)
    THEN check that the response is valid from the HTTP status code
    """
    response = test_client.get('/')
    assert response.status_code == 200

Write a test

Assume you want to test a code file called algorithm.py.

Code adapted from the example in this article.

# algorithm.py
def maximum(values):
    """Return the largest value in a non-empty list."""
    _max = values[0]

    for val in values:
        if val > _max:
            _max = val

    return _max


def minimum(values):
    """Return the smallest value in a non-empty list."""
    _min = values[0]

    for val in values:
        if val < _min:
            _min = val

    return _min

The first test will check that the minimum function returns the smallest number in a list of numbers. The test might be defined as:

GIVEN the values 2, 4, 8 and 20
WHEN the values are passed to the minimum function
THEN the result should equal 2

The python code for the test could look like this:

import algorithm


def test_minimum_returns_correct_value():
    """
    GIVEN the values 2, 4, 8 and 20
    WHEN the values are passed to the minimum function
    THEN the result should equal 2
    """
    values = [2, 4, 8, 20]
    result = algorithm.minimum(values)
    assert result == 2

Using a fixture in a test

You could use a fixture to set up the data for this example test by adding the following to conftest.py:

import pytest


@pytest.fixture(scope='module')
def values():
    values = [2, 4, 8, 20]
    yield values

You can then use the same values for more than one test by passing the fixture named 'values' as an argument to the test functions, e.g.:

import algorithm


def test_minimum_returns_correct_value(values):
    """
    GIVEN the values 2, 4, 8 and 20
    WHEN the values are passed to the minimum function
    THEN the result should equal 2
    """
    result = algorithm.minimum(values)
    assert result == 2


def test_maximum_returns_correct_value(values):
    """
    GIVEN the values 2, 4, 8 and 20
    WHEN the values are passed to the maximum function
    THEN the result should equal 20
    """
    result = algorithm.maximum(values)
    assert result == 20

Code coverage

Code coverage provides an understanding of the extent to which your source code is exercised by your tests. It analyses the lines of code and reports the percentage that are executed when the tests run.

Coverage is discussed in the teaching materials on Moodle so isn't repeated here. In summary, it is unusual to aim for 100% code coverage and there is no universally applicable ideal. The % to aim for will vary depending on factors such as the criticality of the app (for example a medical app might require a higher coverage than a marketing app); an organisation's test infrastructure and capabilities; etc.

Google's testing blog describes a varying scale of coverage, defining 60% as 'acceptable', 75% as 'commendable' and 90% as 'exemplary'.

To determine the coverage of your tests you will need to install a separate library. There are numerous libraries that can be used with pytest, such as coverage and pytest-cov. For this example we will use pytest-cov, which you can install using pip install pytest-cov.

Running the tests

There are different ways to run the tests, including:

- from the command line (Terminal)
- using the test runner integration in your IDE
- automatically as part of a continuous integration workflow

Since everyone should have a venv, let's use the Terminal for now (though it's likely to be much easier to use menu options/features in your IDE). Run pytest in the top-level folder for the project from the Terminal in your IDE:

python -m pytest -v

The '-v' option means verbose and gives you more detail in the terminal about the tests that have run and any errors. Refer to the pytest documentation for other parameters.

pytest will run all files of the form test_*.py or *_test.py in the current directory and its subdirectories.

Running the above line will result in output that looks a little like this:

(venv) localadmins-MacBook-Pro:comp0035_unit_testing localadmin$ python -m pytest -v
========================================================= test session starts ==========================================================
platform darwin -- Python 3.9.2, pytest-6.2.5, py-1.11.0, pluggy-1.0.0 -- /Users/localadmin/PycharmProjects/comp0035_unit_testing/venv/venv/bin/python
cachedir: .pytest_cache
rootdir: /Users/localadmin/PycharmProjects/comp0035_unit_testing
plugins: cov-3.0.0
collected 2 items

tests/test_algorithms.py::test_minimum_returns_correct_value PASSED [ 50%]
tests/test_algorithms.py::test_maximum_returns_correct_value PASSED [100%]

========================================================== 2 passed in 0.04s ===========================================================

If you want to run a specific test file only, try:

python -m pytest -v tests/test_user.py

To run the tests with coverage, add the --cov argument to indicate which Python package to check the coverage of.

Note: To run this successfully from the command line you will need to move the code you are testing into a Python package. A Python package is recognised by a directory that has within it a file called __init__.py (the contents of the __init__.py file can be empty). If you run the tests through your IDE's interface you won't need to do this.
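For example, the earlier car project structure would become (a sketch; only the __init__.py file is new):

├── .gitignore
├── README.md
├── src
│   ├── __init__.py
│   └── car.py
├── tests
│   └── test_car.py
└── .venv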

python -m pytest --cov=project_name


Running the above line will result in something like this:

(venv) localadmins-MacBook-Pro:algorithm localadmin$ python -m pytest --cov=algorithm tests/
================================================= test session starts ==================================================
platform darwin -- Python 3.9.2, pytest-6.2.5, py-1.11.0, pluggy-1.0.0
rootdir: /Users/localadmin/PycharmProjects/algorithm
plugins: cov-3.0.0
collected 2 items                                                                                                      

tests/test_algorithms.py ..                                                                                      [100%]

---------- coverage: platform darwin, python 3.9.2-final-0 -----------
Name           Stmts   Miss  Cover
----------------------------------
algorithm.py      24     12    50%
----------------------------------
TOTAL             24     12    50%

Investigate pytest-cov to see how you can get more detailed reports, for example:

python -m pytest --cov-report term-missing --cov=myproj tests/

Results in something like this, which shows which lines of code are not covered by the tests:

(venv) localadmins-MacBook-Pro:algorithm localadmin$ python -m pytest --cov-report term-missing --cov=algorithm tests/
================================================= test session starts ==================================================
platform darwin -- Python 3.9.2, pytest-6.2.5, py-1.11.0, pluggy-1.0.0
rootdir: /Users/localadmin/PycharmProjects/algorithm
plugins: cov-3.0.0
collected 2 items                                                                                                      

tests/test_algorithms.py ..                                                                                      [100%]

---------- coverage: platform darwin, python 3.9.2-final-0 -----------
Name           Stmts   Miss  Cover   Missing
--------------------------------------------
algorithm.py      24     12    50%   16, 22-35
--------------------------------------------
TOTAL             24     12    50%


================================================== 2 passed in 0.08s ===================================================

Issues running the tests

Sometimes you may experience issues with your local imports in your virtual environment when you run tests.

The pytest documentation suggests that you should install your own code as a package before running the tests.

To do this you will need to create a file called pyproject.toml in the root of your project with the following minimum content:

[build-system]
requires = ["setuptools"]
build-backend = "setuptools.build_meta"

[project]
name = "PACKAGENAME"

Where PACKAGENAME is the name of your package. You can then install your package in "editable" mode by running from the same directory:

pip install -e .

which lets you change your source code (both tests and application) and rerun tests at will.

Going beyond

If you want to go beyond what has been covered here you may wish to investigate how to run parameterised tests, which run the same test with multiple sets of input values.
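For example, a minimal sketch using pytest's @pytest.mark.parametrize decorator, reusing the algorithm module from earlier (the extra test values are illustrative):

import pytest

import algorithm


@pytest.mark.parametrize("values, expected", [
    ([2, 4, 8, 20], 2),   # smallest value at the start
    ([20, 8, 4, 2], 2),   # smallest value at the end
    ([-5, 0, 5], -5),     # negative numbers
])
def test_minimum_parametrised(values, expected):
    # The test runs once for each (values, expected) pair
    assert algorithm.minimum(values) == expected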

You are strongly encouraged to set up continuous integration using GitHub Actions to automatically run the tests for your project, as well as linting (part of the code quality checking).
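As a starting point, a minimal GitHub Actions workflow might look like the following (a sketch, assuming your dependencies are listed in requirements.txt; save it as .github/workflows/tests.yml and adjust names and versions to suit your project):

# Sketch: run the test suite on every push
name: tests

on: [push]

jobs:
  test:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v3
      - uses: actions/setup-python@v4
        with:
          python-version: '3.9'
      - name: Install dependencies
        run: |
          python -m pip install --upgrade pip
          pip install -r requirements.txt
          pip install pytest
      - name: Run tests
        run: python -m pytest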