Unit testing is a crucial part of the software development process, verifying
that individual components of your code function as expected.
Pytest, with its intuitive syntax, robust features, and
extensive plugin ecosystem, has emerged as a leading choice for Python unit
testing.
In this guide, we will explore the core principles of unit testing, delve into
Pytest's powerful capabilities, and equip you with the knowledge and skills to
write clean, maintainable, and effective tests.
By the end, you'll be well-versed in leveraging Pytest to improve your code
quality, catch bugs early, and build more reliable software.
Let's get started!
Prerequisites
Before proceeding with this tutorial, ensure that you have a
recent version of Python installed, and a
basic understanding of writing Python programs.
Side note: Keep your Python endpoints under watch
Even when your Pytest suite is green, real traffic can reveal issues you did not hit locally. Start monitoring key routes with Better Stack so you get alerted on failures, slow responses, or unexpected status codes in production.
Step 1 — Setting up the project directory
Before you can start learning about Pytest, you need to have a program to test.
In this section, you will create a small program that formats file sizes in
bytes in a human-readable format.
Start by creating and navigating to the project directory using the following
commands:
mkdir pytest-demo
cd pytest-demo
Within this directory, set up a virtual environment to isolate project
dependencies:
python -m venv venv
Next, activate the virtual environment:
source venv/bin/activate
Once activated, the command prompt will be prefixed with the name of the virtual
environment (venv in this case).
You can now proceed to create the file formatting program in the src
directory:
mkdir src
Ensure that the src directory is recognized as a package by adding an
__init__.py file:
touch src/__init__.py
Now, create a formatter.py file within the src directory and paste in the
contents below:
code src/formatter.py
src/formatter.py
import math


def format_file_size(size_bytes):
    if size_bytes < 0:
        raise ValueError("Size cannot be negative")
    elif size_bytes == 0:
        return "0B"

    size_name = ["B", "KB", "MB", "GB", "TB"]
    i = int(math.floor(math.log(size_bytes) / math.log(1024)))
    p = math.pow(1024, i)
    s = "{:.2f}".format(size_bytes / p)
    return f"{s} {size_name[i]}"
The format_file_size() function converts a size in bytes into a human-readable
format such as kilobytes, megabytes, or gigabytes.
In the root directory, create a main.py file that imports the
format_file_size() function and executes it:
code main.py
main.py
import sys

from src.formatter import format_file_size

if len(sys.argv) >= 2:
    try:
        size_bytes = int(sys.argv[1])
        formatted_size = format_file_size(size_bytes)
        print(formatted_size)
    except ValueError:
        print("Please provide a valid file size in bytes as a command-line argument.")
else:
    print("Please provide the file size in bytes as a command-line argument.")
This script serves as the entry point to our program. It reads the file size
from the command line, calls the format_file_size() function, and prints the
result.
Let's quickly test our script to make sure it's working as expected:
python main.py 1024
Output
1.00 KB
python main.py 3447099988
Output
3.21 GB
python main.py 0
Output
0B
With the demo program ready, you're now all set to dive into unit testing with
Pytest!
Step 2 — Writing your first test
In this section, you'll automate the testing process using Pytest. Instead of
manually providing input to your program, you'll write tests that feed in
various file sizes and verify if the output matches your expectations.
Before you can utilize Pytest in your project, you need to install it first
with:
pip install -U pytest
Once installed, create a directory where all your tests will be written. The
convention is to use a tests directory placed adjacent to your source code
like this:
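A minimal layout and first test, consistent with the test run in the next step,
might look like this (the tests/__init__.py file is an assumption that keeps the
src package importable when Pytest runs from the project root):

pytest-demo/
├── main.py
├── src/
│   ├── __init__.py
│   └── formatter.py
└── tests/
    ├── __init__.py
    └── test_format_file_size.py

tests/test_format_file_size.py

from src.formatter import format_file_size


def test_format_file_size_returns_GB_format():
    assert format_file_size(1024**3) == "1.00 GB"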
This simple test calls the format_file_size() function with 1024 cubed (which
represents 1 GB) and asserts that the returned value is exactly "1.00 GB". If the
function behaves as expected, this test will pass.
Now that you've written a test for the program, we'll look at how to execute the
test next.
Step 3 — Running your tests
To execute the test you just created, you must invoke the pytest command like
this:
pytest
Upon execution, you should see output similar to the following:
Output
============================= test session starts ==============================
platform linux -- Python 3.12.3, pytest-8.2.0, pluggy-1.5.0
rootdir: /home/stanley/pytest-demo
collected 1 item
tests/test_format_file_size.py . [100%]
============================== 1 passed in 0.01s ===============================
This output indicates that one test item was found in the
tests/test_format_file_size.py file, and it passed within 0.01 seconds.
If your terminal supports color, you'll also see a green summary line at the
bottom, which further signifies successful execution.
When you run the pytest command without any arguments, it searches through the
current directory and all its subdirectories for file names that begin with
test_ and executes the test functions within them.
Now that you've experienced running tests that pass, let's explore how Pytest
presents failing tests. Go ahead and modify the previous test function to fail
intentionally:
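One way to do this, consistent with the failure report shown below, is to pass 0
instead of 1024**3:

tests/test_format_file_size.py

from src.formatter import format_file_size


def test_format_file_size_returns_GB_format():
    assert format_file_size(0) == "1.00 GB"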
In case of a failure, Pytest will display a red line and a detailed error
report:
Output
. . .
tests/test_format_file_size.py F [100%]
=================================== FAILURES ===================================
___________________ test_format_file_size_returns_GB_format ____________________
    def test_format_file_size_returns_GB_format():
>       assert format_file_size(0) == "1.00 GB"
E       AssertionError: assert '0B' == '1.00 GB'
E
E       - 1.00 GB
E       + 0B

tests/test_format_file_size.py:5: AssertionError
=========================== short test summary info ============================
FAILED tests/test_format_file_size.py::test_format_file_size_returns_GB_format - AssertionError: assert '0B' == '1.00 GB'
============================== 1 failed in 0.02s ===============================
The output explains why the test failed: the traceback pinpoints the mismatch
between the expected result ("1.00 GB") and the actual result ("0B"). The short
test summary that follows condenses the same failure into a single line without
the traceback.
If you're only interested in the summary and don't need the traceback, use the
--tb=no option:
pytest --tb=no
The output will then appear as follows:
Output
...
tests/test_format_file_size.py F [100%]
=========================== short test summary info ============================
FAILED tests/test_format_file_size.py::test_format_file_size_returns_GB_format - AssertionError: assert '0B' == '1.00 GB'
============================== 1 failed in 0.01s ===============================
In this output, you only see the summaries, which can help you quickly
understand why a test failed without the clutter of the traceback.
You may now revert your changes to the test function to see it passing once
again. You can also use the "quiet" reporting mode with the -q flag, which
keeps the output brief:
pytest -q
Output
. [100%]
1 passed in 0.12s
This iterative cycle of writing tests, running them, and fixing issues is the
core of automated testing, and it helps you develop more reliable and
maintainable software.
Step 4 — Designing your tests
In this section, you'll explore conventions and best practices you should follow
when designing tests with Pytest to ensure clarity and maintainability.
Setting up test files
As you've already seen, test files (beginning with test_) are generally placed
in a dedicated tests directory. This naming convention is crucial because
Pytest relies on it to discover and run your tests.
If you prefer, you can place test files alongside their corresponding source
files. For example, example.py and its test file test_example.py can reside
in the same directory:
Output
src/
│
├── example.py
└── test_example.py
Pytest also accepts filenames ending with _test.py, but this is less common.
Functions, classes, and test methods
During testing, you can create functions or classes to organize your assertions.
As demonstrated earlier, function-based tests should be prefixed with test_.
While it's not mandatory to include an underscore, it's recommended for clarity:
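For instance, both of the following names are picked up by Pytest's default
discovery, but the first reads much better (illustrative names):

def test_format_file_size_returns_GB_format():
    assert format_file_size(1024**3) == "1.00 GB"


def testformatfilesizegb():
    assert format_file_size(1024**3) == "1.00 GB"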
Similarly, when using classes for testing, the class name should be prefixed
with Test (capitalized), and its methods should also be prefixed with test_:
class TestFormatFileSize:
    def test_format_file_size_returns_GB_format(self):
        assert format_file_size(1024**3) == "1.00 GB"
Naming patterns
Descriptive names for functions, methods, and classes are crucial for test
readability and maintainability. Avoid generic names like test_function and
instead opt for descriptive names that convey what the test is validating.
Consider the following examples of well-named test files and functions:
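For instance, names along these lines convey exactly what is being verified
(illustrative examples):

tests/test_formatter.py                                 # file named after the module under test

def test_format_file_size_returns_kb_format(): ...      # states the expected behavior
def test_format_file_size_raises_error_for_negative_size(): ...  # describes the edge case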
By adhering to these guidelines, you'll create a test suite that is not only
functional but also easy to navigate, understand, and maintain.
Step 5 — Filtering tests
As your test suite grows, running every test with each change can become
time-consuming. Pytest provides several methods for selectively running the
tests you're currently focused on.
Before proceeding, modify your test file with additional test cases as follows:
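For instance, a version with one test per supported unit might look like this
(illustrative values; the names are chosen so the filtering example below has a
match):

tests/test_format_file_size.py

from src.formatter import format_file_size


def test_format_file_size_returns_zero_format():
    assert format_file_size(0) == "0B"


def test_format_file_size_returns_format_b():
    assert format_file_size(512) == "512.00 B"


def test_format_file_size_returns_format_kb():
    assert format_file_size(1024) == "1.00 KB"


def test_format_file_size_returns_format_mb():
    assert format_file_size(5 * 1024**2) == "5.00 MB"


def test_format_file_size_returns_format_gb():
    assert format_file_size(1024**3) == "1.00 GB"


def test_format_file_size_returns_format_tb():
    assert format_file_size(2 * 1024**4) == "2.00 TB"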
Pytest's -k option allows you to filter tests based on substring matches or
Python expressions. For example, to execute only tests with the "mb" substring,
run:
pytest -k mb
In this case, the command executes only the test_format_file_size_returns_format_mb
test, since it's the only test whose name contains the mb string.
By selectively running tests, you can save time during development and focus on
testing the specific parts of your code that you're actively working on.
Side note: Diagnose production exceptions with full context
Tests reduce risk, but they cannot cover every input, integration, or environment difference. Better Stack captures unhandled exceptions with the request and runtime context you need to reproduce issues quickly and ship a fix with confidence.
Step 6 — Providing multiple test cases
It's common to test the same function multiple times with different inputs.
Instead of writing repetitive test functions, Pytest offers a streamlined way to
handle this using parametrization.
Consider the test_format_file_size.py file from the previous section. It
contains multiple test functions, each verifying a different formatting scenario
for the format_file_size() function. While each test has a unique purpose, the
structure becomes repetitive as only the input values and expected results
change.
Pytest's pytest.mark.parametrize decorator solves this problem by allowing you
to concisely define multiple test cases within a single function.
To apply this approach, rewrite the contents of the test_format_file_size.py
as follows:
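A sketch of the rewrite, reusing the same illustrative values as before, might
look like this:

tests/test_format_file_size.py

import pytest

from src.formatter import format_file_size


@pytest.mark.parametrize(
    "size_bytes, expected_output",
    [
        pytest.param(0, "0B"),
        pytest.param(512, "512.00 B"),
        pytest.param(1024, "1.00 KB"),
        pytest.param(5 * 1024**2, "5.00 MB"),
        pytest.param(1024**3, "1.00 GB"),
        pytest.param(2 * 1024**4, "2.00 TB"),
    ],
)
def test_format_file_size(size_bytes, expected_output):
    assert format_file_size(size_bytes) == expected_output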
The test_format_file_size() function now incorporates parametrization through
the @pytest.mark.parametrize() decorator, which lists the test cases as
(size_bytes, expected_output) pairs defined with pytest.param().
Pytest will run this function multiple times, once for each tuple, effectively
creating separate test cases. Execute the command below to see this in action:
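For example, pointing Pytest at the file is enough:

pytest tests/test_format_file_size.py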
Rather than seeing all six tests as a collective block passing, you can use the
-v option to display each test individually, with Pytest assigning a unique
test ID to each:
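For example, each case then appears on its own line with an ID derived from its
parameter values (output omitted here):

pytest -v tests/test_format_file_size.py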
With parametrization, your test suite becomes more concise, easier to maintain,
and checks a broader range of scenarios without code duplication.
Step 7 — Parametrizing tests using data classes
You used pytest.param() in the previous step to define the test cases. In this
section, I'll show you an alternative way to parametrize test cases using data
classes.
Data classes offer a more structured and organized way to define test cases in
the following ways:

- They logically group related test data (input values, expected outputs, and IDs) into a single object.
- This grouping improves the readability of your test cases and makes it easier to understand what each test is doing.
- You can set default values for fields, reducing redundancy if many test cases share similar properties.
Let's convert the parametrized test from the previous section to use the data
class pattern. Here's the updated code:
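A sketch matching the description that follows, reusing the same illustrative
values, might look like this:

tests/test_format_file_size.py

from dataclasses import dataclass, field

import pytest

from src.formatter import format_file_size


@dataclass
class FileSizeTestCase:
    size_bytes: int
    expected_result: str
    id: str = field(init=False)

    def __post_init__(self):
        # Derive a readable, unique ID from the input size
        self.id = f"size_{self.size_bytes}"


test_cases = [
    FileSizeTestCase(size_bytes=0, expected_result="0B"),
    FileSizeTestCase(size_bytes=512, expected_result="512.00 B"),
    FileSizeTestCase(size_bytes=1024, expected_result="1.00 KB"),
    FileSizeTestCase(size_bytes=5 * 1024**2, expected_result="5.00 MB"),
    FileSizeTestCase(size_bytes=1024**3, expected_result="1.00 GB"),
    FileSizeTestCase(size_bytes=2 * 1024**4, expected_result="2.00 TB"),
]


@pytest.mark.parametrize("test_case", test_cases, ids=[tc.id for tc in test_cases])
def test_format_file_size(test_case):
    assert format_file_size(test_case.size_bytes) == test_case.expected_result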
This code defines a FileSizeTestCase class using the @dataclass decorator.
It defines three attributes: size_bytes, expected_result, and id. The id
attribute is initialized in the __post_init__ method, which assigns it a
unique value based on the size_bytes attribute.
This FileSizeTestCase class represents a blueprint for our test cases. Each
test case will be an instance of this class.
The @pytest.mark.parametrize decorator tells Pytest to run the
test_format_file_size() function multiple times – once for each item in the
test_cases list. In each run, the test_case parameter will be an instance of
FileSizeTestCase. The ids argument in the decorator ensures that each test
case in the output is clearly labelled with its generated ID.
Step 8 — Testing exceptions
When your code includes exception handling, it's important to confirm that
specific exceptions are raised under the right conditions. The pytest.raises()
function is designed for testing such scenarios.
For instance, the format_file_size() function raises a ValueError if the
input is a negative integer:
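Recall the relevant check from src/formatter.py:

def format_file_size(size_bytes):
    if size_bytes < 0:
        raise ValueError("Size cannot be negative")
    ...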
You can test if the ValueError exception is raised using pytest.raises()
with:
def test_format_file_size_negative_size():
    with pytest.raises(ValueError, match="Size cannot be negative"):
        format_file_size(-1)
This code passes a negative input (-1) to format_file_size(), and the
pytest.raises() context manager verifies that a ValueError with the message
"Size cannot be negative" is raised.
You can integrate this case into your parametrized test as follows:
tests/test_format_file_size.py

from dataclasses import dataclass, field
import pytest
from src.formatter import format_file_size


@dataclass
class FileSizeTestCase:
    size_bytes: int
    expected_result: str
    expected_error: type | None = None  # exception class expected to be raised, if any
    error_message: str | None = None  # expected exception message
    id: str = field(init=False)

    def __post_init__(self):
        self.id = f"size_{self.size_bytes}"
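A sketch of the test cases and the test function, matching the description that
follows, might look like this (illustrative values):

test_cases = [
    FileSizeTestCase(size_bytes=1024**3, expected_result="1.00 GB"),
    # ... the other passing cases from the previous step go here ...
    FileSizeTestCase(
        size_bytes=-1,
        expected_result="",
        expected_error=ValueError,
        error_message="Size cannot be negative",
    ),
]


@pytest.mark.parametrize("test_case", test_cases, ids=[tc.id for tc in test_cases])
def test_format_file_size(test_case):
    if test_case.expected_error is not None:
        # Expect the call to raise the configured exception with a matching message
        with pytest.raises(test_case.expected_error, match=test_case.error_message):
            format_file_size(test_case.size_bytes)
    else:
        assert format_file_size(test_case.size_bytes) == test_case.expected_result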
Here, two fields were added to the FileSizeTestCase class: expected_error and
error_message. They signal whether an error is expected and what its message
should be. The new test case then passes -1 as input to trigger the error, with
the expected_error and error_message fields set accordingly.
Finally, in the test_format_file_size() function, pytest.raises() checks the
exception type and the error message. If no error is expected, assert ensures
that the formatted output matches the expected result.
Upon saving and running the test, you'll see that it passes, confirming that the
ValueError exception was raised.
For error messages that may vary slightly, you can use regular expressions with
pytest.raises():
with pytest.raises(ValueError, match=r'value must be \d+$'):
    raise ValueError('value must be 42')
With this in place, you can efficiently verify that your code raises the
expected exceptions under various conditions.
Step 9 — Using Pytest fixtures
Having gained familiarity with writing and executing tests, let's turn our
attention to helper functions known as fixtures. Fixtures are special functions
that Pytest executes before or after tests to assist with setup tasks or to
provide necessary data. Using fixtures minimizes repetition and improves
maintainability by centralizing common setup procedures.
Although the topic of Pytest fixtures is extensive enough to warrant a dedicated
article, this section aims to provide a concise introduction to their
fundamental principles.
To get started with fixtures, create a file named
test_format_file_size_with_fixtures.py in your editor and include the
following code:
tests/test_format_file_size_with_fixtures.py
import pytest


@pytest.fixture()
def welcome_message():
    """Return a welcome message."""
    return "Welcome to our application!"


def test_welcome_message(welcome_message):
    """Test if the fixture returns the correct welcome message."""
    assert welcome_message == "Welcome to our application!"
The @pytest.fixture() decorator defines a fixture in Pytest. Such fixtures,
like welcome_message(), can execute setup tasks and deliver data to test
functions. When a test function lists a fixture by name as a parameter, Pytest
automatically invokes the fixture function before running the test function.
Execute these tests by running the following command:
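For example, pointing Pytest at the new file works:

pytest tests/test_format_file_size_with_fixtures.py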
A more practical use of fixtures is setting up a database, as shown in the
following example with SQLite:
tests/test_database_fixture.py
import pytest
import sqlite3


@pytest.fixture(scope="module")
def db_connection(request):
    """Create a SQLite database connection for testing."""
    conn = sqlite3.connect(":memory:")
    c = conn.cursor()
    c.execute(
        """CREATE TABLE users
        (username TEXT, email TEXT)"""
    )
    conn.commit()

    def teardown():
        """Close the database connection after the test."""
        conn.close()

    request.addfinalizer(teardown)

    return conn


def create_user(conn, username, email):
    """Create a new user in the database."""
    c = conn.cursor()
    c.execute("INSERT INTO users VALUES (?, ?)", (username, email))
    conn.commit()


def update_email(conn, username, new_email):
    """Update user's email in the database."""
    c = conn.cursor()
    c.execute("UPDATE users SET email = ? WHERE username = ?", (new_email, username))
    conn.commit()
The db_connection() fixture is set to the module scope to ensure it runs
once per test module. It sets up an in-memory SQLite database connection,
establishes a users table, and manages the connection.
Functions like create_user() and update_email() use this database to add
records and update email addresses. The database connection is closed after each
test module through the fixture.
Now, add these tests to assess the functionality of creating users and updating
their email addresses:
tests/test_database_fixture.py
. . .

def test_create_user(db_connection):
    """Validate the creation of a new user in the database."""
    create_user(db_connection, "user1", "user1@example.com")

    # Verify the user's presence in the database
    cursor = db_connection.cursor()
    cursor.execute("SELECT * FROM users WHERE username=?", ("user1",))
    result = cursor.fetchone()

    assert result is not None  # Confirm the user's existence
    assert result[1] == "user1@example.com"  # Confirm the correct email of the user


def test_update_email(db_connection):
    """Check the update of a user's email in the database."""
    create_user(db_connection, "user2", "user2@example.com")
    update_email(db_connection, "user2", "new_email@example.com")

    # Confirm the email update in the database
    cursor = db_connection.cursor()
    cursor.execute("SELECT email FROM users WHERE username=?", ("user2",))
    result = cursor.fetchone()

    assert result is not None  # Confirm the user's existence
    assert result[0] == "new_email@example.com"  # Confirm the updated email
The test_create_user() function checks the process of creating new user
entries by adding users with specific usernames and email addresses, then
confirming their existence in the database with the correct email.
On the other hand, the test_update_email() function tests the ability to
update user email addresses. It involves initially creating a user entry,
changing the user's email, and verifying the update by querying the database for
the new email.
With the basics of fixtures covered, let's close out this article by exploring
some useful Pytest plugins.
Step 10 — Extending Pytest with plugins
Pytest offers a long list of
plugins to
enhance its capabilities, ranging from integrating with frameworks like Django
and Flask to providing coverage reports.
For example, the pytest-timeout plugin can enforce timeouts on tests, helping
you identify slow tests that could otherwise run indefinitely.
Consider the following example:
tests/test_with_timeout.py
import pytest
import time


@pytest.mark.timeout(5)  # Set a 5-second timeout
def test_function_with_timeout():
    time.sleep(3)  # Simulate some work (should pass)
    assert True


@pytest.mark.timeout(1)  # Set a 1-second timeout
def test_function_exceeding_timeout():
    time.sleep(2)  # Simulate work that takes too long (should fail)
    assert True
The @pytest.mark.timeout(seconds) marker from the pytest-timeout plugin sets
the maximum allowable execution time for the test function. In this example, the
first test should pass because it completes within the 5-second limit, while the
second test should fail because it deliberately takes 2 seconds, exceeding the
1-second limit.
The time.sleep() calls are placeholders for the logic you want to test.
Replace them with the functions or code segments you need to time-constrain.
Before running the tests, install the pytest-timeout plugin:
pip install pytest-timeout
Afterwards, execute the tests through the command below:
pytest tests/test_with_timeout.py
The output will show one test passing and the other failing due to the timeout.
You'll see the Failed: Timeout >1.0s message in the failure report:
Output
============================= test session starts ==============================
platform linux -- Python 3.12.3, pytest-8.2.0, pluggy-1.5.0
rootdir: /home/ayo/dev/betterstack/demo/pytest-demo
plugins: timeout-2.3.1
collected 2 items
tests/test_with_timeout.py .F [100%]
=================================== FAILURES ===================================
_______________________ test_function_exceeding_timeout ________________________
    @pytest.mark.timeout(1)  # Set a 1-second timeout
    def test_function_exceeding_timeout():
>       time.sleep(2)  # Simulate work that takes too long (should fail)
E       Failed: Timeout >1.0s

tests/test_with_timeout.py:13: Failed
=========================== short test summary info ============================
FAILED tests/test_with_timeout.py::test_function_exceeding_timeout - Failed: ...
========================= 1 failed, 1 passed in 4.02s ==========================
It's also possible to set a global timeout for all tests using the --timeout
flag, a timeout configuration option,
or the PYTEST_TIMEOUT environment variable:
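For example, either of the following caps every test at 60 seconds (an
illustrative value):

pytest --timeout=60

pytest.ini

[pytest]
timeout = 60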
Other popular plugins include:

- pytest-sugar: Modifies the default Pytest interface to a more visually appealing one.
- pytest-xdist: Enables parallel execution of tests.

Check out the plugins page for the full list.
Side note: Centralize your test and runtime logs
When debugging a flaky test or a CI only failure, having logs in one place saves a lot of time. Better Stack lets you aggregate Python logs across local, CI, staging, and production so you can correlate failures with the exact run and environment.
Final thoughts
This article provided a comprehensive walkthrough of many Pytest features,
including parameterization, fixtures, plugins, and more. A future article will
delve into more advanced techniques to further elevate your testing skills.
To continue learning about Pytest, check out the official
documentation. You'll find extensive
resources to deepen your understanding and proficiency with Pytest.