A Beginner's Guide to Unit Testing with Pytest
Unit testing is a crucial part of the software development process that ensures individual components of your code function as expected. Pytest, with its intuitive syntax, robust features, and extensive plugin ecosystem, has emerged as a leading choice for Python unit testing.
In this guide, we will explore the core principles of unit testing, delve into Pytest's powerful capabilities, and equip you with the knowledge and skills to write clean, maintainable, and effective tests.
By the end, you'll be well-versed in leveraging Pytest to improve your code quality, catch bugs early, and build more reliable software.
Let's get started!
Prerequisites
Before proceeding with this tutorial, ensure that you have a recent version of Python installed, and a basic understanding of writing Python programs.
Step 1 — Setting up the project directory
Before you can start learning about Pytest, you need a program to test. In this section, you will create a small program that converts file sizes in bytes into a human-readable format.
Start by creating and navigating to the project directory using the following commands:
mkdir pytest-demo
cd pytest-demo
Within this directory, set up a virtual environment to isolate project dependencies:
python -m venv venv
Next, activate the virtual environment:
source venv/bin/activate
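If you're on Windows, activate it with the following command instead:
venv\Scripts\activate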
Once activated, the command prompt will be prefixed with the name of the virtual environment (venv in this case):
(venv) <your_username>@<your_computer>:~/pytest-demo$
You can now proceed to create the file size formatting program in the src directory:
mkdir src
Ensure that the src directory is recognized as a package by adding an __init__.py file:
touch src/__init__.py
Now, create a formatter.py file within the src directory and paste in the contents below:
code src/formatter.py
import math


def format_file_size(size_bytes):
    if size_bytes < 0:
        raise ValueError("Size cannot be negative")
    elif size_bytes == 0:
        return "0B"
    size_name = ["B", "KB", "MB", "GB", "TB"]
    i = int(math.floor(math.log(size_bytes) / math.log(1024)))
    p = math.pow(1024, i)
    s = "{:.2f}".format(size_bytes / p)
    return f"{s} {size_name[i]}"
The format_file_size() function converts sizes in bytes into human-readable formats such as kilobytes, megabytes, or gigabytes.
In the root directory, create a main.py file that imports the format_file_size() function and executes it:
code main.py
import sys

from src.formatter import format_file_size

if len(sys.argv) >= 2:
    try:
        size_bytes = int(sys.argv[1])
        formatted_size = format_file_size(size_bytes)
        print(formatted_size)
    except ValueError:
        print("Please provide a valid file size in bytes as a command-line argument.")
else:
    print("Please provide the file size in bytes as a command-line argument.")
This script serves as the entry point to our program. It reads the file size from the command line, calls the format_file_size() function, and prints the result.
Let's quickly test our script to make sure it's working as expected:
python main.py 1024
1.00 KB
python main.py 3447099988
3.21 GB
python main.py 0
0B
With the demo program ready, you're now all set to dive into unit testing with Pytest!
Step 2 — Writing your first test
In this section, you'll automate the testing process using Pytest. Instead of manually providing input to your program, you'll write tests that feed in various file sizes and verify if the output matches your expectations.
Before you can utilize Pytest in your project, you need to install it first with:
pip install -U pytest
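You can confirm that the installation succeeded by checking the installed version (yours may differ):
pytest --version
pytest 8.2.0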
Once installed, create a directory where all your tests will be written. The convention is to use a tests directory placed adjacent to your source code like this:
project/
│
├── src/
│   ├── __init__.py
│   └── your_module_name.py
│
└── tests/
    └── test_module_functions.py
Go ahead and create the tests directory and a test_format_file_size.py file within this directory:
mkdir tests
code tests/test_format_file_size.py
Populate this file with the following contents:
from src.formatter import format_file_size


def test_format_file_size_returns_GB_format():
    assert format_file_size(1024**3) == "1.00 GB"
This simple test calls the format_file_size() function with 1024 cubed (which represents 1 GB) and asserts that the returned value is exactly "1.00 GB". If the function behaves as expected, this test will pass.
Now that you've written a test for the program, we'll look at how to execute the test next.
Step 3 — Running your tests
To execute the test you just created, invoke the pytest command like this:
pytest
Upon execution, you should see output similar to the following:
============================= test session starts ==============================
platform linux -- Python 3.12.3, pytest-8.2.0, pluggy-1.5.0
rootdir: /home/stanley/pytest-demo
collected 1 item
tests/test_format_file_size.py . [100%]
============================== 1 passed in 0.01s ===============================
This output indicates that one test item was found in the tests/test_format_file_size.py file, and it passed within 0.01 seconds.
If your terminal supports color, you'll also see a green line at the bottom, which further signifies successful execution.
When you run the pytest command without any arguments, it searches the current directory and all its subdirectories for files whose names begin with test_ and executes the test functions within them.
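These discovery rules are configurable. If your project follows different conventions, you can override the defaults in a pytest.ini file at the project root. The values below mirror the defaults and are only illustrative:

[pytest]
testpaths = tests
python_files = test_*.py
python_classes = Test*
python_functions = test_*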
Now that you've experienced running tests that pass, let's explore how Pytest presents failing tests. Go ahead and modify the previous test function to fail intentionally:
...
def test_format_file_size_returns_GB_format():
    assert format_file_size(0) == "1.00 GB"
Then re-run the test as follows:
pytest
In case of a failure, Pytest will display a red line and a detailed error report:
. . .
tests/test_format_file_size.py F [100%]
=================================== FAILURES ===================================
___________________ test_format_file_size_returns_GB_format ____________________
    def test_format_file_size_returns_GB_format():
>       assert format_file_size(0) == "1.00 GB"
E       AssertionError: assert '0B' == '1.00 GB'
E
E         - 1.00 GB
E         + 0B

tests/test_format_file_size.py:5: AssertionError
=========================== short test summary info ============================
FAILED tests/test_format_file_size.py::test_format_file_size_returns_GB_format - AssertionError: assert '0B' == '1.00 GB'
============================== 1 failed in 0.02s ===============================
The output explains why the test failed, with a traceback indicating a mismatch between the expected result ("1.00 GB") and the actual result ("0B"). The subsequent summary block provides a concise overview of the failure without the traceback.
If you're only interested in the summary and don't need the traceback, use the --tb=no option:
pytest --tb=no
The output will then appear as follows:
...
tests/test_format_file_size.py F [100%]
=========================== short test summary info ============================
FAILED tests/test_format_file_size.py::test_format_file_size_returns_GB_format - AssertionError: assert '0B' == '1.00 GB'
============================== 1 failed in 0.01s ===============================
In this output, you only see the summaries, which can help you quickly understand why a test failed without the clutter of the traceback.
You may now revert your changes to the test function to see it passing once again. You can also use the "quiet" reporting mode with the -q flag, which keeps the output brief:
pytest -q
. [100%]
1 passed in 0.12s
This iterative process of writing tests, running them, and fixing issues is the core of automated testing, and it helps you develop more reliable and maintainable software.
Step 4 — Designing your tests
In this section, you'll explore conventions and best practices you should follow when designing tests with Pytest to ensure clarity and maintainability.
Setting up test files
As you've already seen, test files (beginning with test_) are generally placed in a dedicated tests directory. This naming convention is crucial because Pytest relies on it to discover and run your tests.
If you prefer, you can place test files alongside their corresponding source files. For example, example.py and its test file test_example.py can reside in the same directory:
src/
│
├── example.py
└── test_example.py
Pytest also accepts filenames ending with _test.py, but this is less common.
Functions, classes, and test methods
During testing, you can create functions or classes to organize your assertions.
As demonstrated earlier, function-based tests should be prefixed with test_. While it's not mandatory to include an underscore, it's recommended for clarity:
def test_format_file_size_returns_GB_format():
    assert format_file_size(1024**3) == "1.00 GB"
Similarly, when using classes for testing, the class name should be prefixed with Test (capitalized), and its methods should also be prefixed with test_:
class TestFormatFileSize:
    def test_format_file_size_returns_GB_format(self):
        assert format_file_size(1024**3) == "1.00 GB"
Naming patterns
Descriptive names for functions, methods, and classes are crucial for test readability and maintainability. Avoid generic names like test_function and instead opt for descriptive names that convey what the test is validating.
Consider the following examples of well-named test files and functions:
# File names
test_compare_recursive_dataclasses.py
test_fixture_named_request.py
# Function names
test_factorial_of_large_number()
test_user_authentication_success()
test_user_authentication_failure()
By adhering to these guidelines, you'll create a test suite that is not only functional but also easy to navigate, understand, and maintain.
Step 5 — Filtering tests
As your test suite grows, running every test with each change can become time-consuming. Pytest provides several methods for selectively running the tests you're currently focused on.
Before proceeding, modify your test file with additional test cases as follows:
from src.formatter import format_file_size


def test_format_file_size_returns_format_zero():
    assert format_file_size(0) == "0B"


def test_format_file_size_returns_format_one_byte():
    assert format_file_size(1) == "1.00 B"


def test_format_file_size_returns_format_kb():
    assert format_file_size(1024) == "1.00 KB"


def test_format_file_size_returns_format_mb():
    assert format_file_size(1024**2) == "1.00 MB"


def test_format_file_size_returns_format_gb():
    assert format_file_size(1024**3) == "1.00 GB"


def test_format_file_size_returns_format_tb():
    assert format_file_size(1024**4) == "1.00 TB"
Running tests from a specific file
If you have multiple test files and want to run tests only within a specific file, you can simply provide the file path to Pytest:
pytest tests/test_format_file_size.py
Running a specific test function
To target a single test function within a file, append :: followed by the function name to the file path:
pytest tests/test_format_file_size.py::test_format_file_size_returns_format_tb
Pytest's runner will execute the specified test alone:
. . .
collected 1 item
tests/test_format_file_size.py . [100%]
============================== 1 passed in 0.01s ===============================
Running tests from a specific class
If your tests are organized in classes, you can execute tests from a specific class like this:
pytest tests/test_file.py::<TestClassName>
To execute only a specific method within that class:
pytest tests/test_file.py::<TestClassName>::<test_method>
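For example, assuming the TestFormatFileSize class from Step 4 lives in tests/test_format_file_size.py, you could run its single method with:
pytest tests/test_format_file_size.py::TestFormatFileSize::test_format_file_size_returns_GB_format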
Filtering with the -k option
Pytest's -k option allows you to filter tests based on substring matches or Python expressions. For example, to execute only tests with the "mb" substring, run:
pytest -k mb
This command will execute only the test_format_file_size_returns_format_mb test in this case (since it's the only test that contains the mb string):
collected 6 items / 5 deselected / 1 selected
tests/test_format_file_size.py . [100%]
======================= 1 passed, 5 deselected in 0.01s ========================
You can verify this by adding the -v (verbose) option:
pytest -k mb -v
...
collected 6 items / 5 deselected / 1 selected
tests/test_format_file_size.py::test_format_file_size_returns_format_mb PASSED [100%]
======================= 1 passed, 5 deselected in 0.01s ========================
You can also exclude tests by using the not keyword:
pytest -k "not gb and not mb" -v
This will execute the tests that contain neither "gb" nor "mb" in their names:
. . .
collecting ... collected 6 items / 2 deselected / 4 selected
tests/test_format_file_size.py::test_format_file_size_returns_format_zero PASSED [ 25%]
tests/test_format_file_size.py::test_format_file_size_returns_format_one_byte PASSED [ 50%]
tests/test_format_file_size.py::test_format_file_size_returns_format_kb PASSED [ 75%]
tests/test_format_file_size.py::test_format_file_size_returns_format_tb PASSED [100%]
======================= 4 passed, 2 deselected in 0.01s ========================
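The -k expressions also support the or keyword, which is handy for running a handful of related tests together. For instance, the following selects only the kilobyte and terabyte tests from the suite above:
pytest -k "kb or tb" -v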
By selectively running tests, you can save time during development and focus on testing the specific parts of your code that you're actively working on.
Step 6 — Providing multiple test cases
It's common to test the same function multiple times with different inputs. Instead of writing repetitive test functions, Pytest offers a streamlined way to handle this using parametrization.
Consider the test_format_file_size.py file from the previous section. It contains multiple test functions, each verifying a different formatting scenario for the format_file_size() function. While each test has a unique purpose, the structure becomes repetitive as only the input values and expected results change.
Pytest's pytest.mark.parametrize decorator solves this problem by allowing you to concisely define multiple test cases within a single function.
To apply this approach, rewrite the contents of test_format_file_size.py as follows:
import pytest

from src.formatter import format_file_size


@pytest.mark.parametrize(
    "size_bytes, expected_result",
    [
        (0, "0B"),
        (1, "1.00 B"),
        (1024, "1.00 KB"),
        (1024**2, "1.00 MB"),
        (1024**3, "1.00 GB"),
        (1024**4, "1.00 TB"),
    ],
)
def test_format_file_size(size_bytes, expected_result):
    assert format_file_size(size_bytes) == expected_result
The test_format_file_size() function now incorporates parametrization through the @pytest.mark.parametrize() decorator, which lists the test cases as tuples: (size_bytes, expected_result).
Pytest will run this function multiple times, once for each tuple, effectively creating separate test cases. Execute the command below to see this in action:
pytest
. . .
collected 6 items
tests/test_format_file_size.py ...... [100%]
============================== 6 passed in 0.01s ===============================
Rather than seeing all six tests pass as a collective block, you can use the -v option to display each test individually, with Pytest assigning a unique test ID to each:
pytest -v
collected 6 items
tests/test_format_file_size.py::test_format_file_size[0-0B] PASSED [ 16%]
tests/test_format_file_size.py::test_format_file_size[1-1.00 B] PASSED [ 33%]
tests/test_format_file_size.py::test_format_file_size[1024-1.00 KB] PASSED [ 50%]
tests/test_format_file_size.py::test_format_file_size[1048576-1.00 MB] PASSED [ 66%]
tests/test_format_file_size.py::test_format_file_size[1073741824-1.00 GB] PASSED [ 83%]
tests/test_format_file_size.py::test_format_file_size[1099511627776-1.00 TB] PASSED [100%]
============================== 6 passed in 0.01s ===============================
By default, Pytest generates test IDs based on input values. For more descriptive IDs, use pytest.param() within your test cases:
import pytest

from src.formatter import format_file_size


@pytest.mark.parametrize(
    "size_bytes, expected_result",
    [
        pytest.param(0, "0B", id="test_format_file_size_returns_format_zero"),
        pytest.param(1, "1.00 B", id="test_format_file_size_returns_format_one_byte"),
        pytest.param(1024, "1.00 KB", id="test_format_file_size_returns_format_kb"),
        pytest.param(1024**2, "1.00 MB", id="test_format_file_size_returns_format_mb"),
        pytest.param(1024**3, "1.00 GB", id="test_format_file_size_returns_format_gb"),
        pytest.param(1024**4, "1.00 TB", id="test_format_file_size_returns_format_tb"),
    ],
)
def test_format_file_size(size_bytes, expected_result):
    assert format_file_size(size_bytes) == expected_result
Now, the test output will display your custom IDs, making it clearer what each test is checking.
pytest -v
...
collected 6 items
tests/test_format_file_size.py::test_format_file_size[test_format_file_size_returns_format_zero] PASSED [ 16%]
tests/test_format_file_size.py::test_format_file_size[test_format_file_size_returns_format_one_byte] PASSED [ 33%]
tests/test_format_file_size.py::test_format_file_size[test_format_file_size_returns_format_kb] PASSED [ 50%]
tests/test_format_file_size.py::test_format_file_size[test_format_file_size_returns_format_mb] PASSED [ 66%]
tests/test_format_file_size.py::test_format_file_size[test_format_file_size_returns_format_gb] PASSED [ 83%]
tests/test_format_file_size.py::test_format_file_size[test_format_file_size_returns_format_tb] PASSED [100%]
============================== 6 passed in 0.01s ===============================
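Note that you can also stack multiple @pytest.mark.parametrize decorators on a single function, in which case Pytest runs the test once for every combination of the parameters. Here's a minimal sketch with made-up values, unrelated to the formatter:

import pytest


@pytest.mark.parametrize("x", [0, 1])
@pytest.mark.parametrize("y", [2, 3])
def test_combinations(x, y):
    # Runs four times: (0, 2), (1, 2), (0, 3), (1, 3)
    assert x < y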
With parametrization, your test suite becomes more concise, easier to maintain, and checks a broader range of scenarios without code duplication.
Step 7 — Parametrizing tests using data classes
You used pytest.param() in the previous step to define the test cases. In this section, I'll show you an alternative way to parametrize test cases using data classes.
Data classes offer a more structured and organized way to define test cases in the following ways:
- They logically group related test data (input values, expected outputs, and IDs) into a single object.
- This grouping improves the readability of your test cases and makes it easier to understand what each test is doing.
- You can set default values for fields, reducing redundancy if many test cases share similar properties.
Let's convert the parametrized test from the previous section to use the data class pattern. Here's the updated code:
from dataclasses import dataclass, field

import pytest

from src.formatter import format_file_size


@dataclass
class FileSizeTestCase:
    size_bytes: int
    expected_result: str
    id: str = field(init=False)

    def __post_init__(self):
        self.id = f"test_format_file_size_{self.size_bytes}_bytes"


test_cases = [
    FileSizeTestCase(0, "0B"),
    FileSizeTestCase(1, "1.00 B"),
    FileSizeTestCase(1024, "1.00 KB"),
    FileSizeTestCase(1024**2, "1.00 MB"),
    FileSizeTestCase(1024**3, "1.00 GB"),
    FileSizeTestCase(1024**4, "1.00 TB"),
]


@pytest.mark.parametrize("test_case", test_cases, ids=lambda tc: tc.id)
def test_format_file_size(test_case):
    assert format_file_size(test_case.size_bytes) == test_case.expected_result
This code defines a FileSizeTestCase class using the @dataclass decorator. It has three attributes: size_bytes, expected_result, and id. The id attribute is initialized in the __post_init__ method, which assigns it a unique value based on the size_bytes attribute.
The FileSizeTestCase class serves as a blueprint for our test cases: each test case is an instance of this class.
The @pytest.mark.parametrize decorator tells Pytest to run the test_format_file_size() function multiple times, once for each item in the test_cases list. In each run, the test_case parameter will be an instance of FileSizeTestCase. The ids argument in the decorator ensures that each test case in the output is clearly labeled with its generated ID.
When you re-run the tests with:
pytest tests/test_format_file_size.py -v
You will see that all the tests pass as before:
. . .
collected 6 items
tests/test_format_file_size.py::test_format_file_size[test_format_file_size_0_bytes] PASSED [ 16%]
tests/test_format_file_size.py::test_format_file_size[test_format_file_size_1_bytes] PASSED [ 33%]
tests/test_format_file_size.py::test_format_file_size[test_format_file_size_1024_bytes] PASSED [ 50%]
tests/test_format_file_size.py::test_format_file_size[test_format_file_size_1048576_bytes] PASSED [ 66%]
tests/test_format_file_size.py::test_format_file_size[test_format_file_size_1073741824_bytes] PASSED [ 83%]
tests/test_format_file_size.py::test_format_file_size[test_format_file_size_1099511627776_bytes] PASSED [100%]
======================================== 6 passed in 0.01s =========================================
Step 8 — Testing exception handling with Pytest
When your code includes exception handling, confirming that specific exceptions are raised under the right conditions is necessary. The pytest.raises() function is designed for testing such scenarios.
For instance, the format_file_size() function raises a ValueError if the input is a negative integer:
. . .
def format_file_size(size_bytes):
    if size_bytes < 0:
        raise ValueError("Size cannot be negative")
    elif size_bytes == 0:
        return "0B"
. . .
You can test that the ValueError exception is raised using pytest.raises():
import pytest

from src.formatter import format_file_size


def test_format_file_size_negative_size():
    with pytest.raises(ValueError, match="Size cannot be negative"):
        format_file_size(-1)
This code passes a negative input (-1) to format_file_size(), and the pytest.raises() context manager verifies that a ValueError with the message "Size cannot be negative" is raised.
You can integrate this case into your parametrized test as follows:
from dataclasses import dataclass, field

import pytest

from src.formatter import format_file_size


@dataclass
class FileSizeTestCase:
    size_bytes: int
    expected_result: str
    id: str = field(init=False)
    expected_error: type[Exception] | None = None
    error_message: str | None = None

    def __post_init__(self):
        self.id = f"test_format_file_size_{self.size_bytes}_bytes"


test_cases = [
    FileSizeTestCase(0, "0B"),
    FileSizeTestCase(1, "1.00 B"),
    FileSizeTestCase(1024, "1.00 KB"),
    FileSizeTestCase(1024**2, "1.00 MB"),
    FileSizeTestCase(1024**3, "1.00 GB"),
    FileSizeTestCase(1024**4, "1.00 TB"),
    FileSizeTestCase(
        -1, "", ValueError, "Size cannot be negative"
    ),  # Test case expecting an error
]


@pytest.mark.parametrize("test_case", test_cases, ids=lambda tc: tc.id)
def test_format_file_size(test_case):
    if test_case.expected_error:
        with pytest.raises(test_case.expected_error, match=test_case.error_message):
            format_file_size(test_case.size_bytes)
    else:
        assert format_file_size(test_case.size_bytes) == test_case.expected_result
Here, two fields were added to the FileSizeTestCase class: expected_error and error_message. These signal whether an error is expected and what its message should be.
The new test case is then supplied an input of -1 to trigger the error, and the expected_error and error_message fields are filled in accordingly.
Finally, in the test_format_file_size() function, pytest.raises() checks the exception type and the error message. If no error is expected, the assert statement ensures that the formatted output matches the expected result.
Upon saving and running the test, you'll see that it passes, confirming that the ValueError exception was raised:
pytest tests/test_format_file_size.py -v
You should see output similar to:
. . .
collected 7 items
tests/test_format_file_size.py::test_format_file_size[test_format_file_size_0_bytes] PASSED [ 14%]
tests/test_format_file_size.py::test_format_file_size[test_format_file_size_1_bytes] PASSED [ 28%]
tests/test_format_file_size.py::test_format_file_size[test_format_file_size_1024_bytes] PASSED [ 42%]
tests/test_format_file_size.py::test_format_file_size[test_format_file_size_1048576_bytes] PASSED [ 57%]
tests/test_format_file_size.py::test_format_file_size[test_format_file_size_1073741824_bytes] PASSED [ 71%]
tests/test_format_file_size.py::test_format_file_size[test_format_file_size_1099511627776_bytes] PASSED [ 85%]
tests/test_format_file_size.py::test_format_file_size[test_format_file_size_-1_bytes] PASSED [100%]
======================================== 7 passed in 0.01s =========================================
For error messages that may vary slightly, you can use regular expressions with pytest.raises():
with pytest.raises(ValueError, match=r'value must be \d+$'):
    raise ValueError('value must be 42')
With this in place, you can efficiently verify that your code raises the expected exceptions under various conditions.
Step 9 — Using Pytest fixtures
Having gained familiarity with writing and executing tests, let's turn our attention to helper functions known as fixtures. Fixtures are special functions that Pytest executes before or after tests to assist with setup tasks or to provide necessary data. Using fixtures minimizes repetition and improves maintainability by centralizing common setup procedures.
Although the topic of Pytest fixtures is extensive enough to warrant a dedicated article, this section aims to provide a concise introduction to their fundamental principles.
To get started with fixtures, create a file named test_format_file_size_with_fixtures.py in the tests directory and include the following code:
import pytest


@pytest.fixture()
def welcome_message():
    """Return a welcome message."""
    return "Welcome to our application!"


def test_welcome_message(welcome_message):
    """Test if the fixture returns the correct welcome message."""
    assert welcome_message == "Welcome to our application!"
The @pytest.fixture() decorator defines a fixture in Pytest. Fixtures like welcome_message() can execute setup tasks and deliver data to test functions. When a test function lists a fixture by name as a parameter, Pytest automatically invokes the fixture function before running the test.
Execute these tests by running the following command:
pytest tests/test_format_file_size_with_fixtures.py
Executing the command will yield the following results:
collected 1 item
tests/test_format_file_size_with_fixtures.py . [100%]
============================== 1 passed in 0.01s ===============================
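Fixtures can also handle teardown. When a fixture uses yield instead of return, everything before the yield acts as setup, the yielded value is handed to the test, and everything after the yield runs as cleanup once the test completes. Here's a minimal sketch with a hypothetical in-memory resource:

import pytest


@pytest.fixture()
def resource():
    data = {"status": "ready"}  # setup runs before the test
    yield data  # the test receives this value
    data.clear()  # teardown runs after the test finishes


def test_resource_is_ready(resource):
    assert resource["status"] == "ready"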
A more practical use of fixtures involves setting up databases. Save the following example, which uses SQLite, in a new tests/test_database_fixture.py file:
import pytest
import sqlite3


@pytest.fixture(scope="module")
def db_connection(request):
    """Create a SQLite database connection for testing."""
    conn = sqlite3.connect(":memory:")
    c = conn.cursor()
    c.execute(
        """CREATE TABLE users
        (username TEXT, email TEXT)"""
    )
    conn.commit()

    def teardown():
        """Close the database connection after the test."""
        conn.close()

    request.addfinalizer(teardown)
    return conn


def create_user(conn, username, email):
    """Create a new user in the database."""
    c = conn.cursor()
    c.execute("INSERT INTO users VALUES (?, ?)", (username, email))
    conn.commit()


def update_email(conn, username, new_email):
    """Update a user's email in the database."""
    c = conn.cursor()
    c.execute("UPDATE users SET email = ? WHERE username = ?", (new_email, username))
    conn.commit()
The db_connection() fixture is set to the module scope to ensure it runs once per test module. It sets up an in-memory SQLite database connection, creates a users table, and returns the connection.
Functions like create_user() and update_email() use this database to add records and update email addresses. The database connection is closed at the end of each test module through the fixture's finalizer.
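Incidentally, request.addfinalizer() is only one way to register cleanup; the same fixture can be written more concisely with yield, as introduced earlier:

@pytest.fixture(scope="module")
def db_connection():
    """Create a SQLite database connection for testing."""
    conn = sqlite3.connect(":memory:")
    conn.execute("CREATE TABLE users (username TEXT, email TEXT)")
    conn.commit()
    yield conn  # tests in this module run here
    conn.close()  # teardown after the last test in the module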
Now, add these tests to assess the functionality of creating users and updating their email addresses:
. . .
def test_create_user(db_connection):
    """Validate the creation of a new user in the database."""
    create_user(db_connection, "user1", "user1@example.com")

    # Verify the user's presence in the database
    cursor = db_connection.cursor()
    cursor.execute("SELECT * FROM users WHERE username=?", ("user1",))
    result = cursor.fetchone()
    assert result is not None  # Confirm the user's existence
    assert result[1] == "user1@example.com"  # Confirm the correct email of the user


def test_update_email(db_connection):
    """Check the update of a user's email in the database."""
    create_user(db_connection, "user2", "user2@example.com")
    update_email(db_connection, "user2", "new_email@example.com")

    # Confirm the email update in the database
    cursor = db_connection.cursor()
    cursor.execute("SELECT email FROM users WHERE username=?", ("user2",))
    result = cursor.fetchone()
    assert result is not None  # Confirm the user's existence
    assert result[0] == "new_email@example.com"  # Confirm the updated email
The test_create_user() function checks the process of creating new user entries by adding a user with a specific username and email address, then confirming their existence in the database with the correct email.
The test_update_email() function, on the other hand, tests the ability to update user email addresses. It involves initially creating a user entry, changing the user's email, and verifying the update by querying the database for the new email.
Run the tests using the command:
pytest tests/test_database_fixture.py
The output will look similar to the following:
collected 2 items
tests/test_database_fixture.py .. [100%]
============================== 2 passed in 0.01s ===============================
With the basics of fixtures covered, let's close out this article by exploring some useful Pytest plugins.
Step 10 — Extending Pytest with plugins
Pytest offers a long list of plugins to enhance its capabilities, ranging from integrating with frameworks like Django and Flask to providing coverage reports.
For example, the pytest-timeout plugin can enforce timeouts on tests, helping you identify slow tests that could otherwise run indefinitely.
Consider the following example:
import pytest
import time


@pytest.mark.timeout(5)  # Set a 5-second timeout
def test_function_with_timeout():
    time.sleep(3)  # Simulate some work (should pass)
    assert True


@pytest.mark.timeout(1)  # Set a 1-second timeout
def test_function_exceeding_timeout():
    time.sleep(2)  # Simulate work that takes too long (should fail)
    assert True
The @pytest.mark.timeout(seconds) marker from the pytest-timeout plugin sets the maximum allowable execution time for the test function. In this example, the first test should pass because it completes within the 5-second limit, while the second test should fail because it deliberately takes 2 seconds, exceeding the 1-second limit.
The time.sleep() calls are placeholders for the logic you want to test. Replace them with the functions or code segments you need to time-constrain.
Before running the tests, make sure the pytest-timeout plugin is installed:
pip install pytest-timeout
Afterwards, execute the tests through the command below:
pytest tests/test_with_timeout.py
The output will show one test passing and the other failing due to the timeout, along with a "Timeout >1.0s" message in the failure report:
============================= test session starts ==============================
platform linux -- Python 3.12.3, pytest-8.2.0, pluggy-1.5.0
rootdir: /home/ayo/dev/betterstack/demo/pytest-demo
plugins: timeout-2.3.1
collected 2 items
tests/test_with_timeout.py .F [100%]
=================================== FAILURES ===================================
_______________________ test_function_exceeding_timeout ________________________
    @pytest.mark.timeout(1)  # Set a 1-second timeout
    def test_function_exceeding_timeout():
>       time.sleep(2)  # Simulate work that takes too long (should fail)
E       Failed: Timeout >1.0s

tests/test_with_timeout.py:13: Failed
=========================== short test summary info ============================
FAILED tests/test_with_timeout.py::test_function_exceeding_timeout - Failed: ...
========================= 1 failed, 1 passed in 4.02s ==========================
It's also possible to set a global timeout for all tests using the --timeout flag, a timeout option in your pytest configuration file, or the PYTEST_TIMEOUT environment variable:

pytest tests/test_with_timeout.py --timeout=5 # seconds

PYTEST_TIMEOUT=5 pytest tests/test_with_timeout.py

Or in pytest.ini:

[pytest]
timeout = 300
There are many other useful plugins for Pytest, including:
- pytest-cov: Provides support for measuring code coverage.
- pytest-django: Integrates Django into the Pytest framework.
- pytest-docker: Facilitates Docker-based integration testing.
- pytest-sugar: Modifies the default Pytest interface to a more visually appealing one.
- pytest-xdist: Enables parallel execution of tests.
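As a quick taste of one of these, you could install pytest-cov and pass its --cov option to measure coverage of the src package from this tutorial:
pip install pytest-cov
pytest --cov=src tests/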
Be sure to check out the plugins page for the full list.
Final thoughts
This article provided a comprehensive walkthrough of many Pytest features, including parametrization, fixtures, plugins, and more. A future article will delve into more advanced techniques to further elevate your testing skills.
To continue learning about Pytest, check out the official documentation. You'll find extensive resources to deepen your understanding and proficiency with Pytest.
Thanks for reading, and happy testing!