A Gentle Introduction to Python's unittest Module
Python includes a built-in testing framework called unittest, which allows you to write and run automated tests. This framework, inspired by JUnit, follows an object-oriented approach and includes useful features for writing test cases, test suites, and fixtures.
By the end of this tutorial, you will be able to use unittest to write and run automated tests, helping you catch bugs early and make your software more reliable.
Let's get started!
Prerequisites
To follow this tutorial, make sure your machine has the latest version of Python installed and that you have a basic understanding of writing Python programs.
Step 1 — Setting up the project directory
Before you can start writing automated tests with unittest
, you need to create a simple application to test. In this section, you'll build a simple yet practical application that converts file sizes from bytes into human-readable formats.
First, create a new directory for the project:
mkdir unittest-demo
Next, navigate into the newly created directory:
cd unittest-demo
Create a virtual environment to isolate dependencies, prevent conflicts, and avoid polluting your system environment:
python -m venv venv
Activate the virtual environment:
source venv/bin/activate
Once the environment is activated, you will see the virtual environment's name (venv
in this case) prefixed to the command prompt:
(venv) <your_username>@<your_computer>:~/unittest-demo$
Create a src
directory to contain the source code:
mkdir src
To ensure Python recognizes the src
directory as a package, add an empty __init__.py
file:
touch src/__init__.py
Next, create and open the formatter.py
file within the src
directory using the text editor of your choice. This tutorial assumes you are using VSCode, which can be opened with the code
command:
code src/formatter.py
Add the following code to the formatter.py
file to format file sizes:
import math

def format_file_size(size_bytes):
    if size_bytes < 0:
        raise ValueError("Size cannot be negative")
    elif size_bytes == 0:
        return "0B"
    size_name = ["B", "KB", "MB", "GB", "TB"]
    i = int(math.floor(math.log(size_bytes) / math.log(1024)))
    p = math.pow(1024, i)
    s = "{:.2f}".format(size_bytes / p)
    return f"{s} {size_name[i]}"
The format_file_size()
function converts a file size in bytes to a human-readable string format (e.g., KB, MB, GB).
In the root directory, create a main.py
file that prompts user input and passes it to the format_file_size()
function:
code main.py
import sys

from src.formatter import format_file_size

if len(sys.argv) >= 2:
    try:
        size_bytes = int(sys.argv[1])
        formatted_size = format_file_size(size_bytes)
        print(formatted_size)
    except ValueError:
        print("Please provide a valid file size in bytes as a command-line argument.")
else:
    print("Please provide the file size in bytes as a command-line argument.")
This code takes a file size in bytes from the command line, formats it using the format_file_size()
function, and prints the result or displays an error message if the input is invalid or missing.
Let's quickly test the script to ensure it works as intended:
python main.py 2048
2.00 KB
python main.py 5368709120
5.00 GB
python main.py -1
Please provide a valid file size in bytes as a command-line argument.
Now that the application works, you will write automated tests for it in the next section.
Step 2 — Writing your first test
In this section and the following ones, you'll use unittest
to write automated tests that ensure the format_file_size()
function works correctly. This includes verifying the proper formatting of various file sizes. Writing these tests will help confirm that your code functions as intended under different scenarios.
To keep the test code well-organized and easily maintainable, you will create a tests
directory alongside the source code directory:
project/
│
├── src/
│ ├── __init__.py
│ └── your_module_name.py
│
└── tests/
└── test_module_functions.py
This structure helps keep your project tidy and makes locating and running tests easier.
First, create the tests
directory with the following command:
mkdir tests
Next, create the test_format_file_size.py
file in your text editor:
code tests/test_format_file_size.py
Add the following code to write your first test:
import unittest

from src.formatter import format_file_size

class TestFormatFileSize(unittest.TestCase):
    def test_format_file_size_returns_GB_format(self):
        self.assertEqual(format_file_size(1024**3), "1.00 GB")

if __name__ == "__main__":
    unittest.main()
In this example, you define a test case for the format_file_size()
function within a subclass of unittest.TestCase
named TestFormatFileSize
. This follows the unittest
framework convention of prefixing the class name with "Test".
The class includes a single test method, test_format_file_size_returns_GB_format()
, which checks that the format_file_size()
function accurately formats a file size of 1 GB. It uses self.assertEqual()
to verify that the function's output is "1.00 GB".
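assertEqual() is only one of the many assertion methods that unittest.TestCase provides. As a quick, standalone sketch (not part of the demo project), here are a few other commonly used assertions:

```python
import unittest

class TestCommonAssertions(unittest.TestCase):
    def test_common_assertions(self):
        self.assertTrue(1 < 2)                             # truthiness
        self.assertIn("KB", "1.00 KB")                     # membership
        self.assertAlmostEqual(0.1 + 0.2, 0.3, places=7)   # float comparison
        self.assertIsNone(None)                            # identity with None
```

Each of these produces a descriptive failure message on mismatch, which is why they are preferred over a bare assert statement in test code.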
With your test case defined, the next step is to run the test.
Step 3 — Running your tests
To ensure that everything works as expected, you need to run the tests.
You can run the tests using the following command:
python -m unittest tests.test_format_file_size
This command runs all test cases defined in the test_format_file_size
module, where tests
is the directory name and test_format_file_size
is the test module (Python file) within that directory.
When you execute this command, the output will look similar to the following:
.
----------------------------------------------------------------------
Ran 1 test in 0.000s
OK
This output indicates that one test was found in the tests/test_format_file_size.py file and that it passed; the elapsed time rounds to 0.000 seconds.
As the number of test files grows, you might want to run all the tests at the same time. To do this, you can use the following command:
python -m unittest discover
----------------------------------------------------------------------
Ran 1 test in 0.000s
OK
The discover
command makes the test runner search the current directory and all its subdirectories for test files that start with test_
, executing all found test functions. This approach helps manage and run a large number of tests efficiently, ensuring comprehensive test coverage across your entire codebase.
Now that you understand how unittest
behaves when all tests pass, let's see what happens when a test fails. In the test file, modify the test by changing the format_file_size()
input to 0
to cause a failure deliberately:
import unittest

from src.formatter import format_file_size

class TestFormatFileSize(unittest.TestCase):
    def test_format_file_size_returns_GB_format(self):
        self.assertEqual(format_file_size(0), "1.00 GB")

...
Following that, rerun the tests:
python -m unittest tests.test_format_file_size
The unittest
framework will now show that the test is failing and where it is failing:
======================================================================
FAIL: test_format_file_size_returns_GB_format (tests.test_format_file_size.TestFormatFileSize.test_format_file_size_returns_GB_format)
----------------------------------------------------------------------
Traceback (most recent call last):
File "/home/stanley/unittest-demo/tests/test_format_file_size.py", line 8, in test_format_file_size_returns_GB_format
self.assertEqual(format_file_size(0), "1.00 GB")
AssertionError: '0B' != '1.00 GB'
- 0B
+ 1.00 GB
----------------------------------------------------------------------
Ran 1 test in 0.001s
FAILED (failures=1)
The output shows that the test failed, providing a traceback indicating a mismatch between the expected result ("1.00 GB") and the actual result ("0B"). The subsequent summary block provides a concise overview of the failure.
Now that you can run your tests, you are ready to write more comprehensive test cases and ensure your code functions correctly under various scenarios.
Step 4 — Designing your tests
In this section, you'll become familiar with conventions and best practices for designing tests with unittest
to ensure they are straightforward to maintain.
Setting up test files
One popular convention you've followed is prefixing test files with test_
and placing them in the tests
directory. This approach ensures that your tests are easily discoverable by unittest
, which looks for files starting with test_
. Organizing tests in a dedicated tests
directory keeps your project structure clean and makes it easier to manage and locate test files.
Alternatively, you can place test files alongside their corresponding source files, a convention common in other languages such as Go. For example, the source file example.py and its test file test_example.py can live in the same directory:
src/
│
├── example.py
└── test_example.py
Classes and test methods
Another essential convention to follow is naming each test class with a Test
prefix (capitalized) and ensuring that all methods within the class are prefixed with test_
. While the underscore is not mandatory, this is a widely adopted convention among Python users:
class TestFormatFileSize(unittest.TestCase):
    def test_format_file_size_returns_GB_format(self):
        self.assertEqual(format_file_size(1024**3), "1.00 GB")
Naming patterns
The names you give your test files, classes, and methods should describe what they are testing. Descriptive names improve readability and maintainability by clearly indicating the purpose of each test. Generic names like test_method
can lead to confusion and make it harder to understand what is being tested, especially as the codebase grows.
Here are some examples of well-named test files, class names, and method names following this convention:
# File names
test_factorial.py
test_user_authentication.py

# Class names and method names
class TestFactorial(unittest.TestCase):
    def test_factorial_of_large_number(self):
        pass

class TestUserAuthentication(unittest.TestCase):
    def test_user_authentication_success(self):
        pass

    def test_user_authentication_failure(self):
        pass
Having clear and specific names produces better test reporting, helping other developers (and future you) quickly grasp the tested functionality, locate specific tests, and maintain the test suite more effectively.
Step 5 — Filtering tests
Running all tests can become time-consuming as your application grows and your test suite expands. To improve efficiency, you can filter tests to run only a subset, especially when working on specific features or fixing bugs. This approach provides faster feedback, helps isolate issues, and allows targeted testing.
Here’s how you can add more test methods and run specific subsets of your tests:
import unittest

from src.formatter import format_file_size

class TestFormatFileSize(unittest.TestCase):
    def test_format_file_size_returns_format_zero(self):
        self.assertEqual(format_file_size(0), "0B")

    def test_format_file_size_returns_format_one_byte(self):
        self.assertEqual(format_file_size(1), "1.00 B")

    def test_format_file_size_returns_format_kb(self):
        self.assertEqual(format_file_size(1024), "1.00 KB")

    def test_format_file_size_returns_format_mb(self):
        self.assertEqual(format_file_size(1024**2), "1.00 MB")

    def test_format_file_size_returns_format_gb(self):
        self.assertEqual(format_file_size(1024**3), "1.00 GB")

    def test_format_file_size_returns_format_tb(self):
        self.assertEqual(format_file_size(1024**4), "1.00 TB")

if __name__ == "__main__":
    unittest.main()
Running tests from a specific file
When you have multiple test files and only want to run a specific test file, you can provide the module name:
python -m unittest tests.test_format_file_size
----------------------------------------------------------------------
Ran 6 tests in 0.001s
OK
Running tests from a specific class
It is common to have more than one class in a test module. To execute only tests from a specific class, you can use:
python -m unittest tests.test_format_file_size.TestFormatFileSize
The output will be the same as in the preceding section since there is only one class:
----------------------------------------------------------------------
Ran 6 tests in 0.001s
OK
Running a specific test method of a class
If you want to target a single method in a class, you can specify the method name in the command:
python -m unittest tests.test_format_file_size.TestFormatFileSize.test_format_file_size_returns_format_tb
Running this command will execute only the specified method:
----------------------------------------------------------------------
Ran 1 test in 0.000s
OK
Filtering with the -k
option
To streamline your testing process, unittest provides the -k command-line option, which allows you to filter and run only the tests whose names match a specific pattern (a substring). This can be particularly useful when you have an extensive suite of tests and want to run a subset that matches a particular condition or naming convention.
For example, the following command executes only the tests that have the "gb" substring in their names:
python -m unittest discover tests -k "gb"
This command will execute only the test_format_file_size_returns_format_gb
test because it contains the "gb" substring. The output will be:
----------------------------------------------------------------------
Ran 1 test in 0.000s
OK
You can verify this with the -v
(verbose) option:
python -m unittest discover tests -k "gb" -v
test_format_file_size_returns_format_gb (test_format_file_size.TestFormatFileSize.test_format_file_size_returns_format_gb) ... ok
----------------------------------------------------------------------
Ran 1 test in 0.000s
OK
In this case, the verbose output provides additional information about each test executed, making it easier to understand which tests matched the pattern and their results.
Skipping tests with unittest
There are times when you want to skip specific tests, often due to incomplete functionality, unavailable dependencies, or other temporary conditions. The unittest
module provides the skip()
decorator, which allows you to skip individual test methods or entire test classes.
The skip()
decorator marks tests that should not be executed. Here’s how you can use the skip()
decorator in unittest
:
class TestFormatFileSize(unittest.TestCase):
    ...

    @unittest.skip("Skipping this test because the feature is not implemented yet")
    def test_format_file_size_returns_format_tb(self):
        self.assertEqual(format_file_size(1024**4), "1.00 TB")
Save and rerun the tests:
python -m unittest discover tests -v
You will see the output confirming that one test was skipped:
test_format_file_size_returns_format_gb (test_format_file_size.TestFormatFileSize.test_format_file_size_returns_format_gb) ... ok
test_format_file_size_returns_format_kb (test_format_file_size.TestFormatFileSize.test_format_file_size_returns_format_kb) ... ok
test_format_file_size_returns_format_mb (test_format_file_size.TestFormatFileSize.test_format_file_size_returns_format_mb) ... ok
test_format_file_size_returns_format_one_byte (test_format_file_size.TestFormatFileSize.test_format_file_size_returns_format_one_byte) ... ok
test_format_file_size_returns_format_tb (test_format_file_size.TestFormatFileSize.test_format_file_size_returns_format_tb) ... skipped 'Skipping this test because the feature is not implemented yet'
test_format_file_size_returns_format_zero (test_format_file_size.TestFormatFileSize.test_format_file_size_returns_format_zero) ... ok
----------------------------------------------------------------------
Ran 6 tests in 0.001s
OK (skipped=1)
The unittest
module provides several skip-related decorators to control the execution of tests:
- @unittest.skipIf(condition, reason): Skips a test if the specified condition is True.
- @unittest.skipUnless(condition, reason): Skips a test unless the condition is True.
- @unittest.expectedFailure: Marks a test that is expected to fail, allowing it to run without causing the entire test suite to fail.
Step 6 — Providing multiple test cases
Multiple test methods are often similar, differing only in their inputs and expected outputs. Consider the methods you added in the previous section: each one calls the format_file_size() function with a different input and verifies the output. Instead of writing a separate test method for each case, you can parameterize the tests to simplify your code and reduce redundancy.
Python 3.4 introduced subtests, which let you consolidate similar tests into a single method. To make use of subtests, rewrite the contents of test_format_file_size.py with the following code:
import unittest

from src.formatter import format_file_size

class TestFormatFileSize(unittest.TestCase):
    def test_format_file_size(self):
        """Test format_file_size function for various file sizes"""
        test_cases = [
            (0, "0B"),
            (1, "1.00 B"),
            (1024, "1.00 KB"),
            (1024**2, "1.00 MB"),
            (1024**3, "1.00 GB"),
            (1024**4, "1.00 TB"),
        ]

        for size, expected_output in test_cases:
            with self.subTest(
                f"Input size: {size} - Expected output: {expected_output}"
            ):
                self.assertEqual(format_file_size(size), expected_output)

if __name__ == "__main__":
    unittest.main()
This code defines a single method to test the format_file_size()
function using multiple test cases. Instead of separate test methods, it uses a list of tuples for different inputs and expected outputs, iterating through them with a loop and creating a subtest for each case using self.subTest
.
Now you can run the test:
python -m unittest tests.test_format_file_size -v
test_format_file_size (tests.test_format_file_size.TestFormatFileSize.test_format_file_size)
Test format_file_size function for various file sizes ... ok
----------------------------------------------------------------------
Ran 1 test in 0.001s
OK
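As a side note, self.subTest() also accepts arbitrary keyword arguments, which unittest includes in the failure output (for example, [value=2]); this is often tidier than formatting a message string yourself. A minimal, standalone sketch:

```python
import unittest

class TestSubTestParams(unittest.TestCase):
    def test_doubling(self):
        for value, expected in [(1, 2), (2, 4), (3, 6)]:
            # Keyword arguments given to subTest() appear in the failure
            # message, identifying exactly which parameters failed.
            with self.subTest(value=value):
                self.assertEqual(value * 2, expected)
```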
You can also modify the test to cause a failure deliberately:
...
class TestFormatFileSize(unittest.TestCase):
    def test_format_file_size(self):
        test_cases = [
            (0, "0B"),
            ...
            (0, "1.00 TB"),
        ]
        ...
When you rerun the tests, you will see more details about the failure, including the specific input and expected output:
test_format_file_size (tests.test_format_file_size.TestFormatFileSize.test_format_file_size)
Test format_file_size function for various file sizes ...
test_format_file_size (tests.test_format_file_size.TestFormatFileSize.test_format_file_size) [Input size: 0 - Expected output: 1.00 TB]
Test format_file_size function for various file sizes ... FAIL
======================================================================
FAIL: test_format_file_size (tests.test_format_file_size.TestFormatFileSize.test_format_file_size) [Input size: 0 - Expected output: 1.00 TB]
Test format_file_size function for various file sizes
----------------------------------------------------------------------
Traceback (most recent call last):
File "/home/stanley/unittest-demo/tests/test_format_file_size.py", line 23, in test_format_file_size
self.assertEqual(format_file_size(size), expected_output)
AssertionError: '0B' != '1.00 TB'
- 0B
+ 1.00 TB
----------------------------------------------------------------------
Ran 1 test in 0.002s
FAILED (failures=1)
Now that you can parameterize tests with subtests, you will take it further in the next section.
Step 7 — Parametrizing tests using data classes
So far, you have used subtests with a list of tuples to parameterize your tests. While this approach is straightforward, it has a few issues: tuples lack descriptive names, making them less readable and unclear, especially with many parameters; maintaining and updating a growing list of tuples becomes cumbersome; and adding additional metadata or more complex structures to tuples can make them unwieldy and hard to manage.
To address these issues, consider using data classes for a more structured, readable, and scalable way to organize your test parameters. Data classes help in the following ways:
- Logical Grouping: They group related test data (input values, expected outputs, and IDs) into a single object, enhancing the readability of your tests.
- Default Values: You can set default values for fields, which reduces redundancy if test cases share similar properties or values.
Let's use subtests with data classes by rewriting the code from the previous section:
import unittest
from dataclasses import dataclass, field

from src.formatter import format_file_size

# Define a data class to hold test cases
@dataclass
class FileSizeTestCase:
    size_bytes: int
    expected_output: str
    id: str = field(init=False)

    def __post_init__(self):
        self.id = f"test_format_file_size_{self.size_bytes}_bytes"

class TestFormatFileSize(unittest.TestCase):
    def test_format_file_size(self):
        """Test format_file_size function for various file sizes"""
        # Define test cases using the data class
        test_cases = [
            FileSizeTestCase(size_bytes=0, expected_output="0B"),
            FileSizeTestCase(size_bytes=1, expected_output="1.00 B"),
            FileSizeTestCase(size_bytes=1024, expected_output="1.00 KB"),
            FileSizeTestCase(size_bytes=1024**2, expected_output="1.00 MB"),
            FileSizeTestCase(size_bytes=1024**3, expected_output="1.00 GB"),
            FileSizeTestCase(size_bytes=1024**4, expected_output="1.00 TB"),
        ]

        for case in test_cases:
            with self.subTest(
                f"Input size: {case.size_bytes} - Expected output: {case.expected_output}"
            ):
                self.assertEqual(
                    format_file_size(case.size_bytes), case.expected_output
                )

if __name__ == "__main__":
    unittest.main()
This code defines a FileSizeTestCase
class using the @dataclass
decorator, which includes three attributes: size_bytes
, expected_output
, and id
. This data class serves as a blueprint for our test cases, with each test case represented as an instance of this class.
Within the TestFormatFileSize
class, the test_format_file_size
method iterates through a list of FileSizeTestCase
instances. Each instance contains the input size and the expected output for the format_file_size
function. The self.subTest
context manager runs each test case independently, providing clearer test output and making it easier to identify and debug specific failures.
Now, rerun the tests with:
python -m unittest tests.test_format_file_size -v
The tests will pass without any issues:
test_format_file_size (tests.test_format_file_size.TestFormatFileSize.test_format_file_size)
Test format_file_size function for various file sizes ... ok
----------------------------------------------------------------------
Ran 1 test in 0.001s
OK
With this change, your tests are more structured, and the test cases are easier to manage and understand.
Step 8 — Testing exception handling with unittest
Exception handling is essential and needs to be tested to ensure exceptions are raised under the correct conditions.
For instance, the format_file_size()
function raises a ValueError
if it receives a negative integer as input:
. . .

def format_file_size(size_bytes):
    if size_bytes < 0:
        raise ValueError("Size cannot be negative")
    elif size_bytes == 0:
        return "0B"
    . . .
You can test if the ValueError
exception is raised using assertRaises()
:
class TestFormatFileSize(unittest.TestCase):
    def test_format_file_size_negative_size(self):
        with self.assertRaises(ValueError) as context:
            format_file_size(-1)
        self.assertEqual(str(context.exception), "Size cannot be negative")
The unittest
framework's assertRaises
context manager checks that a ValueError
is raised with the appropriate message when format_file_size()
is called with a negative size. The str(context.exception)
extracts the exception message for comparison.
To integrate this into the parameterized tests, you can do it as follows:
import unittest
from dataclasses import dataclass, field

from src.formatter import format_file_size

@dataclass
class FileSizeTestCase:
    size_bytes: int
    expected_output: str
    id: str = field(init=False)
    expected_error: type[Exception] | None = None
    error_message: str | None = None

    def __post_init__(self):
        self.id = f"test_format_file_size_{self.size_bytes}_bytes"

class TestFormatFileSize(unittest.TestCase):
    def test_format_file_size(self):
        """Test format_file_size function for various file sizes"""
        test_cases = [
            FileSizeTestCase(size_bytes=0, expected_output="0B"),
            FileSizeTestCase(size_bytes=1, expected_output="1.00 B"),
            FileSizeTestCase(size_bytes=1024, expected_output="1.00 KB"),
            FileSizeTestCase(size_bytes=1024**2, expected_output="1.00 MB"),
            FileSizeTestCase(size_bytes=1024**3, expected_output="1.00 GB"),
            FileSizeTestCase(size_bytes=1024**4, expected_output="1.00 TB"),
            FileSizeTestCase(
                size_bytes=-1,
                expected_output="",
                expected_error=ValueError,
                error_message="Size cannot be negative",
            ),
        ]

        for case in test_cases:
            with self.subTest(
                f"Input size: {case.size_bytes} - Expected output: {case.expected_output}"
            ):
                if case.expected_error:
                    with self.assertRaises(case.expected_error, msg=case.error_message):
                        format_file_size(case.size_bytes)
                else:
                    self.assertEqual(
                        format_file_size(case.size_bytes), case.expected_output
                    )

if __name__ == "__main__":
    unittest.main()
This code adds two additional fields to the FileSizeTestCase class: expected_error and error_message. These fields indicate whether an error is expected and, if so, what its message should be.
A new test case with an input of -1
is included to trigger the error, with the expected_error
and error_message
fields set accordingly.
In the test_format_file_size()
method, the code checks for the expected exception using self.assertRaises()
. If an error is expected, it verifies the type and message of the exception. If no error is expected, self.assertEqual()
ensures that the function's output matches the expected result.
When you save and run the tests, you will see that the tests pass, confirming that the ValueError
exception was raised:
python -m unittest tests.test_format_file_size -v
test_format_file_size (tests.test_format_file_size.TestFormatFileSize.test_format_file_size)
Test format_file_size function for various file sizes ... ok
----------------------------------------------------------------------
Ran 1 test in 0.001s
OK
For error messages that may vary slightly, you can match them against a regular expression by combining assertRaises() with the re module:
import re
import unittest

class TestValueError(unittest.TestCase):
    def test_value_error(self):
        with self.assertRaises(ValueError) as context:
            raise ValueError("value must be 42")
        self.assertTrue(
            re.match(r"value must be \d+$", str(context.exception)),
            f"Expected error message did not match: {str(context.exception)}",
        )
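unittest also ships with assertRaisesRegex(), which folds the two steps above (catching the exception and matching its message) into a single call; a minimal sketch:

```python
import unittest

class TestValueErrorRegex(unittest.TestCase):
    def test_value_error_message_pattern(self):
        # assertRaisesRegex matches the exception's string representation
        # against the given regular expression.
        with self.assertRaisesRegex(ValueError, r"value must be \d+"):
            raise ValueError("value must be 42")
```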
With this in place, you can efficiently verify that your code raises the expected exceptions under various conditions.
Step 9 — Using fixtures in unittest
Now that you can use unittest
to write, organize, and execute tests, we will explore how to use fixtures. Fixtures are helper functions that set up the necessary preconditions and clean up after tests. They are essential for writing efficient and maintainable tests, as they allow you to share setup code across multiple tests, reducing redundancy and improving code clarity.
The topic of fixtures is extensive and deserves its own article, but here we will provide a concise introduction.
To begin with fixtures, create a separate test file named test_fixtures.py inside the tests directory:
code tests/test_fixtures.py
Add the following code:
import unittest

class TestWelcomeMessage(unittest.TestCase):
    def setUp(self):
        """Set up the welcome message fixture."""
        self.welcome_message = "Welcome to our application!"

    def test_welcome_message(self):
        """Test if the fixture returns the correct welcome message."""
        self.assertEqual(self.welcome_message, "Welcome to our application!")

if __name__ == "__main__":
    unittest.main()
The setUp method is automatically called before each test method, ensuring a consistent starting state. Here it initializes the welcome_message attribute, making it available to every test method in the class. The test_welcome_message method then uses self.assertEqual to verify that self.welcome_message matches the expected string.
With the fixture in place, run the following command to execute the tests:
python -m unittest tests.test_fixtures -v
The output will look similar to the following:
test_welcome_message (tests.test_fixtures.TestWelcomeMessage.test_welcome_message)
Test if the fixture returns the correct welcome message. ... ok
----------------------------------------------------------------------
Ran 1 test in 0.000s
OK
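When setup is expensive, unittest also offers setUpClass() and tearDownClass(), class methods that run once per test class rather than once per test. A minimal, standalone sketch:

```python
import unittest

class TestWithClassFixture(unittest.TestCase):
    @classmethod
    def setUpClass(cls):
        # Runs once before the first test in this class, e.g. to open
        # an expensive shared resource.
        cls.shared = {"greeting": "Welcome to our application!"}

    @classmethod
    def tearDownClass(cls):
        # Runs once after the last test in this class finishes.
        cls.shared = None

    def test_greeting_available(self):
        self.assertEqual(self.shared["greeting"], "Welcome to our application!")
```

Because class-level state is shared between tests, reserve these hooks for resources that tests only read; mutable per-test state belongs in setUp.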
For a more practical example that uses fixtures to set up a database, consider the following code using SQLite:
import unittest
import sqlite3

class TestDatabaseOperations(unittest.TestCase):
    def setUp(self):
        """Set up the SQLite database connection and create table."""
        self.conn = sqlite3.connect(":memory:")
        self.create_table()

    def tearDown(self):
        """Close the database connection after each test."""
        self.conn.close()

    def create_table(self):
        """Create a 'users' table in the database."""
        c = self.conn.cursor()
        c.execute(
            """CREATE TABLE users
            (username TEXT, email TEXT)"""
        )
        self.conn.commit()

    def create_user(self, username, email):
        """Create a new user in the database."""
        c = self.conn.cursor()
        c.execute("INSERT INTO users VALUES (?, ?)", (username, email))
        self.conn.commit()

    def update_email(self, username, new_email):
        """Update user's email in the database."""
        c = self.conn.cursor()
        c.execute(
            "UPDATE users SET email = ? WHERE username = ?", (new_email, username)
        )
        self.conn.commit()
The setUp
method creates an in-memory SQLite database connection and initializes the users
table, ensuring a fresh setup before each test. The tearDown
method closes the database connection after each test, maintaining a clean test environment. Additionally, the create_user
and update_email
methods allow for the insertion and updating of records within the test methods, facilitating database manipulation during tests.
With the fixture in place, you can add tests to verify if creating and updating users works correctly:
. . .

    def test_create_user(self):
        """Validate the creation of a new user in the database."""
        self.create_user("user1", "user1@example.com")

        # Verify the user's presence in the database
        cursor = self.conn.cursor()
        cursor.execute("SELECT * FROM users WHERE username=?", ("user1",))
        result = cursor.fetchone()
        self.assertIsNotNone(result)  # Confirm the user's existence
        self.assertEqual(
            result[1], "user1@example.com"
        )  # Confirm the correct email of the user

    def test_update_email(self):
        """Check the update of a user's email in the database."""
        self.create_user("user2", "user2@example.com")
        self.update_email("user2", "new_email@example.com")

        # Confirm the email update in the database
        cursor = self.conn.cursor()
        cursor.execute("SELECT email FROM users WHERE username=?", ("user2",))
        result = cursor.fetchone()
        self.assertIsNotNone(result)  # Confirm the user's existence
        self.assertEqual(
            result[0], "new_email@example.com"
        )  # Confirm the updated email

if __name__ == "__main__":
    unittest.main()
The test_create_user() method validates adding new users to the database. It creates a user with a specific username and email address, then queries the database to confirm that the user exists and has the correct email. Similarly, the test_update_email() method verifies that updating a user's email address works: it first creates a user entry, updates the user's email, and then checks the database to ensure the email has been changed correctly.
Now run the tests with the following command:
python -m unittest tests.test_fixtures -v
The output will look similar to the following:
test_create_user (tests.test_fixtures.TestDatabaseOperations.test_create_user)
Validate the creation of a new user in the database. ... ok
test_update_email (tests.test_fixtures.TestDatabaseOperations.test_update_email)
Check the update of a user's email in the database. ... ok
----------------------------------------------------------------------
Ran 2 tests in 0.002s
OK
Now that you are familiar with fixtures, you can create more complex and maintainable tests.
Step 10 - Writing tests with doctests
In Python, docstrings are string literals that appear after defining a class, method, or function. They are used to document your code. A neat feature of Python is that you can add tests to the docstrings, and Python will execute them for you.
Take the following example:
def add(a, b):
    """
    Returns the sum of a and b.

    >>> add(2, 3)
    5
    """
    return a + b
In this example, the add() function calculates the sum of two numbers, a and b. The function includes a docstring that describes its purpose ("Returns the sum of a and b") and provides an example usage in the doctest format.
This example demonstrates how to call the function with arguments 2
and 3
, expecting a result of 5
. Docstrings with embedded doctests serve a dual purpose:
- They document the code.
- They provide a way to verify its correctness automatically.
When you include a usage example in the docstring, you can use the doctest
module to run these embedded tests and ensure the function behaves as expected.
Let's apply the doctest to the application code in formatter.py
:
...
def format_file_size(size_bytes):
    """
    Converts a file size in bytes to a human-readable format.

    Args:
        size_bytes (int): The size of the file in bytes.

    Returns:
        str: The human-readable file size.

    Raises:
        ValueError: If the size_bytes is negative.

    Examples:
        >>> format_file_size(0)
        '0B'
        >>> format_file_size(1024)
        '1.00 KB'
        >>> format_file_size(1048576)
        '1.00 MB'
    """
    if size_bytes < 0:
        raise ValueError("Size cannot be negative")
    elif size_bytes == 0:
        return "0B"
    ...
The docstring for the format_file_size()
function describes its purpose, input, output, and potential exceptions. It details that the function converts bytes to a human-readable format, takes an integer size_bytes
as input, returns a formatted string, and raises a ValueError
for negative inputs. The examples provided show typical usage and expected results.
To see if the doctest examples pass, enter the following command:
python -m doctest -v src/formatter.py
When you run this command, you will see the following output:
Trying:
format_file_size(0)
Expecting:
'0B'
ok
Trying:
format_file_size(1024)
Expecting:
'1.00 KB'
ok
Trying:
format_file_size(1048576)
Expecting:
'1.00 MB'
ok
1 items had no tests:
formatter
1 items passed all tests:
3 tests in formatter.format_file_size
3 tests in 2 items.
3 passed and 0 failed.
Test passed.
The output shows that the format_file_size
function passed all tests. Specifically, it correctly returned '0B'
for 0
, '1.00 KB'
for 1024
, and '1.00 MB'
for 1048576
. All 3 tests in the function passed, with no failures.
Final thoughts
This article walked you through writing, organizing, and executing unit tests with Python's built-in unittest
framework. It also explored features such as subtests and fixtures to help you create efficient and maintainable tests.
To continue learning more about unittest
, see the official documentation for more details. unittest
is not the only testing framework available for Python; Pytest is another popular testing framework that offers a more concise syntax and additional features. To explore Pytest, see our Pytest documentation guide.