Testing in Go: Intermediate Tips and Techniques
In the previous article, I introduced the basics of testing in Go, covering the standard library's testing capabilities, how to run tests and interpret results, and how to generate and view code coverage reports.
While those techniques are a great starting point, real-world code often demands more sophisticated testing strategies. You might face challenges like slow test execution, hard-to-manage dependencies, and test results that are difficult to interpret.
In this article, we'll dive into intermediate Go testing techniques that address these issues by focusing on:
- Handling dependencies and reusing code in multiple test functions,
- Speeding up your test runs,
- Making the test output easy to read.
Ready to improve your Go testing skills? Let's dive in!
Prerequisites
Before proceeding with this tutorial, ensure that you've met the following requirements:
- Basic familiarity with the Go programming language.
- A recent version of Go installed on your local machine.
- Familiarity with basic unit testing concepts in Go.
- A recent version of Docker installed on your system.
Step 1 – Setting up the demo project
To demonstrate the various techniques I'll be introducing in this article, I've created a GitHub repository which you can clone and work with on your local machine. We'll be testing a simple function that takes a JSON string and pretty prints it to make it more human-readable.
You can clone the repo to your machine by executing the command below, substituting the demo repository's actual URL:
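```bash
# substitute the actual URL of the demo repository
git clone <repository-url>
```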
Then, navigate to the project directory and open it in your preferred text editor:
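```bash
# <project-directory> is the folder created by git clone
cd <project-directory>
code .  # or open the folder in your preferred editor
```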
In the next section, we'll start by understanding test fixtures in Go.
Step 2 – Understanding test fixtures in Go
When writing certain tests, you may need additional data to support the test
cases, and to enable consistent and repeatable testing. These are called test
fixtures, and it's a standard practice to place them within a testdata
directory alongside your test files.
For instance, consider a simple package designed to format JSON data. Testing this package will involve using fixtures to ensure the formatter consistently produces the correct output. These fixtures might include various files containing JSON strings formatted differently.
The fixtures package in the demo project exports a single function which
formats a JSON string passed to it. The implementation of this function is
straightforward:
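Here's a minimal sketch of what such a function might look like, using json.Indent from the standard library (the exact signature in the repo may differ slightly):

```go
package fixtures

import (
	"bytes"
	"encoding/json"
)

// PrettyPrintJSON re-indents a compact JSON document so that it becomes
// easier to read. It returns an error if the input is not valid JSON.
func PrettyPrintJSON(input []byte) ([]byte, error) {
	var out bytes.Buffer
	if err := json.Indent(&out, input, "", "  "); err != nil {
		return nil, err
	}
	return out.Bytes(), nil
}
```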
The next step involves setting up the fixtures in the fixtures/testdata
directory. We'll utilize two fixture files:
- invalid.json: This contains an invalid JSON object to test how the PrettyPrintJSON() function handles errors.
- valid.json: Contains a valid JSON object that is not well formatted.
Since both JSON files are already set up, let's go ahead and use them in the
unit tests for the function. To do this, open up the fixtures/code_test.go
file in your editor and populate it as follows:
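Here's a sketch of what that table-driven test might look like with the two fixture files; the field names (filePath, hasErr) match the description that follows, but the exact code in the repo may differ:

```go
package fixtures

import (
	"os"
	"path/filepath"
	"testing"
)

func TestPrettyPrintJSON(t *testing.T) {
	tt := []struct {
		name     string
		filePath string
		hasErr   bool
	}{
		{name: "valid JSON", filePath: filepath.Join("testdata", "valid.json"), hasErr: false},
		{name: "invalid JSON", filePath: filepath.Join("testdata", "invalid.json"), hasErr: true},
	}

	for _, tc := range tt {
		t.Run(tc.name, func(t *testing.T) {
			// read the fixture file into memory
			data, err := os.ReadFile(tc.filePath)
			if err != nil {
				t.Fatalf("unable to read fixture %s: %v", tc.filePath, err)
			}

			_, err = PrettyPrintJSON(data)
			if tc.hasErr && err == nil {
				t.Fatal("expected an error but got none")
			}
			if !tc.hasErr && err != nil {
				t.Fatalf("unexpected error: %v", err)
			}
		})
	}
}
```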
The TestPrettyPrintJSON() function checks the normal operation and error
handling behavior of the PrettyPrintJSON() function by attempting to parse
both correctly formatted and malformed JSON files.
For each case in the test table (tt), the JSON file specified in filePath is
opened and its contents are read into a buffer, which is then passed into the
PrettyPrintJSON() function.
The outcome is then evaluated based on the hasErr field. If an error is
expected, and PrettyPrintJSON does not return an error, the test fails because
it indicates a failure in the function's error-handling logic. Conversely, if an
error occurs when none is expected, the test also fails.
Running the test is as simple as using the Go command below:
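Assuming you run it from the project root:

```bash
go test -v ./fixtures/
```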
With these fixtures, you can be confident that the PrettyPrintJSON() function
formats valid JSON correctly and reports errors when there are parsing
failures.
In the next section, you will verify the format of the prettified JSON.
Step 3 – Working with golden files
Testing often involves asserting that the output from a function matches an expected result. This becomes challenging with complex outputs, such as long HTML strings, intricate JSON responses, or even binary data. To address this, we'll use golden files.
A golden file stores the expected output for a test, allowing future tests to assert against it. This helps with detecting unexpected changes in the output, usually a sign of a bug in the program.
In the previous section, we used test fixtures to provide raw JSON data for
formatting. Now, we'll enhance our testing approach by using a golden file to
ensure that the formatted output from the PrettyPrintJSON function remains
consistent over time.
You can go ahead and add the following content to the code_test.go
file:
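The addition might look something like the helper below, assuming goldie's v2 API (with github.com/sebdah/goldie/v2 and path/filepath added to the file's imports); the exact options and signature may differ:

```go
// verifyMatch asserts that got matches the golden file stored at
// testdata/golden/<name>.golden. Passing -update to go test creates
// or refreshes the golden file instead of failing the test.
func verifyMatch(t *testing.T, name string, got []byte) {
	g := goldie.New(t, goldie.WithFixtureDir(filepath.Join("testdata", "golden")))
	g.Assert(t, name, got)
}
```

It is then called from the test loop for the case that is expected to succeed, for example:

```go
got, err := PrettyPrintJSON(data)
// ...existing error handling...
if !tc.hasErr {
	verifyMatch(t, "TestPrettyPrintJSON", got)
}
```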
The goldie package is a Go testing utility that does the following:
- Automatically creates a golden file with the expected output of the function under test if it doesn't exist.
- Asserts that the current test output matches the contents of the golden file.
- Optionally modifies the golden file with updated data when the -update flag is used with the go test command.
Make sure to download the package with the command below before proceeding:
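```bash
go get github.com/sebdah/goldie/v2
```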
The verifyMatch() function uses the goldie package to assert against the
formatted JSON output produced by the PrettyPrintJSON() function, but it will
fail initially because there's no golden file present yet.
To fix this, you need to include the -update flag to create the golden file
for this specific test:
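```bash
go test ./fixtures/ -run TestPrettyPrintJSON -update
```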
This creates the golden file in
fixtures/testdata/golden/TestPrettyPrintJSON.golden, so the test now passes.
Examine the contents of the golden file in your text editor:
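```bash
cat fixtures/testdata/golden/TestPrettyPrintJSON.golden
```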
Any time you use the -update flag, the contents of the golden file for
the corresponding test will be created or updated in the testdata/golden
directory as shown above.
Before committing your changes, ensure that the contents of the file meet your
expectations, as that is what future test runs (without -update) will be
compared against.
It's also important to point out a few things:
- Only use the -update flag locally. Your CI server should not be using the -update flag.
- Always commit the golden files to your repository to make them available to your teammates and in your CI/CD pipelines.
- Never use the -update flag unless you want to update the expected output of the function under test.
With that said, let's now move on to the next section where you'll learn about test helpers in Go.
Step 4 – Using test helpers
Just like production code, test code should be maintainable and readable. A hallmark of well-crafted code is its modular structure, achieved by breaking down complex tasks into smaller, manageable functions. This principle holds true in test environments as well, where these smaller, purpose-specific functions are known as test helpers.
Test helpers not only streamline code by abstracting repetitive tasks but also enhance reusability. For instance, if several tests require the same object configuration or database connection setup, it's inefficient and error-prone to duplicate this setup code across multiple tests.
To illustrate the benefit of test helpers, let's update the verifyMatch()
function introduced earlier. To designate a function as a test helper in Go, use
t.Helper(). This call is best placed at the beginning of the function to
ensure that any errors are reported in the context of the test that invoked the
helper, rather than within the helper function itself.
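With that change, the verifyMatch() helper from the earlier sketch would look like this:

```go
func verifyMatch(t *testing.T, name string, got []byte) {
	t.Helper() // report failures at the caller's line, not inside this helper

	g := goldie.New(t, goldie.WithFixtureDir(filepath.Join("testdata", "golden")))
	g.Assert(t, name, got)
}
```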
Debugging can become more challenging without marking the function with
t.Helper(). When a test fails, Go's testing framework will report the error
location within the helper function itself, not at the point where the helper
was called. This can obscure which test case failed, especially when multiple
test functions use the same helper.
To demonstrate this, remove the t.Helper() line you just added above, then
delete the entire golden directory within fixtures/testdata like this:
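```bash
rm -rf fixtures/testdata/golden
```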
When you execute the tests now, they fail once again. The failure is reported
to have occurred on line 22 of the code_test.go file, which points inside the
verifyMatch() helper rather than at the test that called it.
However, when you add the t.Helper() line back in, you get the same failure
but the reported line is different: it now says code_test.go:74, which points
directly to the invoking test.
Make sure to fix the test failure by running the tests with the -update flag
once again, as demonstrated in Step 3, before proceeding to the next section.
Step 5 – Setting up and tearing down test cases
Testing often involves initializing resources or configuring dependencies before executing the tests. This setup could range from creating databases and tables to seeding data, especially when testing database interactions like with a PostgreSQL database.
Implementing setup and teardown routines is essential to streamline this process and avoid repetition across tests. For example, if you want to test your PostgreSQL database implementation, several preparatory steps are necessary such as:
- Creating a new database
- Creating the tables in the database
- Optionally, adding data to the tables
While the steps above are easy enough to perform once, they quickly become repetitive when multiple tests each need to run through them. This is where implementing setup and teardown logic makes sense.
To demonstrate this, we'll implement a CRUD system where you can fetch a user and add a new user to the database. To do this, you need to create a few new directories:
- postgres: Contains the CRUD application code interacting with the PostgreSQL database.
- postgres/testdata/migrations: Stores the SQL files for setting up database tables and indexes.
- postgres/testdata/fixtures: Contains sample data to preload into the database.
Go ahead and create the necessary files in the postgres directory:
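```bash
mkdir postgres
touch postgres/user.go postgres/user_test.go
```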
Open the user.go file, and enter the following code:
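Here's a sketch of what user.go might contain, using database/sql; the exact types, columns, and SQL in the repo may differ:

```go
package postgres

import (
	"context"
	"database/sql"
)

// User represents a row in the users table.
type User struct {
	ID    int64
	Name  string
	Email string
}

// UserRepository provides access to the users table.
type UserRepository struct {
	DB *sql.DB
}

// Get retrieves a user from the database by their email address.
func (r *UserRepository) Get(ctx context.Context, email string) (User, error) {
	var u User
	err := r.DB.QueryRowContext(ctx,
		`SELECT id, name, email FROM users WHERE email = $1`, email,
	).Scan(&u.ID, &u.Name, &u.Email)
	return u, err
}

// Create inserts a new user into the database and returns the generated ID.
func (r *UserRepository) Create(ctx context.Context, u User) (int64, error) {
	var id int64
	err := r.DB.QueryRowContext(ctx,
		`INSERT INTO users (name, email) VALUES ($1, $2) RETURNING id`,
		u.Name, u.Email,
	).Scan(&id)
	return id, err
}
```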
In the above code, there are two main functions:
- Get(): This method retrieves a user from the database by their email address.
- Create(): This method creates a new user in the database.
Before we can write the corresponding tests, let's create the migration files that will contain the logic to set up the database tables, and also define the sample data we want to load into the database.
To do that, you need to create a few more files through the commands below:
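Something along these lines works; the migration file name below simply follows golang-migrate's naming convention, so the exact name in the repo may differ:

```bash
mkdir -p postgres/testdata/fixtures postgres/testdata/migrations
touch postgres/testdata/fixtures/users.yml
# placeholder name following golang-migrate's <version>_<name>.up.sql convention
touch postgres/testdata/migrations/000001_create_users_table.up.sql
```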
In the postgres/testdata/fixtures/users.yml file, add a list of a few sample
users to populate the database:
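In testfixtures' YAML format, a couple of made-up sample users might look like this:

```yaml
- id: 1
  name: Jane Doe
  email: jane@example.com

- id: 2
  name: John Smith
  email: john@example.com
```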
Next, create the SQL migration for the users table like this:
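A minimal table definition that matches the fixture above (the real migration may define more columns and indexes):

```sql
CREATE TABLE IF NOT EXISTS users (
    id    BIGSERIAL PRIMARY KEY,
    name  TEXT NOT NULL,
    email TEXT NOT NULL UNIQUE
);
```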
We now have both our migrations and sample data ready. The next step is to
implement the setup function that will be called from each test function. There
are two methods in the postgres/user.go file, which means we will write two
tests, and having a shared setup function means we can easily reuse the setup
logic for both.
To get started with creating a setup function, enter the following code in the
postgres/user_test.go file:
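Here's a rough sketch of what that setup might look like. The testcontainers-go, golang-migrate, and testfixtures APIs change between versions, so treat the exact calls and options below as assumptions to adapt to the versions you have installed:

```go
package postgres

import (
	"context"
	"database/sql"
	"testing"
	"time"

	"github.com/go-testfixtures/testfixtures/v3"
	"github.com/golang-migrate/migrate/v4"
	_ "github.com/golang-migrate/migrate/v4/database/postgres"
	_ "github.com/golang-migrate/migrate/v4/source/file"
	_ "github.com/lib/pq"
	"github.com/testcontainers/testcontainers-go"
	"github.com/testcontainers/testcontainers-go/modules/postgres"
	"github.com/testcontainers/testcontainers-go/wait"
)

// setupDatabase starts a throwaway PostgreSQL container, prepares the
// database, and returns the handle together with a teardown closure
// that each test should defer.
func setupDatabase(t *testing.T) (*sql.DB, func()) {
	t.Helper()
	ctx := context.Background()

	// start a disposable PostgreSQL instance in Docker
	container, err := postgres.RunContainer(ctx,
		testcontainers.WithImage("postgres:16-alpine"),
		postgres.WithDatabase("users_test"),
		postgres.WithUsername("postgres"),
		postgres.WithPassword("postgres"),
		testcontainers.WithWaitStrategy(
			wait.ForLog("database system is ready to accept connections").
				WithOccurrence(2).
				WithStartupTimeout(30*time.Second),
		),
	)
	if err != nil {
		t.Fatalf("starting postgres container: %v", err)
	}

	dsn, err := container.ConnectionString(ctx, "sslmode=disable")
	if err != nil {
		t.Fatalf("building connection string: %v", err)
	}

	db, err := sql.Open("postgres", dsn)
	if err != nil {
		t.Fatalf("opening database: %v", err)
	}

	prepareTestDatabase(t, db, dsn)

	teardown := func() {
		_ = db.Close()
		_ = container.Terminate(ctx)
	}
	return db, teardown
}

// prepareTestDatabase applies the SQL migrations and loads the sample data.
func prepareTestDatabase(t *testing.T, db *sql.DB, dsn string) {
	t.Helper()

	m, err := migrate.New("file://testdata/migrations", dsn)
	if err != nil {
		t.Fatalf("creating migrator: %v", err)
	}
	if err := m.Up(); err != nil && err != migrate.ErrNoChange {
		t.Fatalf("running migrations: %v", err)
	}

	fixtures, err := testfixtures.New(
		testfixtures.Database(db),
		testfixtures.Dialect("postgres"),
		testfixtures.Directory("testdata/fixtures"),
	)
	if err != nil {
		t.Fatalf("creating fixtures loader: %v", err)
	}
	if err := fixtures.Load(); err != nil {
		t.Fatalf("loading fixtures: %v", err)
	}
}
```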
The above code defines a setup for testing with a PostgreSQL database in Go, using the testcontainers-go library to create a real database environment in Docker containers. We have the following two functions:
- setupDatabase(): Acts as the main setup function that initializes a new PostgreSQL container, sets up the database, loads sample data, and returns a closure for tearing down the environment. This closure should be invoked at the completion of each test to properly clean up and shut down the database container.
- prepareTestDatabase(): Serves as a helper function to keep the setupDatabase() function concise. It is responsible for seeding the database with sample data using the testfixtures and golang-migrate packages.
Make sure to download all the third-party packages used in the file by running:
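```bash
go mod tidy
```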
Putting this together, an example of how to use the above code would be:
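For instance, with a hypothetical test function:

```go
func TestSomething(t *testing.T) {
	db, teardown := setupDatabase(t)
	defer teardown()

	// db is a ready-to-use *sql.DB backed by a disposable container,
	// already migrated and seeded with the fixture data
	_ = db
}
```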
The next step is to write the tests to validate the CRUD logic you previously
wrote. To do this, update the user_test.go file with the following contents:
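Here's a sketch of the two tests; the UserRepository type and the seeded email follow the earlier sketches, so adjust them to match the actual repo:

```go
func TestUserRepository_Create(t *testing.T) {
	db, teardown := setupDatabase(t)
	defer teardown()

	repo := &UserRepository{DB: db}

	id, err := repo.Create(context.Background(), User{Name: "Ada Lovelace", Email: "ada@example.com"})
	if err != nil {
		t.Fatalf("creating user: %v", err)
	}
	if id == 0 {
		t.Fatal("expected a non-zero user ID")
	}
}

func TestUserRepository_Get(t *testing.T) {
	db, teardown := setupDatabase(t)
	defer teardown()

	repo := &UserRepository{DB: db}
	ctx := context.Background()

	// a user seeded through the fixtures file
	if _, err := repo.Get(ctx, "jane@example.com"); err != nil {
		t.Fatalf("fetching seeded user: %v", err)
	}

	// a user that does not exist yet
	if _, err := repo.Get(ctx, "grace@example.com"); err == nil {
		t.Fatal("expected an error for a non-existent user")
	}

	// create the missing user, then fetch it again
	if _, err := repo.Create(ctx, User{Name: "Grace Hopper", Email: "grace@example.com"}); err != nil {
		t.Fatalf("creating user: %v", err)
	}
	if _, err := repo.Get(ctx, "grace@example.com"); err != nil {
		t.Fatalf("fetching newly created user: %v", err)
	}
}
```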
You defined the following test cases in the file above:
- TestUserRepository_Create(): This test case handles the straightforward task of inserting a new user into the database.
- TestUserRepository_Get(): This test case checks the functionality of retrieving a user from the database. It also tests the retrieval of a non-existent user, followed by the creation of that user and a subsequent retrieval attempt to confirm the operation's success.
In both cases, the setupDatabase() function is called first, and the
teardown() function is deferred so that each test runs with a clean slate.
Our test suite for the postgres package is now complete, so you can go ahead and
run it with the following command:
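```bash
go test -v ./postgres/
```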
They should all pass successfully.
Step 6 – Running Go tests in parallel
Go tests are executed serially by default, meaning that each test runs only after the previous one has completed. This approach is manageable with a few tests, but as your suite grows, the total execution time can become significant.
The end goal is to have a lot of tests, run them, and be confident they all pass, without developers wasting time waiting for results. To speed up the testing process, Go can execute tests in parallel.
Here are a few benefits of running tests in parallel:
- Increased speed: Parallel testing can significantly reduce waiting time for test results.
- Detection of flaky tests: Flaky tests are those that produce inconsistent results, often due to dependencies on external state or interactions with shared resources. Running tests in parallel helps surface these issues early, because tests that quietly depend on shared state tend to fail when run concurrently.
You can enable parallel test execution in Go using the following methods (a short sketch follows this list):
- From the command line: When running the go test command, you can use the -parallel flag to control parallel test execution. This flag accepts a number indicating the maximum number of tests allowed to run simultaneously within a package, defaulting to the number of CPUs available on the machine.
- Within test code: Invoking the t.Parallel() method in your test function instructs the test runner to run the test in parallel with other tests that also call t.Parallel(), as shown in the sketch below.
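Here's a minimal sketch of both approaches; the test names and sleeps below are stand-ins for real, slower tests:

```go
package demo

import (
	"testing"
	"time"
)

func TestFeatureA(t *testing.T) {
	t.Parallel() // opt this test into parallel execution

	time.Sleep(500 * time.Millisecond) // stand-in for slow work
}

func TestFeatureB(t *testing.T) {
	t.Parallel() // runs concurrently with TestFeatureA

	time.Sleep(500 * time.Millisecond)
}
```

```bash
# allow up to four tests marked with t.Parallel() to run at once per package
go test -parallel 4 ./...
```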
Step 7 – Improving go test output
While Go's test runner produces output that can be easily read and understood, there are ways to make it much more readable, such as using colors to distinguish passed from failed tests, or getting a detailed summary of all executed tests.
To demonstrate this, we will be using a project called gotestsum, but there are others like gotestfmt you can explore as well. To install this package, you need to run the following command:
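```bash
go install gotest.tools/gotestsum@latest
```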
The gotestsum package includes a few different ways to format the output of
the executed tests. The first one is testdox. This can be used by running the
following command:
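```bash
gotestsum --format testdox ./...
```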
Another popular option is to list the packages that have been tested. This can be used by running the following command:
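```bash
gotestsum --format pkgname ./...
```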
An added advantage of using gotestsum is that it can automatically rerun tests
upon any changes to Go files in the project through the --watch flag:
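```bash
gotestsum --watch
```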
Step 8 – Understanding Blackbox and Whitebox testing
Testing in Go is generally a straightforward process: invoke a function or system, provide inputs, and verify the outputs. However, there are two primary approaches to this process: Whitebox testing and Blackbox testing.
1. Whitebox testing
Throughout this tutorial, we've primarily engaged in Whitebox testing. This approach involves accessing and inspecting the internal implementations of the functions under test by placing the test file in the same package as the code under test.
For example, if you have a package calc with the following code:
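Here's one possible shape for such a package; the unexported result field is included to illustrate internal state:

```go
package calc

// Calculator performs basic arithmetic and remembers the last result.
type Calculator struct {
	result float64 // unexported internal state
}

// Add adds the two operands, stores the sum internally, and returns it.
func (c *Calculator) Add(a, b float64) float64 {
	c.result = a + b
	return c.result
}
```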
The test for the Add() method will be in the same package like this:
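Continuing the sketch, a Whitebox test declares the same calc package and can reach the unexported field:

```go
package calc

import "testing"

func TestAdd(t *testing.T) {
	c := &Calculator{}

	if got := c.Add(2, 3); got != 5 {
		t.Fatalf("expected 5, got %v", got)
	}

	// Whitebox: the test lives in the same package, so it can also
	// assert against the unexported internal state.
	if c.result != 5 {
		t.Fatalf("expected internal result to be 5, got %v", c.result)
	}
}
```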
Since Whitebox testing allows you to access internal state, you can often catch certain bugs by asserting against the internal state of the function under test.
Its main disadvantage is that such tests can be more brittle since they are coupled to the program's internal structure. For example, if you change the algorithm used to compute some result, the test can break even if the final output is exactly the same.
2. Blackbox testing
Blackbox testing involves testing a software system without any knowledge of the application's internal workings. The test does not assert against the underlying logic of the function but merely checks if the software behaves as expected from an external viewpoint.
To implement Blackbox testing in Go, place your tests in an external test
package by appending _test to the package name, which effectively restricts
access to the package's internal state and unexported functions.
With the calc example, the test will be placed in the calc_test package
like this:
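Continuing the same sketch, with a hypothetical module path (replace example.com/project with your project's real module path):

```go
package calc_test

import (
	"testing"

	"example.com/project/calc" // hypothetical module path
)

func TestAdd(t *testing.T) {
	c := &calc.Calculator{}

	// Blackbox: only the exported API is visible here; the unexported
	// result field cannot be accessed from this package.
	if got := c.Add(2, 3); got != 5 {
		t.Fatalf("expected 5, got %v", got)
	}
}
```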
This method of testing prevents you from being able to access the internal state
of the calc package, thus allowing you to focus on ensuring that the
function being tested produces the correct output.
If you're practicing Blackbox testing and you also need to test implementation
details, a common pattern is to create a separate file ending in
_internal_test.go within the package under test:
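For the calc sketch, that could be a calc_internal_test.go file that declares the calc package itself:

```go
package calc

import "testing"

// Internal tests live in the calc package, so they can reach unexported
// identifiers even though the rest of the suite is Blackbox.
func TestAddStoresResult(t *testing.T) {
	c := &Calculator{}
	c.Add(2, 3)

	if c.result != 5 {
		t.Fatalf("expected internal result to be 5, got %v", c.result)
	}
}
```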
Final thoughts
As your test suite expands, the complexity can also increase. However, by applying the patterns and techniques discussed in this article, you can keep your tests organized and manageable.
Thanks for reading, and happy testing!