Structlog is an open-source logging tool for Python known for its simple API,
performance, and quality of life features. It has been used in production since
2013 and has evolved over the years to incorporate recent major changes in
Python such as asyncio and
type hints.
This tutorial explores the essential aspects of Structlog. We will delve into
formatting logs, applying filters, incorporating contextual data, integrating
Structlog with the Python standard logging library, and much more.
Finally, we'll demonstrate how to integrate Structlog seamlessly into a Django
web application, unlocking its full potential for your projects.
Side note: Visualize your Structlog logs in real time
Centralizing logs isn’t only about storage. It’s also about making them easy to explore. With Better Stack, you can watch Structlog events stream in live, filter by fields, and quickly spot patterns across requests and services.
Prerequisites
Before proceeding with this tutorial, ensure you have the latest version of
Python (3.11 at the time of writing). If you don't have Python installed, find
the download instructions here.
Once you have Python installed, create a directory that will contain the code
samples, then change into the directory:
Copied!
mkdir structlog_demo && cd structlog_demo
Next, create a virtual environment to avoid dependency conflicts on your system
and activate it:
Copied!
python3 -m venv venv
Copied!
source venv/bin/activate
When the virtual environment is active, the terminal will be prefixed with your
virtual environment's name in parentheses. You are now all set to continue with
the remainder of this article.
Getting started with Structlog
Before you can start logging with Structlog, you must download the package into
your project with the following command:
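Structlog is distributed on PyPI, so a regular pip install inside the activated virtual environment is all you need:

```shell
pip install structlog
```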
After installing Structlog, create an app.py file using a text editor of your
choice and add the following code to log with Structlog:
app.py
Copied!
import structlog
logger = structlog.get_logger()
logger.info("Logging with structlog")
Structlog provides a pre-configured logger by default, which you access by
invoking the structlog.get_logger() method. The info() method is used on the
resulting logger to record messages at the INFO level.
Save the file and run the program with the following command:
Copied!
python app.py
Output
2025-06-06 15:46:27 [info ] Logging with structlog
The output shows the following:
2025-06-06 15:46:27: the time stamp.
info: the severity level of the log message.
Logging with structlog: the log message.
Understanding log levels in Structlog
Log levels are labels that indicate the importance or
severity of the message. Structlog retains the five log levels found in the
logging module:
DEBUG(10): used to log information that is useful for debugging the program.
INFO(20): indicates a normal event in the program.
WARNING(30): tracks events that indicate potential issues or anomalies that
might not be critical but should be noted and addressed soon.
ERROR(40): indicates an error concerning an operation in your application
that is fatal to that operation alone.
CRITICAL(50): indicates a severe problem that can force the application to
shut down, and must be investigated right away.
Each of the log levels mentioned has a corresponding logging method that you can
invoke on a logger:
app.py
Copied!
import structlog
logger = structlog.get_logger()
logger.debug("Database connection established")
logger.info("Processing data from the API")
logger.warning("Resource usage is nearing capacity")
logger.error("Failed to save the file. Please check permissions")
logger.critical("System has encountered a critical failure. Shutting down")
Output
2025-06-06 15:47:07 [debug ] Database connection established
2025-06-06 15:47:07 [info ] Processing data from the API
2025-06-06 15:47:07 [warning ] Resource usage is nearing capacity
2025-06-06 15:47:07 [error ] Failed to save the file. Please check permissions
2025-06-06 15:47:07 [critical ] System has encountered a critical failure. Shutting down
In the console, the messages are colored according to their severity.
Now that you are familiar with Structlog's severity levels and their
corresponding methods, you can move on to filtering log entries.
Setting the default log level
Unlike other logging packages, Structlog does not do any level-based filtering
by default. To filter the logs based on the level, you need to configure
Structlog using its configure() method:
The structlog.make_filtering_bound_logger() method allows you to set a desired
minimum level. The method uses the Python standard logging library's
levels, which is why you import that library on the second line. The logging library
is used only to provide the log level to Structlog and nothing else.
When you run the file again, you will only see messages with a severity level of
WARNING or higher:
Output
2025-06-06 15:51:42 [warning ] Resource usage is nearing capacity
2025-06-06 15:51:42 [error ] Failed to save the file. Please check permissions
2025-06-06 15:51:42 [critical ] System has encountered a critical failure. Shutting down
Alternatively, you can also pass the integer value associated with the level
like so:
logger = structlog.get_logger()
logger.debug("Database connection established")
logger.info("Processing data from the API")
logger.warning("Resource usage is nearing capacity")
logger.error("Failed to save the file. Please check permissions")
logger.critical("System has encountered a critical failure. Shutting down")
The os.environ.get() method accesses the LOG_LEVEL environment variable, and
the string value is uppercased. It defaults to INFO if the LOG_LEVEL variable
is not set. Afterwards, getattr() is used to translate the level
string into a valid log level, which is then passed to the
make_filtering_bound_logger() method.
You can test this out by setting the environment variable when you run the file
as shown below:
Copied!
LOG_LEVEL=error python app.py
Output
2025-06-06 15:53:17 [error ] Failed to save the file. Please check permissions
2025-06-06 15:53:17 [critical ] System has encountered a critical failure. Shutting down
That takes care of level-based filtering. In the next section, you will learn
how to format the log records.
Formatting log records
You can format log records with Structlog using processors, which are functions
that customize log messages. Structlog has
built-in processors
that can add timestamps and log levels, or modify the log format, to mention a few.
These processors can be composed into a processor chain. When a processor
modifies a log entry, it passes the modified value to the next processor. To
understand this fundamental idea, we will add each processor individually
starting with the following example:
app.py
Copied!
import structlog

structlog.configure(
    processors=[
        structlog.dev.ConsoleRenderer(),
    ]
)

logger = structlog.get_logger()
logger.info("An info message")
In the preceding snippet, you added a ConsoleRenderer() processor to the
processors list. The processor receives an event dictionary and formats every
property as key=value pairs aside from the timestamp, log level, and log
message. Then, it displays the value in the console.
Running the file generates the following output:
Output
An info message
As you can observe from the output, only the log message is logged in the
console. There is no severity level or a timestamp as before. For the log entry
to have a log level or timestamp, you will have to add a processor to attach the
information you want. Structlog provides an add_log_level processor that adds
a log level to a log entry. Add the highlighted code to add the processor:
import structlog

structlog.configure(
    processors=[
        structlog.processors.add_log_level,
        structlog.dev.ConsoleRenderer(),
    ]
)

logger = structlog.get_logger()
logger.info("An info message")
The log entry now displays the severity level:
Output
[info ] An info message
When the info() logging method is invoked, Structlog constructs an event
dictionary containing the message passed to the method. It then passes the event
dictionary to the first processor in the list, which is the add_log_level
method here.
This method adds a severity level property to the dictionary and returns the
same dictionary to the next processor. The ConsoleRenderer() then takes the
event dictionary, converts it to a string, and displays it in the console.
Processor order matters here. If you swapped the order and placed
ConsoleRenderer() before add_log_level, running the program would instead raise:
"TypeError: 'str' object does not support item assignment".
The issue is that add_log_level expects an event dictionary so that it
can add a log level property. Instead, it receives the string produced by
ConsoleRenderer() and attempts to add the property to it, triggering the error.
Therefore, ensure that the processor that handles output is always the last one
in the chain.
Customizing the timestamp
At this point, the log entry has no timestamp.
Timestamps are crucial
since they let you know when the log entry was made, allowing you to filter
log messages based on date or time.
Structlog provides the TimeStamper() processor that adds timestamps to log
entries:
import structlog

structlog.configure(
    processors=[
        structlog.processors.TimeStamper(),
        structlog.processors.add_log_level,
        structlog.dev.ConsoleRenderer(),
    ]
)

logger = structlog.get_logger()
logger.info("An info message")
Output
1749218120.31536 [info ] An info message
This timestamp is a Unix timestamp representing the number of seconds that have
elapsed since January 1, 1970, 00:00 UTC. To make the timestamp more human
readable, you can pass an fmt option and set it to the
ISO-8601 format:
2025-06-06T13:55:40.717123Z [info ] An info message
The ISO-8601 format is a popular and recommended standard for formatting date
and time in logs since it can record timezones. We recommend sticking to UTC
time to remove the ambiguity of time zones or international boundaries.
In rare cases where you want to use a completely custom format, you can do it
with
strftime format codes:
Adding custom fields to log records
Structlog also allows you to define custom fields. For example, you might want
your logs to include a process ID, hostname, or the Python version. As of this
writing, Structlog doesn't have built-in processors to add this kind of
information, so you need to create a custom processor to add them.
In this section, you will define a custom processor that adds a process ID to
the log entry. Building upon the example in the previous section, add the
highlighted code:
Structlog processors receive three arguments: the logger, the method name (such as
info), and an event dictionary. In the set_process_id function, the first
two parameters are not used, but event_dict is used to add a custom
property to the log, which in this case is the process ID.
To make Structlog aware of the set_process_id() custom processor, you must add
it to the processor chain. When you run the file, you will see the process ID in
the log record:
Output
2025-06-06T13:56:46.479795Z [info ] An info message process_id=55798
Logging in JSON
Structlog provides a JSONRenderer() processor for creating structured logs
using the JSON format. You only need to replace the ConsoleRenderer() with the
highlighted line:
The event key records the log message, but you can rename it to something more
typical using the EventRenamer() processor. Add the following line to rename
event to msg:
Side note: Structlog JSON logs are perfect for Better Stack
Once you switch Structlog to JSON output, you can pipe those structured events straight into Better Stack, so you can filter issues instantly and correlate events across services without regex-heavy searching.
Adding contextual data to your logs
Now that you know how to create structured logs, filtering to find the logs you
want to read will be much easier. But without adding relevant contextual data,
it can be difficult to understand the sequence of events leading up to the log.
To remedy this, you can include contextual data with each log message. In a web
application, such data could be the request ID, HTTP status code, resource ID,
and more.
With Structlog, you can add the contextual data at log point using key-value
pairs:
In a web application, if you want information like the request ID in all logs,
you only need to invoke the bind() method in the middleware.
Log filtering with Structlog
Log filtering in Structlog is achieved through processors. You can filter an
event based on any property in the event dictionary and raise the
structlog.DropEvent exception to drop the event as demonstrated in this
example:
The drop_messages() function is a custom processor that checks if the route
property value on the event dictionary matches login. If the condition
evaluates to true, the event is dropped and the log message will not be logged:
Output
{"title": "My first post", "route": "post", "event": "Post Created", "level": "info", "timestamp": "2025-06-06T14:00:17.830155Z"}
Structlog also allows you to filter events based on call site parameters like
filename, function name, thread, and more. You only need to add the
CallsiteParameterAdder() and specify the parameters you'd like to use for your
filtering.
The read_files() and delete_files() functions log messages in the console.
If you want to see log entries from only the read_files() function, add the
highlighted lines to drop log messages in the delete_files() function:
The filter_function() custom processor checks if the function name matches
delete_files and drops the event if the condition evaluates to true.
However, the event dictionary passed to each processor does not contain this
information by default. Therefore, the CallsiteParameterAdder() processor must be
introduced to include the func_name property, among others. Finally, the
filter_function custom processor is added to the processor chain, so that
messages from the specified function are dropped accordingly.
The log message in the delete_files() function has now been filtered out.
Using asynchronous methods to avoid blocking
If your application writes a lot of logs, it can block while
the Structlog processor chain formats the log records. To avoid this,
Structlog provides asynchronous logging methods prefixed with an a. For
example, a log method like info() has the asynchronous counterpart ainfo().
To use it, you import the asyncio library from Python and ensure that the
logging method call is invoked within a function that is prefixed with the
async keyword:
Behind the scenes, the application will execute concurrently with the Structlog
processor chain as it formats the logs.
Logging exceptions with Structlog
An application may detect errors during execution and throw an exception,
interrupting the normal flow of the program. These exceptions provide insights
into what went wrong and provide a starting point for debugging. Structlog
provides an exception() method to log exceptions as seen in this example:
{"exc_info": true, "event": "Cannot divide one by zero!", "level": "error", "timestamp": "2025-06-06T14:02:36.907179Z"}
When the ZeroDivisionError exception is thrown, Structlog logs the message
passed to the exception() method. But it is currently missing a stack trace
that can help you find the root cause of the problem. To add this info to all
exceptions, use the following processor:
The exception now contains helpful information that provides more context.
Notice that the exception details are also serialized in the JSON format, making
automatic analysis by log management systems much easier!
Logging into files
So far, you've been sending the logs to the console. Structlog also allows you
to redirect the logs to a more persistent storage device like a file by using
the WriteLoggerFactory() method as follows:
Here, you set the logger_factory to the WriteLoggerFactory() method to send
the logs to a new app.log file. When you run the program, this file will be
created and populated with the log entries instead of the console output.
When logging to files, be sure to control the sizes of the files through log
rotation.
Using Structlog in a Django application
Now that you are familiar with how Structlog works, you will implement a logging
system in a
Django World Clock application.
This application lets you search for any location and returns the time for the
location.
For a smooth integration, we will use the
django-structlog package.
The package allows for smooth integration of Django with Structlog and also
comes with middleware that adds context data, such as user agent, IP address, or
a request ID.
First, deactivate the demo project's virtual environment, then set up the Django World Clock project and start its development server. You will see output like the following:
Watching for file changes with StatReloader
Performing system checks...
System check identified no issues (0 silenced).
June 06, 2025 - 14:09:02
Django version 4.1.1, using settings 'djangoWorldClock.settings'
Starting development server at http://127.0.0.1:8000/
Quit the server with CONTROL-C.
Next, visit http://127.0.0.1:8000/ in the web browser of your choice to see a
page similar to this:
When you enter a valid location, you will see the current date and time for the city:
When you search for a blank location, you will see the following error:
Installing and configuring django-structlog
With the project up and running, you will now install django-structlog and set
up the middleware for logging.
In the terminal, enter the following command to install django-structlog:
Copied!
python -m pip install django-structlog
Open djangoWorldClock/settings.py in your text editor and register the
django_structlog package:
djangoWorldClock/settings.py
Copied!
INSTALLED_APPS = [
    # . . .
    "django_structlog",
]
Following that, add django-structlog middleware to your project:
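Still in djangoWorldClock/settings.py, append the RequestMiddleware to your existing MIDDLEWARE list:

```python
MIDDLEWARE = [
    # ... your existing middleware entries ...
    "django_structlog.middlewares.RequestMiddleware",
]
```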
The logging configuration defines two formatters:
json_formatter: converts log messages to the JSON format.
plain_console: renders log messages as plain text.
Next, you define two handlers that send plain log messages to the console, and
JSON log messages to the logs/json.log file.
Afterward, you configure Structlog with the configure() method and use
processors that should be familiar at this point.
Finally, you invoke os.makedirs() to create the directory logs, where the
log files will reside.
Save the file, and the server will automatically restart and log the
following (if not, refresh http://127.0.0.1:8000/ in the browser):
Output
2025-06-06T14:18:20.242188Z [info ] request_started [django_structlog.middlewares.request] ip=127.0.0.1 request='GET /' request_id=8081cf47-5e9a-40ef-bc2d-7dc4a5e8b612 user_agent='Mozilla/5.0 (Macintosh; Intel Mac OS X 10_15_7) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/137.0.0.0 Safari/537.36' user_id=None
2025-06-06T14:18:20.251050Z [info ] request_finished [django_structlog.middlewares.request] code=200 ip=127.0.0.1 request='GET /' request_id=8081cf47-5e9a-40ef-bc2d-7dc4a5e8b612 user_id=None
The application is now sending logs to the console as configured. Each log
message includes the request_id, IP address, HTTP method, and the user agent.
django-structlog has automatically added this context data for us.
In the project root directory, you will also find that the logs directory has
been created with the json.log file.
The json.log file contains the same events as JSON-formatted log messages.
The log messages include an IP address, which sometimes is sensitive
information. Other examples include authorization tokens and passwords. To keep
the information safe, you can redact the information or remove it. In this
tutorial, you will remove the IP address.
To achieve that, you can implement a signal receiver to override existing
context data. You can also use the same option to add new metadata to the
request. Create a worldClock/signals.py file and add the following code:
worldClock/signals.py
Copied!
from django.dispatch import receiver
from django_structlog.signals import bind_extra_request_metadata
import structlog
@receiver(bind_extra_request_metadata)
def remove_ip_address(request, logger, **kwargs):
    structlog.contextvars.bind_contextvars(ip=None)
The bind_contextvars() method adds or modifies context data globally in all the
log messages. Here, you set ip to None so that the IP address value is
removed.
To make sure that the signal works, add the following line to the
worldClock/__init__.py file:
worldClock/__init__.py
Copied!
from . import signals
With that, save the file and the server will automatically restart. Refresh
http://localhost:8000/ to confirm that the IP address has been removed from the logs:
Output
2025-06-06T14:20:36.438009Z [info ] request_started [django_structlog.middlewares.request] ip=None request='GET /' request_id=c9e8d320-b214-4fd4-83b2-131a0ad2468a user_agent='Mozilla/5.0 (Macintosh; Intel Mac OS X 10_15_7) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/137.0.0.0 Safari/537.36' user_id=None
2025-06-06T14:20:36.506138Z [info ] request_finished [django_structlog.middlewares.request] code=200 ip=None request='GET /' request_id=c9e8d320-b214-4fd4-83b2-131a0ad2468a user_id=None
Logging in Django view functions
With logging in place, you are now ready to log messages in Django views.
In every file that has views, you should import structlog and invoke
structlog.get_logger(), then call logging methods as you have been doing
earlier in the tutorial:
worldClock/views.py
Copied!
from django.shortcuts import render, redirect
import requests
import structlog
logger = structlog.get_logger(__name__)
def home(request):
    logger.debug("homepage visited")
    return render(request, "home.html")


def search(request):
    # If the request method is not POST, redirect to the home page
    if request.method != "POST":
        logger.info(
            "redirecting request to '/'",
            method=request.method,
            path=request.path,
        )
        return redirect("/")

    # Get the search query
    query = request.POST.get("q", "")
    searchLogger = logger.bind(query=query)
    searchLogger.info("incoming search query")

    try:
        # Add proper headers for the Nominatim API
        headers = {"User-Agent": "Django World Clock App (your-email@example.com)"}

        # Pass the search query to the Nominatim API to get a location
        location_response = requests.get(
            "https://nominatim.openstreetmap.org/search",
            params={"q": query, "format": "json", "limit": "1"},
            headers=headers,
        )
        searchLogger.bind(location=location_response).debug("Nominatim API response")

        # Check if the response is successful before trying to parse JSON
        if location_response.status_code == 200:
            location = location_response.json()
        else:
            return render(request, "500.html")

        # If a location is found, pass the coordinates to the Time API to get the current time
        if location:
            coordinate = [location[0]["lat"], location[0]["lon"]]
            time_response = requests.get(
                "https://timeapi.io/api/Time/current/coordinate",
                params={"latitude": coordinate[0], "longitude": coordinate[1]},
            )
            searchLogger.bind(time=time_response).debug("Time API response")
            searchLogger.bind(coordinate=coordinate).debug(
                "search query succeeded without errors"
            )

            if time_response.status_code == 200:
                return render(
                    request,
                    "success.html",
                    {"location": location[0], "time": time_response.json()},
                )
            else:
                return render(request, "500.html")
        # If a location is NOT found, return the error page
        else:
            searchLogger.info("location not found")
            return render(request, "fail.html")
    except Exception as error:
        searchLogger.exception(error)
        return render(request, "500.html")
When you save and refresh http://localhost:8000/, you will see the
homepage visited log message:
While logging to standard output works well during development, production applications require a more robust logging solution. You could use file-based logging with rotation as discussed earlier, but this approach requires you to access individual servers to view logs. A better approach is to centralize all your logs in one location where they can be monitored, analyzed, and searched efficiently.
There are multiple approaches to log centralization, but one of the most straightforward options is using a cloud-based log management platform. Once your application is forwarding logs successfully, you can watch events arrive in real time, set alerts for critical patterns, and explore your data through dashboards that make it easier to spot trends and anomalies over time.
If you want to see what that workflow looks like in practice, here’s a quick demo:
In this section, we'll configure our Structlog application to send logs to Better Stack Telemetry.
Before you can start ingesting logs into Better Stack, you'll need to create a free account and navigate to the Sources section from the left sidebar, then click Connect source.
Provide a descriptive name for your source and choose Python as the platform, then click Create source:
After creating the source, you'll receive a source token (e.g., abc123XYZ789tokenExample) and an ingestion endpoint (e.g., s1234567.us-west-2.betterstackdata.com). Copy both values as they're required for configuration:
The logtail-python package handles log delivery to Better Stack. If it isn't already installed in your environment, install it using:
Copied!
pip install logtail-python
Now, let's modify the World Clock application to include the Better Stack handler. Head back to djangoWorldClock/settings.py and add the Better Stack handler to the Structlog configuration:
djangoWorldClock/settings.py
Copied!
# ...
LOGGING = {
    # ...
    "handlers": {
        # ... the console and json_file handlers from earlier ...
        "betterstack": {
            "class": "logtail.LogtailHandler",
            "source_token": "your_source_token",
            "host": "your_ingesting_host",
        },
    },
    "loggers": {
        "django_structlog": {
            "handlers": ["console", "json_file", "betterstack"],
            "level": "INFO",
        },
    },
}
The LogtailHandler requires both the source token and the ingestion endpoint that you copied from the Better Stack interface.
Once your configuration is complete, you should see a "Logs received!" message. This confirms that logs from your Django application are successfully being delivered to Better Stack:
Once everything is set up, open your browser and visit http://127.0.0.1:8000/, then perform a city search to generate log activity. Your logs will begin streaming to Better Stack in real time:
To view more details about a specific event, click on any log entry:
Once logs are flowing in, open Live tail to see events streaming in real time. If you want to preview what the Live tail experience looks like first, here’s a quick demo:
From this point forward, all your Structlog output will be centralized in Better Stack Telemetry. You can easily apply filters to locate specific information or configure alerts to notify you when logs match particular criteria, making your production logging strategy both powerful and maintainable.
Final thoughts
In this article, we covered a broad range of Structlog features and how to
customize them. Armed with this knowledge, you should now be able to harness the
power of Structlog effectively in your projects. To expand on your learning, be
sure to read the
Structlog documentation.