
A Comprehensive Guide to Python Logging with Structlog

Stanley Ulili
Updated on November 30, 2023

Structlog is an open-source logging tool for Python known for its simple API, performance, and quality of life features. It has been used in production since 2013 and has evolved over the years to incorporate recent major changes in Python such as asyncio and type hints.

This tutorial explores the essential aspects of Structlog. We will delve into formatting logs, applying filters, incorporating contextual data, integrating Structlog with the Python standard logging library, and much more. Finally, we'll demonstrate how to seamlessly integrate Structlog into a Django web application, unlocking its full potential for your projects.

Prerequisites

Before proceeding with this tutorial, ensure you have the latest version of Python (3.11 at the time of writing). If you don't have Python installed, find the download instructions here.

Once you have Python installed, create a directory that will contain the code samples, then change into the directory:

 
mkdir structlog_demo && cd structlog_demo

Next, create a virtual environment to avoid dependency conflicts on your system, then activate it:

 
python3 -m venv venv
 
source venv/bin/activate

When the virtual environment is active, the terminal prompt will be prefixed with your virtual environment's name in parentheses. You are now all set to continue with the remainder of this article.

Getting started with Structlog

Before you can start logging with Structlog, you must download the package into your project with the following command:

 
python -m pip install structlog
Output
Collecting structlog
  Downloading structlog-23.1.0-py3-none-any.whl (62 kB)
     ━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━ 62.4/62.4 kB 3.0 MB/s eta 0:00:00
Installing collected packages: structlog
Successfully installed structlog-23.1.0

After installing Structlog, create an app.py file using a text editor of your choice and add the following code to log with Structlog:

app.py
import structlog

logger = structlog.get_logger()
logger.info("Logging with structlog")

Structlog provides a pre-configured logger by default, which you access by invoking the structlog.get_logger() method. The info() method is used on the resulting logger to record messages at the INFO level.

Save the file and run the program with the following command:

 
python app.py
Output
2023-07-16 12:29:21 [info     ] Logging with structlog

The output shows the following:

  • 2023-07-16 12:29:21: the timestamp.
  • info: the severity level of the log message.
  • Logging with structlog: the log message.

Understanding log levels in Structlog

Log levels are labels that indicate the importance or severity of the message. Structlog retains the five log levels found in the logging module:

  • DEBUG(10): used to log information that is useful for debugging the program.
  • INFO(20): indicates a normal event in the program.
  • WARNING(30): tracks events that indicate potential issues or anomalies that might not be critical but should be noted and addressed soon.
  • ERROR(40): indicates an error concerning an operation in your application that is fatal to that operation alone.
  • CRITICAL(50): indicates a severe problem that can force the application to shut down and must be investigated right away.

Each of the log levels mentioned has a corresponding logging method that you can invoke on a logger:

 
import structlog

logger = structlog.get_logger()

logger.debug("Database connection established")
logger.info("Processing data from the API")
logger.warning("Resource usage is nearing capacity")
logger.error("Failed to save the file. Please check permissions")
logger.critical("System has encountered a critical failure. Shutting down")
Output
2023-07-31 06:43:37 [debug    ] Database connection established
2023-07-31 06:43:37 [info     ] Processing data from the API
2023-07-31 06:43:37 [warning  ] Resource usage is nearing capacity
2023-07-31 06:43:37 [error    ] Failed to save the file. Please check permissions
2023-07-31 06:43:37 [critical ] System has encountered a critical failure. Shutting down

The messages are colored based on the severity of the message:

all-levels.png

Now that you are familiar with Structlog's severity levels and their corresponding methods, let's move on to filtering log entries.

Setting the default log level

Unlike other logging packages, Structlog does not do any level-based filtering by default. To filter the logs based on the level, you need to configure Structlog using its configure() method:

app.py
import structlog
import logging
structlog.configure(
    wrapper_class=structlog.make_filtering_bound_logger(logging.WARNING)
)
logger = structlog.get_logger()
. . .

The structlog.make_filtering_bound_logger() method allows you to set a desired minimum level. The method uses the log levels from the Python standard logging library, which is why that library is imported on the second line. The logging library is only used to provide the log level to Structlog and nothing else.

When you run the file again, you will only see messages with a severity level of WARNING or higher:

Output
2023-07-31 06:57:07 [warning  ] Resource usage is nearing capacity
2023-07-31 06:57:07 [error    ] Failed to save the file. Please check permissions
2023-07-31 06:57:07 [critical ] System has encountered a critical failure. Shutting down

Alternatively, you can also pass the integer value associated with the level like so:

 
. . .
structlog.configure(
    wrapper_class=structlog.make_filtering_bound_logger(30)
)
. . .

Another option to consider is using environment variables. This will allow you to change the minimum log level without having to change the code:

app.py
import structlog
import logging
import os
level = os.environ.get("LOG_LEVEL", "INFO").upper()
LOG_LEVEL = getattr(logging, level)

structlog.configure(
    wrapper_class=structlog.make_filtering_bound_logger(LOG_LEVEL)
)
logger = structlog.get_logger()

logger.debug("Database connection established")
logger.info("Processing data from the API")
logger.warning("Resource usage is nearing capacity")
logger.error("Failed to save the file. Please check permissions")
logger.critical("System has encountered a critical failure. Shutting down")

The os.environ.get() method reads the LOG_LEVEL environment variable and uppercases the string value, defaulting to INFO if the variable is not set. Afterwards, the getattr() function translates the level string into the corresponding numeric log level. Finally, the result is passed to the make_filtering_bound_logger() method.

You can test this out by setting the environment variable when you run the file as shown below:

 
LOG_LEVEL=warning python app.py
Output
2023-07-31 07:10:05 [warning  ] Resource usage is nearing capacity
2023-07-31 07:10:05 [error    ] Failed to save the file. Please check permissions
2023-07-31 07:10:05 [critical ] System has encountered a critical failure. Shutting down

That takes care of level-based filtering. In the next section, you will learn how to format the log records.

Formatting log records

You can format log records with Structlog using processors which are functions that customize log messages. Structlog has built-in processors that can add timestamps, log levels or modify a log format to mention a few. These processors can be composed into a processor chain. When a processor modifies a log entry, it passes the modified value to the next processor. To understand this fundamental idea, we will add each processor individually starting with the following example:

app.py
import structlog

structlog.configure(
    processors=[
        structlog.dev.ConsoleRenderer(),
    ]
)
logger = structlog.get_logger()
logger.info("An info message")

In the preceding snippet, you added the ConsoleRenderer() processor to the processors list. The processor receives an event dictionary, formats every property other than the timestamp, log level, and log message as key=value pairs, and renders the result in the console.

Running the file generates the following output:

Output
An info message

As you can observe from the output, only the log message is logged in the console. There is no severity level or a timestamp as before. For the log entry to have a log level or timestamp, you will have to add a processor to attach the information you want. Structlog provides an add_log_level processor that adds a log level to a log entry. Add the highlighted code to add the processor:

app.py
import structlog

structlog.configure(
    processors=[
        structlog.processors.add_log_level,
        structlog.dev.ConsoleRenderer(),
    ]
)
logger = structlog.get_logger()
logger.info("An info message")

The log entry now displays the severity level:

Output
[info     ] An info message

When the info() logging method is invoked, Structlog constructs an event dictionary containing the message passed to the method. It then passes the event dictionary to the first processor in the list, which is the add_log_level method here.

This method adds a severity level property to the dictionary and returns the same dictionary to the next processor. The ConsoleRenderer() then takes the event dictionary, converts it to a string, and displays it in the console.

Therefore, if you mix up the processor order like this:

app.py
. . .
structlog.configure(
    processors=[
        structlog.dev.ConsoleRenderer(),
        structlog.processors.add_log_level,
    ]
)

You would receive an error:

Output
 "TypeError: 'str' object does not support item assignment".

The issue here is that add_log_level expects an event dictionary so that it can add a log level property. Instead, it receives a string and attempts to add the property to it, triggering the error.

Therefore, ensure that the processor that handles output is always the last one in the chain.

Customizing the timestamp

At this point, the log entry has no timestamp. Timestamps are crucial since they let you know when the log entry was made, allowing you to filter log messages by date or time.

Structlog provides the TimeStamper() processor that adds timestamps to log entries:

app.py
import structlog

structlog.configure(
    processors=[
        structlog.processors.TimeStamper(),
        structlog.processors.add_log_level,
        structlog.dev.ConsoleRenderer(),
    ]
)
logger = structlog.get_logger()
logger.info("An info message")
Output
1689524060.085182 [info     ] An info message

This timestamp is a Unix timestamp, representing the number of seconds that have elapsed since January 1, 1970, 00:00 UTC. To make the timestamp more human-readable, you can pass the fmt option and set it to the ISO-8601 format:

app.py
. . .
structlog.configure(
    processors=[
        structlog.processors.TimeStamper(fmt="iso"),
        structlog.processors.add_log_level,
        structlog.dev.ConsoleRenderer(),
    ]
)
. . .
Output
2023-08-01T11:17:18.135786Z [info     ] An info message

The ISO-8601 format is a popular and recommended standard for formatting date and time in logs since it can record timezones. We recommend sticking to UTC time to remove the ambiguity of time zones or international boundaries.
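TimeStamper() also accepts a utc flag (in recent Structlog releases it defaults to True), so you can make the choice of UTC explicit in your processor chain:

structlog.processors.TimeStamper(fmt="iso", utc=True),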

In rare cases where you want to use a completely custom format, you can do it with strftime format codes:

 
structlog.processors.TimeStamper(fmt="%Y-%m-%d %H:%M.%S"),

Most often, the ISO-8601 format is all you need.

Adding custom fields to logs

Structlog also allows you to define custom fields. For example, you might want your logs to include a process ID, hostname, or the Python version. As of this writing, Structlog doesn't have built-in processors to add this kind of information. So you need to create a custom processor to add them.

In this section, you will define a custom processor that adds a process ID to the log entry. Building upon the example in the previous section, add the highlighted code:

app.py
import structlog
import os


def set_process_id(_, __, event_dict):
    event_dict["process_id"] = os.getpid()
    return event_dict


structlog.configure(
    processors=[
        structlog.processors.TimeStamper(fmt="iso"),
        structlog.processors.add_log_level,
        set_process_id,
        structlog.dev.ConsoleRenderer(),
    ]
)
. . .

Structlog processors receive three arguments: the logger, the name of the logging method (such as info), and an event dictionary. In the set_process_id() function, the first two parameters are unused; the event_dict is used to add a custom property to the log, which in this case is the process ID.

To make Structlog aware of the set_process_id() custom processor, you must add it to the processor chain. When you run the file, you will see the process ID in the log record:

Output
2023-07-31T05:28:35.427629Z [info     ] An info message                process_id=102790
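You can follow the same pattern for the other fields mentioned earlier. For example, here is a minimal sketch of a hypothetical processor that attaches the hostname and Python version to every entry (the field names are illustrative):

import platform
import socket


def add_host_info(_, __, event_dict):
    # Hypothetical custom processor: attach the hostname and Python version
    event_dict["hostname"] = socket.gethostname()
    event_dict["python_version"] = platform.python_version()
    return event_dict

As with set_process_id(), it only takes effect once you add add_host_info to the processor chain before ConsoleRenderer().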

Logging in JSON

Structlog provides a JSONRenderer() processor for creating structured logs using the JSON format. You only need to replace the ConsoleRenderer() with the highlighted line:

app.py
. . .
structlog.configure(
    processors=[
        structlog.processors.add_log_level,
        structlog.processors.TimeStamper(fmt="iso"),
        set_process_id,
        structlog.processors.JSONRenderer(),
    ]
)
logger = structlog.get_logger()
logger.info("Log entry in JSON format")

The JSONRenderer() processor renders the log in a structured form as displayed in the output:

Output
{"event": "Log entry in JSON format", "level": "info", "timestamp": "2023-07-31T05:34:52.779215Z", "process_id": 114401}

The event key records the log message, but you can rename it to something more typical using the EventRenamer() processor. Add the following line to rename event to msg:

app.py
. . .
structlog.configure(
    processors=[
        structlog.processors.add_log_level,
        structlog.processors.TimeStamper(fmt="iso"),
        set_process_id,
        structlog.processors.EventRenamer("msg"),
        structlog.processors.JSONRenderer(),
    ]
)
. . .
Output
{"level": "info", "timestamp": "2023-07-31T05:39:42.198119Z", "process_id": 122903, "msg": "Log entry in JSON format"}

Now the event property has been renamed to msg.

Adding contextual data to your logs

Now that you know how to create structured logs, filtering to find the logs you want to read will be much easier. But without adding relevant contextual data, it can be difficult to understand the sequence of events leading up to the log.

To remedy this, you can include contextual data with each log message. In a web application, such data could be the request ID, HTTP status code, resource ID, and more.

With Structlog, you can add the contextual data at log point using key-value pairs:

app.py
import structlog

structlog.configure(
    processors=[
        structlog.processors.add_log_level,
        structlog.processors.TimeStamper(fmt="iso"),
        structlog.processors.JSONRenderer(),
    ]
)
logger = structlog.get_logger()
logger.info("Comment created", name="John Doe", post_id=2, comment_id="1")
logger.info("Comment created", name="Bob Foster", post_id=2, comment_id="2")
logger.info("Comment created", name="Jane Miles", post_id=2, comment_id="3")

All the key-value pairs passed to the info() method calls are attached to the log message as shown in the output:

Output
{"name": "John Doe", "post_id": 2, "comment_id": "1", "event": "Comment created", "level": "info", "timestamp": "2023-07-31T05:44:01.894040Z"}
{"name": "Bob Foster", "post_id": 2, "comment_id": "2", "event": "Comment created", "level": "info", "timestamp": "2023-07-31T05:44:01.894133Z"}
{"name": "Jane Miles", "post_id": 2, "comment_id": "3", "event": "Comment created", "level": "info", "timestamp": "2023-07-31T05:44:01.894153Z"}

Notice how post_id is repeated at each info() call. To avoid this repetition, you can bind key-value pairs to the logger via its bind() method:

app.py
. . .
logger = structlog.get_logger()
post_logger = logger.bind(post_id=2)

post_logger.info("Comment created", name="John Doe", comment_id="1")
post_logger.info("Comment created", name="Bob Foster", comment_id="2")
post_logger.info("Comment created", name="Jane Miles", comment_id="3")

The output is similar to the previous one, but the log methods have no repetition:

Output
{"post_id": 2, "name": "John Doe", "comment_id": "1", "event": "Comment created", "level": "info", "timestamp": "2023-07-31T05:48:22.305666Z"}
{"post_id": 2, "name": "Bob Foster", "comment_id": "2", "event": "Comment created", "level": "info", "timestamp": "2023-07-31T05:48:22.305736Z"}
{"post_id": 2, "name": "Jane Miles", "comment_id": "3", "event": "Comment created", "level": "info", "timestamp": "2023-07-31T05:48:22.305758Z"}

In a web application, if you want information like the request ID in all logs, you only need to invoke the bind() method in the middleware.
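As a rough illustration, here is a minimal, hypothetical Django-style middleware sketch that attaches a request ID to everything logged while a request is handled. It uses Structlog's contextvars helpers rather than logger.bind(), which requires the structlog.contextvars.merge_contextvars processor in your chain (it appears in the next example):

import uuid

import structlog


class RequestIDMiddleware:
    # Hypothetical middleware sketch; the class and field names are illustrative.
    def __init__(self, get_response):
        self.get_response = get_response

    def __call__(self, request):
        # Clear context left over from a previous request, then bind a fresh
        # request ID that every log entry in this request will carry.
        structlog.contextvars.clear_contextvars()
        structlog.contextvars.bind_contextvars(request_id=str(uuid.uuid4()))
        return self.get_response(request)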

Log filtering with Structlog

Log filtering in Structlog is achieved through processors. You can filter an event based on any property in the event dictionary and raise the structlog.DropEvent exception to drop the event as demonstrated in this example:

app.py
import structlog


def drop_messages(_, __, event_dict):
    if event_dict.get("route") == "login":
        raise structlog.DropEvent
    return event_dict


structlog.configure(
    processors=[
        structlog.contextvars.merge_contextvars,
        structlog.processors.add_log_level,
        structlog.processors.TimeStamper(fmt="iso"),
        drop_messages,
        structlog.processors.JSONRenderer(),
    ]
)
logger = structlog.get_logger()
logger.info("User created", name="John Doe", route="login")
logger.info("Post Created", title="My first post", route="post")

The drop_messages() function is a custom processor that checks if the route property value on the event dictionary matches login. If the condition evaluates to true, the event is dropped and the log message will not be logged:

Output
{"title": "My first post", "route": "post", "event": "Post Created", "level": "info", "timestamp": "2023-07-31T05:57:11.559779Z"}

Structlog also allows you to filter events based on call site parameters like filename, function name, thread, and more. You only need to add the CallsiteParameterAdder() and specify the parameters you'd like to use for your filtering.

Let's look at the following example:

app.py
import structlog


structlog.configure(
    processors=[
        structlog.processors.add_log_level,
        structlog.processors.TimeStamper(fmt="iso"),
        structlog.processors.JSONRenderer(),
    ]
)
logger = structlog.get_logger()

def read_files():
    logger.info("File opened successfully", name="file.txt")


def delete_files():
    logger.warning("File deleted", name="app.log")

read_files()
delete_files()
Output
{"name": "file.txt", "event": "File opened successfully", "level": "info", "timestamp": "2023-07-31T05:58:54.959635Z"}
{"name": "app.log", "event": "File deleted", "level": "warning", "timestamp": "2023-07-31T05:58:54.959702Z"}

The read_files() and delete_files() functions log messages in the console. If you want to see log entries from only the read_files() function, add the highlighted lines to drop log messages in the delete_files() function:

app.py
import structlog

def filter_function(_, __, event_dict):
    if event_dict.get("func_name") == "delete_files":
        raise structlog.DropEvent
    return event_dict


structlog.configure(
    processors=[
        structlog.processors.add_log_level,
        structlog.processors.TimeStamper(fmt="iso"),
        structlog.processors.CallsiteParameterAdder(
            [structlog.processors.CallsiteParameter.FUNC_NAME]
        ),
        filter_function,
        structlog.processors.JSONRenderer(),
    ]
)
logger = structlog.get_logger()
. . .

The filter_function() custom processor checks if the function name matches delete_files and drops the event if the condition evaluates to true. However, the event dictionary passed to each processor does not contain this information by default. Therefore, the CallsiteParameterAdder() processor must be introduced to include the func_name property, among others. Finally, the filter_function custom processor is added to the processor chain so that messages from the specified function are dropped accordingly:

Output
{"name": "file.txt", "event": "File opened successfully", "level": "info", "timestamp": "2023-07-31T06:29:06.155647Z", "func_name": "read_files"}

The log message in the delete_files() function has now been filtered out.
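Beyond filtering, CallsiteParameterAdder() can also enrich every entry with call site details. A minimal sketch that adds the filename, line number, and function name to the event dictionary (add it to the processor chain like any other processor):

structlog.processors.CallsiteParameterAdder(
    [
        structlog.processors.CallsiteParameter.FILENAME,
        structlog.processors.CallsiteParameter.LINENO,
        structlog.processors.CallsiteParameter.FUNC_NAME,
    ]
),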

Using asynchronous methods to avoid blocking

Sometimes, if your application writes a lot of logs, it can be blocked while the Structlog processor chain formats the log records. To avoid this, Structlog provides asynchronous logging methods prefixed with an a. For example, a log method like info() has an asynchronous counterpart, ainfo(). To use it, import Python's asyncio library and ensure that the logging method is awaited inside a function defined with the async keyword:

app.py
import structlog
import asyncio

structlog.configure(
    processors=[
        structlog.processors.add_log_level,
        structlog.processors.TimeStamper(fmt="iso"),
        structlog.processors.JSONRenderer(),
    ]
)
logger = structlog.get_logger()


async def check_file_type():
    await logger.awarning("Unsupported File", name="file1.t3x")


asyncio.run(check_file_type())
Output
{"name": "file1.t3x", "event": "Unsupported File", "level": "warning", "timestamp": "2023-07-31T06:32:05.764121Z"}

Behind the scenes, the application will execute concurrently with the Structlog processor chain as it formats the logs.
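As a rough illustration (the function and field names below are only examples), several asynchronous log calls can run as concurrent tasks:

import asyncio

import structlog

logger = structlog.get_logger()


async def process_order(order_id):
    # The awaited call lets the event loop run other tasks while the
    # processor chain formats this entry.
    await logger.ainfo("Order processed", order_id=order_id)


async def main():
    await asyncio.gather(*(process_order(i) for i in range(3)))


asyncio.run(main())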

Logging exceptions with Structlog

An application may detect errors during execution and throw an exception, interrupting the normal flow of the program. These exceptions provide insights into what went wrong and provide a starting point for debugging. Structlog provides an exception() method to log exceptions as seen in this example:

app.py
import structlog

structlog.configure(
    processors=[
        structlog.processors.add_log_level,
        structlog.processors.TimeStamper(fmt="iso"),
        structlog.processors.JSONRenderer(),
    ],
)
logger = structlog.get_logger()

try:
    1 / 0
except ZeroDivisionError:
    logger.exception("Cannot divide one by zero!")
Output
{"exc_info": true, "event": "Cannot divide one by zero!", "level": "error", "timestamp": "2023-07-31 06:34.16"}

When the ZeroDivisionError exception is thrown, Structlog logs the message passed to the exception() method. But it is currently missing a stack trace that can help you find the root cause of the problem. To add this info to all exceptions, use the following processor:

app.py
. . .
structlog.configure(
    processors=[
        structlog.processors.add_log_level,
        structlog.processors.TimeStamper(fmt="iso"),
        structlog.processors.dict_tracebacks,
        structlog.processors.JSONRenderer(),
    ],
)
. . .

Running the file yields a log entry like this:

Output
{"event": "Cannot divide one by zero!", "level": "error", "timestamp": "2023-07-31T07:00:31.526266Z", "exception": [{"exc_type": "ZeroDivisionError", "exc_value": "division by zero", "syntax_error": null, "is_cause": false, "frames": [{"filename": "/home/stanley/structlog_demo/app.py", "lineno": 16, "name": "<module>", "line": "", "locals": {"__name__": "__main__", "__doc__": "None", "__package__": "None", "__loader__": "<_frozen_importlib_external.SourceFileLoader object at 0xffffaa2f3410>", "__spec__": "None", "__annotations__": "{}", "__builtins__": "<module 'builtins' (built-in)>", "__file__": "/home/stanley/structlog_demo/app.py", "__cached__": "None", "structlog": "\"<module 'structlog' from '/home/stanley/structlog_demo/venv/lib/python3.11/site-\"+32", "logger": "'<BoundLoggerLazyProxy(logger=None, wrapper_class=None, processors=None, context_'+55"}}]}]}

The exception now contains helpful information that provides more context. Notice that the exception details are also serialized in the JSON format, making automatic analysis by log management systems much easier!

Logging to files

So far, you've been sending the logs to the console. Structlog also allows you to redirect the logs to a more persistent storage device like a file by using the WriteLoggerFactory() method as follows:

app.py
import structlog
from pathlib import Path

structlog.configure(
    processors=[
        structlog.processors.add_log_level,
        structlog.processors.StackInfoRenderer(),
        structlog.dev.set_exc_info,
        structlog.processors.TimeStamper(fmt="iso"),
        structlog.processors.JSONRenderer(),
    ],
    logger_factory=structlog.WriteLoggerFactory(
        file=Path("app").with_suffix(".log").open("wt")
    ),
)
logger = structlog.get_logger()
logger.info("Info message")
logger.error("Error message")

Here, you set the logger_factory to the WriteLoggerFactory() method to send the logs to a new app.log file. When you run the program, this file will be created and you will observe the following contents:

app.log
{"event": "Info message", "level": "info", "timestamp": "2023-07-31T07:04:37.994633Z"}
{"event": "Error message", "level": "error", "timestamp": "2023-07-31T07:04:37.994745Z"}

When logging to files, be sure to control file sizes through log rotation.
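Structlog does not rotate files itself, but you can route its output through the standard library's rotating handlers. Below is a minimal sketch, assuming a size-rotated app.log managed by RotatingFileHandler; the size and backup values are arbitrary examples:

import logging
from logging.handlers import RotatingFileHandler

import structlog

structlog.configure(
    processors=[
        structlog.processors.add_log_level,
        structlog.processors.TimeStamper(fmt="iso"),
        # Hand the event dictionary over to the standard library formatter below
        structlog.stdlib.ProcessorFormatter.wrap_for_formatter,
    ],
    logger_factory=structlog.stdlib.LoggerFactory(),
)

handler = RotatingFileHandler("app.log", maxBytes=1_000_000, backupCount=3)
handler.setFormatter(
    structlog.stdlib.ProcessorFormatter(processor=structlog.processors.JSONRenderer())
)

root_logger = logging.getLogger()
root_logger.addHandler(handler)
root_logger.setLevel(logging.INFO)

logger = structlog.get_logger()
logger.info("This entry goes to a size-rotated app.log file")

This standard library integration is the same approach the Django configuration in the next section builds on.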

Using Structlog in a Django application

Now that you are familiar with how Structlog works, you will implement a logging system in a Django World Clock application. This application lets you search for any location and returns the time for the location.

For a smooth integration, we will use the django-structlog package. It integrates Django with Structlog and comes with middleware that adds context data such as the user agent, IP address, and a request ID.

First, deactivate the current project's virtual environment:

 
deactivate

Next, move outside the project directory:

 
cd ..

Clone the repository to your local machine:

 
git clone https://github.com/betterstack-community/django-world-clock.git

Then move into the newly created directory:

 
cd django-world-clock

Create a virtual environment, then activate it:

 
python3 -m venv venv
 
source venv/bin/activate

Move into the djangoWorldClock directory:

 
cd djangoWorldClock

Install all the dependencies:

 
pip install -r requirements.txt

Migrate the database as follows:

 
python manage.py migrate

Then run the server:

 
python manage.py runserver
Output
Watching for file changes with StatReloader
Performing system checks. . .

System check identified no issues (0 silenced).
July 17, 2023 - 18:13:19
Django version 4.1.1, using settings 'djangoWorldClock.settings'
Starting development server at http://127.0.0.1:8000/
Quit the server with CONTROL-C.

Next, visit http://127.0.0.1:8000/ in the web browser of your choice to see a page similar to this:

homepage.png

When you enter a valid location, you will see the date for the city:

location_date.png

When you search for a blank location, you will see the following error:

location_notfound.png

Installing and configuring django-structlog

With the project up and running, you will now install django-structlog and set up the middleware for logging.

In the terminal, enter the following command to install django-structlog:

 
python -m pip install django-structlog

Open djangoWorldClock/settings.py in your text editor and register the django_structlog package:

djangoWorldClock/settings.py
INSTALLED_APPS = [
    # . . .
    'django_structlog',
]

Following that, add django-structlog middleware to your project:

djangoWorldClock/settings.py
MIDDLEWARE = [
    # . . .
    'django_structlog.middlewares.RequestMiddleware',
]

Next, create a Structlog configuration at the end of the settings.py:

djangoWorldClock/settings.py
import structlog
import os

. . .
LOGGING = {
    "version": 1,
    "disable_existing_loggers": True,
    "formatters": {
        "json_formatter": {
            "()": structlog.stdlib.ProcessorFormatter,
            "processor": structlog.processors.JSONRenderer(),
        },
        "plain_console": {
            "()": structlog.stdlib.ProcessorFormatter,
            "processor": structlog.dev.ConsoleRenderer(),
        },
    },
    "handlers": {
        "console": {
            "class": "logging.StreamHandler",
            "formatter": "plain_console",
        },
        "json_file": {
            "class": "logging.handlers.WatchedFileHandler",
            "filename": "logs/json.log",
            "formatter": "json_formatter",
        },
    },
    "loggers": {
        "django_structlog": {
            "handlers": ["console", "json_file"],
            "level": "DEBUG",
        },
        "root": {
            "handlers": ["console", "json_file"],
            "level": "DEBUG",
        },
    },
}

structlog.configure(
    processors=[
        structlog.contextvars.merge_contextvars,
        structlog.stdlib.filter_by_level,
        structlog.processors.TimeStamper(fmt="iso"),
        structlog.stdlib.add_logger_name,
        structlog.stdlib.add_log_level,
        structlog.stdlib.PositionalArgumentsFormatter(),
        structlog.processors.StackInfoRenderer(),
        structlog.processors.format_exc_info,
        structlog.processors.UnicodeDecoder(),
        structlog.stdlib.ProcessorFormatter.wrap_for_formatter,
    ],
    logger_factory=structlog.stdlib.LoggerFactory(),
    cache_logger_on_first_use=True,
)
os.makedirs(os.path.join(BASE_DIR, "logs"), exist_ok=True)

The LOGGING dictionary has two formatters:

  • json_formatter: converts log messages to the JSON format.
  • plain_console: renders log messages as plain text.

Next, you define two handlers that send plain log messages to the console, and JSON log messages to the logs/json.log file.

Afterward, you configure Structlog with the configure() method and use processors that should be familiar at this point.

Finally, you invoke os.makedirs() to create the directory logs, where the log files will reside.

Save the file, and the server will automatically restart and log the following (if not, refresh http://127.0.0.1:8000/ in the browser):

Output
2023-07-31T07:33:06.713421Z [info     ] request_started                [django_structlog.middlewares.request] ip=127.0.0.1 request=GET / request_id=c142eeac-4fcc-4387-9ecb-95e1367008d6 user_agent=Mozilla/5.0 (X11; Ubuntu; Linux aarch64; rv:109.0) Gecko/20100101 Firefox/114.0 user_id=None
2023-07-31T07:33:06.713421Z [info     ] request_started                [django_structlog.middlewares.request] ip=127.0.0.1 request=GET / request_id=c142eeac-4fcc-4387-9ecb-95e1367008d6 user_agent=Mozilla/5.0 (X11; Ubuntu; Linux aarch64; rv:109.0) Gecko/20100101 Firefox/114.0 user_id=None
2023-07-31T07:33:06.716902Z [info     ] request_finished               [django_structlog.middlewares.request] code=200 ip=127.0.0.1 request=GET / request_id=c142eeac-4fcc-4387-9ecb-95e1367008d6 user_id=None
2023-07-31T07:33:06.716902Z [info     ] request_finished               [django_structlog.middlewares.request] code=200 ip=127.0.0.1 request=GET / request_id=c142eeac-4fcc-4387-9ecb-95e1367008d6 user_id=None

The application is now sending logs to the console as configured. Each log message includes the request_id, IP address, HTTP method, and user agent. django-structlog automatically added this context data for us.

In the project root directory, you will also find that the logs directory has been created with the json.log file.

The json.log file contains JSON-formatted log messages:

Output
{"request": "GET /", "user_agent": "Mozilla/5.0 (X11; Ubuntu; Linux aarch64; rv:109.0) Gecko/20100101 Firefox/114.0", "event": "request_started", "user_id": null, "ip": "127.0.0.1", "request_id": "c142eeac-4fcc-4387-9ecb-95e1367008d6", "timestamp": "2023-07-31T07:33:06.713421Z", "logger": "django_structlog.middlewares.request", "level": "info"}
{"request": "GET /", "user_agent": "Mozilla/5.0 (X11; Ubuntu; Linux aarch64; rv:109.0) Gecko/20100101 Firefox/114.0", "event": "request_started", "user_id": null, "ip": "127.0.0.1", "request_id": "c142eeac-4fcc-4387-9ecb-95e1367008d6", "timestamp": "2023-07-31T07:33:06.713421Z", "logger": "django_structlog.middlewares.request", "level": "info"}
{"code": 200, "request": "GET /", "event": "request_finished", "user_id": null, "ip": "127.0.0.1", "request_id": "c142eeac-4fcc-4387-9ecb-95e1367008d6", "timestamp": "2023-07-31T07:33:06.716902Z", "logger": "django_structlog.middlewares.request", "level": "info"}
{"code": 200, "request": "GET /", "event": "request_finished", "user_id": null, "ip": "127.0.0.1", "request_id": "c142eeac-4fcc-4387-9ecb-95e1367008d6", "timestamp": "2023-07-31T07:33:06.716902Z", "logger": "django_structlog.middlewares.request", "level": "info"}

The log messages include an IP address, which can be sensitive information. Other examples include authorization tokens and passwords. To keep such information safe, you can redact it or remove it entirely. In this tutorial, you will remove the IP address.

To achieve that, you can implement a signal receiver to override existing context data. You can also use the same approach to add new metadata to the request. Create a worldClock/signals.py file and add the following code:

worldClock/signals.py
from django.dispatch import receiver

from django_structlog.signals import bind_extra_request_metadata
import structlog


@receiver(bind_extra_request_metadata)
def remove_ip_address(request, logger, **kwargs):
    structlog.contextvars.bind_contextvars(ip=None)

The bind_contextvars() method adds or modifies context data globally in all the log messages. Here, you set ip to None so that the IP address value is removed.
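The same receiver pattern can also add new metadata. For instance, a minimal sketch that binds a hypothetical app_version field to every request's logs, using the same imports as the signals.py file above:

@receiver(bind_extra_request_metadata)
def bind_app_version(request, logger, **kwargs):
    # Hypothetical field: any value bound here appears in every log entry
    # produced while handling the current request.
    structlog.contextvars.bind_contextvars(app_version="1.0.0")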

To make sure that the signal works, add the following line to the worldClock/__init__.py file:

worldClock/__init__.py
from . import signals

With that, save the file and the server will automatically restart. Refresh http://localhost:8000/ and you will see that the IP address has been removed from the logs:

Output
2023-07-31T07:44:05.730961Z [info     ] request_started                [django_structlog.middlewares.request] ip=None request=GET / request_id=12ef8eb1-5893-4f18-9bb3-5e6e75803a2e user_agent=Mozilla/5.0 (X11; Ubuntu; Linux aarch64; rv:109.0) Gecko/20100101 Firefox/114.0 user_id=None
2023-07-31T07:44:05.730961Z [info     ] request_started                [django_structlog.middlewares.request] ip=None request=GET / request_id=12ef8eb1-5893-4f18-9bb3-5e6e75803a2e user_agent=Mozilla/5.0 (X11; Ubuntu; Linux aarch64; rv:109.0) Gecko/20100101 Firefox/114.0 user_id=None
2023-07-31T07:44:05.736637Z [info     ] request_finished               [django_structlog.middlewares.request] code=200 ip=None request=GET / request_id=12ef8eb1-5893-4f18-9bb3-5e6e75803a2e user_id=None
2023-07-31T07:44:05.736637Z [info     ] request_finished               [django_structlog.middlewares.request] code=200 ip=None request=GET / request_id=12ef8eb1-5893-4f18-9bb3-5e6e75803a2e user_id=None

Logging in Django view functions

With logging in place, you are now ready to log messages in Django views.

In every file that has views, you should import structlog and invoke structlog.get_logger(), then call logging methods as you have been doing earlier in the tutorial:

worldClock/views.py
from django.shortcuts import render, redirect
import requests
import structlog
logger = structlog.get_logger(__name__)
# Create your views here.
def home(request):
    logger.debug("homepage visited")
    return render(request, "home.html")


def search(request):
    # If the request method is not POST, redirect to the home page
    if request.method != "POST":
        logger.info(
            "redirecting %s request to %s to '/'",
            method=request.method,
            path=request.path,
        )
        return redirect("/")

    # Get the search query
    query = request.POST.get("q", "")
    searchLogger = logger.bind(query=query)
    searchLogger.info("incoming search query for %s", query, query=query)
    try:
        # Pass the search query to the Nominatim API to get a location
        location = requests.get(
            "https://nominatim.openstreetmap.org/search",
            {"q": query, "format": "json", "limit": "1"},
        ).json()
        searchLogger.bind(location=location).debug("Nominatim API response")
        # If a location is found, pass the coordinate to the Time API to get the current time
        if location:
            coordinate = [location[0]["lat"], location[0]["lon"]]
            time = requests.get(
                "https://timeapi.io/api/Time/current/coordinate",
                {"latitude": coordinate[0], "longitude": coordinate[1]},
            )
            searchLogger.bind(time=time).debug("Time API response")
            searchLogger.bind(coordinate=coordinate).debug(
                "Search query %s succeeded without errors", query
            )
            return render(
                request,
                "success.html",
                {"location": location[0], "time": time.json()},
            )
        # If a location is NOT found, return the error page
        else:
            searchLogger.info("location %s not found", query, query=query)
            return render(request, "fail.html")
    except Exception as error:
        searchLogger.exception(error)
        return render(request, "500.html")

When you save and refresh http://localhost:8000/, you will see the homepage visited log message:

Output
. . .
2023-07-31T07:51:10.689684Z [debug    ] homepage visited               [worldClock.views] ip=None request_id=3e2a647d-1fb4-4ee0-ba0e-0ff9f1cfe5fa user_id=None
. . .

When you perform a query for "New York" or any valid city name, a log message will first acknowledge the query:

Output
. . .
2023-07-31T07:53:56.035176Z [info     ] incoming search query for New York [worldClock.views] ip=None query=New York request_id=5636ac03-c871-4862-8810-669227d15e4f user_id=None
. . .

In the try block, a request is made to the API and a debug message is logged containing the API response data:

Output
. . .
2023-07-31T08:01:52.386435Z [debug    ] Nominatim API response         [worldClock.views] ip=None location=[{'place_id': 362105211, 'licence': 'Data © OpenStreetMap contributors, ODbL 1.0. https://osm.org/copyright', 'osm_type': 'relation', 'osm_id': 175905, 'boundingbox': ['40.476578', '40.91763', '-74.258843', '-73.700233'], 'lat': '40.7127281', 'lon': '-74.0060152', 'display_name': 'City of New York, New York, United States', 'class': 'boundary', 'type': 'administrative', 'importance': 1.017576611451846, 'icon': 'https://nominatim.openstreetmap.org/ui/mapicons/poi_boundary_administrative.p.20.png'}] query=New York request_id=706375d5-7660-4f52-b83a-4d05d30e533b user_id=None

2023-07-31T08:01:58.734683Z [debug    ] Time API response              [worldClock.views] ip=None query=New York request_id=706375d5-7660-4f52-b83a-4d05d30e533b time=<Response [200]> user_id=None

If everything goes smoothly, a debug message confirming success is logged:

Output
2023-07-31T08:01:58.734998Z [debug    ] Search query New York succeeded without errors [worldClock.views] coordinate=['40.7127281', '-74.0060152'] ip=None query=New York request_id=706375d5-7660-4f52-b83a-4d05d30e533b user_id=None

When a user enters an empty location, the following messages will be logged:

Output
. . .
2023-07-31T08:11:35.377341Z [info     ] incoming search query for      [worldClock.views] ip=None query= request_id=5b1a3a0f-7066-4e6c-8567-4f1464bce294 user_id=None
2023-07-31T08:11:37.901924Z [debug    ] Nominatim API response         [worldClock.views] ip=None location=[] query= request_id=5b1a3a0f-7066-4e6c-8567-4f1464bce294 user_id=None
2023-07-31T08:11:37.902142Z [info     ] location  not found            [worldClock.views] ip=None query= request_id=5b1a3a0f-7066-4e6c-8567-4f1464bce294 user_id=None
. . .

Final thoughts

In this article, we covered a broad range of Structlog features and how to customize them. Armed with this knowledge, you should now be able to harness the power of Structlog effectively in your projects. To expand on your learning, be sure to read the Structlog documentation.

Thanks for reading, and happy logging!

