Python Logging with Loguru

A Complete Guide to Logging in Python with Loguru

Better Stack Team
Updated on March 16, 2023

Loguru is the most popular third-party logging framework for Python on GitHub, with about 13k stars at the time of writing. It aims to ease the process of setting up a logging system in your project and provide a simpler alternative to the default Python logging module, which is sometimes criticized for having a convoluted configuration setup.

Loguru is much easier to set up than the standard logging module, and it has many useful features that will help you collect as much information from your application as needed. This guide will describe the library and its features in detail, and give you an idea of how to integrate it into a typical web application setup. It will also provide some pointers on how to seamlessly migrate from the standard logging module to Loguru.

Logtail dashboard

🔭 Want to centralize and monitor your Python logs?

Head over to Logtail and start ingesting your logs in 5 minutes.


Before proceeding with this tutorial, ensure that you have a recent version of Python installed on your machine. To best understand the concepts discussed in this tutorial, create a new Python project so that you may test the code snippets and examples presented in this article.

We recommend that you use Python's virtual environment feature for your project, so that changes made to your project do not affect the other Python projects on your machine.

python3 -m venv loguru-demo

Afterward, change into the loguru-demo directory as follows:

cd loguru-demo

Getting started with Loguru

Before you can start logging with Loguru, you must install the loguru package using the pip command:

pip install loguru
Collecting loguru
 Downloading loguru-0.6.0-py3-none-any.whl (58 kB)
 |████████████████████████████████| 58 kB 1.2 MB/s
Installing collected packages: loguru
Successfully installed loguru-0.6.0

Next, create an app.py file in your project directory, as this is where we will demonstrate the various features of Loguru.

from loguru import logger

logger.debug("Happy logging with Loguru!")

The most basic way to use Loguru is by importing the logger object from the loguru package. This logger is pre-configured with a handler that logs to the standard error by default. The debug() method is subsequently used to log a message at the DEBUG level. Save the file and run the script to see it in action:

2022-08-10 11:16:59.511 | DEBUG | __main__:<module>:3 - Happy logging with Loguru!

The output contains the following details:

  • 2022-08-10 11:16:59.511: the timestamp.
  • DEBUG: the log level, which is used to describe the severity level of the log message.
  • __main__:<module>:3: the file location, scope and line number. In this example, the file location is __main__ because you executed the file directly. The scope is <module> because the logger is not located inside a class or a function.
  • Happy logging with Loguru!: the log message.

Exploring log levels in Loguru

Log levels are a widely used concept in logging. They specify the severity of a log record so that messages can be filtered or prioritized based on how urgent they are. Loguru offers seven unique log levels, and each one is associated with an integer value as shown in the list below:

  • TRACE (5): used to record fine-grained information about the program's execution path for diagnostic purposes.
  • DEBUG (10): used by developers to record messages for debugging purposes.
  • INFO (20): used to record informational messages that describe the normal operation of the program.
  • SUCCESS (25): similar to INFO but used to indicate the success of an operation.
  • WARNING (30): used to indicate an unusual event that may require further investigation.
  • ERROR (40): used to record error conditions that affected a specific operation.
  • CRITICAL (50): used to record error conditions that prevent a core function from working.

Each log level listed above has a corresponding method of the same name, which enables you to send log records with that log level:
. . .

logger.trace("A trace message.")
logger.debug("A debug message.")
logger.info("An info message.")
logger.success("A success message.")
logger.warning("A warning message.")
logger.error("An error message.")
logger.critical("A critical message.")

2022-08-10 11:58:33.224 | DEBUG | __main__:<module>:12 - A debug message.
2022-08-10 11:58:33.224 | INFO | __main__:<module>:13 - An info message.
2022-08-10 11:58:33.225 | SUCCESS | __main__:<module>:14 - A success message.
2022-08-10 11:58:33.226 | WARNING | __main__:<module>:15 - A warning message.
2022-08-10 11:58:33.226 | ERROR | __main__:<module>:16 - An error message.
2022-08-10 11:58:33.227 | CRITICAL | __main__:<module>:17 - A critical message.

These messages are printed to the console in different colors based on their log level.

Loguru log levels

Notice that the TRACE level message is not included in the output above. This is because Loguru defaults to using DEBUG as its minimum level, which causes any logs with a severity lower than DEBUG to be ignored.

If you want to change the default level, you may use the level argument of the add() method as shown below:

import sys
from loguru import logger

logger.remove(0)
logger.add(sys.stderr, level="INFO")
. . .

The remove() method is called first to remove the configuration for the default handler (whose ID is 0). Subsequently, the add() method adds a new handler to the logger. This handler logs to the standard error and only records logs with INFO severity or greater.

When you execute the program once more, you'll notice that the DEBUG message is also omitted since it is less severe than INFO:

2022-09-13 12:53:36.123 | INFO     | __main__:<module>:9 - An info message.
2022-09-13 12:53:36.123 | SUCCESS  | __main__:<module>:10 - A success message.
2022-09-13 12:53:36.123 | WARNING  | __main__:<module>:11 - A warning message.
2022-09-13 12:53:36.123 | ERROR    | __main__:<module>:12 - An error message.
2022-09-13 12:53:36.123 | CRITICAL | __main__:<module>:13 - A critical message.

Creating custom levels

Loguru also provides the ability to create custom levels using the level() method on the logger, which comes in handy if the defaults don't fit your logging strategy. Here's an example that adds a FATAL level to the logger:
import sys
from loguru import logger

logger.level("FATAL", no=60, color="<red>", icon="!!!")
logger.log("FATAL", "A user updated some information.")

The level() method takes the following four parameters:

  • name: the name of the log level.
  • no: the corresponding severity value (must be an integer).
  • color: the color markup.
  • icon: the icon of the level.

When choosing a severity value for your custom log level, you should consider how important this level is to your project. For example, the FATAL level above is given an integer value of 60, making it the most severe.

Since custom log levels do not come with level methods (like info() or debug()), you must use the generic log() method on the logger, specifying the log level name followed by the message to be logged. This yields the following output:

2022-08-26 11:34:13.971 | FATAL   | __main__:<module>:42 - A user updated some information.

Customizing Loguru

When using Python's logging module, you'll need to create custom handlers, formatters, and filters to customize a logger's formatting and output. Loguru simplifies this process by only using its add() method, which takes the following parameters:

  • sink: specifies a destination for each record produced by the logger. By default, it is set to sys.stderr.
  • level: specifies the minimum log level for the logger.
  • format: useful for defining a custom format for your logs.
  • filter: used to determine whether a record should be logged or not.
  • colorize: takes a boolean value and determines whether or not terminal colorization should be enabled.
  • serialize: causes the log record to be presented in JSON format if set to True.
  • backtrace: determines whether the exception trace should extend beyond the point where the error is captured, making it easier to debug.
  • diagnose: determines whether the variable values should be displayed in the exception trace. You should set it to False in the production environment to avoid leaking sensitive information.
  • enqueue: enabling this option places the log records in a queue to avoid conflicts when multiple processes are logging to the same destination.
  • catch: if an unexpected error happens when logging to the specified sink, you can catch that error by setting this option to True. The error will be printed to the standard error.

We will use many of these options to customize the logger as we go further in this guide.

Filtering log records

In an earlier section, you used the level parameter on the add() method to change the minimum log level on the logger, but this only drops logs that fall below the specified severity. If you need a more complex criterion to decide whether or not a log record should be accepted, you can use the filter option as shown below:

import sys
from loguru import logger

def level_filter(level):
    def is_level(record):
        return record["level"].name == level

    return is_level

logger.remove(0)
logger.add(sys.stderr, filter=level_filter(level="WARNING"))
. . .

In this scenario, the filter option is assigned to a function that accepts a record variable containing details about the log record. This function returns True if the record's level is the same as the level parameter in the enclosing scope so that it is sent to the sink. With this configuration in place, only WARNING level messages will be recorded by the logger.

2022-09-30 12:17:00.548 | WARNING  | __main__:<module>:15 - A warning message.

Formatting log records

Reformatting the log records generated by Loguru can be done through the format option in the add() method. Each log record in Loguru is a Python dictionary, which contains data such as its timestamp, log level, and more. You can use the formatting directives provided by Loguru to include or rearrange each piece of information as follows:
import sys
from loguru import logger

logger.add(sys.stderr, format="{time} | {level} | {message}")
logger.debug("Happy logging with Loguru!")

The format parameter defines the custom format, which takes three directives in this example:

  • {time}: the timestamp,
  • {level}: the log level,
  • {message}: the log message.

When you execute the program above, you will observe the following output:

2022-08-10T15:01:32.154035-0400 | DEBUG | Happy logging with Loguru!

Some of these directives also support further customization. For example, the time directive can be changed to a more human-readable format through the formatting tokens below:

logger.add(sys.stderr, format="{time:MMMM D, YYYY > HH:mm:ss} | {level} | {message}")

This yields the following output:

August 9, 2022 > 15:35:01 | DEBUG | Happy logging with Loguru!

If you prefer to use UTC instead of your local time, you can add !UTC at the end of the time format:

logger.add(sys.stderr, format="{time:MMMM D, YYYY > HH:mm:ss!UTC} | {level} | {message}")
August 9, 2022 > 19:35:01 | DEBUG | Happy logging with Loguru!

Using a structured format

Loguru also supports structured logging in JSON format through its serialize option. This lets you output your logs as JSON so that machines can easily parse and analyze them, since the information in each record is provided in key/value pairs.
import sys
from loguru import logger

logger.add(sys.stderr, format="{time:MMMM D, YYYY > HH:mm:ss!UTC} | {level} | {message}", serialize=True)
logger.debug("Happy logging with Loguru!")
{"text": "August 10, 2022 > 19:38:06 | DEBUG | Happy logging with Loguru!\n", "record": {"elapsed": {"repr": "0:00:00.004000", "seconds": 0.004}, "exception": null, "extra": {}, "file": {"name": "", "path": "C:\\Users\\Eric\\Documents\\Better Stack\\loguru-demo\\"}, "function": "<module>", "level": {"icon": "🐞", "name": "DEBUG", "no": 10}, "line": 8, "message": "Happy logging with Loguru!", "module": "app", "name": "__main__", "process": {"id": 22652, "name": "MainProcess"}, "thread": {"id": 25892, "name": "MainThread"}, "time": {"repr": "2022-08-10 15:38:06.369578-04:00", "timestamp": 1660160286.369578}}}

This output contains a text property which is the original log record text (customizable using the format option), as well as the file name and path (file), the log level and its corresponding icon (level), and so on. If you don't need to include everything shown above in the log record, you can create a custom serialize() function and use it as follows:
import sys
import json
from loguru import logger

def serialize(record):
    subset = {
        "timestamp": record["time"].timestamp(),
        "message": record["message"],
        "level": record["level"].name,
    }
    return json.dumps(subset)

def patching(record):
    record["extra"]["serialized"] = serialize(record)


logger.remove(0)
logger = logger.patch(patching)
logger.add(sys.stderr, format="{extra[serialized]}")
logger.debug("Happy logging with Loguru!")

In this example, three fields (timestamp, message, and level) are selected in the serialize() function, and the patching() function stores the serialized result in the record["extra"] dictionary. Finally, patching() is passed to the patch() method so that it runs on every log record.

Here's the output to expect after running the snippet:

{"timestamp": 1663328693.765488, "message": "Happy logging with Loguru!", "level": "DEBUG"}

Adding contextual data to your logs

Besides the log message, it is often necessary to include other relevant information in the log entry so that you can use such data to filter or correlate your logs.

For example, if you are running an online shopping platform and a seller updates one of their products, you should include the seller and product ID in the log entry describing this update such that you can easily trace the seller and product activity over time.

Before you can start logging contextual data, you need to ensure that the {extra} directive is included in your custom format. This variable is a Python dictionary containing the contextual data for each log entry (if any).

logger.add(sys.stderr, format="{time:MMMM D, YYYY > HH:mm:ss} | {level} | {message} | {extra}")

You can subsequently use either bind() or contextualize() to include extra information at the point of logging.

The bind() method returns a child logger that inherits any existing contextual data from its parent and adds a custom context that is subsequently included with all the records produced by that logger.
import sys
from loguru import logger

logger.add(sys.stderr, format="{time:MMMM D, YYYY > HH:mm:ss} | {level} | {message} | {extra}")

childLogger = logger.bind(seller_id="001", product_id="123")
childLogger.info("product page opened")
childLogger.info("product updated")
childLogger.info("product page closed")

logger.info("INFO message")

September 16, 2022 > 13:04:10 | INFO | product page opened | {'seller_id': '001', 'product_id': '123'}
September 16, 2022 > 13:04:10 | INFO | product updated | {'seller_id': '001', 'product_id': '123'}
September 16, 2022 > 13:04:10 | INFO | product page closed | {'seller_id': '001', 'product_id': '123'}
September 16, 2022 > 13:06:08 | INFO | INFO message | {}

Notice that the bind() method does not affect the original logger, which is why the last entry above has an empty extra object. If you want to override the parent logger instead, assign the result of logger.bind() back to the logger variable as shown below:

logger = logger.bind(seller_id="001", product_id="123")

Another way to update a logger in place is to use the contextualize() method, which modifies its extra dictionary directly without returning a new logger. This method needs to be used with the with statement:
import sys
from loguru import logger

logger.add(sys.stderr, format="{time:MMMM D, YYYY > HH:mm:ss} | {level} | {message} | {extra}")

def log():
    logger.info("A user requested a service.")

with logger.contextualize(seller_id="001", product_id="123"):
    log()

August 12, 2022 > 11:00:52 | INFO | A user requested a service. | {'seller_id': '001', 'product_id': '123'}

Logging errors with Loguru

Errors are often the most common target for logging, so it's helpful to see what tools are provided in the library to handle this use case. You can automatically log errors as they happen inside a function:
import sys
from loguru import logger

logger.add(sys.stderr, format="{time:MMMM D, YYYY > HH:mm:ss} | {level} | {message} | {extra}")

def test(x):
    50 / x

with logger.catch():
    test(0)

In this example, the test() function divides 50 by 0, which raises a ZeroDivisionError. This error is caught and logged by the catch() method as shown below:

August 29, 2022 > 12:11:15 | ERROR | An error has been caught in function '<module>', process 'MainProcess' (70360), thread 'MainThread' (4380231040): | {}
Traceback (most recent call last):
> File "/Users/erichu/Documents/Better Stack/loguru-demo/", line 25, in <module>
    test(0)
    └ <function test at 0x10593f760>
  File "/Users/erichu/Documents/Better Stack/loguru-demo/", line 22, in test
    50/x
       └ 0
ZeroDivisionError: division by zero

The error message includes the following information:

  • The timestamp: August 29, 2022 > 12:11:15.
  • The log level: ERROR.
  • The log message: An error has been caught in function . . ..
  • The stack trace of the program leading up to the error.
  • The type of the error: ZeroDivisionError: division by zero.

You can also use catch() as a decorator instead of a with statement:

. . .
@logger.catch
def test(x):
    50 / x

test(0)

The catch() method can also take the following parameters, allowing you to customize its behavior further:

  • exception: specifies one or more exception types that should be intercepted by the catch() method.
  • level: overwrites the default level for errors (ERROR).
  • reraise: determines whether the exception should be raised again after being logged.
  • onerror: defines a callback function that will be executed when an error has been caught.
  • exclude: creates a blocklist of exception types that should not be caught and logged by the catch() method.
  • default: defines the value to be returned if an error occurred in the decorated function without being re-raised.
  • message: overrides the default error message.

Here's an example that demonstrates how to change the level and message of a logged error:
. . .
@logger.catch(level="CRITICAL", message="An error caught in test()")
def test(x):
    50 / x

test(0)

When an error occurs in the test() function, it will now be logged at the CRITICAL level with a custom message:

September 20, 2022 > 13:08:01 | CRITICAL | An error caught in test() | {}
Traceback (most recent call last):
> File "/Users/erichu/Documents/Better Stack/loguru/", line 12, in <module>
    test(0)
    └ <function test at 0x101dd9b40>
  File "/Users/erichu/Documents/Better Stack/loguru/", line 10, in test
    50/x
       └ 0
ZeroDivisionError: division by zero

The logger.exception() method is also provided for logging exceptions at the ERROR level from within an except block:

try:
    50 / 0
except Exception as e:
    logger.exception(e)

Logging to files

Loguru's sink option allows you to choose the destination of all log records emitted through a logger. So far, we've only considered logging to the console, but you can also push log messages to a local file by changing the sink option like this:
. . .

logger.add("loguru.log")

logger.debug("A debug message.")

With this in place, the log record will be sent to a new loguru.log file in the current directory, and you can check its contents with the following command:

cat loguru.log
2022-08-11 13:16:52.573 | DEBUG | __main__:<module>:13 - A debug message.

When sink is pointing to a file, the add() method provides a few more options for customizing how the log file should be handled:

  • rotation: specifies a condition in which the current log file will be closed and a new file will be created. This condition can be an int, datetime or str, and str is recommended since it is more human-readable.
  • retention: specifies how long each log file will be retained before it is deleted from the filesystem.
  • compression: the log file will be converted to the specified compression format if this option is set.
  • delay: if set to True, the creation of a new log file will be delayed until the first log message is pushed.
  • mode, buffering, encoding: These parameters will be passed to Python's open() function which determines how Python will open the log files.

When rotation has an int value, it corresponds to the maximum number of bytes the current file is allowed to hold before a new one is created. When it has a datetime.timedelta value, it indicates the frequency of each rotation, while datetime.time specifies the time of the day each rotation should occur. And finally, rotation can also take a str value, which is the human-friendly variant of the aforementioned types.
. . .
logger.add("loguru.log", rotation="5 seconds")

logger.debug("A debug message.")

In this example, log rotation will occur every five seconds (for demonstration purposes), but you should set a longer duration in a real-world application. If you run the snippet above, a loguru.log file will be generated, and written to until the period specified has elapsed. When that happens, the file is renamed to loguru.<timestamp>.log and a new loguru.log file is created afterward.

You can also set up the logger to clean up old files like this:

logger.add("loguru.log", rotation="5 seconds", retention="1 minute")
logger.add("loguru.log", rotation="5 seconds", retention=3)

In this first snippet, files older than one minute will be removed automatically. In the second one, only the three newest files will be retained. If you're deploying your application to Linux, we recommend that you utilize logrotate for log file rotation so that your application does not have to directly address such concerns.

Using Loguru in a Django application

In this section, you will implement a logging system through Loguru for a demo world clock application where users can search for a location and get its current time. (See the logging branch for the final implementation).

Start by cloning the project repository to your computer:

git clone

Next, change into the django-world-clock directory and also into the djangoWorldClock subdirectory:

cd django-world-clock/djangoWorldClock

You can observe the structure of the project by using the tree command:

├── djangoWorldClock
│   ├──
│   ├──
│   ├──
│   ├──
│   └──
├── requirements.txt
├── templates
│   ├── fail.html
│   ├── home.html
│   ├── layout.html
│   └── success.html
└── worldClock
    ├── migrations
    │   └──

Go ahead and install the necessary dependencies by executing the command below:

pip install -r requirements.txt

Afterward, run the migrations for the project:

python manage.py migrate

Once the migrations have been carried out, execute the command below to launch the application server:

python manage.py runserver
Performing system checks...

System check identified no issues (0 silenced).
September 30, 2022 - 07:48:19
Django version 4.1.1, using settings 'djangoWorldClock.settings'
Starting development server at http://127.0.0.1:8000/
Quit the server with CONTROL-C.

You can access this world clock app by opening your browser and heading to http://localhost:8000.

You should land on the following page:

The Home Page

When you search for a city and the query is successful, you will observe the following result:

Search Successful

If the location is not found, an error message will be displayed:

Search Not Successful

In the terminal, you'll notice that some log messages are output for each request even though we haven't configured any logging for the project:

[29/Sep/2022 11:38:35] "GET / HTTP/1.1" 200 1068
Not Found: /favicon.ico
[29/Sep/2022 11:38:36] "GET /favicon.ico HTTP/1.1" 404 2327
[29/Sep/2022 11:39:54] "POST /search/ HTTP/1.1" 200 1172

This is due to Django's default logging setup, which uses the standard library logging module. Go ahead and disable it by adding the following line to the djangoWorldClock/settings.py file:

LOGGING_CONFIG = None

The server will restart after saving the file, and you won't observe the default request logs anymore. In a subsequent section, you'll create a middleware function that uses Loguru to record incoming request information.

Adding Loguru to your Django project

Now that we've set up the project, let's go ahead and implement a basic logging strategy using the features described earlier in this tutorial. Since loguru is already installed (per requirements.txt), you can go right ahead and use it in your project.

Import it into the worldClock/views.py file and configure it as follows:

from django.shortcuts import render, redirect
import requests
import sys
from loguru import logger
logger.add(sys.stderr, format="{time:MMMM D, YYYY > HH:mm:ss!UTC} | {level} | {message} | {extra}")
. . .

The above configuration ensures that each log record is written to the standard error in the specified format.

Creating a request logging middleware

Once you've added Loguru to your application, create a middleware function that logs each HTTP request as follows:

code worldClock/middleware.py
from loguru import logger
import uuid
import time

def logging_middleware(get_response):
    def middleware(request):
        # Create a request ID
        request_id = str(uuid.uuid4())

        # Add context to all loggers in all views
        with logger.contextualize(request_id=request_id):

            request.start_time = time.time()

            response = get_response(request)

            elapsed = time.time() - request.start_time

            # Log the request details after the response is received
            logger.info(
                "incoming '{method}' request to '{path}'",
                method=request.method,
                path=request.path,
                status_code=response.status_code,
                response_size=len(response.content),
                elapsed=elapsed,
            )

            response["X-Request-ID"] = request_id

            return response

    return middleware

The above snippet defines a middleware function that creates a request ID and adds it to the logger's context, making it accessible in all the logging calls defined in the request handlers. Once the response is received, a corresponding log entry for the request is printed to the console.

Before the middleware function can take effect, you need to activate it by editing your djangoWorldClock/settings.py file as follows:

code djangoWorldClock\settings.py
. . .
MIDDLEWARE = [
    . . .
    "worldClock.middleware.logging_middleware",
]
. . .

Once you save the file, the server should restart, and you will observe the following request log when you load the application's homepage in the browser:

September 30, 2022 > 03:58:44 | INFO | incoming 'GET' request to '/' | {'request_id': 'd7b98454-80da-4f23-aa19-300818f7f900', 'path': '/', 'method': 'GET', 'status_code': 200, 'response_size': 1068, 'elapsed': 0.0028009414672851562}

As you can see, the request_id is included in the log entry along with other relevant details about the request, thus effectively replacing the default request logging facility in Django. You are now set up to track every single request that is made to your application, and you can easily see if the request succeeded or not and how long it took to complete.

In the next section, we will discuss logging in the view functions and you'll see more about how logging can help you diagnose the various happenings in your application effectively.

Logging in Django view functions

Head back to the worldClock/views.py file and edit the home() and search() views as follows:

. . .

def home(request):
    logger.trace("homepage visited")
    return render(request, "home.html")


def search(request):
    # If the request method is not POST, redirect to the home page
    if request.method != "POST":
        logger.info(
            "redirecting '{method}' request to '{path}' to '/'",
            method=request.method,
            path=request.path,
        )
        return redirect("/")

    query = request.POST.get("q", "").strip()
    if not query:
        logger.info("search query is empty. Redirecting to /")
        return redirect("/")

    searchLogger = logger.bind(query=query)
    searchLogger.info("incoming search query for '{query}'", query=query)

    try:
        # Pass the search query to the Nominatim API to get a location
        location = requests.get(
            "https://nominatim.openstreetmap.org/search",
            {"q": query, "format": "json", "limit": "1"},
        ).json()
        searchLogger.bind(location=location).debug("Nominatim API response")

        # If a location is found, pass its coordinates to the Time API
        # to get the current time
        if location:
            coordinate = [location[0]["lat"], location[0]["lon"]]
            time = requests.get(
                "https://timeapi.io/api/Time/current/coordinate",
                {"latitude": coordinate[0], "longitude": coordinate[1]},
            )
            searchLogger.bind(time=time).debug("Time API response")
            searchLogger.info(
                "Search query '{query}' succeeded without errors", query=query
            )
            return render(
                request, "success.html", {"location": location[0], "time": time.json()}
            )
        # If a location is NOT found, return the error page
        else:
            searchLogger.info("location '{query}' not found", query=query)
            return render(request, "fail.html")
    except Exception as error:
        searchLogger.exception(error)
        return render(request, "500.html")

In the home() view, a TRACE-level log is added so that you can confirm the function was called when tracing through your application. This log entry will not be produced unless the level option is set to TRACE in the logger.add() function:

logger.add(
    sys.stderr,
    level="TRACE",
    format="{time:MMMM D, YYYY > HH:mm:ss!UTC} | {level} | {message} | {extra}",
)

Notice how the request_id is present in both entries below. Its presence lets you easily correlate your logs and trace the execution path of a request in your application.

September 30, 2022 > 04:36:29 | TRACE | visit to homepage | {'request_id': 'da2035e3-af92-4735-82ca-f21dde3e5cd0'}
September 30, 2022 > 04:36:29 | INFO | incoming 'HEAD' request to '/' | {'request_id': 'da2035e3-af92-4735-82ca-f21dde3e5cd0', 'path': '/', 'method': 'HEAD', 'status_code': 200, 'response_size': 1068, 'elapsed': 0.0034792423248291016}

In the search() function, you added multiple logging calls. The first one logs when a non-POST request is redirected to the homepage:

curl --head http://localhost:8000
September 30, 2022 > 04:36:26 | INFO | redirecting 'HEAD' request to '/search/' to '/' | {'request_id': '7f296d3d-761c-4d4c-bc98-994240ab3cd8', 'method': 'HEAD', 'path': '/search/'}
September 30, 2022 > 04:36:26 | INFO | incoming 'HEAD' request to '/search/' | {'request_id': '7f296d3d-761c-4d4c-bc98-994240ab3cd8', 'path': '/search/', 'method': 'HEAD', 'status_code': 302, 'response_size': 0, 'elapsed': 0.0019669532775878906}

If the search query is empty, a redirect also occurs:

September 30, 2022 > 04:49:11 | INFO | search query is empty. Redirecting to / | {'request_id': 'f7470915-0d09-4f1d-91b9-450148ef1a22'}
September 30, 2022 > 04:49:11 | INFO | incoming 'POST' request to '/search/' | {'request_id': 'f7470915-0d09-4f1d-91b9-450148ef1a22', 'path': '/search/', 'method': 'POST', 'status_code': 302, 'response_size': 0, 'elapsed': 0.002503633499145508}

Once we have a valid search query, it is bound to a new searchLogger so that it is included in each log entry created by the logger. We can see that on the next line where the search query is acknowledged:

September 30, 2022 > 06:11:06 | INFO | incoming search query for 'london' | {'request_id': '0bfbead4-3fca-46a2-b167-460c461b50c5', 'query': 'london'}

Within the try block, the results from the two API requests to the Nominatim and Time API are logged at the DEBUG level as they are useful for debugging:

September 30, 2022 > 06:11:07 | DEBUG | Nominatim API response | {'request_id': '0bfbead4-3fca-46a2-b167-460c461b50c5', 'query': 'london', 'location': [{'place_id': 344385499, 'licence': 'Data © OpenStreetMap contributors, ODbL 1.0.', 'osm_type': 'relation', 'osm_id': 65606, 'boundingbox': ['51.2867601', '51.6918741', '-0.5103751', '0.3340155'], 'lat': '51.5073219', 'lon': '-0.1276474', 'display_name': 'London, Greater London, England, United Kingdom', 'class': 'place', 'type': 'city', 'importance': 0.9307827616237295, 'icon': ''}]}
September 30, 2022 > 06:11:13 | DEBUG | Time API response | {'request_id': '0bfbead4-3fca-46a2-b167-460c461b50c5', 'query': 'london', 'time': <Response [200]>}

If the location entered by the user isn't valid, an error page will be displayed in the browser, and the following message is logged at the INFO level:

September 30, 2022 > 06:16:57 | INFO | location 'nonexistentcity' not found | {'request_id': '0f86d175-744e-4710-8332-457c24f78300', 'query': 'nonexistentcity'}

Finally, any other exception will be logged at the ERROR level using the exception() helper function:

September 30, 2022 > 06:18:48 | ERROR | list index out of range | {'request_id': 'b582b6f2-2917-4f76-8012-3197b235a222', 'query': 'nonexistent'}
Traceback (most recent call last):

  File "/usr/lib64/python3.10/", line 973, in _bootstrap

. . .

You may notice that the traceback included in the exception message is huge. Therefore, we recommend disabling the backtrace option on the logger so that the traceback is not extended beyond the catching point. You should also set the diagnose option to False so that variable values are not displayed in the traceback.

logger.add(
    sys.stderr,
    format="{time:MMMM D, YYYY > HH:mm:ss!UTC} | {level} | {message} | {extra}",
    level="TRACE",
    backtrace=False,
    diagnose=False,
)

With these options set, the traceback no longer extends beyond the catching point:
September 30, 2022 > 06:27:00 | ERROR | list index out of range | {'request_id': 'c3a053fc-0156-465c-96cd-6c8074ac527c', 'query': 'nonexistent'}
Traceback (most recent call last):

  File "/home/ayo/dev/betterstack/community/demo/django-world-clock/djangoWorldClock/worldClock/", line 58, in search
    coordinate = [location[0]["lat"], location[0]["lon"]]
IndexError: list index out of range

Centralizing and monitoring your logs

Logging to the standard error stream works well in development environments, but a more permanent solution is needed in production. You can log to rotating files as described earlier, but that means you have to log into each server where your application is deployed just to view the logs. The most practical solution is to centralize all your logs so they can be viewed, analyzed, and monitored in one place.

There are several strategies for centralizing and monitoring logs, but the simplest one usually involves sending your logs to a hosted cloud log management service. Once you configure your application or its environment to send logs to the service, you'll be able to monitor new entries in real time and set up alerting so that you don't miss notable events. In this section, you will send the logs produced by the World Clock application to Logtail.

Before you can ingest logs into Logtail, you need to create a free account; once you are logged in, click the Sources link on the left.

Logtail source page

On the Sources page, click the Connect source button.

Logtail source page

Next, give your source a name, and remember to choose Python as your platform.

Logtail Create Source

Once the source is created, copy the Source token field to your clipboard.

Logtail copy source token

You don't need to follow the installation instructions on the page, as they assume that you are using the logging module from the standard library. The logtail-python package should already be installed as part of requirements.txt, but in case you did not follow this tutorial from the start, you can install it with the following command:

pip install logtail-python
Collecting logtail-python
 Downloading logtail_python-0.1.3-py2.py3-none-any.whl (8.0 kB)
. . .
Installing collected packages: msgpack, urllib3, idna, charset-normalizer, certifi, requests, logtail-python
Successfully installed certifi-2022.6.15 charset-normalizer-2.1.0 idna-3.3 logtail-python-0.1.3 msgpack-1.0.4 requests-2.28.1 urllib3-1.26.11

Head back to the file where your logger is configured, and add the Logtail handler to the logger:

from django.shortcuts import render, redirect
import requests
import sys
from loguru import logger
from logtail import LogtailHandler

logtail_handler = LogtailHandler(source_token="<your logtail source token>")

logger.remove(0)
logger.add(
    sys.stderr,
    format="{time:MMMM D, YYYY > HH:mm:ss!UTC} | {level} | {message} | {extra}",
    level="TRACE",
    backtrace=False,
    diagnose=False,
)
logger.add(
    logtail_handler,
    format="{message}",
    level="INFO",
    backtrace=False,
    diagnose=False,
)
. . .

Notice how Loguru makes it easy to log to a different destination with different settings. With this snippet in place, you will start seeing your logs on Logtail's Live tail page as follows:

Live tail

Migrating from logging to Loguru

Before wrapping up this tutorial, let's discuss a few things you're likely to encounter when attempting a migration from the standard library logging module to Loguru.

First, when you are using the logging module, it is common to use the getLogger() function to initialize a logger. This is not necessary with Loguru as you only need to import the logger, and you are good to go. Each time this imported logger is used, it will automatically contain the contextual __name__ value.

# Using Python default logger
logger = logging.getLogger('my_app')

# Using Loguru. This is sufficient, the logger is ready to use.
from loguru import logger

When using the logging module, you need to set up a Handler, Filter, Formatter, and other related objects to configure the logger. In Loguru, you only need to use the add() method so you can replace:

# Using Python default logger

formatter = logging.Formatter(. . .)
handler = logging.Handler(. . .)


filter = logging.Filter(. . .)

. . .


with this:

# Using loguru
logger.add(sink=. . ., level='. . .', format='. . .', filter=. . .)

The format setting requires some special attention here. If you are using %-style parameters with the Formatter object, you need to replace them with the {} style. For instance, %(username)s becomes {username}, and logger.debug("User: %s", username) becomes logger.debug("User: {}", username).

Loguru is also fully compatible with existing Handler objects created through the logging module, so it is possible to add them directly. This could save you some time if you have a complicated setup that you don't want to rewrite from scratch.

from loguru import logger
import logging

handler = logging.FileHandler(filename='my_app.log')
logger.add(handler)

See the Loguru migration guide for more details on switching from the standard logging module.

Final thoughts

In this tutorial, we discussed the Loguru package, a fully featured alternative to the default logging module that aims to ease the process of logging in Python. We also demonstrated a practical example of how to use it in a Django application, and how to centralize all your logs with the help of Logtail. To learn more about Loguru, do check out its GitHub repository and official documentation.

Thanks for reading, and happy logging!


This work is licensed under a Creative Commons Attribution-NonCommercial-ShareAlike 4.0 International License.