Side note: Get a Python logs dashboard
Save hours of sifting through Python logs. Centralize with Better Stack and start visualizing your log data in minutes.
While Python offers a robust and feature-rich logging solution within its standard library, the third-party logging ecosystem presents a compelling array of alternatives. Depending on your requirements, these external libraries might be more suitable for your logging needs.
Therefore, this article will consider Python's top six logging solutions for
tracking application and library behavior. We will begin with a discussion of
the standard logging module, then examine five other logging
frameworks created by the Python community.
Let's get started!
Python distinguishes itself from most programming languages by including a
fully-featured logging framework in its standard library. This logging solution
effectively caters to the needs of both library and application developers, and
it incorporates the following severity levels: DEBUG, INFO, WARNING,
ERROR, and CRITICAL. Thanks to the default logger, you can immediately begin
logging without any preliminary setup:
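```python
# A minimal sketch; the messages themselves are placeholders
import logging

logging.debug("A debug message")
logging.info("An info message")
logging.warning("A warning message")
logging.error("An error message")
logging.critical("A critical message")
```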
This default (or root) logger operates at the WARNING level, meaning that only
logging calls whose severity equals or exceeds WARNING will produce an output:
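```
WARNING:root:A warning message
ERROR:root:An error message
CRITICAL:root:A critical message
```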
This configuration ensures that only potentially important messages are shown,
reducing the noise in the log output. However, you can customize the log level
and fine-tune the logging behavior as needed. The recommended way to use the
logging module involves creating a custom logger through the getLogger()
function:
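```python
import logging

# Naming the logger after the current module is the common convention
logger = logging.getLogger(__name__)
```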
Once you have a custom logger, you can customize its output through the
Handler,
Formatter,
and Filter
classes provided by the logging module.
Handlers decide the output destination and can be customized based on the log level. Multiple handlers can also be added to a logger to simultaneously send log messages to different destinations.
Formatters determine the format of the records produced by a logger. However,
there are no predefined formats like JSON, Logfmt, etc. You have to combine
the available
log record attributes
to build your own formats. The default format for the root logger is
%(levelname)s:%(name)s:%(message)s. However, custom loggers default to just
%(message)s.
Filters are used by handler and logger objects to filter log records. They provide greater control than log levels over which log records should be processed and ignored, and they also allow you to enhance or modify the records before the logs are sent to their final destination. For example, you can create a custom filter that redacts sensitive data in your logs.
Here's an example that logs to the console and a file using a custom logger:
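```python
# A sketch of such a setup; the format string and log messages are
# illustrative, but the errHandler name and error.log file match the
# description that follows
import logging
import sys

logger = logging.getLogger(__name__)
logger.setLevel(logging.INFO)

# Console handler: receives all records at INFO and above
stdoutHandler = logging.StreamHandler(stream=sys.stdout)

# File handler: receives only ERROR and above
errHandler = logging.FileHandler("error.log")
errHandler.setLevel(logging.ERROR)

fmt = logging.Formatter("%(name)s: %(asctime)s | %(levelname)s | %(message)s")
stdoutHandler.setFormatter(fmt)
errHandler.setFormatter(fmt)

logger.addHandler(stdoutHandler)
logger.addHandler(errHandler)

logger.info("Server started listening on port 8080")
logger.warning("Disk space is at 85% capacity")
logger.error("Failed to connect to the database")
```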
When you execute the above program, log messages like the following are printed to the console as expected (timestamps will differ):
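```
__main__: 2023-07-23 14:42:18,062 | INFO | Server started listening on port 8080
__main__: 2023-07-23 14:42:18,062 | WARNING | Disk space is at 85% capacity
__main__: 2023-07-23 14:42:18,063 | ERROR | Failed to connect to the database
```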
The error.log file is also created, and it should contain the ERROR log
alone since the minimum level on the errHandler was set to ERROR:
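```
__main__: 2023-07-23 14:42:18,063 | ERROR | Failed to connect to the database
```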
At the time of writing, the logging module cannot produce structured logs
unless you
implement some additional code.
Thankfully, there is an easier and better way to get structured output: the
python-json-logger library.
Once installed, you may utilize it as follows:
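```python
# Install with: pip install python-json-logger
# A sketch that builds on the handlers from the previous example,
# swapping the plain-text Formatter for a JsonFormatter
from pythonjsonlogger import jsonlogger

formatter = jsonlogger.JsonFormatter(
    "%(name)s %(asctime)s %(levelname)s %(message)s"
)
stdoutHandler.setFormatter(formatter)
errHandler.setFormatter(formatter)
```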
If you modify the previous example as shown above, you will observe output similar to the following upon execution:
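```json
{"name": "__main__", "asctime": "2023-07-23 14:42:18,062", "levelname": "INFO", "message": "Server started listening on port 8080"}
{"name": "__main__", "asctime": "2023-07-23 14:42:18,062", "levelname": "WARNING", "message": "Disk space is at 85% capacity"}
{"name": "__main__", "asctime": "2023-07-23 14:42:18,063", "levelname": "ERROR", "message": "Failed to connect to the database"}
```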
Contextual data can also be added at log point through the extra keyword
argument on a level method like this:
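```python
# The field names here are arbitrary examples
logger.info(
    "Server started listening on port 8080",
    extra={"python_version": "3.11.2", "env": "production"},
)
```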
These extra fields are automatically inserted in the log record if they do not
clash with any of the default attribute names. Otherwise, you'll get a
KeyError exception.
As you can see, the built-in logging module is capable and extensible for
various logging needs. However, its initial configuration and customization can
be cumbersome since you have to create and configure loggers, handlers, and
formatters before you can start logging effectively.
Please see our comprehensive Python logging
guide and the
official documentation to
learn more about the logging module's features and best practices.
Loguru is the most popular third-party logging framework for Python
with over 15k GitHub stars at the time of writing. It aims to simplify the
logging process by pre-configuring the logger and making it really easy to
customize via its add() method. Initiating logging with Loguru is a breeze;
just install the package and import it, then call one of its level methods as
follows:
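```python
# A minimal sketch; the messages are placeholders
from loguru import logger

logger.trace("A trace message")  # not recorded: below the default DEBUG level
logger.debug("A debug message")
logger.info("An info message")
logger.success("A success message")
logger.warning("A warning message")
logger.error("An error message")
logger.critical("A critical message")
```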
The default configuration logs a semi-structured and colorized output to the
standard error. It also defaults to DEBUG as its minimum level, which explains
why the TRACE output isn't recorded.
Loguru's inner workings are easily customized through the add() function which
handles everything from formatting the logs to setting their destination. For
example, you can log to the standard output, change the default level to INFO,
and format your logs as JSON using the configuration below:
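```python
import sys
from loguru import logger

logger.remove(0)  # remove the pre-configured stderr handler
logger.add(sys.stdout, level="INFO", serialize=True)

logger.info("An info message")
```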
The default JSON output produced by Loguru can be quite verbose, but it's easy to serialize the log messages using a custom function like this:
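```python
# A sketch adapted from the recipe in Loguru's documentation: a custom
# serialize() function picks out a subset of the record, and patch() makes
# the serialized string available to the handler's format
import json
import sys
from loguru import logger

def serialize(record):
    subset = {
        "timestamp": record["time"].timestamp(),
        "level": record["level"].name,
        "message": record["message"],
    }
    return json.dumps(subset)

def patching(record):
    record["extra"]["serialized"] = serialize(record)

logger.remove(0)
logger = logger.patch(patching)
logger.add(sys.stdout, format="{extra[serialized]}")

logger.info("An info message")
```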
Contextual logging is also fully supported in Loguru through its bind()
method, which allows the addition of contextual data at log point. You can
also use it to create child loggers for logging records that share the same
context:
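```python
# The field values are arbitrary examples
from loguru import logger

childLogger = logger.bind(user_id="USR-1243", doc_id="DOC-2348")

childLogger.info("Document opened")
childLogger.info("Document edited")
childLogger.info("Document saved")
```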
Notice how the user_id and doc_id fields are present in all three records:
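```json
{"message": "Document opened", "user_id": "USR-1243", "doc_id": "DOC-2348", ...}
{"message": "Document edited", "user_id": "USR-1243", "doc_id": "DOC-2348", ...}
{"message": "Document saved", "user_id": "USR-1243", "doc_id": "DOC-2348", ...}
```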
On the other hand, its contextualize() method eases the addition of contextual
fields to all log records within a specific scope or context. For example, the
snippet below demonstrates adding a unique request ID attribute to all logs
created as a result of that request:
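```python
# A sketch: handle_request() is a hypothetical handler function
import uuid
from loguru import logger

def handle_request(request):
    with logger.contextualize(request_id=str(uuid.uuid4())):
        logger.info("Request received")
        # ... process the request here ...
        logger.info("Response sent")
```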
Loguru also supports all the features you'd expect from a good logging
framework such as logging to files with automatic rotation
and compression, custom
log levels, exception handling, logging to multiple destinations at once, and
much more. It also provides a
migration guide
for users coming from the standard logging module.
Please see the official documentation and our dedicated Loguru guide to learn more about using Loguru to create a production-ready logging setup for Python applications.
Structlog is a logging library dedicated to producing structured output in JSON or Logfmt. It supports a colorized and aesthetically enhanced console output for development environments, but also allows for complete customization of the log format to meet diverse needs. You may install the Structlog package using the command below:
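```
pip install structlog
```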
The simplest possible usage of Structlog involves calling the get_logger()
method and using any of the level methods on the resulting logger:
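```python
# A minimal sketch; the messages and key/value pairs are placeholders
import structlog

logger = structlog.get_logger()

logger.debug("Database query executed in 12ms")
logger.info("User logged in", user_id="USR-1243")
logger.warning("Disk space is at 85% capacity")
logger.error("Failed to connect to the database")
```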
The default configuration of a Structlog logger is quite friendly for
development environments. The output is colorized, and any included contextual
data is placed in key=value pairs. Additionally, tracebacks are neatly
formatted and organized so that it's much easier to spot the cause of the issue.
A unique behavior of Structlog is that it doesn't filter records by their
levels. This is why all the levels above were written to the console. However,
it's easy enough to configure a default level through the configure() method
like this:
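```python
import logging
import structlog

structlog.configure(
    wrapper_class=structlog.make_filtering_bound_logger(logging.INFO),
)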
Structlog is compatible with the log levels in the standard logging module,
hence the use of the logging.INFO constant above. You can also use the
number associated with the level directly:
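```python
structlog.configure(
    wrapper_class=structlog.make_filtering_bound_logger(20),  # 20 == INFO
)
```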
The logger returned by the get_logger() function is called a Bound Logger
because you can bind contextual values to it. Once the key/value pairs are
bound, they will be included in each subsequent log entry produced by the
logger.
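For example (a sketch; the field values are arbitrary):

```python
logger = structlog.get_logger().bind(user_id="USR-1243")

logger.info("Order placed")      # includes user_id="USR-1243"
logger.info("Payment received")  # also includes user_id="USR-1243"
```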
Bound loggers also include a chain of processor functions that transform and enrich log records as they pass through the logging pipeline. For example, you can log in JSON using the following configuration:
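```python
import structlog

structlog.configure(
    processors=[
        structlog.processors.TimeStamper(fmt="iso"),
        structlog.processors.add_log_level,
        structlog.processors.JSONRenderer(),
    ]
)
```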
Each processor is executed in the declaration order, so TimeStamper() is
called first to add an ISO-8601 formatted timestamp to each entry, then the
severity level is added through add_log_level, and finally the entire record
is serialized as JSON by calling JSONRenderer(). You will observe output
similar to the following after applying this configuration:
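```json
{"user_id": "USR-1243", "event": "Order placed", "timestamp": "2023-07-23T14:42:18.062383Z", "level": "info"}
```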
Another cool thing Structlog can do is automatically format tracebacks so that
they are also serialized in JSON format. You only need to use the
dict_tracebacks processor like this:
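```python
import structlog

structlog.configure(
    processors=[
        structlog.processors.TimeStamper(fmt="iso"),
        structlog.processors.add_log_level,
        structlog.processors.dict_tracebacks,
        structlog.processors.JSONRenderer(),
    ]
)

logger = structlog.get_logger()

try:
    1 / 0
except ZeroDivisionError:
    logger.exception("Cannot divide one by zero!")
```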
Whenever exceptions are logged, you will observe that the records are enriched with well-formatted information about the exception, making it easy to analyze in a log management service.
I've only scratched the surface of what Structlog has to offer, so be sure to check out its documentation to learn more.
Eliot is a unique Python logging solution that aims not only to provide a
record of the events that occur in a program, but also to output the causal
chain of actions leading to each event. You can install Eliot with pip as
follows:
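```
pip install eliot
```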
One of Eliot's key concepts is an action which represents any task that can start and finish successfully or fail with an exception. When you start an action, two log records are produced: one to indicate the start of the action, and the other to indicate its success or failure. The best way to demonstrate this model is through an example:
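```python
# A sketch: calculate() and the app.log destination are illustrative
from eliot import start_action, to_file

to_file(open("app.log", "w"))

def calculate(x, y):
    return x * y

with start_action(action_type="calculate"):
    calculate(10, 5)
```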
The start_action function is used here to indicate the start of a new action.
Once the calculate() function is executed, two logs are sent to the
destination configured by to_file():
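```json
{"action_status": "started", "action_type": "calculate", "task_uuid": "...", "task_level": [1], "timestamp": ...}
{"action_status": "succeeded", "action_type": "calculate", "task_uuid": "...", "task_level": [2], "timestamp": ...}
```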
Eliot produces structured JSON output by default, and the following fields are included in each record:
- task_uuid: The unique task identifier that produced the message.
- action_status: Indicates the status of the action.
- timestamp: The UNIX timestamp of the message.
- task_level: The location of the message within the task's tree of actions.
- action_type: The provided action_type argument.

You can add additional fields to both the start message and the success message of an action like this:
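```python
# x and y are added to the start message, and add_success_fields()
# attaches the result to the success message
with start_action(action_type="calculate", x=10, y=5) as action:
    result = calculate(10, 5)
    action.add_success_fields(result=result)
```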
Another way to log the inputs and results of a function is through the
log_call decorator:
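```python
from eliot import log_call, to_file

to_file(open("app.log", "w"))

@log_call
def calculate(x, y):
    return x * y

calculate(10, 5)
```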
In this case, the action_type will be a concatenation of the module and
function name, but the remaining fields will be the same as before:
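```json
{"action_status": "started", "action_type": "__main__.calculate", "x": 10, "y": 5, ...}
{"action_status": "succeeded", "action_type": "__main__.calculate", "result": 50, ...}
```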
You can customize the behavior of the log_call decorator by changing the
action_type field, and excluding certain arguments or the result:
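```python
# Only the x argument is logged, and the result is omitted entirely
@log_call(action_type="CALC", include_args=["x"], include_result=False)
def calculate(x, y):
    return x * y
```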
If an uncaught exception is detected within the context of an action, the action will be marked as failed, and an exception message will be logged instead of a success message:
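```python
# A sketch: calculate() now divides, so passing zero raises an exception
def calculate(x, y):
    return x / y

with start_action(action_type="calculate"):
    calculate(1, 0)  # raises ZeroDivisionError
```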
Instead of a success message, you'll now observe an exception message
accompanied by a reason:
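```json
{"action_status": "failed", "action_type": "calculate", "exception": "builtins.ZeroDivisionError", "reason": "division by zero", ...}
```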
When you need to log isolated messages within the context of an action, you can
use the log method as follows:
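```python
with start_action(action_type="calculate") as action:
    action.log(message_type="info", message="Validating inputs")
    calculate(10, 5)
```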
Eliot does not have the concept of log levels, so you can only add the level field manually if needed:
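```python
# The level field is an ordinary custom field here, not a built-in concept
with start_action(action_type="calculate") as action:
    action.log(message_type="info", level="INFO", message="Validating inputs")
```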
Another neat feature of Eliot is its ability to visualize its logs through the
eliot-tree command-line tool.
Once you've installed eliot-tree, you can pipe the JSON logs produced by Eliot
to the command as follows:
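```
python app.py | eliot-tree
```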
If you're logging to a file, you can pass the file as an argument to the tool:
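```
eliot-tree app.log
```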
There is so much more to Eliot than can be covered here so ensure to check out its documentation to learn more.
Logbook describes itself as a cool
replacement for Python's standard library logging module, whose aim is to make
logging fun. You can install it in your project using the following command:
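```
pip install Logbook
```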
Getting started with Logbook is also really straightforward:
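```python
# A minimal sketch; the messages are placeholders
import sys
import logbook

logger = logbook.Logger(__name__)

handler = logbook.StreamHandler(sys.stdout, level="INFO")
handler.push_application()

logger.info("Successfully connected to the database")
logger.notice("Disk space is at 85% capacity")
logger.warning("Disk space is running low")
```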
As shown above, the logbook.Logger class is used to create a new logger
channel. This logger provides access to level methods like info() and
warning() for writing log messages. All the log levels in the logging module
are supported, with the addition of a NOTICE level that sits between INFO
and WARNING.
Logbook also uses the Handler concept to determine the destination and
formatting of the logs. The StreamHandler class sends logs
to any output stream (the standard output in this case), and other handlers are
available for logging to files, Syslog, Redis, Slack etc.
However, unlike the standard logging module, you are discouraged from
registering handlers on the logger directly. Instead, you're supposed to bind
the handler to the process, thread, or greenlet stack through the
push_application(), push_thread(), and push_greenlet() methods
respectively. The corresponding pop_application(), pop_thread(), and
pop_greenlet() methods also exist for unregistering handlers:
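```python
import logbook

logger = logbook.Logger(__name__)

handler = logbook.FileHandler("app.log")
handler.push_application()  # register for the entire process

logger.info("This record is handled by the FileHandler")

handler.pop_application()   # unregister when no longer needed
```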
You can also bind a handler for the duration of a with-block. This ensures that logs created within the block are sent only to the specified handler:
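```python
# Continuing from the example above: applicationbound() registers the
# handler only for the duration of the with-block
with logbook.FileHandler("app.log").applicationbound():
    logger.info("This record goes only to app.log")
```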
Log formatting is also done through handlers. A format_string property exists
on each handler for this purpose, and it accepts properties on the
LogRecord
class:
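```python
# The format itself is an illustrative example
import sys
import logbook

handler = logbook.StreamHandler(sys.stdout)
handler.format_string = (
    "{record.time} | {record.level_name} | {record.channel}: {record.message}"
)
```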
Unfortunately, structured logging isn't supported in any of Logbook's built-in handlers. You'd have to implement it yourself via a custom handler. For more details, see the Logbook documentation.
Microsoft's Picologging library is a relatively new addition to Python's logging ecosystem. Positioned as a high-performance drop-in replacement for the standard logging module, it boasts a remarkable 4-10 times speed improvement, as stated in its GitHub README. To integrate it into your project, you can install it with the following command:
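```
pip install picologging
```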
Picologging shares the same familiar API as the logging module in Python and
it uses the same log record attributes for formatting:
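```python
# A sketch: picologging mirrors the standard logging API, so this is the
# same pattern you'd use with the logging module
import picologging as logging

logging.basicConfig(
    level=logging.INFO,
    format="%(levelname)s | %(name)s | %(message)s",
)

logger = logging.getLogger(__name__)
logger.info("Application started")
```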
Picologging's documentation emphasizes that it is currently in an early-alpha
state, so you should hold off on using it in production. Nevertheless, it is
already showing some promise when it comes to performance improvements to the
standard logging module according to these
benchmarks. Please see
the documentation for more details.
Our primary recommendation for logging in Python is to use Loguru due to its impressive features and user-friendly API. However, it's crucial to familiarize yourself with the built-in logging module, as it remains a powerful and widely used solution.
Structlog is another robust option that merits consideration, and Eliot can also be a good choice, provided its lack of log levels isn't a significant concern for your use case. On the other hand, Picologging is currently in its early development stages, and Logbook lacks native support for structured logging, making them less advisable for logging in production.
Once you've chosen your logging framework and implemented it across your Python applications, the next step is centralizing those logs for effective monitoring and analysis. Better Stack works smoothly with major frameworks.
Thanks for reading, and happy logging!