How to Choose a Logging Framework

10 Factors for Choosing a Logging Framework

Ayooluwa Isaiah
Updated on August 4, 2023

A logging framework is a tool that helps you standardize the process of logging in your application. While certain programming languages offer built-in logging modules as part of their standard libraries, most logging frameworks are third-party libraries like Log4j (Java), Zerolog (Go), or Winston (Node.js). Occasionally, organizations opt to develop custom logging solutions, but this is typically limited to larger companies with highly specialized needs.

To be considered a logging framework, the library needs to address common logging concerns satisfactorily. This includes capturing all relevant information about the recorded event, such as timestamps and log levels. Ideally, logs should be generated in a structured format, such as JSON, allowing for easy parsing and analysis. They should also be transportable to various destinations, such as the console, files, or log management services. Additionally, a logging framework should be flexible enough to adapt to different deployment environments and scenarios, while ensuring optimal performance even when dealing with a large volume of logs.

Due to these considerations, packages like Ruby's built-in Logger class or the Console module in Node.js do not qualify as robust logging solutions for production applications. While they may fulfill certain logging criteria, they lack many essential features required for production logging. Therefore, in most scenarios, we recommend that you forgo the built-in logging solution provided by the language in favor of a third-party library to fulfill your logging needs.

This article will explore crucial factors to consider when selecting a logging framework. It will also provide examples of specific frameworks that meet the criteria in some of the most popular languages.

🔭 Want to centralize and monitor your application logs?

Head over to Better Stack and start ingesting your logs in 5 minutes.

1. Support for leveled logging

Any logging framework worth its salt will provide log levels to differentiate between recorded events based on their severity. However, the specific levels provided differ from library to library, so you may need to decide which levels you need beforehand. Ideally, the framework should allow customization of log levels to accommodate specific needs, even if the default options aren't satisfactory. For example, in the case of Node.js, the popular logging library Winston uses the following levels by default:

 
{
  error: 0,
  warn: 1,
  info: 2,
  http: 3,
  verbose: 4,
  debug: 5,
  silly: 6
}

But you can easily change them to a more conventional set like this:

 
const winston = require('winston');

const logLevels = {
  fatal: 0,
  error: 1,
  warn: 2,
  info: 3,
  debug: 4,
  trace: 5,
};

const logger = winston.createLogger({
  levels: logLevels,
  transports: [new winston.transports.Console()],
});

2. Impact on application behavior and testing

Log statements should meet the fundamental requirement of not impacting the program's behavior. Essentially, they should have no effect on the execution flow and correctness of the application. In addition, it should be straightforward to turn off logging during testing to prevent interference with test outputs. This helps avoid flaky tests caused by differences in log output and ensures that tests focus solely on the intended functionality without being affected by logging statements.
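To illustrate the second point, here is a minimal sketch (not any particular library's API) of a leveled logger that supports a "silent" threshold, the mechanism most frameworks use to switch logging off during test runs:

```javascript
// Minimal sketch of a leveled logger with a "silent" threshold. In a real
// framework, the threshold would typically come from configuration such as
// the LOG_LEVEL environment variable.
const LEVELS = { trace: 0, debug: 1, info: 2, warn: 3, error: 4, fatal: 5, silent: 6 };

function createLogger(threshold = 'info', write = (line) => process.stdout.write(line + '\n')) {
  const logger = {};
  for (const level of Object.keys(LEVELS)) {
    if (level === 'silent') continue;
    // Each level method is a no-op when the event falls below the threshold,
    // so logging never alters control flow or program state.
    logger[level] = (msg) => {
      if (LEVELS[level] >= LEVELS[threshold]) {
        write(JSON.stringify({ level, msg }));
      }
    };
  }
  return logger;
}

// In tests, construct the logger with the 'silent' threshold so that no
// log output interferes with assertions on the program's real output.
const testLogger = createLogger('silent');
testLogger.error('this produces no output');
```

Real frameworks expose the same idea through configuration rather than code, which is what makes it easy to silence logs in a test environment without touching the application itself.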

3. Support for structured logging formats

Presently, most logging frameworks default to generating unstructured or semi-structured log data primarily intended for human consumption. These logs consist of strings that often contain embedded variables, requiring subsequent parsing and extraction. The main drawback of unstructured log records is that automating their processing is challenging due to the lack of a standardized format. Finding specific events in unstructured logs typically involves crafting custom regular expressions, which may require modification if the log data changes.

 
DEBUG [2023-06-18 10:23:45] [app.js:123] - User authentication successful for user 'john.doe'.
INFO [2023-06-18 11:45:32] [api.js:456] - Request received from IP '192.168.0.1' to access resource '/api/data'.
ERROR [2023-06-18 13:57:21] [db.js:789] - Database connection failed. Error: Timeout reached while connecting to the database.

In contrast, structured logs are composed of objects rather than strings. Each property within the object can be automatically extracted and processed by log processing tools, enabling searches, alerts, and the generation of output formats that humans easily understand. The good news is that many logging frameworks already support structured logging. Some default to it, while others require you to enable it explicitly. Most frameworks employ JSON for structured output due to its ubiquitous support amongst logging tools and services.

 
{"timestamp": "2023-06-18 10:23:45", "file": "app.js", "line": 123, "level": "DEBUG", "message": "User authentication successful for user 'john.doe'"}
{"timestamp": "2023-06-18 11:45:32", "file": "api.js", "line": 456, "level": "INFO", "message": "Request received from IP '192.168.0.1' to access resource '/api/data'"}
{"timestamp": "2023-06-18 13:57:21", "file": "db.js", "line": 789, "level": "ERROR", "message": "Database connection failed. Error: Timeout reached while connecting to the database"}

The main drawback of structured logs is that they typically lack the natural language flow and readability of unstructured logs, making it harder to quickly grasp the information they contain without additional tools.

Therefore, it might be worth considering a framework that simplifies the prettification of log output during development. This allows for a more human-friendly display, potentially with colorized console output, while production systems utilize the more efficient structured format in conjunction with log management services.
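As a rough illustration of what such prettification involves (a hypothetical helper, not pino-pretty or any real tool), a development transport parses each structured line and re-renders it for human eyes:

```javascript
// Hypothetical sketch of a "pretty" development transport: parse each
// structured JSON line and re-render it in the familiar human-readable
// layout. Real tools like pino-pretty also add colors and gracefully
// handle lines that are not valid JSON.
function prettify(jsonLine) {
  const { timestamp, level, file, line, message } = JSON.parse(jsonLine);
  return `${level.padEnd(5)} [${timestamp}] [${file}:${line}] - ${message}`;
}

const structured =
  '{"timestamp": "2023-06-18 10:23:45", "file": "app.js", "line": 123, "level": "DEBUG", "message": "User authentication successful"}';
console.log(prettify(structured));
// DEBUG [2023-06-18 10:23:45] [app.js:123] - User authentication successful
```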


4. Support for logging contextual data

A good logging framework must provide adequate support for adding contextual data to log records. Such data may include file and line number of the event, request or session identifiers, user IDs, process IDs, or any other relevant data that provides insights into the specific context of the log event. Ideally, the framework should offer the flexibility to add contextual data at specific log points, within defined scopes, or even universally across all application logs.

For example, with Pino, you can add contextual data at the log point like this:

 
logger.error(
  { transaction_id: '12343_ff', user_id: 'johndoe' },
  'Transaction failed'
);

You can also add context to a group of logs using child loggers:

 
function getEntity(entityID) {
  const childLogger = logger.child({ entity_id: entityID });
  childLogger.trace('getEntity invoked');
  childLogger.trace('getEntity completed');
}

getEntity('entity_id');

Finally, you can add universal data to all logs while creating the logger:

 
const pino = require('pino');

const logger = pino({
  formatters: {
    bindings: (bindings) => {
      return {
        pid: bindings.pid,
        host: bindings.hostname,
        node_version: process.version,
      };
    },
  },
});

5. Error handling behavior

When considering a logging framework, you must verify how it handles errors, as they are often the primary focus of production logging. A good framework should capture all the relevant information about the error that occurred, including a full stack trace. Surprisingly, some popular frameworks get this wrong, so it's crucial to thoroughly test and carefully review the documentation to verify the error handling capabilities of your selected logging solution.

Some frameworks output the entire stack trace as a string in an object property like this:

 
{"level":"error","time":1643706943924,"pid":13185,"hostname":"Kreig","err":{"type":"Error","message":"ValidationError: email address in invalid","stack":"Error: ValidationError: email address in invalid\n    at Object.<anonymous> (/home/ayo/dev/betterstack/demo/snippets/main.js:3:14)\n    at Module._compile (node:internal/modules/cjs/loader:1097:14)\n    at Object.Module._extensions..js (node:internal/modules/cjs/loader:1149:10)\n    at Module.load (node:internal/modules/cjs/loader:975:32)\n    at Function.Module._load (node:internal/modules/cjs/loader:822:12)\n    at Function.executeUserEntryPoint [as runMain] (node:internal/modules/run_main:81:12)\n    at node:internal/main/run_main_module:17:47"},"msg":"ValidationError: email address in invalid"}

Others may offer the ability to format stack traces as objects, allowing for automatic parsing:

 
{"level":"error","stack":[{"func":"inner","line":"20","source":"main.go"},{"func":"middle","line":"24","source":"main.go"},{"func":"outer","line":"32","source":"main.go"},{"func":"main","line":"15","source":"main.go"},{"func":"main","line":"250","source":"proc.go"},{"func":"goexit","line":"1571","source":"asm_amd64.s"}],"error":"seems we have an error here","time":1658700227}

When working with a language that uses exceptions, you should also investigate whether the framework can catch and log uncaught exceptions before the program exits. This gives you insight into the cause and context of a crash, enabling more effective resolution of the underlying issue.

 
const winston = require('winston');

const logger = winston.createLogger({
  level: process.env.LOG_LEVEL || 'info',
  format: winston.format.json(),
  transports: [new winston.transports.Console()],
  // Winston can automatically catch and log uncaught exceptions and promise
  // rejections before the program exits
  exceptionHandlers: [
    new winston.transports.File({ filename: 'exceptions.log' }),
  ],
  rejectionHandlers: [
    new winston.transports.File({ filename: 'rejections.log' }),
  ],
});

6. Impact on application performance

Logging should not significantly degrade your application's performance or consume excessive resources even when generating a high volume of logs. Therefore, choosing a logging framework with good performance characteristics is also essential. For example, starting a new project with Logrus for structured logging in Go is a bad decision, even though it has many of the characteristics of a good logging framework. It is an order of magnitude slower than newer options, such as Zap, Slog, and Zerolog, and allocates a lot more memory. Read more in our comparison of the best Go logging libraries.

Package       | Time         | Objects allocated
zerolog       | 81 ns/op     | 0 allocs/op
zap           | 193 ns/op    | 0 allocs/op
zap (sugared) | 227 ns/op    | 1 allocs/op
slog          | 322 ns/op    | 0 allocs/op
go-kit        | 5377 ns/op   | 56 allocs/op
apex/log      | 19518 ns/op  | 53 allocs/op
log15         | 19812 ns/op  | 70 allocs/op
logrus        | 21997 ns/op  | 68 allocs/op

By selecting a framework that minimizes object allocations, you will ensure that your log statements add very little to the runtime overhead of your application.

7. Adequate log transportation options

Transporting log entries to one or more destinations is another important consideration when choosing a logging framework. At a minimum, the framework should support writing logs to standard output/standard error and to a file. Ideally, though, it should offer flexibility in configuring custom destinations for log output, as modern applications tend to use web-based log aggregators in addition to a local mechanism.

 
import winston from 'winston';
import { Logtail } from '@logtail/node';
import { LogtailTransport } from '@logtail/winston';

const logtail = new Logtail('<your_source_token>');

const logger = winston.createLogger({
  // Winston can automatically send logs to multiple destinations such as the
  // console and a web-based log management solution. You can even use different
  // formats for each destination based on how the log data is used
  transports: [
    new winston.transports.Console(),
    new LogtailTransport(logtail),
  ],
});

export default logger;


If you have complete control over your application environment, you can output logs to the console or a file and employ a dedicated log shipper like Fluentd, Logstash, or Vector, to collect, transform, and forward the records to other destinations as required. This approach allows for centralized log management and flexibility in routing logs to different systems.

However, in situations where the deployment platform restricts access to the application environment, the ability for the logging framework to directly ship logs to their final destinations becomes crucial. Support for log file rotation is also a nice-to-have feature, but we think such tasks are generally best done using standard Linux tools such as Logrotate.
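For reference, a minimal Logrotate configuration for an application log might look like this (the path and retention values are illustrative):

```
# Rotate the application's log files daily, keeping two weeks of history
/var/log/myapp/*.log {
    daily
    rotate 14
    compress
    delaycompress
    missingok
    notifempty
}
```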

8. Support for log sampling

In modern environments, where applications often produce a large volume of logs, managing and storing every record can become quite costly. Log sampling is a technique employed to mitigate this challenge by selectively capturing and storing a subset of log events instead of recording every single entry.

The concept behind log sampling is to retain a representative sample of logs while discarding the rest, reducing storage requirements and associated costs. It involves randomly or systematically selecting a fraction of log events for retention based on predetermined criteria.

The sampling rate determines the proportion of retained logs compared to the total log volume generated. For example, a 10% sampling rate means that only 10% of log events will be stored and analyzed, while the remaining 90% will be discarded.
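The idea can be sketched in a few lines (illustrative only; the makeSampler helper below is hypothetical and not part of any framework):

```javascript
// Illustrative sketch of rate-based sampling: keep roughly `rate` of all
// entries by drawing a random number per event. Frameworks often sample
// deterministically per time interval instead, as the Zap example below shows.
function makeSampler(rate, random = Math.random) {
  return function shouldLog() {
    return random() < rate;
  };
}

const shouldLog = makeSampler(0.1); // ~10% of entries retained
let kept = 0;
for (let i = 0; i < 1000; i++) {
  if (shouldLog()) kept++; // only sampled entries would actually be written
}
console.log(`kept ${kept} of 1000 entries`); // on average about 100
```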

Of course, you should carefully consider the sampling rate and criteria to ensure that the captured logs are representative and that no critical events are lost due to the sampling. Here's an example that configures the Zap logger in Go to sample log entries with the same level and message:

 
func createLogger() *zap.Logger {
    stdout := zapcore.AddSync(os.Stdout)

    level := zap.NewAtomicLevelAt(zap.InfoLevel)
    productionCfg := zap.NewProductionEncoderConfig()
    jsonEncoder := zapcore.NewJSONEncoder(productionCfg)
    jsonOutCore := zapcore.NewCore(jsonEncoder, stdout, level)

    samplingCore := zapcore.NewSamplerWithOptions(
        jsonOutCore,
        time.Second, // interval
        5,           // log the first five entries
        0,           // thereafter, log zero entries within the interval
    )

    return zap.New(samplingCore)
}

Here, only the first five records with the same level and message are recorded within a one-second interval. Any other entry in that interval will be automatically discarded since 0 is specified here. You can see this in action by logging in a for loop:

 
func main() {
    logger := createLogger()

    defer logger.Sync()

    for i := 1; i <= 100; i++ {
        logger.Info("an info message")
    }
}

Without log sampling, 100 identical logs will be produced when the loop is executed. However, with log sampling, only five entries are logged while the others are discarded. In this manner, you can significantly mitigate the cost of storing logs without compromising your ability to capture relevant information about your program.

9. Customization and extensibility

Beyond what the framework currently offers, you also need to evaluate its flexibility to adapt to changing requirements and integration with the broader observability ecosystem. For example, there should be APIs that allow you to customize the log format, add custom fields, execute custom logic before or after log events are processed, integrate with third-party systems, and more!

 
// Implementing Zerolog's Hook interface allows you to execute some code
// each time a log event is captured
type Hook interface {
    Run(e *zerolog.Event, level zerolog.Level, message string)
}
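As a language-neutral sketch of the same hook idea (hypothetical code, not Zerolog's or any other library's API), each registered hook runs on every log event and can enrich the record before it is written out:

```javascript
// Hypothetical sketch of hook-based extensibility: registered hooks run on
// every log event and may add fields before the record is serialized.
function createHookableLogger(write = (line) => process.stdout.write(line + '\n')) {
  const hooks = [];
  return {
    addHook(fn) {
      hooks.push(fn);
    },
    log(level, message) {
      const event = { level, message };
      for (const hook of hooks) hook(event); // custom logic per log event
      write(JSON.stringify(event));
    },
  };
}

const logger = createHookableLogger();
// A hook that stamps every record with an application-specific field
logger.addHook((event) => {
  event.app = 'demo-service';
});
logger.log('info', 'hook demo');
```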

10. Reputation and community support

Another important consideration when choosing a logging framework is its reputation in the development community. Established logging frameworks that have been used for several years often have a wealth of user experiences and reports available. These experiences can serve as valuable insights and help in your decision-making process when choosing between multiple good options.

You should also consider the size and activity level of the framework's community. An active community indicates ongoing development, bug fixes, and support from fellow developers. Check the project's documentation quality and availability of tutorials and community forums or chat channels for assistance with issues.

Some logging framework recommendations

To simplify the process of selecting a logging framework, we have compiled a list of recommended frameworks for several popular programming languages. These frameworks have been chosen based on their features, community support, and adoption. To delve further into these libraries, you can explore the provided guides below for more in-depth information.

1. Pino (Node.js)

Pino is a high-performance structured logging framework for Node.js applications that offers many useful features. While it logs in JSON format by default, it provides the pino-pretty module to enhance the log output's readability in development. Additionally, it offers the flexibility to produce logs in other formats using transport modules.

It is also built into the Fastify web framework and integrates seamlessly with other popular Node.js web frameworks like Express. It stands out with its unique log redaction feature, which helps keep sensitive data out of your logs. In comparison to Winston, another popular logging option, Pino is lighter, easier to use, and comes with saner defaults and better performance to boot. Overall, Pino is our recommended choice for structured logging in Node.js applications.

2. Zerolog, Zap, or Slog (Go)

Zerolog stands out as the fastest logging framework in the Go ecosystem, boasting minimal allocations. It focuses on exclusively producing structured output, but also provides a human-friendly, colorized console output for development environments. It offers features such as log sampling, hooks, and formatted stack traces for error logging.

If you're seeking a more customizable framework, Uber's Zap package is worth considering. It pioneered the zero-allocation logging approach embraced by Zerolog, and it offers a highly customizable API that should meet most requirements. Another upcoming option is Slog, the structured logging API integrated into the Go standard library expected to be available from Go 1.21 onwards. Slog can be used as a standalone logger or as a logging frontend, providing flexibility and decoupling the application's logging from any specific framework.

3. Monolog (PHP)

Monolog is PHP's most popular logging framework. It has gained significant traction and widespread adoption, being integrated into major PHP application frameworks like Laravel and Symfony to provide a robust logging solution. It supports various logging formats, including JSON, and provides several handlers for transporting your logs to various destinations. Its implementation of the PSR-3 logger interface (a common PHP interface for logger objects) promotes interoperability and simplifies future transitions to alternative compatible libraries if needed.

4. SLF4J with Log4J2 or Logback (Java)

Simple Logging Facade for Java (SLF4J) serves as a logging facade that abstracts various Java logging frameworks. By introducing a generic API layer, SLF4J enables seamless migration between different logging frameworks with minimal disruption. If you find that one framework does not adequately address your needs or requirements, you can easily switch to another without significant code changes. When considering SLF4J implementations, both Log4j2 and Logback are highly regarded choices. However, Log4j2 stands out as the more actively maintained framework and better performer of the two.

5. Loguru (Python)

Although the built-in logging module in Python's standard library is quite robust and widely used in the community, its setup and configuration can sometimes be complex, even for simple tasks. If you find the native logging API cumbersome, an excellent alternative to explore is Loguru. It is the most prominent third-party logging framework for Python, and it positions itself as a simpler alternative that's pre-configured (but customizable), with many of the features discussed in this article. Notably, it retains compatibility with the standard logging module, ensuring a smooth transition if you decide to migrate from the built-in module.

6. Semantic Logger (Ruby)

Ruby's standard logging module does not meet our requirements for a good logging framework due to its lack of contextual logging and other advanced features. Therefore, we recommend going with a third-party framework such as Semantic Logger which supports structured and context-aware logging in JSON, multiple output destinations, and more. It also conforms to Ruby's standard logging interface, so you only need to change how the logger is created to migrate.

Final thoughts and next steps

Selecting a logging framework is a critical decision for your application, as it will play a significant role in recording events that'll help you manage and troubleshoot your application effectively. We hope the guidelines provided above assist you in choosing the framework that best suits your requirements.

Once you have integrated a logging framework into your project, the next step is to ensure that the recorded data is sent to a log management service so that it can be effectively aggregated, stored, and analyzed. By leveraging Better Stack, you can centralize your log data, simplify log analysis, and enable real-time monitoring so that notable events are tracked and addressed quickly.

Thank you for reading, and happy logging!

Article by
Ayooluwa Isaiah
Ayo is the Head of Content at Better Stack. His passion is simplifying and communicating complex technical ideas effectively. His work was featured on several esteemed publications including LWN.net, Digital Ocean, and CSS-Tricks. When he’s not writing or coding, he loves to travel, bike, and play tennis.
Licensed under CC-BY-NC-SA

This work is licensed under a Creative Commons Attribution-NonCommercial-ShareAlike 4.0 International License.