
A Complete Guide to Pino Logging in Node.js

Ayooluwa Isaiah
Updated on December 6, 2023

Pino is a powerful logging framework for Node.js that boasts exceptional speed and comprehensive features. In fact, its impressive performance earned it a default spot in the open-source Fastify web server for logging output. Pino's versatility also extends to its ease of integration with other Node.js web frameworks, making it a top choice for developers looking for a reliable and flexible logging solution.

Pino includes all the standard features expected in any logging framework, such as customizable log levels, formatting options, and multiple log transportation options. Its flexibility is one of its standout features, as it can be easily extended to meet specific requirements, making it a top choice for a wide range of applications.

This article will guide you through creating a logging service for your Node.js application using Pino. You will learn how to leverage the framework's many features and customize them to achieve an optimal configuration for your specific use case.

By the end of this tutorial, you will be well-equipped to implement a production-ready logging setup in your Node.js application with Pino, helping you to streamline your logging process and improve the overall performance and reliability of your application.


Prerequisites

Before proceeding with the rest of this article, ensure you have a recent version of Node.js and npm installed locally on your machine. This article also assumes you are familiar with the basic concepts of logging in Node.js.

Getting started with Pino

To get the most out of this tutorial, create a new Node.js project to try out the concepts we will be discussing. Start by initializing a new Node.js project using the commands below:

 
mkdir pino-logging && cd pino-logging
 
npm init -y

Afterward, install the latest version of pino through the command below. The examples in this article are compatible with version 8.x, which is the latest at the time of writing.

 
npm install pino

Create a new logger.js file in the root of your project directory, and populate it with the following contents:

logger.js
const pino = require('pino');

module.exports = pino({});

This snippet requires the pino package and exports a logger instance created by executing the top-level pino() function. We'll explore all the different ways you can customize the Pino logger, but for now, let's go ahead and use the exported logger in a new index.js file as shown below:

index.js
const logger = require('./logger');

logger.info('Hello, world!');

Once you save the file, execute the program using the following command:

 
node index.js

You should observe the following output:

Output
{"level":30,"time":1677506333497,"pid":39977,"hostname":"fedora","msg":"Hello, world!"}

The first thing you'll notice about the output above is that it's structured and formatted in JSON, the prevalent industry standard for structured logging. Besides the log message, the following fields are present in the log entry:

  • The log level indicating the severity of the event being logged.
  • The time of the event (the number of milliseconds elapsed since January 1, 1970 00:00:00 UTC).
  • The hostname of the machine where the program is running.
  • The process ID (pid) of the Node.js program being executed.

We'll discuss how you can customize each of these fields, and how to enrich your logs with other contextual fields later on in this tutorial.
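Because each entry is a single line of JSON, downstream tools can parse it directly. As a quick sketch, here's the entry above being parsed back into an object with plain JavaScript:

```javascript
// Parsing a Pino log line (copied from the output above) back into an object:
const entry = JSON.parse(
  '{"level":30,"time":1677506333497,"pid":39977,"hostname":"fedora","msg":"Hello, world!"}'
);

// `time` is a Unix timestamp in milliseconds, so it converts cleanly to a Date:
console.log(new Date(entry.time).toISOString());
console.log(entry.msg); // prints "Hello, world!"
```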

Prettifying JSON logs in development

JSON is great for production logging due to its simplicity, flexibility, and widespread support amongst logging tools, but it's not the easiest for humans to read, especially when printed on one line. To make the JSON output from Pino easier to read in development environments (where logs are typically printed to the standard output), you can adopt one of the following approaches.

1. Using jq

jq is a nifty command-line tool for processing JSON data. You can pipe your JSON logs to it to colorize and pretty-print them like this:

 
node index.js | jq
Output
{
  "level": 30,
  "time": 1677669391146,
  "pid": 557812,
  "hostname": "fedora",
  "msg": "Hello, world!"
}


If the JSON output is too large, you can filter irrelevant fields from the output by using jq's del() function:

 
node index.js | jq 'del(.time,.hostname,.pid)'
Output
{
  "level": 30,
  "msg": "Hello, world!"
}

You can also opt to use a whitelist instead, which is handy for rearranging the order of the fields:

 
node index.js | jq '{msg,level}'
Output
{
  "msg": "Hello, world!",
  "level": 30
}

You can transform your JSON logs in many other ways with jq, so be sure to check out its documentation to learn more.

2. Using pino-pretty

The Pino team has also provided the pino-pretty package for converting newline-delimited JSON entries into a more human-readable plaintext output.

You'll need to install the pino-pretty package first:

 
npm install pino-pretty --save-dev

Once the installation completes, you'll be able to pipe your application logs to pino-pretty as shown below:

 
node index.js | npx pino-pretty

You will observe that the logs are now reformatted and colorized to make them easier to read:

Output
[12:33:00.352] INFO (579951): Hello, world!


If you want to customize pino-pretty's output, check out the relevant Pino documentation for the pino-pretty transport.

Log levels in Pino

The default log levels in Pino are (ordered by ascending severity) trace, debug, info, warn, error, and fatal, and each of these have a corresponding method on the logger:

index.js
const logger = require('./logger');

logger.fatal('fatal');
logger.error('error');
logger.warn('warn');
logger.info('info');
logger.debug('debug');
logger.trace('trace');

When you execute the code above, you will get the following output:

Output
{"level":60,"time":1643664517737,"pid":20047,"hostname":"fedora","msg":"fatal"}
{"level":50,"time":1643664517738,"pid":20047,"hostname":"fedora","msg":"error"}
{"level":40,"time":1643664517738,"pid":20047,"hostname":"fedora","msg":"warn"}
{"level":30,"time":1643664517738,"pid":20047,"hostname":"fedora","msg":"info"}

Notice how the severity level is represented by a number that increments in 10s according to the severity of the event. You'll also observe that no entry is emitted for the debug() and trace() methods due to the default minimum level on a Pino logger (info), which causes less severe events to be suppressed.
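The mapping between labels and numbers is exposed on every logger instance as `logger.levels.values`. A plain-object sketch of the default mapping, and of how the minimum level filters events, looks like this:

```javascript
// Default Pino level mapping (what `logger.levels.values` reports):
const levelValues = { trace: 10, debug: 20, info: 30, warn: 40, error: 50, fatal: 60 };

// An event is emitted only when its value meets the minimum level,
// which is why debug and trace are suppressed under the default (info):
const min = levelValues.info;
const emitted = Object.keys(levelValues).filter((label) => levelValues[label] >= min);
console.log(emitted); // → [ 'info', 'warn', 'error', 'fatal' ]
```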

Setting the minimum log level is typically done when creating the logger. It is best to control the minimum log level through an environment variable so that you can change it at any time without making code modifications:

logger.js
const pino = require('pino');

module.exports = pino({
  level: process.env.PINO_LOG_LEVEL || 'info',
});

If the PINO_LOG_LEVEL variable is set in the environment, its value will be used. Otherwise, the info level will be the default. The example below sets the minimum level to error so that the events below the error level are all suppressed.

 
PINO_LOG_LEVEL=error node index.js
Output
{"level":60,"time":1643665426792,"pid":22663,"hostname":"fedora","msg":"fatal"}
{"level":50,"time":1643665426793,"pid":22663,"hostname":"fedora","msg":"error"}

You can also change the minimum level on a logger instance at any time through its level property:

index.js
const logger = require('./logger');

logger.level = 'debug'; // only trace messages will be suppressed now

. . .

This is useful if you want to change the minimum log level at runtime perhaps by exposing a secure endpoint for this purpose:

 
app.get('/changeLevel', (req, res) => {
  const { level } = req.body;
  // check that the level is valid then change it:
  logger.level = level;
});
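A minimal sketch of the validation step for such an endpoint might look like the following. The helper and its name are hypothetical; the level names are hard-coded to Pino's defaults here, though a real implementation could derive them from `logger.levels.values`:

```javascript
// Hypothetical validation helper for a level-changing endpoint.
// The list mirrors Pino's default level names:
const VALID_LEVELS = ['trace', 'debug', 'info', 'warn', 'error', 'fatal'];

function isValidLevel(level) {
  return VALID_LEVELS.includes(level);
}

console.log(isValidLevel('debug')); // → true
console.log(isValidLevel('verbose')); // → false
```

In the endpoint above, you would reject the request with a 400 status when `isValidLevel(level)` returns false before assigning to `logger.level`.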

Customizing log levels in Pino

Pino does not restrict you to the default levels that it provides. You can easily add custom levels by creating an object that defines the integer priority of each level, and assigning the object to the customLevels property. For example, you can add a notice level that is more severe than info but less severe than warn using the code below:

 
logger.js
const pino = require('pino');

const levels = {
  notice: 35, // Any number between info (30) and warn (40) will work the same
};

module.exports = pino({
  level: process.env.PINO_LOG_LEVEL || 'info',
  customLevels: levels,
});

At this point, you can log events at each defined custom level through their respective methods, and all the default levels will continue to work as usual:

index.js
const logger = require('./logger');

logger.warn('warn');
logger.notice('notice');
logger.info('info');
Output
{"level":40,"time":1678192423827,"pid":122107,"hostname":"fedora","msg":"warn"}
{"level":35,"time":1678192423828,"pid":122107,"hostname":"fedora","msg":"notice"}
{"level":30,"time":1678192423828,"pid":122107,"hostname":"fedora","msg":"info"}

If you want to replace Pino's log levels entirely, perhaps with the standard Syslog levels, you must specify the useOnlyCustomLevels option as shown below:

logger.js
const pino = require('pino');

const levels = {
  emerg: 80,
  alert: 70,
  crit: 60,
  error: 50,
  warn: 40,
  notice: 30,
  info: 20,
  debug: 10,
};

module.exports = pino({
  level: process.env.PINO_LOG_LEVEL || 'info',
  customLevels: levels,
  useOnlyCustomLevels: true,
});

Customizing the default fields

In this section, we'll take a quick look at the process of modifying the standard fields that come with every Pino log entry. However, be sure to explore the comprehensive range of Pino options at your convenience.

Using string labels for severity levels

Let's start by specifying the level name in the log entry instead of its integer value. This can be achieved through the formatters configuration below:

logger.js
. . .

module.exports = pino({
  level: process.env.PINO_LOG_LEVEL || 'info',
  formatters: {
    level: (label) => {
      return { level: label.toUpperCase() };
    },
  },
});

This change causes the severity level on each entry to be upper-case labels:

Output
{"level":"ERROR","time":1677673626066,"pid":636012,"hostname":"fedora","msg":"error"}
{"level":"WARN","time":1677673626066,"pid":636012,"hostname":"fedora","msg":"warn"}
{"level":"INFO","time":1677673626066,"pid":636012,"hostname":"fedora","msg":"info"}

You can also rename the level property by returning something like this from the function:

logger.js
module.exports = pino({
  level: process.env.PINO_LOG_LEVEL || 'info',
  formatters: {
    level: (label) => {
      return { severity: label.toUpperCase() };
    },
  },
});
Output
{"severity":"ERROR","time":1677676496547,"pid":693683,"hostname":"fedora","msg":"error"}
{"severity":"WARN","time":1677676496547,"pid":693683,"hostname":"fedora","msg":"warn"}
{"severity":"INFO","time":1677676496547,"pid":693683,"hostname":"fedora","msg":"info"}

Customizing the timestamp format

Pino's default timestamp is the number of milliseconds elapsed since January 1, 1970 00:00:00 UTC (as produced by the Date.now() function). You can customize this output through the timestamp property when creating a logger. We recommend outputting your timestamps in the ISO-8601 format:

logger.js
const pino = require('pino');

module.exports = pino({
  level: process.env.PINO_LOG_LEVEL || 'info',
  formatters: {
    level: (label) => {
      return { level: label.toUpperCase() };
    },
  },
  timestamp: pino.stdTimeFunctions.isoTime,
});
Output
{"level":"INFO","time":"2023-03-01T12:36:14.170Z","pid":650073,"hostname":"fedora","msg":"info"}

You can also rename the property from time to timestamp by specifying a function that returns a partial JSON representation of the current time (prefixed with a comma) like this:

 
pino({
  timestamp: () => `,"timestamp":"${new Date(Date.now()).toISOString()}"`,
})
Output
{"level":"INFO","timestamp":"2023-03-01T13:19:10.018Z","pid":698279,"hostname":"fedora","msg":"info"}
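The function above is ordinary JavaScript, so you can sanity-check the fragment it produces on its own. Pino simply splices the returned string (leading comma included) into each log line:

```javascript
// The same timestamp function, outside of the pino() options object:
const timestamp = () => `,"timestamp":"${new Date(Date.now()).toISOString()}"`;

// Dropping the leading comma and wrapping in braces yields valid JSON:
const parsed = JSON.parse(`{${timestamp().slice(1)}}`);
console.log(parsed.timestamp); // an ISO-8601 string such as 2023-03-01T13:19:10.018Z
```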

Customizing the default binding

Pino binds two extra properties to each log entry by default: the program's process ID (pid), and the current machine's hostname. You can customize them through the bindings function on the formatters object. For example, let's rename hostname to host:

 
const pino = require('pino');

module.exports = pino({
  level: process.env.PINO_LOG_LEVEL || 'info',
  formatters: {
    bindings: (bindings) => {
      return { pid: bindings.pid, host: bindings.hostname };
    },
    level: (label) => {
      return { level: label.toUpperCase() };
    },
  },
  timestamp: pino.stdTimeFunctions.isoTime,
});
Output
{"level":"INFO","time":"2023-03-01T13:24:28.276Z","pid":707519,"host":"fedora","msg":"info"}

You may decide to omit any of the fields by removing it from the returned object, and you can also add custom properties here if you want them to appear in every log entry. Here's an example that adds the node version used to execute the program to each log entry:

 
bindings: (bindings) => {
  return {
    pid: bindings.pid,
    host: bindings.hostname,
    node_version: process.version,
  };
},
Output
{"level":"INFO","time":"2023-03-01T13:31:28.940Z","pid":719462,"host":"fedora","node_version":"v18.14.0","msg":"info"}

Other useful examples of global data that can be added to every log entry include the application version, operating system, configuration settings, git commit hash, and more.

Adding context to your logs

Adding contextual data to logs refers to including additional information that provides more context or details about the events being logged. This information can help with troubleshooting, debugging, and monitoring your application in production.

For example, if an error occurs in a web application, including contextual data such as the request's ID, the endpoint being accessed, or the user ID that triggered the request can help with identifying the root cause of the issue more quickly.

In Pino, the primary way to add contextual data to your log entries is through the mergingObject parameter on a level method:

 
logger.error(
  { transaction_id: '12343_ff', user_id: 'johndoe' },
  'Transaction failed'
);

The above snippet produces the following output:

Output
{"level":"ERROR","time":"2023-03-01T13:47:00.302Z","pid":737430,"hostname":"fedora","transaction_id":"12343_ff","user_id":"johndoe","msg":"Transaction failed"}

It's also helpful to set some contextual data on all logs produced within the scope of a function, module, or service so that you don't have to repeat them at each log point. This is done in Pino through child loggers:

index.js
const logger = require('./logger');

logger.info('starting the program');

function getUser(userID) {
  const childLogger = logger.child({ userID });
  childLogger.trace('getUser called');
  // retrieve user data and return it
  childLogger.trace('getUser completed');
}

getUser('johndoe');

logger.info('ending the program');

Execute the code with trace as the minimum level:

 
PINO_LOG_LEVEL=trace node index.js
Output
{"level":"INFO","time":"2023-03-01T14:15:47.168Z","pid":764167,"hostname":"fedora","msg":"starting the program"}
{"level":"TRACE","time":"2023-03-01T14:15:47.169Z","pid":764167,"hostname":"fedora","userID":"johndoe","msg":"getUser called"}
{"level":"TRACE","time":"2023-03-01T14:15:47.169Z","pid":764167,"hostname":"fedora","userID":"johndoe","msg":"getUser completed"}
{"level":"INFO","time":"2023-03-01T14:15:47.169Z","pid":764167,"hostname":"fedora","msg":"ending the program"}

Notice how the userID property is present only within the context of the getUser() function. Using child loggers allows you to add context to log entries without repeating the data at each log point. It also makes it easier to filter and analyze logs based on specific criteria, such as user ID, function name, or other relevant contextual details.
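In a web server, the same pattern is often applied per request. A hypothetical Express-style middleware (the `x-request-id` header and `req.log` property here are illustrative choices, not Pino requirements) might look like:

```javascript
// Attach a request-scoped child logger; `logger` is any Pino instance.
function withRequestLogger(logger) {
  return (req, res, next) => {
    // Every entry logged via req.log will carry this requestId binding:
    req.log = logger.child({ requestId: req.headers['x-request-id'] });
    next();
  };
}
```

Handlers downstream can then call `req.log.info(...)` and the request ID will appear on each entry automatically.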

Logging errors with Pino

Logging errors is a critical practice that will help you track and diagnose issues that occur in production. When an exception is caught, you should log all the relevant details, including its severity, a description of the problem, and any relevant contextual information.

You can log errors with Pino by passing the error object as the first argument to the error() method followed by the log message:

index.js
const logger = require('./logger');

function alwaysThrowError() {
  throw new Error('processing error');
}

try {
  alwaysThrowError();
} catch (err) {
logger.error(err, 'An unexpected error occurred while processing the request');
}

This example produces a log entry that includes an err property containing the type of the error, its message, and a complete stack trace which is handy for troubleshooting.

Output
{
  "level": "ERROR",
  "time": "2023-03-01T14:28:17.821Z",
  "pid": 781077,
  "hostname": "fedora",
  "err": {
    "type": "Error",
    "message": "processing error",
    "stack": "Error: processing error\n    at alwaysThrowError (/home/ayo/dev/betterstack/community/demo/pino-logging/main.js:4:9)\n    at Object.<anonymous> (/home/ayo/dev/betterstack/community/demo/pino-logging/main.js:8:3)\n    at Module._compile (node:internal/modules/cjs/loader:1226:14)\n    at Module._extensions..js (node:internal/modules/cjs/loader:1280:10)\n    at Module.load (node:internal/modules/cjs/loader:1089:32)\n    at Module._load (node:internal/modules/cjs/loader:930:12)\n    at Function.executeUserEntryPoint [as runMain] (node:internal/modules/run_main:81:12)\n    at node:internal/main/run_main_module:23:47"
  },
  "msg": "An unexpected error occurred while processing the request"
}

Handling uncaught exceptions and unhandled promise rejections

Pino does not include a special mechanism for logging uncaught exceptions or promise rejections, so you must listen for the uncaughtException and unhandledRejection events and log the exception using the FATAL level before exiting the program (after attempting a graceful shutdown):

 
process.on('uncaughtException', (err) => {
  // log the exception
  logger.fatal(err, 'uncaught exception detected');

  // shut down the server gracefully
  server.close(() => {
    process.exit(1); // then exit
  });

  // If a graceful shutdown is not achieved after 1 second,
  // shut down the process completely
  setTimeout(() => {
    process.abort(); // exit immediately and generate a core dump file
  }, 1000).unref();
});
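A matching handler for unhandled promise rejections is a near-copy. In the sketch below, a stub object stands in for the Pino instance exported from logger.js so the snippet is self-contained:

```javascript
// Stand-in for the Pino logger exported from logger.js:
const logger = { fatal: (err, msg) => console.error(msg, err) };

process.on('unhandledRejection', (reason) => {
  // `reason` may be an Error or any other thrown value
  logger.fatal(reason, 'unhandled promise rejection detected');
  process.exit(1);
});
```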

You can use a process manager like PM2, or a service like Docker to automatically restart your application if it goes down due to an uncaught exception. Also, don't forget to set up health checks so you can continually monitor the state of your application with an appropriate monitoring tool.

Transporting your Node.js logs

Pino defaults to logging to the standard output as you've seen throughout this tutorial, but you can also configure it to log to a file or other destinations (such as a remote log management service).

You'll need to use the transports feature, which was introduced in Pino v7. Transports operate inside worker threads so that the main thread of the application is kept free from transforming log data or shipping it to remote services (work that could significantly increase the latency of your HTTP responses).

Here's how to use the built-in pino/file transport to route your logs to a file (or a file descriptor):

logger.js
const pino = require('pino');

const fileTransport = pino.transport({
  target: 'pino/file',
  options: { destination: `${__dirname}/app.log` },
});

module.exports = pino(
  {
    level: process.env.PINO_LOG_LEVEL || 'info',
    formatters: {
      level: (label) => {
        return { level: label.toUpperCase() };
      },
    },
    timestamp: pino.stdTimeFunctions.isoTime,
  },
  fileTransport
);

Henceforth, all logs will be sent to an app.log file in the current working directory instead of the standard output. Unlike Winston, its main competitor in the Node.js logging space, Pino does not provide a built-in mechanism for rotating log files to keep them manageable. You'll need to rely on external tools such as Logrotate for this purpose.

Another way to log into files (or file descriptors) is by using the pino.destination() API like this:

logger.js
const pino = require('pino');

module.exports = pino(
  {
    level: process.env.PINO_LOG_LEVEL || 'info',
    formatters: {
      level: (label) => {
        return { level: label.toUpperCase() };
      },
    },
    timestamp: pino.stdTimeFunctions.isoTime,
  },
pino.destination(`${__dirname}/app.log`)
);

Note that the pino/file transport uses pino.destination() under the hood. The main difference between the two is that the former runs in a worker thread while the latter runs in the main thread. When logging only to the standard output or local files, using pino/file may introduce some overhead because the data has to be moved off the main thread first. You should probably stick with pino.destination() in such cases. Using pino/file is recommended only when you're logging to multiple destinations at once, such as to a local file and a third-party log management service.

Pino also supports "legacy transports" that run in a completely separate process from the Node.js program. See the relevant documentation for more details.

Logging to multiple destinations in Pino

Logging to multiple destinations is a common use case that is also supported in Pino v7+ transports. You'll need to create a targets array and place all the transport objects within it like this:

logger.js
const pino = require('pino');
const transport = pino.transport({
  targets: [
    {
      target: 'pino/file',
      options: { destination: `${__dirname}/app.log` },
    },
    {
      target: 'pino/file', // logs to the standard output by default
    },
  ],
});

module.exports = pino(
  {
    level: process.env.PINO_LOG_LEVEL || 'info',
    timestamp: pino.stdTimeFunctions.isoTime,
  },
  transport
);

This snippet configures Pino to log to the standard output and the app.log file simultaneously. Note that the formatters.level function cannot be used when logging to multiple destinations, and that's why it was omitted in the snippet above. If you leave it in, you will get the following error:

 
Error: option.transport.targets do not allow custom level formatters

You can change the second object to use the pino-pretty transport if you'd like a prettified output to be delivered to stdout instead of the JSON formatted output (note that pino-pretty must be installed first):

 
const transport = pino.transport({
  targets: [
    {
      target: 'pino/file',
      options: { destination: `${__dirname}/app.log` },
    },
    {
      target: 'pino-pretty',
    },
  ],
});
 
node index.js && echo $'\n' && cat app.log
Output
[14:33:41.932] INFO (259060): info
[14:33:41.933] ERROR (259060): error
[14:33:41.933] FATAL (259060): fatal

{"level":30,"time":"2023-03-03T13:33:41.932Z","pid":259060,"hostname":"fedora","msg":"info"}
{"level":50,"time":"2023-03-03T13:33:41.933Z","pid":259060,"hostname":"fedora","msg":"error"}
{"level":60,"time":"2023-03-03T13:33:41.933Z","pid":259060,"hostname":"fedora","msg":"fatal"}

Keeping sensitive data out of your logs

One of the most critical best practices for application logging involves keeping sensitive data out of your logs. Examples of such data include (but are not limited to) the following:

  • Financial data such as card numbers, pins, bank accounts, etc.
  • Passwords or application secrets.
  • Any data that can be used to identify a person such as email addresses, names, phone numbers, addresses, identification numbers and more.
  • Medical records
  • Biometric data, and more.

Including sensitive data in logs can lead to data breaches, identity theft, unauthorized access, or other malicious activities which could damage trust in your business, sometimes irreparably. It could also expose your business to fines and other penalties under regulations such as the GDPR, PCI DSS, and HIPAA. To prevent such incidents, it's crucial to always sanitize your logs to ensure such data does not accidentally sneak in.

You can adopt several practices to keep sensitive data out of your logs, but we cannot discuss them all here. We'll focus only on Log redaction, a technique for identifying and removing sensitive data from the logs, while preserving the relevant information needed for troubleshooting or analysis. Pino uses the fast-redact package to provide log redaction capabilities for Node.js applications.

For example, you might have a user object with the following structure:

 
const user = {
  id: 'johndoe',
  name: 'John Doe',
  address: '123 Imaginary Street',
  passport: {
    number: 'BE123892',
    issued: 2023,
    expires: 2027,
  },
  phone: '123-234-544',
};

If this object is logged as is, you will expose sensitive data such as the user's name, address, passport details, and phone number:

 
logger.info({ user }, 'User updated');
Output
{
  "level": "info",
  "time": 1677660968266,
  "pid": 377737,
  "hostname": "fedora",
  "user": {
    "id": "johndoe",
    "name": "John Doe",
    "address": "123 Imaginary Street",
    "passport": {
      "number": "BE123892",
      "issued": 2023,
      "expires": 2027
    },
    "phone": "123-234-544"
  },
  "msg": "User updated"
}

To prevent this from happening, you must set up your logger instance in advance to redact the sensitive fields. Here's how:

 
const pino = require('pino');
module.exports = pino({
  level: process.env.PINO_LOG_LEVEL || 'info',
  formatters: {
    level: (label) => {
      return { level: label };
    },
  },
  redact: ['user.name', 'user.address', 'user.passport', 'user.phone'],
});

The redact option above is used to specify an array of fields that should be redacted in the logs. The above configuration will replace the name, address, passport, and phone fields in any user object supplied at log point with a [Redacted] placeholder. This way, only the id field is decipherable in the logs:

 
{
  "level": "info",
  "time": 1677662887561,
  "pid": 406515,
  "hostname": "fedora",
  "user": {
    "id": "johndoe",
    "name": "[Redacted]",
    "address": "[Redacted]",
    "passport": "[Redacted]",
    "phone": "[Redacted]"
  },
  "msg": "User updated"
}

You can also change the placeholder string using the following configuration:

 
module.exports = pino({
  redact: {
    paths: ['user.name', 'user.address', 'user.passport', 'user.phone'],
    censor: '[PINO REDACTED]',
  },
});
Output
{
  "level": "info",
  "time": 1677663111963,
  "pid": 415221,
  "hostname": "fedora",
  "user": {
    "id": "johndoe",
    "name": "[PINO REDACTED]",
    "address": "[PINO REDACTED]",
    "passport": "[PINO REDACTED]",
    "phone": "[PINO REDACTED]"
  },
  "msg": "User updated"
}
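Pino also accepts a function for `censor`, which lets you mask values instead of replacing them wholesale. The helper below sketches the kind of function you might pass; the masking scheme and its name are illustrative:

```javascript
// Keep the last four characters and mask the rest; a helper like this
// could be supplied as `redact: { paths: [...], censor: maskAllButLast4 }`:
function maskAllButLast4(value) {
  const s = String(value);
  return s.length <= 4 ? '****' : '*'.repeat(s.length - 4) + s.slice(-4);
}

console.log(maskAllButLast4('123-234-544')); // → *******-544
```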

Finally, you can decide to remove the fields entirely by specifying the remove option. Reducing the verbosity of your logs might be preferable so they don't take up storage resources unnecessarily.

 
module.exports = pino({
  redact: {
    paths: ['user.name', 'user.address', 'user.passport', 'user.phone'],
    censor: '[PINO REDACTED]',
    remove: true,
  },
});
Output
{
  "level": "info",
  "time": 1677663213497,
  "pid": 419647,
  "hostname": "fedora",
  "user": {
    "id": "johndoe"
  },
  "msg": "User updated"
}

While this is a handy way to reduce the risk of sensitive data being included in your logs, it can be easily bypassed if you're not careful. For example, if the user object is nested inside some other entity or placed at the top level, the redaction filter will not match the fields anymore, and the sensitive fields will make it through.

 
// the current redaction filter will match
logger.info({ user }, 'User updated');
// the current redaction filter will not match
logger.info({ nested: { user } }, 'User updated');
logger.info(user, 'User updated');

You'll have to update the filters to look like this to catch these three cases:

 
module.exports = pino({
  redact: {
    paths: [
      'name',
      'address',
      'passport',
      'phone',
      'user.name',
      'user.address',
      'user.passport',
      'user.phone',
      '*.user.name', // * is a wildcard covering a depth of 1
      '*.user.address',
      '*.user.passport',
      '*.user.phone',
    ],
    remove: true,
  },
});

Of course, you should enforce during code review that objects are logged in a consistent manner throughout your application, but since you can't account for every variation, it's best not to rely on redaction as your primary defense against sensitive data leaking into your logs.

Log redaction should be used as a backup measure that catches problems missed during review. Ideally, don't log objects that may contain sensitive data in the first place. Extracting only the necessary non-sensitive fields to provide context about the event being logged is the best way to reduce the risk of sensitive data making it into your logs.

Logging HTTP requests with Pino

You can use Pino to log HTTP requests in your Node.js web application no matter the framework you're using. Fastify users should note that while logging with Pino is built into the framework, it is disabled by default so you must enable it first.

 
const fastify = require('fastify')({
  logger: true
})

Once enabled, Pino will log all incoming requests to the server in the following manner:

 
{"level":30,"time":1675961032671,"pid":450514,"hostname":"fedora","reqId":"req-1","res":{"statusCode":200},"responseTime":3.1204520016908646,"msg":"request completed"}

If you use some other framework, see the Pino ecosystem page for the specific integration that works with your framework. The example below demonstrates how to use the pino-http package to log HTTP requests in Express:

index.js
const express = require('express');
const logger = require('./logger');
const axios = require('axios');
const pinoHTTP = require('pino-http');

const app = express();

app.use(
  pinoHTTP({
    logger,
  })
);

app.get('/crypto', async (req, res) => {
  try {
    const response = await axios.get(
      'https://api2.binance.com/api/v3/ticker/24hr'
    );
    const tickerPrice = response.data;
    res.json(tickerPrice);
  } catch (err) {
    logger.error(err);
    res.status(500).send('Internal server error');
  }
});

app.listen('4000', () => {
  console.log('Server is running on port 4000');
});

Also, ensure your logger.js file is set up to log to both the standard output and a file like this:

logger.js
const pino = require('pino');

const transport = pino.transport({
  targets: [
    {
      target: 'pino/file',
      options: { destination: `${__dirname}/server.log` },
    },
    {
      target: 'pino-pretty',
    },
  ],
});

module.exports = pino(
  {
    level: process.env.PINO_LOG_LEVEL || 'info',
    timestamp: pino.stdTimeFunctions.isoTime,
  },
  transport
);

Then install the required dependencies using the command below:

 
npm install express axios pino-http

Start the server on port 4000 and make a GET request to the /crypto route through curl:

 
node index.js
 
curl http://localhost:4000/crypto

You'll observe the following prettified log output in the server console, corresponding to the HTTP request:

Output
[15:30:54.508] INFO (291881): request completed
    req: {
      "id": 1,
      "method": "GET",
      "url": "/crypto",
      "query": {},
      "params": {},
      "headers": {
        "host": "localhost:4000",
        "user-agent": "curl/7.85.0",
        "accept": "*/*"
      },
      "remoteAddress": "::ffff:127.0.0.1",
      "remotePort": 36862
    }
    res: {
      "statusCode": 200,
      "headers": {
        "x-powered-by": "Express",
        "content-type": "application/json; charset=utf-8",
        "content-length": "1099516",
        "etag": "W/\"10c6fc-mMUyGYJwdl+yk7A7N/rYiPWqFjo\""
      }
    }
    responseTime: 2848

The server.log file will contain the raw JSON output:

 
cat server.log
Output
{"level":30,"time":"2023-03-03T14:30:54.508Z","pid":291881,"hostname":"fedora","req":{"id":1,"method":"GET","url":"/crypto","query":{},"params":{},"headers":{"host":"localhost:4000","user-agent":"curl/7.85.0","accept":"*/*"},"remoteAddress":"::ffff:127.0.0.1","remotePort":36862},"res":{"statusCode":200,"headers":{"x-powered-by":"Express","content-type":"application/json; charset=utf-8","content-length":"1099516","etag":"W/\"10c6fc-mMUyGYJwdl+yk7A7N/rYiPWqFjo\""}},"responseTime":2848,"msg":"request completed"}
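Because each line in server.log is a standalone JSON object (newline-delimited JSON), the file is easy to post-process programmatically. The sketch below filters a sample of such output for error-level entries; the `sample` string is an illustrative stand-in for the file's contents:

```javascript
// Sketch: filtering NDJSON log output for errors. In Pino's default
// numbering, `error` is level 50, so we keep entries at 50 or above.
const sample = [
  '{"level":30,"time":"2023-03-03T14:30:54.508Z","msg":"request completed"}',
  '{"level":50,"time":"2023-03-03T14:31:02.120Z","msg":"request errored"}',
].join('\n');

const errors = sample
  .split('\n')
  .filter(Boolean) // skip any trailing empty line
  .map((line) => JSON.parse(line))
  .filter((entry) => entry.level >= 50);

console.log(errors.length); // number of error-level entries
```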

You can further customize the output of the pino-http module by taking a look at its API documentation.
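For example, recent versions of pino-http accept a `customLogLevel` function that picks the level for each completed request based on the response. The sketch below uses illustrative thresholds (not part of the setup above) to map server errors to `error` and client errors to `warn`:

```javascript
// Sketch: choosing a log level per response, assuming pino-http's
// documented customLogLevel(req, res, err) option.
const customLogLevel = (req, res, err) => {
  if (err || res.statusCode >= 500) return 'error';
  if (res.statusCode >= 400) return 'warn';
  return 'info';
};

// Passed to the middleware alongside the logger:
// app.use(pinoHTTP({ logger, customLogLevel }));
```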

Centralizing and monitoring your Node.js logs

One of the main advantages of logging in a structured format is the ability to ingest them into a centralized logging system to be indexed, searched, and analyzed efficiently. By consolidating all log data into a central location, you will gain a holistic view of your systems' health and performance, making it easier to identify patterns, spot anomalies, and troubleshoot issues.

Centralizing logs also simplifies compliance efforts by providing a single source of truth for auditing and monitoring purposes. In addition, it helps to ensure that the relevant logs are properly retained and easily accessible for any regulatory or legal audits.

Furthermore, with the right tools, centralizing logs can enable real-time alerting and proactive monitoring, allowing you to detect and respond to issues before they become critical. This can significantly reduce downtime and minimize the impact on your organization's operations.

Now that you've configured Pino in your Node.js application to output structured logs, the next step is to centralize your logs in a log management system so that you can reap the benefits of logging in a structured format. Better Stack is one such solution that can tail your logs, analyze and visualize them, and help with alerting when certain patterns are detected.


There are several ways to get your logs from your Node.js application into Better Stack, but one of the easiest ways is to use its Pino transport like this:

logger.js
const transport = pino.transport({
  targets: [
    {
      target: 'pino/file',
      options: { destination: `${__dirname}/app.log` },
    },
    {
      target: '@logtail/pino',
      options: { sourceToken: '<your_better_stack_source_token>' },
    },
    {
      target: 'pino-pretty',
    },
  ],
});

Note that you need to install the @logtail/pino package first like this:

 
npm install @logtail/pino

With this configuration in place, your logs will be centralized in Better Stack, and you can view them in real time through the live tail page. You can also filter them using any of the attributes in the logs, and create automated alerts to notify you of significant events (such as a spike in errors).


Final thoughts and next steps

In this article, we've provided a comprehensive overview of Node.js logging with Pino by discussing its key features, and how to configure and customize it for your specific needs. We hope that the information contained in this guide has been helpful in demystifying logging with Pino and how to use it effectively in your Node.js applications.

It is impossible to learn everything about Pino and its capabilities in one article, so we highly recommend consulting the official documentation for more information on its basic and advanced features.

Thanks for reading, and happy logging!

Article by
Ayooluwa Isaiah
Ayo is the Head of Content at Better Stack. His passion is simplifying and communicating complex technical ideas effectively. His work has been featured in several esteemed publications, including LWN.net, Digital Ocean, and CSS-Tricks. When he's not writing or coding, he loves to travel, bike, and play tennis.
This work is licensed under a Creative Commons Attribution-NonCommercial-ShareAlike 4.0 International License.
