
A Comprehensive Guide to Logging in Docker

Better Stack Team
Updated on March 17, 2023

Containers have become an increasingly popular way to deploy applications due to their portability and reproducibility across different environments. However, one important consideration when containerizing an application is logging. Logging plays a critical role in troubleshooting, identifying trends, and optimizing application performance. In the case of Docker containers, logging presents unique challenges as they are ephemeral and logs stored in the container's filesystem are lost once the container is terminated. To address this, it's important to transport logs to a more permanent location.

This article will cover the basics of logging in Docker containers, including how to access and view log messages, as well as crafting an optimal logging strategy tailored to your application's needs, and transporting your logs to a log management service for further processing and analysis. Note that this tutorial will not cover the logs generated by the docker daemon itself, as their management falls outside the scope of this article.


🔭 Want to centralize and monitor your Docker logs?

Head over to Logtail and start ingesting your logs in 5 minutes.

Prerequisites

Before you proceed with this tutorial, you will need access to a system that supports Docker (v20.10 or later), preferably a Linux server that includes a non-root user with sudo access. A basic understanding of how Docker containers work is also assumed.

Setting up a demo container (optional)

To demonstrate the concepts described in this article, we will use a simple NGINX hello world application image to set up a Docker container. If you want to follow along, use the docker pull command with the image name (karthequian/helloworld) to download it from the Docker registry to your server.

 
docker pull karthequian/helloworld:latest

You'll see the program's output appear on the screen:

Output
Using default tag: latest
latest: Pulling from karthequian/helloworld
83ee3a23efb7: Pull complete
db98fc6f11f0: Pull complete
f611acd52c6c: Pull complete
ce6148ee5b27: Pull complete
f41d580b4c45: Pull complete
272afdecd73d: Pull complete
603e831d3bf2: Pull complete
4b3f00fe862f: Pull complete
1813c5daf2e4: Pull complete
4db7ca47ea28: Pull complete
37d652721feb: Pull complete
e9bce6aacaff: Pull complete
50da342c2533: Pull complete
Digest: sha256:48413fdddeae11e4732896e49b6d82979847955666ed95e4d6e57b433920c9e1
Status: Downloaded newer image for karthequian/helloworld:latest
docker.io/karthequian/helloworld:latest

The output above shows the process of fetching an image and storing it locally so that it's available for running containers. If you get a permissions error, you may need to prefix the command above with sudo or, better still, add the current user to the docker group:

 
sudo usermod -a -G docker <username>

Once you've done that, log out and log back into your system again, and the docker pull command should work without prefixing it with sudo. You can subsequently use the docker images command to verify that the karthequian/helloworld image is present:

 
docker images

You should see the following output:

Output
REPOSITORY               TAG       IMAGE ID       CREATED         SIZE
karthequian/helloworld   latest    a0d8db65e6fb   13 months ago   227MB

At this point, you can create and run a new container from the karthequian/helloworld container image by executing the docker run command as shown below:

 
docker run -p 80:80/tcp -d "karthequian/helloworld:latest"

The argument to the -p flag maps port 80 in the container to port 80 on your machine, while the -d option runs the container in the background and prints its ID (a typical setup for a web service). Assuming there were no errors, the container ID will be displayed on your screen.

Output
39c0ffde9c30600629742357b5f01b278eba9ade7f4c96b9e3883e8fa2b52243

If you get the error below, it means that some other program is already using port 80 on your server, so you should stop that program before rerunning the command.

Output
docker: Error response from daemon: driver failed programming external connectivity on endpoint nostalgic_heisenberg (f493fdf78c94adc66a248ac3fd62e911c1d477dda62398bd36cd40b323605159): Error starting userland proxy: listen tcp4 0.0.0.0:80: bind: address already in use.

You can verify the status of your Docker containers with the docker ps command:

 
docker ps

You'll see the program's output appear on the screen:

Output
CONTAINER ID   IMAGE                           COMMAND              CREATED          STATUS          PORTS                               NAMES
39c0ffde9c30   karthequian/helloworld:latest   "/runner.sh nginx"   16 seconds ago   Up 15 seconds   0.0.0.0:80->80/tcp, :::80->80/tcp   inspiring_lovelace

The ps command displays a few details about your running containers: the container ID, the image running inside the container, the command used to start the container, its creation time, its status, the exposed ports, and the auto-generated name of the container.

If you visit http://<your_server_ip> in your browser, you'll see the sample NGINX hello world app.

NGINX hello world application

You can reload the page a few times so that several log entries are generated. At this point, you're all set up to view and configure the logs generated by your running containers.
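
If you're working from a terminal on the Docker host itself, you can also generate a few entries without a browser. The loop below is a small sketch that sends five requests to the published port 80 on localhost:
 
for i in $(seq 1 5); do curl -s -o /dev/null http://localhost; done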

Viewing container logs with the Docker CLI

Docker provides a docker logs command for viewing the log messages produced within a container. You can also use the docker service logs command to display the logs produced by all containers participating in a service but we won't demonstrate that in this article.

By default, the logs command displays all the messages sent to the standard output (stdout) and standard error (stderr) streams within a container, so if the services running in the container do not log to either stream (perhaps because they log to files instead), you may not see any useful output from the command.

 
docker logs <container_id>

For the NGINX hello world container, you will observe the following output:

Output
217.138.222.108 - - [21/Feb/2022:12:42:37 +0000] "GET / HTTP/1.1" 200 4369 "-" "Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/98.0.4758.102 Safari/537.36"
217.138.222.108 - - [21/Feb/2022:12:42:37 +0000] "GET / HTTP/1.1" 200 4369 "-" "Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/98.0.4758.102 Safari/537.36"
217.138.222.108 - - [21/Feb/2022:12:42:39 +0000] "GET /favicon.ico HTTP/1.1" 404 564 "http://168.119.119.25/" "Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/98.0.4758.102 Safari/537.36"
217.138.222.108 - - [21/Feb/2022:12:42:39 +0000] "GET /favicon.ico HTTP/1.1" 404 564 "http://168.119.119.25/" "Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/98.0.4758.102 Safari/537.36"
217.138.222.108 - - [21/Feb/2022:12:44:25 +0000] "GET / HTTP/1.1" 200 4369 "-" "Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/98.0.4758.102 Safari/537.36"
217.138.222.108 - - [21/Feb/2022:12:44:25 +0000] "GET / HTTP/1.1" 200 4369 "-" "Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/98.0.4758.102 Safari/537.36"
. . .

Notice how we're able to view the NGINX logs for the application through the docker logs command despite the fact that NGINX writes its logs to access.log and error.log files in the /var/log/nginx directory by default. It works because the helloworld image configures NGINX to send the access log to /dev/stdout and the error log to /dev/stderr instead.

The official NGINX Docker image takes a different approach to achieve the same effect. It creates a symbolic link from /var/log/nginx/access.log to /dev/stdout and another one from /var/log/nginx/error.log to /dev/stderr so that the logs are collected by Docker.

These are two approaches you can use to ensure that the logs you need are accessible through docker logs.
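
For instance, the symbolic link approach used by the official NGINX image boils down to running commands like the ones below at image build time (the paths are NGINX's defaults; adapt them to wherever your own application writes its log files):
 
ln -sf /dev/stdout /var/log/nginx/access.log
ln -sf /dev/stderr /var/log/nginx/error.log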

Filtering your logs with the Docker CLI

If your application generates a lot of logs, the output of the docker logs command above may be huge, so it's best to limit it using the available options:

1. Display only the most recent logs

The --tail option can be used to display only the latest entries:

 
docker logs --tail 100 <container_id> # show the last 100 lines

2. Limit the log output to a specific time range

Docker also provides the option to limit the log output by time so that you can quickly examine the entries that came through within the period you're interested in without being distracted by other entries.

There are two options for this task: --since and --until. The former specifies a lower time limit, while the latter sets an upper time limit for which logs should be displayed. The argument to both flags can be an RFC 3339 date, a UNIX timestamp, or a Go duration string (e.g. 1m30s, 3h).

For example, the command below will only display log entries that were produced in the last 15 minutes:

 
docker logs --since 15m <container_id>

This one will show all logs except those produced in the last hour:

 
docker logs --until 1h <container_id>

You can also derive a more specific output by combining both --since and --until. For example, the command below only prints the entries logged between 12PM and 1PM on Feb 16, 2023:

 
docker logs --since 2023-02-16T12:00:00 --until 2023-02-16T13:00:00 <container_id>

3. View container logs in real-time

The docker logs command displays only the log entries present at the time of execution. If you want to continue streaming output from the container's stdout and stderr, use the --follow option:

 
docker logs --follow <container_id>

You can also combine this with the --tail or --since options (or both) to narrow down the initial output while still streaming subsequent entries.

 
docker logs --follow --since 15m <container_id>
 
docker logs --follow --tail 100 <container_id>

4. Filter Docker logs with grep

You can also filter the contents of docker logs through grep if you're only searching for lines containing a specific pattern:

 
docker logs <container_id> | grep <pattern>
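
Keep in mind that docker logs forwards the container's stderr stream to your terminal's stderr, which a plain pipe will not capture. Redirecting stderr into stdout first ensures that grep sees every line; the "404" pattern below is only an example:
 
docker logs <container_id> 2>&1 | grep "404"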

Logging drivers in Docker

Docker uses logging drivers to retrieve logs from running containers and services. The default logging driver for Docker containers is the json-file driver, which caches container logs in JSON format and stores them in files meant to be accessed through the Docker daemon. These files can be found at the path below, but take care not to modify them with external tools, as doing so may interfere with Docker's logging system.

 
/var/lib/docker/containers/<container_id>/<container_id>-json.log

Also note that the json-file driver does not perform log rotation by default, so you must configure it to do so to avoid running out of disk space (see the next section). Some of the other logging drivers supported by Docker are listed below:

  • none: container logs are disabled. This causes docker logs to stop producing output.
  • local: logs are stored in a custom format designed for minimal overhead. Preserves up to 100MB of logs by default (20 MB in a maximum of five files).
  • syslog: writes log entries to the syslog facility on the host machine. You can read our tutorial on syslog if you are not familiar with it.
  • journald: writes log messages to the journald daemon on the host machine. See our tutorial on journald for more details.
  • fluentd: writes log messages to the fluentd daemon running on the host machine.
  • gelf: sends log entries to a Graylog Extended Log Format (GELF) endpoint such as Graylog or Logstash.
  • awslogs: sends log entries to AWS CloudWatch.
  • gcplogs: sends log entries to the Google Cloud Platform.
  • logentries: sends log entries to Rapid7 Logentries.
  • splunk: uses the HTTP Event Collector to write log messages to Splunk.
  • etwlogs (Windows only): writes log entries as Event Tracing for Windows (ETW) events.
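
To see which logging drivers are available on your own Docker installation, you can list the daemon's log plugins. The Go template below should print them all on one line:
 
docker info --format '{{.Plugins.Log}}'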

You can check the current default logging driver for the Docker daemon through the command below:

 
docker info --format '{{.LoggingDriver}}'

This should yield the output below:

Output
json-file

You can also find out the logging driver for a running container by using the following docker inspect command:

 
docker inspect -f '{{.HostConfig.LogConfig.Type}}' <container_id>
Output
json-file

Configuring logging drivers in Docker

In this section, we will configure the json-file driver for our running Docker container for demonstration purposes. As mentioned earlier, log rotation and compression are not performed by default so we'll enable both options to ensure that the log files produced by Docker are kept to manageable sizes.

Start by opening or creating the daemon configuration file in /etc/docker/daemon.json:

 
sudo nano /etc/docker/daemon.json

Populate the file with the following contents:

/etc/docker/daemon.json
{
  "log-driver": "json-file",
  "log-opts": {
    "max-size": "10m",
    "max-file": "3",
    "compress": "true"
  }
}

The log-driver field sets the logging driver to json-file while the log-opts property configures the supported options for the specified logging driver. Note that each property in the log-opts object must have a string value (including boolean and numeric values as seen above).

Here's a brief explanation of each of the json-file options above:

  • max-size: the maximum size of the log file before it is rotated, specified as an integer followed by a unit modifier (k for kilobytes, m for megabytes, or g for gigabytes). Defaults to -1 (unlimited).
  • max-file: the maximum number of log files that can be present. When log rotation creates excess files, the oldest one will be removed. The max-size option must also be specified for this setting to take effect.
  • compress: if this is set to true, rotated log files will be compressed to save disk space.

The complete list of all the options for the json-file driver is defined in the official documentation, so be sure to check them out.

When you modify the docker daemon configuration file as above, you must restart the docker service to apply the changes to newly created containers. Existing containers will not adopt the new configuration until they are recreated using docker run.

 
sudo systemctl restart docker

After restarting the docker service, all your running containers will be terminated, so you must start them again with docker run. Do explore the live restore option if you want to keep your containers running even when the daemon becomes unavailable.

 
docker run -p 80:80/tcp -d "karthequian/helloworld:latest"
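
To confirm that the recreated container picked up the new settings, you can inspect its log configuration. The output should name the json-file driver along with the options you set in daemon.json:
 
docker inspect -f '{{.HostConfig.LogConfig}}' <container_id>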

Prior to Docker 20.10, the docker logs command only worked when the logging driver was set to local, json-file, or journald, but this changed with the introduction of dual logging in Docker 20.10. This feature allows docker logs to read container logs locally in a consistent format regardless of the logging driver in use. It's enabled by default, but if you prefer to disable it, use the cache-disabled property shown below.

/etc/docker/daemon.json
{
  "log-driver": "syslog",
  "log-opts": {
    "syslog-facility": "daemon",
    "cache-disabled": "true"
  }
}

Note that the cache-disabled option does not affect the local, json-file, or journald drivers since they do not use the dual logging feature. When cache-disabled is true for any other logging driver, the docker logs command will stop working:

 
docker logs 45f89252a86b
 
Error response from daemon: configured logging driver does not support reading

Overriding the default logging driver per container

It is sometimes useful to set a logging driver (or options) other than the default for a specific container. This can be done through the --log-driver and --log-opt flags when using docker container create or docker run:

 
docker run --log-driver syslog --log-opt syslog-address=udp://1.2.3.4:1111 -p 80:80/tcp -d "karthequian/helloworld:latest"
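
The same flags can also be used to override individual options while keeping the default driver. For example, the sketch below limits the size of a single container's json-file logs without touching the daemon-wide configuration:
 
docker run --log-driver json-file --log-opt max-size=10m --log-opt max-file=3 -p 80:80/tcp -d "karthequian/helloworld:latest"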

Choosing a logging delivery mode

The logging delivery mode for a Docker container refers to how it prioritizes the delivery of incoming log messages to the configured driver. The following two modes are supported, and they can be used with any logging driver:

1. Blocking mode

In blocking mode (the default), the delivery of log messages to the driver will block all other operations that the container is performing, which may impact its performance, especially with drivers that write to a remote service. The main advantage of blocking mode is that it guarantees that each log message will be delivered to the driver.

/etc/docker/daemon.json
{
  "log-driver": "json-file",
  "log-opts": {
    "mode": "blocking"
  }
}

You can keep using blocking mode when the logging driver in use writes to the local filesystem, since the latency these drivers introduce is unlikely to be significant.

2. Non-blocking mode

In non-blocking mode, incoming log entries are stored in a memory buffer until the configured logging driver is available to process them. Once the logs are processed, they are cleared from the buffer to make way for new entries.

/etc/docker/daemon.json
{
  "log-driver": "syslog",
  "log-opts": {
    "mode": "non-blocking"
  }
}

The advantage of this mode is that the application's performance is not impacted. However, it also introduces the possibility of losing log entries if the memory buffer fills up, since existing entries are discarded before they can be processed. To decrease the likelihood of losing log messages in containers that generate a significant amount of logs, you can increase the maximum buffer size from its default (1MB) through the max-buffer-size property:

/etc/docker/daemon.json
{
  "log-driver": "syslog",
  "log-opts": {
    "mode": "non-blocking",
    "max-buffer-size": "5m"
  }
}
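
If you'd rather not change the daemon-wide defaults, both options can also be set per container through the --log-opt flag, as in the sketch below:
 
docker run --log-opt mode=non-blocking --log-opt max-buffer-size=5m -p 80:80/tcp -d "karthequian/helloworld:latest"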

Choosing a Docker logging strategy

There are several ways to aggregate, store and centralize your Docker container logs. Thus far, we've only covered the native Docker logging driver approach in this article since it's the most suitable for many everyday use cases. However, that strategy won't fit every project, so it's a good idea to be aware of alternative methods so that you can make an informed decision about the right logging approach for your application.

1. Using application-based logging

In this approach, the application itself handles its own logging through a logging framework. For example, a Node.js application could use Winston or Pino to format and transport its logs to a log management solution for storage and further processing.

This approach provides the greatest amount of control for application developers to generate, format, and transport the logs as they see fit. However, this control has a performance cost since everything (including the complexities of log delivery) is done at the application level.

Another consideration is that logs must be transported to a remote log management service (such as Logtail) or a data volume outside the container's filesystem to prevent loss on container termination.

2. Using a logging driver

We discussed this approach in great detail throughout this tutorial. To recap, it involves logging to the stdout and stderr streams so that they are picked up by Docker's log collector, and using a logging driver to manage how the logs are stored or transported.

This is the native way to log in Docker, and it reduces the impact of logging on your application's performance. The main downside to this approach is that it creates a dependency between your container and its host, but this is tolerable in most situations.

3. Using data volumes

If you don't want your container logs to be lost once the container is terminated, you can link a directory inside the container to a directory on the Docker host where the log entries will be persisted. This ensures that your logs are retained even when the container is destroyed, and it also makes it easy to aggregate logs from multiple containers in one place so you can copy them or ship them to a remote location.
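
A minimal sketch of this approach is shown below. It assumes a hypothetical application that writes its log files to /var/log/app inside the container, and it persists them to /var/log/myapp on the host:
 
docker run -d -v /var/log/myapp:/var/log/app <image_name>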

4. Using a dedicated logging container

Setting up a dedicated container whose sole purpose is to aggregate and centralize Docker logs could be a great solution, especially when deploying a microservice architecture. This approach removes the dependency on the host machine and makes it easier to scale up your logging infrastructure by simply adding a new container when needed.
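
One way to sketch this setup is with the fluentd logging driver: a single collector container receives log entries over the network, and the application containers point at it. The image name and port below are illustrative; any collector supported by a remote logging driver would work:
 
docker run -d --name log-collector -p 24224:24224 fluent/fluentd:latest
 
docker run --log-driver fluentd --log-opt fluentd-address=localhost:24224 -d "karthequian/helloworld:latest"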

5. Using the sidecar approach

A technique used for more complicated deployments is the sidecar approach in which each Docker container has its own dedicated logging container (they are considered a single unit). The main advantage is that the logging approach can be tailored for each application, and it offers greater transparency regarding the origin of each log entry.

A drawback to this strategy is that setting up a logging container per application consumes more resources than the dedicated logging container approach, and the added complexity may not be worthwhile for smaller deployments. You also need to ensure that both containers are managed as a single unit (for example, with Docker Compose) to avoid incomplete or missing log data.
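
As a rough illustration, the application container and its sidecar can share a volume that holds the log files, with the sidecar reading from it and shipping the entries onward. All names and paths below are illustrative, and in practice you would declare both containers together in a Docker Compose file:
 
docker volume create app-logs
 
docker run -d --name app -v app-logs:/var/log/app <app_image>
 
docker run -d --name app-log-shipper -v app-logs:/var/log/app:ro <log_shipper_image>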

Final thoughts

In conclusion, this article offered a comprehensive overview of logging in Docker containers to facilitate the quick and efficient deployment of containerized applications. It covered the fundamentals of Docker container logging, including log storage and retrieval, as well as the configuration of logging drivers.

Additionally, alternative approaches were discussed for an optimal logging setup in Docker. To delve deeper into this topic, we recommend exploring other articles in this series or consulting the official documentation.

Thank you for reading, and best of luck with your logging endeavors!

Licensed under CC-BY-NC-SA

This work is licensed under a Creative Commons Attribution-NonCommercial-ShareAlike 4.0 International License.