Effective logging is crucial for any application, but
even more so for containerized environments, which are often distributed.
Docker provides built-in logging tools to help you monitor, troubleshoot, and
gain insights from your containerized services. This guide explores those tools
and shows you how to centralize your container logs for
advanced analysis and management.
Let's get started!
Prerequisites
To follow along with this guide, ensure you have the following:
Basic command-line skills.
A recent version of Docker
installed on your system.
A running Docker container that's already generating logs. If you need a
service that continually generates logs for testing purposes, you can use the
official Vector image:
docker run --name vector-demo -d timberio/vector:latest-alpine
Without providing a configuration file, this vector-demo container will generate
sample log data indefinitely.
Side note: Centralize your container logs so debugging doesn’t scale painfully
Docker’s built-in tools are great for a single container, but once you have a few services running, Better Stack makes it easy to search and tail logs across everything in one place instead of hopping between terminals.
Understanding Docker container logs
Docker container logs are the records of events and messages generated by
the applications running inside your containers. These typically include status
updates, debugging information, and error messages.
Whenever a containerized service writes such data to the standard output or
standard error streams, Docker captures them as container logs and makes them
accessible for monitoring, troubleshooting, and debugging purposes.
By default, Docker stores logs in JSON format on the host system. These logs are
saved in a specific location, which typically looks like this:
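/var/lib/docker/containers/<container-id>/<container-id>-json.log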
Note that the exact location might vary depending on your Docker configuration and
operating system, and you may also need sudo privileges to access the log
files.
To find the exact path of a container's log file, you can use the following
command:
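docker inspect --format '{{.LogPath}}' <container>
Viewing that file (for example, with sudo tail) reveals entries like the following: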
{"log":" \"appname\": \"BryanHorsey\",\n","stream":"stdout","time":"2025-05-29T05:51:23.055677329Z"}
{"log":" \"facility\": \"lpr\",\n","stream":"stdout","time":"2025-05-29T05:51:23.055692882Z"}
{"log":" \"hostname\": \"some.associates\",\n","stream":"stdout","time":"2025-05-29T05:51:23.055705333Z"}
{"log":" \"message\": \"#hugops to everyone who has to deal with this\",\n","stream":"stdout","time":"2025-05-29T05:51:23.055716046Z"}
{"log":" \"msgid\": \"ID282\",\n","stream":"stdout","time":"2025-05-29T05:51:23.055724085Z"}
{"log":" \"procid\": 4741,\n","stream":"stdout","time":"2025-05-29T05:51:23.055731325Z"}
{"log":" \"severity\": \"info\",\n","stream":"stdout","time":"2025-05-29T05:51:23.055740056Z"}
{"log":" \"timestamp\": \"2025-05-29T05:51:23.054Z\",\n","stream":"stdout","time":"2025-05-29T05:51:23.055751695Z"}
{"log":" \"version\": 2\n","stream":"stdout","time":"2025-05-29T05:51:23.055763635Z"}
{"log":"}\n","stream":"stdout","time":"2025-05-29T05:51:23.055775068Z"}
Each JSON entry usually has three properties:
log: The actual log message as captured from the Docker container.
stream: Indicates the stream where the log originated (stdout or
stderr).
time: The time of log collection in ISO 8601 format.
In some cases, you will also see an attrs field containing extra attributes
provided through the --log-opt flag when creating the container, such as
environment variables and labels.
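For example, with the default json-file driver you can attach a container label to every log entry via the labels log option (the app label key below is just an illustration):
docker run -d --label app=vector-demo --log-opt labels=app timberio/vector:latest-alpine
Entries written by this container will then include an attrs object such as {"app":"vector-demo"}.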
Manually inspecting JSON log files is one way to view logs, but Docker provides
more convenient tools to access and manage logs directly from the command line.
We'll explore these built-in commands in the next section.
Viewing Docker logs in the terminal
You can easily view logs from a running container through the docker logs
command. It has the following syntax:
docker logs [<options>] <container>
The <container> placeholder can be either the container name or ID, while
<options> are optional flags to customize the output.
Here's its basic usage:
docker logs <container>
As in:
docker logs vector-demo
This command retrieves the entire log output stored for a container, which
can include all lines written to stdout and stderr since the container
started:
Output
. . .
{
  "appname": "devankoshal",
  "facility": "ntp",
  "hostname": "names.realestate",
  "message": "#hugops to everyone who has to deal with this",
  "msgid": "ID816",
  "procid": 7831,
  "severity": "warning",
  "timestamp": "2025-05-29T05:54:19.053Z",
  "version": 1
}
{
  "appname": "CrucifiX",
  "facility": "syslog",
  "hostname": "names.city",
  "message": "We're gonna need a bigger boat",
  "msgid": "ID862",
  "procid": 8576,
  "severity": "err",
  "timestamp": "2025-05-29T05:54:20.053Z",
  "version": 2
}
If the docker logs command results in an error or empty output, see our
troubleshooting tips below for potential
solutions.
If your logs don't already include timestamps, you can use the -t/--timestamps
option to prepend ISO 8601 timestamps to each log entry:
docker logs -t vector-demo
In cases where your application already includes timestamps in the log output,
this option may be redundant:
If you'd like to see the attributes contained in the attrs field (if
available), you can use the --details option:
docker logs --details <container>
Tailing Docker logs
For real-time monitoring of container logs, use the -f/--follow option. This
streams new log entries as they are generated, similar to the tail -f command:
docker logs -f <container>
Filtering Docker logs
The output of docker logs can become overwhelming, especially for long-running
containers with extensive logs.
To make log inspection more efficient, Docker provides several built-in options
to filter and limit log output.
For starters, to display only the last N entries, you can use the
-n/--tail option:
docker logs -n 10 <container> # displays the last 10 log lines
You can also limit the log output to a specific time range with --since and
--until. The former displays log entries that occurred after the provided
timestamp, while the latter displays log entries that occurred before the
provided timestamp.
docker logs --since 2025-05-29T09:00:00 <container> # only show logs produced after this time
docker logs --since 15m <container> # only show logs produced within the last 15 minutes
docker logs --until 2025-05-29T09:00:00 <container> # only show logs produced before this time
docker logs --until 1h <container> # only show logs produced more than 1 hour ago
You can also combine the --since and --until options to filter logs within a
specific time range. For example, the command below only prints the entries
logged between 12:45 and 13:00 on December 17th, 2024:
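docker logs --since 2024-12-17T12:45:00 --until 2024-12-17T13:00:00 <container>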
The --since and --tail options can also be paired with the -f/--follow flag to
narrow down the initial output while still streaming all subsequent entries. The
--follow flag does not work with --until, however:
docker logs -f --since 15m <container>
docker logs -f --tail 10 <container>
Beyond the built-in filtering options, you can also filter Docker logs through
standard shell utilities and operations.
For instance, you can show only the logs sent to the standard output with:
docker logs <container> 2>/dev/null
This redirects the stderr output (file descriptor 2) to /dev/null, effectively
discarding it. As a result, only the stdout logs will be displayed.
Similarly, you can display only stderr logs with:
docker logs <container> 2>&1 >/dev/null
You can also pipe the docker logs output to shell commands like grep, awk,
and similar to search the text and display only the records that match a
specific pattern:
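docker logs <container> 2>&1 | grep error
Here, 2>&1 merges stderr into stdout so that entries from both streams reach the pipe, and grep then prints only the lines containing "error" (swap in whatever pattern you're after).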
Side note: Stop piping docker logs to grep and find answers instantly
If you’re constantly filtering by time ranges and patterns, Better Stack gives you fast querying and live tail for all containers, so you can track down errors and weird spikes without assembling command line pipelines.
Troubleshooting Docker logs output
If the docker logs command produces an error or returns empty output, it may
be due to a variety of reasons. Here are two common causes and solutions:
1. Check if dual logging is disabled with remote logging drivers
When using
remote logging drivers
like splunk, gcplogs, or awslogs, Docker's
dual logging
functionality typically acts as a local cache, allowing the docker logs
command to continue working. However, if dual logging is disabled, you may
encounter the following error:
Output
Error response from daemon: configured logging driver does not support reading
To diagnose this, first inspect the container to confirm that it is indeed using a
remote logging driver:
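docker inspect -f '{{.HostConfig.LogConfig.Type}}' <container>
This prints the name of the logging driver the container was started with (for example, splunk).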
Then check the cache-disabled option for the container to see the status of
the dual logging cache:
docker inspect -f '{{ index .HostConfig.LogConfig.Config "cache-disabled" }}' <container>
Output
true
If the output is true, the dual logging cache is disabled for that container.
If the output is false or empty, dual logging cache is enabled (or not
explicitly configured, in which case the daemon's default applies).
To re-enable dual logging for a specific container, you must stop it first, then
explicitly set the cache-disabled option to false using the --log-opt flag
with docker run:
docker run -d --log-opt cache-disabled=false <image>
To enable dual logging for all new containers, you can edit the Docker daemon
configuration file as follows:
sudo nano /etc/docker/daemon.json
/etc/docker/daemon.json
{
  "log-driver": "splunk",
  "log-opts": {
    "cache-disabled": "false",
    . . .
  }
}
This sets cache-disabled to false globally (alternatively, you can remove the
property entirely, since dual logging is enabled by default) so that all containers
created after this change will have dual logging enabled unless explicitly overridden.
You'll need to restart the Docker daemon for the change to take effect:
sudo systemctl restart docker
Note that the cache-disabled setting only applies to remote logging drivers.
Local drivers like json-file, local, or journald are unaffected, so
docker logs will continue to work.
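If you're unsure which logging driver your daemon uses by default, you can check with:
docker info --format '{{.LoggingDriver}}'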
2. Check if the containerized application logs to stdout or stderr
If the containerized application does not write logs to stdout or stderr,
Docker may not capture any logs.
Some applications are configured to write logs directly to files inside the
container's filesystem. These logs won't be visible with docker logs and will
be lost when the container is removed.
To fix this, you can take two approaches:
If possible, configure the service to send its logs to the standard output or
standard error accordingly. This approach is exemplified in
this custom Nginx image
where the Nginx configuration has been modified to send access logs to
/dev/stdout and error logs to /dev/stderr rather than log files in the
/var/log/nginx directory.
If such configuration options do not exist for the containerized service, you
can create a symbolic link from the generated log files to either
/dev/stdout or /dev/stderr as appropriate. This is the solution adopted
by the
official Nginx Docker image.
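For instance, the official Nginx image implements this with two symlinks created at
build time. A minimal Dockerfile sketch of the same idea looks like this (the base
image is a placeholder, and the /var/log/nginx paths mirror the Nginx example, so
swap them for your service's own log files):
FROM <your-base-image>
# Point the files the application writes its logs to at the container's standard streams
RUN ln -sf /dev/stdout /var/log/nginx/access.log \
    && ln -sf /dev/stderr /var/log/nginx/error.log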
With either setup, the logs produced by your containerized services should now
be accessible through the Docker CLI.
Following these steps should resolve most issues with your Docker container log
output. For further debugging, you can also check Docker daemon logs
(sudo journalctl -u docker) to ensure no underlying issues exist.
Viewing Docker logs with a GUI
For a more user-friendly experience, viewing Docker container logs through a
graphical user interface can be a convenient alternative to terminal commands.
If you're using Docker Desktop, you can
access your container logs by navigating to the Containers page and
selecting the container of interest.
The Logs tab is the default:
From here, you can read the logs or perform basic searches.
If you desire more functionality or a nicer interface, you can try out a
dedicated Docker log viewer like Dozzle.
Use the command below to download its Docker image locally:
docker pull amir20/dozzle:latest
Output
latest: Pulling from amir20/dozzle
15851880b5d7: Pull complete
d57a2496955d: Pull complete
Digest: sha256:2727b94bb5d2962152020677fed93d182d3791e7c7aaf8ad4b2ccbd45efab09e
Status: Downloaded newer image for amir20/dozzle:latest
docker.io/amir20/dozzle:latest
Afterward, run it in a container and create a volume that mounts the Docker
socket on your host inside the container:
docker run --name dozzle -d --volume=/var/run/docker.sock:/var/run/docker.sock -p 8888:8080 amir20/dozzle:latest
Open your browser and navigate to http://localhost:8888. Select the relevant
container to view its logs, which will update in real-time:
You can view logs from multiple containers side by side by using the Pin as
column feature, accessible by hovering over a container name:
On the far-right panel, a dropdown menu lets you perform several actions, such
as downloading the logs to a file, searching the logs, or filtering by stream or
log level:
Feel free to check out
Dozzle's documentation to learn more
about its features and how to tailor its setup to your specific needs.
Centralizing your Docker container logs
Docker provides basic tools for viewing and managing container logs, but these
are limited when working with multiple containers across distributed
environments. This is where centralized logging comes in.
Centralizing Docker logs addresses these limitations by collecting, storing, and
analyzing logs from all your containers in a single place.
This approach not only helps in identifying issues quickly but also ensures that
your logs are preserved and actionable, even if containers are restarted or
terminated.
One possible solution is Better Stack, an
observability platform with powerful log management features. Logs can be
shipped to Better Stack using Vector, a lightweight
log-forwarding tool.
To explore this, you'll create a container based on the
official Vector image and supply a
configuration file that instructs it to forward the logs it collects to
Better Stack.
If you’d prefer a quick walkthrough of this Vector-based setup, you can follow
along with the video below:
In your Better Stack dashboard, create a new source: provide a suitable name
(e.g., after the service running in the container), choose Docker as the
platform, then scroll down to the bottom of the page and click Connect Source.
After creating the source, you will receive a source token (e.g., qU73jvQjZrNFHimZo4miLdxF) and an ingestion host (e.g., s1315908.eu-nbg-2.betterstackdata.com).
Be sure to save both values, as you’ll need them when configuring log forwarding in your Vector configuration file:
With your source token copied, create a Vector configuration file somewhere on
your filesystem, and populate it with the following contents:
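The exact contents depend on your setup, but a minimal sketch using Vector's docker_logs source and http sink (saved here as vector.yaml, purely as an example) looks roughly like this:
sources:
  docker_containers:
    # Collect logs from every container on the local Docker daemon,
    # except the Vector container itself
    type: docker_logs
    exclude_containers:
      - vector

sinks:
  better_stack:
    # Forward the collected logs to Better Stack over HTTP
    type: http
    inputs:
      - docker_containers
    uri: https://<your_ingesting_host>/
    encoding:
      codec: json
    auth:
      strategy: bearer
      token: <your_betterstack_source_token>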
Make sure to replace <your_ingesting_host> and <your_betterstack_source_token> with the actual ingestion host and source token provided on your Better Stack Sources page.
The docker_containers source configures Vector to collect logs from all
containers running on the host machine, except for the vector container which
we'll set up shortly.
If you want to use an allowlist of images, containers, or labels, you can use
the include_images, include_containers, or include_labels properties. You
will find all the details in the
Vector documentation.
The collected logs are then shipped to Better Stack over HTTP. If you'd like to
process or transform the logs before sending them out, see the
transforms reference.
Once you've saved the file, execute the command below to start the vector
service while mounting the configuration file and the Docker daemon socket:
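Assuming the file is saved as vector.yaml in your current directory, the command looks something like this (the official image reads its configuration from /etc/vector/vector.yaml by default; adjust the mount path if your version expects a different location):
docker run -d --name vector \
  -v /var/run/docker.sock:/var/run/docker.sock \
  -v "$(pwd)/vector.yaml":/etc/vector/vector.yaml:ro \
  timberio/vector:latest-alpine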
In a production setting, you can avoid directly mounting the Docker socket and
use SSH or HTTPS for communication between Vector and the Docker daemon instead.
Alternatively, you can also install Vector directly on the host machine.
Once the vector container is running, return to your Better Stack source and
scroll down to the Verify data collection section. After a few moments, you should
see a Logs received! message, confirming that your container logs are now
being shipped to the service.
Clicking on the Live tail link will take you to a page where you'll see your
container logs streaming in.
In this example, the log entries are coming from an Nginx container I set up separately on my machine using this tutorial.
In your case, they will most likely originate from a different Docker container running on your own setup.
If you want to see what the Live tail experience looks like before clicking
through, here’s a quick demo:
If you’d like to see how dashboards and log visualizations come together in practice, the video below walks through the experience:
Final thoughts
This article has provided you with a comprehensive understanding of Docker's log
management features and how to leverage them to monitor and debug the various
services deployed within your Docker containers.
You also learned how to aggregate logs from multiple containers in one place to
streamline log analysis using advanced search, filtering, and visualization
techniques.
With this knowledge, you can confidently manage and monitor your Docker
container logs whether you're debugging a single container or monitoring a
large-scale deployment.
For further exploration, consider diving into the official
Docker logs reference,
and check out our Docker logging best
practices for improving the performance and
reliability of your container logging setup.