If the docker logs command results in an error or empty output, see our troubleshooting tips below for potential solutions.
Effective logging is crucial for any application, but even more so for containerized environments, which are often distributed.
Docker provides built-in logging tools to help you monitor, troubleshoot, and gain insights from your containerized services. This guide explores those tools and shows you how to centralize your container logs for advanced analysis and management.
Let's get started!
Prerequisites
To follow along with this guide, ensure you have the following:
- Basic command-line skills.
- A recent version of Docker installed on your system.
- A running Docker container that's already generating logs. If you need a service that continually generates logs for testing purposes, you can use the official Vector image:
docker run --name vector-demo -d timberio/vector:latest-alpine
Without a configuration file provided, this vector-demo container will generate sample log data indefinitely.
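To confirm that the container is running and producing output, you can peek at its most recent log lines (the docker logs command is covered in detail later in this guide):
docker logs --tail 5 vector-demo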
Understanding Docker container logs
Docker container logs are the records of events and messages generated by the applications running inside your containers. These typically include status updates, debugging information, and error messages.
Whenever a containerized service writes such data to the standard output or standard error streams, Docker captures it as container logs and makes it accessible for monitoring, troubleshooting, and debugging purposes.
By default, Docker stores logs in JSON format on the host system. These logs are saved in a specific location, which typically looks like this:
/var/lib/docker/containers/<container-id>/<container-id>-json.log
Note that the exact location might vary depending on your Docker configuration and operating system, and you may also need sudo privileges to access the log files.
To find the exact path of a container's log file, you can use the following command:
docker inspect -f '{{.LogPath}}' <container>
/var/lib/docker/containers/67abdd039f5ebfcf22c5a6e437fc82be34c6673479d653fd880049a30d75e116/67abdd039f5ebfcf22c5a6e437fc82be34c6673479d653fd880049a30d75e116-json.log
Once you have the log file path, you can inspect its contents using tools like tail to view the most recent entries:
sudo tail /var/lib/docker/containers/<container-id>/<container-id>-json.log
{"log":"\u003e node --require ./Instrumentation.js server.js\n","stream":"stdout","attrs":{"tag":"frontend"},"time":"2024-12-17T06:47:57.375433674Z"}
{"log":"\n","stream":"stdout","attrs":{"tag":"frontend"},"time":"2024-12-17T06:47:57.375439356Z"}
{"log":" ▲ Next.js 14.2.5\n","stream":"stdout","attrs":{"tag":"frontend"},"time":"2024-12-17T06:47:58.393514509Z"}
{"log":" - Local: http://ad52be827027:8080\n","stream":"stdout","attrs":{"tag":"frontend"},"time":"2024-12-17T06:47:58.393899007Z"}
{"log":" - Network: http://192.168.16.24:8080\n","stream":"stdout","attrs":{"tag":"frontend"},"time":"2024-12-17T06:47:58.393916369Z"}
{"log":"\n","stream":"stdout","attrs":{"tag":"frontend"},"time":"2024-12-17T06:47:58.394032634Z"}
{"log":" ✓ Starting...\n","stream":"stdout","attrs":{"tag":"frontend"},"time":"2024-12-17T06:47:58.394118183Z"}
{"log":" ✓ Ready in 415ms\n","stream":"stdout","attrs":{"tag":"frontend"},"time":"2024-12-17T06:47:58.76945803Z"}
{"log":"(node:17) MetadataLookupWarning: received unexpected error = network timeout at: http://169.254.169.254/computeMetadata/v1/instance code = UNKNOWN\n","stream":"stderr","attrs":{"tag"
:"frontend"},"time":"2024-12-17T06:48:01.390932425Z"}
{"log":"(Use `node --trace-warnings ...` to show where the warning was created)\n","stream":"stderr","attrs":{"tag":"frontend"},"time":"2024-12-17T06:48:01.390965495Z"}
Each JSON entry usually has three properties:
- log: The actual log message as captured from the Docker container.
- stream: Indicates the stream where the log originated (stdout or stderr).
- time: The time of log collection in ISO 8601 format.
In some cases, you will also see an attrs field containing extra attributes provided through the --log-opt flag when creating the container, such as environment variables and labels.
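Since each entry is a JSON object, you can also parse the raw log file directly with a tool like jq (assuming it is installed on the host) to extract just the message text:
sudo jq -r '.log' /var/lib/docker/containers/<container-id>/<container-id>-json.log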
Manually inspecting JSON log files is one way to view logs, but Docker provides more convenient tools to access and manage logs directly from the command line.
We'll explore these built-in commands in the next section.
Viewing Docker logs in the terminal
You can easily view logs from a running container through the docker logs command. It has the following syntax:
docker logs [<options>] <container>
The <container> placeholder can be either the container name or ID, while <options> are optional flags to customize the output.
Here's its basic usage:
docker logs <container>
As in:
docker logs otel-collector
This command retrieves the entire log output stored for a container, which can include all lines written to stdout and stderr since the container started:
. . .
2024-12-17T08:27:54.613Z info Metrics {"kind": "exporter", "data_type": "metrics", "name": "debug", "resource metrics": 817, "metrics": 2487, "data points": 4533}
2024-12-17T08:27:55.215Z info Metrics {"kind": "exporter", "data_type": "metrics", "name": "debug", "resource metrics": 2, "metrics": 22, "data points": 49}
2024-12-17T08:27:55.415Z info Metrics {"kind": "exporter", "data_type": "metrics", "name": "debug", "resource metrics": 1, "metrics": 14, "data points": 14}
2024-12-17T08:27:55.513Z info Traces {"kind": "exporter", "data_type": "traces", "name": "debug", "resource spans": 1, "spans": 1}
2024-12-17T08:27:56.017Z info Metrics {"kind": "exporter", "data_type": "metrics", "name": "debug", "resource metrics": 2, "metrics": 28, "data points": 49}
2024-12-17T08:27:56.418Z info Metrics {"kind": "exporter", "data_type": "metrics", "name": "debug", "resource metrics": 27, "metrics": 425, "data points": 545}
2024-12-17T08:27:56.619Z info Metrics {"kind": "exporter", "data_type": "metrics", "name": "debug", "resource metrics": 1, "metrics": 15, "data points": 51}
If your logs don't already include timestamps, you can use the -t/--timestamps option to prepend ISO 8601 timestamps to each log entry:
docker logs -t otel-collector
In cases where your application already includes timestamps in the log output, this option may be redundant:
. . .
2024-12-17T09:08:56.738257049Z 2024-12-17T09:08:56.738Z info Metrics {"kind": "exporter", "data_type": "metrics", "name": "debug", "resource metrics": 1, "metrics": 15, "data points": 51}
2024-12-17T09:08:57.139166438Z 2024-12-17T09:08:57.138Z info Metrics {"kind": "exporter", "data_type": "metrics", "name": "debug", "resource metrics": 1, "metrics": 28, "data points": 40}
2024-12-17T09:08:57.340044392Z 2024-12-17T09:08:57.339Z info Metrics {"kind": "exporter", "data_type": "metrics", "name": "debug", "resource metrics": 1, "metrics": 5, "data points": 6}
2024-12-17T09:08:57.587608075Z 2024-12-17T09:08:57.587Z info Traces {"kind": "exporter", "data_type": "traces", "name": "debug", "resource spans": 1, "spans": 2}
2024-12-17T09:08:58.942993661Z 2024-12-17T09:08:58.942Z info Metrics {"kind": "exporter", "data_type": "metrics", "name": "debug", "resource metrics": 1, "metrics": 1, "data points": 2}
2024-12-17T09:08:59.343194193Z 2024-12-17T09:08:59.343Z info Metrics {"kind": "exporter", "data_type": "metrics", "name": "debug", "resource metrics": 1, "metrics": 5, "data points": 6}
2024-12-17T09:09:01.699296392Z 2024-12-17T09:09:01.699Z info Logs {"kind": "exporter", "data_type": "logs", "name": "debug", "resource logs": 1, "log records": 2}
2024-12-17T09:09:02.602836466Z 2024-12-17T09:09:02.602Z info Traces {"kind": "exporter", "data_type": "traces", "name": "debug", "resource spans": 1, "spans": 1}
2024-12-17T09:09:04.558810639Z 2024-12-17T09:09:04.558Z info Metrics {"kind": "exporter", "data_type": "metrics", "name": "debug", "resource metrics": 1, "metrics": 29, "data points": 34
If you'd like to see the attributes contained in the attrs field (if available), you can use the --details option:
docker logs --details <container>
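Note that the attrs field is only populated if the container was started with the relevant log options. As a rough sketch, assuming a hypothetical my-app image running under the default json-file driver, you could expose an environment variable as a log attribute like this:
# Record the NODE_ENV environment variable as a log attribute
docker run -d --name my-app -e NODE_ENV=production --log-opt env=NODE_ENV my-app:latest
# The attribute now appears alongside each log line
docker logs --details my-app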
Tailing Docker logs
For real-time monitoring of container logs, use the -f/--follow option. This streams new log entries as they are generated, similar to the tail -f command:
docker logs -f <container>
Filtering Docker logs
The output of docker logs can become overwhelming, especially for long-running containers with extensive logs.
To make log inspection more efficient, Docker provides several built-in options to filter and limit log output.
For starters, to display only the last N entries, you can use the -n/--tail option:
docker logs -n 10 <container> # displays the last 10 log lines
You can also limit the log output to a specific time range with --since and --until. The former displays log entries that occurred after the provided timestamp, while the latter displays log entries that occurred before the provided timestamp.
The arguments for both flags must be in a recognizable date and time format, such as an RFC 3339 date, a UNIX timestamp, or a Go duration string (e.g. 1m30s, 3h).
For example:
docker logs --since 2024-12-17T09:00:00 <container> # only show logs after this time
docker logs --since 15m <container> # only show logs from the last 15 minutes
docker logs --until 2024-12-17T09:00:00 <container> # only show logs produced before this time
docker logs --until 1h <container> # only show logs produced more than 1 hour ago
You can also combine the --since and --until options to filter logs within a specific time range. For example, the command below only prints the entries logged between 12:45 and 13:00 on December 17th, 2024:
docker logs --since 2024-12-17T12:45:00 --until 2024-12-17T13:00:00 <container>
The --since and --tail options can also be paired with the -f/--follow flag to narrow down the initial output while printing all subsequent entries. This pairing does not work with --until:
docker logs -f --since 15m <container>
docker logs -f --tail 10 <container>
Beyond the built-in filtering options, you can also filter Docker logs through standard shell utilities and operations.
For instance, you can show only the logs sent to the standard output with:
docker logs <container> 2>/dev/null
This redirects the stderr output (file descriptor 2) to /dev/null, effectively discarding it. As a result, only the stdout logs will be displayed.
Similarly, you can display only the stderr logs with:
docker logs <container> 2>&1 >/dev/null
You can also pipe the docker logs output to shell commands like grep, awk, and similar tools to search the text and display only the records that match a specific pattern:
docker logs <container> | grep '200'
Summary of filtering options
Filter Type | Command |
---|---|
Last N lines | docker logs --tail N <container> |
Logs after a time | docker logs --since <timestamp> <container> |
Logs before a time | docker logs --until <timestamp> <container> |
Logs between two times | docker logs --since <timestamp> --until <timestamp> <container> |
Follow from last N lines | docker logs -f --tail N <container> |
Only stdout logs | docker logs <container> 2>/dev/null |
Only stderr logs | docker logs <container> 2>&1 >/dev/null |
Search for patterns | docker logs <container> \| grep '<pattern>' |
Filter with shell tools | docker logs <container> \| awk '/<pattern>/ {print $0}' |
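These options can also be combined in practice. For example, to scan only the last hour of output from both streams for error messages, you can pair a time filter with grep:
docker logs --since 1h <container> 2>&1 | grep -i error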
Troubleshooting Docker logs output
If the docker logs command produces an error or returns empty output, it may be due to a variety of reasons. Here are two common causes and solutions:
1. Check if dual logging is disabled with remote logging drivers
When using remote logging drivers like splunk, gcplogs, or awslogs, Docker's dual logging functionality typically acts as a local cache, allowing the docker logs command to continue working. However, if dual logging is disabled, you may encounter the following error:
Error response from daemon: configured logging driver does not support reading
To confirm this, inspect the container to verify that it is indeed using a remote logging driver:
docker inspect -f '{{.HostConfig.LogConfig.Type}}' <container>
splunk
Then check the cache-disabled option for the container to see the status of the dual logging cache:
docker inspect -f '{{ index .HostConfig.LogConfig.Config "cache-disabled" }}' <container>
true
If the output is true, the dual logging cache is disabled for that container. If the output is false or empty, the dual logging cache is enabled (or not explicitly configured, in which case the daemon's default applies).
To re-enable dual logging for a specific container, you must stop it first, then explicitly set the cache-disabled option to false using the --log-opt flag with docker run:
docker run -d --log-opt cache-disabled=false <image>
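If you manage the container with Docker Compose instead, the equivalent configuration goes under the service's logging key. A minimal sketch, assuming the splunk driver (its other required options, such as splunk-token and splunk-url, are omitted here):
services:
  app:
    image: <image>
    logging:
      driver: splunk
      options:
        # re-enable the dual logging cache for this container
        cache-disabled: "false"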
To enable dual logging for all new containers, you can edit the Docker daemon configuration file as follows:
sudo nano /etc/docker/daemon.json
{
  "log-driver": "splunk",
  "log-opts": {
    "cache-disabled": "false", // or remove this property entirely
    . . .
  }
}
This sets cache-disabled to false globally so that all containers created after this change will have dual logging enabled unless explicitly overridden.
You'll need to restart the Docker daemon for the change to take effect:
sudo systemctl restart docker
Note that the cache-disabled setting only applies to remote logging drivers. Local drivers like json-file, local, or journald are unaffected, so docker logs will continue to work.
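If you're unsure which logging driver your daemon uses by default, you can check it with:
docker info --format '{{.LoggingDriver}}'
json-file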
2. Check if the containerized application logs to stdout or stderr
If the containerized application does not write logs to stdout or stderr, Docker may not capture any logs.
Some applications are configured to write logs directly to files inside the container's filesystem. These logs won't be visible with docker logs and will be lost when the container is removed.
To fix this, you can take two approaches:
- If possible, configure the service to send its logs to the standard output or standard error accordingly. This approach is exemplified in this custom Nginx image where the Nginx configuration has been modified to send access logs to /dev/stdout and error logs to /dev/stderr rather than log files in the /var/log/nginx directory.
- If such configuration options do not exist for the containerized service, you can create a symbolic link from the generated log files to either /dev/stdout or /dev/stderr as appropriate. This is the solution adopted by the official Nginx Docker image (see the sketch below).
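As a rough sketch of the symlink approach, a Dockerfile for a hypothetical service that writes its logs to files under /var/log/myapp might look like this (the base image and paths are assumptions for illustration):
FROM debian:bookworm-slim
# Redirect the application's log files to the container's standard streams,
# mirroring the approach used by the official Nginx image
RUN mkdir -p /var/log/myapp \
 && ln -sf /dev/stdout /var/log/myapp/access.log \
 && ln -sf /dev/stderr /var/log/myapp/error.log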
With either setup, the logs produced by your containerized services should now be accessible through the Docker CLI.
Following these steps should resolve most issues with your Docker container log output. For further debugging, you can also check the Docker daemon logs (sudo journalctl -u docker) to ensure no underlying issues exist.
Viewing Docker logs with a GUI
For a more user-friendly experience, viewing Docker container logs through a graphical user interface can be a convenient alternative to terminal commands.
If you're using Docker Desktop, you can access your container logs by navigating to the Containers page and selecting the container of interest.
The Logs tab is displayed by default:
From here, you can read the logs or perform basic searches.
If you desire more functionality or a nicer interface, you can try out a dedicated Docker log viewer like Dozzle.
Use the command below to download its Docker image locally:
docker pull amir20/dozzle:latest
latest: Pulling from amir20/dozzle
15851880b5d7: Pull complete
d57a2496955d: Pull complete
Digest: sha256:2727b94bb5d2962152020677fed93d182d3791e7c7aaf8ad4b2ccbd45efab09e
Status: Downloaded newer image for amir20/dozzle:latest
docker.io/amir20/dozzle:latest
Afterward, run it in a container and create a volume that mounts the Docker socket on your host inside the container:
docker run --name dozzle -d --volume=/var/run/docker.sock:/var/run/docker.sock -p 8888:8080 amir20/dozzle:latest
b42129bbb2a8c1b253d59b017c872a19d7182819ab37b0b0d86ed6dc052313f9
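If you prefer Docker Compose, a roughly equivalent service definition might look like this:
services:
  dozzle:
    image: amir20/dozzle:latest
    container_name: dozzle
    volumes:
      # mount the Docker socket so Dozzle can read container logs
      - /var/run/docker.sock:/var/run/docker.sock
    ports:
      - "8888:8080"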
Open your browser and navigate to http://localhost:8888. Select the relevant container to view its logs, which will update in real time:
You can view logs from multiple containers side by side by using the Pin as column feature, accessible by hovering over a container name:
On the far-right panel, a dropdown menu lets you perform several actions, such as downloading the logs to a file, searching the logs, or filtering by stream or log level:
Feel free to check out Dozzle's documentation to learn more about its features and how to tailor its setup to your specific needs.
Centralizing your Docker container logs
Docker provides basic tools for viewing and managing container logs, but these are limited when working with multiple containers across distributed environments. This is where centralized logging comes in.
Centralizing Docker logs addresses these limitations by collecting, storing, and analyzing logs from all your containers in a single place.
This approach not only helps in identifying issues quickly but also ensures that your logs are preserved and actionable, even if containers are restarted or terminated.
One possible solution is Better Stack, an observability platform with powerful log management features. Logs can be shipped to Better Stack using Vector, a lightweight log-forwarding tool.
To explore this, you'll create a container based on the official Vector image and supply a configuration file that instructs it to forward the logs it collects to Better Stack.
Sign up for a free Better Stack account and navigate to the Telemetry dashboard. Then, from the menu on the left, choose Sources and click on Connect source:
Provide a suitable name for the source (for example, after the service running in the container), choose Docker as the platform, then scroll down to the bottom of the page and click Connect source.
Once your source is created, copy the provided Source token for use in the Vector configuration file:
With your source token copied, create a Vector configuration file somewhere on your filesystem, and populate it with the following contents:
sources:
  docker_containers:
    type: docker_logs
    exclude_containers:
      - vector
sinks:
  better_stack:
    type: http
    method: post
    inputs:
      - docker_containers
    uri: https://in.logs.betterstack.com/
    encoding:
      codec: json
    auth:
      strategy: bearer
      token: <your_betterstack_source_token>
The docker_containers source configures Vector to collect logs from all containers running on the host machine, except for the vector container which we'll set up shortly.
If you want to use an allowlist of images, containers, or labels, you can use the include_images, include_containers, or include_labels properties. You will find all the details in the Vector documentation.
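For example, a minimal sketch that restricts collection to two hypothetical containers named web and api would look like this:
sources:
  docker_containers:
    type: docker_logs
    # only collect logs from these containers
    include_containers:
      - web
      - api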
The collected logs are then ingested to Better Stack over HTTP. If you'd like to process or transform the logs before sending them out, see the transforms reference.
Once you've saved the file, execute the command below to start the vector service while mounting the configuration file and the Docker daemon socket:
docker run -d --name vector \
-v $(pwd)/vector.yaml:/etc/vector/vector.yaml:ro \
-v /var/run/docker.sock:/var/run/docker.sock \
timberio/vector:latest-alpine
If you're using Docker Compose, you can use the following fragment instead:
services:
  vector:
    image: timberio/vector:latest-alpine
    container_name: vector
    volumes:
      - ./vector.yaml:/etc/vector/vector.yaml:ro
      - /var/run/docker.sock:/var/run/docker.sock
In a production setting, you can avoid directly mounting the Docker socket and use SSH or HTTPS for communication between Vector and the Docker daemon instead. Alternatively, you can also install Vector directly on the host machine.
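Before checking Better Stack, you can confirm that Vector started correctly by inspecting its own output:
docker logs vector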
Once the vector container is running, return to the Better Stack source and scroll down to the Verify data collection section. After a few moments, you should see a Logs received! message, confirming that your container logs are now being shipped to the service.
Clicking on the Live tail link will take you to a page where you'll see your container logs streaming in.
From here, you can query, visualize, and correlate your logs, set up alerting, and benefit from all the other advantages of centralized log management.
Final thoughts
This article has provided you with a comprehensive understanding of Docker's log management features and how to leverage them to monitor and debug the various services deployed within your Docker containers.
You also learned how to aggregate logs from multiple containers in one place to streamline log analysis using advanced search, filtering, and visualization techniques.
With this knowledge, you can confidently manage and monitor your Docker container logs, whether you're debugging a single container or monitoring a large-scale deployment.
For further exploration, consider diving into the official Docker logs reference, and check out our Docker logging best practices for improving the performance and reliability of your container logging setup.
Thanks for reading, and happy logging!