Docker has become an increasingly popular way to deploy applications due to its portability and reproducibility across different environments. However, logging within Docker containers presents unique challenges since containers are ephemeral, and the data stored in their filesystems is lost once the container is terminated.
To address these issues effectively, you must understand how logging in Docker works so that you can configure your application accordingly and use the right solutions that'll meet your performance and logging requirements.
This article provides a comprehensive guide to effectively managing Docker container and daemon logs. It encompasses everything from collecting and viewing log messages to crafting an optimal logging strategy tailored to your specific needs. You'll also discover how to centralize your Docker logs in a log management service, enabling advanced analysis, alerting, and long-term storage capabilities.
Prerequisites
Before you proceed with this tutorial, you need access to a system that supports the latest Docker (v24.0.5 at the time of writing), preferably a Linux machine or server that includes a non-root user with sudo access. A basic understanding of how Docker containers work is also assumed.
Step 1 — Setting up an Nginx container (optional)
To follow through with the concepts described in this article, you can set up a
Docker container based on
this Nginx hello world image.
It has been configured to produce access logs each time the server is accessed.
Use the docker pull command with the image name (betterstackcommunity/nginx-helloworld) to download it from the Docker registry to your computer:
docker pull betterstackcommunity/nginx-helloworld:latest
You'll see the program's output appear on the screen:
Using default tag: latest
latest: Pulling from betterstackcommunity/nginx-helloworld
83ee3a23efb7: Pull complete
db98fc6f11f0: Pull complete
f611acd52c6c: Pull complete
ce6148ee5b27: Pull complete
f41d580b4c45: Pull complete
272afdecd73d: Pull complete
603e831d3bf2: Pull complete
4b3f00fe862f: Pull complete
1813c5daf2e4: Pull complete
4db7ca47ea28: Pull complete
37d652721feb: Pull complete
e9bce6aacaff: Pull complete
50da342c2533: Pull complete
Digest: sha256:48413fdddeae11e4732896e49b6d82979847955666ed95e4d6e57b433920c9e1
Status: Downloaded newer image for betterstackcommunity/nginx-helloworld:latest
docker.io/betterstackcommunity/nginx-helloworld:latest
The output above describes the process of fetching an image and storing it locally. If you get a permission error, you may need to prefix the command above with sudo, or better still, use the command below to add the current user to the docker group:
sudo usermod -aG docker ${USER}
Once you've done that, apply the new group membership by typing the following:
su - ${USER}
The docker pull command should now work without prefixing it with sudo. You can subsequently use the docker images command to verify that the downloaded image is present:
docker images
You should see the following output:
REPOSITORY TAG IMAGE ID CREATED SIZE
betterstackcommunity/nginx-helloworld latest a35a83d637b5 4 hours ago 240MB
You can now create and run a new container from the image by executing the
docker run
command as shown below:
docker run -p 80:80/tcp -d "betterstackcommunity/nginx-helloworld:latest"
The argument to the -p flag maps port 80 in the container to port 80 on your machine, while the -d option runs the container in the background and prints its ID (a typical setup for a web service). Assuming there were no errors, the container ID will be displayed on your screen:
39c0ffde9c30600629742357b5f01b278eba9ade7f4c96b9e3883e8fa2b52243
If you get the error below, it means that some other program is already using port 80 on your server, so you should stop that program before re-running the command.
docker: Error response from daemon: driver failed programming external connectivity on endpoint nostalgic_heisenberg (f493fdf78c94adc66a248ac3fd62e911c1d477dda62398bd36cd40b323605159): Error starting userland proxy: listen tcp4 0.0.0.0:80: bind: address already in use.
You can verify the status of your Docker containers with the docker ps command:
docker ps
You'll see the following output appear on the screen:
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
0639eb28a3c9 betterstackcommunity/nginx-helloworld:latest "/runner.sh nginx" 3 seconds ago Up 2 seconds 0.0.0.0:80->80/tcp, :::80->80/tcp sharp_mcnulty
The ps command describes a few details about your running containers. You can see the container ID, the image running inside the container, the command used to start the container, its creation date, the status, exposed ports, and the auto-generated name of the container.
To see the deployed Nginx service in action, visit http://<your_server_ip>:80
or http://localhost:80 in your browser. You should observe a sample Nginx demo
page:
To generate a few access logs, you can use the curl command as follows:
curl http://localhost:80/?[1-10]
You're now all set to view and configure the logs generated by your running Nginx container.
Step 2 — Viewing container logs with the Docker CLI
Docker provides a logs command for viewing the log entries produced by the services running in a container. You can also use the docker service logs command to view the logs produced by all containers participating in a service, but we'll focus only on single-container logs in this article.
By default, the logs command displays all the messages sent to the standard output (stdout) and standard error (stderr) streams within a container. If the services running in the container do not output their logs to either stream (perhaps because they log to files instead), you may not see any useful output from the command.
Use the command below to view the logs for the Nginx hello world container. Replace <container_id> with the appropriate ID retrieved by running docker ps:
docker logs <container_id>
You should observe the following output:
{"timestamp":"2023-09-06T15:13:11+00:00","pid":"8","remote_addr":"172.17.0.1","remote_user":"","request":"GET /?8 HTTP/1.1","status": "200","body_bytes_sent":"11109","request_time":"0.000","http_referrer":"","http_user_agent":"curl/8.0.1","time_taken_ms":"1694013191.274"}
{"timestamp":"2023-09-06T15:13:11+00:00","pid":"8","remote_addr":"172.17.0.1","remote_user":"","request":"GET /?9 HTTP/1.1","status": "200","body_bytes_sent":"11109","request_time":"0.000","http_referrer":"","http_user_agent":"curl/8.0.1","time_taken_ms":"1694013191.274"}
{"timestamp":"2023-09-06T15:13:11+00:00","pid":"8","remote_addr":"172.17.0.1","remote_user":"","request":"GET /?10 HTTP/1.1","status": "200","body_bytes_sent":"11109","request_time":"0.000","http_referrer":"","http_user_agent":"curl/8.0.1","time_taken_ms":"1694013191.274"}
. . .
By default, Nginx logs are written to the access.log and error.log files in the /var/log/nginx directory. However, the Nginx configuration for this container has been modified to send access logs to /dev/stdout and error logs to /dev/stderr instead, so that Docker can collect and manage the logs. The access logs are JSON-formatted for ease of use with log management tools.
The official Nginx Docker image takes a different approach to achieve the same effect. It creates a symbolic link from /var/log/nginx/access.log to /dev/stdout and another one from /var/log/nginx/error.log to /dev/stderr. These are two approaches you can adopt in your configuration files to ensure that the logs produced by your application are accessible through the Docker CLI.
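For reference, the symlink approach can be reproduced with a couple of Dockerfile instructions similar to the following sketch (it assumes Nginx's default log locations and mirrors what the official image does):
RUN ln -sf /dev/stdout /var/log/nginx/access.log \
    && ln -sf /dev/stderr /var/log/nginx/error.log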
Step 3 — Filtering your logs with the Docker CLI
If your application generates a lot of logs, the output from the docker logs command will be quite large, and you usually only need to view a small subset of the logs at a time. Below are some approaches you can use to filter the container logs:
1. Displaying only the most recent logs
Use the --tail option to display only the latest entries:
docker logs --tail 10 <container_id> # show the last 10 lines
2. Limiting the log output to a specific time range
There are two options to limit the log entries by their timestamp: --since and --until. The former displays log entries that occurred after the provided timestamp, while the latter displays log entries that occurred before the provided timestamp. The arguments to both flags must be in a recognizable date and time format, such as an RFC 3339 date, a UNIX timestamp, or a Go duration string (e.g. 1m30s, 3h).
For example, the command below will only display log entries that were produced in the last 15 minutes:
docker logs --since 15m <container_id>
This one will show all logs except those produced in the last hour:
docker logs --until 1h <container_id>
You can also combine the --since and --until options to filter logs within a specific time range. For example, the command below only prints the entries logged between 12 PM and 1 PM on February 16, 2023:
docker logs --since 2023-02-16T12:00:00 --until 2023-02-16T13:00:00 <container_id>
3. Viewing Docker logs in real-time
The docker logs command only displays the entries present at the time of execution. If you want to continue streaming log output from the container in real time, use the --follow option:
docker logs --follow <container_id>
You can also combine this with the --tail or --since options (or both) to narrow down the initial output while printing subsequent entries:
docker logs --follow --since 15m <container_id>
docker logs --follow --tail 10 <container_id>
4. Filtering Docker logs with grep
Another way to filter docker logs output is through grep. It's a useful way to display only the records that match a specific pattern:
docker logs <container_id> | grep '200'
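Note that docker logs sends the container's stdout stream to your terminal's standard output and its stderr stream to standard error, so a plain pipe only filters the stdout portion. If you also want grep to search the error output, merge the two streams first, as in this small example:
docker logs <container_id> 2>&1 | grep 'error'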
Step 4 — Viewing container logs with Dozzle
If you find the docker logs command too primitive, you can try out Dozzle. It's a log viewer that provides real-time log streaming, filtering, and monitoring capabilities through its web-based user interface. It's also quite lightweight, so it can be run alongside your other containers without compromising performance in most cases.
Use the command below to download its Docker image locally:
docker pull amir20/dozzle:latest
latest: Pulling from amir20/dozzle
15851880b5d7: Pull complete
d57a2496955d: Pull complete
Digest: sha256:2727b94bb5d2962152020677fed93d182d3791e7c7aaf8ad4b2ccbd45efab09e
Status: Downloaded newer image for amir20/dozzle:latest
docker.io/amir20/dozzle:latest
Afterward, run it in a container and create a volume that mounts the Docker socket on your host inside the container:
docker run --name dozzle -d --volume=/var/run/docker.sock:/var/run/docker.sock -p 8888:8080 amir20/dozzle:latest
b42129bbb2a8c1b253d59b017c872a19d7182819ab37b0b0d86ed6dc052313f9
Next, head over to http://localhost:8888 in your browser and select your Nginx container to view its logs. It'll keep updating in real-time:
Feel free to check out Dozzle's documentation to learn about its features and how to customize its behavior to your liking.
Step 5 — Choosing a logging driver
Logging drivers in Docker are mechanisms that determine how container logs are collected and processed. The default driver is json-file, which writes the container logs to JSON files stored on the host machine. These files can be found in the following location:
/var/lib/docker/containers/<container_id>/<container_id>-json.log
You can view the contents of the file using the following command:
sudo tail /var/lib/docker/containers/<container_id>/<container_id>-json.log
{
"log": "{\"timestamp\":\"2023-09-06T15:13:11+00:00\",\"pid\":\"8\",\"remote_addr\":\"172.17.0.1\",\"remote_user\":\"\",\"request\":\"GET /?10 HTTP/1.1\",\"status\": \"200\",\"body_bytes_sent\":\"11109\",\"request_time\":\"0.000\",\"http_referrer\":\"\",\"http_user_agent\":\"curl/8.0.1\",\"time_taken_ms\":\"1694013191.274\"}\n",
"stream": "stdout",
"time": "2023-09-06T15:13:11.274712995Z"
}
. . .
The log property contains the raw access log entry generated by the Nginx process, while stream records which stream it was collected from. The time property represents the time of collection by the json-file driver. When you use the docker logs command, it only presents the contents of the log property as you've already seen, but you can use the --details flag to view additional information if present.
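For example, the following command prints each entry along with any extra attributes attached by the logging driver (such as labels or environment variables), where available:
docker logs --details <container_id>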
Note that with the json-file driver, log rotation isn't performed by default, so the logs will continue accumulating over time. I'll show you how to resolve this issue in the next section.
Some of the other supported logging drivers in Docker are listed below:
- none: This driver disables container logs and causes docker logs to stop producing output.
- local: Collects the raw log output from the container and stores it in a format designed for minimal overhead. It preserves up to 100MB of logs by default (20MB each in a maximum of five files).
- syslog: Writes log entries to the syslog facility on the host machine. See our tutorial on Syslog if you are not familiar with it.
- journald: Writes log messages to the journald daemon on the host machine. See our tutorial on journald for more details.
- fluentd: Writes log messages to the fluentd daemon running on the host machine.
- gelf: Sends log entries to a Graylog Extended Log Format (GELF) endpoint such as Graylog or Logstash.
- awslogs: Sends log entries to AWS CloudWatch.
- gcplogs: Sends log entries to the Google Cloud Platform.
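To see which logging drivers are available on your installation, you can query the daemon's plugin list; the exact set will vary with your Docker version and any logging plugins you've installed:
docker info --format '{{.Plugins.Log}}'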
You can check the current default logging driver for the Docker daemon through the command below:
docker info --format '{{.LoggingDriver}}'
This should yield the output below:
json-file
You can also find out the logging driver for a running container by using the following docker inspect command:
docker inspect -f '{{.HostConfig.LogConfig.Type}}' <container_id>
json-file
In this tutorial, we will continue using the json-file logging driver. However, in the next section, you'll adjust some settings to optimize it for production usage.
Step 6 — Configuring the json-file driver
In this section, you will learn how to configure your chosen logging driver (json-file in this case). Start by opening or creating the daemon configuration file as follows:
sudo nano /etc/docker/daemon.json
Populate the file with the following contents:
{
"log-driver": "json-file",
"log-opts": {
"max-size": "20m",
"max-file": "5",
"compress": "true",
"labels": "production_status",
"env": "os"
}
}
The log-driver field sets the logging driver to json-file, while log-opts configures the supported options for the specified logging driver. Each property in the log-opts object must be a string (including boolean and numeric values, as seen above).
Here's a brief explanation of each property in the log-opts object:
- max-size: The maximum size of the log file before it is rotated (20MB here). You may specify an integer plus a measuring unit (k for kilobytes, m for megabytes, and g for gigabytes).
- max-file: The maximum allowable number of log files for each container. When an excess file is created, the oldest one will be deleted. The max-size option must also be specified for this setting to take effect.
- compress: When set to true, the rotated log files will be compressed to save disk space.
- labels: A comma-separated list of logging-related labels accepted by the Docker daemon.
- env: A comma-separated list of logging-related environment variables accepted by the Docker daemon.
With the above configuration in place, Docker will keep a maximum of 100MB of logs per container while the older ones get deleted. You can find the complete list of options for the json-file driver in the official documentation.
When you modify the Docker daemon configuration file as above, you must restart the docker service to apply the changes. Note that this will shut down all running containers unless live restore is enabled.
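If you'd like your running containers to stay up while the daemon restarts, you can enable live restore by adding the following key to the same /etc/docker/daemon.json file before restarting (a minimal fragment to merge with the settings shown above; see the Docker documentation for its caveats):
{
  "live-restore": true
}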
sudo systemctl restart docker
When you re-launch the Nginx container, you may specify the configured labels and environment variables as follows:
docker run -p 80:80/tcp -d --label production_status=testing --env os=ubuntu "betterstackcommunity/nginx-helloworld:latest"
Afterward, generate some access logs and view them in the container's JSON log file once again. You should observe a new attrs object that contains the specified label and environment variable:
sudo tail -n 1 /var/lib/docker/containers/<container_id>/<container_id>-json.log
{
"log": "{\"timestamp\":\"2023-09-06T15:13:11+00:00\",\"pid\":\"8\",\"remote_addr\":\"172.17.0.1\",\"remote_user\":\"\",\"request\":\"GET /?10 HTTP/1.1\",\"status\": \"200\",\"body_bytes_sent\":\"11109\",\"request_time\":\"0.000\",\"http_referrer\":\"\",\"http_user_agent\":\"curl/8.0.1\",\"time_taken_ms\":\"1694013191.274\"}\n",
"stream": "stdout",
"attrs": {
"os": "ubuntu",
"production_status": "testing"
},
"time": "2023-09-06T15:13:11.274712995Z"
}
Including these additional details in your container logs can help you easily find and filter the messages you're looking for, especially once you've centralized the logs in a log management service as you'll see later on.
Overriding the default logging driver per container
It's sometimes useful to choose a different logging driver or modify the driver options for a specific container. This is achieved through the --log-driver and --log-opt flags when executing docker container create or docker run:
docker run --log-driver local --log-opt max-size=50m -p 80:80/tcp -d "betterstackcommunity/nginx-helloworld:latest"
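You can confirm that the overrides took effect by inspecting the container's log configuration, which reports the driver type and any options you passed:
docker inspect -f '{{.HostConfig.LogConfig}}' <container_id>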
Step 7 — Choosing a log delivery mode
The log delivery mode for a Docker container refers to how it prioritizes the delivery of incoming log messages to the configured driver. The following two modes are supported:
1. Blocking mode
In blocking mode (the default), the delivery of log messages to the selected driver is synchronous and blocks the application or process generating logs until the log entry is successfully delivered. The main advantage of this approach is that it guarantees that each log message will be delivered to the driver, but at the cost of performance since the application needs to wait for log delivery.
{
"log-driver": "json-file",
"log-opts": {
"mode": "blocking"
}
}
This delay should be negligible with the json-file or local drivers since they both write to the local filesystem. However, with drivers that write directly to a remote server, a noticeable latency will be observed if log delivery is slow. This is where non-blocking mode can come in handy.
2. Non-blocking mode
In non-blocking mode, incoming log entries are processed asynchronously without causing the application to block. They are temporarily stored in a memory buffer until the configured logging driver can process them. Once processed, they are cleared from the buffer to make way for new entries.
{
"log-driver": "syslog",
"log-opts": {
"mode": "non-blocking"
}
}
docker run --log-driver syslog --log-opt mode=non-blocking -p 80:80/tcp -d "betterstackcommunity/nginx-helloworld:latest"
When non-blocking mode is used, performance issues are minimized even if there's a high volume of logging activity. However, there is a risk of losing log entries if the driver is unable to keep up with the rate of log messages emitted by the application. To improve reliability in non-blocking mode, you can increase the maximum buffer size from its default of 1 MB to a more suitable value through the max-buffer-size property:
{
"log-driver": "syslog",
"log-opts": {
"mode": "non-blocking",
"max-buffer-size": "20m"
}
}
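The same options can also be set for an individual container at startup. For instance, the command below applies non-blocking mode with a larger buffer to just the Nginx container, using the flags introduced earlier:
docker run --log-driver json-file --log-opt mode=non-blocking --log-opt max-buffer-size=20m -p 80:80/tcp -d "betterstackcommunity/nginx-helloworld:latest"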
Step 8 — Choosing a Docker logging strategy
There are several ways to aggregate, store, and centralize your Docker container logs. Thus far, we've only covered the native Docker logging driver approach in this article since it's the most suitable for most uses. However, that strategy won't fit every project, so it's a good idea to be aware of alternative methods so you can make an informed decision about the right logging approach for your application.
1. Using a Docker logging driver
We've already discussed this approach in great detail throughout this tutorial. To recap, it involves redirecting the log output from your application to the standard output and standard error streams, and using a logging driver to manage how the logs are stored or transported.
We recommend using the json-file driver in blocking mode for most use cases. Since it writes to a local file on the host, it shouldn't cause any performance problems. If your application emits a large amount of log data, consider using non-blocking mode instead with a generous buffer so that the primary operations of your application are not interrupted when the driver is trying to persist the logs to a file.
We generally don't recommend using drivers that write to a remote host, such as awslogs, gcplogs, or splunk, unless you can't create logs locally. It's usually better to employ a dedicated log shipper to read the JSON files created by json-file and transport their contents to the desired location (see the next section for an example).
2. Setting up a dedicated logging container
A dedicated logging container is one that's specially designed to collect and aggregate logs generated by other containers within a Docker environment. It's an excellent solution for centralizing and processing logs locally before forwarding them to some external service.
When using this approach, you'll typically deploy a log shipper in a dedicated container, and use it to aggregate logs from all containers, enrich or transform them, and forward them to a central location. This removes the dependency between the application container and the host, making it easy to scale your logging infrastructure by adding more logging containers.
3. Handling log delivery at the container level
With this approach, the application or service handles its own log delivery through a logging framework or a log shipper installed within the application container. Instead of logging to the standard output or standard error, you can route the logs directly to a file located on a configured data volume. This method creates a directory inside the container and connects it to a directory on the host or elsewhere. When you create log files in such directories, they will be preserved even when the container is terminated.
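As an illustration, the commands below mount a named volume (hypothetically called nginx-logs) over the container's /var/log/nginx directory so that any log files written there outlive the container; this assumes an image that writes its logs to files rather than to the standard streams:
docker volume create nginx-logs
docker run -v nginx-logs:/var/log/nginx -p 80:80/tcp -d <your_image>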
You can also configure your chosen framework to ship the log records directly to a remote log management solution. For example, Winston provides several transport options for delivering logs to remote services from Node.js programs. In the case of Nginx logs, you could install a shipper like Vector, Fluentd, or Logstash and use it to transport the access and error logs directly to a log management service.
The main drawback to handling log delivery within the application container is that it could introduce significant latency to the application if synchronous delivery mechanisms are used when logging to remote servers. It also leads to tighter coupling between logging concerns and the core application logic by forcing you to modify the application code when changes to the logging behavior are desired. Also, using log shippers in the application container is against the recommended practice of running one process per container.
4. Using the sidecar approach
A technique often used for more complicated deployments is the sidecar approach in which each application container is paired with a dedicated logging container. The main advantage is that the logging strategy can be tailored to each application, and it offers greater transparency regarding the origin of each log entry.
However, setting up a logging container per application container will consume more resources, and the added complexity is unlikely to be worthwhile for small to medium deployments. You'll also need to ensure that each container pair is managed as a single unit to avoid incomplete or missing log data.
For most scenarios, we recommend using option 1 or 2 above. You'll see a demonstration of both strategies in the next step.
Step 9 — Centralizing Docker container logs with Vector
In this section, you will centralize your Docker container logs in Better Stack, an observability service with uptime monitoring and built-in log management features. Transporting your container logs to a managed service is the easiest way to centralize them for easy correlation, analysis, and alerting.
Go ahead and sign up for a free account if you don't have one already. Once you're signed in, create a source and select the Docker platform as shown in the screenshot below:
Once your source is created, copy the Source token from the resulting page:
Our first strategy will be to use a log shipper on the container host to collect the Nginx logs and ship them to Better Stack accordingly. Our tool of choice for this demonstration is Vector, a high-performance solution for building observability pipelines.
Follow the instructions on this page to download and install Vector on your machine. Once installed, open its configuration file and populate it with the following contents:
sudo nano /etc/vector/vector.toml
[sources.nginx_docker]
type = "docker_logs"
include_images = ["betterstackcommunity/nginx-helloworld"]
[transforms.better_stack_transform]
type = "remap"
inputs = ["nginx_docker"]
source = """
del(.source_type)
.dt = del(.timestamp)
.nginx = parse_nginx_log(.message, format: "combined") ??
parse_nginx_log(.message, format: "error") ??
{}
.level = .nginx.severity || .level
"""
[sinks.better_stack]
type = "http"
method = "post"
inputs = ["better_stack_transform"]
uri = "https://in.logs.betterstack.com/"
encoding.codec = "json"
auth.strategy = "bearer"
auth.token = "<your_betterstack_source_token>"
The nginx_docker source configures Vector to collect logs from any container launched from the betterstackcommunity/nginx-helloworld image, according to the options provided by the docker_logs source. The better_stack sink configures the appropriate endpoint and authorization for the service to receive logs at your configured source. Ensure that the <your_betterstack_source_token> placeholder is replaced with the source token you copied earlier.
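Before restarting Vector, you can optionally check the file for syntax or configuration errors with its built-in validator (assuming the vector binary is on your PATH):
vector validate /etc/vector/vector.toml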
Once you're done, restart the Vector service and generate a few access logs through the curl command:
sudo systemctl restart vector
curl http://localhost:80/?[1-10]
After some moments, you should see the logs in Better Stack's live tail page:
Another way to centralize your Docker container logs without deploying Vector on the Docker host is by setting up Vector in a dedicated container through its official image. Here's a sample Docker Compose configuration to orchestrate and deploy both containers in one go:
version: '3.8'
services:
nginx:
image: betterstackcommunity/nginx-helloworld:latest
logging:
driver: json-file
options:
max-size: '10m'
max-file: '10'
labels: 'production_status'
labels:
production_status: 'development'
container_name: nginx
ports:
- '80:80'
vector:
image: timberio/vector:0.32.1-alpine
container_name: vector
volumes:
- <path_to_vector.toml>:/etc/vector/vector.toml:ro
- /var/run/docker.sock:/var/run/docker.sock
depends_on:
- nginx
To configure the Vector instance running in the container, all you need to do is set up a volume that mounts the Docker socket at /var/run/docker.sock and another one that mounts the configuration file located somewhere on your host to /etc/vector/vector.toml.
Before using the docker compose up command, make sure to stop your existing Nginx container through the command below:
docker stop <container_id>
You may now launch both containers using the command below:
docker compose up -d
[+] Running 2/2
✔ Container nginx Started 0.3s
✔ Container vector Started 0.6s
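If the entries don't show up, a quick way to troubleshoot is to read the Vector container's own logs with the same docker logs command you used earlier; startup problems such as an invalid source token or an unreadable Docker socket will appear there:
docker logs --tail 20 vector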
When you generate some additional access logs, you should observe that the logs continue coming through on the live tail page:
At this point, you can now proceed with setting up alerting and visualizations, and benefit from all the other advantages of centralized log management.
Step 10 — Managing Docker daemon logs
Thus far, we've only considered the logs generated from Docker containers. To design an optimal Docker logging strategy, you must also pay attention to the logs generated by the Docker engine itself.
These logs provide valuable insights into the functioning of Docker at the system level, including details about container lifecycle events, network configuration, image management, incoming API requests, and more.
To monitor the log entries produced by the Docker daemon, you can use the following command on Linux systems that use Systemd:
journalctl -fu docker.service
Sep 06 23:30:29 fedora dockerd[1308395]: time="2023-09-06T23:30:29.801241724+02:00" level=info msg="Pull session cancelled"
Sep 06 23:30:50 fedora dockerd[1308395]: time="2023-09-06T23:30:50.274910657+02:00" level=error msg="Not continuing with pull after error: context canceled"
Sep 07 00:13:18 fedora dockerd[1308395]: time="2023-09-07T00:13:18.380803561+02:00" level=info msg="cluster update event" module=dispatcher node.id=lg07o4wcgcr01sjd59haf9igd
. . .
As you can see, events emitted by the Docker daemon are formatted as key=value pairs. Additional details, such as the system timestamp, hostname, and the process that generated the log entry, are prepended to the entry and stored in the Systemd journal.
To transport these logs elsewhere, you can use Vector as before through its Journald integration. For example, here's how to transport the daemon logs to Better Stack:
[sources.docker_daemon]
type = "journald"
include_units = [ "docker.service" ]
[sinks.better_stack_docker_daemon]
type = "http"
method = "post"
inputs = [ "docker_daemon" ]
uri = "https://in.logs.betterstack.com/"
encoding.codec = "json"
auth.strategy = "bearer"
auth.token = "<your_betterstack_source_token>"
Once you restart the Vector service, generate some Docker logs by starting, stopping, or restarting some containers. You'll start seeing the logs on the live tail page, giving you full visibility into your Docker environment.
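For example, restarting Vector and then restarting any running container is enough to produce a few daemon-level events:
sudo systemctl restart vector
docker restart <container_id>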
Please see the relevant documentation page for more details on how to configure Docker engine logs in production.
Final thoughts
In this article, we've provided a thorough exploration of Docker's log management features and why they are essential to the smooth operation of the services deployed within Docker containers. We delved into the essential aspects of Docker container logging, encompassing log storage and retrieval and the configuration of logging drivers, while discussing several best practices along the way.
We also explored a few alternative approaches for managing container logs in a Docker environment before rounding off with a brief section on viewing and aggregating the logs generated by the Docker engine itself. To explore this topic further, we encourage you to refer to the official Docker documentation, as well as our series of articles on scaling Docker in production.
Thank you for reading, and happy logging!