Are you tired of the dreaded "it works on my machine" syndrome? By packaging
your application and its dependencies into portable, self-contained units called
containers, Docker ensures consistent behavior across
different environments, from development to production.
In this hands-on guide, I'll show you how to effortlessly "Dockerize" your
Node.js applications to unlock a smoother and more reliable development and
deployment process.
Let's get started!
Prerequisites
Prior Node.js development experience.
Familiarity with the Linux command-line.
Access to a Linux machine with
Docker Engine installed.
Step 1 — Setting up the demo project
To demonstrate Node.js application development and deployment with Docker, we'll
use a
URL shortener application
built with Fastify that stores shortened URLs in
PostgreSQL. Our goal is to create a custom Docker image to run this application
in various environments.
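Once you have the project's source code on your machine, launch a PostgreSQL container for it to use. Here's a sketch of the command based on the options described below (swap in your own password):
docker run --rm --name url-shortener-db -p 5432:5432 -e POSTGRES_PASSWORD=admin -e POSTGRES_DB=url-shortener postgres:alpine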
If you don't have the postgres:alpine image locally, it will be downloaded from
Docker Hub. The container will be named url-shortener-db, removed upon stopping
(--rm), and will have port 5432 mapped to the same port on your host machine.
Two environment variables are also supplied to the container:
POSTGRES_PASSWORD: This sets a password for the default postgres user.
You must provide a value for this variable to use the PostgreSQL image.
POSTGRES_DB: This allows you to specify the name of the default database
that will be created when the container is launched.
If you have a local PostgreSQL instance, stop it first to avoid port conflicts:
sudo systemctl stop postgresql
A successful launch of the container will output logs confirming database
readiness:
Output
. . .
2024-07-28 20:33:22.578 UTC [1] LOG: starting PostgreSQL 16.3 (Debian 16.3-1.pgdg120+1) on x86_64-pc-linux-gnu, compiled by gcc (Debian 12.2.0-14) 12.2.0, 64-bit
2024-07-28 20:33:22.578 UTC [1] LOG: listening on IPv4 address "0.0.0.0", port 5432
2024-07-28 20:33:22.578 UTC [1] LOG: listening on IPv6 address "::", port 5432
2024-07-28 20:33:22.591 UTC [1] LOG: listening on Unix socket "/var/run/postgresql/.s.PGSQL.5432"
2024-07-28 20:33:22.606 UTC [29] LOG: database system was interrupted; last known up at 2024-07-28 20:32:16 UTC
2024-07-28 20:33:22.672 UTC [29] LOG: database system was not properly shut down; automatic recovery in progress
2024-07-28 20:33:23.094 UTC [1] LOG: database system is ready to accept connections
With the database running, open a new terminal, navigate to the project
directory, then copy .env.sample to .env and modify the POSTGRES entries
within as needed:
cp .env.sample .env
.env
NODE_ENV=production
LOG_LEVEL=info
PORT=5000
POSTGRES_DB=url-shortener
POSTGRES_USER=postgres
POSTGRES_PASSWORD=admin
POSTGRES_HOST=localhost
With the .env file created and updated, apply the database migrations and
start the Node.js application. The script names below are assumptions; check
the project's package.json for the exact commands:
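npm run migrate
npm start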
The console output will indicate that the application is running on port 5000:
Output
. . .
{"level":"info","time":"2024-07-28T20:58:44.149Z","pid":236289,"host":"fedora","msg":"Server listening at http://[::1]:5000"}
{"level":"info","time":"2024-07-28T20:58:44.149Z","pid":236289,"host":"fedora","msg":"Server listening at http://127.0.0.1:5000"}
{"level":"info","time":"2024-07-28T20:58:44.149Z","pid":236289,"host":"fedora","msg":"URL Shortener is running in development mode → PORT http://[::1]:5000"}
You may now open http://localhost:5000 to see the application interface:
Test its functionality by shortening a URL and clicking Visit to verify
redirection:
With a functional Node.js application set up, you're now ready to containerize
it using Docker.
Side note: Monitor your Dockerized app with Better Stack
Once your Node.js app is running in a container, make sure it stays reachable. Better Stack Uptime Monitoring checks your endpoints from multiple regions and alerts you fast when something breaks, with timelines, error details, and screenshots so you can pinpoint the cause.
Step 2 — Creating a Docker image for your Node.js app
To deploy our Node.js application, we'll create a Docker image using a
Dockerfile. This text file
provides instructions to the Docker engine on constructing the image.
Think of a Docker image as a template capturing your application and its
environment. It includes configuration details and a layered filesystem
containing the software required to run your application. A Docker container is
a live, isolated environment created from this image where your application
runs.
Essentially, Docker images define the application's build and packaging process,
while containers are the running instances.
Let's start by examining the format of the Dockerfile which is shown below:
Dockerfile
# Comment
INSTRUCTION arguments
Any line that begins with a # is a comment (except
parser directives),
while other lines must contain an instruction followed by its arguments.
Although instruction names are not case-sensitive, they are conventionally
written in uppercase to distinguish them from arguments.
Let's create a Dockerfile to build our URL shortener's Docker image:
code Dockerfile
The first decision to make when writing a Dockerfile is choosing an
appropriate base image. This image must be capable of running Node.js code so
that your application can run within the container. A common choice is the
official Node.js image which comes
pre-installed with the necessary runtime environment for running Node.js
applications.
I recommend using the latest LTS version (v20.16.0 at the time of writing) to
guarantee stability when deploying to production. For the most up-to-date
information, check the
Node.js releases page.
There are also several variants to pick from, but I recommend using the latest
Alpine variant as it is known for being exceptionally lightweight and simple.
Go ahead and enter the base image into your Dockerfile as follows:
Dockerfile
# Use Node 20.16 alpine as base image
FROM node:20.16-alpine3.19 AS base
Next, change the working directory within the image to /build with the
WORKDIR instruction:
Dockerfile
. . .
# Change the working directory to /build
WORKDIR /build
This instruction creates the /build directory if it doesn't already exist and
makes it the working directory for all subsequent instructions, sparing you
explicit mkdir and cd commands.
Installing your application dependencies is the next step. Before proceeding,
you need to copy the package.json and package-lock.json files into the image
with the COPY directive:
Dockerfile
. . .
# Copy the package.json and package-lock.json files to the /build directory
COPY package*.json ./
Copying package.json and package-lock.json first optimizes Docker builds by
leveraging layer caching. Since
dependencies change less frequently than application code, this ensures that the
subsequent dependency installation instruction is only re-executed when those
files are modified, leading to significantly faster build times when only the
source code changes.
Let's add the instruction to install the application dependencies next:
Dockerfile
. . .
# Install production dependencies and clean the cache
RUN npm ci --omit=dev && npm cache clean --force
Here, we've opted for npm ci over npm install for faster and more consistent
builds. It removes any existing node_modules directory and installs production
dependencies (without devDependencies) precisely as listed in
package-lock.json without modifying this file. Clearing the cache also helps
reduce image size since it's not needed within the container.
Now that you've installed the application dependencies, the next step is to copy
the rest of the source code into the image:
Dockerfile
. . .
# Copy the entire source code into the container
COPY . .
Next, you can document the ports that a container built on this image will
listen on using the EXPOSE instruction:
Dockerfile
. . .
# Document the port that may need to be published
EXPOSE 5000
Finally, specify the command to run when starting the application:
Dockerfile
. . .
# Start the application
CMD ["node", "src/server.js"]
The CMD instruction defines the command for launching the application in a
container based on this image. We're using the node command directly here
instead of npm start because npm doesn't forward termination signals such as
SIGTERM to the Node.js process, which prevents a graceful shutdown.
The final Dockerfile is thus:
Dockerfile
# Use Node 20.16 alpine as base image
FROM node:20.16-alpine3.19 AS base
# Change the working directory to /build
WORKDIR /build
# Copy the package.json and package-lock.json files to the /build directory
COPY package*.json ./
# Install production dependencies and clean the cache
RUN npm ci --omit=dev && npm cache clean --force
# Copy the entire source code into the container
COPY . .
# Document the port that may need to be published
EXPOSE 5000
# Start the application
CMD ["node", "src/server.js"]
With these instructions in place, you're ready to build the Docker image.
Step 3 — Building the Docker image and launching a container
Having prepared your Dockerfile, you can create the Docker image using the
docker build command. Before proceeding, create a .dockerignore file in your
project's root directory to exclude unnecessary files from the build context.
This helps with reducing image size, speeding up builds, and preventing
accidental inclusion of sensitive information.
In our case, we'll ignore any .env files, the .git directory, and the
node_modules directory:
.dockerignore
**/*.env
# Dependencies
**/node_modules
# Other unnecessary files or directories
.git/
Now, build the Docker image from your project root:
docker build . -t url-shortener
The -t flag assigns the url-shortener name to the image. You can also add a
specific tag, such as 0.1.0, using the command below. Without a tag, Docker
defaults to latest.
docker build . -t url-shortener:0.1.0
After the build, verify the new image exists in your local library:
docker image ls url-shortener
Output
REPOSITORY TAG IMAGE ID CREATED SIZE
url-shortener latest 20f75e2d7b45 17 seconds ago 183MB
Now, run the image as a Docker container with docker run:
docker run --rm --name url-shortener-app --publish 5000:5000 url-shortener
However, this command fails due to missing environment variables:
Output
/build/node_modules/env-schema/index.js:85
const error = new Error(ajv.errorsText(ajv.errors, { dataVar: 'env' }))
^
Error: env must have required property 'POSTGRES_USER', env must have required property 'POSTGRES_PASSWORD', env must have required property 'POSTGRES_DB'
at envSchema (/build/node_modules/env-schema/index.js:85:19)
at file:///build/src/config/env.js:46:16
at ModuleJob.run (node:internal/modules/esm/module_job:222:25)
at async ModuleLoader.import (node:internal/modules/esm/loader:316:24)
at async asyncRunEntryPointWithESMLoader (node:internal/modules/run_main:123:5) {
. . .
The error indicates that the application could not find some required
environment variables when starting the container. This makes sense since the
.env file was excluded from the Docker image (via .dockerignore) to prevent
secrets from leaking into it.
To resolve this, pass environment variables from the .env file to the
container through the --env-file flag:
docker run --rm --name url-shortener-app --publish 5000:5000 --env-file .env url-shortener
Now we get a different error:
Output
{"level":"fatal","time":"2024-07-28T23:07:58.786Z","pid":17,"host":"54431553635f","err":{"type":"ConnectionRefusedError","message":"","stack":"SequelizeConnectionRefusedError\n at Client.
_connectionCallback (/build/node_modules/sequelize/lib/dialects/postgres/connection-manager.js:133:24)\n at Client._handleErrorWhileConnecting (/app/node_modules/pg/lib/client.js:327:19)\n
at Client._handleErrorEvent (/build/node_modules/pg/lib/client.js:337:19)\n at Connection.emit (node:events:519:28)\n at Socket.reportStreamError (/app/node_modules/pg/lib/connection.
js:58:12)\n at Socket.emit (node:events:519:28)\n at emitErrorNT (node:internal/streams/destroy:169:8)\n at emitErrorCloseNT (node:internal/streams/destroy:128:3)\n at process.pr
ocessTicksAndRejections (node:internal/process/task_queues:82:21)","name":"SequelizeConnectionRefusedError","parent":{"type":"AggregateError","message":"","stack":"AggregateError [ECONNREFUS
ED]: \n at internalConnectMultiple (node:net:1118:18)\n at afterConnectMultiple (node:net:1685:7)","aggregateErrors":[{"type":"Error","message":"connect ECONNREFUSED ::1:5432","stack":
"Error: connect ECONNREFUSED ::1:5432\n at createConnectionError (node:net:1648:14)\n at afterConnectMultiple (node:net:1678:16)","errno":-111,"code":"ECONNREFUSED","syscall":"connect"
,"address":"::1","port":5432},{"type":"Error","message":"connect ECONNREFUSED 127.0.0.1:5432","stack":"Error: connect ECONNREFUSED 127.0.0.1:5432\n at createConnectionError (node:net:1648
:14)\n at afterConnectMultiple (node:net:1678:16)","errno":-111,"code":"ECONNREFUSED","syscall":"connect","address":"127.0.0.1","port":5432}],"code":"ECONNREFUSED"},"original":{"type":"Ag
gregateError","message":"","stack":"AggregateError [ECONNREFUSED]: \n at internalConnectMultiple (node:net:1118:18)\n at afterConnectMultiple (node:net:1685:7)","aggregateErrors":[{"ty
pe":"Error","message":"connect ECONNREFUSED ::1:5432","stack":"Error: connect ECONNREFUSED ::1:5432\n at createConnectionError (node:net:1648:14)\n at afterConnectMultiple (node:net:16
78:16)","errno":-111,"code":"ECONNREFUSED","syscall":"connect","address":"::1","port":5432},{"type":"Error","message":"connect ECONNREFUSED 127.0.0.1:5432","stack":"Error: connect ECONNREFUS
ED 127.0.0.1:5432\n at createConnectionError (node:net:1648:14)\n at afterConnectMultiple (node:net:1678:16)","errno":-111,"code":"ECONNREFUSED","syscall":"connect","address":"127.0.0.
1","port":5432}],"code":"ECONNREFUSED"}},"msg":""}
{"level":"info","time":"2024-07-28T23:07:58.787Z","pid":17,"host":"54431553635f","msg":"Server closed"}
This occurs because the application is configured to connect to a PostgreSQL
instance on localhost, but inside the container, localhost refers to the
container itself, where PostgreSQL isn't running.
To enable communication between the application container and the
url-shortener-db container, create a custom
Docker network (we'll call it url-shortener, matching the name used in the
rest of this tutorial):
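docker network create url-shortener
Then connect the running database container to it:
docker network connect url-shortener url-shortener-db
Next, change the POSTGRES_HOST entry in your .env file from localhost to the
name of the database container:
.env
. . .
POSTGRES_HOST=url-shortener-db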
The POSTGRES_HOST variable has been updated from localhost to
url-shortener-db, which makes it possible for the application container to
communicate with the PostgreSQL instance running in the url-shortener-db
container.
Finally, launch the application once again and apply the --network option like
this:
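docker run --rm --name url-shortener-app --publish 5000:5000 --env-file .env --network url-shortener url-shortener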
Output
. . .
{"msg":"Connected to database"}
{"msg":"Server listening at http://[::1]:5000"}
{"msg":"Server listening at http://127.0.0.1:5000"}
{"msg":"URL Shortener is running in production mode → PORT http://[::1]:5000"}
However, when you try to visit http://localhost:5000, you will observe that
the application doesn't load:
When running on a host machine, Fastify (like many servers) listens for
connections only on the loopback interface (127.0.0.1) by default. Within a
Docker container, this isn't sufficient, as the host and other network devices
can't reach the application.
To enable external connections, Fastify needs to bind to an address accessible
outside the container. Using 0.0.0.0 achieves this by binding to all available
network interfaces, allowing Fastify to receive connections from any reachable
IP address.
Open your server.js file and modify the following line by adding the host
property:
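src/server.js
. . .
// Bind to all interfaces so the app is reachable from outside the container
await fastify.listen({ port: env.PORT, host: '0.0.0.0' });
The exact call in your src/server.js may differ; the key change is supplying
host: '0.0.0.0' alongside the port (env.PORT here stands in for however the
app reads its port).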
Save the file, then quit the existing url-shortener-app container with
Ctrl-C. Now build a new version of your url-shortener Docker image to
replace the existing one:
docker build . -t url-shortener
Once it finishes, relaunch the container:
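docker run --rm --name url-shortener-app --publish 5000:5000 --env-file .env --network url-shortener url-shortener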
{msg":"URL Shortener is running in production mode → PORT http://0.0.0.0:5000"}
The application should now be accessible at http://localhost:5000.
Step 4 — Setting up a web server
Web servers like Nginx or Caddy are often
placed before Node.js applications to enhance performance and security. They
excel at tasks like load balancing, reverse proxying, serving static assets, and
handling SSL/TLS termination and caching. While Node.js can handle some of
these, a dedicated web server is often a more performant and robust solution for
production environments.
Let's set up a Caddy instance as a reverse proxy for our Node.js application.
Keep both the application and PostgreSQL containers running, then open a new
terminal.
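Caddy reads its configuration from a Caddyfile, which you'll mount into the
container. Here's a minimal sketch that proxies all traffic to the app
container (assuming it remains reachable as url-shortener-app on port 5000):
Caddyfile
:80 {
	reverse_proxy url-shortener-app:5000
}
Now launch a Caddy container based on the official Alpine image (the mount
paths below follow the official image's conventions):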
docker run --rm --name url-shortener-caddy-server -p 80:80 -v caddy-data:/data -v caddy-config:/config -v "$PWD/Caddyfile:/etc/caddy/Caddyfile" --network url-shortener caddy:alpine
This command includes persistent volumes (caddy-config and caddy-data)
to store configuration and data, respectively. It also mounts your custom
Caddyfile and connects to the url-shortener network for communication with
the app container.
Now, when you access http://localhost, you should see the URL shortener
application functioning as it did before, but with the added benefits of a
dedicated web server. Do check out the
Caddy docs for more details.
Step 5 — Using Docker Compose to manage multiple containers
I'm sure you'll agree that running multiple containers to get your application
going is quite tedious, especially if your application isn't a monolith like in
this demo but contains multiple microservices all running in standalone
containers.
This is where Docker Compose comes in.
Instead of manually creating and managing individual Docker containers using the
docker run command, Compose offers a solution by allowing you to define and
manage multi-container applications within a single YAML file. This streamlines
your workflow and provides a structured approach for complex applications.
Let's create a docker-compose.yml file in your project's root directory to
configure your Node.js application and its associated services. Here's a
sketch of what it can look like, reconstructed from the service descriptions
that follow; treat the health check settings, volume names, and mount paths as
assumptions to adapt to your project:
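docker-compose.yml
services:
  app:
    container_name: url-shortener-app
    build:
      context: .
    env_file: .env
    depends_on:
      postgres:
        condition: service_healthy
    networks:
      - url-shortener
  caddy:
    container_name: url-shortener-caddy-server
    image: caddy:alpine
    ports:
      - '80:80'
    volumes:
      - ./Caddyfile:/etc/caddy/Caddyfile
      - caddy-data:/data
      - caddy-config:/config
    networks:
      - url-shortener
  postgres:
    container_name: url-shortener-db
    image: postgres:alpine
    environment:
      POSTGRES_USER: ${POSTGRES_USER}
      POSTGRES_PASSWORD: ${POSTGRES_PASSWORD}
      POSTGRES_DB: ${POSTGRES_DB}
    ports:
      - '5432:5432'
    volumes:
      - pg-data:/var/lib/postgresql/data
    healthcheck:
      test: ['CMD-SHELL', 'pg_isready -U ${POSTGRES_USER} -d ${POSTGRES_DB}']
      interval: 5s
      timeout: 5s
      retries: 5
    networks:
      - url-shortener
networks:
  url-shortener:
volumes:
  pg-data:
  caddy-data:
  caddy-config: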
app: Your URL shortener application, which depends on a healthy postgres
service.
caddy: A Caddy web server acting as a reverse proxy, serving on port 80 and
using a custom Caddyfile.
postgres: An Alpine Linux-based PostgreSQL database, configured with
specific settings and a health check to ensure readiness.
These services share the url-shortener network and utilize volumes to persist
data. Placeholders are used for environment variables, which will be populated
from your .env file when the containers are started. Notably, port 5000 isn't
published anymore, making the app accessible only via Caddy at
http://localhost.
Before bringing up your services with Docker Compose, stop and remove the
existing containers:
docker stop $(docker ps -a -q)
docker container prune -f
Now, run the following command to launch all three services with Compose:
docker compose up --build
This builds the app image and launches all three services in the foreground.
You'll notice that the respective logs produced by each container are prefixed
with the container name as shown below:
Output
[+] Running 5/5
✔ Network fastify-url-shortener_url-shortener Created 0.2s
✔ Volume "fastify-url-shortener_pg-data" Created 0.0s
✔ Container url-shortener-db Created 0.1s
✔ Container url-shortener-app Created 0.1s
✔ Container url-shortener-caddy-server Created 0.1s
Attaching to url-shortener-app, url-shortener-caddy-server, url-shortener-db
. . .
url-shortener-db | 2024-07-30 12:43:54.957 UTC [1] LOG: starting PostgreSQL 16.3 on x86_64-pc-linux-musl, compiled by gcc (Alpine 13.2.1_git20240309) 13.2.1 20240309, 64-bit
. . .
url-shortener-caddy-server | {"level":"info","ts":1722343436.5617106,"logger":"tls","msg":"finished cleaning storage units"}
. . .
url-shortener-app | {"level":"info","time":"2024-07-30T12:45:21.950Z","pid":17,"host":"d51e7abc421b","reqId":"
You can also run the services in the background through the --detach option.
Stop the existing instances with Ctrl-C first, then run:
docker compose up --build --detach
Output
[+] Running 3/3
✔ Container url-shortener-db Healthy 10.8s
✔ Container url-shortener-app Started 11.2s
✔ Container url-shortener-caddy-server Started 11.5s
Verify they're running with docker compose ps:
docker compose ps
Output
NAME IMAGE COMMAND SERVICE CREATED STATUS PORTS
url-shortener-app fastify-url-shortener-app "docker-entrypoint.s…" app 3 minutes ago Up 2 minutes
url-shortener-caddy-server caddy:alpine "caddy run --config …" caddy 3 minutes ago Up 2 minutes 443/tcp, 0.0.0.0:80->80/tcp, :::80->80/tcp, 2019/tcp, 443/udp
url-shortener-db postgres:alpine "docker-entrypoint.s…" postgres 3 minutes ago Up 3 minutes (healthy) 0.0.0.0:5432->5432/tcp, :::5432->5432/tcp
To stop and remove the running containers, execute:
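docker compose down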
There you have it! You can now launch all your services and their dependencies
with a single command and stop them again as needed. Creating additional
services is also as easy as adding a new service definition in the
docker-compose.yml file.
Let's move on now to exploring how to develop your application directly within
Docker.
Side note: Visualize your container logs in Better Stack
With Better Stack Logs, you can stream logs in real time, search across containers, and spot patterns like failing health checks or error spikes after a deploy.
Step 6 — Developing your Node.js application in Docker
Now that you've mastered image creation and multi-container management with
Docker Compose, let's transform Docker into a productive development
environment. This will simplify setup on your local machine and ensure a
standardized workflow for the rest of your team.
Open your Dockerfile and update it to the multi-stage setup below. The base
and development stages are a sketch reconstructed from the description that
follows; adapt them to your project:
Dockerfile
# Use Node 20.16 alpine as base image
FROM node:20.16-alpine3.19 AS base

# Create a development stage based on the "base" image
FROM base AS development
# Install all dependencies (including devDependencies) in /node
WORKDIR /node
COPY package*.json ./
RUN npm install
# Run the app from /node/app, where the source code will be mounted
WORKDIR /node/app
# Start the application in watch mode through nodemon
CMD ["npm", "run", "dev"]

# Create a production stage based on the "base" image
FROM base AS production
# Change the working directory to /build
WORKDIR /build
# Copy the package.json and package-lock.json files to the /build directory
COPY package*.json ./
# Install production dependencies and clean the cache
RUN npm ci --omit=dev && npm cache clean --force
# Copy the entire source code into the container
COPY . .
# Document the port that may need to be published
EXPOSE 5000
# Start the application
CMD ["node", "src/server.js"]
We've introduced a development stage inheriting from the base image,
designed to execute npm run dev from the /node/app directory. The production
stage, intended for production builds, retains the original instructions.
The application dependencies are now installed in the /node directory, without
omitting the devDependencies this time around. This is essential since you
can't rely on tooling installed on each developer's host OS; the container
itself must provide everything needed for development.
You'll also notice that we didn't copy the project's contents into /node/app
for the development stage; instead, we'll mount the local directory for live
development and enable automatic reloads via nodemon.
Next, update the app portion of your docker-compose.yml file. Here's a
sketch based on the changes described next:
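docker-compose.yml
services:
  app:
    container_name: url-shortener-app
    build:
      context: .
      target: ${NODE_ENV}
    env_file: .env
    volumes:
      - .:/node/app
      - /node/app/node_modules
    depends_on:
      postgres:
        condition: service_healthy
    networks:
      - url-shortener
  . . .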
The services.app.build.target value is now set to ${NODE_ENV} so that the
appropriate stage is built according to the value supplied through your .env
file.
The current directory is also mounted to the /node/app directory within the
container. On its own, this would mount the local node_modules directory into
the container too, negating the effect of installing the dependencies to
/node in the first place.
To mitigate this, we've added an anonymous volume to hide the container's local
node_modules directory. Node.js's module resolution algorithm ensures that the
node_modules directory in the /node directory is found and used as a result.
To launch the application in development mode, all you need to do is change your
NODE_ENV entry to development:
.env
NODE_ENV=development
. . .
Then run:
docker compose up --build
After building the Docker image, everything should start up the same way as
before:
Output
[+] Running 3/3
✔ Container url-shortener-db Created 0.0s
✔ Container url-shortener-app Recreated 0.2s
✔ Container url-shortener-caddy-server Recreated 0.1s
Attaching to url-shortener-app, url-shortener-caddy-server, url-shortener-db
. . .
url-shortener-db | 2024-07-30 14:43:31.061 UTC [1] LOG: database system is ready to accept connections
url-shortener-app |
. . .
url-shortener-app | {"level":"info","time":"2024-07-30T14:43:42.459Z","pid":18,"host":"5d53b2894b01","msg":"URL Shortener is running in development mode → PORT http://0.0.0.0:5000"}
You'll notice that this time around, the dev script was executed and nodemon
is now watching for changes. Since we mounted the project directory into the
Docker container, editing files as usual will trigger a restart of the
application.
You can test this out by adding a simple health check route to your application
code:
src/routes/routes.js
import urlSchema from '../schemas/url.schema.js';
import urlController from '../controllers/url.controller.js';
import rootController from '../controllers/root.controller.js';
import errorHandler from '../middleware/error.js';
export default async function fastifyRoutes(fastify) {
  fastify.get('/', rootController.render);

  // New: a simple health check endpoint
  fastify.get('/health', async () => {
    return { status: 'ok' };
  });

  fastify.post(
    '/shorten',
    {
      schema: {
        body: urlSchema,
      },
    },
    urlController.shorten
  );

  // Register the shared error handler (assumed from the import above)
  fastify.setErrorHandler(errorHandler);
}
Once you save the file, you'll notice that the application restarts in the
url-shortener-app container:
Output
. . .
url-shortener-app | [nodemon] restarting due to changes...
url-shortener-app | [nodemon] starting `node src/server.js`
url-shortener-app | {"level":"debug","time":"2024-07-30T14:58:22.430Z","pid":67,"host":"d07124bccc28","msg":"Executing (default): SELECT 1+1 AS result"}
url-shortener-app | {"level":"debug","time":"2024-07-30T14:58:22.432Z","pid":67,"host":"d07124bccc28","msg":"Executing (default): SELECT table_name FROM information_schema.tables WHERE table_schema = 'public' AND table_name = 'urls'"}
url-shortener-app | {"level":"debug","time":"2024-07-30T14:58:22.437Z","pid":67,"host":"d07124bccc28","msg":"Executing (default): SELECT i.relname AS name, ix.indisprimary AS primary, ix.indisunique AS unique, ix.indkey AS indkey, array_agg(a.attnum) as column_indexes, array_agg(a.attname) AS column_names, pg_get_indexdef(ix.indexrelid) AS definition FROM pg_class t, pg_class i, pg_index ix, pg_attribute a WHERE t.oid = ix.indrelid AND i.oid = ix.indexrelid AND a.attrelid = t.oid AND t.relkind = 'r' and t.relname = 'urls' GROUP BY i.relname, ix.indexrelid, ix.indisprimary, ix.indisunique, ix.indkey ORDER BY i.relname;"}
url-shortener-app | {"level":"info","time":"2024-07-30T14:58:22.442Z","pid":67,"host":"d07124bccc28","msg":"Connected to database"}
url-shortener-app | {"level":"info","time":"2024-07-30T14:58:22.464Z","pid":67,"host":"d07124bccc28","msg":"Server listening at http://0.0.0.0:5000"}
url-shortener-app | {"level":"info","time":"2024-07-30T14:58:22.464Z","pid":67,"host":"d07124bccc28","msg":"URL Shortener is running in development mode → PORT http://0.0.0.0:5000"}
You can now send a request to http://localhost/health and you should get the
correct response:
curl http://localhost/health
Output
{"status":"ok"}
The only time you'll need to rebuild the image during development is when you
add or remove a dependency in your project.
With this setup, you only need access to the source code and Docker installed on
the host machine to create a complete Node.js development environment with a
single command.
If you're a VS Code user, you might want to check out the
Dev Containers extension
to enhance your local development workflow even further. The relevant tooling
for other editors may be found here.
Side note: Trace Docker requests with eBPF (no code changes)
With Better Stack, you can use eBPF instrumentation to capture request traces automatically, even in containers, without adding tracing libraries or changing your app code.
Final thoughts
Throughout this guide, you've gained hands-on experience in preparing a Docker
image for your Node.js application and using it for local development or
production deployment.
But this is just the beginning of your Docker journey! There are countless
opportunities for optimizing your development and deployment workflows even
further.