OpenTelemetry is an observability framework that simplifies the process of
capturing and standardizing telemetry data—such as logs, traces, and
metrics—across various programming languages and platforms.
It provides a consistent approach to instrumenting applications, regardless of
the programming language, frameworks, or observability tools you're using.
In this guide, we'll explore how to use OpenTelemetry to add tracing to your
Node.js applications.
By the end of this tutorial, you'll be equipped to monitor and analyze your
application's performance through detailed trace data, identify bottlenecks, and
optimize its behavior effectively.
In this tutorial, you'll learn how to instrument a Node.js application to
generate traces using OpenTelemetry.
The application you'll
work with is designed for converting JSON to YAML. It features GitHub social
login with cookie-based session management to prevent unauthorized access.
To get started, clone the application repository to your local machine:
The auth directory contains the service responsible for authentication and
session management, while the converter directory hosts the service that
handles the conversion from JSON to YAML.
All user requests are processed through the auth service before reaching the
converter service. The setup also relies on Redis, PostgreSQL, and the GitHub
API to illustrate how tracing can help you understand service interactions.
Open the GitHub Developer Settings page at
https://github.com/settings/apps in your browser:
Click the New GitHub App button, provide a suitable name, and set the
Homepage URL to http://localhost:8000 with the Callback URL set to
http://localhost:8000/auth/github/callback.
Scroll down the page and make sure to uncheck the Webhook option, as it
won't be needed for this tutorial:
Once you're done, click Create GitHub App at the bottom of the page:
On the resulting page, click the Generate a new client secret button, then
copy both the generated token and the Client ID:
Now, return to your terminal, open the .env file in your text editor, and
update the highlighted lines with the copied values:
code .env
.env
. . .
GITHUB_CLIENT_ID=<your_github_client_id>
GITHUB_CLIENT_SECRET=<your_github_client_secret>
. . .
Finally, launch the application and its associated services. You can start the
entire setup locally using Docker Compose:
docker compose up -d --build
This command builds and starts the following containers:
Output
. . .
✔ Service auth Built 12.3s
✔ Service converter Built 1.7s
✔ Network json-to-yaml-nodejs_json-to-yaml-network Created 0.2s
✔ Container json-to-yaml-db Healthy 11.4s
✔ Container json-to-yaml-redis Healthy 11.4s
✔ Container json-to-yaml-auth Started 11.4s
✔ Container json-to-yaml-converter Started 11.6s
The auth container handles authentication and is accessible at
http://localhost:8000. The converter container runs the conversion service on
port 8001 inside the Docker network, but it isn't exposed to localhost. Both
services use nodemon for live reloading on file changes.
The db container runs PostgreSQL, while redis runs Redis.
With everything up and running, navigate to http://localhost:8000 in your
browser to access the application UI:
After authenticating with GitHub, you'll be redirected to the following page:
Input a valid JSON object in the provided field, and click the Convert
button:
You should see the resulting YAML response displayed in your browser:
You've successfully set up and explored the demo application in this initial
step.
In the upcoming sections, you'll learn how to instrument the services with the
Node.js OpenTelemetry SDK and visualize the traces in Jaeger.
Skip manual OpenTelemetry instrumentation
While manual instrumentation gives you control over your traces, Better Stack Tracing uses eBPF to automatically instrument your Kubernetes or Docker workloads without code changes. Your traces start flowing immediately, and databases get recognized automatically without configuring exporters or collectors.
Predictable pricing and up to 30x cheaper than Datadog. Start free in minutes.
Step 2 — Initializing the Node.js OpenTelemetry SDK
Now that you've set up the sample application, it's time to implement basic
trace instrumentation using OpenTelemetry. This will allow the application to
generate traces for every HTTP request it processes.
To get started, you'll need to set up the OpenTelemetry SDK in your application.
Install the required dependencies by running the following command in your
terminal:
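The four packages described below can be installed in one go. This assumes npm as the package manager; run it wherever the relevant package.json lives (in this setup, each service directory):

```shell
npm install @opentelemetry/sdk-node \
  @opentelemetry/auto-instrumentations-node \
  @opentelemetry/sdk-trace-node \
  @opentelemetry/api
```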
This command installs these OpenTelemetry SDK components:
@opentelemetry/sdk-node:
Provides the core functionality for initializing the OpenTelemetry SDK in a
Node.js environment.
@opentelemetry/auto-instrumentations-node:
Automatically instruments supported libraries and frameworks, allowing you to
trace requests without manually adding instrumentation code.
@opentelemetry/sdk-trace-node:
Contains the tracing implementation for Node.js applications, enabling the
capture of span data for monitored operations.
@opentelemetry/api:
Defines the data types and interfaces for creating and manipulating telemetry
data according to the OpenTelemetry specification.
After installing the necessary packages, create a file named otel.js in the
root directory of your project and insert the following code into this file:
otel.js
import { getNodeAutoInstrumentations } from "@opentelemetry/auto-instrumentations-node";
import { NodeSDK } from "@opentelemetry/sdk-node";
import { ConsoleSpanExporter } from "@opentelemetry/sdk-trace-node";
const sdk = new NodeSDK({
traceExporter: new ConsoleSpanExporter(),
instrumentations: [getNodeAutoInstrumentations()],
});
sdk.start();
This code sets up OpenTelemetry to automatically instrument a Node.js
application and export the generated trace data directly to the console.
The getNodeAutoInstrumentations() function enables automatic instrumentation
for supported libraries and frameworks, such as Fastify, Redis, and PostgreSQL
(pg), which are used in both the auth and converter services.
By adopting this approach, tracing becomes much simpler to implement, as it
eliminates the need to manually instrument each library.
It does come at the cost of increased dependency size, so if this is a concern,
search the OpenTelemetry Registry
for the specific instrumentation packages covering the libraries you're actually using.
To activate tracing in both services, simply register the otel.js file at the
beginning of the auth/server.js and converter/server.js files as shown
below:
[auth/server.js]
import '../otel.js';
. . .
[converter/server.js]
import '../otel.js';
. . .
With this in place, you'll start seeing the trace data in the console when you
view the logs for the running services:
docker compose logs -f auth converter
If you visit http://localhost:8000/auth in your browser, you will see an
output similar to what's shown below:
In this example, the resource section includes critical information about the
application, such as its name, SDK details, process attributes, and the host
environment, which helps identify where the span originated.
You'll notice that the service.name attribute is reported as
unknown_service. This is because the OTEL_SERVICE_NAME environment
variable isn't defined at the moment.
To set it up, add the following to the bottom of your .env file:
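For example, for the auth service (the name is your choice; each service should get a distinct value, so the converter service would use something like converter-service instead):

```
OTEL_SERVICE_NAME=auth-service
```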
The instrumentationScope shows that the instrumentation was handled by the
@opentelemetry/instrumentation-http library, while the traceId uniquely
identifies the trace that this span belongs to.
Since this is a root span, the parentId is undefined, and the kind property
indicates that this is a server-side span (represented by the value of 1). The
timestamp denotes when the span began, and the duration reflects how long
the request took to process in microseconds.
The attributes section captures various HTTP and network details, providing
context about the request's origin, destination, and behavior.
In the next step, you'll set up the OpenTelemetry Collector to collect and
export the raw span data to Jaeger, an open source distributed tracing backend
tool.
Step 3 — Setting up the OpenTelemetry Collector
The OpenTelemetry Collector is the recommended way to
collect telemetry data from instrumented services, process them, and export them
to one or more observability backends.
You can set it up by adding a new collector service to your
docker-compose.yml file as follows:
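A minimal sketch of such a service, assuming the network name used elsewhere in this setup and the container name seen later in the compose output:

```yaml
  collector:
    image: otel/opentelemetry-collector:latest
    container_name: json-to-yaml-collector
    volumes:
      # Mount the local config over the image's default config path
      - ./otelcol.yaml:/etc/otelcol/config.yaml
    networks:
      - json-to-yaml-network
```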
The collector service uses the
otel/opentelemetry-collector
image and mounts the (currently nonexistent) local configuration file
(otelcol.yaml) into the container. If you're using the
Contrib distribution
instead, ensure that your configuration file is mounted to the appropriate path
like this:
This OpenTelemetry Collector configuration defines how the collector receives,
processes, and exports trace data. Here's a high-level overview of its
components:
Receivers
The otlp receiver is configured to accept telemetry data via the OpenTelemetry
Protocol. It specifies that the collector will accept incoming trace data from
applications or services sending OTLP-formatted telemetry data over HTTP at the
json-to-yaml-collector:4318 endpoint.
Processors
The configuration includes a batch processor, which is responsible for
batching multiple trace data points together before they are sent to the
exporter. This improves performance and reduces the number of outgoing requests
by sending larger, aggregated payloads.
Exporters
The otlp/jaeger exporter is set up to send the processed trace data to a
Jaeger instance. It uses the json-to-yaml-jaeger:4317 endpoint, where the
jaeger service is expected to be reachable (we'll set this up in the next
step).
The configuration also specifies tls.insecure: true, indicating that the
exporter will not verify the TLS certificate of the Jaeger endpoint. This is
useful for development or testing environments but should not be used in
production settings.
Service
The service section defines the data processing pipeline for traces. In this
case, the pipeline for traces takes data from the otlp receiver, processes it
with the batch processor, and then exports it using the otlp/jaeger
exporter.
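Putting the pieces together, an otelcol.yaml matching the description above would look roughly like this:

```yaml
receivers:
  otlp:
    protocols:
      http:
        endpoint: json-to-yaml-collector:4318

processors:
  batch:

exporters:
  otlp/jaeger:
    endpoint: json-to-yaml-jaeger:4317
    tls:
      insecure: true

service:
  pipelines:
    traces:
      receivers: [otlp]
      processors: [batch]
      exporters: [otlp/jaeger]
```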
Once you've configured the OpenTelemetry Collector through the otelcol.yaml
file, you need to modify your Node.js instrumentation to transmit trace spans in
the OTLP format to the Collector instead of outputting them to the console.
First, you need to install the trace exporter for OTLP (http/json) through the
command below:
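The OTLP (http/json) trace exporter package is @opentelemetry/exporter-trace-otlp-http:

```shell
npm install @opentelemetry/exporter-trace-otlp-http
```

Then swap the ConsoleSpanExporter in otel.js for the OTLP exporter (a minimal sketch based on the earlier otel.js file):

```javascript
import { getNodeAutoInstrumentations } from "@opentelemetry/auto-instrumentations-node";
import { NodeSDK } from "@opentelemetry/sdk-node";
import { OTLPTraceExporter } from "@opentelemetry/exporter-trace-otlp-http";

const sdk = new NodeSDK({
  // Sends spans via OTLP over HTTP; the endpoint defaults to
  // http://localhost:4318/v1/traces unless overridden through
  // environment variables.
  traceExporter: new OTLPTraceExporter(),
  instrumentations: [getNodeAutoInstrumentations()],
});

sdk.start();
```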
In this code, a new instance of the OTLPTraceExporter is configured to send
trace data to an OTLP endpoint which is configured to be
http://localhost:4318/v1/traces by default.
Since your Collector instance is not exposed to localhost, you need to change
this endpoint through the OTEL_EXPORTER_OTLP_ENDPOINT environment variable
as follows:
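Add this to your .env file (or the services' environment in docker-compose.yml). Note that the value omits the /v1/traces path, which the exporter appends automatically:

```
OTEL_EXPORTER_OTLP_ENDPOINT=http://json-to-yaml-collector:4318
```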
This updated value corresponds to the Collector's hostname within our Docker
Compose setup and the port it listens on for OTLP data over HTTP. This now means
that the generated trace data will be sent to
http://json-to-yaml-collector:4318/v1/traces.
Under the hood, the exporter is also configured to use a BatchSpanProcessor by
default so that the generated spans are batched before being exported.
In the next step, you'll configure a Jaeger instance to receive the trace data
from the OpenTelemetry Collector.
Step 4 — Setting up Jaeger
Before you can visualize your traces, you need to set up a Jaeger instance to
ingest the data from the Collector. This is easy to do through the
jaegertracing/all-in-one image,
which provides a convenient way to run all of Jaeger's backend components and
its user interface in a single container.
Open your docker-compose.yml file and modify it as follows:
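A sketch of the new service, assuming the container and network names used by the rest of this setup:

```yaml
  jaeger:
    image: jaegertracing/all-in-one:latest
    container_name: json-to-yaml-jaeger
    ports:
      # Expose only the Jaeger UI; the Collector reaches the OTLP
      # port (4317) over the internal Docker network.
      - 16686:16686
    networks:
      - json-to-yaml-network
```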
These modifications introduce a jaeger service whose UI is exposed at
http://localhost:16686 to make it accessible outside the Docker network.
Once you've saved the file, relaunch all the services with:
docker compose up -d --build
Output
✔ Network json-to-yaml-nodejs_json-to-yaml-network Created 0.2s
✔ Container json-to-yaml-db Healthy 11.1s
✔ Container json-to-yaml-jaeger Healthy 31.5s
✔ Container json-to-yaml-redis Healthy 11.6s
✔ Container json-to-yaml-collector Started 31.6s
✔ Container json-to-yaml-auth Started 11.6s
✔ Container json-to-yaml-converter Started 11.8s
With the services ready, head to your application at http://localhost:8000 and
generate some traces by authenticating with GitHub and converting some JSON to
YAML:
Then, open the Jaeger UI in your browser at http://localhost:16686, find the
auth-service entry, and click Find Traces:
You should see a list of traces generated by the application. Click on any one of
them to see the component spans. For example, here are the spans for an
authenticated request to the homepage:
The trace timeline shows you the chronological order of events within that
specific request, and you'll see where the interactions with Redis and
PostgreSQL occur.
In the next section, we'll look at how to customize the Node.js instrumentation
to make the generated traces even more meaningful.
Step 5 — Customizing the Node.js auto instrumentation
Now that you've started seeing your application traces in Jaeger, let's look at
some of the customization options for the automatic instrumentation.
First, let's disable the instrumentation for filesystem, network, and DNS
operations so that spans are no longer generated for the fs, net, and dns
calls you may have observed in the previous section.
You can do this by disabling the @opentelemetry/instrumentation-fs,
@opentelemetry/instrumentation-net, and @opentelemetry/instrumentation-dns
packages through the OTEL_NODE_DISABLED_INSTRUMENTATIONS environment
variable:
OTEL_NODE_DISABLED_INSTRUMENTATIONS=fs,net,dns
However, it's usually more convenient to explicitly enable only the
instrumentations you need through the OTEL_NODE_ENABLED_INSTRUMENTATIONS
variable, as in:
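For instance, to keep only the libraries this application actually uses (the exact Redis instrumentation name depends on which client library the services use; this example assumes ioredis):

```
OTEL_NODE_ENABLED_INSTRUMENTATIONS=http,fastify,ioredis,pg
```

Note that the values are the instrumentation names without the @opentelemetry/instrumentation- prefix.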
Next, let's change the span names for incoming requests from generic names like
GET or POST to more specific names like HTTP GET <endpoint>. Modify your
otel.js file as follows:
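One way to do this, assuming the OTLP exporter setup from the earlier step, is:

```javascript
import { getNodeAutoInstrumentations } from "@opentelemetry/auto-instrumentations-node";
import { NodeSDK } from "@opentelemetry/sdk-node";
import { OTLPTraceExporter } from "@opentelemetry/exporter-trace-otlp-http";

const sdk = new NodeSDK({
  traceExporter: new OTLPTraceExporter(),
  instrumentations: [
    getNodeAutoInstrumentations({
      // Keys are instrumentation package names; values are their config objects.
      "@opentelemetry/instrumentation-http": {
        requestHook: (span, request) => {
          // The hook fires for both incoming and outgoing requests;
          // only IncomingMessage (server side) exposes a .url property.
          if (request.url) {
            span.updateName(`HTTP ${request.method} ${request.url}`);
          }
        },
      },
    }),
  ],
});

sdk.start();
```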
To customize a specific instrumentation, you need to provide the name of the
instrumentation as a key to getNodeAutoInstrumentations() and its configuration
object as the value.
For example, the
@opentelemetry/instrumentation-http
package is what instruments the HTTP requests received by the server, so it's
what you need to customize here. The requestHook() updates the span name to
something more readable so that you can easily distinguish it among other spans.
After saving the file, run the command below to restart the auth and
converter services:
docker compose up -d auth converter
Once they're back up and running, generate some new spans by clicking around the
application UI. You will now observe the following traces in Jaeger with updated
span names:
To customize the other instrumentation packages, please see their respective
documentation.
Step 6 — Adding custom trace instrumentation
While instrumentation libraries capture telemetry at the system boundaries, such
as inbound/outbound HTTP requests or database calls, they don't capture what's
happening within your application itself. To achieve that, you'll need to write
custom instrumentation.
In this section, let's add custom instrumentation for the JSON to YAML
conversion function. Currently, you'll see the following trace for such
operations:
This trace shows that an HTTP POST request was received by the auth-service to
the /convert endpoint, which subsequently issued a POST request to the
converter-service's /convert-yaml endpoint.
Let's develop this trace further by adding spans for the conversion process
itself, not just the entire request.
To do this, modify your converter service as shown below. The import lines,
tracer setup, and function signature are reconstructed here for context and may
differ slightly in your copy of the repository:
import { trace } from "@opentelemetry/api";
import { stringify } from "yaml";

const tracer = trace.getTracer(process.env.OTEL_SERVICE_NAME);

function convertToYAML(request, reply) {
  const parseSpan = tracer.startSpan("parse json body");
  const body = JSON.parse(request.body);
  parseSpan.end();

  const span = tracer.startSpan("convert json to yaml");
  const yaml = stringify(body);
  span.end();
  reply.send(yaml);
}

export { convertToYAML };
This code snippet introduces some custom spans to your application.
First, it obtains a tracer instance using
trace.getTracer(process.env.OTEL_SERVICE_NAME). This tracer is associated with
your service name to provide context for the spans it generates.
Before you can obtain a working tracer with the getTracer() method, a tracer
provider must be registered with trace.setGlobalTracerProvider(). However,
since the NodeSDK already registers a global tracer provider when it starts,
this step is unnecessary here.
Once you have a tracer, you can create spans around the operations you'd like
to track. You only need to call startSpan() before the operation and
span.end() afterwards.
Restart the converter service with:
docker compose up -d converter
Then, repeat the JSON to YAML conversion in your application. When you return
to Jaeger, you'll observe that two new spans have been added to the traces
generated for such operations:
As you can see, parsing the JSON body was much quicker than the conversion from
JSON to YAML. While this particular insight isn't surprising, it illustrates how
you can trace individual operations in your services to pinpoint bottlenecks
when debugging problems.
Simplifying tracing with Better Stack
Throughout this tutorial, you've seen how to manually instrument a Node.js application with OpenTelemetry. While this approach gives you granular control, it requires installing SDK packages, configuring exporters, setting up collectors, and maintaining instrumentation code as your application evolves.
Better Stack Tracing takes a different approach using eBPF technology. Point it at your Kubernetes or Docker cluster and it automatically instruments your workloads without modifying your code. Here's what you get:
Traces start flowing immediately without installing SDK dependencies or configuring exporters
Databases like PostgreSQL, MySQL, Redis, and MongoDB get recognized and instrumented automatically
Context propagation works out of the box across your services
Visual "bubble up" investigation lets you select services and timeframes through drag and drop
AI analyzes your service map and logs during incidents, suggesting potential causes
OpenTelemetry-native architecture keeps your trace data portable
Works with Jaeger or any OpenTelemetry-compatible backend
Combines traces, logs, metrics, and incident management in one platform
If you'd like to try automatic instrumentation while keeping the flexibility of OpenTelemetry, check out Better Stack Tracing.
Final thoughts
I hope this tutorial has equipped you with a solid understanding of OpenTelemetry and how to instrument your Node.js applications to generate traces.
The OpenTelemetry Registry is also a valuable resource for discovering a wide range of auto-instrumentation libraries tailored to popular Node.js frameworks and libraries.
If manual instrumentation feels like too much overhead for your setup, Better Stack Tracing handles OpenTelemetry automatically with eBPF, so you can skip the SDK integration steps while keeping the same observability benefits.
You can find the complete code for this tutorial on GitHub.