OpenTelemetry provides a unified standard for
observability instrumentation, making it easier to
gather telemetry data like logs, traces, and metrics, regardless of your
specific Go framework or observability backend.
In this tutorial, we'll focus on using OpenTelemetry to instrument your Go
applications for tracing. You'll learn how to seamlessly integrate the
OpenTelemetry SDK to gain a comprehensive view of your application's behavior,
enabling effective troubleshooting and optimization.
Let's dive in!
Prerequisites
Basic Linux skills.
Prior Go development experience and a recent version of Go installed.
In this tutorial, your focus will be on instrumenting a Go application to
generate traces with OpenTelemetry.
The application is designed for converting images (such as JPEGs) to the AVIF format. It also incorporates a GitHub social login to secure the /upload route, preventing unauthorized access.
To begin, clone the application to your local machine:
Open the GitHub Developer Settings page at
https://github.com/settings/apps in your browser:
Click the New GitHub App button and provide a suitable name. Set the
Homepage URL to http://localhost:8000 and the Callback URL to
http://localhost:8000/auth/github/callback.
Also, make sure to uncheck the Webhook option as it won't be necessary for
this tutorial:
Once you're done, click Create GitHub App at the bottom of the page:
Click the Generate a new client secret button on the resulting page. Copy
both the generated token and the Client ID:
Now, return to your terminal, open the .env file in your text editor, and
update the highlighted lines with the copied values:
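Next, start the project's services with Docker Compose (the --build flag builds the application image on the first run):

docker compose up -d --build

The docker-compose.yml file defines the following services: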
The app service runs the application in development mode, utilizing
air for live reloading on file changes.
The db service runs PostgreSQL.
The migrate service runs database migrations and exits.
The redis service runs Redis.
With everything up and running, navigate to http://localhost:8000 in your
browser to access the application user interface:
After authenticating with your GitHub account, you'll see the following page:
Uploading an image will display its AVIF version in the browser, confirming the
application's functionality.
You've successfully set up and explored the demo application in this initial
step. The upcoming sections will guide you through instrumenting this program
with the OpenTelemetry API.
Step 2 — Initializing the OpenTelemetry SDK
Now that you're acquainted with the sample application, let's explore how to add
basic instrumentation using OpenTelemetry to create a trace for every HTTP
request the application handles.
The initial step involves setting up the OpenTelemetry SDK in the application.
Install the necessary dependencies with the following command:
go get go.opentelemetry.io/otel \
go.opentelemetry.io/otel/exporters/stdout/stdouttrace \
go.opentelemetry.io/otel/sdk/trace \
go.opentelemetry.io/contrib/instrumentation/net/http/otelhttp
This command installs the following OpenTelemetry components: the core otel API package, the stdouttrace exporter for writing traces to standard output, the sdk/trace package that implements the tracing SDK, and the otelhttp instrumentation library for net/http.
Note: If you're using a different framework for HTTP requests (such as
Gin), you'll need to install the appropriate
instrumentation library instead of the otelhttp instrumentation. Be sure to search the OpenTelemetry Registry to find the relevant instrumentation library and go get it.
Once the packages are installed, you need to bootstrap the OpenTelemetry SDK in
your code for distributed tracing. Place the following code within an otel.go
file in your project's root directory:
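Here's a sketch of what this otel.go file can look like. The demo project's exact listing may differ slightly, but it follows the structure described below:

otel.go

package main

import (
	"context"
	"errors"
	"time"

	"go.opentelemetry.io/otel"
	"go.opentelemetry.io/otel/exporters/stdout/stdouttrace"
	"go.opentelemetry.io/otel/sdk/trace"
)

// setupOTelSDK bootstraps the OpenTelemetry SDK and returns a shutdown
// function that flushes and releases its resources.
func setupOTelSDK(ctx context.Context) (shutdown func(context.Context) error, err error) {
	var shutdownFuncs []func(context.Context) error

	// shutdown calls every registered cleanup function and joins any errors.
	shutdown = func(ctx context.Context) error {
		var err error
		for _, fn := range shutdownFuncs {
			err = errors.Join(err, fn(ctx))
		}
		shutdownFuncs = nil
		return err
	}

	tracerProvider, err := newTraceProvider()
	if err != nil {
		return shutdown, err
	}
	shutdownFuncs = append(shutdownFuncs, tracerProvider.Shutdown)
	otel.SetTracerProvider(tracerProvider)

	return shutdown, nil
}

// newTraceProvider creates a provider that pretty-prints spans to standard
// output and batches them with a one-second export timeout.
func newTraceProvider() (*trace.TracerProvider, error) {
	traceExporter, err := stdouttrace.New(stdouttrace.WithPrettyPrint())
	if err != nil {
		return nil, err
	}

	return trace.NewTracerProvider(
		trace.WithBatcher(traceExporter, trace.WithBatchTimeout(time.Second)),
	), nil
}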
This code establishes an OpenTelemetry SDK for tracing in your Go application.
It configures a trace exporter that directs traces to standard output in a
human-readable format.
The setupOTelSDK() function initializes the global trace provider using otel.SetTracerProvider(). Additionally, it provides a mechanism for gracefully shutting down the initialized OpenTelemetry SDK components by iterating through the registered shutdownFuncs and executing each function while consolidating any errors that arise.
The newTraceProvider() function, on the other hand, creates a trace exporter that outputs traces to standard output with pretty-printing enabled. It then constructs a trace provider that uses this exporter, configured with a batcher that has a one-second timeout.
The batcher serves to buffer traces before exporting them in batches for
enhanced efficiency. The default timeout is five seconds, but it's adjusted to
one second here for faster feedback when testing.
In the next section, you'll proceed to set up automatic instrumentation for the
HTTP server, allowing you to observe traces for each incoming request.
Skip manual OpenTelemetry instrumentation
While manual instrumentation gives you control over your traces, Better Stack Tracing uses eBPF to automatically instrument your Kubernetes or Docker workloads without code changes. Your traces start flowing immediately, and you can still export to Jaeger or any OpenTelemetry-compatible backend.
Predictable pricing and up to 30x cheaper than Datadog. Start free in minutes.
Step 3 — Instrumenting the HTTP server
Now that you have the OpenTelemetry SDK set up, let's instrument the HTTP server
to automatically generate trace spans for incoming requests.
Modify your main.go file to include code that sets up the OpenTelemetry SDK
and instruments the HTTP server through the otelhttp instrumentation library:
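Here's a sketch of the relevant part of main.go. The route registration and listen address are placeholders, and imports are omitted for brevity:

main.go

func main() {
	// Initialize the OpenTelemetry SDK and make sure it is flushed on exit.
	otelShutdown, err := setupOTelSDK(context.Background())
	if err != nil {
		log.Fatal(err)
	}
	defer func() {
		_ = otelShutdown(context.Background())
	}()

	mux := http.NewServeMux()
	// . . . register the application's routes on mux . . .

	// Wrap the mux so that every incoming request produces a server span,
	// named after the HTTP method and path (e.g. "HTTP GET /").
	handler := otelhttp.NewHandler(mux, "/",
		otelhttp.WithSpanNameFormatter(func(operation string, r *http.Request) string {
			return "HTTP " + r.Method + " " + r.URL.Path
		}),
	)

	log.Fatal(http.ListenAndServe(":8000", handler))
}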
In this code, the setupOTelSDK() function is called to initialize the OpenTelemetry SDK. Then, the otelhttp.NewHandler() function wraps the request multiplexer to add HTTP instrumentation across the entire server. The otelhttp.WithSpanNameFormatter() option customizes the generated span names, providing a clear description of the traced operation (e.g., HTTP GET /).
You can also exclude specific requests from being traced using
otelhttp.WithFilter():
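For example, a filter like the one below skips tracing for a hypothetical /healthz endpoint (the path is only an illustration):

handler := otelhttp.NewHandler(mux, "/",
	otelhttp.WithFilter(func(r *http.Request) bool {
		// Return false to skip tracing for this request.
		return r.URL.Path != "/healthz"
	}),
)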
With the server instrumented, every handled request now prints a span object to the console. The span for a successful HTTP GET request to the root path of the service contains several key components; let's explore them in more detail:
Name: This is the human-readable name for the span, often used to represent
the traced operation.
SpanContext: This holds the core identifiers for the span:
TraceID: A unique identifier for the entire trace to which this span
belongs.
SpanID: A unique identifier for this specific span within the trace.
TraceFlags: Used to encode information about the trace, like whether it
should be sampled.
Remote: Indicates whether the parent of this span is in a different
process.
Parent: This identifies the parent span in the trace hierarchy. In this
case, the parent has all zero values, indicating that this is the root span.
SpanKind: Specifies the role of the span in the trace. Here, the value 2
signifies a Server span, meaning this span represents the server-side
handling of a client request.
StartTime, EndTime: These timestamps record when the span started and
ended.
Attributes: A collection of key-value pairs providing additional context
about the span.
Events: Used to log specific occurrences within the span's lifetime.
Links: Used to associate this span with other spans in the same or different
traces.
Status: This conveys the outcome of the operation represented by the span. It is Unset in this example, indicating that no explicit status was set, but it could also be Ok or Error.
DroppedAttributes, DroppedEvents, DroppedLinks: These counters track how
many attributes, events, or links were dropped due to exceeding limits set by
the OpenTelemetry SDK or exporter.
ChildSpanCount: This indicates how many direct child spans this span has. A
value of 0 suggests that this is a leaf span (no further operations were
traced within this one).
Resource: Describes the entity that produced the span. Here, it includes the
service name (see OTEL_SERVICE_NAME in your .env) and information about
the OpenTelemetry SDK used.
InstrumentationLibrary: This identifies the OpenTelemetry instrumentation library responsible for creating this span.
In the next step, you'll configure the OpenTelemetry Collector to gather and
export these spans to a backend system for visualization and analysis.
Step 4 — Configuring the OpenTelemetry Collector
In the previous steps, you instrumented the Go application with OpenTelemetry
and configured it to send telemetry to the standard output. While this is useful
for testing, it's recommended that the data be sent to a suitable distributed
tracing backend for visualization and analysis.
OpenTelemetry offers two primary export approaches:
The OpenTelemetry Collector, which offers flexibility in data processing and routing to various backends (recommended).
A direct export from your application to one or more backends of your choice.
The Collector itself doesn't store observability data; it processes and routes
it. It receives different types of observability signals from applications, then
transforms and sends them to dedicated storage and analysis systems.
In this section, you'll configure the OpenTelemetry Collector to export traces
to Jaeger, a free and open-source distributed tracing tool that
facilitates the storage, retrieval, and visualization of trace data.
To get started, go ahead and create an otelcol.yaml file in the root of your
project as follows:
The configuration specifies an otlp receiver, designed to handle incoming
telemetry data in the OTLP format.
It's set up to accept this data over HTTP, meaning the Collector will start an
HTTP server on port 4318, ready to receive OTLP payloads from your application.
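Here's roughly what that receivers block looks like in otelcol.yaml. The explicit endpoint is an assumption; newer Collector releases bind to localhost only by default, which won't work from another container:

receivers:
  otlp:
    protocols:
      http:
        # Listen on all interfaces so other containers can reach port 4318.
        endpoint: 0.0.0.0:4318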
Processors
processors:
  batch:
Next, we have the batch processor. Processors are optional components that sit between receivers and exporters, allowing you to manipulate the incoming data. In this case, the batch processor groups data into batches to optimize network performance when sending it to the backend.
The otlp/jaeger exporter is responsible for sending the processed trace data to Jaeger. The endpoint points to the local Jaeger instance running in your Docker Compose setup (to be added shortly). The insecure: true setting under tls is necessary because the local Jaeger container accepts unencrypted (plaintext) connections on its OTLP gRPC endpoint.
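A sketch of the matching exporters block follows. The jaeger hostname assumes the Docker Compose service name added later, and 4317 is the standard OTLP gRPC port:

exporters:
  otlp/jaeger:
    endpoint: jaeger:4317
    tls:
      insecure: true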
Pipelines
service:
  pipelines:
    traces:
      receivers: [otlp]
      processors: [batch]
      exporters: [otlp/jaeger]
Finally, the traces pipeline ties everything together. It instructs the
Collector to take trace data received from the otlp receiver, process it with
the batch processor, and then export it to Jaeger using the otlp/jaeger
exporter.
This configuration demonstrates the flexibility of the OpenTelemetry Collector.
By defining different pipelines, you can easily customize how data is received,
processed, and exported.
Step 5 — Forwarding traces to the OpenTelemetry Collector
Now that the OpenTelemetry Collector configuration file is ready, let's update
your Go application to transmit trace spans in the OTLP format to the Collector
instead of outputting them to the console.
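Here's a sketch of how newTraceProvider() in otel.go might change, assuming you've installed the exporter with go get go.opentelemetry.io/otel/exporters/otlp/otlptrace/otlptracehttp and swapped the stdouttrace import for it:

otel.go

func newTraceProvider(ctx context.Context) (*trace.TracerProvider, error) {
	// otlptracehttp reads OTEL_EXPORTER_OTLP_ENDPOINT from the environment;
	// without it, spans are sent to https://localhost:4318/v1/traces.
	traceExporter, err := otlptracehttp.New(ctx)
	if err != nil {
		return nil, err
	}

	return trace.NewTracerProvider(
		trace.WithBatcher(traceExporter, trace.WithBatchTimeout(time.Second)),
	), nil
}

Because the exporter constructor now takes a context, setupOTelSDK() should pass its context along when calling newTraceProvider().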
Here, you're replacing the stdouttrace exporter with the otlptracehttp exporter. This exporter sends each generated span to https://localhost:4318/v1/traces by default.
Since the Collector will run in Docker, adjust the OTLP endpoint in your .env
file:
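.env

. . .
OTEL_EXPORTER_OTLP_ENDPOINT=http://go-image-upload-collector:4318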
The OTEL_EXPORTER_OTLP_ENDPOINT variable allows you to configure the target base URL for telemetry data. Its value reflects the Collector's hostname within Docker (to be set up shortly) and the port it listens on for OTLP data over HTTP.
This now means that the generated trace data will be sent to
http://go-image-upload-collector:4318/v1/traces.
In the next section, you'll set up the OpenTelemetry Collector and Jaeger
containers using Docker Compose.
Step 6 — Setting up OpenTelemetry Collector and Jaeger
Now that you've configured your application to export data to the OpenTelemetry
Collector, the next step is launching the Jaeger and OpenTelemetry Collector
containers so that you can visualize the traces more effectively.
Open up your docker-compose.yml file and add the following services below the
existing ones:
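Here's a rough sketch of what these service definitions could look like. The image tags, the Collector hostname, the OTLP environment variable, and the health check command are assumptions to adapt to your own setup:

  collector:
    image: otel/opentelemetry-collector:latest
    # Assumed hostname so the app can reach the Collector at
    # http://go-image-upload-collector:4318.
    hostname: go-image-upload-collector
    volumes:
      - ./otelcol.yaml:/etc/otelcol/config.yaml
    depends_on:
      jaeger:
        condition: service_healthy

  jaeger:
    image: jaegertracing/all-in-one:latest
    environment:
      # Older Jaeger releases need the OTLP receiver enabled explicitly.
      - COLLECTOR_OTLP_ENABLED=true
    ports:
      - 16686:16686
    healthcheck:
      # Illustrative check against Jaeger's admin port; adjust the command
      # to the tools available in the image you use.
      test: ["CMD", "wget", "--spider", "-q", "http://localhost:14269/"]
      interval: 5s
      timeout: 3s
      retries: 10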
The collector service uses the
otel/opentelemetry-collector
image to process and export telemetry data. It mounts the local configuration
file (otelcol.yaml) into the container and is set to start only after the
jaeger service is healthy. If you're using the
Contrib distribution
instead, ensure that your configuration file is mounted to the appropriate path
like this:
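    volumes:
      # The Contrib image reads its configuration from this path.
      - ./otelcol.yaml:/etc/otelcol-contrib/config.yaml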
The jaeger service runs the jaegertracing/all-in-one image, which includes all components of the Jaeger backend. It uses the
W3C trace context format for
propagation, exposes the Jaeger UI on port 16686, and includes a health check to
ensure the service is running correctly before allowing dependent services to
start.
Once you've saved the file, stop and remove the existing containers with:
docker compose down
Then execute the command below to launch them all at once:
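docker compose up -d --build

The --build flag rebuilds the app image so it picks up the newly added Go dependencies.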
With the services ready, head to your application at http://localhost:8000 and
generate some traces by refreshing the page a few times. Then, open the Jaeger
UI in your browser at http://localhost:16686:
Find the go-image-upload service and click Find Traces:
You should see a list of the traces you generated. Click on any one of them to
see the component spans:
Currently, each trace contains only a single span, so there's not much to see.
However, you can now easily explore the span attributes by expanding the
Tags section above.
In the next section, you'll add more instrumentation to the application to make
the traces more informative and interesting.
Step 7 — Instrumenting the HTTP client
The otelhttp package also offers a way to automatically instrument outbound
requests made through http.Client.
To enable this, override the default transport in your github.go file:
github.go
package main

import (
	"context"
	"net/http"
	"time"

	"github.com/go-resty/resty/v2"
	"go.opentelemetry.io/contrib/instrumentation/net/http/otelhttp"
)
By making this change, a span will be created for all subsequent requests made
to GitHub APIs.
You can test this by authenticating with GitHub once again. Once logged in,
return to Jaeger and click the Find Traces button.
You'll notice that the request to the /auth/github/callback route now has
three spans instead of one:
Clicking on the span reveals the flow of the requests:
You'll observe that the request to https://github.com/login/oauth/access_token
took 711ms, while the one to https://api.github.com/user took 674ms (at least
on my end).
Important: The client_id and client_secret tokens are visible in the API
calls. The
recommended practice
is to remove such sensitive data from telemetry before forwarding it to a
storage backend. This is possible with the OpenTelemetry Collector's processors, but setting that up is beyond the scope of this tutorial.
In the upcoming sections, you'll instrument the Redis and PostgreSQL libraries.
Step 8 — Instrumenting the Redis Go client
The demo application makes several calls to Redis to store and retrieve session
tokens. Let's instrument the Redis client to generate spans that help you
monitor the performance and errors associated with each Redis query.
Begin by installing the OpenTelemetry instrumentation for go-redis:
go get github.com/redis/go-redis/extra/redisotel/v9
Next, open your redisconn/redis.go file and modify it as follows:
. . .

	if err := redisotel.InstrumentTracing(r); err != nil {
		return nil, err
	}

	return &RedisConn{
		client: r,
	}, nil
}
Instrumenting the Redis client for traces is done using the InstrumentTracing() hook provided by the redisotel package. You can also report OpenTelemetry metrics with InstrumentMetrics().
After saving your changes, navigate to your application, log out, and then log
in again.
In Jaeger, you'll start seeing spans for the Redis set, get, and del
operations accordingly:
Step 9 — Instrumenting the Bun SQL client
Instrumenting the uptrace/bun library is quite
similar to the Redis client. Bun provides a dedicated OpenTelemetry
instrumentation module called bunotel, which needs to be installed first:
go get github.com/uptrace/bun/extra/bunotel
Once installed, add the bunotel hook to your db/db.go file:
)

type DBConn struct {
	db *bun.DB
}

func NewDBConn(ctx context.Context, name, url string) (*DBConn, error) {
	sqldb := sql.OpenDB(pgdriver.NewConnector(pgdriver.WithDSN(url)))
	db := bun.NewDB(sqldb, pgdialect.New())

	db.AddQueryHook(
		bunotel.NewQueryHook(bunotel.WithDBName(name)),
	)

	return &DBConn{db}, nil
}

. . .
After saving the changes, interact with the application in the same manner as before.
You will notice that new trace spans for each PostgreSQL query start to appear
in Jaeger:
Step 10 — Adding custom instrumentation
While instrumentation libraries capture telemetry at the system boundaries, such
as inbound/outbound HTTP requests or database calls, they don't capture what's
happening within your application itself. To achieve that, you'll need to write
custom manual instrumentation.
In this section, let's add custom instrumentation for the requireAuth
function.
To create spans, you first need a tracer. Create one by providing the name and
version of the library/application performing the instrumentation. Typically,
you only need one tracer per application:
main.go
package main

import (
	. . .

	"go.opentelemetry.io/otel"
	"go.opentelemetry.io/otel/trace"
)

var redisConn *redisconn.RedisConn
var dbConn *db.DBConn
var tracer trace.Tracer

. . .

func init() {
	. . .

	tracer = otel.Tracer(conf.ServiceName)
}

. . .
Once your tracer is initialized, you can use it to create spans with
tracer.Start(). Let's add a span for the requireAuth() middleware function:
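Here's a sketch of what the instrumented middleware could look like. The handler shape, the cookie name, and the redisConn.GetSession() helper are illustrative stand-ins for the demo app's actual code, and the snippet assumes the go.opentelemetry.io/otel/attribute and go.opentelemetry.io/otel/codes packages are imported:

func requireAuth(next http.HandlerFunc) http.HandlerFunc {
	return func(w http.ResponseWriter, r *http.Request) {
		// otelhttp already stored the server span in the request context,
		// so this span becomes its child.
		ctx, span := tracer.Start(r.Context(), "requireAuth")

		cookie, err := r.Cookie("session")
		if err != nil {
			span.AddEvent("authentication failed: missing session cookie")
			span.End()
			http.Redirect(w, r, "/auth", http.StatusTemporaryRedirect)
			return
		}

		// Record the session cookie as a span attribute for debugging.
		span.SetAttributes(attribute.String("session.cookie", cookie.Value))

		// Hypothetical session lookup; the demo app's Redis helper differs.
		if _, err := redisConn.GetSession(ctx, cookie.Value); err != nil {
			span.AddEvent("authentication failed: invalid session token")
			span.End()
			http.Redirect(w, r, "/auth", http.StatusTemporaryRedirect)
			return
		}

		span.SetStatus(codes.Ok, "authenticated successfully")
		span.End()
		next(w, r.WithContext(ctx))
	}
}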
The requireAuth middleware is designed to protect certain routes in the
application by ensuring that only authenticated users can access them. It checks
for a session cookie and validates it against a Redis store to determine if the
user is logged in. If not, it redirects them to the login page (/auth).
The tracer.Start() method initiates a new span named requireAuth with the context of the incoming HTTP request. The otelhttp.NewHandler() wrapper used to instrument the server earlier adds the active span for the incoming request to the request context. This means the requireAuth span will be nested within it, as you'll soon see.
The span.SetAttributes() method adds the value of the session cookie as an
attribute to the span. It is mainly used for recording contextual
information about the operation that may be
helpful for debugging purposes.
In cases where authentication fails (either due to a missing cookie or an
invalid session token), an event is added to the span. This event provides
additional context about why the authentication failed.
Finally, if authentication is successful, the span's status is explicitly set to
Ok with an "authenticated successfully" message. The span.End() method is
then called before the next handler is executed.
When you play around with the application once again and check the traces in
Jaeger, you'll notice that a new span is created for the protected routes like
/ and /upload:
If an event is recorded in the span, it appears in the Logs section:
You now have the knowledge to create spans for any operation in your
application. Consider creating a span that tracks the image conversion to AVIF
in the uploadImage() handler as an exercise.
Simplifying tracing with Better Stack
Throughout this tutorial, you've seen how to manually instrument a Go application with OpenTelemetry. While this approach gives you granular control, it requires adding SDK dependencies, wrapping handlers, creating spans, and maintaining instrumentation code as your application evolves.
Better Stack Tracing takes a different approach using eBPF technology. Point it at your Kubernetes or Docker cluster and it automatically instruments your workloads without modifying your code. Here's what you get:
Traces start flowing immediately without adding SDK dependencies or wrapping handlers
Databases like PostgreSQL, MySQL, Redis, and MongoDB get recognized and instrumented automatically
Context propagation works out of the box across your services
Visual "bubble up" investigation lets you select services and timeframes through drag and drop
AI analyzes your service map and logs during incidents, suggesting potential causes
OpenTelemetry-native architecture keeps your trace data portable
Works with Jaeger or any OpenTelemetry-compatible backend
Combines traces, logs, metrics, and incident management in one platform
If you'd like to try automatic instrumentation while keeping the flexibility of OpenTelemetry, check out Better Stack Tracing.
Final thoughts
You've covered a lot of ground with this tutorial, and you should now have a solid grasp of OpenTelemetry and its application for instrumenting Go applications with tracing capabilities.
To delve deeper into the OpenTelemetry project, consider exploring its official documentation. The OpenTelemetry Registry is also an excellent resource to discover numerous auto-instrumentation libraries covering popular Go frameworks and libraries.
If manual instrumentation feels like too much overhead for your setup, Better Stack Tracing handles OpenTelemetry automatically with eBPF, so you can skip the SDK integration steps while keeping the same observability benefits.
Remember to thoroughly test your OpenTelemetry instrumentation before deploying your applications to production. This ensures that the captured data is accurate, meaningful, and useful for detecting and solving problems.
Feel free to also check out the complete code on GitHub.