
Practical Tracing for Go Apps with OpenTelemetry (Beginner's Guide)

Ayooluwa Isaiah
Updated on September 16, 2024

OpenTelemetry provides a unified standard for observability instrumentation, making it easier to gather telemetry data like logs, traces, and metrics, regardless of your specific Go framework or observability backend.

In this tutorial, we'll focus on using OpenTelemetry to instrument your Go applications for tracing. You'll learn how to seamlessly integrate the OpenTelemetry SDK to gain a comprehensive view of your application's behavior, enabling effective troubleshooting and optimization.

Let's dive in!

Prerequisites

Step 1 — Setting up the demo project

In this tutorial, your focus will be on instrumenting a Go application to generate traces with OpenTelemetry. The application is designed for converting images (such as JPEGs) to the AVIF format. It also incorporates a GitHub social login to secure the /upload route, preventing unauthorized access.

To begin, clone the application to your local machine:

 
git clone https://github.com/betterstack-community/go-image-upload

Navigate into the project directory and install the necessary dependencies:

 
cd go-image-upload
 
go mod tidy

Rename the .env.sample file to .env:

 
mv .env.sample .env

Before running the application, you'll need to create a GitHub application to enable GitHub OAuth for user authentication.

Open the GitHub Developer Settings page at https://github.com/settings/apps in your browser:

GitHub New App page

Click the New GitHub App button and provide a suitable name. Set the Homepage URL to http://localhost:8000 and the Callback URL to http://localhost:8000/auth/github/callback.

GitHub Register New App page

Also, make sure to uncheck the Webhook option as it won't be necessary for this tutorial:

Deactivate WebHook

Once you're done, click Create GitHub App at the bottom of the page:

GitHub Create App Button

Click the Generate a new client secret button on the resulting page. Copy both the generated token and the Client ID:

GitHub App Copy Client Secret and Client ID

Now, return to your terminal, open the .env file in your text editor, and update the highlighted lines with the copied values:

 
code .env
.env
GO_ENV=development
PORT=8000
LOG_LEVEL=info
POSTGRES_DB=go-image-upload
POSTGRES_USER=postgres
POSTGRES_PASSWORD=admin
POSTGRES_HOST=go-image-upload-db
GITHUB_CLIENT_ID=<your_github_client_id>
GITHUB_CLIENT_SECRET=<your_github_client_secret>
GITHUB_REDIRECT_URI=http://localhost:8000/auth/github/callback
REDIS_ADDR=go-image-upload-redis:6379
OTEL_SERVICE_NAME=go-image-upload

Finally, launch the application and its associated services. You can start the entire setup locally using Docker Compose:

 
docker compose up -d --build

This will initiate the following containers:

Output
 ✔ Network go-image-upload_go-image-upload-network  Created                0.2s
 ✔ Container go-image-upload-redis                  Healthy               12.2s
 ✔ Container go-image-upload-db                     Healthy               12.2s
 ✔ Container go-image-upload-migrate                Exited                12.0s
 ✔ Container go-image-upload-app                    Started               12.2s
  • The app service runs the application in development mode, utilizing air for live reloading on file changes.
  • The db service runs PostgreSQL.
  • The migrate service runs database migrations and exits.
  • The redis service runs Redis.

With everything up and running, navigate to http://localhost:8000 in your browser to access the application user interface:

Image Upload Service

After authenticating with your GitHub account, you'll see the following page:

Image Upload Service Authenticated

Uploading an image will display its AVIF version in the browser, confirming the application's functionality.

Converted AVIF image

You've successfully set up and explored the demo application in this initial step. The upcoming sections will guide you through instrumenting this program with the OpenTelemetry API.

Step 2 — Initializing the OpenTelemetry SDK

Now that you're acquainted with the sample application, let's explore how to add basic instrumentation using OpenTelemetry to create a trace for every HTTP request the application handles.

The initial step involves setting up the OpenTelemetry SDK in the application. Install the necessary dependencies with the following command:

 
go get go.opentelemetry.io/otel \
  go.opentelemetry.io/otel/exporters/stdout/stdouttrace \
  go.opentelemetry.io/otel/sdk/trace \
  go.opentelemetry.io/contrib/instrumentation/net/http/otelhttp

This command installs the following OpenTelemetry SDK components:

  • go.opentelemetry.io/otel: the core OpenTelemetry API used throughout your code.
  • go.opentelemetry.io/otel/exporters/stdout/stdouttrace: an exporter that writes traces to standard output.
  • go.opentelemetry.io/otel/sdk/trace: the tracing portion of the OpenTelemetry SDK.
  • go.opentelemetry.io/contrib/instrumentation/net/http/otelhttp: instrumentation for net/http servers and clients.

Note: If you're using a different framework for HTTP requests (such as Gin), you'll need to install the appropriate instrumentation library instead of the otelhttp instrumentation. Be sure to search the OpenTelemetry Registry to find the relevant instrumentation library and go get it.
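
For example, Gin applications are typically instrumented with the otelgin package (go.opentelemetry.io/contrib/instrumentation/github.com/gin-gonic/gin/otelgin), which provides middleware for the Gin router in much the same way that otelhttp wraps net/http handlers.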

Screenshot of OpenTelemetry Instrumentation Search Page

Once the packages are installed, you need to bootstrap the OpenTelemetry SDK in your code for distributed tracing. Place the following code within an otel.go file in your project's root directory:

otel.go
package main

import (
    "context"
    "errors"
    "time"

    "go.opentelemetry.io/otel"
    "go.opentelemetry.io/otel/exporters/stdout/stdouttrace"
    "go.opentelemetry.io/otel/sdk/trace"
)

func setupOTelSDK(
    ctx context.Context,
) (shutdown func(context.Context) error, err error) {
    var shutdownFuncs []func(context.Context) error

    shutdown = func(ctx context.Context) error {
        var err error

        for _, fn := range shutdownFuncs {
            err = errors.Join(err, fn(ctx))
        }

        shutdownFuncs = nil
        return err
    }

    handleErr := func(inErr error) {
        err = errors.Join(inErr, shutdown(ctx))
    }

    tracerProvider, err := newTraceProvider(ctx)
    if err != nil {
        handleErr(err)
        return
    }

    shutdownFuncs = append(shutdownFuncs, tracerProvider.Shutdown)
    otel.SetTracerProvider(tracerProvider)

    return
}

func newTraceProvider(ctx context.Context) (*trace.TracerProvider, error) {
    traceExporter, err := stdouttrace.New(
        stdouttrace.WithPrettyPrint())
    if err != nil {
        return nil, err
    }

    traceProvider := trace.NewTracerProvider(
        trace.WithBatcher(traceExporter,
            trace.WithBatchTimeout(time.Second)),
    )
    return traceProvider, nil
}

This code establishes an OpenTelemetry SDK for tracing in your Go application. It configures a trace exporter that directs traces to standard output in a human-readable format.

The setupOTelSDK() function initializes the global trace provider using otel.SetTracerProvider(). Additionally, it provides a mechanism for gracefully shutting down the initialized OpenTelemetry SDK components by iterating through the registered shutdownFuncs and executing each function while consolidating any errors that arise.

On the other hand, the newTraceProvider() function creates a trace exporter that outputs traces to standard output with pretty-printing enabled. It then constructs a trace provider utilizing this exporter and configures it with a batcher featuring a one-second timeout.

The batcher serves to buffer traces before exporting them in batches for enhanced efficiency. The default timeout is five seconds, but it's adjusted to one second here for faster feedback when testing.
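
If you need to tune the batching behavior further, trace.WithBatcher() accepts additional options beyond the timeout. Here's a minimal sketch (the values shown are the SDK defaults and are purely illustrative, not part of the demo configuration):

traceProvider := trace.NewTracerProvider(
    trace.WithBatcher(traceExporter,
        trace.WithBatchTimeout(time.Second),
        // Cap how many spans are buffered and how many are sent per export.
        trace.WithMaxQueueSize(2048),
        trace.WithMaxExportBatchSize(512),
    ),
)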

In the next section, you'll proceed to set up automatic instrumentation for the HTTP server, allowing you to observe traces for each incoming request.

Step 3 — Instrumenting the HTTP server

Now that you have the OpenTelemetry SDK set up, let's instrument the HTTP server to automatically generate trace spans for incoming requests.

Modify your main.go file to include code that sets up the OpenTelemetry SDK and instruments the HTTP server through the otelhttp instrumentation library:

main.go
package main

import (
    "context"
    "embed"
    "errors"
    "fmt"
    "log"
    "net/http"
    "os"

    "github.com/betterstack-community/go-image-upload/db"
    "github.com/betterstack-community/go-image-upload/redisconn"
    "github.com/joho/godotenv"
    "go.opentelemetry.io/contrib/instrumentation/net/http/otelhttp"
)

. . .

func main() {
    ctx := context.Background()

    otelShutdown, err := setupOTelSDK(ctx)
    if err != nil {
        log.Fatal(err)
    }

    defer func() {
        err = errors.Join(err, otelShutdown(ctx))
        log.Println(err)
    }()

    mux := http.NewServeMux()

    mux.HandleFunc("GET /auth/github/callback", completeGitHubAuth)
    mux.HandleFunc("GET /auth/github", redirectToGitHubLogin)
    mux.HandleFunc("GET /auth/logout", logout)
    mux.HandleFunc("GET /auth", renderAuth)
    mux.HandleFunc("GET /", getUser)

    httpSpanName := func(operation string, r *http.Request) string {
        return fmt.Sprintf("HTTP %s %s", r.Method, r.URL.Path)
    }

    handler := otelhttp.NewHandler(
        mux,
        "/",
        otelhttp.WithSpanNameFormatter(httpSpanName),
    )

    log.Println("Server started on port 8000")

    log.Fatal(http.ListenAndServe(":8000", handler))
}

In this code, the setupOTelSDK() function is called to initialize the OpenTelemetry SDK. Then, the otelhttp.NewHandler() method wraps the request multiplexer to add HTTP instrumentation across the entire server. The otelhttp.WithSpanNameFormatter() method is used to customize the generated span names, providing a clear description of the traced operation (e.g., HTTP GET /).

You can also exclude specific requests from being traced using otelhttp.WithFilter():

 
otelhttp.NewHandler(mux, "/", otelhttp.WithFilter(otelReqFilter))

func otelReqFilter(req *http.Request) bool {
    return req.URL.Path != "/auth"
}

Refer to the documentation for additional customization options.
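
Another option worth knowing about is otelhttp.WithRouteTag(), which records the matched route as an http.route attribute on the server span so that similar requests can be grouped together. A brief sketch (the demo application registers its routes slightly differently, so treat this as illustrative):

mux.Handle("GET /upload", otelhttp.WithRouteTag("/upload", http.HandlerFunc(uploadImage)))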

Once your server restarts, revisit the application's home page at http://localhost:8000. If you're already authenticated, you'll see the upload page:

Image Upload page

Now, check your application logs to view the trace spans:

 
docker compose logs -f app

You should observe a JSON object similar to this (note that the Attributes and Resource arrays are truncated for brevity):

Output
. . .
{
        "Name": "HTTP GET /",
        "SpanContext": {
                "TraceID": "e3c306d18bac2742de07756bdb9e607b",
                "SpanID": "3ee91f86b5468681",
                "TraceFlags": "01",
                "TraceState": "",
                "Remote": false
        },
        "Parent": {
                "TraceID": "00000000000000000000000000000000",
                "SpanID": "0000000000000000",
                "TraceFlags": "00",
                "TraceState": "",
                "Remote": false
        },
        "SpanKind": 2,
        "StartTime": "2024-08-26T14:19:47.205308249+01:00",
        "EndTime": "2024-08-26T14:19:47.206802188+01:00",
        "Attributes": [. . .],
        "Events": null,
        "Links": null,
        "Status": {
                "Code": "Unset",
                "Description": ""
        },
        "DroppedAttributes": 0,
        "DroppedEvents": 0,
        "DroppedLinks": 0,
        "ChildSpanCount": 0,
        "Resource": [. . .],
        "InstrumentationLibrary": {
                "Name": "go.opentelemetry.io/contrib/instrumentation/net/http/otelhttp",
                "Version": "0.53.0",
                "SchemaURL": ""
        }
}

This object is a span representing a successful HTTP GET request to the root path of the service. Let's explore the key components of the span in more detail:

  • Name: This is the human-readable name for the span, often used to represent the traced operation.

  • SpanContext: This holds the core identifiers for the span:

    • TraceID: A unique identifier for the entire trace to which this span belongs.
    • SpanID: A unique identifier for this specific span within the trace.
    • TraceFlags: Used to encode information about the trace, like whether it should be sampled.
    • TraceState: Carries vendor-specific trace context information.
    • Remote: Indicates whether the parent of this span is in a different process.
  • Parent: This identifies the parent span in the trace hierarchy. In this case, the parent has all zero values, indicating that this is the root span.

  • SpanKind: Specifies the role of the span in the trace. Here, the value 2 signifies a Server span, meaning this span represents the server-side handling of a client request.

  • StartTime, EndTime: These timestamps record when the span started and ended.

  • Attributes: A collection of key-value pairs providing additional context about the span.

  • Events: Used to log specific occurrences within the span's lifetime.

  • Links: Used to associate this span with other spans in the same or different traces.

  • Status: This conveys the outcome of the operation represented by the span. It is Unset in this example, indicating that no explicit status was set, but it could also be OK or Error.

  • DroppedAttributes, DroppedEvents, DroppedLinks: These counters track how many attributes, events, or links were dropped due to exceeding limits set by the OpenTelemetry SDK or exporter.

  • ChildSpanCount: This indicates how many direct child spans this span has. A value of 0 suggests that this is a leaf span (no further operations were traced within this one).

  • Resource: Describes the entity that produced the span. Here, it includes the service name (see OTEL_SERVICE_NAME in your .env) and information about the OpenTelemetry SDK used.

  • InstrumentationLibrary: This identifies the OpenTelemetry instrumentation library responsible for creating this span.

In the next step, you'll configure the OpenTelemetry Collector to gather and export these spans to a backend system for visualization and analysis.

Step 4 — Configuring the OpenTelemetry Collector

In the previous steps, you instrumented the Go application with OpenTelemetry and configured it to send telemetry to the standard output. While this is useful for testing, it's recommended that the data be sent to a suitable distributed tracing backend for visualization and analysis.

OpenTelemetry offers two primary export approaches:

  1. The OpenTelemetry Collector, which offers flexibility in data processing and routing to various backends (recommended).

  2. A direct export from your application to one or more backends of your choice.

The Collector itself doesn't store observability data; it processes and routes it. It receives different types of observability signals from applications, then transforms and sends them to dedicated storage and analysis systems.

In this section, you'll configure the OpenTelemetry Collector to export traces to Jaeger, a free and open-source distributed tracing tool that facilitates the storage, retrieval, and visualization of trace data.

To get started, go ahead and create an otelcol.yaml file in the root of your project as follows:

otelcol.yaml
receivers:
  otlp:
    protocols:
      http:
        endpoint: go-image-upload-collector:4318

processors:
  batch:

exporters:
  otlp/jaeger:
    endpoint: go-image-upload-jaeger:4317
    tls:
      insecure: true

service:
  pipelines:
    traces:
      receivers: [otlp]
      processors: [batch]
      exporters: [otlp/jaeger]

This file configures the local OpenTelemetry Collector and comprises the following sections:

Receivers

 
receivers:
  otlp:
    protocols:
      http:
        endpoint: go-image-upload-collector:4318

The configuration specifies an otlp receiver, designed to handle incoming telemetry data in the OTLP format. It's set up to accept this data over HTTP, meaning the Collector will start an HTTP server on port 4318, ready to receive OTLP payloads from your application.

Processors

 
processors:
  batch:

Next, we have an optional batch processor. While not mandatory, processors sit between receivers and exporters, allowing you to manipulate the incoming data. In this case, the batch processor groups data into batches to optimize network performance when sending it to the backend.

Exporters

 
exporters:
  otlp/jaeger:
    endpoint: go-image-upload-jaeger:4317
    tls:
      insecure: true

The otlp/jaeger exporter is responsible for sending the processed trace data to Jaeger. The endpoint points to the local Jaeger instance running in your Docker Compose setup (to be added shortly). The insecure: true setting under tls is necessary because the local Jaeger container exposes its OTLP gRPC endpoint over an unencrypted connection.

Pipelines

 
service:
  pipelines:
    traces:
      receivers: [otlp]
      processors: [batch]
      exporters: [otlp/jaeger]

Finally, the traces pipeline ties everything together. It instructs the Collector to take trace data received from the otlp receiver, process it with the batch processor, and then export it to Jaeger using the otlp/jaeger exporter.

This configuration demonstrates the flexibility of the OpenTelemetry Collector. By defining different pipelines, you can easily customize how data is received, processed, and exported.
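
For instance, while debugging the pipeline itself, you could temporarily add the Collector's debug exporter alongside Jaeger so that received spans are also printed in the Collector's own logs. This is an illustrative sketch rather than part of the demo setup:

exporters:
  otlp/jaeger:
    endpoint: go-image-upload-jaeger:4317
    tls:
      insecure: true
  debug:
    verbosity: detailed

service:
  pipelines:
    traces:
      receivers: [otlp]
      processors: [batch]
      exporters: [otlp/jaeger, debug]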

Step 5 — Forwarding traces to the OpenTelemetry Collector

Now that the OpenTelemetry Collector configuration file is ready, let's update your Go application to transmit trace spans in the OTLP format to the Collector instead of outputting them to the console.

Install the OTLP Trace Exporter package with:

 
go get go.opentelemetry.io/otel/exporters/otlp/otlptrace/otlptracehttp

Once installed, modify your otel.go file as follows:

otel.go
package main

import (
    "context"
    "errors"
    "time"

    "go.opentelemetry.io/otel"
    "go.opentelemetry.io/otel/exporters/otlp/otlptrace/otlptracehttp"
    "go.opentelemetry.io/otel/sdk/trace"
)

. . .

func newTraceProvider(ctx context.Context) (*trace.TracerProvider, error) {
    traceExporter, err := otlptracehttp.New(ctx)
    if err != nil {
        return nil, err
    }

    traceProvider := trace.NewTracerProvider(
        trace.WithBatcher(traceExporter,
            trace.WithBatchTimeout(time.Second)),
    )
    return traceProvider, nil
}

Here, you're replacing the stdouttrace exporter with the otlptracehttp exporter. This exporter sends each generated span to https://localhost:4318/v1/traces by default.

Since the Collector will run in Docker, adjust the OTLP endpoint in your .env file:

.env
. . .
OTEL_EXPORTER_OTLP_ENDPOINT=http://go-image-upload-collector:4318

The OTEL_EXPORTER_OTLP_ENDPOINT variable allows you to configure the target base URL for telemetry data. Its value reflects the Collector's hostname within Docker (to be set up shortly) and the port it listens on for OTLP data over HTTP.

This now means that the generated trace data will be sent to http://go-image-upload-collector:4318/v1/traces.
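
If you'd rather configure the exporter in code than through environment variables, otlptracehttp accepts explicit options. A minimal sketch of the equivalent setup, assuming the same Collector hostname and plain HTTP endpoint:

traceExporter, err := otlptracehttp.New(ctx,
    otlptracehttp.WithEndpoint("go-image-upload-collector:4318"),
    // The Collector receiver in this setup speaks plain HTTP, so disable TLS.
    otlptracehttp.WithInsecure(),
)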

In the next section, you'll set up the OpenTelemetry Collector and Jaeger containers using Docker Compose.

Step 6 — Setting up OpenTelemetry Collector and Jaeger

Now that you've configured your application to export data to the OpenTelemetry Collector, the next step is launching the Jaeger and OpenTelemetry Collector containers so that you can visualize the traces more effectively.

Open up your docker-compose.yml file and add the following services below the existing ones:

docker-compose.yml
  collector:
    container_name: go-image-upload-collector
    image: otel/opentelemetry-collector:0.107.0
    volumes:
      - ./otelcol.yaml:/etc/otelcol/config.yaml
    depends_on:
      jaeger:
        condition: service_healthy
    networks:
      - go-image-upload-network

  jaeger:
    container_name: go-image-upload-jaeger
    image: jaegertracing/all-in-one:latest
    environment:
      JAEGER_PROPAGATION: w3c
    ports:
      - 16686:16686
    healthcheck:
      test: ["CMD", "wget", "-q", "-S", "-O", "-", "localhost:14269"]
    networks:
      - go-image-upload-network

The collector service uses the otel/opentelemetry-collector image to process and export telemetry data. It mounts the local configuration file (otelcol.yaml) into the container and is set to start only after the jaeger service is healthy. If you're using the Contrib distribution instead, ensure that your configuration file is mounted to the appropriate path like this:

 
collector:
  container_name: go-image-upload-collector
  image: otel/opentelemetry-collector-contrib:0.107.0
  volumes:
    - ./otelcol.yaml:/etc/otelcol-contrib/config.yaml

The jaeger service runs the jaegertracing/all-in-one image, which includes all components of the Jaeger backend. It uses the W3C trace context format for propagation, exposes the Jaeger UI on port 16686, and includes a health check to ensure the service is running correctly before allowing dependent services to start.

Once you've saved the file, stop and remove the existing containers with:

 
docker compose down

Then execute the command below to launch them all at once:

 
docker compose up -d --build
Output
. . .
 ✔ Network go-image-upload_go-image-upload-network      Created            0.2s
 ✔ Container go-image-upload-jaeger                     Healthy           31.5s
 ✔ Container go-image-upload-db                         Healthy           11.4s
 ✔ Container go-image-upload-redis                      Healthy           12.2s
 ✔ Container go-image-upload-migrate                    Exited            12.0s
 ✔ Container go-image-upload-collector                  Started           31.6s
 ✔ Container go-image-upload-app                        Started           12.1s

With the services ready, head to your application at http://localhost:8000 and generate some traces by refreshing the page a few times. Then, open the Jaeger UI in your browser at http://localhost:16686:

Jaeger UI showing Go Image Upload Service

Find the go-image-upload service and click Find Traces:

Jaeger UI showing traces

You should see a list of the traces you generated. Click on any one of them to see the component spans:

Jaeger UI showing component spans in a trace

Currently, each trace contains only a single span, so there's not much to see. However, you can now easily explore the span attributes by expanding the Tags section above.

In the next section, you'll add more instrumentation to the application to make the traces more informative and interesting.

Step 7 — Instrumenting the HTTP client

The otelhttp package also offers a way to automatically instrument outbound requests made through http.Client.

To enable this, override the default transport in your github.go file:

github.go
package main

import (
    "context"
    "net/http"
    "time"

    "github.com/go-resty/resty/v2"
    "go.opentelemetry.io/contrib/instrumentation/net/http/otelhttp"
)

var httpClient = &http.Client{
    Timeout:   2 * time.Minute,
    Transport: otelhttp.NewTransport(http.DefaultTransport),
}

. . .

By making this change, a span will be created for all subsequent requests made to GitHub APIs.
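
Keep in mind that the instrumented transport can only attach these client spans to the trace of the incoming request if the outgoing request carries the request context. In your own code, that means building requests with http.NewRequestWithContext(); a minimal sketch:

// ctx should be the context of the incoming request (r.Context()).
req, err := http.NewRequestWithContext(ctx, http.MethodGet, "https://api.github.com/user", nil)
if err != nil {
    return err
}

resp, err := httpClient.Do(req) // traced as a child span of the server span
if err != nil {
    return err
}
defer resp.Body.Close()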

You can test this by authenticating with GitHub once again. Once logged in, return to Jaeger and click the Find Traces button.

You'll notice that the request to the /auth/github/callback route now has three spans instead of one:

Jaeger UI showing three spans for the callback route

Clicking on the span reveals the flow of the requests:

Jaeger UI showing Gantt chart with trace spans

You'll observe that the request to https://github.com/login/oauth/access_token took 711ms, while the one to https://api.github.com/user took 674ms (at least on my end).

Important: The client_id and client_secret tokens are visible in the API calls. The recommended practice is to remove such sensitive data from telemetry before forwarding it to a storage backend. This is possible with the OpenTelemetry Collector's processors, but setting that up is beyond the scope of this tutorial.

In the upcoming sections, you'll instrument the Redis and PostgreSQL libraries.

Step 8 — Instrumenting the Redis Go client

The demo application makes several calls to Redis to store and retrieve session tokens. Let's instrument the Redis client to generate spans that help you monitor the performance and errors associated with each Redis query.

Begin by installing the OpenTelemetry instrumentation for go-redis:

 
go get github.com/redis/go-redis/extra/redisotel/v9

Next, open your redisconn/redis.go file and modify it as follows:

redisconn/redis.go
package redisconn

import (
    "context"
    "log/slog"
    "time"

    "github.com/redis/go-redis/extra/redisotel/v9"
    redis "github.com/redis/go-redis/v9"
)

. . .

func NewRedisConn(ctx context.Context, addr string) (*RedisConn, error) {
    r := redis.NewClient(&redis.Options{
        Addr: addr,
        DB:   0,
    })

    err := r.Ping(ctx).Err()
    if err != nil {
        return nil, err
    }

    slog.DebugContext(ctx, "redis connection is successful")

    if err := redisotel.InstrumentTracing(r); err != nil {
        return nil, err
    }

    return &RedisConn{
        client: r,
    }, nil
}

Instrumenting the Redis client for traces is done through the InstrumentTracing() hook provided by the redisotel package. You can also report OpenTelemetry metrics with InstrumentMetrics().
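
If you'd also like Redis client metrics, the extra hook looks like this (a sketch, placed alongside the tracing hook above):

if err := redisotel.InstrumentMetrics(r); err != nil {
    return nil, err
}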

After saving your changes, navigate to your application, log out, and then log in again.

In Jaeger, you'll start seeing spans for the Redis set, get, and del operations accordingly:

Jaeger UI showing Redis Go Trace Spans with OpenTelemetry

Step 9 — Instrumenting the Bun SQL client

Instrumenting the uptrace/bun library is quite similar to the Redis client. Bun provides a dedicated OpenTelemetry instrumentation module called bunotel, which needs to be installed first:

 
go get github.com/uptrace/bun/extra/bunotel

Once installed, add the bunotel hook to your db/db.go file:

db/db.go
package db

import (
    "context"
    "database/sql"
    "errors"

    "github.com/betterstack-community/go-image-upload/models"
    "github.com/uptrace/bun"
    "github.com/uptrace/bun/dialect/pgdialect"
    "github.com/uptrace/bun/driver/pgdriver"
    "github.com/uptrace/bun/extra/bunotel"
)

type DBConn struct {
    db *bun.DB
}

func NewDBConn(ctx context.Context, name, url string) (*DBConn, error) {
    sqldb := sql.OpenDB(pgdriver.NewConnector(pgdriver.WithDSN(url)))

    db := bun.NewDB(sqldb, pgdialect.New())

    db.AddQueryHook(
        bunotel.NewQueryHook(bunotel.WithDBName(name)),
    )

    return &DBConn{db}, nil
}

. . .
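
The bunotel hook accepts further options as well. For example, WithFormattedQueries() records the fully formatted SQL statement on each span, which can make traces easier to read. A sketch (note that this may expose query parameter values in your telemetry):

db.AddQueryHook(
    bunotel.NewQueryHook(
        bunotel.WithDBName(name),
        bunotel.WithFormattedQueries(true),
    ),
)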

After saving the changes, interact with the application in the same manner as before.

You will notice that new trace spans for each PostgreSQL query start to appear in Jaeger:

Jaeger UI showing PostgreSQL spans

Step 10 — Adding custom instrumentation

While instrumentation libraries capture telemetry at the system boundaries, such as inbound/outbound HTTP requests or database calls, they don't capture what's happening within your application itself. To achieve that, you'll need to write custom manual instrumentation.

In this section, let's add custom instrumentation for the requireAuth function.

To create spans, you first need a tracer. Create one by providing the name and version of the library/application performing the instrumentation. Typically, you only need one tracer per application:

main.go
package main

import (
    . . .

    "go.opentelemetry.io/otel"
    "go.opentelemetry.io/otel/trace"
)

var redisConn *redisconn.RedisConn
var dbConn *db.DBConn
var tracer trace.Tracer

. . .

func init() {
    . . .

    tracer = otel.Tracer(conf.ServiceName)
}

. . .

Once your tracer is initialized, you can use it to create spans with tracer.Start(). Let's add a span for the requireAuth() middleware function:

handler.go
package main

import (
    . . .

    "go.opentelemetry.io/otel/attribute"
    "go.opentelemetry.io/otel/codes"
    "go.opentelemetry.io/otel/trace"

    "github.com/betterstack-community/go-image-upload/models"
)

. . .

func requireAuth(next http.Handler) http.Handler {
    return http.HandlerFunc(func(w http.ResponseWriter, r *http.Request) {
        ctx, span := tracer.Start(
            r.Context(),
            "requireAuth",
            trace.WithSpanKind(trace.SpanKindServer),
        )

        cookie, err := r.Cookie(sessionCookieKey)
        if err != nil {
            http.Redirect(w, r, "/auth", http.StatusSeeOther)

            span.AddEvent(
                "redirecting to /auth",
                trace.WithAttributes(
                    attribute.String("reason", "missing session cookie"),
                ),
            )

            span.End()

            return
        }

        span.SetAttributes(
            attribute.String("app.cookie.value", cookie.Value),
        )

        email, err := redisConn.GetSessionToken(ctx, cookie.Value)
        if err != nil {
            http.Redirect(w, r, "/auth", http.StatusSeeOther)

            span.AddEvent(
                "redirecting to /auth",
                trace.WithAttributes(
                    attribute.String("reason", err.Error()),
                ))

            span.End()

            return
        }

        ctx = context.WithValue(r.Context(), "email", email)

        req := r.WithContext(ctx)

        span.SetStatus(codes.Ok, "authenticated successfully")

        span.End()

        next.ServeHTTP(w, req)
    })
}

. . .

The requireAuth middleware is designed to protect certain routes in the application by ensuring that only authenticated users can access them. It checks for a session cookie and validates it against a Redis store to determine if the user is logged in. If not, it redirects them to the login page (/auth).

The tracer.Start() method initiates a new span named requireAuth with the context of the incoming HTTP request. The otelhttp.NewHandler() wrapper used to instrument the server earlier adds the active span for the incoming request to the request context. This means the requireAuth span will be nested within it, as you'll soon see.
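
If you only want to annotate the server span that otelhttp already created, rather than start a new one, you can retrieve it from the request context. A small sketch (the attribute name is illustrative):

span := trace.SpanFromContext(r.Context())
span.SetAttributes(attribute.Bool("app.auth.checked", true))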

The span.SetAttributes() method adds the value of the session cookie as an attribute to the span. It is mainly used for recording contextual information about the operation that may be helpful for debugging purposes.

In cases where authentication fails (either due to a missing cookie or an invalid session token), an event is added to the span. This event provides additional context about why the authentication failed.

Finally, if authentication is successful, the span's status is explicitly set to Ok with an "authenticated successfully" message. The span.End() method is then called before the next handler is executed.

When you play around with the application once again and check the traces in Jaeger, you'll notice that a new span is created for the protected routes like / and /upload:

Jaeger UI showing trace spans from custom instrumentation

If an event is recorded in the span, it appears in the Logs section:

Jaeger UI showing trace span event

You now have the knowledge to create spans for any operation in your application. Consider creating a span that tracks the image conversion to AVIF in the uploadImage() handler as an exercise.
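
Here is one hedged sketch of what that exercise could look like. The convertToAVIF() helper stands in for the demo application's actual conversion logic and is hypothetical, so adapt the names to the real code:

func convertWithSpan(ctx context.Context, src []byte) ([]byte, error) {
    ctx, span := tracer.Start(ctx, "convertToAVIF")
    defer span.End()

    out, err := convertToAVIF(ctx, src) // hypothetical conversion helper
    if err != nil {
        span.RecordError(err)
        span.SetStatus(codes.Error, "image conversion failed")
        return nil, err
    }

    span.SetAttributes(attribute.Int("app.image.avif_size_bytes", len(out)))
    span.SetStatus(codes.Ok, "image converted")
    return out, nil
}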

Final thoughts

You've covered a lot of ground with this tutorial, and you should now have a solid grasp of OpenTelemetry and its application for instrumenting Go applications with tracing capabilities.

To delve deeper into the OpenTelemetry project, consider exploring its official documentation. The OpenTelemetry Registry is also an excellent resource to discover numerous auto-instrumentation libraries covering popular Go frameworks and libraries.

Remember to thoroughly test your OpenTelemetry instrumentation before deploying your applications to production. This ensures that the captured data is accurate, meaningful, and useful for detecting and solving problems.

Feel free to also check out the complete code on GitHub.

Thanks for reading, and happy tracing!
