
Logging in Go with Slog: The Ultimate Guide

Ayooluwa Isaiah
Updated on October 4, 2024

In this article, we'll explore structured logging in Go with a specific focus on the recently introduced log/slog package which aims to bring high-performance, structured, and leveled logging to the Go standard library.

The package has its origins in this GitHub discussion opened by Jonathan Amsterdam, which later led to the proposal describing the exact design of the package. Once finalized, it was released in Go v1.21 and now resides at log/slog.

In the following sections, I will present a comprehensive examination of what Slog has to offer, with accompanying examples. For a performance comparison with other Go logging frameworks, refer to this GitHub repo.

Getting started with Slog

Let's begin our exploration of the log/slog package by walking through its design and architecture. It provides three main types that you should be familiar with:

  • Logger: the logging "frontend", which provides level methods (such as Info() and Error()) for recording events of interest.
  • Record: a representation of each self-contained log object created by a Logger.
  • Handler: an interface that, once implemented, determines the formatting and destination of each Record. Two built-in handlers are included in the log/slog package: TextHandler and JSONHandler for key=value and JSON output respectively.

Like most Go logging libraries, the slog package exposes a default Logger that is accessible through top-level functions. This logger produces output that is almost identical to the older log.Print() method, except that it also includes the log level:

 
package main

import (
    "log"
    "log/slog"
)

func main() {
    log.Print("Info message")
    slog.Info("Info message")
}
Output
2024/01/03 10:24:22 Info message
2024/01/03 10:24:22 INFO Info message

This is a somewhat bizarre default since Slog's main purpose is to bring structured logging to the standard library.

It's easy enough to correct this by creating a custom Logger instance through the slog.New() method. It accepts an implementation of the Handler interface, which determines how the logs are formatted and where they are written.

Here's an example that uses the built-in JSONHandler type to write JSON logs to stdout:

 
func main() {
    logger := slog.New(slog.NewJSONHandler(os.Stdout, nil))

    logger.Debug("Debug message")
    logger.Info("Info message")
    logger.Warn("Warning message")
    logger.Error("Error message")
}
Output
{"time":"2023-03-15T12:59:22.227408691+01:00","level":"INFO","msg":"Info message"}
{"time":"2023-03-15T12:59:22.227468972+01:00","level":"WARN","msg":"Warning message"}
{"time":"2023-03-15T12:59:22.227472149+01:00","level":"ERROR","msg":"Error message"}

When the TextHandler type is used instead, each log record will be formatted according to the Logfmt standard:

 
logger := slog.New(slog.NewTextHandler(os.Stdout, nil))
Output
time=2023-03-15T13:00:11.333+01:00 level=INFO msg="Info message"
time=2023-03-15T13:00:11.333+01:00 level=WARN msg="Warning message"
time=2023-03-15T13:00:11.333+01:00 level=ERROR msg="Error message"

All Logger instances default to logging at the INFO level, which leads to the suppression of the DEBUG entry, but you can easily update this as you see fit.
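For example, passing a HandlerOptions value with a lower minimum level (covered in more detail in the Customizing Slog levels section later in this article) makes the DEBUG entry appear:

logger := slog.New(slog.NewJSONHandler(os.Stdout, &slog.HandlerOptions{
    Level: slog.LevelDebug,
}))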

Customizing the default logger

The most straightforward way to customize the default Logger is to utilize the slog.SetDefault() method, allowing you to replace the default logger with a custom one.

 
func main() {
    logger := slog.New(slog.NewJSONHandler(os.Stdout, nil))

    slog.SetDefault(logger)

    slog.Info("Info message")
}

You'll now observe that the package's top-level logging methods produce JSON output as seen below:

Output
{"time":"2023-03-15T13:07:39.105777557+01:00","level":"INFO","msg":"Info message"}

Using the SetDefault() method also alters the default log.Logger employed by the log package. This behavior allows existing applications that utilize the older log package to transition to structured logging seamlessly:

 
func main() {
    logger := slog.New(slog.NewJSONHandler(os.Stdout, nil))

    slog.SetDefault(logger)

    // elsewhere in the application
    log.Println("Hello from old logger")
}
Output
{"time":"2023-03-16T15:20:33.783681176+01:00","level":"INFO","msg":"Hello from old logger"}

The slog.NewLogLogger() method is also available for constructing a log.Logger backed by an slog.Handler, which is useful when you need to utilize APIs that require a log.Logger (such as http.Server.ErrorLog):

 
func main() {
    handler := slog.NewJSONHandler(os.Stdout, nil)

    logger := slog.NewLogLogger(handler, slog.LevelError)

    _ = http.Server{
        // this API only accepts `log.Logger`
        ErrorLog: logger,
    }
}

Adding contextual attributes to log records

A significant advantage of structured logging over unstructured formats is the ability to add arbitrary attributes as key/value pairs in log records.

These attributes provide additional context about the logged event, which can be valuable for tasks such as troubleshooting, generating metrics, auditing, and various other purposes.

Here's an example illustrating how it works in Slog:

 
logger.Info(
  "incoming request",
  "method", "GET",
  "time_taken_ms", 158,
  "path", "/hello/world?q=search",
  "status", 200,
  "user_agent", "Googlebot/2.1 (+http://www.google.com/bot.html)",
)
Output
{
  "time":"2023-02-24T11:52:49.554074496+01:00",
  "level":"INFO",
  "msg":"incoming request",
  "method":"GET",
  "time_taken_ms":158,
  "path":"/hello/world?q=search",
  "status":200,
  "user_agent":"Googlebot/2.1 (+http://www.google.com/bot.html)"
}

All the level methods (Info(), Debug(), etc.) accept a log message as their first argument, and an unlimited number of loosely typed key/value pairs thereafter.

This API is similar to the SugaredLogger API in Zap (specifically its level methods ending in w) as it prioritizes brevity at the cost of additional memory allocations.

But be cautious, as this approach can cause unexpected issues. Specifically, unbalanced key/value pairs can result in problematic output:

 
logger.Info(
  "incoming request",
  "method", "GET",
  "time_taken_ms", // the value for this key is missing
)

Since the time_taken_ms key does not have a corresponding value, it will be treated as a value with the key !BADKEY. This isn't great because such misalignments silently produce bad entries, and you may not notice them until you need to use the logs.

Output
{
  "time": "2023-03-15T13:15:29.956566795+01:00",
  "level": "INFO",
  "msg": "incoming request",
  "method": "GET",
"!BADKEY": "time_taken_ms"
}

To prevent such problems, you can run the vet command or use a linter to automatically report such issues.
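Since Go 1.21, the go vet command includes a check that reports mismatched key/value pairs in log/slog calls, so running it over your packages (for example, as part of CI) will flag the call above:

go vet ./...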


Another way to prevent such mistakes is by using strongly-typed contextual attributes as shown below:

 
logger.Info(
  "incoming request",
  slog.String("method", "GET"),
  slog.Int("time_taken_ms", 158),
  slog.String("path", "/hello/world?q=search"),
  slog.Int("status", 200),
  slog.String(
    "user_agent",
    "Googlebot/2.1 (+http://www.google.com/bot.html)",
  ),
)

While this is a much better approach to contextual logging, it's not fool-proof as nothing is stopping you from mixing strongly-typed and loosely-typed key/value pairs like this:

 
logger.Info(
  "incoming request",
  "method", "GET",
  slog.Int("time_taken_ms", 158),
  slog.String("path", "/hello/world?q=search"),
  "status", 200,
  slog.String(
    "user_agent",
    "Googlebot/2.1 (+http://www.google.com/bot.html)",
  ),
)

To guarantee type safety when adding contextual attributes to your records, you must use the LogAttrs() method like this:

 
logger.LogAttrs(
  context.Background(),
  slog.LevelInfo,
  "incoming request",
  slog.String("method", "GET"),
  slog.Int("time_taken_ms", 158),
  slog.String("path", "/hello/world?q=search"),
  slog.Int("status", 200),
  slog.String(
    "user_agent",
    "Googlebot/2.1 (+http://www.google.com/bot.html)",
  ),
)

This method only accepts the slog.Attr type for custom attributes, so it's impossible to have an unbalanced key/value pair. However, its API is more convoluted as you always need to pass a context (or nil) and the log level to the method in addition to the log message and custom attributes.

Grouping contextual attributes

Slog also allows grouping multiple attributes under a single name, but the output depends on the Handler in use. For example, with JSONHandler, each group is nested within the JSON object:

 
logger.LogAttrs(
  context.Background(),
  slog.LevelInfo,
  "image uploaded",
  slog.Int("id", 23123),
  slog.Group("properties",
    slog.Int("width", 4000),
    slog.Int("height", 3000),
    slog.String("format", "jpeg"),
  ),
)
Output
{
  "time":"2023-02-24T12:03:12.175582603+01:00",
  "level":"INFO",
  "msg":"image uploaded",
  "id":23123,
"properties":{
"width":4000,
"height":3000,
"format":"jpeg"
}
}

When using the TextHandler, each key in the group will be prefixed by the group name like this:

Output
time=2023-02-24T12:06:20.249+01:00 level=INFO msg="image uploaded" id=23123
properties.width=4000 properties.height=3000 properties.format=jpeg

Creating and using child loggers

It's often useful to include the same attributes in every record within a specific scope so that they are always present without having to repeat them at each logging call site.

This is where child loggers can prove helpful, as they create a new logging context that inherits from their parent while allowing the inclusion of additional fields.

In Slog, creating child loggers is accomplished using the Logger.With() method. It accepts one or more key/value pairs and returns a new Logger that includes the specified attributes.

Consider the following code snippet that adds the program's process ID and the Go version used for compilation to each log record, storing them in a program_info property:

 
func main() {
    handler := slog.NewJSONHandler(os.Stdout, nil)
    buildInfo, _ := debug.ReadBuildInfo()

    logger := slog.New(handler)

    child := logger.With(
        slog.Group("program_info",
            slog.Int("pid", os.Getpid()),
            slog.String("go_version", buildInfo.GoVersion),
        ),
    )

    . . .
}

With this configuration in place, all records created by the child logger will contain the specified attributes under the program_info property as long as it is not overridden at log point:

 
func main() {
    . . .

    child.Info("image upload successful", slog.String("image_id", "39ud88"))
    child.Warn(
        "storage is 90% full",
        slog.String("available_space", "900.1 mb"),
    )
}
Output
{
  "time": "2023-02-26T19:26:46.046793623+01:00",
  "level": "INFO",
  "msg": "image upload successful",
  "program_info": {
    "pid": 229108,
    "go_version": "go1.20"
  },
  "image_id": "39ud88"
}
{
  "time": "2023-02-26T19:26:46.046847902+01:00",
  "level": "WARN",
  "msg": "storage is 90% full",
  "program_info": {
    "pid": 229108,
    "go_version": "go1.20"
  },
  "available_space": "900.1 MB"
}

You can also use the WithGroup() method to create a child logger that starts a group such that all attributes added to the logger (including those added at log point) are nested under the group name:

 
handler := slog.NewJSONHandler(os.Stdout, nil)
buildInfo, _ := debug.ReadBuildInfo()
logger := slog.New(handler).WithGroup("program_info")
child := logger.With(
    slog.Int("pid", os.Getpid()),
    slog.String("go_version", buildInfo.GoVersion),
)

child.Warn(
    "storage is 90% full",
    slog.String("available_space", "900.1 MB"),
)
Output
{
  "time": "2023-05-24T19:00:18.384136084+01:00",
  "level": "WARN",
  "msg": "storage is 90% full",
  "program_info": {
    "pid": 1971993,
    "go_version": "go1.20.2",
    "available_space": "900.1 mb"
  }
}

Customizing Slog levels

The log/slog package provides four log levels by default, with each one associated with an integer value: DEBUG (-4), INFO (0), WARN (4), and ERROR (8).

The gap of four between each level is a deliberate design decision made to accommodate logging schemes with custom levels between the default ones. For example, you can create a custom level between INFO and WARN with a value of 1, 2, or 3.
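For instance, a hypothetical NOTICE level (the name and value here are purely illustrative) could sit between the two:

const LevelNotice = slog.Level(2) // between INFO (0) and WARN (4)

logger.Log(context.Background(), LevelNotice, "Notice message")

Until you relabel it (as shown in the Creating custom log levels section below), the built-in handlers will render this level as INFO+2.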

We previously observed that all loggers are configured to log at the INFO level by default, which causes events logged at a lower severity (such as DEBUG) to be suppressed. You can customize this behavior through the HandlerOptions type as shown below:

 
func main() {
    opts := &slog.HandlerOptions{
        Level: slog.LevelDebug,
    }

    handler := slog.NewJSONHandler(os.Stdout, opts)
    logger := slog.New(handler)

    logger.Debug("Debug message")
    logger.Info("Info message")
    logger.Warn("Warning message")
    logger.Error("Error message")
}
Output
{"time":"2023-05-24T19:03:10.70311982+01:00","level":"DEBUG","msg":"Debug message"}
{"time":"2023-05-24T19:03:10.703187713+01:00","level":"INFO","msg":"Info message"}
{"time":"2023-05-24T19:03:10.703190419+01:00","level":"WARN","msg":"Warning message"}
{"time":"2023-05-24T19:03:10.703192892+01:00","level":"ERROR","msg":"Error message"}

This approach to setting the level fixes the level of the handler throughout its lifetime. If you need the minimum level to be dynamically varied, you must use the LevelVar type as illustrated below:

 
func main() {
    logLevel := &slog.LevelVar{} // INFO

    opts := &slog.HandlerOptions{
        Level: logLevel,
    }

    handler := slog.NewJSONHandler(os.Stdout, opts)

    . . .
}

You can subsequently update the log level anytime using the following:

 
logLevel.Set(slog.LevelDebug)
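For example, here's a minimal sketch that picks the initial level from the environment at startup (the LOG_LEVEL variable name is just a convention of your choosing); LevelVar also implements encoding.TextUnmarshaler if you'd rather parse the value directly:

logLevel := &slog.LevelVar{} // INFO by default

// LOG_LEVEL is a hypothetical environment variable; adapt it to your setup
if os.Getenv("LOG_LEVEL") == "DEBUG" {
    logLevel.Set(slog.LevelDebug)
}

logger := slog.New(slog.NewJSONHandler(os.Stdout, &slog.HandlerOptions{
    Level: logLevel,
}))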

Creating custom log levels

If you need custom levels beyond what Slog provides by default, you can create them by implementing the Leveler interface whose signature is as follows:

 
type Leveler interface {
    Level() Level
}

It's easy to implement this interface through the Level type shown below (since Level itself implements Leveler):

 
const (
    LevelTrace  = slog.Level(-8)
    LevelFatal  = slog.Level(12)
)

Once you've defined custom levels as above, you can only use them through the Log() or LogAttrs() method:

 
opts := &slog.HandlerOptions{
    Level: LevelTrace,
}

logger := slog.New(slog.NewJSONHandler(os.Stdout, opts))

ctx := context.Background()
logger.Log(ctx, LevelTrace, "Trace level")
logger.Log(ctx, LevelFatal, "Fatal level")
Output
{"time":"2023-02-24T09:26:41.666493901+01:00","level":"DEBUG-4","msg":"Trace level"}
{"time":"2023-02-24T09:26:41.666602404+01:00","level":"ERROR+4","msg":"Fatal level"}

Notice how the custom levels are labeled in terms of the defaults. This is definitely not what you want, so you should customize the level names through the HandlerOptions type like this:

 
. . .

var LevelNames = map[slog.Leveler]string{
    LevelTrace:      "TRACE",
    LevelFatal:      "FATAL",
}

func main() {
    opts := slog.HandlerOptions{
        Level: LevelTrace,
        ReplaceAttr: func(groups []string, a slog.Attr) slog.Attr {
            if a.Key == slog.LevelKey {
                level := a.Value.Any().(slog.Level)
                levelLabel, exists := LevelNames[level]
                if !exists {
                    levelLabel = level.String()
                }

                a.Value = slog.StringValue(levelLabel)
            }

            return a
        },
    }

    . . .
}

The ReplaceAttr function is used to customize how each key/value pair in a Record is handled by a Handler. It can be used to rename keys or process attribute values in some other way.

In the above example, it maps the custom log levels to their respective labels, producing TRACE and FATAL respectively.

Output
{"time":"2023-02-24T09:27:51.747625912+01:00","level":"TRACE","msg":"Trace level"}
{"time":"2023-02-24T09:27:51.747737319+01:00","level":"FATAL","msg":"Fatal level"}

Customizing Slog Handlers

As mentioned earlier, both TextHandler and JSONHandler can be customized using the HandlerOptions type. You've already seen how to adjust the minimum level and modify attributes before logging them.

Another customization that can be accomplished through HandlerOptions is including the source of the log call if required:

 
opts := &slog.HandlerOptions{
    AddSource: true,
    Level:     slog.LevelDebug,
}
Output
{
  "time": "2024-01-03T11:06:50.971029852+01:00",
  "level": "DEBUG",
  "source": {
    "function": "main.main",
    "file": "/home/ayo/dev/betterstack/demo/slog/main.go",
    "line": 17
  },
  "msg": "Debug message"
}

It's also easy to switch between handlers based on the application environment. For example, you might prefer the TextHandler for your development logs since it's a little easier to read, then switch to the JSONHandler in production for greater flexibility and compatibility with various logging tools.

This behavior is easily implemented through environment variables:

 
var appEnv = os.Getenv("APP_ENV")

func main() {
    opts := &slog.HandlerOptions{
        Level: slog.LevelDebug,
    }

    var handler slog.Handler = slog.NewTextHandler(os.Stdout, opts)
    if appEnv == "production" {
        handler = slog.NewJSONHandler(os.Stdout, opts)
    }

    logger := slog.New(handler)

    logger.Info("Info message")
}
 
go run main.go
Output
time=2023-02-24T10:36:39.697+01:00 level=INFO msg="Info message"
 
APP_ENV=production go run main.go
Output
{"time":"2023-02-24T10:35:16.964821548+01:00","level":"INFO","msg":"Info message"}

Creating custom Handlers

Since Handler is an interface, it's possible to create custom handlers for formatting the logs differently or writing them to other destinations.

Its signature is as follows:

 
type Handler interface {
    Enabled(context.Context, Level) bool
    Handle(context.Context, Record) error
    WithAttrs(attrs []Attr) Handler
    WithGroup(name string) Handler
}

Here's what each of the methods do:

  • Enabled() determines if a log record should be handled or discarded based on its level. The context can also be used to make the decision.
  • Handle() processes each log record sent to the handler. It is called only if Enabled() returns true.
  • WithAttrs() creates a new handler from an existing one and adds the specified attributes to it.
  • WithGroup() creates a new handler from an existing one and adds the specified group name to it, such that the name qualifies subsequent attributes.

Here's an example that uses the log, json, and color packages to implement a prettified development output for log records:

handler.go
// NOTE: Not well tested, just an illustration of what's possible
package main

import (
    "context"
    "encoding/json"
    "io"
    "log"
    "log/slog"

    "github.com/fatih/color"
)

type PrettyHandlerOptions struct {
    SlogOpts slog.HandlerOptions
}

type PrettyHandler struct {
    slog.Handler
    l *log.Logger
}

func (h *PrettyHandler) Handle(ctx context.Context, r slog.Record) error {
    level := r.Level.String() + ":"

    switch r.Level {
    case slog.LevelDebug:
        level = color.MagentaString(level)
    case slog.LevelInfo:
        level = color.BlueString(level)
    case slog.LevelWarn:
        level = color.YellowString(level)
    case slog.LevelError:
        level = color.RedString(level)
    }

    fields := make(map[string]interface{}, r.NumAttrs())
    r.Attrs(func(a slog.Attr) bool {
        fields[a.Key] = a.Value.Any()

        return true
    })

    b, err := json.MarshalIndent(fields, "", "  ")
    if err != nil {
        return err
    }

    timeStr := r.Time.Format("[15:04:05.000]")
    msg := color.CyanString(r.Message)

    h.l.Println(timeStr, level, msg, color.WhiteString(string(b)))

    return nil
}

func NewPrettyHandler(
    out io.Writer,
    opts PrettyHandlerOptions,
) *PrettyHandler {
    h := &PrettyHandler{
        Handler: slog.NewJSONHandler(out, &opts.SlogOpts),
        l:       log.New(out, "", 0),
    }

    return h
}

When you use the PrettyHandler in your code like this:

 
func main() {
    opts := PrettyHandlerOptions{
        SlogOpts: slog.HandlerOptions{
            Level: slog.LevelDebug,
        },
    }
    handler := NewPrettyHandler(os.Stdout, opts)
    logger := slog.New(handler)
    logger.Debug(
        "executing database query",
        slog.String("query", "SELECT * FROM users"),
    )
    logger.Info("image upload successful", slog.String("image_id", "39ud88"))
    logger.Warn(
        "storage is 90% full",
        slog.String("available_space", "900.1 MB"),
    )
    logger.Error(
        "An error occurred while processing the request",
        slog.String("url", "https://example.com"),
    )
}

You will observe a colorized and prettified output when you execute the program.


You can find several custom handlers created by the community on GitHub and this Go Wiki page. Some notable examples include:

  • tint - writes tinted (colorized) logs.
  • slog-sampling - improves logging throughput by dropping repetitive log records.
  • slog-multi - implements workflows such as middleware, fanout, routing, failover, load balancing.
  • slog-formatter - provides more flexible attribute formatting.
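For instance, here's roughly how the tint handler plugs into the same slog.Logger frontend (a brief sketch based on its NewHandler constructor; check the project's documentation for the options it supports):

package main

import (
    "log/slog"
    "os"

    "github.com/lmittmann/tint"
)

func main() {
    // tint.NewHandler returns a slog.Handler that writes colorized, human-friendly output
    logger := slog.New(tint.NewHandler(os.Stderr, nil))

    logger.Info("Info message")
}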

Using the context package with Slog

So far, we've primarily utilized the standard variants of the level methods such as Info(), Debug(), and others, but Slog also provides context-aware variants that accept a context.Context value as their first argument. Each one follows this signature pattern:

 
func (ctx context.Context, msg string, args ...any)

With such methods, you can propagate contextual attributes across functions by storing them in the Context so that when these values are found, they are added to any resulting records.

Consider the following program:

 
package main

import (
    "context"
    "log/slog"
    "os"
)

func main() {
    logger := slog.New(slog.NewJSONHandler(os.Stdout, nil))

    ctx := context.WithValue(context.Background(), "request_id", "req-123")

    logger.InfoContext(ctx, "image uploaded", slog.String("image_id", "img-998"))
}

A request_id is added to the ctx variable and passed to the InfoContext method. However, when the program is run, the request_id field does not appear in the log:

Output
{
  "time": "2024-01-02T11:04:28.590527494+01:00",
  "level": "INFO",
  "msg": "image uploaded",
  "image_id": "img-998"
}

To get this working, you need to create a custom handler and re-implement the Handle method as shown below:

 
type ctxKey string

const (
    slogFields ctxKey = "slog_fields"
)

type ContextHandler struct {
    slog.Handler
}

// Handle adds contextual attributes to the Record before calling the underlying
// handler
func (h ContextHandler) Handle(ctx context.Context, r slog.Record) error {
    if attrs, ok := ctx.Value(slogFields).([]slog.Attr); ok {
        for _, v := range attrs {
            r.AddAttrs(v)
        }
    }

    return h.Handler.Handle(ctx, r)
}

// AppendCtx adds an slog attribute to the provided context so that it will be
// included in any Record created with such context
func AppendCtx(parent context.Context, attr slog.Attr) context.Context {
    if parent == nil {
        parent = context.Background()
    }

    if v, ok := parent.Value(slogFields).([]slog.Attr); ok {
        v = append(v, attr)
        return context.WithValue(parent, slogFields, v)
    }

    v := []slog.Attr{}
    v = append(v, attr)
    return context.WithValue(parent, slogFields, v)
}

The ContextHandler struct embeds the slog.Handler interface and implements the Handle method to extract Slog attributes stored within the provided context. If found, they are added to the Record before the underlying Handler is called to format and output the record.

On the other hand, the AppendCtx function adds Slog attributes to a context.Context using the slogFields key so that it is accessible to the ContextHandler.

Here's how to use them both:

 
func main() {
    h := &ContextHandler{slog.NewJSONHandler(os.Stdout, nil)}

    logger := slog.New(h)

    ctx := AppendCtx(context.Background(), slog.String("request_id", "req-123"))

    logger.InfoContext(ctx, "image uploaded", slog.String("image_id", "img-998"))
}

You will now observe that the request_id is included in any records created with the ctx argument:

Output
{
  "time": "2024-01-02T11:29:15.229984723+01:00",
  "level": "INFO",
  "msg": "image uploaded",
  "image_id": "img-998",
  "request_id": "req-123"
}

Error logging with Slog

Unlike most logging frameworks, Slog does not provide a helper for the error type, so you must use slog.Any() when logging errors:

 
err := errors.New("something happened")

logger.ErrorContext(ctx, "upload failed", slog.Any("error", err))
Output
{
  "time": "2024-01-02T14:13:44.41886393+01:00",
  "level": "ERROR",
  "msg": "upload failed",
  "error": "something happened"
}

To obtain and log error stack traces, you can use a library like xerrors to create errors with stack traces:

 
err := xerrors.New("something happened")
logger.ErrorContext(ctx, "upload failed", slog.Any("error", err))

Before you can observe the stack trace in the error log, you also need to extract, format, and add it to the corresponding Record through the ReplaceAttr() function demonstrated earlier.

Here's an example:

 
package main

import (
    "context"
    "log/slog"
    "os"
    "path/filepath"

    "github.com/mdobak/go-xerrors"
)

type stackFrame struct {
    Func   string `json:"func"`
    Source string `json:"source"`
    Line   int    `json:"line"`
}

func replaceAttr(_ []string, a slog.Attr) slog.Attr {
    switch a.Value.Kind() {
    case slog.KindAny:
        switch v := a.Value.Any().(type) {
        case error:
            a.Value = fmtErr(v)
        }
    }

    return a
}

// marshalStack extracts stack frames from the error
func marshalStack(err error) []stackFrame {
    trace := xerrors.StackTrace(err)

    if len(trace) == 0 {
        return nil
    }

    frames := trace.Frames()

    s := make([]stackFrame, len(frames))

    for i, v := range frames {
        f := stackFrame{
            Source: filepath.Join(
                filepath.Base(filepath.Dir(v.File)),
                filepath.Base(v.File),
            ),
            Func: filepath.Base(v.Function),
            Line: v.Line,
        }

        s[i] = f
    }

    return s
}

// fmtErr returns a slog.Value with keys `msg` and `trace`. If the error
// does not implement interface { StackTrace() errors.StackTrace }, the `trace`
// key is omitted.
func fmtErr(err error) slog.Value {
    var groupValues []slog.Attr

    groupValues = append(groupValues, slog.String("msg", err.Error()))

    frames := marshalStack(err)

    if frames != nil {
        groupValues = append(groupValues,
            slog.Any("trace", frames),
        )
    }

    return slog.GroupValue(groupValues...)
}

func main() {
    h := slog.NewJSONHandler(os.Stdout, &slog.HandlerOptions{
        ReplaceAttr: replaceAttr,
    })

    logger := slog.New(h)

    ctx := context.Background()

    err := xerrors.New("something happened")

    logger.ErrorContext(ctx, "image uploaded", slog.Any("error", err))
}

With this in place, any errors created using xerrors.New() will be logged with a well-formatted stack trace as follows:

Output
{
  "time": "2024-01-03T07:09:31.013954119+01:00",
  "level": "ERROR",
  "msg": "image uploaded",
"error": {
"msg": "something happened",
"trace": [
{
"func": "main.main",
"source": "slog/main.go",
"line": 82
},
{
"func": "runtime.main",
"source": "runtime/proc.go",
"line": 267
},
{
"func": "runtime.goexit",
"source": "runtime/asm_amd64.s",
"line": 1650
}
]
}
}

Now you can easily trace the path of execution leading to any unexpected errors in your application.

Hiding sensitive fields with the LogValuer interface

The LogValuer interface allows you to standardize your logging output by specifying how your custom types should be logged. Here's its signature:

 
type LogValuer interface {
    LogValue() Value
}

A prime use case for implementing this interface is for hiding sensitive fields in your custom types. For example, here's a User type that does not implement the LogValuer interface. Notice how sensitive details are exposed when an instance is logged:

 
// User does not implement `LogValuer` here
type User struct {
    ID        string `json:"id"`
    FirstName string `json:"first_name"`
    LastName  string `json:"last_name"`
    Email     string `json:"email"`
    Password  string `json:"password"`
}

func main() {
    handler := slog.NewJSONHandler(os.Stdout, nil)
    logger := slog.New(handler)

    u := &User{
        ID:        "user-12234",
        FirstName: "Jan",
        LastName:  "Doe",
        Email:     "jan@example.com",
        Password:  "pass-12334",
    }

    logger.Info("info", "user", u)
}
Output
{
  "time": "2023-02-26T22:11:30.080656774+01:00",
  "level": "INFO",
  "msg": "info",
  "user": {
    "id": "user-12234",
    "first_name": "Jan",
    "last_name": "Doe",
    "email": "jan@example.com",
    "password": "pass-12334"
  }
}

This is problematic since the type contains sensitive fields that should not be present in the logs (such as emails and passwords), and it can also make your logs unnecessarily verbose.

You can solve this issue by specifying how you'd like the type represented in the logs. For example, you may specify that only the ID field should be logged as follows:

 
// implement the `LogValuer` interface on the User struct
func (u User) LogValue() slog.Value {
    return slog.StringValue(u.ID)
}

You will now observe the following output:

Output
{
  "time": "2023-02-26T22:43:28.184363059+01:00",
  "level": "INFO",
  "msg": "info",
  "user": "user-12234"
}

You can also group multiple attributes like this:

 
func (u User) LogValue() slog.Value {
    return slog.GroupValue(
        slog.String("id", u.ID),
        slog.String("name", u.FirstName+" "+u.LastName),
    )
}
Output
{
  "time": "2023-03-15T14:44:24.223381036+01:00",
  "level": "INFO",
  "msg": "info",
  "user": {
    "id": "user-12234",
    "name": "Jan Doe"
  }
}

Using third-party logging backends with Slog

One of Slog's major design goals is to provide a unified logging frontend (slog.Logger) for Go applications while the backend (slog.Handler) remains customizable from program to program.

This way, the logging API is consistent across all dependencies, even if the backends differ. It also avoids coupling the logging implementation to a specific package, making it trivial to switch to a different backend if your project's requirements change.

Here's an example that uses the Slog frontend with a Zap backend, potentially providing the best of both worlds:

 
go get go.uber.org/zap
 
go get go.uber.org/zap/exp/zapslog
 
package main

import (
    "log/slog"

    "go.uber.org/zap"
    "go.uber.org/zap/exp/zapslog"
)

func main() {
    zapL := zap.Must(zap.NewProduction())

    defer zapL.Sync()

    logger := slog.New(zapslog.NewHandler(zapL.Core(), nil))

    logger.Info(
        "incoming request",
        slog.String("method", "GET"),
        slog.String("path", "/api/user"),
        slog.Int("status", 200),
    )
}

This snippet creates a new Zap production logger that is subsequently used as a handler for the Slog package through zapslog.NewHandler(). With this in place, you only need to write your logs using methods provided on slog.Logger but the resulting records will be processed according to the provided zapL configuration.

Output
{"level":"info","ts":1697453912.4535635,"msg":"incoming request","method":"GET","path":"/api/user","status":200}

Switching to a different logging backend is really straightforward since logging is done in terms of slog.Logger. For example, you can switch from Zap to Zerolog like this:

 
go get github.com/rs/zerolog
 
go get github.com/samber/slog-zerolog
 
package main

import (
    "log/slog"
    "os"

"github.com/rs/zerolog"
slogzerolog "github.com/samber/slog-zerolog"
) func main() {
zerologL := zerolog.New(os.Stdout).Level(zerolog.InfoLevel)
logger := slog.New(
slogzerolog.Option{Logger: &zerologL}.NewZerologHandler(),
)
logger.Info( "incoming request", slog.String("method", "GET"), slog.String("path", "/api/user"), slog.Int("status", 200), ) }
Output
{"level":"info","time":"2023-10-16T13:22:33+02:00","method":"GET","path":"/api/user","status":200,"message":"incoming request"}

In the above snippet, the Zap handler has been replaced with a custom Zerolog one. Since logging isn't done using either library's custom APIs, the migration process only takes a couple of minutes compared to a situation where you have to switch out one logging API for another across your entire application.

Best practices for writing and storing Go logs

Once you've configured Slog or your preferred third-party Go logging framework, it's necessary to adopt the following best practices to ensure that you get the most out of your application logs:

1. Standardize your logging interfaces

Implementing the LogValuer interface allows you to standardize how the various types in your application are logged to ensure that their representation in the logs is consistent throughout your application. It's also an effective strategy for ensuring that sensitive fields are omitted from your application logs, as we explored earlier in this article.

2. Add a stack trace to your error logs

To improve your ability to debug unexpected issues in production, you should add a stack trace to your error logs. That way, it'll be much easier to pinpoint where the error originated within the codebase and the program flow that led to the problem.

Slog does not currently provide a built-in way to add stack traces to errors, but as we demonstrated earlier, this functionality can be implemented using packages like pkg/errors or go-xerrors with a couple of helper functions.

3. Lint your Slog statements to enforce consistency

One of the main drawbacks of the Slog API is that it allows two different styles of arguments (loosely typed key/value pairs and strongly typed slog.Attr values), which can lead to inconsistency in a codebase. Beyond that, you may also want to enforce consistent key naming conventions (snake_case, camelCase, etc.), or require that logging calls always include a context argument.

A linter like sloglint can help you enforce various rules for Slog based on your preferred code style. Here's an example configuration when used through golangci-lint:

.golangci.yml
linters-settings:
  sloglint:
    # Enforce not mixing key-value pairs and attributes.
    # Default: true
    no-mixed-args: false
    # Enforce using key-value pairs only (overrides no-mixed-args, incompatible with attr-only).
    # Default: false
    kv-only: true
    # Enforce using attributes only (overrides no-mixed-args, incompatible with kv-only).
    # Default: false
    attr-only: true
    # Enforce using methods that accept a context.
    # Default: false
    context-only: true
    # Enforce using static values for log messages.
    # Default: false
    static-msg: true
    # Enforce using constants instead of raw keys.
    # Default: false
    no-raw-keys: true
    # Enforce a single key naming convention.
    # Values: snake, kebab, camel, pascal
    # Default: ""
    key-naming-case: snake
    # Enforce putting arguments on separate lines.
    # Default: false
    args-on-sep-lines: true

4. Centralize your logs, but persist them to local files first

It's generally better to decouple the task of writing logs from shipping them to a centralized log management system. Writing logs to local files first ensures a backup in case the log management system or network faces issues, preventing potential loss of crucial data.

Additionally, storing logs locally before sending them off helps buffer the logs, allowing for batch transmissions to help optimize network bandwidth usage and minimize impact on application performance.

Local log storage also affords greater flexibility so that if there's a need to transition to a different log management system, modifications are required only in the shipping method rather than the entire application logging mechanism. See our articles on using specialized log shippers like Vector or Fluentd for more details.

Logging to files does not necessarily require you to configure your chosen framework to write directly to a file as Systemd can easily redirect the application's standard output and error streams to a file. Docker also defaults to collecting all data sent to both streams and routing them to local files on the host machine.

5. Sample your logs

Log sampling is the practice of recording only a representative subset of log entries instead of every single log event. This technique is beneficial in high-traffic environments where systems generate vast amounts of log data, and processing every entry could be quite costly since centralized logging solutions often charge based on data ingestion rates or storage.

 
package main

import (
    "fmt"
    "log/slog"
    "os"

    slogmulti "github.com/samber/slog-multi"
    slogsampling "github.com/samber/slog-sampling"
)

func main() {
    // Will print 20% of entries.
    option := slogsampling.UniformSamplingOption{
        Rate: 0.2,
    }

    logger := slog.New(
        slogmulti.
            Pipe(option.NewMiddleware()).
            Handler(slog.NewJSONHandler(os.Stdout, nil)),
    )

    for i := 1; i <= 10; i++ {
        logger.Info(fmt.Sprintf("a message from the gods: %d", i))
    }
}
Output
{"time":"2023-10-18T19:14:09.820090798+02:00","level":"INFO","msg":"a message from the gods: 4"}
{"time":"2023-10-18T19:14:09.820117844+02:00","level":"INFO","msg":"a message from the gods: 5"}

Third-party frameworks such as Zerolog and Zap provide built-in log sampling features. With Slog, you'd have to integrate a third-party handler such as slog-sampling or develop a custom solution. You can also choose to sample logs through a dedicated log shipper such as Vector.

6. Use a log management service

Centralizing your logs in a log management system makes it easy to search, analyze, and monitor your application's behavior across multiple servers and environments. With all the logs in one place, your ability to identify and diagnose issues is sped up significantly as you'd no longer need to jump between different servers to gather information about your service.


While there are several log management solutions out there, Better Stack provides an easy way to set up centralized log management in a few minutes, with live tailing, alerting, dashboards, and uptime monitoring, and incident management features built into a modern and intuitive interface. Give it a try with a completely free plan here.

Final thoughts

I hope this post has provided you with an understanding of the new structured logging package in Go, and how you can start using it in your projects. If you'd like to explore this topic further, I recommend checking out the full proposal and package documentation.

Thanks for reading, and happy logging!
