In this article, we'll explore structured logging in Go with a specific focus on
the recently introduced log/slog package which
aims to bring high-performance, structured, and leveled logging to the Go
standard library.
In the following sections, I will present a comprehensive examination of what Slog
has to offer with accompanying examples. For a performance comparison with other
Go logging frameworks, refer to
this GitHub repo.
Getting started with Slog
Let's begin our exploration of the log/slog package by walking through its
design and architecture. It provides three main types that you should be
familiar with:
Logger: the logging "frontend" which provides level methods such as
Info() and Error() for recording events of interest.
Record: a representation of each self-contained log object created by a
Logger.
Handler: an interface that, once implemented, determines the formatting and
destination of each Record. Two built-in handlers are included in the
log/slog package: TextHandler and JSONHandler for key=value and JSON
output respectively.
Like most Go logging libraries, the slog
package exposes a default Logger that is accessible through top-level
functions. This logger produces almost identical output to the older
log.Printf() method, except for the inclusion of log levels, as this comparison shows:
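package main

import (
	"log"
	"log/slog"
)

func main() {
	log.Print("Info message")
	slog.Info("Info message")
}

Output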
2024/01/03 10:24:22 Info message
2024/01/03 10:24:22 INFO Info message
This is a somewhat bizarre default since Slog's main purpose is to bring
structured logging to the standard library.
It's easy enough to correct this by creating a custom Logger instance through
the slog.New() method. It accepts an implementation of the Handler interface,
which determines how the logs are formatted and where they are written.
Here's an example that uses the built-in JSONHandler type to output JSON logs
to stdout:
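// a minimal sketch; the timestamps in the output will differ on your machine
package main

import (
	"log/slog"
	"os"
)

func main() {
	logger := slog.New(slog.NewJSONHandler(os.Stdout, nil))

	logger.Debug("Debug message")
	logger.Info("Info message")
	logger.Warn("Warning message")
	logger.Error("Error message")
}

Output
{"time":"2023-03-15T12:59:22.227408691+01:00","level":"INFO","msg":"Info message"}
{"time":"2023-03-15T12:59:22.227468972+01:00","level":"WARN","msg":"Warning message"}
{"time":"2023-03-15T12:59:22.227472149+01:00","level":"ERROR","msg":"Error message"}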
All Logger instances default to logging at the INFO level, which leads to
the suppression of the DEBUG entry, but you can
easily update this
as you see fit.
Customizing the default logger
The most straightforward way to customize the default Logger is to utilize the
slog.SetDefault() method, allowing you to replace the default logger with a
custom one.
Using the SetDefault() method also alters the default log.Logger employed by
the log package. This behavior allows existing applications that utilize the
older log package to transition to structured logging
seamlessly:
func main() {
	logger := slog.New(slog.NewJSONHandler(os.Stdout, nil))
	slog.SetDefault(logger)

	// elsewhere in the application
	log.Println("Hello from old logger")
}
Output
{"time":"2023-03-16T15:20:33.783681176+01:00","level":"INFO","msg":"Hello from old logger"}
The slog.NewLogLogger() method is also available for converting an
slog.Logger to a log.Logger when you need to utilize APIs that require the
latter (such as http.Server.ErrorLog):
func main() {
	handler := slog.NewJSONHandler(os.Stdout, nil)
	logger := slog.NewLogLogger(handler, slog.LevelError)

	_ = http.Server{
		// this API only accepts `log.Logger`
		ErrorLog: logger,
	}
}
Adding contextual attributes to log records
A significant advantage of structured logging over unstructured
formats is the ability to add arbitrary attributes as key/value pairs in log
records.
These attributes provide additional context about the logged event, which can be
valuable for tasks such as troubleshooting, generating metrics, auditing, and
various other purposes.
Here's an example illustrating how it works in Slog:
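// illustrative request fields
logger.Info(
	"incoming request",
	"method", "GET",
	"time_taken_ms", 158,
	"path", "/hello/world",
	"status", 200,
)

Output
{"time":"2023-02-24T11:52:49+01:00","level":"INFO","msg":"incoming request","method":"GET","time_taken_ms":158,"path":"/hello/world","status":200}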
All the level methods (Info(), Debug(), etc.) accept a log message as their
first argument, and an unlimited number of loosely typed key/value pairs
thereafter.
This API is similar to
the SugaredLogger API in Zap
(specifically its level methods ending in w) as it prioritizes brevity at the
cost of additional memory allocations.
But be cautious, as this approach can cause unexpected issues. Specifically,
unbalanced key/value pairs can result in problematic output:
logger.Info(
	"incoming request",
	"method", "GET",
	"time_taken_ms", // the value for this key is missing
)
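The resulting record will look something like this:
Output
{"time":"2023-03-15T13:00:13+01:00","level":"INFO","msg":"incoming request","method":"GET","!BADKEY":"time_taken_ms"}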
Since the time_taken_ms key does not have a corresponding value, it will be
treated as a value with key !BADKEY. This isn't great because a property
misalignment could create bad entries, and you may not know about it until you
need to use the logs.
To avoid such problems, it's safer to use strongly-typed attributes constructed
with helpers like slog.String() and slog.Int(), as shown below:
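// each attribute is a strongly-typed slog.Attr
logger.Info(
	"incoming request",
	slog.String("method", "GET"),
	slog.Int("status", 200),
)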
While this is a much better approach to contextual logging, it's not fool-proof,
as nothing stops you from mixing strongly-typed and loosely-typed
key/value pairs like this:
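logger.Info(
	"incoming request",
	"method", "GET", // a loosely-typed pair
	slog.Int("status", 200), // a strongly-typed attribute
)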
If you want to guarantee balanced pairs, use the Logger.LogAttrs() method
instead. It only accepts the slog.Attr type for custom attributes, so it's
impossible to have an unbalanced key/value pair. However, its API is more
convoluted, as you always need to pass a context (or nil) and the log level to
the method in addition to the log message and custom attributes:
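logger.LogAttrs(
	context.Background(),
	slog.LevelInfo,
	"incoming request",
	slog.String("method", "GET"),
	slog.Int("status", 200),
)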
Grouping contextual attributes
Slog also allows grouping multiple attributes under a single name, but the
output depends on the Handler in use. For example, with JSONHandler, each
group is nested within the JSON object:
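// illustrative image attributes grouped under "properties"
logger.Info(
	"image uploaded",
	slog.Int("id", 23123),
	slog.Group("properties",
		slog.Int("width", 4000),
		slog.Int("height", 3000),
		slog.String("format", "jpeg"),
	),
)

Output
{"time":"2023-02-24T12:03:12+01:00","level":"INFO","msg":"image uploaded","id":23123,"properties":{"width":4000,"height":3000,"format":"jpeg"}}

With the TextHandler, each key in the group is instead prefixed by the group
name: properties.width=4000 properties.height=3000 properties.format=jpeg.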
Creating and using child loggers
Including the same attributes in all records within a specific scope can be
beneficial to ensure their presence without repetitive logging statements.
This is where child loggers can prove helpful, as they create a new logging
context that inherits from their parent while allowing the inclusion of
additional fields.
In Slog, creating child loggers is accomplished using the Logger.With()
method. It accepts one or more key/value pairs and returns a new Logger that
includes the specified attributes.
Consider the following code snippet that adds the program's process ID and the
Go version used for compilation to each log record, storing them in a
program_info property:
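// the group's values are resolved once, when the child logger is created
child := logger.With(
	slog.Group("program_info",
		slog.Int("pid", os.Getpid()),
		slog.String("go_version", runtime.Version()),
	),
)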
With this configuration in place, all records created by the child logger will
contain the specified attributes under the program_info property as long as it
is not overridden at log point:
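For example (the pid and Go version will vary):

child.Info("image upload successful", slog.String("image_id", "39ud88"))

Output
{"time":"2023-08-15T10:00:21+01:00","level":"INFO","msg":"image upload successful","program_info":{"pid":12345,"go_version":"go1.21.0"},"image_id":"39ud88"}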
You can also use the WithGroup() method to create a child logger that starts a
group such that all attributes added to the logger (including those added at log
point) are nested under the group name:
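// all subsequent attributes, including log-point ones, land in program_info
logger := slog.New(slog.NewJSONHandler(os.Stdout, nil)).WithGroup("program_info")

child := logger.With(
	slog.Int("pid", os.Getpid()),
	slog.String("go_version", runtime.Version()),
)

child.Warn("storage is 90% full", slog.String("available_space", "900.1 MB"))

Output
{"time":"2023-08-15T10:01:02+01:00","level":"WARN","msg":"storage is 90% full","program_info":{"pid":12345,"go_version":"go1.21.0","available_space":"900.1 MB"}}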
Customizing Slog levels
The log/slog package provides four log levels by default, with each one
associated with an integer value: DEBUG (-4), INFO (0), WARN (4), and
ERROR (8).
The gap of four between each level is a deliberate design decision made to
accommodate logging schemes with custom levels between the default ones. For
example, you can create a custom level between INFO and WARN with a value of
1, 2, or 3.
We previously observed that all loggers are configured to log at the INFO
level by default, which causes events logged at a lower severity (such as
DEBUG) to be suppressed. You can customize this behavior through the
HandlerOptions type as shown
below:
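// a minimal sketch: lower the handler's minimum level to DEBUG
logger := slog.New(
	slog.NewJSONHandler(os.Stdout, &slog.HandlerOptions{
		Level: slog.LevelDebug,
	}),
)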
This approach to setting the level fixes the level of the handler throughout
its lifetime. If you need the minimum level to be dynamically
varied, you must use the LevelVar type as
illustrated below:
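// a LevelVar's zero value corresponds to INFO
logLevel := &slog.LevelVar{}

logger := slog.New(
	slog.NewJSONHandler(os.Stdout, &slog.HandlerOptions{
		Level: logLevel,
	}),
)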
You can subsequently update the log level anytime using the following:
logLevel.Set(slog.LevelDebug)
Creating custom log levels
If you need custom levels beyond what Slog provides by default, you can create
them by implementing the
Leveler interface whose signature is as
follows:
type Leveler interface {
	Level() Level
}
It's easy to implement this interface through the Level type shown below
(since Level itself implements Leveler):
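// a minimal sketch; the values follow Slog's spacing convention so that
// TRACE sorts below DEBUG and FATAL above ERROR
const (
	LevelTrace = slog.Level(-8)
	LevelFatal = slog.Level(12)
)

func main() {
	logger := slog.New(slog.NewJSONHandler(os.Stdout, &slog.HandlerOptions{
		Level: LevelTrace,
	}))

	ctx := context.Background()
	logger.Log(ctx, LevelTrace, "Trace message")
	logger.Log(ctx, LevelFatal, "Fatal message")
}

Output
{"time":"2023-08-10T21:34:38.587943+01:00","level":"DEBUG-4","msg":"Trace message"}
{"time":"2023-08-10T21:34:38.588122+01:00","level":"ERROR+4","msg":"Fatal message"}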
Notice how the custom levels are labeled in terms of the defaults. This is
definitely not what you want, so you should customize the level names through
the HandlerOptions type like this:
var LevelNames = map[slog.Leveler]string{
	LevelTrace: "TRACE",
	LevelFatal: "FATAL",
}

func main() {
	opts := &slog.HandlerOptions{
		Level: LevelTrace,
		ReplaceAttr: func(groups []string, a slog.Attr) slog.Attr {
			if a.Key == slog.LevelKey {
				level := a.Value.Any().(slog.Level)
				levelLabel, exists := LevelNames[level]
				if !exists {
					levelLabel = level.String()
				}
				a.Value = slog.StringValue(levelLabel)
			}
			return a
		},
	}

	logger := slog.New(slog.NewJSONHandler(os.Stdout, opts))
	logger.Log(context.Background(), LevelTrace, "Trace message")
	logger.Log(context.Background(), LevelFatal, "Fatal message")
}
The ReplaceAttr() function is used to customize how each key/value pair in a
Record is handled by a Handler. It can be used to customize key names, or
process the values in some way.
In the above example, it maps the custom log levels to their respective labels
producing TRACE and FATAL respectively.
Customizing Slog Handlers
As mentioned earlier, both TextHandler and JSONHandler can be customized
using the HandlerOptions type. You've already seen how to adjust the minimum
level and modify attributes before logging them.
Another customization that can be accomplished through HandlerOptions is
including the source of the log record, if required:
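// a minimal sketch; the source paths in the output will reflect your machine
logger := slog.New(
	slog.NewJSONHandler(os.Stdout, &slog.HandlerOptions{
		AddSource: true,
	}),
)

logger.Info("image uploaded", slog.String("image_id", "39ud88"))

Output
{"time":"2024-01-03T11:06:50.971029852+01:00","level":"INFO","source":{"function":"main.main","file":"/home/user/main.go","line":10},"msg":"image uploaded","image_id":"39ud88"}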
It's also easy to swap handlers based on the application environment.
For example, you might prefer to use the TextHandler for your development logs
since it's a little easier to read, then switch to JSONHandler in production
for more flexibility and compatibility with various logging tools.
This behavior is easily implemented through environment variables:
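// APP_ENV is an assumed variable name for this sketch
var handler slog.Handler = slog.NewTextHandler(os.Stdout, nil)
if os.Getenv("APP_ENV") == "production" {
	handler = slog.NewJSONHandler(os.Stdout, nil)
}

logger := slog.New(handler)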
Creating custom Handlers
Since Handler is an interface, it's possible to create custom handlers for
formatting the logs differently or writing them to other destinations.
Its signature is as follows:
type Handler interface {
	Enabled(context.Context, Level) bool
	Handle(context.Context, Record) error
	WithAttrs(attrs []Attr) Handler
	WithGroup(name string) Handler
}
Here's what each of the methods does:
Enabled() determines if a log record should be handled or discarded based on
its level. The context can also be used to make the decision.
Handle() processes each log record sent to the handler. It is called only if
Enabled() returns true.
WithAttrs() creates a new handler from an existing one and adds the
specified attributes to it.
WithGroup() creates a new handler from an existing one and adds the
specified group name to it such that the name qualifies subsequent
attributes.
Here's an example that uses the log, json, and
color packages to implement a prettified
development output for log records:
handler.go
// NOTE: Not well tested, just an illustration of what's possible
package main

import (
	"context"
	"encoding/json"
	"io"
	"log"
	"log/slog"

	"github.com/fatih/color"
)

type PrettyHandlerOptions struct {
	SlogOpts slog.HandlerOptions
}

type PrettyHandler struct {
	slog.Handler
	l *log.Logger
}

func (h *PrettyHandler) Handle(ctx context.Context, r slog.Record) error {
	level := r.Level.String() + ":"

	switch r.Level {
	case slog.LevelDebug:
		level = color.MagentaString(level)
	case slog.LevelInfo:
		level = color.BlueString(level)
	case slog.LevelWarn:
		level = color.YellowString(level)
	case slog.LevelError:
		level = color.RedString(level)
	}

	fields := make(map[string]interface{}, r.NumAttrs())
	r.Attrs(func(a slog.Attr) bool {
		fields[a.Key] = a.Value.Any()
		return true
	})

	b, err := json.MarshalIndent(fields, "", " ")
	if err != nil {
		return err
	}

	timeStr := r.Time.Format("[15:04:05.000]")
	msg := color.CyanString(r.Message)

	h.l.Println(timeStr, level, msg, color.WhiteString(string(b)))

	return nil
}

func NewPrettyHandler(
	out io.Writer,
	opts PrettyHandlerOptions,
) *PrettyHandler {
	h := &PrettyHandler{
		Handler: slog.NewJSONHandler(out, &opts.SlogOpts),
		l:       log.New(out, "", 0),
	}

	return h
}
When you use the PrettyHandler in your code like this:
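// a sketch of wiring up the PrettyHandler defined above; the query and
// image_id values are illustrative
func main() {
	opts := PrettyHandlerOptions{
		SlogOpts: slog.HandlerOptions{
			Level: slog.LevelDebug,
		},
	}

	handler := NewPrettyHandler(os.Stdout, opts)
	logger := slog.New(handler)

	logger.Debug(
		"executing database query",
		slog.String("query", "SELECT * FROM users"),
	)
	logger.Info("image upload successful", slog.String("image_id", "39ud88"))
}

When run in a terminal, each record's level is colorized and its attributes are
pretty-printed as indented JSON.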
Before building a custom handler from scratch, it's also worth checking whether
a community-built one already fits your needs, such as:
slog-sampling - improves logging
throughput by dropping repetitive log records.
slog-multi - implements workflows such
as middleware, fanout, routing, failover, load balancing.
slog-formatter - provides more
flexible attribute formatting.
Using the context package with Slog
So far, we've primarily utilized the standard variants of the level methods such
as Info(), Debug(), and others, but Slog also provides context-aware
variants that accept a context.Context value as their first argument. Here's
the signature for each one:
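func (l *Logger) DebugContext(ctx context.Context, msg string, args ...any)
func (l *Logger) InfoContext(ctx context.Context, msg string, args ...any)
func (l *Logger) WarnContext(ctx context.Context, msg string, args ...any)
func (l *Logger) ErrorContext(ctx context.Context, msg string, args ...any)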
With such methods, you can propagate contextual attributes across functions by
storing them in the Context so that when these values are found, they are
added to any resulting records.
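For example, you might expect storing a value in the context to be enough (the
request_id key and values here are illustrative):

func main() {
	logger := slog.New(slog.NewJSONHandler(os.Stdout, nil))

	// a plain string key is used here only for illustration
	ctx := context.WithValue(context.Background(), "request_id", "req-123")

	logger.InfoContext(ctx, "image uploaded", slog.String("image_id", "img-998"))
}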
A request_id is added to the ctx variable and passed to the InfoContext
method. However, when the program is run, the request_id field does not appear
in the log:
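Output
{"time":"2024-01-02T11:04:28.590527494+01:00","level":"INFO","msg":"image uploaded","image_id":"img-998"}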
To get this working, you need to create a custom handler and re-implement the
Handle method as shown below:
type ctxKey string

const (
	slogFields ctxKey = "slog_fields"
)

type ContextHandler struct {
	slog.Handler
}

// Handle adds contextual attributes to the Record before calling the underlying
// handler
func (h ContextHandler) Handle(ctx context.Context, r slog.Record) error {
	if attrs, ok := ctx.Value(slogFields).([]slog.Attr); ok {
		for _, v := range attrs {
			r.AddAttrs(v)
		}
	}

	return h.Handler.Handle(ctx, r)
}

// AppendCtx adds a slog attribute to the provided context so that it will be
// included in any Record created with such context
func AppendCtx(parent context.Context, attr slog.Attr) context.Context {
	if parent == nil {
		parent = context.Background()
	}

	if v, ok := parent.Value(slogFields).([]slog.Attr); ok {
		v = append(v, attr)
		return context.WithValue(parent, slogFields, v)
	}

	v := []slog.Attr{}
	v = append(v, attr)
	return context.WithValue(parent, slogFields, v)
}
The ContextHandler struct embeds the slog.Handler interface and implements
the Handle method to extract Slog attributes stored within the provided
context. If found, they are added to the Record before the underlying
Handler is called to format and output the record.
On the other hand, the AppendCtx function adds Slog attributes to a
context.Context using the slogFields key so that it is accessible to the
ContextHandler.
Here's how to use them both:
func main() {
	h := &ContextHandler{slog.NewJSONHandler(os.Stdout, nil)}
	logger := slog.New(h)
	ctx := AppendCtx(context.Background(), slog.String("request_id", "req-123"))
	logger.InfoContext(ctx, "image uploaded", slog.String("image_id", "img-998"))
}
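This time, the request_id stored in the context shows up in the resulting
record:
Output
{"time":"2024-01-02T11:04:28.590527494+01:00","level":"INFO","msg":"image uploaded","image_id":"img-998","request_id":"req-123"}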
Error logging with Slog
Slog has no dedicated helper for logging errors, so the usual approach is to
attach the error as an attribute with slog.Any(). It also won't capture stack
traces for you, but if you create your errors with a package like go-xerrors,
the trace travels with the error. Before you can observe the stack trace in the
error log, you also need to extract, format, and add it to the corresponding
Record through the ReplaceAttr() function demonstrated earlier.
Here's an example:
package main

import (
	"context"
	"log/slog"
	"os"
	"path/filepath"

	"github.com/mdobak/go-xerrors"
)

type stackFrame struct {
	Func   string `json:"func"`
	Source string `json:"source"`
	Line   int    `json:"line"`
}

func replaceAttr(_ []string, a slog.Attr) slog.Attr {
	switch a.Value.Kind() {
	case slog.KindAny:
		switch v := a.Value.Any().(type) {
		case error:
			a.Value = fmtErr(v)
		}
	}

	return a
}

// marshalStack extracts stack frames from the error
func marshalStack(err error) []stackFrame {
	trace := xerrors.StackTrace(err)
	if len(trace) == 0 {
		return nil
	}

	frames := trace.Frames()

	s := make([]stackFrame, len(frames))
	for i, v := range frames {
		f := stackFrame{
			Source: filepath.Join(
				filepath.Base(filepath.Dir(v.File)),
				filepath.Base(v.File),
			),
			Func: filepath.Base(v.Function),
			Line: v.Line,
		}
		s[i] = f
	}

	return s
}

// fmtErr returns a slog.Value with keys `msg` and `trace`. If the error
// does not carry a stack trace, the `trace` key is omitted.
func fmtErr(err error) slog.Value {
	var groupValues []slog.Attr

	groupValues = append(groupValues, slog.String("msg", err.Error()))

	frames := marshalStack(err)
	if frames != nil {
		groupValues = append(groupValues,
			slog.Any("trace", frames),
		)
	}

	return slog.GroupValue(groupValues...)
}
func main() {
	h := slog.NewJSONHandler(os.Stdout, &slog.HandlerOptions{
		ReplaceAttr: replaceAttr,
	})
	logger := slog.New(h)

	err := xerrors.New("something happened")

	logger.ErrorContext(context.Background(), "image uploaded", slog.Any("error", err))
}
Now you can easily trace the path of execution leading to any unexpected errors
in your application.
Hiding sensitive fields with the LogValuer interface
The LogValuer interface allows you to standardize your logging output by
specifying how your custom types should be logged. Here's its signature:
type LogValuer interface {
	LogValue() Value
}
A prime use case for implementing this interface is for hiding sensitive
fields in your custom types. For example, here's a User type
that does not implement the LogValuer interface. Notice how sensitive details
are exposed when an instance is logged:
// User does not implement `LogValuer` here
type User struct {
	ID        string `json:"id"`
	FirstName string `json:"first_name"`
	LastName  string `json:"last_name"`
	Email     string `json:"email"`
	Password  string `json:"password"`
}

func main() {
	handler := slog.NewJSONHandler(os.Stdout, nil)
	logger := slog.New(handler)

	u := &User{
		ID:        "user-12234",
		FirstName: "Jan",
		LastName:  "Doe",
		Email:     "jan@example.com",
		Password:  "pass-12334",
	}

	logger.Info("info", "user", u)
}
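Running this produces output similar to the following:
Output
{"time":"2023-02-26T22:11:30.080656774+01:00","level":"INFO","msg":"info","user":{"id":"user-12234","first_name":"Jan","last_name":"Doe","email":"jan@example.com","password":"pass-12334"}}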
This is problematic since the type contains secret fields that should not be
present in the logs (such as emails and passwords), and it can also make your
logs unnecessarily verbose.
You can solve this issue by specifying how you'd like the type represented in
the logs. For example, you may specify that only the ID field should be logged
as follows:
// implement the `LogValuer` interface on the User struct
func (u User) LogValue() slog.Value {
	return slog.StringValue(u.ID)
}
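With this change in place, the user attribute is reduced to its ID:
Output
{"time":"2023-02-26T22:11:30.080656774+01:00","level":"INFO","msg":"info","user":"user-12234"}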
Using third-party logging backends with Slog
One of Slog's major design goals is to provide a unified logging frontend
(slog.Logger) for Go applications while the backend (slog.Handler) remains
customizable from program to program.
This way, the logging API is consistent across all dependencies, even if the
backends differ. It also avoids coupling the logging implementation to a
specific package by making it trivial to switch to a different backend if
requirements change in your project.
Here's an example that uses the Slog frontend with a Zap backend,
potentially providing the best of both worlds:
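// a sketch of the Slog frontend with a Zap backend; zapslog lives in
// go.uber.org/zap/exp and its API may still change between releases
package main

import (
	"log/slog"

	"go.uber.org/zap"
	"go.uber.org/zap/exp/zapslog"
)

func main() {
	zapL := zap.Must(zap.NewProduction())
	defer zapL.Sync()

	logger := slog.New(zapslog.NewHandler(zapL.Core()))

	logger.Info(
		"incoming request",
		slog.String("method", "GET"),
		slog.String("path", "/api/user"),
	)
}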
This snippet creates a new Zap production logger that is subsequently used as a
handler for the Slog package through zapslog.NewHandler(). With this in place,
you only need to write your logs using methods provided on slog.Logger but the
resulting records will be processed according to the provided zapL
configuration.
Switching to a different logging backend is really straightforward since logging is done
in terms of slog.Logger. For example, you can switch from Zap to
Zerolog like this:
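// a sketch using the community slog-zerolog bridge; check its README for
// the exact API of your version
package main

import (
	"log/slog"
	"os"

	"github.com/rs/zerolog"
	slogzerolog "github.com/samber/slog-zerolog"
)

func main() {
	zerologL := zerolog.New(os.Stdout)

	logger := slog.New(
		slogzerolog.Option{Logger: &zerologL}.NewZerologHandler(),
	)

	logger.Info(
		"incoming request",
		slog.String("method", "GET"),
		slog.String("path", "/api/user"),
	)
}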
In the above snippet, the Zap handler has been replaced with a
custom Zerolog one. Since logging
isn't done using either library's custom APIs, the migration process only takes
a couple of minutes compared to a situation where you have to switch out one
logging API for another across your entire application.
Best practices for writing and storing Go logs
Once you've configured Slog or your preferred third-party Go logging
framework, it's necessary to adopt the following
best practices to ensure that you get the most out of your application logs:
1. Standardize your logging interfaces
Implementing the LogValuer interface allows you to standardize how the various
types in your application are logged to ensure that their representation in the
logs is consistent throughout your application. It's also an effective strategy
for ensuring that sensitive fields are omitted from your application logs, as we
explored earlier in this article.
2. Add a stack trace to your error logs
To improve your ability to debug unexpected issues in production, you should add
a stack trace to your error logs. That way, it'll be much easier to pinpoint
where the error originated within the codebase and the program flow that led to
the problem.
Slog does not currently provide a built-in way to add stack traces to errors,
but as we demonstrated earlier, this functionality can be implemented using
packages like pkgerrors or
go-xerrors with a couple of helper
functions.
3. Lint your Slog statements to enforce consistency
One of the main drawbacks of the Slog API is that it allows for two different
types of arguments, which could lead to inconsistency in a codebase. Aside from
that, you may also want to enforce consistent key naming conventions (snake_case,
camelCase, etc.), or require that logging calls always include a context argument.
A linter like sloglint can
help you enforce various rules for Slog based on your preferred code style.
Here's an example configuration when used through
golangci-lint:
.golangci.yml
linters-settings:
  sloglint:
    # Enforce not mixing key-value pairs and attributes.
    # Default: true
    no-mixed-args: false
    # Enforce using key-value pairs only (overrides no-mixed-args, incompatible with attr-only).
    # Default: false
    kv-only: true
    # Enforce using attributes only (overrides no-mixed-args, incompatible with kv-only).
    # Default: false
    attr-only: true
    # Enforce using methods that accept a context.
    # Default: false
    context-only: true
    # Enforce using static values for log messages.
    # Default: false
    static-msg: true
    # Enforce using constants instead of raw keys.
    # Default: false
    no-raw-keys: true
    # Enforce a single key naming convention.
    # Values: snake, kebab, camel, pascal
    # Default: ""
    key-naming-case: snake
    # Enforce putting arguments on separate lines.
    # Default: false
    args-on-sep-lines: true
4. Centralize your logs, but persist them to local files first
It's generally better to decouple the task of writing logs from shipping them to
a centralized log management system. Writing logs to local files first ensures a
backup in case the log management system or network faces issues, preventing
potential loss of crucial data.
Additionally, storing logs locally before sending them off helps buffer the
logs, allowing for batch transmissions to help optimize network bandwidth usage
and minimize impact on application performance.
Local log storage also affords greater flexibility so that if there's a need to
transition to a different log management system, modifications are required only
in the shipping method rather than the entire application logging mechanism. See
our articles on using specialized log shippers like
Vector or Fluentd for more details.
Logging to files does not necessarily require you to configure your chosen
framework to write directly to a file as
Systemd can easily redirect the
application's standard output and error streams to a file.
Docker also defaults to collecting all data
sent to both streams and routing them to local files on the host machine.
5. Sample your logs
Log sampling is the practice of recording only a representative subset of log
entries instead of every single log event. This technique is beneficial in
high-traffic environments where systems generate vast amounts of log data, and
processing every entry could be quite costly since centralized logging solutions
often charge based on data ingestion rates or storage.
Here's an example that uses the third-party slog-sampling handler to record
only a fraction of the entries:
package main

import (
	"fmt"
	"log/slog"
	"os"

	slogmulti "github.com/samber/slog-multi"
	slogsampling "github.com/samber/slog-sampling"
)

func main() {
	// Will print 20% of entries.
	option := slogsampling.UniformSamplingOption{
		Rate: 0.2,
	}

	logger := slog.New(
		slogmulti.
			Pipe(option.NewMiddleware()).
			Handler(slog.NewJSONHandler(os.Stdout, nil)),
	)

	for i := 1; i <= 10; i++ {
		logger.Info(fmt.Sprintf("a message from the gods: %d", i))
	}
}
Output
{"time":"2023-10-18T19:14:09.820090798+02:00","level":"INFO","msg":"a message from the gods: 4"}
{"time":"2023-10-18T19:14:09.820117844+02:00","level":"INFO","msg":"a message from the gods: 5"}
Third-party frameworks such as Zerolog and
Zap provide built-in log sampling features. With
Slog, you'd have to integrate a third-party handler such as
slog-sampling or develop a custom
solution. You can also choose to sample logs through a dedicated log shipper
such as
Vector.
6. Use a log management service
Centralizing your logs in a log management system makes it easy to search,
analyze, and monitor your application's behavior across multiple servers and
environments. With all the logs in one place, your ability to identify and
diagnose issues is sped up significantly as you'd no longer need to jump between
different servers to gather information about your service.
While there are several log management solutions out there,
Better Stack provides an easy way to set up
centralized log management in a few minutes, with live tailing, alerting,
dashboards, uptime monitoring, and incident management features built into a
modern and intuitive interface. Give it a try with a completely free plan
here.
Final thoughts
I hope this post has provided you with an understanding of the new structured
logging package in Go, and how you can start using it in your projects. If you'd
like to explore this topic further, I recommend checking out the
full proposal
and package documentation.