A Complete Guide to Semantic Logger for Ruby on Rails
While the built-in Ruby logger provides basic logging, it lacks features like context-aware structured logging, asynchronous writes, and multi-destination support. Semantic Logger addresses these limitations by offering:
- Structured logging in JSON or logfmt.
- Asynchronous logging to prevent blocking application execution.
- Multi-destination logging, allowing logs to be sent to files, databases, and external services.
- Automatic log enrichment with contextual data and metrics.
- Seamless Rails integration.
In this article, we'll explore how to use Semantic Logger effectively to improve Ruby and Rails application logging, and turn unstructured logs into structured data that facilitates observability.
Let's dive in!
What is Semantic Logger?
Semantic Logger is a high-performance logging framework that brings structured, leveled, and efficient logging to Ruby applications.
It is widely used due to its flexibility, ease of integration with Rails, and ability to log to multiple outputs, such as files, databases, or log management services.
It operates asynchronously by default using a separate logging thread. When your code logs something, the message goes into a queue and a separate thread processes this queue.
This non-blocking behavior ensures that logging does not slow down application execution.
To ensure that all logs are written before the application exits, Semantic Logger automatically flushes the log queue. You can also trigger a manual flush when necessary with:
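```ruby
SemanticLogger.flush
```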
If logs need to be written immediately, synchronous mode can be enabled using:
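```ruby
# Call before any appenders are added
SemanticLogger.sync!
```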
Or, when using a Gemfile:
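```ruby
# Gemfile
gem "semantic_logger", require: "semantic_logger/sync"
```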
In this mode, log messages are written in the same thread, meaning execution waits until logging is completed:
The tradeoff is between performance with asynchronous logging and immediacy/reliability with synchronous logging.
Getting started with Semantic Logger
To use Semantic Logger in your Ruby project, add it to your Gemfile:
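```ruby
gem "semantic_logger"
```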
Then install it by running:
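```bash
bundle install
```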
Semantic Logger offers several key components:
- Logger: The main interface for logging messages with different levels (e.g., info, error).
- Appenders: Define where logs are sent (e.g., console, files, external services).
- Log formatters: Customize log output formats, such as JSON or plain text.
Let's start with a simple setup that logs messages to the console:
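The snippet below is a minimal sketch; the MyApp logger name and the messages are just placeholders:

```ruby
require "semantic_logger"

# Direct log output to standard output
SemanticLogger.add_appender(io: $stdout)

# Create a logger instance named after the class or application
logger = SemanticLogger["MyApp"]

logger.info("Application started")
logger.warn("Disk space is running low")
```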
Before logging with Semantic Logger, you need to specify a destination for the
logs with add_appender(). In the snippet above, the logs are directed to the
standard output.
You must also create an instance of the Logger class by supplying the name of
the class or application as seen above. This ensures that the logging entries
from each class are uniquely identified.
You can then call level methods like info(), warn() and others on the logger
instance to produce log entries.
This will yield the following output:
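With the default text formatter, each entry looks something like this (the timestamps and process/thread IDs will of course differ):

```
2025-06-01 10:15:30.123456 I [3420:60] MyApp -- Application started
2025-06-01 10:15:30.123789 W [3420:60] MyApp -- Disk space is running low
```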
By default, Semantic Logger includes a timestamp, log level, process and thread ID, logger name, and the message in its output. It also includes the file name and line number in the case of errors.
Instead of creating a logger instance as seen above, you can use the
SemanticLogger::Loggable mixin to provide logging capabilities to a class:
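The class name below is only an example:

```ruby
class PaymentProcessor
  include SemanticLogger::Loggable

  def process
    # `logger` is provided by the Loggable mixin and is named after the class
    logger.info("Processing payment")
  end
end

PaymentProcessor.new.process
```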
You'll notice that the resulting log entry is correctly attributed to the class:
Using log levels
Semantic Logger supports the following log levels:
trace, debug, info, warn, error, and fatal. These levels allow you
to control log verbosity by filtering messages based on severity.
Semantic Logger follows the standard Ruby and Rails logging interface, so if you're migrating from the default logger, you don't need to modify all existing logging calls.
Setting the default log level
By default, Semantic Logger logs at the info level, meaning that trace and
debug messages are ignored unless explicitly enabled.
You can control this by adjusting the global log level with:
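```ruby
SemanticLogger.default_level = :warn
```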
Now, only warn, error, and fatal logs will be recorded.
You can also override the default log level for a specific logger:
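```ruby
logger = SemanticLogger["Device"]
logger.level = :error
```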
This ensures that only logs at error or higher are recorded for the Device
class.
You can also control log verbosity without modifying code through an environment variable:
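A common pattern is to map an environment variable to the default level in your startup code (the LOG_LEVEL name here is a convention, not something the gem requires):

```ruby
# e.g. start the app with: LOG_LEVEL=debug ruby app.rb
SemanticLogger.default_level = (ENV["LOG_LEVEL"] || "info").to_sym
```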
This makes it easy to adjust logging levels in different environments (such as
debug in development, warn in production).
Suppressing logs with silence
The silence method allows you to increase the log level for a block of code
temporarily. By default, it suppresses all logs below error:
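```ruby
logger.silence do
  logger.info("This message is suppressed")
  logger.warn("So is this one")
  logger.error("Errors are still logged")
end
```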
Internally, silence raises default_level to error and restores it after the
block.
You can also specify a custom log level:
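```ruby
logger.silence(:warn) do
  logger.info("Suppressed")
  logger.warn("Still logged")
end
```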
Some common use cases for logger.silence include:
- During batch operations where logging each item would be too verbose.
- When calling noisy third-party libraries.
- During testing, when you want to suppress certain log output.
However, note that silence does not affect loggers with explicitly set
levels—only those relying on the global default.
Structuring your log entries
One of the key benefits of Semantic Logger is its support for structured logging. Instead of just text messages, structured logs are designed for machine parsability, a prerequisite for achieving an observable system.
Since JSON is the most widely used structured format, we'll configure Semantic Logger to output logs in JSON:
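```ruby
SemanticLogger.add_appender(io: $stdout, formatter: :json)

logger = SemanticLogger["MyApp"]
logger.info("Application started")
```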
You'll see the output will now be in JSON format:
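A pretty-printed entry looks roughly like the following (all values here are illustrative):

```json
{
  "host": "my-host.local",
  "application": "Semantic Logger",
  "timestamp": "2025-06-01T10:15:30.123456Z",
  "level": "info",
  "level_index": 2,
  "pid": 3420,
  "thread": "60",
  "name": "MyApp",
  "message": "Application started"
}
```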
Note that the JSON output shown above is formatted for readability. The actual entry is always a single-line JSON string ending with a newline character.
This JSON output contains a few additional details not found in the default
format such as the host, application, and level_index properties.
The numeric representation of log levels (level_index) makes it easier to
perform comparisons and set thresholds programmatically in log management tools.
Adding context to your logs
Semantic Logger allows you to enrich logs with contextual data beyond just the log message. Each logging method supports additional parameters:
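```ruby
# General form (the same applies to debug, warn, error, and the other levels)
logger.info(message, payload_or_exception = nil, exception = nil, &block)
```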
Here's a breakdown of each parameter:
- message (required if no other parameter is provided): The main log message.
- payload_or_exception (optional): It can either be:
  - A hash: Adds structured key-value data for contextual logging.
  - An exception: If an error object is passed, Semantic Logger will log its message and backtrace.
- exception (optional): If an exception is not provided in payload_or_exception, you can pass it here explicitly.
- &block (optional): The block is evaluated only if the log level is enabled.
You can include custom attributes in logs to provide more details about events:
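The attribute names below are only examples:

```ruby
logger.info("Order placed", order_id: 1234, user_id: 42, total: 99.95)
```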
They will appear in the logs under the payload key:
You can also log exceptions either in the second or third parameter as follows:
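(The process_order call below stands in for your own code.)

```ruby
begin
  process_order(order)
rescue StandardError => e
  # Exception in the second parameter:
  logger.error("Failed to process order", e)

  # Payload in the second parameter, exception in the third:
  logger.error("Failed to process order", { order_id: order.id }, e)
end
```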
In either case, an exception object will be included in the log entry with the
type of error, the error message, and a stack trace:
If an exception has a cause (nested error), Semantic Logger will include both in the same entry:
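For example:

```ruby
begin
  begin
    raise ArgumentError, "quantity must be positive"
  rescue ArgumentError
    # Re-raising sets the original error as `cause` on the new exception
    raise "Unable to update inventory"
  end
rescue StandardError => e
  logger.error("Inventory update failed", e)
end
```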
Contextual logging with tagged
Semantic Logger provides a convenient way to add contextual metadata to multiple
log entries using the tagged method.
It ensures that all logs within a specific block share common attributes, making it easier to trace related events.
Here's how to use it:
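The tag values below are only examples:

```ruby
logger.tagged(facility: "warehouse-7", region: "us-east-1") do
  logger.info("Inventory sync started")
  logger.info("Inventory sync completed")
end
```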
You'll notice that both log entries produced within the block contain a
named_tags property with the shared facility and region fields:
This way, you can easily include relevant contextual attributes to your log entries without manually passing metadata in each log call.
Working with appenders
Semantic Logger supports logging to multiple destinations beyond the console, including:
- A text file,
- Any HTTP, UDP, or TCP endpoint,
- Log management services like Better Stack, Papertrail, or New Relic,
- Error tracking tools,
- Databases like MySQL or MongoDB.
To configure where logs are sent, use the add_appender() method:
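```ruby
# Log to a file in JSON format
SemanticLogger.add_appender(file_name: "log/application.log", formatter: :json)
```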
You can also specify multiple appenders to store logs in different formats and locations simultaneously. For example, you can log to a file and the console with this configuration:
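```ruby
# Console: colorized, human-readable output for all levels
SemanticLogger.add_appender(io: $stdout, formatter: :color)

# File: JSON output, but only for error entries and above
SemanticLogger.add_appender(
  file_name: "log/application.log",
  formatter: :json,
  level:     :error
)
```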
With this setup, all logs will be written to the console in a colorized format:
While the application.log will only contain the logs with error severity or
greater in JSON format:
If you're logging to a file, be sure to configure log rotation on the server or container.
For more appender configurations, refer to the Semantic Logger documentation.
Collecting performance measurements through logs
Semantic Logger provides a simple way to track metrics such as durations and counts directly in the log entries.
For example, you can track the execution time of a block of code through the
measure methods:
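```ruby
logger.measure_info("Synced inventory") do
  sync_inventory   # stands in for your own (potentially slow) operation
end
```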
These methods track execution time and can include custom payload data:
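```ruby
logger.measure_info("Synced inventory", payload: { item_count: 250 }) do
  sync_inventory
end
```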
Once the block completes, the log entry includes the execution duration:
To reduce log volume, you can log only if an operation exceeds a given duration:
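```ruby
# Only log if the block takes longer than 500 milliseconds
logger.measure_info("Synced inventory", min_duration: 500) do
  sync_inventory
end
```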
This is useful for identifying slow operations without cluttering logs with fast executions.
You can also associate a custom metric name with the log entry to make it easier to aggregate and visualize such data in log monitoring tools:
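The metric name below is only an example:

```ruby
logger.measure_info("Synced inventory", metric: "inventory/sync") do
  sync_inventory
end
```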
If you're looking for a dedicated metrics instrumentation tool, check out Prometheus or OpenTelemetry.
Integrating Semantic Logger in Rails
Semantic Logger integrates seamlessly with Rails to enable structured, high-performance logging throughout your application.
To replace Rails' default logger with Semantic Logger, you can modify your config/application.rb file as follows:
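A minimal sketch of a manual setup (the MyApp module name, appender, and formatter are placeholders for your own choices):

```ruby
# config/application.rb
require "semantic_logger"

module MyApp   # your application's module name
  class Application < Rails::Application
    # Send logs to standard output and use Semantic Logger as the Rails logger
    SemanticLogger.add_appender(io: $stdout, formatter: :color)
    config.logger = SemanticLogger[Rails]
  end
end
```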
For a more straightforward setup, use the rails_semantic_logger gem, which
automatically replaces the Rails logger:
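```ruby
# Gemfile
gem "rails_semantic_logger"
```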
It also automatically replaces the default loggers for:
- Sidekiq
- Bugsnag
- Mongoid
- Mongo
- Moped
- Resque
- Sidetiq
- DelayedJob
By using rails_semantic_logger, all logs from your Rails app and these
dependencies will follow the configured Semantic Logger format.
Once Semantic Logger is integrated, Rails' logs will start to appear in its default format:
You can also include the source code file name and line number where the message
originated by setting config.semantic_logger.backtrace_level to the desired
level:
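```ruby
# config/application.rb
config.semantic_logger.backtrace_level = :info
```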
Now, each log entry includes the source file (subscriber.rb) and line number
(138) to help with debugging efforts:
Customizing the Rails log output
By default, Rails logs several entries for each request it receives. A typical request emits the following entries:
- A "Started" entry when Rails receives a new HTTP request.
- A "Processing" entry when Rails starts executing a controller action.
- A "Rendered" entry when a view template is rendered.
- A "Completed" entry once the request is fully processed.
Except for "Completed", all other logs are at the debug level and only appear
in development or testing environments.
You can turn off specific log entries (Started, Processing, or Rendered)
in config/application.rb:
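With rails_semantic_logger, these entries are controlled by settings of the same names (shown here set to false to silence them):

```ruby
# config/application.rb
config.rails_semantic_logger.started    = false  # "Started GET ..." entries
config.rails_semantic_logger.processing = false  # "Processing by ..." entries
config.rails_semantic_logger.rendered   = false  # "Rendered ..." entries
```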
You'll also notice that entries generated by ActionController and ActiveRecord are converted to semantic data:
You can disable this conversion with:
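```ruby
config.rails_semantic_logger.semantic = false
```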
The messages will now appear as follows:
Modifying request completion entries
By default, Rails includes details like HTTP method, controller action, response status, and execution times in the "Completed" log.
However, you can enhance these logs by adding custom metadata through the
append_info_to_payload method:
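For example, assuming a hypothetical OrdersController with illustrative payload keys:

```ruby
class OrdersController < ApplicationController
  def create
    # ... handle the order ...
  end

  private

  def append_info_to_payload(payload)
    super
    payload[:order_id]    = params[:order_id]
    payload[:order_total] = params[:total]
  end
end
```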
When this controller processes a request, the log entry will now include order details:
If you need request-wide metadata across all controllers, define
append_info_to_payload in ApplicationController:
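The payload keys below are only examples:

```ruby
class ApplicationController < ActionController::Base
  private

  def append_info_to_payload(payload)
    super
    payload[:ip]         = request.remote_ip
    payload[:user_agent] = request.user_agent
  end
end
```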
Now, every request log will automatically include the client's IP address and browser details.
Enabling structured JSON logging
To enable structured logging in Rails with Semantic Logger, configure the log
format in config/application.rb:
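```ruby
# config/application.rb
config.rails_semantic_logger.format = :json
```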
All the log entries will now be presented in JSON format:
Adding log tags
To include global metadata in every log entry, use the config.log_tags option.
This ensures application or request-wide attributes are automatically added to
all logs.
Here's an example that modifies config/application.rb to include the request
ID in every log entry:
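```ruby
# config/application.rb
config.log_tags = {
  request_id: :request_id
}
```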
With this configuration, all log entries will contain a request_id inside the
named_tags field:
This is a handy way to correlate all the logs generated by a single request
without manually adding request_id in every log statement.
Centralizing and monitoring your Rails logs
So far, we've explored how to configure Semantic Logger, customize its log output, and integrate it into Rails applications. The next step is centralizing your logs to enable sophisticated log analysis and long-term storage.
Instead of managing logs on individual servers, using a log management service such as Better Stack allows you to:
- Aggregate logs from multiple environments in one place.
- Monitor application health and detect anomalies in real time.
- Set up proactive alerts to catch issues before they impact users.
- Correlate logs with metrics and traces for deeper insights.
The recommended approach is logging to the console or a file and using a log forwarder such as Vector, Fluentd, or the OpenTelemetry Collector to route logs to their final destination.
This decouples logging from log storage, ensuring that your application remains agnostic to the final log destination. It also means that log processing and aggregation are handled externally, which helps avoid additional performance overhead.
If logging to the console or a file is not practical in your environment, Semantic Logger supports direct integrations with various services through its appenders.
Final thoughts
We've covered a lot of ground in this tutorial on how Semantic Logger enhances logging in Ruby on Rails applications.
By offering everything required for a robust logging system, along with flexibility and easy customization, Semantic Logger stands out as a strong choice for your next Ruby project.
I hope this article has helped you understand how to integrate Semantic Logger in your Ruby and Rails projects. For more details, be sure to check out the official Semantic Logger documentation.
Thanks for reading, and happy logging!