Ruby on Rails Monitoring with Prometheus
This article provides a detailed guide on integrating Prometheus metrics into your Ruby on Rails application.
It explores key concepts, including instrumenting your application with various metric types, monitoring HTTP request activity, and exposing metrics for Prometheus to scrape.
Let's get started!
Prerequisites
- Prior experience with Ruby on Rails, along with a recent version of Ruby installed.
- Familiarity with Docker and Docker Compose.
- Basic understanding of how Prometheus works.
Step 1 — Setting up the demo project
To demonstrate Prometheus instrumentation in Rails applications, let's set up a simple "Hello World" Rails application along with the Prometheus server.
First, create a new Rails application and navigate into the project directory:
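A minimal setup could look like this (the app name rails-prometheus-demo is just an example):

```bash
rails new rails-prometheus-demo
cd rails-prometheus-demo
```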
Let's create a simple controller with two routes - one for our main page and one for our metrics endpoint:
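Here is a minimal sketch, assuming a HomeController with index and metrics actions (the names are placeholders; use whatever fits your app):

```ruby
# app/controllers/home_controller.rb
class HomeController < ApplicationController
  # Main page: responds with a plain-text greeting.
  def index
    render plain: "Hello world!"
  end

  # Metrics endpoint: empty for now; it will expose Prometheus metrics
  # once the client library is wired in (Step 2).
  def metrics
    render plain: ""
  end
end
```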
Update your routes file to include these endpoints:
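Assuming the HomeController sketched above, the routes could look like:

```ruby
# config/routes.rb
Rails.application.routes.draw do
  root "home#index"
  get "/metrics", to: "home#metrics"
end
```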
This app exposes two endpoints: the root (/) returns a simple "Hello world!" message, and /metrics will eventually expose the instrumented metrics.
Next, create a Dockerfile in your project root:
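A minimal Dockerfile sketch; adjust the Ruby version to match your Gemfile:

```dockerfile
FROM ruby:3.3

WORKDIR /app

# Install gems first so Docker can cache this layer between builds.
COPY Gemfile Gemfile.lock ./
RUN bundle install

COPY . .

EXPOSE 3000
CMD ["bin/rails", "server", "-b", "0.0.0.0", "-p", "3000"]
```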
Now, create a compose.yaml file to set up both the Rails application and
Prometheus server:
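Something along these lines should work (the image tag and port mappings are reasonable defaults, not requirements):

```yaml
services:
  app:
    build: .
    ports:
      - "3000:3000"

  prometheus:
    image: prom/prometheus:latest
    ports:
      - "9090:9090"
    volumes:
      - ./prometheus.yml:/etc/prometheus/prometheus.yml
```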
The app service is the Rails application running on port 3000, while
prometheus configures a Prometheus server to scrape the Rails app via the
prometheus.yml file, which we'll create next:
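A basic scrape configuration might look like this; the app hostname matches the Compose service name above, and the 5-second interval is just for quick feedback while experimenting:

```yaml
global:
  scrape_interval: 5s

scrape_configs:
  - job_name: rails-app
    metrics_path: /metrics
    static_configs:
      - targets: ["app:3000"]
```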
Launch both services in detached mode with:
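With Docker Compose v2, that is:

```bash
docker compose up -d --build
```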
To confirm that the Rails application is running, send a request to the root endpoint:
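For example, with curl:

```bash
curl http://localhost:3000
```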
This should return:
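```text
Hello world!
```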
To verify that Prometheus is able to access the exposed /metrics endpoint,
visit http://localhost:9090/targets in your browser. With everything up and
running, you're ready to integrate Prometheus in your Ruby on Rails application
in the next step.
Step 2 — Installing the Prometheus client
Before instrumenting your Rails application with Prometheus, you need to install the official Prometheus client for Ruby applications.
Add the prometheus-client gem to your Gemfile:
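```ruby
# Gemfile
gem "prometheus-client"
```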
Then install the gem:
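```bash
bundle install
```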
Next, rebuild the app service so that the prometheus-client dependency is installed in the image:
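```bash
docker compose up -d --build app
```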
Once the app service restarts, integrate Prometheus into your application by
modifying the metrics action in your controller:
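A sketch of the updated action, reusing the HomeController assumed in Step 1; Prometheus::Client.registry and the text exposition formatter come from the prometheus-client gem:

```ruby
# app/controllers/home_controller.rb
require "prometheus/client"
require "prometheus/client/formats/text"

class HomeController < ApplicationController
  def index
    render plain: "Hello world!"
  end

  def metrics
    # Serialize everything in the global registry into the Prometheus
    # text exposition format.
    registry = Prometheus::Client.registry
    render plain: Prometheus::Client::Formats::Text.marshal(registry)
  end
end
```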
This modification introduces the prometheus-client gem and its functionality
to collect and return metrics in a format that Prometheus can scrape.
Once you've saved the file, visit http://localhost:3000/metrics in your
browser or use curl to see the default Prometheus metrics:
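```bash
curl http://localhost:3000/metrics
```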
By default, Prometheus uses a global registry that automatically includes standard Ruby runtime metrics. If you want to use a custom registry to expose only specific metrics, modify your controller:
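One way to do that (the constant name is arbitrary):

```ruby
# app/controllers/home_controller.rb
require "prometheus/client"
require "prometheus/client/formats/text"

class HomeController < ApplicationController
  # A dedicated registry that only contains metrics registered on it explicitly.
  METRICS_REGISTRY = Prometheus::Client::Registry.new

  def index
    render plain: "Hello world!"
  end

  def metrics
    render plain: Prometheus::Client::Formats::Text.marshal(METRICS_REGISTRY)
  end
end
```

If you adopt a custom registry, remember to register the metrics in the later steps on that same registry; the sketches that follow use the default Prometheus::Client.registry for simplicity.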
Since no custom metrics are registered yet, the /metrics endpoint will return an empty response for now. In the following sections, you will instrument the application with different metric types, including Counters, Gauges, Histograms, and Summaries.
Step 3 — Instrumenting a Counter metric
Let's start with a fundamental metric that tracks the total number of HTTP requests made to the server. Since this value always increases, it is best represented as a Counter.
To automatically track HTTP requests in Rails, we'll create a middleware. First, create a new file for our middleware:
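A sketch of such a middleware; the file path, class name, and label names are assumptions, but the counter matches the http_requests_total metric described below:

```ruby
# lib/middleware/metrics_middleware.rb
require "prometheus/client"

class MetricsMiddleware
  def initialize(app)
    @app = app

    # Registered once at boot on the default registry.
    @http_requests_total = Prometheus::Client.registry.counter(
      :http_requests_total,
      docstring: "Total number of HTTP requests",
      labels: [:code, :path, :method]
    )
  end

  def call(env)
    status, headers, response = @app.call(env)

    # Count the request after it has been processed, labeled by outcome.
    @http_requests_total.increment(
      labels: {
        code: status.to_s,
        path: env["PATH_INFO"],
        method: env["REQUEST_METHOD"]
      }
    )

    [status, headers, response]
  end
end
```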
Next, create a file to configure the middleware:
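One way to wire it in, assuming the file layout above:

```ruby
# config/initializers/metrics_middleware.rb
require Rails.root.join("lib", "middleware", "metrics_middleware").to_s

Rails.application.config.middleware.use MetricsMiddleware
```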
This implementation creates a Counter metric named http_requests_total with
labels for status code, path, and HTTP method. It uses a custom middleware to
automatically count all HTTP requests by incrementing the counter after each
request is processed.
After restarting your application, if you refresh
http://localhost:3000/metrics several times, you'll see output like:
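Something along these lines, with illustrative label values and counts:

```text
# TYPE http_requests_total counter
# HELP http_requests_total Total number of HTTP requests
http_requests_total{code="200",path="/",method="GET"} 5
http_requests_total{code="200",path="/metrics",method="GET"} 3
```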
You can view your metrics in the Prometheus web interface by heading to
http://localhost:9090. Then type http_requests_total into the query box and
click Execute to see the raw values.
You can switch to the Graph tab to visualize the counter increasing over time:
Step 4 — Instrumenting a Gauge metric
A Gauge represents a value that can fluctuate up or down, making it ideal for tracking real-time values such as active connections, queue sizes, or memory usage.
In this section, we'll use a Prometheus Gauge to monitor the number of active requests being processed by the service. Let's update our middleware:
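An updated sketch of the middleware from Step 3; the metric name http_active_requests is an assumption:

```ruby
# lib/middleware/metrics_middleware.rb
require "prometheus/client"

class MetricsMiddleware
  def initialize(app)
    @app = app
    registry = Prometheus::Client.registry

    @http_requests_total = registry.counter(
      :http_requests_total,
      docstring: "Total number of HTTP requests",
      labels: [:code, :path, :method]
    )

    # Gauge tracking how many requests are in flight right now.
    @active_requests_gauge = registry.gauge(
      :http_active_requests,
      docstring: "Number of HTTP requests currently being processed"
    )
  end

  def call(env)
    @active_requests_gauge.increment

    status, headers, response = @app.call(env)

    @http_requests_total.increment(
      labels: {
        code: status.to_s,
        path: env["PATH_INFO"],
        method: env["REQUEST_METHOD"]
      }
    )

    [status, headers, response]
  ensure
    # Decrement even if the request raised, so the gauge never drifts upward.
    @active_requests_gauge.decrement
  end
end
```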
The active_requests_gauge metric is created using gauge() to track the
number of active HTTP requests at any given moment.
When a new request starts processing, the gauge is incremented. After the request is completed, the gauge is decremented.
To observe the metric in action, let's add a delay to the root route:
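For example, a hypothetical two-second delay in the index action:

```ruby
# app/controllers/home_controller.rb
def index
  sleep 2 # simulate slow work so requests stay in flight long enough to observe
  render plain: "Hello world!"
end
```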
Use a load testing tool like Apache Benchmark to generate requests to the / route:
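```bash
# 100 requests, 10 at a time
ab -n 100 -c 10 http://localhost:3000/
```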
Visiting the /metrics endpoint in your browser will show something like:
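Assuming the metric name from the sketch above:

```text
# TYPE http_active_requests gauge
# HELP http_active_requests Number of HTTP requests currently being processed
http_active_requests 10
```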
This indicates that there are currently 10 active requests being processed by your service.
Tracking absolute values
If you need a Gauge that tracks absolute but fluctuating values, you can set the value directly instead of incrementing or decrementing it.
For example, to track the current memory usage of the Rails application, you can define a gauge and use it to record the current memory usage of the process:
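A sketch, with the module, file, and metric names as assumptions; it shells out to ps to read the resident set size, so it expects ps to be available in the container:

```ruby
# lib/metrics/memory_metrics.rb
require "prometheus/client"

module MemoryMetrics
  MEMORY_USAGE_GAUGE = Prometheus::Client.registry.gauge(
    :process_memory_usage_bytes,
    docstring: "Current resident memory usage of the process in bytes"
  )

  # Reads the resident set size of the current process (in kilobytes)
  # and records it on the gauge as an absolute value in bytes.
  def self.collect_memory_metrics
    rss_kilobytes = `ps -o rss= -p #{Process.pid}`.to_i
    MEMORY_USAGE_GAUGE.set(rss_kilobytes * 1024)
  end
end
```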
And initialize it in the Rails configuration:
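And a matching initializer sketch that runs the collection loop in a background thread:

```ruby
# config/initializers/memory_metrics.rb
require Rails.root.join("lib", "metrics", "memory_metrics").to_s

# Update the gauge roughly once per second for the lifetime of the process.
Thread.new do
  loop do
    MemoryMetrics.collect_memory_metrics
    sleep 1
  end
end
```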
The collect_memory_metrics method runs in a background thread to continuously
update the memory_usage_gauge metric every second. Here, set() is used
instead of increment/decrement to set absolute values.
Step 5 — Instrumenting a Histogram metric
Histograms are useful for tracking the distribution of measurements, such as
HTTP request durations. In Ruby, creating a Histogram metric is straightforward
with the histogram method of the Prometheus registry.
Let's update our middleware to track request durations:
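A sketch of the full middleware after this step, combining the counter and gauge from earlier steps with the new histogram (metric and label names are assumptions):

```ruby
# lib/middleware/metrics_middleware.rb
require "prometheus/client"

class MetricsMiddleware
  def initialize(app)
    @app = app
    registry = Prometheus::Client.registry

    @http_requests_total = registry.counter(
      :http_requests_total,
      docstring: "Total number of HTTP requests",
      labels: [:code, :path, :method]
    )

    @active_requests_gauge = registry.gauge(
      :http_active_requests,
      docstring: "Number of HTTP requests currently being processed"
    )

    # Histogram of request durations, using the client's default buckets.
    @latency_histogram = registry.histogram(
      :http_request_duration_seconds,
      docstring: "Duration of HTTP requests in seconds",
      labels: [:path, :method]
    )
  end

  def call(env)
    @active_requests_gauge.increment
    start_time = Process.clock_gettime(Process::CLOCK_MONOTONIC)

    status, headers, response = @app.call(env)

    duration = Process.clock_gettime(Process::CLOCK_MONOTONIC) - start_time
    @latency_histogram.observe(
      duration,
      labels: { path: env["PATH_INFO"], method: env["REQUEST_METHOD"] }
    )
    @http_requests_total.increment(
      labels: {
        code: status.to_s,
        path: env["PATH_INFO"],
        method: env["REQUEST_METHOD"]
      }
    )

    [status, headers, response]
  ensure
    @active_requests_gauge.decrement
  end
end
```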
The latency_histogram metric is created to track the duration of each request
to the server. With such a metric, you can:
- Track response time distributions
- Calculate percentiles (like p95, p99)
- Identify slow endpoints
- Monitor performance trends over time
Before a request is processed, the middleware records the start time. After the request completes, the middleware calculates the total duration and records it in the histogram.
After saving the file and restarting the application, make several requests to
see the histogram data in the /metrics endpoint:
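Illustrative output for the histogram above; only a few bucket lines are shown, and your counts and durations will differ:

```text
# TYPE http_request_duration_seconds histogram
# HELP http_request_duration_seconds Duration of HTTP requests in seconds
http_request_duration_seconds_bucket{path="/",method="GET",le="0.01"} 1
http_request_duration_seconds_bucket{path="/",method="GET",le="0.025"} 4
http_request_duration_seconds_bucket{path="/",method="GET",le="0.05"} 6
http_request_duration_seconds_bucket{path="/",method="GET",le="+Inf"} 6
http_request_duration_seconds_sum{path="/",method="GET"} 0.137
http_request_duration_seconds_count{path="/",method="GET"} 6
```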
Let's understand what this output means:
- Each _bucket line represents the number of requests that took less than or equal to a specific duration. For example, le="0.025"} 4 means four requests completed within 25 milliseconds.
- The _sum value is the total of all observed durations.
- The _count value is the total number of observations.
The histogram uses default buckets (in seconds), but you can specify custom ones:
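For example, by passing an explicit buckets: list when creating the histogram (the boundaries below are arbitrary; pick ones that bracket your expected latencies):

```ruby
@latency_histogram = registry.histogram(
  :http_request_duration_seconds,
  docstring: "Duration of HTTP requests in seconds",
  labels: [:path, :method],
  buckets: [0.01, 0.05, 0.1, 0.25, 0.5, 1.0, 2.5]
)
```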
The real power of histograms comes when analyzing them in Prometheus. For example, to calculate the 99th percentile latency over a 1-minute window you can use:
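Assuming the metric name from the middleware sketch above:

```promql
histogram_quantile(0.99, sum(rate(http_request_duration_seconds_bucket[1m])) by (le))
```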
This query will show you the response time that 99% of requests fall under, which is more useful than averages for understanding real user experience.
Step 6 — Instrumenting a Summary metric
A Summary metric in Prometheus is useful for capturing pre-aggregated quantiles, such as the median, 95th percentile, or 99th percentile, while also providing overall counts and sums for observed values.
Let's create a new controller for an external API request and add a Summary metric:
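A sketch of such a controller; the controller name, the metric name, and the jsonplaceholder.typicode.com URL are all stand-ins for whatever external API you call:

```ruby
# app/controllers/posts_controller.rb
require "net/http"
require "prometheus/client"

class PostsController < ApplicationController
  def index
    start_time = Process.clock_gettime(Process::CLOCK_MONOTONIC)

    # Hypothetical external API used purely to generate realistic latency.
    response = Net::HTTP.get_response(URI("https://jsonplaceholder.typicode.com/posts"))

    duration = Process.clock_gettime(Process::CLOCK_MONOTONIC) - start_time
    posts_latency_summary.observe(duration)

    render json: response.body
  end

  private

  # Looks up the Summary if it already exists so code reloading in
  # development does not try to register it twice.
  def posts_latency_summary
    registry = Prometheus::Client.registry
    registry.get(:posts_request_duration_seconds) ||
      registry.summary(
        :posts_request_duration_seconds,
        docstring: "Duration of requests to the external posts API in seconds"
      )
  end
end
```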
Update your routes to include this controller:
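For example:

```ruby
# config/routes.rb
Rails.application.routes.draw do
  root "home#index"
  get "/metrics", to: "home#metrics"
  get "/posts", to: "posts#index"
end
```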
The posts_latency_summary metric tracks the duration of requests to an
external API. In the /posts endpoint, the start time of the request is
recorded before sending a GET request to the API.
Once the request completes, the duration is calculated and recorded in the
Summary metric using posts_latency_summary.observe(duration).
After restarting the application, make several requests to the /posts endpoint
to generate latency data:
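For example, with a simple curl loop:

```bash
for i in $(seq 1 20); do curl -s http://localhost:3000/posts > /dev/null; done
```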
The metrics endpoint will show output like:
The median request time is about 341 milliseconds (0.341 seconds), 90% of requests complete within 355 milliseconds (0.355 seconds), and 99% complete within 498 milliseconds (0.498 seconds).
Final thoughts
In this tutorial, we explored setting up and using Prometheus metrics in a Ruby on Rails application.
We covered how to define and register different types of metrics - counters for tracking cumulative values, gauges for fluctuating measurements, histograms for understanding value distributions, and summaries for calculating client-side quantiles.
To build on this foundation, you might want to:
- Set up Prometheus Alertmanager to create alerts based on your metrics
- Connect your metrics to Grafana or Better Stack for powerful visualization and dashboarding
- Explore PromQL to write more sophisticated queries for analysis
Thanks for reading, and happy monitoring!