Visualizing Better Stack logs and metrics in Grafana

Better Stack stores all your telemetry data in a ClickHouse-powered warehouse you can query from outside the platform using plain SQL. That means any tool that speaks to a ClickHouse HTTP endpoint can become a front end for your logs and metrics — including Grafana.

This guide walks through connecting Grafana to Better Stack via the SQL API, defining a user-defined metric from your logs, and building three panels: a metrics-backed time-series chart, a request volume chart from wide events, and a log table.

Setting up a source

If you do not have a source yet, follow the quick start guide to create one and start sending logs.

Important: The data region you pick when creating a source determines which cluster it lands on — for example, selecting Europe may assign the source to eu-fsn-3. When you create a SQL API connection later, you must select the same cluster. Check the Data region section on your source's settings page after creation to confirm the exact cluster name.

How Better Stack stores your data

Better Stack uses two storage tiers. Understanding both matters for choosing which table to query and how fast results will be.

Wide events are every structured log line or OpenTelemetry span your application ships to Better Stack — arbitrary JSON stored in object storage. They are great for ad-hoc filtering and exploration but cost more compute at query time because they scan raw files from cloud storage.

Metrics are a separate, highly compressed representation stored on local NVMe SSDs. Better Stack populates them either by ingesting Prometheus metrics directly or by defining logs-to-metrics expressions that extract numeric attributes from your wide events in real time.

See Wide events vs. metrics for a deeper comparison.
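To make the difference concrete, here is a sketch of the same question ("average value per minute of a hypothetical duration attribute") answered from each tier. The table names are placeholders in the pattern used throughout this guide, and the duration attribute is assumed to have been promoted to a metric:

```sql
-- Metrics tier: reads pre-aggregated states from NVMe (fast, long ranges)
SELECT toStartOfMinute(dt) AS time, avgMerge(value_avg) AS avg_duration
FROM remote(t123456_your_source_metrics)
WHERE name = 'duration'
GROUP BY time;

-- Wide-events tier: extracts the attribute from raw JSON at query time (flexible, slower)
SELECT toStartOfMinute(dt) AS time,
       avg(JSONExtract(raw, 'duration', 'Nullable(Float64)')) AS avg_duration
FROM remote(t123456_your_source_logs)
GROUP BY time;
```

The first query only works for attributes you have defined as metrics; the second works for any JSON field but scans raw events.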

Defining a user-defined metric

Say your application logs contain a duration field on every request:

 
{"level":"info","message":"request handled","duration":143,"route":"/api/orders","status":200}
  1. Go to Sources → your source and click the Extract metrics tab.
  2. Click Create metric and fill in:
    • Metric: duration
    • JSON dot notation: duration (Better Stack generates the extraction expression automatically)
    • Aggregation functions: Avg, Min, Max
  3. Click Create metric.
  4. When prompted, choose how Better Stack should process your existing data:
    • Apply changes to new data only (immediate): the metric starts populating right away from new logs only.
    • Reprocess historical logs: Better Stack backfills the metric from all existing logs, which can take an hour or more depending on data volume.

Defining a user-defined metric in Better Stack

Note Unless you chose to reprocess historical logs, user-defined metrics only capture data from the point of definition onwards: logs sent before you created the metric will not appear in the metrics table. After saving, send a fresh batch of logs before querying.

Once defined, any query against remote(t123456_your_source_metrics) resolves that metric in milliseconds because it reads from compressed columnar data on NVMe rather than scanning object storage.

The practical implication for dashboarding: use the metrics table for time-series charts that refresh every 30 seconds or cover a 90-day window. Use the wide events table for log tables, one-off investigations, or attributes you have not yet promoted to a metric.

Generating SQL API credentials

Better Stack exposes a read-only ClickHouse HTTP endpoint you configure once and reference in Grafana. For full details, see the SQL API documentation.

  1. In the Better Stack sidebar, go to Integrations and open the SQL API tab.
  2. Click Connect next to ClickHouse HTTP client.
  3. Select the Cluster that matches your source's cluster — visible in the source's Data region section (e.g. eu-fsn-3).

Common mistake: If your source is on a different cluster than the SQL API connection, queries return no results and the sample queries list appears empty. Always confirm the cluster on your source's settings page before creating the connection.

  4. Select which teams this connection should have access to.
  5. Optionally add your outbound IP address to the IP allowlist to restrict access. Leave it blank to allow connections from any IP.
  6. Click Create connection.
  7. The success banner shows a curl command and three values. Copy all of them now, because the password is hidden permanently once you navigate away:
    • Host (e.g. eu-fsn-3-connect.betterstackdata.com:443)
    • Username
    • Password

SQL API connection credentials in Better Stack

You can find your source identifiers by clicking Click here to show sample queries on the same page after your connection is created.

Note The sample queries link only appears after logs have started flowing into the source. If you see "No sample queries found", wait a few minutes and refresh. Identifiers follow the pattern t{team_id}_{source_id} — for example t208902_my_app becomes remote(t208902_my_app_logs) for hot storage queries.

Connecting Grafana to Better Stack

Installing the ClickHouse plugin

In your Grafana instance, go to Administration → Plugins and data → Plugins, search for ClickHouse, select the Grafana ClickHouse data source plugin, and click Install.

Screenshot of grafana plugins

For self-hosted Grafana you can also use the CLI:

terminal
sudo grafana cli plugins install grafana-clickhouse-datasource
sudo systemctl restart grafana-server

Adding the data source

  1. Go to Connections → Data sources and click Add new data source.
  2. Search for and select ClickHouse.
  3. Fill in the Server section:
| Field | Value | Note |
| --- | --- | --- |
| Server address | Your cluster hostname | From the Better Stack success banner; hostname only, no port (e.g. eu-fsn-3-connect.betterstackdata.com) |
| Server port | 443 | |
| Protocol | HTTP | Default is Native; you must switch this to HTTP |
| Secure connection | Enabled | Off by default; toggle this on |
  4. Scroll down to Credentials and enter your username and password from the Better Stack success banner.
  5. Leave Default database blank.
  6. Click Save & test. You should see Data source is working.

Grafana ClickHouse data source configured for Better Stack

Troubleshooting connection errors

| Error | Cause | Fix |
| --- | --- | --- |
| context deadline exceeded | Protocol is still set to Native | Switch Protocol to HTTP and enable Secure connection |
| failed to create ClickHouse client | Protocol/Secure mismatch or wrong password | Verify Protocol is HTTP and Secure is on; re-enter the password via the Reset button |
| Host not in allowlist | You added specific IPs to the allowlist but are connecting from a different IP | Add your current outbound IP (curl https://ifconfig.me) to the connection's allowlist, or leave it blank to allow all IPs |
| Authentication failed | Credentials from a different connection | Create a fresh connection and copy all three values at the same time from the success banner |

Building your first panel: average request duration

This time-series chart reads from the pre-aggregated metrics table. Because the aggregation states are computed at ingest time, this query is fast even over long time ranges.

Before opening Grafana, confirm that the metrics table has data by running:

terminal
curl -u YOUR_USERNAME:YOUR_PASSWORD \
  -H 'Content-Type: text/plain' \
  -X POST 'https://YOUR_CLUSTER-connect.betterstackdata.com' \
  -d "SELECT name, count() FROM remote(t123456_your_source_metrics) GROUP BY name FORMAT Pretty"

Replace YOUR_USERNAME, YOUR_PASSWORD, YOUR_CLUSTER, and t123456_your_source with your actual SQL API credentials and source identifier.

You should see your metric name listed with a non-zero count. For example, if you defined a metric named duration you would see:

Output
   ┃ name     ┃ count() ┃
   ┡━━━━━━━━━━╇━━━━━━━━━┩
1. │ ᴺᵁᴸᴸ     │     439 │
   ├──────────┼─────────┤
2. │ duration │     436 │
   └──────────┴─────────┘

If you only see NULL rows, the metric was created but no logs have been sent since — send a fresh batch and wait a minute before retrying.

Note NULL name rows are not errors. They represent events that arrived before the metric was defined and have no extracted value. Only named rows (e.g. duration) contain usable aggregation states.
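For dashboard queries you can exclude these placeholder rows up front. A minimal sketch, using the same placeholder identifier as above:

```sql
SELECT name, count()
FROM remote(t123456_your_source_metrics)
WHERE name IS NOT NULL
GROUP BY name
```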

  1. Create a new dashboard and add a panel.
  2. Select your Better Stack data source.
  3. Set Query Type to Table.
  4. Switch to the SQL editor and paste:
Grafana SQL editor
SELECT
  toStartOfMinute(dt) AS time,
  avgMerge(value_avg)  AS avg_duration,
  minMerge(value_min)  AS min_duration,
  maxMerge(value_max)  AS max_duration
FROM remote(t123456_your_source_metrics)
WHERE name = 'duration'
  AND dt >= $__fromTime
  AND dt <  $__toTime
GROUP BY time
ORDER BY time ASC

Replace t123456_your_source with your actual source identifier.

  5. Under Visualization, choose Time series.

The $__fromTime and $__toTime macros are injected by the Grafana ClickHouse plugin and map to the dashboard time range picker. The -Merge combinators (avgMerge, minMerge, maxMerge) resolve the partial aggregation states stored in the metrics table; use them whenever you query the metrics table.

Note Always set Query Type to Table when querying Better Stack through the ClickHouse plugin. The Time series query type applies extra processing that conflicts with ClickHouse's aggregate merge functions and may return no data.

Note If you see Unknown expression or function identifier '$__fromTime', your plugin version may use a different macro name. Try toDateTime($__from / 1000) and toDateTime($__to / 1000) as an alternative.
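With those fallback macros, the time filter from the panel query would read as follows ($__from and $__to are millisecond epoch values supplied by Grafana):

```sql
WHERE name = 'duration'
  AND dt >= toDateTime($__from / 1000)
  AND dt <  toDateTime($__to / 1000)
```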

You will get three lines — average, minimum, and maximum duration — in a single panel backed by sub-second queries.

Average request duration panel in Grafana showing avg, min, and max lines

Building a second panel: request volume by status code

Not every signal warrants a permanently defined metric. This chart counts requests per minute directly from the raw logs table — the wide events tier. It is slower for long windows but useful for attributes you are still exploring.

  1. Add another panel and set Query Type to Table.
  2. Paste:
Grafana SQL editor — full history
SELECT
  toStartOfMinute(dt)                                AS time,
  JSONExtract(raw, 'status', 'Nullable(UInt16)')     AS status_code,
  count(*)                                           AS requests
FROM (
  SELECT dt, raw
FROM remote(t123456_your_source_logs)
  UNION ALL
  SELECT dt, raw
  FROM s3Cluster(primary, t123456_your_source_s3)
  WHERE _row_type = 1
)
WHERE dt >= $__fromTime
  AND dt <  $__toTime
  AND status_code IS NOT NULL
GROUP BY time, status_code
ORDER BY time ASC

The UNION ALL combines hot storage (remote(...)_logs) for recent data with cold S3 storage for historical records. For time ranges under 24 hours, drop the s3Cluster union to keep latency low:

Grafana SQL editor — recent data only
SELECT
  toStartOfMinute(dt)                                AS time,
  JSONExtract(raw, 'status', 'Nullable(UInt16)')     AS status_code,
  count(*)                                           AS requests
FROM remote(t123456_your_source_logs)
WHERE dt >= $__fromTime
  AND dt <  $__toTime
GROUP BY time, status_code
ORDER BY time ASC
  3. Set the visualization to Time series. Grafana splits the results into separate lines per status code automatically. Set Stack series to Normal for a stacked area chart.

Screenshot of the second panel
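If you would rather chart a single error-rate series than one line per status code, a variant of the same wide-events query works; this is a sketch with the same placeholder identifier, counting responses with a 5xx status:

```sql
SELECT
  toStartOfMinute(dt) AS time,
  countIf(JSONExtract(raw, 'status', 'Nullable(UInt16)') >= 500) / count(*) AS error_rate
FROM remote(t123456_your_source_logs)
WHERE dt >= $__fromTime
  AND dt <  $__toTime
GROUP BY time
ORDER BY time ASC
```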

Note The SQL API supports up to 4 concurrent queries for logs and 20 for metrics on the Standard plan. Always include a LIMIT and a tight WHERE dt range to avoid hitting these limits. See Query limits for details.

Building a log table

A table panel is the right choice for surfacing individual log entries alongside parsed fields.

  1. Add a third panel and set the visualization to Table.
  2. Set Query Type to Table.
  3. Paste:
Grafana SQL editor
SELECT
  dt                                                      AS time,
  JSONExtract(raw, 'level',    'Nullable(String)')        AS level,
  JSONExtract(raw, 'route',    'Nullable(String)')        AS route,
  JSONExtract(raw, 'status',   'Nullable(UInt16)')        AS status,
  JSONExtract(raw, 'duration', 'Nullable(Float64)')       AS duration_ms,
  JSONExtract(raw, 'message',  'Nullable(String)')        AS message
FROM remote(t123456_your_source_logs)
WHERE dt >= $__fromTime
  AND dt <  $__toTime
  AND JSONExtract(raw, 'level', 'Nullable(String)') IN ('error', 'warn')
ORDER BY dt DESC
LIMIT 200

In the Transformations tab, add Organize fields by name to rename and reorder columns — for example, changing duration_ms to Duration (ms). Add Filter data by values to let users drill down by level or route without touching the SQL.

Log table panel in Grafana

The LIMIT 200 guard prevents the panel from hitting concurrent query limits. For longer time windows, tighten the WHERE clause or use S3 sampling:

 
FROM s3Cluster(primary, t123456_your_source_s3, sampling = 0.1)
WHERE _row_type = 1

A sampling factor of 0.1 reads 10% of stored S3 files — usually enough to surface representative error patterns without scanning the full dataset.
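Put together, a sampled variant of the log table query might look like this; the sampling parameter and identifiers follow the Better Stack examples above, and the error-only filter is an illustrative choice:

```sql
SELECT
  dt AS time,
  JSONExtract(raw, 'level',   'Nullable(String)') AS level,
  JSONExtract(raw, 'message', 'Nullable(String)') AS message
FROM s3Cluster(primary, t123456_your_source_s3, sampling = 0.1)
WHERE _row_type = 1
  AND dt >= $__fromTime
  AND dt <  $__toTime
  AND JSONExtract(raw, 'level', 'Nullable(String)') = 'error'
ORDER BY dt DESC
LIMIT 200
```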

Arrange the three panels on the dashboard canvas like this:

Completed Better Stack Grafana dashboard with duration chart, status code chart, and log table