Explore logs with SQL
Quickly explore and visualize your logs to debug issues and discover patterns using Log SQL.
Want a dashboard with multiple charts and alerts?
Use Dashboards instead of Explore logs to get:
- Dashboards: Put multiple charts on a dashboard to get an overview of your apps and services.
- Faster charts: Dashboards use Logs to metrics to load weeks of your logs in seconds.
- Alerts: Set up anomaly alerts for errors and latencies of your apps and CPU and memory of your servers.
Getting started
Start exploring your logs in Explore logs → Create query. Select a source. Use the following Log SQL expression:
```sql
SELECT {{time}} AS time,
  count(*) AS value,
  JSONExtract(json, 'level', 'Nullable(String)') AS series
FROM {{source}}
WHERE time BETWEEN {{start_time}} AND {{end_time}}
GROUP BY time, series
```
Log SQL queries use ClickHouse SQL, which is largely similar to the ANSI SQL you're likely familiar with. In the query above, we leverage the `JSONExtract()` ClickHouse function and use `count(*)` to get the number of logs for each `level`.
Extracting log fields
In Explore logs, you can access any field in your logs. Extract top-level or nested fields using the `JSONExtract()` function:
- Top-level field `level`: `JSONExtract(json, 'level', 'Nullable(String)')`
- Nested field `context.request.status`: `JSONExtract(json, 'context', 'request', 'status', 'Nullable(String)')`
The last argument of `JSONExtract()` is a ClickHouse type. Use `'Nullable(String)'` to extract fields as strings. Wrapping `String` in `Nullable()` prevents issues with missing values.
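The type argument isn't limited to strings. As a sketch, assuming your logs carry a numeric field such as `context.request.duration_ms` (a hypothetical field name), you could extract it as a nullable number and chart its average:

```sql
SELECT {{time}} AS time,
  -- duration_ms is a hypothetical field; substitute one from your own logs
  avg(JSONExtract(json, 'context', 'request', 'duration_ms', 'Nullable(Float64)')) AS value
FROM {{source}}
WHERE time BETWEEN {{start_time}} AND {{end_time}}
GROUP BY time
```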
Query variables
On top of SQL, Log SQL queries feature variables like `{{source}}` and `{{time}}`. Variables conveniently insert the selected source, current time range, and other selected values into your queries. Read more about query variables.
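To illustrate, the getting-started query above could resolve to ordinary ClickHouse SQL along these lines. This is a hypothetical expansion: the actual table name, time column, and bucketing depend on your source and the interval you select.

```sql
-- Hypothetical expansion, assuming a table named logs with a dt time column
-- and a 1-minute chart interval selected in the UI
SELECT toStartOfInterval(dt, INTERVAL 1 MINUTE) AS time,
  count(*) AS value,
  JSONExtract(json, 'level', 'Nullable(String)') AS series
FROM logs
WHERE dt BETWEEN now() - INTERVAL 1 HOUR AND now()
GROUP BY time, series
```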
Explore logs with examples
Chart number of logs by host
Chart how many logs your servers produce:
```sql
SELECT {{time}} AS time,
  count(*) AS value,
  JSONExtract(json, 'context', 'hostname', 'Nullable(String)') AS series
FROM {{source}}
WHERE time BETWEEN {{start_time}} AND {{end_time}}
GROUP BY time, series
```
Want to use the extracted field in dashboards?
Add a metric in Logs to metrics for your source:
- Metric: Name your metric `hostname`.
- JSON dot notation: Write the JSON path `context.hostname`.
- Type: Select String.
- Aggregations: Leave No aggregation selected.
Once added, you can use the `hostname` column on Dashboards.
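For instance, a dashboard chart could then group by the metric column directly instead of parsing JSON. A sketch, assuming the column is exposed as `hostname` on the metrics source:

```sql
SELECT {{time}} AS time,
  count(*) AS value,
  hostname AS series
FROM {{source}}
WHERE time BETWEEN {{start_time}} AND {{end_time}}
GROUP BY time, series
```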
Track a specific error in your logs
Match specific errors in the logs' `message` field using `CASE` and `LIKE`. The `LIKE` operator performs a case-sensitive search:
```sql
SELECT {{time}} AS time,
  SUM(CASE WHEN
        JSONExtract(json, 'message', 'Nullable(String)')
        LIKE '%connection lost%'
      THEN TRUE ELSE FALSE
      END) AS value
FROM {{source}}
WHERE time BETWEEN {{start_time}} AND {{end_time}}
GROUP BY time
```
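If you also want to catch variants like `Connection Lost`, ClickHouse's `ILIKE` operator performs the same match case-insensitively:

```sql
SELECT {{time}} AS time,
  SUM(CASE WHEN
        JSONExtract(json, 'message', 'Nullable(String)')
        -- ILIKE matches regardless of letter case
        ILIKE '%connection lost%'
      THEN TRUE ELSE FALSE
      END) AS value
FROM {{source}}
WHERE time BETWEEN {{start_time}} AND {{end_time}}
GROUP BY time
```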
Want to create an alert for the matched error?
Add a metric in Logs to metrics for your source:
- Metric: Name your metric `is_connection_lost_error`.
- SQL expression: Use the `CASE` expression, i.e. everything starting with `CASE` and ending with `END`, including the `CASE` and `END` keywords.
- Type: Select Boolean.
- Aggregations: Leave No aggregation selected.
To create the alert:
- Go to Dashboards → Create dashboard.
- Create a chart with the `is_connection_lost_error = 1` filter.
- Add an alert in the right sidebar.
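The chart in step 2 could, for example, count matched errors per time bucket. A sketch, assuming the metric column is exposed as `is_connection_lost_error`:

```sql
SELECT {{time}} AS time,
  count(*) AS value
FROM {{source}}
WHERE time BETWEEN {{start_time}} AND {{end_time}}
  -- keep only rows where the Boolean metric matched
  AND is_connection_lost_error = 1
GROUP BY time
```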