Query your logs & metrics outside of the Better Stack dashboard with our HTTP API.
You can connect from Grafana, the ClickHouse client over HTTP, or any other HTTP client.
Getting started
Create a username and password by navigating to Dashboards → Connect remotely.
Click Connect ClickHouse HTTP client.
Follow the instructions in the form and click Create connection.
Copy the password shown in the flash message and store it securely. You won't be able to access the password again.
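To verify the credentials, you can send a trivial query right away. A minimal sketch, assuming the connection endpoint used in the examples below accepts any read-only query:
curl -u $USERNAME:$PASSWORD \
  -H 'Content-Type: text/plain' \
  -X POST 'https://eu-nbg-2-connect.betterstackdata.com' \
  -d "SELECT 1 FORMAT TSV"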
Basic usage
Query logs with curl
Fetch your recent logs using a simple curl command:
Querying logs using curl
curl -u $USERNAME:$PASSWORD \
  -H 'Content-Type: text/plain' \
  -X POST 'https://eu-nbg-2-connect.betterstackdata.com?output_format_pretty_row_numbers=0' \
  -d "SELECT dt, raw FROM (
        SELECT dt, raw FROM remote(t123456_your_source_logs)
        UNION ALL
        SELECT dt, raw FROM s3Cluster(primary, t123456_your_source_s3)
      ) ORDER BY dt DESC LIMIT 100 FORMAT JSONEachRow"
Replace $USERNAME:$PASSWORD with your connection credentials, and t123456_your_source with your actual team ID and textual source ID, which you can find in Sources.
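Since JSONEachRow returns one JSON object per line, the output is easy to post-process. For example, assuming jq is available, you can pull out just the raw log lines:
curl -s -u $USERNAME:$PASSWORD \
  -H 'Content-Type: text/plain' \
  -X POST 'https://eu-nbg-2-connect.betterstackdata.com' \
  -d "SELECT dt, raw FROM remote(t123456_your_source_logs) ORDER BY dt DESC LIMIT 10 FORMAT JSONEachRow" \
  | jq -r '.raw'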
Query metrics with curl
Get hourly event counts from your metrics:
Querying metrics using curl
curl -u $USERNAME:$PASSWORD \
  -H 'Content-Type: text/plain' \
  -X POST 'https://eu-nbg-2-connect.betterstackdata.com?output_format_pretty_row_numbers=0' \
  -d "SELECT toStartOfHour(dt) AS time, countMerge(events_count)
      FROM remote(t123456_your_source_metrics)
      GROUP BY time
      ORDER BY time DESC
      LIMIT 24
      FORMAT Pretty"
Again, replace $USERNAME:$PASSWORD and t123456_your_source with your own credentials and source identifier.
Data sources explained
The Connection API provides access to two types of log storage for each source, plus a separate metrics table:
Recent logs: remote(t123456_your_source_logs)
Fast access to recently ingested logs.
Historical logs: s3Cluster(primary, t123456_your_source_s3)
Long-term storage for older logs.
Metrics: remote(t123456_your_source_metrics)
Aggregated metrics data, similar to Dashboards.
Use UNION ALL to combine recent and historical logs for complete results.
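If you want to see which store each row came from, you can tag the two branches of the UNION ALL. A minimal sketch following the same pattern as the queries above:
SELECT dt, raw, store
FROM (
    SELECT dt, raw, 'recent' AS store FROM remote(t123456_your_source_logs)
    UNION ALL
    SELECT dt, raw, 'historical' AS store FROM s3Cluster(primary, t123456_your_source_s3)
)
ORDER BY dt DESC
LIMIT 100
FORMAT JSONEachRow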
Filtering and searching logs
Search by log content
Filter logs containing specific text or fields:
Accessing all logs containing "My text"
SELECT dt, raw
FROM (
    SELECT dt, raw
    FROM remote(t123456_your_source_logs)
    UNION ALL
    SELECT dt, raw
    FROM s3Cluster(primary, t123456_your_source_s3)
)
WHERE raw LIKE '%My text%'
ORDER BY dt ASC
LIMIT 5000
FORMAT JSONEachRow
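LIKE is case-sensitive. For a case-insensitive match, ClickHouse also supports ILIKE; a minimal variation over recent logs:
SELECT dt, raw
FROM remote(t123456_your_source_logs)
WHERE raw ILIKE '%my text%'
ORDER BY dt ASC
LIMIT 5000
FORMAT JSONEachRow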
Extract JSON fields
Access nested JSON fields in your logs:
Accessing recent error logs
SELECT
    dt,
    getJSON(raw, 'level') AS severity,
    getJSON(raw, 'message') AS message,
    getJSON(raw, 'context.hostname') AS hostname
FROM remote(t123456_your_source_logs)
WHERE severity = 'ERROR'
LIMIT 100
FORMAT JSONEachRow
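Using the same getJSON helper, here is a quick sketch that counts recent logs per severity level, assuming your logs carry a level field:
SELECT
    getJSON(raw, 'level') AS severity,
    count() AS logs
FROM remote(t123456_your_source_logs)
GROUP BY severity
ORDER BY logs DESC
FORMAT Pretty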
Optimizing queries for large datasets
Use S3 glob interpolation
For better performance with large historical datasets, use S3 glob interpolation: add the query parameters below to the request URL and pass the filename option to s3Cluster:
curl -u $USERNAME:$PASSWORD \
  -H 'Content-Type: text/plain' \
  -X POST 'https://eu-nbg-2-connect.betterstackdata.com?output_format_pretty_row_numbers=0&table=t123456.your_source&range-from=1748449431000000&range-to=1748535831000000' \
  -d "SELECT raw, dt
      FROM s3Cluster(primary, t123456_your_source_s3, filename='{{_s3_glob_interpolate}}')
      WHERE _row_type = 1
        AND dt BETWEEN toDateTime64(1748449431, 0, 'UTC') AND toDateTime64(1748535831, 0, 'UTC')
      ORDER BY dt ASC
      LIMIT 1000
      FORMAT JSONEachRow"
Query parameters:
table - Identifies a single source.
Use the dotted format t123456.your_source, without the _s3 suffix.
range-from and range-to - The time interval to scan.
Use Unix timestamps in microseconds, and keep the matching WHERE condition on dt (see the sketch below for computing these values).
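The range values are plain Unix epoch seconds multiplied by 1,000,000. A minimal shell sketch that computes the last 24 hours:
# Unix timestamps in microseconds for "24 hours ago" and "now"
RANGE_TO=$(( $(date +%s) * 1000000 ))
RANGE_FROM=$(( RANGE_TO - 24 * 3600 * 1000000 ))
echo "range-from=$RANGE_FROM&range-to=$RANGE_TO"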
Pagination
For large result sets, paginate using time-based ordering:
Using LIMIT and ORDER BY for pagination
SELECT raw, dt
FROM s3Cluster(primary, t123456_your_source_s3)
WHERE _row_type = 1
AND dt BETWEEN toDateTime64(1748449431, 0, 'UTC') AND toDateTime64(1748535831, 0, 'UTC')
ORDER BY dt ASC
LIMIT 1000
FORMAT JSONEachRow
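To fetch the following page, move the lower bound of the time range up to the dt of the last row from the previous page. A sketch, where 1748500000 stands in for that last timestamp:
SELECT raw, dt
FROM s3Cluster(primary, t123456_your_source_s3)
WHERE _row_type = 1
  -- lower bound = dt of the last row already received
  AND dt > toDateTime64(1748500000, 0, 'UTC')
  AND dt <= toDateTime64(1748535831, 0, 'UTC')
ORDER BY dt ASC
LIMIT 1000
FORMAT JSONEachRow
If several rows can share the same timestamp, use dt >= instead and de-duplicate the overlap on the client side.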
Common issues and solutions
Memory limit exceeded
If you encounter MEMORY_LIMIT_EXCEEDED errors:
Use shorter time ranges: Limit your queries to smaller time windows.
Add specific filters: Filter by dt, source, or other fields early in your query.
Limit result size: Use LIMIT or the max_result_rows setting, as in the sketch below.
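A minimal sketch combining these tips: a one-hour window, an early text filter, and a hard cap on the result size (max_result_rows and result_overflow_mode are standard ClickHouse settings):
SELECT dt, raw
FROM remote(t123456_your_source_logs)
WHERE raw LIKE '%error%'
  AND dt > now() - INTERVAL 1 HOUR
ORDER BY dt DESC
LIMIT 1000
SETTINGS max_result_rows = 1000, result_overflow_mode = 'break'
FORMAT JSONEachRow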
Output formats
JSONEachRow - One JSON object per line, best for programmatic access.
Pretty - Human-readable table format.
CSV - Comma-separated values.
TSV - Tab-separated values.
Specify the format at the end of your SQL query: FORMAT JSONEachRow
Missing the brackets around your data when using JSONEachRow?
Add SETTINGS output_format_json_array_of_rows = 1 in front of FORMAT JSONEachRow to wrap the rows in [ and ] when you need valid JSON output.
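For example, appending the setting to the recent-logs query from above (a sketch):
SELECT dt, raw
FROM remote(t123456_your_source_logs)
ORDER BY dt DESC
LIMIT 100
SETTINGS output_format_json_array_of_rows = 1
FORMAT JSONEachRow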