Prefer using APIs?
You can manage your ClickHouse connections using Connection API.
Query your logs & metrics outside of the Better Stack dashboard with our read-only HTTP API.
You can connect from Grafana, the ClickHouse client via HTTP or any other HTTP client.
Create a username and password by navigating to Dashboards → Connect remotely in the side menu.
Click Connect ClickHouse HTTP client.
Follow the instructions in the form and click Create connection.
Copy the password shown in the flash message and store it securely. You won't be able to access the password again.
Fetch your data using a simple curl command:
curl -u $USERNAME:$PASSWORD \
-H 'Content-Type: text/plain' \
-X POST 'https://eu-nbg-2-connect.betterstackdata.com?output_format_pretty_row_numbers=0' \
-d "SELECT dt, raw FROM (
SELECT dt, raw FROM remote(t123456_your_source_logs)
UNION ALL
SELECT dt, raw FROM s3Cluster(primary, t123456_your_source_s3)
WHERE _row_type = 1
) ORDER BY dt DESC LIMIT 100 FORMAT JSONEachRow"
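The same request can be made programmatically. Below is a minimal Python sketch using only the standard library; the endpoint URL and query are taken from the curl example above, and `parse_json_each_row` handles the JSONEachRow output shape (one JSON object per line).

```python
import base64
import json
import urllib.request

# Endpoint from the curl example above.
ENDPOINT = "https://eu-nbg-2-connect.betterstackdata.com?output_format_pretty_row_numbers=0"

def parse_json_each_row(body: str) -> list[dict]:
    """JSONEachRow is one JSON object per line; skip blank lines."""
    return [json.loads(line) for line in body.splitlines() if line.strip()]

def query(sql: str, username: str, password: str) -> list[dict]:
    """POST a SQL query with HTTP basic auth, mirroring the curl example."""
    token = base64.b64encode(f"{username}:{password}".encode()).decode()
    req = urllib.request.Request(
        ENDPOINT,
        data=sql.encode(),
        headers={"Authorization": f"Basic {token}", "Content-Type": "text/plain"},
    )
    with urllib.request.urlopen(req) as resp:
        return parse_json_each_row(resp.read().decode())

# Parsing works the same on any JSONEachRow payload:
sample = '{"dt":"2025-01-01 00:00:00","raw":"hello"}\n{"dt":"2025-01-01 00:00:01","raw":"world"}\n'
rows = parse_json_each_row(sample)
print(len(rows), rows[0]["raw"])  # 2 hello
```

Call `query(...)` with your connection credentials to run any of the SQL examples on this page.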
curl -u $USERNAME:$PASSWORD \
-H 'Content-Type: text/plain' \
-X POST 'https://eu-nbg-2-connect.betterstackdata.com?output_format_pretty_row_numbers=0' \
-d "SELECT dt, raw FROM (
SELECT dt, raw FROM remote(t123456_your_source_spans)
UNION ALL
SELECT dt, raw FROM s3Cluster(primary, t123456_your_source_s3)
WHERE _row_type = 3
) ORDER BY dt DESC LIMIT 100 FORMAT JSONEachRow"
curl -u $USERNAME:$PASSWORD \
-H 'Content-Type: text/plain' \
-X POST 'https://eu-nbg-2-connect.betterstackdata.com?output_format_pretty_row_numbers=0' \
-d "SELECT toStartOfHour(dt) AS time, countMerge(events_count)
FROM remote(t123456_your_source_metrics)
GROUP BY time
ORDER BY time DESC
LIMIT 24
FORMAT Pretty"
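The metrics query above buckets rows by hour with ClickHouse's toStartOfHour. If you post-process results client-side, the same bucketing can be reproduced in Python; this is a sketch assuming Unix-epoch-second timestamps, not part of the API itself.

```python
from datetime import datetime, timezone

def to_start_of_hour(epoch_seconds: int) -> datetime:
    """Truncate a Unix timestamp to the start of its hour in UTC,
    mirroring ClickHouse's toStartOfHour(dt)."""
    dt = datetime.fromtimestamp(epoch_seconds, tz=timezone.utc)
    return dt.replace(minute=0, second=0, microsecond=0)

print(to_start_of_hour(1748449431).isoformat())
```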
Replace $USERNAME:$PASSWORD with your connection credentials, and replace t123456_your_source with one of your available sources, which you can find in Connections:
The Query API provides access to three types of telemetry data for each source: logs, spans, and metrics. This data is split between hot storage for recent data and cold storage for historical data.
Use remote(t123456_your_source_logs) for fast access to recent logs.
For historical data, use s3Cluster(primary, t123456_your_source_s3) and filter with WHERE _row_type = 1.
Use UNION ALL to combine recent and historical data for complete results.
Use remote(t123456_your_source_spans) for fast access to recent spans.
For historical data, use s3Cluster(primary, t123456_your_source_s3) and filter with WHERE _row_type = 3.
Use remote(t123456_your_source_metrics) for aggregated metrics data, similar to Dashboards.
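The hot/cold split above follows a fixed pattern, so the UNION ALL query can be templated. A sketch of a query builder, assuming the placeholder table names used throughout this page (the `KINDS` mapping and function name are illustrative, not part of the API):

```python
# Hot-table suffix and _row_type for each telemetry kind, as described above.
# Metrics live in remote(..._metrics) only, so they are not included here.
KINDS = {"logs": ("logs", 1), "spans": ("spans", 3)}

def build_union_query(source: str, kind: str, limit: int = 100) -> str:
    """Build a query combining recent (remote) and historical (s3Cluster)
    rows for one source, e.g. source="t123456_your_source"."""
    suffix, row_type = KINDS[kind]
    return (
        "SELECT dt, raw FROM ("
        f"SELECT dt, raw FROM remote({source}_{suffix})"
        " UNION ALL "
        f"SELECT dt, raw FROM s3Cluster(primary, {source}_s3)"
        f" WHERE _row_type = {row_type}"
        f") ORDER BY dt DESC LIMIT {limit} FORMAT JSONEachRow"
    )

print(build_union_query("t123456_your_source", "spans"))
```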
Filter logs containing specific text or fields:
SELECT dt, raw
FROM (
SELECT dt, raw
FROM remote(t123456_your_source_logs)
UNION ALL
SELECT dt, raw
FROM s3Cluster(primary, t123456_your_source_s3)
WHERE _row_type = 1
)
WHERE raw LIKE '%My text%'
ORDER BY dt ASC
LIMIT 5000
FORMAT JSONEachRow
Access nested JSON fields in your logs:
SELECT
dt,
JSONExtract(raw, 'level', 'Nullable(String)') as severity,
JSONExtract(raw, 'message', 'Nullable(String)') as message,
JSONExtract(raw, 'context', 'hostname', 'Nullable(String)') as hostname
FROM remote(t123456_your_source_logs)
WHERE severity = 'ERROR'
LIMIT 100
FORMAT JSONEachRow
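If you prefer to pull raw rows and extract fields client-side, the same nested lookup can be done in Python. A sketch of a dotted-path extractor that returns None for missing keys, analogous to the Nullable(String) behavior above (the helper name and sample log line are illustrative):

```python
import json

def extract(raw: str, path: str):
    """Walk a dotted path (e.g. "context.hostname") through a JSON log
    line, returning None when any key is missing -- a client-side analogue
    of JSONExtract(raw, ..., 'Nullable(String)')."""
    value = json.loads(raw)
    for key in path.split("."):
        if not isinstance(value, dict) or key not in value:
            return None
        value = value[key]
    return value

line = '{"level":"ERROR","message":"boom","context":{"hostname":"web-1"}}'
print(extract(line, "context.hostname"), extract(line, "context.missing"))  # web-1 None
```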
For large result sets, paginate using time-based ordering:
SELECT raw, dt
FROM s3Cluster(primary, t123456_your_source_s3)
WHERE _row_type = 1
AND dt BETWEEN toDateTime64(1748449431, 0, 'UTC') AND toDateTime64(1748535831, 0, 'UTC')
ORDER BY dt ASC
LIMIT 1000
FORMAT JSONEachRow
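Time-based pagination means repeating the query while advancing the lower bound of the dt range past the last row received. A sketch of that loop against a fake in-memory data source standing in for the Query API (the helper names are illustrative; it assumes dt values are unique at the stored precision, otherwise rows sharing a boundary timestamp could be skipped):

```python
def paginate(fetch, start: int, end: int, page_size: int = 1000):
    """Page through rows ordered by dt ASC. `fetch(lo, hi, limit)` stands
    in for one Query API call with `dt BETWEEN lo AND hi`; after each page
    we advance the lower bound past the last timestamp returned."""
    lo = start
    while True:
        page = fetch(lo, end, page_size)
        if not page:
            return
        yield from page
        lo = page[-1]["dt"] + 1

# Fake data source standing in for the API:
data = [{"dt": t, "raw": f"line {t}"} for t in range(100, 110)]

def fake_fetch(lo, hi, limit):
    return [r for r in data if lo <= r["dt"] <= hi][:limit]

rows = list(paginate(fake_fetch, 100, 200, page_size=3))
print(len(rows))  # 10 -- all rows, fetched 3 at a time
```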
To run extremely fast S3 queries over very wide time ranges, apply sampling, which reads only a percentage of the stored S3 files instead of all of them. This is great when you want results quickly and are comfortable with an estimated result.
curl -u $USERNAME:$PASSWORD \
-H 'Content-Type: text/plain' \
-X POST 'https://eu-nbg-2-connect.betterstackdata.com?output_format_pretty_row_numbers=0' \
-d "SELECT toStartOfInterval(dt, INTERVAL 1 DAY) AS day, count(*) AS count
FROM s3Cluster(primary, t123456_your_source_s3, sampling = 0.5)
WHERE _row_type = 1
AND dt BETWEEN toDateTime64(1748449431, 0, 'UTC') AND toDateTime64(1748535831, 0, 'UTC')
ORDER BY day ASC
LIMIT 1000
FORMAT JSONEachRow"
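Since sampling = 0.5 reads roughly half of the stored files, counts come back proportionally lower. A rough client-side scale-up, under the assumption that dividing by the sampling ratio is an acceptable approximation (the helper is illustrative, not part of the API):

```python
def estimate_total(sampled_count: int, sampling: float) -> int:
    """Scale up a count obtained with s3Cluster(..., sampling = r): only
    about a fraction r of the files were read, so divide by r. This is an
    estimate, not an exact total."""
    if not 0 < sampling <= 1:
        raise ValueError("sampling must be in (0, 1]")
    return round(sampled_count / sampling)

print(estimate_total(4210, 0.5))  # 8420
```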
If you encounter MEMORY_LIMIT_EXCEEDED errors:
Filter by dt, source, or other fields early in your query.
Reduce the result size with LIMIT or the max_result_rows setting.
To avoid hitting the concurrent query limit, run queries sequentially instead of firing many in parallel.
The Query API supports various ClickHouse output formats, most notably:
JSON - A single JSON data structure.
JSONEachRow - One JSON object per line, best for programmatic access.
Pretty - Human-readable table format.
CSV - Comma-separated values.
TSV - Tab-separated values.
Specify the format at the end of your SQL query: FORMAT JSONEachRow
Missing the brackets around your data when using JSONEachRow?
Add SETTINGS output_format_json_array_of_rows = 1 in front of FORMAT JSONEachRow to wrap your rows in [ and ] when you need valid JSON output.
Learn more about using JSON in ClickHouse.
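The two output shapes differ only in framing: line-delimited objects versus a single JSON array. A small sketch of a parser that accepts either, so client code works with or without the output_format_json_array_of_rows setting:

```python
import json

def parse_rows(body: str) -> list[dict]:
    """Accept either response shape: a JSON array (when
    output_format_json_array_of_rows = 1) or plain JSONEachRow lines."""
    text = body.strip()
    if text.startswith("["):
        return json.loads(text)
    return [json.loads(line) for line in text.splitlines() if line.strip()]

each_row = '{"a":1}\n{"a":2}\n'
as_array = '[{"a":1},{"a":2}]'
print(parse_rows(each_row) == parse_rows(as_array))  # True
```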
Use ORDER BY dt DESC or ORDER BY dt ASC for consistent pagination.
Use WHERE conditions to reduce data processing.