Querying with Ad-hoc SQL API
Query your events & time series with our read-only Ad-hoc SQL HTTP API.
You can connect from Grafana, Metabase, the official ClickHouse client via HTTP or any other HTTP client.
Getting started
Go to Warehouse -> Ad-hoc SQL API -> Create connection.
Select your Cluster and Teams the connection should have access to.
Click Create connection.
Copy the password shown in the flash message and store it securely. You won't be able to access the password again.
Basic usage
Fetch your data using a simple curl command:
curl -u $USERNAME:$PASSWORD \
-H 'Content-Type: text/plain' \
-X POST 'https://eu-nbg-2-connect.betterstackdata.com?output_format_pretty_row_numbers=0' \
-d "SELECT dt, raw FROM (
SELECT dt, raw FROM remote(t123456_your_source_events)
UNION ALL
SELECT dt, raw FROM s3Cluster(primary, t123456_your_source_s3)
WHERE _row_type = 1
) ORDER BY dt DESC LIMIT 100 FORMAT JSONEachRow"
curl -u $USERNAME:$PASSWORD \
-H 'Content-Type: text/plain' \
-X POST 'https://eu-nbg-2-connect.betterstackdata.com?output_format_pretty_row_numbers=0' \
-d "SELECT toStartOfHour(dt) AS time, countMerge(events_count)
FROM remote(t123456_your_source_timeseries)
GROUP BY time
ORDER BY time DESC
LIMIT 24
FORMAT Pretty"
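The same queries can be sent from any HTTP client. As an illustration, here is a minimal Python sketch using only the standard library; the endpoint mirrors the curl examples above, and the `build_request` helper name is our own invention, not part of any official SDK:

```python
import base64
import urllib.request

# Endpoint from the curl examples above; use the one shown in your dashboard.
API_URL = "https://eu-nbg-2-connect.betterstackdata.com"

def build_request(username: str, password: str, sql: str) -> urllib.request.Request:
    """Build a POST request carrying the SQL query, authenticated via HTTP Basic auth."""
    credentials = base64.b64encode(f"{username}:{password}".encode()).decode()
    return urllib.request.Request(
        API_URL,
        data=sql.encode(),
        method="POST",
        headers={
            "Authorization": f"Basic {credentials}",
            "Content-Type": "text/plain",
        },
    )

# Example usage (requires real connection credentials):
# req = build_request("user", "pass", "SELECT 1 FORMAT JSONEachRow")
# with urllib.request.urlopen(req) as resp:
#     print(resp.read().decode())
```

Keeping credentials in environment variables rather than hardcoding them, as with the `$USERNAME`/`$PASSWORD` placeholders in the curl examples, works the same way here via `os.environ`.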
Replace $USERNAME:$PASSWORD with your connection credentials and t123456_your_source with one of your available sources, which you can find in Warehouse -> Ad-hoc SQL API.
Data sources explained
The Ad-hoc SQL API provides access to two types of data for each source: events and time series.
Events
Use remote(t123456_your_source_events) for fast access to recent events.
For historical events, use s3Cluster(primary, t123456_your_source_s3).
Use UNION ALL to combine recent and historical data for complete results.
Time series
remote(t123456_your_source_timeseries) provides aggregated time series data.
Filtering and searching events
Search by event content
Filter logs containing specific text or fields:
SELECT dt, raw
FROM (
SELECT dt, raw
FROM remote(t123456_your_source_events)
UNION ALL
SELECT dt, raw
FROM s3Cluster(primary, t123456_your_source_s3)
WHERE _row_type = 1
)
WHERE raw LIKE '%My text%'
ORDER BY dt ASC
LIMIT 5000
FORMAT JSONEachRow
Extract JSON fields
Access nested JSON fields in your logs:
SELECT
dt,
json.level AS severity,
-- alternative syntax using the JSONExtract function and the raw column:
-- JSONExtract(raw, 'level', 'Nullable(String)') AS severity,
JSONExtract(raw, 'message', 'Nullable(String)') AS message,
JSONExtract(raw, 'context.hostname', 'Nullable(String)') AS hostname
FROM remote(t123456_your_source_events)
WHERE severity = 'ERROR'
LIMIT 100
FORMAT JSONEachRow
Pagination
For large result sets, paginate using time-based ordering:
SELECT raw, dt
FROM s3Cluster(primary, t123456_your_source_s3)
WHERE _row_type = 1
AND dt BETWEEN toDateTime64(1748449431, 0, 'UTC') AND toDateTime64(1748535831, 0, 'UTC')
ORDER BY dt ASC
LIMIT 1000
FORMAT JSONEachRow
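Splitting a large range into consecutive half-open time windows keeps every page below the row limit without duplicating rows at the boundaries. A Python sketch of that loop; the helper names and the window size are illustrative, not part of the API:

```python
from datetime import datetime, timedelta, timezone
from typing import Iterator, Tuple

def time_windows(start: datetime, end: datetime,
                 step: timedelta) -> Iterator[Tuple[datetime, datetime]]:
    """Yield consecutive half-open [window_start, window_end) pairs covering start..end."""
    cursor = start
    while cursor < end:
        window_end = min(cursor + step, end)
        yield cursor, window_end
        cursor = window_end

def window_query(source: str, lo: datetime, hi: datetime) -> str:
    """Render one paginated query; dt bounds are passed as epoch seconds."""
    return (
        f"SELECT raw, dt FROM s3Cluster(primary, {source}) "
        f"WHERE _row_type = 1 "
        f"AND dt >= toDateTime64({int(lo.timestamp())}, 0, 'UTC') "
        f"AND dt < toDateTime64({int(hi.timestamp())}, 0, 'UTC') "
        f"ORDER BY dt ASC LIMIT 1000 FORMAT JSONEachRow"
    )

# Example: one query per 2-hour window across a day.
start = datetime(2025, 1, 1, tzinfo=timezone.utc)
queries = [window_query("t123456_your_source_s3", lo, hi)
           for lo, hi in time_windows(start, start + timedelta(days=1),
                                      timedelta(hours=2))]
```

Using `>=`/`<` instead of BETWEEN (which is inclusive on both ends) avoids fetching the boundary second twice in adjacent windows.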
Common issues and solutions
Memory limit exceeded
If you encounter MEMORY_LIMIT_EXCEEDED errors:
- Use shorter time ranges: Limit your queries to smaller time windows.
- Add specific filters: Filter by dt, source, or other fields early in your query.
- Limit result size: Use LIMIT or the max_result_rows setting.
Consider upgrading your querying speed.
Too many simultaneous queries
To avoid hitting the concurrent query limit:
- Add delays between requests: Wait 1-2 seconds between API calls.
- Use shorter time ranges: Reduce the scope of each query.
- Implement retry logic: Detect the error and retry after a longer delay.
Consider upgrading your querying speed.
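The retry advice above can be sketched as a small exponential-backoff wrapper. This is a generic pattern, not an official client; a production version should inspect the response and back off only on the concurrent-query error rather than on every failure:

```python
import time
from typing import Callable, TypeVar

T = TypeVar("T")

def with_retries(call: Callable[[], T],
                 attempts: int = 5,
                 base_delay: float = 1.0) -> T:
    """Run call(), retrying with exponential backoff; re-raise after the last attempt."""
    for attempt in range(attempts):
        try:
            return call()
        except Exception:
            if attempt == attempts - 1:
                raise
            # 1s, 2s, 4s, ... between attempts keeps us under the concurrency limit.
            time.sleep(base_delay * (2 ** attempt))
    raise AssertionError("unreachable")

# Example usage:
# rows = with_retries(lambda: run_query("SELECT 1 FORMAT JSONEachRow"))
```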
Output formats
The Ad-hoc SQL API supports various ClickHouse output formats, most notably:
- JSON - A single JSON data structure.
- JSONEachRow - One JSON object per line, best for programmatic access.
- Pretty - Human-readable table format.
- CSV - Comma-separated values.
- TSV - Tab-separated values.
Specify the format at the end of your SQL query: FORMAT JSONEachRow
Missing the brackets around your data when using JSONEachRow?
Add SETTINGS output_format_json_array_of_rows = 1 before FORMAT JSONEachRow to wrap the rows in [ and ] when you need valid JSON output.
Learn more about using JSON in ClickHouse.
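Alternatively, plain JSONEachRow output can be consumed as-is, since every line is an independent JSON object. A Python sketch of that parsing (the helper name is ours):

```python
import json
from typing import Any, Dict, List

def parse_json_each_row(body: str) -> List[Dict[str, Any]]:
    """Decode JSONEachRow output: one JSON object per non-empty line."""
    return [json.loads(line) for line in body.splitlines() if line.strip()]

# Example usage with a response body from the API:
# rows = parse_json_each_row(resp.read().decode())
```

This avoids buffering the whole array in memory on the server side, which matters for large result sets.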
Best practices
- Always use LIMIT: Prevent accidentally fetching too much data.
- Order your results: Use ORDER BY dt DESC or ORDER BY dt ASC for consistent pagination.
- Filter early: Apply WHERE conditions to reduce data processing.
- Use appropriate time ranges: Shorter ranges perform better.
- Store credentials securely: Never hardcode usernames and passwords in your scripts.
- Handle errors gracefully when scripting: Implement retry logic for temporary failures or rate limiting.