Connect remotely via HTTP API
Query your logs & metrics outside of the Better Stack dashboard with our HTTP API.
You can connect from Grafana, the ClickHouse client over HTTP, or any other HTTP client.
Getting started
1. Navigate to Dashboards → Connect remotely to create a username and password.
2. Click Connect ClickHouse HTTP client.
3. Follow the instructions in the form and click Create connection.
4. Copy the password shown in the flash message and store it securely. You won't be able to access the password again.
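To keep the credentials out of the commands below, you can export them as environment variables. A minimal sketch; USERNAME and PASSWORD are simply the placeholders used throughout this guide:
# Store the connection credentials so the curl examples can reference them
export USERNAME='your-connection-username'
export PASSWORD='your-connection-password'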
Basic usage
Query logs with curl
Fetch your recent logs using a simple curl command:
curl -u $USERNAME:$PASSWORD \
-H 'Content-Type: text/plain' \
-X POST 'https://eu-nbg-2-connect.betterstackdata.com?output_format_pretty_row_numbers=0' \
-d "SELECT dt, raw FROM (
SELECT dt, raw FROM remote(t123456_your_source_logs)
UNION ALL
SELECT dt, raw FROM s3Cluster(primary, t123456_your_source_s3)
) ORDER BY dt DESC LIMIT 100 FORMAT JSONEachRow"
Replace $USERNAME:$PASSWORD with your connection credentials, and t123456_your_source with your actual team ID and textual source ID, which you can find in Sources.
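For longer queries, you can keep the SQL in a file and let curl read it; curl's --data-binary @file preserves the newlines in the file. A sketch using a hypothetical query.sql:
curl -u $USERNAME:$PASSWORD \
-H 'Content-Type: text/plain' \
-X POST 'https://eu-nbg-2-connect.betterstackdata.com?output_format_pretty_row_numbers=0' \
--data-binary @query.sql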
Query metrics with curl
Get hourly event counts from your metrics:
curl -u $USERNAME:$PASSWORD \
-H 'Content-Type: text/plain' \
-X POST 'https://eu-nbg-2-connect.betterstackdata.com?output_format_pretty_row_numbers=0' \
-d "SELECT toStartOfHour(dt) AS time, countMerge(events_count)
FROM remote(t123456_your_source_metrics)
GROUP BY time
ORDER BY time DESC
LIMIT 24
FORMAT Pretty"
Again, replace $USERNAME:$PASSWORD and t123456_your_source with your actual credentials and source.
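To change the bucket size, swap the time function. For example, toStartOfInterval is a standard ClickHouse function; a sketch for 15-minute buckets over the last day:
SELECT toStartOfInterval(dt, INTERVAL 15 MINUTE) AS time, countMerge(events_count)
FROM remote(t123456_your_source_metrics)
GROUP BY time
ORDER BY time DESC
LIMIT 96
FORMAT Pretty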
Data sources explained
The Connection API provides access to two types of log storage plus metrics for each source:
- Recent logs: remote(t123456_your_source_logs) - fast access to recently ingested logs.
- Historical logs: s3Cluster(primary, t123456_your_source_s3) - long-term storage for older logs.
- Metrics: remote(t123456_your_source_metrics) - aggregated metrics data, similar to Dashboards.
Use UNION ALL to combine recent and historical logs for complete results.
Filtering and searching logs
Search by log content
Filter logs containing specific text or fields:
SELECT dt, raw
FROM (
SELECT dt, raw
FROM remote(t123456_your_source_logs)
UNION ALL
SELECT dt, raw
FROM s3Cluster(primary, t123456_your_source_s3)
)
WHERE raw LIKE '%My text%'
ORDER BY dt ASC
LIMIT 5000
FORMAT JSONEachRow
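ClickHouse also supports ILIKE for case-insensitive matching, so assuming the Connection API exposes it, you can swap the WHERE clause above:
WHERE raw ILIKE '%my text%'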
Extract JSON fields
Access nested JSON fields in your logs:
SELECT
dt,
getJSON(raw, 'level') as severity,
getJSON(raw, 'message') as message,
getJSON(raw, 'context.hostname') as hostname
FROM remote(t123456_your_source_logs)
WHERE severity = 'ERROR'
LIMIT 100
FORMAT JSONEachRow
Optimizing queries for large datasets
Use S3 glob interpolation
For better performance on large historical datasets, use S3 glob interpolation: add the table, range-from, and range-to query parameters to the request URL and pass the filename option to s3Cluster:
curl -u $USERNAME:$PASSWORD \
-H 'Content-Type: text/plain' \
-X POST 'https://eu-nbg-2-connect.betterstackdata.com?output_format_pretty_row_numbers=0&table=t123456.your_source&range-from=1748449431000000&range-to=1748535831000000' \
-d "SELECT raw, dt
FROM s3Cluster(primary, t123456_your_source_s3, filename='{{_s3_glob_interpolate}}')
WHERE _row_type = 1
AND dt BETWEEN toDateTime64(1748449431, 0, 'UTC') AND toDateTime64(1748535831, 0, 'UTC')
ORDER BY dt ASC
LIMIT 1000
FORMAT JSONEachRow"
Query parameters:
- table - Identifies a single source. Use the dotted format t123456.your_source, without the _s3 suffix.
- range-from and range-to - The time interval as Unix timestamps in microseconds. Use them in addition to the WHERE condition, not instead of it.
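A minimal sketch for producing the microsecond timestamps in a shell (assuming date +%s is available, which holds on both GNU and BSD systems):
# range-to: now; range-from: 24 hours earlier, both as Unix microseconds
RANGE_TO=$(( $(date +%s) * 1000000 ))
RANGE_FROM=$(( RANGE_TO - 24 * 3600 * 1000000 ))
echo "table=t123456.your_source&range-from=$RANGE_FROM&range-to=$RANGE_TO"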
Pagination
For large result sets, paginate using time-based ordering: request a bounded time window, then use the dt of the last returned row as the lower bound of the next request:
SELECT raw, dt
FROM s3Cluster(primary, t123456_your_source_s3)
WHERE _row_type = 1
AND dt BETWEEN toDateTime64(1748449431, 0, 'UTC') AND toDateTime64(1748535831, 0, 'UTC')
ORDER BY dt ASC
LIMIT 1000
FORMAT JSONEachRow
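To fetch the next page, move the lower bound forward and keep the same ordering and limit. A sketch, assuming the previous page ended at a hypothetical dt of 1748450000:
SELECT raw, dt
FROM s3Cluster(primary, t123456_your_source_s3)
WHERE _row_type = 1
AND dt > toDateTime64(1748450000, 0, 'UTC') -- dt of the last row already fetched (hypothetical)
AND dt <= toDateTime64(1748535831, 0, 'UTC')
ORDER BY dt ASC
LIMIT 1000
FORMAT JSONEachRow
If several rows can share the same dt, deduplicate on the client instead of relying on a strict > bound.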
Common issues and solutions
Memory limit exceeded
If you encounter MEMORY_LIMIT_EXCEEDED errors:
- Use shorter time ranges: Limit your queries to smaller time windows.
- Add specific filters: Filter by dt, source, or other fields early in your query.
- Limit result size: Use LIMIT or the max_result_rows setting.
- Apply S3 glob interpolation: See the dedicated section above for details.
Too many simultaneous queries
To avoid hitting the concurrent query limit:
- Add delays between requests: Wait 1-2 seconds between API calls.
- Use shorter time ranges: Reduce the scope of each query.
- Implement retry logic: Detect the error and retry after a longer delay, as sketched below.
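A minimal retry sketch in shell, reusing the endpoint and query from the examples above (the 5-attempt cap and linear backoff are arbitrary choices):
# Retry up to 5 times; curl -f makes HTTP errors return a non-zero exit code
for attempt in 1 2 3 4 5; do
  if curl -sf -u $USERNAME:$PASSWORD \
    -H 'Content-Type: text/plain' \
    -X POST 'https://eu-nbg-2-connect.betterstackdata.com' \
    -d "SELECT dt, raw FROM remote(t123456_your_source_logs) ORDER BY dt DESC LIMIT 100 FORMAT JSONEachRow"; then
    break
  fi
  sleep $(( attempt * 2 )) # back off: 2s, 4s, 6s, ...
done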
Output formats
The Connection API supports various ClickHouse output formats:
- JSON - A single JSON data structure.
- JSONEachRow - One JSON object per line, best for programmatic access.
- Pretty - Human-readable table format.
- CSV - Comma-separated values.
- TSV - Tab-separated values.
Specify the format at the end of your SQL query: FORMAT JSONEachRow
Best practices
- Always use LIMIT: Prevent accidentally fetching too much data.
- Order your results: Use ORDER BY dt DESC or ORDER BY dt ASC for consistent pagination.
- Filter early: Apply WHERE conditions to reduce data processing.
- Use appropriate time ranges: Shorter ranges perform better.
- Store credentials securely: Never hardcode usernames and passwords in your scripts.
- Handle errors gracefully when scripting: Implement retry logic for temporary failures or rate limiting.