Hosting data in your own bucket
This is a billable feature that cannot be changed after a source is created.
You can store your telemetry data in a bucket you control. This is useful for:
- Keeping a long-term archive of your data.
- Having tight control over storing sensitive data.
- Retaining data beyond the standard retention period.
This feature works with any S3-compatible service, such as AWS S3, Cloudflare R2, or DigitalOcean Spaces. We recommend Cloudflare R2 since it doesn't charge egress fees.
Set up your source
- Go to Telemetry -> Sources -> Connect source.
- After configuring basic information, scroll down to Advanced settings.
- Choose your provider and fill in the bucket access details.
- Create your source and start sending your data.
Unsure about how to configure your storage or need a custom setup?
Please let us know at hello@betterstack.com. We're happy to help!
Data retention
By default, data is automatically removed from your bucket once your logs retention period ends.
You can choose to keep the data in the bucket after your retention period ends. This ensures you keep a permanent copy, even if logs are no longer accessible in Better Stack.
While creating your source, enable the Keep data after retention option.
File format
The log files stored in your bucket are in the ClickHouse Native format. This enables highly efficient storage and querying.
You can inspect and query the files using the clickhouse-local tool.
Querying log files with ClickHouse
To query your log files directly from your S3-compatible bucket:
- Install clickhouse-local for your OS.
- Use the ClickHouse s3 table function to query your files.
./clickhouse local \
  -q "SELECT dt, raw
      FROM s3(
          'https://<region>.s3-compatible-service.com/<bucket>/<path>/<file>',
          '<access-key-id>',
          '<access-key-secret>'
      )
      ORDER BY dt DESC
      LIMIT 500"
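The path in the s3 function accepts glob patterns, so you can query several files at once. A minimal sketch, assuming the files under <path> share the same structure:
./clickhouse local \
  -q "SELECT dt, raw
      FROM s3(
          'https://<region>.s3-compatible-service.com/<bucket>/<path>/*',
          '<access-key-id>',
          '<access-key-secret>'
      )
      ORDER BY dt DESC
      LIMIT 500"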
To find out the structure of the data in any file:
./clickhouse local \
  -q "DESCRIBE s3(
          'https://<region>.s3-compatible-service.com/<bucket>/<path>/<file>',
          '<access-key-id>',
          '<access-key-secret>'
      )"
Downloading log files for local analysis (Optional)
If you prefer to download and analyze your logs locally:
- Download any file from your bucket. There's no need to decompress it - clickhouse-local reads the native format directly.
- Run ./clickhouse local with your query, replacing FROM s3(...) with FROM file(...):
./clickhouse local \
  -q "SELECT dt, raw
      FROM file('<downloaded-file>')
      ORDER BY dt DESC
      LIMIT 500"
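The file function also accepts glob patterns. If you downloaded multiple files into a single directory, you can query them together - a sketch assuming the files share the same structure, with <downloaded-files> standing in for that directory:
./clickhouse local \
  -q "SELECT dt, raw
      FROM file('<downloaded-files>/*')
      ORDER BY dt DESC
      LIMIT 500"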
Search your logs
You can use ClickHouse SQL expressions as you would in Explore.
./clickhouse local \
  -q "SELECT dt, raw
      FROM s3(
          'https://<region>.s3-compatible-service.com/<bucket>/<path>/<file>',
          '<access-key-id>',
          '<access-key-secret>'
      )
      WHERE raw ILIKE '%search term%'
      ORDER BY dt DESC
      LIMIT 500"
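You can combine the text filter with other conditions - for example, limiting the search to the last day using the dt column:
./clickhouse local \
  -q "SELECT dt, raw
      FROM s3(
          'https://<region>.s3-compatible-service.com/<bucket>/<path>/<file>',
          '<access-key-id>',
          '<access-key-secret>'
      )
      WHERE raw ILIKE '%search term%'
        AND dt >= now() - INTERVAL 1 DAY
      ORDER BY dt DESC
      LIMIT 500"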
Export to CSV, JSON, etc.
You can use any ClickHouse-supported format with --output-format.
./clickhouse local \
  --output-format=JSONEachRow \
  -q "SELECT dt, raw
      FROM s3(
          'https://<region>.s3-compatible-service.com/<bucket>/<path>/<file>',
          '<access-key-id>',
          '<access-key-secret>'
      )"
Read more about output formats in our Connection API docs.