Hosting data in your own bucket
This is a billable feature, and it cannot be changed after a source is created.
Better Stack allows you to store your log data in a bucket you control. This is useful for:
- Keeping a long-term archive of your data.
- Having tight control over storing sensitive data.
- Retaining data beyond the standard retention period.
This feature works with any S3-compatible service, such as AWS S3, Cloudflare R2, or DigitalOcean Spaces.
Set up your source
- Go to Telemetry -> Sources -> Connect source.
- After configuring basic information, scroll down to Advanced settings.
- Choose your provider and fill in the bucket access details.
- Create your source, and start sending your data 🚀
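If you don't have a bucket yet, here is a minimal sketch of creating one with the AWS CLI. The bucket name and region are placeholders, and other S3-compatible providers have equivalent tooling:
# Create a new bucket to hold your log archive (name and region are examples)
aws s3 mb s3://my-log-archive --region us-east-1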
Unsure about how to configure your storage or need a custom setup?
Please let us know at hello@betterstack.com. We're happy to help! 🙏
Data retention
By default, data is automatically removed from your bucket once your source's log retention period ends.
You can choose to keep the data in the bucket after your retention period ends. This ensures you keep a permanent copy, even if logs are no longer accessible in Better Stack.
While creating your source, enable the Keep data after retention option.
File format
The log files stored in your bucket are in the ClickHouse Native format. This enables highly efficient storage and querying.
You can inspect and query the files using the clickhouse-local tool.
Querying log files with ClickHouse
To download and analyze your logs locally:
- Install clickhouse-local for your OS.
- Download any file from your bucket. No need to decompress it; the native format is handled directly. (A shell sketch of the install and download steps follows the queries below.)
- Run ./clickhouse local with your query:
./clickhouse local \
  -q "SELECT dt, raw
      FROM file('<downloaded-file>')
      ORDER BY dt DESC
      LIMIT 500"
To list the columns available in the file:
./clickhouse local \
  -q "DESCRIBE file('<downloaded-file>')"
Querying logs from S3 directly
You can skip downloading entirely by using the ClickHouse s3 table function: replace FROM file(...) with FROM s3(...) in your query:
./clickhouse local \
  -q "SELECT dt, raw
      FROM s3(
        'https://<region>.s3-compatible-service.com/<bucket>/<path>/<file>',
        '<access-key-id>',
        '<access-key-secret>'
      )
      ORDER BY dt DESC
      LIMIT 500"
Search your logs
You can use ClickHouse SQL expressions as you would in Explore.
./clickhouse local \
  -q "SELECT dt, raw
      FROM file('<downloaded-file>')
      WHERE raw ILIKE '%search term%'
      ORDER BY dt DESC
      LIMIT 500"
Export to CSV, JSON, etc
You can use any ClickHouse-supported format with the --output-format option.
./clickhouse local \
  --output-format=JSONEachRow \
  -q "SELECT dt, raw
      FROM file('<downloaded-file>')"
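For example, to export logs to a CSV file with a header row (the output filename is arbitrary):
./clickhouse local \
  --output-format=CSVWithNames \
  -q "SELECT dt, raw
      FROM file('<downloaded-file>')" > logs.csv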
Read more about output formats in our Connection API docs.