Time series data warehouse as an API

Ingest petabytes and run analytical SQL queries at scale. So affordable you’ll ask what’s wrong with it.

Have a 100TB+ dataset? Book a consultation

Relied on by the world’s best engineering teams


Get serverless ClickHouse as an API without the scaling headache

01

Ingest petabytes of JSON, NDJSON or MessagePack via HTTP

cURL

# JSON payload below is illustrative; the original example payload was not captured
curl -X POST https://s1554406.eu-nbg-2.betterstackdata.com \
  -H "Authorization: Bearer JiATyrxDfaAR9cP1erYDytMR" \
  -H "Content-Type: application/json" \
  -d '{"dt":"2024-06-01T12:00:00Z","event":"page_view"}'
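The same endpoint accepts batched events. A minimal NDJSON sketch, assuming one JSON object per line (the application/x-ndjson content type and the field names are assumptions, not confirmed API details):

# Hypothetical NDJSON batch: one event per line
curl -X POST https://s1554406.eu-nbg-2.betterstackdata.com \
  -H "Authorization: Bearer JiATyrxDfaAR9cP1erYDytMR" \
  -H "Content-Type: application/x-ndjson" \
  -d '{"dt":"2024-06-01T12:00:00Z","event":"page_view"}
{"dt":"2024-06-01T12:00:01Z","event":"signup"}'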

02

Run ClickHouse SQL queries via any ClickHouse HTTP client such as cURL, Metabase or Grafana

cURL

curl -u uBSo9t0g3eqiITS6hN8QLIuIOpkYLo0hN:jJzkwCCipA2LIwf6h1yo3IzpK36jjEQKp9swg86IxOgYbnXLmziJLmVtXSfnNwCC \
  -H 'Content-Type: text/plain' \
  -X POST 'https://eu-nbg-2-connect.betterstackdata.com?output_format_pretty_row_numbers=0' \
  -d "SELECT
        cpu_cores,
        countMerge(events_count) AS visitors,
        bar(countMerge(events_count), 0, 1000, 50) AS histogram
      FROM remote(t466713_homepage_timeseries)
      WHERE cpu_cores IS NOT NULL
      GROUP BY cpu_cores
      ORDER BY visitors DESC
      FORMAT Pretty"

03

Save queries as client-facing APIs you can securely call from your frontend

cURL

curl "https://eu-nbg-2-connect.betterstackdata.com/query/ti1Mweu5FCyTCEIlNNRAYzm1l5rAyL9n.csv"


Scales beyond

MySQL
PostgreSQL
Supabase
MariaDB
MongoDB
Firebase

Good fit

You have massive amounts of JSON events
Each event has a time attribute
You never need to update events once you ingest them
You need fast SQL queries at scale
You want ready-made APIs for saved queries

Not a good fit

You have just millions of events to store
You need to update individual records once ingested
Your events are not timestamped
You prefer overpaying for Snowflake to get invited into their invite-only events

So affordable you’ll ask what’s wrong with it

Benefit from our economies of scale. We pass the savings on to you. Cheaper than self-hosting on AWS.

Ingest up to 5x more data with the same budget, or save up to 80% of your costs.

Example scenario: ingest & store a 100 TB dataset, query it 100 times with compute similar to 320 vCPUs and 320 GB of RAM, and egress 2x the original dataset over the internet.

Better Stack Warehouse: approx. $16,000
Self-hosted ClickHouse on AWS: approx. $28,000
ClickHouse Cloud: approx. $41,000
Supabase: approx. $43,000
Snowflake: approx. $56,000
tinybird: approx. $62,000
BigQuery: approx. $85,000
Pinecone: approx. $116,000


Optimize AWS egress costs

Hosted on AWS? Leverage our ingestion endpoints hosted within AWS so that you don't need to pay AWS $0.09/GB for egress.

We've set up AWS Direct Connect so that you don't have to.

100 TB internet egress from Amazon Web Services

AWS Direct Connect with Better Stack: approx. $4,096
AWS internet egress: approx. $9,216

AWS internet egress is priced at $0.09/GB; AWS Direct Connect with Better Stack is priced at $0.04/GB. Hosted in eu-central-1; if you're sending data from a different AWS region, you might also need to pay a $0.02/GB inter-region AWS fee.
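At these rates, a 100 TB (102,400 GB) transfer works out to 102,400 GB × $0.04/GB = $4,096 over Direct Connect versus 102,400 GB × $0.09/GB = $9,216 over the public internet.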


Simple & predictable pricing

We charge for

$0.1/GB ingestion
$0.05/GB retention on object storage
$1/GB retention on fast NVMe SSD
Unlimited standard querying included
Explore pricing
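At these rates, for example, ingesting 10 TB (10,240 GB) in a month costs 10,240 GB × $0.10/GB = $1,024, and keeping all of it on object storage adds 10,240 GB × $0.05/GB = $512 per month, with standard querying included.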

Others charge for

Public internet data transfer
Inter-region data transfer
vCPU hours
Per-query GB-hours
Cached API requests
SOC 2 Type II compliance



How Tesla built a quadrillion-scale observability platform on ClickHouse

Tesla built Comet, a quadrillion-scale observability platform powered by ClickHouse, to process billions of metrics per second and deliver real-time insights across its global operations, showcasing ClickHouse’s unmatched speed, scalability, and reliability.
Watch Alon Tal’s talk at the Open House conference on YouTube.
Photo by ClickHouse, Inc.

Quadrillion-scale use cases powered by ClickHouse

Take an approach similar to Tesla’s and enjoy the benefits of using ClickHouse at massive scale without the scaling headache.

Real-time analytics

Run sub-second analytical queries at scale to power real-time dashboards.

Data warehouse

Analyze trends in raw events with ad-hoc SQL queries at petabyte scale.

Gen AI & Retrieval-Augmented Generation (RAG)

Get accurate and context-aware LLM answers specific to your data.

Observability

Track events, logs, traces, and metrics at a petabyte scale.


No vendor lock-in:
Open formats in your own S3 bucket

Host your data in your own S3-compatible bucket for full control. Data is stored in the open source ClickHouse native format for the best performance. You can read it using the open source ClickHouse client CLI, as sketched below.
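A minimal sketch of reading that data back, assuming the open source clickhouse CLI is installed; the bucket URL and path are placeholders, and credentials would be passed as extra s3() arguments:

# Hypothetical example: count rows directly from ClickHouse-native files in your own bucket
clickhouse local --query "
  SELECT count()
  FROM s3('https://your-bucket.s3.eu-central-1.amazonaws.com/warehouse/**/*.native', 'Native')"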

Apache Iceberg is in private beta. Reach out if you're interested!

JSON events on object storage, time series on fast NVMe SSD

Get the best of both worlds: scalability and cost-efficiency of wide JSON events stored in object storage, and fast analytical queries with highly compressed time series stored on local NVMe SSD drives.

Available in 4 regions, custom deployments on request

Leverage our self-serve clusters in Virginia, Oregon, Germany or Singapore today or request a deployment of a custom cluster with a dedicated S3 bucket in your own VPC.

Run SQL queries with an HTTP API

Your data is yours. Host it in your own S3 bucket and run arbitrary SQL queries via our HTTP API.


Everything you'd expect from a time series data warehouse

Built-in vector embeddings and approximate KNN

Generate vector embeddings without having to call an external API with our built-in embedding model embeddinggemma:300 and query embeddings fast with vector indexes. Run full-text search, multi-vector search and faceted search in a single query.
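A hedged sketch of what a nearest-neighbour query could look like over the HTTP API; the table name, the embedding column, and the (truncated) query vector are placeholders, and $USER:$PASSWORD stands in for real credentials:

# Hypothetical KNN search: rank rows by cosine distance to a query embedding
# (query vector shortened for readability; use the full embedding dimension in practice)
curl -u "$USER:$PASSWORD" \
  -H 'Content-Type: text/plain' \
  -X POST 'https://eu-nbg-2-connect.betterstackdata.com' \
  -d "SELECT id, title
      FROM remote(t466713_documents)
      ORDER BY cosineDistance(embedding, [0.12, -0.03, 0.57]) ASC
      LIMIT 10
      FORMAT Pretty"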

Analyze petabyte-scale trends with query time sampling

Search raw JSON events from S3 with hundreds of nodes. Run arbitrary SQL queries and leverage query-time sampling to get ad-hoc insights about trends at scale, or store time series on NVMe SSD for fast analytical queries used in your APIs.
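A minimal sketch of query-time sampling using ClickHouse’s SAMPLE clause, assuming the table was created with a sampling key; the table name is a placeholder:

# Hypothetical example: estimate a total from a 1% sample and scale it back up
curl -u "$USER:$PASSWORD" \
  -H 'Content-Type: text/plain' \
  -X POST 'https://eu-nbg-2-connect.betterstackdata.com' \
  -d "SELECT count() * 100 AS estimated_events
      FROM remote(t466713_events) SAMPLE 0.01
      FORMAT Pretty"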

Transform JSON events at scale
Redact personally identifiable information or simply discard useless events so that you don't get billed.
Built-in geo IP at query time
Enrich every event with location data at query time. No preprocessing, no extra storage.
Ready-made client-facing APIs
Get Reddit-proof APIs for saved queries, ready to serve millions of requests, free of charge.
MCP server
Connect new data sources, add new database credentials, and run queries from Claude or Cursor.
Terraform
Create new data sources, add temporary database credentials or remove data sources as needed.
Time-to-live
Drop your data automatically after a certain time frame by configuring an optional time-to-live.
Spending alerts
Get notified before costs exceed your budget with real-time notifications.
Detailed usage reporting
Understand every dollar you spend by data source, team, and feature.
Spending limit
Cap your costs automatically and never worry about surprise bills again.

Secure, scalable & compliant

Everything your data privacy officer requires so that you can pass audits with peace of mind.

SOC 2 Type 2
Request our SOC 2 Type 2 attestation upon signing an NDA.
GDPR-compliant
Comply with the European Union’s General Data Protection Regulation.
ISO 27001 data centers
Ensuring strict data security standards.


Happy customers, growing market presence

Ship higher-quality software faster. Be the hero of your engineering teams.

Have a 100TB+ dataset? Book a consultation