# Time series data warehouse as an API

Ingest petabytes and run analytical SQL queries at scale.  
So affordable you’ll ask what’s wrong with it.

Have a 100TB+ dataset? [Book a consultation](https://betterstack.com/book-a-demo)

## Get a time series data warehouse as an API without the scaling headache

Ingest petabytes of JSON, NDJSON or MessagePack via HTTP

    curl -X POST https://s1554406.eu-nbg-2.betterstackdata.com \
      -H "Authorization: Bearer <source-token>" \
      -H "Content-Type: application/json" \
      -d '[{"dt": "2025-01-01T00:00:00Z", "message": "hello"}]'

Run SQL queries via cURL, Metabase or Grafana

    curl -u uBSo9t0g3eqiITS6hN8QLIuIOpkYLo0hN:jJzkwCCipA2LIwf6h1yo3IzpK36jjEQKp9swg86IxOgYbnXLmziJLmVtXSfnNwCC \
    -H 'Content-Type: text/plain' \
    -X POST 'https://eu-nbg-2-connect.betterstackdata.com?output_format_pretty_row_numbers=0' \
    -d "SELECT
           cpu_cores,
           countMerge(events_count) AS visitors,
           bar(visitors, 0, max(countMerge(events_count)) OVER (), 50) AS histogram
           FROM remote(t466713_homepage_timeseries)
           WHERE cpu_cores IS NOT NULL
           GROUP BY cpu_cores
           ORDER BY visitors DESC
           FORMAT Pretty"

Save queries as client-facing APIs you can securely call from your frontend

    curl "https://eu-nbg-2-connect.betterstackdata.com/query/wXZfp2nTzUY9OJUC6711jDqOwkP8XU4h.csv"

## Scales beyond

Scales beyond MySQL, PostgreSQL, Supabase, MariaDB, MongoDB, Firebase and more.

### Good fit

- You have massive amounts of JSON events
- Each event has a time attribute
- You never need to update events once you ingest them
- You need fast SQL queries at scale
- You want ready-made APIs for saved queries

### Not a good fit

- You have just millions of events to store
- You need to update individual records once ingested
- Your events are not timestamped
- You prefer overpaying for Snowflake to get invited into their invite-only events

## So affordable you'll ask what's wrong with it

Benefit from our economies of scale. Ingest up to 5x more data with the same budget, or save up to 80% of your costs.

### Ingest & store a 100 TB dataset, query it 100 times, and transfer 200 TB over the internet

| Provider | Approx. monthly cost |
| --- | --- |
| Better Stack Warehouse | $16,000 |
| Supabase | $43,000 |
| Snowflake | $56,000 |
| tinybird | $62,000 |
| BigQuery | $85,000 |
| Pinecone | $116,000 |

_This isn't an apples-to-apples comparison, but we hope it still paints a picture. Did we make a mistake? Corrections welcome! Better Stack charges $0.1/GB for ingest and $0.05/GB for object storage retention, assuming 1% of the data is extracted as time series stored on NVMe drives at $1/GB for fast API queries, with $0 for regular-speed querying and $0 for unlimited egress. That adds up to 100 \* 1024 \* (0.1 + 0.05) + 1 \* 1024 \* 1 = $16,384._
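The footnote's arithmetic can be checked directly. A quick sketch, using the per-GB rates from the footnote above:

```shell
# 100 TB ingested and retained on object storage, plus 1% (1 TB) on NVMe.
# Rates: $0.1/GB ingest, $0.05/GB object storage, $1/GB NVMe.
awk 'BEGIN {
  total = 100 * 1024 * (0.1 + 0.05) + 1 * 1024 * 1
  printf "$%d/month\n", total   # prints $16384/month
}'
```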

[Explore pricing](/pricing.md#warehouse)

## Optimize AWS egress costs

Hosted on AWS? Leverage our ingestion endpoints hosted within AWS so that you don't need to pay AWS $0.09/GB for egress.

We've set up AWS Direct Connect so that you don't have to.

100 TB internet egress from Amazon Web Services:

| Route | Approx. cost |
| --- | --- |
| AWS Direct Connect with Better Stack | $4,096 |
| AWS internet egress | $9,216 |
AWS internet egress priced at $0.09/GB. AWS Direct Connect with Better Stack priced at $0.04/GB. Hosted in eu-central-1; if you’re sending data from a different AWS region, you might also need to pay a $0.02/GB inter-region AWS fee.
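As a sanity check on the figures above, for 100 TB (102,400 GB) at the stated per-GB rates:

```shell
# Direct Connect at $0.04/GB vs. public internet egress at $0.09/GB, for 100 TB
awk 'BEGIN {
  gb = 100 * 1024
  printf "Direct Connect: $%d\n", gb * 0.04    # $4096
  printf "Internet egress: $%d\n", gb * 0.09   # $9216
}'
```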

## Simple & predictable pricing

We charge for:

- $0.1/GB ingestion
- $0.05/GB retention on object storage
- $1/GB retention on fast NVMe SSD
- Unlimited standard querying included

[Explore pricing](/pricing.md#warehouse)
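To see how these rates combine, here is a sketch for a hypothetical workload (the volumes are illustrative, the per-GB rates come from the list above): 10 TB/month ingested, all of it retained on object storage, and 100 GB of time series kept on NVMe.

```shell
awk 'BEGIN {
  ingest = 10 * 1024 * 0.1    # 10 TB ingested at $0.1/GB
  object = 10 * 1024 * 0.05   # 10 TB retained on object storage at $0.05/GB
  nvme   = 100 * 1            # 100 GB on NVMe SSD at $1/GB
  printf "$%d/month\n", ingest + object + nvme   # prints $1636/month
}'
```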

Others charge for:

- Public internet data transfer
- Inter-region data transfer
- vCPU hours
- Per-query GB-hours
- Cached API requests
- SOC 2 Type II compliance

## No vendor lock-in: Open formats in your own S3 bucket

Host your data in your own S3-compatible bucket for full control.

### JSON events on object storage, time series on fast NVMe SSD

Get the best of both worlds: scalability and cost-efficiency of wide JSON events stored in object storage, and fast analytical queries with highly compressed time series stored on local NVMe SSD drives.

#### Available in 4 regions, custom deployments on request

Leverage our self-serve clusters in Virginia, Oregon, Germany or Singapore today or [request a deployment](mailto:hello@betterstack.com?subject=I%27m%20interested%20in%20custom%20deployment%20of%20Better%20Stack%20Warehouse&body=Monthly%20volume%3A%202%20PB%0ALocation%3A%20us-east%0ACompliance%3A%20SOC2%0ACustom%20bucket%3A%20yes%0A%0AAnything%20else%3A) of a custom cluster with a dedicated S3 bucket in your own VPC.

#### Run SQL queries with an HTTP API

Your data is yours. Host it in your own S3 bucket and run arbitrary SQL queries via our HTTP API.

## Everything you'd expect from a time series data warehouse

### Built-in vector embeddings and approximate KNN

Generate vector embeddings without calling an external API using our built-in embedding model embeddinggemma:300, and query embeddings fast with vector indexes. Run full-text search, multi-vector search and faceted search in a single query.

### Analyze petabyte-scale trends with query time sampling

Search raw JSON events from S3 with hundreds of nodes. Run arbitrary SQL queries and leverage query-time sampling for ad-hoc insights about trends at scale, or store time series on NVMe SSD for the fast analytical queries behind your APIs.

##### Transform JSON events at scale

Redact personally identifiable information, or simply discard useless events so you don't get billed for them.

##### Built-in geo IP at query time

Enrich every event with location data at query time. No preprocessing, no extra storage.

##### Ready-made client-facing APIs

Get Reddit-proof APIs for saved queries ready to serve millions of requests.   
Free of charge.

##### MCP server

Connect new data sources, add new database credentials, and run queries from Claude or Cursor.

##### Terraform

Create new data sources, add temporary database credentials or remove data sources as needed.

##### Time-to-live

Drop your data automatically after a certain time frame by configuring an optional time-to-live.

##### Spending alerts

Get notified before costs exceed your budget with real-time notifications.

##### Detailed usage reporting

Understand every dollar you spend by data source, team, and feature.

##### Spending limit

Cap your costs automatically and never worry about surprise bills again.

## Secure, scalable & compliant

### Everything your data privacy officer requires, so you can pass audits with peace of mind.

#### SOC 2 Type 2

Request our SOC 2 Type 2 attestation upon signing an NDA.

#### GDPR-compliant

Comply with the European Union’s General Data Protection Regulation.

#### ISO 27001 data centers

Ensuring strict data security standards.

## Happy customers, growing market presence

Get started today. Try Better Stack risk-free with our 60-day money-back guarantee.

Ship higher-quality software faster. Be the hero of your engineering teams.

Have a 100TB+ dataset? [Book a consultation](https://betterstack.com/book-a-demo)
