Want to send metrics to Better Stack?
Create a new Prometheus source and start using this API.
This endpoint allows you to send a single metric or a list of metrics. The metrics can be encoded in JSON or, preferably, in the more efficient MessagePack format.
You can use these metrics to build Dashboards using the built-in metric fields value, rate, series_id, and tags. Please note that not all sources support sending metrics.
Headers
- Content-Type: application/json, application/msgpack, or application/x-ndjson
- Authorization: Bearer $SOURCE_TOKEN

Body parameters
The body can be either:
- an array of metrics encoded in JSON or MessagePack, or newline-delimited JSON metrics
- a single metric encoded in JSON or MessagePack
The metric or metrics were successfully logged.
You provided an invalid source token, or the source doesn't allow metrics ingestion.
Response body
Unauthorized
The body is not valid JSON or MessagePack.
Response body
Couldn't parse JSON content.
The body is too large (over 10 MiB).
Response body
payload reached size limit
Send a single metric using cURL:
curl -X POST https://$INGESTING_HOST/metrics \
-H "Authorization: Bearer $SOURCE_TOKEN" \
-H "Content-Type: application/json" \
-d '{"name":"test_metric","gauge":{"value":123}}'
curl -X POST https://$INGESTING_HOST/metrics \
-H "Authorization: Bearer $SOURCE_TOKEN" \
-H "Content-Type: application/x-ndjson" \
-d '{"name":"test_metric","gauge":{"value":123}}'
# Python is required to prepare the binary data in this example
python3 -c 'import msgpack; \
print(msgpack.packb( \
{"name":"test_metric","gauge":{"value":123}} \
).hex())' \
| xxd -r -p \
| curl -X POST https://$INGESTING_HOST/metrics \
-H "Authorization: Bearer $SOURCE_TOKEN" \
-H "Content-Type: application/msgpack" \
--data-binary @-
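The same single-metric request can also be made from Python. A minimal sketch, assuming the requests package is installed and that INGESTING_HOST and SOURCE_TOKEN are set in your environment:
import os
import requests

# Content-Type: application/json is set automatically when using json=
response = requests.post(
    f"https://{os.environ['INGESTING_HOST']}/metrics",
    headers={"Authorization": f"Bearer {os.environ['SOURCE_TOKEN']}"},
    json={"name": "test_metric", "gauge": {"value": 123}},
)
print(response.status_code)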
Send multiple metrics using cURL:
curl -X POST https://$INGESTING_HOST/metrics \
-H "Authorization: Bearer $SOURCE_TOKEN" \
-H "Content-Type: application/json" \
-d '[{"name":"metric_a","gauge":{"value":3.14}},{"name":"metric_b","counter":{"value":42}}]'
curl -X POST https://$INGESTING_HOST/metrics \
-H "Authorization: Bearer $SOURCE_TOKEN" \
-H "Content-Type: application/x-ndjson" \
-d $'{"name":"metric_a","gauge":{"value":3.14}}\n{"name":"metric_b","counter":{"value":42}}'
# Python is required to prepare the binary data
python3 -c 'import msgpack; \
print(msgpack.packb( \
[{"name":"metric_a","gauge":{"value":3.14}},{"name":"metric_b","counter":{"value":42}}] \
).hex())' \
| xxd -r -p \
| curl -X POST https://$INGESTING_HOST/metrics \
-H "Authorization: Bearer $SOURCE_TOKEN" \
-H "Content-Type: application/msgpack" \
--data-binary @-
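If you are batching metrics in code rather than in the shell, here is a sketch of sending the same list as MessagePack from Python, assuming the msgpack and requests packages are installed:
import os
import msgpack
import requests

metrics = [
    {"name": "metric_a", "gauge": {"value": 3.14}},
    {"name": "metric_b", "counter": {"value": 42}},
]

# packb() produces the binary MessagePack body; send it as-is
response = requests.post(
    f"https://{os.environ['INGESTING_HOST']}/metrics",
    headers={
        "Authorization": f"Bearer {os.environ['SOURCE_TOKEN']}",
        "Content-Type": "application/msgpack",
    },
    data=msgpack.packb(metrics),
)
print(response.status_code)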
By default, the time of the metric is the time we receive it. You can override this by including a field dt containing the event time either as:
- a UNIX timestamp in seconds, milliseconds, or nanoseconds, e.g. 1672490759, 1672490759123, or 1672490759123456000
- a string in a format compatible with RFC 3339, e.g. 2022-12-31T13:45:59.123456Z or 2022-12-31 13:45:59.123456+02:00

Alternatively, you can use ISO 8601, as it will most likely use a format compatible with RFC 3339. In MessagePack, you can also use the timestamp extension type.
If the timestamp can't be parsed, we store it as a string and fall back to the time of receipt as the event time.
curl -X POST https://$INGESTING_HOST/metrics \
-H "Authorization: Bearer $SOURCE_TOKEN" \
-H "Content-Type: application/json" \
-d '{"name":"test_metric","gauge":{"value":123},"dt":"2023-08-09 07:03:30+00:00"}'
A gauge represents a single numerical value that can go up or down. Common use cases include tracking temperature, current memory usage, or system load. Gauges reflect the current state or level at the time of reporting. For example, memory usage might fluctuate, and each time you send a gauge, you're reporting the latest value.
{
"name": "node_cpu_temperature_celsius",
"gauge": {
"value": 63.5
}
}
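As a sketch, reporting a gauge from Python could look like this: each report carries only the current reading. Here it's the 1-minute system load from os.getloadavg() (Unix-only); the metric name is illustrative and the requests package is assumed:
import os
import requests

load_1m, _, _ = os.getloadavg()  # current 1-minute load average

requests.post(
    f"https://{os.environ['INGESTING_HOST']}/metrics",
    headers={"Authorization": f"Bearer {os.environ['SOURCE_TOKEN']}"},
    json={"name": "node_load1", "gauge": {"value": load_1m}},
)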
A counter is a metric type used for tracking monotonically increasing values, like the number of requests served or errors encountered. Unlike gauges, counters only increase or reset to zero; they never decrease. They're ideal for counting discrete events over time.
{
"name": "http_requests_total",
"counter": {
"value": 42
}
}
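A sketch of how a counter is typically maintained and reported: keep a monotonically increasing total in your process and send the cumulative value. The names are illustrative and the requests package is assumed:
import os
import requests

requests_served = 0

def handle_request():
    global requests_served
    requests_served += 1  # counters only go up, or reset to zero on restart

def report_counter():
    requests.post(
        f"https://{os.environ['INGESTING_HOST']}/metrics",
        headers={"Authorization": f"Bearer {os.environ['SOURCE_TOKEN']}"},
        json={"name": "http_requests_total", "counter": {"value": requests_served}},
    )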
A histogram samples observations (such as request durations or response sizes) and counts them in configurable buckets. It also provides the sum of all observed values. Histograms are useful for measuring the distribution of values, allowing you to understand the frequency of observations within specific ranges.
The fields dt, name, histogram.count, histogram.sum, histogram.buckets.*.count, and histogram.buckets.*.upper_limit are required for histograms.
{
"name": "http_request_duration_seconds",
"histogram": {
"count": 140,
"sum": 75.5,
"buckets": [
{
"upper_limit": 0.1,
"count": 15
},
{
"upper_limit": 0.2,
"count": 30
},
{
"upper_limit": 0.5,
"count": 45
},
{
"upper_limit": 1.0,
"count": 50
}
]
}
}
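A sketch of building this payload from raw observations in Python, assuming Prometheus-style cumulative buckets (each bucket counts all observations at or below its upper_limit); the observations and bucket boundaries are illustrative:
from datetime import datetime, timezone

durations = [0.05, 0.08, 0.15, 0.3, 0.45, 0.7, 0.9]  # observed request durations in seconds
upper_limits = [0.1, 0.2, 0.5, 1.0]

histogram = {
    "name": "http_request_duration_seconds",
    "dt": datetime.now(timezone.utc).isoformat(),  # dt is required for histograms
    "histogram": {
        "count": len(durations),
        "sum": sum(durations),
        "buckets": [
            {"upper_limit": limit, "count": sum(1 for d in durations if d <= limit)}
            for limit in upper_limits
        ],
    },
}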
When you send a histogram, we store its data in special metric-like columns, which allow for powerful analysis:
- The buckets column stores the cumulative counts for each bucket.
- The buckets_sum column contains the sum of all observed values.
- The buckets_count column contains the total number of observations.

You can then use SQL to build heatmaps, calculate custom quantiles (like p95), and visualize the distribution of your data.
Learn how to query this data in our Querying histograms guide.
A summary also samples observations and provides a total count and sum of all observed values, similar to a histogram. Additionally, it calculates configurable quantiles (e.g., median, 90th percentile). Summaries are useful when you need to track the distribution of values and compute specific statistical measures over time.
The fields dt, name, summary.count, summary.sum, summary.quantiles.*.value, and summary.quantiles.*.quantile are required for summaries.
{
"name": "http_response_size_bytes",
"summary": {
"count": 100,
"sum": 25000,
"quantiles": [
{
"quantile": 0.5,
"value": 200
},
{
"quantile": 0.9,
"value": 450
},
{
"quantile": 0.99,
"value": 600
}
]
}
}
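A sketch of computing the summary fields from raw observations in Python; the nearest-rank quantile below is one simple choice, and the observations are illustrative:
from datetime import datetime, timezone

sizes = sorted([120, 180, 200, 230, 240, 260, 300, 450, 500, 600])  # response sizes in bytes

def nearest_rank(sorted_values, q):
    # approximate nearest-rank quantile: value at round(q * n), clamped to valid indices
    idx = max(0, min(len(sorted_values) - 1, round(q * len(sorted_values)) - 1))
    return sorted_values[idx]

summary = {
    "name": "http_response_size_bytes",
    "dt": datetime.now(timezone.utc).isoformat(),  # dt is required for summaries
    "summary": {
        "count": len(sizes),
        "sum": sum(sizes),
        "quantiles": [
            {"quantile": q, "value": nearest_rank(sizes, q)} for q in (0.5, 0.9, 0.99)
        ],
    },
}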
Similar to histograms, summary data is stored in dedicated columns for powerful querying:
- The quantiles column stores an array of tuples representing the pre-calculated percentiles.
- The quantiles_sum column contains the sum of all observed values.
- The quantiles_count column contains the total number of observations.

You can query these columns to analyze statistical distributions over time. Querying summaries is similar to Querying histograms.
You can use any string labels in the tags field of your metric to provide metadata for grouping and filtering your data.
{
"name": "node_cpu_temperature_celsius",
"gauge": {
"value": 63.5
},
"tags": {
"host": "server_1",
"region": "us-east",
"environment": "production"
}
}
Tags have a direct impact on how we bill for metrics. We recommend avoiding unique or high-cardinality values that aren't necessary for your dashboards, such as process IDs or request IDs.
Many unique tag values can dramatically increase the number of metric series.
The maximum allowed size of a single request is 10 MiB.