Better Stack collector

Better Stack collector is the easiest and recommended way to integrate Better Stack into your environment.

Why should you use the collector?

Instrument without code changes

Leverage eBPF to instrument your Kubernetes or Docker clusters to gather logs, metrics, and OpenTelemetry traces without code changes.

Monitor and control your collectors remotely

Remotely monitor the collector's throughput and tune its configuration directly from the Better Stack dashboard, adjusting sampling, compression, and batching as needed.

Have a legacy service? Use the Better Stack dashboard to increase sampling and cut ingestion and egress costs, and scale back up only when you need the telemetry.


Databases instrumented automatically

Collector automatically recognizes databases and other common services running in your cluster. Monitor the internals of your PostgreSQL, MySQL, Redis, Memcached, MongoDB, Apache, Nginx, Elasticsearch, or Kafka out of the box.

Transform wide events with VRL

Transform logs, spans, and other wide events to redact personally identifiable information, or discard useless events so you don't get billed for them.
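As a sketch, a Vector-style transform in the collector's manual.vector.yaml could redact email addresses and drop health-check noise using VRL. The source and transform names below are illustrative; routing to the Better Stack sink depends on your collector configuration:

```yaml
transforms:
  redact_pii:
    type: remap
    inputs:
      - better_stack_logs_custom_file  # illustrative source name
    source: |
      # Redact anything that looks like an email address
      .message = redact(string!(.message), filters: [r'[\w.+-]+@[\w-]+\.[\w.]+'])
  drop_noise:
    type: filter
    inputs:
      - redact_pii
    # Keep only events that are not health checks
    condition: '!contains(string!(.message), "healthcheck")'
```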

Collect additional OpenTelemetry traces

Send any OpenTelemetry traces to Better Stack.

Get the best of both worlds: collect traces with zero effort using eBPF-based auto-instrumentation. For full flexibility, instrument your services using OpenTelemetry SDKs and send custom traces to Better Stack alongside eBPF data.

Getting started

Install via Kubernetes Helm chart

Add collector Helm chart and install it:

Add and install Helm chart
helm repo add better-stack https://betterstackhq.github.io/collector-helm-chart
helm repo update
helm install better-stack-collector better-stack/collector \
  --set collector.env.COLLECTOR_SECRET="$COLLECTOR_SECRET"


For advanced configuration options, see the values.yaml file.
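As one example, a minimal values.yaml combining options that appear elsewhere in this guide (collector secret, OpenTelemetry ports, and pod priority) might look like the following; consult values.yaml for the authoritative list of keys:

```yaml
collector:
  env:
    COLLECTOR_SECRET: "$COLLECTOR_SECRET"
collectOtel:
  grpcPort: 4317
  httpPort: 4318
priorityClassName: "high-priority-nonpreempting"
```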

After installing Better Stack collector, restart your services so the eBPF instrumenters can attach correctly.

Install via Docker

Deploy collector with Docker Compose 1.25.0 or later using the provided install script:

Install using Docker Compose
curl -sSL https://raw.githubusercontent.com/BetterStackHQ/collector/main/install.sh | \
  COLLECTOR_SECRET="$COLLECTOR_SECRET" bash


After installing Better Stack collector, restart your services so the eBPF instrumenters can attach correctly.

Install to Docker Swarm

Deploy collector to each node in your Swarm cluster with Docker Compose 1.25.0 or later using the following script:

Deploy to all Swarm nodes
curl -sSL https://raw.githubusercontent.com/BetterStackHQ/collector/refs/heads/main/deploy-to-swarm.sh | \
    MANAGER_NODE=root@swarm-manager COLLECTOR_SECRET="$COLLECTOR_SECRET" bash


After installing Better Stack collector, restart your services so the eBPF instrumenters can attach correctly.

Install and use as egress proxy

Run collector as a forwarding proxy for logs, traces, and metrics to save on egress costs.

Install collector as egress proxy.

Required kernel features

Collector requires Linux kernel 5.14 or newer for reliable eBPF-based auto-instrumentation. It relies on BTF, CO-RE, and the eBPF ring buffer (BPF_MAP_TYPE_RINGBUF). Older kernels may work if your distribution has backported these features.

Check if your system supports all the required features with:

Host system Kubernetes Docker
curl -sSL https://telemetry.betterstack.com/api/collector/public/ebpf.sh | bash
kubectl run -i --rm ebpf-check --image=alpine --restart=Never --privileged=true -- sh -c "apk add --no-cache bash wget -q && \
  wget -qO- https://telemetry.betterstack.com/api/collector/public/ebpf.sh | bash"
docker run --rm --privileged alpine:latest sh -c "apk add --no-cache bash wget -q && \
  wget -qO- https://telemetry.betterstack.com/api/collector/public/ebpf.sh | bash"

Your cluster doesn't support all the required features?

Use OpenTelemetry SDK and send traces to Better Stack anyway.

Auto-instrument apps with OpenTelemetry SDK

Collector automatically gives you eBPF traces and metrics for all your services. Send OpenTelemetry SDK traces and logs to the collector for more control and flexibility.

Enable OpenTelemetry in Better Stack collector

Enable OpenTelemetry ports on the collector to send OpenTelemetry traces, logs, and metrics to Better Stack via the collector:

Kubernetes Docker Compose Docker Swarm
# Enable for existing collector
helm repo update
helm upgrade better-stack-collector better-stack/collector \
  --reuse-values \
  --set collectOtel.grpcPort=4317 \
  --set collectOtel.httpPort=4318

# Deploy new collector with OpenTelemetry forwarding
helm repo add better-stack https://betterstackhq.github.io/collector-helm-chart
helm repo update
helm install better-stack-collector better-stack/collector \
  --set collector.env.COLLECTOR_SECRET="$COLLECTOR_SECRET" \
  --set collectOtel.grpcPort=4317 \
  --set collectOtel.httpPort=4318
curl -sSL https://raw.githubusercontent.com/BetterStackHQ/collector/main/install.sh | \
  COLLECTOR_SECRET="$COLLECTOR_SECRET" \
  COLLECT_OTEL_GRPC_PORT=4317 \
  COLLECT_OTEL_HTTP_PORT=4318 bash
curl -sSL https://raw.githubusercontent.com/BetterStackHQ/collector/refs/heads/main/deploy-to-swarm.sh | \
    MANAGER_NODE=root@swarm-manager COLLECTOR_SECRET="$COLLECTOR_SECRET" \
    COLLECT_OTEL_GRPC_PORT=4317 \
    COLLECT_OTEL_HTTP_PORT=4318 bash


Navigate to Sources -> Your collector -> Configure -> Ingesting. Then, enable the OpenTelemetry SDK traces checkbox.

All services can now send OpenTelemetry data to Better Stack via the open ports.

Instrument your services

Instrument your services with OpenTelemetry:

Automatically instrument services in Kubernetes

Install OpenTelemetry Operator

Install OpenTelemetry Operator to your cluster.

Set up the Operator

Create an Instrumentation resource to make sure OpenTelemetry data is sent to Better Stack collector:

instrumentation.yaml
apiVersion: opentelemetry.io/v1alpha1
kind: Instrumentation
metadata:
  name: better-stack
spec:
  exporter:
    endpoint: http://better-stack-collector-otlp.namespace-with-bs-collector.svc:4317

Enable auto-instrumentation on your workloads

Annotate your workloads to enable auto-instrumentation:

deployment.yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: my-app
spec:
  template:
    metadata:
      annotations:
        instrumentation.opentelemetry.io/inject-python: "better-stack"

Use the correct language for your service

Replace inject-python with the annotation for your language: inject-java, inject-nodejs, inject-dotnet, or inject-go.
See all available languages in the Operator documentation.

Add the OTEL_SERVICE_NAME and OTEL_RESOURCE_ATTRIBUTES environment variables to make sure the OpenTelemetry data is correctly connected to the matching service:

Service and host env
env:
  - name: OTEL_SERVICE_NAME
    value: "my-namespace/Deployment/my-application"
  - name: POD_NAME
    valueFrom:
      fieldRef:
        fieldPath: metadata.name
  - name: NODE_NAME
    valueFrom:
      fieldRef:
        fieldPath: spec.nodeName
  - name: OTEL_RESOURCE_ATTRIBUTES
    value: "host.name=$(NODE_NAME),container.id=my-namespace/$(POD_NAME)/my-application"

OpenTelemetry traces are now flowing into Better Stack

Instrument individual services with OpenTelemetry

OpenTelemetry supports zero-code and manual SDK instrumentation depending on the language. Follow the official OpenTelemetry integration guide for the language your service is using.

Send OpenTelemetry data to Better Stack collector

Configure your OpenTelemetry SDK to send data to the Better Stack collector. In Kubernetes, use the node's IP address to reach the collector's host ports:

HTTP gRPC
env:
  - name: NODE_IP
    valueFrom:
      fieldRef:
        fieldPath: status.hostIP
  - name: OTEL_EXPORTER_OTLP_ENDPOINT
    value: "http://$(NODE_IP):4318"
env:
  - name: NODE_IP
    valueFrom:
      fieldRef:
        fieldPath: status.hostIP
  - name: OTEL_EXPORTER_OTLP_ENDPOINT
    value: "http://$(NODE_IP):4317"

For Docker Compose and Docker Swarm, point your services to the collector via localhost, for example http://localhost:4318 for HTTP or http://localhost:4317 for gRPC.
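As a sketch, assuming your service is defined in a docker-compose.yml, the endpoint can be set via an environment variable. If your service container doesn't share the host network, use the host's address instead of localhost:

```yaml
services:
  my-app:
    image: my-app:latest  # illustrative image name
    environment:
      # HTTP OTLP endpoint; use port 4317 for gRPC
      OTEL_EXPORTER_OTLP_ENDPOINT: "http://localhost:4318"
```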

Add service and host attributes to all services instrumented with the OpenTelemetry SDK. Set OTEL_SERVICE_NAME and OTEL_RESOURCE_ATTRIBUTES as follows:

Kubernetes Docker
env:
  - name: OTEL_SERVICE_NAME
    value: "my-namespace/Deployment/my-application"
  - name: POD_NAME
    valueFrom:
      fieldRef:
        fieldPath: metadata.name
  - name: NODE_NAME
    valueFrom:
      fieldRef:
        fieldPath: spec.nodeName
  - name: OTEL_RESOURCE_ATTRIBUTES
    value: "host.name=$(NODE_NAME),container.id=my-namespace/$(POD_NAME)/my-application"
export OTEL_SERVICE_NAME="my-application"
export OTEL_RESOURCE_ATTRIBUTES="host.name=my-host-123,container.id=my-application-0a1b2c3d4e"

How are the service and host attributes used in Better Stack?

service.name is used to automatically group traces and logs by service in Better Stack dashboards. host.name and container.id help correlate SDK-generated telemetry with eBPF-generated data from the same workload.

OpenTelemetry traces are now flowing into Better Stack

Collecting database metrics

MySQL

Create a database user with the following permissions to collect MySQL database metrics:

Set up MySQL database user
CREATE USER 'betterstack'@'%' IDENTIFIED BY '<PASSWORD>';
GRANT SELECT, PROCESS, REPLICATION CLIENT ON *.* TO 'betterstack'@'%';
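As a quick sanity check, you can verify the grants took effect by running the following as an administrative user:

```sql
-- Lists the privileges granted to the monitoring user
SHOW GRANTS FOR 'betterstack'@'%';
```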

PostgreSQL

If the default postgres database does not exist, create it:

Create default Postgres database
CREATE DATABASE postgres;

Create a database user with the pg_monitor role and enable the pg_stat_statements extension:

Set up Postgres database user
CREATE ROLE betterstack WITH LOGIN PASSWORD '<PASSWORD>';
GRANT pg_monitor TO betterstack;
GRANT CONNECT ON DATABASE postgres TO betterstack;
CREATE EXTENSION pg_stat_statements;

Make sure the pg_stat_statements extension is loaded via the shared_preload_libraries server setting.
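One way to do that, assuming superuser access, is via ALTER SYSTEM followed by a full server restart (shared_preload_libraries cannot be changed with a reload):

```sql
-- Takes effect only after restarting the server,
-- e.g. systemctl restart postgresql
ALTER SYSTEM SET shared_preload_libraries = 'pg_stat_statements';
```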


Collecting Apache and nginx metrics

Better Stack collector automatically discovers nginx and Apache running on your hosts. Once discovered, the collector scrapes metrics from each process every 15 seconds.

For this to work, each process needs a status endpoint enabled. If the endpoint isn't available, the process will show as Configuration required in the Better Stack dashboard.

Nginx

The collector scrapes nginx metrics from the stub_status module at /nginx_status. Add the following to your nginx configuration:

/etc/nginx/conf.d/status.conf
server {
    listen 80;
    location /nginx_status {
        stub_status;
        allow 127.0.0.1;
        allow ::1;
        deny all;
    }
}

Reload nginx to apply the change:

Reload nginx
nginx -s reload

The stub_status module is included in most nginx packages by default. Verify it's available via nginx -V 2>&1 | grep -o with-http_stub_status_module.

Apache

The collector scrapes Apache metrics from mod_status at /server-status?auto. Enable the module and add the following configuration:

/etc/apache2/mods-enabled/status.conf
<Location "/server-status">
    SetHandler server-status
    Require local
</Location>

Enable the module and reload Apache:

Debian / Ubuntu RHEL / CentOS / Amazon Linux
a2enmod status
systemctl reload apache2
# mod_status is usually enabled by default
# Verify with: httpd -M | grep status
systemctl reload httpd

Running older Apache?

Set ExtendedStatus On to get the full set of metrics. On Apache 2.3.6 and up, this is already enabled by default.

Verify the configuration

After enabling the status endpoint, verify it's reachable from the host:

Nginx Apache
curl http://127.0.0.1/nginx_status
curl http://127.0.0.1/server-status?auto

The process status in Better Stack will automatically change from Configuration required to Collecting within a few minutes.

Collect additional logs

Send any log files from the host filesystem to Better Stack. For example, add the following configuration to your manual.vector.yaml file to send the logs from /var/www/custom.log and any file matching /var/www/**/service.*.log:

Sending custom logs
sources:
  better_stack_logs_custom_file:
    type: file
    read_from: beginning
    ignore_older_secs: 259200 # send only log lines newer than 3 days
    include:
      - "/host/var/www/custom.log"
      - "/host/var/www/**/service.*.log"

Any source with a name starting with better_stack_logs_ is forwarded to the logs sink.

Upgrading collector

Getting the latest collector is mostly the same as initial installation:

Kubernetes Docker Compose Docker Swarm
# Update repo and upgrade chart
helm repo update
helm upgrade better-stack-collector better-stack/collector \
  --set collector.env.COLLECTOR_SECRET="$COLLECTOR_SECRET"
# Rerun install command to upgrade collector
curl -sSL https://raw.githubusercontent.com/BetterStackHQ/collector/main/install.sh | \
  COLLECTOR_SECRET="$COLLECTOR_SECRET" bash
# Rerun deploy command to upgrade collector
curl -sSL https://raw.githubusercontent.com/BetterStackHQ/collector/refs/heads/main/deploy-to-swarm.sh | \
    MANAGER_NODE=root@swarm-manager COLLECTOR_SECRET="$COLLECTOR_SECRET" bash

# Not working? Force upgrade removes existing collector containers before upgrading collector
curl -sSL https://raw.githubusercontent.com/BetterStackHQ/collector/refs/heads/main/deploy-to-swarm.sh | \
    ACTION=force_upgrade \
    MANAGER_NODE=root@swarm-manager COLLECTOR_SECRET="$COLLECTOR_SECRET" bash


Uninstalling collector

Something wrong? Let us know at hello@betterstack.com.

Kubernetes Docker Compose Docker Swarm
# Uninstall chart and remove repo
helm uninstall better-stack-collector
helm repo remove better-stack
curl -sSL https://raw.githubusercontent.com/BetterStackHQ/collector/main/uninstall.sh | bash
# Uninstall using a provided script
curl -sSL https://raw.githubusercontent.com/BetterStackHQ/collector/refs/heads/main/deploy-to-swarm.sh | \
    MANAGER_NODE=root@swarm-manager COLLECTOR_SECRET="$COLLECTOR_SECRET" ACTION=uninstall bash

Collecting Prometheus metrics in Kubernetes

Automatically discover and scrape metrics from all pods and services with native Prometheus annotations. Add the following annotations to your pods and services:

Example annotations
prometheus.io/scrape: "true"
prometheus.io/path: "/metrics"
prometheus.io/port: "9090"
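These annotations go on the pod template (or on the Service metadata). For example, on a Deployment:

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: my-app
spec:
  template:
    metadata:
      annotations:
        prometheus.io/scrape: "true"
        prometheus.io/path: "/metrics"
        prometheus.io/port: "9090"
```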

Using Secrets and ConfigMaps in Kubernetes

Pass collector token as a Secret and use ConfigMap to set environment variables:

values.yaml
collector:
  envFrom:
    - configMapRef:
        name: my-collector-config
    - secretRef:
        name: my-collector-secret
ebpf:
  envFrom:
    - configMapRef:
        name: my-ebpf-config
    - secretRef:
        name: my-ebpf-secret
secret.yaml
apiVersion: v1
kind: Secret
type: Opaque
metadata:
  name: my-collector-secret
stringData:
  COLLECTOR_SECRET: "$COLLECTOR_SECRET"


Environment variables have precedence over Secrets and ConfigMaps

Environment variables set explicitly in collector.env or ebpf.env will take precedence over variables sourced from envFrom.

Setting Pod Priority

To ensure the collector pods have higher scheduling priority and are less likely to be evicted during resource shortages, you can assign them a PriorityClass. This is especially important in production clusters where telemetry data is critical.

To set a priority class, specify priorityClassName in your values.yaml. You must have a PriorityClass object already defined in your cluster.

values.yaml
priorityClassName: "high-priority-nonpreempting"

Using labels with Docker logs

Docker's json-file log driver can be configured to attach container labels to logs. After configuring Docker, you will see the labels on logs in the attrs object.

Adjusting /etc/docker/daemon.json requires restarting the Docker daemon with systemctl restart docker and may cause downtime.
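For example, assuming your containers carry a com.example.service label, /etc/docker/daemon.json could look like this (the label names are illustrative):

```json
{
  "log-driver": "json-file",
  "log-opts": {
    "labels": "com.example.service,com.example.env"
  }
}
```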

Deploying to specific Docker Swarm nodes

To restrict the Collector to selected Swarm nodes, add the better-stack.collector=true label to those nodes from a swarm manager. Nodes without the label will stop running the Collector.

Managing Collector placement in Docker Swarm
# Enable Collector on a specific node
docker node update --label-add better-stack.collector=true <node-name>

# Disable Collector on a node
docker node update --label-rm better-stack.collector <node-name>

# If needed, remove the eBPF container from the node directly
docker stop better-stack-ebpf && docker rm better-stack-ebpf

Security hardening

Collector is configured to be easy to integrate by default. You can adjust its settings to match your security needs.

Mount fewer paths into the collector container

Collector mounts the root filesystem read-only into the collector container by default. You can mount only select paths into the container:

Kubernetes Docker Compose Docker Swarm
helm repo add better-stack https://betterstackhq.github.io/collector-helm-chart
helm repo update
helm install better-stack-collector better-stack/collector \
  --set collector.env.COLLECTOR_SECRET="$COLLECTOR_SECRET" \
  --set collector.hostMounts.paths[0]=/var/www/custom \
  --set collector.hostMounts.paths[1]=/var/lib/something
curl -sSL https://raw.githubusercontent.com/BetterStackHQ/collector/main/install.sh | \
  COLLECTOR_SECRET="$COLLECTOR_SECRET" \
  MOUNT_HOST_PATHS="/var/www/custom,/var/lib/something" bash
curl -sSL https://raw.githubusercontent.com/BetterStackHQ/collector/refs/heads/main/deploy-to-swarm.sh | \
    MANAGER_NODE=root@swarm-manager COLLECTOR_SECRET="$COLLECTOR_SECRET" \
    MOUNT_HOST_PATHS="/var/www/custom,/var/lib/something" bash


Selected paths will be mounted read-only under the /host prefix, e.g. /host/var/www/custom.

Required permissions

The ebpf container requires privileged access as it runs the eBPF code responsible for generating traces. The collector container doesn't request privileged access.

Managing the collector secret with Kubernetes Secrets

For enhanced security, we recommend managing the COLLECTOR_SECRET using a Kubernetes Secret instead of placing it directly in your values.yaml file. This prevents the secret from being stored in plain text in your version control system.

Create Kubernetes secret
kubectl create secret generic better-stack-collector-secret \
  --from-literal=COLLECTOR_SECRET="$COLLECTOR_SECRET"


values.yaml
collector:
  env:
    # Leave this empty when using a secret
    COLLECTOR_SECRET: ""
  envFrom:
    # Reference the secret you created
    - secretRef:
        name: better-stack-collector-secret

Troubleshooting

eBPF performance impact

Better Stack collector uses open source eBPF-based instrumentation. eBPF plugs directly into the kernel to generate metrics and traces based on network calls. In some cases, eBPF can increase CPU usage and network latency. For most use cases and applications, the performance impact is negligible or well worth the auto-instrumentation benefits.

Issues with a specific service?

Toggle eBPF for the affected service:

  • Go to Sources -> your collector -> Configure.
  • Scroll down to Services, find your service, and toggle Collect traces.

Traces won't be collected for the service via eBPF-instrumentation.

Cluster-wide performance issues

Seeing elevated CPU load or network latencies across your cluster?
Switch eBPF tracing to a lighter variant to reduce performance impact:

  • Go to Sources -> your collector -> Configure.
  • Scroll down to What data do you want to collect? and enable Basic distributed tracing.

eBPF tracing will now run significantly faster. Traces might be incomplete.

Disabling eBPF completely

Still experiencing issues? Try these options in order:

  • Disable both Full distributed tracing and Basic distributed tracing. No traces will be produced.
  • Disable Service map & RED metrics.
  • Disable Host metrics.

Well-known issues

  • SignalR WebSocket library experiences connection issues with eBPF enabled

Conflicts with other tracing solutions

On rare occasions, running the Better Stack collector alongside other tracing solutions, like Sentry or Datadog, can cause conflicts.

A possible symptom is seeing connection errors to services such as Elasticsearch after installing the collector. The service might start returning errors indicating duplicate headers.

Error example
{
  "error": {
    "type": "illegal_argument_exception",
    "reason": "multiple values for single-valued header [traceparent]."
  }
}

In such a case, we recommend disabling tracing in the other monitoring tool to prevent duplicate headers. This ensures that only the Better Stack collector is responsible for injecting tracing headers.

Why exactly does the conflict occur?

This error occurs because both the collector's eBPF-based instrumentation (OBI) and the other tracing tool are adding a traceparent header. Some services, like Elasticsearch, can strictly enforce the W3C specification and reject requests with multiple traceparent headers.

OBI is designed to prevent this by respecting existing traceparent headers, as seen in the official documentation. However, conflicts can still occur if the other tool doesn't handle existing headers correctly.

Seeing incomplete traces

If you notice incomplete traces, try the following steps:

Restart your services

This ensures the Better Stack collector’s eBPF instrumenters hook into processes correctly.

Remove other tracing agents

Tools like Datadog and Jaeger can conflict with eBPF-based instrumentation.

Disable other eBPF instrumenters

Running more than one eBPF tool at the same time may cause spans to be dropped.

The container name is already in use

Stop and remove previous collector version:

Stop and remove containers
docker stop better-stack-collector better-stack-ebpf && \
  docker rm better-stack-collector better-stack-ebpf

Getting container name is already in use when upgrading collector in Docker Swarm?

Force upgrade to remove existing collector containers before installing collector:

Docker Swarm force upgrade
curl -sSL https://raw.githubusercontent.com/BetterStackHQ/collector/refs/heads/main/deploy-to-swarm.sh | \
    ACTION=force_upgrade \
    MANAGER_NODE=root@swarm-manager COLLECTOR_SECRET="$COLLECTOR_SECRET" bash


If you notice errors mentioning Mountpoint '/' has total capacity of X bytes, but configured buffers using mountpoint have total maximum size of Y bytes, lower the Batching on disk size in the collector's configuration in the Better Stack dashboard.

Collector allocates three buffers of the given size, which by default totals 9 GB. Smaller buffers are less resilient to data loss during transient network failures.

Insufficient ServiceAccount permissions in Kubernetes

If you see Kubernetes discovery failed errors, make sure the ServiceAccount used by the Collector can list pods, services, endpoints, and namespaces. The Collector relies on the Kubernetes API to discover workloads and attach metadata to logs, metrics, and traces.
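The Helm chart normally sets this up for you. If you manage RBAC yourself, a minimal sketch might look like the following; the role and binding names are illustrative, and the ServiceAccount must match the one the collector runs under:

```yaml
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  name: better-stack-collector-discovery  # illustrative name
rules:
  - apiGroups: [""]
    resources: ["pods", "services", "endpoints", "namespaces"]
    verbs: ["get", "list", "watch"]
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: better-stack-collector-discovery
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: better-stack-collector-discovery
subjects:
  - kind: ServiceAccount
    name: better-stack-collector  # match the ServiceAccount used by the collector
    namespace: default
```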

GitHub repository

Better Stack collector is open source. See the GitHub repository.

Need help?

Please let us know at hello@betterstack.com.
We're happy to help! 🙏