Better Stack collector

The Better Stack collector is the easiest and recommended way to integrate Better Stack into your environment.

Why should you use the collector?

Instrument without code changes

Leverage eBPF to instrument your Kubernetes or Docker clusters to gather logs, metrics, and OpenTelemetry traces without code changes.

Monitor and control your collectors remotely

Remotely monitor the collector's throughput and tune its configuration directly from the Better Stack dashboard, adjusting sampling, compression, and batching as needed.

Have a legacy service? Use the Better Stack dashboard to increase sampling and save on ingestion and egress costs, and only scale back up when you need the telemetry.

[Screenshot: collector configuration in the Better Stack dashboard]

Databases instrumented automatically

The collector automatically recognizes databases running in your cluster. Monitor the internals of your PostgreSQL, MySQL, Redis, Memcached, or MongoDB out of the box.

Transform wide events with VRL

Transform logs, spans, or other wide events to redact personally identifiable information, or simply discard useless events so that you don't get billed for them.
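As a sketch, assuming you can add Vector transforms to the collector's manual.vector.yaml (covered under Sending custom logs below), a remap transform written in VRL could drop and redact events before they leave your infrastructure. The transform name, input name, and field names here are illustrative:

Example VRL transform
transforms:
  better_stack_redact_pii:
    type: remap
    inputs:
      - better_stack_logs_custom_file # illustrative source name
    drop_on_abort: true
    source: |
      # Drop noisy health-check events so they are never ingested
      if contains(string!(.message), "GET /healthz") { abort }
      # Redact e-mail addresses in the message field
      .message = replace(string!(.message), r'[a-zA-Z0-9._%+-]+@[a-zA-Z0-9.-]+', "[REDACTED]")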

Collect additional OpenTelemetry traces

Send any OpenTelemetry traces to Better Stack.

Get the best of both worlds: collect traces with zero effort using eBPF-based auto-instrumentation. For full flexibility, instrument your services using OpenTelemetry SDKs and send custom traces to Better Stack alongside eBPF data.

Getting started

Install via Kubernetes Helm chart

Add the collector Helm chart and install it:

Add and install Helm chart
helm repo add better-stack https://betterstackhq.github.io/collector-helm-chart
helm repo update
helm install better-stack-collector better-stack/collector \
  --set collector.env.COLLECTOR_SECRET="$COLLECTOR_SECRET"

For advanced configuration options, see the values.yaml file.
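For example, a minimal sketch of installing with your own overrides file; my-values.yaml is a file you create, and the available keys are documented in the chart's values.yaml:

Install with custom values
helm show values better-stack/collector > my-values.yaml
# edit my-values.yaml, then install or upgrade with the overrides applied
helm upgrade --install better-stack-collector better-stack/collector \
  -f my-values.yaml \
  --set collector.env.COLLECTOR_SECRET="$COLLECTOR_SECRET"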

After installing Better Stack collector, restart your services so the eBPF instrumenters can attach correctly.

Install via Docker

Deploy collector with Docker Compose 1.25.0 or later using the provided install script:

Install using Docker Compose
curl -sSL https://raw.githubusercontent.com/BetterStackHQ/collector/main/install.sh | \
  COLLECTOR_SECRET="$COLLECTOR_SECRET" bash

After installing Better Stack collector, restart your services so the eBPF instrumenters can attach correctly.
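To check that the collector came up, list its containers; the names below match the defaults used by the install script:

Verify collector containers
docker ps --filter "name=better-stack"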

Install to Docker Swarm

Deploy collector to each node in your Swarm cluster with Docker Compose 1.25.0 or later using the following script:

Deploy to all Swarm nodes
curl -sSL https://raw.githubusercontent.com/BetterStackHQ/collector/refs/heads/main/deploy-to-swarm.sh | \
    MANAGER_NODE=root@swarm-manager COLLECTOR_SECRET="$COLLECTOR_SECRET" bash

After installing Better Stack collector, restart your services so the eBPF instrumenters can attach correctly.

Install as egress proxy with Docker

Forward logs, spans, and metrics

Collector can run as a forwarding proxy for logs, metrics, and traces. This is an advanced feature. We recommend running a dedicated collector instance for proxying.

Set the PROXY_PORT variable before installing or updating collector:

Install using Docker Compose
curl -sSL https://raw.githubusercontent.com/BetterStackHQ/collector/main/install.sh | \
  COLLECTOR_SECRET="$COLLECTOR_SECRET" PROXY_PORT=80 bash

This port must be free on the host system.

This will enable three endpoints on the host the collector is installed on, under the specified port:

  • /v1/logs, also available under /
  • /v1/metrics
  • /v1/traces

These endpoints use Vector’s buffering and batching, which helps reduce egress costs.

You can also secure the proxy with HTTP Basic authentication:

[Screenshot: HTTP Basic authentication settings in the collector's proxy configuration]
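Once Basic authentication is enabled, clients must send the configured credentials with every request. A sketch with placeholder host and credentials:

Send a log line through the authenticated proxy
curl -X POST \
     -u "proxy-user:proxy-password" \
     -H 'Content-Type: application/json' \
     -d '{"dt":"'"$(date -u +'%Y-%m-%d %T UTC')"'","message":"Hello via authenticated proxy!"}' \
     "http://$COLLECTOR_HOST:$PROXY_PORT/"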

Generate an SSL certificate with Let's Encrypt

You can secure the proxy with SSL certificates from Let’s Encrypt.

Set PROXY_PORT and USE_TLS before installing or upgrading collector:

Install using Docker Compose
curl -sSL https://raw.githubusercontent.com/BetterStackHQ/collector/main/install.sh | \
  COLLECTOR_SECRET="$COLLECTOR_SECRET" PROXY_PORT=443 USE_TLS=true bash

When USE_TLS is set, collector will also bind port 80 to handle Let’s Encrypt HTTP-01 challenges. If port 80 is already in use, collector will fail to start. We recommend using 443 as PROXY_PORT with SSL.

Enable SSL in the collector configuration:

[Screenshot: SSL settings in the collector's proxy configuration]

The configured domain name is updated automatically as part of the remote configuration. Collector will try to obtain a certificate from Let’s Encrypt every 10 minutes until successful. Once issued, the certificate is automatically renewed every 6 hours.

Send a sample log line

You can test the egress proxy by sending a sample log line.

 
curl -X POST \
     -H 'Content-Type: application/json' \
     -d '{"dt":"'"$(date -u +'%Y-%m-%d %T UTC')"'","message":"Hello world via egress proxy!"}' \
     --insecure \
     "https://$TLS_DOMAIN:$PROXY_PORT/"

Required kernel features

Collector requires Linux kernel 5.14 or newer for reliable eBPF-based auto-instrumentation. It relies on BTF, CO-RE, and the eBPF ring buffer (BPF_MAP_TYPE_RINGBUF). Older kernels may work if your distribution has backported these features.

Check if your system supports all the required features with:

# Check on the host system
curl -sSL https://raw.githubusercontent.com/BetterStackHQ/collector/refs/heads/main/ebpf.sh | bash

# Check from Kubernetes
kubectl run -i --rm ebpf-check --image=alpine --restart=Never --privileged=true -- sh -c "apk add --no-cache bash wget -q && \
  wget -qO- https://raw.githubusercontent.com/BetterStackHQ/collector/refs/heads/main/ebpf.sh | bash"

# Check with Docker
docker run --rm --privileged alpine:latest sh -c "apk add --no-cache bash wget -q && \
  wget -qO- https://raw.githubusercontent.com/BetterStackHQ/collector/refs/heads/main/ebpf.sh | bash"

Your cluster doesn't support all the required features?

Integrate OpenTelemetry SDKs into your services and send traces to Better Stack anyway.

Auto-instrumenting applications with OpenTelemetry SDK

You can connect your eBPF-generated traces with traces coming directly from the official OpenTelemetry SDK for your platform.

  1. Create a new Source and pick the OpenTelemetry platform.

  2. Follow the official OpenTelemetry integration guide for your language.

  3. Go to Sources -> your collector source -> Services, then find and assign the OpenTelemetry source to the matching discovered service.

You can now use our OpenTelemetry Tracing dashboard and seamlessly analyze traces coming from eBPF alongside traces coming from the OpenTelemetry SDK directly.
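As an illustration, most OpenTelemetry SDKs read the standard OTLP exporter environment variables, so wiring a service up usually comes down to something like the following; the values are placeholders, use the endpoint and headers shown for your OpenTelemetry source in Better Stack:

Example OTLP exporter environment
export OTEL_SERVICE_NAME="checkout-service"                             # placeholder service name
export OTEL_EXPORTER_OTLP_ENDPOINT="https://<your-otlp-ingesting-host>"
export OTEL_EXPORTER_OTLP_HEADERS="<header-name>=<your-source-token>"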

Collecting database metrics

MySQL

Create a database user with the following permissions to collect MySQL database metrics:

Set up MySQL database user
CREATE USER 'betterstack'@'%' IDENTIFIED BY '<PASSWORD>';
GRANT SELECT, PROCESS, REPLICATION CLIENT ON *.* TO 'betterstack'@'%';
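You can verify the grants afterwards:

Verify MySQL grants
SHOW GRANTS FOR 'betterstack'@'%';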

PostgreSQL

Create a database user with the pg_monitor role and enable the pg_stat_statements extension:

Set up Postgres database user
CREATE ROLE betterstack WITH LOGIN PASSWORD '<PASSWORD>';
GRANT pg_monitor TO betterstack;
CREATE EXTENSION pg_stat_statements;

Make sure the pg_stat_statements extension is loaded via the shared_preload_libraries server setting.
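If the extension isn't preloaded yet, add it to postgresql.conf and restart PostgreSQL; the file's location depends on your installation:

Enable pg_stat_statements preloading
# postgresql.conf
shared_preload_libraries = 'pg_stat_statements'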

Sending custom logs

Send any log files from the host filesystem to Better Stack. For example, add the following configuration to your manual.vector.yaml file to send the logs from /var/www/custom.log and any file matching /var/www/**/service.*.log:

Sending custom logs
sources:
  better_stack_logs_custom_file:
    type: file
    read_from: beginning
    ignore_older_secs: 259200 # send only log lines newer than 3 days
    include:
      - /host/var/www/custom.log
      - /host/var/www/**/service.*.log

Any source with a name starting with better_stack_logs_ is forwarded to the logs sink.

Advanced configuration

Collecting Prometheus metrics in Kubernetes

Automatically discover and scrape metrics from all pods and services that carry the native Prometheus annotations. Add the following annotations to your pods and services:

Example annotations
prometheus.io/scrape: "true"
prometheus.io/path: "/metrics"
prometheus.io/port: "9090"
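On a pod, that could look like the following sketch; adjust the path and port to wherever your application exposes its metrics:

Example pod with Prometheus annotations
apiVersion: v1
kind: Pod
metadata:
  name: my-app
  annotations:
    prometheus.io/scrape: "true"
    prometheus.io/path: "/metrics"
    prometheus.io/port: "9090"
spec:
  containers:
    - name: my-app
      image: my-app:latest
      ports:
        - containerPort: 9090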

Upgrading collector

Upgrading to the latest collector is mostly the same as the initial installation:

# Kubernetes: update the repo and upgrade the chart
helm repo update
helm upgrade better-stack-collector better-stack/collector \
  --set collector.env.COLLECTOR_SECRET="$COLLECTOR_SECRET"

# Docker Compose: rerun the install command to upgrade the collector
curl -sSL https://raw.githubusercontent.com/BetterStackHQ/collector/main/install.sh | \
  COLLECTOR_SECRET="$COLLECTOR_SECRET" bash

# Docker Swarm: rerun the deploy command to upgrade the collector
curl -sSL https://raw.githubusercontent.com/BetterStackHQ/collector/refs/heads/main/deploy-to-swarm.sh | \
    MANAGER_NODE=root@swarm-manager COLLECTOR_SECRET="$COLLECTOR_SECRET" bash

# Not working? Force upgrade removes existing collector containers before upgrading collector
curl -sSL https://raw.githubusercontent.com/BetterStackHQ/collector/refs/heads/main/deploy-to-swarm.sh | \
    ACTION=force_upgrade \
    MANAGER_NODE=root@swarm-manager COLLECTOR_SECRET="$COLLECTOR_SECRET" bash

Uninstalling collector

Something wrong? Let us know at hello@betterstack.com.

# Kubernetes: uninstall the chart and remove the repo
helm uninstall better-stack-collector
helm repo remove better-stack

# Docker Compose: uninstall using the provided script
curl -sSL https://raw.githubusercontent.com/BetterStackHQ/collector/main/uninstall.sh | bash

# Docker Swarm: uninstall using the deploy script
curl -sSL https://raw.githubusercontent.com/BetterStackHQ/collector/refs/heads/main/deploy-to-swarm.sh | \
    MANAGER_NODE=root@swarm-manager COLLECTOR_SECRET="$COLLECTOR_SECRET" ACTION=uninstall bash

Troubleshooting

Conflicts with other tracing solutions

On rare occasions, running the Better Stack collector alongside other tracing solutions, like Sentry or Datadog, can cause conflicts.

A possible symptom is seeing connection errors to services such as Elasticsearch after installing the collector. The service might start returning errors indicating duplicate headers.

Error example
{
  "error": {
    "type": "illegal_argument_exception",
    "reason": "multiple values for single-valued header [traceparent]."
  }
}

In such a case, we recommend disabling tracing in the other monitoring tool to prevent duplicate headers. This ensures that only the Better Stack collector is responsible for injecting the tracing headers.
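How to do that depends on the tool; for example, Datadog's tracing libraries can typically be turned off via an environment variable. Check the other tool's documentation for its equivalent setting:

Example: disable tracing in another tool
# Datadog tracers generally honor this environment variable
export DD_TRACE_ENABLED=false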

Why exactly does the conflict occur?

This error occurs because both the collector's eBPF-based instrumentation (Beyla) and the other tracing tool are adding a traceparent header. Some services, like Elasticsearch, can strictly enforce the W3C specification and reject requests with multiple traceparent headers.

Beyla is designed to prevent this by respecting existing traceparent headers, as seen in the official documentation. However, conflicts can still occur if the other tool doesn't handle existing headers correctly.

Seeing incomplete traces

If you notice incomplete traces, try the following steps:

Restart your services

This ensures the Better Stack collector’s eBPF instrumenters hook into processes correctly.

Remove other tracing agents

Tools like Datadog and Jaeger can conflict with eBPF-based instrumentation.

Disable other eBPF instrumenters

Running more than one eBPF tool at the same time may cause spans to be dropped.

The container name is already in use

Stop and remove the previous collector version:

Stop and remove containers
docker stop better-stack-collector better-stack-beyla && \
  docker rm better-stack-collector better-stack-beyla

Getting a "container name is already in use" error when upgrading the collector in Docker Swarm?

Force the upgrade to remove existing collector containers before installing the collector again:

Docker Swarm force upgrade
curl -sSL https://raw.githubusercontent.com/BetterStackHQ/collector/refs/heads/main/deploy-to-swarm.sh | \
    ACTION=force_upgrade \
    MANAGER_NODE=root@swarm-manager COLLECTOR_SECRET="$COLLECTOR_SECRET" bash

GitHub repository

Better Stack collector is open source. See the GitHub repository.

Need help?

Please let us know at hello@betterstack.com.
We're happy to help! 🙏