Better Stack collector is the easiest and recommended way of integrating Better Stack into your environment.
Why should you use the collector?
Instrument without code changes
Leverage eBPF to instrument your Kubernetes or Docker clusters and gather logs, metrics, and OpenTelemetry traces without code changes.
Monitor and control your collectors remotely
Remotely monitor collector's throughput and tune the collector configuration directly from the Better Stack dashboard, adjusting sampling, compression, and batching as needed.
Have a legacy service? Use the Better Stack dashboard to increase sampling to save on ingestion and egress costs, and scale back up only when you need the telemetry.
Databases instrumented automatically
Collector automatically recognizes databases running in your cluster. Monitor the internals of your PostgreSQL, MySQL, Redis, Memcached, or MongoDB out of the box.
Transform wide events with VRL
Transform logs, spans, or other wide events to redact personally identifiable information, or simply discard useless events so that you don't get billed for them.
Collect additional OpenTelemetry traces
Send any OpenTelemetry traces to Better Stack.
Get the best of both worlds: collect traces with zero effort using eBPF-based auto-instrumentation. For full flexibility, instrument your services using OpenTelemetry SDKs and send custom traces to Better Stack alongside eBPF data.
Collector requires Linux kernel 5.14 or newer for reliable eBPF-based auto-instrumentation. It relies on BTF, CO-RE, and the eBPF ring buffer (BPF_MAP_TYPE_RINGBUF). Older kernels may work if your distribution has backported these features.
Check if your system supports all the required features with:
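The exact checks vary by distribution; as a minimal sketch, you can verify the prerequisites manually (the kernel config path is an assumption and may be /proc/config.gz on some systems):
# Kernel version - should be 5.14 or newer
uname -r
# BTF type information must be present for CO-RE
ls /sys/kernel/btf/vmlinux
# eBPF ring buffer support - expect CONFIG_BPF_RINGBUF=y
grep CONFIG_BPF_RINGBUF /boot/config-$(uname -r)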
Go to Sources -> your collector source -> Services, then find and assign the OpenTelemetry source to the matching discovered service.
You can now use our OpenTelemetry Tracing dashboard to seamlessly analyze traces coming from eBPF alongside traces sent directly from the OpenTelemetry SDK.
Advanced configuration
Collecting Prometheus metrics in Kubernetes
Automatically discover and scrape metrics from all pods and services with native Prometheus annotations. Add the following annotations to your pods and services:
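Assuming the standard prometheus.io/* annotation keys, marking a Service for scraping might look like this (the service name, port, and path are placeholders for your own setup):
# Hypothetical example: expose an app's metrics endpoint to discovery
kubectl annotate service my-app \
  prometheus.io/scrape="true" \
  prometheus.io/port="9090" \
  prometheus.io/path="/metrics"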
Uninstalling from Docker Swarm
# Uninstall using a provided script
curl -sSL https://raw.githubusercontent.com/BetterStackHQ/collector/refs/heads/main/deploy-to-swarm.sh | \
MANAGER_NODE=root@swarm-manager COLLECTOR_SECRET="$COLLECTOR_SECRET" ACTION=uninstall bash
Forward logs, spans, and metrics
Collector can run as a forwarding proxy for logs, metrics, and traces. This is an advanced feature. We recommend running a dedicated collector instance for proxying.
Set the PROXY_PORT variable before installing or updating collector:
export PROXY_PORT=80
This port must be free on the host system.
After installation, enable the Vector proxy in your collector configuration:
This will enable three endpoints on the host the collector is installed on, under the specified port:
/v1/logs, also available under /
/v1/metrics
/v1/traces
These endpoints use Vector’s buffering and batching, which helps reduce egress costs.
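As an illustration only (the hostname and payload shape are assumptions, and depending on your setup the request may also need an Authorization header), a log shipper can post JSON to the proxy instead of the public ingest endpoint:
# Hypothetical example: send a JSON log line through the collector proxy
curl -X POST "http://collector.example.com:80/v1/logs" \
  -H "Content-Type: application/json" \
  -d '{"message": "Hello from the collector proxy", "level": "info"}'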
You can also secure the proxy with HTTP Basic authentication:
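Once Basic authentication is enabled in the collector configuration, clients supply standard Basic credentials, for example (hostname and credentials are placeholders):
# Hypothetical example: call the proxy with HTTP Basic credentials
curl -u "proxy-user:proxy-password" -X POST "http://collector.example.com:80/v1/logs" \
  -H "Content-Type: application/json" \
  -d '{"message": "Authenticated request through the proxy"}'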
Generate an SSL certificate with Let's Encrypt
You can secure the proxy with SSL certificates from Let’s Encrypt.
Set both PROXY_PORT and TLS_DOMAIN before installing or upgrading collector:
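For example, with the recommended port 443 (the domain is a placeholder for one pointing at your collector host):
export PROXY_PORT=443
export TLS_DOMAIN=collector.example.com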
When TLS_DOMAIN is set, collector will also bind port 80 to handle Let’s Encrypt HTTP-01 challenges.
If port 80 is already in use, collector will fail to start. We recommend using 443 as PROXY_PORT with SSL.
Enable SSL in the collector configuration:
Collector will try to obtain a certificate from Let’s Encrypt every 10 minutes until successful. Once issued, the certificate is automatically renewed every 6 hours.
Troubleshooting
Conflicts with other tracing solutions
On rare occasions, running the Better Stack collector alongside other tracing solutions, such as Sentry or Datadog, can cause conflicts.
A possible symptom is seeing connection errors to services such as Elasticsearch after installing the collector. The service might start returning errors indicating duplicate headers.
In such a case, we recommend disabling tracing in the other monitoring tool to prevent duplicate headers. This ensures that only the Better Stack collector is responsible for injecting tracing headers.
Why exactly does the conflict occur?
This error occurs because both the collector's eBPF-based instrumentation (Beyla) and the other tracing tool are adding a traceparent header. Some services, like Elasticsearch, can strictly enforce the W3C specification and reject requests with multiple traceparent headers.
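For illustration, a request that ends up carrying two traceparent headers looks like the following; a service that strictly enforces the W3C spec will reject it (trace and span IDs are made-up example values):
# Illustrative request with duplicate traceparent headers
curl http://elasticsearch:9200/_cluster/health \
  -H 'traceparent: 00-4bf92f3577b34da6a3ce929d0e0e4736-00f067aa0ba902b7-01' \
  -H 'traceparent: 00-7651916cd43dd8448eb211c80319c1b5-b7ad6b7169203331-01'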
Beyla is designed to prevent this by respecting existing traceparent headers, as seen in the official documentation. However, conflicts can still occur if the other tool doesn't handle existing headers correctly.