Better Stack collector is the easiest and recommended way to integrate Better Stack into your environment.
Leverage eBPF to instrument your Kubernetes or Docker clusters to gather logs, metrics, and OpenTelemetry traces without code changes.
Remotely monitor the collector's throughput and tune sampling, compression, and batching as needed by adjusting the collector configuration directly from the Better Stack dashboard.
Have a legacy service? Use the Better Stack dashboard to increase sampling to save on ingestion and egress costs, and scale back up only when you need the telemetry.
Collector automatically recognizes databases running in your cluster. Monitor the internals of your PostgreSQL, MySQL, Redis, Memcached, or MongoDB out of the box.
Transform logs, spans, or other wide events to redact personally identifiable information, or simply discard useless events so you aren't billed for them.
Send any OpenTelemetry traces to Better Stack.
Get the best of both worlds: collect traces with zero effort using eBPF-based auto-instrumentation. For full flexibility, instrument your services using OpenTelemetry SDKs and send custom traces to Better Stack alongside eBPF data.
Add the collector Helm chart and install it:
For advanced configuration options, see the values.yaml file.
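A minimal sketch of the Helm installation follows. The repository URL, chart name, and release name are placeholders, not the real values; copy the exact commands, including your COLLECTOR_SECRET, from the Better Stack dashboard:

```shell
# <repo-url> and the chart/release names below are placeholders -- use the
# exact commands shown in your Better Stack dashboard.
helm repo add better-stack <repo-url>
helm repo update
helm install better-stack-collector better-stack/collector \
  --set collector.env.COLLECTOR_SECRET="<your-collector-secret>"
```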
After installing Better Stack collector, restart your services so the eBPF instrumenters can attach correctly.
Deploy collector with Docker Compose 1.25.0 or later using the provided install script:
After installing Better Stack collector, restart your services so the eBPF instrumenters can attach correctly.
Deploy collector to each node in your Swarm cluster with Docker Compose 1.25.0 or later using the following script:
After installing Better Stack collector, restart your services so the eBPF instrumenters can attach correctly.
Collector can run as a forwarding proxy for logs, metrics, and traces. This is an advanced feature. We recommend running a dedicated collector instance for proxying.
Set the PROXY_PORT variable before installing or updating collector:
This port must be free on the host system.
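For example, to expose the proxy on port 8123 (any free host port works), export the variable in the shell you run the install or update command from:

```shell
# 8123 is just an example; pick any port that is free on the host
export PROXY_PORT=8123
echo "$PROXY_PORT"
```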
This will enable three endpoints on the host the collector is installed on, under the specified port:
- /v1/logs, also available under /
- /v1/metrics
- /v1/traces

These endpoints use Vector's buffering and batching, which helps reduce egress costs.
You can also secure the proxy with HTTP Basic authentication:
You can secure the proxy with SSL certificates from Let’s Encrypt.
Set PROXY_PORT and USE_TLS before installing or upgrading collector:
When USE_TLS is set, collector will also bind port 80 to handle Let’s Encrypt HTTP-01 challenges.
If port 80 is already in use, collector will fail to start. We recommend using 443 as PROXY_PORT with SSL.
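A sketch of the two variables together; the exact truthy value accepted for USE_TLS ("1" below) is an assumption, so check the install snippet in your dashboard:

```shell
# 443 is the recommended proxy port when TLS is enabled; port 80 must also
# be free for the Let's Encrypt HTTP-01 challenge.
export PROXY_PORT=443
export USE_TLS=1  # accepted value is an assumption -- verify in your dashboard
```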
Enable SSL in the collector configuration:
The configured domain name is updated automatically as part of the remote configuration. Collector will try to obtain a certificate from Let’s Encrypt every 10 minutes until successful. Once issued, the certificate is automatically renewed every 6 hours.
You can test the egress proxy by sending a sample log line.
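A sketch of such a test with curl. The host name and the payload shape are placeholders, not the documented ingest format; add `-u "user:password"` if you enabled HTTP Basic authentication:

```shell
# Host, port, and payload are placeholders -- adjust to your setup.
curl -X POST "https://collector.example.com:443/v1/logs" \
  -H "Content-Type: application/json" \
  -d '{"message":"Test log line via collector proxy"}'
```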
Collector requires Linux kernel 5.14 or newer for reliable eBPF-based auto-instrumentation. It relies on BTF, CO-RE, and the eBPF ring buffer (BPF_MAP_TYPE_RINGBUF). Older kernels may work if your distribution has backported these features.
Check if your system supports all the required features with:
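If you prefer a quick manual spot-check instead, the following commands cover the two most common gaps, assuming a standard Linux layout:

```shell
# Kernel version: eBPF auto-instrumentation needs 5.14+ (or backported features)
uname -r
# BTF (required for CO-RE): available when this file exists
[ -f /sys/kernel/btf/vmlinux ] && echo "BTF: available" || echo "BTF: missing"
```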
Your cluster doesn't support all the required features?
Integrate OpenTelemetry SDKs into your services and send traces to Better Stack anyway.
You can connect your eBPF-generated traces with traces coming directly from the official OpenTelemetry SDK for your platform.
Create a new Source and pick OpenTelemetry platform.
Follow OpenTelemetry official integration guide for your language.
Go to Sources -> your collector source -> Services, then find and assign the OpenTelemetry source to the matching discovered service.
You can now use our OpenTelemetry Tracing dashboard and seamlessly analyze traces coming from eBPF alongside traces coming from the OpenTelemetry SDK directly.
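Most OpenTelemetry SDKs can be configured through the standard environment variables from the OpenTelemetry specification. The endpoint and header values below are placeholders; copy the real ones from your Better Stack OpenTelemetry source:

```shell
# Standard OpenTelemetry SDK environment variables; the placeholder values
# must come from your Better Stack OpenTelemetry source.
export OTEL_SERVICE_NAME="my-service"
export OTEL_EXPORTER_OTLP_ENDPOINT="<ingesting-host-from-your-source>"
export OTEL_EXPORTER_OTLP_HEADERS="<auth-header-from-your-source>"
```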
Create a database user with the following permissions to collect MySQL database metrics:
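A sketch of a typical MySQL monitoring grant set. The user name, host pattern, and password are placeholders, and the exact privileges the collector requires may differ from these common monitoring grants:

```sql
-- Placeholders: user name, host pattern, and password.
CREATE USER 'better_stack'@'%' IDENTIFIED BY '<password>';
GRANT PROCESS, REPLICATION CLIENT ON *.* TO 'better_stack'@'%';
GRANT SELECT ON performance_schema.* TO 'better_stack'@'%';
FLUSH PRIVILEGES;
```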
If the default postgres database does not exist, create it:
Create a database user with the pg_monitor role and enable the pg_stat_statements extension:
Make sure the pg_stat_statements extension is loaded via the shared_preload_libraries server setting.
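The PostgreSQL steps above can be sketched as follows, assuming you run them as a superuser; the user name and password are placeholders:

```sql
-- Only if the default database is missing:
CREATE DATABASE postgres;

-- Monitoring user with the pg_monitor role; name and password are placeholders.
CREATE USER better_stack WITH PASSWORD '<password>';
GRANT pg_monitor TO better_stack;
CREATE EXTENSION IF NOT EXISTS pg_stat_statements;
```

Note that pg_stat_statements only starts collecting data after `shared_preload_libraries = 'pg_stat_statements'` is set in postgresql.conf and the server is restarted.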
Send any log files from the host filesystem to Better Stack. For example, add the following configuration to your manual.vector.yaml file to send the logs from /var/www/custom.log and any file matching /var/www/**/service.*.log:
Any source with a name starting with better_stack_logs_ is forwarded to the logs sink.
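A sketch of such a configuration, assuming Vector's standard `file` source syntax; if your setup mounts host paths read-only under the /host prefix, prefix the paths accordingly:

```yaml
# manual.vector.yaml -- the source name must start with better_stack_logs_
sources:
  better_stack_logs_custom:
    type: file
    include:
      - /var/www/custom.log
      - /var/www/**/service.*.log
```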
Automatically discover and scrape metrics from all pods and services with native Prometheus annotations. Add the following labels to your pods and services:
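The de-facto Prometheus annotation keys look like the sketch below. Whether the collector reads these exact keys, and whether it reads them as labels, annotations, or both, is an assumption to verify against your dashboard's snippet:

```yaml
metadata:
  annotations:
    prometheus.io/scrape: "true"
    prometheus.io/port: "9090"    # port serving metrics; adjust to your app
    prometheus.io/path: "/metrics"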
Getting the latest collector is mostly the same as initial installation:
Something wrong? Let us know at hello@betterstack.com.
For Kubernetes installations, you can inject environment variables into the collector and beyla containers from Kubernetes ConfigMaps and Secrets.
This is useful for managing configuration dynamically without hardcoding it into your values.yaml, or for managing your collector secret with additional security.
Environment variables set explicitly in collector.env or beyla.env will take precedence over variables sourced from envFrom.
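A values.yaml sketch of `envFrom` on both containers. The Secret and ConfigMap names are hypothetical, and the exact chart keys may differ, so check the chart's values.yaml:

```yaml
# Hypothetical Secret/ConfigMap names -- verify keys against the chart's values.yaml.
collector:
  envFrom:
    - secretRef:
        name: better-stack-collector-secret
beyla:
  envFrom:
    - configMapRef:
        name: beyla-extra-env
```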
To ensure the collector pods have higher scheduling priority and are less likely to be evicted during resource shortages, you can assign them a PriorityClass. This is especially important in production clusters where telemetry data is critical.
To set a priority class, specify the priorityClassName in your values.yaml. You must have a PriorityClass object already defined in your cluster.
Learn more in the Kubernetes documentation on Pod Priority and Preemption.
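A sketch of a PriorityClass definition; the name and value are hypothetical and should match your cluster's priority scheme:

```yaml
# Must exist in the cluster before the chart references it.
apiVersion: scheduling.k8s.io/v1
kind: PriorityClass
metadata:
  name: telemetry-critical        # hypothetical name
value: 1000000
globalDefault: false
description: "Priority for Better Stack collector pods"
```

Then reference it in your values.yaml, e.g. `priorityClassName: telemetry-critical`.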
Docker's json-file log driver can be configured to attach container labels to logs. After configuring Docker, you will see the labels on logs in the attrs object.
Adjusting /etc/docker/daemon.json requires restarting the Docker daemon with systemctl restart docker and may cause downtime.
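An /etc/docker/daemon.json sketch using Docker's `labels` log option; the label keys listed are hypothetical examples of labels your containers might carry:

```json
{
  "log-driver": "json-file",
  "log-opts": {
    "labels": "com.example.service,com.example.env"
  }
}
```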
To restrict the Collector to selected Swarm nodes, add the better-stack.collector=true label to those nodes from a swarm manager. Nodes without the label will stop running the Collector.
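The label is added with the standard Docker Swarm node command; `<node-name>` is a placeholder you can look up with `docker node ls`:

```shell
# Run on a Swarm manager node.
docker node update --label-add better-stack.collector=true <node-name>
```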
Collector is configured to be easy to integrate by default. You can adjust its settings to match your security needs.
Collector mounts the root filesystem read-only into the collector container by default. You can mount only select paths into the container:
Selected paths will be mounted read-only under the /host prefix, e.g. /host/var/www/custom.
The Beyla container requires privileged access as it runs the eBPF code responsible for generating traces. The collector container doesn't request privileged access.
For enhanced security, we recommend managing the COLLECTOR_SECRET using a Kubernetes Secret instead of placing it directly in your values.yaml file. This prevents the secret from being stored in plain text in your version control system.
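A sketch of creating such a Secret with kubectl; the Secret name is hypothetical and must match whatever your values.yaml references:

```shell
# Hypothetical Secret name -- use the name your values.yaml expects.
kubectl create secret generic better-stack-collector-secret \
  --from-literal=COLLECTOR_SECRET="<your-collector-secret>"
```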
On rare occasions, running the Better Stack collector alongside other tracing solutions, like Sentry or Datadog, can cause conflicts.
A possible symptom is seeing connection errors to services such as Elasticsearch after installing the collector. The service might start returning errors indicating duplicate headers.
In such a case, we recommend disabling tracing in the other monitoring tool to prevent duplicate headers. This ensures that only the Better Stack collector injects tracing headers.
This error occurs because both the collector's eBPF-based instrumentation (Beyla) and the other tracing tool are adding a traceparent header. Some services, like Elasticsearch, can strictly enforce the W3C specification and reject requests with multiple traceparent headers.
Beyla is designed to prevent this by respecting existing traceparent headers, as seen in the official documentation. However, conflicts can still occur if the other tool doesn't handle existing headers correctly.
If you notice incomplete traces, try the following steps:
This ensures the Better Stack collector’s eBPF instrumenters hook into processes correctly.
Tools like Datadog and Jaeger can conflict with eBPF-based instrumentation.
Running more than one eBPF tool at the same time may cause spans to be dropped.
Stop and remove previous collector version:
Getting a "container name is already in use" error when upgrading collector in Docker Swarm?
Force upgrade to remove existing collector containers before installing collector:
If you notice errors mentioning Mountpoint '/' has total capacity of X bytes, but configured buffers using mountpoint have total maximum size of Y bytes, adjust the Batching on disk size:
Collector allocates three buffers of the given size, which by default totals 9GB. Smaller buffers are less resilient to data loss during transient network failures.
If you see Kubernetes discovery failed errors, make sure the ServiceAccount used by the Collector can list pods, services, endpoints, and namespaces. The Collector relies on the Kubernetes API to discover workloads and attach metadata to logs, metrics, and traces.
Better Stack collector is open source. See the GitHub repository.
Please let us know at hello@betterstack.com.
We're happy to help! 🙏