# Better Stack collector

[Better Stack collector](https://github.com/BetterStackHQ/collector) is the easiest and recommended way to integrate Better Stack into your environment.

## Why should you use the collector?

### Instrument without code changes

**Leverage eBPF** to instrument your Kubernetes or Docker clusters to gather logs, metrics, and OpenTelemetry traces **without code changes**.

### Monitor and control your collectors remotely

Remotely monitor the collector's throughput and change its configuration directly from the Better Stack dashboard to **adjust sampling, compression, and batching as needed**.

Have a legacy service? Use the Better Stack dashboard to increase sampling and save on ingestion and egress costs, then scale back up only when you need the telemetry.

![CleanShot 2026-02-07 at 7 .04.07.png](https://imagedelivery.net/xZXo0QFi-1_4Zimer-T0XQ/7ca2b0e3-592d-47f5-1fc4-268533598d00/lg2x =3244x2676)

### Databases instrumented automatically

The collector automatically recognizes databases and other common services
running in your cluster. **Monitor the internals of your PostgreSQL, MySQL, Redis, Memcached,
MongoDB, Apache, Nginx, Elasticsearch, or Kafka out of the box.**

### Transform wide events with VRL

[Transform logs](https://betterstack.com/docs/logs/using-logtail/transforming-ingested-data/logs-vrl/), spans or other wide events to redact personally identifiable information or simply **discard useless events so that you don't get billed**.

### Collect additional OpenTelemetry traces

Send any OpenTelemetry traces to Better Stack.

Get the best of both worlds: collect traces with zero effort using eBPF-based auto-instrumentation. For full flexibility, [instrument your services using OpenTelemetry SDKs](#auto-instrument-apps-with-opentelemetry-sdk) and send custom traces to Better Stack alongside the eBPF data.


# Getting started

* [Install with a Kubernetes Helm chart](#install-via-kubernetes-helm-chart)
* [Install with Docker](#install-via-docker)
* [Install to Docker Swarm](#install-to-docker-swarm)

## Install via Kubernetes Helm chart

Add the collector Helm chart repository and install the chart:

```bash
[label Add and install Helm chart]
helm repo add better-stack https://betterstackhq.github.io/collector-helm-chart
helm repo update
helm install better-stack-collector better-stack/collector \
  --set collector.env.COLLECTOR_SECRET="$COLLECTOR_SECRET"
```

For advanced configuration options, see the [values.yaml](https://github.com/BetterStackHQ/collector-helm-chart/blob/main/values.yaml) file.

[info]
After installing Better Stack collector, restart your services so the eBPF instrumenters can attach correctly.
[/info]

## Install via Docker

Deploy collector with Docker Compose 1.25.0 or later using the provided install script:

```bash
[label Install using Docker Compose]
curl -sSL https://raw.githubusercontent.com/BetterStackHQ/collector/main/install.sh | \
  COLLECTOR_SECRET="$COLLECTOR_SECRET" bash
```

[info]
After installing Better Stack collector, restart your services so the eBPF instrumenters can attach correctly.
[/info]

## Install to Docker Swarm

Deploy collector to each node in your Swarm cluster with Docker Compose 1.25.0 or later using the following script:

```bash
[label Deploy to all Swarm nodes]
curl -sSL https://raw.githubusercontent.com/BetterStackHQ/collector/refs/heads/main/deploy-to-swarm.sh | \
    MANAGER_NODE=root@swarm-manager COLLECTOR_SECRET="$COLLECTOR_SECRET" bash
```

[info]
After installing Better Stack collector, restart your services so the eBPF instrumenters can attach correctly.
[/info]

## Required kernel features

Collector requires Linux kernel 5.14 or newer for reliable eBPF-based auto-instrumentation. It relies on BTF, CO-RE, and the eBPF ring buffer (`BPF_MAP_TYPE_RINGBUF`). Older kernels may work if your distribution has backported these features.
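
As a quick sanity check before running the full script, you can compare the running kernel version against 5.14 and look for BTF type information, which the kernel exposes at `/sys/kernel/btf/vmlinux`. The helper below is a sketch and not part of the collector:

```shell
#!/bin/sh
# kernel_at_least RELEASE MAJOR MINOR — succeeds when the release string
# (e.g. "5.14.0-362.el9") is at least MAJOR.MINOR.
kernel_at_least() {
  release=$1; want_major=$2; want_minor=$3
  major=${release%%.*}
  rest=${release#*.}
  minor=${rest%%.*}
  [ "$major" -gt "$want_major" ] ||
    { [ "$major" -eq "$want_major" ] && [ "$minor" -ge "$want_minor" ]; }
}

if kernel_at_least "$(uname -r)" 5 14 && [ -e /sys/kernel/btf/vmlinux ]; then
  echo "kernel version and BTF look OK for eBPF auto-instrumentation"
else
  echo "run the full check script to see which features are missing"
fi
```

The full check script below covers the remaining features (CO-RE, ring buffer support).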

Check if your system supports all the required features with:

[code-tabs]
```sh
[label Host system]
curl -sSL https://telemetry.betterstack.com/api/collector/public/ebpf.sh | bash
```
```sh
[label Kubernetes]
kubectl run -i --rm ebpf-check --image=alpine --restart=Never --privileged=true -- sh -c "apk add --no-cache bash wget -q && \
  wget -qO- https://telemetry.betterstack.com/api/collector/public/ebpf.sh | bash"
```
```sh
[label Docker]
docker run --rm --privileged alpine:latest sh -c "apk add --no-cache bash wget -q && \
  wget -qO- https://telemetry.betterstack.com/api/collector/public/ebpf.sh | bash"
```
[/code-tabs]

[info]

#### Your cluster doesn't support all the required features?

[Use the OpenTelemetry SDK](#auto-instrument-apps-with-opentelemetry-sdk) and send traces to Better Stack anyway.

[/info]

## Auto-instrument apps with OpenTelemetry SDK

The collector automatically gives you eBPF traces and metrics for all your services. For more control and flexibility, send OpenTelemetry SDK traces and logs to the collector as well.

### Enable OpenTelemetry in Better Stack collector

Enable the OpenTelemetry ports on the collector to send OpenTelemetry traces, logs, and metrics to Better Stack through it:

[code-tabs]
```sh
[label Kubernetes]
# Enable for existing collector
helm repo update
helm upgrade better-stack-collector better-stack/collector \
  --reuse-values \
  --set collectOtel.grpcPort=4317 \
  --set collectOtel.httpPort=4318

# Deploy new collector with OpenTelemetry forwarding
helm repo add better-stack https://betterstackhq.github.io/collector-helm-chart
helm repo update
helm install better-stack-collector better-stack/collector \
  --set collector.env.COLLECTOR_SECRET="$COLLECTOR_SECRET" \
  --set collectOtel.grpcPort=4317 \
  --set collectOtel.httpPort=4318
```
```sh
[label Docker Compose]
curl -sSL https://raw.githubusercontent.com/BetterStackHQ/collector/main/install.sh | \
  COLLECTOR_SECRET="$COLLECTOR_SECRET" \
  COLLECT_OTEL_GRPC_PORT=4317 \
  COLLECT_OTEL_HTTP_PORT=4318 bash
```
```sh
[label Docker Swarm]
curl -sSL https://raw.githubusercontent.com/BetterStackHQ/collector/refs/heads/main/deploy-to-swarm.sh | \
    MANAGER_NODE=root@swarm-manager COLLECTOR_SECRET="$COLLECTOR_SECRET" \
    COLLECT_OTEL_GRPC_PORT=4317 \
    COLLECT_OTEL_HTTP_PORT=4318 bash
```
[/code-tabs]

Navigate to [Sources](https://telemetry.betterstack.com/team/0/sources) -> your collector -> **Configure** -> **Ingesting**. Then, enable the **OpenTelemetry SDK traces** checkbox.

[success]

All services can now send OpenTelemetry data to Better Stack via the open ports.

[/success]

### Instrument your services

Instrument your services with OpenTelemetry:

- [Auto-instrument services in Kubernetes using OpenTelemetry Operator](#automatically-instrument-services-in-kubernetes)
- [Instrument individual services using the OpenTelemetry integrations](#instrument-individual-services-with-opentelemetry)

[info]

#### Already have your services instrumented with OpenTelemetry?

Skip this step and [send the OpenTelemetry SDK data to Collector](#send-opentelemetry-data-to-better-stack-collector).

[/info]

### Automatically instrument services in Kubernetes

#### Install OpenTelemetry Operator

Install [OpenTelemetry Operator](https://opentelemetry.io/docs/platforms/kubernetes/operator/) to your cluster.

#### Set up the Operator

Create an `Instrumentation` resource to make sure OpenTelemetry data is sent to Better Stack collector:

```yaml
[label instrumentation.yaml]
apiVersion: opentelemetry.io/v1alpha1
kind: Instrumentation
metadata:
  name: better-stack
spec:
  exporter:
    endpoint: http://better-stack-collector-otlp.namespace-with-bs-collector.svc:4318
  env:
    - name: OTEL_EXPORTER_OTLP_PROTOCOL
      value: http/protobuf
```

#### Enable auto-instrumentation on your workloads

Annotate your workloads to enable auto-instrumentation:

```yaml
[label deployment.yaml]
apiVersion: apps/v1
kind: Deployment
metadata:
  name: my-app
spec:
  template:
    metadata:
      annotations:
        instrumentation.opentelemetry.io/inject-python: "better-stack"
```

[info]
#### Use the correct language for your service

Replace `inject-python` with the annotation for your language: `inject-java`, `inject-nodejs`, `inject-dotnet`, or `inject-go`.  
See all available languages in the [Operator documentation](https://opentelemetry.io/docs/platforms/kubernetes/operator/automatic/#configure-automatic-instrumentation).

[/info]

Add the `OTEL_SERVICE_NAME` and `OTEL_RESOURCE_ATTRIBUTES` environment variables to make sure the OpenTelemetry data is correctly matched to the corresponding service:

```yaml
[label Service and host env]
env:
  - name: OTEL_SERVICE_NAME
    value: "my-namespace/Deployment/my-application"
  - name: POD_NAME
    valueFrom:
      fieldRef:
        fieldPath: metadata.name
  - name: NODE_NAME
    valueFrom:
      fieldRef:
        fieldPath: spec.nodeName
  - name: OTEL_RESOURCE_ATTRIBUTES
    value: "host.name=$(NODE_NAME),container.id=my-namespace/$(POD_NAME)/my-application"
```

[success]
#### OpenTelemetry traces are now flowing into Better Stack

- View traces in [Live tail](https://telemetry.betterstack.com/team/0/tail ";_blank")
- Check out the [OpenTelemetry Tracing dashboard](https://betterstack.com/dashboards/tracing-open-telemetry) to analyze traces coming from eBPF alongside traces coming from the OpenTelemetry SDK.
[/success]

### Instrument individual services with OpenTelemetry

OpenTelemetry supports zero-code and manual SDK instrumentation depending on the language.
Follow [the official OpenTelemetry integration guide](https://opentelemetry.io/docs/languages/) for the language your service uses.

### Send OpenTelemetry data to Better Stack collector

Configure your OpenTelemetry SDK to send data to the Better Stack collector. 
In Kubernetes, use the node's IP address to reach the collector's host ports:

[code-tabs]
```yaml
[label HTTP]
env:
  - name: NODE_IP
    valueFrom:
      fieldRef:
        fieldPath: status.hostIP
  - name: OTEL_EXPORTER_OTLP_ENDPOINT
    value: "http://$(NODE_IP):4318"
```
```yaml
[label gRPC]
env:
  - name: NODE_IP
    valueFrom:
      fieldRef:
        fieldPath: status.hostIP
  - name: OTEL_EXPORTER_OTLP_ENDPOINT
    value: "http://$(NODE_IP):4317"
```
[/code-tabs]

For Docker Compose and Docker Swarm, point your services to the Collector via localhost, for example `http://localhost:4318` for HTTP or `http://localhost:4317` for gRPC.
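
For example, a service running on the same host as the collector could be pointed at it with environment variables along these lines (the protocol value assumes an exporter using OTLP over HTTP):

```shell
# OTLP over HTTP — the collector listens on port 4318
export OTEL_EXPORTER_OTLP_ENDPOINT="http://localhost:4318"
export OTEL_EXPORTER_OTLP_PROTOCOL="http/protobuf"

# For gRPC exporters, use port 4317 instead:
# export OTEL_EXPORTER_OTLP_ENDPOINT="http://localhost:4317"
```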

Add service and host attributes to all services instrumented with the OpenTelemetry SDK. Set `OTEL_SERVICE_NAME` and `OTEL_RESOURCE_ATTRIBUTES` as follows:

[code-tabs]
```yaml
[label Kubernetes]
env:
  - name: OTEL_SERVICE_NAME
    value: "my-namespace/Deployment/my-application"
  - name: POD_NAME
    valueFrom:
      fieldRef:
        fieldPath: metadata.name
  - name: NODE_NAME
    valueFrom:
      fieldRef:
        fieldPath: spec.nodeName
  - name: OTEL_RESOURCE_ATTRIBUTES
    value: "host.name=$(NODE_NAME),container.id=my-namespace/$(POD_NAME)/my-application"
```
```bash
[label Docker]
export OTEL_SERVICE_NAME="my-application"
export OTEL_RESOURCE_ATTRIBUTES="host.name=my-host-123,container.id=my-application-0a1b2c3d4e"
```
[/code-tabs]

[info]
#### How are the service and host attributes used in Better Stack?

`service.name` is used to automatically group traces and logs by service in Better Stack dashboards. `host.name` and `container.id` help correlate SDK-generated telemetry with eBPF-generated data from the same workload.
[/info]


[success]
#### OpenTelemetry traces are now flowing into Better Stack

- View traces in [Live tail](https://telemetry.betterstack.com/team/0/tail ";_blank")
- Check out the [OpenTelemetry Tracing dashboard](https://betterstack.com/dashboards/tracing-open-telemetry) to analyze traces coming from eBPF alongside traces coming from the OpenTelemetry SDK.
[/success]

## Collecting database metrics

### MySQL

Create a database user with the following permissions to collect MySQL database metrics:

```sql
[label Set up MySQL database user]
CREATE USER 'betterstack'@'%' IDENTIFIED BY '<PASSWORD>';
GRANT SELECT, PROCESS, REPLICATION CLIENT ON *.* TO 'betterstack'@'%';
```

### PostgreSQL

If the default `postgres` database does not exist, create it:

```sql
[label Create default Postgres database]
CREATE DATABASE postgres;
```


Create a database user with the `pg_monitor` role and enable the `pg_stat_statements` extension:

```sql
[label Set up Postgres database user]
CREATE ROLE betterstack WITH LOGIN PASSWORD '<PASSWORD>';
GRANT pg_monitor TO betterstack;
GRANT CONNECT ON DATABASE postgres TO betterstack;
CREATE EXTENSION pg_stat_statements;
```

Make sure the `pg_stat_statements` extension is loaded via the `shared_preload_libraries` server setting. You can verify this with `SHOW shared_preload_libraries;`.

![CleanShot 2026-02-07 at 7 .06.57.png](https://imagedelivery.net/xZXo0QFi-1_4Zimer-T0XQ/2470b6b4-846e-4ca1-bb26-fb86b4c41f00/lg2x =3244x2676)

## Collecting Apache and nginx metrics

Better Stack collector automatically discovers nginx and Apache running on your hosts. Once discovered, the collector scrapes metrics from each process every 15 seconds.

For this to work, each process needs a status endpoint enabled. If the endpoint isn't available, the process will show as **Configuration required** in the Better Stack dashboard.

### Nginx

The collector scrapes nginx metrics from the [stub_status](https://nginx.org/en/docs/http/ngx_http_stub_status_module.html) module at `/nginx_status`. Add the following to your nginx configuration:

```nginx
[label /etc/nginx/conf.d/status.conf]
server {
    listen 80;
    location /nginx_status {
        stub_status;
        allow 127.0.0.1;
        allow ::1;
        deny all;
    }
}
```

Reload nginx to apply the change:

```bash
[label Reload nginx]
nginx -s reload
```

[info]
The `stub_status` module is included in most nginx packages by default. Verify it's available via `nginx -V 2>&1 | grep -o with-http_stub_status_module`.
[/info]

### Apache

The collector scrapes Apache metrics from [mod_status](https://httpd.apache.org/docs/current/mod/mod_status.html) at `/server-status?auto`. Enable the module and add the following configuration:

```xml
[label /etc/apache2/mods-enabled/status.conf]
<Location "/server-status">
    SetHandler server-status
    Require local
</Location>
```

Enable the module and reload Apache:

[code-tabs]
```bash
[label Debian / Ubuntu]
a2enmod status
systemctl reload apache2
```
```bash
[label RHEL / CentOS / Amazon Linux]
# mod_status is usually enabled by default
# Verify with: httpd -M | grep status
systemctl reload httpd
```
[/code-tabs]

[info]
#### Running older Apache?
Set `ExtendedStatus On` to get the full set of metrics. On Apache 2.3.6 and up, this is already enabled by default.
[/info]

### Verify the configuration

After enabling the status endpoint, verify it's reachable from the host:

[code-tabs]
```bash
[label Nginx]
curl http://127.0.0.1/nginx_status
```
```bash
[label Apache]
curl http://127.0.0.1/server-status?auto
```
[/code-tabs]

The process status in Better Stack will automatically change from **Configuration required** to **Collecting** within a few minutes.

## Collect additional logs

Send any log files from the host filesystem to Better Stack. For example, add the following configuration to your `manual.vector.yaml` file to send the logs from `/var/www/custom.log` and any file matching `/var/www/**/service.*.log`:

```yaml
[label Sending custom logs]
sources:
  better_stack_logs_custom_file:
    type: file
    read_from: beginning
    ignore_older_secs: 259200 # send only log lines newer than 3 days
    include:
      - "/host/var/www/custom.log"
      - "/host/var/www/**/service.*.log"
```

[info]
Any source with a name starting with `better_stack_logs_` is forwarded to the logs sink.
[/info]

## Upgrading collector

Upgrading to the latest collector is mostly the same as the initial installation:

[code-tabs]
```sh
[label Kubernetes]
# Update repo and upgrade chart
helm repo update
helm upgrade better-stack-collector better-stack/collector \
  --set collector.env.COLLECTOR_SECRET="$COLLECTOR_SECRET"
```
```sh
[label Docker Compose]
# Rerun install command to upgrade collector
curl -sSL https://raw.githubusercontent.com/BetterStackHQ/collector/main/install.sh | \
  COLLECTOR_SECRET="$COLLECTOR_SECRET" bash
```
```sh
[label Docker Swarm]
# Rerun deploy command to upgrade collector
curl -sSL https://raw.githubusercontent.com/BetterStackHQ/collector/refs/heads/main/deploy-to-swarm.sh | \
    MANAGER_NODE=root@swarm-manager COLLECTOR_SECRET="$COLLECTOR_SECRET" bash

# Not working? Force upgrade removes existing collector containers before upgrading collector
curl -sSL https://raw.githubusercontent.com/BetterStackHQ/collector/refs/heads/main/deploy-to-swarm.sh | \
    ACTION=force_upgrade \
    MANAGER_NODE=root@swarm-manager COLLECTOR_SECRET="$COLLECTOR_SECRET" bash
```
[/code-tabs]

## Uninstalling collector

Something wrong? Let us know at hello@betterstack.com.

[code-tabs]
```sh
[label Kubernetes]
# Uninstall chart and remove repo
helm uninstall better-stack-collector
helm repo remove better-stack
```
```sh
[label Docker Compose]
curl -sSL https://raw.githubusercontent.com/BetterStackHQ/collector/main/uninstall.sh | bash
```
```sh
[label Docker Swarm]
# Uninstall using a provided script
curl -sSL https://raw.githubusercontent.com/BetterStackHQ/collector/refs/heads/main/deploy-to-swarm.sh | \
    MANAGER_NODE=root@swarm-manager COLLECTOR_SECRET="$COLLECTOR_SECRET" ACTION=uninstall bash
```
[/code-tabs]

## Collecting Prometheus metrics in Kubernetes

Automatically discover and scrape metrics from all pods and services with native Prometheus annotations. Add the following annotations to your pods and services:

```yaml
[label Example annotations]
prometheus.io/scrape: "true"
prometheus.io/path: "/metrics"
prometheus.io/port: "9090"
```
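
For example, on a Deployment the annotations belong on the pod template metadata; `my-app` and port `9090` below are placeholders for your own workload:

```yaml
[label deployment.yaml]
apiVersion: apps/v1
kind: Deployment
metadata:
  name: my-app
spec:
  template:
    metadata:
      annotations:
        prometheus.io/scrape: "true"
        prometheus.io/path: "/metrics"
        prometheus.io/port: "9090"
```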

## Using Secrets and ConfigMaps in Kubernetes

Pass the collector secret as a [Secret](https://kubernetes.io/docs/tasks/inject-data-application/distribute-credentials-secure/) and use a [ConfigMap](https://kubernetes.io/docs/tasks/configure-pod-container/configure-pod-configmap/) to set environment variables:

```yaml
[label values.yaml]
collector:
  envFrom:
    - configMapRef:
        name: my-collector-config
    - secretRef:
        name: my-collector-secret
ebpf:
  envFrom:
    - configMapRef:
        name: my-ebpf-config
    - secretRef:
        name: my-ebpf-secret
```

```yaml
[label secret.yaml]
apiVersion: v1
kind: Secret
type: Opaque
metadata:
  name: my-collector-secret
stringData:
  COLLECTOR_SECRET: "$COLLECTOR_SECRET"
```

[warning]

#### Environment variables have precedence over Secrets and ConfigMaps

Environment variables set explicitly in `collector.env` or `ebpf.env` will take precedence over variables sourced from `envFrom`.

[/warning]

## Setting Pod Priority

To ensure the collector pods have higher scheduling priority and are less likely to be evicted during resource shortages, you can assign them a `PriorityClass`. This is especially important in production clusters where telemetry data is critical.

To set a priority class, specify the `priorityClassName` in your `values.yaml`. You must have a `PriorityClass` object already defined in your cluster.

```yaml
[label values.yaml]
priorityClassName: "high-priority-nonpreempting"
```

[info]
Learn more about Pod Priority and Preemption in [Kubernetes documentation](https://kubernetes.io/docs/concepts/scheduling-eviction/pod-priority-preemption/).
[/info]

## Using labels with Docker logs

Docker's [json-file](https://docs.docker.com/engine/logging/drivers/json-file/) log driver can be configured to attach container labels to logs. After configuring Docker, you will see the labels on logs in the `attrs` object.
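
For example, a `/etc/docker/daemon.json` along these lines attaches the listed label keys to each container's log lines (the label names below are placeholders):

```json
[label /etc/docker/daemon.json]
{
  "log-driver": "json-file",
  "log-opts": {
    "labels": "com.example.env,com.example.team"
  }
}
```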

[info]
Adjusting `/etc/docker/daemon.json` requires restarting the Docker daemon with `systemctl restart docker` and may cause downtime.
[/info]

### Deploying to specific Docker Swarm nodes

To restrict the Collector to selected Swarm nodes, add the `better-stack.collector=true` label to those nodes from a swarm manager. Nodes without the label will stop running the Collector.

```bash
[label Managing Collector placement in Docker Swarm]
# Enable Collector on a specific node
docker node update --label-add better-stack.collector=true <node-name>

# Disable Collector on a node
docker node update --label-rm better-stack.collector <node-name>

# If needed, remove the eBPF container from the node directly
docker stop better-stack-ebpf && docker rm better-stack-ebpf
```


## Security hardening

Collector is configured to be easy to integrate by default. You can adjust its settings to match your security needs.

### Mount fewer paths into the collector container

Collector mounts the root filesystem **read-only** into the collector container by default. You can mount only select paths into the container:

[code-tabs]
```bash
[label Kubernetes]
helm repo add better-stack https://betterstackhq.github.io/collector-helm-chart
helm repo update
helm install better-stack-collector better-stack/collector \
  --set collector.env.COLLECTOR_SECRET="$COLLECTOR_SECRET" \
  --set collector.hostMounts.paths[0]=/var/www/custom \
  --set collector.hostMounts.paths[1]=/var/lib/something
```
```bash
[label Docker Compose]
curl -sSL https://raw.githubusercontent.com/BetterStackHQ/collector/main/install.sh | \
  COLLECTOR_SECRET="$COLLECTOR_SECRET" \
  MOUNT_HOST_PATHS="/var/www/custom,/var/lib/something" bash
```
```bash
[label Docker Swarm]
curl -sSL https://raw.githubusercontent.com/BetterStackHQ/collector/refs/heads/main/deploy-to-swarm.sh | \
    MANAGER_NODE=root@swarm-manager COLLECTOR_SECRET="$COLLECTOR_SECRET" \
    MOUNT_HOST_PATHS="/var/www/custom,/var/lib/something" bash
```
[/code-tabs]

Selected paths will be mounted read-only under the `/host` prefix, e.g. `/host/var/www/custom`.

### Required permissions

The `ebpf` container requires privileged access as it runs the eBPF code responsible for generating traces. The collector container doesn't request privileged access.

### Managing the collector secret with Kubernetes Secrets

For enhanced security, we recommend managing the `COLLECTOR_SECRET` [using a Kubernetes Secret](#using-secrets-and-configmaps-in-kubernetes) instead of placing it directly in your `values.yaml` file. This prevents the secret from being stored in plain text in your version control system.

```bash
[label Create Kubernetes secret]
kubectl create secret generic better-stack-collector-secret \
  --from-literal=COLLECTOR_SECRET="$COLLECTOR_SECRET"
```

```yaml
[label values.yaml]
collector:
  env:
    # Leave this empty when using a secret
    COLLECTOR_SECRET: ""
  envFrom:
    # Reference the secret you created
    - secretRef:
        name: better-stack-collector-secret
```

## Troubleshooting

### eBPF performance impact

Better Stack collector uses open-source eBPF-based instrumentation. eBPF plugs directly into the kernel to generate metrics and traces based on network calls. In some cases, eBPF can increase CPU usage and network latency. For most use cases and applications, the performance impact is negligible or well worth the auto-instrumentation benefits.

#### Issues with a specific service?

Toggle eBPF for the affected service: 

- Go to [Sources](https://telemetry.betterstack.com/team/0/sources) -> your collector -> **Configure**.
- Scroll down to **Services**, find your service, and toggle **Collect traces**.

Traces will no longer be collected for that service via eBPF instrumentation.

#### Cluster-wide performance issues

Seeing elevated CPU load or network latencies across your cluster?  
Switch eBPF tracing to a lighter variant to reduce performance impact:

- Go to [Sources](https://telemetry.betterstack.com/team/0/sources) -> your collector -> **Configure**.
- Scroll down to **What data do you want to collect?** and enable **Basic distributed tracing**.

eBPF tracing will now run with significantly lower overhead, but traces might be incomplete.

#### Disabling eBPF completely

Still experiencing issues? Try these options in order:

- Disable both **Full distributed tracing** and **Basic distributed tracing**. No traces will be produced.
- Disable **Service map & RED metrics**.
- Disable **Host metrics**.

#### Well-known issues

- SignalR WebSocket library experiences connection issues with eBPF enabled

### Conflicts with other tracing solutions

On rare occasions, running the Better Stack collector alongside other tracing solutions, like Sentry or Datadog, can cause conflicts.

A possible symptom is seeing connection errors to services such as Elasticsearch after installing the collector. The service might start returning errors indicating duplicate headers.

```json
[label Error example]
{
  "error": {
    "type": "illegal_argument_exception",
    "reason": "multiple values for single-valued header [traceparent]."
  }
}
```

In such a case, we recommend **disabling tracing in the other monitoring tool** to prevent duplicate headers. This ensures that only the Better Stack collector is responsible for injecting the tracing headers.

[info]
#### Why exactly does the conflict occur?

This error occurs because both the collector's eBPF-based instrumentation (OBI) and the other tracing tool are adding a `traceparent` header. Some services, like Elasticsearch, can strictly enforce the W3C specification and reject requests with multiple `traceparent` headers.

OBI is designed to prevent this by respecting existing `traceparent` headers, as seen in [the official documentation](https://opentelemetry.io/docs/zero-code/obi/distributed-traces/#introduction). However, conflicts can still occur if the other tool doesn't handle existing headers correctly.
[/info]


### Seeing incomplete traces

If you notice incomplete traces, try the following steps:

#### Restart your services

This ensures the Better Stack collector’s eBPF instrumenters hook into processes correctly.

#### Remove other tracing agents

Tools like Datadog and Jaeger can conflict with eBPF-based instrumentation.

#### Disable other eBPF instrumenters

Running more than one eBPF tool at the same time may cause spans to be dropped.


### The container name is already in use

Stop and remove the previous collector containers:

```bash
[label Stop and remove containers]
docker stop better-stack-collector better-stack-ebpf && \
  docker rm better-stack-collector better-stack-ebpf
```

**Getting `container name is already in use` when upgrading collector in Docker Swarm?**

Run a force upgrade to remove the existing collector containers before installing the new version:

```bash
[label Docker Swarm force upgrade]
curl -sSL https://raw.githubusercontent.com/BetterStackHQ/collector/refs/heads/main/deploy-to-swarm.sh | \
    ACTION=force_upgrade \
    MANAGER_NODE=root@swarm-manager COLLECTOR_SECRET="$COLLECTOR_SECRET" bash
```

### Getting Vector errors related to disk space

If you notice errors mentioning `Mountpoint '/' has total capacity of X bytes, but configured buffers using mountpoint have total maximum size of Y bytes`, adjust the `Batching on disk` size:

![CleanShot 2025-12-04 at 15.59.42@2x.png](https://imagedelivery.net/xZXo0QFi-1_4Zimer-T0XQ/ef7153e8-5765-44a2-1702-37d099439800/md1x =1194x234)

The collector allocates three buffers of the given size, which totals 9GB by default. Smaller buffers are less resilient to data loss during transient network failures.
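
The sizing arithmetic is straightforward; a quick sketch, assuming the default 3GB per buffer:

```shell
BUFFER_GB=3                   # "Batching on disk" size per buffer
TOTAL_GB=$((3 * BUFFER_GB))   # the collector allocates three such buffers
echo "disk buffers need up to ${TOTAL_GB}GB on the mountpoint"
```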

### Insufficient ServiceAccount permissions in Kubernetes

If you see `Kubernetes discovery failed` errors, make sure the ServiceAccount used by the Collector can list pods, services, endpoints, and namespaces. The Collector relies on the Kubernetes API to discover workloads and attach metadata to logs, metrics, and traces.

### GitHub repository

Better Stack collector is open source. See the [GitHub repository](https://github.com/BetterStackHQ/collector).

## Need help?

Please let us know at hello@betterstack.com.  
We're happy to help! 🙏
