# Better Stack AWS Fargate logging

## Start logging in 12 minutes

Collect logs from your [AWS Fargate cluster](https://docs.aws.amazon.com/eks/latest/userguide/fargate.html) and forward them to Better Stack.

### Create AWS Kinesis Data Firehose delivery stream

In [AWS Console → Kinesis → Data Firehose → Create delivery stream](https://console.aws.amazon.com/firehose/home#/streams), create a new delivery stream with these settings:


```plain
[label Delivery stream configuration]
Source:               Direct PUT
Destination:          HTTP Endpoint
Delivery stream name: better-stack-firehose
HTTP endpoint URL:    https://$INGESTING_HOST/aws-firehose
Access key:           $SOURCE_TOKEN
```

[info]
You can also enable GZIP compression, set up an Amazon S3 backup bucket, or configure additional custom parameters in your Firehose delivery stream settings.
[/info]

After creation, you can run **Test with demo data** on the delivery stream detail page. You should see the demo logs in [Better Stack → Logs & traces](https://telemetry.betterstack.com/team/0/tail ";_blank").

[info]
It may take a few minutes for the demo logs to propagate to Better Stack.
[/info]
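Alternatively, you can send a test record from your terminal using the AWS CLI. This is a sketch assuming your CLI is configured for the same account and region as the delivery stream:

```sh
[label Send a test record via AWS CLI]
# AWS CLI v2 expects the Data blob to be base64-encoded
PAYLOAD=$(echo -n '{"message":"Test log from AWS CLI"}' | base64)

aws firehose put-record \
  --delivery-stream-name better-stack-firehose \
  --record "Data=${PAYLOAD}"
```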

### Create ConfigMap defining logs setup

Save the following YAML config to a file named `aws-observability.yaml`.

Replace `cluster-region` with the region code of your cluster (e.g. `us-east-1`).

```yaml
[label aws-observability.yaml]
kind: Namespace
apiVersion: v1
metadata:
  name: aws-observability
  labels:
    aws-observability: enabled

---

kind: ConfigMap
apiVersion: v1
metadata:
  name: aws-logging
  namespace: aws-observability
data:
  filters.conf: |
    [FILTER]
      Name parser
      Match *
      Key_name log
      Parser crio
    [FILTER]
      Name kubernetes
      Match kube.*
      Merge_Log On
      Keep_Log Off
      Buffer_Size 0
      Kube_Meta_Cache_TTL 300s
  output.conf: |
    [OUTPUT]
      Name  kinesis_firehose
      Match *
[highlight]
      region cluster-region
[/highlight]
      delivery_stream better-stack-firehose
  parsers.conf: |
    [PARSER]
      Name crio
      Format Regex
      Regex ^(?<time>[^ ]+) (?<stream>stdout|stderr) (?<logtag>P|F) (?<log>.*)$
      Time_Key    time
      Time_Format %Y-%m-%dT%H:%M:%S.%L%z
```

Then apply the manifest to your cluster:

```sh
[label Applying manifest]
kubectl apply -f aws-observability.yaml
```
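To confirm the resources were created, you can check for the namespace and the ConfigMap, using the names defined in the manifest above:

```sh
[label Verify logging resources]
# The namespace should carry the aws-observability: enabled label
kubectl get namespace aws-observability --show-labels

# The ConfigMap should exist in the aws-observability namespace
kubectl get configmap aws-logging -n aws-observability
```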

### Configure IAM policy in your cluster

First, download the **AWS Kinesis Data Firehose IAM policy**, which allows writing logs to the delivery stream:

```sh
[label Download IAM policy file]
curl https://raw.githubusercontent.com/aws-samples/amazon-eks-fluent-logging-examples/mainline/examples/fargate/kinesis-firehose/permissions.json \
  -o kinesis-firehose-logging-iam-policy.json
```

Create an **IAM policy** from the policy file:

```sh
[label Create an IAM policy]
aws iam create-policy --policy-name kinesis-firehose-logging-iam-policy \
  --policy-document file://kinesis-firehose-logging-iam-policy.json
```

Attach the policy to your **Pod execution role**.

Replace `FargatePodExecutionRole` with your **Pod execution role** name. You can find it in your **Fargate profile** details in **EKS Cluster → Compute tab → Fargate profiles**.
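You can also look up the role from the CLI. In this sketch, `my-cluster` and `my-fargate-profile` are placeholders for your own cluster and Fargate profile names:

```sh
[label Look up the Pod execution role via CLI]
# Replace my-cluster and my-fargate-profile with your own names
aws eks describe-fargate-profile \
  --cluster-name my-cluster \
  --fargate-profile-name my-fargate-profile \
  --query "fargateProfile.podExecutionRoleArn" \
  --output text

# The role name is the part after the final "/" in the returned ARN:
# ROLE_NAME="${ROLE_ARN##*/}"
```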

```sh
[label Attach IAM policy to Pod execution role]
AWS_ACCOUNT_ID="$(aws sts get-caller-identity --query "Account" --output text)"
aws iam attach-role-policy \
  --policy-arn arn:aws:iam::${AWS_ACCOUNT_ID}:policy/kinesis-firehose-logging-iam-policy \
[highlight]
  --role-name FargatePodExecutionRole
[/highlight]
```
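To double-check the attachment, list the managed policies attached to the role; the newly created policy should appear in the output:

```sh
[label Verify the policy attachment]
# Lists the names of managed policies attached to the role
aws iam list-attached-role-policies \
  --role-name FargatePodExecutionRole \
  --query "AttachedPolicies[].PolicyName" \
  --output text
```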

### Restart pods in your cluster

Logging is only enabled when a pod starts, so existing pods need to be restarted to pick it up.

You can restart your deployments via `kubectl rollout restart deployment/<deployment-name>` or `kubectl rollout restart deployments`.
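For example, to restart a single deployment and wait until the rollout finishes, using a hypothetical `sample-app` deployment:

```sh
[label Restart a deployment and wait for rollout]
kubectl rollout restart deployment/sample-app

# Blocks until all replacement pods are up and ready
kubectl rollout status deployment/sample-app
```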

To check if a pod has logging enabled, run `kubectl describe`:

```text
[label Example output of kubectl describe]
$ kubectl describe pods/sample-app-97bfb67f7-8hq6l
Name:                 sample-app-97bfb67f7-8hq6l
Namespace:            default
  ...
Annotations:          CapacityProvisioned: 0.25vCPU 0.5GB
[highlight]
                      Logging: LoggingEnabled
[/highlight]
                      kubectl.kubernetes.io/restartedAt: 2023-09-05T17:02:29+02:00
  ...
Events:
  Type    Reason          Age    From               Message
  ----    ------          ----   ----               -------
[highlight]
  Normal  LoggingEnabled  3m41s  fargate-scheduler  Successfully enabled logging for pod
[/highlight]
  Normal  Scheduled       2m53s  fargate-scheduler  Successfully assigned default/sample-app-97bfb67f7-8hq6l to fargate-ip-10-0-128-131.eu-north-1.compute.internal
```

You should see your logs in [Better Stack → Live tail](https://telemetry.betterstack.com/team/0/tail ";_blank").

[info]
It may take a few minutes for your logs to propagate to Better Stack.
[/info]

## Need help?

Please let us know at hello@betterstack.com.  
We're happy to help! 🙏

## Additional information

Interested in learning more about **logging in Amazon Fargate clusters**?   
Head over to the [official AWS Fargate logging documentation](https://docs.aws.amazon.com/eks/latest/userguide/fargate-logging.html).
