Better Stack AWS Fargate logging
Start logging in 12 minutes
Collect logs from your AWS Fargate cluster and send them to Better Stack.
Create AWS Kinesis Data Firehose delivery stream
In AWS Console → Kinesis → Data Firehose → Create delivery stream, create a new delivery stream with these settings:
Source: Direct PUT
Destination: HTTP Endpoint
Delivery stream name: better-stack-firehose
HTTP endpoint URL: https://$INGESTING_HOST/aws-firehose
Access key: $SOURCE_TOKEN
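If you prefer the AWS CLI to the console, the same delivery stream can be created roughly as sketched below. This is a sketch under assumptions, not the exact console flow: the backup bucket and the firehose-backup-role referenced in S3Configuration are placeholders you would need to create yourself, and $INGESTING_HOST and $SOURCE_TOKEN come from your Better Stack source settings.
# Sketch only: replace <account-id>, the backup role, and the bucket with resources that exist in your account.
cat > better-stack-firehose.json <<EOF
{
  "DeliveryStreamName": "better-stack-firehose",
  "DeliveryStreamType": "DirectPut",
  "HttpEndpointDestinationConfiguration": {
    "EndpointConfiguration": {
      "Name": "Better Stack",
      "Url": "https://$INGESTING_HOST/aws-firehose",
      "AccessKey": "$SOURCE_TOKEN"
    },
    "S3Configuration": {
      "RoleARN": "arn:aws:iam::<account-id>:role/firehose-backup-role",
      "BucketARN": "arn:aws:s3:::<backup-bucket-name>"
    }
  }
}
EOF
aws firehose create-delivery-stream --cli-input-json file://better-stack-firehose.json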
You can also enable GZIP compression, set up an Amazon S3 backup bucket, or configure additional custom parameters in your Firehose delivery stream settings.
After creation, you can run Test with demo data on the delivery stream detail page. You should see the demo logs in Better Stack → Live tail.
It may take a few minutes for the demo logs to propagate to Better Stack.
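You can also push a single test record from the AWS CLI instead of using the console. A minimal sketch, assuming AWS CLI v2, which expects the Data blob to be base64-encoded:
# Send one base64-encoded test record to the delivery stream
aws firehose put-record \
  --delivery-stream-name better-stack-firehose \
  --record "{\"Data\":\"$(echo 'Test log from AWS CLI' | base64)\"}"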
Create ConfigMap defining the logging setup
Save the following YAML config to a file named aws-observability.yaml. Replace cluster-region with the region code of your cluster (e.g. us-east-1).
kind: Namespace
apiVersion: v1
metadata:
  name: aws-observability
  labels:
    aws-observability: enabled
---
kind: ConfigMap
apiVersion: v1
metadata:
  name: aws-logging
  namespace: aws-observability
data:
  filters.conf: |
    [FILTER]
        Name parser
        Match *
        Key_name log
        Parser crio
    [FILTER]
        Name kubernetes
        Match kube.*
        Merge_Log On
        Keep_Log Off
        Buffer_Size 0
        Kube_Meta_Cache_TTL 300s
  output.conf: |
    [OUTPUT]
        Name kinesis_firehose
        Match *
        region cluster-region
        delivery_stream better-stack-firehose
  parsers.conf: |
    [PARSER]
        Name crio
        Format Regex
        Regex ^(?<time>[^ ]+) (?<stream>stdout|stderr) (?<logtag>P|F) (?<log>.*)$
        Time_Key time
        Time_Format %Y-%m-%dT%H:%M:%S.%L%z
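For reference, the crio parser above matches CRI-formatted container log lines. An illustrative example of a line it would parse (not taken from a real cluster):
2023-09-05T15:02:29.123456789+02:00 stdout F {"level":"info","msg":"request served"}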
Then apply the manifest to your cluster:
kubectl apply -f aws-observability.yaml
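To double-check that the namespace and ConfigMap were created, you can list them (assuming kubectl is pointed at your Fargate cluster):
kubectl get namespace aws-observability
kubectl get configmap aws-logging -n aws-observability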
Configure IAM policy in your cluster
First, download the AWS Kinesis Data Firehose IAM policy, which allows writing logs to the delivery stream:
curl https://raw.githubusercontent.com/aws-samples/amazon-eks-fluent-logging-examples/mainline/examples/fargate/kinesis-firehose/permissions.json \
-o kinesis-firehose-logging-iam-policy.json
Create an IAM policy from the policy file:
aws iam create-policy --policy-name kinesis-firehose-logging-iam-policy \
--policy-document file://kinesis-firehose-logging-iam-policy.json
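To confirm the policy was created and look up its ARN for the next step, you can query IAM. A small optional check, not part of the original setup:
aws iam list-policies --scope Local \
  --query "Policies[?PolicyName=='kinesis-firehose-logging-iam-policy'].Arn" --output text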
Attach the policy to your Pod execution role.
Replace FargatePodExecutionRole with your Pod execution role name. You can find it in your Fargate profile details in EKS Cluster → Compute tab → Fargate profiles.
AWS_ACCOUNT_ID="$(aws sts get-caller-identity --query "Account" --output text)"
aws iam attach-role-policy \
--policy-arn arn:aws:iam::${AWS_ACCOUNT_ID}:policy/kinesis-firehose-logging-iam-policy \
--role-name FargatePodExecutionRole
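You can verify the attachment by listing the policies on the role (again, FargatePodExecutionRole stands in for your actual role name):
aws iam list-attached-role-policies --role-name FargatePodExecutionRole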
Restart pods in your cluster
To enable logging for a pod, the pod needs to be restarted.
You can restart your deployments via kubectl rollout restart deployment/<deployment-name> or kubectl rollout restart deployments.
To check if a pod has logging enabled, run kubectl describe:
$ kubectl describe pods/sample-app-97bfb67f7-8hq6l
Name:         sample-app-97bfb67f7-8hq6l
Namespace:    default
...
Annotations:  CapacityProvisioned: 0.25vCPU 0.5GB
              Logging: LoggingEnabled
              kubectl.kubernetes.io/restartedAt: 2023-09-05T17:02:29+02:00
...
Events:
  Type    Reason          Age    From               Message
  ----    ------          ---    ----               -------
  Normal  LoggingEnabled  3m41s  fargate-scheduler  Successfully enabled logging for pod
  Normal  Scheduled       2m53s  fargate-scheduler  Successfully assigned default/sample-app-97bfb67f7-8hq6l to fargate-ip-10-0-128-131.eu-north-1.compute.internal
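To check the whole cluster at once, one option is to list pod events with the LoggingEnabled reason (an optional check, not part of the original guide):
kubectl get events --all-namespaces --field-selector reason=LoggingEnabled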
You should see your logs in Better Stack → Live tail.
It may take a few minutes for your logs to propagate to Better Stack.
Need help?
Please let us know at hello@betterstack.com.
We're happy to help! 🙏
Additional information
Interested in learning more about logging in Amazon Fargate clusters?
Head over to the official AWS Fargate logging documentation.