AWS Elastic Load Balancing

Start logging to Better Stack

Send AWS Elastic Load Balancing logs to Better Stack. We automatically recognize and parse Application Load Balancer, Network Load Balancer, and Classic Elastic Load Balancer access logs.

Deploy a Lambda function that reads logs from an S3 bucket and ships them to Better Stack.

1. Create S3 bucket for ELB logs

Create an S3 bucket.

Add permissions for ELB to write logs into the bucket:

  • Select your bucket → Permissions.
  • Look for the Bucket policy section and click Edit.
  • Paste in the following policy:
Bucket policy
{
    "Version": "2012-10-17",
    "Statement": [
        {
            "Effect": "Allow",
            "Principal": {
                "AWS": "arn:aws:iam::054676820928:root"
            },
            "Action": "s3:PutObject",
            "Resource": "arn:aws:s3:::elb-logs-bucket-name/AWSLogs/*"
        }
    ]
}

Replace bucket name and ARN

Replace elb-logs-bucket-name with the name of your S3 bucket.
Replace arn:aws:iam::054676820928:root with the ELB account ARN for your AWS region.
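
If you prefer to script this step, here is a minimal boto3 sketch. The region, bucket name, and ELB account ID are placeholders to replace with your own values:

Create bucket and policy with boto3
import json
import boto3

region = "us-east-2"             # placeholder: your AWS region
bucket = "elb-logs-bucket-name"  # placeholder: your bucket name
elb_account = "054676820928"     # placeholder: ELB account ID for your region

s3 = boto3.client("s3", region_name=region)

# Create the bucket (omit CreateBucketConfiguration in us-east-1)
s3.create_bucket(Bucket=bucket, CreateBucketConfiguration={"LocationConstraint": region})

# Attach the bucket policy shown above
policy = {
    "Version": "2012-10-17",
    "Statement": [{
        "Effect": "Allow",
        "Principal": {"AWS": f"arn:aws:iam::{elb_account}:root"},
        "Action": "s3:PutObject",
        "Resource": f"arn:aws:s3:::{bucket}/AWSLogs/*",
    }],
}
s3.put_bucket_policy(Bucket=bucket, Policy=json.dumps(policy))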

2. Enable ELB access logs

  • Open AWS Console.
  • Search for Load balancers and choose Load balancers (EC2 feature).
  • Go to Your load balancer → Attributes → Edit.
  • Under Monitoring, enable the Access logs toggle.
  • Click Browse S3 and select the bucket you created earlier. Alternatively, enable access logs programmatically as sketched below.
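
A boto3 sketch for enabling access logs; the load balancer ARN is a placeholder:

Enable access logs with boto3
import boto3

elbv2 = boto3.client("elbv2")

# Placeholder ARN - use your load balancer's ARN
lb_arn = "arn:aws:elasticloadbalancing:us-east-2:123456789012:loadbalancer/app/my-lb/50dc6c495c0c9188"

elbv2.modify_load_balancer_attributes(
    LoadBalancerArn=lb_arn,
    Attributes=[
        {"Key": "access_logs.s3.enabled", "Value": "true"},
        {"Key": "access_logs.s3.bucket", "Value": "elb-logs-bucket-name"},
    ],
)

Note that this elbv2 call applies to Application and Network Load Balancers; Classic Load Balancers are configured through the elb client instead.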

3. Create Lambda

  • Open AWS Console.
  • Search and select Lambda.
  • Click Create function.
  • Select Use a blueprint, then find and select the Get S3 object blueprint with a python3.x environment.
  • Fill out Function name - e.g. betterstack-elb-logs-forwarder.
  • Fill out Role name - e.g. betterstack-elb-logs-forwarder-role.

Scroll down to S3 trigger:

  • Select the bucket with your ELB logs.
  • For Event types, choose All object create events.
  • Read and tick the Recursive invocation acknowledgement checkbox.
  • Click Create function. The trigger can also be wired programmatically, as sketched below.
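
Creating the function from the blueprint wires this trigger for you. If you ever need to set it up manually, here is a boto3 sketch; the function name, bucket, and StatementId are placeholders:

Wire the S3 trigger with boto3
import boto3

function_name = "betterstack-elb-logs-forwarder"  # placeholder
bucket = "elb-logs-bucket-name"                   # placeholder

lambda_client = boto3.client("lambda")
s3 = boto3.client("s3")

# Allow S3 to invoke the function
lambda_client.add_permission(
    FunctionName=function_name,
    StatementId="s3-invoke",
    Action="lambda:InvokeFunction",
    Principal="s3.amazonaws.com",
    SourceArn=f"arn:aws:s3:::{bucket}",
)

# Subscribe the function to object-created events
fn_arn = lambda_client.get_function(FunctionName=function_name)["Configuration"]["FunctionArn"]
s3.put_bucket_notification_configuration(
    Bucket=bucket,
    NotificationConfiguration={
        "LambdaFunctionConfigurations": [
            {"LambdaFunctionArn": fn_arn, "Events": ["s3:ObjectCreated:*"]}
        ]
    },
)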

4. Set up Lambda

Download the Requests package as a ZIP file.
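
If you'd rather build the ZIP yourself, Lambda expects Python layer packages under a top-level python/ directory. A small sketch, assuming pip is on your PATH:

Build the layer ZIP
import shutil
import subprocess

# Install requests and its dependencies under layer/python/
subprocess.run(["pip", "install", "requests", "--target", "layer/python"], check=True)

# Zip the contents of layer/ into python-requests-layer.zip;
# this yields the required top-level python/ directory inside the archive
shutil.make_archive("python-requests-layer", "zip", "layer")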

Add a Lambda layer with the Requests package and its dependencies:

  • Search and select Lambda → Layers → Create layer.
  • Name the layer - e.g. python-requests.
  • Upload the Zip file.
  • Select both x86_64 and arm64.
  • Select the Runtime matching your Lambda's Python environment - e.g. python3.10.
  • Click Create.

Set up Lambda layer:

  • Navigate back to your Lambda function.
  • Scroll down to Layers and click Add a layer.
  • Select Custom layers and find the layer you just created.
  • Click Add. Publishing and attaching the layer can also be scripted, as sketched below.
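
A boto3 sketch for the same two steps, reusing the names from above as placeholders:

Publish and attach the layer with boto3
import boto3

lambda_client = boto3.client("lambda")

# Publish the layer from the ZIP built earlier
with open("python-requests-layer.zip", "rb") as f:
    layer = lambda_client.publish_layer_version(
        LayerName="python-requests",
        Content={"ZipFile": f.read()},
        CompatibleRuntimes=["python3.10"],
        CompatibleArchitectures=["x86_64", "arm64"],
    )

# Attach it to the function
lambda_client.update_function_configuration(
    FunctionName="betterstack-elb-logs-forwarder",
    Layers=[layer["LayerVersionArn"]],
)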

Find the Code source section for your Lambda and replace the code in lambda_function with the following:

Lambda forwarder code
import json
import gzip
import boto3
import requests  # provided by the Lambda layer added above
import shlex
import urllib.parse

s3 = boto3.client('s3')

def process_log_line(line, fields):
    # Split on whitespace while keeping quoted fields (e.g. request, user agent) as single tokens
    res = shlex.split(line, posix=False)
    return dict(zip(fields, res))

def get_load_balancer_type(key):
    if key.endswith(".gz"):
        key = key[:-3]

    file_parts = key.split(".")
    if len(file_parts) == 2:
        # Only one dot separating file name and extension - i.e. no dot in file name
        return "classic"

    loadbalancer_type = file_parts[0].split("_")[-1]
    return loadbalancer_type if loadbalancer_type in ["app", "net"] else "classic"

def get_log_fields(load_balancer_type):
    if load_balancer_type == "app":
        return ["type", "dt", "elb", "client_port", "target_port", "request_processing_time", "target_processing_time", "response_processing_time", "elb_status_code", "target_status_code", "received_bytes", "sent_bytes", "request","user_agent", "ssl_cipher", "ssl_protocol", "target_group_arn", "trace_idd", "domain_name", "chosen_cert_arn", "matched_rule_priority", "dt", "actions_executed", "redirect_url", "error_reason", "target_port_list", "target_status_code_list", "classification", "classification_reason"]

    if load_balancer_type == "net":
        return ["type", "version", "dt", "elb", "listener", "client_port", "destination_port", "connection_time", "tls_handshake_time", "received_bytes", "sent_bytes", "incoming_tls_alert", "chosen_cert_arn", "chosen_cert_serial", "tls_cipher", "tls_protocol_version", "tls_named_group", "domain_name", "alpn_fe_protocol", "alpn_be_protocol", "alpn_client_preference_list", "tls_connection_creation_time"]

    # use "classic" elb as a default
    return ["dt", "elb", "client_port", "backend_port", "request_processing_time", "backend_processing_time", "response_processing_time", "elb_status_code", "backend_status_code", "received_bytes", "sent_bytes", "request", "user_agent", "ssl_cipher", "ssl_protocol"]

def lambda_handler(event, context):
    better_stack_source_token = '$SOURCE_TOKEN'  # your Better Stack source token

    # The S3 trigger event carries the bucket name and the URL-encoded object key
    bucket = event['Records'][0]['s3']['bucket']['name']
    key = urllib.parse.unquote_plus(event['Records'][0]['s3']['object']['key'], encoding='utf-8')

    log_fields = get_log_fields(get_load_balancer_type(key))

    obj = s3.get_object(Bucket=bucket, Key=key)
    content = obj['Body'].read()

    # ALB and NLB access logs are gzipped; Classic ELB logs are plain .log files
    if key.endswith('.log.gz'):
        content = gzip.decompress(content)
    elif key.endswith('.log'):
        pass
    else:
        raise ValueError(f"Unexpected file extension for file {key} in bucket {bucket}. Expected .log or .log.gz file.")

    lines = str(content, encoding='utf-8').strip().split('\n')
    processed_data = [process_log_line(line, log_fields) for line in lines]

    url = 'https://in.logs.betterstack.com'
    headers = {
        'Authorization': f"Bearer {better_stack_source_token}",
        'Content-Type': 'application/json'
    }

    response = requests.post(url, json=processed_data, headers=headers)

    print(f"Sent logs to {url}. Got response code: {response.status_code}")

Click Deploy and you're done. 🎉

You should see your logs in Better Stack → Live tail.

It may take a few minutes for your logs to propagate to Better Stack.

Need help?

Please let us know at hello@betterstack.com.
We're happy to help! 🙏