# The Missing Guide to AWS Lambda Logs

AWS Lambda logging seems straightforward – print to the console, and it
magically appears in CloudWatch. However, that simplicity can lead to unexpected
costs or difficulties locating essential information when it matters most.

In this guide, you will learn about:

- Finding and interpreting the logs AWS Lambda automatically generates.
- Structuring your logs strategically so you can quickly zero in on problems
  when they happen.
- How Better Stack can simplify analysis and monitoring of your Lambda logs
  while being more cost-effective.

Ready to get more out of your Lambda logs? Let's dive into some practical tips
next!

[ad-logs]

## Understanding AWS Lambda logs

AWS Lambda logs are records of events generated by Lambda functions. AWS
automatically monitors function executions and reports various logs and metrics
through Amazon CloudWatch as long as your function's execution role has the
necessary permissions.
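The permissions in question are the CloudWatch Logs write actions. As a rough sketch, a minimal execution role policy (this mirrors what the AWS-managed `AWSLambdaBasicExecutionRole` policy grants) looks like this:

```json
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Action": [
        "logs:CreateLogGroup",
        "logs:CreateLogStream",
        "logs:PutLogEvents"
      ],
      "Resource": "arn:aws:logs:*:*:*"
    }
  ]
}
```

Without these permissions, your function will still run, but nothing will show up in CloudWatch.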

These logs include various helpful information for monitoring and
troubleshooting your Lambda functions. Here's a breakdown of what you can expect
from AWS Lambda logs:

### 1. System logs

Lambda automatically generates system logs for each function invocation. These
logs reveal crucial metrics like start/end times, execution duration, memory
usage (allocated vs. actual), and billed duration. This data helps you
understand function behavior, optimize for cost, and pinpoint areas for
improvement.

Here's an example:

```text
INIT_START Runtime Version: nodejs:18.v24 Runtime Version ARN: arn:aws:lambda:us-east-1::runtime:c09960ad0af4321e1a7cf013174f7c0d7169bf09af823ca2ad2f93c72ade708a
START RequestId: 765b52b4-2600-4348-9ec4-c7f7f1346c57 Version: $LATEST
END RequestId: 765b52b4-2600-4348-9ec4-c7f7f1346c57
REPORT RequestId: 765b52b4-2600-4348-9ec4-c7f7f1346c57 Duration: 259.72 ms Billed Duration: 260 ms Memory Size: 128 MB Max Memory Used: 69 MB Init Duration: 189.15 ms
```

These AWS Lambda logs track a single function invocation. The process begins
with the `INIT_START` log, marking initialization, specifying the Node.js
runtime version, and assigning a unique ARN to the runtime environment.

Next, the `START` log signals the beginning of execution with a unique
RequestId and indicates the function version (`$LATEST`). The `END` log then
confirms that the function's execution completed for this request.

Finally, the `REPORT` log offers a summary, showing that the function completed
in 259.72 milliseconds (billed as 260 ms due to rounding up), that
initialization took 189.15 milliseconds, and that a maximum of 69 MB of the
128 MB allocated memory was used.
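Because the `REPORT` line follows a predictable layout, you can extract its metrics programmatically. Here's a minimal sketch; the regex and the returned field names are my own assumptions, not an official AWS parser:

```javascript
// Sketch: pull the numeric metrics out of a plain-text REPORT line.
// The regex below is a hand-rolled assumption about the line layout.
function parseReportLine(line) {
  const pattern =
    /Duration: ([\d.]+) ms\s+Billed Duration: (\d+) ms\s+Memory Size: (\d+) MB\s+Max Memory Used: (\d+) MB/;
  const match = line.match(pattern);
  if (!match) return null;
  return {
    durationMs: parseFloat(match[1]),
    billedDurationMs: parseInt(match[2], 10),
    memorySizeMB: parseInt(match[3], 10),
    maxMemoryUsedMB: parseInt(match[4], 10),
  };
}

const reportLine =
  'REPORT RequestId: 765b52b4-2600-4348-9ec4-c7f7f1346c57 Duration: 259.72 ms Billed Duration: 260 ms Memory Size: 128 MB Max Memory Used: 69 MB Init Duration: 189.15 ms';
console.log(parseReportLine(reportLine));
// { durationMs: 259.72, billedDurationMs: 260, memorySizeMB: 128, maxMemoryUsedMB: 69 }
```

A parser like this is only needed for the default text format; as you'll see shortly, switching to JSON logging makes it unnecessary.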

### 2. Error logs

Lambda function errors can stem from two sources: unhandled exceptions thrown
directly by your code or issues within the Lambda runtime environment, such as
exceeding timeouts, memory limits, or misconfigurations.

Here's an example of how uncaught errors in Node.js are logged by the Lambda
runtime:

```text
2024-04-09T07:22:49.403Z	d1dd355b-74f9-4984-ad30-c5e9ae23517a	ERROR	Invoke Error
{
    "errorType": "Error",
    "errorMessage": "uncaught error",
    "stack": [
        "Error: uncaught error",
        "    at Runtime.handler (file:///var/task/index.mjs:34:13)",
        "    at Runtime.handleOnceNonStreaming (file:///var/runtime/index.mjs:1173:29)"
    ]
}
```

### 3. Application or function logs

Inside your Lambda function code, you can use print statements (or equivalent)
to output custom log messages. The standard output and standard error streams
from a Lambda function are automatically sent to CloudWatch Logs without
requiring logging drivers.

For example, the following statements

```javascript
console.log('Fetching data from the API.');
console.log('Data fetched successfully:', responseData);
```

will appear as follows in CloudWatch:

```text
2024-04-03T00:22:41.959Z	3bcb760c-ecdb-459b-a97b-c2318b3215fe	INFO	Fetching data from the API.
2024-04-03T00:22:41.978Z	3bcb760c-ecdb-459b-a97b-c2318b3215fe	INFO	Data fetched successfully: { userId: 1, id: 1, title: 'delectus aut autem', completed: false }
```

Lambda automatically enhances function logs generated using `console` methods in
Node.js by adding a timestamp, request ID, and log level to each entry.

## Accessing your Lambda logs in AWS CloudWatch

AWS Lambda seamlessly integrates with CloudWatch by automatically forwarding all
logs to a log group tied specifically to each Lambda function.

The naming convention for these log groups mirrors the Lambda function's name by
following the `/aws/lambda/<function name>` pattern, but this can be adjusted in
the AWS console, as you'll see later on.

To view these logs, navigate to the CloudWatch section within the AWS Management
Console. Under the **Logs** section, click on **Log groups** and select your
function's corresponding group.

![aws-lambda-logs-cloudwatch.png](https://imagedelivery.net/xZXo0QFi-1_4Zimer-T0XQ/3a78a999-0e96-491c-ff73-92e666479500/public
=2482x1496)

Within the **Log streams** tab of the selected log group, you'll find individual
log streams for each execution instance of your Lambda function. These streams
are conventionally named in the format
`YYYY/MM/DD/[<FunctionVersion>]<InstanceId>`.

![aws-lambda-group.png](https://imagedelivery.net/xZXo0QFi-1_4Zimer-T0XQ/bd948ad5-3558-44cd-7559-4e708d946e00/md1x
=3622x1840)
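Since the stream names follow that convention, you can split them apart when processing logs outside the console. Here's a small sketch; the regex is my own interpretation of the naming pattern:

```javascript
// Sketch: split a Lambda log stream name of the form
// YYYY/MM/DD/[<FunctionVersion>]<InstanceId> into its parts.
function parseStreamName(name) {
  const match = name.match(/^(\d{4})\/(\d{2})\/(\d{2})\/\[([^\]]+)\](.+)$/);
  if (!match) return null;
  const [, year, month, day, version, instanceId] = match;
  return { date: `${year}-${month}-${day}`, version, instanceId };
}

console.log(parseStreamName('2024/04/03/[$LATEST]a1b2c3d4e5f64f7a9c0d1e2f3a4b5c6d'));
// { date: '2024-04-03', version: '$LATEST', instanceId: 'a1b2c3d4e5f64f7a9c0d1e2f3a4b5c6d' }
```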

You can click on the most recent stream to view its contents. You will observe
the three standard log statements generated per invocation (`START`, `END` and
`REPORT`) as well as any custom logs generated by your functions:

![aws-lambda-logs.png](https://imagedelivery.net/xZXo0QFi-1_4Zimer-T0XQ/d15cac60-6a3d-4213-9628-8d8fdb080000/orig
=3102x1355)

Here's a breakdown of the key parts of the above logs:

- **Initialization log (INIT_START)**: Details the runtime setup.
- **Start log (START)**: Indicates invocation start, includes a unique
  RequestId.
- **Application logs**: Messages generated within your function's code that
  provide additional context on the function's activities.
- **End log (END)**: Signals invocation completion.
- **Report log (REPORT)**: Provides execution metrics (duration, memory usage,
  etc.).

These logs demonstrate a typical successful execution pattern where the function
starts, performs its task, and then completes without errors, providing relevant
performance data.

## Capturing Lambda logs in structured JSON format

By default, AWS Lambda outputs logs in a [semi-structured
format](https://betterstack.com/community/guides/logging/log-formatting/), which complicates automated log analysis and monitoring
efforts. To effectively analyze these logs, you'd need to parse each log entry
manually by looking for specific string identifiers or the function invocation's
request ID.

Fortunately, AWS Lambda now allows for a full transition to [structured JSON
logging](https://betterstack.com/community/guides/logging/json-logging/) for both system-generated and custom function logs, which
makes it easy to filter and analyze the log data in CloudWatch.

You can configure this behavior in the Lambda Management Console under the
Configuration tab. By navigating to the **Monitoring and operations** tools in
the left panel and adjusting the **Log format** setting, you can enable
structured JSON logging:

![Configure JSON logging in AWS Lambda](https://imagedelivery.net/xZXo0QFi-1_4Zimer-T0XQ/f9aa5232-0b51-4ad9-a375-d1c5f6891d00/md1x =1678x682)

When you execute your function and view its log output, it should appear in the
following manner:

![aws-lambda-json-logs.png](https://imagedelivery.net/xZXo0QFi-1_4Zimer-T0XQ/26b48945-030c-4d2d-8bdc-4f1fcf856900/orig
=3874x2014)

Once enabled, logging output for your Lambda function executions will adopt a
JSON structure, making it much easier to parse and analyze programmatically.
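If you prefer the AWS CLI over the console, the same setting can be applied with the `update-function-configuration` command's `--logging-config` option (the function name below is a placeholder):

```command
aws lambda update-function-configuration \
  --function-name my-function \
  --logging-config LogFormat=JSON
```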

For instance, a `REPORT` log entry in JSON format organizes each metric into
its own property, and the entry is linked to other logs from the same
invocation through a common `record.requestId`:

```json
{
    "time": "2024-04-03T00:56:35.345Z",
    "type": "platform.report",
    "record": {
        "requestId": "88c76e69-8021-474e-a030-f8bd7490cba4",
        "metrics": {
            "durationMs": 2226.333,
            "billedDurationMs": 2227,
            "memorySizeMB": 128,
            "maxMemoryUsedMB": 88,
            "initDurationMs": 147.316
        },
        "status": "success"
    }
}
```

Additionally, if your application logs are structured as valid JSON objects,
they will be automatically parsed and placed within the `message` property of
the log output:

```javascript
console.log(data); // Assuming 'data' is a valid JSON object from an API call
```

```json
[output]
{
    "timestamp": "2024-04-03T01:04:49.418Z",
    "level": "INFO",
    "requestId": "ad26627c-5c74-4828-89c9-c22c17dfc61b",
[highlight]
    "message": {
        "userId": 1,
        "id": 1,
        "title": "delectus aut autem",
        "completed": false
    }
[/highlight]
}
```

If the log output is not valid JSON, Lambda will instead treat and log the
message as a string, still providing valuable information albeit in a less
structured format:

```javascript
console.log('Data fetched successfully:', data);
```

```json
[output]
{
    "timestamp": "2024-04-03T00:56:35.144Z",
    "level": "INFO",
    "requestId": "88c76e69-8021-474e-a030-f8bd7490cba4",
[highlight]
    "message": "Data fetched successfully: { userId: 1, id: 1, title: 'delectus aut autem', completed: false }"
[/highlight]
}
```

## Configuring AWS Lambda log levels

Adopting structured JSON logging in AWS Lambda not only streamlines log
formatting but also enables you to control which logs are published to
CloudWatch through [log level filters](https://betterstack.com/community/guides/logging/log-levels-explained/).

![lambda-log-levels.png](https://imagedelivery.net/xZXo0QFi-1_4Zimer-T0XQ/a0561aba-a888-495e-e4f9-65bb70991700/md2x
=1595x1252)

This configuration is accessible from the AWS Lambda console's **Logging
configuration** section. By default, both system and application logs are set to
the `INFO` level, preventing entries logged at the `DEBUG` or `TRACE` levels
from being transmitted to CloudWatch.

The level of a function log is denoted by its `level` property. In Node.js
functions, Lambda automatically assigns the `INFO` level to entries generated
using `console.log()` and `console.info()`. Similarly, records produced by
`console.debug()`, `console.trace()`, `console.warn()`, and `console.error()`
are assigned `DEBUG`, `TRACE`, `WARN`, and `ERROR` respectively.

If you choose to [log with a custom framework](https://betterstack.com/community/guides/logging/logging-framework/) instead,
ensure that it outputs JSON-structured entries with `level` and `timestamp`
properties as shown below:

```json
{
[highlight]
  "level": "INFO",
  "timestamp": "2024-04-01T12:36:14.170Z",
[/highlight]
  "pid": 650073,
  "hostname": "fedora",
  "msg": "an info message"
}
```

The level should be one of the supported application log levels and the provided
timestamp must be compatible with the
[RFC 3339 format](https://www.ietf.org/rfc/rfc3339.txt). If the log level or
timestamp is invalid or missing, Lambda will automatically assign the `INFO`
level to the log entry with its own timestamp.
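To illustrate that fallback behavior, here's a rough approximation of the normalization Lambda applies; this is my own sketch of the documented rules, not AWS's actual code:

```javascript
// Sketch approximating Lambda's fallback rules for custom JSON logs:
// unrecognized levels and unparseable timestamps are replaced with defaults.
const VALID_LEVELS = ['TRACE', 'DEBUG', 'INFO', 'WARN', 'ERROR', 'FATAL'];

function normalizeEntry(entry) {
  const candidate = String(entry.level).toUpperCase();
  const level = VALID_LEVELS.includes(candidate) ? candidate : 'INFO';
  const parsed = Date.parse(entry.timestamp);
  const timestamp = Number.isNaN(parsed) ? new Date().toISOString() : entry.timestamp;
  return { ...entry, level, timestamp };
}

console.log(normalizeEntry({ level: 'verbose', timestamp: 'not-a-date', msg: 'hi' }).level);
// INFO
```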

Also, when configuring application log-level filtering for your function, the
selected level is stored in the `AWS_LAMBDA_LOG_LEVEL` environment variable. You
can configure your logging framework according to this variable so that it
doesn't output logs that the Lambda runtime would eventually discard.

## Customizing your Lambda log group in CloudWatch

AWS Lambda sends logs for each function to a dedicated log group named
`/aws/lambda/<function name>`, but this setup can make it cumbersome to manage
security, governance, and retention policies across a large number of functions.

To streamline log management for all of the Lambda functions that make up a
particular application, you can opt for a shared CloudWatch log group. You can
do this by selecting the **Custom** option and providing a new name as follows:

![aws-lambda-log-group.png](https://imagedelivery.net/xZXo0QFi-1_4Zimer-T0XQ/1879a36f-5721-421b-4a94-0c91fd5f0400/md2x
=1616x877)

On the next function invocation, Lambda will create the shared group and begin
streaming logs. Log streams within this group will include the function name and
version in their names, allowing you to trace logs back to their originating
functions easily.

![lambda-log-streams.png](https://imagedelivery.net/xZXo0QFi-1_4Zimer-T0XQ/13981bc9-5593-4a2b-4603-6d9b3d946f00/orig
=2434x1272)

## Configuring your CloudWatch log retention settings

CloudWatch logs are stored indefinitely by default, incurring storage charges
beyond the first 5 GB. To avoid paying unnecessarily for old logs, customize
your log retention settings by heading to **CloudWatch** -> **Log groups** ->
your log group -> **Actions** -> **Edit retention settings**.

![lambda-retention-1.png](https://imagedelivery.net/xZXo0QFi-1_4Zimer-T0XQ/b97dca03-0217-447d-d425-02a588f13400/lg2x
=2142x965)

![lambda-retention-2.png](https://imagedelivery.net/xZXo0QFi-1_4Zimer-T0XQ/c69ee981-b34d-4e61-9e9f-dae3a2909d00/public
=1117x453)
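You can also set retention from the AWS CLI with the `put-retention-policy` command (adjust the log group name and retention period to suit):

```command
aws logs put-retention-policy \
  --log-group-name /aws/lambda/my-function \
  --retention-in-days 30
```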

You'll notice that with a shared log group, it's much easier to apply a
consistent log retention policy to a collection of functions, compared to when
each function has a separate log group.

## Using third-party logging frameworks

AWS Lambda's built-in logging capabilities are a good starting point, but you'll
often want more control for deeper contextual analysis and troubleshooting.
[Custom logging frameworks](https://betterstack.com/community/guides/logging/logging-framework/) provide the solution.

For example,
[Pino](https://betterstack.com/community/guides/logging/how-to-install-setup-and-use-pino-to-log-node-js-applications/) is a
popular logging library for Node.js programs that allows you to configure log
levels, add contextual data, and many other features not possible through the
Console API.

The recommended approach for integrating a reusable logging solution across your
Lambda functions is to use an
[AWS Lambda Layer](https://docs.aws.amazon.com/lambda/latest/dg/configuration-layers.html)
dependency that contains your logging configuration.

### Setting up a Lambda layer for logging

To get started, create a new directory in your filesystem and navigate into it:

```command
mkdir pinojs-layer
```

```command
cd pinojs-layer
```

Create a `nodejs` directory and navigate into it as well:

```command
mkdir nodejs
```

```command
cd nodejs
```

Within the `nodejs` directory, initialize a new Node.js project with the
following command, accepting all the defaults:

```command
npm init -y
```

Then run the following to configure the project as an ES module:

```command
npm pkg set type="module"
```

Once you're done, install the `pino` dependency with:

```command
npm install pino
```

After the installation completes, create a new `index.js` file and configure
Pino as follows:

```command
code index.js
```

```javascript
[label index.js]
import pino from 'pino';

const logger = pino({
  // AWS_LAMBDA_LOG_LEVEL holds an uppercase level name (e.g. "INFO"),
  // while Pino expects lowercase levels, so normalize it here
  level: (process.env.AWS_LAMBDA_LOG_LEVEL || 'info').toLowerCase(),
  formatters: {
    bindings: (bindings) => {
      return { nodeVersion: process.version };
    },
    level: (label) => {
      return { level: label.toUpperCase() };
    },
  },
  timestamp: () => `,"timestamp":"${new Date(Date.now()).toISOString()}"`,
});

export { logger };
```

This Pino configuration sets up a logger instance that defaults to the `INFO`
level unless otherwise specified through the `AWS_LAMBDA_LOG_LEVEL` environment
variable, which corresponds to the application log level in the Lambda function
settings.

It also customizes the log output by including the Node.js version in each log
entry under the `nodeVersion` property, transforming the `level` property to
uppercase, and overriding the default `timestamp` function with a custom format.
The configured logger is then exported for use throughout the application.

Once you've configured Pino, return to your terminal and navigate back into the
`pinojs-layer` directory.

```command
cd ..
```

From the `pinojs-layer` directory, run the command below to archive the `nodejs`
directory contents into a zip file:

```command
zip -r pino.zip nodejs/
```

Now return to your AWS Lambda console and find the **Layers** option in the
sidebar under **Additional resources**.

Create a new layer and populate it with the following contents, ensuring to
select the appropriate runtime that corresponds with your Lambda functions:

![aws-lambda-create-layer.png](https://imagedelivery.net/xZXo0QFi-1_4Zimer-T0XQ/410429f2-7371-425f-b7d7-82354df7f000/orig
=1948x1952)

Once created, return to your Lambda function page and click the highlighted
**Layers** button which will navigate you to the **Layers** section:

![click-lambda-layer.png](https://imagedelivery.net/xZXo0QFi-1_4Zimer-T0XQ/0719c332-e3c0-446b-6b3f-27616443c400/orig
=1618x828)

![add-new-layer.png](https://imagedelivery.net/xZXo0QFi-1_4Zimer-T0XQ/6adeeca5-7757-434e-ad7b-848f6ca27800/orig
=2096x371)

Click **Add a layer** and fill the resulting form as follows:

![choose-lambda-layer.png](https://imagedelivery.net/xZXo0QFi-1_4Zimer-T0XQ/85d27034-1df2-4220-1f59-f95c5a845d00/md2x
=1990x1534)

Once your layer is added to the function, you will see a success message on the
screen.

It's now time to use the layer in your function code. You only need to import
the exported `logger` as follows:

```javascript
import { logger } from '/opt/nodejs/index.js';

export const handler = async (event, context, callback) => {
  logger.info("Hello world!");
};
```

When your function is invoked, you will see the resulting log entry in the
CloudWatch log stream:

```json
[output]
{
  "level": "INFO",
  "timestamp": "2024-04-03T03:11:17.691Z",
  "nodeVersion": "v20.11.1",
  "msg": "Hello world!"
}
```

![pino-log.png](https://imagedelivery.net/xZXo0QFi-1_4Zimer-T0XQ/a2d524f0-a28e-43c7-4f20-40bbbb49e900/md1x
=2308x1272)

In this manner, you can reuse the framework configuration in all your Node.js
functions to maintain a consistent log format, making [log
management](https://betterstack.com/community/guides/logging/log-management/) and analysis a lot more pleasant.

### Logging request IDs

Each Lambda function invocation is identified by a unique request ID, which is
automatically included in the system and error logs. When writing function logs
through the built-in Node.js console API, the request ID is also included
automatically, so you can link any entry to the invocation that produced it.

However, when using custom logging libraries, you must explicitly include the
request ID before it will appear in the logs. The way to do this depends on the
logging library, but you can generally call a function that accepts the request
ID and bind it to the returned logger.

Here's an example with the aforementioned Pino library:

```javascript
import pino from 'pino';

function getLogger(requestId) {
  return pino({
    level: (process.env.AWS_LAMBDA_LOG_LEVEL || 'info').toLowerCase(),
    formatters: {
      bindings: (bindings) => {
[highlight]
        return { nodeVersion: process.version, requestId };
[/highlight]
      },
      level: (label) => {
        return { level: label.toUpperCase() };
      },
    },
    timestamp: () => `,"timestamp":"${new Date(Date.now()).toISOString()}"`,
  });
}

export { getLogger };
```

The `getLogger()` function accepts the request ID and binds it to the logger so
that it is included in all logs. You can subsequently use it in your Lambda
functions like this:

```javascript
import { getLogger } from '/opt/nodejs/index.js';

export const handler = async (event, context, callback) => {
  const logger = getLogger(context.awsRequestId);
  logger.info("Hello world!");
};
```

When executed, this produces:

```json
[output]
{
  "level": "INFO",
  "timestamp": "2024-04-03T03:11:17.691Z",
  "nodeVersion": "v20.11.1",
[highlight]
  "requestId": "765b52b4-2600-4348-9ec4-c7f7f1346c57",
[/highlight]
  "msg": "Hello world!"
}
```

Once you've updated the Pino layer code in Lambda, be sure to also update the
**Layer version** in your function's layer settings:

![update-layer-version.png](https://imagedelivery.net/xZXo0QFi-1_4Zimer-T0XQ/b8da2932-3304-4549-bb0c-d61f3a133300/md2x
=1517x1017)

Another piece of information you should consider including in your logging
configuration is the function name, which is accessible through the
`AWS_LAMBDA_FUNCTION_NAME` environment variable:

```javascript
import pino from 'pino';

function getLogger(requestId) {
  return pino({
    level: (process.env.AWS_LAMBDA_LOG_LEVEL || 'info').toLowerCase(),
    formatters: {
      bindings: (bindings) => {
        return {
          nodeVersion: process.version,
          requestId,
[highlight]
          function: process.env.AWS_LAMBDA_FUNCTION_NAME,
[/highlight]
        };
      },
      level: (label) => {
        return { level: label.toUpperCase() };
      },
    },
    timestamp: () => `,"timestamp":"${new Date(Date.now()).toISOString()}"`,
  });
}

export { getLogger };
```

The resulting entries created by this configuration will now contain the
`function` property:

```json
{
  "level": "INFO",
  "timestamp": "2024-04-09T17:04:50.406Z",
  "nodeVersion": "v18.19.1",
  "requestId": "95772dc0-12d6-4713-a5bd-6e9450900f2d",
[highlight]
  "function": "myHTTPRequestFunc",
[/highlight]
  "msg": "Fetching data from the API."
}
```

This makes it a lot easier to understand the context of each log entry
especially if you're shipping the logs off AWS CloudWatch to a different [log
management tool](https://betterstack.com/community/comparisons/log-management-and-aggregation-tools/).

## Analyzing AWS Lambda logs with Better Stack

After optimizing your AWS Lambda logging, consider a specialized observability
platform for deeper insights and cost savings. Such platforms typically offer
faster search, richer visualizations, and more flexible alerting than
CloudWatch, while centralizing your monitoring data.

[Better Stack](https://betterstack.com/logs) is a compelling option that
provides log monitoring and integrated incident management. This lets you track
Lambda function activity, receive alerts for notable events or trends, and build
automated reactive measures for detected issues. You can explore its features
with a [free account](https://telemetry.betterstack.com/users/sign-up).

### Forwarding your Lambda logs to Better Stack

By routing your AWS Lambda logs to Better Stack, you can consolidate, analyze,
and monitor your logging data in a unified platform. Achieving this integration
is straightforward through the use of Better Stack's AWS Lambda extension, which
leverages
[the Telemetry API](https://docs.aws.amazon.com/lambda/latest/dg/telemetry-api.html)
to capture and stream your logs directly and in real-time.

For detailed guidance on deploying the Better Stack AWS Lambda extension for
efficient log forwarding, refer to the
[official documentation](https://betterstack.com/docs/logs/aws-lambda/).
Following setup, consider revoking your function's CloudWatch write permissions
to avoid redundant expenses.

### Exploring Lambda Logs in Better Stack

![lambda-logs-in-betterstack.png](https://imagedelivery.net/xZXo0QFi-1_4Zimer-T0XQ/5dd80c89-b0f8-46cd-6b01-a8e1ed5acf00/lg1x
=2993x1616)

Once your AWS Lambda function logs are streaming into Better Stack, you can
leverage its powerful filtering and search capabilities for targeted analysis
depending on your use case. For instance, you might want to look at `REPORT`
entries to find slow function invocations or those that are nearing their memory
limits.

This can be done by setting up filters to isolate logs showing higher execution
times or memory usage close to the allocated maximum. You can also search for
`ERROR` logs related to timeout exceptions or out-of-memory errors to pinpoint
functions that require further optimization or debugging.

Beyond filtering your Lambda logs to find potential problems, you can use them
to build high-level dashboards with custom data visualizations. This way, you
can get a quick, top-down perspective of your incoming logs without endlessly
filtering through them.

### Detecting issues in real-time

Better Stack also allows you to set up alerting rules to notify you when an
issue is detected, ensuring you're promptly informed of potential problems
within your AWS Lambda functions.

For example, you can configure an alert to trigger if there's an unusual spike
in function error rates or if any function's execution time or memory usage
surpasses a critical threshold, which may indicate a potential performance
bottleneck.

These real-time alerts can be dispatched through your preferred channels, such
as email, Slack, SMS, and others. By enabling these alerts, you'll stay ahead of
issues and have enough context to respond adequately.

## Final thoughts

In this article, we discussed finding and configuring AWS Lambda logs,
understanding their structure, and integrating custom logging for enhanced
insights. I also emphasized several best practices along the way to help you
streamline your logging workflow.

To further explore Lambda monitoring with Better Stack, check out our
[comprehensive documentation here](https://betterstack.com/docs/logs/aws-lambda/).
For additional reference, the
[official AWS Lambda docs](https://docs.aws.amazon.com/lambda/latest/dg/monitoring-cloudwatchlogs.html)
offer an in-depth resource.

Thanks for reading, and happy logging!