The Missing Guide to AWS Lambda Logs
AWS Lambda logging seems straightforward – print to the console, and it magically appears in CloudWatch. However, that simplicity can lead to unexpected costs or difficulties locating essential information when it matters most.
In this guide, you will learn about:
- Finding and interpreting the logs AWS Lambda automatically generates.
- Structuring your logs strategically so you can quickly zero in on problems when they happen.
- How Better Stack can simplify analysis and monitoring of your Lambda logs while being more cost-effective.
Ready to get more out of your Lambda logs? Let's dive into some practical tips next!
Understanding AWS Lambda logs
AWS Lambda logs are records of events generated by Lambda functions. AWS automatically monitors function executions and reports various logs and metrics through Amazon CloudWatch as long as your function's execution role has the necessary permissions.
These logs include a variety of helpful information for monitoring and troubleshooting your Lambda functions. Here's a breakdown of what you can expect from AWS Lambda logs:
1. System logs
Lambda automatically generates system logs for each function invocation. These logs reveal crucial metrics like start/end times, execution duration, memory usage (allocated vs. actual), and billed duration. This data helps you understand function behavior, optimize for cost, and pinpoint areas for improvement.
Here's an example:
INIT_START Runtime Version: nodejs:18.v24 Runtime Version ARN: arn:aws:lambda:us-east-1::runtime:c09960ad0af4321e1a7cf013174f7c0d7169bf09af823ca2ad2f93c72ade708a
START RequestId: 765b52b4-2600-4348-9ec4-c7f7f1346c57 Version: $LATEST
END RequestId: 765b52b4-2600-4348-9ec4-c7f7f1346c57
REPORT RequestId: 765b52b4-2600-4348-9ec4-c7f7f1346c57 Duration: 259.72 ms Billed Duration: 260 ms Memory Size: 128 MB Max Memory Used: 69 MB Init Duration: 189.15 ms
These AWS Lambda logs track a single function invocation. The process begins with the INIT_START log, which marks initialization, specifies the Node.js runtime version, and assigns a unique ARN to the runtime environment. Next, the START log signals the beginning of execution with a unique RequestId and indicates the function version ($LATEST), while the END log confirms that execution completed for this request.
Finally, the REPORT log offers a summary, showing that the function completed successfully in 259.72 milliseconds (billed as 260 ms due to rounding), that initialization took 189.15 milliseconds, and that it used a maximum of 69 MB of its 128 MB of allocated memory.
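To make that concrete, here is a minimal sketch of pulling those metrics out of a REPORT line programmatically. The field names in the returned object are my own choice for illustration, not an official AWS schema:

```javascript
// Sketch: extract metrics from a Lambda REPORT log line.
// Field names in the result are illustrative, not an official schema.
function parseReportLine(line) {
  const metric = (name) => {
    const match = line.match(new RegExp(`${name}: ([\\d.]+)`));
    return match ? parseFloat(match[1]) : null;
  };
  return {
    requestId: (line.match(/RequestId: (\S+)/) || [])[1] || null,
    durationMs: metric('Duration'),
    billedDurationMs: metric('Billed Duration'),
    memorySizeMB: metric('Memory Size'),
    maxMemoryUsedMB: metric('Max Memory Used'),
    initDurationMs: metric('Init Duration'),
  };
}

const report =
  'REPORT RequestId: 765b52b4-2600-4348-9ec4-c7f7f1346c57 Duration: 259.72 ms ' +
  'Billed Duration: 260 ms Memory Size: 128 MB Max Memory Used: 69 MB ' +
  'Init Duration: 189.15 ms';

console.log(parseReportLine(report));
```

This kind of parsing is exactly the manual work that the structured JSON format discussed later removes.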
2. Error logs
Lambda function errors can stem from two sources: unhandled exceptions thrown directly by your code or issues within the Lambda runtime environment, such as exceeding timeouts, memory limits, or misconfigurations.
Here's an example of how uncaught errors in Node.js are logged by the Lambda runtime:
2024-04-09T07:22:49.403Z d1dd355b-74f9-4984-ad30-c5e9ae23517a ERROR Invoke Error
{
  "errorType": "Error",
  "errorMessage": "uncaught error",
  "stack": [
    "Error: uncaught error",
    "    at Runtime.handler (file:///var/task/index.mjs:34:13)",
    "    at Runtime.handleOnceNonStreaming (file:///var/runtime/index.mjs:1173:29)"
  ]
}
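You can also catch errors yourself to attach extra context before the runtime records the failure. Here is a hedged sketch of that pattern; fetchUser() is a hypothetical stand-in for your real work:

```javascript
// Sketch: log a structured error with request context, then rethrow so
// Lambda still marks the invocation as failed and emits its own error
// record. fetchUser() is a hypothetical stand-in that simulates a failure.
async function fetchUser(userId) {
  throw new Error(`user ${userId} not found`);
}

const handler = async (event, context) => {
  try {
    return await fetchUser(event.userId);
  } catch (err) {
    console.error('fetchUser failed', {
      requestId: context.awsRequestId,
      userId: event.userId,
      errorMessage: err.message,
    });
    throw err; // rethrow so the runtime logs its Invoke Error entry too
  }
};
```

Rethrowing matters: swallowing the error would make the invocation appear successful in the system logs.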
3. Application or function logs
Inside your Lambda function code, you can use print statements (or equivalent) to output custom log messages. The standard output and standard error streams from a Lambda function are automatically sent to CloudWatch Logs without requiring logging drivers.
For example, the following statements:
console.log('Fetching data from the API.');
console.log('Data fetched successfully:', responseData);
will appear as follows in CloudWatch:
2024-04-03T00:22:41.959Z 3bcb760c-ecdb-459b-a97b-c2318b3215fe INFO Fetching data from the API.
2024-04-03T00:22:41.978Z 3bcb760c-ecdb-459b-a97b-c2318b3215fe INFO Data fetched successfully: { userId: 1, id: 1, title: 'delectus aut autem', completed: false }
Lambda automatically enhances function logs generated using console methods in Node.js by adding a timestamp, request ID, and log level to each entry.
Accessing your Lambda logs in AWS CloudWatch
AWS Lambda seamlessly integrates with CloudWatch by automatically forwarding all logs to a log group tied specifically to each Lambda function.
The naming convention for these log groups mirrors the Lambda function's name by following the /aws/lambda/<function name> pattern, but this can be adjusted in the AWS console, as you'll see later on.
To view these logs, navigate to the CloudWatch section within the AWS Management Console. Under the Logs section, click on Log groups and select your function's corresponding group.
Within the Log streams tab of the selected log group, you'll find individual log streams for each execution instance of your Lambda function. These streams are conventionally named in the format YYYY/MM/DD/[<FunctionVersion>]<InstanceId>.
You can click on the most recent stream to view its contents. You will observe the three standard log statements generated per invocation (START, END, and REPORT) as well as any custom logs generated by your functions:
Here's a breakdown of the key parts of the above logs:
- Initialization log (INIT_START): Details the runtime setup.
- Start log (START): Indicates invocation start, includes a unique RequestId.
- Application logs: Messages generated within your function's code that provide additional context on the function's activities.
- End log (END): Signals invocation completion.
- Report log (REPORT): Provides execution metrics (duration, memory usage, etc.).
These logs demonstrate a typical successful execution pattern where the function starts, performs its task, and then completes without errors, providing relevant performance data.
Capturing Lambda logs in structured JSON format
By default, AWS Lambda outputs logs in a semi-structured format, which complicates automated log analysis and monitoring efforts. To effectively analyze these logs, you'd need to parse each log entry manually by looking for specific string identifiers or the function invocation's request ID.
Fortunately, AWS Lambda now allows for a full transition to structured JSON logging for both system-generated and custom function logs, which makes it easy to filter and analyze the log data in CloudWatch.
You can configure this behavior in the Lambda Management Console under the Configuration tab. By navigating to the Monitoring and operations tools in the left panel and adjusting the Log format setting, you can enable structured JSON logging:
Once enabled, logging output for your Lambda function executions will adopt a JSON structure, making it much easier to parse and analyze programmatically. When you execute your function and view its log output, it should appear in the following manner:
For instance, a REPORT log entry in JSON format distinctly organizes each metric into its own property, and the entry is linked to other logs from the same invocation through a common record.requestId:
{
  "time": "2024-04-03T00:56:35.345Z",
  "type": "platform.report",
  "record": {
    "requestId": "88c76e69-8021-474e-a030-f8bd7490cba4",
    "metrics": {
      "durationMs": 2226.333,
      "billedDurationMs": 2227,
      "memorySizeMB": 128,
      "maxMemoryUsedMB": 88,
      "initDurationMs": 147.316
    },
    "status": "success"
  }
}
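Because each metric is now a distinct property, analysis that previously required string parsing becomes a simple object check. Here is a small sketch that flags invocations using most of their allocated memory, based on the platform.report shape shown above (the 80% threshold is an arbitrary choice):

```javascript
// Sketch: flag invocations close to their memory limit, using the
// platform.report JSON shape shown above. The threshold is arbitrary.
function isNearMemoryLimit(entry, threshold = 0.8) {
  if (entry.type !== 'platform.report') return false;
  const { memorySizeMB, maxMemoryUsedMB } = entry.record.metrics;
  return maxMemoryUsedMB / memorySizeMB >= threshold;
}

const report = {
  time: '2024-04-03T00:56:35.345Z',
  type: 'platform.report',
  record: {
    requestId: '88c76e69-8021-474e-a030-f8bd7490cba4',
    metrics: { durationMs: 2226.333, billedDurationMs: 2227, memorySizeMB: 128, maxMemoryUsedMB: 88 },
    status: 'success',
  },
};

console.log(isNearMemoryLimit(report)); // 88/128 ≈ 0.69, below the default 0.8
```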
Additionally, if your application logs are structured as valid JSON objects, they will be automatically parsed and placed within the message property of the log output:
console.log(data); // Assuming 'data' is a valid JSON object from an API call
{
  "timestamp": "2024-04-03T01:04:49.418Z",
  "level": "INFO",
  "requestId": "ad26627c-5c74-4828-89c9-c22c17dfc61b",
  "message": {
    "userId": 1,
    "id": 1,
    "title": "delectus aut autem",
    "completed": false
  }
}
If the log output is not valid JSON, Lambda will instead treat and log the message as a string, still providing valuable information albeit in a less structured format:
console.log('Data fetched successfully:', data);
{
  "timestamp": "2024-04-03T00:56:35.144Z",
  "level": "INFO",
  "requestId": "88c76e69-8021-474e-a030-f8bd7490cba4",
  "message": "Data fetched successfully: { userId: 1, id: 1, title: 'delectus aut autem', completed: false }"
}
Configuring AWS Lambda log levels
Adopting structured JSON logging in AWS Lambda not only streamlines log formatting but also enables you to control which logs are published to CloudWatch through log level filters.
This configuration is accessible from the AWS Lambda console's Logging configuration section. By default, both system and application logs are set to the INFO level, preventing entries logged at the DEBUG or TRACE levels from being transmitted to CloudWatch.
The level of a function log is denoted by its level property. In Node.js functions, Lambda automatically assigns the INFO level to entries generated using console.log() and console.info(). Similarly, records produced by console.debug(), console.trace(), console.warn(), and console.error() are assigned the DEBUG, TRACE, WARN, and ERROR levels respectively.
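The mapping just described can be summarized as a plain lookup table. The sketch below reproduces it and approximates the JSON entry shape Lambda emits; it is an illustration of the mapping, not the runtime's actual implementation:

```javascript
// Sketch: console-method-to-level mapping as described above. The output
// shape approximates Lambda's JSON log format; it is not the runtime code.
const methodToLevel = {
  log: 'INFO',
  info: 'INFO',
  debug: 'DEBUG',
  trace: 'TRACE',
  warn: 'WARN',
  error: 'ERROR',
};

function formatEntry(method, requestId, message) {
  return {
    timestamp: new Date().toISOString(),
    level: methodToLevel[method],
    requestId,
    message,
  };
}

console.log(formatEntry('warn', 'abc-123', 'disk almost full'));
```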
If you choose to log with a custom framework instead, ensure that it outputs JSON-structured entries with level and timestamp properties as shown below:
{
  "level": "INFO",
  "timestamp": "2024-04-01T12:36:14.170Z",
  "pid": 650073,
  "hostname": "fedora",
  "msg": "an info message"
}
The level should be one of the supported application log levels, and the provided timestamp must be compatible with the RFC 3339 format. If the log level or timestamp is invalid or missing, Lambda will automatically assign the INFO level to the log entry along with its own timestamp.
Also, when configuring application log-level filtering for your function, the selected level is stored in the AWS_LAMBDA_LOG_LEVEL environment variable. You can configure your logging framework according to this variable so that it doesn't output logs that the Lambda runtime would eventually discard.
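If you're rolling your own filtering rather than relying on a framework, the check amounts to a severity comparison against that environment variable. A minimal sketch, assuming the standard level ordering:

```javascript
// Sketch: honor AWS_LAMBDA_LOG_LEVEL in a hand-rolled logger so entries
// the runtime would discard are never serialized. Severity ordering
// follows the Lambda application log levels.
const severity = { TRACE: 10, DEBUG: 20, INFO: 30, WARN: 40, ERROR: 50, FATAL: 60 };

function shouldLog(entryLevel) {
  const configured = (process.env.AWS_LAMBDA_LOG_LEVEL || 'INFO').toUpperCase();
  return severity[entryLevel] >= (severity[configured] ?? severity.INFO);
}

console.log(shouldLog('DEBUG')); // false under the default INFO threshold
console.log(shouldLog('ERROR')); // true under the default INFO threshold
```

Skipping serialization for filtered entries also saves a little CPU time per invocation.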
Customizing your Lambda log group in CloudWatch
AWS Lambda sends logs for each function to a dedicated log group named /aws/lambda/<function name>, but this setup can make it cumbersome to manage security, governance, and retention policies across a large number of functions.
To streamline log management for all of the Lambda functions that make up a particular application, you can opt for a shared CloudWatch log group. You can do this by selecting the Custom option and providing a new name as follows:
On the next function invocation, Lambda will create the shared group and begin streaming logs. Log streams within this group will include the function name and version in their names, allowing you to trace logs back to their originating functions easily.
Configuring your CloudWatch log retention settings
CloudWatch logs are stored indefinitely by default, incurring charges after the first 5 GB. To avoid paying unnecessarily for old logs, customize your log retention settings by heading to CloudWatch -> Log Groups ->
You'll notice that with a shared log group, it's much easier to apply a consistent log retention policy to a collection of functions, compared to when each function has a separate log group.
Using third-party logging frameworks
AWS Lambda's built-in logging capabilities are a good starting point, but you'll often want more control for deeper contextual analysis and troubleshooting. Custom logging frameworks provide the solution.
For example, Pino is a popular logging library for Node.js programs that allows you to configure log levels, add contextual data, and many other features not possible through the Console API.
The recommended approach for integrating a reusable logging solution across your Lambda functions is to use an AWS Lambda Layer dependency that contains your logging configuration.
Setting up a Lambda layer for logging
To get started, create a new directory in your filesystem and navigate into it:
mkdir pinojs-layer
cd pinojs-layer
Create a nodejs directory and navigate into it as well:
mkdir nodejs
cd nodejs
Within the nodejs directory, initiate a new Node.js project with the following command, accepting all the defaults:
npm init -y
Then run the following to configure the project as an ES module:
npm pkg set type="module"
Once you're done, install the pino dependency with:
npm install pino
After the installation completes, create a new index.js file and configure Pino as follows:
code index.js
import pino from 'pino';

const logger = pino({
  level: process.env.AWS_LAMBDA_LOG_LEVEL || 'info',
  formatters: {
    bindings: (bindings) => {
      return { nodeVersion: process.version };
    },
    level: (label) => {
      return { level: label.toUpperCase() };
    },
  },
  timestamp: () => `,"timestamp":"${new Date(Date.now()).toISOString()}"`,
});

export { logger };
This Pino configuration sets up a logger instance that defaults to the INFO level unless otherwise specified through the AWS_LAMBDA_LOG_LEVEL environment variable, which corresponds to the application log level in the Lambda function settings. It also customizes the log output by including the Node.js version in each log entry under the nodeVersion property, transforming the level property to uppercase, and overriding the default timestamp function with a custom format. The configured logger is then exported for use throughout the application.
Once you've configured Pino, return to your terminal and navigate back into the pinojs-layer directory:
cd ..
From the pinojs-layer directory, run the command below to archive the nodejs directory contents into a zip file:
zip -r pino.zip nodejs/
Now return to your AWS Lambda console and find the Layers option in the sidebar under Additional resources.
Create a new layer and populate it with the following contents, ensuring to select the appropriate runtime that corresponds with your Lambda functions:
Once created, return to your Lambda function page and click the highlighted Layers button which will navigate you to the Layers section:
Click Add a layer and fill the resulting form as follows:
Once your layer is added to the function, you will see a success message on the screen.
It's now time to use the layer in your function code. You only need to import the exported logger as follows:
import { logger } from '/opt/nodejs/index.js';

export const handler = async (event, context, callback) => {
  logger.info("Hello world!");
};
When your function is invoked, you will see the resulting log entry in the CloudWatch log stream:
{
  "level": "INFO",
  "timestamp": "2024-04-03T03:11:17.691Z",
  "nodeVersion": "v20.11.1",
  "msg": "Hello world!"
}
In this manner, you can reuse the framework configuration in all your Node.js functions to maintain a consistent log format, making log management and analysis a lot more pleasant.
Logging request IDs
Each Lambda function invocation is identified by a unique request ID which is automatically included in the system and error logs. When writing function logs through the built-in Node.js console API, the request ID is automatically included so you can link any entry to the function invocation that produced it.
However, when using custom logging libraries, you must explicitly include the request ID before it will appear in the logs. The way to do this depends on the logging library, but you can generally call a function that accepts the request ID and bind it to the returned logger.
Here's an example with the aforementioned Pino library:
import pino from 'pino';

function getLogger(requestId) {
  return pino({
    level: process.env.AWS_LAMBDA_LOG_LEVEL || 'info',
    formatters: {
      bindings: (bindings) => {
        return { nodeVersion: process.version, requestId };
      },
      level: (label) => {
        return { level: label.toUpperCase() };
      },
    },
    timestamp: () => `,"timestamp":"${new Date(Date.now()).toISOString()}"`,
  });
}

export { getLogger };
The getLogger() function accepts the request ID and binds it to the logger so that it is included in all logs. You can subsequently use it in your Lambda functions like this:
import { getLogger } from '/opt/nodejs/index.js';

export const handler = async (event, context, callback) => {
  const logger = getLogger(context.awsRequestId);
  logger.info("Hello world!");
};
When executed, this produces:
{
  "level": "INFO",
  "timestamp": "2024-04-03T03:11:17.691Z",
  "nodeVersion": "v20.11.1",
  "requestId": "765b52b4-2600-4348-9ec4-c7f7f1346c57",
  "msg": "Hello world!"
}
Once you've updated the Pino layer code in Lambda, ensure also to update the Layer version in your function's layer settings:
Another piece of information you should consider including in your logging configuration is the function name, which is accessible through the AWS_LAMBDA_FUNCTION_NAME environment variable:
import pino from 'pino';

function getLogger(requestId) {
  return pino({
    level: process.env.AWS_LAMBDA_LOG_LEVEL || 'info',
    formatters: {
      bindings: (bindings) => {
        return {
          nodeVersion: process.version,
          requestId,
          function: process.env.AWS_LAMBDA_FUNCTION_NAME,
        };
      },
      level: (label) => {
        return { level: label.toUpperCase() };
      },
    },
    timestamp: () => `,"timestamp":"${new Date(Date.now()).toISOString()}"`,
  });
}

export { getLogger };
The resulting entries created by this configuration will now contain the function property:
{
  "level": "INFO",
  "timestamp": "2024-04-09T17:04:50.406Z",
  "nodeVersion": "v18.19.1",
  "requestId": "95772dc0-12d6-4713-a5bd-6e9450900f2d",
  "function": "myHTTPRequestFunc",
  "msg": "Fetching data from the API."
}
This makes it a lot easier to understand the context of each log entry, especially if you're shipping the logs from AWS CloudWatch to a different log management tool.
Analyzing AWS Lambda logs with Better Stack
After optimizing your AWS Lambda logging, consider a specialized observability platform for deeper insights and cost savings. These tools offer advantages over CloudWatch and centralize your monitoring data.
Better Stack is a compelling option that provides log monitoring and integrated incident management. This lets you track Lambda function activity, receive alerts for notable events or trends, and build automated reactive measures for detected issues. You can explore its features with a free account.
Forwarding your Lambda logs to Better Stack
By routing your AWS Lambda logs to Better Stack, you can consolidate, analyze, and monitor your logging data in a unified platform. Achieving this integration is straightforward through the use of Better Stack's AWS Lambda extension, which leverages the Telemetry API to capture and stream your logs directly and in real-time.
For detailed guidance on deploying the Better Stack AWS Lambda extension for efficient log forwarding, refer to the official documentation. Following setup, consider revoking your function's CloudWatch write permissions to avoid redundant expenses.
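If your execution role gets its CloudWatch access from the common AWSLambdaBasicExecutionRole managed policy, revoking it means detaching that policy or attaching an explicit deny. The following is a sketch of such a deny policy, assuming that is how your role obtained its logging permissions:

```json
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Deny",
      "Action": [
        "logs:CreateLogGroup",
        "logs:CreateLogStream",
        "logs:PutLogEvents"
      ],
      "Resource": "*"
    }
  ]
}
```

Test this change in a non-production environment first, since other tooling may depend on the CloudWatch logs.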
Exploring Lambda Logs in Better Stack
Once your AWS Lambda function logs are streaming into Better Stack, you can leverage its powerful filtering and search capabilities for targeted analysis depending on your use case. For instance, you might want to look at REPORT entries to find slow function invocations or those that are nearing their memory limits.
This can be done by setting up filters to isolate logs showing higher execution times or memory usage close to the allocated maximum. You can also search for ERROR logs related to timeout exceptions or out-of-memory errors to pinpoint functions that require further optimization or debugging.
Beyond filtering your Lambda logs to find potential problems, you can use them to build high-level dashboards with custom data visualizations. This way, you can get a quick, top-down perspective of your incoming logs without endlessly filtering through them.
Detecting issues in real-time
Better Stack also allows you to set up alerting rules to notify you when an issue is detected, ensuring you're promptly informed of potential problems within your AWS Lambda functions.
For example, you can configure an alert to trigger if there's an unusual spike in function error rates or if any function's execution time or memory usage surpasses a critical threshold, which may indicate a potential performance bottleneck.
These real-time alerts can be dispatched through your preferred channels, such as email, Slack, SMS, and others. By enabling these alerts, you'll stay ahead of issues and have enough context to respond adequately.
Final thoughts
In this article, we discussed finding and configuring AWS Lambda logs, understanding their structure, and integrating custom logging for enhanced insights. We also emphasized several best practices along the way to help you streamline your logging workflow.
To further explore Lambda monitoring with Better Stack, check out our comprehensive documentation here. For additional reference, the official AWS Lambda docs offer an in-depth resource.
Thanks for reading, and happy logging!