
Getting Started with Collecting and Managing Azure Logs

Eric Hu
Updated on November 23, 2023

Logging is the process of recording events or messages that occur during the execution of a software program, and it is a crucial component of the software development cycle. Microsoft Azure, the cloud computing platform and infrastructure Microsoft created for building, deploying, and managing applications, provides a comprehensive logging solution for its various services known as Azure Monitor. It allows you to collect, analyze, and visualize logs generated by your applications, services, and infrastructure.

In this tutorial, we will discuss how to start logging in Azure, including collecting, viewing, and searching log entries, visualizing log data by creating charts, and setting up a log-based monitoring system.

Introducing Azure Monitor

Logs

Azure Monitor is a built-in data platform in Azure that provides monitoring and diagnostics for resources and applications. It helps you understand the performance and status of your resources, detect issues, and respond to them. In addition, Azure Monitor supports log and metric data collection, alerting, dashboards, and integrations with other Azure services. The following diagram from the Azure documentation illustrates the architecture of the Azure Monitor service.

Azure Monitor architecture

First, on the left side of this diagram, you can see that Azure Monitor collects data from various sources, such as applications, infrastructure, the Azure platform, or other custom sources. The method of collecting data varies from source to source, as we will discuss later. The collected data is stored inside Azure Monitor as metrics, logs, traces, or changes.

Azure Monitor can then process this data for further analysis and visualization, or integrate it with other Azure services. You can also export the data to third-party applications such as Better Stack Logs through its import/export APIs. In this article, we will focus on how to use Azure Monitor to collect and process log data.

Collecting log data

Before we talk about collecting log data, you must first understand how Azure organizes its infrastructure. Azure applications are divided into tiers: the Azure tenant at the bottom level, followed by the Azure Subscription, Azure Resources, and the operating system, with the application source code at the top level.

Azure Monitor sources

Azure Monitor treats each tier as an independent data source, and each has its own data collection method.

The Azure tenant level

Let us start from the bottom level, the Azure tenant. Data related to the Azure tenant can be collected from Azure Active Directory reports, which contain sign-in activity and audit logs covering all tasks performed within the tenant.

The Azure Subscription level

At the Azure Subscription level, every Subscription generates a trail of activity logs, and you may export these logs to a Log Analytics workspace for further analysis.

The Azure Resource level

As for Azure Resources, each Resource generates logs that provide insight into its inner workings. These logs are collected automatically, but you need to update the Resource's diagnostic settings before they are forwarded to Azure Monitor.
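
If you prefer to script this step, the sketch below shows one way to create a diagnostic setting with the azure-mgmt-monitor Python SDK. It is a minimal example under stated assumptions: the resource ID, workspace ID, and setting name are placeholders, and the available log categories depend on the Resource type, so adjust them to match your own resources.

```python
# Hypothetical example: route a Resource's logs to a Log Analytics workspace
# by creating a diagnostic setting with the azure-mgmt-monitor SDK.
from azure.identity import DefaultAzureCredential
from azure.mgmt.monitor import MonitorManagementClient

subscription_id = "<your-subscription-id>"

# Placeholder IDs -- replace with your own Resource and workspace.
resource_uri = (
    "/subscriptions/<your-subscription-id>/resourceGroups/my-rg"
    "/providers/Microsoft.Web/sites/my-web-app"
)
workspace_id = (
    "/subscriptions/<your-subscription-id>/resourceGroups/my-rg"
    "/providers/Microsoft.OperationalInsights/workspaces/my-workspace"
)

client = MonitorManagementClient(DefaultAzureCredential(), subscription_id)

# Forward all resource log categories to the workspace. Older SDK versions may
# require listing individual categories instead of a category group.
client.diagnostic_settings.create_or_update(
    resource_uri=resource_uri,
    name="send-to-log-analytics",
    parameters={
        "workspace_id": workspace_id,
        "logs": [{"category_group": "allLogs", "enabled": True}],
    },
)
```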

The operating system level

Next, at the operating system level, you can collect log data automatically by installing the Azure Monitor agent. The agent can be installed using the Azure CLI or through the management SDKs, and once installed, it delivers monitoring data to Azure Monitor.

After the agent has been installed, you should also configure how the data will be collected by defining a data collection rule and associating the machine with that rule.
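
As a rough illustration of the programmatic route, the sketch below installs the Azure Monitor agent on a Linux virtual machine as a VM extension using the azure-mgmt-compute Python SDK. The resource group, VM name, and region are placeholders, and the data collection rule and its association would still need to be created separately (for example, in the portal or with the monitor management SDK).

```python
# Hypothetical example: install the Azure Monitor agent on a Linux VM by
# deploying the AzureMonitorLinuxAgent extension with azure-mgmt-compute.
from azure.identity import DefaultAzureCredential
from azure.mgmt.compute import ComputeManagementClient

subscription_id = "<your-subscription-id>"
client = ComputeManagementClient(DefaultAzureCredential(), subscription_id)

poller = client.virtual_machine_extensions.begin_create_or_update(
    resource_group_name="my-rg",              # placeholder resource group
    vm_name="my-vm",                          # placeholder VM name
    vm_extension_name="AzureMonitorLinuxAgent",
    extension_parameters={
        "location": "eastus",                 # must match the VM's region
        "publisher": "Microsoft.Azure.Monitor",
        "type_properties_type": "AzureMonitorLinuxAgent",
        "type_handler_version": "1.0",
        "auto_upgrade_minor_version": True,
    },
)
poller.result()  # wait for the extension deployment to finish
```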

The following data can be forwarded to Azure Monitor through the agent:

  • Events and performance data: These are numeric values describing the performance and workload of the monitored operating system.
  • Windows event logs or Syslog: Logs sent to the Windows/Linux event logging system.
  • Text logs or Windows IIS logs: Some log records are not forwarded to standard logging services such as the Windows Event Log or Syslog; instead, they are written to text files stored locally on disk. You can configure the Azure Monitor agent to collect log entries from these locations as well.

The application level

At the application level, Azure provides SDKs for several popular programming languages and technologies to make things easier for you. Currently, the supported technologies include .NET, Java, JavaScript, Node.js, and Python. These SDKs allow you to send log data to Azure Monitor directly; please refer to their respective documentation for details.
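
As one example, here is a minimal sketch using the Azure Monitor OpenTelemetry distro for Python (the azure-monitor-opentelemetry package). It assumes you already have an Application Insights resource; the connection string below is a placeholder.

```python
# Hypothetical example: export standard Python logging records to Azure Monitor
# via the Azure Monitor OpenTelemetry distro.
import logging

from azure.monitor.opentelemetry import configure_azure_monitor

# Wire up exporters for logs, traces, and metrics in one call.
configure_azure_monitor(
    connection_string="InstrumentationKey=<your-instrumentation-key>",
)

logger = logging.getLogger(__name__)
logger.setLevel(logging.INFO)

# Records emitted through the standard logging module are shipped to Azure Monitor.
logger.info("User %s created post %s", "user-42", "post-1001")
```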

Custom sources

In addition to the standard tiers of an application, Azure Monitor also allows you to send data from custom sources using the logs ingestion API.

To use this API, you must create a data collection endpoint for Azure Monitor. The endpoint allows you to send data directly to Log Analytics workspaces.

Next, you should define a data collection rule to specify the format of the log data.

Lastly, make an API call to the endpoint from your application, with the log data formatted as JSON in the payload. The call must target a specific table in the workspace, follow the expected format, and reference a data collection rule so that Azure Monitor can interpret the log data. If the incoming data does not match the structure of the target table, the data collection rule can apply a transformation in the ingestion pipeline to reshape the data for that table.
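
Here is a minimal sketch of such a call using the azure-monitor-ingestion Python package. The data collection endpoint URL, the rule's immutable ID, the stream name, and the log fields are all placeholders that must match your own data collection rule.

```python
# Hypothetical example: send custom log records to Azure Monitor through the
# logs ingestion API with the azure-monitor-ingestion package.
from azure.identity import DefaultAzureCredential
from azure.monitor.ingestion import LogsIngestionClient

endpoint = "https://my-dce.eastus-1.ingest.monitor.azure.com"  # data collection endpoint
rule_id = "dcr-00000000000000000000000000000000"               # DCR immutable ID
stream_name = "Custom-MyAppLogs_CL"                            # stream declared in the DCR

client = LogsIngestionClient(endpoint=endpoint, credential=DefaultAzureCredential())

# Each record is a JSON object matching the schema declared in the rule.
client.upload(
    rule_id=rule_id,
    stream_name=stream_name,
    logs=[
        {
            "TimeGenerated": "2023-11-23T10:00:00Z",
            "Level": "Error",
            "Message": "Payment request failed",
            "RequestId": "b1c2d3",
        }
    ],
)
```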

The following image from Azure docs illustrates this process:

Logs ingestion API

Viewing and searching logs

Now that you have successfully sent log records to Azure Monitor, let's discuss how you can view and analyze them. To do this, you must create a Log Analytics workspace. A Log Analytics workspace is the part of the Azure Monitor platform designed to process and analyze log data. Each workspace has its own data repository, allowing you to combine data from multiple sources.

Log analytics

This interface consists of three main components. First, you will find multiple tables on the left side, each corresponding to a single data source and defined with its own set of columns. You can use log queries to retrieve specific columns of data and feed them to other services provided by Azure Monitor.

The following types of tables are available in the Log Analytics workspace:

  • Azure table: These tables store logs from Azure resources. They are created automatically based on the Azure services you are using and have a predefined schema. However, you can add more columns to these tables if you wish.
  • Custom table: These tables store logs from other resources. You must define the schema based on how you want to store the data collected from a given data source.
  • Search results: These tables are generated when you run a search job, based on the search query you define.
  • Restored logs: These tables store archived logs.

The top section is where you can search for logs using the Kusto Query Language (KQL).

The bottom section is where the query results are displayed.
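
You can run the same KQL queries programmatically. The following sketch uses the azure-monitor-query Python package; the workspace ID is a placeholder, and AppTraces is just one example of a standard table, so substitute whichever table holds your logs.

```python
# Hypothetical example: run a KQL query against a Log Analytics workspace with
# the azure-monitor-query package.
from datetime import timedelta

from azure.identity import DefaultAzureCredential
from azure.monitor.query import LogsQueryClient

client = LogsQueryClient(DefaultAzureCredential())

# Fetch the 20 most recent warning-or-worse application traces.
query = """
AppTraces
| where SeverityLevel >= 2
| project TimeGenerated, Message, SeverityLevel
| order by TimeGenerated desc
| take 20
"""

response = client.query_workspace(
    workspace_id="<your-workspace-id>",
    query=query,
    timespan=timedelta(hours=24),
)

for table in response.tables:
    for row in table.rows:
        print(row)
```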

Visualizing log data

Once you've collected the log data, you can take it one step further and extract numeric metrics from the logs to create visualizations using the Metrics Explorer.

Data visualization

The Metrics Explorer provides a comprehensive view of the performance of your Azure resources. With Azure Metrics Explorer, you can view real-time and historical metrics data from various Azure services, including virtual machines, web apps, databases, and more. It is a crucial tool for gaining insights into how your resources are used and how they perform over time.

Besides creating charts, the Explorer allows you to view metrics from multiple resources in a single dashboard and compare metrics data to identify trends and patterns. You can also export the created charts to other tools, such as Power BI, for further analysis and visualization.
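
If you would rather pull the same numbers into your own tooling, the sketch below queries metric data with the azure-monitor-query Python package. The resource ID is a placeholder, and Percentage CPU is used here only as a common virtual machine metric.

```python
# Hypothetical example: retrieve average CPU usage for a VM over the last six
# hours using the azure-monitor-query metrics client.
from datetime import timedelta

from azure.identity import DefaultAzureCredential
from azure.monitor.query import MetricAggregationType, MetricsQueryClient

client = MetricsQueryClient(DefaultAzureCredential())

resource_id = (
    "/subscriptions/<your-subscription-id>/resourceGroups/my-rg"
    "/providers/Microsoft.Compute/virtualMachines/my-vm"
)

response = client.query_resource(
    resource_id,
    metric_names=["Percentage CPU"],
    timespan=timedelta(hours=6),
    granularity=timedelta(minutes=15),
    aggregations=[MetricAggregationType.AVERAGE],
)

# Print one data point per 15-minute interval.
for metric in response.metrics:
    for series in metric.timeseries:
        for point in series.data:
            print(point.timestamp, point.average)
```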

Creating alerts

Alert rules

Azure allows you to create threshold-based or log-based alerts using its Azure Monitor Alerts service, which provides real-time notifications about critical events or conditions in your Azure resources. By setting up alert rules, you can trigger alerts on specific conditions in your Azure services, such as when a metric exceeds a specified threshold or when a log query returns a particular result.
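
To make this concrete, here is a hedged sketch of a threshold-based alert rule created with the azure-mgmt-monitor Python SDK. It fires when a VM's average CPU usage stays above 80% and routes notifications through an action group; the resource group, VM ID, and action group ID are placeholders, and exact model names can differ slightly between SDK versions.

```python
# Hypothetical example: create a metric alert rule that fires on high CPU usage
# using the azure-mgmt-monitor SDK.
from azure.identity import DefaultAzureCredential
from azure.mgmt.monitor import MonitorManagementClient
from azure.mgmt.monitor.models import (
    MetricAlertAction,
    MetricAlertResource,
    MetricAlertSingleResourceMultipleMetricCriteria,
    MetricCriteria,
)

subscription_id = "<your-subscription-id>"
client = MonitorManagementClient(DefaultAzureCredential(), subscription_id)

vm_id = (
    "/subscriptions/<your-subscription-id>/resourceGroups/my-rg"
    "/providers/Microsoft.Compute/virtualMachines/my-vm"
)
action_group_id = (
    "/subscriptions/<your-subscription-id>/resourceGroups/my-rg"
    "/providers/microsoft.insights/actionGroups/my-action-group"
)

client.metric_alerts.create_or_update(
    resource_group_name="my-rg",
    rule_name="high-cpu",
    parameters=MetricAlertResource(
        location="global",
        description="Average CPU above 80% for 15 minutes",
        severity=2,
        enabled=True,
        scopes=[vm_id],
        evaluation_frequency="PT5M",   # check every 5 minutes
        window_size="PT15M",           # over a 15-minute window
        criteria=MetricAlertSingleResourceMultipleMetricCriteria(
            all_of=[
                MetricCriteria(
                    name="HighCpu",
                    metric_name="Percentage CPU",
                    operator="GreaterThan",
                    threshold=80,
                    time_aggregation="Average",
                )
            ]
        ),
        actions=[MetricAlertAction(action_group_id=action_group_id)],
    ),
)
```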

alert notifications

Azure Alerts can be configured to send notifications via email, SMS, or to an Azure Event Grid topic. You can also integrate Azure Alerts with other tools, such as Microsoft Teams or Slack, to receive notifications in the way that is most convenient for you.

alert actions

Besides setting up a notification channel, Azure also allows you to configure more complex automated actions, such as restarting a server or executing a script, which are executed when the alert is triggered. You can also group multiple actions into an action group, which simplifies managing your alert actions and lets you respond to issues and resolve them before they impact your customers.

By using Azure Alerts, you can stay informed about critical events or conditions in your Azure resources and take timely action to resolve any issues.

Exporting logs to third-party platforms

As you may have noticed, Azure comes with a rather complex log management and monitoring system. If you wish to manage your logs elsewhere using a third-party log management service such as Better Stack Logs, Azure allows you to export log data to external destinations through different channels, such as the REST API or an export job.

Using the REST API, third-party applications and services can retrieve log data stored in Azure Monitor Logs through an API endpoint. Alternatively, you may create an Azure Log Analytics export job, which allows you to export the log data to other Azure services, including Azure Blob Storage, Event Hubs, and Power BI.
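
As a rough sketch of the API-based route, the example below queries a Log Analytics workspace with the azure-monitor-query package and forwards the rows to an external HTTP endpoint. The target URL and token are placeholders for whichever third-party service you use, and the query is only an example.

```python
# Hypothetical example: pull recent activity log records from a workspace and
# forward them to an external log management service over HTTP.
import json
from datetime import timedelta

import requests
from azure.identity import DefaultAzureCredential
from azure.monitor.query import LogsQueryClient

client = LogsQueryClient(DefaultAzureCredential())

response = client.query_workspace(
    workspace_id="<your-workspace-id>",
    query="AzureActivity | take 100",
    timespan=timedelta(hours=1),
)

for table in response.tables:
    # Turn each row into a JSON object keyed by column name.
    rows = [dict(zip(table.columns, row)) for row in table.rows]
    requests.post(
        "https://logs.example.com/ingest",                   # placeholder endpoint
        headers={
            "Authorization": "Bearer <your-source-token>",   # placeholder token
            "Content-Type": "application/json",
        },
        data=json.dumps(rows, default=str),  # default=str handles datetime columns
        timeout=10,
    )
```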

Azure Blob Storage is a cloud-based object storage service for storing unstructured data. It is highly scalable, durable, and cost-effective, making it the perfect tool for backing up and archiving log data in the long term, as it does not have a fixed log retention period.

Azure Event Hubs is a cloud-based event-processing tool that provides a highly scalable, durable event ingestion and delivery service. This option allows you to stream log data to other applications and services for further analysis.

Lastly, Power BI is a business intelligence and data visualization platform that allows you to analyze and visualize data from a wide range of sources.

Best practices when logging in Azure

Lastly, there are a few things you should note when logging in Azure:

  1. Use Azure SDKs when implementing the logging system. Azure provides SDKs for .NET, Java, JavaScript, Node.js, and Python. It is better to use these client libraries if you are building logging systems in these languages.
  2. Log as much as necessary. Make sure you are recording all events related to your application's functioning, including errors, authentication attempts, data access and modifications, and other essential actions performed in your application.
  3. Include contextual information in your logs. Each log record should contain enough contextual information to describe the event. For example, suppose you are running a blog application and a user adds a new post. Instead of recording a simple message, you should also include information such as the user ID, post ID, timestamp, user agent, and other relevant details about the event.
  4. Use a structured logging format. Using a structured logging format ensures that your log records can be automatically processed by various logging tools, which will save you time when investigating an issue. JSON is the go-to structured format for most people, but other options like logfmt also exist (see the sketch after this list).
  5. Exclude sensitive information. Make sure to never log business secrets, or personal data such as email addresses, passwords, or credit card information so that you don't compromise user privacy or incur regulatory fines.
  6. Use the appropriate log level. Always ensure your log records have the appropriate log level so that you can easily differentiate events that require urgent attention from those that are merely informational.
  7. Centralize all your logs in Azure Monitor. Every time you create a new Resource, ensure you send the related log records to Azure Monitor so that you will have a centralized place to manage and process all log records.
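
To illustrate points 3 and 4, here is a small sketch of structured JSON logging with contextual fields using only Python's standard library. The field names (user_id, post_id, user_agent) are illustrative, and in practice you may prefer a ready-made library for this.

```python
# Hypothetical example: emit JSON-formatted log records that carry contextual
# fields alongside the message.
import json
import logging


class JsonFormatter(logging.Formatter):
    def format(self, record):
        payload = {
            "timestamp": self.formatTime(record),
            "level": record.levelname,
            "message": record.getMessage(),
        }
        # Merge any contextual fields passed through `extra=`.
        payload.update(getattr(record, "context", {}))
        return json.dumps(payload)


handler = logging.StreamHandler()
handler.setFormatter(JsonFormatter())

logger = logging.getLogger("blog")
logger.addHandler(handler)
logger.setLevel(logging.INFO)

# A user publishing a post, recorded with enough context to investigate later.
logger.info(
    "post created",
    extra={"context": {"user_id": "user-42", "post_id": "post-1001",
                       "user_agent": "Mozilla/5.0"}},
)
```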

Better Stack Logs: a modern log management platform

Logs

As we've discussed, Azure Monitor is a very powerful infrastructure monitoring system. However, it is somewhat complex to set up and use. If you are looking for a modern alternative to Azure Monitor that is easier to set up, consider sending your application logs to Better Stack Logs instead. Better Stack Logs is a cloud-based log management solution that allows you to view, search, and process all your logs in one place.

Better Stack Logs offers many client libraries that allow you to send your application logs to the platform directly. We have also created detailed logging guides for many different languages.

Final thoughts

In this article, we discussed some fundamental concepts regarding logging in Azure. We explored how to collect log data from different sources, query the collected log entries, visualize them by creating charts, set up log-based alerts, and export log data to external destinations. Lastly, we listed some best-practice guidelines to follow when logging in Azure so that you can use Azure Monitor to its full potential. We hope this tutorial has helped you understand the various logging and monitoring features provided by Azure.

If you wish to dig deeper into the subject of logging, we also provide several tutorials regarding log rotation, log levels, and centralizing logs to help you build a more effective logging system.

Thanks for reading, and happy logging!
