How to Collect, Process, and Ship Log Data with Rsyslog
Modern computing systems generate diverse log messages, from system logs (including kernel and boot messages) to logs from applications, databases, and network services or daemons. These logs play a crucial role in troubleshooting and diagnosing issues when they arise, and they are most useful when they have been centralized.
To centralize logs, you can use a log shipper, a tool designed to collect logs from various sources and forward them to diverse locations. Rsyslog is a prominent log shipper operating based on the syslog protocol.
Rsyslog ships with advanced features, such as filtering, and supports both TCP and UDP protocols for transporting messages. It can handle logs related to mail, authorizations, kernel messages, and more.
This comprehensive tutorial will guide you through using Rsyslog to collect, process, and forward log data to a central location. First, you will configure Rsyslog to read logs from a file. Next, you will explore how to process logs using Rsyslog. Finally, you will centralize the logs to Better Stack.
Prerequisites
Before you begin, ensure you have access to a system with a non-root user account that has sudo
privileges.
Once you've confirmed these prerequisites, create a directory to store your configuration files and applications:
mkdir log-processing-stack
Next, navigate into the newly created directory:
cd log-processing-stack
With the directory set up, you're ready to install Rsyslog.
Installing Rsyslog
Rsyslog is pre-installed on many systems and may sometimes need to be updated. It's considered best practice to install the latest version to ensure you have access to the most recent features and security enhancements.
Below are the installation instructions, tested on Ubuntu 22.04. For other systems, consult the Rsyslog documentation for installation guidelines.
First, install the latest version of Rsyslog:
sudo apt-get install rsyslog
If you see the message "rsyslog is already the newest version," it indicates that you have the latest version installed.
Confirm the installation and check the Rsyslog version with the following:
rsyslogd -v
You should see an output similar to this:
rsyslogd 8.2312.0 (aka 2023.12) compiled with:
PLATFORM: x86_64-pc-linux-gnu
PLATFORM (lsb_release -d):
FEATURE_REGEXP: Yes
GSSAPI Kerberos 5 support: Yes
FEATURE_DEBUG (debug build, slow code): No
32bit Atomic operations supported: Yes
64bit Atomic operations supported: Yes
memory allocator: system default
Runtime Instrumentation (slow code): No
uuid support: Yes
systemd support: Yes
Config file: /etc/rsyslog.conf
PID file: /run/rsyslogd.pid
Number of Bits in RainerScript integers: 64
Additionally, ensure that the Rsyslog service is active and running:
systemctl status rsyslog
You should see the status as "active (running)," confirming that Rsyslog is operational:
● rsyslog.service - System Logging Service
Loaded: loaded (/usr/lib/systemd/system/rsyslog.service; enabled; preset: enabled)
Active: active (running) since Thu 2025-05-22 09:36:01 UTC; 2 weeks 0 days ago
TriggeredBy: ● syslog.socket
Docs: man:rsyslogd(8)
man:rsyslog.conf(5)
https://www.rsyslog.com/doc/
Main PID: 927 (rsyslogd)
Tasks: 4 (limit: 4540)
Memory: 64.3M (peak: 65.0M)
CPU: 1min 6.787s
CGroup: /system.slice/rsyslog.service
└─927 /usr/sbin/rsyslogd -n -iNONE
Warning: some journal files were not opened due to insufficient permissions.
With Rsyslog successfully installed and running, let's understand how it works.
How Rsyslog works
Before delving into how Rsyslog collects application logs, it's essential to understand how it works with system logs.
In your system, various applications like SSHD, mail clients/servers, and cron tasks generate logs at frequent intervals. These applications write log messages to /dev/log, a pseudo device that they treat as if it were a regular file.
The Rsyslog daemon monitors this file, collecting logs as they are written, and redirects them to individual plain text files in the /var/log
directory, including the /var/log/syslog
file. Rsyslog can route logs to their appropriate files by inspecting header information, such as priority and message origin, which it uses for filtering.
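You can watch this routing happen with the logger utility, which submits a test message to the syslog socket. The facility and priority below are arbitrary choices for illustration:
logger -p user.info "Hello from logger"
sudo tail -n 1 /var/log/syslog
The test message should appear at the end of /var/log/syslog, prefixed with the time and hostname.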
The routing of these messages is based on rules defined in the 50-default.conf
file, located in the /etc/rsyslog.d/
directory, which we'll explore shortly. These default configurations apply whether Rsyslog was freshly installed or was already present on your system.
However, data originates from diverse sources, and these sources might lack rules in the default configurations.
Building on this knowledge, Rsyslog can be extended to collect logs from additional inputs and redirect them to various destinations, including remote ones, as illustrated in the diagram below:
To understand this process, imagine Rsyslog as a pipeline. On one end, Rsyslog collects inputs, transforms them, and forwards them to the other end—the destination.
This can be achieved with a custom configuration file in the /etc/rsyslog.d/
directory, structured as follows:
module(load="<module_name>")
# Collect logs
input(...)
# Modify logs
template(name="<template_name>") {}
# Redirect logs to the destination
action(type="<module_name>")
The main components include:
input: collects logs from various sources.
template: modifies the log message format.
action: delivers logs to different destinations.
Rsyslog uses modules extensively to accomplish its tasks.
Rsyslog inputs
Rsyslog features modules designed to collect logs from various sources, identifiable by names starting with the im
prefix. Here are a few examples of these input modules:
imhttp: collects plaintext messages via HTTP.
imjournal: fetches system journal messages into Syslog.
imfile: reads text files and converts their contents into Syslog messages.
imdocker: collects logs from Docker containers using the Docker REST API.
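For instance, if you also wanted Rsyslog to receive syslog messages from other machines over the network, you could load a network input module. The snippet below is only a sketch of the idea and is not needed for this tutorial:
module(load="imtcp")
input(type="imtcp" port="514")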
Rsyslog Message Modification Modules
For modifying log messages, Rsyslog provides message modification modules, typically prefixed with mm:
mmjsonparse: parses structured log messages conforming to the CEE/lumberjack spec.
mmfields: extracts specific fields from log entries.
mmkubernetes: adds Kubernetes metadata to each log event.
mmanon: anonymizes IP addresses for privacy.
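As a sketch of how these modules are wired together (not used later in this tutorial), the configuration below loads mmjsonparse to parse CEE-style JSON payloads into fields and writes only successfully parsed messages to a hypothetical file:
module(load="mmjsonparse")
# Run the parser on each message, then check the result it sets
action(type="mmjsonparse")
if $parsesuccess == "OK" then {
    action(type="omfile" file="/var/log/parsed.log")
}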
Rsyslog output modules
Rsyslog offers a wide array of output modules, recognizable by names starting with the om
prefix. These modules allow forwarding log messages to various destinations:
omfile: writes log entries to a file on the local system.
ommysql: sends log entries to a MySQL database.
omrabbitmq: forwards log data to RabbitMQ, a popular message broker.
omelasticsearch: delivers log output to Elasticsearch, a robust search and analytics engine.
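To give a feel for the syntax, the one-line sketch below (with a placeholder hostname) would forward every message to a remote syslog server over TCP using omfwd, Rsyslog's built-in forwarding module:
action(type="omfwd" target="logs.example.com" port="514" protocol="tcp")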
Now that you have an idea of the available Rsyslog modules and what they do, let's analyze the Rsyslog configuration file in greater detail.
Understanding the Rsyslog configuration
When Rsyslog starts running on your system, it operates with a default configuration file. It collects logs from various processes and directs them to plain text files in the /var/log directory.
Rsyslog relies on rules predefined in the default configuration file. You can also define your own rules, global directives, or modules.
Rsyslog rules
To comprehend how rules work, open the 50-default.conf
configuration file in your preferred text editor. This tutorial uses nano, a command-line text editor:
sudo nano /etc/rsyslog.d/50-default.conf
In the initial part of the file, you'll find contents similar to this (edited for brevity):
...
auth,authpriv.* /var/log/auth.log
*.*;auth,authpriv.none -/var/log/syslog
#cron.* /var/log/cron.log
#daemon.* -/var/log/daemon.log
kern.* -/var/log/kern.log
...
The lines in the file are rules. A rule comprises a filter for selecting log messages and an action specifying the path to send the logs. Lines starting with #
are comments and won't be executed.
Consider this line:
kern.* -/var/log/kern.log
This line can be divided into a selector that filters syslog messages (kern.*) and an action specifying the path to forward the logs to (-/var/log/kern.log).
Let's examine the selector kern.* in detail. It is a facility/priority-based filter, a commonly used method for filtering syslog messages, and can be interpreted as follows:
FACILITY.PRIORITY
FACILITY: the subsystem generating the log message. kern is one facility; others include authpriv, cron, user, daemon, mail, auth, syslog, lpr, news, uucp, and so on. To match all facilities, you can use *.
PRIORITY: the priority of the log message. Priorities include debug, info, notice, warning, warn (same as warning), err, error (same as err), crit, alert, emerg, and panic. To match messages of any priority, you can use *. Optionally, the priority keyword none excludes all messages from a facility, as in the auth,authpriv.none rule above.
Filter and action are separated by one or more spaces or tabs.
The last part, -/var/log/kern.log, is the action indicating the target file where matching messages are written. The leading dash is legacy syslog syntax that tells the daemon not to sync the file after every write; Rsyslog keeps it mainly for compatibility.
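To make the syntax concrete, a custom rule of your own could route every mail-facility message with priority err or higher to a dedicated file (a hypothetical rule, not part of the default configuration):
mail.err    /var/log/mail-errors.log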
In this configuration file, most rules direct output to various files, which you can find in /var/log
.
Close the configuration file and use the following command to list all contents in the /var/log
directory:
ls -l /var/log/
The output will include files like:
total 444
-rw-r--r-- 1 root root 0 Oct 22 04:33 alternatives.log
drwxr-xr-x 2 root root 4096 Oct 27 08:37 apt
-rw-r----- 1 syslog adm 7596 Oct 27 08:44 auth.log
-rw-r--r-- 1 root root 0 Oct 22 04:33 bootstrap.log
-rw-rw---- 1 root utmp 0 Feb 17 2023 btmp
-rw-r----- 1 syslog adm 105503 Oct 27 08:33 cloud-init.log
-rw-r----- 1 root adm 5769 Oct 27 08:33 cloud-init-output.log
drwxr-xr-x 2 root root 4096 Feb 10 2023 dist-upgrade
-rw-r----- 1 root adm 46597 Oct 27 08:33 dmesg
-rw-r--r-- 1 root root 6664 Oct 27 08:37 dpkg.log
-rw-r--r-- 1 root root 32032 Oct 27 08:33 faillog
drwxr-sr-x+ 4 root systemd-journal 4096 Oct 27 08:33 journal
-rw-r----- 1 syslog adm 70510 Oct 27 08:44 kern.log
drwxr-xr-x 2 landscape landscape 4096 Oct 27 08:33 landscape
-rw-rw-r-- 1 root utmp 292292 Oct 27 08:35 lastlog
drwx------ 2 root root 4096 Feb 17 2023 private
-rw-r----- 1 syslog adm 136675 Oct 27 08:44 syslog
-rw-r--r-- 1 root root 4748 Oct 27 08:37 ubuntu-advantage.log
-rw-r----- 1 syslog adm 10487 Oct 27 08:44 ufw.log
drwxr-x--- 2 root adm 4096 Oct 22 04:28 unattended-upgrades
-rw-rw-r-- 1 root utmp 3840 Oct 27 08:35 wtmp
Most files Rsyslog creates belong to the syslog
user and the adm
group. Other applications besides Rsyslog also create logs in this directory, such as MySQL and Nginx.
This behavior of creating files with these attributes is defined in another default configuration file, /etc/rsyslog.conf
.
Rsyslog global directives and modules
When Rsyslog runs, it reads the /etc/rsyslog.conf
file, another default configuration already defined. This file contains global directives, modules, and references to all the configuration files in the /etc/rsyslog.d/
directory, including the /etc/rsyslog.d/50-default.conf
we examined in the previous section.
Open the /etc/rsyslog.conf
configuration file using the following command:
nano /etc/rsyslog.conf
Locate the following section near the bottom of the file:
...
#
# Set the default permissions for all log files.
#
$FileOwner syslog
$FileGroup adm
$FileCreateMode 0640
$DirCreateMode 0755
$Umask 0022
$PrivDropToUser syslog
$PrivDropToGroup syslog
...
In this file, directives such as $FileOwner and $FileGroup specify the owner, group, and permissions of the files Rsyslog creates. If you need to change ownership, this is the section to look at. Any keyword prefixed with $ is a legacy-style global directive whose value you can modify.
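For example, if you wanted newly created log files to be readable by their owner only, you could tighten the creation mode. This is a hypothetical tweak; leave the defaults in place for this tutorial:
$FileCreateMode 0600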
Further down the configuration file, you'll find lines like:
...
#
# Where to place spool and state files
#
$WorkDirectory /var/spool/rsyslog
#
# Include all config files in /etc/rsyslog.d/
#
$IncludeConfig /etc/rsyslog.d/*.conf
The $WorkDirectory
specifies the location Rsyslog uses to store state files, and $IncludeConfig
includes all the configuration files defined in the /etc/rsyslog.d
directory. Rsyslog will read any configuration file you create in this directory. This is where you will define your custom configurations.
Now that you understand that Rsyslog has default configurations that route most system logs to various files in /var/log
, you are ready to create a demo application that generates logs. Later, you'll configure Rsyslog to read these logs.
Developing a demo logging application
In this section, you'll create a logging application built with the Bash scripting language. The application will generate JSON logs at regular intervals, simulating a high-traffic real-world application.
To begin, ensure you are in the log-processing-stack directory you created earlier, then create a subdirectory for the demo logging application:
mkdir logify
Navigate into the directory:
cd logify
Next, create a logify.sh
file:
nano logify.sh
In your logify.sh
file, add the following code to produce logs:
#!/bin/bash
filepath="/var/log/logify/app.log"
create_log_entry() {
local info_messages=("Connected to database" "Task completed successfully" "Operation finished" "Initialized application")
local random_message=${info_messages[$RANDOM % ${#info_messages[@]}]}
local http_status_code=200
local ip_address="127.0.0.1"
local level=30
local pid=$$
local ssn="407-01-2433"
local time=$(date +%s)
local log='{"status": '$http_status_code', "ip": "'$ip_address'", "level": '$level', "msg": "'$random_message'", "pid": '$pid', "ssn": "'$ssn'", "time": '$time'}'
echo "$log"
}
while true; do
log_record=$(create_log_entry)
echo "${log_record}" >> "${filepath}"
sleep 3
done
The create_log_entry() function generates a structured JSON log entry with details such as the severity level, message, and HTTP status code. The script then enters an infinite loop that repeatedly calls create_log_entry() and appends the result to a file in the /var/log/logify directory every three seconds.
When you finish writing the code, save and exit the file. Then make the file executable:
chmod +x logify.sh
Next, create the /var/log/logify
directory to store the application logs:
sudo mkdir /var/log/logify
Make the currently logged-in user, referenced by the $USER variable, the owner of the /var/log/logify directory:
sudo chown -R $USER:$USER /var/log/logify/
Run the logify.sh
script in the background:
./logify.sh &
The & tells the shell to run the script in the background, allowing you to continue using the terminal for other tasks while the program runs.
When you press enter, the script will start running and you'll see something like:
[1] 652089
Here, 652089
is the process ID, which can be used to terminate the script if needed.
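If you need to stop the script later, pass that process ID to kill (the PID below is the example value from the output above; use the one printed on your system):
kill 652089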
Now, view the app.log
contents with the tail
command:
tail -n 4 /var/log/logify/app.log
The output will show structured JSON logs similar to this:
{"status": 200, "ip": "127.0.0.1", "level": 30, "emailAddress": "user@mail.com", "msg": "Connected to database", "pid": 169516, "ssn": "407-01-2433", "timestamp": 1749119648}
{"status": 200, "ip": "127.0.0.1", "level": 30, "msg": "Operation finished", "pid": 652089, "ssn": "407-01-2433", "time": 1749119651}
{"status": 200, "ip": "127.0.0.1", "level": 30, "emailAddress": "user@mail.com", "msg": "Task completed successfully", "pid": 169516, "ssn": "407-01-2433", "timestamp": 1749119651}
{"status": 200, "ip": "127.0.0.1", "level": 30, "msg": "Task completed successfully", "pid": 652089, "ssn": "407-01-2433", "time": 1749119654}
With the application generating structured JSON logs, you are now ready to use Rsyslog to read these log entries.
Getting started with Rsyslog
Now that you have developed an application to produce logs at regular intervals, you will use Rsyslog to read the logs from a file and transform them into syslog messages stored under the /var/log/syslog
file.
To begin, create a configuration file with a name of your choosing in the /etc/rsyslog.d
directory:
sudo nano /etc/rsyslog.d/51-rsyslog-logify.conf
In the 51-rsyslog-logify.conf
file, add the following configuration:
global(
workDirectory="/var/spool/rsyslog"
)
# Load the imfile module to read logs from a file
module(load="imfile")
# Define a new input for reading logs from a file
input(type="imfile"
File="/var/log/logify/app.log"
Tag="FileLogs"
PersistStateInterval="10"
Facility="local0")
# Send logs with the specified tag to the console
if $syslogtag == 'FileLogs' then {
action(type="omfile"
file="/var/log/syslog")
}
In the first line, the global() directive sets the working directory where Rsyslog stores state files. These files allow Rsyslog to keep track of how much of each log file it has already processed.
Next, the module() directive loads the imfile module, which reads logs from files.
Following that, you define an input using the imfile
module to read logs from the specified path under the File
parameter. You then add a tag, FileLogs, to each log entry processed, and the PersistStateInterval parameter specifies how often the state file should be updated as logs are read.
Finally, a conditional expression checks if the log tag equals the FileLogs
tag. If true, an action using the omfile
module is defined to forward the logs to the /var/log/syslog
file.
After you are finished, save and exit the configuration file.
Before restarting Rsyslog, it's a good idea to check the configuration file for syntax errors:
rsyslogd -f /etc/rsyslog.d/51-rsyslog-logify.conf -N1
When the configuration file has no errors, you will see output similar to this:
rsyslogd: version 8.2312.0, config validation run (level 1), master config /etc/rsyslog.d/51-rsyslog-logify.conf
rsyslogd: End of config validation run. Bye.
Now restart Rsyslog:
sudo systemctl restart rsyslog.service
When Rsyslog restarts, it will start sending the logs to /var/log/syslog
. To check the logs in real-time as they get written, enter the following command:
sudo tail -f /var/log/syslog
The log entries will be displayed, showing the timestamp, hostname, log tag, and the log message:
2025-06-05T10:35:36.187305+00:00 ubuntu FileLogs {"status": 200, "ip": "127.0.0.1", "level": 30, "msg": "Task completed successfully", "pid": 652089, "ssn": "407-01-2433", "time": 1749119736}
2025-06-05T10:35:38.913045+00:00 ubuntu FileLogs {"status": 200, "ip": "127.0.0.1", "level": 30, "msg": "Initialized application", "pid": 652089, "ssn": "407-01-2433", "time": 1749119738}
...
Since the /var/log/syslog
file contains logs from other processes, it's common to see logs from sources such as kernel
.
Now that Rsyslog can read application logs, you can further process the log messages as needed.
Transforming Logs with Rsyslog
When Rsyslog reads log entries, you can transform them before sending them to the output. You can enrich them with new fields or format them differently. One common transformation is formatting logs as JSON using Rsyslog templates.
Formatting logs in JSON with Rsyslog templates
Rsyslog allows you to format logs into various formats using templates. By default, Rsyslog automatically formats log messages, even if no templates are specified, using its built-in templates. However, you might want to format your logs in JSON, which is structured and machine parsable.
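For reference, you can point an action at one of these built-in templates by name. For example, the following line (not needed for this tutorial) would write messages using Rsyslog's high-precision default file format:
action(type="omfile" file="/var/log/syslog" template="RSYSLOG_FileFormat")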
If you look at the logs Rsyslog is currently formatting, you will notice that the logs are not structured:
2025-06-05T10:35:38.913045+00:00 ubuntu FileLogs {"status": 200, "ip": "127.0.0.1", "level": 30, "msg": "Initialized application", "pid": 652089, "ssn": "407-01-2433", "time": 1749119738}
Many remote destinations prefer structured logs, so it's a good practice to structure the log messages.
In Rsyslog, you can use templates with the template()
object to modify and structure logs. Open the configuration file:
sudo nano /etc/rsyslog.d/51-rsyslog-logify.conf
Add the template to the configuration file:
...
input(type="imfile"
File="/var/log/logify/app.log"
Tag="FileLogs"
PersistStateInterval="10"
Facility="local0")
template(name="json-template" type="list" option.jsonf="on") {
property(outname="@timestamp" name="timereported" dateFormat="rfc3339" format="jsonf")
property(outname="host" name="hostname" format="jsonf")
property(outname="severity" name="syslogseverity" caseConversion="upper" format="jsonf" datatype="number")
property(outname="facility" name="syslogfacility" format="jsonf" datatype="number")
property(outname="syslog-tag" name="syslogtag" format="jsonf")
property(outname="source" name="app-name" format="jsonf" onEmpty="null")
property(outname="message" name="msg" format="jsonf")
}
if $syslogtag == 'FileLogs' then {
action(
type="omfile"
file="/var/log/syslog"
template="json-template"
)
}
In the configuration above, you define a json-template
template using the template()
object. This template formats the syslog message as JSON. The template includes various property statements to add fields to the syslog message. Each property statement specifies the name
of the property to access and the outname
, which defines the output field name in the JSON object. The format
parameter is set to "jsonf"
to format the property as JSON. Some properties include a timestamp, host, syslog-tag, and the syslog message itself.
Finally, you add the template
parameter in the action section, referencing the newly defined json-template
.
After saving your file, restart Rsyslog:
sudo systemctl restart rsyslog
Now, check the logs being written:
sudo tail -f /var/log/syslog
The output shows that the syslog messages are now formatted as JSON. They also include additional fields that provide more context:
{"@timestamp":"2025-06-05T10:40:25.189748+00:00", "host":"ubuntu", "severity":5, "facility":16, "syslog-tag":"FileLogs", "source":"FileLogs", "message":"{\"status\": 200, \"ip\": \"127.0.0.1\", \"level\": 30, \"emailAddress\": \"user@mail.com\", \"msg\": \"Task completed successfully\", \"pid\": 169516, \"ssn\": \"407-01-2433\", \"timestamp\": 1749120025}"}
...
The logs in the output are now structured in JSON format and contain more detailed information. Next, you will add custom fields to the log event.
Adding Custom Fields with Rsyslog
In Rsyslog, you can add custom fields to log entries using constant statements. These statements allow you to insert fixed values into log messages.
First, open the configuration file:
sudo nano /etc/rsyslog.d/51-rsyslog-logify.conf
Add a new constant statement to include a custom field called environment
with the value dev
:
template(name="json-template" type="list" option.jsonf="on") {
property(outname="@timestamp" name="timereported" dateFormat="rfc3339" format="jsonf")
property(outname="host" name="hostname" format="jsonf")
property(outname="severity" name="syslogseverity" caseConversion="upper" format="jsonf" datatype="number")
property(outname="facility" name="syslogfacility" format="jsonf" datatype="number")
property(outname="syslog-tag" name="syslogtag" format="jsonf")
property(outname="source" name="app-name" format="jsonf" onEmpty="null")
property(outname="message" name="msg" format="jsonf")
constant(outname="environment" value="dev" format="jsonf")
}
In the configuration above, a constant
statement has been added with the outname
set to environment
and the value
set to dev
. This constant statement inserts a fixed field named environment
with the value dev
into each log entry.
Save and exit the configuration file. Then, restart Rsyslog to apply the changes:
sudo systemctl restart rsyslog
To verify if the custom field has been added, tail the syslog file:
sudo tail -f /var/log/syslog
You will observe that Rsyslog has included an environment
field in each log entry at the end of the log event:
{"@timestamp":"2025-06-05T10:42:31.631819+00:00", "host":"ubuntu", "severity":5, "facility":16, "syslog-tag":"FileLogs", "source":"FileLogs", "message":"{\"status\": 200, \"ip\": \"127.0.0.1\", \"level\": 30, \"emailAddress\": \"user@mail.com\", \"msg\": \"Operation finished\", \"pid\": 169516, \"ssn\": \"407-01-2433\", \"timestamp\": 1749120151}", "environment": "dev"}
Now that you can add custom fields to log events, you are ready to forward logs to Better Stack.
Configuring Rsyslog with Better Stack
Better Stack provides an automated setup script that configures Rsyslog to forward logs. The script will automatically:
- Detect your system configuration
- Create the necessary Rsyslog configuration for Better Stack as 70-logtail.conf
- Set up secure TLS connections to Better Stack's servers
First, install the required TLS package for secure log forwarding:
sudo apt-get install rsyslog-gnutls
Next, create a free Better Stack account. Once registered, proceed to the Sources section in your dashboard and click the Connect source button:
Provide a name for your source, such as "Logify logs," and select "Rsyslog" as the platform:
After creating the source, copy the Source Token and Ingesting Host provided by Better Stack:
Run the following command, replacing $SOURCE_TOKEN
with your actual source token from Better Stack:
wget -qO- https://telemetry.betterstack.com/rsyslog/$SOURCE_TOKEN | sudo sh
The script's output will look similar to this:
Starting Betterstackdata.com automatic rsyslog setup
Setting up rsyslog...
[0/3] Checking prerequisites
- wget OK
[1/3] Testing Let's Encrypt SSL certificates setup
- curl OK
- OK
[2/3] Writing rsyslog configuration into /etc/rsyslog.d/70-logtail.conf
[3/3] Restarting rsyslog
Better Stack rsyslog setup is complete.
However, since you want to send only your logify application logs (not all system logs), you need to modify the generated configuration to specifically target your application logs.
After running the Better Stack setup script, you need to customize the 70-logtail.conf
file to read logs from your logify application.
Open the Better Stack configuration file for editing:
sudo nano /etc/rsyslog.d/70-logtail.conf
The file contains the Better Stack forwarding configuration. Add the following file-reading configuration at the very top, before any existing content:
global(DefaultNetstreamDriverCAFile="/etc/ssl/certs/ca-certificates.crt")
global(
workDirectory="/var/spool/rsyslog"
)
# Load the imfile module to read logs from a file
module(load="imfile")
# Define a new input for reading logs from a file
input(type="imfile"
File="/var/log/logify/app.log"
Tag="FileLogs"
PersistStateInterval="10"
Facility="local0")
template(name="LogtailFormat" type="list") {
...
}
# Existing Better Stack configuration below...
Next, you need to modify the action section to only send your logify logs to Better Stack. Find the action
section in the file (it will contain type="omfwd"
) and wrap it with a conditional statement.
The generated configuration will look like this:
...
action(
type="omfwd"
protocol="tcp"
target="YOUR_INGESTING_HOST"
port="6514"
template="LogtailFormat"
TCP_Framing="octet-counted"
StreamDriver="gtls"
StreamDriverMode="1"
StreamDriverAuthMode="x509/name"
StreamDriverPermittedPeers="*.betterstackdata.com"
queue.spoolDirectory="/var/spool/rsyslog"
queue.filename="logtail"
queue.maxdiskspace="75m"
queue.type="LinkedList"
queue.saveonshutdown="on"
)
Important: You must wrap this entire action
block with a conditional statement to filter only your logify logs. Modify it to:
# Send only FileLogs (our logify application) to Better Stack
if $syslogtag == 'FileLogs' then {
action(
type="omfwd"
protocol="tcp"
target="YOUR_INGESTING_HOST"
port="6514"
template="LogtailFormat"
TCP_Framing="octet-counted"
StreamDriver="gtls"
StreamDriverMode="1"
StreamDriverAuthMode="x509/name"
StreamDriverPermittedPeers="*.betterstackdata.com"
queue.spoolDirectory="/var/spool/rsyslog"
queue.filename="logtail"
queue.maxdiskspace="75m"
queue.type="LinkedList"
queue.saveonshutdown="on"
)
}
Without this conditional statement, Rsyslog will send ALL system logs to Better Stack, not just your logify application logs.
Since you're now reading logs directly in the Better Stack configuration, remove your previous local configuration to avoid conflicts:
sudo rm /etc/rsyslog.d/51-rsyslog-logify.conf
Before restarting Rsyslog, validate your configuration file for syntax errors:
rsyslogd -f /etc/rsyslog.d/70-logtail.conf -N1
When the configuration file has no errors, you will see output similar to this:
rsyslogd: version 8.2312.0, config validation run (level 1), master config /etc/rsyslog.d/70-logtail.conf
rsyslogd: End of config validation run. Bye.
Now restart Rsyslog to apply the new configuration:
sudo systemctl restart rsyslog
To verify that your logs are being sent to Better Stack, first ensure your logify script is still running:
ps aux | grep logify
dev 169516 0.0 0.0 7740 3456 ? S Jun02 3:06 /bin/bash ./logify.sh
If the script is not running, restart it:
cd log-processing-stack/logify
./logify.sh &
To monitor the log forwarding process, check the Rsyslog service status:
sudo systemctl status rsyslog
You can also monitor Rsyslog's activity in real-time:
sudo journalctl -u rsyslog -f
After a few moments, navigate to your Better Stack dashboard and go to "Live tail." You should see your logify application logs appearing in real-time:
Click on any log entry to view its detailed information:
Final thoughts
In this comprehensive guide, you explored the functionality and flexibility of Rsyslog for effective log management. You began by learning how Rsyslog works, then progressed to using it to read logs from a file, transform log data into JSON format, and add custom fields.
Finally, you configured Rsyslog to forward logs to Better Stack.
With this foundation, you're now well-prepared to integrate Rsyslog into your own projects. To deepen your understanding, check out the official Rsyslog documentation.
While Rsyslog is a powerful log shipper, there are several other tools available. To compare alternatives and choose the right solution for your needs, explore our log shippers guide.
Thank you and happy logging!