# Log Management with Journalctl: A SysAdmin's Guide

When troubleshooting server issues or diagnosing problems with your services,
logs are your most valuable resource.

In Linux systems using [systemd](https://betterstack.com/community/guides/logging/how-to-control-systemd-with-systemctl/), a
logging system called the **journal** is used to capture and centralize log
entries from the kernel, various `systemd` services, and other userland
processes.

This journal is implemented by the `journald` daemon, which collects and stores
log data from various sources in a structured, binary format for ease of
retrieval.

<iframe width="100%" height="315" src="https://www.youtube.com/embed/Y_erZnIhgKg" title="YouTube video player" frameborder="0" allow="accelerometer; autoplay; clipboard-write; encrypted-media; gyroscope; picture-in-picture; web-share" allowfullscreen></iframe>

However, this centralized log can grow large and contain tens or even hundreds
of thousands of entries. To efficiently find the information you need, you must
master the art of filtering and querying the journal.

This is where the `journalctl` utility comes in. It allows you to query and
filter logs based on various criteria, such as time, service, or boot session,
and it can output log data in different formats, making it adaptable to various
analysis and visualization needs.

In this guide, you'll learn how to effectively navigate and filter the `systemd`
journal using `journalctl`, enabling you to:

- Isolate logs from specific time ranges, services, or boot sessions.
- Search for entries containing specific keywords or patterns.
- Customize the output format for easier analysis.
- Manage journal storage to prevent excessive disk usage.

Let's get started!

[summary]

## Side note: Centralize Journald logs in minutes

If you are troubleshooting more than one server, `journalctl` quickly becomes tedious because it only works per host. Shipping the journal to Better Stack gives you one place to search, filter, and correlate logs across all machines.

<iframe width="100%" height="315" src="https://www.youtube.com/embed/xmqvQqPkH24" title="YouTube video player" frameborder="0" allow="accelerometer; autoplay; clipboard-write; encrypted-media; gyroscope; picture-in-picture; web-share" allowfullscreen></iframe>

[/summary]

## Granting users access to system logs

By default, Linux users can only view log entries generated by their own
processes and services. If you attempt to view all system logs using
`journalctl`, you might encounter a message like this:

```text
[output]
Hint: You are currently not seeing messages from other users and the system.
      Users in groups 'adm', 'systemd-journal' can see all messages.
      Pass -q to turn off this notice.
. . .
```

This message indicates that your current user lacks the necessary permissions to
access all log entries. To grant a user access to the complete system journal,
add them to a privileged group, such as `adm` or `systemd-journal`:

```command
sudo usermod -a -G systemd-journal <user>
```

After adding the user to the group, they need to log out and log back in for the
changes to take effect. Once they log back in, they can view all system logs
using `journalctl`.
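Before changing anything, you can quickly check whether your account is already in one of these groups. Here's a small sketch using `id -nG`, which lists a user's group names:

```command
# Check whether the current user belongs to one of the
# privileged groups mentioned above (adm or systemd-journal).
if id -nG "$(id -un)" | tr ' ' '\n' | grep -qxE 'adm|systemd-journal'; then
  echo "journal access: ok"
else
  echo "journal access: missing"
fi
```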

## Viewing Journald logs with Journalctl

This section will guide you through accessing and navigating system logs using
the `journalctl` command. We'll start with basic queries and then explore ways
to customize the output.

To see all log entries collected by the `journald` daemon, run the `journalctl`
command without arguments:

```command
journalctl
```

![journalctl without arguments](https://imagedelivery.net/xZXo0QFi-1_4Zimer-T0XQ/f675a479-d24f-4cee-cb8e-675f01d81900/public =2960x1734)

This command lists all available journal entries in chronological order (from
oldest to newest) and pipes the output through a pager for ease of navigation,
as the log often contains tens or even hundreds of thousands of lines.

```text
[output]
-- Logs begin at Sat 2024-10-26 07:06:58 UTC, end at Thu 2025-01-16 10:37:13 UTC. --
Oct 26 07:06:58 Ubuntu-20-04 sshd[3011984]: Disconnected from invalid user pzserver 175.107.32.186 port 52429 [preauth]
Oct 26 07:07:02 Ubuntu-20-04 sshd[3011990]: Invalid user qinyang from 196.189.87.177 port 3496
Oct 26 07:07:03 Ubuntu-20-04 sshd[3011990]: Received disconnect from 196.189.87.177 port 3496:11: Bye Bye [preauth]
Oct 26 07:07:03 Ubuntu-20-04 sshd[3011990]: Disconnected from invalid user qinyang 196.189.87.177 port 3496 [preauth]
. . .
```

You can show the most recent logs first by adding the `--reverse` flag:

```command
journalctl --reverse
```

The output begins with a header showing the time range of the displayed logs:

```text
[output]
-- Logs begin at Sat 2024-10-26 07:06:58 UTC, end at Thu 2025-01-16 10:37:13 UTC. --

```

Following the header are the individual log entries, now sorted from newest to
oldest because of the `--reverse` flag. Each entry follows this format:

```text
[output]
Oct 26 07:06:58 Ubuntu-20-04 sshd[3011984]: Disconnected from invalid user pzserver 175.107.32.186 port 52429 [preauth]
```

Each entry starts with a timestamp, the machine's hostname, the name of the
program that generated the entry, and its process ID. The log message itself
comes afterward. The format will be recognizable to anyone familiar with
[standard syslog logging](https://betterstack.com/community/guides/logging/how-to-view-and-configure-linux-logs-on-ubuntu-20-04/).

If you want to process the `journalctl` output using tools like `grep`, `awk`,
or `sed`, or redirect it to a file, you can use the `--no-pager` option to
disable paging:

```command
journalctl --no-pager
```
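For example, here's a sketch of pulling the program name and PID out of the default short format with `awk`. A captured sample line stands in for live `journalctl --no-pager` output:

```command
# Split the "program[pid]:" token (field 5) of the short format.
sample='Oct 26 07:06:58 Ubuntu-20-04 sshd[3011984]: Invalid user qinyang from 196.189.87.177 port 3496'
echo "$sample" | awk '{ split($5, a, "[][]"); print "program=" a[1], "pid=" a[2] }'
# prints: program=sshd pid=3011984
```

In practice, you'd feed the same `awk` program from `journalctl --no-pager` instead of a sample variable.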

Since the output can be extensive, it's often necessary to limit the data to
make it more manageable. You can achieve this by using various filtering options
provided by `journalctl`, such as:

- **By time range**: Use `--since` and `--until` to display logs within a
  specific timeframe:

  ```command
  journalctl --since "2025-01-01 00:00:00" --until "2025-01-15 23:59:59"
  ```

- **By service**: You can focus on logs from a specific service with the
  `--unit` flag:

  ```command
  journalctl --unit servicename.service
  ```

- **By severity level**: To view logs of a certain severity or higher:

  ```command
  journalctl --priority=warning
  ```

These options allow you to efficiently narrow down the log data to suit your
specific needs. Other relevant options to explore include:

- `--no-hostname`: Suppresses the hostname in log entries.
- `--no-full`: Truncates long log fields in the output instead of displaying
  them in full.
- `-a/--all`: Displays all fields, even those that are normally suppressed or
  truncated.
- `--truncate-newline`: Truncates each message at the first newline character in
  the `MESSAGE` field.
- `-q/--quiet`: Suppresses the header and metadata output of the `journalctl`
  command to leave only the raw log content.

In the following sections, we'll delve into more advanced filtering and output
customization techniques to help you efficiently find the information you need.

## Modifying the Journal output format

When working with logs generated by `journalctl`, it's often beneficial to
customize the output format to fit various needs.

A simple modification to the output is configuring timestamps to be displayed in
UTC instead of the system time:

```command
journalctl --utc
```

The `-o/--output` option allows you to print the journal output in a variety of
formats. For example, you can print the entries in a [structured log
format](https://betterstack.com/community/guides/logging/structured-logging/) such as JSON with:

```command
journalctl --output json
```

```json
[output]
{"SYSLOG_IDENTIFIER":"supergfxd","_TRANSPORT":"stdout","__MONOTONIC_TIMESTAMP":"303405189223","_RUNTIME_SCOPE":"system","__SEQNUM_ID":"699003bb8d9b4e16a49ee0d845f5be64","__SEQNUM":"144711354","_COMM":"supergfxd","MESSAGE":"WARN: get_runtime_status: Could not find dGPU","__CURSOR":"s=699003bb8d9b4e16a49ee0d845f5be64;i=8a01eba;b=40d8cb53df934b6b8205666796a69234;m=46a45bc867;t=62bd0da3dfd84;x=cdfa3e7b1b67aafd","_CAP_EFFECTIVE":"1ffffffffff","_GID":"0","_HOSTNAME":"falcon","_STREAM_ID":"15e02852f29f485b9e2866a7a27c3b4d","PRIORITY":"6","_SYSTEMD_SLICE":"system.slice","_EXE":"/usr/bin/supergfxd","__REALTIME_TIMESTAMP":"1737025874951556","_UID":"0","SYSLOG_FACILITY":"3","_SYSTEMD_CGROUP":"/system.slice/supergfxd.service","_SYSTEMD_INVOCATION_ID":"1e2f890b696c4cedab849d5ccc3afefc","_SELINUX_CONTEXT":"system_u:system_r:unconfined_t:s0","_SYSTEMD_UNIT":"supergfxd.service","_CMDLINE":"/usr/bin/supergfxd","_PID":"1982","_BOOT_ID":"40d8cb53df934b6b8205666796a69234","_MACHINE_ID":"fd08879b531543db8847a3f7cea42cac"}
. . .
```

As you can see, this output is far more detailed than the default, with a wealth
of information in an easily parsable format.
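As a quick sketch of how scriptable this format is, here's one way to pull the `MESSAGE` field out of a captured line using `sed`. A dedicated JSON parser such as `jq` is the better tool for real pipelines, since this pattern breaks on escaped quotes:

```command
# Extract the MESSAGE field from one line of `journalctl --output json`.
# A captured sample stands in for live output here.
line='{"PRIORITY":"6","MESSAGE":"WARN: get_runtime_status: Could not find dGPU","_PID":"1982"}'
printf '%s\n' "$line" | sed -n 's/.*"MESSAGE":"\([^"]*\)".*/\1/p'
# prints: WARN: get_runtime_status: Could not find dGPU
```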

For improved readability, you can use the `json-pretty` format:

```command
journalctl --output json-pretty
```

This will present the JSON output in a more human-friendly format, although it
will take up more space on your screen.

```json
[output]
{
	"__MONOTONIC_TIMESTAMP" : "303528319506",
	"_PID" : "1982",
	"_STREAM_ID" : "15e02852f29f485b9e2866a7a27c3b4d",
	"__CURSOR" : "s=699003bb8d9b4e16a49ee0d845f5be64;i=8a01f57;b=40d8cb53df934b6b8205666796a69234;m=46abb29a12;t=62bd0e194cf30;x=cdfa3e7b1b67aafd",
	"_GID" : "0",
	"_BOOT_ID" : "40d8cb53df934b6b8205666796a69234",
	"_TRANSPORT" : "stdout",
	"_MACHINE_ID" : "fd08879b531543db8847a3f7cea42cac",
	"__SEQNUM" : "144711511",
	"_CAP_EFFECTIVE" : "1ffffffffff",
	"SYSLOG_IDENTIFIER" : "supergfxd",
	"_SELINUX_CONTEXT" : "system_u:system_r:unconfined_t:s0",
	"_SYSTEMD_SLICE" : "system.slice",
	"__REALTIME_TIMESTAMP" : "1737025998081840",
	"_COMM" : "supergfxd",
	"_EXE" : "/usr/bin/supergfxd",
	"_SYSTEMD_UNIT" : "supergfxd.service",
	"SYSLOG_FACILITY" : "3",
	"_UID" : "0",
	"PRIORITY" : "6",
	"_SYSTEMD_CGROUP" : "/system.slice/supergfxd.service",
	"MESSAGE" : "WARN: get_runtime_status: Could not find dGPU",
	"_SYSTEMD_INVOCATION_ID" : "1e2f890b696c4cedab849d5ccc3afefc",
	"_CMDLINE" : "/usr/bin/supergfxd",
	"_RUNTIME_SCOPE" : "system",
	"_HOSTNAME" : "falcon",
	"__SEQNUM_ID" : "699003bb8d9b4e16a49ee0d845f5be64"
}
```

You'll notice that this JSON format contains many fields that were not present
in the default output. You'll spot three different kinds of fields in each
entry:

1. **Fields prefixed with `__`**: These fields are **journal-specific metadata**
   that are generated and managed internally by `systemd-journald`. They are not
   directly related to the log message content but provide additional context
   for managing and querying logs.

2. **Fields prefixed with `_`**: These fields describe **system or
   process-related metadata** that `systemd-journald` collects from the
   environment when the log entry is created. They are tied to the source of the
   log.

3. **Fields without a prefix**: These fields represent **log content or
   attributes** explicitly set by the logging application or system. They
   usually contain the actual information being logged or its classification
   (e.g., priority, message content, syslog facility).
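Because the prefix convention is mechanical, you can classify any field name with a simple pattern match, as in this sketch:

```command
# Classify a journal field name by its prefix, per the rules above.
field_kind() {
  case $1 in
    __*) echo "journal-internal metadata" ;;
    _*)  echo "trusted source metadata" ;;
    *)   echo "application-supplied field" ;;
  esac
}

field_kind __CURSOR   # prints: journal-internal metadata
field_kind _PID       # prints: trusted source metadata
field_kind MESSAGE    # prints: application-supplied field
```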

To see all the available fields that are present in the systemd journal, use the
`--fields` flag:

```command
journalctl --fields
```

```text
[output]
_SOURCE_MONOTONIC_TIMESTAMP
SSSD_PRG_NAME
UNIT_RESULT
PROBLEM_DIR
_COMM
_GID
CODE_LINE
SYSLOG_RAW
SYSLOG_TIMESTAMP
_CAP_EFFECTIVE
ACTION
USER_ID
. . .
```

You can find out more about these fields by reading the `systemd` manual:

```command
man systemd.journal-fields
```

If you'd like to display only specific fields, you can use the
`--output-fields` option with formats that show all fields, such as `json`,
`json-pretty`, `verbose`, and `export`.

Here's the syntax:

```command
journalctl --output=json --output-fields=<field1>,<field2>,<field3>
```

For example, if you're only interested in the log message and its priority
level, you can use:

```command
journalctl --output json-pretty --output-fields=MESSAGE,PRIORITY
```

```json
[output]
{
	"PRIORITY" : "6",
	"__CURSOR" : "s=699003bb8d9b4e16a49ee0d845f5be64;i=8a02766;b=40d8cb53df934b6b8205666796a69234;m=470ddacd3d;t=62bd143bd025a;x=cdfa3e7b1b67aafd",
	"__REALTIME_TIMESTAMP" : "1737027644883546",
	"__MONOTONIC_TIMESTAMP" : "305175121213",
	"__SEQNUM" : "144713574",
	"__SEQNUM_ID" : "699003bb8d9b4e16a49ee0d845f5be64",
	"_BOOT_ID" : "40d8cb53df934b6b8205666796a69234",
	"MESSAGE" : "WARN: get_runtime_status: Could not find dGPU"
}
```

Depending on the `--output` format, you'll notice that certain fields are
included regardless of the `--output-fields` option. These are fields typically
required to identify or understand the entry's source or scope.

Here are a few other formats that control the output produced by `journalctl`.
You can examine the
[complete list here](https://www.freedesktop.org/software/systemd/man/journalctl.html#-o).

- `short`: This is the default output format.
- `cat`: Prints only the message field, with no timestamp or metadata.
- `json`: JSON-formatted output containing all available fields per entry.
- `json-pretty`: Prettified `json` output for better readability.
- `verbose`: Displays the entire log entry with all available fields per entry.

Now that you're familiar with how to customize the presentation of `journalctl`
output, let's explore techniques to refine your log searches and zero in on the
specific information you need.

## Filtering logs by boot session

When working with systems that undergo frequent reboots, filtering logs based on
specific boot sessions can be helpful. `journalctl` provides options to isolate
logs generated during a particular boot, allowing you to focus your analysis on
a specific time window.

To display logs from the current boot session, use the `-b` flag:

```command
journalctl -b
```

This will show all log entries recorded since the system last started, including
low-level kernel messages related to the boot process.

To see a list of all recorded boot sessions, use the `--list-boots` option:

```command
journalctl --list-boots
```

```text
[output]
IDX BOOT ID                          FIRST ENTRY                 LAST ENTRY
 -2 dc722c908d5a43b4b83724ac87251295 Sat 2024-12-28 12:20:12 WAT Mon 2025-01-06 15:36:29 WAT
 -1 46b024e358c847e7a40bc936de0764ab Mon 2025-01-06 15:39:37 WAT Thu 2025-01-09 16:28:11 WAT
  0 40d8cb53df934b6b8205666796a69234 Thu 2025-01-09 16:30:38 WAT Fri 2025-01-17 05:13:03 WAT
```

This command outputs a table with information about each boot session,
including:

- **IDX**: A relative identifier for each boot session.
- **Boot ID**: A unique hexadecimal identifier for each boot.
- **Time range**: The start and end time of the boot session.
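Since the `--list-boots` table is plain columnar text, you can also scrape the boot ID for a given offset with `awk`. Here, a captured copy of the table above stands in for a live `journalctl --list-boots` call:

```command
# Grab the boot ID for a given IDX from --list-boots output.
boots=' -1 46b024e358c847e7a40bc936de0764ab Mon 2025-01-06 15:39:37 WAT Thu 2025-01-09 16:28:11 WAT
  0 40d8cb53df934b6b8205666796a69234 Thu 2025-01-09 16:30:38 WAT Fri 2025-01-17 05:13:03 WAT'
echo "$boots" | awk -v idx=-1 '$1 == idx { print $2 }'
# prints: 46b024e358c847e7a40bc936de0764ab
```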

You can use either the offset number or the boot ID to filter logs for a
specific boot session:

```command
journalctl -b 0   # Shows logs from the current boot session
```

```command
journalctl -b -1  # Shows logs from the previous boot session
```

```command
journalctl -b 0f419686d8744067acd4e7ab962a280b # Shows logs from the specified boot ID
```

## Filtering Journal logs by a time range

One of the most common ways to narrow down your log search is by filtering
entries based on their timestamps. `journalctl` offers a few options to specify
time ranges for your queries.

You can define a time window for your log search using the `--since` and
`--until` flags to specify the lower and upper bounds of the time range
respectively.

Both flags accept flexible
[timestamp formats](https://www.freedesktop.org/software/systemd/man/systemd.time.html),
including:

- Full timestamps: `YYYY-MM-DD HH:MM:SS` (e.g., `2021-11-23 23:02:15`)
- Dates only: `YYYY-MM-DD` (e.g., `2021-05-04`)
- Times only: `HH:MM` (e.g., `12:00`)
- Relative times: `5 hours ago`, `32 min ago`
- Keywords: `yesterday`, `today`, `now`

For instance, to view logs from today onward, use:

```command
journalctl --since 'today'
```

Note that regardless of the filtering period, the header still reports the full
time range of logs available in the journal, not the filtered range.

```text
[output]
-- Logs begin at Fri 2022-02-11 15:34:17 UTC, end at Wed 2022-02-16 21:33:52 UTC. --
Feb 16 00:00:00 ubuntu-2gb-nbg1-1 vector[74071]: {"appname":"ahmadajmi","facility":"local4","hostname":"we.com","message":"#hugops to everyone who has to deal with this","msgid":"ID844","procid":113,"severity":"alert","timestam>
Feb 16 00:00:01 ubuntu-2gb-nbg1-1 systemd[1]: Starting Rotate log files...
Feb 16 00:00:01 ubuntu-2gb-nbg1-1 systemd[1]: Starting Daily man-db regeneration...
Feb 16 00:00:01 ubuntu-2gb-nbg1-1 CRON[79633]: pam_unix(cron:session): session opened for user ayo by (uid=0)
Feb 16 00:00:01 ubuntu-2gb-nbg1-1 CRON[79641]: (ayo) CMD (/usr/sbin/logrotate /home/ayo/logrotate.conf --state /home/ayo/custom-state)
Feb 16 00:00:01 ubuntu-2gb-nbg1-1 CRON[79633]: pam_unix(cron:session): session closed for user ayo
Feb 16 00:00:01 ubuntu-2gb-nbg1-1 systemd[1]: logrotate.service: Succeeded.
Feb 16 00:00:01 ubuntu-2gb-nbg1-1 systemd[1]: Finished Rotate log files.
Feb 16 00:00:01 ubuntu-2gb-nbg1-1 vector[74071]: {"appname":"meln1ks","facility":"ntp","hostname":"make.net","message":"You're not gonna believe what just happened","msgid":"ID477","procid":6062,"severity":"notice","timestamp":>
Feb 16 00:00:01 ubuntu-2gb-nbg1-1 systemd[1]: man-db.service: Succeeded.
Feb 16 00:00:01 ubuntu-2gb-nbg1-1 systemd[1]: Finished Daily man-db regeneration.
Feb 16 00:00:02 ubuntu-2gb-nbg1-1 vector[74071]: {"appname":"shaneIxD","facility":"daemon","hostname":"names.com","message":"Take a breath, let it go, walk away","msgid":"ID74","procid":1031,"severity":"notice","timestamp":"202>
```

The output may show a lot of records, but you'll observe that they were all
recorded on the current day.

You can also filter logs between two specific points in time with:

```command
journalctl --since '2022-02-16 21:00:00' --until '2022-02-16 22:00:00'
```

```command
journalctl --since 12:00 --until '30 min ago'
```

While time-based filtering is useful, it still leaves you with logs from various
sources. You can combine time filtering with other filtering options to refine
your search further. For instance, you can focus on specific applications or
services, as we'll explore in the next section.

## Filtering Journal logs by Systemd service

When you're troubleshooting a specific application or service, it's necessary to
isolate its logs from the rest of the system. `journalctl` allows you to do this
by filtering entries based on the systemd unit they belong to.

For example, to view logs generated by a specific service, use the `-u/--unit`
flag followed by the service name:

```command
journalctl --unit docker.service
```

You'll see the log entries from the Docker service alone:

```text
[output]
Jan 06 15:40:19 falcon systemd[1]: Starting docker.service - Docker Application Container Engine...
Jan 06 15:40:20 falcon dockerd[3017]: time="2025-01-06T15:40:20.062820121+01:00" level=info msg="Starting up"
Jan 06 15:40:20 falcon dockerd[3017]: time="2025-01-06T15:40:20.063738107+01:00" level=info msg="OTEL tracing is not configured, using no-op tracer provider"
Jan 06 15:40:20 falcon dockerd[3017]: time="2025-01-06T15:40:20.063900900+01:00" level=info msg="detected 127.0.0.53 nameserver, assuming systemd-resolved, so using resolv.conf: /run/system>
Jan 06 15:40:20 falcon dockerd[3017]: time="2025-01-06T15:40:20.244641929+01:00" level=info msg="[graphdriver] using prior storage driver: overlay2"
Jan 06 15:40:20 falcon dockerd[3017]: time="2025-01-06T15:40:20.658573403+01:00" level=info msg="Loading containers: start."
Jan 06 15:40:20 falcon dockerd[3017]: time="2025-01-06T15:40:20.710466438+01:00" level=info msg="Firewalld: docker zone already exists, returning"
Jan 06 15:40:20 falcon dockerd[3017]: time="2025-01-06T15:40:20.964546328+01:00" level=info msg="Firewalld: interface br-b2325828137f already part of docker zone, returning"
Jan 06 15:40:21 falcon dockerd[3017]: time="2025-01-06T15:40:21.047457060+01:00" level=info msg="Firewalld: interface br-a4f2fa388fc2 already part of docker zone, returning"
Jan 06 15:40:21 falcon dockerd[3017]: time="2025-01-06T15:40:21.142260916+01:00" level=info msg="Firewalld: interface docker0 already part of docker zone, returning"
. . .
```

If there are no entries for the specified service, you'll see the following
instead:

```text
[output]
-- No entries --
```

You can also filter for multiple services simultaneously by repeating the
`--unit` flag:

```command
journalctl --unit rsyslog.service --unit nginx.service --since '1 hour ago'
```

This shows logs from both `rsyslog.service` and `nginx.service` recorded within
the last hour.

The entries will be merged and displayed in chronological order, making it
significantly easier to understand the sequence of events within your system.

## Filtering Journal entries by metadata

Beyond filtering by time or service, `journalctl` allows log entries to be
filtered based on their associated metadata. This allows for precise queries
that target logs with particular characteristics.

You've already seen the available metadata fields, which can be retrieved by
running:

```command
journalctl --fields | less
```

```text
[output]
AUDIT_FIELD_ROOT_DIR
CODE_FILE
MEMORY_SWAP_PEAK
_UID
_UDEV_SYSNAME
AUDIT_FIELD_HOSTNAME
JOB_RESULT
AUDIT_FIELD_CWD
_AUDIT_FIELD_FAMILY
SLEEP
. . .
```

There are many fields, but you can see which ones are actually present on the
entries you're interested in by using the `json` or `verbose` output format:

```command
journalctl --output verbose
```

```text
[output]
Thu 2025-01-16 15:55:48.909780 WAT [s=699003bb8d9b4e16a49ee0d845f5be64;i=8a0606e;b=40d8cb53df934b6b8205666796a69234;m=49c77811b6;t=62bd3fd5a46d4;x=cdfa3e7b1b67aafd]
    _TRANSPORT=stdout
    _STREAM_ID=15e02852f29f485b9e2866a7a27c3b4d
    PRIORITY=6
    SYSLOG_FACILITY=3
    SYSLOG_IDENTIFIER=supergfxd
    MESSAGE=WARN: get_runtime_status: Could not find dGPU
    _PID=1982
    _UID=0
    _GID=0
    _COMM=supergfxd
    _EXE=/usr/bin/supergfxd
    _CMDLINE=/usr/bin/supergfxd
    _CAP_EFFECTIVE=1ffffffffff
    _SELINUX_CONTEXT=system_u:system_r:unconfined_t:s0
    _SYSTEMD_CGROUP=/system.slice/supergfxd.service
    _SYSTEMD_UNIT=supergfxd.service
    _SYSTEMD_SLICE=system.slice
    _SYSTEMD_INVOCATION_ID=1e2f890b696c4cedab849d5ccc3afefc
    _BOOT_ID=40d8cb53df934b6b8205666796a69234
    _MACHINE_ID=fd08879b531543db8847a3f7cea42cac
    _HOSTNAME=falcon
    _RUNTIME_SCOPE=system
```

Once you've figured out what fields you're interested in, you can display all
possible values for that field with the `-F/--field` flag. For example, to see
all possible priority levels, use:

```command
journalctl -F PRIORITY
```

```text
[output]
2
7
3
4
5
6
```

The numbers can be mapped to the standard `syslog` priority levels:

```javascript
{
  emerg: 0,
  alert: 1,
  crit: 2,
  err: 3,
  warning: 4,
  notice: 5,
  info: 6,
  debug: 7
}
```
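If you find yourself translating between the two forms often, a small shell function mirroring this mapping can help:

```command
# Map a syslog priority number (as printed by `journalctl -F PRIORITY`)
# to its name, per the table above.
priority_name() {
  n=$1
  set -- emerg alert crit err warning notice info debug
  shift "$n"
  printf '%s\n' "$1"
}

priority_name 4   # prints: warning
priority_name 6   # prints: info
```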

To filter for entries with a specific metadata value, use the field name
followed by an equals sign (=) and the desired value.

For example, to show only logs with priority level "3", use:

```command
journalctl PRIORITY=3
```

![Filtering journalctl by priority](https://imagedelivery.net/xZXo0QFi-1_4Zimer-T0XQ/dabb30b9-abcb-47e5-467f-ae889e0b9e00/lg1x =2960x1734)

You can also combine multiple metadata filters to refine your search further.
For instance, to see error logs from the kernel, run:

```command
journalctl PRIORITY=3 SYSLOG_IDENTIFIER=kernel
```

```text
[output]
Jan 02 05:12:20 falcon kernel: ucsi_acpi USBC000:00: unknown error 0
Jan 02 05:12:20 falcon kernel: ucsi_acpi USBC000:00: UCSI_GET_PDOS failed (-5)
Jan 02 05:12:21 falcon kernel: ACPI Error: Thread 1156460608 cannot release Mutex [ECMX] acquired by thread 2163781696 (20240827/exmutex-378)
Jan 02 05:12:21 falcon kernel: ACPI Error: Aborting method \_SB.PC00.LPCB.ECDV._Q66 due to previous error (AE_AML_NOT_OWNER) (20240827/psparse-529)
. . .
```

For commonly used fields, you can use dedicated flags to reduce verbosity and
make your `journalctl` commands more concise. This includes:

- `-p/--priority`: `PRIORITY`
- `--facility`: `SYSLOG_FACILITY` (no short flag, since `-f` means `--follow`)
- `-t/--identifier`: `SYSLOG_IDENTIFIER`
- `-u/--unit`: `_SYSTEMD_UNIT`

For example, instead of:

```command
journalctl SYSLOG_IDENTIFIER=sshd PRIORITY=3
```

You can simply write:

```command
journalctl -t sshd -p 3
```

## Tailing and following Journal entries

Similar to using `tail -f` to monitor a file for new content, `journalctl`
provides a way to "tail" or follow journal entries in real time.

This is incredibly useful for diagnosing issues in real time, especially when
troubleshooting intermittent problems or observing the effects of configuration
changes.

To initiate real time log following, use the `-f/--follow` flag with
`journalctl`:

```command
journalctl --follow
```

This command will print the 10 most recent log entries and then continue
streaming new entries as they are written to the journal. You can configure how
many lines are initially displayed with the `-n/--lines` option:

```command
journalctl --lines 20 --follow # show 20 initial lines instead
```

You can also use the `--no-tail` option to show all lines even when in follow
mode:

```command
journalctl --no-tail --follow
```

The `--follow` flag can be combined with other `journalctl` filters to focus on
specific events. For example, to tail messages with error severity or higher,
use:

```command
journalctl --follow --priority err
```

Or to tail logs from a specific service:

```command
journalctl --follow --unit docker.service
```

You can stop following the logs and return to the command prompt any time by
pressing `Ctrl+C`.

[summary]

## Side note: Debug in real time with Live Tail

`journalctl --follow` is great on a single box. Live Tail gives you the same feel across all your servers, with instant search and one-click context on each event.

<iframe width="100%" height="315" src="https://www.youtube.com/embed/XJv7ON314k4" title="Better Stack Live Tail walkthrough" frameborder="0" allow="accelerometer; autoplay; clipboard-write; encrypted-media; gyroscope; picture-in-picture; web-share" allowfullscreen></iframe>

[/summary]

## Searching for Journal entries

While filtering allows you to narrow down log entries based on specific
criteria, `journalctl` also offers the `-g/--grep` flag to find entries
containing particular keywords or patterns.

You can combine it with any of the filtering options discussed above for more
precise results. For instance, the command below will display all journal
entries from the `ssh` service containing the phrase "Invalid user" within the
last one hour:

```command
journalctl --unit ssh.service --grep 'Invalid user' --since '1 hour ago'
```

```text
[output]
Jan 17 03:47:57 Ubuntu-20-04 sshd[2133328]: Invalid user hadoop from 193.32.162.79 port 45694
Jan 17 03:46:49 Ubuntu-20-04 sshd[2133324]: Invalid user admin from 92.255.85.189 port 42582
Jan 17 03:44:30 Ubuntu-20-04 sshd[2133319]: Invalid user sysadmin from 2.57.122.194 port 44086
Jan 17 03:40:39 Ubuntu-20-04 sshd[2133309]: Invalid user ansible from 193.32.162.79 port 45834
Jan 17 03:36:37 Ubuntu-20-04 sshd[2133297]: Invalid user sysadmin from 2.57.122.194 port 40998
Jan 17 03:34:38 Ubuntu-20-04 sshd[2133292]: Invalid user admin from 92.255.85.188 port 20794
. . .
```
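Because the filtered output is plain text, ordinary tools can summarize it further. This sketch counts the "Invalid user" attempts per source IP, with a captured sample standing in for the live pipeline:

```command
# Count "Invalid user" attempts per source IP in sshd log lines.
log='Jan 17 03:47:57 Ubuntu-20-04 sshd[2133328]: Invalid user hadoop from 193.32.162.79 port 45694
Jan 17 03:46:49 Ubuntu-20-04 sshd[2133324]: Invalid user admin from 92.255.85.189 port 42582
Jan 17 03:40:39 Ubuntu-20-04 sshd[2133309]: Invalid user ansible from 193.32.162.79 port 45834'
echo "$log" | awk '/Invalid user/ { count[$(NF-2)]++ } END { for (ip in count) print ip, count[ip] }'
```

In real use, you'd pipe `journalctl --unit ssh.service --grep 'Invalid user' --no-pager` into the same `awk` program.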

The `--grep` flag also supports Perl-compatible regular expressions for more
complex searches:

```command
journalctl --grep 'error|failed'
```

The search is case-insensitive by default as long as the pattern is entirely
lowercase; otherwise it becomes case-sensitive. You can force either behavior
with the `--case-sensitive` flag:

```command
journalctl --grep <pattern> --case-sensitive
```

## Maintaining the Systemd Journal

The `journalctl` utility also offers several options to manage the size and
content of the system journal to prevent excessive disk usage.

You can see how much space is currently occupied by the journal through the
`--disk-usage` flag:

```command
journalctl --disk-usage
```

This will display the total size of the active and archived journals:

```text
[output]
Archived and active journals take up 1.8G in the file system.
```

If the journal is taking up too much space, you can choose from the following
vacuuming options to manually shrink it (note that vacuuming only removes
archived journal files, not active ones):

- `--vacuum-size=<size>`: Shrink the journal below the specified size.
- `--vacuum-files=<int>`: Reduce the number of archived journal files to `<int>`.
- `--vacuum-time=<time>`: Remove journal entries older than the specified timespan.

For example, you can reduce the journal size to 500 MB with:

```command
sudo journalctl --vacuum-size=500M # shrink journal to 500 MB.
```

You'll see the program's output appear on the screen:

```text
[output]
Deleted archived journal /var/log/journal/cee31bed2e414d19ab394c074b55b354/system@d96f9da5333a4e1d8394272215ea1917-00000000001705f4-0005d7c8815c6001.journal (128.0M).
Deleted archived journal /var/log/journal/cee31bed2e414d19ab394c074b55b354/system@d96f9da5333a4e1d8394272215ea1917-000000000018a118-0005d7ccffbf428b.journal (128.0M).
Deleted archived journal /var/log/journal/cee31bed2e414d19ab394c074b55b354/system@d96f9da5333a4e1d8394272215ea1917-00000000001a39bc-0005d7d1621e9d0e.journal (128.0M).
Deleted archived journal /var/log/journal/cee31bed2e414d19ab394c074b55b354/system@d96f9da5333a4e1d8394272215ea1917-00000000001bd232-0005d7d5977c6dd9.journal (128.0M).
Deleted archived journal /var/log/journal/cee31bed2e414d19ab394c074b55b354/system@d96f9da5333a4e1d8394272215ea1917-00000000001d6237-0005d7d9614516c8.journal (128.0M).
Deleted archived journal /var/log/journal/cee31bed2e414d19ab394c074b55b354/system@d96f9da5333a4e1d8394272215ea1917-00000000001ef1f4-0005d7dd2e4431c9.journal (128.0M).
Deleted archived journal /var/log/journal/cee31bed2e414d19ab394c074b55b354/system@d96f9da5333a4e1d8394272215ea1917-00000000002081d7-0005d7e0f8261d1a.journal (128.0M).
Deleted archived journal /var/log/journal/cee31bed2e414d19ab394c074b55b354/system@d96f9da5333a4e1d8394272215ea1917-000000000022130a-0005d7e4c5032aa2.journal (128.0M).
Deleted archived journal /var/log/journal/cee31bed2e414d19ab394c074b55b354/system@d96f9da5333a4e1d8394272215ea1917-000000000023ac43-0005d7e9129c10c1.journal (128.0M).
Deleted archived journal /var/log/journal/cee31bed2e414d19ab394c074b55b354/system@d96f9da5333a4e1d8394272215ea1917-000000000025457d-0005d7ecc6945d7e.journal (128.0M).
Deleted archived journal /var/log/journal/cee31bed2e414d19ab394c074b55b354/system@d96f9da5333a4e1d8394272215ea1917-000000000026d5ce-0005d7f0c66f8cfe.journal (128.0M).
Vacuuming done, freed 1.3G of archived journals from /var/log/journal/cee31bed2e414d19ab394c074b55b354.
Vacuuming done, freed 0B of archived journals from /run/log/journal.
Vacuuming done, freed 0B of archived journals from /var/log/journal.
```

As you can see, the journal was shrunk to 500 MB after log entries totalling 1.3
GB in size were deleted from the archive.

Instead of specifying a size, you can also delete logs based on their age using
the `--vacuum-time` option. The command below will delete any entries that were
recorded more than one month ago:

```command
sudo journalctl --vacuum-time=1month
```

### Configuring Journal storage

To automatically manage journal size, you can modify the
[following options](https://www.freedesktop.org/software/systemd/man/journald.conf.html#SystemMaxUse=)
in the `/etc/systemd/journald.conf` file:

- `SystemMaxUse` and `RuntimeMaxUse`: Set the maximum amount of space that the
  journal may take up in persistent storage (`/var/log/journal`) and volatile
  storage (`/run/log/journal`) respectively.
- `SystemKeepFree` and `RuntimeKeepFree`: Set how much disk space (as an
  absolute size or a percentage) should always be kept free for other uses.
- `SystemMaxFileSize` and `RuntimeMaxFileSize`: Control how large individual
  journal files may grow before being rotated.
- `SystemMaxFiles` and `RuntimeMaxFiles`: Specify the maximum number of
  journal files to keep.

To further reduce disk space usage, you can enable compression for the journal
through the `Compress` option. However, note that compression can slightly
impact performance when retrieving log data.

```text
[label /etc/systemd/journald.conf]
[Journal]
Compress=yes
SystemMaxUse=5G
RuntimeMaxUse=1G
SystemKeepFree=1G
RuntimeKeepFree=512M
SystemMaxFileSize=100M
RuntimeMaxFileSize=50M
SystemMaxFiles=100
RuntimeMaxFiles=50
```
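Changes to `journald.conf` do not take effect until the daemon re-reads its
configuration. A typical sequence (assuming a systemd host and root access)
looks like this:

```command
# Apply the new settings by restarting the journal daemon
sudo systemctl restart systemd-journald

# Optionally force an immediate rotation so the new size limits apply right away
sudo journalctl --rotate
```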

## Centralizing Journald logs with Better Stack

While `journalctl` is an excellent tool for accessing and analyzing system logs locally, it starts to fall short as soon as you manage more than one server. At that point, your logs are scattered across machines, and you end up repeating the same queries host by host.

A centralized log management platform like [Better Stack](https://betterstack.com/telemetry) brings those logs into one place, so you can search and filter across all servers at once, correlate related events, and move much faster when diagnosing incidents.

If you want the quickest path, this walkthrough shows how to set up the Better Stack collector and start shipping logs with sensible defaults:

<iframe width="100%" height="315" src="https://www.youtube.com/embed/_pv2tKoBnGo" title="Ship logs to Better Stack with the collector" frameborder="0" allow="accelerometer; autoplay; clipboard-write; encrypted-media; gyroscope; picture-in-picture; web-share" allowfullscreen></iframe>

With logs centralized, you can go beyond `journalctl --follow`. Better Stack Live Tail lets you watch events in real time across all machines, drill into context with a click, and search instantly while new logs are still coming in:

<iframe width="100%" height="315" src="https://www.youtube.com/embed/XJv7ON314k4" title="Better Stack Live Tail walkthrough" frameborder="0" allow="accelerometer; autoplay; clipboard-write; encrypted-media; gyroscope; picture-in-picture; web-share" allowfullscreen></iframe>

Once your journald logs are flowing, you can also move past “find the one line” troubleshooting and start spotting patterns over time. Better Stack lets you build dashboards from your log data so you can visualize trends, correlate spikes with deploys, and catch anomalies early.

<iframe width="100%" height="315" src="https://www.youtube.com/embed/xmqvQqPkH24" title="Visualize and explore your logs in Better Stack" frameborder="0" allow="accelerometer; autoplay; clipboard-write; encrypted-media; gyroscope; picture-in-picture; web-share" allowfullscreen></iframe>


Centralization also makes alerting practical. Instead of manually checking logs or keeping a terminal open, you can trigger notifications when important patterns appear, and route incidents to the right people right away.

![email-alert.png](https://imagedelivery.net/xZXo0QFi-1_4Zimer-T0XQ/7d5b4f3a-ff23-42f5-ea2e-c51690df8a00/md1x =1780x1414)

Finally, retention becomes simpler. Journald logs live on each server and are constrained by local disk limits and rotation policies. In contrast, Better Stack provides scalable storage so you can retain logs for longer, search historical incidents, and keep data around for compliance when needed. Dashboards and visualizations help you spot trends you would otherwise miss in raw output, like rising SSH failures, recurring service restarts, or bursts of kernel errors.

Start taking control of your logs with Better Stack by [creating a free account here](https://betterstack.com/users/sign-up).

## Final thoughts

In this article, we've thoroughly explored the `systemd` journal and uncovered
the powerful capabilities of `journalctl`.

From understanding the journal's purpose and structure to learning how to
filter, search, and customize log output, you've gained essential skills for
navigating and analyzing your system logs.

With the techniques learned in this guide, you can effectively troubleshoot
issues, debug complex problems, and gain valuable insights into your system's
behavior.

By making the most of `journalctl`, you'll be better equipped to maintain a
reliable and well-monitored system.

For more details, be sure to check out the official documentation or type
`man journalctl` in your terminal.

Thanks for reading, and happy logging!