
What's New in PostgreSQL 18

Stanley Ulili
Updated on October 2, 2025

PostgreSQL 18 has officially landed, and it's one of the most transformative releases in years. Released on September 25, 2025, this version introduces an asynchronous I/O subsystem, virtual generated columns, and temporal constraints, addressing pain points that developers have long wrestled with.

The PostgreSQL team has focused on performance optimization, query planning improvements, and modernizing core capabilities while maintaining the stability that production systems depend on. Unlike incremental updates, PostgreSQL 18 brings architectural changes that fundamentally improve how the database handles I/O operations and processes complex queries.

Before diving into upgrade considerations, let's explore the standout features in PostgreSQL 18 and how they can enhance your database operations.

Let's begin!

Asynchronous I/O transforms performance

PostgreSQL 18 introduces an asynchronous I/O subsystem that fundamentally changes how the database handles read operations. This advancement delivers significant performance improvements for sequential scans, bitmap heap scans, vacuums, and similar operations that previously suffered from synchronous I/O bottlenecks.

The traditional approach forced backends to wait for each I/O operation to complete before requesting the next one:

 
-- Before: Sequential reads blocked on each operation
SELECT * FROM large_table WHERE category = 'active';
-- Each page read waits for the previous to complete

With PostgreSQL 18, backends can queue multiple read requests simultaneously, allowing the operating system to optimize I/O scheduling:

 
-- After: Multiple reads queued efficiently
SELECT * FROM large_table WHERE category = 'active';
-- Multiple pages requested in parallel for faster throughput

The new io_method configuration variable controls this behavior, with io_combine_limit and io_max_combine_limit providing fine-tuned control over request batching. Systems without posix_fadvise() support now benefit from effective_io_concurrency and maintenance_io_concurrency settings greater than zero, bringing performance parity across platforms.

You can monitor asynchronous I/O activity through the new pg_aios system view, which lists the asynchronous I/O handles currently in use.
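
You can check which I/O mode a given build is running and watch asynchronous requests in flight; a quick sketch (the pg_aios view is new in this release, and the available modes depend on your platform):

 
-- 'worker' is the default io_method; 'io_uring' needs a Linux build with liburing,
-- and 'sync' restores the pre-18 behavior
SHOW io_method;

-- One row per asynchronous I/O handle currently in use
SELECT * FROM pg_aios;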

Virtual generated columns become the default

PostgreSQL 18 makes virtual generated columns the standard behavior, computing values during read operations rather than storing them physically. This change reduces storage overhead and eliminates the need to update computed values during writes, making them particularly valuable for expensive calculations that aren't accessed in every query.

In earlier versions, generated columns were always stored, consuming disk space and requiring updates on every row modification:

 
-- Before PostgreSQL 18: Stored generated columns
CREATE TABLE products (
    id SERIAL PRIMARY KEY,
    price NUMERIC(10,2),
    tax_rate NUMERIC(3,2),
    total_price NUMERIC(10,2) GENERATED ALWAYS AS (price * (1 + tax_rate)) STORED
);
-- Value computed and stored on every INSERT/UPDATE

PostgreSQL 18 computes these values on demand by default, saving storage and write overhead:

 
-- PostgreSQL 18: Virtual generated columns (default)
CREATE TABLE products (
    id SERIAL PRIMARY KEY,
    price NUMERIC(10,2),
    tax_rate NUMERIC(3,2),
    total_price NUMERIC(10,2) GENERATED ALWAYS AS (price * (1 + tax_rate))
);
-- Value computed only when selected
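
A quick check of the on-demand behavior, assuming the products table above:

 
INSERT INTO products (price, tax_rate) VALUES (100.00, 0.10);

-- total_price is computed here, at read time; nothing extra was written to disk
SELECT id, total_price FROM products;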

When you need the pre-computed behavior for performance-critical read paths, explicitly specify STORED:

 
CREATE TABLE products (
    id SERIAL PRIMARY KEY,
    price NUMERIC(10,2),
    tax_rate NUMERIC(3,2),
    total_price NUMERIC(10,2) GENERATED ALWAYS AS (price * (1 + tax_rate)) STORED
);

This shift aligns PostgreSQL with modern database design principles, giving you flexibility to choose between compute-on-read and store-on-write based on your specific access patterns.

Skip scans unlock multicolumn index potential

PostgreSQL 18 introduces skip scan functionality for B-tree indexes, allowing the query planner to use multicolumn indexes even when early columns lack equality restrictions. This eliminates the frustrating scenario where perfectly good indexes sat unused because queries didn't filter on the first indexed column.

Previously, a multicolumn index could only be used effectively if queries filtered on the leading columns:

 
-- Index that was underutilized before PostgreSQL 18
CREATE INDEX idx_orders ON orders (status, customer_id, order_date);

-- This query couldn't use the index efficiently
SELECT * FROM orders 
WHERE customer_id = 123 
  AND order_date > '2025-01-01';
-- Index typically not chosen because 'status' wasn't filtered

PostgreSQL 18's skip scan capability allows the planner to efficiently use this index by skipping through distinct values of the first column:

 
-- Same index, now usable in PostgreSQL 18
SELECT * FROM orders 
WHERE customer_id = 123 
  AND order_date > '2025-01-01';
-- Index now utilized via skip scan technique

This optimization works by internally scanning through the distinct values of unfiltered leading columns, then diving deep into the index for matches on later columns. The planner automatically decides when skip scans provide better performance than sequential scans, eliminating the need for duplicate indexes with different column orderings.
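
You can confirm the planner's choice on your own data; a minimal check, assuming the idx_orders index above:

 
EXPLAIN (ANALYZE, BUFFERS)
SELECT * FROM orders
WHERE customer_id = 123
  AND order_date > '2025-01-01';
-- Expect an index scan on idx_orders; PostgreSQL 18's EXPLAIN ANALYZE also reports
-- how many index searches the skip scan performed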

Temporal constraints for time-based data integrity

PostgreSQL 18 adds native support for temporal constraints, allowing you to define non-overlapping PRIMARY KEY, UNIQUE, and FOREIGN KEY constraints on range types. This feature addresses a common challenge in applications dealing with time-based data, such as reservation systems, employee assignments, or scheduling applications.

Before PostgreSQL 18, preventing overlapping time ranges required custom triggers or exclusion constraints with specialized GiST indexes:

 
-- Before: Complex exclusion constraint approach
CREATE TABLE room_bookings (
    room_id INTEGER,
    booking_period TSTZRANGE,
    -- requires the btree_gist extension for the integer = operator
    EXCLUDE USING gist (room_id WITH =, booking_period WITH &&)
);

PostgreSQL 18 provides cleaner syntax using the WITHOUT OVERLAPS clause:

 
-- PostgreSQL 18: Direct temporal constraint
CREATE TABLE room_bookings (
    room_id INTEGER,
    booking_period TSTZRANGE,
    PRIMARY KEY (room_id, booking_period WITHOUT OVERLAPS)
);
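
A quick check of what the constraint enforces, assuming the table above (if the scalar part of the key needs GiST support, installing the btree_gist extension may be required):

 
INSERT INTO room_bookings VALUES (1, '[2025-10-01 14:00+00,2025-10-01 16:00+00)');

-- Overlapping period for the same room is rejected by the primary key
INSERT INTO room_bookings VALUES (1, '[2025-10-01 15:00+00,2025-10-01 17:00+00)');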

This approach extends to foreign key relationships using the PERIOD keyword:

 
CREATE TABLE booking_payments (
    room_id INTEGER,
    payment_period TSTZRANGE,
    amount NUMERIC(10,2),
    FOREIGN KEY (room_id, PERIOD payment_period)
        REFERENCES room_bookings (room_id, PERIOD booking_period)
);

These constraints integrate seamlessly with the query planner and provide clearer error messages when violations occur, making temporal data modeling more intuitive and maintainable.

UUID version 7 for distributed systems

PostgreSQL 18 introduces the uuidv7() function, generating timestamp-ordered UUIDs that address a long-standing performance issue with traditional UUIDs. Unlike version 4 UUIDs that distribute randomly across index pages, version 7 UUIDs maintain temporal ordering, dramatically reducing index fragmentation and improving insertion performance.

Traditional UUID v4 generation creates random values that cause index bloat:

 
-- Before: UUID v4 with random distribution
CREATE TABLE events (
    id UUID PRIMARY KEY DEFAULT gen_random_uuid(),
    event_type VARCHAR(50),
    created_at TIMESTAMPTZ DEFAULT NOW()
);
-- Random UUIDs scatter across index pages

PostgreSQL 18's UUID v7 maintains chronological ordering while preserving global uniqueness:

 
-- PostgreSQL 18: UUID v7 with temporal ordering
CREATE TABLE events (
    id UUID PRIMARY KEY DEFAULT uuidv7(),
    event_type VARCHAR(50),
    created_at TIMESTAMPTZ DEFAULT NOW()
);
-- Sequential UUIDs reduce index fragmentation

The uuidv4() function has been added as an explicit alias for the existing gen_random_uuid() function, making version selection clearer in your schema definitions. For distributed systems requiring both uniqueness guarantees and efficient indexing, UUID v7 provides the best of both worlds.
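
A quick comparison of the two generators (the exact values differ on every call):

 
-- v7 values start with a timestamp, so later calls sort after earlier ones;
-- uuidv4() is the new spelling of gen_random_uuid()
SELECT uuidv7() AS ordered_id, uuidv4() AS random_id;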

OLD and NEW in RETURNING clauses

PostgreSQL 18 extends the RETURNING clause with explicit OLD and NEW aliases, allowing you to return both pre-modification and post-modification values from INSERT, UPDATE, DELETE, and MERGE statements. This enhancement simplifies audit logging, change tracking, and application logic that needs to compare before and after states.

Earlier versions returned only the final state, requiring separate queries to capture both values:

 
-- Before: Only new values available
UPDATE accounts 
SET balance = balance - 50 
WHERE account_id = 123
RETURNING balance;
-- Returns only the new balance

PostgreSQL 18 lets you explicitly reference both states:

 
-- PostgreSQL 18: Access to both old and new values
UPDATE accounts 
SET balance = balance - 50 
WHERE account_id = 123
RETURNING OLD.balance AS previous_balance, NEW.balance AS current_balance;
-- Returns: previous_balance: 200, current_balance: 150

This capability extends to all DML operations:

 
-- DELETE with old value capture
DELETE FROM user_sessions 
WHERE expires_at < NOW()
RETURNING OLD.user_id, OLD.expires_at;

-- MERGE with old and new values in RETURNING
MERGE INTO inventory AS target
USING shipments AS source
    ON target.product_id = source.product_id
WHEN MATCHED THEN
    UPDATE SET quantity = target.quantity + source.quantity
RETURNING OLD.quantity AS before, NEW.quantity AS after;

The aliases can be renamed to avoid conflicts with actual column names, providing flexibility for complex queries.
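
A sketch of the renaming form, following the WITH (...) syntax described in the PostgreSQL 18 documentation:

 
-- Rename the aliases when OLD/NEW would be ambiguous in the query
UPDATE accounts
SET balance = balance - 50
WHERE account_id = 123
RETURNING WITH (OLD AS prev, NEW AS curr)
    prev.balance AS previous_balance,
    curr.balance AS current_balance;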

Breaking changes requiring attention

PostgreSQL 18 introduces several breaking changes that affect existing applications. Addressing these proactively prevents issues during migration and ensures continued database compatibility.

Checksums enabled by default

The initdb utility now enables data checksums by default, providing automatic detection of storage corruption. While this improves data integrity, it requires matching settings during upgrades:

 
# Disable checksums when initializing (if needed for compatibility)
initdb --no-data-checksums /path/to/datadir

When using pg_upgrade, source and target clusters must have matching checksum settings. The new --no-data-checksums option helps when upgrading older clusters that didn't use checksums.
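
If the old cluster was initialized without checksums and you want to keep them enabled going forward, one option is to enable them in place before upgrading (the old cluster must be cleanly shut down first):

 
# Enable checksums offline on the stopped old cluster
pg_checksums --enable -D /old/datadir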

VACUUM inheritance behavior change

VACUUM and ANALYZE now process inheritance children by default, reversing the previous behavior:

 
-- Before: Only processed parent table
VACUUM users;

-- PostgreSQL 18: Processes parent and all child tables
VACUUM users;

-- To get old behavior, use ONLY
VACUUM ONLY users;

This change makes partition maintenance more intuitive but may impact existing maintenance scripts that assumed the old behavior.

Time zone abbreviation resolution order

PostgreSQL 18 prioritizes the session's time zone abbreviations over the server-wide timezone_abbreviations setting. Applications relying on specific abbreviation resolution may experience different interpretations:

 
-- Session abbreviations now checked first
SET SESSION timezone = 'America/New_York';
SELECT '2025-01-15 14:00 EST'::TIMESTAMPTZ;
-- Uses session's EST definition before checking timezone_abbreviations

Review applications using custom time zone abbreviations to ensure they behave as expected after the upgrade.

MD5 password deprecation

While MD5 authentication still functions, PostgreSQL 18 emits deprecation warnings for CREATE ROLE and ALTER ROLE commands setting MD5 passwords:

 
-- Generates deprecation warning
CREATE USER app_user WITH PASSWORD 'md5...';

Plan migration to SCRAM-SHA-256 authentication before future releases remove MD5 support entirely. Disable warnings temporarily with SET md5_password_warnings = off if needed during transition periods.
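
A minimal migration path for an existing role; the password is re-hashed using the current password_encryption setting:

 
-- Hash new passwords with SCRAM, then reset the role's password
SET password_encryption = 'scram-sha-256';
ALTER ROLE app_user WITH PASSWORD 'replace-with-a-strong-password';
-- Remember to switch any md5 entries in pg_hba.conf to scram-sha-256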

Statistics preservation in pg_upgrade

PostgreSQL 18's pg_upgrade utility now preserves optimizer statistics during major version upgrades, eliminating the performance degradation that plagued freshly upgraded databases. This advancement saves hours of analysis time and prevents query plan regressions immediately after upgrade.

Previously, upgrades forced the optimizer to use default statistics until manual analysis completed:

 
# Before: Required manual statistics gathering
pg_upgrade --old-datadir=/old --new-datadir=/new
# Then run ANALYZE on entire cluster
vacuumdb --all --analyze-only

PostgreSQL 18 automatically transfers statistics during the upgrade process:

 
# PostgreSQL 18: Statistics preserved automatically
pg_upgrade --old-datadir=/old --new-datadir=/new
# Database immediately has accurate statistics

Extended statistics aren't preserved and still require regeneration, but standard column and table statistics transfer seamlessly. Use --no-statistics to disable this behavior if you prefer starting fresh:

 
pg_upgrade --old-datadir=/old --new-datadir=/new --no-statistics

The --missing-stats-only option in vacuumdb complements this by analyzing only tables lacking statistics, perfect for handling extended statistics after upgrade.
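
A typical post-upgrade pass, assuming statistics were preserved as shown above:

 
# Rebuild only what pg_upgrade could not carry over (e.g. extended statistics)
vacuumdb --all --analyze-only --missing-stats-only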

OAuth authentication support

PostgreSQL 18 introduces native OAuth authentication, bringing modern identity management to database connections. This integration allows PostgreSQL to validate tokens issued by OAuth providers, eliminating the need for managing database-specific credentials in microservice architectures and cloud-native applications.

The new oauth authentication method in pg_hba.conf enables token validation:

 
# OAuth authentication configuration
host all all 0.0.0.0/0 oauth
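
In practice the entry usually also pins the token issuer and required scopes; a sketch using the issuer and scope options described in the PostgreSQL 18 pg_hba.conf documentation (the values here are placeholders for your provider):

 
# Placeholder issuer and scope; adjust for your identity provider
host all all 0.0.0.0/0 oauth issuer="https://auth.example.com" scope="openid"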

Configure token validation libraries via the oauth_validator_libraries server variable:

 
ALTER SYSTEM SET oauth_validator_libraries = 'myvalidator';
SELECT pg_reload_conf();

Client applications pass OAuth tokens through libpq connection parameters, which PostgreSQL validates against configured providers. This feature requires compiling PostgreSQL with --with-libcurl support and implementing or configuring appropriate validator libraries.

OAuth authentication streamlines multi-tenant architectures, integrates with existing identity providers like Okta or Auth0, and simplifies credential rotation in containerized environments.

Enhanced monitoring and observability

PostgreSQL 18 significantly expands monitoring capabilities, providing deeper visibility into database operations and performance characteristics. These additions help diagnose issues faster and understand system behavior more precisely.

The log_connections variable now accepts multiple values beyond simple on/off, allowing fine-grained connection logging:

 
-- Log specific connection stages (PostgreSQL 18 accepts a comma-separated list)
ALTER SYSTEM SET log_connections = 'receipt,authentication,authorization';
SELECT pg_reload_conf();

The new log_line_prefix escape sequence %L outputs the local server IP address the client connected to:

 
-- Include the local (server) address in log lines
ALTER SYSTEM SET log_line_prefix = '%t [%p] %L %u@%d ';
SELECT pg_reload_conf();

Query lock failures now get dedicated logging through log_lock_failures:

 
-- Log SELECT ... NOWAIT failures
SET log_lock_failures = on;

The pg_stat_all_tables views gained columns tracking time spent in maintenance operations:

 
SELECT schemaname, relname,
       total_vacuum_time, total_autovacuum_time,
       total_analyze_time, total_autoanalyze_time
FROM pg_stat_all_tables
WHERE schemaname = 'public';

Per-backend I/O statistics became available through pg_stat_get_backend_io() and pg_stat_reset_backend_stats(), enabling targeted performance analysis of specific connections. The pg_stat_io view now reports activity in bytes rather than blocks, providing more intuitive metrics:

 
SELECT backend_type, context,
       read_bytes, write_bytes, extend_bytes
FROM pg_stat_io
WHERE backend_type = 'client backend';
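
For a single connection, the new per-backend function returns the same kind of data; a sketch, assuming its argument is a backend PID and its output mirrors the pg_stat_io layout:

 
-- I/O counters for the current backend only
SELECT * FROM pg_stat_get_backend_io(pg_backend_pid());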

WAL activity moved from pg_stat_wal to pg_stat_io, centralizing I/O metrics and adding receiver activity tracking. These enhancements transform PostgreSQL's observability from basic counters to comprehensive operational intelligence.

Optimizer and query planning improvements

PostgreSQL 18 delivers substantial query optimizer enhancements that improve performance across diverse workloads. These changes happen automatically without query modifications, making applications faster without code changes.

The optimizer now eliminates unnecessary self-joins automatically:

 
-- Optimizer recognizes redundant join
SELECT u1.name, u2.email 
FROM users u1 
JOIN users u2 ON u1.id = u2.id;
-- Simplified internally to single table scan

IN (VALUES ...) clauses convert to ANY arrays for better statistics utilization:

 
-- Automatically optimized
SELECT * FROM orders 
WHERE status IN (VALUES ('pending'), ('processing'), ('shipped'));
-- Internally transformed for better cardinality estimation

OR-clauses with index-compatible conditions transform into array lookups:

 
-- Leverages index efficiently
SELECT * FROM products 
WHERE category = 'electronics' OR category = 'computers' OR category = 'phones';
-- Converted to indexed array scan

SELECT DISTINCT operations internally reorder keys to avoid sorting when possible, and GROUP BY clauses drop functionally dependent columns automatically:

 
-- Simplified grouping
SELECT user_id, COUNT(*) 
FROM orders 
GROUP BY user_id, user_email;
-- user_email dropped when user_id alone is unique and NOT NULL (functional dependence)

The planner now uses Right Semi Join strategies, enables incremental sorts for merge joins, and improves partition-wise join support, all contributing to faster execution across complex analytical queries.
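
These rewrites are visible in EXPLAIN output, so you can confirm them on your own schema. For example, assuming users.id is the primary key, the redundant self-join above should plan as a single scan:

 
EXPLAIN
SELECT u1.name, u2.email
FROM users u1
JOIN users u2 ON u1.id = u2.id;
-- Expect one scan of users rather than a join node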

Should you upgrade to PostgreSQL 18?

If you're running PostgreSQL 17 or earlier on a supported platform, upgrading to PostgreSQL 18 delivers meaningful performance improvements and valuable new features. The asynchronous I/O subsystem alone justifies the upgrade for I/O-intensive workloads, while query planning enhancements benefit virtually every application.

Before upgrading production systems, review the migration guide thoroughly and test against a staging environment. Pay particular attention to:

  • Checksum settings for pg_upgrade compatibility
  • VACUUM/ANALYZE scripts that may need ONLY clauses
  • MD5 authentication migration planning
  • Time zone abbreviation handling changes

Verify your PostgreSQL version and plan your upgrade path:

 
psql -c "SELECT version();"

Use pg_upgrade for major version transitions, ensuring matching checksum settings between clusters:

 
# Check old cluster checksum setting
pg_controldata /old/datadir | grep "Data page checksum"

# Initialize new cluster with matching setting
initdb --data-checksums /new/datadir  # or --no-data-checksums

# Perform upgrade with statistics preservation
pg_upgrade \
  --old-datadir=/old/datadir \
  --new-datadir=/new/datadir \
  --old-bindir=/old/bin \
  --new-bindir=/new/bin

Run comprehensive test suites against the new version, focusing on areas affected by breaking changes. Monitor query performance after upgrade to verify that statistics transferred correctly and new optimizer strategies benefit your workload.

Final thoughts

PostgreSQL 18 represents a significant advancement for the world's most advanced open-source relational database. The asynchronous I/O subsystem addresses long-standing performance bottlenecks, while features like temporal constraints and UUID v7 support solve real-world problems that previously required complex workarounds.

The PostgreSQL community continues demonstrating its commitment to both innovation and stability, delivering architectural improvements alongside careful attention to compatibility. For production systems, the combination of performance gains and new capabilities makes PostgreSQL 18 a compelling upgrade target.

For comprehensive details on all changes and migration considerations, consult the official PostgreSQL 18 release notes.

Thank you and happy querying!
