Database selection shapes how your application handles data. PostgreSQL offers a relational model with SQL queries and ACID guarantees. DynamoDB provides a key-value store with automatic scaling and single-digit millisecond latency. One prioritizes data relationships and query flexibility, the other optimizes for predictable performance at any scale.
DynamoDB entered the market in 2012 as Amazon's answer to scaling challenges that traditional databases struggled with. It eliminated capacity planning by automatically scaling throughput. You paid for what you used, and AWS handled the infrastructure.
PostgreSQL represented decades of relational database evolution. It required you to think about capacity upfront but gave you powerful querying capabilities and data modeling tools that key-value stores couldn't match.
PostgreSQL 18's performance improvements and better JSON support make it viable for workloads that previously required NoSQL databases. But DynamoDB still excels at scenarios where predictable performance matters more than query flexibility. Let's look at where each database fits.
What is PostgreSQL?
PostgreSQL is an open-source relational database that has been in active development since 1986. It emphasizes SQL standards and data correctness through constraints, foreign keys, and transaction guarantees. The database runs on your infrastructure, whether that's on-premises servers, cloud VMs, or managed services like AWS RDS.
PostgreSQL's strength lies in its query capabilities. You can join tables, aggregate data, and express complex business logic in SQL. The database includes extensions for specialized workloads like full-text search, geospatial data, and time-series analysis.
Version 18 improved performance for concurrent workloads and enhanced JSONB operations. These changes make PostgreSQL competitive for applications that need both relational structure and flexible document storage.
What is DynamoDB?
DynamoDB is a fully managed NoSQL database service from Amazon Web Services. Launched in 2012, it is designed to deliver consistent performance at any scale. The database automatically distributes data across multiple servers and handles replication without configuration.
DynamoDB uses a key-value model where you access items by their primary key. The database excels at simple queries but struggles with complex operations like joins or aggregations. You model your data around access patterns rather than normalizing relationships.
The service scales automatically based on traffic. You can configure on-demand pricing where AWS adjusts capacity in real-time, or provisioned capacity where you specify read and write throughput upfront. DynamoDB handles all operational tasks including backups, patching, and failover.
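The pricing tradeoff between the two modes comes down to simple arithmetic. The sketch below compares a steady workload under both modes; all rates are placeholder values, not current AWS prices, so treat the shape of the comparison rather than the dollar amounts as the takeaway.

```javascript
// Hypothetical rates -- check the current AWS pricing page before relying on these.
const ON_DEMAND_PER_MILLION_WRITES = 1.25; // $/1M write request units (placeholder)
const ON_DEMAND_PER_MILLION_READS = 0.25;  // $/1M read request units (placeholder)
const PROVISIONED_WCU_HOUR = 0.00065;      // $/WCU-hour (placeholder)
const PROVISIONED_RCU_HOUR = 0.00013;      // $/RCU-hour (placeholder)

// On-demand: pay per request actually made over the month.
function monthlyOnDemand(writesPerSec, readsPerSec) {
  const seconds = 30 * 24 * 3600;
  return (writesPerSec * seconds / 1e6) * ON_DEMAND_PER_MILLION_WRITES +
         (readsPerSec * seconds / 1e6) * ON_DEMAND_PER_MILLION_READS;
}

// Provisioned: pay for reserved capacity every hour, used or not.
function monthlyProvisioned(wcu, rcu) {
  const hours = 30 * 24;
  return wcu * hours * PROVISIONED_WCU_HOUR + rcu * hours * PROVISIONED_RCU_HOUR;
}

// A steady 50 writes/s and 200 reads/s:
console.log(monthlyOnDemand(50, 200).toFixed(2));
console.log(monthlyProvisioned(50, 200).toFixed(2));
```

For perfectly steady traffic, provisioned capacity is usually far cheaper; on-demand wins when traffic is spiky or unpredictable, because idle provisioned capacity still bills by the hour.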
PostgreSQL vs DynamoDB: quick comparison
Feature | PostgreSQL 18 | DynamoDB |
---|---|---|
Model | Relational (tables, rows) | Key-value (items, attributes) |
Query language | SQL | PartiQL/API operations |
Schema | Required, enforced | Flexible, only key required |
Indexes | Multiple per table, any column | Primary key + optional secondary indexes |
Transactions | Full ACID across tables | ACID, up to 100 items / 4 MB per transaction |
Joins | Native SQL joins | Application-level only |
Consistency | Immediate (synchronous) | Eventual or strong (configurable) |
Scaling | Vertical (larger server) | Horizontal (automatic) |
Management | Self-managed or RDS | Fully managed by AWS |
Pricing | Instance hours | Request units + storage |
Backup | Manual or automated | Automatic continuous |
Performance | Depends on resources | Consistent single-digit ms |
Query flexibility | Highly flexible | Limited to key patterns |
Data relationships | Foreign keys, constraints | Application responsibility |
License | PostgreSQL (open source) | Proprietary AWS service |
Getting started
DynamoDB requires an AWS account but no infrastructure decisions. Tables are created through the AWS Console or CLI with only the primary key defined upfront:
aws dynamodb create-table \
  --table-name Users \
  --attribute-definitions AttributeName=userId,AttributeType=S \
  --key-schema AttributeName=userId,KeyType=HASH \
  --billing-mode PAY_PER_REQUEST
The table becomes available immediately. Other attributes can be added dynamically when inserting items.
PostgreSQL needs server provisioning and explicit database setup:
sudo apt install postgresql
sudo -u postgres psql
CREATE DATABASE myapp;
CREATE USER appuser WITH PASSWORD 'secure_pass';
GRANT ALL ON DATABASE myapp TO appuser;
Production deployments require configuring connection pooling, replication, and backup strategies. DynamoDB handles these automatically.
Data modeling approaches
DynamoDB stores everything in flexible items where only the primary key is required:
const params = {
  TableName: 'Users',
  Item: {
    userId: '123',
    email: 'user@example.com',
    name: 'John Doe',
    preferences: {
      theme: 'dark',
      notifications: true
    }
  }
};

await dynamodb.put(params).promise();
Any attributes can be added without schema changes. The preferences object can differ between users. However, the database doesn't validate data types or required fields beyond the primary key.
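Since the database won't reject a missing email or a numeric name, that validation has to live in application code before the put. A minimal sketch, mirroring the Users item above; the rules here are illustrative, not a library API:

```javascript
// Application-level validation DynamoDB won't do for you: check required
// fields and types before writing the item.
function validateUser(item) {
  const errors = [];
  if (typeof item.userId !== 'string' || item.userId.length === 0) {
    errors.push('userId must be a non-empty string');
  }
  if (typeof item.email !== 'string' || !item.email.includes('@')) {
    errors.push('email must look like an address');
  }
  if (typeof item.name !== 'string') {
    errors.push('name must be a string');
  }
  return errors;
}

console.log(validateUser({ userId: '123', email: 'user@example.com', name: 'John Doe' })); // []
console.log(validateUser({ userId: '123', email: 'not-an-email' })); // two errors
```

In practice teams often reach for a schema library to do this, but something has to do it: with DynamoDB, the "schema" is whatever your code enforces.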
PostgreSQL enforces structure through column definitions while allowing flexibility through JSONB:
CREATE TABLE users (
  user_id VARCHAR(50) PRIMARY KEY,
  email VARCHAR(255) NOT NULL UNIQUE,
  name VARCHAR(255) NOT NULL,
  preferences JSONB DEFAULT '{}',
  created_at TIMESTAMPTZ DEFAULT NOW()
);
INSERT INTO users (user_id, email, name, preferences)
VALUES ('123', 'user@example.com', 'John Doe',
'{"theme": "dark", "notifications": true}');
Email and name must exist and match their types. The preferences column provides flexibility while critical fields remain structured and validated.
Querying patterns
DynamoDB performs fast lookups by primary key but struggles with other access patterns:
// Fast: get by primary key
const result = await dynamodb.get({
  TableName: 'Users',
  Key: { userId: '123' }
}).promise();

// Slow: query by non-key attribute requires scanning entire table
const scanResult = await dynamodb.scan({
  TableName: 'Users',
  FilterExpression: 'email = :email',
  ExpressionAttributeValues: { ':email': 'user@example.com' }
}).promise();
Querying by anything other than the primary key means scanning every item or creating a Global Secondary Index. Scans become prohibitively expensive as tables grow.
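The arithmetic behind "prohibitively expensive" is worth seeing once. A scan bills read units for every byte it evaluates, not just the items that match the filter; the sketch below uses the 4 KB read-unit granularity as a rough model, so treat the numbers as order-of-magnitude illustration:

```javascript
// Rough cost model: eventually consistent reads cost one read unit per two
// 4 KB chunks of data evaluated.
function scanReadUnits(itemCount, avgItemBytes) {
  const totalBytes = itemCount * avgItemBytes;
  const chunks = Math.ceil(totalBytes / 4096); // 4 KB read units
  return Math.ceil(chunks / 2);                // eventually consistent: half price
}

// 10 million 1 KB items: every email lookup via scan pays to read the whole table.
console.log(scanReadUnits(10_000_000, 1024));
// A key lookup of one 1 KB item costs a single read unit regardless of table size.
console.log(scanReadUnits(1, 1024));
```

The filter doesn't help: it is applied after the read units are consumed, which is why the same lookup that costs one unit by key can cost over a million units by scan.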
PostgreSQL queries any column efficiently with proper indexing:
-- Query by any indexed column
SELECT * FROM users WHERE email = 'user@example.com';
-- Complex queries with multiple conditions
SELECT name, preferences->>'theme' as theme
FROM users
WHERE created_at > NOW() - INTERVAL '30 days'
AND preferences->>'notifications' = 'true';
Queries work against any column combination. Adding indexes makes any access pattern fast without redesigning the schema.
Handling relationships
DynamoDB has no native support for relationships, forcing denormalization:
// Option 1: Embed related data (limited by 400KB item size)
await dynamodb.update({
  TableName: 'Users',
  Key: { userId: '123' },
  // if_not_exists handles the first append, before the posts list exists
  UpdateExpression: 'SET posts = list_append(if_not_exists(posts, :empty), :post)',
  ExpressionAttributeValues: {
    ':post': [{ postId: 'p1', title: 'My Post' }],
    ':empty': []
  }
}).promise();

// Option 2: Separate table, query in application code
const posts = await dynamodb.query({
  TableName: 'UserPosts',
  KeyConditionExpression: 'userId = :uid',
  ExpressionAttributeValues: { ':uid': '123' }
}).promise();
Related data either embeds within items (hitting size limits) or requires multiple queries in application code. The database doesn't enforce referential integrity.
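A third, widely used workaround is single-table design: store users and posts in one table under a shared partition key, with a sort-key prefix per entity type, so one Query on the partition key returns the user and all their posts together. The sketch below models the idea with a plain array standing in for the table; the `USER#`/`POST#` key convention is a common pattern, not a DynamoDB requirement:

```javascript
// One table, two entity types, distinguished by sort-key prefix.
const table = [
  { pk: 'USER#123', sk: 'PROFILE', name: 'John Doe' },
  { pk: 'USER#123', sk: 'POST#p1', title: 'My Post' },
  { pk: 'USER#123', sk: 'POST#p2', title: 'Another Post' },
  { pk: 'USER#456', sk: 'PROFILE', name: 'Jane Doe' },
];

// Stand-in for a single Query with KeyConditionExpression 'pk = :pk'.
function queryPartition(pk) {
  return table.filter(item => item.pk === pk);
}

const items = queryPartition('USER#123');
const profile = items.find(i => i.sk === 'PROFILE');
const posts = items.filter(i => i.sk.startsWith('POST#'));
console.log(profile.name, posts.length);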
PostgreSQL uses foreign keys to maintain relationships:
CREATE TABLE posts (
  post_id SERIAL PRIMARY KEY,
  user_id VARCHAR(50) REFERENCES users(user_id) ON DELETE CASCADE,
  title VARCHAR(255) NOT NULL,
  content TEXT
);
SELECT u.name, p.title, p.content
FROM users u
JOIN posts p ON u.user_id = p.user_id
WHERE u.user_id = '123';
Foreign keys prevent orphaned records. A single query combines data from multiple tables with guaranteed referential integrity.
Scaling characteristics
DynamoDB scales horizontally without configuration:
# Register the table as an auto-scaling target (a scaling policy is attached separately)
aws application-autoscaling register-scalable-target \
  --service-namespace dynamodb \
  --resource-id table/Users \
  --scalable-dimension dynamodb:table:ReadCapacityUnits \
  --min-capacity 5 \
  --max-capacity 1000
The database automatically partitions data across servers as it grows. Performance remains consistent regardless of data size, but every request costs money.
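The reason latency stays flat is the routing model: the partition key is hashed to select one partition, so a lookup touches a single server no matter how large the table gets. The toy hash below stands in for DynamoDB's internal one; the mechanism, not the hash function, is the point:

```javascript
// Toy consistent-routing sketch: same key, same partition, every time.
function hashKey(key) {
  let h = 0;
  for (const ch of key) h = (h * 31 + ch.codePointAt(0)) >>> 0;
  return h;
}

function partitionFor(key, partitionCount) {
  return hashKey(key) % partitionCount;
}

// The same key always routes to the same partition...
console.log(partitionFor('user-123', 8) === partitionFor('user-123', 8));
// ...and growth means more partitions, not slower lookups.
```

This is also why scans are the odd operation out: a scan has to visit every partition, forfeiting exactly the property the hash routing provides.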
PostgreSQL scales vertically by adding resources to a single server:
# postgresql.conf settings for larger workloads
max_connections = 200
shared_buffers = 4GB
effective_cache_size = 12GB
Horizontal scaling requires read replicas, connection pooling, or sharding extensions. This demands more planning but provides cost control.
Transactions and consistency
PostgreSQL transactions work across any number of tables:
BEGIN;
UPDATE users SET credits = credits - 100 WHERE user_id = '123';
UPDATE users SET credits = credits + 100 WHERE user_id = '456';
COMMIT;
Both updates succeed together or neither does, maintaining consistency across the database.
DynamoDB transactions have size and item limits:
await dynamodb.transactWrite({
  TransactItems: [
    {
      Update: {
        TableName: 'Users',
        Key: { userId: '123' },
        UpdateExpression: 'SET credits = credits - :amount',
        ExpressionAttributeValues: { ':amount': 100 }
      }
    },
    {
      Update: {
        TableName: 'Users',
        Key: { userId: '456' },
        UpdateExpression: 'SET credits = credits + :amount',
        ExpressionAttributeValues: { ':amount': 100 }
      }
    }
  ]
}).promise();
Transactions are limited to 100 items and 4MB total size. They also cost twice as much as regular operations.
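That 2x premium is easy to quantify: a transactional write consumes two write units per 1 KB where a standard write consumes one. A quick sketch of the arithmetic, using the credit transfer above as the example:

```javascript
// Write-unit cost model: 1 WCU per 1 KB for standard writes, doubled for
// transactional writes. Illustrative arithmetic only.
function writeUnits(itemBytes, transactional = false) {
  const units = Math.ceil(itemBytes / 1024);
  return transactional ? units * 2 : units;
}

// The two 1 KB credit updates as plain writes vs. a transaction:
console.log(writeUnits(1024) + writeUnits(1024));             // 2
console.log(writeUnits(1024, true) + writeUnits(1024, true)); // 4
```

For a high-volume transfer workload, that doubling shows up directly on the bill, which is why DynamoDB designs often reserve transactions for the few operations that truly need atomicity.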
Indexing strategies
PostgreSQL allows indexes on any column or expression:
CREATE INDEX idx_user_email ON users(email);
CREATE INDEX idx_preferences ON users USING GIN (preferences);
-- Partial index for common queries. The predicate must be immutable,
-- so use a fixed cutoff rather than NOW():
CREATE INDEX idx_active_users ON users(email)
WHERE created_at > '2025-01-01';
The query planner automatically chooses optimal indexes. Any query pattern can be made fast without schema redesign.
DynamoDB requires Global Secondary Indexes for non-key queries:
aws dynamodb update-table \
  --table-name Users \
  --attribute-definitions AttributeName=email,AttributeType=S \
  --global-secondary-index-updates \
  '[{
    "Create": {
      "IndexName": "EmailIndex",
      "KeySchema": [{"AttributeName":"email","KeyType":"HASH"}],
      "Projection": {"ProjectionType":"ALL"}
    }
  }]'
Each Global Secondary Index stores its own copy of the projected data and consumes write capacity on every update, so a full-projection index roughly doubles storage and write costs. Access patterns must be identified during design because backfilling an index onto a large table later is slow and expensive.
Operational considerations
DynamoDB eliminates most operational tasks:
# Point-in-time recovery is a one-line switch; AWS then backs up continuously
aws dynamodb update-continuous-backups \
  --table-name Users \
  --point-in-time-recovery-specification PointInTimeRecoveryEnabled=true
AWS handles patching, failover, and replication across availability zones. The service maintains availability without configuration.
PostgreSQL requires infrastructure management:
# Configure replication in postgresql.conf
wal_level = replica
max_wal_senders = 3
# Take a logical backup with pg_dump
pg_dump myapp > backup.sql
Even with RDS, you manage capacity planning, monitoring, and performance tuning. This provides control but demands database administration knowledge.
Final thoughts
PostgreSQL and DynamoDB solve different problems. PostgreSQL gives you query flexibility and data integrity guarantees. Complex relationships and unpredictable query patterns work naturally. The tradeoff is operational responsibility and scaling complexity.
DynamoDB guarantees performance and eliminates operations. It scales automatically and maintains consistent latency regardless of load. The tradeoff is limited query flexibility and higher costs for some workloads.
For applications with complex data relationships, reporting requirements, or unpredictable query patterns, PostgreSQL is the better choice. For applications with simple access patterns that need predictable performance at massive scale, DynamoDB makes sense.
Most new applications should start with PostgreSQL unless they have specific requirements that DynamoDB addresses. You can always migrate to DynamoDB later if scaling becomes an issue, but moving from DynamoDB to PostgreSQL when you discover you need complex queries is far more painful.