
GoodJob vs Sidekiq

Stanley Ulili
Updated on November 6, 2025

Background job architecture determines how your Rails application handles asynchronous work and what happens when workers crash or jobs fail. GoodJob executes jobs using PostgreSQL’s transaction guarantees and LISTEN/NOTIFY, while Sidekiq processes jobs from Redis with configurable reliability tradeoffs. This difference affects operational complexity and failure recovery patterns, not just which datastore you run.

GoodJob emerged in 2020 when Ben Sheldon built a background processor that leveraged PostgreSQL’s reliability guarantees without adding Redis to the stack. The gem uses advisory locks for job claiming and LISTEN/NOTIFY for immediate job execution. Sidekiq appeared in 2012 when Mike Perham created a multi-threaded worker that used Redis for speed; he later added commercial Pro and Enterprise tiers for teams needing guaranteed execution.

Modern Rails applications choose based on infrastructure complexity and job reliability requirements. GoodJob handles persistence through your existing PostgreSQL database with ACID transactions. Sidekiq gives you speed and memory efficiency through Redis but requires additional infrastructure. Your choice affects how you deploy workers, handle failures, and reason about job execution guarantees.

What is GoodJob?

GoodJob stores jobs as rows in a PostgreSQL table within your application database. Every job enqueue happens in a transaction with your business logic. When you create a user and send a welcome email, both operations commit atomically or roll back together. The worker processes claim jobs using PostgreSQL advisory locks to prevent duplicate execution.

The execution model uses LISTEN/NOTIFY for immediate job pickup. When you enqueue a job, GoodJob sends a PostgreSQL notification. Worker processes listening on that channel wake up instantly and claim the job — no polling delays. If no workers are listening, jobs wait in the table until a worker starts and picks them up.
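The mechanism itself is plain PostgreSQL and can be seen with the pg gem directly. This is an illustration of LISTEN/NOTIFY, not GoodJob's internal code, and the channel name and connection parameters are assumptions:

```ruby
require "pg"

# One connection subscribes to a channel...
listener = PG.connect(dbname: "myapp_development")
listener.exec("LISTEN good_job")

# ...another sends a notification, much as GoodJob does on enqueue.
PG.connect(dbname: "myapp_development").exec("NOTIFY good_job, 'job enqueued'")

# wait_for_notify yields as soon as the notification arrives; no polling loop.
listener.wait_for_notify(5) do |channel, _pid, payload|
  puts "#{channel}: #{payload}"
end
```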

Jobs execute in background threads within your Rails process or in separate worker processes. The async execution mode runs workers in the same process as your web server, reducing deployment complexity. The external execution mode runs dedicated worker processes that scale independently from web servers. Both modes claim jobs using the same advisory lock mechanism.

GoodJob runs as a gem within your Rails application. You add it to your Gemfile, run migrations to create the jobs table, and configure ActiveJob to use GoodJob as the adapter. No separate services to deploy, no additional datastores to maintain. Worker processes connect to your PostgreSQL database using your existing connection pool.

The perform_later call inserts a row into the good_jobs table. If the transaction rolls back, the job never gets created. No orphaned jobs. No separate cleanup logic. The database transaction guarantees consistency between your business logic and background jobs.

What is Sidekiq?

Screenshot of the Sidekiq GitHub page

Sidekiq stores jobs as JSON strings in Redis lists and sorted sets. You push jobs onto queues, and worker processes pop jobs off for execution. Redis provides fast enqueue and dequeue operations with minimal memory overhead per job. The datastore lives separate from your application database, requiring coordination between systems.

The default execution model provides at-most-once delivery semantics. Sidekiq pops jobs from Redis and executes them. If the worker crashes during execution, the job disappears. No automatic retry for crashed workers. The Pro and Enterprise versions add reliability features like super_fetch that keep jobs in Redis until explicitly acknowledged.

Workers run in separate processes that use threads for concurrent execution. Each Sidekiq process spawns 25 threads by default, letting you process multiple jobs simultaneously with lower memory overhead than forking processes. Thread-safety becomes important — your job code must handle concurrent execution without shared state corruption.

Sidekiq runs as a separate service that connects to Redis. You deploy Sidekiq processes independently from your Rails servers. Redis can live on the same host during development or run as a managed service in production. The separation gives you flexibility but adds deployment and monitoring complexity.

The perform_later call pushes a job to Redis immediately. The enqueue happens outside your database transaction. If your transaction rolls back after enqueuing, the job still executes. You need compensating logic to handle jobs referencing rolled-back records.
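A common mitigation is to enqueue from an after_commit callback, so the push to Redis only happens once the surrounding transaction has committed. A sketch, using a hypothetical Order model:

```ruby
class Order < ApplicationRecord
  # Runs only after the enclosing transaction commits; a rollback
  # means the job is never pushed to Redis.
  after_create_commit { ChargeCustomerJob.perform_later(id) }
end
```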

GoodJob vs Sidekiq: quick comparison

Aspect | GoodJob | Sidekiq
Storage backend | PostgreSQL | Redis
Transaction safety | Jobs commit with business logic | Jobs enqueue separately
Execution guarantee | At-least-once (with retries) | At-most-once (Pro adds reliability)
Worker model | Threads in Rails process or external | External process with threads
Job claiming | Advisory locks | Pop from Redis
Immediate pickup | LISTEN/NOTIFY | Blocking pop (scheduled jobs polled)
Infrastructure | PostgreSQL only | PostgreSQL + Redis
Deployment complexity | Single service | Multiple services
Concurrency model | Threads; thread-safe code required | Threads; thread-safe code required
Operational overhead | Lower | Higher

Transaction safety

The storage difference became critical when I built an e-commerce checkout. GoodJob's transactional job creation prevented orphaned operations:

 
# GoodJob - atomic order creation and job enqueue
Order.transaction do
  order = Order.create!(
    user: current_user,
    items: cart.items,
    total: cart.total
  )

  Payment.create!(
    order: order,
    amount: cart.total,
    status: 'pending'
  )

  ChargeCustomerJob.perform_later(order.id)
  SendConfirmationEmailJob.perform_later(order.id)
  UpdateInventoryJob.perform_later(order.id)
end

The order creation, payment record, and three jobs all commit atomically. If anything fails—validation error, database constraint violation, network issue—the entire transaction rolls back. No charge jobs for non-existent orders. No confirmation emails referencing missing payments. The database guarantees everything succeeds together or nothing happens.

Sidekiq required defensive coding to handle the separate enqueue:

 
# Sidekiq - handle potential inconsistency
order = nil

Order.transaction do
  order = Order.create!(
    user: current_user,
    items: cart.items,
    total: cart.total
  )

  Payment.create!(
    order: order,
    amount: cart.total,
    status: 'pending'
  )
end

# Jobs enqueue after transaction commits
ChargeCustomerJob.perform_later(order.id)
SendConfirmationEmailJob.perform_later(order.id)
UpdateInventoryJob.perform_later(order.id)

Jobs got enqueued after the transaction committed. If Redis went down between commit and enqueue, I lost jobs completely. If the application crashed after the transaction but before enqueue, those jobs never ran. I added idempotency checks in each job to handle duplicate executions when Redis was temporarily unavailable and I retried the entire operation.
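The idempotency guard boils down to recording a processed key before doing the work. This pure-Ruby sketch stands in for what was a database uniqueness constraint in the real jobs; the method and key names are hypothetical:

```ruby
require "set"

PROCESSED_KEYS = Set.new # stand-in for a unique index in the database

def charge_customer(order_id)
  # Set#add? returns nil when the key already exists, so repeats become no-ops
  return :skipped unless PROCESSED_KEYS.add?("charge:#{order_id}")
  :charged # the real job would call the payment gateway here
end

charge_customer(42) # => :charged
charge_customer(42) # => :skipped on the duplicate execution
```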

Worker crash recovery

That transaction safety extended to how each system handles worker crashes. GoodJob keeps jobs in the database until explicitly marked finished:

 
# GoodJob - jobs survive worker crashes
# Worker claims job with advisory lock
# Job row stays in database during execution
# If worker crashes, lock releases automatically
# Another worker picks up the job and retries

class ProcessPaymentJob < ApplicationJob
  retry_on StandardError, wait: 5.seconds, attempts: 3

  def perform(payment_id)
    payment = Payment.find(payment_id)
    # Worker crashes here
    PaymentGateway.charge(payment)
  end
end

When a GoodJob worker crashes mid-execution, PostgreSQL releases the advisory lock automatically. The job becomes available for retry immediately. Another worker picks it up and runs it again. The retry logic built into ActiveJob handles the reattempts. I never lost jobs due to worker crashes—they always got retried.

Sidekiq's default behavior loses jobs when workers crash:

 
# Sidekiq - jobs disappear on worker crash
# Worker pops job from Redis
# Job removed from queue immediately
# If worker crashes, job lost forever

class ProcessPaymentJob < ApplicationJob
  def perform(payment_id)
    payment = Payment.find(payment_id)
    # Worker crashes here - job lost
    PaymentGateway.charge(payment)
  end
end

Sidekiq pops the job from Redis before execution starts. If the worker dies during processing, that job vanishes. No retry. No recovery. Gone. During a deployment where I killed workers too aggressively, I lost hundreds of jobs. Customers didn't receive confirmation emails. Inventory didn't update. I had to add manual cleanup jobs to find and reprocess missing work.

Sidekiq Pro adds super_fetch mode that keeps jobs in Redis until acknowledged:

 
# Sidekiq Pro - reliable fetch
# config/initializers/sidekiq.rb
Sidekiq.configure_server do |config|
  config.super_fetch!
end

# Jobs stay in Redis until explicitly acknowledged
# Worker crashes return job to queue
# Automatic retry without job loss

The super_fetch strategy moves each job to a per-process working queue instead of removing it from Redis outright. If a worker crashes, the orphaned jobs are detected and pushed back onto the main queue for retry. This costs money—Sidekiq Pro starts at $179/month for five workers. I paid for it after losing those jobs, but it meant adding a paid dependency for reliability that GoodJob provides by default.

Job latency and pickup time

The crash recovery difference revealed latency characteristics. GoodJob's LISTEN/NOTIFY provides near-instant job pickup:

 
# GoodJob - immediate notification
# Enqueue sends PostgreSQL NOTIFY
# Workers receive notification instantly
# Job starts within milliseconds

User.transaction do
  user = User.create!(email: params[:email])
  WelcomeEmailJob.perform_later(user.id)
end
# Job picked up in 5-50ms typically

When I enqueued a job, workers listening on the PostgreSQL channel received the notification immediately. Jobs started executing within milliseconds. The LISTEN/NOTIFY mechanism operates at the protocol level—very low overhead. During load testing, I measured median pickup latency of 15 milliseconds from enqueue to execution start.

Sidekiq combines a blocking fetch for immediate jobs with a configurable scheduler poll:

 
# Sidekiq - fetch and scheduler behavior
# config/sidekiq.yml
:average_scheduled_poll_interval: 5  # scan scheduled/retry sets ~every 5 seconds

# Enqueued jobs: workers block on Redis (BRPOP), pickup is near-instant
# Scheduled and retried jobs: a poller scans the sorted sets on this interval
# Minimum latency for a scheduled job roughly equals the poll interval

For jobs enqueued for immediate execution, Sidekiq workers block on Redis with BRPOP, so pickup latency was comparable to GoodJob's in my tests: a few milliseconds. Scheduled and retried jobs behave differently. A poller scans the Redis sorted sets every five seconds on average (Sidekiq scales the interval with the number of processes), so a job scheduled for a precise moment could start several seconds late. I reduced the interval to one second for time-sensitive scheduled work, which increased Redis load proportionally: more queries, higher CPU usage on workers, more network traffic. GoodJob's LISTEN/NOTIFY avoids polling for immediate jobs entirely, though it too relies on a poll interval for future-scheduled ones.

Deployment models

Those latency characteristics influenced deployment strategies. GoodJob's async mode runs workers inside web processes:

 
# GoodJob - async execution in web process
# config/environments/production.rb
config.active_job.queue_adapter = :good_job

# config/initializers/good_job.rb
Rails.application.configure do
  config.good_job.execution_mode = :async
  config.good_job.max_threads = 5
end

# Puma config
workers ENV.fetch("WEB_CONCURRENCY") { 2 }
threads_count = ENV.fetch("RAILS_MAX_THREADS") { 5 }
threads threads_count, threads_count

My Puma web servers ran background jobs in dedicated threads. One process handled both HTTP requests and background work. Deployment became simpler—a single container image, one process to monitor, no coordination between services. Memory usage stayed reasonable since Puma's forked workers shared code. The async mode worked perfectly for my workload of 50-100 jobs per minute.

Sidekiq requires separate worker processes:

 
# Sidekiq - external worker processes
# Procfile
web: bundle exec puma -C config/puma.rb
worker: bundle exec sidekiq -C config/sidekiq.yml

# Separate containers in production
# web container: Rails + Puma
# worker container: Rails + Sidekiq
# Must coordinate deployments between them

I deployed web and worker containers separately. During deployments, I needed to drain workers gracefully to avoid losing jobs. The two-container setup doubled my infrastructure complexity. Health checks for both services. Separate scaling policies. Independent crash recovery. More monitoring dashboards. The operational overhead became significant.
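The drain step amounted to quieting Sidekiq before terminating it: Sidekiq responds to TSTP by refusing new work and to TERM by shutting down. A sketch of such a deploy hook; the pid-file path is an assumption:

```ruby
# Hypothetical deploy hook: quiet Sidekiq, wait, then shut down.
pid = File.read("tmp/pids/sidekiq.pid").to_i

Process.kill("TSTP", pid) # "quiet": finish in-flight jobs, fetch nothing new
sleep 30                  # give running jobs time to complete
Process.kill("TERM", pid) # graceful shutdown
```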

GoodJob also supports external workers when you need independent scaling:

 
# GoodJob - external execution mode
# config/initializers/good_job.rb
config.good_job.execution_mode = :external

# Procfile
web: bundle exec puma -C config/puma.rb
worker: bundle exec good_job start

# Similar deployment model to Sidekiq
# But workers use same database
# No additional infrastructure

When my job volume grew, I switched to external mode. Dedicated worker processes scaled independently from web servers. But unlike Sidekiq, I didn't add Redis to my stack. Workers connected to the same PostgreSQL database. One fewer service to monitor, patch, and keep running.

Concurrency and resource usage

The deployment model affected resource consumption patterns. GoodJob's thread-based workers kept memory usage predictable:

 
# GoodJob - memory usage scales with threads
# config/initializers/good_job.rb
config.good_job.max_threads = 10

# Each thread shares process memory
# 10 threads processing jobs simultaneously
# Memory growth from job data, not worker overhead
# Typical worker process: 200-400MB RSS

I ran 10 concurrent threads per worker process. Each thread executed jobs independently but shared the Rails application memory. When processing large datasets, memory grew from the data itself, not from spawning new workers. A typical GoodJob worker used 300MB resident memory under load. Scaling to 50 concurrent jobs meant five processes, roughly 1.5GB total.

Sidekiq's threading model provided similar efficiency:

 
# Sidekiq - comparable memory footprint
# config/sidekiq.yml
:concurrency: 25

# 25 threads per Sidekiq process
# Similar memory sharing benefits
# Typical worker process: 250-450MB RSS
# Slightly better memory efficiency than GoodJob

Sidekiq workers handled 25 concurrent jobs per process with similar memory usage. The Redis client added minimal overhead. Memory efficiency actually edged out GoodJob by 10-15% in my benchmarks, likely because Sidekiq's codebase is more optimized after 12 years of production use. For memory-constrained environments, Sidekiq's efficiency mattered.

Both systems required thread-safe code:

 
# Both GoodJob and Sidekiq - thread safety required
class ReportGenerationJob < ApplicationJob
  def perform(user_id)
    # Bad: shared state across threads
    @@report_cache ||= {}
    @@report_cache[user_id] = generate_report(user_id)

    # Good: no shared state
    report = generate_report(user_id)
    ReportStorage.save(user_id, report)
  end
end

Class variables and global state caused race conditions with either system. Two threads modifying the same hash corrupted data. I learned to avoid shared mutable state, use database transactions for coordination, and treat each job execution as isolated. The discipline applied equally to both processors.

Queue management and priority

The resource constraints led me to explore queue configuration. GoodJob manages queues through database queries:

 
# GoodJob - queue configuration
# config/initializers/good_job.rb
config.good_job.queues = 'critical:2;default:2;low_priority:1'

# Each entry spawns a dedicated thread pool
# "critical" runs on 2 threads
# "default" runs on 2 threads
# "low_priority" runs on 1 thread

class UrgentProcessingJob < ApplicationJob
  queue_as :critical

  def perform(order_id)
    # Executes with higher priority
  end
end

The per-queue thread counts determined how workers distributed attention. Critical jobs had more dedicated threads than low-priority ones, so they never queued behind a backlog of cheap work. GoodJob achieves this by running a separate thread pool per queue entry, each polling the database for its own queues. That query overhead became noticeable at high throughput, since every worker issued frequent queries across all configured pools.

Sidekiq implements queues using separate Redis lists:

 
# Sidekiq - queue configuration
# config/sidekiq.yml
:queues:
  - [critical, 4]
  - [default, 2]
  - [low_priority, 1]

# Each queue is a Redis list
# Workers check queues in weighted order
# Lower Redis query overhead
# Faster queue polling

class UrgentProcessingJob < ApplicationJob
  queue_as :critical

  def perform(order_id)
    # Checked 4x more often than default
  end
end

Workers polled queues according to their weights. The critical queue got checked four times for every low_priority check. Redis list operations are incredibly fast—microseconds per operation. Sidekiq's queue management felt snappier and used fewer resources than GoodJob's database queries.

Job filtering and queries

Those queue queries highlighted broader querying capabilities. GoodJob exposes jobs through ActiveRecord:

 
# GoodJob - query jobs like any model
# Find jobs enqueued for a specific user
# (job arguments live under serialized_params['arguments'])
GoodJob::Job.where("serialized_params -> 'arguments' @> ?", [user_id].to_json)

# Find stuck jobs
GoodJob::Job.where("scheduled_at < ?", 1.hour.ago)
            .where(finished_at: nil)

# Analyze job distribution
GoodJob::Job.group(:queue_name).count

# Custom cleanup logic
GoodJob::Job.where("created_at < ?", 1.week.ago)
            .where.not(finished_at: nil)
            .delete_all

The jobs table worked like any Rails model. I wrote SQL queries to analyze job patterns, find anomalies, and clean up old records. During debugging, I queried jobs by arguments to trace specific operations through the system. The flexibility helped immensely when investigating production issues.

Sidekiq provides API methods but limited querying:

 
# Sidekiq - API-based job inspection
# View queue sizes
Sidekiq::Queue.new('default').size

# Inspect scheduled jobs
Sidekiq::ScheduledSet.new.each do |job|
  puts job.args
end

# Limited filtering capabilities
# Must iterate all jobs to filter
# No SQL-like queries available

# Clear a queue (nuclear option)
Sidekiq::Queue.new('default').clear

I could check queue sizes and iterate scheduled jobs, but complex queries weren't possible. To find all jobs for a specific user, I had to load every job and filter in Ruby. Millions of jobs made this impractical. The Redis datastore traded query flexibility for speed. For debugging specific jobs, I added logging to track what I needed.
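Filtering by argument meant a full scan in Ruby. A sketch of what that looked like, assuming the user id is the first job argument and `user_id` is already in scope:

```ruby
require "sidekiq/api"

# Sidekiq::Queue is Enumerable, so this iterates every job in the queue
stale_jobs = Sidekiq::Queue.new("default").select do |job|
  job.args.first == user_id
end

# Each entry responds to #args and #delete; #delete removes it from Redis
stale_jobs.each(&:delete)
```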

Monitoring and observability

The querying difference extended to monitoring. GoodJob metrics come from PostgreSQL queries:

 
# GoodJob - database-backed metrics
# Built-in dashboard at /good_job
# Shows queue sizes, execution times, error rates
# Query for custom metrics

# Custom monitoring query
SELECT
  queue_name,
  COUNT(*) FILTER (WHERE finished_at IS NULL) as pending,
  COUNT(*) FILTER (WHERE error IS NOT NULL) as failed,
  AVG(EXTRACT(EPOCH FROM (finished_at - performed_at))) as avg_duration
FROM good_jobs
WHERE created_at > NOW() - INTERVAL '1 hour'
GROUP BY queue_name;

GoodJob ships with a web dashboard that shows queue depths, job execution times, and error rates. I wrote custom SQL queries to track metrics specific to my application. The database-backed approach meant every metric query added load to PostgreSQL. During high job volume, monitoring queries competed with job processing for database resources.

Sidekiq ships with a monitoring web UI backed by Redis metrics:

Screenshot of the dashboard

The Sidekiq web UI provided real-time visibility without affecting worker performance. Metrics lived in Redis with automatic expiration. For long-term analysis, I exported metrics to Prometheus. The monitoring system felt more polished but required integration work. GoodJob's simpler dashboard sometimes sufficed, especially during early development.

Final thoughts

GoodJob and Sidekiq represent two distinct philosophies in Ruby background processing. GoodJob prioritizes transactional safety and simplicity, relying on PostgreSQL for both persistence and reliability, while Sidekiq emphasizes speed and scalability, leveraging Redis for high throughput and concurrency at the cost of added infrastructure and potential reliability trade-offs.

If your Rails app already uses PostgreSQL and you value atomic job creation, built-in reliability, and minimal operational overhead, GoodJob is the natural fit. If you need maximum throughput, mature tooling, and are comfortable managing Redis, Sidekiq remains the performance-optimized standard, especially with its commercial reliability features.