Friday, February 6, 2026

BullMQ Elixir vs Oban Performance Benchmark

With the release of BullMQ for Elixir, we ran some benchmarks against Oban, the most popular job queue in the Elixir ecosystem. It's an interesting comparison because the two libraries use different backends: BullMQ uses Redis, while Oban uses PostgreSQL.

About BullMQ Elixir

While BullMQ Elixir is new to the ecosystem, it's built on years of battle-tested code from the Node.js BullMQ library. The Elixir implementation itself is just a thin layer on top of the same Lua scripts that power the Node.js version, scripts that have been refined over years of production use across thousands of deployments.

The heavy lifting happens in Redis via these Lua scripts. The Elixir code handles connection management, job serialization, and the worker lifecycle, but the core queue logic is shared with Node.js.

We've also ported most of the comprehensive test suite from Node.js, so despite being young in the Elixir world, BullMQ Elixir has solid test coverage.

Keep in mind that these are still the early days for BullMQ in Elixir, and we're actively working to squeeze more performance out of it. We expect these numbers to improve as we optimize the Elixir-specific code paths. We're also planning benchmarks for distributed Elixir deployments, running workers across multiple nodes, which is where things can get even more interesting. Stay tuned.

Under the hood

When you add a job in Oban, it runs a single SQL INSERT (or batch INSERT for bulk operations). The job lands in an oban_jobs table. Simple and straightforward.
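For illustration, a single insert with Oban looks like this (a minimal sketch; the worker module and args are placeholders):

```elixir
# A minimal Oban worker; enqueueing it performs one INSERT into oban_jobs.
defmodule MyApp.Mailer do
  use Oban.Worker, queue: :default

  @impl Oban.Worker
  def perform(%Oban.Job{args: %{"email" => email}}) do
    IO.puts("Sending welcome email to #{email}")
    :ok
  end
end

# Enqueue one job, e.g. from a request handler or event callback.
{:ok, _job} = Oban.insert(MyApp.Mailer.new(%{email: "user@example.com"}))
```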

BullMQ does more per job. Each insert runs a Lua script that, in a single atomic step, roughly does the following (see the sketch after the list):

  • Generates a unique job ID
  • Stores job data in a Redis hash
  • Adds the job to the right data structure (LIST for waiting, ZSET for delayed/prioritized)
  • Checks for duplicates
  • Handles parent-child dependencies if you're using flows
  • Publishes events for real-time subscribers
  • Updates rate limiting state if configured
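To make that concrete, here is a rough sketch of the Redis state one add produces, written against the Redix client. This is an approximation for illustration only: the real library performs all of it atomically inside a single Lua script, and the key layout shown follows the bull:<queue>:... convention from the Node.js implementation:

```elixir
# Approximate Redis operations behind one "add" (simplified; dedup, flows,
# and rate-limit bookkeeping are omitted).
{:ok, conn} = Redix.start_link(host: "localhost", port: 6379)

# Unique job ID from a counter key.
{:ok, job_id} = Redix.command(conn, ["INCR", "bull:emails:id"])

{:ok, _replies} =
  Redix.pipeline(conn, [
    # Job payload stored in a hash.
    ["HSET", "bull:emails:#{job_id}", "name", "welcome", "data", ~s({"to":"user@example.com"})],
    # Waiting jobs live on a LIST; delayed/prioritized jobs would use a ZSET.
    ["LPUSH", "bull:emails:wait", "#{job_id}"],
    # Marker that wakes workers blocked on BZPOPMIN (see "Poll-Free Architecture").
    ["ZADD", "bull:emails:marker", "0", "0"],
    # Event for real-time subscribers.
    ["XADD", "bull:emails:events", "*", "event", "added", "jobId", "#{job_id}"]
  ])
```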

Feature Comparison

| Feature | BullMQ | Oban |
| --- | --- | --- |
| Delayed Jobs | ✅ | ✅ |
| Priority Queues | ✅ | ✅ |
| Deduplication | ✅ | ✅ |
| Rate Limiting | ✅ | 💰 Pro |
| Flows/Dependencies | ✅ | 💰 Pro |
| Real-time Events | ✅ | ✅ |
| Cron/Schedulers | ✅ | ✅ |
| Backend | Redis | PostgreSQL |
| Cross-language | ✅ Node.js, Python, PHP, Elixir | Elixir, Python |

Rate limiting and job flows are included in BullMQ's open-source version. Oban requires the paid Pro tier for these.

Benchmark Environment

  • MacBook Pro M2 Pro, 16GB RAM
  • Redis 7.x (Docker, localhost) with AOF enabled (appendfsync everysec)
  • PostgreSQL 16 (standalone, localhost) with connection pool of 150
  • Elixir 1.18 / OTP 27
  • BullMQ 1.2.6, Oban 2.20.3
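For reference, the Oban side of the setup was configured along these lines (a sketch with placeholder module names; pool sizing is discussed in the methodology notes below):

```elixir
import Config

# Ecto pool sized to match the benchmark's concurrency needs.
config :bench, Bench.Repo,
  pool_size: 150

# One queue with up to 100 concurrent job processors.
config :bench, Oban,
  repo: Bench.Repo,
  queues: [bench: 100]
```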

All tests were run 5 times, and we report the mean result. Tests used 50,000 jobs to ensure statistically meaningful run times (several seconds per test).

Results

Single Job Insertion

[Chart] Jobs per second, single job insertion (50,000 jobs): BullMQ 5,800 vs Oban 2,900

For one-at-a-time inserts, BullMQ achieves ~5,800 jobs/sec vs Oban's ~2,900 jobs/sec, roughly twice as fast. This matters when jobs are added from request handlers or event callbacks, with each request enqueueing a single job.

Concurrent Single Job Insertion

[Chart] Jobs per second, concurrent single job insertion (50,000 jobs, 10 concurrent inserters): BullMQ 17,700 vs Oban 11,200

When 10 processes insert jobs simultaneously (simulating multiple request handlers):

  • BullMQ: ~17,700 jobs/sec
  • Oban: ~11,200 jobs/sec

BullMQ is ~57% faster. Both scale well with concurrency—BullMQ from 5.8K to 17.7K (3.1x), Oban from 2.9K to 11.2K (3.9x).

Bulk Job Insertion

[Chart] Jobs per second, bulk insert of 50,000 jobs (batches of 1,000): BullMQ 51,400 vs Oban 36,800

For sequential bulk inserts with 1,000-job batches, BullMQ wins: 51.4K jobs/sec vs 36.8K jobs/sec (+40%).
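With Oban, a batched bulk insert looks roughly like this (reusing the placeholder worker from earlier; each chunk becomes one multi-row INSERT):

```elixir
# Bulk-insert 50,000 jobs in 1,000-job batches via Oban.insert_all/1.
1..50_000
|> Enum.map(fn i -> MyApp.Mailer.new(%{email: "user#{i}@example.com"}) end)
|> Enum.chunk_every(1_000)
|> Enum.each(fn batch -> Oban.insert_all(batch) end)
```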

Concurrent Bulk Job Insertion

[Chart] Jobs per second, concurrent bulk insert (50,000 jobs, batches of 1,000, 10 concurrent inserters): BullMQ 63,400 vs Oban 89,600

Here's where Oban shines. When 10 processes do bulk inserts simultaneously:

  • BullMQ: ~63.4K jobs/sec
  • Oban: ~89.6K jobs/sec

Oban is 41% faster at concurrent bulk inserts. PostgreSQL's connection pool efficiently parallelizes multiple bulk INSERT statements, each running in its own transaction. Redis pipelines, while fast, still serialize through a single-threaded event loop.

Batch size still matters:

| Batch Size | BullMQ | Oban | Winner |
| --- | --- | --- | --- |
| 100 | 46.3K | 57.4K | Oban (+24%) |
| 250 | 54.4K | 63.7K | Oban (+17%) |
| 500 | 58.6K | 57.8K | Tie |
| 1,000 | 57.0K | 51.0K | BullMQ (+12%) |
| 2,000 | 57.3K | 40.4K | BullMQ (+42%) |

Smaller batches favor Oban thanks to PostgreSQL's efficient transaction handling. Larger batches favor BullMQ: PostgreSQL's per-batch overhead (WAL writes, index updates) compounds as batches grow, while Redis's per-job cost stays flat.

Note: PostgreSQL has a hard limit of 65,535 bind parameters per query, which caps Oban's batch size at roughly 7,000 jobs depending on column count (for example, nine bound columns per row gives 65,535 / 9 ≈ 7,281 rows).

Job Processing (10ms work)

[Chart] Jobs per second, processing with 10ms simulated work (1 worker, concurrency=100): BullMQ 8,300 vs Oban 4,400

Each test uses a single BullMQ Worker (or a single Oban queue) with a matching connection pool size. We tested two concurrency levels to show how throughput scales:

| Concurrency | BullMQ | Oban | Difference |
| --- | --- | --- | --- |
| 10 | 911 jobs/sec | 523 jobs/sec | BullMQ +74% |
| 100 | 8,300 jobs/sec | 4,400 jobs/sec | BullMQ +88% |

BullMQ scales from 911 to 8,300 jobs/sec (9.1x) when going from 10 to 100 concurrent processors; Oban scales from 523 to 4,400 (8.4x). Both scale well, but BullMQ maintains its lead at every level.

To put the 100-concurrency numbers in context: with 10ms of work per job, the theoretical maximum is 10,000 jobs/sec (100 × 100 jobs/sec). BullMQ achieves 83% of theoretical max; Oban achieves 44%.

The difference is queue overhead — time spent fetching jobs, updating state, and marking completion. BullMQ's lower overhead means more time doing actual work.
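The simulated work itself is trivial. On the Oban side, the benchmark job might look like this (a sketch; the module name is ours):

```elixir
# Benchmark job: 10ms of simulated I/O-bound work, then success.
defmodule Bench.SleepWorker do
  use Oban.Worker, queue: :bench

  @impl Oban.Worker
  def perform(%Oban.Job{}) do
    Process.sleep(10)
    :ok
  end
end
```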

CPU-Bound Processing

[Chart] Jobs per second, CPU-bound work with 1,000 sin/cos operations per job (1 worker, concurrency=100): BullMQ 24,300 vs Oban 6,800

To measure throughput with lightweight but real CPU work, each job performs 1,000 sin/cos calculations (~1ms of CPU time). This uses 1 worker with a matching connection pool:

| Concurrency | BullMQ | Oban | Difference |
| --- | --- | --- | --- |
| 10 | 12,400 jobs/sec | 1,200 jobs/sec | BullMQ +944% |
| 100 | 24,300 jobs/sec | 6,800 jobs/sec | BullMQ +257% |

The gap is dramatic — especially at concurrency 10, where BullMQ is nearly 10x faster. Since each job only takes ~1ms of CPU time, the bottleneck is almost entirely queue overhead: how fast the system can dequeue, track, and mark jobs complete. This is where BullMQ's Redis pipelines and atomic Lua scripts shine.

BullMQ scales from 12.4K to 24.3K (2x) going from 10 to 100 concurrent processors, peaking at 25.8K jobs/sec. Oban scales from 1.2K to 6.8K (5.7x) — a steeper curve that suggests its per-job overhead dominates at lower concurrency.

These numbers scale with concurrency and hardware. On a 12-core machine like the M2 Pro used here, the BEAM VM schedules lightweight processes across all available cores automatically, so adding more concurrent processors translates directly into higher throughput — until Redis or network I/O becomes the bottleneck.
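For reference, the per-job CPU work amounts to something like this (our reconstruction, not the benchmark's exact code):

```elixir
# ~1ms of real CPU work per job: 1,000 sin/cos operations.
defmodule Bench.CpuWorker do
  use Oban.Worker, queue: :bench

  @impl Oban.Worker
  def perform(%Oban.Job{}) do
    Enum.reduce(1..1_000, 0.0, fn i, acc ->
      acc + :math.sin(i) * :math.cos(i)
    end)

    :ok
  end
end
```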

Pure Queue Overhead

[Chart] Jobs per second, minimal work per job measuring raw queue overhead (1 worker, concurrency=100): BullMQ 25,600 vs Oban 7,100

To isolate the queue machinery itself, we ran a test where each job does essentially nothing — just enqueue, dequeue, and mark complete. This measures the raw overhead of the queue system:

| Concurrency | BullMQ | Oban | Difference |
| --- | --- | --- | --- |
| 10 | 14,600 jobs/sec | 1,200 jobs/sec | BullMQ +1106% |
| 100 | 25,600 jobs/sec | 7,100 jobs/sec | BullMQ +262% |

With no job work to amortize, this test exposes the full cost of each queue round-trip. BullMQ peaks at 27.2K jobs/sec — the ceiling for single-worker throughput on this hardware. At concurrency 10, BullMQ is over 12x faster than Oban, showing how Redis's in-memory operations and pipelined commands minimize per-job overhead compared to PostgreSQL's disk-based query cycle.

Poll-Free Architecture

Both BullMQ and Oban use blocking commands to wait for jobs, but the implementations differ significantly.

How BullMQ fetches jobs:

BullMQ uses a marker-based system with BZPOPMIN. Instead of blocking directly on job lists, workers block on a separate "marker" ZSET. When jobs are added, a marker with a timestamp is pushed to this ZSET:

  • When jobs are available: BZPOPMIN returns immediately with a marker → worker runs "moveToActive" Lua script → fetches the next job based on priority, delay, rate limiting, etc.
  • When idle: BZPOPMIN blocks until a marker arrives or timeout.

This unified mechanism handles standard jobs, priority jobs, delayed jobs, and rate-limited jobs through the same code path. The timestamp on the marker tells workers when the next job should be processed—for immediate jobs it's 0, for delayed jobs it's the scheduled time.
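A sketch of that marker convention using raw Redis commands via the Redix client (member values are illustrative; the real Lua scripts manage these keys internally):

```elixir
{:ok, conn} = Redix.start_link(host: "localhost", port: 6379)

# Immediate job: score 0 tells a woken worker to fetch right away.
Redix.command(conn, ["ZADD", "bull:emails:marker", "0", "0"])

# Delayed job: the score is the scheduled Unix time in milliseconds.
run_at = System.os_time(:millisecond) + 5_000
Redix.command(conn, ["ZADD", "bull:emails:marker", Integer.to_string(run_at), "1"])

# Worker side: block (up to 5s) until any marker exists, pop the one with
# the lowest score, then decide from its timestamp when to run moveToActive.
Redix.command(conn, ["BZPOPMIN", "bull:emails:marker", "5"], timeout: 6_000)
```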

The Elixir architectural win:

In Node.js BullMQ, each worker process maintains its own Redis connection for blocking operations. With 100 idle workers:

  • 100 Node.js workers idle = 100 Redis connections blocking on BZPOPMIN

In Elixir BullMQ, a single coordinator process manages all concurrent job processors:

  • 1 blocking connection for BZPOPMIN waits (shared by all job processors)
  • Shared connection pool for job operations (moveToActive, complete, etc.)

When idle, that's 100x fewer blocking connections vs Node.js. During active processing, workers share a configurable connection pool rather than each holding dedicated connections. This architecture scales efficiently—at thousands of workers across many queues, the connection savings are substantial.
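Conceptually, the coordinator pattern has this shape (not the library's actual implementation; names and supervision details are ours):

```elixir
defmodule Bench.Coordinator do
  use GenServer

  def start_link(opts), do: GenServer.start_link(__MODULE__, opts)

  @impl true
  def init(opts) do
    # The single blocking connection shared by all job processors.
    {:ok, conn} = Redix.start_link(host: "localhost", port: 6379)
    send(self(), :wait)
    {:ok, %{conn: conn, queue: Keyword.fetch!(opts, :queue)}}
  end

  @impl true
  def handle_info(:wait, %{conn: conn, queue: queue} = state) do
    # Block until a marker arrives (or a 5s timeout elapses), then fan the
    # work out to a lightweight BEAM process. Job operations (moveToActive,
    # complete) would go through a shared pool, not this connection.
    case Redix.command(conn, ["BZPOPMIN", "bull:#{queue}:marker", "5"], timeout: 6_000) do
      {:ok, nil} -> :idle_timeout
      {:ok, _marker} -> Task.start(fn -> :process_one_job end)
    end

    send(self(), :wait)
    {:noreply, state}
  end
end
```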

BEAM Advantage: Effortless Horizontal Scaling

The BEAM VM was designed for distributed systems. Elixir nodes can connect to each other and communicate seamlessly—this is what powers Phoenix's real-time features across clusters.

For BullMQ Elixir, scaling is straightforward:

  1. Spin up more BEAM nodes - each runs its own BullMQ worker coordinator
  2. Point them at the same Redis - that's it
  3. Redis handles work distribution - jobs are automatically distributed across all workers

No load balancers, no orchestration layer, no coordination overhead. Each node's workers compete fairly for jobs through Redis's atomic operations. Need more throughput? Add another node; with Kubernetes or a similar orchestrator, increasing the replica count grows processing capacity roughly linearly. If your load eventually saturates Redis, you can move to a larger instance, or split jobs across smaller queues and take advantage of Redis Cluster.

Why does BullMQ win at processing?

BullMQ's architecture minimizes round-trips. When a worker fetches a job, processes it, and marks it complete, BullMQ batches these operations into efficient Redis pipelines. The Lua scripts that run in Redis are executed atomically with minimal overhead.

PostgreSQL, by contrast, requires more round-trips: fetch the job, mark it executing, then mark it completed. Each operation is a separate query, and PostgreSQL's MVCC and WAL machinery add overhead that Redis avoids by keeping everything in memory.

Why does Oban win at concurrent bulk insert?

PostgreSQL's bulk INSERT is remarkably efficient. A single INSERT INTO ... VALUES (row1), (row2), ... statement handles thousands of rows with:

  • One transaction
  • One WAL write
  • Batched index updates

When multiple processes do bulk inserts simultaneously, PostgreSQL's connection pool runs each INSERT in parallel across separate connections. Each transaction runs independently, and PostgreSQL can parallelize the work across CPU cores.

Redis, by contrast, is single-threaded. BullMQ's Redis pipelines batch commands to minimize network round-trips, but Redis still processes each job's Lua script sequentially—even when multiple clients send pipelines simultaneously. The pipelines queue up and execute one at a time.
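For example, a Redix pipeline sends many commands in a single round-trip, but Redis still works through them one at a time (illustrative snippet):

```elixir
{:ok, conn} = Redix.start_link(host: "localhost", port: 6379)

# 1,000 commands, one network round-trip; Redis executes them sequentially
# on its single event-loop thread.
commands = for i <- 1..1_000, do: ["LPUSH", "bull:emails:wait", Integer.to_string(i)]
{:ok, _replies} = Redix.pipeline(conn, commands)
```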

For sequential bulk inserts, BullMQ's low per-job overhead wins. For concurrent bulk inserts, PostgreSQL's parallelism wins.

Durability trade-offs

Oban gives you PostgreSQL's ACID guarantees out of the box. Jobs are on disk, transactions are atomic, and you get all the durability you'd expect.

BullMQ's durability depends on how you configure Redis. With AOF persistence and appendfsync always, you get similar guarantees but at a slight performance cost. Most deployments use appendfsync everysec as a reasonable middle ground—which is what we used in these benchmarks.

If losing a second of jobs during a Redis crash is unacceptable, Oban is the safer choice. If you need the throughput and can tolerate that risk (or have Redis replication), BullMQ makes sense.

Conclusions

The benchmarks show both libraries have strengths in different scenarios:

BullMQ excels at:

  • Single job insertion (~100% faster, ~57% faster with 10 concurrent inserters)
  • Job processing throughput (~88% faster for I/O work, up to ~257% faster for CPU-bound jobs)
  • Pure queue overhead (up to 12x faster at low concurrency, ~3.6x faster at high concurrency)
  • Sequential bulk inserts (40% faster at 1000-job batches)

Oban excels at:

  • Concurrent bulk inserts (41% faster with 10 concurrent inserters)
  • Small batch inserts (up to ~500 jobs per batch)

The concurrent bulk insert result is notable: when multiple processes bulk-insert simultaneously, PostgreSQL's connection pool parallelizes these operations efficiently, while Redis's single-threaded event loop serializes them.

BullMQ makes sense when:

  • Processing throughput matters. Up to ~257% faster processing means lower latency and better resource utilization.
  • You want features without paying extra. Rate limiting, job flows, and real-time events are all open source.
  • You're already running Redis. No new infrastructure to manage.
  • Your stack spans multiple languages. The same queues work across Node.js, Python, PHP, and Elixir. Add jobs from your Python ML service, and process them in Elixir.
  • You value battle-tested code. BullMQ Elixir runs the same Lua scripts that power thousands of Node.js deployments. Years of edge cases already handled.

Oban makes sense when:

  • You want everything in PostgreSQL. One database, one backup strategy, one ops story.
  • Strict durability is non-negotiable. ACID guarantees out of the box with no configuration.
  • You're in the Elixir/Python ecosystem. Oban now supports both, with tight Ecto and Phoenix integration for Elixir.
  • You do concurrent bulk inserts. Multiple processes bulk-inserting jobs in parallel is where Oban shines.

Both are solid choices. Oban has excellent documentation and a strong community. BullMQ brings faster processing, cross-platform reach, and a decade of production hardening.


Methodology Notes

To ensure fair comparison:

  • PostgreSQL pool_size was set to 150 connections to match concurrency needs (earlier versions of this benchmark used pool_size=20, which unfairly bottlenecked Oban)
  • 50,000 jobs per test ensures tests run long enough (several seconds) for meaningful measurements
  • 5 runs per test, mean result reported
  • Same machine for all tests, with Redis and PostgreSQL both running locally
  • Default configurations for both libraries where possible

The benchmark code is open source and available on GitHub; we welcome feedback and improvements.