# BullMQ Elixir vs Oban Performance Benchmark
With the release of BullMQ for Elixir, we ran some benchmarks against Oban, the most popular job queue in the Elixir ecosystem. The comparison is interesting because they use different backends: BullMQ uses Redis, Oban uses PostgreSQL.
## About BullMQ Elixir
While BullMQ Elixir is new to the ecosystem, it's built on years of battle-tested code from the Node.js BullMQ library. The Elixir implementation is a thin layer on top of the same Lua scripts that power the Node.js version, scripts that have been refined over years of production use across thousands of deployments.
The heavy lifting happens in Redis via these Lua scripts. The Elixir code handles connection management, job serialization, and the worker lifecycle, but the core queue logic is shared with Node.js.
We've also ported the comprehensive test suite from Node.js, so despite being young in the Elixir world, BullMQ Elixir has solid test coverage.
These are still the early days for BullMQ in Elixir, and we're actively working to squeeze more performance out of it. We expect these numbers to improve as we optimize the Elixir-specific code paths. We're also planning benchmarks for distributed Elixir deployments, running workers across multiple nodes, which is where things get really interesting. Stay tuned.
## What Each Library Does Under the Hood
When you add a job in Oban, it runs a single SQL `INSERT` (or a batch `INSERT` for bulk operations). The job lands in an `oban_jobs` table. Simple and straightforward.
BullMQ does more per job. Each insert runs a Lua script that atomically:
- Generates a unique job ID
- Stores job data in a Redis hash
- Adds the job to the right data structure (LIST for waiting, ZSET for delayed/prioritized)
- Checks for duplicates
- Handles parent-child dependencies if you're using flows
- Publishes events for real-time subscribers
- Updates rate limiting state if configured
More work per job, but more features baked in.
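The sequence above can be sketched with plain data structures standing in for the Redis keys. This is a conceptual model of what the add-job script does, not BullMQ's actual Lua or API — names like `add_job` are illustrative:

```python
# Conceptual sketch of BullMQ's add-job steps, with Python structures
# standing in for Redis keys. In Redis, all of this runs inside one Lua
# script, so the whole sequence is atomic.

store = {
    "next_id": 0,   # counter behind an INCR
    "jobs": {},     # one HASH per job id
    "waiting": [],  # LIST of ready job ids
    "delayed": [],  # ZSET modeled as a sorted list of (run_at_ms, id)
    "seen": set(),  # deduplication ids
}

def add_job(name, data, delay_ms=0, dedup_id=None, now_ms=0):
    if dedup_id is not None and dedup_id in store["seen"]:
        return None                                 # duplicate: skip insert
    store["next_id"] += 1
    job_id = str(store["next_id"])                  # 1) generate unique id
    store["jobs"][job_id] = {"name": name, "data": data}  # 2) store job hash
    if delay_ms > 0:                                # 3) pick the structure
        store["delayed"].append((now_ms + delay_ms, job_id))
        store["delayed"].sort()
    else:
        store["waiting"].append(job_id)
    if dedup_id is not None:
        store["seen"].add(dedup_id)
    return job_id

add_job("email", {"to": "a@example.com"})
add_job("report", {}, delay_ms=5000)
add_job("email", {"to": "a@example.com"}, dedup_id="welcome-1")
```

The real script also wires up parent-child dependencies, publishes events, and touches rate-limit state, but the shape is the same: several keys updated in one atomic step.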
## Feature Comparison
| Feature | BullMQ | Oban |
|---|---|---|
| Delayed Jobs | ✅ | ✅ |
| Priority Queues | ✅ | ✅ |
| Deduplication | ✅ | ✅ |
| Rate Limiting | ✅ | 💰 Pro |
| Flows/Dependencies | ✅ | 💰 Pro |
| Real-time Events | ✅ | ❌ |
| Cron/Schedulers | ✅ | ✅ |
| Backend | Redis | PostgreSQL |
| Cross-language | ✅ Node.js, Python, PHP, Elixir | Elixir, Python |
Rate limiting and job flows are included in BullMQ's open-source version. Oban requires the paid Pro tier for these.
## Benchmark Environment
- MacBook Pro M2 Pro, 16GB RAM
- Redis 7.x (Docker, localhost)
- PostgreSQL 16 (standalone, localhost)
- Elixir 1.18 / OTP 27
- BullMQ 1.2.5, Oban 2.20.3
We ran benchmarks with Redis in two configurations:
- AOF enabled (`appendfsync everysec`): comparable durability to PostgreSQL
- AOF disabled: pure in-memory, maximum throughput
## Results

### Bulk Job Insertion
At 10K jobs, Oban is slightly faster. PostgreSQL's batch INSERT is well optimized for this range.
At 50K jobs, Oban still has a slight edge.
At 100K jobs, BullMQ is 40% faster. As batch size grows, PostgreSQL slows down due to WAL writes, index updates, and transaction overhead. Redis pipelines maintain more consistent throughput at scale.
### Single Job Insertion
For one-at-a-time inserts, BullMQ hits 6,500 jobs/sec vs Oban's 3,900 jobs/sec. That's 1.7x faster. This matters when you're adding jobs from request handlers or event callbacks.
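To put that in request-handler terms, the measured rates invert to an average per-call latency (simple arithmetic on the numbers above):

```python
# Back-of-envelope: convert single-insert throughput into the average
# per-call latency a request handler would see.
bullmq_rate = 6_500  # jobs/sec
oban_rate = 3_900    # jobs/sec

bullmq_ms = 1000 / bullmq_rate  # ≈ 0.15 ms per insert
oban_ms = 1000 / oban_rate      # ≈ 0.26 ms per insert
print(f"BullMQ: {bullmq_ms:.3f} ms, Oban: {oban_ms:.3f} ms")
print(f"Ratio: {oban_ms / bullmq_ms:.2f}x")
```

A tenth of a millisecond per call is negligible in isolation, but it compounds when every request enqueues a job.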
### Job Processing
Processing throughput with 100 concurrent workers and zero-work jobs (pure queue overhead):
- BullMQ: 21,800 jobs/sec
- Oban: ~9,200 jobs/sec
BullMQ is about 2.4x faster at processing. With actual work (say, 1ms per job), both libraries handle it fine, but the overhead difference means BullMQ can sustain higher throughput before the queue becomes the bottleneck.
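A rough way to see when the queue becomes the bottleneck: sustained throughput is capped both by the queue's dispatch rate (the zero-work numbers above) and by how fast the workers can execute the actual work. A simplified model, ignoring contention and variance:

```python
# Bottleneck model: sustained throughput is the minimum of the queue's
# dispatch rate and the workers' combined execution rate.
def sustained_rate(queue_rate, workers, work_time_s):
    worker_rate = workers / work_time_s if work_time_s > 0 else float("inf")
    return min(queue_rate, worker_rate)

workers = 100
work_ms = 1.0  # 1 ms of real work per job
for name, queue_rate in [("BullMQ", 21_800), ("Oban", 9_200)]:
    rate = sustained_rate(queue_rate, workers, work_ms / 1000)
    print(f"{name}: {rate:,.0f} jobs/sec")
```

Under this model, with 100 workers the queue stops being the bottleneck for BullMQ once per-job work exceeds roughly 100/21,800 ≈ 4.6 ms; for Oban the crossover is around 10.9 ms.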
## Why the Gap at Scale?
BullMQ batches thousands of Lua script executions into a single network round-trip. The complexity of each script varies: fetching from a standard queue is O(1), but priority jobs use sorted sets (O(log n)), and operations like completing jobs depend on how many jobs you keep in those sets. For most workloads with reasonable retention, this stays fast.
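To see why batching round-trips matters, here's a rough cost model (the latency and per-operation numbers are illustrative, not measurements): total insert time is network round-trips plus per-operation server cost.

```python
import math

# Rough cost model for pipelining: total time = round-trips * latency
# plus per-operation server cost. Numbers below are illustrative.
def insert_time_ms(jobs, rtt_ms, per_op_us, batch_size):
    round_trips = math.ceil(jobs / batch_size)
    return round_trips * rtt_ms + jobs * per_op_us / 1000

jobs = 100_000
rtt_ms, per_op_us = 0.5, 10  # 0.5 ms round-trip, 10 µs of server work per op

print(insert_time_ms(jobs, rtt_ms, per_op_us, batch_size=1))      # → 51000.0 ms
print(insert_time_ms(jobs, rtt_ms, per_op_us, batch_size=1_000))  # → 1050.0 ms
```

With one operation per round-trip, the network dominates; pipelined into batches of 1,000, the same workload is ~50x faster and the per-operation server cost becomes the limiting factor.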
PostgreSQL bulk inserts grow in cost differently. More data means more WAL writes, more index updates, larger transactions. The overhead compounds as batch size increases.
It's not that PostgreSQL is slow—it's optimized for different workloads. ACID compliance and disk durability have costs that Redis sidesteps by keeping everything in memory.
## Durability Trade-offs
Oban gives you PostgreSQL's ACID guarantees out of the box. Jobs are on disk, transactions are atomic, and you get all the durability you'd expect.
BullMQ's durability depends on how you configure Redis. With AOF persistence and `appendfsync always`, you get similar guarantees but at a performance cost. Most deployments use `appendfsync everysec` as a reasonable middle ground, which is what we used in these benchmarks.
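For reference, the AOF configuration used in the AOF-enabled runs maps to two standard `redis.conf` directives:

```
# redis.conf — AOF persistence as used in the AOF-enabled benchmark runs
appendonly yes
appendfsync everysec   # fsync once per second; "always" trades throughput for durability
```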
If losing a second of jobs during a Redis crash is unacceptable, Oban is the safer choice. If you need the throughput and can tolerate that risk (or have Redis replication), BullMQ makes sense.
## Which One?
BullMQ makes sense when:
- Throughput matters. 2x faster processing and better scaling at high volumes.
- You want features without paying extra. Rate limiting, job flows, and real-time events are all open source.
- You're already running Redis. No new infrastructure to manage.
- Your stack spans multiple languages. The same queues work across Node.js, Python, PHP, and Elixir. Add jobs from your Python ML service, process them in Elixir—no problem.
- You value battle-tested code. BullMQ Elixir runs the same Lua scripts that power thousands of Node.js deployments. Years of edge cases already handled.
Oban makes sense when:
- You want everything in PostgreSQL. One database, one backup strategy, one ops story.
- Strict durability is non-negotiable. ACID guarantees out of the box with no configuration.
- You're in the Elixir/Python ecosystem. Oban now supports both, with tight Ecto and Phoenix integration for Elixir.
Both are solid choices. Oban has excellent documentation and a strong community. BullMQ brings raw performance, cross-platform reach, and a decade of production hardening.
The benchmark code is available in the BullMQ repository on GitHub.