Background jobs that scale

The open source message queue for Redis™, trusted by thousands of companies processing billions of jobs every day. Available for Node.js, Bun, Python, Elixir, and PHP.

myQueue.ts
import { Queue } from "bullmq";

const queue = new Queue("Paint");
await queue.add(
  "cars",
  { color: "blue" },
  { delay: 30000 }
);
my_queue.py
from bullmq import Queue

queue = Queue("Paint")
await queue.add(
    "cars",
    {"color": "blue"},
    {"delay": 30000}
)
my_queue.ex
{:ok, job} = BullMQ.Queue.add(
  "Paint", "cars", %{color: "blue"},
  connection: :redis,
  delay: 30_000
)
my_queue.php
use BullMQ\Queue;

$queue = new Queue("Paint");
$job = $queue->add(
    "cars",
    ["color" => "blue"],
    ["delay" => 30000]
);
myWorker.ts
import { Worker } from 'bullmq';

const worker = new Worker('Paint', async job => {
  if (job.name === 'cars') {
    await paintCar(job.data.color);
  }
}, { concurrency: 100 });
my_worker.py
from bullmq import Worker

async def process(job, token):
    if job.name == "cars":
        await paint_car(job.data["color"])

worker = Worker("Paint", process, {
    "concurrency": 100
})
my_worker.ex
defmodule MyWorker do
  def process(%BullMQ.Job{name: "cars", data: data}) do
    paint_car(data["color"])
    {:ok, %{painted: true}}
  end
end

{:ok, _worker} = BullMQ.Worker.start_link(
  queue: "Paint",
  connection: :redis,
  processor: &MyWorker.process/1,
  concurrency: 100
)

Trusted by teams worldwide

Production Ready

Powers video transcoding, AI pipelines, payment processing, and millions of background jobs at companies worldwide since 2011.

Multi-Language

Available for Node.js, Bun, Python, Elixir, and PHP — use the same queues across your entire stack.

Truly Open Source

MIT licensed, without any artificial limitations on the number of workers or concurrency.

Built for speed

Queue over 250,000 jobs per second.*


* Benchmarks performed with DragonflyDB. See adding jobs and processing jobs benchmarks.

Scales Horizontally

Run thousands of workers across unlimited servers with minimal configuration.

Redis & Beyond

Works with Redis, Valkey, DragonflyDB, AWS ElastiCache, Upstash and more

Battle Tested

Over 10M monthly downloads and a decade of production use

Schedule jobs for later

Process jobs at a specific time or after a delay. Perfect for reminders, scheduled emails, or any time-sensitive task.

  • Millisecond precision timing
  • Survives server restarts
  • Timezone-aware scheduling
Timeline: Now → Send email (+5 min) → Reminder (+4 hours) → Report (+1 day)

Recurring jobs made easy

Create job factories that produce jobs on a schedule. Use cron expressions, fixed intervals, or custom patterns. Perfect for recurring tasks, reports, and maintenance jobs.

  • Cron expressions
  • Fixed intervals
  • Upsert semantics
  • Job templates
daily-report (cron 0 9 * * *): Yesterday 9:00 → Today 9:00 → Tomorrow 9:00 → +2 days

Failures are temporary

Jobs automatically retry with exponential backoff. Configure attempts, delays, and custom backoff strategies.

  • Exponential backoff
  • Custom retry strategies
  • Dead letter queues
Attempt 1 → 2 → 3: automatic retry with backoff

Complex dependencies

Create parent-child job relationships with unlimited nesting depth. Build complex hierarchies where children run in parallel and parents wait for all dependencies to complete.

  • Unlimited nesting depth
  • Parallel execution
  • Failure propagation strategies
  • Result aggregation
Flow: Ship Order → Warehouse (Pick, Pack, Label) and Notify (Email, SMS)

Protect your APIs

Safeguard external services with rate limiting per queue or group. Deduplicate jobs to implement debounce and throttle patterns.

  • Per-second/minute/hour rate limits
  • Deduplication for debounce & throttle
  • Group-based rate limiting (Pro)
Rate Limit (max N/t) → Throttle (1 per t)

One queue, any language

Write producers in Node.js, consumers in Python. Mix and match across your entire stack. Perfect for microservices and polyglot environments.

Node.js
Bun
Python
Elixir
PHP

See it in action

Get a feel for how BullMQ works with these code snippets. Check out the documentation for more.

events.ts
import { QueueEvents } from "bullmq";

const queueEvents = new QueueEvents("Paint");

queueEvents.on("completed", ({ jobId }) => {
  console.log(`Job ${jobId} completed`);
});

queueEvents.on("failed", ({ jobId, failedReason }) => {
  console.log(`Job ${jobId} failed: ${failedReason}`);
});
events.py
from bullmq import QueueEvents

queue_events = QueueEvents("Paint")

async def on_completed(job, result):
    print(f"Job {job['jobId']} completed")

async def on_failed(job, err):
    print(f"Job {job['jobId']} failed")

queue_events.on("completed", on_completed)
queue_events.on("failed", on_failed)
events.ex
# Events via Worker callbacks
{:ok, _worker} = BullMQ.Worker.start_link(
  queue: "Paint",
  connection: :redis,
  processor: &MyWorker.process/1,
  on_completed: fn job, result ->
    IO.puts("Job #{job.id} completed")
  end,
  on_failed: fn job, reason ->
    IO.puts("Job #{job.id} failed")
  end
)
repeatable.ts
import { Queue } from 'bullmq';

const queue = new Queue('Paint');

// Repeat job once every day at 3:15 (am)
await queue.add(
  'submarine',
  { color: 'yellow' },
  {
    repeat: {
      pattern: '0 15 3 * * *',
    },
  },
);
repeatable.py
from bullmq import Queue

queue = Queue("Paint")

# Repeat job once every day at 3:15 (am)
await queue.add(
    "submarine",
    {"color": "yellow"},
    {
        "repeat": {
            "pattern": "0 15 3 * * *"
        }
    }
)
repeatable.ex
# Repeat job once every day at 3:15 (am)
{:ok, job} = BullMQ.Queue.add(
  "Paint", "submarine", %{color: "yellow"},
  connection: :redis,
  repeat: %{pattern: "0 15 3 * * *"}
)
repeatable.php
use BullMQ\Queue;

$queue = new Queue("Paint");

// Repeat job once every day at 3:15 (am)
$job = $queue->add(
    "submarine",
    ["color" => "yellow"],
    [
        "repeat" => [
            "pattern" => "0 15 3 * * *"
        ]
    ]
);
ratelimit.ts
import { Worker } from 'bullmq';

const worker = new Worker(
  'Paint',
  async job => paintCar(job),
  {
    limiter: {
      max: 10,
      duration: 1000,
    },
  }
);
ratelimit.py
from bullmq import Worker

async def process(job, token):
    await paint_car(job)

worker = Worker("Paint", process, {
    "limiter": {
        "max": 10,
        "duration": 1000
    }
})
ratelimit.ex
{:ok, _worker} = BullMQ.Worker.start_link(
  queue: "Paint",
  connection: :redis,
  processor: fn job ->
    paint_car(job)
    {:ok, nil}
  end,
  limiter: %{max: 10, duration: 1000}
)
retry.ts
import { Queue } from 'bullmq';

const queue = new Queue('Paint');

await queue.add(
  'car',
  { color: 'pink' },
  {
    attempts: 3,
    backoff: {
      type: 'exponential',
      delay: 1000,
    },
  },
);
retry.py
from bullmq import Queue

queue = Queue("Paint")

await queue.add(
    "car",
    {"color": "pink"},
    {
        "attempts": 3,
        "backoff": {
            "type": "exponential",
            "delay": 1000
        }
    }
)
retry.ex
{:ok, job} = BullMQ.Queue.add(
  "Paint", "car", %{color: "pink"},
  connection: :redis,
  attempts: 3,
  backoff: %{type: :exponential, delay: 1000}
)
retry.php
use BullMQ\Queue;

$queue = new Queue("Paint");

$job = $queue->add(
    "car",
    ["color" => "pink"],
    [
        "attempts" => 3,
        "backoff" => [
            "type" => "exponential",
            "delay" => 1000
        ]
    ]
);
flow.ts
import { FlowProducer } from "bullmq";

const flow = new FlowProducer();

await flow.add({
  name: "Renovate",
  queueName: "cars",
  children: [
    { name: "paint", queueName: "steps" },
    { name: "engine", queueName: "steps" },
    { name: "wheels", queueName: "steps" },
  ],
});
flow.py
from bullmq import FlowProducer

flow = FlowProducer()

await flow.add({
    "name": "Renovate",
    "queueName": "cars",
    "children": [
        {"name": "paint", "queueName": "steps"},
        {"name": "engine", "queueName": "steps"},
        {"name": "wheels", "queueName": "steps"},
    ]
})
flow.ex
{:ok, job} = BullMQ.FlowProducer.add(
  %{
    name: "Renovate",
    queue_name: "cars",
    children: [
      %{name: "paint", queue_name: "steps"},
      %{name: "engine", queue_name: "steps"},
      %{name: "wheels", queue_name: "steps"}
    ]
  },
  connection: :redis
)

Powerful Dashboard

As your queues grow, you need visibility. Taskforce.sh gives you real-time insights, powerful debugging tools, and complete control over your jobs.

Real-time Monitoring

Track job throughput, latency, and queue health with live metrics and alerts.

Job Inspector

Search, filter, and debug individual jobs with detailed execution logs.

Performance Insights

Identify bottlenecks and optimize your workers with actionable analytics.

By subscribing to Taskforce.sh, you support the continued development of BullMQ.

Try Taskforce.sh

Take it to the next level

A drop-in replacement with exclusive features. Same creators, full compatibility, professional support. Scale predictably with generous licensing terms.

Grouping

Assign jobs to groups and set rules such as maximum concurrency per group and/or per-group rate limits.

Learn more

Batches

Increase efficiency by consuming your jobs in batches, a strategy that minimizes overhead and can boost throughput.

Learn more

Use Observables

Implement jobs as observables, enabling more streamlined job cancellation and improved job state management.

Learn more

Professional support

Access professional support directly from the maintainers of BullMQ, ensuring you have expert assistance when you need it.

Learn more
groups.ts
import { WorkerPro } from '@taskforcesh/bullmq-pro';

const worker = new WorkerPro('myQueue', processFn, {
  group: {
    limit: {
      // Limit to 100 jobs per second per group
      max: 100,
      duration: 1000,
    },
    // Limit to 4 concurrent jobs per group
    concurrency: 4,
  },
  connection,
});
observables.ts
import { WorkerPro } from "@taskforcesh/bullmq-pro"
import { Observable } from "rxjs"

const processor = async () => {
  return new Observable<number>(subscriber => {
    subscriber.next(1);
    subscriber.next(2);
    subscriber.next(3);
    const timeoutId = setTimeout(() => {
      subscriber.next(4);
      subscriber.complete();
    }, 500);

    // Provide a way of canceling and
    // disposing the timeout resource
    return function unsubscribe() {
      clearTimeout(timeoutId);
    };
  });
};
batches.ts
import { WorkerPro, JobPro } from '@taskforcesh/bullmq-pro';

const worker = new WorkerPro("MyQueue",
  async (job: JobPro) => {
    const batch = job.getBatch();

    for (let i = 0; i < batch.length; i++) {
      const batchedJob = batch[i];
      await doSomethingWithBatchedJob(batchedJob);
    }
  }, { connection, batches: { size: 10 } });

Currently available for Node.js and Bun. More language support coming soon.

Simple, fair pricing

BullMQ is MIT Licensed. Support the project and unlock Pro features.

Standard

Per-deployment license*

$1,395 /year**

or $139/month

  • Groups
  • Batches
  • Observables
  • Professional Support
Get Started

Embedded

For product integration

Custom

Redistribution rights

  • All Enterprise Features
  • Embed in your products
  • Sell to your customers
Contact Us

* A deployment means a single, distinct operational environment (Kubernetes cluster, server, VM, etc.) connecting to one or more Redis instances.

** Standard license available for organizations with fewer than 100 employees. Larger organizations require Enterprise license.