Scaling workers
GroupMQ ensures only one active job per `groupId` at a time, while allowing many groups to be processed concurrently.
Concurrency options
GroupMQ now supports two approaches for handling multiple jobs:
1) Single-process concurrency
You can now set `concurrency: N` on a Worker to process up to N jobs concurrently within a single process. This is useful for I/O-bound workloads where jobs spend most of their time waiting on external services.
```typescript
const worker = new Worker({
  queue,
  concurrency: 4, // Process up to 4 jobs at once
  async handler(job) {
    await fetch('https://api.example.com/process', {
      method: 'POST',
      body: JSON.stringify(job.data),
    });
  },
});
```
When to use concurrency:
- I/O-bound jobs (HTTP requests, database queries, file operations)
- Jobs that don’t compete for CPU resources
- You want to maximize throughput without spawning multiple processes
Limitations:
- Still limited by Node.js’s single-threaded nature for CPU-bound work
- All jobs share the same event loop and memory space
- CPU-intensive jobs can block the event loop for other concurrent jobs
- Memory leaks in one job affect the entire process
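To build intuition for what a `concurrency: N` worker loop does, here is a minimal sketch of a bounded-concurrency pump. This is an illustration only, not GroupMQ’s actual implementation; `runWithConcurrency` and its return value (the peak number of jobs in flight) are invented for the example:

```typescript
// Illustration of bounded concurrency: keep up to `limit` handlers in
// flight at once, starting a new job whenever one finishes.
async function runWithConcurrency<T>(
  jobs: T[],
  limit: number,
  handler: (job: T) => Promise<void>,
): Promise<number> {
  let inFlight = 0;
  let peak = 0; // highest number of jobs running at the same time
  const pending = [...jobs];

  await new Promise<void>((resolve) => {
    const pump = (): void => {
      if (pending.length === 0 && inFlight === 0) {
        resolve();
        return;
      }
      // Start jobs until we hit the limit or run out of work
      while (inFlight < limit && pending.length > 0) {
        const job = pending.shift()!;
        inFlight++;
        peak = Math.max(peak, inFlight);
        handler(job).finally(() => {
          inFlight--;
          pump(); // a slot freed up; try to start the next job
        });
      }
    };
    pump();
  });

  return peak;
}

// Eight I/O-bound jobs, at most four running concurrently
const peak = await runWithConcurrency(
  Array.from({ length: 8 }, (_, i) => i),
  4,
  async () => {
    await new Promise((r) => setTimeout(r, 10));
  },
);
console.log(peak); // 4
```

Because the jobs only wait on a timer (standing in for I/O), four of them overlap cleanly on a single event loop; CPU-bound handlers would not benefit the same way.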
2) Process-level scaling
For CPU-intensive workloads or when you need true parallelism, scale across multiple processes. Each process gets its own CPU core and memory space. Common approaches:
- Spawn multiple processes (child processes)
- Use Node.js Cluster mode
- Use PM2 (cluster mode)
- Run multiple containers/replicas (Kubernetes, Docker Compose, etc.)
Each process should create its own Redis connection, `Queue`, and `Worker`, then call `worker.run()`. Group-level FIFO ordering is preserved across processes.
When to use process-level scaling:
- CPU-intensive jobs that benefit from true parallelism
- Jobs that need memory isolation
- Production deployments with high availability requirements
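The cross-process FIFO guarantee described above can be illustrated with a small simulation. GroupMQ enforces this via Redis; the sketch below stands in for that with an in-memory per-group lock, and the `Job`, `workerLoop`, and lock names are invented for the example:

```typescript
// Two simulated "worker processes" race for jobs. A group's jobs can
// only be claimed while no other worker holds that group's lock, so
// each group still completes in FIFO order.
type Job = { groupId: string; seq: number };

const locks = new Set<string>(); // stands in for Redis-held group locks
const pending: Job[] = [
  { groupId: 'a', seq: 1 },
  { groupId: 'a', seq: 2 },
  { groupId: 'b', seq: 1 },
  { groupId: 'b', seq: 2 },
];
const completed: Job[] = [];

async function workerLoop(): Promise<void> {
  while (pending.length > 0) {
    // Claim the oldest job whose group is not currently locked
    const idx = pending.findIndex((j) => !locks.has(j.groupId));
    if (idx === -1) {
      await new Promise((r) => setTimeout(r, 1)); // all groups busy; wait
      continue;
    }
    const [job] = pending.splice(idx, 1);
    locks.add(job.groupId);
    await new Promise((r) => setTimeout(r, Math.random() * 5)); // do work
    completed.push(job);
    locks.delete(job.groupId);
  }
}

await Promise.all([workerLoop(), workerLoop()]); // two "processes"
console.log(completed.filter((j) => j.groupId === 'a').map((j) => j.seq)); // [ 1, 2 ]
```

However the two loops interleave, `seq: 2` of a group can never start before `seq: 1` of that group finishes, which is the property that makes multi-process scaling safe.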
Child processes
```typescript
// Parent process: spawn worker processes
import { fork } from 'node:child_process';

const instances = 4;
for (let i = 0; i < instances; i++) {
  fork('./worker.js');
}
```

```typescript
// worker.js — each process creates its own connection, queue, and worker
import Redis from 'ioredis';
import { Queue, Worker } from 'groupmq';

const redis = new Redis('redis://localhost:6379', { maxRetriesPerRequest: null });
const queue = new Queue<{ id: string; ms: number }>({ redis, namespace: 'app' });

const worker = new Worker({
  queue,
  async handler(job) {
    await new Promise((r) => setTimeout(r, job.data.ms));
  },
});

worker.run();
```
Node Cluster
```typescript
import cluster from 'node:cluster';
import os from 'node:os';
import Redis from 'ioredis';
import { Queue, Worker } from 'groupmq';

const instances = process.env.WORKERS ? Number(process.env.WORKERS) : os.cpus().length;

if (cluster.isPrimary) {
  // Primary process only forks workers; it does no queue work itself
  for (let i = 0; i < instances; i++) cluster.fork();
} else {
  const redis = new Redis('redis://localhost:6379', { maxRetriesPerRequest: null });
  const queue = new Queue<{ id: string; ms: number }>({ redis, namespace: 'app' });
  new Worker({
    queue,
    async handler(job) {
      await new Promise((r) => setTimeout(r, job.data.ms));
    },
  }).run();
}
```
PM2 (cluster)
```shell
pm2 start worker.js -i max
# or a fixed number
pm2 start worker.js -i 4
```
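The same settings can also live in a PM2 ecosystem file, which is easier to keep in version control. A minimal sketch (the app name is a placeholder; adjust `script` and `instances` to your setup):

```javascript
// ecosystem.config.js — start with `pm2 start ecosystem.config.js`
module.exports = {
  apps: [
    {
      name: 'groupmq-worker', // placeholder name
      script: './worker.js',
      instances: 4,           // or 'max' for one per CPU core
      exec_mode: 'cluster',
    },
  ],
};
```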
Containers / Replicas
Run multiple replicas of the same worker image. Each replica is an isolated process with its own `Worker` instance. All replicas connect to the same Redis and cooperate safely.
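As one sketch of this, a Docker Compose file can declare the replica count directly (service and image names below are placeholders):

```yaml
# docker-compose.yml — service and image names are placeholders
services:
  worker:
    image: my-app/worker:latest
    deploy:
      replicas: 4              # four isolated worker processes
    environment:
      REDIS_URL: redis://redis:6379
  redis:
    image: redis:7
```

Alternatively, `docker compose up --scale worker=4` achieves the same without the `deploy` block.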