Performance Benchmarks
Real-world benchmark results comparing GroupMQ and BullMQ performance over time. All benchmarks are run locally on a MacBook M2 with identical settings to ensure fair comparisons.
Understanding the Results
Throughput (jobs/sec)
More jobs processed per second.
Higher throughput means more jobs can be processed per second. GroupMQ is optimized for group-based processing while maintaining high throughput.
Pickup Time
How quickly workers grab jobs.
This measures how quickly a worker picks up a job after it's enqueued. Lower pickup times indicate better responsiveness and less queuing overhead.
Processing Time
Time spent executing job logic.
The actual time spent executing your job handler. This should be roughly equal for both systems, since it depends only on your job logic rather than on the queue implementation.
Total Time
End-to-end job latency.
End-to-end latency from when a job is added to the queue until it's completed. This includes pickup time + processing time + any queue overhead.
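The relationship between these three metrics can be sketched with hypothetical timestamps. The field names below are illustrative, not GroupMQ's actual job schema:

```javascript
// Hypothetical timestamps (ms since epoch) recorded for a single job;
// the field names are illustrative, not GroupMQ's actual job schema.
const job = {
  enqueuedAt: 1_700_000_000_000,  // job added to the queue
  pickedUpAt: 1_700_000_000_012,  // worker received the job
  completedAt: 1_700_000_000_095, // handler finished
};

const pickupTime = job.pickedUpAt - job.enqueuedAt;      // queue responsiveness
const processingTime = job.completedAt - job.pickedUpAt; // your handler's work
const totalTime = job.completedAt - job.enqueuedAt;      // end-to-end latency

console.log({ pickupTime, processingTime, totalTime });
```

Total time is always pickup time plus processing time (plus any queue overhead not captured by these two timestamps).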
Benchmark Settings
All benchmarks are run with:
- Hardware: MacBook M2 (Apple Silicon)
- Redis: Local Redis instance (6.2+)
- Node.js: Latest LTS version
- Job Workload: CPU-bound tasks (small computation) or I/O-bound tasks
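To make the two workload types concrete, here is a minimal sketch of what each kind of job handler might look like. Neither function is the benchmark tool's actual implementation:

```javascript
// CPU-bound: a small synchronous computation that keeps the event loop busy.
function cpuJob(iterations = 10_000) {
  let sum = 0;
  for (let i = 0; i < iterations; i++) sum += Math.sqrt(i);
  return sum;
}

// I/O-bound: simulated latency (e.g. a network or disk call) via a timer,
// which yields the event loop while waiting.
function ioJob(delayMs = 25) {
  return new Promise((resolve) => setTimeout(() => resolve('done'), delayMs));
}
```

CPU-bound jobs block the event loop, which is why the multi-process flag matters for them; I/O-bound jobs spend most of their time waiting, so a single process can overlap many of them.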
Running Your Own Benchmarks
Want to run benchmarks yourself? Use the benchmark tool included in the repository:
# CPU-bound workload
npm run benchmark -- --mq groupmq --jobs 500 --workers 4 --job-type cpu --multi-process
# Compare with BullMQ
npm run benchmark -- --mq bullmq --jobs 500 --workers 4 --job-type cpu --multi-process
Available Options
- --mq <bullmq|groupmq>: Queue implementation to benchmark
- --jobs <n>: Number of jobs to process (default: 100)
- --workers <n>: Number of workers (default: 4)
- --job-type <cpu|io>: Type of job workload (default: cpu)
- --multi-process: Use separate processes for workers
- --output <file>: Save results to JSON file
Performance Tips
Based on these benchmarks, here are some recommendations:
- Use multi-process workers for CPU-bound workloads to leverage multiple cores
- Adjust worker count based on your workload; more workers aren't always better
- Monitor pickup times; consistently high pickup times may indicate too many concurrent jobs
- Consider batching for high-throughput scenarios
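The batching recommendation can be sketched as client-side chunking: collect payloads and submit them in groups instead of one round trip per job. The `addBulk` call below is a hypothetical stand-in for your queue's bulk-add API, not a confirmed GroupMQ method:

```javascript
// Split an array of payloads into fixed-size batches.
function chunk(items, size) {
  const batches = [];
  for (let i = 0; i < items.length; i += size) {
    batches.push(items.slice(i, i + size));
  }
  return batches;
}

// Enqueue payloads in batches; `queue.addBulk` is a hypothetical
// stand-in for your queue library's bulk-add API.
async function enqueueInBatches(queue, payloads, batchSize = 100) {
  for (const batch of chunk(payloads, batchSize)) {
    await queue.addBulk(batch.map((data) => ({ name: 'job', data })));
  }
}
```

Batching reduces per-job Redis round trips, which is usually the dominant overhead at high enqueue rates.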
For more optimization strategies, see our Performance Tips guide.