Job Types & Scheduling

Model multiple job kinds in one queue using a discriminated union (Redux-style type/payload objects). This keeps handlers type-safe and easy to extend.

type EmailJob = { type: 'send-email'; payload: { to: string; subject: string } };
type ReindexJob = { type: 'reindex'; payload: { index: string } };
type JobPayload = EmailJob | ReindexJob;

const queue = new Queue<JobPayload>({ redis, namespace: 'app' });

await queue.add({ groupId: 'user:1', data: { type: 'send-email', payload: { to: 'a@b.com', subject: 'Hi' } } });
await queue.add({ groupId: 'tenant:42', data: { type: 'reindex', payload: { index: 'products' } } });

new Worker({
  queue,
  async handler(job) {
    switch (job.data.type) {
      case 'send-email':
        return await sendEmail(job.data.payload);
      case 'reindex':
        return await reindex(job.data.payload.index);
    }
  },
}).run();

Tip: Use groupId to serialize domains that must not overlap (per user/account/order), while letting other groups process in parallel.
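
Because the handler switches on job.data.type, adding a default branch with a never check turns any unhandled job kind into a compile-time error. A small self-contained sketch (assertNever and describe are illustrative helpers, not part of the queue API):

```typescript
type EmailJob = { type: 'send-email'; payload: { to: string; subject: string } };
type ReindexJob = { type: 'reindex'; payload: { index: string } };
type JobPayload = EmailJob | ReindexJob;

// If a new member is added to JobPayload but not handled in the switch below,
// the `default` branch no longer receives `never` and compilation fails.
function assertNever(x: never): never {
  throw new Error(`Unhandled job type: ${JSON.stringify(x)}`);
}

function describe(job: JobPayload): string {
  switch (job.type) {
    case 'send-email':
      return `email to ${job.payload.to}`;
    case 'reindex':
      return `reindex index ${job.payload.index}`;
    default:
      return assertNever(job);
  }
}
```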

Delay execution with delay, or schedule a job for an absolute time with runAt.

// Delay by 2 seconds
await queue.add({ groupId: 'user:1', data: { type: 'send-email', payload: { to: 'a@b.com', subject: 'Hi' } }, delay: 2000 });

// Run at a specific timestamp
await queue.add({ groupId: 'user:2', data: { type: 'reindex', payload: { index: 'products' } }, runAt: Date.now() + 5_000 });

Behavior:

  • Delayed jobs become eligible when their time is reached and are then scheduled respecting per‑group FIFO and orderMs (if provided).
  • Workers handle promotion automatically; no separate polling code is needed.
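
Conceptually, delay and runAt resolve to the same thing: an absolute timestamp at which the job becomes eligible. That equivalence can be sketched as follows (eligibleAt is an illustrative helper, not part of the queue API):

```typescript
// Illustrative only: resolve either scheduling option to an absolute
// "eligible at" timestamp, as the queue does internally.
function eligibleAt(opts: { delay?: number; runAt?: number }, now = Date.now()): number {
  if (opts.runAt !== undefined) return opts.runAt; // absolute time wins
  if (opts.delay !== undefined) return now + opts.delay;
  return now; // no option given: eligible immediately
}
```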

You can change when a delayed job will run.

const job = await queue.add({ groupId: 'user:1', data: { type: 'send-email', payload: { to: 'a@b.com', subject: 'Hi' } }, delay: 60_000 });

// Bring it forward to run ASAP (0 means no delay)
await job.changeDelay(0);

// Or push it back by 30 seconds
await job.changeDelay(30_000);

Alternatively, if you only have the job ID:

await queue.changeDelay('job-id-here', 30_000);

Create repeating jobs with a fixed interval or a cron pattern.

// Every 5 seconds
await queue.add({ groupId: 'cron', data: { type: 'reindex', payload: { index: 'products' } }, repeat: { every: 5000 } });

// Cron pattern (every day at midnight)
await queue.add({ groupId: 'cron', data: { type: 'send-email', payload: { to: 'ops@x.com', subject: 'Daily report' } }, repeat: { pattern: '0 0 * * *' } });

Repeating jobs are materialized by a distributed scheduler that runs as part of the worker’s maintenance cycle:

  1. Worker scheduler checks for due repeat jobs every schedulerIntervalMs (default: 1000ms)
  2. Distributed lock (schedulerLockTtlMs, default: 1500ms) ensures only one worker processes the scheduler at a time
  3. When a repeat is due, it’s enqueued as a regular job with a fresh job ID
  4. The next occurrence is scheduled automatically
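
For fixed-interval repeats, step 4 amounts to computing the next tick from the repeat's start time. One common strategy, which skips missed ticks rather than firing them in a burst, can be sketched as follows (the library's actual catch-up behavior may differ):

```typescript
// Illustrative: next occurrence for a `repeat: { every }` definition.
// Missed ticks are skipped; the result is always strictly in the future
// relative to `now` (or the start time if it hasn't arrived yet).
function nextOccurrence(startedAt: number, every: number, now: number): number {
  if (now < startedAt) return startedAt;
  const elapsed = now - startedAt;
  const ticks = Math.floor(elapsed / every) + 1;
  return startedAt + ticks * every;
}
```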

The effective minimum interval for repeating jobs is controlled by two settings:

schedulerIntervalMs — how often the worker attempts to run the scheduler. Default: 1000ms (1 second).

new Worker({
  queue,
  schedulerIntervalMs: 1000, // Check every 1s
  async handler(job) { /* ... */ },
}).run();

schedulerLockTtlMs — the distributed lock TTL that prevents multiple workers from running the scheduler simultaneously. This is the actual bottleneck for fast repeats.

  • Default: 1500ms (1.5 seconds)
  • Minimum practical repeat interval: ~1.5-2 seconds
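
The TTL bounds the repeat rate because a worker that just ran the scheduler effectively holds the lock until it expires, so no worker can run the scheduler again sooner. An in-memory model of the idea (the real lock lives in Redis, typically acquired with SET NX PX; this class is only an illustration):

```typescript
// Illustrative in-memory model of a TTL lock like the scheduler lock.
class TtlLock {
  private expiresAt = 0;

  // Returns true if the lock was acquired; false while a previous
  // acquisition's TTL has not yet expired.
  tryAcquire(ttlMs: number, now: number): boolean {
    if (now < this.expiresAt) return false; // still held
    this.expiresAt = now + ttlMs;
    return true;
  }
}
```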

For sub-second repeats (e.g., every 500ms), you need to configure both:

// ⚠️ NOT RECOMMENDED for production - use sparingly
const queue = new Queue({
  redis,
  namespace: 'fast-queue',
  schedulerLockTtlMs: 50, // Allow fast lock acquisition
});

new Worker({
  queue,
  schedulerIntervalMs: 10, // Check every 10ms
  cleanupIntervalMs: 100,  // Run cleanup every 100ms
  async handler(job) { /* ... */ },
}).run();

await queue.add({
  groupId: 'fast-cron',
  data: { task: 'tick' },
  repeat: { every: 100 }, // Every 100ms - now possible!
});

⚠️ Warning: Very fast repeating jobs (< 1 second) increase Redis load and coordination overhead. Use them only when absolutely necessary and in controlled environments (e.g., testing, specialized real-time processing).

Stop a repeating job by calling removeRepeatingJob with the same identifier and repeat options used to create it:

await queue.removeRepeatingJob('cron', { every: 5000 });
// or
await queue.removeRepeatingJob('cron', { pattern: '0 0 * * *' });

Notes:

  • Default settings work well for most use cases (1+ second intervals).
  • Ensure at least one worker is running to materialize repeats.
  • The original orderMs is preserved for jobs created from a repeat definition.
  • Use standard cron patterns for time-of-day scheduling (e.g., '0 9 * * 1-5' for 9 AM on weekdays).

Updating an existing job’s data payload is not currently supported.

Recommended patterns:

  • Store canonical data in your database and place lightweight references (IDs) in the job payload.
  • For changes to timing, use changeDelay. For content changes, enqueue a new job.
  • If you rely on jobId for idempotence, remember re‑adding with the same jobId will de‑duplicate; use a new jobId for a new version.
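
The jobId point can be made concrete by encoding a version into the ID: re-adding the same version de-duplicates, while bumping the version enqueues a fresh job for the updated content. A hypothetical helper (not part of the library):

```typescript
// Hypothetical helper: the same (entity, version) pair always yields the
// same jobId, so exact re-adds de-duplicate; bumping the version produces
// a new ID and therefore a new job.
function versionedJobId(entity: string, version: number): string {
  return `${entity}:v${version}`;
}
```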

Planned: native job data update API.