Laravel queue:work --max-jobs & --max-time: Complete Queue Guide (2026)

Quick Answer

Use --max-jobs and --max-time together in production to prevent memory leaks by periodically restarting workers:

php artisan queue:work --max-jobs=500 --max-time=3600

--max-jobs=500 exits after processing 500 jobs. --max-time=3600 exits after 1 hour. Supervisor or Horizon restarts the worker automatically. This is the recommended production setup.

If your Laravel app is sending emails, generating PDFs, calling third-party APIs, or processing uploads synchronously — users are waiting for all of that before they get a response. Queues fix this. They let you push slow work into the background and return a response immediately, making your app feel instant even when it's doing heavy lifting behind the scenes.

This guide covers everything from basic setup to production-grade queue management — including the exact queue:work flags you need to prevent memory leaks in production.

1. How Laravel Queues Work

The concept is simple: instead of doing work now, you push it onto a queue and a background worker picks it up and processes it asynchronously. The HTTP request finishes immediately, and the user never waits.

Step 1
Create a Job
A job class holds the logic you want to run in the background
Step 2
Dispatch It
Push the job onto a queue driver (Redis, database, SQS)
Step 3
Worker Runs It
A queue worker process picks up the job and executes it
Step 4
Monitor & Retry
Failed jobs are stored and can be retried via Horizon or Artisan
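The four steps above can be sketched as a tiny simulation. The Python snippet below is purely illustrative (it is not Laravel code; the dict-based "job" and the helper names are invented for the sketch): a job is dispatched onto a queue, a worker pops and runs it, and exhausted retries land in a failed list.

```python
from collections import deque

queue = deque()        # pending jobs (a Redis list in a real setup)
failed_jobs = []       # stand-in for the failed_jobs table

def dispatch(job):
    """Step 2: push the job onto the queue instead of running it now."""
    queue.append(job)

def work_once(max_tries=3):
    """Steps 3 and 4: a worker pops one job, runs it, retries on failure."""
    job = queue.popleft()
    for attempt in range(1, max_tries + 1):
        try:
            job["handle"]()
            return "processed"
        except Exception as exc:
            if attempt == max_tries:
                failed_jobs.append({"job": job["name"], "error": str(exc)})
                return "failed"

# Step 1: a "job" is just a named unit of work with a handle() callable
dispatch({"name": "SendWelcomeEmail", "handle": lambda: None})
print(work_once())  # → processed (the HTTP request never waited for this)
```

The point of the sketch: the dispatcher and the worker share nothing but the queue, which is exactly why the web request can return immediately.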

2. Setting Up Redis as Your Queue Driver

Laravel supports several queue drivers — database, Redis, Amazon SQS, and others. For production, Redis is the right choice. It's in-memory, extremely fast, and integrates perfectly with Laravel Horizon.

# Install the Redis PHP client
composer require predis/predis

# .env — switch queue driver to Redis
QUEUE_CONNECTION=redis
REDIS_HOST=127.0.0.1
REDIS_PASSWORD=null
REDIS_PORT=6379
Tip: You can use the native phpredis extension instead of predis for better performance — it's a compiled C extension rather than a PHP library. On Debian/Ubuntu, install it with sudo apt install php-redis.

3. Creating Your First Job

Use Artisan to generate a job class. Let's create one that sends a welcome email after a user registers — a classic use case that should never run synchronously.

php artisan make:job SendWelcomeEmail
<?php

namespace App\Jobs;

use App\Models\User;
use App\Mail\WelcomeMail;
use Illuminate\Bus\Queueable;
use Illuminate\Contracts\Queue\ShouldQueue;
use Illuminate\Foundation\Bus\Dispatchable;
use Illuminate\Queue\InteractsWithQueue;
use Illuminate\Queue\SerializesModels;
use Illuminate\Support\Facades\Mail;

class SendWelcomeEmail implements ShouldQueue
{
    use Dispatchable, InteractsWithQueue, Queueable, SerializesModels;

    // Retry up to 3 times if the job fails
    public int $tries = 3;

    // Timeout in seconds — kill the job if it takes too long
    public int $timeout = 60;

    public function __construct(
        public readonly User $user
    ) {}

    public function handle(): void
    {
        Mail::to($this->user->email)
            ->send(new WelcomeMail($this->user));
    }

    // Called when all retries are exhausted
    public function failed(\Throwable $exception): void
    {
        \Log::error("SendWelcomeEmail failed for user {$this->user->id}: {$exception->getMessage()}");
    }
}
Important: Always implement ShouldQueue on jobs you want to run asynchronously. Without it, the job runs synchronously even if you dispatch it — a common gotcha that's easy to miss.
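One way to picture the gotcha: ShouldQueue acts as a marker interface, and dispatch checks for it. This Python analogy (not Laravel internals; the classes and helper here are invented for the sketch) shows how leaving the marker off silently turns the dispatch synchronous:

```python
class ShouldQueue:
    """Marker 'interface': its presence alone decides async vs sync."""

pending = []  # stand-in for the queue backend

def dispatch(job):
    if isinstance(job, ShouldQueue):
        pending.append(job)      # queued: a worker runs it later
        return "queued"
    job.handle()                 # no marker: runs inline, user waits
    return "ran synchronously"

class SendWelcomeEmail(ShouldQueue):
    def handle(self): pass

class ForgotTheInterface:
    def handle(self): pass

print(dispatch(SendWelcomeEmail()))    # → queued
print(dispatch(ForgotTheInterface()))  # → ran synchronously
```

Both calls "succeed", which is why the missing interface is so easy to ship without noticing.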

4. Dispatching Jobs

// Dispatch immediately to the queue
SendWelcomeEmail::dispatch($user);

// Delay the job — run it 5 minutes from now
SendWelcomeEmail::dispatch($user)->delay(now()->addMinutes(5));

// Send to a specific queue name
SendWelcomeEmail::dispatch($user)->onQueue('emails');

// Dispatch synchronously (bypass the queue — useful for testing)
SendWelcomeEmail::dispatchSync($user);

// Dispatch only if a condition is true
SendWelcomeEmail::dispatchIf($user->wants_emails, $user);

5. queue:work Flags — --max-jobs, --max-time & More

This is the most important section for production. Getting these flags right prevents memory leaks and keeps your queues healthy long-term.

Flag | What it does | Recommended value
-----|--------------|------------------
--max-jobs | Exit after processing N jobs | 500 (use in production)
--max-time | Exit after N seconds | 3600, i.e. 1 hour (use in production)
--memory | Exit if memory usage exceeds N MB | 256
--sleep | Seconds to sleep when no jobs are available | 3
--tries | Max attempts before a job is marked failed | 3
--timeout | Seconds before a running job is killed | 60
--queue | Which queues to process, in priority order | critical,default
# Development only — no flags needed
php artisan queue:work

# Production best practice — always use both flags
php artisan queue:work --max-jobs=500 --max-time=3600

# Full production command with all recommended flags
php artisan queue:work redis \
  --queue=critical,default \
  --max-jobs=500 \
  --max-time=3600 \
  --memory=256 \
  --sleep=3 \
  --tries=3 \
  --timeout=60
Why use both --max-jobs and --max-time? --max-jobs restarts after a job count — good for high-traffic queues. --max-time restarts after a time window — catches memory leaks even on low-traffic queues that process few jobs. Together they cover both scenarios. Supervisor or Horizon automatically restarts the worker after each exit.
# --max-time=3600: exit after 3600 seconds (1 hour)
# Worker finishes its current job first, then exits gracefully
php artisan queue:work --max-time=3600

# --max-jobs=500: exit after processing 500 jobs
# Supervisor restarts the worker automatically
php artisan queue:work --max-jobs=500

# --once: process exactly one job, then exit (for cron-based setups)
php artisan queue:work --once
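The exit logic behind those two flags amounts to a simple loop condition. As a rough sketch (not Laravel's actual worker source; run_worker and its parameters are invented for illustration), a worker keeps a job counter and a start time, and exits as soon as either limit is crossed:

```python
import time

def run_worker(get_job, max_jobs=500, max_time=3600, clock=time.monotonic):
    """Process jobs until either limit is reached, then exit so the
    process manager (Supervisor/Horizon) can start a fresh worker."""
    started = clock()
    processed = 0
    while True:
        if processed >= max_jobs:
            return "max_jobs"          # high-traffic queues hit this first
        if clock() - started >= max_time:
            return "max_time"          # low-traffic queues hit this first
        job = get_job()                # poll the queue backend
        if job is not None:
            job()                      # run one job
            processed += 1
        # a real worker would otherwise sleep for --sleep seconds
```

Whichever limit trips first, the outcome is the same: a clean exit and a fresh process with a fresh memory footprint.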

6. Keeping Workers Running with Supervisor

In production, you need a process manager to keep your queue workers alive when they exit after --max-jobs or --max-time. Supervisor is the standard solution on Linux servers.

# Install Supervisor
sudo apt install supervisor

# Create config: /etc/supervisor/conf.d/laravel-worker.conf
[program:laravel-worker]
process_name=%(program_name)s_%(process_num)02d
command=php /var/www/html/artisan queue:work redis --sleep=3 --tries=3 --max-jobs=500 --max-time=3600
autostart=true
autorestart=true
stopasgroup=true
killasgroup=true
user=www-data
numprocs=4
redirect_stderr=true
stdout_logfile=/var/www/html/storage/logs/worker.log
stopwaitsecs=3600
# Load the new config and start the workers
sudo supervisorctl reread
sudo supervisorctl update
sudo supervisorctl start laravel-worker:*

The numprocs=4 setting runs 4 parallel worker processes. With autorestart=true, Supervisor automatically restarts each worker after it exits from --max-jobs or --max-time.

7. Installing and Using Laravel Horizon

Horizon is the official Redis queue dashboard from the Laravel team. It replaces manual Supervisor config with code-driven worker configuration and gives you a real-time dashboard for monitoring everything.

# Install Horizon
composer require laravel/horizon
php artisan horizon:install
php artisan migrate
// config/horizon.php
'environments' => [
    'production' => [
        'supervisor-1' => [
            'maxProcesses' => 10,
            'balanceMaxShift' => 1,
            'balanceCooldown' => 3,
        ],
        'emails' => [
            'connection' => 'redis',
            'queue' => ['emails'],
            'balance' => 'auto',
            'minProcesses' => 1,
            'maxProcesses' => 5,
            'tries' => 3,
        ],
    ],
    'local' => [
        'supervisor-1' => [
            'maxProcesses' => 3,
        ],
    ],
],
# Start Horizon
php artisan horizon

# Check status
php artisan horizon:status

# Gracefully restart after deployment
php artisan horizon:terminate
Note: Laravel Horizon requires the Redis queue driver. It does not work with the database, SQS, or other drivers. Make sure QUEUE_CONNECTION=redis is set in your .env before installing.
Security: Protect the Horizon dashboard in production. Configure the gate in app/Providers/HorizonServiceProvider.php to restrict access to admin users only.

8. Handling Failed Jobs

# Create the failed_jobs table
php artisan queue:failed-table
php artisan migrate

# List all failed jobs
php artisan queue:failed

# Retry a specific failed job by ID
php artisan queue:retry 5

# Retry all failed jobs
php artisan queue:retry all

# Clear all failed jobs
php artisan queue:flush
class SendWelcomeEmail implements ShouldQueue
{
    public int $tries = 5;

    // Wait 1 min, then 5 min, then 10 min between retries
    public function backoff(): array
    {
        return [60, 300, 600];
    }

    public function failed(\Throwable $exception): void
    {
        \Log::error("Job failed: {$exception->getMessage()}");
    }
}
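Laravel walks that array one value per retry, and the documented behavior when attempts outrun the array is to keep reusing the final value. A small Python model of that lookup (illustrative only; retry_delay is an invented helper, not framework code):

```python
def retry_delay(backoff, attempt):
    """Delay in seconds before retry number `attempt` (1-based).
    Past the end of the list, the last value is reused."""
    return backoff[min(attempt, len(backoff)) - 1]

schedule = [60, 300, 600]
print([retry_delay(schedule, n) for n in (1, 2, 3, 4, 5)])
# → [60, 300, 600, 600, 600]
```

So with $tries = 5 and the three-value backoff above, retries 4 and 5 both wait 10 minutes.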

9. Queue Best Practices for Production

Always use --max-jobs and --max-time together

Never run queue:work in production without these flags. PHP processes accumulate memory over time — restarting workers periodically is the simplest way to keep memory usage stable.

Use dedicated queues by priority

// Assign jobs to specific queues
SendPasswordReset::dispatch($user)->onQueue('critical');
SendWelcomeEmail::dispatch($user)->onQueue('emails');
GenerateMonthlyReport::dispatch()->onQueue('reports');

// Worker processes critical first, then emails, then reports
php artisan queue:work --queue=critical,emails,reports
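Priority order simply means the worker always polls the first non-empty queue in the list you pass. A Python sketch of that polling rule (illustrative only; next_job is an invented helper, not Laravel code):

```python
def next_job(queues, order):
    """Return (queue_name, job) from the first non-empty queue in
    priority order: how --queue=critical,emails,reports behaves."""
    for name in order:
        if queues.get(name):
            return name, queues[name].pop(0)
    return None, None

queues = {"critical": [], "emails": ["welcome#1"], "reports": ["april"]}
print(next_job(queues, ["critical", "emails", "reports"]))
# → ('emails', 'welcome#1')
```

Note the implication: as long as the critical queue has jobs, reports are starved, which is usually exactly what you want for password resets versus monthly exports.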

Always restart workers after deployment

# Add to your deployment script
php artisan queue:restart      # for queue:work + Supervisor
php artisan horizon:terminate  # for Horizon

10. Conclusion

The most important takeaway for production: always use --max-jobs and --max-time together with Supervisor or Horizon to keep workers healthy and memory usage stable. It's a small config change that prevents the most common production queue issues.

Start with the database driver locally, move to Redis in production, and add Horizon once you need visibility into what's happening. For related reading, check out the Laravel Performance Optimization guide — the natural next step after queues are working.

Need Help Setting Up Laravel Queues?

I've implemented Redis queue architecture in production SaaS platforms, payroll systems, and e-commerce APIs. Happy to help you design the right queue setup for your application's scale and requirements.

Based in Bangladesh · Remote worldwide · Fast turnaround

About the Author

Kamruzzaman Polash — Software Engineer specialising in Laravel, REST APIs, and scalable backend systems. 10+ projects delivered for clients worldwide.