Service deployment clears my Redis queue
ilvalerione
PRO · OP

2 months ago

I have a Laravel application that uses a Dragonfly (Redis-compatible) instance for its queue and background jobs. Every time I release an update, the messages in the queue are cleared when Railway performs the deployment.

In the Railway documentation, the suggested pre-deployment script includes the optimize:clear command, which causes this behaviour.

#!/bin/bash

# Make sure this file has executable permissions, run chmod +x railway/init-app.sh

# Exit the script if any command fails
set -e

# Run migrations
php artisan migrate --force

# Clear cache
php artisan optimize:clear

# Cache the various components of the Laravel application
php artisan config:cache
php artisan event:cache
php artisan route:cache
php artisan view:cache

I changed this script, as suggested by the Laravel documentation (https://laravel.com/docs/12.x/deployment#optimization), to a simple run of the optimize command:

#!/bin/bash

# Make sure this file has executable permissions, run chmod +x railway/init-app.sh

# Exit the script if any command fails
set -e

# Run migrations
php artisan migrate --force

# Cache
php artisan optimize

But the problem persists. Every time a new deployment is triggered, the entire queue is cleared and pending jobs are lost.

I also tried running the cache commands individually, without optimize, but the jobs continue to be cleared after every deployment.

php artisan config:cache
php artisan event:cache
php artisan route:cache
php artisan view:cache

Any suggestions on how to fix this problem?

Solved · $10 Bounty

Pinned Solution

douefranck
FREE

2 months ago

Hey, so from what I can see, the issue is that your cache and queue are probably using the same Dragonfly database. When Laravel clears the cache it runs FLUSHDB, which wipes the entire database, including your queue jobs.

The fix is to use separate databases for cache and queue. In your config/database.php, add a queue connection with a different database number (for example database 2), and in config/queue.php make sure the redis connection uses that queue connection instead of default. Same for the cache: give it database 1 in config/cache.php.

This way, when the cache gets cleared, it only flushes its own database and leaves your queue alone. This is a well-known Laravel pitfall, and separate databases are the standard solution.
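In config terms, the setup described above could look roughly like this. This is a sketch, not the poster's exact files: it assumes default Laravel 12 config keys and the phpredis client, and the connection names (cache, queue) and env variable names (REDIS_CACHE_DB, REDIS_QUEUE_DB) are illustrative. Dragonfly speaks the Redis protocol, so the standard Redis config applies unchanged.

```php
// config/database.php (excerpt) -- give cache and queue their own database numbers.
'redis' => [
    'client' => env('REDIS_CLIENT', 'phpredis'),

    'default' => [
        'host' => env('REDIS_HOST', '127.0.0.1'),
        'password' => env('REDIS_PASSWORD'),
        'port' => env('REDIS_PORT', '6379'),
        'database' => env('REDIS_DB', '0'),
    ],

    // Cache lives in database 1, so FLUSHDB on it cannot touch the queue.
    'cache' => [
        'host' => env('REDIS_HOST', '127.0.0.1'),
        'password' => env('REDIS_PASSWORD'),
        'port' => env('REDIS_PORT', '6379'),
        'database' => env('REDIS_CACHE_DB', '1'),
    ],

    // Queue jobs live in database 2, isolated from cache flushes.
    'queue' => [
        'host' => env('REDIS_HOST', '127.0.0.1'),
        'password' => env('REDIS_PASSWORD'),
        'port' => env('REDIS_PORT', '6379'),
        'database' => env('REDIS_QUEUE_DB', '2'),
    ],
],

// config/cache.php (excerpt) -- point the redis store at the 'cache' connection.
'redis' => [
    'driver' => 'redis',
    'connection' => 'cache',
],

// config/queue.php (excerpt) -- point the redis driver at the 'queue' connection.
'redis' => [
    'driver' => 'redis',
    'connection' => 'queue',
    'queue' => env('REDIS_QUEUE', 'default'),
    'retry_after' => 90,
],
```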

doue

4 Replies

ilvalerione
PRO · OP

2 months ago

The attached image shows the effect: all pending jobs in the Dragonfly queue are flushed.

Attachments


Railway
BOT

2 months ago

This thread has been marked as public for community involvement, as it does not contain any sensitive or personal information. Any further activity in this thread will be visible to everyone.

Status changed to Open · Railway · about 2 months ago




ilvalerione
PRO · OP

2 months ago

It finally solved the problem for the jobs. But it's only a workaround, because the real problem is that I don't want to run cache:clear at all. That's why I excluded that command from my pre-deployment script.

I dug deeper into the Railway deployment logs and found something strange: Railway executes commands during deployment on its own.

Basically, it runs the pre-deployment script associated with the service, and then runs additional commands on its own...

Here are the logs:

Starting Container
INFO Configuration cache cleared successfully.
INFO Nothing to migrate.
INFO Cached events cleared successfully.
INFO Route cache cleared successfully.
INFO Compiled views cleared successfully.
INFO Configuration cached successfully.
INFO Events cached successfully.
INFO Routes cached successfully.
INFO Blade templates cached successfully.
Stopping Container

Starting Container
Running migrations and seeding database ...
INFO Nothing to migrate.
config ......................................................... 4.08ms DONE
INFO The [public/storage] link has been connected to [storagepublic].
INFO Clearing cached bootstrap files.
routes ......................................................... 0.94ms DONE
views .......................................................... 8.60ms DONE
cache .............................................................. 1s DONE
events ......................................................... 0.94ms DONE
compiled ....................................................... 1.04ms DONE
INFO Caching framework bootstrap, configuration, and metadata.
config ........................................................ 27.12ms DONE
events ......................................................... 1.60ms DONE
routes ........................................................ 45.25ms DONE
views ......................................................... 45.62ms DONE
Starting Laravel server ...

In these logs you can see two start/stop container sessions. The first shows the output of the pre-deployment script; the second shows commands I don't have in my pre-deployment script, including a cache clear.

Am I missing something? Can I get support from the Railway team?


douefranck
FREE

2 months ago

From your logs I can see that Railway is definitely running a cache clear in that second container session, but I honestly don't know why, or where that's configured. I didn't find any Railway documentation explaining it.

The good news is that your separate-databases solution is already working and your jobs are no longer being lost. That's the actual fix to your problem.

If you want to know why Railway runs those extra commands, you should ask Railway support or open a new thread asking specifically about that deployment behavior, with those logs attached. They would know their own deployment process.

But your main issue is solved: jobs are safe now with separate databases.

doue
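Once the separate connections are in place, the isolation can be sanity-checked from php artisan tinker. This is a hypothetical sketch: 'queue' is the dedicated connection name suggested above in config/database.php, and 'sentinel' is just a throwaway key.

```php
use Illuminate\Support\Facades\Cache;
use Illuminate\Support\Facades\Redis;

// Write a throwaway key into the queue database.
Redis::connection('queue')->set('sentinel', '1');

// Flush the cache store. With separate databases this only hits the cache DB.
Cache::flush();

// With the fix, the sentinel survives; on a shared database it would be gone.
Redis::connection('queue')->get('sentinel');
```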


Status changed to Solved · brody · about 2 months ago

