16 days ago
Hi Railway Support Team,
My deployment has crashed for the second time in the production environment without any changes made to the application.
Error details:
- "multirun: one or more of the provided commands ended abnormally"
- Server was already running (pid: 29, file: /app/tmp/pids/server.pid)
- Rails 7.1.5.2 application
- Deployment: Chatwoot-K4EO in Hermes Project — Clinimed
Concerns:
1. This is happening without any code changes or configuration updates
2. The crashes appear to be environment-related rather than application-related
3. I'm paying for a managed service specifically to avoid having to constantly monitor and restart servers
Could you please investigate what's causing these recurring crashes and provide a solution to ensure deployment stability?
I've attached screenshots of the deploy logs and crash notification.
Thank you,
Carlos
CTO, Axisor Technologies Brasil
16 days ago
This thread has been marked as public for community involvement, as it does not contain any sensitive or personal information. Any further activity in this thread will be visible to everyone.
Status changed to Open brody • 16 days ago
16 days ago
Could you please share the current Railway 'Start Command' for your Chatwoot service?
16 days ago
To fix this, go to your Chatwoot-K4EO service in Railway, then Settings → scroll down to "Custom Start Command",
and set the "Custom Start Command" exactly to this:
multirun "rm -f tmp/pids/server.pid && bin/rails server -b 0.0.0.0 -p $PORT" "bundle exec sidekiq"
Let me know if that fixes it for you.
13 days ago
Hi bytekeim,
Thank you for your continued support on this issue.
We've implemented the Custom Start Command exactly as suggested:
multirun "rm -f tmp/pids/server.pid && bin/rails server -b 0.0.0.0 -p $PORT" "bundle exec sidekiq"
Unfortunately, the issue persists. We've also tried the following variations:
1. Using absolute path: rm -f /app/tmp/pids/server.pid
2. Removing and recreating the entire pids directory: rm -rf /app/tmp/pids && mkdir -p /app/tmp/pids
The deployment logs still show:
A server is already running (pid: 29, file: /app/tmp/pids/server.pid).
Exiting
multirun: one or more of the provided commands ended abnormally
This is followed by healthcheck failures after 14 retry attempts.
Our current hypothesis:
The PID file seems to be persisting across deployments, possibly due to volume mounting or container restart behavior. The rm -f command may be executing, but the file reappears before Rails starts, or the volume is being mounted after the cleanup command runs.
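One way we could test this hypothesis from inside the container is a small entrypoint wrapper that deletes the PID file only when the recorded process is actually dead, right before handing off to the real start command. This is just a sketch; the script name, the /app paths, and the PID_FILE variable are assumptions on our side:

```shell
#!/bin/sh
# docker-entrypoint.sh (hypothetical): clear a *stale* Rails PID file,
# leave a live one alone, then exec the container's real start command.
set -e

PID_FILE="${PID_FILE:-/app/tmp/pids/server.pid}"

clean_stale_pid() {
  [ -f "$PID_FILE" ] || return 0
  pid="$(cat "$PID_FILE")"
  # kill -0 probes whether the process exists without sending a signal
  if ! kill -0 "$pid" 2>/dev/null; then
    echo "removing stale PID file $PID_FILE (pid $pid is gone)"
    rm -f "$PID_FILE"
  fi
}

clean_stale_pid
# Hand off to whatever command the container was started with, e.g.
#   multirun "bin/rails server -b 0.0.0.0 -p $PORT" "bundle exec sidekiq"
exec "$@"
```

Running the cleanup inside the entrypoint would happen after any volume is mounted, which would sidestep the ordering problem we suspect.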
We're actively investigating:
- Volume mount timing and persistence behavior
- Alternative start command strategies
- Potential Railway-specific configuration issues
Do you have any insights on:
1. Whether the chatwoot-k4eo-volume might be causing PID file persistence?
2. If there's a pre-deploy cleanup step we should configure?
3. Any Railway-specific Chatwoot deployment best practices?
We appreciate your help and are committed to resolving this to ensure service stability.
Best regards,
Thiago
Axisor Developer Team
13 days ago
Hey Thiago,
Thanks for the update. Sucks that the custom command didn't fully nail it yet, but I think we're close. From what I've seen in other Railway setups with Chatwoot, that chatwoot-k4eo-volume is probably the culprit for the PID file sticking around. If it's mounted too broadly, like at /app or /app/tmp, it keeps temp files alive across restarts, which messes with the rm command's timing. Railway restarts don't wipe ephemeral files the way a full redeploy does, so stale PID files hang around.
First off, check your service settings and narrow the volume mount to just /app/storage – that's where Chatwoot stores attachments if you're using local storage (make sure ACTIVE_STORAGE_SERVICE=local is set in your env vars).
If you're on cloud storage like S3, ditch the volume altogether so nothing persists between deploys.
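To confirm whether the volume is really what's keeping the PID file alive, you could run a quick probe from a shell in the container. A sketch – the /app path is taken from the error log, and APP_DIR is just a variable I made up for this snippet:

```shell
#!/bin/sh
# Diagnostic sketch: is tmp/pids/server.pid present, and is anything
# volume-mounted at or above the app directory?
APP_DIR="${APP_DIR:-/app}"

probe_pid_file() {
  if [ -f "$APP_DIR/tmp/pids/server.pid" ]; then
    echo "stale PID file present: pid $(cat "$APP_DIR/tmp/pids/server.pid")"
  else
    echo "no PID file under $APP_DIR/tmp/pids"
  fi
  # Any mount at or above tmp/ means files there outlive container restarts.
  grep -F "$APP_DIR" /proc/mounts || echo "no volume mounted under $APP_DIR"
}

probe_pid_file
```

If the PID file shows up before Rails even starts, and a mount covers /app/tmp, that pretty much confirms the persistence theory.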
For the start command, try this tweak so the rm runs before multirun kicks in:
rm -f /app/tmp/pids/server.pid && multirun "bin/rails server -b 0.0.0.0 -p $PORT" "bundle exec sidekiq"
That should clean it up sequentially. If you have a Dockerfile, put the rm in the CMD for the web part too.
Long-term, I'd split this into separate services: one for web (just bin/rails server -b 0.0.0.0 -p $PORT) and one for the worker (bundle exec sidekiq).
Share the Postgres/Redis between them – that cuts down on multirun flakiness and makes restarts smoother.
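If you keep config in the repo, each service in the split setup can carry its own railway.json. Something like the fragment below for the web service – I'm writing the field names from memory of Railway's config-as-code docs, so double-check the exact schema before relying on it:

```json
{
  "$schema": "https://railway.app/railway.schema.json",
  "deploy": {
    "startCommand": "rm -f tmp/pids/server.pid && bin/rails server -b 0.0.0.0 -p $PORT"
  }
}
```

The worker service would use "startCommand": "bundle exec sidekiq" instead, with no PID cleanup needed since Sidekiq doesn't write one.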
On pre-deploy cleanup: Railway doesn't have built-in hooks, but if permissions are wonky, add a chown in your entrypoint or something. For Chatwoot best practices on Railway: stick to the official template, set RAILS_ENV=production, use cloud storage to avoid volume headaches, and monitor memory in the dashboard, because things like Gmail IMAP can spike usage and cause OOM crashes that lead to this restart loop.
Let me know if that fixes it or if the logs show something else.