8 months ago
I develop locally using a Postgres instance run via Docker.
My production environment in Railway has a Postgres service.
For each PR Railway creates a staging environment, with a Postgres db service.
Is there any way to get the state of the database in the PR environment (or maybe a dedicated staging environment) to be the same as the production environment? It'd be great to be able to test my database migrations in a staging or PR environment where my db state is identical to production, to avoid making mistakes.
Thanks as always!
8 months ago
you would want to pg_dump your prod database, and pg_restore into your PR database.
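something like this rough sketch, where DATABASE_URL is the Railway-provided variable for the PR database and PROD_DATABASE_URL is a variable you'd add yourself pointing at prod:

// db-dump-restore sketch: copy production state into the PR database.
import { execFileSync } from "node:child_process";

const prodUrl = process.env.PROD_DATABASE_URL; // assumed: set by you, not a Railway default
const prUrl = process.env.DATABASE_URL;        // Railway's default Postgres variable

if (!prodUrl || !prUrl) {
  throw new Error("PROD_DATABASE_URL and DATABASE_URL must both be set");
}

// Dump prod in custom format so pg_restore can consume it.
execFileSync("pg_dump", ["--format=custom", "--file=prod.dump", prodUrl], {
  stdio: "inherit",
});

// Restore into the PR database, dropping existing objects first.
execFileSync(
  "pg_restore",
  ["--clean", "--if-exists", "--no-owner", `--dbname=${prUrl}`, "prod.dump"],
  { stdio: "inherit" },
);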
8 months ago
Hey Brody! Thanks for the quick response.
That's helpful. I'm currently running migrations prior to building my app with Node. My builds are happening via a Dockerfile. Would you suggest doing the dump and restoration prior to the migration and build?
I asked Sonnet 3.5 and it suggested I approach it that way. Do you think this is reasonable?
8 months ago
yeah that could definitely work, but you'll need a way to make sure that code is only run for PR envs, and for that you can likely use environment overrides -
8 months ago
I see. How would I incorporate this in my Dockerfile? I get the impression the railway.json that's referenced in that link is for the Nixpacks build and not the Dockerfile, but maybe that's not the case.
Another way I thought of doing it is that I create a db-dump-restore.ts file that checks for process.env.RAILWAY_ENVIRONMENT_NAME being pr-*, and only in those scenarios executes the dump.
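Roughly along these lines (runDumpAndRestore here is just a placeholder for the actual pg_dump/pg_restore logic, and pr- is my guess at the prefix Railway uses for PR environment names):

// db-dump-restore.ts (sketch): only copy prod state in PR environments.
const envName = process.env.RAILWAY_ENVIRONMENT_NAME ?? "";

if (envName.startsWith("pr-")) {
  // PR environment: pull in production state before migrations run.
  await runDumpAndRestore(); // placeholder for the pg_dump/pg_restore steps
} else {
  console.log(`Skipping dump/restore for environment "${envName}"`);
}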
8 months ago
Also, if I can use the railway.json, do I just put this in my root folder? How would I modify this to work in addition to the start scripts I'm running in package.json?
The docs suggest:
{
  "environments": {
    "pr": {
      "deploy": {
        "startCommand": "echo 'start command for all pull requests!'"
      }
    }
  }
}
Is that startCommand something that runs prior to the start script from the package.json, which my Dockerfile refers to as the start CMD? This is now defined to first migrate, then build, like so:
"start": "pnpm db:prod:migrate && NODE_ENV=production node build"
8 months ago
the start command in the railway.json file runs whether you use nixpacks or a dockerfile, and either way, it will completely overwrite the start command
8 months ago
Ah I see. So I would make it something like: "pnpm db:prod:dumprestore && pnpm db:prod:migrate && NODE_ENV=production node build"?
8 months ago
I'd set NODE_ENV in the service variables, but yes
8 months ago
well why are you running build in your start command?
8 months ago
oh funky syntax, for your own sake, please use a full path instead of specifying a folder
8 months ago
Glad you're asking. I had some issues with NODE_ENV being properly inferred.
My Dockerfile ends with:
CMD ["pnpm", "start"]
And my start script is:
"start": "pnpm db:prod:migrate && NODE_ENV=production node build"
"db:prod:migrate": "NODE_ENV=production dotenvx run -- vite-node .drizzle/migrate.ts"
8 months ago
I believe that Railway automatically injects NODE_ENV=production, but only at runtime, so I needed to force NODE_ENV to production so that dotenvx can use the right environment file, decrypt the relevant .env.* file, and inject those env vars.
8 months ago
Hope that makes sense
8 months ago
I don't understand sorry. What do you mean?
8 months ago
use node build/index.js instead
8 months ago
It's referring to my build script in package.json, however. Isn't that correct?
"build": "dotenvx run -f .env.production -- vite build"
8 months ago
It's working.
8 months ago
Is this how I'd define the startCommand? If so I'll give that a shot!
8 months ago
I don't think it is, npm run build runs the build script, node build runs the index file within the build folder
8 months ago
Okay I will try it, thanks.
8 months ago
Please let me know about the startCommand too if you get a chance.
8 months ago
the syntax is correct, but please use an explicit path
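so the PR override could end up looking roughly like this, assuming the script names from earlier in this thread and NODE_ENV set in the service variables:

{
  "environments": {
    "pr": {
      "deploy": {
        "startCommand": "pnpm db:prod:dumprestore && pnpm db:prod:migrate && node build/index.js"
      }
    }
  }
}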
8 months ago
Got it. Thanks Brody! Will implement this.
8 months ago
I'll report back on my results.
7 months ago
I'm finally working on this and think this may work better if I use Github Actions for the db migration and pg_dump/pg_restore.
I want to ask for your input however. What's the most robust way to go about this? Currently I'm using Github PR auto-deploys on Railway, and Railway's automatically deploying based on the Dockerfile I have in my root folder.
Sonnet tells me the best way would be to separate the database migrations, backups, and restorations from the build steps, so that it's more compatible with continuous deployment and horizontal scaling. Therefore it says I should put the db migration, pg_dump, and pg_restore steps in a Github Actions workflow. Is that what you would recommend?
If so, what's the best way to access the public database URL of my postgres database in a staging environment? Do I need to call the Railway API in the Github Action to access this?
Also if I take this route, would you suggest keeping the automatic deployments via Railway's recognition of the Dockerfile (and leaving it out of the Github Action) and enabling "wait for CI", or would you recommend I rather disable that feature and call railway up at the end of the Github Actions workflow?
7 months ago
Kindly bumping this, team
7 months ago
Bumping once more. I'm kinda stuck on this!
7 months ago
please know that we aren't able to offer guaranteed response times on the Hobby plan.
7 months ago
> Do I need to call the Railway API in the Github Action to access this?

Yes, you would.

> Also if I take this route, would you suggest keeping the automatic deployments via Railway's recognition of the Dockerfile (and leaving it out of the Github Action) and enabling "wait for CI", or would you recommend I rather disable that feature and call railway up at the end of the Github Actions workflow?

I don't have a recommendation regarding this, it would be whatever you think works best for you.
7 months ago
Do you happen to have a template or an example of a Github Actions call where the dynamic database URL is requested? Specifically, the hard part would be knowing which environmentId to use in the API call.
7 months ago
Sorry, I understand. I see this is something for the Pro plan.
7 months ago
You would have to get all the environments, and then find the ID that corresponds to the applicable environment name.
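a rough sketch of that lookup, based on how I recall the public GraphQL API being shaped (double check the query against the current API docs):

// Resolve a Railway environment ID from its name via the GraphQL API.
const RAILWAY_API = "https://backboard.railway.app/graphql/v2";

async function getEnvironmentId(
  projectId: string,
  envName: string,
  token: string,
): Promise<string> {
  const res = await fetch(RAILWAY_API, {
    method: "POST",
    headers: {
      "Content-Type": "application/json",
      Authorization: `Bearer ${token}`,
    },
    body: JSON.stringify({
      query: `query ($projectId: String!) {
        environments(projectId: $projectId) {
          edges { node { id name } }
        }
      }`,
      variables: { projectId },
    }),
  });

  // Find the environment whose name matches (e.g. the pr-* name).
  const { data } = await res.json();
  const match = data.environments.edges.find(
    (e: { node: { id: string; name: string } }) => e.node.name === envName,
  );
  if (!match) throw new Error(`No environment named "${envName}"`);
  return match.node.id;
}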
7 months ago
Great. I got a Github Actions script to do this using the GraphQL API.
7 months ago
Do I need the DATABASE_URL or the DATABASE_PUBLIC_URL, however? I intend to run migrations using the CI script prior to the app being built. It's erroring when using the private networking URL now. Could that be a security setting issue? Or is it because I simply need the public URL?
7 months ago
Is it correct that I need to use DATABASE_PUBLIC_URL instead of the private networking var? I'm running migrations via Github Actions prior to building the app.
7 months ago
yes you would need to use the public database url, since Github Actions runners run outside of Railway's private network
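something like this could be the CI migration step, assuming your migrate script reads its connection string from DATABASE_URL (adjust if your dotenvx setup resolves it differently):

// CI sketch: run the existing migrate script against the public URL.
import { execSync } from "node:child_process";

// DATABASE_PUBLIC_URL is assumed to be exported by an earlier workflow step,
// e.g. after the GraphQL lookup above.
const publicUrl = process.env.DATABASE_PUBLIC_URL;
if (!publicUrl) throw new Error("DATABASE_PUBLIC_URL is not set");

execSync("pnpm db:prod:migrate", {
  stdio: "inherit",
  env: { ...process.env, DATABASE_URL: publicUrl },
});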