2 months ago
Description:
Issue:
Environment variables containing "S3" in the name are not being injected into the deployed container, even though they appear correctly in the Railway dashboard and work in railway shell locally.
Variables affected:
- S3_ENDPOINT
- S3_ACCESS_KEY_ID
- S3_SECRET_ACCESS_KEY
- S3_BUCKET
- S3_REGION
What works:
- DATABASE_URL on the same service works fine
- NODE_ENV works fine
- Variables show correctly in Railway dashboard
- railway shell locally shows all S3 variables correctly
Debug output from deployed container:
Added logging to check process.env:
```ts
console.log({
  hasDatabaseUrl: !!process.env.DATABASE_URL, // true
  hasNodeEnv: !!process.env.NODE_ENV, // true
  s3EnvKeys: Object.keys(process.env).filter(k => k.includes("S3")) // [] (empty!)
});
```
Logs:
```
S3 Config check: {
  hasEndpoint: false,
  hasRegion: true, // only true because of the code default "us-east-1"
  hasAccessKey: false,
  hasSecret: false,
  endpointLength: 0,
  accessKeyLength: 0,
  secretLength: 0,
  hasDatabaseUrl: true,
  hasNodeEnv: true,
  nodeEnv: 'production',
  s3EnvKeys: []
}
```
What I've tried:
- Manually retyped all variable names (ruling out invisible characters)
- Disabled "Metal Build Environment"
- Multiple redeploys
- Verified variables exist in correct service and environment
Environment:
- Builder: Railpack (default)
- Service: Tambo API Server / staging
- Node.js app (NestJS)
Pinned Solution
2 months ago
Turns out, this was a Turborepo configuration issue on our end, not a Railway problem.
In strict environment mode, Turborepo strips variables from a task's environment unless they're explicitly listed under `env` (or `globalEnv`) in turbo.json. The S3 variables weren't included there, so they weren't being passed through to the app. DATABASE_URL worked because it was already in the config.
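A minimal sketch of the fix, for anyone hitting the same thing (assuming Turborepo 2.x, which uses `tasks`; 1.x uses `pipeline` instead, and the `build` task name here is illustrative):

```json
{
  "$schema": "https://turbo.build/schema.json",
  "globalEnv": ["DATABASE_URL", "NODE_ENV"],
  "tasks": {
    "build": {
      "env": [
        "S3_ENDPOINT",
        "S3_ACCESS_KEY_ID",
        "S3_SECRET_ACCESS_KEY",
        "S3_BUCKET",
        "S3_REGION"
      ]
    }
  }
}
```

With strict env mode (the default in 2.x), anything not declared in `env` or `globalEnv` is stripped from the task's environment.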
2 Replies
2 months ago
This thread has been marked as public for community involvement, as it does not contain any sensitive or personal information. Any further activity in this thread will be visible to everyone.
Status changed to Open Railway • about 2 months ago
2 months ago
I've seen something similar happen with Railpack builders before. Since `railway shell` works but the deployed container doesn't, it sounds like the variables are being filtered out during the build or runtime injection phase, possibly due to a `.env` file override or a reserved-keyword conflict (though standard S3 vars shouldn't be reserved).
Two quick things to check:
- Do you have a `.env` (or `.env.production`) file unintentionally committed to your repo? If present, it might be overwriting the injected environment variables at runtime, especially if you're using `dotenv` or similar libraries that prioritize file-based config.
- In your `railway.toml` (if you have one), check if there are any `build.env` or `deploy.env` whitelist/blacklist configurations that might be excluding them.
As a dirty workaround to isolate the "S3" substring theory, try creating a new variable like `MY_STORAGE_BUCKET` (aliasing `S3_BUCKET`) and see if that gets injected. If it does, there's definitely some filtering happening on the `S3_` prefix.
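One note on the `dotenv` suggestion: by default `dotenv` does not overwrite variables that are already set in `process.env`, so a committed `.env` only masks platform-injected values if `override: true` is passed (or a different loader applies the file first). A self-contained sketch of that precedence, using a hypothetical minimal loader rather than the real `dotenv` package:

```ts
import * as fs from "fs";
import * as os from "os";
import * as path from "path";

// Hypothetical minimal .env loader mimicking dotenv's default behavior:
// existing process.env values are kept unless override is requested.
function loadEnvFile(file: string, override = false): void {
  for (const line of fs.readFileSync(file, "utf8").split("\n")) {
    const m = line.match(/^\s*([A-Za-z_][A-Za-z0-9_]*)\s*=\s*(.*?)\s*$/);
    if (!m) continue;
    const [, key, value] = m;
    if (override || process.env[key] === undefined) {
      process.env[key] = value;
    }
  }
}

// Simulate: the platform injected S3_BUCKET, then a committed .env is loaded.
const tmp = path.join(os.tmpdir(), "demo.env");
fs.writeFileSync(tmp, "S3_BUCKET=from-env-file\n");
process.env.S3_BUCKET = "injected-by-platform";

loadEnvFile(tmp);                   // default: the injected value survives
console.log(process.env.S3_BUCKET); // "injected-by-platform"

loadEnvFile(tmp, true);             // override: the file wins
console.log(process.env.S3_BUCKET); // "from-env-file"
```

So a committed `.env` alone usually isn't enough to explain injected variables disappearing; something has to be loading it with override semantics.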
2 months ago
Turns out, this was a Turborepo configuration issue on our end, not a Railway problem.
In strict environment mode, Turborepo strips variables from a task's environment unless they're explicitly listed under `env` (or `globalEnv`) in turbo.json. The S3 variables weren't included there, so they weren't being passed through to the app. DATABASE_URL worked because it was already in the config.
Status changed to Solved brody • about 2 months ago