CORS with Object Storage/Bucket
landynu
PRO · OP

4 months ago

Hey all,

I know that Railway object storage is new and in Beta, but I stayed away from MinIO given their recent changes and dove in with Railway.

I am somewhat new to object storage. I was able to set up a different project hosted on Railway using AWS S3 without issue, but I am working on a new project and can't seem to get Object Storage to work due to CORS errors.

I have all of the variables defined in both my local dev and Railway environments.

Is there any documentation to review best practices?

Thanks in advance!

Solved · $10 Bounty

Pinned Solution

bytekeim
PRO

3 months ago

Hey landynu,

I came across your question on CORS errors with Railway's Object Storage and wanted to help out. It's beta, so yeah, custom CORS isn't fully supported yet, which explains the browser blocking direct requests. Your AWS S3 setup probably worked because you could tweak the policies there, but here's how to get around it on Railway without switching providers.

Basically, since it's S3-compatible, pre-signed URLs from the backend are the way to go—they embed the access rights so the frontend can hit the bucket without CORS kicking in. I set this up in a Node project, and it cleared things right up. Here's the code I used (feel free to adjust for your language/stack):

JavaScript

const AWS = require('aws-sdk'); // Use @aws-sdk/client-s3 for newer versions if preferred

const s3 = new AWS.S3({
  accessKeyId: process.env.RAILWAY_BUCKET_ACCESS_KEY_ID,
  secretAccessKey: process.env.RAILWAY_BUCKET_SECRET_ACCESS_KEY,
  endpoint: process.env.RAILWAY_BUCKET_ENDPOINT, // e.g., https://storage.railway.app
  s3ForcePathStyle: true,
  signatureVersion: 'v4'
});

// Function to get a pre-signed upload URL
const getUploadUrl = (key, contentType) => {
  return s3.getSignedUrl('putObject', {
    Bucket: process.env.RAILWAY_BUCKET_NAME,
    Key: key,
    Expires: 300, // Expires in 5 minutes
    ContentType: contentType
  });
};

// Backend route example (assumes app.use(express.json()) is registered
// so req.body is parsed)
app.post('/get-upload-url', (req, res) => {
  const { fileName, contentType } = req.body;
  const url = getUploadUrl(fileName, contentType);
  res.json({ url });
});
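One caveat with that route: it signs whatever fileName the client sends. A small validator can reject odd keys before you sign a URL for them — this is my own sketch, and the allowed-type list is just an example to adjust for your app:

```javascript
// Hypothetical validation for the /get-upload-url body above: reject
// empty names, path separators, traversal sequences, and unexpected
// content types before signing.
const ALLOWED_TYPES = ['image/png', 'image/jpeg', 'application/pdf'];

function validateUploadRequest({ fileName, contentType } = {}) {
  if (typeof fileName !== 'string' || fileName.length === 0) return null;
  if (fileName.includes('/') || fileName.includes('..')) return null;
  if (!ALLOWED_TYPES.includes(contentType)) return null;
  return { fileName, contentType };
}
```

In the route, call it first and return a 400 if it gives back null.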

On the client side, grab the URL and upload like this:

JavaScript

async function uploadFile(file) {
  const res = await fetch('/get-upload-url', {
    method: 'POST',
    headers: { 'Content-Type': 'application/json' },
    body: JSON.stringify({ fileName: file.name, contentType: file.type })
  });
  const { url } = await res.json();

  await fetch(url, {
    method: 'PUT',
    body: file,
    headers: { 'Content-Type': file.type }
  });
  console.log('File uploaded!');
}

For getting files, just use 'getObject' instead of 'putObject'. Keep the expiration short for security.
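To make that concrete, here's the download counterpart. I've written it to take the client as a parameter (pass in the `s3` instance from the first snippet) — the parameterized shape is my choice, same sketch-level caveats as above:

```javascript
// Pre-signed GET URL, mirroring the upload helper. Taking the client as
// an argument keeps the function easy to test in isolation.
const getDownloadUrl = (s3Client, key, expires = 300) =>
  s3Client.getSignedUrl('getObject', {
    Bucket: process.env.RAILWAY_BUCKET_NAME,
    Key: key,
    Expires: expires // seconds
  });
```

Usage: `const url = getDownloadUrl(s3, 'photo.png');` then fetch that URL from the browser the same way as the upload flow.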

If you want public-ish access (like serving images), proxy through your server to keep it on the same domain:

JavaScript

app.get('/proxy/:key', async (req, res) => {
  // Note: Express's `:key` param doesn't match slashes, so this serves
  // top-level keys only; use a wildcard route for nested paths.
  const params = {
    Bucket: process.env.RAILWAY_BUCKET_NAME,
    Key: req.params.key
  };
  try {
    const data = await s3.getObject(params).promise();
    res.set('Content-Type', data.ContentType);
    res.send(data.Body);
  } catch (err) {
    if (err.code === 'NoSuchKey') {
      return res.status(404).send('File not found');
    }
    res.status(500).send('Error getting the file');
  }
});

This routes everything through your app, no CORS needed. For heavier use, imgproxy could be a good add-on for processing.
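One more hedged note on the proxy: user-supplied keys go straight to the bucket, so it's worth rejecting anything that looks like path traversal before calling getObject. The helper name here is mine, not from any SDK:

```javascript
// Reject keys containing traversal sequences, backslashes, or absolute
// paths; return null so the route can answer 400 instead of hitting S3.
function safeKey(raw) {
  const key = String(raw || '');
  if (!key || key.includes('..') || key.startsWith('/') || key.includes('\\')) {
    return null;
  }
  return key;
}
```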

On docs/best practices: Railway's page covers the setup basics, but for more, check AWS S3 guides since it's compatible—things like env var management, bucket scoping per environment, and usage monitoring. Verify your vars match exactly between local and prod, especially the endpoint.
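On the "verify your vars" point, a fail-fast startup check saves a lot of debugging. The variable names below match the ones used in the snippets above — adjust them if your Railway service exposes different names:

```javascript
// Throw at boot if any bucket variable is missing, instead of failing
// later with a confusing signature or CORS-looking error.
const REQUIRED_VARS = [
  'RAILWAY_BUCKET_ACCESS_KEY_ID',
  'RAILWAY_BUCKET_SECRET_ACCESS_KEY',
  'RAILWAY_BUCKET_ENDPOINT',
  'RAILWAY_BUCKET_NAME'
];

function checkBucketEnv(env = process.env) {
  const missing = REQUIRED_VARS.filter((name) => !env[name]);
  if (missing.length > 0) {
    throw new Error(`Missing bucket config: ${missing.join(', ')}`);
  }
  return true;
}
```

Call `checkBucketEnv()` once at the top of your server entry point.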

If this doesn't click, drop more info like the error or your code, and I'll refine it.

2 Replies

danneljung
FREE

4 months ago

Hello!

As you said, Railway Object Storage is still in Beta, so some things like CORS settings are not complete yet. The CORS errors you see usually happen when the browser tries to call the bucket directly. A common way to fix this is to send requests through your backend, or use AWS S3 or Cloudflare R2 if you need full CORS support. Railway storage is S3‑compatible, so in theory it should work like your S3 setup, but right now custom CORS rules may not be possible.

I did not find any best practices guide for Railway Object Storage, but the Storage Buckets docs explain the basics. Since it is S3‑compatible, you can follow AWS S3 best practices until Railway adds more documentation.

Another advanced option is to create pre‑signed URLs on the server. These URLs already include the access permissions, so the browser can use them without CORS problems. Railway has not documented this yet, so it may or may not work, but in theory it should because the system is S3‑compatible.

Let me know if this helps, or feel free to add more detail if you can.



Status changed to Solved by ray-chen · 3 months ago
