a month ago
Issue:
You can only have 10 volumes per project
Context
In my current project, I have 3–4 active services, and each service typically requires:
PostgreSQL (persistent volume)
Redis (persistent volume)
We are also working with multiple environments under the same Railway project (primarily dev and prod).
Request
Could you please:
Increase the volume quota for this project/workspace (ideally to 30 volumes), and
Take a look at a possible discrepancy in how volumes are being counted?
Possible Bug / Inconsistency
At the moment, I can only see 8 volumes mounted in the development environment, yet I’m still receiving the “10 volumes per project” quota error.
Steps I’ve already tried:
Deleted an existing Redis volume and attempted to create a new one (the creation still fails with the same quota error).
Deleted a service that previously had a volume attached but was no longer in use.
Neither of these actions resolved the issue, which makes it seem like the volume quota may not be getting released correctly.
Happy to share project and workspace details privately if needed.
Thanks in advance!
5 Replies
a month ago
Your workspace is on the Hobby plan, which has a limit of 10 volumes per project. The discrepancy you're seeing—8 visible but hitting the quota—is likely caused by orphaned volumes that still exist but aren't mounted to active services. I can see you have a removed service called "Organiser Database" which may still have its volume consuming quota even though the service itself is gone.
Volumes can also become orphaned after backup restores, since restoring creates a new volume while keeping the original unmounted. These unmounted volumes still count against your limit until explicitly deleted. You can check for these in your project settings under the Volumes section, where unmounted volumes should still appear even if they're not attached to running services.
For the quota increase—upgrading to Pro would give you 20 volumes per project, which is closer to what you need. If you need 30, we can bump the limit further once you're on Pro. Worth noting: volumes for the same service across different environments (like your dev and prod) don't actually count separately toward the limit, so you may have more headroom than expected once we sort out any orphaned volumes.
If you'd like to upgrade to Pro and have the limit increased to 30, just let me know and I can help coordinate that. In the meantime, I'd recommend checking for any unmounted volumes in your project settings that might be consuming quota silently.
Status changed to Awaiting User Response Railway • about 1 month ago
a month ago
Thanks for the explanation — it helps clarify the situation.
Could you help with:
How to identify any orphaned or unmounted volumes that still count toward the quota.
The safest way to remove them without affecting active services.
A quick review of the project’s volume setup to see which volumes are counting, including any leftover from removed services or restores.
Our goal is to understand why we’re hitting the quota despite only ~8 active volumes, and to remove any unnecessary ones. We’re open to upgrading to Pro in a future cycle but want to resolve the current issue first. Thanks for your help.
Status changed to Awaiting Railway Response Railway • about 1 month ago
Status changed to Awaiting User Response Railway • about 1 month ago
a month ago
Status changed to Awaiting Railway Response Railway • about 1 month ago
a month ago
Here are all the volumes on your project: (not sure this will format well)
| Volume ID | Volume Name | Service | Environments |
|-----------|-------------|---------|--------------|
| 0b52c26f-78f2-4cfd-8fd9-65b8405d8919 | postgres-volume-Pwbd | Axy-Strapi-PostgreSQL | Production |
| 2cb3f92b-0e5f-40a4-b6a6-490363e4c4e8 | postgres-volume | Organiser Database | Development |
| 402234ff-9e20-4bec-9f71-783b179736d8 | typesense-rty0-volume | typesense-staging | Staging |
| 5b9b4525-894e-49ab-8417-cdaa4a021c06 | redis-pcvv-volume | Journal Club Cache | Development |
| 687d5720-e754-4b02-81fc-8890f97ce4a9 | jubilant-volume | Orphaned | None |
| 8169fccc-5975-441a-9566-044b2062c32b | typesense-famr-volume | typesense-production | Production |
| 98997bc3-d899-4eb3-b59d-d0a030841e85 | action-volume | Axy App Data | Dev, Staging, Prod |
| 99082ec3-f6da-4d00-aa85-cbf7cd3f4b19 | caring-volume | Organization Redis Prod | Staging, Prod |
| bed36cd6-d93d-4fe5-8167-878d8b312309 | sponge-volume | Redis | Dev, Staging, Prod |
| df50b32d-b6d4-4949-acb4-80ff85b7cdc7 | postgres-volume-3wiW | Journal Club Database | Development |
| e09b7187-269a-4989-81df-ebd4bd9870a8 | postgres-volume-JKuj | Axy Org Data | Staging, Prod |
| f4c7c533-6673-4424-b518-fdb03aa3c634 | typesense-volume | typesense | Development |
| ff76859e-611b-4360-ab8e-4ef7f868f687 | postgres-volume-k2-i | Orphaned | None |
Note: The quota counts volumes (13), not volume instances (19). Some volumes span multiple environments (e.g., action-volume has instances in Dev, Staging, and Prod but counts as 1 volume toward the quota).
Two volumes are not attached to any service:
- jubilant-volume (created Sep 12, 2025)
- postgres-volume-k2-i (created Dec 16, 2025)
These were likely left behind after services were deleted or restores failed. They count toward your quota but aren't serving any purpose.
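The counting rule above can be sketched in a few lines of Python. The data here is a hypothetical sample shaped like the table (not the API's actual response format): each volume carries a list of environment instances, and a volume with no instances is orphaned.

```python
# Hypothetical sample mirroring three rows of the table above:
# a volume counts once toward the quota no matter how many
# environment instances it has; zero instances means orphaned.
volumes = [
    {"id": "sponge-volume", "instances": ["Dev", "Staging", "Prod"]},
    {"id": "action-volume", "instances": ["Dev", "Staging", "Prod"]},
    {"id": "jubilant-volume", "instances": []},  # orphaned
]

volume_count = len(volumes)  # what the quota counts
instance_count = sum(len(v["instances"]) for v in volumes)
orphans = [v["id"] for v in volumes if not v["instances"]]

print(volume_count, instance_count, orphans)
# → 3 6 ['jubilant-volume']
```

Applied to the full table, this is how 13 quota-counted volumes can coexist with 19 instances, and why the two orphans still consume quota.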
Since orphaned volumes aren't visible in the dashboard UI, you'll need to use the CLI:
```shell
railway link 9a7f73d4-1af1-4280-9fd7-ea6b5aa31973   # link the CLI to this project
railway volume list                                  # verify the volumes
railway volume delete 687d5720-e754-4b02-81fc-8890f97ce4a9
railway volume delete ff76859e-611b-4360-ab8e-4ef7f868f687
```
This is safe: these volumes aren't connected to any running services, so deleting them won't affect anything active.
After deletion, you'll still have 11 volumes remaining (still above the Hobby plan's limit of 10, so keep that in mind before creating new ones).
Let us know if you need further help.
Sam
P.S. I believe you can retrieve the table above yourself via our GraphQL endpoint, or interactively with our API Explorer (https://railway.com/graphiql):
```graphql
query projectVolumes($projectId: String!) {
  project(id: $projectId) {
    name
    environments {
      edges {
        node {
          id
          name
        }
      }
    }
    services {
      edges {
        node {
          id
          name
        }
      }
    }
    volumes {
      edges {
        node {
          id
          name
          createdAt
          volumeInstances {
            edges {
              node {
                id
                environmentId
                serviceId
              }
            }
          }
        }
      }
    }
  }
}
```

Variables:

```json
{"projectId": "9a7f73d4-1af1-4280-9fd7-ea6b5aa31973"}
```
Status changed to Awaiting User Response Railway • about 1 month ago
a month ago
This thread has been marked as solved automatically due to a lack of recent activity. Please re-open this thread or create a new one if you require further assistance. Thank you!
Status changed to Solved Railway • 29 days ago