Sharing volumes between services
ay1man4
PRO OP

a year ago

Support shared and multiple volumes between services.

It's a missing feature that would open the door for thousands of projects that currently can't be deployed on Railway.

Under Review

0 Threads mention this feature

12 Replies

mark-avalaunch
PRO

a year ago

100%


a year ago

It would be much appreciated if y'all could share real use cases.


mark-avalaunch
PRO

a year ago

Mine is pretty simple, and I ran into it right after deploying, in large part because I didn't do proper research on Railway beforehand.

Our setup consists of two distinct services:

  1. Application Worker Container: This runs our core application logic, handling primary tasks, background processing, and potentially API interactions. It's built using Python 3.12 and relies on standard application libraries.  

  2. Specialized Task Executor Container: This container is designed to run a specific, resource-intensive task using a specialized toolset. This toolset has distinct and potentially conflicting dependencies, including a requirement for Python 3.11 and specific libraries for data processing and external interactions.  

Reason for Separation:

The primary reason for separating these into two containers was dependency isolation. The specialized task executor requires a different Python runtime version and a unique set of libraries that could conflict if installed alongside the main application worker's dependencies in a single container environment. This separation ensures each component runs reliably.

Intended Communication (Shared Volume Use Case):

The intended workflow was as follows:

  • The Application Worker would determine when a specialized task needed to be run.

  • It would trigger the Task Executor container (or a process within it).

  • The Task Executor would perform its task and generate output data files (e.g., JSON results).

  • These output files were intended to be written to a shared volume.

  • The Application Worker would then read these data files from the shared volume to continue its processing workflow based on the results generated by the Task Executor.

The shared volume was chosen as a straightforward mechanism for passing potentially large data files generated by the specialized task back to the main application worker without complex networking or intermediate storage.
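The handover described above could be sketched roughly like this. The paths and file-naming convention are hypothetical, and the write-to-temp-then-rename step guards against the Application Worker reading a half-written result:

```python
import json
import os
import tempfile

SHARED_DIR = "/shared/results"  # hypothetical mount point of the shared volume

def write_result(task_id, payload, shared_dir=SHARED_DIR):
    """Task Executor side: write results atomically so the reader never
    sees a partial file (write to a temp file, then rename into place)."""
    os.makedirs(shared_dir, exist_ok=True)
    fd, tmp_path = tempfile.mkstemp(dir=shared_dir, suffix=".tmp")
    with os.fdopen(fd, "w") as f:
        json.dump(payload, f)
    final_path = os.path.join(shared_dir, f"{task_id}.json")
    os.rename(tmp_path, final_path)  # atomic on the same filesystem
    return final_path

def read_result(task_id, shared_dir=SHARED_DIR):
    """Application Worker side: pick up a finished result, or None if the
    Task Executor hasn't produced it yet."""
    path = os.path.join(shared_dir, f"{task_id}.json")
    if not os.path.exists(path):
        return None
    with open(path) as f:
        return json.load(f)
```

The rename step is what makes a shared volume attractive here: both sides see a result file either not at all or fully written, with no networking in between.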

Since shared volumes are not supported between services in this manner on the platform, we are now investigating alternative strategies to achieve the same data handover between our Application Worker and the Task Executor container.


ay1man4
PRO OP

a year ago

Sample Use Case: Web App Uploads Processed by a Worker

Scenario:

You have:

1. A web app service (web) that allows users to upload images.

2. A worker service (processor) that listens for new uploads and processes them (e.g., resizes or compresses the images).

You want both services to access the same files, so you use a shared volume.

---

Docker Compose Example:

version: '3.9'

services:
  web:
    image: my-web-app
    volumes:
      - uploads:/app/uploads
    ports:
      - "8000:8000"

  processor:
    image: image-processor
    volumes:
      - uploads:/app/uploads

volumes:
  uploads:
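Assuming the shared uploads volume above, the processor service could be sketched as a simple polling loop. The directory path matches the compose mount; process_image is a placeholder for the real resize/compress step, and a production worker might use inotify or a queue instead of polling:

```python
import os
import time

UPLOAD_DIR = "/app/uploads"  # same path the shared volume mounts in both services

def find_new_uploads(upload_dir, seen):
    """Return files that appeared since the last scan, updating `seen`
    in place so each file is reported exactly once."""
    current = {
        name for name in os.listdir(upload_dir)
        if os.path.isfile(os.path.join(upload_dir, name))
    }
    new = sorted(current - seen)
    seen |= current
    return new

def process_image(path):
    # Placeholder for the real resize/compress step.
    print(f"processing {path}")

def run(upload_dir=UPLOAD_DIR, poll_seconds=2):
    """Processor main loop: watch the shared directory the web service
    writes uploads into, and process each new file as it lands."""
    seen = set()
    while True:
        for name in find_new_uploads(upload_dir, seen):
            process_image(os.path.join(upload_dir, name))
        time.sleep(poll_seconds)
```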



mark-avalaunch
PRO

a year ago

Thanks for the example! Just to confirm, does your platform run all services defined within a single docker-compose.yml file on the same underlying Docker host instance, enabling native named volume sharing for read/write access between containers as shown?


ay1man4
PRO OP

a year ago

Yes, similar to the shared example.


brody

a year ago

For either of your use cases, would you need this hypothetical shared volume to allow both read and write permissions for all connected services?



mark-avalaunch
PRO

a year ago

Yes, that's correct.


javierortegap
PRO

10 months ago

Following up on the discussion about shared volumes, I'd like to present a critical use case for deploying ERPNext on the platform. The lack of shared volume support between services is currently a major hurdle for a standard, best-practice ERPNext deployment.

Application Overview: ERPNext

  • ERPNext is built on the Frappe Framework (Python-based).

  • It's a multi-process application that typically consists of:

    1. Web Server (Gunicorn/Python): Serves the application's dynamic content and API.

    2. Background Workers (RQ - Python): Handle asynchronous tasks like sending emails, processing reports, running workflows, and other long-running jobs. There are typically multiple queues (e.g., short, default, long).

    3. Scheduler (Frappe Scheduler - Python): Manages scheduled and recurring tasks (e.g., cron jobs, automated backups within ERPNext, email digests).

    4. Real-time / WebSocket Server (Node.js + Socket.IO): Handles real-time updates and notifications within the UI.

    5. Frontend Proxy (Nginx): Serves static assets, handles SSL termination, and reverse proxies requests to the Gunicorn web server and WebSocket server.

  • It uses MariaDB/PostgreSQL for its database and Redis for caching and queuing. These are well-supported as separate services on Railway.

The Core Challenge: Shared Filesystem Requirement

All the ERPNext application processes listed above (Gunicorn, RQ Workers, Scheduler, Nginx, and even bench CLI commands used for maintenance) critically rely on having shared access to the same frappe-bench/sites/ directory structure at runtime.

This directory contains:

  1. Site Configuration Files:

    • common_site_config.json: Contains database credentials, Redis credentials, and other global settings.

    • [site_name]/site_config.json: Contains site-specific configurations, file upload paths, maintenance mode flags, etc.

    • All processes need to read these configurations to function correctly.

  2. Private Files:

    • Files uploaded by users (e.g., attachments to documents, user-uploaded images) are stored within [site_name]/private/files/.

    • The Gunicorn web server might handle the upload, and a background worker might later need to process that same file (e.g., for data extraction, conversion). They both need to see the exact same file via the filesystem.

  3. Public Assets & Compiled Files:

    • [site_name]/public/files/: Publicly accessible user uploads.

    • assets/: Compiled JS/CSS assets. While Nginx primarily serves these, Gunicorn and workers might interact with paths or generate links based on this structure. The bench build command (run by the entrypoint or during updates) populates this.

  4. Installed Apps Code:

    • While the base code is in the Docker image, any site-specific apps or customizations might reside within the sites directory or be linked from there.

  5. Logs (Often):

    • logs/ directory within the bench often stores application-level logs from various Frappe processes.
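To illustrate why every process needs the same sites/ directory, here is a rough approximation of how a Frappe process resolves a site's configuration: common settings overlaid by site-specific ones. The paths follow the layout above; this is a simplified sketch, not Frappe's actual code:

```python
import json
import os

BENCH_SITES = "/home/frappe/frappe-bench/sites"  # layout described above

def load_site_config(site_name, sites_dir=BENCH_SITES):
    """Approximation of Frappe's config resolution: start from
    common_site_config.json (DB/Redis credentials, global settings),
    then overlay the site's own site_config.json (site keys win)."""
    config = {}
    common = os.path.join(sites_dir, "common_site_config.json")
    if os.path.exists(common):
        with open(common) as f:
            config.update(json.load(f))
    site = os.path.join(sites_dir, site_name, "site_config.json")
    if os.path.exists(site):
        with open(site) as f:
            config.update(json.load(f))
    return config
```

Since Gunicorn, every RQ worker, the scheduler, and bench CLI commands all perform this resolution against the same files, splitting them into services without a shared volume leaves each service with a divergent copy of the sites directory.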

Intended Deployment Architecture (Ideal with Shared Volumes):

The standard and recommended way to deploy ERPNext with Docker (as seen in the official frappe/frappe_docker repository's compose.yaml) is to have:

  • A single persistent volume mounted to /home/frappe/frappe-bench/ (or specifically /home/frappe/frappe-bench/sites/ and /home/frappe/frappe-bench/sites/assets/ if multiple mounts were possible to the same volume from different subpaths, though a single parent mount is more common).

  • Multiple distinct container services, each running one type of ERPNext process, all mounting this same shared volume:

    • erpnext-gunicorn (Web Server)

    • erpnext-nginx (Frontend Proxy)

    • erpnext-worker-default

    • erpnext-worker-short

    • erpnext-worker-long

    • erpnext-scheduler

    • erpnext-socketio

    • An initial erpnext-init-site job to set up the site within the volume.

      Specific Permissions Needed for the Shared Volume:

      • All services/processes (Gunicorn, Workers, Scheduler, Nginx reading assets) would need Read/Write access to different parts of the shared sites directory.

        • Gunicorn/Workers: Read/Write for site configs, private files.

        • Nginx: Read access for public assets and potentially specific site configs for routing.

        • Bench CLI (for setup/migrations): Read/Write for almost everything within sites.

      • The processes typically run as a non-root user (e.g., frappe, UID 1000) within the containers, so the shared volume needs to be writable by this UID/GID. Railway's RAILWAY_RUN_UID=0 lets the initial container entrypoint chown the volume correctly for this frappe user.
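A minimal sketch of what a root entrypoint (running with RAILWAY_RUN_UID=0) might do before dropping privileges: recursively hand ownership of the mounted volume to the frappe user. UID/GID 1000 is the conventional frappe user; the actual frappe_docker entrypoints differ in detail:

```python
import os

FRAPPE_UID = 1000  # conventional non-root frappe user inside the containers
FRAPPE_GID = 1000

def chown_tree(root, uid=FRAPPE_UID, gid=FRAPPE_GID):
    """Recursively chown a mounted volume so the frappe user can write
    to it, as a root entrypoint would do before dropping privileges."""
    os.chown(root, uid, gid)
    for dirpath, dirnames, filenames in os.walk(root):
        for name in dirnames + filenames:
            os.chown(os.path.join(dirpath, name), uid, gid)
```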

        Thanks for prioritizing this feature.



brody

10 months ago

Hello,

We greatly appreciate you sharing your use case, but I would like to be clear that we cannot prioritize this feature right now.



javierortegap
PRO

10 months ago

That's a pity. Any ETA on this?



brody

10 months ago

We do not have any projects laid out for this feature, so I won't be able to offer an ETA.

