Unusually high billing for worker server memory
fffayzul
PRO · OP

18 days ago

Hi Railway Team,

I'm writing regarding unexpected charges on my account related to memory usage on one of my servers.

My Setup: I'm running two servers for my application:

  1. Main server: Handles database I/O and WebSocket connections

  2. Worker server: Processes asynchronous background tasks (push notifications, emails, etc.)

The Issue: The billing dashboard shows my worker server exceeded 3GB of memory usage during the billing period. However, based on my monitoring and the limited testing activity, this doesn't align with actual usage:

  • Only 2 people have been testing the application

  • My worker tasks are lightweight (notifications and emails)

  • Internal monitoring never showed memory consumption exceeding 3GB

My Concern: I understand that running two servers will naturally increase costs, but these charges seem disproportionate given:

  • The application is still in testing phase

  • Minimal user activity (just 2 testers)

  • The lightweight nature of the background tasks being processed

Could you please review the memory usage data for my worker server and help me understand what might have caused this spike? I want to ensure I'm being billed accurately before scaling to production.

Thank you for your assistance.

Solved · $20 Bounty

1 Reply

brody
EMPLOYEE

18 days ago

This thread has been marked as public for community involvement, as it does not contain any sensitive or personal information. Any further activity in this thread will be visible to everyone.

Status changed to Open brody 18 days ago


fffayzul
PRO · OP

14 days ago

Update: Resolved - Root Cause Identified

I figured out the issue. It wasn't a billing error - it was a Celery configuration problem.

The Problem: My Celery worker was defaulting to concurrency=48 with the prefork pool. This means Celery was spawning 48 separate Python processes on startup, regardless of actual workload. Each process consumed memory, resulting in ~3GB usage even when completely idle.
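The numbers roughly add up. A back-of-the-envelope check (assuming ~60 MB of idle memory per prefork process, a typical figure for a Python worker with a framework loaded; this is an estimate, not a measurement):

```python
# Sanity check: 48 idle prefork processes at an assumed ~60 MB each
# (the ~60 MB per-process footprint is a rough guess, not measured)
per_process_mb = 60
processes = 48
total_gb = per_process_mb * processes / 1024
print(f"~{total_gb:.1f} GB idle")  # ≈ 2.8 GB, close to the ~3 GB billed
```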

The Solution: I adjusted my Celery start command to explicitly set concurrency:

# For testing (2 testers):
celery -A myapp worker --concurrency=2 --pool=threads --loglevel=info

# For production:
celery -A myapp worker --concurrency=8 --pool=threads --loglevel=warning

Why this works:

  • Using --pool=threads instead of the default prefork is more memory-efficient for I/O-bound tasks (emails, push notifications)

  • Explicitly setting --concurrency=2 limits worker threads to what's actually needed

  • Idle memory dropped from ~3GB to ~200-300MB

Lesson learned: If you're running Celery on Railway (or any container platform), always set --concurrency explicitly. Celery defaults its concurrency to the number of CPU cores it detects, and a container usually sees the host machine's full core count rather than its own CPU allocation, so Celery can spawn far more worker processes than your workload needs.
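You can see the mismatch for yourself by checking what Python detects inside the container, since Celery's default concurrency is based on the detected core count. A small stdlib-only sketch:

```python
import os

# os.cpu_count() typically reports the host's core count inside a
# container, which can be far larger than the CPU you're allocated
print("detected cores:", os.cpu_count())

# On Linux, the scheduler affinity mask may be narrower than cpu_count()
if hasattr(os, "sched_getaffinity"):
    print("schedulable cores:", len(os.sched_getaffinity(0)))
```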

Hope this helps anyone else running into unexpectedly high memory usage with Celery workers!


Status changed to Solved brody 14 days ago

