a year ago
I'm installing a custom package via the buildCommand in my railway.toml but getting a pip error - how can I fix it?
railway.toml:
[build]
builder = "nixpacks"
buildCommand = "git clone https://username:$GIT_ACCESS_TOKEN@github.com/org/my-module.git && cd my-module && pip install -e ."
logs:
0.278 Cloning into 'my-module'...
4.171 /bin/bash: line 1: pip: command not found
Why is this not working?
a year ago
If your project isn't detected as a Python project (it doesn't have a main.py, or Python isn't the primary language) then you'll have to tell nixpacks to install Python. Create a nixpacks.toml file and add "python3" to the providers section
a year ago
make sure to include the "…" as well, since that tells nixpacks to also install the packages it detects on its own
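For example, a minimal nixpacks.toml sketch (depending on your nixpacks version the entry is either the "python" provider or the "python3" nix package - verify against the nixpacks docs):
providers = ["...", "python"]
or, to add the nix package at the setup phase instead:
[phases.setup]
nixPkgs = ["...", "python3"]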
a year ago
@Charles_ for visibility ^
a year ago
the real question is, why are you having your build clone a repo instead of letting railway's infra do that for you?
it seems that adding requirements.txt at the project root fixed it - but I only need setup.py. Would that also work with nixpacks.toml if I remove requirements.txt?
I'm actually installing 6 different private modules, which makes for a very long and awkward buildCommand
a year ago
attach your repo to the service
a year ago
Fair, this sounds like an absolute pain to do with nixpacks; I would highly recommend doing this with a Dockerfile instead
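For illustration, a minimal Dockerfile sketch (assumptions: a Python app, the same private repo URL as in the buildCommand above, a placeholder start command, and that GIT_ACCESS_TOKEN is set as a Railway service variable and exposed to the build via the ARG - check the current Railway docs on build-time variables):
FROM python:3.11-slim
WORKDIR /app
# git is needed to fetch the private packages
RUN apt-get update && apt-get install -y --no-install-recommends git && rm -rf /var/lib/apt/lists/*
# build-time token for the private repos (assumption: provided by Railway as a build arg)
ARG GIT_ACCESS_TOKEN
# one line per private module; repeat for the other five
RUN pip install "git+https://username:${GIT_ACCESS_TOKEN}@github.com/org/my-module.git"
# the service's own code is checked out by Railway from the attached repo
COPY . .
# placeholder start command
CMD ["python", "main.py"]
Keeping one pip install line per module also gives each private package its own cache layer.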
@Brody any idea why I can't access my application via the URL provided by Railway?
I see this in logs
2024-04-11 12:01:18 +0000 - dagster-webserver - INFO - Serving dagster-webserver on http://0.0.0.0:3000 in process 8
I have port defined in my variables
PORT=3000
I expose the port in my Dockerfile:
EXPOSE 3000
my supervisord.conf
[supervisord]
nodaemon=true
user=root
[program:dagster-daemon]
command=dagster-daemon run
autostart=true
autorestart=true
stdout_logfile=/dev/stdout
stdout_logfile_maxbytes=0
stderr_logfile=/dev/stderr
stderr_logfile_maxbytes=0
environment=DAGSTER_HOME="/app"
[program:dagit]
command=dagster-webserver -h 0.0.0.0 -p %(ENV_PORT)s
directory=/app
autostart=true
autorestart=true
stdout_logfile=/dev/stdout
stdout_logfile_maxbytes=0
stderr_logfile=/dev/stderr
stderr_logfile_maxbytes=0
environment=DAGSTER_HOME="/app"
@Brody just checking; are you aware of any issues/downsides of using supervisord inside a Dockerfile to manage long-running processes?
I would have them split as two Railway services, but both processes need access to a shared volume, which AFAIK isn't possible.
a year ago
I use supervisor on a demanding php-fpm service on Railway and have never had any issues with it.
a year ago
yeah it's totally fine, but personally, I much prefer parallel
a year ago
far far simpler
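Assuming that means GNU parallel (an assumption - the thread doesn't spell it out) and that it's installed in the image, a start script sketch for the two Dagster processes could look like:
#!/usr/bin/env bash
# run both processes; --ungroup streams their output, --halt now,done=1 stops the container if either exits
parallel --ungroup --halt now,done=1 ::: \
  "dagster-daemon run" \
  "dagster-webserver -h 0.0.0.0 -p ${PORT}"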
a year ago
Thanks, how can I avoid this Dockerfile caching? This layer runs git clone & pip install on repos I've made new commits to
a year ago
set a service variable NO_CACHE to 1
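If you'd rather keep caching for everything else and only bust the clone/install layer, a common plain-Docker alternative (a sketch; CACHE_BUST is a hypothetical build argument you'd bump whenever the private repos change) is:
ARG CACHE_BUST=1
# any change to CACHE_BUST invalidates the cache from here on, so the install below re-runs
RUN pip install "git+https://username:${GIT_ACCESS_TOKEN}@github.com/org/my-module.git"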
@Brody please send me a friend request if you're interested in paid work (deploying a docker-compose setup to Railway)
a year ago
well I'm honestly just curious as to what you want to deploy more than anything