2 years ago
Hi! Just recently I've been having issues with my Celery instance. It was working fine all this time, but now it can't connect to Redis (everything is dockerized). Here is the error: Cannot connect to {redacted redis address} Lookup timed out.. - Has anyone had the same issue?
102 Replies
2 years ago
the redis address would be very important to help us debug this
2 years ago
oh so not on railway
2 years ago
is there some kind of whitelist you need to do?
2 years ago
can we focus on getting you connected to a railway hosted redis database?
2 years ago
works better for me since i know nothing about redislabs
2 years ago
can i see a screenshot of your railway project please?
2 years ago
if redis isn't configured, where did you get that domain from?
so that domain is from a heroku instance, I'm having the same issue and decided to try railway to see if it was an isolated instance
2 years ago
im talking about this domain
[2024-02-26 17:31:11,560: ERROR/MainProcess] consumer: Cannot connect to redis://default:**@monorail.proxy.rlwy.net:34378/0: Error -3 connecting to monorail.proxy.rlwy.net:34378. Lookup timed out..
2 years ago
please show me the screenshot of your project
2 years ago
okay can show me how you are trying to connect to it
2 years ago
very helpful video, what do you have REDIS_ENDPOINT and REDIS_PORT set to in your service variables?
REDIS_ENDPOINT=default:**@monorail.proxy.rlwy.net
REDIS_PORT=34378
2 years ago
instead of trying to build the url yourself, just use a REDIS_URL variable in the celery config file, then in your service variables set REDIS_URL to ${{Redis.REDIS_PRIVATE_URL}}
this is unlikely to fix the main issue, but it's best we do this instead
okay got it - would my config look like this then?
import os
from dotenv import load_dotenv

load_dotenv()

REDIS_URL = os.environ.get('REDIS_URL', '6379')  # default to 6379 if not provided

imports = ["tasks"]
broker_url = REDIS_URL
result_backend = REDIS_URL
task_serializer = 'json'
result_serializer = 'json'
accept_content = ['json']
timezone = 'America/Chicago'
enable_utc = True
broker_connection_retry_on_startup = True
2 years ago
let's omit the dotenv for now, since it looks like you are committing that file to your repo
2 years ago
import os
imports = ["tasks"]
broker_url = os.environ['REDIS_URL']
result_backend = os.environ['REDIS_URL']
task_serializer = 'json'
result_serializer = 'json'
accept_content = ['json']
timezone = 'America/Chicago'
enable_utc = True
broker_connection_retry_on_startup = True
2 years ago
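One reason for preferring `os.environ['REDIS_URL']` over the earlier `os.environ.get('REDIS_URL', '6379')` is that a missing variable then fails immediately at import time, instead of silently handing Celery the string `'6379'`, which is not a valid broker URL. A minimal sketch of that fail-fast pattern (the helper name is my own, not from the thread):

```python
import os

def require_env(name, env=os.environ):
    """Return the value of an environment variable, failing loudly if unset."""
    try:
        return env[name]
    except KeyError:
        # Better to crash at import time with a clear message than to let
        # the worker start with a nonsense broker URL.
        raise RuntimeError(
            f"{name} is not set; add it to the service variables "
            "(e.g. REDIS_URL=${{Redis.REDIS_PRIVATE_URL}} on Railway)"
        )

# Usage in the celery config file:
# broker_url = require_env("REDIS_URL")
```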
please reference my message
2 years ago
show me the new error please
[2024-02-26 17:54:49,592: ERROR/MainProcess] consumer: Cannot connect to redis://default:**@redis.railway.internal:6379//: Error -3 connecting to redis.railway.internal:6379. Lookup timed out..
2 years ago
cool, send your dockerfile please, and going forward please enclose logs and code (or similar) in code blocks
2 years ago
triple backticks
2 years ago
try this instead
# Dockerfile.worker
# Use an official Python runtime as a parent image
FROM python:3.11.1
WORKDIR /app
COPY requirements.txt .
RUN pip install -r requirements.txt
COPY . .
# Command to run the Celery worker
CMD sleep 3 && celery -A tasks worker --loglevel=info --pool=eventlet
2 years ago
interesting
yeah, I'm not sure if it's a rediscloud issue, but their status page has been okay apart from a few urgent fixes that happened in the last two days
2 years ago
railway deploys a redis docker image
2 years ago
increase the sleep to 10 seconds?
2 years ago
im kinda out of ideas, you sure you are using the dockerfile i provided?
2 years ago
and you haven't modified anything at all about the redis database on railway right? absolutely nothing at all?
2 years ago
let me do some thinking, could you share your repo, or add me to it?
Thank you! This is under a client repo but I can create another and add you in
2 years ago
yes please
brody192
2 years ago
there's no need but i very much appreciate the gesture
Here you go! https://www.buymeacoffee.com/brody192/c/8635280 - The world needs more people like you. Much appreciated
I figured out what it was - it was the concurrency pool. Eventlet doesn't want to play nice with Docker I guess
2 years ago
sorry i couldn’t solve this but happy you have! and thank you so much for the train!!
2 years ago
a sleep is still needed, but you can bump the sleep down to 3 seconds
2 years ago
because the private network takes about 3 seconds to be able to respond to dns lookups
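The fixed `sleep 3` works because the private network's DNS becomes resolvable a few seconds after the container starts. A slightly more robust (hypothetical) alternative is to retry the DNS lookup itself before launching the worker, so the wait lasts only as long as actually needed:

```python
import socket
import time

def wait_for_dns(host, attempts=10, delay=1.0):
    """Retry resolving `host` until it succeeds or attempts run out.

    Returns True once the name resolves, False otherwise. Intended to run
    before starting the Celery worker, in place of a fixed sleep.
    """
    for _ in range(attempts):
        try:
            socket.getaddrinfo(host, None)  # raises gaierror until DNS is up
            return True
        except socket.gaierror:
            time.sleep(delay)
    return False
```

This could run as a small pre-start script, e.g. `python wait_dns.py && celery -A tasks worker --loglevel=info` in the Dockerfile's CMD (the script name is illustrative, not from the thread).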
2 years ago
happy to help where i can!