a year ago
I have an issue with a Redis connection error, yet everything seems to be working when I test it, and I am stuck.
See my settings:
# Celery settings
CELERY_BROKER_URL = os.environ.get("REDIS_URL", "redis://localhost:6379/0")
CELERY_RESULT_BACKEND = os.environ.get("REDIS_URL", "redis://localhost:6379/0")
CELERY_ACCEPT_CONTENT = ["json"]
CELERY_TASK_SERIALIZER = "json"
CELERY_RESULT_SERIALIZER = "json"
CELERY_TIMEZONE = "Australia/Tasmania"
CELERY_WORKER_STATE_DB = "worker-state.db"
CELERY_BROKER_CONNECTION_RETRY_ON_STARTUP = True
CACHES = {
    "default": {
        "BACKEND": "django_redis.cache.RedisCache",
        "LOCATION": os.environ.get("REDIS_URL", "redis://localhost:6379/0"),
        "OPTIONS": {
            "CLIENT_CLASS": "django_redis.client.DefaultClient",
        },
    }
}
This is my local test:
Attempting to connect to Redis using URL: redis://default:*****@roundhouse.proxy.rlwy.net:****
Connected to Redis successfully!
Yet I get this error:
File "/usr/local/lib/python3.12/site-packages/kombu/messaging.py", line 234, in <lambda>
Jul 09 09:43:52
django-server
channel = ChannelPromise(lambda: connection.default_channel)
Jul 09 09:43:52
django-server
^^^^^^^^^^^^^^^^^^^^^^^^^^
Jul 09 09:43:52
django-server
File "/usr/local/lib/python3.12/site-packages/kombu/connection.py", line 953, in default_channel
Jul 09 09:43:52
django-server
self._ensure_connection(**conn_opts)
Jul 09 09:43:52
django-server
File "/usr/local/lib/python3.12/site-packages/kombu/connection.py", line 458, in ensureconnection
Jul 09 09:43:52
django-server
with ctx():
Jul 09 09:43:52
django-server
File "/usr/local/lib/python3.12/contextlib.py", line 158, in exit
Jul 09 09:43:52
django-server
self.gen.throw(value)
Jul 09 09:43:52
django-server
File "/usr/local/lib/python3.12/site-packages/kombu/connection.py", line 476, in reraiseas_library_errors
Jul 09 09:43:52
django-server
raise ConnectionError(str(exc)) from exc
Jul 09 09:43:52
django-server
kombu.exceptions.OperationalError: [Errno 111] Connection refused
14 Replies
a year ago
I don't get any deployment errors, but I get an error when triggering my task, as below:
Attachments
a year ago
This is my celery.py:
from __future__ import absolute_import, unicode_literals
import os
from celery import Celery
# Set the default Django settings module for the 'celery' program.
os.environ.setdefault('DJANGO_SETTINGS_MODULE', 'mysite.settings')
app = Celery('mysite')
# Using a string here means the worker will not have to
# pickle the object when using Windows.
app.config_from_object('django.conf:settings', namespace='CELERY')
# Load task modules from all registered Django app configs.
app.autodiscover_tasks()
@app.task(bind=True)
def debug_task(self):
    print(f'Request: {self.request!r}')
a year ago
and my views.py & tasks.py:
# views.py
@login_required
@csrf_exempt
def delete_ticket_fields(request):
    if request.method == "POST":
        field_ids = request.POST.getlist("ids[]")
        print(f"Field IDs to delete: {field_ids}")
        for field_id in field_ids:
            field = get_object_or_404(TicketFields, id=field_id)
            print(f"Deleting field with ID: {field_id}")
            result = delete_ticket_field_task.delay(
                field.zendesk_instance.zendesk_instance_url,
                field.zendesk_instance.author.email,
                field.zendesk_instance.api_token,
                field.zd_id,
            )
            print(f"Task result: {result}")
            field.delete()
        return JsonResponse({"status": "success"})
    return JsonResponse({"status": "failed"}, status=400)
# tasks.py
@shared_task(bind=True)  # bind=True so self.retry() is available in the except block
def delete_ticket_field_task(self, zendesk_instance_url, author_email, api_token, field_id):
    try:
        logger.info(f"Starting task to delete field: {field_id}")
        response = zd_delete_ticket_field(zendesk_instance_url, author_email, api_token, field_id)
        if response.status_code == 204:
            TicketFields.objects.filter(zd_id=field_id).delete()
        logger.info(f"Task completed with status: {response.status_code}")
        return response.status_code
    except Exception as exc:
        logger.error(f"Error occurred: {exc}")
        raise self.retry(exc=exc, countdown=60)
a year ago
Have you set a REDIS_URL service variable?
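If you want to double-check from inside the deployed container, something like this should show whether the variable is actually visible to the process (a rough sketch; nothing platform-specific is assumed):
python -c "import os; print(os.environ.get('REDIS_URL', 'NOT SET -> Celery falls back to redis://localhost:6379/0'))"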
Yes, it's all set up. The Redis connection works when the service is first deployed, but I get a connection error when I trigger my Celery task. I was thinking this might be due to the new proxy? Not sure, to be honest.
a year ago
I start Celery alongside Django on the same service.
CMD ["sh", "-c", "python manage.py migrate && python manage.py collectstatic --noinput && gunicorn mysite.wsgi:application --bind 0.0.0.0:8000 & celery -A mysite worker --loglevel=info --concurrency=8 & celery -A mysite beat --loglevel=info & tail -f /dev/null"]
My Django service is running both Django and Celery on the same instance. I have done this before and replicated the exact same code, which is why I am wondering whether the main difference between my two projects is the runtime (legacy vs V2) and the new edge proxy.
Attachments
a year ago
Please go ahead and deploy Django and Celery as separate services; it is not ideal to run them inside a single service.
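In practice that means separate services pointing at the same code, each with its own start command, roughly along these lines (a sketch; adjust module names and options to your project):
# web service
gunicorn mysite.wsgi:application --bind 0.0.0.0:8000
# worker service
celery -A mysite worker --loglevel=info --concurrency=8
# beat service
celery -A mysite beat --loglevel=info
Each of them needs the same REDIS_URL variable set so the broker settings above resolve correctly.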
a year ago
But to answer your question, no this is not likely to be an issue with the runtime or edge proxy, this is almost certainly a misconfiguration on your side of things.
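One quick way to confirm what Celery actually resolved is to open a Django shell in the deployed environment and print the broker URL (assuming the celery.py shown above):
python manage.py shell
>>> from mysite.celery import app
>>> print(app.conf.broker_url)
If that prints redis://localhost:6379/0, the process never saw REDIS_URL and is falling back to the default in your settings.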
a year ago
Got it, thanks mate
a year ago
Please, how did you solve the problem? I am also facing the same challenge.