Hi,
I'm facing an issue where pod memory (RAM) increases steadily over time while my application is in use. When the application is idle, the pod's memory stays stable, but the memory is never released. It looks like a memory leak.
I notice this issue in all my Python pods (across different environments). Whenever the pods are restarted or redeployed, RAM usage drops back to the normal level.
The context is Python-based microservices: an API pod running FastAPI, and task pods that consume messages from a RabbitMQ queue, process dataframes, and return the results via a reply message.
The application is deployed via a Dockerfile, and the deployment itself succeeds. It was previously deployed on a bare-metal Kubernetes cluster, and no memory leak was ever noticed there.
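For reference, each task pod is roughly structured like this (a simplified sketch, not the actual code; the pika client, queue name, and processing step are just for illustration):

```python
import json

import pandas as pd
import pika

# Simplified setup: one blocking RabbitMQ consumer per task pod.
connection = pika.BlockingConnection(pika.ConnectionParameters(host="rabbitmq"))
channel = connection.channel()
channel.queue_declare(queue="tasks", durable=True)


def on_message(ch, method, properties, body):
    # Build a dataframe from the message payload, process it, and publish
    # the result back on the reply queue named in the message properties.
    df = pd.DataFrame(json.loads(body))
    result = df.describe().to_json()  # stand-in for the real heavy processing
    ch.basic_publish(exchange="", routing_key=properties.reply_to, body=result)
    ch.basic_ack(delivery_tag=method.delivery_tag)


channel.basic_consume(queue="tasks", on_message_callback=on_message)
channel.start_consuming()
```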
So far, I have tried:
- Offloading the processing to a ProcessPoolExecutor, isolating the heavy logic in a separate subprocess so its RAM can be released when the worker exits
- Instrumenting with resource.getrusage(), which confirmed that the parent process's memory was not being reclaimed after async processing
- Calling the garbage collector from within the application to explicitly release memory

None of this worked (a rough sketch of these attempts is below).
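The attempts looked something like this (simplified; heavy_processing, the pool size, and the handler are placeholders, not the actual application code):

```python
import asyncio
import gc
import resource
from concurrent.futures import ProcessPoolExecutor

executor = ProcessPoolExecutor(max_workers=2)


def heavy_processing(payload: dict) -> dict:
    # Placeholder for the real dataframe work; it runs in a worker process
    # so its memory goes back to the OS when the worker is recycled.
    return {"rows": len(payload.get("records", []))}


async def handle(payload: dict) -> dict:
    loop = asyncio.get_running_loop()
    result = await loop.run_in_executor(executor, heavy_processing, payload)

    # Explicit garbage collection in the parent after the async processing.
    gc.collect()

    # Peak RSS of the parent process (kilobytes on Linux), logged to watch
    # memory usage over time.
    peak_kb = resource.getrusage(resource.RUSAGE_SELF).ru_maxrss
    print(f"parent peak RSS: {peak_kb} kB")
    return result


if __name__ == "__main__":
    print(asyncio.run(handle({"records": [1, 2, 3]})))
```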
Any help would be highly appreciated
Regards