OLLAMA + DEEPSEEK r1 1.5b = extremely slow
aistrategyha
FREE
6 months ago
I've set up Ollama + WebUI and pulled llama3 and deepseek r1 1.5b, but for both models the chat is extremely slow (actually useless), and Ollama is not using the available resources at all (I have a plan with 32 vCPU and 32 GB RAM). Would you know why that is? Thanks
1 Reply
6 months ago
Hello,
We do not support GPUs, so that's likely why it's so slow.
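If you want to verify that CPU-only inference is the bottleneck, here is a minimal sketch that queries Ollama's local REST API and reports generation throughput. Assumptions: Ollama is reachable on its default port 11434, your model tag is `deepseek-r1:1.5b` (adjust to match `ollama list`), and the `num_thread` value of 32 is illustrative, matching the vCPU count you mentioned.

```python
# Minimal sketch: measure raw generation speed against a local Ollama instance.
# Assumes the default endpoint http://localhost:11434 and the tag "deepseek-r1:1.5b".
import requests

resp = requests.post(
    "http://localhost:11434/api/generate",
    json={
        "model": "deepseek-r1:1.5b",
        "prompt": "Say hello in one sentence.",
        "stream": False,
        # num_thread is a standard Ollama runtime option; raising it toward
        # your vCPU count can help if Ollama isn't saturating the CPU.
        # The value 32 here is an assumption based on the plan size above.
        "options": {"num_thread": 32},
    },
    timeout=600,
)
data = resp.json()

# The response includes eval_count (tokens generated) and eval_duration
# (nanoseconds spent generating), from which tokens/sec follows directly.
tokens_per_sec = data["eval_count"] / (data["eval_duration"] / 1e9)
print(f"generated {data['eval_count']} tokens at {tokens_per_sec:.1f} tok/s")
```

Even if all 32 vCPUs are engaged, CPU-only inference will remain far slower than GPU inference, particularly for llama3's 8B weights; a 1.5b model is the more realistic option on CPU.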
Status changed to Awaiting User Response Railway • 6 months ago
Railway
BOT
a month ago
This thread has been marked as solved automatically due to a lack of recent activity. Please re-open this thread or create a new one if you require further assistance. Thank you!
Status changed to Solved Railway • about 1 month ago