OLLAMA + DEEPSEEK r1 1.5b = extremely slow

aistrategyha
FREE

4 months ago

I've set up Ollama + WebUI and pulled llama3 and deepseek-r1 1.5b, but for both the chat is extremely slow (actually useless), and Ollama is not using the available resources at all (I have a plan with 32 vCPU and 32 GB RAM). Would you know why that is? Thanks

Awaiting User Response

1 Reply

4 months ago

Hello,

We do not support GPUs, so that is most likely why it's so slow.
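A quick way to confirm the model is running CPU-only (a sketch, assuming you can shell into the container running Ollama; the `num_thread` value of 32 below is an assumption matched to the 32 vCPU plan mentioned above):

```shell
# Show loaded models; "100% CPU" in the PROCESSOR column
# confirms there is no GPU offload happening.
ollama ps

# To make better use of the available vCPUs, you can raise the
# CPU thread count via a Modelfile parameter, e.g.:
#
#   FROM deepseek-r1:1.5b
#   PARAMETER num_thread 32
#
# then build and run the tuned model:
# ollama create deepseek-r1-tuned -f Modelfile
# ollama run deepseek-r1-tuned
```

Even with more threads, token generation on CPU will remain far slower than on a GPU, so expectations should be set accordingly.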


Status changed to Awaiting User Response railway[bot] 4 months ago