OLLAMA + DEEPSEEK r1 1.5b = extremely slow
aistrategyha
FREEOP
a year ago
I've set up Ollama + WebUI and pulled llama3 and deepseek r1 1.5b, but for both models the chat is extremely slow (practically unusable), and Ollama is not using the available resources at all (I'm on a plan with 32 vCPU and 32 GB RAM). Would you know why that is? Thanks
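For reference, one thing worth checking in a setup like this: Ollama exposes a per-model `num_thread` parameter (passed through to llama.cpp) that controls how many CPU threads inference uses, and it can be overridden in a Modelfile. A minimal sketch, assuming the `deepseek-r1:1.5b` model tag and a 32 vCPU machine (the thread count and derived model name here are illustrative, not a confirmed fix for this thread):

```
# Modelfile — illustrative override, not a confirmed diagnosis
FROM deepseek-r1:1.5b
# num_thread sets the CPU thread count llama.cpp uses for computation
PARAMETER num_thread 32
```

Built with `ollama create deepseek-r1-32t -f Modelfile`, this produces a variant of the model pinned to 32 threads, which can help rule out under-threading as the cause of low CPU utilization.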
1 Reply
Status changed to Awaiting User Response Railway • about 1 year ago
Railway
BOT
7 months ago
This thread has been marked as solved automatically due to a lack of recent activity. Please re-open this thread or create a new one if you require further assistance. Thank you!
Status changed to Solved Railway • 7 months ago