6 months ago
Description: A powerful workflow automation tool for technical people
Category: AI/ML
4 Replies
4 months ago
Hi Jack, thanks for putting this together.
I'm having trouble connecting to the Ollama and Qdrant services.
Should the deployment have values in the deployment variables?
For example, my Ollama one came in as ::
How can I use the Ollama models and the Qdrant service in n8n?
stockotaco
4 months ago
Good afternoon! Apologies for the delay.
It looks like you have the connections all set up, but need to pull the llama3.2
model for Ollama to get started using it.
I’ve made sure to attach an “Open WebUI” service for your convenience. To pull a model:

1. Open the service URL and create an admin account.
2. Head over to “Admin Panel” -> “Settings” -> “Connections”.
3. Hit the wrench on the Ollama API entry and enter the tag of the model you’d like to pull.

You should then immediately see it among the options inside n8n.
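If you'd rather skip the UI, Ollama also exposes an HTTP API for pulling models. Here's a minimal sketch — the URL is a placeholder (Ollama's default port is 11434; substitute your deployment's internal address), and the request itself is left commented out since it needs a reachable Ollama service:

```python
import json
import urllib.request

# Placeholder — replace with your deployment's internal Ollama URL.
OLLAMA_URL = "http://localhost:11434"

# Ollama's /api/pull endpoint takes the model tag as JSON.
payload = json.dumps({"model": "llama3.2"}).encode()

# Uncomment to actually pull (the response streams progress as JSON lines):
# req = urllib.request.Request(
#     f"{OLLAMA_URL}/api/pull",
#     data=payload,
#     headers={"Content-Type": "application/json"},
# )
# with urllib.request.urlopen(req) as resp:
#     for line in resp:
#         print(line.decode().strip())

print(payload.decode())
```

Once the pull finishes, the model shows up in any client pointed at the same Ollama instance, including the n8n Ollama nodes.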
jack
4 months ago
Thank you so much for the reply!
Yes, now I see how that works with Ollama.
Quick note on that... it's incredibly slow... is that expected? Generations take ages for simple things. I'm scared to try anything harder than a simple question.
And how does it work with the Qdrant service?
Thank you so much!
4 months ago
I’m not exactly sure how Qdrant works either, because I’ve never used it, but they document its features with a walkthrough in the repo.
As for Ollama being slow, I’m not sure there’s much we can do about that either, unfortunately. Its performance depends on the CPU or GPU available to execute the model.
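For anyone landing here with the same Qdrant question: Qdrant exposes a REST API for creating collections, inserting vectors, and searching. The sketch below only builds the request payloads rather than sending them (the URL, collection name, and vector size are all placeholders; actually issuing the requests requires a running Qdrant service):

```python
import json

# Placeholders — replace with your deployment's internal Qdrant URL and names.
QDRANT_URL = "http://localhost:6333"
COLLECTION = "demo"  # hypothetical collection name

# 1. Create a collection: PUT /collections/{name}
create_body = {"vectors": {"size": 4, "distance": "Cosine"}}

# 2. Upsert points: PUT /collections/{name}/points
upsert_body = {
    "points": [
        {"id": 1, "vector": [0.1, 0.2, 0.3, 0.4], "payload": {"doc": "hello"}},
    ]
}

# 3. Search: POST /collections/{name}/points/search
search_body = {"vector": [0.1, 0.2, 0.3, 0.4], "limit": 3}

for method_and_path, body in [
    (f"PUT {QDRANT_URL}/collections/{COLLECTION}", create_body),
    (f"PUT {QDRANT_URL}/collections/{COLLECTION}/points", upsert_body),
    (f"POST {QDRANT_URL}/collections/{COLLECTION}/points/search", search_body),
]:
    print(method_and_path, json.dumps(body))
```

In a typical n8n setup you wouldn't call these endpoints by hand — you'd point n8n's Qdrant vector-store node at the same internal service URL and let it handle the requests.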