Flowise is a visual no-code AI workflow builder used to create LLM pipelines, RAG systems, chatbots and agent flows through a drag-and-drop UI.
It provides a browser-based interface that makes it easy to combine LLMs, vector databases, APIs, tools, and custom logic into complete applications. Flowise focuses on rapid prototyping and operational deployment rather than low-level model management. It is open source, modular, and supports both local and cloud environments.
Flowise can be used to build chatbots, RAG systems, agent workflows, and complete LLM pipelines.
Flowise also integrates with major vector databases (Qdrant, Milvus, Chroma, Pinecone), local LLM engines (Ollama), and cloud LLM providers (OpenAI, Anthropic, Mistral).
Install OpenWebUI and Ollama to run your local Ollama service. This setup is optimized for a single-GPU server and integrates directly with Flowise.
Configure the chat model node as follows (a verification sketch follows this list):

- Model node: select ChatOllama.
- Base URL: if Flowise runs in a separate container, use http://host.docker.internal:11434 rather than localhost, since localhost would refer to the Flowise container itself.
- Model Name: must exactly match the model name listed in Ollama/OpenWebUI.
- Temperature: a value around 0.3 is recommended for factual tasks; higher values (e.g., 0.8) increase creativity but also hallucination risk.
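You can verify the Base URL and the exact model name from outside Flowise by querying Ollama's REST API directly. A minimal sketch in Python, assuming Ollama listens at http://host.docker.internal:11434 (adjust the host for your networking mode); GET /api/tags is Ollama's endpoint for listing installed models:

```python
import requests

# Ollama as seen from the Flowise container (assumption: Docker bridge mode;
# use http://localhost:11434 when running directly on the host).
OLLAMA_URL = "http://host.docker.internal:11434"

# GET /api/tags lists every model installed in Ollama.
resp = requests.get(f"{OLLAMA_URL}/api/tags", timeout=10)
resp.raise_for_status()

models = [m["name"] for m in resp.json().get("models", [])]
print("Installed models:", models)

# The ChatOllama "Model Name" field must match one of these exactly,
# including the tag, e.g. "llama3:8b" rather than "llama3".
```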
Flowise itself does not require a GPU.
GPU usage depends on the models behind Ollama or other LLM backends.
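To see which models are actually occupying GPU memory, recent Ollama versions expose GET /api/ps, which reports the models currently loaded and how much of each resides in VRAM. A small sketch (endpoint and field names assume a current Ollama release):

```python
import requests

OLLAMA_URL = "http://host.docker.internal:11434"  # assumption: bridge mode

# GET /api/ps reports models currently loaded into memory.
resp = requests.get(f"{OLLAMA_URL}/api/ps", timeout=10)
resp.raise_for_status()

for model in resp.json().get("models", []):
    vram_gb = model.get("size_vram", 0) / 1e9  # bytes held in GPU memory
    print(f"{model['name']}: {vram_gb:.1f} GB in VRAM")
```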
For RAG pipelines, Flowise includes Retriever / Vector DB nodes: select the database type (e.g., Qdrant), then configure the connection URL. From a Flowise container in Docker bridge mode, Qdrant is typically reachable at http://host.docker.internal:6333 (a quick connectivity check is sketched after the notes below).

Troubleshooting:

- Flowise UI not reachable → start the container with -p 3000:3000 or --network host, and open the browser on the correct host IP (use our default Flowise template and the Secure URL provided in the Server Dashboard).
- localhost used inside the container → use http://host.docker.internal:11434 instead if Flowise runs in Docker bridge mode.
- Flowise stores its data under /root/.flowise.
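Before wiring the retriever, it is worth confirming that Qdrant is reachable from your environment. A minimal connectivity check, assuming Qdrant's default REST port 6333 and Docker bridge networking; GET /collections lists the existing collections:

```python
import requests

# Qdrant as seen from the Flowise container (assumption: bridge mode).
QDRANT_URL = "http://host.docker.internal:6333"

# GET /collections lists all collections on the Qdrant instance.
resp = requests.get(f"{QDRANT_URL}/collections", timeout=10)
resp.raise_for_status()

names = [c["name"] for c in resp.json()["result"]["collections"]]
print("Qdrant collections:", names)
```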
Flowise can call ComfyUI's API. Install ComfyUI easily and securely with our Blibs template: ComfyUI Template for GPU Server. There is no native Flowise node for ComfyUI yet, so integrate via the HTTP Request node:

POST http://<host>:8188/prompt
From a Flowise container in Docker bridge mode, the URL is http://host.docker.internal:8188/prompt.

Headers: Content-Type: application/json

Body:

{
  "prompt": {
    "1": {
      "inputs": {
        "seed": 1,
        "steps": 20,
        "cfg": 8,
        "sampler_name": "euler",
        "scheduler": "normal",
        "positive": "a robot with wings",
        "negative": ""
      },
      "class_type": "KSampler"
    }
  }
}
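The same request can be scripted to test the endpoint before building the flow. A minimal sketch, assuming ComfyUI is reachable at http://host.docker.internal:8188 and reusing the simplified payload above (a complete workflow JSON normally also contains loader and text-encoder nodes that the KSampler references by ID); POST /prompt queues the job and returns a prompt_id:

```python
import requests

# ComfyUI as seen from the Flowise container (assumption: bridge mode).
COMFYUI_URL = "http://host.docker.internal:8188"

# Simplified single-node payload from above, for illustration only; a real
# workflow links the KSampler to checkpoint/CLIP/latent nodes by ID.
payload = {
    "prompt": {
        "1": {
            "inputs": {
                "seed": 1,
                "steps": 20,
                "cfg": 8,
                "sampler_name": "euler",
                "scheduler": "normal",
                "positive": "a robot with wings",
                "negative": "",
            },
            "class_type": "KSampler",
        }
    }
}

# POST /prompt queues the generation; the response carries a prompt_id.
resp = requests.post(f"{COMFYUI_URL}/prompt", json=payload, timeout=10)
resp.raise_for_status()
print("Queued prompt:", resp.json()["prompt_id"])
```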
- Schedule image generations or LLM workflows on intervals or events.
- Save ComfyUI output URLs into Qdrant / PostgreSQL for retrieval inside flows.
- ComfyUI queues jobs → Flowise needs polling logic via a loop plus the HTTP Request node (see the sketch below).
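Because POST /prompt only queues the job, results have to be fetched afterwards. A minimal polling sketch, assuming the prompt_id returned by the call above; GET /history/<prompt_id> returns an empty object while the job is pending and the full history entry, including output image filenames, once it has finished. Those filenames (or URLs built from them) are what you would store in Qdrant or PostgreSQL for later retrieval:

```python
import time
import requests

COMFYUI_URL = "http://host.docker.internal:8188"  # assumption: bridge mode


def wait_for_result(prompt_id: str, poll_s: float = 2.0, timeout_s: float = 300.0) -> dict:
    """Poll GET /history/<prompt_id> until ComfyUI reports the job as finished."""
    deadline = time.monotonic() + timeout_s
    while time.monotonic() < deadline:
        resp = requests.get(f"{COMFYUI_URL}/history/{prompt_id}", timeout=10)
        resp.raise_for_status()
        history = resp.json()
        if prompt_id in history:  # stays {} while the job is queued/running
            return history[prompt_id]
        time.sleep(poll_s)
    raise TimeoutError(f"ComfyUI job {prompt_id} did not finish in {timeout_s}s")


result = wait_for_result("<prompt_id from the POST /prompt response>")
# Each output node lists its generated images; persist these filenames
# (or URLs derived from them) in Qdrant / PostgreSQL for use inside flows.
for node_id, output in result.get("outputs", {}).items():
    for image in output.get("images", []):
        print(node_id, image["filename"])
```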