ComfyUI Workflow
Requires: ComfyUI_Eclipse 3.4.x
It's a very basic workflow that uses nodes from Eclipse, which supports 8 backends:
transformers
gguf (llama-cpp-python)
ollama (docker), vllm (docker), sglang (docker), llama.cpp (docker)
wd14 (onnx)
yolo
Image Description (VLM / WD14) or text input (LLM)
Detection (Face, Eyes, Hands, and other areas ;) using Florence2 and Yolo (generic Qwen detections are usually meh... but detection works very well with huihui Qwen 3.5 9B Claude 4.6)
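Whichever detector you pick, the detection path ultimately hands back labeled bounding boxes. A minimal stdlib sketch, assuming Florence-2's post-processed `{'<OD>': {'bboxes': [...], 'labels': [...]}}` output shape (the `filter_detections` helper, its label filter, and the `pad` parameter are illustrative, not Eclipse's actual node code):

```python
def filter_detections(florence_output: dict, wanted: set[str], pad: int = 8,
                      image_size: tuple[int, int] = (1024, 1024)) -> list[dict]:
    """Keep only wanted labels and pad each box, clamped to the image bounds."""
    od = florence_output.get("<OD>", {})
    w, h = image_size
    regions = []
    for (x1, y1, x2, y2), label in zip(od.get("bboxes", []), od.get("labels", [])):
        if label.lower() not in wanted:
            continue  # e.g. drop everything that is not a face/eye/hand region
        regions.append({
            "label": label,
            "box": (max(0, int(x1) - pad), max(0, int(y1) - pad),
                    min(w, int(x2) + pad), min(h, int(y2) + pad)),
        })
    return regions

# Hypothetical detector output with two labeled boxes:
demo = {"<OD>": {"bboxes": [[100, 120, 180, 200], [400, 400, 500, 520]],
                 "labels": ["face", "hand"]}}
print(filter_detections(demo, {"face"}))
# → [{'label': 'face', 'box': (92, 112, 188, 208)}]
```

The padding step mirrors what detailer-style workflows usually do before cropping: grow the box a little so the inpaint region isn't cut off at the detection edge.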
Try this Claude variant for image-to-prompt descriptions / detection (also very good): in most cases, the model doesn't mince words and describes exactly what it "sees".
ollama: huihui_ai/qwen3.5-abliterated:9b-Claude
transformers: https://huggingface.co/huihui-ai/Huihui-Qwen3.5-9B-Claude-4.6-Opus-abliterated
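For the ollama backend, the request that a description task boils down to can be sketched with the stdlib alone. This assumes a local Ollama container on its default port 11434 and uses the standard `/api/generate` endpoint with base64-encoded images; the helper below only builds the payload (the actual send is left as a comment so the sketch runs offline):

```python
import base64
import json

OLLAMA_URL = "http://localhost:11434/api/generate"  # Ollama's default port

def build_describe_request(image_bytes: bytes,
                           model: str = "huihui_ai/qwen3.5-abliterated:9b-Claude") -> dict:
    """Build an Ollama /api/generate payload asking a VLM to describe an image."""
    return {
        "model": model,
        "prompt": "Describe this image in detail.",
        "images": [base64.b64encode(image_bytes).decode("ascii")],
        "stream": False,  # one JSON response instead of a token stream
    }

payload = build_describe_request(b"\x89PNG...")  # placeholder bytes, not a real image
body = json.dumps(payload).encode("utf-8")
# To actually send it:
# import urllib.request
# req = urllib.request.Request(OLLAMA_URL, data=body,
#                              headers={"Content-Type": "application/json"})
# print(json.load(urllib.request.urlopen(req))["response"])
```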
Description
The latest Smart LM / Smart Detection update requires reloading the nodes in existing workflows; I removed the compat remapping because it was getting too messy to maintain.
The widget order is also reorganized now so it makes sense by itself, instead of being reshuffled in JS every time a new widget is added (I'm not planning to add more... but just in case).
After downloading the AceStep workflow I noticed a big gap in the settings that Ollama and the other Docker backends actually support; some settings were even silently dropped. Since I don't want to use the Ollama server directly but go through the Docker backends the LM / Detection nodes already use, Eclipse 3.4 ships a lot of adjustments and fixes there. But as mentioned, you'll have to reload the nodes in older workflows. Sorry about that.
If you haven't installed Docker yet, I'd suggest taking the time to set it up: inference is much faster than Transformers, and the Ollama registry has a lot of models now (both standard and abliterated).
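As a setup sketch, this is the standard way to run Ollama in Docker and pull the model recommended above (commands follow the official Ollama Docker image docs; drop `--gpus=all` if you have no NVIDIA GPU / NVIDIA Container Toolkit):

```shell
# Start the Ollama server container (default API port 11434)
docker run -d --gpus=all -v ollama:/root/.ollama -p 11434:11434 --name ollama ollama/ollama

# Pull the recommended model inside the running container
docker exec -it ollama ollama pull huihui_ai/qwen3.5-abliterated:9b-Claude
```

The named volume `ollama` keeps downloaded models across container restarts.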
To get tags in the direct chat example, enable multi-task and set task 2 to "natural language to tags" ;)
