ComfyUI Workflow
Requires: ComfyUI_Eclipse 3.3.x
It's a very basic workflow that uses 2 nodes from SmartLML Eclipse, which now supports 8 backends:
transformers
gguf (llama-cpp-python)
ollama (docker), vllm (docker), sglang (docker), llama.cpp (docker)
wd14 (onnx)
yolo
One for image description (VLM / WD14) or text input (LLM)
One for detection (face, eyes, hands, and other areas ;) using Florence2 and YOLO (Qwen detections are mediocre and still under construction, but it works very well with Huihui Qwen3.5 9B Claude 4.6)
Try the Claude-based model below for image-to-prompt descriptions / detection (also very good):
In most cases, the model doesn't mince words and describes exactly what it "sees".
transformers: https://huggingface.co/huihui-ai/Huihui-Qwen3.5-9B-Claude-4.6-Opus-abliterated
ollama: huihui_ai/qwen3.5-abliterated:9b-Claude
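If you want to try the suggested model with the ollama backend, you can pull it ahead of time using the tag above (a setup sketch with the standard Ollama CLI; adjust if you run Ollama inside Docker as listed in the backends):

```shell
# Pull the abliterated Qwen3.5 model referenced above for the ollama backend
ollama pull huihui_ai/qwen3.5-abliterated:9b-Claude

# Verify it is available locally before selecting it in the Eclipse node
ollama list
```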
Description
Updated version:
uses the Eclipse SML nodes
introduces a basic example of the new bridge nodes, which allow wireless control (subgraph-aware: see iGEN ONE).
This workflow and the iGEN workflows ship with the latest version of Eclipse ;)
