ComfyUI Workflow for Flux.2 Klein
Any feedback would be appreciated.
📂 Required Models
Diffusion Model: flux2-klein-4b (or any of the other versions) → .../ComfyUI/models/diffusion_models/
Text Encoder: qwen_3__4b or qwen_3__8b → .../ComfyUI/models/text_encoders/
VAE: flux2-vae → .../ComfyUI/models/vae/
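A minimal sketch of the expected folder layout. COMFY_ROOT is a placeholder for your ComfyUI install path (the ".../" in the paths above), and the .safetensors extensions in the comments are assumptions — match them to the files you actually downloaded.

```shell
# Create the model folders under your ComfyUI install (path is a placeholder).
COMFY_ROOT="${COMFY_ROOT:-./ComfyUI}"
mkdir -p "$COMFY_ROOT/models/diffusion_models" \
         "$COMFY_ROOT/models/text_encoders" \
         "$COMFY_ROOT/models/vae"
# Then place the downloaded files, e.g. (extensions assumed):
#   flux2-klein-4b.safetensors -> models/diffusion_models/
#   qwen_3__4b.safetensors     -> models/text_encoders/
#   flux2-vae.safetensors      -> models/vae/
ls "$COMFY_ROOT/models"
```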
🧩 Required Custom Nodes
🟩 ComfyUI-Manager (by Comfy-Org)
https://github.com/Comfy-Org/ComfyUI-Manager
🟩 rgthree-comfy (by rgthree)
https://github.com/rgthree/rgthree-comfy
🟩 ComfyUI-GGUF (by city96)
https://github.com/city96/ComfyUI-GGUF
⚙️ Recommended Settings
Steps: 4 (distilled)
CFG: 1.0 (distilled)
Sampler: Euler
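The recommended settings above can be sketched as the "inputs" of a KSampler node in ComfyUI's API-format workflow JSON. This is a hedged illustration, not the actual workflow file: the node id "3" and the "scheduler"/"denoise" values are assumptions.

```python
import json

# Recommended settings for the distilled Flux.2 Klein model,
# as they would appear in a KSampler node's inputs.
ksampler_inputs = {
    "steps": 4,              # distilled model: few steps needed
    "cfg": 1.0,              # distilled model: guidance is baked in
    "sampler_name": "euler",
    "scheduler": "simple",   # assumption: a common default, adjust as needed
    "denoise": 1.0,          # assumption: full denoise for text-to-image
}

# Node id "3" is a placeholder; real ids come from the exported workflow.
workflow_fragment = {"3": {"class_type": "KSampler", "inputs": ksampler_inputs}}
print(json.dumps(workflow_fragment, indent=2))
```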
Experiment and enjoy!

Description
Version 2.0 upload.
(This is not an exhaustive changelog.)
Updated to V2.0:
Added upscaler
FAQ
Comments (4)
The best just got better! Thank you <3
very good!!!
As always, great work! Thanks a lot! Just a quick addition. You can actually use quantized text encoders as well with a GGUF clip loader: https://huggingface.co/bartowski/Qwen_Qwen3-8B-GGUF
It helps a lot for the VRAM poor.
How can I control the final image size myself? Also, I want to add the easy cleanGpuUsed node to clear VRAM; can I place it between ImageUpscaleWithModel and VAEDecode?