I was trying to create a Qwen-Image workflow ASAP ;)
get GGUF: https://huggingface.co/city96/Qwen-Image-gguf/tree/main
get VAE and CLIP: https://huggingface.co/Comfy-Org/Qwen-Image_ComfyUI/tree/main/split_files
Update: there are at least two Abliterated (uncensored to a degree) GGUF versions of the CLIP text encoder for this model:
1) https://huggingface.co/mradermacher/Qwen2.5-VL-7B-Instruct-abliterated-GGUF/tree/main
2) https://huggingface.co/mradermacher/Qwen2.5-VL-7B-Abliterated-Caption-it-GGUF/tree/main <--- this one is my personal favorite!
3) for the 4- or 8-step LoRA, go to this link: https://huggingface.co/lightx2v/Qwen-Image-Lightning/tree/main
P.S. The model is very sensitive to photography settings. Try to be careful with the depth of field and shallow focus in your prompts.
Description
This 8-step workflow has 2 nodes that are absolutely necessary to squeeze the maximum quality out of this kind of fast image generation:
1) Qwen-Image has somewhat unusual officially suggested image sizes for generation:
- "1:1": 1328 × 1328
- "16:9": 1664 × 928
- "9:16": 928 × 1664
- "4:3": 1472 × 1140
- "3:4": 1140 × 1472
- "3:2": 1584 × 1056
- "2:3": 1056 × 1584
so for these the workflow uses the "EmptyLatentImageCustom" node
2) "ClownsharKSampler" node for proper access to additional samplers/schedulers + bongmath by RES4LYF
you will also need to get the 4- or 8-step LoRA: https://huggingface.co/lightx2v/Qwen-Image-Lightning/tree/main
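The official size table above can be expressed as a small lookup. This is a minimal sketch (not part of the workflow itself); `pick_size` is a hypothetical helper that snaps an arbitrary requested size to the closest officially suggested one:

```python
# Officially suggested Qwen-Image generation sizes, keyed by aspect ratio
# (values taken from the table above).
QWEN_IMAGE_SIZES = {
    "1:1": (1328, 1328),
    "16:9": (1664, 928),
    "9:16": (928, 1664),
    "4:3": (1472, 1140),
    "3:4": (1140, 1472),
    "3:2": (1584, 1056),
    "2:3": (1056, 1584),
}

def pick_size(width: int, height: int) -> tuple[int, int]:
    """Return the suggested (width, height) whose aspect ratio is
    closest to the requested width/height ratio."""
    target = width / height
    return min(
        QWEN_IMAGE_SIZES.values(),
        key=lambda wh: abs(wh[0] / wh[1] - target),
    )

# e.g. a 1920x1080 request snaps to the official 16:9 size
print(pick_size(1920, 1080))  # (1664, 928)
```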
Comments (19)
If using a 4090, which is better and faster: Q6_K, Q8_0, or FP8?
On a 4090 there are some claims that safetensors FP8 is the fastest (not GGUF)
@creatumundo399 GGUFs load to the gpu on a 4090. I know because I do that with my 4090. I don't even do anything special to do that.
I get this error line spamming my console while generating, with different numbers between blocks and img. What is up with that?
update comfy to nightly!!!
When i use the uncensored version on the unet model loader: ERROR: Could not detect model type of: ..\Qwen2.5-VL-7B-Instruct-abliterated.Q8_0.gguf
Do you know what i have to do?
1) Is your ComfyUI updated to the latest nightly version?
2) Maybe re-check what you downloaded as the GGUF file - is it complete?
3) Dunno - it should work out of the box.
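One quick way to check point 2 (whether the downloaded GGUF file is complete) is to verify its header: every valid GGUF file starts with the 4-byte magic `GGUF` followed by a little-endian version number. This is an illustrative sketch, not a tool shipped with ComfyUI, and it only validates the header, not the whole download:

```python
import struct

def looks_like_gguf(path: str) -> bool:
    """Return True if the file starts with a plausible GGUF header:
    the magic bytes b"GGUF" plus a known format version (1-3 at the
    time of writing)."""
    with open(path, "rb") as f:
        header = f.read(8)
    if len(header) < 8 or header[:4] != b"GGUF":
        return False
    version = struct.unpack("<I", header[4:8])[0]
    return version in (1, 2, 3)
```

If this returns False, the download is likely truncated or corrupted; re-download the file and compare its size against the one listed on the Hugging Face page.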
No, I'm on the stable version - is that the problem?
I'm having the same issue. My ComfyUI version is 0.3.52, with the latest updated GGUF custom nodes. I can run Qwen2.5-VL-7B-Instruct (with the correctly named associated mmproj file) with the GGUF clip loader, which works perfectly. However, when I try to load the files you link, Qwen2.5-VL-7B-Instruct-abliterated or Qwen2.5-VL-7B-Abliterated-Caption-it (with their own correctly named mmproj in the same file location), with the GGUF clip loader, then I get the error. I've tried deleting the GGUF custom node and re-installing, and following the directions here (https://github.com/city96/ComfyUI-GGUF/issues/317), but I can't get the abliterated files to work.
@animewrongdoer157 Update ComfyUI and delete GGUF from custom nodes; I fixed it by doing that.
@ThatHurts <3
For me, I had to manually update the ComfyUI-GGUF node from the Manager (hit "Try update") and then it worked. I reinstalled it first, but got the same error until I manually updated from the Manager.
@bahaba7208305 yes, we update Comfy with both the .bat file + the Manager for custom nodes
I'm just getting a black image.
Sage attention being ON is known to cause that
Yes, remove --use-sage-attention from your startup .bat (that resolved it instantly for me). I highly recommend trying out the deis_3m_ode sampler with the ays+ scheduler; incredible results: much better prompt adherence, especially if your prompt is well-formatted.
The models you mention don't work with Nunchaku qwen-image-edit (a different matrix dimension error). Anything that works with Nunchaku?
then at least replicate the RES4LYF nodes' logic and samplers