CivArchive
    Qwen-Image WF - 4-8 Steps
    NSFW

    I was trying to create a Qwen-Image workflow ASAP ;)
    Get the GGUF: https://huggingface.co/city96/Qwen-Image-gguf/tree/main
    Get the VAE and CLIP: https://huggingface.co/Comfy-Org/Qwen-Image_ComfyUI/tree/main/split_files

    Update: there are at least two Abliterated (uncensored to a degree) GGUF versions of the CLIP text encoder for this model:
    1) https://huggingface.co/mradermacher/Qwen2.5-VL-7B-Instruct-abliterated-GGUF/tree/main
    2) https://huggingface.co/mradermacher/Qwen2.5-VL-7B-Abliterated-Caption-it-GGUF/tree/main <--- this one is my personal favorite!
    And for the 4-8 step LoRA, go to this link: https://huggingface.co/lightx2v/Qwen-Image-Lightning/tree/main

    P.S. The model is very sensitive to photography settings. Try to be careful with the depth of field and shallow focus in your prompts.

    Description

    This 8-step workflow has 2 nodes that are absolutely necessary to squeeze the maximum quality out of this kind of fast image generation:
    1) Qwen-Image has somewhat unusual officially suggested image sizes for generation:
    "1:1": (1328, 1328), "16:9": (1664, 928), "9:16": (928, 1664), "4:3": (1472, 1104), "3:4": (1104, 1472), "3:2": (1584, 1056), "2:3": (1056, 1584) ... so for these it uses the "EmptyLatentImageCustom" node
    2) the "ClownsharKSampler" node for proper access to additional samplers/schedulers + bongmath by RES4LYF
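The suggested presets above can be kept in a small lookup table. A minimal Python sketch (the function name here is illustrative, not part of the workflow; note the official 4:3/3:4 presets are 1472x1104):

```python
# Official Qwen-Image resolution presets, (width, height) per aspect ratio.
QWEN_IMAGE_SIZES = {
    "1:1":  (1328, 1328),
    "16:9": (1664, 928),
    "9:16": (928, 1664),
    "4:3":  (1472, 1104),
    "3:4":  (1104, 1472),
    "3:2":  (1584, 1056),
    "2:3":  (1056, 1584),
}

def pick_size(aspect: str) -> tuple[int, int]:
    """Return the (width, height) preset for an aspect-ratio key."""
    return QWEN_IMAGE_SIZES[aspect]

# Every preset is a multiple of 16, as latent-diffusion models expect:
assert all(w % 16 == 0 and h % 16 == 0 for w, h in QWEN_IMAGE_SIZES.values())
```

Feeding arbitrary sizes instead of these presets tends to degrade quality, which is why the workflow pins them in a custom latent node.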

    You will also need the 4- or 8-step LoRA: https://huggingface.co/lightx2v/Qwen-Image-Lightning/tree/main


    Comments (19)

    sekaiwlc07860 · Aug 12, 2025
    CivitAI

    If using a 4090, which of Q6_K, Q8_0, and FP8 is better and faster?

    0l1v1aR0551
    Author
    Aug 13, 2025 · 3 reactions

    on a 4090 there are some claims that safetensors FP8 is the best for speed (not GGUF)

    creatumundo399 · Aug 19, 2025 · 1 reaction
    For a 4090, fp8 will always be better than GGUF. The GGUF models are for people with low VRAM, since the model is loaded on the CPU rather than the GPU. But that makes image generation much slower, because the CPU is not as good at that type of calculation. The .safetensors models (fp8, fp16), on the other hand, are loaded on the GPU.

    scorpiove · Oct 5, 2025 · 1 reaction

    @creatumundo399 GGUFs load to the gpu on a 4090. I know because I do that with my 4090. I don't even do anything special to do that.
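As context for the fp8-vs-GGUF debate above, here is a rough back-of-the-envelope estimate of the weight footprint at each quantization. This is a sketch: the ~20B parameter count and the GGUF bits-per-weight figures are approximations, not measured numbers.

```python
# Approximate bits per weight for common formats; GGUF k-quant values
# are rough averages (block scales included), not exact.
BITS_PER_WEIGHT = {
    "fp16":   16.0,
    "fp8":    8.0,
    "Q8_0":   8.5,   # 8-bit blocks plus per-block scale
    "Q6_K":   6.56,  # k-quant, approximate
    "Q4_K_M": 4.85,  # k-quant mix, approximate
}

def model_gib(quant: str, params_b: float = 20.0) -> float:
    """Approximate weight footprint in GiB for a given quantization,
    assuming a ~20B-parameter diffusion model."""
    return params_b * 1e9 * BITS_PER_WEIGHT[quant] / 8 / 2**30
```

By this estimate the fp8 weights alone come to roughly 18-19 GiB, which is why a 24 GB card like the 4090 can hold them fully on the GPU; the GGUF quants below Q8_0 mainly buy headroom on smaller cards.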

    MetaGen · Aug 15, 2025
    ERROR lora diffusion_model.transformer_blocks.44.img_mlp.net.2.weight Allocation on device

    I get this error line spamming my console while generating, with different numbers between blocks and img, what is up with that?

    0l1v1aR0551
    Author
    Aug 15, 2025

    update comfy to nightly!!!

    ThatHurts · Aug 25, 2025

    When I use the uncensored version in the UNet model loader: ERROR: Could not detect model type of: ..\Qwen2.5-VL-7B-Instruct-abliterated.Q8_0.gguf

    Do you know what i have to do?

    0l1v1aR0551
    Author
    Aug 25, 2025

    1) is your Comfy updated to the latest nightly version?
    2) maybe re-check the GGUF file you downloaded - is it complete?
    3) dunno - it should work out of the box

    ThatHurts · Aug 25, 2025

    no, I'm on the stable version - is that it?

    animewrongdoer157 · Aug 25, 2025

    I'm having the same issue. My ComfyUI version is 0.3.52, with the latest updated GGUF custom nodes. I can run Qwen2.5-VL-7B-Instruct (with the correctly named associated mmproj file) through the GGUF CLIP loader, and it works perfectly. However, when I try to load the files you link, Qwen2.5-VL-7B-Instruct-abliterated or Qwen2.5-VL-7B-Abliterated-Caption-it (with their own correctly named mmproj in the same file location), with the GGUF CLIP loader, I get the error. I've tried deleting the GGUF custom node and re-installing, and following the directions here (https://github.com/city96/ComfyUI-GGUF/issues/317), but I can't get the abliterated files to work.

    ThatHurts · Aug 26, 2025 · 1 reaction

    @animewrongdoer157 Update ComfyUI and delete GGUF from custom nodes; I fixed it by doing that.

    0l1v1aR0551
    Author
    Aug 26, 2025

    @ThatHurts <3

    bahaba7208305 · Sep 11, 2025

    for me I had to manually update the ComfyUI-GGUF node from the Manager - hit "Try update" - and then it worked. I reinstalled it first but got the same error until I manually updated from the Manager.

    0l1v1aR0551
    Author
    Sep 11, 2025

    @bahaba7208305 yes, we update Comfy both with the .bat file and via the Manager for the custom nodes

    BrAInB0t · Aug 30, 2025

    I'm just getting a black image.

    0l1v1aR0551
    Author
    Aug 31, 2025

    SAGE attention being ON is known to cause that

    radicalhatter · Jan 17, 2026

    Yes, remove --use-sage-attention from your startup .bat (that resolved it instantly for me). I highly recommend trying the deis_3m_ode sampler with the ays+ scheduler: incredible results, with much better prompt adherence, especially if your prompt is well-formatted.
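The fix above amounts to deleting one flag from the launcher line. A small bash sketch (the launcher line shown is a typical portable-install example, not taken from this workflow):

```shell
# A ComfyUI startup line with Sage attention enabled (example):
line='python main.py --windows-standalone-build --use-sage-attention'

# Strip the flag that is known to cause black images on some setups:
fixed="${line// --use-sage-attention/}"

echo "$fixed"   # python main.py --windows-standalone-build
```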

    anomalbor · Sep 18, 2025

    The models you mention don't work with Nunchaku qwen-image-edit; I get a mat dimension mismatch error. Is there anything that works with Nunchaku?

    0l1v1aR0551
    Author
    Sep 18, 2025

    then at least replicate the RES4LYF nodes' logic and samplers

    Workflows
    Qwen

    Details

    Downloads: 817
    Platform: CivitAI
    Platform Status: Available
    Created: 8/12/2025
    Updated: 5/4/2026
    Deleted: -