CivArchive
    HiDream Full GGUF-Q5_K_M UNCENSORED 🔒 - v1.0
    NSFW

    This is the most OPTIMAL (in terms of Quality/Speed/VRAM) quant of the HiDream Full model, packed with a completely "lobotomized" = uncensored text encoder (Meta Llama 3.1)

    This file collection contains the two main ingredients that make HiDream way better and UNcensored:

    • Nice trick: converted-flan-t5-xxl-Q5_K_M.gguf is used instead of t5-v1_1-xxl-encoder-Q5_K_M.gguf for better text-to-vector translation/encoding;

    • The main secret ingredient: meta-llama-3.1-8b-instruct-abliterated.Q5_K_M.gguf - instead of Meta-Llama-3.1-8B-Instruct-Q5_K_M.gguf --> Read about the process of LLM abliteration/uncensoring here: https://huggingface.co/blog/mlabonne/abliteration (get other uncensored LLMs from his repos on Huggingface...)

    So ... simply do:

    1. Unpack the archive file!

    2. Place the hidream-i1-full-Q5_K_M.gguf file in the ComfyUI\models\unet folder;

    3. Place converted-flan-t5-xxl-Q5_K_M.gguf, meta-llama-3.1-8b-instruct-abliterated.Q5_K_M.gguf, clip_g_hidream.safetensors and clip_l_hidream.safetensors in the ComfyUI\models\text_encoders folder;

    4. Place HiDream.vae.safetensors in the ComfyUI\models\vae folder;

    5. Use my UNcensored HiDream-Full Workflow.json as a starting workflow to test how it works;

    6. In case of VRAM problems, use my VRAM-optimized bat file run_nvidia_gpu_fp8vae.bat to start ComfyUI (put it directly into the ComfyUI folder);

    ... this way you can get nice high-quality HiDream-Full image generation with 12 GB VRAM (tested!)
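    The file-placement steps above can be sketched as a small helper script. The filenames are the ones shipped in the archive per the steps; the destination subfolders assume a standard ComfyUI layout, and the two directory arguments are placeholders you adjust to your install:

```python
import os
import shutil

# Each file from the archive mapped to its ComfyUI model subfolder,
# following steps 2-5 above.
DESTINATIONS = {
    "hidream-i1-full-Q5_K_M.gguf": os.path.join("models", "unet"),
    "converted-flan-t5-xxl-Q5_K_M.gguf": os.path.join("models", "text_encoders"),
    "meta-llama-3.1-8b-instruct-abliterated.Q5_K_M.gguf": os.path.join("models", "text_encoders"),
    "clip_g_hidream.safetensors": os.path.join("models", "text_encoders"),
    "clip_l_hidream.safetensors": os.path.join("models", "text_encoders"),
    "HiDream.vae.safetensors": os.path.join("models", "vae"),
}

def place_files(unpacked_dir: str, comfyui_root: str) -> None:
    """Copy each unpacked file into its ComfyUI model subfolder."""
    for name, subdir in DESTINATIONS.items():
        src = os.path.join(unpacked_dir, name)
        dst_dir = os.path.join(comfyui_root, subdir)
        os.makedirs(dst_dir, exist_ok=True)  # create models\unet etc. if missing
        shutil.copy2(src, os.path.join(dst_dir, name))
```

    Usage would be e.g. `place_files(r"C:\Downloads\unpacked", r"C:\ComfyUI")`; swap `shutil.copy2` for `shutil.move` if you don't want to keep the originals.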

    Update1: You can also use other uncensored Meta Llama 3.1 versions in the text encoder part; for example, this image: https://civarchive.com/images/71818416 is using DarkIdol-Llama-3.1-8B-Instruct-1.2-Uncensored-GGUF
    Update2: Try this CLIP-G --> https://civarchive.com/models/1564749?modelVersionId=1773479


    Use it well ;)

    HFGL


    Comments (71)

    kunde2 · Apr 22, 2025 · 1 reaction
    CivitAI

    Awesome to start seeing a better uncensored version of HiDream! Are all the needed files you mention part of the .zip file?

    0l1v1aR0551
    Author
    Apr 22, 2025 · 1 reaction

    yes - all of them are inside that zip archive (for inconvenient convenience...) - CivitAI won't allow them as one upload unless they are zipped :(

    ByteCrafter · Apr 22, 2025 · 3 reactions
    CivitAI

    Can this be done with the Q8 versions for those of us who have more resources available?

    0l1v1aR0551
    Author
    Apr 22, 2025

    Yes, but you need at least the uncensored Llama ... FLAN is optional but a nice addition.

    bk227865750 · Apr 22, 2025

    Can I presume that the actual model is still the original (just Q5)? Only the Llama and FLAN are doing the work?

    0l1v1aR0551
    Author
    Apr 22, 2025 · 2 reactions

    @bk227865750 the model is OG, the trick is in the CLIP LLMs (Meta Llama + Google FLAN)

    _Envy_ · Apr 22, 2025 · 1 reaction

    Yes. I'm working on it now and will have uploads shortly.

    0l1v1aR0551
    Author
    Apr 22, 2025

    @_Envy_ <3

    _Envy_ · Apr 22, 2025

    @OliviaRossi What uncensored llama are you using? Is there a safetensors file of it? I can only find a diffusers one, which doesn't do much good with comfy.

    ByteCrafter · Apr 22, 2025

    @_Envy_ @OliviaRossi I think I may have got it working with these: https://huggingface.co/dumb-dev/flan-t5-xxl-gguf/tree/main/Q8 and https://huggingface.co/mlabonne/Meta-Llama-3.1-8B-Instruct-abliterated-GGUF/tree/main (the Q8 model from there). It doesn't throw errors at least, and it runs.

    fsandersphoto · Apr 22, 2025 · 1 reaction
    CivitAI

    No errors, but at the end no image, just noise.

    0l1v1aR0551
    Author
    Apr 22, 2025

    that is the case with HiDream - I was getting the same problem when my ComfyUI bat file had the command-line switch --force-fp16, so try to play with that bat file, maybe use my version or adapt it

    fsandersphoto · Apr 22, 2025

    @OliviaRossi Using your bat file leads to the same result - blue-violet noise

    0l1v1aR0551
    Author
    Apr 22, 2025

    @fsandersphoto and my workflow? sampler and other settings? HiDream is very sensitive to them ...

    fsandersphoto · Apr 22, 2025

    @OliviaRossi I am using your workflow and all the files from the zip

    ByteCrafter · Apr 22, 2025

    @fsandersphoto oh, you need to switch ComfyUI to the nightly build and update to the latest. The reason that happens is that long prompts go over the token allowance; it was reported and fixed in a more recent update. But be warned: the more recent update breaks some workflow nodes.

    fsandersphoto · Apr 24, 2025

    @ByteCrafter You are right, updating to nightly breaks some workflow nodes, but still no images after the update.

    0l1v1aR0551
    Author
    Apr 24, 2025

    @fsandersphoto here is a very fresh image: https://civitai.com/images/72107327 - maybe copy the nodes from it and try? (updated Comfy and workflow)

    ByteCrafter · Apr 25, 2025 · 1 reaction

    @fsandersphoto then it's likely you are missing something that should be installed.

    0l1v1aR0551
    Author
    Apr 25, 2025

    @ByteCrafter yup - something is off in the ComfyUI settings (command-line switches are the most probable cause), maybe some conflicts with nodes

    transformerman · Apr 22, 2025 · 5 reactions
    CivitAI

    I'm unconvinced. I generated 18 images, 9 with this setup and 9 with the normal one. And I see no indication that this "uncensored" setup is any better at nudes than the normal one.

    Pantilicious · Apr 23, 2025

    I agree with you based on my tests. On top of that, the provided workflow generated blurry, messy images.

    0l1v1aR0551
    Author
    Apr 23, 2025

    @Pantilicious blurry and messy - something is wrong with your Comfy updates / settings; I was having the same problem with HiDream until I brutally changed them. Even if ComfyUI is OK for Flux, HiDream is way more sensitive to tiny changes in the environment and generation parameters ;)

    0l1v1aR0551
    Author
    Apr 23, 2025

    here is a more straightforward example: https://civitai.com/images/71911200 <-- use natural language, not just "pussy" (you have to explain to both included LLMs what you want)

    0l1v1aR0551
    Author
    Apr 23, 2025

    @Pantilicious blurry and messy - on some of the scheduler / sampler combos - same as HiDream itself

    Vivacious5691 · Apr 22, 2025 · 1 reaction
    CivitAI

    These encoders are an actual improvement. But I am using the text-encoder-only (TE-only) version of the flan_T5 encoder to save even more memory

    0l1v1aR0551
    Author
    Apr 23, 2025

    I think that my attached version is also TE-only; if I'm wrong, then you have a good point!

    MysticMindAi · Apr 23, 2025 · 1 reaction
    CivitAI

    What about Llama 3.1 Lexi Uncensored? That's supposedly the uncensored variant of Instruct.

    0l1v1aR0551
    Author
    Apr 23, 2025 · 1 reaction

    well, thanks for the suggestion - gonna test it too - very interesting point!!!

    0l1v1aR0551
    Author
    Apr 23, 2025 · 1 reaction

    so ... after testing Lexi Llama (uncensored) - I've found this other one; for example, this image: https://civitai.com/images/71818416 is using DarkIdol-Llama-3.1-8B-Instruct-1.2-Uncensored-GGUF, which is better than Lexi at generating hands

    MysticMindAi · Apr 27, 2025

    Nice! I'll take a look. Thx!

    0l1v1aR0551
    Author
    Apr 27, 2025

    @MysticMindAi but the one that is in this file collection works best in my numerous tests

    MysticMindAi · May 11, 2025

    @OliviaRossi been getting great results with both the abliterated and DarkIdol variants so far!

    0l1v1aR0551
    Author
    May 11, 2025

    @MysticMindAi nice to know!

    Cosmic_Crafter · Apr 23, 2025 · 1 reaction
    CivitAI

    take a look at your gguf section I think you have the wrong clip loader in the build

    0l1v1aR0551
    Author
    Apr 23, 2025 · 1 reaction

    here is the workflow in the image's metadata that has both quadruple gguf loaders (one at a time ...) - they both do the same job/result = tested!
    https://civitai.com/images/71811027

    97Buckeye · Apr 23, 2025 · 5 reactions
    CivitAI

    No matter what I try, I absolutely cannot get an image of a woman not wearing pants. This shows off GREAT tits - thank you for that. But I just can't get a single pussy to show up. Sad times.

    0l1v1aR0551
    Author
    Apr 23, 2025

    by the way - pants are on with Euler/Beta but off with dpmpp_2m/Beta ;)

    try to change Scheduler/Sampler combo!

    azeli · Apr 23, 2025 · 2 reactions

    Yes, it's annoying me that everyone is saying this model is uncensored when it clearly isn't; I've tried tons of text encoders etc. but no joy.

    0l1v1aR0551
    Author
    Apr 23, 2025

    @azeli https://civitai.com/images/71558294 - this image compares the same settings with the censored and uncensored one - and yes, the female pelvis area is not an easy task for it, but ... boobs are gorgeous ;)

    0l1v1aR0551
    Author
    Apr 23, 2025

    @azeli here is an example for this matter: https://civitai.com/images/71911200 <-- use it (copy the nodes)

    everylight · Apr 24, 2025 · 2 reactions
    CivitAI

    Fantastic work! Any chance you could back up the model to huggingface? Apologies if this is already done. Thank you greatly for your work!

    0l1v1aR0551
    Author
    Apr 24, 2025 · 2 reactions

    all parts of this model are from various repos on Huggingface - the point is to prove the concept, and you can get whichever quant size suits you best from there ;)

    and TY!

    ediblepapers2655 · Apr 25, 2025 · 1 reaction
    CivitAI

    Does it have to use meta-llama-3.1 or can we use something like gemma-3-12b-it-abliterated-GGUF?

    0l1v1aR0551
    Author
    Apr 25, 2025 · 1 reaction

    unfortunately, it has to be Meta Llama 3.1; Gemma I had not tested - doing it right now (for science!)

    UPDATE: no, an error pops up - "unexpected architecture - Gemma"

    jahitian · May 1, 2025 · 2 reactions
    CivitAI

    Working great so far, thank you for sharing this!!!

    0l1v1aR0551
    Author
    May 1, 2025

    TY!
    u r welcome!!!

    AKDesigns · May 10, 2025 · 3 reactions
    CivitAI

    Can you add Q2 GGUF versions of these models?

    0l1v1aR0551
    Author
    May 10, 2025 · 1 reaction

    You can easily get all the needed parts just by searching for them on https://huggingface.co - all parts of this model are from there.

    But you should know that Q2-Q3 are unusable, since they produce very low-quality output. What you can do is try to fit Q4_K_S into your VRAM; also, instead of using the quadruple gguf clip loader, you can use a single gguf clip loader (now that ComfyUI supports it) - as the clip you only need to load the abliterated Meta Llama (no need for the others, but you have to switch the clip type to hidream)
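    The single-loader setup described above can be sketched in ComfyUI's API (prompt JSON) format. This is only an illustration: the node class names ("CLIPLoaderGGUF", "CLIPTextEncode"), the input names, and the node ids here are assumptions based on the ComfyUI-GGUF custom-node pack, so check your installed nodes for the exact spelling:

```python
# Illustrative ComfyUI API-format fragment: a single GGUF clip loader
# carrying only the abliterated Llama, with the clip type set to hidream.
single_clip_setup = {
    "1": {
        "class_type": "CLIPLoaderGGUF",  # assumed node name from ComfyUI-GGUF
        "inputs": {
            "clip_name": "meta-llama-3.1-8b-instruct-abliterated.Q5_K_M.gguf",
            "type": "hidream",  # the clip type switch mentioned above
        },
    },
    "2": {
        "class_type": "CLIPTextEncode",
        "inputs": {
            "clip": ["1", 0],          # reference to the loader node's CLIP output
            "text": "a photo of ...",  # your natural-language prompt
        },
    },
}
```

    The quadruple-loader variant from the packaged workflow would instead list all four encoder files in one loader node; the single-loader form just trades a little quality for a much smaller memory footprint.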

    AKDesigns · May 16, 2025

    @OliviaRossi Since my main laptop's HDD went kaput, I'm stuck for now using an old potato laptop, and it can only handle Q2 versions :(
    I just use HiDream to set the main scene composition and SD1.5 or SDXL as refiner models.

    Am I correct in understanding that it's the "uncensored LLM" used as text encoder that makes this model an uncensored version?
    That means I can use any Q2 model I already have, i.e. Q2 + Un-LLM = Uncensored HiDream Q2 ???

    AKDesigns · May 16, 2025

    BTW, what is the difference between outputs generated by "uncensored" and "abliterated" LLM models?

    0l1v1aR0551
    Author
    May 16, 2025

    @AKDesigns censored LLMs simply try to keep your prompts within "decency", and - the most impressive part - uncensored LLMs tend to create way more artful SFW images, simply because they are not artificially limited in their output (if that makes sense ...)

    0l1v1aR0551
    Author
    May 16, 2025

    @AKDesigns yes, you can "construct" your own Q2 model file-set if you want to

    xynozeditz334 · May 27, 2025

    @OliviaRossi is there any way to run Q4 in 6GB VRAM? I've tried Q3 and it's literally unusable

    Epidural · May 10, 2025 · 3 reactions
    CivitAI

    for HiDream I1, especially with quantized or dev/full models, it's highly recommended to use the ModelSamplingSD3 node to set the shift parameter (e.g., shift=3.0 for full, 6.0 for dev, 3.0 for fast). The shift value directly influences the denoising and sampling behavior, and skipping it can noticeably affect image quality and prompt adherence. Adding this node to your workflow can really help you get the best out of HiDream!
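    The shift setup described above can be sketched in ComfyUI's API (prompt JSON) format. ModelSamplingSD3 and its shift input are stock ComfyUI; the node ids and the loader reference below are placeholders for illustration:

```python
# Illustrative ComfyUI API-format fragment: ModelSamplingSD3 inserted
# between the model loader and the sampler to set the shift value.
# Node ids ("10", "11") are placeholders.
SHIFT_PER_VARIANT = {"full": 3.0, "dev": 6.0, "fast": 3.0}  # values from the comment above

model_sampling_node = {
    "11": {
        "class_type": "ModelSamplingSD3",
        "inputs": {
            "model": ["10", 0],                  # output of the (GGUF) model loader node
            "shift": SHIFT_PER_VARIANT["full"],  # 3.0 for the Full model
        },
    },
}
```

    The sampler node would then take its model input from node "11" instead of directly from the loader, so the shift is applied before denoising.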

    0l1v1aR0551
    Author
    May 10, 2025

    Yes, I tested it, and in many cases it makes the output much worse. This may be because I'm using a different sampler/scheduler combination or because I'm using different models for clip than what was intended. In any case, sampling has to be tested to meet your needs!

    0l1v1aR0551
    Author
    May 11, 2025

    UPDATE: again re-tested ModelSamplingSD3 (3-6 strength) and it is really detrimental on various samplers / schedulers - not going to use it at all

    diegomariopyande4640 · May 12, 2025 · 3 reactions
    CivitAI

    Do we need to download the text encoder/VAE, or is it baked in?

    0l1v1aR0551
    Author
    May 12, 2025

    HiDream uses the same VAE as FLUX

    nixyzeco292 · May 21, 2025 · 4 reactions
    CivitAI

    How do I best train a LoRA for this? I found that training the TE is good, but OneTrainer only loads a huggingface folder, not safetensors or gguf

    0l1v1aR0551
    Author
    May 21, 2025

    I hope someone knows the answer ...

    ForeverNecessary737716 · Jun 4, 2025 · 4 reactions
    CivitAI

    can I download just the llama and t5 somewhere? I already got the unet and it's too many GB for me

    0l1v1aR0551
    Author
    Jun 4, 2025

    all parts of this model were taken from huggingface.co - so: YES!

    silverlinings29991791 · Jul 7, 2025
    CivitAI

    PSA: If you have at least 12GB VRAM and 16GB system RAM, you can make this workflow a bit more reliable by using the fp8 15GB Dev/Full model instead of GGUF Q5. It improves quality and seems a bit more reliable. It also works fine with GGUF Clip loaders, so you can use Q5 FLAN and Llama with it. Remember to switch ComfyUI to LowVRAM mode.

    0l1v1aR0551
    Author
    Jul 7, 2025 · 1 reaction

    yes, Q8 of both the text encoder and the model itself fit in 12GB VRAM; for that you need not too many nodes installed in ComfyUI and, preferably, the preview turned OFF (it eats up some additional memory)

    starbuckkserapis659 · Jul 9, 2025
    CivitAI

    Workflow is throwing an error about missing nodes:

    QuadrupleClipLoaderGGUF

    LoaderGGUF

    arsibalt · Jul 9, 2025

    Not really an error :) anytime you import a workflow that uses nodes you don't have installed, ComfyUI will inform you about it

    Close the notification, click Manager in the top right, then Install Missing Custom Nodes. It'll walk you through the rest

    LisaBB · Aug 5, 2025
    CivitAI

    is it possible to use it in SDNEXT?

    0l1v1aR0551
    Author
    Aug 5, 2025

    if it supports this architecture (HiDream) - then - yes

    damianad85339 · Nov 5, 2025
    CivitAI

    Is it possible to edit existing pictures with this? I'm new to ComfyUI, not sure how to add that to the workflow. Thanks.

    0l1v1aR0551
    Author
    Nov 5, 2025

    it is simple - instead of an empty latent, you need to pass in your image with a mask ;)

    Checkpoint
    HiDream

    Details

    Downloads
    3,225
    Platform
    CivitAI
    Platform Status
    Available
    Created
    4/22/2025
    Updated
    4/30/2026
    Deleted
    -

    Files

    hidreamFullGGUFQ5KM_v10.zip

    Mirrors

    Huggingface (1 mirror)
    CivitAI (1 mirror)