CivArchive
    Flux.2 - Dev
    NSFW

    Flux.2 [Flex], [Dev], [Pro], & [Max] are live for Generation!

    FLUX.2 [Flex] is the next leap in the FLUX model family, delivering unprecedented image quality and creative flexibility. FLUX.2 is a state-of-the-art image generation model with top-of-the-line prompt following, visual quality, image detail, and output diversity.

    Original Flux.2 [Dev] files: https://huggingface.co/black-forest-labs/FLUX.2-dev

    FP8 Quantized from ComfyUI: https://huggingface.co/Comfy-Org/flux2-dev/tree/main

    Comments (137)

    qekNov 25, 2025
    CivitAI

    Hooray! It's out

    gsgsdgNov 25, 2025· 1 reaction
    CivitAI

    64 GB checkpoint on Hugging Face :'(

    theallyNov 25, 2025· 5 reactions

    You can use the FP8 version! It's only *checks notes*... 35 GB.

    StardeafNov 25, 2025

    And somehow it works on 24 GB, at least fp8.
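
    The sizes being traded in this thread follow almost directly from the parameter count. A rough sketch, assuming the ~32B-parameter figure quoted later in the comments (the effective bits-per-parameter for the GGUF quants are loose estimates, since quants spend extra bits on scale factors):

```python
# Back-of-the-envelope weight sizes for a ~32B-parameter model.
# Raw weight storage only: real files add metadata, and GGUF quants
# spend extra bits on scales, so actual downloads run a bit larger.

PARAMS = 32e9  # assumed parameter count

def weight_gb(bits_per_param: float) -> float:
    """Raw weight storage in gigabytes (1 GB = 1e9 bytes)."""
    return PARAMS * bits_per_param / 8 / 1e9

for name, bits in [("bf16", 16), ("fp8", 8), ("Q4 GGUF", 4.5), ("Q2 GGUF", 2.6)]:
    print(f"{name:>8}: ~{weight_gb(bits):.0f} GB")
```

    That lines up with the numbers in this thread: a 64 GB bf16 checkpoint, roughly half that for fp8, and around 10 GB for a 2-bit quant.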

    gsgsdgNov 27, 2025

    @theally I guess I'll have to buy an SSD just to fit the checkpoint *shrug*

    ggorebama853Nov 25, 2025
    CivitAI

    Excited to try out this new version of Flux when it is available.

    qekNov 25, 2025· 1 reaction

    Available on HuggingFace Hub

    ggorebama853Nov 25, 2025

    @qek I'm not sure what that is?

    J1BNov 25, 2025· 1 reaction

    You get 50 free Generations here: https://playground.bfl.ai/

    J1BNov 25, 2025· 1 reaction

    @qek you also need 64-96 GB of VRAM to run the full model.

    ggorebama853Nov 25, 2025

    @qek Thank you!

    ggorebama853Nov 25, 2025

    @J1B Thank you! 

    EMYSTRATRIELNov 25, 2025
    CivitAI

    Wow ! 😁

    emotionaldreams4Nov 25, 2025· 5 reactions
    CivitAI

    models just keep getting bigger and bigger.. ridiculous

    J1BNov 25, 2025· 4 reactions

    That's what always happens with computers. When Bill Gates said "640K of RAM ought to be enough for anybody" in 1981, he was wrong.

    PopHorn1956Nov 25, 2025· 3 reactions

    Plus a 36 GB text encoder :)

    TheBiLL1Nov 26, 2025· 1 reaction

    @J1B But now Moore's Law has failed, and as computer prices keep rising while keeping up with computing power demands becomes harder, many models can only run on server clusters. The most useful software on PC nowadays is the web browser.

    blablabla666234Nov 26, 2025

    USE GGUF

    Keroro_GunsoNov 26, 2025· 2 reactions

    @love123654 It's not Moore's law you should blame. The current silicon is plenty fast. It's greed we are battling. Nvidia and AMD do NOT want you to have more VRAM; they don't want to cannibalize the sales of their high-end hardware. Because there is no other competition in the space, we all suffer. It's not a law of scaling. It's just greed.

    J1BNov 26, 2025· 1 reaction

    @love123654 In 1981 the base model IBM 5150 cost $5,600 (and up to $20,000 for higher-end models) in today's money. You could buy a very nice AI-capable PC with an RTX 5090 GPU for that much.

    TheBiLL1Nov 26, 2025

    @J1B With limited advancements in chip manufacturing processes, performance improvements in chips like the 5090 are largely achieved through increasing chip size.

    TheBiLL1Nov 26, 2025· 2 reactions

    @Keroro_Gunso The greed of these hardware companies is certainly a significant factor, but hardware manufacturing costs are also clearly increasing. Meanwhile, the growth rate of computational power and GPU memory requirements for models far exceeds the improvement in PC performance. I believe this is a joint attack by hardware and model companies on offline generation, but our open-source community can counter it.

    Keroro_GunsoNov 26, 2025

    @love123654 I hope so.

    tuefmaNov 25, 2025· 7 reactions
    CivitAI

    GGUF models out there for the rest of us.

    https://huggingface.co/orabazes/FLUX.2-dev-GGUF

    StardeafNov 26, 2025· 1 reaction

    Thanks. On a 3090 24GB, Q4 loads "completely" in comfy, anything above loads "partially", however it doesn't seem to noticeably affect generation speed. Which is about 200 seconds for 1024x1200, as reported by comfy.

    gothefungus872Nov 26, 2025· 1 reaction

    Only Q2 loads entirely on my 5060ti (16GB), but the system ram offloading for higher quants seems to work very cleanly, adding about 20 seconds per gen (230->250). I've tested Q3 and Q4, and as long as you have enough system ram there's no speed difference between the two. So presumably you can use the highest quant you can stuff into your system ram with no further drawbacks (still downloading Q6 to verify!).

    windkeeperNov 25, 2025· 2 reactions
    CivitAI

    DrawThings won't accept it 😭

    reakaakaskyNov 26, 2025· 10 reactions
    CivitAI

    60 GB DiT + 30 GB TE. orz

    gausssidorov928Nov 26, 2025· 2 reactions

    2026: FLUX-3, 256 GB of video RAM

    blablabla666234Nov 26, 2025· 17 reactions
    CivitAI

    NEED NSFW ASAP!!!!!

    qekNov 26, 2025· 2 reactions

    Good luck 🤑

    StardeafNov 27, 2025· 1 reaction

    It actually is "not safe for work". I, for example, have some work to do and instead I'm sitting here feeding silly prompts into this...

    0l1v1aR0551Nov 26, 2025· 2 reactions
    CivitAI

    this is DA *FFing BOMB!!!

    samon777Nov 26, 2025· 1 reaction
    CivitAI

    fp8 model renders in about 4.5 minutes on RTX 3090 for a simple 2112 x 1184 image.

    It seems that all this requires so many resources for only one purpose: to make you use their servers, not your local computer.

    qekNov 26, 2025· 2 reactions

    I ran 2-bit quants to try. 768x768, 20 steps, prompt executed in 237.49 seconds. It became slower after adding ReferenceLatent; does that happen for other users too?

    gausssidorov928Nov 26, 2025· 1 reaction

    Everyone wants to eat well and sleep sweetly,

    StardeafNov 27, 2025· 1 reaction

    Yes, it gets slower with each reference image. I'm getting generation times between 200 and 500 seconds on 3090 with Q8 version. It seems to need at least 30 steps to deliver. A slow beast, but so far it looks good.

    LetTheBassDropNov 27, 2025· 2 reactions

    It's a terrible model.

    5848052Nov 26, 2025· 10 reactions
    CivitAI

    Actually... it's quite an interesting model. It still has the six-finger issue, and it still has the plastic skin and other familiar FLUX appearances, but in the end those are FLUX features after all. 😅

    Prompt understanding is better, meaning the model mostly follows your prompt. There's more variety for creating now. But... FLUX KREA is still in business. I can probably say KREA is still the better choice because IT REQUIRES FEWER RESOURCES.

    60 GB OF VRAM FOR THIS??? WHAT??? This looks like some joke from the developers. All these cool new models REQUIRE a HUGE amount of VRAM, but the difference between new and old is just... okay. It all looks like the developers are simply creating benchmarks for GPUs.
    It looks like all this requires so many resources for only one thing: to make you generate on their servers, not your local machine. Want local? Spend thousands of dollars on your machine, or even JUST ON THE GPU. Don't forget about RAM (prices are spicy now =D), SSD, CPU and so on. If you want to launch models like this, well, you're actually building your own server. =P

    Or rent servers. It's crazy how fast all this is growing, but GPU performance isn't!

    Before long, I think we'll get models with a 100-120 GB VRAM requirement or even more (hello, Hunyuan Image 3). And that's for images. Video? 200-400? If any of the developers even try to share it with the community, of course. =D

    P.S. I actually like this model. But the progression is not so good. It's more like FLUX 1.5. Some things look better in the old one. I don't even know whether creators will even try to deal with LoRAs and modding for this model. That's a brutal machine you need for this now.

    qekNov 26, 2025· 1 reaction

    It still gives butt chins, triple chins. Flux Krea is just a fine-tune

    gausssidorov928Nov 26, 2025· 2 reactions

    They don't give up, no; instead of changing the architecture, they churn out models pushing 60 gigs, and the problems with generating distant objects are still there: they were ugly before, and ugly they remain))))

    StardeafNov 27, 2025· 1 reaction

    How many steps did you do? I had body issues with extra arms and fingers, but only at 20 steps or fewer. They cleared up when I increased steps to 30, at least in the 3 cases I've encountered so far. I'll also need to figure out how guidance works; from what I've seen, the range is 1.5 to 15, so probably set it halfway for starters.

    5848052Nov 27, 2025· 1 reaction

    @Stardeaf 30 steps. CFG was 5 or 6. I got a lot of bad images. This is the best I've got.

    StardeafNov 27, 2025· 1 reaction

    @SaiWeb Was "DJ." the whole prompt?

    mphobbitNov 27, 2025· 2 reactions

    It seems they use an LLM as the TE.

    mkDanielNov 28, 2025· 1 reaction

    There are already 2 LoRAs.

    karl1688Nov 26, 2025· 13 reactions
    CivitAI

    This model is extremely disappointing; the quality of the generated images has barely improved, yet it has become much bulkier. This is very unfriendly for LoRA training, and a model of this size should inherently offer richer aesthetic performance.

    ouzhen123456990Nov 26, 2025

    You're absolutely right

    Svengali75Nov 26, 2025· 1 reaction

    Image quality can be debated, yes, but prompt adherence is very impressive. I'm running some tests and the coherence of the final image with the prompted details is stunning.

    karl1688Nov 26, 2025

    @Svengali75 What kind of adherence are you referring to? If it could handle comic panels like Qwen, that would be somewhat acceptable, but I believe this is far from sufficient. For a model of such a massive size, its quality should not be this blurry and lacking in detail. Its performance in realistic styles is even inferior to SD 1.5, which is unacceptable, not to mention the large number of anatomical errors it still exhibits.

    StardeafNov 27, 2025

    @karl1688 Hmm... I'm not getting any of that so far. Maybe you're doing something wrong? Give us some details of the failed gens.

    karl1688Nov 27, 2025

    @Stardeaf I tested it using the same realistic photography prompt. As an ultra-large model with 32 billion parameters, its performance is even worse than a fine-tuned SD1.5 model. The skin in portraits still looks extremely plasticky, distortion is severe, limbs and fingers lack detail, and the image style is actually a regression compared to Krea. In summary, its generation quality is simply too poor. I can tolerate the slow speed, but this quality is unacceptable. Look at Z Image; that is the performance an advanced model should deliver.

    Aderek514Nov 26, 2025· 1 reaction
    CivitAI

    Nice!

    And LoRA training too?

    qekNov 26, 2025· 2 reactions

    Using AI Toolkit

    Shrekman17Nov 26, 2025· 1 reaction

    We are talking about Civit; a realistic time frame for adding LoRA training for Flux 2 will be the release of Flux69

    theallyNov 26, 2025· 4 reactions

    @Shrekman17 We have it available for internal testing :) We'll open it up when we're satisfied that it works. It's going to be costly though; it's very resource intensive.

    qekNov 26, 2025· 1 reaction

    @theally Sounds nice anyway

    Shrekman17Nov 27, 2025· 1 reaction

    @theally The bar for my trust in Civit at this point lies in hell.
    I would like to finally be surprised,
    but in over ~6 months the only thing added that's useful to me was Sora 2.
    Actually, Civit might be one of the best places to use Sora for now

    gausssidorov928Nov 26, 2025· 18 reactions
    CivitAI

    Yes, yes, yes, everyone go buy 128-gigabyte cards)))))

    qekNov 26, 2025· 1 reaction

    I ran it and it took 10 GB of VRAM (used a 2-bit quant)

    gausssidorov928Nov 26, 2025· 1 reaction

    @qek 600 seconds to generate one picture?))

    devold5000Nov 26, 2025· 4 reactions

    I have a 5090 32GB; it started up with fp8. The thing is, Flux2 shouldn't have worked at all, it should have crashed out of memory, but apparently they built in something like block swap, because VRAM is almost completely full both at 1024x1024 and at 2048x2048... The RAM appetite also surprised me, for example: mistral_3_small_flux2_bf16.safetensors eats all 92 GB of RAM... With mistral_3_small_flux2_fp8.safetensors there are no problems, but you still need to have around 64 GB of RAM.

    qekNov 26, 2025· 1 reaction

    768x768, 20 steps, no reference latents, ~240 seconds

    Gaydevai_pauloNov 26, 2025· 1 reaction

    I'm sure they'll make a Nunchaku version and it will be faster :)

    mphobbitNov 27, 2025· 1 reaction

    @devold5000 Mistral is a family of LLMs. So in fact you're using a hybrid of txt2img + LLM (Qwen also uses its own LLM). If someone replaced the LLM with a lightweight TE, the size and memory consumption could be smaller.

    Stefan_FalkokNov 27, 2025· 1 reaction

    On my 5080, generating an image at 30 steps in full HD with UniPC takes about 250-280 seconds. A bit long, and that's on the q8_0 model. You can download my Flux2 workflow and stop racking your brain over generation times

    fbg630Dec 6, 2025

    @devold5000 Memory consumption scales almost linearly with image resolution. I have a 4080S and 96 GB of RAM. 1 MP - 6.5 s/it; 1.5 MP - 11 s/it, 77% RAM; 3.5 MP - 28 s/it, 83% RAM in img2img mode.

    andresbravo2003Nov 26, 2025· 1 reaction
    CivitAI

    the next gen is coming in hot!

    qekNov 26, 2025· 1 reaction

    And large

    bymgv551Nov 26, 2025· 4 reactions
    CivitAI

    We need Ultra Realistic Lora, bcs base model is shit

    qekNov 26, 2025· 2 reactions

    I agree

    StardeafNov 27, 2025· 3 reactions

    I didn't have that impression. Actually I think it's much better than F1. But it needs a lot of steps to deliver. I think 30 is minimum. And maybe you need to figure out how to prompt it.

    LetTheBassDropNov 27, 2025· 4 reactions

    @Stardeaf F1 is a low bar. I don't think I've ever seen a good realistic image of a person with just F1. Even with a lora it's just bad. I never got the hype for Flux. It's finicky, huge model sizes, slow generations, and bad output. This new model is even slower and worse. And I thought Pony 7 was a disappointing model... Flux just proved they're not capable of improving their model as long as they're limiting their datasets and censoring.

    StardeafNov 27, 2025· 1 reaction

    @LetTheBassDrop I agree F1 is not great to put it politely. I've been late to tinker with it but so far results I could get were kind of disappointing, which I attribute to my lack of experience in handling it. But I don't get the hate F2 is getting here. I think it's mind blowing. It's not just about the image 'realism' or whatever, but the scope of concepts it understands. It's taking a step forward to a hybrid with language model.

    PrimaveriNov 30, 2025· 1 reaction

    Z-image turbo can fix it with a second pass of 0.65-0.85 denoise

    fbg630Dec 6, 2025

    @Primaveri Z-image turbo is really good, but it needs a second pass through SDXL models to be realistic. Create an image in F2, refine it in Z-image turbo, and then refine it in SDXL; I've been through that quest. The result is good, but not worth the time. I hope that Z-image-base will fix the situation.

    Mr_FeiNov 27, 2025· 2 reactions
    CivitAI

    Could you provide a complete text encoder model file? How do you use model-00001-of-00010.safetensors through model-00010-of-00010.safetensors? Do all of them need to be downloaded? Could you provide a merged file?

    ak002Nov 27, 2025· 1 reaction

    What are you talking about? The link is right there: FP8 quantized from ComfyUI. Click on split_files and then text_encoders!

    Mr_FeiNov 27, 2025· 1 reaction

    The text encoder folder in the link contains 01-of-10, 02-of-10, 03-of-10... through 10-of-10. Which one is the text encoder file? Do all of them need to be downloaded? Even if you download all of them, you can only choose one safetensors file in ComfyUI.
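
    For context on how those numbered files fit together: Hugging Face shards large checkpoints into numbered .safetensors files plus an index JSON (model.safetensors.index.json) that maps each tensor name to the shard containing it, so all shards are needed. A minimal sketch; the tensor names and index content here are illustrative, not the actual Flux.2 layout:

```python
import json

def shards_required(index_json: str) -> set:
    """Return every shard filename referenced by a Hugging Face index file."""
    index = json.loads(index_json)
    return set(index["weight_map"].values())

# Illustrative index content; real tensor names differ.
example_index = json.dumps({
    "metadata": {"total_size": 64_000_000_000},
    "weight_map": {
        "model.embed_tokens.weight": "model-00001-of-00010.safetensors",
        "model.layers.39.mlp.weight": "model-00010-of-00010.safetensors",
    },
})

print(sorted(shards_required(example_index)))
```

    Single-file repacks like the ComfyUI ones exist precisely so a loader can point at one .safetensors file instead of the whole shard set.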

    qekNov 27, 2025· 1 reaction

    @sunweixi1993786 They said NO

    Mr_FeiDec 1, 2025

    @ak002 I've seen the URL you sent. The file shows "small". Has it been deleted?

    Mr_FeiDec 2, 2025

    @ak002 Hello, the text_encoders you shared are the abridged version marked "small" made by ComfyUI, not the complete version of the text_encoders from the studio!

    paradoxical4u2c712Nov 27, 2025· 10 reactions
    CivitAI

    my 4090 can't even shift this elephant off its ar...

    qekNov 27, 2025· 1 reaction

    👉 GGUF

    MusigregNov 27, 2025· 1 reaction

    My 4090 handles the fp8 version quite well. Just takes a bit of time...

    qekNov 27, 2025· 1 reaction

    @musigreg I can only run Q2

    2800883Nov 27, 2025· 2 reactions

    Must be your settings, my 4090 runs it just fine. Is your output image larger than 2K? That might be your problem.

    NapoInfrNov 27, 2025· 1 reaction

    I saw a news item saying that Nvidia has been working with ComfyUI to make this model run on RTX. You need to update your ComfyUI installation.

    Stefan_FalkokNov 27, 2025· 1 reaction

    Download my workflow for Flux2 and download the GGUF q8_0 model; it will be better than fp8 for quality, and you get 250-280 seconds of KSampler generation time at 30 steps and 1920x1080 resolution. Good luck!

    mkDanielNov 28, 2025· 1 reaction

    @NapoInfr It uses the tensor cores? Is that why I was getting a 4K image at 20 steps (2 days ago) in just 80-100 seconds on a 5090?

    Stefan_FalkokNov 28, 2025

    @mkDaniel GGUF models never use tensor cores; only fp16, fp8 and fp4 do. 80-100 seconds for 20 steps on a 4K image on a 5090 is a very, very nice result

    frfromgNov 29, 2025· 1 reaction

    nvidia payed them to make it that big so that you consider buying pro card

    qekNov 29, 2025

    @frfromg paid*

    I'm a Forge WebUI user, and well... I downloaded the model and Mistral 3.2 for text encoding, then I downloaded the VAE file for Flux2 and put them all where they go. I get an error that says I don't have a clip or dict_, and I didn't see anywhere that said Forge WebUI is not compatible with Flux2; I also did not see anywhere that said I needed a CLIP. Oh well, I'll wait a bit longer before trying to use it again.

    2thecurveNov 28, 2025· 2 reactions
    CivitAI

    ComfyUI crashes when I try to load the fp16 on my 4090 lol, trying fp8

    qekNov 28, 2025· 1 reaction

    I can only run 2-bit quants

    mkDanielNov 28, 2025· 1 reaction

    Yeah, I do not even try FP16 on 5090.

    2thecurveNov 28, 2025· 1 reaction

    So I just saw that the CLIP needs to be set to CPU

    mochorongNov 28, 2025· 7 reactions
    CivitAI

    Flux 2 is as good as SD 1.5

    qekNov 28, 2025· 1 reaction

    worse than SD 1*

    Cezi_Nov 28, 2025· 5 reactions
    CivitAI

    Requires almost 64 GB of VRAM. Sick. This train has already left without me (4070 user)

    qekNov 29, 2025

    If you have at least 12k buzz, try the Civitai trainer

    mkDanielNov 29, 2025

    I can train on 32 GB with FP8. It is something like 10 s/it, but IT IS training.

    Same as training FLUX1 on 12GB

    skechtupNov 29, 2025· 3 reactions
    CivitAI

    Very coherent with prompts (GGUF version). Although, as a user of a 12 GB VRAM card, I would love for a Schnell version to come, because one image takes a while... Do we have other options for 12 GB of VRAM... or less?

    qekNov 29, 2025

    No Schnell, there will be Klein instead
    "Do we have other options for 12GB Vram"
    For Flux2? Use more RAM than VRAM

    skechtupDec 1, 2025

    @qek How do you use more RAM? In the VAE node? Or is there another way?

    qekDec 2, 2025

    @skechtup 1. Some arguments, compare:
    --highvram: Max performance, doesn't unload models once loaded.
    --normalvram or unset: Standard operation.
    --lowvram: Saves VRAM but slows things down.
    --novram: Barely uses VRAM.
    2. The Load CLIP node has a device option; set it to CPU and it will use RAM. VAELoader KJ also has that option.
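
    Those flags go on the ComfyUI launch command itself; a sketch, assuming a default source install (one memory mode per launch):

```shell
# From the ComfyUI directory -- pick ONE memory mode per launch:
python main.py --lowvram     # offload aggressively to system RAM; slower
python main.py --novram      # treat VRAM as all but unavailable; slowest
python main.py --highvram    # keep models resident in VRAM; fastest
```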

    skechtupDec 5, 2025

    @qek I've tried them all; it doesn't speed things up. When is the Klein version expected?

    4598756Nov 29, 2025
    CivitAI

    I can't train more than 1 epoch with the new Civitai LoRA training. 10,000 buzz for a character that doesn't look at all like the original...

    qekNov 29, 2025

    12k buzz*?

    4598756Nov 29, 2025

    @qek yeah 12k not 10 :/

    yaiwrkNov 29, 2025· 21 reactions
    CivitAI

    SO HOT!!!!!!!!!!!! THIS IS THE BEST EVER! MAXIMUM REALISM. USE PRO VERSION

    yaiwrkNov 30, 2025· 9 reactions
    CivitAI

    if your hands grow out of your ass, then this is not a flux problem

    dj999cool480Dec 1, 2025· 6 reactions
    CivitAI

    As a tester of all existing image generation models, I can say that this model generates decent images, but the images lack sharpness and are blurry.

    qekDec 1, 2025· 8 reactions
    CivitAI

    I tried Flex on the site and got an image covered in small RGB noise, visible when zoomed in, wtff. I also tried Pro and Dev; they made a character with 4 fingers instead of 5, and Flex made 5 fingers but 4 nails instead of 5 🤬

    Mr_FeiDec 2, 2025
    CivitAI

    Hello, the text_encoders you shared are the abridged version marked "small" made by ComfyUI, not the complete version of the text_encoders from the studio!

    aueki4g467Dec 2, 2025

    Mistral-Small-3 is a standard model made by MistralAI, right? There are also Medium and Large models, but they are commercial and provided via API. Only Small is released as an open model.

    qekDec 2, 2025

    @aueki4g467 It seems the OP didn't get how to get the text encoder for Flux2

    Mr_FeiDec 2, 2025

    @aueki4g467 So, are the "small" text_encoders (bf16 and fp8) the complete version?

    qekDec 2, 2025

    @sunweixi1993786 Yes, it is for Flux.2

    BonticariusDec 11, 2025· 6 reactions
    CivitAI

    Can't even run this on my 5090. We are cooked, chat. And it's censored (for obvious reasons)

    qekDec 12, 2025· 2 reactions

    Heavily censored, a model unable to gen porn, such a joke

    WWG1WGA17Dec 27, 2025· 2 reactions

    No doubt. I invested heavily: went ahead and got a 5090 FE to pair with my 12900K, 64 GB of Corsair Dominator Platinum DDR5-6800 RAM, a 4 TB Sabrent Rocket 5 Gen 5 NVMe, kept my old 3080 12GB as a secondary, and even got the high-dollar Z790 Dark Hero motherboard, figuring OK, that should get me set up. Then they're like... OK, hold my beer.

    WWG1WGA17Dec 27, 2025· 1 reaction

    @qek Yep, fuhked up. They don't realize how much they're holding their own model back from developing.

    BonticariusDec 27, 2025

    @WWG1WGA17 Yeah, limitation has reached its peak

    lucidzachary473Dec 29, 2025

    @WWG1WGA17 I just got a 5090. Was not expecting any models on CivitAI to be off the table! lol damn

    qekJan 1, 2026

    @WWG1WGA17 Crazy, I can only run it in 3 bits. It seems the devs really hate us and made it as big and as censored as possible. But, apparently, Qwen Image Edit isn't like that, it wasn't made to be gross

    ccollinsJan 8, 2026

    @lucidzachary473 You can run a quantized version on a 5090 for sure. I'm running a Q8 version with a 3090.

    qekDec 17, 2025· 7 reactions
    CivitAI

    No Flux.2 Max?

    qekDec 27, 2025

    No Flux.2 Max, but added GPT Image 1.5 and Seedream 4.5

    theallyJan 2, 2026

    @qek Max is live for generation now :)

    qekJan 2, 2026

    @theally I see

    WyvernDrykeDec 18, 2025· 1 reaction
    CivitAI

    Evaluating the Gallery images, I think I'll wait to use this Checkpoint until good developers finetune better versions of it. The images are slightly blurry and the composition is subpar compared to current Flux1 generations. This shows promise, but in its present form, I would not use it.

    qekDec 18, 2025

    I got noisy images, you can see my comment

    EscomDec 26, 2025· 19 reactions
    CivitAI

    This model is so heavily censored it is beyond imagination and completely ridiculous. It's really not worth upgrading your gear for this, folks.
    A model that struggles with a shy smile deserves to be criticized.

    Checkpoint
    Flux.2 D

    Details

    Downloads
    9,117
    Platform
    CivitAI
    Platform Status
    Available
    Created
    11/25/2025
    Updated
    5/2/2026
    Deleted
    -

    Files

    flux2_dev.safetensors

    Mirrors

    Huggingface (1 mirror)
    CivitAI (1 mirror)
    Other Platforms (TensorArt, SeaArt, etc.) (1 mirror)