Real Dream Website (Faster download)
Available at TensorArt: https://tensor.art/models/899250342552672189/
Comfyui Workflows:
Anima text2image with Loras Workflow
Anima img2img (denoise) with Loras Workflow
Klein 9B Edit with Loras Workflow
Klein 9B img2img (denoise) with Loras Workflow
Z-image-turbo img2img Workflow
Z-image text2img with Loras Workflow
Z-image De-Turbo with Loras Workflow
This model can generate highly realistic images that may deceive viewers' critical sense. Please do not use it irresponsibly, or to harm ordinary people or public figures. Also avoid using it in electoral campaigns to promote or damage the image of candidates. Although the field of artificial intelligence is still loosely regulated, misuse can carry legal penalties even in countries with little regulation in this area. Be very careful not to commit crimes in your country or to promote injustices against innocent people. I do not authorize the use of this model for Lora training on the face of a real person without that person's authorization. Do not create images that could embarrass people or damage their public image; this can cause serious psychological harm, including deep depression. I do not authorize the use of images generated by this model in scams or in the dissemination of false information.
During the fine-tuning of AI models, I only use synthetic images that do not infringe any copyright, personal image rights, or laws. The models are merges of open models from different sources and may contain diluted data remnants in the neural network. I am not responsible for possible data remnants since they are diluted, and it is not humanly possible to track the existence and origin of this diluted data.
I do not allow anyone else to upload my models to Tensor Art. The only reason I forbid it there is that Tensor Art's duplicate-model policy would prevent me from uploading a model myself if someone else has already done so.
I have enabled the setting that automatically hides all NSFW posts, for several reasons that are not worth listing. I hope you will be patient and understanding.
Description
FAQ
Comments (29)
need more example images, to make people want to try it!!!
This is an amazing base model, I'm getting the most realistic results that I've ever got using the FLUX models to date.
I agree. I have tried everything, and this model (1) doesn't produce childlike images, and (2) the subjects remain consistent over a large number of different experiments, sticking to the prompt.
@martingeraldfairclough460 I agree, this base model doesn't produce those childlike images, like you said. I love the consistency as well.
This is the most realistic FLUX model I have ever tested, since RealVisXL. Inpainting with this model is just awesome. With the "UltraRealistic" lora it just rocks :))
A big "thank you" to the creator!
Can you give me the workflow for inpainting? Thank you very much.
@titusadrian789898 Hi!
I'm using ForgeUI for all my generations, with its default "Inpaint" tab, because I don't have enough VRAM (12 GB).
@dimond_ua I use ComfyUI in the cloud to get a big GPU; I found a cloud GPU site with pretty low prices. I was hoping you had a workflow (.json). Have a nice day!
Thanks for a great model
What's the best sampler for this model?
Not the creator, but for me DPM++ 2M and Euler A, both with the Karras schedule, give good results.
Great model. v16 is giving me amazing skin and eye details.
For the DMD2 v2 model, how do you get rid of tan lines?
I have tried negative prompts like "tan", "tan line", "tan lines", and also [tan] and the like. Nothing works. Can you please advise how to prevent them?
Lately I've had a very hard time getting certain poses / hand positions with Pony 16. Is that normal, or should I try something else?
The model doesn't seem to follow my prompt as well as in earlier generations.
Flux Dev only provides FP8 model downloads.
Could you also offer FP16 and GGUF quantized versions?
Sorry for my lack of knowledge about Flux, but at the moment I don't know how to convert it to GGUF. As for FP16, I don't think it's worth it.
@sinatra Quantizing to GGUF makes your model accessible to more users with limited VRAM.
For example, your model "Real Dream flux.1 dev v1 fp8" is 16 GB in size, meaning only users with more than 16 GB of VRAM can run it. But take the "Phantom" model as another example: it's originally 60 GB, and my PC only has 12 GB of VRAM, so the full model is unusable for me. However, once it's quantized into GGUF format, I can use a Q6_K version that's only 11.6 GB, and it runs fine.
https://huggingface.co/QuantStack/Wan2.1-VACE-14B-GGUF/discussions/2#6838a12bc3ccad443cfa04da
city96/FLUX.1-dev-gguf · Hugging Face
If you're not sure how to quantize your model to GGUF format,
you could try reaching out to https://huggingface.co/wsbagnsv1 or any member of the QuantStack organization on Hugging Face.
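As a rough rule of thumb, a quantized file's size scales with its bits per weight relative to the source precision. A minimal back-of-the-envelope sketch (the 6.56 bits/weight figure for Q6_K and the fp16-equivalent sizes are approximations for illustration, not exact GGUF numbers):

```python
def estimated_gguf_gb(fp16_size_gb: float, bits_per_weight: float) -> float:
    """Estimate a quantized file size from an fp16 checkpoint size.

    fp16 stores 16 bits per weight, so the quantized size scales
    roughly linearly with the target bits per weight. Real GGUF
    files add some overhead for metadata and mixed-precision layers.
    """
    return fp16_size_gb * bits_per_weight / 16.0


# The fp8 checkpoint mentioned above is ~16 GB at 8 bits/weight,
# i.e. roughly 32 GB at fp16. Q6_K uses roughly 6.56 bits/weight.
print(round(estimated_gguf_gb(32.0, 6.56), 1))  # ~13.1 GB
```

This is why a Q6_K quant can fit on a 12 GB card once the text encoders and VAE are loaded separately, while the full-precision checkpoint cannot.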
@makisekurisu_jp fp8 models work perfectly with 12 GB of VRAM (RTX 3060).
I've never commented on anything here, but I logged in just to say that v16 is unbelievable!!! It's honestly the best checkpoint I've ever used. Nothing comes close to it.
I love you, bro. Best checkpoint ever. Fuckin' love you with all my heart <3. It's just too much love.
I can't help but ask: which one are you talking about? :)
@kellykelly SDXL Pony 16
Thanks for being such a big fan
thanks for single-handedly sustaining the MDMA trade in whatever town you live in
thank you very much for sharing your work with us
But flux1-dev-fp8.safetensors has better coherence between the character's lighting and the background.
The character appears to be lit with artificial studio lighting, which stands out in the outdoor scene and breaks the sense of realism.
you can't really compare XL and FLUX models
@SLACK69 this is not an XL-only model
I am curious whether you bothered to read the model notes, which explain exactly why he can't use Flux, before posting your absurd comment.
I wonder how this would look as an XL DMD model.
Details
Files
Available On (2 platforms)
Same model published on other platforms. May have additional downloads or version variants.