So, here’s how I roll: this workflow generates an image with Chroma via txt2img, upscales and refines it with FluxRefinerV1.1 via img2img for some extra polish, and gives faces a nice touch-up with Face Detailer. On my trusty laptop (rockin’ an RTX 3070 with 8 GB VRAM and 40 GB RAM), it makes a 1120x1120 pic in about 180 to 280 seconds, depending on the prompt. As I said in my previous workflow, I’m new to ComfyUI, so if you’ve got any tips or tricks, I’m all ears! On the bright side, at least I didn’t make a huge Spaghetti Monster this time!
Models:
CLIPs:
VAE:
LoRA:
None
Upscale models:
Custom nodes:
comfyui-upscale-by-model
comfyui_memory_cleanup
comfyui-vrgamedevgirl-main
comfyui_essentials_mb
comfyui-impact-pack
comfy-image-saver
Description
I'm placing this in Flux1 D, because there's no "Chroma" category.
FAQ
Comments (6)
Here via your Reddit post. You actually make me interested in Chroma for the first time. But dear lord, those teeth in your example renders. What on Earth did the people responsible for the face refiner use for training data?
When I had an RDNA 1 AMD GPU, I was able to run full Flux D at near 1 MP with an acceptable render time. It's better to do the initial render at an optimal resolution for the underlying model. For images you really like, I'd recommend upscaling with SeedVR2: just adapt a workflow from video (which no home GPU has enough VRAM for anyway) to take a single image input! If you like face detail, SeedVR2 will blow you away.
The Flux Refiner model version 1.1 says it has trouble with teeth, so that may be the culprit.
@ArtificeAI I might try 1.5 in an updated workflow
LoRA support?
To vastly improve results:
- Download the ChromaHD GGUF (it will allow you to generate at 1024x1024), then feed in 1024x1024 latents.
- Put a RescaleCFG node before the samplers.
- Use the ddim sampler with the ddim_uniform scheduler.
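For reference, "feeding in 1024x1024 latents" just means starting the sampler from an empty latent whose grid is the target image size divided by the VAE's 8x downsample factor. A minimal numpy sketch (the function name is mine, and the channel count is an assumption: Flux-family VAEs like Chroma's use 16 latent channels, while classic SD models use 4):

```python
import numpy as np

def empty_latent(width, height, channels=16, batch=1):
    """Build a zero-filled latent tensor for a given target image size.

    Flux-family VAEs downsample by 8x, so a 1024x1024 image corresponds
    to a 128x128 latent grid. channels=16 is assumed for Flux/Chroma;
    use channels=4 for classic SD-style VAEs.
    """
    assert width % 8 == 0 and height % 8 == 0, "dimensions must be multiples of 8"
    return np.zeros((batch, channels, height // 8, width // 8), dtype=np.float32)

# A 1024x1024 request becomes a (1, 16, 128, 128) latent.
latent = empty_latent(1024, 1024)
```

In ComfyUI itself this is what the Empty Latent Image node produces; the point of the tip is that the latent resolution (not an upscale afterward) is what the model actually generates at.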
Do you mind elaborating on rescale cfg nodes?
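For context, RescaleCFG implements the CFG-rescale trick from "Common Diffusion Noise Schedules and Sample Steps are Flawed" (Lin et al., 2023): plain classifier-free guidance at high scales pushes the guided prediction's statistics away from the conditional prediction's, which tends to over-saturate the image, so the guided result is rescaled to match the conditional std. A minimal numpy sketch of the idea (the function name and default values here are illustrative, not the node's exact internals):

```python
import numpy as np

def rescale_cfg(cond, uncond, guidance_scale=7.0, rescale=0.7):
    """Classifier-free guidance with std-rescaling (Lin et al., 2023)."""
    # Standard classifier-free guidance combination.
    cfg = uncond + guidance_scale * (cond - uncond)
    # Rescale the guided prediction so its std matches the conditional one.
    rescaled = cfg * (cond.std() / cfg.std())
    # Blend between the rescaled and the plain CFG result; rescale=0
    # recovers ordinary CFG, rescale=1 applies the full correction.
    return rescale * rescaled + (1.0 - rescale) * cfg
```

The blend factor is the knob the ComfyUI node exposes; values around 0.5-0.7 are a common starting point.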