Edit photos fast with precise style transfer, relighting, and object control.
Who it's for: creators who want this pipeline in ComfyUI without assembling nodes from scratch. Not for: one-click results with zero tuning — you still choose inputs, prompts, and settings.
Open preloaded workflow on RunComfy
Why RunComfy first
- Fewer missing-node surprises — run the graph in a managed environment before you mirror it locally.
- Quick GPU tryout — useful if your local VRAM or install time is the bottleneck.
- Matches the published JSON — the downloadable zip mirrors the same runnable workflow you can open on RunComfy.
When downloading for local ComfyUI makes sense — you want full control over models on disk, batch scripting, or offline runs.
How to use (local ComfyUI)
1. Load inputs (images/video/audio) in the marked loader nodes.
2. Set prompts, resolution, and seeds; start with a short test run.
3. Export from the Save / Write nodes shown in the graph.
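If you later want batch runs against a local ComfyUI instance, the steps above can also be driven through ComfyUI's HTTP API. A minimal sketch, assuming a default server on 127.0.0.1:8188 and a graph exported with ComfyUI's "Save (API Format)" option; the filename `qwen-image-edit-api.json` is a placeholder for your own export:

```python
import json
import urllib.request

def queue_workflow(workflow: dict, host: str = "127.0.0.1:8188") -> urllib.request.Request:
    """Build the POST request that queues an API-format workflow on a ComfyUI server."""
    payload = json.dumps({"prompt": workflow}).encode("utf-8")
    return urllib.request.Request(
        f"http://{host}/prompt",
        data=payload,
        headers={"Content-Type": "application/json"},
    )

if __name__ == "__main__":
    # Placeholder filename -- use the JSON you exported from your own graph.
    with open("qwen-image-edit-api.json") as f:
        workflow = json.load(f)
    # Node ids and input names depend on your graph; inspect the JSON before editing,
    # e.g. workflow["76"]["inputs"]["prompt"] = "replace the background with a beach"
    req = queue_workflow(workflow)
    with urllib.request.urlopen(req) as resp:
        print(json.load(resp))  # the response includes the queued prompt_id
```

This is handy for looping over a folder of inputs: load the JSON once, rewrite the loader node's image field per file, and queue each variant.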
Expectations — First run may pull large weights; cloud runs may require a free RunComfy account.
Overview
This workflow gives you complete control over editing photos using prompt-based instructions. You can replace backgrounds, insert or remove objects, and change lighting with precision. It's designed for creators who want efficient edits without complex setup. The process is fast, with an optional Lightning LoRA for quick iterations. Style transfer is supported to transform visuals into unique looks. Outputs preserve details while delivering high-quality, natural edits. Perfect for artists who want powerful photo manipulation made simple.
Key nodes in the ComfyUI Qwen Image Edit workflow
TextEncodeQwenImageEdit (#76)
Encodes the main instruction that drives the edit. Favor direct verbs like “replace,” “insert,” “remove,” “recolor,” and “relight.” If the change should be local, name the region or object explicitly. Keep prompts concise; long lists of style tags are rarely needed.
TextEncodeQwenImageEdit (#77)
Provides negative or protective guidance. Use it to tell the model what to avoid or to preserve key attributes. Good patterns: “keep skin tone,” “do not change composition,” “ignore background text.”
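Putting the two text nodes together, a paired instruction might look like this (illustrative example only, not taken from the published workflow):

```text
Positive (#76): replace the cloudy sky with a warm sunset sky
Negative (#77): do not change composition, keep skin tone, avoid oversaturation
```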
LoraLoaderModelOnly (#89)
Applies the Qwen-Image-Lightning LoRA for rapid iteration. Turn it on when you need near-instant results. Reduce sampler steps substantially when this LoRA is active to maintain crisp edits.
ImageScaleToTotalPixels (#93)
Downscales oversized inputs to a target megapixel budget to stabilize quality. Use it when source images are very large or contain heavy compression; it often improves edge smoothness and reduces halos.
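The underlying math is easy to reproduce if you want to pre-size inputs before loading them. A sketch of the idea behind the node, not its exact implementation; the function name is ours, and we assume a megapixel is counted as 1024 × 1024 pixels:

```python
import math

def scale_to_total_pixels(width: int, height: int, megapixels: float = 1.0) -> tuple[int, int]:
    """Scale dimensions so width * height lands near a megapixel budget,
    preserving aspect ratio (the idea behind ImageScaleToTotalPixels)."""
    target = megapixels * 1024 * 1024  # assumed megapixel definition
    scale = math.sqrt(target / (width * height))
    return round(width * scale), round(height * scale)

# e.g. a 4000x3000 source at a 1.0 MP budget comes back around 1182x887
print(scale_to_total_pixels(4000, 3000, 1.0))
```

Because the same scale factor is applied to both axes, the aspect ratio is preserved and the pixel count, not either dimension, is what gets capped.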
CFGNorm (#75)
Normalizes classifier-free guidance behavior so the model follows prompts without pushing artifacts. If you see oversaturation or “over-editing,” lower the strength slightly; if edits feel timid, raise it a bit.
KSampler (#3)
Runs the diffusion loop. Start with modest steps for fp8 and increase only if the edit is incomplete. Keep guidance moderate; very high values can wash out preserved regions. When the Lightning LoRA is on, use very few steps to capture its speed benefit.
Notes
Qwen Image Edit Workflow in ComfyUI | Inpainting, Relighting, Style Transfer — see RunComfy page for the latest node requirements.
Description
Initial release — qwen-image-edit.