Z-Image ControlNet 2.1-2601 Local Inpainting Workflow is a ComfyUI workflow designed for precise local repainting, masked image editing, and structure-controlled visual correction. Instead of regenerating the entire image, this workflow focuses on editing only the selected mask area while keeping the original composition, pose, lighting, background, and main visual identity as stable as possible.
This workflow is built around Z-Image Turbo, using the Qwen 3 4B text encoder, the Z-Image VAE, and the Z-Image Turbo Fun ControlNet Union 2.1-2601 model patch. The main goal is to give creators a more controllable way to repair or redesign specific parts of an image. You can use it to replace clothing, fix a face, repair hands, change an object, repaint a product area, modify a character, add new elements, or correct a broken AI-generated detail without destroying the rest of the picture.
The workflow starts from an input image and a mask. The ImageLoader provides both the original image and the masked area, then the image is encoded into latent space through the VAE. The masked region is passed into the latent repainting stage, so the new generation is mainly applied to the area you selected. This makes the workflow suitable for real production correction, because you do not need to recreate the full image every time a small region fails.
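The masked-repaint idea above can be sketched as a simple latent blend: content generated for the masked region is composited back into the original latent, so everything outside the mask is untouched. This is a minimal illustrative sketch in NumPy, not the actual SetLatentNoiseMask implementation; the array shapes and function name are assumptions.

```python
import numpy as np

def apply_latent_mask(original: np.ndarray, repainted: np.ndarray,
                      mask: np.ndarray) -> np.ndarray:
    """Blend a repainted latent back into the original latent.

    Only the masked region (mask == 1) takes the new content; the rest
    is copied from the original latent, which is why composition,
    lighting, and background outside the mask stay stable.
    """
    if mask.ndim == 2:                      # broadcast [H, W] over channels
        mask = mask[None, :, :]
    return mask * repainted + (1.0 - mask) * original

# toy 1-channel, 4x4 "latent"
orig = np.zeros((1, 4, 4))
new = np.ones((1, 4, 4))
m = np.zeros((4, 4))
m[1:3, 1:3] = 1.0                           # edit only the 2x2 centre
out = apply_latent_mask(orig, new, m)
```

In the real workflow this blending happens in latent space on the VAE-encoded image, but the principle is the same: the mask decides where new generation is allowed to land.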
A key part of this workflow is the high-noise and low-noise prompt structure. The high-noise prompt controls the main semantic change. This is where you describe what the masked area should become: a new outfit, a new object, a different face detail, a new weapon, a corrected hand, a changed product, or a repaired background element. The low-noise prompt is used for refinement. It helps the generated area blend back into the original image by improving color consistency, lighting, texture, and edge transition.
This two-stage prompt structure is useful because local repainting needs both creativity and stability. If the prompt is too strong, the edited area may break away from the original picture. If the prompt is too weak, the change may not be obvious enough. By separating the main concept from the final refinement, this workflow gives users more control over how much the masked region changes and how naturally it merges with the source image.
The ControlNet module is another important part of this workflow. It uses the Z-Image Turbo Fun ControlNet Union 2.1-2601 model patch to provide structure guidance during generation. This helps the workflow preserve pose, silhouette, object direction, and scene layout when editing a selected region. For example, when editing a character, you can keep the original body posture while changing the outfit. When editing a product image, you can keep the product position and only modify surface details. When editing an illustration, you can preserve the composition while correcting a broken part.
The workflow also includes pose and preprocessor-related nodes, such as DWPreprocessor and SDPose-related processing. These are useful when the source image contains a person or character and you need stronger body-structure preservation. This makes the workflow more suitable for character-based editing, anime illustration correction, fashion replacement, cyberpunk character repainting, fantasy armor modification, and pose-aware local redraw.
DetailDaemonSamplerNode is included to enhance local detail during the sampling process. This helps the edited area avoid looking flat or blurry. It can improve fabric edges, hair strands, facial detail, metal reflections, product surfaces, armor texture, vehicle parts, neon materials, and other fine structures. This is especially useful when the original image is already high quality and the inpainted region must match the sharpness of the surrounding area.
The workflow also includes sampler and scheduler control, using a structured sampling process rather than a simple one-click inpaint. The repaint strength, total steps, denoise level, CFG guidance, ControlNet influence, and mask size all affect the final result. For small corrections, a conservative denoise setting is recommended. For stronger replacement, use a larger mask and a clearer prompt. If the result drifts too much, reduce denoise or simplify the prompt. If the repaint is not strong enough, expand the mask area or increase the semantic strength of the high-noise prompt.
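The rule of thumb above can be captured in a small lookup from the intended strength of change to a starting denoise value. The numeric values here are illustrative starting points I am assuming for the sketch, not settings bundled with the workflow; tune them per image.

```python
def recommend_denoise(change: str) -> float:
    """Map the desired strength of change in the masked area to a
    conservative starting denoise value (illustrative defaults)."""
    table = {
        "touch_up": 0.35,   # small repairs: hands, edges, minor artifacts
        "restyle": 0.60,    # keep shape, change surface/texture/material
        "replace": 0.90,    # masked region becomes something new
    }
    return table[change]
```

Start from the table, then adjust: if the edit drifts from the source, step the value down; if the change is too weak, step it up or enlarge the mask.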
Main features:
- Z-Image Turbo local repainting workflow
- Z-Image Turbo Fun ControlNet Union 2.1-2601 support
- Mask-based local inpainting
- High-noise prompt for main semantic change
- Low-noise prompt for final texture and style refinement
- Qwen 3 4B text encoder support
- Z-Image VAE latent workflow
- SetLatentNoiseMask for targeted region editing
- ControlNet-guided structure preservation
- Pose-aware preprocessing support
- DetailDaemon sampling for sharper local details
- Suitable for image repair, object replacement, and character editing
- More stable than full image regeneration
- Useful for AI artwork post-production and Civitai example preparation
Recommended use cases:
Local object replacement, face repair, hand correction, clothing replacement, product image cleanup, anime illustration repair, character redesign, cyberpunk outfit modification, fantasy armor repainting, background object removal, damaged image correction, AI-generated image fixing, masked area enhancement, social media cover correction, product visual adjustment, and Civitai showcase image refinement.
Suggested workflow:
Upload your source image first, then prepare the mask area. The mask should cover the region you want to edit, but it is usually better to make the mask slightly larger than the exact broken area so the model has enough space to blend edges naturally.
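Growing the mask slightly beyond the broken area can be done with a simple binary dilation. This NumPy sketch uses a 4-neighbourhood dilation so it needs no SciPy; in practice you would apply it to your mask image before feeding it into the workflow (the function name and pixel count are assumptions for illustration).

```python
import numpy as np

def dilate_mask(mask: np.ndarray, pixels: int = 1) -> np.ndarray:
    """Grow a binary mask by `pixels` in every direction so the model
    has room to blend edges naturally (4-neighbourhood dilation)."""
    out = mask.astype(bool)
    for _ in range(pixels):
        grown = out.copy()
        grown[1:, :] |= out[:-1, :]     # grow downward
        grown[:-1, :] |= out[1:, :]     # grow upward
        grown[:, 1:] |= out[:, :-1]     # grow rightward
        grown[:, :-1] |= out[:, 1:]     # grow leftward
        out = grown
    return out.astype(mask.dtype)

m = np.zeros((5, 5), dtype=np.uint8)
m[2, 2] = 1                             # a single marked pixel
grown = dilate_mask(m, 1)               # becomes a 5-pixel cross
```

A few pixels of extra margin is usually enough; an oversized mask invites unwanted changes outside the target region.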
Write the high-noise prompt to describe the main change. For example, if you want to replace clothing, describe the new clothing clearly. If you want to repair a face, describe the desired facial expression and quality. If you want to replace an object, describe the new object, material, angle, and lighting direction.
Write the low-noise prompt to describe the final style. This can include terms such as natural lighting, consistent texture, clean edges, matching color tone, detailed fabric, cinematic lighting, realistic surface, or anime-style polish. Keep this prompt focused on refinement instead of rewriting the entire image.
Use a lower denoise value when you only need small repairs. Use stronger denoise when the masked region needs to become something completely new. If the result looks too different from the source image, reduce denoise, reduce prompt complexity, or lower the detail intensity. If the result is too weak, increase the repaint strength, enlarge the mask, or make the high-noise prompt more direct.
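The tuning loop described above amounts to a small feedback rule: nudge denoise down when the result drifts, up when it is too weak, and keep it in a sane range. A sketch with assumed step size and clamp bounds:

```python
def adjust_denoise(denoise: float, result: str) -> float:
    """Nudge denoise based on the observed result, clamped to [0.1, 1.0].

    The 0.1 step and the bounds are illustrative assumptions; adjust
    them to taste for your own images.
    """
    if result == "drifts_too_much":
        denoise -= 0.1
    elif result == "too_weak":
        denoise += 0.1
    return min(1.0, max(0.1, round(denoise, 2)))
```

Run one sample, judge the masked region, adjust, and repeat; most local repairs converge within a few iterations.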
For character images, keep the pose reference active when you need body structure stability. For product images, keep the prompt clean and avoid describing unrelated background details. For anime and illustration work, the workflow can be pushed further creatively, but it is still recommended to control the mask carefully to avoid unwanted changes outside the target region.
This workflow is designed for creators who need practical local editing inside ComfyUI. It is not only a demo workflow, but a useful production tool for image correction, publishable example generation, visual asset repair, character polishing, and before/after comparison creation. With Z-Image Turbo, ControlNet Union 2.1-2601, mask-based latent repainting, two-stage prompting, pose-aware preprocessing, and detail-enhanced sampling, it provides a flexible and efficient solution for controlled local inpainting.
🎥 YouTube Video Tutorial
Want to know what this workflow actually does and how to start fast?
This video explains what the tool is, how to launch the workflow instantly, and walks through my core design logic, with no local setup or complicated environment required.
Everything starts directly on RunningHub, so you can experience it in action first.
👉 YouTube Tutorial: https://youtu.be/LH1FquAz5O8
Before you begin, I recommend watching the full video; having the complete context will help you understand the tool faster and avoid common pitfalls.
⚙️ RunningHub Workflow
Try the workflow online right now — no installation required.
👉 Workflow: https://www.runninghub.ai/post/2011731528195252226?inviteCode=rh-v1111
If the results meet your expectations, you can later deploy it locally for customization.
🎁 Fan Benefits: Register to get 1000 points, plus 100 points per daily login — enjoy RTX 4090-class performance with 48 GB of VRAM!
📺 Bilibili Updates (Mainland China & Asia-Pacific)
If you’re in the Asia-Pacific region, you can watch the video below to see the workflow demonstration and creative breakdown.
📺 Bilibili Video: https://www.bilibili.com/video/BV1LLkFBhEgm/
☕ Support Me on Ko-fi
If you find my content helpful and want to support future creations, you can buy me a coffee ☕.
Every bit of support helps me keep creating — just like a spark that can ignite a blazing flame.
👉 Ko-fi: https://ko-fi.com/aiksk
💼 Business Contact
For collaboration or inquiries, please contact aiksk95 on WeChat.
I continuously update model resources on Quark Netdisk:
👉 https://pan.quark.cn/s/20c6f6f8d87b
These resources are mainly intended for local users, to make creation and learning easier.
