Use an AI Toolkit-trained LoRA with Qwen Image 2512 in ComfyUI via a single RCQwenImage2512 node for preview-aligned generations.
Who it's for: creators who want this pipeline in ComfyUI without assembling nodes from scratch. Not for: one-click results with zero tuning — you still choose inputs, prompts, and settings.
Open preloaded workflow on RunComfy
Why RunComfy first
- Fewer missing-node surprises — run the graph in a managed environment before you mirror it locally.
- Quick GPU tryout — useful if your local VRAM or install time is the bottleneck.
- Matches the published JSON — the downloadable zip contains the same runnable workflow you can open on RunComfy.
When downloading for local ComfyUI makes sense — you want full control over models on disk, batch scripting, or offline runs.
How to use (local ComfyUI)
1. Load inputs (images/video/audio) in the marked loader nodes.
2. Set prompts, resolution, and seeds; start with a short test run.
3. Export from the Save / Write nodes shown in the graph.
Expectations — the first run may download large model weights; cloud runs may require a free RunComfy account.
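If you download the workflow for batch scripting against a local ComfyUI instance (one of the reasons listed above), the steps can be driven over ComfyUI's HTTP API instead of the browser UI. This is a minimal sketch, assuming the default local address `127.0.0.1:8188` and a graph exported via ComfyUI's "Save (API Format)" option; the node ids and input field names it patches depend on your exported graph:

```python
import json
import urllib.request

COMFYUI_URL = "http://127.0.0.1:8188"  # default local ComfyUI address (assumption)

def load_workflow(path):
    """Load a graph exported via ComfyUI's 'Save (API Format)' option."""
    with open(path) as f:
        return json.load(f)

def set_seed_and_prompt(workflow, seed, prompt_text):
    """Patch seed/prompt inputs in place before queuing a test run.
    Which nodes expose 'seed' and 'prompt' depends on your graph."""
    for node in workflow.values():
        inputs = node.get("inputs", {})
        if "seed" in inputs:
            inputs["seed"] = seed
        if "prompt" in inputs and isinstance(inputs["prompt"], str):
            inputs["prompt"] = prompt_text
    return workflow

def queue_prompt(workflow):
    """POST the patched graph to ComfyUI's /prompt endpoint."""
    payload = json.dumps({"prompt": workflow}).encode("utf-8")
    req = urllib.request.Request(
        f"{COMFYUI_URL}/prompt",
        data=payload,
        headers={"Content-Type": "application/json"},
    )
    return urllib.request.urlopen(req)
```

A typical loop would call `load_workflow("workflow_api.json")` once, then `set_seed_and_prompt` plus `queue_prompt` per seed — the short-test-run advice in step 2 applies here too.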
Overview
Qwen Image 2512 LoRA Inference lets you generate text-to-image outputs with Qwen Image 2512 in ComfyUI while keeping AI Toolkit LoRA behavior consistent. The workflow uses RC Qwen Image 2512 (RCQwenImage2512) to execute a Qwen-specific inference pipeline rather than rebuilding the job as a standard sampler graph. Load your adapter through lora_path and tune its strength with lora_scale inside that pipeline, so outputs stay close to the previews you saw during AI Toolkit training.
Important nodes:
AITK LoRA (RCAITKLoRA)
AITK Load Pipeline (Qwen 2512)
AITK Empty Latent (512x512)
Latent Upscale (1.5x to 768x768)
AITK Sampler (Initial txt2img)
AITK Sampler (Refine at 0.35 denoise)
SaveImage
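Conceptually, the graph above can be sketched as an API-format workflow expressed as a Python dict. Only the node class names and the lora_path/lora_scale parameters come from this page; the node ids, input key names, wiring, and the example LoRA filename are assumptions for illustration, not the actual node interfaces:

```python
# Hypothetical API-format sketch of the workflow; input keys and ids are assumed.
workflow = {
    "1": {
        "class_type": "RCAITKLoRA",
        "inputs": {
            "lora_path": "loras/my_aitk_lora.safetensors",  # assumed filename
            "lora_scale": 1.0,  # adapter strength, tune against training previews
        },
    },
    "2": {
        "class_type": "RCQwenImage2512",
        "inputs": {
            "lora": ["1", 0],       # wire the AITK LoRA into the pipeline
            "prompt": "a test prompt",
            "width": 512,           # matches the 512x512 empty latent above
            "height": 512,
            "seed": 0,
        },
    },
    "3": {
        "class_type": "SaveImage",
        "inputs": {
            "images": ["2", 0],     # final image out of the pipeline node
            "filename_prefix": "qwen2512",
        },
    },
}
```

The 1.5x latent upscale and the 0.35-denoise refine pass listed above sit between the initial sample and SaveImage in the real graph; they are elided here to keep the sketch readable.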
Notes
See the RunComfy page for this workflow ("Qwen Image 2512 LoRA Inference in ComfyUI | RunComfy Workflow (Training-Matched Results)") for the latest node requirements.
Description
Initial release — Qwen-Image-2512-LoRA-Inference.