Upscale videos quickly and smoothly into super-clear footage without losing detail.
Who it's for: creators who want this pipeline in ComfyUI without assembling nodes from scratch. Not for: one-click results with zero tuning — you still choose inputs, prompts, and settings.
Open preloaded workflow on RunComfy
Why RunComfy first
- Fewer missing-node surprises — run the graph in a managed environment before you mirror it locally.
- Quick GPU tryout — useful if your local VRAM or install time is the bottleneck.
- Matches the published JSON — the zip follows the same runnable workflow you can open on RunComfy.
When downloading for local ComfyUI makes sense — you want full control over models on disk, batch scripting, or offline runs.
How to use (local ComfyUI)
1. Load inputs (images/video/audio) in the marked loader nodes.
2. Set prompts, resolution, and seeds; start with a short test run.
3. Export from the Save / Write nodes shown in the graph.
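Step 2 suggests a short test run before committing to a full render. One way to script that is a small helper that trims a full configuration down to a quick sanity check; this is a minimal sketch, and the key names (`frames`, `scale`, `seed`) are illustrative stand-ins, not the workflow's actual node inputs:

```python
# Hypothetical helper: derive a quick test-run configuration from full
# render settings. Keeps the seed so the test stays comparable with the
# eventual full run; caps frame count and scale for speed.

def make_test_run(settings: dict, max_frames: int = 16) -> dict:
    """Return a copy of `settings` trimmed for a fast sanity check."""
    test = dict(settings)
    test["frames"] = min(settings.get("frames", max_frames), max_frames)
    test["scale"] = min(settings.get("scale", 2), 2)  # cap at 2x for speed
    return test

full = {"frames": 240, "scale": 4, "seed": 42}
quick = make_test_run(full)  # 16 frames at 2x, same seed
```

Once the quick pass looks right, re-run with the original settings unchanged.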
Expectations — First run may pull large weights; cloud runs may require a free RunComfy account.
Overview
This workflow helps you transform any low-resolution or AI-generated video into clear, high-definition footage. Using advanced diffusion and sparse attention mechanisms, it restores detail and sharpness while maintaining smooth motion. Built for visual creators, it streamlines post-production and video enhancement. You get control over fine textures and the balance between speed and quality, and can upscale clips while keeping results consistent from frame to frame. Ideal for rapid, high-quality video restoration tasks.
Key nodes in the ComfyUI FlashVSR workflow
FlashVSRNode (#152, full)
- Core ultra-fast upscaler in “full” mode. Adjust scale for 2x/4x work, enable color_fix to stabilize luminance, and use tiled_vae or tiled_dit when working at larger resolutions. Tune sparse_ratio, kv_ratio, and local_range only if you see motion softness or temporal drift. Implementation reference: ComfyUI-FlashVSR_Ultra_Fast.
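The advice above, use tiling once resolutions get large, can be expressed as a simple heuristic. This is a sketch under assumed thresholds (the pixel budgets are illustrative, not values from the node's source), showing how one might decide when to flip `tiled_vae` and `tiled_dit`:

```python
# Illustrative heuristic (an assumption, not the node's actual logic):
# enable tiled decoding once the upscaled frame exceeds a pixel budget,
# and DiT tiling only at extreme sizes.

def tiling_flags(width: int, height: int, scale: int,
                 budget: int = 2048 * 2048) -> dict:
    out_pixels = width * scale * height * scale
    return {
        "tiled_vae": out_pixels > budget,
        "tiled_dit": out_pixels > budget * 2,  # reserve for extreme sizes
    }

flags = tiling_flags(960, 540, 4)  # 4x of 960x540 is 3840x2160
```

In practice you would tune the budget to your GPU rather than hard-code it.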
FlashVSRNode (#143, tiny)
- Ultra-fast “tiny” mode for maximum speed. Use it for previews or very long sequences. Same controls as the full node, but expect slightly softer micro-detail. Reference: ComfyUI-FlashVSR_Ultra_Fast.
FlashVSR_SM_KSampler (#146, Pass 1)
- Streaming-quality sampler paired with a TCDecoder-enabled model (#158). Set scale first, then balance cfg and steps for detail vs speed. If VRAM is tight at high resolutions, enable full_tiled and reduce split_num. Implementation details and weights: ComfyUI_FlashVSR.
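The VRAM trade-off described above (enable `full_tiled` and reduce `split_num` when memory is tight) can be sketched as a small lookup. The 12 GB and 8 GB cut-offs here are assumptions for illustration, not measured requirements:

```python
# Sketch of the VRAM guidance: comfortable headroom runs untiled,
# moderate headroom enables full_tiled, and a tight budget also
# reduces split_num. Thresholds are illustrative assumptions.

def sampler_memory_plan(free_vram_gb: float) -> dict:
    if free_vram_gb >= 12:
        return {"full_tiled": False, "split_num": 4}
    if free_vram_gb >= 8:
        return {"full_tiled": True, "split_num": 4}
    return {"full_tiled": True, "split_num": 2}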
FlashVSR_SM_KSampler (#148, Pass 2)
- Second streaming pass with a complementary model setup (#150). Use it to test alternative TCDecoder/embedding combos on the same frames. Keep kv_ratio and local_range consistent across passes when you want a controlled A/B.
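For the controlled A/B the note recommends, it helps to build both pass configurations from one shared base so `kv_ratio` and `local_range` cannot drift apart. A minimal sketch, where the key names and values are assumptions based on the controls named above:

```python
# Hold kv_ratio and local_range fixed across both streaming passes and
# vary only the decoder/embedding setup, so any difference in output is
# attributable to the model combo. Values are illustrative.

SHARED = {"kv_ratio": 0.5, "local_range": 9}

def make_pass(variant: dict) -> dict:
    overlap = set(variant) & set(SHARED)
    if overlap:
        raise ValueError(f"A/B variant must not override {overlap}")
    return {**SHARED, **variant}

pass_a = make_pass({"decoder": "tcdecoder_v1"})  # hypothetical names
pass_b = make_pass({"decoder": "tcdecoder_v2"})
```

Raising on an overridden shared key keeps an accidental change from silently invalidating the comparison.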
WanVideoAddFlashVSRInput (#114)
- Bridges your preprocessed frames into the Wan sampler as FlashVSR conditioning. The strength control determines how assertively FlashVSR restoration is applied relative to any prompt influence. Increase strength when the source is very compressed or AI-generated.
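The guidance to raise `strength` for heavily compressed or AI-generated sources can be captured as a lookup. The tiers and numeric values below are assumptions for illustration, not recommendations from the node's documentation:

```python
# Hypothetical mapping from source quality to FlashVSR conditioning
# strength: degraded sources get a stronger push, clean sources a
# lighter one. Tier names and values are illustrative assumptions.

def pick_strength(source_quality: str) -> float:
    table = {"clean": 0.6, "compressed": 0.85, "ai_generated": 1.0}
    return table.get(source_quality, 0.8)  # middle-of-the-road default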
WanVideoSampler (#27)
- One-step inference inside the Wan pipeline. If you use prompts, start neutral and avoid strong negative lists; let FlashVSR handle restoration while text slightly nudges tone or scene interpretation. Keep steps to one for true FlashVSR behavior in this route.
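As a settings fragment, the one-step route described above might look like the following; the key names are illustrative, not the sampler's exact input names:

```python
# Settings sketch for the one-step Wan route: steps stays at 1 so the
# restoration comes from FlashVSR, and prompts stay neutral so text only
# gently nudges tone. Keys are illustrative assumptions.

wan_sampler_settings = {
    "steps": 1,             # true FlashVSR behavior in this route
    "positive_prompt": "",  # start neutral
    "negative_prompt": "",  # avoid strong negative lists
}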
ColorMatch (#142)
- Harmonizes color back to the source after restoration. Use it to avoid unintended hue or gamma shifts, especially when exporting comparisons.
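The idea behind color matching can be shown in miniature: pull a restored channel's mean and spread back toward the source channel's statistics to undo hue or gamma drift. This is a toy per-channel sketch in plain Python, not the ColorMatch node's actual algorithm:

```python
# Toy mean/std color transfer for a single channel: shift and rescale
# the restored values so their statistics match the source channel.

def match_channel(restored, source):
    def stats(xs):
        mean = sum(xs) / len(xs)
        var = sum((x - mean) ** 2 for x in xs) / len(xs)
        return mean, var ** 0.5
    r_mean, r_std = stats(restored)
    s_mean, s_std = stats(source)
    scale = s_std / r_std if r_std else 1.0
    return [(x - r_mean) * scale + s_mean for x in restored]

matched = match_channel([10, 20, 30], [110, 120, 130])
```

A real implementation works on all channels of every frame, but the principle, aligning channel statistics with the source, is the same.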
Notes
FlashVSR in ComfyUI Workflow | Real-Time Video Restoration — see RunComfy page for the latest node requirements.
Description
Initial release — FlashVSR.