    iGEN: One (ZImage | FLUX1+2 | SD | Qwen | Nunchaku) - v1.2
    NSFW

    iGEN ONE — Workflow Description

    700 nodes · 1187 links · 39 groups · 2 pipeline rows
    Built with the ComfyUI_Eclipse and ComfyUI_SmartLML custom node packs (available on GitHub and the Comfy Registry)


    Overview

    iGEN ONE is a modular, all-in-one image generation and post-processing pipeline for ComfyUI supporting a wide range of diffusion models — Stable Diffusion, Flux 1 & 2, HiDream, Qwen, and more. It is designed as a "kitchen sink" workflow where every major feature (image sources, prompt methods, model enhancements, detailers, upscalers, watermarks) exists as a self-contained group that can be independently enabled or disabled via Mute/Bypass toggles — without breaking the pipeline.

    The workflow is organized into two horizontal rows:

    • ROW 1 (28 groups): Image input → Prompt construction → Model loading → Rendering

    • ROW 2 (11 groups): Post-processing → Detailing → Upscaling → Watermarks → Save


    Tips & Hints

    Sampler Settings

    The sampler settings can be configured in several ways.

    The first and most common approach is to set them in the model loader. When I download a model file, I check a few reference images for recommended sampler settings and save them within the template so they load automatically each time. To use this method, enable the "allow overwrite" option in the Initial Render group so the model loader's values take priority.

    The second approach is to disable "allow overwrite" and use the settings defined directly in the Initial Render group instead.

    The sampler settings in the Detailer groups support three modes:

    1. Use all values from the Detailer's own Sampler Settings node.

    2. Use the values from the model loader assigned to that Detailer group.

    3. Start with the Initial Render values and only override specific fields from the Detailer's Sampler Settings node. For example, if the Initial Render group uses sampler dpmpp_2m and scheduler sgm_uniform, you can carry those into the Detailers by hiding the sampler and scheduler fields in the Detailer group. Any hidden fields are automatically filled from the Initial Render settings.
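    The third mode can be sketched as a simple dictionary merge (illustrative Python only; `merge_sampler_settings` and the field names are assumptions, not the actual Eclipse node API — hidden fields are represented here as `None`):

```python
# Hypothetical sketch of mode 3: fields hidden in the Detailer's
# Sampler Settings node fall back to the Initial Render values.
# Node/field names are illustrative, not the real Eclipse API.

def merge_sampler_settings(initial: dict, detailer: dict) -> dict:
    """Start from the Initial Render settings and override only the
    fields the Detailer group actually exposes (i.e. not hidden)."""
    merged = dict(initial)  # e.g. sampler, scheduler, steps, cfg
    merged.update({k: v for k, v in detailer.items() if v is not None})
    return merged

initial = {"sampler": "dpmpp_2m", "scheduler": "sgm_uniform",
           "steps": 28, "cfg": 5.0}
# sampler/scheduler are "hidden" in the Detailer, so only steps/cfg override:
detailer = {"sampler": None, "scheduler": None, "steps": 12, "cfg": 3.5}

print(merge_sampler_settings(initial, detailer))
# → {'sampler': 'dpmpp_2m', 'scheduler': 'sgm_uniform', 'steps': 12, 'cfg': 3.5}
```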

    Frontend Compatibility Note

    When using a ComfyUI frontend newer than 1.37.11 (e.g. 1.41.21), you must open the sampler SubGraphs after a generation and promote the sampler's preview if you want to see one (right-click the preview and choose promote). The preview promotions were removed from the workflow because on older frontends (before 1.41.21) they cause duplicate previews.


    ROW 1 — Generation Pipeline

    The generation pipeline flows left-to-right across 28 groups. Data is passed between groups using Eclipse's Set/Get node system (named value channels) rather than direct node links, which keeps the visual layout clean.


    Phase 1: Image Source Selection (Groups 1–9)

    The workflow provides four mutually exclusive image input methods. Only one should be active at a time. A priority-based fallback chain using GetAllActiveNode + Any Multi-Switch ("ReturnFirstNonNone") automatically selects the first available image from the enabled sources.
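    The selection rule behind this pattern can be sketched in a few lines (illustrative only; the real GetAllActiveNode / Any Multi-Switch nodes operate on live node outputs inside the graph):

```python
# Minimal sketch of the "ReturnFirstNonNone" fallback pattern.
# Muted or disabled sources are modeled as None.

def return_first_non_none(*candidates):
    """Return the first candidate that is not None, else None."""
    for c in candidates:
        if c is not None:
            return c
    return None

video_frame = None        # Input Video group muted
folder_image = "img_001"  # Image Load from Folder active
single_image = "ref.png"  # Image Load also active, but lower priority

print(return_first_non_none(video_frame, folder_image, single_image))
# → img_001
```

    Because a muted group simply contributes nothing, any combination of sources can be toggled without rewiring the graph.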

    1. Image Load (17 nodes)
    Load a single image from disk. Extracts embedded generation metadata (model, prompt, sampler settings, seed) from the image and can optionally override the workflow's prompt, seed, and sampler settings using those extracted values. This allows "remix" workflows where you load a previously generated image and re-use its settings.

    2. Image Load from Folder (17 nodes)
    Batch processing mode. Loads images sequentially from a folder, with sorting controls (by name/date, ascending/descending) and subfolder traversal. Like Image Load, it can extract and override generation data from each image's metadata for batch remix workflows.

    3. Input Video (Frame) (3 nodes)
    Extract a single frame from a video file. Uses VHS_LoadVideo with a configurable frame skip offset.

    4. Text-only generation (no dedicated group)
    When no image source is active, the workflow generates from text prompts only, using an empty latent.

    After source selection, the image passes through an optional processing chain:

    5. Image Crop (Auto) (7 nodes)
    Automatic subject-aware cropping using SegmentAnything (SAM) for subject detection. Centers the crop on the detected subject and resizes to the target latent dimensions. Best for single-subject images.

    6. Image Crop (Custom) (6 nodes)
    Manual bounding-box cropping with pixel-level inset controls. For precise framing when auto-crop doesn't produce the desired result.

    7. Preview (Cropped Image) (7 nodes)
    Preview checkpoint for the cropping stage. Includes a Stop node to halt execution here for crop verification before proceeding.

    8. Resize Image (9 nodes)
    Resize the reference image to specific dimensions using KJ's resize node. Used when the input image dimensions don't match the target generation size.

    9. Remove Background (10 nodes)
    Background removal using BiRefNet. Isolates the subject on a transparent background, useful for character-focused generation or when the background would confuse the model.

    → Image Preview (8 nodes)
    The final image source checkpoint. Uses GetAllActiveNode to find the first available image from the entire chain (rembg → resize → crop_preview → crop_custom → crop_auto → video → folder → load). Displays a preview and stores the result as "ref_image" for downstream consumption. The fallback priority chain means you can enable any combination of processing steps and the last active one wins.


    Phase 2: Prompt Construction (Groups 10, 20–25)

    The workflow offers six prompt sources, plus three post-processing stages:

    10. Image to Prompt (6 nodes)
    Vision-Language Model (VLM) captioning. Uses SmartLML's Smart Language Model Loader v3 with a Mistral-based model via Transformers backend. Analyzes the reference image and generates a natural language description. Temperature is configurable for factual vs. creative captions.

    20. Raffle (4 nodes)
    Random prompt generation using the Raffle node. Generates tag-based prompts from a curated category system with negative filters to exclude unwanted content. Seed-controlled for reproducibility.

    21. Read Prompt from Files (6 nodes)
    Reads prompts from external text files (one prompt per line). Supports multiple files and uses the seed to select which prompt to use. Includes Replace String v3 for post-processing the loaded prompt.
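    The seed-based selection presumably reduces to a modulo pick over the non-empty lines; a hedged sketch (`pick_prompt` is a made-up helper, not the node's API, and the real node also handles multiple files and the Replace String post-step):

```python
# Illustrative sketch: pick one prompt line, seed-controlled.

def pick_prompt(lines, seed: int) -> str:
    """Seed-controlled, reproducible selection of one prompt line,
    skipping blank lines."""
    lines = [ln.strip() for ln in lines if ln.strip()]
    return lines[seed % len(lines)]

prompts = ["a red fox in snow", "", "a city at dusk", "a sailing ship"]
print(pick_prompt(prompts, seed=7))  # 7 % 3 == 1
# → a city at dusk
```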

    22. Prompt (21 nodes)
    The main prompt assembly group. This is the central hub that combines all prompt sources into the final positive and negative prompts. Contains:

    • Wildcard Processor: Template-based prompt with wildcards for variety (e.g., {slavic|asian|european} woman with {straight|wavy} {blonde|red} hair)

    • Smart Prompt v2 (Subject): Structured subject description builder with dropdowns for gender, age, hair, clothing, etc.

    • Smart Prompt v2 (Settings): Environment and setting descriptions with dropdowns for location, time of day, weather, etc.

    • Join nodes: Combines input prompts (from i2p/raffle/files) with the manual prompt using configurable separators

    • String DeDuplicate: Removes duplicate tags/phrases from the combined prompt

    • Prefix/Suffix: Optional quality tags like "masterpiece, 8K, absurdres"

    The group uses Mute/Bypass Repeaters to toggle between using external prompt sources vs. the built-in wildcard/smart prompt system.
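    The {a|b|c} wildcard syntax shown above can be modeled roughly like this (assumed semantics; the actual Wildcard Processor may support additional features such as nesting or weighted choices):

```python
import random
import re

# Rough model of {a|b|c} wildcard expansion, seed-controlled
# for reproducibility. Assumed semantics, not the real node.

_WILDCARD = re.compile(r"\{([^{}]+)\}")

def expand_wildcards(template: str, seed: int) -> str:
    rng = random.Random(seed)
    # Replace one innermost {a|b|c} group per pass until none remain.
    while _WILDCARD.search(template):
        template = _WILDCARD.sub(
            lambda m: rng.choice(m.group(1).split("|")), template, count=1)
    return template

print(expand_wildcards(
    "{slavic|asian|european} woman with {straight|wavy} {blonde|red} hair",
    seed=42))
```

    The same seed always yields the same expansion, which is what makes the wildcard prompts reproducible across runs.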

    23. Prompt Styler (8 nodes)
    Wraps the positive prompt in a style template (e.g., "An HDR photograph of {prompt}"). Uses Eclipse's Prompt Styler with natural language mode.

    24. Prompt Edit (6 nodes)
    LLM-based prompt rewriting. Uses a GGUF language model to creatively expand or rephrase the prompt. Higher temperature for creative variation.

    25. Save Prompts (3 nodes)
    Saves the final prompt to a text file for archival or reuse. Can append to an existing file for building prompt collections.


    Phase 3: Model Loading & Enhancement (Groups 11–19)

    11. Folder / Size (11 nodes)
    Configuration hub. Sets output folder structure (date-based subfolders), image dimensions, batch size, and VRAM purge behavior using Smart Folder v2.
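    The date-based subfolder behavior might look roughly like this (assumed layout; Smart Folder v2's actual pattern options may differ):

```python
from datetime import date
from pathlib import Path

# Illustrative sketch of date-based output subfolders.

def output_folder(root: str = "output") -> Path:
    """Create (if needed) and return an output folder named by today's
    date, e.g. output/2026-04-02."""
    folder = Path(root) / date.today().strftime("%Y-%m-%d")
    folder.mkdir(parents=True, exist_ok=True)
    return folder
```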

    12. Model Loader (14 nodes)
    Loads the checkpoint using Eclipse's Smart Model Loader. Supports UNet and full checkpoint modes with configurable features (clip, vae, sampler, memory_cleanup, block_swap). Stores all model metadata via SetNodes for the Generation Data system.

    13. LoRAs (8 nodes)
    Dual LoRA Stack setup. Two Lora Stack nodes feed into a Lora Stack Apply, supporting both model_only and simple (model + clip) modes with up to 9 combined LoRA slots.

    14. Model Patcher (16 nodes)
    Optional model modifications. Includes:

    • ModelSamplingFlux: Adjusts Flux-specific guidance parameters

    • ModelSamplingAuraFlow: AuraFlow sampling override

    • DynamicThresholdingFull: CFG thresholding for better prompt adherence

    • PerturbedAttentionGuidance (PAG): Self-attention manipulation

    • SelfAttentionGuidance (SAG): Feature map attention

    • DifferentialDiffusion: Mask-based denoising control

    • CFGZeroStar: Zero-star CFG guidance technique

    • PatchSageAttention: Memory-efficient attention

    • TorchCompileModel: JIT compilation for speed

    • TeaCache: Token caching for faster inference

    15. PuLID (Flux) (5 nodes)
    Standard PuLID identity preservation. Loads a reference face image and applies PuLID to the Flux model to preserve facial identity in the generated image.

    16. PuLID (Flux: Nunchaku) (5 nodes)
    Nunchaku-optimized PuLID. Same concept as above but uses Eclipse's Nunchaku PuLID nodes for compatibility with quantized Nunchaku Flux models.

    17. Flux Redux (12 nodes)
    Flux Redux style transfer. Loads a style reference image and applies it via CLIP Vision encoding + StyleModelApply. Supports a second reference image for blending.

    18. Preprocessor (7 nodes)
    Image preprocessing for ControlNet. Uses DepthAnything for depth map extraction, scaled to target dimensions. Only needed when the ControlNet group is active or a ControlNet LoRA is used.

    19. ControlNet (25 nodes)
    ControlNet conditioning. Supports three modes (via Fast Muter):

    • Standard ControlNet (SDXL Union ProMax)

    • Union Type control (depth, canny, etc.)

    • DiffSynth Qwen/ZIT ControlNet


    Phase 4: Rendering (Groups 26–28)

    26. Initial Render (55 nodes)
    The core sampling group — the largest and most complex in the workflow. This group assembles all inputs and runs the diffusion sampling process. It contains extensive internal routing via Any Multi-Switch nodes and multiple optional sub-features controlled by Mute/Bypass Repeaters:

    Sub-features (via Fast Muter toggles):

    • Initial Render — Main txt2img / img2img sampling

    • Seed Enhancer — SeedVarianceEnhancer for noise variation

    • Noise Injection — Additional noise patterns

    • Stop — Halt execution after render

    • Detail Daemon — DetailDaemonSamplerNode for micro-detail

    • i2i (DiffSynth: Qwen/ZIT) — Qwen-based img2img

    • i2i (DiffSynth: Qwen Lora) — Qwen LoRA variant

    • Flux Guidance — FluxGuidance node for CFG control

    • i2i (Denoise) — Standard img2img with denoise strength

    • i2i (Flux Preproc) — InstructPixToPix preprocessing

    • Negative Prompt — Optional negative conditioning

    Input assembly:

    • GetAllActiveNode collects MODEL, VAE, ref_image, and all pipe data

    • Any Multi-Switch chains select the first available input for each slot

    • Conditioning is assembled from the final positive prompt string

    • Conditioning Zero Out provides empty negative conditioning (Flux default)

    Uses SamplerCustomAdvanced with Smart Sampler Settings v2 for full sampler/scheduler/step/cfg configurability. The rendered latent is decoded via VAEDecode and stored for downstream use.

    27. Latent Upscale (13 nodes)
    Second-pass latent-space upscaling. Takes the initial render's latent (or image re-encoded), applies a lighter sampling pass with different sampler settings for refinement at higher resolution. Uses Smart Sampler Settings v2 for independent parameter control.

    28. Initial Render (Preview) (16 nodes)
    Preview and save checkpoint for the generation phase. Features:

    • Save Images v2 with embedded workflow metadata and generation data

    • Generation Data node assembles all metadata (model, VAE, LoRAs, prompt, dimensions) into a structured info block

    • Optional Stop node to halt before post-processing

    • Preview Image for quick visual check

    • Join node collects all LoRA names for metadata

    This is the boundary between generation and post-processing. The Stop node can prevent the image from entering Row 2.


    ROW 2 — Post-Processing Pipeline

    Post-processing groups flow left-to-right. Each group uses GetAllActiveNode with an expanding priority list to find the "latest" image from the chain. This means each group automatically picks up the output of whichever previous group was last active.


    Phase 5: Face Swap (Group 1)

    1. Face Swap (F2Klein) (24 nodes)
    AI-powered face replacement. Takes a reference face image and swaps it onto the generated image's face/hair while preserving clothing and background. Can load its own dedicated checkpoint (separate from the main model) for the face swap inpainting process. Includes:

    • Smart Model Loader for dedicated model

    • Image desaturation option for the reference

    • Before/After comparison via Image Comparer


    Phase 6: Upscaling (Groups 2, 7–8)

    2. Upscale Image (13 nodes)
    First upscale stage with three toggleable methods:

    • Scale Image to Total Pixels: Resize to target megapixels

    • Smart Sharpen+: Adaptive sharpening

    • Upscale with Model: Neural upscaler (e.g., 4x AnimeSharp)

    7. SeedVR2 (Upscale) (16 nodes)
    AI video upscaler repurposed for single-image upscaling. Uses the SeedVR2 7B diffusion model for high-quality upscaling with:

    • Dedicated VAE and DiT model

    • Optional pre-resize

    • RAM cleanup for Windows memory management

    • LAB color space processing

    8. Rescale Image (12 nodes)
    Final size adjustment. Rescales the image by a configurable factor with bicubic interpolation, followed by smart sharpening. Includes an optional edge crop to remove border artifacts.


    Phase 7: Detailing (Groups 3–6)

    Each detailer group follows an identical architecture pattern (52–53 nodes):

    • SAM2Ultra for subject/region detection (guided by Florence-2 VLM)

    • Detection to Bboxes for bounding box extraction

    • MaskGrow to expand detection masks

    • MaskToSEGS for segment generation

    • Dedicated model loading (optional — can use the main model or load its own)

    • LoRA stack (optional)

    • Custom positive/negative prompts for the detail region

    • SEGS-based inpainting with configurable sampler settings

    • Before/After Image Comparer for visual QA

    3. Detailer: Tiles (53 nodes)
    Tile-based image enhancement. Splits the image into overlapping tiles, re-renders each tile at higher detail, then reassembles. Uses:

    • TTP tile processing for grid split/reassembly

    • VAEDecodeTiled for memory-efficient tile decoding

    • Smart Language Model Loader v3 with WD14 tagger for auto-captioning

    • Replace String v3 to clean up detected tags

    4. Detailer: Face (52 nodes)
    Face-specific detail enhancement. Uses Florence-2 VLM for face detection, SAM2Ultra for precise mask generation. Inpaints faces with prompts targeting detailed skin, eyes, and lips.

    5. Detailer: Eye (52 nodes)
    Eye-specific detail enhancement. Same architecture as Face detailer but focused on eye regions with prompts targeting sharp eyes, eyelashes, and eyeliner.

    6. Detailer: Mouth (52 nodes)
    Mouth-specific detail enhancement. Targets lips and teeth with prompts filtering out asymmetry, gaps, decay, and other dental artifacts.


    Phase 8: Watermark & Save (Groups 9–11)

    9. Create Watermark (Text) (21 nodes)
    Text-based watermark overlay. Creates a custom text image with configurable font, size, and color, then composites it onto the image. Optional effects: Drop Shadow, Outer Glow. Configurable position and opacity.

    10. Create Watermark (Logo) (24 nodes)
    Logo-based watermark overlay. Loads an image file as the watermark logo, with optional desaturation, resize, Drop Shadow, and Outer Glow effects. Configurable positioning.

    11. Save Image (12 nodes)
    Final output assembly and save. This is the terminal group that:

    • Collects the latest image from the ENTIRE pipeline using GetAllActiveNode with all 12 possible output image channels

    • Assembles complete Generation Data from all active pipeline stages (collects model names, VAE names, LoRA names from every detailer and processing group that was active)

    • Join nodes merge all model and LoRA name strings

    • Save Images v2 with full metadata embedding (workflow JSON + generation data text)

    The GetAllActiveNode priority chain for the final image:
    copyright_logo → copyright_custom → rescale → svr2 → mouth → eye → face → tile → upscale → faceswap → initial → ref_image

    This means it always saves the output from the last active processing stage, regardless of which groups are enabled.


    Architectural Patterns

    1. Set/Get Data Routing
    Almost all inter-group data flow uses Eclipse's Set/Get node system. SetNodes publish named values (e.g., "ref_image", "model_init") and GetNodes retrieve them by name. This decouples groups from each other and allows any group to be muted without breaking downstream connections.
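    A toy model of the Set/Get idea (assumed behavior; the real Eclipse nodes resolve these named channels inside the graph, and a muted producer simply never publishes):

```python
# Toy model of Eclipse-style Set/Get routing: SetNode publishes a
# value under a name, GetNode reads it back by name.

class Channels:
    def __init__(self):
        self._values = {}

    def set(self, name, value):
        """SetNode: publish a named value."""
        self._values[name] = value

    def get(self, name, default=None):
        """GetNode: fetch by name; a muted producer leaves the
        channel unset, so downstream fallback logic sees None."""
        return self._values.get(name, default)

bus = Channels()
bus.set("ref_image", "cropped.png")
# A downstream group doesn't care which upstream group produced it:
print(bus.get("ref_image"))   # → cropped.png
print(bus.get("model_init"))  # → None (producer muted)
```

    Decoupling producers from consumers this way is what lets whole groups be muted without breaking downstream links.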

    2. Fallback Priority Chains
    GetAllActiveNode + Any Multi-Switch ("ReturnFirstNonNone") is the core routing pattern. Each group's input uses this to try multiple named values in priority order, automatically falling back to the next available one. This enables the "enable any combination" modularity.

    3. Mute/Bypass Repeater Pattern
    Each group contains internal sub-features controlled by Mute/Bypass Repeater nodes. A Fast Muter or Fast Bypasser provides toggle controls. This gives two-level granularity: entire groups can be muted, and within active groups, individual features can be toggled.

    4. Generation Data Tracking
    The workflow meticulously tracks what was used at each stage. Every group that loads a model, applies LoRAs, or modifies settings stores string representations (model name, VAE name, LoRA list) via SetNodes. The final Save Image group joins all of these into complete metadata.

    5. Detailer Template
    All four detailer groups use an identical architecture template (52–53 nodes). Each has its own model loader (optional), LoRA stack, sampler settings, VLM detection, SAM2 masking, and SEGS inpainting — making each fully independent and configurable.

    6. Node Collector Pattern
    Groups with multiple togglable features use a Node Collector to gather references to all feature nodes. This enables the Fast Muter/Bypasser to control multiple nodes from a single toggle panel.


    Custom Node Packages Used

    Primary (author's own):

    • ComfyUI_Eclipse — Core nodes: loaders, pipes, Set/Get, Mute/Bypass Repeaters, Smart Prompt, Smart Folder, Save Images, Detailers, etc.

    • ComfyUI_SmartLML — Smart Language Model Loader for VLM/LLM integration

    Third-party:

    • VHS (VideoHelperSuite) — Video loading

    • RES4LYF — Advanced samplers

    • Raffle — Random prompt generation

    • pysssss Custom-Scripts — ShowText display

    • KJNodes — Image resize, custom dimensions

    • SeedVR2 VideoUpscaler — AI upscaling

    • Nunchaku — Quantized model support

    • Impact Pack — SEGS detailing system

    • LayerStyle — Drop shadow, outer glow, watermark effects

    • LayerMask — SAM2Ultra, MaskGrow, ImageToMask

    • LayerUtility — ImageAutoCrop, SimpleTextImage

    • Advanced ControlNet — ACN_AdvancedControlNetApply_v2

    • TTP — Tile processing (tile batch, assembly)

    • BiRefNet — Background removal

    Description

    • requires the latest Eclipse version

      • fixed the GGUF clip loader error in all model loaders

    • sorry for v1.1, that was a complete mess -.-

    • re-uploaded with a small fix:

      • the gen-pipe used by Save Images v2 was empty apart from the sampler settings & model names (no prompt etc.)

    • complete rebuild, from the var names to the fallback checks and subgraphs

    • all groups from iGEN 1+2 that made sense are in here, apart from Qwen Edit/SUPIR

    • arrange the groups as you like or delete what you don't need

    Workflows
    ZImageTurbo

    Details

    Downloads
    46
    Platform
    CivitAI
    Platform Status
    Deleted
    Created
    4/2/2026
    Updated
    4/27/2026
    Deleted
    4/27/2026

    Files

    igenOneZimageFLUX12SD_v12.zip

    Mirrors

    Huggingface (1 mirror)
