I would like to explain my work. I know so little, so I have a long bridge to cross. I only work out of Stable Diffusion; I know nothing about ComfyUI, Pony, etc., just straight Stable Diffusion WebUI, which means I use no tokens and everything is done offline on my system only. If some have issues using my models, I'm sorry, I really have no answer other than my settings, but you can see those in the image. I am open to tips or ideas, but again, I only work with what I have and I'm happy just doing that. I am happy to create images or a model for someone if they like; right now I'm just a merger and not a trainer just yet.
Comments (21)
@jrrytng467 Not sure what's going on with v2.0, but everything is coming out very overexposed for me. All of my prompts usually involve "dark" and "at night" (and negative "harsh lighting"), but instead I'm getting super bright, washed out images, like it's full daylight and there was a giant powerful spotlight lighting up the entire scene to the point that everything starts looking faded. Is the VAE fully baked-in like usual?
Otherwise, it looks like it could be something great, but something's just majorly "off" with the lighting and contrast.
I'm not sure. I don't bake in a VAE; I merged this like I did the others. Usually if anything looks even a little worse for whatever reason, I'll just delete it and not use that model. I try to stay consistent in the way I do it. I'll try to remember what model I merged it with.
@jrrytng467 Yeah, I'm thinking it's not the VAE. I can get very good normal-looking results for some prompts, but then it acts really weird for others (and I usually keep my prompts and LoRA load-out very similar). I was thinking it was just a specific word or phrase I was using that was causing it, but on a batch run of multiple images from a single prompt, some results look fine and some look overexposed or blown-out in lighting. It's very similar to how your recent models provide a nice variety in pose, background and view - but it's doing this with the contrast and lighting as well. So instead of adding a pleasant variety, it's just a bit of a frustrating randomness that quite often produces many unusable results.
No idea what would cause that and the only other models I've ever seen behave this way with random lighting/exposure were a few of Felldude's hybrid Pony/Illu/SDXL creations from about a year ago (his 3xThreat 2k v1.0 behaves like this exactly). I know he likes to push boundaries with new and experimental CLIP encoders, but I'm really not sure how that aspect works at all. Anyway, IMD v2 can definitely produce some absolutely stunning gems that likely exceed v1, but due to the very random lighting, it's far more difficult and rare to achieve good results compared to v1. I'll keep messing with it over the weekend and see if I can figure out any reliable work-around to avoid what it's doing, but really not sure how when many pictures look mostly fine and so many have the brightest flash-lighting you could possibly imagine.
@AFD_0 Sometimes I get models where the image looks real good while it's loading, but when it finishes loading, the image is all soft-lit with white spots outlining things. I don't know why it does that, so I just delete it. But I just made a model on TestV4 that had the same issue, and because I wanted that model in with mine, I swapped the merge and put mine on the tail end. I merged it strong, but it ended up coming out clean. I have no idea why it does that.
@jrrytng467 Faded final images with white dots are usually a VAE issue in my experience. To determine that 100%, just load the model and tell it to load the standard SDXL VAE (as opposed to whatever VAE is already baked into the merged models).
The strangest thing I've noticed is that merging 2 models, both with baked-in VAEs, can still occasionally create a merge with a missing or broken VAE. Loading the VAE separately will always fix this, but it's a pain in the ass imo, and I generally avoid using models that need the VAE loaded separately. When you're doing your merges, I'd look for an option that allows you to bake in the standard VAE to ensure that colors/lighting are always correct (and to avoid the weird white dots that occur without one).
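(For anyone curious what "baking in" the VAE looks like under the hood: in A1111-style SD/SDXL checkpoints the VAE lives under the `first_stage_model.` key prefix in the state dict, so baking in a known-good VAE just means overwriting those keys after the merge. A minimal sketch, with plain floats standing in for the torch tensors a real checkpoint would hold:)

```python
# Hedged sketch: overwrite a merged checkpoint's VAE weights with a
# known-good VAE's weights. Assumes A1111-style key naming, where the
# VAE sits under the "first_stage_model." prefix; real state dicts hold
# torch tensors loaded/saved via safetensors, not plain floats.

VAE_PREFIX = "first_stage_model."

def bake_in_vae(merged_sd, good_vae_sd):
    """Return a copy of merged_sd with every VAE key replaced from good_vae_sd."""
    fixed = dict(merged_sd)
    for key, value in good_vae_sd.items():
        # good_vae_sd uses bare VAE key names; re-add the checkpoint prefix
        fixed[VAE_PREFIX + key] = value
    return fixed

# toy example: the merge left a broken VAE weight behind
merged = {
    "model.diffusion_model.w": 0.5,
    "first_stage_model.encoder.w": 9.9,  # broken VAE weight from the merge
}
good_vae = {"encoder.w": 1.0}

fixed = bake_in_vae(merged, good_vae)
print(fixed["first_stage_model.encoder.w"])  # 1.0
```

(The UNet and text-encoder keys are untouched; only the `first_stage_model.` block changes, which is why a bad merge can be rescued either this way or by loading an external VAE at generation time.)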
@AFD_0 I try to look for models that have a baked-in VAE so I can stay away from those. Like I said, I don't bake them into my models, but I guess they could be getting in if someone doesn't state that in their model. Thank you for that tip, very helpful, and now I know why. I'm going to create another Test_v1 now; wish I knew that before. I hate putting things out that can cause issues for others when I don't understand myself what's going on.
@jrrytng467 Yeah, I know there are a few custom SDXL-compatible VAEs out there, but I think the vast majority are just the standard SDXL_VAE, probably from the very first SDXL 1.0 model, I'm guessing. I wish I knew more about it myself; just sharing what I've learned from my little experience with merging, but mostly my troubleshooting experiences as an end-user (and the faded image with white dots everywhere thing is definitely VAE-related from what I've seen).
@AFD_0 I see you got the model to work for you? So what did you end up doing? The image came out real clear, with a lot of detail.
@jrrytng467 Thanks! Surprised you even saw it with the site issues going on. All I can see in the IMD v2.0 gallery is "no results found" and nothing I posted tonight shows on my feed yet.
And I actually haven't done any further troubleshooting with IMD v2.0 (been busy playing around with test_ v4.0). That image was just one of the good results from last night. It's strange. Was just using my usual prompts that all turn out good-to-great on most of your prior models. Some turn out amazing, some have the super-bright lighting thing going on (which imo means it's probably not a VAE-related issue).
I'm thinking maybe this is happening because those prompts are a little longer and slightly more descriptive than the others.. or because those particular prompts are using character LoRAs (both my own and others', vs having no character LoRA loaded or character described).. or it could also be the difference between prompting "20-year-old woman" vs "20yo woman" (Pony prompts like "rating_safe" seem fine). I think I'm leaning more towards the character LoRAs causing the bright lighting, which I've seen happen with some models in the past (<20 out of ~600 models I've tested). Many models that can draw out the likeness perfectly fine still draw out other elements in the LoRA's training data to varying degrees, I think, like lighting, prominent colors and image quality. Minor aberrations are unfortunately fairly common, but becoming too bright and blown-out/over-exposed from a LoRA has been pretty rare in my testing and hasn't occurred in any of your prior models (at least from the test_, ChasingMyVision and In_My_Dreams series). Their likeness replication has always varied to some degree, but other than IMD v2.0, they've all done exceptionally well in not scavenging unwanted elements and attributes from a character LoRA (ie, just pulling the likeness for face and body, but not the wallpaper hue/shade or another non-character portion of the training data). Honestly not sure, but other than those observations, I really don't think I'll come to any more of a definitive conclusion for this behavior.
Actually, the LoRAs might not be the issue either. One of mine works perfectly fine with IMD v2.0 without contrast/lighting issues. Might just be a particular prompt token difference that's triggering it to happen, but still not sure why those same prompts never caused this issue with your prior models. I'm just gonna keep playing with test_ v4 for now, it's really fun!
@AFD_0 Well, I'm glad you're getting enjoyment from it. Lately I've just been playing around with them, not really putting much thought into any of them. I experience issues here and there with them; I think the more I put into something, the more issues I start to have. I had one where just removing the words "pubic hair" from the prompt would make it load a blank screen; put them back in and it loaded OK, huh. I think I will just start to create models instead of building on the ones I have. I don't know if you've tried my anime model; I played around with that one last night, kinda testing it. I decided not to do anything with that model; to me it's perfect just the way it is. Kind of amazing to me, really. I've never been a big fan of anime, but this model is fun. I throw any prompt at it and it stays anime without crossing over to real photo images. You should try that one at least once and see what you think of it.
I wish I could take the time to study things on a level like you do, but because it's still all so new to me, I'm just having too much fun. Hopefully that comes later so I can learn more. I really want to learn to make the videos, but I want the time it will take for me to really dive into it and understand it all.
Thank you again for the time you put into looking at my models the way you do. I really get more from that than the compliments I get from others.
@jrrytng467 My pleasure and thank you again for sharing your works with us! I'm just a hobbyist with all of this as well and still haven't quite figured everything out, but with time, you end up learning many different weird things about the way stuff works and some strange things that can happen unexpectedly. The technical "why" it did something unusual or unexpected, I really have no idea, but with enough data or similar experiences, sometimes you can narrow things down a bit and find what is causing it.. or, sometimes you just get a blank image and have no idea where to start looking!
@AFD_0 That's what the delete button is for :)
@AFD_0 I was having an issue merging checkpoints of mine together and I couldn't remember what you said about it so I came back to this message. Wow, helped me so much. Thank you
@jrrytng467 Other than my notes, I've completely lost track of my opinions on most everything. The one thing I do remember, is that Test_ v4 is still amazing and I've been using that model constantly!
@AFD_0 well your input helped me with a model I posted last night, otherwise it wouldn't have happened
@jrrytng467 That's awesome to hear! Really glad my long ramblings can be useful sometimes, lol. Just saw that you posted ChasingMyVision v4.0 and had to download it immediately! Hopefully I'll get a chance to play with it tonight, cause your sample images look really good!
And I really can't state enough how insanely good Test_ v4 has been. I literally spent the entire weekend using it on just a single prompt! Kept stacking LoRAs on it, made it as long/wordy as possible and even went crazy making 250+ step versions. Found a few very narrow issues here and there (mostly just finger issues), but was surprised how it was the only model I've ever tried that I could throw everything at it and it would still make absolutely beautiful pictures. The only thing I couldn't figure out was how to make giant magical pillars of light beams emanating upwards from her hands like a sorceress. Got so close, but think that's such a very complicated thing to achieve from any model. I really need to make a specific "spellcaster" LoRA for something like that, though I'll give it a try in CMV tonight just in case!
@AFD_0 Most of the time it's all in the prompts, and I've seen crazy stuff from either removing a word or putting it in front of or behind another word. Ask GPT; it knows Stable Diffusion and might help. Anyways, the issue I had was (read the construction of the V4) a smokey image with white dots while trying to merge 2 of my models. I didn't think of it being a VAE issue because I did both of them the same. Anyways, I tried deleting the VAE in both models and merged them together, and it worked, so I guess one was different from the other, but I tried it because of what you told me from your experience. Now I'm just curious how it behaves for other people. The only issue I really have is I think the hands struggle a little more, but it's not bad. I'm curious how it works under normal settings for you, because I know you had an issue with one of them. Sounds to me, though, like you should give yourself a little break from it all, or stop testing for a while and just have fun.
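(The strip-the-VAE-then-merge workflow described above can be sketched in a few lines. This is a minimal illustration, assuming A1111-style checkpoints where the VAE sits under the `first_stage_model.` prefix and the merge is a plain weighted sum; plain floats stand in for the torch tensors a real checkpoint holds:)

```python
# Hedged sketch: drop the (possibly mismatched) VAE keys from both
# checkpoints, then do a plain weighted-sum merge of what remains.
# Assumes A1111-style key naming ("first_stage_model." = VAE); an
# external VAE would then be loaded at generation time.

VAE_PREFIX = "first_stage_model."

def strip_vae(sd):
    """Remove every VAE weight from a state dict."""
    return {k: v for k, v in sd.items() if not k.startswith(VAE_PREFIX)}

def weighted_sum(sd_a, sd_b, alpha=0.5):
    """merged = (1 - alpha) * A + alpha * B, over the keys both models share."""
    return {k: (1 - alpha) * sd_a[k] + alpha * sd_b[k]
            for k in sd_a.keys() & sd_b.keys()}

model_a = {"model.diffusion_model.w": 1.0, "first_stage_model.enc": 5.0}
model_b = {"model.diffusion_model.w": 3.0, "first_stage_model.enc": 7.0}

merged = weighted_sum(strip_vae(model_a), strip_vae(model_b), alpha=0.5)
print(merged)  # {'model.diffusion_model.w': 2.0} -- no VAE keys left
```

(Deleting the VAE first means the two checkpoints' mismatched VAE blocks never get averaged together, which is one plausible explanation for why this fixed the smokey/white-dot images.)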
@jrrytng467 Yeah, I did take a little bit of a break and spending that long on a single prompt was actually my idea of fun, I think. Just kept trying new things to see if I could break it, but alas, it kept on trucking!
Mentioned this in the other thread, but I wasn't able to load your new ChasingMyVision v4.0 at all (and noticed someone else was having either the exact same or a very similar issue). Tried resetting, restarting and reloading everything multiple times and no luck at all. Not really sure what would cause such an issue, but I do believe the other person is correct that it is somehow VAE-related. I would think that loading an external VAE would correct any issue like that, but that doesn't seem to be the case.
Dang, and here I was excited all day waiting to play around with something new! XD
@AFD_0 Test_V1 V5.0 is the fix. I moved it to Test_V1 because I took out chasing my vision and replaced it with test V4.0 as the base, thought that might make you happy
@jrrytng467 Sounds good to me! And the ChasingMyVision series has been 100% top-notch as well. Can't remember exactly what or why, but I can see that I gave each version a thumbs up (other than v4 since I couldn't test it, of course).
I'm downloading Test_ v5.0 right now, but since I'm at home it'll probably take a really long time to complete (depending on how many times I have to restart it, lol). I'll let you know what I find out as soon as I can!
Just wanted to let you know that I'm probably going to have to wait until Wednesday night to try out Test_ v5.0 after I download it from work. Kept getting restarts at home tonight. Took 2 hours to get to 70% and the d/l timed-out on me. Think I need to use Jdownloader or something, lol. Please make sure it doesn't disappear, thanks! :)