Version names include the base model each was trained from; use the appropriate version for each model. The Rouwei version won't work well with Noob, and vice versa. Old Noob versions will also be too weak on fresh Noob checkpoints, and eps and vpred versions should likewise be used with their dedicated base models. Don't forget to check the version descriptions for more specific info and the list of supported style tags.
Anima v4 info: Update for the recent Anima preview 3 version. The list of styles is the same.
Anima v3.5 info: Same version as anima-p2_v3, but trained on the full range of blocks. Should™ just work better overall. Out-of-dataset results are still fine. The list of styles is the same; grid.
Anima v3 info: V3 is the version trained from and for Anima preview 0.2. It still won't affect data outside of the dataset much. The style list is the same as v2; grid with styles.
Anima v2 info: A bit late, since the 0.2 preview is already out, but whatever. It was trained from the 0.1 version of the preview model. Catastrophic forgetting was addressed and improved; at least it's not so disastrous now. Mixing between styles was also slightly improved, based on empirical tests. List of all styles and grid. It will not work as well with the 0.2 preview.
Anima v1 info: The Anima version of the LoRA comes with a limited dataset, more like a test version; list of styles included. The base model is at an early stage and too incoherent at anything above 1MP, so if you decide to upscale, you will need to use tiled methods unless you're okay with artifacts. Style tags should be invoked with @ before them. Mixing somewhat works, but not the way it did with default XL weighting; it behaves more like compel mode.
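To illustrate the "@" style-tag convention mentioned above, here is a minimal sketch of how a prompt could be assembled. The tag names ("styleA", "styleB") and the helper itself are hypothetical placeholders; check the version description for the actual supported style tags.

```python
# Hypothetical helper showing the "@" prefix convention for style tags.
# "styleA"/"styleB" are placeholder names, not real tags from this LoRA.

def build_prompt(style_tags, subject_tags):
    """Prefix each style tag with '@' and join everything into one prompt."""
    styled = [f"@{tag}" for tag in style_tags]
    return ", ".join(styled + subject_tags)

prompt = build_prompt(["styleA", "styleB"], ["1girl", "solo", "masterpiece"])
print(prompt)  # @styleA, @styleB, 1girl, solo, masterpiece
```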
MIO - "mix in one" versions, with a mix of AI styles and some real artists in the dataset.
BIO - "best in one" versions, containing only artificial images in the dataset, roughly 90% naiv3 or styles from local models with a very distinctive look.
Description
Version trained from Noob-vpred-0.6. Should be compatible with 0.65 and 0.5 as well. The main difference is a filtered and expanded dataset. List of all styles in this version.
FAQ
Comments (8)
This is insanely good, holy shit.
I need one with the updated dataset that uses vpred 0.6, but for eps 1.1, pretty plz
I'll see what I can do
@bakariso thank you for baking it!!!!
Very nice styles in a LoRA! Thanks for your contributions!
There is one thing I want to point out: the LoRA also learns censoring because of censored training images. (It is still removable, but adding "censored" to negative prompts is not enough.)
Perhaps tagging those censored images with the tag "censored" may help.
You should not put "uncensored" into negatives if you want uncensored gens; put it in the positives instead. Everything was already properly tagged, and I haven't experienced any issues like this.
@bakariso Thanks for your reply! I made a mistake in my comment, so I modified it. I put "censored" into negatives, not "uncensored". Actually, it is not a big problem. As you said, adding "uncensored" to positives and "censored" to negatives could help.
@k34339878767 Yeah, it's very simple to just use neg and pos simultaneously