Style LoRA based on the artwork of Paper Tiger/Kamitora, who was active from the early 2000s until disappearing without notice a few years ago. The Patreon is gone, and the main website survives only on the Internet Archive. The only recent trace I've found was a post on an old anime board that looked like a hacked account setting up a fake-Patreon scam, with no follow-up since that initial announcement in the comments. As such, no commercial use allowed.
The style revolves heavily around light BDSM and futanari/newhalf, and any penis present is huge. The women are often in tears or outright crying, with a lot of drool and other fluids.
Recommended weight: 0.7-1.0
I played around with a few other LoRAs in combination and they behaved well together, especially when compounding fluid generation and genital size. Adjust your weights accordingly.
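In A1111-style web UIs, LoRAs are applied inline with the `<lora:name:weight>` prompt syntax, which is how you'd stack this one with others at adjusted weights. A minimal sketch (the function names and the second LoRA are hypothetical, and the filenames are placeholders):

```python
def lora_tag(name: str, weight: float = 0.7) -> str:
    """Build an A1111-style inline LoRA tag, e.g. <lora:kamitora_style:0.7>."""
    return f"<lora:{name}:{weight}>"

def build_prompt(base: str, loras: dict) -> str:
    """Append one inline tag per LoRA so each weight can be tuned independently."""
    tags = " ".join(lora_tag(name, weight) for name, weight in loras.items())
    return f"{base} {tags}"

# This LoRA at the recommended 0.7, plus a second (hypothetical) fluids LoRA
# turned down to 0.5 to avoid over-compounding.
prompt = build_prompt("1girl, crying, drooling",
                      {"kamitora_style": 0.7, "fluids_helper": 0.5})
print(prompt)
# 1girl, crying, drooling <lora:kamitora_style:0.7> <lora:fluids_helper:0.5>
```

Lowering the second weight when two LoRAs reinforce the same feature is the "adjust accordingly" part: the effects add up, so equal full weights tend to overcook it.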
Intended for anime models. On realistic models, weights above 0.5 tend to cartoonify the whole image, usually while also adding a huge penis to somebody unless you counter it with negative tags.
This model was trained on 750 images.
This is my first LoRA training, so I'm not sure what I'm doing. There are no trigger words, because I don't understand why you'd put the LoRA in the prompt and then NOT trigger it.
There is a good chance the watermarks and stamps show up regardless of prompt, but I was able to clear them out easily with img2img inpainting and negative tags like 'stamp' and 'watermark'.
V2.0:
Went through 175 of the training images and removed the stamps, lettering/text, kanji, and watermarks by hand. Overall, though, the results feel weaker, with more mutations, body merges, and the like; you may have better luck with stronger negative prompts. I'll probably keep working through the rest of the original training images and see if I can get a better result for v3.
V3.0:
We hit 1000 downloads, so I'm posting the final version. I've been using it for a few days and it gives the best compatibility so far across different models and LoRA combinations. I'm including the training dataset in case anybody wants to take it and try to improve it, but I'm moving on to other work from here.
{
"unetLR": 0.0005,
"clipSkip": 2,
"loraType": "lora",
"keepTokens": 0,
"networkDim": 24,
"numRepeats": 10,
"resolution": 512,
"lrScheduler": "cosine_with_restarts",
"minSnrGamma": 5,
"targetSteps": 9845,
"enableBucket": true,
"networkAlpha": 16,
"optimizerArgs": "weight_decay=0.1",
"optimizerType": "AdamW8Bit",
"textEncoderLR": 0.00005,
"maxTrainEpochs": 11,
"shuffleCaption": true,
"trainBatchSize": 4,
"flipAugmentation": true,
"lrSchedulerNumCycles": 3
}
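For anyone retraining from the included dataset, the JSON above maps onto kohya-ss sd-scripts flags roughly as follows. This is a sketch, not the exact command I ran: the paths are placeholders, and numRepeats is set through the dataset-folder naming convention (a `10_<name>` subfolder) rather than a CLI flag.

```shell
# Hypothetical kohya-ss sd-scripts invocation reconstructed from the settings above.
# numRepeats=10 is encoded in the dataset layout: images go in <dataset>/10_kamitora/
accelerate launch train_network.py \
  --pretrained_model_name_or_path="/path/to/base_model.safetensors" \
  --train_data_dir="/path/to/dataset" \
  --output_dir="/path/to/output" \
  --network_module=networks.lora \
  --network_dim=24 --network_alpha=16 \
  --unet_lr=5e-4 --text_encoder_lr=5e-5 \
  --optimizer_type=AdamW8bit --optimizer_args weight_decay=0.1 \
  --lr_scheduler=cosine_with_restarts --lr_scheduler_num_cycles=3 \
  --min_snr_gamma=5 \
  --resolution=512 --enable_bucket \
  --train_batch_size=4 --max_train_epochs=11 \
  --shuffle_caption --keep_tokens=0 \
  --flip_aug --clip_skip=2
```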