Because why not.
It's a little inconsistent because the training data is extremely limited, but the style is still very recognisable. The trigger word is "m0nd0 jacket".
The big file is the 14B version; the small one is the 1.3B version. I don't think this LoRA warrants two pages, so they're both here.
The 1.3B version is untested; I just ran the same training parameters through the smaller model.
Version note: better captions.