People who vote in your polls have no idea wtf they're talking about.
This is a completely broken / unusable model.
Under static constraints, proper established character sheets, and long-term memory inserts, it completely breaks down. And I don't mean like a normal model that's just a bit dumb; this one is legit braindead the second it's out of the most basic prompt. It failed ALL my tests (even stuff Llama 2 passes). From command line to web navigation to character-sheet adherence, it's the first model that scores a full zero on every metric. The thinking-tag content is just random fluff that doesn't actually give the model anything to build on.
Wtf? The fact that your Discord peons voted that thing up is concerning, tbh.
Edit: Sorry, btw, I appreciate your work on models. This rant isn't really aimed at you; bad tunes happen, and you at least made the effort to try. But that voting thing, it ruffles feathers :D
Hmm... that's a lot of thumbs up. Could you try Cydonia v4zk and/or v4zg in BeaverAI?
4zg, 4zj, 4zk and 4zi all show the same behavior: writing a book in response to the sysprompt. I still like the corpus.
Right now I'm on HF downloading the 22B v1; rumor has it I should know it for comparison.
> Hmm... that's a lot of thumbs up. Could you try Cydonia v4zk and/or v4zg in BeaverAI?
I wish I could still afford the time to test beta versions of models, but between recent releases, my own AI-adjacent projects, and doing actual work that pays the bills, I sadly don't have the time anymore. Sorry, mate. I'll still look at your new full releases when I get the time. Precog was fun, btw (not smart, but fun to play with).
Cheers.
Magidonia v4.3 is close to Cydonia v4.1 (after Cydonia v4.2, I call that a win :P).
Not tested, but if Magidonia v4.3 does support images (like its base) then it wins in my book.
I usually only use min_p 0.07, and past 1k-2k tokens I sometimes enable XTC (a probability of 0.2-0.5 and a threshold of 0.2 should work well).
So yes: no penalties, no top_p, etc. That's what sets the 24B Drummer finetunes apart from the rest in my limited testing.
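For what it's worth, here's a minimal sketch of those settings as a request to a llama.cpp server's /completion endpoint. The local URL, the placeholder prompt, and the exact XTC probability (0.3, picked from the 0.2-0.5 range above) are my assumptions; parameter names follow llama.cpp's API, and other backends may spell them differently.

```python
import requests

# Sampler settings from the comment above, as a llama.cpp /completion call.
payload = {
    "prompt": "Your prompt here.",  # placeholder
    "min_p": 0.07,            # the only sampler kept on
    "top_p": 1.0,             # effectively disabled
    "top_k": 0,               # disabled
    "repeat_penalty": 1.0,    # no penalties
    # Enable XTC only once the chat is past ~1-2k tokens:
    "xtc_probability": 0.3,   # assumed value within the suggested 0.2-0.5
    "xtc_threshold": 0.2,
}

resp = requests.post("http://localhost:8080/completion", json=payload)
print(resp.json()["content"])
```

If you're on a frontend like SillyTavern instead, the same min_p and XTC values go in its sampler panel rather than a raw request.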