Oh yeah, for sure, I'll hit you up sometime! Just to be clear, I wasn't asking you to upload all the personal tweaks you've probably spent weeks improving haha. I was just curious about some of the things you said. For example, when you said "Extract a small LoRA from this" I was actually a little confused haha. As in: I have no idea how to do that, let alone how to apply it to smooth out other models in the merge.
I know about adapter models, and that you can create them with LoRA fine-tuning and then either load them on top of the base model during inference or merge them into it, but extracting a LoRA from an existing model is kinda confusing me haha (sorry!). It sounds interesting though! Do I understand correctly that this would let you kind of "operate" on the model more precisely and with a lot less compute (i.e. more merges you could make and test in a given time window)?
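If it helps frame my question: my rough guess is that "extracting" means taking the weight difference between a fine-tune and its base model and compressing it into a low-rank pair with a truncated SVD, so it behaves like a regular LoRA adapter afterwards. Purely my speculation, not your actual workflow — here's a toy NumPy sketch of that idea for a single weight matrix:

```python
import numpy as np

def extract_lora(w_base, w_ft, rank=8):
    """Approximate the weight delta (w_ft - w_base) with a rank-`rank`
    factorization b @ a via truncated SVD. The (a, b) pair then plays
    the role of a LoRA adapter for this one layer."""
    delta = w_ft - w_base
    u, s, vt = np.linalg.svd(delta, full_matrices=False)
    # Keep only the top-`rank` singular directions of the delta.
    b = u[:, :rank] * s[:rank]   # shape (out_dim, rank)
    a = vt[:rank, :]             # shape (rank, in_dim)
    return a, b

# Toy demo: a "fine-tune" whose delta from base is genuinely low-rank,
# so a rank-4 extraction should recover it almost exactly.
rng = np.random.default_rng(0)
w_base = rng.standard_normal((64, 32))
true_b = rng.standard_normal((64, 4))
true_a = rng.standard_normal((4, 32))
w_ft = w_base + true_b @ true_a

a, b = extract_lora(w_base, w_ft, rank=4)
err = np.linalg.norm((w_base + b @ a) - w_ft)
```

If that's roughly right, it would also explain the compute angle: you only ever store and apply the small `a`/`b` matrices instead of a full second copy of the weights.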