Dataset
tatsu-lab/alpaca
How to use monsterapi/gpt2_alpaca-lora with PEFT:
```python
from peft import PeftModel
from transformers import AutoModelForCausalLM

base_model = AutoModelForCausalLM.from_pretrained("gpt2")
model = PeftModel.from_pretrained(base_model, "monsterapi/gpt2_alpaca-lora")
```

We finetuned gpt2 on the tatsu-lab/alpaca dataset for 5 epochs using the MonsterAPI no-code LLM finetuner.
This dataset is HuggingFaceH4/tatsu-lab/alpaca unfiltered, with 36 instances of blatant alignment removed.
The finetuning run completed in 20 minutes and cost us only $3.
Base model
openai-community/gpt2