How to use shauray/Origami_WanLora with Diffusers:

```shell
pip install -U diffusers transformers accelerate
```

```python
import torch
from diffusers import DiffusionPipeline
from diffusers.utils import export_to_video

# switch to "mps" for Apple devices
pipe = DiffusionPipeline.from_pretrained(
    "Wan-AI/Wan2.1-T2V-14B", dtype=torch.bfloat16, device_map="cuda"
)
pipe.load_lora_weights("shauray/Origami_WanLora")

prompt = "[origami] a crafted grasshopper moving on the jungle floor, dead leaves all around, huge trees in the background."
output = pipe(prompt=prompt).frames[0]
export_to_video(output, "output.mp4")
```
Origami LoRA for WanVideo2.1
Example prompts:
- [origami] a crafted grasshopper moving on the jungle floor, dead leaves all around, huge trees in the background.
- [origami] a monkey swinging on a branch of a tree, huge monkeys around them.
Trigger words
You should use `origami` to trigger the video generation.
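Both example prompts above carry the trigger as a leading `[origami]` tag. A tiny hypothetical helper (the `with_trigger` name is mine, not part of this model) that makes sure the token is present before a prompt is passed to the pipeline:

```python
def with_trigger(prompt: str, trigger: str = "origami") -> str:
    """Prepend the [trigger] tag unless the trigger word already appears in the prompt."""
    tag = f"[{trigger}]"
    if tag in prompt or trigger in prompt.split():
        return prompt  # trigger already present, leave the prompt unchanged
    return f"{tag} {prompt}"
```

For example, `with_trigger("a paper crane unfolding")` yields `"[origami] a paper crane unfolding"`, while prompts that already mention the trigger pass through untouched.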
Using with Diffusers
```shell
pip install git+https://github.com/huggingface/diffusers.git
```

```python
import torch
from diffusers.utils import export_to_video
from diffusers import AutoencoderKLWan, WanPipeline
from diffusers.schedulers.scheduling_unipc_multistep import UniPCMultistepScheduler

# Available models: Wan-AI/Wan2.1-T2V-14B-Diffusers, Wan-AI/Wan2.1-T2V-1.3B-Diffusers
model_id = "Wan-AI/Wan2.1-T2V-14B-Diffusers"
vae = AutoencoderKLWan.from_pretrained(model_id, subfolder="vae", torch_dtype=torch.float32)
pipe = WanPipeline.from_pretrained(model_id, vae=vae, torch_dtype=torch.bfloat16)

flow_shift = 3.0  # 3.0 for 480P, 5.0 for 720P; this example generates at 480P
pipe.scheduler = UniPCMultistepScheduler.from_config(pipe.scheduler.config, flow_shift=flow_shift)
pipe.to("cuda")

pipe.load_lora_weights("shauray/Origami_WanLora")
pipe.enable_model_cpu_offload()  # for low-VRAM environments

prompt = "origami style bull charging towards a man"
output = pipe(
    prompt=prompt,
    height=480,
    width=720,
    num_frames=81,
    guidance_scale=5.0,
).frames[0]
export_to_video(output, "output.mp4", fps=16)
```
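If the origami effect is too strong or too weak, diffusers lets you load a LoRA under a named adapter and rescale it with `set_adapters`. A minimal sketch, assuming the LoRA was loaded with `adapter_name="origami"` (the adapter name is my assumption, not from this card):

```python
def set_origami_strength(pipe, scale: float):
    """Rescale the 'origami' LoRA adapter on an already-loaded pipeline.

    Assumes the LoRA was loaded with:
        pipe.load_lora_weights("shauray/Origami_WanLora", adapter_name="origami")
    """
    # A scale of 1.0 is the trained strength; lower values soften the style.
    pipe.set_adapters(["origami"], adapter_weights=[scale])
```

Call it between loading the weights and running inference, e.g. `set_origami_strength(pipe, 0.8)`.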
Download model
Weights for this model are available in Safetensors format.
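The safetensors weights can also be fetched without instantiating the pipeline, for example via `huggingface_hub` (a sketch; assumes the `huggingface_hub` package is installed):

```python
from huggingface_hub import snapshot_download

def download_origami_lora() -> str:
    """Download the full shauray/Origami_WanLora repo (safetensors weights included)
    into the local Hugging Face cache and return the local directory path."""
    return snapshot_download(repo_id="shauray/Origami_WanLora")
```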
License: apache-2.0
Note: this LoRA is not perfect. It leaves a small artifact towards the bottom of every generation because the training dataset contained those (I missed them when cleaning the data).
Model tree for shauray/Origami_WanLora
- Base model: Wan-AI/Wan2.1-T2V-14B