Instructions for using codemichaeld/Wan1.3b_fp8 with libraries, inference providers, notebooks, and local apps.

How to use codemichaeld/Wan1.3b_fp8 with Diffusers:

pip install -U diffusers transformers accelerate

import torch
from diffusers import DiffusionPipeline

# switch to "mps" for Apple devices
pipe = DiffusionPipeline.from_pretrained(
    "codemichaeld/Wan1.3b_fp8",
    dtype=torch.bfloat16,
    device_map="cuda",
)
prompt = "Astronaut in a jungle, cold color palette, muted colors, detailed, 8k"
image = pipe(prompt).images[0]
FP8 Model with Delta Compensation
- Source: https://huggingface.co/Kijai/WanVideo_comfy
- File: Wan2_1-T2V-1_3B_fp32.safetensors
- FP8 Format: E5M2
- Delta File: Wan2_1-T2V-1_3B_fp32-fp8-delta.safetensors
Usage (Inference)
To restore near-original precision:
import torch
from safetensors.torch import load_file

# Load the FP8 (E5M2) weights and the precomputed correction deltas.
fp8_state = load_file("Wan2_1-T2V-1_3B_fp32-fp8-e5m2.safetensors")
delta_state = load_file("Wan2_1-T2V-1_3B_fp32-fp8-delta.safetensors")

restored_state = {}
for key in fp8_state:
    if f"delta.{key}" in delta_state:
        # Upcast the FP8 tensor and add back its quantization error.
        fp8_weight = fp8_state[key].to(torch.float32)
        delta = delta_state[f"delta.{key}"]
        restored_state[key] = fp8_weight + delta
    else:
        # No delta stored for this tensor; plain upcast.
        restored_state[key] = fp8_state[key].to(torch.float32)
Requires PyTorch ≥ 2.1 for FP8 support.