JiT-H/32 (Diffusers)

This repository is self-contained: model weights and a custom diffusers pipeline (JiTPipeline) are both included, so no external code repo is required.

Available Checkpoints (All 6)

The JiT paper reports six class-conditional ImageNet checkpoints at 256x256 and 512x512 resolution. Use the following relative paths with JiTPipeline.from_pretrained(...).

| Checkpoint | Relative path | Resolution | Pre-trained dataset | Recommended CFG | Recommended interval | Recommended noise_scale | FID-50K |
|---|---|---|---|---|---|---|---|
| JiT-B/16 | ./JiT-B-16 | 256x256 | ImageNet 256x256 | 3.0 | [0.1, 1.0] | 1.0 | 3.66 |
| JiT-L/16 | ./JiT-L-16 | 256x256 | ImageNet 256x256 | 2.4 | [0.1, 1.0] | 1.0 | 2.36 |
| JiT-H/16 | ./JiT-H-16 | 256x256 | ImageNet 256x256 | 2.2 | [0.1, 1.0] | 1.0 | 1.86 |
| JiT-B/32 | ./JiT-B-32 | 512x512 | ImageNet 512x512 | 3.0 | [0.1, 1.0] | 2.0 | 4.02 |
| JiT-L/32 | ./JiT-L-32 | 512x512 | ImageNet 512x512 | 2.5 | [0.1, 1.0] | 2.0 | 2.53 |
| JiT-H/32 | ./JiT-H-32 | 512x512 | ImageNet 512x512 | 2.3 | [0.1, 1.0] | 2.0 | 1.94 |

Source: Back to Basics: Let Denoising Generative Models Denoise (arXiv:2511.13720).
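To keep each checkpoint paired with its paper-matched sampling hyperparameters, the table above can be mirrored in code. A minimal sketch (the values are copied from the table; the dict layout and helper name are this card's own illustration, not part of the JiT API):

```python
# Recommended sampling settings per checkpoint, copied from the table above.
JIT_CHECKPOINTS = {
    "./JiT-B-16": {"cfg": 3.0, "interval": (0.1, 1.0), "noise_scale": 1.0},
    "./JiT-L-16": {"cfg": 2.4, "interval": (0.1, 1.0), "noise_scale": 1.0},
    "./JiT-H-16": {"cfg": 2.2, "interval": (0.1, 1.0), "noise_scale": 1.0},
    "./JiT-B-32": {"cfg": 3.0, "interval": (0.1, 1.0), "noise_scale": 2.0},
    "./JiT-L-32": {"cfg": 2.5, "interval": (0.1, 1.0), "noise_scale": 2.0},
    "./JiT-H-32": {"cfg": 2.3, "interval": (0.1, 1.0), "noise_scale": 2.0},
}

def recommended_kwargs(model_path: str) -> dict:
    """Map a checkpoint path to the corresponding JiTPipeline call kwargs."""
    s = JIT_CHECKPOINTS[model_path]
    return {
        "guidance_scale": s["cfg"],
        "guidance_interval_min": s["interval"][0],
        "guidance_interval_max": s["interval"][1],
        "noise_scale": s["noise_scale"],
    }

print(recommended_kwargs("./JiT-H-32"))
```

The returned dict can be splatted directly into the pipeline call, e.g. `pipe(class_labels=[207], **recommended_kwargs("./JiT-H-32"))`.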

Demo Image

![JiT-H/32 test inference](./demo_images/jit_h32_test_inference.png)

One-Stop Diffusers Inference

```python
from pathlib import Path
import sys

import torch

# Make the bundled jit_diffusers package importable from the repo root.
repo_dir = Path(".").resolve()
sys.path.insert(0, str(repo_dir))
from jit_diffusers import JiTPipeline

device = "cuda" if torch.cuda.is_available() else "cpu"
pipe = JiTPipeline.from_pretrained("./JiT-H-32").to(device)
# bfloat16 on GPU for throughput; float32 on CPU for correctness.
pipe.transformer = pipe.transformer.to(
    device=device, dtype=torch.bfloat16 if device == "cuda" else torch.float32
)
pipe.transformer.eval()

generator = torch.Generator(device=device).manual_seed(42)
output = pipe(
    class_labels=[207],          # ImageNet class 207: golden retriever
    num_inference_steps=50,
    guidance_scale=2.3,          # recommended CFG for JiT-H/32
    guidance_interval_min=0.1,
    guidance_interval_max=1.0,
    noise_scale=2.0,             # recommended for the 512x512 checkpoints
    t_eps=5e-2,
    sampling_method="heun",
    generator=generator,
    output_type="pil",
)
image = output.images[0]

output_path = Path("./demo_images/jit_h32_test_inference.png")
output_path.parent.mkdir(parents=True, exist_ok=True)
image.save(output_path)
print(f"Saved image to: {output_path}")
```
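The guidance_interval_min/guidance_interval_max arguments suggest that classifier-free guidance is applied only while the sampler's time t lies inside [0.1, 1.0] and is switched off elsewhere. A pure-Python sketch of that gating (this is a reading of the parameter names, not the pipeline's actual implementation):

```python
def guided_velocity(uncond, cond, t, scale, t_min=0.1, t_max=1.0):
    """Blend unconditional/conditional predictions with classifier-free
    guidance, but only while t is inside the guidance interval."""
    if t_min <= t <= t_max:
        # Standard CFG extrapolation past the conditional prediction.
        return uncond + scale * (cond - uncond)
    return cond  # outside the interval: plain conditional prediction

# Inside the interval: guidance amplifies the conditional direction.
print(guided_velocity(uncond=0.0, cond=1.0, t=0.5, scale=2.3))   # 2.3
# Outside the interval: guidance is off.
print(guided_velocity(uncond=0.0, cond=1.0, t=0.05, scale=2.3))  # 1.0
```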

Ready-to-Run Commands (All 6 Checkpoints)

Run these from this repository root (models/BiliSakura/JiT-diffusers).

```bash
# 256x256 checkpoints
python run_jit_diffusers_inference.py --model_path ./JiT-B-16 --output_path ./demo_images/jit_b16_test_inference.png --class_label 207 --seed 42 --steps 50 --cfg 3.0 --interval_min 0.1 --interval_max 1.0 --noise_scale 1.0 --t_eps 5e-2 --solver heun
python run_jit_diffusers_inference.py --model_path ./JiT-L-16 --output_path ./demo_images/jit_l16_test_inference.png --class_label 207 --seed 42 --steps 50 --cfg 2.4 --interval_min 0.1 --interval_max 1.0 --noise_scale 1.0 --t_eps 5e-2 --solver heun
python run_jit_diffusers_inference.py --model_path ./JiT-H-16 --output_path ./demo_images/jit_h16_test_inference.png --class_label 207 --seed 42 --steps 50 --cfg 2.2 --interval_min 0.1 --interval_max 1.0 --noise_scale 1.0 --t_eps 5e-2 --solver heun

# 512x512 checkpoints
python run_jit_diffusers_inference.py --model_path ./JiT-B-32 --output_path ./demo_images/jit_b32_test_inference.png --class_label 207 --seed 42 --steps 50 --cfg 3.0 --interval_min 0.1 --interval_max 1.0 --noise_scale 2.0 --t_eps 5e-2 --solver heun
python run_jit_diffusers_inference.py --model_path ./JiT-L-32 --output_path ./demo_images/jit_l32_test_inference.png --class_label 207 --seed 42 --steps 50 --cfg 2.5 --interval_min 0.1 --interval_max 1.0 --noise_scale 2.0 --t_eps 5e-2 --solver heun
python run_jit_diffusers_inference.py --model_path ./JiT-H-32 --output_path ./demo_images/jit_h32_test_inference.png --class_label 207 --seed 42 --steps 50 --cfg 2.3 --interval_min 0.1 --interval_max 1.0 --noise_scale 2.0 --t_eps 5e-2 --solver heun
```
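All commands pass --solver heun, i.e. Heun's second-order predictor-corrector method, which is one reason 50 steps suffice. A generic single-step sketch on a toy ODE dy/dt = -y (illustration of the method only; the pipeline's actual sampler lives in jit_diffusers):

```python
def heun_step(f, t, y, dt):
    """One step of Heun's method: Euler predictor + trapezoidal corrector."""
    k1 = f(t, y)                      # slope at the current point
    y_pred = y + dt * k1              # Euler predictor
    k2 = f(t + dt, y_pred)            # slope at the predicted point
    return y + dt * 0.5 * (k1 + k2)   # average the two slopes

# Integrate dy/dt = -y from y(0)=1 to t=1 in 50 steps;
# the exact solution is e^-1 ~ 0.3679.
f = lambda t, y: -y
y, t, dt = 1.0, 0.0, 1.0 / 50
for _ in range(50):
    y = heun_step(f, t, y, dt)
    t += dt
print(round(y, 4))  # 0.3679
```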