How to use drbaph/Z-Image-Turbo-FP8 with Diffusers:
```shell
pip install -U diffusers transformers accelerate
```

```python
import torch
from diffusers import DiffusionPipeline

# switch to "mps" for Apple devices
pipe = DiffusionPipeline.from_pretrained(
    "drbaph/Z-Image-Turbo-FP8",
    dtype=torch.bfloat16,
    device_map="cuda",
)

prompt = "Astronaut in a jungle, cold color palette, muted colors, detailed, 8k"
image = pipe(prompt).images[0]
```
This is a quantization of Comfy-Org/z_image_turbo to the FP8_E5M2 and FP8_E4M3FN formats.
| Precision | Image 1 | Image 2 |
|---|---|---|
| bf16 | ![]() | ![]() |
| fp8_e4m3fn | ![]() | ![]() |
⚡️ Z-Image
An Efficient Image Generation Foundation Model with Single-Stream Diffusion Transformer
✨ Z-Image
Z-Image is a powerful and highly efficient image generation model with 6B parameters. It currently has three variants:
🚀 Z-Image-Turbo – A distilled version of Z-Image that matches or exceeds leading competitors with only 8 NFEs (Number of Function Evaluations). It offers ⚡️sub-second inference latency⚡️ on enterprise-grade H800 GPUs and fits comfortably within the 16 GB of VRAM found on consumer devices. It excels at photorealistic image generation, bilingual text rendering (English & Chinese), and robust instruction adherence.
🧱 Z-Image-Base – The non-distilled foundation model. By releasing this checkpoint, we aim to unlock the full potential for community-driven fine-tuning and custom development.
✍️ Z-Image-Edit – A variant of Z-Image fine-tuned specifically for image editing tasks. It supports creative image-to-image generation with impressive instruction-following capabilities, allowing precise edits from natural-language prompts.
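To see why a 6B-parameter model fits in 16 GB at FP8, here is a back-of-envelope estimate (my own sketch, counting weights only and ignoring activations, the text encoder, and runtime overhead):

```python
PARAMS = 6e9  # 6B parameters, per the model description

bf16_gib = PARAMS * 2 / 2**30  # bf16: 2 bytes per parameter
fp8_gib = PARAMS * 1 / 2**30   # fp8 (e4m3fn or e5m2): 1 byte per parameter

print(f"weights only: bf16 ~= {bf16_gib:.1f} GiB, fp8 ~= {fp8_gib:.1f} GiB")
```

Roughly 11 GiB of bf16 weights drop to about 5.6 GiB at FP8, which leaves headroom for activations and caches within a 16 GB budget.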
📥 Model Zoo