Qwen-Image-Layered (FP8 E5M2 & E4M3FN)

This is an FP8 quantization of Qwen/Qwen-Image-Layered, provided in two variants: FP8 E5M2 and FP8 E4M3FN.
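The two FP8 formats make opposite trade-offs: E5M2 (5 exponent bits, 2 mantissa bits) favors dynamic range, while E4M3FN (4 exponent bits, 3 mantissa bits) favors precision and repurposes the all-ones exponent for finite values instead of Inf. The sketch below is illustrative only, not the conversion code used for this repo; it computes the largest finite value each format can represent:

```python
def fp8_max(exp_bits: int, man_bits: int, finite_only: bool = False) -> float:
    """Largest finite value of an IEEE-style mini-float.

    finite_only=True models the E4M3FN variant, where the all-ones
    exponent encodes finite values and only the all-ones mantissa
    pattern is NaN.
    """
    bias = 2 ** (exp_bits - 1) - 1
    if finite_only:
        # All-ones exponent is usable; the top mantissa code is lost to NaN.
        exp = (2 ** exp_bits - 1) - bias
        frac = 2 - 2 ** (1 - man_bits)
    else:
        # IEEE convention: all-ones exponent is reserved for Inf/NaN.
        exp = (2 ** exp_bits - 2) - bias
        frac = 2 - 2 ** (-man_bits)
    return frac * 2.0 ** exp

print(fp8_max(5, 2))                    # E5M2   -> 57344.0
print(fp8_max(4, 3, finite_only=True))  # E4M3FN -> 448.0
```

E5M2 reaches ±57344 but with only 2 mantissa bits, while E4M3FN tops out at ±448 with finer granularity; which variant works better depends on the activation and weight ranges of the model.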

Numerically sensitive layers (normalization layers, embeddings, and biases) were kept in BF16.
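A mixed-precision policy like this is typically applied by matching parameter names. The helper below is a hypothetical sketch of such a selector (the pattern list and function name are assumptions, not the actual conversion script used for this checkpoint):

```python
# Parameters whose names match these substrings stay in BF16; everything
# else is cast to FP8. The pattern list is illustrative.
KEEP_BF16_PATTERNS = ("norm", "embed", "bias")

def target_dtype(param_name: str) -> str:
    """Return the storage dtype for a parameter, by name."""
    name = param_name.lower()
    if any(p in name for p in KEEP_BF16_PATTERNS):
        return "bf16"
    return "fp8"

print(target_dtype("transformer_blocks.0.attn.to_q.weight"))  # fp8
print(target_dtype("transformer_blocks.0.norm1.weight"))      # bf16
print(target_dtype("transformer_blocks.0.attn.to_q.bias"))    # bf16
```

Keeping these small tensors in BF16 costs little storage but avoids the accuracy loss that FP8's coarse mantissa would inflict on them.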

License & Usage: This quantization inherits the original model's licensing terms and usage restrictions. Please refer to the original model card for details.


Base model: Qwen/Qwen-Image