davidi-bria committed · Commit c8bd3cd · verified · 1 Parent(s): 0d05b2a

Update README.md

Files changed (1)
  1. README.md +83 -214
README.md CHANGED
@@ -1,242 +1,111 @@
  ---
  language:
  - en
  base_model:
  - briaai/FIBO
- pipeline_tag: text-to-image
  library_name: diffusers
- license: cc-by-nc-4.0
- license_name: bria-fibo
- license_link: https://creativecommons.org/licenses/by-nc/4.0/deed.en
  tags:
- - lora
- - guidance-distillation
- - text-to-image
- - fast-inference
- - bria
- - fibo
- extra_gated_heading: Access FIBO Guidance Distillation LoRA
- extra_gated_description: >-
-   This model is an extension of Bria AI's FIBO model. Weights are open source for non-commercial use only, per the
-   provided [license](https://creativecommons.org/licenses/by-nc/4.0/deed.en).
- extra_gated_fields:
-   Name: text
-   Email: text
-   Company/Org name: text
-   Company Website URL: text
-   Discord user: text
-   I agree to BRIA’s Privacy policy, Terms & conditions, and acknowledge Non commercial use to be Personal use / Academy / Non profit (direct or indirect): checkbox
  ---
-
- <!-- ===================== HEADER ===================== -->
  <p align="center">
  <img src="https://bria-public.s3.us-east-1.amazonaws.com/Bria-logo.svg" width="200"/>
  </p>
-
- <div align="center">
-
- # FIBO Guidance Distillation LoRA ⚡
-
- [![Hugging Face](https://img.shields.io/badge/%F0%9F%A4%97%20Hugging%20Face-Model-blue)](https://huggingface.co/briaai/FIBO)
- [![GitHub](https://img.shields.io/badge/GitHub-Repo-181717?logo=github)](https://github.com/Bria-AI/FIBO)
- [![License](https://img.shields.io/badge/License-CC%20BY--NC%204.0-lightgrey)](https://creativecommons.org/licenses/by-nc/4.0/deed.en)
-
- </div>
-
  <p align="center">
- <img src="assets/shuttle.png" alt="FIBO Lite Hero" width="100%"/>
- </p>
- <p align="center">
- <b>Accelerate FIBO inference by 2x with a distillation LoRA.</b>
- <br>
- <i>This LoRA distills classifier-free guidance into the model, allowing you to generate images with 2x faster inference. As with most distillation approaches, there may be a slight degradation in quality compared to the full model.</i>
  </p>
-
- ---
-
- ## 🚀 Overview
-
- This is a **Guidance Distillation LoRA** for the [**FIBO**](https://huggingface.co/briaai/FIBO) text-to-image model.
-
- By distilling the knowledge of the teacher model (typically running at high guidance scales) into this lightweight adapter, you can run inference with **Guidance Scale (CFG) = 1.0**. This skips the negative prompt pass entirely, effectively **doubling the inference speed** compared to standard generation. As this is a distillation LoRA, there is a slight quality degradation compared to the full model at CFG=5, though the speed benefits often outweigh this tradeoff for many use cases.
-
- ### ✨ What's New
-
- **Nov 2025**: Initial release of FIBO Guidance Distillation LoRA
- 2x inference speedup with minimal quality impact
- Compatible with the standard diffusers pipeline
-
- ### 🔑 Key Benefits
-
- * **2x Faster Inference**: Running at `guidance_scale=1` means calculating the noise prediction only once per step instead of twice (see the sketch below).
- * **Quality Tradeoff**: As a distillation LoRA, there is a slight quality degradation compared to the full model at CFG=5, but the speed gains make it ideal for rapid iteration and production workflows where latency matters.
- * **Drop-in Replacement**: Works seamlessly with existing FIBO workflows—just set `guidance_scale=1.0`.
- * **Memory Efficient**: Minimal additional GPU memory overhead.
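To make the speedup concrete, here is a minimal illustrative sketch (not the diffusers internals; all names are placeholders) of why `guidance_scale=1` halves the per-step cost: standard classifier-free guidance runs two forward passes per denoising step, while the distilled setting only needs the conditional pass.

```python
# Illustrative sketch only -- placeholder names, not the diffusers implementation.
def denoise_step(denoiser, x_t, t, cond, uncond, guidance_scale):
    if guidance_scale == 1.0:
        # Distilled setting: a single forward pass per step (guidance is baked into the weights).
        return denoiser(x_t, t, cond)
    # Standard CFG: two forward passes per step, then extrapolate between them.
    noise_cond = denoiser(x_t, t, cond)
    noise_uncond = denoiser(x_t, t, uncond)
    return noise_uncond + guidance_scale * (noise_cond - noise_uncond)
```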
-
- ---
-
- ## 📊 Comparison & Examples
-
  <p align="center">
- <em>Left: Regular FIBO (Base Model, CFG=5)&nbsp;&nbsp;|&nbsp;&nbsp;Right: FIBO Lite (Distilled LoRA, CFG=1)</em><br>
- <img src="assets/astro-pair.png" alt="Comparison" width="100%"/>
  </p>

  <p align="center">
- <em>🖼️ More Example Outputs from FIBO Distilled LoRA:</em>
  </p>
- <table>
- <tr>
- <td align="center"><img src="assets/gorilla.png" width="100%"/></td>
- <td align="center"><img src="assets/sheeps.png" width="100%"/></td>
- </tr>
- <tr>
- <td align="center"><img src="assets/1.png" width="100%"/></td>
- <td align="center"><img src="assets/2.png" width="100%"/></td>
- </tr>
- <tr>
- <td align="center"><img src="assets/flowers.png" width="100%"/></td>
- <td align="center"><img src="assets/41.jpg" width="100%"/></td>
- </tr>
- </table>
-
- | Feature | Base FIBO | FIBO + Distill LoRA |
- | :--- | :---: | :---: |
- | **Guidance Scale** | 5.0 (Typical) | **1.0** (Distilled) |
- | **Compute per Step** | 2x (Cond + Uncond) | **1x (Cond Only)** |
- | **Speed** | Baseline | **~2x Faster** |
- | **Quality** | Full Quality | Slight Degradation |
-
- ---
-
- ## 🛠️ Usage
-
- ### Requirements

- ```bash
- pip install git+https://github.com/huggingface/diffusers torch torchvision boltons ujson sentencepiece accelerate transformers
- ```
- To use the Gemini API, install `google-genai` as well.

- ### Quick Start

- ```python
- import json
- import os
-
- import torch
- from diffusers import BriaFiboPipeline
- from diffusers.modular_pipelines import ModularPipeline
-
- # -------------------------------
- # Load the VLM pipeline
- # -------------------------------
- torch.set_grad_enabled(False)
-
- # Using local VLM
- vlm_pipe = ModularPipeline.from_pretrained("briaai/FIBO-VLM-prompt-to-JSON", trust_remote_code=True)
-
- # Using Gemini API, requires GOOGLE_API_KEY environment variable
- # assert os.getenv("GOOGLE_API_KEY") is not None, "GOOGLE_API_KEY environment variable is not set"
- # vlm_pipe = ModularPipeline.from_pretrained("briaai/FIBO-gemini-prompt-to-JSON", trust_remote_code=True)
-
- # Load the FIBO pipeline
- pipe = BriaFiboPipeline.from_pretrained(
-     "briaai/Fibo-lite",
-     torch_dtype=torch.bfloat16,
- )
- pipe.to("cuda")
-
- # Convert the natural-language prompt into a structured JSON prompt
- output = vlm_pipe(
-     prompt="A hyper-detailed, ultra-fluffy owl sitting in the trees at night, looking directly at the camera with wide, adorable, expressive eyes. Its feathers are soft and voluminous, catching the cool moonlight with subtle silver highlights. The owl's gaze is curious and full of charm, giving it a whimsical, storybook-like personality."
- )
- json_prompt_generate = output.values["json_prompt"]
-
- def get_default_negative_prompt(existing_json: dict) -> str:
-     negative_prompt = ""
-     style_medium = existing_json.get("style_medium", "").lower()
-     if style_medium in ["photograph", "photography", "photo"]:
-         negative_prompt = """{'style_medium':'digital illustration','artistic_style':'non-realistic'}"""
-     return negative_prompt
-
- negative_prompt = get_default_negative_prompt(json.loads(json_prompt_generate))
-
- # Generate with guidance_scale=1.0 (2x faster with the distillation LoRA)
- results_generate = pipe(
-     prompt=json_prompt_generate, num_inference_steps=50, guidance_scale=1, negative_prompt=negative_prompt
- )
- results_generate.images[0].save("image_generate.png")
- with open("image_generate_json_prompt.json", "w") as f:
-     f.write(json_prompt_generate)
- ```
-
- ### Key Parameters
-
- **`guidance_scale=1.0`**: This is the magic setting: with the LoRA loaded, you get close to CFG=5 quality at the speed of CFG=1.
- **`num_inference_steps=50`**: Standard for FIBO. Adjust based on your quality/speed tradeoff.
- **Negative prompt largely optional**: Guidance is distilled into the weights; the example above only sets one for photographic styles.
-
- ---
-
- ## 🧠 Training Details
-
- * **Method**: Guidance Distillation (Distilling the CFG effect into the model weights via LoRA).
- * **Base Model**: [briaai/FIBO](https://huggingface.co/briaai/FIBO)
- * **Trainable Parameters**: 471,472,128
- * **Precision**: bfloat16
- * **Teacher Configuration**: CFG=5.0 (standard FIBO setting)
- * **Student Configuration**: CFG=1.0 (target deployment)
- * **Training Objective**: Minimize KL divergence between teacher and student outputs
-
- ### Training Process
-
- This model was trained to minimize the difference between:
- - **Teacher Model**: Base FIBO running at `guidance_scale=5.0`
- - **Student Model**: Base FIBO + LoRA running at `guidance_scale=1.0`
-
- The training process effectively "bakes in" the stylistic and structural benefits of classifier-free guidance without the computational cost at inference time.
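A minimal sketch of the setup described above, with a simple per-step regression loss standing in for the KL-style objective mentioned in the bullet list; the model, data, and optimizer details are placeholders, not Bria's actual training code.

```python
import torch
import torch.nn.functional as F

def guidance_distillation_loss(teacher, student, x_t, t, cond, uncond, cfg=5.0):
    """Regress the student's single-pass (CFG=1) prediction onto the teacher's
    CFG=5 prediction. Placeholder signatures; illustrative only."""
    with torch.no_grad():
        eps_cond = teacher(x_t, t, cond)
        eps_uncond = teacher(x_t, t, uncond)
        target = eps_uncond + cfg * (eps_cond - eps_uncond)  # teacher output with CFG applied
    pred = student(x_t, t, cond)  # base model + LoRA, conditional pass only
    return F.mse_loss(pred, target)
```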
-
- ---
-
- ## 🤝 Community & Support
-
- - **GitHub**: [FIBO Repository](https://github.com/Bria-AI/FIBO)
- - **Hugging Face**: [FIBO Model Card](https://huggingface.co/briaai/FIBO)
- - **Commercial Licensing**: [Contact Bria AI](https://bria.ai/contact-us)
-
- ---
-
- ## 📚 Citation
-
- If you use this model in your research or project, please cite:
-
- ```bibtex
- @misc{gutflaish2025generating,
-   title = {Generating an Image From 1,000 Words: Enhancing Text-to-Image With Structured Captions},
-   author = {Gutflaish, Eyal and Kachlon, Eliran and Zisman, Hezi and Hacham, Tal and Sarid, Nimrod and Visheratin, Alexander and Huberman, Saar and Davidi, Gal and Bukchin, Guy and Goldberg, Kfir and Mokady, Ron},
-   year = {2025},
-   eprint = {2511.06876},
-   archivePrefix = {arXiv},
-   primaryClass = {cs.CV},
-   doi = {10.48550/arXiv.2511.06876},
-   url = {https://arxiv.org/abs/2511.06876}
- }
- ```
-
- ---
-
- ## 📄 License
-
- <p><b>Source-Code & Weights</b></p>
-
- <ul>
- <li>The model is open source for non-commercial use with <a href="https://creativecommons.org/licenses/by-nc/4.0/deed.en">this license</a></li>
- <li>For commercial use <a href="https://bria.ai/contact-us?hsCtaAttrib=114250296256">Click here</a>.</li>
- </ul>
-
  ---
+ license: cc-by-nc-4.0
  language:
  - en
  base_model:
  - briaai/FIBO
  library_name: diffusers
  tags:
+ - MLX
  ---

  <p align="center">
  <img src="https://bria-public.s3.us-east-1.amazonaws.com/Bria-logo.svg" width="200"/>
  </p>

  <p align="center">
+ <!-- GitHub Repo -->
+ <a href="https://github.com/Bria-AI/FIBO" target="_blank">
+   <img
+     alt="GitHub Repo"
+     src="https://img.shields.io/badge/GitHub-Repo-181717?logo=github&logoColor=white&style=for-the-badge"
+   />
+ </a>
+ &nbsp;
+
+ <!-- Hugging Face Demo -->
+ <a href="https://huggingface.co/spaces/briaai/FIBO" target="_blank">
+   <img
+     alt="Hugging Face Demo"
+     src="https://img.shields.io/badge/Hugging%20Face-Demo-FFD21E?logo=huggingface&logoColor=black&style=for-the-badge"
+   />
+ </a>
+ &nbsp;
+
+ <!-- FIBO Demo on Bria (replace URL if you have a specific demo link) -->
+ <a href="https://platform.bria.ai/labs/fibo" target="_blank">
+   <img
+     alt="FIBO Demo on Bria"
+     src="https://img.shields.io/badge/FIBO%20Demo-Bria-6C47FF?style=for-the-badge"
+   />
+ </a>
+ &nbsp;
+
+ <!-- Bria Platform -->
+ <a href="https://platform.bria.ai" target="_blank">
+   <img
+     alt="Bria Platform"
+     src="https://img.shields.io/badge/Bria-Platform-0EA5E9?style=for-the-badge"
+   />
+ </a>
+ &nbsp;
+
+ <!-- Bria Discord -->
+ <a href="https://discord.com/invite/Nxe9YW9zHS" target="_blank">
+   <img
+     alt="Bria Discord"
+     src="https://img.shields.io/badge/Discord-Join-5865F2?logo=discord&logoColor=white&style=for-the-badge"
+   />
+ </a>
+ &nbsp;
+
+ <!-- Tech Paper -->
+ <a href="https://arxiv.org/abs/2511.06876" target="_blank">
+   <img
+     alt="Tech Paper"
+     src="https://img.shields.io/badge/Tech%20Paper-lightgrey?logo=arxiv&logoColor=red&style=for-the-badge"
+   />
+ </a>
+ &nbsp;
+
+ <!-- Mflux -->
+ <a href="https://github.com/filipstrand/mflux" target="_blank">
+   <img
+     alt="Mflux"
+     src="https://img.shields.io/badge/Mflux-Repo-blue?logo=github&logoColor=white&style=for-the-badge"
+   />
+ </a>
+ &nbsp;
  </p>

  <p align="center">
+ <img src="https://bria-public.s3.us-east-1.amazonaws.com/car.001.jpeg" width="1024"/>
  </p>
+
  <p align="center">
+ <b>FIBO is the first open-source, JSON-native text-to-image model trained exclusively on long structured captions.</b>
+ <br><br>
+ <i>FIBO sets a new standard for controllability, predictability, and disentanglement.</i>
  </p>

+ # FIBO MLX 4-bit

+ This is a 4-bit quantization of [FIBO](https://huggingface.co/briaai/FIBO) made with [mflux](https://github.com/filipstrand/mflux).

+ ## Run SOTA on Your Mac

+ This quantization brings the state-of-the-art capabilities of FIBO to your local machine. Thanks to the 4-bit weights and FIBO's efficient architecture, you can run this 8-billion-parameter model directly on your Mac with excellent performance and substantially reduced memory usage.
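To put the memory benefit in rough numbers (weights only; the text encoder, VAE, activations, and quantization metadata add overhead on top of this), a back-of-the-envelope estimate:

```python
# Rough weight-memory estimate for an 8B-parameter model.
params = 8e9
bf16_gib = params * 2 / 2**30    # 16-bit weights: ~14.9 GiB
int4_gib = params * 0.5 / 2**30  # 4-bit weights:  ~3.7 GiB
print(f"bf16: ~{bf16_gib:.1f} GiB, 4-bit: ~{int4_gib:.1f} GiB")
```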
 
+ ## About the Project

+ ### FIBO
+ [FIBO](https://huggingface.co/briaai/FIBO) is an 8B-parameter DiT-based, flow-matching text-to-image model designed for enterprise-ready applications. It focuses on:
+ - **Controllability & Predictability**: Exceptional prompt adherence using long, structured JSON captions (see the illustrative sketch below).
+ - **Responsible AI**: Trained exclusively on licensed data.
+ - **Architecture**: Features SmolLM3-3B as the text encoder and Wan 2.2 as the VAE.
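To give a sense of what a JSON-native prompt can look like, here is an illustrative toy example only: the actual schema comes from the FIBO prompt-to-JSON pipelines, and apart from `style_medium` and `artistic_style` (which appear elsewhere on this card) the keys below are hypothetical.

```python
import json

# Toy structured prompt in the spirit of FIBO's JSON-native interface.
# Real prompts should be produced by the FIBO prompt-to-JSON pipelines.
structured_prompt = {
    "style_medium": "photograph",          # key seen elsewhere on this card
    "artistic_style": "realistic",         # key seen elsewhere on this card
    "subject": "a fluffy owl perched on a branch at night",   # hypothetical key
    "lighting": "cool moonlight with subtle silver highlights",  # hypothetical key
}
json_prompt = json.dumps(structured_prompt)
```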
 
+ ### Mflux
+ [Mflux](https://github.com/filipstrand/mflux) is a generation framework built on top of Apple's MLX. It enables efficient inference of diffusion models on Apple Silicon, making full use of your Mac's hardware.
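If it helps, a minimal sketch for fetching the quantized weights locally before running them with Mflux; the repository id below is a placeholder for this model's actual repo id, and the generation command itself should follow the mflux documentation rather than anything shown here.

```python
from huggingface_hub import snapshot_download

# Placeholder repo id -- substitute this model's actual Hugging Face repo id.
local_dir = snapshot_download(repo_id="briaai/FIBO-mlx-4bit")
print(f"Weights downloaded to: {local_dir}")
# Generation on Apple Silicon is then done with mflux; see its README for the
# exact command and options.
```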
+ ## License & Access

+ The weights for the original FIBO model are open source for non-commercial use. Please refer to the [original model card](https://huggingface.co/briaai/FIBO) for access and license details.