xilanhua12138 committed on
Commit 3eefc8a · verified · 1 Parent(s): fc88871

Upload folder using huggingface_hub

Files changed (1)
  1. README.md +8 -5
README.md CHANGED
@@ -30,7 +30,7 @@
 ## 📖 Introduction
 
 This is the official implementation for the paper: [HPSv3: Towards Wide-Spectrum Human Preference Score]().
-First, we introduce a VLM-based preference model **HPSv3**, trained on a "wide spectrum" preference dataset **HPDv3** with 1.08M text-image pairs and 1.17M annotated pairwise comparisons, covering both state-of-the-art and earlier generative models, as well as high- and low-quality real-world images. Second, we propose a novel reasoning approach for iterative image refinement, **COHP**, which efficiently improves image quality without requiring additional training data.
+First, we introduce a VLM-based preference model **HPSv3**, trained on a "wide spectrum" preference dataset **HPDv3** with 1.08M text-image pairs and 1.17M annotated pairwise comparisons, covering both state-of-the-art and earlier generative models, as well as high- and low-quality real-world images. Second, we propose a novel reasoning approach for iterative image refinement, **CoHP(Chain-of-Human-Preference)**, which efficiently improves image quality without requiring additional training data.
 
 <p align="center">
 <img src="assets/teaser.png" alt="Teaser" width="900"/>
@@ -46,7 +46,7 @@ First, we introduce a VLM-based preference model **HPSv3**, trained on a "wide s
 2. [🌐 Gradio Demo](#🌐-gradio-demo)
 3. [🏋️ Training](#🏋️-training)
 4. [📊 Benchmark](#📊-benchmark)
-5. [🎯 COHP (Consistency-guided Human Preference Optimization)](#🎯-cohp-consistency-guided-human-preference-optimization)
+5. [🎯 CoHP (Chain-of-Human-Preference)](#🎯-cohp-chain-of-human-preference)
 
 ---
 
@@ -146,9 +146,12 @@ Human Preference Dataset v3 (HPD v3) comprises 1.08M text-image pairs and 1.17M
 </details>
 
 #### Download HPDv3
-```bash
-huggingface-cli download --repo-type dataset MizzenAI/HPDv3 --local-dir /your-local-dataset-path
 ```
+HPDv3 is comming soon! Stay tuned!
+```
+<!-- ```bash
+huggingface-cli download --repo-type dataset MizzenAI/HPDv3 --local-dir /your-local-dataset-path
+``` -->
 
 #### Pairwise Training Data Format
 
@@ -242,7 +245,7 @@ To evaluate **HPSv3 preference accuracy** or **human preference score of image g
 
 ---
 
-## 🎯 COHP (Consistency-guided Human Preference Optimization)
+## 🎯 CoHP (Chain-of-Human-Preference)
 
 COHP is our novel reasoning approach for iterative image refinement that efficiently improves image quality without requiring additional training data. It works by generating images with multiple diffusion models, selecting the best one using reward models, and then iteratively refining it through image-to-image generation.
 
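
The CoHP paragraph in the diff above describes a two-stage loop: sample from several diffusion models, keep the image the reward model scores highest, then repeatedly refine it through image-to-image generation. Below is a minimal Python sketch of that loop. The callables `generate`, `refine`, and `score`, and the function name `cohp_sketch`, are placeholders assumed for illustration rather than the repository's actual API, and the released CoHP procedure may differ in detail.

```python
from typing import Callable, Sequence


def cohp_sketch(prompt: str,
                models: Sequence[str],
                generate: Callable,   # placeholder: (model_name, prompt) -> image
                refine: Callable,     # placeholder: (model_name, prompt, init_image) -> image
                score: Callable,      # placeholder: (prompt, image) -> float, e.g. an HPSv3-style reward
                num_rounds: int = 3):
    """Illustrative Chain-of-Human-Preference loop, not the official implementation."""
    # Stage 1 (model selection): sample one image per diffusion model and keep
    # the candidate the reward model scores highest.
    candidates = [(model, generate(model, prompt)) for model in models]
    scores = [score(prompt, image) for _, image in candidates]
    best_idx = max(range(len(candidates)), key=scores.__getitem__)
    best_model, best_image = candidates[best_idx]
    best_score = scores[best_idx]

    # Stage 2 (iterative refinement): image-to-image generation with the chosen
    # model, accepting a step only when the reward model prefers the new image.
    for _ in range(num_rounds):
        refined = refine(best_model, prompt, best_image)
        refined_score = score(prompt, refined)
        if refined_score > best_score:
            best_image, best_score = refined, refined_score
    return best_image
```

Accepting a refinement only when the reward improves mirrors the "selecting the best one using reward models" behaviour described in the paragraph; the number of refinement rounds is an assumed knob, not a value taken from the paper.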