Add model card metadata, paper link and sample usage

#1
by nielsr HF Staff - opened
Files changed (1)
  1. README.md +42 -0
README.md ADDED
@@ -0,0 +1,42 @@
---
library_name: peft
pipeline_tag: image-text-to-text
---

# EditHF

EditHF is an MLLM-based evaluation model introduced in the paper [EditHF-1M: A Million-Scale Rich Human Preference Feedback for Image Editing](https://huggingface.co/papers/2603.14916).

It provides fine-grained, human-aligned scores for text-guided image editing along three dimensions: **visual quality**, **editing alignment**, and **attribute preservation**. The model was trained on the **EditHF-1M** dataset, which contains over 29M human preference pairs.

## Resources
- **Paper:** [EditHF-1M: A Million-Scale Rich Human Preference Feedback for Image Editing](https://huggingface.co/papers/2603.14916)
- **GitHub Repository:** [IntMeGroup/EditHF](https://github.com/IntMeGroup/EditHF)

## Sample Usage

To evaluate image editing results with EditHF, use the inference script provided in the official repository:

```bash
python inference.py \
    --source_image "/path/to/source.jpg" \
    --edited_image "/path/to/edited.jpg" \
    --instruction "Editing instruction" \
    --peft_dir "lora_checkpoints_visual" \
    --mode visual
```

The `--mode` parameter selects the evaluation dimension:
- `visual`: evaluates visual quality.
- `alignment`: evaluates alignment with the editing instruction.
- `preservation`: evaluates preservation of the source image's attributes.

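To score a single edit along all three dimensions, the same script can be invoked once per mode. A minimal sketch, assuming each mode's LoRA checkpoint lives in a correspondingly named directory (only `lora_checkpoints_visual` appears in the repository example above; the other two directory names are assumptions):

```bash
# Run EditHF once per evaluation dimension.
# NOTE: the lora_checkpoints_<mode> names for alignment and preservation
# are assumed here by analogy with lora_checkpoints_visual.
for mode in visual alignment preservation; do
  python inference.py \
    --source_image "/path/to/source.jpg" \
    --edited_image "/path/to/edited.jpg" \
    --instruction "Editing instruction" \
    --peft_dir "lora_checkpoints_${mode}" \
    --mode "$mode"
done
```

Each invocation loads a different adapter, so the three scores come from three separate forward passes rather than one combined run.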
## Citation
```bibtex
@article{edithf1m,
  title={EditHF-1M: A Million-Scale Rich Human Preference Feedback for Image Editing},
  author={...},
  journal={arXiv preprint arXiv:2603.14916},
  year={2026}
}
```