Commit 65cfd95 by Jeff Moe (parent: 7591bf1)

Data provenance info

Files changed: DATA_PROVENANCE.md (added)
# LibreHPS — Data provenance

LibreHPS-4B was trained exclusively on permissively licensed
human-preference data for text-to-image and text-to-video generation.
This document summarises the licence audit performed for every
upstream dataset in the training blend and the per-image generator
audit applied to filter the data before training.

## Datasets used

All datasets in the LibreHPS training blend are distributed under one
of: **MIT**, **Apache-2.0**, **BSD-3-Clause**, or
**CDLA-Permissive-2.0**. None impose non-commercial, no-derivatives,
or share-alike restrictions on ML-model training derivatives.

| Dataset | Upstream licence |
|---|---|
| `MizzenAI/HPDv3` | MIT |
| `zai-org/ImageRewardDB` | Apache-2.0 |
| `zai-org/VisionRewardDB-Image` | Apache-2.0 |
| `zai-org/VisionRewardDB-Video` | Apache-2.0 |
| `Rapidata/text-2-image-Rich-Human-Feedback` | CDLA-Permissive-2.0 |
| `Rapidata/sora-video-generation-alignment-likert-scoring` | CDLA-Permissive-2.0 |
| `Rapidata/text-2-video-human-preferences-veo3` | CDLA-Permissive-2.0 |
| `TIGER-Lab/VideoFeedback` | MIT |
| `data-is-better-together/open-image-preferences-v1` | Apache-2.0 |
| `fudan-generative-ai/LiFT-HRA-20K` | Apache-2.0 |
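The blend-level constraint above amounts to a simple allowlist check. A minimal sketch (dataset-to-licence mapping copied from the table; the `audit_blend` helper and allowlist name are illustrative, not part of any published tooling):

```python
# Upstream licences per dataset, copied from the table above.
DATASET_LICENCES = {
    "MizzenAI/HPDv3": "MIT",
    "zai-org/ImageRewardDB": "Apache-2.0",
    "zai-org/VisionRewardDB-Image": "Apache-2.0",
    "zai-org/VisionRewardDB-Video": "Apache-2.0",
    "Rapidata/text-2-image-Rich-Human-Feedback": "CDLA-Permissive-2.0",
    "Rapidata/sora-video-generation-alignment-likert-scoring": "CDLA-Permissive-2.0",
    "Rapidata/text-2-video-human-preferences-veo3": "CDLA-Permissive-2.0",
    "TIGER-Lab/VideoFeedback": "MIT",
    "data-is-better-together/open-image-preferences-v1": "Apache-2.0",
    "fudan-generative-ai/LiFT-HRA-20K": "Apache-2.0",
}

# Licences accepted into the training blend (SPDX identifiers).
PERMISSIVE = {"MIT", "Apache-2.0", "BSD-3-Clause", "CDLA-Permissive-2.0"}

def audit_blend(licences: dict) -> list:
    """Return datasets whose licence is NOT on the permissive allowlist."""
    return [name for name, lic in licences.items() if lic not in PERMISSIVE]
```

A dataset with an undeclared or non-permissive licence would show up in `audit_blend`'s output and be excluded from the blend.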
CDLA-Permissive-2.0 §3.1 jointly with §5.4 exempts "Results"
(including ML-model artefacts) from any downstream restriction. This
is the basis on which the Rapidata CDLA-Permissive-2.0 datasets are
included.

## Per-image generator audit

A subset of upstream datasets — notably the Rapidata model-vs-model
splits — embed images generated by closed-source services (OpenAI,
Midjourney, Imagen, etc.). Each retained training pair carries an
audit flag `license_audit_flags.upstream_generator_restriction`
recording whether the upstream generator's ToS prohibits
redistribution of its outputs. Rows flagged `true` were dropped before
training.
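The drop step above can be sketched as a row filter. The flag name is taken from the text; the row schema and `drop_restricted` helper are illustrative assumptions about how the nested flag might be stored:

```python
# Illustrative rows: each training pair carries nested audit flags
# under "license_audit_flags" (schema assumed for this sketch).
rows = [
    {"pair_id": "a1", "license_audit_flags": {"upstream_generator_restriction": False}},
    {"pair_id": "a2", "license_audit_flags": {"upstream_generator_restriction": True}},
    {"pair_id": "a3", "license_audit_flags": {"upstream_generator_restriction": False}},
]

def drop_restricted(rows: list) -> list:
    """Keep only rows whose generator ToS permits redistribution."""
    return [
        r for r in rows
        if not r["license_audit_flags"]["upstream_generator_restriction"]
    ]

kept = drop_restricted(rows)  # a2 is removed before training
```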
**Generators dropped:**

- **Midjourney** — ToS prohibits redistribution of generated images.
- **DALL·E 3** (via OpenAI API) — older OpenAI ToS prohibits
  redistribution; only retained where the upstream dataset itself
  asserts generator-ToS compatibility.

**Generators kept:**

- **Stable Diffusion family** (SD 1.x, SD 2.x, SDXL, SD 3.x) — open
  weights.
- **FLUX.1**, **FLUX.1.1**, **FLUX 2 pro** — API ToS permits
  redistribution.
- **Imagen 3 / 4 / 4 Ultra** — Google generative-AI ToS permits
  redistribution of outputs.
- **HunyuanImage 2.1**, **Seedream 3**, **Kolors**, **Lumina** — open
  weights.
- **Recraft v2 / v3**, **Ideogram V2** — API ToS permits
  redistribution.
## Excluded datasets

These were reviewed and explicitly excluded:

- **Pick-a-Pic v1 / v2** — no declared HF licence; the underlying
  Dreamlike Photoreal 2.0 generator carries an OpenRAIL-M
  non-commercial clause.
- **HPDv2 (standalone)** — research-use only.
- **Midjourney Discord scrapes** — ToS prohibits redistribution.
- **PKU-Alignment/SafeSora\*** — CC-BY-NC-4.0 (non-commercial).
- **CodeGoat24/VideoFeedback** — a derivative of
  `TIGER-Lab/VideoFeedback`; the original was ingested instead.

## What this means for users

LibreHPS-4B is safe to use commercially. Every training-data row was
either drawn from a permissively licensed dataset or filtered out by
the generator-redistribution audit above. The model weights are
released under Apache-2.0 (see [`LICENSE`](LICENSE)) with no
inherited non-commercial or no-derivatives clauses from the training
data.