Marcel031 HichTala committed
Commit 96199cb
0 Parent(s):

Duplicate from HichTala/dior

Co-authored-by: Hicham Tala <HichTala@users.noreply.huggingface.co>
.gitattributes ADDED
@@ -0,0 +1,59 @@
+ *.7z filter=lfs diff=lfs merge=lfs -text
+ *.arrow filter=lfs diff=lfs merge=lfs -text
+ *.bin filter=lfs diff=lfs merge=lfs -text
+ *.bz2 filter=lfs diff=lfs merge=lfs -text
+ *.ckpt filter=lfs diff=lfs merge=lfs -text
+ *.ftz filter=lfs diff=lfs merge=lfs -text
+ *.gz filter=lfs diff=lfs merge=lfs -text
+ *.h5 filter=lfs diff=lfs merge=lfs -text
+ *.joblib filter=lfs diff=lfs merge=lfs -text
+ *.lfs.* filter=lfs diff=lfs merge=lfs -text
+ *.lz4 filter=lfs diff=lfs merge=lfs -text
+ *.mds filter=lfs diff=lfs merge=lfs -text
+ *.mlmodel filter=lfs diff=lfs merge=lfs -text
+ *.model filter=lfs diff=lfs merge=lfs -text
+ *.msgpack filter=lfs diff=lfs merge=lfs -text
+ *.npy filter=lfs diff=lfs merge=lfs -text
+ *.npz filter=lfs diff=lfs merge=lfs -text
+ *.onnx filter=lfs diff=lfs merge=lfs -text
+ *.ot filter=lfs diff=lfs merge=lfs -text
+ *.parquet filter=lfs diff=lfs merge=lfs -text
+ *.pb filter=lfs diff=lfs merge=lfs -text
+ *.pickle filter=lfs diff=lfs merge=lfs -text
+ *.pkl filter=lfs diff=lfs merge=lfs -text
+ *.pt filter=lfs diff=lfs merge=lfs -text
+ *.pth filter=lfs diff=lfs merge=lfs -text
+ *.rar filter=lfs diff=lfs merge=lfs -text
+ *.safetensors filter=lfs diff=lfs merge=lfs -text
+ saved_model/**/* filter=lfs diff=lfs merge=lfs -text
+ *.tar.* filter=lfs diff=lfs merge=lfs -text
+ *.tar filter=lfs diff=lfs merge=lfs -text
+ *.tflite filter=lfs diff=lfs merge=lfs -text
+ *.tgz filter=lfs diff=lfs merge=lfs -text
+ *.wasm filter=lfs diff=lfs merge=lfs -text
+ *.xz filter=lfs diff=lfs merge=lfs -text
+ *.zip filter=lfs diff=lfs merge=lfs -text
+ *.zst filter=lfs diff=lfs merge=lfs -text
+ *tfevents* filter=lfs diff=lfs merge=lfs -text
+ # Audio files - uncompressed
+ *.pcm filter=lfs diff=lfs merge=lfs -text
+ *.sam filter=lfs diff=lfs merge=lfs -text
+ *.raw filter=lfs diff=lfs merge=lfs -text
+ # Audio files - compressed
+ *.aac filter=lfs diff=lfs merge=lfs -text
+ *.flac filter=lfs diff=lfs merge=lfs -text
+ *.mp3 filter=lfs diff=lfs merge=lfs -text
+ *.ogg filter=lfs diff=lfs merge=lfs -text
+ *.wav filter=lfs diff=lfs merge=lfs -text
+ # Image files - uncompressed
+ *.bmp filter=lfs diff=lfs merge=lfs -text
+ *.gif filter=lfs diff=lfs merge=lfs -text
+ *.png filter=lfs diff=lfs merge=lfs -text
+ *.tiff filter=lfs diff=lfs merge=lfs -text
+ # Image files - compressed
+ *.jpg filter=lfs diff=lfs merge=lfs -text
+ *.jpeg filter=lfs diff=lfs merge=lfs -text
+ *.webp filter=lfs diff=lfs merge=lfs -text
+ # Video files - compressed
+ *.mp4 filter=lfs diff=lfs merge=lfs -text
+ *.webm filter=lfs diff=lfs merge=lfs -text
README.md ADDED
@@ -0,0 +1,159 @@
+ ---
+ dataset_info:
+   features:
+   - name: image_id
+     dtype: int64
+   - name: image
+     dtype: image
+   - name: width
+     dtype: int64
+   - name: height
+     dtype: int64
+   - name: objects
+     sequence:
+     - name: bbox_id
+       dtype: int64
+     - name: category
+       dtype:
+         class_label:
+           names:
+             '0': Airplane
+             '1': Airport
+             '2': Baseball field
+             '3': Basketball court
+             '4': Bridge
+             '5': Chimney
+             '6': Dam
+             '7': Expressway service area
+             '8': Expressway toll station
+             '9': Golf course
+             '10': Ground track field
+             '11': Harbor
+             '12': Overpass
+             '13': Ship
+             '14': Stadium
+             '15': Storage tank
+             '16': Tennis court
+             '17': Train station
+             '18': Vehicle
+             '19': Wind mill
+     - name: bbox
+       sequence: int64
+       length: 4
+     - name: area
+       dtype: int64
+   splits:
+   - name: train
+     num_bytes: 5902685454
+     num_examples: 18000
+   - name: test
+     num_bytes: 1150035824
+     num_examples: 3463
+   - name: validation
+     num_bytes: 645393741
+     num_examples: 2000
+   download_size: 7626168863
+   dataset_size: 7698115019
+ configs:
+ - config_name: default
+   data_files:
+   - split: train
+     path: data/train-*
+   - split: test
+     path: data/test-*
+   - split: validation
+     path: data/validation-*
+ task_categories:
+ - object-detection
+ language:
+ - en
+ pretty_name: DIOR
+ ---
+ # DIOR Hugging Face-Ready Vision Dataset
+
+ This dataset is a restructured version of DIOR (Object Detection in Optical Remote Sensing Images), designed to simplify object detection workflows. By converting the annotations to the COCO format, this project makes DIOR easier to use with popular computer vision frameworks. The dataset is also formatted for seamless integration with the Hugging Face `datasets` library, unlocking new possibilities for training and experimentation.
+
+ ## 📂 Dataset Structure
+ ### COCO Format
+ The dataset follows the COCO dataset structure, making it straightforward to work with:
+
+ ```plaintext
+ dior/
+ ├── annotations/
+ │   ├── instances_train.json
+ │   ├── instances_val.json
+ │   └── instances_test.json
+ ├── train/
+ ├── val/
+ └── test/
+ ```
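Each `instances_*.json` file follows the standard COCO layout of `images`, `annotations`, and `categories` lists. A minimal sketch of that shape — the ids, file name, and coordinates below are illustrative, not taken from the actual annotation files:

```python
import json

# Illustrative COCO annotation skeleton. Only the key layout follows the
# COCO convention; file names, ids, and coordinates here are made up.
coco_skeleton = {
    "images": [
        {"id": 1, "file_name": "00001.jpg", "width": 800, "height": 800},
    ],
    "annotations": [
        # COCO bounding boxes are [x, y, width, height] in pixels.
        {"id": 1, "image_id": 1, "category_id": 13,
         "bbox": [120, 340, 60, 25], "area": 1500},
    ],
    "categories": [
        {"id": 13, "name": "Ship"},  # "Ship" is class 13 in the card above
    ],
}

print(json.dumps(coco_skeleton, indent=2))
```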
+ ### Hugging Face Format
+ The dataset is compatible with the `datasets` library. You can load it directly using:
+
+ ```python
+ from datasets import load_dataset
+
+ dataset = load_dataset("HichTala/dior")
+ ```
+
+ ## 🖼️ Sample Visualizations
+
+ Above: An example of resized images with bounding boxes in COCO format.
+
+ ## 🚀 Getting Started
+ ### Install Required Libraries
+
+ - Install `datasets` for Hugging Face compatibility:
+
+ ```bash
+ pip install datasets
+ ```
+ - Use any object detection framework that supports the COCO format for training.
+
+ ### Load the Dataset
+ #### Hugging Face:
+
+ ```python
+ from datasets import load_dataset
+
+ dataset = load_dataset("HichTala/dior")
+ train_data = dataset["train"]
+ ```
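Each loaded example carries the features listed in the card (`image_id`, `image`, `width`, `height`, and an `objects` record with `bbox_id`, `category`, `bbox`, `area`). A common first step is converting boxes to corner coordinates for plotting. The record below is hand-made to mirror that schema, assuming the boxes use the COCO `[x, y, w, h]` convention the README's framing suggests:

```python
# Hand-made record mirroring the card's `objects` schema (values invented).
example = {
    "image_id": 42,
    "width": 800,
    "height": 800,
    "objects": {
        "bbox_id": [7, 8],
        "category": [13, 18],  # 13 = Ship, 18 = Vehicle per the card's label map
        "bbox": [[120, 340, 60, 25], [10, 20, 30, 40]],
        "area": [1500, 1200],
    },
}

def to_corners(bbox):
    """Convert a COCO-style [x, y, w, h] box to [x1, y1, x2, y2]."""
    x, y, w, h = bbox
    return [x, y, x + w, y + h]

corners = [to_corners(b) for b in example["objects"]["bbox"]]
print(corners)  # [[120, 340, 180, 365], [10, 20, 40, 60]]
```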
+
+ #### Custom Script for COCO-Compatible Frameworks:
+ ```python
+ from pycocotools.coco import COCO
+
+ coco = COCO("annotations/instances_train.json")
+ ```
+
+ See the demo notebook [here](https://github.com/cocodataset/cocoapi/blob/master/PythonAPI/pycocoDemo.ipynb) for more details.
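If `pycocotools` is not available, the per-image lookup it provides can be approximated with the standard library alone. A minimal sketch over a tiny in-memory stand-in for an annotation file (a real use would `json.load` one of the `instances_*.json` files; the entries here are invented):

```python
from collections import defaultdict

# Tiny in-memory stand-in for the contents of an instances_*.json file.
coco = {
    "images": [
        {"id": 1, "file_name": "00001.jpg"},
        {"id": 2, "file_name": "00002.jpg"},
    ],
    "annotations": [
        {"id": 1, "image_id": 1, "category_id": 13, "bbox": [120, 340, 60, 25]},
        {"id": 2, "image_id": 1, "category_id": 18, "bbox": [10, 20, 30, 40]},
        {"id": 3, "image_id": 2, "category_id": 0, "bbox": [5, 5, 50, 50]},
    ],
}

# Index annotations by image_id -- roughly the lookup COCO.getAnnIds enables.
anns_by_image = defaultdict(list)
for ann in coco["annotations"]:
    anns_by_image[ann["image_id"]].append(ann)

print(len(anns_by_image[1]))  # 2
```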
+
+ ## 📚 Used in Research
+
+ This processed version of DIOR has been used in the paper:\
+ 📄 [LoRA for Cross-Domain Few-Shot Object Detection](https://huggingface.co/papers/2504.06330)\
+ The dataset served as a target domain for evaluating the generalization capabilities of diffusion-based object detectors in low-data regimes.
+
+ ## 📝 How to Cite
+ If you use this dataset, please consider citing the original DIOR dataset:
+
+ ```plaintext
+ @article{Li_2020,
+   title={Object detection in optical remote sensing images: A survey and a new benchmark},
+   volume={159},
+   ISSN={0924-2716},
+   url={http://dx.doi.org/10.1016/j.isprsjprs.2019.11.023},
+   DOI={10.1016/j.isprsjprs.2019.11.023},
+   journal={ISPRS Journal of Photogrammetry and Remote Sensing},
+   publisher={Elsevier BV},
+   author={Li, Ke and Wan, Gang and Cheng, Gong and Meng, Liqiu and Han, Junwei},
+   year={2020},
+   month=jan,
+   pages={296–307}
+ }
+ ```
+
+ Additionally, you can mention this repository for the resized COCO and Hugging Face formats.
+
+ Enjoy using DIOR in COCO format for your object detection experiments! 🚀
data/test-00000-of-00003.parquet ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:9d9521ca39f331b7f73e694197bbbe69708c034aac104711f37c22640280cddf
+ size 375526579
data/test-00001-of-00003.parquet ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:d98f1083fadd6662130d666d7cf5f0fe01c0d907062572994b980f046830e6ee
+ size 380941540
data/test-00002-of-00003.parquet ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:c922ec78964bb24e893772813660a5ed2210e806899259566fe22c622bd9ab1f
+ size 382599561
data/train-00000-of-00012.parquet ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:126dfa3901b898a10ed9d5680c895eff29202467b0cdcb31174a064ea04615f5
+ size 484028493
data/train-00001-of-00012.parquet ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:faf9265b8ef5ab60bd033d5d47c2d4d06ef74814056bb3bb0be4db5f33eb9506
+ size 485822201
data/train-00002-of-00012.parquet ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:b3ed3055cc9859f409eb0bb3c17a755668ab3fe58682813599dd6c104a8eaa6f
+ size 495662717
data/train-00003-of-00012.parquet ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:4674c437d8fde4fddd0cee91b98f0919175d8945adab8e874f7b615ed6d9b3d5
+ size 482795278
data/train-00004-of-00012.parquet ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:b50e4779b1a44572cddfa097a9e38f4ea779f065cde399dd9bbd4bfb38e31836
+ size 483499634
data/train-00005-of-00012.parquet ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:1d29b1f0e8793c94c17878235e034a23c222dca9ba5266bd810b39685fd11dc3
+ size 487293857
data/train-00006-of-00012.parquet ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:2ed4859aa484050a7603a719476a725ee52723e6260a6d02cba6f4ee5927bb06
+ size 486829583
data/train-00007-of-00012.parquet ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:06cabec3bedc52beadca38dfc8179e7f766566344aab1ad497d132b86e92e293
+ size 485002681
data/train-00008-of-00012.parquet ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:5c44dd3ca452e4555bdc9aa297161870370cf0b455562ea2664502b147df94c3
+ size 487152266
data/train-00009-of-00012.parquet ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:2cc22db7460234bc657b9e84b563941410e0305dced520bc8b0032725ff5e36d
+ size 497039004
data/train-00010-of-00012.parquet ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:12a3588d01ff7e9a7f313a1651a7206eea564a1c3ea628f4a0189afccd566de9
+ size 482881600
data/train-00011-of-00012.parquet ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:3101512fdf758e857b9829ae28267825e6aba482a25a08724f172c208b392359
+ size 489728447
data/validation-00000-of-00002.parquet ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:8b4a8a2eb2cc261ad665e596f73451b858b787ca6b82f1065218cf18d9340313
+ size 322475044
data/validation-00001-of-00002.parquet ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:75df9cbc240f18ec2f043d839865608f2366bbc872157a0c3c19f0061b08e614
+ size 316890378