---
license: apache-2.0
tags:
- image
- segmentation
- space
pretty_name: 'SWiM: Spacecraft With Masks (Instance Segmentation)'
size_categories:
- 1K<n<1M
task_categories:
- image-segmentation
task_ids:
- instance-segmentation
annotations_creators:
- machine-generated
- expert-generated
---

# SWiM: Spacecraft With Masks

A large-scale instance segmentation dataset of nearly 64,000 annotated spacecraft images, created by superimposing real spacecraft models on a mixture of real and synthetic backgrounds generated with NASA's TTALOS pipeline. To mimic the camera distortions and noise of real-world image acquisition, we also added several types of noise and distortion to the images.

## Dataset Summary
The dataset contains 63,917 annotated images with instance masks for a variety of spacecraft. It is structured for YOLO and other segmentation applications, and chunked to stay within Hugging Face's per-folder file limits.
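Since the dataset follows the YOLO layout, training with YOLO-family tooling typically also requires a small dataset config file. The sketch below is hypothetical: the paths and the single `spacecraft` class name are assumptions, not files shipped with the dataset; check the label files for the actual class ids.

```yaml
# Hypothetical data.yaml for YOLO-style training; adjust paths and names to your setup.
path: ./SWiM/Baseline
train: train/images
val: val/images
names:
  0: spacecraft
```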


## How to Use/Download
### Directory Structure Note

Due to Hugging Face Hub's per-directory file limit (10,000 files), this dataset is chunked: each logical split (like `train/labels/`) is subdivided into folders (`000/`, `001/`, ...) containing no more than 5,000 files each.

**Example Structure:**
```
Baseline/
└── train/
    └── images/
        ├── 000/
        │   ├── img_0.png
        │   └── ...
        ├── 001/
        └── ...
```
If you're using models/tools like **YOLO** or others that expect a **flat directory**, you may need to **merge these subfolders at load-time or during preprocessing**.

**YOLO Example Structure:**
```
Baseline/
└── train/
    └── images/
        ├── img_0.png
        ├── img_99.png
        └── ...
```
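Merging the chunk subfolders into the flat layout above only takes a few lines. This is an illustrative sketch, not one of the dataset's utility scripts; `flatten_chunks` is a hypothetical name, and it assumes file names are unique across chunks.

```python
import shutil
from pathlib import Path

def flatten_chunks(chunked_dir: str, flat_dir: str) -> int:
    """Move files out of chunk subfolders (000/, 001/, ...) into one flat folder."""
    src, dst = Path(chunked_dir), Path(flat_dir)
    dst.mkdir(parents=True, exist_ok=True)
    moved = 0
    for chunk in sorted(p for p in src.iterdir() if p.is_dir()):
        for f in chunk.iterdir():
            if f.is_file():
                # Assumes names like img_0.png are unique across chunks.
                shutil.move(str(f), str(dst / f.name))
                moved += 1
    return moved

# Example: flatten_chunks("Baseline/train/images", "Baseline_flat/train/images")
```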
      

### Utility Scripts

The following scripts help you download this dataset. Because of its size and custom directory structure, we recommend using them either to sample a subset or to download the entire dataset.


#### 1. Setup

Create your virtual environment to help manage dependencies and prevent conflicts:
```
python -m venv env
source env/bin/activate  # On Windows: env\Scripts\activate
pip install -r requirements.txt
```

#### 2. Sample items from a specific chunk

This script is useful for quick local inspection, prototyping, or lightweight evaluation without downloading the full dataset.

Usage:
    python3 utils/sample_swim.py --output-dir ./samples --count 100

Arguments:
    --repo-id          Hugging Face dataset repository ID
    --image-subdir     Path to image subdirectory inside the dataset repo
    --label-subdir     Path to corresponding label subdirectory
    --output-dir       Directory to save downloaded files
    --count            Number of samples to download

Example usage with all args:
```
python3 utils/sample_swim.py \
  --repo-id JeffreyJsam/SWiM-SpacecraftWithMasks \
  --image-subdir Baseline/images/val/000 \
  --label-subdir Baseline/labels/val/000 \
  --output-dir ./Sampled-SWiM \
  --count 500
```
#### 3. Download the entire dataset (optionally flatten chunks for YOLO format):

This script streams and downloads the full paired dataset (images plus label `.txt` files) from a Hugging Face Hub repository, recursively processing all available chunk subfolders (e.g., `000`, `001`, ...) under the given parent paths.

Features:
- Recursively discovers subdirs (chunks) using HfFileSystem
- Optionally flattens the directory structure by removing the deepest chunk level
- Saves each .png image with its corresponding .txt label

Use this script if you want to download the complete dataset for model training or offline access.
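The chunk-selection and flattening behavior described above boils down to simple path logic. The sketch below is an illustrative reimplementation, not the script's actual API; `select_chunks` and `output_path` are hypothetical names.

```python
import re
from pathlib import PurePosixPath

# Chunk folders are named 000, 001, ...
CHUNK_RE = re.compile(r"^\d{3}$")

def select_chunks(entries, wanted=None):
    """From a repo directory listing, keep chunk subfolders,
    optionally filtered to the names given via --chunks."""
    chunks = [e for e in entries if CHUNK_RE.match(PurePosixPath(e).name)]
    if wanted:
        chunks = [c for c in chunks if PurePosixPath(c).name in wanted]
    return sorted(chunks)

def output_path(repo_file, parent, out_dir, flatten=True):
    """Compute the local save path for a repo file;
    flatten=True drops the chunk folder level (flat YOLO layout)."""
    rel = PurePosixPath(repo_file).relative_to(parent)  # e.g. 000/img_0.png
    parts = rel.parts[1:] if flatten and len(rel.parts) > 1 else rel.parts
    return str(PurePosixPath(out_dir).joinpath(*parts))
```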

Usage:
    # Download all chunks (flattened/ YOLO format)
    python3 utils/download_swim.py --output-dir ./SWiM --flatten

    # Download specific chunks
    python3 utils/download_swim.py --chunks 000 001 002 --flatten False

Arguments:
    --repo-id          Hugging Face dataset repository ID
    --images-parent    Parent directory for image chunks (e.g., Baseline/images/train)
    --labels-parent    Parent directory for label chunks (e.g., Baseline/labels/train)
    --output-dir       Where to save the downloaded dataset
    --flatten          Remove final 'chunk' subdir in output paths (default: True)
    --chunks           Specific chunk names (e.g., 000 001); omit to download all

Example usage with all args:
```
python3 utils/download_swim.py \
  --repo-id JeffreyJsam/SWiM-SpacecraftWithMasks \
  --images-parent Baseline/images/val \
  --labels-parent Baseline/labels/val \
  --output-dir ./SWiM \
  --flatten
```
**All arguments are configurable; see `--help` for details.**

## Code and Data Generation Pipeline

All dataset generation scripts, preprocessing tools, and model training code are available on GitHub:

[GitHub Repository: https://github.com/RiceD2KLab/SWiM](https://github.com/RiceD2KLab/SWiM)


## Citation

If you use this dataset, please cite:

```
@misc{sam2025newdatasetperformancebenchmark,
      title={A New Dataset and Performance Benchmark for Real-time Spacecraft Segmentation in Onboard Flight Computers},
      author={Jeffrey Joan Sam and Janhavi Sathe and Nikhil Chigali and Naman Gupta and Radhey Ruparel and Yicheng Jiang and Janmajay Singh and James W. Berck and Arko Barman},
      year={2025},
      eprint={2507.10775},
      archivePrefix={arXiv},
      primaryClass={cs.CV},
      url={https://arxiv.org/abs/2507.10775},
}
```