---
license: other
license_name: bsd-3-clause-non-commercial
license_link: https://github.com/Mhaiyang/NeurIPS2022_GlassSemNet/blob/main/LICENSE
task_categories:
- image-segmentation
tags:
- glass-surface-detection
- semantic-segmentation
- scene-understanding
pretty_name: GSD-S (Glass Surface Detection Semantics)
size_categories:
- 1K<n<10K
---
# GSD-S: Glass Surface Detection – Semantics
GSD-S is a glass surface detection dataset augmented with per-pixel semantic labels, introduced in the NeurIPS 2022 paper **"Exploiting Semantic Relations for Glass Surface Detection"**.
Each sample pairs an RGB photograph with a binary glass mask and a 43-class semantic segmentation map, enabling joint glass detection and scene-semantic reasoning.
- **Paper:** [Exploiting Semantic Relations for Glass Surface Detection](https://openreview.net/forum?id=WrIrYMCZgbb) — NeurIPS 2022
- **Project page:** https://jiaying.link/neurips2022-gsds/
- **Authors:** Jiaying Lin, Yuen-Hei Yeung, Rynson W.H. Lau (City University of Hong Kong)
---
## Dataset Summary
| Split | Samples |
|-------|---------|
| train | 3,911 |
| test | 608 |
| **total** | **4,519** |
Images are 640 × 480 pixels (JPEG). All annotation maps are PNG.
---
## Columns
| Column | Type | Description |
|--------|------|-------------|
| `image_id` | `string` | Original filename stem (e.g. `000000000711`), preserved so samples can be mapped back to the original raw files |
| `image` | `Image` | RGB photograph (.jpg) |
| `mask` | `Image` | Binary glass mask — pixel values 0 (non-glass) or 255 (glass) |
| `seg` | `Image` | Semantic segmentation map — pixel values 0–42 (class index) |
| `seg_colored` | `Image` | False-color rendering of `seg` using the GSD-S palette (for visualization) |
### Semantic classes (43 total)
`unknown`, `wall`, `glass`, `floor`, `ceiling`, `door`, `chair`, `table`, `sofa`,
`cabinet`, `curtain`, `blinds`, `bedding`, `picture`, `light`, `clothes`, `counter`,
`sink`, `toilet`, `towel`, `mirror`, `tv`, `building_structure`, `stationery`, `plant`,
`person`, `fridge`, `bath_shower`, `seat`, `floor_mat`, `fence`, `ground`, `bottle`,
`kitchenware`, `road`, `transport`, `electronics`, `food`, `bag`, `nature`, `animal`,
`road_infrastructure`, `clock`
The class-to-color mapping is available in the official repository at
`utils/GSD-S_color_map.csv`.
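For quick lookups without the CSV, the class names above can be kept as a Python list. Note the index order below is an assumption (it follows the listing above); verify it against `utils/GSD-S_color_map.csv` before relying on it:

```python
# Class names in listing order. Index-to-name order is ASSUMED to match
# the dataset's class indices; verify against utils/GSD-S_color_map.csv.
GSDS_CLASSES = [
    "unknown", "wall", "glass", "floor", "ceiling", "door", "chair", "table",
    "sofa", "cabinet", "curtain", "blinds", "bedding", "picture", "light",
    "clothes", "counter", "sink", "toilet", "towel", "mirror", "tv",
    "building_structure", "stationery", "plant", "person", "fridge",
    "bath_shower", "seat", "floor_mat", "fence", "ground", "bottle",
    "kitchenware", "road", "transport", "electronics", "food", "bag",
    "nature", "animal", "road_infrastructure", "clock",
]

def seg_index_to_name(idx: int) -> str:
    """Map a pixel value from `seg` (0-42) to its class name."""
    return GSDS_CLASSES[idx]
```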
---
## Loading the Dataset
```python
from datasets import load_dataset
ds = load_dataset("garrying/GSD-S")
sample = ds["train"][0]
print(sample["image_id"]) # e.g. "000000000711"
sample["image"].show() # RGB photo
sample["mask"].show() # binary glass mask
sample["seg"].show() # semantic class indices
sample["seg_colored"].show() # false-color visualization
```
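Since `mask` stores pixel values of 0 or 255, training pipelines typically reduce it to a {0, 1} array first. A minimal sketch with Pillow and NumPy (the tiny synthetic image below stands in for `sample["mask"]`):

```python
import numpy as np
from PIL import Image

def mask_to_binary(mask_img):
    """Reduce a 0/255 glass mask to a {0, 1} uint8 array."""
    arr = np.asarray(mask_img.convert("L"))
    return (arr > 127).astype(np.uint8)

# Tiny synthetic stand-in for sample["mask"] (real masks are 640 x 480)
demo = Image.fromarray(np.array([[0, 255], [255, 0]], dtype=np.uint8))
binary = mask_to_binary(demo)  # 1 where the mask marks glass
```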
---
## Converting Back to Raw Files
A conversion helper is bundled in this repository. Download and run it:
```bash
# Download the script
huggingface-cli download garrying/GSD-S parquet_to_raw.py --repo-type dataset --local-dir .
# Restore all splits to ./GSD-S/
python parquet_to_raw.py --repo garrying/GSD-S
# Or restore from a locally cached copy
python parquet_to_raw.py --local /path/to/local/cache
```
Output layout:
```
GSD-S/
train/
images/ # .jpg
masks/ # .png
segs/ # .png (class-index maps)
segs_colored/ # .png (false-color maps)
test/
...
```
---
## Evaluation Metrics
The official evaluation protocol reports:
- **IoU** — Intersection over Union
- **F-measure** (Fβ, β² = 0.3) — weighted harmonic mean of precision and recall
- **MAE** — Mean Absolute Error
- **BER** — Balanced Error Rate
Predictions and ground-truth masks are binarized at threshold 0.5 before computing all metrics.
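The four metrics can be sketched in NumPy as follows. This is not the official evaluation code, just a minimal reference implementation of the protocol above (binarize both maps at 0.5, then compute):

```python
import numpy as np

def glass_metrics(pred, gt, beta_sq=0.3, eps=1e-8):
    """IoU, F-measure (beta^2 = 0.3), MAE and BER for one prediction.

    pred: soft prediction in [0, 1]; gt: ground-truth mask in {0, 1}.
    Both are binarized at 0.5, matching the protocol above.
    """
    p = (np.asarray(pred, dtype=np.float64) >= 0.5).astype(np.float64)
    g = (np.asarray(gt, dtype=np.float64) >= 0.5).astype(np.float64)
    tp = np.sum(p * g)                # glass correctly predicted as glass
    tn = np.sum((1 - p) * (1 - g))    # non-glass correctly rejected
    fp = np.sum(p * (1 - g))
    fn = np.sum((1 - p) * g)
    iou = tp / (tp + fp + fn + eps)
    precision = tp / (tp + fp + eps)
    recall = tp / (tp + fn + eps)
    fbeta = (1 + beta_sq) * precision * recall / (beta_sq * precision + recall + eps)
    mae = np.mean(np.abs(p - g))
    # BER averages the per-class error rates, in percent
    ber = 100.0 * (1.0 - 0.5 * (tp / (tp + fn + eps) + tn / (tn + fp + eps)))
    return {"iou": iou, "fbeta": fbeta, "mae": mae, "ber": ber}
```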
---
## Citation
```bibtex
@inproceedings{neurips2022:gsds2022,
title = {Exploiting Semantic Relations for Glass Surface Detection},
author = {Lin, Jiaying and Yeung, Yuen Hei and Lau, Rynson W.H.},
booktitle = {Advances in Neural Information Processing Systems (NeurIPS)},
year = {2022}
}
```
---
## License
BSD 3-Clause License — **non-commercial use only**.
See [LICENSE](https://github.com/Mhaiyang/NeurIPS2022_GlassSemNet/blob/main/LICENSE) for the full text.
Please cite the paper if you use this dataset.