# Satellite Disruption Triage Dataset (v0)

## TL;DR
This is a small, high-quality auxiliary dataset of paired satellite images (baseline + current) from 11 global disaster events, annotated with structured JSON triage outputs for training or evaluating vision-language models (VLMs) on macro-scale civilian disruption detection.
- 120 examples total: 92 train, 28 eval
- Source: BRIGHT dataset (XView2-compatible format derived from Kullervo/BRIGHT)
- Purpose: Structured VLM fine-tuning and evaluation for satellite-based humanitarian disruption triage
- License: CC-BY-NC-4.0 (Maxar Open Data via BRIGHT)
## What this dataset is
Each example contains:
- A baseline satellite image of a location before a disaster event
- A current satellite image of the same location after the event
- A structured JSON output with exactly these fields:
| Field | Type | Description |
|---|---|---|
| `action` | string | One of: `discard`, `defer`, `downlink_now` |
| `category` | string | Short disruption type label (e.g., `earthquake_building_damage`) |
| `rationale` | string | One- or two-sentence operational explanation |
| `bbox_norm` | float[4] or null | Normalized bounding box `[x_min, y_min, x_max, y_max]` of the affected area, or null if no disruption |
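A record can be sanity-checked against this schema in a few lines. This is a minimal sketch, not part of the released tooling; the function name `validate_triage` is ours:

```python
# Minimal validator for the four-field triage output schema above.
ACTIONS = {"discard", "defer", "downlink_now"}

def validate_triage(output: dict) -> bool:
    """Return True if `output` matches the triage schema exactly."""
    if set(output) != {"action", "category", "rationale", "bbox_norm"}:
        return False
    if output["action"] not in ACTIONS:
        return False
    if not isinstance(output["category"], str) or not isinstance(output["rationale"], str):
        return False
    bbox = output["bbox_norm"]
    if bbox is None:
        return True  # valid: null bbox accompanies discard examples
    # bbox must be [x_min, y_min, x_max, y_max], normalized and well-ordered
    return (
        isinstance(bbox, list) and len(bbox) == 4
        and all(isinstance(v, (int, float)) and 0.0 <= v <= 1.0 for v in bbox)
        and bbox[0] <= bbox[2] and bbox[1] <= bbox[3]
    )
```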
### Action definitions

- `discard`: No meaningful macro-scale civilian disruption visible. Buildings and infrastructure appear structurally intact in the current image compared to the baseline.
- `defer`: Minor to moderate civilian infrastructure disruption visible. Damage is present but localized; may warrant later review but does not indicate immediate widespread impact requiring urgent action.
- `downlink_now`: Significant macro-scale civilian disruption visible (widespread building damage/destruction, flooding, wildfire destruction, etc.). Urgently warrants satellite downlink and humanitarian response coordination.
### Category taxonomy

| Category | Description | Example sources |
|---|---|---|
| `no_disruption` | No visible damage (paired with `discard`) | Any event |
| `earthquake_building_damage` | Seismic damage to civilian buildings | Turkey, Morocco, Noto, Haiti |
| `wildfire_structure_damage` | Wildfire destruction of civilian structures | Hawaii, Marshall Fire |
| `volcanic_eruption_damage` | Lava flow or ashfall damage to civilian structures | La Palma, Congo |
| `flood_infrastructure_damage` | Flood-related damage to buildings and infrastructure | Libya |
| `civilian_explosion_damage` | Civilian building damage from industrial/urban explosions | Beirut, Bata |
Excluded categories: No explicit hospital/humanitarian site labels, no road-access labels, no military tactical data. The dataset is scoped to civilian macro-scale building disruption only.
## What this dataset is NOT

- Not a military targeting dataset: No military objectives, strike damage, or tactical analysis labels.
- Not a real-time monitoring claim: Labels are based on post-event archived satellite imagery and do not imply real-time detection capability.
- Not a tiny-object detection dataset: Focus is on macro-scale (tile-level or large-cluster) disruption, not individual small-object detection.
- Not a hospital/humanitarian-site-specific dataset: While we prefer civilian-relevant labels, the source data lacks explicit hospital or humanitarian site annotations.
- Not exhaustively complete: This is version 0 (~120 examples) intended as an auxiliary fine-tuning/evaluation resource, not a production-scale dataset.
## Source datasets used

### Primary source: BRIGHT (XView2 format)
- HF repo: GabeT29/BRIGHT-XView2Format
- Parent repo: Kullervo/BRIGHT
- Paper: Chen et al., "BRIGHT: a globally distributed multimodal building damage assessment dataset with very-high-resolution for all-weather disaster response", Earth System Science Data, 17, 6217–6253, 2025. DOI: 10.5194/essd-17-6217-2025
- License: CC-BY-NC-4.0 (Maxar Open Data)
BRIGHT provides paired pre-event optical and post-event SAR images with pixel-level damage masks:
- 0 = background (no building)
- 1 = intact
- 2 = damaged
- 3 = destroyed
We mapped these mask-derived damage statistics to our triage schema.
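The per-tile statistics behind that mapping can be computed roughly as follows. This is an illustrative sketch, not the BRIGHT tooling; the function name `damage_stats` is ours:

```python
import numpy as np

def damage_stats(mask: np.ndarray) -> tuple[float, float]:
    """Compute (damage_ratio, destruction_ratio) from a BRIGHT damage mask.

    Mask values: 0 = background, 1 = intact, 2 = damaged, 3 = destroyed.
    Ratios are relative to total building pixels (values 1-3).
    """
    building = np.count_nonzero(mask > 0)
    if building == 0:
        return 0.0, 0.0  # no buildings in the tile
    damaged_or_destroyed = np.count_nonzero(mask >= 2)
    destroyed = np.count_nonzero(mask == 3)
    return damaged_or_destroyed / building, destroyed / building
```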
### Datasets inspected but not used as primary source
| Dataset | Why excluded / limited use |
|---|---|
| xBD / xView2 | Official download requires registration; HF mirrors incomplete. BRIGHT supersedes for our use case. |
| SpaceNet 8 | Focused on flood detection + road network; requires tortilla library for HF version. Road labels did not cleanly map to our macro-scale building triage schema. |
| KOlegaBB/damage_assessment_ukraine | Small (2,219 instances), uses Google Maps imagery with redistribution restrictions. BRIGHT includes Ukraine/Myanmar conflict tiles under Maxar Open Data license, making BRIGHT preferable. |
| Sen1Floods11 | No pre-event baseline imagery; SAR-only without paired optical. Labels are pixel-level flood masks without building-level damage severity. Does not cleanly support our paired-image macro-scale triage task. |
## Event provenance
| Event | Examples | Region | Year | Type |
|---|---|---|---|---|
| Turkey earthquake | 22 | Turkey | 2023 | Natural disaster |
| Marshall wildfire | 20 | USA | 2022 | Natural disaster |
| Noto earthquake | 17 | Japan | 2024 | Natural disaster |
| La Palma volcano | 14 | Spain | 2021 | Natural disaster |
| Hawaii wildfire | 9 | USA | 2023 | Natural disaster |
| Libya flood | 8 | Libya | 2023 | Natural disaster |
| Beirut explosion | 7 | Lebanon | 2020 | Man-made disaster |
| Congo volcano | 7 | DRC | 2021 | Natural disaster |
| Bata explosion | 7 | Equatorial Guinea | 2021 | Man-made disaster |
| Morocco earthquake | 6 | Morocco | 2023 | Natural disaster |
| Haiti earthquake | 3 | Haiti | 2021 | Natural disaster |
Coverage: 11 events across 4 continents, 5 disaster types (earthquake, wildfire, volcanic eruption, flood, explosion), natural and man-made origins.
## Labeling methodology
Labels were derived algorithmically from BRIGHT pixel-level damage masks with the following pipeline:
- Damage statistics: For each 1024×1024 tile, we computed the ratio of damaged+destroyed pixels to total building pixels from the mask.
- Triage mapping:
  - `damage_ratio < 0.05` → `discard`
  - `0.05 ≤ damage_ratio < 0.35` and `destruction_ratio < 0.15` → `defer`
  - Otherwise → `downlink_now`
- Category: Inferred from the BRIGHT event name (e.g., "turkey-earthquake" → `earthquake_building_damage`).
- Rationale: Template-generated based on action + category, written in operational language.
- Bounding box: Computed as the axis-aligned bounding box of all damaged/destroyed pixels, normalized to [0, 1] per tile; `null` for `discard` examples.
- Deduplication: Perceptual hash (16×16 image thumbnail) on post-event tiles to exclude near-identical examples.
- Diversity enforcement: Round-robin selection across events to prevent single-event dominance.
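The fixed-threshold triage mapping can be expressed as a small function (a sketch using the thresholds stated in this card; the function name is ours):

```python
def triage_action(damage_ratio: float, destruction_ratio: float) -> str:
    """Map mask-derived damage statistics to a triage action
    using the fixed thresholds documented in this card."""
    if damage_ratio < 0.05:
        return "discard"
    if damage_ratio < 0.35 and destruction_ratio < 0.15:
        return "defer"
    return "downlink_now"
```

Note that a tile with moderate overall damage but high destruction (e.g., `destruction_ratio ≥ 0.15`) escalates directly to `downlink_now`.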
## Splits
| Split | Count | Description |
|---|---|---|
| train | 92 | Stratified by action (27 discard, 27 defer, 38 downlink_now) |
| eval | 28 | Stratified by action (8 discard, 8 defer, 12 downlink_now) |
Stratification ensures both splits contain representative proportions of all three triage actions.
## File formats
This repository provides two machine-readable formats describing the same examples:
### 1. Flat JSONL (`train_flat.jsonl`, `eval_flat.jsonl`)

For structured evaluation and non-conversational training:
```json
{
  "example_id": "bright_turkey-earthquake_val_images_turkey-earthquake_00000299",
  "baseline_image": "images/baseline/bright_turkey-earthquake_val_images_turkey-earthquake_00000299_baseline.png",
  "current_image": "images/current/bright_turkey-earthquake_val_images_turkey-earthquake_00000299_current.png",
  "target_output": {
    "action": "downlink_now",
    "category": "earthquake_building_damage",
    "rationale": "Widespread building collapse visible in civilian area following earthquake. Multiple structures destroyed, indicating potential mass casualty event requiring urgent response.",
    "bbox_norm": [0.3203, 0.2676, 0.8291, 0.7568]
  },
  "source_dataset": "BRIGHT (GabeT29/BRIGHT-XView2Format)",
  "source_event": "turkey-earthquake",
  "source_image_name": "val/images/turkey-earthquake_00000299",
  "provenance": "BRIGHT dataset, event=turkey-earthquake, tile=val/images/turkey-earthquake_00000299, source_split=val",
  "damage_ratio": 0.4521,
  "destruction_ratio": 0.2014
}
```
### 2. SFT-ready messages format (`train_sft.jsonl`, `eval_sft.jsonl`)

For vision-language model fine-tuning in standard conversation format:
```json
{
  "example_id": "bright_turkey-earthquake_val_images_turkey-earthquake_00000299",
  "images": [
    "images/baseline/..._baseline.png",
    "images/current/..._current.png"
  ],
  "messages": [
    {
      "role": "system",
      "content": "You are a satellite imagery analyst performing structured disruption triage. You are given two satellite images of the same location: a baseline image from an earlier date and a current image from a later date. Compare the two images and determine whether there is meaningful macro-scale civilian disruption visible in the current image. Return your assessment as a JSON object with exactly these fields: \"action\" (one of \"discard\", \"defer\", or \"downlink_now\"), \"category\" (short disruption type label), \"rationale\" (one or two sentence operational explanation), \"bbox_norm\" (null or [x_min, y_min, x_max, y_max] normalized bounding box of affected area)."
    },
    {
      "role": "user",
      "content": "Compare these two satellite images of the same location.\n\nImage 1 (Baseline): <image_baseline>\nImage 2 (Current): <image_current>\n\nAnalyze whether there is meaningful macro-scale civilian disruption visible in the current image compared to the baseline. Return your assessment as a strict JSON object."
    },
    {
      "role": "assistant",
      "content": "{\"action\":\"downlink_now\",\"category\":\"earthquake_building_damage\",...}"
    }
  ],
  "source_dataset": "BRIGHT (GabeT29/BRIGHT-XView2Format)",
  "provenance": "BRIGHT dataset, event=turkey-earthquake, tile=..."
}
```
Note: Image paths in messages are referenced via the images list. The placeholder tokens <image_baseline> and <image_current> in user content should be replaced with actual image tokens by the training pipeline.
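For instance, a pipeline that inserts a literal image token could substitute the placeholders like this (a sketch; the `<image>` token string is a hypothetical choice, substitute whatever token your model's processor expects):

```python
def expand_image_placeholders(user_content: str, image_token: str = "<image>") -> str:
    """Replace the dataset's named placeholders with a model-specific image token.

    The order of images in the `images` list (baseline first, current second)
    must match the order of tokens in the expanded prompt.
    """
    return (
        user_content
        .replace("<image_baseline>", image_token)
        .replace("<image_current>", image_token)
    )
```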
## Intended use

- VLM SFT fine-tuning: Train small-to-medium VLMs to output structured JSON from paired satellite images.
- Structured evaluation: Benchmark a VLM's ability to correctly classify triage actions and localize affected areas.
- Prompt engineering research: Study how system prompts and image ordering affect structured output quality for geospatial analysis.
- Auxiliary dataset: Combine with larger general-purpose instruction datasets or domain-specific satellite datasets.
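A minimal structured-evaluation loop over the flat format could score action accuracy and bounding-box IoU like this. This is a sketch, not an official metric; it assumes predictions have already been parsed into dicts matching the schema above:

```python
def bbox_iou(a: list, b: list) -> float:
    """IoU of two [x_min, y_min, x_max, y_max] boxes; 0.0 if disjoint."""
    ix = max(0.0, min(a[2], b[2]) - max(a[0], b[0]))
    iy = max(0.0, min(a[3], b[3]) - max(a[1], b[1]))
    inter = ix * iy
    union = (a[2] - a[0]) * (a[3] - a[1]) + (b[2] - b[0]) * (b[3] - b[1]) - inter
    return inter / union if union > 0 else 0.0

def score(predictions: list, targets: list) -> tuple[float, float]:
    """Return (action accuracy, mean IoU over pairs where both boxes exist)."""
    correct = sum(p["action"] == t["action"] for p, t in zip(predictions, targets))
    ious = [
        bbox_iou(p["bbox_norm"], t["bbox_norm"])
        for p, t in zip(predictions, targets)
        if p["bbox_norm"] is not None and t["bbox_norm"] is not None
    ]
    return correct / len(targets), (sum(ious) / len(ious) if ious else 0.0)
```

Restricting IoU to pairs where both boxes exist avoids penalizing correct `discard` predictions, which carry a null box by design.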
## Main limitations

- **Small size (120 examples)**: This is intentionally a high-quality but small v0 dataset. It is suitable for few-shot fine-tuning, evaluation, or as an auxiliary component in a larger training mix, but not sufficient as a standalone training corpus for a full model.
- **Algorithmic labels with template rationales**: Triage actions are derived from pixel-level mask statistics using fixed thresholds, not human expert judgment. Rationale text is template-generated rather than written by domain experts. A production system should include human-in-the-loop validation or at least expert spot-checking.
- **No explicit hospital / humanitarian site / road-access labels**: The source BRIGHT dataset provides building-level damage masks but does not label hospitals, humanitarian sites, or road passability. Categories like "hospital disruption" or "major road-access disruption" referenced in the task specification are not present in this v0 dataset. Adding these would require overlaying OpenStreetMap amenity data or using a complementary dataset like SpaceNet 8 in a future version.
- **License restrictions**: Derived from Maxar Open Data under CC-BY-NC-4.0. Not for commercial use without additional licensing from Maxar.
- **Image modality gap**: BRIGHT pairs pre-event optical (RGB) with post-event SAR (radar). The baseline and current images come from different sensor modalities, which may create apparent differences unrelated to actual change. This is documented and expected, but VLM training pipelines should account for it with modality-aware prompting or multimodal understanding.
## How to use

```python
from datasets import load_dataset
from PIL import Image

# Load the flat evaluation format
ds = load_dataset("ChrisRPL/satellite-disruption-triage-v0", data_files="train_flat.jsonl", split="train")

# Or load the SFT format
ds_sft = load_dataset("ChrisRPL/satellite-disruption-triage-v0", data_files="train_sft.jsonl", split="train")

# Load paired images for one example.
# Image paths are relative to the dataset repository root, so the
# repository contents must be available locally.
example = ds[0]
baseline = Image.open(example["baseline_image"])
current = Image.open(example["current_image"])

# `datasets` parses the nested JSON object into a dict,
# so no json.loads call is needed
target = example["target_output"]
```
## Citation
If you use this dataset, please cite both this dataset and the original BRIGHT paper:
```bibtex
@dataset{satellite_disruption_triage_v0,
  title  = {Satellite Disruption Triage Dataset v0},
  author = {Hugging Face Agent},
  year   = {2025},
  url    = {https://huggingface.co/datasets/ChrisRPL/satellite-disruption-triage-v0}
}

@article{Chen2025Bright,
  author  = {Chen, H. and Song, J. and Dietrich, O. and Broni-Bediako, C. and Xuan, W. and Wang, J. and Shao, X. and Wei, Y. and Xia, J. and Lan, C. and Schindler, K. and Yokoya, N.},
  title   = {BRIGHT: a globally distributed multimodal building damage assessment dataset with very-high-resolution for all-weather disaster response},
  journal = {Earth System Science Data},
  volume  = {17},
  number  = {11},
  year    = {2025},
  pages   = {6217--6253},
  doi     = {10.5194/essd-17-6217-2025}
}
```
## Version history

- v0.1.0 (2025-04-23): Initial release. 120 examples from 11 BRIGHT events. 92 train / 28 eval.