# 🛰️ Land Cover Segmentation – U-Net
A U-Net deep learning model for pixel-wise land cover classification from Sentinel-2 multi-spectral satellite imagery. Trained to identify 5 land cover types across large-scale geospatial datasets.
## 🗺️ Land Cover Classes
The model is trained to classify every pixel into one of the following 5 categories:
| Class | Description |
|---|---|
| Barren Land | Exposed soil, rock, and sparsely vegetated areas |
| Built-up Area | Urban and suburban structures, roads |
| Crop | Agricultural and cultivated farmland |
| Forest | Dense tree cover and woodland areas |
| Water | Rivers, lakes, reservoirs, and water bodies |
## 🏗️ Model Architecture
The model is based on the U-Net architecture: a fully convolutional encoder-decoder network with skip connections designed for semantic image segmentation.
- **Encoder**: progressively extracts spatial features through two blocks of convolutions, each followed by max pooling, reducing spatial resolution while increasing feature depth (32 → 64 filters).
- **Bottleneck**: captures the highest-level abstract features at the most compressed representation (128 filters).
- **Decoder**: restores the original spatial resolution through two upsampling blocks; at each step, skip connections from the corresponding encoder block are concatenated to recover fine-grained spatial detail (64 → 32 filters).
- **Output**: a 1×1 convolution with softmax activation that produces a per-pixel probability distribution over the 5 land cover classes.
```
Input (64×64×16)
  │
  ├── Encoder Block 1: Conv(32) → Conv(32) → MaxPool ───────────────┐ skip
  ├── Encoder Block 2: Conv(64) → Conv(64) → MaxPool ─────────┐ skip│
  ├── Bottleneck:      Conv(128) → Conv(128)                  │     │
  ├── Decoder Block 1: UpSample → Concat ─────────────────────┘     │
  │                    Conv(64) → Conv(64)                          │
  ├── Decoder Block 2: UpSample → Concat ───────────────────────────┘
  │                    Conv(32) → Conv(32)
  └── Output:          Conv(5, 1×1) → Softmax
                       (64×64×5)
```
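The architecture described above can be sketched in Keras as follows. The card does not state kernel sizes, activations, or padding, so 3×3 ReLU convolutions with `same` padding are assumed here; treat this as an illustrative sketch rather than the exact training code.

```python
# Sketch of the described U-Net. Assumptions (not stated in the card):
# 3x3 kernels, ReLU activations, 'same' padding, 2x2 pooling/upsampling.
import tensorflow as tf
from tensorflow.keras import layers, Model

def conv_block(x, filters):
    """Two 3x3 convolutions, as in each encoder/decoder block."""
    x = layers.Conv2D(filters, 3, padding="same", activation="relu")(x)
    x = layers.Conv2D(filters, 3, padding="same", activation="relu")(x)
    return x

def build_unet(input_shape=(64, 64, 16), num_classes=5):
    inputs = layers.Input(shape=input_shape)

    # Encoder: two conv blocks, each followed by max pooling
    e1 = conv_block(inputs, 32)          # 64x64x32  (skip 1)
    p1 = layers.MaxPooling2D()(e1)       # 32x32x32
    e2 = conv_block(p1, 64)              # 32x32x64  (skip 2)
    p2 = layers.MaxPooling2D()(e2)       # 16x16x64

    # Bottleneck at the most compressed representation
    b = conv_block(p2, 128)              # 16x16x128

    # Decoder: upsample, concatenate the matching skip, then convolve
    u1 = layers.UpSampling2D()(b)
    d1 = conv_block(layers.Concatenate()([u1, e2]), 64)
    u2 = layers.UpSampling2D()(d1)
    d2 = conv_block(layers.Concatenate()([u2, e1]), 32)

    # 1x1 convolution + softmax -> per-pixel class probabilities
    outputs = layers.Conv2D(num_classes, 1, activation="softmax")(d2)
    return Model(inputs, outputs)

model = build_unet()
print(model.output_shape)  # (None, 64, 64, 5)
```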
## 📥 Input
| Property | Details |
|---|---|
| Sensor | Sentinel-2 Multi-Spectral Imagery |
| Patch Size | 64 × 64 pixels |
| Channels | 16 spectral bands |
| Preprocessing | Normalized pixel values |
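As one illustration of the preprocessing step, a per-band min-max normalization might look like the sketch below. The card does not specify the exact normalization scheme, so this is an assumption, and the patch values are random stand-in data:

```python
# Per-band min-max scaling into [0, 1] -- one plausible reading of
# "normalized pixel values"; the exact scheme is not stated in the card.
import numpy as np

rng = np.random.default_rng(42)
# Stand-in for a raw Sentinel-2 patch: 64x64 pixels, 16 spectral bands
patch = rng.uniform(0, 10000, size=(64, 64, 16)).astype("float32")

# Scale each band independently into [0, 1]
mins = patch.min(axis=(0, 1), keepdims=True)
maxs = patch.max(axis=(0, 1), keepdims=True)
normalized = (patch - mins) / (maxs - mins)

print(normalized.shape)  # (64, 64, 16)
```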
## 📤 Output
| Property | Details |
|---|---|
| Shape | 64 × 64 pixel segmentation mask |
| Type | Per-pixel class label (one of 5 land cover classes) |
| Format | Integer class map derived from softmax probabilities |
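Deriving the integer class map from the softmax probabilities is a per-pixel argmax, sketched here with NumPy; `probs` below is random stand-in data, not a real model prediction:

```python
# Convert per-pixel class probabilities into an integer class map.
import numpy as np

rng = np.random.default_rng(0)
# Stand-in softmax output for one 64x64 patch over 5 classes
probs = rng.random((64, 64, 5))
probs /= probs.sum(axis=-1, keepdims=True)  # rows sum to 1, like a softmax

# Index of the most probable class at each pixel -> values in 0..4
class_map = np.argmax(probs, axis=-1)

print(class_map.shape)  # (64, 64)
```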
## 📊 Model Performance
Evaluated on 5,746,688 test pixels across all 5 classes.
### Overall Metrics
| Metric | Score |
|---|---|
| Overall Accuracy | 93.04% |
| Validation Accuracy | 93.04% |
| Validation Loss | 0.2678 |
| Macro Avg Precision | 92.58% |
| Macro Avg Recall | 92.88% |
| Macro Avg F1-Score | 92.69% |
| Weighted Avg F1 | 93.06% |
### Per-Class Performance
| Class | Precision | Recall | F1-Score |
|---|---|---|---|
| Barren Land | 84.30% | 90.59% | 87.33% |
| Built-up Area | 93.32% | 89.29% | 91.26% |
| Crop | 93.53% | 95.42% | 94.47% |
| Forest | 95.55% | 96.84% | 96.19% |
| Water | 96.23% | 92.27% | 94.21% |
Forest achieves the highest F1-score and Water the highest precision. Barren Land is the most challenging class, likely due to spectral overlap with Built-up Area and Crop pixels.
## 📁 Model File
| File | Description |
|---|---|
| `landcover_unet_model.h5` | Trained Keras model (weights + architecture) |
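A typical way to load and run the checkpoint is sketched below. Because `landcover_unet_model.h5` itself is not bundled with this snippet, a tiny stand-in model with the same input/output contract is saved and reloaded to keep the example self-contained; with the real file you would pass its path to `load_model` directly:

```python
# Load a Keras .h5 checkpoint and run inference on one patch.
# NOTE: the model built here is a minimal stand-in, not the trained U-Net.
import numpy as np
import tensorflow as tf

# Stand-in with the same contract: 16 bands in, 5-class softmax out
inp = tf.keras.Input(shape=(64, 64, 16))
out = tf.keras.layers.Conv2D(5, 1, activation="softmax")(inp)
tf.keras.Model(inp, out).save("unet_stand_in.h5")

# With the real checkpoint: load_model("landcover_unet_model.h5")
model = tf.keras.models.load_model("unet_stand_in.h5")

patch = np.random.rand(1, 64, 64, 16).astype("float32")  # one normalized patch
probs = model.predict(patch, verbose=0)       # (1, 64, 64, 5) probabilities
mask = np.argmax(probs[0], axis=-1)           # (64, 64) integer class map
```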