
πŸ›°οΈ Land Cover Segmentation β€” U-Net

A U-Net deep learning model for pixel-wise land cover classification from Sentinel-2 multi-spectral satellite imagery. Trained to identify 5 land cover types across large-scale geospatial datasets.


πŸ—ΊοΈ Land Cover Classes

The model is trained to classify every pixel into one of the following 5 categories:

Class Description
Barren Land Exposed soil, rock, and sparsely vegetated areas
Built-up Area Urban and suburban structures, roads
Crop Agricultural and cultivated farmland
Forest Dense tree cover and woodland areas
Water Rivers, lakes, reservoirs, and water bodies

πŸ—οΈ Model Architecture

The model is based on the U-Net architecture β€” a fully convolutional encoder-decoder network with skip connections designed for semantic image segmentation.

Encoder progressively extracts spatial features through two blocks of convolutions followed by max pooling, reducing the spatial resolution while increasing feature depth (32 β†’ 64 filters).

Bottleneck captures the highest-level abstract features at the compressed representation (128 filters).

Decoder restores the original spatial resolution through two upsampling blocks. At each step, skip connections from the corresponding encoder block are concatenated to recover fine-grained spatial detail (64 β†’ 32 filters).

Output is a 1Γ—1 convolution with softmax activation that produces a per-pixel probability distribution over the 5 land cover classes.

Input (64Γ—64Γ—16)
    β”‚
    β”œβ”€β”€β”€ Encoder Block 1 β€” Conv(32) β†’ Conv(32) β†’ MaxPool ──────────────┐ skip
    β”œβ”€β”€β”€ Encoder Block 2 β€” Conv(64) β†’ Conv(64) β†’ MaxPool ────────┐ skipβ”‚
    β”œβ”€β”€β”€ Bottleneck      β€” Conv(128) β†’ Conv(128)                  β”‚     β”‚
    β”œβ”€β”€β”€ Decoder Block 1 β€” UpSample β†’ Concat β†β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”˜     β”‚
    β”‚                      Conv(64) β†’ Conv(64)                          β”‚
    β”œβ”€β”€β”€ Decoder Block 2 β€” UpSample β†’ Concat β†β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”˜
    β”‚                      Conv(32) β†’ Conv(32)
    └─── Output          β€” Conv(5, 1Γ—1) β†’ Softmax
                           (64Γ—64Γ—5)
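The block layout above can be sketched in Keras. This is a minimal reconstruction from the diagram, not the shipped model: kernel size (3Γ—3), ReLU activations, and `same` padding are common U-Net defaults assumed here, since the card does not state them.

```python
from tensorflow import keras
from tensorflow.keras import layers


def conv_block(x, filters):
    """Two 3x3 same-padded convolutions with ReLU (assumed block layout)."""
    x = layers.Conv2D(filters, 3, padding="same", activation="relu")(x)
    x = layers.Conv2D(filters, 3, padding="same", activation="relu")(x)
    return x


def build_unet(input_shape=(64, 64, 16), num_classes=5):
    inputs = keras.Input(shape=input_shape)

    # Encoder: 32 -> 64 filters, halving spatial resolution each time
    e1 = conv_block(inputs, 32)
    p1 = layers.MaxPooling2D(2)(e1)          # 64x64 -> 32x32
    e2 = conv_block(p1, 64)
    p2 = layers.MaxPooling2D(2)(e2)          # 32x32 -> 16x16

    # Bottleneck: 128 filters at the most compressed representation
    b = conv_block(p2, 128)

    # Decoder: upsample and concatenate the matching encoder skip connection
    u1 = layers.UpSampling2D(2)(b)           # 16x16 -> 32x32
    d1 = conv_block(layers.Concatenate()([u1, e2]), 64)
    u2 = layers.UpSampling2D(2)(d1)          # 32x32 -> 64x64
    d2 = conv_block(layers.Concatenate()([u2, e1]), 32)

    # Output: 1x1 convolution with softmax over the 5 classes
    outputs = layers.Conv2D(num_classes, 1, activation="softmax")(d2)
    return keras.Model(inputs, outputs)
```

Calling `build_unet()` yields a model mapping `(64, 64, 16)` inputs to `(64, 64, 5)` per-pixel class probabilities, matching the diagram.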

## πŸ“₯ Input

| Property | Details |
|---|---|
| Sensor | Sentinel-2 Multi-Spectral Imagery |
| Patch Size | 64 Γ— 64 pixels |
| Channels | 16 spectral bands |
| Preprocessing | Normalized pixel values |
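The card states only that pixel values are normalized; the exact scheme used in training is not documented. A common choice for multi-band imagery is per-band min-max scaling β€” a sketch, assuming that scheme:

```python
import numpy as np


def normalize_patch(patch: np.ndarray, eps: float = 1e-8) -> np.ndarray:
    """Min-max scale each band of a (64, 64, 16) patch to [0, 1].

    NOTE: per-band min-max scaling is an assumption; the training-time
    normalization is not specified in this model card.
    """
    mins = patch.min(axis=(0, 1), keepdims=True)   # per-band minimum
    maxs = patch.max(axis=(0, 1), keepdims=True)   # per-band maximum
    return (patch - mins) / (maxs - mins + eps)
```

Whatever scheme was actually used, inference inputs must be normalized the same way as the training data.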

## πŸ“€ Output

| Property | Details |
|---|---|
| Shape | 64 Γ— 64 pixel segmentation mask |
| Type | Per-pixel class label (one of 5 land cover classes) |
| Format | Integer class map derived from softmax probabilities |
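The integer class map is obtained by taking the per-pixel argmax over the 5 softmax channels, e.g.:

```python
import numpy as np

# Hypothetical softmax output for one patch: (64, 64, 5) class probabilities.
probs = np.random.rand(64, 64, 5)
probs /= probs.sum(axis=-1, keepdims=True)

# Per-pixel class label: index of the most probable class (0-4).
class_map = np.argmax(probs, axis=-1).astype(np.uint8)
```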

## πŸ“Š Model Performance

Evaluated on 5,746,688 test pixels across all 5 classes.

### Overall Metrics

| Metric | Score |
|---|---|
| Overall Accuracy | 93.04% |
| Validation Accuracy | 93.04% |
| Validation Loss | 0.2678 |
| Macro Avg Precision | 92.58% |
| Macro Avg Recall | 92.88% |
| Macro Avg F1-Score | 92.69% |
| Weighted Avg F1 | 93.06% |

### Per-Class Performance

| Class | Precision | Recall | F1-Score |
|---|---|---|---|
| Barren Land | 84.30% | 90.59% | 87.33% |
| Built-up Area | 93.32% | 89.29% | 91.26% |
| Crop | 93.53% | 95.42% | 94.47% |
| Forest | 95.55% | 96.84% | 96.19% |
| Water | 96.23% | 92.27% | 94.21% |

Forest and Water achieve the highest F1-scores. Barren Land is the most challenging class, likely due to spectral overlap with Built-up Areas and Crop fields.
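The macro averages are unweighted means of the per-class scores, which can be checked directly from the table (the macro precision differs from the reported 92.58% by a hundredth of a point, since the per-class values are themselves rounded):

```python
# Per-class scores copied from the table above (percent).
precision = [84.30, 93.32, 93.53, 95.55, 96.23]
recall    = [90.59, 89.29, 95.42, 96.84, 92.27]
f1        = [87.33, 91.26, 94.47, 96.19, 94.21]


def macro(scores):
    """Unweighted mean over the 5 classes."""
    return sum(scores) / len(scores)


print(f"macro precision: {macro(precision):.2f}")  # 92.59 (card: 92.58)
print(f"macro recall:    {macro(recall):.2f}")     # 92.88
print(f"macro F1:        {macro(f1):.2f}")         # 92.69
```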


πŸ“ Model File

File Description
landcover_unet_model.h5 Trained Keras model (weights + architecture)
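A minimal inference sketch. Only the file name comes from the table above; the `predict_patch` helper is illustrative, and the index-to-class mapping is assumed to follow the alphabetical order of the class table β€” the actual mapping is not documented in this card.

```python
import numpy as np
from tensorflow import keras

# ASSUMPTION: class indices follow the alphabetical order of the class table.
CLASS_NAMES = ["Barren Land", "Built-up Area", "Crop", "Forest", "Water"]


def predict_patch(model: keras.Model, patch: np.ndarray) -> np.ndarray:
    """Run the model on one normalized (64, 64, 16) patch and return a
    (64, 64) integer class map."""
    probs = model.predict(patch[np.newaxis, ...], verbose=0)  # (1, 64, 64, 5)
    return np.argmax(probs[0], axis=-1).astype(np.uint8)


# Typical usage with the file from the table above:
# model = keras.models.load_model("landcover_unet_model.h5")
# mask = predict_patch(model, patch)
```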