BetaEarth SegFormer-B2 with FiLM conditioning — robust variant trained with curriculum modality dropout for single-modality deployment.
Part of the BetaEarth family. Unlike the frozen+FiLM models which perform best with 4 input modalities, this model handles any subset of {S2 L1C, S2 L2A, S1 RTC, COP-DEM} gracefully.
All-modality (full input):
| Metric | Value |
|---|---|
| Val cosine similarity (all) | 0.878 |
| Total parameters | 104.8M |
| Trainable parameters | 104.8M |
| Inputs | S2 L1C+L2A (9ch), S1 RTC (2ch), COP-DEM (1ch), DOY |
| Output | (H, W, 64) float32, L2-normalised |
Single-modality robustness (main contribution):
| Input subset | Cosine sim | Notes |
|---|---|---|
| All modalities | 0.878 | Comparable to frozen+FiLM (0.886) |
| L1C + DEM | 0.850 | |
| L2A + DEM | 0.823 | |
| S2 both (L1C+L2A) | 0.818 | |
| L1C only | 0.806 | Deployable with just L1C |
| L2A only | 0.755 | Up from 0.537 in frozen+FiLM |
| S1 only | 0.712 | Up from ~0.60 in frozen+FiLM |
| DEM only | 0.609 | |
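The robustness above comes from curriculum modality dropout during training. As a rough illustration (not the actual BetaEarth training code; the schedule, names, and drop probabilities here are assumptions), each step keeps a random subset of modalities, with the drop rate ramping up early in training:

```python
import random

# Illustrative sketch of curriculum modality dropout (hypothetical schedule,
# not taken from the BetaEarth codebase).
MODALITIES = ["s2_l1c", "s2_l2a", "s1", "dem"]

def sample_modalities(step, total_steps, rng=random):
    """Return the modality subset kept for this training step."""
    # Curriculum ramp (assumed): drop probability grows from 0 to 0.5
    # over the first 30% of training, then stays flat.
    p_drop = 0.5 * min(1.0, step / (0.3 * total_steps))
    kept = [m for m in MODALITIES if rng.random() > p_drop]
    if not kept:  # always feed the model at least one modality
        kept = [rng.choice(MODALITIES)]
    return kept
```

Training against these random subsets is what lets the same checkpoint serve both the full four-modality input and single-modality deployments like L1C-only.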
```shell
pip install betaearth
```

```python
from betaearth import BetaEarth

model = BetaEarth.from_pretrained("asterisk-labs/betaearth-segformer-film-robust")

# Full input — all modalities
embedding = model.predict(
    s2_l1c=s2_l1c,  # (9, H, W) uint16
    s2_l2a=s2_l2a,  # (9, H, W) uint16
    s1=s1,          # (2, H, W) float32
    dem=dem,        # (1, H, W) float32
    doy=182,
)

# Single modality — works without all inputs
embedding = model.predict(s2_l2a=s2_l2a, doy=182)
```
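Because the output embeddings are L2-normalised (see the output spec above), the cosine similarities reported in the tables reduce to per-pixel dot products. A minimal sketch, using random stand-ins for the two (H, W, 64) embeddings:

```python
import numpy as np

# Stand-ins for two embeddings returned by model.predict; real outputs are
# already L2-normalised, so we normalise the random arrays to match.
emb_full = np.random.randn(4, 4, 64).astype(np.float32)
emb_full /= np.linalg.norm(emb_full, axis=-1, keepdims=True)
emb_l2a = np.random.randn(4, 4, 64).astype(np.float32)
emb_l2a /= np.linalg.norm(emb_l2a, axis=-1, keepdims=True)

# Per-pixel cosine similarity: dot product along the 64-dim channel axis.
cos_map = np.einsum("hwc,hwc->hw", emb_full, emb_l2a)
mean_cos = float(cos_map.mean())
```

The same computation is how you would check, on your own scenes, how close a single-modality embedding lands to the full-input one.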
Model family comparison:
| Model | Cos Sim | Params | Best for |
|---|---|---|---|
| betaearth-segformer-film | 0.886 | 0.3M | Best all-modality quality |
| betaearth-segformer-film-robust | 0.878 | 104.8M | Flexible deployment |
| betaearth-segformer-film-hilr | 0.886 | 0.3M | Alt frozen |
| betaearth-segformer | 0.880 | 104.8M | No timestamp |
| betaearth-segformer-film-scratch | 0.883 | 104.8M | End-to-end |
| betaearth-rgb-only | 0.836 | 26.3M | Minimal data |
Citation:
```bibtex
@inproceedings{czerkawski2026betaearth,
  title     = {BetaEarth: Emulating Closed-Source Earth Observation Foundation Models Through Their Public Embeddings},
  author    = {Czerkawski, Mikolaj},
  booktitle = {ISPRS Congress 2026},
  year      = {2026}
}
```
License: CC-BY 4.0. Training data attribution: "The AlphaEarth Foundations Satellite Embedding dataset is produced by Google and Google DeepMind."