Continuum Attention for Neural Operators
Paper • arXiv:2406.06486
This model implements a Cross-Attention Coefficient Head for the Fourier Neural Operator (FNO) architecture, designed to improve out-of-distribution (OOD) generalization when the coefficient field statistics shift.
Standard FNO treats the variable coefficient field a(x) as just another input channel (concatenated). This model instead uses a cross-attention mechanism where:
queries come from a learned projection of the coordinate grid on [-1, 1]² and keys/values come from an embedding of the coefficient field a(x):

    a(x) ──[kv_embed]──► KV ──► cross-attn ◄── Q = query_proj(coordinate_grid [-1,1]²)
      │
      └──► bypass = coeff_bypass(a)
                    │
                    ▼
    attended + bypass ──[FNO blocks]──► projection ──► u(x)

This forces the model to build a conditioning representation of the coefficient field rather than treating it as a fixed input feature, improving generalization when the permeability statistics differ from those seen in training.
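As an illustration of the mechanism, here is a minimal PyTorch-style sketch of such a coefficient head. The module names `kv_embed`, `query_proj`, and `coeff_bypass` follow the diagram above; the shapes, defaults, and all other details are assumptions for exposition, not the exact training code.

```python
import torch
import torch.nn as nn


class CrossAttnCoefficientHead(nn.Module):
    """Sketch of the cross-attention coefficient head (defaults match the small config)."""

    def __init__(self, width=32, heads=4):
        super().__init__()
        self.kv_embed = nn.Linear(1, width)      # a(x) values -> keys/values
        self.query_proj = nn.Linear(2, width)    # (x, y) coordinates -> queries
        self.coeff_bypass = nn.Linear(1, width)  # direct bypass path for a(x)
        self.attn = nn.MultiheadAttention(width, heads, batch_first=True)

    def forward(self, a):
        # a: (batch, 1, H, W) coefficient field on the grid
        b, _, h, w = a.shape
        a_seq = a.reshape(b, 1, h * w).transpose(1, 2)           # (b, HW, 1)
        kv = self.kv_embed(a_seq)                                # (b, HW, width)
        # Queries come from the coordinate grid on [-1, 1]^2.
        ys, xs = torch.meshgrid(
            torch.linspace(-1, 1, h), torch.linspace(-1, 1, w), indexing="ij"
        )
        grid = torch.stack([xs, ys], dim=-1).reshape(1, h * w, 2).to(a)
        q = self.query_proj(grid.expand(b, -1, -1))              # (b, HW, width)
        attended, _ = self.attn(q, kv, kv)                       # (b, HW, width)
        bypass = self.coeff_bypass(a_seq)                        # (b, HW, width)
        # Channel-first feature map that the FNO blocks consume downstream.
        return (attended + bypass).transpose(1, 2).reshape(b, -1, h, w)
```

In the full model this output feeds the FNO blocks and a final projection to u(x), as in the diagram.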
**Small configuration**

| Parameter | Value |
|---|---|
| Resolution | 32×32 |
| Width | 32 |
| Depth | 3 FNO blocks |
| Modes | 8 |
| Attention Heads | 4 |

**Large configuration**

| Parameter | Value |
|---|---|
| Resolution | 64×64 |
| Width | 64 |
| Depth | 4 FNO blocks |
| Modes | 12 |
| Attention Heads | 4 |
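For concreteness, the two configurations could be captured as plain dictionaries; `build_model` below is a hypothetical helper used only for illustration, and the field names simply mirror the tables above.

```python
# Hypothetical configuration dictionaries mirroring the tables above.
SMALL_CONFIG = dict(resolution=32, width=32, n_blocks=3, modes=8, attn_heads=4)
LARGE_CONFIG = dict(resolution=64, width=64, n_blocks=4, modes=12, attn_heads=4)

# model = build_model(**SMALL_CONFIG)  # build_model is an assumed helper, not a shipped API
```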
The task is the Darcy flow problem -∇·(a(x)∇u) = 1 on the unit square with zero Dirichlet boundary conditions; reference solutions are computed with a sparse direct solver (spsolve) or a dense NumPy solve.

| Split | Distribution | Baseline RL2 | Cross-Attn RL2 |
|---|---|---|---|
| ID | L=0.1 | ~0.018 | ~0.021 |
| OOD Smooth | L=0.2 | ~0.065 (3.5×) | ~0.029 (1.4×) |
| OOD Rough | L=0.05 | ~0.071 (3.9×) | ~0.032 (1.5×) |
Numbers are from small-scale experiments at 32×32 resolution; the parenthesized factors give each OOD error relative to the corresponding in-distribution error. Full 64×64 results are pending.
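As a rough sketch of the assumed data pipeline (not the exact script behind the numbers above), the Darcy problem can be solved on a regular grid with a 5-point finite-difference stencil and `scipy.sparse.linalg.spsolve`, and the relative L2 error (RL2) in the table is computed directly from the predicted and reference solutions:

```python
import numpy as np
from scipy.sparse import lil_matrix, csr_matrix
from scipy.sparse.linalg import spsolve


def solve_darcy(a):
    """Solve -div(a grad u) = 1 on the unit square with zero Dirichlet BCs.

    a: (n, n) coefficient field sampled on the full grid, boundary included.
    """
    n = a.shape[0]
    h = 1.0 / (n - 1)
    m = (n - 2) ** 2                                  # number of interior unknowns
    idx = lambda i, j: (i - 1) * (n - 2) + (j - 1)    # interior node -> unknown index
    A = lil_matrix((m, m))
    b = np.ones(m)                                    # right-hand side f = 1
    for i in range(1, n - 1):
        for j in range(1, n - 1):
            k = idx(i, j)
            # Coefficient at cell faces via arithmetic averages of a(x).
            ae = 0.5 * (a[i, j] + a[i + 1, j])
            aw = 0.5 * (a[i, j] + a[i - 1, j])
            an = 0.5 * (a[i, j] + a[i, j + 1])
            as_ = 0.5 * (a[i, j] + a[i, j - 1])
            A[k, k] = (ae + aw + an + as_) / h**2
            if i + 1 < n - 1:
                A[k, idx(i + 1, j)] = -ae / h**2
            if i - 1 > 0:
                A[k, idx(i - 1, j)] = -aw / h**2
            if j + 1 < n - 1:
                A[k, idx(i, j + 1)] = -an / h**2
            if j - 1 > 0:
                A[k, idx(i, j - 1)] = -as_ / h**2
    u = np.zeros((n, n))                              # zero Dirichlet boundary values
    u[1:-1, 1:-1] = spsolve(csr_matrix(A), b).reshape(n - 2, n - 2)
    return u


def relative_l2(pred, true):
    """Relative L2 error (RL2) as reported in the results table."""
    return np.linalg.norm(pred - true) / np.linalg.norm(true)
```

Coefficient fields a(x) would then be drawn from a random-field sampler with the length scales L listed in the table; that sampler is not shown here.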
If you use this model, please cite:
@article{calvello2024continuum,
  title={Continuum Attention for Neural Operators},
  author={Calvello, Edoardo and Boull\'e, Nicolas and Sch{\"a}fer, Florian},
  journal={arXiv preprint arXiv:2406.06486},
  year={2024}
}
@inproceedings{li2021fno,
  title={Fourier Neural Operator for Parametric Partial Differential Equations},
  author={Li, Zongyi and Kovachki, Nikola and Azizzadenesheli, Kamyar and others},
  booktitle={International Conference on Learning Representations (ICLR)},
  year={2021}
}