---
license: bsd-3-clause
library_name: braindecode
pipeline_tag: feature-extraction
tags:
- eeg
- biosignal
- pytorch
- neuroscience
- braindecode
- convolutional
---
# FBCNet
FBCNet from Mane, R. et al. (2021).
Architecture-only repository. This repo documents the `braindecode.models.FBCNet` class. No pretrained weights are distributed here: instantiate the model and train it on your own data, or fine-tune from a published foundation-model checkpoint separately.
## Quick start
```bash
pip install braindecode
```

```python
from braindecode.models import FBCNet

model = FBCNet(
    n_chans=22,
    sfreq=250,
    input_window_seconds=4.0,
    n_outputs=4,
)
```
The signal-shape arguments above are example defaults — adjust them to match your recording.
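As a minimal sketch of matching the model to a recording, the shape arguments can be read off an epoched data array; the array shape and sampling rate below are placeholders, not recommended values.

```python
import torch
from braindecode.models import FBCNet

# Hypothetical epoched recording: (n_trials, n_channels, n_times), sampled at 250 Hz.
X = torch.randn(100, 22, 1000)
sfreq = 250.0

model = FBCNet(
    n_chans=X.shape[1],
    n_times=X.shape[2],  # alternative to sfreq + input_window_seconds
    sfreq=sfreq,
    n_outputs=4,
)

out = model(X[:2])  # sanity-check the forward pass
print(out.shape)    # (2, n_outputs)
```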
## Documentation
- Full API reference (parameters, references, architecture figure): https://braindecode.org/stable/generated/braindecode.models.FBCNet.html
- Interactive browser with live instantiation: https://huggingface.co/spaces/braindecode/model-explorer
- Source on GitHub: https://github.com/braindecode/braindecode/blob/master/braindecode/models/fbcnet.py#L31
## Architecture description
The block below is the rendered class docstring (parameters, references, architecture figure where available).
FBCNet from Mane, R et al (2021) [fbcnet2021]_.
.. figure:: https://raw.githubusercontent.com/ravikiran-mane/FBCNet/refs/heads/master/FBCNet-V2.png
   :align: center
   :alt: FBCNet Architecture
The FBCNet model applies spatial convolution and variance calculation along the time axis, inspired by the Filter Bank Common Spatial Pattern (FBCSP) algorithm.
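To make the temporal aggregation concrete, here is a minimal sketch of log-variance pooling over the time axis; this paraphrases the idea described above and is not braindecode's exact `LogVarLayer` implementation, and the tensor shape is illustrative.

```python
import torch

def log_var_pool(x: torch.Tensor, dim: int = -1) -> torch.Tensor:
    # Collapse the time axis to the log of the temporal variance,
    # clamped for numerical stability.
    return torch.log(torch.clamp(x.var(dim=dim, keepdim=True), min=1e-6))

feats = torch.randn(8, 32, 9, 250)  # (batch, spatial filters, bands, time) - illustrative
pooled = log_var_pool(feats)        # -> (8, 32, 9, 1)
```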
### Notes
This implementation is not guaranteed to be correct and has not been checked by the original authors; it was reimplemented from the paper description and the source code [fbcnetcode2021]_. Note a discrepancy in the activation function: the paper uses ELU, but the original code uses SiLU. We follow the code.
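If you want the paper's behaviour instead, the documented `activation` argument accepts the activation class; a small sketch (the other argument values are placeholders):

```python
import torch.nn as nn
from braindecode.models import FBCNet

# Swap the default SiLU for ELU to match the paper's description.
model = FBCNet(n_chans=22, n_outputs=4, n_times=1000, sfreq=250, activation=nn.ELU)
```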
### Parameters
- `n_bands` : int or None or list[tuple[int, int]], default=9
  Number of frequency bands; an int, None, or an explicit list of (low, high) band edges.
- `n_filters_spat` : int, default=32
  Number of spatial filters for the first convolution.
- `n_dim` : int, default=3
  Number of dimensions for the temporal reductor.
- `temporal_layer` : str, default='LogVarLayer'
  Type of temporal aggregator layer. Options: 'VarLayer', 'StdLayer', 'LogVarLayer', 'MeanLayer', 'MaxLayer'.
- `stride_factor` : int, default=4
  Stride factor for reshaping.
- `activation` : nn.Module, default=nn.SiLU
  Activation function class to apply in the Spatial Convolution Block.
- `cnn_max_norm` : float, default=2.0
  Maximum norm for the spatial convolution layer.
- `linear_max_norm` : float, default=0.5
  Maximum norm for the final linear layer.
- `filter_parameters` : dict, default=None
  Dictionary of parameters for the FilterBankLayer. If None, a default Chebyshev Type II filter with a transition bandwidth of 2 Hz and stop-band ripple of 30 dB is used.
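For illustration, a sketch overriding a few of the documented defaults; the band edges below are arbitrary examples, not values recommended by the authors:

```python
from braindecode.models import FBCNet

model = FBCNet(
    n_chans=22,
    n_outputs=4,
    sfreq=250,
    input_window_seconds=4.0,
    n_bands=[(4, 8), (8, 12), (12, 16), (16, 20)],  # explicit (low, high) edges in Hz
    temporal_layer="MeanLayer",                     # one of the documented options
    n_filters_spat=64,
)
```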
### References
.. [fbcnet2021] Mane, R., Chew, E., Chua, K., Ang, K. K., Robinson, N., Vinod, A. P., ... & Guan, C. (2021). FBCNet: A multi-view convolutional neural network for brain-computer interface. arXiv preprint arXiv:2104.01233.

.. [fbcnetcode2021] Source code: https://github.com/ravikiran-mane/FBCNet
## Hugging Face Hub integration
When the optional `huggingface_hub` package is installed, all models automatically gain the ability to be pushed to and loaded from the Hugging Face Hub. Install with:

```bash
pip install braindecode[hub]
```
Pushing a model to the Hub:
```python
from braindecode.models import FBCNet

# Train your model
model = FBCNet(n_chans=22, n_outputs=4, n_times=1000)
# ... training code ...

# Push to the Hub
model.push_to_hub(
    repo_id="username/my-fbcnet-model",
    commit_message="Initial model upload",
)
```
Loading a model from the Hub:
```python
from braindecode.models import FBCNet

# Load pretrained model
model = FBCNet.from_pretrained("username/my-fbcnet-model")

# Load with a different number of outputs (head is rebuilt automatically)
model = FBCNet.from_pretrained("username/my-fbcnet-model", n_outputs=4)
```
Extracting features and replacing the head:
```python
import torch

x = torch.randn(1, model.n_chans, model.n_times)

# Extract encoder features (consistent dict across all models)
out = model(x, return_features=True)
features = out["features"]

# Replace the classification head
model.reset_head(n_outputs=10)
```
Saving and restoring full configuration:
```python
import json

config = model.get_config()  # all __init__ params
with open("config.json", "w") as f:
    json.dump(config, f)

model2 = FBCNet.from_config(config)  # reconstruct (no weights)
```
All model parameters (both EEG-specific and model-specific such as dropout rates, activation functions, number of filters) are automatically saved to the Hub and restored when loading.
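As a quick sanity check of that round trip, a sketch reusing the placeholder repo id and the `n_chans=22` value from the push example above:

```python
from braindecode.models import FBCNet

# Reload the model pushed earlier; the config travels with the weights.
restored = FBCNet.from_pretrained("username/my-fbcnet-model")
assert restored.get_config()["n_chans"] == 22  # matches the value used at push time
```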
See the braindecode `load-pretrained-models` tutorial for a complete walkthrough.
## Citation
Please cite both the original paper for this architecture (see the References section above) and braindecode:
```bibtex
@article{aristimunha2025braindecode,
  title   = {Braindecode: a deep learning library for raw electrophysiological data},
  author  = {Aristimunha, Bruno and others},
  journal = {Zenodo},
  year    = {2025},
  doi     = {10.5281/zenodo.17699192},
}
```
## License
BSD-3-Clause for the model code (matching braindecode). Pretraining-derived weights, if you fine-tune from a checkpoint, inherit the license of that checkpoint and its training corpus.