---
license: bsd-3-clause
library_name: braindecode
pipeline_tag: feature-extraction
tags:
- eeg
- biosignal
- pytorch
- neuroscience
- braindecode
- convolutional
---
# FBCNet
FBCNet from Mane, R. et al. (2021).
> **Architecture-only repository.** This repo documents the
> `braindecode.models.FBCNet` class. **No pretrained weights are
> distributed here** — instantiate the model and train it on your own
> data, or fine-tune from a published foundation-model checkpoint
> separately.
## Quick start
```bash
pip install braindecode
```
```python
from braindecode.models import FBCNet
model = FBCNet(
    n_chans=22,
    sfreq=250,
    input_window_seconds=4.0,
    n_outputs=4,
)
```
The signal-shape arguments above are example defaults — adjust them
to match your recording.
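To sanity-check the shapes, you can push a dummy batch through the model; the random tensors and the single optimizer step below are illustrative assumptions, not part of the braindecode API:

```python
import torch
import torch.nn.functional as F

# n_times = sfreq * input_window_seconds = 250 * 4.0 = 1000 samples
x = torch.randn(8, 22, 1000)   # dummy batch: (batch, n_chans, n_times)
y = torch.randint(0, 4, (8,))  # dummy labels for the 4 classes

logits = model(x)
print(logits.shape)  # torch.Size([8, 4])

# One illustrative supervised training step
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)
loss = F.cross_entropy(logits, y)
loss.backward()
optimizer.step()
```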
## Documentation
- Full API reference (parameters, references, architecture figure):
<https://braindecode.org/stable/generated/braindecode.models.FBCNet.html>
- Interactive browser with live instantiation:
<https://huggingface.co/spaces/braindecode/model-explorer>
- Source on GitHub: <https://github.com/braindecode/braindecode/blob/master/braindecode/models/fbcnet.py#L31>
## Architecture description
The block below is the rendered class docstring (parameters,
references, architecture figure where available).
<div class='bd-doc'><main>
<p>FBCNet from Mane, R. et al. (2021) [fbcnet2021]_.</p>
<span style="display:inline-block;padding:2px 8px;border-radius:4px;background:#5cb85c;color:white;font-size:11px;font-weight:600;margin-right:4px;">Convolution</span><span style="display:inline-block;padding:2px 8px;border-radius:4px;background:#0072B2;color:white;font-size:11px;font-weight:600;margin-right:4px;">Filterbank</span>
.. figure:: https://raw.githubusercontent.com/ravikiran-mane/FBCNet/refs/heads/master/FBCNet-V2.png
:align: center
:alt: FBCNet Architecture
The FBCNet model applies spatial convolution and variance calculation along
the time axis, inspired by the Filter Bank Common Spatial Pattern (FBCSP)
algorithm.
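The temporal variance step can be pictured as log-variance pooling over
time windows; a toy illustration of that idea (not the library's internal
code), assuming the example shapes used in this card:

.. code::

    import torch

    # Pretend output of the spatial conv block: (batch, filters, time)
    feats = torch.randn(8, 32, 1000)

    # Split time into stride_factor=4 windows, take log-variance per window
    windows = feats.reshape(8, 32, 4, 250)
    log_var = torch.log(windows.var(dim=-1) + 1e-6)  # -> (8, 32, 4)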
Notes
-----
This implementation is not guaranteed to be correct and has not been checked
by the original authors; it has only been reimplemented from the paper
description and source code [fbcnetcode2021]_. One known difference concerns
the activation function: the paper uses ELU, but the original code uses SiLU.
This implementation follows the code.
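Since ``activation`` is an ordinary parameter (see below), reproducing the
paper's ELU variant is a one-line change; a minimal sketch, assuming the same
example shapes used elsewhere in this card:

.. code::

    import torch.nn as nn
    from braindecode.models import FBCNet

    # Swap the default SiLU for the ELU used in the paper
    model_elu = FBCNet(n_chans=22, n_outputs=4, n_times=1000,
                       activation=nn.ELU)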
Parameters
----------
n_bands : int or None or list[tuple[int, int]], default=9
    Number of frequency bands. Can also be a list of ``(low, high)``
    frequency tuples defining each band explicitly.
n_filters_spat : int, default=32
    Number of spatial filters for the first convolution.
n_dim : int, default=3
    Number of dimensions for the temporal reductor.
temporal_layer : str, default='LogVarLayer'
    Type of temporal aggregator layer. Options: 'VarLayer', 'StdLayer',
    'LogVarLayer', 'MeanLayer', 'MaxLayer'.
stride_factor : int, default=4
    Stride factor used when reshaping the time axis into windows for
    temporal aggregation.
activation : nn.Module, default=nn.SiLU
    Activation function class to apply in the Spatial Convolution Block.
cnn_max_norm : float, default=2.0
    Maximum norm for the spatial convolution layer.
linear_max_norm : float, default=0.5
    Maximum norm for the final linear layer.
filter_parameters : dict, default=None
    Dictionary of parameters to use for the FilterBankLayer.
    If None, a default Chebyshev Type II filter with a transition bandwidth
    of 2 Hz and a stop-band ripple of 30 dB will be used.
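For instance, both the filterbank and the temporal aggregator can be
overridden at construction time; a hypothetical configuration sketch:

.. code::

    from braindecode.models import FBCNet

    # Three explicit (low, high) bands instead of the default 9-band filterbank
    model = FBCNet(
        n_chans=22,
        n_outputs=4,
        n_times=1000,
        n_bands=[(4, 8), (8, 13), (13, 30)],
        temporal_layer='MeanLayer',
    )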
References
----------
.. [fbcnet2021] Mane, R., Chew, E., Chua, K., Ang, K. K., Robinson, N.,
Vinod, A. P., ... & Guan, C. (2021). FBCNet: A multi-view convolutional
neural network for brain-computer interface. arXiv preprint arXiv:2104.01233.
.. [fbcnetcode2021] Link to source-code:
https://github.com/ravikiran-mane/FBCNet
.. rubric:: Hugging Face Hub integration
When the optional ``huggingface_hub`` package is installed, all models
automatically gain the ability to be pushed to and loaded from the
Hugging Face Hub. Install with::

    pip install braindecode[hub]
**Pushing a model to the Hub:**
.. code::

    from braindecode.models import FBCNet

    # Train your model
    model = FBCNet(n_chans=22, n_outputs=4, n_times=1000)
    # ... training code ...

    # Push to the Hub
    model.push_to_hub(
        repo_id="username/my-fbcnet-model",
        commit_message="Initial model upload",
    )
**Loading a model from the Hub:**
.. code::

    from braindecode.models import FBCNet

    # Load pretrained model
    model = FBCNet.from_pretrained("username/my-fbcnet-model")

    # Load with a different number of outputs (head is rebuilt automatically)
    model = FBCNet.from_pretrained("username/my-fbcnet-model", n_outputs=4)
**Extracting features and replacing the head:**
.. code::

    import torch

    x = torch.randn(1, model.n_chans, model.n_times)

    # Extract encoder features (consistent dict across all models)
    out = model(x, return_features=True)
    features = out["features"]

    # Replace the classification head
    model.reset_head(n_outputs=10)
**Saving and restoring full configuration:**
.. code::

    import json

    config = model.get_config()  # all __init__ params
    with open("config.json", "w") as f:
        json.dump(config, f)

    model2 = FBCNet.from_config(config)  # reconstruct (no weights)
All model parameters (both EEG-specific and model-specific such as
dropout rates, activation functions, number of filters) are automatically
saved to the Hub and restored when loading.
See :ref:`load-pretrained-models` for a complete tutorial.</main>
</div>
## Citation
Please cite both the original paper for this architecture (see the
*References* section above) and braindecode:
```bibtex
@article{aristimunha2025braindecode,
title = {Braindecode: a deep learning library for raw electrophysiological data},
author = {Aristimunha, Bruno and others},
journal = {Zenodo},
year = {2025},
doi = {10.5281/zenodo.17699192},
}
```
## License
BSD-3-Clause for the model code (matching braindecode).
Pretraining-derived weights, if you fine-tune from a checkpoint,
inherit the license of that checkpoint and its training corpus.