story_data.data = list of words (the actual text)
story_data.data_times = timestamp for each word (in seconds)
story_data.tr_times = fMRI scan timestamps (every 2 seconds)
|
|
| TR (Repetition Time) is how often the fMRI scanner takes a "photo" of the brain. Here it's every 2 seconds. So 256 TRs = one brain scan every 2 seconds for ~512 seconds of story. |
|
|
We need to align words with TR times because the brain doesn't respond to a word instantly: there's a hemodynamic delay of ~4-6 seconds between hearing a word and the blood-oxygen response showing up in the fMRI signal.
So we need to:
- Know when each word was spoken (that's data_times — 1656 timestamps, one per word)
- Downsample from word rate (~3 words/sec) to TR rate (1 scan per 2 sec) so the time dimensions match
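A minimal way to sketch the downsampling step (hypothetical helper; real pipelines often use Lanczos interpolation instead of the simple window-averaging shown here):

```python
import numpy as np

def downsample_embeddings(word_vecs, word_times, tr_times, tr=2.0):
    """Average the embeddings of all words spoken inside each TR window.

    Sketch only — assumed helper, not the actual pipeline code.
    word_vecs:  (n_words, dim) one embedding per word
    word_times: (n_words,) time each word was spoken, in seconds
    tr_times:   (n_trs,) scan timestamps, in seconds
    """
    out = np.zeros((len(tr_times), word_vecs.shape[1]))
    for i, t in enumerate(tr_times):
        # words falling in the [t, t + tr) window belong to this scan
        mask = (word_times >= t) & (word_times < t + tr)
        if mask.any():
            out[i] = word_vecs[mask].mean(axis=0)
    return out
```

With ~3 words/sec and a 2-second TR, each row of the output averages roughly 6 word vectors, taking the time axis from 1656 words down to 256 TRs.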
|
|
| Dimension mismatch: |
| Words: 1656 words × embedding_size |
| fMRI: 241 timepoints × 94251 voxels |
|
|
| After downsampling: |
| Embeddings: 256 TRs × embedding_size ← now on the same time axis (TRs) as the fMRI |
| fMRI: 241 timepoints × 94251 voxels |
|
|
| Still not an exact match (256 vs 241); trimming will fix that |
|
|
| The instructions say to trim the first 5 and last 10 units from the edges, removing the TRs where the scanner was running but the story hadn't started or had already ended. (Note: the units are presumably TRs rather than seconds, since cutting 5 + 10 = 15 TRs takes 256 down to exactly 241.) |
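A trim helper might look like this (a sketch, assuming the trim counts are in TRs — 5 at the start, 10 at the end — which matches 256 → 241):

```python
import numpy as np

def trim_trs(data, start_trs=5, end_trs=10):
    """Drop TRs from the start and end of a (time, features) array.

    Hypothetical helper: slices along the first (time) axis only,
    so it works for both embeddings and fMRI data.
    """
    return data[start_trs:data.shape[0] - end_trs]
```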
|
|
| The delays (make_delayed) then handle the hemodynamic lag: by creating lagged copies of the embeddings at 1, 2, 3, and 4 second delays, you let the model figure out which lag best predicts brain activity for each voxel. |
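A common way make_delayed-style functions are written (this is a sketch of the general technique, not necessarily the exact implementation used here):

```python
import numpy as np

def make_delayed(stim, delays):
    """Horizontally stack time-shifted copies of stim (time x features).

    Each delay d shifts the stimulus forward in time by d rows
    (zero-padded at the start), so a linear model can weight whichever
    lag best matches each voxel's hemodynamic response.
    """
    shifted = []
    for d in delays:
        s = np.zeros_like(stim)
        if d > 0:
            s[d:] = stim[:-d]
        elif d < 0:
            s[:d] = stim[-d:]
        else:
            s = stim.copy()
        shifted.append(s)
    return np.hstack(shifted)
```

With 4 delays, the feature dimension grows 4×: a (241, embedding_size) matrix becomes (241, 4 × embedding_size).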
|
|
|
|
|
|
| --- preprocessing_utils.py --- |
| trim_fmri — the trimming logic (reusable anywhere) |
| preprocess_embeddings — full pipeline for X (embeddings) |
| load_fmri — full pipeline for Y (fMRI), which is just load + trim, since fMRI doesn't need downsampling or delays |
|
|
| X pipeline: word_vectors → downsample → trim → delay (preprocess_embeddings) |
| Y pipeline: .npy file → load → trim (load_fmri) |
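The two pipelines can be sketched end to end as follows (the function names mirror preprocessing_utils.py, but the bodies here are illustrative stand-ins, with trim counts assumed to be 5 and 10 TRs):

```python
import numpy as np

def trim_fmri(arr, start_trs=5, end_trs=10):
    """Reusable trim along the time (first) axis."""
    return arr[start_trs:arr.shape[0] - end_trs]

def preprocess_embeddings(word_vecs, word_times, tr_times,
                          tr=2.0, delays=(1, 2, 3, 4)):
    """X pipeline sketch: downsample -> trim -> delay."""
    # downsample: average word vectors inside each TR window
    ds = np.zeros((len(tr_times), word_vecs.shape[1]))
    for i, t in enumerate(tr_times):
        m = (word_times >= t) & (word_times < t + tr)
        if m.any():
            ds[i] = word_vecs[m].mean(axis=0)
    trimmed = trim_fmri(ds)
    # delay: stack zero-padded lagged copies side by side
    lagged = [np.vstack([np.zeros((d, trimmed.shape[1])), trimmed[:-d]])
              for d in delays]
    return np.hstack(lagged)

def load_fmri(path):
    """Y pipeline sketch: load + trim only."""
    return trim_fmri(np.load(path))
```

Running the X pipeline on 256 TR timestamps yields a (241, 4 × embedding_size) design matrix, which lines up with the (241, n_voxels) fMRI array from the Y pipeline.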