# VideoFlexTok: Flexible-Length Coarse-to-Fine Video Tokenization
Website | Paper | GitHub | BibTeX
Official implementation and pre-trained models for:
VideoFlexTok: Flexible-Length Coarse-to-Fine Video Tokenization, arXiv 2026
Andrei Atanov*, Jesse Allardice*, Roman Bachmann, Oğuzhan Fatih Kar, R Devon Hjelm, David Griffiths, Peter Fu, Afshin Dehghan, Amir Zamir
VideoFlexTok represents videos with a variable-length sequence of tokens structured in a coarse-to-fine manner, where the first tokens capture abstract information, such as semantics and motion, and later tokens add fine-grained details.
## Installation
For install instructions, please see https://github.com/apple/ml-videoflextok.
## Usage
To load the VideoFlexTok model directly from the HuggingFace Hub, call:

```python
from videoflextok.wrappers import VideoFlexTokFromHub

model = VideoFlexTokFromHub.from_pretrained('EPFL-VILAB/videoflextok_d18_d28').eval()
```
The model can also be loaded by manually downloading the model.safetensors checkpoint from this repository and loading it with our helper functions:

```python
from hydra.utils import instantiate
from videoflextok.utils.checkpoint import load_safetensors

ckpt, config = load_safetensors('/path/to/model.safetensors')
model = instantiate(config).eval()
model.load_state_dict(ckpt)
```
After loading a VideoFlexTok model, videos can be encoded using:

```python
from videoflextok.utils.demo import read_mp4

# Load an example video into a float tensor of shape (3, T, 256, 256), normalized to [-1, 1].
# Frames are sampled at approx. 8 FPS, ensuring T = 1 + K * 16 for some integer K >= 1,
# which is required by the chunking mechanism in VideoFlexTok.
video_tensor = read_mp4("./data/video_examples/red_ball.mp4", fps=8)  # (C, T, H, W)

# Encode into a list of discrete token sequences, each of shape [1, t, 256].
# The encoder is applied in a sliding-window fashion, and the resulting tokens are
# concatenated along the sequence dimension. Here t = 1 + K * 4, since each chunk of
# 16 frames is tokenized into 4 tokens and the first token corresponds to the first frame.
tokens_list = model.tokenize(video_tensor[None])
The list of token sequences can be truncated in a nested fashion:

```python
k_keep = 64  # For example, keep only the first 64 out of 256 tokens for each timestep
tokens_list = [t[..., :k_keep] for t in tokens_list]
```
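To see what this nested truncation does to the shapes, here is a small sketch using NumPy arrays as stand-ins for the `[1, t, 256]` token tensors returned by `tokenize`:

```python
import numpy as np

# Two dummy token sequences mimicking the [1, t, 256] layout from tokenize().
tokens_list = [
    np.zeros((1, 5, 256), dtype=np.int64),
    np.zeros((1, 9, 256), dtype=np.int64),
]

# Because the token ordering is coarse-to-fine, any prefix length l <= 256 is a
# valid (coarser) representation; slicing the last axis keeps the first l tokens
# per timestep while leaving the temporal length t untouched.
k_keep = 64
truncated = [t[..., :k_keep] for t in tokens_list]
print([t.shape for t in truncated])  # -> [(1, 5, 64), (1, 9, 64)]
```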
To decode the tokens with VideoFlexTok's rectified flow decoder, call:

```python
# tokens_list is a list of [1, t, l] discrete token sequences, with l <= 256.
# reconst is a list of RGB video tensors of shape [1, 3, T, 256, 256], normalized to [-1, 1].
reconst = model.detokenize(
    tokens_list,
    timesteps=30,                # Number of denoising steps
    guidance_scale=20.,          # Classifier-free guidance scale (15-30 typically works well)
    perform_norm_guidance=True,  # See https://arxiv.org/abs/2410.02416
)
## Citation
If you find this repository helpful, please consider citing our work:
```bibtex
@article{videoflextok,
  title={{VideoFlexTok}: Flexible-Length Coarse-to-Fine Video Tokenization},
  author={Andrei Atanov and Jesse Allardice and Roman Bachmann and O{\u{g}}uzhan Fatih Kar and Peter Fu and David Griffiths and Devon Hjelm and Afshin Dehghan and Amir Zamir},
  journal={arXiv 2026},
  year={2026},
}
```
## License
The model weights in this repository are released under the Apache License 2.0.