| repo | file_url | file_path | content | language | license | commit_sha | retrieved_at | truncated |
|---|---|---|---|---|---|---|---|---|
labmlai/annotated_deep_learning_paper_implementations | https://github.com/labmlai/annotated_deep_learning_paper_implementations/blob/25e169843e93980faa1d0468ea4df42ca7463382/labml_nn/optimizers/performance_test.py | labml_nn/optimizers/performance_test.py | """
---
title: Test performance of Adam implementations
summary: This experiment compares performance of Adam implementations.
---
# Performance testing Adam
```text
TorchAdam warmup...[DONE] 222.59ms
TorchAdam...[DONE] 1,356.01ms
MyAdam warmup...[DONE] 119.15ms
MyAdam...[DONE] 1,192.89ms
```
An implementation of the paper
[On the Convergence of Adam and Beyond](https://arxiv.org/abs/1904.09237).
We implement this as an extension to our [Adam optimiz... | python | MIT | 25e169843e93980faa1d0468ea4df42ca7463382 | 2026-01-04T14:38:23.238891Z | false |
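The timing comparison above benchmarks two Adam implementations. As a reminder of what each step being timed computes, here is a minimal pure-Python sketch of the Adam update for a single scalar parameter (names like `adam_step` are illustrative, not from the repo):

```python
import math

def adam_step(theta, grad, m, v, t, lr=1e-3, beta1=0.9, beta2=0.999, eps=1e-8):
    """One Adam update for a single scalar parameter (illustrative sketch)."""
    m = beta1 * m + (1 - beta1) * grad       # first-moment (mean) estimate
    v = beta2 * v + (1 - beta2) * grad ** 2  # second-moment (uncentered variance) estimate
    m_hat = m / (1 - beta1 ** t)             # bias correction for the warm-up phase
    v_hat = v / (1 - beta2 ** t)
    theta = theta - lr * m_hat / (math.sqrt(v_hat) + eps)
    return theta, m, v

# A positive gradient should move the parameter down by roughly lr on step 1:
theta, m, v = adam_step(1.0, 0.5, 0.0, 0.0, t=1)
```

AMSGrad (the paper linked in the row) additionally keeps the running maximum of `v_hat`, which is the only change the extension needs to make.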
labmlai/annotated_deep_learning_paper_implementations | https://github.com/labmlai/annotated_deep_learning_paper_implementations/blob/25e169843e93980faa1d0468ea4df42ca7463382/labml_nn/optimizers/mnist_experiment.py | labml_nn/optimizers/mnist_experiment.py | """
---
title: MNIST example to test the optimizers
summary: This is a simple MNIST example with a CNN model to test the optimizers.
---
# MNIST example to test the optimizers
"""
import torch.nn as nn
import torch.utils.data
from labml import experiment, tracker
from labml.configs import option
from labml_nn.helpers... | python | MIT | 25e169843e93980faa1d0468ea4df42ca7463382 | 2026-01-04T14:38:23.238891Z | false |
labmlai/annotated_deep_learning_paper_implementations | https://github.com/labmlai/annotated_deep_learning_paper_implementations/blob/25e169843e93980faa1d0468ea4df42ca7463382/labml_nn/optimizers/noam.py | labml_nn/optimizers/noam.py | """
---
title: Noam optimizer from Attention is All You Need paper
summary: >
This is a tutorial/implementation of Noam optimizer.
Noam optimizer has a warm-up period and then an exponentially decaying learning rate.
---
# Noam Optimizer
This is the [PyTorch](https://pytorch.org) implementation of optimizer intro... | python | MIT | 25e169843e93980faa1d0468ea4df42ca7463382 | 2026-01-04T14:38:23.238891Z | false |
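The warm-up-then-decay behavior described in the summary follows the schedule from "Attention is All You Need": the rate rises linearly for `warmup` steps, then decays as the inverse square root of the step. A minimal sketch (parameter names are illustrative):

```python
def noam_lr(step, d_model=512, warmup=4000, factor=1.0):
    """Noam schedule: linear warm-up, then inverse-square-root decay.

    lr = factor * d_model^(-0.5) * min(step^(-0.5), step * warmup^(-1.5))
    The two branches meet at step == warmup, where the rate peaks.
    """
    return factor * d_model ** -0.5 * min(step ** -0.5, step * warmup ** -1.5)
```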
labmlai/annotated_deep_learning_paper_implementations | https://github.com/labmlai/annotated_deep_learning_paper_implementations/blob/25e169843e93980faa1d0468ea4df42ca7463382/labml_nn/helpers/metrics.py | labml_nn/helpers/metrics.py | import dataclasses
from abc import ABC
import torch
from labml import tracker
class StateModule:
def __init__(self):
pass
# def __call__(self):
# raise NotImplementedError
def create_state(self) -> any:
raise NotImplementedError
def set_state(self, data: any):
raise... | python | MIT | 25e169843e93980faa1d0468ea4df42ca7463382 | 2026-01-04T14:38:23.238891Z | false |
labmlai/annotated_deep_learning_paper_implementations | https://github.com/labmlai/annotated_deep_learning_paper_implementations/blob/25e169843e93980faa1d0468ea4df42ca7463382/labml_nn/helpers/datasets.py | labml_nn/helpers/datasets.py | import random
from pathlib import PurePath, Path
from typing import List, Callable, Dict, Optional
from torchvision import datasets, transforms
import torch
from labml import lab
from labml import monit
from labml.configs import BaseConfigs
from labml.configs import aggregate, option
from labml.utils.download import ... | python | MIT | 25e169843e93980faa1d0468ea4df42ca7463382 | 2026-01-04T14:38:23.238891Z | false |
labmlai/annotated_deep_learning_paper_implementations | https://github.com/labmlai/annotated_deep_learning_paper_implementations/blob/25e169843e93980faa1d0468ea4df42ca7463382/labml_nn/helpers/schedule.py | labml_nn/helpers/schedule.py | from typing import Tuple, List
class Schedule:
def __call__(self, x):
raise NotImplementedError()
class Flat(Schedule):
def __init__(self, value):
self.__value = value
def __call__(self, x):
return self.__value
def __str__(self):
return f"Schedule({self.__value})"
... | python | MIT | 25e169843e93980faa1d0468ea4df42ca7463382 | 2026-01-04T14:38:23.238891Z | false |
labmlai/annotated_deep_learning_paper_implementations | https://github.com/labmlai/annotated_deep_learning_paper_implementations/blob/25e169843e93980faa1d0468ea4df42ca7463382/labml_nn/helpers/optimizer.py | labml_nn/helpers/optimizer.py | from typing import Tuple
import torch
from labml import tracker
from labml.configs import BaseConfigs, option, meta_config
class OptimizerConfigs(BaseConfigs):
r"""
This creates a configurable optimizer.
Arguments:
learning_rate (float): Learning rate of the optimizer. Defaults to ``0.01``.
... | python | MIT | 25e169843e93980faa1d0468ea4df42ca7463382 | 2026-01-04T14:38:23.238891Z | false |
labmlai/annotated_deep_learning_paper_implementations | https://github.com/labmlai/annotated_deep_learning_paper_implementations/blob/25e169843e93980faa1d0468ea4df42ca7463382/labml_nn/helpers/trainer.py | labml_nn/helpers/trainer.py | import signal
import typing
from typing import Dict, List, Callable
from typing import Optional, Tuple, Any, Collection
import torch.optim
import torch.optim
import torch.utils.data
import torch.utils.data
from labml import tracker, logger, monit
from labml.configs import BaseConfigs, meta_config, option
from labml.in... | python | MIT | 25e169843e93980faa1d0468ea4df42ca7463382 | 2026-01-04T14:38:23.238891Z | false |
labmlai/annotated_deep_learning_paper_implementations | https://github.com/labmlai/annotated_deep_learning_paper_implementations/blob/25e169843e93980faa1d0468ea4df42ca7463382/labml_nn/helpers/__init__.py | labml_nn/helpers/__init__.py | python | MIT | 25e169843e93980faa1d0468ea4df42ca7463382 | 2026-01-04T14:38:23.238891Z | false | |
labmlai/annotated_deep_learning_paper_implementations | https://github.com/labmlai/annotated_deep_learning_paper_implementations/blob/25e169843e93980faa1d0468ea4df42ca7463382/labml_nn/helpers/device.py | labml_nn/helpers/device.py | import torch
from labml.configs import BaseConfigs, hyperparams, option
class DeviceInfo:
def __init__(self, *,
use_cuda: bool,
cuda_device: int):
self.use_cuda = use_cuda
self.cuda_device = cuda_device
self.cuda_count = torch.cuda.device_count()
... | python | MIT | 25e169843e93980faa1d0468ea4df42ca7463382 | 2026-01-04T14:38:23.238891Z | false |
labmlai/annotated_deep_learning_paper_implementations | https://github.com/labmlai/annotated_deep_learning_paper_implementations/blob/25e169843e93980faa1d0468ea4df42ca7463382/labml_nn/capsule_networks/mnist.py | labml_nn/capsule_networks/mnist.py | """
---
title: Classify MNIST digits with Capsule Networks
summary: Code for training Capsule Networks on MNIST dataset
---
# Classify MNIST digits with Capsule Networks
This is annotated PyTorch code to classify MNIST digits.
It implements the experiment described in the paper
[Dynamic Routing B... | python | MIT | 25e169843e93980faa1d0468ea4df42ca7463382 | 2026-01-04T14:38:23.238891Z | false |
labmlai/annotated_deep_learning_paper_implementations | https://github.com/labmlai/annotated_deep_learning_paper_implementations/blob/25e169843e93980faa1d0468ea4df42ca7463382/labml_nn/capsule_networks/__init__.py | labml_nn/capsule_networks/__init__.py | """
---
title: Capsule Networks
summary: >
PyTorch implementation and tutorial of Capsule Networks.
Capsule network is a neural network architecture that embeds features
as capsules and routes them with a voting mechanism to next layer of capsules.
---
# Capsule Networks
This is a [PyTorch](https://pytorch.org)... | python | MIT | 25e169843e93980faa1d0468ea4df42ca7463382 | 2026-01-04T14:38:23.238891Z | false |
labmlai/annotated_deep_learning_paper_implementations | https://github.com/labmlai/annotated_deep_learning_paper_implementations/blob/25e169843e93980faa1d0468ea4df42ca7463382/labml_nn/sketch_rnn/__init__.py | labml_nn/sketch_rnn/__init__.py | """
---
title: Sketch RNN
summary: >
This is an annotated PyTorch implementation of the Sketch RNN from paper A Neural Representation of Sketch Drawings.
Sketch RNN is a sequence-to-sequence model that generates sketches of objects such as bicycles, cats, etc.
---
# Sketch RNN
This is an annotated [PyTorch](https... | python | MIT | 25e169843e93980faa1d0468ea4df42ca7463382 | 2026-01-04T14:38:23.238891Z | false |
labmlai/annotated_deep_learning_paper_implementations | https://github.com/labmlai/annotated_deep_learning_paper_implementations/blob/25e169843e93980faa1d0468ea4df42ca7463382/labml_nn/activations/__init__.py | labml_nn/activations/__init__.py | """
---
title: Neural Network Activation Functions
summary: >
A set of PyTorch implementations/tutorials related to neural network activations
---
# Neural Network Activations
* [Fuzzy Tiling Activations](fta/index.html)
* 🚧 [Swish](swish/index.html)
"""
from .swish import Swish
| python | MIT | 25e169843e93980faa1d0468ea4df42ca7463382 | 2026-01-04T14:38:23.238891Z | false |
labmlai/annotated_deep_learning_paper_implementations | https://github.com/labmlai/annotated_deep_learning_paper_implementations/blob/25e169843e93980faa1d0468ea4df42ca7463382/labml_nn/activations/swish.py | labml_nn/activations/swish.py | import torch
from torch import nn
class Swish(nn.Module):
def __init__(self):
super().__init__()
self.sigmoid = nn.Sigmoid()
def forward(self, x: torch.Tensor) -> torch.Tensor:
return x * self.sigmoid(x)
| python | MIT | 25e169843e93980faa1d0468ea4df42ca7463382 | 2026-01-04T14:38:23.238891Z | false |
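The `Swish` module above computes `x * sigmoid(x)` (also known as SiLU). For intuition, the same function on a scalar, without torch (a sketch; the function name is illustrative):

```python
import math

def swish(x: float) -> float:
    """Scalar Swish / SiLU: x * sigmoid(x).

    Near zero it behaves like x/2; for large positive x it approaches
    the identity, and for large negative x it approaches zero.
    """
    return x * (1.0 / (1.0 + math.exp(-x)))
```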
labmlai/annotated_deep_learning_paper_implementations | https://github.com/labmlai/annotated_deep_learning_paper_implementations/blob/25e169843e93980faa1d0468ea4df42ca7463382/labml_nn/activations/fta/experiment.py | labml_nn/activations/fta/experiment.py | """
---
title: Fuzzy Tiling Activation Experiment
summary: >
Training a transformer with FTA in FFN on Tiny Shakespeare.
---
# [Fuzzy Tiling Activation](index.html) Experiment
[](https://colab.research.google.com/github/labmlai/annotated_deep_... | python | MIT | 25e169843e93980faa1d0468ea4df42ca7463382 | 2026-01-04T14:38:23.238891Z | false |
labmlai/annotated_deep_learning_paper_implementations | https://github.com/labmlai/annotated_deep_learning_paper_implementations/blob/25e169843e93980faa1d0468ea4df42ca7463382/labml_nn/activations/fta/__init__.py | labml_nn/activations/fta/__init__.py | """
---
title: Fuzzy Tiling Activations
summary: >
PyTorch implementation and tutorial of Fuzzy Tiling Activations from the
paper Fuzzy Tiling Activations: A Simple Approach to Learning Sparse Representations Online.
---
# Fuzzy Tiling Activations (FTA)
[ -> nn.ModuleList:
"""
## Clone Module
Make a `nn.ModuleList` with clone... | python | MIT | 25e169843e93980faa1d0468ea4df42ca7463382 | 2026-01-04T14:38:23.238891Z | false |
labmlai/annotated_deep_learning_paper_implementations | https://github.com/labmlai/annotated_deep_learning_paper_implementations/blob/25e169843e93980faa1d0468ea4df42ca7463382/labml_nn/utils/tokenizer.py | labml_nn/utils/tokenizer.py | from typing import Callable
from labml.configs import BaseConfigs, option
class TokenizerConfigs(BaseConfigs):
"""
<a id="TokenizerConfigs"></a>
## Tokenizer Configurations
"""
tokenizer: Callable = 'character'
def __init__(self):
super().__init__(_primary='tokenizer')
@option(To... | python | MIT | 25e169843e93980faa1d0468ea4df42ca7463382 | 2026-01-04T14:38:23.238891Z | false |
labmlai/annotated_deep_learning_paper_implementations | https://github.com/labmlai/annotated_deep_learning_paper_implementations/blob/25e169843e93980faa1d0468ea4df42ca7463382/labml_nn/diffusion/__init__.py | labml_nn/diffusion/__init__.py | """
---
title: Diffusion models
summary: >
A set of PyTorch implementations/tutorials of diffusion models.
---
# Diffusion models
* [Denoising Diffusion Probabilistic Models (DDPM)](ddpm/index.html)
* [Stable Diffusion](stable_diffusion/index.html)
* [Latent Diffusion Model](stable_diffusion/latent_diffusion.html)
*... | python | MIT | 25e169843e93980faa1d0468ea4df42ca7463382 | 2026-01-04T14:38:23.238891Z | false |
labmlai/annotated_deep_learning_paper_implementations | https://github.com/labmlai/annotated_deep_learning_paper_implementations/blob/25e169843e93980faa1d0468ea4df42ca7463382/labml_nn/diffusion/ddpm/unet.py | labml_nn/diffusion/ddpm/unet.py | """
---
title: U-Net model for Denoising Diffusion Probabilistic Models (DDPM)
summary: >
UNet model for Denoising Diffusion Probabilistic Models (DDPM)
---
# U-Net model for [Denoising Diffusion Probabilistic Models (DDPM)](index.html)
This is a [U-Net](../../unet/index.html) based model to predict noise
$\textcol... | python | MIT | 25e169843e93980faa1d0468ea4df42ca7463382 | 2026-01-04T14:38:23.238891Z | false |
labmlai/annotated_deep_learning_paper_implementations | https://github.com/labmlai/annotated_deep_learning_paper_implementations/blob/25e169843e93980faa1d0468ea4df42ca7463382/labml_nn/diffusion/ddpm/experiment.py | labml_nn/diffusion/ddpm/experiment.py | """
---
title: Denoising Diffusion Probabilistic Models (DDPM) training
summary: >
Training code for
Denoising Diffusion Probabilistic Model.
---
# [Denoising Diffusion Probabilistic Models (DDPM)](index.html) training
[](https://colab.rese... | python | MIT | 25e169843e93980faa1d0468ea4df42ca7463382 | 2026-01-04T14:38:23.238891Z | false |
labmlai/annotated_deep_learning_paper_implementations | https://github.com/labmlai/annotated_deep_learning_paper_implementations/blob/25e169843e93980faa1d0468ea4df42ca7463382/labml_nn/diffusion/ddpm/utils.py | labml_nn/diffusion/ddpm/utils.py | """
---
title: Utility functions for DDPM experiment
summary: >
Utility functions for DDPM experiment
---
# Utility functions for [DDPM](index.html) experiment
"""
import torch.utils.data
def gather(consts: torch.Tensor, t: torch.Tensor):
"""Gather consts for $t$ and reshape to feature map shape"""
c = con... | python | MIT | 25e169843e93980faa1d0468ea4df42ca7463382 | 2026-01-04T14:38:23.238891Z | false |
labmlai/annotated_deep_learning_paper_implementations | https://github.com/labmlai/annotated_deep_learning_paper_implementations/blob/25e169843e93980faa1d0468ea4df42ca7463382/labml_nn/diffusion/ddpm/__init__.py | labml_nn/diffusion/ddpm/__init__.py | """
---
title: Denoising Diffusion Probabilistic Models (DDPM)
summary: >
PyTorch implementation and tutorial of the paper
Denoising Diffusion Probabilistic Models (DDPM).
---
# Denoising Diffusion Probabilistic Models (DDPM)
[](https://col... | python | MIT | 25e169843e93980faa1d0468ea4df42ca7463382 | 2026-01-04T14:38:23.238891Z | false |
labmlai/annotated_deep_learning_paper_implementations | https://github.com/labmlai/annotated_deep_learning_paper_implementations/blob/25e169843e93980faa1d0468ea4df42ca7463382/labml_nn/diffusion/ddpm/evaluate.py | labml_nn/diffusion/ddpm/evaluate.py | """
---
title: Denoising Diffusion Probabilistic Models (DDPM) evaluation/sampling
summary: >
Code to generate samples from a trained
Denoising Diffusion Probabilistic Model.
---
# [Denoising Diffusion Probabilistic Models (DDPM)](index.html) evaluation/sampling
This is the code to generate images and create inte... | python | MIT | 25e169843e93980faa1d0468ea4df42ca7463382 | 2026-01-04T14:38:23.238891Z | false |
labmlai/annotated_deep_learning_paper_implementations | https://github.com/labmlai/annotated_deep_learning_paper_implementations/blob/25e169843e93980faa1d0468ea4df42ca7463382/labml_nn/diffusion/stable_diffusion/latent_diffusion.py | labml_nn/diffusion/stable_diffusion/latent_diffusion.py | """
---
title: Latent Diffusion Models
summary: >
Annotated PyTorch implementation/tutorial of latent diffusion models from paper
High-Resolution Image Synthesis with Latent Diffusion Models
---
# Latent Diffusion Models
Latent diffusion models use an auto-encoder to map between image space and
latent space. The di... | python | MIT | 25e169843e93980faa1d0468ea4df42ca7463382 | 2026-01-04T14:38:23.238891Z | false |
labmlai/annotated_deep_learning_paper_implementations | https://github.com/labmlai/annotated_deep_learning_paper_implementations/blob/25e169843e93980faa1d0468ea4df42ca7463382/labml_nn/diffusion/stable_diffusion/util.py | labml_nn/diffusion/stable_diffusion/util.py | """
---
title: Utility functions for stable diffusion
summary: >
Utility functions for stable diffusion
---
# Utility functions for [stable diffusion](index.html)
"""
import os
import random
from pathlib import Path
import PIL
import numpy as np
import torch
from PIL import Image
from labml import monit
from labml... | python | MIT | 25e169843e93980faa1d0468ea4df42ca7463382 | 2026-01-04T14:38:23.238891Z | false |
labmlai/annotated_deep_learning_paper_implementations | https://github.com/labmlai/annotated_deep_learning_paper_implementations/blob/25e169843e93980faa1d0468ea4df42ca7463382/labml_nn/diffusion/stable_diffusion/__init__.py | labml_nn/diffusion/stable_diffusion/__init__.py | """
---
title: Stable Diffusion
summary: >
Annotated PyTorch implementation/tutorial of stable diffusion.
---
# Stable Diffusion
This is based on the official stable diffusion repository
[CompVis/stable-diffusion](https://github.com/CompVis/stable-diffusion).
We have kept the model structure the same so that open sourced w...
labmlai/annotated_deep_learning_paper_implementations | https://github.com/labmlai/annotated_deep_learning_paper_implementations/blob/25e169843e93980faa1d0468ea4df42ca7463382/labml_nn/diffusion/stable_diffusion/scripts/in_paint.py | labml_nn/diffusion/stable_diffusion/scripts/in_paint.py | """
---
title: In-paint images using stable diffusion with a prompt
summary: >
In-paint images using stable diffusion with a prompt
---
# In-paint images using [stable diffusion](../index.html) with a prompt
"""
import argparse
from pathlib import Path
from typing import Optional
import torch
from labml import lab... | python | MIT | 25e169843e93980faa1d0468ea4df42ca7463382 | 2026-01-04T14:38:23.238891Z | false |
labmlai/annotated_deep_learning_paper_implementations | https://github.com/labmlai/annotated_deep_learning_paper_implementations/blob/25e169843e93980faa1d0468ea4df42ca7463382/labml_nn/diffusion/stable_diffusion/scripts/text_to_image.py | labml_nn/diffusion/stable_diffusion/scripts/text_to_image.py | """
---
title: Generate images using stable diffusion with a prompt
summary: >
Generate images using stable diffusion with a prompt
---
# Generate images using [stable diffusion](../index.html) with a prompt
"""
import argparse
import os
from pathlib import Path
import torch
from labml import lab, monit
from labml... | python | MIT | 25e169843e93980faa1d0468ea4df42ca7463382 | 2026-01-04T14:38:23.238891Z | false |
labmlai/annotated_deep_learning_paper_implementations | https://github.com/labmlai/annotated_deep_learning_paper_implementations/blob/25e169843e93980faa1d0468ea4df42ca7463382/labml_nn/diffusion/stable_diffusion/scripts/image_to_image.py | labml_nn/diffusion/stable_diffusion/scripts/image_to_image.py | """
---
title: Generate images using stable diffusion with a prompt from a given image
summary: >
Generate images using stable diffusion with a prompt from a given image
---
# Generate images using [stable diffusion](../index.html) with a prompt from a given image
"""
import argparse
from pathlib import Path
import... | python | MIT | 25e169843e93980faa1d0468ea4df42ca7463382 | 2026-01-04T14:38:23.238891Z | false |
labmlai/annotated_deep_learning_paper_implementations | https://github.com/labmlai/annotated_deep_learning_paper_implementations/blob/25e169843e93980faa1d0468ea4df42ca7463382/labml_nn/diffusion/stable_diffusion/scripts/__init__.py | labml_nn/diffusion/stable_diffusion/scripts/__init__.py | """
---
title: Scripts to show example usages of stable diffusion
summary: >
Annotated PyTorch implementation/tutorial of example usages of stable diffusion
---
# Scripts to show example usages of [stable diffusion](../index.html)
* [Prompt to image diffusion](text_to_image.html)
* [Image to image diffusion](image_to_imag... | python | MIT | 25e169843e93980faa1d0468ea4df42ca7463382 | 2026-01-04T14:38:23.238891Z | false |
labmlai/annotated_deep_learning_paper_implementations | https://github.com/labmlai/annotated_deep_learning_paper_implementations/blob/25e169843e93980faa1d0468ea4df42ca7463382/labml_nn/diffusion/stable_diffusion/model/unet.py | labml_nn/diffusion/stable_diffusion/model/unet.py | """
---
title: U-Net for Stable Diffusion
summary: >
Annotated PyTorch implementation/tutorial of the U-Net in stable diffusion.
---
# U-Net for [Stable Diffusion](../index.html)
This implements the U-Net that
gives $\epsilon_\text{cond}(x_t, c)$
We have kept to the model definition and naming unchanged from
[Com... | python | MIT | 25e169843e93980faa1d0468ea4df42ca7463382 | 2026-01-04T14:38:23.238891Z | false |
labmlai/annotated_deep_learning_paper_implementations | https://github.com/labmlai/annotated_deep_learning_paper_implementations/blob/25e169843e93980faa1d0468ea4df42ca7463382/labml_nn/diffusion/stable_diffusion/model/autoencoder.py | labml_nn/diffusion/stable_diffusion/model/autoencoder.py | """
---
title: Autoencoder for Stable Diffusion
summary: >
Annotated PyTorch implementation/tutorial of the autoencoder
for stable diffusion.
---
# Autoencoder for [Stable Diffusion](../index.html)
This implements the auto-encoder model used to map between image space and latent space.
We have kept to the model de... | python | MIT | 25e169843e93980faa1d0468ea4df42ca7463382 | 2026-01-04T14:38:23.238891Z | false |
labmlai/annotated_deep_learning_paper_implementations | https://github.com/labmlai/annotated_deep_learning_paper_implementations/blob/25e169843e93980faa1d0468ea4df42ca7463382/labml_nn/diffusion/stable_diffusion/model/unet_attention.py | labml_nn/diffusion/stable_diffusion/model/unet_attention.py | """
---
title: Transformer for Stable Diffusion U-Net
summary: >
Annotated PyTorch implementation/tutorial of the transformer
for U-Net in stable diffusion.
---
# Transformer for Stable Diffusion [U-Net](unet.html)
This implements the transformer module used in [U-Net](unet.html) that
gives $\epsilon_\text{cond}(x... | python | MIT | 25e169843e93980faa1d0468ea4df42ca7463382 | 2026-01-04T14:38:23.238891Z | false |
labmlai/annotated_deep_learning_paper_implementations | https://github.com/labmlai/annotated_deep_learning_paper_implementations/blob/25e169843e93980faa1d0468ea4df42ca7463382/labml_nn/diffusion/stable_diffusion/model/__init__.py | labml_nn/diffusion/stable_diffusion/model/__init__.py | """
---
title: Modules used in stable diffusion
summary: >
Models and components for stable diffusion.
---
# [Stable Diffusion](../index.html) Models
* [AutoEncoder](autoencoder.html)
* [U-Net](unet.html) with [attention](unet_attention.html)
* [CLIP embedder](clip_embedder.html).
"""
| python | MIT | 25e169843e93980faa1d0468ea4df42ca7463382 | 2026-01-04T14:38:23.238891Z | false |
labmlai/annotated_deep_learning_paper_implementations | https://github.com/labmlai/annotated_deep_learning_paper_implementations/blob/25e169843e93980faa1d0468ea4df42ca7463382/labml_nn/diffusion/stable_diffusion/model/clip_embedder.py | labml_nn/diffusion/stable_diffusion/model/clip_embedder.py | """
---
title: CLIP Text Embedder
summary: >
CLIP embedder to get prompt embeddings for stable diffusion
---
# CLIP Text Embedder
This is used to get prompt embeddings for [stable diffusion](../index.html).
It uses HuggingFace Transformers CLIP model.
"""
from typing import List
from torch import nn
from transform... | python | MIT | 25e169843e93980faa1d0468ea4df42ca7463382 | 2026-01-04T14:38:23.238891Z | false |
labmlai/annotated_deep_learning_paper_implementations | https://github.com/labmlai/annotated_deep_learning_paper_implementations/blob/25e169843e93980faa1d0468ea4df42ca7463382/labml_nn/diffusion/stable_diffusion/sampler/ddim.py | labml_nn/diffusion/stable_diffusion/sampler/ddim.py | """
---
title: Denoising Diffusion Implicit Models (DDIM) Sampling
summary: >
Annotated PyTorch implementation/tutorial of
Denoising Diffusion Implicit Models (DDIM) Sampling
for stable diffusion model.
---
# Denoising Diffusion Implicit Models (DDIM) Sampling
This implements DDIM sampling from the paper
[Denoisin... | python | MIT | 25e169843e93980faa1d0468ea4df42ca7463382 | 2026-01-04T14:38:23.238891Z | false |
labmlai/annotated_deep_learning_paper_implementations | https://github.com/labmlai/annotated_deep_learning_paper_implementations/blob/25e169843e93980faa1d0468ea4df42ca7463382/labml_nn/diffusion/stable_diffusion/sampler/ddpm.py | labml_nn/diffusion/stable_diffusion/sampler/ddpm.py | """
---
title: Denoising Diffusion Probabilistic Models (DDPM) Sampling
summary: >
Annotated PyTorch implementation/tutorial of
Denoising Diffusion Probabilistic Models (DDPM) Sampling
for stable diffusion model.
---
# Denoising Diffusion Probabilistic Models (DDPM) Sampling
For a simpler DDPM implementation refer... | python | MIT | 25e169843e93980faa1d0468ea4df42ca7463382 | 2026-01-04T14:38:23.238891Z | false |
labmlai/annotated_deep_learning_paper_implementations | https://github.com/labmlai/annotated_deep_learning_paper_implementations/blob/25e169843e93980faa1d0468ea4df42ca7463382/labml_nn/diffusion/stable_diffusion/sampler/__init__.py | labml_nn/diffusion/stable_diffusion/sampler/__init__.py | """
---
title: Sampling algorithms for stable diffusion
summary: >
Annotated PyTorch implementation/tutorial of
sampling algorithms
for stable diffusion model.
---
# Sampling algorithms for [stable diffusion](../index.html)
We have implemented the following [sampling algorithms](sampler/index.html):
* [Denoising ... | python | MIT | 25e169843e93980faa1d0468ea4df42ca7463382 | 2026-01-04T14:38:23.238891Z | false |
labmlai/annotated_deep_learning_paper_implementations | https://github.com/labmlai/annotated_deep_learning_paper_implementations/blob/25e169843e93980faa1d0468ea4df42ca7463382/labml_nn/resnet/experiment.py | labml_nn/resnet/experiment.py | """
---
title: Train a ResNet on CIFAR 10
summary: >
Train a ResNet on CIFAR 10
---
# Train a [ResNet](index.html) on CIFAR 10
"""
from typing import List, Optional
from torch import nn
from labml import experiment
from labml.configs import option
from labml_nn.experiments.cifar10 import CIFAR10Configs
from labml_... | python | MIT | 25e169843e93980faa1d0468ea4df42ca7463382 | 2026-01-04T14:38:23.238891Z | false |
labmlai/annotated_deep_learning_paper_implementations | https://github.com/labmlai/annotated_deep_learning_paper_implementations/blob/25e169843e93980faa1d0468ea4df42ca7463382/labml_nn/resnet/__init__.py | labml_nn/resnet/__init__.py | """
---
title: Deep Residual Learning for Image Recognition (ResNet)
summary: >
A PyTorch implementation/tutorial of Deep Residual Learning for Image Recognition (ResNet).
---
# Deep Residual Learning for Image Recognition (ResNet)
This is a [PyTorch](https://pytorch.org) implementation of the paper
[Deep Residual L... | python | MIT | 25e169843e93980faa1d0468ea4df42ca7463382 | 2026-01-04T14:38:23.238891Z | false |
labmlai/annotated_deep_learning_paper_implementations | https://github.com/labmlai/annotated_deep_learning_paper_implementations/blob/25e169843e93980faa1d0468ea4df42ca7463382/labml_nn/lstm/__init__.py | labml_nn/lstm/__init__.py | """
---
title: Long Short-Term Memory (LSTM)
summary: A simple PyTorch implementation/tutorial of Long Short-Term Memory (LSTM) modules.
---
# Long Short-Term Memory (LSTM)
This is a [PyTorch](https://pytorch.org) implementation of Long Short-Term Memory.
"""
from typing import Optional, Tuple
import torch
from tor... | python | MIT | 25e169843e93980faa1d0468ea4df42ca7463382 | 2026-01-04T14:38:23.238891Z | false |
labmlai/annotated_deep_learning_paper_implementations | https://github.com/labmlai/annotated_deep_learning_paper_implementations/blob/25e169843e93980faa1d0468ea4df42ca7463382/labml_nn/scaling/__init__.py | labml_nn/scaling/__init__.py | """
---
title: Large scale model training
summary: >
Large scale model training/inference implementations.
---
# Large scale model training
* [Zero-DP optimizer](zero3/index.html)
"""
| python | MIT | 25e169843e93980faa1d0468ea4df42ca7463382 | 2026-01-04T14:38:23.238891Z | false |
labmlai/annotated_deep_learning_paper_implementations | https://github.com/labmlai/annotated_deep_learning_paper_implementations/blob/25e169843e93980faa1d0468ea4df42ca7463382/labml_nn/scaling/zero3/finetune_neox.py | labml_nn/scaling/zero3/finetune_neox.py | """
---
title: Finetune GPT-NeoX with Zero3 memory optimizer
summary: >
This script trains the bias parameters of the GPT-NeoX on multiple devices with Zero-DP Memory Optimization.
---
# Finetune [GPT-NeoX](../../neox/index.html) with [Zero3 memory optimizer](index.html)
This script trains the bias parameters of... | python | MIT | 25e169843e93980faa1d0468ea4df42ca7463382 | 2026-01-04T14:38:23.238891Z | false |
labmlai/annotated_deep_learning_paper_implementations | https://github.com/labmlai/annotated_deep_learning_paper_implementations/blob/25e169843e93980faa1d0468ea4df42ca7463382/labml_nn/scaling/zero3/__init__.py | labml_nn/scaling/zero3/__init__.py | """
---
title: Zero-DP Memory Optimization
summary: >
This is an implementation of Zero-DP Memory Optimization written in PyTorch.
---
# Zero-DP Memory Optimization
This is an implementation of Zero-DP introduced in the paper
[ZeRO: Memory Optimization Towards Training A Trillion Parameter Models](https://arxiv.o... | python | MIT | 25e169843e93980faa1d0468ea4df42ca7463382 | 2026-01-04T14:38:23.238891Z | false |
labmlai/annotated_deep_learning_paper_implementations | https://github.com/labmlai/annotated_deep_learning_paper_implementations/blob/25e169843e93980faa1d0468ea4df42ca7463382/labml_nn/transformers/models.py | labml_nn/transformers/models.py | """
---
title: Transformer Encoder and Decoder Models
summary: >
These are PyTorch implementations of Transformer based encoder and decoder models,
as well as other related modules.
---
# Transformer Encoder and Decoder Models
[](https://co... | python | MIT | 25e169843e93980faa1d0468ea4df42ca7463382 | 2026-01-04T14:38:23.238891Z | false |
labmlai/annotated_deep_learning_paper_implementations | https://github.com/labmlai/annotated_deep_learning_paper_implementations/blob/25e169843e93980faa1d0468ea4df42ca7463382/labml_nn/transformers/label_smoothing_loss.py | labml_nn/transformers/label_smoothing_loss.py | """
---
title: Label Smoothing Loss
summary: >
This is an implementation of label smoothing loss, that can be used as
an alternative to cross entropy loss for improved accuracy.
---
# Label Smoothing Loss
"""
import matplotlib.pyplot as plt
import numpy as np
import torch
from torch import nn
class LabelSmoothi... | python | MIT | 25e169843e93980faa1d0468ea4df42ca7463382 | 2026-01-04T14:38:23.238891Z | false |
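Label smoothing replaces the one-hot target with a softened distribution: the true class keeps `1 - epsilon` of the mass and `epsilon` is spread over all classes. A sketch of building that target distribution in pure Python (this follows the standard Szegedy et al. formulation; the labml class may distribute the mass slightly differently):

```python
def smooth_targets(target_idx, n_classes, epsilon=0.1):
    """Smoothed target: (1 - epsilon) one-hot plus epsilon/n_classes uniform mass."""
    uniform = epsilon / n_classes
    dist = [uniform] * n_classes      # every class gets a small floor
    dist[target_idx] += 1.0 - epsilon  # the true class keeps most of the mass
    return dist
```

The loss is then the KL divergence (or cross entropy) between the model's log-probabilities and this smoothed distribution instead of the hard one-hot target.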
labmlai/annotated_deep_learning_paper_implementations | https://github.com/labmlai/annotated_deep_learning_paper_implementations/blob/25e169843e93980faa1d0468ea4df42ca7463382/labml_nn/transformers/configs.py | labml_nn/transformers/configs.py | """
---
title: Configurable Transformer Components
summary: These are configurable components that can be re-used quite easily.
---
# Configurable Transformer Components
"""
import copy
import torch.nn as nn
from labml.configs import BaseConfigs, option, calculate, aggregate
from .feed_forward import FeedForward
fro... | python | MIT | 25e169843e93980faa1d0468ea4df42ca7463382 | 2026-01-04T14:38:23.238891Z | false |
labmlai/annotated_deep_learning_paper_implementations | https://github.com/labmlai/annotated_deep_learning_paper_implementations/blob/25e169843e93980faa1d0468ea4df42ca7463382/labml_nn/transformers/positional_encoding.py | labml_nn/transformers/positional_encoding.py | """
---
title: Fixed Positional Encodings
summary: >
Implementation with explanation of fixed positional encodings as
described in paper Attention is All You Need.
---
# Fixed Positional Encodings
The positional encoding encodes the position along the sequence into
a vector of size `d_model`.
\begin{align}
PE_{... | python | MIT | 25e169843e93980faa1d0468ea4df42ca7463382 | 2026-01-04T14:38:23.238891Z | false |
labmlai/annotated_deep_learning_paper_implementations | https://github.com/labmlai/annotated_deep_learning_paper_implementations/blob/25e169843e93980faa1d0468ea4df42ca7463382/labml_nn/transformers/relative_mha.py | labml_nn/transformers/relative_mha.py | """
---
title: Relative Multi-Headed Attention
summary: Relative Multi-Headed Attention from paper Transformer-XL.
redirect: https://nn.labml.ai/transformers/xl/relative_mha.html
---
"""
| python | MIT | 25e169843e93980faa1d0468ea4df42ca7463382 | 2026-01-04T14:38:23.238891Z | false |
labmlai/annotated_deep_learning_paper_implementations | https://github.com/labmlai/annotated_deep_learning_paper_implementations/blob/25e169843e93980faa1d0468ea4df42ca7463382/labml_nn/transformers/utils.py | labml_nn/transformers/utils.py | """
---
title: Utilities for Transformer
summary: A bunch of utility functions and classes for transformers.
---
# Utilities for Transformer
"""
import torch
def subsequent_mask(seq_len):
"""
## Subsequent mask to mask out data from future (subsequent) time steps
"""
mask = torch.tril(torch.ones(seq... | python | MIT | 25e169843e93980faa1d0468ea4df42ca7463382 | 2026-01-04T14:38:23.238891Z | false |
labmlai/annotated_deep_learning_paper_implementations | https://github.com/labmlai/annotated_deep_learning_paper_implementations/blob/25e169843e93980faa1d0468ea4df42ca7463382/labml_nn/transformers/__init__.py | labml_nn/transformers/__init__.py | """
---
title: Transformers
summary: >
This is a collection of PyTorch implementations/tutorials of
transformers and related techniques.
---
# Transformers
This module contains [PyTorch](https://pytorch.org/)
implementations and explanations of the original transformer
from paper [Attention Is All You Need](https://a... | python | MIT | 25e169843e93980faa1d0468ea4df42ca7463382 | 2026-01-04T14:38:23.238891Z | false |
labmlai/annotated_deep_learning_paper_implementations | https://github.com/labmlai/annotated_deep_learning_paper_implementations/blob/25e169843e93980faa1d0468ea4df42ca7463382/labml_nn/transformers/mha.py | labml_nn/transformers/mha.py | """
---
title: Multi-Headed Attention (MHA)
summary: >
This implements the Multi-Headed Attention used in transformers
using PyTorch with explanations.
---
# Multi-Headed Attention (MHA)
[](https://colab.research.google.com/github/labmlai/a... | python | MIT | 25e169843e93980faa1d0468ea4df42ca7463382 | 2026-01-04T14:38:23.238891Z | false |
labmlai/annotated_deep_learning_paper_implementations | https://github.com/labmlai/annotated_deep_learning_paper_implementations/blob/25e169843e93980faa1d0468ea4df42ca7463382/labml_nn/transformers/feed_forward.py | labml_nn/transformers/feed_forward.py | """
---
title: Position-wise Feed-Forward Network (FFN)
summary: Documented reusable implementation of the position-wise feedforward network.
---
# Position-wise Feed-Forward Network (FFN)
This is a [PyTorch](https://pytorch.org) implementation
of the position-wise feedforward network used in transformers.
FFN consists ... | python | MIT | 25e169843e93980faa1d0468ea4df42ca7463382 | 2026-01-04T14:38:23.238891Z | false |
labmlai/annotated_deep_learning_paper_implementations | https://github.com/labmlai/annotated_deep_learning_paper_implementations/blob/25e169843e93980faa1d0468ea4df42ca7463382/labml_nn/transformers/primer_ez/experiment.py | labml_nn/transformers/primer_ez/experiment.py | """
---
title: Primer EZ experiment
summary: This experiment trains Primer EZ on Tiny Shakespeare dataset.
---
# [Primer EZ](index.html) Experiment
This is an annotated PyTorch experiment to train a [Primer EZ transformer](index.html).
This is based on our [vanilla transformer experiment](../basic/experiment.html).
... | python | MIT | 25e169843e93980faa1d0468ea4df42ca7463382 | 2026-01-04T14:38:23.238891Z | false |
labmlai/annotated_deep_learning_paper_implementations | https://github.com/labmlai/annotated_deep_learning_paper_implementations/blob/25e169843e93980faa1d0468ea4df42ca7463382/labml_nn/transformers/primer_ez/variations.py | labml_nn/transformers/primer_ez/variations.py | """
---
title: Primer EZ variations
summary: We tried some variations to Primer EZ.
---
# [Primer EZ](index.html) Variations
We tried some variations to see which changes to Primer EZ have the most benefit.
"""
import torch
from torch import nn
from labml_nn.transformers import MultiHeadAttention
class SpatialDepthWi... | python | MIT | 25e169843e93980faa1d0468ea4df42ca7463382 | 2026-01-04T14:38:23.238891Z | false |
labmlai/annotated_deep_learning_paper_implementations | https://github.com/labmlai/annotated_deep_learning_paper_implementations/blob/25e169843e93980faa1d0468ea4df42ca7463382/labml_nn/transformers/primer_ez/efficient.py | labml_nn/transformers/primer_ez/efficient.py | import math
import torch
from torch import nn
from labml_nn.transformers import MultiHeadAttention
class SpatialDepthWiseConvolution(nn.Module):
"""
## Spatial Depth Wise Convolution
This is actually slower
"""
def __init__(self, d_k: int, kernel_size: int = 3):
"""
* `d_k` is ... | python | MIT | 25e169843e93980faa1d0468ea4df42ca7463382 | 2026-01-04T14:38:23.238891Z | false |
labmlai/annotated_deep_learning_paper_implementations | https://github.com/labmlai/annotated_deep_learning_paper_implementations/blob/25e169843e93980faa1d0468ea4df42ca7463382/labml_nn/transformers/primer_ez/__init__.py | labml_nn/transformers/primer_ez/__init__.py | """
---
title: "Primer: Searching for Efficient Transformers for Language Modeling"
summary: >
This is an annotated implementation/tutorial of
Primer: Searching for Efficient Transformers for Language Modeling in PyTorch.
---
# Primer: Searching for Efficient Transformers for Language Modeling
This is ... | python | MIT | 25e169843e93980faa1d0468ea4df42ca7463382 | 2026-01-04T14:38:23.238891Z | false |
labmlai/annotated_deep_learning_paper_implementations | https://github.com/labmlai/annotated_deep_learning_paper_implementations/blob/25e169843e93980faa1d0468ea4df42ca7463382/labml_nn/transformers/vit/experiment.py | labml_nn/transformers/vit/experiment.py | """
---
title: Train a Vision Transformer (ViT) on CIFAR 10
summary: >
Train a Vision Transformer (ViT) on CIFAR 10
---
# Train a [Vision Transformer (ViT)](index.html) on CIFAR 10
"""
from labml import experiment
from labml.configs import option
from labml_nn.experiments.cifar10 import CIFAR10Configs
from labml_n... | python | MIT | 25e169843e93980faa1d0468ea4df42ca7463382 | 2026-01-04T14:38:23.238891Z | false |
labmlai/annotated_deep_learning_paper_implementations | https://github.com/labmlai/annotated_deep_learning_paper_implementations/blob/25e169843e93980faa1d0468ea4df42ca7463382/labml_nn/transformers/vit/__init__.py | labml_nn/transformers/vit/__init__.py | """
---
title: Vision Transformer (ViT)
summary: >
A PyTorch implementation/tutorial of the paper
"An Image Is Worth 16x16 Words: Transformers For Image Recognition At Scale"
---
# Vision Transformer (ViT)
This is a [PyTorch](https://pytorch.org) implementation of the paper
[An Image Is Worth 16x16 Words: Transfor... | python | MIT | 25e169843e93980faa1d0468ea4df42ca7463382 | 2026-01-04T14:38:23.238891Z | false |
labmlai/annotated_deep_learning_paper_implementations | https://github.com/labmlai/annotated_deep_learning_paper_implementations/blob/25e169843e93980faa1d0468ea4df42ca7463382/labml_nn/transformers/switch/experiment.py | labml_nn/transformers/switch/experiment.py | """
---
title: Switch Transformer Experiment
summary: This experiment trains a small switch transformer on tiny Shakespeare dataset.
---
# Switch Transformer Experiment
This is an annotated PyTorch experiment to train a switch transformer.
[](... | python | MIT | 25e169843e93980faa1d0468ea4df42ca7463382 | 2026-01-04T14:38:23.238891Z | false |
labmlai/annotated_deep_learning_paper_implementations | https://github.com/labmlai/annotated_deep_learning_paper_implementations/blob/25e169843e93980faa1d0468ea4df42ca7463382/labml_nn/transformers/switch/__init__.py | labml_nn/transformers/switch/__init__.py | """
---
title: Switch Transformer
summary: >
This is an annotated implementation/tutorial of a miniature version of Switch Transformer in PyTorch.
---
# Switch Transformer
This is a miniature [PyTorch](https://pytorch.org) implementation of the paper
[Switch Transformers: Scaling to Trillion Parameter Models with Simp... | python | MIT | 25e169843e93980faa1d0468ea4df42ca7463382 | 2026-01-04T14:38:23.238891Z | false |
labmlai/annotated_deep_learning_paper_implementations | https://github.com/labmlai/annotated_deep_learning_paper_implementations/blob/25e169843e93980faa1d0468ea4df42ca7463382/labml_nn/transformers/aft/experiment.py | labml_nn/transformers/aft/experiment.py | """
---
title: Attention Free Transformer (AFT) Experiment
summary: This experiment trains an Attention Free Transformer (AFT) based model on Tiny Shakespeare dataset.
---
# [Attention Free Transformer (AFT)](index.html) Experiment
This is an annotated PyTorch experiment to train an [AFT model](index.html).
This is b... | python | MIT | 25e169843e93980faa1d0468ea4df42ca7463382 | 2026-01-04T14:38:23.238891Z | false |
labmlai/annotated_deep_learning_paper_implementations | https://github.com/labmlai/annotated_deep_learning_paper_implementations/blob/25e169843e93980faa1d0468ea4df42ca7463382/labml_nn/transformers/aft/__init__.py | labml_nn/transformers/aft/__init__.py | """
---
title: An Attention Free Transformer
summary: >
This is an annotated implementation/tutorial of the AFT (Attention Free Transformer) in PyTorch.
---
# An Attention Free Transformer
This is a [PyTorch](https://pytorch.org) implementation of the paper
[An Attention Free Transformer](https://arxiv.org/abs/2105... | python | MIT | 25e169843e93980faa1d0468ea4df42ca7463382 | 2026-01-04T14:38:23.238891Z | false |
labmlai/annotated_deep_learning_paper_implementations | https://github.com/labmlai/annotated_deep_learning_paper_implementations/blob/25e169843e93980faa1d0468ea4df42ca7463382/labml_nn/transformers/feedback/experiment.py | labml_nn/transformers/feedback/experiment.py | """
---
title: Train Feedback Transformer
summary: This is training code with notes for a feedback transformer.
---
# Train Feedback Transformer
This trains a [feedback transformer](index.html) model for auto-regression.
You can pick the original feedback transformer or the new version
where the keys and values are p... | python | MIT | 25e169843e93980faa1d0468ea4df42ca7463382 | 2026-01-04T14:38:23.238891Z | false |
labmlai/annotated_deep_learning_paper_implementations | https://github.com/labmlai/annotated_deep_learning_paper_implementations/blob/25e169843e93980faa1d0468ea4df42ca7463382/labml_nn/transformers/feedback/__init__.py | labml_nn/transformers/feedback/__init__.py | """
---
title: Feedback Transformer
summary: >
This is an annotated implementation/tutorial of the Feedback Transformer in PyTorch.
---
# Feedback Transformer
This is a [PyTorch](https://pytorch.org) implementation of the paper
[Accessing Higher-level Representations in Sequential Transformers with Feedback Memory](ht... | python | MIT | 25e169843e93980faa1d0468ea4df42ca7463382 | 2026-01-04T14:38:23.238891Z | false |
labmlai/annotated_deep_learning_paper_implementations | https://github.com/labmlai/annotated_deep_learning_paper_implementations/blob/25e169843e93980faa1d0468ea4df42ca7463382/labml_nn/transformers/rope/experiment.py | labml_nn/transformers/rope/experiment.py | """
---
title: Rotary Positional Embeddings (RoPE) Experiment
summary: This experiment trains a transformer model with Rotary Positional Embeddings (RoPE) on tiny Shakespeare dataset.
---
# Rotary Positional Embeddings (RoPE) Experiment
This is an annotated PyTorch experiment to train a transformer model with Rotary ... | python | MIT | 25e169843e93980faa1d0468ea4df42ca7463382 | 2026-01-04T14:38:23.238891Z | false |
labmlai/annotated_deep_learning_paper_implementations | https://github.com/labmlai/annotated_deep_learning_paper_implementations/blob/25e169843e93980faa1d0468ea4df42ca7463382/labml_nn/transformers/rope/__init__.py | labml_nn/transformers/rope/__init__.py | """
---
title: Rotary Positional Embeddings (RoPE)
summary: >
Annotated implementation of RoPE from paper
RoFormer: Enhanced Transformer with Rotary Position Embedding
---
# Rotary Positional Embeddings (RoPE)
This is an implementation of
[Rotary Positional Embeddings (RoPE)](https://arxiv.org/abs/2104.09864)
in ... | python | MIT | 25e169843e93980faa1d0468ea4df42ca7463382 | 2026-01-04T14:38:23.238891Z | false |
labmlai/annotated_deep_learning_paper_implementations | https://github.com/labmlai/annotated_deep_learning_paper_implementations/blob/25e169843e93980faa1d0468ea4df42ca7463382/labml_nn/transformers/rope/value_pe/arithmetic_experiment.py | labml_nn/transformers/rope/value_pe/arithmetic_experiment.py | """
---
title: Rotary Positional Embeddings with Relative distance (RoPER) Experiment
summary: This experiment trains a transformer model with Rotary Positional Embeddings with
Relative Distance (RoPER) on the arithmetic addition task.
---
# Rotary Positional Embeddings with Relative distance ([RoPER](index.html)) Ex... | python | MIT | 25e169843e93980faa1d0468ea4df42ca7463382 | 2026-01-04T14:38:23.238891Z | false |
labmlai/annotated_deep_learning_paper_implementations | https://github.com/labmlai/annotated_deep_learning_paper_implementations/blob/25e169843e93980faa1d0468ea4df42ca7463382/labml_nn/transformers/rope/value_pe/experiment.py | labml_nn/transformers/rope/value_pe/experiment.py | """
---
title: Rotary Positional Embeddings (RoPE) Experiment
summary: This experiment trains a transformer model with Rotary Positional Embeddings (RoPE) on tiny Shakespeare dataset.
---
# Rotary Positional Embeddings (RoPE) Experiment
This is an annotated PyTorch experiment to train a transformer model with Rotary ... | python | MIT | 25e169843e93980faa1d0468ea4df42ca7463382 | 2026-01-04T14:38:23.238891Z | false |
labmlai/annotated_deep_learning_paper_implementations | https://github.com/labmlai/annotated_deep_learning_paper_implementations/blob/25e169843e93980faa1d0468ea4df42ca7463382/labml_nn/transformers/rope/value_pe/__init__.py | labml_nn/transformers/rope/value_pe/__init__.py | """
---
title: Rotary Positional Embeddings with Relative distance (RoPER)
summary: >
This is an implementation of RoPER which adds relative distance information to embeddings on
top of RoPE introduced in RoFormer: Enhanced Transformer with Rotary Position Embedding
---
*RoPER is work by [Georges Harik (@gharik)](... | python | MIT | 25e169843e93980faa1d0468ea4df42ca7463382 | 2026-01-04T14:38:23.238891Z | false |
labmlai/annotated_deep_learning_paper_implementations | https://github.com/labmlai/annotated_deep_learning_paper_implementations/blob/25e169843e93980faa1d0468ea4df42ca7463382/labml_nn/transformers/basic/__init__.py | labml_nn/transformers/basic/__init__.py | python | MIT | 25e169843e93980faa1d0468ea4df42ca7463382 | 2026-01-04T14:38:23.238891Z | false | |
labmlai/annotated_deep_learning_paper_implementations | https://github.com/labmlai/annotated_deep_learning_paper_implementations/blob/25e169843e93980faa1d0468ea4df42ca7463382/labml_nn/transformers/basic/with_sophia.py | labml_nn/transformers/basic/with_sophia.py | """
---
title: Transformer Auto-Regression Experiment with [Sophia-G optimizer](../../optimizers/sophia.html)
summary: >
This trains a simple transformer model on NLP auto-regression with Sophia-G optimizer.
---
# Transformer Auto-Regression Experiment with [Sophia-G optimizer](../../optimizers/sophia.html)
This tr... | python | MIT | 25e169843e93980faa1d0468ea4df42ca7463382 | 2026-01-04T14:38:23.238891Z | false |
labmlai/annotated_deep_learning_paper_implementations | https://github.com/labmlai/annotated_deep_learning_paper_implementations/blob/25e169843e93980faa1d0468ea4df42ca7463382/labml_nn/transformers/basic/autoregressive_experiment.py | labml_nn/transformers/basic/autoregressive_experiment.py | """
---
title: Transformer Auto-Regression Experiment
summary: >
This trains a simple transformer model on NLP auto-regression.
---
# Transformer Auto-Regression Experiment
[](https://colab.research.google.com/github/labmlai/annotated_deep_le... | python | MIT | 25e169843e93980faa1d0468ea4df42ca7463382 | 2026-01-04T14:38:23.238891Z | false |
labmlai/annotated_deep_learning_paper_implementations | https://github.com/labmlai/annotated_deep_learning_paper_implementations/blob/25e169843e93980faa1d0468ea4df42ca7463382/labml_nn/transformers/retro/bert_embeddings.py | labml_nn/transformers/retro/bert_embeddings.py | """
---
title: BERT Embeddings of chunks of text
summary: >
Generate BERT embeddings for chunks using a frozen BERT model
---
# BERT Embeddings of chunks of text
This is the code to get BERT embeddings of chunks for [RETRO model](index.html).
"""
from typing import List
import torch
from transformers import BertT... | python | MIT | 25e169843e93980faa1d0468ea4df42ca7463382 | 2026-01-04T14:38:23.238891Z | false |
labmlai/annotated_deep_learning_paper_implementations | https://github.com/labmlai/annotated_deep_learning_paper_implementations/blob/25e169843e93980faa1d0468ea4df42ca7463382/labml_nn/transformers/retro/train.py | labml_nn/transformers/retro/train.py | """
---
title: RETRO training
summary: >
Training RETRO model with Tiny Shakespeare dataset
---
# RETRO training
This is the training code for
[RETRO](index.html).
"""
import torch
from labml import monit, lab, tracker, experiment, logger
from labml.logger import Text
from labml_nn.helpers.datasets import TextFil... | python | MIT | 25e169843e93980faa1d0468ea4df42ca7463382 | 2026-01-04T14:38:23.238891Z | false |
labmlai/annotated_deep_learning_paper_implementations | https://github.com/labmlai/annotated_deep_learning_paper_implementations/blob/25e169843e93980faa1d0468ea4df42ca7463382/labml_nn/transformers/retro/model.py | labml_nn/transformers/retro/model.py | """
---
title: RETRO model
summary: >
RETRO model with encoder for neighbors and autoregressive decoder
---
# RETRO model
This is the model definition for
[RETRO](index.html).
"""
import math
from typing import Set
import torch
from torch import nn
from labml.logger import inspect
class RotaryPositionalEmbedd... | python | MIT | 25e169843e93980faa1d0468ea4df42ca7463382 | 2026-01-04T14:38:23.238891Z | false |
labmlai/annotated_deep_learning_paper_implementations | https://github.com/labmlai/annotated_deep_learning_paper_implementations/blob/25e169843e93980faa1d0468ea4df42ca7463382/labml_nn/transformers/retro/dataset.py | labml_nn/transformers/retro/dataset.py | """
---
title: Training dataset for RETRO
summary: >
Create a dataset for RETRO model training
---
# RETRO training dataset
We pre-retrieve nearest neighbors from the [key-value database](database.html)
and create the dataset to train the [RETRO](index.html)
[model](model.html).
"""
import json
from pathlib impo... | python | MIT | 25e169843e93980faa1d0468ea4df42ca7463382 | 2026-01-04T14:38:23.238891Z | false |
labmlai/annotated_deep_learning_paper_implementations | https://github.com/labmlai/annotated_deep_learning_paper_implementations/blob/25e169843e93980faa1d0468ea4df42ca7463382/labml_nn/transformers/retro/database.py | labml_nn/transformers/retro/database.py | """
---
title: Database for nearest neighbor retrieval
summary: >
Nearest neighbor retrieval and creation of the database
---
# Database for nearest neighbor retrieval
This builds the database and retrieves nearest neighbors for
[RETRO model](index.html).
We use [FAISS library](https://faiss.ai/) for the da... | python | MIT | 25e169843e93980faa1d0468ea4df42ca7463382 | 2026-01-04T14:38:23.238891Z | false |
labmlai/annotated_deep_learning_paper_implementations | https://github.com/labmlai/annotated_deep_learning_paper_implementations/blob/25e169843e93980faa1d0468ea4df42ca7463382/labml_nn/transformers/retro/__init__.py | labml_nn/transformers/retro/__init__.py | """
---
title: Retrieval-Enhanced Transformer (Retro)
summary: >
This is a PyTorch implementation/tutorial of the paper
Improving language models by retrieving from trillions of tokens.
It builds a key-value database of chunks of text and retrieves and uses them when
making predictions.
---
# Retrieval-Enhance... | python | MIT | 25e169843e93980faa1d0468ea4df42ca7463382 | 2026-01-04T14:38:23.238891Z | false |
labmlai/annotated_deep_learning_paper_implementations | https://github.com/labmlai/annotated_deep_learning_paper_implementations/blob/25e169843e93980faa1d0468ea4df42ca7463382/labml_nn/transformers/alibi/experiment.py | labml_nn/transformers/alibi/experiment.py | """
---
title: Attention with Linear Biases (ALiBi) Experiment
summary: This experiment trains an Attention with Linear Biases (ALiBi) based model on Tiny Shakespeare dataset.
---
# [Attention with Linear Biases (ALiBi)](index.html) Experiment
This is an annotated PyTorch experiment to train an [ALiBi model](index.htm... | python | MIT | 25e169843e93980faa1d0468ea4df42ca7463382 | 2026-01-04T14:38:23.238891Z | false |
labmlai/annotated_deep_learning_paper_implementations | https://github.com/labmlai/annotated_deep_learning_paper_implementations/blob/25e169843e93980faa1d0468ea4df42ca7463382/labml_nn/transformers/alibi/__init__.py | labml_nn/transformers/alibi/__init__.py | """
---
title: Attention with Linear Biases (ALiBi)
summary: >
Documented implementation with explanations of Attention with Linear Biases (ALiBi)
---
# Attention with Linear Biases (ALiBi)
This is an implementation of Attention with Linear Biases (ALiBi) from the paper
[Train Short, Test Long: Attention with Linea... | python | MIT | 25e169843e93980faa1d0468ea4df42ca7463382 | 2026-01-04T14:38:23.238891Z | false |
labmlai/annotated_deep_learning_paper_implementations | https://github.com/labmlai/annotated_deep_learning_paper_implementations/blob/25e169843e93980faa1d0468ea4df42ca7463382/labml_nn/transformers/knn/build_index.py | labml_nn/transformers/knn/build_index.py | """
---
title: Build FAISS index for k-NN search
summary: This builds the FAISS index with the transformer embeddings.
---
# Build FAISS index for k-NN search
We want to build the index of $\big(f(c_i), w_i\big)$.
We store $f(c_i)$ and $w_i$ in memory mapped numpy arrays.
We find $f(c_i)$ nearest to $f(c_t)$ using [F... | python | MIT | 25e169843e93980faa1d0468ea4df42ca7463382 | 2026-01-04T14:38:23.238891Z | false |
labmlai/annotated_deep_learning_paper_implementations | https://github.com/labmlai/annotated_deep_learning_paper_implementations/blob/25e169843e93980faa1d0468ea4df42ca7463382/labml_nn/transformers/knn/__init__.py | labml_nn/transformers/knn/__init__.py | """
---
title: k-Nearest Neighbor Language Models
summary: >
This is a simple PyTorch implementation/tutorial of the paper
Generalization through Memorization: Nearest Neighbor Language Models using FAISS.
It runs a kNN model on the final transformer layer embeddings to improve the
loss of transformer based lan... | python | MIT | 25e169843e93980faa1d0468ea4df42ca7463382 | 2026-01-04T14:38:23.238891Z | false |
labmlai/annotated_deep_learning_paper_implementations | https://github.com/labmlai/annotated_deep_learning_paper_implementations/blob/25e169843e93980faa1d0468ea4df42ca7463382/labml_nn/transformers/knn/eval_knn.py | labml_nn/transformers/knn/eval_knn.py | """
---
title: Evaluate k-nearest neighbor language model
summary: >
This runs the kNN model and merges the kNN results with transformer output to
achieve better results than just using the transformer.
---
# Evaluate k-nearest neighbor language model
"""
from typing import Optional, List
import faiss
import nump... | python | MIT | 25e169843e93980faa1d0468ea4df42ca7463382 | 2026-01-04T14:38:23.238891Z | false |
labmlai/annotated_deep_learning_paper_implementations | https://github.com/labmlai/annotated_deep_learning_paper_implementations/blob/25e169843e93980faa1d0468ea4df42ca7463382/labml_nn/transformers/knn/train_model.py | labml_nn/transformers/knn/train_model.py | """
---
title: Train Autoregressive Transformer
summary: This is training code with notes for a basic auto-regressive transformer.
---
# Train Autoregressive Transformer
This trains a simple [transformer](../../) model for auto-regression.
"""
import torch
from torch import nn
from labml import experiment
from labml... | python | MIT | 25e169843e93980faa1d0468ea4df42ca7463382 | 2026-01-04T14:38:23.238891Z | false |
labmlai/annotated_deep_learning_paper_implementations | https://github.com/labmlai/annotated_deep_learning_paper_implementations/blob/25e169843e93980faa1d0468ea4df42ca7463382/labml_nn/transformers/gmlp/experiment.py | labml_nn/transformers/gmlp/experiment.py | """
---
title: Pay Attention to MLPs (gMLP) Experiment
summary: This experiment trains a gMLP based model on Tiny Shakespeare dataset.
---
# [Pay Attention to MLPs (gMLP)](index.html) Experiment
This is an annotated PyTorch experiment to train a [gMLP model](index.html).
The paper also applies a Stochastic Depth reg... | python | MIT | 25e169843e93980faa1d0468ea4df42ca7463382 | 2026-01-04T14:38:23.238891Z | false |
labmlai/annotated_deep_learning_paper_implementations | https://github.com/labmlai/annotated_deep_learning_paper_implementations/blob/25e169843e93980faa1d0468ea4df42ca7463382/labml_nn/transformers/gmlp/__init__.py | labml_nn/transformers/gmlp/__init__.py | """
---
title: Pay Attention to MLPs (gMLP)
summary: >
This is an annotated implementation/tutorial of Pay Attention to MLPs (gMLP) in PyTorch.
---
# Pay Attention to MLPs (gMLP)
This is a [PyTorch](https://pytorch.org) implementation of the paper
[Pay Attention to MLPs](https://arxiv.org/abs/2105.08050).
This pap... | python | MIT | 25e169843e93980faa1d0468ea4df42ca7463382 | 2026-01-04T14:38:23.238891Z | false |
labmlai/annotated_deep_learning_paper_implementations | https://github.com/labmlai/annotated_deep_learning_paper_implementations/blob/25e169843e93980faa1d0468ea4df42ca7463382/labml_nn/transformers/fnet/experiment.py | labml_nn/transformers/fnet/experiment.py | """
---
title: FNet Experiment
summary: This experiment trains a FNet based model on AG News dataset.
---
# [FNet](index.html) Experiment
This is an annotated PyTorch experiment to train an [FNet model](index.html).
This is based on
[general training loop and configurations for AG News classification task](../../expe... | python | MIT | 25e169843e93980faa1d0468ea4df42ca7463382 | 2026-01-04T14:38:23.238891Z | false |
labmlai/annotated_deep_learning_paper_implementations | https://github.com/labmlai/annotated_deep_learning_paper_implementations/blob/25e169843e93980faa1d0468ea4df42ca7463382/labml_nn/transformers/fnet/__init__.py | labml_nn/transformers/fnet/__init__.py | """
---
title: "FNet: Mixing Tokens with Fourier Transforms"
summary: >
This is an annotated implementation/tutorial of FNet in PyTorch.
---
# FNet: Mixing Tokens with Fourier Transforms
This is a [PyTorch](https://pytorch.org) implementation of the paper
[FNet: Mixing Tokens with Fourier Transforms](https://arxiv.... | python | MIT | 25e169843e93980faa1d0468ea4df42ca7463382 | 2026-01-04T14:38:23.238891Z | false |
labmlai/annotated_deep_learning_paper_implementations | https://github.com/labmlai/annotated_deep_learning_paper_implementations/blob/25e169843e93980faa1d0468ea4df42ca7463382/labml_nn/transformers/fast_weights/experiment.py | labml_nn/transformers/fast_weights/experiment.py | """
---
title: Train Fast Weights Transformer
summary: This is training code with notes for a Fast Weights Transformer.
---
# Train Fast Weights Transformer
This trains a fast weights transformer model for auto-regression.
Here’s a Colab notebook for training a fast weights transformer on Tiny Shakespeare dataset.
... | python | MIT | 25e169843e93980faa1d0468ea4df42ca7463382 | 2026-01-04T14:38:23.238891Z | false |
labmlai/annotated_deep_learning_paper_implementations | https://github.com/labmlai/annotated_deep_learning_paper_implementations/blob/25e169843e93980faa1d0468ea4df42ca7463382/labml_nn/transformers/fast_weights/token_wise.py | labml_nn/transformers/fast_weights/token_wise.py | """
---
title: Fast Weight Systems
summary: >
This is an annotated implementation/tutorial of
Linear Transformers Are Secretly Fast Weight Memory Systems in PyTorch.
---
"""
from typing import Optional
import torch
from torch import nn
from labml_nn.transformers.fast_weights import DPFP
from labml_nn.transformers... | python | MIT | 25e169843e93980faa1d0468ea4df42ca7463382 | 2026-01-04T14:38:23.238891Z | false |
labmlai/annotated_deep_learning_paper_implementations | https://github.com/labmlai/annotated_deep_learning_paper_implementations/blob/25e169843e93980faa1d0468ea4df42ca7463382/labml_nn/transformers/fast_weights/__init__.py | labml_nn/transformers/fast_weights/__init__.py | """
---
title: Linear Transformers Are Secretly Fast Weight Memory Systems
summary: >
This is an annotated implementation/tutorial of
Linear Transformers Are Secretly Fast Weight Memory Systems in PyTorch.
---
# Fast weights transformer
The paper
[Linear Transformers Are Secretly Fast Weight Memory Systems in PyT... | python | MIT | 25e169843e93980faa1d0468ea4df42ca7463382 | 2026-01-04T14:38:23.238891Z | false |
labmlai/annotated_deep_learning_paper_implementations | https://github.com/labmlai/annotated_deep_learning_paper_implementations/blob/25e169843e93980faa1d0468ea4df42ca7463382/labml_nn/transformers/compressive/experiment.py | labml_nn/transformers/compressive/experiment.py | """
---
title: Compressive Transformer Experiment
summary: This experiment trains a compressive transformer model on tiny Shakespeare dataset.
---
# Compressive Transformer Experiment
This is an annotated PyTorch experiment to train a compressive transformer model.
"""
from typing import List, Tuple, NamedTuple
impo... | python | MIT | 25e169843e93980faa1d0468ea4df42ca7463382 | 2026-01-04T14:38:23.238891Z | false |
labmlai/annotated_deep_learning_paper_implementations | https://github.com/labmlai/annotated_deep_learning_paper_implementations/blob/25e169843e93980faa1d0468ea4df42ca7463382/labml_nn/transformers/compressive/__init__.py | labml_nn/transformers/compressive/__init__.py | """
---
title: Compressive Transformer
summary: >
Documented implementation with explanations of a
Compressive Transformer model.
---
# Compressive Transformer
This is an implementation of
[Compressive Transformers for Long-Range Sequence Modelling](https://arxiv.org/abs/1911.05507)
in [PyTorch](https://pytorch.o... | python | MIT | 25e169843e93980faa1d0468ea4df42ca7463382 | 2026-01-04T14:38:23.238891Z | false |
labmlai/annotated_deep_learning_paper_implementations | https://github.com/labmlai/annotated_deep_learning_paper_implementations/blob/25e169843e93980faa1d0468ea4df42ca7463382/labml_nn/transformers/xl/experiment.py | labml_nn/transformers/xl/experiment.py | """
---
title: Transformer XL Experiment
summary: This experiment trains a transformer XL model on tiny Shakespeare dataset.
---
# Transformer XL Experiment
This is an annotated PyTorch experiment to train a Transformer XL model.
"""
from typing import List
import torch
import torch.nn as nn
from labml import experi... | python | MIT | 25e169843e93980faa1d0468ea4df42ca7463382 | 2026-01-04T14:38:23.238891Z | false |
labmlai/annotated_deep_learning_paper_implementations | https://github.com/labmlai/annotated_deep_learning_paper_implementations/blob/25e169843e93980faa1d0468ea4df42ca7463382/labml_nn/transformers/xl/relative_mha.py | labml_nn/transformers/xl/relative_mha.py | """
---
title: Relative Multi-Headed Attention
summary: >
Documented implementation with explanations of
Relative Multi-Headed Attention from paper Transformer-XL.
---
# Relative Multi-Headed Attention
This is an implementation of relative multi-headed attention from paper
[Transformer-XL: Attentive Language Mode... | python | MIT | 25e169843e93980faa1d0468ea4df42ca7463382 | 2026-01-04T14:38:23.238891Z | false |
labmlai/annotated_deep_learning_paper_implementations | https://github.com/labmlai/annotated_deep_learning_paper_implementations/blob/25e169843e93980faa1d0468ea4df42ca7463382/labml_nn/transformers/xl/__init__.py | labml_nn/transformers/xl/__init__.py | """
---
title: Transformer XL
summary: >
Documented implementation with explanations of a
Transformer-XL model.
---
# Transformer XL
This is an implementation of
[Transformer-XL: Attentive Language Models Beyond a Fixed-Length Context](https://arxiv.org/abs/1901.02860)
in [PyTorch](https://pytorch.org).
Transfor... | python | MIT | 25e169843e93980faa1d0468ea4df42ca7463382 | 2026-01-04T14:38:23.238891Z | false |