| repo | file_url | file_path | content | language | license | commit_sha | retrieved_at | truncated |
|---|---|---|---|---|---|---|---|---|
labmlai/annotated_deep_learning_paper_implementations | https://github.com/labmlai/annotated_deep_learning_paper_implementations/blob/25e169843e93980faa1d0468ea4df42ca7463382/labml_nn/transformers/hour_glass/experiment.py | labml_nn/transformers/hour_glass/experiment.py | """
---
title: Hierarchical Transformers Are More Efficient Language Models Experiment
summary: This experiment trains a hourglass model on Tiny Shakespeare dataset.
---
# [Hierarchical Transformers Are More Efficient Language Models](index.html) Experiment
This is an annotated PyTorch experiment to train a [hourgla... | python | MIT | 25e169843e93980faa1d0468ea4df42ca7463382 | 2026-01-04T14:38:23.238891Z | false |
labmlai/annotated_deep_learning_paper_implementations | https://github.com/labmlai/annotated_deep_learning_paper_implementations/blob/25e169843e93980faa1d0468ea4df42ca7463382/labml_nn/transformers/hour_glass/__init__.py | labml_nn/transformers/hour_glass/__init__.py | """
---
title: Hierarchical Transformers Are More Efficient Language Models
summary: >
This is an annotated implementation/tutorial of hourglass model in PyTorch.
---
# Hierarchical Transformers Are More Efficient Language Models
This is a [PyTorch](https://pytorch.org) implementation of the paper
[Hierarchical Tra... | python | MIT | 25e169843e93980faa1d0468ea4df42ca7463382 | 2026-01-04T14:38:23.238891Z | false |
labmlai/annotated_deep_learning_paper_implementations | https://github.com/labmlai/annotated_deep_learning_paper_implementations/blob/25e169843e93980faa1d0468ea4df42ca7463382/labml_nn/transformers/flash/__init__.py | labml_nn/transformers/flash/__init__.py | """
---
title: Flash Attention
summary: >
This is a PyTorch/Triton implementation of Flash Attention 2
with explanations.
---
# Flash Attention
Flash attention speeds up transformer attention mechanism by reducing the number of
memory reads/writes between GPU high bandwidth memory (HBM) and GPU on-chip SRAM.
It'... | python | MIT | 25e169843e93980faa1d0468ea4df42ca7463382 | 2026-01-04T14:38:23.238891Z | true |
labmlai/annotated_deep_learning_paper_implementations | https://github.com/labmlai/annotated_deep_learning_paper_implementations/blob/25e169843e93980faa1d0468ea4df42ca7463382/labml_nn/transformers/flash/test.py | labml_nn/transformers/flash/test.py | """
### Test Flash Attention Implementation
This is the code to test and measure performance of our flash attention implementation
"""
import torch
import triton
from labml import logger, monit
from labml_nn.transformers.flash import attention
HI_PRES_TORCH = torch.float32
@torch.no_grad()
def _calc_abs_rel_error... | python | MIT | 25e169843e93980faa1d0468ea4df42ca7463382 | 2026-01-04T14:38:23.238891Z | false |
labmlai/annotated_deep_learning_paper_implementations | https://github.com/labmlai/annotated_deep_learning_paper_implementations/blob/25e169843e93980faa1d0468ea4df42ca7463382/labml_nn/transformers/mlp_mixer/experiment.py | labml_nn/transformers/mlp_mixer/experiment.py | """
---
title: MLP Mixer experiment
summary: This experiment trains MLP Mixer on Tiny Shakespeare dataset.
---
# [MLP Mixer](index.html) Experiment
This is an annotated PyTorch experiment to train a [MLP Mixer Model](index.html).
"""
from labml import experiment
from labml.configs import option
from labml_nn.transfo... | python | MIT | 25e169843e93980faa1d0468ea4df42ca7463382 | 2026-01-04T14:38:23.238891Z | false |
labmlai/annotated_deep_learning_paper_implementations | https://github.com/labmlai/annotated_deep_learning_paper_implementations/blob/25e169843e93980faa1d0468ea4df42ca7463382/labml_nn/transformers/mlp_mixer/__init__.py | labml_nn/transformers/mlp_mixer/__init__.py | """
---
title: "MLP-Mixer: An all-MLP Architecture for Vision"
summary: >
This is an annotated implementation/tutorial of MLP-Mixer: An all-MLP Architecture for Vision in PyTorch.
---
# MLP-Mixer: An all-MLP Architecture for Vision
This is a [PyTorch](https://pytorch.org) implementation of the paper
[MLP-Mixer: An ... | python | MIT | 25e169843e93980faa1d0468ea4df42ca7463382 | 2026-01-04T14:38:23.238891Z | false |
labmlai/annotated_deep_learning_paper_implementations | https://github.com/labmlai/annotated_deep_learning_paper_implementations/blob/25e169843e93980faa1d0468ea4df42ca7463382/labml_nn/transformers/gpt/__init__.py | labml_nn/transformers/gpt/__init__.py | """
---
title: GPT
summary: >
Implementation/tutorial of GPT model and training code.
---
# GPT
This is a tutorial/implementation of
[OpenAI GPT architecture](https://openai.com/blog/better-language-models/)
in [PyTorch](https://pytorch.org).
We got a bunch of implementation details from
[minGPT](https://github.com... | python | MIT | 25e169843e93980faa1d0468ea4df42ca7463382 | 2026-01-04T14:38:23.238891Z | false |
labmlai/annotated_deep_learning_paper_implementations | https://github.com/labmlai/annotated_deep_learning_paper_implementations/blob/25e169843e93980faa1d0468ea4df42ca7463382/labml_nn/transformers/mlm/experiment.py | labml_nn/transformers/mlm/experiment.py | """
---
title: Masked Language Model Experiment
summary: This experiment trains Masked Language Model (MLM) on Tiny Shakespeare dataset.
---
# [Masked Language Model (MLM)](index.html) Experiment
This is an annotated PyTorch experiment to train a [Masked Language Model](index.html).
"""
from typing import List
impor... | python | MIT | 25e169843e93980faa1d0468ea4df42ca7463382 | 2026-01-04T14:38:23.238891Z | false |
labmlai/annotated_deep_learning_paper_implementations | https://github.com/labmlai/annotated_deep_learning_paper_implementations/blob/25e169843e93980faa1d0468ea4df42ca7463382/labml_nn/transformers/mlm/__init__.py | labml_nn/transformers/mlm/__init__.py | """
---
title: Masked Language Model
summary: >
This is an annotated implementation/tutorial of the Masked Language Model in PyTorch.
---
# Masked Language Model (MLM)
This is a [PyTorch](https://pytorch.org) implementation of the Masked Language Model (MLM)
used to pre-train the BERT model introduced in the paper... | python | MIT | 25e169843e93980faa1d0468ea4df42ca7463382 | 2026-01-04T14:38:23.238891Z | false |
labmlai/annotated_deep_learning_paper_implementations | https://github.com/labmlai/annotated_deep_learning_paper_implementations/blob/25e169843e93980faa1d0468ea4df42ca7463382/labml_nn/transformers/jax_transformer/__init__.py | labml_nn/transformers/jax_transformer/__init__.py | """
---
title: Autoregressive Transformer Decoder in JAX from scratch
summary: >
An implementation of a transformer decode on a small text dataset in JAX from scratch,
with implementations of basic layers like layer normalization and adam optimizer.
---
# Autoregressive Transformer Decoder in JAX from scratch
... | python | MIT | 25e169843e93980faa1d0468ea4df42ca7463382 | 2026-01-04T14:38:23.238891Z | true |
labmlai/annotated_deep_learning_paper_implementations | https://github.com/labmlai/annotated_deep_learning_paper_implementations/blob/25e169843e93980faa1d0468ea4df42ca7463382/labml_nn/transformers/glu_variants/experiment.py | labml_nn/transformers/glu_variants/experiment.py | """
---
title: Gated Linear Units and Variants
summary: >
Train an auto-regressive transformer with Gated Linear Units and variants
for the position-wise feedforward network (FFN).
---
# Gated Linear Units and Variants
This trains a simple [transformer](../../) model for auto-regression.
We try different variants... | python | MIT | 25e169843e93980faa1d0468ea4df42ca7463382 | 2026-01-04T14:38:23.238891Z | false |
labmlai/annotated_deep_learning_paper_implementations | https://github.com/labmlai/annotated_deep_learning_paper_implementations/blob/25e169843e93980faa1d0468ea4df42ca7463382/labml_nn/transformers/glu_variants/simple.py | labml_nn/transformers/glu_variants/simple.py | """
---
title: Gated Linear Units and Variants
summary: >
Train an auto-regressive transformer with Gated Linear Units and variants
for the position-wise feedforward network (FFN).
---
# Gated Linear Units and Variants
This trains a simple [transformer](../../) model for auto-regression.
We try different variants... | python | MIT | 25e169843e93980faa1d0468ea4df42ca7463382 | 2026-01-04T14:38:23.238891Z | false |
labmlai/annotated_deep_learning_paper_implementations | https://github.com/labmlai/annotated_deep_learning_paper_implementations/blob/25e169843e93980faa1d0468ea4df42ca7463382/labml_nn/transformers/glu_variants/__init__.py | labml_nn/transformers/glu_variants/__init__.py | """
---
title: Gated Linear Units and Variants
summary: >
Train an auto-regressive transformer with Gated Linear Units and variants
for the position-wise feedforward network (FFN).
---
# Gated Linear Units and Variants
* [Experiment that uses `labml.configs`](experiment.html)
* [Simpler version from scratch](simp... | python | MIT | 25e169843e93980faa1d0468ea4df42ca7463382 | 2026-01-04T14:38:23.238891Z | false |
labmlai/annotated_deep_learning_paper_implementations | https://github.com/labmlai/annotated_deep_learning_paper_implementations/blob/25e169843e93980faa1d0468ea4df42ca7463382/labml_nn/graphs/__init__.py | labml_nn/graphs/__init__.py | """
---
title: Graph Neural Networks
summary: >
A set of PyTorch implementations/tutorials related to graph neural networks
---
# Graph Neural Networks
* [Graph Attention Networks (GAT)](gat/index.html)
* [Graph Attention Networks v2 (GATv2)](gatv2/index.html)
"""
| python | MIT | 25e169843e93980faa1d0468ea4df42ca7463382 | 2026-01-04T14:38:23.238891Z | false |
labmlai/annotated_deep_learning_paper_implementations | https://github.com/labmlai/annotated_deep_learning_paper_implementations/blob/25e169843e93980faa1d0468ea4df42ca7463382/labml_nn/graphs/gat/experiment.py | labml_nn/graphs/gat/experiment.py | """
---
title: Train a Graph Attention Network (GAT) on Cora dataset
summary: >
This trains is a Graph Attention Network (GAT) on Cora dataset
---
# Train a Graph Attention Network (GAT) on Cora dataset
"""
from typing import Dict
import numpy as np
import torch
from torch import nn
from labml import lab, monit,... | python | MIT | 25e169843e93980faa1d0468ea4df42ca7463382 | 2026-01-04T14:38:23.238891Z | false |
labmlai/annotated_deep_learning_paper_implementations | https://github.com/labmlai/annotated_deep_learning_paper_implementations/blob/25e169843e93980faa1d0468ea4df42ca7463382/labml_nn/graphs/gat/__init__.py | labml_nn/graphs/gat/__init__.py | """
---
title: Graph Attention Networks (GAT)
summary: >
A PyTorch implementation/tutorial of Graph Attention Networks.
---
# Graph Attention Networks (GAT)
This is a [PyTorch](https://pytorch.org) implementation of the paper
[Graph Attention Networks](https://arxiv.org/abs/1710.10903).
GATs work on graph data.
A g... | python | MIT | 25e169843e93980faa1d0468ea4df42ca7463382 | 2026-01-04T14:38:23.238891Z | false |
labmlai/annotated_deep_learning_paper_implementations | https://github.com/labmlai/annotated_deep_learning_paper_implementations/blob/25e169843e93980faa1d0468ea4df42ca7463382/labml_nn/graphs/gatv2/experiment.py | labml_nn/graphs/gatv2/experiment.py | """
---
title: Train a Graph Attention Network v2 (GATv2) on Cora dataset
summary: >
This trains is a Graph Attention Network v2 (GATv2) on Cora dataset
---
# Train a Graph Attention Network v2 (GATv2) on Cora dataset
"""
import torch
from torch import nn
from labml import experiment
from labml.configs import opt... | python | MIT | 25e169843e93980faa1d0468ea4df42ca7463382 | 2026-01-04T14:38:23.238891Z | false |
labmlai/annotated_deep_learning_paper_implementations | https://github.com/labmlai/annotated_deep_learning_paper_implementations/blob/25e169843e93980faa1d0468ea4df42ca7463382/labml_nn/graphs/gatv2/__init__.py | labml_nn/graphs/gatv2/__init__.py | """
---
title: Graph Attention Networks v2 (GATv2)
summary: >
A PyTorch implementation/tutorial of Graph Attention Networks v2.
---
# Graph Attention Networks v2 (GATv2)
This is a [PyTorch](https://pytorch.org) implementation of the GATv2 operator from the paper
[How Attentive are Graph Attention Networks?](https://ar... | python | MIT | 25e169843e93980faa1d0468ea4df42ca7463382 | 2026-01-04T14:38:23.238891Z | false |
labmlai/annotated_deep_learning_paper_implementations | https://github.com/labmlai/annotated_deep_learning_paper_implementations/blob/25e169843e93980faa1d0468ea4df42ca7463382/labml_nn/normalization/__init__.py | labml_nn/normalization/__init__.py | """
---
title: Normalization Layers
summary: >
A set of PyTorch implementations/tutorials of normalization layers.
---
# Normalization Layers
* [Batch Normalization](batch_norm/index.html)
* [Layer Normalization](layer_norm/index.html)
* [Instance Normalization](instance_norm/index.html)
* [Group Normalization](grou... | python | MIT | 25e169843e93980faa1d0468ea4df42ca7463382 | 2026-01-04T14:38:23.238891Z | false |
labmlai/annotated_deep_learning_paper_implementations | https://github.com/labmlai/annotated_deep_learning_paper_implementations/blob/25e169843e93980faa1d0468ea4df42ca7463382/labml_nn/normalization/instance_norm/experiment.py | labml_nn/normalization/instance_norm/experiment.py | """
---
title: CIFAR10 Experiment to try Instance Normalization
summary: >
This trains is a simple convolutional neural network that uses instance normalization
to classify CIFAR10 images.
---
# CIFAR10 Experiment for Instance Normalization
This demonstrates the use of an instance normalization layer in a convolu... | python | MIT | 25e169843e93980faa1d0468ea4df42ca7463382 | 2026-01-04T14:38:23.238891Z | false |
labmlai/annotated_deep_learning_paper_implementations | https://github.com/labmlai/annotated_deep_learning_paper_implementations/blob/25e169843e93980faa1d0468ea4df42ca7463382/labml_nn/normalization/instance_norm/__init__.py | labml_nn/normalization/instance_norm/__init__.py | """
---
title: Instance Normalization
summary: >
A PyTorch implementation/tutorial of instance normalization.
---
# Instance Normalization
This is a [PyTorch](https://pytorch.org) implementation of
[Instance Normalization: The Missing Ingredient for Fast Stylization](https://arxiv.org/abs/1607.08022).
Instance norm... | python | MIT | 25e169843e93980faa1d0468ea4df42ca7463382 | 2026-01-04T14:38:23.238891Z | false |
labmlai/annotated_deep_learning_paper_implementations | https://github.com/labmlai/annotated_deep_learning_paper_implementations/blob/25e169843e93980faa1d0468ea4df42ca7463382/labml_nn/normalization/deep_norm/experiment.py | labml_nn/normalization/deep_norm/experiment.py | """
---
title: DeepNorm Experiment
summary: >
Training a DeepNorm transformer on Tiny Shakespeare.
---
# [DeepNorm](index.html) Experiment
[](https://colab.research.google.com/github/labmlai/annotated_deep_learning_paper_implementations/blob/m... | python | MIT | 25e169843e93980faa1d0468ea4df42ca7463382 | 2026-01-04T14:38:23.238891Z | false |
labmlai/annotated_deep_learning_paper_implementations | https://github.com/labmlai/annotated_deep_learning_paper_implementations/blob/25e169843e93980faa1d0468ea4df42ca7463382/labml_nn/normalization/deep_norm/__init__.py | labml_nn/normalization/deep_norm/__init__.py | """
---
title: DeepNorm
summary: >
A PyTorch implementation/tutorial of DeepNorm from paper DeepNet: Scaling Transformers to 1,000 Layers.
---
# DeepNorm
[](https://colab.research.google.com/github/labmlai/annotated_deep_learning_paper_impleme... | python | MIT | 25e169843e93980faa1d0468ea4df42ca7463382 | 2026-01-04T14:38:23.238891Z | false |
labmlai/annotated_deep_learning_paper_implementations | https://github.com/labmlai/annotated_deep_learning_paper_implementations/blob/25e169843e93980faa1d0468ea4df42ca7463382/labml_nn/normalization/layer_norm/__init__.py | labml_nn/normalization/layer_norm/__init__.py | """
---
title: Layer Normalization
summary: >
A PyTorch implementation/tutorial of layer normalization.
---
# Layer Normalization
This is a [PyTorch](https://pytorch.org) implementation of
[Layer Normalization](https://arxiv.org/abs/1607.06450).
### Limitations of [Batch Normalization](../batch_norm/index.html)
* ... | python | MIT | 25e169843e93980faa1d0468ea4df42ca7463382 | 2026-01-04T14:38:23.238891Z | false |
labmlai/annotated_deep_learning_paper_implementations | https://github.com/labmlai/annotated_deep_learning_paper_implementations/blob/25e169843e93980faa1d0468ea4df42ca7463382/labml_nn/normalization/weight_standardization/experiment.py | labml_nn/normalization/weight_standardization/experiment.py | """
---
title: CIFAR10 Experiment to try Weight Standardization and Batch-Channel Normalization
summary: >
This trains is a VGG net that uses weight standardization and batch-channel normalization
to classify CIFAR10 images.
---
# CIFAR10 Experiment to try Weight Standardization and Batch-Channel Normalization
""... | python | MIT | 25e169843e93980faa1d0468ea4df42ca7463382 | 2026-01-04T14:38:23.238891Z | false |
labmlai/annotated_deep_learning_paper_implementations | https://github.com/labmlai/annotated_deep_learning_paper_implementations/blob/25e169843e93980faa1d0468ea4df42ca7463382/labml_nn/normalization/weight_standardization/__init__.py | labml_nn/normalization/weight_standardization/__init__.py | """
---
title: Weight Standardization
summary: >
A PyTorch implementation/tutorial of Weight Standardization.
---
# Weight Standardization
This is a [PyTorch](https://pytorch.org) implementation of Weight Standardization from the paper
[Micro-Batch Training with Batch-Channel Normalization and Weight Standardizatio... | python | MIT | 25e169843e93980faa1d0468ea4df42ca7463382 | 2026-01-04T14:38:23.238891Z | false |
labmlai/annotated_deep_learning_paper_implementations | https://github.com/labmlai/annotated_deep_learning_paper_implementations/blob/25e169843e93980faa1d0468ea4df42ca7463382/labml_nn/normalization/weight_standardization/conv2d.py | labml_nn/normalization/weight_standardization/conv2d.py | """
---
title: 2D Convolution Layer with Weight Standardization
summary: >
A PyTorch implementation/tutorial of a 2D Convolution Layer with Weight Standardization.
---
# 2D Convolution Layer with Weight Standardization
This is an implementation of a 2 dimensional convolution layer with [Weight Standardization](./ind... | python | MIT | 25e169843e93980faa1d0468ea4df42ca7463382 | 2026-01-04T14:38:23.238891Z | false |
labmlai/annotated_deep_learning_paper_implementations | https://github.com/labmlai/annotated_deep_learning_paper_implementations/blob/25e169843e93980faa1d0468ea4df42ca7463382/labml_nn/normalization/batch_norm/mnist.py | labml_nn/normalization/batch_norm/mnist.py | """
---
title: MNIST Experiment to try Batch Normalization
summary: >
This trains is a simple convolutional neural network that uses batch normalization
to classify MNIST digits.
---
# MNIST Experiment for Batch Normalization
"""
import torch.nn as nn
import torch.nn.functional as F
import torch.utils.data
from ... | python | MIT | 25e169843e93980faa1d0468ea4df42ca7463382 | 2026-01-04T14:38:23.238891Z | false |
labmlai/annotated_deep_learning_paper_implementations | https://github.com/labmlai/annotated_deep_learning_paper_implementations/blob/25e169843e93980faa1d0468ea4df42ca7463382/labml_nn/normalization/batch_norm/__init__.py | labml_nn/normalization/batch_norm/__init__.py | """
---
title: Batch Normalization
summary: >
A PyTorch implementation/tutorial of batch normalization.
---
# Batch Normalization
This is a [PyTorch](https://pytorch.org) implementation of Batch Normalization from paper
[Batch Normalization: Accelerating Deep Network Training by Reducing Internal Covariate Shift](h... | python | MIT | 25e169843e93980faa1d0468ea4df42ca7463382 | 2026-01-04T14:38:23.238891Z | false |
labmlai/annotated_deep_learning_paper_implementations | https://github.com/labmlai/annotated_deep_learning_paper_implementations/blob/25e169843e93980faa1d0468ea4df42ca7463382/labml_nn/normalization/batch_norm/cifar10.py | labml_nn/normalization/batch_norm/cifar10.py | """
---
title: CIFAR10 Experiment to try Group Normalization
summary: >
This trains is a simple convolutional neural network that uses group normalization
to classify CIFAR10 images.
---
# CIFAR10 Experiment for Group Normalization
"""
import torch.nn as nn
from labml import experiment
from labml.configs import ... | python | MIT | 25e169843e93980faa1d0468ea4df42ca7463382 | 2026-01-04T14:38:23.238891Z | false |
labmlai/annotated_deep_learning_paper_implementations | https://github.com/labmlai/annotated_deep_learning_paper_implementations/blob/25e169843e93980faa1d0468ea4df42ca7463382/labml_nn/normalization/batch_channel_norm/__init__.py | labml_nn/normalization/batch_channel_norm/__init__.py | """
---
title: Batch-Channel Normalization
summary: >
A PyTorch implementation/tutorial of Batch-Channel Normalization.
---
# Batch-Channel Normalization
This is a [PyTorch](https://pytorch.org) implementation of Batch-Channel Normalization from the paper
[Micro-Batch Training with Batch-Channel Normalization and W... | python | MIT | 25e169843e93980faa1d0468ea4df42ca7463382 | 2026-01-04T14:38:23.238891Z | false |
labmlai/annotated_deep_learning_paper_implementations | https://github.com/labmlai/annotated_deep_learning_paper_implementations/blob/25e169843e93980faa1d0468ea4df42ca7463382/labml_nn/normalization/group_norm/experiment.py | labml_nn/normalization/group_norm/experiment.py | """
---
title: CIFAR10 Experiment to try Group Normalization
summary: >
This trains is a simple convolutional neural network that uses group normalization
to classify CIFAR10 images.
---
# CIFAR10 Experiment for Group Normalization
"""
import torch.nn as nn
from labml import experiment
from labml.configs import ... | python | MIT | 25e169843e93980faa1d0468ea4df42ca7463382 | 2026-01-04T14:38:23.238891Z | false |
labmlai/annotated_deep_learning_paper_implementations | https://github.com/labmlai/annotated_deep_learning_paper_implementations/blob/25e169843e93980faa1d0468ea4df42ca7463382/labml_nn/normalization/group_norm/__init__.py | labml_nn/normalization/group_norm/__init__.py | """
---
title: Group Normalization
summary: >
A PyTorch implementation/tutorial of group normalization.
---
# Group Normalization
This is a [PyTorch](https://pytorch.org) implementation of
the [Group Normalization](https://arxiv.org/abs/1803.08494) paper.
[Batch Normalization](../batch_norm/index.html) works well f... | python | MIT | 25e169843e93980faa1d0468ea4df42ca7463382 | 2026-01-04T14:38:23.238891Z | false |
labmlai/annotated_deep_learning_paper_implementations | https://github.com/labmlai/annotated_deep_learning_paper_implementations/blob/25e169843e93980faa1d0468ea4df42ca7463382/labml_nn/rwkv/experiment.py | labml_nn/rwkv/experiment.py | import inspect
import math
import torch
import torch.nn as nn
from labml_nn.rwkv.configs import RWKVConfigs
from labml_nn.rwkv import RWKV
from labml_nn.rwkv import TimeMixing
from labml import experiment
from labml.configs import option
from labml_nn.experiments.nlp_autoregression import NLPAutoRegressionConfigs
c... | python | MIT | 25e169843e93980faa1d0468ea4df42ca7463382 | 2026-01-04T14:38:23.238891Z | false |
labmlai/annotated_deep_learning_paper_implementations | https://github.com/labmlai/annotated_deep_learning_paper_implementations/blob/25e169843e93980faa1d0468ea4df42ca7463382/labml_nn/rwkv/configs.py | labml_nn/rwkv/configs.py | from labml.configs import BaseConfigs
class RWKVConfigs(BaseConfigs):
"""
## Transformer Configurations
This defines configurations for a transformer.
The configurations are calculate using option functions.
These are lazy loaded and therefore only the necessary modules
are calculated.
""... | python | MIT | 25e169843e93980faa1d0468ea4df42ca7463382 | 2026-01-04T14:38:23.238891Z | false |
labmlai/annotated_deep_learning_paper_implementations | https://github.com/labmlai/annotated_deep_learning_paper_implementations/blob/25e169843e93980faa1d0468ea4df42ca7463382/labml_nn/rwkv/__init__.py | labml_nn/rwkv/__init__.py | """
---
title: Receptance Weighted Key Value (RWKV)
summary: >
This implements the RWKV model
using PyTorch with explanations.
---
# Receptance Weighted Key Value (RWKV)
This is a tutorial/implementation of RWKV
from paper [RWKV: Reinventing RNNs for the Transformer Era](https://arxiv.org/pdf/2305.13048.pdf)
in... | python | MIT | 25e169843e93980faa1d0468ea4df42ca7463382 | 2026-01-04T14:38:23.238891Z | false |
labmlai/annotated_deep_learning_paper_implementations | https://github.com/labmlai/annotated_deep_learning_paper_implementations/blob/25e169843e93980faa1d0468ea4df42ca7463382/labml_nn/cfr/infoset_saver.py | labml_nn/cfr/infoset_saver.py | import json
import pathlib
from typing import Dict
from labml import experiment
from labml_nn.cfr import InfoSet
class InfoSetSaver(experiment.ModelSaver):
def __init__(self, infosets: Dict[str, InfoSet]):
self.infosets = infosets
def save(self, checkpoint_path: pathlib.Path) -> any:
data = ... | python | MIT | 25e169843e93980faa1d0468ea4df42ca7463382 | 2026-01-04T14:38:23.238891Z | false |
labmlai/annotated_deep_learning_paper_implementations | https://github.com/labmlai/annotated_deep_learning_paper_implementations/blob/25e169843e93980faa1d0468ea4df42ca7463382/labml_nn/cfr/analytics.py | labml_nn/cfr/analytics.py | from typing import List
import altair as alt
import numpy as np
from labml import analytics
from labml.analytics import IndicatorCollection
def calculate_percentages(means: List[np.ndarray], names: List[List[str]]):
normalized = []
for i in range(len(means)):
total = np.zeros_like(means[i])
... | python | MIT | 25e169843e93980faa1d0468ea4df42ca7463382 | 2026-01-04T14:38:23.238891Z | false |
labmlai/annotated_deep_learning_paper_implementations | https://github.com/labmlai/annotated_deep_learning_paper_implementations/blob/25e169843e93980faa1d0468ea4df42ca7463382/labml_nn/cfr/__init__.py | labml_nn/cfr/__init__.py | """
---
title: Regret Minimization in Games with Incomplete Information (CFR)
summary: >
This is an annotated implementation/tutorial of Regret Minimization in Games with Incomplete Information
---
# Regret Minimization in Games with Incomplete Information (CFR)
The paper
[Regret Minimization in Games with Incomple... | python | MIT | 25e169843e93980faa1d0468ea4df42ca7463382 | 2026-01-04T14:38:23.238891Z | false |
labmlai/annotated_deep_learning_paper_implementations | https://github.com/labmlai/annotated_deep_learning_paper_implementations/blob/25e169843e93980faa1d0468ea4df42ca7463382/labml_nn/cfr/kuhn/__init__.py | labml_nn/cfr/kuhn/__init__.py | """
---
title: CFR on Kuhn Poker
summary: >
This is an annotated implementation/tutorial of CFR on Kuhn Poker
---
# [Counterfactual Regret Minimization (CFR)](../index.html) on Kuhn Poker
This applies [Counterfactual Regret Minimization (CFR)](../index.html) to Kuhn poker.
[Kuhn Poker](https://en.wikipedia.org/wik... | python | MIT | 25e169843e93980faa1d0468ea4df42ca7463382 | 2026-01-04T14:38:23.238891Z | false |
labmlai/annotated_deep_learning_paper_implementations | https://github.com/labmlai/annotated_deep_learning_paper_implementations/blob/25e169843e93980faa1d0468ea4df42ca7463382/labml_nn/recurrent_highway_networks/__init__.py | labml_nn/recurrent_highway_networks/__init__.py | """
---
title: Recurrent Highway Networks
summary: A simple PyTorch implementation/tutorial of Recurrent Highway Networks.
---
# Recurrent Highway Networks
This is a [PyTorch](https://pytorch.org) implementation of [Recurrent Highway Networks](https://arxiv.org/abs/1607.03474).
"""
from typing import Optional
import... | python | MIT | 25e169843e93980faa1d0468ea4df42ca7463382 | 2026-01-04T14:38:23.238891Z | false |
labmlai/annotated_deep_learning_paper_implementations | https://github.com/labmlai/annotated_deep_learning_paper_implementations/blob/25e169843e93980faa1d0468ea4df42ca7463382/labml_nn/gan/__init__.py | labml_nn/gan/__init__.py | """
---
title: Generative Adversarial Networks
summary: >
A set of PyTorch implementations/tutorials of GANs.
---
# Generative Adversarial Networks
* [Original GAN](original/index.html)
* [GAN with deep convolutional network](dcgan/index.html)
* [Cycle GAN](cycle_gan/index.html)
* [Wasserstein GAN](wasserstein/index... | python | MIT | 25e169843e93980faa1d0468ea4df42ca7463382 | 2026-01-04T14:38:23.238891Z | false |
labmlai/annotated_deep_learning_paper_implementations | https://github.com/labmlai/annotated_deep_learning_paper_implementations/blob/25e169843e93980faa1d0468ea4df42ca7463382/labml_nn/gan/cycle_gan/__init__.py | labml_nn/gan/cycle_gan/__init__.py | """
---
title: Cycle GAN
summary: >
A simple PyTorch implementation/tutorial of Cycle GAN introduced in paper
Unpaired Image-to-Image Translation using Cycle-Consistent Adversarial Networks.
---
# Cycle GAN
This is a [PyTorch](https://pytorch.org) implementation/tutorial of the paper
[Unpaired Image-to-Image Tran... | python | MIT | 25e169843e93980faa1d0468ea4df42ca7463382 | 2026-01-04T14:38:23.238891Z | false |
labmlai/annotated_deep_learning_paper_implementations | https://github.com/labmlai/annotated_deep_learning_paper_implementations/blob/25e169843e93980faa1d0468ea4df42ca7463382/labml_nn/gan/stylegan/experiment.py | labml_nn/gan/stylegan/experiment.py | """
---
title: StyleGAN 2 Model Training
summary: >
An annotated PyTorch implementation of StyleGAN2 model training code.
---
# [StyleGAN 2](index.html) Model Training
This is the training code for [StyleGAN 2](index.html) model.

---*These are $64 \times 64$ images generated a... | python | MIT | 25e169843e93980faa1d0468ea4df42ca7463382 | 2026-01-04T14:38:23.238891Z | false |
labmlai/annotated_deep_learning_paper_implementations | https://github.com/labmlai/annotated_deep_learning_paper_implementations/blob/25e169843e93980faa1d0468ea4df42ca7463382/labml_nn/gan/stylegan/__init__.py | labml_nn/gan/stylegan/__init__.py | """
---
title: StyleGAN 2
summary: >
An annotated PyTorch implementation of StyleGAN2.
---
# StyleGAN 2
This is a [PyTorch](https://pytorch.org) implementation of the paper
[Analyzing and Improving the Image Quality of StyleGAN](https://arxiv.org/abs/1912.04958)
which introduces **StyleGAN 2**.
StyleGAN 2 is an im... | python | MIT | 25e169843e93980faa1d0468ea4df42ca7463382 | 2026-01-04T14:38:23.238891Z | true |
labmlai/annotated_deep_learning_paper_implementations | https://github.com/labmlai/annotated_deep_learning_paper_implementations/blob/25e169843e93980faa1d0468ea4df42ca7463382/labml_nn/gan/original/experiment.py | labml_nn/gan/original/experiment.py | """
---
title: Generative Adversarial Networks experiment with MNIST
summary: This experiment generates MNIST images using multi-layer perceptron.
---
# Generative Adversarial Networks experiment with MNIST
"""
from typing import Any
from torchvision import transforms
import torch
import torch.nn as nn
import torch... | python | MIT | 25e169843e93980faa1d0468ea4df42ca7463382 | 2026-01-04T14:38:23.238891Z | false |
labmlai/annotated_deep_learning_paper_implementations | https://github.com/labmlai/annotated_deep_learning_paper_implementations/blob/25e169843e93980faa1d0468ea4df42ca7463382/labml_nn/gan/original/__init__.py | labml_nn/gan/original/__init__.py | """
---
title: Generative Adversarial Networks (GAN)
summary: A simple PyTorch implementation/tutorial of Generative Adversarial Networks (GAN) loss functions.
---
# Generative Adversarial Networks (GAN)
This is an implementation of
[Generative Adversarial Networks](https://arxiv.org/abs/1406.2661).
The generator, $... | python | MIT | 25e169843e93980faa1d0468ea4df42ca7463382 | 2026-01-04T14:38:23.238891Z | false |
labmlai/annotated_deep_learning_paper_implementations | https://github.com/labmlai/annotated_deep_learning_paper_implementations/blob/25e169843e93980faa1d0468ea4df42ca7463382/labml_nn/gan/dcgan/__init__.py | labml_nn/gan/dcgan/__init__.py | """
---
title: Deep Convolutional Generative Adversarial Networks (DCGAN)
summary: A simple PyTorch implementation/tutorial of Deep Convolutional Generative Adversarial Networks (DCGAN).
---
# Deep Convolutional Generative Adversarial Networks (DCGAN)
This is a [PyTorch](https://pytorch.org) implementation of paper
[... | python | MIT | 25e169843e93980faa1d0468ea4df42ca7463382 | 2026-01-04T14:38:23.238891Z | false |
labmlai/annotated_deep_learning_paper_implementations | https://github.com/labmlai/annotated_deep_learning_paper_implementations/blob/25e169843e93980faa1d0468ea4df42ca7463382/labml_nn/gan/wasserstein/experiment.py | labml_nn/gan/wasserstein/experiment.py | """
---
title: WGAN experiment with MNIST
summary: This experiment generates MNIST images using a convolutional neural network.
---
# WGAN experiment with MNIST
"""
from labml import experiment
from labml.configs import calculate
# Import configurations from [DCGAN experiment](../dcgan/index.html)
from labml_nn.gan.dcg... | python | MIT | 25e169843e93980faa1d0468ea4df42ca7463382 | 2026-01-04T14:38:23.238891Z | false |
labmlai/annotated_deep_learning_paper_implementations | https://github.com/labmlai/annotated_deep_learning_paper_implementations/blob/25e169843e93980faa1d0468ea4df42ca7463382/labml_nn/gan/wasserstein/__init__.py | labml_nn/gan/wasserstein/__init__.py | r"""
---
title: Wasserstein GAN (WGAN)
summary: A simple PyTorch implementation/tutorial of Wasserstein Generative Adversarial Networks (WGAN) loss functions.
---
# Wasserstein GAN (WGAN)
This is an implementation of
[Wasserstein GAN](https://arxiv.org/abs/1701.07875).
The original GAN loss is based on Jensen-Shanno... | python | MIT | 25e169843e93980faa1d0468ea4df42ca7463382 | 2026-01-04T14:38:23.238891Z | false |
labmlai/annotated_deep_learning_paper_implementations | https://github.com/labmlai/annotated_deep_learning_paper_implementations/blob/25e169843e93980faa1d0468ea4df42ca7463382/labml_nn/gan/wasserstein/gradient_penalty/experiment.py | labml_nn/gan/wasserstein/gradient_penalty/experiment.py | """
---
title: WGAN-GP experiment with MNIST
summary: This experiment generates MNIST images using a convolutional neural network.
---
# WGAN-GP experiment with MNIST
"""
import torch
from labml import experiment, tracker
# Import configurations from [Wasserstein experiment](../experiment.html)
from labml_nn.gan.wasse... | python | MIT | 25e169843e93980faa1d0468ea4df42ca7463382 | 2026-01-04T14:38:23.238891Z | false |
labmlai/annotated_deep_learning_paper_implementations | https://github.com/labmlai/annotated_deep_learning_paper_implementations/blob/25e169843e93980faa1d0468ea4df42ca7463382/labml_nn/gan/wasserstein/gradient_penalty/__init__.py | labml_nn/gan/wasserstein/gradient_penalty/__init__.py | r"""
---
title: Gradient Penalty for Wasserstein GAN (WGAN-GP)
summary: >
An annotated PyTorch implementation/tutorial of
Improved Training of Wasserstein GANs.
---
# Gradient Penalty for Wasserstein GAN (WGAN-GP)
This is an implementation of
[Improved Training of Wasserstein GANs](https://arxiv.org/abs/1704.00028... | python | MIT | 25e169843e93980faa1d0468ea4df42ca7463382 | 2026-01-04T14:38:23.238891Z | false |
labmlai/annotated_deep_learning_paper_implementations | https://github.com/labmlai/annotated_deep_learning_paper_implementations/blob/25e169843e93980faa1d0468ea4df42ca7463382/labml_nn/conv_mixer/experiment.py | labml_nn/conv_mixer/experiment.py | """
---
title: Train ConvMixer on CIFAR 10
summary: >
Train ConvMixer on CIFAR 10
---
# Train a [ConvMixer](index.html) on CIFAR 10
This script trains a ConvMixer on the CIFAR 10 dataset.
This is not an attempt to reproduce the results of the paper.
The paper uses image augmentations
present in [PyTorch Image Models... | python | MIT | 25e169843e93980faa1d0468ea4df42ca7463382 | 2026-01-04T14:38:23.238891Z | false |
labmlai/annotated_deep_learning_paper_implementations | https://github.com/labmlai/annotated_deep_learning_paper_implementations/blob/25e169843e93980faa1d0468ea4df42ca7463382/labml_nn/conv_mixer/__init__.py | labml_nn/conv_mixer/__init__.py | """
---
title: Patches Are All You Need? (ConvMixer)
summary: >
A PyTorch implementation/tutorial of the paper
"Patches Are All You Need?"
---
# Patches Are All You Need? (ConvMixer)
This is a [PyTorch](https://pytorch.org) implementation of the paper
[Patches Are All You Need?](https://arxiv.org/abs/2201.09792).
... | python | MIT | 25e169843e93980faa1d0468ea4df42ca7463382 | 2026-01-04T14:38:23.238891Z | false |
labmlai/annotated_deep_learning_paper_implementations | https://github.com/labmlai/annotated_deep_learning_paper_implementations/blob/25e169843e93980faa1d0468ea4df42ca7463382/labml_nn/hypernetworks/hyper_lstm.py | labml_nn/hypernetworks/hyper_lstm.py | """
---
title: HyperNetworks - HyperLSTM
summary: A PyTorch implementation/tutorial of HyperLSTM introduced in paper HyperNetworks.
---
# HyperNetworks - HyperLSTM
We have implemented HyperLSTM, introduced in the paper
[HyperNetworks](https://arxiv.org/abs/1609.09106), with annotations
using [PyTorch](https://pytorch.org)... | python | MIT | 25e169843e93980faa1d0468ea4df42ca7463382 | 2026-01-04T14:38:23.238891Z | false |
labmlai/annotated_deep_learning_paper_implementations | https://github.com/labmlai/annotated_deep_learning_paper_implementations/blob/25e169843e93980faa1d0468ea4df42ca7463382/labml_nn/hypernetworks/experiment.py | labml_nn/hypernetworks/experiment.py | import torch
import torch.nn as nn
from labml import experiment
from labml.configs import option
from labml.utils.pytorch import get_modules
from labml_nn.experiments.nlp_autoregression import NLPAutoRegressionConfigs
from labml_nn.hypernetworks.hyper_lstm import HyperLSTM
from labml_nn.lstm import LSTM
class Autore... | python | MIT | 25e169843e93980faa1d0468ea4df42ca7463382 | 2026-01-04T14:38:23.238891Z | false |
labmlai/annotated_deep_learning_paper_implementations | https://github.com/labmlai/annotated_deep_learning_paper_implementations/blob/25e169843e93980faa1d0468ea4df42ca7463382/labml_nn/hypernetworks/__init__.py | labml_nn/hypernetworks/__init__.py | """
---
title: HyperNetworks
summary: A PyTorch implementation/tutorial of HyperLSTM introduced in paper HyperNetworks.
---
## [HyperLSTM](hyper_lstm.html)
""" | python | MIT | 25e169843e93980faa1d0468ea4df42ca7463382 | 2026-01-04T14:38:23.238891Z | false |
labmlai/annotated_deep_learning_paper_implementations | https://github.com/labmlai/annotated_deep_learning_paper_implementations/blob/25e169843e93980faa1d0468ea4df42ca7463382/labml_nn/neox/model.py | labml_nn/neox/model.py | """
---
title: GPT-NeoX Model Definition
summary: >
This is the model definition of GPT-NeoX.
---
# GPT-NeoX Model
Here is the code for the layers of the GPT-NeoX model and the code to load
the 20B checkpoint.
The `load_state` method in each layer loads that layer's checkpoint.
The checkpoint loading helpers are on [`c... | python | MIT | 25e169843e93980faa1d0468ea4df42ca7463382 | 2026-01-04T14:38:23.238891Z | false |
labmlai/annotated_deep_learning_paper_implementations | https://github.com/labmlai/annotated_deep_learning_paper_implementations/blob/25e169843e93980faa1d0468ea4df42ca7463382/labml_nn/neox/checkpoint.py | labml_nn/neox/checkpoint.py | """
---
title: GPT-NeoX Checkpoints
summary: >
Code to download checkpoints and helpers to load them.
---
# GPT-NeoX Checkpoints
"""
from pathlib import Path
from typing import Dict, Union, Tuple, Optional
import torch
from torch import nn
from labml import monit, lab, logger
from labml.logger import Text, insp... | python | MIT | 25e169843e93980faa1d0468ea4df42ca7463382 | 2026-01-04T14:38:23.238891Z | false |
labmlai/annotated_deep_learning_paper_implementations | https://github.com/labmlai/annotated_deep_learning_paper_implementations/blob/25e169843e93980faa1d0468ea4df42ca7463382/labml_nn/neox/__init__.py | labml_nn/neox/__init__.py | """
---
title: GPT-NeoX
summary: >
Simple GPT-NeoX implementation
---
# GPT-NeoX
This is a simple implementation of [Eleuther GPT-NeoX](https://arxiv.org/abs/2204.06745) for inference and fine-tuning.
* [Model definition](model.html)
* [Tokenizer](tokenizer.html)
* [Checkpoint downloading and loading helpers](c... | python | MIT | 25e169843e93980faa1d0468ea4df42ca7463382 | 2026-01-04T14:38:23.238891Z | false |
labmlai/annotated_deep_learning_paper_implementations | https://github.com/labmlai/annotated_deep_learning_paper_implementations/blob/25e169843e93980faa1d0468ea4df42ca7463382/labml_nn/neox/tokenizer.py | labml_nn/neox/tokenizer.py | """
---
title: GPT-NeoX Tokenizer
summary: >
Loads the GPT-NeoX tokenizer
---
# GPT-NeoX Tokenizer
This initializes a Hugging Face tokenizer from the downloaded vocabulary.
"""
from tokenizers import Tokenizer
from labml import lab, monit
@monit.func('Load NeoX Tokenizer')
def get_tokenizer() -> Tokenizer:
... | python | MIT | 25e169843e93980faa1d0468ea4df42ca7463382 | 2026-01-04T14:38:23.238891Z | false |
labmlai/annotated_deep_learning_paper_implementations | https://github.com/labmlai/annotated_deep_learning_paper_implementations/blob/25e169843e93980faa1d0468ea4df42ca7463382/labml_nn/neox/samples/finetune.py | labml_nn/neox/samples/finetune.py | """
---
title: Fine Tune GPT-NeoX
summary: >
Fine tune GPT-NeoX biases with Fairscale pipeline parallel module
---
# Fine Tune GPT-NeoX
This shows how to fine tune GPT-NeoX with pipeline parallelism.
"""
import fairscale
import torch
import torch.nn as nn
import torch.utils.data
import ty... | python | MIT | 25e169843e93980faa1d0468ea4df42ca7463382 | 2026-01-04T14:38:23.238891Z | false |
labmlai/annotated_deep_learning_paper_implementations | https://github.com/labmlai/annotated_deep_learning_paper_implementations/blob/25e169843e93980faa1d0468ea4df42ca7463382/labml_nn/neox/samples/generate.py | labml_nn/neox/samples/generate.py | """
---
title: Generate Text with GPT-NeoX
summary: >
Generate Text with GPT-NeoX
---
# Generate Text with GPT-NeoX
This shows how to generate text from GPT-NeoX with a single GPU.
This needs a GPU with more than 45GB memory.
"""
# Imports
from typing import List
import torch
from torch import nn
from labml... | python | MIT | 25e169843e93980faa1d0468ea4df42ca7463382 | 2026-01-04T14:38:23.238891Z | false |
labmlai/annotated_deep_learning_paper_implementations | https://github.com/labmlai/annotated_deep_learning_paper_implementations/blob/25e169843e93980faa1d0468ea4df42ca7463382/labml_nn/neox/samples/__init__.py | labml_nn/neox/samples/__init__.py | """
---
title: Samples
summary: >
Samples for inference and fine-tuning
---
# Samples
* [Generating text](generate.html)
* [Fine tuning the biases with pipeline-parallel training](finetune.html)
""" | python | MIT | 25e169843e93980faa1d0468ea4df42ca7463382 | 2026-01-04T14:38:23.238891Z | false |
labmlai/annotated_deep_learning_paper_implementations | https://github.com/labmlai/annotated_deep_learning_paper_implementations/blob/25e169843e93980faa1d0468ea4df42ca7463382/labml_nn/neox/samples/llm_int8.py | labml_nn/neox/samples/llm_int8.py | """
---
title: Generate Text with GPT-NeoX using LLM.int8() quantization
summary: >
Generate Text with GPT-NeoX using LLM.int8() quantization
---
# Generate Text with GPT-NeoX using LLM.int8() quantization
This shows how to generate text from GPT-NeoX using [LLM.int8() quantization](../utils/llm_int8.html).
Th... | python | MIT | 25e169843e93980faa1d0468ea4df42ca7463382 | 2026-01-04T14:38:23.238891Z | false |
labmlai/annotated_deep_learning_paper_implementations | https://github.com/labmlai/annotated_deep_learning_paper_implementations/blob/25e169843e93980faa1d0468ea4df42ca7463382/labml_nn/neox/utils/text_dataset.py | labml_nn/neox/utils/text_dataset.py | """
---
title: Text Dataset for GPT-NeoX
summary: >
Loads text datasets to fine-tune GPT-NeoX
---
# Text Dataset for GPT-NeoX
"""
from pathlib import PurePath, Path
from typing import Optional, List
import torch
import torch.utils.data
from labml import lab
from labml import monit
from labml.logger import inspect... | python | MIT | 25e169843e93980faa1d0468ea4df42ca7463382 | 2026-01-04T14:38:23.238891Z | false |
labmlai/annotated_deep_learning_paper_implementations | https://github.com/labmlai/annotated_deep_learning_paper_implementations/blob/25e169843e93980faa1d0468ea4df42ca7463382/labml_nn/neox/utils/finetune.py | labml_nn/neox/utils/finetune.py | from typing import List, Dict
import torch
from torch import nn
from labml_nn.neox.model import TransformerLayer, NeoXModule
class FineTuner:
def __init__(self, layers: List[NeoXModule]):
self.layers = layers
def get_trainable_params(self) -> Dict[str, nn.Parameter]:
params = {}
for... | python | MIT | 25e169843e93980faa1d0468ea4df42ca7463382 | 2026-01-04T14:38:23.238891Z | false |
labmlai/annotated_deep_learning_paper_implementations | https://github.com/labmlai/annotated_deep_learning_paper_implementations/blob/25e169843e93980faa1d0468ea4df42ca7463382/labml_nn/neox/utils/trainer.py | labml_nn/neox/utils/trainer.py | from typing import Optional, Set, List
import torch.nn as nn
import torch.optim
import torch.utils.data
from torch.cuda import amp
from torch.cuda.amp import GradScaler
from labml import monit, tracker
from labml.configs import BaseConfigs, option
from labml_nn.neox.utils.finetune import FineTuner
def get_trainable... | python | MIT | 25e169843e93980faa1d0468ea4df42ca7463382 | 2026-01-04T14:38:23.238891Z | false |
labmlai/annotated_deep_learning_paper_implementations | https://github.com/labmlai/annotated_deep_learning_paper_implementations/blob/25e169843e93980faa1d0468ea4df42ca7463382/labml_nn/neox/utils/__init__.py | labml_nn/neox/utils/__init__.py | """
---
title: Utilities and Helpers
summary: >
Utilities and helper functions
---
# Utilities and Helpers
* [Cache for intermediate activations (for faster inference)](cache.html)
* [Tools for finetuning](finetune.html)
* [Trainer](trainer.html)
* [Text dataset](text_dataset.html)
"""
import typing
from typing i... | python | MIT | 25e169843e93980faa1d0468ea4df42ca7463382 | 2026-01-04T14:38:23.238891Z | false |
labmlai/annotated_deep_learning_paper_implementations | https://github.com/labmlai/annotated_deep_learning_paper_implementations/blob/25e169843e93980faa1d0468ea4df42ca7463382/labml_nn/neox/utils/cache.py | labml_nn/neox/utils/cache.py | """
---
title: Cache for Intermediate Activations
summary: >
Cache for intermediate activations for faster inference.
---
# Cache for Intermediate Activations
During inference the model outputs tokens one by one.
We use this simple cache to store the keys and values of the attention layers,
so that we don't have to recompute... | python | MIT | 25e169843e93980faa1d0468ea4df42ca7463382 | 2026-01-04T14:38:23.238891Z | false |
labmlai/annotated_deep_learning_paper_implementations | https://github.com/labmlai/annotated_deep_learning_paper_implementations/blob/25e169843e93980faa1d0468ea4df42ca7463382/labml_nn/neox/utils/llm_int8.py | labml_nn/neox/utils/llm_int8.py | """
---
title: LLM.int8() on GPT-NeoX
summary: >
Transform nn.Linear layers to 8-bit integer layers.
---
# LLM.int8() on GPT-NeoX
This implements a utility function to transform an `nn.Linear` layer to an LLM.int8() linear layer.
[LLM.int8() paper](https://arxiv.org/abs/2208.07339)
shows you ca... | python | MIT | 25e169843e93980faa1d0468ea4df42ca7463382 | 2026-01-04T14:38:23.238891Z | false |
labmlai/annotated_deep_learning_paper_implementations | https://github.com/labmlai/annotated_deep_learning_paper_implementations/blob/25e169843e93980faa1d0468ea4df42ca7463382/labml_nn/neox/evaluation/half_precision.py | labml_nn/neox/evaluation/half_precision.py | """
---
title: Evaluate GPT-NeoX with half precision on test suite
summary: >
    Evaluate GPT-NeoX with half precision on test suite
---
# Evaluate GPT-NeoX with half precision on test suite
This code evaluates [GPT-NeoX](../index.html) with half precision on a suite of tasks.
"""
import argparse
... | python | MIT | 25e169843e93980faa1d0468ea4df42ca7463382 | 2026-01-04T14:38:23.238891Z | false |
labmlai/annotated_deep_learning_paper_implementations | https://github.com/labmlai/annotated_deep_learning_paper_implementations/blob/25e169843e93980faa1d0468ea4df42ca7463382/labml_nn/neox/evaluation/__init__.py | labml_nn/neox/evaluation/__init__.py | """
---
title: Evaluation
summary: >
Code to evaluate the model on NLP tasks through lm-evaluation-harness
---
# Evaluation
This is the code to test the model on
[EleutherAI/lm-evaluation-harness](https://github.com/EleutherAI/lm-evaluation-harness).
* [Evaluating half precision model on a single GPU](half_preci... | python | MIT | 25e169843e93980faa1d0468ea4df42ca7463382 | 2026-01-04T14:38:23.238891Z | false |
labmlai/annotated_deep_learning_paper_implementations | https://github.com/labmlai/annotated_deep_learning_paper_implementations/blob/25e169843e93980faa1d0468ea4df42ca7463382/labml_nn/neox/evaluation/llm_int8.py | labml_nn/neox/evaluation/llm_int8.py | """
---
title: Evaluate GPT-NeoX using LLM.int8() quantization on test suite
summary: >
Evaluate GPT-NeoX using LLM.int8() quantization on test suite
---
# Evaluate GPT-NeoX using LLM.int8() quantization on test suite
This code evaluates [GPT-NeoX](../index.html) using [LLM.int8() quantization](../utils/llm_int8... | python | MIT | 25e169843e93980faa1d0468ea4df42ca7463382 | 2026-01-04T14:38:23.238891Z | false |
labmlai/annotated_deep_learning_paper_implementations | https://github.com/labmlai/annotated_deep_learning_paper_implementations/blob/25e169843e93980faa1d0468ea4df42ca7463382/labml_nn/uncertainty/__init__.py | labml_nn/uncertainty/__init__.py | """
---
title: Neural Networks with Uncertainty Estimation
summary: >
A set of PyTorch implementations/tutorials related to uncertainty estimation
---
# Neural Networks with Uncertainty Estimation
These are neural network architectures that estimate the uncertainty of the predictions.
* [Evidential Deep Learning to... | python | MIT | 25e169843e93980faa1d0468ea4df42ca7463382 | 2026-01-04T14:38:23.238891Z | false |
labmlai/annotated_deep_learning_paper_implementations | https://github.com/labmlai/annotated_deep_learning_paper_implementations/blob/25e169843e93980faa1d0468ea4df42ca7463382/labml_nn/uncertainty/evidence/experiment.py | labml_nn/uncertainty/evidence/experiment.py | """
---
title: "Evidential Deep Learning to Quantify Classification Uncertainty Experiment"
summary: >
    This trains an EDL model on MNIST
---
# [Evidential Deep Learning to Quantify Classification Uncertainty](index.html) Experiment
This trains a model based on [Evidential Deep Learning to Quantify Classification Un... | python | MIT | 25e169843e93980faa1d0468ea4df42ca7463382 | 2026-01-04T14:38:23.238891Z | false |
labmlai/annotated_deep_learning_paper_implementations | https://github.com/labmlai/annotated_deep_learning_paper_implementations/blob/25e169843e93980faa1d0468ea4df42ca7463382/labml_nn/uncertainty/evidence/__init__.py | labml_nn/uncertainty/evidence/__init__.py | """
---
title: "Evidential Deep Learning to Quantify Classification Uncertainty"
summary: >
A PyTorch implementation/tutorial of the paper Evidential Deep Learning to Quantify Classification
Uncertainty.
---
# Evidential Deep Learning to Quantify Classification Uncertainty
This is a [PyTorch](https://pytorch.org) i... | python | MIT | 25e169843e93980faa1d0468ea4df42ca7463382 | 2026-01-04T14:38:23.238891Z | false |
labmlai/annotated_deep_learning_paper_implementations | https://github.com/labmlai/annotated_deep_learning_paper_implementations/blob/25e169843e93980faa1d0468ea4df42ca7463382/labml_nn/sampling/nucleus.py | labml_nn/sampling/nucleus.py | """
---
title: Nucleus Sampling
summary: A PyTorch implementation of nucleus sampling from language models.
---
# Nucleus Sampling
This is an implementation of nucleus sampling, introduced in the paper
[The Curious Case of Neural Text Degeneration](https://arxiv.org/abs/1904.09751).
The paper discusses the problems ... | python | MIT | 25e169843e93980faa1d0468ea4df42ca7463382 | 2026-01-04T14:38:23.238891Z | false |
labmlai/annotated_deep_learning_paper_implementations | https://github.com/labmlai/annotated_deep_learning_paper_implementations/blob/25e169843e93980faa1d0468ea4df42ca7463382/labml_nn/sampling/top_k.py | labml_nn/sampling/top_k.py | """
---
title: Top-k Sampling
summary: A PyTorch implementation of top-k sampling from language models.
---
# Top-k Sampling
Here we first pick the top-k tokens from the distribution of logits, and then
sample from them.
Here's an [experiment](experiment.html) that uses these sampling techniques.
"""
import torch
... | python | MIT | 25e169843e93980faa1d0468ea4df42ca7463382 | 2026-01-04T14:38:23.238891Z | false |
labmlai/annotated_deep_learning_paper_implementations | https://github.com/labmlai/annotated_deep_learning_paper_implementations/blob/25e169843e93980faa1d0468ea4df42ca7463382/labml_nn/sampling/experiment.py | labml_nn/sampling/experiment.py | """
---
title: Trying out Sampling Techniques for Language Models
summary: >
We try out different sampling techniques for language models on HuggingFace's GPT2 model.
---
# Trying out Sampling Techniques for Language Models
* [Greedy Sampling](greedy.html)
* [Temperature Sampling](temperature.html)
* [Top-k Sampling... | python | MIT | 25e169843e93980faa1d0468ea4df42ca7463382 | 2026-01-04T14:38:23.238891Z | false |
labmlai/annotated_deep_learning_paper_implementations | https://github.com/labmlai/annotated_deep_learning_paper_implementations/blob/25e169843e93980faa1d0468ea4df42ca7463382/labml_nn/sampling/temperature.py | labml_nn/sampling/temperature.py | """
---
title: Sampling from Language Models with Temperature
summary: A PyTorch implementation of sampling from language models with temperature.
---
# Sampling from Language Models with Temperature
Here we sample from the following probability distribution where $V$ is the vocabulary,
$u_{1:|V|}$ are the logits of ... | python | MIT | 25e169843e93980faa1d0468ea4df42ca7463382 | 2026-01-04T14:38:23.238891Z | false |
labmlai/annotated_deep_learning_paper_implementations | https://github.com/labmlai/annotated_deep_learning_paper_implementations/blob/25e169843e93980faa1d0468ea4df42ca7463382/labml_nn/sampling/greedy.py | labml_nn/sampling/greedy.py | """
---
title: Greedy Sampling
summary: A PyTorch implementation of greedy sampling from language models.
---
# Greedy Sampling
Here we sample the most likely token from the distribution of logits.
Here's an [experiment](experiment.html) that uses these sampling techniques.
"""
import torch
from labml_nn.sampling ... | python | MIT | 25e169843e93980faa1d0468ea4df42ca7463382 | 2026-01-04T14:38:23.238891Z | false |
labmlai/annotated_deep_learning_paper_implementations | https://github.com/labmlai/annotated_deep_learning_paper_implementations/blob/25e169843e93980faa1d0468ea4df42ca7463382/labml_nn/sampling/__init__.py | labml_nn/sampling/__init__.py | """
---
title: Sampling Techniques for Language Models
summary: >
A set of PyTorch implementations/tutorials of sampling techniques for language models.
---
# Sampling Techniques for Language Models
* [Greedy Sampling](greedy.html)
* [Temperature Sampling](temperature.html)
* [Top-k Sampling](top_k.html)
* [Nucleus ... | python | MIT | 25e169843e93980faa1d0468ea4df42ca7463382 | 2026-01-04T14:38:23.238891Z | false |
labmlai/annotated_deep_learning_paper_implementations | https://github.com/labmlai/annotated_deep_learning_paper_implementations/blob/25e169843e93980faa1d0468ea4df42ca7463382/labml_nn/sampling/experiment_tiny.py | labml_nn/sampling/experiment_tiny.py | from typing import Tuple
import torch
from labml import experiment, monit
from labml import logger
from labml.logger import Text
from labml_nn.helpers.datasets import TextDataset
from labml_nn.sampling import Sampler
from labml_nn.sampling.greedy import GreedySampler
from labml_nn.sampling.nucleus import NucleusSampl... | python | MIT | 25e169843e93980faa1d0468ea4df42ca7463382 | 2026-01-04T14:38:23.238891Z | false |
labmlai/annotated_deep_learning_paper_implementations | https://github.com/labmlai/annotated_deep_learning_paper_implementations/blob/25e169843e93980faa1d0468ea4df42ca7463382/labml_nn/lora/experiment.py | labml_nn/lora/experiment.py | """
---
title: Finetune GPT-2 with LoRA
summary: This is training code with notes for fine-tuning a pre-trained GPT-2 model with LoRA.
---
# Finetune [GPT-2](gpt2.html) with [LoRA](index.html)
Here's [the training code](experiment.html) for training a GPT-2 model with LoRA
on the Tiny Shakespeare dataset.
"""
import torch
import torch.nn as nn
from labml_nn.lora import Linear, Embedding
... | python | MIT | 25e169843e93980faa1d0468ea4df42ca7463382 | 2026-01-04T14:38:23.238891Z | false |
labmlai/annotated_deep_learning_paper_implementations | https://github.com/labmlai/annotated_deep_learning_paper_implementations/blob/25e169843e93980faa1d0468ea4df42ca7463382/labml_nn/lora/__init__.py | labml_nn/lora/__init__.py | """
---
title: Low-Rank Adaptation (LoRA)
summary: >
  Annotated implementation of LoRA from the paper
LoRA: Low-Rank Adaptation of Large Language Models
---
# Low-Rank Adaptation (LoRA)
This is an implementation of
[Low-Rank Adaptation (LoRA)](https://arxiv.org/abs/2106.09685)
in [PyTorch](https://pytorch.org).
Low-R... | python | MIT | 25e169843e93980faa1d0468ea4df42ca7463382 | 2026-01-04T14:38:23.238891Z | false |
Shubhamsaboo/awesome-llm-apps | https://github.com/Shubhamsaboo/awesome-llm-apps/blob/44efabeac69a8bd1f7cfbf8fddd8fde5ce32b335/ai_agent_framework_crash_course/openai_sdk_crash_course/2_structured_output_agent/support_ticket_agent.py | ai_agent_framework_crash_course/openai_sdk_crash_course/2_structured_output_agent/support_ticket_agent.py | """
OpenAI Agents SDK Tutorial 2: Structured Output Agent - Support Tickets
This module demonstrates how to create an agent that returns structured data
using Pydantic models for support ticket creation.
"""
import os
from typing import List, Optional
from enum import Enum
from dotenv import load_dotenv
from pydantic... | python | Apache-2.0 | 44efabeac69a8bd1f7cfbf8fddd8fde5ce32b335 | 2026-01-04T14:38:15.481187Z | false |
Shubhamsaboo/awesome-llm-apps | https://github.com/Shubhamsaboo/awesome-llm-apps/blob/44efabeac69a8bd1f7cfbf8fddd8fde5ce32b335/ai_agent_framework_crash_course/openai_sdk_crash_course/2_structured_output_agent/product_review_agent.py | ai_agent_framework_crash_course/openai_sdk_crash_course/2_structured_output_agent/product_review_agent.py | """
OpenAI Agents SDK Tutorial 2: Structured Output Agent - Product Reviews
This module demonstrates extracting structured data from product reviews
using complex nested Pydantic models.
"""
import os
from typing import List, Optional
from enum import Enum
from dotenv import load_dotenv
from pydantic import BaseModel... | python | Apache-2.0 | 44efabeac69a8bd1f7cfbf8fddd8fde5ce32b335 | 2026-01-04T14:38:15.481187Z | false |
Shubhamsaboo/awesome-llm-apps | https://github.com/Shubhamsaboo/awesome-llm-apps/blob/44efabeac69a8bd1f7cfbf8fddd8fde5ce32b335/ai_agent_framework_crash_course/openai_sdk_crash_course/2_structured_output_agent/2_2_product_review_agent/__init__.py | ai_agent_framework_crash_course/openai_sdk_crash_course/2_structured_output_agent/2_2_product_review_agent/__init__.py | # Product Review Agent Package
| python | Apache-2.0 | 44efabeac69a8bd1f7cfbf8fddd8fde5ce32b335 | 2026-01-04T14:38:15.481187Z | false |
Shubhamsaboo/awesome-llm-apps | https://github.com/Shubhamsaboo/awesome-llm-apps/blob/44efabeac69a8bd1f7cfbf8fddd8fde5ce32b335/ai_agent_framework_crash_course/openai_sdk_crash_course/2_structured_output_agent/2_2_product_review_agent/agent.py | ai_agent_framework_crash_course/openai_sdk_crash_course/2_structured_output_agent/2_2_product_review_agent/agent.py | from typing import List, Optional
from enum import Enum
from agents import Agent
from pydantic import BaseModel, Field
class Sentiment(str, Enum):
VERY_POSITIVE = "very_positive"
POSITIVE = "positive"
NEUTRAL = "neutral"
NEGATIVE = "negative"
VERY_NEGATIVE = "very_negative"
class ProductReview(Bas... | python | Apache-2.0 | 44efabeac69a8bd1f7cfbf8fddd8fde5ce32b335 | 2026-01-04T14:38:15.481187Z | false |
Shubhamsaboo/awesome-llm-apps | https://github.com/Shubhamsaboo/awesome-llm-apps/blob/44efabeac69a8bd1f7cfbf8fddd8fde5ce32b335/ai_agent_framework_crash_course/openai_sdk_crash_course/2_structured_output_agent/2_1_support_ticket_agent/__init__.py | ai_agent_framework_crash_course/openai_sdk_crash_course/2_structured_output_agent/2_1_support_ticket_agent/__init__.py | # Support Ticket Agent Package
| python | Apache-2.0 | 44efabeac69a8bd1f7cfbf8fddd8fde5ce32b335 | 2026-01-04T14:38:15.481187Z | false |
Shubhamsaboo/awesome-llm-apps | https://github.com/Shubhamsaboo/awesome-llm-apps/blob/44efabeac69a8bd1f7cfbf8fddd8fde5ce32b335/ai_agent_framework_crash_course/openai_sdk_crash_course/2_structured_output_agent/2_1_support_ticket_agent/agent.py | ai_agent_framework_crash_course/openai_sdk_crash_course/2_structured_output_agent/2_1_support_ticket_agent/agent.py | from typing import List, Optional
from enum import Enum
from agents import Agent
from pydantic import BaseModel, Field
class Priority(str, Enum):
LOW = "low"
MEDIUM = "medium"
HIGH = "high"
CRITICAL = "critical"
class SupportTicket(BaseModel):
title: str = Field(description="A concise summary of t... | python | Apache-2.0 | 44efabeac69a8bd1f7cfbf8fddd8fde5ce32b335 | 2026-01-04T14:38:15.481187Z | false |
Shubhamsaboo/awesome-llm-apps | https://github.com/Shubhamsaboo/awesome-llm-apps/blob/44efabeac69a8bd1f7cfbf8fddd8fde5ce32b335/ai_agent_framework_crash_course/openai_sdk_crash_course/5_context_management/agent.py | ai_agent_framework_crash_course/openai_sdk_crash_course/5_context_management/agent.py | from dataclasses import dataclass
from agents import Agent, RunContextWrapper, Runner, function_tool
@dataclass
class UserInfo:
"""Context object containing user information and session data"""
name: str
uid: int
preferences: dict = None
def __post_init__(self):
if self.preferences is ... | python | Apache-2.0 | 44efabeac69a8bd1f7cfbf8fddd8fde5ce32b335 | 2026-01-04T14:38:15.481187Z | false |
Shubhamsaboo/awesome-llm-apps | https://github.com/Shubhamsaboo/awesome-llm-apps/blob/44efabeac69a8bd1f7cfbf8fddd8fde5ce32b335/ai_agent_framework_crash_course/openai_sdk_crash_course/8_handoffs_delegation/advanced_handoffs.py | ai_agent_framework_crash_course/openai_sdk_crash_course/8_handoffs_delegation/advanced_handoffs.py | from agents import Agent, Runner, handoff, RunContextWrapper
from agents.extensions import handoff_filters
from agents.extensions.handoff_prompt import RECOMMENDED_PROMPT_PREFIX
from pydantic import BaseModel
import asyncio
# Define structured input for escalation handoff
class EscalationData(BaseModel):
reason: s... | python | Apache-2.0 | 44efabeac69a8bd1f7cfbf8fddd8fde5ce32b335 | 2026-01-04T14:38:15.481187Z | false |
Shubhamsaboo/awesome-llm-apps | https://github.com/Shubhamsaboo/awesome-llm-apps/blob/44efabeac69a8bd1f7cfbf8fddd8fde5ce32b335/ai_agent_framework_crash_course/openai_sdk_crash_course/8_handoffs_delegation/basic_handoffs.py | ai_agent_framework_crash_course/openai_sdk_crash_course/8_handoffs_delegation/basic_handoffs.py | from agents import Agent, Runner, handoff
from agents.extensions.handoff_prompt import RECOMMENDED_PROMPT_PREFIX
import asyncio
# Create specialized agents
billing_agent = Agent(
name="Billing Agent",
instructions=f"""{RECOMMENDED_PROMPT_PREFIX}
You are a billing specialist. Help customers with:
- Paym... | python | Apache-2.0 | 44efabeac69a8bd1f7cfbf8fddd8fde5ce32b335 | 2026-01-04T14:38:15.481187Z | false |
Shubhamsaboo/awesome-llm-apps | https://github.com/Shubhamsaboo/awesome-llm-apps/blob/44efabeac69a8bd1f7cfbf8fddd8fde5ce32b335/ai_agent_framework_crash_course/openai_sdk_crash_course/8_handoffs_delegation/8_1_basic_handoffs/__init__.py | ai_agent_framework_crash_course/openai_sdk_crash_course/8_handoffs_delegation/8_1_basic_handoffs/__init__.py | # Basic Handoffs module for OpenAI Agents SDK tutorial
| python | Apache-2.0 | 44efabeac69a8bd1f7cfbf8fddd8fde5ce32b335 | 2026-01-04T14:38:15.481187Z | false |
Shubhamsaboo/awesome-llm-apps | https://github.com/Shubhamsaboo/awesome-llm-apps/blob/44efabeac69a8bd1f7cfbf8fddd8fde5ce32b335/ai_agent_framework_crash_course/openai_sdk_crash_course/8_handoffs_delegation/8_1_basic_handoffs/agent.py | ai_agent_framework_crash_course/openai_sdk_crash_course/8_handoffs_delegation/8_1_basic_handoffs/agent.py |
from agents import Agent, Runner, handoff
from agents.extensions.handoff_prompt import RECOMMENDED_PROMPT_PREFIX
import asyncio

# Create specialized agents
billing_agent = Agent(
    name="Billing Agent",
    instructions=f"""{RECOMMENDED_PROMPT_PREFIX}
    You are a billing specialist. Help customers with:
    - Paym...
| python | Apache-2.0 | 44efabeac69a8bd1f7cfbf8fddd8fde5ce32b335 | 2026-01-04T14:38:15.481187Z | false |
Shubhamsaboo/awesome-llm-apps | https://github.com/Shubhamsaboo/awesome-llm-apps/blob/44efabeac69a8bd1f7cfbf8fddd8fde5ce32b335/ai_agent_framework_crash_course/openai_sdk_crash_course/8_handoffs_delegation/8_2_advanced_handoffs/__init__.py | ai_agent_framework_crash_course/openai_sdk_crash_course/8_handoffs_delegation/8_2_advanced_handoffs/__init__.py | # Advanced Handoffs module for OpenAI Agents SDK tutorial
| python | Apache-2.0 | 44efabeac69a8bd1f7cfbf8fddd8fde5ce32b335 | 2026-01-04T14:38:15.481187Z | false |
Shubhamsaboo/awesome-llm-apps | https://github.com/Shubhamsaboo/awesome-llm-apps/blob/44efabeac69a8bd1f7cfbf8fddd8fde5ce32b335/ai_agent_framework_crash_course/openai_sdk_crash_course/8_handoffs_delegation/8_2_advanced_handoffs/agent.py | ai_agent_framework_crash_course/openai_sdk_crash_course/8_handoffs_delegation/8_2_advanced_handoffs/agent.py |
from agents import Agent, Runner, handoff, RunContextWrapper
from agents.extensions import handoff_filters
from agents.extensions.handoff_prompt import RECOMMENDED_PROMPT_PREFIX
from pydantic import BaseModel
import asyncio

# Define structured input for escalation handoff
class EscalationData(BaseModel):
    reason: s...
| python | Apache-2.0 | 44efabeac69a8bd1f7cfbf8fddd8fde5ce32b335 | 2026-01-04T14:38:15.481187Z | false |
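The `advanced_handoffs.py` records are cut off right after `class EscalationData(BaseModel)`. In the SDK that model types the structured payload a handoff declared with `input_type=` passes to its `on_handoff` callback. A dependency-free sketch of that shape (a dataclass standing in for the pydantic model; the field names, `make_handoff` helper, and callback body are invented for illustration):

```python
from dataclasses import dataclass
from typing import Callable

@dataclass
class EscalationData:
    # Structured input the escalating agent must supply with the handoff.
    reason: str
    priority: str = "normal"

def make_handoff(target: str, on_handoff: Callable[[EscalationData], None]):
    # Returns a callable that fires the hook with the typed payload,
    # then "transfers" the conversation to the target agent.
    def _handoff(data: EscalationData) -> str:
        on_handoff(data)  # side effect: log, page on-call, update a ticket, ...
        return f"transferred to {target} (reason: {data.reason})"
    return _handoff

log: list[str] = []
escalate = make_handoff("Escalation Agent",
                        on_handoff=lambda d: log.append(f"[{d.priority}] {d.reason}"))

result = escalate(EscalationData(reason="refund dispute", priority="high"))
print(result)  # transferred to Escalation Agent (reason: refund dispute)
print(log)     # ['[high] refund dispute']
```

The design point the truncated file is after: typing the handoff input lets the receiving side observe *why* it was invoked, rather than re-deriving that from the raw conversation history.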