Schema (field name: type):
repo: string
github_id: int64
github_node_id: string
number: int64
html_url: string
api_url: string
title: string
body: string
state: string
state_reason: string
locked: bool
comments_count: int64
labels: list
assignees: list
created_at: string
updated_at: string
closed_at: string
author_association: string
milestone_title: string
snapshot_id: string
extracted_at: string
author_login: string
author_id: int64
author_node_id: string
author_type: string
author_site_admin: bool
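The rows below follow this schema one field at a time. As a minimal sketch of how records with these exact field names could be parsed and queried (assumptions not taken from the source: the snapshot is stored as newline-delimited JSON, the file name issues.jsonl and the IssueRecord dataclass are illustrative only):

```python
# Minimal sketch: load GitHub-issue records matching the schema above.
# Assumptions (hypothetical, not from the source): the snapshot is
# newline-delimited JSON ("issues.jsonl") and keys match the field names exactly.
import json
from dataclasses import dataclass
from typing import Optional

@dataclass
class IssueRecord:
    repo: str
    github_id: int
    github_node_id: str
    number: int
    html_url: str
    api_url: str
    title: str
    body: str
    state: str
    state_reason: Optional[str]
    locked: bool
    comments_count: int
    labels: list
    assignees: list
    created_at: str
    updated_at: str
    closed_at: Optional[str]
    author_association: str
    milestone_title: Optional[str]
    snapshot_id: str
    extracted_at: str
    author_login: str
    author_id: int
    author_node_id: str
    author_type: str
    author_site_admin: bool

def load_issues(path: str) -> list[IssueRecord]:
    """Read one JSON object per line and build typed records."""
    records = []
    with open(path, encoding="utf-8") as fh:
        for line in fh:
            if line.strip():
                records.append(IssueRecord(**json.loads(line)))
    return records

if __name__ == "__main__":
    # Example query: count open issues carrying the "bug" label.
    issues = load_issues("issues.jsonl")
    open_bugs = [i for i in issues if i.state == "open" and "bug" in i.labels]
    print(len(open_bugs))
```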

repo: huggingface/transformers
github_id: 4256151805
github_node_id: I_kwDOCUB6oc79r7j9
number: 45412
html_url: https://github.com/huggingface/transformers/issues/45412
api_url: https://api.github.com/repos/huggingface/transformers/issues/45412
title: RT-DETR models do not release memory when deleted / garbage-collected
body: ### System Info Transfomers: 5.5.3 PyTorch: 2.8.0+cu126 TorchVision: 0.23.0+cu126 System: Debian 13 (trixie) Python: 3.13.5 ### Who can help? @yonigozlan @molbap ### Information - [x] The official example scripts - [x] My own modified scripts ### Tasks - [x] An officially supported task in the `examples` folder ...
state: open
state_reason: null
locked: false
comments_count: 7
labels: ["bug"]
assignees: []
created_at: 2026-04-13T15:46:43Z
updated_at: 2026-04-16T11:56:08Z
closed_at: null
author_association: NONE
milestone_title: null
snapshot_id: 20260416T222012Z
extracted_at: 2026-04-16T22:20:12Z
author_login: dhdaines
author_id: 3325008
author_node_id: MDQ6VXNlcjMzMjUwMDg=
author_type: User
author_site_admin: false

repo: huggingface/transformers
github_id: 4258468813
github_node_id: I_kwDOCUB6oc790xPN
number: 45419
html_url: https://github.com/huggingface/transformers/issues/45419
api_url: https://api.github.com/repos/huggingface/transformers/issues/45419
title: Chat template inconsistencies in tool-calling support
body: Chat templates across model families handle tool-calling messages inconsistently. This creates fragility for any library (like TRL) that needs to construct tool-calling conversations programmatically, since there's no single "safe" way to build an assistant message with `tool_calls`. I ran a systematic check across al...
state: open
state_reason: null
locked: false
comments_count: 1
labels: []
assignees: []
created_at: 2026-04-13T23:27:06Z
updated_at: 2026-04-15T15:37:11Z
closed_at: null
author_association: MEMBER
milestone_title: null
snapshot_id: 20260415T224019Z
extracted_at: 2026-04-15T22:40:19Z
author_login: qgallouedec
author_id: 45557362
author_node_id: MDQ6VXNlcjQ1NTU3MzYy
author_type: User
author_site_admin: false

repo: huggingface/transformers
github_id: 4262515718
github_node_id: I_kwDOCUB6oc7-ENQG
number: 45431
html_url: https://github.com/huggingface/transformers/issues/45431
api_url: https://api.github.com/repos/huggingface/transformers/issues/45431
title: Wrong checkpoint path in Dinov2 model_docs
body: Wrong checkpoint path in Dinov2 model_docs. The current checkpoint "google/dinov2-base-patch16-224" does not exist. The correct one should be "facebook/dinov-base". This issue is fixed in PR #45430 ### Who can help? @yonigozlan @molbap @stevhliu ### Notes This is a minor issue, but it might help new users. Than...
state: closed
state_reason: completed
locked: false
comments_count: 0
labels: []
assignees: []
created_at: 2026-04-14T13:53:21Z
updated_at: 2026-04-15T09:02:16Z
closed_at: 2026-04-15T09:02:16Z
author_association: CONTRIBUTOR
milestone_title: null
snapshot_id: 20260415T224019Z
extracted_at: 2026-04-15T22:40:19Z
author_login: ambroiseodt
author_id: 64415312
author_node_id: MDQ6VXNlcjY0NDE1MzEy
author_type: User
author_site_admin: false

repo: huggingface/transformers
github_id: 4263833996
github_node_id: I_kwDOCUB6oc7-JPGM
number: 45440
html_url: https://github.com/huggingface/transformers/issues/45440
api_url: https://api.github.com/repos/huggingface/transformers/issues/45440
title: Native `DeepseekV3MoE` diverges from the remote DeepSeekV3 implementation
body: ### System Info Hello, the `DeepseekV3MoE` class in transformers (native) differs from the official remote DeepSeekV3 implementation (which was updated for a bug but not in `transformers`, hence the difference). <details> <summary> See trf DeepSeekV3 MoE code </summary> https://github.com/huggingface/transformers/bl...
state: open
state_reason: null
locked: false
comments_count: 0
labels: ["bug"]
assignees: []
created_at: 2026-04-14T17:50:49Z
updated_at: 2026-04-14T18:14:43Z
closed_at: null
author_association: CONTRIBUTOR
milestone_title: null
snapshot_id: 20260414T200457Z
extracted_at: 2026-04-14T20:04:57Z
author_login: casinca
author_id: 47400729
author_node_id: MDQ6VXNlcjQ3NDAwNzI5
author_type: User
author_site_admin: false

repo: huggingface/transformers
github_id: 4266475063
github_node_id: I_kwDOCUB6oc7-TT43
number: 45446
html_url: https://github.com/huggingface/transformers/issues/45446
api_url: https://api.github.com/repos/huggingface/transformers/issues/45446
title: Incorrect PyTorch version check for AuxRequest import in flex_attention
body: ### System Info In src/transformers/integrations/flex_attention.py, the code currently checks for PyTorch version >= 2.9.0 to import AuxRequest from torch.nn.attention.flex_attention. However, AuxRequest was actually introduced in PyTorch 2.9.1. According to the official PyTorch documentation, AuxRequest is available ...
state: closed
state_reason: completed
locked: false
comments_count: 1
labels: ["bug"]
assignees: []
created_at: 2026-04-15T05:26:22Z
updated_at: 2026-04-15T11:36:37Z
closed_at: 2026-04-15T11:35:34Z
author_association: NONE
milestone_title: null
snapshot_id: 20260415T224019Z
extracted_at: 2026-04-15T22:40:19Z
author_login: ZSLsherly
author_id: 142322697
author_node_id: U_kgDOCHusCQ
author_type: User
author_site_admin: false

repo: huggingface/transformers
github_id: 4266677435
github_node_id: I_kwDOCUB6oc7-UFS7
number: 45447
html_url: https://github.com/huggingface/transformers/issues/45447
api_url: https://api.github.com/repos/huggingface/transformers/issues/45447
title: granitemoehybrid: HybridMambaAttentionDynamicCache missing from modeling_granitemoehybrid — breaks ibm-granite/granite-4.0-3b-vision remote code
body: ## Summary The `ibm-granite/granite-4.0-3b-vision` model's remote `modeling.py` imports `HybridMambaAttentionDynamicCache` from `transformers.models.granitemoehybrid.modeling_granitemoehybrid`. This class does not exist in transformers 5.5.4 (latest) or on the current `main` branch, causing an `ImportError` whenever a...
state: open
state_reason: null
locked: false
comments_count: 3
labels: []
assignees: []
created_at: 2026-04-15T06:13:12Z
updated_at: 2026-04-15T12:17:47Z
closed_at: null
author_association: NONE
milestone_title: null
snapshot_id: 20260415T224019Z
extracted_at: 2026-04-15T22:40:19Z
author_login: Steve-Allison
author_id: 3996420
author_node_id: MDQ6VXNlcjM5OTY0MjA=
author_type: User
author_site_admin: false

repo: huggingface/transformers
github_id: 4268948257
github_node_id: I_kwDOCUB6oc7-cvsh
number: 45458
html_url: https://github.com/huggingface/transformers/issues/45458
api_url: https://api.github.com/repos/huggingface/transformers/issues/45458
title: Add typing support incrementally (meta issue)
body: We’re progressively adding typing support to the codebase using `ty`. This issue tracks the overall progress as we extend coverage directory by directory. # Current status The tooling is already in place. Type checking is enabled for a subset of directories You can run it locally with: ``` make typing ``` # How to...
state: open
state_reason: null
locked: false
comments_count: 0
labels: []
assignees: ["tarekziade"]
created_at: 2026-04-15T12:36:24Z
updated_at: 2026-04-15T12:42:42Z
closed_at: null
author_association: MEMBER
milestone_title: null
snapshot_id: 20260415T224019Z
extracted_at: 2026-04-15T22:40:19Z
author_login: tarekziade
author_id: 250019
author_node_id: MDQ6VXNlcjI1MDAxOQ==
author_type: User
author_site_admin: false

repo: huggingface/transformers
github_id: 4269023860
github_node_id: I_kwDOCUB6oc7-dCJ0
number: 45459
html_url: https://github.com/huggingface/transformers/issues/45459
api_url: https://api.github.com/repos/huggingface/transformers/issues/45459
title: `except import_protobuf_decode_error()` hides real tokenizer errors when protobuf isn't installed
body: ### System Info transformers 5.5.4 (latest release) and 5.6.0.dev0 (main). `PreTrainedTokenizerBase._from_pretrained` has `except import_protobuf_decode_error():` at `src/transformers/tokenization_utils_base.py:1919` (line 1933 on main). The helper raises `ImportError` when protobuf isn't installed. The except-class ...
state: open
state_reason: null
locked: false
comments_count: 4
labels: ["bug"]
assignees: []
created_at: 2026-04-15T12:48:42Z
updated_at: 2026-04-16T14:32:09Z
closed_at: null
author_association: CONTRIBUTOR
milestone_title: null
snapshot_id: 20260416T222012Z
extracted_at: 2026-04-16T22:20:12Z
author_login: jw9603
author_id: 70795645
author_node_id: MDQ6VXNlcjcwNzk1NjQ1
author_type: User
author_site_admin: false

repo: huggingface/transformers
github_id: 4270313753
github_node_id: I_kwDOCUB6oc7-h9EZ
number: 45464
html_url: https://github.com/huggingface/transformers/issues/45464
api_url: https://api.github.com/repos/huggingface/transformers/issues/45464
title: chat/completions API fail on Qwen3.5-0.8B for streaming inference
body: ### System Info - `transformers` version: 5.5.0 - 5.5.4 - Platform: macOS-26.4.1-arm64-arm-64bit-Mach-O - Python version: 3.14.3 - Huggingface_hub version: 1.10.2 - Safetensors version: 0.7.0 - Accelerate version: 1.13.0 - Accelerate config: not found - DeepSpeed version: not installed - PyTorch version (accelerat...
state: open
state_reason: null
locked: false
comments_count: 2
labels: ["bug"]
assignees: []
created_at: 2026-04-15T16:36:31Z
updated_at: 2026-04-16T10:49:09Z
closed_at: null
author_association: NONE
milestone_title: null
snapshot_id: 20260416T222012Z
extracted_at: 2026-04-16T22:20:12Z
author_login: zhangwei217245
author_id: 346451
author_node_id: MDQ6VXNlcjM0NjQ1MQ==
author_type: User
author_site_admin: false

repo: huggingface/transformers
github_id: 4273783730
github_node_id: I_kwDOCUB6oc7-vMOy
number: 45468
html_url: https://github.com/huggingface/transformers/issues/45468
api_url: https://api.github.com/repos/huggingface/transformers/issues/45468
title: [BUG] Gemma-4 Gemma4AudioRelPositionalEncoding
body: ### System Info N/A. ### Who can help? The hard coded numbers **12** and **-1** seem to be related to `attention_context_left` and `attention_context_right`. https://github.com/huggingface/transformers/blob/8426e7e63d49d9c3b5f0c09d43e792a59c75c62c/src/transformers/models/gemma4/modular_gemma4.py#L160 @eustlb @ebez...
state: open
state_reason: null
locked: false
comments_count: 1
labels: ["bug"]
assignees: []
created_at: 2026-04-16T06:52:43Z
updated_at: 2026-04-16T09:30:36Z
closed_at: null
author_association: CONTRIBUTOR
milestone_title: null
snapshot_id: 20260416T222012Z
extracted_at: 2026-04-16T22:20:12Z
author_login: foldl
author_id: 4046440
author_node_id: MDQ6VXNlcjQwNDY0NDA=
author_type: User
author_site_admin: false

repo: huggingface/transformers
github_id: 4276519756
github_node_id: I_kwDOCUB6oc7-5oNM
number: 45478
html_url: https://github.com/huggingface/transformers/issues/45478
api_url: https://api.github.com/repos/huggingface/transformers/issues/45478
title: [BUG] transformers>=5.4.0, Qwen3.5 Moe from_pretrained error
body: ### System Info https://github.com/huggingface/transformers/issues/45310 This issue has not been fixed in the main branch. ``` import os os.environ['CUDA_VISIBLE_DEVICS'] = '0' from transformers import Qwen3_5ForConditionalGeneration, AutoTokenizer model = Qwen3_5ForConditionalGeneration.from_pretrained('Qwen/Qwe...
state: open
state_reason: null
locked: false
comments_count: 3
labels: ["Should Fix", "bug"]
assignees: []
created_at: 2026-04-16T14:48:15Z
updated_at: 2026-04-18T02:59:15Z
closed_at: null
author_association: CONTRIBUTOR
milestone_title: null
snapshot_id: 20260418T030535Z
extracted_at: 2026-04-18T03:05:35Z
author_login: Jintao-Huang
author_id: 45290347
author_node_id: MDQ6VXNlcjQ1MjkwMzQ3
author_type: User
author_site_admin: false

repo: huggingface/transformers
github_id: 4276582143
github_node_id: I_kwDOCUB6oc7-53b_
number: 45479
html_url: https://github.com/huggingface/transformers/issues/45479
api_url: https://api.github.com/repos/huggingface/transformers/issues/45479
title: `problem_type="single_label_classification"` with `num_labels=1` leads to degenerate zero loss across multiple sequence-classification models
body: ### System Info Hi, I found what looks like a library-wide issue in `transformers` affecting multiple `ForSequenceClassification` models, not just ModernBERT. If a model is initialized with: ```python num_labels=1 problem_type="single_label_classification" ``` the forward pass uses `CrossEntropyLoss()` with only one ...
state: open
state_reason: null
locked: false
comments_count: 3
labels: ["bug"]
assignees: []
created_at: 2026-04-16T14:58:54Z
updated_at: 2026-04-17T11:48:17Z
closed_at: null
author_association: NONE
milestone_title: null
snapshot_id: 20260417T180542Z
extracted_at: 2026-04-17T18:05:42Z
author_login: BohdanBabii
author_id: 73220903
author_node_id: MDQ6VXNlcjczMjIwOTAz
author_type: User
author_site_admin: false

repo: huggingface/transformers
github_id: 4276916345
github_node_id: I_kwDOCUB6oc7-7JB5
number: 45482
html_url: https://github.com/huggingface/transformers/issues/45482
api_url: https://api.github.com/repos/huggingface/transformers/issues/45482
title: Gemma4 26B-A4B: cross-device errors with CPU offload (RoPE, inputs, layer_scalar, SDPA mask, mm_token_type_ids)
body: # Bug: Gemma4 cross-device tensor errors with accelerate CPU offload ## Environment - transformers latest (Gemma4 support, `modeling_gemma4.py`) - Gemma4 26B-A4B-it (MoE, 4B active params) - `accelerate` device_map with CPU offload (layers overflow to RAM) - BnB INT8 + PEFT LoRA + Gradient Checkpointing - RTX 4090 (2...
state: open
state_reason: null
locked: false
comments_count: 2
labels: []
assignees: []
created_at: 2026-04-16T15:57:28Z
updated_at: 2026-04-17T17:12:08Z
closed_at: null
author_association: NONE
milestone_title: null
snapshot_id: 20260417T180542Z
extracted_at: 2026-04-17T18:05:42Z
author_login: sirfyyn
author_id: 31549942
author_node_id: MDQ6VXNlcjMxNTQ5OTQy
author_type: User
author_site_admin: false

repo: huggingface/transformers
github_id: 647983215
github_node_id: MDU6SXNzdWU2NDc5ODMyMTU=
number: 5391
html_url: https://github.com/huggingface/transformers/issues/5391
api_url: https://api.github.com/repos/huggingface/transformers/issues/5391
title: Training a GPT-2 from scratch in Greek-text, results in a low perplexity score of 7 after 15 epochs. Is it normal that score?
body: # ❓ Questions & Help <!-- The GitHub issue tracker is primarly intended for bugs, feature requests, new models and benchmarks, and migration questions. For all other questions, we direct you to Stack Overflow (SO) where a whole community of PyTorch and Tensorflow enthusiast can help you out. Make s...
state: closed
state_reason: completed
locked: false
comments_count: 3
labels: ["wontfix"]
assignees: []
created_at: 2020-06-30T08:37:47Z
updated_at: 2026-02-09T16:33:04Z
closed_at: 2020-09-13T17:12:37Z
author_association: NONE
milestone_title: null
snapshot_id: 20260325T173244Z
extracted_at: 2026-03-25T17:32:44Z
author_login: Nkonstan
author_id: 35643708
author_node_id: MDQ6VXNlcjM1NjQzNzA4
author_type: User
author_site_admin: false

repo: huggingface/transformers
github_id: 665862946
github_node_id: MDU6SXNzdWU2NjU4NjI5NDY=
number: 6045
html_url: https://github.com/huggingface/transformers/issues/6045
api_url: https://api.github.com/repos/huggingface/transformers/issues/6045
title: Test BART's memory consumption
body: - this can run on GPU only and be marked `@slow` - check how much memory bart is using at `__init__` - assert that it doesn't use more than 110% of that. - check how much memory bart uses on a single forward pass. (optionally test this in fp16). - assert that it doesn't use more than 110% of that. - check how much...
state: closed
state_reason: completed
locked: false
comments_count: 11
labels: ["Help wanted", "Tests", "Benchmarks", "WIP"]
assignees: ["stas00", "patrickvonplaten"]
created_at: 2020-07-26T21:24:39Z
updated_at: 2026-02-10T13:24:28Z
closed_at: 2026-02-10T13:13:03Z
author_association: CONTRIBUTOR
milestone_title: null
snapshot_id: 20260325T173244Z
extracted_at: 2026-03-25T17:32:44Z
author_login: sshleifer
author_id: 6045025
author_node_id: MDQ6VXNlcjYwNDUwMjU=
author_type: User
author_site_admin: false

repo: huggingface/transformers
github_id: 718927476
github_node_id: MDU6SXNzdWU3MTg5Mjc0NzY=
number: 7715
html_url: https://github.com/huggingface/transformers/issues/7715
api_url: https://api.github.com/repos/huggingface/transformers/issues/7715
title: examples/rag: test coverage, tiny model
body: Disclaimer: I don't know this code very well, this may be much harder than it seems. Blocking PR: #7713 [`examples/rag/finetune.py`, `examples/rag/finetune.sh`, `eval_rag.py`] do not seem to be tested at all. It would be good to have a `test_finetune.py` like `examples/seq2seq` that tested these. cc @stas00 ...
state: closed
state_reason: completed
locked: false
comments_count: 6
labels: ["Help wanted", "Tests", "rag", "Feature request"]
assignees: []
created_at: 2020-10-11T21:09:58Z
updated_at: 2026-02-10T13:24:36Z
closed_at: 2026-02-10T13:07:03Z
author_association: CONTRIBUTOR
milestone_title: null
snapshot_id: 20260325T173244Z
extracted_at: 2026-03-25T17:32:44Z
author_login: sshleifer
author_id: 6045025
author_node_id: MDQ6VXNlcjYwNDUwMjU=
author_type: User
author_site_admin: false

repo: huggingface/transformers
github_id: 2501014784
github_node_id: I_kwDOCUB6oc6VEnUA
number: 33260
html_url: https://github.com/huggingface/transformers/issues/33260
api_url: https://api.github.com/repos/huggingface/transformers/issues/33260
title: Community contribution: Adding GGUF support for more architectures
body: ### Feature request Recently, we have added the ability to load `gguf` files within [transformers](https://huggingface.co/docs/hub/en/gguf). <img src="https://github.com/user-attachments/assets/61df6455-6016-449e-a37f-9dfc7f918902" width="600"> The goal was to offer the possibility to users to further train/fine-tu...
state: open
state_reason: null
locked: false
comments_count: 46
labels: ["Good Second Issue", "Feature request"]
assignees: []
created_at: 2024-09-02T13:41:47Z
updated_at: 2026-04-17T00:09:50Z
closed_at: null
author_association: MEMBER
milestone_title: null
snapshot_id: 20260417T180542Z
extracted_at: 2026-04-17T18:05:42Z
author_login: SunMarc
author_id: 57196510
author_node_id: MDQ6VXNlcjU3MTk2NTEw
author_type: User
author_site_admin: false

repo: huggingface/transformers
github_id: 4281338037
github_node_id: I_kwDOCUB6oc7_MAi1
number: 45488
html_url: https://github.com/huggingface/transformers/issues/45488
api_url: https://api.github.com/repos/huggingface/transformers/issues/45488
title: LlamaTokenizer in v5 overrides tokenizer.json's ByteLevel pre-tokenizer with Metaspace, silently breaks DeepSeek V3/R1 family
body: ### System info - `transformers`: 5.3.0 - `tokenizers`: 0.22.2 - Python: 3.12 / Linux ### Who can help? @ArthurZucker ### Reproduction Tokenizer-only, ~7 MB download: ```python from transformers import AutoTokenizer tok = AutoTokenizer.from_pretrained("deepseek-ai/DeepSeek-V3") print(repr(tok.decode(tok.encode("h...
state: open
state_reason: null
locked: false
comments_count: 1
labels: []
assignees: []
created_at: 2026-04-17T08:50:48Z
updated_at: 2026-04-17T11:43:10Z
closed_at: null
author_association: NONE
milestone_title: null
snapshot_id: 20260417T180542Z
extracted_at: 2026-04-17T18:05:42Z
author_login: dc3671
author_id: 5948851
author_node_id: MDQ6VXNlcjU5NDg4NTE=
author_type: User
author_site_admin: false

repo: huggingface/transformers
github_id: 4282845832
github_node_id: I_kwDOCUB6oc7_RwqI
number: 45491
html_url: https://github.com/huggingface/transformers/issues/45491
api_url: https://api.github.com/repos/huggingface/transformers/issues/45491
title: [Gemma3] NaN embeddings on GPU when batching sequences of mixed length (sliding window attention + all-padding windows)
body: ### System Info **System Info** - `transformers`: 4.45.1 - `sentence-transformers`: 5.1.2 - `tokenizers`: 0.20.0 - `safetensors`: 0.4.5 - `PyTorch`: ≥ 2.6.0 - Serving runtime: `pytorch/torchserve-kfs:0.12.0` (Python 3.9, Linux x86_64) - GPU: NVIDIA (CUDA, via KServe / TorchServe on Kubernetes) - CPU inference: **not ...
state: open
state_reason: null
locked: false
comments_count: 0
labels: ["bug"]
assignees: []
created_at: 2026-04-17T13:10:25Z
updated_at: 2026-04-17T13:10:25Z
closed_at: null
author_association: NONE
milestone_title: null
snapshot_id: 20260417T180542Z
extracted_at: 2026-04-17T18:05:42Z
author_login: RiccardoTOTI
author_id: 43544166
author_node_id: MDQ6VXNlcjQzNTQ0MTY2
author_type: User
author_site_admin: false

repo: huggingface/transformers
github_id: 4285307501
github_node_id: I_kwDOCUB6oc7_bJpt
number: 45496
html_url: https://github.com/huggingface/transformers/issues/45496
api_url: https://api.github.com/repos/huggingface/transformers/issues/45496
title: Add V-JEPA 2.1 inference support
body: ### Feature request Meta released [V-JEPA 2.1](https://github.com/facebookresearch/vjepa2) on 2026-03-16 with four pretrained video encoders at 384 resolution (ViT-B 80M, ViT-L 300M, ViT-g 1B, ViT-G 2B). The existing `vjepa2` model family in transformers supports V-JEPA 2.0 but not 2.1. V-JEPA 2.1 introduces several ...
state: open
state_reason: null
locked: false
comments_count: 0
labels: ["Feature request"]
assignees: []
created_at: 2026-04-17T20:59:54Z
updated_at: 2026-04-17T20:59:54Z
closed_at: null
author_association: NONE
milestone_title: null
snapshot_id: 20260417T210541Z
extracted_at: 2026-04-17T21:05:41Z
author_login: davevanveen
author_id: 25591765
author_node_id: MDQ6VXNlcjI1NTkxNzY1
author_type: User
author_site_admin: false

repo: huggingface/transformers
github_id: 4288654586
github_node_id: I_kwDOCUB6oc7_n6z6
number: 45507
html_url: https://github.com/huggingface/transformers/issues/45507
api_url: https://api.github.com/repos/huggingface/transformers/issues/45507
title: GraniteMoEHybrid Model Calls Invalid Method
body: ### System Info Linux: Ubuntu 24.04.4 LTS / 6.8.0-107-generic-64k / aarch64 Python: 3.12.12 Transformers: 5.5.4 Cuda: 12.9 ### Who can help? _No response_ ### Information - [ ] The official example scripts - [x] My own modified scripts ### Tasks - [ ] An officially supported task in the `examples` folder (such a...
state: open
state_reason: null
locked: false
comments_count: 0
labels: ["bug"]
assignees: []
created_at: 2026-04-18T17:07:36Z
updated_at: 2026-04-18T17:10:09Z
closed_at: null
author_association: NONE
milestone_title: null
snapshot_id: 20260418T190539Z
extracted_at: 2026-04-18T19:05:39Z
author_login: rnowling
author_id: 1114888
author_node_id: MDQ6VXNlcjExMTQ4ODg=
author_type: User
author_site_admin: false
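
Each row keeps both html_url and api_url, so a snapshot record can be cross-checked against the live issue. A minimal sketch, assuming network access and the `requests` package (neither is part of the snapshot itself; unauthenticated GitHub API calls may be rate-limited):

```python
# Minimal sketch: fetch the live issue behind one snapshot row and compare core fields.
import requests

api_url = "https://api.github.com/repos/huggingface/transformers/issues/45507"
resp = requests.get(api_url, headers={"Accept": "application/vnd.github+json"}, timeout=30)
resp.raise_for_status()
issue = resp.json()

# The live payload carries the same core fields captured above (number, state, title).
print(issue["number"], issue["state"], issue["title"])
```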