repo                  string
github_id             int64
github_node_id        string
number                int64
html_url              string
api_url               string
title                 string
body                  string
state                 string
state_reason          string
locked                bool
comments_count        int64
labels                list
assignees             list
created_at            string
updated_at            string
closed_at             string
author_association    string
milestone_title       string
snapshot_id           string
extracted_at          string
author_login          string
author_id             int64
author_node_id        string
author_type           string
author_site_admin     bool
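The 26 fields above describe one GitHub issue per record; every record that follows lists its 26 values in exactly this order, one value per line. A minimal sketch for loading such a snapshot into Python dicts, assuming the records are stored in a plain-text file (the name issues_dump.txt and the one-value-per-line layout are assumptions about this dump, not a documented format):

```python
# Minimal sketch, not a documented format: assumes a plain-text snapshot
# ("issues_dump.txt", a hypothetical name) holding one field value per line,
# 26 lines per record, in the schema order listed above.
import json

FIELDS = [
    "repo", "github_id", "github_node_id", "number", "html_url", "api_url",
    "title", "body", "state", "state_reason", "locked", "comments_count",
    "labels", "assignees", "created_at", "updated_at", "closed_at",
    "author_association", "milestone_title", "snapshot_id", "extracted_at",
    "author_login", "author_id", "author_node_id", "author_type",
    "author_site_admin",
]

INT_FIELDS = {"github_id", "number", "comments_count", "author_id"}  # int64
BOOL_FIELDS = {"locked", "author_site_admin"}                        # bool
LIST_FIELDS = {"labels", "assignees"}                                # list

def coerce(field, raw):
    """Map one raw cell to its schema type; the literal 'null' means missing."""
    if raw == "null":
        return None
    if field in INT_FIELDS:
        return int(raw.replace(",", ""))  # ids are printed with thousands separators
    if field in BOOL_FIELDS:
        return raw == "true"
    if field in LIST_FIELDS:
        return json.loads(raw)            # e.g. [ "bug" ] or []
    return raw

def read_records(path):
    with open(path, encoding="utf-8") as f:
        lines = [line.rstrip("\n") for line in f]
    # One dict per complete 26-line record; a trailing partial record is skipped.
    for start in range(0, len(lines) - len(FIELDS) + 1, len(FIELDS)):
        row = lines[start:start + len(FIELDS)]
        yield {field: coerce(field, value) for field, value in zip(FIELDS, row)}

for issue in read_records("issues_dump.txt"):
    print(issue["number"], issue["state"], issue["title"])
```

Numeric ids in this dump carry thousands separators and the labels/assignees cells are JSON arrays, which is why the coercion strips commas and calls json.loads.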
huggingface/transformers
3,518,780,612
I_kwDOCUB6oc7RvFTE
41,628
https://github.com/huggingface/transformers/issues/41628
https://api.github.com/repos/huggingface/transformers/issues/41628
Cannot import name 'AutoImageProcessor' from 'transformers'
### System Info Intel CPU Nvidia 3090 ubuntu 22.04 python 3.10.12 transformers=5.0.0.dev0 (installed from the official git repo) ### PS: It's also tested with transformers=4.57.1, which is installed using "pip install", the same error persisted while executing "from transformers import AutoImageProcessor, AutoModel"....
closed
completed
false
6
[ "bug" ]
[]
2025-10-15T16:29:20Z
2026-02-26T18:36:13Z
2025-10-16T12:37:07Z
NONE
null
20260325T173244Z
2026-03-25T17:32:44Z
Pittmann-XIE
103,981,664
U_kgDOBjKiYA
User
false
huggingface/transformers
3,522,707,706
I_kwDOCUB6oc7R-ED6
41,669
https://github.com/huggingface/transformers/issues/41669
https://api.github.com/repos/huggingface/transformers/issues/41669
Remove import * usage from models, adds 10 seconds and 38k files
### System Info any ### Who can help? _No response_ ### Information - [ ] The official example scripts - [ ] My own modified scripts ### Tasks - [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...) - [ ] My own task or dataset (give details below) ### Reproduction The import * is...
closed
completed
false
10
[ "bug" ]
[]
2025-10-16T16:56:39Z
2026-04-07T04:33:22Z
2025-11-24T08:02:53Z
CONTRIBUTOR
null
20260407T090028Z
2026-04-07T09:00:28Z
bschnurr
1,946,977
MDQ6VXNlcjE5NDY5Nzc=
User
false
huggingface/transformers
3,528,715,552
I_kwDOCUB6oc7SU-0g
41,720
https://github.com/huggingface/transformers/issues/41720
https://api.github.com/repos/huggingface/transformers/issues/41720
Qwen3 with auto device mapping fails due to cudaErrorAssert on A800
### System Info - `transformers` version: 4.57.1 - Platform: Linux-4.19.90-2107.6.0.0192.8.oe1.bclinux.x86_64-x86_64-with-glibc2.35 - Python version: 3.12.12 - Huggingface_hub version: 0.35.3 - Safetensors version: 0.6.2 - Accelerate version: 1.10.1 - Accelerate config: not found - DeepSpeed version: not installed ...
closed
completed
false
7
[ "bug" ]
[]
2025-10-18T11:50:43Z
2026-03-12T05:43:23Z
2026-01-05T08:03:26Z
NONE
null
20260325T173244Z
2026-03-25T17:32:44Z
guosyjlu
69,756,483
MDQ6VXNlcjY5NzU2NDgz
User
false
huggingface/transformers
3,532,707,392
I_kwDOCUB6oc7SkNZA
41,749
https://github.com/huggingface/transformers/issues/41749
https://api.github.com/repos/huggingface/transformers/issues/41749
`_get_num_multimodal_tokens` is not implemented for model `mllama`
vLLM 0.11’s Transformers-backend expects the HF processor to implement a method called `_get_num_multimodal_tokens` which is [not implemented for mllama](https://github.com/huggingface/transformers/blob/main/src/transformers/models/mllama/processing_mllama.py) in `transformers 4.57.1`. Because of this, `vllm serve met...
closed
completed
false
4
[ "bug" ]
[]
2025-10-20T14:38:22Z
2026-01-26T10:05:20Z
2025-10-21T09:58:49Z
NONE
null
20260325T173244Z
2026-03-25T17:32:44Z
mrtpk
8,076,245
MDQ6VXNlcjgwNzYyNDU=
User
false
huggingface/transformers
3,535,832,788
I_kwDOCUB6oc7SwIbU
41,762
https://github.com/huggingface/transformers/issues/41762
https://api.github.com/repos/huggingface/transformers/issues/41762
`IndexError: index 0 is out of bounds for dimension 0 with size 0` when loading Gemma3ForConditionalGeneration with DeepSpeed ZeRO-3
### System Info transformers=4.57.1 ### Who can help? _No response_ ### Information - [ ] The official example scripts - [ ] My own modified scripts ### Tasks - [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...) - [ ] My own task or dataset (give details below) ### Reproduction...
closed
completed
false
8
[ "bug" ]
[]
2025-10-21T09:58:58Z
2026-02-20T15:36:18Z
2025-10-22T15:10:46Z
NONE
null
20260325T173244Z
2026-03-25T17:32:44Z
Asunatan
105,210,894
U_kgDOBkVkDg
User
false
huggingface/transformers
3,548,058,215
I_kwDOCUB6oc7TexJn
41,842
https://github.com/huggingface/transformers/issues/41842
https://api.github.com/repos/huggingface/transformers/issues/41842
Incorrect usage of `num_items_in_batch`?
It seems that `num_items_in_batch` is computed for all items in the batch [here](https://github.com/huggingface/transformers/blob/9c20660138830ca362533551ca978c27b48283a1/src/transformers/trainer.py#L2430). However, when loss is computed in the `training_step`, it is computed for each input in the batch one by one. Do...
closed
completed
false
3
[]
[]
2025-10-24T07:36:00Z
2026-03-09T14:02:44Z
2025-12-01T08:02:48Z
NONE
null
20260325T173244Z
2026-03-25T17:32:44Z
gohar94
6,470,801
MDQ6VXNlcjY0NzA4MDE=
User
false
huggingface/transformers
3,570,611,821
I_kwDOCUB6oc7U0zZt
41,950
https://github.com/huggingface/transformers/issues/41950
https://api.github.com/repos/huggingface/transformers/issues/41950
video-classification pipeline looks for image processors
### System Info 4.57.1 ### Who can help? @zucchini-nlp I can take a stab at this sometime ### Information - [ ] The official example scripts - [ ] My own modified scripts ### Tasks - [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...) - [ ] My own task or dataset (give details bel...
open
null
false
6
[ "WIP", "bug" ]
[]
2025-10-30T12:45:06Z
2026-02-19T10:56:02Z
null
MEMBER
null
20260325T173244Z
2026-03-25T17:32:44Z
merveenoyan
53,175,384
MDQ6VXNlcjUzMTc1Mzg0
User
false
huggingface/transformers
3,590,608,152
I_kwDOCUB6oc7WBFUY
42,032
https://github.com/huggingface/transformers/issues/42032
https://api.github.com/repos/huggingface/transformers/issues/42032
ValueError: Unrecognized configuration class <class 'transformers.models.qwen3_omni_moe.configuration_qwen3_omni_moe.Qwen3OmniMoeConfig'> for this kind of AutoModel: AutoModel.
### System Info I started testing the Qwen3-Omni model when transformers version 4.56.0 was available, which had issues with the model. With the commits and bug fixes for transformers version 4.57.0 it was fixed, but that commit was only available on git. Since there is a transformers update on the ...
closed
completed
false
5
[ "bug" ]
[]
2025-11-05T11:39:39Z
2026-02-11T23:54:10Z
2025-12-27T08:03:07Z
NONE
null
20260325T173244Z
2026-03-25T17:32:44Z
Tortoise17
36,593,708
MDQ6VXNlcjM2NTkzNzA4
User
false
huggingface/transformers
3,604,732,641
I_kwDOCUB6oc7W29rh
42,111
https://github.com/huggingface/transformers/issues/42111
https://api.github.com/repos/huggingface/transformers/issues/42111
Add thinking-budget support (max_thinking_tokens) for reasoning-capable chat models
### Feature request A built-in way to cap how many tokens a reasoning model spends inside its ``<think> … </think>`` block. Today, we can only control the total response length via ``max_new_tokens``. No parameter limits the internal reasoning segment when ``enable_thinking=True``. ### Motivation - Reasoning models ...
open
null
false
1
[ "Feature request" ]
[]
2025-11-09T10:09:11Z
2026-02-14T05:37:15Z
null
NONE
null
20260325T173244Z
2026-03-25T17:32:44Z
AndresAlgaba
35,764,158
MDQ6VXNlcjM1NzY0MTU4
User
false
huggingface/transformers
3,607,099,901
I_kwDOCUB6oc7W__n9
42,116
https://github.com/huggingface/transformers/issues/42116
https://api.github.com/repos/huggingface/transformers/issues/42116
Integration of the SINQ quantization strategy
### Feature request Adding support for **SINQ** quantization for Hugging Face compatible models, enabling users to apply it directly through the configuration settings. The **SINQ** quantization method, recently introduced in the paper [SINQ: Sinkhorn-Normalized Quantization for Calibration-Free Low-Precision LLM Weig...
closed
completed
false
8
[ "Feature request" ]
[]
2025-11-10T09:44:32Z
2026-02-16T15:08:43Z
2026-02-16T15:08:43Z
CONTRIBUTOR
null
20260325T173244Z
2026-03-25T17:32:44Z
ChiaraBoretti
83,216,540
MDQ6VXNlcjgzMjE2NTQw
User
false
huggingface/transformers
3,619,868,194
I_kwDOCUB6oc7Xws4i
42,175
https://github.com/huggingface/transformers/issues/42175
https://api.github.com/repos/huggingface/transformers/issues/42175
Tensorflow not included in the backend when using pip install '.[torch]'
### System Info I install the program successfully when using `pip install -e .[torch]`. However, I encounter the below issue when using ``pip install '.[torch]'``: ``` (omni) pqyin@proj54:/data2/pqyin/transformers$ python Python 3.13.9 | packaged by Anaconda, Inc. | (main, Oct 21 2025, 19:16:10) [GCC 11.2.0] on linu...
closed
completed
false
2
[ "bug" ]
[]
2025-11-13T07:21:50Z
2026-02-13T22:47:40Z
2025-11-18T14:49:34Z
NONE
null
20260325T173244Z
2026-03-25T17:32:44Z
yinpeiqi
60,515,999
MDQ6VXNlcjYwNTE1OTk5
User
false
huggingface/transformers
3,623,280,797
I_kwDOCUB6oc7X9uCd
42,199
https://github.com/huggingface/transformers/issues/42199
https://api.github.com/repos/huggingface/transformers/issues/42199
Cardinality error is incorrect for models derived from DETR that do not have an explicit background class
## Issue For DETR variants, the cardinality errors that are reported during training are incorrect. This was reported in the DeformableDETR repository, and was acknowledged but not resolved: https://github.com/fundamentalvision/Deformable-DETR/issues/24 Since all the derived models no longer include an explicit bac...
closed
completed
false
10
[ "bug" ]
[]
2025-11-13T23:46:02Z
2026-02-09T17:30:44Z
2026-02-09T17:30:44Z
NONE
null
20260325T173244Z
2026-03-25T17:32:44Z
jveitchmichaelis
3,159,591
MDQ6VXNlcjMxNTk1OTE=
User
false
huggingface/transformers
3,623,324,953
I_kwDOCUB6oc7X940Z
42,200
https://github.com/huggingface/transformers/issues/42200
https://api.github.com/repos/huggingface/transformers/issues/42200
Request of rewriting implementation of prediction_step in trainer.py
### System Info Any system. Because it's a problem coming from source code. ### Who can help? @SunMarc ### Information - [ ] The official example scripts - [x] My own modified scripts ### Tasks - [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...) - [x] My own task or dataset (gi...
open
null
false
4
[ "Good Second Issue", "bug" ]
[]
2025-11-14T00:13:40Z
2026-02-24T22:09:56Z
null
CONTRIBUTOR
null
20260325T173244Z
2026-03-25T17:32:44Z
Yacklin
139,425,274
U_kgDOCE91-g
User
false
huggingface/transformers
3,624,126,333
I_kwDOCUB6oc7YA8d9
42,202
https://github.com/huggingface/transformers/issues/42202
https://api.github.com/repos/huggingface/transformers/issues/42202
Deformable DETR Finetuning breaks for any dataset
### System Info - GPU: V100 - torch2.6.0+cu126 - transformers 4.57.1 ### Who can help? Hi @yonigozlan @molbap @NielsRogge Thanks for the awesome work on vision models! I've been trying to finetune the Deformable DETR models (SenseTime/deformable-detr-with-box-refine-two-stage) for the past few days on a custom ...
closed
completed
false
6
[ "bug" ]
[]
2025-11-14T06:29:52Z
2026-02-08T16:56:36Z
2026-02-08T16:56:36Z
NONE
null
20260325T173244Z
2026-03-25T17:32:44Z
iamsashank09
26,921,144
MDQ6VXNlcjI2OTIxMTQ0
User
false
huggingface/transformers
3,628,809,478
I_kwDOCUB6oc7YSz0G
42,222
https://github.com/huggingface/transformers/issues/42222
https://api.github.com/repos/huggingface/transformers/issues/42222
All vitpose models were broken (transformers/models/vitpose_)
### System Info transformers/models/vitpose_backbone/modeling_vitpose_backbone.py", line 304, in forward raise ValueError(transformers/models/vitpose_backbone/modeling_vitpose_backbone.py", line 304, in forward raise ValueError( ValueError: dataset_index must be provided when using multiple experts (num_expert...
closed
completed
false
11
[ "bug" ]
[]
2025-11-15T14:56:04Z
2026-02-09T08:11:37Z
2026-02-09T08:11:37Z
NONE
null
20260325T173244Z
2026-03-25T17:32:44Z
lucasjinreal
21,303,438
MDQ6VXNlcjIxMzAzNDM4
User
false
huggingface/transformers
3,634,466,348
I_kwDOCUB6oc7YoY4s
42,249
https://github.com/huggingface/transformers/issues/42249
https://api.github.com/repos/huggingface/transformers/issues/42249
`parse_response` should drop EOS
When using `parse_response`, I noticed it includes the EOS token in the `content`. However, the EOS token should be excluded, as it adds an unwanted EOS before tool calls during subsequent formatting. ```python from transformers import AutoTokenizer tokenizer = AutoTokenizer.from_pretrained("Qwen/Qwen3-0.6B") # Why ...
closed
completed
false
7
[]
[]
2025-11-17T18:14:56Z
2026-02-15T08:04:47Z
2026-02-15T08:04:47Z
MEMBER
null
20260325T173244Z
2026-03-25T17:32:44Z
qgallouedec
45,557,362
MDQ6VXNlcjQ1NTU3MzYy
User
false
huggingface/transformers
3,660,616,450
I_kwDOCUB6oc7aMJMC
42,371
https://github.com/huggingface/transformers/issues/42371
https://api.github.com/repos/huggingface/transformers/issues/42371
Please use the new API settings to control TF32 behavior, ...
### System Info > UserWarning: Please use the new API settings to control TF32 behavior, such as torch.backends.cudnn.conv.fp32_precision = 'tf32' or torch.backends.cuda.matmul.fp32_precision = 'ieee'. Old settings, e.g, torch.backends.cuda.matmul.allow_tf32 = True, torch.backends.cudnn.allow_tf32 = True, allowTF32CuD...
closed
completed
false
18
[ "Good First Issue", "bug" ]
[]
2025-11-24T21:38:12Z
2026-02-05T06:12:27Z
2025-12-01T09:58:32Z
CONTRIBUTOR
null
20260325T173244Z
2026-03-25T17:32:44Z
wasertech
79,070,834
MDQ6VXNlcjc5MDcwODM0
User
false
huggingface/transformers
3,661,658,315
I_kwDOCUB6oc7aQHjL
42,375
https://github.com/huggingface/transformers/issues/42375
https://api.github.com/repos/huggingface/transformers/issues/42375
SAM3 single image inference with multiple text prompt
Hi I'm trying to run inference on a single image, aiming to get the bbox of objects from several different categories (e.g. "a person" and "a car"). the only example i found for prompting with multiple categories is in the "Batched Inference with Text Prompts" example, but then i need to unnecessarily duplicate my imag...
closed
completed
false
10
[]
[]
2025-11-25T06:20:09Z
2026-02-08T08:04:26Z
2026-02-08T08:04:26Z
NONE
null
20260325T173244Z
2026-03-25T17:32:44Z
iariav
28,890,865
MDQ6VXNlcjI4ODkwODY1
User
false
huggingface/transformers
3,663,973,751
I_kwDOCUB6oc7aY813
42,405
https://github.com/huggingface/transformers/issues/42405
https://api.github.com/repos/huggingface/transformers/issues/42405
Integrate FA4 (Flash Attention for Blackwell) into HF Transformers
### Feature request Transformers currently supports Flash Attention 2 and 3, but not Flash Attention 4. Users with compatible hardware and the latest flash-attn package cannot leverage FA4's improvements . Lets add that as well . ### Motivation Flash Attention 4 is now available in the flash-attn package, bringing ...
closed
completed
false
0
[ "Feature request" ]
[]
2025-11-25T17:33:50Z
2026-03-13T19:32:03Z
2026-03-13T19:32:03Z
CONTRIBUTOR
null
20260325T173244Z
2026-03-25T17:32:44Z
sambhavnoobcoder
94,298,612
U_kgDOBZ7h9A
User
false
huggingface/transformers
3,676,316,784
I_kwDOCUB6oc7bICRw
42,491
https://github.com/huggingface/transformers/issues/42491
https://api.github.com/repos/huggingface/transformers/issues/42491
The LoRA model trained with qwen3_moe on hf4.x cannot be used on the current main branch (hf5.x).
### System Info - `transformers` version: 5.0.0.dev0 - Platform: Linux-5.15.0-122-generic-x86_64-with-glibc2.31 - Python version: 3.10.15 - Huggingface_hub version: 1.1.5 - Safetensors version: 0.4.5 - Accelerate version: 1.9.0 - Accelerate config: not found - DeepSpeed version: 0.16.2 - PyTorch version (accelerato...
closed
completed
false
6
[ "bug", "PEFT" ]
[]
2025-11-29T03:57:15Z
2026-01-24T10:06:54Z
2026-01-24T10:06:54Z
NONE
null
20260325T173244Z
2026-03-25T17:32:44Z
linitra24
116,639,249
U_kgDOBvPGEQ
User
false
huggingface/transformers
3,679,872,670
I_kwDOCUB6oc7bVmae
42,503
https://github.com/huggingface/transformers/issues/42503
https://api.github.com/repos/huggingface/transformers/issues/42503
Add ModernVBERT models
### Model description Add the models from [ModernVBERT: Towards Smaller Visual Document Retrievers](https://arxiv.org/abs/2510.01149). - `ModernVBertModel` - `ColModernVBertForRetrieval` ### Open source status - [ ] The model implementation is available - [ ] The model weights are available ### Provide useful link...
closed
completed
false
0
[ "New model" ]
[]
2025-12-01T08:33:46Z
2026-02-23T12:55:43Z
2026-02-23T12:55:43Z
CONTRIBUTOR
null
20260325T173244Z
2026-03-25T17:32:44Z
paultltc
73,120,933
MDQ6VXNlcjczMTIwOTMz
User
false
huggingface/transformers
3,684,864,768
I_kwDOCUB6oc7bopMA
42,548
https://github.com/huggingface/transformers/issues/42548
https://api.github.com/repos/huggingface/transformers/issues/42548
cannot import name 'PreTrainedModel' from 'transformers'
Installed `transformers-4.57.0` but encountered an error. However, it contains PreTrainedModel, yet the error still persists: Traceback (most recent call last): File "/checkpoint/binary/train_package/playground/benchmarks/discrete_vla_pretrain.py", line 39, in <module> from dexbotic.model.discrete_vla.discrete_vla_arc...
closed
completed
false
14
[]
[]
2025-12-02T09:10:32Z
2026-03-08T08:03:46Z
2026-03-08T08:03:46Z
NONE
null
20260325T173244Z
2026-03-25T17:32:44Z
missTL
98,095,885
U_kgDOBdjTDQ
User
false
huggingface/transformers
3,685,706,215
I_kwDOCUB6oc7br2nn
42,556
https://github.com/huggingface/transformers/issues/42556
https://api.github.com/repos/huggingface/transformers/issues/42556
[v5] Remove `safe_serialization` parameter
### Feature request Mentioned in https://github.com/huggingface/transformers/pull/42391. Original comment from @Wauplin : > Do we still want to allow saving with `safe_serialization=False`? The v5 release feels a very good opportunity to definitely get rid of non-safetensors serialization. WDYT ? At this point, un...
closed
completed
false
5
[ "Feature request", "for_v5?" ]
[]
2025-12-02T12:54:40Z
2026-02-12T08:18:25Z
2025-12-16T14:16:45Z
MEMBER
null
20260325T173244Z
2026-03-25T17:32:44Z
Wauplin
11,801,849
MDQ6VXNlcjExODAxODQ5
User
false
huggingface/transformers
3,693,864,351
I_kwDOCUB6oc7cK-Wf
42,617
https://github.com/huggingface/transformers/issues/42617
https://api.github.com/repos/huggingface/transformers/issues/42617
Not able to run 3d_parallel.py
### System Info - `transformers` version: 4.57.1 - Platform: Linux-5.4.0-218-generic-x86_64-with-glibc2.31 - Python version: 3.10.18 - Huggingface_hub version: 0.36.0 - Safetensors version: 0.6.2 - Accelerate version: 1.11.0 - Accelerate config: not found - DeepSpeed version: not installed - PyTorch version (accele...
closed
completed
false
3
[ "bug" ]
[]
2025-12-04T10:14:17Z
2026-02-06T08:08:59Z
2026-02-06T08:08:59Z
CONTRIBUTOR
null
20260325T173244Z
2026-03-25T17:32:44Z
quic-meetkuma
200,747,495
U_kgDOC_cp5w
User
false
huggingface/transformers
3,697,005,235
I_kwDOCUB6oc7cW9Kz
42,638
https://github.com/huggingface/transformers/issues/42638
https://api.github.com/repos/huggingface/transformers/issues/42638
Routing Replay for MoEs
### Feature request Recent RL approaches for training MoE models increasingly rely on **Routing Replay**, as described in the following papers: - https://huggingface.co/papers/2507.18071 - https://huggingface.co/papers/2510.11370 - https://huggingface.co/papers/2512.01374 Without going into the training details, Rout...
closed
completed
false
2
[ "Feature request" ]
[]
2025-12-04T23:58:14Z
2026-04-14T08:09:16Z
2026-04-14T08:09:16Z
MEMBER
null
20260414T122001Z
2026-04-14T12:20:01Z
qgallouedec
45,557,362
MDQ6VXNlcjQ1NTU3MzYy
User
false
huggingface/transformers
3,701,324,455
I_kwDOCUB6oc7cnbqn
42,673
https://github.com/huggingface/transformers/issues/42673
https://api.github.com/repos/huggingface/transformers/issues/42673
Qwen3ForCausalLM leaks VRAM if used in multiple dataloader threads
### System Info torch 2.8.0 transformers==4.56.2 or transformers==4.57.3, both tested ### Who can help? @ArthurZucker @Cyrilvallez ### Information - [ ] The official example scripts - [x] My own modified scripts ### Tasks - [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...) - [x] ...
closed
completed
false
10
[ "bug" ]
[]
2025-12-06T09:01:40Z
2026-03-02T11:19:28Z
2026-03-02T11:19:28Z
NONE
null
20260325T173244Z
2026-03-25T17:32:44Z
dxqb
183,307,934
U_kgDOCu0Ong
User
false
huggingface/transformers
3,707,232,318
I_kwDOCUB6oc7c9-A-
42,710
https://github.com/huggingface/transformers/issues/42710
https://api.github.com/repos/huggingface/transformers/issues/42710
Outstanding issues / PR before we can release v5
- #42697 actual perf upgrade - #34919 default dtype - #42513 - #42563 final form and break for tokenization - #42558 gradient checkpointing refactor - #42555 shard size default - #42491 PEFT + MOE refactor - #41388 default to fast image processors - #42418 - #42894 - #42564
closed
completed
false
4
[]
[]
2025-12-08T16:57:54Z
2026-01-26T08:29:54Z
2026-01-26T08:29:54Z
MEMBER
null
20260325T173244Z
2026-03-25T17:32:44Z
ArthurZucker
48,595,927
MDQ6VXNlcjQ4NTk1OTI3
User
false
huggingface/transformers
3,710,462,747
I_kwDOCUB6oc7dKSsb
42,738
https://github.com/huggingface/transformers/issues/42738
https://api.github.com/repos/huggingface/transformers/issues/42738
BERT-like models with RoPE
### Model description Some BERT-like models: - [ ] NomicBert - [ ] GTE - [ ] Snowflake GTE - [ ] Jinaai embeddings Are well used custom models which implement BERT with RoPE. This is a tracking issue to add support for them in Transformers. --- One motivation for this is so that the Transformers modeling backend ...
open
reopened
false
3
[ "New model" ]
[]
2025-12-09T11:30:40Z
2026-04-02T14:35:17Z
null
MEMBER
null
20260407T090028Z
2026-04-07T09:00:28Z
hmellor
19,981,378
MDQ6VXNlcjE5OTgxMzc4
User
false
huggingface/transformers
3,711,097,149
I_kwDOCUB6oc7dMtk9
42,740
https://github.com/huggingface/transformers/issues/42740
https://api.github.com/repos/huggingface/transformers/issues/42740
how to train trocr with transformers 4.57+?
I trained TrOCR with transformers 4.15 and the results were correct, but when training with 4.57.1 the accuracy is always 0. I couldn't find the reason. Can TrOCR be trained successfully with the latest transformers?
closed
completed
false
3
[]
[]
2025-12-09T14:07:50Z
2026-02-18T08:10:11Z
2026-02-18T08:10:11Z
NONE
null
20260325T173244Z
2026-03-25T17:32:44Z
cqray1990
32,585,434
MDQ6VXNlcjMyNTg1NDM0
User
false
huggingface/transformers
3,713,115,899
I_kwDOCUB6oc7dUab7
42,754
https://github.com/huggingface/transformers/issues/42754
https://api.github.com/repos/huggingface/transformers/issues/42754
Excluding weight decay not working properly on most LMs
The problem is consistent across all three files: https://github.com/huggingface/transformers/blob/471d7ce9abbb3bc1b3bab673367378f9dbc3caac/examples/pytorch/language-modeling/run_clm_no_trainer.py#L518-L528 https://github.com/huggingface/transformers/blob/471d7ce9abbb3bc1b3bab673367378f9dbc3caac/examples/pytorch/langua...
closed
completed
false
5
[]
[]
2025-12-10T00:36:55Z
2026-02-13T10:46:14Z
2026-01-18T08:01:57Z
NONE
null
20260325T173244Z
2026-03-25T17:32:44Z
wwt17
10,792,281
MDQ6VXNlcjEwNzkyMjgx
User
false
huggingface/transformers
3,722,388,341
I_kwDOCUB6oc7d3yN1
42,831
https://github.com/huggingface/transformers/issues/42831
https://api.github.com/repos/huggingface/transformers/issues/42831
Accuracy issue associated with FineGrainedFP8
### System Info Hello, I am writing to report an issue I observed while evaluating the accuracy of a model quantized with FineGrainedFP8 through lm-eval. I observed significant accuracy discrepancies when deploying the quantized model with the HF backend versus the vLLM backend. <img width="1408" height="1250" alt="...
closed
completed
false
6
[ "bug" ]
[]
2025-12-12T07:47:42Z
2026-02-08T08:04:07Z
2026-02-08T08:04:07Z
NONE
null
20260325T173244Z
2026-03-25T17:32:44Z
sunghyuckhong
131,639,753
U_kgDOB9ipyQ
User
false
huggingface/transformers
3,722,400,306
I_kwDOCUB6oc7d31Iy
42,832
https://github.com/huggingface/transformers/issues/42832
https://api.github.com/repos/huggingface/transformers/issues/42832
Question about tie_weights
Hi, I noticed that the logic of the tie_weights function has changed in the transformers 5.0.0rc. In v4.x, when tie_word_embeddings=True, weights between embed_tokens.weight and lm_head.weight were always tied, regardless of whether both tensors were present in the checkpoint. However, in v5.0.0rc, if both embed_tok...
closed
completed
false
12
[]
[]
2025-12-12T07:52:43Z
2026-04-01T08:24:28Z
2026-04-01T08:24:28Z
NONE
null
20260407T090028Z
2026-04-07T09:00:28Z
cjw-d
73,046,570
MDQ6VXNlcjczMDQ2NTcw
User
false
huggingface/transformers
3,732,475,843
I_kwDOCUB6oc7eeQ_D
42,886
https://github.com/huggingface/transformers/issues/42886
https://api.github.com/repos/huggingface/transformers/issues/42886
Tokenizer fails to load from cache when HF_HUB_OFFLINE=1 on 4.57.3
### System Info transformers==4.57.3 ### Who can help? @vasqu @ArthurZucker ### Information - [ ] The official example scripts - [ ] My own modified scripts ### Tasks - [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...) - [ ] My own task or dataset (give details below) ### Repr...
closed
completed
false
7
[ "bug" ]
[]
2025-12-15T23:17:06Z
2026-02-09T11:12:23Z
2026-02-09T11:12:23Z
NONE
null
20260325T173244Z
2026-03-25T17:32:44Z
chtruong814
195,193,376
U_kgDOC6JqIA
User
false
huggingface/transformers
3,733,715,750
I_kwDOCUB6oc7ei_sm
42,890
https://github.com/huggingface/transformers/issues/42890
https://api.github.com/repos/huggingface/transformers/issues/42890
tests/models/sam_hq/test_modeling_sam_hq.py::SamHQModelIntegrationTest may fail since many cases lack set_seed()
### System Info transformers built from latest main. ### Who can help? @ydshieh similar issue may occur in tests/models/sam/test_modeling_sam.py see https://github.com/sywangyi/transformers/blob/main/src/transformers/models/sam_hq/modeling_sam_hq.py#L1077 randn() is used in positional_embedding, so you need to ...
closed
completed
false
5
[ "bug" ]
[]
2025-12-16T08:22:52Z
2026-01-26T16:22:26Z
2026-01-26T16:22:26Z
CONTRIBUTOR
null
20260325T173244Z
2026-03-25T17:32:44Z
sywangyi
36,058,628
MDQ6VXNlcjM2MDU4NjI4
User
false
huggingface/transformers
3,734,408,824
I_kwDOCUB6oc7elo54
42,898
https://github.com/huggingface/transformers/issues/42898
https://api.github.com/repos/huggingface/transformers/issues/42898
`clean_up_tokenization_spaces` behavior changes in v5
### System Info - `transformers` version: 4.57.3/5.0.0rc1 - Platform: Linux-5.15.0-56-generic-x86_64-with-glibc2.35 - Python version: 3.10.12 ### Who can help? @ArthurZucker and @itazap ### Information - [ ] The official example scripts - [x] My own modified scripts ### Tasks - [ ] An officially supported task i...
closed
completed
false
3
[ "bug" ]
[]
2025-12-16T11:34:46Z
2026-02-11T10:01:39Z
2026-02-11T10:01:39Z
NONE
null
20260325T173244Z
2026-03-25T17:32:44Z
apaniukov
51,917,466
MDQ6VXNlcjUxOTE3NDY2
User
false
huggingface/transformers
3,734,648,644
I_kwDOCUB6oc7emjdE
42,902
https://github.com/huggingface/transformers/issues/42902
https://api.github.com/repos/huggingface/transformers/issues/42902
Improve GPT OSS Conversion Script
### Feature request The GPT OSS Conversion Script exposes parameters that are not needed or used, has incorrect documentation, and crashes due to a tiktoken bug. I improved the script and validated it works with GPT-OSS-20B. ### Motivation The original motivation is from https://github.com/huggingface/accelerate/is...
closed
completed
false
0
[ "Feature request" ]
[]
2025-12-16T12:46:20Z
2026-02-02T09:25:02Z
2026-02-02T09:25:02Z
CONTRIBUTOR
null
20260325T173244Z
2026-03-25T17:32:44Z
KyleMylonakisProtopia
122,286,752
U_kgDOB0nyoA
User
false
huggingface/transformers
3,735,475,105
I_kwDOCUB6oc7eptOh
42,907
https://github.com/huggingface/transformers/issues/42907
https://api.github.com/repos/huggingface/transformers/issues/42907
Failing to Save Dequantized Ministrals/Devstrals
### System Info When running in a clean google colab environment with transformers installed from source `!pip install -U git+https://github.com/huggingface/transformers.git` with `transformers===5.0.0rc1`. It can be reproduced with the free T4 GPU. ### Who can help? @ArthurZucker @Cyrilvallez ### Information - ...
closed
completed
false
10
[ "bug" ]
[]
2025-12-16T16:21:17Z
2026-03-08T08:03:34Z
2026-03-08T08:03:34Z
NONE
null
20260325T173244Z
2026-03-25T17:32:44Z
pandora-s-git
128,635,000
U_kgDOB6rQeA
User
false
huggingface/transformers
3,736,518,828
I_kwDOCUB6oc7etsCs
42,913
https://github.com/huggingface/transformers/issues/42913
https://api.github.com/repos/huggingface/transformers/issues/42913
Unexpected tokenizer behavior difference from v4 to v5
```python from transformers import AutoTokenizer tokenizer = AutoTokenizer.from_pretrained('mlx-community/Llama-3.2-1B-Instruct-4bit') text = tokenizer.decode([128000, 64, 1174, 65]) print(text) ``` For 4.57.3 you get `<|begin_of_text|>a,b` For 5.0.0rc1 you get `<|begin_of_text|>a ,b` Is it expected the behavior ch...
closed
completed
false
2
[ "bug" ]
[]
2025-12-16T22:14:01Z
2026-01-25T08:02:25Z
2026-01-25T08:02:25Z
NONE
null
20260325T173244Z
2026-03-25T17:32:44Z
awni
1,542,805
MDQ6VXNlcjE1NDI4MDU=
User
false
huggingface/transformers
3,737,719,708
I_kwDOCUB6oc7eyROc
42,915
https://github.com/huggingface/transformers/issues/42915
https://api.github.com/repos/huggingface/transformers/issues/42915
Qwen3Moe failed with FineGrainedFP8Config
### System Info - `transformers` version: 4.57.3 - Platform: Linux-5.15.0-136-generic-x86_64-with-glibc2.35 - Python version: 3.12.10 - Huggingface_hub version: 0.36.0 - Safetensors version: 0.5.3 - Accelerate version: 1.12.0 - Accelerate config: not found - DeepSpeed version: not installed - PyTorch version (accel...
closed
completed
false
4
[ "bug" ]
[]
2025-12-17T07:35:47Z
2026-01-25T08:02:24Z
2026-01-25T08:02:24Z
NONE
null
20260325T173244Z
2026-03-25T17:32:44Z
jessiewiswjc
70,051,089
MDQ6VXNlcjcwMDUxMDg5
User
false
huggingface/transformers
3,740,752,067
I_kwDOCUB6oc7e91jD
42,936
https://github.com/huggingface/transformers/issues/42936
https://api.github.com/repos/huggingface/transformers/issues/42936
Mask2former model's ignore_value not used after definition
https://github.com/huggingface/transformers/blob/47b0e478f324b54f177ea7998a0791870fdd0324/src/transformers/models/mask2former/configuration_mask2former.py#L83 `ignore_value` does not seem to be used anywhere despite its definition above
closed
completed
false
6
[]
[]
2025-12-17T22:44:36Z
2026-03-11T08:08:16Z
2026-03-11T08:08:16Z
NONE
null
20260325T173244Z
2026-03-25T17:32:44Z
mhmd-j
56,149,395
MDQ6VXNlcjU2MTQ5Mzk1
User
false
huggingface/transformers
3,742,730,049
I_kwDOCUB6oc7fFYdB
42,943
https://github.com/huggingface/transformers/issues/42943
https://api.github.com/repos/huggingface/transformers/issues/42943
Continuous batching: output queue requeue starvation and request-scoped iterator does not terminate on completion
### **Description** There are two related correctness issues in the continuous batching result consumption logic that can lead to unfairness and non-terminating iterators under concurrent workloads. #### 1. Starvation and incorrect timeout handling in `get_result` `ContinuousBatchingManager.get_result` currently r...
closed
completed
false
1
[]
[ "remi-or" ]
2025-12-18T11:35:58Z
2026-02-08T08:04:03Z
2026-02-08T08:04:03Z
NONE
null
20260325T173244Z
2026-03-25T17:32:44Z
pythongiant
13,624,560
MDQ6VXNlcjEzNjI0NTYw
User
false
huggingface/transformers
3,744,470,098
I_kwDOCUB6oc7fMBRS
42,947
https://github.com/huggingface/transformers/issues/42947
https://api.github.com/repos/huggingface/transformers/issues/42947
Gradient Checkpointing Ineffective with PEFT LoRA Despite Proper Configuration
### System Info - `transformers` version: 4.57.1 - Platform: Linux-5.15.0-161-generic-x86_64-with-glibc2.35 - Python version: 3.12.12 - Huggingface_hub version: 0.36.0 - Safetensors version: 0.7.0 - Accelerate version: 1.11.0 - Accelerate config: not found - DeepSpeed version: not installed - PyTorch version (acceler...
closed
completed
false
14
[ "bug" ]
[]
2025-12-18T19:10:03Z
2026-02-18T08:10:01Z
2026-02-18T08:10:01Z
NONE
null
20260325T173244Z
2026-03-25T17:32:44Z
yurkoff-mv
82,467,993
MDQ6VXNlcjgyNDY3OTkz
User
false
huggingface/transformers
3,750,144,911
I_kwDOCUB6oc7fhquP
42,971
https://github.com/huggingface/transformers/issues/42971
https://api.github.com/repos/huggingface/transformers/issues/42971
Please create a Huggingface Transformers SKILL for Claude
### Feature request Please create a Huggingface Transformers SKILL for Claude and a PLUGIN for Claude Code. ### Motivation It is very hard to navigate all the features of the Transformers library. Letting Claude guide us would make things much faster. ### Your contribution I'll help testing the SKILL on my mac M4.
closed
completed
false
17
[ "Good First Issue", "Feature request" ]
[]
2025-12-20T15:17:03Z
2026-02-03T14:20:04Z
2026-02-03T14:20:04Z
NONE
null
20260325T173244Z
2026-03-25T17:32:44Z
Emasoft
713,559
MDQ6VXNlcjcxMzU1OQ==
User
false
huggingface/transformers
3,750,762,299
I_kwDOCUB6oc7fkBc7
42,977
https://github.com/huggingface/transformers/issues/42977
https://api.github.com/repos/huggingface/transformers/issues/42977
Add ViT NEPA
### Model description Summary : Next-Embedding Prediction: The Simple Secret to Strong Vision Learners NEPA is a self-supervised method. It trains Vision Transformers to predict future patch embeddings. No complex loss functions or extra heads. Achieves 85.3% top-1 accuracy on ImageNet-1K with ViT-L. The NEPA model ...
open
null
false
1
[ "New model" ]
[]
2025-12-21T05:33:08Z
2026-02-07T21:42:17Z
null
CONTRIBUTOR
null
20260325T173244Z
2026-03-25T17:32:44Z
sbucaille
24,275,548
MDQ6VXNlcjI0Mjc1NTQ4
User
false
huggingface/transformers
3,751,484,378
I_kwDOCUB6oc7fmxva
42,981
https://github.com/huggingface/transformers/issues/42981
https://api.github.com/repos/huggingface/transformers/issues/42981
Change from v4 to v5c1: DeprecationWarning: builtin type SwigPyPacked/Object has no __module__ attribute
When I run pytest after updating transformers to 5rc1, the tests work as expected, but I get a deprecation warning that is new. This seems harmless for now, but may mean something to someone! ``` <frozen importlib._bootstrap>:488 <frozen importlib._bootstrap>:488: DeprecationWarning: builtin type SwigPyPacked h...
closed
completed
false
2
[]
[]
2025-12-21T19:04:58Z
2026-01-29T08:06:26Z
2026-01-29T08:06:26Z
NONE
null
20260325T173244Z
2026-03-25T17:32:44Z
jrp2014
8,142,876
MDQ6VXNlcjgxNDI4NzY=
User
false
huggingface/transformers
3,752,659,910
I_kwDOCUB6oc7frQvG
42,994
https://github.com/huggingface/transformers/issues/42994
https://api.github.com/repos/huggingface/transformers/issues/42994
quantized model saving failed
### System Info regression PR #42734 ### Who can help? @Cyrilvallez ### Information - [ ] The official example scripts - [ ] My own modified scripts ### Tasks - [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...) - [ ] My own task or dataset (give details below) ### Reproductio...
closed
completed
false
8
[ "bug" ]
[]
2025-12-22T07:35:08Z
2026-03-02T08:09:13Z
2026-03-02T08:09:13Z
CONTRIBUTOR
null
20260325T173244Z
2026-03-25T17:32:44Z
jiqing-feng
107,918,818
U_kgDOBm614g
User
false
huggingface/transformers
3,755,864,802
I_kwDOCUB6oc7f3fLi
43,010
https://github.com/huggingface/transformers/issues/43010
https://api.github.com/repos/huggingface/transformers/issues/43010
Cache's (and Layer's) `update(...)` method to be decorated with `@torch.no_grad`
### System Info 5.0.0rc1, 2.9.1 ### Who can help? _No response_ ### Information - [ ] The official example scripts - [ ] My own modified scripts ### Tasks - [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...) - [ ] My own task or dataset (give details below) ### Reproduction It ...
closed
completed
false
7
[ "bug" ]
[]
2025-12-23T03:08:33Z
2026-03-16T08:18:54Z
2026-03-16T08:18:54Z
NONE
null
20260325T173244Z
2026-03-25T17:32:44Z
vadimkantorov
1,041,752
MDQ6VXNlcjEwNDE3NTI=
User
false
huggingface/transformers
3,755,872,144
I_kwDOCUB6oc7f3g-Q
43,011
https://github.com/huggingface/transformers/issues/43011
https://api.github.com/repos/huggingface/transformers/issues/43011
`StaticLayer` cache layer to implement `.crop(seq_len)` to match API of `DynamicLayer`
### Feature request 5.0.0rc1, 2.9.1 ### Motivation It then becomes possible to crop the cache to a prefix sequence and reuse the prefix cache ### Your contribution I've implemented it as follows: ```python @torch.no_grad def crop(self, seq_len): self.keys[0, 0, seq_len:] = 0 ``` btw it feels quite fragile for `get_s...
open
null
false
8
[ "Feature request" ]
[]
2025-12-23T03:13:30Z
2026-02-17T09:17:37Z
null
NONE
null
20260325T173244Z
2026-03-25T17:32:44Z
vadimkantorov
1,041,752
MDQ6VXNlcjEwNDE3NTI=
User
false
huggingface/transformers
3,755,941,571
I_kwDOCUB6oc7f3x7D
43,012
https://github.com/huggingface/transformers/issues/43012
https://api.github.com/repos/huggingface/transformers/issues/43012
Compiling a bfloat16 model triggers float32 precision PyTorch warning
### System Info 5.0.0rc1, 2.9.1 ### Who can help? _No response_ ### Information - [ ] The official example scripts - [ ] My own modified scripts ### Tasks - [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...) - [ ] My own task or dataset (give details below) ### Reproduction ```...
closed
completed
false
5
[ "bug" ]
[]
2025-12-23T03:59:45Z
2026-03-05T08:07:35Z
2026-03-05T08:07:35Z
NONE
null
20260325T173244Z
2026-03-25T17:32:44Z
vadimkantorov
1,041,752
MDQ6VXNlcjEwNDE3NTI=
User
false
huggingface/transformers
3,757,619,336
I_kwDOCUB6oc7f-LiI
43,023
https://github.com/huggingface/transformers/issues/43023
https://api.github.com/repos/huggingface/transformers/issues/43023
How to investigate "CAS service error" during model downloading?
### System Info (nm) PS C:\Users\myuser\AppData\Local\anaconda3\envs\nm\Lib\site-packages\transformers\commands> python .\transformers_cli.py env ``` Copy-and-paste the text below in your GitHub issue and FILL OUT the two last points. - `transformers` version: 4.57.3 - Platform: Windows-10-10.0.19045-SP0 - Python v...
closed
completed
false
3
[ "bug" ]
[]
2025-12-23T14:48:51Z
2026-01-31T08:02:33Z
2026-01-31T08:02:33Z
NONE
null
20260325T173244Z
2026-03-25T17:32:44Z
satyrmipt
113,777,913
U_kgDOBsgc-Q
User
false
huggingface/transformers
3,760,414,050
I_kwDOCUB6oc7gI11i
43,032
https://github.com/huggingface/transformers/issues/43032
https://api.github.com/repos/huggingface/transformers/issues/43032
[v5 Regression] BitsAndBytes 4-bit quantization OOM - core_model_loading bypasses quantizer device placement
I probably missed a new setting somewhere, because the load is very slow and it was fast before, but just in case i did not, here is a possible issue. just close it if i missed something. **System Info** transformers: 5.0.0.dev0 (installed from main) bitsandbytes: 0.49.0 torch: 2.9.1+cu128 Python: 3.12 GPU: NVIDIA G...
closed
completed
false
7
[]
[]
2025-12-24T14:11:26Z
2026-03-18T15:42:39Z
2026-03-18T15:42:39Z
NONE
null
20260325T173244Z
2026-03-25T17:32:44Z
jwm1969
35,038,618
MDQ6VXNlcjM1MDM4NjE4
User
false
huggingface/transformers
3,761,496,273
I_kwDOCUB6oc7gM-DR
43,037
https://github.com/huggingface/transformers/issues/43037
https://api.github.com/repos/huggingface/transformers/issues/43037
DeepSeek v3.2 support
### Feature request When will Transformers officially support DeepSeek v3.2? https://huggingface.co/deepseek-ai/DeepSeek-V3.2 ### Motivation None ### Your contribution None
open
null
false
6
[ "Feature request" ]
[]
2025-12-25T06:38:29Z
2026-04-13T09:20:25Z
null
NONE
null
20260413T094825Z
2026-04-13T09:48:25Z
freedom-cui
81,297,486
MDQ6VXNlcjgxMjk3NDg2
User
false
huggingface/transformers
3,761,801,871
I_kwDOCUB6oc7gOIqP
43,039
https://github.com/huggingface/transformers/issues/43039
https://api.github.com/repos/huggingface/transformers/issues/43039
When using the Liger Kernel, torch.nn.functional.cross_entropy is called
### System Info ``` accelerate 1.11.0 liger-kernel 0.0.3 numpy 2.3.3 peft 0.18.0 tokenizers 0.22.1 torch 2.9.0+cu126 torch-tb-profiler 0.4.3 tor...
open
null
false
7
[ "Good Second Issue", "bug" ]
[]
2025-12-25T10:55:29Z
2026-03-25T03:50:08Z
null
NONE
null
20260325T173244Z
2026-03-25T17:32:44Z
yurkoff-mv
82,467,993
MDQ6VXNlcjgyNDY3OTkz
User
false
huggingface/transformers
3,763,183,796
I_kwDOCUB6oc7gTaC0
43,048
https://github.com/huggingface/transformers/issues/43048
https://api.github.com/repos/huggingface/transformers/issues/43048
Need to understand difference between TP support via transformers code v/s Pytorch's native parallelize_module API.
Based on the existing code base of transformers, below sequence of operations are performed on model object to make it TP compatible. - TP Plan for Llama: https://github.com/huggingface/transformers/blob/a7f29523361b2cc12e51c1f5133d95f122f6f45c/src/transformers/models/llama/configuration_llama.py#L113 - self._tp_plan ...
closed
completed
false
3
[]
[]
2025-12-26T10:05:38Z
2026-03-08T08:03:22Z
2026-03-08T08:03:22Z
CONTRIBUTOR
null
20260325T173244Z
2026-03-25T17:32:44Z
quic-meetkuma
200,747,495
U_kgDOC_cp5w
User
false
huggingface/transformers
3,764,583,354
I_kwDOCUB6oc7gYvu6
43,054
https://github.com/huggingface/transformers/issues/43054
https://api.github.com/repos/huggingface/transformers/issues/43054
text embedding of siglip2 is much worse than siglip
### System Info - `transformers` version: 4.57.3 - Platform: Linux-5.15.167.4-microsoft-standard-WSL2-x86_64-with-glibc2.39 - Python version: 3.12.12 - Huggingface_hub version: 0.36.0 - Safetensors version: 0.7.0 - Accelerate version: 1.12.0 - Accelerate config: not found - DeepSpeed version: not installed - PyTorc...
closed
completed
false
8
[ "bug" ]
[]
2025-12-27T08:46:30Z
2026-01-28T14:52:48Z
2026-01-28T14:52:48Z
NONE
null
20260325T173244Z
2026-03-25T17:32:44Z
fancyerii
5,372,812
MDQ6VXNlcjUzNzI4MTI=
User
false
huggingface/transformers
3,767,588,609
I_kwDOCUB6oc7gkNcB
43,064
https://github.com/huggingface/transformers/issues/43064
https://api.github.com/repos/huggingface/transformers/issues/43064
Trainer.train() using v5 + FSDP2 + PEFT + cpu_ram_efficient_loading=True results in wrong optimizer states/params on all but 0th rank
### System Info v5 + FSDP2 + cpu_ram_efficient_loading=True + PEFT + "reshard_after_forward": True ### Who can help? @S1ro1 @SunMarc ### Information - [ ] The official example scripts - [x] My own modified scripts ### Tasks - [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...) -...
closed
completed
false
3
[ "bug" ]
[]
2025-12-29T14:44:46Z
2026-02-06T08:08:32Z
2026-02-06T08:08:32Z
NONE
null
20260325T173244Z
2026-03-25T17:32:44Z
d5031
46,198,216
MDQ6VXNlcjQ2MTk4MjE2
User
false
huggingface/transformers
3,767,686,605
I_kwDOCUB6oc7gklXN
43,065
https://github.com/huggingface/transformers/issues/43065
https://api.github.com/repos/huggingface/transformers/issues/43065
Dummy `nn.Conv2d` in `Sam3PixelDecoder`
### System Info transformers==5.0.0rc1 ### Who can help? I’ve noticed that `Sam3Model.from_pretrained("facebook/sam3")` sets `num_upsampling_stages=3`. However, `backbone_features` passed to `Sam3PixelDecoder` is a tuple of size 3. As a result, `self.conv_layers[2]` and `self.norms[2]` are bypassed and do nothing. ...
closed
completed
false
3
[ "bug" ]
[]
2025-12-29T15:28:42Z
2026-02-08T08:03:51Z
2026-02-08T08:03:51Z
NONE
null
20260325T173244Z
2026-03-25T17:32:44Z
james77777778
20,734,616
MDQ6VXNlcjIwNzM0NjE2
User
false
huggingface/transformers
3,767,691,311
I_kwDOCUB6oc7gkmgv
43,066
https://github.com/huggingface/transformers/issues/43066
https://api.github.com/repos/huggingface/transformers/issues/43066
Wrong tokenizer decoder type in Transformers v5
### System Info Wrong decoder type with `5.0.0rc1`. ### Information - [x] The official example scripts - [ ] My own modified scripts ### Tasks - [x] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...) - [ ] My own task or dataset (give details below) ### Reproduction Run this: ```pyth...
closed
completed
false
6
[ "bug" ]
[]
2025-12-29T15:31:04Z
2026-01-26T08:27:35Z
2026-01-26T08:27:35Z
NONE
null
20260325T173244Z
2026-03-25T17:32:44Z
awni
1,542,805
MDQ6VXNlcjE1NDI4MDU=
User
false
huggingface/transformers
3,777,909,876
I_kwDOCUB6oc7hLlR0
43,086
https://github.com/huggingface/transformers/issues/43086
https://api.github.com/repos/huggingface/transformers/issues/43086
Add async_stopping_criteria flag to reduce GPU-CPU synchronization overhead
### Feature request Add an `async_stopping_criteria` flag to `GenerationConfig` that performs stopping criteria checks asynchronously on a separate CUDA stream. This reduces GPU-CPU synchronization overhead during autoregressive text generation by allowing the model to continue generating tokens while stopping criteri...
closed
completed
false
5
[]
[]
2026-01-03T09:54:01Z
2026-03-08T08:03:20Z
2026-03-08T08:03:20Z
CONTRIBUTOR
null
20260325T173244Z
2026-03-25T17:32:44Z
AmitMY
5,757,359
MDQ6VXNlcjU3NTczNTk=
User
false
huggingface/transformers
3,777,969,619
I_kwDOCUB6oc7hLz3T
43,089
https://github.com/huggingface/transformers/issues/43089
https://api.github.com/repos/huggingface/transformers/issues/43089
Generation overhead: many GPU syncs per token + PyTorch dispatch overhead
# Generation overhead: 3.25 GPU syncs per token + PyTorch dispatch overhead ## System Info - `transformers` version: 5.0.0.dev0 (main branch) - Platform: Linux - Python version: 3.12 - PyTorch version: 2.x with CUDA - GPU: NVIDIA (tested) ## Who can help? @gante @zucchini-nlp ## Information - [x] My own modified ...
closed
completed
false
2
[]
[]
2026-01-03T11:14:18Z
2026-02-25T08:11:03Z
2026-02-25T08:11:03Z
CONTRIBUTOR
null
20260325T173244Z
2026-03-25T17:32:44Z
AmitMY
5,757,359
MDQ6VXNlcjU3NTczNTk=
User
false
huggingface/transformers
3,779,141,757
I_kwDOCUB6oc7hQSB9
43,097
https://github.com/huggingface/transformers/issues/43097
https://api.github.com/repos/huggingface/transformers/issues/43097
5.0.0 `tie_embeddings_and_encoder_decoder` removed without indication
### System Info - `transformers` version: 5.0.0.dev0 (main branch) - Platform: Linux - Python version: 3.12 - PyTorch version: 2.x with CUDA - GPU: NVIDIA (tested) ### Who can help? @stevhliu ### Information - [ ] The official example scripts - [x] My own modified scripts ### Tasks - [ ] An officially supported...
closed
completed
false
6
[ "bug" ]
[]
2026-01-04T11:23:50Z
2026-02-24T08:10:39Z
2026-02-24T08:10:39Z
CONTRIBUTOR
null
20260325T173244Z
2026-03-25T17:32:44Z
AmitMY
5,757,359
MDQ6VXNlcjU3NTczNTk=
User
false
huggingface/transformers
3,779,206,925
I_kwDOCUB6oc7hQh8N
43,099
https://github.com/huggingface/transformers/issues/43099
https://api.github.com/repos/huggingface/transformers/issues/43099
Question about .T usage when loading video frames from file paths
Thank you very much for the excellent and high-quality work on this project. While reading the code and experimenting with loading video frames via a list of image paths, I had a question regarding a specific implementation detail and would appreciate some clarification. In processing_utils.py, for the case where vid...
closed
completed
false
2
[]
[]
2026-01-04T12:41:09Z
2026-02-12T08:09:45Z
2026-02-12T08:09:45Z
NONE
null
20260325T173244Z
2026-03-25T17:32:44Z
c1ircle
106,308,364
U_kgDOBlYjDA
User
false
huggingface/transformers
3,782,023,965
I_kwDOCUB6oc7hbRsd
43,116
https://github.com/huggingface/transformers/issues/43116
https://api.github.com/repos/huggingface/transformers/issues/43116
Multi-label classification always returns empty results in run_classification.py example script
### System Info Dell workstation with NVIDIA Titan Xp (12 GB RAM), driver version 535.261.03, CUDA 12.2. Ubuntu Linux 24.04, Python 3.12. ### Who can help? I'm using `run_classification.py` example script (in `pytorch/text-classification` folder), but when running with multi-labelled data it always returns empty val...
closed
completed
false
9
[ "bug" ]
[]
2026-01-05T16:03:51Z
2026-02-18T08:09:48Z
2026-02-18T08:09:48Z
NONE
null
20260325T173244Z
2026-03-25T17:32:44Z
ziorufus
3,517,832
MDQ6VXNlcjM1MTc4MzI=
User
false
huggingface/transformers
3,782,800,686
I_kwDOCUB6oc7hePUu
43,122
https://github.com/huggingface/transformers/issues/43122
https://api.github.com/repos/huggingface/transformers/issues/43122
Different tokenization with same tokenizer from 4.57.3 to 5.0
### System Info Moving from transformers 4.57.3 to 5.0+ introduces a different and seemingly incorrect tokenization when using the same tokenizer. I believe the new version is incorrect because when using it, we get bad results (the model starts to introduce unexpected artifacts in the response). ### Who can help? ...
closed
completed
false
3
[ "Fast Tokenizers", "bug" ]
[]
2026-01-05T20:40:19Z
2026-01-26T08:27:15Z
2026-01-26T08:27:15Z
NONE
null
20260325T173244Z
2026-03-25T17:32:44Z
awni
1,542,805
MDQ6VXNlcjE1NDI4MDU=
User
false
huggingface/transformers
3,784,637,664
I_kwDOCUB6oc7hlPzg
43,125
https://github.com/huggingface/transformers/issues/43125
https://api.github.com/repos/huggingface/transformers/issues/43125
Saving bug using FSDP 2.0 + parallelism_config set (while works fine without providing parallelism_config in TrainingArguments)
The reason is the order of elifs here https://github.com/huggingface/transformers/blob/main/src/transformers/trainer.py#L4024 elif getattr(self.accelerator, "parallelism_config", None) is not None: # DeepSpeed SP already handles checkpoint saving below, so skip manual save in that case ...
closed
completed
false
5
[]
[]
2026-01-06T10:28:26Z
2026-01-24T17:30:07Z
2026-01-24T17:30:07Z
NONE
null
20260325T173244Z
2026-03-25T17:32:44Z
d5031
46,198,216
MDQ6VXNlcjQ2MTk4MjE2
User
false
huggingface/transformers
3,799,228,707
I_kwDOCUB6oc7ic6Ej
43,208
https://github.com/huggingface/transformers/issues/43208
https://api.github.com/repos/huggingface/transformers/issues/43208
[xLSTM] Three bugs preventing training models smaller than 7B
### System Info - transformers version: 5.0.0.dev0 - Platform: Linux/macOS - Python: 3.12 ### Who can help? @ArthurZucker @vasqu ### Information - The official example scripts - My own modified scripts ### Reproduction ```python from transformers import xLSTMConfig, xLSTMForCausalLM import torch # Config for ~125M...
closed
completed
false
1
[]
[]
2026-01-10T06:52:23Z
2026-02-09T18:05:45Z
2026-02-09T18:05:45Z
CONTRIBUTOR
null
20260325T173244Z
2026-03-25T17:32:44Z
Anri-Lombard
76,818,211
MDQ6VXNlcjc2ODE4MjEx
User
false
huggingface/transformers
3,804,003,927
I_kwDOCUB6oc7ivH5X
43,232
https://github.com/huggingface/transformers/issues/43232
https://api.github.com/repos/huggingface/transformers/issues/43232
_update_model_kwargs_for_generation after sync_gpus when generation
### System Info - `transformers` version: main (latest) - Platform: Linux - Python version: 3.11 - PyTorch version: 2.9 ### Who can help? @gante ### Information - [ ] The official example scripts - [x] My own modified scripts ### Tasks - [ ] An officially supported task in the `examples` folder (such as GLUE/SQ...
closed
completed
false
5
[ "bug" ]
[]
2026-01-12T11:55:52Z
2026-03-05T08:07:26Z
2026-03-05T08:07:26Z
NONE
null
20260325T173244Z
2026-03-25T17:32:44Z
xin-w8023
43,900,898
MDQ6VXNlcjQzOTAwODk4
User
false
huggingface/transformers
3,806,595,287
I_kwDOCUB6oc7i5AjX
43,240
https://github.com/huggingface/transformers/issues/43240
https://api.github.com/repos/huggingface/transformers/issues/43240
kwargs are not passed to loss calculation function.
https://github.com/huggingface/transformers/blob/3aa89c07f210df18865daee9df81fe2766d13884/src/transformers/loss/loss_utils.py#L36 If we want to use label_smoothing here, the loss_kwargs can be passed into fixed_cross_entropy but are not used.
closed
completed
false
2
[]
[]
2026-01-13T00:51:17Z
2026-02-21T08:03:15Z
2026-02-21T08:03:15Z
NONE
null
20260325T173244Z
2026-03-25T17:32:44Z
XinhaoMei
58,569,453
MDQ6VXNlcjU4NTY5NDUz
User
false
huggingface/transformers
3,809,161,904
I_kwDOCUB6oc7jCzKw
43,257
https://github.com/huggingface/transformers/issues/43257
https://api.github.com/repos/huggingface/transformers/issues/43257
Qwen3 MOE weights not converted when loading with accelerate + deepspeed
### System Info ``` - `transformers` version: 5.0.0.dev0 - Platform: Linux-5.15.0-1048-aws-x86_64-with-glibc2.31 - Python version: 3.12.11 - Huggingface_hub version: 1.3.1 - Safetensors version: 0.7.0 - Accelerate version: 1.12.0 - Accelerate config: not found - DeepSpeed version: 0.18.3 - PyTorch version (accelerat...
closed
completed
false
11
[ "bug" ]
[ "kashif" ]
2026-01-13T14:39:25Z
2026-02-11T19:46:19Z
2026-01-24T17:16:33Z
MEMBER
null
20260325T173244Z
2026-03-25T17:32:44Z
edbeeching
7,275,864
MDQ6VXNlcjcyNzU4NjQ=
User
false
huggingface/transformers
3,809,998,924
I_kwDOCUB6oc7jF_hM
43,262
https://github.com/huggingface/transformers/issues/43262
https://api.github.com/repos/huggingface/transformers/issues/43262
Audio processors: `apply_chat_template()` defaults to 16kHz sampling rate, even if the processor config sets a different value
Firstly, thanks for a fantastic library! ### System Info - `transformers` version: 4.57.5 - Platform: macOS-26.2-arm64-arm-64bit-Mach-O - Python version: 3.14.2 - Huggingface_hub version: 0.36.0 - Safetensors version: 0.7.0 - Accelerate version: not installed - Accelerate config: not found - DeepSpeed version: not in...
closed
completed
false
2
[ "bug" ]
[]
2026-01-13T18:18:01Z
2026-02-02T11:32:38Z
2026-02-02T11:32:38Z
NONE
null
20260325T173244Z
2026-03-25T17:32:44Z
rmclarke
25,023,912
MDQ6VXNlcjI1MDIzOTEy
User
false
huggingface/transformers
3,812,129,522
I_kwDOCUB6oc7jOHry
43,278
https://github.com/huggingface/transformers/issues/43278
https://api.github.com/repos/huggingface/transformers/issues/43278
Embedding layer dtype changed from BF16 in training to FP32 in evaluate.
### System Info - `transformers` version: 4.57.1 - Platform: Linux-5.15.0-1071-azure-x86_64-with-glibc2.35 - Python version: 3.12.12 - Huggingface_hub version: 0.36.0 - Safetensors version: 0.7.0 - Accelerate version: 1.12.0 - Accelerate config: not found - DeepSpeed version: 0.18.4 - PyTorch version (accelerator?)...
closed
completed
false
4
[ "bug" ]
[]
2026-01-14T08:23:51Z
2026-02-23T08:10:44Z
2026-02-23T08:10:44Z
NONE
null
20260325T173244Z
2026-03-25T17:32:44Z
TriLoo
16,267,477
MDQ6VXNlcjE2MjY3NDc3
User
false
huggingface/transformers
3,813,016,307
I_kwDOCUB6oc7jRgLz
43,284
https://github.com/huggingface/transformers/issues/43284
https://api.github.com/repos/huggingface/transformers/issues/43284
Customize Quantization-Friendly Backward Compatibility
### Feature request Hi guys, It’s great to see Transformers moving to V5 with a modular design and improved performance! We’ve started adapting it with the RC branch, but noticed that some of the changes are not very friendly for quantization tools. Here’s the context: Given a BF16 model, quantization tools typically ...
closed
completed
false
20
[ "Feature request" ]
[]
2026-01-14T12:32:00Z
2026-03-02T10:10:49Z
2026-03-02T10:10:49Z
CONTRIBUTOR
null
20260325T173244Z
2026-03-25T17:32:44Z
yiliu30
106,061,964
U_kgDOBlJgjA
User
false
huggingface/transformers
3,814,786,615
I_kwDOCUB6oc7jYQY3
43,295
https://github.com/huggingface/transformers/issues/43295
https://api.github.com/repos/huggingface/transformers/issues/43295
[Regression] v4.57.5 breaks custom model code accessing processor.tokenizer and passing images to tokenizer
### System Info - `transformers` version: 4.57.5 - Platform: Linux-5.15.0-161-generic-x86_64-with-glibc2.35 - Python version: 3.12.12 - Huggingface_hub version: 0.36.0 - Safetensors version: 0.7.0 - Accelerate version: 1.12.0 - Accelerate config: not found - DeepSpeed version: not installed - PyTorch version (accel...
closed
completed
false
3
[ "bug" ]
[]
2026-01-14T20:37:49Z
2026-02-23T08:10:42Z
2026-02-23T08:10:42Z
NONE
null
20260325T173244Z
2026-03-25T17:32:44Z
AndreasKaratzas
42,451,412
MDQ6VXNlcjQyNDUxNDEy
User
false
huggingface/transformers
3,815,747,215
I_kwDOCUB6oc7jb66P
43,296
https://github.com/huggingface/transformers/issues/43296
https://api.github.com/repos/huggingface/transformers/issues/43296
Failed to load PaddleOCR-VL model with transformers 4.53.0 in vLLM 0.11.0
### System Info transformers 4.53.0 platform Linux python version 3.11.4 ### Who can help? _No response_ ### Information - [ ] The official example scripts - [ ] My own modified scripts ### Tasks - [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...) - [ ] My own task or datas...
closed
completed
false
1
[ "bug" ]
[]
2026-01-15T02:52:00Z
2026-02-10T02:58:27Z
2026-02-10T02:58:27Z
NONE
null
20260325T173244Z
2026-03-25T17:32:44Z
jhlddz
51,847,122
MDQ6VXNlcjUxODQ3MTIy
User
false
huggingface/transformers
3,816,324,669
I_kwDOCUB6oc7jeH49
43,298
https://github.com/huggingface/transformers/issues/43298
https://api.github.com/repos/huggingface/transformers/issues/43298
Continuous batching does not support audio models
### Feature request <img width="719" height="457" alt="Image" src="https://github.com/user-attachments/assets/ac0ec577-6568-4e21-a3c9-78e1d3ab846d" /> Continuous batching does not support audio models: in addition to `input_ids`, an audio model needs `input_features` (a tensor of shape [1, 80, 1280]); please add support. ### Motivation please support continuous ba...
open
null
false
5
[ "Feature request" ]
[ "remi-or" ]
2026-01-15T07:23:05Z
2026-03-13T22:42:27Z
null
NONE
null
20260325T173244Z
2026-03-25T17:32:44Z
MengLeebin
13,679,493
MDQ6VXNlcjEzNjc5NDkz
User
false
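The feature request above concerns models whose inputs are spectrogram features rather than token ids alone. A sketch with an illustrative Whisper checkpoint shows the kind of `input_features` tensor that currently has no path into continuous batching:

```python
# Sketch: audio models consume input_features, not only input_ids. The
# Whisper checkpoint here is an illustrative stand-in for the report's model.
import numpy as np
from transformers import AutoProcessor

processor = AutoProcessor.from_pretrained("openai/whisper-small")
audio = np.zeros(16000, dtype=np.float32)  # one second of silence at 16 kHz

features = processor(audio=audio, sampling_rate=16000, return_tensors="pt").input_features
print(features.shape)  # torch.Size([1, 80, 3000]); the report's model emits [1, 80, 1280]
# A continuous-batching request built from input_ids alone cannot carry this tensor.
```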
huggingface/transformers
3,816,484,709
I_kwDOCUB6oc7jeu9l
43,299
https://github.com/huggingface/transformers/issues/43299
https://api.github.com/repos/huggingface/transformers/issues/43299
Transformers version 5.0.0.dev0 breaks Qwen3VL MoE model loading
### System Info I am trying to run inference on Qwen3VL Moe models, both 30B-A3B and 235B-A22B; however, there is a size mismatch between the HF checkpoint and the model weights. Inference done on H100 GPU with standard inference script from Qwen repo. Error message shown below: Note: downgrading transformer version...
closed
completed
false
4
[ "bug" ]
[]
2026-01-15T08:18:25Z
2026-02-11T12:43:58Z
2026-01-16T11:27:21Z
NONE
null
20260325T173244Z
2026-03-25T17:32:44Z
daulettoibazar
113,344,861
U_kgDOBsGBXQ
User
false
huggingface/transformers
3,821,478,506
I_kwDOCUB6oc7jxyJq
43,316
https://github.com/huggingface/transformers/issues/43316
https://api.github.com/repos/huggingface/transformers/issues/43316
API discrepancy between `Gemma3TextConfig` and others
### System Info transformers==v5.0.0rc3 ### Who can help? @zucchini-nlp ### Information - [ ] The official example scripts - [x] My own modified scripts ### Tasks - [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...) - [ ] My own task or dataset (give details below) ### Reproduc...
closed
completed
false
4
[ "bug" ]
[]
2026-01-16T10:47:00Z
2026-01-26T09:12:56Z
2026-01-26T09:12:56Z
CONTRIBUTOR
null
20260325T173244Z
2026-03-25T17:32:44Z
Tcc0403
76,503,978
MDQ6VXNlcjc2NTAzOTc4
User
false
huggingface/transformers
3,821,662,877
I_kwDOCUB6oc7jyfKd
43,317
https://github.com/huggingface/transformers/issues/43317
https://api.github.com/repos/huggingface/transformers/issues/43317
device_map=auto fails to load a dequantized model with GPU+CPU offload
### System Info On a A100 using transformers from main and latest torch ### Who can help? @Cyrilvallez ### Information - [ ] The official example scripts - [ ] My own modified scripts ### Tasks - [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...) - [ ] My own task or dataset (gi...
closed
completed
false
0
[ "bug" ]
[]
2026-01-16T11:35:02Z
2026-01-26T12:30:24Z
2026-01-26T12:30:24Z
MEMBER
null
20260325T173244Z
2026-03-25T17:32:44Z
IlyasMoutawwakil
57,442,720
MDQ6VXNlcjU3NDQyNzIw
User
false
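For reference on the offload setup above, the usual GPU+CPU split is requested with `device_map="auto"` plus an explicit `max_memory` budget; the model id and memory figures below are illustrative assumptions:

```python
# Sketch: GPU + CPU offload via the accelerate-backed device map. Model id
# and memory budgets are illustrative assumptions.
import torch
from transformers import AutoModelForCausalLM

model = AutoModelForCausalLM.from_pretrained(
    "mistralai/Mistral-7B-v0.1",
    torch_dtype=torch.bfloat16,
    device_map="auto",
    max_memory={0: "10GiB", "cpu": "48GiB"},  # spill the remainder to CPU
)
print(model.hf_device_map)  # inspect which modules landed where
```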
huggingface/transformers
3,822,275,080
I_kwDOCUB6oc7j00oI
43,322
https://github.com/huggingface/transformers/issues/43322
https://api.github.com/repos/huggingface/transformers/issues/43322
Segmentation Fault when loading Llava Next Models
### Running into a segmentation fault when trying to run LLaVA-NeXT-Video-34B/7B Running inference with LLaVA-NeXT-Video-34B (and the 7B) consistently results in a segmentation fault during generation on CUDA. This occurs on an NVIDIA H200 (140GB) GPU with sufficient memory available. I tested on other models like Qwen2...
closed
completed
false
2
[ "bug" ]
[]
2026-01-16T14:28:51Z
2026-02-24T08:10:31Z
2026-02-24T08:10:31Z
NONE
null
20260325T173244Z
2026-03-25T17:32:44Z
omrastogi
43,903,014
MDQ6VXNlcjQzOTAzMDE0
User
false
huggingface/transformers
3,824,075,616
I_kwDOCUB6oc7j7sNg
43,329
https://github.com/huggingface/transformers/issues/43329
https://api.github.com/repos/huggingface/transformers/issues/43329
[BUG] _get_num_multimodal_tokens: video branch uses undefined a) get_number_of_video_patches, b) merge_size. Tests never hit the video route (multiple VLM processors)
### System Info - `transformers` version: 4.57.3 - Platform: Linux-6.8.0-52-generic-x86_64-with-glibc2.35 - Python version: 3.12.9 - Huggingface_hub version: 0.34.3 - Safetensors version: 0.5.3 - Accelerate version: 1.9.0 - Accelerate config: - compute_environment: LOCAL_MACHINE - distributed_type: MULTI_GP...
closed
completed
false
2
[ "bug" ]
[ "zucchini-nlp" ]
2026-01-17T00:18:39Z
2026-02-24T08:10:28Z
2026-02-24T08:10:28Z
CONTRIBUTOR
null
20260325T173244Z
2026-03-25T17:32:44Z
stefgina
24,375,599
MDQ6VXNlcjI0Mzc1NTk5
User
false
huggingface/transformers
3,824,859,369
I_kwDOCUB6oc7j-rjp
43,334
https://github.com/huggingface/transformers/issues/43334
https://api.github.com/repos/huggingface/transformers/issues/43334
Qwen3-VL can't be loaded in transformers dev: AttributeError: 'Qwen3VLTextConfig' object has no attribute 'pad_token_id'
Qwen3-VL can't be loaded in transformers dev branch (5.0.0.dev0 at commit 24807bfcf4a21286fa2a7e728f381ddaaca7bbc7): > AttributeError: 'Qwen3VLTextConfig' object has no attribute 'pad_token_id' After investigation, the issue appeared after the merge of this PR: - https://github.com/huggingface/transformers/pull/41541...
closed
completed
false
5
[ "bug" ]
[]
2026-01-17T10:42:01Z
2026-01-27T09:19:51Z
2026-01-22T18:25:38Z
MEMBER
null
20260325T173244Z
2026-03-25T17:32:44Z
albertvillanova
8,515,462
MDQ6VXNlcjg1MTU0NjI=
User
false
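For code that must run across versions while the fix above propagates, one defensive sketch is to treat `pad_token_id` as optional on nested text configs (the repo id is an assumption):

```python
# Sketch: read pad_token_id without assuming the attribute exists on a
# nested text config. The repo id is an illustrative assumption.
from transformers import AutoConfig

config = AutoConfig.from_pretrained("Qwen/Qwen3-VL-8B-Instruct")
text_config = getattr(config, "text_config", config)
pad_token_id = getattr(text_config, "pad_token_id", None)
print(pad_token_id)  # None instead of AttributeError on affected versions
```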
huggingface/transformers
3,824,943,220
I_kwDOCUB6oc7j_AB0
43,335
https://github.com/huggingface/transformers/issues/43335
https://api.github.com/repos/huggingface/transformers/issues/43335
[BUG] SwitchTransformersConfig creates sparse layer when num_sparse_encoder_layers=0 with single layer model
### System Info * `transformers` version: `5.0.0.dev0` * Platform: `Linux-5.15.167.4-microsoft-standard-WSL2-x86_64-with-glibc2.39` * Python version: `3.12.3` * `huggingface_hub` version: `1.3.2` * `safetensors` version: `0.7.0` * `accelerate` version: `1.12.0` * Accelerate config: `not installed` * DeepSpeed version:...
closed
completed
false
4
[ "bug" ]
[]
2026-01-17T11:28:35Z
2026-02-09T17:33:46Z
2026-02-09T17:33:46Z
CONTRIBUTOR
null
20260325T173244Z
2026-03-25T17:32:44Z
harshaljanjani
75,426,551
MDQ6VXNlcjc1NDI2NTUx
User
false
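The claim above can be checked by counting sparse MLP blocks after construction; a minimal sketch with tiny illustrative sizes:

```python
# Sketch: with num_sparse_encoder_layers=0 the encoder should contain no
# sparse (MoE) blocks; count them to verify. Sizes are illustrative.
from transformers import SwitchTransformersConfig, SwitchTransformersModel

config = SwitchTransformersConfig(
    num_layers=1,
    num_decoder_layers=1,
    num_sparse_encoder_layers=0,
    num_sparse_decoder_layers=0,
    d_model=32,
    d_ff=64,
    num_heads=2,
    d_kv=16,
)
model = SwitchTransformersModel(config)
sparse_blocks = [
    name for name, module in model.named_modules()
    if type(module).__name__ == "SwitchTransformersSparseMLP"
]
print(len(sparse_blocks), sparse_blocks)  # the report sees a sparse layer here
```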
huggingface/transformers
3,828,221,249
I_kwDOCUB6oc7kLgVB
43,344
https://github.com/huggingface/transformers/issues/43344
https://api.github.com/repos/huggingface/transformers/issues/43344
invalid test cases for glm_image model
### System Info A100 w/ 80GB memory For test cases: ``` tests/models/glm_image/test_modeling_glm_image.py::GlmImageIntegrationTest::test_small_model_integration_test_batch_flashatt2 tests/models/glm_image/test_modeling_glm_image.py::GlmImageIntegrationTest::test_small_model_integration_test_batch ``` it will throw erro...
closed
completed
false
1
[ "bug" ]
[]
2026-01-19T06:33:52Z
2026-01-26T15:31:00Z
2026-01-26T15:31:00Z
CONTRIBUTOR
null
20260325T173244Z
2026-03-25T17:32:44Z
kaixuanliu
13,268,042
MDQ6VXNlcjEzMjY4MDQy
User
false
huggingface/transformers
3,830,059,142
I_kwDOCUB6oc7kShCG
43,352
https://github.com/huggingface/transformers/issues/43352
https://api.github.com/repos/huggingface/transformers/issues/43352
Error: NemotronHForCausalLM does not support Flash Attention 2.0 yet
### System Info Error: NemotronHForCausalLM does not support Flash Attention 2.0 yet. Please request to add support where the model is hosted, on its model hub page: https://huggingface.co/nvidia/NVIDIA-Nemotron-3-Nano-30B-A3B-BF16/discussions/new or in the Transformers GitHub repo: https://github.com/huggingface/tran...
closed
completed
false
2
[ "bug" ]
[]
2026-01-19T14:50:02Z
2026-02-28T08:02:53Z
2026-02-28T08:02:53Z
NONE
null
20260325T173244Z
2026-03-25T17:32:44Z
OrlandoWhite88
119,964,986
U_kgDOByaFOg
User
false
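Until FlashAttention 2 support is added for the architecture above, the standard fallback is to pin a supported attention backend at load time; dtype and device choices below are assumptions:

```python
# Sketch: fall back to SDPA when a model class does not yet support
# FlashAttention 2. dtype/device choices are illustrative assumptions.
import torch
from transformers import AutoModelForCausalLM

model = AutoModelForCausalLM.from_pretrained(
    "nvidia/NVIDIA-Nemotron-3-Nano-30B-A3B-BF16",  # repo id taken from the report
    torch_dtype=torch.bfloat16,
    attn_implementation="sdpa",  # instead of "flash_attention_2"
    device_map="auto",
)
```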
huggingface/transformers
3,832,557,479
I_kwDOCUB6oc7kcC-n
43,366
https://github.com/huggingface/transformers/issues/43366
https://api.github.com/repos/huggingface/transformers/issues/43366
Support GGUF models with the gpt-oss architecture
### System Info - `transformers` version: 4.57.5 - Platform: Linux-5.4.0-193-generic-x86_64-with-glibc2.35 - Python version: 3.11.7 - Huggingface_hub version: 0.36.0 - Safetensors version: 0.7.0 - Accelerate version: not installed - Accelerate config: not found - DeepSpeed version: not installed - PyTorch version (acc...
open
null
false
5
[ "Good Second Issue", "Feature request", "bug" ]
[]
2026-01-20T08:03:36Z
2026-02-12T21:10:20Z
null
NONE
null
20260325T173244Z
2026-03-25T17:32:44Z
akayunov
13,577,138
MDQ6VXNlcjEzNTc3MTM4
User
false
huggingface/transformers
3,834,922,327
I_kwDOCUB6oc7klEVX
43,377
https://github.com/huggingface/transformers/issues/43377
https://api.github.com/repos/huggingface/transformers/issues/43377
[BUG] MIMI Encoder produces different outputs for batched vs single inputs due to missing padding mask support
### System Info * `transformers` version: `5.0.0.dev0` * Platform: `Linux-5.15.167.4-microsoft-standard-WSL2-x86_64-with-glibc2.39` * Python version: `3.12.3` * `huggingface_hub` version: `1.3.2` * `safetensors` version: `0.7.0` * `accelerate` version: `1.12.0` * Accelerate config: `not installed` * DeepSpeed version:...
closed
completed
false
1
[ "bug" ]
[]
2026-01-20T18:29:14Z
2026-03-15T08:06:43Z
2026-03-15T08:06:43Z
CONTRIBUTOR
null
20260325T173244Z
2026-03-25T17:32:44Z
harshaljanjani
75,426,551
MDQ6VXNlcjc1NDI2NTUx
User
false
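A sketch of the batched-vs-single comparison described above, using the `kyutai/mimi` checkpoint named by the model class (the random waveforms and padding handling are illustrative assumptions):

```python
# Sketch: encode a short clip alone and inside a padded batch, then compare.
# Random waveforms and padding handling are illustrative assumptions.
import torch
from transformers import AutoFeatureExtractor, MimiModel

model = MimiModel.from_pretrained("kyutai/mimi").eval()
fe = AutoFeatureExtractor.from_pretrained("kyutai/mimi")
sr = fe.sampling_rate

short = torch.randn(sr)       # 1 s clip
long = torch.randn(2 * sr)    # 2 s clip, forcing the short one to be padded

with torch.no_grad():
    alone = model.encode(short[None, None, :]).audio_codes
    batch = fe([short.numpy(), long.numpy()], sampling_rate=sr, padding=True, return_tensors="pt")
    batched = model.encode(batch["input_values"], padding_mask=batch.get("padding_mask")).audio_codes

n = alone.shape[-1]
print(torch.equal(alone[0, :, :n], batched[0, :, :n]))  # the report observes False
```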
huggingface/transformers
3,835,812,087
I_kwDOCUB6oc7kodj3
43,381
https://github.com/huggingface/transformers/issues/43381
https://api.github.com/repos/huggingface/transformers/issues/43381
Gradient checkpointing cannot be used in eval mode
### System Info - `transformers` version: 4.57.1 - Platform: Linux-6.17.0-8-generic-x86_64-with-glibc2.42 - Python version: 3.12.9 - Huggingface_hub version: 0.34.3 - Safetensors version: 0.5.3 - Accelerate version: 1.9.0 - Accelerate config: not found - DeepSpeed version: not installed - PyTorch version (accelerat...
closed
completed
false
5
[ "bug" ]
[]
2026-01-20T22:58:34Z
2026-03-02T06:53:05Z
2026-03-01T08:02:58Z
NONE
null
20260325T173244Z
2026-03-25T17:32:44Z
smarter
63,430
MDQ6VXNlcjYzNDMw
User
false
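The behaviour above follows from the `self.training` gate around checkpointed forwards. A sketch of one commonly used workaround, assuming eval-style dropout is what the caller actually wants: stay in train mode but zero out dropout probabilities.

```python
# Sketch: checkpointing is gated on module.training, so model.eval() silently
# disables it. Workaround shown: keep train mode but zero out dropout.
import torch
from transformers import AutoModelForCausalLM

model = AutoModelForCausalLM.from_pretrained("gpt2")  # illustrative checkpoint
model.gradient_checkpointing_enable()

model.eval()  # gradient checkpointing is now skipped internally

# Workaround: train mode for the checkpointing gate, eval semantics for dropout.
model.train()
for module in model.modules():
    if isinstance(module, torch.nn.Dropout):
        module.p = 0.0

input_ids = torch.tensor([[0, 1, 2, 3]])
out = model(input_ids, labels=input_ids)
out.loss.backward()  # recomputation happens during this backward pass
```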
huggingface/transformers
3,836,016,486
I_kwDOCUB6oc7kpPdm
43,383
https://github.com/huggingface/transformers/issues/43383
https://api.github.com/repos/huggingface/transformers/issues/43383
Parakeet model limited to ~8 minutes of audio, yet NeMo supports hours-long audio
I noticed that the current Parakeet model appears to only support ~8 minutes of audio input before running into length limitations (likely tied to the max positional encoding constraint, which is 5000). However, in NeMo, the same ASR models (`nvidia/parakeet-ctc-0.6b` and `nvidia/parakeet-ctc-1.1b`) can process hours-long aud...
closed
completed
false
3
[]
[]
2026-01-21T00:22:36Z
2026-03-04T08:06:00Z
2026-03-04T08:06:00Z
NONE
null
20260325T173244Z
2026-03-25T17:32:44Z
penguinwang96825
28,087,825
MDQ6VXNlcjI4MDg3ODI1
User
false
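A transformers-side mitigation sketch for the length limit above: chunked inference through the ASR pipeline (chunk and stride values are illustrative, and this is fixed-window chunking rather than NeMo's long-audio support):

```python
# Sketch: transcribe long audio in fixed-size chunks to stay under the
# positional-encoding limit. Chunk/stride values are illustrative.
from transformers import pipeline

asr = pipeline(
    "automatic-speech-recognition",
    model="nvidia/parakeet-ctc-1.1b",  # repo id taken from the report
    chunk_length_s=30,                 # well under the ~8 minute ceiling
    stride_length_s=5,                 # overlap to stitch chunk boundaries
)
print(asr("hours_long_recording.wav")["text"])
```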
huggingface/transformers
3,836,753,591
I_kwDOCUB6oc7ksDa3
43,386
https://github.com/huggingface/transformers/issues/43386
https://api.github.com/repos/huggingface/transformers/issues/43386
Support other types of model inputs for continuous batching
### Feature request Unless I'm mistaken, the continuous batching API currently does not seem to support any input modality other than token ids via `input_ids`. VLMs require inputs such as `pixel_values`, which are not accepted in `add_request()`; see the sketch after this record. https://github.com/huggingface/transformers/blob/v5.0.0rc2...
open
null
false
5
[ "Feature request" ]
[]
2026-01-21T06:13:13Z
2026-03-04T15:36:33Z
null
CONTRIBUTOR
null
20260325T173244Z
2026-03-25T17:32:44Z
yhshin11
5,031,800
MDQ6VXNlcjUwMzE4MDA=
User
false
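To make the gap above concrete: a VLM processor emits model-ready tensors beyond `input_ids`, so a token-ids-only `add_request()` has nowhere to put them. A sketch with an illustrative LLaVA checkpoint:

```python
# Sketch: the processor output for a VLM includes pixel tensors that a
# token-ids-only add_request() cannot accept. Checkpoint id is illustrative.
from PIL import Image
from transformers import AutoProcessor

processor = AutoProcessor.from_pretrained("llava-hf/llava-1.5-7b-hf")
inputs = processor(
    images=Image.new("RGB", (336, 336)),
    text="USER: <image>\nDescribe the image. ASSISTANT:",
    return_tensors="pt",
)
print(sorted(inputs.keys()))  # ['attention_mask', 'input_ids', 'pixel_values']
```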
huggingface/transformers
3,837,507,900
I_kwDOCUB6oc7ku7k8
43,388
https://github.com/huggingface/transformers/issues/43388
https://api.github.com/repos/huggingface/transformers/issues/43388
gather_for_metrics incorrectly drops label elements in the last batch when labels is a tuple of several label types (e.g. as used by Mask2Former)
### System Info accelerate==1.7.0 (but code is the same also in current 1.12.0) transformers==4.53.0.dev0 torch==2.6.0 python3.10 ### Who can help? @yonigozlan @molbap @SunMarc ### Information - [x] The official example scripts - [x] My own modified scripts ### Tasks - [x] An officially supported task in the `e...
closed
completed
false
4
[ "Examples", "bug" ]
[]
2026-01-21T10:05:53Z
2026-03-02T08:08:56Z
2026-03-02T08:08:56Z
NONE
null
20260325T173244Z
2026-03-25T17:32:44Z
J-Bracke
74,211,542
MDQ6VXNlcjc0MjExNTQy
User
false
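A self-contained sketch of the scenario above, with Accelerate and synthetic tensors in place of the Mask2Former label structures (all shapes and values are illustrative):

```python
# Sketch: pass a tuple of per-sample label tensors through
# gather_for_metrics, mimicking an uneven last batch. Shapes are illustrative.
import torch
from accelerate import Accelerator

accelerator = Accelerator()

class_labels = torch.arange(3)       # one entry per sample
mask_labels = torch.ones(3, 4, 4)    # one mask per sample

gathered = accelerator.gather_for_metrics((class_labels, mask_labels))
print([t.shape for t in gathered])
# Per the report, on multi-process runs the duplicate-dropping in the last
# batch can truncate the tuple elements inconsistently, desynchronizing the
# two label types.
```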
huggingface/transformers
3,841,563,178
I_kwDOCUB6oc7k-Zoq
43,404
https://github.com/huggingface/transformers/issues/43404
https://api.github.com/repos/huggingface/transformers/issues/43404
Bug: lm_head weight not tied in Mistral3ForConditionalGeneration (affects AutoModelForImageTextToText)
### System Info - `transformers` version: 5.0.0.dev0 - Platform: Linux-5.15.133+-x86_64-with-glibc2.35 - Python version: 3.12.0 - PyTorch version: 2.9.0+cu126 - CUDA/cuDNN version: 12.6 - GPU: Tesla T4 (compute capability 7.5) ### Who can help? @zucchini-nlp @ArthurZucker @amyeroberts ### Information - [x] The o...
closed
completed
false
6
[ "bug" ]
[]
2026-01-22T07:06:48Z
2026-01-26T09:30:44Z
2026-01-26T09:30:44Z
CONTRIBUTOR
null
20260325T173244Z
2026-03-25T17:32:44Z
aswin00000
91,382,951
MDQ6VXNlcjkxMzgyOTUx
User
false
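A quick tying diagnostic for the report above (the checkpoint id is an assumption and a large download): tied tensors share storage, so their data pointers must match.

```python
# Sketch: tied embeddings share storage with lm_head, so the data pointers
# match. Checkpoint id is an illustrative assumption.
from transformers import AutoModelForImageTextToText

model = AutoModelForImageTextToText.from_pretrained(
    "mistralai/Mistral-Small-3.1-24B-Instruct-2503"
)
emb = model.get_input_embeddings().weight
head = model.get_output_embeddings().weight
# The report says this prints False on affected versions even when the
# config requests tying.
print(emb.data_ptr() == head.data_ptr())
```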
huggingface/transformers
3,842,899,772
I_kwDOCUB6oc7lDf88
43,408
https://github.com/huggingface/transformers/issues/43408
https://api.github.com/repos/huggingface/transformers/issues/43408
Warning: You are using a model of type sam3_video to instantiate a model of type sam3_tracker
### System Info When using the sample code to create a SAM3 Tracker Model like: ```python from transformers import Sam3TrackerProcessor, Sam3TrackerModel model = Sam3TrackerModel.from_pretrained("facebook/sam3") processor = Sam3TrackerProcessor.from_pretrained("facebook/sam3") ``` The following warning will be displ...
closed
completed
false
5
[ "bug" ]
[]
2026-01-22T13:12:57Z
2026-02-05T13:03:34Z
2026-02-05T13:03:34Z
NONE
null
20260325T173244Z
2026-03-25T17:32:44Z
gboeer
1,067,159
MDQ6VXNlcjEwNjcxNTk=
User
false
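The sample code flattened into the body above, restored as a runnable block exactly as quoted in the report:

```python
from transformers import Sam3TrackerProcessor, Sam3TrackerModel

model = Sam3TrackerModel.from_pretrained("facebook/sam3")
processor = Sam3TrackerProcessor.from_pretrained("facebook/sam3")
# Loading prints "You are using a model of type sam3_video to instantiate
# a model of type sam3_tracker", per the report.
```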
huggingface/transformers
3,843,302,597
I_kwDOCUB6oc7lFCTF
43,412
https://github.com/huggingface/transformers/issues/43412
https://api.github.com/repos/huggingface/transformers/issues/43412
gemma3n ExecuTorch export fails: missing `self.training` guard and `erfinv` not supported
### System Info - transformers 5.0.0rc1 - torch 2.9.1 - executorch 1.0.1 - python 3.11 - ubuntu 24.04 ### Who can help? @ArthurZucker @younesbelkada ### Information - The official example scripts ### Tasks - An officially supported task in the `examples` folder ### Problem description When exporting gemma3n models to executorch pt...
closed
completed
false
5
[]
[]
2026-01-22T14:51:37Z
2026-03-29T08:08:17Z
2026-03-29T08:08:17Z
NONE
null
20260407T090028Z
2026-04-07T09:00:28Z
maceip
804,368
MDQ6VXNlcjgwNDM2OA==
User
false
huggingface/transformers
3,844,198,299
I_kwDOCUB6oc7lIc-b
43,421
https://github.com/huggingface/transformers/issues/43421
https://api.github.com/repos/huggingface/transformers/issues/43421
[FEATURE] TokenizersBackend does not update post-processor when special tokens are modified at runtime
### System Info * `transformers` version: `5.0.0.dev0` * Platform: `Linux-5.15.167.4-microsoft-standard-WSL2-x86_64-with-glibc2.39` * Python version: `3.12.3` * `huggingface_hub` version: `1.3.2` * `safetensors` version: `0.7.0` * `accelerate` version: `1.12.0` * Accelerate config: `not installed` * DeepSpeed version:...
closed
completed
false
3
[ "bug" ]
[]
2026-01-22T18:29:02Z
2026-03-09T08:09:27Z
2026-03-09T08:09:27Z
CONTRIBUTOR
null
20260325T173244Z
2026-03-25T17:32:44Z
harshaljanjani
75,426,551
MDQ6VXNlcjc1NDI2NTUx
User
false
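The stale-post-processor behaviour above can be observed with any template-based tokenizer; a sketch with an illustrative BERT checkpoint, swapping the `[SEP]` token at runtime:

```python
# Sketch: swap a special token at runtime and see whether the tokenizers
# post-processor picks it up. The BERT checkpoint is an illustrative assumption.
from transformers import AutoTokenizer

tok = AutoTokenizer.from_pretrained("bert-base-uncased")
before = tok("hi")["input_ids"]

tok.add_special_tokens({"sep_token": "[NEW_SEP]"})
after = tok("hi")["input_ids"]

print(before)
print(after)
print(tok.sep_token_id)
# Per the report, the post-processor keeps appending the old [SEP] id even
# though sep_token_id now points at the new token.
```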
huggingface/transformers
3,844,895,799
I_kwDOCUB6oc7lLHQ3
43,425
https://github.com/huggingface/transformers/issues/43425
https://api.github.com/repos/huggingface/transformers/issues/43425
Torch 2.10 incompatible
### System Info transformers 4.57.6 torch 2.10 accelerate 1.12.0 ### Who can help? _No response_ ### Information - [ ] The official example scripts - [ ] My own modified scripts ### Tasks - [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...) - [ ] My own task or dataset (give det...
closed
not_planned
false
9
[ "bug" ]
[]
2026-01-22T21:51:30Z
2026-03-06T22:09:41Z
2026-02-06T22:24:23Z
MEMBER
null
20260325T173244Z
2026-03-25T17:32:44Z
qgallouedec
45,557,362
MDQ6VXNlcjQ1NTU3MzYy
User
false
huggingface/transformers
3,846,965,862
I_kwDOCUB6oc7lTApm
43,441
https://github.com/huggingface/transformers/issues/43441
https://api.github.com/repos/huggingface/transformers/issues/43441
[BUG] Ministral-3 fails with FlashAttention in Transformers v5 RC
### System Info **Description** In Transformers `5.0.0rc3`, running `mistralai/Ministral-3-8B-Instruct-2512` with FlashAttention enabled results in an `IndexError` during the attention forward pass. This issue does not occur when FlashAttention is disabled. --- **Model** * `mistralai/Ministral-3-8B-Instruct-2512`...
closed
completed
false
1
[ "bug" ]
[]
2026-01-23T11:00:47Z
2026-01-26T08:18:44Z
2026-01-26T08:18:44Z
NONE
null
20260325T173244Z
2026-03-25T17:32:44Z
dysby
28,685,434
MDQ6VXNlcjI4Njg1NDM0
User
false
huggingface/transformers
3,847,750,539
I_kwDOCUB6oc7lWAOL
43,450
https://github.com/huggingface/transformers/issues/43450
https://api.github.com/repos/huggingface/transformers/issues/43450
Video processors return incorrect shape when input is batched
### System Info - `transformers` version: 4.57.6 - Platform: Linux-5.10.0-37-cloud-amd64-x86_64-with-glibc2.31 - Python version: 3.10.19 - Huggingface_hub version: 0.36.0 - Safetensors version: 0.7.0 - Accelerate version: not installed - Accelerate config: not found - DeepSpeed version: not installed - PyTorch version...
closed
completed
false
0
[ "bug" ]
[]
2026-01-23T14:42:11Z
2026-01-30T10:27:51Z
2026-01-30T10:27:51Z
NONE
null
20260325T173244Z
2026-03-25T17:32:44Z
shaform
367,172
MDQ6VXNlcjM2NzE3Mg==
User
false
huggingface/transformers
3,848,223,339
I_kwDOCUB6oc7lXzpr
43,452
https://github.com/huggingface/transformers/issues/43452
https://api.github.com/repos/huggingface/transformers/issues/43452
gguf_file breaks for AutoTokenizer.from_pretrained and AutoModelForCausalLM.from_pretrained
### System Info - `transformers` version: 5.0.0.dev0 - Platform: macOS-15.6-arm64-arm-64bit-Mach-O - Python version: 3.13.2 - Huggingface_hub version: 1.3.1 - Safetensors version: 0.5.3 - Accelerate version: 1.12.0 - Accelerate config: not found - DeepSpeed version: not installed - PyTorch version (accelerator?): 2...
closed
completed
false
2
[ "bug" ]
[]
2026-01-23T16:34:19Z
2026-01-24T18:07:17Z
2026-01-24T18:07:17Z
MEMBER
null
20260325T173244Z
2026-03-25T17:32:44Z
xenova
26,504,141
MDQ6VXNlcjI2NTA0MTQx
User
false
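For context, this is the GGUF loading path the report above says breaks; the repo id and filename below are illustrative placeholders, not taken from the report:

```python
# Sketch: the gguf_file code path reported as broken. Repo id and filename
# are illustrative placeholders.
from transformers import AutoModelForCausalLM, AutoTokenizer

repo_id = "TheBloke/TinyLlama-1.1B-Chat-v1.0-GGUF"
gguf_file = "tinyllama-1.1b-chat-v1.0.Q4_K_M.gguf"

tokenizer = AutoTokenizer.from_pretrained(repo_id, gguf_file=gguf_file)
model = AutoModelForCausalLM.from_pretrained(repo_id, gguf_file=gguf_file)
```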
huggingface/transformers
3,848,651,246
I_kwDOCUB6oc7lZcHu
43,454
https://github.com/huggingface/transformers/issues/43454
https://api.github.com/repos/huggingface/transformers/issues/43454
[BUG] AyaVisionConfig fails to tie lm_head weights causing garbage text generation
### System Info * `transformers` version: `5.0.0.dev0` * Platform: `Linux-5.15.167.4-microsoft-standard-WSL2-x86_64-with-glibc2.39` * Python version: `3.12.3` * `huggingface_hub` version: `1.3.2` * `safetensors` version: `0.7.0` * `accelerate` version: `1.12.0` * Accelerate config: `not installed` * DeepSpeed version:...
closed
completed
false
0
[ "bug" ]
[]
2026-01-23T18:28:13Z
2026-01-24T17:12:22Z
2026-01-24T17:12:22Z
CONTRIBUTOR
null
20260325T173244Z
2026-03-25T17:32:44Z
harshaljanjani
75,426,551
MDQ6VXNlcjc1NDI2NTUx
User
false
huggingface/transformers
3,852,393,114
I_kwDOCUB6oc7lntqa
43,472
https://github.com/huggingface/transformers/issues/43472
https://api.github.com/repos/huggingface/transformers/issues/43472
Introduce a standardized BatchLinear module for MoE architectures to facilitate PEFT and quantization
### Feature request Starting from Transformers v5, the Linear modules of MoE experts are fused into a single module. While this improves MoE speed, it makes it difficult for downstream libraries to adapt to the change. I propose introducing a standardized `BatchLinear` (or `MoELinear`) module within transformers. This modu...
open
null
false
3
[ "Feature request" ]
[]
2026-01-25T01:50:15Z
2026-01-25T16:15:17Z
null
NONE
null
20260325T173244Z
2026-03-25T17:32:44Z
ITcarrot
41,422,161
MDQ6VXNlcjQxNDIyMTYx
User
false
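To make the proposal above concrete, one possible shape for such a module, sketched from the description alone (the name `BatchLinear` and its interface are the issue author's proposal, not an existing transformers API):

```python
# Sketch of the proposed module: one fused parameter holding all experts,
# applied with a batched matmul. Interface details are assumptions.
import torch
from torch import nn

class BatchLinear(nn.Module):
    """num_experts independent Linear layers stored as a single 3-D weight."""

    def __init__(self, num_experts: int, in_features: int, out_features: int):
        super().__init__()
        self.weight = nn.Parameter(torch.empty(num_experts, out_features, in_features))
        self.bias = nn.Parameter(torch.zeros(num_experts, out_features))
        nn.init.normal_(self.weight, std=0.02)

    def forward(self, hidden_states: torch.Tensor) -> torch.Tensor:
        # hidden_states: (num_experts, tokens_per_expert, in_features)
        out = torch.bmm(hidden_states, self.weight.transpose(1, 2))
        return out + self.bias[:, None, :]

layer = BatchLinear(num_experts=8, in_features=16, out_features=32)
print(layer(torch.randn(8, 4, 16)).shape)  # torch.Size([8, 4, 32])
```

A single fused parameter like this would give PEFT and quantization tools one well-known module type to target, rather than per-architecture fused expert layouts.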