repo string | github_id int64 | github_node_id string | number int64 | html_url string | api_url string | title string | body string | state string | state_reason string | locked bool | comments_count int64 | labels list | assignees list | created_at string | updated_at string | closed_at string | author_association string | milestone_title string | snapshot_id string | extracted_at string | author_login string | author_id int64 | author_node_id string | author_type string | author_site_admin bool |
|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
huggingface/transformers | 3,852,606,803 | I_kwDOCUB6oc7loh1T | 43,474 | https://github.com/huggingface/transformers/issues/43474 | https://api.github.com/repos/huggingface/transformers/issues/43474 | [SAM3 Video] get_text_features returns CLIPTextModelOutput but code expects tensor | ### System Info
- `transformers` version: main branch
- Platform: Linux (CUDA)
- Python version: 3.11
### Who can help?
@yonigozlan @amyeroberts
### Reproduction
When using `Sam3VideoModel` with text prompts:
AttributeError: 'CLIPTextModelOutput' object has no attribute 'shape'
**Root Cause:** ... | closed | completed | false | 2 | [] | [] | 2026-01-25T05:16:50Z | 2026-01-26T10:38:54Z | 2026-01-26T10:38:54Z | NONE | null | 20260325T173244Z | 2026-03-25T17:32:44Z | ncerny | 4,643,765 | MDQ6VXNlcjQ2NDM3NjU= | User | false |
huggingface/transformers | 3,852,781,688 | I_kwDOCUB6oc7lpMh4 | 43,475 | https://github.com/huggingface/transformers/issues/43475 | https://api.github.com/repos/huggingface/transformers/issues/43475 | [SAM 3 Video] Sam3VisionEncoderOutput object has no attribute 'fpn_position_embeddings' | ### System Info
Version of `transformers`: main branch (`5.0.0.dev0`)
Platform: `Linux-6.6.105+-x86_64-with-glibc2.35`
Python version: `3.12.12`
### Who can help?
@yonigozlan @molbap
### Information
- [x] The official example scripts
- [ ] My own modified scripts
### Tasks
- [ ] An officially supported task in t... | closed | completed | false | 4 | [
"bug"
] | [] | 2026-01-25T07:28:15Z | 2026-01-26T10:38:55Z | 2026-01-26T10:38:54Z | NONE | null | 20260325T173244Z | 2026-03-25T17:32:44Z | vydpnguyen | 114,444,436 | U_kgDOBtJIlA | User | false |
huggingface/transformers | 3,854,179,064 | I_kwDOCUB6oc7luhr4 | 43,479 | https://github.com/huggingface/transformers/issues/43479 | https://api.github.com/repos/huggingface/transformers/issues/43479 | Phi4MultimodalConfig incorrectly initializes default vision/audio configs when passed as None | ### System Info
> Note: This bug report has been refined by AI. I have reviewed the report and take full responsibility for this issue.
## 🐛 Bug description
In `Phi4MultimodalConfig.__init__`, there are two issues related to default initialization of multimodal sub-configs:
1. When `vision_config` is `None`, a... | closed | completed | false | 0 | [
"bug"
] | [] | 2026-01-26T01:26:15Z | 2026-01-26T13:47:34Z | 2026-01-26T13:47:34Z | CONTRIBUTOR | null | 20260325T173244Z | 2026-03-25T17:32:44Z | charlieJ107 | 42,380,833 | MDQ6VXNlcjQyMzgwODMz | User | false |
huggingface/transformers | 3,854,690,659 | I_kwDOCUB6oc7lwelj | 43,482 | https://github.com/huggingface/transformers/issues/43482 | https://api.github.com/repos/huggingface/transformers/issues/43482 | Qwen2.5-GGUF loading failed with transformers v5 | ### System Info
transformers v5.0.0rc3
### Who can help?
_No response_
### Information
- [ ] The official example scripts
- [x] My own modified scripts
### Tasks
- [x] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [ ] My own task or dataset (give details below)
### Reproducti... | closed | completed | false | 1 | [
"bug"
] | [] | 2026-01-26T06:35:09Z | 2026-01-26T08:17:02Z | 2026-01-26T08:17:02Z | NONE | null | 20260325T173244Z | 2026-03-25T17:32:44Z | Kaihui-intel | 93,114,262 | U_kgDOBYzPlg | User | false |
huggingface/transformers | 3,855,376,489 | I_kwDOCUB6oc7lzGBp | 43,489 | https://github.com/huggingface/transformers/issues/43489 | https://api.github.com/repos/huggingface/transformers/issues/43489 | Transformers' version 5 is out! | Thank you all for your patience while we released RC0, RC1, RC2, and RC3.
We're looking forward to collaborating with you on a stronger, more stable, and faster foundation. Please keep the feedback coming in GitHub issues; this is only the beginning 🙌
@huggingface/transformers-core-maintainers | closed | completed | false | 7 | [] | [] | 2026-01-26T10:26:52Z | 2026-03-26T13:39:56Z | 2026-03-07T08:02:57Z | MEMBER | null | 20260407T090028Z | 2026-04-07T09:00:28Z | LysandreJik | 30,755,778 | MDQ6VXNlcjMwNzU1Nzc4 | User | false |
huggingface/transformers | 3,855,518,030 | I_kwDOCUB6oc7lzolO | 43,493 | https://github.com/huggingface/transformers/issues/43493 | https://api.github.com/repos/huggingface/transformers/issues/43493 | SigLIP2 discrepancy between HF implementation and original JAX implementation | ### System Info
Google Colab
- `transformers` version: 4.57.6
- Platform: Linux-6.6.105+-x86_64-with-glibc2.35
- Python version: 3.12.12
- Huggingface_hub version: 0.36.0
- Safetensors version: 0.7.0
- Accelerate version: 1.12.0
- Accelerate config: not found
- DeepSpeed version: not installed
- PyTorch version (acc... | closed | completed | false | 4 | [
"bug"
] | [] | 2026-01-26T11:10:24Z | 2026-03-07T08:02:56Z | 2026-03-07T08:02:56Z | NONE | null | 20260325T173244Z | 2026-03-25T17:32:44Z | nmilosev | 11,432,491 | MDQ6VXNlcjExNDMyNDkx | User | false |
huggingface/transformers | 3,856,655,225 | I_kwDOCUB6oc7l3-N5 | 43,502 | https://github.com/huggingface/transformers/issues/43502 | https://api.github.com/repos/huggingface/transformers/issues/43502 | API requests are made despite setting local_files_only=True. | ### System Info
API requests are made despite setting `local_files_only=True` when loading a pretrained tokenizer.
Here is the relevant part of the code that triggers the requests.
https://github.com/huggingface/transformers/blob/9495ae2880d53dcb8e91cad5f618787c0cfc1c96/src/transformers/tokenization_utils_tokenizer... | closed | completed | false | 12 | [
"bug"
] | [] | 2026-01-26T16:20:44Z | 2026-04-13T10:00:55Z | 2026-04-13T10:00:55Z | NONE | null | 20260414T122001Z | 2026-04-14T12:20:01Z | haok1402 | 89,672,451 | MDQ6VXNlcjg5NjcyNDUx | User | false |
huggingface/transformers | 3,857,129,808 | I_kwDOCUB6oc7l5yFQ | 43,504 | https://github.com/huggingface/transformers/issues/43504 | https://api.github.com/repos/huggingface/transformers/issues/43504 | [BUG] BeitForSemanticSegmentation fails to load pretrained model preset due to a legacy field | ### System Info
- `transformers` version: `5.0.1.dev0`
- Platform: `Linux-5.15.167.4-microsoft-standard-WSL2-x86_64-with-glibc2.39`
- Python version: `3.12.3`
- Huggingface_hub version: `1.3.3`
- Safetensors version: `0.7.0`
- Accelerate version: `1.12.0`
- PyTorch version (accelerator?): `2.10.0+cu128 (CUDA)`
- GPU t... | closed | completed | false | 0 | [
"bug"
] | [] | 2026-01-26T18:28:17Z | 2026-04-18T09:14:15Z | 2026-02-05T13:33:52Z | CONTRIBUTOR | null | 20260418T100536Z | 2026-04-18T10:05:36Z | harshaljanjani | 75,426,551 | MDQ6VXNlcjc1NDI2NTUx | User | false |
huggingface/transformers | 3,857,795,347 | I_kwDOCUB6oc7l8UkT | 43,508 | https://github.com/huggingface/transformers/issues/43508 | https://api.github.com/repos/huggingface/transformers/issues/43508 | Transformers 5.0.0 calls torch.is_autocast_enabled(device_type) breaking compatibility with torch<2.4 | In `transformers==5.0.0`, a call to `torch.is_autocast_enabled(device_type)` was introduced.
That function signature (accepting a `device_type: str` argument) was only added in **PyTorch 2.4.0**.
However, the current minimum PyTorch version specified by Transformers is **torch>=2.2**.
On torch 2.2.x and 2.3.x, `to... | closed | completed | false | 2 | [
"bug"
] | [] | 2026-01-26T21:42:45Z | 2026-01-30T13:25:51Z | 2026-01-30T13:25:51Z | NONE | null | 20260325T173244Z | 2026-03-25T17:32:44Z | oliverholworthy | 1,216,955 | MDQ6VXNlcjEyMTY5NTU= | User | false |
huggingface/transformers | 3,859,303,687 | I_kwDOCUB6oc7mCE0H | 43,519 | https://github.com/huggingface/transformers/issues/43519 | https://api.github.com/repos/huggingface/transformers/issues/43519 | Incorrect timestamp calculation in Qwen3VL Processor | ### System Info
Should temporal_patch_size be used instead of merge_size for the timestamp calculation in this context?
https://github.com/huggingface/transformers/blob/dfe30827b8ebdd974eb7ce69c7d5d8cf8e6cf852/src/transformers/models/qwen3_vl/processing_qwen3_vl.py#L155
### Who can help?
_No response_
### Informati... | closed | completed | false | 4 | [
"bug"
] | [] | 2026-01-27T07:54:58Z | 2026-02-09T09:54:11Z | 2026-02-09T09:54:11Z | NONE | null | 20260325T173244Z | 2026-03-25T17:32:44Z | 666zz666 | 31,818,905 | MDQ6VXNlcjMxODE4OTA1 | User | false |
huggingface/transformers | 3,859,918,253 | I_kwDOCUB6oc7mEa2t | 43,522 | https://github.com/huggingface/transformers/issues/43522 | https://api.github.com/repos/huggingface/transformers/issues/43522 | Weights are not tied when model is loaded onto the `meta` device | This can cause issues if the `model.named_parameters` of a model on the `meta` device are examined because the parameter for the tied weight will still be present. | closed | completed | false | 0 | [] | [] | 2026-01-27T10:35:07Z | 2026-01-27T15:45:23Z | 2026-01-27T15:45:23Z | MEMBER | null | 20260325T173244Z | 2026-03-25T17:32:44Z | hmellor | 19,981,378 | MDQ6VXNlcjE5OTgxMzc4 | User | false |
huggingface/transformers | 3,860,517,917 | I_kwDOCUB6oc7mGtQd | 43,525 | https://github.com/huggingface/transformers/issues/43525 | https://api.github.com/repos/huggingface/transformers/issues/43525 | AttributeError: 'Llama4Config' object has no attribute 'pad_token_id' | ### System Info
transformers == 5.0.0
### Who can help?
@zucchini-nlp
### Information
- [ ] The official example scripts
- [x] My own modified scripts
### Tasks
- [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [ ] My own task or dataset (give details below)
### Reproducti... | closed | completed | false | 2 | [
"bug"
] | [] | 2026-01-27T13:07:03Z | 2026-01-30T11:37:27Z | 2026-01-30T11:37:27Z | CONTRIBUTOR | null | 20260325T173244Z | 2026-03-25T17:32:44Z | xin3he | 83,260,933 | MDQ6VXNlcjgzMjYwOTMz | User | false |
huggingface/transformers | 3,860,605,108 | I_kwDOCUB6oc7mHCi0 | 43,526 | https://github.com/huggingface/transformers/issues/43526 | https://api.github.com/repos/huggingface/transformers/issues/43526 | reduce_labels returns only one label instead of the whole array of labels in BeitImageProcessorFast | ### System Info
- `transformers` version: 5.0.1.dev0
- Platform: macOS-26.2-arm64-arm-64bit
- Python version: 3.12.11
- Huggingface_hub version: 1.3.1
- Safetensors version: 0.6.2
- Accelerate version: 1.11.0
- Accelerate config: not found
- DeepSpeed version: not installed
- PyTorch version (accelerator?): 2.8.0 (NA... | closed | completed | false | 0 | [
"bug"
] | [] | 2026-01-27T13:27:08Z | 2026-01-27T21:26:16Z | 2026-01-27T21:26:16Z | CONTRIBUTOR | null | 20260325T173244Z | 2026-03-25T17:32:44Z | sbucaille | 24,275,548 | MDQ6VXNlcjI0Mjc1NTQ4 | User | false |
huggingface/transformers | 3,860,975,090 | I_kwDOCUB6oc7mIc3y | 43,531 | https://github.com/huggingface/transformers/issues/43531 | https://api.github.com/repos/huggingface/transformers/issues/43531 | `sliding_window` issue with Qwen3-MoE models | ### System Info
- `transformers` version: 5.0.1.dev0
- Platform: Linux-5.15.186.el8-x86_64-with-glibc2.39
- Python version: 3.11.5
- Huggingface_hub version: 1.3.4
- Safetensors version: 0.7.0
- Accelerate version: 1.12.0
- Accelerate config: not found
- DeepSpeed version: not installed
- PyTorch version (accelerat... | closed | completed | false | 2 | [
"bug"
] | [] | 2026-01-27T14:51:34Z | 2026-02-04T10:44:31Z | 2026-02-04T10:44:31Z | CONTRIBUTOR | null | 20260325T173244Z | 2026-03-25T17:32:44Z | saattrupdan | 47,701,536 | MDQ6VXNlcjQ3NzAxNTM2 | User | false |
huggingface/transformers | 3,862,243,206 | I_kwDOCUB6oc7mNSeG | 43,540 | https://github.com/huggingface/transformers/issues/43540 | https://api.github.com/repos/huggingface/transformers/issues/43540 | ValueError when processing video inputs in Qwen3OmniMoe | ### System Info
## System Info
- `transformers` version: 5.0.1.dev0 (also affects v4.57.0 through v5.0.0)
- Platform: Linux-5.10.0-37-cloud-amd64-x86_64-with-glibc2.31
- Python version: 3.10.17
- PyTorch version: 2.10.0+cu128
- CUDA available: Yes
- CUDA version: 12.8
- GPU: NVIDIA A100-SXM4-80GB
### Who can help?
... | closed | completed | false | 1 | [
"bug"
] | [] | 2026-01-27T20:06:48Z | 2026-01-29T14:55:31Z | 2026-01-29T14:55:31Z | NONE | null | 20260325T173244Z | 2026-03-25T17:32:44Z | BBiering | 9,655,207 | MDQ6VXNlcjk2NTUyMDc= | User | false |
huggingface/transformers | 3,862,780,567 | I_kwDOCUB6oc7mPVqX | 43,541 | https://github.com/huggingface/transformers/issues/43541 | https://api.github.com/repos/huggingface/transformers/issues/43541 | RuntimeError: MixtralForCausalLM float32 model errors at grouped_mm op during torch dynamo tracing | ### System Info
- `transformers` version: 5.0.0
- Platform: Linux-6.8.0-1040-aws-x86_64-with-glibc2.35
- Python version: 3.10.12
- Huggingface_hub version: 1.3.4
- Safetensors version: 0.7.0
- Accelerate version: not installed
- Accelerate config: not found
- DeepSpeed version: not installed
- PyTorch version (acceler... | closed | completed | false | 4 | [
"bug"
] | [] | 2026-01-27T23:00:00Z | 2026-02-27T12:01:18Z | 2026-02-27T12:00:19Z | NONE | null | 20260325T173244Z | 2026-03-25T17:32:44Z | vakumar1 | 77,815,048 | MDQ6VXNlcjc3ODE1MDQ4 | User | false |
huggingface/transformers | 3,864,953,184 | I_kwDOCUB6oc7mXoFg | 43,550 | https://github.com/huggingface/transformers/issues/43550 | https://api.github.com/repos/huggingface/transformers/issues/43550 | [BUG] Bamba-9B-v2 model fails with torch.compile when using SDPA | ### System Info
* `transformers` version: `5.0.0.dev0`
* Platform: `Linux-5.15.167.4-microsoft-standard-WSL2-x86_64-with-glibc2.39`
* Python version: `3.12.3`
* `huggingface_hub` version: `1.3.2`
* `safetensors` version: `0.7.0`
* `accelerate` version: `1.12.0`
* Accelerate config: `not installed`
* DeepSpeed version:... | closed | completed | false | 5 | [
"bug"
] | [] | 2026-01-28T11:27:21Z | 2026-04-18T09:13:56Z | 2026-02-10T03:55:43Z | CONTRIBUTOR | null | 20260418T100536Z | 2026-04-18T10:05:36Z | harshaljanjani | 75,426,551 | MDQ6VXNlcjc1NDI2NTUx | User | false |
huggingface/transformers | 3,868,002,459 | I_kwDOCUB6oc7mjQib | 43,572 | https://github.com/huggingface/transformers/issues/43572 | https://api.github.com/repos/huggingface/transformers/issues/43572 | missing pad_token_idx in StableLmConfig after 5.0 update | ### System Info
transformers 5.0.0
trainium 2
python 3.10
pytorch 2.9
### Who can help?
@ArthurZucker @Cyrilvallez @itazap
### Information
- [ ] The official example scripts
- [x] My own modified scripts
### Tasks
- [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [x] My own... | closed | completed | false | 4 | [
"bug"
] | [] | 2026-01-29T00:17:39Z | 2026-01-30T11:37:27Z | 2026-01-30T11:37:27Z | CONTRIBUTOR | null | 20260325T173244Z | 2026-03-25T17:32:44Z | mario-aws | 172,859,788 | U_kgDOCk2hjA | User | false |
huggingface/transformers | 3,868,652,684 | I_kwDOCUB6oc7mlvSM | 43,575 | https://github.com/huggingface/transformers/issues/43575 | https://api.github.com/repos/huggingface/transformers/issues/43575 | Load `Qwen2-57B-A14B-Instruct` with tp lead to OOM | ### System Info
- `transformers` version: 4.57.3
- Platform: Linux-4.18.0-193.28.1.el8_2.x86_64-x86_64-with-glibc2.28
- Python version: 3.12.11
- Huggingface_hub version: 0.36.0
- Safetensors version: 0.6.2
- Accelerate version: 1.10.1
- Accelerate config: not found
- DeepSpeed version: not installed
- PyTorch version... | closed | completed | false | 3 | [
"bug"
] | [] | 2026-01-29T03:53:17Z | 2026-03-08T08:03:11Z | 2026-03-08T08:03:11Z | NONE | null | 20260325T173244Z | 2026-03-25T17:32:44Z | Hermit-w | 129,869,143 | U_kgDOB72lVw | User | false |
huggingface/transformers | 3,868,721,712 | I_kwDOCUB6oc7mmAIw | 43,576 | https://github.com/huggingface/transformers/issues/43576 | https://api.github.com/repos/huggingface/transformers/issues/43576 | `transformers env` command seems to be broken in v5 | ### System Info
transformers 5.0.0
torch 2.10.0
### Who can help?
_No response_
### Information
- [x] The official example scripts
- [ ] My own modified scripts
### Tasks
- [x] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [ ] My own task or dataset (give details below)
### R... | closed | completed | false | 6 | [
"bug"
] | [] | 2026-01-29T04:17:11Z | 2026-04-02T13:15:20Z | 2026-04-02T08:18:15Z | NONE | null | 20260407T090028Z | 2026-04-07T09:00:28Z | Hermit-w | 129,869,143 | U_kgDOB72lVw | User | false |
huggingface/transformers | 3,868,834,548 | I_kwDOCUB6oc7mmbr0 | 43,577 | https://github.com/huggingface/transformers/issues/43577 | https://api.github.com/repos/huggingface/transformers/issues/43577 | model.dtype and model.qformer.dtype remain float32 when loading Blip2 model with dtype=torch.float16 or torch.bfloat16 | ### System Info
- `transformers` version: 5.0.0rc3
- Platform: Linux-6.8.0-59-generic-x86_64-with-glibc2.35
- Python version: 3.12.12
- Huggingface_hub version: 1.3.4
- Safetensors version: 0.7.0
- Accelerate version: 1.12.0
- Accelerate config: not found
- DeepSpeed version: not installed
- PyTorch version (accele... | closed | completed | false | 1 | [
"bug"
] | [] | 2026-01-29T05:01:05Z | 2026-01-29T11:23:47Z | 2026-01-29T11:23:47Z | NONE | null | 20260325T173244Z | 2026-03-25T17:32:44Z | rebel-seinpark | 190,359,610 | U_kgDOC1ioOg | User | false |
huggingface/transformers | 3,869,506,565 | I_kwDOCUB6oc7mo_wF | 43,582 | https://github.com/huggingface/transformers/issues/43582 | https://api.github.com/repos/huggingface/transformers/issues/43582 | caching_allocator_warmup function raise TypeError on AppleSilicon M4 max | ### System Info
- `transformers` version: 5.0.0
- Platform: macOS-26.2-arm64-arm-64bit-Mach-O
- Python version: 3.14.2
- Huggingface_hub version: 1.3.4
- Safetensors version: 0.7.0
- Accelerate version: 1.12.0
- Accelerate config: not found
- DeepSpeed version: not installed
- PyTorch version (accelerator?): 2.10.0... | closed | completed | false | 2 | [
"bug"
] | [] | 2026-01-29T08:41:02Z | 2026-02-02T16:55:52Z | 2026-02-02T16:55:52Z | NONE | null | 20260325T173244Z | 2026-03-25T17:32:44Z | theVannu | 31,689,229 | MDQ6VXNlcjMxNjg5MjI5 | User | false |
huggingface/transformers | 3,870,603,524 | I_kwDOCUB6oc7mtLkE | 43,595 | https://github.com/huggingface/transformers/issues/43595 | https://api.github.com/repos/huggingface/transformers/issues/43595 | Tell Us: What Would Make Trainer Better? | # RFC: Trainer improvements
The Trainer class is a core component of the Transformers library, and we're looking to make it even better.
We're gathering input on potential improvements, new features, and pain points you've experienced with the Trainer class.
We're particularly interested in feedback on:
- **Train... | open | null | false | 16 | [
"trainer",
"Feature request",
"contributions-welcome"
] | [] | 2026-01-29T13:12:09Z | 2026-04-18T19:09:18Z | null | MEMBER | null | 20260418T200535Z | 2026-04-18T20:05:35Z | SunMarc | 57,196,510 | MDQ6VXNlcjU3MTk2NTEw | User | false |
huggingface/transformers | 3,870,771,717 | I_kwDOCUB6oc7mt0oF | 43,596 | https://github.com/huggingface/transformers/issues/43596 | https://api.github.com/repos/huggingface/transformers/issues/43596 | IndexError: index 0 is out of bounds for dimension 0 with size 0 with deepspeed zero3 init and BertModel | ### System Info
- `deepspeed` version: 0.18.4
- `transformers` version: 5.0.0
- Platform: Linux-5.14.21-150500.55.65_13.0.74-cray_shasta_c_64k-aarch64-with-glibc2.39
- Python version: 3.12.3
- Huggingface_hub version: 1.3.4
- Safetensors version: 0.5.3
- Accelerate version: 1.12.0
- Accelerate config: not found
- Dee... | closed | completed | false | 2 | [
"bug"
] | [] | 2026-01-29T13:44:57Z | 2026-01-31T00:35:34Z | 2026-01-31T00:35:34Z | NONE | null | 20260325T173244Z | 2026-03-25T17:32:44Z | MichelDucartier | 57,410,827 | MDQ6VXNlcjU3NDEwODI3 | User | false |
huggingface/transformers | 3,871,043,630 | I_kwDOCUB6oc7mu3Au | 43,597 | https://github.com/huggingface/transformers/issues/43597 | https://api.github.com/repos/huggingface/transformers/issues/43597 | Activation offloading for Trainer | ### Feature request
Activation offloading is implemented in TRL. It's common to all trainers, so it could be nice to upstream this feature.
https://github.com/huggingface/trl/blob/43fb8d310633448a0c4c731a2efe9c1ca55e6184/trl/trainer/sft_trainer.py#L898
(cc @kashif, IDK how extensively it's tested)
### Motivation
... | open | null | false | 1 | [
"Feature request"
] | [] | 2026-01-29T14:40:30Z | 2026-02-05T15:55:59Z | null | MEMBER | null | 20260325T173244Z | 2026-03-25T17:32:44Z | qgallouedec | 45,557,362 | MDQ6VXNlcjQ1NTU3MzYy | User | false |
huggingface/transformers | 3,871,080,245 | I_kwDOCUB6oc7mu_81 | 43,598 | https://github.com/huggingface/transformers/issues/43598 | https://api.github.com/repos/huggingface/transformers/issues/43598 | Revisit `remove_unused_column` in Trainer for better customizability | In Trainer, when `remove_unused_column=True`, the trainer will check the signature of the model and remove the columns of the dataset which don't match the signature.
In most trl trainers, we don't directly feed the model with the sampled data, see for example in DPO:
https://github.com/huggingface/trl/blob/7a530ba6d... | closed | completed | false | 1 | [] | [] | 2026-01-29T14:48:49Z | 2026-03-09T08:09:20Z | 2026-03-09T08:09:20Z | MEMBER | null | 20260325T173244Z | 2026-03-25T17:32:44Z | qgallouedec | 45,557,362 | MDQ6VXNlcjQ1NTU3MzYy | User | false |
huggingface/transformers | 3,871,136,601 | I_kwDOCUB6oc7mvNtZ | 43,599 | https://github.com/huggingface/transformers/issues/43599 | https://api.github.com/repos/huggingface/transformers/issues/43599 | Use a private `_metrics` dict to allow for additional metric logging | When defining your own trainer, you want to log your own metrics. Over time in TRL we've converged toward the use of this structure in all trainers:
```python
from collections import defaultdict
from transformers import Trainer
class MyTrainer(Trainer):
    def __init__(self, ...):
        ...
self._metr... | closed | completed | false | 1 | [] | [] | 2026-01-29T15:01:45Z | 2026-03-09T08:09:18Z | 2026-03-09T08:09:18Z | MEMBER | null | 20260325T173244Z | 2026-03-25T17:32:44Z | qgallouedec | 45,557,362 | MDQ6VXNlcjQ1NTU3MzYy | User | false |
huggingface/transformers | 3,871,174,924 | I_kwDOCUB6oc7mvXEM | 43,600 | https://github.com/huggingface/transformers/issues/43600 | https://api.github.com/repos/huggingface/transformers/issues/43600 | Account for custom trainers when trying to estimate the number of FLOPS | The Trainer estimates the number of FLOPs using the number of elements in the input tensor associated with the key `"input_ids"`. However, in most custom trainers, the sampled data does not include the `"input_ids"` key. For example, for GRPO, the available key is `"prompt"`. As a result, the trainer issues the warning... | closed | completed | false | 1 | [] | [] | 2026-01-29T15:09:53Z | 2026-02-03T17:29:46Z | 2026-02-03T17:29:45Z | MEMBER | null | 20260325T173244Z | 2026-03-25T17:32:44Z | qgallouedec | 45,557,362 | MDQ6VXNlcjQ1NTU3MzYy | User | false |
huggingface/transformers | 3,871,236,658 | I_kwDOCUB6oc7mvmIy | 43,601 | https://github.com/huggingface/transformers/issues/43601 | https://api.github.com/repos/huggingface/transformers/issues/43601 | Revisit the condition for calling `compute_loss` at eval | During eval, `Trainer` calls `prediction_step`. If no labels are present in the inputs, it only runs forward and
returns logits, and doesn't call `compute_loss`. Consequently, you can't get your custom loss at eval.
In trainers like DPO, we need to override the `prediction_step` like this to force the call to `compute_los... | closed | completed | false | 4 | [] | [] | 2026-01-29T15:21:59Z | 2026-03-09T08:09:15Z | 2026-03-09T08:09:15Z | MEMBER | null | 20260325T173244Z | 2026-03-25T17:32:44Z | qgallouedec | 45,557,362 | MDQ6VXNlcjQ1NTU3MzYy | User | false |
huggingface/transformers | 3,871,265,355 | I_kwDOCUB6oc7mvtJL | 43,602 | https://github.com/huggingface/transformers/issues/43602 | https://api.github.com/repos/huggingface/transformers/issues/43602 | Revisit the condition for calling `compute_metrics` | Related to https://github.com/huggingface/transformers/issues/43601
`Trainer` will only call `compute_metrics` when it has all three: `loss`, `labels`, and `logits`. But some trainers don’t provide `labels` (e.g., GRPO), and some setups don’t materialize `logits` (e.g., Liger). As a result, `compute_metrics` gets sile... | closed | completed | false | 6 | [] | [] | 2026-01-29T15:27:29Z | 2026-04-15T08:30:09Z | 2026-04-15T08:30:09Z | MEMBER | null | 20260415T224019Z | 2026-04-15T22:40:19Z | qgallouedec | 45,557,362 | MDQ6VXNlcjQ1NTU3MzYy | User | false |
huggingface/transformers | 3,871,334,543 | I_kwDOCUB6oc7mv-CP | 43,604 | https://github.com/huggingface/transformers/issues/43604 | https://api.github.com/repos/huggingface/transformers/issues/43604 | Revisit the condition for scaling the loss | Gradient accumulation requires scaled loss. Normally, loss scaling in the `Trainer` class depends on whether the
model accepts loss-related kwargs.
https://github.com/huggingface/transformers/blob/e7a2c0cc3471df9df0dd3ee739d1e1e034d549e0/src/transformers/trainer.py#L3827-L3930
In most custom trainers, we compute our ... | closed | completed | false | 1 | [] | [] | 2026-01-29T15:42:54Z | 2026-03-20T08:08:40Z | 2026-03-20T08:08:40Z | MEMBER | null | 20260325T173244Z | 2026-03-25T17:32:44Z | qgallouedec | 45,557,362 | MDQ6VXNlcjQ1NTU3MzYy | User | false |
huggingface/transformers | 3,872,022,872 | I_kwDOCUB6oc7mymFY | 43,606 | https://github.com/huggingface/transformers/issues/43606 | https://api.github.com/repos/huggingface/transformers/issues/43606 | [BUG][CI] suno/bark-small model fails with a device mismatch when using CPU offload | ### System Info
* `transformers` version: `5.0.0.dev0`
* Platform: `Linux-5.15.167.4-microsoft-standard-WSL2-x86_64-with-glibc2.39`
* Python version: `3.12.3`
* `huggingface_hub` version: `1.3.2`
* `safetensors` version: `0.7.0`
* `accelerate` version: `1.12.0`
* Accelerate config: `not installed`
* DeepSpeed version:... | closed | completed | false | 0 | [
"bug"
] | [] | 2026-01-29T18:28:15Z | 2026-04-18T09:13:25Z | 2026-01-30T13:11:07Z | CONTRIBUTOR | null | 20260418T100536Z | 2026-04-18T10:05:36Z | harshaljanjani | 75,426,551 | MDQ6VXNlcjc1NDI2NTUx | User | false |
huggingface/transformers | 3,873,501,453 | I_kwDOCUB6oc7m4PEN | 43,611 | https://github.com/huggingface/transformers/issues/43611 | https://api.github.com/repos/huggingface/transformers/issues/43611 | Transformers 5.0.0 breaks loading models with the `base_model_prefix` attribute | ### System Info
- `transformers` version: 5.0.0
- Platform: Linux-6.14.0-37-generic-x86_64-with-glibc2.39
- Python version: 3.12.3
- Huggingface_hub version: 1.3.5
- Safetensors version: 0.7.0
- Accelerate version: 1.12.0
- Accelerate config: not found
- DeepSpeed version: not installed
- PyTorch version (accelerat... | closed | completed | false | 4 | [
"bug"
] | [] | 2026-01-30T02:22:40Z | 2026-02-26T07:08:29Z | 2026-02-26T07:08:29Z | CONTRIBUTOR | null | 20260325T173244Z | 2026-03-25T17:32:44Z | umarbutler | 8,473,183 | MDQ6VXNlcjg0NzMxODM= | User | false |
huggingface/transformers | 3,875,056,472 | I_kwDOCUB6oc7m-KtY | 43,618 | https://github.com/huggingface/transformers/issues/43618 | https://api.github.com/repos/huggingface/transformers/issues/43618 | CLIPOutput attentions is no longer assigned | ### System Info
- transformers 5.0.0
- python 3.12.11
### Who can help?
@yonigozlan @molbap
### Information
- [ ] The official example scripts
- [X] My own modified scripts
### Tasks
- [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [X] My own task or dataset (give details ... | closed | completed | false | 1 | [
"bug",
"Vision"
] | [] | 2026-01-30T10:26:55Z | 2026-02-03T13:51:23Z | 2026-02-03T13:51:23Z | NONE | null | 20260325T173244Z | 2026-03-25T17:32:44Z | dbellavista-ai | 197,579,912 | U_kgDOC8bUiA | User | false |
huggingface/transformers | 3,876,019,160 | I_kwDOCUB6oc7nB1vY | 43,630 | https://github.com/huggingface/transformers/issues/43630 | https://api.github.com/repos/huggingface/transformers/issues/43630 | Add multilingual text classification examples to docs (Arabic, Chinese, etc.) | ## 🚀 Feature request
### Motivation
The [Text Classification guide](https://huggingface.co/docs/transformers/tasks/sequence_classification) currently only demonstrates fine-tuning on the English IMDb dataset using `distilbert-base-uncased`.
As someone working on Arabic NLP (sentiment analysis using models like `au... | closed | completed | false | 2 | [] | [] | 2026-01-30T14:42:31Z | 2026-03-11T08:07:55Z | 2026-03-11T08:07:55Z | NONE | null | 20260325T173244Z | 2026-03-25T17:32:44Z | salehA13 | 151,180,453 | U_kgDOCQLUpQ | User | false |
huggingface/transformers | 3,876,126,459 | I_kwDOCUB6oc7nCP77 | 43,632 | https://github.com/huggingface/transformers/issues/43632 | https://api.github.com/repos/huggingface/transformers/issues/43632 | Transformers v5 breaks the `_is_hf_initialized` flag | ### System Info
- `transformers` version: 5.0.0
- Platform: macOS-26.3-arm64-arm-64bit
- Python version: 3.12.9
- Huggingface_hub version: 1.3.4
- Safetensors version: 0.7.0
- Accelerate version: 1.12.0
- Accelerate config: not found
- DeepSpeed version: not installed
- PyTorch version (accelerator?): 2.9.1 (NA)
#... | closed | completed | false | 7 | [
"bug"
] | [] | 2026-01-30T15:10:41Z | 2026-02-03T16:24:24Z | 2026-02-03T16:24:24Z | CONTRIBUTOR | null | 20260325T173244Z | 2026-03-25T17:32:44Z | ZhiyuanChen | 28,757,366 | MDQ6VXNlcjI4NzU3MzY2 | User | false |
huggingface/transformers | 3,877,973,458 | I_kwDOCUB6oc7nJS3S | 43,638 | https://github.com/huggingface/transformers/issues/43638 | https://api.github.com/repos/huggingface/transformers/issues/43638 | IndexError: index 0 is out of bounds for dimension 0 with size 0 with deepspeed zero3 traininig and a non pretrained Bert model | ### System Info
- `deepspeed` version: 0.18.5
- `transformers` version: 5.0.0
- Platform: Linux-5.14.21-150500.55.65_13.0.74-cray_shasta_c_64k-aarch64-with-glibc2.39
- Python version: 3.12.3
- Huggingface_hub version: 1.3.5
- Safeten... | closed | completed | false | 5 | [
"bug"
] | [] | 2026-01-31T00:31:54Z | 2026-03-11T08:07:53Z | 2026-03-11T08:07:53Z | NONE | null | 20260325T173244Z | 2026-03-25T17:32:44Z | MichelDucartier | 57,410,827 | MDQ6VXNlcjU3NDEwODI3 | User | false |
huggingface/transformers | 3,878,651,061 | I_kwDOCUB6oc7nL4S1 | 43,643 | https://github.com/huggingface/transformers/issues/43643 | https://api.github.com/repos/huggingface/transformers/issues/43643 | `trust_remote_code=True` in `AutoConfig.from_pretrained` results in missing fields in returned object | ### System Info
- `transformers` version: 5.0.0
- Platform: Linux-6.8.0-85-generic-x86_64-with-glibc2.35
- Python version: 3.10.12
- Huggingface_hub version: 1.3.5
- Safetensors version: 0.7.0
- Accelerate version: not installed
- Accelerate config: not found
- DeepSpeed version: not installed
- PyTorch version (accel... | closed | completed | false | 2 | [
"bug"
] | [] | 2026-01-31T07:13:33Z | 2026-02-02T11:11:27Z | 2026-02-02T11:11:27Z | NONE | null | 20260325T173244Z | 2026-03-25T17:32:44Z | Lander-Hatsune | 36,358,465 | MDQ6VXNlcjM2MzU4NDY1 | User | false |
huggingface/transformers | 3,878,850,250 | I_kwDOCUB6oc7nMo7K | 43,644 | https://github.com/huggingface/transformers/issues/43644 | https://api.github.com/repos/huggingface/transformers/issues/43644 | Transformers 5.0.0 fills non-persistent buffers with junk | ### System Info
- `transformers` version: 5.0.0
- Platform: Linux-6.14.0-37-generic-x86_64-with-glibc2.39
- Python version: 3.12.3
- Huggingface_hub version: 1.3.5
- Safetensors version: 0.7.0
- Accelerate version: 1.12.0
- Accelerate config: not found
- DeepSpeed version: not installed
- PyTorch version (accelerat... | closed | completed | false | 3 | [
"bug"
] | [] | 2026-01-31T09:49:07Z | 2026-03-02T11:08:24Z | 2026-03-02T11:08:24Z | CONTRIBUTOR | null | 20260325T173244Z | 2026-03-25T17:32:44Z | umarbutler | 8,473,183 | MDQ6VXNlcjg0NzMxODM= | User | false |
huggingface/transformers | 3,878,856,941 | I_kwDOCUB6oc7nMqjt | 43,645 | https://github.com/huggingface/transformers/issues/43645 | https://api.github.com/repos/huggingface/transformers/issues/43645 | Transformers 5.0.0 breaks defining and then initializing custom models in Jupyter notebooks | ### System Info
- `transformers` version: 5.0.0
- Platform: Linux-6.14.0-37-generic-x86_64-with-glibc2.39
- Python version: 3.12.3
- Huggingface_hub version: 1.3.5
- Safetensors version: 0.7.0
- Accelerate version: 1.12.0
- Accelerate config: not found
- DeepSpeed version: not installed
- PyTorch version (accelerat... | closed | completed | false | 1 | [
"bug"
] | [] | 2026-01-31T09:53:08Z | 2026-02-03T13:20:19Z | 2026-02-03T13:20:19Z | CONTRIBUTOR | null | 20260325T173244Z | 2026-03-25T17:32:44Z | umarbutler | 8,473,183 | MDQ6VXNlcjg0NzMxODM= | User | false |
huggingface/transformers | 3,878,870,603 | I_kwDOCUB6oc7nMt5L | 43,646 | https://github.com/huggingface/transformers/issues/43646 | https://api.github.com/repos/huggingface/transformers/issues/43646 | Transformers 5.0.0 breaks custom model initialization | ### System Info
- `transformers` version: 5.0.0
- Platform: Linux-6.14.0-37-generic-x86_64-with-glibc2.39
- Python version: 3.12.3
- Huggingface_hub version: 1.3.5
- Safetensors version: 0.7.0
- Accelerate version: 1.12.0
- Accelerate config: not found
- DeepSpeed version: not installed
- PyTorch version (accelerat... | closed | completed | false | 3 | [
"bug"
] | [] | 2026-01-31T10:02:54Z | 2026-02-04T09:32:28Z | 2026-02-04T09:32:28Z | CONTRIBUTOR | null | 20260325T173244Z | 2026-03-25T17:32:44Z | umarbutler | 8,473,183 | MDQ6VXNlcjg0NzMxODM= | User | false |
huggingface/transformers | 3,879,417,855 | I_kwDOCUB6oc7nOzf_ | 43,650 | https://github.com/huggingface/transformers/issues/43650 | https://api.github.com/repos/huggingface/transformers/issues/43650 | ADD THE DATA | ### System Info
**RAGHAV SHARMA
# Load model directly
from transformers import AutoModelForCausalLM
model = AutoModelForCausalLM.from_pretrained("cyrilvallez/test_remote_code_dummy_llama", trust_remote_code=True, dtype="auto")
# ... | closed | completed | false | 0 | [
"bug"
] | [] | 2026-01-31T15:25:06Z | 2026-01-31T16:02:41Z | 2026-01-31T16:02:41Z | NONE | null | 20260325T173244Z | 2026-03-25T17:32:44Z | rsiya1986-arch | 258,452,174 | U_kgDOD2eqzg | User | false |
huggingface/transformers | 3,879,769,067 | I_kwDOCUB6oc7nQJPr | 43,653 | https://github.com/huggingface/transformers/issues/43653 | https://api.github.com/repos/huggingface/transformers/issues/43653 | [BUG][CI] BigBirdTokenizer mask token not registered as special token, gives empty decode output | ### System Info
* `transformers` version: `5.0.0.dev0`
* Platform: `Linux-5.15.167.4-microsoft-standard-WSL2-x86_64-with-glibc2.39`
* Python version: `3.12.3`
* `huggingface_hub` version: `1.3.2`
* `safetensors` version: `0.7.0`
* `accelerate` version: `1.12.0`
* Accelerate config: `not installed`
* DeepSpeed version:... | closed | completed | false | 1 | [
"bug"
] | [] | 2026-01-31T18:28:04Z | 2026-04-18T09:13:04Z | 2026-03-11T08:07:51Z | CONTRIBUTOR | null | 20260418T100536Z | 2026-04-18T10:05:36Z | harshaljanjani | 75,426,551 | MDQ6VXNlcjc1NDI2NTUx | User | false |
huggingface/transformers | 3,881,695,827 | I_kwDOCUB6oc7nXfpT | 43,668 | https://github.com/huggingface/transformers/issues/43668 | https://api.github.com/repos/huggingface/transformers/issues/43668 | ModernBERTConfig `norm_eps` type hint is incorrect | The type hint for `norm_eps` is int, but should be float.
https://github.com/huggingface/transformers/blob/78bb85146c59258a0710c8d08311d98d52303c38/src/transformers/models/modernbert/configuration_modernbert.py#L158 | closed | completed | false | 0 | [] | [] | 2026-02-01T09:46:55Z | 2026-02-03T14:33:12Z | 2026-02-03T14:33:11Z | CONTRIBUTOR | null | 20260325T173244Z | 2026-03-25T17:32:44Z | fschlatt | 23,191,892 | MDQ6VXNlcjIzMTkxODky | User | false |
huggingface/transformers | 3,883,611,554 | I_kwDOCUB6oc7nezWi | 43,671 | https://github.com/huggingface/transformers/issues/43671 | https://api.github.com/repos/huggingface/transformers/issues/43671 | Proposal to add Qwen3-TTS support | ### Model description
Model: Qwen3-TTS
Repository: https://github.com/QwenLM/Qwen3-TTS
Paper/Blog: https://huggingface.co/papers/2601.15621
License: Apache 2.0
Qwen3-TTS is currently available as a standalone package (qwen-tts) but is not integrated into transformers. The model uses `PreTrainedModel` and `GenerationM... | open | null | false | 8 | [
"New model"
] | [] | 2026-02-02T02:33:32Z | 2026-03-19T18:05:09Z | null | NONE | null | 20260325T173244Z | 2026-03-25T17:32:44Z | ShahVandit | 79,093,791 | MDQ6VXNlcjc5MDkzNzkx | User | false |
huggingface/transformers | 3,884,399,574 | I_kwDOCUB6oc7nhzvW | 43,673 | https://github.com/huggingface/transformers/issues/43673 | https://api.github.com/repos/huggingface/transformers/issues/43673 | GenerationMixin cache missing in v5.0.0 during chunked_prefill | ### System Info
The removal of v4.57.6's default insertion of DynamicCache https://github.com/huggingface/transformers/blob/v4.57.6/src/transformers/generation/utils.py#L2034
vs
https://github.com/huggingface/transformers/blob/v5.0.0/src/transformers/generation/utils.py#L2042
has made it so that in v5.0.0 chunked_pref... | closed | completed | false | 1 | [
"bug"
] | [] | 2026-02-02T07:23:31Z | 2026-02-04T13:22:51Z | 2026-02-04T13:22:51Z | NONE | null | 20260325T173244Z | 2026-03-25T17:32:44Z | SmerkyG | 8,826,350 | MDQ6VXNlcjg4MjYzNTA= | User | false |
huggingface/transformers | 3,885,079,851 | I_kwDOCUB6oc7nkZ0r | 43,676 | https://github.com/huggingface/transformers/issues/43676 | https://api.github.com/repos/huggingface/transformers/issues/43676 | `test_apply_chat_template_video_frame_sampling` fails with `num_frames` and `fps` are mutually exclusive | ### System Info
================================================================================= short test summary info =================================================================================
FAILED tests/models/qwen3_vl/test_processing_qwen3_vl.py::Qwen3VLProcessorTest::test_apply_chat_template_video_fram... | closed | completed | false | 2 | [
"bug"
] | [
"tarekziade"
] | 2026-02-02T10:10:35Z | 2026-02-02T10:57:01Z | 2026-02-02T10:57:01Z | MEMBER | null | 20260325T173244Z | 2026-03-25T17:32:44Z | tarekziade | 250,019 | MDQ6VXNlcjI1MDAxOQ== | User | false |
huggingface/transformers | 3,885,516,609 | I_kwDOCUB6oc7nmEdB | 43,680 | https://github.com/huggingface/transformers/issues/43680 | https://api.github.com/repos/huggingface/transformers/issues/43680 | Add Nvidia NitroGen to huggingface transformers | ### Feature request
NVIDIA's NitroGen is a groundbreaking AI model designed to play hundreds of video games autonomously across various genres. This model, developed in collaboration with MineDojo, is trained on over 40,000 hours of publicly available gameplay videos, allowing it to perform across 1,000+ games without... | open | null | false | 3 | [
"Feature request"
] | [] | 2026-02-02T11:51:53Z | 2026-02-25T09:25:33Z | null | NONE | null | 20260325T173244Z | 2026-03-25T17:32:44Z | AffanBinFaisal | 115,661,779 | U_kgDOBuTb0w | User | false |
huggingface/transformers | 3,886,005,584 | I_kwDOCUB6oc7nn71Q | 43,682 | https://github.com/huggingface/transformers/issues/43682 | https://api.github.com/repos/huggingface/transformers/issues/43682 | `torch.compile` and `torch.export` helper function to be moved from `import_utils` | a tracker per @vasqu's request | open | null | false | 0 | [
"WIP"
] | [] | 2026-02-02T13:42:53Z | 2026-03-05T10:58:12Z | null | MEMBER | null | 20260325T173244Z | 2026-03-25T17:32:44Z | IlyasMoutawwakil | 57,442,720 | MDQ6VXNlcjU3NDQyNzIw | User | false |
huggingface/transformers | 3,886,162,737 | I_kwDOCUB6oc7noiMx | 43,684 | https://github.com/huggingface/transformers/issues/43684 | https://api.github.com/repos/huggingface/transformers/issues/43684 | Add Qwen3-Omni model registration to AutoModel and AutoModelForConditionalGeneration classes | ### Feature request
When attempting to load Qwen3-Omni using Auto classes:
```python
from transformers import AutoModel
model = AutoModel.from_pretrained(
"Qwen/Qwen3-Omni-30B-A3B-Instruct",
trust_remote_code=True,
)
```
I get an error:
```
ValueError:
Unrecognized configuration class <class 'transformers.mod... | closed | completed | false | 6 | [
"Feature request"
] | [] | 2026-02-02T14:15:36Z | 2026-02-04T14:38:45Z | 2026-02-04T14:37:15Z | NONE | null | 20260325T173244Z | 2026-03-25T17:32:44Z | samudraneel05 | 98,489,891 | U_kgDOBd7WIw | User | false |
huggingface/transformers | 3,886,717,958 | I_kwDOCUB6oc7nqpwG | 43,688 | https://github.com/huggingface/transformers/issues/43688 | https://api.github.com/repos/huggingface/transformers/issues/43688 | Incorrect normalization of auxiliary loss in OLMoE and GPT Oss | ### System Info
- `transformers` version: 4.57.6
- Platform: macOS-15.7.3-arm64-arm-64bit
- Python version: 3.10.14
- Huggingface_hub version: 0.36.0
- Safetensors version: 0.7.0
- Accelerate version: not installed
- Accelerate config: not found
- DeepSpeed version: not installed
- PyTorch version (accelerator?): 2.9.... | closed | completed | false | 3 | [
"bug"
] | [] | 2026-02-02T15:59:33Z | 2026-03-30T07:58:48Z | 2026-03-13T08:08:53Z | NONE | null | 20260407T090028Z | 2026-04-07T09:00:28Z | andresnowak | 35,544,006 | MDQ6VXNlcjM1NTQ0MDA2 | User | false |
huggingface/transformers | 3,889,453,532 | I_kwDOCUB6oc7n1Fnc | 43,696 | https://github.com/huggingface/transformers/issues/43696 | https://api.github.com/repos/huggingface/transformers/issues/43696 | GPT-oss-20b: torch.OutOfMemoryError: CUDA out of memory. | Hi,
I am trying to perform a distributed training run of [gpt-oss-20b](https://huggingface.co/openai/gpt-oss-20b) on x8 A100s (40gb); however, I am running into memory issues when trying to load the model into memory using the code below. I am aware that for GPT-OSS the Mxfp4 is only supported for Hopper generation a... | closed | completed | false | 2 | [] | [] | 2026-02-03T06:33:14Z | 2026-03-13T08:08:51Z | 2026-03-13T08:08:51Z | NONE | null | 20260325T173244Z | 2026-03-25T17:32:44Z | Decadz | 23,614,094 | MDQ6VXNlcjIzNjE0MDk0 | User | false |
huggingface/transformers | 3,889,625,306 | I_kwDOCUB6oc7n1vja | 43,697 | https://github.com/huggingface/transformers/issues/43697 | https://api.github.com/repos/huggingface/transformers/issues/43697 | RTDetrV2ForObjectDetection produces different outputs in Transformers v5 with identical inputs | ### System Info
After upgrading from Transformers 4.57.6 to 5.0.0, RTDetrV2ForObjectDetection produces different logits and pred_boxes for identical pixel_values tensors.
• The same saved pixel_values tensor is reused across versions.
• Model is in eval() mode.
• Weights/config are identical (checkpoint trained wit... | closed | completed | false | 6 | [
"bug"
] | [] | 2026-02-03T07:15:55Z | 2026-03-05T16:38:26Z | 2026-03-05T16:38:26Z | NONE | null | 20260325T173244Z | 2026-03-25T17:32:44Z | MorganFujimaka | 7,124,893 | MDQ6VXNlcjcxMjQ4OTM= | User | false |
huggingface/transformers | 3,889,662,684 | I_kwDOCUB6oc7n14rc | 43,698 | https://github.com/huggingface/transformers/issues/43698 | https://api.github.com/repos/huggingface/transformers/issues/43698 | SwanLab integration uses outdated swanlab.init() signature | ### Feature request
https://github.com/huggingface/transformers/blob/aefa23ad1c52de9c115f3d762fe1a1eda643275a/src/transformers/integrations/integration_utils.py#L2305
In `src/transformers/integrations/integration_utils.py`, the SwanLab integration calls `swanlab.init(**init_args)` but the current implementation does ... | closed | completed | false | 2 | [
"Good First Issue",
"Feature request"
] | [] | 2026-02-03T07:26:39Z | 2026-02-09T08:54:26Z | 2026-02-09T08:54:26Z | NONE | null | 20260325T173244Z | 2026-03-25T17:32:44Z | TianjiaoTsao | 115,390,784 | U_kgDOBuC5QA | User | false |
huggingface/transformers | 3,889,801,280 | I_kwDOCUB6oc7n2ahA | 43,700 | https://github.com/huggingface/transformers/issues/43700 | https://api.github.com/repos/huggingface/transformers/issues/43700 | Lack of FastTokenizer for Internlm/Intern-S1 | Hi,
I'm currently trying to use `return_assistant_token_masks` with Intern's models, but it's a Python-based, slow tokenizer, which is incompatible with this feature. I can't seem to find any docs on converting slow tokenizers to fast tokenizers.
Any pointers would be very much appreciated. Thanks! | closed | completed | false | 2 | [] | [] | 2026-02-03T08:00:01Z | 2026-04-02T08:18:10Z | 2026-04-02T08:18:10Z | CONTRIBUTOR | null | 20260407T090028Z | 2026-04-07T09:00:28Z | jiosephlee | 43,046,526 | MDQ6VXNlcjQzMDQ2NTI2 | User | false |
huggingface/transformers | 3,889,888,269 | I_kwDOCUB6oc7n2vwN | 43,701 | https://github.com/huggingface/transformers/issues/43701 | https://api.github.com/repos/huggingface/transformers/issues/43701 | resume_from_checkpoint key mismatch | ### System Info
- `transformers` version: 4.57.1
- Platform: Linux-5.15.0-60-generic-x86_64-with-glibc2.39
- Python version: 3.10.19
- Huggingface_hub version: 0.36.0
- Safetensors version: 0.7.0
- Accelerate version: 1.11.0
- Accelerate config: not found
- DeepSpeed version: 0.17.6
- PyTorch version (accelerator?)... | closed | completed | false | 4 | [
"bug"
] | [] | 2026-02-03T08:21:39Z | 2026-03-27T08:12:14Z | 2026-03-27T08:12:14Z | NONE | null | 20260407T090028Z | 2026-04-07T09:00:28Z | zenosai | 92,961,570 | U_kgDOBYp7Ig | User | false |
huggingface/transformers | 3,890,553,156 | I_kwDOCUB6oc7n5SFE | 43,704 | https://github.com/huggingface/transformers/issues/43704 | https://api.github.com/repos/huggingface/transformers/issues/43704 | Qwen3ForCausalLM leaks VRAM if used in multiple dataloader threads | ### System Info
reopening, please see https://github.com/huggingface/transformers/issues/42673
this is still an issue, but auto-closed by `github-actions`. That it expects a reaction within a little over a week isn't great.
### Who can help?
_No response_
### Information
- [ ] The official example scripts
- [ ] My... | closed | completed | false | 1 | [
"bug"
] | [] | 2026-02-03T10:37:40Z | 2026-02-03T14:00:27Z | 2026-02-03T14:00:27Z | NONE | null | 20260325T173244Z | 2026-03-25T17:32:44Z | dxqb | 183,307,934 | U_kgDOCu0Ong | User | false |
huggingface/transformers | 3,891,508,155 | I_kwDOCUB6oc7n87O7 | 43,708 | https://github.com/huggingface/transformers/issues/43708 | https://api.github.com/repos/huggingface/transformers/issues/43708 | Trainer `resume_from_checkpoint` incorrectly calculates `max_steps` when changing `per_device_train_batch_size` with same global batch size | ### System Info
- `transformers` version: 4.57.3
- Platform: linux
- Python version: 3.11.14
- PyTorch version: 2.9.1
### Who can help?
@SunMarc
### Information
- [ ] The official example scripts
- [x] My own modified scripts
### Tasks
- [ ] An officially supported task in the `examples` folder (such as GLUE/SQu... | closed | completed | false | 7 | [
"bug"
] | [] | 2026-02-03T14:18:09Z | 2026-02-06T14:46:45Z | 2026-02-06T14:46:45Z | NONE | null | 20260325T173244Z | 2026-03-25T17:32:44Z | KHPan | 146,960,977 | U_kgDOCMJyUQ | User | false |
huggingface/transformers | 3,892,437,128 | I_kwDOCUB6oc7oAeCI | 43,716 | https://github.com/huggingface/transformers/issues/43716 | https://api.github.com/repos/huggingface/transformers/issues/43716 | Mistral-3: dtype mismatch between image preprocessor and model | ### System Info
transformers==5.0.0
pytorch==2.10.0
python==3.12.3
Platform: WSL
### Who can help?
@zucchini-nlp @Rocketknight1
### Information
- [ ] The official example scripts
- [x] My own modified scripts
### Tasks
- [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
-... | closed | completed | false | 1 | [
"bug"
] | [] | 2026-02-03T17:47:11Z | 2026-02-04T13:54:53Z | 2026-02-04T13:54:53Z | NONE | null | 20260325T173244Z | 2026-03-25T17:32:44Z | egallouis | 12,673,757 | MDQ6VXNlcjEyNjczNzU3 | User | false |
huggingface/transformers | 3,892,463,496 | I_kwDOCUB6oc7oAkeI | 43,717 | https://github.com/huggingface/transformers/issues/43717 | https://api.github.com/repos/huggingface/transformers/issues/43717 | init_weights usage in Mamba and Mamba-2 | Hi,
`dt_bias` initialization has a pretty non-trivial impact on Mamba training/performance.
In the current implementation, it looks like `dt_bias` is [initialized to all ones](https://github.com/huggingface/transformers/blob/v5.0.0/src/transformers/models/mamba2/modeling_mamba2.py#L253) in the mixer class and then la... | closed | completed | false | 4 | [] | [] | 2026-02-03T17:53:47Z | 2026-03-15T08:06:34Z | 2026-03-15T08:06:34Z | CONTRIBUTOR | null | 20260325T173244Z | 2026-03-25T17:32:44Z | kevinli573 | 57,026,513 | MDQ6VXNlcjU3MDI2NTEz | User | false |
huggingface/transformers | 3,892,950,547 | I_kwDOCUB6oc7oCbYT | 43,720 | https://github.com/huggingface/transformers/issues/43720 | https://api.github.com/repos/huggingface/transformers/issues/43720 | [BUG][CI] BitNet AutoBitLinear fails when packed weights aren’t unpacked during accelerate loading | ### System Info
* `transformers` version: `5.0.0.dev0`
* Platform: `Linux-5.15.167.4-microsoft-standard-WSL2-x86_64-with-glibc2.39`
* Python version: `3.12.3`
* `huggingface_hub` version: `1.3.2`
* `safetensors` version: `0.7.0`
* `accelerate` version: `1.12.0`
* Accelerate config: `not installed`
* DeepSpeed version:... | closed | completed | false | 0 | [
"bug"
] | [] | 2026-02-03T19:58:35Z | 2026-04-18T09:12:52Z | 2026-02-16T15:38:20Z | CONTRIBUTOR | null | 20260418T100536Z | 2026-04-18T10:05:36Z | harshaljanjani | 75,426,551 | MDQ6VXNlcjc1NDI2NTUx | User | false |
huggingface/transformers | 3,893,814,945 | I_kwDOCUB6oc7oFuah | 43,723 | https://github.com/huggingface/transformers/issues/43723 | https://api.github.com/repos/huggingface/transformers/issues/43723 | Issue loading tokenizer using AutoTokenizer.from_pretrained in v5 | ### System Info
There is an issue when loading the tokenizer from the model guillermoruiz/bilmaLAT and others in transformers v5.
when I run the code:
```
bilmaLAT = AutoTokenizer.from_pretrained('guillermoruiz/bilmaLAT')
bilmaLAT('hola')
```
the output is:
`
{'input_ids': [2, 76, 83, 80, 69, 3], 'attention_mask': [... | closed | completed | false | 3 | [
"bug"
] | [] | 2026-02-04T00:03:20Z | 2026-02-04T16:39:43Z | 2026-02-04T16:39:43Z | NONE | null | 20260325T173244Z | 2026-03-25T17:32:44Z | msubrayada | 392,873 | MDQ6VXNlcjM5Mjg3Mw== | User | false |
huggingface/transformers | 3,894,014,350 | I_kwDOCUB6oc7oGfGO | 43,724 | https://github.com/huggingface/transformers/issues/43724 | https://api.github.com/repos/huggingface/transformers/issues/43724 | [integration] add pluto experiment tracker callback. | ### Feature request
add `PlutoCallback` as an option for natively logging to the pluto experiment tracker.
### Motivation
We recently launched a new OSS experiment tracker ([client code](https://github.com/Trainy-ai/pluto), [server code](https://github.com/Trainy-ai/pluto-server)), and we're hoping to get integrated... | open | null | false | 1 | [
"Feature request"
] | [] | 2026-02-04T01:19:47Z | 2026-02-08T14:25:35Z | null | NONE | null | 20260325T173244Z | 2026-03-25T17:32:44Z | asaiacai | 25,255,966 | MDQ6VXNlcjI1MjU1OTY2 | User | false |
huggingface/transformers | 3,894,364,637 | I_kwDOCUB6oc7oH0nd | 43,725 | https://github.com/huggingface/transformers/issues/43725 | https://api.github.com/repos/huggingface/transformers/issues/43725 | Quantization model behavior changed | ### System Info
torch 2.10.0
peft 0.18.2.dev0
bitsandbytes 0.49.1
The only variable is transformers.
### Who can help?
@ArthurZucker
### Reproduction
The regression was found in peft tests:
https://github.com/jiqing-feng/peft/blob/8bit/tests/test_gpu_examples.py#L2901
`RUN_SLOW=1 pytest tests/test_gpu_examples.py... | closed | completed | false | 3 | [
"bug"
] | [] | 2026-02-04T03:29:00Z | 2026-03-15T08:06:31Z | 2026-03-15T08:06:31Z | CONTRIBUTOR | null | 20260325T173244Z | 2026-03-25T17:32:44Z | jiqing-feng | 107,918,818 | U_kgDOBm614g | User | false |
huggingface/transformers | 3,895,007,483 | I_kwDOCUB6oc7oKRj7 | 43,726 | https://github.com/huggingface/transformers/issues/43726 | https://api.github.com/repos/huggingface/transformers/issues/43726 | Help: Transformer v5 managed to break HF models | There are some models that are rendered broken because of `from transformers.models.gpt2.tokenization_gpt2 import bytes_to_unicode` and `ImportError: cannot import name 'bytes_to_unicode' from 'transformers.models.gpt2.tokenization_gpt2'`
https://huggingface.co/NexVeridian/Kimi-Linear-REAP-35B-A3B-Instruct-4bit/discuss... | closed | completed | false | 9 | [] | [] | 2026-02-04T06:57:52Z | 2026-03-06T08:24:16Z | 2026-03-06T08:24:16Z | NONE | null | 20260325T173244Z | 2026-03-25T17:32:44Z | TomLucidor | 85,554,801 | MDQ6VXNlcjg1NTU0ODAx | User | false |
huggingface/transformers | 3,897,334,888 | I_kwDOCUB6oc7oTJxo | 43,742 | https://github.com/huggingface/transformers/issues/43742 | https://api.github.com/repos/huggingface/transformers/issues/43742 | Key error when loading facebook/MobileLLM-125M | ### System Info
- `transformers` version: 5.0.0
- Platform: Linux-4.18.0-553.el8_10.x86_64-x86_64-with-glibc2.28
- Python version: 3.12.12
- Huggingface_hub version: 1.3.7
- Safetensors version: 0.7.0
- Accelerate version: 1.12.0
- Accelerate config: not found
- DeepSpeed version: not installed
- PyTorch version (acc... | closed | completed | false | 5 | [
"bug"
] | [] | 2026-02-04T16:19:57Z | 2026-03-15T08:06:30Z | 2026-03-15T08:06:30Z | NONE | null | 20260325T173244Z | 2026-03-25T17:32:44Z | pahrendt-semron | 224,740,397 | U_kgDODWVELQ | User | false |
huggingface/transformers | 3,897,725,855 | I_kwDOCUB6oc7oUpOf | 43,746 | https://github.com/huggingface/transformers/issues/43746 | https://api.github.com/repos/huggingface/transformers/issues/43746 | [GraniteSpeechForConditionalGeneration] Models with PEFT adapters won't load from local checkpoints (from_pretrained) | ### System Info
- `transformers` version: 4.57.6
- Platform: macOS-26.2-arm64-arm-64bit
- Python version: 3.11.14
- Huggingface_hub version: 0.36.1
- Safetensors version: 0.7.0
- Accelerate version: 1.12.0
- Accelerate config: not found
- DeepSpeed version: not installed
- PyTorch version (accelerator?): 2.10.0 (NA)
... | closed | completed | false | 2 | [
"bug"
] | [] | 2026-02-04T17:55:19Z | 2026-03-05T14:14:45Z | 2026-03-05T14:14:45Z | CONTRIBUTOR | null | 20260325T173244Z | 2026-03-25T17:32:44Z | gabe-l-hart | 1,254,484 | MDQ6VXNlcjEyNTQ0ODQ= | User | false |
huggingface/transformers | 3,899,034,560 | I_kwDOCUB6oc7oZovA | 43,749 | https://github.com/huggingface/transformers/issues/43749 | https://api.github.com/repos/huggingface/transformers/issues/43749 | FSDP_CPU_RAM_EFFICIENT_LOADING broken | ### System Info
- `transformers` version: 5.0.0
- Platform: Linux-6.14.0-37-generic-x86_64-with-glibc2.39
- Python version: 3.12.3
- Huggingface_hub version: 1.3.4
- Safetensors version: 0.7.0
- Accelerate version: 1.12.0
- Accelerate config: not found
- DeepSpeed version: not installed
- PyTorch version (accelerator... | closed | completed | false | 12 | [
"bug"
] | [] | 2026-02-05T00:36:21Z | 2026-03-17T14:12:23Z | 2026-03-10T15:16:06Z | NONE | null | 20260325T173244Z | 2026-03-25T17:32:44Z | kmod | 271,916 | MDQ6VXNlcjI3MTkxNg== | User | false |
huggingface/transformers | 3,899,852,848 | I_kwDOCUB6oc7ocwgw | 43,756 | https://github.com/huggingface/transformers/issues/43756 | https://api.github.com/repos/huggingface/transformers/issues/43756 | Smollm3 drops 3/4 RoPE layers while 1/4 is intended by what's claimed in the blog post. | ### System Info
- `transformers` version: v5.0.0
- Python version: N/A (bug in library code)
- PyTorch version: N/A (bug in library code)
### Who can help?
@ArthurZucker @Cyrilvallez
### Information
- [ ] The official example scripts
- [ ] My own modified scripts
### Tasks
- [ ] An officially supported task in t... | closed | completed | false | 5 | [
"bug"
] | [] | 2026-02-05T06:01:24Z | 2026-02-05T19:46:18Z | 2026-02-05T19:46:18Z | NONE | null | 20260325T173244Z | 2026-03-25T17:32:44Z | tobiaskatsch | 91,801,605 | U_kgDOBXjIBQ | User | false |
huggingface/transformers | 3,900,601,264 | I_kwDOCUB6oc7ofnOw | 43,761 | https://github.com/huggingface/transformers/issues/43761 | https://api.github.com/repos/huggingface/transformers/issues/43761 | [v5 regression] CLIPVisionModel.forward returns hidden_states=None even when output_hidden_states=True | ### System Info
### Description
I am reporting a potential regression found while testing `transformers` **v5.0.0**.
We noticed that `CLIPVisionModel.forward()` returns `hidden_states=None` even when `output_hidden_states=True` is explicitly passed. This behavior is different from v4.x, where hidden states were corre... | closed | completed | false | 5 | [
"bug"
] | [] | 2026-02-05T09:35:32Z | 2026-02-06T17:07:10Z | 2026-02-06T17:07:10Z | NONE | null | 20260325T173244Z | 2026-03-25T17:32:44Z | yukiu00 | 74,698,040 | MDQ6VXNlcjc0Njk4MDQw | User | false |
huggingface/transformers | 3,903,764,009 | I_kwDOCUB6oc7orrYp | 43,782 | https://github.com/huggingface/transformers/issues/43782 | https://api.github.com/repos/huggingface/transformers/issues/43782 | Qwen3VLForConditionalGeneration.from_pretrained weight_only = True error | ### System Info
platform: ubuntu 24.04.03
python 3.10
transformers 5.0
torch 2.10
accelerate 1.12
docker image: FROM nvidia/cuda:12.2.2-cudnn8-runtime-ubuntu22.04
https://huggingface.co/Qwen/Qwen3-VL-8B-Instruct
### Who can help?
@yonigozlan @molbap
### Information
- [x] The official example scripts
- [ ] My o... | closed | completed | false | 3 | [
"bug"
] | [] | 2026-02-05T21:52:13Z | 2026-03-29T08:08:08Z | 2026-03-29T08:08:08Z | NONE | null | 20260407T090028Z | 2026-04-07T09:00:28Z | oscars17 | 57,970,391 | MDQ6VXNlcjU3OTcwMzkx | User | false |
huggingface/transformers | 3,904,173,603 | I_kwDOCUB6oc7otPYj | 43,784 | https://github.com/huggingface/transformers/issues/43784 | https://api.github.com/repos/huggingface/transformers/issues/43784 | NameError: name 'nn' is not defined when importing sentence-transformers with latest transformers | ## System Info
```
transformers version: latest (installed via pip today, 2026-02-05)
sentence-transformers version: latest
torch version: 2.x (from pytorch/pytorch Docker image)
Python version: 3.11+
OS: Linux (Docker container)
```
## Who can help?
@ArthurZucker @Rocketknight1
## Information
- [ ] The official e... | closed | completed | false | 7 | [] | [] | 2026-02-06T00:18:08Z | 2026-03-09T15:20:18Z | 2026-03-09T15:20:18Z | NONE | null | 20260325T173244Z | 2026-03-25T17:32:44Z | Alan-Jowett | 20,480,683 | MDQ6VXNlcjIwNDgwNjgz | User | false |
huggingface/transformers | 3,905,743,795 | I_kwDOCUB6oc7ozOuz | 43,792 | https://github.com/huggingface/transformers/issues/43792 | https://api.github.com/repos/huggingface/transformers/issues/43792 | openai/whisper-large-v2 can't run | ### System Info
```python
tt = pipeline(model="openai/whisper-large-v2")
tt("https://hf-mirror.com/datasets/Narsil/asr_dummy/resolve/main/mlk.flac")
```
print error message like this:
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
File "/home/developer/.local/lib/python3.10/site-packages/... | closed | completed | false | 2 | [
"bug"
] | [] | 2026-02-06T09:16:29Z | 2026-02-09T02:11:00Z | 2026-02-09T02:10:36Z | NONE | null | 20260325T173244Z | 2026-03-25T17:32:44Z | LIANGQI0811 | 16,715,787 | MDQ6VXNlcjE2NzE1Nzg3 | User | false |
huggingface/transformers | 3,907,442,736 | I_kwDOCUB6oc7o5tgw | 43,805 | https://github.com/huggingface/transformers/issues/43805 | https://api.github.com/repos/huggingface/transformers/issues/43805 | chore(test): add a `set_seed` pytest fixture | follow-up from https://github.com/huggingface/transformers/pull/43794
we should add in our fixture a `set_seed` so we *always* set the same seed in all of the model tests to improve determinism.
cc @Rocketknight1 | closed | completed | false | 3 | [] | [
"tarekziade"
] | 2026-02-06T16:14:29Z | 2026-03-03T15:27:37Z | 2026-03-03T15:27:37Z | MEMBER | null | 20260325T173244Z | 2026-03-25T17:32:44Z | tarekziade | 250,019 | MDQ6VXNlcjI1MDAxOQ== | User | false |
huggingface/transformers | 3,909,147,913 | I_kwDOCUB6oc7pAN0J | 43,813 | https://github.com/huggingface/transformers/issues/43813 | https://api.github.com/repos/huggingface/transformers/issues/43813 | Typo - should be "orig_conversion.quantization_operation" | https://github.com/huggingface/transformers/blob/dd360ad2364382e7ef3c19d1011cb0a4e9b418ff/src/transformers/integrations/peft.py#L303
Please have a look. There's another similar typo in this file on line 264 | closed | completed | false | 3 | [] | [] | 2026-02-07T02:29:14Z | 2026-03-17T08:11:29Z | 2026-03-17T08:11:29Z | NONE | null | 20260325T173244Z | 2026-03-25T17:32:44Z | ishaan-shivhare | 71,440,292 | MDQ6VXNlcjcxNDQwMjky | User | false |
huggingface/transformers | 3,910,304,356 | I_kwDOCUB6oc7pEoJk | 43,818 | https://github.com/huggingface/transformers/issues/43818 | https://api.github.com/repos/huggingface/transformers/issues/43818 | [Video-LLaVA] `Video-LLaVA-7B-hf` video_tower is missing temporal attention AND shares nearly identical weights with image_tower | ### System Info
### Problem
The HF-converted model `LanguageBind/Video-LLaVA-7B-hf` has two critical problems in its video tower:
1. **Missing `temporal_attn` layers**: The original `LanguageBind/Video-LLaVA-7B` video tower contains per-layer temporal attention for cross-frame reasoning. These are completely absen... | closed | completed | false | 4 | [
"bug"
] | [] | 2026-02-07T13:41:05Z | 2026-03-25T08:11:13Z | 2026-03-25T08:11:13Z | NONE | null | 20260325T173244Z | 2026-03-25T17:32:44Z | jong980812 | 64,763,643 | MDQ6VXNlcjY0NzYzNjQz | User | false |
huggingface/transformers | 3,910,437,840 | I_kwDOCUB6oc7pFIvQ | 43,819 | https://github.com/huggingface/transformers/issues/43819 | https://api.github.com/repos/huggingface/transformers/issues/43819 | [BUG] DAC.from_latents does not match the forward pass with missing STE | ### System Info
* `transformers` version: `5.0.0.dev0`
* Platform: `Linux-5.15.167.4-microsoft-standard-WSL2-x86_64-with-glibc2.39`
* Python version: `3.12.3`
* `huggingface_hub` version: `1.3.2`
* `safetensors` version: `0.7.0`
* `accelerate` version: `1.12.0`
* Accelerate config: `not installed`
* DeepSpeed version:... | closed | completed | false | 0 | [
"bug"
] | [] | 2026-02-07T14:44:24Z | 2026-04-18T09:12:29Z | 2026-02-10T17:15:22Z | CONTRIBUTOR | null | 20260418T100536Z | 2026-04-18T10:05:36Z | harshaljanjani | 75,426,551 | MDQ6VXNlcjc1NDI2NTUx | User | false |
huggingface/transformers | 3,911,668,331 | I_kwDOCUB6oc7pJ1Jr | 43,824 | https://github.com/huggingface/transformers/issues/43824 | https://api.github.com/repos/huggingface/transformers/issues/43824 | ImportError: cannot import name 'Qwen2_5_VLForConditionalGeneration' from 'transformers' | ### System Info
```
Traceback (most recent call last):
File "/home/wbrione/.conda/envs/watt_ai/bin/transformers", line 3, in <module>
from transformers.cli.transformers import main
File "/home/wbrione/.conda/envs/watt_ai/lib/python3.10/site-packages/transformers/cli/transformers.py", line 22, in <module>
f... | closed | completed | false | 11 | [
"bug"
] | [] | 2026-02-08T00:48:13Z | 2026-03-23T08:15:17Z | 2026-03-23T08:15:17Z | NONE | null | 20260325T173244Z | 2026-03-25T17:32:44Z | waltwalt36 | 197,971,906 | U_kgDOC8zPwg | User | false |
huggingface/transformers | 3,911,716,367 | I_kwDOCUB6oc7pKA4P | 43,825 | https://github.com/huggingface/transformers/issues/43825 | https://api.github.com/repos/huggingface/transformers/issues/43825 | pipeline() error message incorrectly suggests translation tasks are supported in v5 | ### System Info
Copy-and-paste the text below in your GitHub issue and FILL OUT the two last points.
- `transformers` version: 5.2.0.dev0
- Platform: Linux-6.6.105+-x86_64-with-glibc2.35
- Python version: 3.12.12
- Huggingface_hub version: 1.3.7
- Safetensors version: 0.7.0
- Accelerate version: 1.12.0
- Accelerate c... | closed | completed | false | 4 | [
"bug"
] | [] | 2026-02-08T01:31:10Z | 2026-03-10T11:51:49Z | 2026-03-10T11:51:49Z | CONTRIBUTOR | null | 20260325T173244Z | 2026-03-25T17:32:44Z | math-hiyoko | 56,009,584 | MDQ6VXNlcjU2MDA5NTg0 | User | false |
huggingface/transformers | 3,911,766,524 | I_kwDOCUB6oc7pKNH8 | 43,827 | https://github.com/huggingface/transformers/issues/43827 | https://api.github.com/repos/huggingface/transformers/issues/43827 | Summarization/Translation docs still reference pipeline() after v5 pipeline removals | ### System Info
Copy-and-paste the text below in your GitHub issue and FILL OUT the two last points.
- `transformers` version: 5.2.0.dev0
- Platform: Linux-6.6.105+-x86_64-with-glibc2.35
- Python version: 3.12.12
- Huggingface_hub version: 1.3.7
- Safetensors version: 0.7.0
- Accelerate version: 1.12.0
- Accelerate c... | closed | completed | false | 0 | [
"bug"
] | [] | 2026-02-08T02:11:03Z | 2026-02-09T12:32:44Z | 2026-02-09T12:32:44Z | CONTRIBUTOR | null | 20260325T173244Z | 2026-03-25T17:32:44Z | math-hiyoko | 56,009,584 | MDQ6VXNlcjU2MDA5NTg0 | User | false |
huggingface/transformers | 3,911,839,470 | I_kwDOCUB6oc7pKe7u | 43,828 | https://github.com/huggingface/transformers/issues/43828 | https://api.github.com/repos/huggingface/transformers/issues/43828 | With torch.autocast, Phi-tiny-MoE-instruct raises a dtype mismatch error | ### System Info
Copy-and-paste the text below in your GitHub issue and FILL OUT the two last points.
- `transformers` version: 5.1.0
- Platform: Linux-5.15.0-168-generic-x86_64-with-glibc2.35
- Python version: 3.11.14
- Huggingface_hub version: 1.4.1
- Safetensors version: 0.7.0
- Accelerate version: 1.12.0
- Acceler... | closed | completed | false | 1 | [
"bug"
] | [] | 2026-02-08T03:08:45Z | 2026-02-11T10:42:46Z | 2026-02-11T10:42:46Z | NONE | null | 20260325T173244Z | 2026-03-25T17:32:44Z | vinnamkim | 26,541,465 | MDQ6VXNlcjI2NTQxNDY1 | User | false |
huggingface/transformers | 3,912,543,248 | I_kwDOCUB6oc7pNKwQ | 43,834 | https://github.com/huggingface/transformers/issues/43834 | https://api.github.com/repos/huggingface/transformers/issues/43834 | [i18n-<languageCode>] Translating docs to <languageName> | <!--Deutsch
Note: Please search to see if an issue already exists for the language you are trying to translate.
-->
Hi!
Let's bring the documentation to all the <languageName>-speaking community 🌐 (currently 0 out of 267 complete)
Who would want to translate? Please follow the 🤗 [TRANSLATING guide](https://github... | closed | completed | false | 0 | [
"WIP"
] | [] | 2026-02-08T11:03:54Z | 2026-02-09T08:39:59Z | 2026-02-09T08:39:59Z | NONE | null | 20260325T173244Z | 2026-03-25T17:32:44Z | Pediboi666 | 188,489,565 | U_kgDOCzwfXQ | User | false |
huggingface/transformers | 3,912,552,678 | I_kwDOCUB6oc7pNNDm | 43,835 | https://github.com/huggingface/transformers/issues/43835 | https://api.github.com/repos/huggingface/transformers/issues/43835 | Fett | Fett
_Originally posted by @Pediboi666 in https://github.com/huggingface/transformers/pull/42845#pullrequestreview-3769644934_ | closed | completed | false | 0 | [] | [] | 2026-02-08T11:12:19Z | 2026-02-09T08:39:54Z | 2026-02-09T08:39:54Z | NONE | null | 20260325T173244Z | 2026-03-25T17:32:44Z | Pediboi666 | 188,489,565 | U_kgDOCzwfXQ | User | false |
huggingface/transformers | 3,912,605,067 | I_kwDOCUB6oc7pNZ2L | 43,837 | https://github.com/huggingface/transformers/issues/43837 | https://api.github.com/repos/huggingface/transformers/issues/43837 | Proposal to add Qwen3-ASR support | ### Model description
Model: Qwen3-ASR
Repository: https://github.com/QwenLM/Qwen3-ASR
Paper: https://arxiv.org/abs/2601.21337
License: Apache 2.0
Qwen3-ASR would make a good addition to Transformers. It uses models that are already integrated in the library such as Qwen2TokenizerFast and WhisperFeatureExtractor. Tra... | open | null | false | 0 | [
"New model"
] | [] | 2026-02-08T12:00:34Z | 2026-02-08T12:09:12Z | null | NONE | null | 20260325T173244Z | 2026-03-25T17:32:44Z | mbtariq82 | 119,065,609 | U_kgDOBxjMCQ | User | false |
huggingface/transformers | 3,912,951,527 | I_kwDOCUB6oc7pOubn | 43,844 | https://github.com/huggingface/transformers/issues/43844 | https://api.github.com/repos/huggingface/transformers/issues/43844 | Gradient abnormally increases when training a randomly initialized model with HfDeepSpeedConfig + ZeRO-3 | ### System Info
Environment information:
- Operating System: Linux
- Python version: 3.10
- `torch==2.7.1`
- `deepspeed==0.18.3`
- `transformers==4.57.3`
Full dependencies can be found in the project's [`pyproject.toml`](https://github.com/wxhcore/bumblecore/blob/main/pyproject.toml).
### Who can help?
... | closed | completed | false | 4 | [
"bug"
] | [] | 2026-02-08T15:32:38Z | 2026-03-19T08:09:11Z | 2026-03-19T08:09:11Z | NONE | null | 20260325T173244Z | 2026-03-25T17:32:44Z | wxhcore | 165,930,670 | U_kgDOCePmrg | User | false |
huggingface/transformers | 3,913,629,793 | I_kwDOCUB6oc7pRUBh | 43,845 | https://github.com/huggingface/transformers/issues/43845 | https://api.github.com/repos/huggingface/transformers/issues/43845 | huggingface | https://github.com/likaia/nginxpulse/commit/9940b2c008aba73d976992c0020b2c1249b59220 | closed | completed | false | 0 | [] | [] | 2026-02-08T21:36:36Z | 2026-02-09T08:39:43Z | 2026-02-09T08:39:43Z | NONE | null | 20260325T173244Z | 2026-03-25T17:32:44Z | Pediboi666 | 188,489,565 | U_kgDOCzwfXQ | User | false |
huggingface/transformers | 3,913,630,365 | I_kwDOCUB6oc7pRUKd | 43,846 | https://github.com/huggingface/transformers/issues/43846 | https://api.github.com/repos/huggingface/transformers/issues/43846 | huggingface | https://github.com/likaia/nginxpulse/commit/f363839f502181fdfa6ff0a90471c6136d8fbe70 | closed | completed | false | 0 | [] | [] | 2026-02-08T21:37:04Z | 2026-02-09T08:39:37Z | 2026-02-09T08:39:37Z | NONE | null | 20260325T173244Z | 2026-03-25T17:32:44Z | Pediboi666 | 188,489,565 | U_kgDOCzwfXQ | User | false |
huggingface/transformers | 3,914,964,667 | I_kwDOCUB6oc7pWZ67 | 43,854 | https://github.com/huggingface/transformers/issues/43854 | https://api.github.com/repos/huggingface/transformers/issues/43854 | Unable to load `zai-org/GLM-4.7-Flash` model correctly in the unit tests | ### System Info
```
- `transformers` version: 5.2.0.dev0
- Platform: Linux-5.4.292-1.el8.elrepo.x86_64-x86_64-with-glibc2.35
- Python version: 3.10.12
- Huggingface_hub version: 1.4.0
- Safetensors version: 0.5.3
- Accelerate version: 1.3.0
- Accelerate config: - compute_environment: LOCAL_MACHINE
- distrib... | closed | completed | false | 6 | [
"bug"
] | [] | 2026-02-09T07:48:27Z | 2026-02-11T13:47:40Z | 2026-02-11T13:47:40Z | CONTRIBUTOR | null | 20260325T173244Z | 2026-03-25T17:32:44Z | kaixuanliu | 13,268,042 | MDQ6VXNlcjEzMjY4MDQy | User | false |
huggingface/transformers | 3,915,467,750 | I_kwDOCUB6oc7pYUvm | 43,856 | https://github.com/huggingface/transformers/issues/43856 | https://api.github.com/repos/huggingface/transformers/issues/43856 | Inefficient memory usage during Qwen3 MoE training | ### System Info
- `transformers` version: 4.57.3
- Platform: Linux-6.8.0-90-generic-x86_64-with-glibc2.35
- Python version: 3.12.12
- Huggingface_hub version: 0.36.0
- Safetensors version: 0.7.0
- Accelerate version: 1.12.0
- Accelerate config: not found
- DeepSpeed version: not installed
- PyTorch version (acceler... | closed | completed | false | 6 | [
"bug"
] | [] | 2026-02-09T09:48:03Z | 2026-03-23T08:15:13Z | 2026-03-23T08:15:13Z | NONE | null | 20260325T173244Z | 2026-03-25T17:32:44Z | yurkoff-mv | 82,467,993 | MDQ6VXNlcjgyNDY3OTkz | User | false |
huggingface/transformers | 3,915,662,891 | I_kwDOCUB6oc7pZEYr | 43,859 | https://github.com/huggingface/transformers/issues/43859 | https://api.github.com/repos/huggingface/transformers/issues/43859 | huggingface | https://github.com/likaia/nginxpulse/issues/54 | closed | completed | false | 1 | [] | [] | 2026-02-09T10:30:38Z | 2026-02-09T12:35:48Z | 2026-02-09T12:35:48Z | NONE | null | 20260325T173244Z | 2026-03-25T17:32:44Z | Pediboi666 | 188,489,565 | U_kgDOCzwfXQ | User | false |
huggingface/transformers | 3,916,690,972 | I_kwDOCUB6oc7pc_Yc | 43,864 | https://github.com/huggingface/transformers/issues/43864 | https://api.github.com/repos/huggingface/transformers/issues/43864 | GlmMoeDsaConfig: mlp_layer_types default overwritten by inlined parent init | ## Bug Description
`GlmMoeDsaConfig` ends up with the wrong default `mlp_layer_types`. The intended default is `["dense"]*3 + ["sparse"]*75` (3 initial dense layers), but the actual default at runtime is `["dense"] + ["sparse"]*77` (1 dense layer).
## Root Cause
The generated `configuration_glm_moe_dsa.py` inlines b... | closed | completed | false | 3 | [] | [] | 2026-02-09T14:30:13Z | 2026-02-10T12:24:20Z | 2026-02-10T12:24:20Z | NONE | null | 20260325T173244Z | 2026-03-25T17:32:44Z | joninco | 225,958,065 | U_kgDODXfYsQ | User | false |
huggingface/transformers | 3,916,952,227 | I_kwDOCUB6oc7pd_Kj | 43,866 | https://github.com/huggingface/transformers/issues/43866 | https://api.github.com/repos/huggingface/transformers/issues/43866 | Ovis2 1B checkpoint corrupted | ### System Info
transformers env
Traceback (most recent call last):
File "/root/miniconda3/bin/transformers", line 3, in <module>
from transformers.cli.transformers import main
File "/root/miniconda3/lib/python3.12/site-packages/transformers/cli/transformers.py", line 22, in <module>
from transformers.cli.... | closed | completed | false | 2 | [
"bug"
] | [] | 2026-02-09T15:27:57Z | 2026-04-02T08:18:06Z | 2026-04-02T08:18:06Z | NONE | null | 20260407T090028Z | 2026-04-07T09:00:28Z | EnricoBeltramo | 22,332,651 | MDQ6VXNlcjIyMzMyNjUx | User | false |
huggingface/transformers | 3,917,284,957 | I_kwDOCUB6oc7pfQZd | 43,867 | https://github.com/huggingface/transformers/issues/43867 | https://api.github.com/repos/huggingface/transformers/issues/43867 | load model error when state_dict sorted | ### System Info
```
my_model.from_pretraine('path_to_model')
state_dict = sorted(state_dict.items(), key=lambda kv: dot_natural_key(kv[0]))
TypeError: '<' not supported between instances of 'str' and 'int'
```
dot_natural_key splits model parameter names into a list composed of several strings or integers. However, i... | closed | completed | false | 5 | [
"bug"
] | [] | 2026-02-09T17:03:21Z | 2026-02-13T17:39:53Z | 2026-02-13T17:39:53Z | CONTRIBUTOR | null | 20260325T173244Z | 2026-03-25T17:32:44Z | enze5088 | 14,285,786 | MDQ6VXNlcjE0Mjg1Nzg2 | User | false |
huggingface/transformers | 3,918,473,475 | I_kwDOCUB6oc7pjykD | 43,872 | https://github.com/huggingface/transformers/issues/43872 | https://api.github.com/repos/huggingface/transformers/issues/43872 | bitsandbytes incompatibility: TypeError: Int8Params.__new__() got an unexpected keyword argument '_is_hf_initialized' | ### System Info
Well that's interesting...
```
ed@banana /tmp [127]> uvx --with huggingface-hub,ipython,transformers,torch,bitsandbytes,accelerate transformers env
Traceback (most recent call last):
File "/home/ed/.cache/uv/archive-v0/93vgXQSK-rYWDyEU_W9k9/bin/transformers", line 6, in <module>
from transformers... | closed | completed | false | 10 | [
"bug"
] | [] | 2026-02-09T22:32:43Z | 2026-03-18T11:37:13Z | 2026-03-18T11:37:13Z | NONE | null | 20260325T173244Z | 2026-03-25T17:32:44Z | edmcman | 1,017,189 | MDQ6VXNlcjEwMTcxODk= | User | false |
huggingface/transformers | 3,918,568,338 | I_kwDOCUB6oc7pkJuS | 43,873 | https://github.com/huggingface/transformers/issues/43873 | https://api.github.com/repos/huggingface/transformers/issues/43873 | Offloading not working as expected with quantization | ### System Info
- `transformers` version: 4.57.3
- Platform: Linux-6.8.0-94-generic-x86_64-with-glibc2.35
- Python version: 3.11.12
- Huggingface_hub version: 0.36.2
- Safetensors version: 0.7.0
- Accelerate version: 1.12.0
- Accelerate config: not found
- DeepSpeed version: not installed
- PyTorch version (accelerat... | open | null | false | 8 | [
"bug"
] | [] | 2026-02-09T23:04:43Z | 2026-04-06T12:36:12Z | null | NONE | null | 20260407T090028Z | 2026-04-07T09:00:28Z | edmcman | 1,017,189 | MDQ6VXNlcjEwMTcxODk= | User | false |
huggingface/transformers | 3,918,972,452 | I_kwDOCUB6oc7plsYk | 43,874 | https://github.com/huggingface/transformers/issues/43874 | https://api.github.com/repos/huggingface/transformers/issues/43874 | [GLM46V] Glm46VImageProcessorFast missing get_number_of_image_patches breaks _get_num_multimodal_tokens (AttributeError) | ### System Info
When `use_fast=True`, `Glm46VProcessor._get_num_multimodal_tokens` calls:
`self.image_processor.get_number_of_image_patches(...)`,
but `Glm46VImageProcessorFast` does not implement this method.
This raises:
`AttributeError: 'Glm46VImageProcessorFast' object has no attribute 'get_number_of_image_patche... | closed | completed | false | 1 | [
"bug"
] | [] | 2026-02-10T01:32:32Z | 2026-02-11T12:23:28Z | 2026-02-11T12:23:28Z | NONE | null | 20260325T173244Z | 2026-03-25T17:32:44Z | baonudesifeizhai | 85,092,850 | MDQ6VXNlcjg1MDkyODUw | User | false |
huggingface/transformers | 3,919,597,941 | I_kwDOCUB6oc7poFF1 | 43,878 | https://github.com/huggingface/transformers/issues/43878 | https://api.github.com/repos/huggingface/transformers/issues/43878 | Save + loading from pretrained raises: out_features is not supported by TimmBackbone. Please use out_indices instead. | ### System Info
Test in Colab - see notebook below. `transformers` 5.1.0
### Overview
A recent update (5.1?) seems to have broken model save + load for some architectures that use a ResNet-50(?) timm backbone (we noticed as our CI started to fail while v5.0 seemed to be fine). ViT backbones seem unaffected. To repli... | closed | completed | false | 1 | [
"bug"
] | [] | 2026-02-10T05:58:00Z | 2026-02-12T15:57:05Z | 2026-02-12T15:57:05Z | NONE | null | 20260325T173244Z | 2026-03-25T17:32:44Z | jveitchmichaelis | 3,159,591 | MDQ6VXNlcjMxNTk1OTE= | User | false |
huggingface/transformers | 3,920,137,898 | I_kwDOCUB6oc7pqI6q | 43,881 | https://github.com/huggingface/transformers/issues/43881 | https://api.github.com/repos/huggingface/transformers/issues/43881 | glm-4v-9b loading failed | ### System Info
transformers v5.1.0
### Who can help?
_No response_
### Information
- [ ] The official example scripts
- [x] My own modified scripts
### Tasks
- [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [ ] My own task or dataset (give details below)
### Reproduction
... | closed | completed | false | 4 | [
"bug"
] | [] | 2026-02-10T08:31:10Z | 2026-03-16T08:05:20Z | 2026-02-23T09:46:01Z | NONE | null | 20260325T173244Z | 2026-03-25T17:32:44Z | Kaihui-intel | 93,114,262 | U_kgDOBYzPlg | User | false |
huggingface/transformers | 3,920,214,981 | I_kwDOCUB6oc7pqbvF | 43,883 | https://github.com/huggingface/transformers/issues/43883 | https://api.github.com/repos/huggingface/transformers/issues/43883 | AttributeError: 'MolmoForCausalLM' object has no attribute 'all_tied_weights_keys' | ### System Info
transformers==5.1.0
### Who can help?
_No response_
### Information
- [ ] The official example scripts
- [x] My own modified scripts
### Tasks
- [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [ ] My own task or dataset (give details below)
### Reproduction
... | closed | completed | false | 7 | [
"bug"
] | [] | 2026-02-10T08:48:56Z | 2026-02-16T17:54:39Z | 2026-02-16T17:54:39Z | NONE | null | 20260325T173244Z | 2026-03-25T17:32:44Z | Kaihui-intel | 93,114,262 | U_kgDOBYzPlg | User | false |
huggingface/transformers | 3,920,539,730 | I_kwDOCUB6oc7prrBS | 43,885 | https://github.com/huggingface/transformers/issues/43885 | https://api.github.com/repos/huggingface/transformers/issues/43885 | [Refactor] Confusing naming: input_embeds vs inputs_embeds | There is currently inconsistent usage of the variable names `input_embeds` and `inputs_embeds` across the codebase, which can be confusing for contributors.
Based on a quick survey:
- `inputs_embeds` is used in 10,000+ occurrences across ~700 files
- `input_embeds` is used in ~700 occurrences across ~300 files
From a... | closed | completed | false | 18 | [] | [] | 2026-02-10T10:00:22Z | 2026-02-19T11:08:09Z | 2026-02-19T11:08:09Z | NONE | null | 20260325T173244Z | 2026-03-25T17:32:44Z | JiangJiaWei1103 | 36,886,416 | MDQ6VXNlcjM2ODg2NDE2 | User | false |