Dataset schema (each record below lists these fields, in this order):

repo: string (147 distinct values)
number: int64 (range 1 to 172k)
title: string (length 2 to 476)
body: string (length 0 to 5k)
url: string (length 39 to 70)
state: string (2 distinct values)
labels: list (length 0 to 9)
created_at: timestamp[ns, tz=UTC] (2017-01-18 18:50:08 to 2026-01-06 07:33:18)
updated_at: timestamp[ns, tz=UTC] (2017-01-18 19:20:07 to 2026-01-06 08:03:39)
comments: int64 (range 0 to 58)
user: string (length 2 to 28)
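The records that follow need nothing beyond the standard library to work with; a minimal sketch (record values copied from the rows below, field names from the schema above) of filtering by state and counting issues per repo:

```python
from collections import Counter

# Two records copied from the rows below, keyed by the schema fields above.
records = [
    {"repo": "sgl-project/sglang", "number": 14824,
     "title": "Throughput degradation on Qwen3-30B-A3B with EAGLE3",
     "state": "open", "labels": [], "comments": 1, "user": "Zzsf11"},
    {"repo": "vllm-project/vllm", "number": 30381,
     "title": "[Usage]:", "state": "closed", "labels": ["usage"],
     "comments": 0, "user": "tobeprozy"},
]

# Keep only open issues and tally how many records each repo contributes.
open_issues = [r for r in records if r["state"] == "open"]
per_repo = Counter(r["repo"] for r in records)

print(len(open_issues))               # 1
print(per_repo["vllm-project/vllm"])  # 1
```

The same pattern extends to any of the schema fields, e.g. filtering on `labels` or sorting by `comments`.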
sgl-project/sglang
14,824
Throughput degradation on Qwen3-30B-A3B with EAGLE3
I observed a throughput degradation when trying to use EAGLE3 to speed up Qwen3-30B-A3B (on 2x H100). I suspect the overhead might be overshadowing the gains. It would be great if we could have some profiling analysis to pinpoint exactly where the cost is coming from. Also, tuning parameters for MoE models feels much...
https://github.com/sgl-project/sglang/issues/14824
open
[]
2025-12-10T14:22:05Z
2025-12-19T21:36:54Z
1
Zzsf11
vllm-project/vllm
30,392
[Bug]: Docker image v0.12.0 fails to serve
### Your current environment ```text Collecting environment information... ============================== System Info ============================== OS : Ubuntu 22.04.5 LTS (x86_64) GCC version : (Ubuntu 11.4.0-1ubuntu1~22.04.2) 11.4.0 Clang version : C...
https://github.com/vllm-project/vllm/issues/30392
open
[ "usage" ]
2025-12-10T13:43:59Z
2026-01-04T14:24:56Z
7
kuopching
huggingface/transformers
42,771
FSDP of Trainer does not work well with Accelerate
### System Info - `transformers` version: 4.57.3 - Platform: Linux-6.6.97+-x86_64-with-glibc2.35 - Python version: 3.11.11 - Huggingface_hub version: 0.36.0 - Safetensors version: 0.7.0 - Accelerate version: 1.12.0 - Accelerate config: not found - DeepSpeed version: not installed - PyTorch version (accelerator?): 2...
https://github.com/huggingface/transformers/issues/42771
open
[ "bug" ]
2025-12-10T12:54:49Z
2025-12-11T07:07:19Z
2
gouchangjiang
vllm-project/vllm
30,381
[Usage]:
### Your current environment ```text The output of `python collect_env.py` ``` ### How would you like to use vllm I want to run inference of a [specific model](put link here). I don't know how to integrate it with vllm. ### Before submitting a new issue... - [x] Make sure you already searched for relevant issues...
https://github.com/vllm-project/vllm/issues/30381
closed
[ "usage" ]
2025-12-10T09:27:51Z
2025-12-10T09:28:26Z
0
tobeprozy
vllm-project/vllm
30,380
[Usage]: How do people generally use vllm/tests?
### Your current environment anywhere ### How would you like to use vllm I don't know how to use vllm test. ### Before submitting a new issue... - [x] Make sure you already searched for relevant issues, and asked the chatbot living at the bottom right corner of the [documentation page](https://docs.vllm.ai/en/lat...
https://github.com/vllm-project/vllm/issues/30380
open
[ "usage" ]
2025-12-10T09:27:46Z
2025-12-10T13:19:18Z
1
tobeprozy
vllm-project/vllm
30,379
[Usage]: how to use vllm/tests/?
### Your current environment How do people generally use [vllm](https://github.com/vllm-project/vllm/tree/main)/[tests](https://github.com/vllm-project/vllm/tree/main/tests)? ### How would you like to use vllm I want to run inference of a [specific model](put link here). I don't know how to integrate it with vllm. ### Before submitt...
https://github.com/vllm-project/vllm/issues/30379
closed
[ "usage" ]
2025-12-10T09:25:52Z
2025-12-10T09:26:25Z
0
tobeprozy
vllm-project/vllm
30,375
[Bug]: [TPU] ShapeDtypeStruct error when loading custom safetensors checkpoint on TPU v5litepod
### Your current environment <details> <summary>The output of <code>python collect_env.py</code></summary> PyTorch version: 2.9.0+cu128 vLLM version: 0.12.0 (vllm-tpu) JAX version: 0.8.0 Python version: 3.12.8 (main, Jan 14 2025, 22:49:14) [Clang 19.1.6] TPU: v5litepod-4 (4 chips, single host) OS: Amazon Linux 2023 ...
https://github.com/vllm-project/vllm/issues/30375
open
[ "bug" ]
2025-12-10T08:12:57Z
2025-12-11T05:34:19Z
1
Baltsat
sgl-project/sglang
14,800
How should we set piecewise-cuda-graph-max-tokens according to TP DP and chunked-prefill-size?
How should we set piecewise-cuda-graph-max-tokens according to TP DP and chunked-prefill-size? For TP only, should we set piecewise-cuda-graph-max-tokens = chunked-prefill-size? and for DP attention DP<=TP, should we set piecewise-cuda-graph-max-tokens = chunked-prefill-size/DP? Thanks.
https://github.com/sgl-project/sglang/issues/14800
open
[]
2025-12-10T07:26:36Z
2025-12-10T07:26:36Z
0
llc-kc
sgl-project/sglang
14,783
[Bug][ConvertLinalgRToBinary] encounters error: bishengir-compile: Unknown command line argument '--target=Ascend910B2C'. Try: '/usr/local/Ascend/ascend-toolkit/latest/bin/bishengir-compile --help' bishengir-compile: Did you mean '--pgso=Ascend910B2C'?
### Checklist - [x] I searched related issues but found no solution. - [ ] The bug persists in the latest version. - [ ] Issues without environment info and a minimal reproducible demo are hard to resolve and may receive no feedback. - [ ] If this is not a bug report but a general question, please start a discussion a...
https://github.com/sgl-project/sglang/issues/14783
closed
[ "npu" ]
2025-12-10T03:54:50Z
2025-12-13T12:28:26Z
1
rsy-hub4121
huggingface/transformers
42,757
cannot import name 'is_offline_mode' from 'huggingface_hub'
### System Info - transformers-5.0.0 - huggingface_hub-1.2.1 ``` ImportError: cannot import name 'is_offline_mode' from 'huggingface_hub' (/root/miniconda3/envs/transformers/lib/python3.10/site-packages/huggingface_hub/__init__.py) ``` ### Who can help? _No response_ ### Information - [ ] The official example scri...
https://github.com/huggingface/transformers/issues/42757
closed
[ "bug" ]
2025-12-10T02:43:43Z
2025-12-23T17:15:20Z
0
dollarser
vllm-project/vllm
30,359
[RFC] [QeRL]: Online Quantization and Model Reloading
### Motivation. ## What is Quantized Model Reloading and Why is it Useful? vLLM serves not only as an inference runtime for serving requests from end users, but also as a means of serving requests for large language model post-training. One particularly important use case is using vLLM to serve rollouts (required by ...
https://github.com/vllm-project/vllm/issues/30359
open
[ "RFC" ]
2025-12-09T21:24:20Z
2025-12-19T18:19:22Z
8
kylesayrs
vllm-project/vllm
30,358
[Bug]: NIXL PD disaggregate with host_buffer has accuracy issue - Prefill scheduled num_block mismatch at update_state_after_alloc and request_finished
### Your current environment vllm-commit-id: 73a484caa1ad320d6e695f098c25c479a71e6774 Tested with A100 ### 🐛 Describe the bug How to reproduce ``` PREFILL_BLOCK_SIZE=16 DECODE_BLOCK_SIZE=16 bash tests/v1/kv_connector/nixl_integration/run_accuracy_test.sh --kv_buffer_device cpu ``` accuracy is ~0.3 much lower tha...
https://github.com/vllm-project/vllm/issues/30358
open
[ "bug" ]
2025-12-09T20:15:48Z
2025-12-10T17:07:38Z
3
xuechendi
pytorch/pytorch
169,970
Does torch._grouped_mm work with cudagraphs over multiple nodes?
### 🐛 Describe the bug torch._grouped_mm causes dynamic memory allocations via c10::cuda::CUDACachingAllocator::allocate() that appear to be incompatible with CUDA graph capture and replay. This causes "CUDA error: an illegal memory access was encountered" when these operations are captured in a CUDA graph and later ...
https://github.com/pytorch/pytorch/issues/169970
open
[ "oncall: distributed", "module: cuda", "module: cuda graphs" ]
2025-12-09T18:12:10Z
2025-12-16T22:00:56Z
3
ashahab
huggingface/datasets
7,900
`Permission denied` when sharing cache between users
### Describe the bug We want to use `datasets` and `transformers` on a shared machine. Right now, each user has a separate HF_HOME in their home directory. To reduce duplicates of the datasets, we want to share that cache. While experimenting, we are running into `Permission denied` errors. It looks like this was sup...
https://github.com/huggingface/datasets/issues/7900
open
[]
2025-12-09T16:41:47Z
2025-12-16T15:39:06Z
2
qthequartermasterman
sgl-project/sglang
14,746
Cannot join SGL slack Channel
Same issue as [#3929](https://github.com/sgl-project/sglang/issues/3929) and [#11983](https://github.com/sgl-project/sglang/issues/11983). Can we get a new invitation link? Thanks a lot!
https://github.com/sgl-project/sglang/issues/14746
closed
[]
2025-12-09T15:43:51Z
2025-12-10T08:33:01Z
2
alphabetc1
pytorch/pytorch
169,954
How to prevent landing PRs on sparse tensors that should be rejected?
Recently, https://github.com/pytorch/pytorch/pull/169807 was submitted that added out-of-bounds checks for inputs of constructing a sparse COO tensor. Sounds reasonable, right? No, it is not right because the corresponding checks already exist but are disabled and the PR authors/reviewers are not aware of this. Fortuna...
https://github.com/pytorch/pytorch/issues/169954
closed
[ "triage review", "module: sparse" ]
2025-12-09T14:27:46Z
2025-12-17T04:25:50Z
null
pearu
huggingface/transformers
42,740
How to train TrOCR with transformers 4.57+?
I trained TrOCR with transformers 4.15 and the results were correct, but when training with 4.57.1 the accuracy is always 0. I couldn't find the reason. Can TrOCR be trained successfully with the latest transformers?
https://github.com/huggingface/transformers/issues/42740
open
[]
2025-12-09T14:07:50Z
2026-01-05T06:46:34Z
null
cqray1990
huggingface/transformers
42,739
How about adding local kernel loading to `transformers.KernelConfig()`
### Feature request As title. ### Motivation Currently, the class `KernelConfig()` creates the `kernel_mapping` through the `LayerRepository` provided by `huggingface/kernels`. The `LayerRepository` downloads and loads kernels from the hub. I think adding the ability for it to load kernels locally would be very helpf...
https://github.com/huggingface/transformers/issues/42739
closed
[ "Feature request" ]
2025-12-09T12:22:41Z
2025-12-17T01:21:57Z
null
zheliuyu
huggingface/peft
2,945
Return base model state_dict with original keys
### Feature request TL;DR: `from peft import get_base_model_state_dict` Hi! I'm looking for a way to get the state dict of the base model after it has been wrapped in a `PeftModel` while preserving the original model's state dict keys. To the best of my knowledge, the only way this can be done right now is getting t...
https://github.com/huggingface/peft/issues/2945
open
[]
2025-12-09T11:23:52Z
2025-12-09T17:06:13Z
6
dvmazur
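For context on the request above: a `PeftModel` wraps the base model, so base weights typically appear in the state dict under a `base_model.model.` prefix. A hypothetical helper (not part of the peft API; the name and the exact prefix are assumptions for illustration) could restore the original keys:

```python
def strip_peft_prefix(state_dict, prefix="base_model.model."):
    """Return a copy of state_dict with the wrapper prefix removed from every
    key that carries it (hypothetical helper, not part of the peft library)."""
    return {
        (k[len(prefix):] if k.startswith(prefix) else k): v
        for k, v in state_dict.items()
    }

# Dummy state dict standing in for a wrapped model's weights.
wrapped = {
    "base_model.model.embed_tokens.weight": 0,
    "base_model.model.lm_head.weight": 1,
}
print(strip_peft_prefix(wrapped))
# {'embed_tokens.weight': 0, 'lm_head.weight': 1}
```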
pytorch/ao
3,469
per tensor symmetric activation quantization
Is there a w8a8 QAT config that supports the following: int8 per-tensor symmetric activation quantization and int8 per-channel symmetric weight quantization?
https://github.com/pytorch/ao/issues/3469
open
[]
2025-12-09T11:12:02Z
2025-12-12T21:23:43Z
2
jivercx
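The scheme asked about above can be illustrated generically. A minimal plain-Python sketch of int8 per-tensor symmetric quantization (not torchao's API; per-channel weight quantization would apply the same function independently to each output channel's row):

```python
def quantize_per_tensor_symmetric(xs, n_bits=8):
    """Symmetric per-tensor int quantization: one scale for the whole tensor,
    zero-point fixed at 0. A generic sketch, not torchao's implementation."""
    qmax = 2 ** (n_bits - 1) - 1                  # 127 for int8
    scale = max(abs(x) for x in xs) / qmax or 1.0  # avoid 0 scale for all-zero input
    q = [max(-qmax - 1, min(qmax, round(x / scale))) for x in xs]
    return q, scale

q, scale = quantize_per_tensor_symmetric([-1.0, 0.0, 0.5, 1.0])
print(q)      # [-127, 0, 64, 127]
print(scale)  # 1/127 ≈ 0.007874
```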
vllm-project/vllm
30,325
[Performance]: Can we enable triton_kernels on sm120
### Proposal to improve performance Since PR (https://github.com/triton-lang/triton/pull/8498) has been merged, we may be able to enable triton_kernels on sm120. https://github.com/vllm-project/vllm/blob/67475a6e81abea915857f82e6f10d80b03b842c9/vllm/model_executor/layers/quantization/mxfp4.py#L153-L160 Although I haven't looke...
https://github.com/vllm-project/vllm/issues/30325
open
[ "performance" ]
2025-12-09T09:21:04Z
2025-12-10T10:16:18Z
2
ijpq
pytorch/pytorch
169,929
Python 3.14 – No CUDA/GPU Wheels Available (Only CPU Build Installed)
### 🐛 Describe the bug Hi PyTorch Team, I’m using Python 3.14, and I noticed that the latest PyTorch versions install successfully, but only CPU builds are available: pip install torch torchvision torchaudio **Result,** torch.__version__ → 2.9.1+cpu torch.version.cuda → None torch.cuda.is_available() → False **My...
https://github.com/pytorch/pytorch/issues/169929
open
[ "module: binaries", "triaged" ]
2025-12-09T07:57:15Z
2025-12-16T15:06:50Z
9
ashikauk24-source
vllm-project/vllm
30,296
[Usage]: Is it possible to configure P2P kv-cache in multi-machine and multi-gpu scenarios?
### Your current environment ```text Collecting environment information... ============================== System Info ============================== OS : Ubuntu 22.04.5 LTS (x86_64) GCC version : (Ubuntu 11.4.0-1ubuntu1~22.04) 11.4.0 Clang version : Cou...
https://github.com/vllm-project/vllm/issues/30296
open
[ "usage" ]
2025-12-09T03:29:48Z
2025-12-09T03:29:48Z
0
lululu-1997
pytorch/pytorch
169,893
Investigate which submodules in third_party/ can be omitted from stable header hiding
In https://github.com/pytorch/pytorch/pull/167496 we hide all headers except stable/headeronly/shim when TORCH_STABLE_ONLY/TORCH_TARGET_VERSION are defined @pearu raised that headers in third_party/ should be exposed > The TORCH_TARGET_VERSION post-processing modifies all header files (except few such as headeronly a...
https://github.com/pytorch/pytorch/issues/169893
open
[ "module: cpp-extensions", "module: cpp", "triaged" ]
2025-12-08T22:45:21Z
2025-12-30T21:08:11Z
2
mikaylagawarecki
huggingface/trl
4,641
Further improving `GRPOTrainer` doc to include Qwen SAPO in Loss Types
### Feature request Hello, I'd like to further document the Qwen SAPO implementation from @pramodith , not in the `paper_index` (he already did a good job) but in the `loss-types` subsection of the `GRPOTrainer`: https://huggingface.co/docs/trl/main/en/grpo_trainer#loss-types. I'd like to add the formula, a short pa...
https://github.com/huggingface/trl/issues/4641
closed
[ "📚 documentation", "✨ enhancement", "🏋 GRPO" ]
2025-12-08T20:06:59Z
2025-12-12T17:28:06Z
1
casinca
pytorch/pytorch
169,870
Capturing ViewAndMutationMeta for training graphs for PyTorch 2.8
### 🚀 The feature, motivation and pitch ### Problem We want to capture ViewAndMutationMeta for training graphs so that we can capture and propagate input/output aliasing information to later compilation phases. ### Likely Approach For training graphs, it seems that the best place to do that would be to capture th...
https://github.com/pytorch/pytorch/issues/169870
open
[ "triaged", "module: viewing and reshaping", "oncall: pt2" ]
2025-12-08T19:52:18Z
2025-12-28T22:09:55Z
1
pratnali
huggingface/transformers
42,713
Multimodal forward pass for Ministral 3 family
### System Info https://github.com/huggingface/transformers/blob/main/src/transformers/models/ministral3/modeling_ministral3.py#L505 It seems a generic class is used here which takes only the input IDs as input, ignoring the pixel values. When can we expect this to be implemented? ### Who can help? @Cyrilvallez ...
https://github.com/huggingface/transformers/issues/42713
closed
[ "bug" ]
2025-12-08T18:46:14Z
2025-12-15T11:21:08Z
4
rishavranaut
pytorch/pytorch
169,854
CPython test cases under dynamo don't follow paradigm
### 🚀 The feature, motivation and pitch ### Problem Currently, the test/dynamo folder prevents calling a test case with PYTORCH_TEST_WITH_DYNAMO, and additionally any tests under test/dynamo should have their main method run torch._dynamo.test_case.run_tests. The CPython test suite goes against those two assumptions...
https://github.com/pytorch/pytorch/issues/169854
open
[ "module: tests", "triaged", "enhancement", "oncall: pt2", "module: dynamo" ]
2025-12-08T17:52:30Z
2025-12-12T14:35:03Z
1
trichmo
vllm-project/vllm
30,271
[Usage]: Qwen 3 VL Embedding
### Your current environment Hi I would like to ask if there is a way to extract Qwen 3 VL multimodal embeddings, similar to Jina Embeddings V4, for retrieval purposes? I've tried to initialize the model this way but it doesn't work: ``` model = LLM( model="Qwen/Qwen3-VL-8B-Instruct", task="embed", trust_...
https://github.com/vllm-project/vllm/issues/30271
closed
[ "usage" ]
2025-12-08T17:26:41Z
2025-12-09T07:18:35Z
2
MingFengC
huggingface/optimum
2,390
Request for input shapes to be specified
### Feature request Currently, optimum-cli does not provide a way to specify static input shapes, it defaults to dynamic shapes. Is there a way to make it possible to specify the input shape? If not, why do we not allow this? An example would be: `optimum-cli export openvino --model microsoft/resnet-50 graph_convert...
https://github.com/huggingface/optimum/issues/2390
open
[]
2025-12-08T15:24:04Z
2025-12-20T19:38:02Z
3
danielliuce
huggingface/transformers
42,698
parse_response must not accept detokenized text
### System Info [parse_response](https://github.com/huggingface/transformers/blob/5ee9ffe386c5ecc77d8009ab648b8c4c109931ea/src/transformers/tokenization_utils_base.py#L3525) function must only accept raw tokens, but never detokenized text. Parsing from text is a vulnerability and therefore must not be possible. Once ...
https://github.com/huggingface/transformers/issues/42698
open
[ "bug" ]
2025-12-08T12:20:39Z
2025-12-08T15:59:19Z
2
kibergus
vllm-project/vllm
30,248
[Feature]: any plan to support Relaxed Acceptance in v1?
### 🚀 The feature, motivation and pitch [NV Relaxed Acceptance](https://github.com/NVIDIA/TensorRT-LLM/blob/main/docs/source/blogs/tech_blog/blog2_DeepSeek_R1_MTP_Implementation_and_Optimization.md#relaxed-acceptance) There are PRs ([vllm](https://github.com/vllm-project/vllm/pull/21506), [vllm](https://github.com/vl...
https://github.com/vllm-project/vllm/issues/30248
open
[ "feature request" ]
2025-12-08T08:45:20Z
2025-12-09T10:18:22Z
4
chengda-wu
vllm-project/vllm
30,246
[Usage]: How to disable reasoning for gpt-oss-120b
### Your current environment ``` ============================== System Info ============================== OS : Ubuntu 22.04.5 LTS (x86_64) GCC version : (Ubuntu 11.4.0-1ubuntu1~22.04.2) 11.4.0 Clang version : Could not collect CMake version ...
https://github.com/vllm-project/vllm/issues/30246
open
[ "usage" ]
2025-12-08T08:23:58Z
2025-12-08T08:23:58Z
0
WiiliamC
huggingface/transformers
42,690
How to run Phi4MultimodalProcessor
### System Info transformers version: 4.57.1 python version: 3.9 ### Who can help? _No response_ ### Information - [x] The official example scripts - [ ] My own modified scripts ### Tasks - [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...) - [x] My own task or dataset (give deta...
https://github.com/huggingface/transformers/issues/42690
open
[ "bug" ]
2025-12-08T03:27:02Z
2025-12-09T12:30:27Z
null
wcrzlh
vllm-project/vllm
30,222
[Bug]: gpt-oss response api: streaming + code interpreter has bugs
### Your current environment <details> <summary>The output of <code>python collect_env.py</code></summary> ```text Your output of `python collect_env.py` here ``` </details> ### 🐛 Describe the bug Gpt-oss in streaming mode cannot see internal code interpreter output the problem is with https://github.com/vllm-...
https://github.com/vllm-project/vllm/issues/30222
open
[ "bug" ]
2025-12-08T01:32:35Z
2025-12-08T09:49:55Z
4
jordane95
pytorch/pytorch
169,797
When augment_with_fx_traces=True but the user has misconfigured the FX config, raise an error
### 🐛 Describe the bug If you dump memory profile with augment_with_fx_traces=True but you don't set torch.fx.experimental._config.enrich_profiler_metadata (or better yet, you accidentally use the dead dynamo version of the config), you will just silently not get any augmentation. This is bad, we should say something...
https://github.com/pytorch/pytorch/issues/169797
open
[ "triaged", "oncall: profiler", "module: fx" ]
2025-12-08T01:24:32Z
2025-12-09T18:23:40Z
0
ezyang
pytorch/tutorials
3,687
Feedback about Tensors
There is the following issue on this page: https://docs.pytorch.org/tutorials/beginner/basics/tensorqs_tutorial.html Hello, I just finished the tutorial on tensors, and I think it's really well written. However, I have a question. There are so many attributes and methods related to tensors that after reading the tutor...
https://github.com/pytorch/tutorials/issues/3687
open
[ "question", "core" ]
2025-12-07T21:07:52Z
2025-12-08T17:00:50Z
null
NJX-njx
vllm-project/vllm
30,211
[Bug]: How to make vLLM support multi-stream torch compile with CUDA graph capture on each stream.
### Your current environment <details> <summary>The output of <code>python collect_env.py</code></summary> ```text Your output of `python collect_env.py` here ``` </details> ### 🐛 Describe the bug SGLang now supports multi-stream torch compile with each stream capturing a CUDA graph. The code link is https://git...
https://github.com/vllm-project/vllm/issues/30211
open
[ "bug", "feature request", "nvidia" ]
2025-12-07T15:12:04Z
2025-12-15T05:39:39Z
3
lambda7xx
pytorch/executorch
16,123
Is dynamic weight update / fine-tuning supported in QNN / XNNPACK backends?
### 🚀 The feature, motivation and pitch I’m working on a research project to fine-tune a model on Android devices. I am exploring using ExecuTorch + QNN or XNNPACK backend for inference acceleration, but need to ensure that the backend can support dynamic modification of weights (i.e., after initialization, allow upd...
https://github.com/pytorch/executorch/issues/16123
open
[ "module: training" ]
2025-12-07T06:24:09Z
2025-12-11T09:27:29Z
5
qqqqqqqwy
vllm-project/vllm
30,193
[Bug]: Behavioral Difference in hidden_states[-1] between vLLM and Transformers for Qwen3VLForConditionalGeneration
### Your current environment - vLLM Version: 0.11.2 - Transformers Version: 4.57 - Model: Qwen3VLForConditionalGeneration ### 🐛 Describe the bug I have observed an inconsistency in the output of the forward method for the `Qwen3VLForConditionalGeneration` class between vLLM (version 0.11.2) and Transformers (version ...
https://github.com/vllm-project/vllm/issues/30193
closed
[ "bug" ]
2025-12-07T04:50:11Z
2025-12-16T03:24:00Z
3
guodongxiaren
huggingface/transformers
42,674
Missing imports for DetrLoss and DetrHungarianMatcher
Previously, I was able to import these classes as ``` from transformers.models.detr.modeling_detr import DetrLoss, DetrObjectDetectionOutput, DetrHungarianMatcher ``` In v4.57.3, the import fails and I also cannot find DetrLoss or DetrHungarianMatcher anywhere in the codebase. Have they been removed/replaced with an ...
https://github.com/huggingface/transformers/issues/42674
open
[]
2025-12-06T15:32:14Z
2026-01-06T08:02:43Z
1
sammlapp
vllm-project/vllm
30,163
[Usage]: Help Running NVFP4 model on 2x DGX Spark with vLLM + Ray (multi-node)
### Your current environment # Help: Running NVFP4 model on 2x DGX Spark with vLLM + Ray (multi-node) ## Hardware - **2x DGX Spark** (GB10 GPU each, sm_121a / compute capability 12.1) - Connected via 200GbE ConnectX-7/Ethernet - Driver: 580.95.05, Host CUDA: 13.0 ## Goal Run `lukealonso/GLM-4.6-NVFP4` (357B MoE mode...
https://github.com/vllm-project/vllm/issues/30163
open
[ "usage" ]
2025-12-06T00:24:52Z
2025-12-07T16:22:40Z
2
letsrock85
huggingface/accelerate
3,876
Why TP can't be used with pure DP?
As per [this](https://github.com/huggingface/accelerate/blob/b9ca0de682f25f15357a3f9f1a4d94374a1d451d/src/accelerate/parallelism_config.py#L332), we cannot use TP along with pure DP (or DDP). We need to shard the model across further nodes by specifying dp_shard_size as well. Why does this limitation exist? Is it just ...
https://github.com/huggingface/accelerate/issues/3876
open
[]
2025-12-05T16:11:22Z
2025-12-26T10:07:09Z
3
quic-meetkuma
huggingface/lerobot
2,589
Clarification on XVLA folding checkpoint
Hi Lerobot team, great work on the XVLA release! I have tried finetuning on my custom dataset and have a few clarifications: 1. Is the [lerobot/xvla-folding](https://huggingface.co/lerobot/xvla-folding) checkpoint finetuned on [lerobot/xvla-soft-fold](https://huggingface.co/datasets/lerobot/xvla-soft-fold)? - I a...
https://github.com/huggingface/lerobot/issues/2589
open
[ "question", "policies" ]
2025-12-05T11:42:46Z
2025-12-22T08:43:05Z
null
brycegoh
pytorch/pytorch
169,663
[CI] Inductor dashboards failing due to unused --quant arg
### 🐛 Describe the bug The offending code was added in https://github.com/pytorch/pytorch/pull/123419 which results in a failure as no quant type is provided. https://github.com/pytorch/pytorch/blob/a097e166db7077f1e8da94757ccd91a6a521550e/.ci/pytorch/test.sh#L767 This is causing unnecessary headaches when debugging...
https://github.com/pytorch/pytorch/issues/169663
closed
[ "oncall: pt2", "module: inductor" ]
2025-12-05T11:10:10Z
2025-12-08T01:34:22Z
0
jataylo
vllm-project/vllm
30,129
[Feature]: About video input for qwen3vl
### 🚀 The feature, motivation and pitch I tried using base64 encoding to provide video input for vllm inference, but it seems this input method is not yet supported by Qwen3VL (I've seen similar issues reported elsewhere). Currently, I can only specify parameters like fps/maximum frames and then pass the local path o...
https://github.com/vllm-project/vllm/issues/30129
open
[ "feature request" ]
2025-12-05T10:32:06Z
2025-12-19T03:32:30Z
4
lingcco
huggingface/sentence-transformers
3,585
How to choose negative instance when using MultipleNegativesRankingLoss train embedding model?
Firstly, I am still confused about how to choose negative instances when using MultipleNegativesRankingLoss; in https://github.com/huggingface/sentence-transformers/blob/main/sentence_transformers/losses/MultipleNegativesRankingLoss.py#L113 `embeddings = [self.model(sentence_feature)["sentence_embedding"] for sentence_feature ...
https://github.com/huggingface/sentence-transformers/issues/3585
open
[]
2025-12-05T09:50:26Z
2025-12-09T11:49:26Z
null
4daJKong
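For background on the question above: MultipleNegativesRankingLoss scores each anchor against every positive in the batch, so the other in-batch positives act as the negatives. A plain-Python sketch of that objective (dot-product similarity assumed for simplicity; the real loss also supports explicit hard negatives and a scale factor):

```python
import math

def mnrl_loss(anchors, positives):
    """In-batch-negatives loss: for anchor i the target is positive i,
    and every other positive in the batch acts as a negative."""
    def dot(u, v):
        return sum(a * b for a, b in zip(u, v))
    loss = 0.0
    for i, a in enumerate(anchors):
        scores = [dot(a, p) for p in positives]          # row i of the score matrix
        log_z = math.log(sum(math.exp(s) for s in scores))
        loss += log_z - scores[i]                        # cross-entropy, target index i
    return loss / len(anchors)

# Anchors aligned with their positives; each off-diagonal pair is a negative.
anchors = [[1.0, 0.0], [0.0, 1.0]]
positives = [[1.0, 0.0], [0.0, 1.0]]
# Matched pairs score lower loss than deliberately shuffled pairs.
print(mnrl_loss(anchors, positives) < mnrl_loss(anchors, positives[::-1]))  # True
```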
vllm-project/vllm
30,124
[Bug]: How to run DeepSeek-V3.2 on 2 H100 nodes?
### 🐛 Describe the bug How to run DeepSeek-V3.2 on 2 H100 nodes? I only found the cmd for H200/B200: vllm serve deepseek-ai/DeepSeek-V3.2 -tp 8 but it does not work in multi-node scenarios (e.g., 2 H100 nodes). So what should the cmd be for two H100 nodes? how should params --tp/--dp/--pp be configured? ### Befo...
https://github.com/vllm-project/vllm/issues/30124
open
[ "bug" ]
2025-12-05T09:40:45Z
2025-12-14T08:57:52Z
2
XQZ1120
pytorch/pytorch
169,659
[Export] Inconsistent input validation when re-importing a .pt2 model on Linux vs. Windows
### 🐛 Describe the bug ## Summary: Importing the same .pt2 model on Windows and Linux yields a GraphModule() instance containing a guard function for input validation on Windows and a GraphModule _without_ that guard function on Linux (same device, Ubuntu running in WSL2). **Why is this an issue?** When trying to p...
https://github.com/pytorch/pytorch/issues/169659
closed
[ "oncall: pt2", "oncall: export" ]
2025-12-05T08:50:31Z
2025-12-09T09:22:45Z
3
etrommer
vllm-project/vllm
30,121
[Feature]: Could you please provide Chinese documentation for vLLM? 😊
### 🚀 The feature, motivation and pitch Could you please provide Chinese documentation for vLLM? 😊 ### Alternatives Could you please provide Chinese documentation for vLLM? 😊 ### Additional context Could you please provide Chinese documentation for vLLM? 😊 ### Before submitting a new issue... - [x] Make su...
https://github.com/vllm-project/vllm/issues/30121
open
[ "feature request" ]
2025-12-05T08:13:46Z
2025-12-08T04:31:05Z
4
moshilangzi
huggingface/transformers
42,641
Cannot inference llava-next with transformers==4.57.1 on dtype="auto" bug
### System Info ``` - `transformers` version: 4.57.1 - Platform: Linux-5.15.0-161-generic-x86_64-with-glibc2.35 - Python version: 3.10.12 - Huggingface_hub version: 0.35.3 - Safetensors version: 0.6.2 - Accelerate version: 1.10.1 - Accelerate config: not found - DeepSpeed version: not installed - PyTorch version (a...
https://github.com/huggingface/transformers/issues/42641
open
[ "bug" ]
2025-12-05T04:39:35Z
2025-12-23T11:08:56Z
5
rebel-seinpark
vllm-project/vllm
30,098
[Doc]: Misleading Logic & Docstring in `block_quant_to_tensor_quant` (Block FP8)
### 📚 The doc issue The docstring and implementation of the `block_quant_to_tensor_quant` function have a critical mismatch regarding the dequantization process, leading to numerical errors when used outside of specific fused kernel backends. ### Problematic Function The function is currently implemented as: ```py...
https://github.com/vllm-project/vllm/issues/30098
closed
[ "documentation" ]
2025-12-05T02:12:07Z
2025-12-24T17:22:50Z
0
xqoasis
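For context on the issue above: block quantization stores one scale per block of weights, and dequantization multiplies each block's entries by that block's scale. A generic 1-D sketch (block size and names are illustrative, not vLLM's `block_quant_to_tensor_quant` implementation):

```python
def block_dequant(q_values, scales, block_size):
    """Multiply each contiguous block of quantized values by its own scale.
    Generic illustration of block-wise dequantization, not vLLM's code."""
    out = []
    for i, q in enumerate(q_values):
        out.append(q * scales[i // block_size])
    return out

# Two blocks of size 2, each with its own scale.
print(block_dequant([1, 2, 3, 4], [0.5, 2.0], block_size=2))
# [0.5, 1.0, 6.0, 8.0]
```

Converting such a block-quantized tensor to a single per-tensor scale necessarily changes the numerics unless the blocks are dequantized first, which is the mismatch the issue describes.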
huggingface/transformers
42,638
Routing Replay for MoEs
### Feature request Recent RL approaches for training MoE models increasingly rely on **Routing Replay**, as described in the following papers: - https://huggingface.co/papers/2507.18071 - https://huggingface.co/papers/2510.11370 - https://huggingface.co/papers/2512.01374 Without going into the training details, Rout...
https://github.com/huggingface/transformers/issues/42638
open
[ "Feature request" ]
2025-12-04T23:58:14Z
2025-12-05T16:29:05Z
2
qgallouedec
vllm-project/vllm
30,084
[Performance]: Should I expect linear scaling with pure DP?
### Proposal to improve performance _No response_ ### Report of performance regression _No response_ ### Misc discussion on performance I decided to benchmark vLLM 0.11.2 with pure DP of Qwen/Qwen2.5-32B-Instruct deployment (before benchmarking DP+EP with Qwen/Qwen3-30B-A3B-Instruct-2507) on DP1 vs DP8 (H200): DP1...
https://github.com/vllm-project/vllm/issues/30084
open
[ "performance" ]
2025-12-04T19:52:45Z
2025-12-16T04:09:24Z
7
pbelevich
vllm-project/vllm
30,082
[Usage]: Turn off reasoning for Kimi-K2-Thinking?
### Your current environment ```text Output of collect_env.py- Collecting environment information... ============================== System Info ============================== OS : Ubuntu 22.04.5 LTS (x86_64) GCC version : (Ubuntu 11.4.0-1ubuntu1~22.04) 11.4.0 Clang...
https://github.com/vllm-project/vllm/issues/30082
open
[ "usage" ]
2025-12-04T19:32:13Z
2025-12-08T23:02:58Z
2
vikrantdeshpande09876
pytorch/pytorch
169,597
Standardize Testing in OpenReg
### 🚀 The feature, motivation and pitch Described in the following [issue](https://github.com/pytorch/pytorch/issues/158917): OpenReg aims to: - Track the evolution of community features and provide up-to-date standardized integration implementations, serving as the official reference and code example for integrati...
https://github.com/pytorch/pytorch/issues/169597
open
[ "triaged", "module: PrivateUse1", "module: openreg" ]
2025-12-04T19:22:19Z
2025-12-04T19:34:26Z
0
JRosenkranz
pytorch/ao
3,436
Int4WeightOnly torch.bmm semantics
Currently, int4 weight only quantization does not work out of the box for llama4 scout. ```python fqn_to_config = FqnToConfig( { r"re:.*\.feed_forward\.experts\.gate_up_proj": Int4WeightOnlyConfig(), r"re:.*\.feed_forward\.experts\.down_proj": Int4WeightOnlyConfig() } ) quantized_model = Auto...
https://github.com/pytorch/ao/issues/3436
open
[ "triaged" ]
2025-12-04T18:35:44Z
2025-12-04T23:12:15Z
0
jcaip
vllm-project/vllm
30,075
[Feature]: Default eplb num_redundant_experts to the lowest valid value if unspecified
### 🚀 The feature, motivation and pitch EPLB requires the number of experts to be chosen up front and there is a known minimum valid value that can be derived from the vllm startup configuration. Extra EPLB experts trade kv cache memory for potential performance improvements, but that is not guaranteed to pay...
https://github.com/vllm-project/vllm/issues/30075
open
[ "help wanted", "good first issue", "feature request" ]
2025-12-04T18:19:03Z
2025-12-20T21:00:23Z
4
smarterclayton
pytorch/torchtitan
2,109
Knowledge Distillation template
Hi, I want to use torchtitan for knowledge distillation. What is the right way to do it? Should I hold both models inside the main model? (Then how can I exclude the teacher from being saved or .train()ed, or exclude it from the optimizer?) Or is there a way to have two separate models (with parallelism handled correctly...
https://github.com/pytorch/torchtitan/issues/2109
open
[ "question" ]
2025-12-04T17:37:51Z
2025-12-04T23:35:26Z
null
Separius
vllm-project/vllm
30,058
[Feature]: Multi-Adapter Support for Embed Qwen3 8B Embedding Model
### 🚀 The feature, motivation and pitch Hi Team, do we currently support multi-adapter (LoRA) support for embedding models, specifically Qwen3 8B Embedding model? If not, when can we expect the support? Thanks :) ### Alternatives _No response_ ### Additional context _No response_ ### Before submitting a new issu...
https://github.com/vllm-project/vllm/issues/30058
open
[ "feature request" ]
2025-12-04T12:05:15Z
2025-12-04T19:42:04Z
4
dawnik17
huggingface/accelerate
3,873
How to specify accelerate launch yaml config item when running with torchrun
I've read the doc [Launching Accelerate scripts](https://huggingface.co/docs/accelerate/basic_tutorials/launch) and would like to launch with torchrun. However, the doc does not mention how to specify configs like `distribute_type` when using torchrun. What are the equivalents of these configurations when using torchr...
https://github.com/huggingface/accelerate/issues/3873
open
[]
2025-12-04T07:27:43Z
2026-01-03T15:07:19Z
null
WhoisZihan
huggingface/lerobot
2,580
How can the leader arm be synchronized to follow the follower arm during inference?
https://github.com/huggingface/lerobot/issues/2580
open
[]
2025-12-04T07:22:07Z
2025-12-11T02:53:11Z
null
zhoushaoxiang
vllm-project/vllm
30,023
[Feature]: Support qwen3next with GGUF?
### 🚀 The feature, motivation and pitch With v0.11.0, `vllm` reports: ``` vllm | (APIServer pid=1) ValueError: GGUF model with architecture qwen3next is not supported yet. ``` https://huggingface.co/Qwen/Qwen3-Next-80B-A3B-Thinking-GGUF I did a simple dig into this; it seems vllm has support for `Qwen3-Next` as ar...
https://github.com/vllm-project/vllm/issues/30023
open
[ "feature request" ]
2025-12-04T03:40:26Z
2025-12-18T05:31:57Z
0
zeerd
pytorch/ao
3,452
Any plans to support `USE_DISTRIBUTED=0` pytorch?
**Dec 6th EDIT:** simplified & expanded error and reproduction example from the conversation below. If not, then please note it in the README/requirements somewhere. The error below was cryptic. The error that led me to this conclusion: <details> ``` Traceback (most recent call last): File "/data/data/com.termux/files/home/dev/l...
https://github.com/pytorch/ao/issues/3452
open
[]
2025-12-04T00:25:19Z
2025-12-07T20:20:59Z
7
rene-descartes2021
vllm-project/vllm
29,998
[Bug]: cannot send two POSTs to the /v1/chat/completions endpoint with identical tool function names with model GPT-OSS-120B
### Your current environment <details> <summary>The bug is reproducible with docker image vllm/vllm-openai:v0.12.0</summary> ```yaml services: vllm-gptoss-large: image: vllm/vllm-openai:v0.12.0 restart: always shm_size: '64gb' deploy: resources: reservations: devices: ...
https://github.com/vllm-project/vllm/issues/29998
open
[ "bug" ]
2025-12-03T21:41:35Z
2025-12-19T15:53:43Z
14
pd-t
huggingface/transformers
42,589
Incorrect tokenization `tokenizers` for escaped strings / Mismatch with `mistral_common`
### System Info ``` In [3]: mistral_common.__version__ Out[3]: '1.8.6' ``` ``` In [4]: import transformers; transformers.__version__ Out[4]: '5.0.0.dev0' ``` ``` In [5]: import tokenizers; tokenizers.__version__ Out[5]: '0.22.1' ``` ### Who can help? @ArthurZucker @itazap ### Information - [ ] The official exam...
https://github.com/huggingface/transformers/issues/42589
closed
[ "bug" ]
2025-12-03T10:57:35Z
2025-12-16T10:45:35Z
5
patrickvonplaten
huggingface/diffusers
12,781
Impossible to log into Huggingface/Diffusers Discord
### Describe the bug When trying to verify my Discord/Huggingface account, no matter what I do, I end up with this message: <img width="512" height="217" alt="Image" src="https://github.com/user-attachments/assets/d1d0f18b-c80f-4862-abde-fb49ee505ddd" /> Has the HF Discord died? If that is the case, what alternative...
https://github.com/huggingface/diffusers/issues/12781
closed
[ "bug" ]
2025-12-03T09:42:55Z
2025-12-04T15:11:42Z
4
tin2tin
pytorch/pytorch
169,461
torch compile + replicate, compute and communication not overlap
### 🐛 Describe the bug When I use a combination of composable.replicate and torch.compile, I observe that all backward allreduce operations are executed only after the entire backward pass computation is complete. This behavior prevents the overlap of computation and communication, which is typically achieved in DDP...
https://github.com/pytorch/pytorch/issues/169461
open
[ "oncall: distributed", "triaged", "oncall: pt2" ]
2025-12-03T08:33:05Z
2025-12-09T17:50:45Z
6
peaceorwell
vllm-project/vllm
29,944
[Usage]: The prefix cache does not appear to bring any performance benefit.
### Your current environment ``` root@ubuntu:/vllm-workspace# python3 collect_env.py Collecting environment information... ============================== System Info ============================== OS : Ubuntu 22.04.5 LTS (x86_64) GCC version : (Ubuntu 11.4.0-1ubuntu1~...
https://github.com/vllm-project/vllm/issues/29944
open
[ "usage" ]
2025-12-03T07:03:49Z
2025-12-03T07:04:37Z
0
wenba0
vllm-project/vllm
29,940
[Usage]: QWen2-Audio-7B support
### Your current environment We encountered numerous peculiar issues during the QWen2-Audio-7B conversion process. Do we currently support Qwen2-Audio-7B? If so, could you provide a demo? Thank you very much! ### 🐛 Describe the bug Refer to Whisper's demo ### Before submitting a new issue... - [x] Make sure you ...
https://github.com/vllm-project/vllm/issues/29940
closed
[ "usage" ]
2025-12-03T06:04:07Z
2025-12-04T14:23:05Z
1
freedom-cui
huggingface/datasets
7,893
push_to_hub OOM: _push_parquet_shards_to_hub accumulates all shard bytes in memory
## Summary Large dataset uploads crash or hang due to memory exhaustion. This appears to be the root cause of several long-standing issues. ### Related Issues This is the root cause of: - #5990 - Pushing a large dataset on the hub consistently hangs (46 comments, open since 2023) - #7400 - 504 Gateway Timeout when u...
https://github.com/huggingface/datasets/issues/7893
closed
[]
2025-12-03T04:19:34Z
2025-12-05T22:45:59Z
2
The-Obstacle-Is-The-Way
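The fix the report above points toward is the general streaming pattern: serialize and hand off one shard at a time instead of accumulating every shard's bytes first. A minimal sketch of that pattern follows; it is generic Python, not the actual `datasets` internals, and `upload_one` is a hypothetical callback standing in for the HTTP upload.

```python
import os
import tempfile

def iter_shards(records, shard_size):
    """Yield fixed-size shards lazily instead of materializing all of them."""
    shard = []
    for rec in records:
        shard.append(rec)
        if len(shard) >= shard_size:
            yield shard
            shard = []
    if shard:
        yield shard

def upload_in_shards(records, shard_size, upload_one):
    """Write and hand off one shard at a time; peak memory is a single
    shard rather than the whole dataset."""
    uploaded = []
    for i, shard in enumerate(iter_shards(records, shard_size)):
        fd, path = tempfile.mkstemp(suffix=f"-{i:05d}.bin")
        with os.fdopen(fd, "wb") as f:
            f.write(repr(shard).encode())   # stand-in for parquet serialization
        upload_one(path)                    # stand-in for the HTTP upload
        os.remove(path)                     # free disk before the next shard
        uploaded.append(i)
    return uploaded
```

With this shape, memory and disk usage stay bounded by one shard regardless of dataset size.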
pytorch/torchtitan
2,101
EP in latest main is slow
Hi team, I tried to duplicate the EP implementation in my model, but I find it runs much more slowly with EP. There is a blocking CPU-GPU synchronization at the beginning of all2all in token dispatch, for input_split and output_split, which is a real bottleneck. Is it possible to avoid it without symmetric memory ...
https://github.com/pytorch/torchtitan/issues/2101
open
[]
2025-12-03T00:10:46Z
2025-12-03T00:10:46Z
0
goldhuang
pytorch/torchtitan
2,100
symmetric memory all2all integration for EP
Hi team, I found https://github.com/pytorch/torchtitan/tree/main/torchtitan/experiments/moe_symm_mem_kernels, but it seems there has been no progress update for a while, according to the status table: Experiment | Test Status | Owners -- | -- | -- [moe_symm_mem_kernels](https://github.com/pytorch/torchtitan/blob/main/torchtitan/experiments/moe_...
https://github.com/pytorch/torchtitan/issues/2100
open
[]
2025-12-03T00:02:55Z
2025-12-03T00:02:55Z
0
goldhuang
vllm-project/vllm
29,920
[Feature]: Add support for fused fp8 output to FlashAttention 3
### 🚀 The feature, motivation and pitch On Hopper, we use FlashAttention as the default attention backend. When o-proj is quantized to fp8, we are leaving performance on the table as FA3 does not support fused output fp8 quant. With Triton/ROCm/AITER backends we saw up to 8% speedups with attention+quant fusion. vLL...
https://github.com/vllm-project/vllm/issues/29920
open
[ "help wanted", "performance", "feature request", "torch.compile" ]
2025-12-02T20:16:31Z
2026-01-05T20:53:11Z
4
ProExpertProg
vllm-project/vllm
29,917
[Feature]: VLLM_DISABLE_COMPILE_CACHE should be a config flag
### 🚀 The feature, motivation and pitch `vllm serve` does a nice printout of non-default config flags. VLLM_DISABLE_COMPILE_CACHE gets used enough that it should have an equivalent config flag for it Offline @ProExpertProg mentioned we can treat it like VLLM_DEBUG_DUMP_PATH where we have both and the env var overrid...
https://github.com/vllm-project/vllm/issues/29917
open
[ "help wanted", "feature request", "torch.compile" ]
2025-12-02T20:06:01Z
2025-12-05T05:19:12Z
6
zou3519
pytorch/xla
9,726
How is GetOutputShardings supposed to work for PJRT Implementers?
We have a custom shardy + stablehlo pipeline managing shard propagation inside our compiler stack. We're having trouble **communicating the correct output sharding back to the framework**, cannot find any obvious interface to do so, and wanted to ask what the intended path for this looks like. To be clear, this is th...
https://github.com/pytorch/xla/issues/9726
open
[ "question", "runtime", "stablehlo" ]
2025-12-02T19:26:20Z
2025-12-15T13:47:09Z
null
jameszianxuTT
huggingface/inference-playground
102
How to know when a model is outdated ?
I'm testing https://huggingface.co/chat/models/openai/gpt-oss-20b and there I asked this: ``` do you know any github repository created in 2025? <p>Sure! Here are a few GitHub repositories that were created in 2025 (all with their public “created date” and a short description):</p> Repository | Created | Short descri...
https://github.com/huggingface/inference-playground/issues/102
open
[]
2025-12-02T17:10:51Z
2025-12-02T17:10:51Z
null
mingodad
pytorch/executorch
16,041
CORTEX_M: Memory optimization
No work has been done looking into optimizing memory of the runtime. This ticket covers a broad investigation into what can be done in this space: 1. Can we optimize scratch buffer allocation (e.g. is it reused between kernels currently?) 2. Can we strip away anything from the elf to minimize runtime size? 3. Any other...
https://github.com/pytorch/executorch/issues/16041
open
[]
2025-12-02T14:24:20Z
2025-12-15T12:01:21Z
0
AdrianLundell
pytorch/executorch
16,039
CORTEX_M: Target configuration
CMSIS-NN requires slightly different lowerings for different architecture extensions (scalar/DSP/vector). Currently the vector extension is assumed, so we might need to add a way to configure this and make modifications in the pass lowering where required. For example, the linear operator currently only passes the kernel_sum...
https://github.com/pytorch/executorch/issues/16039
open
[]
2025-12-02T14:19:34Z
2025-12-03T15:34:04Z
0
AdrianLundell
pytorch/pytorch
169,371
C++ Generator API is platform dependent
When creating a tensor with the C++ API, one can do something like this: ``` try { Tensor t = torch::ones({200, 1, 28, 28}); t.to(torch::DeviceType::MPS); } catch(const std::exception& e) { ... } ``` This code is going to compile and run on all platforms, obviously going into the `catch` block if not on macOS....
https://github.com/pytorch/pytorch/issues/169371
open
[ "module: cpp", "triaged", "module: accelerator" ]
2025-12-02T12:53:52Z
2025-12-04T02:07:26Z
1
matteosal
vllm-project/vllm
29,875
[Usage]: Is there a way to inject the grammar into the docker directly
### Your current environment ```text ============================== System Info ============================== OS : Ubuntu 22.04.5 LTS (x86_64) GCC version : (Ubuntu 11.4.0-1ubuntu1~22.04) 11.4.0 Clang version : Could not collect CMake version ...
https://github.com/vllm-project/vllm/issues/29875
open
[ "usage" ]
2025-12-02T12:30:56Z
2025-12-03T11:53:43Z
1
chwundermsft
vllm-project/vllm
29,871
[Usage]: Extremely low token input speed for DeepSeek-R1-Distill-Llama-70B
### Your current environment <details> <summary>The output of <code>python collect_env.py</code></summary> ```text Collecting environment information... ============================== System Info ============================== OS : Ubuntu 22.04.5 LTS (x86_64) GCC version ...
https://github.com/vllm-project/vllm/issues/29871
open
[ "usage" ]
2025-12-02T11:25:25Z
2025-12-02T15:30:53Z
2
muelphil
vllm-project/vllm
29,866
[Doc]:
### 📚 The doc issue # Installing the XAI libraries !pip install shap !pip install lime !pip install alibi !pip install interpret !pip install dalex !pip install eli5 ### Suggest a potential alternative/fix # Installing the XAI libraries !pip install shap !pip install lime !pip install alibi !pip instal...
https://github.com/vllm-project/vllm/issues/29866
closed
[ "documentation" ]
2025-12-02T10:43:04Z
2025-12-02T10:50:10Z
0
hassaballahmahamatahmat5-cpu
vllm-project/vllm
29,865
[Doc]:
### 📚 The doc issue # Installing the XAI libraries !pip install shap !pip install lime !pip install alibi !pip install interpret !pip install dalex !pip install eli5 ### Suggest a potential alternative/fix # Installing the XAI libraries !pip install shap !pip install lime !pip install alibi !pip instal...
https://github.com/vllm-project/vllm/issues/29865
closed
[ "documentation" ]
2025-12-02T10:43:01Z
2025-12-02T10:50:00Z
0
hassaballahmahamatahmat5-cpu
vllm-project/vllm
29,864
[Usage]: I am unable to run the GLM-4.5-Air-REAP-82B-A12B-nvfp4 model on an RTX 5090.
### Your current environment I am unable to run the GLM-4.5-Air-REAP-82B-A12B-nvfp4 model on an RTX 5090. ```text Collecting environment information... ============================== System Info ============================== OS : Ubuntu 22.04.5 LTS (x86_64) GCC version ...
https://github.com/vllm-project/vllm/issues/29864
open
[ "usage" ]
2025-12-02T10:13:31Z
2025-12-05T17:06:30Z
2
east612-ai
huggingface/diffusers
12,772
How to convert diffusers model to wan2.2 format
I see convert_wan_to_diffusers.py in the diffusers repo, but no convert_diffusers_to_wan.py. Do you have plans to upload a conversion script?
https://github.com/huggingface/diffusers/issues/12772
open
[]
2025-12-02T09:19:29Z
2025-12-02T09:19:29Z
null
wikiwen
pytorch/executorch
16,034
How to add a new backend?
### 🚀 The feature, motivation and pitch Hi, I've already seen that some backends already supported from [here](https://docs.pytorch.org/executorch/main/backends-overview.html). Is there a convenient way to add a new backend, like [CANN](https://developer.huawei.com/consumer/en/doc/hiai-guides/introduction-00000010514...
https://github.com/pytorch/executorch/issues/16034
open
[]
2025-12-02T03:13:18Z
2025-12-02T18:50:18Z
null
JingliangGao
huggingface/diffusers
12,764
When will the img2img pipeline of FLUX.2-dev be released?
I see that the current version(0.36.0-dev) only updated the text-to-image pipeline for Flux2. We are looking forward to the update of the image-to-image pipeline!
https://github.com/huggingface/diffusers/issues/12764
open
[]
2025-12-01T11:25:35Z
2025-12-01T11:41:56Z
1
guanxyu
huggingface/smolagents
1,890
Question: how to use sever-side tools provided by Google Gemini or OpenAI GPT?
Gemini has some server-side tools like google_search (https://ai.google.dev/gemini-api/docs/google-search) or google_map. OpenAI also has server-side tools like web_search. Does Smolagents support using such server-side tools from agents? If so, how?
https://github.com/huggingface/smolagents/issues/1890
open
[]
2025-12-01T05:16:01Z
2025-12-23T10:49:45Z
null
victorx-deckard
huggingface/agents-course
623
Message: Submission received, but no valid/matching task IDs were found in the 1 answers provided. Score did not improve previous record, leaderboard not updated.
I am correctly downloading the GAIA 2023 Level 1 validation dataset using snapshot_download and load_dataset. This submission is for Unit 4 Agent Course. data_dir = snapshot_download( repo_id="gaia-benchmark/GAIA", repo_type="dataset" ) dataset = load_dataset(data_dir, "2023_level1", spl...
https://github.com/huggingface/agents-course/issues/623
open
[ "question" ]
2025-12-01T02:09:21Z
2025-12-01T02:09:21Z
null
ShwetaBorole
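The "no valid/matching task IDs" message above usually means the submitted entries do not echo the dataset's own task IDs. A sketch of building the answers payload, with each entry carrying the `task_id` taken verbatim from its GAIA row; the field names are assumptions about the course's scoring API, and the ID below is a made-up placeholder.

```python
def build_submission(answers_by_task):
    """Pair each answer with the dataset's own task_id, unmodified,
    so the scorer can match entries (field names are assumed)."""
    return [
        {"task_id": str(task_id), "submitted_answer": str(answer)}
        for task_id, answer in answers_by_task.items()
    ]

# Hypothetical task ID for illustration only.
payload = build_submission({"c61d22de-5f6c": "42"})
```

The key point is that the IDs must come from the loaded dataset rows themselves, not be renumbered or reformatted before submission.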
huggingface/tokenizers
1,902
Guide: Compiling `tokenizers` on Android/Termux
Hello Hugging Face team and fellow developers, This is a guide for anyone trying to install `tokenizers` (or packages that depend on it, like `transformers` or `docling`) on an Android device using [Termux](https://termux.dev/). Currently, there are no other issues mentioning Termux, so hopefully, this guide can help ...
https://github.com/huggingface/tokenizers/issues/1902
open
[]
2025-12-01T00:46:42Z
2025-12-01T00:46:42Z
0
Manamama-Gemini-Cloud-AI-01
pytorch/pytorch
169,269
cannot import name 'get_num_sms' from 'torch._inductor.utils'
### 🐛 Describe the bug I'm trying to run [nano-vllm](https://github.com/GeeeekExplorer/nano-vllm), and there is an error: ``` File "/mnt/petrelfs/fengyuan/anaconda3/envs/qwen_copy/lib/python3.12/site-packages/torch/_inductor/kernel/mm_grouped.py", line 20, in <module> from ..utils import ( ImportError: cannot im...
https://github.com/pytorch/pytorch/issues/169269
closed
[]
2025-11-30T19:57:48Z
2025-12-02T20:54:05Z
2
WangHaoZhe
vllm-project/vllm
29,747
[Bug]: --scheduling-policy=priority & n>1 crashes engine
### Your current environment <details> <summary>The output of <code>python collect_env.py</code></summary> ```text Your output of `python collect_env.py` here ``` </details> ### 🐛 Describe the bug When running with priority scheduling, e.g.: ```bash vllm serve Qwen/Qwen3-0.6B --scheduling-policy=priority ``` an...
https://github.com/vllm-project/vllm/issues/29747
closed
[ "bug" ]
2025-11-30T13:20:23Z
2025-12-02T22:42:30Z
3
hibukipanim
pytorch/executorch
16,010
How to run the add operator in ExecuTorch?
The result of the following code is "Segmentation fault: 11" ... ``` using executorch::aten::ScalarType; using executorch::aten::Tensor; using executorch::aten::TensorImpl; int main() { executorch::runtime::runtime_init(); // Create our input tensor. float data[14465 * 3] = { 1 }; TensorImpl::SizesType sizes[] = ...
https://github.com/pytorch/executorch/issues/16010
open
[ "module: runtime" ]
2025-11-30T10:49:22Z
2025-12-01T17:50:32Z
null
rscguo
vllm-project/vllm
29,735
[Usage]:Accessing free_blocks count from LLMEngine or LLM ?
### Your current environment ```text None ``` ### How would you like to use vllm I'm doing research on key-value caching optimization. I want to know how to determine the number of free blocks during runtime. I tried manually creating the engine, but I couldn't find the method after searching through the code. AI ke...
https://github.com/vllm-project/vllm/issues/29735
closed
[ "usage" ]
2025-11-29T19:21:50Z
2025-12-05T14:01:42Z
4
H-T-H
vllm-project/vllm
29,722
[RFC]: Add Balance Scheduling
### Motivation. **Limitations of the current vLLM v1 scheduling strategy** vLLM v1 scheduling currently enables chunkedprefill by default, which processes prefill and decode requests simultaneously in a single scheduling session. This can impact the overall system throughput and performance in some scenarios. Balance...
https://github.com/vllm-project/vllm/issues/29722
open
[ "RFC" ]
2025-11-29T09:28:43Z
2025-12-02T08:23:33Z
0
GDzhu01
vllm-project/vllm
29,707
[Usage]: Workaround to run model on GPUs with Compute Capability < 8.0?
### Your current environment Problem: I am unable to run the Qwen3-VL-32B-Instruct-AWQ-4bit model due to a CUDA compute capability requirement. My hardware consists of two NVIDIA QUADRO RTX 5000 cards (16GB each, 32GB total) with a compute capability of 7.5. The software framework (likely a recent version of PyTorch o...
https://github.com/vllm-project/vllm/issues/29707
closed
[ "usage" ]
2025-11-29T00:47:39Z
2025-11-30T06:04:29Z
5
seasoncool
pytorch/torchtitan
2,091
question of `_op_sac_save_list` for op-sac
Hi, I have a noob question: is there any particular reason we don't put `torch.ops.aten._scaled_dot_product_cudnn_attention.default` (and maybe some other SDPA variants) into `_op_sac_save_list` to avoid recompute?
https://github.com/pytorch/torchtitan/issues/2091
closed
[]
2025-11-28T23:29:02Z
2025-12-02T20:52:33Z
4
rakkit
pytorch/FBGEMM
5,176
How to apply gradient clip in fused optimizer?
I noticed that my embedding bag parameters exploded. Is there a way I could apply gradient clipping? I'm using `EmbOptimType.EXACT_ROWWISE_ADAGRAD`. Here is the code ``` sharder_with_optim_params = EmbeddingBagCollectionSharder( fused_params={ 'optimizer': EmbOptimType.EXACT_ROWWISE_ADAGRAD, ...
https://github.com/pytorch/FBGEMM/issues/5176
open
[]
2025-11-28T16:07:57Z
2025-11-28T16:07:57Z
null
acmilannesta
vllm-project/vllm
29,679
[Usage]: Get request total time
### Your current environment ```text ============================== System Info ============================== OS : Ubuntu 22.04.5 LTS (x86_64) GCC version : (Ubuntu 11.4.0-1ubuntu1~22.04) 11.4.0 Clang version : Could not collect CMake version ...
https://github.com/vllm-project/vllm/issues/29679
closed
[ "usage" ]
2025-11-28T14:03:16Z
2025-12-01T09:34:12Z
5
chwundermsft
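For the last question, total request time can always be measured client-side with a monotonic clock; a minimal generic sketch follows (`timed_call` is a hypothetical wrapper, and it does not touch the server-side Prometheus latency metrics that vLLM also exposes).

```python
import time

def timed_call(fn, *args, **kwargs):
    """Run fn and return (result, wall-clock seconds elapsed), using a
    monotonic clock so system clock adjustments cannot skew the value."""
    start = time.monotonic()
    result = fn(*args, **kwargs)
    return result, time.monotonic() - start

# Example: time any callable, e.g. a chat-completions client call.
result, elapsed = timed_call(sum, [1, 2, 3])
```

For streaming responses, the same pattern applied around first-token receipt versus stream completion separates time-to-first-token from total time.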