-r requirements.txt
# Cluster-only serving stack (not portable to laptop/macOS environments)
# The pinning issue tracked in issue #111 has been resolved; updated pins below.
# Before:
# torch==2.6.0
# transformers==4.51.3
# huggingface-hub==0.31.1
# vllm==0.8.5
# nvidia-cudnn-cu12==9.10.0.56
torch==2.10.0
transformers==4.57.6
huggingface-hub==0.36.2
vllm==0.19.0
# Let torch/vLLM pull the matching CUDA/cuDNN wheels transitively. Hard-pinning
# cuDNN separately makes the resolver fragile and can easily diverge from the
# stack that vLLM expects.
# AaT runner deps (also pinned in requirements.txt; re-listed here for clarity)
mcp[cli]==1.27.0
openai-agents==0.14.5