Unable to determine compatible library versions
Hi BharatGen Team,
I am trying to run inference using patram-7b-instruct locally, but I am running into multiple errors due to library version incompatibilities. The model card does not specify exact pinned versions for the required libraries, which makes it very difficult to set up a working environment.
Errors encountered so far:
1. `ImportError: cannot import name 'TextKwargs' from 'transformers.processing_utils'`
→ Caused by older versions of `transformers` that do not have `TextKwargs`
2. `AttributeError: 'NoneType' object has no attribute 'size'` at `modeling_patram.py:2320`
→ `past_key_values` is `None` on the first forward pass, caused by newer `transformers` versions (≥ 4.47) that changed how the KV cache is initialized
3. `AssertionError` at `modeling_patram.py:2669` on `assert generation_config.use_cache`
→ The model hardcodes `use_cache=True`, but newer `transformers` versions conflict with this
My setup:
- GPU: 22 GB VRAM
- Python: 3.10
- OS: Linux
Request:
Could you please share a complete requirements.txt with exact pinned versions for all dependencies that are confirmed to work with this model? Specifically:
- transformers==?
- torch==?
- accelerate==?
- pydantic==?
- Any other pinned versions
This will help the community avoid spending hours debugging environment issues and focus on actually using the model.
Thank you!
Hi, thanks for sharing the detailed error trace and environment details.
For patram-7b-instruct, the dependency versions we currently recommend for local inference are:
torch==2.7.0
torchvision==0.22.0
transformers==4.50.3
accelerate==0.26.0
einops==0.8.1
The errors you ran into stem from version mismatches in the transformers stack, in particular changes to cache handling and generation behavior between releases.
Using a clean environment with the pinned versions above should resolve these issues.
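For reference, a minimal setup sketch (the environment name `patram-env` is just an example; assumes Python 3.10 on Linux, as in your setup):

```shell
# Create and activate a fresh virtual environment
python3.10 -m venv patram-env
source patram-env/bin/activate

# Install the recommended pinned versions
pip install torch==2.7.0 torchvision==0.22.0 \
    transformers==4.50.3 accelerate==0.26.0 einops==0.8.1
```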
We’ve also added a requirements.txt file to the repository files section so the recommended setup is easier to reproduce.
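If you want to double-check that an existing environment matches those pins before running inference, a small script like the following can help. This is just a convenience sketch — the `check_pins` helper is not part of the model repo:

```python
# Sanity-check sketch: compare installed package versions against the
# recommended pins and report anything missing or mismatched.
import importlib.metadata as md

def check_pins(pins):
    """Return {package: (installed_version_or_None, wanted_version)} for
    every pin that is missing or does not match."""
    mismatches = {}
    for pkg, want in pins.items():
        try:
            have = md.version(pkg)
        except md.PackageNotFoundError:
            have = None  # package not installed at all
        if have != want:
            mismatches[pkg] = (have, want)
    return mismatches

PINS = {
    "torch": "2.7.0",
    "torchvision": "0.22.0",
    "transformers": "4.50.3",
    "accelerate": "0.26.0",
    "einops": "0.8.1",
}

if __name__ == "__main__":
    bad = check_pins(PINS)
    if bad:
        for pkg, (have, want) in bad.items():
            print(f"{pkg}: installed {have}, want {want}")
    else:
        print("environment matches the recommended pins")
```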
Thanks for reporting this.