Unable to determine compatible library versions

#6
by AlfredManan - opened

Hi BharatGen Team,

I am trying to run inference using patram-7b-instruct locally, but I am running into multiple errors due to library version incompatibilities. The model card does not specify exact pinned versions for the required libraries, which makes it very difficult to set up a working environment.

Errors encountered so far:

  1. ImportError: cannot import name 'TextKwargs' from 'transformers.processing_utils'
    → Caused by older versions of transformers that do not have TextKwargs

  2. AttributeError: 'NoneType' object has no attribute 'size' at modeling_patram.py:2320
    → past_key_values is None on the first forward pass, caused by newer transformers versions (≥ 4.47) that changed how KV cache is initialized

  3. AssertionError at modeling_patram.py:2669 — assert generation_config.use_cache
    → The model's generation code asserts use_cache=True, but newer transformers versions can reset or disable the cache during generation, which trips this assertion
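For anyone hitting the same mix of errors, a small sketch like this can capture exactly which versions are in play before filing a report (the package list below is just the stack relevant here, not exhaustive):

```python
import importlib.metadata as md

def report_versions(packages):
    """Map each package name to its installed version, or None if absent."""
    versions = {}
    for pkg in packages:
        try:
            versions[pkg] = md.version(pkg)
        except md.PackageNotFoundError:
            versions[pkg] = None
    return versions

# Print the versions of the packages involved in these errors
for pkg, ver in report_versions(["transformers", "torch", "accelerate", "pydantic"]).items():
    print(f"{pkg}=={ver}")
```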

My setup:

  • GPU: 22 GB VRAM
  • Python: 3.10
  • OS: Linux

Request:
Could you please share a complete requirements.txt with exact pinned versions for all dependencies that are confirmed to work with this model? Specifically:

  • transformers==?
  • torch==?
  • accelerate==?
  • pydantic==?
  • Any other pinned versions

This will help the community avoid spending hours debugging environment issues and focus on actually using the model.

Thank you!

Hi, thanks for sharing the detailed error trace and environment details.

For patram-7b-instruct, the dependency versions we currently recommend for local inference are:

torch==2.7.0
torchvision==0.22.0
transformers==4.50.3
accelerate==0.26.0
einops==0.8.1

The errors you ran into are due to version mismatches across the transformers stack, especially around how the KV cache and generation config are handled in different releases.

Using a clean environment with the pinned versions above should resolve these issues.
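As a quick sanity check after installing, a sketch like the following can confirm the environment matches the recommendation (the PINS dict simply mirrors the versions listed above):

```python
import importlib.metadata as md

# Recommended pins from the reply above
PINS = {
    "torch": "2.7.0",
    "torchvision": "0.22.0",
    "transformers": "4.50.3",
    "accelerate": "0.26.0",
    "einops": "0.8.1",
}

def mismatches(pins):
    """Return {package: (installed_or_None, expected)} for every pin not satisfied."""
    bad = {}
    for pkg, want in pins.items():
        try:
            have = md.version(pkg)
        except md.PackageNotFoundError:
            have = None
        if have != want:
            bad[pkg] = (have, want)
    return bad

for pkg, (have, want) in mismatches(PINS).items():
    print(f"{pkg}: installed {have}, expected {want}")
```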

We’ve also added a requirements.txt file to the repository files section so the recommended setup is easier to reproduce.

Thanks for reporting this.
