---
license: apache-2.0
license_link: https://www.apache.org/licenses/LICENSE-2.0
base_model: HuggingFaceTB/SmolLM2-1.7B-Instruct
tags:
- quantized
- 4-bit
- int4
- awq
language:
- en
library_name: transformers
pipeline_tag: text-generation
---
# SmolLM2-1.7B-Instruct-AWQ-INT4
AWQ INT4 weight-only quantization of [`HuggingFaceTB/SmolLM2-1.7B-Instruct`](https://huggingface.co/HuggingFaceTB/SmolLM2-1.7B-Instruct). The quantized weights take about 1.0 GB on disk, small enough to run on a 4 GB consumer GPU.
| Property | Value |
|---|---|
| Base model | [HuggingFaceTB/SmolLM2-1.7B-Instruct](https://huggingface.co/HuggingFaceTB/SmolLM2-1.7B-Instruct) |
| Quantization | AWQ, INT4 weight-only |
| Approx. on-disk size | ~1.0 GB |
| License | Apache License, Version 2.0 |
| Languages | English |
## Load (vLLM)
```bash
vllm serve drawais/SmolLM2-1.7B-Instruct-AWQ-INT4 \
  --max-model-len 32768 \
  --gpu-memory-utilization 0.94
```
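By default the server listens on port 8000 and exposes an OpenAI-compatible API, so you can smoke-test it with `curl`:
```bash
curl http://localhost:8000/v1/chat/completions \
  -H "Content-Type: application/json" \
  -d '{
        "model": "drawais/SmolLM2-1.7B-Instruct-AWQ-INT4",
        "messages": [{"role": "user", "content": "Hello!"}],
        "max_tokens": 128
      }'
```
For offline (serverless) inference, use the Python API directly: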
```python
from vllm import LLM, SamplingParams

# max_model_len caps the context window and bounds the KV-cache allocation.
llm = LLM(model="drawais/SmolLM2-1.7B-Instruct-AWQ-INT4", max_model_len=32768)
print(llm.generate(["Hello!"], SamplingParams(max_tokens=128))[0].outputs[0].text)
```
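For chat-style prompts, recent vLLM releases can apply the model's chat template for you via `LLM.chat`; a minimal sketch:
```python
from vllm import LLM, SamplingParams

llm = LLM(model="drawais/SmolLM2-1.7B-Instruct-AWQ-INT4", max_model_len=32768)
messages = [{"role": "user", "content": "Summarize INT4 quantization in one sentence."}]
out = llm.chat(messages, SamplingParams(max_tokens=128))
print(out[0].outputs[0].text)
```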
## Footprint
~1.0 GB of weights on disk. Budget VRAM as the quantized weights (~1 GB) plus KV cache, which grows with context length and batch size; a 4 GB consumer GPU leaves comfortable headroom at moderate context lengths.
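For a rough sense of that headroom, here is a minimal sizing sketch that reads the per-token KV-cache cost from the model config (assumptions: an fp16 cache and Llama-style config fields; this is not an exact accounting of vLLM's allocator):
```python
from transformers import AutoConfig

cfg = AutoConfig.from_pretrained("drawais/SmolLM2-1.7B-Instruct-AWQ-INT4")
head_dim = cfg.hidden_size // cfg.num_attention_heads
kv_heads = getattr(cfg, "num_key_value_heads", cfg.num_attention_heads)
# K and V, per layer, per KV head, 2 bytes each (fp16).
per_token = 2 * cfg.num_hidden_layers * kv_heads * head_dim * 2
print(f"{per_token / 1024:.0f} KiB per token; "
      f"{per_token * 32768 / 2**30:.1f} GiB at the full 32k context")
```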
## License & attribution
This artifact is a derivative work of [`HuggingFaceTB/SmolLM2-1.7B-Instruct`](https://huggingface.co/HuggingFaceTB/SmolLM2-1.7B-Instruct),
released by its original authors under the **Apache License, Version 2.0**.
This artifact is distributed under the same license. The full license text is
included in [`LICENSE`](LICENSE), and required attribution is in [`NOTICE`](NOTICE).
License text: https://www.apache.org/licenses/LICENSE-2.0
Source model: https://huggingface.co/HuggingFaceTB/SmolLM2-1.7B-Instruct