Update README.md
README.md
> [!TIP]
> The `--mamba-cache-dtype float32` and `--mamba-ssm-cache-dtype float32` flags are important for accurate long-context generation. See the [Inference guide](https://github.com/awslabs/hybrid-model-factory/blob/main/docs/Inference.md#recommended-flags-for-hybrid-models) for details on all recommended flags.
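For reference, a launch line using these flags might look like the following sketch (`<model-id>` is a placeholder for the actual checkpoint being served, not a value from this README):

```shell
# Serve with float32 Mamba caches for accurate long-context generation.
# <model-id> is a placeholder; substitute the checkpoint you are serving.
vllm serve <model-id> \
  --mamba-cache-dtype float32 \
  --mamba-ssm-cache-dtype float32
```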
### With Hugging Face Transformers
See the [Inference guide](https://github.com/awslabs/hybrid-model-factory/blob/main/docs/Inference.md#huggingface-transformers-inference) for details on when we recommend the Hugging Face Transformers implementation as opposed to the highly optimized vLLM one.
```python
from transformers import AutoModelForCausalLM, AutoTokenizer
# ...
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```
### Training-Free Context Extension
This model supports training-free context extension to 2-4× its native context via an extension of [PICASO cache composition](https://arxiv.org/abs/2502.17605) to hybrid models. See the [State Composition guide](https://github.com/awslabs/hybrid-model-factory/blob/main/docs/StateComposition.md) for usage. Note that this is currently supported in Hugging Face Transformers only.
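The composition idea can be illustrated with a toy linear SSM: process context chunks independently, then combine their final recurrent states into one summary state. The simple averaging below is a deliberately crude, hypothetical stand-in for PICASO's permutation-invariant composition, not the method from the guide above:

```python
import numpy as np

def ssm_scan(A, B, x, h0):
    # Run a linear time-invariant SSM, h[t] = A @ h[t-1] + B @ x[t], over one chunk.
    h = h0
    for t in range(x.shape[0]):
        h = A @ h + B @ x[t]
    return h

rng = np.random.default_rng(0)
d_state, d_in = 4, 2
A = np.eye(d_state) * 0.9              # stable state-transition matrix
B = rng.normal(size=(d_state, d_in))   # input projection

# Two context chunks processed independently, each from a zero initial state.
chunk1 = rng.normal(size=(8, d_in))
chunk2 = rng.normal(size=(8, d_in))
h1 = ssm_scan(A, B, chunk1, np.zeros(d_state))
h2 = ssm_scan(A, B, chunk2, np.zeros(d_state))

# Naive composition: average the per-chunk final states into a single state,
# then continue generation from it as if both contexts had been seen.
h_composed = (h1 + h2) / 2
print(h_composed.shape)
```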
## Training data