---
library_name: kernels
license: apache-2.0
---
This is the repository card for `kernels-community/paged-attention`, pushed to the Hub. It was built to be used with the [`kernels` library](https://github.com/huggingface/kernels). This card was automatically generated.
## How to use
```python
# make sure `kernels` is installed: `pip install -U kernels`
from kernels import get_kernel

# Download (and cache) the compiled kernel from the Hub
kernel_module = get_kernel("kernels-community/paged-attention")

# Kernel functions are exposed as attributes of the returned module
convert_fp8 = kernel_module.convert_fp8
convert_fp8(...)
```
## Available functions
- `convert_fp8`
- `copy_blocks`
- `ops`
- `paged_attention_v1`
- `paged_attention_v2`
- `reshape_and_cache`
- `reshape_and_cache_flash`
- `swap_blocks`
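The functions above operate on a *paged* KV cache: instead of storing each sequence's keys and values contiguously, the cache is split into fixed-size blocks, and a per-sequence block table maps logical token positions to physical blocks. The sketch below (pure Python, hypothetical shapes and names, not this kernel's actual API) illustrates that lookup; the CUDA kernels perform the equivalent indexing on the GPU.

```python
# Conceptual sketch of paged KV-cache indexing. BLOCK_SIZE and slot_for
# are illustrative names, not part of the kernel's API.

BLOCK_SIZE = 4  # tokens per physical cache block (hypothetical)

def slot_for(block_table, position, block_size=BLOCK_SIZE):
    """Translate a logical token position into a physical cache slot."""
    logical_block = position // block_size   # which logical block the token falls in
    offset = position % block_size           # offset within that block
    physical_block = block_table[logical_block]
    return physical_block * block_size + offset

# A 10-token sequence spread across non-contiguous physical blocks:
# logical block i lives in physical block block_table[i].
block_table = [7, 2, 5]
slots = [slot_for(block_table, pos) for pos in range(10)]
print(slots)  # → [28, 29, 30, 31, 8, 9, 10, 11, 20, 21]
```

Because blocks need not be contiguous, functions such as `copy_blocks` and `swap_blocks` can move or share cache blocks by rewriting block tables rather than copying whole sequences.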
## Benchmarks
A benchmarking script is available for this kernel. Run `kernels benchmark kernels-community/paged-attention`.
### Performance