---
title: README
emoji: 🌍
colorFrom: gray
colorTo: purple
sdk: static
pinned: false
---
----
## 🌍 Join the Pruna AI community!
[GitHub](https://github.com/PrunaAI/pruna)
[Twitter](https://twitter.com/PrunaAI)
[LinkedIn](https://www.linkedin.com/company/pruna-ai)
[Discord](https://discord.com/invite/JFQmtFKCjd)
[Pruna dashboard](https://dashboard.pruna.ai/login?utm_source=huggingface&utm_medium=org_card&utm_campaign=hf_traffic)
----
## 💜 Make AI models faster, cheaper, smaller, greener!
[Pruna AI](https://www.pruna.ai/) makes AI models faster, cheaper, smaller, and greener with the `pruna` package.
- It supports **various models, including CV, NLP, audio, and graph models for predictive and generative AI**.
- It supports **various hardware, including GPU, CPU, and edge devices**.
- It supports **various compression algorithms**, including quantization, pruning, distillation, caching, recovery, compilation, and factorization.
- You can **combine algorithms** to find the optimal configuration and smash/compress your model.
- You can **evaluate reliable quality and efficiency metrics** to compare your base and smashed/compressed models.
**Set it up in minutes and compress your first models in a few lines of code!**
----
## ⏩ How to get started?
You can smash your own models by installing [pruna](https://github.com/PrunaAI/pruna) with pip:
```bash
pip install pruna
```
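Once installed, compressing a model takes only a few lines. As a minimal sketch: `SmashConfig` and `smash` follow the workflow shown in the Pruna docs, but the exact option names (e.g. the `"cacher"` key and `"deepcache"` value used here) may differ between versions, so verify against the current documentation.

```python
# Minimal sketch of a pruna workflow (assumed API; check docs.pruna.ai for the
# current interface). Requires `pip install pruna diffusers` and a CUDA GPU.
import torch
from diffusers import StableDiffusionPipeline
from pruna import SmashConfig, smash

# Load a base model to compress.
pipe = StableDiffusionPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5", torch_dtype=torch.float16
).to("cuda")

# Choose one or more compression algorithms, e.g. caching via DeepCache.
smash_config = SmashConfig()
smash_config["cacher"] = "deepcache"

# Smash (compress) the model, then use it like the original pipeline.
smashed_pipe = smash(model=pipe, smash_config=smash_config)
image = smashed_pipe("a photo of an astronaut riding a horse").images[0]
```

The smashed pipeline keeps the original's call signature, so it can drop into existing inference code unchanged.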
You can start with these simple notebooks to experience the efficiency gains first-hand:
| Use Case | Free Notebooks |
|------------------------------------------------------------|----------------------------------------------------------------|
| **3x Faster Stable Diffusion Models** | ⏩ [Smash for free](https://colab.research.google.com/github/PrunaAI/pruna/blob/main/docs/tutorials/sd_deepcache.ipynb) |
| **Making your LLMs 4x smaller** | ⏩ [Smash for free](https://colab.research.google.com/github/PrunaAI/pruna/blob/main/docs/tutorials/llms.ipynb) |
| **Smash your model with a CPU only** | ⏩ [Smash for free](https://colab.research.google.com/github/PrunaAI/pruna/blob/main/docs/tutorials/cv_cpu.ipynb) |
| **Transcribe 2 hours of audio in less than 2 minutes with Whisper** | ⏩ [Smash for free](https://colab.research.google.com/github/PrunaAI/pruna/blob/main/docs/tutorials/asr_tutorial.ipynb) |
| **100% faster Whisper Transcription** | ⏩ [Smash for free](https://colab.research.google.com/github/PrunaAI/pruna/blob/main/docs/tutorials/asr_tutorial.ipynb) |
| **Run your Flux model without an A100** | ⏩ [Smash for free](https://githubtocolab.com/PrunaAI/pruna/blob/1d68f74c132bd4045f2af55bb1e5c03bf2dde6a9/docs/tutorials/flux_small.ipynb) |
| **x2 smaller Sana in action** | ⏩ [Smash for free](https://colab.research.google.com/github/PrunaAI/pruna/blob/main/docs/tutorials/sana_diffusers_int8.ipynb) |
For more details on installation and free tutorials, check the [Pruna AI documentation](https://docs.pruna.ai/).
----
## ✨ Test our Performance Models
Want to use our optimized models right away? Try them [via our API](https://www.pruna.ai/all-models) for fast, easy access to Pruna-powered inference.