---
title: README
emoji: π
colorFrom: gray
colorTo: purple
sdk: static
pinned: false
---
<!-- header start -->
<!-- 200823 -->
<div style="width: auto; margin-left: auto; margin-right: auto">
<a href="https://www.pruna.ai/" target="_blank" rel="noopener noreferrer">
<img src="https://imgur.com/rVAgqMY.png" alt="PrunaAI" style="width: 100%; min-width: 400px; display: block; margin: auto;">
</a>
</div>
<!-- header end -->
----
# Join the Pruna AI community!
[Twitter](https://twitter.com/PrunaAI)
[GitHub](https://github.com/PrunaAI)
[LinkedIn](https://www.linkedin.com/company/93832878)
[Discord](https://discord.com/invite/rskEr4BZJx)
[Reddit](https://www.reddit.com/r/PrunaAI/)
(The open-source launch of [Pruna AI](https://github.com/PrunaAI) is on March 20th, 2025: [Munich event](https://lu.ma/xlmd455g) & [Paris event](https://lu.ma/xsm2j7h9) 🇩🇪🇫🇷🇪🇺)
----
# Simply make AI models faster, cheaper, smaller, greener!
[Pruna AI](https://www.pruna.ai/) makes AI models faster, cheaper, smaller, greener with the `pruna` package.
- It supports **various models including CV, NLP, audio, graphs for predictive and generative AI**.
- It supports **various hardware including GPU, CPU, Edge**.
- It supports **various compression algorithms including quantization, pruning, distillation, caching, recovery, compilation** that can be **combined**.
- You can either **play on your own** with smash/compression configurations or **let the smashing/compressing agent** find the optimal configuration **[Pro]**.
- You can **evaluate reliable quality and efficiency metrics** of your base vs smashed/compressed models.
You can set it up in minutes and compress your first models in a few lines of code!
----
# ⏩ How to get started?
You can smash your own models by installing pruna with:
```bash
pip install pruna
```
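Smashing a model then takes only a few lines. The sketch below is a minimal illustration based on pruna's documented `smash`/`SmashConfig` interface; the model checkpoint, algorithm names ("deepcache", "stable_fast"), and their availability are assumptions that depend on your installed pruna version and hardware, so check the documentation for what your setup supports:

```python
# Minimal sketch, assuming pruna's `smash`/`SmashConfig` API and a
# diffusers Stable Diffusion pipeline on a CUDA GPU. The algorithm
# names below are illustrative; swap in whatever your pruna version supports.
import torch
from diffusers import StableDiffusionPipeline
from pruna import SmashConfig, smash

# Load the base model you want to compress.
base_model = StableDiffusionPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5", torch_dtype=torch.float16
).to("cuda")

# Pick the compression algorithms to combine.
smash_config = SmashConfig()
smash_config["cacher"] = "deepcache"      # reuse intermediate outputs across steps
smash_config["compiler"] = "stable_fast"  # compile the model for faster inference

# Smash the model, then use it like the original pipeline.
smashed_model = smash(model=base_model, smash_config=smash_config)
image = smashed_model("a photo of an astronaut riding a horse").images[0]
```

The same pattern applies to other modalities: load a model, describe the compression algorithms in a `SmashConfig`, and call `smash`.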
You can then try out simple notebooks to experience the efficiency gains first-hand:
| Use Case | Free Notebooks |
|------------------------------------------------------------|----------------------------------------------------------------|
| **3x Faster Stable Diffusion Models** | ⏩ [Smash for free](https://colab.research.google.com/drive/1BZm6NtCsF2mBV4UYlRlqpTIpTmQgR0iQ?usp=sharing) |
| **Turbocharge Stable Diffusion Video Generation** | ⏩ [Smash for free](https://colab.research.google.com/drive/1m1wvGdXi-qND-2ys0zqAaMFZ9DbMd5jW?usp=sharing) |
| **Making your LLMs 4x smaller** | ⏩ [Smash for free](https://colab.research.google.com/drive/1jQgwhmoPz80qRf5NdRJcY_pAr7Oj5Ftv?usp=sharing) |
| **Blazingly fast Computer Vision Models** | ⏩ [Smash for free](https://colab.research.google.com/drive/1GkzxTQW-2yCKXc8omE6Sa4SxiETMi8yC?usp=sharing) |
| **Smash your model with a CPU only** | ⏩ [Smash for free](https://colab.research.google.com/drive/19iLNVSgbx_IoCgduXPhqKq7rCoxegnZO?usp=sharing) |
| **Transcribe 2 hours of audio in less than 2 minutes with Whisper** | ⏩ [Smash for free](https://colab.research.google.com/drive/1dc6fb8_GD8eshznthBSpGpRu4WPW7xuZ?usp=sharing) |
| **100% faster Whisper Transcription** | ⏩ [Smash for free](https://colab.research.google.com/drive/1kCJ4-xmo7y8VS6smzaV0207A5rONHPXu?usp=sharing) |
| **Flux generation in a heartbeat, literally** | ⏩ [Smash for free](https://colab.research.google.com/drive/18_iG0UXhD7OQR_CxSSsKFC8TLDsRw_9m?usp=sharing) |
| **Run your Flux model without an A100** | ⏩ [Smash for free](https://colab.research.google.com/drive/1i1iSITNgiOpschV-Nu5mfX-effwYV9sn?usp=sharing) |
For more details about installation and tutorials, you can check the [Pruna AI documentation](https://docs.pruna.ai/).
----