Update README.md
README.md CHANGED
@@ -25,22 +25,17 @@ pinned: false
 - You can **evaluate reliable quality and efficiency metrics** of your base vs smashed/compressed models.
 You can set it up in minutes and compress your first models in a few lines of code!

-:fire:
-:boom:
-:high_brightness:
-:sunny:
-
 | Use Case | Free Notebooks |
 |------------------------------------------------------------|----------------------------------------------------------------|
-| 3x Faster Stable Diffusion Models |
-| Turbocharge Stable Diffusion Video Generation |
-| Making your LLMs 4x smaller |
-| Blazingly fast Computer Vision Models |
-| Smash your model with a CPU only |
-| Transcribe 2 hours of audio in less than 2 minutes with Whisper |
-| 100% faster Whisper Transcription |
-| Flux generation in a heartbeat, literally |
-| Run your Flux model without an A100 |
+| 3x Faster Stable Diffusion Models | ▶️ [Smash for free](https://colab.research.google.com/drive/1BZm6NtCsF2mBV4UYlRlqpTIpTmQgR0iQ?usp=sharing) |
+| Turbocharge Stable Diffusion Video Generation | ▶️ [Smash for free](https://colab.research.google.com/drive/1m1wvGdXi-qND-2ys0zqAaMFZ9DbMd5jW?usp=sharing) |
+| Making your LLMs 4x smaller | ▶️ [Smash for free](https://colab.research.google.com/drive/1jQgwhmoPz80qRf5NdRJcY_pAr7Oj5Ftv?usp=sharing) |
+| Blazingly fast Computer Vision Models | ▶️ [Smash for free](https://colab.research.google.com/drive/1GkzxTQW-2yCKXc8omE6Sa4SxiETMi8yC?usp=sharing) |
+| Smash your model with a CPU only | ▶️ [Smash for free](https://colab.research.google.com/drive/19iLNVSgbx_IoCgduXPhqKq7rCoxegnZO?usp=sharing) |
+| Transcribe 2 hours of audio in less than 2 minutes with Whisper | ▶️ [Smash for free](https://colab.research.google.com/drive/1dc6fb8_GD8eshznthBSpGpRu4WPW7xuZ?usp=sharing) |
+| 100% faster Whisper Transcription | ▶️ [Smash for free](https://colab.research.google.com/drive/1kCJ4-xmo7y8VS6smzaV0207A5rONHPXu?usp=sharing) |
+| Flux generation in a heartbeat, literally | ▶️ [Smash for free](https://colab.research.google.com/drive/18_iG0UXhD7OQR_CxSSsKFC8TLDsRw_9m?usp=sharing) |
+| Run your Flux model without an A100 | ▶️ [Smash for free](https://colab.research.google.com/drive/1i1iSITNgiOpschV-Nu5mfX-effwYV9sn?usp=sharing) |


 You can smash your own models by installing pruna with:
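The closing line of the hunk points at pruna's install-and-smash workflow. As rough orientation, here is a minimal sketch of that flow, assuming pruna is installable from PyPI as `pruna` and exposes the `SmashConfig`/`smash` interface described in its documentation; the model checkpoint and the `quantizer` choice are illustrative placeholders, not taken from this README.

```python
# Minimal sketch of the install-and-smash flow the README refers to.
# Assumptions: pruna is installed from PyPI ("pip install pruna") and exposes the
# SmashConfig/smash interface from its documentation; the checkpoint and the
# "hqq" quantizer below are illustrative stand-ins, not the README's own example.
from transformers import AutoModelForCausalLM
from pruna import SmashConfig, smash

# Load any Hugging Face model as the base model to compress.
base_model = AutoModelForCausalLM.from_pretrained("facebook/opt-125m")

# Select compression algorithms through the config; here a single quantizer.
smash_config = SmashConfig()
smash_config["quantizer"] = "hqq"

# smash() returns a compressed ("smashed") model intended to be used in place
# of the base model at inference time.
smashed_model = smash(model=base_model, smash_config=smash_config)
```

The smashed model can then be compared against the base model for speed and output quality, which is what the "evaluate reliable quality and efficiency metrics" bullet above refers to.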