Update README.md
README.md CHANGED
@@ -22,7 +22,15 @@ This model is `not yet` available for inference at JustLab.ai
Model tested: Working perfectly even with 10 steps.
Contact: [JustLab.ai](https://justlab.ai) for commercial support

-
+
+### Performance on RTX 4090
+- 20 steps: about 78 seconds.
+- 10 steps: about 40 seconds.
+
+Interestingly, I was under the impression that Qwen-VL could not be quantized, which is why several projects use the full 15 GB model.
+Here I have quantized it too, and it seems to be working fine.
+
+Sample script (min 20 GB VRAM):

```python
import os
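# --- Hedged sketch, separate from this repo's sample script (only the first
# --- line of which appears in the diff above). It illustrates the claim in
# --- the README: the Qwen2.5-VL text encoder can be quantized along with the
# --- diffusion transformer instead of loading the full ~15 GB bf16 weights.
# --- It uses diffusers' pipeline-level bitsandbytes quantization (recent
# --- diffusers and bitsandbytes installs required); the "Qwen/Qwen-Image"
# --- model ID and the prompt are illustrative assumptions, not taken from
# --- this repo, and this may not be how the author quantized their upload.
import torch
from diffusers import DiffusionPipeline
from diffusers.quantizers import PipelineQuantizationConfig

quant_config = PipelineQuantizationConfig(
    quant_backend="bitsandbytes_4bit",
    quant_kwargs={
        "load_in_4bit": True,
        "bnb_4bit_quant_type": "nf4",
        "bnb_4bit_compute_dtype": torch.bfloat16,
    },
    # Quantize the text encoder too, not just the diffusion transformer.
    components_to_quantize=["transformer", "text_encoder"],
)

pipe = DiffusionPipeline.from_pretrained(
    "Qwen/Qwen-Image",
    quantization_config=quant_config,
    torch_dtype=torch.bfloat16,
).to("cuda")

image = pipe(
    "a corgi reading a newspaper in a park",
    num_inference_steps=10,  # the README reports good results even at 10 steps
).images[0]
image.save("qwen_image_10_steps.png")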