Upload README.md
README.md (CHANGED)
@@ -55,6 +55,7 @@ Multiple GPTQ parameter permutations are provided; see Provided Files below for
 <!-- repositories-available start -->
 ## Repositories available
 
+* [AWQ model(s) for GPU inference.](https://huggingface.co/TheBloke/TigerBot-70B-Chat-AWQ)
 * [GPTQ models for GPU inference, with multiple quantisation parameter options.](https://huggingface.co/TheBloke/TigerBot-70B-Chat-GPTQ)
 * [2, 3, 4, 5, 6 and 8-bit GGUF models for CPU+GPU inference](https://huggingface.co/TheBloke/TigerBot-70B-Chat-GGUF)
 * [Tiger Research's original unquantised fp16 model in pytorch format, for GPU inference and for further conversions](https://huggingface.co/TigerResearch/tigerbot-70b-chat)