add model weights

README.md

---
library_name: mlc-llm
base_model: meta-llama/Meta-Llama-3.1-8B-Instruct
tags:
- mlc-llm
- web-llm
- llama-3.1
- instruct
- q4f16_1
---

# ReelevateLM-q4f16

This is the [Meta Llama 3.1 8B Instruct](https://huggingface.co/meta-llama/Meta-Llama-3.1-8B-Instruct) model fine-tuned with LoRA and converted to MLC format `q4f16_1`.

The model can be used in:

- [MLC LLM](https://github.com/mlc-ai/mlc-llm)
- [WebLLM](https://github.com/mlc-ai/web-llm)

## Example Usage

Before running any examples, install MLC LLM by following the [installation documentation](https://llm.mlc.ai/docs/).

### Chat (CLI)

```bash
mlc_llm chat HF://pr0methium/ReelevateLM-q4f16_1
```
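
With the `HF://` scheme, the quantized weights are downloaded from Hugging Face and cached locally on first use, so the first launch takes noticeably longer than later ones.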

### REST Server

```bash
mlc_llm serve HF://pr0methium/ReelevateLM-q4f16_1
```
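
The server exposes an OpenAI-compatible HTTP API. Below is a minimal client sketch, assuming the server is left on its default address of `127.0.0.1:8000` and that the `requests` package is installed; the `model` field should match the string passed to `mlc_llm serve`:

```python
import requests

# Send a chat completion request to the OpenAI-compatible endpoint
# exposed by `mlc_llm serve` (default host and port assumed).
payload = {
    "model": "HF://pr0methium/ReelevateLM-q4f16_1",
    "messages": [{"role": "user", "content": "Hello!"}],
}
resp = requests.post("http://127.0.0.1:8000/v1/chat/completions", json=payload)
resp.raise_for_status()
print(resp.json()["choices"][0]["message"]["content"])
```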

### Python API

```python
from mlc_llm import MLCEngine

# Create the engine from this repo's quantized weights.
model = "HF://pr0methium/ReelevateLM-q4f16_1"
engine = MLCEngine(model)

# Run a streaming chat completion through the OpenAI-compatible API.
for response in engine.chat.completions.create(
    messages=[{"role": "user", "content": "What is the meaning of life?"}],
    model=model,
    stream=True,
):
    for choice in response.choices:
        print(choice.delta.content, end="", flush=True)
print("\n")

engine.terminate()
```
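
`MLCEngine` mirrors the OpenAI Python client, so a non-streaming call also works. A minimal sketch, reusing the `engine` and `model` from the block above:

```python
# Non-streaming variant: leave out `stream=True` and read the full
# reply from the first choice of the returned completion.
response = engine.chat.completions.create(
    messages=[{"role": "user", "content": "Summarize this model in one sentence."}],
    model=model,
)
print(response.choices[0].message.content)
```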

## Documentation

For more information on the MLC LLM project, please visit the [docs](https://llm.mlc.ai/docs/) and the [GitHub repo](https://github.com/mlc-ai/mlc-llm).