Fix the readme.
README.md (changed):
@@ -1,5 +1,5 @@
 ---
-base_model: Qwen/Qwen2.5-Coder-
+base_model: Qwen/Qwen2.5-Coder-7B-Instruct
 language:
 - en
 license: apache-2.0
@@ -18,17 +18,17 @@ tags:
 - **Context Length:** 32k tokens
 - Maximum Prompt Length: 32768 tokens
 - Maximum Generation Length: 32768 tokens
-- **Intended Use Cases:** Intended for commercial and non-commercial use. Same as [Qwen/Qwen2.5-Coder-
+- **Intended Use Cases:** Intended for commercial and non-commercial use. Same as [Qwen/Qwen2.5-Coder-7B-Instruct](https://huggingface.co/Qwen/Qwen2.5-Coder-7B-Instruct), this model is intended for assistant-like chat.
 - **Release Date:** 08/04/2025
 - **Version:** v2025.3
-- **License(s):** [Apache 2.0 License](https://huggingface.co/Qwen/Qwen2.5-Coder-
+- **License(s):** [Apache 2.0 License](https://huggingface.co/Qwen/Qwen2.5-Coder-7B-Instruct/blob/main/LICENSE)
 - **Supported Inference Engine(s):** Furiosa LLM
 - **Supported Hardware Compatibility:** FuriosaAI RNGD
 - **Preferred Operating System(s):** Linux


 ## Description:
-This model is the pre-compiled version of the [Qwen/Qwen2.5-Coder-
+This model is the pre-compiled version of the [Qwen/Qwen2.5-Coder-7B-Instruct](https://huggingface.co/Qwen/Qwen2.5-Coder-7B-Instruct),
 which is an auto-regressive language model that uses an optimized transformer architecture.

 ## Usage
@@ -38,5 +38,5 @@ follow the example command below after
 [installing Furiosa-LLM and its prerequisites](https://developer.furiosa.ai/latest/en/getting_started/furiosa_llm.html#installing-furiosa-llm).

 ```sh
-furiosa-llm serve furiosa-ai/Qwen2.5-Coder-
+furiosa-llm serve furiosa-ai/Qwen2.5-Coder-7B-Instruct
 ```
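Once the server from the README's usage example is running, it can be given a quick check over HTTP. The snippet below is a minimal sketch, not part of the model card itself: it assumes `furiosa-llm serve` exposes an OpenAI-compatible `/v1/chat/completions` endpoint and that it listens on `localhost:8000`; adjust the host and port to match how the server was actually launched.

```sh
# Minimal smoke test of the server started by `furiosa-llm serve`.
# Assumption: an OpenAI-compatible /v1/chat/completions endpoint on localhost:8000;
# change the host/port if the server was configured differently.
curl -s http://localhost:8000/v1/chat/completions \
  -H "Content-Type: application/json" \
  -d '{
        "model": "furiosa-ai/Qwen2.5-Coder-7B-Instruct",
        "messages": [
          {"role": "user", "content": "Write a Python function that checks whether a string is a palindrome."}
        ],
        "max_tokens": 256
      }'
```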