W4D committed · Commit 2967fcb · verified · 1 Parent(s): 4557234

Update README.md

Files changed (1):
  1. README.md (+69 -68)

README.md CHANGED
---
license: apache-2.0
datasets:
- vicgalle/alpaca-gpt4
language:
- en
- sr
- bs
- hr
base_model:
- gordicaleksa/YugoGPT
---

# YugoGPT Instruct

**YugoGPT Instruct** is a fine-tuned version of the YugoGPT base model designed specifically for translation tasks involving Serbian, Croatian, and Bosnian. Unlike the base model, this instruct model is optimized for following user instructions, offering improved performance in instruction-based interactions.

---

## Overview

YugoGPT Instruct builds on the YugoGPT base model, fine-tuned to perform better in structured, instruction-driven tasks. It is well suited to translation workflows where accuracy and context preservation are critical.

---

## Features

- **Specialized for BCS Languages**: Tailored for Serbian, Croatian, and Bosnian translation.
- **Instruction Following**: Fine-tuned to adhere closely to user-provided instructions.
- **Flexible Deployment**: Available in a range of quantization formats for different computational environments.

---

## Quantization Formats

A range of quantization formats is available to suit different performance and resource requirements:

| Filename | Quant Type | Description |
|----------|------------|-------------|
| `YugoGPT-7B-Instruct-F16` | F16 | Full F16 precision, maximum quality. |
| `YugoGPT-7B-Instruct-Q8_0` | Q8_0 | Extremely high quality. |
| `YugoGPT-7B-Instruct-Q6_K` | Q6_K | Very high quality, near perfect; recommended. |
| `YugoGPT-7B-Instruct-Q5_K_M` | Q5_K_M | High quality; recommended. |
| `YugoGPT-7B-Instruct-Q5_K_S` | Q5_K_S | High quality with a good size/quality trade-off. |
| `YugoGPT-7B-Instruct-Q4_K_M` | Q4_K_M | Good quality, optimized for speed. |
| `YugoGPT-7B-Instruct-Q4_K_S` | Q4_K_S | Slightly lower quality with greater space savings. |
| `YugoGPT-7B-Instruct-Q3_K_L` | Q3_K_L | Lower quality, good for low-RAM systems. |
| `YugoGPT-7B-Instruct-Q3_K_M` | Q3_K_M | Low quality, optimized for size. |
| `YugoGPT-7B-Instruct-Q3_K_S` | Q3_K_S | Low quality, not recommended. |
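
Once a quant from the table has been downloaded, it can be sanity-checked directly with llama.cpp. The snippet below is a minimal sketch, not an official recipe: it assumes the Q4_K_M file (name and `.gguf` extension assumed) is in the current directory and that a recent llama.cpp build providing the `llama-cli` binary is on your PATH; the prompt is only an illustration and may need to follow the model's instruction template for best results.

```bash
# Minimal sketch: run a downloaded quant with llama.cpp's CLI.
# Assumptions: the Q4_K_M file is named YugoGPT-7B-Instruct-Q4_K_M.gguf and
# llama-cli (from a recent llama.cpp build) is installed.
llama-cli \
  -m ./YugoGPT-7B-Instruct-Q4_K_M.gguf \
  -p "Prevedi na engleski: Dobar dan, kako ste?" \
  -n 128
```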

---

## Usage

To use the model with Ollama, create it from the `modelfile` provided in the repository. Follow Ollama's setup instructions to get started, then replace `{__FILE_LOCATION__}` in the `modelfile` with the file name of the quant you want to use before creating the model with the Ollama CLI, as shown in the sketch below.
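
As a concrete illustration of the step above, this minimal sketch assumes the Q4_K_M quant (file name and `.gguf` extension assumed) has been downloaded next to the `modelfile`, and uses `yugogpt-instruct` as an arbitrary local model name; adjust both to match your setup.

```bash
# Minimal sketch: build and run the model with the Ollama CLI.
# Beforehand, edit `modelfile` so that {__FILE_LOCATION__} is replaced with the
# downloaded quant, e.g. ./YugoGPT-7B-Instruct-Q4_K_M.gguf (file name assumed).
ollama create yugogpt-instruct -f modelfile   # "yugogpt-instruct" is an arbitrary local name
ollama run yugogpt-instruct "Prevedi na engleski: Dobar dan, kako ste?"
```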

---

## Licensing

This model is released under the [Apache 2.0 License](https://www.apache.org/licenses/LICENSE-2.0), the same license as the YugoGPT base repository.

---

## Credits

- **Base Model**: [YugoGPT by Aleksa Gordić](https://huggingface.co/gordicaleksa/YugoGPT)
- **Fine-Tuning Framework**: [Unsloth](https://github.com/unslothai/unsloth)

---