Update README.md
---
license: gemma
---

# Nidum Gemma-3-27B Instruct Uncensored

Welcome to Nidum's Gemma-3-27B Instruct Uncensored, a powerful and versatile model optimized for unrestricted interactions. It is designed for creators, researchers, and AI enthusiasts seeking innovative, boundary-pushing capabilities.

## ✨ Why Nidum Gemma-3-27B Instruct Uncensored?

- **Uncensored Interaction**: Generate content freely without artificial restrictions.
- **High Intelligence**: Exceptional reasoning and comprehensive conversational capabilities.
- **Versatile Applications**: Perfect for creative writing, educational interactions, research projects, virtual assistance, and more.
- **Open and Innovative**: Tailored for users who appreciate limitless creativity.

## Available GGUF Quantized Models

| Quantization | Bits per Weight | Ideal For | Link |
|--------------|-----------------|-----------|------|
| **Q8_0** | 8-bit | Best accuracy and performance | [model-Q8_0.gguf](https://huggingface.co/nidum/nidum-Gemma-3-27B-Instruct-Uncensored-GGUF/resolve/main/model-Q8_0.gguf) |
| **Q6_K** | 6-bit | Strong accuracy and fast inference | [model-Q6_K.gguf](https://huggingface.co/nidum/nidum-Gemma-3-27B-Instruct-Uncensored-GGUF/resolve/main/model-Q6_K.gguf) |
| **Q5_K_M** | 5-bit | Balance between accuracy and speed | [model-Q5_K_M.gguf](https://huggingface.co/nidum/nidum-Gemma-3-27B-Instruct-Uncensored-GGUF/resolve/main/model-Q5_K_M.gguf) |
| **Q3_K_M** | 3-bit | Low memory usage, good performance | [model-Q3_K_M.gguf](https://huggingface.co/nidum/nidum-Gemma-3-27B-Instruct-Uncensored-GGUF/resolve/main/model-Q3_K_M.gguf) |
| **TQ2_0** | 2-bit (Tiny) | Maximum speed and minimal resources | [model-TQ2_0.gguf](https://huggingface.co/nidum/nidum-Gemma-3-27B-Instruct-Uncensored-GGUF/resolve/main/model-TQ2_0.gguf) |
| **TQ1_0** | 1-bit (Tiny) | Minimal footprint and fastest inference | [model-TQ1_0.gguf](https://huggingface.co/nidum/nidum-Gemma-3-27B-Instruct-Uncensored-GGUF/resolve/main/model-TQ1_0.gguf) |

## 🎯 Recommended Quantization

- **Best accuracy**: Use `Q8_0` or `Q6_K`.
- **Balanced performance**: Use `Q5_K_M`.
- **Small footprint (mobile/edge)**: Choose `Q3_K_M`, `TQ2_0`, or `TQ1_0`.
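
The bits-per-weight column translates directly into download size. As a rough sanity check (a sketch only: it uses the nominal bit widths from the table and ignores GGUF metadata and the tensors that quantizers keep at higher precision):

```python
def approx_gguf_size_gb(n_params: float, bits_per_weight: float) -> float:
    """Rough GGUF file size in GB: parameters x bits, over 8 bits/byte and 1e9."""
    return n_params * bits_per_weight / 8 / 1e9

# Estimates for a 27B-parameter model at the table's nominal bit widths.
for name, bpw in [("Q8_0", 8), ("Q6_K", 6), ("Q5_K_M", 5), ("Q3_K_M", 3)]:
    print(f"{name}: ~{approx_gguf_size_gb(27e9, bpw):.0f} GB")
```

Real files run somewhat larger, since K-quants mix precisions per tensor; treat these numbers as lower bounds when sizing storage and RAM.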

---

## Example Usage (Original Model)

```python
from transformers import AutoModelForCausalLM, AutoTokenizer
import torch

model_name = "nidum/nidum-Gemma-3-27B-Instruct-Uncensored"

tokenizer = AutoTokenizer.from_pretrained(model_name)
# A 27B model is large; loading in bfloat16 with device_map="auto"
# (requires the `accelerate` package) spreads it across available devices.
model = AutoModelForCausalLM.from_pretrained(
    model_name, torch_dtype=torch.bfloat16, device_map="auto"
)

prompt = "Tell me a futuristic story about space travel."
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
output = model.generate(**inputs, max_new_tokens=200)
print(tokenizer.decode(output[0], skip_special_tokens=True))
```
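
The snippet above sends a bare prompt. Instruct-tuned Gemma checkpoints are trained on a turn-based chat format, so wrapping the prompt usually improves responses. The tokenizer's `apply_chat_template` is the authoritative way to do this; the markers below are the standard Gemma ones and are an assumption for this particular fine-tune:

```python
def format_gemma_turn(user_message: str) -> str:
    """Wrap a single user message in Gemma's chat-turn markers (a sketch;
    prefer tokenizer.apply_chat_template for the exact template)."""
    return (
        "<start_of_turn>user\n"
        f"{user_message}<end_of_turn>\n"
        "<start_of_turn>model\n"
    )

prompt = format_gemma_turn("Tell me a futuristic story about space travel.")
```

Pass the formatted `prompt` to the tokenizer exactly as in the example above; the model then completes the `model` turn.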

## Your AI, Your Way

Unlock your creativity and innovation potential with **Nidum Gemma-3-27B Instruct Uncensored**. Experience the freedom to create, explore, and innovate without limits.