rogkesavan committed on
Commit 85df80e · verified · 1 Parent(s): 9444c6c

Update README.md

Files changed (1)
  1. README.md +29 -37
README.md CHANGED
@@ -1,61 +1,53 @@
  ---
  license: gemma
  ---
- # 🚀 Nidum Gemma-3-4B IT Uncensored

- Welcome to Nidum's Gemma-3-4B IT Uncensored, your gateway to an open and unrestricted AI experience. This powerful model enables users to explore and innovate without boundaries.

- ## ✨ Why Choose Nidum Gemma-3-4B IT Uncensored?

- - **Unrestricted AI Interaction**: Freedom to discuss, explore, and innovate without content limitations.
- - **Efficient and Versatile**: Optimized performance suitable for various hardware configurations.
- - **Diverse Applications**: Perfect for creative projects, conversational AI, educational tools, and entertainment.

- ## 📥 Available Quantized Versions (GGUF)

- | Quantization | Description | Bits per Weight | Download |
- |--------------|-------------------------------------------|-----------------|---------------|
- | **Q8_0** | Best accuracy and performance | 8-bit | [model-Q8_0.gguf](https://huggingface.co/nidum/Nidum-Gemma-3-4B-IT-Uncensored-GGUF/resolve/main/model-Q8_0.gguf) |
- | **Q6_K** | Balance between speed and quality | 6-bit | [model-Q6_K.gguf](https://huggingface.co/nidum/Nidum-Gemma-3-4B-IT-Uncensored-GGUF/resolve/main/model-Q6_K.gguf) |
- | **Q5_K_M** | Good accuracy with lower memory usage | 5-bit | [model-Q5_K_M.gguf](https://huggingface.co/nidum/Nidum-Gemma-3-4B-IT-Uncensored-GGUF/resolve/main/model-Q5_K_M.gguf) |
- | **Q3_K_M** | Smaller footprint, good for limited resources | 3-bit | [model-Q3_K_M.gguf](https://huggingface.co/nidum/Nidum-Gemma-3-4B-IT-Uncensored-GGUF/resolve/main/model-Q3_K_M.gguf) |
- | **TQ2_0** | Very fast inference, minimal memory usage | 2-bit | [model-TQ2_0.gguf](https://huggingface.co/nidum/Nidum-Gemma-3-4B-IT-Uncensored-GGUF/resolve/main/model-TQ2_0.gguf) |
- | **TQ1_0** | Minimal memory usage, fastest inference | 2-bit | [model-TQ1_0.gguf](https://huggingface.co/nidum/Nidum-Gemma-3-4B-IT-Uncensored-GGUF/resolve/main/model-TQ1_0.gguf) |

- ---
-
- ## 🚀 Recommended Applications
-
- - **Creative Writing & Arts**: Generate stories, scripts, poetry, and explore creative ideas.
- - **Virtual Assistants**: Provide natural and unrestricted conversational experiences.
- - **Educational Resources**: Facilitate engaging, interactive learning environments.
- - **Entertainment & Gaming**: Create immersive narratives and interactive gameplay experiences.

  ---

- ## 🎉 Example Usage

  ```python
  from transformers import AutoModelForCausalLM, AutoTokenizer
  import torch

- model_name = "nidum/nidum-gemma-3-4b-it-uncensored"

  tokenizer = AutoTokenizer.from_pretrained(model_name)
- model = AutoModelForCausalLM.from_pretrained(model_name, torch_dtype=torch.float16)
-
- prompt = "Tell me an imaginative story about a hidden city."
- input_ids = tokenizer(prompt, return_tensors="pt").input_ids

- generated_ids = model.generate(input_ids, max_length=200)
- output = tokenizer.decode(generated_ids[0], skip_special_tokens=True)
-
- print(output)
  ```

- ---
-
- ## 🌟 Unlock Limitless Creativity

- Experience the freedom to innovate and explore without boundaries using **Nidum Gemma-3-4B IT Uncensored**.
 
 
  ---
  license: gemma
  ---
+ # 🚀 Nidum Gemma-3-27B Instruct Uncensored

+ Welcome to Nidum's Gemma-3-27B Instruct Uncensored, a powerful and versatile model optimized for unrestricted interactions. Designed for creators, researchers, and AI enthusiasts seeking innovative and boundary-pushing capabilities.

+ ## ✨ Why Nidum Gemma-3-27B Instruct Uncensored?

+ - **Uncensored Interaction**: Generate content freely without artificial restrictions.
+ - **High Intelligence**: Exceptional reasoning and comprehensive conversational capabilities.
+ - **Versatile Applications**: Perfect for creative writing, educational interactions, research projects, virtual assistance, and more.
+ - **Open and Innovative**: Tailored for users who appreciate limitless creativity.

+ ## 🚀 Available GGUF Quantized Models

+ | Quantization | Bits per Weight | Ideal For | Link |
+ |--------------|-----------------|-----------|------|
+ | **Q8_0** | 8-bit | Best accuracy and performance | [model-Q8_0.gguf](https://huggingface.co/nidum/nidum-Gemma-3-27B-Instruct-Uncensored-GGUF/resolve/main/model-Q8_0.gguf) |
+ | **Q6_K** | 6-bit | Strong accuracy and fast inference | [model-Q6_K.gguf](https://huggingface.co/nidum/nidum-Gemma-3-27B-Instruct-Uncensored-GGUF/resolve/main/model-Q6_K.gguf) |
+ | **Q5_K_M** | 5-bit | Balance between accuracy and speed | [model-Q5_K_M.gguf](https://huggingface.co/nidum/nidum-Gemma-3-27B-Instruct-Uncensored-GGUF/resolve/main/model-Q5_K_M.gguf) |
+ | **Q3_K_M** | 3-bit | Low memory usage, good performance | [model-Q3_K_M.gguf](https://huggingface.co/nidum/nidum-Gemma-3-27B-Instruct-Uncensored-GGUF/resolve/main/model-Q3_K_M.gguf) |
+ | **TQ2_0** | 2-bit (Tiny) | Maximum speed and minimal resources | [model-TQ2_0.gguf](https://huggingface.co/nidum/nidum-Gemma-3-27B-Instruct-Uncensored-GGUF/resolve/main/model-TQ2_0.gguf) |
+ | **TQ1_0** | 1-bit (Tiny) | Minimal footprint and fastest inference | [model-TQ1_0.gguf](https://huggingface.co/nidum/nidum-Gemma-3-27B-Instruct-Uncensored-GGUF/resolve/main/model-TQ1_0.gguf) |

+ ## 🎯 Recommended Quantization
+ - **Best accuracy**: Use `Q8_0` or `Q6_K`.
+ - **Balanced performance**: Use `Q5_K_M`.
+ - **Small footprint (mobile/edge)**: Choose `Q3_K_M`, `TQ2_0`, or `TQ1_0`.
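As a rough sanity check when picking a quantization, file size (and the minimum memory to load it) scales with bits per weight. A back-of-the-envelope sketch, assuming roughly 27B parameters (from the model name) and the nominal bit widths in the table; real GGUF files run somewhat larger because of per-block scales:

```python
def approx_gguf_size_gb(n_params: float, bits_per_weight: float) -> float:
    """Rough GGUF file size in GB: parameters * bits / 8 bytes,
    ignoring quantization block overhead."""
    return n_params * bits_per_weight / 8 / 1e9

# Nominal sizes for a ~27B-parameter model at the table's bit widths
for name, bpw in [("Q8_0", 8), ("Q6_K", 6), ("Q5_K_M", 5), ("Q3_K_M", 3), ("TQ2_0", 2)]:
    print(f"{name}: ~{approx_gguf_size_gb(27e9, bpw):.1f} GB")
```

For example, `Q5_K_M` comes out around 17 GB before overhead, which is why it sits in the balanced slot above.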
 
 
 
 
  ---

+ ## 🚀 Example Usage (Original Model)

  ```python
  from transformers import AutoModelForCausalLM, AutoTokenizer
  import torch

+ model_name = "nidum/nidum-Gemma-3-27B-Instruct-Uncensored"

  tokenizer = AutoTokenizer.from_pretrained(model_name)
+ model = AutoModelForCausalLM.from_pretrained(model_name, torch_dtype=torch.bfloat16)  # half precision; fp32 would need ~100 GB

+ prompt = "Tell me a futuristic story about space travel."
+ inputs = tokenizer(prompt, return_tensors="pt")
+ output = model.generate(**inputs, max_length=200)
+ print(tokenizer.decode(output[0], skip_special_tokens=True))
  ```
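The example above loads the original full-precision weights through transformers; the GGUF files in the table are meant for GGML-based runtimes instead. A minimal sketch of that path, assuming the `huggingface_hub` and `llama-cpp-python` packages are installed (both assumptions, not shipped with this repo):

```python
from huggingface_hub import hf_hub_download
from llama_cpp import Llama

# Download a single quantized file rather than the whole repo
model_path = hf_hub_download(
    repo_id="nidum/nidum-Gemma-3-27B-Instruct-Uncensored-GGUF",
    filename="model-Q5_K_M.gguf",
)

# Load the GGUF file and generate up to ~200 tokens
llm = Llama(model_path=model_path, n_ctx=2048)
result = llm("Tell me a futuristic story about space travel.", max_tokens=200)
print(result["choices"][0]["text"])
```

The same file also works with any other llama.cpp-compatible frontend; pick a smaller quantization from the table if the download or RAM budget is tight.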

+ ## 🚀 Your AI, Your Way

+ Unlock your creativity and innovation potential with **Nidum Gemma-3-27B Instruct Uncensored**. Experience the freedom to create, explore, and innovate without limits.