# Nidum Gemma-3-27B Instruct Uncensored
Welcome to Nidum's Gemma-3-27B Instruct Uncensored, a powerful and versatile model optimized for unrestricted interactions. It is designed for creators, researchers, and AI enthusiasts who want innovative, boundary-pushing capabilities.
## Why Nidum Gemma-3-27B Instruct Uncensored?
- Uncensored Interaction: Generate content freely without artificial restrictions.
- High Intelligence: Exceptional reasoning and comprehensive conversational capabilities.
- Versatile Applications: Perfect for creative writing, educational interactions, research projects, virtual assistance, and more.
- Open and Innovative: Tailored for users who appreciate limitless creativity.
## Available GGUF Quantized Models
Quantization | Bits per Weight | Ideal For | Link |
---|---|---|---|
Q8_0 | 8-bit | Best accuracy and performance | model-Q8_0.gguf |
Q6_K | 6-bit | Strong accuracy and fast inference | model-Q6_K.gguf |
Q5_K_M | 5-bit | Balance between accuracy and speed | model-Q5_K_M.gguf |
Q3_K_M | 3-bit | Low memory usage, good performance | model-Q3_K_M.gguf |
TQ2_0 | 2-bit (Tiny) | Maximum speed and minimal resources | model-TQ2_0.gguf |
TQ1_0 | 1-bit (Tiny) | Minimal footprint and fastest inference | model-TQ1_0.gguf |
## Recommended Quantization
- Best accuracy: use `Q8_0` or `Q6_K`.
- Balanced performance: use `Q5_K_M`.
- Small footprint (mobile/edge): choose `Q3_K_M`, `TQ2_0`, or `TQ1_0`.
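The bits-per-weight column above translates directly into memory requirements, which is what drives the recommendations. A minimal sketch of the arithmetic, assuming the nominal bit widths from the table and a parameter count of roughly 27 billion (actual GGUF files run somewhat larger, since k-quants mix precisions and store metadata):

```python
# Rough size estimate: parameters * bits-per-weight / 8 bytes.
# Assumes the nominal bit widths from the table above; real GGUF
# files are slightly larger (metadata, mixed-precision tensors).
PARAMS = 27e9  # approximate parameter count of a 27B model

def approx_size_gb(bits_per_weight: float) -> float:
    """Approximate on-disk (and in-RAM) weight size in gigabytes."""
    return PARAMS * bits_per_weight / 8 / 1e9

for name, bpw in [("Q8_0", 8), ("Q6_K", 6), ("Q5_K_M", 5),
                  ("Q3_K_M", 3), ("TQ2_0", 2), ("TQ1_0", 1)]:
    print(f"{name}: ~{approx_size_gb(bpw):.0f} GB")
```

This is why `Q8_0` (around 27 GB of weights alone) calls for a large-memory machine, while the 1-bit and 2-bit tiny quants can fit on mobile or edge hardware.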
## Example Usage (Original Model)
```python
from transformers import AutoModelForCausalLM, AutoTokenizer
import torch

model_name = "nidum/nidum-Gemma-3-27B-Instruct-Uncensored"

tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForCausalLM.from_pretrained(
    model_name,
    torch_dtype=torch.bfloat16,  # half-precision keeps the 27B weights manageable
    device_map="auto",           # spread layers across available GPU(s)/CPU
)

prompt = "Tell me a futuristic story about space travel."
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)

# max_new_tokens bounds only the generated text; max_length would
# also count the prompt tokens.
output = model.generate(**inputs, max_new_tokens=200)
print(tokenizer.decode(output[0], skip_special_tokens=True))
```
## Your AI, Your Way
Unlock your creativity and innovation potential with Nidum Gemma-3-27B Instruct Uncensored. Experience the freedom to create, explore, and innovate without limits.