---
license: gemma
---

# 🚀 Nidum Gemma-3-27B Instruct Uncensored

Welcome to Nidum's Gemma-3-27B Instruct Uncensored, a powerful and versatile model optimized for unrestricted interactions. It is designed for creators, researchers, and AI enthusiasts seeking innovative and boundary-pushing capabilities.

## ✨ Why Nidum Gemma-3-27B Instruct Uncensored?

- **Uncensored Interaction**: Generate content freely without artificial restrictions.
- **High Intelligence**: Strong reasoning and broad conversational ability.
- **Versatile Applications**: Well suited to creative writing, educational interactions, research projects, virtual assistance, and more.
- **Open and Innovative**: Tailored for users who value limitless creativity.

## 🚀 Available GGUF Quantized Models

| Quantization | Bits per Weight | Ideal For | Link |
|--------------|-----------------|-----------|------|
| **Q8_0** | 8-bit | Highest accuracy | [model-Q8_0.gguf](https://huggingface.co/nidum/nidum-Gemma-3-27B-Instruct-Uncensored-GGUF/resolve/main/model-Q8_0.gguf) |
| **Q6_K** | 6-bit | Strong accuracy and fast inference | [model-Q6_K.gguf](https://huggingface.co/nidum/nidum-Gemma-3-27B-Instruct-Uncensored-GGUF/resolve/main/model-Q6_K.gguf) |
| **Q5_K_M** | 5-bit | Balance between accuracy and speed | [model-Q5_K_M.gguf](https://huggingface.co/nidum/nidum-Gemma-3-27B-Instruct-Uncensored-GGUF/resolve/main/model-Q5_K_M.gguf) |
| **Q3_K_M** | 3-bit | Low memory usage, good performance | [model-Q3_K_M.gguf](https://huggingface.co/nidum/nidum-Gemma-3-27B-Instruct-Uncensored-GGUF/resolve/main/model-Q3_K_M.gguf) |
| **TQ2_0** | 2-bit (Tiny) | Maximum speed and minimal resources | [model-TQ2_0.gguf](https://huggingface.co/nidum/nidum-Gemma-3-27B-Instruct-Uncensored-GGUF/resolve/main/model-TQ2_0.gguf) |
| **TQ1_0** | 1-bit (Tiny) | Minimal footprint and fastest inference | [model-TQ1_0.gguf](https://huggingface.co/nidum/nidum-Gemma-3-27B-Instruct-Uncensored-GGUF/resolve/main/model-TQ1_0.gguf) |
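The "Bits per Weight" column translates directly into approximate file size and memory footprint. As a rough sketch (assuming a round figure of 27 billion parameters; real GGUF files also carry metadata and mixed-precision tensors, so actual sizes will differ somewhat):

```python
# Rough on-disk size estimate from the "bits per weight" column above.
# 27e9 parameters is an assumed round figure, not the exact count.
PARAMS = 27e9

def estimated_size_gb(bits_per_weight: float, params: float = PARAMS) -> float:
    """Approximate file size in gigabytes for a given quantization level."""
    return params * bits_per_weight / 8 / 1e9

for name, bits in [("Q8_0", 8), ("Q6_K", 6), ("Q5_K_M", 5),
                   ("Q3_K_M", 3), ("TQ2_0", 2), ("TQ1_0", 1)]:
    print(f"{name}: ~{estimated_size_gb(bits):.0f} GB")
```

This is only a first-order estimate (K-quants in particular use slightly more than their nominal bits per weight), but it is a quick way to check whether a given file will fit in your RAM or VRAM.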
## 🎯 Recommended Quantization

- **Best accuracy**: Use `Q8_0` or `Q6_K`.
- **Balanced performance**: Use `Q5_K_M`.
- **Small footprint (mobile/edge)**: Choose `Q3_K_M`, `TQ2_0`, or `TQ1_0`.

---

## 🚀 Example Usage (Original Model)

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

model_name = "nidum/nidum-Gemma-3-27B-Instruct-Uncensored"

tokenizer = AutoTokenizer.from_pretrained(model_name)
# device_map="auto" spreads the 27B weights across available devices;
# torch_dtype="auto" picks the checkpoint's native precision
model = AutoModelForCausalLM.from_pretrained(
    model_name, torch_dtype="auto", device_map="auto"
)

# Instruct checkpoints expect the chat template rather than a raw prompt
messages = [{"role": "user", "content": "Tell me a futuristic story about space travel."}]
inputs = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(model.device)

output = model.generate(inputs, max_new_tokens=200)
print(tokenizer.decode(output[0], skip_special_tokens=True))
```

## 🚀 Your AI, Your Way

Unlock your creativity and innovation potential with **Nidum Gemma-3-27B Instruct Uncensored**. Experience the freedom to create, explore, and innovate without limits.
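For GGUF runtimes that take a raw prompt string instead of a chat message list, the Gemma-family turn format can be built by hand. This is a minimal sketch from the general Gemma convention; verify it against the tokenizer's own chat template before relying on it:

```python
def gemma_prompt(user_message: str) -> str:
    """Wrap a user message in the Gemma-style turn format used by instruct checkpoints."""
    return (
        "<start_of_turn>user\n"
        f"{user_message}<end_of_turn>\n"
        "<start_of_turn>model\n"
    )

print(gemma_prompt("Tell me a futuristic story about space travel."))
```

Ending the string with the opening `model` turn cues the model to generate the assistant reply; the runtime should stop generation at `<end_of_turn>`.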