
CogitoZ - 32B

Model Overview

CogitoZ - 32B is a state-of-the-art large language model fine-tuned to excel at advanced reasoning and real-time decision-making tasks. It was trained with Unsloth, achieving a 2x faster training process, and leverages Hugging Face's TRL (Transformer Reinforcement Learning) library, combining training efficiency with strong reasoning performance.


Key Features

  1. Fast Training: Optimized with Unsloth, achieving a 2x faster training cycle without compromising model quality.
  2. Enhanced Reasoning: Uses chain-of-thought (CoT) reasoning to work through complex problems (see the prompting sketch after this list).
  3. Quantization Ready: Supports 8-bit and 4-bit quantization for deployment on resource-constrained devices (a loading sketch appears under Technical Details).
  4. Scalable Inference: Integrates with text-generation-inference tools for real-time applications.
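
As a concrete illustration of feature 2, the snippet below is a minimal sketch of building a prompt that elicits step-by-step reasoning. It assumes the tokenizer ships a chat template (plausible given the Qwen2.5-32B base, but not stated on this card), and the system message wording is illustrative, not prescribed.

from transformers import AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("Daemontatox/CogitoZ")

# Hypothetical system message nudging the model into chain-of-thought;
# assumes the tokenizer defines a chat template.
messages = [
    {"role": "system", "content": "Think step by step, then state the final answer."},
    {"role": "user", "content": "A train covers 120 km in 1.5 hours. What is its average speed?"},
]
prompt = tokenizer.apply_chat_template(
    messages, tokenize=False, add_generation_prompt=True
)
print(prompt)  # feed this string to model.generate as in the Deployment Example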

Intended Use

Primary Use Cases

  • Education: Real-time assistance for complex problem-solving, especially in mathematics and logic.
  • Business: Supports decision-making, financial modeling, and operational strategy.
  • Healthcare: Enhances diagnostic accuracy and supports structured clinical reasoning.
  • Legal Analysis: Simplifies complex legal documents and constructs logical arguments.

Limitations

  • May produce biased outputs if the input prompts contain prejudicial or harmful content.
  • Should not be used for real-time, high-stakes autonomous decisions (e.g., robotics or autonomous vehicles).

Technical Details

  • Base Model: Qwen/Qwen2.5-32B.
  • Model Size: 32.8B parameters (BF16 weights).
  • Training Framework: Hugging Face's Transformers and TRL libraries.
  • Optimization Framework: Unsloth, for faster, more efficient training.
  • Language Support: English.
  • Quantization: Compatible with 8-bit and 4-bit inference modes for deployment on edge devices (see the sketch below).
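
As a minimal sketch of the 4-bit mode mentioned above, the snippet below loads the model with an NF4 quantization config. It assumes a CUDA GPU and the bitsandbytes and accelerate packages, none of which this card pins to specific versions; for 8-bit mode, BitsAndBytesConfig(load_in_8bit=True) is the analogous configuration.

import torch
from transformers import AutoModelForCausalLM, BitsAndBytesConfig

# 4-bit NF4 loading sketch (assumes bitsandbytes + accelerate + CUDA).
bnb_config = BitsAndBytesConfig(
    load_in_4bit=True,
    bnb_4bit_quant_type="nf4",
    bnb_4bit_compute_dtype=torch.bfloat16,  # matches the BF16 checkpoint
)

model = AutoModelForCausalLM.from_pretrained(
    "Daemontatox/CogitoZ",
    quantization_config=bnb_config,
    device_map="auto",  # shard layers across available GPUs
)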

Deployment Example

Using Hugging Face Transformers:

from transformers import AutoModelForCausalLM, AutoTokenizer

model_name = "Daemontatox/CogitoZ"

# Load the tokenizer and model; device_map="auto" shards the 32B model
# across available GPUs (requires the accelerate package).
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForCausalLM.from_pretrained(
    model_name,
    torch_dtype="auto",  # keep the checkpoint's native BF16 precision
    device_map="auto",
)

# Encode a reasoning prompt and generate a step-by-step answer.
prompt = "Explain the Pythagorean theorem step-by-step:"
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=256)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))

Optimized Inference:

Install the transformers and text-generation-inference libraries, then deploy quantized builds of the model on servers or edge devices for optimal performance. A hedged client sketch follows below.
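
As a minimal sketch: assuming a text-generation-inference server has already been launched for this model (for example via the official ghcr.io/huggingface/text-generation-inference container with --model-id Daemontatox/CogitoZ) and is listening locally on port 8080, it can be queried from Python with huggingface_hub. The URL and port here are illustrative, not part of this card.

from huggingface_hub import InferenceClient

# Assumes a local TGI endpoint; adjust the URL to your deployment.
client = InferenceClient("http://localhost:8080")

answer = client.text_generation(
    "Explain the Pythagorean theorem step-by-step:",
    max_new_tokens=256,
)
print(answer)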

Training Data

The fine-tuning process utilized reasoning-specific datasets, including:

  • MATH Dataset: Focused on logical and mathematical problems.
  • Custom Corpora: Tailored datasets for multi-domain reasoning and structured problem-solving.

Ethical Considerations

  • Bias Awareness: The model reflects biases present in the training data. Users should carefully evaluate outputs in sensitive contexts.
  • Safe Deployment: Not recommended for generating harmful or unethical content.

Acknowledgments

This model was developed with contributions from Daemontatox and the Unsloth team, utilizing state-of-the-art techniques in fine-tuning and optimization.

Open LLM Leaderboard Evaluation Results

Detailed and summarized results are available on the Open LLM Leaderboard.

Metric                 Value (%)
---------------------  ---------
Average                38.36
IFEval (0-shot)        39.67
BBH (3-shot)           53.89
MATH Lvl 5 (4-shot)    46.30
GPQA (0-shot)          19.35
MuSR (0-shot)          19.94
MMLU-PRO (5-shot)      51.03