Fernando J. Albornoz committed
Commit 8ceca0d · verified · 1 parent: b93acea

Update README.md

Files changed (1): README.md +138 -39
README.md CHANGED
@@ -1,69 +1,168 @@
  ---
  base_model: unsloth/Qwen3-0.6B-unsloth-bnb-4bit
  tags:
  - text-generation
  - text-generation-inference
  - transformers
- - unsloth
  - qwen3
- - gguf
  - math
  - enosis-labs
  - math-mini
  license: apache-2.0
  language:
  - en
- library_name: llama.cpp # Or the main library you recommend for GGUF
- pipeline_tag: text-generation
  ---

- # Math Mini 0.6B (Preview) - GGUF

- ## Model Card for `enosislabs/math-mini-0.6B-preview-gguf`

- <p align="center">
- <img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="170"/>
- </p>

- **Math Mini 0.6B (Preview)** is a specialized language model developed by **Enosis Labs**, fine-tuned for mathematical problem-solving. This compact 0.6-billion-parameter model is part of our "Math Mini" series, designed for efficiency and accessibility in mathematical reasoning tasks.

- This GGUF version is quantized and optimized for running locally on CPUs and other supported hardware through interfaces such as `llama.cpp`, `LM Studio`, and the `Oobabooga Text Generation WebUI`.

- **Please note: this is a preview version.** We are actively working on improvements and further evaluations.

- ## Model Details

- * **Developed by:** [Enosis Labs](https://huggingface.co/enosislabs) (make sure this link is correct, or create it!)
- * **Model type:** Qwen3-based decoder-only transformer
- * **Series:** Math Mini
- * **Version:** 0.6B (Preview)
- * **Language(s):** Primarily English (focused on math problems in English)
- * **License:** Apache 2.0
- * **Finetuned from:** `unsloth/Qwen3-0.6B-unsloth-bnb-4bit`
- * **Fine-tuning frameworks:** [Unsloth](https://github.com/unslothai/unsloth) for 2x faster training and optimized memory usage, together with Hugging Face's [TRL](https://github.com/huggingface/trl) library.
- * **Fine-tuning data:** A curated collection of mathematical reasoning datasets, including algebraic problems, multi-step arithmetic, and word problems. Key datasets included variants of AQuA-RAT and OpenMathReasoning.

- ## Intended Uses

- This model is intended for (but not limited to):

- * Assisting with grade-school to early high-school mathematical problems.
- * Solving algebraic equations and multi-step arithmetic.
- * Understanding and processing mathematical questions posed in natural language.
- * Serving as a lightweight math reasoning engine in applications where resource efficiency is crucial.
- * Educational purposes, experimentation, and as a foundation for further specialization.

- ## How to Get Started with GGUF

- This model is provided in GGUF format, making it easy to run locally.

- **1. Download the GGUF file:**
- You can find the GGUF file(s) in the "[Files and versions](https_url_del_repositorio_en_hf/tree/main)" tab of this repository (replace `https_url_del_repositorio_en_hf` with the actual link to your repo). Look for files ending in `.gguf`, for example `math-mini-0.6b-preview-q4_K_M.gguf` (or the exact name of your file).

- **2. Using with `llama.cpp` (example):**

- First, ensure you have `llama.cpp` compiled.
- ```bash
- git clone https://github.com/ggerganov/llama.cpp.git
- cd llama.cpp
- make
  ---
  base_model: unsloth/Qwen3-0.6B-unsloth-bnb-4bit
+ model_name: Math Mini 0.6B (Preview)
  tags:
  - text-generation
  - text-generation-inference
  - transformers
  - qwen3
  - math
  - enosis-labs
  - math-mini
+ - gguf # Tag for the GGUF version
  license: apache-2.0
  language:
  - en
  ---
+
+ # Math Mini 0.6B (Preview)
+
+ **Math Mini 0.6B (Preview)** is a compact, specialized model developed by **Enosis Labs** as part of the "Mini Series." It is designed to deliver efficient, precise mathematical reasoning with a practical focus appropriate to its size. The model is fine-tuned from `unsloth/Qwen3-0.6B-unsloth-bnb-4bit`.
+
+ ## Philosophy & Capabilities
+
+ The Mini Series, along with the "Enosis Math" and "Enosis Code" models, incorporates step-by-step reasoning by default, enabling more efficient, clear, and well-founded answers. All models in the Math series are trained on carefully curated step-by-step problem-solving datasets, giving them a stronger ability to reason about and explain solutions in a structured way.
+
+ **Math Mini 0.6B (Preview)** is optimized for:
+
+ * **Basic Algebra:** Solving equations and manipulating expressions.
+ * **Arithmetic & Sequential Reasoning:** Calculations and breaking problems down into logical steps.
+ * **Elementary Logic:** Applying deduction in mathematical contexts.
+ * **Introductory Competition Problem Solving:** Foundational skills adapted to the model's scale.
+
+ Larger models in the "Enosis Math" series address advanced topics such as calculus, higher algebra, and olympiad problems. The "Code Mini" and "Enosis Code" series target programming and algorithmic tasks while maintaining the same philosophy of explicit, efficient reasoning.
+
+ This model is a **preview version** and is under continuous improvement and evaluation.
+
+ ## Quick Start
+
+ Available in both Hugging Face Transformers and quantized GGUF formats.
+
+ ### Transformers (Hugging Face)
+
+ Ensure you have a recent version of the `transformers` library; Qwen3 models require an up-to-date release.
+
+ ```python
+ from transformers import AutoModelForCausalLM, AutoTokenizer, pipeline
+
+ model_id = "enosislabs/math-mini-0.6b-preview-gguf"
+
+ # The tokenizer is needed to apply the Qwen3 chat template.
+ tokenizer = AutoTokenizer.from_pretrained(model_id, trust_remote_code=True)
+ pipe = pipeline("text-generation", model=model_id, trust_remote_code=True)
+
+ messages = [
+     {"role": "system", "content": "You are a helpful math assistant."},
+     {"role": "user", "content": "Solve for x: 3x + 11 = 35"},
+ ]
+
+ formatted_prompt = tokenizer.apply_chat_template(messages, tokenize=False, add_generation_prompt=True)
+ outputs = pipe(formatted_prompt, max_new_tokens=100)
+ print(outputs[0]["generated_text"])
+
+ # Alternatively, load the model directly and generate:
+ model = AutoModelForCausalLM.from_pretrained(model_id, trust_remote_code=True)
+ inputs = tokenizer.apply_chat_template(messages, tokenize=True, add_generation_prompt=True, return_tensors="pt")
+ outputs = model.generate(inputs, max_new_tokens=100)
+ response_text = tokenizer.decode(outputs[0], skip_special_tokens=True)
+ print(response_text)
+ ```
+
+ ### GGUF with Ollama
+
+ Download the `.gguf` file from Hugging Face and use it with Ollama. Several GGUF quantizations are available (e.g. 4-bit, 5-bit, and 8-bit); this example uses the 4-bit (Q4_K_M) version:
+
+ ```bash
+ ollama run enosislabs/math-mini-0.6b-preview-gguf:Q4_K_M
+ ```
+
+ For more control, create a `Modelfile` with the Qwen3 template:
+
+ ```modelfile
+ FROM ./math-mini-0.6b-preview-Q4_K_M.gguf
+ TEMPLATE """
+ <|im_start|>system
+ {{ .System }}<|im_end|>
+ <|im_start|>user
+ {{ .Prompt }}<|im_end|>
+ <|im_start|>assistant
+ """
+ ```
+
+ Then run:
+
+ ```bash
+ ollama create math-mini-0.6b -f Modelfile
+ ollama run math-mini-0.6b
+ ```
+
+ ### GGUF with llama.cpp
+
+ ```bash
+ ./main -m ./path/to/math-mini-0.6b-preview.gguf -n 256 -p "<|im_start|>system\nYou are a helpful math assistant.<|im_end|>\n<|im_start|>user\nSolve for x: 2x + 5 = 15<|im_end|>\n<|im_start|>assistant\n" --temp 0.2 -c 2048
+ ```
+
+ ### vLLM (Transformers)
+
+ ```bash
+ pip install vllm
+ python -m vllm.entrypoints.openai.api_server --model enosislabs/math-mini-0.6b-preview-gguf --trust-remote-code
+ ```
+
+ For chat:
+
+ ```bash
+ curl -X POST "http://localhost:8000/v1/chat/completions" \
+   -H "Content-Type: application/json" \
+   --data '{
+     "model": "enosislabs/math-mini-0.6b-preview-gguf",
+     "messages": [
+       {"role": "system", "content": "You are a helpful math assistant."},
+       {"role": "user", "content": "Solve for x: 2x + 5 = 15"}
+     ],
+     "max_tokens": 50,
+     "temperature": 0.2
+   }'
+ ```
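The same OpenAI-compatible endpoint can be called from the Python standard library. A minimal sketch, assuming a local vLLM server as started above (the `post_chat` helper and `API_URL` are illustrative, not part of any library):

```python
import json
import urllib.request

# Assumed local vLLM server address; adjust to your deployment.
API_URL = "http://localhost:8000/v1/chat/completions"

def post_chat(payload: dict, url: str = API_URL) -> dict:
    """POST a chat-completion request and return the decoded JSON response."""
    req = urllib.request.Request(
        url,
        data=json.dumps(payload).encode("utf-8"),
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req) as resp:
        return json.load(resp)

# Payload mirroring the curl request in this section.
payload = {
    "model": "enosislabs/math-mini-0.6b-preview-gguf",
    "messages": [
        {"role": "system", "content": "You are a helpful math assistant."},
        {"role": "user", "content": "Solve for x: 2x + 5 = 15"},
    ],
    "max_tokens": 50,
    "temperature": 0.2,
}
print(json.dumps(payload, indent=2))
# Call post_chat(payload) once the server is running.
```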
+
+ ## Prompt Format (Qwen3 ChatML)
+
+ For best results, use the Qwen3 ChatML format. The `tokenizer.apply_chat_template` method handles this automatically.
+
+ ```text
+ <|im_start|>system
+ You are a helpful AI assistant. Provide a detailed step-by-step solution.
+ <|im_end|>
+ <|im_start|>user
+ {user_question}
+ <|im_end|>
+ <|im_start|>assistant
+ ```
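When calling the model through an interface that takes a raw prompt string (such as the `llama.cpp` command above), the template can be assembled by hand. A minimal sketch; the `build_chatml_prompt` helper is illustrative, not part of any library:

```python
def build_chatml_prompt(system: str, user: str) -> str:
    """Assemble a raw Qwen3 ChatML prompt string (illustrative helper)."""
    return (
        f"<|im_start|>system\n{system}\n<|im_end|>\n"
        f"<|im_start|>user\n{user}\n<|im_end|>\n"
        "<|im_start|>assistant\n"  # leave open for the model's reply
    )

prompt = build_chatml_prompt(
    "You are a helpful AI assistant. Provide a detailed step-by-step solution.",
    "Solve for x: 3x + 11 = 35",
)
print(prompt)
```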
+
+ ## Acknowledgements
+
+ * Fine-tuned from `unsloth/Qwen3-0.6B-unsloth-bnb-4bit`.
+ * Training accelerated and optimized with [Unsloth](https://github.com/unslothai/unsloth).
+
+ ## Citation
+
+ If you use this model, please cite:
+
+ ```bibtex
+ @software{enosislabs_math_mini_06b_preview_2025,
+   author    = {{Enosis Labs}},
+   title     = {{Math Mini 0.6B (Preview)}},
+   year      = {2025},
+   publisher = {Hugging Face},
+   version   = {0.1-preview},
+   url       = {https://huggingface.co/enosislabs/math-mini-0.6b-preview-gguf}
+ }
+ ```