CISCai committed
Commit b9d3a39 · verified · 1 Parent(s): 1b6c06f

Upload 13 files
.gitattributes CHANGED
@@ -35,3 +35,15 @@ saved_model/**/* filter=lfs diff=lfs merge=lfs -text
  *tfevents* filter=lfs diff=lfs merge=lfs -text
  granite-3.2-8b-instruct-bf16-00001-of-00002.gguf filter=lfs diff=lfs merge=lfs -text
  granite-3.2-8b-instruct-bf16-00002-of-00002.gguf filter=lfs diff=lfs merge=lfs -text
+ granite-3.2-8b-instruct.imatrix.dat filter=lfs diff=lfs merge=lfs -text
+ granite-3.2-8b-instruct.IQ1_M.gguf filter=lfs diff=lfs merge=lfs -text
+ granite-3.2-8b-instruct.IQ1_S.gguf filter=lfs diff=lfs merge=lfs -text
+ granite-3.2-8b-instruct.IQ2_M.gguf filter=lfs diff=lfs merge=lfs -text
+ granite-3.2-8b-instruct.IQ2_S.gguf filter=lfs diff=lfs merge=lfs -text
+ granite-3.2-8b-instruct.IQ2_XS.gguf filter=lfs diff=lfs merge=lfs -text
+ granite-3.2-8b-instruct.IQ2_XXS.gguf filter=lfs diff=lfs merge=lfs -text
+ granite-3.2-8b-instruct.IQ3_M.gguf filter=lfs diff=lfs merge=lfs -text
+ granite-3.2-8b-instruct.IQ3_S.gguf filter=lfs diff=lfs merge=lfs -text
+ granite-3.2-8b-instruct.IQ3_XS.gguf filter=lfs diff=lfs merge=lfs -text
+ granite-3.2-8b-instruct.IQ3_XXS.gguf filter=lfs diff=lfs merge=lfs -text
+ granite-3.2-8b-instruct.IQ4_XS.gguf filter=lfs diff=lfs merge=lfs -text
README.md CHANGED
@@ -1,3 +1,677 @@
- ---
- license: apache-2.0
- ---
+ ---
+ pipeline_tag: text-generation
+ inference: false
+ license: apache-2.0
+ tags:
+ - code
+ - language
+ - granite-3.2
+ base_model:
+ - ibm-granite/granite-3.2-8b-instruct
+ model_creator: ibm-granite
+ model_name: granite-3.2-8b-instruct
+ model_type: granite
+ datasets:
+ - m-a-p/CodeFeedback-Filtered-Instruction
+ quantized_by: CISC
+ ---
+
+ # granite-3.2-8b-instruct - SOTA GGUF
+ - Model creator: [IBM](https://huggingface.co/ibm-granite)
+ - Original model: [granite-3.2-8b-instruct](https://huggingface.co/ibm-granite/granite-3.2-8b-instruct)
+
+ <!-- description start -->
+ ## Description
+
+ This repo contains State Of The Art quantized GGUF format model files for [granite-3.2-8b-instruct](https://huggingface.co/ibm-granite/granite-3.2-8b-instruct).
+
+ Quantization was done with an importance matrix that was trained for ~1M tokens (256 batches of 4096 tokens) of answers from the [CodeFeedback-Filtered-Instruction](https://huggingface.co/datasets/m-a-p/CodeFeedback-Filtered-Instruction) dataset.
+
+ Fill-in-Middle tokens are automatically detected and supported as of commit [11ac980](https://github.com/ggerganov/llama.cpp/commit/11ac9800aff532715a5bc7991062c68ba3472e6e), see [example](#simple-llama-cpp-python-example-fill-in-middle-code).
+
+ <!-- description end -->
+
+ <!-- compatibility_gguf start -->
+ ## Compatibility
+
+ These quantised GGUFv3 files are compatible with llama.cpp from September 17th 2024 onwards, as of commit [0d2ec43](https://github.com/ggerganov/llama.cpp/commit/0d2ec438330271d201c2e9224aca23d0d5c908bf).
+
+ They are also compatible with many third-party UIs and libraries, provided they are built using a recent llama.cpp.
+
+ Usage with llama-cpp-python based frameworks requires [PR#1486](https://github.com/abetlen/llama-cpp-python/pull/1486) patched in for the chat template to work correctly.
+
+ ## Explanation of quantisation methods
+
+ <details>
+ <summary>Click to see details</summary>
+
+ The new methods available are:
+
+ * GGML_TYPE_IQ1_S - 1-bit quantization in super-blocks with an importance matrix applied, effectively using 1.56 bits per weight (bpw)
+ * GGML_TYPE_IQ1_M - 1-bit quantization in super-blocks with an importance matrix applied, effectively using 1.75 bpw
+ * GGML_TYPE_IQ2_XXS - 2-bit quantization in super-blocks with an importance matrix applied, effectively using 2.06 bpw
+ * GGML_TYPE_IQ2_XS - 2-bit quantization in super-blocks with an importance matrix applied, effectively using 2.31 bpw
+ * GGML_TYPE_IQ2_S - 2-bit quantization in super-blocks with an importance matrix applied, effectively using 2.5 bpw
+ * GGML_TYPE_IQ2_M - 2-bit quantization in super-blocks with an importance matrix applied, effectively using 2.7 bpw
+ * GGML_TYPE_IQ3_XXS - 3-bit quantization in super-blocks with an importance matrix applied, effectively using 3.06 bpw
+ * GGML_TYPE_IQ3_XS - 3-bit quantization in super-blocks with an importance matrix applied, effectively using 3.3 bpw
+ * GGML_TYPE_IQ3_S - 3-bit quantization in super-blocks with an importance matrix applied, effectively using 3.44 bpw
+ * GGML_TYPE_IQ3_M - 3-bit quantization in super-blocks with an importance matrix applied, effectively using 3.66 bpw
+ * GGML_TYPE_IQ4_XS - 4-bit quantization in super-blocks with an importance matrix applied, effectively using 4.25 bpw
+ * GGML_TYPE_IQ4_NL - 4-bit non-linearly mapped quantization with an importance matrix applied, effectively using 4.5 bpw
+
+ Refer to the Provided Files table below to see what files use which methods, and how.
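+
+ As a rough sanity check you can estimate file sizes from the bpw figures; a back-of-the-envelope sketch (the ~8.2B parameter count is approximate):
+
+ ```python
+ # Estimate GGUF file size from bits per weight (bpw)
+ params = 8.2e9  # approximate parameter count of granite-3.2-8b-instruct
+ for name, bpw in [("IQ2_M", 2.7), ("IQ3_M", 3.66), ("IQ4_XS", 4.25)]:
+     size_gb = params * bpw / 8 / 1e9  # bits -> bytes -> gigabytes
+     print(f"{name}: ~{size_gb:.1f} GB")
+ ```
+
+ Actual files come out slightly different because some tensors (e.g. embeddings and output) are stored at other precisions and the file also carries metadata.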
+ </details>
+ <!-- compatibility_gguf end -->
+
+ <!-- README_GGUF.md-provided-files start -->
+ ## Provided files
+
+ | Name | Quant method | Bits | Size | Max RAM required | Use case |
+ | ---- | ---- | ---- | ---- | ---- | ----- |
+ | [granite-3.2-8b-instruct.IQ1_S.gguf](https://huggingface.co/CISCai/granite-3.2-8b-instruct-SOTA-GGUF/blob/main/granite-3.2-8b-instruct.IQ1_S.gguf) | IQ1_S | 1 | 1.7 GB | 1.9 GB | smallest, significant quality loss |
+ | [granite-3.2-8b-instruct.IQ1_M.gguf](https://huggingface.co/CISCai/granite-3.2-8b-instruct-SOTA-GGUF/blob/main/granite-3.2-8b-instruct.IQ1_M.gguf) | IQ1_M | 1 | 1.8 GB | 2.1 GB | very small, significant quality loss |
+ | [granite-3.2-8b-instruct.IQ2_XXS.gguf](https://huggingface.co/CISCai/granite-3.2-8b-instruct-SOTA-GGUF/blob/main/granite-3.2-8b-instruct.IQ2_XXS.gguf) | IQ2_XXS | 2 | 2.1 GB | 2.3 GB | very small, high quality loss |
+ | [granite-3.2-8b-instruct.IQ2_XS.gguf](https://huggingface.co/CISCai/granite-3.2-8b-instruct-SOTA-GGUF/blob/main/granite-3.2-8b-instruct.IQ2_XS.gguf) | IQ2_XS | 2 | 2.3 GB | 2.5 GB | very small, high quality loss |
+ | [granite-3.2-8b-instruct.IQ2_S.gguf](https://huggingface.co/CISCai/granite-3.2-8b-instruct-SOTA-GGUF/blob/main/granite-3.2-8b-instruct.IQ2_S.gguf) | IQ2_S | 2 | 2.4 GB | 2.7 GB | small, substantial quality loss |
+ | [granite-3.2-8b-instruct.IQ2_M.gguf](https://huggingface.co/CISCai/granite-3.2-8b-instruct-SOTA-GGUF/blob/main/granite-3.2-8b-instruct.IQ2_M.gguf) | IQ2_M | 2 | 2.6 GB | 2.9 GB | small, greater quality loss |
+ | [granite-3.2-8b-instruct.IQ3_XXS.gguf](https://huggingface.co/CISCai/granite-3.2-8b-instruct-SOTA-GGUF/blob/main/granite-3.2-8b-instruct.IQ3_XXS.gguf) | IQ3_XXS | 3 | 3.0 GB | 3.2 GB | very small, high quality loss |
+ | [granite-3.2-8b-instruct.IQ3_XS.gguf](https://huggingface.co/CISCai/granite-3.2-8b-instruct-SOTA-GGUF/blob/main/granite-3.2-8b-instruct.IQ3_XS.gguf) | IQ3_XS | 3 | 3.2 GB | 3.4 GB | small, substantial quality loss |
+ | [granite-3.2-8b-instruct.IQ3_S.gguf](https://huggingface.co/CISCai/granite-3.2-8b-instruct-SOTA-GGUF/blob/main/granite-3.2-8b-instruct.IQ3_S.gguf) | IQ3_S | 3 | 3.4 GB | 3.6 GB | small, greater quality loss |
+ | [granite-3.2-8b-instruct.IQ3_M.gguf](https://huggingface.co/CISCai/granite-3.2-8b-instruct-SOTA-GGUF/blob/main/granite-3.2-8b-instruct.IQ3_M.gguf) | IQ3_M | 3 | 3.5 GB | 3.7 GB | medium, balanced quality - recommended |
+ | [granite-3.2-8b-instruct.IQ4_XS.gguf](https://huggingface.co/CISCai/granite-3.2-8b-instruct-SOTA-GGUF/blob/main/granite-3.2-8b-instruct.IQ4_XS.gguf) | IQ4_XS | 4 | 4.1 GB | 4.3 GB | small, substantial quality loss |
+
+ Generated importance matrix file: [granite-3.2-8b-instruct.imatrix.dat](https://huggingface.co/CISCai/granite-3.2-8b-instruct-SOTA-GGUF/blob/main/granite-3.2-8b-instruct.imatrix.dat)
+
+ **Note**: the above RAM figures assume no GPU offloading with 4K context. If layers are offloaded to the GPU, this will reduce RAM usage and use VRAM instead.
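+
+ If you only need one of the quants, you can fetch that single file from Python; a minimal sketch using the `huggingface_hub` package (pick whichever filename from the table you want):
+
+ ```python
+ from huggingface_hub import hf_hub_download
+
+ # Download a single quant from this repo into the current directory
+ hf_hub_download(
+     repo_id="CISCai/granite-3.2-8b-instruct-SOTA-GGUF",
+     filename="granite-3.2-8b-instruct.IQ4_XS.gguf",
+     local_dir=".",
+ )
+ ```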
+
+ <!-- README_GGUF.md-provided-files end -->
+
+ <!-- README_GGUF.md-how-to-run start -->
+ ## Example `llama.cpp` command
+
+ Make sure you are using `llama.cpp` from commit [6171c9d](https://github.com/ggerganov/llama.cpp/commit/6171c9d25820ccf676b243c172868819d882848f) or later for jinja2 chat template support.
+
+ ```shell
+ ./llama-cli -ngl 41 -m granite-3.2-8b-instruct.IQ4_XS.gguf --color -c 131072 -cnv --jinja
+ ```
+
+ Change `-ngl 41` to the number of layers to offload to GPU. Remove it if you don't have GPU acceleration.
+
+ Change `-c 131072` to the desired sequence length.
+
+ If you are low on VRAM/RAM, try quantizing the K-cache with `-ctk q8_0` or even `-ctk q4_0` for big memory savings (depending on context size).
+ There is a similar option for the V-cache (`-ctv`), only available if you also enable Flash Attention (`-fa`).
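+
+ The same cache quantization can be set from llama-cpp-python; a minimal sketch (assuming the `type_k`/`type_v` constructor parameters and the `GGML_TYPE_Q8_0` constant exposed by the package; as on the CLI, the quantized V-cache needs Flash Attention):
+
+ ```python
+ import llama_cpp
+
+ # Quantize the KV cache to q8_0 to roughly halve its memory footprint
+ llm = llama_cpp.Llama(
+     model_path="./granite-3.2-8b-instruct.IQ4_XS.gguf",
+     n_gpu_layers=41,
+     n_ctx=131072,
+     flash_attn=True,                  # required for the quantized V-cache
+     type_k=llama_cpp.GGML_TYPE_Q8_0,  # K-cache type
+     type_v=llama_cpp.GGML_TYPE_Q8_0,  # V-cache type
+ )
+ ```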
+
+ For other parameters and how to use them, please refer to [the llama.cpp documentation](https://github.com/ggerganov/llama.cpp/blob/master/examples/main/README.md).
+
+ ## How to run from Python code
+
+ You can use GGUF models from Python using the [llama-cpp-python](https://github.com/abetlen/llama-cpp-python) module.
+
+ ### How to load this model in Python code, using llama-cpp-python
+
+ For full documentation, please see: [llama-cpp-python docs](https://llama-cpp-python.readthedocs.io/en/latest/).
+
+ #### First install the package
+
+ Run one of the following commands, according to your system:
+
+ ```shell
+ # Prebuilt wheel with basic CPU support
+ pip install llama-cpp-python --extra-index-url https://abetlen.github.io/llama-cpp-python/whl/cpu
+ # Prebuilt wheel with NVidia CUDA acceleration (cu121, cu122, etc.)
+ pip install llama-cpp-python --extra-index-url https://abetlen.github.io/llama-cpp-python/whl/cu121
+ # Prebuilt wheel with Metal GPU acceleration
+ pip install llama-cpp-python --extra-index-url https://abetlen.github.io/llama-cpp-python/whl/metal
+ # Build base version with no GPU acceleration
+ pip install llama-cpp-python
+ # With NVidia CUDA acceleration
+ CMAKE_ARGS="-DGGML_CUDA=on" pip install llama-cpp-python
+ # Or with OpenBLAS acceleration
+ CMAKE_ARGS="-DGGML_BLAS=ON -DGGML_BLAS_VENDOR=OpenBLAS" pip install llama-cpp-python
+ # Or with AMD ROCm GPU acceleration (Linux only)
+ CMAKE_ARGS="-DGGML_HIPBLAS=on" pip install llama-cpp-python
+ # Or with Metal GPU acceleration for macOS systems only
+ CMAKE_ARGS="-DGGML_METAL=on" pip install llama-cpp-python
+ # Or with Vulkan acceleration
+ CMAKE_ARGS="-DGGML_VULKAN=on" pip install llama-cpp-python
+ # Or with SYCL acceleration
+ CMAKE_ARGS="-DGGML_SYCL=on -DCMAKE_C_COMPILER=icx -DCMAKE_CXX_COMPILER=icpx" pip install llama-cpp-python
+
+ # On Windows, to set the CMAKE_ARGS variable in PowerShell, follow this format; e.g. for NVidia CUDA:
+ $env:CMAKE_ARGS = "-DGGML_CUDA=on"
+ pip install llama-cpp-python
+ ```
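+
+ You can then do a quick sanity check that the package imports and see which version you got:
+
+ ```python
+ import llama_cpp
+
+ # Confirm the module loads and print the installed version
+ print(llama_cpp.__version__)
+ ```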
+
+ #### Simple llama-cpp-python example code
+
+ ```python
+ from llama_cpp import Llama
+
+ # Chat Completion API
+
+ llm = Llama(model_path="./granite-3.2-8b-instruct.IQ4_XS.gguf", n_gpu_layers=41, n_ctx=131072)
+ print(llm.create_chat_completion(
+     repeat_penalty = 1.0,
+     messages = [
+         {
+             "role": "user",
+             "content": "Pick a LeetCode challenge and solve it in Python."
+         }
+     ]
+ ))
+ ```
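+
+ You can also stream the response token by token; a small sketch of the same call with `stream=True` (chunks follow the OpenAI-style `delta` format):
+
+ ```python
+ from llama_cpp import Llama
+
+ llm = Llama(model_path="./granite-3.2-8b-instruct.IQ4_XS.gguf", n_gpu_layers=41, n_ctx=131072)
+
+ # stream=True yields incremental chunks instead of one final response
+ for chunk in llm.create_chat_completion(
+     repeat_penalty = 1.0,
+     messages = [
+         {
+             "role": "user",
+             "content": "Pick a LeetCode challenge and solve it in Python."
+         }
+     ],
+     stream = True
+ ):
+     delta = chunk["choices"][0]["delta"]
+     print(delta.get("content", ""), end="", flush=True)
+ print()
+ ```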
+
+ #### Simple llama-cpp-python example fill-in-middle code
+
+ ```python
+ from llama_cpp import Llama
+
+ # Completion API
+
+ prompt = "def add("
+ suffix = "\n    return sum\n\n"
+
+ llm = Llama(model_path="./granite-3.2-8b-instruct.IQ4_XS.gguf", n_gpu_layers=41, n_ctx=131072)
+ output = llm.create_completion(
+     temperature = 0.0,
+     repeat_penalty = 1.0,
+     prompt = prompt,
+     suffix = suffix
+ )
+
+ # Models sometimes repeat the suffix in the response; attempt to filter that out
+ response = output["choices"][0]["text"]
+ response_stripped = response.rstrip()
+ unwanted_response_suffix = suffix.rstrip()
+ unwanted_response_length = len(unwanted_response_suffix)
+
+ filtered = False
+ if unwanted_response_suffix and response_stripped[-unwanted_response_length:] == unwanted_response_suffix:
+     response = response_stripped[:-unwanted_response_length]
+     filtered = True
+
+ print(f"Fill-in-Middle completion{' (filtered)' if filtered else ''}:\n\n{prompt}\033[32m{response}\033[{'33' if filtered else '0'}m{suffix}\033[0m")
+ ```
+
+ #### Simple llama-cpp-python example function calling code
+
+ ```python
+ import json
+ from llama_cpp import Llama, LlamaGrammar
+
+ # Chat Completion API
+
+ grammar = LlamaGrammar.from_json_schema(json.dumps({
+     "type": "array",
+     "items": {
+         "type": "object",
+         "required": [ "name", "arguments" ],
+         "properties": {
+             "name": {
+                 "type": "string"
+             },
+             "arguments": {
+                 "type": "object"
+             }
+         }
+     }
+ }))
+
+ llm = Llama(model_path="./granite-3.2-8b-instruct.IQ4_XS.gguf", n_gpu_layers=41, n_ctx=131072)
+ response = llm.create_chat_completion(
+     temperature = 0.0,
+     repeat_penalty = 1.0,
+     messages = [
+         {
+             "role": "user",
+             "content": "What's the weather like in Oslo and Stockholm?"
+         }
+     ],
+     tools=[{
+         "type": "function",
+         "function": {
+             "name": "get_current_weather",
+             "description": "Get the current weather in a given location",
+             "parameters": {
+                 "type": "object",
+                 "properties": {
+                     "location": {
+                         "type": "string",
+                         "description": "The city and state, e.g. San Francisco, CA"
+                     },
+                     "unit": {
+                         "type": "string",
+                         "enum": [ "celsius", "fahrenheit" ]
+                     }
+                 },
+                 "required": [ "location" ]
+             }
+         }
+     }],
+     grammar = grammar
+ )
+ print(json.loads(response["choices"][0]["message"]["content"]))
+
+ print(llm.create_chat_completion(
+     temperature = 0.0,
+     repeat_penalty = 1.0,
+     messages = [
+         {
+             "role": "user",
+             "content": "What's the weather like in Oslo?"
+         },
+         { # The tool_calls entry is from the response to the above request with tool_choice active
+             "role": "assistant",
+             "content": None,
+             "tool_calls": [
+                 {
+                     "id": "call__0_get_current_weather_cmpl-...",
+                     "type": "function",
+                     "function": {
+                         "name": "get_current_weather",
+                         "arguments": { "location": "Oslo, Norway", "unit": "celsius" }
+                     }
+                 }
+             ]
+         },
+         { # The tool_call_id is from tool_calls and content is the result from the function call you made
+             "role": "tool",
+             "content": "20",
+             "tool_call_id": "call__0_get_current_weather_cmpl-..."
+         }
+     ],
+     tools=[{
+         "type": "function",
+         "function": {
+             "name": "get_current_weather",
+             "description": "Get the current weather in a given location",
+             "parameters": {
+                 "type": "object",
+                 "properties": {
+                     "location": {
+                         "type": "string",
+                         "description": "The city and state, e.g. San Francisco, CA"
+                     },
+                     "unit": {
+                         "type": "string",
+                         "enum": [ "celsius", "fahrenheit" ]
+                     }
+                 },
+                 "required": [ "location" ]
+             }
+         }
+     }],
+     #tool_choice={
+     #    "type": "function",
+     #    "function": {
+     #        "name": "get_current_weather"
+     #    }
+     #}
+ ))
+ ```
+
+ <!-- README_GGUF.md-how-to-run end -->
+
+ <!-- original-model-card start -->
+ # Granite-3.2-8B-Instruct
+
+ **Model Summary:**
+ Granite-3.2-8B-Instruct is an 8-billion-parameter, long-context AI model fine-tuned for thinking capabilities. Built on top of [Granite-3.1-8B-Instruct](https://huggingface.co/ibm-granite/granite-3.1-8b-instruct), it has been trained using a mix of permissively licensed open-source datasets and internally generated synthetic data designed for reasoning tasks. The model allows controllability of its thinking capability, ensuring it is applied only when required.
+
+ - **Developers:** Granite Team, IBM
+ - **Website**: [Granite Docs](https://www.ibm.com/granite/docs/)
+ - **Release Date**: February 26th, 2025
+ - **License:** [Apache 2.0](https://www.apache.org/licenses/LICENSE-2.0)
+
+ **Supported Languages:**
+ English, German, Spanish, French, Japanese, Portuguese, Arabic, Czech, Italian, Korean, Dutch, and Chinese. However, users may finetune this Granite model for languages beyond these 12 languages.
+
+ **Intended Use:**
+ This model is designed to handle general instruction-following tasks and can be integrated into AI assistants across various domains, including business applications.
+
+ **Capabilities**
+ * **Thinking**
+ * Summarization
+ * Text classification
+ * Text extraction
+ * Question-answering
+ * Retrieval Augmented Generation (RAG)
+ * Code related tasks
+ * Function-calling tasks
+ * Multilingual dialog use cases
+ * Long-context tasks including long document/meeting summarization, long document QA, etc.
+
+ **Generation:**
+ This is a simple example of how to use the Granite-3.2-8B-Instruct model.
+
+ Install the following libraries:
+
+ ```shell
+ pip install torch torchvision torchaudio
+ pip install accelerate
+ pip install transformers
+ ```
+
+ Then, copy the snippet from the section that is relevant for your use case.
+
+ ```python
+ from transformers import AutoModelForCausalLM, AutoTokenizer, set_seed
+ import torch
+
+ model_path = "ibm-granite/granite-3.2-8b-instruct"
+ device = "cuda"
+ model = AutoModelForCausalLM.from_pretrained(
+     model_path,
+     device_map=device,
+     torch_dtype=torch.bfloat16,
+ )
+ tokenizer = AutoTokenizer.from_pretrained(
+     model_path
+ )
+
+ conv = [{"role": "user", "content": "You have 10 liters of a 30% acid solution. How many liters of a 70% acid solution must be added to achieve a 50% acid mixture?"}]
+
+ input_ids = tokenizer.apply_chat_template(conv, return_tensors="pt", thinking=True, return_dict=True, add_generation_prompt=True).to(device)
+
+ set_seed(42)
+ output = model.generate(
+     **input_ids,
+     max_new_tokens=8192,
+ )
+
+ prediction = tokenizer.decode(output[0, input_ids["input_ids"].shape[1]:], skip_special_tokens=True)
+ print(prediction)
+ ```
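+
+ To reproduce the `thinking=False` output shown below, only the chat template flag changes; a one-line variation of the snippet above:
+
+ ```python
+ # Same as above, but with the thinking capability disabled
+ input_ids = tokenizer.apply_chat_template(conv, return_tensors="pt", thinking=False, return_dict=True, add_generation_prompt=True).to(device)
+ ```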
+
+ **Example Outputs**
+ - thinking=True
+ ```md
+ Here is my thought process:
+
+ 1. We know we start with 10 liters of a 30% acid solution, so there's 3 liters of pure acid in it (10 liters * 0.3).
+ 2. We need to determine how much of the 70% acid solution to add such that the final mixture is 50% acid.
+ 3. Let's say x liters of the 70% acid solution are added. This will contribute 0.7x liters of pure acid.
+ 4. The total volume of the new mixture will be 10 + x liters, and the total amount of pure acid will be 3 + 0.7x liters.
+ 5. We need to set up an equation where (3 + 0.7x) / (10 + x) = 0.5 (since we want a 50% acid mixture).
+ 6. Solving this equation for x will give us the volume of 70% solution needed.
+
+ Here is my response:
+
+ To solve this, we need to set up an equation based on the concentration of acid before and after mixing.
+
+ 1. **Initial Acid Content**:
+    - We start with 10 liters of a 30% acid solution.
+    - Pure acid in this = 10 * 0.3 = 3 liters.
+
+ 2. **Variable for Addition**:
+    - Let x be the liters of 70% acid solution to be added.
+    - Pure acid in x liters of 70% solution = 0.7x liters.
+
+ 3. **Final Mixture Concentration**:
+    - Total volume of mixture = 10 + x liters.
+    - Total pure acid in the mixture = 3 (from initial solution) + 0.7x (from added solution).
+
+ 4. **Setting up the equation for 50% acid concentration**:
+    - We require that the ratio of pure acid to total volume is 0.5 (50%).
+    - Therefore, the equation is:
+
+    \[
+    \frac{3 + 0.7x}{10 + x} = 0.5
+    \]
+
+ 5. **Solving the equation**:
+
+    Multiply both sides by (10 + x) to clear the fraction:
+
+    \[
+    3 + 0.7x = 0.5(10 + x)
+    \]
+
+    Expanding the right side:
+
+    \[
+    3 + 0.7x = 5 + 0.5x
+    \]
+
+    Subtract 0.5x from both sides:
+
+    \[
+    3 + 0.2x = 5
+    \]
+
+    Subtract 3 from both sides:
+
+    \[
+    0.2x = 2
+    \]
+
+    Divide by 0.2:
+
+    \[
+    x = 10
+    \]
+
+ So, 10 liters of the 70% acid solution must be added to the initial 10 liters of 30% solution to get a 50% acid mixture.
+
+ Let's verify:
+
+ - Initial acid: 3 liters
+ - Acid from added solution: 0.7 * 10 = 7 liters
+ - Total acid: 3 + 7 = 10 liters
+ - Total volume: 10 + 10 = 20 liters
+ - Acid percentage: \(\frac{10}{20} = 0.5 = 50\%\)
+
+ This confirms our solution.
+
+ Final answer: 10 liters of 70% acid solution should be added.
+ ```
+ - thinking=False
+ ```md
+ Let's denote the amount of 70% acid solution to be added as x liters.
+
+ We start with 10 liters of a 30% acid solution, which means there are 0.3 * 10 = 3 liters of pure acid in it.
+
+ We want to end up with a 50% acid solution after adding x liters of 70% acid solution. The total volume of the mixture will be 10 + x liters, and we want half of this volume to be pure acid.
+
+ So, the amount of pure acid in the final mixture should be 0.5 * (10 + x) liters.
+
+ The amount of pure acid in the final mixture comes from two sources: the original 30% solution and the added 70% solution. So, we can set up the following equation:
+
+ 3 (from the original solution) + 0.7x (from the added solution) = 0.5 * (10 + x)
+
+ Now, let's solve for x:
+
+ 3 + 0.7x = 5 + 0.5x
+ 0.7x - 0.5x = 5 - 3
+ 0.2x = 2
+ x = 2 / 0.2
+ x = 10
+
+ So, you need to add 10 liters of a 70% acid solution to the 10 liters of a 30% acid solution to get a 50% acid mixture.
+ ```
+
+ **Evaluation Results:**
+
+ | Models | ArenaHard | Alpaca-Eval-2 | MMLU | PopQA | TruthfulQA | BigBenchHard | DROP | GSM8K | HumanEval | HumanEval+ | IFEval | AttaQ |
+ | :--- | :---: | :---: | :---: | :---: | :---: | :---: | :---: | :---: | :---: | :---: | :---: | :---: |
+ | Llama-3.1-8B-Instruct | 36.43 | 27.22 | 69.15 | 28.79 | 52.79 | 72.66 | 61.48 | 83.24 | 85.32 | 80.15 | 79.10 | 83.43 |
+ | DeepSeek-R1-Distill-Llama-8B | 17.17 | 21.85 | 45.80 | 13.25 | 47.43 | 65.71 | 44.46 | 72.18 | 67.54 | 62.91 | 66.50 | 42.87 |
+ | Qwen-2.5-7B-Instruct | 25.44 | 30.34 | 74.30 | 18.12 | 63.06 | 70.40 | 54.71 | 84.46 | 93.35 | 89.91 | 74.90 | 81.90 |
+ | DeepSeek-R1-Distill-Qwen-7B | 10.36 | 15.35 | 50.72 | 9.94 | 47.14 | 65.04 | 42.76 | 78.47 | 79.89 | 78.43 | 59.10 | 42.45 |
+ | Granite-3.1-8B-Instruct | 37.58 | 30.34 | 66.77 | 28.7 | 65.84 | 68.55 | 50.78 | 79.15 | 89.63 | 85.79 | 73.20 | 85.73 |
+ | Granite-3.1-2B-Instruct | 23.3 | 27.17 | 57.11 | 20.55 | 59.79 | 54.46 | 18.68 | 67.55 | 79.45 | 75.26 | 63.59 | 84.7 |
+ | Granite-3.2-2B-Instruct | 24.86 | 34.51 | 57.18 | 20.56 | 59.8 | 52.27 | 21.12 | 67.02 | 80.13 | 73.39 | 61.55 | 83.23 |
+ | **Granite-3.2-8B-Instruct** | 55.25 | 61.19 | 66.79 | 28.04 | 66.92 | 64.77 | 50.95 | 81.65 | 89.35 | 85.72 | 74.31 | 85.42 |
+
+ **Training Data:**
+ Overall, our training data is largely comprised of two key sources: (1) publicly available datasets with permissive license, and (2) internal synthetically generated data targeted to enhance reasoning capabilities.
+ <!-- A detailed attribution of datasets can be found in [Granite 3.2 Technical Report (coming soon)](#), and [Accompanying Author List](https://github.com/ibm-granite/granite-3.0-language-models/blob/main/author-ack.pdf). -->
+
+ **Infrastructure:**
+ We train Granite-3.2-8B-Instruct using IBM's supercomputing cluster, Blue Vela, which is outfitted with NVIDIA H100 GPUs. This cluster provides a scalable and efficient infrastructure for training our models over thousands of GPUs.
+
+ **Ethical Considerations and Limitations:**
+ Granite-3.2-8B-Instruct builds upon Granite-3.1-8B-Instruct, leveraging both permissively licensed open-source and select proprietary data for enhanced performance. Since it inherits its foundation from the previous model, all ethical considerations and limitations applicable to [Granite-3.1-8B-Instruct](https://huggingface.co/ibm-granite/granite-3.1-8b-instruct) remain relevant.
+
+ **Resources**
+ - ⭐️ Learn about the latest updates with Granite: https://www.ibm.com/granite
+ - 📄 Get started with tutorials, best practices, and prompt engineering advice: https://www.ibm.com/granite/docs/
+ - 💡 Learn about the latest Granite learning resources: https://ibm.biz/granite-learning-resources
+
+ <!-- ## Citation
+ ```
+ @misc{granite-models,
+   author = {author 1, author2, ...},
+   title = {},
+   journal = {},
+   volume = {},
+   year = {2024},
+   url = {https://arxiv.org/abs/0000.00000},
+ }
+ ``` -->
granite-3.2-8b-instruct.IQ1_M.gguf ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:d8fdbcab707b9ee8b33f929802b1aa428980b9e46b7efd1ad55f5fc5d3c5ea2f
+ size 1997309888
granite-3.2-8b-instruct.IQ1_S.gguf ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:0f98b5edc81881cbd330da29642ff98f52f0506adac19120242d65b9200ee3cc
+ size 1836337088
granite-3.2-8b-instruct.IQ2_M.gguf ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:0af320ef5273b0d1c4de3801dca881877a2b1de6c76461681dd6688d3b9034e5
+ size 2836826048
granite-3.2-8b-instruct.IQ2_S.gguf ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:af6b51d3a814f803dc770875398c7f880039b12c3046519361be7e5ee3b2143c
+ size 2622195648
granite-3.2-8b-instruct.IQ2_XS.gguf ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:bcb32f3a294018b057c474d903787ba2e6ad916a3da54fd7e2b5ff58abc5fec7
+ size 2501199808
granite-3.2-8b-instruct.IQ2_XXS.gguf ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:e855e56f331ee5a84dbba7c9107c93bf35ac65292e0040968eca73ecc8959c05
+ size 2265597888
granite-3.2-8b-instruct.IQ3_M.gguf ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:039d0281dd9df34f22b64fdbf68a69cd25b14f7c6ae0488ea00d8fe5a018d162
+ size 3738717728
granite-3.2-8b-instruct.IQ3_S.gguf ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:ae4fd819a9064b39bc61536158bc152abd941963343dbe1baff61bde7c59344a
+ size 3614772768
granite-3.2-8b-instruct.IQ3_XS.gguf ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:4be789904655dc63adae2b745a553da99b9f72a52d5fa173e37e4b75f8cd8b00
+ size 3427995168
granite-3.2-8b-instruct.IQ3_XXS.gguf ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:0caeb76e542dd120230b335e98dcbb86990f2f3d6022343c678598d121cc09f4
+ size 3201370048
granite-3.2-8b-instruct.IQ4_XS.gguf ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:72f29358a1f948f614d1d18d3f137f539807a1fad5d6fab713f85e2bc1cdc51e
+ size 4428074528
granite-3.2-8b-instruct.imatrix.dat ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:ce5570c34e4dba08869145dbd35fd05c0af4de305d6b044ddd783722721ed0ea
+ size 5989448