---
pipeline_tag: text-generation
inference: false
license: apache-2.0
library_name: transformers
tags:
- language
- granite-3.2
- llama-cpp
- gguf-my-repo
base_model: ibm-granite/granite-3.2-2b-instruct
---

|
# Triangle104/granite-3.2-2b-instruct-Q8_0-GGUF

This model was converted to GGUF format from [`ibm-granite/granite-3.2-2b-instruct`](https://huggingface.co/ibm-granite/granite-3.2-2b-instruct) using llama.cpp via ggml.ai's [GGUF-my-repo](https://huggingface.co/spaces/ggml-org/gguf-my-repo) space.

Refer to the [original model card](https://huggingface.co/ibm-granite/granite-3.2-2b-instruct) for more details on the model.

---
|
## Model Summary:

Granite-3.2-2B-Instruct is a 2-billion-parameter, long-context AI model fine-tuned for thinking capabilities. Built on top of Granite-3.1-2B-Instruct, it has been trained using a mix of permissively licensed open-source datasets and internally generated synthetic data designed for reasoning tasks. The model allows controllability of its thinking capability, ensuring it is applied only when required.
|
- **Developers:** Granite Team, IBM
- **Website:** Granite Docs
- **Release Date:** February 26th, 2025
- **License:** Apache 2.0
|
## Supported Languages:

English, German, Spanish, French, Japanese, Portuguese, Arabic, Czech, Italian, Korean, Dutch, and Chinese. However, users may fine-tune this Granite model for languages beyond these 12.
|
## Intended Use:

This model is designed to handle general instruction-following tasks and can be integrated into AI assistants across various domains, including business applications.
|
## Capabilities

- Thinking
- Summarization
- Text classification
- Text extraction
- Question-answering
- Retrieval-Augmented Generation (RAG)
- Code-related tasks
- Function-calling tasks
- Multilingual dialog use cases
- Long-context tasks, including long document/meeting summarization, long document QA, etc.
|
## Generation:

This is a simple example of how to use the Granite-3.2-2B-Instruct model.

Install the following libraries:

```bash
pip install torch torchvision torchaudio
pip install accelerate
pip install transformers
```

Then, copy the snippet below.
|
```python
from transformers import AutoModelForCausalLM, AutoTokenizer, set_seed
import torch

model_path = "ibm-granite/granite-3.2-2b-instruct"
device = "cuda"
model = AutoModelForCausalLM.from_pretrained(
    model_path,
    device_map=device,
    torch_dtype=torch.bfloat16,
)
tokenizer = AutoTokenizer.from_pretrained(model_path)

conv = [{"role": "user", "content": "You have 10 liters of a 30% acid solution. How many liters of a 70% acid solution must be added to achieve a 50% acid mixture?"}]

input_ids = tokenizer.apply_chat_template(conv, return_tensors="pt", thinking=True, return_dict=True, add_generation_prompt=True).to(device)

set_seed(42)
output = model.generate(
    **input_ids,
    max_new_tokens=8192,
)

prediction = tokenizer.decode(output[0, input_ids["input_ids"].shape[1]:], skip_special_tokens=True)
print(prediction)
```
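To reproduce the `thinking=False` style of output shown below, the only change should be the `thinking` flag passed to the chat template (a minimal variation of the snippet above):

```python
# Same pipeline as above, but with extended thinking disabled;
# this corresponds to the `thinking=False` example output below.
input_ids = tokenizer.apply_chat_template(
    conv, return_tensors="pt", thinking=False, return_dict=True, add_generation_prompt=True
).to(device)
```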
|
|
## Example Outputs

**thinking=True**

Here is my thought process:

1. First, let's understand the composition of the initial 10 liters of 30% acid solution:
   - Acid content = 30% of 10 liters = 0.30 * 10 = 3 liters

2. We want to end up with a 50% acid solution. Let's denote the volume of the 70% acid solution to be added as 'x' liters.

3. The amount of acid in the final mixture (50% solution) should equal the initial acid content plus the acid from the added 70% solution.
   - Total acid in final mixture = 0.50 * (10 + x)

4. The equation representing the total acid content is:
   3 (from initial solution) + 0.70 * x (from added solution) = 0.50 * (10 + x)

5. Now, we solve for 'x':
   3 + 0.70x = 5 + 0.50x
   0.70x - 0.50x = 5 - 3
   0.20x = 2
   x = 2 / 0.20
   x = 10 liters

6. Therefore, 10 liters of a 70% acid solution must be added to the initial 10 liters of 30% acid solution to achieve a 50% acid mixture.

Here is my response:

To solve this problem, we need to determine how much of a 70% acid solution must be added to a 10-liter solution that is already 30% acid to create a new mixture that is 50% acid. Here's how to calculate it:

1. **Initial Acid Content**:
   - The initial 10-liter solution contains 3 liters of acid (since 30% of 10 liters).

2. **Desired Final Acid Content**:
   - We want the final mixture to be 50% acid.

3. **Equation Setup**:
   - Let 'x' be the volume (in liters) of the 70% acid solution to be added.
   - The equation representing the total acid content in the final mixture (which is 10 liters from the initial solution plus 'x' liters of the added solution) should equal 50% of the total volume (10 + x).
   - This gives us: 3 (acid from initial solution) + 0.70x (acid from added solution) = 0.50(10 + x)

4. **Solving the Equation**:
   - First, simplify the right side of the equation: 0.50(10 + x) = 5 + 0.50x
   - Now, the equation is: 3 + 0.70x = 5 + 0.50x
   - Subtract 0.50x from both sides: 0.20x = 2
   - Divide both sides by 0.20: x = 2 / 0.20
   - x = 10 liters

5. **Conclusion**:
   - To achieve a 50% acid mixture from the initial 30% solution, you need to add 10 liters of a 70% acid solution.

So, the final answer is that 10 liters of a 70% acid solution must be added to the initial 10 liters of 30% acid solution.
|
**thinking=False**

To solve this problem, we'll use the concept of volume and concentration. Let's denote the amount of 70% acid solution we need to add as "x" liters.

First, let's find out how much acid is in the initial 10-liter solution:

Acid in initial solution = 30% of 10 liters = 0.30 * 10 = 3 liters

Now, we want to end up with a 50% acid solution in a total volume of (10 + x) liters. Let's denote the final volume as V.

Final acid concentration = 50%
Final acid amount = 50% of V = 0.50 * V

We know the initial acid amount and the final acid amount, so we can set up an equation:

Initial acid amount + Acid from added solution = Final acid amount
3 liters + (70% of x) = 0.50 * (10 + x)

Now, let's solve for x:

0.70x + 3 = 0.50 * 10 + 0.50x
0.70x - 0.50x = 0.50 * 10 - 3
0.20x = 5 - 3
0.20x = 2
x = 2 / 0.20
x = 10 liters

So, you need to add 10 liters of a 70% acid solution to the initial 10-liter 30% acid solution to achieve a 50% acid mixture.
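As a quick sanity check of the arithmetic in both outputs above (our addition, not part of the model's output; exact fractions avoid floating-point noise):

```python
from fractions import Fraction

# Mixing 10 L of 30% acid with 10 L of 70% acid:
acid = Fraction(30, 100) * 10 + Fraction(70, 100) * 10  # 3 + 7 = 10 L of acid
total = 10 + 10                                         # 20 L of solution
assert acid / total == Fraction(1, 2)                   # a 50% mixture, as both outputs conclude
```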
|
|
---

## Use with llama.cpp

Install llama.cpp through brew (works on Mac and Linux):

```bash
brew install llama.cpp
```
|
Invoke the llama.cpp server or the CLI.

### CLI:

```bash
llama-cli --hf-repo Triangle104/granite-3.2-2b-instruct-Q8_0-GGUF --hf-file granite-3.2-2b-instruct-q8_0.gguf -p "The meaning to life and the universe is"
```
|
### Server:

```bash
llama-server --hf-repo Triangle104/granite-3.2-2b-instruct-Q8_0-GGUF --hf-file granite-3.2-2b-instruct-q8_0.gguf -c 2048
```
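Once the server is up, you can send it requests over HTTP. A minimal sketch in Python, assuming the server's default address (127.0.0.1:8080) and its OpenAI-compatible chat endpoint:

```python
import json
import urllib.request

# Query the running llama-server via its OpenAI-compatible chat endpoint.
url = "http://127.0.0.1:8080/v1/chat/completions"
payload = {
    "messages": [{"role": "user", "content": "Briefly explain what the GGUF format is."}],
    "max_tokens": 256,
}
req = urllib.request.Request(
    url,
    data=json.dumps(payload).encode("utf-8"),
    headers={"Content-Type": "application/json"},
)
with urllib.request.urlopen(req) as resp:
    body = json.load(resp)
print(body["choices"][0]["message"]["content"])
```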
|
|
Note: You can also use this checkpoint directly through the [usage steps](https://github.com/ggerganov/llama.cpp?tab=readme-ov-file#usage) listed in the llama.cpp repo.

Step 1: Clone llama.cpp from GitHub.

```bash
git clone https://github.com/ggerganov/llama.cpp
```
|
Step 2: Move into the llama.cpp folder and build it with the `LLAMA_CURL=1` flag, along with any other hardware-specific flags (for example, `LLAMA_CUDA=1` for Nvidia GPUs on Linux).

```bash
cd llama.cpp && LLAMA_CURL=1 make
```
|
Step 3: Run inference through the main binary.

```bash
./llama-cli --hf-repo Triangle104/granite-3.2-2b-instruct-Q8_0-GGUF --hf-file granite-3.2-2b-instruct-q8_0.gguf -p "The meaning to life and the universe is"
```

or

```bash
./llama-server --hf-repo Triangle104/granite-3.2-2b-instruct-Q8_0-GGUF --hf-file granite-3.2-2b-instruct-q8_0.gguf -c 2048
```