---
license: apache-2.0
library_name: transformers
inference: false
tags:
- mlx
base_model: AIDC-AI/Marco-o1
---
# fastx-ai/Marco-o1-int-4
The model [fastx-ai/Marco-o1-int-4](https://huggingface.co/fastx-ai/Marco-o1-int-4) was
converted to MLX format from [AIDC-AI/Marco-o1](https://huggingface.co/AIDC-AI/Marco-o1)
using mlx-lm version **0.20.1**.
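As a quick smoke test, the quantized model was prompted with a simple arithmetic question: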
```python
prompt = "hello, can you teach me why 2 + 4 = 6 ?"
```
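The verbose output below begins with the model's built-in system prompt (in Chinese; roughly: "You are a well-trained AI assistant; your name is Marco-o1. Important: when answering, do your thinking inside `<Thought>` and put your result inside `<Output>`. `<Thought>` should be in English where possible, with two exceptions: quotations from the original text, and math, which should use Markdown. The output in `<Output>` must follow the language of the user's input."), followed by the generated reasoning: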
```shell
==========
Prompt: <|im_start|>system
你是一个经过良好训练的AI助手,你的名字是Marco-o1.
## 重要!!!!!
当你回答问题时,你的思考应该在<Thought>内完成,<Output>内输出你的结果。
<Thought>应该尽可能是英文,但是有2个特例,一个是对原文中的引用,另一个是是数学应该使用markdown格式,<Output>内的输出需要遵循用户输入的语言。
<|im_end|>
<|im_start|>user
hello, can you teach me why 2 + 4 = 6 ?<|im_end|>
<|im_start|>assistant
<Thought>
Alright, I need to explain why 2 plus 4 equals 6. Let's start by recalling the basic principles of addition. Addition is the process of combining two or more numbers to find their total. So, when we add 2 and 4, we're essentially combining two quantities.
First, let's visualize this. Imagine you have 2 apples and someone gives you 4 more apples. Now, how many apples do you have in total? Counting them out
==========
Prompt: 118 tokens, 698.640 tokens-per-sec
Generation: 100 tokens, 103.937 tokens-per-sec
Peak memory: 4.386 GB
```
## Use with mlx
```bash
pip install mlx-lm
```
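mlx-lm also installs a small CLI, which is the quickest way to reproduce the sample above (flag names as of mlx-lm 0.20.x; run `python -m mlx_lm.generate --help` to check your version):

```bash
# Flag names assume mlx-lm 0.20.x.
python -m mlx_lm.generate --model fastx-ai/Marco-o1-int-4 \
  --prompt "hello, can you teach me why 2 + 4 = 6 ?" \
  --max-tokens 512
```

Or from Python: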
```python
from mlx_lm import load, generate

# Load the int4-quantized model and its tokenizer from the Hub.
model, tokenizer = load("fastx-ai/Marco-o1-int-4")

prompt = "hello"

# Wrap the prompt in the model's chat template when one is available.
if hasattr(tokenizer, "apply_chat_template") and tokenizer.chat_template is not None:
    messages = [{"role": "user", "content": prompt}]
    prompt = tokenizer.apply_chat_template(
        messages, tokenize=False, add_generation_prompt=True
    )

response = generate(model, tokenizer, prompt=prompt, verbose=True)
```
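Note that `generate` stops after 100 tokens by default, which is why the sample output above cuts off mid-thought; pass a larger limit, e.g. `generate(model, tokenizer, prompt=prompt, max_tokens=1024, verbose=True)`, to get a complete `<Thought>`/`<Output>` pair (parameter name per mlx-lm 0.20.x).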
## Changing the system prompt
1. Clone this repo locally.
2. Edit the `chat_template` entry in `tokenizer_config.json` (the Chinese block is the built-in system prompt; replace it with your own text):
```json
"chat_template": "{% for message in messages %}{% if loop.first and messages[0]['role'] != 'system' %}{{ '<|im_start|>system\n\n你是一个经过良好训练的AI助手,你的名字是Marco-o1.\n \n## 重要!!!!!\n当你回答问题时,你的思考应该在<Thought>内完成,<Output>内输出你的结果。\n<Thought>应该尽可能是英文,但是有2个特例,一个是对原文中的引用,另一个是是数学应该使用markdown格式,<Output>内的输出需要遵循用户输入的语言。\n <|im_end|>\n' }}{% endif %}{{'<|im_start|>' + message['role'] + '\n' + message['content'] + '<|im_end|>' + '\n'}}{% endfor %}{% if add_generation_prompt %}{{ '<|im_start|>assistant\n' }}{% endif %}",
```
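The `loop.first and messages[0]['role'] != 'system'` guard means this built-in system prompt is injected only when your `messages` list does not start with a system message of its own.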
3. Load the model from the local folder:
```python
from mlx_lm import load, generate

# Point load() at the local folder that contains the edited repo files.
model, tokenizer = load("./mlx_model")

prompt = "hello, can you teach me why 2 + 4 = 6 ?"

if hasattr(tokenizer, "apply_chat_template") and tokenizer.chat_template is not None:
    messages = [{"role": "user", "content": prompt}]
    prompt = tokenizer.apply_chat_template(
        messages, tokenize=False, add_generation_prompt=True
    )

response = generate(model, tokenizer, prompt=prompt, verbose=True)
```
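Given the template guard noted above, you can also override the system prompt per call, without editing any files, by supplying your own system message. A minimal sketch (the system text here is an illustrative placeholder):

```python
from mlx_lm import load, generate

model, tokenizer = load("fastx-ai/Marco-o1-int-4")

# An explicit system message suppresses the built-in one (see the template guard above).
messages = [
    {"role": "system", "content": "You are Marco-o1. Reason inside <Thought>, answer inside <Output>."},
    {"role": "user", "content": "hello, can you teach me why 2 + 4 = 6 ?"},
]
prompt = tokenizer.apply_chat_template(
    messages, tokenize=False, add_generation_prompt=True
)
response = generate(model, tokenizer, prompt=prompt, verbose=True)
```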