bobig committed on
Commit d754fd9 · verified · 1 Parent(s): a5c8b9a

Upload README.md with huggingface_hub

Files changed (1)
  1. README.md +49 -0
README.md ADDED
---
library_name: transformers
tags:
- Uncensored
- Abliterated
- Cubed Reasoning
- QwQ-32B
- reasoning
- thinking
- r1
- cot
- deepseek
- Qwen2.5
- Hermes
- DeepHermes
- DeepSeek
- DeepSeek-R1-Distill
- 128k context
- merge
- mlx
- mlx-my-repo
base_model: DavidAU/Qwen2.5-QwQ-35B-Eureka-Cubed-abliterated-uncensored
---

# bobig/Qwen2.5-QwQ-35B-Eureka-Cubed-abliterated-uncensored-4bit

The model [bobig/Qwen2.5-QwQ-35B-Eureka-Cubed-abliterated-uncensored-4bit](https://huggingface.co/bobig/Qwen2.5-QwQ-35B-Eureka-Cubed-abliterated-uncensored-4bit) was converted to MLX format from [DavidAU/Qwen2.5-QwQ-35B-Eureka-Cubed-abliterated-uncensored](https://huggingface.co/DavidAU/Qwen2.5-QwQ-35B-Eureka-Cubed-abliterated-uncensored) using mlx-lm version **0.21.5**.
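
A conversion like this is typically produced with mlx-lm's `mlx_lm.convert` command. The invocation below is a sketch, assuming 4-bit quantization via `-q` and a Hub push via `--upload-repo`; the exact flags used for this repo are not recorded here.

```bash
# Sketch of the conversion step (flags are assumptions, not the recorded command):
# -q quantizes the weights (4-bit by default); --upload-repo pushes the result to the Hub.
python -m mlx_lm.convert \
    --hf-path DavidAU/Qwen2.5-QwQ-35B-Eureka-Cubed-abliterated-uncensored \
    -q \
    --upload-repo bobig/Qwen2.5-QwQ-35B-Eureka-Cubed-abliterated-uncensored-4bit
```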

## Use with mlx

```bash
pip install mlx-lm
```

```python
from mlx_lm import load, generate

# Download the 4-bit MLX weights and matching tokenizer from the Hub
model, tokenizer = load("bobig/Qwen2.5-QwQ-35B-Eureka-Cubed-abliterated-uncensored-4bit")

prompt = "hello"

# Wrap the prompt in the model's chat template when one is defined
if hasattr(tokenizer, "apply_chat_template") and tokenizer.chat_template is not None:
    messages = [{"role": "user", "content": prompt}]
    prompt = tokenizer.apply_chat_template(
        messages, tokenize=False, add_generation_prompt=True
    )

response = generate(model, tokenizer, prompt=prompt, verbose=True)
```
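
The model can also be run from the command line; this one-liner is a sketch assuming the `mlx_lm.generate` CLI that ships with mlx-lm (the prompt text is illustrative):

```bash
# Stream a completion without writing any Python
python -m mlx_lm.generate \
    --model bobig/Qwen2.5-QwQ-35B-Eureka-Cubed-abliterated-uncensored-4bit \
    --prompt "hello"
```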