---
thumbnail: https://cdn-uploads.huggingface.co/production/uploads/633e85093a17ab61de8d9073/OufWyNMKYRozfC8j8S-M8.png
license: apache-2.0
library_name: transformers
---

# GGUF of featherless-ai/Qwerky-QwQ-32B

Created using llama.cpp [b5013](https://github.com/ggml-org/llama.cpp/releases/tag/b5013) with the required fixes merged.
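
You can run the GGUF locally with any llama.cpp build at least as new as b5013 (or bindings that bundle one). Below is a minimal sketch using the `llama-cpp-python` bindings; the quantization filename is an assumption, so substitute the file you actually downloaded from this repo.

```py
# Minimal sketch, assuming a llama-cpp-python build that bundles
# llama.cpp >= b5013 (the release with the required fixes merged).
# The filename "Qwerky-QwQ-32B-Q4_K_M.gguf" is hypothetical -- use the
# quantization file you actually downloaded.
from llama_cpp import Llama

llm = Llama(
    model_path="Qwerky-QwQ-32B-Q4_K_M.gguf",
    n_ctx=4096,       # context window; raise it if you have the memory
    n_gpu_layers=-1,  # offload all layers to the GPU when available
)

out = llm.create_chat_completion(
    messages=[
        {"role": "system", "content": "You are a helpful assistant."},
        {"role": "user", "content": "Name the famous song by the singer surnamed Astley."},
    ],
    max_tokens=256,
)
print(out["choices"][0]["message"]["content"])
```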

![image/png](https://cdn-uploads.huggingface.co/production/uploads/633e85093a17ab61de8d9073/OufWyNMKYRozfC8j8S-M8.png)

- Try out the model on [![Featherless](https://img.shields.io/badge/featherless--ai%2FQwerky--QwQ--32B-Dummy?style=flat&label=Featherless&color=facc15)](https://featherless.ai/models/featherless-ai/Qwerky-QwQ-32B)
- Read the model details in our blog post: [![Substack](https://img.shields.io/badge/Substack-Dummy?style=flat&color=facc15)](https://substack.recursal.ai/p/qwerky-72b-and-32b-training-large)

Benchmarks are as follows for both the Qwerky-QwQ-32B and Qwerky-72B models:

| Tasks | Metric | Qwerky-QwQ-32B | Qwen/QwQ-32B | Qwerky-72B | Qwen2.5-72B-Instruct |
|:---:|:---:|:---:|:---:|:---:|:---:|
| arc_challenge | acc_norm | **0.5640** | 0.5563 | **0.6382** | 0.6323 |
| arc_easy | acc_norm | 0.7837 | **0.7866** | **0.8443** | 0.8329 |
| hellaswag | acc_norm | 0.8303 | **0.8407** | 0.8573 | **0.8736** |
| lambada_openai | acc | 0.6621 | **0.6683** | **0.7539** | 0.7506 |
| piqa | acc | **0.8036** | 0.7976 | 0.8248 | **0.8357** |
| sciq | acc | **0.9630** | **0.9630** | 0.9670 | **0.9740** |
| winogrande | acc | **0.7324** | 0.7048 | **0.7956** | 0.7632 |
| mmlu | acc | 0.7431 | **0.7985** | 0.7746 | **0.8338** |

> *Note: All benchmarks except MMLU are 0-shot and Version 1. For MMLU, it's Version 2.*
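
If you want to reproduce numbers in this style, here is a hedged sketch using EleutherAI's lm-evaluation-harness. The task names and the `simple_evaluate` call reflect current lm-eval releases; our exact harness version, prompts, and MMLU task variant may differ, so expect small deviations.

```py
# Hedged sketch: 0-shot evaluation with lm-evaluation-harness.
# The task list matches the table above (MMLU omitted, since it used a
# different task version); results may not match ours exactly.
from lm_eval import simple_evaluate

results = simple_evaluate(
    model="hf",
    model_args="pretrained=featherless-ai/Qwerky-QwQ-32B,trust_remote_code=True,dtype=auto",
    tasks=[
        "arc_challenge", "arc_easy", "hellaswag",
        "lambada_openai", "piqa", "sciq", "winogrande",
    ],
    num_fewshot=0,
)
for task, metrics in results["results"].items():
    print(task, metrics)
```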

## Running with `transformers`

Since this model is not yet natively supported by `transformers`, you will have to enable remote code with the following line.

```py
# ...

model = AutoModelForCausalLM.from_pretrained("featherless-ai/Qwerky-QwQ-32B", trust_remote_code=True)

# ...
```

Other than enabling remote code, you may run the model like any regular `transformers` model, like so:

```py
from transformers import AutoModelForCausalLM, AutoTokenizer

model_name = "featherless-ai/Qwerky-QwQ-32B"

model = AutoModelForCausalLM.from_pretrained(
    model_name,
    torch_dtype="auto",
    device_map="auto",
    trust_remote_code=True,
)
tokenizer = AutoTokenizer.from_pretrained(model_name)

prompt = """There is a very famous song that I recall by the singer's surname as Astley.
I can't remember the name or the youtube URL that people use to link as an example url.
What's the song name?"""
messages = [
    {"role": "system", "content": "You are a helpful assistant."},
    {"role": "user", "content": prompt},
]
text = tokenizer.apply_chat_template(
    messages, tokenize=False, add_generation_prompt=True
)
model_inputs = tokenizer([text], return_tensors="pt").to(model.device)

generated_ids = model.generate(**model_inputs, max_new_tokens=512)
# Drop the prompt tokens so only the newly generated text remains.
generated_ids = [
    output_ids[len(input_ids):]
    for input_ids, output_ids in zip(model_inputs.input_ids, generated_ids)
]

response = tokenizer.batch_decode(generated_ids, skip_special_tokens=True)[0]
print(response)
```

## Model notes

Linear models offer a promising approach to significantly reducing computational costs at scale, particularly for large context lengths. This enables a >1000x improvement in inference cost, unlocking o1-style inference-time thinking and wider AI accessibility.
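
As a rough back-of-the-envelope illustration of where such savings come from (asymptotic operation counts only, not a benchmark of this model):

```py
# Illustrative arithmetic only: softmax attention scales as O(n^2) in
# context length n, while an RWKV-style linear model scales as O(n).
# Constant factors are ignored, so the ratio is an upper bound.
for n_ctx in (4_096, 32_768, 131_072):
    ratio = (n_ctx ** 2) // n_ctx  # quadratic vs. linear token interactions
    print(f"n_ctx={n_ctx:>7,}  quadratic/linear ratio ~ {ratio:,}x")
```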

As demonstrated with our Qwerky-72B-Preview and prior models such as QRWKV6-32B Instruct Preview, we have successfully converted Qwen 2.5 QwQ 32B into an RWKV variant without pretraining on the base model or retraining it from scratch. This lets us test and validate the more efficient RWKV linear attention on a much smaller budget. Since our preview, we have continued to refine our technique and have improved the model over the preview iteration.

As with our previous models, the model's inherent knowledge and dataset training are inherited from its "parent" model. Consequently, unlike previous RWKV models trained on 100+ languages, the QRWKV model is limited to the approximately 30 languages supported by the Qwen line of models.

You may find details of the process in our previous release, [here](https://huggingface.co/recursal/QRWKV6-32B-Instruct-Preview-v0.1).