picocreator committed on
Commit cdb3859 · verified · 1 Parent(s): 6c88454

Update README.md

Files changed (1): README.md (+13 −11)
README.md CHANGED
@@ -7,16 +7,8 @@ library_name: transformers
 
 ![image/png](https://cdn-uploads.huggingface.co/production/uploads/633e85093a17ab61de8d9073/dM-i7n313mUnY-fbmElVM.png)
 
-> Psst! Try out the model on [![Featherless](https://img.shields.io/badge/featherless--ai%2FQwerky--72B-Dummy?style=flat&label=Featherless&color=facc15)](https://featherless.ai/models/featherless-ai/Qwerky-72B)
-> Our launch article can be found here! [![Substack](https://img.shields.io/badge/Substack-Dummy?style=flat&color=facc15)](https://substack.recursal.ai/p/qwerky-72b-and-32b-training-large)
-
-Linear models offer a promising approach to significantly reducing computational costs at scale, particularly for large context lengths, enabling a >1000x improvement in inference costs, o1-style inference-time thinking, and wider AI accessibility.
-
-As demonstrated with our Qwerky-72B-Preview and prior models such as QRWKV6-32B Instruct Preview, we have successfully converted Qwen 2.5 72B into an RWKV variant without pretraining the base model or retraining it from scratch, enabling us to test and validate the more efficient RWKV linear attention with a much smaller budget. Since the preview, we have continued to refine our technique and improved the model over the preview iteration.
-
-As with our previous models, the model's inherent knowledge and dataset training are inherited from its "parent" model. Consequently, unlike previous RWKV models trained on 100+ languages, the QRWKV model is limited to the approximately 30 languages supported by the Qwen line of models.
-
-You may find details of the conversion process in our previous release, [here](https://huggingface.co/recursal/QRWKV6-32B-Instruct-Preview-v0.1).
+- Try out the model on [![Featherless](https://img.shields.io/badge/featherless--ai%2FQwerky--72B-Dummy?style=flat&label=Featherless&color=facc15)](https://featherless.ai/models/featherless-ai/Qwerky-72B)
+- Model details can be found in our blog post here! [![Substack](https://img.shields.io/badge/Substack-Dummy?style=flat&color=facc15)](https://substack.recursal.ai/p/qwerky-72b-and-32b-training-large)
 
 Benchmarks are as follows for both Qwerky-QwQ-32B and Qwerky-72B models:
 
 
@@ -80,4 +72,14 @@ generated_ids = [
 ]
 
 response = tokenizer.batch_decode(generated_ids, skip_special_tokens=True)[0]
-```
+```
+
+## Model notes
+
+Linear models offer a promising approach to significantly reducing computational costs at scale, particularly for large context lengths, enabling a >1000x improvement in inference costs, o1-style inference-time thinking, and wider AI accessibility.
+
+As demonstrated with our Qwerky-72B-Preview and prior models such as QRWKV6-32B Instruct Preview, we have successfully converted Qwen 2.5 72B into an RWKV variant without pretraining the base model or retraining it from scratch, enabling us to test and validate the more efficient RWKV linear attention with a much smaller budget. Since the preview, we have continued to refine our technique and improved the model over the preview iteration.
+
+As with our previous models, the model's inherent knowledge and dataset training are inherited from its "parent" model. Consequently, unlike previous RWKV models trained on 100+ languages, the QRWKV model is limited to the approximately 30 languages supported by the Qwen line of models.
+
+You may find details of the conversion process in our previous release, [here](https://huggingface.co/recursal/QRWKV6-32B-Instruct-Preview-v0.1).
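Aside: the second hunk above shows only the tail of the README's `transformers` usage example. For context, here is a minimal sketch of the full generation snippet that tail typically belongs to; the model id (taken from the Featherless URL above), the prompt, and the `trust_remote_code` flag are assumptions for illustration, not part of this diff:

```python
# Minimal sketch (not the verbatim README snippet) of the standard
# transformers generation loop whose last lines appear in the hunk above.
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "featherless-ai/Qwerky-72B"  # assumed repo id, from the Featherless URL

# Load tokenizer and model; trust_remote_code may be needed for the
# RWKV-variant architecture code (an assumption, check the repo).
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    torch_dtype="auto",
    device_map="auto",
    trust_remote_code=True,
)

# Build a chat-formatted prompt and tokenize it.
messages = [
    {"role": "system", "content": "You are a helpful assistant."},
    {"role": "user", "content": "Give me a short introduction to large language models."},
]
text = tokenizer.apply_chat_template(messages, tokenize=False, add_generation_prompt=True)
model_inputs = tokenizer([text], return_tensors="pt").to(model.device)

# Generate, then strip the prompt tokens so only the completion is decoded --
# this is exactly what the hunk's closing lines do.
generated_ids = model.generate(**model_inputs, max_new_tokens=512)
generated_ids = [
    output_ids[len(input_ids):]
    for input_ids, output_ids in zip(model_inputs.input_ids, generated_ids)
]

response = tokenizer.batch_decode(generated_ids, skip_special_tokens=True)[0]
print(response)
```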