Remove library name and Transformers code snippet #2
opened by nielsr (HF Staff)

README.md CHANGED
````diff
@@ -1,7 +1,6 @@
 ---
 license: apache-2.0
 pipeline_tag: text-generation
-library_name: transformers
 tags:
 - text-generation
 - causal-lm
@@ -98,31 +97,6 @@ Performance evaluation is ongoing. The model shows promising results in:
 - Significantly improved needle-in-haystack task performance compared to pure RWKV architectures
 - Competitive performance on standard language modeling benchmarks
 
-## Usage with Hugging Face Transformers
-
-This model can be loaded and used with the `transformers` library. Ensure you have `transformers` installed: `pip install transformers`.
-When loading, remember to set `trust_remote_code=True` because of the custom architecture.
-
-```python
-from transformers import pipeline, AutoTokenizer
-import torch
-
-model_name = "OpenMOSE/HRWKV7-Reka-Flash3-Preview" # Replace with the actual model ID if different
-tokenizer = AutoTokenizer.from_pretrained(model_name, trust_remote_code=True)
-pipe = pipeline(
-    "text-generation",
-    model_name,
-    tokenizer=tokenizer,
-    torch_dtype=torch.bfloat16, # or torch.float16 depending on your GPU and model precision
-    device_map="auto",
-    trust_remote_code=True,
-)
-
-text = "The quick brown fox jumps over the lazy "
-result = pipe(text, max_new_tokens=20, do_sample=True, top_p=0.9, temperature=0.7)[0]["generated_text"]
-print(result)
-```
-
 ## Run with RWKV-Infer (as provided by original authors)
 - RWKV-Infer now support hxa079
 ```bash
````