---
license: apache-2.0
base_model:
- openlm-research/open_llama_7b
- stabilityai/StableBeluga-7B
tags:
- merge
- mergekit
- lazymergekit
- slerp
- openlm-research/open_llama_7b
- stabilityai/StableBeluga-7B
---

# OpenLlama-Stable-7B

OpenLlama-Stable-7B is a merge of the following models using [LazyMergekit](https://colab.research.google.com/drive/1obulZ1ROXHjYLn6PPZJwRR6GzgQogxxb?usp=sharing):

* [openlm-research/open_llama_7b](https://huggingface.co/openlm-research/open_llama_7b)
* [stabilityai/StableBeluga-7B](https://huggingface.co/stabilityai/StableBeluga-7B)

I'm David Soeiro-Vuong, a third-year Computer Science student working as an apprentice at TW3 Partners, a company specialized in Generative AI. Passionate about artificial intelligence and language model optimization, I focus on creating efficient model merges that balance performance and capabilities.

🔗 [Connect with me on LinkedIn](https://www.linkedin.com/in/david-soeiro-vuong-a28b582ba/)

## Merge Details

### Merge Method

This model was merged using spherical linear interpolation (SLERP), which blends the two base models' weights along the arc between them rather than averaging them linearly, with the following settings:

- **Attention Layers**: 0.7 interpolation value, favoring StableBeluga's strong instruction-following capabilities
- **MLP Layers**: 0.5 interpolation value, creating an equal blend for balanced reasoning
- **Other Parameters**: 0.6 interpolation value, slightly favoring StableBeluga's refinements
- **Format**: bfloat16 precision for efficient memory usage
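
For intuition, here is a minimal sketch of what SLERP does to a single pair of weight tensors (an illustration of the idea, not mergekit's actual implementation; the stand-in tensors and shapes are invented for the example):

```python
import torch

def slerp(t: float, a: torch.Tensor, b: torch.Tensor, eps: float = 1e-8) -> torch.Tensor:
    """Spherical linear interpolation: t=0 returns a, t=1 returns b."""
    a_flat, b_flat = a.flatten().float(), b.flatten().float()
    # Angle between the two weight vectors, computed on normalized copies
    cos_omega = torch.clamp(
        (a_flat / (a_flat.norm() + eps)) @ (b_flat / (b_flat.norm() + eps)), -1.0, 1.0
    )
    omega = torch.arccos(cos_omega)
    if omega.item() < eps:  # near-parallel vectors: fall back to linear interpolation
        out = (1 - t) * a_flat + t * b_flat
    else:
        so = torch.sin(omega)
        out = (torch.sin((1 - t) * omega) / so) * a_flat + (torch.sin(t * omega) / so) * b_flat
    return out.reshape(a.shape).to(a.dtype)

# Stand-in tensors; in this merge, attention weights use t=0.7,
# MLP weights t=0.5, and everything else t=0.6.
open_llama_w = torch.randn(32, 32)
stable_beluga_w = torch.randn(32, 32)
merged_attn = slerp(0.7, open_llama_w, stable_beluga_w)
```

Unlike a plain weighted average, this preserves the geometric relationship between the two weight vectors, which is why SLERP is a common choice for two-model merges.
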

### Models Merged

* [openlm-research/open_llama_7b](https://huggingface.co/openlm-research/open_llama_7b) - an open-source reproduction of Meta's LLaMA offering strong base capabilities
* [stabilityai/StableBeluga-7B](https://huggingface.co/stabilityai/StableBeluga-7B) - Stability AI's instruction-tuned variant offering improved instruction following and coherence

### 🧩 Configuration

```yaml
slices:
  # ...
parameters:
  # ...
dtype: bfloat16
```
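
The slice and parameter definitions are truncated above. For reference, a LazyMergekit SLERP configuration matching the interpolation values described under Merge Method would look roughly like this (an illustrative reconstruction, not the exact file; the 32-layer range assumes the standard 7B LLaMA architecture):

```yaml
# Sketch of a mergekit SLERP config consistent with this card's settings
slices:
  - sources:
      - model: openlm-research/open_llama_7b
        layer_range: [0, 32]
      - model: stabilityai/StableBeluga-7B
        layer_range: [0, 32]
merge_method: slerp
base_model: openlm-research/open_llama_7b
parameters:
  t:
    - filter: self_attn
      value: 0.7   # attention layers favor StableBeluga
    - filter: mlp
      value: 0.5   # equal blend for MLPs
    - value: 0.6   # everything else
dtype: bfloat16
```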

## Capabilities

This merge combines:

- Open Llama's strong foundational knowledge and reasoning
- StableBeluga's improved instruction following and coherence
- A fully open architecture with no usage restrictions

The resulting model is suited to tasks that require both strong reasoning and good instruction following, such as:

- Detailed explanations of complex concepts
- Creative writing with coherent structure
- Problem-solving with step-by-step reasoning
- Balanced factual responses with nuanced perspectives

## 💻 Usage

```python
!pip install -qU transformers accelerate

import torch
import transformers
from transformers import AutoTokenizer

model = "Davidsv/OpenLlama-Stable-7B"
messages = [{"role": "user", "content": "What is a large language model?"}]

tokenizer = AutoTokenizer.from_pretrained(model)
prompt = tokenizer.apply_chat_template(messages, tokenize=False, add_generation_prompt=True)
pipeline = transformers.pipeline(
    "text-generation",
    model=model,
    torch_dtype=torch.float16,
    device_map="auto",
)

outputs = pipeline(prompt, max_new_tokens=256, do_sample=True, temperature=0.7, top_k=50, top_p=0.95)
print(outputs[0]["generated_text"])
```
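
Neither base model documents a chat template on its tokenizer, so `apply_chat_template` can raise an error on recent `transformers` releases (this is an assumption about the merged tokenizer, not something the card specifies). A hand-rolled fallback using StableBeluga's documented prompt format:

```python
# Fallback if the tokenizer ships no chat template (assumed, not confirmed
# by the card): build StableBeluga's "### User:/### Assistant:" prompt by hand.
try:
    prompt = tokenizer.apply_chat_template(messages, tokenize=False, add_generation_prompt=True)
except ValueError:
    prompt = f"### User:\n{messages[0]['content']}\n\n### Assistant:\n"
```
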
Alternatively, load the model directly with `AutoModelForCausalLM`:

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "Davidsv/OpenLlama-Stable-7B"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    torch_dtype=torch.float16,
    device_map="auto",
)

inputs = tokenizer("What is a large language model?", return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=256, do_sample=True, temperature=0.7, top_p=0.95)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```

## Limitations

- Inherits limitations from both base models
- May exhibit inconsistent behavior on certain complex reasoning tasks
- No additional alignment or fine-tuning beyond the base models' training
- Created through parameter merging without additional training data

## License

This model is released under the Apache 2.0 license, consistent with the underlying models' licenses.