---
license: apache-2.0
tags:
- merge
- mergekit
- lazymergekit
- rhysjones/phi-2-orange
- cognitivecomputations/dolphin-2_6-phi-2
- mrm8488/phi-2-coder
---
🌟 Buying me a coffee is a direct way to show support for this project.
<a href="https://www.buymeacoffee.com/isotonic"><img src="https://www.buymeacoffee.com/assets/img/guidelines/download-assets-sm-1.svg" alt="Buy Me A Coffee"></a>
# Phizzle
Phizzle is a merge of the following models using [LazyMergekit](https://colab.research.google.com/drive/1obulZ1ROXHjYLn6PPZJwRR6GzgQogxxb?usp=sharing):
* [rhysjones/phi-2-orange](https://huggingface.co/rhysjones/phi-2-orange)
* [cognitivecomputations/dolphin-2_6-phi-2](https://huggingface.co/cognitivecomputations/dolphin-2_6-phi-2)
* [mrm8488/phi-2-coder](https://huggingface.co/mrm8488/phi-2-coder)
## 🧩 Configuration
```yaml
models:
  - model: rhysjones/phi-2-orange
    parameters:
      density: 0.5
      weight: 0.3
  - model: cognitivecomputations/dolphin-2_6-phi-2
    parameters:
      density: 0.5
      weight: 0.3
  - model: mrm8488/phi-2-coder
    parameters:
      density: 0.5
      weight: 0.3
merge_method: ties
base_model: rhysjones/phi-2-orange
parameters:
  normalize: true
dtype: float16
```
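To reproduce the merge locally, here is a minimal sketch assuming mergekit is installed and the YAML above is saved as `config.yaml` (the `./phizzle` output directory name is a placeholder):

```python
# Minimal sketch: run the merge locally via mergekit's CLI entry point.
# Assumes `pip install mergekit` and the config above saved as config.yaml;
# "./phizzle" is a hypothetical output directory.
import subprocess

subprocess.run(["mergekit-yaml", "config.yaml", "./phizzle"], check=True)
```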
## πŸ’» Usage
```python
# Install dependencies first: pip install -qU transformers accelerate einops
from transformers import AutoTokenizer
import transformers
import torch

model = "Isotonic/phizzle"
messages = [{"role": "user", "content": "What is a large language model?"}]

# Build the prompt using the model's chat template
tokenizer = AutoTokenizer.from_pretrained(model)
prompt = tokenizer.apply_chat_template(messages, tokenize=False, add_generation_prompt=True)

# Load the model in float16 and spread it across available devices
pipeline = transformers.pipeline(
    "text-generation",
    model=model,
    torch_dtype=torch.float16,
    device_map="auto",
    trust_remote_code=True,
)

outputs = pipeline(prompt, max_new_tokens=256, do_sample=True, temperature=0.7, top_k=50, top_p=0.95)
print(outputs[0]["generated_text"])
```
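If you prefer not to use the pipeline helper, the sketch below loads the weights directly with `AutoModelForCausalLM`; it uses the same chat template and mirrors the generation settings above.

```python
# Minimal sketch: direct loading without the pipeline helper.
from transformers import AutoModelForCausalLM, AutoTokenizer
import torch

model_id = "Isotonic/phizzle"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    torch_dtype=torch.float16,
    device_map="auto",
    trust_remote_code=True,
)

messages = [{"role": "user", "content": "What is a large language model?"}]
input_ids = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(model.device)

# Sample a completion with the same settings as the pipeline example
output_ids = model.generate(input_ids, max_new_tokens=256, do_sample=True, temperature=0.7, top_k=50, top_p=0.95)
print(tokenizer.decode(output_ids[0], skip_special_tokens=True))
```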
## Evaluations
Evaluations were run with mlabonne's useful [llm-autoeval](https://github.com/mlabonne/llm-autoeval) Colab notebook. Also check out the alternative leaderboard at [Yet_Another_LLM_Leaderboard](https://huggingface.co/spaces/mlabonne/Yet_Another_LLM_Leaderboard).

[phizzle](https://huggingface.co/Isotonic/phizzle) - yet to be benchmarked. For reference, scores for related Phi-2 models:
| Model |AGIEval|GPT4All|TruthfulQA|Bigbench|Average|
|----------------------------------------------------------------|------:|------:|---------:|-------:|------:|
|[phi-2-orange](https://huggingface.co/rhysjones/phi-2-orange)| **33.37**| 71.33| 49.87| **37.3**| **47.97**|
|[phi-2-dpo](https://huggingface.co/lxuechen/phi-2-dpo)| 30.39| **71.68**| **50.75**| 34.9| 46.93|
|[dolphin-2_6-phi-2](https://huggingface.co/cognitivecomputations/dolphin-2_6-phi-2)| 33.12| 69.85| 47.39| 37.2| 46.89|
|[phi-2](https://huggingface.co/microsoft/phi-2)| 27.98| 70.8| 44.43| 35.21| 44.61|