Update README.md
README.md
New base model: this one actually expects you to use the Llama 3 Instruct format. There's lots to talk about here, and this model is TOTALLY different from the previous version for a variety of reasons.

This is evolution 1. Yes, I know that makes no sense; I explain it in my rant down below. I'm going to list the recipe for the model now, but know that the reality is more complex than just this:

Stock for the "True Merge":
- PKU-Baichuan-MLSystemLab/Llama3-PBM-Nova-70B
- yentinglin/Llama-3-Taiwan-70B-Instruct
- Sao10K/L3.3-70B-Euryale-v2.3
- (Custom Base Model-Stock Soup, detailed below)
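For concreteness, here's a rough sketch of what this stage could look like as a mergekit-style config. mergekit reads YAML, so the snippet just builds the config as a Python dict and dumps it. The merge method is assumed to be TIES based on the discussion further down; the base model choice, weights, densities, and the local path to the soup are placeholders rather than the settings actually used, and only the ingredient list is taken from above.

```python
# Illustrative sketch only: a mergekit-style TIES config for the "True Merge"
# stage. base_model, weights, densities, and the soup path are assumptions,
# not the settings actually used for this model; only the model list is real.
import yaml  # pip install pyyaml

true_merge = {
    "merge_method": "ties",
    "base_model": "meta-llama/Meta-Llama-3-70B",  # assumption: actual base not stated
    "dtype": "bfloat16",
    "models": [
        {"model": name, "parameters": {"weight": 0.25, "density": 0.5}}
        for name in [
            "PKU-Baichuan-MLSystemLab/Llama3-PBM-Nova-70B",
            "yentinglin/Llama-3-Taiwan-70B-Instruct",
            "Sao10K/L3.3-70B-Euryale-v2.3",
            "./custom-base-model-stock-soup",  # the local soup merge detailed below
        ]
    ],
}

with open("true-merge.yaml", "w") as f:
    yaml.safe_dump(true_merge, f, sort_keys=False)
```

A file like this would then go to mergekit's `mergekit-yaml` entry point to run the actual merge.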
# Why a different approach?
As some users had noted (particular thanks to |GodZiol| and The-Istar), the previous Mirai's instruct format was very unclear. In fact, when testing the Llama-3 instruct format it seemed just broken, and it was. Why? Well, the issue was with merging multiple models with different stopping tokens. I'll leave a technical explanation below for my assumption about why this happened. Long story short, I changed strategies for this model. It's very different, and it expects the Llama-3 format to be used.

The custom base model-stock soup:
- rinna/llama-3-youko-70B
- hitachi-nlp/Llama-3.1-70B-FLDx2
- (MergedHistLlama -- a custom model-stock merge, itself of multiple models; see [Version 0.2 for the details](https://huggingface.co/Blackroot/Mirai-70B-0.2))
- tokyotech-llm/Llama-3.1-Swallow-70B-v0.1

Base model:
- meta-llama/Meta-Llama-3-70B

All of these were folded into the base model, and critically, they all agreed on the EOS token.
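A minimal sketch of how one might verify that agreement (assuming the tokenizer configs are reachable on the Hub; some of these repos are gated and need an access token):

```python
# Sketch: compare the EOS token reported by each ingredient's tokenizer before
# merging. Model names are copied from the lists above; only tokenizer files
# are downloaded, not the 70B weights.
from transformers import AutoTokenizer

ingredients = [
    "rinna/llama-3-youko-70B",
    "hitachi-nlp/Llama-3.1-70B-FLDx2",
    "tokyotech-llm/Llama-3.1-Swallow-70B-v0.1",
    "meta-llama/Meta-Llama-3-70B",
]

seen = set()
for name in ingredients:
    tok = AutoTokenizer.from_pretrained(name)
    print(f"{name}: eos_token={tok.eos_token!r} (id={tok.eos_token_id})")
    seen.add((tok.eos_token, tok.eos_token_id))

# More than one distinct pair here is the failure mode described above: the
# merged model ends up unsure which token means "stop".
assert len(seen) == 1, f"EOS disagreement across ingredients: {seen}"
```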
# Model Stock is Really bad at this EOS thing
I love the interest and diversity of model stock, but after trying many different merges I realized I had to try something else. Specifically, TIES. Model stock and TIES are almost polar opposites. TIES acts as an amplifier: when models agree, their task vectors align and TIES strengthens those underlying weights, so the things that are good about the models get amplified, and the things that are bad get amplified too. Model stock smooths things out, averaging the weights across models. If smoothing is the issue, let's try amplifying.

I had avoided TIES merges because I'm specifically trying to avoid some of the bad mannerisms of the base ablation nemo model. However, I tried it anyway. Wouldn't you know it, TIES preserved the EOS, and the model can actually shut up most of the time. Not only that, but the result is good. Quite good. The instruct is simply better than prior Mirai's, and I don't think it's by a small margin either. There are some quirks, but I'm still able to run inference without any penalties and with the same sampling settings I've been using. This was really surprising to me; I had not anticipated good results with TIES merging, but I'll eat my shoes now, it's good. The model is by no means perfect: there are some edge cases that end up in strange outputs, and the model will occasionally insert commonly recurring phrases at the end of responses. However, overall, I like the result.
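To make the contrast concrete, here's a toy NumPy sketch of the two behaviors on flat weight vectors. This is a simplification for intuition, not mergekit's implementation and not the exact Model Stock or TIES algorithms from their papers: the averaging merge smooths every delta together, while the TIES-style merge trims small deltas, elects a per-parameter sign, and averages only the deltas that agree with that sign.

```python
# Toy illustration on flat weight vectors; a simplification for intuition, not
# the actual Model Stock or TIES implementations.
import numpy as np

def average_merge(base, finetunes):
    """Smoothing-style merge: average all task vectors (deltas) equally."""
    deltas = [ft - base for ft in finetunes]
    return base + np.mean(deltas, axis=0)

def ties_style_merge(base, finetunes, density=0.5):
    """TIES-style merge: trim small deltas, elect a sign, keep agreeing deltas."""
    deltas = np.stack([ft - base for ft in finetunes])
    k = max(1, int(deltas.shape[1] * density))
    for d in deltas:  # trim: zero out everything below each row's top-k magnitude
        cutoff = np.sort(np.abs(d))[-k]
        d[np.abs(d) < cutoff] = 0.0
    sign = np.sign(deltas.sum(axis=0))            # elect a per-parameter sign
    agree = (np.sign(deltas) == sign) & (deltas != 0)
    merged = np.where(agree, deltas, 0.0).sum(axis=0)
    merged /= np.maximum(agree.sum(axis=0), 1)    # average only the agreeing deltas
    return base + merged

base = np.zeros(8)
finetunes = [
    np.array([0.9, -0.1, 0.8, 0.0, 0.2, -0.7, 0.1, 0.0]),
    np.array([1.1, 0.1, 0.7, 0.0, -0.2, -0.9, 0.0, 0.1]),
]
print("average:   ", average_merge(base, finetunes))
print("ties-style:", ties_style_merge(base, finetunes))
```

In this toy example the difference shows up in the small, one-sided deltas (the last two positions): plain averaging smears them into the result at half strength, while the TIES-style merge trims them away entirely.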

# TIES is Really, Really Slow, and Also Evolutions or Something
TIES takes something like 10-15x longer than model stock. It has to calculate a bunch of fancy vectors and directions, and that is slow. In practice, this means it's even slower for me to iterate on evolutions. Speaking of which, which evolution is this? Well, this is where it gets weird, because the previous evolution was 13, but all of those were done as model stock merges, and I just decided to switch to TIES out of the blue. This means that basically none of the other evolutions give consistent or reliable results. Therefore, this is sort of evolution 1, despite having many models incorporated already. I'll be calling it evolution 1, but just know that this was the actual reality of the situation.