yeet
.ipynb_checkpoints/README-checkpoint.md
DELETED
@@ -1,56 +0,0 @@
---
base_model:
- abacusai/Dracarys-Llama-3.1-70B-Instruct
- Fizzarolli/L3.1-70b-glitz-v0.2
- nothingiisreal/L3.1-70B-Celeste-V0.1-BF16
- sophosympatheia/New-Dawn-Llama-3.1-70B-v1.1
- gbueno86/Cathallama-70B
- cyberagent/Llama-3.1-70B-Japanese-Instruct-2407
- Sao10K/L3-70B-Euryale-v2.1
library_name: transformers
tags:
- mergekit
- merge
---

# merge

This is a merge of pre-trained language models created using [mergekit](https://github.com/cg123/mergekit).
20 |
-
## Merge Details
|
21 |
-
### Merge Method
|
22 |
-
|
23 |
-
This model was merged using the [Model Stock](https://arxiv.org/abs/2403.19522) merge method using [Fizzarolli/L3.1-70b-glitz-v0.2](https://huggingface.co/Fizzarolli/L3.1-70b-glitz-v0.2) as a base.
|
24 |
-
|
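Model Stock averages the fine-tuned checkpoints and then interpolates that average back toward the base model, choosing the ratio from the angle between the fine-tunes' task vectors: with N models and average pairwise cosine cosθ, the ratio is t = N·cosθ / (1 + (N−1)·cosθ). A minimal NumPy sketch of the idea, for one flattened weight tensor; `model_stock_merge` is a hypothetical helper, not mergekit's actual implementation:

```python
import numpy as np

def model_stock_merge(base, finetuned):
    """Merge one flattened weight tensor with the Model Stock ratio.

    base: 1-D array of base-model weights.
    finetuned: list of 1-D arrays, one per fine-tuned model.
    """
    n = len(finetuned)
    # Task vectors: each fine-tune's delta from the base model.
    deltas = [w - base for w in finetuned]
    # Average pairwise cosine similarity estimates cos(theta).
    cosines = []
    for i in range(n):
        for j in range(i + 1, n):
            cosines.append(
                np.dot(deltas[i], deltas[j])
                / (np.linalg.norm(deltas[i]) * np.linalg.norm(deltas[j]))
            )
    cos_theta = float(np.mean(cosines))
    # Interpolation ratio t = N*cos / (1 + (N-1)*cos).
    t = n * cos_theta / (1 + (n - 1) * cos_theta)
    w_avg = np.mean(finetuned, axis=0)
    # Pull the average back toward the base by (1 - t): the less the
    # fine-tunes agree (small cos_theta), the closer to the base we stay.
    return t * w_avg + (1 - t) * base
```

When the fine-tunes fully agree (cosθ = 1), t = 1 and the result is the plain average; orthogonal task vectors give t = 0 and the base model is returned unchanged.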
### Models Merged

The following models were included in the merge:

* [abacusai/Dracarys-Llama-3.1-70B-Instruct](https://huggingface.co/abacusai/Dracarys-Llama-3.1-70B-Instruct)
* [nothingiisreal/L3.1-70B-Celeste-V0.1-BF16](https://huggingface.co/nothingiisreal/L3.1-70B-Celeste-V0.1-BF16)
* [sophosympatheia/New-Dawn-Llama-3.1-70B-v1.1](https://huggingface.co/sophosympatheia/New-Dawn-Llama-3.1-70B-v1.1)
* [gbueno86/Cathallama-70B](https://huggingface.co/gbueno86/Cathallama-70B)
* [cyberagent/Llama-3.1-70B-Japanese-Instruct-2407](https://huggingface.co/cyberagent/Llama-3.1-70B-Japanese-Instruct-2407)
* [Sao10K/L3-70B-Euryale-v2.1](https://huggingface.co/Sao10K/L3-70B-Euryale-v2.1)
### Configuration

The following YAML configuration was used to produce this model:

```yaml
models:
  - model: Fizzarolli/L3.1-70b-glitz-v0.2
  - model: cyberagent/Llama-3.1-70B-Japanese-Instruct-2407
  - model: Sao10K/L3-70B-Euryale-v2.1
  - model: nothingiisreal/L3.1-70B-Celeste-V0.1-BF16
  - model: sophosympatheia/New-Dawn-Llama-3.1-70B-v1.1
  - model: gbueno86/Cathallama-70B
  - model: abacusai/Dracarys-Llama-3.1-70B-Instruct
merge_method: model_stock
base_model: Fizzarolli/L3.1-70b-glitz-v0.2
parameters:
  normalize: true
dtype: bfloat16
```
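A configuration like this is typically run with mergekit's `mergekit-yaml` entry point, which reads the config and writes the merged checkpoint to an output directory; the paths below are placeholders, and downloading seven 70B checkpoints requires substantial disk space:

```shell
# Install mergekit, then merge according to the config above.
# ./merged is a hypothetical output directory.
pip install mergekit
mergekit-yaml config.yaml ./merged
```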