---
base_model:
- DMindAI/DMind-1-mini
- Qwen/Qwen3-14B
- soob3123/GrayLine-Qwen3-14B
- ValiantLabs/Qwen3-14B-Cobalt2
- ValiantLabs/Qwen3-14B-Esper3
library_name: transformers
license: apache-2.0
language:
- en
pipeline_tag: text-generation
tags:
- mergekit
- merge
- esper
- esper-3
- dmind
- dmind-1-mini
- cobalt
- cobalt-2
- grayline
- valiant
- valiant-labs
- qwen
- qwen-3
- qwen-3-14b
- 14b
- reasoning
- web3
- code
- code-instruct
- python
- javascript
- dev-ops
- jenkins
- terraform
- scripting
- powershell
- azure
- aws
- gcp
- cloud
- problem-solving
- architect
- engineer
- developer
- creative
- analytical
- expert
- rationality
- math
- math-reasoning
- math-instruct
- uncensored
- unfiltered
- amoral-ai
- conversational
- chat
- instruct

---
# sequelbox/Qwen3-14B-Esper3Mix

This is a merge of pre-trained language models created using [mergekit](https://github.com/cg123/mergekit), combining several Qwen 3 14B finetunes to maximize reasoning performance.
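A minimal usage sketch with Hugging Face `transformers` (assuming a recent release with Qwen3 support; the prompt and generation settings are illustrative, not tuned for this merge):

```python
# Minimal usage sketch (assumes transformers >= 4.51 with Qwen3 support).
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "sequelbox/Qwen3-14B-Esper3Mix"

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    torch_dtype="auto",   # loads in bfloat16, matching the merge dtype
    device_map="auto",
)

messages = [
    {"role": "user", "content": "Write a Terraform snippet that provisions an S3 bucket."}
]
inputs = tokenizer.apply_chat_template(
    messages,
    add_generation_prompt=True,
    return_tensors="pt",
).to(model.device)

# Sampling settings here are library defaults, not recommendations for this model.
outputs = model.generate(inputs, max_new_tokens=512)
print(tokenizer.decode(outputs[0][inputs.shape[-1]:], skip_special_tokens=True))
```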

## Merge Details
### Merge Method

This model was merged using the [DELLA](https://arxiv.org/abs/2406.11617) merge method, with [Qwen/Qwen3-14B](https://huggingface.co/Qwen/Qwen3-14B) as the base.
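In rough terms, DELLA operates on task vectors (the parameter deltas between each finetune and the base): low-magnitude delta entries are stochastically dropped, with larger-magnitude entries kept at higher probability, survivors are rescaled so the expected delta is unchanged, and the pruned deltas are then fused and added back onto the base weights. A toy numpy sketch of the drop-and-rescale step (illustrative only; the function name and the exact probability assignment are my simplification, not mergekit's implementation):

```python
import numpy as np

def drop_and_rescale(delta: np.ndarray, density: float, rng=np.random.default_rng(0)):
    """Toy version of DELLA's magnitude-based stochastic pruning.

    `density` is the expected fraction of delta entries kept (the `density`
    parameter in the config below). Keep probabilities scale with |delta|,
    so large-magnitude entries survive more often; kept entries are divided
    by their keep probability so the expected delta stays unchanged.
    """
    mag = np.abs(delta).ravel()
    ranks = mag.argsort().argsort() + 1        # 1 = smallest magnitude
    keep_p = ranks / ranks.mean() * density    # averages to `density`
    keep_p = np.clip(keep_p, 1e-6, 1.0)
    mask = rng.random(mag.shape) < keep_p
    pruned = np.where(mask, delta.ravel() / keep_p, 0.0)
    return pruned.reshape(delta.shape)

# Large-magnitude entries tend to survive; most small ones are zeroed out.
delta = np.array([0.9, -0.05, 0.4, 0.01, -0.7, 0.2])
print(drop_and_rescale(delta, density=0.25))
```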

### Models Merged

The following models were included in the merge:
* [DMindAI/DMind-1-mini](https://huggingface.co/DMindAI/DMind-1-mini)
* [soob3123/GrayLine-Qwen3-14B](https://huggingface.co/soob3123/GrayLine-Qwen3-14B)
* [ValiantLabs/Qwen3-14B-Cobalt2](https://huggingface.co/ValiantLabs/Qwen3-14B-Cobalt2)
* [ValiantLabs/Qwen3-14B-Esper3](https://huggingface.co/ValiantLabs/Qwen3-14B-Esper3)

### Configuration

The following YAML configuration was used to produce this model:

```yaml
merge_method: della
dtype: bfloat16
parameters:
  normalize: true
models:
  - model: ValiantLabs/Qwen3-14B-Esper3
    parameters:
      density: 0.25
      weight: 0.4
  - model: ValiantLabs/Qwen3-14B-Cobalt2
    parameters:
      density: 0.25
      weight: 0.25
  - model: DMindAI/DMind-1-mini
    parameters:
      density: 0.25
      weight: 0.25
  - model: soob3123/GrayLine-Qwen3-14B
    parameters:
      density: 0.25
      weight: 0.25
base_model: Qwen/Qwen3-14B

```
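To reproduce the merge, this config can be passed to mergekit's `mergekit-yaml` CLI, e.g. `mergekit-yaml della-config.yml ./Qwen3-14B-Esper3Mix` (the config filename here is illustrative).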