# aqua-qwen-0.1-110B
This model was created by merging two models with the linear DARE merge method using mergekit. The following models were included in the merge:

- cognitivecomputations/dolphin-2.9.1-qwen-110b
- Qwen/Qwen1.5-110B-Chat
## Configuration
The following YAML configuration was used to produce this model:
```yaml
name: aqua-qwen-0.1-110B
base_model:
  model:
    path: cognitivecomputations/dolphin-2.9.1-qwen-110b
dtype: bfloat16
merge_method: dare_linear
parameters:
  normalize: 1.0
slices:
- sources:
  - model: cognitivecomputations/dolphin-2.9.1-qwen-110b
    layer_range: [0, 80]
    parameters:
      weight: 0.6
  - model: Qwen/Qwen1.5-110B-Chat
    layer_range: [0, 80]
    parameters:
      weight: 0.4
```
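
To reproduce the merge, the configuration above can be saved to a file and passed to mergekit. The snippet below is a minimal sketch using mergekit's Python entry point; it assumes mergekit is installed, the configuration is saved as `config.yaml`, and `./aqua-qwen-0.1-110B` is a placeholder output directory.

```python
# Minimal sketch: run the merge described by the YAML config above with mergekit.
# Assumes `pip install mergekit`; config.yaml and the output path are placeholders.
import yaml
import torch

from mergekit.config import MergeConfiguration
from mergekit.merge import MergeOptions, run_merge

with open("config.yaml", "r", encoding="utf-8") as fp:
    merge_config = MergeConfiguration.model_validate(yaml.safe_load(fp))

run_merge(
    merge_config,
    out_path="./aqua-qwen-0.1-110B",
    options=MergeOptions(
        cuda=torch.cuda.is_available(),  # use the GPU for the merge if available
        copy_tokenizer=True,             # copy the base model's tokenizer to the output
        lazy_unpickle=False,
        low_cpu_memory=False,
    ),
)
```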
## Usage
It is recommended to use the GGUF version of the model, available here.
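
As an illustration, a downloaded GGUF quantization can be run locally with llama-cpp-python. This is a minimal sketch, not the only supported workflow; the model path and quantization filename below are placeholders for whichever GGUF file you download.

```python
# Minimal sketch: run a GGUF quantization of this model with llama-cpp-python.
# The model_path is a placeholder; point it at the GGUF file you downloaded.
from llama_cpp import Llama

llm = Llama(
    model_path="./aqua-qwen-0.1-110B.Q4_K_M.gguf",  # placeholder quant file
    n_ctx=4096,        # context window
    n_gpu_layers=-1,   # offload all layers to GPU if possible
)

response = llm.create_chat_completion(
    messages=[
        {"role": "system", "content": "You are a helpful assistant."},
        {"role": "user", "content": "Summarize the DARE merge method in one sentence."},
    ],
    max_tokens=256,
)
print(response["choices"][0]["message"]["content"])
```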