---
base_model:
- prithivMLmods/JSONify-Flux
- prithivMLmods/ChemQwen2-vL
- prithivMLmods/Qwen2-VL-OCR-2B-Instruct
- prithivMLmods/Qwen2-VL-OCR2-2B-Instruct
- Qwen/Qwen2-VL-2B-Instruct
- prithivMLmods/LatexMind-2B-Codec
- prithivMLmods/Radiology-Infer-Mini
- prithivMLmods/QvQ-Step-Tiny
- prithivMLmods/Caption-Pro
- prithivMLmods/Blazer.1-2B-Vision
- prithivMLmods/Omni-Reasoner-2B
- Qwen/Qwen2-VL-2B
library_name: transformers
tags:
- mergekit
- merge
---
|
|
This is a merge of pre-trained language models created using [mergekit](https://github.com/cg123/mergekit).

## Merge Details

### Merge Method
|
This model was merged using the [linear](https://arxiv.org/abs/2203.05482) merge method, with [Qwen/Qwen2-VL-2B-Instruct](https://huggingface.co/Qwen/Qwen2-VL-2B-Instruct) as the base model.
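
For intuition, a linear merge is essentially a weighted average of the models' parameters. The sketch below illustrates that idea only; it is not mergekit's implementation (which also handles parameter-name matching, tokenizer alignment, and other details), and the `linear_merge`, `state_dicts`, and `weights` names are hypothetical placeholders.

```python
import torch

def linear_merge(state_dicts, weights, normalize=True):
    """Conceptual sketch: weighted average of parameter tensors across models."""
    if normalize:
        # Rescale weights so they sum to 1 (mirrors `normalize: true` in the config below).
        total = sum(weights)
        weights = [w / total for w in weights]
    merged = {}
    for name in state_dicts[0]:
        # Accumulate in float32 for stability, then cast to bfloat16 (`dtype: bfloat16`).
        acc = sum(w * sd[name].to(torch.float32) for w, sd in zip(weights, state_dicts))
        merged[name] = acc.to(torch.bfloat16)
    return merged
```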
|
|
|
|
|
### Models Merged

The following models were included in the merge:

* [prithivMLmods/JSONify-Flux](https://huggingface.co/prithivMLmods/JSONify-Flux)
* [prithivMLmods/ChemQwen2-vL](https://huggingface.co/prithivMLmods/ChemQwen2-vL)
* [prithivMLmods/Qwen2-VL-OCR-2B-Instruct](https://huggingface.co/prithivMLmods/Qwen2-VL-OCR-2B-Instruct)
* [prithivMLmods/Qwen2-VL-OCR2-2B-Instruct](https://huggingface.co/prithivMLmods/Qwen2-VL-OCR2-2B-Instruct)
* [prithivMLmods/LatexMind-2B-Codec](https://huggingface.co/prithivMLmods/LatexMind-2B-Codec)
* [prithivMLmods/Radiology-Infer-Mini](https://huggingface.co/prithivMLmods/Radiology-Infer-Mini)
* [prithivMLmods/QvQ-Step-Tiny](https://huggingface.co/prithivMLmods/QvQ-Step-Tiny)
* [prithivMLmods/Caption-Pro](https://huggingface.co/prithivMLmods/Caption-Pro)
* [prithivMLmods/Blazer.1-2B-Vision](https://huggingface.co/prithivMLmods/Blazer.1-2B-Vision)
* [prithivMLmods/Omni-Reasoner-2B](https://huggingface.co/prithivMLmods/Omni-Reasoner-2B)
* [Qwen/Qwen2-VL-2B](https://huggingface.co/Qwen/Qwen2-VL-2B)
|
### Configuration

The following YAML configuration was used to produce this model:

```yaml
models:
  - model: prithivMLmods/Blazer.1-2B-Vision
  - model: prithivMLmods/Caption-Pro
  - model: prithivMLmods/ChemQwen2-vL
  - model: prithivMLmods/JSONify-Flux
  - model: prithivMLmods/LatexMind-2B-Codec
  - model: prithivMLmods/Omni-Reasoner-2B
  - model: prithivMLmods/QvQ-Step-Tiny
  - model: prithivMLmods/Qwen2-VL-OCR2-2B-Instruct
  - model: prithivMLmods/Qwen2-VL-OCR-2B-Instruct
  - model: prithivMLmods/Radiology-Infer-Mini
  - model: Qwen/Qwen2-VL-2B-Instruct
  - model: Qwen/Qwen2-VL-2B
merge_method: linear
base_model: Qwen/Qwen2-VL-2B-Instruct
parameters:
  weight: 0.5
  normalize: true
  int8_mask: true
dtype: bfloat16
```
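
A configuration like this can be passed to mergekit's `mergekit-yaml` command to reproduce the merge. For inference, the merged checkpoint should load like any other Qwen2-VL model in `transformers`. The snippet below is an illustrative sketch: `path/to/this-merge` is a placeholder for this repository's id or a local merge output directory, and the image URL is only an example.

```python
import requests
from PIL import Image
from transformers import AutoProcessor, Qwen2VLForConditionalGeneration

repo_id = "path/to/this-merge"  # placeholder: this repo's id or the local merge output

model = Qwen2VLForConditionalGeneration.from_pretrained(
    repo_id, torch_dtype="auto", device_map="auto"
)
processor = AutoProcessor.from_pretrained(repo_id)

# Example image and prompt (URL is illustrative only).
image = Image.open(requests.get("https://example.com/sample.jpg", stream=True).raw)
messages = [
    {
        "role": "user",
        "content": [
            {"type": "image"},
            {"type": "text", "text": "Describe this image."},
        ],
    }
]

prompt = processor.apply_chat_template(messages, tokenize=False, add_generation_prompt=True)
inputs = processor(text=[prompt], images=[image], return_tensors="pt").to(model.device)

output_ids = model.generate(**inputs, max_new_tokens=128)
generated = output_ids[:, inputs.input_ids.shape[1]:]
print(processor.batch_decode(generated, skip_special_tokens=True)[0])
```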
|
|
|