---
license: apache-2.0
---
# MLM-Filter-13b Model Card
## Model details
**Model type:**
MLM-Filter-13B is a multimodal language model built on LLaVA, an open-source chatbot trained by fine-tuning LLaMA/Vicuna on GPT-generated multimodal instruction-following data.
It is an auto-regressive language model based on the transformer architecture.
**Model date:**
MLM-Filter-13B was trained in Dec 2023.
**Paper or resources for more information:**
https://mlm-filter.github.io/
## License
Llama 2 is licensed under the LLAMA 2 Community License,
Copyright (c) Meta Platforms, Inc. All Rights Reserved.
**Where to send questions or comments about the model:**
https://github.com/victorwz/mlm-filter/issues
## Intended use
**Primary intended uses:**
MLM-Filter can be used as a drop-in replacement for CLIPScore in the following tasks:
1. Scoring image-text data in large-scale pre-training datasets and then filtering high-quality subsets based on the scores (for training MLLMs or VLMs, consider jointly using the Image-Text Matching score and the Object Detail Fulfillment score);
2. Evaluating image-text alignment for image-to-text or text-to-image generation models;
3. Any other application that needs to compute image-text alignment.
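As a rough sketch of use case 1, the snippet below filters a dataset by jointly thresholding the Image-Text Matching (ITM) and Object Detail Fulfillment (ODF) scores. The field names and threshold values are illustrative assumptions, not part of the released API; the actual scores would come from running MLM-Filter as described in the repository.

```python
# Hypothetical sketch: filtering a pre-training dataset by quality scores.
# Assumes each sample already carries an ITM and an ODF score produced by
# MLM-Filter; field names and thresholds here are illustrative only.

def filter_high_quality(samples, itm_threshold=85, odf_threshold=85):
    """Keep samples whose ITM and ODF scores both meet the thresholds."""
    return [
        s for s in samples
        if s["itm"] >= itm_threshold and s["odf"] >= odf_threshold
    ]

dataset = [
    {"url": "img_0001.jpg", "caption": "a dog running on a beach", "itm": 92, "odf": 88},
    {"url": "img_0002.jpg", "caption": "click here to buy now", "itm": 31, "odf": 12},
]
subset = filter_high_quality(dataset)  # keeps only the first sample
```

Requiring both scores to pass, rather than averaging them, avoids keeping captions that match the image globally but omit object-level detail.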
## Training dataset
- 46k instructions sampled from the LLaVA-1.5 665k instruction data.
- 4k instructions on image-text data quality assessment tasks spanning 4 metrics.
## Usage Sample
Please follow the instructions at https://github.com/Victorwz/MLM_Filter.