---
task_categories:
- visual-question-answering
language:
- en
tags:
- Vision
- food
- recipe
configs:
- config_name: Recipe1M
  data_files:
  - split: test
    path: food_eval_multitask_v2/data-*.arrow
- config_name: Nutrition5K
  data_files:
  - split: test
    path: nutrition50k/data-*.arrow
- config_name: Food101
  data_files:
  - split: test
    path: food101/data-*.arrow
- config_name: FoodSeg103
  data_files:
  - split: test
    path: foodseg103/data-*.arrow
---

# Adapting Multimodal Large Language Models to Domains via Post-Training

This repo contains the **food visual instruction tasks for evaluating MLLMs** in our paper: [On Domain-Specific Post-Training for Multimodal Large Language Models](https://huggingface.co/papers/2411.19930).

The main project page is: [Adapt-MLLM-to-Domains](https://huggingface.co/AdaptLLM/Adapt-MLLM-to-Domains)

## 1. Download Data

You can load the datasets using the `datasets` library:

```python
from datasets import load_dataset

# Choose the task name from the list of available tasks
task_name = 'FoodSeg103'  # Options: 'Food101', 'FoodSeg103', 'Nutrition5K', 'Recipe1M'

# Load the dataset for the chosen task
data = load_dataset('AdaptLLM/food-VQA-benchmark', task_name, split='test')

print(data[0])
```
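
Each split is a standard `datasets.Dataset`, so you can inspect its size and schema before writing task-specific code. A minimal sketch (the field names differ across the four tasks):

```python
# Number of test examples and the available fields for the chosen task
print(len(data))
print(data.column_names)
```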

The mapping between category names and indices for the `Food101`, `FoodSeg103`, and `Nutrition5K` datasets is provided in the following files:

<details>
<summary> Click to expand </summary>

- Food101: `food101_name_to_label_map.json`
- FoodSeg103: `foodSeg103_id2label.json`
- Nutrition5K: `nutrition5k_ingredients.py`

#### Example Usage

**Food101**
```python
import json

# Load the mapping file
map_path = 'food101_name_to_label_map.json'
with open(map_path) as f:
    name_to_label_map = json.load(f)
name_to_label_map = {key.replace('_', ' '): value for key, value in name_to_label_map.items()}

# Reverse mapping: label to name
label_to_name_map = {value: key for key, value in name_to_label_map.items()}
```
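
For example, to convert between a class name and its label index (assuming the Food101 class `apple_pie`, which becomes `'apple pie'` after the underscore replacement above):

```python
# Name -> label index
label = name_to_label_map['apple pie']

# Label index -> name
print(label_to_name_map[label])  # 'apple pie'
```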

**FoodSeg103**
```python
import json

# Load the mapping file
map_path = 'foodSeg103_id2label.json'
with open(map_path) as f:
    id2name_map = json.load(f)

# Remove background and irrelevant labels
id2name_map.pop("0")    # Background
id2name_map.pop("103")  # Other ingredients

# Convert keys to integers
id2name_map = {int(key): value for key, value in id2name_map.items()}

# Create reverse mapping: name to ID
name2id_map = {value: key for key, value in id2name_map.items()}
```
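
A quick consistency check between the two maps (a minimal sketch that only uses the maps built above):

```python
# Pick an arbitrary class ID and make sure the reverse lookup recovers it
example_id, example_name = next(iter(id2name_map.items()))
print(example_id, example_name)
assert name2id_map[example_name] == example_id
```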

**Nutrition5K**
```python
from nutrition5k_ingredients import all_ingredients

# Create mappings
id2name_map = dict(enumerate(all_ingredients))
name2id_map = {value: key for key, value in id2name_map.items()}
```

</details>

## 2. Evaluate Any MLLM Compatible with vLLM on the Food Benchmarks

We provide a guide to directly evaluate MLLMs such as LLaVA-v1.6 ([open-source version](https://huggingface.co/Lin-Chen/open-llava-next-llama3-8b)), Qwen2-VL-Instruct, and Llama-3.2-Vision-Instruct.
To evaluate other MLLMs, refer to [this guide](https://github.com/vllm-project/vllm/blob/main/examples/offline_inference_vision_language.py) for modifying the `BaseTask` class in the [vllm_inference/utils/task.py](https://github.com/bigai-ai/QA-Synthesizer/blob/main/vllm_inference/utils/task.py) file.
Feel free to reach out to us for assistance!
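
For orientation, the linked vLLM example boils down to an offline `generate` call that pairs a text prompt with an image. A minimal sketch for a Qwen2-VL model (the chat-template tokens follow vLLM's multimodal examples; the image path and question are placeholders, not taken from the benchmark):

```python
from PIL import Image
from vllm import LLM, SamplingParams

llm = LLM(model="Qwen/Qwen2-VL-2B-Instruct")
sampling_params = SamplingParams(temperature=0.0, max_tokens=256)

# Qwen2-VL chat template with a single image placeholder; other model types use different templates
prompt = (
    "<|im_start|>user\n"
    "<|vision_start|><|image_pad|><|vision_end|>"
    "What dish is shown in this image?<|im_end|>\n"
    "<|im_start|>assistant\n"
)

image = Image.open("example_food_image.jpg")  # placeholder image path

outputs = llm.generate(
    {"prompt": prompt, "multi_modal_data": {"image": image}},
    sampling_params=sampling_params,
)
print(outputs[0].outputs[0].text)
```

The `run_inference.sh` workflow below wraps this kind of call for you, so the sketch is only relevant when adding support for a new model type.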

**The dataset loading script is embedded in the inference code, so you can directly run the following commands to evaluate MLLMs.**

### 1) Setup

Install vLLM using `pip` or [from source](https://vllm.readthedocs.io/en/latest/getting_started/installation.html#build-from-source).

As recommended in the official vLLM documentation, install vLLM in a **fresh** conda environment:

```bash
conda create -n vllm python=3.10 -y
conda activate vllm
pip install vllm  # Ensure vllm>=0.6.2 for compatibility with Llama-3.2. If Llama-3.2 is not used, vllm==0.6.1 is sufficient.
```

Clone the repository and navigate to the inference directory:

```bash
git clone https://github.com/bigai-ai/QA-Synthesizer.git
cd QA-Synthesizer/vllm_inference
RESULTS_DIR=./eval_results  # Directory for saving evaluation scores
```

### 2) Evaluate

Run the following commands:

```bash
# Specify the domain: choose from ['food', 'Recipe1M', 'Nutrition5K', 'Food101', 'FoodSeg103']
# 'food' runs inference on all food tasks; others run on individual tasks.
DOMAIN='food'

# Specify the model type: choose from ['llava', 'qwen2_vl', 'mllama']
# For LLaVA-v1.6, Qwen2-VL, and Llama-3.2-Vision-Instruct, respectively.
MODEL_TYPE='qwen2_vl'

# Set the model repository ID on Hugging Face. Examples:
# "Qwen/Qwen2-VL-2B-Instruct", "AdaptLLM/food-Qwen2-VL-2B-Instruct" for MLLMs based on Qwen2-VL-Instruct.
# "meta-llama/Llama-3.2-11B-Vision-Instruct", "AdaptLLM/food-Llama-3.2-11B-Vision-Instruct" for MLLMs based on Llama-3.2-Vision-Instruct.
# "AdaptLLM/food-LLaVA-NeXT-Llama3-8B" for MLLMs based on LLaVA-v1.6.
MODEL=AdaptLLM/food-Qwen2-VL-2B-Instruct

# Set the directory for saving model prediction outputs:
OUTPUT_DIR=./output/AdaMLLM-food-Qwen-2B_${DOMAIN}

# Run inference with data parallelism; adjust CUDA devices as needed:
CUDA_VISIBLE_DEVICES='0,1,2,3,4,5,6,7' bash run_inference.sh ${MODEL} ${DOMAIN} ${MODEL_TYPE} ${OUTPUT_DIR} ${RESULTS_DIR}
```

Detailed scripts to reproduce our results are provided in [Evaluation.md](https://github.com/bigai-ai/QA-Synthesizer/blob/main/docs/Evaluation.md).

### 3) Results

The evaluation results are stored in `./eval_results`, and the model prediction outputs are in `./output`.

## Citation

If you find our work helpful, please cite us.

[AdaMLLM](https://huggingface.co/papers/2411.19930)
```bibtex
@article{adamllm,
  title={On Domain-Specific Post-Training for Multimodal Large Language Models},
  author={Cheng, Daixuan and Huang, Shaohan and Zhu, Ziyu and Zhang, Xintong and Zhao, Wayne Xin and Luan, Zhongzhi and Dai, Bo and Zhang, Zhenliang},
  journal={arXiv preprint arXiv:2411.19930},
  year={2024}
}
```

[Adapt LLM to Domains](https://huggingface.co/papers/2309.09530) (ICLR 2024)
```bibtex
@inproceedings{cheng2024adapting,
  title={Adapting Large Language Models via Reading Comprehension},
  author={Daixuan Cheng and Shaohan Huang and Furu Wei},
  booktitle={The Twelfth International Conference on Learning Representations},
  year={2024},
  url={https://openreview.net/forum?id=y886UXPEZ0}
}
```