---
dataset_info:
- config_name: Autonomous Driving
  features:
  - name: domain
    dtype: string
  - name: image
    dtype: image
  - name: question
    dtype: string
  - name: actions
    sequence: string
  - name: answer_index
    dtype: int64
  - name: reason
    dtype: string
  - name: key_concept
    sequence: string
  - name: question_prompt
    dtype: string
  - name: answer_with_reason
    dtype: string
  - name: full_meta_data_json
    dtype: string
  splits:
  - name: test_open
    num_bytes: 134659773
    num_examples: 100
  - name: test_closed
    num_bytes: 67549223
    num_examples: 150
  download_size: 270416985
  dataset_size: 202208996
- config_name: Domestic Robot
  features:
  - name: domain
    dtype: string
  - name: image
    dtype: image
  - name: question
    dtype: string
  - name: actions
    sequence: string
  - name: answer_index
    dtype: int64
  - name: reason
    dtype: string
  - name: key_concept
    sequence: string
  - name: question_prompt
    dtype: string
  - name: answer_with_reason
    dtype: string
  - name: full_meta_data_json
    dtype: string
  splits:
  - name: test_open
    num_bytes: 91702060
    num_examples: 100
  - name: test_closed
    num_bytes: 177827577
    num_examples: 200
  download_size: 105390299
  dataset_size: 269529637
- config_name: Open-World Game
  features:
  - name: domain
    dtype: string
  - name: image
    dtype: image
  - name: question
    dtype: string
  - name: actions
    sequence: string
  - name: answer_index
    dtype: int64
  - name: reason
    dtype: string
  - name: key_concept
    sequence: string
  - name: question_prompt
    dtype: string
  - name: answer_with_reason
    dtype: string
  - name: full_meta_data_json
    dtype: string
  splits:
  - name: test_open
    num_bytes: 16139511
    num_examples: 117
  - name: test_closed
    num_bytes: 19069366
    num_examples: 141
  download_size: 34988721
  dataset_size: 35208877
configs:
- config_name: Autonomous Driving
  data_files:
  - split: test_open
    path: Autonomous Driving/test_open-*
  - split: test_closed
    path: Autonomous Driving/test_closed-*
- config_name: Domestic Robot
  data_files:
  - split: test_open
    path: Domestic Robot/test_open-*
  - split: test_closed
    path: Domestic Robot/test_closed-*
- config_name: Open-World Game
  data_files:
  - split: test_open
    path: Open-World Game/test_open-*
  - split: test_closed
    path: Open-World Game/test_closed-*
license: apache-2.0
task_categories:
- multiple-choice
- visual-question-answering
language:
- en
pretty_name: PCA-Bench
---
|
|
|
|
|
<h1 align="center">PCA-Bench</h1>

<p align="center">
  <a href="https://github.com/pkunlp-icler/PCA-EVAL">
    <img alt="Static Badge" src="https://img.shields.io/badge/Github-Online-white">
  </a>
  <a href="https://github.com/pkunlp-icler/PCA-EVAL/blob/main/PCA_Bench_Paper.pdf">
    <img alt="Static Badge" src="https://img.shields.io/badge/Paper-PCABench-red">
  </a>
  <a href="https://huggingface.co/datasets/PCA-Bench/PCA-Bench-V1">
    <img alt="Static Badge" src="https://img.shields.io/badge/HFDataset-PCABenchV1-yellow">
  </a>
  <a href="https://docs.qq.com/sheet/DVUd4WUpGRHRqUnNV">
    <img alt="Static Badge" src="https://img.shields.io/badge/Leaderboard-Online-blue">
  </a>
</p>
|
|
|
|
|
|
|
|
|
*PCA-Bench is a benchmark for evaluating Multimodal LLMs on embodied decision-making tasks and for locating where their errors occur, across perception, cognition, and action.*
|
|
|
|
|
## Release

- [2024.02.15] [PCA-Bench-V1](https://github.com/pkunlp-icler/PCA-EVAL) is released. The open- and closed-track data are available on [Hugging Face](https://huggingface.co/datasets/PCA-Bench/PCA-Bench-V1), and an online [leaderboard](https://docs.qq.com/sheet/DVUd4WUpGRHRqUnNV) accepts user submissions. A minimal loading example follows this list.

- [2023.12.15] [PCA-EVAL](https://arxiv.org/abs/2310.02071) is accepted to the Foundation Models for Decision Making Workshop @ NeurIPS 2023. The PCA-Evaluation tool is released on GitHub.
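A minimal sketch of loading one config and inspecting a single example (field names follow the dataset schema above; the choice of config and printed fields is only for illustration):

```python
from datasets import load_dataset

# Load one config; each config has a "test_open" and a "test_closed" split.
dataset = load_dataset("PCA-Bench/PCA-Bench-V1", "Autonomous Driving")

example = dataset["test_open"][0]
# Fields per the schema above: domain, image, question, actions, answer_index,
# reason, key_concept, question_prompt, answer_with_reason, full_meta_data_json.
print(example["question"])
print(example["actions"])       # candidate actions (multiple-choice options)
print(example["answer_index"])  # index of the reference action
```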
|
|
|
## Leaderboard

[Leaderboard with Full Metrics](https://docs.qq.com/sheet/DVUd4WUpGRHRqUnNV)
|
|
|
|
|
|
|
## Submit Results
|
|
|
📢 For closed-track evaluation and PCA-Evaluation, please follow [this file](https://github.com/pkunlp-icler/PCA-EVAL/blob/main/pca-eval/results/chatgpt_holmes_outputs/Autonomous%20Driving.json) to organize your model output. Submit **six JSON files** (one per domain and track), along with your **model name** and **organization**, to us via [email](mailto:[email protected]). Make sure you use the dataset's provided prompt as the default input for a fair comparison.
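For orientation, each of the six files is a JSON list with one record per test example; the sample code below writes records of the following shape (the values here are placeholders, and the linked reference file remains the authoritative format):

```json
[
  {
    "prompt": "<the unmodified question_prompt from the dataset>",
    "model_output": "<your model's full response>",
    "index": 0
  }
]
```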
|
|
|
We will send your model's PCA-Eval results back to you and update the leaderboard.
|
|
|
We provide sample code to generate the six JSON files; you only need to plug in your model's inference code:
|
```python
# Sample code for PCA-Eval: writes one prediction JSON file per domain and track.
import json
import os

from datasets import load_dataset
from tqdm import tqdm


def YOUR_INFERENCE_CODE(prompt, image):
    """Simple single-round multimodal conversation call. Replace with your model's inference."""
    response = YOUR_MODEL.inference(prompt, image)
    return response


output_path = "./Results-DIR-PATH/"
os.makedirs(output_path, exist_ok=True)

dataset_ad = load_dataset("PCA-Bench/PCA-Bench-V1", "Autonomous Driving")
dataset_dr = load_dataset("PCA-Bench/PCA-Bench-V1", "Domestic Robot")
dataset_og = load_dataset("PCA-Bench/PCA-Bench-V1", "Open-World Game")

test_dataset_dict = {
    "Autonomous-Driving": dataset_ad,
    "Domestic-Robot": dataset_dr,
    "Open-World-Game": dataset_og,
}
test_split = ["test_closed", "test_open"]
test_domain = list(test_dataset_dict.keys())

for domain in test_domain:
    for split in test_split:
        print("testing on %s:%s" % (domain, split))

        prediction_results = []
        output_filename = os.path.join(output_path, "%s-%s.json" % (domain, split))
        prompts = test_dataset_dict[domain][split]["question_prompt"]
        images = test_dataset_dict[domain][split]["image"]

        for prompt_id in tqdm(range(len(prompts))):
            user_inputs = prompts[prompt_id]  # do not change the prompts for fair comparison
            index = prompt_id
            image = images[prompt_id]

            outputs = YOUR_INFERENCE_CODE(user_inputs, image)

            prediction_results.append({
                "prompt": user_inputs,
                "model_output": outputs,
                "index": index,
            })

        # one JSON file per (domain, split), e.g. Autonomous-Driving-test_open.json
        with open(output_filename, "w") as f:
            json.dump(prediction_results, f, indent=4)

# submit the 6 json files in the output_path to our email
```
|
|
|
You can also compute multiple-choice accuracy locally as a comparison metric for your own experiments. Note, however, that the online leaderboard ranks models only by the average action score and the Genuine PCA score.
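If you want a quick sanity check before submitting, a minimal sketch along these lines works on the open track. The `extract_choice_index` helper and the answer-format assumption inside it are placeholders, not part of the official scoring; replace them with whatever answer-extraction rule fits your model's output style.

```python
import json

from datasets import load_dataset


def extract_choice_index(model_output: str, num_actions: int) -> int:
    """Hypothetical helper: map a free-form model response to a choice index.
    The matching rule below is a naive placeholder."""
    for i in range(num_actions):
        if "(%d)" % (i + 1) in model_output or model_output.strip() == str(i + 1):
            return i
    return -1


# Reference answers for the open track (answer_index is the ground-truth action index).
dataset = load_dataset("PCA-Bench/PCA-Bench-V1", "Autonomous Driving")["test_open"]

# Predictions written by the sample code above.
with open("./Results-DIR-PATH/Autonomous-Driving-test_open.json") as f:
    predictions = json.load(f)

correct = 0
for record in predictions:
    example = dataset[record["index"]]
    pred = extract_choice_index(record["model_output"], len(example["actions"]))
    correct += int(pred == example["answer_index"])

print("multiple-choice accuracy: %.3f" % (correct / len(predictions)))
```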
|
|
|
|
|
For more information, refer to the official [GitHub repo](https://github.com/pkunlp-icler/PCA-EVAL).