---
license: cc-by-nc-sa-4.0
task_categories:
- multiple-choice
extra_gated_fields:
  'Full Name': text
  'Affiliation (Organization/University)': text
  'Designation/Status in Your Organization': text
  'Country': country
  'I want to use this dataset for (please provide the reason(s))': text
  'The COLD dataset is free for research use but NOT for commercial use; do you agree that, if provided with the COLD dataset, you will NOT use it for any commercial purposes? Do you also agree that you will not share this dataset further or upload it anywhere else on the internet?': checkbox
  'DISCLAIMER: The dataset is released for research purposes only, and the authors do not take any responsibility for any damage or loss arising from the use of the data or of any system/model developed using the dataset.': checkbox
tags:
- LLM
- NLP
- reasoning
- causal reasoning
- mcqa
- multiple-choice
pretty_name: COLD
language:
- en
multilinguality:
- monolingual
size_categories:
- 1K<n<10K
dataset_info:
- config_name: cake
  features:
  - name: premise
    dtype: string
  - name: choice1
    dtype: string
  - name: choice2
    dtype: string
  - name: label
    dtype: string
  - name: question
    dtype: string
- config_name: shopping
  features:
  - name: premise
    dtype: string
  - name: choice1
    dtype: string
  - name: choice2
    dtype: string
  - name: label
    dtype: string
  - name: question
    dtype: string
- config_name: train
  features:
  - name: premise
    dtype: string
  - name: choice1
    dtype: string
  - name: choice2
    dtype: string
  - name: label
    dtype: string
  - name: question
    dtype: string
- config_name: tree
  features:
  - name: premise
    dtype: string
  - name: choice1
    dtype: string
  - name: choice2
    dtype: string
  - name: label
    dtype: string
  - name: question
    dtype: string
- config_name: bus
  features:
  - name: premise
    dtype: string
  - name: choice1
    dtype: string
  - name: choice2
    dtype: string
  - name: label
    dtype: string
  - name: question
    dtype: string
configs:
- config_name: shopping
  data_files: "going_grocery_shopping.csv"
- config_name: cake
  data_files: "baking_a_cake.csv"
- config_name: train
  data_files: "going_on_a_train.csv"
- config_name: tree
  data_files: "planting_a_tree.csv"
- config_name: bus
  data_files: "riding_on_a_bus.csv"
---
# COLD: Causal reasOning in cLosed Daily activities

[CC BY-NC 4.0 License](https://creativecommons.org/licenses/by-nc/4.0/)

[Project Webpage](https://www.cse.iitk.ac.in/users/ajoshi/COLD/) <!-- temporary webpage link -->
<!-- [](https://exploration-lab.github.io/COLD/) -->
<!-- official webpage link (to be updated) -->
<!-- [](https://github.com/Exploration-Lab/COLD) -->

**Picture:** *The proposed COLD framework for evaluating LLMs on causal reasoning. The human-written Event Sequence Descriptions (ESDs) are obtained from crowdsource workers and consist of a telegram-style sequence of events involved in performing an activity. The Observational Graph and the Causal Graph for an activity are used to create causal query triplets (details in Algorithm 1), shown towards the right. Using counterfactual reasoning, “going to the kitchen” is possible without going to the market (if the ingredients are already available), making “come home with the ingredients” a more plausible effect among the given choices. Similarly, in the second example, the event “going to market” has no direct relation to the event “heating the oven”.*

This repository contains the official release of the following paper:

> **COLD: Causal reasOning in cLosed Daily activities**<br>
> **Authors:** Abhinav Joshi*, Areeb Ahmad*, and Ashutosh Modi <br>
>
> **Abstract:** *Large Language Models (LLMs) have shown state-of-the-art performance in a variety of tasks, including arithmetic and reasoning; however, to gauge their intellectual capabilities, causal reasoning has become a reliable proxy for validating a general, human-like understanding of the mechanics and intricacies of the world. Previous works in natural language processing (NLP) have either focused on open-ended causal reasoning via causal commonsense reasoning (CCR) or framed symbolic representation-based question answering for theoretically backed analysis via a causal inference engine. The former offers real-world grounding but lacks theoretically backed analysis/validation, whereas the latter is far from real-world grounding. In this work, we bridge this gap by proposing the COLD (Causal reasOning in cLosed Daily activities) framework, which is built upon human understanding of daily real-world activities to reason about the causal nature of events. We show that the proposed framework facilitates the creation of an enormous number of causal queries (∼9 million) and comes close to the mini-Turing test, simulating causal reasoning to evaluate the understanding of a daily real-world task. We evaluate multiple LLMs on the created causal queries and find that causal reasoning is challenging even for activities trivial to humans. We further explore the causal reasoning abilities of LLMs using the backdoor criterion to determine the causal strength between events.*
## Loading the Dataset

You can load the dataset directly from Hugging Face using the following code:

```python
from datasets import load_dataset

# Pick one of the available activity configs:
# ['shopping', 'cake', 'train', 'tree', 'bus']
activity = "shopping"

dataset = load_dataset("abhinav-joshi/cold", activity)
```
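
Every activity config shares the same record schema (`premise`, `choice1`, `choice2`, `label`, `question`; see `dataset_info` above), so the five activities can be pooled into a single set of causal queries. Below is a minimal sketch, assuming each config exposes a single `train` split as in the usage example later in this card:

```python
from datasets import concatenate_datasets, load_dataset

activities = ["shopping", "cake", "train", "tree", "bus"]

# Load every activity config; all of them share the same five fields.
splits = [load_dataset("abhinav-joshi/cold", name)["train"] for name in activities]

# Merge into one pool of causal queries spanning all activities.
combined = concatenate_datasets(splits)
print(combined.num_rows)
```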
## Generating MCQA Queries

To frame the loaded examples as multiple-choice questions (MCQA), you can use the following code snippet:

```python
import numpy as np
from string import ascii_uppercase
from datasets import load_dataset


def prompt_templates_mcqa():
    """Prompt templates for turning a causal query into an MCQA prompt, e.g.:

    Consider the activity of {activity name}.
    [ in-context examples (if few-shot/in-context learning experiment) ]
    Which of the following events (given as options A or B) is a plausible
    {cause/effect} of the event {premise}?
    A. {choice1}
    B. {choice2}
    Answer: A

    or:

    The following are multiple choice questions about {activity name}. You should
    directly answer the question by choosing the correct option.
    [ in-context examples (if few-shot/in-context learning experiment) ]
    Which of the following events (given as options A or B) is a plausible
    {cause/effect} of the event {premise}?
    A. {choice1}
    B. {choice2}
    Answer: A
    """
    prompt_templates = {
        "template1": lambda activity_name, premise, choices, causal_question: f"Consider the activity of '{activity_name}'. Which of the following events (given as options A or B) is a more plausible {causal_question} of the event '{premise}'?\n" + "\n".join([f"{ascii_uppercase[i]}. {choice}" for i, choice in enumerate(choices)]) + "\nAnswer:",
        "template2": lambda activity_name, premise, choices, causal_question: f"Consider the activity of '{activity_name}'. Which of the following events (given as options A or B) is a plausible {causal_question} of the event '{premise}'?\n" + "\n".join([f"{ascii_uppercase[i]}. {choice}" for i, choice in enumerate(choices)]) + "\nAnswer:",
        "template3": lambda activity_name, premise, choices, causal_question: f"While performing the activity '{activity_name}', which of the following events (given as options A or B) will be a more plausible {causal_question} of the event '{premise}'?\n" + "\n".join([f"{ascii_uppercase[i]}. {choice}" for i, choice in enumerate(choices)]) + "\nAnswer:",
        "template4": lambda activity_name, premise, choices, causal_question: f"In the context of the activity '{activity_name}', which of the following events (given as options A or B) is a plausible {causal_question} of the event '{premise}'?\n" + "\n".join([f"{ascii_uppercase[i]}. {choice}" for i, choice in enumerate(choices)]) + "\nAnswer:",
        "template5": lambda activity_name, premise, choices, causal_question: f"The following are multiple choice questions about '{activity_name}'. You should directly answer the question by choosing the correct option.\nWhich of the following events (given as options A or B) is a plausible {causal_question} of the event '{premise}'?\n" + "\n".join([f"{ascii_uppercase[i]}. {choice}" for i, choice in enumerate(choices)]) + "\nAnswer:",
    }
    return prompt_templates


def get_question_text(activity_name, premise, choices, causal_question, template="random"):
    prompt_templates = prompt_templates_mcqa()
    if template == "random":
        template_id = np.random.choice(list(prompt_templates.keys()))
    else:
        template_id = template
    template_fn = prompt_templates[template_id]
    return template_fn(activity_name, premise, choices, causal_question)


# Example usage
activity = "shopping"  # pick one of: ['shopping', 'cake', 'train', 'tree', 'bus']

activities_dict = {
    "shopping": "going grocery shopping",
    "cake": "baking a cake",
    "train": "going on a train",
    "tree": "planting a tree",
    "bus": "riding on a bus",
}
activity_name = activities_dict[activity]

dataset = load_dataset("abhinav-joshi/cold", activity)

# Example record
print(dataset["train"][0])
# {'premise': 'navigate to checkout.', 'choice1': 'walk in store.', 'choice2': 'get the bill for groceries.', 'label': '1', 'question': 'effect'}

premise = dataset["train"][0]["premise"]
choices = [dataset["train"][0]["choice1"], dataset["train"][0]["choice2"]]
causal_question = dataset["train"][0]["question"]

question_text = get_question_text(activity_name, premise, choices, causal_question)
print(question_text)

# Output:
# Consider the activity of 'going grocery shopping'. Which of the following events
# (given as options A or B) is a plausible effect of the event 'navigate to checkout.'?
# A. walk in store.
# B. get the bill for groceries.
# Answer:
```
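
Since each record carries the gold `label`, a model's letter choice can be scored directly. Below is a minimal sketch, assuming the label is a 0-indexed string over the two choices ('0' → option A/choice1, '1' → option B/choice2), which matches the example record above; `model_answers` is a hypothetical placeholder for whatever letters your LLM returns:

```python
from string import ascii_uppercase
from datasets import load_dataset

dataset = load_dataset("abhinav-joshi/cold", "shopping")


def gold_letter(example):
    # Assumption: 'label' is a 0-indexed string over the two choices,
    # so '0' -> 'A' (choice1) and '1' -> 'B' (choice2).
    return ascii_uppercase[int(example["label"])]


print(gold_letter(dataset["train"][0]))  # 'B', i.e. 'get the bill for groceries.'

# With one predicted letter per example (placeholder predictions here),
# accuracy is a simple comparison against the gold letters.
model_answers = ["B"] * dataset["train"].num_rows
correct = sum(pred == gold_letter(ex) for pred, ex in zip(model_answers, dataset["train"]))
print(correct / len(model_answers))
```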
## Citation

[**COLD: Causal reasOning in cLosed Daily activities**](https://nips.cc/virtual/2024/poster/96459), 2024. In the Thirty-eighth Annual Conference on [Neural Information Processing Systems (NeurIPS’24)](https://neurips.cc/), Vancouver, Canada.

```
@misc{cold,
  title={COLD: Causal reasOning in cLosed Daily activities},
  author={Abhinav Joshi and Areeb Ahmad and Ashutosh Modi},
  year={2024},
  eprint={2411.19500},
  archivePrefix={arXiv},
  primaryClass={cs.CL},
  url={https://arxiv.org/abs/2411.19500},
}
```

```
@inproceedings{joshi2024cold,
  title={{COLD}: Causal reasOning in cLosed Daily activities},
  author={Abhinav Joshi and Areeb Ahmad and Ashutosh Modi},
  booktitle={The Thirty-eighth Annual Conference on Neural Information Processing Systems},
  year={2024},
  url={https://openreview.net/forum?id=7Mo1NOosNT}
}
```
## License

[CC BY-NC 4.0 License](https://creativecommons.org/licenses/by-nc/4.0/)

The COLD dataset is released under the [CC BY-NC 4.0](https://creativecommons.org/licenses/by-nc/4.0/) license: users may share and adapt the dataset/codebase, provided they credit the authors and do not use it for any commercial purposes.