---
license: apache-2.0
task_categories:
- text-generation
---

Official implementation of the paper "Order Matters: Investigate the Position Bias in Multi-constraint Instruction Following".

Code: https://github.com/meowpass/PBIF

We systematically study the position bias problem in multi-constraint instruction following. Our experiments yield the following findings:

- **LLMs prefer the "hard-to-easy" constraint order.**
  - Existing LLMs achieve better following accuracy on multi-constraint instructions when the constraints are presented in "hard-to-easy" order.
  - This finding generalizes to both single-round and multi-round scenarios, regardless of the LLM's architecture, its parameter size, and the number of constraints.
- **Constraint order affects how the LLM handles a specific constraint.**
  - The "hard-to-easy" constraint order induces the LLM to pay more attention to the constraint part of the multi-constraint instruction.
  - The LLM's performance on various constraints is strongly correlated with its attention patterns.
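The "hard-to-easy" ordering above can be sketched as a simple prompt-construction step: sort the constraints by a difficulty rank (0 = hardest, mirroring the dataset's `ranking` field) before assembling the instruction. The function name, task text, and example constraints below are illustrative, not taken from PBIF.

```python
# Sketch: assemble a multi-constraint instruction in "hard-to-easy" order.
# Rank 0 marks the hardest constraint, mirroring the dataset's `ranking` field.
def build_prompt(task: str, constraints: list[tuple[int, str]]) -> str:
    ordered = [text for rank, text in sorted(constraints)]  # hardest (rank 0) first
    return task + "\n" + "\n".join(f"- {c}" for c in ordered)

constraints = [
    (2, "Answer in English."),                    # easy
    (0, "Use exactly three paragraphs."),         # hard
    (1, "Include the keyword 'position bias'."),  # medium
]
prompt = build_prompt("Write a short essay about instruction following.", constraints)
print(prompt)
```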

## PBIF Dataset

The dataset consists of `single_round` inference data and `multi_round` inference data. Each data entry has 5 fields:

- `prompt`: Synthesized multi-constraint instruction.
- `constraint`: The constraints contained in the instruction.
- `instruction_id_list`: The IDs of the constraints in the instruction.
- `kwargs`: Corresponding parameters for the constraints, used only for evaluation.
- `ranking`: The constraint order of the instruction (0 indicates the hardest constraint).

Note that in the `multi_round` inference data, `prompt` is the initial instruction, which makes it convenient for users to construct their own multi-round dialog data.
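A minimal sketch of reading one record and checking the five fields, using only the standard `json` module (loading via the `datasets` library's JSON builder would work similarly). The file name and the record's values are illustrative assumptions, not actual PBIF data.

```python
import json

# A toy record with the dataset's five fields. All values here are
# illustrative placeholders, not taken from PBIF itself.
record = {
    "prompt": "Write a summary. Use at most 50 words. Answer in English.",
    "constraint": ["Use at most 50 words.", "Answer in English."],
    "instruction_id_list": ["constraint_0", "constraint_1"],  # hypothetical IDs
    "kwargs": [{"num_words": 50}, {"language": "en"}],
    "ranking": [0, 1],  # 0 = hardest constraint
}

# Round-trip through a JSON file, as one would with the released data files.
with open("sample.json", "w") as f:
    json.dump([record], f)

with open("sample.json") as f:
    data = json.load(f)

expected_fields = {"prompt", "constraint", "instruction_id_list", "kwargs", "ranking"}
assert set(data[0]) == expected_fields
print(data[0]["prompt"])
```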