---
license: cc-by-4.0
metrics:
- bleu4
- meteor
- rouge-l
- bertscore
- moverscore
language: en
datasets:
- lmqg/qag_tweetqa
pipeline_tag: text2text-generation
tags:
- questions and answers generation
widget:
- text: "Beyonce further expanded her acting career, starring as blues singer Etta James in the 2008 musical biopic, Cadillac Records."
  example_title: "Questions & Answers Generation Example 1" 
model-index:
- name: lmqg/bart-large-tweetqa-qag
  results:
  - task:
      name: Text2text Generation
      type: text2text-generation
    dataset:
      name: lmqg/qag_tweetqa
      type: default
      args: default
    metrics:
    - name: BLEU4 (Question & Answer Generation)
      type: bleu4_question_answer_generation
      value: 15.18
    - name: ROUGE-L (Question & Answer Generation)
      type: rouge_l_question_answer_generation
      value: 34.99
    - name: METEOR (Question & Answer Generation)
      type: meteor_question_answer_generation
      value: 27.91
    - name: BERTScore (Question & Answer Generation)
      type: bertscore_question_answer_generation
      value: 91.27
    - name: MoverScore (Question & Answer Generation)
      type: moverscore_question_answer_generation
      value: 62.25
    - name: QAAlignedF1Score-BERTScore (Question & Answer Generation)
      type: qa_aligned_f1_score_bertscore_question_answer_generation
      value: 92.47
    - name: QAAlignedRecall-BERTScore (Question & Answer Generation)
      type: qa_aligned_recall_bertscore_question_answer_generation
      value: 92.21
    - name: QAAlignedPrecision-BERTScore (Question & Answer Generation)
      type: qa_aligned_precision_bertscore_question_answer_generation
      value: 92.74
    - name: QAAlignedF1Score-MoverScore (Question & Answer Generation)
      type: qa_aligned_f1_score_moverscore_question_answer_generation
      value: 64.66
    - name: QAAlignedRecall-MoverScore (Question & Answer Generation)
      type: qa_aligned_recall_moverscore_question_answer_generation
      value: 64.03
    - name: QAAlignedPrecision-MoverScore (Question & Answer Generation)
      type: qa_aligned_precision_moverscore_question_answer_generation
      value: 65.39
---

# Model Card of `lmqg/bart-large-tweetqa-qag`
This model is a fine-tuned version of [facebook/bart-large](https://huggingface.co/facebook/bart-large) for the question & answer pair generation task, trained on the [lmqg/qag_tweetqa](https://huggingface.co/datasets/lmqg/qag_tweetqa) dataset (dataset name: default) via [`lmqg`](https://github.com/asahi417/lm-question-generation).


### Overview
- **Language model:** [facebook/bart-large](https://huggingface.co/facebook/bart-large)   
- **Language:** en  
- **Training data:** [lmqg/qag_tweetqa](https://huggingface.co/datasets/lmqg/qag_tweetqa) (default)
- **Online Demo:** [https://autoqg.net/](https://autoqg.net/)
- **Repository:** [https://github.com/asahi417/lm-question-generation](https://github.com/asahi417/lm-question-generation)
- **Paper:** [https://arxiv.org/abs/2210.03992](https://arxiv.org/abs/2210.03992)

### Usage
- With [`lmqg`](https://github.com/asahi417/lm-question-generation#lmqg-language-model-for-question-generation-)
```python
from lmqg import TransformersQG

# initialize model
model = TransformersQG(language="en", model="lmqg/bart-large-tweetqa-qag")

# model prediction: returns the generated pairs as a list of (question, answer) tuples
question_answer_pairs = model.generate_qa("William Turner was an English painter who specialised in watercolour landscapes")

```

- With `transformers`
```python
from transformers import pipeline

# initialize pipeline
pipe = pipeline("text2text-generation", "lmqg/bart-large-tweetqa-qag")

# model prediction: returns a list of dicts containing the raw generated string
output = pipe("Beyonce further expanded her acting career, starring as blues singer Etta James in the 2008 musical biopic, Cadillac Records.")

```
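
The pipeline returns the raw generated string rather than structured pairs. Below is a minimal parsing sketch, continuing from the snippet above and assuming lmqg's `question: ..., answer: ...` serialization with pairs joined by `" | "` (an assumption worth verifying against the actual model output):
```python
# parse the raw generation into (question, answer) tuples;
# the "question: ..., answer: ..." segments joined by " | " are an
# assumed serialization -- inspect the raw output to confirm
raw = output[0]["generated_text"]
pairs = []
for segment in raw.split(" | "):
    if "answer:" in segment:
        question, answer = segment.split(", answer:", 1)
        pairs.append((question.replace("question:", "").strip(), answer.strip()))
print(pairs)
```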

## Evaluation


- ***Metric (Question & Answer Generation)***: [raw metric file](https://huggingface.co/lmqg/bart-large-tweetqa-qag/raw/main/eval/metric.first.answer.paragraph.questions_answers.lmqg_qag_tweetqa.default.json) 

| Metric                          |   Score | Type    | Dataset                                                              |
|:--------------------------------|--------:|:--------|:---------------------------------------------------------------------|
| BERTScore                       |   91.27 | default | [lmqg/qag_tweetqa](https://huggingface.co/datasets/lmqg/qag_tweetqa) |
| Bleu_1                          |   44.55 | default | [lmqg/qag_tweetqa](https://huggingface.co/datasets/lmqg/qag_tweetqa) |
| Bleu_2                          |   31.15 | default | [lmqg/qag_tweetqa](https://huggingface.co/datasets/lmqg/qag_tweetqa) |
| Bleu_3                          |   21.58 | default | [lmqg/qag_tweetqa](https://huggingface.co/datasets/lmqg/qag_tweetqa) |
| Bleu_4                          |   15.18 | default | [lmqg/qag_tweetqa](https://huggingface.co/datasets/lmqg/qag_tweetqa) |
| METEOR                          |   27.91 | default | [lmqg/qag_tweetqa](https://huggingface.co/datasets/lmqg/qag_tweetqa) |
| MoverScore                      |   62.25 | default | [lmqg/qag_tweetqa](https://huggingface.co/datasets/lmqg/qag_tweetqa) |
| QAAlignedF1Score (BERTScore)    |   92.47 | default | [lmqg/qag_tweetqa](https://huggingface.co/datasets/lmqg/qag_tweetqa) |
| QAAlignedF1Score (MoverScore)   |   64.66 | default | [lmqg/qag_tweetqa](https://huggingface.co/datasets/lmqg/qag_tweetqa) |
| QAAlignedPrecision (BERTScore)  |   92.74 | default | [lmqg/qag_tweetqa](https://huggingface.co/datasets/lmqg/qag_tweetqa) |
| QAAlignedPrecision (MoverScore) |   65.39 | default | [lmqg/qag_tweetqa](https://huggingface.co/datasets/lmqg/qag_tweetqa) |
| QAAlignedRecall (BERTScore)     |   92.21 | default | [lmqg/qag_tweetqa](https://huggingface.co/datasets/lmqg/qag_tweetqa) |
| QAAlignedRecall (MoverScore)    |   64.03 | default | [lmqg/qag_tweetqa](https://huggingface.co/datasets/lmqg/qag_tweetqa) |
| ROUGE_L                         |   34.99 | default | [lmqg/qag_tweetqa](https://huggingface.co/datasets/lmqg/qag_tweetqa) |



## Training hyperparameters

The following hyperparameters were used during fine-tuning:
 - dataset_path: lmqg/qag_tweetqa
 - dataset_name: default
 - input_types: ['paragraph']
 - output_types: ['questions_answers']
 - prefix_types: None
 - model: facebook/bart-large
 - max_length: 256
 - max_length_output: 128
 - epoch: 14
 - batch: 32
 - lr: 5e-05
 - fp16: False
 - random_seed: 1
 - gradient_accumulation_steps: 8
 - label_smoothing: 0.15

The full configuration can be found at [fine-tuning config file](https://huggingface.co/lmqg/bart-large-tweetqa-qag/raw/main/trainer_config.json).
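
For readers who prefer the plain `transformers` trainer API, the listed values map roughly onto `Seq2SeqTrainingArguments` as sketched below; lmqg uses its own training loop, so treat this as an approximation rather than the actual training script. Note the effective batch size is 32 × 8 = 256.
```python
from transformers import Seq2SeqTrainingArguments

# approximate mapping of the hyperparameters above; the input/output
# max lengths (256/128) are applied at tokenization/generation time
# and are not part of these arguments
training_args = Seq2SeqTrainingArguments(
    output_dir="bart-large-tweetqa-qag",  # hypothetical output directory
    num_train_epochs=14,
    per_device_train_batch_size=32,
    gradient_accumulation_steps=8,        # effective batch size: 32 * 8 = 256
    learning_rate=5e-5,
    label_smoothing_factor=0.15,
    fp16=False,
    seed=1,
)
```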

## Citation
```bibtex
@inproceedings{ushio-etal-2022-generative,
    title = "{G}enerative {L}anguage {M}odels for {P}aragraph-{L}evel {Q}uestion {G}eneration",
    author = "Ushio, Asahi  and
        Alva-Manchego, Fernando  and
        Camacho-Collados, Jose",
    booktitle = "Proceedings of the 2022 Conference on Empirical Methods in Natural Language Processing",
    month = dec,
    year = "2022",
    address = "Abu Dhabi, U.A.E.",
    publisher = "Association for Computational Linguistics",
}

```