---
license: cc-by-4.0
dataset_info:
  features:
    - name: id
      dtype: string
    - name: audio
      dtype: audio
    - name: transcription
      dtype: string
    - name: summary
      dtype: string
    - name: summary1
      dtype: string
    - name: summary2
      dtype: string
    - name: summary3
      dtype: string
  splits:
    - name: core
      num_bytes: 17683719490
      num_examples: 50000
    - name: duc2003
      num_bytes: 244384744
      num_examples: 624
    - name: validation
      num_bytes: 342668783
      num_examples: 1000
    - name: test
      num_bytes: 1411039659
      num_examples: 4000
  download_size: 19837902893
  dataset_size: 19681812676
configs:
  - config_name: default
    data_files:
      - split: core
        path: data/core-*
      - split: duc2003
        path: data/duc2003-*
      - split: validation
        path: data/validation-*
      - split: test
        path: data/test-*
---

# Mega-SSum

- A large-scale English sentence-wise speech summarization (Sen-SSum) dataset
  - Consists of more than 3.8M triplets of synthesized speech, transcription, and summary
  - Derived from the Gigaword dataset (Rush+2015)

## Overview

- The dataset is divided into five splits: train, core, validation, test, and duc2003 (see the table below).
  - The "test" split is a newly added evaluation split for in-domain evaluation.
  - The train split is hosted separately: MegaSSum(train).
| orig. data | split      | #samples  | #speakers | total dur. (hrs) | avg. dur. (sec) | CR* (%) |
|------------|------------|----------:|----------:|-----------------:|----------------:|--------:|
| Gigaword   | train      | 3,800,000 | 2,559     | 11,678.2         | 11.1            | 26.2    |
| Gigaword   | core       | 50,000    | 2,559     | 154.6            | 11.1            | 25.8    |
| Gigaword   | validation | 1,000     | 96        | 3.0              | 10.7            | 25.1    |
| Gigaword   | test       | 4,000     | 80        | 12.5             | 11.2            | 24.1    |
| DUC2003    | duc2003    | 624       | 80        | 2.1              | 12.2            | 27.5    |

\*CR (compression rate, %) = #words in summary / #words in transcription \* 100. Lower values indicate shorter summaries.
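
For quick inspection, the splits can be loaded with the 🤗 `datasets` library. Below is a minimal sketch that streams the core split and recomputes CR for one example; the Hub repo id is an assumption (adjust it to the actual path of this dataset):

```python
# A minimal loading sketch. The repo id "komats/mega-ssum" is assumed;
# substitute the actual Hub path. Streaming avoids downloading ~18 GB.
from datasets import load_dataset

core = load_dataset("komats/mega-ssum", split="core", streaming=True)

# Note: decoding the `audio` column requires an audio backend
# (e.g., soundfile or torchcodec, depending on your `datasets` version).
example = next(iter(core))
transcription = example["transcription"]
summary = example["summary"]

# CR (%) = #words in summary / #words in transcription * 100
cr = 100 * len(summary.split()) / len(transcription.split())
print(f"summary: {summary}")
print(f"CR: {cr:.1f}%")
```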

## Notes

- The core set is identical to the first 50k samples of the train split.
  - Because the train split is very large, you may train your model and report results using only the core set.
  - Using the entire train split is generally not recommended unless there is a special reason (e.g., to investigate the upper bound).
- The duc2003 split has four reference summaries for each speech sample; you can report the best of the four scores (see the scoring sketch after this list).
- Spoken sentences were generated using VITS (Kim+2021) trained on LibriTTS-R (Koizumi+2023).
- More details and experiments on this dataset can be found in the accompanying paper (Matsuura+2024).
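
Best-of-four scoring on duc2003 can look like the following hedged sketch. The metric choice (ROUGE-L via the `evaluate` library) and the `hypothesis` argument are illustrative assumptions; only "report the best of the four scores" comes from this card:

```python
# A hedged best-of-four scoring sketch for the duc2003 split. ROUGE-L
# from the `evaluate` library is an illustrative metric choice, not a
# prescribed protocol.
import evaluate

rouge = evaluate.load("rouge")

def best_of_four(hypothesis: str, example: dict) -> float:
    """Score one hypothesis against all four references; keep the best."""
    references = [example["summary"], example["summary1"],
                  example["summary2"], example["summary3"]]
    scores = [
        rouge.compute(predictions=[hypothesis], references=[ref])["rougeL"]
        for ref in references
    ]
    return max(scores)
```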

## Citation

- This dataset (Matsuura+2024):

```bibtex
@inproceedings{matsuura24_interspeech,
  title     = {{Sentence-wise Speech Summarization}: Task, Datasets, and End-to-End Modeling with LM Knowledge Distillation},
  author    = {Kohei Matsuura and Takanori Ashihara and Takafumi Moriya and Masato Mimura and Takatomo Kano and Atsunori Ogawa and Marc Delcroix},
  year      = {2024},
  booktitle = {Interspeech 2024},
  pages     = {1945--1949},
}
```
    
- The Gigaword dataset (Rush+2015):

```bibtex
@inproceedings{Rush_2015,
  title     = {A Neural Attention Model for Abstractive Sentence Summarization},
  author    = {Rush, Alexander M. and Chopra, Sumit and Weston, Jason},
  booktitle = {Proceedings of the 2015 Conference on Empirical Methods in Natural Language Processing},
  year      = {2015},
}
```
    
- VITS TTS (Kim+2021):

```bibtex
@inproceedings{pmlr-v139-kim21f,
  title     = {Conditional Variational Autoencoder with Adversarial Learning for End-to-End Text-to-Speech},
  author    = {Kim, Jaehyeon and Kong, Jungil and Son, Juhee},
  booktitle = {Proceedings of the 38th International Conference on Machine Learning},
  pages     = {5530--5540},
  year      = {2021},
}
```
    
- LibriTTS-R (Koizumi+2023):

```bibtex
@inproceedings{koizumi23_interspeech,
  title     = {{LibriTTS-R}: A Restored Multi-Speaker Text-to-Speech Corpus},
  author    = {Yuma Koizumi and Heiga Zen and Shigeki Karita and Yifan Ding and Kohei Yatabe and Nobuyuki Morioka and Michiel Bacchiani and Yu Zhang and Wei Han and Ankur Bapna},
  booktitle = {Proc. INTERSPEECH 2023},
  year      = {2023},
  pages     = {5496--5500},
}
```