Update README.md
README.md (changed)
@@ -65,7 +65,8 @@ configs:
 
 # Notes
 - The core set is identical to the first 50k samples of the train split.
-- You may train your model and report the results only with the core set because the train split is
+- You may train your model and report the results only with the core set because the train split is very large.
+- Using the entire train split is generally not recommended unless there are special reasons (e.g., to investigate the upper bound).
 - The duc2003 split has four reference summaries for each speech. You can report the best score from 4 scores.
 - Spoken sentences were generated using VITS [Kim+2021](https://proceedings.mlr.press/v139/kim21f.html) trained with LibriTTS-R [Koizumi+2023](https://www.isca-archive.org/interspeech_2023/koizumi23_interspeech.html).
 - More details and some experiments on this dataset can be found [here](https://www.isca-archive.org/interspeech_2024/matsuura24_interspeech.html#).
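
For reference, here is a minimal sketch of loading the core set described in the notes above, assuming the dataset is hosted on the Hugging Face Hub and loadable with the `datasets` library. The repository id (`user/speech-summarization-corpus`) is a placeholder, not the real identifier; split names other than `train` are also assumptions.

```python
from datasets import load_dataset

# The core set is defined as the first 50k examples of the train split,
# so split slicing selects exactly that subset.
core = load_dataset("user/speech-summarization-corpus", split="train[:50000]")

# The full train split is only needed for special cases such as
# upper-bound experiments.
# full_train = load_dataset("user/speech-summarization-corpus", split="train")

# Evaluation split with four reference summaries per speech (name assumed).
duc2003 = load_dataset("user/speech-summarization-corpus", split="duc2003")
```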
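The "best score from 4 scores" note could be implemented as below, assuming ROUGE is the metric (the README does not name one) and using the `rouge-score` package; the reference field layout is also an assumption.

```python
from rouge_score import rouge_scorer

scorer = rouge_scorer.RougeScorer(["rouge1", "rouge2", "rougeL"], use_stemmer=True)

def best_of_four(prediction: str, references: list[str]) -> dict:
    """Score the prediction against each reference and keep the best F1 per metric."""
    best = {m: 0.0 for m in ["rouge1", "rouge2", "rougeL"]}
    for ref in references:
        scores = scorer.score(ref, prediction)  # signature: score(target, prediction)
        for m, s in scores.items():
            best[m] = max(best[m], s.fmeasure)
    return best

# Dummy example; in practice the four references come from the duc2003 split.
print(best_of_four(
    "the cat sat on the mat",
    ["a cat sat on a mat", "the cat is on the mat", "cats on mats", "a feline on the rug"],
))
```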