Dataset used to train lmqg/mt5-base-ruquad-qg
- lmqg/qg_ruquad
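
For reference, a minimal sketch of loading this dataset with the Hugging Face `datasets` library; the split names ("train", "validation", "test") are assumed to follow the usual layout rather than taken from this card.

```python
# Minimal sketch: load the lmqg/qg_ruquad dataset used to train this model.
# Split names are assumed to follow the standard train/validation/test layout.
from datasets import load_dataset

dataset = load_dataset("lmqg/qg_ruquad")
print(dataset)              # inspect the available splits and columns
print(dataset["train"][0])  # peek at one training example
```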
Evaluation results
All scores are self-reported on the lmqg/qg_ruquad dataset.

| Metric | Task | Score |
|---|---|---|
| BLEU4 | Question Generation | 17.630 |
| ROUGE-L | Question Generation | 33.020 |
| METEOR | Question Generation | 28.480 |
| BERTScore | Question Generation | 85.820 |
| MoverScore | Question Generation | 64.560 |
| QAAlignedF1Score-BERTScore | Question & Answer Generation (with Gold Answer) | 91.100 |
| QAAlignedRecall-BERTScore | Question & Answer Generation (with Gold Answer) | 91.090 |
| QAAlignedPrecision-BERTScore | Question & Answer Generation (with Gold Answer) | 91.110 |
| QAAlignedF1Score-MoverScore | Question & Answer Generation (with Gold Answer) | 70.060 |
| QAAlignedRecall-MoverScore | Question & Answer Generation (with Gold Answer) | 70.040 |
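
As a usage illustration (not taken from this card), here is a hedged sketch of running question generation with the `transformers` text2text-generation pipeline. The input format, with the answer span wrapped in `<hl>` tokens inside the passage, is an assumption based on the usual lmqg question-generation convention; the Russian passage is a made-up example.

```python
# Hedged sketch: question generation with the transformers pipeline.
# The <hl>-highlighting of the answer span is an assumed input convention;
# check the model card's own usage section for the exact expected format.
from transformers import pipeline

qg = pipeline("text2text-generation", model="lmqg/mt5-base-ruquad-qg")

# Illustrative Russian passage with the target answer highlighted by <hl> tokens.
context = (
    "Москва — столица России. В городе проживает более "
    "<hl> двенадцати миллионов <hl> человек."
)

print(qg(context, max_length=64)[0]["generated_text"])
```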