| """ | |
| This file contains the text content for the leaderboard client. | |
| """ | |
| HEADER_MARKDOWN = """ | |
| # EMMA JSALT25 Benchmark – Multi-Talker ASR Evaluation | |
| Welcome to the official leaderboard for benchmarking **multi-talker ASR systems**, hosted by the **EMMA JSALT25 team**. This platform enables model submissions, comparisons, and evaluation on challenging multi-speaker scenarios. | |
| """ | |
LEADERBOARD_TAB_TITLE_MARKDOWN = """
## Leaderboard

Below you’ll find the latest results submitted to the benchmark. Models are evaluated using **`meeteval`** with **TCP-WER (collar=5s)**.
"""
SUBMISSION_TAB_TITLE_MARKDOWN = """
## Submit Your Model

To submit your MT-ASR hypothesis to the benchmark, complete the form below:

- **Submitted by**: Your name or team identifier.
- **Model ID**: A unique identifier for your submission (used to track models on the leaderboard).
- **Hypothesis File**: Upload a **SegLST `.json` file** that includes **all segments across datasets** in a single list.
- **Task**: Choose the evaluation task (e.g., single-channel ground-truth diarization).
- **Datasets**: Select one or more datasets you wish to evaluate on.
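
A minimal hypothesis file is a single JSON list of segment dictionaries. The field names below follow `meeteval`'s SegLST convention (times in seconds); the `session_id` and `speaker` values are illustrative placeholders:

```json
[
  {
    "session_id": "session0",
    "speaker": "spk-1",
    "start_time": 0.0,
    "end_time": 4.2,
    "words": "hello how are you"
  },
  {
    "session_id": "session0",
    "speaker": "spk-2",
    "start_time": 3.8,
    "end_time": 6.5,
    "words": "fine thanks"
  }
]
```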

📩 To enable submission, please [email the EMMA team](mailto:[email protected]) to receive a **submission token**.

After clicking **Submit**, your model will be evaluated and the results will appear on the leaderboard.
"""
RANKING_AFTER_SUBMISSION_MARKDOWN = """
📊 Below is how your model compares after evaluation:
"""
SUBMISSION_DETAILS_MARKDOWN = """
⚠️ Are you sure you want to finalize your submission? This action is **irreversible**.
"""
MORE_DETAILS_MARKDOWN = """
## Model Metadata

Detailed information about the selected submission.
"""
MODAL_SUBMIT_MARKDOWN = """
✅ Confirm Submission

Are you ready to submit your model for evaluation?
"""