---
title: WER Evaluation Tool
emoji: 🎯
colorFrom: blue
colorTo: red
sdk: gradio
sdk_version: 5.16.0
app_file: app.py
pinned: false
---
# WER Evaluation Tool
This Gradio app provides a user-friendly interface for calculating Word Error Rate (WER) and related metrics between reference and hypothesis texts. It's particularly useful for evaluating speech recognition or machine translation outputs.
## Features

- Calculate WER (Word Error Rate), MER (Match Error Rate), WIL (Word Information Lost), and WIP (Word Information Preserved)
- Text normalization options
- Custom word filtering
- Detailed error analysis
- Example inputs for testing
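These measures follow the standard definitions; WER, for instance, is (S + D + I) / N, where S, D, and I are substitution, deletion, and insertion counts against a reference of N words. The app's exact implementation isn't shown here, but comparable numbers can be reproduced with the `jiwer` library, a common choice for this task. A minimal sketch, assuming a jiwer-style normalization pipeline (the specific transforms are illustrative):

```python
import jiwer

reference = "the quick brown fox jumps over the lazy dog"
hypothesis = "the quick brown fox jumped over a lazy dog"

# Normalization mirroring the app's "text normalization" option (illustrative)
transform = jiwer.Compose([
    jiwer.ToLowerCase(),
    jiwer.RemovePunctuation(),
    jiwer.RemoveMultipleSpaces(),
    jiwer.Strip(),
    jiwer.ReduceToListOfListOfWords(),
])

# process_words computes WER, MER, WIL, and WIP in a single alignment pass
output = jiwer.process_words(
    reference,
    hypothesis,
    reference_transform=transform,
    hypothesis_transform=transform,
)

print(f"WER: {output.wer:.3f}  MER: {output.mer:.3f}  "
      f"WIL: {output.wil:.3f}  WIP: {output.wip:.3f}")

# Per-word error breakdown (substitutions, deletions, insertions)
print(jiwer.visualize_alignment(output))
```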
## How to Use

1. Enter or paste your reference text
2. Enter or paste your hypothesis text
3. Configure options (normalization, word filtering)
4. Click "Calculate WER" to see results
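For orientation, here is a stripped-down sketch of how an interface like this can be wired up in Gradio. It is not the actual `app.py`; the component labels and the single lowercase toggle are assumptions for illustration:

```python
import gradio as gr
import jiwer

def evaluate(reference: str, hypothesis: str, lowercase: bool) -> str:
    # Optional normalization before scoring (illustrative)
    if lowercase:
        reference, hypothesis = reference.lower(), hypothesis.lower()
    out = jiwer.process_words(reference, hypothesis)
    return (f"WER: {out.wer:.3f} | MER: {out.mer:.3f} | "
            f"WIL: {out.wil:.3f} | WIP: {out.wip:.3f}")

demo = gr.Interface(
    fn=evaluate,
    inputs=[
        gr.Textbox(label="Reference text"),
        gr.Textbox(label="Hypothesis text"),
        gr.Checkbox(label="Lowercase before scoring", value=True),
    ],
    outputs=gr.Textbox(label="Results"),
    title="WER Evaluation Tool",
)

if __name__ == "__main__":
    demo.launch()
```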
## Local Development

1. Clone the repository:

   ```bash
   git clone https://github.com/yourusername/wer-evaluation-tool.git
   cd wer-evaluation-tool
   ```

2. Create and activate a virtual environment using `uv`:

   ```bash
   uv venv
   source .venv/bin/activate  # On Unix/macOS
   # or
   .venv\Scripts\activate     # On Windows
   ```

3. Install dependencies:

   ```bash
   uv pip install -r requirements.txt
   ```

4. Run the app locally:

   ```bash
   uv run python app_gradio.py
   ```
## Installation

You can install the package directly from PyPI:

```bash
uv pip install wer-evaluation-tool
```
## Testing

Run the test suite using pytest:

```bash
uv run pytest tests/
```
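As a sketch of what a test in `tests/` might look like (the file name, test names, and the assumption that scoring is delegated to `jiwer` are all illustrative, not the repository's actual suite):

```python
# tests/test_wer.py -- illustrative only; real module/function names may differ
import jiwer


def test_identical_texts_have_zero_wer():
    assert jiwer.wer("hello world", "hello world") == 0.0


def test_one_substitution_in_four_words():
    # One substituted word out of four reference words -> WER = 1/4 = 0.25
    assert jiwer.wer("the cat sat down", "the dog sat down") == 0.25
```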
## Contributing

1. Fork the repository
2. Create a new branch (`git checkout -b feature/improvement`)
3. Make your changes
4. Run tests to ensure everything works
5. Commit your changes (`git commit -am 'Add new feature'`)
6. Push to the branch (`git push origin feature/improvement`)
7. Create a Pull Request
## License
This project is licensed under the MIT License - see the LICENSE file for details.
## Acknowledgments
- Thanks to all contributors who have helped with the development
- Inspired by the need for better speech recognition evaluation tools
- Built with Gradio
## Contact
For questions or feedback, please:
- Open an issue in the GitHub repository
- Contact the maintainers at [email/contact information]
## Citation
If you use this tool in your research, please cite:
```bibtex
@software{wer_evaluation_tool,
  title  = {WER Evaluation Tool},
  author = {Your Name},
  year   = {2024},
  url    = {https://github.com/yourusername/wer-evaluation-tool}
}
```