---
title: WER Evaluation Tool
emoji: 🎯
colorFrom: blue
colorTo: red
sdk: gradio
sdk_version: 5.16.0
app_file: app.py
pinned: false
---

# WER Evaluation Tool

This Gradio app provides a user-friendly interface for calculating Word Error Rate (WER) and related metrics between reference and hypothesis texts. It is particularly useful for evaluating speech recognition or machine translation outputs.

## Features

- Calculate WER, MER, WIL, and WIP metrics
- Text normalization options
- Custom word filtering
- Detailed error analysis
- Example inputs for testing
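
For orientation, all four metrics can be derived from a word-level alignment between reference and hypothesis. The sketch below is a self-contained illustration using the standard definitions (Levenshtein alignment over words, counting substitutions S, deletions D, insertions I, and hits H); it is not the app's internal code.

```python
def word_error_counts(reference: str, hypothesis: str):
    """Align reference and hypothesis words with Levenshtein distance,
    then backtrack to count substitutions, deletions, insertions, hits."""
    ref, hyp = reference.split(), hypothesis.split()
    m, n = len(ref), len(hyp)
    # Standard edit-distance DP table over words.
    d = [[0] * (n + 1) for _ in range(m + 1)]
    for i in range(m + 1):
        d[i][0] = i
    for j in range(n + 1):
        d[0][j] = j
    for i in range(1, m + 1):
        for j in range(1, n + 1):
            sub = d[i - 1][j - 1] + (ref[i - 1] != hyp[j - 1])
            d[i][j] = min(sub, d[i - 1][j] + 1, d[i][j - 1] + 1)
    # Backtrack to recover the operation counts.
    i, j = m, n
    S = D = I = H = 0
    while i > 0 or j > 0:
        if i > 0 and j > 0 and ref[i - 1] == hyp[j - 1] and d[i][j] == d[i - 1][j - 1]:
            H += 1; i -= 1; j -= 1
        elif i > 0 and j > 0 and d[i][j] == d[i - 1][j - 1] + 1:
            S += 1; i -= 1; j -= 1
        elif i > 0 and d[i][j] == d[i - 1][j] + 1:
            D += 1; i -= 1
        else:
            I += 1; j -= 1
    return S, D, I, H

def metrics(reference: str, hypothesis: str) -> dict:
    """Compute WER, MER, WIL, WIP from the alignment counts.
    Assumes a non-empty reference."""
    S, D, I, H = word_error_counts(reference, hypothesis)
    n_ref = S + D + H  # reference word count
    n_hyp = S + I + H  # hypothesis word count
    wip = (H / n_ref) * (H / n_hyp) if H else 0.0
    return {
        "wer": (S + D + I) / n_ref,
        "mer": (S + D + I) / (S + D + I + H),
        "wil": 1.0 - wip,
        "wip": wip,
    }
```

For example, `metrics("a b c d", "a x c")` gives a WER of 0.5 (one substitution plus one deletion over four reference words). Note that WER can exceed 1.0 when the hypothesis contains many insertions, while MER is always bounded by 1.0.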
## How to Use

1. Enter or paste your reference text
2. Enter or paste your hypothesis text
3. Configure options (normalization, word filtering)
4. Click "Calculate WER" to see results
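
The normalization and word-filtering options in step 3 can be sketched as follows. The function name, flags, and regex here are illustrative assumptions, not the app's actual API:

```python
import re

def normalize(text: str, lowercase: bool = True, strip_punct: bool = True,
              filter_words=None) -> str:
    """Illustrative preprocessing: lowercase, strip punctuation,
    and drop any words in the (optional) filter set before scoring."""
    if lowercase:
        text = text.lower()
    if strip_punct:
        # Keep word characters, whitespace, and apostrophes.
        text = re.sub(r"[^\w\s']", "", text)
    words = text.split()
    if filter_words:
        words = [w for w in words if w not in filter_words]
    return " ".join(words)
```

For instance, `normalize("Hello, World!")` returns `"hello world"`, and passing `filter_words={"uh", "um"}` removes filler words before the WER calculation. Applying the same normalization to both reference and hypothesis keeps the comparison fair.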
## Local Development

1. Clone the repository:

   ```bash
   git clone https://github.com/yourusername/wer-evaluation-tool.git
   cd wer-evaluation-tool
   ```

2. Create and activate a virtual environment using `uv`:

   ```bash
   uv venv
   source .venv/bin/activate  # On Unix/macOS
   # or
   .venv\Scripts\activate     # On Windows
   ```

3. Install dependencies:

   ```bash
   uv pip install -r requirements.txt
   ```

4. Run the app locally:

   ```bash
   uv run python app_gradio.py
   ```
## Installation

You can install the package directly from PyPI:

```bash
uv pip install wer-evaluation-tool
```
## Testing

Run the test suite using pytest:

```bash
uv run pytest tests/
```
## Contributing

1. Fork the repository
2. Create a new branch (`git checkout -b feature/improvement`)
3. Make your changes
4. Run the tests to ensure everything still passes
5. Commit your changes (`git commit -am 'Add new feature'`)
6. Push the branch (`git push origin feature/improvement`)
7. Open a Pull Request
## License

This project is licensed under the MIT License; see the [LICENSE](LICENSE) file for details.
## Acknowledgments

- Thanks to all contributors who have helped with development
- Inspired by the need for better speech recognition evaluation tools
- Built with [Gradio](https://gradio.app/)
## Contact

For questions or feedback, please:

- Open an issue in the GitHub repository
- Contact the maintainers at [email/contact information]
## Citation

If you use this tool in your research, please cite:

```bibtex
@software{wer_evaluation_tool,
  title  = {WER Evaluation Tool},
  author = {Your Name},
  year   = {2024},
  url    = {https://github.com/yourusername/wer-evaluation-tool}
}
```