This model is part of the GrammarCorrector tool.
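A minimal usage sketch with Hugging Face `transformers`; the example sentence is illustrative, and the card does not document whether the checkpoint expects a task prefix:

```python
from transformers import AutoModelForSeq2SeqLM, AutoTokenizer

# Load the fine-tuned grammar-correction checkpoint from the Hub.
tokenizer = AutoTokenizer.from_pretrained("akhmat-s/t5-base-grammar-corrector")
model = AutoModelForSeq2SeqLM.from_pretrained("akhmat-s/t5-base-grammar-corrector")

text = "She go to school every days."
inputs = tokenizer(text, return_tensors="pt")

# Generate the corrected sentence.
outputs = model.generate(**inputs, max_new_tokens=64)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```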
"FlanT5 from scratch for the grammar correction tool" article about how this models was trained:
FlanT5 was trained using JFLEG dataset. The primary objective of the experiment was to develop a highly effective tool using relatively small models, minimal datasets, and constrained computational resources.
To accomplish this goal, we implemented two key strategies:
- Perplexity-Based Data Pruning With Small Reference Models (see the first sketch after this list).
- A simple sampling-and-voting method for multiple LLM agents (see the second sketch after this list).
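A hedged sketch of perplexity-based data pruning. The article excerpt does not name the reference model or threshold, so the small model (`gpt2`), the cutoff value, and the toy dataset below are all illustrative assumptions:

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

# Hypothetical small reference model; not necessarily the one used in the experiment.
ref_tokenizer = AutoTokenizer.from_pretrained("gpt2")
ref_model = AutoModelForCausalLM.from_pretrained("gpt2")
ref_model.eval()

def perplexity(text: str) -> float:
    """Score a sentence with the reference model's language-modeling loss."""
    enc = ref_tokenizer(text, return_tensors="pt")
    with torch.no_grad():
        loss = ref_model(**enc, labels=enc["input_ids"]).loss
    return torch.exp(loss).item()

# Toy examples standing in for the training corpus.
dataset = [
    {"target": "She goes to school every day."},
    {"target": "asdf qwer zxcv uiop."},
]

# Drop examples the reference model finds implausible; the threshold is illustrative.
THRESHOLD = 100.0
pruned = [ex for ex in dataset if perplexity(ex["target"]) < THRESHOLD]
```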
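A hedged sketch of the sampling-and-voting idea: draw several candidate corrections from the model and keep the most frequent one by majority vote. The sampling parameters and vote count are illustrative, not the experiment's settings:

```python
from collections import Counter

from transformers import AutoModelForSeq2SeqLM, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("akhmat-s/t5-base-grammar-corrector")
model = AutoModelForSeq2SeqLM.from_pretrained("akhmat-s/t5-base-grammar-corrector")

def correct_with_voting(text: str, num_samples: int = 5) -> str:
    """Sample several corrections and return the most common one."""
    inputs = tokenizer(text, return_tensors="pt")
    outputs = model.generate(
        **inputs,
        do_sample=True,          # stochastic sampling so the candidates differ
        top_p=0.9,               # illustrative sampling parameter
        max_new_tokens=64,
        num_return_sequences=num_samples,
    )
    candidates = [tokenizer.decode(o, skip_special_tokens=True) for o in outputs]
    # Majority vote over exact-match strings; ties fall to the first candidate seen.
    return Counter(candidates).most_common(1)[0][0]

print(correct_with_voting("She go to school every days."))
```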
Model tree for akhmat-s/t5-base-grammar-corrector:
- Base model: google-t5/t5-base