---
license: mit
datasets: Hemanth-thunder/en_ta
language:
- ta
- en
widget:
- text: A room without books is like a body without a soul.
- text: Hard work never fails.
- text: Actor Vijay is contesting the 2026 election.
- text: The Sun is approximately 4.6 billion years old.
pipeline_tag: text2text-generation
---
# English to Tamil Translation Model
This model translates English sentences into Tamil. It is a fine-tuned version of [Mr-Vicky-01/Fine_tune_english_to_tamil](https://huggingface.co/Mr-Vicky-01/Fine_tune_english_to_tamil), available on the Hugging Face model hub.
## About the Authors
This model was developed by [Mr-Vicky](https://huggingface.co/Mr-Vicky-01) in collaboration with [suriya7](https://huggingface.co/suriya7).
## Usage
To use this model, you can either load it directly with the Hugging Face `transformers` library or query it through the Hugging Face Inference API.
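As a minimal sketch of the Inference API route, the request can be built with the standard library alone. The endpoint URL pattern and payload shape follow the usual Hugging Face Inference API convention, and `HF_TOKEN` is a placeholder for your access token; this snippet only constructs the request object, and actually sending it requires a valid token and network access:

```python
import json
import urllib.request

# Standard Hugging Face Inference API endpoint pattern (assumed convention)
API_URL = "https://api-inference.huggingface.co/models/Mr-Vicky-01/English-Tamil-Translator"

def build_request(text, token="HF_TOKEN"):
    """Build an HTTP POST request for a text2text-generation query."""
    payload = json.dumps({"inputs": text}).encode("utf-8")
    return urllib.request.Request(
        API_URL,
        data=payload,
        headers={
            "Authorization": f"Bearer {token}",  # HF_TOKEN is a placeholder
            "Content-Type": "application/json",
        },
    )

req = build_request("A room without books is like a body without a soul.")
# To send: urllib.request.urlopen(req).read() with a real token
print(req.full_url)
```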
## Directly try this model
[Hugging Face Spaces](https://huggingface.co/spaces/Mr-Vicky-01/tamil_translator)
![image/png](https://cdn-uploads.huggingface.co/production/uploads/65af937a30e33d1b60c8772b/5CzurOdTLJ1dvaCUkVWt5.png)
### Model Information
**Training Details**
- Fine-tuned for English-to-Tamil translation
- Training duration: over 10 hours
- Final loss achieved: 0.6

**Model Architecture**
- Based on the Transformer encoder-decoder architecture, optimized for sequence-to-sequence tasks
### Installation
To use this model, you'll need to have the `transformers` library installed. You can install it via pip:
```bash
pip install transformers
```
### Via the Transformers Library
You can use this model in your Python code like this:
```python
# Load the tokenizer and model directly from the Hugging Face Hub
from transformers import AutoTokenizer, AutoModelForSeq2SeqLM

checkpoint = "Mr-Vicky-01/English-Tamil-Translator"
tokenizer = AutoTokenizer.from_pretrained(checkpoint)
model = AutoModelForSeq2SeqLM.from_pretrained(checkpoint)

def language_translator(text):
    """Translate an English sentence into Tamil."""
    tokenized = tokenizer([text], return_tensors='pt')
    out = model.generate(**tokenized, max_length=128)
    return tokenizer.decode(out[0], skip_special_tokens=True)

text_to_translate = "hardwork never fail"
output = language_translator(text_to_translate)
print(output)
```