---
license: mit
datasets: Hemanth-thunder/en_ta
language:
- ta
- en
widget:
- text: Actor Vijay is competing in the 2026 election.
- text: you have to study well for exams
- text: The Sun is approximately 4.6 billion years old.
pipeline_tag: text2text-generation
---

## Model Details
- **Model Name**: English-Tamil-Translator
- **Model Type**: Sequence-to-sequence transformer (loaded via `AutoModelForSeq2SeqLM`)
- **Languages**: English (en), Tamil (ta)
- **Framework**: Hugging Face Transformers (Python)
- **Task**: English-to-Tamil translation

## How to Use
1. **Install the Transformers Python package** (the command pins `transformers` 4.38.0; a quick version check follows the snippet):
   ```bash
   pip install -q -U transformers==4.38.0
   ```
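
   An optional sanity check, not part of the original card, to confirm that the pinned version is the one Python will import:

   ```python
   # Optional: verify the installed Transformers version
   import transformers

   print(transformers.__version__)  # expect 4.38.0 after the pinned install above
   ```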

## Inference
1. **How to use the model in a notebook** (an alternative using the high-level `pipeline` API is sketched after this snippet):
```python
# Load the tokenizer and seq2seq model directly from the Hugging Face Hub
from transformers import AutoTokenizer, AutoModelForSeq2SeqLM

checkpoint = "Mr-Vicky-01/English-Tamil-Translator"
tokenizer = AutoTokenizer.from_pretrained(checkpoint)
model = AutoModelForSeq2SeqLM.from_pretrained(checkpoint)

def language_translator(text):
    """Translate a single English sentence to Tamil."""
    tokenized = tokenizer([text], return_tensors="pt")
    out = model.generate(**tokenized, max_length=128)
    return tokenizer.decode(out[0], skip_special_tokens=True)

text_to_translate = "i have to play football now!"
output = language_translator(text_to_translate)
print(output)
```
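
Since the card sets `pipeline_tag: text2text-generation`, the same checkpoint can also be driven through the high-level `pipeline` API. The snippet below is a minimal sketch of that alternative; it is not from the original card and relies only on the standard Transformers pipeline interface.

```python
# Alternative: translate via the generic text2text-generation pipeline
# (assumes the checkpoint behaves like any AutoModelForSeq2SeqLM checkpoint,
#  as the card's pipeline_tag suggests)
from transformers import pipeline

translator = pipeline(
    "text2text-generation",
    model="Mr-Vicky-01/English-Tamil-Translator",
)

result = translator("you have to study well for exams", max_length=128)
print(result[0]["generated_text"])  # the Tamil translation
```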