---
license: mit
datasets:
- saillab/taco-datasets
language:
- ar
- en
---
# Arabic Translator: Machine Learning Model

This repository contains a machine learning model designed to translate text into Arabic. The model is trained on a custom dataset and fine-tuned to optimize translation accuracy while balancing training and validation performance.

## 📄 Overview

The model is built using deep learning techniques to translate text effectively. It was trained and validated using loss metrics to monitor performance over multiple epochs. The training process is visualized through loss curves that demonstrate learning progress and highlight overfitting challenges.

### Key Features

- **Language support:** translates text into Arabic.
- **Model architecture:** based on [model architecture used, e.g., Transformer, RNN, etc.].
- **Preprocessing:** tokenization and encoding steps for handling Arabic script (see the sketch below).
- **Evaluation:** monitored with training and validation loss for consistent improvement.
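As a minimal illustration of the preprocessing step, the sketch below round-trips Arabic text through the tokenizer's encode/decode path. It assumes the tokenizer published with this repository loads via `AutoTokenizer`:

```python
from transformers import AutoTokenizer

# Assumption: the tokenizer shipped with this repo loads via AutoTokenizer
tokenizer = AutoTokenizer.from_pretrained("MounikaAithagoni/Traanslator")

# Arabic script round-trips through the encode/decode path
text = "مرحبا، كيف حالك؟"
ids = tokenizer(text)["input_ids"]
print(ids)                                              # encoded token IDs
print(tokenizer.decode(ids, skip_special_tokens=True))  # decoded back to text
```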



## 🚀 How to Use

### Installation

Clone this repository:

```bash
git clone https://huggingface.co/MounikaAithagoni/Traanslator
cd Traanslator
```

Install dependencies:

```bash
pip install -r requirements.txt
```

### Model Inference

The card leaves the exact model class unspecified; the example below assumes a sequence-to-sequence model that loads via `AutoModelForSeq2SeqLM`:

```python
from transformers import AutoModelForSeq2SeqLM, AutoTokenizer

# Load the model and tokenizer by repo id (not the full URL)
model = AutoModelForSeq2SeqLM.from_pretrained("MounikaAithagoni/Traanslator")
tokenizer = AutoTokenizer.from_pretrained("MounikaAithagoni/Traanslator")

# Translate a sample sentence
text = "Hello, how are you?"
inputs = tokenizer(text, return_tensors="pt")
outputs = model.generate(**inputs)
translation = tokenizer.decode(outputs[0], skip_special_tokens=True)
print(f"Translation: {translation}")
```


## 🧑‍💻 Training Details

- **Training loss:** decreased steadily across epochs, indicating effective learning.
- **Validation loss:** decreased initially but plateaued later, suggesting overfitting beyond epoch 5.
- **Epochs:** trained for 10 epochs with an early stopping mechanism (sketched below).
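The card does not publish the exact stopping rule, so the following is a minimal sketch of a patience-based early-stopping check on validation loss; the `patience` value and the loss numbers are illustrative assumptions:

```python
def early_stop_epoch(val_losses, patience=2):
    """Return the 1-based epoch at which training would stop, given
    per-epoch validation losses and a no-improvement patience window."""
    best, stale = float("inf"), 0
    for epoch, loss in enumerate(val_losses, start=1):
        if loss < best:
            best, stale = loss, 0
        else:
            stale += 1
            if stale >= patience:
                return epoch
    return len(val_losses)

# Illustrative numbers only: validation loss improves through epoch 5,
# then plateaus, so training stops at epoch 7.
print(early_stop_epoch([2.1, 1.7, 1.4, 1.2, 1.1, 1.1, 1.12, 1.13]))  # -> 7
```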


πŸ“ Dataset
 https://huggingface.co/datasets/saillab/taco-datasets/tree/main/multilingual-instruction-tuning-dataset%20/multilingual-alpaca-52k-gpt-4Links to an external site. 
The model was trained on a custom dataset tailored for Arabic translation. Preprocessing steps included:

Tokenizing and encoding text data.
Splitting into training and validation sets.
For details on the dataset format, refer to the data/ folder.
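As an illustration of the split step, the sketch below uses the 🤗 `datasets` library on a locally downloaded JSON file; the file name is a hypothetical placeholder, not the card's actual path:

```python
from datasets import load_dataset

# Hypothetical file name: point this at the JSON actually downloaded
# from the TaCo dataset linked above.
ds = load_dataset("json", data_files="data/arabic_alpaca_52k.json", split="train")

# 90/10 train/validation split with a fixed seed for reproducibility
splits = ds.train_test_split(test_size=0.1, seed=42)
train_ds, val_ds = splits["train"], splits["test"]
print(len(train_ds), len(val_ds))
```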

## 📊 Evaluation

- **Metrics:** training and validation loss, monitored per epoch (see the sketch below).
- **Performance:** good generalization initially, with validation loss rising slightly after the 5th epoch, signaling overfitting.
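A per-epoch monitoring step of the kind described above could look like this sketch; it assumes batches already contain `labels`, in which case Hugging Face seq2seq models return a loss from the forward pass:

```python
import torch

def validation_loss(model, val_loader, device="cpu"):
    """Average loss over a DataLoader of tokenized batches with labels."""
    model.eval()
    total, batches = 0.0, 0
    with torch.no_grad():
        for batch in val_loader:
            batch = {k: v.to(device) for k, v in batch.items()}
            total += model(**batch).loss.item()  # loss returned when labels are present
            batches += 1
    return total / max(batches, 1)
```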


## 🔧 Future Improvements

- Apply techniques that address overfitting, such as regularization or data augmentation (one option is sketched below).
- Fine-tune on larger, more diverse datasets for better generalization.
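For the regularization item, one concrete option is weight decay plus label smoothing via the trainer configuration. This is a sketch under assumed hyperparameters, not the card's actual settings, and `eval_strategy` requires a recent `transformers` release:

```python
from transformers import Seq2SeqTrainingArguments

# Assumed hyperparameters, for illustration only
args = Seq2SeqTrainingArguments(
    output_dir="arabic-translator-regularized",
    num_train_epochs=10,
    weight_decay=0.01,            # L2-style penalty on weights
    label_smoothing_factor=0.1,   # softens one-hot targets
    eval_strategy="epoch",        # evaluate once per epoch
    save_strategy="epoch",
    load_best_model_at_end=True,  # keep the checkpoint with lowest eval loss
)
```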