---
license: apache-2.0
library_name: transformers
datasets:
- kde4
widget:
- text: Hi! How are you?
---
## Model Summary
dataequity-opus-mt-en-es is a Transformer-based English-to-Spanish translation model, fine-tuned on the KDE4 dataset. The base model is Helsinki-NLP/opus-mt-en-es.
The model has not been fine-tuned with reinforcement learning from human feedback. It is released as an open-source, unrestricted small model so that the research community can explore vital safety challenges such as reducing toxicity, understanding societal biases, and improving controllability. A hypothetical sketch of the fine-tuning setup is shown after the language details below.
### eng-spa
* source group: English
* target group: Spanish
* model: transformer
* source language(s): en
* target language(s): es
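### Fine-tuning Sketch
The exact training configuration is not published in this card. The snippet below is a minimal, hypothetical sketch of how a MarianMT model can be fine-tuned on the kde4 dataset with the transformers Seq2SeqTrainer; the preprocessing and all hyperparameters (max_length, learning rate, batch size, epochs) are illustrative assumptions, not the settings actually used for this model.
```python
# Hypothetical fine-tuning sketch; hyperparameters are illustrative only.
from datasets import load_dataset
from transformers import (
    MarianMTModel,
    MarianTokenizer,
    DataCollatorForSeq2Seq,
    Seq2SeqTrainer,
    Seq2SeqTrainingArguments,
)

base_model = "Helsinki-NLP/opus-mt-en-es"
tokenizer = MarianTokenizer.from_pretrained(base_model)
model = MarianMTModel.from_pretrained(base_model)

# kde4 provides aligned English-Spanish sentence pairs (train split only).
raw = load_dataset("kde4", lang1="en", lang2="es")

def preprocess(batch):
    # Each example holds a {"en": ..., "es": ...} translation pair.
    inputs = [pair["en"] for pair in batch["translation"]]
    targets = [pair["es"] for pair in batch["translation"]]
    return tokenizer(inputs, text_target=targets, max_length=128, truncation=True)

tokenized = raw["train"].map(
    preprocess, batched=True, remove_columns=raw["train"].column_names
)

args = Seq2SeqTrainingArguments(
    output_dir="opus-mt-en-es-kde4",  # assumed values for illustration
    learning_rate=2e-5,
    per_device_train_batch_size=32,
    num_train_epochs=3,
)

trainer = Seq2SeqTrainer(
    model=model,
    args=args,
    train_dataset=tokenized,
    data_collator=DataCollatorForSeq2Seq(tokenizer, model=model),
)
trainer.train()
```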
### Inference Code
```python
from transformers import MarianMTModel, MarianTokenizer

hub_repo_name = 'sandeepsundaram/dataequity-opus-mt-en-es'
tokenizer = MarianTokenizer.from_pretrained(hub_repo_name)
finetuned_model = MarianMTModel.from_pretrained(hub_repo_name)

questions = [
    "How are the first days of each season chosen?",
    "Why are laws requiring identification for voting scrutinized by the media?",
    "Why aren't there many new operating systems being created?"
]

# Tokenize the batch (padding gives all inputs one tensor shape),
# generate translations, then decode the output ids back to text.
inputs = tokenizer(questions, return_tensors="pt", padding=True)
translated = finetuned_model.generate(**inputs)
print([tokenizer.decode(t, skip_special_tokens=True) for t in translated])
```
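For quick experiments, the same checkpoint can also be loaded through the transformers pipeline helper, which wraps tokenization, generation, and decoding in one call:
```python
from transformers import pipeline

# Loads the fine-tuned checkpoint and its tokenizer together.
translator = pipeline("translation", model="sandeepsundaram/dataequity-opus-mt-en-es")
print(translator("Hi! How are you?"))
```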