Model Summary
dataequity-opus-mt-en-es is a Transformer-based English-to-Spanish translation model, fine-tuned on the KDE dataset. The base model is Helsinki-NLP/opus-mt-en-es.
Our model has not been fine-tuned through reinforcement learning from human feedback (RLHF). The intention behind releasing this open-source model is to give the research community an unrestricted small model for exploring vital safety challenges, such as reducing toxicity, understanding societal biases, improving controllability, and more.
eng-spa
source group: English
target group: Spanish
model: transformer
source language(s): en
target language(s): es
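Fine-tuning sketch (illustrative):
The fine-tuning recipe itself is not included in this card. The sketch below shows one plausible way to fine-tune the Helsinki-NLP/opus-mt-en-es base model on the OPUS KDE4 English-Spanish corpus with Seq2SeqTrainer; the dataset identifier ("kde4"), hyperparameters, and output directory are assumptions, not the settings actually used for this model.

# Hypothetical fine-tuning sketch; hyperparameters and dataset choice are assumptions.
from datasets import load_dataset
from transformers import (MarianMTModel, MarianTokenizer, DataCollatorForSeq2Seq,
                          Seq2SeqTrainer, Seq2SeqTrainingArguments)

base_model_name = "Helsinki-NLP/opus-mt-en-es"
tokenizer = MarianTokenizer.from_pretrained(base_model_name)
model = MarianMTModel.from_pretrained(base_model_name)

# The OPUS KDE4 corpus is assumed here to be "the KDE dataset" mentioned above
raw = load_dataset("kde4", lang1="en", lang2="es")["train"].train_test_split(test_size=0.1)

def preprocess(batch):
    # Each example holds a {"en": ..., "es": ...} translation pair
    sources = [pair["en"] for pair in batch["translation"]]
    targets = [pair["es"] for pair in batch["translation"]]
    return tokenizer(sources, text_target=targets, max_length=128, truncation=True)

tokenized = raw.map(preprocess, batched=True, remove_columns=raw["train"].column_names)

training_args = Seq2SeqTrainingArguments(
    output_dir="dataequity-opus-mt-en-es",  # hypothetical output directory
    learning_rate=2e-5,
    per_device_train_batch_size=32,
    num_train_epochs=3,
    predict_with_generate=True,
)

trainer = Seq2SeqTrainer(
    model=model,
    args=training_args,
    train_dataset=tokenized["train"],
    eval_dataset=tokenized["test"],
    data_collator=DataCollatorForSeq2Seq(tokenizer, model=model),
)
trainer.train()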
Inference Code:
from transformers import MarianMTModel, MarianTokenizer

hub_repo_name = 'sandeepsundaram/dataequity-opus-mt-en-es'

# Load the fine-tuned tokenizer and model from the Hugging Face Hub
tokenizer = MarianTokenizer.from_pretrained(hub_repo_name)
finetuned_model = MarianMTModel.from_pretrained(hub_repo_name)

questions = [
    "How are the first days of each season chosen?",
    "Why are laws requiring identification for voting scrutinized by the media?",
    "Why aren't there many new operating systems being created?"
]

# Tokenize the batch, generate translations, and decode them back to text
translated = finetuned_model.generate(**tokenizer(questions, return_tensors="pt", padding=True))
print([tokenizer.decode(t, skip_special_tokens=True) for t in translated])
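For quick experiments, the same checkpoint can also be used through the Transformers translation pipeline; this is a convenience alternative to the explicit tokenize/generate/decode steps above, not a separate interface documented for this model.

from transformers import pipeline

# Convenience wrapper around the same checkpoint; equivalent to the explicit
# tokenize/generate/decode steps shown above.
translator = pipeline("translation", model="sandeepsundaram/dataequity-opus-mt-en-es")
print(translator("How are the first days of each season chosen?")[0]["translation_text"])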