---
license: mit
datasets:
- vrclc/dakshina-lexicons-ml
- vrclc/Dakshina-romanized-ml
- vrclc/Aksharantar-ml
language:
- ml
metrics:
- cer
- wer
- bleu
pipeline_tag: text2text-generation
model-index:
- name: Malayalam Transliteration
  results:
  - task:
      type: text2text-generation
      name: Transliteration
    dataset:
      name: IndoNLP Test -1
      type: vrclc/IndoNLP-1
      split: test
      args: ml
    metrics:
    - type: cer
      value: 7.4
      name: CER
---
# Malayalam Transliteration
A sequence-to-sequence model for transliterating Romanised Malayalam (Manglish) into the native Malayalam script.
### Model Sources
- **Repository:** https://github.com/VRCLC-DUK/ml-en-transliteration
- **Paper:** https://arxiv.org/abs/2412.09957
- **Demo:** https://huggingface.co/spaces/vrclc/en-ml-transliteration
### Model Description
- **Developed by:** [Bajiyo Baiju](https://huggingface.co/Bajiyo), [Kavya Manohar](https://huggingface.co/kavyamanohar), [Leena G Pillai](https://huggingface.co/leenag)
- **Language(s) (NLP):** Malayalam
- **License:** MIT
- Developed as a shared task submission to the [IndoNLP Workshop](https://indonlp-workshop.github.io/IndoNLP-Workshop/) at [COLING 2025](https://coling2025.org/), Abu Dhabi.
## How to Get Started with the Model
The model requires user-defined tokenizers for the source and target scripts. It is trained on individual words, so if your use case involves transliterating full sentences, split them into words before passing them to the model.
### Load Dependencies
```python
import re

import numpy as np
import tensorflow as tf
from tensorflow.keras.preprocessing.text import Tokenizer
from tensorflow.keras.preprocessing.sequence import pad_sequences
from huggingface_hub import from_pretrained_keras
```
### Load Model
```python
model = from_pretrained_keras("vrclc/transliteration")
```
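The layer name `encoder_input` is used in the next step to read the model's expected input length. If you want to confirm that name on the downloaded model first, a quick optional check:
```python
# Optional sanity check: list the layers of the loaded Keras model and confirm
# an input layer named "encoder_input" exists before querying its shape below.
model.summary()
print([layer.name for layer in model.layers])
```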
### Define Tokens and Input Sequence Length
```python
source_tokens = list('abcdefghijklmnopqrstuvwxyz ')
source_tokenizer = Tokenizer(char_level=True, filters='')
source_tokenizer.fit_on_texts(source_tokens)
target_tokens = [
# Independent vowels
'അ', 'ആ', 'ഇ', 'ഈ', 'ഉ', 'ഊ', 'ഋ', 'ൠ', 'ഌ', 'ൡ', 'എ', 'ഏ', 'ഐ', 'ഒ', 'ഓ', 'ഔ',
# Consonants
'ക', 'ഖ', 'ഗ', 'ഘ', 'ങ', 'ച', 'ഛ', 'ജ', 'ഝ', 'ഞ',
'ട', 'ഠ', 'ഡ', 'ഢ', 'ണ', 'ത', 'ഥ', 'ദ', 'ധ', 'ന',
'പ', 'ഫ', 'ബ', 'ഭ', 'മ', 'യ', 'ര', 'ല', 'വ', 'ശ',
'ഷ', 'സ', 'ഹ', 'ള', 'ഴ', 'റ',
# Chillu letters
'ൺ', 'ൻ', 'ർ', 'ൽ', 'ൾ',
# Additional characters
'ം', 'ഃ', '്',
# Vowel modifiers / Signs
'ാ', 'ി', 'ീ', 'ു', 'ൂ', 'ൃ', 'ൄ', 'െ', 'േ', 'ൈ', 'ൊ', 'ോ', 'ൌ', 'ൗ', ' '
]
target_tokenizer = Tokenizer(char_level=True, filters='')
target_tokenizer.fit_on_texts(target_tokens)
max_seq_length = model.get_layer("encoder_input").input_shape[0][1]
```
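As a quick sanity check that the tokenizers and padding produce what the encoder expects, you can encode a single romanised word by hand; this mirrors what the wrapper function below does (the word here is just an example):
```python
# Illustrative only: encode one romanised word at character level and pad it
# to the encoder's expected input length, as the wrapper function does below.
sample_word = "veedu"  # example input word
sample_sequence = source_tokenizer.texts_to_sequences([sample_word])[0]
sample_padded = pad_sequences([sample_sequence], maxlen=max_seq_length, padding='post')
print(sample_padded.shape)  # (1, max_seq_length)
```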
### Wrapper script to split input sentences to words before passing to the model
```python
def transliterate_with_split_tokens(input_text, model, source_tokenizer, target_tokenizer, max_seq_length):
"""
Transliterates input text in roman script, retains all other characters (including punctuation, spaces, etc.)
"""
# Regular expression to split the text into tokens and non-tokens
tokens_and_non_tokens = re.findall(r"([a-zA-Z]+)|([^a-zA-Z]+)", input_text)
transliterated_text = ""
for token_or_non_token in tokens_and_non_tokens:
token = token_or_non_token[0]
non_token = token_or_non_token[1]
if token:
input_sequence = source_tokenizer.texts_to_sequences([token])[0]
input_sequence_padded = pad_sequences([input_sequence], maxlen=max_seq_length, padding='post')
predicted_sequence = model.predict(input_sequence_padded)
predicted_indices = np.argmax(predicted_sequence, axis=-1)[0]
transliterated_word = ''.join([target_tokenizer.index_word[idx] for idx in predicted_indices if idx != 0])
transliterated_text += transliterated_word
elif non_token:
transliterated_text += non_token
return transliterated_text
```
### Usage
```python
input_text = "ente veedu"
transliterated_text = transliterate_with_split_tokens(input_text, model, source_tokenizer, target_tokenizer, max_seq_length)
print(transliterated_text)
```
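Because the wrapper only sends Latin-script runs to the model, digits, punctuation, and any text already in Malayalam script pass through unchanged. A mixed-input example (illustrative only):
```python
# Illustrative mixed input: Latin-script words are transliterated, while
# punctuation and digits are copied to the output unchanged by the wrapper.
mixed_text = "ente veedu, house no. 42"
print(transliterate_with_split_tokens(mixed_text, model, source_tokenizer,
                                      target_tokenizer, max_seq_length))
```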
## Citation
```bibtex
@article{baiju2024romanized,
title={Romanized to Native Malayalam Script Transliteration Using an Encoder-Decoder Framework},
author={Baiju, Bajiyo and Pillai, Leena G and Manohar, Kavya and Sherly, Elizabeth},
journal={arXiv preprint arXiv:2412.09957},
year={2024}
}
```