metadata (dict) | paper (dict) | review (dict) | citation_count (int64) | normalized_citation_count (int64) | cited_papers (list) | citing_papers (list)
---|---|---|---|---|---|---
{
"id": "Nz9R5l3JRwr",
"year": null,
"venue": "EAMT 2022",
"pdf_link": "https://aclanthology.org/2022.eamt-1.14.pdf",
"forum_link": "https://openreview.net/forum?id=Nz9R5l3JRwr",
"arxiv_id": null,
"doi": null
}
|
{
"title": "On the Interaction of Regularization Factors in Low-resource Neural Machine Translation",
"authors": [
"Àlex R. Atrio",
"Andrei Popescu-Belis"
],
"abstract": null,
"keywords": [],
"raw_extracted_content": "On the Interaction of Regularization Factors\nin Low-resource Neural Machine Translation\n`Alex R. Atrio1,2and Andrei Popescu-Belis1,2\n1HEIG-VD / HES-SO2EPFL\nYverdon-les-Bains Lausanne\nSwitzerland Switzerland\n{alejandro.ramirezatrio, andrei.popescu-belis }@heig-vd.ch\nAbstract\nWe explore the roles and interactions of\nthe hyper-parameters governing regulari-\nzation, and propose a range of values ap-\nplicable to low-resource neural machine\ntranslation. We demonstrate that default\nor recommended values for high-resource\nsettings are not optimal for low-resource\nones, and that more aggressive regulariza-\ntion is needed when resources are scarce,\nin proportion to their scarcity. We ex-\nplain our observations by the generaliza-\ntion abilities of sharp vs. flat basins in the\nloss landscape of a neural network. Re-\nsults for four regularization factors corrob-\norate our claim: batch size, learning rate,\ndropout rate, and gradient clipping. More-\nover, we show that optimal results are ob-\ntained when using several of these fac-\ntors, and that our findings generalize across\ndatasets of different sizes and languages.\n1 Introduction\nThe training of neural machine translation (NMT)\nmodels is governed by many hyper-parameters,\nwhich play a central role in the performances of\nthe trained models, especially their generalization\nabilities. While most of the NMT frameworks rec-\nommend default values for the hyper-parameters,\nwhen it comes to low-resource settings, fewer\nguidelines are available.\nThis study systematically explores the roles and\ninteractions of a subset of hyper-parameters in\nlow-resource NMT settings, namely those acting\n© 2022 The authors. This article is licensed under a Creative\nCommons 3.0 licence, no derivative works, attribution, CC-\nBY-ND.asregularization factors . Regularizers do not fall\nunder a single theoretical definition: Goodfellow\net al. (2016, page 224) view them as a collection\nof methods “intended to reduce generalization er-\nror but not training error.” We present here a uni-\nfied perspective on several regularizers which act\nupon the estimation of the gradients during back-\npropagation. Using the distinction made by Keskar\net al. (2016) between flat and sharp basins in the\nloss landscape, we argue that noisier estimates of\nthe gradients can increase the likelihood of find-\ning flatter minima, which have better generaliza-\ntion abilities. Specifically, we defend three claims:\n1. NMT models benefit from more aggressive re-\ngularization when the amount of training data is\nsmall. We demonstrate this for four different reg-\nularizers: batch size, learning rate, dropout, and\ngradient clipping. We compare the default regu-\nlarization hyper-parameters of the OpenNMT-py\nframework for mid-to-high resources – compara-\nble to those of the original Transformer (Vaswani\net al., 2017) – with the ones we optimized for a\nlow-resource setting (Sections 4-7).\n2. The combination of different regularization\nsources is preferable over their individual use.\nWhen used together, an amount of regularization\nfrom each of the four factors under study outper-\nforms the use of any single one alone, and the best\nscores are robust with respect to the variation of\neach factor (Section 8).\n3. Regularization factors optimized on one low-\nresource dataset are beneficial for low-resource\ndatasets in other languages, and benefit from more\naggressive regularization as the amount of training\ndata decreases. 
We demonstrate this by comparing\nour default and optimized settings on data samples\nof varying sizes from our main corpus and four ad-\nditional low-size datasets (Section 9).\n2 Background and Related Work\n2.1 Regularizers and the Loss Landscape\nIn the absence of a general treatment of regula-\nrization factors, most studies combine them em-\npirically and search only a very small part of the\nhyper-parameter space. Kuka ˇcka et al. (2017) pro-\nvide a taxonomy of regularization factors, but con-\ntinue to define them simply as techniques to im-\nprove generalization. Similarly, in their survey,\nMoradi et al. (2020) consider as regularization any\n“component of the learning process or prediction\nprocedure that is added to alleviate data shortage,”\nbut do not provide a common measure of regulari-\nzation or consider the combination of factors.\nPeng et al. (2015) study regularization tech-\nniques independently as well as in combination,\nstill without a common theoretical underpinning.\nOn two NLP tasks, they observe that using two\nfactors – namely, L2 norm of weights and embed-\ndings, and dropout – is better than using either by\nitself. Moreover, when using both factors, if one is\nset to its optimal value obtained when used alone,\nthe other one must be lowered.\nWe adopt here the perspective put forward by\nKeskar et al. (2016), among others, who explain\nthegeneralization gap between values of regulari-\nzation factors in terms of the topography of the loss\nlandscape. Given a minimum of the loss function,\nthe slower this function varies around its neighbor-\nhood (hence creating flat basins in the topography),\ntheflatter (or less sharp) is the region. Models that\nare optimized in flatter regions tend to generalize\nbetter, and moderately less accurate gradients give\nmodels a higher probability of finding these flatter\nregions.\nHere, we narrow down our perspective to a set of\nregularization factors that concern the estimation\nof the gradients of the loss function, as they are\nused during training with back-propagation. Ac-\ncording to the above perspective, models trained\nwith noisier gradient estimates are more likely than\nmodels trained with precise ones to find flatmin-\nima of the loss function, as their identification\nrequires less precision. Additionally, a moder-\nate amount of noise confers “exploratory abilities”\nthat allow the search to exit sharper basins. There-\nfore, there is an optimal amount of noise in the gra-\ndient estimation: with too much noise, training is\nhampered or becomes impossible, but with too lit-\ntle noise, the model is likely to get trapped into\nsharp minimizers with low generalization abilities.For instance, in the case of batch size (a fre-\nquently studied regularization factor), Goodfellow\net al. (2016, Chapter 8.1.3) explain that models\ntrained with smaller batch sizes tend to optimize\ninto low-precision regions because they use noisier\ngradient estimates than when training with larger\nbatch sizes.\nHypothesizing that noisier gradients improve\nthe chance of a model to optimize into flatter re-\ngions, Smith and Le (2017) and Smith et al. (2017)\npropose a gradient noise scale to measure how\nlearning rate (another regularization factor) should\nbe adjusted to the batch size, on image data. They\nestimate the average gradient noise gfor each\nbatch as g=ϵ(N/B−1)≈ϵN/B where ϵis\nthe learning rate, Nthe size of the training set, and\nBthe batch size, assuming that N≫B. 
This\nshows that “increasing the batch size and decay-\ning the learning rate are quantitatively equivalent”\n(Smith et al., 2017, Sec. 1).\nJastrze ¸bski et al. (2018) also note that the pro-\nportionality of batch size and learning rate is cru-\ncial for gradient descent convergence, and the abil-\nity of the resulting model to generalize well. In\nparticular, higher ratios seem to lead to flatter min-\nima, which lead to better generalization, similar to\nwhat Keskar et al. (2016) observed. Specifically\nwhether the relation between batch size and learn-\ning rate is linear, squared, or otherwise, has not\nbeen conclusively determined (Krizhevsky, 2014;\nHoffer et al., 2017; Popel and Bojar, 2018). The\nroles of the batch size and learning rate have often\nbeen discussed from the perspective of computer\nvision, but different studies have made different\nobservations, and the debate has not been settled\nyet (Dinh et al., 2017; Hoffer et al., 2017; Goyal et\nal., 2017; Li et al., 2017; Kawaguchi et al., 2017).\nAs for dropout and gradient clipping, which are ad-\nditional regularization factors, they have not been\nconsidered yet in relation to flat and sharp mini-\nmizers. We will consider here that the claim that\nless accurate gradients lead to flatter minima ap-\nplies to them too: for dropout, due to removing\nsome components of the sums; and for clipping,\nby affecting the norm of the gradient.\n2.2 Regularization Factors for NMT\nRecent NMT models are based on the Trans-\nformer (Vaswani et al., 2017), a deep encoder-\ndecoder neural network which is quite sensitive to\nthe hyper-parameters governing regularization fac-\ntors during training. We discuss here the four pa-\nrameters that we study in this paper.\nBatch size. As we saw, models trained with\nsmaller batch sizes have better generalization ca-\npabilities. However, batch size is not only a regu-\nlarization factor, but has an influence on training\nspeed: larger batches accelerate training by mak-\ning a better use of the GPU memory.\nLearning rate is a positive scalar that con-\ntrols how much the weights are updated. We use\nthe dynamic learning schedule known as ‘noam’\n(Vaswani et al., 2017, Eq. 3). During its ini-\ntial steps, known as warmup , the learning rate\nincreases linearly from zero, reaching its highest\nvalue at the last warmup step w. Afterwards, it de-\ncays proportionally to the inverse square root of the\nstep number s. At each step, this is multiplied by a\nfactor based on the output size of the embedding\nlayer dmodel (512 in Transformer-Base). More-\nover, following OpenNMT-py’s recommendation,\nwe include a scaling factor (sf), which we set by\ndefault to 2. The learning rate lrat each step s:\nlr(s) = sf·d−0.5\nmodel·min\u0010\ns−0.5, s·w−1.5\u0011\n(1)\nDropout (Srivastava et al., 2014) consists of a\nmasking noise: a probability that a unit is ran-\ndomly turned off during training. It is applied on\nthe output of each hidden layer, including the out-\nput of the attention layers, but not the embedding\nlayer, so no loss of input or output data occurs.\nThis encourages each hidden unit to perform well\nregardless of other units (Goodfellow et al., 2016,\nChapter 7.12).\nGradient clipping consists of renormalizing the\ngradient gto a threshold vif it exceeds it, i.e. if\n||g||> v, then g←gv/||g||(same direction but\nbounded norm). 
Therefore, the smaller the value\nofv, the more aggressively we clip the gradients,\nand the more regularization is applied (Goodfellow\net al., 2016, Chapter 10.11.1).\n2.3 Role of Regularization for NMT\nPopel and Bojar (2018) report that BLEU scores\nincrease with batch size in a Transformer-based\nNMT system, although with diminishing returns,\nand recommend setting a large batch size. They\nobserved moderate changes across a large range of\nlearning rates, and found thresholds beyond which\ntraining was much slower or diverged. They made\nsimilar observations for warmup steps, concluding\nthat the search space for learning rate and warmupsteps was wide. Their experiments were performed\non large datasets, leaving their questions open for\nlow-resource settings.\nOtt et al. (2018) observe that training time with\nvery large datasets can be shortened when using\nlarger batch sizes: they accumulate batches from\n25k tokens per batch to 400k. When paired with\nan increased learning rate schedule (noam’s times\ntwo) they do not report performance loss.\nSennrich and Zhang (2019) found that smaller\nbatch sizes (1k-4k) were beneficial for low-\nresource NMT, and studied a variety of regulari-\nzation factors for recurrent neural networks. How-\never, the regularization factors were not disentan-\ngled, and their effects on Transformer-based NMT\nare difficult to extrapolate.\nAraabi and Monz (2020) studied the Trans-\nformer’s hyper-parameters in several low-resource\nsettings. They observed improvements for larger\nbatch sizes on the larger datasets, but did not ob-\nserve improvements with smaller batch sizes on\nsmaller datasets, or changes to optimal number of\nwarmup steps or learning rate. They concluded to\nthe need for larger batches from the Transformer.\nHowever, due to the late position of the batch size\nand learning rate in their order of optimization\nof the hyper-parameters, their regularizing effects\ncannot be precisely determined.\nXu et al. (2020) computed gradients while accu-\nmulating minibatches, and observed that increas-\ning batch size stabilizes gradient direction up to a\ncertain point, which allowed them to dynamically\nadjust batch sizes while training. Miceli Barone et\nal. (2017) observed improvements when combin-\ning dropout with L2-norm during fine-tuning, and\nconcluded that “multiple regularizers outperform a\nsingle one.”\nIn previous work, we observed improvements of\nscores and training time when using smaller batch\nsizes, with a Transformer on a low-resource dataset\n(Atrio and Popescu-Belis, 2021). We found a min-\nimum value of the batch size below which train-\ning diverged, but did not study other regularization\nfactors and interactions between them.\nStudies on the optimization and effects of re-\ngularization factors thus remain scarce. Many\nprevious studies optimize parameters in sequence.\nWhile this strategy is certainly a faster approach\nto optimization, it does not shed full light on each\nfactor in isolation, as we do below in Sections 4\nto 7, or in combination, as we study in Sections 8\nDataset Src-tgt Lines Words (tgt)\nWMT20 Low-res HSB-DE 60k 823k\n= = 40k 550k\n= = 20k 273k\nNewsComm. v13 DE-EN 120k 3M\nTED Talks SK-EN 61k 1.3M\n= SL-EN 19k 443k\n= GL-EN 10k 214k\nTable 1: Numbers of lines of the original corpora used in\nour experiments. Sections 4-8 use only the first dataset. 
We\ndo not use monolingual or back-translated data, and train our\ntokenizers using only each parallel corpus.\nand 9.\n3 Data and Systems\nWe train our NMT systems with the Upper Sor-\nbian (HSB) to German (DE) training data of\nthe WMT 2020 Low-Resource Translation Task\n(Fraser, 2020). We also use the HSB-DE devel-\nopment and test sets provided by the WMT 2020\nand 2021 Low-Resource Translation Tasks (Fraser,\n2020; Libovick ´y and Fraser, 2021), each consist-\ning of 2k sentences. As length-based filtering does\nnot show significant differences, we do not filter\nour data. Additionally, in Section 9, we train sys-\ntems for translation from Galician (GL), Slovenian\n(SL), and Slovak (SK) into English (EN), using to-\nkenized and cleaned transcriptions of TED Talks\n(Qi et al., 2018).1Finally, we train a larger Ger-\nman to English system using 120k lines from News\nCommentary v13 (Bojar et al., 2018), and sample\n1,500 lines each for development and testing. Ta-\nble 1 presents these resources.\nTokenization into subwords is done with a Un-\nigram LM model (Kudo, 2018) from Sentence-\nPiece.2For each language pair we build a shared\nvocabulary of 10k subwords using only the paral-\nlel corpus, with character coverage of 0.98, nbest\nof 1 and alpha of 0.\nWe use the Transformer-Base architecture\n(Vaswani et al., 2017) implemented in OpenNMT-\npy (Klein et al., 2017; Klein et al., 2020).3Our\ndefault setting of hyper-parameters is the one rec-\nommended by OpenNMT-py4which is close to the\noriginal Transformer (Vaswani et al., 2017). The\n1https://github.com/neulab/\nword-embeddings-for-nmt\n2https://github.com/google/sentencepiece\n3We make public our configuration files and package re-\nquirements at https://github.com/AlexRAtrio/\nreg-factors .\n4https://opennmt.net/OpenNMT-py/FAQ.html#\nhow-do-i-use-the-transformer-modelregularization factors appear with relatively low\nstrengths in this setting, as is usual when large\ndatasets are available. The setting includes the\n‘noam’ learning rate schedule with a scaling fac-\ntor of 2 and a dropout rate of 0.1. For Adam,\nβ1= 0.9, β2= 0.998andϵ= 10−8.\nWe train our models for a maximum of 100\nhours, although they generally converge earlier.\nWhen comparing batch sizes in Section 4, it could\nbe argued that epochs might provide a fairer com-\nparison, but we measure real clock time as the most\nrelevant measure for practitioners.\nA batch consists of lines (tokenized sentences)\nthat are translated one by one, with a fixed maxi-\nmum length of 512 tokens for Transformer-Base.\nLines are padded if shorter, and filtered out if\nlonger. We train all models on two GPUs with\n11 GB of memory each (GeForce RTX 1080Ti).\nEach device processes several batches, depending\non the batch size, which are afterwards accumu-\nlated and used to update the model. The effec-\ntive batch size and the batch size parameter of\nOpenNMT-py are two different values: the former\nisG×A×batch size , where Gis the number of\nGPUs and Athe number of accumulated batches,\nhere equal to two.5Throughout the paper, we re-\nport the batch size parameter, but the effective\nbatch size is in fact four times larger.\nWe generate translations with a beam size of\nseven, with consecutive ensembles of four check-\npoints. For each model we report the highest\nBLEU score (Papineni et al., 2002) calculated with\nSacreBLEU (Post, 2018) on detokenized text6as\nwell as the chrF score (Popovi ´c, 2015). 
We test\nthe statistical significance of differences in scores\nat the 95% confidence level using paired bootstrap\nresampling from SacreBLEU.\n4 Batch Size\nIn this section we train models with batch sizes\nranging from 500 to 10,000, with all other hyper-\nparameters set to default. Models with batch sizes\nof 100 and 250 were also trained, but did not con-\nverge. The largest tested batch size is the largest\nvalue supported by our GPUs.\nThe BLEU and chrF scores in Table 2 show that\nlowering the batch size improves quality of NMT,\n5https://forum.opennmt.net/t/\nepochs-determination/3119\n6https://github.com/mjpost/sacrebleu with\nthe signature nrefs:1 |bs:1000 |seed:12345 |case:mixed |eff:no\n|tok:13a |smooth:exp |version:2.0.0.\nBatch train dev test\nSize Xent Acc. B LEU chrF B LEU chrF\n0.5k 0.02 99.93 50.54* 73.35 43.95ˆ 69.25\n1k 0.01 99.94 52.02 74.63 44.40 ˆ70.02\n3k 0.01 99.96 50.16* 73.38 43.91ˆ 69.16\n6k 0.01 99.97 49.66+ 73.09 42.55 −68.85\n9k 0.01 99.96 49.42+ 73.10 42.22 −68.40\n10k 0.01 99.97 48.46 72.49 42.19 −68.38\nTable 2: HSB-DE scores with various batch sizes, all other\nsettings being default ones. Values with the same color or\nsymbol are notsignificantly different. The highest scores are\nin bold.\nlikely due to the regularizing effect of a less ac-\ncurate gradient, according to our theoretical per-\nspective. In particular, we observe improved re-\nsults with a batch size smaller than 3,000 (+1.71\nBLEU) and an optimal size around 1,000 (+2.21),\nwith scores gradually decreasing as batch size in-\ncreases. These results are in line with previous ob-\nservations (Sennrich and Zhang, 2019; Atrio and\nPopescu-Belis, 2021).\nThere is no clear correlation between the train-\ning accuracy or cross-entropy loss and the general-\nization capacity, i.e. the scores on the development\nand test sets. The lower scores of models trained\nwith larger batch sizes are likely not due to over-\nfitting, because the testing curves of these models\ndo not show any decrease late in the training. This\nfurther supports the claim that better generaliza-\ntion abilities are due to flat minima (Keskar et al.,\n2016, Section 2.1).\nFigure 1: Throughput (subwords/second, in blue) and speed\n(epochs/hour, in green) for the tested batch sizes.\nOur results are competitive with the compara-\nble baselines from the WMT20 shared task on low-\nresource NMT for HSB-DE (Fraser, 2020), which\nused the same parallel data.7The baseline BLEU\n7Some of these systems used in fact larger monolingual HSB,\nDE and/or CS datasets for training their tokenizers, while we\nonly used 60k lines of parallel HSB-DE text.scores of Knowles et al. (2020), Libovick ´y et al.\n(2020) and Kvapil ´ıkov´a et al. (2020) were respec-\ntively 44.1, 43.4, and 38.7 on the test set.\nRegularization through smaller batch sizes thus\nprovides visible improvements with respect to the\ndefault setting. Larger batch sizes, however, ex-\nploit more fully the memory of the GPUs, which\nenables higher throughput in terms of subwords\nprocessed per second, as illustrated in Figure 1,\nalthough this does not increase linearly: instead,\nwe observe diminishing returns as batch size in-\ncreases. Still, while a batch size of 10k has\nthe lowest BLEU scores, it nearly doubles the\nthroughput with respect to the highest-scoring\nbatch size (1k). 
Due to differences in hardware\nand software, these values are difficult to compare\nto other studies, but the trends are similar to those\nobserved by Popel and Bojar (2018, Section 4.1).\nIf the regularization attained with lower batch\nsizes can also be obtained by using other regula-\nrization factors, this would allow the use of larger\nbatch sizes for a more efficient training. Therefore,\nin the next sections, we will compare a large batch\nsize (10k) and the optimal, regularized one (1k),\nand verify that none of the other regularization\nfactors that will be optimized have an impact on\nspeed.\n5 Learning Rate\nPrevious studies by Smith et al. (2017) and Smith\nand Le (2017) have shown that the regularization\neffects of the batch size and of the learning rate\nmay be comparable. In this section, we study the\nrole of varying schedules of the learning rate (5.1)\nand the effect of resetting the schedule in mid-\ntraining, i.e. suddenly increasing the learning rate\nbefore another decrease (5.2).\n5.1 Regularization through Learning Rate\nSince all our models have the same dimension of\nembeddings ( dmodel in Eq. 1 above), the only vari-\nables influencing the learning rate in the ‘noam’\nschedule are the number of warmup steps and the\nscaling factor (Vaswani et al., 2017, Eq. 3). We test\ntwo different values for the former: 8k (default)\nand 16k. For the latter, we test even values from\n2 (default) to 14. Figure 2 displays some tested\nschedules, including our default one (8k, 2) and\nthe ‘noam’ original one (4k, 1).\nThe results in Table 3 show that both batch sizes\nreach similar maximal scores (46.20 and 46.29),\nFigure 2: ‘Noam’ learning rate schedules with different scal-\ning factors ( sf) and numbers of warmup steps ( w).\nalthough with different scaling factors: 6 for a\nbatch size of 1k vs. 10 for a batch size of 10k.\nThe improvement is 1.8 BLEU points for a batch\nsize of 1k, and 4.1 for 10k. As a batch size of 1k\nis already a strong regularization factor, a smaller\nvalue of the learning rate (hence less regulariza-\ntion through this factor) is sufficient, compared to\nthe case of a larger batch size.\nWar Scaling factor\nmup 2 4 6 8 10 12 14\n8k 44.40 45.42 38.90 0.65 0.18 0.05 0.60\n16k 43.96 45.74 46.20 * 46.07* 45.79* 45.24* 42.24\n8k 42.19 44.59 45.27 ⋆45.93−45.87−45.34 ⋆45.31 ⋆\n16k 41.70 44.36 45.32+45.89ˆ 46.29 ˆ 45.69+45.69+\nTable 3: BLEU scores on the HSB-DE test set for batch sizes\nof 1k (top) and 10k (bottom) and various learning schedules.\nWe denote scores that are notsignificantly different row-wise\nwith the same color or symbol.\nThe models trained with the larger batch size\n(10k) are more stable when learning rates increase\n(larger scaling factors) likely due to more accurate\nestimates of the gradients (compare lines 1 vs. 3,\nand 2 vs. 4). Similarly, these models have a higher\nmaximal learning rate beyond which they diverge\n(compare in Table 3 the large difference between\nlines 1 and 2 with the small difference between\nlines 3 and 4). This shows the importance of in-\ncreasing the number of warmup steps as the scal-\ning factor increases, to avoid reaching high max-\nima of the learning rate (the peaks visible on the\nschedules in Figure 2). Moreover, the regulariza-\ntion provided by other factors (in this case, batch\nsize) needs to be taken into account when increas-\ning the amount of regularization from the learn-\ning rate. 
Finally, as long as the maximal learning\nrate remains below the values that make a model\ndiverge, the BLEU scores do not change signifi-\ncantly when the scaling factor increases above acertain value, as also observed by Popel and Bojar\n(2018, 4.6, Fig. 7).\n5.2 Resetting the LR during Training\nFrom the perspective of the loss landscape, we\nhypothesize that introducing more noise into the\ngradient when the scores have already leveled-\noff, namely by resetting the learning rate schedule,\nshould increase the probability for the weights to\nescape the sharp minima basins and avoid falling\nback into them, which should improve the gen-\neralization abilities of the trained model. Since\na model trained with a smaller batch size has a\nhigher chance, during the first part of training, to\nfall into flat minima due to an increased gradient\nnoise (Smith et al., 2017), we expect the larger\nbatch sizes to benefit more from this strategy than\nthe smaller ones.\nHours\n50 100 100\nBatch size no lr reset reset lr\n1k BLEU 44.25 44.40 45.85\nchrF 69.78 70.02 70.84\nTrain. Acc. 99.93 99.94 99.84\nXent 0.02 0.01 0.02\n∆ +0.15 +1.60\n10k BLEU 41.60 42.19 45.25\nchrF 68.03 68.38 70.57\nTrain. Acc. 99.94 99.97 99.92\nXent 0.01 0.01 0.01\n∆ +0.59 +3.65\nTable 4: BLEU and chrF scores on the HSB-DE test set,\ntraining accuracy and cross-entropy on the training set, and\nchange of BLEU scores when continuing training until 100\nhours vs. resetting the learning rate at 50h.\nIn Table 4 we provide the scores after train-\ning for 50 hours (half of their training time); the\nscores after 100 hours when continuing to train\nfrom the 50-hour checkpoint; and the final score\nafter training for 50 hours with a schedule reset at\nthe 50-hour checkpoint. The results corroborate\nour hypothesis: both batch sizes benefit signifi-\ncantly from the strategy of resetting the learning\nrate, and the large batch size more than the smaller\none ((+3.65 vs. +1.6 BLEU points). As both mod-\nels reached their highest BLEU scores before 25\nhours, the difference is likely not due to that fact\nthat the first model saw more times the training\ndata thanks to its higher throughput. Furthermore,\nafter increasing the learning rate mid-training, both\nthe loss and training accuracy worsen or remain\nstable, while BLEU scores improve, likely due to\nreaching flatter basins, not lower minima.\n6 Dropout Rate\nThe dropout of a certain proportion of neurons dur-\ning training is another frequent source of regulari-\nzation. As this amounts to removing certain terms\nfrom the summation of gradients, its role can also\nbe considered from the perspective of flat vs. sharp\nminimizers.\nDropout\n0.1 0.2 0.3 0.4 0.5 0.6 0.7 0.8\n44.40* 45.35+ 45.39 +44.87* 44.54* 42.58 37.69 19.83\n42.19 43.76 44.74 45.40 ˆ 45.39ˆ 45.26ˆ 42.91 35.52\nTable 5: Dropout scores on the HSB-DE test set for 1k (top)\nand 10k (bottom) batch sizes. We denote row-wise lack of sig-\nnificant differences with the same color or symbol. Dropout\nrates of 0.9 have considerably lower scores.\nBLEU scores in Table 5 show that the model\ntrained with a larger batch size – hence subject\nto less regularization – requires a more aggres-\nsive dropout of around 0.4–0.6 in order to reach\nits highest scores, with respect to a model trained\nwith a smaller batch size, which reaches its highest\nscore for 0.2–0.3. 
This is consistent with our pre-\nvious findings from Section 5.1 and Table 3, which\nalso showed that the model subject to less regula-\nrization from a factor (larger batch size) required\nmore regularization from another factor in order to\nreach its highest scores.\n7 Gradient Clipping\nFinally, we experiment with our fourth regulariza-\ntion factor: gradient clipping. Since it directly in-\nvolves constraining the norm of the gradient, the\nperspective based on flat vs. sharp basins in the\nloss landscape also holds for it.\nBatch Drop Gradient Clipping\nsize out None 20 10 5 2.5\n1k 0.1 44.40 44.75 44.92 44.74 44.54\n10k 0.1 42.19 42.41 42.01 42.30 42.20\n0.2 43.76 44.15 44.34 43.98 43.85\n0.3 44.74 45.36 44.72 44.75 44.99\n0.4 45.40 45.56 45.30 45.45 45.48\nTable 6: BLEU scores on the HSB-DE test set for batch sizes\nof 10k and 1k on the test set, with a dropout rate of 0.1 (de-\nfault), for several upper limits of the gradients.\nAs in the previous sections, we compare mod-\nels trained with batch sizes of 1k and 10k, but\nobserve no statistically significant differences be-\ntween them when using default values for other\nhyper-parameters, with BLEU scores shown in Ta-\nble 6 – although values of 10 or 20 are alwaysamong the best. This is likely because default\nsettings do not feature enough regularization (i.e.,\nthey do not increase enough the gradient’s norm)\nfor the gradients to be affected by clipping. For this\nreason, we perform additional experiments with a\nbatch size of 10k (due to its advantage for speed)\nwith more regularizing dropout values of 0.2, 0.3,\nand 0.4, and scaling factor of 6 and 10. Regard-\ning the models with increasing dropout rate, we\nonly observe a statistically significant difference\nbetween the best and worst results (for dropout of\n0.2), the best and two worst results (for 0.3), and\nno differences at all (for 0.4). We conclude that\ngradient clipping only marginally affects training\nin these settings.\n8 Combining Regularization Factors\nWe will now show that a combination of regulari-\nzation factors can produce higher scores than in-\ndividual factors used separately, and that the maxi-\nmal scores are stable when varying the strengths of\nregularizers around their optimal values. The batch\nsize is fixed at 10k, since this enables a higher\ntraining speed than 1k with similar best scores,\nprovided that other regularization factors are used,\nas shown in Tables 3, 4 and 5. The number of\nwarmup steps is fixed at 16k since we showed in\nSection 5.1 that this parameter mainly limits the\npeaks of the learning rate and thus prevents mod-\nels from diverging early in the training. Our search\nspace for the other regularization factors is shown\nin Table 8.\nFactor Value Xent Tr.\nacc.BLEU chrF ∆\nDefaults - 0.01 99.97 42.19 68.38 -\nBatch size 1k 0.02 99.94 44.40 70.02 +2.21\nS.f. 10 0.01 99.94 45.93 70.74 +3.74\nS.f. + w.s. 10+16k 0.01 99.94 46.29 71.22 +4.10\nL.r. reset 50% 0.01 99.92 45.25 70.57 +3.06\nDropout 0.4 0.07 99.46 45.40 71.00 +3.21\nClipping 10 0.01 99.96 42.41 68.43 +0.22\nCombination Table 8 0.03 99.78 47.11 71.88 +4.92\n+ l.r. reset - 0.06 99.30 47.20 71.80 +5.01\nTable 7: HSB-DE scores on the test set when the regulari-\nzation factors are used either independently (lines 2–6) or in\ncombination (line 7), in the latter case with the optimal val-\nues from Table 8. 
The last column shows increases in BLEU\nscores over the default settings.\nWe present in Table 7 the highest scores\nachieved using individual regularization factors,\nalong with those from the default setup (first line)\nand from the combination of factors (last two\nlines). Regularization factors are already present\nin the default setup, but at low strengths.\nThe comparison of scores in Table 7 shows\nthat each factor used independently allows the\nmodel to outperform the default setting by 2–4\nBLEU points. However, the use of a combina-\ntion of factors achieves the highest score of 47.20\nBLEU points (+ 5.01), which is significantly above\nall others. In the case of resetting the learning\nrate, although this has a visible effect when used\nwith default parameters, its effect is much smaller\nwhen used jointly with other regularization factors,\nlikely because a flat basin is found before the reset.\nMoreover, the combination of factors results in a\nhigher loss and a lower accuracy on the train set\nthan the default setup or factors used individually,\nwhich supports our interpretation of the improve-\nment based on flatter minima.\nTable 8 shows that the best scores reached\nwith increased regularization are quite stable when\nvarying the intensity of the factors. The optimal\nregion of the scaling factor is around 10, with a rel-\natively flat neighborhood, similar to the case when\nit was optimized individually (Section 5). Optimal\ndropout rates are now around 0.3–0.5, compared\nto 0.4–0.6 when used individually (Section 6). Fi-\nnally, gradient clipping has only a marginal effect\nin combination with other factors, presumably be-\ncause it cannot help to increase the gradients.\n9 Testing on Additional Corpora\nIn this section, we confirm our claims using ad-\nditional low-resource datasets. We consider two\nsmaller samples with 40k and 20k lines from the\nHSB-DE corpus, as well as parallel datasets for\nGalician, German, Slovak and Slovenian (see Sec-\ntion 3). We do not optimize regularization fac-\ntors on each dataset, but only use the optimal\nhyper-parameters found above on HSB-DE with\n60k lines.\nTable 9 demonstrates that these hyper-parameterGrad Scaling Dropout\nclipping factor 0.1 0.3 0.5 0.7\nNone 2 42.19 44.74 45.39 42.91\n6 45.32 46.70 46.22 43.66\n10 46.29 47.06 46.93 43.18\n14 45.69 46.84 47.07 43.61\n18 45.26 46.89 46.67 43.19\n5 2 41.39 44.47 45.05 43.48\n6 45.20 46.62 46.70 43.88\n10 45.65 47.11 46.76 44.04\n14 45.57 47.11 47.06 43.63\n18 44.72 46.59 47.02 42.72\nTable 8: HSB-DE BLEU scores for a combination of the scal-\ning factor, gradient clipping, and dropout rate, for a batch size\nof 10k and 16k warmup steps. The highest scores are in bold.\nvalues bring significant improvements of BLEU\nand chrF scores over the baseline for all datasets\n(four different source languages). When compar-\ning HSB-DE datasets of different sizes, we find\nthat as the amount of data decreases, the positive\neffects of our regularization parameters increase,\nwith up to 21% improvement in BLEU scores for\nthe smallest subset. Furthermore, we also observe\nan increase in the loss over all datasets with the\noptimized setup, which shows that the reason why\ntheir less accurate gradients generalize better is not\ndue to finding lower but rather flatter minima of\nloss.\n10 Conclusion\nWe presented a unified perspective on the role\nof four regularization factors in low-resource set-\ntings: batch size, learning schedule, gradient clip-\nping and dropout rate. 
The results support our\nclaim that more regularization is beneficial in such\nsettings, with respect to the default values that are\nrecommended for high-resource settings. We first\nsubstantiated the claim for each factor taken indi-\nvidually, and then showed that a combination of\nfactors leads to improved scores and is robust when\nfactors vary. Finally, we showed that our findings\ngeneralize across different low-resource sizes and\nCorpus Lines Default Optimized % ∆\nXent Tr. Acc. BLEU chrF Xent Tr. Acc. BLEU chrF BLEU\nHSB-DE 60k 0.01 99.97 42.19 68.38 0.06 99.30 47.20 71.80 +11.87\nHSB-DE 40k 0.01 99.98 32.38 60.68 0.03 99.80 37.63 65.12 +16.21\nHSB-DE 20k 0.01 99.98 22.93 51.42 0.02 99.93 27.84 56.27 +21.41\nDE-EN 120k 0.10 98.20 29.94 56.81 0.60 84.71 35.77 61.44 +19.47\nSK-EN 61k 0.02 99.89 25.61 46.42 0.40 89.29 29.71 49.67 +16.01\nSL-EN 19k 0.01 99.93 15.53 34.99 0.09 98.89 18.43 37.75 +18.67\nGL-EN 10k 0.01 99.98 16.00 34.52 0.04 99.69 19.04 37.84 +19.00\nTable 9: BLEU scores on test sets of different corpora and subsets of our main HSB-DE corpus (first line), comparing our\ndefault setup and our optimized setup as presented in Section 8.\nlanguages. Overall, we interpreted the results from\nthe perspective of the loss landscape, and argued\nthat more regularization is beneficial because the\nnoise it introduces in the estimation of gradients\nleads to finding flatter minima of the loss, which\nhave better generalization abilities. We hope that\nbetter insights on the loss landscape of the Trans-\nformer will confirm our theoretical interpretation,\nand that the observations put forward in this pa-\nper will also help practitioners with setting hyper-\nparameters for low-resource NMT systems.\n11 Acknowledgments\nWe thank the Swiss National Science Foundation\n(DOMAT grant n. 175693, On-demand Knowl-\nedge for Document-level Machine Translation)\nand Armasuisse (FamilyMT project). We espe-\ncially thank Dr. Ljiljana Dolamic (Armasuisse) for\nher support in the FamilyMT project. We are also\ngrateful to the anonymous reviewers and to Gior-\ngos Vernikos for their helpful suggestions.\nReferences\nAraabi, Ali and Christof Monz. 2020. Optimizing\nTransformer for low-resource neural machine trans-\nlation. In Proceedings of the 28th International Con-\nference on Computational Linguistics , pages 3429–\n3435, Barcelona, Spain.\nAtrio, `Alex R. and Andrei Popescu-Belis. 2021. Small\nbatch sizes improve training of low-resource neural\nMT. In Proceedings of the 18th International Con-\nference on Natural Language Processing (ICON) ,\nPatna, India.\nBojar, Ond ˇrej, Christian Federmann, Mark Fishel,\nYvette Graham, Barry Haddow, Philipp Koehn, and\nChristof Monz. 2018. Findings of the 2018 confer-\nence on machine translation (WMT18). In Proceed-\nings of the Third Conference on Machine Transla-\ntion: Shared Task Papers , pages 272–303, Belgium,\nBrussels.\nDinh, Laurent, Razvan Pascanu, Samy Bengio, and\nYoshua Bengio. 2017. Sharp minima can general-\nize for deep nets. In International Conference on\nMachine Learning , pages 1019–1028.\nFraser, Alexander. 2020. Findings of the WMT 2020\nshared tasks in unsupervised MT and very low re-\nsource supervised MT. In Proceedings of the Fifth\nConference on Machine Translation , pages 765–771.\nGoodfellow, Ian, Yoshua Bengio, and Aaron Courville.\n2016. Deep Learning . MIT Press.\nGoyal, Priya, Piotr Doll ´ar, Ross Girshick, Pieter No-\nordhuis, Lukasz Wesolowski, Aapo Kyrola, AndrewTulloch, Yangqing Jia, and Kaiming He. 2017. 
Ac-\ncurate, large minibatch SGD: Training Imagenet in 1\nhour. arXiv preprint arXiv:1706.02677 .\nHoffer, Elad, Itay Hubara, and Daniel Soudry. 2017.\nTrain longer, generalize better: Closing the gener-\nalization gap in large batch training of neural net-\nworks. In Proceedings of the 31st International Con-\nference on Neural Information Processing Systems ,\nNIPS’17, page 1729–1739.\nJastrze ¸bski, Stanislaw, Zachary Kenton, Devansh Arpit,\nNicolas Ballas, Asja Fischer, Yoshua Bengio, and\nAmos Storkey. 2018. Width of minima reached by\nstochastic gradient descent is influenced by learning\nrate to batch size ratio. In Proceedings of 27th Inter-\nnational Conference on Artificial Neural Networks ,\nLecture Notes in Computer Science, pages 392–402.\nSpringer, Cham.\nKawaguchi, Kenji, Leslie Pack Kaelbling, and Yoshua\nBengio. 2017. Generalization in deep learning.\narXiv preprint arXiv:1710.05468 .\nKeskar, Nitish Shirish, Dheevatsa Mudigere, Jorge No-\ncedal, Mikhail Smelyanskiy, and Ping Tak Peter\nTang. 2016. On large-batch training for deep learn-\ning: Generalization gap and sharp minima. arXiv\npreprint arXiv:1609.04836 .\nKlein, Guillaume, Yoon Kim, Yuntian Deng, Jean\nSenellart, and Alexander Rush. 2017. OpenNMT:\nOpen-source toolkit for NMT. In Proceedings of the\n55th Annual Meeting of the Association for Compu-\ntational Linguistics, System Demonstrations , pages\n67–72.\nKlein, Guillaume, Franc ¸ois Hernandez, Vincent\nNguyen, and Jean Senellart. 2020. The OpenNMT\nneural machine translation toolkit: 2020 edition. In\nProceedings of the 14th Conference of the Associa-\ntion for Machine Translation in the Americas , pages\n102–109.\nKnowles, Rebecca, Samuel Larkin, Darlene Stewart,\nand Patrick Littell. 2020. NRC systems for low re-\nsource German-Upper Sorbian machine translation\n2020: Transfer learning with lexical modifications.\nInProceedings of the Fifth Conference on Machine\nTranslation , pages 1112–1122.\nKrizhevsky, Alex. 2014. One weird trick for paralleliz-\ning convolutional neural networks. arXiv preprint\narXiv:1404.5997 .\nKudo, Taku. 2018. Subword regularization: Improv-\ning neural network translation models with multiple\nsubword candidates. In Proceedings of the 56th An-\nnual Meeting of the Association for Computational\nLinguistics , pages 66–75.\nKuka ˇcka, Jan, Vladimir Golkov, and Daniel Cremers.\n2017. Regularization for deep learning: A taxon-\nomy. arXiv preprint arXiv:1710.10686 .\nKvapil ´ıkov´a, Ivana, Tom Kocmi, and Ond ˇrej Bojar.\n2020. CUNI systems for the unsupervised and very\nlow resource translation task in WMT20. In Pro-\nceedings of the Fifth Conference on Machine Trans-\nlation , pages 1123–1128.\nLi, Hao, Zheng Xu, Gavin Taylor, Christoph Studer,\nand Tom Goldstein. 2017. Visualizing the\nloss landscape of neural nets. arXiv preprint\narXiv:1712.09913 .\nLibovick ´y, Jind ˇrich, Viktor Hangya, Helmut Schmid,\nand Alexander Fraser. 2020. The LMU Munich sys-\ntem for the WMT20 very low resource supervised\nMT task. In Proceedings of the Fifth Conference on\nMachine Translation , pages 1104–1111.\nLibovick ´y, Jind ˇrich and Alexander Fraser. 2021. Find-\nings of the WMT 2021 shared tasks in unsupervised\nMT and very low resource supervised MT. In Pro-\nceedings of the Sixth Conference on Machine Trans-\nlation .\nMiceli Barone, Antonio Valerio, Barry Haddow, Ulrich\nGermann, and Rico Sennrich. 2017. Regularization\ntechniques for fine-tuning in neural machine trans-\nlation. 
In Proceedings of the 2017 Conference on\nEmpirical Methods in Natural Language Processing ,\npages 1489–1494, Copenhagen, Denmark.\nMoradi, Reza, Reza Berangi, and Behrouz Minaei.\n2020. A survey of regularization strategies for deep\nmodels. Artificial Intelligence Review , 53(6):3947–\n3986.\nOtt, Myle, Sergey Edunov, David Grangier, and\nMichael Auli. 2018. Scaling neural machine trans-\nlation. In Proceedings of the Third Conference on\nMachine Translation: Research Papers , pages 1–9,\nBelgium, Brussels.\nPapineni, Kishore, Salim Roukos, Todd Ward, and Wei-\nJing Zhu. 2002. Bleu: a method for automatic eval-\nuation of machine translation. In Proceedings of the\n40th Annual Meeting of the Association for Com-\nputational Linguistics , pages 311–318, Philadelphia,\nPA, USA.\nPeng, Hao, Lili Mou, Ge Li, Yunchuan Chen, Yangyang\nLu, and Zhi Jin. 2015. A comparative study on regu-\nlarization strategies for embedding-based neural net-\nworks. In Proceedings of the 2015 Conference on\nEmpirical Methods in Natural Language Processing ,\npages 2106–2111, Lisbon, Portugal.\nPopel, Martin and Ond ˇrej Bojar. 2018. Training tips\nfor the Transformer model. The Prague Bulletin of\nMathematical Linguistics , 110(1):43–70, 4.\nPopovi ´c, Maja. 2015. chrF: character n-gram f-score\nfor automatic MT evaluation. In Proceedings of the\nTenth Workshop on Statistical Machine Translation ,\npages 392–395, Lisbon, Portugal.Post, Matt. 2018. A call for clarity in reporting BLEU\nscores. In Proceedings of the Third Conference on\nMachine Translation: Research Papers , pages 186–\n191, Belgium, Brussels.\nQi, Ye, Devendra Sachan, Matthieu Felix, Sarguna Pad-\nmanabhan, and Graham Neubig. 2018. When and\nwhy are pre-trained word embeddings useful for neu-\nral machine translation? In Proceedings of the 2018\nConference of the North American Chapter of the As-\nsociation for Computational Linguistics , pages 529–\n535, New Orleans, LA, USA.\nSennrich, Rico and Biao Zhang. 2019. Revisiting low-\nresource neural machine translation: A case study.\nInProceedings of the 57th Annual Meeting of the As-\nsociation for Computational Linguistics , pages 211–\n221, Florence, Italy.\nSmith, Samuel L and Quoc V Le. 2017. A bayesian\nperspective on generalization and stochastic gradient\ndescent. arXiv preprint arXiv:1710.06451 .\nSmith, Samuel L, Pieter-Jan Kindermans, Chris Ying,\nand Quoc V Le. 2017. Don’t decay the learn-\ning rate, increase the batch size. arXiv preprint\narXiv:1711.00489 .\nSrivastava, Nitish, Geoffrey Hinton, Alex Krizhevsky,\nIlya Sutskever, and Ruslan Salakhutdinov. 2014.\nDropout: a simple way to prevent neural networks\nfrom overfitting. The journal of machine learning\nresearch , 15(1):1929–1958.\nVaswani, Ashish, Noam Shazeer, Niki Parmar, Jakob\nUszkoreit, Llion Jones, Aidan N Gomez, Lukasz\nKaiser, and Illia Polosukhin. 2017. Attention is all\nyou need. In Advances in Neural Information Pro-\ncessing Systems 30 , pages 5998–6008.\nXu, Hongfei, Josef van Genabith, Deyi Xiong, and\nQiuhui Liu. 2020. Dynamically adjusting Trans-\nformer batch size by monitoring gradient direction\nchange. In Proceedings of the 58th Annual Meet-\ning of the Association for Computational Linguistics ,\npages 3519–3524.",
"main_paper_content": null
}
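The extracted paper above gives the scaled 'noam' learning-rate schedule in Eq. 1 (Section 2.2): lr(s) = sf · d_model^{-0.5} · min(s^{-0.5}, s · w^{-1.5}). A minimal sketch of that formula, assuming the values stated in the text (d_model = 512, warmup steps w, OpenNMT-py-style scaling factor sf); the printed peak values are only illustrative:

```python
def noam_lr(step, d_model=512, warmup=8000, scaling_factor=2.0):
    """'Noam' schedule with a scaling factor (Eq. 1 of the paper):
    linear warmup up to step `warmup`, then inverse-square-root decay."""
    step = max(step, 1)
    return scaling_factor * d_model ** -0.5 * min(step ** -0.5, step * warmup ** -1.5)

# The peak is reached at the last warmup step; compare the paper's default
# setting (sf=2, w=8k) with a more aggressive one from Table 3 (sf=10, w=16k).
print(noam_lr(8_000, warmup=8_000, scaling_factor=2.0))     # ~0.00099
print(noam_lr(16_000, warmup=16_000, scaling_factor=10.0))  # ~0.0035
```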
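Section 2.2 also defines gradient clipping as renormalizing the gradient g to a threshold v whenever ||g|| > v (same direction, bounded norm; smaller v means stronger regularization). A small standalone sketch of that rule, not the authors' implementation (which OpenNMT-py handles internally):

```python
import numpy as np

def clip_gradient_norm(grad, threshold):
    """If the L2 norm of `grad` exceeds `threshold`, rescale it so the norm
    equals `threshold`; otherwise return it unchanged."""
    norm = np.linalg.norm(grad)
    if norm > threshold:
        return grad * (threshold / norm)
    return grad

g = np.array([3.0, 4.0])            # ||g|| = 5
print(clip_gradient_norm(g, 10.0))  # unchanged: [3. 4.]
print(clip_gradient_norm(g, 2.5))   # rescaled to norm 2.5: [1.5 2. ]
```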
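Section 2.1 quotes the gradient noise scale of Smith and Le (2017), g = ε(N/B − 1) ≈ εN/B, and Section 3 notes that the effective batch size is G × A × batch_size (two GPUs, two accumulated batches). A worked illustration using the 60k-line HSB-DE corpus from Table 1; the learning rate ε here is an arbitrary placeholder, not a value from the paper:

```python
def gradient_noise_scale(learning_rate, train_size, batch_size):
    """g = eps * (N/B - 1) ~ eps * N / B, the average per-batch gradient noise
    (Smith and Le, 2017), as cited in Section 2.1 of the paper."""
    return learning_rate * (train_size / batch_size - 1)

# Halving the batch size roughly doubles the noise scale, which is why smaller
# batches and larger learning rates act as interchangeable regularizers.
eps, N = 0.001, 60_000
for B in (1_000, 10_000):
    print(B, round(gradient_noise_scale(eps, N, B), 4))  # 1000 -> 0.059, 10000 -> 0.005

# Section 3: the reported batch-size parameter vs. the effective batch size.
G, A, batch_size = 2, 2, 10_000
print(G * A * batch_size)  # 40000, i.e. four times the reported value
```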
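Section 3 describes subword tokenization with a SentencePiece Unigram LM (shared 10k vocabulary per language pair, character coverage 0.98, nbest of 1 and alpha of 0, trained on the parallel corpus only). The paper does not give the exact training command; the sketch below is one plausible way to set this up, with placeholder file names:

```python
import sentencepiece as spm

# Hypothetical reproduction of the tokenizer setup from Section 3; the input
# file is assumed to be the concatenated source and target sides of the corpus.
spm.SentencePieceTrainer.train(
    input="train.hsb-de.txt",
    model_prefix="hsbde_unigram",
    model_type="unigram",
    vocab_size=10_000,
    character_coverage=0.98,
)

sp = spm.SentencePieceProcessor(model_file="hsbde_unigram.model")
# nbest_size=1 and alpha=0 correspond to deterministic (non-sampled) segmentation.
print(sp.encode("Ein kurzer Testsatz.", out_type=str))
```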
|
{
"decision": "Unknown",
"reviews": []
}
| 0 | 0 |
[] |
[] |
{
"id": "xZnMW0IZYT",
"year": null,
"venue": "EAMT 2022",
"pdf_link": "https://aclanthology.org/2022.eamt-1.46.pdf",
"forum_link": "https://openreview.net/forum?id=xZnMW0IZYT",
"arxiv_id": null,
"doi": null
}
|
{
"title": "InDeep $\\times$ NMT: Empowering Human Translators via Interpretable Neural Machine Translation",
"authors": [
"Gabriele Sarti",
"Arianna Bisazza"
],
"abstract": null,
"keywords": [],
"raw_extracted_content": "InDeep ×NMT: Empowering Human Translators via\nInterpretable Neural Machine Translation\nGabriele Sarti andArianna Bisazza\nCenter for Language and Cognition (CLCG)\nUniversity of Groningen, The Netherlands\n{g.sarti, a.bisazza }@rug.nl\nAbstract\nThe NWO-funded InDeep project aims to\nempower users of deep-learning models of\ntext, speech, and music by improving their\nability to interact with such models and in-\nterpret their behaviors. In the translation\ndomain, we aim at developing new tools\nand methodologies to improve prediction\nattribution, error analysis, and controllable\ngeneration for neural machine translation\nsystems. These advances will be evalu-\nated through field studies involving profes-\nsional translators to assess gains in post-\nediting efficiency and enjoyability.\n1 Introduction\nIn recent years, the widespread adoption of deep\nlearning systems in neural machine translation\n(NMT) led to substantial performance gains across\nmost language pairs. Consequently, the focus of\nhuman professionals gradually shifted towards the\npost-editing of machine-generated content. De-\nspite the indisputable quality of NMT, the ques-\ntion of why and how these systems can effectively\nencode and exploit linguistic information stands\nunanswered. Indeed, NMT systems are intrin-\nsically opaque due to their multi-layered nonlin-\near architecture. This fact significantly hinders\nour ability to interpret their behavior (Samek et\nal., 2019), an essential prerequisite to their appli-\ncation in real-world scenarios requiring account-\nability and transparency. For this reason, the in-\nterpretability of neural models has grown into a\nprolific field of research, developing multiple ap-\n© 2022 The authors. This article is licensed under a Creative\nCommons 3.0 licence, no derivative works, attribution, CC-\nBY-ND.proaches aimed at analyzing models’ predictions\nand learned representations (Belinkov et al., 2020).\nWhile most explainable NMT studies focus on\nanalyzing model learning and predictive behav-\niors to gain theoretical insights, interpretability ap-\nproaches have seldom been applied from a user-\ncentric perspective. This criticality was high-\nlighted by exponents of the interpretability field,\namong which the necessity of grounding future re-\nsearch in practical applications found broad con-\nsensus (Doshi-Velez and Kim, 2017). In light of\nthis, the development of methods that are self-\ncontained, generalizable, and scalable would en-\nable the identification of widespread issues char-\nacterizing NMT predictions such as hallucina-\ntions (Raunak et al., 2021), under- and over-\ntranslation, and inadequate terminology (Vamvas\nand Sennrich, 2021; Vamvas and Sennrich, 2022).\n2 Project Description\nAs part of the broader consortium ‘InDeep: Inter-\npreting Deep Learning Models for Text and Sound’\nfunded by the Dutch Research Council (NWO)1,\nwe aim to build upon the latest advances in inter-\npretability studies to empower end-users of NMT\nvia the application of interpretability techniques\nfor neural machine translation. The InDeep project\nwill run from 2021 to 2026, involving a number of\nacademic and industrial partners such as the uni-\nversities of Groningen and Amsterdam, KPN, De-\nloitte and Hugging Face. 
Central to this project is\nimproving the subjective post-editing experience\nfor human professionals, promoting a shift from\na passive proofreading routine to an active role\nin the translation process by employing interac-\ntive and intelligible computational practices, driv-\n1Find more details at https://interpretingdl.github.io and\nhttps://www.nwo.nl/en/projects/nwa129219399\ning further enhancements in the quality and effi-\nciency of post-editing in real-world scenarios. On\nthe methodological side, this entails developing\nand adapting tools and methodologies to improve\nprediction attribution, error analysis, and control-\nlable generation for NMT systems. We will eval-\nuate our approaches using automatic metrics, and\nvia a field study surveying professionals in collab-\noration with GlobalTextware.2\nThe focus for the first part of the project will be\non identifying approaches that could be general-\nized to conditional text generation tasks (Alvarez-\nMelis and Jaakkola, 2017). Feature andinstance\nattribution methods let us establish the importance\nof input components and training examples, re-\nspectively, in driving model predictions. These\ntechniques are interesting due to their practical ap-\nplicability in standard translation workflows. In\nparticular, we find it essential to assess the rela-\ntionship between importance scores produced by\nthese methods and different categories of transla-\ntion errors. Evaluating the faithfulness for model\nattributions, i.e., how they are causally linked to\nthe system’s outputs, is another fundamental com-\nponent of our investigation and will be pursued\nby employing a mix of existing and new tech-\nniques (DeYoung et al., 2020).\nThe second part of the project will involve a\nfield study combining behavioral and subjective\nquality metrics to empirically estimate the effec-\ntiveness of our methods in real-world scenarios.\nFor the behavioral part, we intend to use a com-\nbination of keylogging and possibly eye-tracking\nand mouse-tracking to collect granular informa-\ntion about the post-editing process. Our analysis\nwill benefit from insights from recent interactive\nNMT studies (Santy et al., 2019; Coppers et al.,\n2018; Vandeghinste et al., 2019) to present transla-\ntors with useful information while avoiding visual\nclutter. Our preliminary inquiry involving profes-\nsionals highlighted sentence-level quality estima-\ntion and adaptive style/terminology constraints as\npromising directions to increase post-editing pro-\nductivity and enjoyability, supporting the potential\nof combining interpretable and interactive modules\nfor NMT.\n2https://www.globaltextware.nl/References\nAlvarez-Melis, David, and Tommi Jaakkola. 2017. A\nCausal Framework for Explaining the Predictions of\nBlack-Box Sequence-to-Sequence Models In Pro-\nceedings of EMNLP 2017 , 412–421.\nBelinkov, Yonatan, Sebastian Gehrmann, and Ellie\nPavlick. 2020. Interpretability and Analysis in Neu-\nral NLP In Proceedings of ACL 2020: Tutorials ,\n1–5.\nCoppers, Sven, Jan Van den Bergh, Kris Luyten, Karin\nConinx, Iulianna van der Lek-Ciudin, Tom Vanalle-\nmeersch, Vincent Vandeghinste 2018. Intellingo:\nAn Intelligible Translation Environment In Proceed-\nings of CHI 2018 : 524, 1–13.\nDeYoung, Jay, Sarthak Jain, Nazneen Fatema Rajani,\nEric Lehman, Caiming Xiong, Richard Socher, By-\nron C. Wallace. 2020. ERASER: A Benchmark to\nEvaluate Rationalized NLP Models In Proceedings\nof ACL 2020 , 4443–4458.\nDoshi-Velez, Finale, and Been Kim. 2018. 
Consid-\nerations for Evaluation and Generalization in Inter-\npretable Machine Learning Explainable and Inter-\npretable Models in Computer Vision and Machine\nLearning , 3–17.\nHe, Shilin, Zhaopeng Tu, Xing Wang, Longyue Wang,\nMichael Lyu, and Shuming Shi. 2019. Towards Un-\nderstanding Neural Machine Translation with Word\nImportance In Proceedings of EMNLP-IJCNLP\n2019 , 953–962.\nSamek, Wojciech, Gr ´egoire Montavon, Andrea\nVedaldi, Lars Kai Hansen, and Klaus-Robert M ¨uller.\n2019. Explainable AI: Interpreting, Explaining and\nVisualizing Deep Learning Springer Nature .\nSanty, Sebastin, Sandipan Dandapat, Monojit Choud-\nhury, and Kalika Bali. 2019. INMT: Interactive\nNeural Machine Translation Prediction In Proceed-\nings of EMNLP-IJCNLP 2019 , 103–8.\nVamvas, Jannis and Rico Sennrich. 2021. Contrastive\nConditioning for Assessing Disambiguation in MT:\nA Case Study of Distilled Bias In Proceedings of\nEMNLP 2021 , 10246–10265.\nVamvas, Jannis and Rico Sennrich. 2022. As Little\nas Possible, as Much as Necessary: Detecting Over-\nand Undertranslations with Contrastive Conditioning\nInProceedings of ACL 2022 , 10139–10155.\nVandeghinste, Vincent, Tom Vanallemeersch, Liesbeth\nAugustinus, Bram Bult ´e, Frank Van Eynde, Joris\nPelemans, Lyan Verwimp. 2019. Improving the\nTranslation Environment for Professional Transla-\ntorsInformatics 6 (2): 24, 1–36.\nVikas Raunak, Arul Menezes, Marcin Junczys-\nDowmunt. 2021. The Curious Case of Hallucina-\ntions in Neural Machine Translation In Proceedings\nof NAACL 2021 , 1172–1183.",
"main_paper_content": null
}
|
{
"decision": "Unknown",
"reviews": []
}
| 0 | 0 |
[] |
[] |
{
"id": "Hl_wgtNk5O4",
"year": null,
"venue": "EAMT 2022",
"pdf_link": "https://aclanthology.org/2022.eamt-1.2.pdf",
"forum_link": "https://openreview.net/forum?id=Hl_wgtNk5O4",
"arxiv_id": null,
"doi": null
}
|
{
"title": "Neural Speech Translation: From Neural Machine Translation to Direct Speech Translation",
"authors": [
"Mattia Antonino Di Gangi"
],
"abstract": null,
"keywords": [],
"raw_extracted_content": "Neural Speech Translation: from Neural Machine Translation\nto Direct Speech translation\nMattia A. Di Gangi∗\nFondazione Bruno Kessler (FBK)\nICT Doctoral School - University of Trento\nvia Sommarive, Povo, Trento, Italy\[email protected]\nSpeech-to-text translation, or simply speech\ntranslation (ST), is the task of translating automati-\ncally a spoken speech. The problem has classically\nbeen tackled by combining the technologies of au-\ntomatic speech recognition (ASR) and machine\ntranslation (MT) with different degrees of coupling\n(Takezawa et al., 1998; Waibel et al., 1991). The\nmost popular approach is to cascade ASR and MT\nsystems, as it can make use of the state of the art in\nsuch mature fields (Black et al., 2002). The goal of\nthis thesis was to develop the so-called approach\nof direct speech translation, which translates au-\ndio without intermediate transcription (Duong et\nal., 2016; B ´erard et al., 2016; Weiss et al., 2017).\nDirect speech translation (DST) is based on the\nsequence-to-sequence learning technology that al-\nlowed the spectacular advances of the field of neu-\nral MT (NMT) but introducing its own challenges\n(Sutskever et al., 2014; Bahdanau et al., 2015).\nWe started with a study about the effects of\nNMT in cascaded ST, where we analyzed the\ntranslation errors of NMT and phrase-based MT\n(PBMT) for automatically transcribed input text.\nOur results showed that NMT achieves an overall\nhigher quality also in this setting, but its ability to\nmodel a theoretically-unlimited context can intro-\nduce subtle errors. Indeed, we found that in PBMT\nthe errors are localized in correspondence to the\nsource error, whereas NMT can introduce errors\nfar from the source-side error position.\nMotivated by application needs, in a following\nwork we studied how to use a single NMT system\nto translate effectively clean source text and auto-\nmatic transcripts. We found that a simple training\n∗*Now at AppTek GmbH\n∗© 2022 The authors. This article is licensed under a Creative\nCommons 3.0 licence, no derivative works, attribution, CC-\nBY-ND.algorithm that fine-tunes the model on both kinds\nof inputs improves the translation quality of cor-\nrupted input without any degradation on clean in-\nput.\nIn a parallel research line, we were interested\nin making the training of RNN-based NMT more\nefficient, as it required at the time long training\ntime also for relatively small datasets. For this,\nwe proposed simple-recurrent NMT (SR-NMT),\nan encoder-decoder architecture that requires a\nfraction of parameters and computing power than\nLSTM-based NMT. It is built on top of simple re-\ncurrent units (Lei et al., 2017), which are faster to\ntrain but achieve a lower translation quality than\nLSTMs, particularly because they do not bene-\nfit from the addition of computation layers. On\nthe other side, SR-NMT has been designed to be\ntrained as a deep network and our results show how\nthe performance improves significantly up to 8 lay-\ners in the encoder and in the decoder.\nOur two research lines converge in our work on\nDST. We start with a participation in IWSLT 2018,\nwhich introduced a separate evaluation for direct\nmodels in order to encourage participants to ex-\nplore this new and promising technology. 
From\nthis participation we learn that training such kind\nof models is really difficult, findings confirmed by\nthe very low results of all but the winning model.\nWe hypothesize that such difficulty is due also to\nthe low availability of training data for the task,\nwhich in fact requires source audio matched with\nits translation. It is much easier to find transcribed\naudio data and separate translated text.\nIn a first effort to overcome this data paucity, we\npropose MuST-C, a Multilingual Speech Transla-\ntion Corpus (Di Gangi et al., 2019a). It is obtained\nfrom TED talks and provides the audio (in English)\nsegmented into sentences matched with the cor-\nresponding audio transcripts and translations to 8\nlanguages. MuST-C provides audio data ranging\nfrom 385 to 504 hours, according to the target lan-\nguage, filtered for achieving a high quality of par-\nallel data.\nWith MuST-C available, we focused on deep\nlearning methods for DST and proposed S-\nTransformer, an adaptation of Transformer to the\ntask (Di Gangi et al., 2019b). The problems that\nS-Transformer aims to solve are the high resource\nburden in terms of computing power and training\ntime of LSTM-based DST, and the difficulty of\nself-attention to model audio-like sequences, char-\nacterized by a very high number of time steps and\nlow information density per step. The first prob-\nlem is tackled effectively by the use of Trans-\nformer, which trains faster and scales better than\nLSTMs, while for modeling we used 2D CNNs,\n2D self-attention, and time-biased self-attention,\nwhich help with both convergence time and trans-\nlation quality.\nFinally, we applied S-Transformer in a one-to-\nmany multilingual fashion to make better use of\nthe MusT-C data, as well as comparing character-\nlevel against BPE-level segmentation of the tar-\nget sentence. Our results showed that the BPE-\nsegmentation is generally better and achieves\nlarger improvement also in the multilingual sce-\nnario. Moreover, we participated in the DST evalu-\nation at IWSLT 2019 and 2020, where MuST-C be-\ncame the main in-task training corpus, and our sub-\nmissions’ results were competitive with the ones of\nteams from the industry. The results and products\nof this thesis contributed to the fast development\nof the technology of DST and lowered the barrier\nof entry into the field by making data1and code2\npublicly available.\nAcknowledgments\nThe author would like to thank his Ph.D. supervi-\nsors: Marcello Federico, Marco Turchi, and Mat-\nteo Negri; his thesis examiners: Evgeny Matusov,\nJan Nieheus, and Lo ¨ıc Barrault, as well as all the\nHLT-MT group at FBK. The author was financially\nsupported by a Ph.D. scholarship from FBK. This\nthesis was partly financially supported by an Ama-\nzon AWS ML Grant.\n1https://ict.fbk.eu/must-c/\n2https://github.com/mattiadg/FBK-Fairseq-STReferences\nBahdanau, Dzmitry, Kyunghyun Cho, and Yoshua Ben-\ngio. 2015. Neural Machine Translation by Jointly\nLearning to Align and Translate. In Proceedings of\nICLR 2015 .\nB´erard, Alexandre, Olivier Pietquin, Laurent Besacier,\nand Christophe Servan. 2016. Listen and Translate:\nA Proof of Concept for End-to-End Speech-to-Text\nTranslation. In NIPS Workshop on end-to-end learn-\ning for speech and audio processing .\nBlack, Alan W, Ralf D Brown, Robert Frederking,\nKevin Lenzo, John Moody, Alexander I Rudnicky,\nRita Singh, and Eric Steinbrecher. 2002. Rapid\nDevelopment of Speech-to-Speech Translation Sys-\ntems. 
In Seventh International Conference on Spo-\nken Language Processing .\nDi Gangi, Mattia A., Roldano Cattoni, Luisa Ben-\ntivogli, Matteo Negri, and Marco Turchi. 2019a.\nMust-c: a Multilingual Speech Translation Corpus.\nInProceedings of NAACL 2019 , pages 2012–2017,\nMinneapolis, MN, USA.\nDi Gangi, Mattia A., Matteo Negri, and Marco Turchi.\n2019b. Adapting Transformer to End-to-End Spo-\nken Language Translation. In Proceedings of Inter-\nspeech 2019 , pages 1133–1137, Graz, Austria.\nDuong, Long, Antonios Anastasopoulos, David Chi-\nang, Steven Bird, and Trevor Cohn. 2016. An atten-\ntional Model for Speech Translation Without Tran-\nscription. In Proceedings of NAACL 2016 , pages\n949–959.\nLei, Tao, Yu Zhang, and Yoav Artzi. 2017. Train-\ning RNNs as Fast as CNNs. arXiv preprint\narXiv:1709.02755 .\nSutskever, Ilya, Oriol Vinyals, and Quoc V Le. 2014.\nSequence to Sequence Learning with Neural Net-\nworks. In Proceedings of NIPS 2014 .\nTakezawa, Toshiyuki, Tsuyoshi Morimoto, Yoshinori\nSagisaka, Nick Campbell, Hitoshi Iida, Fumiaki\nSugaya, Akio Yokoo, and Seiichi Yamamoto. 1998.\nA Japanese-to-English speech translation system:\nATR-MATRIX. In Fifth International Conference\non Spoken Language Processing .\nWaibel, Alex, Ajay N Jain, Arthur E McNair, Hiroaki\nSaito, Alexander G Hauptmann, and Joe Tebelskis.\n1991. JANUS: a Speech-to-Speech Translation Sys-\ntem Using Connectionist and Symbolic Processing\nStrategies. In Proceedings of the ICASSP 1991 ,\npages 793–796.\nWeiss, Ron J., Jan Chorowski, Navdeep Jaitly, Yonghui\nWu, and Zhifeng Chen. 2017. Sequence-to-\nSequence Models Can Directly Translate Foreign\nSpeech. In Proceedings of Interspeech 2017 , Stock-\nholm, Sweden, August.",
"main_paper_content": null
}
|
{
"decision": "Unknown",
"reviews": []
}
| 0 | 0 |
[] |
[] |
{
"id": "8BUl_MWv-4L",
"year": null,
"venue": "EAMT 2008",
"pdf_link": "https://aclanthology.org/2008.eamt-1.1.pdf",
"forum_link": "https://openreview.net/forum?id=8BUl_MWv-4L",
"arxiv_id": null,
"doi": null
}
|
{
"title": "Shaping research from user requirements, and other exotic things..",
"authors": [
"Nicola Cancedda"
],
"abstract": null,
"keywords": [],
"raw_extracted_content": "Abstracts of invited talks \n \nShaping research from user requirements, \nand other exotic things \n \nNicola Cancedda \nXerox Research Centre Europe \[email protected] \n \n \nDespite its relative maturity, Machine Tr anslation is an extremely active field, \nenjoying a highly inter-disc iplinary community. This presentation will try to \nconvey two very distinct pers pectives: that of the engi neer proposing MT tools to \nLanguage Service Providers (LSPs), and that of the Machine Learning researcher. \n \nWhile sheer translation quality is crucial, several other requirements must be met \nin order to successfully deploy an MT syst em. The first part of this presentation \nwill focus on some desiderata gathered from LSPs considering the option of \ndeploying MT to support pr ofessional translators. \n \nWhile statistical MT is now mainstream , the interaction between the Machine \nLearning (ML) community and the MT community remains limited. The second \npart of the talk will present some a pproaches proposed by pure-ML researchers \nwhen brought to applying their tools of the trade to Machine Translation. 12th EAMT conference, 22-23 September 2008, Hamburg, Germany\n4",
"main_paper_content": null
}
|
{
"decision": "Unknown",
"reviews": []
}
| 0 | 0 |
[] |
[] |
{
"id": "n_ZDhQHG3L",
"year": null,
"venue": "EAMT 2014",
"pdf_link": "https://aclanthology.org/2014.eamt-1.6.pdf",
"forum_link": "https://openreview.net/forum?id=n_ZDhQHG3L",
"arxiv_id": null,
"doi": null
}
|
{
"title": "Translation model based weighting for phrase extraction",
"authors": [
"Saab Mansour",
"Hermann Ney"
],
"abstract": null,
"keywords": [],
"raw_extracted_content": "Translation Model Based Weighting for Phrase Extraction\nSaab Mansour andHermann Ney\nHuman Language Technology and Pattern Recognition\nComputer Science Department\nRWTH Aachen University\nAachen, Germany\nfmansour,[email protected]\nAbstract\nDomain adaptation for statistical machine\ntranslation is the task of altering general\nmodels to improve performance on the test\ndomain. In this work, we suggest several\nnovel weighting schemes based on trans-\nlation models for adapted phrase extrac-\ntion. To calculate the weights, we first\nphrase align the general bilingual training\ndata, then, using domain specific transla-\ntion models, the aligned data is scored and\nweights are defined over these scores. Ex-\nperiments are performed on two translation\ntasks, German-to-English and Arabic-to-\nEnglish translation with lectures as the tar-\nget domain. Different weighting schemes\nbased on translation models are compared,\nand significant improvements over auto-\nmatic translation quality are reported. In\naddition, we compare our work to previ-\nous methods for adaptation and show sig-\nnificant gains.\n1 Introduction\nIn recent years, large amounts of monolingual and\nbilingual training corpora were collected for sta-\ntistical machine translation (SMT). Early years\nfocused on structured data translation such as\nnewswire and parliamentary discussions. Nowa-\ndays, new domains of translation are being ex-\nplored, such as talk translation in the IWSLT TED\nevaluation (Cettolo et al., 2012) and patents trans-\nlation at the NTCIR PatentMT task (Goto et al.,\n2013).\nc\r2014 The authors. This article is licensed under a Creative\nCommons 3.0 licence, no derivative works, attribution, CC-\nBY-ND.The task of domain adaptation tackles the prob-\nlem of utilizing existing resources mainly drawn\nfrom one domain (e.g. newswire, parliamentary\ndiscussion) to maximize the performance on the\ntest domain (e.g. lectures, web forums).\nThe main component of an SMT system is the\nphrase table, providing the building blocks (i.e.\nphrase translation pairs) and corresponding trans-\nlation model scores (e.g., phrase models, word lex-\nical smoothing, etc.) to search for the best trans-\nlation. In this work, we experiment with phrase\nmodel adaptation through training data weighting,\nwhere one assigns higher weights to relevant do-\nmain training instances, thus causing an increase\nof the corresponding probabilities. As a result,\ntranslation pairs which can be obtained from rel-\nevant training instances will have a higher chance\nof being utilized during search.\nThe main contribution of this work is design-\ning several novel schemes for scoring sentences\nand assigning them appropriate weights to mani-\nfest adaptation. Our method consists of two steps:\nfirst, we find phrase alignments for the bilingual\ntraining data, then, the aligned data is scored using\ntranslation models and weights are generated.\nExperiments using the suggested methods and\na comparison to previous work are done on two\ntasks: Arabic-to-English and German-to-English\nTED lectures translation. The results show sig-\nnificant improvements over the baseline, and sig-\nnificant improvements over previous work are re-\nported when combining our suggested methods\nwith previous work.\nThe rest of the paper is organized as follows.\nRelated work on adaptation and weighting is de-\ntailed in Section 2. 
The weighted phrase extraction training and the methods for assigning weights using translation models are described in Section 3 and Section 4, respectively. The experimental setup, including corpora statistics and the SMT system used in this work, is described in Section 5. The results of the suggested methods are summarized in Section 6 and error analysis is given in Section 7. Lastly, we conclude with a few suggestions for future work.\n2 Related Work\nA broad range of methods and techniques have been suggested in the past for domain adaptation for SMT. In recent work, language model and phrase model adaptation received most of the attention. In this work, we focus on phrase model adaptation. A prominent approach in recent work for phrase model adaptation is training samples weighting at different levels of granularity. Foster and Kuhn (2007) perform phrase model adaptation using mixture modeling at the corpus level. Each corpus in their setting gets a weight using various methods including language model (LM) perplexity and information retrieval methods. Interpolation is then done linearly or log-linearly. The weights are calculated using the development set, therefore expressing adaptation to the domain being translated. A finer grained weighting is that of (Matsoukas et al., 2009), who assign each sentence in the bitexts a weight using features of meta-information and optimizing a mapping from feature vectors to weights using a translation quality measure over the development set. Foster et al. (2010) perform weighting at the phrase level, using a maximum likelihood term limited to the development set as an objective function to optimize. They compare the phrase level weighting to a “flat” model, where the weight directly models the phrase probability. In their experiments, the weighting method performs better than the flat model; therefore, they conclude that retaining the original relative frequency probabilities of the phrase model is important for good performance.\nData filtering for adaptation (Moore and Lewis, 2010; Axelrod et al., 2011) can be seen as a special case of the sample weighting method where a weight of 0 is assigned to discard unwanted samples. These methods rely on an LM based score to perform the selection, though the filtered data will affect the training of other models such as the phrase model and other translation models. LM based scoring might be more appropriate for LM adaptation but not as much for phrase model adaptation, as it does not capture bilingual dependencies. We score training data instances using translation models and thus model connections between source and target sentences.\nIn this work, we compare several scoring schemes at the sentence level for weighted phrase extraction. Additionally, we experiment with new scoring methods based on translation models used during the decoding process. In weighting, all the phrase pairs are retained, and only their probability is altered. This allows the decoder to make the decision whether to use a phrase pair or not, a more methodological way than removing phrase pairs completely when filtering.\n3 Weighted Phrase Extraction\nThe classical phrase model is estimated using relative frequency:\np(\tilde{f} \mid \tilde{e}) = \frac{\sum_r c_r(\tilde{f}, \tilde{e})}{\sum_{\tilde{f}'} \sum_r c_r(\tilde{f}', \tilde{e})} \quad (1)\nHere, \tilde{f}, \tilde{e} are contiguous phrases, and c_r(\tilde{f}, \tilde{e}) denotes the count of (\tilde{f}, \tilde{e}) being a translation of each other in sentence pair (f_r, e_r). 
One method to introduce weights to eq. (1) is by weighting each sentence pair by a weight w_r. Eq. (1) will now have the extended form:\np(\tilde{f} \mid \tilde{e}) = \frac{\sum_r w_r \cdot c_r(\tilde{f}, \tilde{e})}{\sum_{\tilde{f}'} \sum_r w_r \cdot c_r(\tilde{f}', \tilde{e})} \quad (2)\nIt is easy to see that setting {w_r = 1} will result in eq. (1) (or any non-zero equal weights). Increasing the weight w_r of the corresponding sentence pair will result in an increase of the probabilities of the phrase pairs extracted. Thus, by increasing the weight of in-domain sentence pairs, the probability of in-domain phrase translations could also increase.\nWe perform weighting rather than filtering for adaptation, as the former was shown to achieve better results (Mansour and Ney, 2012).\nNext, we discuss several methods for setting the weights in a fashion which serves adaptation.\n4 Weighting Schemes\nSeveral weighting schemes can be devised to manifest adaptation. Previous work suggested perplexity based scoring to perform adaptation (e.g. (Moore and Lewis, 2010)). The basic idea is to generate a model using in-domain training data and measure the perplexity of the in-domain model on new events to rank their relevance to the in-domain. We recall this method in Section 4.1.\nIn this work, we suggest using several phrase-based translation models to perform scoring. The basic idea of adaptation using translation models is similar to the perplexity based method. We use in-domain training data to estimate translation model scores over new events. Further details of the method are given in Section 4.2.\n4.1 LM Perplexity Weighting\nLM cross-entropy scoring can be used for both monolingual and bilingual data filtering (Moore and Lewis, 2010; Axelrod et al., 2011). Next, we recall the scoring methods introduced in the above previous work and utilize them for our proposed weighted phrase extraction method.\nThe scores for each sentence in the general-domain corpus are based on the cross-entropy difference of the in-domain (IN) and general-domain (GD) models. Denoting H_{LM}(x) as the cross entropy of sentence x according to LM, the cross entropy difference DH_{LM}(x) can be written as:\nDH_{LM}(x) = H_{LM_{IN}}(x) - H_{LM_{GD}}(x) \quad (3)\nThe intuition behind eq. (3) is that we are interested in sentences as close as possible to the in-domain, but also as far as possible from the general corpus. Moore and Lewis (2010) show that using eq. (3) for filtering performs better in terms of perplexity than using in-domain cross-entropy only (H_{LM_{IN}}(x)). For more details about the reasoning behind eq. (3) we refer the reader to (Moore and Lewis, 2010).\nAxelrod et al. (2011) adapted the LM scores for bilingual data filtering for the purpose of TM training. The bilingual cross entropy difference for a sentence pair (f_r, e_r) in the GD corpus is then defined by:\nd_r = DH_{LM_{source}}(f_r) + DH_{LM_{target}}(e_r)\nWe utilize d_r for our suggested weighted phrase extraction. d_r can be assigned negative values, and a lower d_r indicates sentence pairs which are more relevant to the in-domain. Therefore, we negate the term d_r to get the notion of higher weights indicating sentences being closer to the in-domain, and use an exponent to ensure positive values. 
The final weight is of the form:\nw_r = e^{-d_r} \quad (4)\nThis term is proportional to perplexities and inverse perplexities, as the exponent of entropy is perplexity by definition.\n4.2 Translation Model Weighting\nIn state-of-the-art SMT several models are used during decoding to find the best scoring hypothesis. The models include phrase translation probabilities, word lexical smoothing, reordering models, etc. We utilize these translation models to perform sentence weighting for adaptation. To estimate the models' scores, a phrase alignment is required. We use the forced alignment (FA) phrase training procedure (Wuebker et al., 2010) for this purpose. The general FA procedure will be presented next, followed by an explanation of how we estimate scores for adaptation using FA.\n4.2.1 Forced Alignment Training\nThe standard phrase extraction procedure in SMT consists of two phases: (i) word-alignment training (e.g., IBM alignment models), and (ii) heuristic phrase extraction and relative frequency based phrase translation probability estimation.\nIn this work, we utilize phrase training using the FA method for the task of adaptation. Unlike heuristic phrase extraction, the FA method performs actual phrase training. In the standard FA procedure, we are given a training set, from which an initial heuristics-based phrase table p_0 is generated. FA training is then done by running a normal SMT decoder (using p_0 phrases and models) on the training data and constraining the translation to the given target instance. Forced decoding generates n-best possible phrase alignments, from which we are interested in the first-best (viterbi) one. Note that we do not use FA to generate a trained phrase table but only to get phrase alignments of the bilingual training data. We explain next how to utilize FA training for adaptation.\n4.2.2 Scoring\nThe proposed method for calculating translation model scores using FA is depicted in Figure 1. We start by training the translation models using the standard heuristic method over the in-domain portion of the training data. We then use these in-domain translation models to perform the FA procedure over the general-domain (GD) data.\nFigure 1: Translation model scores generation for general-domain sentence pairs using in-domain corpus and viterbi phrase alignments calculated by the FA procedure.\n
The FA procedure provides n-best possible phrase alignments, but we are interested only in one alignment. Even though the IN data is small, we ensure that all GD sentences are phrase aligned using backoff phrases (Wuebker and Ney, 2013). Using the viterbi (first-best) phrase alignment and the in-domain models again, we generate the translation model scores for GD sentences. As the scores are calculated by IN models, they express the relatedness of the scored sentence to the in-domain. Note that the FA procedure for getting adaptation weights is different from the standard FA procedure. In the standard FA procedure, the same corpus is used to generate the initial heuristic phrase table as well as phrase training. The FA procedure to obtain adaptation weights uses an initial phrase table extracted from IN while the training is done over GD.\nNext, we define the process for generating the scores with mathematical notation. Given a training sentence pair (f_1^J, e_1^I) from the GD corpus, we force decode f_1^J = f_1 ... f_J into e_1^I = e_1 ... e_I using the IN phrase table. The force decoding process generates a viterbi phrase alignment s_1^K = s_1 ... s_K, s_k = (b_k, j_k, i_k), where (b_k, j_k) are the begin and end positions of the source phrase \tilde{f}_k, and i_k is the end position of the translation target phrase \tilde{e}_k (the start position of \tilde{e}_k is i_{k-1}+1 by definition of phrase based translation). Using s_1^K we calculate the scores of 10 translation models, which are grouped into 5 weighting schemes:\n- PM: phrase translation models in both source-to-target (s2t) and target-to-source (t2s) directions\nh_{PM_{s2t}}(f_1^J, e_1^I, s_1^K) = \sum_{k=1}^{K} \log p(\tilde{f}_k \mid \tilde{e}_k)\nThe t2s direction is defined analogously using the p(\tilde{e}_k \mid \tilde{f}_k) probabilities.\n- SM: word lexical smoothing models, also in both translation directions\nh_{SM_{s2t}}(f_1^J, e_1^I, s_1^K) = \sum_{k=1}^{K} \sum_{j=b_k}^{j_k} \log \sum_{i=i_{k-1}+1}^{i_k} p(f_j \mid e_i)\n- RM: distance based reordering model\nh_{RM}(f_1^J, e_1^I, s_1^K) = \sum_{k=1}^{K} |b_k - j_{k-1} + 1|\n- CM: phrase count models\nh_{CM_i}(f_1^J, e_1^I, s_1^K) = \sum_{k=1}^{K} [c(\tilde{f}_k, \tilde{e}_k) < i]\nwhere i is assigned the values 2, 3, 4 (3 count features), and c(\tilde{f}, \tilde{e}) is the count of the bilingual phrase pair being aligned to each other (in the IN corpus).\n- LP: length based word and phrase penalties\nh_{LP_{wordPenalty}}(f_1^J, e_1^I, s_1^K) = I\nh_{LP_{phrasePenalty}}(f_1^J, e_1^I, s_1^K) = K\nWe experiment with the PM scheme independently. In addition, we try using all models in a loglinear fashion for weighting (denoted by TM), and using the TM and LM combined score (denoted by TM+LM). We use the decoder optimized lambdas to combine the models.\nTo obtain the weights for a scheme which is composed of a set of models {h_1^n}, we normalize the corresponding lambdas (so that the sum of their absolute values equals 1), obtaining {\lambda_1^n}, and calculate:\nw(f, e, s) = e^{-\sum_{i=1}^{n} \lambda_i \cdot h_i(f, e, s)}\nAn alternative method to perform adaptation by force aligning GD using IN would be performing phrase probability re-estimation as done in the final step of standard FA training. In this case, n-best phrase alignments are generated for each sentence in GD using the IN models, and the phrase model is then reestimated using relative frequencies on the n-bests. This way we directly use the FA procedure to generate the translation models. 
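To make the weighting formulas above concrete, here is a minimal Python sketch of the two central computations: the sentence weight, obtained either from the bilingual cross-entropy difference of eqs. (3)-(4) or from normalized translation-model scores as in w(f, e, s) above, and the weighted relative-frequency estimate of eq. (2). It is only an illustration under simplified assumptions; the forced alignment and the LM/TM scoring themselves are abstracted away behind hypothetical inputs (per-sentence cross-entropies, per-sentence feature scores, and already-extracted phrase pairs).

```python
import math
from collections import defaultdict

def lm_difference_weight(h_in_src, h_gd_src, h_in_tgt, h_gd_tgt):
    """Eqs. (3)-(4): w_r = exp(-d_r), where d_r is the bilingual cross-entropy
    difference between the in-domain (IN) and general-domain (GD) LMs."""
    d_r = (h_in_src - h_gd_src) + (h_in_tgt - h_gd_tgt)
    return math.exp(-d_r)

def tm_weight(feature_scores, lambdas):
    """Combined translation-model weight w(f, e, s) = exp(-sum_i lambda_i * h_i),
    with the lambdas normalized so that the sum of their absolute values is 1."""
    z = sum(abs(lam) for lam in lambdas) or 1.0
    return math.exp(-sum((lam / z) * h for lam, h in zip(lambdas, feature_scores)))

def weighted_phrase_model(aligned_corpus, weights):
    """Eq. (2): weighted relative-frequency estimate of p(f~ | e~).

    aligned_corpus[r] is the list of extracted phrase pairs (f_phrase, e_phrase)
    for sentence pair r, and weights[r] is its sentence weight w_r."""
    pair_counts = defaultdict(float)     # weighted counts of (f~, e~)
    target_counts = defaultdict(float)   # weighted counts of e~ over all f~'
    for pairs, w_r in zip(aligned_corpus, weights):
        for f_phrase, e_phrase in pairs:
            pair_counts[(f_phrase, e_phrase)] += w_r
            target_counts[e_phrase] += w_r
    return {pair: c / target_counts[pair[1]] for pair, c in pair_counts.items()}
```

For example, weighted_phrase_model([[("guten tag", "good day")]], [0.5]) yields probability 1.0 for the single pair; the weights only start to change the estimates once the same target phrase is extracted from sentence pairs with different weights.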
The problem with this re-estimation alternative is that, due to the small size of IN, some sentences in GD cannot be decoded with the initial phrase table, and fallback runs using backoff phrases need to be used (Wuebker and Ney, 2013). Backoff phrases of a sentence pair contain all source and target sub-strings up to a defined maximum length. Therefore, many of these backoff phrase pairs are not a translation of each other. Using such phrases to reestimate the phrase model might generate unwanted phrase translation candidates. In the case of weighting, the backoff probabilities are used indirectly to weight the initial counts; in addition, combining with other model scores remedies the problem further.\nAnother way to perform adaptation using FA is by starting with a GD heuristic phrase table and utilizing it to force decode IN. This way, the probabilities of the general phrase model are biased towards the in-domain distribution. This method was presented by (Mansour and Ney, 2013) and will be compared to our work.\n5 Experimental Setup\n5.1 Training Corpora\nTo evaluate the introduced methods experimentally, we use the IWSLT 2011 TED Arabic-to-English and German-to-English translation tasks. The IWSLT 2011 evaluation campaign focuses on the translation of TED talks, a collection of lectures on a variety of topics ranging from science to culture. For Arabic-to-English, the bilingual data consists of roughly 100K sentences of in-domain TED talks data and 8M sentences of “other”-domain (OD) United Nations (UN) data. For the German-to-English task, the data consists of 130K TED sentences and 2.1M sentences of “other”-domain data assembled from the news-commentary and the europarl corpora. For language model training purposes, we use an additional 1.4 billion words (supplied as part of the campaign monolingual training data).\nThe bilingual training and test data for the Arabic-to-English and German-to-English tasks are summarized in Table 1. The English data is tokenized and lowercased, while the Arabic data was tokenized and segmented using MADA v3.1 (Roth et al., 2008) with the ATB scheme (this scheme splits all clitics except the definite article and normalizes the Arabic characters alef and yaa). The German source is decompounded and part-of-speech-based long-range verb reordering rules (Popović and Ney, 2006) are applied.\n(columns: de, en | ar, en)\nIN: sen 130K | 90K; tok 2.5M, 3.4M | 1.6M, 1.7M; voc 71K, 49K | 56K, 34K\nOD: sen 2.1M | 7.9M; tok 55M, 56M | 228M, 226M; voc 191K, 129K | 449K, 411K\ndev: sen 883 | 934; tok 20K, 21K | 19K, 20K; oov 215 (1.1%) | 184 (1.0%)\ntest10: sen 1565 | 1664; tok 31K, 27K | 31K, 32K; oov 227 (0.7%) | 228 (0.8%)\ntest11: sen 1436 | 1450; tok 27K, 27K | 27K, 27K; oov 271 (1.0%) | 163 (0.6%)\nTable 1: IWSLT 2011 TED bilingual corpora statistics: the number of sentences (sen), running words (tok) and vocabulary (voc) are given for the training data. For the test data, the number of out-of-vocabulary (oov) words relative to using all training data (concatenating IN and OD) is given (the percentage is in parentheses).\nFrom Table 1, we note that the general data is more than 20 times bigger than the in-domain data. A simple concatenation of the corpora might mask the phrase probabilities obtained from the in-domain corpus, causing a deterioration in performance. 
This is especially true for the Arabic-to-\n1For a list of the IWSLT TED 2011 training cor-\npora, see http://www.iwslt2011.org/doku.php?\nid=06_evaluation\n39\nEnglish setup, where the UN data is 100 times big-\nger than the TED data and the domains are distinct.\nOne way to avoid this contamination is by filtering\nthe general corpus, but this discards phrase trans-\nlations completely from the phrase model. A more\nprincipled way is by weighting the sentences of\nthe corpora differently, such that sentences which\nare more related to the domain will have higher\nweights and therefore have a stronger impact on\nthe phrase probabilities.\n5.2 Translation System\nThe baseline system is built using the open-source\nSMT toolkit Jane2, which provides state-of-the-art\nphrase-based SMT system (Wuebker et al., 2012).\nIn addition to the phrase based decoder, Jane in-\ncludes an implementation of the forced alignment\nprocedure used in this work for the purpose of\nadaptation. We use the standard set of mod-\nels with phrase translation probabilities and word\nlexical smoothing for source-to-target and target-\nto-source directions, a word and phrase penalty,\ndistance-based reordering and an n-gram target\nlanguage model. In addition, our baseline includes\nbinary count features which fire if the count of the\nphrase pair in the training corpus is smaller than a\nthreshold. We use three count features with thresh-\noldsf2;3;4g.\nThe SMT systems are tuned on the dev\n(dev2010) development set with minimum error\nrate training (Och, 2003) using B LEU (Papineni\net al., 2002) accuracy measure as the optimization\ncriterion. We test the performance of our system\non the test2010 andtest2011 sets using the B LEU\nand translation edit rate (T ER) (Snover et al., 2006)\nmeasures. We use T ERas an additional measure\nto verify the consistency of our improvements and\navoid over-tuning. The Arabic-English results are\ncase sensitive while the German-English results\nare case insensitive. In addition to the raw auto-\nmatic results, we perform significance testing over\nall evaluations sets. For both B LEU and T ER, we\nperform bootstrap resampling with bounds estima-\ntion as described by (Koehn, 2004). We use the\n90% and 95% (denoted by yandzcorrespondingly\nin the tables) confidence thresholds to draw signif-\nicance conclusions.\n2www.hltpr.rwth-aachen.de/jane6 Results\nIn this section we compare the suggested weight-\ning schemes experimentally using the final trans-\nlation quality. We use two TED tasks, German-to-\nEnglish and Arabic-to-English translation. In ad-\ndition to evaluating our suggested translation mod-\nels based weighting schemes, we evaluate methods\nsuggested in previous work, including LM based\nweighting and FA based adaptation.\nThe results for both German-to-English and\nArabic-to-English TED tasks are summarized in\nTable 2. Each language pair section is divided\ninto three subsections which differ by the phrase\ntable training method. The first subsection is using\nstate-of-the-art heuristic phrase extraction, the sec-\nond is using FA adaptation and the third is using\nweighted phrase extraction with different weight-\ning schemes.\nTo perform weighted phrase extraction, we use\nall data ( ALL, a concatenation of INandOD) as the\ngeneral-domain data (in eq. 3 and Figure 1). 
This\nway, we ensure weighting for all sentences in the\ntraining data, and, data from INis still used for the\ngeneration of the weighted phrase table.\n6.1 German-to-English\nFocusing on the German-to-English translation re-\nsults, we note that using all data (ALL system)\nfor the heuristic phrase extraction improves over\nthe in-domain system (IN), with gains up-to +0.9%\nBLEU and -0.7% T ERon the test2011 set. We per-\nform significance testing in comparison to the ALL\nsystem as this is the best baseline system (among\nIN and ALL).\nMansour and Ney (2013) method of adaptation\nusing the FA procedure (ALL-FA-IN) consistently\noutperforms the baseline system, with significant\nimprovements on test10 T ER.\nComparing the weighting schemes, weighting\nbased on the phrase model (PM) and language\nmodel (LM) perform similarly, without a clear ad-\nvantage to one method. The standalone weight-\ning schemes do not achieve improvements over\nthe baseline. Combining all the translation models\n(PM,SM,RM,CM,LP) into the TM scheme gener-\nates improvements over the standalone weighting\nschemes. TM also improves over the LM scheme\nsuggested in previous work. We hypothesize that\nTM scoring is better for phrase model adaptation\nas it captures bilingual dependencies, unlike the\nLM scheme. In an experiment we do not report\n40\nSystem dev test2010 test2011\nBLEU TER BLEU TER BLEU TER\nGerman-to-English\nIN 31.0 48.9 29.3 51.0 32.7 46.8\nALL 31.2 48.3 29.5 50.5 33.6 46.1\nForced alignment based adaptation\nALL-FA-IN 31.8 47.4y29.7 49.7y33.6 45.5\nWeighted phrase extraction\nLM 31.1 48.7 29.2 51.1 33.6 46.2\nPM 31.5 48.8 29.2 50.9 33.1 46.4\nTM 31.7 48.4 29.8 50.2 33.8 45.8\nTM+LM 32.2y47.5y30.1 49.5z34.4y44.8z\nArabic-to-English\nIN 27.2 54.1 25.3 57.1 24.3 59.9\nALL 27.1 54.8 24.4 58.6 23.8 61.1\nALL-FA-IN 27.7 53.7 25.3 56.9 24.7 59.3\nLM 28.1y52.9z26.0 56.2y24.6 59.3\nPM 27.2 54.4 25.1 57.5 24.1 60.3\nTM 27.4 53.9 25.4 57.0 24.4 59.5\nTM+LM 28.3z52.8z26.2y55.9z25.1y58.7z\nTable 2: TED 2011 translation results. B LEU and T ERare given in percentages. INdenotes the TED\nlectures in-domain corpus and ALL is using all available bilingual data (including IN). Significance is\nmarked withyfor 90% confidence and zfor 95% confidence, and is measured over the best heuristic\nsystem.\nhere, we tried to remove one translation model at\na time from the TM scheme, the results always\ngot worse. Therefore, we conclude that using all\ntranslation models is important to achieve robust\nweighting and generate the best results.\nCombining TM with LM weighting (TM+LM)\ngenerates the best system overall. Significant im-\nprovements at the 95% level are observed for\nTER, BLEU is significantly improved for test11.\nTM+LM is significantly better than LM weighting\non both test sets. In comparison to ALL-FA-IN,\nTM+LM is significantly better on test11 B LEU.\nTM+LM combines the advantages of both scor-\ning methods, where TM ensures in-domain lexical\nchoice while LM achieves better sentence fluency.\n6.2 Arabic-to-English\nTo verify our results, we repeat the experiments on\nthe Arabic-to-English TED task. The scenario is\ndifferent here as using the OD data (UN) deterio-\nrates the results of the IN system by 0.9% and 0.5%\nBLEU on test2010 and test2011 correspondingly.\nWe attribute this deterioration to the large size of\nthe UN data (a factor of 100 bigger than IN) which\ncauses bias to OD. In addition, UN is more distinctfrom the TED lecture domain. 
We use the IN sys-\ntem as baseline and perform significance testing in\ncomparison to this system.\nFA adaptation (ALL-FA-IN) results are similar\nto the German-to-English section, with consistent\nimprovements over the baseline but no significance\nis observed in this case.\nFor the weighting experiments, combining the\ntranslation models into the TM scheme improves\nover the standalone schemes. The LM scheme\nis performing better than TM in this case. We\nhypothesize that this is due to the big gap be-\ntween the in-domain TED corpus and the other-\ndomain UN corpus. The LM scheme is combining\na term which overweights sentences further from\nthe other-domain. This factor proves to be crucial\nin the case of a big gap between IN and OD. Such a\nterm is not present in the translation model weight-\ning schemes, we leave its incorporation for future\nwork.\nFinally, similar to the German-to-English re-\nsults, the combined TM+LM achieves the best re-\nsults, with significant improvements at the 90%\nlevel for all sets and error measures, and at the\n41\nType DE-EN AR-EN\nbase TM+LM base TM+LM\nlexical 23695 23451 26679 25813\nreorder 1193 1106 935 904\nTable 3: Error analysis. A comparison of the er-\nror types along with the error counts are given.\nThe systems include the baseline system and the\nTM+LM weighted system.\n95% level for most. TM+LM improves over the\nbaseline with +1.1% B LEU and -1.3% T ERon\ndev, +0.9% B LEU and -1.2% T ERon test2010 and\n+0.8% B LEU and -1.2% T ERon test2011.\n7 Error Analysis\nIn this section, we perform automatic and man-\nual error analysis. For the automatic part, we\nuse addicter3(Berka et al., 2012), which performs\nHMM word alignment between the reference and\nthe hypothesis and measures lexical (word inser-\ntions, deletions and substitutions) and reordering\nerrors. Addicter is a good tool to measure tenden-\ncies in the errors, but the number of errors might be\nmisleading due to alignment errors. The summary\nof the errors is given in Table 3. From the table we\nclearly see that the majority of the improvement\ncomes from lexical errors reduction. This is an in-\ndication of an improved lexical choice, due to the\nimproved phrase model probabilities.\nTranslation examples are given in Table 4. The\nexamples show that the lexical choice is being im-\nproved when using the weighted TM+LM phrase\nextraction. For the first example in German,\n“grossartig” means “great”, but translated by the\nbaseline as “a lot”, which causes the meaning to\nbe distorted. For the second Arabic example, the\nword ÈYªÓ is ambiguous and could mean both\n“rate” and “modified”. The TM+LM system does\nthe correct lexical choice in this case.\n8 Conclusion\nIn this work, we investigate several weighting\nschemes for phrase extraction adaptation. Unlike\nprevious work where language model scoring is\nused for adaptation, we utilize several translation\nmodels to perform the weighing.\nThe translation models used for weighting are\ncalculated over phrase aligned general-domain\n3https://wiki.ufal.ms.mff.cuni.cz/user:zeman:addicterSample sentences\nsrc es fuehlt sich grossartig an .\nref it feels great .\nbase it feels like a lot .\nTM+LM it feels great .\nsrc es haelt dich frisch .\nref it keeps you fresh .\nbase it’s got you fresh .\nTM+LM it keeps you fresh .\nsrc ÕËAªË@ ÐAª£A \rK.Ðñ\u0010®\u0010J\tJ\n»\nref How are you going to feed the world\nbase How will feed the world\nTM+LM How are you going to feed the world\nsrc AJ\n\t\u001cJ\nk.ÈYªÓ Z@\tY\t« ? 
@\tXAÖÏð\nref And why? Genetically engineered food\nbase And why ? Food rate genetically\nTM+LM And why ? Genetically modified food\nTable 4: Sample sentences. The source, reference,\nbaseline hypothesis and TM+LM weighted system\nhypothesis are given.\nsentences using an in-domain phrase table.\nExperiments on two language pairs show signif-\nicant improvements over the baseline, with gains\nup-to +1.0% B LEU and -1.3% T ERwhen using\na combined TM and LM (TM+LM) weighting\nscheme. The TM+LM scheme also shows im-\nprovements over previous work, namely scoring\nusing LM and using FA training to adapt a general-\ndomain phrase table to the in-domain (ALL-FA-IN\nmethod).\nIn future work, we plan to investigate using\ntranslation model scoring in a fashion similar to the\ncross entropy difference framework. In this case,\nthe general-domain data will be phrase aligned and\nscored using a general-domain phrase table, and\nthe difference between the in-domain based scores\nand the general-domain ones can be calculated.\nAnother interesting scenario we are planning to\ntackle is when only monolingual in-domain data\nexists, and whether our methods could be still ap-\nplied and gain improvements, for example using\nautomatic translations.\nAcknowledgments\nThis material is based upon work supported by\nthe DARPA BOLT project under Contract No.\nHR0011-12-C-0015.\n42\nReferences\nAxelrod, Amittai, Xiaodong He, and Jianfeng Gao.\n2011. Domain adaptation via pseudo in-domain data\nselection. In Proceedings of the 2011 Conference on\nEmpirical Methods in Natural Language Processing ,\npages 355–362, Edinburgh, Scotland, UK., July. As-\nsociation for Computational Linguistics.\nBerka, Jan, Ondrej Bojar, Mark Fishel, Maja Popovic,\nand Daniel Zeman. 2012. Automatic MT Error\nAnalysis: Hjerson Helping Addicter. In LREC ,\npages 2158–2163, Istanbul, Turkey.\nCettolo, M Federico M, L Bentivogli, M Paul, and\nS St¨uker. 2012. Overview of the iwslt 2012 eval-\nuation campaign. In International Workshop on\nSpoken Language Translation , pages 12–33, Hong\nKong, December.\nFoster, George and Roland Kuhn. 2007. Mixture-\nmodel adaptation for SMT. In Proceedings of the\nSecond Workshop on Statistical Machine Transla-\ntion, pages 128–135, Prague, Czech Republic, June.\nAssociation for Computational Linguistics.\nFoster, George, Cyril Goutte, and Roland Kuhn. 2010.\nDiscriminative instance weighting for domain adap-\ntation in statistical machine translation. In Proceed-\nings of the 2010 Conference on Empirical Methods\nin Natural Language Processing , pages 451–459,\nCambridge, MA, October. Association for Compu-\ntational Linguistics.\nGoto, Isao, Bin Lu, Ka Po Chow, Eiichiro Sumita, and\nBenjamin K Tsou. 2013. Overview of the patent\nmachine translation task at the ntcir-10 workshop.\nInProceedings of the 10th NTCIR Conference , vol-\nume 10, pages 260–286, Tokyo, Japan, June.\nKoehn, Philipp. 2004. Statistical Significance Tests\nfor Machine Translation Evaluation. In Proc. of the\nConf. on Empirical Methods for Natural Language\nProcessing (EMNLP) , pages 388–395, Barcelona,\nSpain, July.\nMansour, Saab and Hermann Ney. 2012. A simple\nand effective weighted phrase extraction for machine\ntranslation adaptation. In International Workshop on\nSpoken Language Translation , pages 193–200, Hong\nKong, December.\nMansour, Saab and Hermann Ney. 2013. Phrase train-\ning based adaptation for statistical machine transla-\ntion. 
In Proceedings of the 2013 Conference of the\nNorth American Chapter of the Association for Com-\nputational Linguistics: Human Language Technolo-\ngies, pages 649–654, Atlanta, Georgia, June. Asso-\nciation for Computational Linguistics.\nMatsoukas, Spyros, Antti-Veikko I. Rosti, and Bing\nZhang. 2009. Discriminative corpus weight esti-\nmation for machine translation. In Proceedings of\nthe 2009 Conference on Empirical Methods in Nat-\nural Language Processing , pages 708–717, Singa-\npore, August. Association for Computational Lin-\nguistics.Moore, Robert C. and William Lewis. 2010. Intelligent\nselection of language model training data. In Pro-\nceedings of the ACL 2010 Conference Short Papers ,\npages 220–224, Uppsala, Sweden, July. Association\nfor Computational Linguistics.\nOch, Franz J. 2003. Minimum Error Rate Train-\ning in Statistical Machine Translation. In Proceed-\nings of the 41th Annual Meeting of the Association\nfor Computational Linguistics , pages 160–167, Sap-\nporo, Japan, July.\nPapineni, Kishore, Salim Roukos, Todd Ward, and\nWei-Jing Zhu. 2002. Bleu: a Method for Auto-\nmatic Evaluation of Machine Translation. In Pro-\nceedings of the 41st Annual Meeting of the Associa-\ntion for Computational Linguistics , pages 311–318,\nPhiladelphia, Pennsylvania, USA, July.\nPopovi ´c, M. and H. Ney. 2006. POS-based Word Re-\norderings for Statistical Machine Translation. In In-\nternational Conference on Language Resources and\nEvaluation , pages 1278–1283.\nRoth, Ryan, Owen Rambow, Nizar Habash, Mona\nDiab, and Cynthia Rudin. 2008. Arabic morpho-\nlogical tagging, diacritization, and lemmatization us-\ning lexeme models and feature ranking. In Proceed-\nings of ACL-08: HLT, Short Papers , pages 117–120,\nColumbus, Ohio, June. Association for Computa-\ntional Linguistics.\nSnover, Matthew, Bonnie Dorr, Richard Schwartz, Lin-\nnea Micciulla, and John Makhoul. 2006. A Study of\nTranslation Edit Rate with Targeted Human Annota-\ntion. In Proceedings of the 7th Conference of the As-\nsociation for Machine Translation in the Americas ,\npages 223–231, Cambridge, Massachusetts, USA,\nAugust.\nWuebker, Joern and Hermann Ney. 2013. Length-\nincremental phrase training for smt. In ACL 2013\nEighth Workshop on Statistical Machine Translation ,\npages 309–319, Sofia, Bulgaria, August.\nWuebker, Joern, Arne Mauser, and Hermann Ney.\n2010. Training phrase translation models with\nleaving-one-out. In Proceedings of the 48th Annual\nMeeting of the Assoc. for Computational Linguistics ,\npages 475–484, Uppsala, Sweden, July.\nWuebker, Joern, Matthias Huck, Stephan Peitz, Malte\nNuhn, Markus Freitag, Jan-Thorsten Peter, Saab\nMansour, and Hermann Ney. 2012. Jane 2: Open\nsource phrase-based and hierarchical statistical ma-\nchine translation. In International Conference on\nComputational Linguistics , Mumbai, India, Decem-\nber.\n43",
"main_paper_content": null
}
|
{
"decision": "Unknown",
"reviews": []
}
| 0 | 0 |
[] |
[] |
{
"id": "hy5Qow03hw",
"year": null,
"venue": "EAMT 2014",
"pdf_link": "https://aclanthology.org/2014.eamt-1.35.pdf",
"forum_link": "https://openreview.net/forum?id=hy5Qow03hw",
"arxiv_id": null,
"doi": null
}
|
{
"title": "Collaborative web UI localization, or how to build feature-rich multilingual datasets",
"authors": [
"Vicent Alabau",
"Luis A. Leiva"
],
"abstract": null,
"keywords": [],
"raw_extracted_content": "Collaborative Web UI Localization, or\nHow to Build Feature-rich Multilingual Datasets\nVicent Alabau\nPRHLT Research Center\nUniversitat Polit `ecnica de Val `encia\[email protected] A. Leiva\nPRHLT Research Center\nUniversitat Polit `ecnica de Val `encia\[email protected]\nAbstract\nWe present a method to generate feature-\nrich multilingual parallel datasets for ma-\nchine translation systems, including e.g.\ntype of widget, user’s locale, or geoloca-\ntion. To support this argument, we have\ndeveloped a bookmarklet that instruments\narbitrary websites so that casual end users\ncan modify their texts on demand. After\nsurveying 52 users, we conclude that peo-\nple is leaned toward using this method in\nlieu of other comparable alternatives. We\nvalidate our prototype in a controlled study\nwith 10 users, showing that language re-\nsources can be easily generated.\n1 Introduction\nToday most websites are looking forward to mak-\ning their contents available in more than one lan-\nguage, mainly to reach a global audience, to gain\na competitive advantage, or just because of legal\nrequirements. To this end, adapting user interface\n(UI) texts through translation—or “localization”—\nis a central task, since its result affects system us-\nability and acceptability. Actually, translation is\njust one of the activities of localization yet the most\nimportant overall (Keniston, 1997).\nRecently there have been significant improve-\nments in machine translation (MT) technology,\nto the extent that, in particular contexts such as\nmedical prescriptions or knowledge-base articles,\nmachine-translated content is qualitatively compa-\nrable to that of human-translated (Dillinger and\nLaurie, 2009). However, for MT systems to excel\nat UI localization not only it is needed an impor-\ntant amount of training data, but also the data must\nbe especially tailored to the particularities of UI\nmessages. Indeed, translating the text in an inter-\nc\r2014 The authors. This article is licensed under a Creative\nCommons 3.0 licence, no derivative works, attribution, CC-\nBY-ND.face is a challenging task, even for trained human\ntranslators (Munt ´es-Mulero et al., 2012).\nParallel data offer a rich source of additional\nknowledge about language, and a sound basis for\nboth translation and contrastive studies (McEnery\nand Xiao, 2007). Although there are some valu-\nable tools to build multilingual parallel corpora,\nthey are still limited when it comes to the exploita-\ntion of UI-based resources. Thus, we propose a\nnovel approach: delegating the corpus generation\nto the end users of software applications, as a re-\nsult of a regular interaction with such applications.\nTo support our approach, we developed a proof-\nof-concept web-based prototype, motivated by the\nfact that nowadays people use web browsers more\nthan any other class of software. Moreover, soft-\nware translation poses two interesting challenges:\n1)user interface (UI) strings appear anywhere in\nthe developer’s language of choice whereas con-\ntent is typically generated and consumed in the\nuser’s language; 2)UI bilingual sentences can be\nenriched with metadata to handle disambiguation.\n2 Related Work\nIn the past, several methods have been developed\nto build parallel corpora by automatic means, e.g.,\nby mining Wikipedia (Smith et al., 2010), web\npages with a similar structure (Resnik and Smith,\n2003), parliament proceedings (Koehn, 2005), or\nusing specialized tools such as OPUS (Tiedemann,\n2012). 
However, in the end, parallel texts are scarce resources, limited in size and language coverage (Munteanu and Marcu, 2005).\nIn addition, many tools such as Crowdin(1), SmartLing(2), and Launchpad(3) do support collaborative translation. However, for these tools to work properly, applications must be internationalized beforehand. Besides, Google Translator Toolkit(4) allows contributing with translations. However, the proposed translations are not rendered on the web page unless one uses the Website Translator tool and owns the site. Furthermore, it is oriented to translating content and not UI elements such as buttons, drop-down lists, etc. that otherwise may carry valuable language information.\n(1) http://www.crowdin.net (2) http://www.smartling.com (3) http://translations.launchpad.net (4) http://translate.google.com/toolkit/\nProbably, the closest work in spirit to ours is Duolingo (von Ahn, 2013), an effort to collaboratively translate the Web while users are learning a language. However, we are interested in providing computer users with a means of editing the text of any website on demand, only when it is needed. More importantly, current tools force users to switch and use said tools, which may prevent them from contributing. Also, user contributions are not shown until the application owner decides to do so, thus hindering collaboration. Therefore, we feel another collaborative translation method is needed.\n3 User Survey\nWe prepared a 2-question survey in order to identify to what extent users would be motivated to translate or edit translations in a computer application or a website. The first question (Q1) asked about the degree of preference for using 4 different methods:\n1. M1: Editing the application source code.\n2. M2: Installing a dedicated tool.\n3. M3: The application features a menu option.\n4. M4: Editing text in-place, at runtime.\nThe second question (Q2) asked about the willingness to personalize the texts displayed in an application, provided that there were an easy method to do it. We included example images for each case, and answers to Q1 were randomly presented to the users, to avoid possible biases. Both questions were scored on a 1–5 Likert scale (1: strongly disagree, 5: strongly agree). The survey was then released online via Twitter, Facebook, and word-of-mouth communication. Eventually, 52 users (24 females) aged 19–34 from 5 countries (USA, UK, France, Spain, and Germany) participated in the survey. The results are shown in Table 1.\n          M1    M2    M3    M4    Q2\nM (mean)  1.79  2.37  3.27  4.58  4.27\nMdn       2     2     3     5     4\nSD        0.98  0.97  0.95  0.82  0.88\nTable 1: Detailed survey results.\nAs observed, a preference for in-place runtime translation (M4) is evident over the rest of the considered options. Installing dedicated software (M2) is not seen as a likable approach, and even less so editing the source code of the application (M1). On the other hand, having a translation facility bundled with the application (M3) is a significant enhancement. This is somewhat already implemented in most Linux programs, e.g., the official GNOME image viewer, which allows users to seamlessly collaborate worldwide to translate the program. Nevertheless, as previously pointed out, M4 seems to be the most comfortable option.\nRegarding the willingness to personalize texts (Q2), as expected, people are favorably predisposed to do so if they were given an easy-to-use method such as the one we are proposing. 
Together\nwith the previous answers, this survey reveals that\nour method would allow regular computer users to\n(indirectly) contribute with translations. This sug-\ngests in turn that occasional users of an application\nor arbitrary visitors of a website are more likely\nto submit a translation pair, which would dramat-\nically facilitate corpus construction, both in terms\nof human effort and time.\n4 Method Overview\nApparently, users are eager to contribute with\ntranslations when they can instantaneously person-\nalize their applications and the collaboration effort\nhas a low entry cost. Thus, we propose a method\nwere translations are carried out just-in-time and\nin-place . First, just-in-time implies that a transla-\ntion takes place at the very same moment that the\nuser needs it. For instance, when a user spots a\nsentence that has not been translated into her lan-\nguage, or a translation error is bothering her, she\nis simply able to amend the text on the UI. Sec-\nond, in-place editing means that translation is per-\nformed on the same UI, not in another application,\nso that the overhead introduced by task switch-\ning has minimal impact. This localization strategy\nhas shown some advantages over more traditional\nmethods (Leiva and Alabau, 2014).\nThe core idea of our method is adapting the be-\nhavior of UI widgets so that they can switch to an\nedit mode when some accelerator is used. Note\nthat the application should work as it was origi-\nnally designed, however the behavior of the wid-\ngets would change only on demand (see Figure\n1). While in theory this could be incorporated to\nany major UI library (e.g., Qt, GTK, MFC, Co-\n152\nFigure 1: Example of edit mode . While CTRL is pressed, elements are highlighted as the mouse hovers\nthem. Then, the user clicks on the element, which becomes editable, in order to change its content.\ncoa), in this paper we test a method that is suit-\nable for web-based UIs. For simplicity, the method\nis deployed as a bookmarklet (no installation, just\ndrag-and-drop, available for all browsers), which\nis more compatible than using extensions or plu-\ngins. The method can be roughly summarized as\nfollows: 1)a welcome menu is shown when click-\ning on the bookmarklet; 2)resource strings are au-\ntomatically extracted in the original language from\ntext nodes, alt attributes, form elements, etc.\nalong with a unique identifier (XPath); 3)user’s\nprevious translations, if any, are loaded and ap-\nplied to the UI; 4)event listeners to receive user\ninteraction are attached to UI elements; 5)when\nthe user activates the edit mode , UI elements be-\ncome content-editable items, or a modal window\npops up as a fallback mechanism; 6)user informa-\ntion is collected, such as locale, geolocation by IP,\netc. 7)finally, the user can submit her contribu-\ntions by clicking again on the bookmarklet.\n5 Evaluation\nWe performed a controlled evaluation to assess if\nour method was worth being deployed at a larger\nscale. Thus, we recruited 10 Spanish users with\nan advanced English level. Participants were told\nto translate while interacting with a small airline\nwebsite (5 pages) and one section of the popular\nWordpress platform. At the end of the session,\nusers submitted their translations to our server.\nIn 5 minutes, 159 out of the 205 poten-\ntially translatable sentences were identified by the\nusers. On average, each user contributed with 114\n(SD=4) sentences. 
Not all sentences were translated because some of them only appear under special circumstances, like error messages or hidden options in menus, whereas others have low saliency (e.g., a copyright notice). Figure 2a shows the histogram of sources with different translations. It can be observed that more than half of the sources received multiple translations, while it was not unusual to have up to 4 different translations for each source. Conversely, Figure 2b shows the histogram of the number of times the most voted translation was indeed produced by the agreement of n users. It turns out that users showed full disagreement only on 24 sentences. For the other sentences, at least two users agreed at any time. In addition, we can see a peak when 9 and 10 users agreed. This is explained in part because some sources were fairly simple to translate (such as navigation links) and thus it was expected that users would submit similar translations.\nIn general, users reported that they were happy to test our method for translating web pages. They felt the technique was easy to use, and expressed an intention to contribute with translations for their favorite applications. Hence, it seems plausible that a larger scale deployment would be successful.\n6 General Discussion\nOur method allows users to achieve an immediate benefit, since the website is being adapted to their language needs as they contribute to translating (and personalizing) it. At the same time, researchers also benefit from these contributions, since valuable language resources are being generated in the long run. Further, the method leads to having multiple references for a given source text, coming from different users worldwide, which allows for better training and evaluation of MT systems. More importantly, resources are ultimately supervised by humans—which provides valuable ground truth data—and can be deployed for potentially any language. Last but not least, our method enables “contextualized translation”, in the sense that additional metadata are coupled to the traditional source-target language pairs, such as the type of widget (e.g., button, label, etc.), geolocation, locale, or the user agent string.\nThe survey gave us intuition regarding whether regular users would engage to contribute with casual translations. Nevertheless, as in any collaborative tool, the user needs a motivation to carry out any task. We believe that our proposal adds great value to how users experience computer software since, right from the beginning, they can fix translation errors and personalize their favorite applications.\nFigure 2: Distribution of different translations per source (2a) and histogram of user agreements (2b).\nIn contrast to other approaches where the user contributions are used to merely collect data, here these contributions are rendered immediately on the UI, so the benefit becomes instantaneous. Besides, as more and more data are collected, they can be used to initially populate a web page or application with the consensus translations from other users. This is especially interesting for minority languages, where a few users with knowledge of said minority language can make the UI accessible to the rest of users. 
6 General Discussion

Our method allows users to achieve an immediate benefit, since the website is being adapted to their language needs as they contribute to translating (and personalizing) it. At the same time, researchers also benefit from these contributions, since valuable language resources are generated in the long run. Further, the method leads to having multiple references for a given source text, coming from different users worldwide, which allows for better training and evaluation of MT systems. More importantly, resources are ultimately supervised by humans—which provides valuable ground-truth data—and can be deployed for potentially any language. Last but not least, our method enables "contextualized translation", in the sense that additional metadata are coupled to the traditional source-target language pairs, such as the type of widget (e.g., button, label, etc.), geolocation, locale, or the user agent string.

The survey gave us intuition regarding whether regular users would engage in contributing casual translations. Nevertheless, as in any collaborative tool, the user needs a motivation to carry out the task. We believe that our proposal adds great value to how users experience computer software since, right from the beginning, they can fix translation errors and personalize their favorite applications. In contrast to other approaches where user contributions are used merely to collect data, here these contributions are rendered immediately on the UI, so the benefit is instantaneous. Besides, as more and more data are collected, they can be used to initially populate a web page or application with the consensus translations from other users. This is especially interesting for minority languages, where a few users with knowledge of the minority language can make the UI accessible to the rest of the users. Also, information reported by the browser can provide translations tailored to the user context, e.g., country or operating system. Hopefully, the low entry cost of our approach will reduce the burden on the user and thus foster collaboration.

In addition, the language resources that our method is able to collect provide unprecedented value for the MT community. First, potentially any language with a representative user base can generate parallel data. What is more, sentence pairs are properly aligned, since they come from the very same UI element, and multiple references may be available. Furthermore, translations are performed with a visual context. Thus, not only do the chances that translations are appropriate improve, but language resources can also be tagged with feature-rich metadata. For instance, the type of UI element (e.g., paragraph, button, link), or the text of a header or label that relates to it, can be used as additional information to provide better disambiguation in MT (Muntés-Mulero et al., 2012). Finally, personal information—if available and always with the user's consent—can provide resources for adapting general models to specific dialects, or for targeting different age groups.

Acknowledgments

Work supported by the FP7 of the European Commission under grant agreements 287576 (CASMACAT) and 600707 (tranScriptorium).

References

Dillinger, M. and G. Laurie. 2009. Success with machine translation: automating knowledge-base translation. ClientSide News.

Keniston, Kenneth. 1997. Software Localization: Notes on Technology and Culture. Working Paper #26, Massachusetts Institute of Technology.

Koehn, Philipp. 2005. Europarl: A Parallel Corpus for Statistical Machine Translation. In Proc. MT Summit, pages 79–86.

Leiva, Luis A. and Vicent Alabau. 2014. The impact of visual contextualization on UI localization. In Proc. CHI, pages 3739–3742.

McEnery, Anthony and Zhonghua Xiao. 2007. Parallel and comparable corpora: What are they up to? In Incorporating Corpora: Translation and the Linguist. Multilingual Matters.

Munteanu, Dragos Stefan and Daniel Marcu. 2005. Improving Machine Translation Performance by Exploiting Non-Parallel Corpora. Computational Linguistics, 31(4):477–504.

Muntés-Mulero, V., P. Paladini Adell, C. España-Bonet, and L. Màrquez. 2012. Context-Aware Machine Translation for Software Localization. In Proc. EAMT, pages 77–80.

Resnik, Philip and Noah A. Smith. 2003. The Web as a parallel corpus. Computational Linguistics, 29(3):349–380.

Smith, Jason R., Chris Quirk, and Kristina Toutanova. 2010. Extracting parallel sentences from comparable corpora using document level alignment. In Proc. NAACL, pages 403–411.

Tiedemann, Jörg. 2012. Parallel Data, Tools and Interfaces in OPUS. In Proc. LREC, pages 2214–2218.

von Ahn, Luis. 2013. Duolingo: learn a language for free while helping to translate the web. In Proc. IUI, pages 1–2.",
"main_paper_content": null
}
|
{
"decision": "Unknown",
"reviews": []
}
| 0 | 0 |
[] |
[] |
{
"id": "Pa96BAIt9jr",
"year": null,
"venue": "EAMT 2022",
"pdf_link": "https://aclanthology.org/2022.eamt-1.47.pdf",
"forum_link": "https://openreview.net/forum?id=Pa96BAIt9jr",
"arxiv_id": null,
"doi": null
}
|
{
"title": "QUARTZ: Quality-Aware Machine Translation",
"authors": [
"José G. C. de Souza",
"Ricardo Rei",
"Ana C. Farinha",
"Helena Moniz",
"André F. T. Martins"
],
"abstract": null,
"keywords": [],
"raw_extracted_content": "QUARTZ : Quality-Aware Machine Translation\nJosé G. C. de Souza1Ricardo Rei1,2,4Ana C Farinha1\nHelena Moniz1,4,5André F. T. Martins1,2,3\n1Unbabel2Instituto Superior Técnico3Instituto de Telecomunicações\n4INESC-ID5Faculdade de Letras da Universidade de Lisboa\nLisbon, Portugal\njose.souza, ricardo.rei, catarina.farinha, helena.moniz, [email protected]\nAbstract\nThis paper presents Q UARTZ , QUality-\nAwaRe machine Translation, a project led\nby Unbabel and funded by the ELISE\nOpen Call1which aims at developing ma-\nchine translation systems that are more\nrobust and produce fewer critical errors.\nWith Q UARTZ we want to enable ma-\nchine translation for user-generated con-\nversational content types that do not tol-\nerate critical errors in automatic transla-\ntions. The project runs from January to\nJuly 2022.\n1 Introduction\nDespite the progress in the fluency of machine\ntranslation (MT) systems, critical translation errors\nare still frequent, including deviations in meaning\nthrough toxic or offensive content, hallucinations,\nmistranslation of entities with health, safety, or fi-\nnancial implications, or deviation in sentiment po-\nlarity or negation. These errors occur more often\nwhen the source sentence is out of domain or con-\ntains typos, abbreviations, or capitalized text, all\ncommon with user-generated content. This lack of\nrobustness prevents the use of MT systems in prac-\ntical applications where the above errors cannot be\ntolerated.\nQUARTZ aims to build reliable, quality-aware\nMT systems for user-generated conversational\ndata. The project will address the limitations above\nby: (a) developing quality metrics capable of de-\ntecting critical errors and hallucinations; (b) en-\ndowing MT systems with a confidence (quality)\n© 2022 The authors. This article is licensed under a Creative\nCommons 3.0 licence, no derivative works, attribution, CC-\nBY-ND.\n1Funding from the European Union’s Horizon 2020 research\nand innovation program under grant agreement No 951847score, and fine-tuning pre-trained MT models to\nthe domains in which they will be used through\nquality-driven objectives.\nThis will be done by leveraging post-edited data\nand quality annotations produced by the Unba-\nbel community and building upon the state-of-the-\nart, open-source quality estimation technology al-\nready existing at Unbabel: O PENKIWI(Kepler et\nal., 2019) and C OMET (Rei et al., 2020). From\na product perspective, focus will be given to con-\nversational, user-generated data in a multilingual\ncustomer service scenario (email or chat involving\na customer and an agent), in which Unbabel has\nrenowned expertise and existing technology vali-\ndated by existing customers. The solution aims to\neliminate language barriers in the highly multilin-\ngual European market.\n2 MT and Translation Quality\nThe current state of the art in MT is based on auto-\nregressive sequence-to-sequence models trained\nwith maximum likelihood and teacher forcing.\nThis objective encourages the model to assign high\nprobability to reference translations, but does not\naccount for the severity of translation mistakes of\nthe hypotheses generated. 
This leads to exposure bias, vulnerability to adversarial attacks, and no control for hallucinations, harmful content, and biases (Wang and Sennrich, 2020), hampering the responsible use of NMT for user-generated conversational content.

Project Overview Qualitative evaluation carried out by translators (post-editors and annotators) provides a human feedback loop that can generate large amounts of data with information about translation errors, their severities, and detailed quality annotations. The main methodology used to evaluate translations according to different aspects of translation quality is the industry-adopted multidimensional quality metrics (MQM) taxonomy (Lommel et al., 2014). Unbabel uses this data to train its open-source COMET and OPENKIWI frameworks to develop systems for MT evaluation and quality estimation, with MQM annotations and post-edits becoming a standard in the Metrics and Quality Estimation WMT shared tasks (Freitag et al., 2021; Specia et al., 2021).

This project will close this loop by making MT systems quality-aware and robust. Decoding strategies for MT will be developed using the quality estimation metrics trained on the target domain data. The incorporation of these quality objectives into the decoding step of MT systems can have a big impact on controlling their tendency to produce hallucinations and other critical mistakes. This rationale is depicted in Figure 1.

Figure 1: In QUARTZ, quality estimation systems will interact directly with the machine translation system during the decoding phase to avoid critical errors. Words marked in red are considered errors.

Related Work Prior work on minimum Bayes risk (MBR) decoding paves the way to tune MT systems towards a given metric, but so far this has been done with purely lexical metrics such as BLEU (Müller and Sennrich, 2021) or with neural metrics that do not capture severity and biases (Freitag et al., 2022). The main difference between QUARTZ and previous work is going beyond lexical metrics by incorporating quality scores when generating automatic translations.
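The project overview above does not commit to a specific algorithm, but the quality-aware decoding idea can be illustrated with a small reranking sketch: score each candidate in an N-best list with a reference-free quality-estimation model and keep the best one. The `qe_score` function below is a placeholder, not the OpenKiwi or COMET API, and the whole snippet is only an illustration under that assumption.

```python
# Illustrative sketch of quality-aware candidate selection (not QUARTZ's
# actual implementation): rerank an N-best list with a QE score.
from typing import Callable, List

def qe_rerank(source: str,
              candidates: List[str],
              qe_score: Callable[[str, str], float]) -> str:
    """Return the candidate with the highest estimated quality.

    qe_score(source, hypothesis) stands in for a reference-free quality
    estimation model such as those trained with OpenKiwi or COMET.
    """
    return max(candidates, key=lambda hyp: qe_score(source, hyp))

if __name__ == "__main__":
    # Toy QE stand-in: penalize empty or wildly long hypotheses.
    def toy_qe(src: str, hyp: str) -> float:
        if not hyp.strip():
            return -1.0
        return -abs(len(hyp.split()) - len(src.split()))

    cands = ["Das ist ein Test.", "Das ist.", ""]
    print(qe_rerank("This is a test.", cands, toy_qe))
```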
References

Freitag, Markus, Ricardo Rei, Nitika Mathur, Chi-kiu Lo, Craig Stewart, George Foster, Alon Lavie, and Ondřej Bojar. 2021. Results of the WMT21 metrics shared task: Evaluating metrics with expert-based human evaluations on TED and news domain. In Proceedings of the Sixth Conference on Machine Translation, pages 733–774, Online, November. Association for Computational Linguistics.

Freitag, Markus, David Grangier, Qijun Tan, and Bowen Liang. 2022. High quality rather than high model probability: Minimum Bayes risk decoding with neural metrics. Accepted at Transactions of the Association for Computational Linguistics, presented at the North American Chapter of the Association for Computational Linguistics 2022, Seattle, Washington. Association for Computational Linguistics.

Kepler, Fabio, Jonay Trénous, Marcos Treviso, Miguel Vera, and André F. T. Martins. 2019. OpenKiwi: An open source framework for quality estimation. In Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics: System Demonstrations, pages 117–122, Florence, Italy, July. Association for Computational Linguistics.

Lommel, Arle, Aljoscha Burchardt, and Hans Uszkoreit. 2014. Multidimensional quality metrics (MQM): A framework for declaring and describing translation quality metrics. Tradumàtica: Tecnologies de la Traducció, 0:455–463, 12.

Müller, Mathias and Rico Sennrich. 2021. Understanding the properties of minimum Bayes risk decoding in neural machine translation. In Proceedings of the Joint Conference of the 59th Annual Meeting of the Association for Computational Linguistics and the 11th International Joint Conference on Natural Language Processing (ACL-IJCNLP 2021).

Rei, Ricardo, Craig Stewart, Ana C Farinha, and Alon Lavie. 2020. COMET: A neural framework for MT evaluation. In Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing (EMNLP), pages 2685–2702, Online, November. Association for Computational Linguistics.

Specia, Lucia, Frédéric Blain, Marina Fomicheva, Chrysoula Zerva, Zhenhao Li, Vishrav Chaudhary, and André F. T. Martins. 2021. Findings of the WMT 2021 shared task on quality estimation. In Proceedings of the Sixth Conference on Machine Translation, pages 684–725, Online, November. Association for Computational Linguistics.

Wang, Chaojun and Rico Sennrich. 2020. On exposure bias, hallucination and domain shift in neural machine translation. In Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics, pages 3544–3552, Online, July. Association for Computational Linguistics.",
"main_paper_content": null
}
|
{
"decision": "Unknown",
"reviews": []
}
| 0 | 0 |
[] |
[] |
{
"id": "bav7keY9Vm",
"year": null,
"venue": "EAMT 2022",
"pdf_link": "https://aclanthology.org/2022.eamt-1.9.pdf",
"forum_link": "https://openreview.net/forum?id=bav7keY9Vm",
"arxiv_id": null,
"doi": null
}
|
{
"title": "Searching for COMETINHO: The Little Metric That Could",
"authors": [
"Ricardo Rei",
"Ana C. Farinha",
"José G. C. de Souza",
"Pedro G. Ramos",
"André F. T. Martins",
"Luísa Coheur",
"Alon Lavie"
],
"abstract": "Ricardo Rei, Ana C Farinha, José G.C. de Souza, Pedro G. Ramos, André F.T. Martins, Luisa Coheur, Alon Lavie. Proceedings of the 23rd Annual Conference of the European Association for Machine Translation. 2022.",
"keywords": [],
"raw_extracted_content": "Searching for C OMETINHO : The Little Metric That Could\nRicardo Rei∗,1,2,3Ana C Farinha∗,1Jos´e G. C. de Souza∗,1\nPedro G. Ramos2Andr ´e F. T. Martins1,3,4Luisa Coheur2,3Alon Lavie1\n1Unbabel2INESC-ID3Instituto Superior T ´ecnico4Instituto de Telecomunicac ¸ ˜oes\n{ricardo.rei, catarina.farinha, jose.souza }@unbabel.com\nAbstract\nRecently proposed neural-based machine\ntranslation evaluation metrics, such as\nCOMET and B LEURT , exhibit much higher\ncorrelations with human judgments than\ntraditional lexical overlap metrics. How-\never, they require large models and are\ncomputationally very costly, preventing\ntheir application in scenarios where one\nhas to score thousands of translation hy-\npotheses (e.g. outputs of multiple sys-\ntems or different hypotheses of the same\nsystem, as in minimum Bayes risk decod-\ning). In this paper, we introduce several\ntechniques, based on pruning and knowl-\nedge distillation, to create more compact\nand faster C OMET versions—which we\ndub C OMETINHO . First, we show that\njust by optimizing the code through the\nuse of caching and length batching we\ncan reduce inference time between 39 %\nand 65 %when scoring multiple systems.\nSecond, we show that pruning C OMET\ncan lead to a 21% model reduction with-\nout affecting the model’s accuracy be-\nyond 0.015 Kendall τcorrelation. Finally,\nwe present D ISTIL -COMET , a lightweight\ndistilled version that is 80 %smaller and\n2.128x faster while attaining a perfor-\nmance close to the original model. Our\ncode is available at: https://github.\ncom/Unbabel/COMET\n© 2022 The authors. This article is licensed under a Creative\nCommons 3.0 licence, no derivative works, attribution, CC-\nBY-ND.\n* Corresponding authors.1 Introduction\nTraditional metrics for machine translation (MT)\nevaluation rely on lexical similarity between a\ngiven hypothesis and a reference translation. Met-\nrics such as B LEU (Papineni et al., 2002) and\nCHRF (Popovi ´c, 2015) remain popular due to ef-\nficient memory usage and fast computational per-\nformance, even though several studies have shown\nthat they correlate poorly with human judgements,\nspecially for high quality MT (Ma et al., 2019;\nMathur et al., 2020a).\nIn contrast, neural fine-tuned metrics on top of\npre-trained models such as mBERT (Devlin et al.,\n2019) and XLM-R (Conneau et al., 2020) (e.g\nBLEURT (Sellam et al., 2020) and C OMET (Rei et\nal., 2020) have demonstrated significant improve-\nments in comparison to other metrics (Mathur et\nal., 2020b; Kocmi et al., 2021; Freitag et al.,\n2021b). The improvements made them good can-\ndidates for revisiting promising research directions\nwhere the metric plays a more central role in can-\ndidate selection during decoding, such as N-best\nreranking (Ng et al., 2019; Bhattacharyya et al.,\n2021; Fernandes et al., 2022) and minimum Bayes\nrisk (MBR) decoding (Eikema and Aziz, 2021;\nM¨uller and Sennrich, 2021). Nonetheless, the\ncomplexity of such strategies using metrics based\non large transformer models can become impracti-\ncal for a large set of MT hypotheses.\nIn this paper, we describe several experiments\nthat attempt to reduce C OMET computational cost\nand model size to make it more efficient at in-\nference. 
Our techniques are particularly useful in\nsettings where we have multiple translations from\ndifferent systems on the same source sentences.\nSince the models are based on triplet encoders, we\nwill first analyse the impact of embedding caching\n1 2 3 4 5 6 7 8\nNumber of Systems050100150200250Time (s)Original Model (wmt20-comet-da)\nOriginal Model + Caching + Len. batching\nPrune + Caching + Len. batching\nDistil. + Caching + Len. batching\nBLEUFigure 1: Comparison between the vanilla C OMET , COMET\nwith caching and length batching, P RUNE -COMET and\nDISTIL -COMET . We report the average of 5 runs for each\nmodel/metric for a varying number of systems. All experi-\nments were performed using the German →English WMT20\nNewstest, with a NVIDIA GeForce GTX 1080 TI GPU\nand a constant batch size of 16. For comparison we also\nplot the runtime of B LEU in a Intel(R) Core(TM)\ni7-6850K CPU @ 3.60GHz .\nand length batching . Then, we will try to fur-\nther reduce the computational cost by using weight\npruning andknowledge distillation . Our results\nshow that embedding caching and length batch-\ning alone can boost C OMET performance 39.19%\nwhen scoring one system and 65.44% when scor-\ning 8 systems over the same test set. Furthermore,\nwith knowledge distillation we are able to create a\nmodel that is 80% smaller and 2.128x faster with a\nperformance close to the original model and above\nstrong baselines such as B ERTSCORE and P RISM .\nFigure 1 shows time differences for all proposed\nmethods when evaluating a varying number of sys-\ntems.\n2 Related Work\nIn the last couple of years, learned metrics such\nas C OMET (Rei et al., 2020) and B LEURT (Sel-\nlam et al., 2020) proved to achieve high cor-\nrelations with human judgments (Mathur et al.,\n2020b; Freitag et al., 2021a; Kocmi et al., 2021).\nThey are cast as a regression problem and cap-\nture the semantic similarity between the translated\ntext and a reference text, going beyond the sim-\nple surface/lexical similarities—the base of popu-\nlar metrics like B LEU (Papineni et al., 2002) and\nCHRF (Popovi ´c, 2015). The fact that C OMET and\nBLEURT metrics leverage large pre-trained multi-\nlingual models was a huge turning point. By using\ncontextual embeddings trained on a different task,researchers were able to overcome the scarcity of\ndata in MT evaluation (as well as in other tasks in\nwhich data is also limited). With such multilin-\ngual models, high-quality MT evaluation is now a\npossibility, even for language pairs without labeled\ndata available (i.e. zero-shot scenarios). How-\never, this multilingual property usually comes with\na trade-off. For example, for cross-lingual transfer\ntask, gains in performance (higher accuracy with\nhuman labels) only occur by adding new language\npairs until a certain point, after which adding more\nlanguages actually decreases the performance, un-\nless the model capacity is also increased (a phe-\nnomena called “the curse of multilinguality” (Con-\nneau et al., 2020).\nBesides the curse of multilinguality phenomena,\nthe NLP community has been motivated to build\nlarger and larger transformer models because, gen-\nerally, the bigger the model the better it performs.\nThis was demonstrated in several tasks like the\nones in the GLUE benchmark (Goyal et al., 2021)\nand in multilingual translation tasks (Fan et al.,\n2020). 
Hence, models are achieving astonish-\ning sizes like BERT with 340M parameters (De-\nvlin et al., 2019), XLM-R XXL with 10.7B param-\neters (Goyal et al., 2021), M2M-100 with 12B\nparameters (Fan et al., 2020), and GPT-3 with\n175B parameters (Brown et al., 2020). However,\nthis growth comes with computational, monetary\nand environmental costs. For example, training a\nmodel with 1.5B parameters costs from 80k dollars\nup to 1.6M dollars1when doing hyper-parameter\ntuning and performing multiple runs per setting\n(Sharir et al., 2020). Such scale makes running\nsimilar experiments impractical to the majority of\nresearch groups, and the high energy and high re-\nsponse latency of such models are preventing them\nfrom being deployed in production (e.g. (Sun et\nal., 2020)).\nTo deal with the above problem, it is neces-\nsary to apply techniques for making models more\ncompact, such as pruning, distillation, quantiza-\ntion, among others. In a recent review (Gupta\nand Agrawal, 2022) summarizes these techniques\nfor increasing inference efficiency, i.e., for mak-\ning the model faster, consuming fewer computa-\ntional resources, using less memory, and less disk\nspace. DistilBERT (Sanh et al., 2019) is a success-\nful example: using distillation with BERT as the\n1Estimates from (Sharir et al., 2020) calculated using internal\nAI21 Labs data; cloud solutions such as GCP or AWS can\ndiffer.\n1000 1500 2000 2500 3000 3500 4000 4500 5000\nT estset Size020406080100120140Timeunsorted\nlen. batch\nBLEUFigure 2: Runtime (in seconds) varying number of exam-\nples, with a NVIDIA GeForce GTX 1080 TI GPU and\na constant batch size of 16. The time is calculated with\nthe average of 10 runs using the default C OMET model\nwmt20-comet-da . For comparison we also plot the run-\ntime of BLEU in a Intel(R) Core(TM) i7-6850K\nCPU @ 3.60GHz .\nteacher and reducing the amount of layers from\nthe regular 12 to only 6, the model retains 97%\nof BERT’s performance while reducing the size\nby 40% and being 60% faster. The authors have\nalso shown that when used for a mobile appli-\ncation (iPhone), the DistilBERT was 71% faster\nthan BERT. Another example, closer to our re-\nsearch, is the metric obtained from using synthetic\ndata and performing distillation using a new vari-\nation of B LEURT as the teacher (Pu et al., 2021).\nThe resulting metric obtains up to 10.5% improve-\nment over vanilla fine-tuning and reaches 92.6%\nof teacher’s performance using only a third of\nits parameters. Nonetheless, the architecture of\nBLEURT -based models requires that the reference\nis always encoded together with MT hypothesis\nwhich is extremely inefficient in use cases such as\nMBR, where the metric has a O(N2)complexity\n(with Nbeing the number of hypotheses), and sys-\ntem scoring where for a fixed source and reference\nwe can have several translations being compared.\n3 Length Sorting and Caching\nBefore exploring approaches that reduce the num-\nber of model parameters, we experiment with tech-\nniques to optimize the inference time computa-\ntional load. One which is commonly used is to sort\nthe batches according to sentence length to reduce\ntensor padding (Pu et al., 2021). Since C OMET\nreceives three input texts (source, hypothesis and\nreference), for simplicity, we do length sorting ac-\ncording to the source length. 
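As a minimal, illustrative sketch of the two optimizations named in this section (not the actual COMET implementation), the snippet below sorts examples by source length before batching and memoizes sentence encodings so that each unique source, hypothesis, or reference is encoded only once; `encode` is a placeholder for the underlying sentence encoder.

```python
# Minimal sketch of length batching and embedding caching (illustrative only;
# 'encode' stands in for the metric's sentence encoder).
from typing import Callable, Dict, List, Sequence

def length_batches(sources: Sequence[str], batch_size: int) -> List[List[int]]:
    """Group example indices into batches of similar source length to reduce padding."""
    order = sorted(range(len(sources)), key=lambda i: len(sources[i].split()))
    return [order[i:i + batch_size] for i in range(0, len(order), batch_size)]

class CachedEncoder:
    """Encode each unique sentence once and reuse the result on later calls."""
    def __init__(self, encode: Callable[[str], object]):
        self.encode = encode
        self.cache: Dict[str, object] = {}

    def __call__(self, sentence: str):
        if sentence not in self.cache:
            self.cache[sentence] = self.encode(sentence)
        return self.cache[sentence]

if __name__ == "__main__":
    enc = CachedEncoder(encode=lambda s: len(s))  # toy "embedding"
    srcs = ["a b c", "a", "a b", "a b c d"]
    print(length_batches(srcs, batch_size=2))
    enc("Hello"); enc("Hello")                    # second call hits the cache
    print(len(enc.cache))                          # -> 1
```

Grouping similarly sized sentences reduces padding inside each batch, and the cache pays off whenever the same source and reference are scored against translations from several systems.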
Figure 2 shows the speed difference between an unsorted test set of varying size and length-based sorting.

As previously pointed out, COMET metrics are based on triplet encoders,² which means that the source and reference encoding does not depend on the provided MT hypothesis, as opposed to other recent metrics such as BLEURT (Sellam et al., 2020), which have to repeatedly encode the reference for every hypothesis. With that said, using COMET we only need to encode each unique sentence (source, hypothesis translation, or reference translation) once. This means that we can cache previously encoded batches and reuse their representations. In Figure 3, we show the speed gains, in seconds, when scoring multiple systems over the same test set. This reflects the typical MT development use case in which we want to select the best among several MT systems.

²A triplet encoder is a model architecture in which three sentences are encoded independently and in parallel. Architectures such as this have been extensively explored for sentence retrieval applications due to their efficiency (e.g., Sentence-BERT (Reimers and Gurevych, 2019)).

Figure 3: Runtime (in seconds) for a varying number of systems on the de-en WMT20 Newstest, with an NVIDIA GeForce GTX 1080 TI GPU and a constant batch size of 16. The time is the average of 5 runs using the default COMET model wmt20-comet-da. For comparison we also plot the runtime of BLEU on an Intel(R) Core(TM) i7-6850K CPU @ 3.60GHz.

These two optimizations altogether are responsible for reducing the inference time of COMET from 34.7 seconds to 21.1 seconds when scoring 1 system (39.19% faster) and from 265.9 seconds to 91.9 seconds when scoring 8 systems (65.44% faster). For all experiments in the rest of the paper we always use both optimizations on all COMET models being compared.

Figure 4: Normalized weights distribution for the COMET default model (wmt20-comet-da). Layers 15-19 are the most relevant ones, with a normalized weight between 0.75 and 1. The representations learnt by layers 15-19 depend on previous layers, but we can prune the top layers (20-25) without impacting the layers that the model deemed more relevant.

4 Model Pruning

Model pruning has been widely used in natural language processing to remove non-informative connections and thus reduce model size (Zhu and Gupta, 2018). Since most COMET parameters come from the XLM-R model, we attempt to reduce its size. We start with layer pruning by removing the top layers of XLM-R. Then we experiment with making its encoder blocks smaller, either by reducing the size of the feed-forward hidden layers or by removing attention heads. The main advantage of these approaches is their simplicity: within minutes we are able to obtain a new model with reduced size and memory footprint and minimal performance impact.

For all the experiments in this section, we used the development set from the Metrics shared task of WMT 2020. This set contains direct assessment annotations (DA; Graham et al., 2013) for English→German, English→Czech, English→Polish, and English→Russian. We use these language pairs because they were annotated by experts exploring document context and in a bilingual setup (without access to a reference translation).³
Nonetheless, in Section 6 we show\nthe resulting model performance on all language\n3In the WMT 2020 findings paper (Mathur et al., 2020b),\nmost metrics showed suspiciously low correlations with hu-\nman judgements based on crowd-sourcing platforms such as\nMechanical Turk. Thus, we decided to focus just on 4 lan-\nguage pairs in which annotations are deemed as trustworthy.\n0 2 4 6 8 10\nNumber of Pruned Layers0.420.440.460.48Devset Kendall-tauOriginal ModelFigure 5: Impacts in performance of Layer Pruning for the\nWMT 2020 development set. We can observe that removing\nup to 5 layers does not affect model performance but provides\na 10% reduction in model size.\npairs from WMT 2021 for both DA and multi-\ndimensional quality metric annotations (MQM;\n(Lommel et al., 2014)).\n4.1 Layer Pruning\nIn large pre-trained language models, different lay-\ners learn representations that capture different lev-\nels of linguistic abstractions, which can impact a\ndownstream task in different ways (Peters et al.,\n2018; Tenney et al., 2019). In order to let the\nmodel learn the relevance of each layer during\ntraining, (Peters et al., 2018) proposed a layer-\nwise attention mechanism that pools information\nfrom all layers. This method has been adopted in\nCOMET .\nAfter analyzing the weights learnt by C OMET\n(wmt20-comet-da ) for each layer of XLM-R\n(Figure 4), we realized that the topmost layers (20-\n25) are not the most relevant ones. This means\nthat we can prune those layers without having an\nimpact on the most relevant features.\nEach removed layer decreases the number of\ntotal parameters by 2.16%. Figure 5 shows the\nimpacts in performance after removing a varying\nnumber of layers. As we can observe, performance\nstarts to decrease only after removing 5 layers.\nYet, removing 5 layers already produces a 10.8%\nreduction in model parameters. Surprisingly, re-\nmoving the last layer (pruning 1 layer) slightly\nimproves the performance in terms of Kendall-\ntau (Kendall, 1938).\n4.2 Transformer Block Pruning\nThe Transformer architecture is composed of sev-\neral encoder blocks (layers) stacked on top of the\nother. In the previous section, we reduce model\n512 1024 2048 3072 4096\nHidden Size0.2750.3000.3250.3500.3750.4000.4250.4500.475Devset Kendall-tau\n707580859095100\n% of parametersOriginal Model(a)Feed-forward hidden size pruning.\n4 6 8 10 12 14 16\nNumber of Heads0.360.380.400.420.440.46Devset Kendall-tau\n7880828486889092\n% of parameters (b)Attention head pruning.\nFigure 6: Impact of gradient based pruning techniques on model size (in blue) and performance on the WMT 2020 development\nset (in green). Note that in Figure (a) we apply pruning just for the feed-forward hidden size. In Figure (b) pruning is applied\nto several heads while freezing the hidden size to 3072 (3/4 of the original hidden size of XLM-R).\nsize by removing the topmost blocks (depth prun-\ning). In this section we reduce the size of each\nblock instead (width pruning).\nEach transformer block is made of two com-\nponents: a self-attention (composed of several at-\ntention heads) and a feed-forward neural network .\nIn XLM-R-large, each block is made of 16 self-\nattention heads followed by a feed-forward of a\nsingle hidden layer with 4092 parameters.\nUsing the TextPruner toolkit4, we can eas-\nily prune both the attention heads and the feed-\nforward hidden sizes. 
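As a rough, hypothetical illustration of the depth-pruning idea (width pruning of heads and feed-forward sizes is handled by TextPruner itself), one could truncate the topmost encoder layers of a Hugging Face XLM-R checkpoint as sketched below; this is not the paper's code or the TextPruner API, and it assumes the `transformers` package.

```python
# Hypothetical sketch of layer (depth) pruning on an XLM-R encoder, assuming
# the Hugging Face 'transformers' package. Embeddings and lower layers are
# left untouched; only the topmost encoder blocks are dropped.
from transformers import XLMRobertaModel

def prune_top_layers(model_name: str = "xlm-roberta-large", keep_layers: int = 19):
    model = XLMRobertaModel.from_pretrained(model_name)
    # Keep only the bottom `keep_layers` encoder blocks; COMET's layer-wise
    # attention mostly relies on layers below the pruned ones.
    model.encoder.layer = model.encoder.layer[:keep_layers]
    model.config.num_hidden_layers = keep_layers
    return model

if __name__ == "__main__":
    pruned = prune_top_layers(keep_layers=19)
    print(sum(p.numel() for p in pruned.parameters()))  # remaining parameter count
```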
Figure 6a shows the impact of pruning the hidden sizes from 4096 to {512, 1024, 2048, 3072}, while Figure 6b shows the impact of reducing the number of attention heads from 16 to {4, 6, 8, 10, 12, 14}.

4.3 PRUNED-COMET

After experimenting with these three different pruning techniques, we created a pruned version of COMET in which we keep only 19 XLM-R layers, reduce the feed-forward hidden size to 3/4 of the original (3072), and remove 2 heads (out of 16). According to our experiments above, the resulting model should perform almost on par with the original model while being 21.1% smaller.

The resulting model is able to score 1000 samples in just 19.74 seconds, while the original model takes around 31.32 seconds. It is important to notice that most of the XLM-R parameters come from its huge embedding layer. Since the memory taken by the embedding layer does not affect the inference time, the obtained 20% reduction in parameters translates into speed improvements of around 36.97%.⁵

⁴https://textpruner.readthedocs.io/en/latest/

5 Distillation

Another commonly used way to compress neural networks is knowledge distillation (Bucilua et al., 2006; Hinton et al., 2015), in which, for large amounts of unlabeled data, a smaller neural network (the student) is trained to mimic a more complex model (the teacher).

As the teacher network, we used an ensemble of 5 COMET models trained with different seeds (Glushkova et al., 2021). The student network follows the same architecture as the original model and the same hyper-parameters. However, instead of using XLM-R-large, it uses a distilled version with only 12 layers, 12 heads, embeddings of 384 features, and intermediate hidden sizes of 1536. This model has only 117M parameters, compared to the 560M parameters of the large model.

Regarding the unlabeled data for distillation, we extracted 25M sentence pairs from OPUS (Tiedemann, 2012), covering a total of 15 language pairs. To guarantee high-quality parallel data we used the Bicleaner tool (Ramírez-Sánchez et al., 2020) with a threshold of 0.8. Then, using pre-trained MT models available in Hugging Face Transformers, we created 2 different translations for each source: one using a bilingual model (in theory a high-quality translation) and another using pivoting (which can be thought of as lower quality). Finally, we scored all the data using our teacher ensemble.⁵

⁵Experiments were performed on an NVIDIA GeForce GTX 1080 TI GPU with a constant batch size of 16;
the resulting time is the average of 5 runs.

The resulting corpus contains 45M tuples of (source, translation, reference, score). The resulting model, which we name DISTIL-COMET, scores 1000 sentences in 14.72 seconds, a 53% speed improvement over the original model.⁵

6 Correlation with Human Judgements

In this section, we show results for {PRUNE and DISTIL}-COMET in terms of correlations with MQM annotations from the WMT 2021 Metrics task for two different domains: News and TED talks. Since these annotations only cover high-resource language pairs (English→German, English→Russian, Chinese→English), we also evaluate the models on low-resource language pairs using DA Relative Ranks from WMT 2021, namely Hindi↔Bengali, Zulu↔Xhosa, English→Hausa, English→Icelandic, and English→Japanese. For a detailed comparison, we also present results for CHRF (Popović, 2015) and BLEU (Papineni et al., 2002), two computationally efficient lexical metrics, and for other neural metrics such as PRISM⁶ (Thompson and Post, 2020), BLEURT (Sellam et al., 2020) and BERTSCORE (Zhang et al., 2020).

Table 1: Kendall's tau correlation on high-resource language pairs, using the MQM annotations for the News and TED talks domains collected for the WMT 2021 Metrics Task.

Metric | #Params | zh-en News | zh-en TED | en-de News | en-de TED | en-ru News | en-ru TED | avg.
BLEU | - | 0.166 | 0.056 | 0.082 | 0.093 | 0.115 | 0.067 | 0.097
CHRF | - | 0.171 | 0.081 | 0.101 | 0.134 | 0.182 | 0.255 | 0.154
BERTSCORE | 179M | 0.230 | 0.131 | 0.154 | 0.184 | 0.185 | 0.275 | 0.193
PRISM | 745M | 0.265 | 0.139 | 0.182 | 0.264 | 0.219 | 0.292 | 0.229
BLEURT | 579M | 0.345 | 0.166 | 0.253 | 0.332 | 0.296 | 0.347 | 0.290
COMET | 582M | 0.336 | 0.159 | 0.227 | 0.290 | 0.284 | 0.329 | 0.271
PRUNE-COMET | 460M | 0.333 | 0.157 | 0.219 | 0.293 | 0.274 | 0.319 | 0.266
DISTIL-COMET | 119M | 0.321 | 0.161 | 0.202 | 0.274 | 0.263 | 0.326 | 0.258

Table 2: Kendall's tau-like correlations on low-resource language pairs using the DARR data from the WMT 2021 Metrics task.

Metric | #Params | zu-xh | xh-zu | bn-hi | hi-bn | en-ja | en-ha | en-is | avg.
BLEU | - | 0.381 | 0.1887 | 0.070 | 0.246 | 0.315 | 0.124 | 0.278 | 0.229
CHRF | - | 0.530 | 0.301 | 0.071 | 0.327 | 0.371 | 0.186 | 0.373 | 0.308
BERTSCORE | 179M | 0.488 | 0.267 | 0.074 | 0.365 | 0.413 | 0.161 | 0.354 | 0.303
BLEURT | 579M | 0.563 | 0.362 | 0.179 | 0.498 | 0.483 | 0.186 | 0.469 | 0.391
COMET | 582M | 0.550 | 0.285 | 0.156 | 0.526 | 0.521 | 0.234 | 0.474 | 0.392
PRUNE-COMET | 460M | 0.541 | 0.264 | 0.163 | 0.519 | 0.513 | 0.197 | 0.439 | 0.377
DISTIL-COMET | 119M | 0.488 | 0.254 | 0.135 | 0.498 | 0.471 | 0.145 | 0.419 | 0.344

From Table 1, we can observe that PRUNE-COMET shows minimal performance drops compared with vanilla COMET while having only 80% of its parameters. DISTIL-COMET is on average 0.013 Kendall's tau below vanilla COMET for high-resource languages, which is impressive for a model that has only 20% of COMET's parameters. For low-resource languages, we observe bigger performance differences between COMET, PRUNE-COMET, and DISTIL-COMET, which confirms the finding of (Pu et al., 2021) that smaller MT evaluation models are limited in their ability to generalize to several language pairs. Nonetheless, when comparing with other recently proposed metrics such as PRISM and BERTSCORE, {PRUNE and DISTIL}-COMET have higher correlations with human judgements for both high- and low-resource language pairs.
The only exception is BLEURT, which shows stronger correlations than COMET on high-resource language pairs and competitive performance on low-resource ones.⁷

⁶PRISM does not support the low-resource language pairs used in our experiments; thus we only report PRISM correlations with the MQM data.
⁷For a more detailed comparison between COMET and BLEURT we refer the reader to the WMT 2021 Metrics shared task results paper (Freitag et al., 2021b), where both metrics ended up statistically tied for most language pairs and domains.

7 Use Case: Minimum Bayes Risk Decoding

In minimum Bayes risk (MBR) decoding, a machine translation evaluation metric can be used as the utility function for comparing translation hypotheses. This kind of approach, also known as "consensus decoding", derives from the idea that the top-ranked translation is the one with the highest average score when compared to all other hypotheses. This process requires that each hypothesis be compared to every other hypothesis in the candidate list. Having faster neural metrics can therefore directly impact both research on and the computational cost of MBR decoding with such metrics.

Figure 7: Runtime for performing MBR with a different number of samples using one NVIDIA GeForce GTX 1080 TI GPU.

Using COMET models with distillation or pruning can have a considerable effect on the performance of MBR decoding that uses such models as the utility function. Figure 7 shows that DISTIL-COMET is always substantially faster than the original COMET model, especially for larger candidate list sizes such as 200 candidates. Likewise, PRUNE-COMET is faster than the original model, but its runtime is still considerably higher than that of DISTIL-COMET.

Regarding the two COMET variants, there is a clear trade-off that needs to be taken into consideration, as evidenced by the results in Section 6: while DISTIL-COMET is faster, PRUNE-COMET is more accurate, leaving the choice of which model to use up to whichever aspect matters most for the application. In the case of MBR decoding, this might depend on the hardware available for performing the computations.

8 Conclusion and Future Work

In this paper we presented two simple optimizations that lead to significant performance gains for neural metrics such as COMET, and two approaches to reduce their number of parameters. Together these techniques achieve impressive gains in speed and memory at a very small cost in accuracy.

To showcase the effectiveness of our methods, we presented DISTIL-COMET and PRUNE-COMET. These models were obtained from COMET using knowledge distillation and pruning, respectively. To test the proposed models, we used the data from the WMT 2021 Metrics task, which covers low-resource as well as high-resource languages. Overall, the results of PRUNE-COMET are stable across the board, with only a small degradation compared to the original metric.
Knowledge distillation leads to much higher\ncompression rates but seems to confirm previous\nfindings (Pu et al., 2021) which suggest the lack of\nmodel capacity when it comes to the multilingual\ngeneralization for low resource languages.\nA primary avenue for future work is to study\nhow decreasing the model size can further impact\non robustness of the metric, inspired by recent\nstudies which identified weaknesses of C OMET\nmetrics when dealing with numbers and named en-\ntities (Freitag et al., 2021b; Amrhein and Sennrich,\n2022). Also, in this work we explored knowl-\nedge distillation directly from the teacher output\nbut an interesting avenue for improving the qual-\nity of the student model is to explore alternative\ndistillation approaches that learn directly from in-\nternal representations of the teacher model such as\nself-attention distillation (Wang et al., 2020).\nAcknowledgments\nWe would like to thank Jo ˜ao Alves and Craig Stew-\nart and the anonymous reviewers for useful feed-\nback. This work was supported by the P2020 Pro-\ngram through project MAIA (contract No 045909)\nand by the European Union’s Horizon 2020 re-\nsearch and innovation program (QUARTZ grant\nagreement No 951847).\nReferences\nAmrhein, Chantal and Rico Sennrich. 2022. Iden-\ntifying Weaknesses in Machine Translation Metrics\nThrough Minimum Bayes Risk Decoding: A Case\nStudy for COMET. CoRR , abs/2010.11125.\nBhattacharyya, Sumanta, Amirmohammad Rooshenas,\nSubhajit Naskar, Simeng Sun, Mohit Iyyer, and An-\ndrew McCallum. 2021. Energy-based reranking:\nImproving neural machine translation using energy-\nbased models. In Proceedings of the 59th Annual\nMeeting of the Association for Computational Lin-\nguistics and the 11th International Joint Conference\non Natural Language Processing (Volume 1: Long\nPapers) , pages 4528–4537, Online, August. Associ-\nation for Computational Linguistics.\nBrown, Tom B., Benjamin Mann, Nick Ryder, Melanie\nSubbiah, Jared Kaplan, Prafulla Dhariwal, Arvind\nNeelakantan, Pranav Shyam, Girish Sastry, Amanda\nAskell, Sandhini Agarwal, Ariel Herbert-V oss,\nGretchen Krueger, Tom Henighan, Rewon Child,\nAditya Ramesh, Daniel M. Ziegler, Jeffrey Wu,\nClemens Winter, Christopher Hesse, Mark Chen,\nEric Sigler, Mateusz Litwin, Scott Gray, Benjamin\nChess, Jack Clark, Christopher Berner, Sam Mc-\nCandlish, Alec Radford, Ilya Sutskever, and Dario\nAmodei. 2020. Language models are few-shot\nlearners. CoRR , abs/2005.14165.\nBucilua, Cristian, Rich Caruana, and Alexandru\nNiculescu-Mizil. 2006. Model compression. In\nProceedings of the 12th ACM SIGKDD International\nConference on Knowledge Discovery and Data Min-\ning, KDD ’06, page 535–541, New York, NY , USA.\nAssociation for Computing Machinery.\nConneau, Alexis, Kartikay Khandelwal, Naman Goyal,\nVishrav Chaudhary, Guillaume Wenzek, Francisco\nGuzm ´an, Edouard Grave, Myle Ott, Luke Zettle-\nmoyer, and Veselin Stoyanov. 2020. Unsupervised\ncross-lingual representation learning at scale. In\nProceedings of the 58th Annual Meeting of the Asso-\nciation for Computational Linguistics , pages 8440–\n8451, Online, July. Association for Computational\nLinguistics.\nDevlin, Jacob, Ming-Wei Chang, Kenton Lee, and\nKristina Toutanova. 2019. BERT: Pre-training of\ndeep bidirectional transformers for language under-\nstanding. 
In Proceedings of the 2019 Conference of\nthe North American Chapter of the Association for\nComputational Linguistics: Human Language Tech-\nnologies, Volume 1 (Long and Short Papers) , pages\n4171–4186, Minneapolis, Minnesota, June. Associa-\ntion for Computational Linguistics.\nEikema, Bryan and Wilker Aziz. 2021. Sampling-\nbased minimum bayes risk decoding for neural ma-\nchine translation. CoRR , abs/2108.04718.\nFan, Angela, Shruti Bhosale, Holger Schwenk, Zhiyi\nMa, Ahmed El-Kishky, Siddharth Goyal, Man-\ndeep Baines, Onur Celebi, Guillaume Wenzek,Vishrav Chaudhary, Naman Goyal, Tom Birch, Vi-\ntaliy Liptchinsky, Sergey Edunov, Edouard Grave,\nMichael Auli, and Armand Joulin. 2020. Be-\nyond english-centric multilingual machine transla-\ntion. CoRR , abs/2010.11125.\nFernandes, Patrick, Antonio Farinhas, Ricardo Rei,\nJos´e G. C. de Souza, Perez Ogayo, Neubig Graham,\nand Andr ´e F. T. Martins. 2022. Quality-Aware De-\ncoding for Neural Machine Translation. In Accepted\nat the 2022 Annual Conference of the North Amer-\nican Chapter of the Association for Computational\nLinguistics , Seattle, Washington, july. Association\nfor Computational Linguistics.\nFreitag, Markus, George Foster, David Grangier, Viresh\nRatnakar, Qijun Tan, and Wolfgang Macherey.\n2021a. Experts, Errors, and Context: A Large-Scale\nStudy of Human Evaluation for Machine Transla-\ntion. Transactions of the Association for Computa-\ntional Linguistics , 9:1460–1474, 12.\nFreitag, Markus, Ricardo Rei, Nitika Mathur, Chi-kiu\nLo, Craig Stewart, George Foster, Alon Lavie, and\nOndˇrej Bojar. 2021b. Results of the WMT21 met-\nrics shared task: Evaluating metrics with expert-\nbased human evaluations on TED and news domain.\nInProceedings of the Sixth Conference on Machine\nTranslation , pages 733–774, Online, November. As-\nsociation for Computational Linguistics.\nGlushkova, Taisiya, Chrysoula Zerva, Ricardo Rei, and\nAndr ´e F. T. Martins. 2021. Uncertainty-aware ma-\nchine translation evaluation. In Findings of the As-\nsociation for Computational Linguistics: EMNLP\n2021 , pages 3920–3938, Punta Cana, Dominican Re-\npublic, November. Association for Computational\nLinguistics.\nGoyal, Naman, Jingfei Du, Myle Ott, Giri Ananthara-\nman, and Alexis Conneau. 2021. Larger-scale trans-\nformers for multilingual masked language modeling.\nInProceedings of the 6th Workshop on Representa-\ntion Learning for NLP (RepL4NLP-2021) , pages 29–\n33, Online, August. Association for Computational\nLinguistics.\nGraham, Yvette, Timothy Baldwin, Alistair Moffat, and\nJustin Zobel. 2013. Continuous measurement scales\nin human evaluation of machine translation. In Pro-\nceedings of the 7th Linguistic Annotation Workshop\nand Interoperability with Discourse , pages 33–41,\nSofia, Bulgaria, August. Association for Computa-\ntional Linguistics.\nGupta, Manish and Puneet Agrawal. 2022. Compres-\nsion of deep learning models for text: A survey.\nACM Trans. Knowl. Discov. Data , 16(4), jan.\nHinton, Geoffrey, Oriol Vinyals, and Jeffrey Dean.\n2015. Distilling the knowledge in a neural network.\nInNIPS Deep Learning and Representation Learn-\ning Workshop .\nKendall, M. G. 1938. A new measure of rank correla-\ntion. Biometrika , 30(1/2):81–93.\nKocmi, Tom, Christian Federmann, Roman Grund-\nkiewicz, Marcin Junczys-Dowmunt, Hitokazu Mat-\nsushita, and Arul Menezes. 2021. To ship or not to\nship: An extensive evaluation of automatic metrics\nfor machine translation. 
In Proceedings of the Sixth\nConference on Machine Translation , pages 478–494,\nOnline, November. Association for Computational\nLinguistics.\nLommel, Arle, Aljoscha Burchardt, and Hans Uszko-\nreit. 2014. Multidimensional quality metrics\n(MQM): A framework for declaring and describing\ntranslation quality metrics. Tradum `atica: tecnolo-\ngies de la traducci ´o, 0:455–463, 12.\nMa, Qingsong, Johnny Wei, Ond ˇrej Bojar, and Yvette\nGraham. 2019. Results of the WMT19 metrics\nshared task: Segment-level and strong MT sys-\ntems pose big challenges. In Proceedings of the\nFourth Conference on Machine Translation (Volume\n2: Shared Task Papers, Day 1) , pages 62–90, Flo-\nrence, Italy, August. Association for Computational\nLinguistics.\nMathur, Nitika, Timothy Baldwin, and Trevor Cohn.\n2020a. Tangled up in BLEU: Reevaluating the eval-\nuation of automatic machine translation evaluation\nmetrics. In Proceedings of the 58th Annual Meet-\ning of the Association for Computational Linguis-\ntics, pages 4984–4997, Online, July. Association for\nComputational Linguistics.\nMathur, Nitika, Johnny Wei, Markus Freitag, Qing-\nsong Ma, and Ond ˇrej Bojar. 2020b. Results of\nthe WMT20 metrics shared task. In Proceedings of\nthe Fifth Conference on Machine Translation , pages\n688–725, Online, November. Association for Com-\nputational Linguistics.\nM¨uller, Mathias and Rico Sennrich. 2021. Understand-\ning the properties of minimum bayes risk decoding\nin neural machine translation. In Proceedings of the\nJoint Conference of the 59th Annual Meeting of the\nAssociation for Computational Linguistics and the\n11th International Joint Conference on Natural Lan-\nguage Processing (ACL-IJCNLP 2021) .\nNg, Nathan, Kyra Yee, Alexei Baevski, Myle Ott,\nMichael Auli, and Sergey Edunov. 2019. Facebook\nFAIR’s WMT19 news translation task submission.\nInProceedings of the Fourth Conference on Machine\nTranslation (Volume 2: Shared Task Papers, Day 1) ,\npages 314–319, Florence, Italy, August. Association\nfor Computational Linguistics.\nPapineni, Kishore, Salim Roukos, Todd Ward, and Wei-\nJing Zhu. 2002. Bleu: a method for automatic eval-\nuation of machine translation. In Proceedings of the\n40th Annual Meeting of the Association for Com-\nputational Linguistics , pages 311–318, Philadelphia,\nPennsylvania, USA, July. Association for Computa-\ntional Linguistics.\nPeters, Matthew E., Mark Neumann, Mohit Iyyer, Matt\nGardner, Christopher Clark, Kenton Lee, and LukeZettlemoyer. 2018. Deep contextualized word rep-\nresentations. In Proceedings of the 2018 Confer-\nence of the North American Chapter of the Associ-\nation for Computational Linguistics: Human Lan-\nguage Technologies, Volume 1 (Long Papers) , pages\n2227–2237, New Orleans, Louisiana, June. Associa-\ntion for Computational Linguistics.\nPopovi ´c, Maja. 2015. chrF: character n-gram F-score\nfor automatic MT evaluation. In Proceedings of the\nTenth Workshop on Statistical Machine Translation ,\npages 392–395, Lisbon, Portugal, September. Asso-\nciation for Computational Linguistics.\nPu, Amy, Hyung Won Chung, Ankur Parikh, Sebastian\nGehrmann, and Thibault Sellam. 2021. Learning\ncompact metrics for MT. In Proceedings of the 2021\nConference on Empirical Methods in Natural Lan-\nguage Processing , pages 751–762, Online and Punta\nCana, Dominican Republic, November. Association\nfor Computational Linguistics.\nRam´ırez-S ´anchez, Gema, Jaume Zaragoza-Bernabeu,\nMarta Ba ˜n´on, and Sergio Ortiz-Rojas. 2020. 
Bi-\nfixer and Bicleaner: two open-source tools to clean\nyour parallel data. In Proceedings of the 22nd An-\nnual Conference of the European Association for\nMachine Translation , pages 291–298, Lisboa, Por-\ntugal, November. European Association for Machine\nTranslation.\nRei, Ricardo, Craig Stewart, Ana C Farinha, and Alon\nLavie. 2020. COMET: A neural framework for MT\nevaluation. In Proceedings of the 2020 Conference\non Empirical Methods in Natural Language Process-\ning (EMNLP) , pages 2685–2702, Online, November.\nAssociation for Computational Linguistics.\nReimers, Nils and Iryna Gurevych. 2019. Sentence-\nBERT: Sentence embeddings using Siamese BERT-\nnetworks. In Proceedings of the 2019 Conference on\nEmpirical Methods in Natural Language Processing\nand the 9th International Joint Conference on Natu-\nral Language Processing (EMNLP-IJCNLP) , pages\n3982–3992, Hong Kong, China, November. Associ-\nation for Computational Linguistics.\nSanh, Victor, Lysandre Debut, Julien Chaumond, and\nThomas Wolf. 2019. Distilbert, a distilled version of\nbert: smaller, faster, cheaper and lighter. In NeurIPS\nEMC ˆ2 Workshop .\nSellam, Thibault, Dipanjan Das, and Ankur Parikh.\n2020. BLEURT: Learning robust metrics for text\ngeneration. In Proceedings of the 58th Annual Meet-\ning of the Association for Computational Linguis-\ntics, pages 7881–7892, Online, July. Association for\nComputational Linguistics.\nSharir, Or, Barak Peleg, and Yoav Shoham. 2020. The\ncost of training NLP models: A concise overview.\nCoRR , abs/2004.08900.\nSun, Zhiqing, Hongkun Yu, Xiaodan Song, Renjie Liu,\nYiming Yang, and Denny Zhou. 2020. Mobile-\nBERT: a compact task-agnostic BERT for resource-\nlimited devices. In Proceedings of the 58th Annual\nMeeting of the Association for Computational Lin-\nguistics , pages 2158–2170, Online, July. Association\nfor Computational Linguistics.\nTenney, Ian, Dipanjan Das, and Ellie Pavlick. 2019.\nBERT rediscovers the classical NLP pipeline. In\nProceedings of the 57th Annual Meeting of the Asso-\nciation for Computational Linguistics , pages 4593–\n4601, Florence, Italy, July. Association for Compu-\ntational Linguistics.\nThompson, Brian and Matt Post. 2020. Automatic ma-\nchine translation evaluation in many languages via\nzero-shot paraphrasing. In Proceedings of the 2020\nConference on Empirical Methods in Natural Lan-\nguage Processing (EMNLP) , pages 90–121, Online,\nNovember. Association for Computational Linguis-\ntics.\nTiedemann, J ¨org. 2012. Parallel data, tools and in-\nterfaces in OPUS. In Proceedings of the Eighth In-\nternational Conference on Language Resources and\nEvaluation (LREC’12) , pages 2214–2218, Istanbul,\nTurkey, May. European Language Resources Asso-\nciation (ELRA).\nWang, Wenhui, Furu Wei, Li Dong, Hangbo Bao, Nan\nYang, and Ming Zhou. 2020. Minilm: Deep\nself-attention distillation for task-agnostic compres-\nsion of pre-trained transformers. In Larochelle, H.,\nM. Ranzato, R. Hadsell, M.F. Balcan, and H. Lin,\neditors, Advances in Neural Information Processing\nSystems , volume 33, pages 5776–5788. Curran As-\nsociates, Inc.\nZhang, Tianyi, Varsha Kishore, Felix Wu, Kilian Q.\nWeinberger, and Yoav Artzi. 2020. Bertscore: Eval-\nuating text generation with bert. In International\nConference on Learning Representations .\nZhu, Michael and Suyog Gupta. 2018. To prune, or\nnot to prune: Exploring the efficacy of pruning for\nmodel compression. 
In 6th International Conference\non Learning Representations, ICLR 2018, Vancou-\nver, BC, Canada, April 30 - May 3, 2018, Workshop\nTrack Proceedings . OpenReview.net.",
"main_paper_content": null
}
|
{
"decision": "Unknown",
"reviews": []
}
| 0 | 0 |
[] |
[] |
{
"id": "FL1-63L08RkZ",
"year": null,
"venue": "EAMT 2009",
"pdf_link": "https://aclanthology.org/2009.eamt-1.5.pdf",
"forum_link": "https://openreview.net/forum?id=FL1-63L08RkZ",
"arxiv_id": null,
"doi": null
}
|
{
"title": "Estimating the Sentence-Level Quality of Machine Translation Systems",
"authors": [
"Lucia Specia",
"Marco Turchi",
"Nicola Cancedda",
"Nello Cristianini",
"Marc Dymetman"
],
"abstract": null,
"keywords": [],
"raw_extracted_content": "Proceedings of the 13th Annual Conference of the EAMT , pages 28–35,\nBarcelona, May 2009\nEstimatingthe Sentence-Level Quality ofMachine Translat ionSystems\nLucia Specia*, NicolaCancedda\nand MarcDymetman\nXerox Research CentreEurope\nMeylan,38240,France\[email protected]\[email protected]\[email protected] Turchi* and NelloCristianini\nDepartmentofEngineeringMathematics\nUniversityofBristol\nBristol, BS8 1TR, UK\[email protected]\[email protected]\nAbstract\nWe investigate the problem of predicting\nthe quality of sentences produced by ma-\nchine translation systems when reference\ntranslations are not available. The prob-\nlem is addressed as a regression task and\na method that takes into account the con-\ntribution of different features is proposed.\nWeexperiment with this method for trans-\nlations produced by various MT systems\nand different language pairs, annotated\nwith quality scores both automatically and\nmanually. Results show that our method\nallows obtaining good estimates and that\nidentifying a reduced set of relevant fea-\ntures plays an important role. The experi-\nmentsalsohighlight anumberofoutstand-\ningfeaturesthatwereconsistently selected\nas the most relevant and could be used\nin different ways to improve MT perfor-\nmance or to enhance MTevaluation.\n1 Introduction\nThe notion of “quality” in Machine Translation\n(MT) can have different interpretations depend-\ning on the intended use of the translations (e.g.,\nfluency and adequacy, post-editing time, etc.).\nNonetheless, the assessment of the quality of a\ntranslation is in general done by the user, who\nneeds to read the translation and the source text\nto be able to judge whether it is a good transla-\ntion or not. This is a very time consuming task\nand may not even be possible, if the user does not\nhaveknowledgeaboutthesourcelanguage. There-\nfore, automatically assessing the quality of trans-\n*L.Specia and M. Turchicontributed equallytothis work.\nc/circlecopyrt2009 European Association for Machine Translation.lations produced by MT systems is a crucial prob-\nlem,either tofilteroutthelowqualityones, e.g. to\navoid professional translators spending time read-\ning / post-editing bad translations, or to present\nthem in such a way as to make end-users aware\nof the quality. This task, referred to as Confidence\nEstimation(CE),isconcernedabout predictingthe\nqualityofasystem’soutputforagiveninput,with-\nout anyinformation about the expected output.\nCE for MT has been viewed as a binary classi-\nfication problem (Blatz et al., 2003) to distinguish\nbetween “good” and “bad” translations. However,\nitmaybedifficulttofindaclearboundarybetween\n“good”and“bad”translationsandthisinformation\nmay not be useful in certain applications (e.g, the\ntimenecessary to post-edit translations).\nWe distinguish the task of CE from that of MT\nevaluation by the need, in the latter, of reference\ntranslations. The general goal of MT evaluation\nis to compare a machine translation to reference\ntranslation(s) andprovideaqualityscorewhichre-\nflectshowclosethetwotranslationsare. InCE,the\ntask consists in estimating the quality of the trans-\nlation given only information about the input and\noutput texts and the translation process.\nIn this paper we consider CE for MTas a wider\nproblem,inwhichacontinuous qualityscoreises-\ntimated for each sentence. This could be seen as a\nproxy for MT evaluation, but without any form or\nreference information. 
This problem is addressed\nas a regression task, where we train algorithms\nto predict different types of sentence-level scores.\nThe contribution of a large number of features is\nepxloited by using afeature selection strategy. We\nalso distinguish between features that depend on\nthe translation process of a given MT system and\nthose that can be extracted given only the input\nsentences and corresponding output translations,\n28\nand are therefore independent onMTsystems.\nIn the remaining of this paper we first discuss\nthe previous work on CE for MT (Section 2), to\nthen describe our experimental setting (Section 3)\nandmethod(Section4)andpresentanddiscussthe\nresults obtained (Sections 5and 6).\n2 Related work\nEarly work on CE for MT aimed at estimating\nthe quality at the word level (Gandrabur and Fos-\nter, 2003; Ueffing and Ney, 2005; Kadri and Nie,\n2006). Sentence-level CE appears to be a more\nnatural set-up for practical applications of MT.\nOneshould consider asreal-world scenario for CE\nan MT system in use, which would provide to the\nuser, together with each sentence translation, an\nestimate of its quality. If this estimate is in the\nform a numerical score, it sould also be viewed as\na proxy to some automatic or manual metric, like\nNIST (Doddington, 2002) or 1-5 adequacy. Other\nestimates include thetimethat wouldbenecessary\nto post-edit such translation, or simply a “good” /\n“bad” indicator.\nDifferently from MT evaluation, in CE refer-\nence translations are not available to compute the\nquality estimates. Therefore, CE approaches can-\nnot be directly compared to the several recently\nproposed metrics for sentence-level MT evalua-\ntion that also use machine learning algorithms and\nsometimes similar features to those used in CE.\nFor example, (Kulesza and Shieber, 2004) use\nSupport VectorMachines(SVM)withn-grampre-\ncision and other reference-based features to pre-\ndict if a sentence is produced by a human trans-\nlator (presumably good) or by a MT system (pre-\nsumablybad)( human-likeness classification ). (Al-\nbrecht and Hwa, 2007a) rely on regression-based\nalgorithms and features, like string and syntax\nmatching of the translation over the correspond-\ning references, tomeasure thequality of sentences\nas a continuous score. In (Albrecht and Hwa,\n2007b),pseudo-references (produced byother MT\nsystems) areusedinsteadofhumanreferences, but\nthisscenariowithmultipleMTsystemsisdifferent\nfrom that of CE.\nThe most comprehensive study on CE at the\nsentence level to date is that of (Blatz et al.,\n2004). Multi-layer perceptrons and Naive Bayes\nare trained on 91 features extracted for transla-\ntions tagged according to NIST and word error\nrate. Scores are thresholded to label the 5th or30th percentile of the examples as “correct” and\nthe remainder as “incorrect”. Regression is also\nperformed, but the estimated scores are mapped\ninto the same classes to make results binary. The\ncontribution of features is investigated by produc-\ningclassifiers for each feature individually and for\ncombinations of all features except one at a time.\nIn both cases, none of the features is found to be\nsignificantly more relevant than the others. This\nseems to point out that many of the features are\nredundant, but this aspect isnot investigated.\n(Quirk, 2004) uses linear regression with fea-\ntures similar tothose usedin(Blatzet al.,2004) to\nestimate sentence translation quality considering\nalso a small set of translations manually labeled\nascorrect / incorrect. 
Models trained onthis small\ndataset (350 sentences) outperform those trained\nonalargersetofautomaticallylabeleddata. Given\nthe small amount of manually annotated data and\nthe fact that translations come from a single MT\nsystem and language-pair, it is not clear how re-\nsults can be generalized. The contribution of dif-\nferent features isnot investigated.\n(Gamonetal.,2005) trainanSVMclassifierus-\ning a number of linguistic features (grammar pro-\nductions, semantic relationships, etc.) extracted\nfrom machine and human translations to distin-\nguish between human and machine translations\n(human-likeness classification ). The predictions\nof SVM, when combined to a 4-gram language\nmodel score, only slightly increase the correlation\nwith human judgements and such correlation is\nstill lower than that achieved by BLEU (Papineni\net al., 2002). Moreover, as shown in (Albrecht\nand Hwa, 2007a), high human-likeness does not\nnecessarily imply good MT quality. Besides esti-\nmatingthequalityof machinetranslations directly,\nwe use a larger set of features, which are meant\nto cover many more aspects of the translations.\nThesefeaturesareallresource-independent, allow-\ning to generalize this method across translations\nproduced by several MT systems and for different\nlanguage-pairs.\nAlthough our goal is very similar to that of\n(Blatz et al., 2004; Quirk, 2004), it is not possi-\nbletocompareourresultstotheseprevious works,\nsince we estimate continuous scores, instead of\nbinary ones. We consider the following aspects\nas main improvements wrt such previous works:\n(a) evidence that is is possible to accurately esti-\nmate continuous scores, besides binary indicators,29\nwhichcanbemoreappropriate forcertain applica-\ntions (e.g. post-edition time); (b) the use of learn-\ning techniques that are appropriate for the type of\nfeatures used in CE (Partial Least Squares, which\ncan deal efficiently with multicollinearity of input\nfeatures); (c)theadditionofnewfeaturesthatwere\nfoundtobeveryrelevant;(d)theproposalofanex-\nplicit feature selection method to identify relevant\nfeatures in a systematic way; and (e) the exploita-\ntionofmultipledatasetsoftranslationsfromdiffer-\nent MTsystems and language pairs, with different\ntypes ofhuman andautomatic quality annotations,\nthrough the use of resource-independent features\nand the definition of system-independent features.\n3 Experimental setting\n3.1 Features\nWe extract all the features identified in previous\nworkforsentence-level CE(see(Blatzetal.,2003)\nfor alist), except those depending onlinguistic re-\nsources like parsers or WordNet. Wealso add new\nfeatures to cover aspects that have not been di-\nrectly addressed in previous work, including the\nmismatch of many superficial constructions be-\ntween the input and output sentences (percentages\nof punctuation symbols, numbers, etc.), similar-\nitybetween thesource sentence andsentences ina\nmonolingual corpus, word alignment between in-\nput and output sentences, length of phrases, etc.\nThisresults in atotal of 84features.\nMany of these features depend on some aspect\nof the translation process, and therefore are MT\nsystem-dependent andcould not beextracted from\nall translation data used in this paper. 
We thus di-\nvide thefeatures intwosubsets: (a) black-box fea-\ntures, which can be extracted given only the in-\nput sentence and the translation produced by the\nMT system, i.e., the source and target sentences,\nand possibly monolingual or parallel corpora, and\n(b)glass-box features , which may also depend on\nsome aspect of the translation process.\nThe black-box group includes simple features\nlikesourceandtargetsentencelengthsandtheirra-\ntios, source and target sentence n-gram frequency\nstatistics in the corpus, etc. This constitutes an\ninteresting scenario and can be particularly use-\nful when it is not possible to have access to in-\nternal features of the MT systems (in commercial\nsystems, e.g.). It also provides a way to perform\nthetaskof CEacross different MTsystems, which\nmay use different frameworks. An interesting re-searchquestioniswhether itispossibletoproduce\naccurate CEmodelstaking intoaccount onlythese\nvery basic features. To our knowledge, this issue\nhas not been investigated before.\nThe glass-box group includes internal features\nof the MT system, like the SMT model score,\nphrase and word probabilities, and alternative\ntranslations per source word. They also include\nfeatures based on the n-best list of translation can-\ndidates, some of which apply globally to the set\nof all candidates for a given source sentence (e.g.\ndegree to which phrases are translated in the same\nway throughout the n-best list), and some to spe-\ncific candidates (e.g. ratio between scores of the\ncandidate and top candidate). Weextract atotal of\n54glass-box features.\n3.2 Data\nWe use two types of translation data: (a) transla-\ntions automatically annotated with NIST scores,\nand(b)translations produced bydifferent MTsys-\ntemsandformultiplelanguage-pairs, manuallyan-\nnotated with different types of scores.\nTheautomatically annotated dataset, henceforth\nNISTdataset ,isproducedfromtheFrench-English\nEuroparlparallelcorpus,asprovidedbytheWMT-\n2008sharedtranslationtask(Callison-Burch etal.,\n2008). Wetranslatethethreedevelopment-testsets\navailable ( ∼6k sentences) using a phrase-based\nMT system [omitted for blind review]. These\ntranslations and their 1,000 n-best lists are scored\naccording to sentence-level NIST and the 84 fea-\ntures are extracted from them.\nThe dataset is first sampled into 1,000 subsam-\nples, where each subsample contains all feature\nvectors for a certain position in all the n-best lists\nand is randomly split in training (50 %), validation\n(30%)andtest(20 %) usingauniform distribution.\nThe first type of manually annotated datasets\n(WMT datasets ) is derived from several corpora\nof the WMT-2006 translation shared task (Koehn\nand Monz, 2006). These are subsets of sentences\nfrom the test data used in the shared task, an-\nnotated by humans according to adequacy, with\nscores from 1 (worst) to 5 (best). Each corpus\ncontains ∼100-400 sentences andrefers toagiven\nlanguage pair and MT system. Since this number\nis very small, we put together all sets of transla-\ntions from a given MT system. We select four\namong the resulting datasets: the three phrase-\nbased SMT systems ( S1, S2, S3 ) with the high-30\nest numbers of examples and the only rule-based\nsystem (RB). Each new dataset contains ∼1,300-\n2,000 sentences, and 4-6 language-pairs. The fea-\nture vectors of these datasets contain only black-\nbox features. To account for mixing language-\npairs, we add the source and target language in-\ndicators as features. 
The task becomes predicting\nthe quality of a given MT system which translates\nbetween different language pairs.\nThe manually annotated datasets of the second\ntype (1-4 datasets ) contain 4K sentences of the\nEuroparl domain (English-Spanish), translated by\nfour SMTsystems developed by different partners\nin the project P[omitted for blind review]: P-ES-\n1, P-ES-2, P-ES-3 and P-ES-4 . The sentences are\nannotated by professional translators according to\n1-4 quality scores, which are commonly used by\nthemtoindicate thequality oftranslations withre-\nspecttotheneedofpost-edition: 1=requirescom-\nplete retranslation, ...,4= fit for purpose.\nDatasets of the final type( post-edition datasets )\ncontain 3K sentences of the automotive industry\ndomain (English-Russian), translated by three MT\nsystems from the same project P:P-ER-1, P-ER-2\nand P-ER-3 . The sentences are annotated accord-\ningtopost-edition time,thatis,givenasourcesen-\ntence in English and its translation into Russian, a\nprofessional translator post-edited such translation\nto make it into a good quality sentence, while the\ntimewas recorded.\nBlack-box features are extracted from all\ndatasets in the last two groups ( 1-4andpost-\nedition). Additionally, glass-box features are ex-\ntractedfromoneofthedatasets( P-ES-1),sincewe\nhad access to the SMT system in this case. We\ncallthisP-ES-1gb . Inthepost-edition datasets, the\npost-edition time is first normalized by the source\nsentence length, sothat thescore referstothetime\nnecessary per source word.\nFor each manually annotated dataset, the fea-\nture vectors are randomly subsampled 100 times\nin training ( 50%), validation ( 30%) and test ( 20%\nusing auniform distribution.\nIn both automatically and manually annotated\ndatasets, we represent each subsample as a matrix\nof variable predictors ( X) times variable response\n(Y)andnormalizefeaturevaluesusingthe zscore.\nDatasets covering different language pairs and\nMT systems and particularly data annotated ac-\ncording to post-edition time for CE have not been\ninvestigated before.3.3 Learningalgorithm\nWe estimate the quality of the translations by pre-\ndicting the sentence-level NIST, 1-5 / 1-4 scores\nor post-edition time using Partial Least Squares\n(PLS) (Wold et al., 1984). Given a matrix X(in-\nput variables) and a vector Y(response variable),\nthe goal of PLS regression is to predict Yfrom\nXand to describe their common structure. In or-\nder to do that, PLS projects the original data onto\na different space of latent variables (or “compo-\nnents”) and is also able to provide information on\nthe importance of individual features in X. PLS\nis particularly indicated when the features in X\nare strongly correlated (multicollinearity). This is\nthe case in our datasets. For example, we con-\nsider each of the SMT system features individu-\nally, as well as the sum of the all these features\n(theactual SMTmodel score). Withsuchdatasets,\nstandard regression techniques usually fail (Rosi-\npal andTrejo,2001). PLShasbeenwidelyused to\nextractqualitativeinformationfromdifferenttypes\nofdata(Frenichetal.,1995),buttoourknowledge,\nithasnotbeenusedinNLPapplications. Morefor-\nmally, PLS can be defined as an ordinary multiple\nregression problem, i.e.,\nY=XBw+F\nwhere Bwis the regression matrix, Fis the resid-\nual matrix, but Bwis computed directly using an\noptimal number of components. For more details\nsee (Jong, 1993). 
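For concreteness, such a PLS step could be sketched as follows (purely illustrative, not the implementation used in the experiments reported here; it assumes scikit-learn's PLSRegression as a stand-in PLS solver):

import numpy as np
from sklearn.cross_decomposition import PLSRegression

def fit_pls_and_rank_features(X_train, y_train, n_components=40):
    # X_train: (n_sentences, n_features) z-scored feature matrix.
    # y_train: (n_sentences,) sentence-level quality scores (e.g. NIST).
    pls = PLSRegression(n_components=n_components)
    pls.fit(X_train, y_train)
    # The regression coefficients play the role of Bw: with standardized X,
    # a large absolute coefficient indicates an important feature.
    bw = np.asarray(pls.coef_).ravel()
    ranked_features = np.argsort(-np.abs(bw))  # most relevant feature first
    return pls, ranked_features

# Hypothetical usage: quality predictions for held-out sentences.
# pls, order = fit_pls_and_rank_features(X_train, y_train)
# y_pred = pls.predict(X_test).ravel()

The resulting ordering over features corresponds to the first step of the feature selection method described below.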
When X is standardized, an element of Bw with a large absolute value indicates an important X-variable.
It is well known that feature selection can be helpful to many tasks in NLP, and that even learning methods that implicitly perform some form of feature selection, such as SVMs, can benefit from the use of explicit feature selection techniques. We take advantage of a property of PLS, which is the ordering of the features of X in Bw according to their relevance, to define a method to select subsets of discriminative features (Section 4).
To evaluate the performance of the approach, we compute the average error in the estimation of NIST or manual scores by means of the Root Mean Squared Prediction Error (RMSPE) metric: sqrt( (1/N) Σ_{j=1}^{N} (y_j − ŷ_j)² ), where N is the number of points, ŷ is the prediction obtained by the regressor and y is the actual value of the test case. RMSPE quantifies the amount by which the estimator differs from the expected score: the lower the value, the better the performance.
4 Method
Our method to perform regression supported by an embedded feature selection procedure consists of the following steps: (1) sort all features according to their relevance in the training data; (2) select only the top features according to their relevance in the validation set; (3) apply the selected features to the test data and evaluate the performance. In more detail:
1. Given each pre-defined number of components, for each i-th subsample of the training data, we run PLS to compute the Bw(i) matrix, generating a list Lb(i) of features ranked in decreasing order of importance. After generating Lb for all subsamples, we obtain a matrix where each row i contains an Lb(i), e.g.:
66 75 6 ... 10
44 56 3 ... 10
...
66 56 3 ... 10
A list L containing the global feature ordering for all subsamples is obtained by selecting the feature appearing most frequently in each column (i.e., taking the mode, without repeating features). In the case shown, L = {66, 56, 3, ..., 10}.
2. Given the list L produced for a certain number of components, for each i-th subsample of the validation data, we train the regression algorithms on 80% of the data, adding features from L one by one. We test the models on the remaining validation data and plot the learning curves with the mean error scores over all the subsamples. By analyzing the learning curves, we select the first n features that maximize the performance of the models.
3. Given the selected n features and the number of components that optimized the performance in the validation data, for each i-th subsample of the test data, we train (80%) and test (20%) the performance of the regressor using these features, and compute their corresponding metrics over all subsamples.
5 Results
5.1 NIST dataset
Figure 1 illustrates the performance for different numbers of PLS components used to generate ordered lists of features.
Figure 1: Performance for lists generated with different numbers of components - NIST dataset
The maximum performance is obtained from the ordered list generated with 40 components. This resulted in 32 features being selected, and an RMSPE in the test set of 1.503 ± 0.045. The RMSPE for all features, without applying the feature selection method, is 1.670 ± 0.669. Therefore, the models produced for the selected subset of features perform better than using all features. Moreover, results for the subsets of features are more stable, given the large variance observed in the RMSPE score with all features. 
Toprovidea\nmoreintuitivemeasure, wecansaythatthesystem\ndeviates on average ∼1.5points when predicting\nthe sentence-level NIST score. We believe this is\nan acceptable deviation, given that the scores vary\nfrom0to18.44.\nAlthough the subsets of features selected vary\nfordifferent numbers ofcomponents, someappear\ninall the top lists:\n•average number of alternative translations for\nwords inthe source sentence;\n•ratio of source and target lengths;\n•proportion of aborted nodes in the decoder’s\nsearch graph.\nThe first feature reflects the ambiguity and\ntherefore the difficulty of translating the source\nsentence. Thesecond favorssource andtargetsen-\ntences which aresimilar insize, which isexpected\nfor close language-pairs like English-French. The\nlast gives an idea about the uncertainty in the\nsearch: nodes are aborted if the decoder is certain\nthat they will not yield good translations.\nOther features appear as relevant for most\nchoices in thenumber of components:\n•source sentence length;\n•number of different words in the n-best list\ndivided byaverage sentence length;32\nMT RMSPE RMSPEall features\nRB1.058±0.087 1.171±0.098\nS11.159±0.064 1.197±0.059\nS21.116±0.073 1.190±0.073\nS31.160±0.059 1.201±0.062\nTable 1: RMSPE- WMTdatasets\n•1/3-gram source frequency statistics in the\nwhole corpus or its most frequent quartile;\n•3-gram source language model probability;\n•3-gram target language model probability\nconsidering n-best list ascorpus;\n•phrase probabilities;\n•average size ofphrases inthetarget sentence;\n•proportion of pruned and remaining nodes in\ndecoder’s final search graph.\nThesefeatures ingeneral point out thedifficulty\nof translating the source sentence, the uniformity\nof the candidates in the n-best list, how well the\nsource sentence is covered in the training corpus,\nandhowcommonplacethetargetsentenceis. They\nincludesomeSMTmodelfeatures, butnotably not\nthe actual SMT score. Surprisingly, half of these\nvery discriminative features areblack-box.\n5.2 Manuallyannotated datasets\nResults for the WMT datasets are less straightfor-\nwardtointerpret, sincetheproblem hasmorevari-\nables, particularly multiple language pairs, in- /\nout-of-domain sentences in a single dataset, and\nreduced dataset sizes. The best numbers of com-\nponents varyfrom 1to25andfeature selection re-\nsults in different subsets of features (from 2 to 10\nfeatures) for different MT systems. Nevertheless,\nin all the datasets, feature selection yields better\nresults, asshown in Table1.\nThe models deviate on average ∼1.1points\nwhen predicting 1-5 scores. This means, e.g., that\nsome sentences atually scoring 4would be given\ntothe user as scoring 5.\nTable 2 shows the performance obtained for the\n1-4andpost-edition datasets . The figures for the\nsubsets of features consistently outperform those\nfor using all features and arealso more stable.\nThe models produced for different MT systems\n(P-ES-1toP-ES-4) deviate ∼0.6-0.7points when\npredicting the sentence-level 1-4scores, which we\nbelieve is a satisfactory deviation. 
For example,\none sentence that should be considered as “fit for\npurpose”(score4)wouldneverbepredictedas“re-\nquires complete retranslation” (score 1) and dis-\ncarded asa consequence.MT RMSPE RMSPEall features\nP-ES-1gb 0.690±0.052 0.780±0.385\nP-ES-1 0.706±0.059 0.793±0.643\nP-ES-2 0.653±0.114 0.750±0.541\nP-ES-3 0.718±0.144 0.745±0.287\nP-ES-4 0.603±0.262 1.550±3.551\nP-ER-1 1.951±0.174 2.083±0.561\nP-ER-2 2.883±0.301 3.483±1.489\nP-ER-3 3.879±0.339 4.893±2.342\nTable 2: RMSPE- 1-4andpost-edition datasets\nAn interesting result is the comparison between\nthescoresforthetwovariationsofthefirstdataset,\ni.e.,P-ES-1gb (glass-box features) and P-ES-1\n(black-box features). The gain in using glass-box\nfeatures is very little in this case. This shows that\nalthough glass-box features may be very informa-\ntive, it is possible to represent the same informa-\ntion using simpler features. From a practical point\nofview,thisisveryimportant,sinceblack-boxfea-\ntures are usually faster to extract and can be used\nwithany MTsystem.\nIn order to investigate whether any single fea-\nture would be able to predict the quality scores as\nwell as the combination of selected good features,\nwe compare the Pearson’s correlation coefficient\nofeachfeatureandthepredictedCEscorewiththe\nexpected human score. The correlation of the best\nfeatures with the human score is ∼0.5(glass-box\nfeatures)orupto ∼0.4(black-boxfeatures)across\nthe different 1-4 datasets . TheCEscore correlates\n∼0.6with thehuman score.\nInTable3wecomparethecorrelation oftheCE\nand human scores against that of well-known MT\nevaluation metrics (at the sentence level) and hu-\nman scores on a test set for P-ES-1gb (values are\nsimilar for other datasets). The quality estimate\npredicted by our method correlates better with hu-\nman scores than reference-based MT evaluation\nmetrics. We apply bootstrapping re-sampling on\nthe data and then use paired t-test to determine\nthe statistical significance of the correlation dif-\nferences (Koehn, 2004). The differences between\nall metrics and CEarestatistically significant with\n99.8%confidence. Different from these metrics,\nour methodrequires sometraining data foragiven\nlanguage-pair and text domain, but once ths train-\ning is done, it can be used to estimate the quality\nof any number of new sentences.\nResults for the post-edition datasets vary con-\nsiderably from system to system. This may indi-\ncate that different MT systems require more post-33\nBLEU-2 NIST TERMeteor CE score\n0.342 0.298-0.263 0.376 0.602\nTable3: Correlation of MTevaluation metricsand\nour score with human annotation - P-ES-1gb\nedition due to their translation quality. For exam-\nple, taking the error for P-ER-1, of∼1.95, we\ncan say that the CE system is able to predict, for\na given source sentence, a post-edition time by\nsource word that will deviate up to 1.95seconds\nfrom the real post-edition time needed. 
The aver-\nageerrorsfoundmayseemaverylargeonaword-\nbasis,butmoreinvestigation ontheuseofthistype\nof CE score to aid translators in their post-edition\nwork isnecessary in this direction.\nBy analyzing the top features in all tasks with\nthe manually annotated datasets we can highlight\nthe following ones:\n•sourcelanguageandin/out-of-domain indica-\ntors (WMTdatasets );\n•source & target sentence 3-gram language\nmodel probability;\n•source &target sentence lengths;\n•percentages of types of word alignments;\n•percentage and mismatch in the numbers and\npunctuation symbols inthe source and target.\nThe first two features convey corpus informa-\ntion. Their impact in the performance is expected,\ngiventhatitmaybeeasiertotranslatebetweencer-\ntain pairs of languages and in-domain sentences.\nThe size of the source and target points out the\ndifficulty of the translation (longer sentences are\nmoredifficult). Liketheremainingfeatures, italso\nexpresses someform between source and target.\n6 Discussionandconclusions\nWe have presented a series of experiments on a\nmethod for confidence estimation to MT that al-\nlows taking into account the contribution of dif-\nferent features and have also identified very in-\nformativeandnon-redundant featuresthatimprove\nthe performance of the produced CE models. Al-\nthough it is not directly possible to compare our\nresults to previous work, because of the unavail-\nabilityofthedatasetsusedbefore, weconsider our\nresultstobesatisfactory. Particularlyinthecaseof\ntheregressiontask,itispossibletohavesomeintu-\nitiononwhattheimpactoftheerror wouldbe. For\nexample,itwouldindicatecrossingonaverageonecategoryinthequalityrankingofthetaskspredict-\ningadequacy scores(1=worst,5=best),andonly\nresultinuncertaintyintheboundariesbetweentwo\nadjacent categories in the 1-4 datasets .\nThe sets of relevant features identified includes\nmany features that have not been used before, in-\ncludingtheaveragesizeofthephrasesinthetarget,\nseveral types of mismatchings in the source and\ntarget, etc. Some of the others features have been\nused in previous work, but their exact definition is\ndifferent here. Forexample, weusethe proportion\nofabortedsearchnodes,insteadofabsolutevalues,\nandwecompute the average number of alternative\ntranslationsbyusingprobabilisticdictionariespro-\nduced from word-alignment.\nBesides directly using the estimated scores as\nquality indicators to professional translators or\nend-users, we plan to further investigate uses for\nthe features selected across MT systems and lan-\nguage pairs from different MT points of view. In\ntheexperimentswiththe NISTdataset ,thefeatures\nfound tobe themost relevant arenot those usually\nconsidered in SMT models. Simple features like\nthe ratio of lengths of source and target sentences,\ntheambiguity ofthesourcewords, thecoverage of\nthe source sentence in the corpus are clearly good\nindicators of translation quality. 
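As an illustration of how cheaply such indicators can be computed, the following sketch derives a handful of the black-box features mentioned above for a source sentence and its translation (names and simplifications are ours; this is not the actual feature extractor used in the experiments):

import string

def simple_black_box_features(src_tokens, tgt_tokens, src_corpus_counts):
    # src_corpus_counts: hypothetical dict of word frequencies in a source-language corpus.
    def share(tokens, pred):
        return sum(1 for t in tokens if pred(t)) / max(len(tokens), 1)
    is_punct = lambda t: all(ch in string.punctuation for ch in t)
    is_num = lambda t: any(ch.isdigit() for ch in t)
    return {
        "src_length": len(src_tokens),
        "tgt_length": len(tgt_tokens),
        "length_ratio": len(src_tokens) / max(len(tgt_tokens), 1),
        # Mismatch of superficial constructions between input and output.
        "punct_mismatch": abs(share(src_tokens, is_punct) - share(tgt_tokens, is_punct)),
        "number_mismatch": abs(share(src_tokens, is_num) - share(tgt_tokens, is_num)),
        # Coverage of the source sentence in a monolingual corpus.
        "src_avg_corpus_freq": sum(src_corpus_counts.get(t.lower(), 0)
                                   for t in src_tokens) / max(len(src_tokens), 1),
    }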
Afuture direction\nwill be to investigate whether these features could\nalsobeusefultoimprovethetranslationsproduced\nbySMTsystems, e.g., in thefollowing ways:\n•Complement existing features in SMT mod-\nels.\n•Rerank n-best lists produced by SMT sys-\ntems, which could make use of the features\nthat arenot local tosingle hipotheses.\nAs discussed in (Gamon et al., 2005), the read-\nability of the sentence, expressed by features like\n3-gram language models, is a good proxy to pre-\ndict translation quality, even in terms of adequacy.\nUltimately, automatic metricssuch asNISTaimat\nsimulating how humans evaluate translations. In\nthat sense, the findings of our experiments with\nthe manually annotated datasets could also be ex-\nploited from an MT evaluation point of view, for\nexample, inthe following ways:\n•Provide additional features to a reference-\nbased metric like that proposed by (Albrecht\nand Hwa,2007a).\n•Provide a score to be combined with other\nMTevaluation metricsusing frameworkslike34\nthose proposed by (Paul et al., 2007) and\n(Gim´ enez and M` arquez, 2008).\nOurfindingscouldalsobeusedtoprovideanew\nevaluation metric on itself, with some function to\noptimize the correlation with human annotations,\nwithout theneed of reference translations.\nReferences\nAlbrecht, J. and R. Hwa. 2007a. A re-examination of\nmachine learning approaches for sentence-level mt\nevaluation. In Proceedingsofthe45thAnnualMeet-\ning of the Association of ComputationalLinguistics ,\npages880–887,Prague.\nAlbrecht, J. and R. Hwa. 2007b. Regression for\nsentence-levelmtevaluationwithpseudoreferences.\nInProceedingsofthe45thAnnualMeetingoftheAs-\nsociation of Computational Linguistics , pages 296–\n303,Prague.\nBlatz, J., E. Fitzgerald, G. Foster, S. Gandrabur,\nC. Goutte, A. Kulesza, A. Sanchis, and N. Ueffing.\n2003. Confidence estimation for machine transla-\ntion. Technical report, Johns Hopkins University,\nBaltimore.\nBlatz, J., E. Fitzgerald, G. Foster, S. Gandrabur,\nC. Goutte, A. Kulesza, A. Sanchis, and N. Ueffing.\n2004. Confidence estimation for machine transla-\ntion. InProceedingsofthe20thConferenceonCom-\nputationalLinguistics ,pages315–321,Geneva.\nCallison-Burch, C., C. Fordyce, P. Koehn, C. Monz,\nand J. Schroeder. 2008. Further meta-evaluation\nof machine translation. In Proceedings of the 3rd\nWorkshop on Statistical Machine Translation , pages\n70–106,Columbus.\nDoddington, G. 2002. Automatic evaluation of ma-\nchinetranslationqualityusingn-gramco-occurrence\nstatistics. In Proceedings of the 2nd Conference on\nHumanLanguageTechnologyResearch ,pages138–\n145,SanDiego.\nFrenich, A. G., A. G. Jouan-Rimbaud, D. Massart,\nD. L. Kuttatharmmakul, S. Martinez Galera, and\nJ. L. M. Martinez Vidal. 1995. Wavelength selec-\ntionmethodformulticomponentspectrophotometric\ndeterminations using partial least squares. Analist,\n120(12):2787–2792.\nGamon, M., A. Aue, and M. Smets. 2005. Sentence-\nlevel mt evaluation without reference translations:\nbeyond language modeling. In Proceedings of the\nEuropeanAssociationforMachineTranslationCon-\nference,Budapest.\nGandrabur, S. and G. Foster. 2003. Confidence esti-\nmation for translation prediction. In Proceedings of\nthe 7th Conference on Natural Language Learning ,\npages95–102,Edmonton.Gim´ enez,J.andL.M` arquez. 2008. Heterogeneousau-\ntomatic mt evaluation through non-parametric met-\nriccombinations. In Proceedingsofthe3rdInterna-\ntional Joint Conference on Natural Language Pro-\ncessing,pages319–326,Hyderabad.\nJong, S De. 1993. 
Simpls: An alternative approach to\npartial least squares regression. Chemometrics and\nIntelligentLaboratorySystems ,18:251–263.\nKadri,Y.andJ.Y.Nie. 2006. Improvingquerytransla-\ntion with confidence estimation for cross language\ninformation retrieval. In Proceedings of the 15th\nConferenceonInformationandKnowledgeManage-\nment,pages818–819,Arlington.\nKoehn, P. and C. Monz. 2006. Manual and automatic\nevaluation of machine translation between european\nlanguages. In Proceedings of the Workshop on Sta-\ntistical Machine Translation , pages 102–121, New\nYork.\nKoehn, P. 2004. Statistical significance tests for ma-\nchine translation evaluation. In Conference on Em-\npirical Methods in Natural Language Processing ,\nBarcelona.\nKulesza, A. and A. Shieber. 2004. A learning ap-\nproachtoimprovingsentence-levelmtevaluation. In\nProceedingsofthe10thInternationalConferenceon\nTheoretical and Methodological Issues in Machine\nTranslation ,Baltimore.\nPapineni, K., S. Roukos, T. Ward, and W. Zhu. 2002.\nBleu: a methodfor automaticevaluationof machine\ntranslation. In Proceedingsofthe40thAnnualMeet-\ning on Association for Computational Linguistics ,\npage311318,Morristown.\nPaul, M., A. Finch, and E. Sumita. 2007. Reducing\nhuman assessment of machine translation quality to\nbinary classifiers. In Proceedings of the 11th Con-\nferenceonTheoreticalandMethodologicalIssuesin\nMachineTranslation ,pages154–162,Skovde.\nQuirk, C. B. 2004. Training a sentence-level ma-\nchine translation confidence measure. In Proceed-\nings of the 4th Conference on Language Resources\nandEvaluation ,pages825–828,Lisbon.\nRosipal, R. and L. J. Trejo. 2001. Kernel partial\nleastsquaresregressioninreproducingkernelhilbert\nspace.MachineLearningResearch ,2:97–123.\nUeffing, N. and H. Ney. 2005. Application of word-\nlevel confidence measures in interactive statistical\nmachinetranslation. In Proceedingsofthe10thCon-\nference of the European Association for Machine\nTranslation ,pages262–270,Budapest.\nWold, S., A. Ruhe, H. Wold, and W. J. Dunn. 1984.\nThe covariance problem in linear regression. the\npartial least squares (pls) approach to generalized\ninverses. SIAM Journal on Scientific Computing ,\n5:735–743.35",
"main_paper_content": null
}
|
{
"decision": "Unknown",
"reviews": []
}
| 0 | 0 |
[] |
[] |
{
"id": "iVW-K8qsMe",
"year": null,
"venue": "EAMT 2010",
"pdf_link": "https://aclanthology.org/2010.eamt-1.31.pdf",
"forum_link": "https://openreview.net/forum?id=iVW-K8qsMe",
"arxiv_id": null,
"doi": null
}
|
{
"title": "Learning an Expert from Human Annotations in Statistical Machine Translation: the Case of Out-of-Vocabulary Words",
"authors": [
"Wilker Aziz",
"Marc Dymetman",
"Lucia Specia",
"Shachar Mirkin"
],
"abstract": null,
"keywords": [],
"raw_extracted_content": "Learning an Expert from Human Annotations in Statistical Machine\nTranslation: the Case of Out-of-Vocabulary Words\nWilker Aziz\u0003, Marc Dymetmany, Shachar Mirkinx, Lucia Speciaz, Nicola Cancedday, Ido Daganx\n\u0003University of S ˜ao Paulo,yXerox Research Centre Europe\nxBar-Ilan University,zUniversity of Wolverhampton\[email protected], fdymetman,[email protected]\[email protected], fmirkins,[email protected]\nAbstract\nWe present a general method for incorporating an\n“expert” model into a Statistical Machine Transla-\ntion (SMT) system, in order to improve its perfor-\nmance on a particular “area of expertise”, and ap-\nply this method to the specific task of finding ade-\nquate replacements for Out-of-V ocabulary (OOV)\nwords. Candidate replacements are paraphrases\nand entailed phrases, obtained using monolin-\ngual resources. These candidate replacements are\ntransformed into “dynamic biphrases”, generated\nat decoding time based on the context of each\nsource sentence. Standard SMT features are en-\nhanced with a number of new features aimed at\nscoring translations produced by using different\nreplacements. Active learning is used to discrimi-\nnatively train the model parameters from human\nassessments of the quality of translations. The\nlearning framework yields an SMT system which\nis able to deal with sentences containing OOV\nwords but also guarantees that the performance\nis not degraded for input sentences without OOV\nwords. Results of experiments on English-French\ntranslation show that this method outperforms pre-\nvious work addressing OOV words in terms of ac-\nceptability.\n1 Introduction\nWhen translating a new sentence, Statistical Ma-\nchine Translation (SMT) systems often encounter\n“Out-of-V ocabulary” (OOV) words, that is, words\nfor which no translation is provided in the system\nphrase table. The problem is particularly severe\nwhen bilingual data are scarce or the text to be\ntranslated is not from the same domain as the data\nused to train the system.\nOne approach consists in replacing the OOV\nword by a paraphrase, i.e. a word that is equiv-\nalent and known to the phrase-table. For instance,\nin the sentence “The police hit the protester ”, ifthe source word “hit ” is OOV , it could be replaced\nby its paraphrase “struck ”. In previous work such\nparaphrases are learnt by “pivoting” through par-\nallel texts involving multiple languages (Callison-\nBurch et al., 2006) or on the basis of monolingual\ndata and distributional similarity metrics (Marton\net al., 2009).\nMirkin et al. (2009) go beyond the use of para-\nphrase to incorporate the notion of an entailed\nphrase, that is, a word which is implied by the\nOOV word, but is not necessarily equivalent to\nit — for example, this could result in “hit ” being\nreplaced by the entailed phrase “attacked ”. Both\nparaphrases and entailed phrases are obtained us-\ning monolingual resources such as WordNet (Fell-\nbaum, 1998). This approach results in higher\ncoverage and human acceptability of the transla-\ntions produced relative to approaches based only\non paraphrases.\nIn (Mirkin et al., 2009) a replacement for the\nOOV word is chosen based on a score represent-\ning how well it fits the context of the input sen-\ntence, combined with the global SMT score ob-\ntained after translating multiple alternative sen-\ntences produced by alternative replacements. 
The\ncombination of source and target language scores\nis heuristically defined as their product, and en-\ntailed phrases are only used when paraphrases are\nnot available. This approach has several short-\ncomings: translating each replacement variant is\nwasteful and does not capitalize on the search ca-\npabilities of the decoder; the ad hoc combination\nof scores makes it difficult to tune the contribution\nof each score or to extend the approach to incorpo-\nrate new features; and the enforced preference to\nparaphrases may result in inadequate paraphrases\ninstead of acceptable entailed phrases.\nWe propose an approach which also takes into\naccount both paraphrased and entailed words and\nuses a context model score, but differs from\n(Mirkin et al., 2009) in several crucial aspects,\n[EAMT May 2010 St Raphael, France]\nmostly stemming from the fact that we integrate\nthe selection of the replacement words into the\nSMT decoder. This has implications for both the\ndecoding and training processes.\nAtdecoding time, when translating a source\nsentence with an OOV word, besides the collec-\ntion of biphrases1stored in the system phrase ta-\nble, we generate a set of dynamic biphrases on the\nfly, based on the context of that specific source\nsentence, to address the OOV word. For exam-\nple, we could derive the dynamic biphrases (hit,\na frapp ´e)and(hit, a attaqu ´e)from the static ones\n(struck, a frapp ´e)and(attacked, a attaqu ´e).\nSuch dynamic biphrases are assigned several\nfeatures that characterize different aspects of the\nprocess that generated them, such as the appro-\npriateness of the replacement in the context of\nthe specific source sentence, allowing for example\nreach to be preferred to strike orattack in replac-\ninghitin “We hit the city at lunch time ”. Dynamic\nand static biphrases compete during the search for\nan optimal translation.\nAttraining time, standard techniques such as\nMERT (Minimum Error Rate Training) (Och,\n2003), which attempt to maximize automatic met-\nrics like BLEU (Papineni et al., 2002) based on\na bilingual corpus, are directly applicable. How-\never, as has been discussed in (Callison-Burch et\nal., 2006; Mirkin et al., 2009), such automatic\nmeasures are poor indicators of improvements in\ntranslation quality in presence of semantic modifi-\ncations of the kind we are considering here. There-\nfore, we perform the training and evaluation on the\nbasis of human annotations. We use a form of ac-\ntive learning to focus the annotation effort on a\nsmall set of candidates which are useful for the\ntraining.\nSentences containing OOV words represent a\nfairly small fraction of the sentences to be trans-\nlated2. Thus, to avoid human annotation of a\nlarge sample with relatively few cases of OOV\nwords, for the purpose of yielding a statistically\nunbiased sample, we perform a two-phase train-\ning: (a) the standard SMT model is first trained on\nan unbiased bilingual sample using MERT and N-\nbest lists; and (b) this model is extended with ad-\nditional dynamic features and iteratively updated\nby using other samples containing only sentences\nwith OOV words annotated for quality by humans.\n1Biphrases are the standard source and target phrase pairs.\n2In our experimental setting, in 50K sentences from News\ntexts, 15% contain at least one OOV content word.We update the model parameters in such a way\nthat the new model does not modify the scores\nof translations of cases without OOV words. 
This\nis done through an adaptation of the online learn-\ning algorithm MIRA (Crammer et al., 2006) which\npreserves linear subspaces of parameters. This ap-\nproach consists therefore in learning an expert that\nis able to improve the performance of the transla-\ntion system on a specific set of inputs, while pre-\nserving its performance on all other inputs.\nThe main contributions of this paper can be\nsummarized as follows: an efficient mechanism\nintegrated into the decoder for handling contextual\ninformation; a method for adding expertise to an\nSMT model relative to a specific task, relying on\nhighly informative, biased, samples and on human\nscores; expert models that affect only a specific set\nof inputs related to a particular problem, improv-\ning the translation performance in such cases.\nIn the remainder of this paper, we introduce the\nframework proposed in this paper for learning an\nexpert for the task of handling sentence contain-\ning OOV words (Section 2), then present our ex-\nperimental setup (Section 3) and finally our results\n(Section 4).\n2 Learning an expert for OOV words\nOur approach to learning an OOV expert for\nSMT is motivated by several general require-\nments. First, for efficiency reasons, we want the\nexpert to be tightly integrated with the SMT de-\ncoder. Second, we need to rely on human judg-\nments of the translations produced, since auto-\nmatic evaluation measures such as BLEU are poor\npredictors of translation quality in the presence of\nsemantic approximations of the kind we are con-\nsidering (Mirkin et al., 2009) . Third, because hu-\nman annotations are costly, we need to use them\nsparingly. In particular: (i) we want to focus the\nannotation task on the specific problem of sen-\ntences containing OOV words, and (ii) even for\nthese sentences, we should only hand the anno-\ntators a small, well-chosen, sample of translation\ncandidates to assess, not an exhaustive list. Fi-\nnally, we need to be careful not to bias training to-\nwards the human annotated sample in such a way\nthat the integrated decoder becomes better on the\nOOV sentences, but is degraded on the “normal”\nsentences. We address these requirements as fol-\nlows.\nIntegrated decoding The integrated decoder con-\nsists of a standard phrase-based SMT decoder\n(Lopez, 2008; Koehn, 2010) enhanced with the\nability to add dynamic biphrases at runtime and\nattempting to maximize a variant of the stan-\ndard “log-linear” objective function. The stan-\ndard SMT decoder tries to find argmax (a;t)\u0003\u0001\nG(s;t;a ), where \u0003is a vector of weights, and\nG(s;t;a )a vector of features depending on the\nsource sentence s, the target sentence tand the\nphrase-level alignment a. The integrated decoder\ntries to find\nargmax (a;t)\u0003\u0001G(s;t;a ) +M\u0001H(s;t;a )\nwhereMis an additional vector of weights and\nH(s;t;a )an additional vector of “dynamic” fea-\ntures associated with the dynamic biphrases and\nassessing different characteristics of their associ-\nated replacements (see Section 2.2). The inte-\ngrated model is thus completely parametrized by\nthe concatenated weight vector. We call this model\n\u0003\bMfor short.\nHuman annotations We select at random a set of\nOOV sentences from our test domain to compose\nourOOV training set , and for each of these sen-\ntences, provide the human annotators with a sam-\nple of candidate translations for different choices\nof replacements. 
They are then asked to rank these\ncandidates according to how well they approxi-\nmate the meaning of the source. In order not to\nforce the annotators to decide on fine-grained dis-\ntinctions that they are not confident about, which\ncould be confusing and increase noise for the\nlearning module, we provide guidelines and an an-\nnotation interface that encourage ranking the can-\ndidates in a few distinct “clusters”, where the rank\nbetween clusters is clear, but the elements inside\neach cluster are considered indistinguishable. The\nannotators are also asked to concentrate their judg-\nment on the portions of the sentences which are\naffected by the different replacements. To cover\npotential cases of cognates, annotators can choose\nthe actual OOV as the best “replacement”.\nActive sampling In order to keep the sample of\ncandidate translations to be annotated for a given\nOOV source sentence small, but still informative\nfor training, we adopt an active learning scheme\n(Settles, 2010; Haffari et al., 2009; Eck et al.,\n2005). We do not extract a priori a sample of\ntranslation candidates for each sentence in the\nOOV training set and ask the annotators to workon these samples — which would mean that they\nmight have to compare candidates that have little\nchance of being selected by the end-model after\ntraining. Instead, This is an iterative process, with\na slice of the OOV training set selected for each\niteration. When sampling candidate translations\n(out of a given slice of the OOV training set) to be\nannotated in the next iteration, we use the transla-\ntions produced by the model \u0003\bMobtained so\nfar, after training on all previous samples. This\nguarantees that we sample the overall best can-\ndidates for each OOV sentence according to the\ncurrent model. Additionally, we sample several\nother translations corresponding to top candidates\naccording to individual features used in the model,\nincluding the context model score, as we will de-\ntail in Section 3. This ensures a diversity of candi-\ndates to compare, while avoiding having to ask the\nannotators to give feedback on candidates that do\nnot stand a chance of being selected by the model.\nAvoiding bias We train the model \u0003\bMaim-\ning to guarantee that when the integrated decoder\nfinds a new sentence containing OOV words, it\nwill rank the translation candidates in a way con-\nsistent with the ranks that the human judges would\ngive to these candidates; in particular it should out-\nput as its best translation a candidate that the anno-\ntators would rank in top position. However, if we\ntune both\u0003andMto attain this goal, the value of\n\u0003in the integrated decoder can differ significantly\nfrom its value in the standard decoder, say \u00030. In\nthat case, when decoding a non-OOV sentence, for\nwhich the dynamic features H(s;t;a )are null, the\nintegrated decoder would use \u0003instead of \u00030, pos-\nsibly degrading its performance on such sentences.\nTo avoid this problem, while training \u0003\bMwe\nkeep\u0003fixed at the value \u00030, in other words, we al-\nlow onlyMto be updated in the iterative learning\nprocess. In such a way, we preserve the original\nbehavior of the system on standard inputs. 
This requires a learning technique that can be adapted so that the parameter vector Λ⊕M varies only in the linear subspace for which Λ = Λ0; notice that this is different from training Λ and M separately and then learning the best mixing factor between the two models. One technique which provides a mathematically neat way to handle this requirement is MIRA (Crammer et al., 2006), an online training method in which each learning step consists in updating the current parameter vector minimally (in the sense of Euclidean distance) so that it lies in a certain subspace determined by the current training point. It is then quite natural to add the constraint that it also lies in the subspace Λ = Λ0.
2.1 Learning to rank OOV candidates with MIRA
Let us first write Ω ≡ Λ⊕M and F(s,t,a) ≡ G(s,t,a) ⊕ H(s,t,a), and also introduce notation for the two projection operators: π(Λ)(Λ⊕M) = Λ and π(M)(Λ⊕M) = M.
Our goal when training from human annotations is that, whenever the annotators say that the translation candidate (s,t,a) is strictly better than the translation candidate (s,t′,a′), the model scores give the same result, namely are such that Ω·F(s,t,a) > Ω·F(s,t′,a′). Our approach to learning can then be outlined as follows. Based on the value of Ω learned on previous iterations with other samples of OOV sentences, we actively sample, as previously described, a few candidate translations (s,tj,aj) for each source sentence s in the current slice of the data, and have them ranked by human annotators, preferably in a few distinct clusters. We extract at random a certain number of pairs of translation candidates yj,k ≡ ((s,tj,aj),(s,tk,ak)), where (s,tj,aj) and (s,tk,ak) are assumed to belong to two different clusters. We then define a feature vector on candidate pairs Φ(yj,k) ≡ F(s,tj,aj) − F(s,tk,ak).
The basic learning step is the following. We assume Ω to be the current value of the parameters, and yj,k the next pair of annotated candidates, with (without loss of generality) (s,tj,aj) being strictly preferred by the annotator to (s,tk,ak). The update from Ω to Ω′ is then performed as follows:
If Ω·Φ(yj,k) ≥ 0 then Ω′ := Ω
Else Ω′ := argmin_ω ‖ω − Ω‖²   (a)
s.t. ω·Φ(yj,k) − ω·Φ(yk,j) ≥ 1   (b)
and π(Λ)(ω) = Λ0   (c)
In other words, we are learning to rank the candidates through a “pairwise comparison” approach (Li, 2009), in which whenever a candidate pair yj,k is ordered in opposite ways by the annotator and the model, an update of Ω is performed. This update is a simple variant of the MIRA algorithm (as presented for instance in (Crammer, 2007)), where we update the parameter Ω minimally in terms of Euclidean distance (a) such that the new parameter respects two conditions. The condition (b) forces the classification margin for the pair to become larger with the updated model than the loss currently incurred on that pair, considering that this loss is 0 when the model chooses the correct order yj,k, and 1 when it chooses the wrong order yk,j. The second condition (c), which is our original addition to MIRA, forces the new parameter to have an invariant Λ-projection. The solution Ω′ to the constrained optimization problem above can be obtained through Lagrange multipliers (proof omitted). Assuming that we already start from a parameter Ω such that π(Λ)(Ω) = Λ0, the update is given by:
Ω′ = Ω + τ π(M)(X),
where X ≡ Φ(yj,k) − Φ(yk,j) = 2 Φ(yj,k) and τ ≡ (1 − Ω·X) / ‖π(M)(X)‖². (Technically, this ratio is only defined for π(M)(X) ≠ 0, i.e. for cases where the pair of translations differ in their M projections; in the rare instances where this might not be true, we can simply ignore the pair in the learning process.) As is standard with MIRA, the final value for the model is found by averaging the Ω values found by iterating the basic learning step just described.
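For concreteness, a minimal sketch of this constrained update is given below, for a weight vector stored as a flat array whose first n_static entries hold Λ and whose remaining entries hold M (an illustrative simplification, not the code used in the experiments):

import numpy as np

def constrained_mira_update(omega, phi_jk, n_static):
    # omega:  current weight vector Omega = Lambda (+) M; the first n_static
    #         entries are the static weights Lambda, the rest the dynamic weights M.
    # phi_jk: Phi(y_jk) = F(s, t_j, a_j) - F(s, t_k, a_k), where (s, t_j, a_j)
    #         is the candidate strictly preferred by the annotator.
    if omega @ phi_jk >= 0:
        return omega                      # model already orders the pair correctly
    x = 2.0 * phi_jk                      # X = Phi(y_jk) - Phi(y_kj) = 2 Phi(y_jk)
    x_m = np.zeros_like(x)
    x_m[n_static:] = x[n_static:]         # pi_M(X): keep only the dynamic block
    denom = float(x_m @ x_m)              # ||pi_M(X)||^2
    if denom == 0.0:
        return omega                      # pair does not differ in its M projection: ignore it
    tau = (1.0 - omega @ x) / denom
    return omega + tau * x_m              # the Lambda block of omega is left untouched

The final model is then the average of the successive omega values produced by iterating this step over the annotated pairs.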
2.2 Dynamic Features
Given an OOV word, similar to (Mirkin et al., 2009), we search for a set of candidate replacements in WordNet, considering both synonyms and hypernyms of the OOV word which are available in the biphrase table. To this set we add the OOV word itself to account for proper nouns and potential cognates. Unlike previous work, we do not explicitly give preference to any type of candidate (e.g. synonyms over hypernyms), but instead distinguish them through features associated with the new biphrases. Given a source sentence s with an OOV word (oov), we compute several feature scores for each candidate replacement (rep):
Context model score Score indicating the degree by which rep fits the context of s. Following the results reported by Mirkin et al. (2009) we apply Latent Semantic Analysis (LSA) (Deerwester et al., 1990) as the method for computing this score, using 100-dimension vectors constructed based on a corpus of the same domain as the test set. Given s and rep, we compute the cosine similarity between their LSA vectors, where the sentence’s vector is the average of the vectors of all the content words in it.
To allow for the decoder\nto distinguish among such biphrases, we define an\nadditional feature as the linear combination of the\nfeature values of the static biphrase from which the\ndynamic biphrase was derived.\nAll static features are assigned a null value in the\ndynamic biphrases, and all dynamic features are\nassigned a null value in the static biphrases.4\n3 Experimental Setting\nData We consider the English-French translation\ntask and a scenario where an SMT system is used\n4Thus, what we have mnemonically called “dynamic\nfeatures” are features that are non-null only in dynamic\nbiphrases; some are contextual, others not.to translate texts of a different domain from the\none it was trained on. We train a standard phrase-\nbased SMT system on Europarl-v4 ( \u00181Msen-\ntence pairs) and use it to decode sentences from\nthe News domain. The standard log-linear model\nparameters are tuned using 2Kunseen sentences\nfrom Europarl-v4 through MERT. A 3-gram tar-\nget language model is trained using 7Msen-\ntences of French News texts. All datasets are\ntaken from the the WMT-09 competition5. For\nthe learning framework, we take all sentences in\nthe News Commentary domain (training, devel-\nopment and test sets) from WMT-09 ( \u001875K)\nand extract those containing one OOV word that\nis not a proper name, symbol or number ( \u001815%\nof the sentences). Of these, we then randomly se-\nlected 1Ksentences for tuning the context model\n(LSA tuning set ), other 1Ksentences for tuning\nthe SMT feature weights ( SMT tuning set ), and\n500sentences for evaluating all methods ( test set ).\nThe data used for computing the context model\nand domain similarity scores is the Reuters Cor-\npus, V olume 1 (RCV1), which is also of the News\ndomain6.\nWe experiment with the following systems:\nBaseline SMT The SMT system we use, MA-\nTRAX (Simard et al., 2005), without any special\ntreatment for OOV words, where these are simply\ncopied to the translations.\nMonolingual retrieval Method described in\n(Marton et al., 2009) where paraphrases for OOV\nwords are extracted from a monolingual corpus\nbased on similarity metrics. We use their best-\nperforming setting with single-word paraphrases\nextracted from a News domain corpus with 10M\nsentences. The additional biphrases are statically\nadded into the system’s biphrase library and the\nsimilarity score is used as a new feature. The log-\nlinear model is then entirely retrained with MERT\nand the SMT tuning set .\nLexical entailment Two best performing meth-\nods described in (Mirkin et al., 2009). For each\nsentence with an OOV word a set of alternative\nsource sentences is generated by directly replac-\ning each OOV word by synonyms from Word-\nNet or – if synonyms are not found – by hyper-\nnyms. These two settings do not add features to\n5http://www.statmt.org/wmt09/.\n6http://trec.nist.gov/data/reuters/reuters.html\nthe model, hence they do not require retraining:\n\u000fSMT All alternative source sentences are\ntranslated using a standard SMT system and\nthe “best” translation is the one with the high-\nest global SMT score.\n\u000fLSA-SMT The 20-best alternative source\nsentences are selected according to an LSA\ncontext model score and translated by the a\nstandard SMT system. The “best” translation\nis the one that maximizes the product of the\nLSA and global SMT scores.\nOOV expert Method proposed in this paper, as\ndescribed in Section 2. 
The expert model with\nall dynamic features is trained on the basis of hu-\nman annotations using the SMT tuning set. At\neach iteration of the learning process we sample\n80sentences for annotation by bilingual (English\nand French) speakers. For a given source sen-\ntence, the sampled options at each iteration con-\nsist of a random choice of 8dynamic biphrases\ncorresponding to different replacements, 4addi-\ntional dynamic biphrases corresponding to differ-\nent ways of translating those replacements, and the\ntop candidates according to each of our main dy-\nnamic features: 1-best given by the information\nloss feature, 2-best given by the context model fea-\nture, 1-best given by the domain similarity feature\nand1-best given by the identity feature. In total\nat most 17 non-identical candidates can be pro-\nduced for annotation, but typically only a dozen\nare found. The results reported in Section 4 are\nobtained after only 6iterations.\nMERT Baseline with the same settings as the\nOOV expert , but where the tuning of allmodel\nparameters (both static and dynamic) is done au-\ntomatically using standard MERT with reference\ntranslations for the SMT tuning set, instead of our\nlearning framework and human annotations.\n4 Results\nTest set Following the same guidelines used for\nthe annotation task, two native speakers of French\n(and fluent speakers of English) were asked to\njudge translations produced by different systems\non 500 source sentences, according to how well\nthey reproduced the meaning of the source sen-\ntence. They were asked to rank the translations in\na few clearly distinct clusters and to discard use-\nless translations.Features \u0016 \u001b Best Acceptance\nLID 2.477 1.465 0.4728 0.5252\nID 2.491 1.463 0.4668 0.5211\nLI 2.547 1.457 0.4427 0.5050\nI 2.561 1.463 0.4447 0.4970\nD 2.924 1.414 0.3360 0.3722\nLD 2.930 1.412 0.3340 0.3702\nL 3.056 1.361 0.2857 0.3300\nBaseline 3.219 1.252 0.2093 0.2918\nTable 1: Comparison between different feature combina-\ntions and the baseline showing the percentage of times each\ncombination outputs a translation that is acceptable, i.e. is\nnot discarded (Acceptance), a translation that is ranked in the\nfirst cluster (Best), as well as the the mean rank ( \u0016) and stan-\ndard deviation ( \u001b) of each combination, where the discarded\ntranslations are conventionally assigned a rank of 5, lower\nthan the rank of any acceptable cluster observed among the\nannotations. (L) context model score, (I) information-loss,\n(D) domain similarity, (Baseline) SMT system.\nWe computed inter-annotator agreement con-\ncerning both acceptance and ranking, for trans-\nlations of 30 randomly sampled source sentences\nthat were evaluated by both annotators. For rank-\ning, we followed (Callison-Burch et al., 2008),\nchecking for each two translations, AandB,\nwhether the annotators agreed that A=B,A>B\norA < B . This resulted in kappa coefficient\nscores (Cohen, 1960) of 0.87 for translation ac-\nceptance and 0.83 for ranking.\nCombinations of dynamic features In order to\nhave a picture of the contribution of each dynamic\nfeature to the expert model, we compare the per-\nformance on the test set of different combinations\nof our main features. The results are shown in Ta-\nble 1. 
The features not mentioned in the table,\nsuch as the identity flag, are secondary features in-\ncluded in all combinations.\nThe baseline SMT system (i.e., only identity\nreplacements ) reaches 29.18% of acceptance (a\ntranslation is said to be acceptable if it is not dis-\ncarded), which is related to the fact that, for the\ngiven domain, copying an OOV English word into\nthe French output often results in a cognate. The\nbest performance is obtained with combination of\nall features (LID).\nEvolution of learning For the complete feature\nvector LID we compared the performance (on the\ntest data) of models corresponding to different it-\nerations of the online learning scheme. The results\nare presented in Table 2. We see a large increase\nin performance from M0toM1, then smaller in-\ncreases. After two or three iterations the perfor-\nIterations \u0016 \u001b Best Acceptance\nM6 2.487 1.458 0.4628 0.5252\nM5 2.491 1.459 0.4628 0.5231\nM4 2.489 1.458 0.4628 0.5252\nM3 2.493 1.455 0.4588 0.5252\nM2 2.501 1.456 0.4567 0.5211\nM1 2.519 1.456 0.4507 0.5151\nM0 2.944 1.407 0.328 0.3642\nBaseline 3.237 1.228 0.1932 0.2918\nTable 2: Each iteration adds 80 annotated sentences to the\ntraining set, from which the next vector of weights is com-\nputed. The dynamic vector M0was initialized with zero for\nthe replacement-related features and 1 for the source-target\nfeature. (Baseline) SMT system without OOV handling.\nmance changes are negligible, indicating that an-\nnotation effort for training the system could be\nroughly divided by two without affecting its end\nperformance.\nComparison with other systems We now com-\npare our LID model, in different decoding and\ntraining setups, with the methods proposed in pre-\nvious work and described in Section 3. Table 3\npresents the results in terms of mean rank and stan-\ndard deviation (note that the rank is relative to the\nother systems in the comparison and is not directly\ncomparable to the rank of the same system in a dif-\nferent comparison), percentage of time each sys-\ntem outputs a first-ranked translation and the per-\ncentage of time it outputs an acceptable one, using\nthe same conventions as in Table 1.\nLet us first focus on the lines other than b-LID\nin the table, corresponding to systems mentioned\nin Section 3. These results are consistent across\ndifferent measures: acceptance, mean rank, or be-\ning ranked in the best cluster. In particular we\nsee that both the LID, trained on human annota-\ntions, and LID-MERT systems, trained by MERT\nfrom reference translations, considerably outper-\nform the baseline and the Monolingual Retrieval\nmethod, with LID being better than LID-MERT\nparticularly in terms of acceptability. A somewhat\ndisappointing result, however, is that LID is infe-\nrior to both SMT-LSA and SMT on all measures.\nBy observing the outputs of SMT and SMT-\nLSA, we noticed that, although they can theoret-\nically produce identity replacements, they never\nactually do so on the test set. 
This is probably due\nto the fact that the language model that is part of\nthe scoring function in both SMT and SMT-LSA\ncontributes to giving a very bad score to identity\nreplacements, unless they happen to belong to theset of possible French forms (“exact” cognates),\nand therefore these models tend to strongly favor\nentailment replacements.\nOn the other hand, our LID model does actually\nproduce identity replacements quite often, some of\nwhich are acceptable (perhaps even ranked first)\nto the annotators, but a majority of which lead\nto non-acceptability. This is due to the fact that,\nat training time, the LID model actually learns\nto score the identity replacements relatively well\n(often overcoming the repulsion of the language\nmodel feature in the underlying baseline SMT sys-\ntem), due to the fact that many of them are ac-\ntually preferred by the annotators, typically those\nthat correspond to approximate cognates of exist-\ning French words (the annotation guidelines did\nnot discourage them from doing so). Thus the LID\nmodel has a tendency to sometimes favor identities\nover entailments. However , it is not clever enough\nto distinguish the “good” identities (namely, the\nquasi-cognates) from the bad ones (namely, En-\nglish words with no obvious French connotation),\ngiven that all identity replacements are only identi-\nfied by a binary feature (identity vs. non-identity)\ninstead of being associated with any features that\ncould predict their understandability in a French\nsentence. Thus LID, when it selects an identity\nreplacement, often selects an unacceptable one.\nMotivated by this uncertainty concerning the\nuse of identity replacements, we defined a sys-\ntemb-LID which uses the same model as LID,\nbut the identity replacements are blocked at decod-\ning time . In this way the system is forced to pro-\nduce an entailment replacement instead of an iden-\ntity one, but otherwise ranks the different entail-\nment replacements in the same order as the origi-\nnal LID. 
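A toy illustration of this blocking step follows; the field names and scores are invented for the example and do not come from the actual decoder.

```python
# Toy sketch of the b-LID decoding constraint described above: model scores
# are left unchanged, but candidates flagged as identity replacements are
# filtered out before picking the best one, so an entailment replacement is
# always produced.
def pick_blid_replacement(candidates):
    entailments = [c for c in candidates if not c["is_identity"]]
    return max(entailments, key=lambda c: c["score"])

example = [
    {"phrase": "slump",     "is_identity": True,  "score": 1.9},  # English word copied as-is
    {"phrase": "récession", "is_identity": False, "score": 1.4},
    {"phrase": "repli",     "is_identity": False, "score": 0.8},
]
print(pick_blid_replacement(example)["phrase"])   # -> "récession"
```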
We can then see from Table 3 that b-LID outperforms every other system by a large margin:7 it is excellent at distinguishing between true entailments, and while it misses some good identity replacements, it is not handicapped in this respect relative to the other systems, which are also unable to model them.

System      μ      σ      Best    Acceptance
b-LID       2.274  1.803  0.6258  0.7002
SMT         2.736  1.933  0.5172  0.5822
SMT-LSA     2.744  1.931  0.5132  0.5822
LID         3.018  1.913  0.4145  0.5252
LID-MERT    3.153  1.928  0.4024  0.4849
Baseline    3.998  1.603  0.1549  0.2918
MonRet      4.107  1.584  0.1690  0.2495

Table 3: (LID) complete dynamic vector trained on the basis of human assessments; (b-LID) as LID, but blocking identity replacements; (LID-MERT) complete dynamic vector trained on the basis of automatic assessments; (SMT, SMT-LSA) and (MonRet or Monolingual retrieval) as described in Section 3; (Baseline) SMT system without OOV handling.

7 A Wilcoxon signed rank test (Wilcoxon, 1945) shows that b-LID is better ranked than its closest competitor SMT with a p-value of less than 2%.

5 Conclusions

While our approach is motivated by a specific problem (OOV terms), we believe that some of the innovations we have introduced are of a larger general interest for SMT: our use of dynamic biphrases and features for incorporating complex additional run-time knowledge into a standard phrase-based SMT system, our approach to integrating a MERT-trained log-linear model with a model actively trained from a small sample of human annotations addressing a specific phenomenon, and finally the formal techniques used in order to guarantee that the expert that is thus learned from a focussed, biased sample is able to improve performance on its domain of expertise while preserving the baseline system's performance on the standard cases.

Acknowledgments

This work was supported in part by the ICT Programme of the European Community, under the PASCAL-2 Network of Excellence, ICT-216886. We thank Binyam Gebrekidan Gebre and Ibrahim Soumana for performing the annotations and the anonymous reviewers for their useful comments. This publication only reflects the authors' views.

References

Chris Callison-Burch, Philipp Koehn, and Miles Osborne. 2006. Improved Statistical Machine Translation Using Paraphrases. In Proceedings of HLT-NAACL.

Chris Callison-Burch, Cameron Fordyce, Philipp Koehn, Christof Monz, and Josh Schroeder. 2008. Further Meta-Evaluation of Machine Translation. In Proceedings of WMT.

Jacob Cohen. 1960. A Coefficient of Agreement for Nominal Scales. Educational and Psychological Measurement, 20(1):37–46.

Koby Crammer, Ofer Dekel, Joseph Keshet, Shai Shalev-Shwartz, and Yoram Singer. 2006. Online passive-aggressive algorithms. Journal of Machine Learning Research, 7:551–585.

Koby Crammer. 2007. Online learning of real-world problems. Tutorial given at ICML. www.cis.upenn.edu/~crammer/icml-tutorial-index.html.

Scott Deerwester, S.T. Dumais, G.W. Furnas, T.K. Landauer, and R.A. Harshman. 1990. Indexing by Latent Semantic Analysis. Journal of the American Society for Information Science, 41, pages 391–407.

Matthias Eck, Stephan Vogel, and Alex Waibel. 2005. Low cost portability for statistical machine translation based on n-gram coverage. In MT Summit X, pages 227–234.

Christiane Fellbaum, editor. 1998. WordNet: An Electronic Lexical Database (Language, Speech, and Communication).
The MIT Press.\nGholamreza Haffari, Maxim Roy, and Anoop Sarkar. 2009.\nActive learning for statistical phrase-based machine trans-\nlation. In Proceedings of Human Language Technologies\n/ North American Chapter of the Association for Compu-\ntational Linguistics , pages 415–423.\nPhilipp Koehn. 2010. Statistical Machine Translation . Cam-\nbridge University Press.\nHang Li. 2009. Learning to rank. Tutorial given\nat ACL-IJCNLP, August. research.microsoft.com/en-\nus/people/hangli/li-acl-ijcnlp-2009-tutorial.pdf.\nAdam Lopez. 2008. Statistical machine translation. ACM\nComputing Surveys , 40(3):1–49.\nYuval Marton, Chris Callison-Burch, and Philip Resnik.\n2009. Improved statistical machine translation using\nmonolingually-derived paraphrases. In Proceedings of the\n2009 Conference on Empirical Methods in Natural Lan-\nguage Processing , pages 381–390.\nDiana McCarthy, Rob Koeling, Julie Weeds, and John Car-\nroll. 2004. Finding predominant word senses in untagged\ntext. In In Proceedings of ACL , pages 280–287.\nShachar Mirkin, Lucia Specia, Nicola Cancedda, Ido Da-\ngan, Marc Dymetman, and Idan Szpektor. 2009. Source-\nlanguage entailment modeling for translating unknown\nterms. In Proceedings of ACL , Singapore.\nFranz Josef Och. 2003. Minimum error rate training in statis-\ntical machine translation. In ACL ’03: Proceedings of the\n41st Annual Meeting on Association for Computational\nLinguistics , pages 160–167.\nKishore Papineni, Salim Roukos, Todd Ward, and Wei-Jing\nZhu. 2002. BLEU: a Method for Automatic Evaluation\nof Machine Translation. In Proceedings of ACL .\nBurr Settles. 2010. Active learning literature survey. Tech-\nnical report, University of Wisconsin-Madison.\nM. Simard, N. Cancedda, B. Cavestro, M. Dymetman,\nE. Gaussier, C. Goutte, and K. Yamada. 2005. Trans-\nlating with Non-contiguous Phrases. In Proceedings of\nHLT-EMNLP .\nFrank Wilcoxon. 1945. Individual Comparisons by Ranking\nMethods. Biometrics Bulletin , 1(6):80–83.",
"main_paper_content": null
}
|
{
"decision": "Unknown",
"reviews": []
}
| 0 | 0 |
[] |
[] |
{
"id": "qXwrVLzXKLo",
"year": null,
"venue": "EAMT 2022",
"pdf_link": "https://aclanthology.org/2022.eamt-1.58.pdf",
"forum_link": "https://openreview.net/forum?id=qXwrVLzXKLo",
"arxiv_id": null,
"doi": null
}
|
{
"title": "DiHuTra: a Parallel Corpus to Analyse Differences between Human Translations",
"authors": [
"Ekaterina Lapshinova-Koltunski",
"Maja Popovic",
"Maarit Koponen"
],
"abstract": null,
"keywords": [],
"raw_extracted_content": "DiHuTra: a Parallel Corpus\nto Analyse Differences between Human Translations\nEkaterina Lapshinova-Koltunski1, Maja Popovi ´c2, Maarit Koponen3\nLanguage Science and Technology1, ADAPT Centre2,\nForeign Languages and Translation Studies3\nSaarland University1, Dublin City University2, University of Eastern Finland3\[email protected], [email protected],\[email protected]\nAbstract\nThe DiHuTra project aimed to design a\ncorpus of parallel human translations of\nthe same source texts by professionals and\nstudents. The resulting corpus consists\nof English news and reviews source texts,\ntheir translations into Russian and Croa-\ntian, and translations of the reviews into\nFinnish. The corpus will be valuable for\nboth studying variation in translation and\nevaluating machine translation (MT) sys-\ntems.\n1 Description\nMany studies have demonstrated that translated\ntexts have different textual features than texts orig-\ninally written in the given language (originals).\nFurthermore, some studies have shown evidence of\nvariation between human translations generated by\ndifferent translators (Rubino et al., 2016; Popovi ´c,\n2020; Kunilovskaya and Lapshinova-Koltunski,\n2020). Nevertheless, the number of such studies\nis still very small and limited to comparable cor-\npora where different translators translated differ-\nent source texts. Therefore, exact comparisons be-\ntween human translations are not possible.\nThe DiHuTra project, formed by Saarland Uni-\nversity, ADAPT Centre and University of Eastern\nFinland in 2021–2022 has aimed to design a paral-\nlel corpus to address these issues. Each source text\noriginally written in English has been translated\ninto three target languages: Croatian, Russian and\nFinnish, by two groups of translators: profession-\nals and students. These parallel human translations\nc\r2022 The authors. This article is licensed under a Creative\nCommons 3.0 licence, no derivative works, attribution, CC-\nBY-ND.will enable a better comparison of various text fea-\ntures as well as impact of automatic MT evaluation\nwhen used as references.\n2 Data sets\nThe source texts consist of two sub-sets of publicly\navailable data sets from two distinct domains:\nAmazon product reviews1contain unique\nproduct reviews from Amazon written in English\nwith overall ratings from 1 to 5, 1 and 2 referring\nto negative, 3 to neutral and 4 and 5 to positive. We\nselected a balanced set of reviews from 14 cate-\ngories (e.g., “Sports and Outdoors”, “Books”, etc.)\nwith an equal number of positive and negative re-\nviews (14 from each of the 14 topics). In total,\nwe included 196 reviews, containing 5.4 sentences\nand 93.2 words on average.\nNews texts were imported from the WMT (2019\nand 2020) shared task2News test corpus. The top-\nics vary between politics, sports, crime, health, etc.\nThe news are longer than reviews, with 9.9 sen-\ntences and 221.7 words on average. The WMT\nshared tasks also contain a set of human transla-\ntions of the English source texts into several lan-\nguages including Russian, however, neither Croat-\nian nor Finnish. We selected only texts which were\noriginally written in English and had professional\ntranslations into Russian. In total, we included 68\nnews articles from different sources.\n3 Translation process\nEach English review was translated into the three\ntarget languages, Croatian, Russian and Finnish,\nby professionals and by students. 
For the news corpus, Russian translations were already available from the WMT shared task and Croatian translations were produced for the purpose of this work. Finnish professional translations were not provided for the news articles. In addition to translations, information about age, gender, experience and the study program (for students) was collected. Translators were asked to keep the sentence alignment (not to merge or to split sentences) and not to use MT. No further restrictions were given to translators. The total number of tokens in the resulting corpus amounts to 180,584.

1 http://jmcauley.ucsd.edu/data/amazon/
2 http://www.statmt.org/wmt20/translation-task.html

4 Corpus statistics

The first statistics on shallow features, in terms of running words and vocabulary in the sources and the three target languages, are shown in Table 1. We also estimated lexical richness in terms of the ratio between vocabulary and total number of words and Yule's K coefficient. Both values indicate how rich the vocabulary is in the given text, the richness being proportional to the vocabulary/words ratio (a higher value indicates a richer vocabulary) and inversely proportional to Yule's K (a lower value indicates a richer vocabulary).

Columns: en (news, reviews); hr news (prof, stud); hr reviews (prof, stud); ru news (prof, stud); ru reviews (prof, stud); fi reviews (prof, stud).
(a)  17,186  15,236 | 16,662  16,632 | 14,003  13,940 | 17,469  17,054 | 14,233  14,247 | 11,709  12,213
(b)   4,138   3,155 |  6,009   5,975 |  4,359   4,446 |  6,079   6,076 |  4,417   4,523 |  4,612   4,664
(c)   0.220   0.178 |  0.341   0.340 |  0.282   0.288 |  0.340   0.349 |  0.289   0.300 |  0.360   0.350
(d)    98.2   101.7 |   86.2    83.8 |   92.1    88.2 |  122.9   116.7 |  126.3   124.1 |  109.8   112.5

Table 1: Text statistics and lexical variety: (a) total number of running words, (b) vocabulary size, (c) ratio between vocabulary and words (higher value = richer vocabulary), (d) Yule's K coefficient (lower value = richer vocabulary).

The corpus is valuable for studying variation in translation as it allows direct comparisons between human translations of the same source texts. Our preliminary analyses based on the shallow text statistics and matching/distance measures indicate that students used shorter sentences but richer vocabulary. To better understand these differences, we plan to carry out detailed analyses on the annotated data (we have tokenised, lemmatised, part-of-speech tagged and parsed the data using universal dependencies). This resource is also valuable for the evaluation of MT systems for the three language pairs. The Croatian (and probably Russian) part of the user reviews will be used in the WMT shared task in 2022.3 We believe that this resource will help us to understand and improve quality issues in both human and machine translation.

3 https://machinetranslate.org/wmt22

The corpus is available via CLARIN4. The project also has a GitHub repository5 which contains the data and some additional information. The details about the corpus can be found in (Lapshinova-Koltunski et al., 2022).

5 Acknowledgments

The creation of the corpus was supported through the EAMT sponsorship programme (2021) and by the ADAPT Centre. The ADAPT Centre is funded through the SFI Research Centres Programme and co-funded under the ERDF through Grant 13/RC/2106. The Finnish subcorpus was supported by a Kopiosto grant awarded by the Finnish Association of Translators and Interpreters. We thank the translators in Volgograd, Zagreb, Rijeka and Finland.
In particular, we thank Aleksandr\nBesedin from V olSU for coordinating the work of\nthe Russian translators.\nReferences\nKunilovskaya, M. and Lapshinova-Koltunski, E.\n(2020). Lexicogrammatic translationese across two\ntargets and competence levels. In Proceedings of\nLREC 2020 , pages 4102–4112, Marseille, France,\nMay.\nLapshinova-Koltunski, E., Popovi ´c, M., and Koponen,\nM. (2022). Dihutra: a parallel corpus to analyse dif-\nferences between human translations. In Proceed-\nings of LREC 2022 , Marseille, France, June.\nPopovi ´c, M. (2020). On the differences between hu-\nman translations. In Proceedings of the EAMT 2020 ,\npages 365–374, Lisboa, Portugal, November.\nRubino, R., Lapshinova-Koltunski, E., and van Gen-\nabith, J. (2016). Information density and quality\nestimation features as translationese indicators for\nhuman translation classification. In Proceedings of\nNAACL-HLT 2016 , pages 960–970, San Diego, Cal-\nifornia, June.\n4https://fedora.clarin-d.uni-saarland.de/\ndihutra/index.html\n5https://github.com/katjakaterina/dihutra",
"main_paper_content": null
}
|
{
"decision": "Unknown",
"reviews": []
}
| 0 | 0 |
[] |
[] |
{
"id": "IzYyx31IRDAY",
"year": null,
"venue": "EAMT 2012",
"pdf_link": "https://aclanthology.org/2012.eamt-1.65.pdf",
"forum_link": "https://openreview.net/forum?id=IzYyx31IRDAY",
"arxiv_id": null,
"doi": null
}
|
{
"title": "Learning Machine Translation from In-domain and Out-of-domain Data",
"authors": [
"Marco Turchi",
"Cyril Goutte",
"Nello Cristianini"
],
"abstract": "Marco Turchi, Cyril Goutte, Nello Cristianini. Proceedings of the 16th Annual conference of the European Association for Machine Translation. 2012.",
"keywords": [],
"raw_extracted_content": "Learning Machine Translation from In-domain and Out-of-domain Data\nMarco Turchi\nEuropean Commission JRC,\nIPSC - GlobeSec\nVia Fermi 2749,\n21020 Ispra (V A), Italy\[email protected] Goutte\nInteractive Language Tech.,\nNational Research Council Canada,\n283 Boulevard Alexandre-Tach ´e,\nGatineau QC J8X3X7, Canada\[email protected] Cristianini\nIntelligent Systems Lab.,\nUniversity of Bristol,\nMVB, Woodland Rd,\nBS8-1 UB, Bristol, UK\[email protected]\nAbstract\nThe performance of Phrase-Based Statis-\ntical Machine Translation (PBSMT) sys-\ntems mostly depends on training data.\nMany papers have investigated how to cre-\nate new resources in order to increase\nthe size of the training corpus in an at-\ntempt to improve PBSMT performance.\nIn this work, we analyse and characterize\nthe way in which the in-domain and out-\nof-domain performance of PBSMT is im-\npacted when the amount of training data\nincreases. Two different PBSMT systems,\nMoses and Portage, two of the largest par-\nallel corpora, Giga (French-English) and\nUN (Chinese-English) datasets and several\nin- and out-of-domain test sets were used\nto build high quality learning curves show-\ning consistent logarithmic growth in per-\nformance. These results are stable across\nlanguage pairs, PBSMT systems and do-\nmains. We also analyse the respective im-\npact of additional training data for esti-\nmating the language and translation mod-\nels. Our proposed model approximates\nlearning curves very well and indicates the\ntranslation model contributes about 30%\nmore to the performance gain than the lan-\nguage model.\n1 Introduction\nWith the growing availability of bilingual parallel\ncorpora, the past two decades saw the development\nand widespread adoption of statistical machine\ntranslation (SMT) models. Given a source (“for-\neign”) language sentence fand a target (“english”)\nc/circlecopyrt2012 European Association for Machine Translation.language translation e, the relationship between e\nandfis modelled using a statistical or probabilis-\ntic model which is estimated from a large amount\nof textual data, comprising bilingual and monolin-\ngual corpora. The most popular class of SMT sys-\ntems is Phrase-Based SMT (PBSMT, (Koehn et al.,\n2003)).\nIn this paper, we are concerned with analyzing\nand characterizing the way in which the perfor-\nmance of PBSMT models evolves with increasing\namounts of training data. In the SMT community,\nit is a common belief that learning curves follow\nlogarithmic laws. However, there are few large-\nscale systematic analyses of the growth rate of the\nPBSMT performance. Early work (Al-Onaizan\net al., 1999) used a relatively small training set\nand perplexity as evaluation metric. (Koehn et\nal., 2003) and (Suresh, 2010) show that BLEU\nscore has a log-linear dependency with training\ncorpus size, but this is limited to 350k training\nsentence pairs. Learning curves were also pre-\nsented in order to motivate the use of active learn-\ning for MT (Bloodgood and Callison-Burch, 2010;\nHaffari et al., 2011). They attempt to address\nthe challenge of “diminishing returns” in learn-\ning MT, although this is again done with small\ntraining corpora (<90k sentence pairs), and, on a\nlog-scale, performance seems again to increase lin-\nearly. 
(Brants et al., 2007) produced a large-scale study, but focused on the language model training only, with billions of (monolingual) tokens.

The first complete and systematic analysis of PBSMT learning curves was obtained by (Turchi et al., 2008) using the Spanish-English Europarl, and recently extended to larger training data and more systems by (Turchi et al., 2011). In their work, accurate learning curves obtained over a large range of data sizes confirm that performance grows linearly in the log domain.

The reason why relatively few systematic studies have been reported may be that producing accurate learning curves up to large data sizes with state-of-the-art systems requires the use of high performance computing in a carefully set up environment. This may seem dispensable when typical SMT research is usually focused on maximizing the performance that can be extracted from a given data set, rather than analysing how this performance evolves. However, we believe that the analysis and quantification of the way machine translation systems learn from data are important steps to identify critical situations which affect the overall translation performance. We also wish to characterize PBSMT performance up to data sizes more typical of current large-scale bilingual corpora.

In the following we pursue three purposes:

1. We confirm, in a systematic way, previous findings that PBSMT performance gains constant improvements for each doubling of the data. This holds across systems, language pairs and over a large range of data sizes.

2. We show that, somewhat surprisingly, this extends to out-of-domain data, although the growth is weaker in that case.

3. We analyse and quantify the relative importance of training data in language and translation model training, and show that the latter contributes about 30% more to the gains in performance.

In contrast with previous work, we build our learning curves using two of the largest available parallel training sets: the French-English Giga corpus and the Chinese-English UN corpus. In addition to being large corpora, these also cover two very distinct language pairs. We also use two PBSMT systems: Moses (Koehn et al., 2007) and Portage (Ueffing et al., 2007). Finally, we analyze in- and out-of-domain learning curves in order to better understand and investigate the growth rate.

The following section gives a quick overview of the models and systems we used in our experiments. We then briefly describe the experimental settings and data we used. Section 4 shows and analyzes the learning curves we obtained on French-English and on Chinese-English, and Section 5 presents our results on the relative importance of LM and TM in the performance increase.

2 Translation Models and Systems

The standard phrase-based machine translation systems which we analyse here rely on a log-linear model and a set of baseline feature functions. The translation $\hat{e}$ of a source sentence $f$ is obtained by:

$\hat{e}(f) = \operatorname{argmax}_{e} \sum_{i} \lambda_i h_i(e, f)$

where the $h_i(e, f)$ are feature functions involving both the source and target sentences, and the $\lambda_i$ are the weights of those feature functions.
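As a toy illustration of this log-linear combination (not the actual Moses or Portage code, and with invented feature values and weights), ranking competing candidates amounts to taking the argmax of the weighted feature sum:

```python
# Toy sketch: ranking translation candidates with a log-linear model.
# Each candidate carries a handful of precomputed feature values h_i;
# the chosen translation maximizes sum_i lambda_i * h_i(e, f).
weights = {"tm": 1.0, "lm": 0.6, "dist": 0.3, "wp": 0.1}   # the lambda_i

candidates = [
    {"text": "the economy is slowing", "tm": -2.1, "lm": -9.4,  "dist": -1.0, "wp": -4.0},
    {"text": "economy the slows",      "tm": -1.8, "lm": -14.2, "dist": -3.0, "wp": -3.0},
]

def loglinear_score(cand):
    return sum(weights[name] * cand[name] for name in weights)

best = max(candidates, key=loglinear_score)
print(best["text"], loglinear_score(best))
```

In a real decoder these scores are of course computed incrementally during search rather than over finished hypotheses.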
Typical examples of feature functions that compose a basic phrase-based MT system are:

• phrase translation feature, e.g.: $h_T(e,f) = \sum_k \log p(f_k \mid e_k)$;

• language model feature, e.g.: $h_L(e,f) = \sum_j \log p(w_j \mid w_{j-1}, \ldots, w_1)$;

• distortion feature, e.g.: $h_D(e,f) = \sum_k \lvert \mathrm{start}(f_k) - \mathrm{end}(f_{k-1}) - 1 \rvert$;

• word penalty and/or phrase penalty features.

where $e_k$ and $f_k$ are contiguous subsequences of words in the source and target sentences and $w_j$ are target words.

Parameter estimation is crucial for both the translation and language model features. Conditional probabilities are estimated from a large training corpus using empirical counts and various smoothing strategies. In addition, the weights $\lambda_i$ are also estimated from a (usually disjoint) corpus of source and target sentence pairs. The size and composition of the training data will therefore have an influence on the quality of the predictions $\hat{e}$ through the estimation of both the log-linear parameters and the feature functions.

Note that alternate models such as hierarchical (Chiang, 2007) or syntax based (Zollman and Venugopal, 2006) have been developed and could also be studied. However their use on the large scale necessary for creating accurate learning curves would require solving a number of practical issues and we focus instead on the straight PBSMT approach, which has been shown in recent MT evaluations (Callison-Burch et al., 2009; Callison-Burch et al., 2011) to offer competitive performance.

2.1 PBSMT Software

Several software packages are available for training PBSMT systems. In this work, we use Moses (Koehn et al., 2007) and Portage (Ueffing et al., 2007), two state-of-the-art systems capable of learning translation tables, language models and decoding parameters from one or several parallel corpora. Moses is a complete open-source phrase-based translation toolkit available for academic purposes, while Portage is a similar package, available to partners of the National Research Council Canada.

Given a parallel training corpus, both perform basic preprocessing (tokenization, lowercasing, etc.) if necessary, and build the various components of the model. Both use standard external tools for training the language model, such as SRILM (Stolcke, 2002). Moses uses GIZA++ (Och and Ney, 2003) for word alignments, while Portage uses an in-house IBM model and HMM implementation. The parameters of the log-linear models are tuned using minimum error rate training (MERT, (Och, 2003)).

Earlier experiments performed on the Europarl corpus with both systems showed (Turchi et al., 2011) that despite small differences in observed performance, both systems produce very similar learning curves.

3 Experimental Setting

3.1 Corpora

We experiment with large corpora in two language pairs: French-English and Chinese-English.

For French-English, we use the Giga corpus (Callison-Burch et al., 2009) to provide the training, development and one in-domain test set.
As out-of-domain test sets, we use two different samples from the EMEA corpus (Tiedemann, 2009), which contains parallel documents from the European Medicines Agency, and two News test sets from the 2009 (Callison-Burch et al., 2009) and 2011 (Callison-Burch et al., 2011) editions of the Workshop on Statistical Machine Translation, containing news articles drawn from a variety of sources and languages in different periods and translated by human translators.

For Chinese-English, we use various parallel corpora obtained from the Linguistic Data Consortium for the NIST evaluations. The training, development and in-domain test sets are sampled from the United Nations corpus (UN, LDC2004E12). As out-of-domain test sets, we used a sample from the Hong-Kong Hansard (HKH, LDC2000T50), a corpus of Chinese News translations (LDC2005T06) and the NIST 2008 Chinese evaluation set (LDC2009E09). Basic statistics are given in Table 1.

src  Training Set  Sentences  Words
fr   Giga          18.276 M   482,744k
ch   UN            4.968 M    163,960k
     Dev. Set
fr   Giga          1,000      62k
ch   UN            2,000      32k
     Test Set
fr   Giga          3,000      109k
fr   Emea          3,051      45.4k
fr   Emea2         3,051      46.7k
fr   News 2009     2,489      70.7k
fr   News 2011     3,030      85.1k
ch   UN            10,000     332k
ch   HKH           5,000      153k
ch   NIST          1,357      42k
ch   News          10,317     320k

Table 1: Number of sentences and words (source side) for the training, dev and various test sets.

In order to analyse the way MT performance evolves with increasing data, we subsample (without replacement) the training sets at various sizes, averaging performance (estimated by BLEU, cf. Section 3.3) over several samples. Learning curves are then obtained by plotting the average BLEU score, with error bars, as training data size increases. The relatively large amount of sentences in most test sets will allow us to reduce the uncertainty on the estimated test error, therefore producing smaller error bars.

For the French-English data, we followed the methodology proposed in (Turchi et al., 2008) and sampled 20 different sizes representing 5%, 10%, etc. of the original training corpus. Due to the large size of the corpus, only three random subsets are sampled at each size. For the Chinese-English dataset, we sampled at sizes corresponding to one half, one quarter, etc. down to 1/512th (∼0.2%) of the full size. At each size we produced 10 random samples. Each random subsample produces a model (cf. below) which is used to translate the various test sets. The learning curves will therefore cover the range from around 900 thousand to 18.3 million sentences for French-English, and from around 10 thousand to 5 million sentences for Chinese-English.

Note that the corpora, in addition to differing in language pair, also differ in domain and homogeneity. The UN data contains only material from the United Nations, covering a wide range of themes, but fairly homogeneous in terms of style and genre. The Giga corpus, on the other hand, was obtained through a targeted web crawl of bilingual web sites from the Canadian government, the European Union, the United Nations, and other international organizations.
In addition\nto covering a wide range of themes, they also con-\ntain documents with different styles and genres.\nMoreover, we estimated in an independent study\nthat the rate of misaligned sentence pairs in the\nGiga corpus is as high as 13%.\nThe choice of source languages is driven by the\ndesire to analyze two very different languages and\nby the scarcity of large publicly available bilingual\ncorpora, especially outside European languages.\nUN data is also available in Russian or Arabic, but\nby definition would be the same domain and ho-\nmogeneity as the Chinese-English corpus.\n3.2 PBSMT System Training\nFor both systems, Portage and Moses, we used the\nbasic configuration and features: phrase extraction\nis done by aligning the corpus at the word level\n(IBM models 1, 2, 3 and 4 for Moses, HMM and\nIBM2 models for Portage), the parameters of the\nlog-linear model are set using an implementation\nof Och’s MERT algorithm (Och, 2003), n-gram\nlanguage modelling uses Kneser-Ney smoothing\n(3-gram using SRILM for Moses and 4-gram for\nPortage) and the maximum phrase length is 7 to-\nkens. In Portage, phrase pairs were filtered so that\nthe top 30 translations for each source phrase were\nretained. In both systems, the MERT algorithm\nwas independently run on each sampled training\nset for each experiment.\nNote that we expect that there will be differ-\nences in the quality of the translation depending on\nthe source language. However, we are not so much\ninterested in the actual translation performance as\nin the way this performance evolves with increas-\ning data under various conditions.\n3.3 Evaluation metrics\nWe report performance in terms of BLEU score\n(Papineni et al., 2001), the well accepted andwidespread automatic MT metric. We are well\naware that maximizing BLEU may neither be nec-\nessary for, nor guarantee good translation perfor-\nmance, and that automatic MT metrics may not\ntell the whole story as far as translation quality is\nconcerned. However, our systematic study aims at\ncharacterizing the behaviour of PBSMT systems\nthat are built by maximizing such metrics, and this\nmaximization is part of the learning system we an-\nalyze. Deriving learning curves for human evalu-\nations of translation quality would be interesting,\nbut is clearly impractical at his point.\n4 Learning Curve Analysis\nWe now present the results obtained under the gen-\neral framework outlined above.\nWe stress that in these experiments, we focus on\nthe growth rate of the learning curves. In particu-\nlar we are interested in 1) confirming that learning\ncurves have logarithmic growth, and 2) possible\ndifferences between domains, languages and sys-\ntems. A common, but poorly supported belief in\nPBSMT is that each doubling of the data yields a\nmore or less constant increase in performance. In\norder to analyze and support this belief, we show\nall learning curves on a log scale, where we can\ncheck if the curve has a linear behaviour.\nNote that sampling without replacement results\nin an increasing overlap between samples as their\nsizes grow. The size of the error bars therefore de-\ncreases as the training set size grows, because the\ntraining sets, and therefore the resulting models,\nare not independent. 
This must be kept in mind, although we still believe that the presence of error bars helps to better understand the stability of the MT system's performance.

The resulting learning curves are shown in Figures 1 and 2 for the French-English and Chinese-English data, respectively. The plots show the performance, averaged over samples (marks, connected with dotted lines), the error bars (vertical lines) indicating the natural variance in the performance, and a least-squares linear fit of these points (dashed or solid line). It is very clear that the learning curves are almost exactly linear on the log scale in most cases (Chinese-English and most French-English curves). The EMEA 2 and News 2009 curves display a worse fit, but the empirical results are within error bars of the linear fit, showing that the deviation from linearity is not statistically significant. The instability in these last two curves may actually be due to the fact mentioned earlier that the dependency between the performance estimates increases for large training sizes, which may lead to an increasing bias in the average.

Figure 1: French-English learning curves obtained using the Giga corpus for training Moses on five test sets: one in-domain and four out-of-domain. [Plot of BLEU against the number of training sentences, on a log scale, with curves for the Giga, Emea, Emea2 and the two News test sets.]

Figure 2: Chinese-English learning curves obtained using the UN corpus for training Portage on four test sets: one in-domain and three out-of-domain. [Plot of BLEU against the number of training sentences, on a log scale, with curves for the UN, HKH, News and Nist08 test sets.]

These results confirm the findings of (Turchi et al., 2008) and extend them to more language pairs and much larger data sizes. These experiments support the following claims:

• The increase in performance for PBSMT systems is essentially constant for each doubling of the data, over a wide range of training data sizes. Note that the growth does not seem to slow down as we near 20M training sentence pairs.

• A corollary of that first claim is that minor, even statistically significant increases in performance due to model "tweaking" are likely to be dwarfed by moderate increases in data sizes. For our Chinese system, for example, a 10% increase in data produces a 0.43 BLEU gain.

• On a linear scale, however, the addition of massive amounts of data from the same domain will result in diminishing improvements ("diminishing returns") in the performance after an initial fast growth (Turchi et al., 2008; Bloodgood and Callison-Burch, 2010).

• Interestingly, the general shape of the learning curves is essentially the same across different language pairs, different PBSMT systems, and also over different sources of test data (in-domain or out-of-domain).

• In particular, although the performance on out-of-domain data may greatly suffer (cf. Figure 2), the rate of increase is still linear in the log domain, up to large data sizes.

In order to quantify these findings, we estimate the gain per each doubling of the training set size by fitting a simple linear model on the learning curves in the log domain. For the Chinese-English data, each doubling of the data yields a gain of around 2.1 BLEU points on the in-domain data, and only 0.6 on the out-of-domain test sets.
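As a rough sketch of this fitting step (with invented numbers, not the paper's measurements), the per-doubling gain is simply the slope of a least-squares line fitted to (log2 size, BLEU) points:

```python
# Illustrative sketch: estimate the BLEU gain per doubling of training data
# by fitting a line to (log2 size, BLEU) points. Values are made up.
import numpy as np

sizes = np.array([10_000, 20_000, 40_000, 80_000, 160_000])   # sentence pairs
bleu  = np.array([11.0, 13.2, 15.1, 17.3, 19.2])               # averaged over samples

slope, intercept = np.polyfit(np.log2(sizes), bleu, deg=1)
print(f"estimated gain per doubling: {slope:.2f} BLEU points")
```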
For\nthe French-English data, the BLEU gain per train-\ning data doubling is around 1.5 points for the in-\ndomain data, 1.1 for the EMEA test sets and 0.6\nfor the News test sets.\nOne may wonder why the out-of-domain EMEA\ntest sets yield such high learning curves. Although\nthe EMEA data comes from a European agency,\nwe have verified that the sentences it contains are\nnot contained in the Giga corpus. However, it turns\nout that the EMEA data is actually fairly easy to\ntranslate. The language is relatively constrained\nand repetitive, sentences are much shorter (on av-\nerage ∼15 words against more than 28 for the other\n309\ncorpora), and the number of out-of-vocabulary\nwords much lower than in the other test sets.\nBy contrast, all out-of-domain learning curves\non Chinese-English are much lower than the in-\ndomain curves (we have corroborated this with a\ndozen different test sets taken from various sources\navailable for NIST evaluations, but omitted here\nfor clarity). We believe this reflects differences be-\ntween the sources of our training data. The UN\ncorpus covers a number of topics but is very ho-\nmogeneous and rather limited in genre. By con-\ntrast, the Giga corpus contains a wide range of doc-\numents covering many themes and genres. As a\nconsequence, any test set that does not come from\nthe UN data is distinctively different and “far” out-\nof-domain. On the other hand, it is not inconceiv-\nable that even for French text that does not come\nfrom the same sources, the larger and more diverse\nGiga corpus provides some measure of overlap in\ntopics and genre.\n5 Relative Importance of TM and LM\nIn the previous Section, experiments have been run\nusing the same training set size for language and\ntranslation models. However, there is a large dif-\nference in the cost of training data for language\nand translation models. The former can be trained\nusing monolingual data only while the latter re-\nquires bilingual texts. In recent years, several\nparallel corpora have been produced, e.g. Eu-\nroparl (Koehn, 2005), JRC Acquis (Steinberger et\nal., 2006), and others, but they are not comparable\nto the amount of freely available monolingual data.\n(Brants et al., 2007) have shown that perfor-\nmance improves linearly with the log of the num-\nber of tokens in the language model training set\nwhen this quantity is huge (from billions to tril-\nlions of tokens). In this section, we are interested\nin understanding the trade-off between the train-\ning data size used to build language and transla-\ntion models, as well as in how performance is af-\nfected by that difference. We propose a mathemat-\nical model to estimate the variation in BLEU score\naccording to the size of the training data used by\nthe language model vs. that use by the translation\nmodel. The previous section shows that the over-\nall performance of a PBSMT system grows in the\nlogarithm of the training data size. 
We therefore modelled this relation in the following way:

$\mathrm{BLEU}(d_{LM}, d_{TM}) = \alpha_{LM} \log_2(d_{LM}) + \alpha_{TM} \log_2(d_{TM}) + \epsilon$

where $d_{LM}$ is the amount of training data used to build the language model and $d_{TM}$ is the amount of training data used to build the translation model. $\alpha_{LM}$ and $\alpha_{TM}$ are weighting factors that identify the contribution of language and translation training data to the BLEU score, and $\epsilon$ is the residual. Note that when $d_{LM} = d_{TM}$, we recover a simple logarithmic relationship between performance and data size, as illustrated in the previous section.

In order to evaluate the relation between the amount of training data used to build language and translation models we estimate $\alpha_{LM}$ and $\alpha_{TM}$ from data. We focus on the French-English data, and use the training data subsets at every 10% of the full data size (10%, 20%, etc.), using the same development and test sets as before. One instance of a PBSMT model is learned for each combination of language and translation training data sizes, and we compute the resulting BLEU on the test sets. We estimate the parameters $\alpha_{LM}$ and $\alpha_{TM}$ using multivariate linear regression based on least squares (Draper and Smith, 1981), with the BLEU scores as response variables and the log values of the LM and TM training sizes as explanatory variables. This is done for three French-English test sets: the in-domain Giga, Emea and News 2009. The Emea2 and News 2011 test sets were qualitatively very similar.

We estimated the weighting factors using all the data. The results in Table 2 empirically confirm the common belief that adding data to the translation model is more important than to the language model ($\alpha_{TM} > \alpha_{LM}$). The values of $\alpha_{LM}$ and $\alpha_{TM}$ vary across the test sets, and correspond to an increase of 1 to 1.3 BLEU points per doubling of the training data for the LM and 1.2 to 1.8 BLEU points per doubling for the TM. However, the ratio is rather stable, indicating that the relative importance of the TM w.r.t. the LM is stable across domains. Not surprisingly, the more similar the test set is to the training data, the larger is the BLEU point growth. Our results are qualitatively compatible with the observations reported in a tutorial by (Och, 2005), although the increments in BLEU with each doubling of the training data size are reported as 0.5 and 2.5 points for the language and translation models, respectively, in the context of Arabic-English translation. The ratio we observed in our experiments is a lot more favourable to the language model.

Test Set    α_LM    α_TM    α_TM/α_LM
Giga        0.0133  0.0182  1.368
Emea        0.0134  0.0168  1.2563
News 2009   0.0097  0.0122  1.2532

Table 2: Empirical estimation of the contributions $\alpha_{LM}$ and $\alpha_{TM}$ of the LM and TM, respectively ($\epsilon$ is smaller than $1 \times 10^{-4}$), in BLEU per log2 of size. Experiments have been performed independently on the three test sets.

In order to validate this finding, we performed two simple experiments where we added a fairly large, 10 million sentence corpus of monolingual data (not included in the Giga corpus) to our LM training data, starting with around 5 million sentences of bilingual data from the Giga corpus.
This\nproduced a 1.79 BLEU increase in performance\non News 2009 and 1.38 BLEU increase on News\n2011, which is roughly consistent with a tripling\nin LM training data size according to the rate esti-\nmated in Table 2 (0.97 ×log23≈1.54).\n6 Discussion\nAlthough limited to two language pairs, our results\ninvestigate the behaviour of PBSMT as a learn-\ning system over a range of different conditions:\nvery different language pairs, in-domain and out-\nof-domain data, differing level of corpus homo-\ngeneity. etc. We emphasize that obtaining system-\natic and accurate learning curves requires a signif-\nicant effort, even with an high performance com-\nputing architecture (Figure 2 requires translating\nmore than 3 million test sentences with 91 mod-\nels).\nThe learning curves obtained here suggest that,\non an absolute (linear) scale, performance gains\nper fixed amount of additional data decrease. The\ndiminishing improvements in performance after an\nearly fast growth was also reported by (Uszkoreit\net al., 2010) who mined the Web to extract very\nlarge sets of parallel documents. Starting with two\ncorpora (French/Spanish to English) similar in di-\nmensions to the Giga training set and using the\nNews 2009 test sets, they report that adding more\nthan 4,800 M words from a different domain re-\nsulted in relative small performance gains (< 2\nBLEU points).\nOn a log-scale, on the other hand, there is no\nsign that performance gains decrease as we keep\ndoubling the training corpus size, at least up to\n20M sentence pairs. Note that although usualMTmetrics have natural bounds (0 for error-based\nmetrics such as TER, 1 for BLEU), this has little\npractical relevance to the results presented here.\nIndeed, assuming we could extrapolate the very\nstable growth rates observed here, taking the per-\nformance of the out-of-domain HKH test set to\nwhere the in-domain UN data starts (for 10k sen-\ntence pairs only) would require close to 180 billion\nsentence pairs. For all practical purpose, we would\nrun out of data long before we reached even half of\nthe theoretical maximum BLEU score.\nFinally, the analysis of the relative importance\nof TM and LM estimation shows that the trans-\nlation model contributes about 30% more to the\nincrease in performance than the language model.\nConsidering the crucial role of the phrase table in\nthe translation process, this contribution is maybe\nless than one would expected. This means that the\nmassive addition of training data to the language\nmodel has a substantial impact in terms of perfor-\nmance, as shown by (Brants et al., 2007). It is in-\nteresting that the ratio of αTMandαLMseems sta-\nble across different domains. The relation between\nthe translation and language model contribution to\nthe final BLEU score does not change whether we\ntranslate in- or out-of-domain data.\n7 Conclusion\nUsing state-of-the-art Phrase-Based Statistical\nMachine Translation packages and large parallel\ncorpora, we derived very accurate learning curves\nfor a number of language pairs and domains. Our\nresults suggest that performance, as measured by\nBLEU, increases by a constant factor for each\ndoubling of the data. Although that factor varies\ndepending on corpus and language pair, this re-\nsult seems consistent over all experimental con-\nditions we tried. 
Our findings confirm the results\nreported for example by (Brants et al., 2007) and\n(Och, 2005), and extend and complete the findings\nof (Turchi et al., 2008).\nWe propose a study of how performance is influ-\nenced by difference sizes of data used for training\nthe language and translation models. Our model\ngives more importance to the translation model\nthan the language model every doubling of train-\ning data, but we are lot more favourable to the lan-\nguage model compared to other reported results in\nthe literature.\nEven if we do not currently provide any result\nthat is immediately actionable to improve current\n311\nPBSMT performance, we believe it is important\nto analyse and quantify the way Machine Transla-\ntion systems learn. In addition, the markedly dif-\nferent rates of performance increase for in-domain\nand out-of-domain data may provide a clue to bet-\nter characterise the suitability of a MT model to\ntranslate a given test set. Investigating features\nthat help us differentiate out-of-domain from in-\ndomain data may prove very useful to improve\npractical performance of PBSMT systems.\nReferences\nY. Al-Onaizan and J. Curin and M. Jahr and K. Knight\net al. 1999. Statistical Machine Translation: Final\nReport. JHU 1999 Summer Workshop on Language\nEngineering, CSLP.\nM. Bloodgood and C. Callison-Burch. 2010. Bucking\nthe trend: large-scale cost-focused active learning\nfor statistical machine translation. 48th Meeting of\nthe ACL, pp. 854–864.\nT. Brants and A. C. Popat and P. Xu and F. J. Och and\nJ. Dean. 2007. Large Language Models in Machine\nTranslation. Proc. EMNLP-CoNLL 2007, pp. 858–\n867.\nC. Callison-Burch, and P. Koehn and C. Monz and J.\nSchroeder. 2009. Findings of the 2009 Workshop on\nStatistical Machine Translation. Fourth Workshop\non Statistical Machine Translation, pp. 1–28.\nC. Callison-Burch, and P. Koehn and C. Monz and O.\nZaidan. 2011. Findings of the 2011 Workshop on\nStatistical Machine Translation. Sixth Workshop on\nStatistical Machine Translation, pp. 22–64.\nD. Chiang. 2007. Hierarchical Phrase-Based Transla-\ntion. Computational Linguistics, 33(2):201–228.\nN.R. Draper and H. Smith. 1981. Applied regression\nanalysis. Wiley, New York, USA.\nG. Haffari and M. Roy and A. Sarkar. 2009. Ac-\ntive Learning for Statistical Phrase-based Machine\nTranslation. Proc. HLT-NAACL, pp. 415–423.\nP. Koehn. 2005. Europarl: A Parallel Corpus for Sta-\ntistical Machine Translation. Proc. MT-Summit X,\npp. 79-86.\nP. Koehn and H. Hoang and A. Birch and C. Callison-\nBurch et al. 2007. Moses: Open source toolkit for\nstatistical machine translation. 45th Meeting of the\nACL demo, pp. 177–180.\nP. Koehn and F. J. Och and D. Marcu. 2003. Statistical\nphrase-based translation. Proc. NAACL-HLT, pp.\n48–54. Edmonton, Canada.\nF. J. Och 2005. Statistical machine translation: Foun-\ndations and recent advances. Proc. MT-Summit X\ntutorial.F. J. Och 2003. Minimum error rate training in statis-\ntical machine translation. 41st Meeting of the ACL,\npp. 160–167.\nF. J. Och and H. Ney 2003. A Systematic Compari-\nson of Various Statistical Alignment Models. Com-\nputational Linguistics, 29(1): pages 19–51. Sapporo,\nJapan.\nF. J. Och and H. Ney 2002. Discriminative training\nand maximum entropy models for statistical machine\ntranslation. 40th Meeting of the ACL, pp. 295–302.\nK. Papineni and S. Roukos and T. Ward and W. J. Zhu\n2002. BLEU: a method for automatic evaluation of\nmachine translation. 40th Meeting of the ACL, pp.\n311–318.\nA. Stolcke. 2002. 
SRILM – An extensible language\nmodeling toolkit. Intl. Conf. Spoken Language Pro-\ncessing.\nR. Steinberger and B. Pouliquen and A. Widiger and\nC. Ignat and T. Erjavec and D. Tufis ¸ and D. Varga.\n2006. The JRC-Acquis: A multilingual aligned par-\nallel corpus with 20+ languages. 5th LREC, pp.\n2142–2147.\nB. Suresh. 2010 Inclusion of large input corpora in\nStatistical Machine Translation. Technical report,\nStanford University.\nJ. Tiedemann. 2009. News from OPUS—A Collection\nof Multilingual Parallel Corpora with Tools and In-\nterfaces. RANLP (vol V), pp. 237-248.\nM. Turchi and T. DeBie and N. Cristianini. 2008.\nLearning Performance of a Machine Translation\nSystem: a Statistical and Computational Analysis.\nThird Workshop on Statistical Machine Translation,\npp. 35–43.\nM. Turchi and T. DeBie and C. Goutte and N. Cristian-\nini. 2012. Learning to Translate: a statistical and\ncomputational analysis. Advances in Artificial In-\ntelligence, in press.\nN. Ueffing and M. Simard and S. Larkin and J. Howard\nJohnson. 2007. NRC’s PORTAGE system for WMT.\nSecond Workshop on Statistical Machine Transla-\ntion, pp. 185–188.\nJ. Uszkoreit and J.M. Ponte and A.C. Popat and M. Du-\nbiner. 2010. Large scale parallel document mining\nfor machine translation. 23rd COLING, pp. 1101–\n1109.\nA. Zollman and A. Venugopal. 2006. Syntax aug-\nmented machine translation via chart parsing. Proc.\nNAACL Workshop on Machine Translation.\nR. Zens and F. J.Och and H. Ney. 2002. Phrase-Based\nStatistical Machine Translation. Proc. KI ’02: Ad-\nvances in Artificial Intelligence, pp. 18–32.\n312",
"main_paper_content": null
}
|
{
"decision": "Unknown",
"reviews": []
}
| 0 | 0 |
[] |
[] |
{
"id": "vPCUy_0wT8-",
"year": null,
"venue": "EAMT 2020",
"pdf_link": "https://aclanthology.org/2020.eamt-1.25.pdf",
"forum_link": "https://openreview.net/forum?id=vPCUy_0wT8-",
"arxiv_id": null,
"doi": null
}
|
{
"title": "Automatic Translation for Multiple NLP tasks: a Multi-task Approach to Machine-oriented NMT Adaptation",
"authors": [
"Amirhossein Tebbifakhr",
"Matteo Negri",
"Marco Turchi"
],
"abstract": "Amirhossein Tebbifakhr, Matteo Negri, Marco Turchi. Proceedings of the 22nd Annual Conference of the European Association for Machine Translation. 2020.",
"keywords": [],
"raw_extracted_content": "Automatic Translation for Multiple NLP tasks:\na Multi-task Approach to Machine-oriented NMT Adaptation\nAmirhossein Tebbifakhr\nFBK, Trento, Italy\nUniversity of Trento, Italy\[email protected] Negri\nFBK, Trento, Italy\[email protected] Turchi\nFBK, Trento, Italy\[email protected]\nAbstract\nAlthough machine translation (MT) tra-\nditionally pursues “human-oriented” ob-\njectives, humans are not the only possi-\nble consumers of MT output. For in-\nstance, when automatic translations are\nused to feed downstream Natural Lan-\nguage Processing (NLP) components in\ncross-lingual settings, the translated texts\nshould ideally pursue “machine-oriented”\nobjectives that maximize the performance\nof these components. Tebbifakhr et al.\n(2019) recently proposed a reinforcement\nlearning approach to adapt a generic neu-\nral MT (NMT) system by exploiting the re-\nward from a downstream sentiment classi-\nfier. But what if the downstream NLP tasks\nto serve are more than one? How to avoid\nthe costs of adapting and maintaining one\ndedicated NMT system for each task? We\naddress this problem by proposing a multi-\ntask approach to machine-oriented NMT\nadaptation, which is capable to serve mul-\ntiple downstream tasks with a single sys-\ntem. Through experiments with Spanish\nand Italian data covering three different\ntasks, we show that our approach can out-\nperform a generic NMT system, and com-\npete with single-task models in most of the\nsettings.\n1 Introduction\nNeural Machine Translation (NMT) systems are\ntypically developed considering humans as the\nc\r2020 The authors. This article is licensed under a Creative\nCommons 3.0 licence, no derivative works, attribution, CC-\nBY-ND.end-users, and are hence optimized pursuing\nhuman-oriented requirements about the output\nquality. To meet these requirements, supervised\nNMT models are trained to maximize the probabil-\nity of the given parallel corpora (Bahdanau et al.,\n2015; Sutskever et al., 2014), which embed the ad-\nequacy and fluency criteria essential for the human\ncomprehension of a translated sentence. In an-\nother line of research, these objectives are directly\naddressed in Reinforcement Learning (Ranzato et\nal., 2016; Shen et al., 2016) and Bandit Learning\n(Kreutzer et al., 2017; Nguyen et al., 2017), where\nmodel optimization is driven by the human feed-\nback obtained for each translation hypothesis.\nHowever, humans are not the only possible con-\nsumers of MT output. In a variety of application\nscenarios, MT can in fact act as a pre-processor to\nperform other natural language processing (NLP)\ntasks. For instance, this is the case of text clas-\nsification tasks for which, in low-resource con-\nditions, the paucity of training data provides a\nstrong motivation for exploiting translation-based\nsolutions. In tasks like sentiment classification,\nhate speech detection or document classification\n(the three application scenarios addressed in this\npaper) a translation-based approach would allow:\ni)translating the input text data from an under-\nresourced language into a resource-rich target lan-\nguage for which high-performance NLP compo-\nnents are available, ii)run a classifier on the trans-\nlated text and, finally, iii)project the results back\nto the original language.\nThis approach represents a straightforward so-\nlution in low/medium-resource1language settings\n1Jain et al. 
(2019) consider as “medium-resource” languages\nthose for which, although annotated training corpora do not\nexist, off-the-shelf (MT) systems like Google Translate are\navailable.\nwhere reliable NLP components for specific tasks\nare not available, and represents a strong baseline\nin a variety of multilingual and cross-lingual NLP\ntasks (Conneau et al., 2018). However, the NMT\nsystems normally used are still optimized by pur-\nsuing human-oriented adequacy and fluency objec-\ntives, which are not necessarily the optimal ones\nfor this pipelined solution. These models can in-\ndeed produce translations in which some proper-\nties of the input text are altered or even lost. For\ninstance, as shown in (Mohammad et al., 2016),\nthis happens in sentiment classification, where au-\ntomatic translations can fail to properly project\ncore traits of the input text into the target language.\nWhen this happens, the downstream linguistic pro-\ncessor will likely produce results of lower quality.\nIn light of these considerations, Tebbifakhr et al.\n(2019) argued that when the role of NMT is to feed\na downstream NLP component instead of a hu-\nman, translating into fluent and adequate sentences\nis not necessarily the main priority. Rather, if the\ngoal is producing translations that are “easy to pro-\ncess” by the downstream component, other opti-\nmization strategies might be more effective, even\nif they result in low-quality output from the point\nof view of human comprehension. Back to the\nsentiment classification example: before meaning\nand style, a “machine-oriented” translation should\nprioritize the optimal projection of the sentiment\ntraits of the input text, which are the key clues from\nthe automatic sentiment classification standpoint.\nTo pursue machine-oriented translation objec-\ntives, Tebbifakhr et al. (2019) proposed Machine-\nOriented Reinforce (MO-Reinforce), a method\nbased on Reinforce (Williams, 1992; Ranzato et\nal., 2016). While in Reinforce the objective is to\nmaximize the reward given by humans to NMT\nsystems’ output, in MO-Reinforce the human feed-\nback is replaced by the reward coming from a\ndownstream NLP system. Focusing on sentiment\nclassification, where the classifier’s output is a\nprobability distribution over the classes for each\ninput text, they define the reward as the probability\nof predicting the correct class. Evaluation results\ncomputed on Twitter data show that a downstream\nEnglish sentiment classifier performs significantly\nbetter when it is fed with machine-oriented trans-\nlations rather than the human-oriented ones pro-\nduced by a general-purpose NMT system.\nDespite its potential usefulness, MO-Reinforce\nhas a limitation that might reduce its general ap-plicability: it requires one NMT model for each\ndownstream task. This represents a possible bottle-\nneck in real industry scenarios, where training and\nmaintaining multiple task-oriented NMT systems\n(one for each possible downstream task) would be\ncostly and time-consuming, if not unfeasible. To\novercome this limitation, in this paper we explore\nthe possibility to simultaneously address multiple\ndownstream tasks with a single NMT system. In\nthis direction, we propose a multi-task learning ap-\nproach that has two main potential strengths. One\nis the higher flexibility for industrial deployment\ndue to its architectural simplicity. 
The other is the possibility of exploiting knowledge transfer across similar tasks (Zhang and Yang, 2017), eventually improving the results achieved by the single-task MO-Reinforce approach.
We test the viability of our multi-task approach on two source languages (Spanish and Italian²) for which data covering different tasks (sentiment classification, hate speech detection and document classification) have to be translated into English and then processed by dedicated NLP components. Our results show that translating with the proposed multi-task extension yields significant gains in classification performance with respect to both i) a generic NMT system and ii) the original single-task MO-Reinforce by Tebbifakhr et al. (2019).
Besides exploring for the first time a multi-task approach to "machine-oriented" NMT, this paper provides two technical contributions that explain the reported performance gains, namely: i) a reward normalization strategy to weigh the importance of each sample in the course of training, and ii) the application of dropout while sampling the translation candidates, which makes the model more reactive and avoids local optima. On the experimental side, another contribution of this work is the first evaluation on multi-class classification data (i.e., those used for the document classification task), a more challenging scenario compared to the binary task considered by Tebbifakhr et al. (2019).
² Although one of the motivations for machine-oriented translation is to support NLP in under-resourced settings, the chosen source languages do not fall in this category. The choice is motivated by the fact that they provide us with all the necessary infrastructure (e.g. test data) to perform a sound comparative evaluation. Here, indeed, we focus on testing the general applicability of our approach, while its evaluation in real under-resourced settings (conditional on the availability of benchmarks for multiple tasks) is left for future work.
2 Background
2.1 Human-oriented NMT
Formally, in MT, the probability of generating the translation y of length N given a source sentence x is computed as follows:
P(y \mid x) = \prod_{i=1}^{N} p_\theta(y_i \mid y_{<i}, x)    (1)
where p_\theta is a conditional probability defined by sequence-to-sequence NMT models (Bahdanau et al., 2015; Sutskever et al., 2014; Vaswani et al., 2017). In these models, an encoder first encodes the source sentence and then, at each time step, a decoder outputs the probability distribution over the vocabulary conditioned on the encoded source sentence and the translation prefix y_{<i}. In supervised NMT, the parameters \theta of the model are trained by maximizing the log-likelihood of the given parallel corpus \{x^s, y^s\}_{s=1}^{S}:
L = \sum_{s=1}^{S} \log P(y^s \mid x^s) = \sum_{s=1}^{S} \sum_{i=1}^{N_s} \log p_\theta(y^s_i \mid y^s_{<i}, x^s)    (2)
By maximizing this objective, the model indirectly pursues the human-oriented objectives of adequacy and fluency embedded in the training parallel corpora.
In addition to standard NMT training, these objectives can be directly addressed using reinforcement learning methods such as Reinforce (Ranzato et al., 2016). This method maximizes the expected reward from the end-user:
L = \sum_{s=1}^{S} \mathbb{E}_{\hat{y} \sim P(\cdot \mid x^s)} \Delta(\hat{y}) = \sum_{s=1}^{S} \sum_{\hat{y} \in \mathcal{Y}} P(\hat{y} \mid x^s)\, \Delta(\hat{y})    (3)
where \Delta(\hat{y}) is the reward of the sampled translation candidate \hat{y}, and \mathcal{Y} is the set of all the possible translation candidates. 
Since the size of this set \mathcal{Y} is exponentially large, Equation 3 is estimated by sampling one translation candidate out of this set using multinomial sampling or beam search:
\hat{L} = \sum_{s=1}^{S} P(\hat{y} \mid x^s)\, \Delta(\hat{y}), \quad \hat{y} \sim P(\cdot \mid x^s)    (4)
Since collecting human rewards is costly, the process can be simulated by comparing the sampled translation candidates with the corresponding reference translations using automatic evaluation metrics like BLEU (Papineni et al., 2002).
The two learning strategies (supervised and reinforcement) have two main commonalities: i) the learning objectives are human-oriented, and ii) they both need parallel data, respectively for maximizing the probability of the translation pair in supervised learning and for simulating the human reward in reinforcement learning.
2.2 Machine-oriented NMT
To pursue machine-oriented objectives and to bypass the need for parallel corpora, in the MO-Reinforce algorithm proposed by Tebbifakhr et al. (2019) the human reward is replaced by the reward from a downstream classifier (in that case, a polarity detector predicting the positive/negative sentiment of a translated sentence). This reward is defined as the probability of labeling the translated text with the correct class, and it can be easily computed since the output of the downstream classifier is a probability distribution over the possible classes. Therefore, given a small amount of labeled data in the source language³ \{x^s, l^s\}_{s=1}^{S}, in which l is the label of the corresponding source text x, Equation 4 can be redefined as follows:
\hat{L} = \sum_{s=1}^{S} P(\hat{y} \mid x^s)\, \Delta(\hat{y}, l^s), \quad \hat{y} \sim P(\cdot \mid x^s)    (5)
where \Delta(\hat{y}, l^s) is the probability that the downstream classifier assigns l^s to a sampled candidate.
In order to increase the contribution of the reward and to sample "useful" translation candidates, the proposed sampling strategy randomly extracts K candidates and eventually chooses the one with the highest reward to update the model. This strategy results in the selection of candidates that influence the initial model towards translations that maximize the performance of the downstream processor. For instance, in the sentiment classification scenario, these are NMT outputs that preserve, or even emphasize, relevant aspects like the proper handling of sentiment-bearing terms. Although they are poor in terms of the human-oriented notion of quality (as shown by BLEU scores close to zero when compared against human references), their high sentiment polarization considerably simplifies the polarity labelling task.
³ In (Tebbifakhr et al., 2019), MO-Reinforce is shown to result in better classification performance than the original Reinforce (Ranzato et al., 2016) with a few hundred labeled instances (~500).
Despite the significant gains compared to the classification performance achieved by translating with a generic NMT system, a limitation of MO-Reinforce lies in its applicability to one task at a time. Serving multiple tasks would only be possible by training multiple NMT models (one for each possible downstream classifier), which is a sub-optimal solution for the actual deployment of the approach in real industrial settings. To overcome this issue, in the next section we propose an extension aimed at simultaneously serving multiple classifiers with a single NMT system.
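Before turning to the multi-task extension, the single-task MO-Reinforce update just described can be summarized with a minimal sketch. This is only an illustration of Equations 5–6 and of the K-candidate selection strategy: the sample_translations and classify callables are hypothetical stand-ins for the NMT sampler and the downstream classifier, which the paper does not specify at this level of detail.

```python
from typing import Callable, List, Tuple

def mo_reinforce_step(
    src: str,
    label: int,
    sample_translations: Callable[[str, int], List[Tuple[str, float]]],  # K (hypothesis, log-probability) pairs
    classify: Callable[[str], List[float]],                              # class-probability distribution
    k: int = 5,
) -> Tuple[str, float]:
    """One machine-oriented update signal: sample K candidates with multinomial
    sampling, reward each with the probability the downstream classifier assigns
    to the correct label (Eq. 5), and keep the highest-reward candidate."""
    candidates = sample_translations(src, k)
    rewarded = [(hyp, log_p, classify(hyp)[label]) for hyp, log_p in candidates]
    best_hyp, best_log_p, best_reward = max(rewarded, key=lambda t: t[2])
    # REINFORCE-style contribution: the reward scales the candidate's log-probability,
    # whose gradient with respect to the NMT parameters drives the adaptation (Eq. 6).
    return best_hyp, best_reward * best_log_p
```

In a training loop, the returned reward-weighted log-probability would be back-propagated through the NMT model, pushing it towards translations that the downstream classifier labels correctly.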
Later, in\nthe experimental part of the paper (sections 4 and\n5), we will evaluate it in a multi-task scenario in-\nvolving both binary and multi-class tasks.\n3 Multi-task Machine-oriented NMT\nOur multi-task extensions of MO-Reinforce in-\nclude: i)prepending task-specific tokens to the in-\nput for managing multiple domains and comput-\ning normalized rewards to avoid under/over-fitting\n(Section 3.1), and ii)adding randomness to the\nsampling process to push for higher exploration of\nthe probability space (Section 3.2).\n3.1 Normalized Reward\nTo serve multiple downstream classifiers with a\nsingle NMT system, the model has to be trained\non a mixture of the labeled datasets available for\nthe different tasks. To define the target task, we\nprepend a task-specific token to each input sam-\nple within the corresponding dataset. In this way,\nthe NMT model is informed about the target down-\nstream application for which the input text has to\nbe translated. This idea is drawn from multilingual\nNMT, in which an effective solution is to prepend\nto the input sentences a token defining the desired\ntarget language (Johnson et al., 2017).\nTo avoid under/over-fitting when training the\nNMT model on mixed datasets that can have dif-\nferent sizes, we need to schedule the sampling\nfrom these datasets. In multilingual NMT, two\nfixed sampling schedules have been proposed,\nnamely: i)proportionally with respect to the\ndataset size (Luong et al., 2015), or ii)uniformly\nfrom each dataset (Dong et al., 2015). However,\nthese fixed scheduling approaches are not optimal\nsolutions. The first one gives higher importance totasks with larger datasets, so that those with less\ntraining material might remain under-fitted. The\nsecond one gives equal importance to all the tasks,\nwhich implies that larger datasets for some tasks\nwill not be fully exploited, reducing systems’ per-\nformance on those tasks.\nTo overcome these limitations, adaptive\nscheduling strategies can be adopted to update the\nimportance of each task in the course of training.\nThe idea is that, when the performance of the\nmodel is low on one task, higher importance is\ngiven to that task. This can be done by keeping\nthe schedule fixed and scaling the gradients\n(Chen et al., 2017), or directly by changing the\nsampling weights (Jean et al., 2019). In the first\napproach by Chen et al. (2017), the adaptation\nis done based on the magnitude of the gradients.\nHowever, the computed gradients loosely correlate\nwith the performance of the model and do not\ndirectly measure model’s performance for the\ncorresponding task. The second one (Jean et al.,\n2019), requires knowing the performance of the\nsingle-task models for each task on the develop-\nment set before starting the training. Then, after\neach epoch, the results of the multi-task model\non the same development set are compared with\nthose achieved by the single-task models, and the\nweights get updated accordingly. As a direct in-\ndicator, models’ performance on the development\nset represents a more reliable alternative compared\nto exploiting the indirect information provided\nby gradients’ magnitude. However, it is more\ncomputationally intensive and it assumes knowing\nin advance the performance of the single-task\nmodels, which is not always available.\nWe hence opt for the idea of scaling the gradi-\nents while keeping the schedule fixed and uniform\nacross tasks. 
We make the adaptation based on the reward from the downstream task, which reflects the performance of the model for the corresponding input sample. Equation 6 shows the stochastic gradient of the MO-Reinforce objective function:
\nabla \hat{L} = \sum_{s=1}^{S} \Delta(\hat{y}, l^s)\, \nabla \log P(\hat{y} \mid x^s)    (6)
In this formulation, since the magnitude of the reward scales the computed gradient for each sample, samples with higher rewards will also have a higher influence on the model adaptation process. This can have a negative impact when the samples come from challenging tasks, or even from challenging classes within a specific task. These samples, in fact, will likely get a lower reward, leaving the corresponding tasks/classes under-fitted. To avoid this problem and to boost performance when dealing with challenging samples, we propose a reward normalization step, which extends MO-Reinforce with the possibility of weighting the importance of each sample during training. The idea is that the average reward of the K translation candidates sampled by MO-Reinforce in order to choose the most useful one (see Section 2.2) can be considered as an indicator of the level of difficulty of each task. Therefore, to normalize the reward, this average value can be subtracted from the original reward as follows:
\hat{\Delta}(\hat{y}, l) = \Delta(\hat{y}, l) - \frac{\sum_{k=1}^{K} \Delta(\hat{y}_k, l)}{K} + \alpha    (7)
where K is the number of sampled translation candidates. We add a constant value \alpha to prevent a zero reward in the cases in which all the rewards have the same value. This normalization reduces the reward of easy samples, whose average is high, more strongly, and consequently gives more importance to challenging samples with low reward.
3.2 Noisy Sampling
Two sampling strategies are used for sampling the translation candidates in reinforcement learning. The first one is beam search (Sutskever et al., 2014). It is a heuristic search which maintains a pool of highest-probability translation prefixes of size B. At each step, the prefixes in the pool are expanded with the B highest-probability words from the model's output distribution. Then, the resulting B×B hypotheses are pruned by keeping the B highest-probability prefixes. The search continues until all the prefixes in the pool reach the EOS token. The second one is multinomial sampling (Ranzato et al., 2016), where, at each time step, a word is generated by sampling from the model's output distribution. The generation is terminated when the EOS token is generated.
For a given application, the choice between the two sampling strategies depends on the well-known trade-off between exploration and exploitation in reinforcement learning. Indeed, while beam search exploits the model's knowledge more, multinomial sampling is more oriented towards exploring the probability search space. In light of this difference, in MO-Reinforce the sampling is done using multinomial sampling, which achieves better results in NMT (Wu et al., 2018). This is needed, since the parameters of the model are initialized by a generic NMT system, which is trained on parallel data pursuing human-oriented objectives. A minimal sketch of the reward normalization in Equation 7 is given below, before we describe how the sampling itself can be made noisier.
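As an illustration only, the normalization of Equation 7 amounts to a few lines of Python; the default alpha of 0.1 is the value later selected on the development set in Section 4, and the example reward values are made up for the sake of the sketch.

```python
from typing import List

def normalized_rewards(rewards: List[float], alpha: float = 0.1) -> List[float]:
    """Reward normalization of Eq. 7: subtract the mean reward of the K sampled
    candidates and add a small constant alpha so that the reward does not
    collapse to zero when all candidates score the same."""
    mean_reward = sum(rewards) / len(rewards)
    return [r - mean_reward + alpha for r in rewards]

# An "easy" sample (all candidates already score well) vs. a "hard" one:
print(normalized_rewards([0.90, 0.85, 0.95, 0.88, 0.92]))  # all rewards shrink towards alpha
print(normalized_rewards([0.05, 0.02, 0.60, 0.04, 0.03]))  # the one good candidate keeps a large reward
```

The candidate with the highest normalized reward is then the one whose log-probability gradient is scaled as in Equation 6, so that hard samples retain a usable learning signal.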
Pushing for the exploration of the probability space instead of exploiting the original model's knowledge will promote the generation of more diverse candidates and eventually increase the chance of influencing the system's behaviour towards our machine-oriented objectives.
Although for these reasons multinomial sampling represents a better choice than beam search, in MO-Reinforce the exploration of the probability space does not always result in a boost of candidates' diversity. For instance, the higher randomness in generating the translation candidates might not suffice when the model's probability distribution is very peaked (i.e. when, at a given time step, the number of plausible options for the next word is very small). In this case, multinomial sampling will likely generate the same candidate at different iterations over the data. If its reward is the highest one among the K samples, this candidate will be chosen and the model will be updated to increase the candidate's probability. The result will be an even more peaked distribution that, in turn, will increase the chance of getting the model stuck in a local optimum by repeatedly generating the same candidate.
To avoid these local optima and make MO-Reinforce more reactive in handling multi-task data, our last extension aims to perturb the model's probability distribution. We do this by enabling dropout (Srivastava et al., 2014) while generating the candidates, which is usually disabled when generating the final translation outputs. Dropout adds perturbation to the sampling, which helps the model to generate different translation candidates at different passes over the data, even in the case of highly peaked probability distributions.
4 Experiments
Our multi-task extension of MO-Reinforce is evaluated on two source languages: Spanish and Italian. For Spanish, we consider the downstream tasks of document classification and hate speech detection. For Italian, we select document classification and sentiment analysis. The evaluation is done by feeding dedicated English classifiers (one for each downstream task) with translations produced by different NMT models, namely: i) a general-purpose NMT system, ii) the original single-task MO-Reinforce, and iii) different variants of our multi-task extension. The goal is to maximize the classification performance on each downstream task.

Spanish Tasks
              MLDoc                        Hate Speech
              CCAT  ECAT  GCAT  MCAT       Non-Hateful  Hateful
Train          100   100   100   100               400      400
Development    314   201   208   277               500      500
Test          1246   731   794  1229               278      222

Italian Tasks
              MLDoc                        Sentiment
              CCAT  ECAT  GCAT  MCAT       Negative  Positive
Train          100   100   100   100           2289      1450
Development    239   248   238   275            254       161
Test           963  1066   976   995            733       316
Table 1: Statistics of the datasets used for the Spanish and Italian tasks.

         Europarl  JRC   Wikipedia  ECB   TED   KDE   News11  News   Total
Es-En    2M        0.8M  1.8M       0.1M  0.2M  0.2M  0.3M    0.2M   5.6M
It-En    2M        0.8M  1M         0.2M  0.2M  0.3M  0.04M   0.02M  4.56M
Table 2: Statistics of the parallel corpora used for training the generic NMT systems.
As another term of comparison\nfor the three translation-based solutions, we con-\nsider the results obtained by directly processing the\ninput sentences with task-specific Spanish and Ital-\nian classifiers trained on the same small datasets\nused to adapt the general-purpose NMT system.\nIn line with (Tebbifakhr et al., 2019), the\nmulti-task approach is expected to outperform the\ngeneric (human-oriented) NMT system, as well\nas the task/language-specific classifiers trained on\nfew data points. Ideally, thanks to the solutions\nproposed in Section 3, it should also compete with\nthe single-task (machine-oriented) models. This\nwould indicate the viability of a single-model ap-\nproach to simultaneously address multiple tasks.\nIn the following, we describe the task-specific\ndata used for model adaptation and evaluation, as\nwell as the parallel corpora used for training the\ngeneric NMT system. Their statistics are respec-\ntively reported in Tables 1 and 2.\nDocument Classification. For this multi-class\nlabelling task, we use the MLDoc corpora\n(Schwenk and Li, 2018), which cover 8 languages,\nincluding English, Spanish and Italian. They com-\nprise news stories labeled with 4 different cate-\ngories: CCAT (Corporate/Industrial), ECAT (Eco-\nnomics), GCAT (Government/Social), and MCAT\n(Markets). For each language, the training, de-velopment and test sets respectively contain 10K,\n1K, and 4K documents uniformly distributed into\nthe 4 classes. Following (Bell, 1991), for train-\ning and evaluation we only consider the first sen-\ntence of each document, which usually provides\nenough information about the general content of\nthe document. We use the whole English training\nset to build our downstream classifiers. To simu-\nlate an under-resourced setting, we randomly sam-\nple 100 documents for each class from the Spanish\nand Italian training sets. We use these samples to\nadapt the generic NMT system for the downstream\ntask, while for development and test we use the\nwhole sets.\nHate Speech Detection. For this binary task, we\nuse the English and Spanish datasets published for\nthe multilingual hate speech detection shared task\nat SemEval 2019 (Basile et al., 2019). We train the\ndownstream classifier on the whole English train-\ning set, including 3,783 hateful and 5,217 non-\nhateful Twitter messages. We randomly sample\n400 tweets for each class from the Spanish training\nset in order to simulate the under-resourced setting.\nSince the test set is not publicly available, we use\nthe development set as final evaluation benchmark,\nand we sample 500 tweets for each class from the\nrest of the training set as the development set.\nSentiment Classification. For this binary task,\nwe use a collection of annotated tweets released\nfor the Italian sentiment analysis task at Evalita\n2016 (Barbieri et al., 2016). 
After filtering out the subjective tweets and the ones with mixed polarity, we train the downstream system using a balanced set of 1.6M negative and positive tweets (Go et al., 2009).
Generic NMT systems. We train the generic NMT system using the parallel corpora reported in Table 2. After filtering out long and imbalanced pairs, we encode the corpora using 32K byte-pair codes (Sennrich et al., 2016). Our NMT model uses the Transformer with parameters set as in the original paper (Vaswani et al., 2017). In all the settings, we start the training by initializing the NMT model with the trained generic NMT systems. Then, we continue the training for 50 epochs and choose the best-performing checkpoint based on the average F1 score measured on the development set of each task. We set K (i.e. the number of sampled translation candidates at each time step) to 5, and used the development set to evaluate different values of α (i.e. the constant value added to prevent zero rewards – see Section 3.1). The best-performing value of 0.1 was then used in all the experiments. For developing the classifiers (both the downstream English ones and the language-specific ones used as baseline), we fine-tune multilingual BERT (Devlin et al., 2019).
5 Results and Discussion
Our experimental results are shown in Table 3, which reports the classification performance (F1) obtained on each downstream task by:
• Feeding the English classifiers with translations from different NMT models (i.e. Generic, Single-task MO-Reinforce and different variants of Multi-task MO-Reinforce);
• Running language-specific classifiers on the original untranslated data (Source).

Models                                               Spanish-English        Italian-English
                                                     MLDoc   Hate Speech    MLDoc   Sentiment
Generic                                              82.58   54.49          75.43   51.89
Source                                               84.86   75.29          73.24   64.06
Single-task MO-Reinforce                             88.36   64.24          76.86   70.27
Multi-task MO-Reinforce (proportional sampling)      86.18   62.93          10.83   70.11
Multi-task MO-Reinforce (uniform sampling)           86.45   55.07          68.26   68.01
Multi-task MO-Reinforce (normalization)              86.98   66.52          75.11   66.70
Multi-task MO-Reinforce (dropout)                    87.73   77.56          80.31   68.98
Multi-task MO-Reinforce (dropout & normalization)    90.13   77.08          80.90   66.73
Table 3: Classification results (F1) obtained by: i) translating with the Generic NMT system, ii) directly processing the untranslated data (Source), iii) translating with separate Single-task MO-Reinforce models, iv) one Multi-task MO-Reinforce model with different sampling strategies, v) one Multi-task MO-Reinforce model with reward normalization and noisy sampling.

The F1 scores obtained by the Generic NMT systems in document classification (MLDoc) show that the simplest translation-based approach produces competitive results compared to those achieved by language-specific classifiers trained on small in-domain data. The situation is different for tasks whose data differ significantly from those used to train the general-purpose system. On the user-generated content used for hate speech detection and sentiment classification (i.e. Twitter data), the Generic results are indeed poor. 
This shows that NMT models trained by only pursuing human-oriented criteria might not fit the target downstream tasks, for which machine-oriented adaptation becomes necessary.
Machine-oriented adaptation with single-task MO-Reinforce yields the expected benefits, with improvements (+3.25 F1 points for document classification, +18.38 for sentiment classification in Italian) that allow it to outperform the language-specific (Source) classifiers in three tasks out of four. These gains confirm and validate on multiple tasks (including multi-class classification) the findings of Tebbifakhr et al. (2019), showing that MO-Reinforce can leverage the feedback from external linguistic processors to adapt the NMT model towards translations that maximize the performance in downstream applications.
The middle part of Table 3 shows the first results obtained by our multi-task adaptation of MO-Reinforce. This is done by prepending the task-specific tokens and comparing the two fixed sampling schedules (proportional to the datasets' size and uniform). As expected (see Section 3.1), when sampling proportionally, the task with less training data (MLDoc) starves in training and remains under-fitted. This is particularly evident for Italian, where the document classification dataset is ten times smaller than the sentiment analysis one, and performance is particularly low (10.83). For Spanish, where the hate speech dataset is only twice as big as the document classification one, the problem exists but is less evident. Although uniform sampling helps the task with less training data (MLDoc) to achieve better performance, it harms those with more data, which remain under-fitted (lower performance than with proportional sampling). Analysing the performance of the multi-task and single-task variants of MO-Reinforce, we notice that, although the former still outperforms the Generic NMT system in three tasks out of four, its results are worse than those of the single-task MO-Reinforce. For the task with the most unbalanced data (MLDoc Italian), uniform sampling helps to increase the performance, but it is not sufficient to reach the scores achieved by the Generic NMT. On the hate speech data, the results of the language-specific classifiers (Source) are still the highest ones. The results reported so far would not allow a user to replace the single-task systems with the multi-task one.
The bottom part of Table 3 reports the classification results obtained by MO-Reinforce with reward normalization and noisy sampling (both separately and together). As can be seen, reward normalization is beneficial for both Spanish tasks, with a larger performance gain on hate speech with respect to both sampling strategies (+3.59 and +11.45 F1 points). For Italian, reward normalization helps in the MLDoc task (+6.85 over the best sampling strategy), but it results in a performance drop in sentiment classification (-1.31).

Models                                                Spanish-English        Italian-English
                                                      MLDoc   Hate Speech    MLDoc   Sentiment
Single-Task MO-Reinforce                              88.36   64.24          76.86   70.27
Single-Task MO-Reinforce (dropout)                    89.91   35.73          81.87   65.67
Single-Task MO-Reinforce (dropout & normalization)    88.55   78.33          81.22   70.97
Table 4: Classification results (F1) obtained by translating with the original single-task MO-Reinforce and two variants of single-task MO-Reinforce (with noisy sampling – dropout – alone and combined with reward normalization).
In general, reward normalization proves useful for tasks that tend to remain under-fitted with proportional or uniform sampling. Concerning the sentiment analysis task, our intuition is that, in the presence of a large quantity of task-specific data in the target language, both the English classifier and the computed rewards are reliable enough. Scaling the rewards with their average value (see Eq. 7) reduces the learning capability of the NMT system, resulting in an under-fitted model. Although adding reward normalization reduces the gap in performance with respect to the single-task MO-Reinforce and the Source classifiers, it is not yet sufficient to replace them.
The results are significantly better with the noisy sampling approach discussed in Section 3.2. In both languages and in all the tasks, the reported F1 scores approach those obtained by the single-task variant of MO-Reinforce (which in two cases is even outperformed) and always improve over the language-specific Source classifiers. This confirms that enabling dropout while generating the translation candidates prevents the model from getting stuck in local optima, and promotes diversity in producing candidates that eventually receive higher rewards.
Combined, the two contributions of this paper (reward normalization and noisy sampling) yield mixed outcomes. For Spanish, we observe a further improvement compared to noisy sampling alone in document classification (+2.40), which comes at the cost of a small drop in hate speech detection (-0.48). Also for Italian there is an improvement over noisy sampling alone in document classification (+0.59), but a larger drop in sentiment classification performance (-2.25). However, it is worth remarking that: i) the size of the Italian sentiment analysis dataset is almost 10 times larger than the size of the document classification dataset, and ii) the data used to train the English classifiers are even more unbalanced. Harmonizing the results of the two tasks hence becomes quite difficult. Nevertheless, combining reward normalization and noisy sampling has a generally positive effect, which allows the multi-task MO-Reinforce system to approach and, in some tasks, even to outperform the single-task models.
In our final analysis, we investigate the effect of introducing dropout and reward normalization when MO-Reinforce is used in the single-task scenario. As shown in Table 4, enabling dropout improves the document classification results in both languages. The reported scores show that the added noise introduced by dropout helps the model to explore the probability space more and to avoid local optima, even when dealing with a single task. However, for hate speech detection in Spanish and sentiment analysis in Italian, this exploration of the probability space results in lower performance compared to the original MO-Reinforce. To understand the reasons for this drop, Figure 1 shows the distribution of the rewards obtained in hate speech detection when translating the training set with the generic NMT system.
[Figure 1: Rewards distribution for the hate speech detection training set translated with the Generic NMT system – histogram of reward values (0 to 1) against counts, for the Non-Hateful and Hateful classes.]
This distribution shows that the downstream classifier is very biased toward the non-hateful class (right side of Figure 1), with most of the hateful samples obtaining zero reward (left side). 
While the model is exploring the proba-\nbility space, this extreme imbalance in the rewards\ndoes not allow the hateful samples to get a non-\nzero reward, and this drastically scales down their\ngradients preventing the NMT system to actually\nlearn from these samples. Eventually, this results\nin a “catastrophic forgetting”, where the NMT sys-\ntem learns only from one class and totally forgets\nthe other. Whatever it will receive in input, this\nsystem will generate a translation with no hate nu-\nances, which will be classified as non-hateful by\nthe downstream classifier. The very low F1 (35.73)\nis the result of this process.\nAdding reward normalization minimizes the\n“catastrophic forgetting” effect by keeping the\nmagnitude of the rewards balanced across the\nclasses. In terms of performance, hate speech\ndetection and sentiment analysis benefit of it by\nachieving higher results compared to the original\nMO-Reinforce (respectively, +14.09 and +0.77).\nOn both the languages, the document classifi-cation results slightly drop compared with MO-\nReinforce with dropout, but they still outperform\nthose achieved by translating with the original ap-\nproach by (Tebbifakhr et al., 2019).\nLooking at the output of the system, we no-\nticed that the translations are shorter and are not\nadequate compared to the output of the Generic\nsystem. For instance, in document classification,\nthe samples belonging to the Corporate class are\nusually translated to “ The company. ”, or the posi-\ntive samples in sentiment analysis are translated to\n“I’m very happy. ”, which are easier to be classified\nby the downstream classifiers.\n6 Conclusion\nWe proposed an extension of the MO-Reinforce al-\ngorithm, targeting “machine-oriented” NMT adap-\ntation in a multi-task scenario. In this scenario,\ndifferent NLP components are fed with transla-\ntions produced by a single NMT system, which is\nadapted to generate output that is “easy to process”\nby the downstream processing tools. To close the\nperformance gap between the single and the multi-\ntask variants of MO-Reinforce, we enhanced the\nlatter with reward normalization and noisy sam-\npling strategies. Our experiments show that, with\nthese two features, the multi-task MO-Reinforce\napproach achieves significant gains in performance\nthat make it competitive with the single-task solu-\ntion (though, having one single model to build and\nmaintain, at considerably lower deployment costs).\nFurthermore, we show that reward normalization\nand noisy sampling can also help in the single-task\nsetting, where our approach outperforms the origi-\nnal MO-Reinforce in four tasks.\nReferences\nBahdanau, Dzmitry, Kyunghyun Cho, and Yoshua Ben-\ngio. 2015. Neural machine translation by jointly\nlearning to align and translate. In Proc. ofICLR\n2015, San Diego, CA, USA, May.\nBarbieri, F, V Basile, D Croce, M Nissim, N Novielli,\nand V Patti. 2016. Overview of the evalita 2016\nsentiment polarity classification task. In Proc. of\nEV ALITA 2016, Naples, Italy, December.\nBasile, Valerio, Cristina Bosco, Elisabetta Fersini, Deb-\nora Nozza, et al. 2019. SemEval-2019 task 5: Mul-\ntilingual detection of hate speech against immigrants\nand women in twitter. In Proc. ofSEMEV AL 2019,\npages 54–63, Minneapolis, Minnesota, USA, June.\nBell, A. 1991. The Language ofNews Media. Lan-\nguage in society. Blackwell.\nChen, Zhao, Vijay Badrinarayanan, Chen-Yu Lee,\nand Andrew Rabinovich. 2017. 
Gradnorm:\nGradient normalization for adaptive loss balanc-\ning in deep multitask networks. arXiv preprint\narXiv:1711.02257.\nConneau, Alexis, Ruty Rinott, Guillaume Lample, Ad-\nina Williams, Samuel Bowman, et al. 2018. XNLI:\nEvaluating cross-lingual sentence representations.\nInProc. ofEMNLP 2018, pages 2475–2485, Brus-\nsels, Belgium, November.\nDevlin, Jacob, Ming-Wei Chang, Kenton Lee, and\nKristina Toutanova. 2019. BERT: Pre-training of\ndeep bidirectional transformers for language under-\nstanding. In Proc. ofNAACL-HLT 2019, pages\n4171–4186, Minneapolis, Minnesota, June.\nDong, Daxiang, Hua Wu, Wei He, Dianhai Yu, and\nHaifeng Wang. 2015. Multi-task learning for mul-\ntiple language translation. In Proc. ofACL 2015,\npages 1723–1732, Beijing, China, July.\nGo, Alec, Richa Bhayani, and Lei Huang. 2009. Twit-\nter sentiment classification using distant supervision.\nCS224N Project Report, Stanford, 1(12):2009.\nJain, Alankar, Bhargavi Paranjape, and Zachary C. Lip-\nton. 2019. Entity projection via machine translation\nfor cross-lingual NER. In Proc. ofEMNLP 2019,\npages 1083–1092, Hong Kong, China, November.\nJean, S ´ebastien, Orhan Firat, and Melvin Johnson.\n2019. Adaptive scheduling for multi-task learning.\narXiv preprint arXiv:1909.06434.\nJohnson, Melvin, Mike Schuster, Quoc V . Le, Maxim\nKrikun, et al. 2017. Google’s multilingual neu-\nral machine translation system: Enabling zero-shot\ntranslation. Transactions oftheAssociation for\nComputational Linguistics, 5:339–351.\nKreutzer, Julia, Artem Sokolov, and Stefan Riezler.\n2017. Bandit structured prediction for neural\nsequence-to-sequence learning. In Proc. ofACL\n2017, pages 1503–1513, Vancouver, Canada, Au-\ngust.\nLuong, Minh-Thang, Quoc V Le, Ilya Sutskever, et al.\n2015. Multi-task sequence to sequence learning.\narXiv preprint arXiv:1511.06114.\nMohammad, Saif M., Mohammad Salameh, and Svet-\nlana Kiritchenko. 2016. How translation alters sen-\ntiment. Journal ofArtificial Intelligence Research,\n55(1):95–130, January.\nNguyen, Khanh, Hal Daum ´e III, and Jordan Boyd-\nGraber. 2017. Reinforcement learning for ban-\ndit neural machine translation with simulated human\nfeedback. In Proc. ofEMNLP 2017, pages 1464–\n1474, Copenhagen, Denmark, September.Papineni, Kishore, Salim Roukos, Todd Ward, and Wei-\nJing Zhu. 2002. BLEU: a method for automatic\nevaluation of machine translation. In Proc. ofACL\n2002, pages 311–318, Philadelphia, PA, USA, July.\nRanzato, Marc’Aurelio, Sumit Chopra, Michael Auli,\nand Wojciech Zaremba. 2016. Sequence level train-\ning with recurrent neural networks. In Proc. ofICLR\n2016, San Juan, Puerto Rico, May.\nSchwenk, Holger and Xian Li. 2018. A Corpus for\nMultilingual Document Classification in Eight Lan-\nguages. In Proc. ofLREC 2018, Miyazaki, Japan,\nMay.\nSennrich, Rico, Barry Haddow, and Alexandra Birch.\n2016. Neural machine translation of rare words with\nsubword units. In Proc. ofACL 2016, pages 1715–\n1725, Berlin, Germany, August.\nShen, Shiqi, Yong Cheng, Zhongjun He, Wei He, et al.\n2016. Minimum risk training for neural machine\ntranslation. In Proc. ofACL 2016, pages 1683–\n1692, Berlin, Germany, August.\nSrivastava, Nitish, Geoffrey Hinton, Alex Krizhevsky,\nIlya Sutskever, and Ruslan Salakhutdinov. 2014.\nDropout: A simple way to prevent neural networks\nfrom overfitting. Journal ofMachine Learning\nResearch, 15(56):1929–1958.\nSutskever, Ilya, Oriol Vinyals, and Quoc V Le.\n2014. Sequence to sequence learning with neu-\nral networks. 
In Advances inNeural Information\nProcessing Systems 27, pages 3104–3112. Curran\nAssociates, Inc.\nTebbifakhr, Amirhossein, Luisa Bentivogli, Matteo Ne-\ngri, and Marco Turchi. 2019. Machine translation\nfor machines: the sentiment classification use case.\nInProc. ofEMNLP 2019, pages 1368–1374, Hong\nKong, China, November.\nVaswani, Ashish, Noam Shazeer, Niki Parmar, Jakob\nUszkoreit, et al. 2017. Attention is all you need. In\nAdvances inNeural Information Processing Systems\n30, pages 5998–6008. Curran Associates, Inc.\nWilliams, Ronald J. 1992. Simple statistical gradient-\nfollowing algorithms for connectionist reinforce-\nment learning. Machine learning, 8(3-4):229–256.\nWu, Lijun, Fei Tian, Tao Qin, Jianhuang Lai, and Tie-\nYan Liu. 2018. A study of reinforcement learning\nfor neural machine translation. In Proc. ofEMNLP\n2018, pages 3612–3621, Brussels, Belgium, Novem-\nber.\nZhang, Yu and Qiang Yang. 2017. A survey on multi-\ntask learning. arXiv preprint arXiv:1707.08114.",
"main_paper_content": null
}
|
{
"decision": "Unknown",
"reviews": []
}
| 0 | 0 |
[] |
[] |
{
"id": "2SDcMbFeNx0",
"year": null,
"venue": "EAMT 2020",
"pdf_link": "https://aclanthology.org/2020.eamt-1.51.pdf",
"forum_link": "https://openreview.net/forum?id=2SDcMbFeNx0",
"arxiv_id": null,
"doi": null
}
|
{
"title": "CEF Data Marketplace: Powering a Long-term Supply of Language Data",
"authors": [
"Amir Kamran",
"Dace Dzeguze",
"Jaap van der Meer",
"Milica Panic",
"Alessandro Cattelan",
"Daniele Patrioli",
"Luisa Bentivogli",
"Marco Turchi"
],
"abstract": "Amir Kamran, Dace Dzeguze, Jaap van der Meer, Milica Panic, Alessandro Cattelan, Daniele Patrioli, Luisa Bentivogli, Marco Turchi. Proceedings of the 22nd Annual Conference of the European Association for Machine Translation. 2020.",
"keywords": [],
"raw_extracted_content": "CEF Data Marketplace:\nPowering a Long-term Supply of Language Data\nAmir Kamrany;Dace Dzeguzey;Jaap van der Meery;Milica Panicy;\nAlessandro Cattelanz;Daniele Patrioliz;\nLuisa Bentivogli?;and Marco Turchi?\nyTAUS - Language Data Network, Netherlands famir,dace,jaap,milica [email protected]\nzTranslated, Italyfalessandro,daniele.patrioli [email protected]\n?FBK, Italyfbentivo,turchi [email protected]\nAbstract\nWe describe the CEF Data Marketplace\nproject, which focuses on the develop-\nment of a trading platform of translation\ndata for language professionals: transla-\ntors, machine translation (MT) developers,\nlanguage service providers (LSPs), trans-\nlation buyers and government bodies. The\nCEF Data Marketplace platform will be\ndesigned and built to manage and trade\ndata for all languages and domains. This\nproject will open a continuous and long-\nterm supply of language data for MT and\nother machine learning applications.\n1 Introduction\nThe CEF Data Marketplace project is an initiative\nco-funded by the European Union under the Con-\nnecting Europe Facility programme, under Grant\nAgreement INEA/CEF/ICT/A2018/1816453. The\nproject has a duration of 24 months and started in\nNovember 2019.\nWith over 3501million new Internet users in\n2019 and the annual digital growth of 9%, there\nis insufficient content available in the local lan-\nguages. The automated translation platforms sup-\nport merely about a hundred of the 4,000 lan-\nguages with an established writing system. The\nCEF Data Marketplace will be the first platform\nthat facilitates the buying and selling of language\ndata to help businesses and communities reach\nc\r2020 The authors. This article is licensed under a Creative\nCommons 3.0 licence, no derivative works, attribution, CC-\nBY-ND.\n1https://wearesocial.com/blog/2019/\n01/digital-2019-global-internet-use-\nacceleratesscale with their language technologies while offer-\ning a way for the language data creators to mone-\ntize their work.\n2 Platform Description\nThe platform focuses on the integration and main-\ntenance of the already available technologies for\nmanaging and trading translation data. Specifi-\ncally, the following features will be added to an\nexisting underlying translation data repository:\n\u000fAn easy-to-use mechanism to upload and an-\nnotate data-sets for data sellers, as well as op-\ntions to upload updates to the data-sets;\n\u000fan easy-to-explore mechanism to find the\nright data for specific languages and domains\nfor data buyers;\n\u000fan easy-to-trade transaction system for data\nsellers to earn monetary rewards by trading\ntheir data with data buyers;\n\u000fan easy-to-trust reputation system to improve\nthe confidence of data buyers towards the\nmarketplace and to ensure quality of data.\n3 State-of-the-art Processing Tools\nAdvanced data processing services will be in-\ntegrated to enable and facilitate data exchange\nthrough the marketplace and to encourage data\nsellers and buyers to join the platform. These ser-\nvices consist of software for cleaning, anonymiz-\ning and clustering the data to ensure that the\ndata-sets available in the Marketplace are of high\nquality. These services will be provided through\nAPIs and will be available free of charge to data\nproviders or against a fee for users not publishing\ntheir data through marketplace. 
The software will\nalso be released open source for the industrial and\nresearch communities.\n4 Data Acquisition Strategy\nTo acquire as much data as possible with added\nvalue for the CEF Automated Translation (CEF-\nAT) Core Platform our strategy is to create a vi-\nbrant, broader market serving the needs of trans-\nlation providers and translation buyers across var-\nious desired language combinations and domains.\nThe legal framework of TAUS Data is updated to\nbuild trust2of the data owners to participate in the\nMarketplace. Clear guidelines are provided about\ndata ownership to safeguard the copyrights and to\nsupport the royalty-based model.\n5 Acknowledgement\nThe sole responsibility of this publication lies\nwith the author. The European Union is not re-\nsponsible for any use that may be made of the in-\nformation contained therein.\n2http://hdl.handle.net/11346/TAUS-PZNM",
"main_paper_content": null
}
|
{
"decision": "Unknown",
"reviews": []
}
| 0 | 0 |
[] |
[] |
{
"id": "quqlqfcZk8v",
"year": null,
"venue": "EAMT 2008",
"pdf_link": "https://aclanthology.org/2008.eamt-1.26.pdf",
"forum_link": "https://openreview.net/forum?id=quqlqfcZk8v",
"arxiv_id": null,
"doi": null
}
|
{
"title": "Boosting performance of weak MT engines automatically: using MT output to align segments & build statistical post-editors",
"authors": [
"Clare R. Voss",
"Matthew Aguirre",
"Jeffrey Micher",
"Richard Chang",
"Jamal Laoudi",
"Reginald L. Hobbs"
],
"abstract": "Clare R. Voss, Matthew Aguirre, Jeffrey Micher, Richard Chang, Jamal Laoudi, Reginald Hobbs. Proceedings of the 12th Annual conference of the European Association for Machine Translation. 2008.",
"keywords": [],
"raw_extracted_content": "Boosting Performance of Weak MT Engines \nAutomatically: Using MT Output to Align Segments & \nBuild Statistical Post-Editors \nClare R. Voss1, Matthew Aguirre2, Jeffrey Micher1, \nRichard Chang3, Jamal Laoudi3, Reginald Hobbs1 \n1Multilingual Computing Branch, Army Research Laboratory, Adelphi, MD 20783 \n2ArtisTech, Inc., 10560 Main St., Suite 105, Fairfax, VA 22030 \n3Advanced Resources Technologies, Inc., 1555 King St., Suite 400 Alexandria, VA 22314 \n{voss,jmicher,rachang, jlaoudi, hobbs }@arl.army.mil, magui [email protected] \nAbstract. This paper addresses the practical ch allenge of improving existing, op-\nerational translation systems with relatively weak, black-box MT engines when \nhigher quality MT engines are not availabl e and only a limited quantity of online re-\nsources is available. Recent research results show impressive performance gains in \ntranslating between Indo-European languages when chaining mature, existing rule-based MT engines and post-MT editors built automatically with limited amounts of \nparallel data. We show that this hybrid approach of serially composing or “chaining” \nan MT engine and automated post-MT editor---when applied to much weaker lexi-con-based and rule-based MT engines, translating across the more widely divergent languages of Urdu and English, and given limited amounts of document-parallel \nonly training data---will yield statistically significant boosts in translation quality up to the 50K of parallel segments in trai ning the post-editor, but not necessarily be-\nyond that. \nIntroduction \nIn industry and government, MT developers may be asked to improve existing, opera-\ntional translation systems with relatively weak, black-box MT engines because higher quality MT engines are not available and only a limited quantity of online resources is \navailable. Recent research results show impressive performance gains in translating be-tween Indo-European languages when chaining together mature, existing rule-based MT \nengines and post-MT editors built automatically with limited amounts of parallel data ([1], \n[2], [3]). In this paper, we show that this hybrid approach of serially composing an MT engine and automated post-MT editor---when applied to much weaker MT engines, trans-\nlating across more widely divergent languages, and given only limited amounts of training data---will yield statistically significant boosts in translation quality up to the first 50K of \nparallel segments in training the post-editor, but not necessarily beyond that. \n12th EAMT conference, 22-23 September 2008, Hamburg, Germany\n192\n \nThe key idea behind our approach is to have MT engines do their own translations to \nboost the performance of the systems in which they are embedded. We document and pre-\nsent results of this “self-help” workflow where (i) the MT engine outputs are used to iden-\ntify segment-level alignments, (ii) the resulting segment pairs are used to train automated statistical post-editors (APEs), and (iii) the resulting APEs form part of serially chained \nsystems (MT + APE) that outperform the original MT engines. \nThe paper begins with a brief overview of our approach to post-MT processing. The \nAlignment Section presents a novel algorithm that we developed for identifying Urdu-English segment-level alignments based on Urdu and English document-aligned files. In the Results and Analyses section that follows we review evaluation results and begin to \naddress questions raised in the Approach section. 
The paper concludes with a discussion of open issues and notes of future work. \nApproach \nWe know that human translators dislike working with MT output because---with no \nmechanisms built into the MT system to learn directly from human post-editing correc-\ntions--- the same errors appear over and over and the translators must make the same cor-\nrections over and over again as well. Our in-house requirement has been to determine how \nto boost the MT engines we already have and eliminate, where possible, known errors. In this paper, we report on leveraging the SRILM and MOSES tools ([4], [5]) without modi-\nfications to rapidly build statistical post-MT editors in just a few months. Our work fol-\nlows from the insights of [2] and [3] that “post-editors” can be built as monolingual trans-lation engines that convert “raw” target-language (TL) text produced by a baseline MT \nengine into higher quality TL text by correcting errors in TL word choice and order. \nOur approach has been to augment two in-house Urdu-to-English MT engines, one \nrule-based and one lexicon-based, with automated statistical post-editors built from the same corpus of parallel-aligned data to address several questions: 1. How effective are automated post-editors (APEs) in word re-orderings to boost an \nUrdu-English MT lexicon-based MT (LBMT) where no re-ordering of the Urdu input occurs? How does this compare to an APE’s impact on a rule-based MT (RBMT) \nwhere some re-ordering occurs prior to the APE processing by the baseline MT\n1? \n2. How much impact does the amount of parallel data for building an APE have on the \nperformance of a LBMT+APE hybrid versus a RBMT+APE hybrid? \n3. How effective are the RBMT+APE and LBMT+APE hybrids compared to a standalone \nstatistical MT engine (SMT) built with the same data as the APEs? Are the different engines impacted equally by segment-level alignments of different qualities? \n \n \n11 Given that Urdu-English translation has longer dist ance re-ordering than in prior work of French-\nEnglish translation where re-orderings are mostly local, we expected that an APE for Urdu-to-\nEnglish translation would be less effective than an APE for French-to-English translation. \n12th EAMT conference, 22-23 September 2008, Hamburg, Germany\n193\n \nData Preparation and Alignment \nThe training data in our study was restricted to the NIST 2008 Open MT Workshop’s \nUrdu “language pack” on DVD that included collections of document-parallel Urdu and English files that NIST provided without standard cleaning, with the stated expectation \nthat workshop participants would modify the files as needed [6]. Our first challenge in \nusing this data was to identify and extract pairs of Urdu and English segments that were translations of each other from separate Urdu and English files aligned only at the docu-\nment level. The intuition for the aligner algorithm that we developed came from our ob-servations reading the Urdu files after they were run through our two in-house MT en-\ngines: even the low-quality raw “English” output of these engines was “good enough” for \nus to scan and match by content with segments in the corresponding English files. We \nthen wondered if automated evaluation metrics could do this matching for us, by identify-\ning the highest scoring matched pairs of Urdu-translated “English” segments with Eng-\nlish-original segments. 
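In essence, this matching idea can be sketched as a greedy pairing of MT outputs with candidate reference segments. The sketch below is a simplified illustration only: the metric callable stands in for a sentence-level scorer (e.g. BLEU-2 or GTM), and the cross-over removal and two-metric intersection of the full algorithm described next are omitted.

```python
from typing import Callable, List, Tuple

def greedy_align(
    mt_segments: List[str],                 # MT output for the Urdu segments of one document
    ref_segments: List[str],                # English segments of the document-aligned file
    metric: Callable[[str, str], float],    # sentence-level similarity score (higher = more similar)
) -> List[Tuple[int, int]]:
    """Score every (MT output, reference) pair within a document, then greedily
    keep the highest-scoring pair and discard all remaining pairs that reuse
    either of its segments."""
    triples = [
        (metric(mt, ref), i, j)
        for i, mt in enumerate(mt_segments)
        for j, ref in enumerate(ref_segments)
    ]
    # Prefer higher scores; break ties by closeness of the segment positions.
    triples.sort(key=lambda t: (-t[0], abs(t[1] - t[2])))
    used_src, used_ref = set(), set()
    alignments: List[Tuple[int, int]] = []
    for _score, i, j in triples:
        if i in used_src or j in used_ref:
            continue
        alignments.append((i, j))
        used_src.add(i)
        used_ref.add(j)
    return alignments
```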
\n As a check on the possibility of segments aligning across document boundaries, we \nasked an Urdu speaker to examine several pairs of aligned Urdu and English documents to determine whether segments from one Urdu document appeared in the preceding or fol-lowing documents. This concern arose in part from the fact that aligned documents did \nnot always contain the same number of segments. We discovered that, even though seg-ments did not always have a corresponding partner in their aligned document, the seg-\nments did not align across document boundaries. As a result, our alignment algorithm \nwas restricted to comparing segments within aligned documents. \nBefore starting the alignment, documents were binned into three groups: those contain-\ning the same number of segments (Equal), those whose segment counts were off by one \n(OneOff), and those whose segments counts were more than one off (MoreThanOneOff). \nWe had expected that segment pairs within Equal document set might already be perfectly \naligned. On inspection however, we found that many documents in the Equal set were not segment-aligned. The automated evaluation metrics in the algorithm were BLEU [7] and \nGTM 1.4 [8]. After some initial experimentation, BLEU was set to have an n-gram size \nof 2, to yield more of a score spread across segments. The translation engines in the algo-\nrithm were the in-house LBMT and RBMT engines. The algorithm also included post-\nMT processing prior to segment-pair scoring to remove annotations intended for the hu-\nman reader only, to boost segment scores and again create more spread across segments. \nAlignment Algorithm \nThe algorithm steps, necessarily simplifying somewhat from all the details, were: \n1. Split the original single files with all of the \"aligned\" data in it into separate source and \nreference files based on document ID. \n2. Translate all of the Urdu segments to English using both MT engines. \n3. For each engine’s output and each metric, perform the fully exhaustive (N x M) evalua-tions of each of the N MTed segments against each of the M reference segments, on \n12th EAMT conference, 22-23 September 2008, Hamburg, Germany\n194\n \neach document. This results in four triplets of {metric score, source segment ID, refer-\nence segment ID}, because both MT engines’ output were scored with both metrics. \n4. All triplets for the two sets with the same metric were ranked, with the more likely aligned segment pairs above those judged less likely based on their metric score. In the event of equivalent scores, the tie was broken by selecting pairs whose difference in \nsource and reference IDs were closest (to each other in the document). \n5. An iterative algorithm for selection, deletion, and re-ranking of the triples for \"most \nprobable\" alignment was then applied to the lists. The highest scoring segment align-\nment was popped off the list first and saved as a candidate alignment. Then any other segments in the list with the same source ID or the same reference ID as the designated \ncandidate were also discarded. This removed all competing alignments for either seg-ment of the selected candidate, rapidly reducing the number of triples to re-sort and it-erate through. During the selection phase, if pairs of possible segments crossed over \neach other, we removed the \"worst-offending\" cross-over pair, defined as the pair with \nthe most number of other segments crossed. With the pair removed, the data was re-sorted and checked again for other crossovers. \n6. 
The final lists of candidate alignments for each of the two metrics were then intersected \nand only alignments found by iterations over both metrics were kept. (Note that we ef-\nfectively ignored the differences between MT engines by creating two lists of triplets.) \nEvaluating the Alignment Algorithm \nFor an initial evaluation, we ran the algorithm on five documents selected from the More-\nThanOneOff set, on the assumption these were “noisier” than the other sets and would \ngive us a lower bound on the algorithm’s performance. With the assistance of our Urdu \nspeaker, we produced a gold-standard alignment on this set. Precision and recall metrics\n2 \nwere calculated on the algorithm-aligned (hypothesized) segments. The per-document pair \nresults indicated that the algorithm would serve our needs for automatically extracting \nalignment candidates for training a post-editor: precision scores ranged from .67 to .9 and \nrecall from .63 to .88, on documents that differed by 2 to 4 segments in length. \nTo evaluate the algorithm over the full collection, we built a sample of 13 documents \nfrom each of the binned sets (Equal, OneOff, MoreThanOne Off). We again created a gold standard alignment for each document pair and used it to score the sample align-\nments. Table 1 shows the number of segments in each file of the document pairs selected for the evaluation set. The evaluation results in the two bottom rows indicate that the algo-\nrithm performed effectively on the Equal and OneOff pairs, with precisions score at .98 \nand .93. The large drop in precision for the MoreThanOneOff pairs to .57 pre-empted our use of this data in our builds. Clearly the initial three-way binning of the documents by \nsegment-count differences helped filter and isolate better alignments for MT training. In \nthe next section, we describe the use of the alignments from the Equal and OneOff bins \nfor the different system builds, in effect an extrinsic evaluation of these alignments. \n \n \n2 We define precision as # correct hypothesized alignments / total # hypothesized alignments, and \nrecall as # correct hypothesized alignments / total # gold standard alignments. \n12th EAMT conference, 22-23 September 2008, Hamburg, Germany\n195\n \nTable 1. Evaluation of segment alignment algorithm on 13 document pairs from three dataset bins. \nDocument Bins Equal \n#U=#E segs OneOff \n{#U segs, #E segs} MoreThanOneOff {#U segs, #E segs} \nListing of # seg-\nments in each \ndocument of pair to \ntest for alignment 31, 12, 5, \n4, 35, 14, \n5, 6, 32, 4, \n 3, 7, 4 {14,15},{8,9},{13,12}, \n{30,31},{21,20},{15,16}, \n{6,5},{7,6},{6,5},{12,11},\n{8,7},{10,9},{5,4} {11,14},{43,13},{11,5}, \n{10,5},{7,9},{5,3},{8,3},\n{15,10},{8,4},{11,8}, \n{3,5},{10,6},7,4} \nPrecision .98 .93 .57 \nRecall .95 .86 .53 \nResults and Analyses \nTo assess the impact of our two in-house MT engines in creating Urdu-English alignment \nand statistical post-editors to these engines, we also built, following the identical process, \n(i) a second set of post-editors using independently-created control alignments created by colleagues from the NIST DVD files with no knowledge of our algorithm\n3 and (ii) two \nsets of standalone statistical MT engines, from our and our colleagues’ alignments. All \nsystems were evaluated on the same Urdu dataset of the NIST 2008 Open MT workshop, \nconsisting of 1862 Urdu segments in 132 documents, where each segment was tagged and \npaired with four English human reference translations. 
Figure 1 shows the distribution of Urdu test segments by length. The test documents also varied in length from 3 to 79 seg-\nments, with slightly over half of the documents (68), having fewer than 10 segments.\n4 \n \n \nFigure 1. Histogram of #segments by segment lengt h (in tokens) from evaluation dataset \n \n3 We thank Tim Anderson of AFRL and Wade Shen of MIT Lincoln Labs for sharing their datasets. \n4 Another forty documents, slightly under a third, had 10 to 19 segments. Twenty-one documents \nhad 20 to 46 segments. Three others were much longer: 54, 76, 79 segments. \n12th EAMT conference, 22-23 September 2008, Hamburg, Germany\n196\n \nSystem-level Evaluation \nThe lexicon-based MT (LBMT) engine, when run standalone, produced a very weak \nBLEU-4 score (see baseline column in Table 2). This was not unexpected: given that this \nMT preserves Urdu’s SOV and head-final phrase-internal word order because it does no reordering of translated words, its output scores points mostly for single word matches. \nTable 2. System-level BLEU-4 scores on lexicon-based MT with automated post-editors trained on \nsame 25, 50, 75, and 105K pairs of aligned segments as for RBMT APE & SMT in Tables 3 and 4. \n LBMT LBMT+APEs (alignment set size) \n baseline 25K 50K 75K 105K\n alignment 1 0.064 0.150 0.172 0.180 0.185\n alignment 2 0.064 0.161 0.182 \n \nWhen the LBMT was chained with an automated post-editor (APE) built with only 25K of parallel segments, whether aligned by our colleagues (alignment 1) or our own (align-\nment 2), the hybrid score was more than double the baseline MT score. The hybrid score \nincreases however fell off dramatically beyond that initial set: with a second 25K parallel \nsegments to train the APE, the hybrid score increased by about one-eighth, and then with a third 25K, the hybrid score increased only by one-twentieth. \nTable 3. System-level BLEU-4 scores on rule-based MT with automated post-editors trained on \nsame sets of 25, 50, 75, 105K aligned segment pa irs as for LBMT APE & SMT in Tables 2 and 4. \n RBMT RBMT +APE \n Baseline +25K +50K +75K +105K\n alignment 1 0.127 0.180 0.195 0.202 0.206\n alignment 2 0.127 0.185 0.203 \n \nIn contrast, the rule-based MT engine, when run standalone, scored significantly higher \nthan the LBMT, at roughly double the BLEU points (see baseline column in Table 3). \nWhen chained with an APE built on 25K of parallel segments (alignment 2), the hybrid \nscore increased roughly by one-half. While this was not as dramatic a gain as with the \nLBMT+APE combination, the increase was statistically significant nonetheless. The \nRBMT+APE score increases beyond that initial 25K dataset---as occurred with the \nLBMT+APE---fell off dramatically: with a second 25K parallel segments to train the APE, the score increasing by only about one-tenth, and then with a third 25K (in align-\nment 1), the score increased only by one-twentieth. \nThese results suggested the first 25K training datasets contained the critical mass of \nnew in-genre, in-domain vocabulary and short phrases needed to translate the evaluation dataset, while the subsequent 25K datasets drawn from this same set of source texts con-tained much less new content and so only contributed to boosting the translation coverage \nin a more limited fashion. 
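As an aside on how such system-level scores can be reproduced, the snippet below shows BLEU-4 scoring of one system output against the four reference translations using NLTK. It is illustrative only: file names and whitespace tokenisation are assumptions, and this is not the scoring setup actually used for the NIST evaluation reported here.

```python
# Illustrative system-level BLEU-4 scoring against four reference translations.
from nltk.translate.bleu_score import corpus_bleu

def read_tokenised(path):
    with open(path, encoding="utf-8") as f:
        return [line.strip().split() for line in f]

hypotheses = read_tokenised("system_output.en")                  # one segment per line
reference_sets = [read_tokenised(f"ref{i}.en") for i in range(1, 5)]

# For each segment, corpus_bleu expects the list of its reference translations.
per_segment_refs = [list(refs) for refs in zip(*reference_sets)]
print(f"BLEU-4: {corpus_bleu(per_segment_refs, hypotheses):.3f}")
```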
\n12th EAMT conference, 22-23 September 2008, Hamburg, Germany\n197\n \nTo test for this possibility, we used only the two alignment sets to train a series of new \nstatistical MT (SMT)5 and the results came out consistent with this possibility. The BLEU \nscores on the SMT trained on our 25K and 50K alignments were statistically indistin-\nguishable from the BLEU scores for the LBMT+APE engines trained on these same alignments (see Table 4). If, at the finer-grained document and segment levels, we were \nto see that SMT output does outscore LBMT+APE output some of the time, then it would be fair to ask the fundamental question for this hybrid approach: for another much larger, \nhigh quality alignment set, will the SMT systematically match or will it instead outscore \nthe LBMT+APE, on the same amount of training data?\n6 \nTable 4. System-level BLEU-4 scores for statistical MTs trained on same alignment datasets of the \nAPE engines in Tables 2 and 3 (The 14K* training set was built from Equal bin alignments only) \n SMT \n 14K* 25K 50K 75K 105K\nalignment 1 0.144 0.163 0.175 0.180\nalignment 2 0.144 0.166 0.186 \nDocument-level and Segment-level Evaluation \nAs a first step in addressing this question, Tables 5 and 6 show, at document- and seg-\nment-level evaluations respectively, how frequently the 50K APEs with our alignments boost their baseline MT engines. The RBMT+APE hybrid showed individual documents \ndecreased in score from their baseline RBMT translation, but only 2 scores were statisti-cally significantly lower. By contrast, at the segment level in Table 6, both hybrids show \nstatistically significant drops in segment scores. \nTable 5. Document-level Bleu-4 score changes from LBMT to LBMT+APE runs and from RBMT \nto RBMT+APE runs (all APEs built with 50K alignmt 2) \nDocument score changes from LBMT to from RBMT to \nBleu-4 LBMT+APE RBMT+APE \n # increased / unchanged / decreased 132 / 0 / 0 124 / 0 / 8 \n It is especially intriguing that both hybrids show proportionately more decreases relative to their baseline MT systems in Bleu-1 scores at the segment-level (Table 6) than in \nBLEU-4 scores. This indicates that particular word or punctuation changes made by the \nAPEs are “worse”, i.e., with fewer 1-gram matches, even though on balance the APEs are increasing the higher-order n-gram matches that boost BLEU-4 scores, which could be a \nresult of APE substitutions or re-orderings yielding longer matches. While the APE “ad-vantages” with the first 50K training data are only partly a matter of increased vocabulary \n \n \n5 The English language model was built with only the English side of the parallel data. \n6 Since ramping up and maintaining a LBMT may be easier and less expensive than retraining an \nSMT or APE, the answer to this question has practical ramifications as well. \n12th EAMT conference, 22-23 September 2008, Hamburg, Germany\n198\n \ncoverage that comes with more training data, the segment-level evaluation suggests that \nthe lack of stronger lexical analysis in the APE to increase 1-gram matches is a limiting \nfactor in boosting the overall performance of the hybrids. \nTable 6. 
Segment-level Bleu score differences from LBMT to LBMT+APE runs and from RBMT \nto RBMT+APE runs (all APEs built with 50K alignmt 2) \nSegment score changes from LBMT to from RBMT to \n LBMT+APE RBMT+APE \nBleu-4 \n # increased / unchanged / decreased 1543 /66 / 252 1252 / 473 / 136 \nBleu-1 \n # increased / unchanged / decreased 1436 / 108 / 317 1198 / 197 / 466 \n \nIn addition to looking at the added-value and limiting factors from the APEs them-\nselves, we return to the question raised earlier about the impact of the baseline MTs on the \nsystem performance. Table 7 suggests that, using scores at the document level, there is consistent evidence in the score differences to rank order the LBMT + APE below SMT, \nwith 71 out of 132 SMT documents outscoring the LBMT+APE. One explanation might \nbe that an LBMT-specific APE faces more challenges with re-ordering edits to make on LBMT output than the SMT does on Urdu text: the APE must deal with noisy LBMT-\ninduced English without the benefit of linguistic content and redundancy (such as mor-\nphological and syntactic information) from Urdu that has been lost. In contrast, the SMT \nis “free” to detect and make use of that Urdu linguistic knowledge for the re-ordering for \ntranslation into English. \nTable 7. Document-level score differences between LBMT+APE and SMT engines, and between \nSMT and RBMT+APE engines (50K alignmt 2) \nDocument score changes between LBMT+APE between SMT and \nBleu-4 and SMT RBMT+APE \n # increased / unchanged / decreased 71 / 59 / 2 90 / 3 / 39 \n \nTable 7 also shows that with document-level scores, there is some evidence to rank the \nSMT below the RBMT+APE builds on 50K alignments.7 With a carefully constructed test \nset, it would be possible to determine whether this RBMT provides to its APE a parsing analysis and re-ordering advantage that the SMT we have built in its current form lacks \n(for example, no factored translation model [9], because we lacked Urdu resources to an-\nnotate our data for lemmas, part-of-speech, morphology, word class). \nThough we have presented an evaluation of the alignments and hybrid builds in terms \nof Bleu scores, we recognize that this is but a first step in reaching a deeper understanding of the impact and effectiveness of APEs when chained with LBMT and RBMT engines, \n \n \n7 We apply paired t-tests of statistical significanc e over document scores, rather than using BLEU’s \nautomated confidence intervals without system to system paired comparisons. \n12th EAMT conference, 22-23 September 2008, Hamburg, Germany\n199\n \nespecially given the well-recognized limitations of literal matching for assessing transla-\ntion quality [10]. At this stage in our work, we have begun manually assessing segment \noutputs across translation engines by aligning them with multiple reference translations \n(RTs), as shown in Table 8. The APEs for both engines made three identical substitutions while also making other distinct changes. The LBMT+APE and StatMT outputs are strik-\ningly similar, while the RBMT+APE output quite distinct re-ordering of the Urdu original word order (compare with LBMT output). \nTable 8. Five MT system outputs (LBMT, LBMT+APE, RBMT, RBMT+APE, StatMT) on same \ninput segment and four Reference Translations (RT1-4), manually aligned for presentation. 
System / Reference | Segment Translation
LBMT | P S O privatisation , injunction issuing
LBMT+APE | pso privatization , stay order issued
RBMT | Injunction ongoing P S O privatisation ,
RBMT+APE | stay order the pso privatization .
StatMT | pso privatization , the stay order issued
RT 1 | PSO Privatization , Stray Order Issued
RT 2 | PSO Privatization , Stay Order Issue
RT 3 | Stay on PSO Privatization
RT 4 | Stay on PSO Privatization
Conclusion and Future Work
In this paper, we have reported statistically significant performance improvements in (i) translating between Urdu and English, languages more divergent in word order than previously tested Indo-European pairs, by (ii) composing existing but weak lexicon-substitution-based and rule-based MT engines with statistical post-editors. The post-editors were trained on segment-level alignments generated with a novel, iterative re-ranking algorithm that selects the most likely alignment pairs from automatically scored outputs of these two engines. We also examined document-level performance of the lexicon-based and rule-based hybrids for clues to the limits we observed on their post-editors' improvements after 50K of training data.
The most striking result of using the MT engines' own outputs was the enormous gain in performance with the serial composition of the LBMT+APE system based on only 25K alignments. This suggests, for time-critical, rapid ramp-up of MT engines for very low-resource languages, that the first step is to find or build a translation lexicon and an LBMT, while immediately working in tandem to obtain document-parallel or comparable datasets that can boost the LBMT with progressively stronger APEs built with that engine. Longer term, however, given that (i) larger, in-domain training corpora can be constructed and (ii) SMTs outperform LBMT-based hybrids but underperform RBMT+APEs when trained on the same small quantities of data, we expect that RBMT-based hybrids, like our RBMT+APE or new automated RBMT hybrid types [11], will outperform SMTs on widely syntax-divergent language pairs.8
References
1. Elming, J. "Transformation-based correction of rule-based MT." In Proceedings of the 11th Annual Conference of the European Association for Machine Translation, Oslo, Norway (2006)
2. Dugast, L., Senellart, J., Koehn, P. "Statistical Post-Editing on SYSTRAN's Rule-Based Translation System." In Proceedings of the Second ACL Workshop on Statistical Machine Translation, Prague, Czech Republic (2007)
3. Simard, M., Ueffing, N., Isabelle, P., Kuhn, R. "Rule-Based Translation with Statistical Phrase-Based Post-Editing." In Proceedings of the Second ACL Workshop on Statistical Machine Translation, Prague, Czech Republic (2007)
4. Stolcke, A. "SRILM - an extensible language modeling toolkit." In Proceedings of the International Conference on Spoken Language Processing (2002)
5. Koehn, P., Hoang, H., Birch, A., Callison-Burch, C., Federico, M., Bertoldi, N., Cowan, B., Shen, W., Moran, C., Zens, R., Dyer, C., Bojar, O., Constantin, A., and Herbst, E. "Moses: Open source toolkit for statistical machine translation." In Proceedings of the Annual Meeting of the ACL, Demonstration and poster session, Prague, Czech Republic (2007).
6. NIST 2008 Open MT Workshop with Urdu Resource DVD (R116_1_1), http://www.nist.gov/speech/tests/mt/2008/doc/MT08_EvalPlan.v2.4.pdf, www.nist.gov/speech/tests/mt/2008/doc/2008_NIST_MTOpenEval_Agmnt_StandardV3.pdf
7. Papineni, K., Roukos, S., Ward, T., Zhu, W. "BLEU: a method for automatic evaluation of MT." In Proceedings of the ACL, Philadelphia, PA (2002)
8. Melamed, I. Dan, Green, R., Turian, J. "Precision and recall of machine translation." In Proceedings of the HLT-NAACL, Edmonton, Canada (2003)
9. Koehn, P., Hoang, H. "Factored Translation Models." In EMNLP-CoNLL-2007: Proceedings of the 2007 Joint Conference on Empirical Methods in Natural Language Processing and Computational Natural Language Learning, Prague, Czech Republic (2007)
10. Callison-Burch, C., Osborne, M., Koehn, P. "Re-evaluating the role of BLEU in machine translation research." In Proceedings of EACL, Trento, Italy (2006)
11. Font Llitjós, A., Vogel, S. "A walk on the other side: adding statistical components to a transfer-based translation system." In SSST, NAACL-HLT-2007 Workshop on Syntax and Structure in Statistical Translation, Rochester, NY (2007)
8 [3] also discussed the relation of their RBMT+APE and SMT, projecting that their SMT would surpass the RBMT+APE only with a massive amount of training data.",
"main_paper_content": null
}
|
{
"decision": "Unknown",
"reviews": []
}
| 0 | 0 |
[] |
[] |
{
"id": "DurOz_TrJ34",
"year": null,
"venue": "EAMT 2020",
"pdf_link": "https://aclanthology.org/2020.eamt-1.16.pdf",
"forum_link": "https://openreview.net/forum?id=DurOz_TrJ34",
"arxiv_id": null,
"doi": null
}
|
{
"title": "Quality In, Quality Out: Learning from Actual Mistakes",
"authors": [
"Frédéric Blain",
"Nikolaos Aletras",
"Lucia Specia"
],
"abstract": "Frederic Blain, Nikolaos Aletras, Lucia Specia. Proceedings of the 22nd Annual Conference of the European Association for Machine Translation. 2020.",
"keywords": [],
"raw_extracted_content": "Quality In, Quality Out: Learning from Actual Mistakes\nFr´ed´eric Blain1Nikolaos Aletras1Lucia Specia1,2\n1Department of Computer Science, University of Sheffield\n2Department of Computing, Imperial College London\nUnited Kingdom\nff.blain,n.aletras,l.specia [email protected]\nAbstract\nApproaches to Quality Estimation (QE) of\nmachine translation have shown promis-\ning results at predicting quality scores for\ntranslated sentences. However, QE models\nare often trained on noisy approximations\nof quality annotations derived from the\nproportion of post-edited words in trans-\nlated sentences instead of direct human an-\nnotations of translation errors. The latter is\na more reliable ground-truth but more ex-\npensive to obtain. In this paper, we present\nthe first attempt to model the task of pre-\ndicting the proportion of actual transla-\ntion errors in a sentence while minimis-\ning the need for direct human annotation.\nFor that purpose, we use transfer-learning\nto leverage large scale noisy annotations\nand small sets of high-quality human an-\nnotated translation errors to train QE mod-\nels. Experiments on four language pairs\nand translations obtained by statistical and\nneural models show consistent gains over\nstrong baselines.\n1 Introduction\nQuality Estimation (QE) for Machine Translation\n(MT) is the task of predicting the overall quality of\nan automatically generated translation e.g., on ei-\nther word, sentence or document level (Blatz et al.,\n2004; Ueffing and Ney, 2007). In opposition to au-\ntomatic metrics and manual evaluation which rely\non gold standard reference translations, QE mod-\nels can produce quality estimates on unseen data,\nc\r2020 The authors. This article is licensed under a Creative\nCommons 3.0 licence, no derivative works, attribution, CC-\nBY-ND.and at runtime. QE has already proven its useful-\nness in many applications such as improving pro-\nductivity in post-editing of MT, and recent neural-\nbased approaches to QE have been shown to pro-\nvide promising performance in predicting quality\nof neural MT output (Fonseca et al., 2019).\nQE models are trained under full supervision,\nwhich requires to have quality-labelled training\ndata at hand. Obtaining annotated data for all the\ndomains and languages of interest is costly and of-\nten impractical. As a result, QE models can suf-\nfer from the same limitations as neural MT mod-\nels themselves, such as drastic degradation of their\nperformance on out-of-domain data. As an alter-\nnative, QE models are often trained under weak\nsupervision, using training instances labelled from\nnoisy or limited sources (e.g. data labelled with\nautomatic metrics for MT).\nHere, we focus on sentence-level QE, where\ngiven a pair of sentences (the source and its transla-\ntion), the aim is to train supervised Machine Learn-\ning (ML) models that can predict a quality label as\na numerical value. The most widely used label for\nsentence-level QE is the Human-mediated Transla-\ntion Edit Rate (HTER) (Snover et al., 2006), which\nrepresents the post-editing effort . HTER consists\nof the minimum number of edits a human language\nexpert is required to make in order to fix the trans-\nlation errors in a sentence, taking values between 0\nand 1. The main limitation of HTER is that it does\nnot represent an actual translation error rate, but\nits noisy approximation. 
The noise stems mostly\nfrom errors in the heuristics used to automatically\nalign the machine translation and its post-edited\nversion, but also from the fact that some edits rep-\nresent preferential choices of humans, rather than\nerrors. To overcome such limitations, QE mod-\nels can be improved by using data that has been\nFigure 1: Example of a German sentence (top) and its automatic translation into English. The HTER between the translation\nand its post-edited version (A NN-1) is 0:091, while the proportion of fine-grained expert-annotated MT errors (A NN-2), is\n6=23 = 0 :261.\ndirectly annotated for translation errors by human\nexperts. Figure 1 shows an example of the discrep-\nancy between the HTER score and the proportion\nofactual errors from expert annotation, for a raw\ntranslation and its post-edited version.\nAnnotations of MT errors usually follow fine-\ngrained error taxonomies such as the Multidimen-\nsional Quality Metrics (MQM) framework (Lom-\nmel et al., 2014). While such annotations provide\nhighly reliable labelled data, they are more expen-\nsive to produce than HTER. This often results in\ndatasets that are orders of magnitude smaller than\nHTER-based ones. This makes it hard to only\nuse such high-quality resources for training neural-\nbased QE models, which typically require large\namounts of training data.\nIn this paper, we use transfer-learning to develop\nQE models by exploiting the advantages of both\nnoisy and high-quality labelled data. We leverage\ninformation from large amounts of HTER data and\nsmall amounts of MQM annotations to train more\nreliable sentence-level QE models. Our aim is to\npredict the proportion of actual errors in MT out-\nputs. More fine-grained error prediction is left for\nfuture work.\nMain contributions: (1) We introduce a new\ntask of predicting the proportion of actual trans-\nlation errors using transfer-learning for QE1, by\nleveraging large scale noisy HTER annotations and\nsmaller but of higher quality expert MQM anno-\ntations; (2) we show that our simple yet effective\napproach using transfer-learning yields better per-\nformance at predicting the proportion of actual er-\nrors in MT, compared to models trained directly\non expert-annotated MQM or HTER-only data; (3)\nwe report experiments on four language pairs and\nboth statistical and neural MT systems.\n2 Related Work\nQuality labels for sentence-level QE Quirk\n(2004) introduced the use of manually created\n1https://github.com/sheffieldnlp/tlqequality labels for evaluating MT systems. With\na rather small dataset (approximately 350 sen-\ntences), they reported better results than those ob-\ntained with a much larger set of instances anno-\ntated automatically. Similarly, Specia et al. (2009)\nproposed the use of a (1-4) Likert scale represent-\ning a translator’s perception on quality with re-\ngard to the degree of difficulty to fix a transla-\ntion. However, sentence-level quality annotations\nappear to be subjective while agreement between\nannotators is generally low (Specia, 2011). 
More\nrecently, sentence-level QE models are most typi-\ncally trained on HTER scores (Bojar et al., 2013;\nBojar et al., 2014; Bojar et al., 2015; Bojar et al.,\n2016; Bojar et al., 2017; Specia et al., 2018; Fon-\nseca et al., 2019).\nTransfer-learning for QE Transfer-learning\n(TL) is a machine learning approach where models\ntrained on a source task are adapted to a related\ntarget task (Pan et al., 2010; Yosinski et al., 2014).\nTransfer-learning methods have been widely used\nin NLP, e.g., machine translation (Zoph et al.,\n2016) and text classification (Howard and Ruder,\n2018). Previous work on TL for QE focused on\nadapting models for labels produced by different\nannotators (Cohn and Specia, 2013; Shah and\nSpecia, 2016) which is different to this work.\nMore recent work on TL techniques for QE ex-\nplore pre-trained word representations. This was\nfirst done by P OSTECH (Kim et al., 2017), best per-\nforming neural-based architecture in the QE shared\ntask at WMT’17 (Bojar et al., 2017). P OSTECH\nre-purposes a recurrent neural network encoder\npre-trained on large parallel corpora, to predict\nHTER scores using multi-task learning at differ-\nent levels of granularity ( e.g., word, phrase, or sen-\ntence). Then, Kepler et al. (2019) used a predictor-\nestimator architecture similar to P OSTECH along-\nside very large scale pre-trained representations\nfrom BERT (Devlin et al., 2018) and XLM (Lam-\nple and Conneau, 2019), and ensembling tech-\nniques, to win the QE tasks at WMT’19 (Fonseca\net al., 2019). These models are pre-trained on un-\nlabelled data, as opposed to noisier labelled data,\nand aim to predict HTER scores, which is differ-\nent to the focus of this paper.\nTo the best of our knowledge, this paper is the\nfirst attempt to repurpose a QE model pre-trained\non one quality label to a model that predicts an-\nother quality label; we first train a model on noisy\nHTER data to predict post-editing effort, and lever-\nage its knowledge to train a model capable of pre-\ndicting the actual proportion of translation errors\nusing expert-annotated MQM data.\n3 Transfer-Learning Approach\nWe use inductive transfer-learning (Pan et al.,\n2010), where given a source learning task TSand a\ntarget taskTT, the aim is to improve performance\nin the latter by re-using knowledge from TS, where\nTS6=TT. Here,TScorresponds to predicting post-\nediting effort based on noisy HTER annotations,\nandTTto predicting the proportion of actual pro-\nportion of errors based on MQM annotations.\n3.1 Source task QE model\nBiRNN-HTER We use the BiRNN model pro-\nposed by Ive et al. (2018) as our base model to\npredict HTER scores. Figure 2 illustrates the high-\nlevel architecture of the model. Words in source\nand translated sentences are first mapped into em-\nbedding vectors. Then, the word embeddings are\npassed through bidirectional Gated Recurrent Unit\nencoders (Cho et al., 2014) to learn context-aware\nword representations in both the source and tar-\nget sentences. The two sentence representations\nare learned independently from each other before\nbeing concatenated as a weighted sum of their\nword vectors, generated by an attention mecha-\nnism. The concatenated representation is finally\npassed through a dense layer with sigmoid acti-\nvation to generate the quality estimate. BiRNN\nperformed competitively in the WMT’18 shared\ntask on QE (Specia et al., 2018) without rely-\ning on any parallel data nor expensive pre-training\nregimes such as the P OSTECH approach (Sec-\ntion 2). 
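For readers who prefer code to prose, a minimal sketch of this kind of architecture is given below. It is our own illustration in Keras, not the deepQuest implementation of Ive et al. (2018); vocabulary sizes, hidden dimensions and the simple learned attention pooling are assumptions.

```python
# Minimal sketch of a BiRNN-style sentence-level QE model (illustration only).
import tensorflow as tf
from tensorflow.keras import layers, Model

def build_birnn_qe(src_vocab=30000, tgt_vocab=30000, emb_dim=300, hid_dim=50):
    src_in = layers.Input(shape=(None,), dtype="int32", name="source_tokens")
    tgt_in = layers.Input(shape=(None,), dtype="int32", name="target_tokens")

    # Independent embedding + bidirectional GRU encoders for source and target.
    src_h = layers.Bidirectional(layers.GRU(hid_dim, return_sequences=True))(
        layers.Embedding(src_vocab, emb_dim)(src_in))
    tgt_h = layers.Bidirectional(layers.GRU(hid_dim, return_sequences=True))(
        layers.Embedding(tgt_vocab, emb_dim)(tgt_in))

    # Attention-weighted sum of the word vectors in each sentence.
    def attend(states):
        scores = layers.Dense(1)(states)                      # (batch, len, 1)
        weights = layers.Softmax(axis=1)(scores)
        return layers.Lambda(lambda x: tf.reduce_sum(x[0] * x[1], axis=1))([states, weights])

    sent_vec = layers.Concatenate()([attend(src_h), attend(tgt_h)])
    score = layers.Dense(1, activation="sigmoid", name="quality")(sent_vec)
    return Model([src_in, tgt_in], score)

model = build_birnn_qe()
model.compile(optimizer="adam", loss="mae")   # regression against HTER in [0, 1]
```

Training such a model on (source, translation, HTER) triples with a regression loss mirrors the source task described above.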
Overall, it is easier and faster to train\nwith a smaller number of parameters compared to\nPOSTECH , which makes it more suitable for this\ntask.3.2 Adaptation to the target task\nOur target task is to predict the proportion (be-\ntween 0 and 1) of actual MQM errors in a trans-\nlated sentence. Therefore, we adapt our BiRNN-\nHTER model to the target task.\nBiRNN-MQM TL We first replace the BiRNN-\nHTER output layer with two new layers: (1) a\nfully-connected layer followed by a rectified linear\nunit (Nair and Hinton, 2010) as the activation func-\ntion; and (2) a fully-connected output layer with a\nsigmoid activation to produce the predictions. We\ntrain these two layers on target task data by freez-\ning the rest of the model.\nBiRNN-MQM TL+FT We further fine-tune our\nBiRNN-MQM TLmodel on the target task data us-\ning a small learning rate following (Howard and\nRuder, 2018).\nHybrid Finally, we hypothesise that linguis-\ntic information ( e.g., number of tokens in the\nsource/target sentence, language model probabil-\nity of source/target sentence, etc.) might be com-\nplementary to the source-target representations ob-\ntained by our BiRNN-MQM TL+FT model. For\nthat purpose, we first extract a representation of\nthe source and translated sentence by removing\nthe BiRNN-MQM TL+FT output layer and then\nwe concatenate it with the widely used 17 black-\nbox sentence-level QE features extracted with the\nopen-source QuEst++ toolkit (Specia et al., 2015).\nThe joint neural and linguistic information of the\nsource and target sentences is fed into a linear re-\ngression2model using a L2 regularisation penalty.\n4 Experimental Setup\n4.1 Data\nFor our experiments, we use the freely available\nQT21 dataset3(Specia et al., 2017) used in the QE\nshared task (Bojar et al., 2017; Specia et al., 2018).\nThis dataset contains both post-edited (HTER)\nand error-annotated (MQM) data in four language\npairs: English into German, Latvian and Czech,\nand German into English; and phrase-based statis-\ntical (PBMT) and neural (NMT) translation mod-\nels. The annotation for errors was produced by\nprofessional translators using the MQM taxonomy\n2We also tried to jointly feed the features during fine-tuning\nbut did not yield better performance.\n3http://www.qt21.eu/resources/data/\nFigure 2: High-level architecture of the BiRNN sentence-\nlevel QE model.\nHTER data (Source) MQM data (Target)\n# sentences # sentences\nPBMT NMT PBMT NMT\nEN-DE 25,305 12,564 2,655 3,386\nEN-LV 10,561 11,116 3,284 3,244\nDE-EN 25,922 – 3,374 –\nEN-CS 37,725 – 3,460 –\nTable 1: Statistics for HTER and MQM data for statistical\n(PBMT) and neural (NMT) translation systems across lan-\nguage pairs.\nwith 21 error categories ( e.g., mistranslation, mor-\nphology, etc.). To obtain a score for the entire sen-\ntence, we divide the number of words annotated\nwith any error category by the length of the sen-\ntence. Predicting the actual type of MQM errors is\nleft for future work. Note that the MQM-annotated\nsentences are a subset of the HTER data ( i.e.some\nof them have both annotations), so we removed\nthese from the HTER data.\nBy design, all sentences selected for MQM an-\nnotation have at least one error. In order to increase\nthe size and variety of the MQM dataset, we dou-\nbled the number of MQM-annotated sentences by\ntaking sentences for which no edit was made dur-\ning PE ( i.e.perfect translations with zero MQM\nerrors). 
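As a small illustration of how the sentence-level target label just described can be derived, the sketch below computes the proportion of MQM-annotated tokens in a translated sentence. The span-based input format is an assumption made for the example, not the actual format of the QT21 release.

```python
# Illustrative computation of the target label: the proportion of target tokens
# covered by any MQM error annotation.
def mqm_error_proportion(tokens, error_spans):
    """tokens: list of target tokens; error_spans: (start, end) token index ranges
    covering words annotated with any MQM error category."""
    flagged = set()
    for start, end in error_spans:
        flagged.update(range(start, end))
    return len(flagged) / len(tokens) if tokens else 0.0

# Sanity check against the example in Figure 1: 6 of 23 tokens are errors.
assert abs(mqm_error_proportion(["w"] * 23, [(0, 6)]) - 0.261) < 0.01
```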
Table 1 summarises the statistics of the la-\nbelled data used for our experiments.\n4.2 Baseline and comparison models\nTo assess our models, we compare them against\nthe following baselines.\nBiRNN-HTER A BiRNN-HTER model trained\non the HTER data and used as is, to predict the pro-portion of MQM errors. That is using the source\ntask base model to predict the scores in the target\ntask.\nBiRNN-MQM This is the same BiRNN archi-\ntecture as our source task model (BiRNN-HTER)\nbut trained from-scratch on the MQM data without\ntransfer-learning.\nLR-QEfeat A feature-based approach used in\nthe WMT shared tasks as an official baseline. We\nuse the 17 black-box sentence-level QE features\nintroduced above (see Section 3.2) to train a lin-\near regression4model with a L2 regularization\npenalty.\n4.3 Model hyper-parameters\nFor the BiRNN-HTER model, we use default pa-\nrameters as in (Ive et al., 2018). For the BiRNN-\nMQM TL, we use a 5-fold Cross Validation ap-\nproach. We use a dense layer5of 50 and choose\nthe number of epochs in f1; ::;40g, training learn-\ning rate in\b1e\u00002;1e\u00003\tand fine-tuning learning\nrate in\b1e\u00003;1e\u00004\ton a validation set, by min-\nimising the Mean Absolute Error (MAE) between\nthe predicted score and gold standard labels. We\nalso experimented with two approaches for fine-\ntuning: (1) unfreezing all the layers at the same\ntime; and (2) a gradual unfreezing approach pro-\nposed by (Howard and Ruder, 2018). We use\nAdam (Kingma and Ba, 2014) with default pa-\nrameters, and a batch size of 100. For the Hybrid\nmodel, we optimise the L2 regularisation penalty.\nTable 2 reports on the optimal values determined\nby hyper-parameters optimisation.\n5 Results\nTables 3 and 4 show respectively the average ab-\nsolute Pearson’s rcorrelation co-efficient and the\nRoot Mean Square Error (the official metrics for\nthis task (Graham, 2015)) between actual and pre-\ndicted MQM error proportions in six combinations\nof MT models (PBMT, NMT) and language pairs\n(EN-DE, EN-LV , DE-EN and EN-CS).\nFirst, we observe that the baseline model (LR-\nQEfeat) performs fairly well on predicting the pro-\nportion of errors, especially for the EN-DE and\nEN-CS PBMT. However, it is not robust across\nlanguage pairs and types of translation systems.\n4We have also tested a Support Vector Regression with a ra-\ndial basis function kernel, but it yielded lower performance.\n5We did not observe noticeable differences in performance\nusing smaller or larger size in early experimentation.\nTraining Fine-tuning\nEpochs Learning rate Epochs/Method Learning rate\nEN-DE NMT 22 0.01 gradual unfreezing 0.001\nEN-LV NMT 16 0.001 gradual unfreezing 0.001\nEN-DE PBMT 15 0.001 1 0.001\nEN-LV PBMT 18 0.01 gradual unfreezing 0.001\nDE-EN PBMT 19 0.01 1 0.001\nEN-CS PBMT 18 0.001 gradual unfreezing 0.001\nTable 2: Optimal values selected for the adaptation of the source task sentence-level BiRNN QE model (BiRNN-HTER) to\nthe target task ( i.e.proportion of actual MT error in MT). 
For each language pair: number of epochs and learning rates for the\ntraining, and number of epochs or method used for the fine-tuning of the model.\nEN-DE NMT EN-LV NMT EN-DE PBMT EN-LV PBMT DE-EN PBMT EN-CS PBMT\n(1) LR-QEfeat 0.152 \u00060.06 0.404\u00060.19 0.585\u00060.02 0.471\u00060.06 0.329\u00060.02 0.635\u00060.02\n(2) BiRNN-HTER 0.297 \u00060.04 0.003\u00060.09 0.146\u00060.06 0.110\u00060.05 0.113\u00060.07 0.426\u00060.05\n(3) BiRNN-MQM 0.584 \u00060.04 0.542\u00060.05 0.619\u00060.05 0.583\u00060.03 0.606\u00060.08 0.757\u00060.01\n(4) BiRNN-MQM TL 0.575\u00060.04 0.596\u00060.06 0.644\u00060.02 0.612\u00060.03 0.594\u00060.02 0.787\u00060.03\n(5) BiRNN-MQM TL+FT 0.649\u00060.05 0.612\u00060.06 0.648\u00060.04 0.649\u00060.04 0.601\u00060.05 0.793\u00060.02\n(6) Hybrid 0.644 \u00060.05 0.522\u00060.28 0.658\u00060.04 0.655\u00060.03 0.610\u00060.05 0.795\u00060.02\nTable 3: Average absolute Pearson’s rcorrelation between actual and predicted MQM error proportions across all folds\nin six combinations of MT models and language pairs: (1)feature-based baseline (LR-QEfeat) – (2)BiRNN model trained\non HTER data, and used as is – (3)BiRNN model trained from scratch on MQM annotated data – (4)BiRNN MQM trained\nwith transfer-learning, i.e.trained on HTER data and adapted using MQM data – (5)BiRNN-MQM TLmodel fine-tuned with\nadditional training epochs – (6)fine-tuned BiRNN-MQM TL+FT model used as feature extractor along with the 17 sentence-\nlevel QE features and a linear regression algorithm (Hybrid). Measurements not significantly outperformed by any other overall,\nare underlined. Significance is computed with Hotelling-Williams test (Williams, 1959).\nEN-DE NMT EN-LV NMT EN-DE PBMT EN-LV PBMT DE-EN PBMT EN-CS PBMT\n(1) LR-QEfeat 0.112 \u00060.01 0.157\u00060.10 0.161\u00060.01 0.114\u00060.01 0.115\u00060.00 0.175\u00060.01\n(2) BiRNN-HTER 0.117 \u00060.01 0.523\u00060.01 0.250\u00060.01 0.460\u00060.01 0.605\u00060.03 0.333\u00060.01\n(3) BiRNN-MQM 0.093 \u00060.01 0.108\u00060.01 0.157\u00060.01 0.110\u00060.01 0.097\u00060.00 0.152\u00060.01\n(4) BiRNN-MQM TL 0.094\u00060.01 0.102\u00060.01 0.158\u00060.01 0.108\u00060.01 0.110\u00060.00 0.145\u00060.01\n(5) BiRNN-MQM TL+FT 0.091\u00060.01 0.105\u00060.01 0.152\u00060.01 0.100\u00060.01 0.100\u00060.00 0.139\u00060.01\n(6) Hybrid 0.087\u00060.01 0.212\u00060.26 0.149\u00060.01 0.098\u00060.01 0.097\u00060.00 0.138\u00060.01\nTable 4: Average absolute RMSE between actual and predicted MQM error proportions across all folds in six combinations\nof MT models and language pairs: (1)feature-based baseline (LR-QEfeat) – (2)BiRNN model trained on HTER data, and used\nas is – (3)BiRNN model trained from scratch on MQM annotated data – (4)BiRNN MQM trained with transfer-learning, i.e.\ntrained on HTER data and adapted using MQM data – (5)BiRNN-MQM TLmodel fine-tuned with additional training epochs –\n(6)fine-tuned BiRNN-MQM TL+FT model used as feature extractor along with the 17 sentence-level QE features and a linear\nregression algorithm (Hybrid). Measurements not significantly outperformed by any other overall, are underlined. Significance\nis computed with Hotelling-Williams test (Williams, 1959).\nSecond, the BiRNN-HTER model, trained on\nHTER data and used as is, is not able to predict\nthe proportion of actual MQM errors. Surprisingly,\nthe BiRNN-MQM model trained on MQM data di-\nrectly achieves relatively good performance for all\nlanguage pairs. 
This seems to confirm that (i) the\nBiRNN architecture, as simple as it may be, allows\nto train models that perform well while keeping\nlow the computational resources required; and (ii)\nthat HTER is a noisy approximation of the qual-\nity of a translation and post-edits are not actually\nwell-aligned to actual translation errors.\nOverall, the best performing model is BiRNN-\nMQM TLwith transfer-learning and fine-tuning,\nwhile our Hybrid model seems to further improveperformance in predicting quality on statistical MT\noutput. This is in line with recent findings demon-\nstrating the benefits of feature-based approaches\nfor predicting the quality of statistical MT, but not\nfor predicting the quality of neural MT, which is\nbetter modelled with learned representations using\nneural networks (Specia et al., 2018). This also\nconfirms our main hypothesis that noisy data, but\nfrom a closely related task, encapsulates useful in-\nformation that our TL model is able to leverage.\n6 Leveraging Pre-trained Token-level\nRepresentations\nAs reported in (Fonseca et al., 2019), state-of-\nthe-art models for supervised QE follow current\ntrend in the NLP community in 2019: leveraging\nlarge-scale pre-trained language models to com-\npute word- or sentence-level representations. Fol-\nlowing (Kepler et al., 2019) and their Transformer-\nbased Predictor-Estimator model, we considered\ntwo variants of our BiRNN-HTER model intro-\nduced in Section 3:\nLM-BiRNN By default, the weights of both the\nsource and target bidirectional GRU encoders of\nthe BiRNN model are first randomly initiated and\nthen learned, simultaneously, during training of\nthe task at hand. In this variant, we first learn\nthe weights of each encoder independently in a\nlanguage modelling fashion with a Cross-Entropy\nloss, using the additional resources provided by the\norganisers of the WMT’18 QE shared task6. We\nthen reuse the learned weights to initiate each en-\ncoder of the BiRNN model.\nBERT-BiRNN In this variant of the BiRNN\nmodel, the token-level representations are ex-\ntracted from a pre-trained multilingual base cased\nBERT (Devlin et al., 2018) model. Concretely, we\nreplace both the source and the target embedding\nlayers in Figure 2 by a single custom BERT em-\nbedding layer. During training, we fine-tune the\nweights of the word embeddings layer, as well as\nthe weights of the last 4 encoding layers of the\nBERT model.\nIn the rest of the paper, and similarly to the\nnaming of our models in Sections 3.1 and 3.2, we\nwill refer to as “BERT-BiRNN-HTER”, “BERT-\nBiRNN-MQM” and “BERT-BiRNN-MQM TL”,\nthe three variants of this model trained from\nscratch on the source task (-HTER), on the target\ntask (-MQM) and adapted to the target task using\nTL (-MQM TL), respectively.\n6.1 Experimental Results\nWe evaluate the benefit of using pre-trained token-\nlevel representations, by comparing the perfor-\nmance of our previously introduced BERT vari-\nants, against our base BiRNN model.\nPredicting HTER\nTable 5 summarises the performance of each\nmodel at predicting HTER scores on the HTER\ndata described in Table 1. We include the BiRNN-\nHTER models from Tables 3 and 4 (row (2)) for di-\nrect comparison when trained at predicting HTER.\n6http://statmt.org/wmt18/quality-estimation-task.htmlFirst, we observe that, overall, relying on pre-\ntrained token representation helps to improve the\nperformance of our BiRNN model, confirming the\nfindings in (Fonseca et al., 2019). 
Second, while\nrelying on advanced token representations such as\nthose extracted from BERT significantly help im-\nproving across language pairs and types of trans-\nlation, relying on simpler representations seems to\nmainly help on neural-based MT output, and with\nlimited gains.\nHowever, pre-trained representations usually re-\nquire to be fine-tuned for the task at hand. In our\nscenario of application, where only a few data-\npoints of the target task is available, this may be\na challenging task when using complex and deep\narchitectures such as the BERT model, which con-\ntains millions of parameters trained on large scale\ntraining data (BERT models are trained on the\nWikipedia dataset).\nPredicting MQM with Transfer-Learning\nWe replicated the experimental settings for induc-\ntive transfer-learning described in Section 4, by\nconsidering this time the BERT variant of our base\nBiRNN model. Our experimental results are sum-\nmarised in Tables 6 and 7, which report on Pear-\nson’s rcorrelation and RMSE, respectively. We\ninclude LR-QEfeat, the feature-based approach, as\nwell as the default BiRNN-HTER and BiRNN-\nMQM models from Tables 3 and 4 (rows (1)-(4))\nfor direct comparison when trained at predicting\nMQM error proportions.\nFirst, we observe that when our BiRNN model\nis trained at predicting the source task (HTER) and\nused as is to predict on the target task (MQM),\nmore advanced representations can help improve\nits performance (rows (2) vs.(b)). However, both\nvariants are usually outperformed by the baseline\nmodel (LR-QEfeat) on predicting the proportion of\nerrors, apart from EN-DE NMT.\nSecond, when trained from scratch on MQM an-\nnotated data, the BERT-BiRNN model is signif-\nicantly outperformed by our base BiRNN model\nacross all language pairs and types of translation\n(rows (3) vs.(c)). While we previously observed\nthe benefit of using advanced representations from\nBERT when at least 10,000 training datapoints are\navailable (see Table 5), we now observe degraded\nperformances when the number of training set is\nlower than 4,000 datapoints.\nThird, when trained on HTER data and adapted\nEN-DE NMT EN-LV NMT EN-DE PBMT EN-LV PBMT DE-EN PBMT EN-CS PBMT\n(2) BiRNN-HTER 0.290 0.436 0.347 0.416 0.505 0.480\n(a) LM-BiRNN-HTER 0.372 0.443 0.395 0.384 0.495 0.476\n(b) BERT-BiRNN-HTER 0.390 0.561 0.612 0.520 0.641 0.537\nTable 5: Absolute Pearson’s rcorrelation between actual and predicted HTER scores , for the HTER data introduced in\nTable 1: (2)default BiRNN model trained on HTER data – (a)BiRNN model with the weights of each source and target\nencoders pre-trained in a language modelling fashion using the additional resources of the QE shared task at WMT’18 – (b)\nBiRNN model with token-level representations extracted from a pre-trained multilingual base cased BERT model. Measure-\nments not significantly outperformed by any other overall, are underlined. 
Significance is computed with Hotelling-Williams\ntest (Williams, 1959).\nEN-DE NMT EN-LV NMT EN-DE PBMT EN-LV PBMT DE-EN PBMT EN-CS PBMT\n(1) LR-QEfeat 0.152 \u00060.06 0.404\u00060.19 0.585\u00060.02 0.471\u00060.06 0.329\u00060.02 0.635\u00060.02\n(2) BiRNN-HTER 0.297 \u00060.04 0.003\u00060.09 0.146\u00060.06 0.110\u00060.05 0.113\u00060.07 0.426\u00060.05\n(b) BERT-BiRNN-HTER 0.211 \u00060.03 0.220\u00060.04 0.467\u00060.04 0.302\u00060.05 0.311\u00060.09 0.175\u00060.03\n(3) BiRNN-MQM 0.584\u00060.04 0.542\u00060.05 0.619\u00060.05 0.583\u00060.03 0.606\u00060.08 0.757\u00060.01\n(c) BERT-BiRNN-MQM 0.227 \u00060.05 0.343\u00060.07 0.445\u00060.02 0.451\u00060.05 0.276\u00060.06 0.461\u00060.05\n(4) BiRNN-MQM TL 0.575\u00060.04 0.596\u00060.06 0.644\u00060.02 0.612\u00060.03 0.594\u00060.02 0.787\u00060.03\n(d) BERT-BiRNN-MQM TL 0.189\u00060.06 0.349\u00060.06 0.510\u00060.03 0.491\u00060.07 0.083\u00060.03 0.477\u00060.06\nTable 6: Average absolute Pearson’s rcorrelation between actual and predicted MQM error proportions across all folds in\nsix combinations of MT models and language pairs: (1)feature-based baseline (LR-QEfeat) – (2)default BiRNN model trained\non HTER data, and used as is – (b)BERT-BiRNN model trained on HTER data, and used as is – (3)BiRNN model trained\nfrom scratch on MQM annotated data – (c)BERT-BiRNN model trained from scratch on MQM annotated data – (4)BiRNN-\nMQM model trained with transfer-learning, i.e.trained on HTER data and adapted using MQM data. (d)BERT-BiRNN-MQM\nmodel trained with transfer-learning, i.e.trained on HTER data and adapted using MQM data. Measurements not significantly\noutperformed by any other overall, are underlined. Significance is computed with Hotelling-Williams test (Williams, 1959).\nEN-DE NMT EN-LV NMT EN-DE PBMT EN-LV PBMT DE-EN PBMT EN-CS PBMT\n(1) LR-QEfeat 0.112 \u00060.01 0.157\u00060.10 0.161\u00060.01 0.114\u00060.01 0.115\u00060.00 0.175\u00060.01\n(2a) BiRNN-HTER 0.117 \u00060.01 0.523\u00060.01 0.250\u00060.01 0.460\u00060.01 0.605\u00060.03 0.333\u00060.01\n(b) BERT-BiRNN-HTER 0.117 \u00060.01 0.249\u00060.01 0.184\u00060.01 0.146\u00060.00 0.206\u00060.01 0.294\u00060.01\n(3) BiRNN-MQM 0.093\u00060.01 0.108\u00060.01 0.157\u00060.01 0.110\u00060.01 0.097\u00060.00 0.152\u00060.01\n(c) BERT-BiRNN-MQM 0.113 \u00060.01 0.121\u00060.01 0.189\u00060.01 0.128\u00060.02 0.120\u00060.01 0.204\u00060.01\n(4) BiRNN-MQM TL 0.094\u00060.01 0.102\u00060.01 0.158\u00060.01 0.108\u00060.01 0.110\u00060.00 0.145\u00060.01\n(d) BERT-BiRNN-MQM TL 0.116\u00060.01 0.123\u00060.01 0.178\u00060.01 0.116\u00060.01 0.137\u00060.01 0.207\u00060.02\nTable 7: Average absolute RMSE between actual and predicted MQM error proportions across all folds in six combinations\nof MT models and language pairs: (1)feature-based baseline (LR-QEfeat) – (2)default BiRNN model trained on HTER data,\nand used as is – (b)BERT-BiRNN model trained on HTER data, and used as is – (3)BiRNN model trained from scratch\non MQM annotated data – (c)BERT-BiRNN model trained from scratch on MQM annotated data – (4)BiRNN-MQM model\ntrained with transfer-learning, i.e.trained on HTER data and adapted using MQM data. (d)BERT-BiRNN-MQM model trained\nwith transfer-learning, i.e.trained on HTER data and adapted using MQM data. Measurements not significantly outperformed\nby any other overall, are underlined. 
Significance is computed with Hotelling-Williams test (Williams, 1959).\nusing MQM data (rows (4) vs.(d)), we observe\nthat the performance of the BERT-BiRNN model\nslightly improve compared to training from scratch\non MQM data (row (c)) across all language pairs\nbut EN-DE NMT and DE-EN PBMT . For the lat-\nter, we even observe a significant drop in the per-\nformance of the model. There is no obvious ex-\nplanations for that, so we hope that further experi-\nments would help us to understand the reasons be-\nhind it. On the one hand, this confirms that fine-\ntuning deep architectures such as BERT to extract\nadvanced token level representation is a challeng-\ning task when only a few training instances is avail-\nable. On the other hand, we saw the benefit of us-ing advanced representation from pre-trained mod-\nels such as BERT, and plan to continue working\ntowards that research direction.\n7 Conclusions\nWe introduced a new task of predicting the propor-\ntion of actual errors in a translated sentence as an\nalternative to the commonly used noisy estimate\nHTER. The reported results from using induc-\ntive transfer-learning are particularly encouraging\nconsidering the simplicity of our BiRNN model.\nOur transfer-learning method helps to train mod-\nels which are better at predicting the proportion of\nactual errors for different language pairs and trans-\nlation systems, compared to models trained on the\ntarget task only.\nHowever, whereas we were expecting to observe\nsignificant gains with the use of more advanced\ntoken-level pre-trained representations (here from\nBERT), we report drastic degradation in perfor-\nmances for this configuration when re-purposing\nthe QE models via transfer-learning. These some-\nwhat counter-intuitive results are an indication that\nfurther work can be done in this area to refine our\ntransfer-learning approach, as the use of large scale\npre-trained representations has become a common\npractice in NLP applications, including QE.\nIn addition to this, we plan in furture to estimate\nthe quality of machine translation using more fine-\ngrained MQM annotations for subsentence-level\nQE.\nAcknowledgements\nThis work was supported by the Bergamot project\n(EU H2020 Grant No. 825303).\nReferences\nBlatz, John, Erin Fitzgerald, George Foster, Simona\nGandrabur, Cyril Goutte, Alex Kulesza, Alberto San-\nchis, and Nicola Ueffing. 2004. Confidence estima-\ntion for machine translation. In COLING .\nBojar, Ondrej, Christian Buck, Chris Callison-\nBurch, Christian Federmann, Barry Haddow, Philipp\nKoehn, Christof Monz, Matt Post, Radu Soricut, and\nLucia Specia. 2013. Findings of the 2013 Work-\nshop on Statistical Machine Translation. In Eighth\nWorkshop on Statistical Machine Translation , WMT,\npages 1–44, Sofia, Bulgaria.\nBojar, Ondrej, Christian Buck, Christian Federmann,\nBarry Haddow, Philipp Koehn, Johannes Leveling,\nChristof Monz, Pavel Pecina, Matt Post, Herve\nSaint-Amand, Radu Soricut, Lucia Specia, and Ale ˇs\nTamchyna. 2014. Findings of the 2014 workshop on\nstatistical machine translation. In Ninth Workshop\non Statistical Machine Translation , WMT, pages 12–\n58, Baltimore, Maryland.\nBojar, Ond ˇrej, Rajen Chatterjee, Christian Federmann,\nBarry Haddow, Matthias Huck, Chris Hokamp,\nPhilipp Koehn, Varvara Logacheva, Christof Monz,\nMatteo Negri, Matt Post, Carolina Scarton, Lucia\nSpecia, and Marco Turchi. 2015. 
Findings of the\n2015 Workshop on Statistical Machine Translation.\nInProceedings of the Tenth Workshop on Statistical\nMachine Translation , pages 1–46, Lisbon, Portugal,\nSeptember.\nBojar, Ond ˇrej, Rajen Chatterjee, Christian Federmann,\nYvette Graham, Barry Haddow, Matthias Huck,Antonio Jimeno Yepes, Philipp Koehn, Varvara\nLogacheva, Christof Monz, Matteo Negri, Aure-\nlie Neveol, Mariana Neves, Martin Popel, Matt\nPost, Raphael Rubino, Carolina Scarton, Lucia Spe-\ncia, Marco Turchi, Karin Verspoor, and Marcos\nZampieri. 2016. Findings of the 2016 conference\non machine translation. In Proceedings of the First\nConference on Machine Translation , pages 131–198,\nBerlin, Germany, August.\nBojar, Ond ˇrej, Rajen Chatterjee, Christian Federmann,\nYvette Graham, Barry Haddow, Shujian Huang,\nMatthias Huck, Philipp Koehn, Qun Liu, Varvara Lo-\ngacheva, Christof Monz, Matteo Negri, Matt Post,\nRaphael Rubino, Lucia Specia, and Marco Turchi.\n2017. Findings of the 2017 conference on machine\ntranslation (wmt17). In Proceedings of the Sec-\nond Conference on Machine Translation, Volume 2:\nShared Task Papers , pages 169–214, Copenhagen,\nDenmark, September.\nCho, Kyunghyun, Bart Van Merri ¨enboer, Caglar Gul-\ncehre, Dzmitry Bahdanau, Fethi Bougares, Holger\nSchwenk, and Yoshua Bengio. 2014. Learning\nphrase representations using rnn encoder-decoder\nfor statistical machine translation. arXiv preprint\narXiv:1406.1078 .\nCohn, Trevor and Lucia Specia. 2013. Modelling\nannotator bias with multi-task gaussian processes:\nAn application to machine translation quality esti-\nmation. In 51st Annual Meeting of the Association\nfor Computational Linguistics , ACL, pages 32–42,\nSofia, Bulgaria.\nDevlin, Jacob, Ming-Wei Chang, Kenton Lee, and\nKristina Toutanova. 2018. Bert: Pre-training of\ndeep bidirectional transformers for language under-\nstanding. arXiv preprint arXiv:1810.04805 .\nFonseca, Erick, Lisa Yankovskaya, Andr ´e FT Martins,\nMark Fishel, and Christian Federmann. 2019. Find-\nings of the WMT 2019 Shared Tasks on Quality Es-\ntimation. In Proceedings of the Fourth Conference\non Machine Translation (Volume 3: Shared Task Pa-\npers, Day 2) , pages 1–10.\nGraham, Yvette. 2015. Improving evaluation of ma-\nchine translation quality estimation. In Proceed-\nings of the 53rd Annual Meeting of the Association\nfor Computational Linguistics and the 7th Interna-\ntional Joint Conference on Natural Language Pro-\ncessing (Volume 1: Long Papers) , pages 1804–1813,\nBeijing, China, July. Association for Computational\nLinguistics.\nHoward, Jeremy and Sebastian Ruder. 2018. Univer-\nsal language model fine-tuning for text classification.\narXiv preprint arXiv:1801.06146 .\nIve, Julia, Fr ´ed´eric Blain, and Lucia Specia. 2018.\nDeepQuest: a framework for neural-based qual-\nity estimation. In Proceedings of COLING 2018,\nthe 27th International Conference on Computational\nLinguistics: Technical Papers , Santa Fe, new Mex-\nico.\nKepler, F ´abio, Jonay Tr ´enous, Marcos Treviso, Miguel\nVera, Ant ´onio G ´ois, M Amin Farajian, Ant ´onio V\nLopes, and Andr ´e FT Martins. 2019. Unbabel’s par-\nticipation in the wmt19 translation quality estimation\nshared task. arXiv preprint arXiv:1907.10352 .\nKim, Hyun, Jong-Hyeok Lee, and Seung-Hoon Na.\n2017. Predictor-estimator using multilevel task\nlearning with stack propagation for neural quality es-\ntimation. In Proceedings of the Second Conference\non Machine Translation , pages 562–568.\nKingma, Diederik P and Jimmy Ba. 2014. 
Adam: A\nmethod for stochastic optimization. arXiv preprint\narXiv:1412.6980 .\nLample, Guillaume and Alexis Conneau. 2019. Cross-\nlingual language model pretraining. arXiv preprint\narXiv:1901.07291 .\nLommel, Arle Richard, Aljoscha Burchardt, and Hans\nUszkoreit. 2014. Multidimensional quality metrics\n(MQM): A framework for declaring and describing\ntranslation quality metrics. Tradum `atica: tecnolo-\ngies de la traducci ´o, 0(12):455–463, 12.\nNair, Vinod and Geoffrey E Hinton. 2010. Rectified\nlinear units improve restricted boltzmann machines.\nInProceedings of the 27th international conference\non machine learning (ICML-10) , pages 807–814.\nPan, Sinno Jialin, Qiang Yang, et al. 2010. A survey on\ntransfer learning. IEEE Transactions on knowledge\nand data engineering , 22(10):1345–1359.\nQuirk, Christopher. 2004. Training a sentence-level\nmachine translation confidence measure. In LREC .\nCiteseer.\nShah, Kashif and Lucia Specia. 2016. Large-scale mul-\ntitask learning for machine translation quality esti-\nmation. In Conference of the North American Chap-\nter of the Association for Computational Linguistics:\nHuman Language Technologies , pages 558–567, San\nDiego, California.\nSnover, Matthew, Bonnie Dorr, Richard Schwartz, Lin-\nnea Micciulla, and John Makhoul. 2006. A study of\ntranslation edit rate with targeted human annotation.\nInProceedings of association for machine transla-\ntion in the Americas , volume 200.\nSpecia, Lucia, Marco Turchi, Nicola Cancedda, Marc\nDymetman, and Nello Cristianini. 2009. Estimating\nthe Sentence-Level Quality of Machine Translation\nSystems. In 13th Annual Conference of the Euro-\npean Association for Machine Translation , EAMT,\npages 28–37, Barcelona, Spain.\nSpecia, Lucia, Gustavo Paetzold, and Carolina Scarton.\n2015. Multi-level translation quality prediction with\nquest++. Proceedings of ACL-IJCNLP 2015 System\nDemonstrations , pages 115–120.Specia, Lucia, Kim Harris, Fr ´ed´eric Blain, Aljoscha\nBurchardt, Viviven Macketanz, Inguna Skadina,\nMatteo Negri, , and Marco Turchi. 2017. Transla-\ntion quality and productivity: A study on rich mor-\nphology languages. In Machine Translation Summit\nXVI, pages 55–71, Nagoya, Japan.\nSpecia, Lucia, Fr ´ed´eric Blain, Varvara Logacheva,\nRam ´on Astudillo, and Andr ´e F. T. Martins. 2018.\nfindings of the wmt 2018 shared task on quality esti-\nmation. In Proceedings of the Third Conference on\nMachine Translation, Volume 2: Shared Task Papers ,\npages 702–722, Belgium, Brussels, October.\nSpecia, Lucia. 2011. Exploiting objective annotations\nfor measuring translation post-editing effort. In Pro-\nceedings of the 15th Conference of the European As-\nsociation for Machine Translation , pages 73–80.\nUeffing, Nicola and Hermann Ney. 2007. Word-level\nconfidence estimation for machine translation. Com-\nputational Linguistics , 33(1):9–40.\nWilliams, Evan James. 1959. Regression Analysis , vol-\nume 14. Wiley, New York, USA.\nYosinski, Jason, Jeff Clune, Yoshua Bengio, and Hod\nLipson. 2014. How transferable are features in deep\nneural networks? In Advances in neural information\nprocessing systems , pages 3320–3328.\nZoph, Barret, Deniz Yuret, Jonathan May, and Kevin\nKnight. 2016. Transfer learning for low-resource\nneural machine translation. In Proceedings of the\n2016 Conference on Empirical Methods in Natural\nLanguage Processing , pages 1568–1575.",
"main_paper_content": null
}
|
{
"decision": "Unknown",
"reviews": []
}
| 0 | 0 |
[] |
[] |
{
"id": "UHQ2TMYv-Cl",
"year": null,
"venue": "EAMT 2005",
"pdf_link": "https://aclanthology.org/2005.eamt-1.28.pdf",
"forum_link": "https://openreview.net/forum?id=UHQ2TMYv-Cl",
"arxiv_id": null,
"doi": null
}
|
{
"title": "Building a WSD module within an MT system to enable interactive resolution in the user's source language",
"authors": [
"Constantin Orasan",
"Ted Marshall",
"Robert Clark",
"Le An Ha",
"Ruslan Mitkov"
],
"abstract": "Constantin Orasan, Ted Marshall, Robert Clark, Le An Ha, Ruslan Mitkov. Proceedings of the 10th EAMT Conference: Practical applications of machine translation. 2005.",
"keywords": [],
"raw_extracted_content": "EAMT 2005 Conference Proceedings 205 Building a WSD Module within an MT system to enable \ninteractive resolution in the user’s source language \nConstantin Orasan*, Ted Marshall+, Robert Clark+, Le An Ha*, Ruslan Mitkov* \n* Research Group in Com putational Linguistics, \nUniversity of Wolverhampton, UK \n+ Translution, UK \[email protected], Ted.Marshall@translu tion.com, [email protected], \[email protected], [email protected] \nAbstract. Ambiguous words pose very serious problems to existing machine translation \nsystems. The Translation Checker, a system part of Translution Central addresses this prob-\nlem by allowing users to disambiguate words in their own language, with little or no \nknowledge of the target language. In order to achieve this, a multilingual dictionary is being \ndeveloped using EuroWordNet. Languages are too ambiguous to feasibly present users with \nall the senses available for a word. To this end, a suite of language processing modules has \nbeen developed to reduce the ambiguity of words. The implemented modules and an evaluation of their influence on English, Fren ch, German, Italian and Spanish corpora are \npresented. The results of the evaluation show that the propos ed approach dramatically re-\nduces the ambiguity of the language. \n1. Introduction \nAmbiguous words pose a very serious problem \nto existing machine translation systems because in many situations the translation engines do not know how to handle these words. This problem can be particularly serious for organisations which heavily rely on machine translation for their everyday operation. The work presented in this paper is part of a larger project to develop technologies which will enable people and or-ganisations to improve communications by re-\nmoving language barriers.\n1 The technology is \nbased on automatically redirecting e-mails, web pages and electronic documents to a centrally based translation facility termed Translution \nCentral . Users simply write emails in their own \nlanguage in the normal way, press the Send but-ton, and the recipients will receive the email automatically translated into their own lan-guage. Similarly, incoming emails will be trans-lated into the user’s own language. \nThe initial release of the product will sup-\nport five European languages: English, French, \n \n \n1 This project was initiated by Translution in 2002 . German, Spanish and Ita lian. Translution has \ndeveloped three different product suites, aimed at different sectors of the market, Translution \nLight, Translution Pro and Translution Corpo-\nrate\n2. \nThis paper presents the Translation Checker, \na tool integrated in the Translation Central which enables users to specify the meaning of polyse-\nmous words without the need of knowing how they translate in the target language or defining the domain of the source document. The paper is structured as follows: Section 2 discusses the structure of the translation checker. An evalua-tion of the system is presented in Section 3. The paper finishes with conclusions. \n3 \n \n2 More information about these products can be \nfound at http://www.translution.com \n3 This is a collabora tive work between Trans-\nlution and the University of Wolverhampton the main objective being to develop tools which allow users to improve Machine Translation quality designed with the non-linguist in mind. \nOrasan, et. al. \n206 EAMT 2005 Conference Proceedings 2. 
Translation Checker: a tool to \ndeal with ambiguous words in MT \nTranslation Checker is a product designed to \nhelp the translation process by allowing users to specify which meaning of an ambiguous word is to be used, without the need of knowing how to translate it in the target language or the need to define domain-dependent meanings. In order to work, this product needs to have access to a multilingual dictionary that allows a user to ob-tain a definition for any word in a text. On the basis of this definition and of the information present in the multilingual dictionary, the user can indicate which meaning is to be used, thereby producing its translation in the target language without the necessity of knowing the word in the target language. For example for the verb to address the following three defini-\ntions will be displayed: \n• to speak to someone formally \n• to put an address on something \n• to deal with something particular \nbut the user will not need to know that in French \nthe first sense in translated by s'adresser à , the \nsecond one by mettre une adresse, whilst the \nthird one by traiter . \nBecause it was noticed that natural language \nis highly ambiguous\n4, it is not feasible to re-\nquire the marking of all words with their senses, \nand ways to automatically reduce the number of ambiguous words have to be identified. In this section, the main features of the multilingual dictionary and the tools necessary to reduce this ambiguity are presented. \n2.1. Multilingual dictionary \nThe dictionary used by the Translation Checker \nis based on the EuroWordNet, a multilingual da-tabase with wordnets for several European lan-guages developed between 1996 and 1999 with funding from the European Union (EuroWord-Net). The wordnets are structured in the same way as the American WordNet (WordNet) in \n \n \n4 Using the English and French WordNets it was \ndetermined that for the top 1000 English words the averages number of senses per word is 7.83, whilst for the top 1000 French words is 4.62. terms of synsets (sets of synonymous words) \nwith basic semantic relations between them. In addition, these wordnets are linked to an Inter-Lingual-Index (ILI), based on the American WordNet. Via this index, the languages are in-terconnected so that it is possible to go from words in one language to similar words in any other language. In addition to the synonymy re-lations, the wordnets also contain a large num-ber of other relations, such as hypernymy (i.e. more general concepts), hyponymy (i.e. more specific concepts), etc. Because these relations cannot be directly used in the translation proc-ess, it was decided not to include them in the multilingual dictionary. The reason for employ-ing WordNet as a basis for the dictionary is be-cause work has been either performed or is cur-\nrently going on a wide number of languages of-fering scalability to the proposed method. \nEven though the languages in the EuroWord-\nNet are supposed to be linked via a language independent index, because this index is based on the American WordNet, it inherited all its \nweaknesses. For this reason, soon after we started this project it became obvious that work was required to maximise the usefulness of the dic-tionary. After investigating the ILI (i.e. the English WordNet) it became clear that it can be improved in the following ways. \n• Remove senses which are too specific, too \nrare or too obscure (e.g. 
the verb accept \nwith the meaning be sexually responsive , \nthe adjective dark with the meaning in a \nstate of intellectual or social darkness , the \nnoun account with the meaning turned her \nwriting skills to good account ) \n• Conflate senses which are too close for the \neveryday user (e.g. two senses of the verb gather : believe to be the case and conclude \nfrom evidence were conflated in one sense \nto understand/believe so mething even though \nit has not been explicitly stated ) \n• Add missing senses (e.g. for the noun gate \nthere is no meaning for the place where you \nboard a plane at the airport ) \nIn addition to the work undertaken on the ILI, a \nfurther step which needed to be taken for all five languages was to provide definitions for the words. The English WordNet has a large number of glosses which were provided by the \nBuilding a WSD module within an MT system \nEAMT 2005 Conference Proceedings 207 lexicographers in order to facilitate its creation, \nbut in many cases these glosses are not appro-priate as definitions (e.g. for the verb accumu-\nlate the gloss is Journals are accumulating in \nmy office and therefore it was replaced with the \ndefinition to (be) collect(ed) or gather(ed) over \na period of time ). The rest of the WordNets do \nnot have definitions attached to words, and therefore have to be introduced from scratch. \nThe third type of work which needed to be \nperformed on the WordNet is to enrich the WordNets for languages ot her than English. In-\nvestigation of these wordnets revealed that the quantity of information varies enormously from one language to another. Table 1 presents the number of synsets present in different lan-guages. As a result of this finding, it became evident that in order to have a high quality re-source, it is necessary to have a similar number of synsets across languages. In addition it was necessary to add adjectives and adverbs for French, German and Spanish. \nThe work necessary to improve the multilin-\ngual dictionary was undertaken at the Univer-sity of Wolverhampton and it involved all the steps described above. For each language, na-tive speakers were employed to perform the de-scribed steps. In order to speed up the produc-tion of definitions in languages other than Eng-lish, the English definitions provided by our English expert were automatically translated and presented to the other language experts. This approach had limited success because only in a few cases the definitions were correctly or nearly correctly translated. There are several explanations for this. First of all, many of the words in the definitions are ambiguous which makes the translation quite difficult. In addition, many of the definitions are not grammatical sentences, making them difficult to translate even for humans. 2.2. Implementation of the language \nprocessors \nAs aforementioned, presenting all the alterna-\ntive meanings is not a practical solution to deal with ambiguous words because of the high am-biguity some words exhibit. This problem was addressed by implementing several language processing filters which reduce the ambiguity. At present the filters implemented in the system are: \n• Part-of-speech taggers \n• Named entity recognisers \n• Identification of multiword units \n• Cross-lingual references \n• Document and domain sense selection \nEach of these filters is presented in detail in the \nremaining of the section. \n2.2.1. 
Part-of-speech taggers \nPart-of-speech tagging is the process of assign-\ning labels to words which indicate their gram-matical category. In the context of the WSD project, this information is important for two reasons: \n• The use of part-of-speech enables us to re-\nduce the number of senses possible for a word. For example, the word bank has 10 \nsenses as a noun and 8 senses as a verb. In a sentence like He banks the money , when us-\ning POS tagging, the word bank will be \nidentified as a verb, and the number of meanings the users have to choose from is reduced by 10.\n5 \n• In computational linguist ics, part-of-speech \ninformation is considered basic information which is widely used to improve the per-formance or make possible other tasks such \n \n \n5 The numbers reported in this and next section \nuse EuroWordNet and not th e enriched dictionaries. English French German Italian Spanish \nSynsets 91803 22417 10284 14967 28066 \nNouns 60647 17528 7594 11537 24047 \nVerbs 11597 4892 2688 1653 4019 \nAdjectives 16491 0 4 1573 0 \nAdverbs 3263 1 0 206 0 \nTable 1: Total number of synsets and words in WordNets \nOrasan, et. al. \n208 EAMT 2005 Conference Proceedings as named entity recognition and identifica-\ntion of multiword units. \nThe part-of-speech taggers implemented in the \nTranslation Checker are ba sed on Hidden Markov \nModels (HMM) which confers them language \nindependence. On the basis of the error analy-sis, a set of rules which correct frequent errors of the part-of-speech tagg er has been written for \neach language. Examples of rules are: \n \n2.2.2. Named entity recognition \nNamed entity recognition is the task which identi-\nfies whether sequences of words refer to entities that have special meaning (e.g. names of peo-ple, locations, organisations, etc.) This informa-tion is important for this project for two rea-sons: \n• Named entities contain words which have \nseveral senses, but their senses should not be shown to the user because they do not need disambiguation in this context. In a sentence such as Bill Gates is the youngest \nmulti-billionaire in the history of the United \nStates , if we can identify Bill Gates and \nUnited States as Named Entities, we can \neliminate 13 senses of bill, 7 senses of gate, \n8 senses of united and 11 senses of states \n(this eliminates 39 senses in total). \n• The identification of named entities is im-\nportant for the machine translation process because they either should not be translated, or when they are translated, this has to be done using special approaches such as table lookup (Babych and Hartley, 2003) \nIn the context of this project, it is not necessary \nto perform complete named entity recognition. It is enough to identify them, without determin-ing their type. Sometimes this task is referred to as normalisation . A language independent named \nentity engine has been implemented in order to facilitate the identification of named entities in different languages. This engine relies on lan-guage specific gazetteers and language specific rules. 2.2.3. Multiword units \nMultiword units are sequences of tokens which \nhave a different meaning than the individual parts which constitute them. Identification of the multiword units can reduce the number of choices a user of the Translation Checker is making. 
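Before turning to the remaining filters, here is a rough sketch of how the part-of-speech and named-entity filters just described compose. The toy sense inventory, tag names and helper functions below are invented for illustration; they are not the actual Translution modules or EuroWordNet data.

TOY_INVENTORY = {
    'bank': {'NOUN': ['financial institution', 'sloping land beside a river'],
             'VERB': ['deposit money', 'tilt an aircraft']},
    'gates': {'NOUN': ['movable barriers', 'boarding points at an airport']},
}

def pos_filter(token, pos, inventory):
    # keep only the senses whose part of speech matches the tagger output
    return {pos: inventory.get(token, {}).get(pos, [])}

def named_entity_filter(is_part_of_named_entity, senses):
    # words inside a named entity need no disambiguation at all
    return {} if is_part_of_named_entity else senses

# 'He banks the money': the tagger labels the word as a verb, so the noun
# senses are never shown to the user.
senses = pos_filter('bank', 'VERB', TOY_INVENTORY)
senses = named_entity_filter(False, senses)
print(senses)

# 'Bill Gates': recognised as part of a named entity, so nothing is displayed.
print(named_entity_filter(True, pos_filter('gates', 'NOUN', TOY_INVENTORY)))

Each filter only removes candidates, so the senses finally offered to the user are a subset of the original inventory; the multiword-unit and cross-lingual reference filters discussed below prune the candidate set in the same incremental way.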
For example, if the system can identify multiword units such as prime minister , earn-\nings per share , there is no need to disambiguate \ncomponent words ( prime , minister , earnings , share \neliminating 30 senses in total). At present only \nthe multiword units which are nouns are identi-fied and dealt with, but in the future it is in-tended to tackle other types of multiword units. \n2.2.4. Cross-language reference \nAfter analysing the data, it was noticed that there \nare cases where all the senses of a word in a language are translated using the same word in the target language (for example all the senses of the word opponent can be translated to \nFrench using the same word adversaire ). In this \ncontext, even if the word is ambiguous, it is not necessary to ask the user of the system to dis-ambiguate it, because the word which translates it can be accurately determined. This process is referred to as cross-lingual reference . The ap-\nproach can be extended when there are several target languages. \n2.2.5. Document and domain sense selection \nThe filters presented in S ections 2.2.1–2.2.4 try \nto reduce the number of senses which are pre-sented to a user for a word. In addition to these filters, two others which do not reduce the num-ber of senses which are presented to the user, but which influence the way they are displayed to the user, and therefore help the decision process, were tried. \nIn the one sense per discourse setting, the \nTranslation Checker assumes that the user uses only one sense in the document, and when users choose a specific sense fo r a word, all the ap-\npearances of the same word in the text after the \nmarked one will be automatically considered with the same sense. Given that this is not al-ways the case the users can override this sense. \nThe subject of the text is another way to help \nthe user in the disambiguation process. The same \nsubject prioritisation determines which of the \nsenses will be displayed higher in the list of THE ADJ X OF Æ X is tagged as NOUN \nADJ COMMA X AND ADJ Æ X is tagged as ADJ \nBuilding a WSD module within an MT system \nEAMT 2005 Conference Proceedings 209 senses on the basis of senses selected for other \nwords. This process relies on an annotated ver-sion of the English WordNet, where all the syn-sets were annotated with a subject label (Magnini and Speranza, 2002). The ontology of senses used in the annotation is quite large, but we de-cided to keep only the 51 more general subjects. \n3. Evaluation of the reduction \ntechniques \nThe main aim of the evaluation was to auto-\nmatically determine the effectiveness of elimi-nation techniques implemented in the Transla-tionChecker. In addition, a small scale user-focused \nevaluation was performed on English to French translations. The experiments were performed on an improved version of the multilingual dic-tionary. Tables 2 and 3 present the number of words and synsets of the multilingual dictionar-ies. As can be seen the number of words and synsets is much lower than the ones which were in the original dictionary. The reason for this is that in the current dictionary we included only the words which have been checker. However, the words to be included in the dictionary were selected in such a way th at they are the most \nfrequent ones and ensure over 80% coverage of texts. In future we plan to continue working on the dictionary to enlarge it. \n3.1. 
Automatic evaluation \nIn order to assess the influence of the filters on \nthe number of senses available to a user, the system was run several times, each time switch-ing on an additional filter. For the experiments \ncorpora of 100,000 words per language have been extracted from th e Multilingual Corpora \nfor Co-operation (MLCC) distributed by ELDA. These corpora contain newswire texts in Eng-lish, German, French, Spanish and Italian. Table 4 presents the number of words from the corpora which appear in dictionary and total num-ber meanings for these words when no filtering is considered. As can be seen, for all the lan-guages but German, more than 40% of the words can be processed by the TranslationChecker. The lower value for German will have to be in-vestigated further. The remaining words are ei-\nther closed class words such as prepositions, ar-ticles, conjunctions, or are unknown words and have to be discarded. \nLanguage # ambiguous # meanings\nEnglish 41,745 185,086 \nFrench 40,631 157,278 \nGerman 23,841 56,821 \nSpanish 42,959 158,522 \nItalian 41,489 142,850 \nTable 4: Number of ambiguous words \nThe reduction of average number of senses per dictionary word (i.e. word which appears in our dictionary) is presented in Table 5. The col-umns of the table correspond to the different languages which can be processed by the Trans-lationChecker, whilst the rows correspond to different filters: Nothing when no filtering is \napplied, +POS when the part-of-speech tagger \nis used, +NE when the named entity is switched English French German Italian Spanish \nSynsets 9260 6459 7299 5410 5541 \nNouns 4579 3219 3632 2757 2816 \nVerbs 2897 2043 2331 1705 1733 \nAdjectives 1678 1042 1150 826 884 \nAdverbs 206 155 186 122 108 \nTable 2: Total number of synsets in multilingual dictionary \n English French German Italian Spanish \nNouns 3816 3036 3329 3388 3350 \nVerbs 2706 2069 1951 2336 2183 \nAdjectives 1671 1258 965 1242 1077 \nAdverbs 233 193 177 221 166 \nTable 3: The number of words in the multilingual dictionary \nOrasan, et. al. \n210 EAMT 2005 Conference Proceedings on, +MWU when the multiword units are con-\nsidered, and +ST when the same translation \nmodule is turned on. Each of the modules is ap-plied on the top of the other.\n6 \nFor the same translation the results reported \nin Table 5 correspond to the situation when for each of the other four languages is possible to find one word which can be used in the transla-tion for all the senses. A bigger reduction in the average number of senses as a result of the Same translation module is obtained when only \none target language is considered. The results in this situation are presented in Table 6. The rows of correspond to the source language, whilst the columns to the target language. As can be seen the results vary a lot from one pair of languages to another. \nInvestigation of Table 3 reveals that part-of-\nspeech tagging leads to a massive reduction in the average number of senses. The named entity recogniser has quite a small influence, but closer investigation of the corpora indicated that they do not contain a large number of named entities. Identification of multiword units proved \nmore beneficial than named entity recognition, a result which was not expected, but which can be justified by the nature of the corpora. A small \n \n \n6 Actually it is not possible to apply the named \nentity recogniser without running the part-of-speech tagger. 
For this reason it was not possible to report the influence of individual modules on the reduction in the number of senses displayed. but useful reduction was achieved by the same \ntranslation module. As seen in Table 6 this re-duction is larger when only one target language is used. \n3.2. User-focused evaluation \nIn order to see how users find the Translation \nChecker, a user-focused experiment was con-ducted. In this experiment, the user was asked to use the Translation Checker from English to French with different settings on several small texts.\n7 The main purpose of this experiment was \nnot to record the time necessary to annotate the text with senses, but to get feedback about how the user feels while using the program. \nThe best combination determined empiri-\ncally by the user was POS+NE+MWU+One \nsense per discourse . The same domain prioriti-\nsation did not prove useful because, as a result of constantly changing the place of a definition in the list according to the current domain, the user was confused. The One sense per dis-\ncourse module did not prove as accurate as ex-\npected (i.e. there were quite a few texts where the same word was used with more than one sense), but overall, it reduced the time neces-sary to process texts. \n \n \n7 Actually for this experiment we did not use the \nTranslationChecker, but a t ool which replicates its \nfunctionality, but it is not integrated in the Transla-tionCentral. English French German Spanish Italian \nNothing 4.43 3.87 2.38 3.69 3.44 \n+POS 2.82 2.72 2.03 2.69 2.07 \n+POS+NE 2.80 2.72 1.97 2.47 2.03 \n+POS+NE+MWU 2.79 2.72 1.97 2.44 2.01 \n+POS+NE+MWU+ST 2.77 2.67 1.87 2.38 1.97 \nTable 5: The average number of senses per dictionary word \n EN FR DE ES IT \nEN - 2.55 2.68 2.59 2.53 \nFR 2.50 - 2.63 2.56 2.52 \nDE 1.67 1.80 - 1.75 1.75 \nES 2.25 2.31 2.35 - 2.26 \nIT 1.78 1.81 1.94 1.84 - \nTable 6: The average number of sense when the Same translation module is used \nBuilding a WSD module within an MT system \nEAMT 2005 Conference Proceedings 211 4. Conclusions and future plans \nThis paper has addressed the problem of poly-\nsemous words in machine translations by pro-posing the Translation Checker, a tool which re-lies on a multilingual dictionary and a series of natural language filters to help users disam-biguate such words. Evaluation conducted on English, French, German, Spanish and Italian has revealed that each of the proposed filters help reducing the ambiguity. \nFor future we plan to continue enriching the \ndictionary in order to include more words. We also plan to continue the evaluation in several directions. The first one will focus on the evalua-tion of each individual module included in the TranslationChecker in order to find out its in-fluence on the overall success of the system. In addition, evaluation of the impact of the Trans-lationChecker on the quality of the translation will also be investigated. Given that the Trans-lationChecker is part of a commercial product will enable us to conduct evaluations from the \npoint of view of the user of the system. \n5. References \nBABYCH, B; HARTLEY, A. (2003) Improving Ma-\nchine Translation quality with automatic Named Entity \nrecognition. In: EACL 2003, 10th Conference of the \nEuropean Chapter. Proceedings of the 7th International EAMT workshop on MT and other language technol-\nogy tools. April 13th 2003, Budapest, Hungary. Pp. \n1-8 \nEuroWordNet: http://www. illc.uva.nl/EuroWordNet/ \nMAGNINI B., SPERANZA M. 
(2002), Merging Global and Specialized Linguistic Ontologies, ITC-irst, June 2002, 6 pp. Published in Simov K. (ed.), Proceedings of Ontolex 2002 Ontologies and Lexical Knowledge Bases, Workshop held in conjunction with LREC-2002, Las Palmas, Canary Islands, Spain, May 27-31, 2002, pp. 43-48.
WordNet: http://wordnet.princeton.edu/w3wn.html",
"main_paper_content": null
}
|
{
"decision": "Unknown",
"reviews": []
}
| 0 | 0 |
[] |
[] |
{
"id": "MurGf1RIGxa",
"year": null,
"venue": "EAMT 2011",
"pdf_link": "https://aclanthology.org/2011.eamt-1.33.pdf",
"forum_link": "https://openreview.net/forum?id=MurGf1RIGxa",
"arxiv_id": null,
"doi": null
}
|
{
"title": "Minimum Error Rate Training Semiring",
"authors": [
"Artem Sokolov",
"François Yvon"
],
"abstract": "Artem Sokolov, François Yvon. Proceedings of the 15th Annual conference of the European Association for Machine Translation. 2011.",
"keywords": [],
"raw_extracted_content": "Minimum Error Rate Training Semiring\nArtem Sokolov Franc ¸ois Yvon\nLIMSI-CNRS LIMSI-CNRs & Uni. Paris Sud\nBP-133, 91 403 Orsay, France\n{artem.sokolov, francois.yvon }@limsi.fr\nAbstract\nModern Statistical Machine Translation\n(SMT) systems make their decisions based\non multiple information sources, which as-\nsess various aspects of the match between\na source sentence and its possible trans-\nlation(s). Tuning a SMT system consists\nin finding the right balance between these\nsources so as to produce the best possi-\nble output, and is usually achieved through\nMinimum Error Rate Training (MERT)\n(Och, 2003). In this paper, we recast\nthe operations implied in MERT in the\nterms of operations over a specific semir-\ning, which, in particular, enables us to de-\nrive a simple and generic implementation\nof MERT over word lattices.\n1 Introduction\nInference (decoding) in phrase-based statistical\nmachine translation (SMT) systems is typically\nbased on a log-linear model of the probability\np(e|f) =Z(f)−1exp(¯λ·¯h(e,f))of obtaining a\ntarget sentence egiven an input sentence f. For\nsuch model, the MAP decision rule selects ˜efas :\n˜ef(¯λ) = arg max\ne∈Ep(e|f)\n= arg max\ne∈E¯λ·¯h(e,f), (1)\nwhereEis the set of reachable translations, ¯h(e,f)\nis the vector of feature functions representing var-\nious compatibility measures of fand e, and ¯λis\na parameter vector, each component λiof which\nregulates the influence of the feature hi(e,f).\nc/circlecopyrt2011 European Association for Machine Translation.The set of reachable translations E(also re-\nferred to as the search space ) in modern decoders\nis based on a set of heuristics that define the set of\npossible translation of each word or phrase (up to\na maximum limit) and specify the range of possi-\nble reorderings of words or phrases during trans-\nlation. Assuming that the components of Ehave\nbeen defined, the actual tuning step of a SMT sys-\ntem consists in finding ¯λ∗that maximizes the em-\npirical gainGon a development set F={(f,rf)}\nmade of pairs of a source sentence fand corre-\nsponding reference translation(s) rf:\n¯λ∗= arg max\n¯λG(F;¯λ) (2)\nwhere the computation of the gain function G, typ-\nically the BLEU score (Papineni et al., 2002)1,\ndepends on the actual translations {˜ef(¯λ),f∈F}\nachieved for a given value of ¯λaccording to (1).\nFor the sake of performing this optimization ef-\nficiently, the search space of the decoder is often\napproximated using an explicit list of n-best hy-\npotheses or a directed acyclic graph (lattice) en-\ncoding a large number of potential translations.\nBecause of the form of the inference rule (1),\nthe learning criterion (2) is neither convex nor dif-\nferentiable. Furthermore, its exact computation is\nmade intractable by the typical size of E, hence the\nrecourse to various heuristic optimization strate-\ngies. The most successful to date is the proposal\nof (Och, 2003), usually referred to as Minimum Er-\nror Rate Training (MERT). This proposal has how-\never been repeatedly questioned for (i) its compu-\ntational cost and (ii) the instability of the result-\ning solutions (Cer et al., 2008; Moore and Quirk,\n2008; Foster and Kuhn, 2009). The most promis-\n1At this stage, any other metrics could be used instead of\nBLEU (see e.g., (Zaidan, 2009)).Mik el L. F orcada, Heidi Depraetere, Vincen t V andeghinste (eds.)\nPr o c e e dings of the 15th Confer enc e of the Eur op e an Asso ciation for Machine T r anslation , p. 
241\u0015248\nLeuv en, Belgium, Ma y 2011\ning improvement consists in extending the approx-\nimation of the search space used in (2) from n-\nbest lists to lattices, which both improves speed\nand reduces the variability of the final outcome\n(Macherey et al., 2008). Our main contribution in\nthis paper is to recast the algorithm of (Macherey\net al., 2008; Kumar et al., 2009) (“lattice-MERT”)\nin a sound algebraic framework, using the MERT\nsemiring2. Using this reformulation, we produce\nan efficient implementation of lattice MERT based\non a generic finite-state toolbox (Allauzen et al.,\n2007). Preliminary experimental results confirm\nthe main conclusions of (Macherey et al., 2008).\nThe rest of this paper is organized as follows.\nIn Section 2, we recall the main principles of the\nbasic algorithm of (Och, 2003), and some of the\nimprovements that have been proposed in the liter-\nature. Section 3 is where we introduce the MERT\nsemiring and its main properties. We then de-\nscribe (Section 4) our own implementation of lat-\ntice MERT using a generic shortest distance algo-\nrithm in the appropriate semiring, and discuss var-\nious possible speed-ups. We then report machine\ntranslation experiments that demonstrate the effec-\ntiveness of our proposal (Section 5).\n2 MERT and Lattice MERT\nIn the SMT literature, the MERT optimization\ncycle is used to refer to the development phase,\nwhere the weights of the various features/models\ninvolved in equation (1) are to be tuned over some\ndevelopment data. The whole procedure (Och,\n2003) is sketched in algorithm 1.\nAlgorithm 1: The MERT optimization cycle\nInput : initial value ¯λ0for¯λ, development data F,\nrequired minimum improvement /epsilon1\nOutput : optimal value ¯λ∗for¯λ\nrepeat\nfor(f∈F)doHt(f,λt)←Translate (f)\n¯λt+1←Optimize ({Ht(f,¯λt),f∈F},¯λt)\nt←t+ 1\nuntil (|¯λt+1−¯λt|</epsilon1)\n¯λ∗←¯λt\nMERT thus implies two different kinds of oper-\nations: decoding , which basically implements the\n2The notion of a MERT semiring has been alluded to in the\nliterature (Dyer et al., 2010). To the best of our knowledge,\nthis semiring has never been formally described, neither from\nthe algebraic, nor from the implementation standpoint. This\nis a gap that we intend to fill in this work.inference procedure and returns a set Ht(f,λt)of\nhypotheses, and optimization , which we now de-\nscribe. The Optimize () function relies on op-\ntimization techniques for non-differentiable func-\ntions, such as the Powell’s search algorithm (Pow-\nell, 1964). This requires to perform a series of min-\nimizations of (2) along lines ¯λ=¯λ0+γ¯rfor some\ndirections ¯r.\nDue to the log-linear form of the probability\nin (1), the optimal hypothesis ˜efis given by:\n˜ef(γ) = arg max\ne∈Eye+γse\nwhereye=¯λ0·¯h(e,f)andse= ¯r·¯h(e,f). Each\ntranslation hypothesis is thus associated with a line\ninR2, and the most probable hypothesis for a given\nγis the one whose line dominates all the others.\nThe sequence of line segments that dominate all\nother lines for some value of γis called the upper\nenvelope (see Figure 1 and Definition 3.2).\nThe upper envelope identifies hypotheses that\ncan be selected when ¯λis moved along the con-\nsidered line. Projections of the intersections of the\nenvelope’s lines onto the γ-axis define the interval\nboundaries; over each such interval, the optimal\nhypothesis is constant. 
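As a toy illustration of this line representation (all feature values and weights below are invented, and the code is only a sketch, not the paper's implementation):

def line(h, lambda0, r):
    # y-intercept = lambda0 . h, slope = r . h
    y = sum(li * hi for li, hi in zip(lambda0, h))
    s = sum(ri * hi for ri, hi in zip(r, h))
    return y, s

lambda0 = [0.5, 1.0]            # current weights (invented)
r = [1.0, -0.5]                 # search direction (invented)
hyps = {'hyp_a': [2.0, 1.0], 'hyp_b': [1.0, 3.0], 'hyp_c': [0.5, 0.5]}
lines = {e: line(h, lambda0, r) for e, h in hyps.items()}

def best(gamma):
    # the hypothesis whose line dominates at this value of gamma
    return max(lines, key=lambda e: lines[e][0] + gamma * lines[e][1])

for gamma in (-2.0, 0.0, 2.0):
    print(gamma, best(gamma))

In this toy example hyp_b dominates for gamma below 0.75 and hyp_a above it, while hyp_c never dominates; projecting that intersection onto the gamma-axis gives exactly the kind of interval boundary described above.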
After merging the intervals\ncomputed separately for each sentence in F, it is\npossible to find γ∗maximizing gain Gby comput-\ning it on each interval and setting γ∗to the middle\nof the best interval.\nDuring each round of optimization, MERT ex-\nplores several directions ¯riand updates ¯λ=λ0+\nγ∗\ni∗¯ri∗, wherei∗is the index of the direction yield-\ning the highest increase of G.\nThe procedure sketched in algorithm 1 has re-\npeatedly been criticized for its computational cost\nand its lack of stability, which often implies the\nfinding of a suboptimal solution. There are two\nmain reasons why MERT can be very time con-\nsuming. The first is due to the total number of it-\nerations that need to be performed to attain con-\nvergence: folklore wisdom is that the number of\niterations shall be approximately proportional to\ntotal the number of dimensions. Speed is also de-\npendent on the time required to perform one itera-\ntion, which is dominated by the translation phase.\nEven when distributed over several CPUs transla-\ntion takes much more time and resource than op-\ntimization. It is thus expected that the most sig-\nnificant speed improvements will be obtained by\nreducing the number of iterations. The other main242\nissue is the stability of the results, which, in prac-\ntice, is addressed by running the optimize several\ntimes, with different starting points. These ineffi-\nciencies have stimulated the development of alter-\nnative approaches3. Attempts at improving MERT\ncan be split into two categories: works that try\nto fix the optimization procedure and works that\nconsider alternative, arguably easier to optimize\nor better suited training criteria (Smith and Eisner,\n2006; Zens et al., 2007; Watanabe et al., 2007). In\nthe sequel, we only discuss the former approaches,\nwhich are more relevant to this work.\n(Cer et al., 2008) provides a thorough analysis of\nthe optimization procedure, and suggests that im-\nprovements can be attained by (i) considering mul-\ntiple random search directions instead of the Pow-\nell algorithm, and (ii) ensuring, through regulariza-\ntion, that the optimal ¯λ∗have the ability to gener-\nalize well. This analysis is completed by the works\nof (Foster and Kuhn, 2009), which also suggests to\nimprove the exploration of the search space by us-\ning well chosen multiple restarting points at each\niteration (see also (Moore and Quirk, 2008)).\nInitially proposed for n-best lists, MERT was\nalso generalized to the case when Eis approxi-\nmated by a phrase lattice (Macherey et al., 2008),\nand more recently, to hypergraphs (Kumar et al.,\n2009). These generalizations take advantage of the\ndecomposability of the feature functions ¯h(e,f),\nwhich are computed as a sum of local feature\nfunctions. When this property holds, rather than\nconstructing upper envelopes for each hypothesis\nin the lattice4, the envelopes are distributed over\nnodes in the lattice. Working with much better\napproximations of the complete search spaces not\nonly allows to converge in less iterations, but also\nto achieve better generalization, a finding that was\nrecently confirmed by (Larkin et al., 2010). 
Our\nwork is a continuation of this line of research,\ndriven by the intuition that recasting MERT in a\nclear algebraic framework, as we do in the next\nsection, can help develop faster, and even more ef-\nficient, implementations of MERT for complex hy-\npotheses set.\n3 The MERT Semiring\nRecall that a semiring Kover a setKis a system\n/angbracketleftK,⊕,⊗,¯0,¯1/angbracketright, where/angbracketleftK,⊕,¯0/angbracketrightis a commutative\n3Not to mention changes in the core optimization routines, as\nine.g., (Lambert and Banchs, 2006)\n4They are too numerous to be efficiently enumerated.monoid with identity element ¯0, meaning that a⊕\n(b⊕c) = (a⊕b)⊕c,a⊕b=b⊕aand∀a,a⊕¯0 =\n¯0⊕a=a. Additionally,/angbracketleftK,⊗,¯1/angbracketrightis a monoid\nwith identity element ¯1;⊗distributes over⊕so\nthata⊗(b⊕c) = (a⊗b)⊕(a⊗c)and(b⊕c)⊗\na= (b⊗a)⊕(c⊗a)and element ¯0annihilates\nK(a⊗¯0 = ¯0⊗a=¯0). A semiring is called\ncommutative if the operation ⊗is commutative.\nIn this section, we characterize the algebraic\nstructure of the set of upper envelopes of a set\nof lines in the plane, and show that a set of en-\nvelopes equipped with certain operations of addi-\ntion (⊕) and multiplication ( ⊗) defines a commu-\ntative semiring.\n3.1 Lineset semiring\nConsider a set Dof setsdkofnlines{dk\ni.s·x+\ndk\ni.y,i= 1...n}inR2, wheredk\ni,s,dk\ni,y∈Rare,\nrespectively, the slope and the y-intercept of the i-\nth line in the set dk. For two sets d1,d2∈D, we\ndefine the following internal operations ⊕Dand\n⊗D5as follows:\nd1⊕Dd2=d1∪d2,\nd1⊗Dd2={(d1\ni.s+d2\nj.s)·x+ (d1\ni.y+d2\nj.y)\n|∀d1\ni∈d1,d2\nj∈d2}. (3)\nProposition 3.1. D=/angbracketleftD,⊕D,⊗D,¯0D,¯1D/angbracketright,\nwhere ¯0D=∅and¯1D={0·x+ 0}, is a commu-\ntative semiring.\nProof. It is well known that /angbracketleftD,⊕D/angbracketrightis a com-\nmutative monoid with ¯0Das identity element.\n/angbracketleftD,⊗D/angbracketrightis also a commutative monoid with ¯1Das\nidentity element. It is finally routine to check that\n⊗Dis distributive over ⊕D. The additive identity\n¯0DannihilatesDfor the⊗Doperation, because of\nthe definition (3): as no line is contained in ¯0Dthe\nresult of multiplication by ¯0Dis always empty.\n3.2 Envelope semiring\nDefinition 3.2. The upper envelope of a set of lines\nd∈Dis a subset env(d)⊆dconsisting of lines\ndi∈d, s.t. for each line di∈env(d), there exists\nan non-empty interval Ii∈R, s.t. ifx∈Ii, then\ndi.s·x+di.y>di/prime.s·x+di/prime.y, for any line di/prime/negationslash=di.\nTwo linesdianddjinenv(d) are said to be\nneighbors if their corresponding intervals Iiand\nIjare adjacent.\n5Formally, ⊗corresponds to the Minkowski sum of the two\nsets of lines.243\nFigure 1: The upper envelope of a set of lines\nFor MERT, it is important to know the inter-\nsections of neighboring lines in the envelope. For\nthis purpose, an envelope can be ordered as a list\nof lines with increasing slopes and each line en-\ncoded as a tuple (x,s,y )wherexis the line’s\nx-intersection with the previous line in the list.\nThe upper envelope can be computed by algo-\nrithm 2, which is a rewrite of the sweepline al-\ngorithm from (Macherey et al., 2008). 
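To make the two operations concrete, the following Python sketch works on sets of (slope, intercept) pairs. The env() helper is a deliberately naive grid-based approximation used only for illustration; the exact construction via the SweepLine algorithm is described below. All names and values are ours, not part of the paper's implementation.

def env(lines, grid=None):
    # naive approximation of the upper envelope: keep every line that is
    # maximal somewhere on a coarse grid of x values (illustration only)
    grid = grid or [x / 10.0 for x in range(-100, 101)]
    return {max(lines, key=lambda l: l[0] * x + l[1]) for x in grid}

def oplus(d1, d2):
    # union of the two line sets, then upper envelope
    return env(d1 | d2)

def otimes(d1, d2):
    # Minkowski sum: add slopes and intercepts of every pair, then envelope
    return env({(s1 + s2, y1 + y2) for (s1, y1) in d1 for (s2, y2) in d2})

ZERO = set()           # additive identity: the empty set of lines
ONE = {(0.0, 0.0)}     # multiplicative identity: the single line 0*x + 0

d1 = {(1.0, 0.0), (-1.0, 0.5)}
d2 = {(0.0, 2.0), (2.0, -3.0)}
print(sorted(oplus(d1, d2)))
print(sorted(otimes(d1, d2)))

With ZERO as the additive identity and ONE as the multiplicative identity, these two operations behave as the ⊕ and ⊗ just defined on envelopes.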
We use a\nseparate function Sweep (algorithm 3), which will\nserve in a faster implementation of the ⊕opera-\ntion.\nAlgorithm 2: TheSweepLine algorithm\nInput : sorted array Scontaining lines\nOutput : upper envelope of S\nj= 0\nfor(i= 0;i<|S|; + +i)do\nSweep (S,S[i],j)\nend\nS.resize (j)\nAlgorithm 3: TheSweep (S,/lscript,j )function\nInput : upper envelope S, line/lscript, current value of j\nOutput : upper envelope of S∪/lscript, updated value\nforj\nif(0<j)then\nif(S[j−1].s=/lscript.s)then\nif(/lscript.y< =S[j−1].y)then continue\nj←j−1\nwhile (0<j)do\n/lscript.x= (/lscript.y−S[j−1].y)/(S[j−1].s−/lscript.s)\nif(S[j−1].x</lscript.x )then break\nj←j−1\nend\nif(0 =j)then/lscript.x=−∞\nS[j] =/lscript\nj←j+ 1LetEbe a subset of Dsuch that env(d) =dfor\neachd∈E, and define the operations ⊕Eand⊗E\nas the projections of the respective operations in D\non the setE:\nd1⊕Ed2= env(d1⊕Dd2),\nd1⊗Ed2= env(d1⊗Dd2).\nProposition 3.3. The tuple E =\n/angbracketleftE,⊕E,⊗E,¯0E,¯1E/angbracketright, where ¯0E=∅and\n¯1E={0·x+ 0}, is a commutative semiring.\nProof. It is routine to check both associative, com-\nmutative and distributive properties, and that ¯0Eis\na multiplicative annihilator of E.\nA semiring is called weakly divisible if for any\npair of elements d1andd2such thatd1⊕d2/negationslash=¯0,\nthere exists at least one dsuch thatd1= (d1⊕\nd2)⊗d. The concept of divisibility is important as\nit is a crucial requirement for optimizing transduc-\ners by determinization (Mohri, 2009). However,\none can observe that the MERT semiring is not di-\nvisible, which has its implication on the minimiza-\ntion of the finite-state automata between iterations\nof algorithm 1 (see subsection 4.2).\n3.3 Shortest Distance and MERT\nLetA= (Σ,Q,I,F,E )be a weighted finite state\nacceptor with weights in E, meaning that the tran-\nsitions (q,a,q/prime)inAcarry a weight winE. For-\nmally,Eis a mapping from (Q×Σ×Q)intoE;\nlikewiseIandFare mappings from QintoE. We\nuse the notations of (Mohri, 2009): if e= (q,a,q/prime)\nis a transition in domain(E),p(e) =q(resp.\nn(e) =q/prime) denotes its origin (resp. destination)\nstate,i(e) =aits label, and w(e) =E(e)its\nweight. These notations extend to paths: if πis\na path inA,p(π)(resp.n(π)) is its initial (resp.\nending) state and i(π)is the label along the path.\nIn our setting, Ais derived from a word lattice L\nas follows. We assume that Lhas a single start and\nend state, denoted respectively q0andqF. Each arc\ninLlabeled with a target word acarries a vector\n¯h(a,f)of local features associated with a. Given\na starting point ¯λ0and a search direction ¯r,Lis\nturned into a weighted acceptor over Eby associ-\nating with (q0)(resp.qF) the weight ¯1and replac-\ning¯hwith a singleton containing line diwith slope\ndi.s= (¯r·¯h)xandy-interceptdi.y= (¯λ0·¯h).244\nThe total weight of a successful path π=e1...el\ninAis thus computed as:\nw(π) =I(e1)⊗/bracketleftbigl/circlemultiplydisplay\ni=1w(ei)/bracketrightbig\n⊗F(el)\n={¯λ0·l/summationdisplay\ni=1¯h(i(ei),f) + (¯r·l/summationdisplay\ni=1¯h(i(ei),f))x}.\nThis is because each weight in Acontains a single\nline, which means that the total weight of a path\nis also a singleton line set, which corresponds to\nthe complete translation hypothesis read along π\nase=i(e1)...i(el).\nThe computation of the complete upper enve-\nlope of all the lines corresponding to translation\nhypotheses in the lattice Lthus corresponds to the\nenvelope of the union of all these lines:\nenv(/uniondisplay\nπ∈Aw(π)) =/circleplusdisplay\nπ∈Aw(π). 
(4)\nQuantities such as (4) can be efficiently com-\nputed by generic shortest distance algorithms over\nacyclic graphs (Mohri, 2002).\nAs an additional note, it is interesting to real-\nize that all the previous considerations hold if we\nmake the elements in Evectors of set of lines, in-\nstead of just set of lines; and define the semiring\noperations to be performed componentwise. This\nmeans that the computation of (4) can be done si-\nmultaneously in any number of directions.\n4 Implementing MERT with semiring\noperations\n4.1 Basic operations\nGiven that Eis a semiring, our implementation re-\nlies on the general finite-state transducer library\nOpenFst (Allauzen et al., 2007) to perform the\ncomputation of the quantities that are required by\nMERT on lattices. The main benefit of adopting\nthis framework is that we only need to implement\nthe basic semiring operations to get the full power\nof proven and well optimized algorithms.\nWe first detail the⊗operation, implemented as\nspecified in algorithm 4. In our application, the au-\ntomata derived from translation lattices are acyclic\nwhich means that, if we process states in topo-\nlogical order, the right-hand argument d2in algo-\nrithm 4 always contains only one line, what re-\nmoves the need for the inner for-loop. As multi-\nplication by a single line does not change the rela-Algorithm 4:⊗operation\nInput : two envelopes d1,d2\nOutput :d1⊗d2\nS=∅\nford1\ni∈d1do\nford2\nj∈d2do\nS←S∪{(d1\ni.s+d1\nj.s)·x+ (d1\ni.y+d1\nj.y)}\nend\nend\nSweepLine (S);\nAlgorithm 5:⊗operation for acyclic lattices\nInput : two envelopes d1,d2\nOutput :d1⊗d2\nS=∅\nford1\ni∈d1do\nS←S∪{(d1\ni.s+d1\n0.s)·x+ (d1\ni.y+d1\n0.y)}\nend\ntive order of lines in the envelope, the final call to\nSweepLine ()is also not required (algorithm 5).\nIn (Macherey et al., 2008; Kumar et al., 2009),\nthe⊕operation was straight-forwardly defined as\nSweepLine (d1∪d2). However, for the sake of\nefficiency, one can use the fact that both arguments\nare sorted, which means that the envelop of their\nunion can be performed in linear time, by process-\ning them simultaneously (see algorithm 6).\n4.2 Lattice optimization\nPerforming the full MERT cycle involves repeat-\nedly solving the optimization problem (2) on ap-\nproximations of the translation search space of\nincreasing quality. Standard implementation of\nMERT alternate between the optimization of ¯λand\nthe computation of updated lattices L(translation\nhypotheses). To ensure the convergence of this\nprocedure and to avoid overfitting, we need to en-\nsure that the lattices used at step tactually contain\nall the hypotheses that have served to optimize ¯λ\nat iterationt−1. This requirement is typically met\nby merging decoder’s outputs (lattice Lt) produced\nduring all the previous iterations Lt−1...L 1.\nMerging lattices is readily implemented using\nthe OpenFst Union operation. However, the re-\nsulting acceptor might still contain a large number\nof duplicated paths, corresponding to identical hy-\npotheses produced for different values of t, even\nafter ¯λupdate. 
This leads to a waste of time while245\nAlgorithm 6:⊕operation\nInput : two envelopes d1,d2\nOutput :d1⊕d2\nj= 0;i1=i2= 0;S=∅\nwhile (i1</vextendsingle/vextendsingled1/vextendsingle/vextendsingleandi2</vextendsingle/vextendsingled2/vextendsingle/vextendsingle)do\nif(d1\ni1.s<d2\ni2.s)then\nSweep (S,d1\ni1,j)\ni1←i1+ 1\nelse\nSweep (S,d2\ni2,j)\ni2←i2+ 1\nif(i1=/vextendsingle/vextendsingled1/vextendsingle/vextendsingle)then\nfor(;i2</vextendsingle/vextendsingled2/vextendsingle/vextendsingle;i2+ +) doSweep (S,d2\ni2,j)\nbreak\nif(i2=/vextendsingle/vextendsingled2/vextendsingle/vextendsingle)then\nfor(;i1</vextendsingle/vextendsingled1/vextendsingle/vextendsingle;i1+ +) doSweep (S,d1\ni1,j)\nbreak\nend\nS.resize (j)\nperforming shortest distance calculation. Using\nthe OpenFst operation Determinize over the\nMERT semiring is ruled out by the fact that the\ndeterminization of a transducer requires (weak) di-\nvisibility of weights (Mohri, 2009), a property that\ndoes not hold in the MERT semiring.\nTo circumvent the problem, we perform the re-\nquired operations Union andDeterminize in\nthe(min,+)(tropical) semiring as follows. Each\ninput lattice is first converted into an intermediate\nautomaton with identical arcs and states. In the\nnew automaton, the output label of an arc com-\npactly encodes the original output phrase and all\nthe model scores, and the weights of all arcs are\nset to one. The tropical semiring being divisible,\nso the resulting automaton can be optimized using\nthe standard library operations. We then restore\nthe original encoding so as to recover a transducer\nwith proper labels and weights.\n4.3 Additional speed-ups and improvements\nFinding the optimal ¯λ∗for a set of translation lat-\ntices and the corresponding references is an itera-\ntive procedure detailed in algorithm 7.\nOur implementation uses the optimization strat-\negy known as Koehn’s coordinate descent (Cer et\nal., 2008), which optimizes ¯λseparately for each\nfeature (dimension). This is a difference with\nthe approach of (Och, 2003) which uses Powell’s\nsearch algorithm. This basic approach is extendedAlgorithm 7: MERT workflow\nInput : initial ¯λ0, FST-archiveA, set of restart\npointsP\nOutput : optimal ¯λ∗\nforall restart points p∈Pdo\nforeach latticeL∈A do\nfordirection ¯r∈Rndo\ninit arcsa∈Lwith singleton\n{(¯r·¯Fa)·x+¯λ·¯Fa}\nend\nrunShortestDistance (L)\nget envelope of the final state\ncollect its intersections and BLEU statistics\nend\nmerge intersections from all the lattices\nfind interval with maximum BLEU\nset¯λ∗as the middle of the winning interval\nend\nreturn ¯λ∗\nas follows: optimization can be restarted from a\nnumber of randomly chosen points, search is also\nperformed in several random dimensions, which\nare regenerated after each improvement of ¯λ.\nAt each step of the MERT workflow (algo-\nrithm 7), all directions are processed simultane-\nously in one single traversal of the lattice , as ex-\nplained in section 3.3. As reported in (Cer et al.,\n2008; Macherey et al., 2008), we have observed\nthat using more random directions is a simple and\neffective means to gain up to 0.3-0.5 BLEU points.\nIn comparison, the improvements obtained with\nrandom restarts remained limited, except than dur-\ning the first iteration. 
Given the time needed to\nreinitialize each lattice in the archive with each\nnew starting point ¯λ0before recomputing the\nshortest distance, we use random restart only for\nthe first iterations; from the second iteration on, we\ninitialize search only with the previous best point.\nWe also merged two line segments of the up-\nper envelope if generated intersection point closer\nthan10−3, as in (Cer et al., 2008; Macherey et al.,\n2008), where it is claimed to make results more\nstable. We did not, however, notice any change\nin performance with or without interval merging\non small datasets. Interval merging was used for\nthe sake of decreasing the number of intervals,\nthat saved some amount of time when computing\nBLEU scores for each of them. Finally note that\nafter each round, ¯λ∗was/lscript1-normalized.246\nsmall large\ndev test ∆t dev test ∆t\n15.03 16.88 20.47 20.98\n15.19 16.63 00:42 22.57 22.82 01:52\n16.74 16.43 00:45 24.76 25.43 01:42\n16.83 18.30 00:44 24.75 25.46 02:01\n16.93 18.33 00:50 23.10 23.36 02:03\n16.93 18.32 00:55 25.10 25.54 01:24\n17.04 18.59 00:52 24.93 25.35 01:36\n17.04 18.59 00:50 25.05 25.46 01:35\n17.06 18.56 00:53 25.29 26.10 01:36\n17.07 18.56 00:56 25.28 26.11 01:38\n17.06 18.55 00:57 25.28 26.13 01:49\n17.06 18.56 00:57\n17.06 18.56 00:58\nTable 1:n-best list MERT’s dev-, test- and time-\nperformance on both datasets.\n5 Experiments\nOur experiments use the n-gram approach\nof (Mari ˜no et al., 2006) as implemented in the\nN-coder system. This implementation produces\ntranslation lattices in the form of weighted finite-\nstate acceptors, which greatly simplifies integra-\ntion with our system. Our version of N-coder uses\n11 model scores. We consider a small and a larger\ntask, both based on the data distributed for last year\nWMT6campaign: translation and language mod-\nels in the former system use only the NewsCom-\nmentary dataset, the larger ones use all the data al-\nlowed in the constrained track. The smaller system\nis tuned on the full development set (2051 sent.),\nwhile the larger partitions it equally to optimize the\nlanguage model7and MERT. Finally, both tasks\nuse the official WMT’10 test set (2525 sent.).\nOur baseline is the MERT distributed in the\nMOSES8toolkit with 100-best list, Koehn’s coor-\ndinate descend and 20 restart points. The lattice\nand the baseline versions use the same /epsilon1= 10−5.\nTypical runs of the baseline and our system are\nreported respectively, in Table 1 and Tables 2, 3 for\ndifferent numbers nrof additional random direc-\ntions (0, 20 and 50). For each value of nr, we have\n3 columns: BLEU-performance on development\nand test sets, as well as the time (hours:minutes)\ntaken for each iteration (includes decoding time)9.\nOur experiments showed no clear gain in term\nof BLEU, most probably due to the relatively\n6www.statmt.org/wmt10\n7See details in (Allauzen et al., 2010).\n8www.statmt.org/moses\n9All experiments were run on a server with 64G of memory\nand two Xeon processors with 4 cores at 2.27 Ghz. 
Lattice\nMERT is multi-threaded.nr= 0 nr= 20\ndev test ∆t dev test ∆t\n15.03 16.88 00:00 15.03 16.88 00:00\n16.64 17.95 00:32 16.97 18.39 01:43\n16.83 18.17 01:26 17.02 18.47 02:02\n17.02 18.46 01:20\n17.02 18.46 01:39\n17.03 18.46 02:11\n17.03 18.46 01:54\n17.03 18.46 02:02\n17.03 18.46 02:19\nTable 2: Lattice MERT’s dev-, test- and time-\nperformance on the small task.\nnr= 0 nr= 20 nr= 50\ndev test ∆t dev test ∆t dev test ∆t\n20.47 20.97 20.47 20.97 20.47 20.98\n23.71 24.19 01:29 20.74 21.17 02:15 20.66 21.03 04:15\n25.26 24.59 01:20 25.35 25.84 02:12 25.72 26.29 04:01\n25.29 25.92 01:48 25.67 26.26 03:40 25.97 26.18 04:24\n25.72 26.41 04:00 26.00 26.24 05:21\n26.01 26.24 07:09\nTable 3: Lattice MERT’s dev-, test- and time-\nperformance on the larger task.\nsmall number of features used in our decoder. Pa-\npers reporting lattice MERT to increase BLEU\nperformance typically use more features ( e.g., 19\nin (Larkin et al., 2010)). The main benefit here\nseems to be speed, as convergence in lattice MERT\nis obtained much faster than with the baseline,\nwhich more than compensates for the increased\nsearch time. Adding random search directions\nseems to make a difference, but also comes at a\nprice, as the cost increases linearly with the num-\nber of dimensions. A reasonable balance seems to\nbe around 20-30 directions.\nIt may finally be noted that better stopping cri-\nteria are needed to detect convergence, as lattice\nMERT sometimes operates in regions where small\nchanges in ¯λdo not produce visible improvement\nof dev-BLEU (e.g., for nr= 20 for the small set).\nIn these regions, continuing search is highly unde-\nsirable as each subsequent iteration becomes more\nand more time-consuming.\n6 Future work\nIn this paper, we have provided a sound formaliza-\ntion for the lattice MERT algorithm, resulting in\nan efficient implementation based on the OpenFst\ntoolkit, and small improvements on our control test\nset. Further experiments with richer feature sets\nare needed to confirm the improvements brought\nby this new tuning module.247\nWe believe that this procedure can still be opti-\nmized in many ways. For instance, we found that\n1/3 of the total time of the optimization procedure\nis spent performing ⊕operation. In comparison,\n⊗takes about 2.5 times less time. On average,\neach invocation of ⊕eliminated 55% of lines of\nthe union of its operand. This suggests that speed\ncan be gained from optimizing the computations\nof envelopes. Another possible direction for future\nwork is to investigate means to speed up the short-\nest distance calculation with heuristic search tech-\nniques. While during the first iteration this may re-\nproduce the work of the decoder, this is especially\ndesirable to be applied to the merged lattices used\nin the following iterations. This, together with the\nuse of better stopping criteria, may prevent the un-\ncontrolled growth of lattices.\nAcknowledgments\nThis work has been partially funded by OSEO un-\nder the Quaero program.\nReferences\nAllauzen, Cyril, Michael Riley, Johan Schalkwyk, Wo-\njciech Skut, and Mehryar Mohri. 2007. OpenFst: A\ngeneral and efficient weighted finite-state transducer\nlibrary. In Proc. of the Int. Conf. on Implementation\nand Application of Automata , pages 11–23.\nAllauzen, Alexandre, Josep M. Crego, Ilknur Durgar\nEl-Kahlout, and Franc ¸ois Yvon. 2010. LIMSI’s sta-\ntistical translation systems for WMT’10. 
In Proc. of the Joint Workshop on SMT and MetricsMATR, pages 54–59.
Cer, Daniel, Dan Jurafsky, and Christopher D. Manning. 2008. Regularization and search for minimum error rate training. In Proc. of the Workshop on SMT, pages 26–34.
Dyer, Chris, Adam Lopez, Juri Ganitkevitch, Jonathan Weese, Ferhan Ture, Phil Blunsom, Hendra Setiawan, Vladimir Eidelman, and Philip Resnik. 2010. cdec: A decoder, alignment, and learning framework for finite-state and context-free translation models. In Proc. of the ACL, pages 7–12.
Foster, George and Roland Kuhn. 2009. Stabilizing minimum error rate training. In Proc. of the Workshop on SMT, pages 242–249.
Kumar, Shankar, Wolfgang Macherey, Chris Dyer, and Franz Och. 2009. Efficient minimum error rate training and minimum Bayes-risk decoding for translation hypergraphs and lattices. In Proc. of the Joint Conf. of the Annual Meeting of the ACL and the Conf. on NLP of the AFNLP, volume 1, pages 163–171.
Lambert, Patrik and Rafael E. Banchs. 2006. Tuning Machine Translation Parameters with SPSA. In Proc. of the Int. Workshop on Spoken Language Translation, pages 190–196, Kyoto, Japan.
Larkin, Samuel, Boxing Chen, George Foster, Ulrich Germann, Eric Joanis, Howard Johnson, and Roland Kuhn. 2010. Lessons from NRC's Portage system at WMT 2010. In Proc. of the Joint Workshop on SMT and MetricsMATR, pages 127–132.
Macherey, Wolfgang, Franz Josef Och, Ignacio Thayer, and Jakob Uszkoreit. 2008. Lattice-based minimum error rate training for statistical machine translation. In Proc. of the Conf. on EMNLP, pages 725–734.
Mariño, José B., Rafael E. Banchs, Josep M. Crego, Adrià de Gispert, Patrick Lambert, José A.R. Fonollosa, and Marta R. Costa-Jussà. 2006. N-gram-based machine translation. Comput. Ling., 32(4):527–549.
Mohri, Mehryar. 2002. Semiring frameworks and algorithms for shortest-distance problems. J. Autom. Lang. Comb., 7:321–350.
Mohri, Mehryar. 2009. Weighted automata algorithms. In Droste, Manfred, Werner Kuich, and Heiko Vogler, editors, Handbook of Weighted Automata, chapter 6, pages 213–254.
Moore, Robert C. and Chris Quirk. 2008. Random restarts in minimum error rate training for statistical machine translation. In Proc. of the COLING, pages 585–592, Manchester, UK.
Och, Franz Josef. 2003. Minimum error rate training in statistical machine translation. In Proc. of the Annual Meeting of the ACL, volume 1, pages 160–167.
Papineni, Kishore, Salim Roukos, Todd Ward, and Wei-Jing Zhu. 2002. Bleu: a method for automatic evaluation of machine translation. In Proc. of the Annual Meeting of the ACL, pages 311–318.
Powell, M.J.D. 1964. An efficient method for finding the minimum of a function of several variables without calculating derivatives. Computer J., 7:152–162.
Smith, David A. and Jason Eisner. 2006. Minimum-risk annealing for training log-linear models. In Proc. COLING/ACL, pages 787–794.
Watanabe, Taro, Jun Suzuki, Hajime Tsukada, and Hideki Isozaki. 2007. Online large-margin training for statistical machine translation. In Proc. of the Joint Conf. on EMNLP-CoNLL, pages 764–773.
Zaidan, Omar F. 2009. Z-MERT: A fully configurable open source tool for minimum error rate training of machine translation systems. The Prague Bulletin of Mathematical Ling., 91:79–88.
Zens, Richard, Sasa Hasan, and Hermann Ney. 2007. A systematic comparison of training criteria for statistical machine translation. In Proc. of the Joint Conf. on EMNLP-CoNLL, pages 524–532.",
"main_paper_content": null
}
|
{
"decision": "Unknown",
"reviews": []
}
| 0 | 0 |
[] |
[] |
{
"id": "LNNRkJRY8l",
"year": null,
"venue": "EAMT 2020",
"pdf_link": "https://aclanthology.org/2020.eamt-1.24.pdf",
"forum_link": "https://openreview.net/forum?id=LNNRkJRY8l",
"arxiv_id": null,
"doi": null
}
|
{
"title": "Document-level Neural MT: A Systematic Comparison",
"authors": [
"António V. Lopes",
"M. Amin Farajian",
"Rachel Bawden",
"Michael Zhang",
"André F. T. Martins"
],
"abstract": null,
"keywords": [],
"raw_extracted_content": "Document-level Neural MT: A Systematic Comparison\nAnt´onio V . Lopes1M. Amin Farajian1Rachel Bawden2\nMichael Zhang3Andr ´e F. T. Martins1\n1Unbabel, Rua Visc. de Santar ´em 67B, Lisbon, Portugal\n2University of Edinburgh, Scotland, UK\n3University of Washington, Seattle, WA, USA\nfantonio.lopes, amin, andre.martins [email protected]\[email protected] ,[email protected]\nAbstract\nIn this paper we provide a systematic com-\nparison of existing and new document-\nlevel neural machine translation solutions.\nAs part of this comparison, we introduce\nand evaluate a document-level variant of\nthe recently proposed Star Transformer ar-\nchitecture. In addition to using the tradi-\ntional metric BLEU, we report the accu-\nracy of the models in handling anaphoric\npronoun translation as well as coherence\nand cohesion using contrastive test sets.\nFinally, we report the results of human\nevaluation in terms of Multidimensional\nQuality Metrics (MQM) and analyse the\ncorrelation of the results obtained by the\nautomatic metrics with human judgments.\n1 Introduction\nThere has been undeniable progress in Machine\nTranslation (MT) in recent years, so much so that\nfor certain languages and domains, when sentences\nare evaluated in isolation, it has been suggested\nthat MT is on par with human translation (Has-\nsan et al., 2018). However, it has been shown\nthat human translation clearly outperforms MT at\nthe document level, when the whole translation is\ntaken into account (L ¨aubli et al., 2018; Toral et al.,\n2018; Laubli et al., 2020). For example, the Con-\nference on Machine Translation (WMT) now con-\nsiders inter-sentential translations in their shared\ntask (Barrault et al., 2019). This sets a demand for\ncontext-aware machine translation: systems that\ntake the context into account when translating, as\nopposed to translating sentences independently.\n© 2020 The authors. This article is licensed under a Creative\nCommons 3.0 licence, no derivative works, attribution, CC-\nBY-ND.Translating sentences in context (i.e. at the doc-\nument level) is essential for correctly handling\ndiscourse phenomena whose scope can go be-\nyond the current sentence and which therefore re-\nquire document context (Hardmeier, 2012; Baw-\nden, 2018; Wang, 2019). Important examples in-\nclude anaphora, lexical coherence and cohesion,\ndeixis and ellipsis; crucial aspects in delivering\nhigh quality translations which often are poorly\nevaluated using standard automatic metrics.\nNumerous context-aware neural MT (NMT)\napproaches have been proposed in recent years\n(Tiedemann and Scherrer, 2017; Zhang et al.,\n2018; Maruf et al., 2019; Miculicich et al., 2018;\nV oita et al., 2019b; Tu et al., 2018), integrat-\ning source-side and sometimes target-side context.\nHowever, they have often been evaluated on differ-\nent languages, datasets, and model sizes. Certain\nmodels have also previously been trained on few\nsentence pairs rather than in more realistic, high-\nresource scenarios. A direct comparison and anal-\nysis of the methods, particularly concerning their\nindividual strengths and weaknesses on different\nlanguage pairs is therefore currently lacking.\nWe fill these gaps by comparing a representa-\ntive set of context-aware NMT solutions under the\nsame experimental settings, providing:\n• A systematic comparison of context-aware NMT\nmethods using large datasets (i.e. 
pre-trained using large amounts of sentence-level data) for three language directions: English (EN) into French (FR), German (DE) and Brazilian Portuguese (PT-br). We evaluate on (i) document translation using public data for EN→{FR, DE} and (ii) chat translation using proprietary data for all three directions. We use targeted automatic evaluation and human assessments of quality.
• A novel document-level method inspired by the Star Transformer approach (Guo et al., 2019), which can leverage full document context from arbitrarily large documents.
• The creation of an additional open-source large-scale contrastive test set for EN→FR anaphoric pronoun translation.1

2 Neural Machine Translation

2.1 Sentence-level NMT
NMT systems are based on the encoder-decoder architecture (Bahdanau et al., 2014), where the encoder maps the source sentence into word vectors, and the decoder produces the target sentence given these source representations. These systems, by assuming a conditional independence between sentences, are applied to sentence-level translation, i.e. ignoring source- and target-side context. As such, current state-of-the-art NMT systems optimize the negative log-likelihood of the sentences:

p(y^{(k)} | x^{(k)}) = \prod_{t=1}^{n} p(y^{(k)}_t | y^{(k)}_{<t}, x^{(k)}),   (1)

where x^{(k)} and y^{(k)} are the k-th source and target training sentences, and y^{(k)}_t is the t-th token in y^{(k)}.
In this paper, the underlying architecture is a Transformer (Vaswani et al., 2017). Transformers are usually applied to sentence-level translation, using the sentence independence assumption above. This assumption precludes these systems from learning inter-sentential phenomena. For example, Smith (2017) analyzes certain discourse phenomena that sentence-level MT systems cannot capture, such as obtaining consistency and lexical coherence of named entities, among others.

2.2 Context-aware NMT
Context-aware NMT relaxes the independence assumption of sentence-level NMT; each sentence is translated by conditioning on the current source sentence as well as other sentence pairs (source and target) in the same document. More formally, given a document D containing K sentence pairs {(x^{(1)}, y^{(1)}), (x^{(2)}, y^{(2)}), ..., (x^{(K)}, y^{(K)})}, the probability of translating x^{(k)} into y^{(k)} is:

p(y^{(k)} | x^{(k)}) = \prod_{t=1}^{n} p(y^{(k)}_t | y^{(k)}_{<t}, X, Y^{(<k)}),   (2)

where X := {x^{(1)}, ..., x^{(K)}} are the document's source sentences and Y^{(<k)} := {y^{(1)}, ..., y^{(k-1)}} the previously generated target sentences.
1 The dataset and scripts are available at https://github.com/rbawden/Large-contrastive-pronoun-testset-EN-FR

2.3 Chat translation
A particular case of context-aware MT is chat translation, where the document is composed of utterances from two or more speakers, speaking in their respective languages (Maruf et al., 2018; Bawden et al., 2019).
There are two main defining aspects of chat: the content type (shorter, less planned, more informal, ungrammatical and noisier), and the context available (past utterances only, from multiple speakers in different languages). Specifically, chat is an online task where only the past utterances are available, and context-aware models (see §3) need to be adapted to cope with multiple speakers. In this work we introduce tokens to distinguish each speaker and modify the internal flow of the method to incorporate both speakers' context. There is also an additional challenge in how to handle both language directions and how using gold or predicted context affects chat models.
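In the chat setting just described, only past utterances are available as context. The toy sketch below makes the two factorizations concrete: Eq. (1) scores a target sentence given only its own source, while Eq. (2) additionally conditions on the document's source sentences and the previously generated targets. The scorer is a stand-in for any trained model; all function and variable names are illustrative and not taken from the systems compared in this paper.

```python
import math
from typing import Callable, List, Sequence

# A scorer returns log p(token | source context, target prefix).
Scorer = Callable[[List[str], List[str], str], float]

def sentence_nll(score: Scorer, src: List[str], tgt: List[str]) -> float:
    """Eq. (1): negative log-likelihood using only the current sentence pair."""
    return -sum(score(src, tgt[:t], tgt[t]) for t in range(len(tgt)))

def document_nll(score: Scorer, doc_src: Sequence[List[str]],
                 prev_tgt: Sequence[List[str]], tgt_k: List[str]) -> float:
    """Eq. (2): condition on all source sentences X and the previous targets Y(<k)."""
    ctx_src = [tok for sent in doc_src for tok in sent]    # X
    ctx_tgt = [tok for sent in prev_tgt for tok in sent]   # Y(<k)
    return -sum(score(ctx_src, ctx_tgt + tgt_k[:t], tgt_k[t])
                for t in range(len(tgt_k)))

if __name__ == "__main__":
    # Dummy uniform scorer over a 10k-word vocabulary, just to make the sketch runnable.
    uniform: Scorer = lambda src, prefix, tok: -math.log(10_000)
    doc_src = [["she", "slept", "."], ["it", "was", "raining", "."]]
    prev_tgt = [["sie", "schlief", "."]]
    print(sentence_nll(uniform, doc_src[1], ["es", "regnete", "."]))
    print(document_nll(uniform, doc_src, prev_tgt, ["es", "regnete", "."]))
```

In practice this conditioning is realised architecturally, through concatenation, extra encoders or caches as described in §3, rather than by literally flattening the context as in this sketch.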
In this work\nwe consider a simplification of this problem by as-\nsuming the language direction of the first speaker\nis always from a gold set, leaving for future work\nthe assessment of the impact of using predictions\nof the other speaker’s utterances.\n3 Context-aware NMT methods\nWe compare three previous context-aware ap-\nproaches (concatenation, multi-source and cache-\nbased) in our experiments. As well as illustrat-\ning different methods of integrating context, they\nvary in terms of which context (source/target, pre-\nvious/future) and how much context (number of\nsentences) they can exploit, as shown in Table 1.\nAlthough other context-aware methods do exist,\nwe choose these three methods as being represen-\ntative of the number of context sentences and usage\nof both source and target side context.\nConcatenation: Tiedemann and Scherrer (2017)\nuse the previous sentence as context, i.e. X(k\u00001)\nandY(k\u00001), concatenated to the current sentence,\ni.e.X(k)andY(k), separated by a special token. It\nis called 2to1 when just the source-side context\nis used, and 2to2 when the target is used too.\nMulti-source context encoder: Zhang et al.\n(2018) model the previous source sentences,\nX(<k)with an additional encoder. They modify\nthe transformer encoder and decoder blocks to in-\ntegrate this encoded context; they introduce an ad-\nditional context encoder in the source side that re-\nceives the previous two source sentences as con-\ntext (separated by a special token), encodes them\nand passes the context encodings to both the en-\ncoder and decoder, integrating them using addi-\ntional multi-head attention mechanisms. Similar\nto the concatenation-based approach, here the con-\ntext is limited to the previous few sentences.\nCache-based: Tu et al. (2018) model all previ-\nous source and target sentences, X(<k)andY(<k)\nwith a cache-based approach (Grave et al., 2016),\nwhereby, once a sentence has been decoded, its\ndecoder states and attention vectors are saved in\nan external key-value memory that can be queried\nwhen translating subsequent sentences. This is one\nof the first approaches that uses the global context.\nOther methods have been proposed to use both\nsource and target history with different ranges of\ncontext. (Miculicich et al., 2018) attends to words\nfrom previous sentences with a 2-stage hierarchi-\ncal approach, while (Maruf et al., 2019), simi-\nlarly, attends to words in specific sentences us-\ning sparse hierarchical selective attention. (V oita\net al., 2019a), which extends the concatenation-\nbased approach to four sentences in a monolingual\nAutomatic Post-Edition (APE) setting; whereas\nJunczys-Dowmunt (2019) proposes full document\nconcatenation with a BERT model to improve the\nword embeddings through document context and\nfull document APE. Ng et al. (2019) proposes a\nnoisy channel approach with reranking, where the\nlanguage model (LM) operates at document-level\nbut the reranking does not. Yu et al. (2019) extends\nthe previous work using conditionally dependent\nsentence reranking with the document-level LM.\n#Prev #Fut Src Trg\nConcat2to1 (1) 1 - X\nConcat2to2 (1) 1 - X X\nMulti-source context encoder (2) 2 - X\nCache-based (3) all - X X\nStar (4) - (see §4) all all (src) X X\nTarget APE (5) 3 3 X\nSparse Hierarchical attn. (6) all - X X\nTable 1: A summary of the methods compared (1-4). 
We also include (5-6) in this summary table for comparative purposes.

4 Doc-Star-Transformer
We propose a scalable approach to document-level NMT inspired by the Star architecture (Guo et al., 2019) for sentence-level NMT. We have an equivalent relay node and build sentence-level representations; we propagate this non-local information at document level and enrich the word-level embeddings with context information.
To do this, we augment the vanilla sentence-level Transformer model of Vaswani et al. (2017) with two additional multi-headed attention sub-layers. The first sub-layer is used to summarize the global contribution of each sentence into a single embedding. The second layer then uses these sentence embeddings to update word representations throughout the document, thereby incorporating document-wide context.
In §4.1, we describe our model assuming it can attend to context from the entire document without practical memory constraints. Then in §4.2 we show how to extend the model to arbitrarily long contexts by introducing sentence-level recurrence.

4.1 Document-level Context Attention
We begin by describing the encoder of the Doc-Star-Transformer (Figure 1). We refer to the sentence and word representations of the k-th sentence at layer i as s^{(k)}_i and w^{(k)}_i respectively. Our Doc-Star-Transformer model makes use of the Scaled Dot-Product Attention of Vaswani et al. (2017) to perform alternating updates to sentence and word embeddings across the document to efficiently incorporate document-wide context; our method can efficiently capture local and non-local context (at document level) and, like the Star Transformer, also eliminates the need to compute pairwise attention scores for each word in the document.
Intermediate word representations, H^{(k)}_i, are updated with sentence-level context. These intermediate representations are then used in a second stage of multi-headed attention to generate an embedding for each sentence in the document:

H^{(k)}_i = Transformer(w^{(k)}_{i-1}),   (3)
s^{(k)}_i = MultiAtt(s^{(k)}_{i-1}, H^{(k)}_i).   (4)

We then concatenate the newly constructed sentence representations and allow each word in sentence k to attend to all preceding sentences' representations.2 Finally, we apply a feed-forward network, which uses two linear transformations with a ReLU activation to get the layer's final output:

H'^{(k)}_i = MultiAtt(H^{(k)}_i, [s^{(k)}_i; s^{(k-1)}_i; ...; s^{(1)}_i]),   (5)
w^{(k)}_i = ReLU(H'^{(k)}_i).   (6)

Figure 1: Doc-Star-Transformer encoder.

The Doc-Star-Transformer decoder follows a similar structure to the encoder, except that the decoder does not have access to the sentence representation of the current sentence k, thus removing sentence s^{(k)}_i from (5). Source-side context is added through concatenation of the previous sentence embeddings from the final layer of the encoder with the decoder's in (5).
2 We describe our method in the online setting and to match the decoder side. In the document-MT setting, (5) concatenates all sentences' representations to include context from future source-side sentences during translation.

4.2 Sentence-level Recurrence
To overcome practical memory constraints (due to very long documents), we introduce a sentence-level recurrence mechanism with state reuse, similar to that used by Dai et al. (2019). During training, a constant number of sentence embeddings are cached to provide context when translating the next segment. We cut off gradients to these cached sentence embeddings, but allow them to be used to model long-term dependencies without context fragmentation. More formally, we allow τ to be the number of previous sentence embeddings maintained in the cache and update as follows:

H'^{(k)}_i = MultiAtt(H^{(k)}_i, [s^{(k)}_i; s^{(k-1)}_i; ...; s^{(B)}_i; SG(s^{(B)}_i); ...; SG(s^{(B-τ)}_i)]),

where B is the index of the first sentence in the batch and SG(·) denotes sentence representations with stopped gradients. In contrast with previous approaches, such as Hierarchical Attention (Maruf et al., 2019), this gradient caching strategy has the advantage of letting the model attend to full source context regardless of document lengths and therefore to avoid practical memory issues.

5 Evaluating Context-Aware NMT
The evaluation of context-aware MT is notoriously tricky (Hardmeier, 2012); standard automatic metrics such as BLEU (Papineni et al., 2002) are poorly suited to evaluating discourse phenomena (e.g. anaphoric references, lexical cohesion, deixis, ellipsis) that require document context. We therefore evaluate all models using a range of phenomenon-specific contrastive test sets.
Contrastive sets are an automatic way of evaluating the handling of particular phenomena (Sennrich, 2017; Rios Gonzales et al., 2017). The aim is to assess how well models rank correct translations higher than incorrect (contrastive) ones. For context-aware test sets, the correctness of translations depends on context. Several such sets exist for a range of discourse phenomena and for several language directions: EN→FR (Bawden et al., 2018), EN→DE (Müller et al., 2018) and EN→RU (Voita et al., 2019b). In this article, we evaluate using the following test sets for our two language directions of focus, EN→DE and EN→FR:
EN→FR: anaphora, lexical choice (Bawden et al., 2018):3 two manually crafted sets (200 contrastive pairs each), for which the previous sentence determines the correct translation.
3 https://github.com/rbawden/discourse-mt-test-sets
The sets are balanced such that each correct translation also appears as an incorrect one (a non-contextual baseline achieves 50% precision). Anaphora examples include singular and plural personal and possessive pronouns. In addition to standard contrastive examples, this set also contains contextually correct examples, where the antecedent is translated
The test set is created as follows:\n1. Instances of itandtheyand their antecedents are\ndetected using NEURALCOREF .5Unlike M ¨uller\net al. (2018), we only run English coreference\ndue to a lack of an adequate French tool.\n2. We align pronouns to their translations ( il,elle,\nils,elles) using FastAlign (Dyer et al., 2013).\n3. Examples are filtered to only include sub-\nject pronouns (using Spacy6) with a nominal\nantecedent, aligned to a nominal French an-\ntecedent matching the pronoun’s gender. We\nalso remove examples whose antecedent is\nmore than five sentences away to avoid cases\nof imprecise coreference resolution.\n4. Contrastive translations are created by inverting\nthe pronouns’ gender (cf. Figure 2). We modify\nthe gender of words that agree with the pronoun\n(e.g. adjectives and some past participles) using\nthe Le ffflexicon (Sagot, 2010)).\nThe test set consists of 3,500 examples for each\ntarget pronoun type (cf. Table 2 for the distribution\nof coreference distances).\n6 Experimental Setup\nAs mentioned in §1, we aim to provide a system-\natic comparison of the approaches over the same\n4https://github.com/ZurichNLP/ContraPro\n5https://github.com/huggingface/neuralcoref\n6https://spacy.ioContext sentence\nPies made from apples like these.\nDestartes ffaites avec des pommes comme celles-ci\nCurrent sentence\nOh,they do look delicious.\n\b Elles font l’air d ´elicieux.\n× Ilsmont l’air d ´elicieux.\nFigure 2: An example from the large-scale EN !FR test set.\n# examples at each distance\nPronoun 0 1 2 3 4 5\nil 1,628 1,094 363 213 127 75\nelle 1,658 1,144 356 166 106 70\nils 1,165 1,180 501 302 196 156\nelles 1,535 1,148 409 199 128 81\nTable 2: The distribution of each pronoun type according to\ndistance (in #sentences) from the antecedent.\ndatasets, training data sizes and language pairs. We\nstudy whether pre-training with larger resources\n(in a more realistic high-resource scenario) has an\nimpact on the methods on language directions that\nare challenging for sentence-level MT. We con-\nsider translation from English into French (FR),\nGerman (DE) and Brazilian Portuguese (PT br),\nwhich all have gendered pronouns corresponding\nto neutral anaphoric pronouns in English ( itfor all\nthree and they for FR and PT br).\nWe compare the three previous methods (§3)\nplus the Doc-Star-Transformer in two scenarios:\n(i) document MT, testing on TED talks (EN !FR\nand EN!PTbr), and (ii) chat MT testing on pro-\nprietary conversation data for all three directions.\n6.1 Data\nFor both scenarios, we pre-train baseline mod-\nels on large amounts of publicly available\nsentence-level parallel data ( \u001818M,\u001822Mand\n\u00185Msentence pairs for EN !DE, EN!FR, and\nEN!PTbr respectively). We then separately fine-\ntune them to each domain. For the document MT\ntask, we consider EN !DE and EN!FR and fine-\ntune on IWSLT17 (Cettolo et al., 2012) TED Talks,\nusing the test sets 2011-2014 as dev sets, and\n2015 as test sets. For the chat MT task, we fine-\ntune on (anonymized) proprietary data of 3dif-\nferent domains and on an additional language pair\n(EN!PTbr). 
Dataset sizes are shown in Table 3\n(sentence-level pre-training data) and Tables 4–5\n(document and chat task data respectively).\nTrain Dev\nEN-DE 18M 1K\nEN-FR 20M 1K\nEN-PT br 5M 1K\nTable 3: Sentence-level corpus sizes (#sentences)\nTrain Dev Test\nEN-DE 206K 5.4K 1.1K\nEN-FR 233K 5.8k 1.2K\nTable 4: TED talks document-level corpus sizes (#sentences)\nDomain1 Domain2 Domain3\nEN-DETrain 674k 62K 13K\nDev 37K 3.2K 0.6K\nTest 35K 3.6K 0.7K\nEN-FRTrain 395K 108K 110K\nDev 21K 6.3K 6.1K\nTest 22K 6.2K 6.3K\nEN-PT brTrain 235K 61K 13K\nDev 13K 3.4K 0.7K\nTest 13K 3.2K 0.7K\nTable 5: The corpora sizes of the chat translation task. We\nconsider both speakers for this count.\n6.2 Training Configuration\nFor all experiments we use the Transformer base\nconfiguration (hidden size of 512, feedforward size\nof 2048, 6 layers, 8 attention heads) with the\nlearning rate schedule described in (Vaswani et\nal., 2017). We use label smoothing with an ep-\nsilon value of 0:1(Pereyra et al., 2017) and early\nstopping of 5consecutive non-improving valida-\ntion points of both accuracy and perplexity. Self-\nattentive models are sensitive to batch size (Popel\nand Bojar, 2018), and so we use batches of 32kto-\nkens for all methods.7For all tasks, we use a sub-\nword unit vocabulary (Sennrich et al., 2016) with\n32koperations. We share source and target embed-\ndings, as well as target embeddings with the final\nvocab projection layer (Press and Wolf, 2017).\nFor the document translation experiments, we\nrun the same experimental setting with 3different\nseeds and average the scores of each model.\nFor the approaches that fine-tune just the\ndocument-level parameters (i.e. cache-based,\nmulti-source encoder, and Doc-Star-Transformer),\nwe reset all optimizer states and train with the\nsame configuration as the baselines (with the base\nparameters frozen), as described in (Tu et al., 2018;\nZhang et al., 2018). For Doc-Star-Transformer we\nuse multi-heads of 2 and 8 heads. All methods are\n7The optimizer update is delayed to simulate the 32ktokens.implemented in Open-NMT (Klein et al., 2017).\n6.3 Chat-specific modifications\nIn the case of the concatenation-based approaches,\nmulti-source context encoder, and the Doc-Star-\nTransformer, we add the speaker symbol as spe-\ncial token to the beginning of each sentence. For\nthe cache-based systems, we introduce two differ-\nent caches, one per speaker, and investigate dif-\nferent methods for deep fusing them (Tu et al.,\n2018): (i) deep fusing the first speaker’s cache first\nand next fusing with the second speaker’s cache,\n(ii) the same method but with the second speaker\nfirst, and (iii) jointly integrating the caches. In ad-\ndition, for the cache-based system we explore the\neffect of storing full words or subword units in\nthe external memory For the full word approach,\nwe use subword units in the vocab but merge the\nwords when adding to the cache.\n6.4 Evaluation setup\nWe perform both automatic and manual evalua-\ntion, in order to gain more insights into the dif-\nferences between the models.\nAutomatic evaluation: We first evaluate\nall methods with case-sensitive detokenized\nBLEU (Papineni et al., 2002).8We then evaluate\ncontext-dependent discourse-level phenomena us-\ning the previously described contrastive test sets.\nFor EN!DE this corresponds to the large-scale\nanaphoric pronoun test set of M ¨uller et al. 
(2018) and for EN→FR our own analogous large-scale anaphoric pronoun test set (described in §5),9 as well as the manually crafted test sets of Bawden et al. (2018) for anaphora and coherence/cohesion.
Manual evaluation: In the case of the chat translation task (using proprietary data), in addition to BLEU, we also manually assess the performance of the systems with professional human annotators, who mark the errors of the systems with different levels of severity (i.e. minor, major, critical). In the case of extra-sentential errors such as agreement we asked them to mark both the pronoun and its antecedent. We score the systems' performance using Multidimensional Quality Metrics (MQM) (Lommel, 2013):

MQM = 100 − (minor + major × 5 + critical × 10) / word count

8 Using Moses' (Koehn et al., 2007) multi-bleu-detok.
9 For both large-scale test sets, we make sure to exclude the documents they include from the training data.

By having access to the full conversation, the annotators can annotate both intra- and extra-sentential errors (e.g. document-level errors of agreement or lexical consistency).
We prioritize documents with a large number of edits compared to the sentence-level baseline (normalized by document length), since document-level systems tend to perform few edits with respect to the high-performing non-context-aware systems. We request annotations of approximately 200 sentences per language pair and method.

7 Results and analysis

7.1 Document Translation Task
Table 6 shows the results of the average performance of each system on IWSLT data according to BLEU. Although the approaches have previously shown improved performance compared to a baseline, when a stronger baseline is used, we see marginal to no improvements over the baseline for both language directions.

EN→DE  EN→FR
Baseline  32.08  40.92
Concat2to1  31.84  40.67
Concat2to2  30.89  40.57
Cache SubWords  32.10  40.91
Cache Words  32.12  40.88
Zhang et al. 2018  31.03  40.95
Star, 2 heads, gold target ctx  31.76  41.00
Star, 2 heads, predicted target ctx  31.39  40.72
Star, 8 heads, gold target ctx  31.74  40.74
Star, 8 heads, predicted target ctx  31.29  40.58
Table 6: BLEU score results on the IWSLT15 test set (averaged over 3 different runs for each method).

Table 7 shows the average performance of each system for all contrastive sets. The results differ greatly from the BLEU results; methods on par with or below the baseline according to BLEU perform better than the baseline when evaluated on the contrastive test sets. This is notably the case of the Concat models, which achieve some of the best results on both large-scale pronoun sets (EN→DE and EN→FR), as shown by the high percentages on the more difficult feminine pronoun Sie for EN→DE and all pronouns for EN→FR.
Most models struggle to achieve high performance for the feminine Sie and masculine Er, which is likely due to the neuter Es being the majority class in the training data. For French, although the feminine pronouns are also usually challenging, the high scores seen here are possibly due to the fact that many examples have an antecedent within the same sentence. The Concat2to2 method however performs well across the board, proving to be an effective way of exploiting context. It also achieves the highest scores on both the anaphora and coherence/cohesion test sets, which is only possible when the context is actually being used, as the test set is completely balanced.
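For reference, the accuracies reported in Table 7 come from a ranking protocol: the model scores the correct translation and its contrastive variant, and an example counts as solved if the correct one receives the higher score. The sketch below shows the shape of that loop; `log_prob` is an assumed interface for the model's sentence-level log-likelihood (for context-aware systems it would also receive the context sentences), and the toy scorer exists only to make the snippet runnable.

```python
from typing import Callable, List, NamedTuple

class ContrastiveExample(NamedTuple):
    source: str
    correct: str
    contrastive: str

# log_prob(source, target) -> summed token log-probability of target given source
LogProb = Callable[[str, str], float]

def contrastive_accuracy(log_prob: LogProb,
                         examples: List[ContrastiveExample]) -> float:
    """Fraction of examples where the correct target outscores the contrastive one."""
    wins = sum(log_prob(ex.source, ex.correct) > log_prob(ex.source, ex.contrastive)
               for ex in examples)
    return wins / len(examples)

if __name__ == "__main__":
    # Toy scorer that simply prefers the feminine plural pronoun, for illustration only.
    toy: LogProb = lambda src, tgt: 1.0 if tgt.split()[0] == "elles" else 0.0
    data = [ContrastiveExample("they do look delicious",
                               "elles ont l'air délicieux",
                               "ils ont l'air délicieux")]
    print(contrastive_accuracy(toy, data))  # prints 1.0 for this toy example
```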
This appears to\nconfirm the findings of Bawden et al. (2018) that\ntarget-side context is most effectively used when\nchannelled through the decoder. Surprisingly, the\nmulti-source encoder approach degrades the base-\nline with respect to this evaluation, suggesting that\nthe context being used is detrimental to the han-\ndling of these phenomena.\nWe note that using OpenSubtitles as a resource\nfor context-dependent translation or scoring, has\nadditional challenges. Figure 3 illustrates four of\nthese, which could make translation more chal-\nlenging if they affect the context being exploited.\n7.2 Chat Translation Task\nTable 8 shows BLEU score results on the propri-\netary data, with the modifications described in §3\nto address the chat task. As expected, document-\nlevel information has a larger impact for the lowest\nresource language pair, EN !PTbr, with marginal\nimprovements on EN !FR and EN!DE.\nThe performance of these methods depends on\nthe language pair and domain. Although it is not\nconclusive which method performs best, our pro-\nposed method improves over the baseline consis-\ntently, whereas the cache-based and Concat2to2\nmethods also perform well in some scenarios. For\nour Doc-Star-Transformer approach, using predic-\ntions rather than the gold history harms the model\nat inference, showing that bridging this gap could\nlead to a better handling of target-side context.\nThere is little correlation between BLEU scores\nand the human MQM scores (as shown by the\ncomparison for 3 methods in Table 9). Although\nthe difference between BLEU scores are marginal,\nMQM indicates that quality differences can be\nseen by human evaluators: the document-level sys-\ntems (Cache and Star) both achieve higher results\nfor EN!PTbr (although the Star approach under-\nperforms for EN!FR). This shows that for cer-\ntain language directions, the document-level ap-\nproaches do learn to fix some errors and therefore\nimprove translation quality. This also confirms\nprevious suggestions that BLEU is not a good met-\nEN!DE EN !FR\nTotal Es Sie Er Totalit they AnaphoraCoherence/\ncohesion(%)\nelle il elles ils All All\nBaseline 45.0 91.9 22.9 20.2 79.7 88.1 82.7 76.1 72.2 50.0 50.0\nConcat2to1 48.0 91.6 27.1 25.3 80.9 88.4 83.3 77.2 73.9 50.0 52.5\nConcat2to2 70.8 91.8 61.9 58.7 83.2 89.2 86.2 80.4 77.6 82.5 55.0\nCache (Subwords) 45.2 92.1 23.5 19.9 79.7 88.0 82.7 76.0 72.0 50.0 50.0\nMulti-src Enc 42.6 62.3 33.9 31.5 59.0 62.0 61.3 57.2 57.3 47.0 46.5\nStar, 8 heads 45.9 91.3 27.0 19.5 79.6 88.0 82.6 76.1 72.0 50.0 50.0\nTable 7: Accuracies (in %) for the contrastive sets. 
Methods outperforming the baseline are in bold.\nDomain1 Domain2 Domain3\nEN-DE EN-FR EN-PT br EN-DE EN-FR EN-PT br EN-DE EN-FR EN-PT br\nBaseline 78.53 79.71 81.21 72.11 76 73.94 69.67 74.76 74.95\nConcat2to1S1,S2 + speaker tag 78.04 79.65 80.36 71 75.35 73.02 69.92 74.57 74.82\nS1 77.97 79.55 80.26 70.95 75.21 73.33 69.77 74.47 74.84\nConcat2to2S1,S2 + speaker tag 79.84 79.3 80.33 70.56 74.87 73.52 69.74 74.37 74.56\nS1 78.88 79.15 79.92 70.13 74.9 73.33 69.59 74.25 74.33\nCache S1 + CliJointPolicy Subwords 78.62 79.66 80.79 72.12 75.03 73.47 69.47 74.77 75.04\nJointPolicy Words 78.52 79.63 80.93 71.66 75.93 73.54 69.55 74.77 74.97\nCache S1 onlySubwods 78.41 79.46 81.17 71.73 75.92 74.41 69.68 74.8 74.94\nWords 78.28 79.54 81.04 71.9 75.87 74.33 69.51 74.82 74.94\nMulti-src enc SEP + speaker tag 78.23 79.64 81.04 71.5 75.87 73.78 - 74.66 74.82\nStarS1,S2 2 heads Gold target ctx 79.7 80.08 82.64 71.79 75.62 73.67 71.36 74.87 75.03\nS1,S2 2 heads Predicted target ctx 78.81 79.38 79.63 71.72 75.58 73.7 69.38 74.77 75.11\nS1 2 heads Gold target ctx 79.35 79.58 82.52 72.16 75.95 74.1 71.33 75.01 75.48\nS1 2 heads Predicted target ctx 78.17 79.24 79.83 72.24 75.68 73.9 70.24 74.65 75.21\nTable 8: BLEU scores on the chat translation task (proprietary data for 3 different domains and language pairs). S1 and S2\nrefer to the speakers in the case of chat translation task.\nEN!FR EN !PTbr\nBLEU MQM BLEU MQM\nBaseline 74.76 87.46 74.95 92.47\nCache 74.82 89.02 74,94 93.20\nStar 2 heads 75.01 86.80 75.48 95.20\nTable 9: The results of automatic and manual evaluation\nof the context-aware NMT methods in terms of BLEU and\nMQM on English !French and English !Portuguese.\nric to distinguish between strong NMT systems.\n8 Conclusion\nWe provided a systematic comparison of several\ncontext-aware NMT methods. One of the meth-\nods in this comparison was a new adaptation of\nthe recently proposed StarTransformer architec-\nture to document-level MT. In addition to BLEU,\nwe reported results of the contrastive evaluation\nof context-dependent phenomena (anaphora and\ncoherence/cohesion), creating an additional large-\nscale contrastive test set for EN !FR anaphoric\npronouns, and we carried out human evalua-\ntion in terms of Multidimensional Quality Met-\nrics (MQM). Our findings suggest that existing\ncontext-aware approaches are less advantageous in\nscenarios with larger datasets and strong sentence-\nlevel baselines. In terms of the targeted context-\ndependent evaluation, one of the promising ap-proaches is one of the simplest: the Concat2to2,\nwhere translated context is channelled through\nthe decoder, although our Doc-Star-Transformer\nmethod achieves good results according to the\nmanual evaluation of MT quality.\nAcknowledgments\nWe thank the anonymous reviewers for their\nvaluable feedback. This work is supported by\nthe EU in the context of the PT2020 project\n(contracts 027767 and 038510) and the H2020\nGoURMET project (825299), by the European Re-\nsearch Council (ERC StG DeepSPIN 758969), by\nthe Fundac ¸ ˜ao para a Ci ˆencia e Tecnologia through\ncontract UID/EEA/50008/2019 and by the UK En-\ngineering and Physical Sciences Research Council\n(MTStretch fellowship grant EP/S001271/1).\nReferences\nBahdanau, D., K. Cho, and Y . Bengio. 2014. Neural\nmachine translation by jointly learning to align and\ntranslate. arXiv preprint arXiv:1409.0473 .\nBarrault, Lo ¨ıc, Ond ˇrej Bojar, Marta R. 
Costa-juss `a,\nChristian Federmann, Mark Fishel, Yvette Gra-\nham, Barry Haddow, Matthias Huck, Philipp Koehn,\nShervin Malmasi, Christof Monz, Mathias M ¨uller,\nSantanu Pal, Matt Post, and Marcos Zampieri. 2019.\nDifficulty English French\nColloquialisms Well, they just ain’t a-treatin’ me right Eh bien, elles me traitent mal\n‘Well, they’re treating me badly’\nParaphrasing Do not forget your friends, they are always with\nyou heart and soul!N’oubliez pas vos amis: ils sont toujours pr `es de vous!\n‘Don’t forget your friends: they are always near to you’\nTruncation Neighbor. what have you done? V oisin ?\n‘Neighbour?’\nFree translation I don’t understand either. Moi non plus.\n‘me neither’\nFigure 3: Examples of four challenges for MT of OpenSubtitles: (i) colloquialisms, (ii) paraphrasing, (iii) subtitle truncation\n(can be due to space constraints), and (iv) free translations that fulfill the same discursive role.\nFindings of the 2019 Conference on Machine Trans-\nlation (WMT19). In Proceedings of the 4th Confer-\nence on Machine Translation .\nBawden, Rachel, Rico Sennrich, Alexandra Birch, and\nBarry Haddow. 2018. Evaluating Discourse Phe-\nnomena in Neural Machine Translation. In Proceed-\nings of the 2018 Conference of the North American\nChapter of the Association for Computational Lin-\nguistics; Human Language Technologies .\nBawden, Rachel, Sophie Rosset, Thomas Lavergne,\nand Eric Bilinski. 2019. DiaBLa: A Corpus of\nBilingual Spontaneous Written Dialogues for Ma-\nchine Translation.\nBawden, Rachel. 2018. Going beyond the sentence:\nContextual Machine Translation of Dialogue . Ph.D.\nthesis, LIMSI, CNRS, Universit ´e Paris-Sud, Univer-\nsit´e Paris-Saclay, Orsay, France.\nCettolo, Mauro, Christian Girardi, and Marcello Fed-\nerico. 2012. WIT3: Web Inventory of Transcribed\nand Translated Talks. In Proceedings of the 16th\nConference of the European Association for Machine\nTranslation .\nDai, Zihang, Zhilin Yang, Yiming Yang, Jaime G.\nCarbonell, Quoc V . Le, and Ruslan Salakhutdinov.\n2019. Transformer-XL: Attentive Language Models\nBeyond a Fixed-Length Context. In Proceedings of\nthe 57th annual meeting on association for compu-\ntational linguistics .\nDyer, Chris, Victor Chahuneau, and Noah A. Smith.\n2013. A simple, fast, and effective reparameteriza-\ntion of IBM model 2. In Proceedings of the 2013\nConference of the North American Chapter of the\nAssociation for Computational Linguistics: Human\nLanguage Technologies .\nGrave, Edouard, Armand Joulin, and Nicolas Usunier.\n2016. Improving neural language models with a\ncontinuous cache. arXiv preprint arXiv:1612.04426 .\nGuo, Qipeng, Xipeng Qiu, Pengfei Liu, Yunfan Shao,\nXiangyang Xue, and Zheng Zhang. 2019. Star-\ntransformer. In Proceedings of the 2019 Conference\nof the North American Chapter of the Association for\nComputational Linguistics: Human Language Tech-\nnologies .Hardmeier, Christian. 2012. Discourse in Statistical\nMachine Translation. a survey and a case study. Dis-\ncours , 11.\nHassan, Hany, Anthony Aue, Chang Chen, Vishal\nChowdhary, Jonathan Clark, Christian Feder-\nmann, Xuedong Huang, Marcin Junczys-Dowmunt,\nWilliam Lewis, Mu Li, et al. 2018. Achieving hu-\nman parity on automatic Chinese to English news\ntranslation. arXiv preprint arXiv:1803.05567 .\nJunczys-Dowmunt, Marcin. 2019. Microsoft translator\nat wmt 2019: Towards large-scale document-level\nneural machine translation. 
In Proceedings of the\nFourth Conference on Machine Translation .\nKlein, Guillaume, Yoon Kim, Yuntian Deng, Jean\nSenellart, and Alexander M. Rush. 2017. Open-\nNMT: Open-source toolkit for neural machine trans-\nlation. In Proceedings of the 55th Annual Meeting of\nthe Association for Computational Linguistics .\nKoehn, Philipp, Hieu Hoang, Alexandra Birch, Chris\nCallison-Burch, Marcello Federico, Nicola Bertoldi,\nBrooke Cowan, Wade Shen, Christine Moran,\nRichard Zens, Chris Dyer, Ond ˇrej Bojar, Alexandra\nConstantin, and Evan Herbst. 2007. Moses: Open\nsource toolkit for statistical machine translation. In\nProceedings of the 45th Annual Meeting of the Asso-\nciation for Computational Linguistics .\nL¨aubli, Samuel, Rico Sennrich, and Martin V olk. 2018.\nHas machine translation achieved human parity? a\ncase for document-level evaluation. In Proceedings\nof the 2018 Conference on Empirical Methods in\nNatural Language Processing .\nLaubli, Samuel, Sheila Castilho, Graham Neubig,\nRico Sennrich, Qinlan Shen, and Antonio Toral.\n2020. A set of recommendations for assessing hu-\nman–machine parity in language translation. Jour-\nnal of Artificial Intelligence Research (JAIR) , 67.\nLison, Pierre, J ¨org Tiedemann, and Milen Kouylekov.\n2018. OpenSubtitles2018: Statistical rescoring of\nsentence alignments in large, noisy parallel corpora.\nInProceedings of the 11th International Conference\non Language Resources and Evaluation .\nLommel, Arle Richard. 2013. Multidimensional qual-\nity metrics: a flexible system for assessing transla-\ntion quality\nMaruf, Sameen, Andr ´e F. T. Martins, and Gholamreza\nHaffari. 2018. Contextual neural model for translat-\ning bilingual multi-speaker conversations. In Proc.\nof the 3rd Conference on Machine Translation .\nMaruf, Sameen, Andr ´e F. T. Martins, and Gholamreza\nHaffari. 2019. Selective Attention for Context-\naware Neural Machine Translation. In Proceedings\nof the 2019 Conference of the North American Chap-\nter of the Association for Computational Linguistics:\nHuman Language Technologies .\nMiculicich, Lesly, Dhananjay Ram, Nikolaos Pappas,\nand James Henderson. 2018. Document-Level Neu-\nral Machine Translation with Hierarchical Attention\nNetworks. In Proceedings of the 2018 Conference\non Empirical Methods in Natural Language Process-\ning.\nM¨uller, Mathias, Annette Rios, Elena V oita, and Rico\nSennrich. 2018. A large-scale test set for the eval-\nuation of context-aware pronoun translation in neu-\nral machine translation. In Proceedings of the Third\nConference on Machine Translation .\nNg, Nathan, Kyra Yee, Alexei Baevski, Myle Ott,\nMichael Auli, and Sergey Edunov. 2019. Facebook\nfair’s wmt19 news translation task submission. In\nProceedings of the Fourth Conference on Machine\nTranslation (Volume 2: Shared Task Papers, Day 1) .\nPapineni, Kishore, Salim Roukos, Todd Ward, and Wei-\nJing Zhu. 2002. BLEU: a method for Automatic\nEvaluation of Machine Translation. In Proceedings\nof the 40th Annual Meeting on Association for Com-\nputational Linguistics .\nPereyra, Gabriel, George Tucker, Jan Chorowski,\nŁukasz Kaiser, and Geoffrey Hinton. 2017. Regu-\nlarizing neural networks by penalizing confident out-\nput distributions. In Proceedings of the 5th Interna-\ntional Conference on Learning Representations .\nPopel, Martin and Ond ˇrej Bojar. 2018. Training tips\nfor the transformer model. The Prague Bulletin of\nMathematical Linguistics , 110(1).\nPress, Ofir and Lior Wolf. 2017. 
Using the output em-\nbedding to improve language models. In Proceed-\nings of the 15th Conference of the European Chapter\nof the Association for Computational Linguistics .\nRios Gonzales, Annette, Laura Mascarell, and Rico\nSennrich. 2017. Improving word sense disambigua-\ntion in neural machine translation with sense embed-\ndings. In Proceedings of the 2nd Conference on Ma-\nchine Translation .\nSagot, Beno ˆıt. 2010. The lefff, a freely available and\nlarge-coverage morphological and syntactic lexicon\nfor French. In Proceedings of the 7th International\nConference on Language Resources and Evaluation .\nSennrich, Rico, Barry Haddow, and Alexandra Birch.\n2016. Neural machine translation of rare words with\nsubword units. In Proc. of the 54th Annual Meeting\nof the Association for Computational Linguistics .Sennrich, Rico. 2017. How Grammatical is Character-\nlevel Neural Machine Translation? In Proceedings\nof the 15th Conference of the European Chapter of\nthe Association for Computational Linguistics .\nSmith, Karin Sim. 2017. On integrating discourse in\nmachine translation. In Proceedings of the Third\nWorkshop on Discourse in Machine Translation .\nTiedemann, J ¨org and Yves Scherrer. 2017. Neural ma-\nchine translation with extended context. In Proceed-\nings of the 3rd Workshop on Discourse in Machine\nTranslation .\nToral, Antonio, Sheila Castilho, Ke Hu, and Andy Way.\n2018. Attaining the Unattainable? Reassessing\nClaims of Human Parity in Neural Machine Trans-\nlation. In Proceedings of the Third Conference on\nMachine Translation .\nTu, Zhaopeng, Yang Liu, Shuming Shi, and Tong\nZhang. 2018. Learning to remember translation his-\ntory with a continuous cache. Transactions of the\nAssociation for Computational Linguistics , 6.\nVaswani, Ashish, Noam Shazeer, Niki Parmar, Jakob\nUszkoreit, Llion Jones, Aidan N Gomez, Łukasz\nKaiser, and Illia Polosukhin. 2017. Attention is all\nyou need. In Advances in neural information pro-\ncessing systems .\nV oita, Elena, Rico Sennrich, and Ivan Titov. 2019a.\nContext-aware monolingual repair for neural ma-\nchine translation. In Proceedings of the 2019 Con-\nference on Empirical Methods in Natural Language\nProcessing and the 9th International Joint Confer-\nence on Natural Language Processing .\nV oita, Elena, Rico Sennrich, and Ivan Titov. 2019b.\nWhen a good translation is wrong in context:\nContext-aware machine translation improves on\ndeixis, ellipsis, and lexical cohesion. In Proceed-\nings of the 57th Annual Meeting of the Association\nfor Computational Linguistics .\nWang, Longyue. 2019. Discourse-Aware Neural Ma-\nchine Translation . Ph.D. thesis, Dublin City Univer-\nsity, Dublin, Ireland.\nYu, Lei, Laurent Sartran, Wojciech Stokowiec, Wang\nLing, Lingpeng Kong, Phil Blunsom, and Chris\nDyer. 2019. Putting machine translation in context\nwith the noisy channel model.\nZhang, Jiacheng, Huanbo Luan, Maosong Sun, Feifei\nZhai, Jingfang Xu, Min Zhang, and Yang Liu. 2018.\nImproving the Transformer Translation Model with\nDocument-Level Context. In Proceedings of the\n2018 Conference on Empirical Methods in Natural\nLanguage Processing .",
"main_paper_content": null
}
|
{
"decision": "Unknown",
"reviews": []
}
| 0 | 0 |
[] |
[] |
{
"id": "RKtW_7jQoKv",
"year": null,
"venue": "EAMT 2020",
"pdf_link": "https://aclanthology.org/2020.eamt-1.15.pdf",
"forum_link": "https://openreview.net/forum?id=RKtW_7jQoKv",
"arxiv_id": null,
"doi": null
}
|
{
"title": "Correct Me If You Can: Learning from Error Corrections and Markings",
"authors": [
"Julia Kreutzer",
"Nathaniel Berger",
"Stefan Riezler"
],
"abstract": "Sequence-to-sequence learning involves a trade-off between signal strength and annotation cost of training data. For example, machine translation data range from costly expert-generated translations that enable supervised learning, to weak quality-judgment feedback that facilitate reinforcement learning. We present the first user study on annotation cost and machine learnability for the less popular annotation mode of error markings. We show that error markings for translations of TED talks from English to German allow precise credit assignment while requiring significantly less human effort than correcting/post-editing, and that error-marked data can be used successfully to fine-tune neural machine translation models.",
"keywords": [],
"raw_extracted_content": "Correct Me If You Can: Learning from Error Corrections and Markings\nJulia Kreutzer\u0003and Nathaniel Berger\u0003and Stefan Riezlery;\u0003\n\u0003Computational Linguistics &yIWR\nHeidelberg University, Germany\nfkreutzer, berger, riezler [email protected]\nAbstract\nSequence-to-sequence learning involves a\ntrade-off between signal strength and an-\nnotation cost of training data. For ex-\nample, machine translation data range\nfrom costly expert-generated translations\nthat enable supervised learning, to weak\nquality-judgment feedback that facilitate\nreinforcement learning. We present the\nfirst user study on annotation cost and\nmachine learnability for the less popu-\nlar annotation mode of error markings.\nWe show that error markings for trans-\nlations of TED talks from English to\nGerman allow precise credit assignment\nwhile requiring significantly less human\neffort than correcting/post-editing, and that\nerror-marked data can be used success-\nfully to fine-tune neural machine transla-\ntion models.\n1 Introduction\nSuccessful machine learning for structured output\nprediction requires the effort of annotating suf-\nficient amounts of gold-standard outputs—a task\nthat can be costly if structures are complex and ex-\npert knowledge is required, as for example in neu-\nral machine translation (NMT) (Bahdanau et al.,\n2015). Approaches that propose to train sequence-\nto-sequence prediction models by reinforcement\nlearning from task-specific scores, for example\nBLEU in machine translation (MT), shift the prob-\nlem by simulating such scores by evaluating ma-\nchine translation output against expert-generated\nc\r2020 The authors. This article is licensed under a Creative\nCommons 3.0 licence, no derivative works, attribution, CC-\nBY-ND.reference structures (Ranzato et al., 2016; Bah-\ndanau et al., 2017; Kreutzer et al., 2017; Sokolov\net al., 2017). An alternative approach that proposes\nto considerably reduce human annotation effort by\nallowing to mark errors in machine outputs, for ex-\nample erroneous words or phrases in a machine\ntranslation, has recently been proposed and been\ninvestigated in simulation studies by Marie and\nMax (2015); Domingo et al. (2017); Petrushkov\net al. (2018). This approach takes the middle\nground between supervised learning from error\ncorrections as in machine translation post-editing1\n(or from translations created from scratch) and\nreinforcement learning from sequence-level ban-\ndit feedback (this includes self-supervised learning\nwhere all outputs are rewarded uniformly). Error\nmarkings are highly promising since they suggest\nan interaction mode with low annotation cost, yet\nthey can enable precise token-level credit/blame\nassignment, and thus can lead to an effective fine-\ngrained discriminative signal for machine learning\nand data filtering.\nOur work is the first to investigate learning from\nerror markings in a user study. Error corrections\nand error markings are collected from junior pro-\nfessional translators, analyzed, and used as train-\ning data for fine-tuning neural machine translation\nsystems. The focus of our work is on the learn-\nability from error corrections and error markings,\nand on the behavior of annotators as teachers to\na machine translation system. 
We find that error\nmarkings require significantly less effort (in terms\nof key-stroke-mouse-ratio (KSMR) and time) and\nresult in a lower correction rate (ratio of words\nmarked as incorrect or corrected in a post-edit).\nFurthermore, they are less prone to over-editing\n1In the following we will use the more general term error cor-\nrections and MT specific term post-edits interchangeably.\nthan error corrections. Perhaps surprisingly, agree-\nment between annotators of which words to mark\nor to correct was lower for markings than for post-\nedits. However, despite of the low inter-annotator\nagreement, fine-tuning of neural machine transla-\ntion could be conducted successfully from data an-\nnotated in either mode. Our data set of error cor-\nrections and markings is publicly available.2\n2 Related Work\nPrior work closest to ours is that of Marie and Max\n(2015); Domingo et al. (2017); Petrushkov et al.\n(2018), however, these works were conducted by\nsimulating error markings by heuristic matching\nof machine translations against independently cre-\nated human reference translations. Thus the ques-\ntion of the practical feasibility of machine learning\nfrom noisy human error markings is left open.\nUser studies on machine learnability from hu-\nman post-edits, together with thorough perfor-\nmance analyses with mixed effects models, have\nbeen presented by Green et al. (2014); Bentivogli\net al. (2016); Karimova et al. (2018). Albeit show-\ncasing the potential of improving NMT through\nhuman corrections of machine-generated outputs,\nthese works do not consider “weaker” annotation\nmodes like error markings. User studies on the\nprocess and effort of machine translation post-\nediting are too numerous to list—a comprehensive\noverview is given in Koponen (2016). In contrast\nto works on interactive-predictive translation (Fos-\nter et al., 1997; Knowles and Koehn, 2016; Peris\net al., 2017; Domingo et al., 2017; Lam et al.,\n2018), our approach does not require an online in-\nteraction with the human and allows to investigate,\nfilter, pre-process, or augment the human feedback\nsignal before making a machine learning update.\nMachine learning from human feedback beyond\nthe scope of translations, has considered learn-\ning from human pairwise preferences (Christiano\net al., 2017), from human corrective feedback\n(Celemin et al., 2018), or from sentence-level re-\nward signals on a Likert scale (Kreutzer et al.,\n2018). However, none of these studies has consid-\nered error markings on tokens of output sequences,\ndespite its general applicability to a wide range of\nlearning tasks.\n2https://www.cl.uni-heidelberg.de/\nstatnlpgroup/humanmt/3 User Study on Human Error Markings\nand Corrections\nThe goal of the annotation study is to compare the\nnovel error marking mode to the widely adopted\nmachine translation post-editing mode. We are in-\nterested in finding an interaction scenario that costs\nlittle time and effort, but still allows to teach the\nmachine how to improve its translations. In this\nsection we present the setup, measure and com-\npare the observed amount of effort and time that\nwent into these annotations, and discuss the relia-\nbility and adoption of the new marking mode. Ma-\nchine learnability, i.e. 
training of an NMT system\non human-annotated data is discussed in Section 4.\n3.1 Participants\nWe recruited 10participants that described them-\nselves as native German speakers and having ei-\nther a C1 or C2 level in English, as measured by\nthe Common European Framework of Reference\nlevels. 8participants were students studying trans-\nlation or interpretation and 2participants were stu-\ndents studying computational linguistics. All par-\nticipants were paid 100 efor their participation in\nthe study, which was done online, and limited to a\nmaximum of 6 hours, and it took them between 2\nand 4.5 hours excluding breaks. They agreed to the\nusage of the recorded data for research purposes.\n3.2 Interface\nThe annotation interface has three modes: (1)\nmarkings, (2) corrections, and (3) the user-choice\nmode, where annotators first choose between (1)\nand (2) before submitting their annotation. While\nthe first two modes are used for collecting train-\ning data for the MT model, the third mode is used\nfor evaluative purposes to investigate which mode\nis preferable when given the choice. In any case,\nannotators are presented the source sentence, the\ntarget sentence and an instruction to either mark or\ncorrect (aka post-edit) the translation or choose an\nediting mode. They also had the option to pause\nand resume the session. No document-level con-\ntext was presented, i.e., translated sentences were\njudged in isolation, but in consecutive order like\nthey appeared in the original documents to provide\na reasonable amount of context. They received\ndetailed instructions (see Appendix A) on how\nto proceed with the annotation. Each annotator\nworked on 300 sentences, 100 for each mode, and\nan extra 15 sentences for intra-annotator agree-\nFigure 1: Interface for marking of translation outputs follow-\ning user choice between markings and post-edits.\nment measures that were repeated after each mode.\nAfter the completion of the annotation task they\nanswered a survey about the preferred mode, the\nperceived editing/marking speed, user-choice poli-\ncies, and suggestions for improvement. A sreen-\nshot of the interface showing a marking operation\nis shown in Figure 1. The code for the interface is\npublicly available3.\n3.3 Data\nWe selected a subset of 30TED talks to create the\nthree data sets from the IWSLT17 machine trans-\nlation corpus4. The talks were filtered by the fol-\nlowing criteria: single speakers, no music/singing,\nlow intra-line final-sentence punctuation (indicat-\ning bad segmentation), length between 80 and 149\nsentences. One additional short talk was selected\nfor testing the inter- and intra-annotator reliability.\nWe filtered out those sentences where model hy-\npothesis and references were equal, in order to save\nannotation effort where it is clearly not needed,\nand also removed the last line from every talk (usu-\nally “thank you”). For each talk, one topic of a set\nof keywords provided by TED was selected. See\nAppendix B for a description of how data was split\nacross annotators.\n3.4 Effort and Time\nCorrecting one translated sentence took on aver-\nage approximately 5 times longer than marking\nerrors, and required 42 more actions, i.e., clicks\nand keystrokes. That is 0.6 actions per character\nfor post-edits, while only 0.03 actions per charac-\nter for markings. 
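The following sketch shows how such per-mode effort statistics (actions per character, seconds per sentence) can be aggregated from interaction logs; the log schema and field names are assumptions made for illustration and do not reflect the exact logging format of the study.

```python
from dataclasses import dataclass
from statistics import mean
from typing import Dict, List

@dataclass
class AnnotationLog:
    """One annotated sentence; the fields are illustrative, not the study's schema."""
    mode: str            # "marking" or "post_edit"
    keystrokes: int
    clicks: int
    duration_sec: float
    target_chars: int

def effort_by_mode(logs: List[AnnotationLog]) -> Dict[str, Dict[str, float]]:
    """Average actions per character and seconds per sentence, grouped by mode."""
    stats: Dict[str, Dict[str, float]] = {}
    for mode in {log.mode for log in logs}:
        subset = [l for l in logs if l.mode == mode]
        stats[mode] = {
            "actions_per_char": mean((l.keystrokes + l.clicks) / l.target_chars
                                     for l in subset),
            "seconds_per_sentence": mean(l.duration_sec for l in subset),
        }
    return stats

if __name__ == "__main__":
    demo = [AnnotationLog("post_edit", 70, 12, 95.0, 140),
            AnnotationLog("marking", 0, 4, 19.0, 140)]
    print(effort_by_mode(demo))
```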
This measurement aligns with\nthe unanimous subjective impression of the partic-\nipants that they were faster in marking mode.\n3https://github.com/StatNLP/\nmt-correct-mark-interface\n4https://sites.google.com/site/\niwsltevaluation2017/To investigate the sources of variance affecting\ntime and effort, we use Linear Mixed Effect Mod-\nels (LMEM) (Barr et al., 2013) and build one with\nKSMR as response variable, and another one for\nthe total edit duration (excluding breaks) as re-\nsponse variable, and with the editing mode (cor-\nrecting vs. marking) as fixed effect. For both re-\nsponse variables, we model users5, talks and tar-\nget lengths6as random effects, e.g., the one for\nKSMR:\nKSMR\u0018mode + (1juser id) + (1jtalk id)\n+ (1jtrglength ) (1)\nWe use the implementation in the R package\nlmer4 (Bates et al., 2015) and fit the models\nwith restricted maximum likelihood. Inspecting\nthe intercepts of the fitted models, we confirm that\nKSMR is significantly ( p= 0:01) higher for post\nedits than for markings (+3.76 on average). The\nvariance due to the user (0.69) is larger than due to\nthe talk (0.54) and the length (0.05)7. Longer sen-\ntences have a slightly higher KSMR than shorter\nones. When modeling the topics as random effects\n(rather than the talks), the highest KSMR (judging\nby individual intercepts) was obtained for physics\nand biodiversity and the lowest for language and\ndiseases. This might be explained by e.g. the MT\ntraining data or the raters expertise.\nAnalyzing the LMEM for editing duration, we\nfind that post-editing takes on average 42s longer\nthan marking, which is significant at p= 0:01.\nThe variance due to the target length is the largest,\nfollowed by the one due to the talk and the one\ndue to the user is smallest. Long sentences have\na six time higher editing duration on average than\nshorter ones. With respect to topics, the longest\nediting was done for topics like physics and evolu-\ntion, shortest for diseases and health.\n3.4.1 Annotation Quality\nThe corrections increased the quality, measured\nby comparison to reference translations, by 2.1\npoints in BLEU and decreased TER by 1 point.\nWhile this indicates a general improvement, it has\nto be taken with a grain of salt, since the post-edits\n5Random effects are denoted, e.g., by (1|user id)) .\n6Target lengths measured by number of characters were\nbinned into two groups at the limit of 176 characters.\n7Note that KSMR is already normalized by reference length,\nhence the small effect of target length. In a LMER for the\nraw action count (clicks+key strokes), this effect had a larger\nimpact.\nsrc I am a nomadic artist.\nhyp Ich bin ein nomadischer K ¨unstler.\npe Ich bin ein nomadischer K ¨unstler.\nref Ich wurde zu einer nomadischen K ¨unstlerin .\nsrc I look at the chemistry of the ocean today.\nhyp Ich betrachte heute die Chemie des Ozeans.\npe Ich erforsche t ¨aglich die Chemie der Meere.\nref Ich untersuche die Chemie der Meere der Gegenwart .\nsrc There’s even a software called cadnano that allow . . .\nhyp Es gibt sogar eine Software namens Caboano , die . . .\npe Es gibt sogar eine Software namens Caboano , die . . .\nref Es gibt sogar eine Software namens ”cadnano” , . . .\nsrc It was a thick forest.\nhyp Es war ein dicker Wald.\npe Es handelte sich um einen dichten Wald.\nref Auf der Insel war dichter Wald.\nTable 1: Examples of post-editing to illustrate differences\nbetween reference translations ( ref) and post-edits ( pe). 
Ex-\nample 1: The gender in the German translation could not be\ninferred from the context, since speaker information is un-\navailable to post-editor. Example 2: “today” is interpreted as\nadverb by the NMT, this interpretation is kept in the post-edit\n(“telephone game” effect). Example 3: Another case of the\n“telephone game” effect: the name of the software is changed\nby the NMT, and not corrected by post-editors. Example 4:\nOver-editing by post-editor, and more information in the ref-\nerence translation than in the source.\nare heavily biased by the structure, word choice\netc. by the machine translation, which might not\nnecessarily agree with the reference translations,\nwhile still being accurate.\nHow good are the corrections? We therefore\nmanually inspect the post-edits to get insights into\nthe differences between post-edits and references.\nTable 1 provides a set of examples8with their anal-\nysis in the caption. Besides the effect of “liter-\nalness” (Koponen, 2016), we observe three major\nproblems:\n1.Over-editing : Editors edited translations even\nthough they are adequate and fluent.\n2.“Telephone game” effect : Semantic mistakes\n(that do not influence fluency) introduced by\nthe MT system flow into the post-edit and re-\nmain uncorrected, when more obvious correc-\ntions are needed elsewhere in the sentence.\n3.Missing information : Since editors only ob-\nserve a portion of the complete context, i.e.,\nthey do not see the video recording of the\nspeaker or the full transcript of the talk, they\nare not able to convey as much information as\nthe reference translations.\n8Selected because of their differences to references.src Each year, it sends up a new generation of shoots.\nann Jedes Jahr sendet es eine neue Generation von Shoots.\nsim Jedes Jahr sendet es eine neue Generation von Shoots .\nref Jedes Jahr wachsen neue Triebe.\nsrc He killed 63 percent of the Hazara population.\nann Er starb 63 Prozent der Bev ¨olkerung Hazara.\nsim Er starb 63 Prozent der Bev ¨olkerung Hazara.\nref Er t¨otete 63% der Hazara-Bev ¨olkerung.\nsrc They would ordinarily support fish and other wildlife.\nann Sie w ¨urden Fisch und andere wild lebende Tiere unterst ¨utzen.\nsim Siew¨urden Fisch und andere wild lebende Tiere unterst ¨utzen .\nref Normalerweise w ¨urden sie Fisch und andere Wildtiere ern ¨ahren.\nTable 2: Examples of markings to illustrate differences be-\ntween human markings ( ann) and simulated markings ( sim).\nMarked parts are underlined. Example 1: “es” not clear from\ncontext, less literal reference translation. Example 2: Word\nomission (preposition after “Bev ¨olkerung”) or incorrect word\norder is not possible to mark. Example 3: Word order differs\nbetween MT and references, word omission (“ordinarily”) not\nmarked.\nHow good are the markings? Markings, in con-\ntrast, are less prone to over-editing, since they have\nfewer degrees of freedom. They are equally ex-\nposed to problem (3) of missing context, and an-\nother limitation is added: Word omissions and\nword order problems cannot be annotated. Table 2\ngives a set of examples that illustrate these prob-\nlems. While annotators were most likely not aware\nof problems (1) and (2), they might have sensed\nthat information was missing, as well as the ad-\nditional limitations of markings. 
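The sim rows in Table 2 are derived by matching the hypothesis against an independent reference translation, as in the simulation-based prior work discussed next: hypothesis tokens that cannot be matched are treated as errors. One minimal way to implement such a simulation is via a longest-common-subsequence alignment, sketched below; the exact matching procedure used in the earlier studies may differ, and all names here are illustrative.

```python
from typing import List, Tuple

def lcs_table(hyp: List[str], ref: List[str]) -> List[List[int]]:
    """Dynamic-programming table for the longest common subsequence of two token lists."""
    table = [[0] * (len(ref) + 1) for _ in range(len(hyp) + 1)]
    for i, h in enumerate(hyp, 1):
        for j, r in enumerate(ref, 1):
            table[i][j] = table[i-1][j-1] + 1 if h == r else max(table[i-1][j], table[i][j-1])
    return table

def simulate_markings(hyp: List[str], ref: List[str]) -> List[Tuple[str, bool]]:
    """Return (token, marked_as_incorrect) pairs: tokens outside the LCS are marked."""
    table = lcs_table(hyp, ref)
    keep = set()                     # hypothesis positions that align to the reference
    i, j = len(hyp), len(ref)
    while i > 0 and j > 0:
        if hyp[i-1] == ref[j-1]:
            keep.add(i-1); i -= 1; j -= 1
        elif table[i-1][j] >= table[i][j-1]:
            i -= 1
        else:
            j -= 1
    return [(tok, idx not in keep) for idx, tok in enumerate(hyp)]

if __name__ == "__main__":
    hyp = "Er starb 63 Prozent der Bevölkerung Hazara .".split()
    ref = "Er tötete 63 % der Hazara-Bevölkerung .".split()
    print(simulate_markings(hyp, ref))
```

Run on the second example from Table 2, this marks every hypothesis token that differs from the reference, including valid paraphrases, which illustrates the harshness discussed in the following paragraph.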
How reliable are corrections and markings? In addition to the absolute quality of the annotations, we are interested in measuring their reliability: do annotators agree on which parts of a translation to mark or edit? While there are many possible valid translations, and hence many ways to annotate one given translation, it has been shown that learnability profits from annotations with less conflicting information (Kreutzer et al., 2018). In order to quantify agreement for both modes on the same scale, we reduce both annotations to sentence-level quality judgments, which for markings is the ratio of words that were marked as incorrect in a sentence, and for corrections the ratio of words that were actually edited. If the hypothesis were perfect, no markings or edits would be required, and if it were completely wrong, all of it would have to be marked or edited. After this reduction, we measure agreement with Krippendorff's α (Krippendorff, 2013), see Table 3.

Mode          Intra-Rater α (Mean / Std.)   Inter-Rater α
Marking       0.522 / 0.284                 0.201
Correction    0.820 / 0.171                 0.542
User-Chosen   0.775 / 0.179                 0.473

Table 3: Intra- and inter-rater agreement calculated with Krippendorff's α.
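As an illustration of this agreement computation, the sketch below reduces token-level annotations to sentence-level correction rates and passes them to the third-party Python package krippendorff; the package and its alpha() interface are an assumption of this sketch rather than part of the original study, and the toy annotation data is invented for illustration.

```python
import numpy as np
import krippendorff  # pip install krippendorff (third-party package; API assumed)


def correction_rate(token_markings):
    """Ratio of tokens marked as incorrect (or edited) in one sentence."""
    return sum(token_markings) / len(token_markings)


# Invented token-level annotations (1 = marked/edited) for four repeated
# sentences and three raters; None means the rater did not see that sentence.
raw = [
    [[0, 1, 0, 0], [1, 1, 0, 0], [0, 1, 1, 0], None],           # rater 1
    [[0, 1, 0, 1], [1, 0, 0, 0], None,         [0, 0, 0, 0]],   # rater 2
    [[0, 0, 0, 0], [1, 1, 1, 0], [0, 1, 0, 0], [0, 0, 1, 0]],   # rater 3
]

# Reduce each annotation to a sentence-level quality judgment; missing -> NaN.
reliability = np.array(
    [[correction_rate(a) if a is not None else np.nan for a in rater]
     for rater in raw]
)
alpha = krippendorff.alpha(reliability_data=reliability,
                           level_of_measurement="interval")
print(f"Krippendorff's alpha: {alpha:.3f}")
```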
Which mode do annotators prefer? In the user-choice mode, where annotators can choose for each sentence whether they would like to mark or correct it, markings were chosen much more frequently than post-edits (61.9%). Annotators did not agree on the preferred choice of mode for the repeated sentences (α = −0.008), which indicates that there is no obvious policy for when one of the modes would be advantageous over the other. In the post-annotation questionnaire, however, 60% of the participants said they generally preferred post-edits over markings, despite markings being faster, and hence resulting in a higher hourly pay. To better understand the differences between the modes, we asked them about their policies in the user-choice mode, where for each sentence they had to decide individually whether to mark or post-edit it. The most commonly described policy is to decide based on error types and frequency: choose post-edits when insertions or re-ordering are needed, and markings preferably for translations with word errors (less effort than doing a lookup or replacement). One person preferred post-edits for short translations and markings for longer ones, another three generally preferred markings, and one person preferred post-edits. Where annotators found the interface in need of improvement was (1) in the presentation of inter-sentential context, (2) in the display of overall progress, and (3) in an option to edit previously edited sentences. For the marking mode they requested an option to mark missing parts or areas for re-ordering.

Do markings and corrections express the same translation quality judgment? We observe that annotators find more than twice as many token corrections in post-edit mode than in marking mode9. This is partially caused by the reduced degrees of freedom in marking mode, but also underlines the general trend towards over-editing when in post-edit mode. If markings and post-edits were used to compute a quality metric based on the correction rate, translations would be judged as much worse in post-editing mode than in marking mode (Figure 2). This also holds for whole sentences, where 273 (26.20%) were left un-edited in marking mode, and only 3 (0.29%) in post-editing mode.

9The automatically assessed translation quality for the baseline model does not differ drastically between the portions selected per mode.

[Figure 2: Correction rate by annotation mode (marking vs. post-edit). The correction rate describes the ratio of words in the translation that were marked as incorrect (in marking mode) or edited (in post-editing mode). Means are indicated with diamonds.]

4 Machine Learnability of NMT from Human Markings and Corrections

The hypotheses presented to the annotators were generated by an NMT model. The goal is to use the supervision signal provided by the human annotation to improve the underlying model by machine learning. Learnability is concerned with the question of how strong a signal is necessary in order to see improvements in NMT fine-tuning on the respective data.

Definition. Let x = x_1 ... x_S be a sequence of indices over a source vocabulary V_SRC, and y = y_1 ... y_T a sequence of indices over a target vocabulary V_TRG. The goal of sequence-to-sequence learning is to learn a function for mapping an input sequence x into an output sequence y. For the example of machine translation, y is a translation of x, and a model parameterized by a set of weights θ is optimized to maximize p_θ(y | x). This quantity is further factorized into conditional probabilities over single tokens, p_θ(y | x) = ∏_{t=1}^{T} p_θ(y_t | x, y_{<t}), where the latter distribution is defined by the neural model's softmax-normalized output vector:

p_θ(y_t | x, y_{<t}) = softmax(NN_θ(x, y_{<t}))    (2)

There are various options for building the architecture of the neural model NN_θ, such as recurrent (Bahdanau et al., 2015), convolutional (Gehring et al., 2017) or attentional (Vaswani et al., 2017) encoder-decoder architectures (or a mix thereof (Chen et al., 2018)).

Learning from Error Corrections. The standard supervised learning mode in human-in-the-loop machine translation assumes a fully corrected output y* for an input x that is treated similarly to a gold standard reference translation (Turchi et al., 2017). Model adaptation can be performed by maximizing the likelihood of the user-provided corrections,

L(θ) = Σ_{(x, y*)} Σ_{t=1}^{T} log p_θ(y*_t | x, y*_{<t}),    (3)

using stochastic gradient descent techniques (Bottou et al., 2018).
Learning from Error Markings. A weaker feedback mode is to let a human teacher mark the correct parts of the machine-generated output ŷ (Marie and Max, 2015; Petrushkov et al., 2018; Domingo et al., 2017). As a consequence, every token in the output receives a reward δ^m_t: either δ^+_t if marked as correct, or δ^−_t otherwise. Petrushkov et al. (2018) proposed a model with δ^+_t = 1 and δ^−_t = 0, but this weighting scheme means that incorrect outputs are ignored in the gradient and only correct tokens are rewarded. Instead, we find it beneficial to penalize incorrect tokens, with e.g. δ^−_t = −0.5, and to reward correct tokens with δ^+_t = 0.5, which aligns with the findings of Lam et al. (2019). The objective of the learning system is to maximize the likelihood of the correct parts of the output,

L(θ) = Σ_{(x, ŷ)} Σ_{t=1}^{T} δ^m_t log p_θ(ŷ_t | x, ŷ_{<t}).    (4)
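A minimal PyTorch sketch of this weighted objective is shown below; it is not the actual Joey NMT implementation, and the tensor names are illustrative. Setting all weights to 1 recovers the likelihood over full corrections in Eq. 3.

```python
import torch
import torch.nn.functional as F


def marking_weighted_nll(logits, targets, deltas, pad_id=0):
    """Negative of the objective in Eq. 4: per-token NLL weighted by rewards.

    logits : (batch, time, vocab) decoder scores for the annotated hypothesis
    targets: (batch, time) token ids of the hypothesis y-hat
    deltas : (batch, time) e.g. +0.5 for tokens marked correct, -0.5 otherwise;
             all ones recovers learning from full corrections (Eq. 3)
    """
    log_probs = F.log_softmax(logits, dim=-1)
    token_logp = log_probs.gather(-1, targets.unsqueeze(-1)).squeeze(-1)
    mask = (targets != pad_id).float()       # ignore padding positions
    weighted = deltas * token_logp * mask
    return -weighted.sum() / mask.sum()      # minimise the negated objective


# Toy shapes only: 2 hypotheses of length 5 over a 100-token vocabulary.
logits = torch.randn(2, 5, 100, requires_grad=True)
targets = torch.randint(1, 100, (2, 5))
deltas = torch.where(torch.rand(2, 5) > 0.3,
                     torch.tensor(0.5), torch.tensor(-0.5))
loss = marking_weighted_nll(logits, targets, deltas)
loss.backward()
```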
Domain      train                      dev     test
WMT17       5,919,142                  2,169   3,004
IWSLT17     206,112                    2,385   1,138
Selection   1,035 corr / 1,042 mark            1,043

Table 4: Data sizes (en-de), official splits from WMT17 and IWSLT17. Our target-domain data is a subset of selected talks from the IWSLT2017 training data totalling 3,120 sentences.

4.1 NMT Fine-Tuning

NMT Model and Data. The goal is to adapt a general-domain NMT model to a new domain with either post-edits or markings. For the general-domain NMT system, we use the pre-trained 4-layer LSTM encoder-decoder Joey NMT WMT17 model (Kreutzer et al., 2019) for translations from English to German10. The model is trained on a joint vocabulary with 30k subwords (Sennrich et al., 2016). Model outputs are de-tokenized and un-BPEd before being presented to the annotators. With the help of the human annotations we then adapt this model to the domain of TED talk transcripts by continuing learning on the annotated data. Hyperparameters including the learning rate schedule, dropout and batch size for this fine-tuning step are tuned on the IWSLT17 dev set. For the marking mode, the weights δ+ and δ− are tuned in addition. As test data, we use the split of the selected talks that was annotated in the user-choice mode, since the purpose of this split was the evaluation of user preference. There is no overlap in the three data splits, but they have the same distribution over topics, so that we can both measure local adaptation and draw comparisons between modes. Data sizes are given in Table 4.

10Pre-trained model: https://github.com/joeynmt/joeynmt/blob/master/README.md#wmt17; modified fork of Joey NMT: https://github.com/StatNLP/joeynmt/tree/mark

Evaluation. The models are evaluated with TER (Snover et al., 2006), BLEU (Papineni et al., 2002) and METEOR (Lavie and Denkowski, 2009)11 against reference translations. Significance is tested with approximate randomization for three runs for each system (Clark et al., 2011).

11Computed with MultEval v0.5.1 (Clark et al., 2011) on tokenized outputs.

4.2 Results

Corrections, Markings and Quality Judgments. Table 5 compares the models after fine-tuning with corrections and markings with the original WMT out-of-domain model.

System                 TER↓   BLEU↑   METEOR↑
1 WMT baseline         58.6   23.9    42.7
Error Corrections
2 Full                 57.4*  24.6*   44.7*
3 Small                57.9*  24.1    44.2*
Error Markings
4 0/1                  57.5*  24.4*   44.0*
5 -0.5/0.5             57.4*  24.6*   44.2*
6 random               58.1*  24.1    43.5*
Quality Judgments
7 from corrections     57.4*  24.6*   44.7*
8 from markings        57.6*  24.5*   43.8*

Table 5: Results on the test set with feedback collected from humans. Decoding with beam search of width 5 and length penalty of 1. Significant (p ≤ 0.05) improvements over the baseline are marked with *. Full error corrections and error markings only significantly differ in terms of METEOR.

The "small" model trained with error corrections is trained on one fifth of the data, which is comparable to the effort it takes to collect the error markings. Both error corrections and markings can be reduced to sentence-level quality judgments, where all tokens receive the same weight in Eq. 4: δ = #marked / #hyp tokens, or δ = #corrected / #hyp tokens. In addition, we compare the markings against a random choice of marked tokens per sentence.12 We see that both models trained on corrections and markings improve significantly over the baseline (rows 2 and 3). Tuning the weights for (in)correct tokens makes a small but significant difference for learning from markings (rows 4 and 5). These human markings lead to significantly better models than random markings (row 6). When reducing both types of human feedback to sentence-level quality judgments, no loss in comparison to error corrections and a small loss for markings (rows 7 and 8) is observed. We suspect that the small margin between the results for learning from corrections and markings is due to evaluating against references. Effects like over-editing (see Section 3.4.1) produce training data that lead the model to generate outputs that diverge more from independent references and therefore score lower than deserved under all metrics except for METEOR.

12Each token is marked with probability p_mark = 0.5.

Human Evaluation. It is infeasible to collect markings or corrections for all our systems for a more appropriate comparison than to references, but for that purpose we conduct a small human evaluation study. Three bilingual raters receive 120 translations of the test set (≈10%) and the corresponding source sentences for each mode and judge whether the translation is better than, as good as, or worse than the baseline: 64% of the translations obtained from learning from error markings are judged at least as good as the baseline, compared to 65.2% for the translations obtained from learning from error corrections. Table 6 shows the detailed proportions excluding identical translations.

System               > BL     = BL     < BL
Error Markings       43.0%    21.0%    36.4%
Error Corrections    49.1%    16.1%    34.7%

Table 6: Human preferences for comparisons between baseline (BL) translations and the NMT system fine-tuned on error markings and corrections. >: better than the baseline, <: worse than the baseline.

Effort vs. Translation Quality. Figure 3 illustrates the relation between the total time spent on annotations and the resulting translation quality for corrections and markings trained on a selection of subsets of the full annotated data: the overall trend shows that both modes benefit from more training data, with more variance for the marking mode, but also a steeper descent. From a total annotation amount of approximately 20,000 s (≈5.5 h) onwards, markings are the more efficient choice.
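To relate the rows of Table 5 to the objective in Eq. 4, the following sketch constructs the per-token weights for the marking-based variants (the 0/1 and −0.5/0.5 weightings, and the random-marking control of footnote 12); the helper functions are illustrative rather than the exact implementation.

```python
import random


def weights_from_markings(markings, pos=0.5, neg=-0.5):
    """Per-token weights delta_t from binary markings (1 = marked as incorrect)."""
    return [neg if marked else pos for marked in markings]


def random_markings(num_tokens, p_mark=0.5):
    """Row 6 control: each token is marked with probability p_mark."""
    return [int(random.random() < p_mark) for _ in range(num_tokens)]


human = [0, 1, 0, 0, 1]                            # markings for one hypothesis
print(weights_from_markings(human, 1.0, 0.0))      # row 4: 0/1 weighting
print(weights_from_markings(human))                # row 5: -0.5/0.5 weighting
print(weights_from_markings(random_markings(5)))   # row 6: random-marking control
```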
4.2.1 LMEM Analysis

We fit an LMEM for the sentence-level quality scores of the baseline, and three runs each for the NMT systems fine-tuned on markings and post-edits respectively, and inspect the influence of the system as a fixed effect, and sentence id, topic and source length as random effects:

TER ~ system + (1 | talk_id/sent_id) + (1 | topic) + (1 | src_length)

The fixed effect is significant at p = 0.05, i.e., the quality scores of the three systems differ significantly under this model. The global intercept lies at 64.73, the one for marking 1.23 below, and the one for post-editing 0.96 below. The variance in TER is for the largest part explained by the sentence, then the talk, the source length, and the least by the topic.

[Figure 3: Improvement in TER for training data of varying size (TER against annotation duration in seconds): lower is better. Scores are collected across two runs with a random selection of k ∈ {125, 250, 375, 500, 625, 750, 875} training data points.]

5 Conclusion

We presented the first user study on the annotation process and the machine learnability of human error markings of translation outputs. This annotation mode has so far been given less attention than error corrections or quality judgments, and has until now only been investigated in simulation studies. We found that, both according to automatic evaluation metrics and by human evaluation, fine-tuning of NMT models achieved comparable gains by learning from error corrections and markings. However, error markings required several orders of magnitude less human annotation effort.

In future work we will investigate the integration of automatic markings into the learning process, and we will explore online adaptation possibilities.

Acknowledgments

We would like to thank the anonymous reviewers for their feedback, Michael Staniek and Michael Hagmann for the help with data processing and analysis, and Sariya Karimova and Tsz Kin Lam for their contribution to a preliminary study. The research reported in this paper was supported in part by the German research foundation (DFG) under grant RI-2221/4-1.

References

Bahdanau, D., Brakel, P., Xu, K., Goyal, A., Lowe, R., Pineau, J., Courville, A., and Bengio, Y. (2017). An actor-critic algorithm for sequence prediction. In Proceedings of the International Conference on Learning Representations (ICLR), Toulon, France.

Bahdanau, D., Cho, K., and Bengio, Y. (2015). Neural machine translation by jointly learning to align and translate. In Proceedings of the International Conference on Learning Representations (ICLR), San Diego, CA.

Barr, D. J., Levy, R., Scheepers, C., and Tily, H. J. (2013). Random effects structure for confirmatory hypothesis testing: Keep it maximal. Journal of Memory and Language, 68(3):255–278.

Bates, D., Mächler, M., Bolker, B., and Walker, S. (2015). Fitting linear mixed-effects models using lme4. Journal of Statistical Software, 67(1):1–48.

Bentivogli, L., Bisazza, A., Cettolo, M., and Federico, M. (2016). Neural versus phrase-based machine translation quality: a case study. In Proceedings of the 2016 Conference on Empirical Methods in Natural Language Processing (EMNLP), Austin, TX.

Bottou, L., Curtis, F. E., and Nocedal, J.
(2018).\nOptimization methods for large-scale machine\nlearning. SIAM Review , 60(2):223–311.\nCelemin, C., del Solar, J. R., and Kober, J. (2018).\nA fast hybrid reinforcement learning framework\nwith human corrective feedback. Autonomous\nRobots .\nChen, M. X., Firat, O., Bapna, A., Johnson, M.,\nMacherey, W., Foster, G., Jones, L., Schus-\nter, M., Shazeer, N., Parmar, N., Vaswani, A.,\nUszkoreit, J., Kaiser, L., Chen, Z., Wu, Y ., and\nHughes, M. (2018). The best of both worlds:\nCombining recent advances in neural machine\ntranslation. In Proceedings of the 56th Annual\nMeeting of the Association for Computational\nLinguistics (ACL) , Melbourne, Australia.\nChristiano, P. F., Leike, J., Brown, T., Martic, M.,\nLegg, S., and Amodei, D. (2017). Deep re-\ninforcement learning from human preferences.\nInAdvances in Neural Information Processing\nSystems (NIPS) , Long Beach, CA.\nClark, J. H., Dyer, C., Lavie, A., and Smith, N. A.\n(2011). Better hypothesis testing for statisti-\ncal machine translation: Controlling for opti-\nmizer instability. In Proceedings of the 49th An-\nnual Meeting of the Association for Computa-\ntional Linguistics: Human Language Technolo-\ngies (ACL-HLT) , Portland, OR.\nDomingo, M., Peris, ´A., and Casacuberta, F.\n(2017). Segment-based interactive-predictive\nmachine translation. Machine Translation ,\n31(4):163–185.\nFoster, G., Isabelle, P., and Plamondon, P. (1997).\nTarget-text mediated interactive machine trans-\nlation. Machine Translation , 12(1-2):175–194.\nGehring, J., Auli, M., Grangier, D., Yarats, D., and\nDauphin, Y . (2017). Convolutional sequence to\nsequence learning. In Proceedings of the 55th\nAnnual Meeting of the Association for Compu-\ntational Linguistics (ACL) , Vancouver, Canada.\nGreen, S., Wang, S. I., Chuang, J., Heer, J., Schus-\nter, S., and Manning, C. D. (2014). Human ef-\nfort and machine learnability in computer aided\ntranslation. In Proceedings the onference on\nEmpirical Methods in Natural Language Pro-\ncessing (EMNLP) , Doha, Qatar.\nKarimova, S., Simianer, P., and Riezler, S. (2018).\nA user-study on online adaptation of neural ma-\nchine translation to human post-edits. Machine\nTranslation , 32(4):309–324.\nKnowles, R. and Koehn, P. (2016). Neural inter-\nactive translation prediction. In Proceedings of\nthe Conference of the Association for Machine\nTranslation in the Americas (AMTA) , Austin,\nTX.\nKoponen, M. (2016). Machine Translation Post-\nEditing and Effort. Empirical Studies on the\nPost-Editing Process . PhD thesis, University of\nHelsinki.\nKreutzer, J., Bastings, J., and Riezler, S. (2019).\nJoey NMT: A minimalist NMT toolkit for\nnovices. In Proceedings of the 2019 Confer-\nence on Empirical Methods in Natural Lan-\nguage Processing and the 9th International\nJoint Conference on Natural Language Process-\ning (EMNLP-IJCNLP) , Hong Kong, China.\nKreutzer, J., Sokolov, A., and Riezler, S.\n(2017). Bandit structured prediction for neural\nsequence-to-sequence learning. In Proceedings\nof the 55th Annual Meeting of the Association\nfor Computational Linguistics (ACL) , Vancou-\nver, Canada.Kreutzer, J., Uyheng, J., and Riezler, S. (2018).\nReliability and learnability of human bandit\nfeedback for sequence-to-sequence reinforce-\nment learning. In Proceedings of the 56th An-\nnual Meeting of the Association for Computa-\ntional Linguistics (ACL) , Melbourne, Australia.\nKrippendorff, K. (2013). Content Analysis. An In-\ntroduction to Its Methodology . Sage, third edi-\ntion.\nLam, T. 
K., Kreutzer, J., and Riezler, S. (2018). A\nreinforcement learning approach to interactive-\npredictive neural machine translation. In Pro-\nceedings of the 21st Annual Conference of the\nEuropean Association for Machine Translation\n(EAMT) , Alicante, Spain.\nLam, T. K., Schamoni, S., and Riezler, S. (2019).\nInteractive-predictive neural machine transla-\ntion through reinforcement and imitation. In\nProceedings of Machine Translation Summit\nXVII (MTSUMMIT) , Dublin, Ireland.\nLavie, A. and Denkowski, M. J. (2009). The me-\nteor metric for automatic evaluation of machine\ntranslation. Machine Translation , 23(2-3):105–\n115.\nMarie, B. and Max, A. (2015). Touch-based\npre-post-editing of machine translation output.\nInProceedings of the Conference on Empiri-\ncal Methods in Natural Language Processing\n(EMNLP) , Lisbon, Portugal.\nPapineni, K., Roukos, S., Ward, T., and Zhu, W.-J.\n(2002). Bleu: a method for automatic evalua-\ntion of machine translation. In Proceedings of\nthe 40th Annual Meeting of the Association for\nComputational Linguistics (ACL) , Philadelphia,\nPA.\nPeris, ´A., Domingo, M., and Casacuberta, F.\n(2017). Interactive neural machine translation.\nComputer Speech & Language , 45:201–220.\nPetrushkov, P., Khadivi, S., and Matusov, E.\n(2018). Learning from chunk-based feedback\nin neural machine translation. In Proceedings of\nthe 56th Annual Meeting of the Association for\nComputational Linguistics (ACL) , Melbourne,\nAustralia.\nRanzato, M., Chopra, S., Auli, M., and Zaremba,\nW. (2016). Sequence level training with recur-\nrent neural networks. In Proceedings of the In-\nternational Conference on Learning Represen-\ntation (ICLR) , San Juan, Puerto Rico.\nSennrich, R., Haddow, B., and Birch, A. (2016).\nNeural machine translation of rare words with\nsubword units. In Proceedings of the 54th An-\nnual Meeting of the Association for Computa-\ntional Linguistics (ACL) , Berlin, Germany.\nSnover, M., Dorr, B., Schwartz, R., Micciulla, L.,\nand Makhoul, J. (2006). A study of transla-\ntion edit rate with targeted human annotation.\nInProceedings of the Conference of the Asso-\nciation for Machine Translation in the Americas\n(AMTA) , Cambridge, MA.\nSokolov, A., Kreutzer, J., Sunderland, K.,\nDanchenko, P., Szymaniak, W., F ¨urstenau, H.,\nand Riezler, S. (2017). A shared task on bandit\nlearning for machine translation. In Proceedings\nof the Second Conference on Machine Transla-\ntion, Copenhagen, Denmark.\nTurchi, M., Negri, M., Farajian, M. A., and Fed-\nerico, M. (2017). Continuous learning from hu-\nman post-edits for neural machine translation.\nThe Prague Bulletin of Mathematical Linguis-\ntics (PBML) , 1(108):233–244.\nVaswani, A., Shazeer, N., Parmar, N., Uszkoreit,\nJ., Jones, L., Gomez, A. N., Kaiser, L., and\nPolosukhin, I. (2017). 
Attention is all you need. In Advances in Neural Information Processing Systems (NIPS), Long Beach, CA.

Appendix

A Annotator Instructions

The annotators received the following instructions:

• You will be shown a source sentence, its translation and an instruction.
• Read the source sentence and the translation.
• Follow the instruction by either (i) marking the incorrect words of the translation by clicking on them or highlighting them, (ii) correcting the translation by deleting, inserting and replacing words or parts of words, or choosing between modes (i) and (ii), and then click "submit".
  – In (ii), if you make a mistake and want to start over, you can click on the button "reset".
  – In (i), to highlight, click on the word you would like to start highlighting from, keep the mouse button pushed down, drag the pointer to the word you would like to stop highlighting on, and release the mouse button while over that word.
• If you want to take a short break (get a coffee, etc.), click on "pause" to pause the session. We're measuring the time it takes to work on each sentence, so please do not overuse this button (e.g. do not press pause while you're making your decisions), but also do not feel rushed if you feel uncertain about a sentence.
• Instead, if you want to take a longer break, just log out. The website will return you to the latest unannotated sentence when you log back in. If you log out in the middle of an annotation, your markings or post-edits will not be saved.
• After completing all sentences (ca. 300), you'll be asked to fill in a survey about your experience.
• Important:
  – Please do not use any external dictionaries or translation tools.
  – You might notice that some sentences re-appear, which is desired. Please try to be consistent with repeated sentences.
  – There is no way to return and re-edit previous sentences, so please make sure you're confident with the edits/markings you provided before you click "submit".

B Creating Data Splits

In order to have users see a wider range of talks, each talk was split into three parts (beginning, middle, and end). Each talk part was assigned an annotation mode. Parts were then assigned to users using the following constraints:

• Each user should see nine document parts.
• No user should see the same document twice.
• Each user should see three sections in post-editing, marking, and user-choice mode.
• Each user should see three beginning, three middle, and three ending sections.
• Each document should be assigned each of the three annotation modes.

To avoid assigning post-editing to every beginning section, marking to every middle section, and user-choice to every ending section, assignment was done with an integer linear program with the above constraints. Data was presented to users in the order [Post-edit, Marking, User Chosen, Agreement].",
"main_paper_content": null
}
|
{
"decision": "Unknown",
"reviews": []
}
| 0 | 0 |
[] |
[] |
{
"id": "AVUtMoogE61",
"year": null,
"venue": "EAMT 2014",
"pdf_link": "https://aclanthology.org/2014.eamt-1.21.pdf",
"forum_link": "https://openreview.net/forum?id=AVUtMoogE61",
"arxiv_id": null,
"doi": null
}
|
{
"title": "Document-level translation quality estimation: exploring discourse and pseudo-references",
"authors": [
"Carolina Scarton",
"Lucia Specia"
],
"abstract": "Carolina Scarton, Lucia Specia. Proceedings of the 17th Annual conference of the European Association for Machine Translation. 2014.",
"keywords": [],
"raw_extracted_content": "Document-level translation quality estimation: exploring\ndiscourse and pseudo-references\nCarolina Scarton and Lucia Specia\nDepartment of Computer Science\nUniversity of Sheffield\nS1 4DP, UK\nfc.scarton,l.specia [email protected]\nAbstract\nPredicting the quality of machine trans-\nlations is a challenging topic. Quality\nestimation (QE) of translations is based\non features of the source and target texts\n(without the need for human references),\nand on supervised machine learning meth-\nods to build prediction models. Engineer-\ning well-performing features is therefore\ncrucial in QE modelling. Several fea-\ntures have been used so far, but they tend\nto explore very short contexts within sen-\ntence boundaries. In addition, most work\nhas targeted sentence-level quality predic-\ntion. In this paper, we focus on document-\nlevel QE using novel discursive features, as\nwell as exploiting pseudo-reference trans-\nlations. Experiments with features ex-\ntracted from pseudo-references led to the\nbest results, but the discursive features also\nproved promising.\n1 Introduction\nThe purpose of machine translation (MT) qual-\nity estimation (QE) is to provide a quality pre-\ndiction for new, unseen machine translated texts,\nwithout relying on reference translations (Blatz et\nal., 2004; Specia et al., 2009; Bojar et al., 2013).\nThis task is usually addressed with machine learn-\ning models trained on datasets composed of source\ntexts, their machine translations, and a quality la-\nbel assigned by humans or by an automatic metric\n(e.g.: BLEU (Papineni et al., 2002)). A common\nuse of quality predictions is the estimation of post-\nc\r2014 The authors. This article is licensed under a Creative\nCommons 3.0 licence, no derivative works, attribution, CC-\nBY-ND.editing effort in order to decide whether to trans-\nlate a text from scratch or post-edit its machine\ntranslation. Another use is the ranking of transla-\ntions in order to select the best text from multiple\nMT systems.\nFeature engineering is an important compo-\nnent in QE. Although several feature sets have al-\nready been explored, most approaches focus on\nsentence-level quality prediction, with sentence-\nlevel features. These disregard document struc-\nture or wider contexts beyond sentence bound-\naries. To the best of our knowledge, only Rubino\net al. (2013) considered discourse-related informa-\ntion by studying topic model features for sentence-\nlevel prediction. Soricut and Echihabi (2010) ex-\nplored document-level quality prediction, but they\ndid not use explicit discourse information, e.g. in-\nformation to capture text cohesion or coherence.\nIn this paper we focus on document-level fea-\ntures and document-level prediction . We be-\nlieve that judgements on translation quality de-\npend on units longer than just a given sentence,\ntaking into account discourse phenomena for lex-\nical choice, consistency, style and connectives,\namong others (Carpuat and Simard, 2012). This is\nparticularly important in MT evaluation contexts,\nsince most MT systems, and in particular statisti-\ncal MT (SMT) systems, process sentences one by\none, in isolation. Our hypothesis is that features\nthat capture discourse phenomena can improve\ndocument-level prediction. We consider two fami-\nlies of features that have been successfully applied\nin reference-based MT evaluation (Wong and Kit,\n2012) and readability assessment (Graesser et al.,\n2004). 
In terms of applications, document-level\nQE is very important in scenarios where the en-\ntire text needs to be used/published without post-\nedition.\n101\nSoricut and Echihabi (2010) and Soricut\nand Narsale (2012) explored a feature based\nonpseudo-references for document-level QE.\nPseudo-references are translations produced by\none or more external MT systems, which are dif-\nferent from the one producing the translations we\nwant to predict the quality for. These are used as\nreferences against which the output of the MT sys-\ntem of interest can be compared using standard\nmetrics such as BLEU. Soricut et al. (2012) and\nShah et al. (2013) explored pseudo-references for\nsentence-level QE. In both cases, features based on\npseudo-references led to significant improvements\nin prediction accuracy. Here we also use pseudo-\nreferences for document-level QE, with a number\nof string similarity metrics to produce document-\nlevel scores as features, which are arguably more\nreliable than sentence-level scores, particularly for\nmetrics like BLEU.\nIn the remainder of this paper, Section 2 presents\nrelated work. Section 3 introduces the document-\nlevel QE features we propose. Section 4 describes\nthe experimental setup of this work. Section 5\npresents the results.\n2 Related work\nWork related to this research includes document-\nlevel MT evaluation metrics, QE features, and\nQE prediction, as well as work focusing on\nother linguistic features, and work using pseudo-\nreferences.\nWong and Kit (2012) use lexical cohesion met-\nrics for MT evaluation at document-level. Lexi-\ncal cohesion relates to word choices, captured in\ntheir work by reiteration and collocation. Words\nand stems were used for reiteration, and synonyms,\nnear-synonyms and superordinates, for colloca-\ntions. These metrics are integrated with traditional\nmetrics like BLEU, TER (Snover et al., 2006)\nand METEOR (Banerjee and Lavie, 2005). The\nhighest correlation against human assessments was\nfound for the combination of METEOR and the\ndiscursive features.\nRubino et al. (2013) explore topic model fea-\ntures for QE at sentence-level. Latent Dirichlet Al-\nlocation is used to model the topics in two ways: a\nbilingual view, where the bilingual corpus is con-\ncatenated at sentence-level to build a single model\nwith two languages; and a polylingual view, where\none topic model is built for each language. While\nthe topics models are generated with informationfrom the entire corpus, the features are extracted\nat sentence-level. These are computed for both\nsource and target languages using vector distance\nmetrics between the words in these sentences and\nthe topic distributions. Topic model features has\nbeen achieved promising results.\nSoricut and Echihabi (2010) explore document-\nlevel QE prediction to rank documents translated\nby a given MT system. Features included BLEU\nscores based on pseudo-references from an off-the-\nshelf MT system, for both the target and the source\nlanguages. The use of pseudo-references has been\nshown to improve state-of-the-art results. Sori-\ncut and Narsale (2012) also consider document-\nlevel prediction for ranking, proposing the aggre-\ngation of sentence-level features for document-\nlevel prediction. The authors claim that a pseudo-\nreferences-based feature (based in BLEU) is one\nof the most powerful in the framework. For QE\nat sentence-level, Soricut et al. 
(2012) use BLEU\nbased on pseudo-references combined with other\nfeatures to build the best QE system of the WMT12\nQE shared task.1Shah et al. (2013) conduct a fea-\nture analysis, at sentence-level, on a number of\ndatasets and show that the BLEU-based pseudo-\nreference feature contributes the most to prediction\nperformance.\nIn terms of other types of linguistic features for\nQE, Xiong et al. (2010) and Bach et al. (2011)\npropose features for word-level QE and show that\nthese improve over the state-of-the-art results. At\nsentence-level, Avramidis et al. (2011), Hardmeier\n(2011) and Almaghout and Specia (2013) con-\nsider syntactic features, achieving better results\ncompared to competitive feature sets. Pighin and\nM`arquez (2011) obtain improvements over strong\nbaselines from exploiting semantic role labelling\nto score MT outputs at sentence-level. Felice and\nSpecia (2012) introduce several linguistic features\nfor QE at sentence-level. These did not show\nimprovement over shallower features, but feature\nselection analysis showed that linguistic features\nwere among the best performing ones.\n3 Features for document-level QE\nQE is traditionally done at sentence-level. This\nhappens mainly because the majority of MT sys-\ntems translate texts at this level. Evaluating sen-\ntences instead of documents can be useful for\nmany scenarios, e.g., post-editing effort prediction.\n1http://www.statmt.org/wmt12/\n102\nHowever, some linguistic phenomena can only be\ncaptured by considering the document as a whole.\nMoreover, for scenarios in which post-edition is\nnot possible, e.g., gisting, quality predictions for\nthe entire documents are more useful.\nSeveral features have been proposed for QE at\nsentence-level. Many of them can be directly\nused at document-level (e.g., number of words in\nsource/target sentences). However, other features\nthat better explore the document as a whole or\ndiscourse-related phenomena can bring additional\ninformation. In this paper, discourse information\nis explored in two ways: lexical cohesion (Sec-\ntion 3.1) and LSA cohesion (Section 3.2). The in-\ntuition behind using cohesion features for QE is\nthe following: on the source side, documents that\nhave low cohesion are likely to result in bad qual-\nity translations. On the target side, documents with\nlow cohesion are likely to have low overall quality.\nFrom the feature set proposed in (Soricut and\nEchihabi, 2010) for document-level ranking of MT\nsystem outputs, text-based and language model-\nbased features are also covered by the baseline fea-\ntures used in this paper. Pseudo-reference-based\nfeatures are also addressed herein (Section 3.3).\nThe example-based features cannot be easily re-\nproduced since we do not have access to additional\ndocuments to use as development set (our paral-\nlel corpora are already small). The training data-\nbased features were not considered because we use\nMT systems that do not have or make their training\nsets available.\n3.1 Lexical cohesion features\nOur first set of features is based on lexical cohe-\nsion metrics (hereafter, LC). Lexical cohesion is\nrelated to word choices in a text (Wong and Kit,\n2012). Words can be repeated to make the relation\namong sentences more explicit to the reader. An-\nother phenomenon of lexical cohesion is the use\nof synonyms, hypernyms, antonyms, etc. In this\npaper, we only consider word repetitions as fea-\ntures. 
These are features that can be easily extracted for languages other than English, for which a thesaurus with synonyms, hypernyms, etc., may not be available. Our LC features are as follows:

Average word repetition: for each content word, we count its frequency in all sentences of the document. Then, we sum the repetition counts and divide the sum by the total number of content words in the document. This is computed for the source and target documents, resulting in two features.

Average lemma repetition: the same as above, but the words are first lemmatised.

Average noun repetition: the same as above, but only nouns are considered as words.

3.2 LSA cohesion features

General textual quality is often connected to the notion of readability of a text. Readability can be measured in many ways, focusing on different aspects such as coherence, cohesion, how accessible a text is to a certain audience, etc. The Coh-Metrix project2 (Graesser et al., 2004) has proposed a number of text readability metrics. Latent Semantic Analysis (LSA) (Landauer et al., 1998) is used in order to extract cohesion-related features. This is a statistical method based on Singular Value Decomposition (SVD) and is often aimed at dimensionality reduction. In SVD, a given matrix X can be decomposed into the product of three other matrices:

X = W S P^T,

where W describes the original row entities as vectors of derived orthogonal factor values, S is a diagonal matrix containing scaling values, and P (P^T is the transpose of P) is the same as W but for columns. When these three matrices are multiplied, the exact X matrix is recovered. The dimensionality reduction consists in reconstructing the X matrix using only the highest values of the diagonal matrix S. For example, a dimensionality reduction of order two will consider only the two highest values of S.

The X matrix (rows × columns) can be built from words by sentences, words by documents, sentences by documents, etc. In the case of words by sentences (which we use in our experiments), each cell contains the frequency of a given word in a given sentence. LSA was originally designed to be used with large corpora of multiple documents. In our case, since we are interested in measuring cohesion within documents, we compute LSA for each individual document through a matrix of words by sentences within the document.

LSA was computed using a package for Python,3 which takes word stems and sentences to build the matrix. Usually, before applying SVD in LSA, the X matrix is transformed so that each cell encapsulates information about a word's importance in a sentence or a word's importance in the text in general. Landauer et al. (1998) suggest the use of a TF-IDF transformation for that. However, we disregarded the use of TF-IDF, as this transformation would smooth out the values of high-frequency words across sentences. In our case, the salience of words in sentences is important.

2http://cohmetrix.com/
3https://github.com/josephwilk/semanticpy

Our LSA features follow from Graesser et al. (2004)'s work on readability assessment:

LSA adjacent sentences: for each sentence in a document, we compute the Spearman rank correlation coefficient of its word vector with the word vectors of its immediate neighbours (sentences which appear immediately before and after the given sentence). For sentences with two neighbours (most cases), we average the correlation values.
After that, we average\nthe values for all sentences in order to have a\nsingle figure for the entire document.\nLSA all sentences: for each sentence in a docu-\nment, we calculate the Spearman rank corre-\nlation coefficient of the word vectors between\nthis sentence and all the others. Again we av-\nerage the values for all sentences in the docu-\nment.\nHigher correlation scores are expected to corre-\nspond to higher text cohesion, since the correlation\namong the sentences in a document is related to\nhow close the words in the document are (Graesser\net al., 2004). Different from lexical cohesion fea-\ntures, LSA features are able to find correlations\namong different words, which are not repetitions\nand may not be synonyms, but are instead related\n(as given by co-occurrence patterns).\n3.3 Pseudo-references\nPseudo-references are translations produced by\nother MT systems than the system we want to pre-\ndict the quality for. They are used as references\nto evaluate the output of the MT system of inter-\nest. They have also been used for other purposes,\ne.g., to fulfil the lack of human references avail-\nable in reference-based MT evaluation (Albrecht\nand Hwa, 2008) and automatic summary evalua-\ntion (Louis and Nenkova, 2013). The application\nwe are interested in, originally proposed in (Sori-\ncut and Echihabi, 2010), is to generate features forQE. In this scenario, reference-based evaluation\nmetrics (such as BLEU) are computed between the\nMT system output and the pseudo-references and\nused to train quality prediction models.\nSoricut and Echihabi (2010) discussed the im-\nportance of the pseudo-references being generated\nby MT system(s) which are as different as possi-\nble from the MT system of interest, and prefer-\nably of much better quality. This should ensure\nthat string similarity features (like BLEU) indicate\nmore than simple consensus between similar MT\nsystems, which would produce the same (possibly\nbad quality) translations, e.g., Google Translate4.\n4 Experimental settings\nAlthough QE is traditionally trained on datasets\nwith human labels for quality (such as HTER\n– Human Translation Error Rate (Snover et al.,\n2006)), no large enough dataset with human-based\nquality labels assigned at document-level is avail-\nable. Therefore, we resort to predicting automatic\nmetrics as quality labels, as in (Soricut and Echi-\nhabi, 2010). This requires references (human)\ntranslations at training time, when the automatic\nmetrics are computed, but not at test time, when\nthe automatic metrics are predicted.\nCorpora Two parallel corpora with reference\ntranslations are used in our experiments: FAPESP\nand WMT13. FAPESP contains 2;823English-\nBrazilian Portuguese (EN-BP) documents ex-\ntracted from a scientific Brazilian news journal\n(FAPESP)5(Aziz and Specia, 2011). Each ar-\nticle covers one particular scientific news topic.\nThe corpus was randomly divided into 60% ( 1;694\ndocuments) for training a baseline MOSES6statis-\ntical MT system (Koehn et al., 2007) (with 20doc-\numents as development set); and 40% ( 1;128doc-\numents) for testing the SMT system, which gener-\nated translations for QE training (60%: 677doc-\numents) and test (40%: 451documents). 
In addi-\ntion, two external MT systems were used to trans-\nlate the test set: SYSTRAN7– a rule-based system\n– and Google Translate ( GOOGLE ), a statistical\nsystem.\nWMT13 contains English-Spanish ( EN-ES )\nand Spanish-English ( ES-EN ) translations from\n4http://translate.google.com.br/\n5http://revistapesquisa.fapesp.br\n6http://www.statmt.org/moses/?n=moses.\nbaseline\n7http://www.systransoft.com/\n104\nthe test set of the translation shared task of\nWMT13.8In total, 52source documents were\navailable for each language pair. In order to build\nthe QE systems, the outputs of all MT systems sub-\nmitted to the shared task were taken: 18 systems\nfor EN-ES ( 528 documents for QE training, and\n356for QE test), and 17 systems for ES-EN ( 500\ndocuments for QE training, and 332 documents\nfor QE test). In both cases, the translations from\none MT system are used as pseudo-references for\ntranslations from the other systems.\nQuality labels The automatic metrics selected\nfor quality labelling and prediction are BLEU and\nTER.9BLEU (BiLingual Evaluation Understudy)\nis a precision-oriented metric that compares n-\ngrams (n=1-4 in our case) from reference docu-\nments against n-grams of the MT output, mea-\nsuring how close the output of the system is to\none or more references. TER (Translation Er-\nror Rate) (Snover et al., 2006) measures the min-\nimum number of edits required to transform the\nMT output in the reference document. The Asiya\nToolkit10(Gim ´enez and M `arquez, 2010) was used\nto calculate both metrics.\nBaselines As baseline, we use 17 competitive\nfeatures from the QuEst toolkit (Specia et al.,\n2013) (the so-called baseline features orBL.11)\nSince the baseline features are sentence-level, we\naggregated them by computing the average for\neach feature across all sentences in a document.\nAs a second baseline ( Mean ), we calculate the av-\nerage BLEU or TER scores in the QE training set,\nand apply this value to all entries (documents) in\nthe test set.\nPseudo-reference features BLEU and TER\nscores are computed between the output of the\nMT system of interest and alternative MT sys-\ntems, at document-level, and used as features\nin QE models. For the FAPESP corpus, trans-\nlations from Google Translate were selected as\npseudo-references, since this system has shown\nthe best average BLEU score in the QE train-\ning set. For the WMT13 corpus, translations\nfrom uedin-wmt13-en-es , for EN-ES, and uedin-\nheafield-unconstrained for ES-EN, were used as\n8http://www.statmt.org/wmt13/\n9METEOR was also used but the results were inconclusive\n10http://asiya.lsi.upc.edu/\n11http://www.quest.dcs.shef.ac.uk/quest_\nfiles/features_blackbox_baseline_17pseudo-references, since these systems achieved\nthe best BLEU scores in the WMT13 translation\nshared task. Regarding the difference between the\nsystems, for the FAPESP corpus, this difference\nis guaranteed since GOOGLE is considerably dif-\nferent from SYSTRAN, and is trained on a dif-\nferent (much larger) corpus than MOSES. For the\nWMT13 corpus, it is not possible to make this as-\nsumption, as many of the systems participating in\nthe shared task are close variations of Moses.\nFeature sets As feature sets, we combine LC\nand LSA features with BL ( BL+LC ,BL+LSA and\nBL+LC+LSA ) to create the models with discur-\nsive information. 
The pseudo-reference features are combined with the baseline (BL+Pseudo) and with all other features (BL+LC+LSA+Pseudo).

Machine learning algorithm We use the Support Vector Machines (SVM) regression algorithm with a radial basis function kernel and hyperparameters optimised via grid search to train the QE models with all feature sets. The scikit-learn module available in QuEst was used for that.

Evaluation metrics The QE models with different feature sets are evaluated using MAE (Mean Absolute Error):

MAE = ( Σ_{i=1}^{N} | H(s_i) − V(s_i) | ) / N

where H(s_i) is the predicted score, V(s_i) is the true score and N is the number of data points in the test set. To verify the significance of the results, a two-tailed pairwise t-test (p < 0.05) was performed for the different prediction outputs.

Method Two sets of experiments were conducted. First (Section 5.1), we consider the outputs of the FAPESP corpus of MOSES, SYSTRAN and GOOGLE separately, using as training and test sets the outputs of each system individually, with GOOGLE translations used as pseudo-references for the other two systems. The second set of experiments (Section 5.2) considers, for the FAPESP corpus, the combination of the outputs of MOSES and SYSTRAN (MOS+SYS), again with GOOGLE translations used as pseudo-references. For the WMT2013 corpora, we mixed translations from all except the best system, which were used as pseudo-references.
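As a rough illustration of this learning setup, the sketch below trains an RBF-kernel SVR with a small grid search and reports MAE on a held-out set using scikit-learn; the feature matrices, label ranges and grid values are placeholders rather than the exact QuEst configuration.

```python
import numpy as np
from sklearn.metrics import mean_absolute_error
from sklearn.model_selection import GridSearchCV
from sklearn.svm import SVR

rng = np.random.default_rng(0)

# Placeholder document-level features (e.g., 17 baseline features per document)
# and BLEU labels; in the paper these come from QuEst and the Asiya toolkit.
X_train, y_train = rng.normal(size=(677, 17)), rng.uniform(0.0, 0.6, size=677)
X_test, y_test = rng.normal(size=(451, 17)), rng.uniform(0.0, 0.6, size=451)

# RBF-kernel SVR with a small (illustrative) hyper-parameter grid.
grid = {"C": [1, 10, 100], "gamma": [0.01, 0.1, 1.0], "epsilon": [0.01, 0.1]}
search = GridSearchCV(SVR(kernel="rbf"), grid,
                      scoring="neg_mean_absolute_error", cv=3)
search.fit(X_train, y_train)

predictions = search.predict(X_test)
print("MAE:", mean_absolute_error(y_test, predictions))
```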
5 Experiments and results

5.1 MT system-specific models

The results for the prediction of BLEU and TER for the MOSES, SYSTRAN and GOOGLE systems in the FAPESP corpus are shown in Table 1. The variation in terms of TER is larger, making improvements over the Mean baseline possible with all feature sets.

Given the low MAE scores obtained by the Mean baseline, as well as with the simple BL features, one could say that in general the task of predicting BLEU and TER is close to trivial, at least in the FAPESP corpus. This is again due to the low variation in the quality of the texts translated by each system. This is to be expected, given the very nature of document-level prediction: major variations in the quality of specific translated segments get smoothed out throughout the document. In addition, the FAPESP corpus consists of texts from the same style and domain. On the other hand, the average quality (as measured by the BLEU and TER metrics) of the different MT systems on the same corpus is very different, as shown in the penultimate line of Table 1. This motivates the experiment described next.

12Pseudo-reference features were not used for GOOGLE, since its output was used as pseudo-reference for the other systems.

5.2 MT system-independent models

To analyse document-level QE in a more challenging scenario, we experiment with mixing different MT system outputs, for both the FAPESP and WMT2013 corpora. Results are shown in Table 2. The ranges of BLEU/TER scores are now wider, and the overall error scores (including for the Mean baseline) are higher in these settings, showing that this is indeed a harder task. Again, the best results are obtained with the use of pseudo-reference features. However, in this case statistically significant differences against other results were only observed for MOS+SYS BLEU prediction and ES-EN TER prediction. For EN-ES BLEU prediction, the best result (0.043 for BL+Pseudo) showed no significant difference against BL+LC+LSA+Pseudo (0.045). For ES-EN BLEU prediction, there is no significant difference among the results of BL+LSA, BL+LC+LSA and BL+Pseudo. For MOS+SYS TER prediction, BL+Pseudo and BL+LC+LSA+Pseudo showed no significant difference. EN-ES TER prediction was the only case where the BL results showed no significant difference against the pseudo-reference features. It is worth mentioning that, as in the previous experiments, if we disregard the pseudo-reference features – which may not be available in many real-world scenarios – the LSA feature sets show the best results.

6 Conclusions

In this paper we focused on document-level machine translation quality estimation. We presented an attempt to address the problem by considering discourse information in translation quality estimation in terms of novel features, relying on lexical cohesion aspects. LSA cohesion features showed very promising results.

                      BLEU                                TER
                      MOSES    SYSTRAN  GOOGLE            MOSES    SYSTRAN  GOOGLE
Mean                  0.059    0.047    0.066             0.063    0.062    0.068
BL                    0.046    0.047    0.056             0.054    0.059    0.061
BL+LC                 0.044    0.043    0.055             0.053    0.059    0.055
BL+LSA                0.044    0.044    0.058             0.055    0.059    0.060
BL+LC+LSA             0.044    0.043    0.057             0.053    0.058    0.061
BL+Pseudo             0.042*   0.038    -                 0.052*   0.051    -
BL+LC+LSA+Pseudo      0.042*   0.036    -                 0.052*   0.051    -
Test-set average      0.365    0.275    0.456             0.427    0.506    0.372
Test-set range        [0.004,0.558] [0,0.406] [0.004,0.79]  [0.245,1.056] [0,1.071] [0.12,1.084]

Table 1: MAE scores for document-level prediction of BLEU and TER for the FAPESP corpus.
Bold-faced figures indicate the smallest MAE for a given test set; * indicates a statistically significant difference against all other results; underlined values indicate no significant difference against the best system.

                      BLEU                                TER
                      FAPESP   WMT2013                    FAPESP   WMT2013
                      MOS+SYS  EN-ES    ES-EN             MOS+SYS  EN-ES    ES-EN
Mean                  0.064    0.061    0.076             0.07     0.066    0.089
BL                    0.045    0.056    0.065             0.063    0.059    0.069
BL+LC                 0.044    0.058    0.065             0.063    0.066    0.07
BL+LSA                0.044    0.052    0.051             0.062    0.057    0.051
BL+LC+LSA             0.044    0.053    0.052             0.064    0.054    0.062
BL+Pseudo             0.043    0.043    0.038             0.053    0.034    0.038*
BL+LC+LSA+Pseudo      0.038*   0.045    0.043             0.054    0.034    0.04
Test-set average      0.32     0.266    0.261             0.466    0.524    0.55
Test-set range        [0,0.558] [0.107,0.488] [0.072,0.635]  [0,1.07] [0.317,0.72] [0.216,0.907]

Table 2: MAE scores for document-level prediction of BLEU and TER for the FAPESP corpus (mixing MOSES and SYSTRAN) and for the WMT2013 EN-ES and ES-EN corpora (mixing all but the best system).

Features based on pseudo-references were also explored. Confirming the findings in (Soricut and Echihabi, 2010; Shah et al., 2013), these features were found responsible for the most significant improvements over strong baselines. However, in most settings, our proposed LSA cohesion features performed as well as pseudo-reference features.

Predicting automatic metrics at document-level proved a less challenging task than we expected. This was mostly due to the low variance in the quality of translations for the various documents in the corpus by a given MT system. This was confirmed by the low prediction error obtained by a simple baseline that assigns the mean quality score (BLEU or TER) of the training set to all instances of the test set. Outperforming this mean baseline proved particularly difficult for some MT systems when predicting BLEU. Putting MT systems of various quality levels together made the task more complex. As a consequence, our QE models yielded more significant improvements over the baseline.

In future work, we plan to model this problem as predicting post-editing effort scores, as it has been done in the state-of-the-art work for QE at sentence-level. This will require larger datasets with post-edited machine translations and document-level markup.

Acknowledgements: This work was supported by the EXPERT (EU Marie Curie ITN No. 317471) project.

References

Albrecht, Joshua S. and Rebecca Hwa. 2008. The Role of Pseudo References in MT Evaluation. In Proceedings of WMT 2008, pages 187–190, Columbus, OH.

Almaghout, Hala and Lucia Specia. 2013. A CCG-based Quality Estimation Metric for Statistical Machine Translation. In Proceedings of the XIV MT Summit, pages 223–230, Nice, France.

Avramidis, Eleftherios, Maja Popovic, David Vilar Torres, and Aljoscha Burchardt. 2011. Evaluate with confidence estimation: Machine ranking of translation outputs using grammatical features. In Proceedings of WMT 2011, pages 65–70, Edinburgh, UK.

Aziz, Wilker and Lucia Specia. 2011. Fully Automatic Compilation of a Portuguese-English Parallel Corpus for Statistical Machine Translation. In Proceedings of STIL 2011, Cuiabá, MT, Brazil.
In Pro-\nceedings of the ACL 2005 Workshop on Intrinsic and\nExtrinsic Evaluation Measures for MT and/or Sum-\nmarization , pages 65–72, Ann Arbor, MI.\nBlatz, John, Erin Fitzgerald, George Foster, Simona\nGandrabur, Cyril Goutte, Alex Kulesza, Alberto San-\nchis, and Nicola Ueffing. 2004. Confidence Es-\ntimation for Machine Translation. In Proceedings\nof COLING 2004 , pages 315–321, Geneva, Switzer-\nland.\nBojar, Ond ˇrej, Christian Buck, Chris Callison-\nBurch, Christian Federmann, Barry Haddow, Philipp\nKoehn, Christof Monz, Matt Post, Radu Soricut, and\nLucia Specia. 2013. Findings of the 2013 Workshop\non Statistical Machine Translation. In Proceedings\nof WMT 2013 , pages 1–44, Sofia, Bulgaria.\nCarpuat, Marine and Michel Simard. 2012. The trou-\nble with SMT consistency. In Proceedings of WMT\n2012 , pages 442–449, Montr ´eal, Canada.\nFelice, Mariano and Lucia Specia. 2012. Linguistic\nfeatures for quality estimation. In Proceedings of\nWMT 2012 , pages 96–103, Montr ´eal, Canada.\nGim´enez, Jes ´us and Llu ´ıs M `arquez. 2010. Asiya:\nAn Open Toolkit for Automatic Machine Translation\n(Meta-)Evaluation. The Prague Bulletin of Mathe-\nmatical Linguistics , 94:77–86.\nGraesser, Arthur C., Danielle S. McNamara, Max M.\nLouwerse, and Zhiqiang Cai. 2004. Coh-Metrix:\nAnalysis of text on cohesion and language. Behav-\nior Research Methods, Instruments, and Computers ,\n36(2):193–202.\nHardmeier, Christian. 2011. Improving machine trans-\nlation quality prediction with syntatic tree kernels.\nInProceedings of EAMT 2011 , pages 233–240, Leu-\nven, Belgium.\nKoehn, Philipp, Hieu Hoang, Alexandra Birch, Chris\nCallison-Burch, Bertoldi Nicola Federico, Mar-\ncello, Brooke Cowan, Wade Shen, Christine Moran,\nRichard Zens, Chris Dyer, Ondrej Bojar, Alexandra\nConstantin, and Evan Herbst. 2007. MOSES: Open\nsource Toolkit for Statistical Machine Translation.\nInProceedings of ACL 2007, demonstration session ,\nPrague, Czech Republic.\nLandauer, Thomas K., Peter W. Foltz, and Darrell La-\nham. 1998. An Introduction to Latent Semantic\nAnalysis. Discourse Processes , 25(2-3):259–284.\nLouis, Annie and Ani Nenkova. 2013. Automati-\ncally Assessing Machine Summary Content With-\nout a Gold Standard. Computational Linguistics ,\n39(2):267–300.Papineni, Kishore, Salim Roukos, Todd Ward, and Wei\njing Zhu. 2002. BLEU: a Method for Automatic\nEvaluation of Machine Translation. In Proceedings\nof ACL 2002 , pages 311–318, Philadelphia, PA.\nPighin, D and L M `arquez. 2011. Automatic projec-\ntion of semantic structures: an application to pair-\nwise translation ranking. In Proceedings of SSST-5 ,\npages 1–9, Portland, OR.\nRubino, Raphael, Jos ´e G. C. de Souza, Jennifer Foster,\nand Lucia Specia. 2013. Topic Models for Trans-\nlation Quality Estimation for Gisting Purposes. In\nProceedings of the XIV MT Summit , pages 295–302,\nNice, France.\nShah, Kashif, Trevor Cohn, and Lucia Specia. 2013.\nAn Investigation on the Effectiveness of Features for\nTranslation Quality Estimation. In Proceedings of\nthe XIV MT Summit , pages 167–174, Nice, France.\nSnover, Matthew, Bonnie Dorr, Richard Schwartz, Lin-\nnea Micciulla, and John Makhoul. 2006. A study of\nTranslation Edit Rate with Targeted Human Annota-\ntion. In Proceedings of AMTA 2006 , pages 223–231,\nCambridge, MA.\nSoricut, Radu and Abdessamad Echihabi. 2010.\nTrustRank: Inducing Trust in Automatic Transla-\ntions via Ranking. In Proceedings of the ACL 2010 ,\npages 612–621, Uppsala, Sweden.\nSoricut, Radu and Sushant Narsale. 2012. 
Com-\nbining Quality Prediction and System Selection for\nImproved Automatic Translation Output. In Pro-\nceedings of WMT 2012 , pages 163–170, Montr ´eal,\nCanada.\nSoricut, Radu, Nguyen Bach, and Ziyuan Wang. 2012.\nThe SDL Language Weaver Systems in the WMT12\nQuality Estimation Shared Task. In Proceedings of\nWMT 2012 , pages 145–151, Montr ´eal, Canada.\nSpecia, Lucia, Marco Turchi, Nicola Cancedda, Marc\nDymetman, and Nello Cristianini. 2009. Estimating\nthe Sentence-Level Quality of Machine Translation\nSystems. In Proceedings of EAMT 2009 , EAMT-\n2009, pages 28–37, Barcelona, Spain.\nSpecia, Lucia, Kashif Shah, Jose G.C. de Souza, and\nTrevor Cohn. 2013. QuEst - A translation quality es-\ntimation framework. In Proceedings of WMT 2013:\nSystem Demonstrations , ACL-2013, pages 79–84,\nSofia, Bulgaria.\nWong, Billy T. M. and Chunyu Kit. 2012. Extend-\ning machine translation evaluation metrics with lexi-\ncal cohesion to document level. In Proceedings of\nEMNLP-CONLL 2012 , pages 1060–1068, Jeju Is-\nland, Korea.\nXiong, Deyi, Min Zhang, and Haizhou Li. 2010. Error\ndetection of statistical machine translation using lin-\nguistic features. In Proceedings of ACL 2010 , pages\n604–611, Uppsala, Sweden.\n108",
"main_paper_content": null
}
|
{
"decision": "Unknown",
"reviews": []
}
| 0 | 0 |
[] |
[] |
{
"id": "YaVawMYChHx",
"year": null,
"venue": "EAMT 2016",
"pdf_link": "https://aclanthology.org/W16-3406.pdf",
"forum_link": "https://openreview.net/forum?id=YaVawMYChHx",
"arxiv_id": null,
"doi": null
}
|
{
"title": "Combining Translation Memories and Syntax-Based SMT: Experiments with Real Industrial Data",
"authors": [
"Liangyou Li",
"Carla Parra Escartín",
"Qun Liu"
],
"abstract": "Liangyou Li, Carla Parra Escartin, Qun Liu. Proceedings of the 19th Annual Conference of the European Association for Machine Translation. 2016.",
"keywords": [],
"raw_extracted_content": "Baltic J. Modern Computing, V ol. 4 (2016), No. 2, pp. 165–177\nCombining Translation Memories and\nSyntax-Based SMT\nExperiments with Real Industrial Data\nLiangyou LI1, Carla PARRA ESCART ´IN2, Qun LIU1\n1ADAPT Centre, School of Computing, Dublin City University, Ireland\n2Hermes Traducciones, Madrid, Spain\nfliangyouli,qliu [email protected]\[email protected]\nAbstract. One major drawback of using Translation Memories (TMs) in phrase-based Machine\nTranslation (MT) is that only continuous phrases are considered. In contrast, syntax-based MT\nallows phrasal discontinuity by learning translation rules containing non-terminals. In this paper,\nwe combine a TM with syntax-based MT via sparse features. These features are extracted during\ndecoding based on translation rules and their corresponding patterns in the TM. We have tested\nthis approach by carrying out experiments on real English–Spanish industrial data. Our results\nshow that these TM features significantly improve syntax-based MT. Our final system yields\nimprovements of up to +3.1 BLEU, +1.6 METEOR, and -2.6 TER when compared with a state-\nof-the-art phrase-based MT system.\nKeywords: machine translation, translation memory, syntax-based SMT\n1 Introduction\nA Translation Memory (TM) is a database which stores legacy translations. Transla-\ntors use them in their work because TMs allow them to increase their productivity by\nretrieving past translations and help them to enhance terminology and style cohesion\nacross projects. Given an input sentence, a TM provides the most similar source sen-\ntence in the database together with its target translation as the reference for post-editing.\nIf the input sentence was already translated in the past, the translator does not necessar-\nily post-edit it. In the case of similar sentences (called “fuzzy matches”), the Computer\nAssisted Translation tool highlights the differences between the input sentence and the\none stored in the TM to enhance the post-editing task. Different coloring schemes are\nused to highlight changes and additions to the source text in the TM to help the trans-\nlator spot quicker the post-edits needed. As TMs can help produce high quality and\n166 Li et al.\nconsistent translations for repetitive materials, they are believed to be useful for Statis-\ntical Machine Translation (SMT).\nThe combination of TM and SMT (henceforth referred as “TM combination”) has\nbeen explored in many ways and it has shown to improve translation quality. Unlike the\nwell-known pipeline approaches (Koehn and Senellart, 2010; Ma et al., 2011), which\nuse a TM combination at sentence-level, run-time TM combination (namely, combining\nthe TM and SMT during decoding) can make a better use of the matched sub-sentences\n(Wang et al., 2013; Li et al., 2014a). Such run-time combination has been explored on\nPhrase-Based (PB) MT (Koehn et al., 2003). However, PBMT systems making use of\nTMs only take into consideration continuous segments and thus generalizations such as\nthe translation of the English call:::offinto the Spanish cancelar cannot be learned.\nIn this paper, we explore the possibility of using a run-time TM combination on\nsyntax-based MT. Syntax-based MT learns translation rules which can be easily extrap-\nolated to new sentences by allowing non-terminals. 
In our approach, for each applied\ntranslation rule during decoding, we identify a corresponding pattern in the TM and\nthen extract sparse features which are subsequently added to our system.\nIn our experiments, the TM combination is done on the hierarchical phrase-based\n(HPB) model (Chiang, 2005) and the dependency-to-string (D2S) model (Xie et al.,\n2011; Li et al., 2014b). The experimental results on real English–Spanish data3show\nthat syntax-based models produce significantly better translations than phrase-based\nmodels. After adding the TM features, the syntax-based models are further significantly\nimproved.\n2 TMs in SMT\nCombining TMs and SMT together has been explored in different ways in recent years.\nHe et al. (2010a) presented a recommendation system which used a Support Vector\nMachine (Cortes and Vapnik, 1995) binary classifier to select a translation from the\noutputs of a TM and an SMT system. He et al. (2010b) extended this work by re-ranking\nthe N-best list of SMT and TM outputs. Koehn and Senellart (2010) and Ma et al. (2011)\nused TMs in a pipeline manner. Firstly, they identified the matched part from the best\nmatch in the TM and merged their translation with the input. Then, they forced their\nphrase-based SMT system to translate the unmatched part of the input sentence. One\nmajor drawback of these methods is that they do not distinguish whether a match is\ngood or not at phrase-level.\nWang et al. (2013) proposed an improved method by using TM information on\nphrases during decoding. This method extracts features from the TM and then uses pre-\ntrained generative models to estimate one or more probabilities added to phrase-based\nsystems. However, their work requires a rather complex process to obtain training in-\nstances for these pre-trained models. Li et al. (2014a) simplified this method by extract-\ning sparse features and directly adding them to systems. In experiments, this simplified\nmethod was comparable to the one in Wang et al. (2013). However, in both works,\nfeatures are designed for phrase-based models.\n3Our data belongs to a translation company and is further described in Section 5.1.\nCombining Translation Memories and Syntax-Based SMT 167\n3 Syntax-Based SMT\nTypically, syntax-based decoders are based on the CYK algorithm (Kasami, 1965;\nYounger, 1967; Cocke and Schwartz, 1970). It searches for the best derivation d\u0003=\nr1r2\u0001\u0001\u0001rNamong all possible derivations D, as in Equation (1),\nd\u0003= argmax\nd2DP(d) (1)\nwhereriare the translation rules. Translations are carried out bottom-up. For each span\nof an input sentence, the decoder finds rules to translate it. The translation of a large\nspan can be obtained by combining translations from its sub-spans using the syntactic\nrules containing non-terminals.\nIn this paper, we use two syntax-based models for our experiments. One is the HPB\nmodel (Chiang, 2005) which is based on formal syntax. The other one is the D2S model\n(Xie et al., 2011; Li et al., 2014b) which is based on dependency structures generated\nby the Stanford parser4.\n3.1 Hierarchical Phrase-Based Translation\nA hierarchical phrase is an extension of a phrase by allowing gaps where other hier-\narchical phrases are nested. The HPB model is formulated by a synchronous context\nfree grammar (SCFG) where gaps are represented by a generic non-terminal symbol\nX. 
Rules in the HPB are in the following form:\nX!h\r;\u000b;\u0018i;\nwhere\ris a string over source terminal symbols and non-terminals, \u000bis a string over\ntarget terminal symbols and non-terminals, and \u0018is a one-to-one mapping between\nnon-terminals in \rand\u000b. An example of a rule is as follows:\nX!hBolivia holds X1;Bolivia sostiene X1i;\nwhere the index on each non-terminal indicates the mappings. These rules can be auto-\nmatically learned from parallel corpora based on word alignments.\n3.2 Dependency-to-String Translation\nIn the D2S model, there are two kinds of rules. One is the head rule which specifies the\ntranslation of a source word. For example:\nholds!sostiene\nThe other one is the head-dependent (HD) rule which consists of three parts: the HD\nfragment5sof the source side, a target string tand a one-to-one mapping \u001efrom vari-\nables insto variables in t, as in:\ns=(Bolivia ) holds (x1:selection)\nt=Bolivia sostiene x1\n\u001e=fx1:selection!x1g\n4http://nlp.stanford.edu/software/lex-parser.shtml\n5An HD fragment is composed of a head node and all of its dependents.\n168 Li et al.\nAlgorithm 1: Procedure for extracting a translation pattern from a TM instance.\nData: A rule rfor an input sentence I, a TM instance (S; T; A )\nResult: A translation pattern Rforr\n1let[i; j]denote the span covered by r;\n2h[ik; jk]i; k= 1\u0001\u0001\u0001narensubspans covered by non-terminals in r;\n3foreach span [ik; jk]do\n4 find a corresponding TM source span [is\nk; js\nk], according to string edits;\n5 find a TM target span [it\nk; jt\nk], according to word alignment A;\n6end\n7find corresponding TM source and target spans [is; js]and[it; jt]for[i; j];\n8s= words in span [is; js]and replacing phrases covered by h[is\nk; js\nk]iwith non-terminals;\n9t= words in span [it; jt]and replacing phrases covered by\n[it\nk; jt\nk]\u000b\nwith non-terminals;\n10R=hs; t; ai,aindicates mappings between non-terminals in sandt;\nwhere the underlined element denotes the leaf node. Variables in the Dep2Str model are\nconstrained either by words (like x1:selection) or Part-of-Speech tags (like x1:NN).\n4 TM Combination Method\nInspired by Li et al. (2014a), who directly add sparse features to the log-linear frame-\nwork of SMT (Och and Ney, 2002) to combine a TM with the PB model, in this paper\nwe extract sparse features for each applied rule during decoding and directly add them\nto our syntax-based SMT systems. These features can be jointly trained with other fea-\ntures to maximize translation quality measured by BLEU (Papineni et al., 2002).\nGiven an input sentence in our test set, our approach starts from retrieving the most\nsimilar sentence from a TM.6The similarity is measured by the so-called fuzzy match\nscore. Concretely, we use the word-based string-edit distance in Equation (2) (Koehn\nand Senellart, 2010) to compute the fuzzy match score between the input sentence and\nthe TM instance.\nF= 1\u0000editdistance(input;tmsource )\nmax(jinputj;jtmsourcej)(2)\nDuring the calculation of the fuzzy match score, we also obtain a sequence of opera-\ntions, including insertion, match, substitution and deletion, which are useful for finding\nthe TM correspondence of an input phrase.\n4.1 Recognizing Patterns in TM\nInstead of translating a unique continuous phrase, the rules in our system can contain\nnon-terminals which cover previously translated phrases. Before extracting any features\nfor a rule, we first identify its corresponding patterns in the TM. 
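Before turning to the identification procedure, the word-based fuzzy match score of Equation (2) can be illustrated with a minimal Python sketch. Function and variable names here are illustrative only, not taken from the actual system, and the sequence of operations (insertion, match, substitution, deletion) that the system also records during the computation is omitted:

def word_edit_distance(source_tokens, target_tokens):
    # Word-level Levenshtein distance; insertions, deletions and substitutions all cost 1.
    n, m = len(source_tokens), len(target_tokens)
    d = [[0] * (m + 1) for _ in range(n + 1)]
    for i in range(n + 1):
        d[i][0] = i
    for j in range(m + 1):
        d[0][j] = j
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            cost = 0 if source_tokens[i - 1] == target_tokens[j - 1] else 1
            d[i][j] = min(d[i - 1][j] + 1,         # deletion
                          d[i][j - 1] + 1,         # insertion
                          d[i - 1][j - 1] + cost)  # match / substitution
    return d[n][m]

def fuzzy_match_score(input_sentence, tm_source):
    # Equation (2): F = 1 - editdistance(input, tm_source) / max(|input|, |tm_source|)
    inp, src = input_sentence.split(), tm_source.split()
    return 1.0 - word_edit_distance(inp, src) / max(len(inp), len(src))

# For the Figure 1 pair this yields 1 - 2/10 = 0.8 (one deletion, one substitution).
print(fuzzy_match_score("click to select the policy that you want to delete",
                        "click to select the policy you want to edit"))
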
The identification\nprocedure is illustrated in Algorithm 1.\n6In our experiments, we use the training corpus of our SMT experiments as a TM.\nCombining Translation Memories and Syntax-Based SMT 169\nInput: click to select the policy that you want to delete\nTM Source: click to select the policy you want to editString Edits:\nTM Target: haga clic para seleccionar la norma que desea editarWord Alignments:x1 x2r\nx1 x2rs\nx1 x2 rt\nRule: select x1that you want to x2!seleccionar x1que desea x2\nTM Pattern: select x1you want to x2!seleccionar x1desea x2\nFig. 1. An illustration of extracting translation patterns for a rule r. Phrases in solid rectangles\nare covered by non-terminals ( xi). Phrases in dashed rectangles are covered by translation rules\nor patterns.\nFor an input sentence I, we first retrieve an instance hS;T;Aifrom the TM, where\nSdenotes the TM source segment, Tis the TM target segment, and Aindicates the\nword alignment between SandT. During decoding, rules are applied to translate I.\nEach rulercovers a continuous span [i;j]ofIand each non-terminal in rcovers a\nsub-span of [i;j](lines 1–2 in Algorithm 1). For the span and its sub-spans, we first\nfind their source correspondence in Saccording to the string edits between IandS\nand then the target correspondence in Taccording to the word alignment A(lines 3–7).\nFinally, we obtain a translation pattern R=hs;t;aiby replacing phrases covered by\nsub-spans with non-terminals (lines 8–10). Given a translation rule, Figure 1 illustrates\nhow to identify a corresponding pattern in the TM.\nNote that special cases might exist because of various situations in string edits and\nword alignments. Taking Figure 1 as an example, we cannot find a correspondence in\nthe TM source for the input word that. In addition, the target tin extracted patterns can\nbe extended by unaligned words, so in such case we might have multiple targets. These\ncases have been taken into consideration when extracting features (cf. Section 4.2).\n4.2 Extracting Features\nThe features we use are similar to the ones in Li et al. (2014a) but modified to handle\nnon-terminals in rules. Let r=h\r;\u000bidenote a rule we are using to translate an input\nsentenceI. A retrieved TM instance for IishS;T;Ai. The rule covers an input phrase\ns, which corresponds to TM segments hsr;tr= (tr\n1\u0001\u0001\u0001tr\nm)i. The following features are\nthe same used by Li et al. (2014a):\n–Feature Zx(x= 0\u0001\u0001\u000110) indicates the similarity between IandS. Each Z xcorre-\nsponds to a fuzzy match score range. For example, given a score F(I;S) = 0:818\nwhich goes into the range [0.8,0.9), we obtain the feature Z 8.\n–Feature SEP x(x=YorN) is the indicator of whether sis a punctuation mark at\nthe end of the input sentence I.\n170 Li et al.\n–Feature NLN xy(x= 0;1;2andy= 0;1;2andy < x ) models the context of\nsrands, wherexdenotes the number of matched neighbors (left and right words)\nandydenotes how many of those neighbors are aligned to target words. If sris\nunavailable, we use feature NLN non.\n–Feature CSS x(x=S;L;R;B ) describes the status of tr. Iftris unavailable, we\nuse feature CSS non. Whenm= 1 (i.e. the size of tris 1),x=S.x=L;R;B\nmeans thattris obtained by extending unaligned words only on the left side or the\nright side or both sides, respectively.\n–Feature LTC x(x=O;L;R;B;M ) is the indicator of whether a tr\niis the longest or\nnot. Iftris unavailable, we use feature LTC non.x=Omeanstr\niis not generated\nby extending unaligned words. 
x=L(orR;B ) meanstr\niis only extended on its\nleft (or right) side (or both sides) and has the longest left (or right) side (or both\nsides).x=Mmeanstr\niis extended but not the longest one.\nThe assumed extracted translation patterns for rareh\rr;\u000br= (\u000br\n1\u0001\u0001\u0001\u000br\nm)i. We modify\nthe following features and add them to our system:\n–Feature SPL xmeasures the length of s,x= 1\u0001\u0001\u00017andmore . Unlike PB models,\nwhere the phrase length is bounded, in syntax-based models we can use a rule to\ncover the whole input. So We use more to denotejsj>7.\n–Feature SCM x(x=L;H;M ) represents the matching status between \rand\rr,\ninstead ofsandsr. This notation is used because \rmight contain non-terminals,\nwhich in turn means that phrases covered by these non-terminals have already been\nconsidered. If \rris unavailable, we use feature SCM non. Otherwise, Ldenotes a\nlow similarity, namely F(\r;\rr)<0:5.HindicatesF(\r;\rr)>0:5, andMmeans\nF(\r;\rr) = 0:5.\n–Similar to the SCM x, feature TCM x(x=L;H;M ) is the matching status between\n\u000band each\u000br\niin\u000br. If\u000bris unavailable, we use feature TCM non.\n–We use the CPM xfeature to model the reordering information. If \ronly contains\nterminals, the feature is CPM nnt. Otherwise, if \u000bris unavailable, we use feature\nCPM non. Otherwise, \u000brand\u000bdefine two permutations in terms of non-terminals\nin\r. The two permutations are assumed to be p=p1\u0001\u0001\u0001pnandpr=pr\ni\u0001\u0001\u0001pr\nn.\nWe use the Spearman correlation defined in Equation (3) to score the permutations.\n\u001a= 1\u00006Pn\ni=1(pi\u0000pr\ni)2\nn(n2\u00001)(3)\nThe range of \u001ais[\u00001;1]. We divide the score into 5 groups, each of which indicates\na feature:x=nhwhen\u001a<\u00000:5,x=nlwhen\u001a2[\u00000:5;0),x= 0when\u001a= 0,\nx=plwhen\u001a2(0;0:5), andx=phwhen\u001a\u00150:5.\n5 Experiments\n5.1 Data\nWith the aim of further testing whether our experiments would be useful in a real com-\nmercial setting, we run our experiments on a real industrial data set. Our data belongs to\nCombining Translation Memories and Syntax-Based SMT 171\nTable 1. Statistics of English–Spanish (EN–ES) corpus.\nTraining Development Test\n#sentences 577,639 1,959 1,964\n#words (EN) 7,632,983 26,451 26,134\n#words (ES) 9,049,260 31,170 31,195\na translation company and consists of all segments contained in the TM of one of their\nclients. The TM comprises all past projects of that client and is duly maintained and\ncurated to ensure its quality. The data belongs to a technical domain and as mentioned\nearlier, it is used for English !Spanish translation tasks7.\nWe deleted all repeated segments from the TM as well as all segments containing\nHTML tags occurring within the TM segments. While repetitions were deleted to follow\nthe best practices in running SMT experiments, the segments with HTML tags were\ndeleted because we found out that those segments were HTML addresses that did not\nrequire a translation and would have added noise to our data. Inline tags were not treated\nspecifically and were maintained in the data. Once our data was cleaned, we randomly\nsplit it into training ,development andtest. Table 1 summarizes the size of our data in\nterms of number of sentences and running words.\n5.2 Settings\nIn our experiments, we build four baselines. 
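Returning for a moment to the CPM feature of Section 4.2, the permutation scoring of Equation (3) and its mapping onto the five CPM buckets can be sketched as follows. This is an illustrative Python fragment only; in particular, the handling of a rule with a single non-terminal is our assumption, since the text does not spell that case out:

def cpm_bucket(p, p_r):
    # Spearman correlation (Equation 3) between the non-terminal order p defined by the
    # rule target and the order p_r defined by the target of the TM pattern.
    n = len(p)
    if n < 2:
        return "CPM_0"  # assumption: with a single non-terminal there is no reordering to score
    rho = 1.0 - 6.0 * sum((pi - pri) ** 2 for pi, pri in zip(p, p_r)) / (n * (n * n - 1))
    if rho < -0.5:
        return "CPM_nh"
    if rho < 0:
        return "CPM_nl"
    if rho == 0:
        return "CPM_0"
    if rho < 0.5:
        return "CPM_pl"
    return "CPM_ph"

print(cpm_bucket([1, 2], [2, 1]))  # fully reversed non-terminals -> rho = -1 -> CPM_nh

With these features in place, we now return to the experimental setup.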
The two phrase-based baselines are: PB,\nthe phrase-based model in Moses with default configurations, and PBLR , the phrase-\nbased model, adding three lexical reordering models (Galley and Manning, 2008) to\nimprove its reordering ability. The two syntax-based systems are: HPB , the hierarchi-\ncal phrase-based model in Moses with default configurations, and D2S, an improved\ndependency-to-string model which has been implemented in Moses (Li et al., 2014b).8\nWe add the TM features in Li et al. (2014a) to phrase-based systems and our TM fea-\ntures to syntax-based systems.\nWord alignment is performed by GIZA++ (Och and Ney, 2004) with the heuristic\nfunction grow-diag-final-and . We use SRILM (Stolcke, 2002) to train a 5-gram lan-\nguage model on the target side of our training corpus with modified Kneser-Ney dis-\ncounting (Chen and Goodman, 1996). Batch MIRA (Cherry and Foster, 2012) is used to\ntune weights. BLEU (Papineni et al., 2002), METEOR (Denkowski and Lavie, 2011),\nand TER (Snover et al., 2006) are used for evaluation.9\n5.3 Results and Discussion\nTable 2 accounts for the results obtained for all our experiments. As may be observed,\nall our baselines are already pretty high and thus improvements are harder to obtain.\n7Unfortunately, due to confidentiality agreements the data used in these experiments cannot be\npublicly released.\n8http://computing.dcu.ie/ liangyouli/dep2str.zip\n9https://github.com/jhclark/multeval\n172 Li et al.\nTable 2. Metric scores for all systems on English–Spanish. Each score is the average score over\nthree MIRA runs (Clark et al., 2011).\u0003means a system is better than PB at p\u00140:01.+indicates\na systems is better than PBLR at p\u00140:01.Bold figures are significantly better than their no-TM\ncounterparts at p\u00140:01.\nSystems BLEU \"(%) METEOR\"(%) TER#(%)\nPB 62.8 79.5 26.5\nPBLR 63.5\u000379.9\u000326.0\u0003\nHPB 64.3+80.3+25.2+\nD2S 65.3+80.8+24.5+\nPB+TM 63.5\u000379.9\u000326.0\u0003\nPBLR+TM 64.2+80.3+25.5+\nHPB+TM 65.9 81.0 24.3\nD2S+TM 65.9 81.1 23.9\nThis is not surprising, as we are working with an in-domain data set which is used\nfor real translation tasks. The lexical reordering models significantly improve the PB\nsystem (+0.7 BLEU, +0.4 METEOR, and -0.5 TER). After incorporating the TM Com-\nbination approach (Li et al., 2014a), both systems (PB and PBLR) further produce sig-\nnificantly better translations. Both syntax-based systems (HPB and D2S) achieve signif-\nicantly better results than phrase-based systems (up to +2.5 BLEU, +1.3 METEOR and\n-2.0 TER when comparing the PB system against the D2S system). Moreover, our TM\nfeatures, when added to the HPB and D2S models, consistently improve both syntax-\nbased baselines. In fact, these two systems (our final ones), achieve the best scores\nacross all evaluation metrics (up to +3.1 BLEU, +1.6 METEOR, and -2.6). 
Example 1\nshows how our D2S+TM and HPB+TM systems achieve better translations:\nExample 1.\nSource :button which opens the password entry window .\nRef:al pulsar este bot ´on se abrir ´a la ventana de introducci ´on de la contrase ˜na .\nTM Source :button which opens the container settings window .\nTM Target :al pulsar este bot ´on se abrir ´a la ventana configuraci ´on del repositorio .\nTM Score :0.75\nPBLR :bot´on que abre la ventana de introducci ´on de la contrase ˜na .\nHPB :bot´on que abre la ventana de introducci ´on de la contrase ˜na .\nD2S:bot´on que abre la ventana de introducci ´on de la contrase ˜na .\nPBLR+TM :bot´on que abre la ventana de introducci ´on de la contrase ˜na .\nHPB+TM :al pulsar este bot ´on se abrir ´a la ventana de introducci ´on de la contrase ˜na .\nD2S+TM :al pulsar este bot ´on se abrir ´a la ventana de introducci ´on de la contrase ˜na .\nWhen using the TM Combination, both syntax-based models achieve a BLEU score\nof 1, while all other systems have a BLEU score of 0.6989. It shall be noted that both\ntranslations could actually be possible, but in our data there seems to be a stylistic\npreference: se abrir ´a, is preferred over que abre , which would be a more literal but still\ncorrect translation of the English “which opens”. The TM Combination method allows\nour systems to learn the preferred translation in this case and match the reference. We\nhave also found cases in which an error in the syntactic analysis causes our system to\nCombining Translation Memories and Syntax-Based SMT 173\n(0,0.4) [0.4,0.5) [0.5,0.6) [0.6,0.7) [0.7,0.8) [0.8,0.9) [0.9,1)4050607080246 161 288 361 270 359 279\nFuzzy Match RangesBLEU (%)#Sentences\nPB\nPBLR\nHPB\nD2S\nFig. 2. BLEU scores evaluated on sentences grouped by fuzzy match scores. The symbol \u0000on\neach bar indicates the BLEU scores of the systems after incorporating the TM approach.\nfail (e.g. in the case of English nominal compounds), as well as cases in which our\nsystem produces a better translation than the reference.10\nSince the fuzzy match score, as defined in Equation (2), is used to select a TM\ninstance for an input sentence and thus is an important factor for combining the dif-\nferent SMT models and TM features, it is interesting to know the impact it has on the\ntranslation quality of the various systems we trained. Figure 2 shows BLEU scores of\nall systems evaluated on sentences grouped by fuzzy match scores. We first find that\nBLEU scores increase as fuzzy scores become higher. This is reasonable, since a higher\nfuzzy score means that we can find a similar sentence in the training data. The TM\napproach results in an improvement on almost all ranges. Such improvement is more\nconsistent in the higher fuzzy ranges, namely [0.7,1). This also suggests that although\nTM instances with lower fuzzy scores could be useful, those with higher fuzzy scores\nare more reliable.\nAnother interesting finding is that D2S is consistently better than HPB. The main\nreason could be that rules in D2S are guided by linguistic annotations. In comparison\nwith D2S, however, HPB benefits more when combined with the TM approach. 
In fact,\nwhen compared with their respective baselines (the same model without incorporating\nthe TM approach), the HPB model is the one which experiences the highest improve-\n10A qualitative analysis of our test set is being done to determine the real impact of our approach.\n174 Li et al.\n\u00145\u001410\u001415\u001420 >206062646668449 467 383 283 382\nSentence LengthBLEU (%)#Sentences\nPB PBLR\nHPB D2S\nFig. 3. BLEU scores evaluated on sentences grouped by sentence length. The symbol \u0000on each\nbar indicates the BLEU scores of the systems after incorporating the TM approach.\nment (+1.6 BLEU, +0.7 METEOR, and -0.9 TER). This finding suggests that the TM\napproach could enhance the rule selection when linguistic annotations are unavailable,\nand that the TM approach achieves its greatest potential when combined with syntax-\nbased models.\nFinally, since syntax-based models learn translation rules which have a better gen-\neralization and reordering ability, we grouped sentences according to their length and\nevaluated our different systems by their respective sentence-length groups. The results\nfor all systems are shown in Figure 3. As may be observed, the syntax-based systems,\nespecially D2S, outperform the phrase-based models in all length ranges. Moreover, the\nTM combination consistently improves the results for all systems.\nBearing in mind that the ultimate goal would be to integrate an SMT system in a\nreal commercial setting for MT Post-Editing tasks (MTPE), the results obtained suggest\nthat the best option would be to use the D2S+TM system. Parra Escart ´ın and Arcedillo\n(2015) investigated the productivity thresholds for MTPE tasks running an experiment\nwith 10 professional translators in a real commercial setting. They found out that for\nEnglish!Spanish MTPE tasks the productivity gain thresholds were of 45–50 BLEU\nand 25–30 TER. Given the results obtained by our systems, it seems that even the state-\nof-the-art baseline would already allow for a faster post-editing.\n6 Conclusion\nIn this paper, we have explored how TM approaches can be used to enhance syntax-\nbased SMT systems. To test our approach, we used real data from a translation company\nCombining Translation Memories and Syntax-Based SMT 175\nand trained, tuned and tested different SMT systems on such data. The results of our\nexperiments are very promising, particularly because improvements were achieved over\nalready high baseline systems. The combination of the TM approach with other existing\nSMT systems yielded better overall scores (up to +3:1BLEU, +1:6METEOR, and\n\u00002:6TER when compared with a state-of-the-art phrase-based MT system) in all MT\nevaluation metrics and for all sentence lengths.\nAnother interesting finding was that better BLEU scores seem to be obtained for the\nhighest fuzzy match bands. In future research, we plan to run experiments of our TM\nCombination method taking only into consideration the higher fuzzies ([0.7,1)) to test\nwhether better results are obtained and a threshold shall be established. We would also\nlike to test our approach on public corpora and use a different data set from the TM\nto train SMT systems. It would be also interesting to know how effective each sparse\nfeature is.\nAcknowledgements\nThis research has received funding from the People Programme (Marie Curie Actions)\nof the European Union’s Framework Programme (FP7/2007-2013) under REA grant\nagreement no317471. 
The ADAPT Centre for Digital Content Technology is funded\nunder the SFI Research Centres Programme (Grant 13/RC/2106) and is co-funded under\nthe European Regional Development Fund. We also thank the anonymous reviewers for\ntheir insightful comments and suggestions.\nReferences\nChen, S. F., Goodman, J. (1996). An Empirical Study of Smoothing Techniques for\nLanguage Modeling. Proceedings of the 34th Annual Meeting on Association for\nComputational Linguistics , Santa Cruz, California, 310–318.\nCherry, C., Foster, G. (2012). Batch Tuning Strategies for Statistical Machine Trans-\nlation. Proceedings of the 2012 Conference of the North American Chapter of the\nAssociation for Computational Linguistics: Human Language Technologies , Mon-\ntreal, Canada, 427–436.\nChiang, D. (2005). A Hierarchical Phrase-based Model for Statistical Machine Trans-\nlation. Proceedings of the 43rd Annual Meeting on Association for Computational\nLinguistics , Ann Arbor, Michigan, 263–270.\nClark, J. H., Dyer, C., Lavie, A., Smith, N. A. (2011). Better Hypothesis Testing for\nStatistical Machine Translation: Controlling for Optimizer Instability. Proceedings\nof the 49th Annual Meeting of the Association for Computational Linguistics: Human\nLanguage Technologies: Short Papers - Volume 2 , Portland, Oregon, 176–181.\nCocke, J., Schwartz, J. T. (1970). Programming Languages and Their Compilers: Pre-\nliminary Notes. Technical report, Courant Institute of Mathematical Sciences, New\nYork University, New York, NY .\nCortes, C., Vapnik, V . (1995). Support-Vector Networks. Machine Learning ,\n20(3):273–297.\n176 Li et al.\nDenkowski, M., Lavie, A. (2011). Meteor 1.3: Automatic Metric for Reliable Opti-\nmization and Evaluation of Machine Translation Systems. Proceedings of the Sixth\nWorkshop on Statistical Machine Translation , Edinburgh, Scotland, 85–91.\nGalley, M., Manning, C. D. (2008). A Simple and Effective Hierarchical Phrase Re-\nordering Model. Proceedings of the Conference on Empirical Methods in Natural\nLanguage Processing , Honolulu, Hawaii, 848–856.\nHe, Y ., Ma, Y ., van Genabith, J., Way, A. (2010a). Bridging SMT and TM with Trans-\nlation Recommendation. Proceedings of the 48th Annual Meeting of the Association\nfor Computational Linguistics , Uppsala, Sweden, 622–630.\nHe, Y ., Ma, Y ., Way, A., Van Genabith, J. (2010b). Integrating N-best SMT Outputs into\na TM System. Proceedings of the 23rd International Conference on Computational\nLinguistics: Posters , Beijing, China, 374–382.\nKasami, T. (1965). An Efficient Recognition and Syntax-Analysis Algorithm for\nContext-Free Languages. Technical report, Air Force Cambridge Research Lab, Bed-\nford, MA.\nKoehn, P., Hoang, H., Birch, A., Callison-Burch, C., Federico, M., Bertoldi, N., Cowan,\nB., Shen, W., Moran, C., Zens, R., Dyer, C., Bojar, O., Constantin, A., Herbst, E.\n(2007). Moses: Open Source Toolkit for Statistical Machine Translation. Proceed-\nings of the 45th Annual Meeting of the ACL on Interactive Poster and Demonstration\nSessions , Prague, Czech Republic, 177–180.\nKoehn, P., Och, F. J., Marcu, D. (2003). Statistical Phrase-based Translation. Proceed-\nings of the 2003 Conference of the North American Chapter of the Association for\nComputational Linguistics on Human Language Technology - Volume 1 , Edmonton,\nCanada, 48–54.\nKoehn, P., Senellart, J. (2010). Convergence of Translation Memory and Statistical\nMachine Translation. 
Proceedings of AMTA Workshop on MT Research and the\nTranslation Industry , Denver, Colorado, USA, 21–31.\nLi, L., Way, A., Liu, Q. (2014a). A Discriminative Framework of Integrating Translation\nMemory Features into SMT. Proceedings of the 11th Conference of the Association\nfor Machine Translation in the Americas, Vol. 1: MT Researchers Track , Vancouver,\nBC, Canada, 249–260.\nLi, L., Xie, J., Way, A., Liu, Q. (2014b). Transformation and Decomposition for Effi-\nciently Implementing and Improving Dependency-to-String Model In Moses. Pro-\nceedings of SSST-8, Eighth Workshop on Syntax, Semantics and Structure in Statisti-\ncal Translation .\nMa, Y ., He, Y ., Way, A., van Genabith, J. (2011). Consistent Translation using Dis-\ncriminative Learning - A Translation Memory-Inspired Approach. Proceedings of\nthe 49th Annual Meeting of the Association for Computational Linguistics: Human\nLanguage Technologies , Portland, Oregon, USA, 1239–1248.\nOch, F. J., Ney, H. (2002). Discriminative Training and Maximum Entropy Models\nfor Statistical Machine Translation. Proceedings of the 40th Annual Meeting on\nAssociation for Computational Linguistics , Philadelphia, Pennsylvania, 295–302.\nOch, F. J., Ney, H. (2004). The Alignment Template Approach to Statistical Machine\nTranslation. Compututational Linguistics , 30(4):417–449.\nPapineni, K., Roukos, S., Ward, T., Zhu, W.-J. (2002). BLEU: A Method for Automatic\nEvaluation of Machine Translation. Proceedings of the 40th Annual Meeting on\nAssociation for Computational Linguistics , Philadelphia, Pennsylvania, 311–318.\nCombining Translation Memories and Syntax-Based SMT 177\nParra Escart ´ın, C., Arcedillo, M. (2015). Living on the edge: productivity gain thresh-\nolds in machine translation evaluation metrics. Proceedings of the Fourth Workshop\non Post-editing Technology and Practice , Miami, Florida, 46–56.\nSnover, M., Dorr, B., Schwartz, R., Micciulla, L., Makhoul, J. (2006). A Study of\nTranslation Edit Rate with Targeted Human Annotation. Proceedings of Association\nfor Machine Translation in the Americas , Cambridge, Massachusetts, USA, 223–231.\nStolcke, A. (2002). SRILM-an Extensible Language Modeling Toolkit. Proceedings of\nthe 7th International Conference on Spoken Language Processing , Denver, Colorado,\nUSA, 257–286.\nWang, K., Zong, C., Su, K.-Y . (2013). Integrating Translation Memory into Phrase-\nBased Machine Translation during Decoding. Proceedings of the 51st Annual Meet-\ning of the Association for Computational Linguistics (Volume 1: Long Papers) , Sofia,\nBulgaria, 11–21.\nXie, J., Mi, H., Liu, Q. (2011). A Novel Dependency-to-string Model for Statistical\nMachine Translation. In Proceedings of the Conference on Empirical Methods in\nNatural Language Processing , Edinburgh, United Kingdom, 216–226.\nYounger, D. H. (1967). Recognition and Parsing of Context-Free Languages in Time\nn3.Information and Control , 10(2):189–208.\nReceived May 2, 2016 , accepted May 9, 2016",
"main_paper_content": null
}
|
{
"decision": "Unknown",
"reviews": []
}
| 0 | 0 |
[] |
[] |
{
"id": "WykHwQAaF4",
"year": null,
"venue": "EAMT 2009",
"pdf_link": "https://aclanthology.org/2009.eamt-1.9.pdf",
"forum_link": "https://openreview.net/forum?id=WykHwQAaF4",
"arxiv_id": null,
"doi": null
}
|
{
"title": "Use of Rich Linguistic Information to Translate Prepositions and Grammar Cases to Basque",
"authors": [
"Eneko Agirre",
"Aitziber Atutxa",
"Gorka Labaka",
"Mikel Lersundi",
"Aingeru Mayor",
"Kepa Sarasola"
],
"abstract": "Eneko Agirre, Aitziber Atutxa, Gorka Labaka, Mikel Lersundi, Aingeru Mayor, Kepa Sarasola. Proceedings of the 13th Annual conference of the European Association for Machine Translation. 2009.",
"keywords": [],
"raw_extracted_content": "Proceedings of the 13th Annual Conference of the EAMT , pages 58–65,\nBarcelona, May 2009\nUse of Rich Linguistic Information to Translate Prepositions and\nGrammatical Cases to Basque\nEneko Agirre, Aitziber Atutxa, Gorka Labaka, Mikel Lersundi,\nAingeru Mayor, Kepa Sarasola\nIXA Group. University of the Basque Country\[email protected]\nAbstract\nThis paper presents three successful tech-\nniques to translate prepositions heading\nverbal complements by means of rich lin-\nguistic information, in the context of a\nrule-based Machine Translation system for\nan agglutinative language with scarce re-\nsources. This information comes in the\nform of lexicalized syntactic dependency\ntriples, verb subcategorization and manu-\nally coded selection rules based on lex-\nical, syntactic and semantic information.\nThe first two resources have been auto-\nmatically extracted from monolingual cor-\npora. The results obtained using a new\nevaluation methodology show that all pro-\nposed techniques improve precision over\nthe baselines, including a translation dic-\ntionary compiled from an aligned corpus,\nand a state-of-the-art statistical Machine\nTranslation system. The results also show\nthat linguistic information in all three tech-\nniques are complementary, and that a com-\nbination of them obtains the best F-score\nresults overall.\n1 Introduction\nSince the first Machine Translation (MT) systems\nup to today’s, performing well the translation of\nthe prepositions is relevant for any MT system;\nJapkowicz and Wiebe (1991) claimed that doing\nit correctly is difficult because prepositions can-\nnot be translated in a systematic or coherent way.\nKoehn (2003) remarked the importance of the cor-\nrect translation of prepositions and he also reported\nthat the main reason for noun phrase (NP) and\nc/circlecopyrt2009 European Association for Machine Translation.prepositional phrase (PP) mistranslations consists\nof choosing wrong leading preposition.\nTranslation of prepositions is even more com-\nplex when the verb phrase and prepositional\nphrase structures differ widely in the languages\ninvolved in translation (Naskar and Bandyopad-\nhyayn, 2006). This is what happens when trans-\nlating from Spanish or English into Basque.\nThis paper explores the problem of translat-\ning prepositions heading verbal complements into\ntarget language equivalents. Although we focus\non Spanish to Basque translation, the evaluation\nmethodology and techniques can be applied to\nother language pairs. In Basque syntactic func-\ntions like subject, object and indirect objects are\nmarked by case-suffixes. In this work postposi-\ntions and grammatical cases have been homoge-\nneously treated, therefore it covers not only the\ntranslation of Spanish prepositions, but also how to\nchoose the correct grammatical case correspond-\ning to Spanish subjects, objects and indirect ob-\njects. Note that in most of the cases Spanish sub-\njects and objects are not marked by any surface\nword or special case marking. Thus, besides the\nSpanish prepositions, we also explore the transla-\ntion of the zero preposition corresponding to the\ngrammatical cases of subject and object.\nGiven an existing open-source rule-based ma-\nchine translation (RBMT) system called Matxin\n(Mayor, 2007; Alegria et al., 2007), we propose\nand evaluate three different techniques for trans-\nlating Spanish prepositions and syntactic functions\ninto Basque. 
These techniques use rich linguistic\ninformation like verb/postposition1/head-word de-\npendency triples, verb subcategorization and man-\nually coded selection rules based on lexical, syn-\n1When we use here the word postposition, we would like to\nrefer to grammatical cases and postpositions\n58\ntactic and semantic information. While the lat-\nter rules have been coded manually, the first two\nresources have been automatically extracted from\nmonolingual corpora.\nOne important contribution of this paper is the\nevaluation methodology. Previous work (Husain\net al., 2007; Gustavii, 2005) on preposition trans-\nlation measured only accuracy gains with respect\nto simple baselines, and focused on small sets of\nfrequent prepositions. Our methodology measures\nboth precision and recall over all prepositions oc-\ncurring in a small corpus of randomly chosen sen-\ntences. Once the evaluation corpus has been com-\npiled, the evaluation is fully automatic.\nThe results of this paper shows that all proposed\ntechniques improve over the baselines, including\na translation dictionary compiled from an aligned\ncorpus, and over a full-fledged statistical machine\ntranslation (SMT) system. The results also show\nthat the linguistic information in all three tech-\nniques is complementary, and a combination of\nthem obtains the best results overall.\nIn the next section of this paper we describe the\nRBMT system used, followed by a small review of\nrelated work on preposition translation. We then\npresent the linguistic knowledge used. Section 5\npresents the different baselines and techniques to\ntranslate prepositions. Our evaluation methodol-\nogy is proposed in Section 6, which is followed\nby Section 7 with the results. Finally, Section 8 is\ndevoted to conclusions and future work.\n2 Preposition translation in RBMT\nThe last decade has seen the raise of SMT tech-\nniques, and less research on rule-based tech-\nniques. Nevertheless, translation involving a less-\nresourced language poses serious difficulties for\nSMT, specially caused by the smaller size of paral-\nlel corpora. Morphologically-rich languages have\nalso been proved to be difficult for SMT, as shown\nin (Koehn and Monz, 2006), where SMT systems\nlag well behind commercial RBMT systems. At\npresent, domain-specific translation memories for\nBasque are no bigger than two or three million\nwords, much smaller than corpora used for other\nlanguages (the Europarl parallel corpus, for in-\nstance, has ca. 30 Mwords). Having limited dig-\nital resources, the rule-based approach is suitable\nfor the development of an MT system for Basque,\nalong with a focus on the enhancement of the core\nRBMT system with statistical and linguistic infor-mation.\nThe freely available open-source Matxin system\nis the first MT system available for Basque. It is a\nrule-based transfer system based on deep syntactic\nanalysis. which currently translates from Spanish\ninto Basque, and is currently being adapted to the\nEnglish-Basque pair. The current development sta-\ntus shows that it is useful for content assimilation,\nfor text understanding indeed, but that it is not yet\nsuitable for unrestricted use in text dissemination.\nMatxin has been evaluated and compared with\nthe state-of-the-art corpus-based Matrex MT sys-\ntem (Stroppa et al., 2006; Labaka, 2007) translat-\ning from Spanish to Basque. 
The evaluation was\nperformed using the edit-distance metric (Przy-\nbocki et al., 2006), based on the HTER (human-\ntargeted translation edit rate) presented in (Snover\net al., 2006), and the comparative results have\nshown that Matxin performs significantly better:\n43.60 vs. 57.97 in the parallel corpus where Ma-\ntrexwas trained, and 40.41 vs. 71.87 in an out-of-\ndomain corpus.\nThe preposition translation module of Matxin is\nlocated in the structural transfer phase and uses the\ninformation carried over from the syntactic analy-\nsis and lexical transfer modules. The system cur-\nrently uses Freeling analyzer for Spanish (Atserias\net al., 2006). The output of the preposition transla-\ntion module is later used in subsequent modules in\nthe structural transfer and generation phases. Note\nthat errors from previous modules affect the qual-\nity of the preposition translation phase, and this\nmakes the separate evaluation of preposition trans-\nlation a difficult task. We will get back to this prob-\nlem in Section 6.\n3 Related work\nKoehn (2003) envisages MT as a divide and con-\nquer task where improving NP/PP translation will\ncarry an improvement of the whole system. That\nstudy concluded that the main source of re-ranking\nerrors in NP/PPs translation was the inability to\ncorrectly predict the phrase start (preposition or\ndeterminer) without context; it can sometimes only\nbe resolved when the English verb is chosen and its\nsubcategorization is known.\nThere are two main approaches to disambiguate\nprepositions (Mamidi, 2004; Alam, 2004; Trujillo,\n1992): context based (used in transfer systems and\nmore suitable for languages that are structurally\ndifferent) and concept based (used in interlingua59\nsystems and more suitable for languages which are\nvery close). Most of the systems are context based\nand they use transfer rules given with semantic in-\nformation for the nouns which are head and com-\nplement of the preposition.\n(Miller, 2000) argued that statistical models for\npreposition selection must take into account not\nonly affinities between verbs and prepositions, but\naffinities between prepositions and nouns func-\ntioning as their complement as well.\n(Husain et al., 2007) describes an approach to\nautomatically select from two Indian languages the\nappropriate lexical correspondence of English sim-\nple prepositions. They use a set of rules that deal\nwith syntactic and lexical-semantic constraints on\nthe head and complement of the preposition. The\nresults showed relative improvements greater than\n20% in precision when compared to the default\nsense, but the experiments were conducted with\njust 6 high frequency prepositions. The algorithm\nwas tested on 100 sentences for each preposition\nThe input to the implemented system had been\nmanually checked and corrected to make sure that\nthere were no errors in the PP attachment given by\nthe parser and no mistakes in phrasal verb identifi-\ncation.\n(Naskar and Bandyopadhyayn, 2006) describes\nhow the prepositions are handled in an English-\nBengali MT system. As in Basque, there is no\nconcept of preposition in Bengali. English prepo-\nsitions are translated using inflections and/or post-\npositional words. The choice of the appropriate in-\nflection depends on the spelling of the complement\nof the preposition and the choice of the postposi-\ntional word depends on its semantic information,\nobtained from the WordNet . 
They don’t report any\nevaluation.\n(Gustavii, 2005) corrected the preposition trans-\nlations using a TBL classifier. She used aligned\nbilingual corpus data to infer her classifiers. Her\nevaluation is performed giving translation accu-\nracy for only the six most frequent prepositions in\nthe training corpus. She used a subset of 3 mil-\nlion tokens of the Swedish-English Europarl cor-\npus, 90% for training and 10% for testing. The\nrelative total improvement is of 12,45% (75,5% ac-\ncuracy for the baseline and 84.9% for her system).\nHowever the applicability of the strategy is limited\nto relatively similar languages, as the ones of that\nstudy (Swedish and English). In fact the system\navoids inducing rules where a preposition shouldFreq. Transitivity Postpositions\n4289.78 transitive ABS,ERG\n1534.24 intransitive ABS\n975.31 transitive ABS,ERG,INE\n476.70 intransitive ABS,INE\n166.68 transitive ABS,ERG,INS\nTable 1: Subcategorization for verb ikusi (to see ).\nbe changed to some other part-of-speech, or where\nit should be completely removed. So this approach\nis not useful to translate from Spanish to Basque.\n4 Acquisition of rich linguistic\ninformation from corpus\nBefore showing our specific techniques for prepo-\nsition translation, we briefly present the linguistic\nresources used, and how they were automatically\nacquired from Basque monolingual corpora.\n4.1 Verb subcategorization\nOne of the information sources used for this ex-\nperiment was an already existing subcategorization\ndictionary, initially built with the purpose of mak-\ning attachment decisions for a shallow parser on\nits way to full parsing (Atutxa, forthcoming). For\neach of the 2,571 verbs this dictionary lists infor-\nmation about possible postposition and grammat-\nical case combinations, transitivity, and estimated\nfrequency of each combination. Table 1 shows the\nmost frequent patterns in the dictionary entry for\nverb ikusi (to see ), including estimated frequency,\ntransitivity and postpositions (including grammat-\nical cases)2.\nThis dictionary was automatically built from\nraw corpora, comprising a compilation of 18\nmonths of news from Euskaldunon Egunkaria (a\nnewspaper written in Basque). The size of the cor-\npus is around 780,000 sentences, approximately 10\nMwords. From the 5,572 different verb lemmas\nin the corpus, the subcategorization dictionary was\ncompiled for the 2,751 verbs occurring at least 10\ntimes.\nThe corpus was parsed by a chunker (Aduriz et\nal., 2004) which includes both named-entity and\nmultiword recognition. The chunker uses a small\ngrammar to identify heads, postpositions and verb\nattachments of NPs and PPs. The grammar was\ndeveloped based on the fact that Basque is a head\n2ABS : absolutive case (can be subject or object depending\non transitivity). ERG : ergative (subject with transitive verbs).\nINE : inesive. INS : instrumental. DAT : dative. ALA : alla-\ntive.60\nfinal language and it includes a distance feature\nas well. Phrases were correctly attached to the\nverb with a precision of 0,78. Note that the aux-\niliary verb in Basque allows to unambiguously de-\ntermine the transitivity of the main verb. Given\nthe fact that Basque is a three-way pro-drop lan-\nguage (subject, object and indirect object can be\nelided), cases of elided arguments were recovered\nfrom the auxiliary verb in most of the cases. The\nonly exception were unergative verbs (e.g. lo egin\n– to sleep), which incorporate the missing argu-\nment. 
Statistical thresholds were used to reduce\nthe errors caused by unergative verbs and wrong\nverb attachment decisions.\n4.2 Verb/postposition/head-word dependency\ntriples\nVerbal subcategorization can be also modeled us-\ning attested (verb, dependency, head word) triples.\nThe postposition can be used as the type of the\ndependency. In contrast to the subcategorization\ndictionary, and given that the headword is also\nkept, these triples are bound to be more sparse.\nDue to sparseness, the statistical threshold used for\nsubcategorization acquisition proved to be ineffec-\ntive, and it was devised an alternative acquisition\nmethod.\nOnly dependencies from the preverbal position\nof each clause were extracted. This position is the\nfocus position of Basque, and the probability that\na phrase at this position is attached to the verb just\nbehind is quite high (up to 0.93 precision). Given\nthe fact that Basque is a free word order language,\nand provided it is used a large enough corpus, it\ncan be expected all arguments of a given verb to\nappear at the preverbal position in some attested\nsentence. This way, most of the potential argu-\nments of a verb would be attested in the preverbal\nposition, and therefore be captured as licit argu-\nments of the verb. Table 2 shows the top triples\nfor verb ikusi (to see ). Attested headwords in the\nexample include also elided pronouns and named-\nentities (of types PERSON, LOCATION, ORGA-\nNIZATION).\n5 Strategies for preposition translation\nIn this section we present both the dictionary\nand aligned corpora baselines, alongside our three\nmethods to translate prepositions: a context based\napproach using manually coded selection rules,\nand the use of subcategorization information or de-Freq. Postposition Head word\n70 ERG PRONOUN\n36 ABS PRONOUN\n30 ERG PERSON\n16 INE LOCATION\n13 ABS talde (group)\n11 ABS LOCATION\n9ABS ORGANIZATION\n9ABS partidu (match)\nTable 2: Dependency triples for verb ikusi .\npendency triples to disambiguate the prepositions\nheading verbal complements.\n5.1 Baselines\nThe baseline dictionary uses the preposition trans-\nlations in the Elhuyar dictionary (Elhuyar, 2000),\nthe most popular Spanish-Basque dictionary. The\nfirst postposition is taken as the preferred transla-\ntion.\nThe aligned corpora baseline was constructed\napplying Giza++ (Koehn et al., 2003) to the Con-\nsumer magazine parallel corpus (Alcazar, 2006).\nThis corpus contains 60,000 parallel sentences in\nSpanish (1.3 Mwords) and Basque (1 Mwords).\nThe Basque part of the corpus was morphologi-\ncally analyzed and segmented, i.e. word forms\nwere split into their lemma and postposition (e.g.:\netxetik (from the house) →etxe (the house) +\ntik(from)). After preprocessing the Basque sen-\ntences, we aligned the text automatically and ex-\ntracted for each Spanish preposition its most fre-\nquent corresponding Basque postposition. This\nalignment technique proved to be superior to word-\nbase alignment (Agirre et al., 2006). For a given\nSpanish preposition, the most frequent alignment\nwas chosen as its Basque translation.\nNote that these techniques do not tackle the\ntranslation of subject and object zero preposi-\ntions into Basque postpositions. 
In both baselines\nprepositions are always translated in the same way,\nirrespective of the context of occurrence of the\npreposition.\n5.2 Selection rules\nThe preposition dictionary used as baseline above\ncontains 351 Spanish prepositions (18 simple and\n333 compound) plus what we call zero preposi-\ntionfor subject and object, and the possible Basque\npostpositions (462 in total) into which they can be\ntranslated. We have manually coded 89 selection\nrules to select the appropriate equivalent for the\nambiguous prepositions.61\nPrep. Postpos. Rule\na INE ./[@nounPOS=’Zm’]\na DAT -\na ABS -\na ALA ./[@si=’cc’]\nTable 3: Rule for the Spanish preposition a.\nThe rules contain lexical, syntactic and semantic\ninformation about the parent of the PP, and about\nthe words in the PP (mainly the head).\nSelection rules select or discard possible post-\npositions for one preposition, and can thus return,\nin general, more than one postposition. In the case\nof multiple suggestions, another method would be\nused to choose among those returned by the selec-\ntion rules.\nFor example, given the sentence Los venden a\ntres euros (They sell them for three euros), the pos-\nsible translations for the preposition aare the cases\nINE, DAT, ABS and ALA, as we can see in Table\n33. The rule that selects INE is applied because\nthepart-of-speech of the head of the prepositional\nphrase is Zmand thus the selected translation will\nbe INE: Hiru eurotan saltzen dituzte .\n5.3 Verb subcategorization\nGiven a source sentence, the system accesses its\nsyntactic analysis (provided by Freeling Spanish\nparser) and retrieves the verbs and a list with their\ndependent NPs and PPs. We process each verb in\nturn. For each of the NPs and PPs, the dictionary\nis used to retrieve all possible translations of the\nprepositions, building a data structure that contains\nthe main verb and a list of potential translations for\neach of its dependent NPs and PPs. We also re-\ntrieve the translation of the main verb as produced\nby the lexical selection modules of Matxin . The al-\ngorithm then examines the subcategorization pat-\nterns of the translation of the verb, starting from\nthe most frequent one, until it finds a pattern that\nmatches the aforementioned data-structure.\nFor instance, given a source sentence like yo he\nvisto a tu madre (I have seen your mother), we\nretrieve the main verb ( visto - seen) and two de-\npendents: the subject NP ( yo– I) and the direct\nobject which in Spanish uses the preposition a(a\ntu madre – your mother). The possible transla-\ntions for the zero preposition are ABS, ERG and\nINE. The possible translations for aare ABS, DAT,\n3si: syntactic information. cccircumstancial complement.\nZm: tag for currency.ALA and INE. Given the translation of the verb,\nikusi as suggested by Matxin , we can now access\nits subcategorization patterns from the dictionary\nas described in Section 4.1. The most frequent pat-\ntern for ikusi is (transitive, ABS,ERG), as shown in\nTable 1. As this subcategorization frame matches\nthe example (ERG for yoand ABS for a tu madre )\nERG and ABS grammatical cases are selected as\ntranslation in Basque. This information would be\npassed onto the generation module of Matxin .\n5.4 Verb/postposition/head-word dependency\ntriples\nThe algorithm in this case is very similar to that\nused in the subcategorization method. 
For each verb in the source sentence, we generate a data structure with the translation of the verb and the list of dependent NPs and PPs, with the possible translation postpositions for each. Here we also add the translations into Basque of all heads of NPs and PPs.
Contrary to subcategorization, we treat each dependent NP and PP independently, one at a time, choosing the most frequent dependency triple which matches the translation of the verb, one of the translations of the postposition and the translation of the noun. In other words, we choose the postposition which occurs first in the triples for this verb and head-word combination.
We will illustrate this method with a different example. Given the source sentence El se conecta a Internet (He connects to the Internet), we focus on the translation of the preposition a. Matxin translates conecta as konektatu and Internet as Internet. Given the set of possible translations for a (ABS, DAT, ALA and INE), the list of triples containing konektatu and Internet is checked, and the ALA postposition is chosen as the most frequent one for those.
5.5 Combination of techniques
Given a set of single techniques for preposition translation, we can combine them in several ways. Most of the techniques above have partial recall (i.e. they are sometimes not able to choose a single best translation), due mainly to sparse data problems. We therefore decided to combine them in cascade, one after the other, disambiguating in each step the prepositions which had not been translated in the previous one. We tried several combinations, as will be shown in the following section, but the cascade always orders the techniques according to their precision in the test set.
Phrase Preposition Postposition
El mensaje - ABS
por correo por INS
a su amiga a DAT
Table 4: An example of the gold standard.
6 Evaluation framework
We ruled out the use of Bleu because, as pointed out in (Callison-Burch et al., 2006), it cannot always be used to identify improvements in specific aspects of the translation. In our case, it is impossible to establish how much the Bleu score should rise or drop to detect significant improvements in the translation of prepositions.
We designed the evaluation framework to automatically provide both precision and recall for all prepositions. To create the gold standard, we selected 300 sentences at random from a parallel corpus of newspapers and technical reports. As our evaluation had to isolate the preposition translation task, the output of previous modules in the MT engine for each sentence was examined, and if there was any mistake that affected the preposition translation (e.g.
in the source text analysis or in the verb transfer), we discarded the sentence. In the remaining 54 sentences there were 80 Spanish prepositions and 81 syntactic functions (subject, direct object and indirect object) to translate. Table 4 shows an example of the gold standard. For the sentence El mensaje ha sido enviado por correo a su amiga (The message has been sent by mail to her friend) we coded the correct postposition for the prepositions (including the zero preposition for the subject) of these three phrases: El mensaje (The message), por correo (by mail), a su amiga (to her friend).
7 Evaluation results
Table 5 shows, for each strategy, the number of correctly translated postpositions and the total number of postpositions translated (both correctly and incorrectly), alongside the overall number of cases in the test set. Precision, recall and F-score (actually, F1) are also included. Significance ranges for the F-score have been computed using bootstrap resampling at 95% confidence. Given the small size of the dataset, the significance ranges are quite large, over 5 percentage points in all cases.
The first set of rows shows the results for the baselines. We can see that the dictionary performs better than the translations coming from the aligned corpus, which was an unexpected finding. Both baselines return a translation in all cases, and have recall identical to the precision.
The second set of rows describes the performance of each of the techniques proposed in this paper. The manually coded selection rules method has the highest precision, but it scores second in recall and F-score. Subcategorization obtains the lowest precision of the three techniques, but the best recall and F-score. The precision of all of our techniques improves over the baselines, but, because they do not always provide a translation, recall and F-score are lower.
Regarding combination, the third set of rows presents several cascades of techniques. Combining single techniques with the first-sense baseline basically provides full coverage and improves recall, yielding non-significant improvements in F-score for rules and triples, and a statistically significant improvement for subcategorization. The pairwise combination of two techniques gets good precision, but not full coverage, and the F-score is similar to the 1st sense baseline. In the same set of results, the cascade of all three methods has very high precision and recall.
The last four rows report the results for pairwise and three-wise combinations of the techniques with the 1st sense baseline. The improvement is consistent in all combinations, and the best result is for the combination of all.
Given the small number of examples, only a few performance differences are statistically significant. Below we list the pairs of results (among those which have full coverage, i.e. those using 1st sense) that are statistically significant:
1st sense < a+b+c+1st
a+1st < a+b+c+1st
b+1st < a+b+c+1st
Regarding the comparison among techniques, and although the differences are not statistically significant, the combinations that use subcategorization are the ones performing best, and it is always the single technique which improves most in each combination class.
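The cascaded combination and the precision/recall bookkeeping behind these results are straightforward to express in code. A minimal sketch follows; the function names and the convention that an abstaining technique returns None are assumptions made for illustration.

```python
def cascade(techniques, phrase):
    """Apply the techniques in decreasing order of precision (e.g. rules,
    triples, subcategorization, 1st sense); the first one that commits to a
    postposition wins."""
    for technique in techniques:
        postposition = technique(phrase)
        if postposition is not None:
            return postposition
    return None

def precision_recall_f1(system_output, gold):
    """system_output maps each test case to a postposition or None (abstain);
    gold maps every one of the 161 cases to the reference postposition."""
    translated = {case: p for case, p in system_output.items() if p is not None}
    correct = sum(1 for case, p in translated.items() if gold.get(case) == p)
    precision = correct / len(translated) if translated else 0.0
    recall = correct / len(gold) if gold else 0.0
    f1 = 2 * precision * recall / (precision + recall) if precision + recall else 0.0
    return precision, recall, f1
```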
This is further enforced by the fact that a+1st and b+1st perform significantly worse than a+b+c+1st, while the difference between c+1st and a+b+c+1st is not significant.
Correct Translated Overall Precision Recall F-score Signif.
Baselines
Dictionary 109 161 161 67.70% 67.70% 67.70% ±6.26
Alignment Dict. 101 161 161 62.73% 62.73% 62.73% ±5.98
Techniques
Rules (a) 73 83 161 87.95% 45.34% 59.84% ±6.73
Triples (b) 54 62 161 87.10% 33.54% 48.43% ±7.40
Subcat (c) 84 107 161 78.50% 52.17% 62.69% ±6.78
Combinations
a+1st 110 161 161 68.32% 68.32% 68.32% ±6.64
b+1st 111 161 161 68.94% 68.94% 68.94% ±6.30
c+1st 116 161 161 72.05% 72.05% 72.05% ±5.42
a+b 87 98 161 88.78% 54.04% 67.18% ±6.09
b+c 89 112 161 79.46% 55.28% 65.20% ±6.41
a+c 99 124 161 79.84% 61.49% 69.47% ±6.11
a+b+c 103 125 161 82.40% 63.98% 72.03% ±5.48
a+b+1st 115 161 161 71.43% 71.43% 71.43% ±5.92
b+c+1st 118 161 161 73.29% 73.29% 73.29% ±5.91
a+c+1st 117 161 161 72.67% 72.67% 72.67% ±5.68
a+b+c+1st 121 161 161 75.16% 75.16% 75.16% ±5.70
Table 5: Overall results of baselines, single techniques and combinations.
Correct Translated Overall Precision Recall F-score Signif.
SMT word forms 60 161 161 37.27% 37.27% 37.27% ±6.84
SMT segmented 82 149 161 55.03% 50.93% 52.90% ±6.35
Table 6: Results for SMT systems trained with word forms and segmented words.
Table 6 shows the results obtained by two state-of-the-art full-fledged SMT systems: one of them was trained using Basque word forms for alignment, and the other using Basque segmented words (see Section 5.1). Whole sentences were translated and then the postpositions related to the translated phrases were compared with the gold standard. Their results are clearly lower than those obtained with each of the three simple strategies or any of their combinations.
8 Conclusions and future work
In this work, three techniques that use rich linguistic information to translate grammatical cases and prepositions heading verbal complements have been implemented and successfully evaluated in the context of an RBMT system for an agglutinative language with scarce resources. They are based on verb/postposition/head-word dependency triples, verb subcategorization and manually coded selection rules based on lexical, syntactic and semantic information. The first two resources have been automatically extracted from a monolingual corpus, which is obviously easier to collect than a parallel corpus. As translation involving a less-resourced language poses serious difficulties for pure SMT, we think these two techniques based on monolingual corpus statistics open new ways to integrate rule-based and statistics-based techniques in MT for languages with fewer digital resources.
A new evaluation methodology has been designed. It allows us to automatically measure precision and recall against a gold standard. Even if our test corpus is not very large, it is comparable with those used in related work, and the F-scores show that some of the improvements are statistically significant.
The proposed techniques improve precision over the baselines, including a translation dictionary compiled from an aligned corpus, and over a full-fledged SMT system.
The results also show that the linguistic information in all three techniques is complementary, and a combination of them obtains the best results overall.
In the near future we plan to collect larger linguistic resources to obtain better information on verb subcategorization and verb/postposition/head-word triples, so we can improve our present results. We also plan to enlarge the gold standard and to evaluate the relevance of our techniques for overall translation quality, using the edit-distance metric (Przybocki et al., 2006). We would also like to use the output of SMT systems in the combined system.
Acknowledgments
This research was supported in part by the Spanish Ministry of Education and Science (OpenMT: Open Source Machine Translation using hybrid methods, TIN2006-15307-C03-01; RICOTERM-3, HUM2007-65966.CO2-02) and the Regional Branch of the Basque Government (AnHITZ 2006: Language Technologies for Multilingual Interaction in Intelligent Environments, IE06-185). Gorka Labaka is supported by a PhD grant from the Basque Government (grant code BFI05.326). The Consumer corpus has been kindly supplied by Asier Alcázar from the University of Missouri-Columbia and by Eroski Fundazioa.
References
E. Agirre, A. Díaz de Ilarraza, G. Labaka and K. Sarasola. 2006. Uso de información morfológica en el alineamiento Español-Euskara. XXII Congreso de la SEPLN.
Y. S. Alam. 2004. Decision Trees for Sense Disambiguation of Prepositions: Case of Over. HLT-NAACL 2004: Workshop on Computational Lexical Semantics. Boston, Massachusetts, USA. ACL.
A. Alcázar. 2006. Towards linguistically searchable text. Proceedings of BIDE 2005. Bilbao.
I. Alegria, A. Díaz de Ilarraza, G. Labaka, M. Lersundi, A. Mayor and K. Sarasola. 2007. Transfer-based MT from Spanish into Basque: reusability, standardization and open source. LNCS 4394. 374-384. CICLing 2007.
I. Aduriz, M. J. Aranzabe, J. M. Arriola, A. Díaz de Ilarraza, K. Gojenola, M. Oronoz and L. Uría. 2004. A Cascaded Syntactic Analyser for Basque. In Gelbukh, A. (ed.) Computational Linguistics and Intelligent Text Processing. Springer LNCS 2945.
J. Atserias, B. Casas, E. Comelles, M. González, L. Padró and M. Padró. 2006. FreeLing 1.3: Syntactic and semantic services in an open-source NLP library. Proceedings of the 5th LREC (2006). Genova, Italia.
C. Callison-Burch, M. Osborne and P. Koehn. 2006. Re-evaluating the role of BLEU in Machine Translation Research. Proceedings of EACL-2006.
Elhuyar. 2000. Elhuyar Hiztegia. Published by Elhuyar Hizkuntz Zerbitzuak.
E. Gustavii. 2005. Target language preposition selection - an experiment with transformation based learning and aligned bilingual data. Proceedings of the 10th EAMT conference. May 2005. Budapest.
S. Husain, D. M. Sharma and M. Reddy. 2007. Simple preposition correspondence: a problem in English to Indian language machine translation. Proceedings of the 4th ACL-SIGSEM Workshop on Prepositions. Prague, Czech Republic. 28 June 2007. pp. 51-58.
N. Japkowicz and J. Wiebe. 1991. A System for Translating Locative Prepositions from English into French. Proceedings of the Meeting of the ACL. 153-160.
P. Koehn. 2003. Noun Phrase Translation. PhD Thesis. University of Southern California.
P. Koehn, F. Och, and D. Marcu. 2003. Statistical phrase based translation. In Proceedings of HLT-NAACL 2003, pp. 48-54, Edmonton, Canada.
P. Koehn and C. Monz. 2006.
Manual and Automatic\nEvaluation of Machine Translation between Euro-\npean Languages . Proceedings of the Workshop on\nSMT. ACL. June 2006. New York City. pp. 102–121.\nG. Labaka, N. Stroppa, A. Way and K. Sarasola.\n2007. Comparing Rule-Based and Data-Driven Ap-\nproaches to Spanish-to-Basque MT . Proceedings of\nthe MT-Summit XI. Copenhagen.\nR. Mamidi. 2004. Disambiguating Prepositions for\nMachine Translation using Lexical Semantic Re-\nsources . Proceedings of the ’National Seminar on\nTheoretical and Applied Aspects of Lexical Seman-\ntics’ organized by Centre of Advanced Study in Lin-\nguistics. Hyderabad.\nA. Mayor. 2007. Matxin: Erregeletan oinarritutako\nitzulpen automatikoko sistema baten eraikuntza es-\ntaldura handiko baliabide linguistikoak berrerabiliz .\nPhD. Thesis. (In Basque). University of the Basque\nCountry.\nK. Miller. 2000. The lexical choice of prepositions in\nmachine translation . PhD. Thesis. Upenn.\nS.K. Naskar and S. Bandyopadhyayn. 2006. Han-\ndling of Prepositions in English to Bengali Machine\nTranslation . Proceedings of the EACL workshop on\nPrepositions, Hyderabad.\nM. Przybocki, G. Sanders and A. Le. 2006. Edit dis-\ntance: a metric for Machine Translation evaluation .\nProceedings of the LREC-2006. Genoa, Italy.\nM. Snover, B Dorr, R. Schwartz, L. Micciulla and J.\nMakhoul. 2006. A study of translation edit rate with\ntargeted human annotation . Proceedings of the As-\nsociation for Machine Translation in the Americas..\nN. Stroppa, D. Groves, A. Way and K. Sarasola. 2006.\nExample-Based Machine Translation of the Basque\nLanguage . Proceedings of the 7th conference of the\nAMTA. pp.232–241. Boston.\nA. Trujillo. 1992. Locations in the Machine Trans-\nlation of Prepositional Phrases . Proceedings of the\nFourth International Conference on Theoretical and\nMethodological Issues in MT of Natural Languages.\nMontreal, Canada.65",
"main_paper_content": null
}
|
{
"decision": "Unknown",
"reviews": []
}
| 0 | 0 |
[] |
[] |
{
"id": "kYiCmXQIjXp",
"year": null,
"venue": "EAMT 2015",
"pdf_link": "https://aclanthology.org/W15-4934.pdf",
"forum_link": "https://openreview.net/forum?id=kYiCmXQIjXp",
"arxiv_id": null,
"doi": null
}
|
{
"title": "HandyCAT - An Open-Source Platform for CAT Tool Research",
"authors": [
"Chris Hokamp",
"Qun Liu"
],
"abstract": "Christopher Hokamp, Qun Liu. Proceedings of the 18th Annual Conference of the European Association for Machine Translation. 2015.",
"keywords": [],
"raw_extracted_content": "HandyCAT\nChris Hokamp and Qun Liu\n{chokamp, qliu}@computing.dcu.ie\nCNGL/ADAPT/Dublin City University\nProject Website: http://handycat.github.io\nGithub: https://github.com/chrishokamp/handycat\nProject Description\nWe present HandyCAT, a new open-source Computer Aided Translation (CAT) tool, designed \nspecifically for conducting research on Computer-Aided Translation. The User Interface (UI) it-\nself, as well as the backend services, such as the Translation Memory engine, the MT system in-\nterface, the concordancer, and the glossary engine, are completely open-source. \nThe HandyCAT UI is implemented as a web application which runs in any modern browser. \nHandyCAT uses the XLIFF standard, and supports the core elements from both the XLIFF 1.2 \nand XLIFF 2.0 standards. GraphTM, the graph-based translation memory component, supports the \nTMX format, as well as several text input formats.\nWe introduce a factorization of the core interface components which allows a CAT tool to be \nviewed as a collection of standalone components connected by consistent APIs, facilitating re-\nsearch on new user interactions such as multi-modal input and interface control, and on new com-\nponents created specifically for the post-editing task. Because the tool is designed primarily for \nCAT research, we have also designed a logging API which allows component creators to design \nlogging customizable logging behavior for their components.\nAlthough several open-source CAT tools have already been developed, no web-based tool \nprovides a full CAT ecosystem as an open-source platform, including all user interface compon-\nents and data services. Because the backend data services are prerequisites for a modern CAT in-\nterface, it can be difficult to design and conduct new user studies using existing open-source inter-\nfaces.\nHandyCAT is built around the concepts of containers and interactive areas. Any CAT tool has \nsome standard components which can be presented to users in various ways. Both the visual \npresentation and the interaction design will have an impact on the translator's experience. There-\nfore, HandyCAT is designed to allow researchers to create parameterized components which are \neasy to test and modify.\nSeveral translation services provide free and/or paid APIs to proprietary services such as transla-\ntion memories, machine translation, and glossaries. Connecting these APIs with HandyCAT is \nstraightforward, allowing users and researchers to quickly integrate new services, or existing ser-\nvices which may have designed for other purposes.\nAll components of HandyCAT are completely open-source, meaning that the tool can easily be \nextended and improved. Because modern CAT tools are complex applications, developing a \nbaseline tool with standard features requires significant effort. By using HandyCAT, researchers \ncan implement only the components relevant to their work, while relying on the platform to \nprovide the core CAT tool functionality, and to provide the statistics and logging necessary for \nanalysis. 216",
"main_paper_content": null
}
|
{
"decision": "Unknown",
"reviews": []
}
| 0 | 0 |
[] |
[] |
{
"id": "Bd7arW-0jgn",
"year": null,
"venue": "EAMT 2015",
"pdf_link": "https://aclanthology.org/W15-4911.pdf",
"forum_link": "https://openreview.net/forum?id=Bd7arW-0jgn",
"arxiv_id": null,
"doi": null
}
|
{
"title": "Benchmarking SMT Performance for Farsi Using the TEP++ Corpus",
"authors": [
"Peyman Passban",
"Andy Way",
"Qun Liu"
],
"abstract": "Peyman Passban, Andy Way, Qun Liu. Proceedings of the 18th Annual Conference of the European Association for Machine Translation. 2015.",
"keywords": [],
"raw_extracted_content": "Benchmarking SMT Performance for Farsi Using the TEP++ Corpus\nPeyman Passban, Andy Way, Qun Liu\nADAPT Centre\nSchool of Computing\nDublin City University\nDublin, Ireland\n{ppassban,away,qliu}@computing.dcu.ie\nAbstract\nStatistical machine translation (SMT) suf-\nfers from various problems which are ex-\nacerbated where training data is in short\nsupply. In this paper we address the data\nsparsity problem in the Farsi (Persian) lan-\nguage and introduce a new parallel cor-\npus, TEP++. Compared to previous re-\nsults the new dataset is more efficient for\nFarsi SMT engines and yields better out-\nput. In our experiments using TEP++ as\nbilingual training data and BLEU as a met-\nric, we achieved improvements of +11.17\n(60%) and +7.76 (63.92%) in the Farsi–\nEnglish and English–Farsi directions, re-\nspectively. Furthermore we describe an\nengine (SF2FF) to translate between for-\nmal and informal Farsi which in terms of\nsyntax and terminology can be seen as\ndifferent languages. The SF2FF engine\nalso works as an intelligent normalizer for\nFarsi texts. To demonstrate its use, SF2FF\nwas used to clean the IWSLT–2013 dataset\nto produce normalized data, which gave\nimprovements in translation quality over\nFBK’s Farsi engine when used as training\ndata.\n1 Introduction\nIn SMT (Koehn et al., 2003), where the bilingual\nknowledge comes from parallel corpora, having\nlarge datasets is crucial. This issue is compounded\nwhen working with low-resource languages, such\nas Farsi. The poor performance of existing systems\n© 2015 The authors. This article is licensed under a Creative\nCommons 3.0 licence, no derivative works, attribution, CC-\nBY-ND.for the Farsi–English pair confirms the necessity\nof developing a large and representative dataset.\nClearly all the existing problems do not originate\nsolely from the data, but not having a reliable train-\ning set prevents us from investigating Farsi SMT to\nthe best extent possible.\nGenerating datasets is a time-consuming and\nexpensive process, especially for SMT, in which\nmassive amount of aligned bilingual sentences are\nrequired. Accordingly instead of starting from\nscratch we enriched and refined the existing corpus\nTEP (Pilevar et al., 2011).1Despite having a larger\nalternative (the Mizan2corpus), TEP was selected\nas the basis of our work that we clarify further in\nSection 3 and 4.1. TEP is a collection offilm subti-\ntles in spoken/informal Farsi (SF) that have distinct\nstructures from formal/journalistic Farsi (FF). Ac-\ncordingly, training an MT engine using this type\nof data might provide unsatisfactory results when\nworking with FF which is the dominant language\nof Farsi texts. For this reason TEP wasfirstly re-\nfined both manually and automatically, which Sec-\ntion 3 explains in detail. TEP++ is the refined ver-\nsion of TEP that is much closer to FF and con-\nsiderably cleaner. Using both TEP and TEP++ we\ntrained several engines for bidirectional translation\nof the Farsi–English pair, as well as an engine to\ntranslate between FF and SF (SF2FF). The next\nsections explain the challenges of dealing with SF\nand describe the data preparation process in detail.\nThe structure of paper is as follows. Section 2 dis-\ncusses background of MT, addressing existing sys-\ntems (§2.1) and available corpora (§2.2). 
Section\n3 explains TEP++ and our development process.\nExperimental results are reported in Section 4 in-\n1TEP: Tehran English-Persian parallel corpus\nhttp://opus.lingfil.uu.se/TEP.php\n2http://www.dadegan.ir/catalog/mizan82\ncluding a comparison of the various MT systems\nand a study of the impact of SF2FF in Farsi SMT.\nFinally the last section concludes the paper along\nwith some avenues for future works.\n2 Background\nBuilding an SMT engine for Farsi is difficult due\nto its rich morphology and inconsistent orthogra-\nphy (Rasooli et al., 2013). Not only these chal-\nlenges but also the complex syntax and several ex-\nceptional rules in the grammar make the process\nconsiderably complex. The lack of data is another\nobstacle in thisfield. Nevertheless there have been\nsome previous attempts at Farsi SMT. In this sec-\ntion we briefly review previous works encompass-\ning systems in thefirst section, as well as available\nresources in the second section.\n2.1 Farsi MT Systems\nThere are a limited number of SMT systems for\nFarsi. Some instances translate in one direction\nand some others are working bidirectionally. The\nPars translator3is a commercial rule-based engine\nfor English–Farsi translation. It contains 1.5 mil-\nlion words in its database and includes specific dic-\ntionaries for 33 differentfields of science. Another\nEnglish–Farsi MT system was developed by the\nIran Supreme Council of Information.4Postchi5is\na bidirectional system listed among the EuroMa-\ntrix6systems for the Farsi language. These sys-\ntems are not terribly robust or precise examples of\nFarsi SMT and are usually the by-products of re-\nsearch or commercial projects. The only system\nthat has officially been reported for the purpose of\nFarsi SMT is FBK’s system (Bertoldi et al., 2013).\nIt was tested on a publicly available dataset and\nfrom this viewpoint is the most important system\nfor our purposes.7\n2.2 Parallel Corpora for Farsi SMT\nThefirst attempts at generating Farsi–English par-\nallel corpora are documented in the Shiraz project\n(Zajac et al., 2000). The authors constructed a cor-\npus of 3000 parallel sentences, which were trans-\nlated manually from monolingual online Farsi doc-\n3http://mabnasoft.com/english/parstrans/index.htm\n4http://www.machinetranslation.ir/\n5http://www.postchi.com/\n6http://matrix.statmt.org/resources/pair?l1=fa&l2=en#pair\n7However other Farsi MT engines like the Shiraz system\n(Amtrup et al., 2000) or that of Mohaghegh (2012) use their\nown in-house datasets. As we are not able to replicate them\nwe do not include them in our comparisons.uments at New Mexico State University. More\nrecently Qasemizadeh et al. (2007) participated\nin the Farsi part of MULTEXT-EAST8project\n(Erjavec, 2010) and developed about 6000 sen-\ntences. There is also a corpus available in ELRA9\nconsisting of about 3,500,000 English and Farsi\nwords aligned at sentence level (about 100,000\nsentences). This is a mixed domain dataset in-\ncluding a variety of text types such as art, law,\nculture, literature, poetry, proverbs, religion etc.\nPEN (Parallel English–Persian News corpus) is\nanother small corpus (Farajian, 2011) generated\nsemi-automatically. It includes almost 30,000 sen-\ntences. Farajian developed a method tofind sim-\nilar sentence pairs and for quality assurance used\nGoogle Translate.10All these corpora are rela-\ntively small-scale datasets. However, there are\ntwo other large-scale collections, namely Mizan\nand TEP, that are more interesting for our pur-\nposes. 
Mizan is a bilingual Farsi–English cor-\npus of more than one million aligned sentences,\nwhich was developed by the Dadegan research\ngroup.11Sentences are gathered from classical lit-\nerature with an average length of 15 words each.\nDespite comprising a large amount of sentences,\nthe results obtained from using Mizan as a train-\ning set are less satisfactory. We will discuss the\nstructure of Mizan and analyse some translation\nerrors that ensue in the next section. Thefinal\ncorpus that is the basis of our work is TEP (Pil-\nevar et al., 2011), which consists of more than\n600,000 aligned Farsi–English sentences gathered\nfromfilm subtitles. Experimental results show that\nTEP works better than Mizan as a training corpus\nfor SMT.\n3 TEP++\nTEP++ is a refined version of TEP. TEP is a quite\nnoisy corpus and it triggers several failures in the\nFarsi SMT pipeline. Besides the problem of noise\nbecause it was gathered fromfilm subtitles, it is in\nSF. Accordingly it would be inappropriate to use\nan SMT system trained on SF data for the trans-\nlation of FF. Unfortunately discrepancies between\nformal and informal Farsi structures are quite con-\n8The project started in 1998 and the last version was released\nin 2010 (http://nl.ijs.si/ME)\n9http://catalog.elra.info/product info.php?products id=1111\n10https://translate.google.com/\n11A research group supported by the Iran Supreme Council of\nInformation to provide data resources for Farsi language and\nspeech processing (http://www.dadegan.ir)\n83\nsiderable. In what follows we show some of these\ncases and try to illustrate the main challenges with\nrefinements to TEP.\nIn terms of orthography, Farsi is one of the hard-\nest languages. It is written with the Perso-Arabic\nscript. Unlike Arabic, some Persian words have\ninter-word zero-width non-joiner spaces (or semi-\nspaces) (Rasooli et al., 2013). Usually semi-spaces\nare incorrectly written as regular space charac-\nter (U+0020 and U+200c are the Unicode for\nspace and semi-space, respectively) that can easily\nchange the meaning of the constituent and even the\nsyntax of the whole sentence. As an example the\nright form of the word greedy is ��������������≡/¯astin-\nder¯az/12with a semi-space character (betweenn\nsound anddsound). If it is written with a space\nas in��������������≡/¯astin e der ¯az/, it means long\nsleeve, a completely different meaning which will\nmislead the SMT engine. Another problem is the\npresence of multiple writing forms for some char-\nacters. For the character �≡/y/ all forms of �,\n��and��are common. This inconsistent writing\nstyle exists similarly for several other characters.\nThe diacritic problem is another issue. Words can\nappear both with and without diacritics, like��������\nor�������≡/axiran/ (recently). Clearly, these prob-\nlems should be resolved in preprocessing.\nIn addition, SF has its own specific problems,\none being lexical variation. Some words oc-\ncur in SF texts that do not have any counter-\npart in FF e.g. �����≡/eyval/ (good job). Syn-\ntax in SF is also a problem. Farsi is an SOV\nlanguage but in SF, versions of sentences with\nSVO and VOS order are both common. For ex-\nample,���������� ��������≡/æli n ¯ame ro bex-\noun/ (Ali, read the letter) is a standard SOV sen-\ntence, but both VOS ( ��� �� �������������) and SVO\n(�� ����������������) forms are very normal; even\nin SF these look more natural than the SOV vari-\nant. 
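The orthographic issues listed above (competing character variants, optional diacritics, inconsistent spacing) are the part of the cleaning that lends itself to simple rules. A minimal sketch is given below; it covers only these mechanical cases, and restoring semi-spaces (ZWNJ) or rewriting spoken-Farsi word forms requires lexical knowledge and is not shown.

```python
import re

DIACRITICS = re.compile("[\u064B-\u0652]")  # fathatan ... sukun

def normalize_orthography(text):
    """Unify character variants, drop optional diacritics and collapse
    whitespace; purely illustrative of the mechanical part of the cleaning."""
    text = text.replace("\u064A", "\u06CC")  # Arabic yeh   -> Farsi yeh
    text = text.replace("\u0649", "\u06CC")  # alef maksura -> Farsi yeh
    text = text.replace("\u0643", "\u06A9")  # Arabic kaf   -> Farsi keheh
    text = DIACRITICS.sub("", text)          # remove short-vowel diacritics
    return re.sub(r"\s+", " ", text).strip()
```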
In TEP++, we tried to correct the order and\nsyntax of the sentences as much as possible which\nwas very challenging. Not only the order but also\nthe internal constituents of the sentences had to\nbe changed. For example the verb��������≡/bex-\noun/ (read) in SF is���������≡/bex ¯an/ in FF or ����\n12We used Wikipedia phonetic chart to show the spellings of\nFarsi words and - character to show the semi-space.\nhttp://en.wikipedia.org/wiki/Persian phonology≡/¯amad/ (came) is the formal version of ����≡\n/oumad/. These types of changes do not just hap-\npen to verbs. Other cases are even worse, e.g\nthe right form of ”for them” in FF is ������ �����\n≡/bar ¯aye¯anh¯a/ which is written as���������≡\n/bar¯aˇsoun/ in SF (two FF words are packed in a\nsingle SF word). SF suffers from word ambigu-\nity problem as well. A word like ���≡/to/ (you)\nwhich in formal texts is translated only into ”you”\n(3rd-singular person), can mean both ”you” (1st\nand3rd-singular person) and ”inside” in SF.\nProblems with SF are not limited to those dis-\ncussed. However as a solution we cleaned the\nTEP data both automatically and manually. As\na mandatory prerequisite of the refinement phase\nwe applied knowledge of Farsi linguistics and\ndeveloped a rule-based system for some of the\ncases. The rule-based system includes 17 general\nrules/templates. For the remainder a team of 20\nnative speaker of Farsi, manually edited the cor-\npus. The result is TEP++ with 578,251 aligned\nsentences, with an average length of 7 for the En-\nglish side and 9 for Farsi. It includes 4,963,693 En-\nglish tokens (62,185 unique tokens) and 5,065,434\nFarsi tokens (122,432 unique tokens). TEP++ cov-\ners 94% of the TEP and we neglected the remain-\ning 6% because of the bad quality of the original\nTEP data.\n4 Experiments\nThis section is divided into 3 subsections. The\nfirst part reports the BLEU scores for three main\nFarsi corpora, Mizan, TEP and TEP++. We also\ndiscuss the problems with Mizan in Section 4.1\nand perform error analysis on the output transla-\ntions, where it is used as the SMT training data. In\nthe second part using TEP and TEP++ we carry\nout monolingual translation between SF and FF\n(SF2FF) and discuss some use-cases for this type\nof translation task. Finally in the last part we show\nhow SF2FF boosts the SMT quality for Farsi and\nreport our results on the IWSLT–2013 dataset pro-\nviding a comparison with FBK’s system.\n4.1 Mizan, TEP and TEP++\nTo test the performance of our engines, they were\ntrained using Mizan, TEP and TEP++. We used\nMoses (Koehn et al., 2007) with the default con-\nfiguration for phrase-based translation. For the\nlanguage modeling, SRILM (Stolcke and others,84\n2002) was used. The evaluation measure is BLEU\n(Papineni et al., 2002) and to tune the models, we\napplied MERT (Och, 2003). Table 1 summarizes\nour experimental results for the Mizan dataset. We\nevaluated with two types of language models, 3-\ngram (LM3) and 5-gram (LM5). Numbers for both\nbefore and after tuning are reported. For all ex-\nperiments training, tuning and test sets were se-\nlected randomly from the main corpus. The size\nof the test set is 1,000 and the tuning set is 2000\nsentences. Training set sizes are reported in ta-\nbles. 
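The random selection of held-out sets mentioned above can be sketched as follows; the sizes follow the setup just described, and the Farsi–English pairing of each line is assumed to be kept intact.

```python
import random

def split_corpus(sentence_pairs, n_test=1000, n_tune=2000, seed=0):
    """Randomly carve out test and tuning sets; the rest is used for training."""
    pairs = list(sentence_pairs)
    random.Random(seed).shuffle(pairs)
    test = pairs[:n_test]
    tune = pairs[n_test:n_test + n_tune]
    train = pairs[n_test + n_tune:]
    return train, tune, test
```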
For all experiments, BLEU scores for Google Translate are reported as a baseline.
EN–FA FA–EN
Before After Before After
LM3 8.24 10.47 11.70 13.35
LM5 8.54 10.53 11.97 13.14
Google Translate 2.32 4.21
Training set: 1,016,758 parallel sentences
Corpus: Mizan
Table 1: Experimental Results for Mizan
From a system that is trained on almost 1M sentences, we might expect better performance. To try to gain some insight into the nature of the problem, we randomly selected 100 Farsi translations and compared them with the reference sentences. Based on the statistics of the error analysis for this subset of 100 translations, three main reasons for the failures present themselves:
1. In more than half of the cases (59%) the decoder does not find the correct translation of a given word. Wrong lexical choice is the most common problem for the translation.
2. Due to the rich morphology of Farsi, 41% of the words are translated with slight errors in their forms. The problem, therefore, is wrong word formation on the target side (Farsi); for example, verbs are translated into the wrong tense or with the wrong affixes.
3. 33% of the constituents have reordering problems. Sometimes the translations are correct but are not in their right positions.
Such deficiencies do not only apply to Mizan; they are common in Farsi SMT (and even in SMT in general), no matter what the training data is. Studying the results of translation error analysis, Farzi and Faili (2015) confirm our findings.
Another issue which should be considered about Farsi SMT evaluation is that Farsi is a free word-order language. When compiling the results of our experiments, we only had a single reference available against which the output from our various systems could be compared. Computing automatic evaluation scores when translating into a free word-order language in the single-reference scenario is somewhat arbitrary. We would expect a manual evaluation on a subset of sentences to confirm that the output translations are somewhat better than the automatic evaluation scores suggest.
Similar to Mizan, we repeated the same experiments for TEP and TEP++. Table 2 and Table 3 show the results of these related experiments. Two engines were trained using the TEP and TEP++ corpora. In order to provide a comparison between the two corpora used, tuning and test sets were selected in a way which mirrors each other in both datasets, i.e. TEP sentences and their counterparts in TEP++.
EN–FA FA–EN
Before After Before After
LM3 10.12 12.14 17.29 17.60
LM5 10.69 11.88 18.05 18.57
Google Translate 1.14 6.60
Training set: 609,085 parallel sentences
Corpus: TEP
Table 2: Experimental Results for TEP
EN–FA FA–EN
Before After Before After
LM3 15.93 19.37 27.29 29.21
LM5 15.93 19.60 28.25 29.74
Google Translate 3.27 7.35
Training set: 575,251 parallel sentences
Corpus: TEP++
Table 3: Experimental Results for TEP++
As can be seen, in the FA–EN direction we reached a +11.17 (60%) improvement, and in the EN–FA direction the improvement is +7.76 (63.92%).13
13 The best performance using TEP for FA–EN is 18.57, the best for TEP++ is 29.74, and the improvement in the FA–EN direction is 60%.
Another achievement is that even when using less data, the TEP++ engine performs better. TEP++ includes 94% of TEP (§3), so even with about 33K fewer sentence pairs in the training set we obtained better results.
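The significance claims in the next paragraph rest on paired bootstrap resampling. A generic sketch of that test is given below; `metric` stands for any corpus-level scorer (e.g. a BLEU implementation), and the 1000-sample setting mirrors the one reported in the footnote that follows.

```python
import random

def paired_bootstrap(sys_a, sys_b, refs, metric, n_samples=1000, seed=0):
    """Paired bootstrap resampling (Koehn, 2004): resample the test set with
    replacement and count how often system A outscores system B on the same
    sample; a win rate of at least 0.95 is read as p < 0.05."""
    rng = random.Random(seed)
    indices = list(range(len(refs)))
    wins = 0
    for _ in range(n_samples):
        sample = [rng.choice(indices) for _ in indices]
        score_a = metric([sys_a[i] for i in sample], [refs[i] for i in sample])
        score_b = metric([sys_b[i] for i in sample], [refs[i] for i in sample])
        if score_a > score_b:
            wins += 1
    return wins / n_samples
```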
The BLEU scores of TEP++ are still significantly better than the baseline (TEP), according to paired bootstrap resampling (Koehn, 2004).14
This improvement is not odd, and we were expecting such numbers. As studied in Rasooli et al. (2013) and Bertoldi et al. (2013), preprocessing and normalization have a considerable effect in Farsi SMT, as we explained in §3. Results from Google Translate are another confirmation of this issue. SF (the language of TEP) is an almost unknown language for Google Translate, hence translation from/into this language will provide inappropriate results. Results are slightly better for TEP++ because the sentences are cleaner and more formal, and thus closer to the language Google Translate handles. Finally, it should be mentioned that Moses generally works much better than Google Translate for Farsi MT, and the quality of Google Translate significantly decreases for long sentences.
4.2 SF2FF Results
Refining TEP to produce TEP++ was, as explained in §3, very laborious. The by-product was a pair of corpora, one in SF and one in FF. We trained a phrase-based translation engine using these corpora in order to translate from SF into FF. The benefit of having such an engine is to produce the cleaned FF for free, as the TEP refinement was a costly process. Moreover, knowledge of Farsi linguistics was a prerequisite. This engine provides the same functionality with less cost and without applying linguistic knowledge. The trained engine works like a black box and carries out all the refinements. Similar to ours, Fancellu et al. (2014) have also worked on monolingual SMT between Brazilian and European Portuguese.
In the SF–FF direction we obtained 88.94 BLEU points, and in the opposite direction the system works with a BLEU score of 81.62. This process – more than an MT task – is a transformation in which words are converted into normalized/correct forms and the order of constituents is changed in some cases. Accordingly, BLEU numbers are high.
14 We used the ARK research group's code for statistical significance testing, with 1000 samples and the 0.05 parameter: http://www.ark.cs.cmu.edu/MT/
The SF2FF engine helps us to establish a fully automated pipeline to make a large-scale bilingual Farsi corpus. Any type of data can be taken from the internet, such as film subtitles or tweets, which are usually noisy with informal writing conventions. SF2FF can normalize them, and the normalized version is good enough to be aligned with the English side (or any other language).
To show the application of SF2FF and its performance, it was fed a test set from TEP (the same dataset we used in the TEP experiment). The data was normalized by SF2FF. Normalization helps to provide a more precise translation. The pipeline is illustrated in Figure 1. The selected sentences are in SF and the BLEU score for their translation by the TEP engine is 18.57. If SF2FF translates them into FF they become cleaner and much closer to the language of TEP++, and consequently the results of SMT would be better. Sentences in the two sets are counterparts of each other. The TEP++ engine obtains a BLEU score of 29.72 on the formal/clean version of the same sentences. If the noisy data is cleaned by SF2FF and is then translated by the TEP++ engine, the BLEU score rises to 25.36, i.e. SF2FF provides a +6.79-point improvement.
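The pipeline just described — normalize the noisy SF input with SF2FF, then translate with the engine trained on TEP++ — is simply two decoders in sequence. A sketch follows, with both engines assumed to be callables wrapping the trained translation models.

```python
def translate_noisy_farsi(sentences, sf2ff_decode, fa2en_decode):
    """Two-step cascade of Figure 1: spoken/informal Farsi is first mapped to
    formal Farsi, and the normalized text is then translated into English."""
    normalized = [sf2ff_decode(s) for s in sentences]
    return [fa2en_decode(s) for s in normalized]
```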
The\nBLEU score obtained the normalized data is sig-\nnificantly better and is 36% higher than that of the\noriginal data which demonstrates the efficiency of\nSF2FF.\nFigure 1: SF normalization by SF2FF\n4.3 Comparison of SMT Performance\nThe only system that has been tested on a standard\ndataset and published is FBK’s Farsi translation\nengine. It was reported in Bertoldi et al. (2013)\nand tested on the IWSLT–2013 dataset. The data\nhas been made available by (Cettolo et al., 2012)\nand includes TED talk translations. In their paper,86\nthe FBK team explained that Farsi online data (in-\ncluding the IWSLT–2013 dataset) is very noisy and\nusing requires some preprocessing, so they tried\nto normalize the data. Therefore, for the transla-\ntion task, they used a normalized version of the\nIWSLT–2013 dataset along with an in-house cor-\npus for language modeling. They also mentioned\nthat using existing Farsi corpora such as TEP does\nnot enhance translation quality. To compare our\nengines with FBK’s system wefirstly normalized\nthe same dataset with SF2FF engine, and to make\nthe language model we used the TEP++ corpus.\nThe results for baseline,15FBK’s system and ours\n(DCU) are shown in Table 4. For the FA–EN di-\nBaseline FBK DCU\nEnglish-Farsi 9.13 10.3211.42\nFarsi-English 12.47 14.4716.21\nTable 4: Head-to-head comparison\nrection FBK obtained +2.0 points (16%) improve-\nment in BLEU score, while for the same direction\nour improvement is +3.74 (29%). For the opposite\ndirection we also outperform FBK, with a +1.10\ndifference in BLEU. The BLEU score for the EN–\nFA direction by DCU is 11.42, 2.29 points higher\nthan the baseline (25%).\n5 Conclusion and Future Work\nThe contributions of this paper are threefold. First\nwe developed a new corpus namely TEP++ and\ntrained a translation engine. We showed that\nTEP++ works better than its predecessor TEP. Sec-\nond we developed an engine to translate between\nFF and SF. SF2FF works like an intelligent prepro-\ncessor/normalizer and translates SF into FF that is\na big credit for Farsi SMT. Finally we obtained bet-\nter results in comparison to other reported results\nso far.\nAt the moment, in Farsi SMT data scarcity is\nthe main challenge despite the fact that large vol-\numes of textual data is available via the internet.\nStored data on the internet for Farsi is in most cases\nare very noisy and also appears in SF forms. Our\nSF2FF engine can help to clean the internet data\nto generate reliable Farsi corpora. In the next step\nby normalizing existing Farsi corpora and aggre-\ngating them we will release a large-scale, reliable\ndataset for Farsi SMT. TEP++ also will be publicly\navailable shortly. We also intended to carry out a\n15https://wit3.fbk.eu/score.php?release=2013-01human evaluation to investigate the correlation be-\ntween the automatic score and manualfindings.\nAcknowledgment\nWe would like to thank the three anony-\nmous reviewers for their valuable comments.\nThis research is supported by Science Foun-\ndation Ireland through the CNGL Programme\n(Grant 12/CE/I2267) in the ADAPT Centre\n(www.adaptcentre.ie) at Dublin City University.\nReferences\nAmtrup, Jan Willers, Hamid Mansouri Rad, Karine\nMegerdoomian, and R ´emi Zajac. 2000.Persian–\nEnglish machine translation: An overview of the Shi-\nraz project. Computing Research Laboratory, New\nMexico State University, USA.\nBertoldi, Nicola, M Amin Farajian, Prashant Mathur,\nNicholas Ruiz, and Marcello Federico. 2013. 
Fbks\nmachine translation systems for the IWSLT 2013\nevaluation campaign. InProceedings of the 10th In-\nternational Workshop for Spoken Language Transla-\ntion. Heidelberg, Germany.\nCettolo, Mauro, Christian Girardi, and Marcello Fed-\nerico. 2012. Wit3: Web inventory of transcribed and\ntranslated talks. InProceedings of the 16thCon-\nference of the European Association for Machine\nTranslation (EAMT), pages 261–268, Trento, Italy,\nMay.\nErjavec, Toma. 2010. Multext-east version 4: Multi-\nlingual morphosyntactic specifications, lexicons and\ncorpora. InProceedings of the Seventh International\nConference on Language Resources and Evaluation\n(LREC’10). Malta, European Language Resources\nAssociation (ELRA).\nFarajian, Mohammad Amin. 2011. PEN: parallel\nEnglish–Persian news corpus. InProceedings of the\n2011th World Congress in Computer Science, Com-\nputer Engineering and Applied Computing. Nevada,\nUSA.\nFarzi, Saeed and Heshaam Faili. 2015. A\nswarm-inspired re-ranker system for statistical ma-\nchine translation.Computer Speech & Language,\n29(1):45–62.\nFederico, Fancellu, O’Brien Morgan, and Way Andy.\n2014. Standard language variety conversion using\nsmt. InProceedings of the Seventeenth Annual Con-\nference of the European Association for Machine\nTranslation (EAMT), pages 143–149, Dubrovnik,\nCroatia, May.\nKoehn, Philipp, Franz Josef Och, and Daniel Marcu.\n2003. Statistical phrase-based translation. InPro-\nceedings of the 2003 Conference of the North Amer-\nican Chapter of the Association for Computational87\nLinguistics on Human Language Technology, pages\n48–54. Edmonton, Canada.\nKoehn, Philipp, Hieu Hoang, Alexandra Birch, Chris\nCallison-Burch, Marcello Federico, Nicola Bertoldi,\nBrooke Cowan, Wade Shen, Christine Moran,\nRichard Zens, et al. 2007. Moses: Open source\ntoolkit for statistical machine translation. InPro-\nceedings of the 45th annual meeting of the ACL on\ninteractive poster and demonstration sessions, pages\n177–180. Prague, Czech Republic.\nKoehn, Philipp. 2004. Statistical significance tests for\nmachine translation evaluation. InProceedings of\nthe 2004 Conference on Empirical Methods in Natu-\nral Language Processing(EMNLP), pages 388–395,\nBarcelona, Spain.\nMohaghegh, Mahsa. 2012.English–Persian phrase-\nbased statistical machine translation: enhanced\nmodels, search and training, Massey University, Al-\nbany (Auckland), New Zealand. Ph.D. thesis.\nOch, Franz Josef. 2003. Minimum error rate training\nin statistical machine translation. InProceedings of\nthe 41st Annual Meeting on Association for Compu-\ntational Linguistics-Volume 1, pages 160–167. Sap-\nporo, Japan.\nPapineni, Kishore, Salim Roukos, Todd Ward, and Wei-\nJing Zhu. 2002. BLEU: a method for automatic\nevaluation of machine translation. InProceedings\nof the 40th annual meeting on association for com-\nputational linguistics, pages 311–318. Philadephia,\nPennsylvania, USA.\nPilevar, Mohammad Taher, Heshaam Faili, and Ab-\ndol Hamid Pilevar. 2011. TEP: Tehran English–\nPersian parallel corpus. InComputational Linguis-\ntics and Intelligent Text Processing, pages 68–79.\nSpringer.\nQasemizadeh, Behrang, Saeed Rahimi, and\nBehrooz Mahmoodi Bakhtiari. 2007. Thefirst\nparallel multilingual corpus of persian: Toward\na persian blark. InThe Second Workshop on\nComputational Approaches to Arabic Script-based\nLanguages (CAASL-2). California, USA.\nRasooli, Mohammad Sadegh, Ahmed El Kholy, and\nNizar Habash. 2013. 
Orthographic and morpho-\nlogical processing for Persian-to-English statistical\nmachine translation. InProceedings of the Interna-\ntional Joint Conference on Natural Language Pro-\ncessing, pages 1047– 1051. Nagoya, Japan.\nStolcke, Andreas et al. 2002. SRILM an extensible\nlanguage modeling toolkit. InProceedings of the\nInternational Conference on Spoken Language Pro-\ncessing, pages 901–904. Denver, Colorado.\nZajac, R ´emi, Steve Helmreich, and Karine Megerdoo-\nmian. 2000. Black-box/glass-box evaluation in shi-\nraz. InWorkshop on Machine Translation Evalua-\ntion at LREC-2000. Athens, Greece.88",
"main_paper_content": null
}
|
{
"decision": "Unknown",
"reviews": []
}
| 0 | 0 |
[] |
[] |
{
"id": "VNucmMw3Kt0",
"year": null,
"venue": "EAMT 2016",
"pdf_link": "https://aclanthology.org/W16-3403.pdf",
"forum_link": "https://openreview.net/forum?id=VNucmMw3Kt0",
"arxiv_id": null,
"doi": null
}
|
{
"title": "Improving Phrase-Based SMT Using Cross-Granularity Embedding Similarity",
"authors": [
"Peyman Passban",
"Chris Hokamp",
"Andy Way",
"Qun Liu"
],
"abstract": "Peyman Passban, Chris Hokamp, Andy Way, Qun Liu. Proceedings of the 19th Annual Conference of the European Association for Machine Translation. 2016.",
"keywords": [],
"raw_extracted_content": "Baltic J. Modern Computing, V ol. 4 (2016), No. 2, pp. 129–140\nImproving Phrase-Based SMT Using\nCross-Granularity Embedding Similarity\nPeyman PASSBAN, Chris HOKAMP, Andy WAY , Qun LIU\nADAPT Centre\nSchool of Computing\nDublin City University\nDublin, Ireland\nfppassban,chokamp,away,qliu [email protected]\nAbstract. The phrase–based statistical machine translation (PBSMT) model can be viewed as a\nlog-linear combination of translation and language model features. Such a model typically relies\non the phrase table as the main resource for bilingual knowledge, which in its most basic form\nconsists of aligned phrases, along with four probability scores. These scores only indicate the co-\noccurrence of phrase pairs in the training corpus, and not necessarily their semantic relatedness.\nThe basic phrase table is also unable to incorporate contextual information about the segments\nwhere a particular phrase tends to occur. In this paper, we define six new features which express\nthe semantic relatedness of bilingual phrases. Our method utilizes both source and target side\ninformation to enrich the phrase table. The new features are inferred from a bilingual corpus by\na neural network (NN). We evaluate our model on the English–Farsi (En–Fa) and English–Czech\n(En–Cz) pairs and observe considerable improvements in the all En $Fa and En $Cz directions.\nKeywords: Statistical machine translation, phrase embeddings, incorporating contextual infor-\nmation.\n1 Introduction\nThe process of PBSMT can be interpreted as a search problem where the score at each\nstep of exploration is formulated as a log-linear model (Koehn, 2010). For each candi-\ndate phrase, the set of features is combined with a set of learned weights to find the best\ntarget counterpart of the provided source sentence. Because an exhaustive search of the\ncandidate space is not computationally feasible, the space is typically pruned via some\nheuristic search, such as beam search (Koehn, 2010). The discriminative log-linear\nmodel allows the incorporation of arbitrary context-dependent and context-independent\nfeatures. Thus, features such as those in Och and Ney (2002) or Chiang et al. (2009)\ncan be combined to improve translation performance. The standard baseline bilingual\nfeatures included in Moses (Koehn et al., 2007) by default are: the phrase translation\n130 Passban et al.\nprobability\u001e(ejf),inverse phrase translation probability \u001e(fje),direct lexical weight-\ninglex(ejf)andinverse lexical weighting lex(fje).1\nThe scores in the phrase table are computed directly from the co-occurrence of\naligned phrases in training corpora. A large body of recent work evaluates the hypothe-\nsis that co-occurrence information alone cannot capture contextual information as well\nas the semantic relations among phrases (see section 2). Therefore, many techniques\nhave been proposed to enrich the feature list with semantic information. In this paper,\nwe define six new features for this purpose. All of our features indicate the semantic\nrelatedness of source and target phrases. Our features leverage contextual information\nwhich is lost by the traditional phrase extraction operations. 
Specifically, in both sides\n(source and target) we look for any type of constituents including phrases, sentences or\neven words which can fortify the semantic information about phrase pairs.\nOur contributions in this paper are threefold: a) We define new semantic features\nand embed into PBSMT to enhance the translation quality. b) In order to define the new\nfeatures we train bilingual phrase and sentence embeddings using an NN. Embeddings\nare trained in a joint distributed feature space which not only preserves monolingual se-\nmantic and syntactic information but also represents cross-lingual relations. c) We indi-\nrectly incorporate external contextual information using the neural features. We search\nin the source and target spaces and retrieve the closest constituent to the phrase pair in\nour bilingual embedding space.\nThe structure of the paper is as follows. Section 2 gives an overview of related\nwork. Section 3 explains our pipeline and the network architecture in detail. In Section\n4, experimental results are reported. We also have a separate section to discuss differ-\nent aspects of embeddings and the model. Finally, in the last section we present our\nconclusions along with some avenues for future work.\n2 Background\nSeveral models such as He et al. (2008), Liu et al. (2008) and Shen et al. (2009) studied\nthe use of contextual information for statistical machine translation (SMT). The idea is\nto go beyond the phrase level and enhance the phrase representation by taking surround-\ning phrases into account. This line of research is referred as discourse SMT (Hardmeier,\n2014; Meyer, 2014). Because NNs can provide distributed representations for words\nand phrases, they are ideally suited to the task of comparing semantic similarity. Unsu-\npervised models such as Word2Vec2(Mikolov et al., 2013a) or Paragraph Vectors (Le\n& Mikolov, 2014) have shown that distributional information is often enough to learn\nhigh-quality word and sentence embeddings.\nA large body of recent work has evaluated the use of embeddings in machine trans-\nlation. A successful usecase was reported in (Mikolov et al., 2013b). They separately\n1Although the features contributed by the language model component are as important as the\nbilingual features, we do not address them in this paper, since they traditionally only make use\nof the monolingual target language context, and we are concerned with incorporating bilingual\nsemantic knowledge.\n2http://code.google.com/p/word2vec/\nImproving Phrase-Based SMT Using Cross-Granularity Embedding Similarity 131\nproject words of source and target languages into embeddings, then try to find a trans-\nformation function to map the source embedding space into the target space. The trans-\nformation function was approximated using a small set of word pairs extracted using an\nunsupervised alignment model trained with a parallel corpus. This approach allows the\nconstruction of a word-level translation engine with very large monolingual data and\nonly a small number of bilingual word pairs. The cross-lingual transformation mecha-\nnism allows the engine to search for translations for OOV (out-of-vocabulary) words by\nconsulting a monolingual index which contains words that were not observed in the par-\nallel training data.The work by Garcia and Tiedemann (2014) is another model follows\nthat the same paradigm.\nHowever, machine translation (MT) is more than word-level translation. In Mart ´ınez\net al. 
(2015) word embeddings were used in document-level MT to disambiguate the word selection. Tran et al. (2014) used bilingual word embeddings to compute the semantic similarity of phrases. To extend the application of text embedding beyond single words, Gao et al. (2013) proposed learning embeddings for source and target phrases by training a network to maximize the sentence-level BLEU score. Costa-jussà et al. (2014) worked at the sentence level and incorporated the source-side information into the decoding phase by finding the similarities between phrases and source embeddings. Some other models re-scored the phrase table (Alkhouli et al., 2014) or generated new phrase pairs in order to address the OOV word problem (Zhao et al., 2014).
Our network makes use of some ideas from existing models, but also extends the information available to the embedding model. We train embeddings in the joint space using both source and target side information simultaneously, using a model which is similar to that of Devlin et al. (2014) and Passban et al. (2015b). Similar to Gao et al. (2013), we make embeddings for phrases and sentences and add their similarity as feature functions to the SMT model.
3 Proposed Method
In order to train our bilingual embedding model, we start by creating a large bilingual corpus. Each line of the corpus may include:
– a source or target sentence,
– a source or target phrase,
– a concatenation of a phrase pair (source and target phrases which are each other's translation),
– a tuple of source and target words (each other's translation).
Sentences of the bilingual corpus are taken from the SMT training corpus. Accordingly, phrases and words are from the phrase tables and lexicons generated by the alignment model and phrase extraction heuristic used by the SMT model. This means that the bilingual corpus is a very large corpus of size 2 × |c| + 3 × |pt| + |bl|, where |c| indicates the number of source/target sentences, |pt| is the size of the phrase table and |bl| is the size of the bilingual lexicon.
By use of the concatenated phrases and bilingual tuples we try to score the quality of both sides of the phrase pair, by connecting phrases with other phrases in the same language, and with their counterparts in the other language. Section 3.1 discusses how the network benefits from this bilingual property.
Each line of the bilingual training corpus has a dedicated vector (row) in the embeddings matrix. During training the embeddings are updated. After training, we extract some information to enrich the phrase table. First we compute the semantic similarity between source and target phrases in phrase pairs. The similarity shows how semantically related the phrases are to each other. The Cosine measure is used to compute the similarity:
similarity(E_s, E_t) = (E_s · E_t) / (||E_s|| × ||E_t||)
where E_s and E_t indicate the embeddings for the given source and target phrases, respectively. We map Cosine scores into the [0,1] range. This can be interpreted as a score indicating the semantic relatedness of the source and target phrases. The similarity between the source phrase and the target phrase is the first feature and is referred to as sp2tp.
Among source-side embeddings (word, phrase or sentence embeddings) we search for the closest match to the source phrase. There might be a word, phrase or sentence on the source side which can enhance the source phrase representation and ease its translation.
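A small numpy sketch of the similarity feature and the nearest-match search described above is given below. The remapping of cosine values to [0,1] via (c + 1)/2 is one common choice and is an assumption here, since the exact mapping is not spelled out.

```python
import numpy as np

def cosine(u, v):
    return float(np.dot(u, v) / (np.linalg.norm(u) * np.linalg.norm(v)))

def to_unit_interval(c):
    return (c + 1.0) / 2.0  # assumed remapping of [-1, 1] into [0, 1]

def sp2tp(src_phrase_vec, tgt_phrase_vec):
    """First feature: remapped cosine of the source and target phrase embeddings."""
    return to_unit_interval(cosine(src_phrase_vec, tgt_phrase_vec))

def closest_match(query_vec, side_embeddings):
    """Nearest word/phrase/sentence embedding on one side, as used for the
    match-based features introduced next."""
    sims = [to_unit_interval(cosine(query_vec, e)) for e in side_embeddings]
    best = int(np.argmax(sims))
    return best, sims[best]
```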
If the closest match belongs to a phrase, probably that is a paraphrased\nform of the original phrase and if the closest match belongs to a word, probably that is\na keyword which could enhance the word selection quality. We refer to this source-side\nsimilarity score as sp2sm .\nWe also look for the closest match of the source phrase on the target side. As we\njointly learn embeddings, structures that are each other’s translation should have close\nembeddings. We compute the similarity of the closest target match to the source phrase\n(sp2tm ). We compute the same similarities for the target phrase, namely the similarity\nof the target phrase with the closest target match ( tp2tm ) and the closest source match\n(tp2sm ). The source and target matches may preserve other type of semantic similarity\n(sm2tm ), therefore these features should add more information about the overall quality\nof the phrase pair. All new features are added to the phrase table and used in the tuning\nphase to optimise the translation model. Figure 1 tries to clarify the relation among\ndifferent matches and phrases.\nsm2tm tp2tm sp2sm sp2tp source phrase \nsource embeddings target phrase \ntarget embeddings \nFig. 1. sp, tp, sm andtmstand for source phrase, target phrase, source match andtarget match ,\nrespectively. The embeddings size for all types of embedding are the same. The source/target-\nside embedding could belong to a source/target word, phrase or sentence. The labels of arrows\nindicate the Cosine similarity between two embeddings which is mapped into the [0,1] range.\nImproving Phrase-Based SMT Using Cross-Granularity Embedding Similarity 133\n3.1 Learning Embeddings\nOur network is an extension of Le and Mikolov (2014) and Passban et al. (2015b). In\nthose methods, documents (words, phrases, sentences and any other chunks of text) are\ntreated as atomic units in order to learn embeddings in the same semantic space as the\nspace used for the individual words in the model. The model includes an embedding\nfor each document which in our case may be a monolingual sentence, a monolingual\nphrase, a bilingual phrase pair or a bilingual word pair. During training, at each itera-\ntion a random target word ( wt) is selected from the input document to be predicted at\nthe output layer by using the context and document embeddings. The context embed-\nding is made by averaging embeddings of adjacent words around the target word. Word\nand document embeddings are updated during training until the cost is minimized. The\nmodel learns an embedding space in which constituents with similar distributional ten-\ndencies are close to each other. More formally, given a sequence of Si=w1;w2;:::;w n\nthe objective is to maximize the log probability of the target word given the context and\ndocument vector:\n1\nnnX\nj=1logp(wt\njjCwt\ni;Di)\nwherewt\nj2Siis randomly selected at each iteration. Diis the document embedding\nforSiandCwtindicates the context embedding which is the mean of embeddings for\nmpreceding and mfollowing words around the target word.\nAs previously mentioned, Sicould be a monolingual sentence or phrase, in which\ncasewtand adjacent words are from the same language. In other words, the context\nincludesmwords before and mwords after the target word. Sialso could be a con-\ncatenation of source and target phrases. In that case context words are selected from\nboth languages, i.e. mwords from the source (the side from which the target word is\nselected) and mwords from the target side. 
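As an illustration of the context construction just described, a minimal sketch for the monolingual case is given below; `emb` is a hypothetical word-to-vector lookup, and the bilingual cases would additionally need the boundary between the source and target halves of the document, which is omitted here.

```python
import numpy as np

def context_embedding(words, t, emb, m=5):
    # C_{w_t}: mean of the embeddings of up to m preceding and m following words
    # around the randomly selected target word at position t (m is an upper bound).
    window = words[max(0, t - m):t] + words[t + 1:t + 1 + m]
    return np.mean([emb[w] for w in window], axis=0)
```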
Finally Sicould be a pair of source and\ntarget words where Cwtis made using the target word’s translation. The word on one\nside is used to predict the word on the opposite side. In the proposed model mis the\nupper bound.\nTable 1. Context vectors for different input documents. wtisbetter andm= 5. Italics are in\nFarsi.\nD1 know him better than anyone\nCbetter\n1 [know, him, than, anyone ]s\nD2 know him better than anyone . ¯av r¯a bhtr ¯az hrks my ˇsn¯asy\nCbetter\n2 [know, him, than, anyone ]s+[¯av,r¯a,bhtr,¯az,hrks]t\nD3 better .bhtr\nCbetter\n3 [bhtr]t\n134 Passban et al.\nTable 1 illustrate some examples of the context window. The examples are selected\nfrom the En–Fa bilingual corpus (see Section 4).3InC1the context window includes\n2 words before better and 2 words after. In this case the target word and all other\ncontext words are from the same language (indicated by a ‘ s’ subscript). In the second\nexample the input document is a concatenation of English and Farsi phrases, so C2\nincludesm(or fewer) words from each side (indicated with different subscripts). In the\nfinal example the input document is a word tuple where the target word’s translation is\nconsidered as its context.\nAs shown in Huang et al. (2012), word vectors can be affected by the word’s sur-\nrounding as well as by the global structure of a text. Each unique word has a specific\nvector representation and clearly similar words in the same language would have sim-\nilar vectors (Mikolov et al., 2013a). By use of the bilingual training corpus and our\nproposed architecture we tried to expand the monolingual similarities to the bilingual\nsetting, resulting in an embedding space which contains both languages. Words that are\ndirect translations of each other should have similar/close embeddings in our model. As\nthe corpus contains tuples of <word L1;word L2>, embeddings for words which tend\nto be translations of one another are trained jointly. Phrasal units are also connected\ntogether by the same process. Since the bigger blocks encompass the embeddings for\nwords and phrasal units they should also have representations which are similar to the\nrepresentations of their constituents.\n3.2 Network Architecture\nIn the input layer we have an embedding matrix. Each row in the matrix is dedicated\nto one specific line in the bilingual corpus. During training embeddings are tuned and\nupdated. The network has only one hidden layer. A Softmax layer is placed on top of the\nhidden layer to map values to class probabilities. Softmax is a vector-valued function\nwhich maps its input values to the [0,1] range. The output values from the Softmax\ncan be interpreted as class probabilities for the given input. The Softmax function is\nformulated as follows:\nP(wt\njjCwt\ni\u000fDi) =exp(hj:wj+aj)P\nj02Vexp(hj:wj0+aj0)\nIntuitively, we are estimating the probability of selecting the j-th word as the tar-\nget word from the i-th training document. The input for the Softmax layer ish=\nW(Cwt\ni\u000fDi) +b, whereWis a weight matrix between the input layer and the hidden\nlayer,bis a bias vector and \u000findicates the concatenation function. wjis thej-th column\nof another weight matrix (between the hidden layer and the Softmax layer) andajis a\nbias term. The output of Softmax ,V2RjVj, is the distribution probability over classes\nwhich are words in our setting. The j-th cell inVis interpreted as the probability of se-\nlecting thej-th word from the target vocabulary Vas the target word. 
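The scoring step above can be written compactly as follows; the parameter names and shapes are illustrative and are not taken from the authors' implementation.

```python
import numpy as np

def softmax(z):
    z = z - z.max()              # shift for numerical stability
    e = np.exp(z)
    return e / e.sum()

def predict_distribution(C, D, W, b, W_out, a):
    # h = W(C . D) + b, where '.' is the concatenation of the context embedding C
    # and the document embedding D, followed by a softmax over the vocabulary.
    x = np.concatenate([C, D])
    h = W @ x + b
    # entry j is proportional to exp(h . w_j + a_j), with w_j the j-th column of
    # the hidden-to-softmax weight matrix, i.e. P(j-th word | context, document).
    return softmax(W_out.T @ h + a)
```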
Based on Softmax\nvalues the word with the highest probability is selected and the error is computed ac-\ncordingly. The network parameters are optimized using stochastic gradient descent and\n3We used the DIN transliteration standard to show the Farsi alphabets;\nhttps://en.wikipedia.org/wiki/Persian alphabet\nImproving Phrase-Based SMT Using Cross-Granularity Embedding Similarity 135\nback-propagation (Rumelhart et al., 1988). All parameters of the model are randomly\ninitialized over a uniform distribution in the [-0.1,0.1] range. Weight matrices, bias val-\nues and word embeddings are all network parameters which are tuned during training.\nThe embedding size in our model is 200. Figure 2 illustrates the whole pipeline.\n1\n𝑛∑ \nw2 \nw4 \nw5 \nw6 w1 \nDs Csw3\n h \nTarget \nVocab. w3 \nFig. 2. Network architecture. The input document is S=w1w2w3w4w5w6and the target\nword is w3.\n4 Experimental Results\nWe evaluated our new features on two language pairs: En–Fa and En–Cz. Both Farsi\nand Czech are morphologically rich languages; therefore, translation to/from these lan-\nguages can be more difficult than it is for languages where words tend to be discrete\nsemantic units. Farsi is also a low-resource language, so we are interested in working\nwith these pairs. For the En–Fa pair we used the TEP++ corpus (Passban et al., 2015a)\nand for Czech we used the Europarl4corpus (Koehn, 2005). TEP++ is a collection of\n600,000 parallel sentences. We used 1000 and 2000 sentences for testing and tuning,\nrespectively and the rest of the corpus for training. From the Czech dataset we selected\nthe same number of sentences for training, testing and tuning. The baseline system is a\nPBSMT engine built using Moses (Koehn et al., 2007) with the default configuration.\nWe used MERT (Och, 2003) for tuning. In the experiments we trained 5-gram language\nmodels on the monolingual parts of the bilingual corpora using SRILM (Stolcke et al.,\n2002). We used BLEU (Papineni et al., 2002) as the evaluation metric. We added our\nfeatures to the phrase table and tuned the translation models. Table 2 shows the impact\nof each feature. We also estimated the translation quality in the presence of the all fea-\ntures (we run MERT for each row of Table 2). Bold numbers are statistically significant\naccording to the results of paired bootstrap re-sampling with p=0.05 for 1000 samples\n(Koehn, 2004). Arrows indicate whether the new features increased or decreased the\nquality over the baseline.\n4http://www.statmt.org/europarl/\n136 Passban et al.\nTable 2. Impact of the proposed features.\nFeature En–Fa \"# Fa–En \"# En–Cz \"# Cz–En \"#\nBaseline 21.03 0.00 29.21 0.00 28.35 0.00 39.63 0.00\nsp2tp 21.46 0.43\"29.71 0.50\"28.72 0.37\"40.34 0.71\"\nsp2sm 21.32 0.29\"29.74 0.53\"28.30 0.05#39.76 0.13\"\nsp2tm 21.40 0.37\"29.56 0.35\"28.52 0.17\"39.79 0.16\"\ntp2tm 20.40 0.63#29.56 0.35\"28.00 0.35#39.68 0.05\"\ntp2sm 21.93 0.90\"29.26 0.05\"28.94 0.59\"39.81 0.18\"\nsm2tm 21.18 0.15\"30.08 0.87\"28.36 0.01\"39.99 0.36\"\nAll 21.84 0.81\"30.26 1.05\"29.01 0.66\"40.24 0.61\"\nResults show that the new features are useful and positively affect the translation\nquality. Some of the features such as sp2tp are always helpful regardless of the trans-\nlation direction and language pair. This feature is the most important feature among\nothers. The sm2tm feature always works effectively in translating into English and the\ntp2sm feature is effective when translating from English. 
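As a side note on the significance test used here, paired bootstrap re-sampling (Koehn, 2004) with 1000 samples can be sketched as below; `corpus_bleu` stands for any corpus-level BLEU scorer and is not part of the authors' evaluation scripts.

```python
import random

def paired_bootstrap(sys_hyps, base_hyps, refs, corpus_bleu, n_samples=1000):
    # Fraction of resampled test sets on which the candidate system outscores the
    # baseline; a fraction of at least 0.95 corresponds to significance at p = 0.05.
    ids = list(range(len(refs)))
    wins = 0
    for _ in range(n_samples):
        sample = [random.choice(ids) for _ in ids]   # resample sentences with replacement
        sys_score = corpus_bleu([sys_hyps[i] for i in sample], [refs[i] for i in sample])
        base_score = corpus_bleu([base_hyps[i] for i in sample], [refs[i] for i in sample])
        if sys_score > base_score:
            wins += 1
    return wins / n_samples
```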
In the presence of all features\nresults are significantly better than the baseline system in all cases. Some of the features\nare not as strong as the others ( tp2tm ) and some of them behave differently based on\nthe language ( sp2tm ).\n5 Discussion\nNumbers reported in in Section 4 indicate that the proposed method and features result\nin a significant enhancement of translation quality, but it cannot be decisively claimed\nthat they are always helpful for all languages and settings. Therefore we tried to study\nthe impact of features not only quantitatively but also qualitatively. We mainly focus on\nthree issues in this section. First we show how the features change SMT translations.\nThen we show ability of the network in capturing cross-lingual similarities and finally\nwe discuss the way we learn embeddings.\nBased on our investigation, the new features seem to help the model determine the\nquality of a phrase pair. As an example for the English phrase “but I’m your teammate”\nin the phrase table, the corresponding Farsi target phrase is “¯am¯a mn hm tymyt hstm”\nwhich is the exact translation of the source phrase. The closest match in the source side\nis“we played together” and in the target side is “Ben mn ¯anj¯a b¯azy krdm” (meaning\n“I played in that team” ). These retrieved matches indicate that this is a high-quality\nphrase. By comparing the outputs we recognized that before adding our features the\nword “your” was not translated. In translation into Farsi, possessives sometimes are not\ntranslated and the verb implicitly shows them, but the best translation is a translation\nincluding possessives. The translation of “your” appeared in the output after adding\nour features.\nThe proposed model is expected to learn the cross-lingual similarities along with\nthe monolingual relations. To study this feature Table 3 shows two samples. Results in\nTable 3 show the proposed model can capture cross-lingual relations. It is also able to\nImproving Phrase-Based SMT Using Cross-Granularity Embedding Similarity 137\nmodel similarities in different granularities. It has word level, phrase level and sentence\nlevel similarities. Retrieved instances are semantically related to the given queries.\nTable 3. The top 10 most similar vectors for the given English query. Recall that the retrieved\nvectors could belong to words, phrases or sentences in either English or Farsi and word or phrase\npairs. The items that were originally in Farsi have been translated into English, and are indicated\nwith italics .\nQuery sadness\n1 <apprehension , nervous >\n2 emotion\n3 <ill,sick>\n4 pain\n5 <money ,money >\n6 benignity\n7 <may he was punished ,punished harshly >\n8 is really gonna hurt\n9 i know tom ’ s dying\n10 <bitter ,angry >\nTang et al. (2015) proposed that a sentence embedding could be generated by aver-\naging/concatenating embeddings of the words in that sentence. In our case the model\nby Tang et al. was not as beneficial as ours for both Farsi and Czech. As an example if\nthesp2tp is computed using their model, it degrades the En–Fa direction’s BLEU from\n21.03 to 20.97 and its improvement for the Fa–En direction is only +0.11 points (al-\nmost 5 times less than ours). Our goal is not to compare our model to that of Tang et al..\nWe only performed a simple comparison on the most important feature to see the dif-\nference. Furthermore, according to discussions from Le and Mikolov (2014) document\nvectors (such as ours) work better than averaging/concatenating vectors. 
Our model\nalso contains both source and target side information in word and phrase embeddings.\nAveraging cannot provide such rich information. Our results are aligned with Devlin et\nal. (2014), who showed the impact of using both source and target side information.\n6 Conclusion and Future work\nIn this work we proposed a novel neural network model which learns word, phrase, and\nsentence embeddings in a bilingual space. Using embeddings we define six new fea-\ntures which are incorporated into an SMT phrase table. Our results show that the new\nsemantic similarity features enhance translation performance across all of the languages\nwe evaluated. In future work, we hope to directly include the distributed semantic repre-\nsentation into the phrase table, allowing on-line incorporation of semantic information\ninto the translation model features.\n138 Passban et al.\nAcknowledgement\nWe would like to thank the three anonymous reviewers and Rasul Kaljahi for their\nvaluable comments and the Irish Center for High-End Computing (www.ichec.ie) for\nproviding computational infrastructures. This research is supported by Science Founda-\ntion Ireland through the CNGL Programme (Grant 12/CE/I2267) in the ADAPT Centre\n(www.adaptcentre.ie) at Dublin City University.\nReferences\nAlkhouli, T., Guta, A., & Ney, H. (2014). Vector space models for phrase-based ma-\nchine translation. Syntax, Semantics and Structure in Statistical Translation .\nChiang, D., Knight, K., & Wang, W. (2009). 11,001 new features for statistical machine\ntranslation. In Proceedings of human language technologies: The 2009 annual\nconference of the north american chapter of the association for computational\nlinguistics (pp. 218–226). Boulder, Colorado.\nCosta-jussa, M., Gupta, P., Rosso, P., & Banchs, R. (2014). English-to-hindi system\ndescription for wmt 2014: Deep sourcecontext features for moses. In Proceedings\nof the ninth workshop on statistical machine translation, baltimore, maryland,\nusa. association for computational linguistics.\nDevlin, J., Zbib, R., Huang, Z., Lamar, T., Schwartz, R., & Makhoul, J. (2014). Fast\nand robust neural network joint models for statistical machine translation. In\nProceedings of the 52nd annual meeting of the association for computational\nlinguistics (V ol. 1, pp. 1370–1380).\nGao, J., He, X., Yih, W., & Deng, L. (2013). Learning semantic representations for the\nphrase translation model. CoRR ,abs/1312.0482 .\nGarcia, E. M., & Tiedemann, J. (2014). Words vector representations meet machine\ntranslation. Syntax, Semantics and Structure in Statistical Translation , 132.\nHardmeier, C. (2014). Discourse in statistical machine translation . Unpublished doc-\ntoral dissertation.\nHe, Z., Liu, Q., & Lin, S. (2008). Improving statistical machine translation us-\ning lexicalized rule selection. In Proceedings of the 22nd international con-\nference on computational linguistics - volume 1 (pp. 321–328). Strouds-\nburg, PA, USA: Association for Computational Linguistics. Retrieved from\nhttp://dl.acm.org/citation.cfm?id=1599081.1599122\nHuang, E. H., Socher, R., Manning, C. D., & Ng, A. Y . (2012). Improving word\nrepresentations via global context and multiple word prototypes. In Proceedings\nof the 50th annual meeting of the association for computational linguistics: Long\npapers-volume 1 (pp. 873–882).\nKoehn, P. (2004). Statistical significance tests for machine translation evaluation. In\nEmnlp (pp. 388–395).\nKoehn, P. (2005). 
Europarl: A parallel corpus for statistical machine translation. In Mt\nsummit (V ol. 5, pp. 79–86).\nKoehn, P. (2010). Statistical machine translation (1st ed.). New York, NY , USA:\nCambridge University Press.\nImproving Phrase-Based SMT Using Cross-Granularity Embedding Similarity 139\nKoehn, P., Hoang, H., Birch, A., Callison-Burch, C., Federico, M., Bertoldi, N., et al.\n(2007). Moses: Open source toolkit for statistical machine translation. In Pro-\nceedings of the 45th annual meeting of the acl on interactive poster and demon-\nstration sessions (pp. 177–180).\nLe, Q. V ., & Mikolov, T. (2014). Distributed representations of sentences and docu-\nments. CoRR ,abs/1405.4053 .\nLiu, Q., He, Z., Liu, Y ., & Lin, S. (2008). Maximum entropy based rule selection\nmodel for syntax-based statistical machine translation. In Proceedings of the\nconference on empirical methods in natural language processing (pp. 89–97).\nStroudsburg, PA, USA: Association for Computational Linguistics. Retrieved\nfrom http://dl.acm.org/citation.cfm?id=1613715.1613729\nMart ´ınez, E., Espa ˜na Bonet, C., M ´arquez Villodre, L., et al. (2015). Document-level\nmachine translation with word vector models. In Proceedings of the 18th annual\nconference of the european association for machine translation (eamt) (pp. 59–\n66). Antalya, Turkey.\nMeyer, T. (2014). Discourse-level features for statistical machine translation . Unpub-\nlished doctoral dissertation, ´Ecole Polytechnique F ´ed´erale de Lausanne.\nMikolov, T., Chen, K., Corrado, G., & Dean, J. (2013a). Efficient estimation of word\nrepresentations in vector space. CoRR ,abs/1301.3781 .\nMikolov, T., Le, Q. V ., & Sutskever, I. (2013b). Exploiting similarities among languages\nfor machine translation. CoRR ,abs/1309.4168 .\nOch, F. J. (2003). Minimum error rate training in statistical machine translation. In Pro-\nceedings of the 41st annual meeting on association for computational linguistics\n- volume 1 (pp. 160–167). Sapporo, Japan.\nOch, F. J., & Ney, H. (2002). Discriminative training and maximum entropy models\nfor statistical machine translation. In Proceedings of the 40th annual meeting on\nassociation for computational linguistics (pp. 295–302). Philadelphia, Pennsyl-\nvania.\nPapineni, K., Roukos, S., Ward, T., & Zhu, W.-J. (2002). BLEU: a method for automatic\nevaluation of machine translation. In Proceedings of the 40th annual meeting on\nassociation for computational linguistics (pp. 311–318).\nPassban, P., Hokamp, C., & Liu, Q. (2015b). Bilingual distributed phrase representation\nfor statistical machine translation. In Proceedings of mt summit xv (pp. 310–318).\nPassban, P., Way, A., & Liu, Q. (2015a). Benchmarking SMT performance for Farsi\nunisng the TEP++ corpus. In Proceedings of the 18th annual conference of the\nEuropean Association for Machine Translation (eamt) (pp. 82–88). Antalya,\nTurkey.\nRumelhart, D. E., Hinton, G. E., & Williams, R. J. (1988). Learning representations by\nback-propagating errors. Cognitive modeling ,5, 3.\nShen, L., Xu, J., Zhang, B., Matsoukas, S., & Weischedel, R. (2009). Effective use\nof linguistic and contextual information for statistical machine translation. In\nProceedings of the 2009 conference on empirical methods in natural language\nprocessing: Volume 1 - volume 1 (pp. 72–80). Singapore.\nStolcke, A., et al. (2002). SRILM-an extensible language modeling toolkit. In Inter-\nspeech.\n140 Passban et al.\nTang, D., Qin, B., & Liu, T. (2015). 
Document modeling with gated recurrent neural network for sentiment classification. In Proceedings of the 2015 conference on empirical methods in natural language processing (pp. 1422–1432).\nTran, K. M., Bisazza, A., & Monz, C. (2014). Word translation prediction for morphologically rich languages with bilingual neural networks. In Proceedings of the 2014 conference on empirical methods in natural language processing (EMNLP) (pp. 1676–1688).\nZhao, K., Hassan, H., & Auli, M. (2014). Learning translation models from monolingual continuous representations.\nReceived May 2, 2016, accepted May 5, 2016",
"main_paper_content": null
}
|
{
"decision": "Unknown",
"reviews": []
}
| 0 | 0 |
[] |
[] |
{
"id": "E7uyrZSm4zn",
"year": null,
"venue": "EAMT 2020",
"pdf_link": "https://aclanthology.org/2020.eamt-1.68.pdf",
"forum_link": "https://openreview.net/forum?id=E7uyrZSm4zn",
"arxiv_id": null,
"doi": null
}
|
{
"title": "Project MAIA: Multilingual AI Agent Assistant",
"authors": [
"André Filipe Torres Martins",
"João Graça",
"Paulo Dimas",
"Helena Moniz",
"Graham Neubig"
],
"abstract": "This paper presents the Multilingual Artificial Intelligence Agent Assistant (MAIA), a project led by Unbabel with the collaboration of CMU, INESC-ID and IT Lisbon. MAIA will employ cutting-edge machine learning and natural language processing technologies to build multilingual AI agent assistants, eliminating language barriers. MAIA’s translation layer will empower human agents to provide customer support in real-time, in any language, with human quality.",
"keywords": [],
"raw_extracted_content": "Project MAIA: Multilingual AI Agent Assistant\nJo˜ao Grac ¸a1, Paulo Dimas1, Helena Moniz2, Andr ´e F. T. Martins1;3, Graham Neubig4\n1Unbabel / Lisbon, Portugal\n2INESC-ID / Lisboa, Portugal\n3Instituto de Telecomunicac ¸ ˜oes / Portugal\n4CMU / Pittsburgh, USA\nfjoao,pdimas,andre.martins [email protected] ,\[email protected] ,[email protected]\nAbstract\nThis paper presents the Multilingual Artifi-\ncial Intelligence Agent Assistant (MAIA),\na project led by Unbabel with the collab-\noration of CMU, INESC-ID and IT Lis-\nbon. MAIA will employ cutting-edge ma-\nchine learning and natural language pro-\ncessing technologies to build multilingual\nAI agent assistants, eliminating language\nbarriers. MAIA’s translation layer will em-\npower human agents to provide customer\nsupport in real-time, in any language, with\nhuman quality.\n1 Introduction\nOnline conversational support – chat – is the fastest\ngrowing customer service channel, being the pre-\nferred way for millennials to obtain customer ser-\nvice. Today, supporting international customers\nin this channel is mostly done by using human\nagents that speak different languages – a scarce\nand costly resource. The tremendous progress of\nlanguage technologies (machine translation and di-\nalogue systems) in the last years makes them an\nappealing tool for multilingual customer service.\nHowever, current systems are still too brittle and\nimpractical: first, they require too much data and\ncomputing power, failing for domains or languages\nwhere labeled data is scarce; second, they do not\ncapture contextual information (e.g. current MT\nsystems work on a sentence-by-sentence basis, ig-\nnoring the conversation context); third, fully au-\ntomatic systems lack human empathy and fail on\nunexpected scenarios, leading to low customer sat-\nc\r2020 The authors. This article is licensed under a Creative\nCommons 3.0 licence, no derivative works, attribution, CC-\nBY-ND.isfaction. In MAIA, we will develop a multilin-\ngual conversational platform where human agents\nare assisted by AI agents. This approach will over-\ncome the above limitations by targeting the follow-\ning scientific and technological goals:\n\u000fNew memory-efficient neural models for\ncontext-aware machine translation, suitable for\nonline and real-time translation. These models\nwill retain key aspects of a conversation (e.g.,\nthe gender of the customer), bringing them up\nwhenever needed to translate a message.\n\u000fNew answer generation techniques where the\nhuman agent (e.g., a tourism officer) will receive\nsuggestions that reduce effort and increase the\ncustomer’s (e.g. a tourist) satisfaction.\n\u000fNew techniques for conversational quality es-\ntimation and sentiment analysis to assess how\nwell the conversation is addressing the cus-\ntomer’s needs, while simultaneously increasing\n“human empathy”.\n\u000fIntegration of the scientific advances above into\na full end-to-end product. To this end, two\ndemonstrators will be built to cover concrete use\ncases in the Travel and Tourism industries.\n2 Overview of MAIA\nFigure 1 displays a mock-up of the user interface to\nassist the human agent. Illustrated is the conversa-\ntion history (on the agent’s language), a list of an-\nswer suggestions, a message box supporting auto-\ncompletion where the agent can type the response,\nand an indicator of the sentiment of the customer\nthroughout the conversation. 
The overarching goal\nof MAIA is to build context-aware, multilingual,\nempathetic agent assistants. These assistants will\nhelp human agents to provide real-time customer\nFigure 1: Mock-up of the user interface to assist the human agent.\nservice in multiple languages. This will be accom-\nplished by pursuing the following objectives:\n\u000fTranslation layer for multilingual customer\nservice. Enabling multilingual customer service\nby using domain-adapted machine translation\nfor translating messages sent from customers\nto agents and vice-versa. This will be ensured\nby developing new neural machine translation\nmodels that can be efficiently adapted to new do-\nmains and clients, the implementation of auto-\nmatic retraining of machine translation engines,\nand an active learning strategy to use the Unba-\nbel community of human post-editors to trans-\nlate conversations offline, in a recurrent manner,\nto build an ever-increasing parallel datasets.\n\u000fConversational context-awareness. Develop-\nment of methods for neural machine translation\nand automatic response generation that take into\naccount the context of the conversation. This re-\nquires the development of new machine learning\nmethods that are able to compress the conversa-\ntion history into a compact memory representa-\ntion, and to pick the relevant elements whenever\nneeded.\n\u000fModeling customer satisfaction via sentiment\nanalysis and conversational quality estima-\ntion. Development of a module for conversa-\ntional quality estimation that is able to detect\nwhen the agent is effectively answering to thecustomer’s needs, and react otherwise. This will\ntrigger specific actions for either the machine\ntranslation or the answer generation modules. In\naddition, a sentiment analysis component will\nestimate the sentiment and emotions of the cus-\ntomer throughout the conversation, informing\nthe agent.\n\u000fIntegration of the multilingual conversational\nplatform and acquisition of reference cus-\ntomers. Implementation of the MAIA platform\nand execution of Customer Discovery Programs\nfor testing, validating, and improving product\nprototypes, testing for feasibility, usability, and\nviability. Execution of a plan for commercial\nexploitation and use of the plat- forms, systems,\nand technologies developed in MAIA.\nReferences\nAndr ´e F. T. Martins and Marcin Junczys-Dowmunt\nand Fabio Kepler and Ramon Astudillo and Chris\nHokamp and Roman Grundkiewicz. 2017. Pushing\nthe Limits of Translation Quality Estimation, Trans-\nactions of the Association for Computational Lin-\nguistics].\nJonathan Herzig, Guy Feigenblat, Michal Schmueli-\nScheuer, Anat Rafaeli, Daniel Altman, and David\nSpivak. 2016. Classifying emotions in customer\nsupport dialogues in social media. Proceedings of\nthe 2016 SIGDIAL Conference , pp. 64.73.",
"main_paper_content": null
}
|
{
"decision": "Unknown",
"reviews": []
}
| 0 | 0 |
[] |
[] |
{
"id": "NOkw3fuxa71",
"year": null,
"venue": "EAMT 2015",
"pdf_link": "https://aclanthology.org/W15-4944.pdf",
"forum_link": "https://openreview.net/forum?id=NOkw3fuxa71",
"arxiv_id": null,
"doi": null
}
|
{
"title": "Abu-MaTran: Automatic building of Machine Translation",
"authors": [
"Antonio Toral",
"Tommi A. Pirinen",
"Andy Way",
"Gema Ramírez-Sánchez",
"Sergio Ortiz-Rojas",
"Raphael Rubino",
"Miquel Esplà-Gomis",
"Mikel L. Forcada",
"Vassilis Papavassiliou",
"Prokopis Prokopidis",
"Nikola Ljubesic"
],
"abstract": "Antonio Toral, Tommi A. Pirinen, Andy Way, Gema Ramírez-Sánchez, Sergio Ortiz Rojas, Raphael Rubino, Miquel Esplà, Mikel L. Forcada, Vassilis Papavassiliou, Prokopis Prokopidis, Nikola Ljubešić. Proceedings of the 18th Annual Conference of the European Association for Machine Translation. 2015.",
"keywords": [],
"raw_extracted_content": "Abu-MaTran: Automatic building of Machine Translation\nFP7-PEOPLE-2012-IAPP\nhttp://www.abumatran.euList of partners\nDublin City University, Ireland (coordinator)\nPrompsit Language Engineering SL, Spain\nUniversitat d'Alacant, Spain\nUniversity of Zagreb, Faculty of Humanities and Social Sciences, Croatia\nAthena Research and Innovation Center in Information Communication \n& Knowledge Technologies, Greece\nProject duration: January 2013 — December 2016\nSummary\nAbu-MaTran seeks to enhance industry–academia cooperation as a key aspect to tackle one\nof Europe’s biggest challenges: multilingualism. We aim to increase the hitherto low indus-\ntrial adoption of machine translation by identifying crucial cutting-edge research techniques\n(automatic acquisition of corpora and linguistic resources, pivot-language techniques, lin-\nguistically augmented statistical translation and diagnostic evaluation), making them suit-\nable for commercial exploitation. We also aim to transfer back to academia the know-how\nof industry to make research results more robust. We work on a case study of strategic in-\nterest for Europe: machine translation for the language of a new member state (Croatian)\nand related languages. All the resources produced will be released as free/open-source soft-\nware, resulting in effective knowledge transfer beyond the consortium. The project has a\nstrong emphasis on dissemination, through the organisation of workshops that focus on\ninter-sectoral knowledge transfer. Finally, we have a comprehensive outreach plan, includ-\ning the establishment of a Linguistic Olympiad in Spain, open-day activities and the parti-\ncipation in the Google Summer of Code.\nAt EAMT 2015 we will present the results of the second milestone of the project (December\n2014). To mention just a few: (i) MT systems for English–Croatian based on free/open-\nsource software and web crawled and publicly available data, both generic and specific for\nthe tourism domain, (ii) tools developed in the project (e.g. web crawling of parallel data\nand paradigm guessing) and (iii) outcomes of the project's dissemination activities (e.g. soft-\nware management for researchers, data creation for RBMT systems and establishment of a\nlinguistics Olympiad).\n227",
"main_paper_content": null
}
|
{
"decision": "Unknown",
"reviews": []
}
| 0 | 0 |
[] |
[] |
{
"id": "1yVjKZNq3CL",
"year": null,
"venue": "EAMT 2014",
"pdf_link": "https://aclanthology.org/2014.eamt-1.45.pdf",
"forum_link": "https://openreview.net/forum?id=1yVjKZNq3CL",
"arxiv_id": null,
"doi": null
}
|
{
"title": "Extrinsic evaluation of web-crawlers in machine translation: a study on Croatian-English for the tourism domain",
"authors": [
"Antonio Toral",
"Raphael Rubino",
"Miquel Esplà-Gomis",
"Tommi A. Pirinen",
"Andy Way",
"Gema Ramírez-Sánchez"
],
"abstract": "Antonio Toral, Raphael Rubino, Miquel Esplà-Gomis, Tommi Pirinen, Andy Way, Gema Ramírez-Sánchez. Proceedings of the 17th Annual conference of the European Association for Machine Translation. 2014.",
"keywords": [],
"raw_extracted_content": "Extrinsic Evaluation of Web-Crawlers in Machine Translation: a Case\nStudy on Croatian–English for the Tourism Domain\u0003\nAntonio Toraly, Raphael Rubino?, Miquel Espl `a-Gomisz,\nTommi Pirineny, Andy Wayy, Gema Ram ´ırez-S ´anchez?\nyNCLT, School of Computing, Dublin City University, Ireland\nfatoral,tpirinen,away [email protected]\n?Prompsit Language Engineering, S.L., Elche, Spain\nfrrubino,gramirez [email protected]\nzDep. Llenguatges i Sistemes Inform `atics, Universitat d’Alacant, Spain\[email protected]\nAbstract\nWe present an extrinsic evaluation of\ncrawlers of parallel corpora from multi-\nlingual web sites in machine translation\n(MT). Our case study is on Croatian to\nEnglish translation in the tourism domain.\nGiven two crawlers, we build phrase-based\nstatistical MT systems on the datasets pro-\nduced by each crawler using different set-\ntings. We also combine the best datasets\nproduced by each crawler (union and in-\ntersection) to build additional MT systems.\nFinally we combine the best of the previ-\nous systems (union) with general-domain\ndata. This last system outperforms all the\nprevious systems built on crawled data as\nwell as two baselines (a system built on\ngeneral-domain data and a well known on-\nline MT system).\n1 Introduction\nAlong with the addition of new member states to\nthe European Union (EU), the commitment with\nmultilingualism in the EU is strengthened to give\nsupport to new languages. This is the case of Croa-\ntia, the last member to join the EU in July 2013,\nand of the Croatian language, which became then\nan official language of the EU.\nCroatian is the third official South Slavic lan-\nguage in the EU along with Bulgarian and Slovene.\nOther surrounding languages (e.g. Serbian and\n\u0003The research leading to these results has received fund-\ning from the European Union Seventh Framework Pro-\ngramme FP7/2007-2013 under grant agreement PIAP-GA-\n2012-324414 (Abu-MaTran).\n\u0003c\r2014 The authors. This article is licensed under a Creative\nCommons 3.0 licence, no derivative works, attribution, CC-\nBY-ND.Bosnian), although still not official in the EU, be-\nlong also to the same language family and are the\nofficial languages of candidate member states, thus\nbeing also of strategic interest for the EU.\nWe focus on providing machine translation\n(MT) support for Croatian and other South Slavic\nlanguages using and producing publicly available\nresources. Following our objectives, we developed\na general-domain MT system for Croatian–English\nand made it available online on the day Croatia\njoined the EU. It is, to the best of our knowledge,\nthe first available MT system for this language pair\nbased on free/open-source technologies.\nNew languages in the EU like Croatian can ben-\nefit from MT to speed up the flow of information\nfrom and into other EU languages. While this is\nthe case for most types of content it is especially\ntrue for official documentation and for content in\nparticular strategic sectors.\nTourism is one of the most important economic\nsectors in Croatia. 
It represented 15.4% of Croa-\ntia’s gross domestic product in 2012 (up from\n14.4% in 2011).1With almost 12 million foreign\ntourists visiting Croatia annually, the tourism sec-\ntor results in income of 6.8 billion euro.\nThe increasing number of tourists in Croatia\nmakes tourism a relevant domain for MT in or-\nder to provide them with quick and up-to-date\ninformation about the country they are visiting.\nAlthough most visitors come from non-English\nspeaking countries,2English is frequently used as\na lingua franca. This observation led us to our\nfirst approach to support the Croatian tourism sec-\n1http://www.eubusiness.com/news-eu/\ncroatia-economy.nrl\n2According to the site croatia.eu , top emitting coun-\ntries are Germany (24.2%), Slovenia (10.8%), Austria (8.9%),\nItaly (7.9%), Czech Republic (7.9%), etc.\n221\ntor: to provide MT adapted to the tourism domain\nfrom Croatian into English. Later, we will provide\nMT in the visitors’ native languages, i.e. German,\nSlovene, etc.\nWe take advantage of a recent work that crawled\nparallel data for Croatian–English in the tourism\ndomain (Espl `a-Gomis et al., 2014). Several\ndatasets were acquired by using two systems for\ncrawling parallel data with a number of settings. In\nthis paper we assess these datasets by building MT\nsystems on them and checking the resulting trans-\nlation performance. Hence, this work can be con-\nsidered as an extrinsic evaluation of these crawlers\n(and their settings) in MT.\nBesides building MT systems upon the domain-\nspecific crawled data, we study the concurrent ex-\nploitation of domain-specific and general-domain\ndata, with the aim of improving the overall per-\nformance and coverage of the system. From this\nperspective, our case study falls in the area of do-\nmain adaptation of MT, following previous works\nin domains such as labour legislation and natu-\nral environment for English–French and English–\nGreek (Pecina et al., 2012) and automotive for Ger-\nman to Italian and French (L ¨aubli et al., 2013).\nThe rest of the paper is organised as follows.\nSection 2 presents the crawled datasets used in this\nstudy and details the processing undertaken to pre-\npare them for MT. Section 3 details the different\nMT systems built. Section 4 shows and comments\nthe results obtained. Finally, Section 5 draws con-\nclusions and outlines future lines of work.\n2 Crawled Datasets\nDatasets were crawled using two crawlers: ILSP\nFocused Crawler (FC) (Papavassiliou et al., 2013)\nand Bitextor (Espl `a-Gomis et al., 2010). The de-\ntection of parallel documents was carried out with\ntwo settings for each crawler: 10best and 1best for\nBitextor and reliable and all for FC (see (Espl `a-\nGomis et al., 2014) for further details). It is\nworth mentioning that reliable and 1best are sub-\nsets of all and 10best, respectively. These sub-\nsets were obtained with a more strict configura-\ntion of each crawler and, therefore, are expected\nto contain higher quality parallel text. In addition,\na set of parallel segments was obtained by aligning\nonly those pairs of documents which were checked\nmanually by two native speakers of Croatian.\nBoth Bitextor and FC segment the documents\naligned by using the HTML tags. 
These seg-ments were re-segmented in shorter segments and\ntokenised with the sentence splitter and tokeniser\nincluded in the Moses toolkit.3\nThe resulting segments were then aligned with\nHunalign (Varga et al., 2005), using the option\nrealign , which provides a higher quality align-\nment by aligning the output of the first align-\nment. The documents from each website were con-\ncatenated prior to aligning them using tags ( <p>)\nto mark document boundaries. Aligning multi-\nple documents at once allows Hunalign to build a\nlarger dictionary for alignment while ensuring that\nonly segments belonging to the same document\npair are aligned to each other. The resulting pairs\nof segments were filtered to remove those with a\nconfidence score lower than 0.4.4\nFrom the aligned segments coming from manu-\nally checked document pairs we remove duplicate\nsegments. We only keep pairs of segments with\nconfidence score higher than 1.5These segments\nare randomised and we keep two sets, one of 825\nsegmens for the development set and one of 816\nsegments for the test set.\nFrom the other 4 datasets, those obtained with\nthe different settings of the two crawlers (1best,\n10best, all and reliable), duplicate pairs of seg-\nments were also removed. Pairs of segments ap-\npearing either in the test or development set were\nalso removed. The remaining pairs of segments are\nkept and will be used for training MT systems.\nApart from the domain-specific crawled data we\nuse additional general-domain (gen) data gathered\nfrom several sources of Croatian–English paral-\nlel data: hrenWaC,6SETimes7and TED Talks.8\nThese three datasets are concatenated and will be\nused to build a baseline MT system.\nTable 1 presents statistics (number of sentence\npairs, number of tokens and number of unique\ntokens in source (Croatian) and target (English)\nlanguage) of the previously introduced parallel\ndatasets for Croatian–English. The table shows\n3https://github.com/moses-smt/\nmosesdecoder\n4Manual evaluation for English, French and Greek concluded\nthat 0.4 was an adequate threshold for Hunalign’s confidence\nscore (Pecina et al., 2012).\n5While segment pairs with score above 0.4, as shown above,\nare deemed to be of reasonable quality for training, we raise\nthe threshold to 1 for test and development data.\n6http://nlp.ffzg.hr/resources/corpora/\nhrenwac/\n7http://nlp.ffzg.hr/resources/corpora/\nsetimes/\n8http://zeljko.agic.me/resources/\n222\nDataset # s. pairs # tokens # uniq t.\ndev 82530,851 10,119\n34,558 7,588\ntest 81628,098 9,585\n31,541 7,366\ngen 387,2598,084,110 288,531\n9,015,757 149,430\n1best 27,761592,236 80,958\n680,067 46,671\n10best 34,815760,884 86,391\n864,326 52,660\nreliable 23,225613,804 71,657\n706,227 37,399\nall 27,154719,526 77,291\n819,353 40,095\nunion 52,0971,243,142 103,671\n1,418,950 60,956\nintersection 5,939131,569 28,761\n155,432 16,290\nTable 1: Statistics of the parallel datasets. For each\ndataset the first line corresponds to statistics for\nCroatian and the second to English.\ntwo additional datasets: union and intersection.\nThese are the union and intersection of datasets\n10best and reliable.\n3 Machine Translation Systems\nPhrase-based statistical MT (PB-SMT) systems\nare built with Moses 2.1 (Koehn et al., 2007). 
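A condensed sketch of the segment filtering described in Section 2 is given below; the data structures are illustrative, while the thresholds are the ones quoted above (0.4 for training data, 1 for the manually checked material used for dev/test).

```python
def filter_training_segments(aligned_pairs, held_out, min_score=0.4):
    # Keep Hunalign-aligned segment pairs above the confidence threshold, drop
    # duplicates, and drop anything that also occurs in the dev or test sets.
    kept, seen = [], set()
    for src, tgt, score in aligned_pairs:
        pair = (src, tgt)
        if score < min_score or pair in seen or pair in held_out:
            continue
        seen.add(pair)
        kept.append(pair)
    return kept
```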
Tun-\ning is carried out on the development set with min-\nimum error rate training (Och, 2003).\nAll the MT systems use an English language\nmodel (LM) from our system for French !English\nat the WMT-2014 translation shared task (Rubino\net al., 2014).9We built individual LMs on each\ndataset provided at WMT-2014 and then interpo-\nlated them on a development set of the news do-\nmain (news2012).\nMost systems are built on a single dataset, hence\nthey have one phrase table and one reordering ta-\nble. These systems include a baseline built on the\ngeneral-domain data (gen), four systems built on\nthe crawled datasets (1best, 10best, reliable and\nall) and two systems built on the union and in-\ntersection of the best performing10dataset of each\ncrawler: 10best and reliable.\nThere is also one system (gen+u) built on two\ndatasets, the general-domain (gen) dataset and a\ndomain-specific dataset (union). Phrase tables\nfrom the individual systems gen and union are in-\nterpolated so that the perplexity on the develop-\nment set is minimised (Sennrich, 2012).\n9http://www.statmt.org/wmt14/\ntranslation-task.html\n10According to the BLEU score on the development set.System BLEU METEOR TER OOV\ngen 0.4092 0.3005 0.5601 9.5\ngoogle 0.4382 0.2947 0.5295 -\n1best 0.5304 0.3478 0.4848 7.6\n10best 0.5176 0.3436 0.5016 7.2\nreliable 0.4064 0.2945 0.5755 12.6\nall 0.4105 0.2927 0.5756 12.4\nunion 0.5448 0.3583 0.4726 6.3\ninters. 0.3224 0.2456 0.6582 23.1\ngen+u 0.5722 0.3767 0.4451 4.1\nTable 2: SMT results.\n4 Results\nThe MT systems are evaluated with a set of state-\nof-the-art evaluation metrics: BLEU (Papineni et\nal., 2002), TER (Snover et al., 2006) and ME-\nTEOR (Lavie and Denkowski, 2009). For each\nsystem we also report the percentage of out-of-\nvocabulary (OOV) tokens.\nTable 2 shows the scores obtained by each MT\nsystem. We compare our systems to two baselines:\na PB-SMT system built on general-domain data\n(gen) and an on-line MT system, Google Trans-\nlate11(google).\nSystems built solely on in-domain data outper-\nform the baselines (1best and 10best) or obtain\nsimilar results (reliable and all). Different crawling\nparameters of the same crawler (10best vs 1best\nand reliable vs all) do not seem to have much\nof an impact. In fact, while the scores by 1best\nare slightly better than scores by 10best, the latter\nscored slightly better on the development set (and\nthus it is used in system union).\nThe union of data crawled by both Bitextor\n(10best) and FC (reliable) achieves a further im-\nprovement over the top performing system built on\ndata by a single crawler (BLEU 0.5448 vs 0.5304).\nThe system built on the intersection is the least\nperforming system (BLEU 0.3224) but it should\nbe noted that this system is built on a very small\namount of data (5,939 sentence pairs, cf. Table 1).\nFinally a system built on the interpolation of\nthe systems union and gen obtains the best perfor-\nmance, beating all the other systems for all met-\nrics. In the interpolation procedure system union\nwas weighted around 85% and system gen around\n15%. Hence, the data provided by the union of the\ncrawlers, although considerably smaller than the\ngeneral-domain data (52,097 vs 387,259 sentence\npairs), is considered more valuable for translating\nthe domain-specific development set.\n11http://translate.google.com/\n223\n5 Conclusions and Future Work\nWe have presented an extrinsic evaluation of par-\nallel crawlers in MT. 
Our case study is on Croatian\nto English translation in the tourism domain.\nGiven two crawlers, we have built PB-SMT sys-\ntems on the datasets produced by each crawler us-\ning different settings. We have then combined\nthe best datasets produced by each crawler (both\nintersection and union) and built additional MT\nsystems. Finally we have combined the best of\nthe previous systems (union) with general-domain\ndata. This last system outperforms all the previous\nsystems built on crawled data as well as two base-\nlines (a PB-SMT system built on general-domain\ndata and a well known on-line MT system).\nAs future work we plan to build MT systems for\nother relevant languages. As German, Slovene and\nItalian account for over 50% of incoming tourists\nin Croatia, we consider of strategic interest to build\nsystems that translate from Croatian into these lan-\nguages. Even more as it seems that on-line MT\nsystems covering these pairs do not perform the\ntranslation directly but use English as a pivot.\nCroatian–Slovene is a pair of closely-related\nlanguages, already covered by Apertium.12We\nplan to perform domain adaptation on tourism\nof this rule-based MT system following previous\nwork in this area (Masselot et al., 2010). For the re-\nmaining languages (German and Italian), we plan\nto build SMT systems with crawled data following\nthe approach presented in this paper.\nReferences\nEspl`a-Gomis, Miquel, Felipe S ´anchez-Mart ´ınez, and Mikel L.\nForcada. 2010. Combining content-based and url-based\nheuristics to harvest aligned bitexts from multilingual sites\nwith Bitextor. The Prague Bulletin of Mathemathical Lin-\ngustics , 93:77–86.\nEspl`a-Gomis, Miquel, Filip Klubi ˇcka, Nikola Ljube ˇsi´c, Ser-\ngio Ortiz-Rojas, Vassilis Papavassiliou, and Prokopis\nProkopidis. 2014. Comparing two acquisition systems for\nautomatically building an English–Croatian parallel cor-\npus from multilingual websites. In Proceedings of the 9th\nLanguage Resources and Evaluation Conference .\nKoehn, Philipp, Hieu Hoang, Alexandra Birch, Chris\nCallison-Burch, Marcello Federico, Nicola Bertoldi,\nBrooke Cowan, Wade Shen, Christine Moran, Richard\nZens, Chris Dyer, Ond ˇrej Bojar, Alexandra Constantin,\nand Evan Herbst. 2007. Moses: open source toolkit\nfor statistical machine translation. In Proceedings of the\n45th Annual Meeting of the ACL on Interactive Poster and\nDemonstration Sessions , pages 177–180.\n12https://svn.code.sf.net/p/apertium/svn/\ntrunk/apertium-hbs-slv/L¨aubli, Samuel, Mark Fishel, Manuela Weibel, and Martin\nV olk. 2013. Statistical machine translation for automo-\nbile marketing texts. In Machine Translation Summit XIV:\nmain conference proceedings , pages 265–272.\nLavie, Alon and Michael J. Denkowski. 2009. The me-\nteor metric for automatic evaluation of machine transla-\ntion. Machine Translation , 23(2-3):105–115.\nMasselot, Franc ¸ois, Petra Ribiczey, and Gema Ram ´ırez-\nS´anchez. 2010. Using the apertium spanish-brazilian por-\ntuguese machine translation system for localisation. In\nProceedings of the 14th Annual conference of the Euro-\npean Association for Machine Translation .\nOch, Franz Josef. 2003. Minimum error rate training in sta-\ntistical machine translation. In Proceedings of the 41st An-\nnual Meeting on Association for Computational Linguis-\ntics, pages 160–167.\nPapavassiliou, Vassilis, Prokopis Prokopidis, and Gregor\nThurmair. 2013. A modular open-source focused crawler\nfor mining monolingual and bilingual corpora from the\nweb. 
In Proceedings of the Sixth Workshop on Building and Using Comparable Corpora, pages 43–51.\nPapineni, Kishore, Salim Roukos, Todd Ward, and Wei-Jing Zhu. 2002. BLEU: a method for automatic evaluation of machine translation. In Proceedings of the 40th Annual Meeting on Association for Computational Linguistics, pages 311–318.\nPecina, Pavel, Antonio Toral, Vassilis Papavassiliou, Prokopis Prokopidis, and Josef van Genabith. 2012. Domain adaptation of statistical machine translation using web-crawled resources: a case study. In Proceedings of the 16th Annual Conference of the European Association for Machine Translation, pages 145–152.\nRubino, Raphael, Antonio Toral, Victor M. Sánchez-Cartagena, Jorge Ferrández-Tordera, Sergio Ortiz-Rojas, Gema Ramírez-Sánchez, Felipe Sánchez-Martínez, and Andy Way. 2014. Abu-MaTran at WMT 2014 Translation Task: Two-step Data Selection and RBMT-Style Synthetic Rules. In Proceedings of the Ninth Workshop on Statistical Machine Translation, Baltimore, USA. Association for Computational Linguistics.\nSennrich, Rico. 2012. Perplexity minimization for translation model domain adaptation in statistical machine translation. In Proceedings of the 13th Conference of the European Chapter of the Association for Computational Linguistics, pages 539–549.\nSnover, Matthew, Bonnie Dorr, Richard Schwartz, Linnea Micciulla, and Ralph Weischedel. 2006. A study of translation error rate with targeted human annotation. In Proceedings of the Association for Machine Translation in the Americas.\nVarga, Dániel, László Németh, Péter Halácsy, András Kornai, Viktor Trón, and Viktor Nagy. 2005. Parallel corpora for medium density languages. In Proceedings of RANLP, pages 590–596.",
"main_paper_content": null
}
|
{
"decision": "Unknown",
"reviews": []
}
| 0 | 0 |
[] |
[] |
{
"id": "3FIk1-aWgRm",
"year": null,
"venue": "EAMT 2011",
"pdf_link": "https://aclanthology.org/2011.eamt-1.39.pdf",
"forum_link": "https://openreview.net/forum?id=3FIk1-aWgRm",
"arxiv_id": null,
"doi": null
}
|
{
"title": "Definite Noun Phrases in Statistical Machine Translation into Scandinavian Languages",
"authors": [
"Sara Stymne"
],
"abstract": "Sara Stymne. Proceedings of the 15th Annual conference of the European Association for Machine Translation. 2011.",
"keywords": [],
"raw_extracted_content": "Definite Noun Phrases in Statistical Machine Translation into\nScandinavian Languages\nSara Stymne\nLink ¨oping University, Link ¨oping, Sweden\nXerox Research Centre Europe, Grenoble, France\[email protected]\nAbstract\nThe Scandinavian languages have an un-\nusual structure of definite noun phrases\n(NPs), with a noun suffix as one possibility\nof expressing definiteness, which is prob-\nlematic for statistical machine translation\nfrom languages with different NP struc-\ntures. We show that translation can be im-\nproved by simple source side transforma-\ntions of definite NPs, for translation from\nEnglish and Italian, into Danish, Swedish,\nand Norwegian, with small adjustments of\nthe preprocessing strategy, depending on\nthe language pair. We also explored target\nside transformations, with mixed results.\n1 Introduction\nOne problem for statistical machine translation is\nwhen the source language has a different structure\nin some respect than the target language. One such\nissue is the unusual realization of definite noun\nphrases in Scandinavian languages. Definiteness\ncan be expressed in two ways in Scandinavian lan-\nguages, either by a definite article or by a suffix on\nthe head noun. This is problematic for translation\nfrom languages that only use definite articles, such\nas English or Italian, leading to problems such as\nwrong noun forms and spurious definite articles in\nthe translation output.\nIt has previously been shown that definite noun\nphrases can successfully be handled by a prepro-\ncessing step for translation between English and\nDanish (Stymne, 2009b). In this study, source lan-\nguage noun phrases were transformed to be sim-\nilar in structure to target language NPs. In this\npaper we show that preprocessing of definiteness\nc/circlecopyrt2011 European Association for Machine Translation.also can be successful for translation from English\ninto Swedish and Norwegian, and from Italian to\nDanish, using the same basic strategy as in Stymne\n(2009b). However, some small careful modifica-\ntions to the original English-Danish preprocessing\nstrategy were necessary.\n2 Definiteness\nIn the Scandinavian languages there are two mech-\nanisms for expressing definiteness, by using a def-\ninite article or by using a suffix on the head noun.\nThese mechanisms can also be used in combina-\ntion, so called double definiteness. The distri-\nbution rules for these two mechanisms are quite\nstrict in all Scandinavian languages, but they dif-\nfer somewhat between them. The noun phrase re-\nalization is different in NPs with a pre-modifier\nsuch as an adjective or numeral and in NPs with-\nout pre-modifiers. Table 1 shows the allowed and\ndisallowed combinations in Swedish, Norwegian\nBokm ˚al1and Danish, which are the target lan-\nguages we focus on in this paper, and compares\nit to English and Italian, the source languages we\nfocus on. There are similar phenomena in the other\nScandinavian languages – Norwegian Nynorsk,\nIcelandic, and Faroese – as well.\nAs can be seen in Table 1 there is a difference in\npre-modified noun phrases, where the definite arti-\ncle is used, and in simple noun-phrases where the\nsuffix is used. In Swedish and Norwegian there is\ndouble definiteness in pre-modified noun phrases,\nsomething that never occurs in Danish, where\nonly the definite article is used in pre-modified\nnoun phrases. 
The definite article, den/det/de (in-\nflected for gender and number), coincides with\n1There are two written varieties of Norwegian, Bokm ˚al and\nNynorsk. We will use the term Norwegian to refer to Norwe-\ngian Bokm ˚al.Mik el L. F orcada, Heidi Depraetere, Vincen t V andeghinste (eds.)\nPr o c e e dings of the 15th Confer enc e of the Eur op e an Asso ciation for Machine T r anslation , p. 289\u0015296\nLeuv en, Belgium, Ma y 2011\nNP type Swedish Norwegian Danish English Italian\nsg, -mod hund en hund en hund en the dog ilcane\n*denhund( en) * denhund( en) * denhund( en)\nsg, +mod densvarta hund en den svarte hund en den sorte hund theblack dog ilcane nero\n*densvarta hund * densvarte hund * densorte hunden\n*svarta hund en *svarte hund en *sorte hund en\npl, -mod hundar na hunde ne hunde ne the dogs icani\npl, +mod desvarta hundar na de svarte hunde ne de sorte hunde theblack dogs icani neri\nTable 1: Definite noun phrases in Swedish, Norwegian, and Danish, contrasted to English and Italian.\nNP type shows number (singular or plural), and if the NP is modified by an adjective or not. The\ngrammaticality judgments are for a definite reading of the definite articles den/de . In some cases these\nexamples are acceptable with a demonstrative reading of den/de , see Table 2.\nthe demonstrative article, so some of the ungram-\nmatical examples in Table 1 are grammatical in a\ndemonstrative reading, see Table 2.\nIn demonstrative NPs, shown in Table 2, there is\nalways double definiteness in Norwegian, whereas\nonly the demonstrative article is used in Danish.\nIn Swedish, the use of the definite suffix depends\non the choice of demonstrative article, with den\n(h¨ar)the suffix is used, but with denna no suffix\nis used. In possessive noun phrases, the indefinite\nnoun form is always used, except in a Norwegian\noption with a final possessive pronoun, where the\ndefinite suffix is used.\nNPs that are post-modified by a relative clause\nconstitute an additional complication. In all three\nlanguages both types of definiteness marker are al-\nlowed in NPs with relative clauses, exemplified in\nSwedish in (1–2). The definite article tends to be\nused with restrictive relative clauses, and the suffix\nfor non-restrictive relative clauses. But, just as in\nEnglish where the choice of relative pronoun and\ncommas can be used for this purpose, the distinc-\ntion between the two cases is fuzzy, and there are\nmany exceptions to the general tendency. Thus, we\nwill not be further concerned with relative clause\nexceptions in this paper.\n(1) Den hund som sk ¨allde ¨ar sn ¨all\nThe dog that barked is nice\n(2) Hunden som sk ¨allde ¨ar sn ¨all\nThe dog, which barked, is nice\nThere are also other special cases with irreg-\nular behavior, such as name-like uses like Vita\nhuset (the White House ) where the definite article\nis not used, or in connection with what Dahl (2003)\ncall selectors, inherently definite words, like f¨orst\n(first) orh¨oger (right ), where the realization varies.\nThese cases will also be ignored.\nIn summary Danish is most regular with respectto the definite suffix, which is only used in NPs\nwithout pre-modifiers. In Swedish and Norwegian\nthe definite suffix can be used in other construc-\ntions than pure definite NPs, such as demonstrative\nor possessive NPs.\nIn English and Italian the same base noun form\nis always used, see Tables 1 and 2, both with\ndefinite and demonstrative articles. 
In possessive\nnoun phrases Italian uses both a definite article and\na possessive adjective, contrary to the other lan-\nguages that mostly use just a possessive pronoun.\nItalian adjectives can be pre- and post-modifiers to\nnouns, as in (3), whereas all other languages only\nhave pre-modifying adjectives.\n(3) il\nthegrande\nbigcane\ndognero\nblack\n3 Previous Work\nOur work fits into a growing mass of work where\neither the source or target language is preprocessed\nbefore training a SMT system, in order to make the\nlanguages more similar. If the target language is\nmodified, a postprocessing step is necessary. Such\nmodifications have been targeted at many different\nphenomena, such as compound words and word\norder.\nThe current study is based on Stymne (2009b),\nwho address the issue of definiteness in transla-\ntion from English to Danish, by transforming En-\nglish NPs to a structure similar to that of Danish\nNPs. Rule-based transformations based on part-\nof-speech were used. The results, using only one\nsimple transformation, were very good with rela-\ntive Bleu improvements of 7.7% and 22.1% on two\ndifferent domains. Definiteness was also targeted\nby Samuelsson (2006), who transformed German\ntext, based only on surface forms, for translation\ninto Swedish. There were no improvements on\ntranslation from German to Swedish using this290\nNP type Swedish Norwegian Danish English Italian\ndem, -mod den (h ¨ar)hund en den hund en den hund thisdog questo cane\ndenna hund denne hund en denne hund\ndem, +mod den (h ¨ar)svarta hund en den svarte hund en den sorte hund thisblack dog questo cane nero\nposs, -mod min hund min hund min hund my dog ilmio cane\nhund enmin\nposs, +mod min svarta hund min svarte hund min sorte hund my black dog ilmio cane nero\ndensvarte hund enmin\nTable 2: Demonstrative and possessive noun phrases in Swedish, Norwegian, and Danish, contrasted to\nEnglish and Italian. NP type shows if the NP is dem(onstrative) or poss(essive), and if the NP is modified\nby an adjective or not.\napproach, but for translation in the other direc-\ntion, which included postprocessing of the mod-\nified German NPs, there was a relative Bleu im-\nprovement of 11.0%.\nPre- and postprocessing have also been used\nfor compound words, both for translating from\nGermanic languages such as German (Nießen and\nNey, 2000) and Swedish (Stymne and Holmqvist,\n2008), and for translation into a Germanic lan-\nguage, which requires post-processing where split\ncompounds are merged (Stymne, 2009a). Nießen\nand Ney (2000) explored several types of prepro-\ncessing for translation from German to English\nbesides compound splitting, including merging of\nmulti-word expressions, and separation of German\nverb prefixes, with good results. Preprocessing has\nalso been used extensively for targeting word or-\nder differences between languages, either by using\nhand written rules targeting known differences be-\ntween two languages (Collins et al., 2005), and au-\ntomatically learnt rules (Xia and McCord, 2004) to\nreorder the source language.\nAnother type of preprocessing is morphological\nreduction, i.e. to remove some of the morphologi-\ncal information in one of the languages. 
Goldwa-\nter and McClosky (2005) used lemmatized Czech,\nwith the addition of morphological tags both as\nseparate words and as suffixes, for translation into\nEnglish, and El-Kahlout and Yvon (2010) normal-\nized German morphology by removing all distinc-\ntions that are not present in English, both with pos-\nitive results. Fraser (2009) removed all German in-\nflections for translation into German, and recreated\nit in a postprocessing step, however, with negative\nresults. In these three studies, some information is\nremoved before the translation process. It seems,\nhowever, that care has to be taken not to remove\ntoo much information.\nAll these approaches work on different levels\nof linguistic representations, and require differentlinguistic tools. The lowest possible level of rep-\nresentation is surface form, which does not re-\nquire any linguistic processing, and is used in\nSamuelsson (2006). Methods based on part-of-\nspeech (Stymne, 2009b), chunks (Zhang et al.,\n2007), or parse trees (Collins et al., 2005) are more\ncommonly used. Some approaches also use mor-\nphological analyzers (Goldwater and McClosky,\n2005). While there is more information on the\nhigher level of linguistic representations, tools tend\nto make more errors, the more complex they are.\nThere is thus a trade-off between the expressivity\nand generality of the representation used, and its\ncorrectness using automatic tools.\n4 Preprocessing Strategies\nOur main strategy used to improve the translation\nwith respect to definiteness is to transform def-\ninite NPs in the source language, to make them\nsimilar in structure to NPs in the target language.\nWe also explore the opposite, to transform the tar-\nget to make it more similar to the source. These\nstrategies are based on the assumption that defi-\nnite noun phrases in the source language always\nare translated with definite noun phrases in the tar-\nget language, which is not always the case. Fur-\nther, we only focus on strict definite NPs, we do\nnot take into account demonstrative clauses or pos-\nsessive clauses, whose realization can differ in\nSwedish and Norwegian, and which always have\nnon-definite nouns in Danish.\nFor the source side processing we need to iden-\ntify definite NPs in English and Italian. The\ntarget side processing was only implemented for\nSwedish, and for that we identify Swedish defi-\nnite NPs without pre-modifiers. We use part-of-\nspeech tags and lemmas to identify definite noun\nphrases, obtained by an in-house Hidden-Markov-\nbased part-of-speech tagger for Italian and En-\nglish, and the Granska tagger for Swedish (Carl-291\nLanguage pair Non-modified NPs Modified NPs\nEnglish-Danish remove-DEF, add-DEFSUFFIX none\nItalian-Danish remove-DEF, add-DEFSUFFIX move-ADJ\nEnglish-Swedish/Norwegian 1 remove-DEF, add-DEFSUFFIX add-DEFSUFFIX\nEnglish-Swedish/Norwegian 2 remove-DEF none\nTable 3: Operators used to transform the source language for the different language pairs. 
Modified\nmeans pre-modified (or post-modified for Italian), by at least one adjective or numeral.\nOrig En: the central body is called ’ the european food authority ’ or ’ the authority ’ for short\nfor Da: the central body is called ’ european food authority-DEF ’ or ’ authority-DEF ’ for short\nfor Sv/No 1: the central body-DEF is called ’ the european food authority-DEF ’ or ’ authority-DEF ’ for short\nfor Sv/No 2: the central body is called ’ the european food authority ’ or ’ authority ’ for short\nOrig It: nei fondi strutturali , notiamo problemi nell’ applicazione delle normative a tutti i livelli\nfor Da: in il strutturali fondi , notiamo problemi in applicazione-DEF di normative-DEF a tutti livelli-DEF\nTable 4: Example source side transformations\nberger and Kann, 1999). Part-of-speech tags are\nused to identify nouns, adjectives and numerals,\nbut definite articles are not distinguished from\nother articles in the tagsets used for Italian and\nEnglish, so for them we use surface form in En-\nglish, the, and lemma in Italian, where all definite\narticles are given the lemmas looril. In addition,\nprepositions and articles can be contracted in Ital-\nian, but this is also handled by the lemmas, where\ncontractions are split and normalized, for instance\ndelle/dell’ tode lo . For Swedish, we have morpho-\nlogical tags, which identify nouns and articles as\ndefinite or indefinite.\nThe pattern used to identify English definite\nnoun phrases is defined in (4), and consists of a\ndefinite article, possibly followed by an arbitary\nnumber of modifiers: adjectives or numerals, fol-\nlowed by at least one noun. The pattern used for\nItalian definite NPs is defined in (5), and it differs\nfrom English in that adjectives can be placed af-\nter the head noun, in addition to before. In prac-\ntice though, allowing an arbitrary number of pre-\nmodifiers are error prone, due to tagging errors, so\nwe restrict transformations to noun phrases with\na maximum of two pre-modifiers. For Swedish\nwe are only interested in identifying definite NPs\nwithout pre-modifiers, and thus use the simplified\npattern in (6), where we identify definite nouns\nwhich are not preceded by a pre-modifier or an ar-\nticle.\n(4)DEF-ART (ADJ|NUM) *NOUN+\n(5)DEF-ART (ADJ|NUM) *NOUN+\nADJ*\n(6) ¬(ADJ|NUM|ART) NOUN-DEF\nWe chose to use part-of-speech and lemmassince we believe that gives us enough information\nto extract the definite NPs we need. An alterna-\ntive would have been to use a parser or chunker\nto identify noun phrases, but such tools generally\nhave more errors than a POS-tagger. The patterns\nin (4–5) in practice constitute a chunker, though,\nbut only for the definite NPs we need.\n4.1 Source Side Processing\nIn order to perform the transformations we use\ntwo main operators, remove-DEFART and add-\nDEFSUFFIX, where the first one removes unnec-\nessary definite articles in the source language, and\nthe second adds a definite suffix to the head noun,\nwhich is often a single noun, but can be the last\nof many nouns for noun compounds. For Italian\nas a source language, we introduce a third oper-\nator, move-ADJ, which moves adjectives that are\nplaced behind the noun, to before the noun. The\nchoice of operators depends on if the identified\nnoun phrase has adjectival and/or numeral mod-\nifiers or not. 
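To make the patterns in (4)–(6) and the operators just introduced concrete, the sketch below matches English-style definite NPs, pattern (4), over a POS-tagged sentence and applies remove-DEFART and add-DEFSUFFIX using the English-to-Danish configuration of Table 3 above, with the "-DEF" marker of Table 4. The tag names, data representation and functions are our own simplification, not the implementation or tagset actually used in the experiments.

# Simplified sketch of pattern (4), DEF-ART (ADJ|NUM)* NOUN+, and of the
# remove-DEFART / add-DEFSUFFIX operators, e.g. "the authority" -> "authority-DEF".
# Illustrative only.

MODIFIER_TAGS = {"ADJ", "NUM"}
MAX_PREMODIFIERS = 2  # longer premodifier chains are too error-prone to tag reliably

def find_definite_nps(tagged):
    """Yield (start, end) spans of definite NPs over a list of (token, tag) pairs."""
    i, n = 0, len(tagged)
    while i < n:
        if tagged[i][1] != "DEF-ART":
            i += 1
            continue
        j, mods = i + 1, 0
        while j < n and tagged[j][1] in MODIFIER_TAGS and mods < MAX_PREMODIFIERS:
            j, mods = j + 1, mods + 1
        nouns = 0
        while j < n and tagged[j][1] == "NOUN":
            j, nouns = j + 1, nouns + 1
        if nouns:
            yield i, j
            i = j
        else:
            i += 1

def transform_for_danish(tagged):
    """English-to-Danish strategy: in definite NPs without pre-modifiers,
    remove the article and suffix the head noun with -DEF; leave
    pre-modified NPs unchanged."""
    out = list(tagged)
    for start, end in find_definite_nps(tagged):
        if any(tag in MODIFIER_TAGS for _, tag in tagged[start + 1:end]):
            continue                                    # pre-modified: no operators
        head_tok, head_tag = out[end - 1]               # the last noun is the head
        out[end - 1] = (head_tok + "-DEF", head_tag)    # add-DEFSUFFIX
        out[start] = None                               # remove-DEFART
    return [t for t in out if t is not None]

if __name__ == "__main__":
    sent = [("the", "DEF-ART"), ("central", "ADJ"), ("body", "NOUN"),
            ("is", "VERB"), ("called", "VERB"),
            ("the", "DEF-ART"), ("authority", "NOUN")]
    print(" ".join(tok for tok, _ in transform_for_danish(sent)))
    # the central body is called authority-DEF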
Swedish and Norwegian have the\nsame structure of definite NPs, and can thus use\nthe same strategies, whereas Danish has a differ-\nent structure.\nFor English-to-Danish we follow the strategy\ndescribed in Stymne (2009b). For noun phrases\nwithout pre-modifiers both remove-DEFART and\nadd-DEFSUFFIX is used, since they are only\nmarked with a definite suffix. For noun phrases\nwith pre-modifiers, no operators are used, since\nthey have the same structure as in English.\nFor Italian-to-Danish, the strategy is the same\nas for English-to-Danish, but we also take into\naccount post-modifying adjectives, in addition to\npre-modifiers, to distinguish the two classes of def-292\nLanguage pair Decoder Corpus Sentences Source words Source, proc Target words\nEnglish-Danish 1 Matrax Automotive 168,046 1,526,759 1,479,186 1,395,661\nEnglish-Danish 2 Matrax+P+CS Automotive 168,046 1,526,759 1,479.186 1,478,707\nItalian-Danish Matrax Europarl 100,000 2,086,719 2,057,694 2,003,699\nEnglish-Norwegian Matrax Automotive 395,733 3,889,706 3,733,483 3,340,557\nEnglish-Swedish 1 Matrax Automotive 327,596 3,454,887 3,295,481 2,870,623\nEnglish-Swedish 2 Moses+P Europarl 701,157 15,043,321 14,385,253 13,603,062\nTable 5: Experiment setup: languages, decoder, corpus and corpus statistics. +P on the decoder means\nthat we used a sequence model based on part-of-speech, and +CS that compounds are processed.\ninite NPs. The operator move-ADJ is used for def-\ninite noun phrases that contain an adjective that\npost-modifies the noun. In addition we also nor-\nmalize the definite articles that are not removed,\nby replacing them with the lemmas ilin singular\nandloin plural.\nFor English-to-Swedish/Norwegian, we tried\nto mimic the strategy for English-to-Danish as\nclosely as possible, while still taking into ac-\ncount the differences in realization between the\nlanguages. That means that for noun phrases\nwithout modifiers, the strategy is the same as for\nDanish, to use both remove-DEFART and add-\nDEFSUFFIX. For pre-modified phrases, though,\nwe need to use add-DEFSUFFIX, since both types\nof definite markers are used there. We also de-\ncided to try a second strategy, where we do not use\nadd-DEFSUFFIX, since the distribution of the def-\ninite suffixes are more complex in these languages\nin other types of phrases, and thus only used\nremove-DEFART in NPs without pre-modifiers.\nThis transformation means that we lose informa-\ntion about definiteness in the source, and thus leave\nthe choice of using a definite suffix or not mainly\nto the language model. The source side transfor-\nmations for the different language pairs are sum-\nmarized in Table 3, and exemplified in Table 4.\n4.2 Target Side Processing\nWe also tried to preprocess the target side of the\ncorpus for Swedish by adding articles that are\npresent in English bare definite NPs, exemplified\nin (7). 
This transformation addresses the problem\nof the source side strategies for Swedish, where the\nfirst strategy creates markup only on definite nouns\nin pure definite NPs, and not in other contexts,\nsuch as in demonstrative NPs, and where the sec-\nond strategy loses information present in English.\nThe added articles will be present in the Swedish\nMT output, and we thus need a postprocessing step\nto remove them.\n(7) grundvalen ¨ar det sv ˚ara beslutetDEF grundvalen ¨ar det sv ˚ara beslutet\n(the) basis-DEF is the hard decision-DEF\nThe added definite articles are separate tokens,\nDEF , which differ in surface form from the normal\nSwedish definite article, since we need to be able\nto identify them, in order to remove them in the\npostprocessing step. The tokens are added in NPs\nwithout pre-modifiers, where the head noun is in\ndefinite form. After translation, the DEF tokens\nare removed in the translation output.\n5 Experiments\nWe used two standard phrase-based decoders, Ma-\ntrax (Simard et al., 2005) and Moses (Koehn et al.,\n2007). Matrax allows noncontiguous bi-phrases,\nsuch as jeopardize –bringe . . . i fare (bring . .\n. into danger ) for English-Danish, where words in\nthe source, target, or both sides can be separated\nby gaps that have to be filled by other phrases at\ntranslation time. In the experiments we allowed up\nto four gaps per phrase pair. Moses, and most other\nphrase-based decoders can only use contiguous\nphrases. We use a 3-gram language model in Ma-\ntrax, and a 5-gram model in Moses. In some exper-\niments, we also used an additional sequence model\nbased on part-of-speech. The sequence models\nwere trained using the SRILM toolkit (Stolcke,\n2002).\nTo train the decoder we used two different cor-\npora, Europarl, proceedings of the European Par-\nliament (Koehn, 2005), and an automotive cor-\npus, collected from translation memory data. To\nreduce training times we did not use all data\nfrom Europarl. For the Italian-Danish experi-\nment we randomly selected the sentences to use,\nand for English-Swedish we used version 2 of\nEuroparl. The types and sizes of corpora used\nin the experiments are shown in Table 5. For\nEnglish-Danish, the first experiment is repeated\nfrom Stymne (2009b), and in the second experi-\nment a POS sequence model and compound pro-293\ncessing as described in Stymne and Holmqvist\n(2008) were added. In all cases the number of\nwords is higher in the source language than in the\ntarget language, but, as shown in the second last\ncolumn of Table 5, the number of words is re-\nduced and somewhat closer to the number of target\nwords, after the source side definiteness process-\ning. For test we used 1000 sentences, and for pa-\nrameter optimization we used 1000 sentences for\ntranslation into Danish, 500 sentences with Moses,\nand 2000 sentences otherwise.\nWe trained systems with source side process-\ning, which will be called DEF-proc for transla-\ntion into Danish, and DEF-proc1, and DEF-proc2,\nfor the two different strategies for translation into\nSwedish and Norwegian. For English–Swedish\nEuroparl we also trained a system with target side\nprocessing, which is called Target-proc. We com-\npare all results to baseline systems that do not use\nany transformations.\n5.1 Results\nTable 6 shows the results of the experiments, on the\ntwo standard metrics Bleu (Papineni et al., 2002)\nand NIST (Doddington, 2002) with one reference\ntranslation. 
Significance was tested using approximate randomization (Riezler and Maxwell, 2005), with α < 0.05. Overall the results are much higher on the automotive corpus than on Europarl, which is expected since that corpus is more homogeneous and has shorter sentences.
For English-Danish translation we see a large improvement of 5.44 Bleu points in the first experiment. In the second experiment, where we added a POS-sequence model and compound processing, the baseline is significantly better than the baseline of the first experiment. Again, definite processing gives an improvement, of 2.08 Bleu points, but it is smaller than in the first case, and the scores with definite processing are similar in the two experiments. This indicates a need to further explore the interactions of definite processing and other types of preprocessing. For Italian-Danish translation, there is also a significant improvement, of 1.5 Bleu points.
For translation into Swedish and Norwegian, the first strategy, where nouns are marked with a suffix, led to significantly worse results than the baseline in both cases. The second strategy, which only uses remove-DEF, however, led to improvements in both cases, where the improvement for English-Swedish was statistically significant. These improvements were smaller than for Danish, however. For the second English–Swedish experiment, we also investigated target side preprocessing. This was not successful, with a significantly worse result on Bleu, and a somewhat worse NIST score, than the baseline.

Languages  System       Bleu    NIST
En-Da 1    Baseline     70.91   8.8816
           DEF-proc     76.35+  9.3629+
En-Da 2    Baseline     74.09   9.2328
           DEF-proc     76.17+  9.4342+
It-Da      Baseline     10.54   4.3924
           DEF-proc     12.04+  4.5754+
En-No      Baseline     58.57   8.8846
           DEF-proc1    56.59-  8.6943-
           DEF-proc2    59.08   8.9092
En-Sv 1    Baseline     61.20   9.7934
           DEF-proc1    58.84-  9.4898-
           DEF-proc2    62.05+  9.9129+
En-Sv 2    Baseline     21.63   6.1085
           DEF-proc2    22.03+  6.1778+
           Target-proc  21.31-  6.1018

Table 6: Translation results, a plus sign marks results that are significantly better than the baseline, and a minus sign marks significantly worse results.

We performed an initial error analysis of 50 short sample sentences from the second Swedish experiment, where the differences on the automatic metrics were quite small. The results of this analysis were somewhat different from what we expected based on the metric scores, with the lowest total number of errors for the target-proc system, which had 61 errors, compared to 71 for the baseline and 74 for the source side processing. The slightly higher number of errors in the system with source side processing was mainly due to wrong translations or insertions of function words, such as prepositions. Both systems with definiteness processing had a lower number of word order and punctuation errors than the baseline. The number of definiteness errors was approximately the same between the three systems, but they were all the wrong form of nouns in the system with source side processing, which is not surprising since we removed the definite distinction in English bare NPs, whereas other types of definiteness errors also occurred in the other two systems, such as spurious definite articles. 
This limited analysis did\nunfortunately not shed much light on the types of\nchanges that were the result of adding definite pro-\ncessing, and further analysis is needed.\nTo illustrate the effects of the definiteness pro-294\nItalian–Danish\nSrc: Non pensa che dovremmo ormai esplorare nuovi modi per affrontare il problema delle nostre relazioni\ncon la Birmania?\nRef: Finder De ikke, at vi b ¨or se p ˚a andre m ˚ader, hvorp ˚a vi kan tackle problemet med vores relationer i\nBurma?\nBaseline: T ¨anker ikke at vi b ¨or efterh ˚anden resterende tid nye m ˚ader fat af vores forbindelser med den Burma?\nDEF-proc: T ¨anker ikke at vi b ¨or overveje nye m ˚ader nu af vores forbindelser med Burma tackle problemet?\nEnglish–Swedish\nSrc: Men who commit murders rarely receive long prison sentences . . .\nRef: M ¨annen som utf ¨or morden f ˚ar s¨allan l ˚anga f ¨angelsestraff . . .\nBaseline: De m ¨an som beg ˚ar morden s ¨allan f ˚a l˚anga f ¨angelsestraff . . .\nDEF-proc2: M ¨an som beg ˚ar mord s ¨allan f ˚a l˚anga f ¨angelsestraff . . .\nTarget-proc: De m ¨an som beg ˚ar morden s ¨allan erh ˚aller l ˚anga f ¨angelsestraff . . .\nTable 7: Sample translations\ncessing, we will discuss two translation examples,\nshown in Table 7. In the Italian–Danish example,\nthere is an unnecessary definite article in front of\nthe proper name Burma in the baseline, which cor-\nrectly is not there in the DEF-proc version. Overall\nthe DEF-proc translation is a better translation than\nthe baseline, mainly since it manages to translate\nthe verbs esplorare (explore ) and affrontare (han-\ndle), even though it is slightly problematic with a\nmeaning shift of the first into overveje (consider )\nand a correct meaning, but wrong word order of\nthe second, tackle (handle ). Both these verb are,\nhowever, completely missing in the baseline trans-\nlation. Both translations miss the main pronoun De\n(you, polite ), which is not present in Italian, which\nis a pro-drop language. In the English–Swedish\nexample, all three renderings of Men who commit\nmurders are grammatically possible, but the base-\nline and Target-proc readings have lost the gen-\neral reading of the source, and refers to specific\nmurders . The rendering of the DEF-proc is ac-\ntually more true to the generality of murders in\nthe source than the reference, which might, how-\never, have taken the context of surrounding sen-\ntences into account. In all MT sentences, there are\nproblems with the placement of the adverb s¨allan\n(rarely ), and the main verb f˚ais non-finite in the\nbaseline and DEF-proc systems, but has the cor-\nrectly finite form erh˚aller in the Target-proc sys-\ntem, even though that is a worse lexical choice.\nOverall, we see some improvements with regard\nto definiteness in the systems with source side pre-\nprocessing, as in the examples discussed above,\nbut there are also problems still left. We also see\nmany other changes though, such as different lex-\nical choices and word orders. One possible ex-\nplanation for this can be that the word alignment\nchanges when the two languages are more similar,and of more equal sentence length, which was the\nresult of both types of definiteness processing.\n6 Conclusion\nWe have shown that source side preprocessing tar-\ngeting definite NPs is useful for translation into\nthree Scandinavian languages on two different cor-\npora using two different phrase-based decoders, as\nmeasured by automatic metrics. 
The attempt at\ntarget side preprocessing was not successful mea-\nsured by automatic metrics, but had good results\non an error analysis. There is a need for further\nanalysis of the results, to try and pinpoint the rea-\nsons for the improvements on the automatic met-\nrics, and to further investigate the effects of the\npreprocessing.\nCare has to be taken when adjusting the source\nside processing strategy to a new language pair.\nWhen we performed the same type of transforma-\ntion for translation into Swedish and Norwegian,\nas those that worked for Danish, both in this and\nprevious work, the results were worse than the\nbaseline. For translation into these languages, a\nmore limited transformation were more useful. We\nbelieve that some treatment of definiteness is use-\nful for translation into all Scandinavian languages,\nand that similar strategies as those described in this\npaper could also be useful for other source lan-\nguages, and/or for translation into the other Scan-\ndinavian languages.\nWe see a much smaller effect of definite pro-\ncessing for translation into Swedish and Norwe-\ngian than into Danish. The definite suffix is used\nin more types of clauses in Swedish and Norwe-\ngian, than in Danish, which could partly explain\nthis. Thus, it might be useful to design a more elab-\norate preprocessing strategy for these languages,\ntaking other types of phrases than only simple defi-295\nnite NPs into account, possibly by using a machine\nlearning method to decide where to apply transfor-\nmations. There are also other possibilities of target\nside preprocessing, such as splitting off the definite\nsuffix.\nReferences\nCarlberger, Johan and Viggo Kann. 1999. Imple-\nmenting an efficient part-of-speech tagger. Software\nPractice and Experience , 29:815–832.\nCollins, Michael, Philipp Koehn, and Ivona Ku ˇcerov ´a.\n2005. Clause restructuring for statistical machine\ntranslation. In Proceedings of the 43rd Annual Meet-\ning of the ACL , pages 531–540, Ann Arbor, Michi-\ngan, USA.\nDahl, ¨Osten. 2003. Definite articles in Scandinavian:\nCompeting grammaticalization processes in standard\nand non-standard varieties. In Kortmann, Bernd, ed-\nitor, Dialect Grammar from a Cross-Linguistic Per-\nspective , pages 147–180. Mouton de Gruyter, Berlin.\nDoddington, George. 2002. Automatic evaluation\nof machine translation quality using n-gram co-\noccurence statistics. In Proceedings of the Second\nInternational Conference on Human Language Tech-\nnology , pages 228–231, San Diego, California, USA.\nEl-Kahlout, ˙Ilknur Durgar and Franc ¸ois Yvon. 2010.\nThe pay-offs of preprocessing for German-English\nstatistical machine translation. In Proceedings of the\nInternational Workshop on Spoken Language Trans-\nlation , pages 251–258.\nFraser, Alexander. 2009. Experiments in morphosyn-\ntactic processing for translating to and from German.\nInProceedings of the Fourth Workshop on Statis-\ntical Machine Translation , pages 115–119, Athens,\nGreece.\nGoldwater, Sharon and David McClosky. 2005. Im-\nproving statistical mt through morphological analy-\nsis. In Proceedings of the Human Language Tech-\nnology Conference and the conference on Empiri-\ncal Methods in Natural Language Processing , pages\n676–683, Vancouver, British Columbia, Canada.\nKoehn, Philipp, Hieu Hoang, Alexandra Birch, Chris\nCallison-Burch, Marcello Federico, Nicola Bertoldi,\nBrooke Cowan, Wade Shen, Christine Moran,\nRichard Zens, Chris Dyer, Ondrej Bojar, Alexan-\ndra Constantin, and Evan Herbst. 2007. 
Moses:\nOpen source toolkit for statistical machine transla-\ntion. In Proceedings of the 45th Annual Meeting\nof the ACL, demonstration session , pages 177–180,\nPrague, Czech Republic.\nKoehn, Philipp. 2005. Europarl: A parallel corpus for\nstatistical machine translation. In Proceedings of MT\nSummit X , pages 79–86, Phuket, Thailand.Nießen, Sonja and Hermann Ney. 2000. Improv-\ning SMT quality with morpho-syntactic analysis. In\nProceedings of the 18th International Conference\non Computational Linguistics , pages 1081–1085,\nSaarbr ¨ucken, Germany.\nPapineni, Kishore, Salim Roukos, Todd Ward, and Wei-\nJing Zhu. 2002. BLEU: A method for automatic\nevaluation of machine translation. In Proceedings of\nthe 40th Annual Meeting of the ACL , pages 311–318,\nPhiladelphia, Pennsylvania, USA.\nRiezler, Stefan and John T. Maxwell. 2005. On some\npitfalls in automatic evaluation and significance test-\ning for MT. In Proceedings of the Workshop on In-\ntrinsic and Extrinsic Evaluation Measures for MT\nand/or Summarization at ACL’05 , pages 57–64, Ann\nArbor, Michigan, USA.\nSamuelsson, Yvonne. 2006. Nouns in statistical ma-\nchine translation. Unpublished manuscript: Term\npaper, Statistical Machine Translation.\nSimard, Michel, Nicola Cancedda, Bruno Cavestro,\nMarc Dymetman, Eric Gaussier, Cyril Goutte, Kenji\nYamada, Philippe Langlais, and Arne Mauser. 2005.\nTranslating with non-contiguous phrases. In Pro-\nceedings of the Human Language Technology Con-\nference and the conference on Empirical Methods in\nNatural Language Processing , pages 755–762, Van-\ncouver, British Columbia, Canada.\nStolcke, Andreas. 2002. SRILM – an extensible\nlanguage modeling toolkit. In Proceedings of the\nSeventh International Conference on Spoken Lan-\nguage Processing , pages 901–904, Denver, Col-\norado, USA.\nStymne, Sara and Maria Holmqvist. 2008. Process-\ning of Swedish compounds for phrase-based statis-\ntical machine translation. In Proceedings of the\n12th Annual Conference of the European Association\nfor Machine Translation , pages 180–189, Hamburg,\nGermany.\nStymne, Sara. 2009a. A comparison of merging strate-\ngies for translation of German compounds. In Pro-\nceedings of the EACL 2009 Student Research Work-\nshop, pages 61–69, Athens, Greece.\nStymne, Sara. 2009b. Definite noun phrases in statisti-\ncal machine translation into Danish. In Proceedings\nof the Workshop on Extracting and Using Construc-\ntions in NLP , pages 4–9, Odense, Denmark.\nXia, Fei and Michael McCord. 2004. Improving\na statistical MT system with automatically learned\nrewrite patterns. In Proceedings of the 20th Inter-\nnational Conference on Computational Linguistics ,\npages 508–514, Geneva, Switzerland.\nZhang, Yuqi, Richard Zens, and Hermann Ney. 2007.\nImproved chunk-level reordering for statistical ma-\nchine translation. In Proceedings of the Interna-\ntional Workshop on Spoken Language Translation ,\npages 21–28, Trento, Italy.296",
"main_paper_content": null
}
|
{
"decision": "Unknown",
"reviews": []
}
| 0 | 0 |
[] |
[] |
{
"id": "0rxFX9BDbcl",
"year": null,
"venue": "EAMT 2006",
"pdf_link": "https://aclanthology.org/2006.eamt-1.2.pdf",
"forum_link": "https://openreview.net/forum?id=0rxFX9BDbcl",
"arxiv_id": null,
"doi": null
}
|
{
"title": "A Bilingual Grammar for Translation of English-Swedish Verb Frame Divergences",
"authors": [
"Sara Stymne",
"Lars Ahrenberg"
],
"abstract": "Sara Stymne, Lars Ahrenberg. Proceedings of the 11th Annual conference of the European Association for Machine Translation. 2006.",
"keywords": [],
"raw_extracted_content": "ABilingual Grammar forTranslation of\nEnglish-Sw edish VerbFrame Divergences\nSaraStymne andLarsAhren berg\nDepartmen tofComputer andInformation Science\nLinkÄopings universitet, SE-58183 LinkÄoping (Sweden)\nfsarst [email protected]\nAbstract\nWedescrib eabilingual grammar usedfortranslation ofverbframe diver-\ngences betweenSwedish andEnglish. Thegrammar isusedbothforanal-\nysisandgeneration withMinimal Recursion Seman ticsasinterlingua. Our\ngrammar isbased onthedelph-in resources forwhichseman tictransfer is\nproposedforMT.Weshowthataninterlingua strategy based onabilingual\ngrammar canhandle manycases ofverbframe divergences minimising the\nneedoftransfer.\n1Introduction\nTranslation viaseman ticrepresen tations of\nsource language input isacommon approac h\ninresearc h,although lessfrequen tincom-\nmercial systems, where seman ticdistinc-\ntionstendtobelocalised atthewordlevel\nandmotivatedmostly bypractical neces-\nsity.With theadventofgrammars thatex-\npress relations betweensurface strings and\nseman ticrepresen tations foralargepartof\ntheconstructions ofalanguage, suchas\ntheERG(Flickinger, 2000) andtheJACY\ngrammar (Siegel, 2000), theideaofperform-\ningpractical translation onacoupling of\ngeneral parsers andgenerators forlargefor-\nmallanguage descriptions using acommon\nseman ticframew orkseems lessesoteric. A\nrecentexample ofthisapproac histheLO-\nGON project(Oepenetal.,2004).\nJustasinLOGON, Minimal Recursion\nSeman ticsrepresen tations (MRS; Copes-\ntake,Flickinger, Sag,&Pollard, 2003) are\nusedasinterface structures. However,un-\nliketheLOGON architecture, whichuses\ndi®eren tgrammars based ondi®eren tfor-\nmalisms andlinguistic theories, weseeit\nasanadvantagetousethesame gram-\nmatical framew orkforbothsource andtar-\ngetlanguages, asthismeans thatthesame\nparser andgenerator canbeusedthrough-out. WeuseHPSG-lik eTypedFeature\nStructure Grammars asourframew orkbe-\ncause oftheavailabilit yoftheLKB work-\nbench(Copestake,2001) andthestore of\ntypede¯nitions knownastheMatrix (Ben-\nder,Flickinger, &Oepen,2002), made avail-\nablebythedelph-in collab oration (Bond,\nOepen,Siegel, Copestake,&Flickinger,\n2005). Inaddition, weseeitasdesirable\nthatgrammars aredesigned onsimilar prin-\nciples, sothatsolutions totranslation prob-\nlemscanbecoordinated betweenlanguages\nandimplemen tedusing ashared inventory\noftypes.Thismakesthesystem more ho-\nmogenous andfacilitates theaddition ofnew\nlanguages.\nWealsobelievethatthedevelopmen tof\npractical applications bene¯ts fromtheexis-\ntence ofacoresystem thatprovidesalibrary\nofsolutions tothetranslation problems that\narelikelytobeencoun tered inapplication\ndomains. Theworkreported hereshould be\nseenasasteptowardssuchacoresystem\nthatsupportsthedevelopmen tofapplica-\ntionsystems using English andSwedish.\nAnother desideratum isthat seman tic\ntransfer rules should berestricted tothe\ncases where theyareabsolutely necessary .\nEverything elsebeingequal, weprefer MRS\nstructures topassunchanged from parser\noutput togenerator input. When thisis\nthecase, wemayactually viewallstrings,\nwhether belonging tothesource language or\nthetarget language, tobepartofthesame,\nbilingual grammar.\nThespeci¯cgoalofthisworkhasbeento\ninvestigate thepossibilit yofhandling alarge\nnumberofverbframe divergences (VFDs)\nbetweenSwedish andEnglish inabilin-\ngualHPSG grammar. 
Wehaveconsidered\nalarge numberofdivergences where verbs\nfromthetwolanguages di®er syntactically\nand/or lexically ,butwhere theirseman tics\ncanbeconsidered tobethesame, i.e.where\nacommon interlingual relation canbeas-\nsumed.\nWehavefound thatanumberofcases\nofVFDs canactually betreated inabilin-\ngualgrammar. Another result ofthisre-\nsearchisataxonom yofVFDs withEnglish-\nSwedishinstances andanimplemen tedbilin-\ngualgrammar based ontheMatrix. There-\nsultshaveequal application toother Scan-\ndinavianlanguages, andwithmodi¯cations,\ntoother Germanic language pairsaswell.\nInthefollowing section wewillgive\nsomeexamples ofidenti¯edEnglish-Sw edish\nVFDs. Insection 3wewillreview related\nworkanddescrib etheseman tictransfer ap-\nproachthathaspreviously beenusedwith\nthedelph-in resources. Section 4describ es\nBiTSE, thebilingual grammar thatisthe\ncoreofourMTsystem. Insection 5weex-\nplainourtreatmen tofseveraltypesofdiver-\ngences. Section 6contains adiscussion on\nthemerits andlimits ofourapproac hand\nsection 7containstheconclusion.\n2VerbFrame Divergences\nAspartofthisstudy wehaveinvestigated\nverbframe divergences (VFDs). Averb\nframe consists ofaverbanditsargumen ts.\nAverbframe divergence iswhen twoverb\nframes withthesame meaning havedi®er-\nentstructures. Some examples ofthis,based\nonthecategories suggested byDorr(1994),\nwillbepresen tedhere. Allexamples inthis\narticle aretakenfromtheEuroparl corpus\n(Koehn,2005). Some oftheexamples shown\ncontainmore thanonetypeofdivergence.\n(1)Inthatcase,thematter turns outtobe\nanational problem afterall.Tillsistkommer ÄandockÄarendet attvisa\nsigvaraettnationellt problem.\n(2)This appearstobethecasewith the\neventswhichMrLomas reportsinhis\nquestion.\nDettycksnÄamligen varafallet medde\nfakta somherrLomas fÄorpºatalisin\nfrºaga.\n(3)Butthatisprecisely whywe¯rstneeda\nclearstrategy .\nJustdÄarfÄorÄartillattbÄorjamedenklar\nstrategi nÄodvÄandig hÄar.\nIn(1)\\turns outtobe\"corresp ondsto\n\\visasigvara\"(\\showitselfbe\")whichcon-\ntains twostructural divergences, when two\nlogical constituen tshavedi®eren tstructure.\nInEnglish aphrasal verbwiththeparticle\n\\out\" isusedandSwedish hasare°exiv e\nverbwiththefakere°exiv e\\sig\". Thever-\nbalcomplemen thasanin¯nitiv emarkerin\nEnglish, butnotinSwedish.\n(2)containsacon°ational divergence, the\nmain verb\\reports\" inEnglish corresp onds\nto\\fÄorpºatal\"(\\brings onspeech\")where\ntheconcept \\speech\"iscon°ated inEnglish\nbutexplicit inSwedish.\nAnexample ofacategorial divergence\ncanbeseenin(3).Categorial divergences\noccur when seman tically equivalentcon-\nstituen tshavedi®eren tsyntactic categories.\nHeretheEnglish main verb\\need\" seman-\ntically corresp ondstotheSwedish adjectiv e\n\\nÄodvÄandig\" (\\necessary\").\n3Related Work\nInterlingual approac hestomachinetransla-\ntionhavebeentried atleastsince thebe-\nginning ofthesixties withmuchdiscussion\nanddebate aboutthenature ofinterlinguas\nandthemerits anddrawbacksofinterlin-\ngualapproac hesascompared totransfer ap-\nproaches(e.g. Boitet, 1988; Nirenburg&\nGoldman, 1990). Basically aMRS relation\nisaplace-holder foraconcept withknown\nargumen tstructure whichisassociated with\noneormore linguistic expressions inalex-\nicon. 
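To make the idea of an MRS relation as an interlingual placeholder concrete, the small sketch below ties one relation name to the Swedish and English lexical items that can express it, including a divergent pair where one side needs a particle and the other a fake reflexive. The data structure, the relation names and the example entries are our own illustration and are not taken from BiTSE or any other delph-in resource.

# Illustrative only: a relation name stands for a concept, and a bilingual
# lexicon associates it with surface expressions in each language. Divergent
# verb frames ("turn out" vs. "visa sig") share the same relation; the extra
# particle or fake reflexive is recorded as a semantically empty complement.
from dataclasses import dataclass

@dataclass
class LexicalEntry:
    lemma: str
    language: str                  # "en" or "sw"
    empty_complements: tuple = ()  # e.g. a particle or fake reflexive

BILINGUAL_LEXICON = {
    "_turn_out_v_rel": [
        LexicalEntry("turn", "en", empty_complements=("out",)),
        LexicalEntry("visa", "sw", empty_complements=("sig",)),
    ],
    "_sleep_v_rel": [
        LexicalEntry("sleep", "en"),
        LexicalEntry("sova", "sw"),
    ],
}

def realizations(relation, language):
    """Return the lexical entries that can express `relation` in `language`."""
    return [e for e in BILINGUAL_LEXICON.get(relation, []) if e.language == language]

if __name__ == "__main__":
    print(realizations("_turn_out_v_rel", "sw"))  # the "visa ... sig" entry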
Seman ticrelations suchashypon-\nomyandantonom y,andevensomeseman tic\ndecomp osition, could beadded, butisnot\npartofthecurren tsetup, though hyponymy\ncould bedealtwithwithin thetypesystem.\nDomain knowledge, asusedinknowledge-\nbased interlingual MTsuchastheKANT\nsystem (Mitam ura,Nyberg,&Carbonell,\n1991), isalsonothandled.\nSimilarities betweentwo(ormore) lan-\nguages canbeencodedinformal gram-\nmarsindi®eren tways.TheRosetta project\n(Rosetta, 1994) explored theideaofisomor-\nphicgrammars. Ourframew orkdoesnot\nrequire grammars tobeisomorphic; theim-\nportantthing isthattheyproduceacom-\nmonMRSforsentences thataretranslations\nofoneanother. Inaddition, thegrammars\nareactually implemen tedasonebilingual\n(ormultilingual) grammar, allowingtypesto\nbeshared betweenlanguages.\n3.1MT using DELPH-IN re-\nsources\nThere havebeenprevious suggestions for\nMTusing thedelph-in resources, most of\nthem using aseman tictransfer strategy ,but\nalsoanexperimen talmultilingual grammar\nusedasthecoreofasmall MTsystem.\nCopestake,Flickinger, Malouf, Riehe-\nmann, &Sag(1995) describ ehowMRS can\nbeused fortranslation. They suggest a\ndesign thatisbased onseman tictransfer\nusing MRS. Thetransfer componentworks\nonMRS toproduceoutput thatthetarget\ngrammar canaccept. Itispossible thatthe\ntransfer componentcanoutput more than\noneform, some ofwhichmaybeunaccept-\nablebythegenerator. When severalforms\nareoutput theywillbeordered byacon-\ntrolmechanism thatisdistinct from both\nthetransfer componentandthegenerator.\nThetransfer componentsuggested by\nCopestakeetal.(1995) isbased onset-\ntingupsymmetric andbidirectional trans-\nferequivalences betweeneachpairoflan-\nguages. Their suggestion alsoallowsinter-\nlingual predicates thatarecommon forall\nlanguages suchasnegation.\nAlarge-scale projectwhere seman tic\ntransfer withMRSisusedisLOGON, which\nfocusontranslation betweenNorwegian and\nEnglish (Oepenetal.,2004). Themain ar-\nchitecture is:analysis ofNorwegian toMRS\nusing theNorwegian LFGgrammar Nor-Gram (Dyvik, 1999), MRS transfer asde-\nscribedabove,andgeneration toEnglish us-\ningtheERG.\nTheLOGON system isunidirectional. It\nonlytranslates fromNorwegian toEnglish,\nduetothedesign withdi®eren tgrammars\nforanalysis andgeneration. However,Bond\netal.(2005) notes thatinthegeneral MTde-\nsigntheHPSG grammar foreachlanguage\nisreversible andcanbeusedbothforpars-\ningandgeneration. Thetransfer rulesare\nalsoreversible, except forcontextand¯lter\ninformation insome cases.\nBond etal.(2005) discusses theopen\nsource resources forMTmade available by\nthedelph-in collab oration andthegeneral\nstrategies used, including thebasic ideas\npresen tedinthissection. They alsoraise\nsome proposals forfuture work,including\n\\Howmuchoftheseman ticrepresen tation\ncanbeshared betweenlanguages (andthus\nrequire littleornotransfer)?\" (p.20).\nAdi®eren tMTarchitecture using Matrix-\nbased grammars, based onamultilingual\ngrammar hasbeensuggested byS¿gaard &\nHaugereid (2005). Thisdesign wasaninspi-\nration forourapproac h.\n4BiTSE -abilingual gram-\nmarascoreofMT\nThecoreofourMTsystem isBiTSE, the\nBilingual grammar forTranslation between\nSwedishandEnglish. Figure 1showstheba-\nsicdesign ofthesystem. 
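The design in Figure 1 can be read as a short pipeline: the input is parsed to an MRS, and that MRS is passed unchanged to the generator, which can produce equivalent sentences in either language. The toy code below shows only this control flow; the lookup-table "grammar", the function name and the sentence pair are stand-ins for the LKB parser and generator, whose real interface is not reproduced here.

# Toy illustration of the Figure 1 control flow: analyse to a language-
# independent representation, then generate from it without any transfer step.
# The tables below stand in for a real grammar and for the LKB machinery.

TOY_GRAMMAR = {
    "analysis": {   # surface string -> "MRS" (here reduced to a relation bag)
        "hunden sover": ("def_q_rel", "_dog_n_rel", "_sleep_v_rel"),
        "the dog sleeps": ("def_q_rel", "_dog_n_rel", "_sleep_v_rel"),
    },
    "generation": {  # "MRS" -> equivalent surface strings per language
        ("def_q_rel", "_dog_n_rel", "_sleep_v_rel"): {
            "sw": ["hunden sover"],
            "en": ["the dog sleeps"],
        },
    },
}

def translate(sentence, target_language):
    mrs = TOY_GRAMMAR["analysis"][sentence]   # parsing (LKB stand-in)
    # No transfer: the same structure is handed straight to generation.
    return TOY_GRAMMAR["generation"][mrs][target_language]

if __name__ == "__main__":
    print(translate("the dog sleeps", "sw"))  # ['hunden sover']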
Thetransfer mod-\nuleinFigure 1isnotcurren tlypartofour\nsystem, butisapossible extension forpossi-\nblenon-in terlingual partofMRSstructures.\nBiTSE wasdevelopedusing theLinguistic\nKnowledge Builder (LKB; Copestake,2001)\nandtheparser andgenerator ofLKB are\nusedwhen running BiTSE forMT.Thecov-\nerage ofBiTSE iscurren tlythecoreofthe\nlanguages andsome VFDs, including basic\nverbandnounphrases, some adjectiv aland\nprepositional modi¯ers, phrasal verbs, fake\nre°exiv es,polarquestion andmainandsub-\nordinate clause wordorder. Thelexicon is\nsmall andbasically includes onerepresen ta-\ntivelexical itemforeachtypeofverbcon-\nsidered.\nSLsentence-Parser\n(LKB)-MRS-Generato r\n(LKB)-SL\nTLsentencesTransfer\n¸\nU\nBiTSE\nShared Sw EnI µ\nFigure 1:Thedesign oftheMTsystem withBiTSE asitsknowledge source. Thetransfer module\nisnotcurren tlypartofthesystem butcould beadded andtransfer thenon-in terlingual partofthe\nMRS structure.\nBiTSE isbased ontheLinGO Gram-\nmarMatrix (Bender etal.,2002), across-\nlinguistic starter-kit forHPSG grammars\nproviding astore ofgrammatical andlexi-\ncaltypeswithMRS astheseman ticrepre-\nsentation. Asillustrated in(4)forthesen-\ntence \\The bigdogsleeps\", aMRS isatu-\nplecontaining atophandle (h1),aninstance\noreventvariable (e2),abagofelemen tary\npredications andabagofhandle constrain ts\n(qeq).\n(4)<h1,e2,\n{h3:def_q(x4,h5,h6),\nh7:big(e8,x4),\nh7:dog(x4),\nh9:sleep(e2,x4),\nh1:proposition(h10)},\n{h5qeqh7,h10qeqh9}>\nAMRS structure canbescope-resolv ed\ninoneorseveralwaysbyequating allhan-\ndleargumen tsandthetophandle witha\nhandle fromarelation respecting allhandle\nconstrain ts,forming atree. SeeCopestake\netal.(2003) foramore detailed description\nofMRS.\n4.1Constructions forlanguage\nToinclude more than onelanguage ina\ngrammar, afeature thatconstrains thelan-\nguage ofsigns hadtobeadded inaddi-\ntiontothebasics oftheMatrix. Following\nS¿gaard &Haugereid (2005) afeature forlanguage wasadded tosigns.Constrain ts\nwerethenadded onallrulestomakethem\nworkonasingle language atthetime, and\nforlanguage-sp eci¯c rulestoworkononly\noneofthelanguages. Figure 2showstwoof\nthese types.\nbinary-lang-agree-phrase :=\nbinary-headed-phrase &\n[LANG#lang,\nHEAD-DTR.LANG #lang,\nNON-HEAD-DTR.LANG #lang].\nswedish-only-rule :=headed-phrase &\n[LANG#sw,\nHEAD-DTR.LANG #sw&sw].\nFigure 2:Typesforlanguage handling\nAsforS¿gaard &Haugereid (2005)\nLangua geisnotaseman ticfeature,\nwhichmakestheseman ticrepresen tation\nlanguage-indep enden t,resulting ingenera-\ntiongiving allequivalentsentences inboth\nlanguages. Thus,theMRS in(4)will\ngenerate boththeEnglish \\The bigdog\nsleeps\" andtheSwedish \\Den stora hunden\nsover\".Inorder fortheMRStobelanguage-\nindependen tallequivalentrelations must\nhavethesame names. Toachievethis\nallrelations haveEnglish names, suchthat\neachEnglish wordgenerally hasarelation\nwiththesame name astheword,andeach\nSwedish wordhasarelation withthecorre-\nsponding English name.\n4.2Sharing typesbetweenlan-\nguages\nAnadvantageofthisgrammar design isthat\nitallowstheparts ofthegrammar thatare\nthesame tobeshared. Thisavoidsthere-\ndundancy ofentering thesame information\ntwiceinatwogrammar MTsystem. The\nshared information canmakeupaconsider-\nablepartofthegrammar, atleastforrelated\nlanguages likeSwedish andEnglish. AsTa-\nble1shows,theshared partofBiTSE con-\ntains more thanhalfofthegrammar. 
The\nfactthattheSwedish parthavenearly dou-\nbletheamoun toftypescompared toEn-\nglishislargely dueto27typesfordeclina-\ntionandconjugation ofnouns andverbsin\nSwedish, whicharenotneeded inEnglish.\nThecoverage forphrasal verbsisalsolarger\nforSwedish.\nTable1:Sizeofthedi®eren tparts ofBiTSE\nNo.oftypes\nShared 188\nSwedish 76\nEnglish 32\nThenumberoflanguage-sp eci¯cverblex-\nemes ishigher thanforother wordclasses,\nmostly because verbsarethecurren tfocusof\nBiTSE, andthusaremore specialised than\nother wordclasses. Itcould beexpected that\nifBiTSE weretogrow,thepercentageof\ntypesthatareshared mightdecrease.\nThisgrammar design canbeseenasan\nextension oftheMatrix fortwolanguages\ninthiscase,butpossibly foralarger group\noflanguages. Wehavealsotriedthisprin-\ncipleoutbyadopting theNorwegian gram-\nmarNorSource (Hellan &Haugereid, 2003)\ntoSwedish, whichshowedthatonlysmall\nmodi¯cations wereneeded.\n5Treatmen tofverbframe\ndivergences\nInthissection wedescrib ehowsome verb\nframe divergences arehandled inBiTSE.5.1Structural divergences\nStructural divergences areverycommon be-\ntweenSwedish andEnglish. They occur\nwhen constituen tsthatarelogically equiv-\nalentintwolanguages havedi®eren tstruc-\ntures. Wehavefound thefollowing four\ntypes:\n²prep. complemen tvs.NPobject:\ntalking aboutquality {diskuter ar\nkvaliteten\n²re°.verbvs.plain verb:\nuttalat sig{spoke\n²phrasal verbvs.plain verb\ngºattut{expired\n²in¯nitiv e+markervs.plain in¯nitiv e\nneedstoundergo{behÄoverunderkastas\nCombinations ofthese divergences are\nalsocommon. Inallthese cases there are\nonemore wordinonelanguage than the\nother. Thegeneral solution istotreat one\nofthese wordsasempty,i.e.carrying nose-\nmantics,andlettheother carry allseman-\nticinformation. Verbsthenspecifywhich\nemptyconstituen tsitneeds ascomplemen ts.\nSigurd (1995) suggests asolution forparti-\ncles,re°exiv esandprepositions intheSwe-\ntraReferen tGrammar MTsystem based on\nthesame principle.\nTheMatrix doesnotprovidegoodsup-\nportforemptycomplemen ts,soanumber\nofbasic typesforthiswereincorp orated\nintoBiTSE. Theexisting Matrix typesfor\nwordswithcomplemen tsdonotgivecor-\nrectseman ticstoemptycomplemen ts,which\nshould receiv enoseman ticbindings atall.\nSwedish verbscanhaveuptotwoempty\ncomplemen ts,asin(5),whichshowsall\nfourtypesofstructural divergences: empty\nre°exiv eandparticle, andaprepositional\ncomplemen twithanemptypreposition and\nanemptyin¯nitiv emarker\\att\" inthe\nprepositional complemen t.\n(5)knowhowyoucanescapeintoEurop e\nveta\nknowhur\nhowman\nonebÄar\ncarriessig\noneselfºat\nPART\nfÄor\nforatt\nto°y\n°eetill\ntoEuropa\nEurop e\nBesides emptycomplemen tsverbscanof\ncourse alsohavecomplemen tsthatcarry\nseman tics,whichwecallcontentive con-\nstituen ts.Thusnewtypesfordi®eren tcom-\nbinations ofemptyandcontentivecomple-\nmentsindi®eren torder wereneeded. As\nanexample Figure 3showstheBiTSE base\ntypeforaverbwithoneemptycomplemen t,\nlike\\uttala sig\"(\\express oneself \").Only\nthe¯rstargumen t,thesubject,ismapped\nasanargumen toftheverbal relation. 
The\nsecond argumen t,theemptyone,givesno\nseman ticcontribution.\nintrans-empty2ndarg-lex-item :=\nbasic-two-arg &\n[ARG-ST<[LOCAL.CONT.HOOK.INDEX\nref-ind &#ind],\nsynsem>,\nSYNSEM.LKEYS.KEYREL [ARG1#ind]].\nFigure 3:Typeforintransitiv everbwithan\nemptycomplemen t\nEmptyprepositions andin¯nitiv emark-\nersarehandled asemptysyntactic heads of\nthephrase thatislaterchosen byverbsas\ncomplemen ts.\nAsanexample wewillshowinsomemore\ndetail howfakere°exiv epronouns arehan-\ndled.\n5.1.1 Fakere°exiv epronouns\nVerbscanoccurwithre°exiv epronouns of\ntwotypes:fakere°exives ,whichobligatory\noccurwithre°exiv everbs,asin\\Iperjure\nmyself\",andordinary re°exives ,whichoc-\ncurasobjectstoordinary transitiv everbs,\nasin\\Ishavemyself\".\nThecategory thattakespartinstructural\ndivergences isthefakere°exiv es.They are\nanalysed asseman tically empty,withthe\nre°exiv everbcarrying therelation forthe\nmeaning oftheverbplusthefakere°ex-\niveandselecting thecorrect re°exiv epro-\nnoun. Thustheygetthesame MRS asa\nnon-re°exiv everbwiththeequivalentmean-\ning.\nThetypeforintransitiv ere°exiv everbs\nlike\\perjure oneself \",showninFigure 4,in-\nherits fromtwoBiTSE types,oneforverbs\ningeneral, andoneforthetypeforintransi-\ntiveverbswithanemptycomplemen tshowninFigure 3.Thepng(person, numberand\ngender) valueofthere°exiv eisco-indexed\nwiththatofthesubject,toensure agree-\nment.\nintrans-refl-comp-verb-lex :=ord-verb-lex &\nintrans-empty2ndarg-lex-item &\n[SYNSEM.LOCAL [CAT.VAL\n[SPR<>,\nSUBJ<#subj>,\nCOMPS<#refl&[OPT-]>,\nSPEC<>]],\nARG-ST<#subj&[LOCAL\n[CONT.HOOK.INDEX [PNG#p],\nCATnp&[HEAD.CASE nom]]],\n#refl&[LOCAL\nrefl-0-local &\n[CONT.HOOK.INDEX.PNG #p]]>].\nFigure 4:Typeforintransitiv ere°exiv everbs\nThistreatmen tensures thatthesame in-\nterlingual relation canbeusedforaplain\nverbasforare°exiv everb.\n5.2Con°ational divergences\nCon°ational divergences occurs when anar-\ngumen tthatisexplicit inonelanguage is\nimplicit, orcon°ated, intheother language,\nsuchas\\report\"/\\fÄ orapºatal\"(\\bring on\nspeech\").Sometimes argumen tscanbeop-\ntionally con°ated inonelanguage, as\n(6)Ishave[myself]\nJagrakarmig\nwhichisimplicitly re°exiv eiftheobjectis\nleftoutinEnglish. InSwedish itisnotpos-\nsibletoleavetheobjectoutforthistypeof\nverb,whichcauses adivergence.\nThere aretwofeatures onsynsem sto\nhandle optionalit y:aboolean feature opt,\nwhichisusedtomarksynsem sasoptional\nandopttype whichdescrib eswhichrela-\ntionshould beinserted inplace ofthere-\nmovedoptional argumen t.Thedefault value\nforopttype isunspec,whichmeans thatre-\nmovingtheoptional complemen tshould re-\nsultinitbeingleftunspeci¯ed, asfor\\Ieat\".\nForthistypeofoptionalit ythere isarule\nthatsimply removestheobjectfrom the\nverb'svalence listifitisnotpresen t.\nForverbslike\\shave\"where removalof\ntheobjectshould result inare°exiv ere-\nlation beingadded, theobjecthasopt-\ntypere°-opt .There isalsoaunary phrasal\nrulethataddsarelation forare°exiv epro-\nnoun when removinganoptional comple-\nment.Thisruleisconstrained toworkonly\nforopttype re°-opt .Itfurther assures that\ntheadded re°exiv epronoun relation agrees\nwiththesubjectonperson, numberandgen-\nder.\n5.3Head-in version divergences\nHead-in version occurs when amain verbin\nonelanguage corresp onds toanother con-\nstituen t,usually anadverb,intheother lan-\nguage. Anexample ofthisistheSwedish\nraising verb\\brukar\"whichcorresp ondsto\ntheEnglish scopal adverb\\usually\". 
The\nstandard HPSG analysis forthese twotypes\nofconstituen tsbased onMatrix typesassign\nthem similar seman tics:\n(7)\\Bob brukarsova\"\n<h1,e2,\n{h3:named(x4, ``Bob''),\nh5:def_q(x4,h6,h7),\nh8:brukar(e2,h9),\nh9:sleep(e10,x4),\nh1:proposition(h11)},\n{h6qeqh3,h11qeqh8}>\n(8)\\Bob usually sleeps\"\n<h1,e2,\n{h3:named(x4, ``Bob'',)\nh5:def_q(x4,h6,h7),\nh8:usually(e2,h9),\nh10:sleep(e11,x4),\nh1:proposition(h12)},\n{h6qeqh3,h9qeqh10,\nh12qeqh8}>\nTheonlydi®erence betweenthese two\nMRSstructures isthat\\brukar\"has\\sleep\"\ndirectly asanargumen tand\\usually\" hasit\nviaaqeq-relation.\nThese structures arebothundersp eci¯ed.\nTheonewiththescopal adverbhavetwo\nscope-resolv edversions: (9),whichisequiv-\nalenttotheonescope-resolv edversion ofthe\nraising verbstructure, and(10).\n(9)(proposition(def q(x4, named(x4,\nBob), usually(e2, sleep(e11, x4)))))(10) (proposition(usually(e2, defq(x4,\nnamed(x4, Bob), sleep(e11, x4)))))\nEventhough boththese readings canbe\nconsidered seman tically correct, webelieve\nthatitisnotnecessary toundersp ecifyscope\ninsuchaprecise wayinagrammar for\nSwedish{English MT.Swedish andEnglish\nareverysimilar withregard toscopeunder-\nspeci¯cation, andthuswebelieveitsu±ces\ntochooseoneofthetwopossible readings\nincases likeabove.Itisthenpossible to\ngivescopal adverbslike\\usually\" thesame\nseman ticsasraising verbslike\\brukar\",re-\nsulting inequivalentMRS structures. Even\nthough wehavenotbeenabletoidentifyany\ncases where thistypeofundersp eci¯cation\nmakesadi®erence toMTwedonotruleout\nthatthere mightbesome rarecases where\nitdoes.\n5.4Syntactic divergences\nSyntactic divergences occurs when synon y-\nmous verbshavedi®eren targumen tframes.\nOneexample ofthisisdivergences thatoc-\ncurbecause ofdativealternations. Inboth\nlanguages thedativeobjectcanbeeither a\nnoun phrase oraprepositional objectbut\nthedistribution isdi®eren t.\n(11) Itellhimastory\nItellastory tohim\n(12) *JagberÄattar honom enhistoria\nJagberÄattar enhistoria fÄorhonom\n(11)and(12)showsthepossible alterna-\ntions fortheverb\\tell\"/\\b erÄatta\", which\nhastwopossible patterns inEnglish and\nonlyoneinSwedish. Thistranslation di-\nvergence isactually presen twithin onelan-\nguage aswell,since thetwoEnglish sen-\ntences in(11)areequivalentandshould have\nthesame MRS inEnglish.\nInourapproac hnoother treatmen tofthis\ndivergence isneeded thanthatwhichisany-\nwayneeded within onelanguage. Inthis\ncasethestrategy tohandle dativealterna-\ntionscanalsobeshared betweenEnglish and\nSwedish, whichfurther eliminates redundan t\nrepresen tations.\n6Discussion\nThemechanisms used tosolvetheprob-\nlemsoftranslating VFDs wehaveillustrated\nabove,suchasanindependen tlevelofse-\nmanticrepresen tation, seman tically empty\nwords, andconstrain tsonsubcategoriza-\ntion, weretoalarge extentalready avail-\nable,orpotentially available, inthemono-\nlingual framew ork.And,ofcourse, amajor\nreason forchoosing aninterlingual, \"deep\ngrammar\" approac htotranslation hasal-\nwaysbeenthattranslation insuchaframe-\nworkcomes forfree.\nHowever,there areseveralproblems of\nVFD-translation thatremain tobetreated.\nSome ofthem, suchascategorial divergen-\nciesarediscussed inStymne (inpress). A\nmore general problem isgivenbytransla-\ntions thatdonotusesynon yms. Forex-\nample, theEnglish verb\\put\" isgenerally\ntranslated bySwedish verbswithamore\nspeci¯c meaning, suchas\\stÄalla\" (cause\ntostand somewhere), \\sÄatta\" (cause tosit\nsomewhere) and\\lÄagga\" (cause toliesome-\nwhere). 
Forsuchcases wenotethatthe\ntranslation relation mustnotbetakenas\ntransitiv e,i.e.,Swedish sentences suchas\n(13) HonstÄalldevasenilºadan\n(14) Honladevasenilºadan\nmustnotbetreated asequivalent,al-\nthough anEnglish sentence suchas\\Sheput\nthevaseinthebox\"maybeusedtotrans-\nlatebothofthem. Forthistobepossible\nwemustdistinguish thetranslation relation\nfrom thesynon ymyrelation andallowse-\nmanticrelations betweenconcepts ofthein-\nterlingua tobede¯ned andutilized inmap-\npings ofMRS represen tations. Thisgoes\nwellbeyondwhattheMatrix framew orkcur-\nrentlyallows.\n7Conclusion\nWehaveshownthattranslation ofVFDs\nthatarehandled byseman tictransfer inthe\ngeneral delph-in MTdesign could instead\nnaturally behandled byaninterlingual de-\nsigninmanycases, minimising theneedof\ntransfer.\nTheworkhasalsoproduced BiTSE, a\nbilingual grammar ofSwedish andEnglish,\ncovering basic phrase constructions andanumberofVFDs. Inthisgrammar more\nthan halfthetypesarecommon forthe\ntwolanguages, whichshowsthatabilin-\ngualgrammar design reduces theredun-\ndancy thatoccurs intwoseparate gram-\nmars.\nReferences\nBender, E.M.,Flickinger, D.,&Oepen,\nS.(2002). TheGrammar Matrix: An\nopensource starter-kit fortherapid devel-\nopmen tofcross-linguistically consisten t\nbroad-co verage precision grammars. In\nProceedingsoftheWorkshop onGrammar\nEngine ering andEvaluation atthe19th\nConfer enceonComputational Linguistics\n(pp.8{14). Taipei,Taiwan.\nBoitet, C.(1988). Prosandconsofthe\npivotandtransfer approac hesinmulti-\nlingual machinetranslation. InS.Niren-\nburg, H.Somers, &Y.Wilks (Eds.),\nReadings inmachine translation (pp.\n273{279). Cambridge, MA:MIT Press.\n(Reprin tedfromD.Maxw ell,K.Schubert,\nT.Witkam(Eds.), 1988,RecentDevelop-\nments inMachine Translation ,Dordrec ht,\nTheNetherlands: Foris)\nBond, F.,Oepen,S.,Siegel, M.,Copestake,\nA.,&Flickinger, D.(2005). Opensource\nmachinetranslation withDELPH-IN. In\nProceedings oftheOpen-Sour ceMachine\nTranslation Workshop atMTSummit X\n(pp.15{22). Phuket,Thailand.\nCopestake,A.(2001). Implementing typed\nfeaturestructur egrammars. Stanford,\nCA:CSLI Publications.\nCopestake,A.,Flickinger, D.,Malouf, R.,\nRiehemann, S.,&Sag,I.(1995). Trans-\nlation using Minimal Recursion Seman-\ntics. InProceedings oftheSixth Inter-\nnational Confer enceonTheoreticaland\nMethodologicalIssues inMachine transla-\ntion,TMI-95. Leuven,Belgium.\nCopestake,A.,Flickinger, D.,Sag,I.,&Pol-\nlard,C.(2003). Minimal Recursion Se-\nmantics:Anintroduction. Language and\nComputation ,1(3),1{47.\nDorr, B.J.(1994). Machinetranslation di-\nvergences: Aformal description andpro-\nposedsolution. Computational Linguis-\ntics,20(4),597{633.\nDyvik, H.(1999). Theuniversalit yoff-\nstructure: Discoveryorstipulation? The\ncaseofmodals. InProceedingsofthe4th\nInternational LexicalFunctional Gram-\nmarConfer ence.Manchester, UK.\nFlickinger, D.(2000). Onbuilding amore\ne®cien tgrammar byexploiting types.\nNaturalLanguage Engine ering(SpecialIs-\nsueonE±cient Processing withHPSG) ,\n6(1),15{28.\nHellan, L.,&Haugereid, P.(2003). Nor-\nsource -anexcercise inthematrix\ngrammar building design. InE.M.\nBender, D.Flickinger, F.Fouvry,&\nM.Siegel (Eds.), ProceedingsoftheWork-\nshoponIdeasandStrategiesforMulti-\nlingual Grammar Development, ESSLLI\n2003 (pp.41{48). Vienna, Austria.\nKoehn,P.(2005). Europarl: Aparallel cor-\npusforstatistical machinetranslation. In\nProceedings ofMTSummit X(pp.79{\n86).Phuket,Thailand.\nMitam ura, T.,Nyberg,E.H.,&Car-\nbonell, J.G.(1991). 
Ane±cien tinter-\nlingua translation system formulti-lingual\ndocumen tproduction. InProceedings of\ntheThirdMachine Translation Summit.\nWashington, DC.\nNirenburg, S.,&Goldman, K.(1990).\nTreatmen tofmeaning inMTsystems.\nInS.Nirenburg, H.Somers, &Y.Wilks\n(Eds.), Readings inmachine translation\n(pp.281{293). Cambridge, MA: MIT\nPress. (Reprin tedfromProceedingsofthe\nThirdInternational Confer enceonTheo-\nreticalandMethodologicalIssues inMa-\nchine Translation ofNaturalLanguages\n(pp.15{22). Austin, TX)\nNirenburg, S.,Somers, H.,&Wilks, Y.\n(Eds.). (2003). Readings inmachine\ntranslation. Cambridge, MA:MITPress.\nOepen,S.,Dyvik, H.,L¿nning, J.T.,\nVelldal, E.,Beermann, D.,Carroll, J.,\nFlickinger, D.,Hellan, L.,Johannessen,\nJ.B.,Meurer, P.,Nordg ºard,T.,&Ros¶en,\nV.(2004). Somºakapp-ete medtrollet?\nTowardsMRS-based Norwegian{English\nmachinetranslation. InProceedingsofthe10thInternational Confer enceonTheo-\nreticalandMethodologicalissues inMa-\nchineTranslation (pp.11{20). Baltimore,\nMD.\nRosetta, M.T.(1994). Compositional\ntranslation. Dordrec ht,TheNetherlands:\nKluwerAcademic Publishers.\nSiegel, M.(2000). HPSG analysis of\nJapanese. InW.Wahlster (Ed.), Verb-\nmobil: Foundations ofSpeech-to-Sp eech\nTranslation (pp.264{279). Berlin, Ger-\nmany:Springer.\nSigurd, B.(1995). Analysis ofparticle\nverbsforautomatic translation. Nordic\nJournal ofLinguistics ,18,55{65.\nS¿gaard, A.,&Haugereid, P.(2005). The\nnoun phraseinmainland Scandinavian.\nPresen tedatthe3rdmeeting oftheScan-\ndinavianNetworkofGrammar Engineer-\ningandMachineTranslation, Gothen-\nburg, Sweden. (Retriev edApril 28,2006,\nfromhttp://www.cst.dk/anders/publ/\ngothenburg04.pdf )\nStymne, S.(inpress). Swedish-English\nverbframedivergencesinabilingual\nHead-driven PhraseStructur eGrammar\nformachine translation. Master's thesis,\nLinkÄopings universitet, LinkÄoping, Swe-\nden.",
"main_paper_content": null
}
|
{
"decision": "Unknown",
"reviews": []
}
| 0 | 0 |
[] |
[] |
{
"id": "uryA4q7aoE",
"year": null,
"venue": "EAMT 2008",
"pdf_link": "https://aclanthology.org/2008.eamt-1.25.pdf",
"forum_link": "https://openreview.net/forum?id=uryA4q7aoE",
"arxiv_id": null,
"doi": null
}
|
{
"title": "Processing of Swedish compounds for phrase-based statistical machine translation",
"authors": [
"Sara Stymne",
"Maria Holmquist"
],
"abstract": "Sara Stymne, Maria Holmquist. Proceedings of the 12th Annual conference of the European Association for Machine Translation. 2008.",
"keywords": [],
"raw_extracted_content": "Processing of Swedish Compounds for\nPhr\nase-Based Statistical Machine Translation\nSara Stymne and Maria Holmqvist\nDepartment of Computer and Information Science\nLink¨ oping University, Sweden\n{sarst,marho }@ida.liu.se\nAbstract. We investigated the effects of processing Swedish compounds\nfor phrase-based SMT between Swedish and English. Compounds were\nsplit in a pre-processing step using an unsupervised empirical method.\nAfter translation into Swedish, compounds were merged, using a novel\nmerging algorithm. We investigated two ways of handling compound\nparts, by marking them as compound parts or by normalizing them to a\ncanonical form. We found that compound splitting did improve transla-\ntion into Swedish, according to automatic metrics. For translation into\nEnglish the results were not consistent across automatic metrics. How-\never, error analysis of compound translation showed a small improvement\nin the systems that used splitting. The number of untranslated words in\nthe English output was reduced by 50%.\n1 Introduction\nIn many languages, including the Germanic languages, compounding is very\ncommon, and compounds are written without spaces or other word boundaries.\nThis is problematic for many NLP applications. For phrase-based statistical\nmachine translation (PBSMT) it leads to problems due to data sparseness, with\na large number of out-of-vocabulary compounds.\nThis problem has been studied in several papers for translation between Ger-\nman and English. Koehn and Knight [1] suggested an empirical algorithm for\nsplitting compounds, which was successfully applied to German-to-English trans-\nlation. The same method was used by Popovi´ c et al. [2] for translation in both\ndirections between English and German. In addition, they merged compounds\nin a postprocessing step for translation into German. Stymne [3] tried a number\nof variations of the algorithm for translation in both translation directions. In\nboth studies translation quality was improved.\nCompound parts are usually treated as ordinary words in the training data,\ne.g. in [1,2]. In [3, 4], however, compound parts were marked with a symbol,\nto separate them from normal words, resulting in improved translation quality\ncompared to an unsplit baseline.\nVirpioja et al.[5] used an unsupervised algorithm for morphological splitting\nand merging, where both compounds and other words were split into stems\nand affixes, for translation between Swedish and other Scandinavian languages.\n12th EAMT conference, 22-23 September 2008, Hamburg, Gemany\n182\nTable 1. Comp ound forms in Swedish\nType Suffixes Example\nNone ris kkapital (risk + kapital)\nrisk capital\nAdditions -s - t frihetsl¨ angtan (frihet + l¨ angtan)\nlonging for peace\nTruncations -e - a pojkv¨ an (pojke + v¨ an)\nboyfriend\nCombinations -a/ -s -a/-e -a/-u -a/-o -e/-a -e/-s\n-el/-la -la/-el -ra/-erarbetsgrupp (arbete + grupp)\nworking group\nAll but the last part of a word were marked with a symbol, which was used\nin the merging step. They did not get improved translations when measured\nautomatically, but saw other advantages such as a reduction of out-of-vocabulary\nwords.\nThere have been many other suggestions of how to split compounds, but\nthey are often only evaluated against a gold standard, not on a translation\ntask. Alfonseca et al. [6] suggested a language independent supervised learning\nmethod, which needs a corpus of annotated compounds. 
They also showed that\nthe training corpus can be of another language than the corpus to be split. For\nSwedish, Sj¨ obergh and Kann [7] suggested several ways of choosing the correct\nsplitting points of compounds that were split using a method based on word\nlists.\nThe corpus-based language independent compound splitting method sug-\ngested in [1] was shown to be useful for PBSMT from and into German. In this\nwork we investigate if a similar empirical method is useful for translation from\nand into Swedish. In addition we investigate the effects of marking compound\nparts [3], compared to the more commonly used strategy where no marking of\ncompound parts is used. We also present a novel POS- and corpus-based merging\nalgorithm for compounds.\n2 Swedish Compounds\nCompounds in Swedish are normally formed by joining words, without any spaces\nor other word boundaries. Compound parts can have special compound forms,\ncreated by addition of letters, truncation of letters or combinations of these. An\noverview of compound forms can be seen in Table 1, compiled from two standard\nworks on Swedish morphology [8, 9].\nIn addition to the forms in Table 1, the spelling of compounds is changed\nin cases where adding two words would result in three identical consecutive\nconsonants. In such a case one of the three consonants is removed, leaving a\ndouble consonant. An example of this can be seen in (1). This can lead to\nambiguities, as in (1), which usually can be easily disambiguated semantically\nby a human, but which can lead to problems for automatic splitting methods.\n12th EAMT conference, 22-23 September 2008, Hamburg, Gemany\n183\n(1) stopplikt\nobl\nigation to stop–\n–stopp\nstop+\n+plikt\nduty/\n/stop\nstoup+\n+plikt\nduty\n2.1 Compound Splitting\nTo handle compounds in PBSMT we split them into their parts. We use a slightly\nmodified version of an empirical splitting method based on [1, 3].\nFor each word all possible splits of the word are tried, and a split is considered\nif all its parts are found as words in a monolingual corpus. If there are several\ncandidates the splitting option with the highest arithmetic mean of the frequen-\ncies of its parts is chosen, which can be the original unsplit word if it is common.\nWe also use part-of-speech information, from the Swedish Granska tagger [10].\nWe retokenize the tagger output to split word groups that are tokenized as one\nitem by the tagger, such as time expressions and coordinated compounds.\nWe split nouns, verbs, adjectives and adverbs, which are the parts-of-speech\nthat form compounds in Swedish. Proper names are excluded, since they gen-\nerally are not translated in parts, as the Swedish surname Sj¨ ogren , the parts of\nwhich mean lakeandbranch . The same parts-of-speech plus proper names and\nnumerals are used for frequency calculations from the monolingual corpus.\nWe also impose a restriction that the last part of the compound must have\nthe same part-of-speech as the full compound. In addition to surface forms, base\nforms, obtained from the tagger, are also used for frequency calculations, since\ncompound parts tend to have base form. We also impose limits on length, for a\nword to be split it must have at least six characters and each part must have at\nleast three characters. Additions of -sand truncations of -eand-aare allowed at\nall split points1. 
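The frequency-based splitting just described lends itself to a compact implementation. The sketch below is a simplified, hypothetical Python version: it only considers two-part splits and only the -s addition and -e/-a truncations, and the frequency counts are invented for illustration; the actual system additionally uses part-of-speech constraints, base forms and multi-part splits.

    def split_candidates(word, freq, min_part=3, min_word=6):
        """Enumerate two-part splits of `word` whose parts are attested in `freq`.
        `freq` maps corpus word forms to counts; allowed compound-form
        alternations at the split point are the addition of -s and the
        truncation of -e / -a."""
        if len(word) < min_word:
            return []
        candidates = []
        for i in range(min_part, len(word) - min_part + 1):
            first, rest = word[:i], word[i:]
            variants = {first, first + 'e', first + 'a'}   # undo -e / -a truncation
            if first.endswith('s'):
                variants.add(first[:-1])                    # remove linking -s
            for v in variants:
                if len(v) >= min_part and v in freq and rest in freq:
                    score = (freq[v] + freq[rest]) / 2      # arithmetic mean of part frequencies
                    candidates.append(((v, rest), score))
        return candidates

    def best_split(word, freq):
        """Choose the split with the highest mean part frequency; keep the word
        unsplit if the full form is itself more frequent than any split."""
        cands = split_candidates(word, freq)
        if not cands:
            return (word,)
        parts, score = max(cands, key=lambda c: c[1])
        return parts if score > freq.get(word, 0) else (word,)

    # Toy frequencies, invented purely for illustration
    freq = {'frihet': 120, 'längtan': 45, 'frihetslängtan': 2}
    print(best_split('frihetslängtan', freq))   # -> ('frihet', 'längtan')

In the real system, candidate parts are further restricted by part-of-speech and lemma information from the Granska tagger, as described above.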
We also handle cases with consecutive consonants, by allowing\nthe addition of an extra consonant at splitting points between two identical\nconsonants.\nWe use two schemes to handle compound parts, marked andunmarked . In\nthe marked scheme compound parts keep the form they have in the compound,\nexcept in the three consonant case, where a consonant is added. In addition we\nadd the symbol ’#’ to all but the last part, to separate compound parts from\nother words, since compounds are not always compositional in meaning. In the\nunmarked scheme we normalize compound parts to a canonical form based on\nthe suffixes in Table 1, and no marking is added. The canonical form will coincide\nwith a word form that occurs independently in the corpus, or the base form of\nsuch a word. We give the last part of the compound the same POS-tag as the\nfull word, whereas the other parts get a special tag, based on the original tag.\nAn example of the splitting schemes is shown in (2).\n(2) Compound f¨ orvaltningssystem NN\nadministrative system\nUnmarked f¨ orvaltning NN-FL + system NN\nMarked f¨ orvaltnings# NN-FL + system NN\n1Using more variants of compound forms was not successful in a small pilot study.\n12th EAMT conference, 22-23 September 2008, Hamburg, Gemany\n184\n2.2 Compound Merging\nFor\ntranslation into Swedish it is necessary to merge compounds that are trans-\nlated in parts. For marked compounds we use the POS-based algorithm suggested\nin [4]. If a word has the special part-of-speech used for compound parts, it is\nmerged with the following word if it has a matching part-of-speech, which can\neither be another compound part, or the final part of a compound. In addition\nwe handle coordinated compounds, as in (3), by adding a hyphen to a word\nwhen the next word is a conjunction. In cases where the merged words would\nhave three identical consecutive consonants, see (1), we remove a consonant. No\nother processing is needed in this setting, since compound forms are kept, except\nremoving compound markup.\n(3) kunskaps-\nknowledgeoch\nandinformationssamh¨ alle\ninformation society\nFor unmarked compounds a more elaborate strategy is needed to handle the\nnormalized compound forms. In addition to part-of-speech, we use frequency\nlists of all words from the training corpus, and of compound parts with all\npossible compound forms found during splitting. To find the correct form of a\nword we first try all combinations of forms of each compound part and check\nif the result is a word that is known from the corpus. If any known word is\nfound we choose the most frequent one. Else, we add the parts from left to right\nchoosing the most frequent possible combination at each merging point, and if\nno known combination exists, the most frequent compound form for each part.\nTo investigate the potential of the merging method for unmarked compounds\nwe applied it to the split test text (see section 3.1). The splitting algorithm found\n2505 compounds, of which 227 are out-of-vocabulary with respect to the training\ncorpus. The merging method correctly merged all but 91 compounds, showing\nthat it finds all known compounds and have a reasonable success (60%) on\nunknown compounds. Most incorrect compounds can easily be understood by a\nhuman, even if the form is wrong. The most common error is a left out addition\nof-s.\n3 System Description\nThe translation system we use is a factored phrase-based SMT system, with\npart-of-speech as an additional output factor. 
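Before the system details, the merging step for the marked scheme described in Section 2.2 can be illustrated with a short, hypothetical Python sketch. The tag names ('NN-FL' for a compound part, 'KN' for a conjunction) and the vowel set used for the triple-consonant rule are illustrative choices rather than the system's exact tags, and only two-part compounds are handled.

    import re

    def merge_marked(tokens, tags):
        """Merge a marked compound part ('#' suffix, special POS) with the
        following word; add a hyphen instead when the next word is a
        conjunction (coordinated compounds); reduce three identical
        consecutive consonants to two."""
        out, i = [], 0
        while i < len(tokens):
            tok, tag = tokens[i], tags[i]
            if tag.endswith('-FL') and i + 1 < len(tokens):
                if tags[i + 1] == 'KN':                      # next word is a conjunction
                    out.append(tok.rstrip('#') + '-')
                    i += 1
                else:
                    merged = tok.rstrip('#') + tokens[i + 1]
                    merged = re.sub(r'([^aeiouyåäöé])\1\1', r'\1\1', merged)
                    out.append(merged)
                    i += 2
            else:
                out.append(tok)
                i += 1
        return out

    # 'kunskaps# och informationssamhälle' -> 'kunskaps- och informationssamhälle'
    print(merge_marked(['kunskaps#', 'och', 'informationssamhälle'],
                       ['NN-FL', 'KN', 'NN']))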
We use TreeTagger [11] to tag\nthe English texts and Granska tagger [10] to tag the Swedish texts. We use two\nsequence models, produced by SRILM [12], a 5-gram language model on surface\nform and a 7-gram model on part-of-speech. For training and decoding we use the\nMoses toolkit [13]. We tune feature weights using minimum error rate training\n[14], that optimizes the Neva metric [15].\nA pre-processing step is performed on the Swedish side where compounds are\nsplit. Thus we train the system on English and modified Swedish. Compounds\nare merged after translation into Swedish and during tuning.\n12th EAMT conference, 22-23 September 2008, Hamburg, Gemany\n185\nTable 2. Num ber of tokens and types for the training corpus\nSystem Tokens Types\nSwedishBas\neline 13603062 182000\nUnmarked 14401784 100492\nMarked 14401784 107047\nEnglish 15043321 67044\nWe train three systems for this study, a baseline system without splitting,\nand two systems with splitting, marked , where compound parts are marked and\nunmarked , with unmarked parts normalized to canonical form.\n3.1 Corpus\nThe translation system is trained and tested using the Europarl corpus [16]. The\ntraining part contains 701157 sentences, where sentences longer than 40 words\nhave been filtered out. The number of tokens and types in the training corpus is\nshown in Table 2. The Swedish baseline text contain 2.7 times as many types as\nthe English side. Splitting Swedish compounds reduces the vocabulary size by up\nto 45%. The development and test corpora are taken evenly from the designated\ntest portion of the fourth quarter of 2000. The test set has 2000 sentences and\nthe development set has 500 sentences.\n4 Evaluation of Compound Splitting\nWe use two manually created gold standards to evaluate compound splitting.\nThe gold standard corpus consists of the first 5395 words (245 sentences) from\nthe test set.\n4.1 Gold Standards\nFor the first gold standard all compounds2in the gold standard corpus were\nannotated. This standard was prepared by two human judges who are native\nspeakers of Swedish.\nTo investigate the difficulty of the task we calculated agreement as suggested\nin [6], as the percentage of agreement in classification as compounds or non-\ncompounds (CCA), the Kappa score [17] obtained from CCA and the percentage\nof words for which the suggested decomposition was identical (DA). Since we\nevaluate on running text, which has a very large percentage of non-compound\nwords, the agreement could be expected to be high. Therefore we also measured\nagreement on only those words that are 12 characters or longer, to have a more\n2A word is considered to be a compound if it has several parts which all are semanti-\ncally meaningful with respect to the full compound and can be used as stand alone\nwords in some form.\n12th EAMT conference, 22-23 September 2008, Hamburg, Gemany\n186\nTable 3. Int er-judge agreement scores\nfor compound classification\nType CCA Kappa DA\nFull test 98.2% 0.96 98.0%\nLon\ng words 91.1% 0.82 97.7%Table 4. Res ults of compound splitting\nTest set Prec Rec Acc\nAll compoundsfull 56.4% 53.0% 95.8%\nlong 76.6% 51.3% 76.8%\nOne-to-onefull 31.9% 66.4% 96.1%\nlong 55.5% 65.7% 81.4%\neven distribution of compounds and non-compounds. As shown in T able 3, the\nagreement is high for all metrics and both samples.\nFor the final evaluation, the two judges agreed on a common judgement for\nthe words where they disagreed. The final test text has 288 compounds out of\n5395 words. 
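For readers unfamiliar with how the Kappa figures reported above relate to raw agreement, the small Python function below shows one common way to derive them (Cohen's kappa over the binary compound/non-compound decision). This is an illustration of the general formula, not necessarily the exact computation behind Table 3.

    def cohen_kappa(labels_a, labels_b):
        """Cohen's kappa for two judges' binary labels (1 = compound).
        p_o is the observed agreement (the CCA figure); p_e is the agreement
        expected by chance from each judge's label distribution."""
        n = len(labels_a)
        p_o = sum(a == b for a, b in zip(labels_a, labels_b)) / n
        pa, pb = sum(labels_a) / n, sum(labels_b) / n
        p_e = pa * pb + (1 - pa) * (1 - pb)
        return (p_o - p_e) / (1 - p_e)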
We also used the test set with words that are 12 character or longer,\nthat contains 626 words, of which 231 are compounds.\nThe second gold standard consists of Swedish compounds whose parts are in\none-to-one agreement with separate words in the English translation, allowing\ninsertion of function words [1]. Since this task is more straightforward than for\nall compounds, which has a high inter-judge agreement, only one judge created\nthis gold standard. It contains 126 compounds out of 5395 words. Again we also\nuse the test set with long words, which contains 626 words, of which 117 are\ncompounds in one-to-one agreement with English.\n4.2 Results\nThe evaluation uses the three metrics precision, recall and accuracy, as defined\nby [1]. Table 4 shows the result of the evaluation of the compound splitting.\nPrecision is higher for all compounds and recall is higher on the one-to-one test\nset, which is quite natural, considering that the number of compounds is much\nsmaller in the one-to-one test set. On the test set with only long words precision\nshows a big improvement, recall a small drop, and accuracy a large drop over\nthe full test set.\nComparing the splitting accuracy to other studies, it is worse on linguis-\ntic evaluation than both the supervised method of [6] and the word list based\nmethod of [7], where, however, only accuracy on a corpus of only compounds is\nmeasured. It does perform better than some simpler versions of the algorithm in\n[6], e.g. their reimplementation of [1], that are only tested on German. We also\nhave worse precision and recall on 1-to-1 evaluation than the similar frequency-\nbased method used for German [1], who evaluated only on NP/PPs. However,\nbetter results on these metrics did not necessarily give better translation quality\nfor PBSMT [1,3], probably because phrases with compounds that were erro-\nneously split were linked together in the training phase.\n5 Evaluation of Translation\nTranslation was evaluated both by automatic measures and by human error\nanalysis, which focus on out-of-vocabulary words and translation of compounds.\n12th EAMT conference, 22-23 September 2008, Hamburg, Gemany\n187\nTable 5. Res ults for translation from\nSwedish to English\nSystem Meteor Bleu Neva NIST\nBaseline 55.47 29.97 34.08 7.3127\nUnma\nrked 55.82 29.89 34.08 7.3470\nMarked 55.78 29.85 34.05 7.2933Table 6. Res ults for translation from En-\nglish to Swedish\nSystem Meteor Bleu Neva NIST\nBaseline 57.86 21.63 26.53 6.1085\nUnma\nrked 58.43 22.12 26.99 6.1430\nMarked 58.31 21.92 26.81 6.2025\n5.1 Automatic Metrics\nWe u\nse four automatic metrics, Bleu [18], NIST [19], Neva [15] and Meteor3[20].\nCase sensitive versions of the metrics are used.\nThe result for translation from Swedish can be seen in Table 5. The unmarked\nsystem is slightly better than the baseline on Meteor and NIST, but worse on\nBleu. The marked system is worse than the unmarked on all metrics, and only\nbetter than the baseline on Meteor.\nTable 6 shows the results for translation into Swedish. In this direction the\ndifferences between the scores are bigger and both split systems beat the baseline\non all metrics. The unmarked system is better than the marked system on all\nmetrics except NIST.\n5.2 Out-of-Vocabulary Words\nTo investigate the effects of compound splitting on translation from Swedish we\nanalysed the out-of-vocabulary (OOV) words in the systems. These words are\nleft untranslated in the system output. 
The total number of out-of-vocabulary\nwords are reduced by about 50% in the split systems, compared to the baseline.\nA manual analysis showed that this decrease was to the largest part due to a\nhigher proportion of translated compounds, see Table 7. The system with marked\ncompounds has a slightly higher number of OOV:s, mainly due to the fact that\n16 marked compound parts are left untranslated. Of the remainder of the OOV:s\nin the unmarked system, 55 are numerals, 27 are proper names, 7 foreign words,\nand 82 miscellaneous unseen words.\n5.3 Compound Translation from Swedish\nTo investigate compound translation from Swedish we manually evaluated the\ntranslation of the first 100 compounds in the test text, with a clear translation\nin the English reference text. We then classified the translations in the test text\nwith respect to the reference text. The result can be seen in Table 8. As expected\nthe number of OOV:s is reduced in the systems with splitting. There is a small\nincrease in the number of compounds that are identical to or a good alternative\n3ForEnglish a version of Meteor optimized on human judgements is used, for Swedish\nthe original Meteor weights are used. For both languages the ”exact” and ”porter\nstem” modules are used.\n12th EAMT conference, 22-23 September 2008, Hamburg, Gemany\n188\nTable 7. Uni que out-of-\nvocabulary words and com-\npounds, for translation into\nEnglish\nSystem Comps/OOV:s\nBaseline 331/520\nUnma\nrked 87/258\nMarked 86/269Table 8. Ana lysis of translation of 100 com-\npounds from Swedish\nBaseline Unmarked Marked\nIdentical 58 59 60\nAlt\nernative 19 21 21\nUnderstandable 7 13 11\nPartly transl. 2 1 1\nMissing 1 2 3\nWrong 3 2 2\nOOV 10 2 2\nTable 9. Ana lysis of translation of 100 compounds into Swedish\nBaseline Unmarked Marked\nIdentical comp. 48 53 57\nAlt\n. compound 14 9 10\nAlt. word 16 16 12\nAlt. word group 9 8 9\nSplit compound 7 5 3\nPartly transl. 4 7 4\nMissing 0 0 2\nOOV 2 2 3\nto the reference translation in the systems with splitting. The largest increase,\nhowever, is in translations that convey the meaning but is somewhat ill-formed,\nthe understandable category.\nThese results are not as promising as in similar evaluations for German [4],\nwhich used similar compound splitting strategies.\n5.4 Compound Translation into Swedish\nWe performed a similar evaluation in the opposite translation direction, using the\nsame sample of 100 compounds from the reference translation. In this direction\nthe categories were changed slightly. For the alternative translations, we also\ndistinguished between translation that were compounds, single words or word\ngroups. There is also a category for word groups that were translated as separate\nwords, but should have been compounded.\nThe result of this evaluation can be seen in Table 9. There are more trans-\nlations that are identical to the reference in the two systems with splitting, but\nthe total number of identical and alternative translations are approximately the\nsame in the three systems. The number of split compounds is higher in the base-\nline system. The unmarked system produces more split compounds and partial\ntranslations than the marked system. This can be seen as an indication of mark-\ning having an effect, which, however, is not seen in the automatic evaluation.\nNo merging errors were found in this sample for the marked system. 
In the\nunmarked system the merging algorithm performed correctly for 60 of the 62\n12th EAMT conference, 22-23 September 2008, Hamburg, Gemany\n189\nmerged compounds. Of the two errors, the first, (4a), is a missing addition of an\n-sand the second error, (4b), is not covered by the algorithm since it should have\nbeen a combination of -e/-s, and combinations are not handled. The presence of\nsuch a word in this small sample indicates that it is worth investigating allowing\nmore compound forms in the future.\n(4) a. *medleml¨ ander\nmedlemsl¨ ander\nmember states–\n–medlem\nmember+\n+l¨ ander\ncountries\nb. *samh¨ allepolitiska\nsamh¨ allspolitiska\nsocio-political–\n–samh¨ alle\nsociety+\n+politiska\npolitical\n6 Conclusions\nIn this study we have investigated the effects of splitting and merging Swedish\ncompounds for PBSMT between Swedish and English. An unsupervised empiri-\ncal compound splitting method is used. Even though the splitting method does\nnot have a particularly high precision and recall compared to any of the two\ngold standards created, when incorporated into translation, it still improves au-\ntomatic scores for translation into Swedish. For translation from Swedish the\nautomatic metrics are inconsistent. In both directions, the error analysis shows\na small improvement of compound translation.\nA big improvement for translation into English is that the number of out-of-\nvocabulary words, that leads to untranslated words in the translation output,\nis reduced by approximately half. There are, however, still some untranslated\ncompounds left, which indicates that it might be useful to apply a more advanced\nand resource intensive splitting strategy (e.g. [6,7]) for PBSMT.\nMeasured by automatic metrics the system that uses canonical form of com-\npound parts is generally better than the system that uses marked compound\nparts. In the error analysis, the difference between the two versions are smaller,\nwith the marked system being slightly better in some cases. A drawback of the\nmarked system is that it has a small number of untranslated marked compound\nparts.\nThe two suggested merging algorithms work well, and generally produce\nvalid Swedish compounds. In a few cases the merging method for unmarked\ncompounds produces incorrect compound forms of parts. The resulting words\nare usually understandable for a human, and are better translation alternatives\nthan untranslated words.\nCompared to [5], where both compounds and other words were split into\nstems and affixes, we find our results more promising. In contrast to their re-\nsults, we do see some improvements using automatic metrics. The results are\nnot directly comparable since different language pairs are used, but a similarity\nis the large reduction of untranslated words in the output. Encouraged by these\nresults, our aim is to further explore compound processing for PBSMT, since we\nbelieve it will lead to improved translation quality.\n12th EAMT conference, 22-23 September 2008, Hamburg, Gemany\n190\nReferences\n1. K\noehn, P., Knight, K.: Empirical methods for compound splitting. In: Proc. of\nEACL-03, Budapest, Hungary (2003) 187–193\n2. Popovi´ c, M., Stein, D., Ney, H.: Statistical machine translation of German com-\npound words. In: Proc. of FinTAL, Turku, Finland (2006) 616–624\n3. Stymne, S.: German compounds in factored statistical machine translation. In:\nProc. of GoTAL, Gothenburg, Sweden (2008) 464–475\n4. 
Stymne, S., Holmqvist, M., Ahrenberg, L.: Effects of morphological analysis in\ntranslation between German and English. In: Proc. of the Third Workshop on\nStatistical Machine Translation, Columbus, Ohio (2008) 135–138\n5. Virpioja, S., J.V¨ ayrynen, J., Creutz, M., Sadeniemi, M.: Morphology-aware sta-\ntistical machine translation based on morphs induced in an unsupervised manner.\nIn: Proc. of MT Summit XI, Copenhagen, Denmark (2007) 491–498\n6. Alfonseca, E., Bilac, S., Pharies, S.: Decompounding query keywords from com-\npounding languages. In: Proc. of ACL-08: HLT, Short Papers, Columbus, Ohio\n(2008) 253–256\n7. Sj¨ obergh, J., Kann, V.: Finding the correct interpretation of Swedish compounds,\na statistical approach. In: Proc. of LREC, Lisbon, Portugal (2004) 899–902\n8. Thorell, O.: Svensk ordbildningsl¨ ara. Esselte Studium, Stockholm, Sweden (1981)\n9. Hellberg, S.: The Morphology of Present-Day Swedish. Number 13 in Data lin-\nguistica. Almqvist & Wiksell, Stockholm, Sweden (1978)\n10. Carlberger, J., Kann, V.: Implementing an efficient part-of-speech tagger. Software\nPractice and Experience 29(1999) 815–832\n11. Schmid, H.: Probabilistic part-of-speech tagging using decision trees. In: Proc. Intl.\nConf. on New Methods in Language Processing, Manchester, UK (1994) 44–49\n12. Stolcke, A.: SRILM - an extensible language modeling toolkit. In: Proc. Intl. Conf.\non Spoken Language Processing, Denver, Colorado (2002) 901–904\n13. Koehn, P., Hoang, H., Birch, A., Callison-Burch, C., Federico, M., Bertoldi, N.,\nCowan, B., Shen, W., Moran, C., Zens, R., Dyer, C., Bojar, O., Constantin, A.,\nHerbst, E.: Moses: open source toolkit for statistical machine translation. In: Proc.\nof ACL-07, demo session, Prague, Czech Republic (2007) 177–180\n14. Och, F.J.: Minimum error rate training in statistical machine translation. In: Proc.\nof ACL-03, Sapporo, Japan (2003) 160–167\n15. Forsbom, E.: Training a super model look-alike: featuring edit distance, n-gram\noccurrence, and one reference translation. In: Proc. of the Workshop on Ma-\nchine Translation Evaluation: Towards Systemizing MT Evaluation, New Orleans,\nLouisiana (2003) 29–36\n16. Koehn, P.: Europarl: a parallel corpus for statistical machine translation. In: Proc.\nof MT Summit X. (2005) 79–86\n17. Carletta, J.: Assessing agreement on classification tasks: the kappa statistic. Com-\nputational Linguistics 22(2) (1996) 249–254\n18. Papineni, K., Roukos, S., Ward, T., Zhu, W.J.: BLEU: a method for automatic\nevaluation of machine translation. In: Proc. of ACL-02, Philadelphia, Pennsylvania\n(2002) 311–318\n19. Doddington, G.: Automatic evaluation of machine translation quality using n-gram\nco-occurrence statistics. In: Proc. of the Second Int. Conf. on Human Language\nTechnology, San Diego, California (2002) 228–231\n20. Lavie, A., Agarwal, A.: METEOR: an automatic metric for MT evaluation with\nhigh levels of correlation with human judgments. In: Proc. of the Third Workshop\non Statistical Machine Translation, Prague, Czech Republic (2007) 228–231\n12th EAMT conference, 22-23 September 2008, Hamburg, Gemany\n191",
"main_paper_content": null
}
|
{
"decision": "Unknown",
"reviews": []
}
| 0 | 0 |
[] |
[] |
{
"id": "uk4JWYwbdXZ",
"year": null,
"venue": "EAMT 2020",
"pdf_link": "https://aclanthology.org/2020.eamt-1.19.pdf",
"forum_link": "https://openreview.net/forum?id=uk4JWYwbdXZ",
"arxiv_id": null,
"doi": null
}
|
{
"title": "Intelligent Translation Memory Matching and Retrieval with Sentence Encoders",
"authors": [
"Tharindu Ranasinghe",
"Constantin Orasan",
"Ruslan Mitkov"
],
"abstract": "Matching and retrieving previously translated segments from the Translation Memory is a key functionality in Translation Memories systems. However this matching and retrieving process is still limited to algorithms based on edit distance which we have identified as a major drawback in Translation Memories systems. In this paper, we introduce sentence encoders to improve matching and retrieving process in Translation Memories systems - an effective and efficient solution to replace edit distance-based algorithms.",
"keywords": [],
"raw_extracted_content": "Intelligent Translation Memory Matching and Retrieval\nwith Sentence Encoders\nTharindu Ranasinghe}, Constantin Or ˘asan~and Ruslan Mitkov}\n}Research Group in Computational Linguistics, University of Wolverhampton, UK\n~Centre for Translation Studies, University of Surrey, UK\nft.d.ranasinghehettiarachchige, r.mitkov [email protected]\[email protected]\nAbstract\nMatching and retrieving previously\ntranslated segments from a Translation\nMemory is the key functionality in\nTranslation Memories systems. However\nthis matching and retrieving process\nis still limited to algorithms based on\nedit distance which we have identified\nas a major drawback in Translation\nMemories systems. In this paper we\nintroduce sentence encoders to improve\nthe matching and retrieving process\nin Translation Memories systems - an\neffective and efficient solution to replace\nedit distance based algorithms.\n1 Introduction\nTranslation Memories (TMs) are “structured\narchives of past translations“ which store pairs of\ncorresponding text segments1in source and target\nlanguages known as “translation units” (Simard,\n2020). TMs are used during the translation process\nin order to reuse previously translated segments.\nThe original idea of TMs was proposed more\nthan forty years ago when (Arthern, 1979) noticed\nthat the translators working for the European\nCommission were wasting valuable time by re-\ntranslating (parts of) texts that had already been\ntranslated before. He proposed the creation of\na computerised storage of source and target texts\nwhich could easily improve the performance of\ntranslators and that could be part of a computer-\nbased terminology system. Based on this idea,\nc\r2020 The authors. This article is licensed under a Creative\nCommons 3.0 licence, no derivative works, attribution, CC-\nBY-ND.\n1Segments are typically sentences, but there are\nimplementations which consider longer or shorter units.many commercial TM systems appeared on the\nmarket in the early 1990s. Since then the use of\nthis particular technology has kept growing and\nrecent studies show that it is used on regular basis\nby a large proportion of translators (Zaretskaya et\nal., 2018).\nTranslation Memories systems help translators\nby continuously trying to provide them with so-\ncalled matches, which are translation proposals\nretrieved from its database. These matches are\nidentified by comparing automatically the segment\nthat has to be translated with all the segments\nstored in the database. There are three kinds of\nmatches: exact, fuzzy and no matches. Exact\nmatches are found if the segment to be translated is\nidentical to one stored in the TM. Fuzzy matches\nare used in cases where it is possible to identify\na segment which is similar enough to the one\nto be translated, and therefore, it is assumed\nthat the translator will spend less time editing\nthe translation retrieved from the database than\ntranslating the segment from scratch. No matches\noccur in cases where it is not possible to identify\na fuzzy match (i.e. there is no segment similar\nenough to the one to be translated to be worth using\nits translation).\nTMs distinguish between fuzzy matches\nand no matches by calculating the similarity\nbetween segments using a similarity measure and\ncomparing it to a threshold. 
Most of the existing\nTM systems rely on a variant of the edit distance\nas the similarity measure and consider a fuzzy\nmatch when the edit distance score is between\n70% and 95%.2The main justification for using\n2It is unclear the origin for these value, but they are widely\nused by translators. Most of the tools allow translators to\ncustomise the value of this threshold according to their needs.\nTranslators use their experience to decide which value for the\nthis measure is the fact that edit distance can be\neasily calculated, is fast, and is largely language\nindependent. However, edit distance is unable to\ncapture correctly the similarity between segments\nwhen different wording and syntactic structures\nare used to express the same idea. As a result,\neven if the TM contains a semantically similar\nsegment, the retrieval algorithm will not be able to\nidentify it in most of the cases.\nResearchers tried to address this shortcoming of\nthe edit distance metric by employing similarity\nmetrics that can identify semantically similar\nsegments even when they are different at token\nlevel. Section 2 discusses some of the approaches\nproposed so far. Recent research on the topic of\ntext similarity employed methods that rely on deep\nlearning and various vector based representations\nused in this field (Ranasinghe et al., 2019b; Tai\net al., 2015; Mueller and Thyagarajan, 2016).\nOne of the reasons for this is that calculating the\nsimilarity between vectors is more straightforward\nthan calculating the similarity between texts. It\nis easy to calculate how close or distant two\nvectors are by using well understood mathematical\ndistance metrics. In addition, deep learning based\nmethods proved more robust in numerous NLP\napplications.\nIn this paper we propose a novel TM matching\nand retrieval method based on the Universal\nSentence Encoder (Cer et al., 2018) which has\nthe capability to capture semantically similar\nsegments in TMs better than methods based\non edit distance. We selected the Universal\nSentence Encoder as our sentence encoder since it\noutperforms other sentence encoders like Infersent\n(Conneau et al., 2017) in many Natural Language\nProcessing tasks including Semantic Retrieval\n(Cer et al., 2018). Also the recently release\nof Multilingual Universal Sentence Encoder3\nis available on 16 different languages (Yang et\nal., 2019). Since we are planning to expand\nour research to other language pairs than the\nEnglish - Spanish pair investigated in this paper,\nthe multilingual aspect of the Universal Sentence\nEncoder can prove very useful.\nThe rest of the paper is organised as follows.\nSection 2 briefly describes several approaches\nused to improve the matching and retrieval in\nTMs. Section 3 contains information about the\nthreshold is appropriate for a given text.\n3https://tfhub.dev/google/universal-\nsentence-encoder-multilingual-large/3settings of the experiments carried out in this\npaper. It includes the experiments that were done\nfor semantic textual similarity tasks comparing\nthe Universal Sentence Encoder and edit distance.\nThe same section also presents the results of\nthe experiments on real world TMs. Section 4\ndiscusses the results and describes future research\ndirections. The implementation of the methods\npresented in this paper is available on Github.4\n2 Related Work\nDespite being the most used tools by professional\ntranslators, Translation Memories have rarely been\ncriticised because of the quality of the segments\nthey retrieve. 
Instead, quite often the requests\nfrom translators focus on the quality of the user\ninterface, the need to handle different file formats,\ntheir speed and possibility of working in the cloud\n(Zaretskaya et al., 2018). Most of the current\nwork on TMs is focused on the development of\naddons like terminology managers and plugins\nwhich integrate machine translation engines, as\nwell as project management features (Gupta et\nal., 2016). Even though retrieval of previously\ntranslated segments is a key feature in a TM\nsystem, this process is still very much limited to\nedit-distance based measures.\nResearchers working on natural language\nprocessing have proposed a number of methods\nwhich try to improve the existing matching and\nretrieval approaches used by translation memories.\nHowever, the majority of these approaches are\nnot suitable for large TMs, like the ones\nnormally employed by professional translators\nor were evaluated on very small number of\nsegments. Planas and Furuse (1999) extend the\nedit distance metric to incorporate lemmas and\npart-of-speech information when calculating the\nsimilarity between two segments, but they test\ntheir approach on less than 150 segments from\ntwo domains using two translation memories with\nless than 40,000 segments in total. Lemmas and\npart-of-speech information is also used in (Hod ´asz\nand Pohl, 2005) in order to improve matching,\nespecially for morphologically rich languages like\nHungarian. They also experiment with sentence\nskeletons in which NPs are automatically aligned\nbetween source and target. Unfortunately, the\npaper presents only preliminary results. Pekar\n4https://github.com/tharindudr/\nintelligent-translation-memories\nand Mitkov (2007) show how it is possible to\nimprove the quality of matching by taking into\nconsideration the syntactic structure of sentences.\nUnfortunately, the evaluation is carried out on\nonly a handful of carefully selected segments.\nAnother method which performs matching at level\nof syntactic trees is proposed in (Vanallemeersch\nand Vandeghinste, 2014). The results presented in\ntheir paper are preliminary and the authors notice\nthat tree matching method is “prohibitively slow”.\nMore recent work has focused on incorporating\nparaphrases into the matching and retrieving\nalgorithm (Utiyama et al., 2011; Gupta and\nOrasan, 2014; Chatzitheodorou, 2015). Utiyama\net al. (2011) proposed a finite transducer which\nconsiders paraphrases during the matching. The\nevaluation shows that the method improves\nboth precision and recall of matching, but it\nwas carried out with only one translator and\nfocused only on segments with exactly the\nsame meaning. Gupta and Orasan (2014)\nproposed a variant of the edit distance metric\nwhich incorporates paraphrases from PPDB5using\ngreedy approximation and dynamic programming.\nBoth automatic evaluation and evaluation with\ntranslators show the advantages of using this\napproach (Gupta et al., 2016). Chatzitheodorou\n(2015) follows a similar approach. They use NooJ6\nto create paraphrases for the verb constructions\nin all source translation units to expand the fuzzy\nmatching capabilities when searching in the TM.\nEvaluation with professional translators showed\nthat the proposed method helps and speeds up the\ntranslation process.\nTo best of our knowledge, deep learning\nmethods have not been used successfully in\ntranslation memories. 
Gupta (2016) presents an\nattempt to use ReVal, an evaluation metric that was\nsuccessfully applied in the WMT15 metrics task\n(Gupta et al., 2015). Unfortunately, none of the\nneural based methods used are able to lead to better\nresults than the standard edit distance.\n3 Experiments and Results\nAs mentioned above, the purpose of this research\nis to find out whether it is possible to improve\nthe quality of the retrieved segments by using\nthe Universal Sentence Encoder (Cer et al., 2018)\nreleased by Google as the sentence encoder for\n5http://paraphrase.org/\n6https://nooj4nlp.net.cutestat.com/this experiment. It comes with two versions:\none trained with a Transformer encoder and the\nother trained with a Deep Averaging Network\n(DAN) (Cer et al., 2018). The transformer\nencoder architecture uses an attention mechanism\n(Vaswani et al., 2017) to compute context aware\nrepresentations of words in a sentence and average\nthose representations to calculate the embedding\nfor the sentence. The DAN encoder begins\nby averaging together word and bi-gram level\nembeddings. Sentence embeddings are then\nobtained by passing the averaged representation\nthrough a feedforward deep neural network\n(DNN). The architecture of the DAN encoder is\nsimilar to the one proposed in (Iyyer et al., 2015).\nThe two architectures have a trade-off of\naccuracy and computational resource requirement.\nThe one that relies on a Transformer encoder\nhas higher accuracy, but is computationally more\nexpensive. In contrast the one with DAN encoding\nis computationally less expensive, but has a\nslightly lower accuracy. For the experiments\npresented in this paper we used both architectures.\nThe trained Universal Sentence Encoder model for\nEnglish is available on TensorFlow Hub7.\n3.1 Experiments on STS\nIn order to assess the performance of the two\narchitectures described in the previous section,\nwe applied them on several Semantic Textual\nSimilarity (STS) datasets and compared their\nresults with those obtained when only edit distance\nis employed. This was done only to find out how\nwell our unsupervised methods capture semantic\ntextual similarity in comparison to a simple edit\ndistance.\nIn this section we present the datasets that we\nused, the method and the results.\n3.1.1 Dataset\nWe carried out these experiments using two\ndatasets: the SICK dataset (Bentivogli et al., 2016)\nand SemEval 2017 Task 1 dataset (Cer et al., 2017)\nwhich we will refer to as STS2017 dataset.\nThe SICK data contains 9,927 sentence pairs\nwith a 5,000/4,927 training/test split. Each pair is\nannotated with a relatedness score between 1 and\n5, corresponding to the average relatedness judged\nby 10 different individuals. Table 1 shows a few\nexamples from the SICK training dataset.\n7https://tfhub.dev/google/universal-\nsentence-encoder/4\nSentence Pair Similarity\n1. A little girl is looking at a woman in costume.\n2. A young girl is looking at a woman in costume.4.7\n1. A person is performing tricks on a motorcycle.\n2. The performer is tricking a person on a motorcycle.2.6\n1. Someone is pouring ingredients into a pot.\n2. A man is removing vegetables from a pot.2.8\n1. Nobody is pouring ingredients into a pot.\n2. Someone is pouring ingredients into a pot.3.5\nTable 1: Example sentence pairs from the SICK training data\nThe STS2017 test datset has 250 sentence\npairs annotated with a relatedness score between\n[1,5]. 
As the training data for the competition, participants were encouraged to make use of all existing data sets from prior STS evaluations, including all previously released trial, training and evaluation data [8]. Once we combined them all, STS2017 had 8,527 sentence pairs with an 8,227/250 training/test split. Table 2 shows a few examples from the STS2017 dataset.

[8] http://alt.qcri.org/semeval2017/task1/

Table 2: Example sentence pairs from the STS2017 data
1. Two people in snowsuits are lying in the snow and making snow angels.
2. Two angels are making snow on the lying children
Similarity: 2.5
1. A group of men play soccer on the beach.
2. A group of boys are playing soccer on the beach.
Similarity: 3.6
1. One woman is measuring another woman's ankle.
2. A woman measures another woman's ankle.
Similarity: 5.0
1. A man is cutting up a cucumber.
2. A man is slicing a cucumber.
Similarity: 4.2

3.1.2 Method
We followed a simple approach to calculate the similarity between two sentences. Each sentence was passed through the Universal Sentence Encoder to acquire the corresponding sentence vector. The Universal Sentence Encoder uses a 512-dimensional vector to represent a sentence. If the vectors for two sentences X and Y are a and b respectively, we calculate the cosine similarity between a and b as in equation (1) and use that value to represent the similarity between the two sentences.

\cos(a,b) = \frac{a \cdot b}{\lVert a \rVert \, \lVert b \rVert} = \frac{\sum_{i=1}^{n} a_i b_i}{\sqrt{\sum_{i=1}^{n} a_i^{2}} \, \sqrt{\sum_{i=1}^{n} b_i^{2}}}    (1)

Simple edit distance between two sentences was used as a baseline. In order to convert it to a similarity metric, we converted the edit distance between two sentences to its negative value and performed a min-max normalisation over the whole dataset to bring it to a value between 0 and 1.

3.1.3 Results
All the results were evaluated using the three evaluation metrics normally employed in STS tasks: Pearson correlation, Spearman correlation and Mean Squared Error (MSE). Table 3 contains the results for the SICK dataset and Table 4 for the STS2017 dataset.

Table 3: Results for the SICK dataset
Algorithm       Pearson   Spearman   MSE
DAN Encoder     0.761     0.708      0.514
Transformer     0.780     0.721      0.426
Edit Distance   0.321     0.422      3.112

Table 4: Results for the STS2017 dataset
Algorithm       Pearson   Spearman   MSE
DAN Encoder     0.744     0.708      0.612
Transformer     0.723     0.721      0.451
Edit Distance   0.360     0.481      2.331

As shown in Tables 3 and 4, both architectures of the Universal Sentence Encoder outperform edit distance significantly on all three evaluation metrics for both datasets. This is not surprising given how simple edit distance is, but it reinforces our motivation to use better methods to capture semantic similarity in translation memories. Table 5 shows some of the example sentences where the Universal Sentence Encoder architectures showed promising results against the baseline (edit distance).

As can be seen in Table 5, both architectures of the Universal Sentence Encoder handle semantic textual similarity better than edit distance in many cases where the word order is changed between the two sentences but the meaning remains the same. This detection of similarity even when the word order is changed will be important in segment matching and retrieval in TMs.

Table 5: Examples of sentence pairs where the Universal Sentence Encoder performed significantly better than edit distance in the STS task. GOLD is the score assigned by humans, normalised between 0 and 1; ED is the similarity obtained with edit distance; Transf. and DAN are the similarities obtained by the Transformer and DAN architectures of the Universal Sentence Encoder respectively.
Sentence 1: Israel expands subsidies to settlements
Sentence 2: Israel widens settlement subsidies
GOLD 1.0000, ED 0.0214, Transf. 0.8524, DAN 0.8231
Sentence 1: A man plays the guitar and sings.
Sentence 2: A man is singing and playing a guitar.
GOLD 1.0000, ED 0.0124, Transf. 0.7143, DAN 0.7006
Sentence 1: A man with no shirt is holding a football
Sentence 2: A football is being held by a man with no shirt
GOLD 1.0000, ED 0.0037, Transf. 0.9002, DAN 0.8358
Sentence 1: EU ministers were invited to the conference but canceled because the union is closing talks on agricultural reform, said Gerry Kiely, a EU agriculture representative in Washington.
Sentence 2: Gerry Kiely, a EU agriculture representative in Washington, said EU ministers were invited but canceled because the union is closing talks on agricultural reform.
GOLD 1.0000, ED 0.1513, Transf. 0.7589, DAN 0.7142

3.2 Experiments on Translation Memories
In this section we present the experiments we conducted on TMs using the Universal Sentence Encoder. First we introduce the dataset that we used, and then we present the methodology employed and the evaluation results.

3.2.1 Dataset
In order to conduct the experiments, we used the DGT-Translation Memory, a translation memory made publicly available by the European Commission's (EC) Directorate General for Translation, together with the EC's Joint Research Centre. It consists of segments and their professionally produced translations covering twenty-two official European Union (EU) languages and their 23 language-pair combinations (Steinberger et al., 2012). It is typically used by researchers who work on TMs (Gupta et al., 2016; Baisa et al., 2015).

We used the English-Spanish segment pairs for the experiments, but our approach is easily adaptable to any language pair as long as there are embeddings available for the source language. We used data from the year 2018: 2018 Volume 1 was used as the translation memory and 2018 Volume 3 provided the input segments. The translation memory we built from 2018 Volume 1 had 230,000 segment pairs, whilst 2018 Volume 3 had 66,500 segment pairs which we used as input segments.

3.2.2 Method
We conducted the following steps for both architectures of the Universal Sentence Encoder.

1. Calculated the sentence embeddings for each segment in the translation memory (230,000 segments) and stored the vectors in an AquilaDB [9] database. AquilaDB is a decentralised vector database to store feature vectors and perform K Nearest Neighbour retrieval. It is built on top of the popular Apache CouchDB [10]. A record of the database has three fields: source segment, target segment and source segment vector.

2. Calculated the sentence embedding for one incoming segment.

3. Calculated the cosine similarity of that embedding with each of the embeddings in the database using equation (1). We retrieve the embedding that has the highest cosine similarity with the input segment embedding and take the corresponding target segment as the translation memory match. We used the 'getNearest' functionality provided by AquilaDB for this step.

The efficiency of TM matching and retrieval is a key factor for the translators who use them. Therefore, we first analysed the efficiency of each architecture of the Universal Sentence Encoder. The results are shown in Table 6.
The experiments were\ncarried out on an Intel(R) Core(TM) i7-8700 CPU\n@ 3.20GHz desktop computer. The performance\nof the Universal Sentence Encoder will be more\n9https://github.com/a-mma/AquilaDB\n10https://github.com/apache/couchdb\nefficient in a GPU (Graphics Processing Unit).\nNonetheless we carried our experiments without\nusing a GPU since the translators using translation\nmemory tools would probably not have access to a\nGPU on daily basis.\nArchitecture Step 1 Step 2 Step 3\nDAN Encoder 78s 0.77s 0.40s\nTransformer 108s 1.23s 0.40s\nTable 6: Time efficiency of each architecture in Universal\nSentence Encoder\nWhen we calculated the sentence embeddings\nfor the segments in the translation memory in\nStep 1, we processed the segments in batches\nof 256 segments. As can be seen in the table 6,\nDAN Architecture had the maximum efficiency\nproviding sentence embeddings within 78\nseconds for 230,000 segments. The Transformer\narchitecture was not too far behind, being able to\ncalculate the embeddings of the 230,000 segments\nin 108 seconds.\nThe next column in table 6 reports the time\ntaken from each sentence encoder to embed a\nsingle segment. We did not consider input\nsegments as batches as we did earlier for the\nsegments in the translation memory. We assumed\nthat since the translators translate the segments\none by one it would not be fair to encode the\ninput segments in batches. In that step too, the\nDAN Architecture was more efficient than the\nTransformer Architecture.\nThe next column is the time taken to retrieve\nthe best match from the translation memory. It\nincludes the time taken to calculate the cosine\nsimilarity of the segment embeddings of the\nsegments of the translation memory with the\nsegment embedding of the input segment. Also, it\nincludes the time taken to sort the similarities and\nget the index of the highest similarity and retrieve\nthe corresponding segment which we considered\nas the best match for the input segment from the\ntranslation memory. As shown in the table 6 both\narchitectures took approximately similar time for\nthis step since the size of the embedding is same\nfor both architectures.\nAs a whole, time taken to acquire the best match\nfrom the translation memory is the combined\ntime taken to step 2 and step 3. Therefore,\nthe time taken by the Transformer Encoder to\nretrieve a match from the translation memoryfor one incoming sentence is just 1.6s, which\nis reasonable. In light of this, we decided\nto use the Transformer Architecture for future\nexperiments since it is efficient enough and since\nit was reported that it provides better accuracy in\nsemantic retrieval tasks than the DAN Architecture\n(Cer et al., 2018).\n3.2.3 Results\nIn order to compare the results obtained by\nour method with those of an existing translation\nmemory tool we used Okapi which uses simple\nedit distance to retrieve matches from the\ntranslation memory. We calculated the METEOR\nscore (Denkowski and Lavie, 2014) between the\nactual translation of the incoming segment and the\nmatch we retrieved from the translation memory\nwith the transformer architecture of the Universal\nSentence Encoder. We repeated the same process\nwith the match we retrieved from Okapi. 
We used the METEOR score since we believed it can capture the semantic similarity between two segments better than the BLEU score (Denkowski and Lavie, 2014).

To understand the performance of our method, we first removed the segments where the match provided by Okapi and the Universal Sentence Encoder was the same. Then, to have a better analysis of the results, we divided the results into five partitions. The first partition contained the matches derived from Okapi that had a fuzzy match score between 0.8 and 1. We calculated the average METEOR score for the segments retrieved from Okapi and for the segments retrieved from the Universal Sentence Encoder in that partition. We performed the same process for all the partitions: fuzzy match score ranges 0.6-0.8, 0.4-0.6, 0.2-0.4 and 0-0.2.

Table 7: Result comparison between Okapi and the Universal Sentence Encoder for each partition. The Fuzzy score column represents each partition. The Okapi column shows the average METEOR score between the matches provided by Okapi and the actual translations in that partition. The USE column shows the average METEOR score between the matches provided by the Universal Sentence Encoder and the actual translations in that partition. The Amount column shows the number of sentences in each partition. Bold shows the best result for that partition.
Fuzzy score   Okapi   USE     Amount
0.8-1.0       0.931   0.854    1624
0.6-0.8       0.693   0.702    4521
0.4-0.6       0.488   0.594    6712
0.2-0.4       0.225   0.318   13136
0-0.2         0.011   0.134   24612

As shown in Table 7, the Universal Sentence Encoder performs better than Okapi for fuzzy match scores below 0.8, which means that the Universal Sentence Encoder performs better when Okapi fails to find a significantly similar match in the TM. However, this is not a surprise given that the METEOR score is largely based on overlapping n-grams, and therefore will reward segments that have a high fuzzy match score.

However, we noticed that in most cases the difference between the actual translation and the suggested match from either Okapi or the Universal Sentence Encoder is just a number, a location, an organisation or a name of a person. We thought this might affect the results, since we are depending on the Universal Sentence Encoder's ability to retrieve semantically similar segments from the TM. For this reason, we applied a Named Entity Recognition (NER) pipeline to the actual translations, the segments retrieved from Okapi and the segments retrieved from the Universal Sentence Encoder. Since the target language is Spanish, we used the Spanish NER pipeline provided by spaCy that was trained on the AnCora and WikiNER corpora [11]. We detected locations, organisations and person names with the NER pipeline and replaced them with a placeholder. We also used Añotador [12] to detect dates in the segments and replaced them too with a placeholder. Last, we used a regular expression to detect number sequences in the segments and replaced them too with a placeholder. After that we removed the cases where the match provided by Okapi and the Universal Sentence Encoder is the same and recalculated the results of Table 7 following the same process.

[11] https://spacy.io/models/es
[12] http://annotador.oeg-upm.net/

As shown in Table 8, for the cases where the fuzzy match score is above 0.8 the segments retrieved by Okapi are still better than the segments retrieved from the Universal Sentence Encoder. However, for the cases where the fuzzy match score is below 0.8 the Universal Sentence Encoder seems to be better than Okapi. After performing NER, the results of the Universal Sentence Encoder improved significantly in most of the partitions, especially in the 0.6-0.8 partition.

Table 8: Result comparison between Okapi and the Universal Sentence Encoder for each partition after performing NER. The Fuzzy score column represents each partition. The Okapi column shows the average METEOR score between the matches provided by Okapi and the actual translations in that partition. The USE column shows the average METEOR score between the matches provided by the Universal Sentence Encoder and the actual translations in that partition. The Amount column shows the number of sentences in each partition. Bold shows the best result for that partition.
Fuzzy score   Okapi   USE     Amount
0.8-1.0       0.942   0.889    1512
0.6-0.8       0.705   0.726    3864
0.4-0.6       0.496   0.602    6538
0.2-0.4       0.228   0.320   13128
0-0.2         0.011   0.134   24612

Given the fact that METEOR relies largely on string overlap, we assumed that it is unable to capture the fact that the segments retrieved using the Universal Sentence Encoder are semantically equivalent. Therefore, we asked three native Spanish speakers to compare the segments from Okapi and report the sentences where the Universal Sentence Encoder performed significantly better than Okapi. Due to time restrictions they did not have time to go through all the segments, but their opinion was generally that the Universal Sentence Encoder was better at identifying semantically similar segments in the TM. Table 9 presents sample segments they provided.

4 Conclusion and Future Work
In this paper we have proposed a new TM matching and retrieval method based on the Universal Sentence Encoder. Our assumption was that by using this representation we would be able to retrieve better segments from a TM than when using a standard edit distance. As shown in Section 3.2.3, the Universal Sentence Encoder performs better than Okapi for fuzzy match scores below 0.8. Therefore, we believe that sentence encoders can improve the matching and retrieval in TMs and should be explored further.
Usually, TM matches with lower fuzzy match scores (below 0.8) are not used by professional translators or, when used, they lead to a decrease in translation productivity. Our method, however, can provide better matches for sentences below fuzzy match score 0.8 and should therefore be able to improve translation productivity.
According to the annotation guidelines of Cer et al. (2017), a semantic textual similarity score of 0.8 means "The two sentences are mostly equivalent, but some unimportant details differ", and a semantic textual similarity score of 0.6 means "The two sentences are roughly equivalent, but some important information differs/missing". If we further analyse the fuzzy match score range 0.6-0.8, as shown in Table 10, the mean semantic textual similarity for the sentences provided by the Universal Sentence Encoder is 0.768. Therefore, we assume that the matches retrieved by the Universal Sentence Encoder in the fuzzy match score range 0.6-0.8 will help to improve translation productivity.

Fuzzy score | Mean STS score
0.8-1.0     | 0.952
0.6-0.8     | 0.768
0.4-0.6     | 0.642
0.2-0.4     | 0.315
0-0.2       | 0.121
Table 10: Mean STS score for the sentences retrieved by the Universal Sentence Encoder in each fuzzy match score range. The Fuzzy score column shows the fuzzy match score ranges and the Mean STS score column shows the mean STS score for the sentences retrieved by the Universal Sentence Encoder in that range.
However, this is something that we plan to analyse further by carrying out evaluations with professional translators.
In the future, we also plan to experiment with other sentence encoders such as InferSent (Conneau et al., 2017) and SBERT (Reimers and Gurevych, 2019), and with alternative algorithms which are capable of capturing the semantic textual similarity between two sentences. We will try unsupervised methods like word vector averaging and word moving distance (Ranasinghe et al., 2019a), as well as supervised algorithms such as Siamese neural networks (Ranasinghe et al., 2019b) and transformers (Devlin et al., 2018).

5 Acknowledgment
We would like to acknowledge Rocío Caro Quintana from the University of Wolverhampton, Encarnación Núñez Ignacio from the University of Wolverhampton and Lucía Bellés-Calvera from Jaume I University: the team of volunteer annotators who provided their free time and effort to manually evaluate the results of the Universal Sentence Encoder and the edit distance.
We would also like to acknowledge María Navas-Loro and Pablo Calleja from the Polytechnic University of Madrid for providing Añotador free of charge to detect dates in the Spanish segments.

References
Arthern, Peter J. 1979. Machine translation and computerized terminology systems: A translator's viewpoint. Translating and the Computer, Proceedings of a Seminar, London 14th November 1978. Amsterdam: North-Holland Publishing Company, pages 77–108.
Baisa, Vít, Aleš Horák, and Marek Medveď. 2015. Increasing coverage of translation memories with linguistically motivated segment combination methods. In Proceedings of the Workshop Natural Language Processing for Translation Memories, pages 31–35, Hissar, Bulgaria, September. Association for Computational Linguistics.
Bentivogli, Luisa, Raffaella Bernardi, Marco Marelli, Stefano Menini, Marco Baroni, and Roberto Zamparelli. 2016. SICK through the SemEval glasses. Lesson learned from the evaluation of compositional distributional semantic models on full sentences through semantic relatedness and textual entailment. Language Resources and Evaluation, 50:95–124.
Cer, Daniel M., Mona T. Diab, Eneko Agirre, Iñigo Lopez-Gazpio, and Lucia Specia. 2017. SemEval-2017 task 1: Semantic textual similarity multilingual and crosslingual focused evaluation. In SemEval@ACL.
Cer, Daniel, Yinfei Yang, Sheng-yi Kong, Nan Hua, Nicole Limtiaco, Rhomni St. John, Noah Constant, Mario Guajardo-Cespedes, Steve Yuan, Chris Tar, Brian Strope, and Ray Kurzweil. 2018. Universal sentence encoder for English. In Proceedings of the 2018 Conference on Empirical Methods in Natural Language Processing: System Demonstrations, pages 169–174, Brussels, Belgium, November. Association for Computational Linguistics.
Chatzitheodorou, Konstantinos. 2015. Improving translation memory fuzzy matching by paraphrasing. In Proceedings of the Workshop Natural Language Processing for Translation Memories, pages 24–30, Hissar, Bulgaria, September.
Association for\nComputational Linguistics.\nConneau, Alexis, Douwe Kiela, Holger Schwenk, Lo ¨ıc\nBarrault, and Antoine Bordes. 2017. Supervised\nlearning of universal sentence representations from\nnatural language inference data. In Proceedings\nof the 2017 Conference on Empirical Methods\nin Natural Language Processing , pages 670–680,\nCopenhagen, Denmark, September. Association for\nComputational Linguistics.\nDenkowski, Michael and Alon Lavie. 2014. Meteor\nuniversal: Language specific translation evaluation\nfor any target language. In Proceedings of the EACL\n2014 Workshop on Statistical Machine Translation .\nDevlin, Jacob, Ming-Wei Chang, Kenton Lee, and\nKristina Toutanova. 2018. BERT: pre-training\nof deep bidirectional transformers for language\nunderstanding. CoRR , abs/1810.04805.\nGupta, Rohit and Constantin Orasan. 2014.\nIncorporating paraphrasing in translation memory\nmatching and retrieval. In Proceedings of the\nSeventeenth Annual Conference of the European\nAssociation for Machine Translation (EAMT2014) ,\npages 3–10.\nGupta, Rohit, Constantin Or ˘asan, and Josef van\nGenabith. 2015. ReVal: A Simple and Effective\nMachine Translation Evaluation Metric Based on\nRecurrent Neural Networks. In Proceedings of the\n2015 Conference on Empirical Methods in Natural\nLanguage Processing , pages 1066–1072, Lisbon,\nPortugal, September.\nGupta, Rohit, Constantin Or ˘asan, Marcos Zampieri,\nMihaela Vela, Josef van Genabith, and Ruslan\nMitkov. 2016. Improving translation memory\nmatching and retrieval using paraphrases. Machine\nTranslation , 30(1-2):19–40.\nGupta, Rohit. 2016. USE OF LANGUAGE\nTECHNOLOGY TO IMPROVE MATCHING AND\nRETRIEVAL IN TRANSLATION MEMORY . Ph.D.\nthesis, University of Wolverhampton.\nHod´asz, G ´abor and G ´abor Pohl. 2005. MetaMorpho\nTM: a linguistically enriched translation memory. In\nIn International Workshop, Modern Approaches in\nTranslation Technologies .\nIyyer, Mohit, Varun Manjunatha, Jordan Boyd-\nGraber, and Hal Daum ´e III. 2015. Deep\nunordered composition rivals syntactic methods\nfor text classification. In Proceedings of the\n53rd Annual Meeting of the Association for\nComputational Linguistics and the 7th International\nJoint Conference on Natural Language Processing\n(Volume 1: Long Papers) , pages 1681–1691,\nBeijing, China, July. Association for Computational\nLinguistics.\nMueller, Jonas and Aditya Thyagarajan. 2016.\nSiamese recurrent architectures for learning sentence\nsimilarity. In Proceedings of the Thirtieth AAAI\nConference on Artificial Intelligence , AAAI’16,\npage 2786–2792. AAAI Press.\nPekar, Viktor and Ruslan Mitkov. 2007. New\nGeneration Translation Memory: Content-Sensitive\nMatching. In Proceedings of the 40th Anniversary\nCongress of the Swiss Association of Translators,\nTerminologists and Interpreters .\nPlanas, Emmanuel and Osamu Furuse. 1999.\nFormalizing Translation Memories. In Proceedings\nof the 7th Machine Translation Summit , pages 331–\n339.\nRanasinghe, Tharindu, Constantin Orasan, and Ruslan\nMitkov. 2019a. Enhancing unsupervised sentence\nsimilarity methods with deep contextualised\nword representations. In Proceedings of the\nInternational Conference on Recent Advances in\nNatural Language Processing (RANLP 2019) , pages\n994–1003, Varna, Bulgaria, September. INCOMA\nLtd.\nRanasinghe, Tharindu, Constantin Orasan, and Ruslan\nMitkov. 2019b. Semantic textual similarity with\nSiamese neural networks. 
In Proceedings of the\nInternational Conference on Recent Advances in\nNatural Language Processing (RANLP 2019) , pages\n1004–1011, Varna, Bulgaria, September. INCOMA\nLtd.\nReimers, Nils and Iryna Gurevych. 2019. Sentence-\nbert: Sentence embeddings using siamese bert-\nnetworks. In Proceedings of the 2019 Conference on\nEmpirical Methods in Natural Language Processing .\nAssociation for Computational Linguistics, 11.\nSimard, Michel. 2020. Building and using parallel\ntext for translation. In O’Hagan, Minako, editor, The\nRoutledge Handbook of Translation and Technology ,\nchapter 5, pages 78 —- 90. Routledge.Steinberger, Ralf, Andreas Eisele, Szymon Klocek,\nSpyridon Pilos, and Patrick Schl ¨uter. 2012.\nDGT-TM: A freely available translation memory\nin 22 languages. In Proceedings of the Eighth\nInternational Conference on Language Resources\nand Evaluation (LREC’12) , pages 454–459,\nIstanbul, Turkey, May. European Language\nResources Association (ELRA).\nTai, Kai Sheng, Richard Socher, and Christopher D.\nManning. 2015. Improved semantic representations\nfrom tree-structured long short-term memory\nnetworks. In Proceedings of the 53rd Annual\nMeeting of the Association for Computational\nLinguistics and the 7th International Joint\nConference on Natural Language Processing\n(Volume 1: Long Papers) , pages 1556–1566,\nBeijing, China, July. Association for Computational\nLinguistics.\nUtiyama, Masao, Graham Neubig, Takashi Onishi,\nand Eiichiro Sumita. 2011. Searching Translation\nMemories for Paraphrases. In Proceedings of the\n13th Machine Translation Summit , pages 325–331,\nXiamen, China, September.\nVanallemeersch, Tom and Vincent Vandeghinste.\n2014. Improving fuzzy matching through syntactic\nknowledge. In Translating and the Computer 36 ,\nvolume 36, pages 90 – 99, London, UK.\nVaswani, Ashish, Noam Shazeer, Niki Parmar,\nJakob Uszkoreit, Llion Jones, Aidan N. Gomez,\nundefinedukasz Kaiser, and Illia Polosukhin. 2017.\nAttention is all you need. In Proceedings of the 31st\nInternational Conference on Neural Information\nProcessing Systems , NIPS’17, page 6000–6010, Red\nHook, NY , USA. Curran Associates Inc.\nYang, Yinfei, Daniel Matthew Cer, Amin\nAhmad, Mandy Guo, Jax Law, Noah Constant,\nGustavo Hern ´andez ´Abrego, Steve Yuan, Chris Tar,\nYun-Hsuan Sung, Brian Strope, and Ray Kurzweil.\n2019. Multilingual universal sentence encoder for\nsemantic retrieval. ArXiv , abs/1907.04307.\nZaretskaya, Anna, Gloria Corpas Pastor, and Miriam\nSeghiri. 2018. User Perspective on Translation\nTools: Findings of a User Survey. In Corpas\nPastor, Gloria and Isabel Duran, editors, Trends in E-\ntools and Resources for Translators and Interpreters ,\npages 37 – 56. Brill.",
"main_paper_content": null
}
|
{
"decision": "Unknown",
"reviews": []
}
| 0 | 0 |
[] |
[] |
{
"id": "2oZOOI9VXiP",
"year": null,
"venue": "EAMT 2014",
"pdf_link": "https://aclanthology.org/2014.eamt-1.2.pdf",
"forum_link": "https://openreview.net/forum?id=2oZOOI9VXiP",
"arxiv_id": null,
"doi": null
}
|
{
"title": "Incorporating paraphrasing in translation memory matching and retrieval",
"authors": [
"Rohit Gupta",
"Constantin Orasan"
],
"abstract": "Rohit Gupta, Constantin Orǎsan. Proceedings of the 17th Annual conference of the European Association for Machine Translation. 2014.",
"keywords": [],
"raw_extracted_content": "Incorporating Paraphrasing in Translation Memory Matching and\nRetrieval\nRohit Gupta andConstantin Or ˘asan\nRGCL, Research Institute of Information and Language Processing,\nUniversity of Wolverhampton, Stafford Street,\nWolverhampton WV11LY, UK\nfR.Gupta, C.Orasan [email protected]\nAbstract\nCurrent Translation Memory (TM) sys-\ntems work at the surface level and lack se-\nmantic knowledge while matching. This\npaper presents an approach to incorpo-\nrating semantic knowledge in the form\nof paraphrasing in matching and retrieval.\nMost of the TMs use Levenshtein edit-\ndistance or some variation of it. Generat-\ning additional segments based on the para-\nphrases available in a segment results in\nexponential time complexity while match-\ning. The reason is that a particular phrase\ncan be paraphrased in several ways and\nthere can be several possible phrases in a\nsegment which can be paraphrased. We\npropose an efficient approach to incor-\nporating paraphrasing with edit-distance.\nThe approach is based on greedy approx-\nimation and dynamic programming. We\nhave obtained significant improvement in\nboth retrieval and translation of retrieved\nsegments for TM thresholds of 100%, 95%\nand 90%.\n1 Introduction\nTranslation Memories (TMs) are tools commonly\nused by professional translators to speed up the\ntranslation process. The concept of TM can be\ntraced back to 1978 when Peter J. Arthern pro-\nposed the use of a translation archive (Arthern,\n1978). A TM system helps translators by retriev-\ning previously translated segments to extract the\nrelevant match for reuse. TMs also help them in\nc⃝2014 The authors. This article is licensed under a Creative\nCommons 3.0 licence, no derivative works, attribution, CC-\nBY-ND.maintaining the consistency with previous work\nand use of appropriate terminology. Lagoudaki\n(2006) surveyed the use of TMs by professional\ntranslators in 2006, and 721 out of 874 (82.5%)\nreplies confirmed the use of a TM.\nAlthough, extensive research has been done in\nNatural Language Processing (NLP) with empha-\nsis on improving the performance of automatic\nMachine Translation (MT), there is not much re-\nsearch on improving the TM systems by using\nNLP techniques. So far, most of the research in\nTM has been carried out mostly in industry with\nmore focus on improving user interface and user\nexperience in general rather than employing lan-\nguage technology to improve matching and re-\ntrieval. Recent research (Koehn and Senellart,\n2010; Zhechev and Genabith, 2010) on TM fo-\ncus more on improving machine translation using\nTMs.\nThe TMs currently used by translators find\nmatches for a given segment on the basis of sur-\nface form. This means that even if a paraphrased\nsegment is available in the TM, the TM systems\nhave no way to retrieve such segments. In this pa-\nper we try to mitigate this problem by using ex-\nisting paraphrase databases. To achieve this, we\nhave incorporated paraphrasing in the TM match-\ning process. A trivial approach to incorporating\nparaphrasing would be to generate all the possible\nsegments based on paraphrases available. How-\never in this approach the number of segments in-\ncreases exponentially and hence can not be applied\nin our task. 
This paper proposes a greedy approxi-\nmation and dynamic programming technique to in-\ncorporate paraphrasing in the matching algorithm.\n3\n2 Paraphrasing for TM\n2.1 Existing Work\nThe idea of incorporating paraphrasing or semantic\nfeatures at the conceptual level is not new. Work\ndone by (Pekar and Mitkov, 2007) and (Mitkov,\n2008) explores the issues in TM systems. Al-\nthough these works present good insight into TM\nsystems and their limitations, there is no feasi-\nble practical implementation proposed to improve\nthem. Another work (Utiyama et al., 2011) in-\ncorporates paraphrasing into TM. This approach\nuses a statistical framework to integrate paraphras-\ning which requires corpora from the same domain\nwith an abundance of similar segments. The down-\nside of this approach is that it requires genera-\ntion of all the additional segments based on para-\nphrases which is inefficient both in terms of time\nand space. In addition, the approach was used\nto get exact matches only. In SMT, Onishi et al.\n(2010) and Du et al. (2010) use paraphrasing lat-\ntice to improving MT by gaining more coverage.\n2.2 Need for Paraphrasing\nCurrent TM systems work on the surface level with\nno linguistic information. Because of this often\nthe paraphrased segments available in the TM are\neither not retrieved or retrieved with a very low\nthreshold and are ranked incorrectly among the re-\ntrieved segments. The lack of semantic knowledge\nin the matching process also leads to cases where,\nfor the same similarity score shown by the system,\none segment may require little effort while another\nrequires more in terms of post editing. For ex-\nample, even though segments like “the period laid\ndown in article 4(3)” and “the duration set forth in\narticle 4(3)” have the same meaning, the one seg-\nment may not be retrieved for another in current\nTM systems as having only 57% similarity based\non edit-distance as implemented in OmegaT1. In\nthis case we can see that one segment is a para-\nphrase of the another segment. To mitigate this\nlimitation of TM, we propose an approach to in-\ncorporating paraphrasing in TM matching without\ncompromising the beauty of edit-distance which\nhas been trusted by translators, translation service\nproviders and TM developers over the years.\n1OmegaT is an open source TM available form\nhttp://www.omegat.org2.3 PPDB:The Paraphrase Database\nThe PPDB 1.0 paraphrases database (Ganitkevitch\net al., 2013) contains lexical, phrasal and syntactic\nparaphrases automatically extracted using a large\ncollection of parallel corpora. This database comes\nin six sizes (S, M, L, XL, XXL, XXXL) where S is\nthe smallest and XXXL is the largest. The smaller\npackages contain only high precision paraphrases,\nwhile the larger ones aims at more coverage. We\nhave used lexical and phrasal paraphrases of “L”\nsize for our approach. The reason for choosing L\nsize was to retain the quality of segments retrieved\nusing paraphrasing and at the same time gain some\ncoverage.\n2.4 Classification of Paraphrases\nWe have classified paraphrases obtained from\nPPDB 1.0 into four types for our implementation\non the basis of the number of words in the source\nand target phrases. These four categories are as\nfollows:\n1. Paraphrases having one word on both the\nsource and target sides, e.g. “period”\n)“duration”\n2. Paraphrases having multiple words on both\nsides but differing in one word only, e.g. “in\nthe period” )“during the period”\n3. 
Paraphrases having multiple words as well as the same number of words on both sides, e.g. "laid down in article" → "set forth in article"
4. Paraphrases in which the number of words on the source and target sides differ, e.g. "a reasonable period of time to" → "a reasonable period to"

As we have already pointed out, a trivial approach to implementing paraphrasing along with edit-distance is to generate all the possible segments based on the paraphrases available and store these additional segments in the TM. This approach is highly inefficient both in terms of time and space. For example, for a TM segment which has four different phrases where each phrase can be paraphrased in five more possible ways, we get 1295 (6^4 − 1) additional segments (still not considering that these phrases may contain paraphrases as well) to store in the TM, which is inefficient even for small TMs. To handle this problem, each class of paraphrases is processed in a different manner. In our classification, Type 1 paraphrases are one-word paraphrases and Type 2 paraphrases can be reduced to one-word paraphrases by taking the context into account when storing them in the TM. For Type 1 and Type 2, we get the same accuracy as the trivial method in polynomial time complexity (see Section 3 for details). Paraphrases of Type 3 and Type 4 require additional attention because they still remain multiword paraphrases after reduction, and a greedy approximation is needed to implement them in polynomial time.

3 Our Approach
A general approach for TM matching and retrieval is as follows:
1. Read the Translation Memories available
2. Read the file that needs to be translated
3. Preprocess the input file, apply a filter for different file formats and identify the segments
4. For each segment in the input file, search for the most similar segment in the TM and retrieve the most similar segment if it is above a predefined threshold
5. For each segment in the input file, display the input segment along with the most similar segment to the translator for post-editing

There are two options for incorporating paraphrasing in this pipeline: paraphrase the input or paraphrase the TM. For our approach we have chosen to paraphrase the TM, for several reasons. First, once a system is set up, the user can get the retrieved matches in real time; second, TMs can be stored on company servers and all processing can be done offline; third, the TM system need not be installed on the user's computer and can be provided as a service.
For our implementation we used the open source TM tool OmegaT, which uses word-based edit-distance with cost 1 for insertion, deletion and substitution. We have employed the OmegaT edit-distance as a baseline and adapted it to incorporate paraphrasing, so that at a later stage we can add this feature to OmegaT without compromising the confidence users have in OmegaT fuzzy matches.
Our approach can be briefly described as the following steps:
1. Read the Translation Memories available
2. Collect all the paraphrases from the paraphrase database and classify them according to the classes presented in Section 2.4
3. Store all the paraphrases for each segment in the TM in their reduced forms according to the process presented in Section 3.1
4. Read the file that needs to be translated
5.
For each segment in the input file, get the potential segments for paraphrasing in the TM according to the filtering steps of Section 3.2, search for the most similar segment based on the approach described in Section 3.3, and retrieve the most similar segment if it is above a predefined threshold.

3.1 Storing Paraphrases
The paraphrases are stored in the TM in their reduced forms: after capturing the paraphrases for a particular segment we have already considered the context, so there is no need to consider it again while calculating the edit-distance. We store only the longest uncommon substring instead of the whole paraphrase. This reduced paraphrase is stored with the source word where the uncommon substring starts. We refer to this source word as a "token". Table 1 shows the TM source segment (TMS), the paraphrases captured for this segment (TMP) and the paraphrases stored in their reduced form (TMR). In this case, the token "period" stores the two paraphrases "duration" and "time", and the token "laid" stores the two paraphrases "referred to" and "provided for by". For Type 3 and Type 4, the paraphrase source length (represented by ls in Table 1) is also stored along with the paraphrase (represented by tp in Table 1). In this case, length "2" for "laid down" is stored with the paraphrase "referred to" and length "3" for "laid down in" is stored with the paraphrase "provided for by".

TMS: the period laid down in article 4(3) of decision 468
TMP: the | period / duration / time | laid down in article / referred to in article / provided for by article | 4(3) of decision 468
TMR: the | period {duration, time} | laid {ls=2: "referred to"; ls=3: "provided for by"} | down in article 4(3) of decision 468
Table 1: Representing paraphrases in the TM

3.2 Filtering
Before processing begins, certain filtering steps are applied for each input segment in order to speed up the process. The purpose of this preprocessing is to filter out unnecessary candidates for participating in the paraphrasing process. Because we are generally interested in candidates above a certain threshold, it is natural to filter out candidates below that threshold. Our filtering steps for selecting potential candidates for paraphrasing are as follows:
- We first filter out the segments based on length, because if segments differ considerably in length, the edit-distance will also differ. In our case, the threshold for length was 49%, so TM segments which are shorter than 49% of the input are filtered out.
- Next, we filter out the segments based on baseline edit-distance similarity. TM segments which have a similarity below a certain threshold are removed. In our case, the threshold was 49%.
- Next, after filtering the candidates with the above two steps, we sort the remaining segments in decreasing order of similarity and pick the top 100 segments.
- Finally, segments within a certain range of similarity with the most similar segment are selected for paraphrasing. In our case, the range is 35%. This means that if the most similar segment has 95% similarity, segments with a similarity below 60% are discarded.²

3.3 Matching and Retrieval
For matching, similarity is calculated with the potential segments for paraphrasing extracted as per Section 3.2.
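As a small illustration of the idea behind the matching step, the sketch below computes a word-level edit distance in which substitution is free whenever the input token equals the TM token or one of its stored one-word (Type 1/Type 2) paraphrases. It is not the authors' OmegaT integration, and it deliberately ignores Type 3 and Type 4 paraphrases, which the paper handles with the greedy procedure of Algorithm 2.

```python
def edit_distance_pp(input_tokens, tm_tokens, one_word_pp):
    """Word-level edit distance with unit insertion/deletion/substitution
    costs; substitution costs 0 if the input token matches the TM token
    or one of its one-word paraphrases (one_word_pp: token -> set of words)."""
    n, m = len(input_tokens), len(tm_tokens)
    d = [[0] * (m + 1) for _ in range(n + 1)]
    for i in range(n + 1):
        d[i][0] = i
    for j in range(m + 1):
        d[0][j] = j
    for j in range(1, m + 1):
        allowed = {tm_tokens[j - 1]} | one_word_pp.get(tm_tokens[j - 1], set())
        for i in range(1, n + 1):
            sub = 0 if input_tokens[i - 1] in allowed else 1
            d[i][j] = min(d[i - 1][j] + 1,        # insertion
                          d[i][j - 1] + 1,        # deletion
                          d[i - 1][j - 1] + sub)  # substitution
    return d[n][m]

tm = "the period laid down in article".split()
inp = "the duration laid down in article".split()
print(edit_distance_pp(inp, tm, {"period": {"duration", "time"}}))  # prints 0
```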
Type 1 and Type 2 paraphrases af-\nter reduction (as per Section 3.1) are single-word\nparaphrases and Type 3 and Type 4 paraphrases\n2these thresholds were determined empiricallyhave multiple words. For Type 1 and Type 2 the\nedit-distance procedure can be optimised globally\nas this is a simple case of matching one of these\n“paraphrases” when calculating the cost of substi-\ntution. For the example given in Table 1, if a word\nfrom input segment matches any of the words “pe-\nriod”, “time” or “duration”, the cost of substitution\nwill be 0.\nFor paraphrases of Types 3 and 4 the algorithm\ntakes the decision locally at the point where all\nparaphrases finish. The basic edit-distance calcula-\ntion procedure is given in Algorithm 1. The algo-\nrithm elaborating our decision-making process is\ngiven in Algorithm 2. In Algorithm 2, Input is the\nsegment that we want to translate and TMS is the\nTM segment. Table 2 shows the edit-distance cal-\nculation of the first five tokens of the Input and TM\nsegment with paraphrasing. In Algorithm 2, lines\n11 to 22 executes when Type 3 and Type 4 para-\nphrases are not available (e.g. edit-distance calcu-\nlation of the second token “period”). Lines 24 to\n57account for the case when Type 3 and Type 4\nparaphrases are available. Line 28 calculates the\nedit-distance of the corresponding longest source\nphrase and stores it in DSmatrix as shown in Al-\ngorithm 2 (e.g. calculation of the edit-distance of\n“laid down in” in Table 2). Lines 33 to 46 account\nfor the edit-distance calculation of each paraphrase\n(e.g. calculation of “referred to” and “provided for\nby” in Table 2). The edit-distance of each para-\nphrase is stored in DT P matrix as shown in Al-\ngorithm 2. Lines 38 to 46 account for the selection\nof the minimum edit-distance paraphrase or source\nphrase. At line 38 , the algorithm compares the\nedit-distance of paraphrase DT P (e.g. “referred\nto”) with the edit-distance of the corresponding\nsource phrase (e.g. “laid down”) as well as with\nthe current minimum distance. Lines 48, 52 and\n56account for updating the value of jto reflect\nthe current position for further calculation of edit-\n6\nAlgorithm 1 Basic Edit-Distance Procedure\n1:procedure EDIT-DISTANCE (Input ,TMS )\n2: M length of TMS ▷Initialise Mwith length of TM segment\n3: N length of Input ▷Initialise Nwith length of Input segment\n4: D[i;0] ifor0\u0014i\u0014N ▷initialisation\n5: D[0; j] jfor0\u0014j\u0014M ▷initialisation\n6: forj 1:::M do\n7: TMToken TMS j ▷get Token of TM segment\n8: fori 1:::Ndo\n9: InputToken InputSegment i ▷get Token of Input segment\n10: ifInputToken =TMToken then ▷match InputToken withTMToken\n11: substitutionCost 0 ▷Substitution cost if matches\n12: else\n13: substitutionCost 1 ▷Substitution cost if not matches\n14: D[i; j] minimum (D[i\u00001; j] +insertionCost; D [i; j\u00001] + deletionCost; D [i\u00001; j\u00001] +\nsubstitutionCost ) ▷store minimum of insertion, substitution and deletion\n15: Return D[N; M ] ▷Return minimum edit-distance\n16:end procedure\nj\n0\n1\n2\n3 4\n5\ni\n#\nthe\nperiod\nduration\ntime\nlaid down in\nreferred to\nprovided for by\nin\n0\n#\n0\n1\n2\n3 4 5\n3 4\n3 4 5\n5\n1\nthe\n1\n0\n1\n2 3 4\n2 3\n2 3 4\n4\n2\nperiod\n2\n1\n0\n1 2 3\n1 2\n1 2 3\n3\n3\nreferred\n3\n2\n1\n1 2 3\n0 1\n1 2 3\n2\n4\nto\n4\n3\n2\n2 23\n10\n2 2 3\n1\n5\nin\n5\n4\n3\n3 3 3\n2 1\n3 3 3\n0\nTable 2: Edit-Distance Calculation using Algorithm 2\ndistance (e.g. 
j= 5 after selecting “referred to”)\nandlines 50, 54 and 57 update the matrix Das\nshown in Algorithm 2.\nAs we can see in Table 2, starting from the\nthird token of the TM, “laid”, three separate edit-\ndistances are calculated, two for the two para-\nphrases “referred to” and “provided for by” and\none for the corresponding longest source phrase\n“laid down in” and the paraphrase “referred to” is\nselected as it gives a minimum edit-distance of 0.\nThe last column of Table 2 ( j= 5) shows the edit-\ndistance calculation of the next token “in” after se-\nlecting “referred to”.\n3.4 Computational Considerations\nThe time complexity of the basic edit-distance pro-\ncedure is O(mn)where m and n are lengths of\nsource and target segments, respectively. After\nemploying paraphrasing of Type 1 and Type 2 the\ncomplexity of calculating the substitution cost in-\ncreases from O(1)toO(log(p))(as searching the p\nwords takes O(log(p))time) where pis the num-\nber of paraphrases of Type 1 and Type 2 per to-ken of TM source segment, which increases the\nedit-distance complexity to O(mnlog (p)). Em-\nploying paraphrasing of Type 3 and Type 4 fur-\nther increases the edit-distance complexity to\nO(lmn(log(p) +q)), where qis the number of\nType 3 and Type 4 paraphrases stored per token\nandlis the average length of paraphrase. As-\nsuming the source and target segment are of same\nlength nand each token of the segment stores\nparaphrases of length l, the complexity will be\nO((q+log(p))n2l). By limiting the number of\nparaphrases stored per token of the TM segment\nwe can replace (q+log(p))by a constant c. In\nthis case complexity will be c\u0002O(n2l). However,\nin practice it will take less time as not all tokens\nin the TM segment will have pandqparaphrases\nand the paraphrases are also stored in the reduced\nform.\n4 Experiments and Results\nFor our experiments we have used English-French\npairs of the 2013 release of the DGT-TM corpus\n(Steinberger et al., 2012). The corpus was se-\n7\nAlgorithm 2 Edit-Distance with paraphrasing procedure\n1:procedure EDIT-DISTANCE PP(Input,TMS)\n2: M length (TMS ) ▷number of tokens in TM segment\n3: N length (Input ) ▷number of tokens in Input segment\n4: D[i;0] ifor0\u0014i\u0014N ▷initialise two dimensional matrix D\n5: D[0; j] jfor0\u0014j\u0014(M+p′)where p′accounts for increase in TM segment length because of paraphrasing\n6: decisionPoint 0,j 1\n7: scost 1,dcost 1,icost 1 ▷initialisation of substitution, deletion and insertion cost\n8: while j\u0014Mdo\n9: t TMS j ▷getting current TM token to process, e.g. 
3rdtoken “laid”\n10: ifthas no paraphrases of type 3and type 4ordecisionPoint\u0015Nthen\n11: decisionPoint decisionPoint + 1,j j+ 1\n12: fori 1:::Ndo\n13: InputToken Input i\n14: ifInputToken =tthen\n15: scost 0\n16: else\n17: scost 1\n18: ifscost = 1then\n19: OneWordPP getOneWordPP (t) ▷get one word paraphrases associated with TM token t\n20: ifInputToken2OneWordPP then ▷applying type 1 and type 2 paraphrasing\n21: scost 0\n22: D[i; decisionPoint ] minimum (D[i; decisionPoint\u00001] + dcost; D [i\u00001; decisionPoint ] +\nicost; D [i\u00001; decisionPoint\u00001] +scost )\n23: else\n24: tp get paraphrases stored at t ▷e.g.tpfor Token “laid” in Table 1\n25: ls get corresponding source lengths stored at t ▷e.g.lsfor Token “laid” in Table 1\n26: lsmax length of longest source phrase\n27: DS[0; l\u00001] D[0; decisionPoint +l]for1\u0014l\u0014lsmax ▷ initialise two dimensional matrix DSto\ncalculate edit-distance of longest source phrase\n28: DS calculate edit-distance of longest source phrase with Input using D ▷ usesDfor first word, consider\nType 1 and Type 2 paraphrases\n29: P number of paraphrases of type 3 and type 4 ▷E.g. 2 for “laid”\n30: index 0,paraphraselen 0,isppwin false ,curDistance 1\n31: prevDistance D[decisionPoint; decisionPoint ]\n32: DTP [k;0; l\u00001] D[0; decisionPoint +l]for0\u0014k\u0014P\u00001for1\u0014l\u0014length (tp[k])▷initialise three\ndimensional matrix DTP to calculate edit-distances of paraphrases\n33: fork 0:::P\u00001do\n34: dps[k] decisionPoint +ls[k]\n35: ltp length (tp[k]) ▷get paraphrase length e.g. 2 for “referred to”\n36: dpt[k] decisionPoint +ltp\n37: DTP [k] calculate edit-distance of tp[k]withInput using D ▷usesDfor first word of tp[k]\n38: ifDTP [k; ltp\u00001; dpt[k]]< DS [ls[k]\u00001; dps [k]]andDTP [k; ltp\u00001; dpt[k]]< curDistance then\n39: ppwin true\n40: curDistance DTP [k; ltp\u00001; dpt[k]]\n41: index k\n42: paraphraselen ltp\n43: else if DS[ls[k]\u00001; dps [k]]< curDistance then\n44: ppwin false\n45: curDistance DS[ls[k]\u00001; dps [k]]\n46: index k\n47: ifppwin =true then ▷ true if paraphrase is better\n48: j j+ls[index ]\n49: decisionPoint decisionPoint +paraphraselen\n50: update Dusing DTP [index ]\n51: else if curDistance =prevDistance then ▷ true if source phrase is better and exactly matching\n52: j j+ls[index ]\n53: decisionPoint decisionPoint +ls[index ]\n54: update Dusing DS\n55: else\n56: j j+ 1,decisionPoint decisionPoint + 1\n57: update Dusing DS\nReturn D[N; decisionPoint ]\n58:end procedure\n8\nlected in such a way that it was not used to pro-\nduce PPDB. For this reason, its language may be\nslightly different from the one used to produce\nPPDB, which may be a reason for the relatively\nmodest results obtained in this paper. In our case\nEnglish was the source language and French was\nthe target language. From this corpus we have fil-\ntered out segments of fewer than five words and re-\nmaining pairs were used to create the TM and Test\ndataset. Tokenization of the English data was done\nusing Berkeley Tokenizer (Petrov et al., 2006). Ta-\nble 3 shows our corpus statistics. 
In our case, the average number of phrases per TM segment for which paraphrases are present in PPDB is 37 (AvgPhrases) and the average number of paraphrases per TM segment present in PPDB is 146 (AvgPP), as shown in Table 3.

                      | TM      | Test
Segments              | 319709  | 25000
Source words          | 8200796 | 640265
Target words          | 7807577 | 609165
Average source length | 25.65   | 25.61
Average target length | 24.42   | 24.36
AvgPhrases            | 37      |
AvgPP                 | 146     |
Table 3: Corpus Statistics

Our evaluation has two objectives: first to see how much impact paraphrasing has in terms of retrieval and second to see the translation quality of those segments which changed their ranking and were brought up to the top because of the paraphrasing. The results of our evaluations are given in Tables 4, 5, 6, and 7 where each table shows the similarity threshold for TM (TH), the total number of segments retrieved using the baseline approach (EDR), the total number of segments retrieved using our approach (PPR), the percentage improvement in retrieval obtained over the baseline (Imp), the number of segments which changed their ranking and come up to the top because of paraphrasing (RC), the BLEU score (Papineni et al., 2002) on the target side over translations retrieved by our approach for segments which changed their ranking and come up to the top because of paraphrasing (BPP) and the BLEU score on the target side over corresponding translations retrieved (irrespective of similarity score) by the baseline approach for these segments (BED).

TH  | 100   | 95    | 90    | 85    | 80
EDR | 6352  | 7062  | 8369  | 9829  | 10730
PPR | 6444  | 7172  | 8476  | 9938  | 10853
Imp | 1.45  | 1.56  | 1.28  | 1.11  | 1.15
RC  | 13    | 20    | 43    | 68    | 88
BPP | 74.31 | 73.16 | 65.01 | 63.29 | 60.84
BED | 65.89 | 70.29 | 60.70 | 63.29 | 61.31
Table 4: Results on surface form: Using all four types of paraphrases

TH  | 100   | 95    | 90    | 85    | 80
EDR | 6352  | 7062  | 8369  | 9829  | 10730
PPR | 6421  | 7142  | 8450  | 9915  | 10820
Imp | 1.09  | 1.13  | 0.97  | 0.87  | 0.84
RC  | 8     | 13    | 27    | 45    | 55
BPP | 73.18 | 73.98 | 63.08 | 64.37 | 63.37
BED | 60.86 | 71.43 | 61.96 | 65.10 | 63.28
Table 5: Results on surface form: Using paraphrases of Types 1 and 2 only

TH  | 100   | 95    | 90    | 85    | 80
EDR | 8179  | 8675  | 9603  | 10456 | 11308
PPR | 8294  | 8802  | 9735  | 10597 | 11462
Imp | 1.41  | 1.46  | 1.37  | 1.35  | 1.36
RC  | 21    | 30    | 43    | 73    | 108
BPP | 68.61 | 78.04 | 75.40 | 69.06 | 63.93
BED | 59.89 | 67.88 | 66.32 | 63.57 | 61.92
Table 6: Results with placeholders: Using all four types of paraphrases

TH  | 100   | 95    | 90    | 85    | 80
EDR | 8179  | 8675  | 9603  | 10456 | 11308
PPR | 8277  | 8777  | 9706  | 10568 | 11422
Imp | 1.2   | 1.18  | 1.07  | 1.07  | 1.01
RC  | 19    | 24    | 30    | 49    | 73
BPP | 58.28 | 67.95 | 71.03 | 68.03 | 61.02
BED | 52.00 | 54.81 | 60.09 | 62.13 | 57.42
Table 7: Results with placeholders: Using paraphrases of Types 1 and 2 only

As we can see in Table 4, on surface form for a threshold of 90% we got a 1.28% improvement over baseline in terms of retrieval, i.e. we have retrieved 107 more segments. We can observe an increase of more than four BLEU points for the 90% threshold and an increase of more than eight BLEU points for the 100% threshold for the segments which change their rank. There are 13 segments for threshold 100% which change their rank and 43 segments for threshold 90% which change their rank. Table 5 shows improvements we have obtained using paraphrases of Types 1 and 2 only.
To get more matches in TM, which is usually the case for real TM, we have removed punctuation and replaced numbers and dates with placeholders.
For this experiment we observed significant\nimprovement for a threshold of 80% and above as\nshown in Tables 6 & 7. We can observe that af-\nter removing punctuation and replacing numbers\nand dates with placeholders we obtained more than\nfive BLEU points improvement over the baseline\nfor a threshold of 85% and above for the segments\nwhich changes their rank.\nTable 7 shows the improvements we have ob-\ntained using paraphrases of Type 1 and 2 only with\nplaceholders. As we can see, improvements in re-\ntrieval is less compared to Table 6 which uses all\nparaphrases but the BLEU score is still improving\nsignificantly. We can observe an increase of more\nthan 10 BLEU points over the baseline for thresh-\nolds of 95% and 90% .\n5 Conclusion and Future work\nWe have presented an efficient approach to incor-\nporating paraphrasing in TM. The approach is sim-\nple and fast enough to implement in practice. We\nhave also shown that incorporating paraphrasing\nsignificantly improves TM matching and retrieval.\nApart from TM, the approach can also be used for\nother natural language processing tasks (e.g. to in-\ncorporate paraphrasing in sentence semantic simi-\nlarity measures exploiting edit-distance).\nIn future, we would like to consider the syntac-\ntic structure of the paraphrases when performing\nmatching and retrieval, and also to take into ac-\ncount the context in which the paraphrases are used\nin order to have better accuracy. Alternative ways\nto implement using Finite State Transducers (FST)\ncan also be considered and compared.\nAcknowledgement\nThe research leading to these results has received\nfunding from the People Programme (Marie Curie\nActions) of the European Union’s Seventh Frame-\nwork Programme FP7/2007-2013/ under REA\ngrant agreement no. 317471.References\nArthern, Peter J. 1978. Machine Translation and\nComputerized Terminology Systems, A Translator’s\nviewpoint. In Translating and the Computer: Pro-\nceedings of a Seminar , pages 77–108.\nDu, Jinhua, Jie Jiang, and Andy Way. 2010. Facilitat-\ning Translation Using Source Language Paraphrase\nLattices. In Proceeding of EMNLP , pages 420–429.\nGanitkevitch, Juri, Van Durme Benjamin, and Chris\nCallison-Burch. 2013. Ppdb: The paraphrase\ndatabase. In Proceedings of NAACL-HLT , pages\n758–764, Atlanta, Georgia. Association for Compu-\ntational Linguistics.\nKoehn, Philipp and Jean Senellart. 2010. Convergence\nof translation memory and statistical machine trans-\nlation. In Proceedings of AMTA Workshop on MT\nResearch and the Translation Industry , pages 21–31.\nLagoudaki, Elina. 2006. Translation Memories Survey\n2006: Users’ perceptions around TM use. In Pro-\nceedings of Translating and the Computer 28 , pages\n1–29, London. Aslib.\nMitkov, Ruslan. 2008. Improving Third Genera-\ntion Translation Memory systems through identifi-\ncation of rhetorical predicates. In Proceedings of\nLangTech2008 .\nOnishi, Takashi, Masao Utiyama, and Eiichiro Sumita.\n2010. Paraphrase Lattice for Statistical Machine\nTranslation. In Proceeding of the ACL , pages 1–5.\nPapineni, Kishore, Salim Roukos, Todd Ward, and Wei-\nJing Zhu. 2002. Bleu: a method for automatic eval-\nuation of machine translation. In Proceedings of the\nACL, pages 311–318.\nPekar, Viktor and Ruslan Mitkov. 2007. New\nGeneration Translation Memory: Content-Sensivite\nMatching. In Proceedings of the 40th Anniversary\nCongress of the Swiss Association of Translators,\nTerminologists and Interpreters .\nPetrov, Slav, Leon Barrett, Romain Thibaux, and Dan\nKlein. 
2006. Learning accurate, compact, and inter-\npretable tree annotation. In Proceedings of the COL-\nING/ACL , pages 433–440.\nSteinberger, Ralf, Andreas Eisele, Szymon Klocek,\nSpyridon Pilos, and Patrick Schl ¨uter. 2012. DGT-\nTM: A freely available Translation Memory in 22\nlanguages. LREC , pages 454–459.\nUtiyama, Masao, Graham Neubig, Takashi Onishi, and\nEiichiro Sumita. 2011. Searching Translation Mem-\nories for Paraphrases. In Machine Translation Sum-\nmit XIII , pages 325–331.\nZhechev, Ventsislav and Josef Van Genabith. 2010.\nSeeding statistical machine translation with trans-\nlation memory output through tree-based structural\nalignment. In Proceedings of ACL , pages 43–51.\n10",
"main_paper_content": null
}
|
{
"decision": "Unknown",
"reviews": []
}
| 0 | 0 |
[] |
[] |
{
"id": "WBZDSc5w-eI",
"year": null,
"venue": "EAMT 2016",
"pdf_link": "https://aclanthology.org/W16-3413.pdf",
"forum_link": "https://openreview.net/forum?id=WBZDSc5w-eI",
"arxiv_id": null,
"doi": null
}
|
{
"title": "Semantic Textual Similarity in Quality Estimation",
"authors": [
"Hanna Béchara",
"Carla Parra Escartín",
"Constantin Orasan",
"Lucia Specia"
],
"abstract": "Hanna Bechara, Carla Parra Escartin, Constantin Orasan, Lucia Specia. Proceedings of the 19th Annual Conference of the European Association for Machine Translation. 2016.",
"keywords": [],
"raw_extracted_content": "Baltic J. Modern Computing, V ol. 4 (2016), No. 2, pp. 256–268\nSemantic Textual Similarity in Quality Estimation\nHanna B ´ECHARA1, Carla PARRA ESCART ´IN2, Constantin OR ˘ASAN1,\nLucia SPECIA3\n1University of Wolverhampton, Wolverhampton, UK\n2Hermes Traducciones, Madrid, Spain\n3University of Sheffield, Sheffield, UK\[email protected], [email protected],\[email protected], [email protected]\nAbstract. Quality Estimation (QE) predicts the quality of machine translation output without\nthe need for a reference translation. This quality can be defined differently based on the task\nat hand. In an attempt to focus further on the adequacy and informativeness of translations, we\nintegrate features of semantic similarity into QuEst, a framework for QE feature extraction. By\nusing methods previously employed in Semantic Textual Similarity (STS) tasks, we use seman-\ntically similar sentences and their quality scores as features to estimate the quality of machine\ntranslated sentences. Preliminary experiments show that finding semantically similar sentences\nfor some datasets is difficult and time-consuming. Therefore, we opt to start from the assumption\nthat we already have access to semantically similar sentences. Our results show that this method\ncan improve the prediction of machine translation quality for semantically similar sentences.\nKeywords: Quality Estimation, Semantic Textual Similarity, Machine Translation\n1 Introduction\nMachine Translation Quality Estimation (MTQE) has been gaining increasing interest\nin Machine Translation (MT) output assessment, as it can be used to measure different\naspects of correctness. Furthermore, Quality Estimation (QE) tools forego the need for\na reference translation and instead predict the quality of the output based on the source.\nIn this paper, we address the use of semantic correctness in QE by integrating STS\nmeasures into the process, without relying on a reference translation. We propose a set\nof features that compares MT output to a semantically similar sentence, that has already\nbeen assessed, using monolingual STS tools to measure the semantic proximity of the\nsentence in relation to the second sentence.\nThe rest of this paper is organised as follows: Section 2 features the state of the art\nin QE and the context for our research. Section 3 introduces our approach to integrating\nsemantic information into QE. Section 4 details our experimental set-up, including the\nSemantic Textual Similarity in Quality Estimation 257\ntools we use for our experiments. Section 5 explains our experiments, details our new\nSTS features and summarises the results we observe when adding these features to\nQuEst. Finally, Section 6 presents our concluding remarks and plans for future work.\n2 Previous Work\nEarly work in QE built on the concept of confidence estimation used in speech recog-\nnition (Gandrabur and Foster, 2003, Blatz et al., 2004). These systems usually relied on\nsystem-dependent features, and focused on measuring how confident a given system is\nrather than how correct the translation is.\nLater experiments in QE used only system-independent features based on the source\nsentence and target translation (Specia et al., 2009b). They trained a Support Vector\nMachine (SVM) regression model based on 74 shallow features, and reported signif-\nicant gains in accuracy over MT evaluation metrics. At first, these approaches to QE\nfocused mainly on shallow features based on the source and target sentences. 
Such\nfeatures include n-gram counts, the average length of tokens, punctuation statistics\nand sentence length among other features. Later systems incorporate linguistic fea-\ntures such as part of speech tags, syntactic information and word alignment information\n(Specia et al., 2010).\nIn the context of QE, the term “quality” itself is flexible and can change to reflect\nspecific applications, from quality assurance, gisting and estimating post-editing (PE)\neffort to ranking translations. Specia et al. (2009a) define quality in terms of PE effi-\nciency, using QE to filter out sentences that would require too much time to post-edit.\nSimilarly, He et al. (2010) use QE techniques to predict human PE effort and recom-\nmend MT outputs to Translation Memory (TM) users based on estimated PE effort.\nIn contrast, Specia et al., 2010 use QE to rank translations from different systems and\nhighlight inadequate segments for post-editing.\nSince 2012, QE has been the focus of a shared task at the annual Workshop for Sta-\ntistical Machine Translation (WMT) (Callison-Burch et al., 2012). This task has pro-\nvided a common ground for the comparison and evaluation of different QE systems and\ndata at the word, sentence and document level (Bojar et al., 2015).\nThere have been a few attempts to integrate semantic similarity into the MT evalua-\ntion (Lo and Wu, 2011, Castillo and Estrella, 2012). The results reported are generally\npositive, showing that semantic information is not only useful, but often necessary, in\norder to assess the quality of machine translation output.\nSpecia et al. (2011) bring semantic information into the realm of QE in order to\naddress the problem of meaning preservation. The authors focus on what they term\n“adequacy indicators” and human annotations for adequacy. The results they report\nshow improvement with respect to a majority class baseline. Rubino et al. (2013) also\naddress MT adequacy using topic models for QE. By including topic model features\nthat focus on content words in sentences, their system outperforms state-of-the-art ap-\nproaches specifically with datasets annotated for adequacy. Bic ¸ici (2013) introduce the\nuse of referential translation machines (RTM) for QE. RTM is a computational model\nfor judging monolingual and bilingual similarity that achieves state-of-the-art results.\nThe authors report top performance in both sentence level and word-level tasks of WMT\n258 B ´echara et al.\n2013. Camargo de Souza et al. (2014) propose a set of features that explore word align-\nment information in order to address semantic relations between sentences. Their results\nshow that POS indicator features improve over the baseline at the shared task for QE\nat the workshop for machine translation. Kaljahi et al. (2014) employ syntactic and se-\nmantic information in quality estimation and are able to improve over the baseline when\ncombining these features with the surface features of the baseline. Our work builds on\nprevious work, focusing on the necessity of semantic information for MT adequacy. 
As far as we are aware, our work is the first to explore quality scores from semantically similar sentences as a surrogate for the quality of the current sentence.

3 Our Approach
In this paper, we propose integrating semantic similarity into the quality estimation task. As STS relies on monolingual data, we make use of a second sentence that bears some semantic resemblance to the sentence we wish to evaluate.
Our approach is illustrated in Figure 1, where sentences A and B are two semantically similar sentences with a similarity score R. Our task is to assess the quality of sentence A with the help of sentence B, which has already undergone machine translation evaluation, either through post-editing or by human evaluation (e.g. assessed on a scale from 1–5). As sentences A and B are semantically similar, our hypothesis is that their translations are also semantically similar, and thus we can use the reference of sentence B to estimate the quality of sentence A.

[Figure 1: Predicting the Quality of MT Output using a Semantically Similar Sentence B. The diagram shows source sentences A and B with their semantic similarity R, their SMT outputs, the reference of B (the reference of A is missing), the BLEU score of B, and the BLEU score of A computed against the reference of B.]

For each sentence A for which we wish to estimate MT quality, we retrieve a semantically similar sentence B which has been machine translated and has a reference translation or a quality assessment value. We then extract the following three scores (that we use as STS features):
Semantic Textual Similarity (STS) score: R represents the STS between the source sentence pair (sentence A and sentence B). This is a continuous score ranging from 0
QuEst gives access to a large variety of features,\neach relevant to different tasks and definitions of quality.\nAs QuEst is a state-of-the-art tool for MTQE and is used as a baseline in recent QE\ntasks, such as previous workshops for machine translation (Callison-Burch et al., 2012,\nBojar et al., 2013, Bojar et al., 2014 and Bojar et al., 2015), we use its 17 features as a\nbaseline to allow for comparison of our work to a state-of-the-art system.\nThe baseline features are system independent and include shallow surface features\nsuch as the number of punctuation marks, the average length of words and the number\nof words. Furthermore, these features include n-gram frequencies and language model\nprobabilities. A full list of the baseline features can be found in Table 1.\n4.2 MiniExpert’s STS Tool\nIn our experiments, we use the MiniExpert’s submission to Semeval2015’s Task 2a\n(B´echara et al., 2015). The source code is easy to use and available on GitHub.5The\nsystem uses a SVM regression model to predict the STS scores between two English\n4https://github.com/lspecia/quest\n5https://github.com/rohitguptacs/wlvsimilarity\n260 B ´echara et al.\nTable 1. Full List of QuEst’s Baseline Features\nIDDescription\n1number of tokens in the source sentence\n2number of tokens in the target sentence\n3average source token length\n4LM probability of source sentence\n5LM probability of the target sentence\n6average number of occurrences of the target word within the target sentence\n7average number of translations per source word in the sentence (as given by\nIBM 1 table thresholded so that prob(t js)>0.2)\n8average number of translations per source word in the sentence weighted by the\ninverse frequency of each word in the source corpus\n9percentage of unigrams in quartile 1 of frequency (lower frequency words)\nin a corpus of the source language\n10percentage of unigrams in quartile 4 of frequency (higher frequency words)\nin a corpus of the source language\n11percentage of bigrams in quartile 1 of frequency of source words in a corpus of the source language\n12percentage of bigrams in quartile 4 of frequency of source words in a corpus of the source language\n13percentage of trigrams in quartile 1 of frequency of source words in a corpus of the source language\n14percentage of trigrams in quartile 4 of frequency of source words in a corpus of the source language\n15percentage of unigrams in the source sentence seen in a corpus\n16number of punctuation marks in source sentence\n17number of punctuation marks in target sentence\nsentences. The authors train their system on a variety of linguistically motivated fea-\ntures inspired by deep semantics with distributional Similarity Measures, Conceptual\nSimilarity Measures, Semantic Similarity Measures and Corpus Pattern Analysis.\nThe system performs well and obtained a mean 0.7216 Pearson correlation in the\nshared task, ranking 33 out of 74 systems.\nWe train the STS tool on the SICK dataset Marelli et al., 2014, a dataset specifically\ndesigned for semantic similarity and used in previous SemEval tasks, augmented with\ntraining data from previous SemEval tasks (SemEval2014 and SemEval2015).\n4.3 Statistical Machine Translation System\nAll of our experiments require MT output to run MTQE tasks. To that end, we use\nthe state-of-the-art phrase based Statistical Machine Translation (SMT) system Moses\n(Koehn et al., 2007). 
We build 5-gram language models with Kneser-Ney smoothing\ntrained with SRILM, (Stolcke, 2002), and run the GIZA++ implementation of IBM\nword alignment model 4 (Och and Ney, 2003), with refinement and phrase-extraction\nheuristics as described in Koehn et al. (2003). We use Minimum Error Rate Training\n(MERT) (Och, 2003) for tuning.\nIn order to keep our experiments consistent, we use the same SMT system for all\ndatasets. We focus on English into French translations and we use the Europarl corpus\n(Koehn, 2005) for training. We train on 500,000 unique English–French sentences and\nthen tune our system (using MERT) on 1,000 different unique sentences also from the\nEuroparl corpus. We also train a French–English system to retrieve the backtranslations\nused in some of our experiments.\nSemantic Textual Similarity in Quality Estimation 261\n5 Experiments\nAs mentioned earlier, all our datasets focus on MTQE for English !French MT output.\nIn all our experiments we have a set of machine translated sentences Afor which we\nneed a QE and a set of sentences B, semantically similar to the set of sentences Aand\nfor which we have some type of evaluation score available.\nIn early experiments, we attempted to use freely available datasets used in previous\nworkshops on machine translation (WMT2012 and WMT2013) for the translation task\nand within the news domain (Bojar et al., 2013). The WMT datasets have two main\nadvantages: first, they allow us to compare our system with previous systems for QE\nand render our experiments replicable. Second, they have manual evaluations that are\navailable with the machine translations. Each sentence in the WMT dataset comes with\na score between 1 and 5, provided by human annotators. However, this method proved\nto be too time-consuming, as it often required scoring thousands of sentences before\nfinding two that were similar.\nThe first obstacle we faced in testing our approach with these datasets was the\ncollection of similar sentences against which to compare and evaluate. We automati-\ncally searched large parallel corpora for sentences that yielded high similarity scores.\nThese corpora included the Europarl corpus (Koehn, 2005), the Acquis Communautaire\n(Steinberger et al., 2006) and previous WMT data (from 2012 and 2013).\nFurthermore, the STS system we use (see Section 4.2) returned many false-positives.\nSome sentences which appeared similar to the STS system were actually too different\nto be usable. This led to noisy data and unusable results. The scarcity of semantically\nsimilar sentences and the computational cost of finding these sentences, lead us to look\ninto alternate datasets, preferably those with semantic similarity built into the corpus:\nthe DGT-TM and the SICK dataset.\nAll our experiments have the same set-up. In all cases, we used 500 randomly se-\nlected sentences for testing, and the remaining sentences in the respective data-set for\ntraining QuEst. We automatically search large parallel corpora for sentences that yield\nhigh similarity scores using the STS system described in section 4.2.\nWe attempt to predict the quality scores of the individual sentences, using the STS\nfeatures described above, added to QuEst’s 17 baseline features. We compare our results\nto both the QuEst baseline (cf. Section 4.1). and the majority class baseline6. 
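The search for similar sentences described above can be pictured as a brute-force scan over a candidate pool that keeps only high-scoring matches. The sketch below is illustrative: the threshold and cut-off mirror the DGT-TM setting described in the next section, the function names are ours, and the computational cost discussed earlier is ignored.

```python
def retrieve_similar(sentence_a, candidate_pool, sts_fn, top_n=5, min_sts=3.0):
    """Return up to top_n candidate sentences whose STS with sentence_a
    exceeds min_sts, sorted by decreasing similarity.

    sts_fn is any callable returning a 0-5 similarity score for two English
    sentences, such as the SVM-based STS model sketched above.
    """
    scored = [(sts_fn(sentence_a, candidate), candidate) for candidate in candidate_pool]
    scored = [(score, candidate) for score, candidate in scored if score > min_sts]
    scored.sort(key=lambda pair: pair[0], reverse=True)
    return scored[:top_n]
```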
We also test our STS-related features separately, without the baseline features, and compare them to the combined system (STS + baseline).

We use the Mean Absolute Error (MAE) to evaluate the prediction rate of our systems. MAE measures the average magnitude of the errors on the test set, without considering their direction. Therefore, it is ideal for measuring the accuracy of continuous variables. MAE is calculated as per Equation 1.

\( \mathrm{MAE} = \frac{1}{n} \sum_{i=1}^{n} \lvert x_i - y_i \rvert \)   (1)

where n is the number of instances in the test set, x_i is the score predicted by the system, and y_i is the observed score. In our experiments, we use S-BLEU scores as the observed score.

6 The Mean Absolute Error calculated using the mean rating in the training set as a projected score for every sentence in the test set.

5.1 DGT-TM

We use the 2014 release of the Directorate General for Translation – Translation Memory (DGT-TM) to test our system. The DGT-TM is a corpus of aligned sentences in 22 different languages created from the European Union's legislative documents (Steinberger et al., 2006). We randomly extract 500 unique sentences (B), then search the rest of the TM for the 5 most semantically similar sentences (A) for each of these 500 sentences (STS score > 3). This results in 2,500 sentences A (500 x 5) and their semantically similar sentence pairs B. We make sure to avoid any overlap in sentence A, so that while a semantically similar sentence B might recur, sentence A remains unique. We assign an STS score to the resulting dataset using the system described in Section 4.2. We then translate these sentence pairs using the translation model described in Section 4.3 and use S-BLEU to assign evaluation scores to the MT outputs of Sentences A and B.

Of these 2,500 sentence pairs and their MT outputs, we use 2,000 sentence pairs to train an SVM regression model on QuEst's baseline features, using sentence A and its MT output as the source and target sentence. We further use sentence B's S-BLEU score and its STS score with sentence A. We use the remaining 500 sentences to test our system. Table 2 shows a sample sentence (Sentence B) from the DGT-TM along with its semantically similar retrieved match (Sentence A) and the machine translation output for each sentence. The MiniExperts STS system gave the original English sentence pair an STS score of 4.46, indicating that only minor details differ.

Table 2. DGT-TM Sample Sentence

Sentence A
Source: In order to ensure that the measures provided for in this Regulation are effective, it should enter into force on the day of its publication
MT: afin de garantir que les mesures prévues dans ce règlement sont efficaces, il devrait entrer en vigueur sur le jour de sa publication,

Sentence B
Source: In order to ensure that the measures provided for in this Regulation are effective, this Regulation should enter into force immediately,
MT: afin de garantir que les mesures prévues dans ce règlement sont efficaces, ce règlement doit entrer en vigueur immédiatement,

STS: 4.46

Results: Our results are summarised in Table 3, which shows that the MAE for the combined features (QuEst + STS features) is considerably lower than that of QuEst on its own. This means that the additional use of STS features can improve QuEst's predictive power. Even the 3 STS features on their own outperformed QuEst's baseline features.
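Equation 1 and the majority-class baseline from the footnote above translate directly into a few lines of Python. This is a minimal sketch with illustrative names, not the evaluation code used in the experiments.

```python
def mean_absolute_error(predicted, observed):
    """MAE as in Equation 1: average absolute difference between the predicted
    scores x_i and the observed (S-BLEU) scores y_i."""
    assert len(predicted) == len(observed)
    return sum(abs(x - y) for x, y in zip(predicted, observed)) / len(predicted)

def majority_class_baseline_mae(train_scores, test_scores):
    """Majority-class baseline: project the mean training score onto every
    sentence of the test set and measure the resulting MAE."""
    mean_train = sum(train_scores) / len(train_scores)
    return mean_absolute_error([mean_train] * len(test_scores), test_scores)
```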
These results show that our method can prove useful in a context where\nsemantically similar sentences are accessible.\nTable 3. Predicting the S-B LEU scores for DGT-TM - Mean Absolute Error\nMAE\nQuEst Baseline (17 Features) 0.120\nSTS (3 Features) 0.108\nCombined (20 Features) 0.090\n5.2 SICK Dataset\nIn order to further test the suitability of our approach for semantically similar sen-\ntences, we use the SICK dataset for further experiments. SICK (Sentences Involving\nCompositional Knowledge) is a dataset specifically designed for compositional distri-\nbutional semantics. It includes a large number of English sentence pairs that are rich in\nlexical, syntactic and semantic phenomena. The SICK dataset is generated from exist-\ning datasets based on images and video descriptions, and each sentence pair is anno-\ntated for relatedness (similarity) and entailment by means of crowd-sourcing techniques\n(Marelli et al., 2014). This means that we did not need to use the STS tool to annotate\nthe sentences. The similarity score is a score between 1 and 5, further described in Ta-\nble 4. As these scores are obtained by averaging several separate annotations by distinct\nevaluators, they are continuous, rather than discrete. As SICK already provides us with\nsentence pairs of variable similarity, it cuts out the need to search extensively for similar\nsentences. Furthermore, the crowd-sourced similarity scores act as a gold standard that\neliminates the uncertainty introduced by the automatic STS tool. This dataset lacks a\nreliable reference translation to compare against, however.\nTable 4. STS scale used by SemEval\n0The two sentences are on different topics\n1The two sentences are not equivalent, but are on the same topic\n2The two sentences are not equivalent, but share some details\n3The two sentences are roughly equivalent, but some important information differs/is missing\n4The two sentences are mostly equivalent, but some unimportant detail differs/missing\n5The two sentences are completely equivalent, as they mean the same thing\nWe extract 5,000 sentence pairs to use in our experiments and translate them into\nFrench using the MT system described in Section 4.3. The resulting dataset consists of\n5,000 semantically similar sentence pairs and their French machine translations. Of this\n264 B ´echara et al.\nset, 4,500 are used to train an SVM regression model in the same manner as described\nin Section 5.1. The remaining 500 sentences are used for testing.\nAs the SICK dataset is monolingual and therefore lacking in a reference translation,\nwe opted to use a back-translation (into English) as a reference instead of a French\ntranslation for these results. A back-translation is a translation of a translated text back\ninto the original language. Back-translations are usually used to compare translations\nwith the original text for quality and accuracy, and can help to evaluate equivalence\nof meaning between the source and target texts. In machine translation contexts, they\ncan be used to create a pseudo-source that can be compared against the original source.\nHe et al. (2010) used this back-translation as a feature in QE with some success. They\ncompared the back-translation to the original source using fuzzy match scoring and\nused the result to estimate the quality of the translation. 
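The round-trip idea can be made concrete with a small sketch: the French MT output is translated back into English and scored against the original source with smoothed sentence-level BLEU. The back-translation callable is a placeholder for the reverse (French to English) Moses system, and the tokenization is illustrative.

```python
from nltk.translate.bleu_score import sentence_bleu, SmoothingFunction

def backtranslation_quality(source_en, mt_output_fr, translate_fr_en):
    """Score an English-to-French MT output without a French reference by
    comparing its back-translation with the original English source."""
    back_translation = translate_fr_en(mt_output_fr)   # placeholder MT callable
    smooth = SmoothingFunction().method1
    return sentence_bleu([source_en.split()],
                         back_translation.split(),
                         smoothing_function=smooth)
```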
The intuition here is that the\ncloser the back translation is to the original source, the better the translation is in the\nfirst place.\nFollowing this idea, we use the S-B LEU scores of the back-translations as stand-\nins for the MT quality scores. We use the MT system described in Section 3.3 for the\nback-translations.\nTable 5 shows a sample sentence from the resulting dataset, including the original\nEnglish sentence pairs and each sentence’s MT output. The crowd-sourced STS score\nfor this sentence pair is 4, indicating that only minor details differ.\nTable 5. SICK Sample Sentence\nSentence A\nSource Several children are lying down and are raising their knees\nMT Plusieurs enfants sont couch ´es et ´el`event leurs genoux\nSentence B\nSource Several children are sitting down and have their knees raised\nMT Plusieurs enfants sont assis et ont soulev ´e leurs genoux\nSTS 4\nResults: Results on the SICK datasets are summarised in Table 6. The lowest error rate\n(MAE) is observed for the system that combined our STS-based features with QuEst’s\nbaseline features (Combined (20 Features)) just as in the DGT-TM experiments. We ob-\nserve that even the STS features on their own outperformed QuEst in this environment.\nTable 6. Predicting the S-B LEU scores for SICK - Mean Absolute Error\nMAE\nQuEst Baseline (17 Features) 0.200\nSTS (3 Features) 0.189\nCombined (20 Features) 0.177\nSemantic Textual Similarity in Quality Estimation 265\nThe cherry-picked examples in Table 7 are from the SICK dataset, and show that\na high STS score between the source sentences can contribute to a high prediction\naccuracy. In both examples, the predicted score for Sentence Ais close to the actual\nobserved score.\nTable 7. SICK Sample Prediction\nSentence A Sentence B\nSource Dirt bikers are riding on a trail Two people are riding motorbikes\nMT Dirt Bikers roulent sur une piste Deux personnes font du v ´elo motos\nS-BLEU: 0.55 (Predicted) 0.84\n0.6 (Actual)\nSTS 3.6\nSource A man is leaning against a pole A man is leaning against a pole\nand is surrounded and is surrounded by people\nMT Un homme est appuy ´ee contre un Un homme est appuy ´ee contre un\npoteau et est entour ´e par des gens poteau et est entour ´e\nS-BLEU: 0.91 (Predicted) 0.91\n1 (Actual)\nSTS 4.2\nFurthermore, when we filtered the test set for the SICK experiments for sentences\nwith high similarity (4+), we observed an even higher drop in MAE, as demonstrated\nin Table 8. This suggests that our experiments perform especially well if we select for\nsentences with high similarity.\nTable 8. Predicting the S-B LEU scores for SICK sentences with high similarity - Mean Absolute\nError\nMAE\nQuEst Baseline (17 Features) 0.20\nCombined (20 Features) 0.15\n6 Conclusion and Future Work\nIn this paper we presented 3 semantically motivated features that augment QuEst’s base-\nline features. We tested our approach on three different datasets and the results are en-\ncouraging, showing that these features can improve over the baseline when a sufficiently\nsimilar sentence against which to compare is available.\nSeveral factors can be enhanced to further improve our system. To start with, the use\nof S-B LEU to evaluate our system is not ideal. Criticisms of B LEU and n-gram matching\nmetrics in general are addressed by Callison-Burch et al. (2008), who show that B LEU\n266 B ´echara et al.\nfails to correlate to (and even contradicts) human judgement. More importantly, B LEU\nitself does not measure meaning preservation. 
Therefore, to evaluate our system more\nthoroughly, we would need to compare it to human judgements. In order to address the\ncriticisms of both B LEU and the back-translations, we are currently collecting manual\nevaluations of the French SICK MT output sentences. Before a full manual evaluation\nis performed, we cannot conclusively state that our results on the SICK dataset are valid\nin a real world setting.\nAnother case worth addressing further is that where the retrieved matches are so\nsimilar to the original, that they could be acting as a pseudo-reference. While the exam-\nples show that this is not always the case, this phenomenon bears further investigation\nin future research.\nThe MiniExpert’s tool which we use to determine the STS scores for the DGT–TM\nis trained on very different data (the SICK corpus and SemEval data), which may affect\nits accuracy. This may explain why it did not work as well as expected given its reported\nperformance. However, the lack of readily available semantically annotated data to train\non limits us in this regard. Furthermore, our features rely on the existence of semanti-\ncally similar sentences against which we can compare our translations. These sentences\nare not always readily available and, as explained earlier in Section 5, searching large\ncorpora for similar sentences can be computationally costly and time-consuming.\nIn spite of these short-comings, this approach can be quite useful in settings where\nwe wish to predict the quality of sentences within a very specific domain. One potential\nsuch scenario, would be post-editing tasks in which professional translators are asked\nto post-edit MT output of specialized texts. As translators use Translation Memories\n(TMs) to ensure the quality of their work, such TMs could be used to obtain semanti-\ncally similar sentences to the ones in the MTPE task and compute with our approach a\nQE score. The results we obtained in the case of SICK are encouraging in this respect\nand in future work we plan to investigate this further.\nAcknowledgements\nThis work is supported by the People Programme (Marie Curie Actions) of the Eu-\nropean Union’s Framework Programme (FP7/2007-2013) under REA grant agreement\nno317471.\nReferences\nB´echara, H., Costa, H., Taslimipoor, S., Gupta, R., Orasan, C., Corpas Pastor, G., and Mitkov, R.\n(2015). MiniExperts: An SVM approach for Measuring Semantic Textual Similarity. In 9th\nInt. Workshop on Semantic Evaluation , SemEval’15, pages 96–101, Denver, Colorado. ACL.\nBic ¸ici, Ergun (2013). Referential translation machines for quality estimation. In Proceedings of\nthe Eighth Workshop on Statistical Machine Translation , pages 343–351, Sofia, Bulgaria.\nBlatz, J., Fitzgerald, E., Foster, G., Gandrabur, S., Goutte, C., Kulesza, C., Sanchis, A., and Ueff-\ning, N. (2004). Confidence estimation for machine translation. In Proceedings of the 20th\nInternational Conference on Computational Linguistics (CoLing-2004) , pages 315–321.\nSemantic Textual Similarity in Quality Estimation 267\nBojar, O., Buck, C., Callison-Burch, C., Federmann, C., Haddow, B., Koehn, P., Monz, C., Post,\nM., Soricut, R., and Specia, L. (2013). Findings of the 2013 Workshop on Statistical Machine\nTranslation. In Proceedings of the Eighth Workshop on Statistical Machine Translation ,\npages 1–44, Sofia, Bulgaria. 
Association for Computational Linguistics.\nBojar, Ondrej and Buck, Christian and Federmann, Christian and Haddow, Barry and Koehn,\nPhilipp and Leveling, Johannes and Monz, Christof and Pecina, Pavel and Post, Matt and\nSaint-Amand, Herve and Soricut, Radu and Specia, Lucia and Tamchyna, Ale ˇs (2014). Find-\nings of the 2014 Workshop on Statistical Machine Translation. In Proceedings of the Ninth\nWorkshop on Statistical Machine Translation , pages 12–58, Sofia, Bulgaria. Association for\nComputational Linguistics.\nBojar, O. and Chatterjee, R. and Federmann, C. and Haddow, B. and Huck, M. and Hokamp, C.\nand Koehn, P. and Logacheva, V . and Monz, C. and Negri, M. and others 2015 Findings of\nthe 2015 Workshop on Statistical Machine Translation\nCallison-Burch, C., Fordyce, C., Koehn, P., Monz, C., and Schroeder, J. (2008). Further Meta-\nEvaluation of Machine Translation. In Proceedings of the Third Workshop on Statistical\nMachine Translation (WMT) , pages 70–106.\nCallison-Burch, C., Koehn, P., Monz, C., Post, M., Soricut, R., and Specia, L., editors (2012).\nProceedings of the Seventh Workshop on Statistical Machine Translation . Association for\nComputational Linguistics, Montr ´eal, Canada.\nCastillo, J. and Estrella, P. (2012). Semantic textual similarity for mt evaluation. In Proceed-\nings of the Seventh Workshop on Statistical Machine Translation , WMT ’12, pages 52–58,\nStroudsburg, PA, USA. Association for Computational Linguistics.\nGandrabur, S. and Foster, G. (2003). Confidence estimation for translation prediction. In Pro-\nceedings of the Seventh Conference on Natural Language Learning at HLT-NAACL 2003 -\nVolume 4 , CONLL ’03, pages 95–102, Stroudsburg, PA, USA. Association for Computational\nLinguistics.\nHe, Y ., Ma, Y ., van Genabith, J., and Way, A. (2010). Bridging SMT and TM with Transla-\ntion Recommendation. In Proceedings of the 28th Annual Meeting of the Association for\nComputational Linguistics , pages 622–630.\nKaljahi, R. and Foster, J. and Roturier, J. (2014). Syntax and Semantics in Quality Estimation\nof Machine Translation, In Syntax, Semantics and Structure in Statistical Translation , pages\n67.\nKoehn, P. (2005). Europarl: A parallel corpus for statistical machine translation. In MT summit ,\nvolume 5, pages 79–86.\nKoehn, P., Hoang, H., Birch, A., Callison-Burch, C., Federico, M., Bertoldi, N., Cowan, B., Shen,\nW., Moran, C., Zens, R., Dyer, C., Bojar, O., Constantin, A., and Herbst, E. (2007). Moses:\nOpen Source Toolkit for Statistical Machine Translation. In Proceedings of the Association\nfor Computational Linguistics (ACL) , pages 177–180.\nKoehn, P., Och, F., and Marcu, D. (2003). Statistical Phrase-Based Translation. In Proceedings\nof the Human Language Technology Conference and the North American Chapter of the\nAssociation for Computational Linguistics (HLT/NAACL) , pages 48–54.\nLin, C.-Y . and Och, F. J. (2004). Automatic evaluation of machine translation quality using\nlongest common subsequence and skip-bigram statistics. In Proceedings of the 42Nd Annual\nMeeting on Association for Computational Linguistics , ACL ’04, Stroudsburg, PA, USA.\nAssociation for Computational Linguistics.\nLo, C.-k. and Wu, D. (2011). Meant: An inexpensive, high-accuracy, semi-automatic metric for\nevaluating translation utility via semantic frames. In Proceedings of the 49th Annual Meeting\nof the Association for Computational Linguistics: Human Language Technologies-Volume 1 ,\npages 220–229. 
Association for Computational Linguistics.\n268 B ´echara et al.\nMarelli, M., Menini, S., Baroni, M., Bentivogli, L., Bernardi, R., and Zamparelli, R. (2014b). A\nsick cure for the evaluation of compositional distributional semantic models. In LREC’14 ,\nReykjavik, Iceland.\nOch, F. (2003). Minimum Error Rate Training in Statistical Machine Translation. In Proceedings\nof the Association for Computational Linguistics (ACL) , pages 160–167.\nOch, F. and Ney, H. (2003). A Systematic Comparison of Various Statistical Alignment Models.\nInProceedings of the Association for Computer Linguistics (ACL) , pages 29(1):19–51.\nRubino, Raphael and Souza, Jos ´e Guilherme Camargo and Foster, Jennifer and Specia, Lucia.\nTopic Models for Translation Quality Estimation for Gisting Purposes. In Machine Transla-\ntion Summit XIV, pages 295–302.\n, Camargo de Souza, Jos ´e Guilherme and Gonz ´alez-Rubio, Jes ´us and Buck, Christian and Turchi,\nMarco and Negri, Matteo. FBK-UPV-UEdin participation in the WMT14 Quality Estimation\nshared-task. In Proceedings of the Ninth Workshop on Statistical Machine Translation, pages\n322–328.\nSpecia, L., Raj, D., and Turchi, M. (2010). Machine Translation Evaluation versus Quality Esti-\nmation. In Machine Translation Volume 24, Issue 1 , pages 39–50.\nLucia Specia and Najeh Hajlaoui and Catalina Hallett and Wilker Aziz. Predicting Machine\nTranslation Adequacy. In Machine Translation Summit XIII, pages 513–520.\nSpecia, L., Shah, K., De Souza, J. G. C., and Cohn, T. (2013). QuEst - A translation quality esti-\nmation framework. In Proceedings of the Association for Computational Linguistics (ACL),\nDemonstrations .\nSpecia, L., Turchi, M., Cancedda, N., Dymetman, M., and Cristianini, N. (2009a). Estimating\nthe Sentence-Level Quality of Machine Translation Systems. In 13th Annual Meeting of the\nEuropean Association for Machine Translation (EAMT-2009) , pages 28–35.\nSpecia, L., Turchi, M., Wang, Z., Shawe-Taylor, J., and Saunders, C. (2009b). Improving the\nconfidence of machine translation quality estimates.\nSteinberger, R., Pouliquen, B., Widiger, A., Ignat, C., Erjavec, T., and Tufis, D. (2006). The JRC-\nAcquis: A multilingual aligned parallel corpus with 20+ languages. In Proceedings of the\n5th International Conference on Language Resources and Evaluation (LREC–2006 , pages\n2142–2147.\nStolcke, A. (2002). SRILM - an Extensible Language Modeling Toolkit. In Proceedings of the\nInternational Conference on Spoken Language Processing (ICSLP) , pages 901–904.\nReceived May3, 2016 , accepted May 13, 2016",
"main_paper_content": null
}
|
{
"decision": "Unknown",
"reviews": []
}
| 0 | 0 |
[] |
[] |
{
"id": "CJ1IcyLf9dg",
"year": null,
"venue": "EAMT 2020",
"pdf_link": "https://aclanthology.org/2020.eamt-1.56.pdf",
"forum_link": "https://openreview.net/forum?id=CJ1IcyLf9dg",
"arxiv_id": null,
"doi": null
}
|
{
"title": "INMIGRA3: building a case for NGOs and NMT",
"authors": [
"Celia Rico",
"María Del Mar Sánchez Ramos",
"Antoni Oliver"
],
"abstract": "INMIGRA3 is a three-year project that builds on the work of two previous initi-atives: INMIGRA2-CM and CRISIS-MT . Together, they address the specific needs of NGOs in multilingual settings with a particular interest in migratory contexts. Work on INMIGRA3 concentrates in the analysis of how best can be NMT put to use for the purposes of translating NGOs documentation.",
"keywords": [],
"raw_extracted_content": "INMIGRA3: building a case for NGOs and NMT \nCelia Rico* \nUniversidad Europea de \nMadrid \n María del Mar Sánchez Ra mos** \nUniversidad de Alcalá Antoni Oliver*** \nUniversitat Oberta \nde Catalunya \n \n*celia.ri [email protected] \n**[email protected] \n***[email protected] \nAbstract \nINMIGRA3 is a three -year project that \nbuilds on the work of two previous initia-\ntives: INMIGRA2 -CM1 and CRISIS -MT2. \nTogether, they address the specific needs \nof NGOs in multilingual settings with a \nparticular interest in migratory contexts. \nWork on INMIGRA3 concentrates in the \nanalysis of how to use NMT for the pur-\nposes of translatin g NGOs documenta-\ntion. \n1 Translation needs of non-\ngovernmental organisations \nThe third sector is experiencing an increasing \nrelevance as the number of people in vulnerable \ncircumstances grows . Natural catastrophes, wars, \npolitical and religious persecutions, or economic \ncrisis are some of the conditions t hat leave \npeople unprotected. These are situations that \npose a challenge as complex linguistic situations \narise (Federici and O’Brien, 2020). And \nmigration flows are not an exception. \nPrevious research has r evealed a series of gaps \nstill to be filled if we are to understand the true \nnature of multilingual needs in not -for-profit \nsettings . For instance, which are their working \nconditions as related to multilingual needs? H ow \ntechnology can be best put to use ? (see, for \n \n1 The work of INMIGRA2 -CM was presented at EAMT \n2017: https://ufal.mff.cuni.cz/eamt2017/user -project -\nproduct -papers/Conference_Booklet_EAMT2017.pdf \n2 CRISIS -MT is a project funded by Universidad de Alcalá \n(CCG2018/HUM -043). Among other objectives, it aims at \ndesigning an MT system that can be easily put to use in \nmultilingual crisis communication . instance, the work of INTERACT project3, \nLanguage on the Move4 or Translators without \nBorders5). \nIn the case of NGOs working in the migratory \ncontext, one the main issues is that usually the \nbudget allocated to translation resources is scarce \nor non-existent . This might explain why catering \nfor the multilingual needs of migrant population \nis not considered am ong their core activities —at \nleast in the minds of official donors (Footitt, \nCrack and Tesseur, 2018) . Consequently , \ntranslation is mostly cond ucted as volunteer \nwork and using ad hoc materials and tools. In the \ncase of MT, volunteers mostly use free online \nengines (Rico, 2020) . This involves a high risk \nwhen dealing with confidential and personal data \nsuch as donors’ information, reports to offi cial \nbodies regarding field actions or personal \ndocumentation from most vulnerable people . \n2 Building a case for the use of neural \nmachine translation \nINMIGRA3 aims at building a case for the use of \nNMT for the specific translation needs of NGOs. \nThis invol ves an experimental setting along the \nfollowing lines: \n The participation of two NGOs working \nwith migrant people and refugees in \nSpain : Cáritas Española6 and the Spanish \nCommittee for Refugee Help (CEAR)7. \n \n3 INTERACT website: \nhttps://sites.google.com/view/crisistranslation/home?aut hus\ner=0 \n4 Language on the move website: \nhttps://www.languageonthemove.com/ \n5 TWB website: https://translatorswithoutborders.org/ \n6 Cáritas Española is the Spanish chapter of Caritas Interna-\ntionalis , a no t-for-profit organiz ation associated to the Ro-\nman Catholic Church . 
\n7 CEAR ’s website: https://www.cear.es/ \n \n Definition of a use -case scenario \naccording to the specific needs of the \ntwo NGOs participating in the project. \nThis involves different text typologies \nand topics: donors’ funding information, \neconomic reports to official bodies, \ndocumentation for asylum petition, \nadministrative forms and instructions for \nhousing benefits, workshop materials in \nfood safety and health issues, tax waiver \nforms, and informative texts on access to \neducation. The source language of all \ntexts is Spanish and target languages are \nEnglish, French, Russian, Arabic and \nChinese8. \n A compact NMT engine developed at the \nUniversitat Oberta de Catalunya that can \nbe used offline in a regular consumer \ncomputer . As demo nstrated in Oliver et \nal (2019) it is possible to develop neural \nMT systems that can be integrated in a \nvery compact set of application s that \nwork in computers with limited hardware \nresources and without Internet \nconnection. In our implementation, we \nare using the Marian toolkit with an s2s \narchitecture. The subword -nmt algorithm \n(Sennrich, 2016) has been applied to \nboth sides of the corpus to minimize the \neffect of out -of-vocabulary words. As a \ntraining corpus, we use MultiUN corpus . \n The evaluation of the MT output in terms \nof text usability ( Suojanen et al, 2015 ). \n The publication of a best practice report \nwith specific recommendations for \nNGOs on how to implement NMT . This \nwill include specifications on how these \nsystems can be adapted to the needs of a \ngiven situation (language pairs, subjects, \netc.), and how to implement the m in a \nregular consumer computer to provide a \nuseful solution to communication needs \nin migratory contexts . \nReference s \nFederici, F.M. an d O’Brien, S. 2020, Translation \nin Cascading Crisis . London: Routledge. \n \n8 We are aware that these languages do not cover the full \nspectrum of languages spoken by migrant people in Spain . \nLess resourced languages are not the specific subject of this \nproject as the participating NGOs issue their content in the \nlanguage combinations mentioned. Footitt, HA , Crack, A & Tesseur, W . \n2018, Respecting Communities in Internation-\nal Development: Languages and Cultural Un-\nderstanding . \nhttps://www.intrac.org/resources/respecting -\ncommunities -international -development -\nlanguages -cultural -understanding/ \nOliver, A., M. Sánchez Ramos and C. Rico. \nBuilding offline, compact, ready to use MT \nsystems for crisis translation. Workshop on \nCrisis Machine Translation (“Crisis MT”) at \nMT Summit 2019 \nhttps://sites.google.com/view/crisistranslation/\ncrisis -mt-workshop \nRico, C. 2020. Mapping translation technology \nand t he multilingual needs of NGOs along the \naid chain. In Federici, F.M. and O’Brien, S. \n2020, Translation in Cascading Crisis . Lon-\ndon: Routledge. \nSennrich, R. Barry Haddow and Alexandra Birch \n(2016): Neural Machine Translation of Rare \nWords with Subword Units Proceedings of the \n54th Annual Meeting of the Association for \nComputational Linguistics (ACL 2016). Berlin, \nGermany. \nSuojanen, T ., Koskinen, K. and Tuominen, T. \n2015 . User -Centered Translation. London and \nNew York: Routledge. \n \n \n \nAcknowledgements \nINMIGRA3 is a project jointly funded by the \nAutonomous Community of Madrid (Spain) and \nthe European Social Fund under grant \nH2019/HUM -5772 (start date: 1 Jan 2020; end \ndate: 31 Dec 2023).",
"main_paper_content": null
}
|
{
"decision": "Unknown",
"reviews": []
}
| 0 | 0 |
[] |
[] |
{
"id": "ImKPrXqOVNQ",
"year": null,
"venue": "EAMT 2020",
"pdf_link": "https://aclanthology.org/2020.eamt-1.55.pdf",
"forum_link": "https://openreview.net/forum?id=ImKPrXqOVNQ",
"arxiv_id": null,
"doi": null
}
|
{
"title": "MTUOC: easy and free integration of NMT systems in professional translation environments",
"authors": [
"Antoni Oliver"
],
"abstract": "In this paper the MTUOC project, aiming to provide an easy integration of neural and statistical machine translation systems, is presented. Almost all the required software to train and use neural and statistical MT systems are released under free licences. However, their use is not always easy and intuitive and medium-high specialized skills are required. MTUOC project provides simplified scripts for preprocessing and training MT systems, and a server and client for easy use of the trained systems. The server is compatible with popular CAT tools for a seamless integration. The project also distributes some free engines.",
"keywords": [],
"raw_extracted_content": "MTUOC: easy and free integration of NMT systems\nin professional translation environments\nAntoni Oliver\nUniversitat Oberta de Catalunya\[email protected]\nAbstract\nIn this paper the MTUOC project, aim-\ning to provide an easy integration of neu-\nral and statistical machine translation sys-\ntems, is presented. Almost all the required\nsoftware to train and use neural and sta-\ntistical MT systems is released under free\nlicences. However, their use is not al-\nways easy and intuitive and medium-high\nspecialized skills are required. MTUOC\nproject provides simplified scripts for pre-\nprocessing and training MT systems, and a\nserver and client for easy use of the trained\nsystems. The server is compatible with\npopular CAT tools for a seamless integra-\ntion. The project also distributes some free\nengines.\n1 Introduction\nMTUOC is a project from the Arts and Humanities\ndepartment at the Universitat Oberta de Catalunya\n(UOC) to facilitate the use and integration of neu-\nral and statistical machine translation systems.\nMost of the software needed for training and\nusing such systems is distributed under free per-\nmissive licences. So this technology is, in princi-\nple, freely available for any professional, company\nor organization. The use of MT toolkits presents\nsome problems:\n\u000fTechnological skills : medium-high techno-\nlogical skills are required. Knowledge of\nsome programming (as Python, for exam-\nple) and scripting (as Bash, for example) lan-\nguages are necessary. On the other hand, the\nc\r2020 The authors. This article is licensed under a Creative\nCommons 3.0 licence, no derivative works, attribution, CC-\nBY-ND.documentation of these toolkits is not always\ndetailed enough and some time in trial and er-\nror is spent.\n\u000fIntegration: the resulting systems are not eas-\nily integrable in existing workflows. Most of\nthe toolkits provide access through some kind\nof API, usually using a server-client configu-\nration. Some CAT tools offer plugins to ac-\ncess some existing systems. But not all CAT\nTool - MT system combinations are available.\n\u000fHardware : relatively high hardware require-\nments are present, especially for training the\nsystems. For training SMT systems lots of\nRAM memory is required. For training NMT\nsystems one or more powerful GPU units are\ncompulsory.\nMTUOC tries to offer solutions for the first two\nproblems. Regarding the technological skills prob-\nlem, it provides a series of easy-to-use and easy-\nto-understand Python and Bash scripts for corpus\npre-processing and training. All these scripts are\nwell documented and can be adapted and extended\nin an easy way. Regarding the integration prob-\nlem, a fully configurable server and client are pro-\nvided. The server can mimic the behaviour of sev-\neral kinds of servers, so it can be used with a large\nrange of CAT tools. For example, the server can\nuse a Marian engine but behave as a Moses server\nso it can be directly integrated with OmegaT. The\nclient can deal with several widely used file for-\nmats (as XLIFF, for example) and generate TMX\ntranslation memories that can be used in any CAT\ntool. Regarding the hardware problem , several\nfacts should be borne in mind. Firstly, hardware\nrequirements for training are much harder than for\ntranslating. Once an engine is trained, it can be\nused in any consumer computer. 
So many potential\nusers can benefit from the freely available engines.\nSeveral providers can offer the service of training\ntailored machine translation systems. UOC can\nsign technology-transfer agreements with compa-\nnies and organizations to train tailored systems at\nvery competitive rates. This service is free for\nNGOs. Secondly, the price of hardware is getting\nlower over time and powerful GPU units are now\navailable at affordable prices.\n2 Components\nThe MTUOC project offers six main components:\n\u000fPython modules : providing several function-\nalities as tokenization, truecasing, etc.\n\u000fScripts written in Python and Bash and sev-\neral configuration files:\n–Corpus pre-processing scripts: to pro-\ncess the training corpora for training the\nsystems\n–Training scripts and configuration files:\nfor several SMT and NMT toolkits\n–Evaluation scripts: providing some\nwidely used MT evaluation metrics (as\nBLEU, NIST, WER, TER, edit distance)\n\u000fMTUOC Server : the component that receives\na segment to translate from the client or CAT\ntool, process it (tokenization, truecasing and\nso on) and sends it to the translation server.\nAfter receiving the translation this component\npost-processes it (detruecasing, detokeniza-\ntion and so on) and sends it back to the client\nor CAT tool.\n\u000fMTUOC Client : this component can handle\nseveral translation formats, send segments to\nthe server, receive the translations and create\nthe translated file.\n\u000fMTUOC Virtual Machine : as most toolkits\nwork under Linux, this virtual machine is use-\nful for Windows users to run all the required\ncomponents.\n\u000fPre-trained translation engines that can be\nfreely used with MTUOC\n3 MT Toolkits\nMTUOC-server can be used with the following\nMT toolkits.\u000fMoses1(Koehn et al., 2007)\n\u000fMarian2(Junczys-Dowmunt et al., 2018)\n\u000fOpenNMT3(Klein et al., 2017)\n\u000fModernMT4(Bertoldi et al., 2018)\n4 Obtaining MTUOC\nAll the components of MTUOC can be down-\nloaded from its SourceForge page.5The docu-\nmentation of the systems is available in the Wiki\nspace of the project page. All the MTUOC com-\nponents are released under a free licence, namely\nGNU GPL version 3.\nAcknowledgements: The training of the neu-\nral MT systems distributed by MTUOC has been\npossible thanks to the NVIDIA GPU grant pro-\ngramme.\nReferences\nBertoldi, Nicola, Davide Caroselli, and Marcello Fed-\nerico. 2018. The ModernMT project.\nJunczys-Dowmunt, Marcin, Roman Grundkiewicz,\nTomasz Dwojak, Hieu Hoang, Kenneth Heafield,\nTom Neckermann, Frank Seide, Ulrich Germann,\nAlham Fikri Aji, Nikolay Bogoychev, Andr ´e F. T.\nMartins, and Alexandra Birch. 2018. Marian: Fast\nneural machine translation in C++. In Proceed-\nings of ACL 2018, System Demonstrations , pages\n116–121, Melbourne, Australia, July. Association\nfor Computational Linguistics.\nKlein, Guillaume, Yoon Kim, Yuntian Deng, Jean\nSenellart, and Alexander M. Rush. 2017. Open-\nNMT: Open-source toolkit for neural machine trans-\nlation. In Proc. ACL .\nKoehn, Philipp, Hieu Hoang, Alexandra Birch, Chris\nCallison-Burch, Marcello Federico, Nicola Bertoldi,\nBrooke Cowan, Wade Shen, Christine Moran,\nRichard Zens, et al. 2007. Moses: Open source\ntoolkit for statistical machine translation. 
In Pro-\nceedings of the 45th annual meeting of the associa-\ntion for computational linguistics companion volume\nproceedings of the demo and poster sessions , pages\n177–180.\n1http://www.statmt.org/moses/\n2https://marian-nmt.github.io/\n3https://opennmt.net/\n4https://github.com/modernmt/modernmt\n5https://sourceforge.net/projects/mtuoc/",
"main_paper_content": null
}
|
{
"decision": "Unknown",
"reviews": []
}
| 0 | 0 |
[] |
[] |
{
"id": "8N-qAuHvW2",
"year": null,
"venue": "EAMT 2020",
"pdf_link": "https://aclanthology.org/2020.eamt-1.44.pdf",
"forum_link": "https://openreview.net/forum?id=8N-qAuHvW2",
"arxiv_id": null,
"doi": null
}
|
{
"title": "Quantitative Analysis of Post-Editing Effort Indicators for NMT",
"authors": [
"Sergi Alvarez",
"Antoni Oliver",
"Toni Badia"
],
"abstract": "The recent improvements in machine translation (MT) have boosted the use of post-editing (PE) in the translation industry. A new machine translation paradigm, neural machine translation (NMT), is displacing its corpus-based predecessor, statistical machine translation (SMT), in the translation workflows currently implemented because it usually increases the fluency and accuracy of the MT output. However, usual automatic measurements do not always indicate the quality of the MT output and there is still no clear correlation between PE effort and productivity. We present a quantitative analysis of different PE effort indicators for two NMT systems (transformer and seq2seq) for English-Spanish in-domain medical documents. We compare both systems and study the correlation between PE time and other scores. Results show less PE effort for the transformer NMT model and a high correlation between PE time and keystrokes.",
"keywords": [],
"raw_extracted_content": "Quantitative Analysis of Post-Editing Effort Indicators for NMT\nSergi Alvarez\nUniversitat Pompeu Fabra\[email protected] Oliver\nUniversitat Oberta de Catalunya\[email protected] Badia\nUniversitat Pompeu Fabra\[email protected]\nAbstract\nThe recent improvements in machine\ntranslation (MT) have boosted the use of\npost-editing (PE) in the translation indus-\ntry. A new MT paradigm, neural MT\n(NMT), is displacing its corpus-based pre-\ndecessor, statistical machine translation\n(SMT), in the translation workflows cur-\nrently implemented because it usually in-\ncreases the fluency and accuracy of the MT\noutput. However, usual automatic mea-\nsurements do not always indicate the qual-\nity of the MT output and there is still no\nclear correlation between PE effort and\nproductivity. We present a quantitative\nanalysis of different PE effort indicators\nfor two NMT systems (transformer and\nseq2seq) for English-Spanish in-domain\nmedical documents. We compare both sys-\ntems and study the correlation between PE\ntime and other scores. Results show less\nPE effort for the transformer NMT model\nand a high correlation between PE time\nand keystrokes.\n1 Introduction\nThe use of machine translation (MT) systems for\nthe production of drafts that are later post-edited\nhas become a widespread practice in the transla-\ntion industry. Research has concluded that post-\nediting of machine translation (PEMT) is usually\nmore efficient than translating from scratch (Plitt\nand Masselot, 2010; Federico et al., 2012; Green et\nal., 2013). Thus, it has been included in the trans-\nlation workflow because it increases productivity\nc\r2020 The authors. This article is licensed under a Creative\nCommons 3.0 licence, no derivative works, attribution, CC-\nBY-ND.when compared with human translation (Aranberri\net al., 2014) and reduces costs (Guerberof, 2009)\nwithout having a negative impact on quality (Plitt\nand Masselot, 2010). Post-editors “edit, modify\nand/or correct pre-translated text that has been pro-\ncessed by an MT system from a source language\ninto (a) target language(s)” (Allen, 2003, p. 296).\nIn recent years, neural machine translation\n(NMT) has produced promising results in terms of\nquality, for example in WMT 2019 (Barrault et al.,\n2019). This has increased the interest in this new\nparadigm for the translation industry, which has\nbegun to substitute its corpus-based predecessor,\nstatistical machine translation (SMT), with new\nNMT models. It has also boosted the incorpo-\nration of PEMT in many translation workflows.\nIn the 2018 Language Industry Survey,137% of\nthe respondents reported an increase of MT post-\nediting and an additional 17% indicated that they\nhad started implementing this practice.\nGiven the improved-quality performance of\nNMT and its widespread use in industrial scenar-\nios, it is necessary to study the potential this ap-\nproach can offer to post-editing. One of the main\nproblems is that automatic scores give a general\nidea of the MT output quality but do not always\ncorrelate to post-editing effort (Koponen, 2016;\nShterionov et al., 2018). 
However, many profes-\nsional translators state that if the quality of the MT\noutput is not good enough, they delete the remain-\ning segments and translate everything from scratch\n(Parra Escart ´ın and Arcedillo, 2015).\nOne of the main goals both of industry and re-\nsearch is to establish a correlation between the\nquality measurements of the MT output and trans-\nlators’ performance. Regarding post-editing ef-\n1http://fit-europe-rc.org/wp-content/uploads/2019/05/2018-\nLanguage-Industry-Survey-Report.pdf?x77803\nfort, all research uses the three separate but inter-\nrelated, dimensions established by Krings (2001):\ntemporal, technical and cognitive. Temporal effort\nmeasures the time spent post-editing the MT out-\nput. Technical effort makes reference to the inser-\ntions and deletions applied by the translator and is\nusually measured with keystroke analysis, HTER\n(Snover et al., 2006) or Levenshtein distance (edit\ndistance). Cognitive effort relates to the cognitive\nprocesses taking place during post-editing and has\nbeen measured by eye-tracking or think-aloud pro-\ntocols. Krings (2001) claimed that post-editing ef-\nfort could be determined as a combination of all\nthree dimensions. Even though no current mea-\nsure includes them all, cognitive effort was found\nto correlate with technical and temporal PE effort\nin a study by Moorkens et al. (2015).\nIn this paper we present a preliminary com-\nparative quantitative analysis of different post-\nediting effort indicators (technical and temporal)\nfor two NMT systems for English-Spanish in-\ndomain medical documents. First of all, we trained\na transformer and seq2seq model and compared\nthem with Google Translate and an SMT engine\n(check section 4.1 for further detail on the results).\nAs the NMT systems produced better quality re-\nsults, we used them to translate three English-to-\nSpanish medical texts. Then, two different trans-\nlators post-edited each version with PosEdiOn,2a\npost-editing tool developed mainly to collect infor-\nmation on different direct and indirect effort indi-\ncators (technical and temporal effort).\nIn Section 2 we analyse some of the previous\nwork on post-editing effort. We explain the differ-\nent NMT architectures in Section 3. In Section 4\nwe detail the MT systems and corpora used. We\nexplain the experimental settings in Section 5 and\nwe present the results in Section 6.\n2 Previous Work\nNMT is not a new architecture, but it can only be\napplied once the computational limitations have\nbeen solved (Cho et al., 2014; Bahdanau et al.,\n2015). The promising results obtained in auto-\nmatic metrics such as BLEU (Papineni et al., 2002)\nhave been paired with excellent scores in human\nevaluation of NMT (Wu et al., 2016; Junczys-\nDowmunt et al., 2016; Isabelle et al., 2017) when\ncompared to SMT, which has been the predomi-\nnant MT architecture so far.\n2https://sourceforge.net/projects/posedion/Once the improvement in quality has been de-\ntermined, it was necessary to analyse its benefits\nfor post-editing. One of the first complete papers\nstudying the impact of SMT and NMT in post-\nediting was (Bentivogli et al., 2016). They car-\nried out a small scale study on post-editing NMT\nand SMT outputs of English to German translated\nTED talks. They conclude that NMT in general\nterms decreases the post-editing effort, but de-\ngrades faster than SMT with sentence length. 
One\nof the main strengths of NMT is reordering of the\ntarget sentence.\nToral and S ´anchez-Cartagena (2017) increase\nthe initial scope of the study by Bentivogli et al.\n(2016) by increasing the language combinations\nand the metrics. One of the main conclusions is\nan improvement in quality when using NMT, al-\nthough it is not the same for all the language com-\nbinations.\nCastilho et al. (2017) report on a compara-\ntive analysis of phrase-based SMT (PBSMT) and\nNMT. They compare four language pairs and dif-\nferent automatic metrics and human evaluation\nmethods. General results show a quality increase\nfor NMT, although it also highlights some of the\nweaknesses of this new system. It focuses on\npost-editing and uses the PET interface (Aziz et\nal., 2012) to compare educational domain outputs\nfrom both systems using different metrics. NMT\nis shown to reduce word order errors and improve\nfluency. However, even if keystrokes are reduced,\ntemporal PE effort exhibits no significant reduc-\ntion.\nKoponen et al. (2019) present a comparison of\nPE changes performed on NMT, rule-based MT\n(RBMT) and SMT output for the English-Finnish\nlanguage combination. A total of 33 translation\nstudents participate in this English-to-Finnish PE\nexperiment. It outlines the strategies participants\nadopt to post-edit the different outputs, which con-\ntributes to the understanding of NMT, RBMT and\nSMT approaches. It also concludes that PE effort\nis lower for NMT than for SMT.\nIn industrial scenarios, Shterionov et al. (2018)\nshow that NMT systems obtain higher rankings\nby human reviewers than phrased-based SMT in\nall cases. They highlight that automatic measures\nsuch as BLEU, F-measure (Chinchor, 1992) and\nTER scores do not always correlate with NMT\nquality. Rather, they usually tend to underesti-\nmate it. Even in closely-related languages, which\nSystem BLEU NIST WER DA\nMarian S2S 0.3601 7.6142 0.6893 64\nMarian Transformer 0.3616 7.3863 0.6334 68\nMoses 0.3942 7.8146 0.7386 46\nGoogle Translate 0.3304 7.1197 0.7788 56\nTable 1: Automatic and DA evaluation figures\nare traditionally post-edited with RBMT systems,\nNMT systems with worse automatic metrics show\nbetter results in human evaluation (Costa-Juss `a,\n2017; Alvarez et al., 2019).\nRegarding PE effort indicators, PE time is one\nof the most commonly-used elements to study\nMT quality, although research shows considerable\nvariation among translators (Koponen et al., 2019).\nHTER is another measure frequently used in the\nindustry due to its theoretical correlation to PE ef-\nfort (Specia and Farzindar, 2010). However, re-\nsearch has shown it does not always correspond to\ntranslators’ perception of quality (Koponen, 2012;\nGraham et al., 2016). In fact, some authors sug-\ngest new ways of measuring PE effort taking into\naccount different scores (Scarton et al., 2019) or a\nmultidimensional approach that combines some of\nthe currently existing measures (Aranberri et al.,\n2014).\nGiven the undeniable improvements in quality\nNMT offers for post-editing, we study two differ-\nent NMT systems and how they affect different\nindicators of post-editing effort. We also analyse\nthe correlation of PE time with different direct and\nindirect measures of technical effort (keystrokes,\nHBLEU, HTER and edit distance). 
As far as we\nare aware, there are no studies comparing how two\ndifferent NMT outputs affect post-editing for En-\nglish to Spanish in-domain texts.\n3 NMT architectures\nThe basic architecture of NMT models (Cho et al.,\n2014; Sutskever et al., 2014) consists of an encoder\nand a decoder. First of all, each word included\nin the input sentence is introduced as a separate\nelement into the encoder so that it can encode it\ninto an internal fixed-length representation called\nthe context vector. It contains the meaning of the\nwhole sentence. Then, the decoder decodes the\ncontext vector and predicts the output sequence.\nInstead of encoding the input sequence into a\nsingle fixed context vector, attention (Bahdanau et\nal., 2015) is proposed as a solution to the limitationof the encoder-decoder model encoding the input\nsequence to one fixed length vector. It develops a\ncontext vector that is filtered specifically for each\noutput time step.\nTransformer (Vaswani et al., 2017) follows\nmainly the encoder-decoder model with attention\npassed from encoder to decoder. It employs a self-\nattention mechanism that allows the encoder and\ndecoder to account for every word included in the\nentire input sequence. Transformer proposes to en-\ncode each position, apply self-attention in both de-\ncoder and encoder, and enhance the idea of self-\nattention by calculating multi-head attention. This\nimproves performance expanding the model’s abil-\nity to focus on different positions and gives the\nattention layer multiple sets of weight matrices.\nThere are no recurrent networks, only a fully con-\nnected feed-forward network.\n4 MT systems and training corpora\n4.1 MT systems\nFor the experiments, we used Marian3(Junczys-\nDowmunt et al., 2018) to train two NMT sys-\ntems. For the first one (1) we used an RNN-based\nencoder-decoder model with attention mechanism\n(s2s), layer normalization, tied embeddings, deep\nencoders of depth 4, residual connectors and\nLSTM cells. For the second one (2), the trans-\nformer, we used the configuration in the example\nof the Marian documentation,4that is, 6 layer en-\ncoder and 6 layer decoder, tied embeddings for\nsource, target and output layer, label smoothing,\nlearn rate warm-up and cool down.\nTo establish a comparison baseline, we trained a\nMoses model with the same corpus, and also used\nGoogle translate. We assessed the resulting en-\ngines with standard automatic metrics (see Table\n1). The best scores for BLEU were obtained by\nthe Moses engine, even though WER was better\nfor the two NMT systems. This is in line with the\n3https://marian-nmt.github.io\n4https://github.com/marian-nmt/marian-\nexamples/tree/master/transformer\nCorpus Segments/Entries Tokens eng Tokens spa\nBMTR 816,544 14,726,693 16,836,428\nMedline Abstracts 100,797 1,772,461 1,964,860\nUFAL 258,701 3,202,162 3,437,936\nKreshmoi 1,500 28,454 32,158\nIBECS 72,168 13,575,418 15,014,299\nSciELO 741,407 17,464,256 19,305,165\nMedLine 140,479 1,649,869 1,846,374\nMSD Manuals 241,336 3,719,933 4,467,906\nEMEA 366,769 5,327,963 6,008,543\nPortal Clinic 8,797 159,717 169,294\nGlossary MeSpEn 125,645 - -\nICD10-en-es 5,202 - -\nSnowMedCT Denom. 887,492 - -1\nSnowMedCT Def. 
4,268 177,861 184,574\nTotal 4,430,765 66,147,518 74,663,550\nTable 2: Size of the corpora and glossaries used to create the corpus to train the MT systems\nresults of recent research, which has shown certain\nautomatic metrics tend to underestimate NMT sys-\ntems (Shterionov et al., 2018; Alvarez et al., 2019).\nAdditionally, we conducted a manual evaluation\nof a 30-segment sample for the three MT outputs\nemploying monolingual direct assessment (DA) of\ntranslation adequacy (Graham and Baldwin, 2014;\nGraham and Liu, 2016). We used this DA setup\nbecause it simplifies the task of translation assess-\nment (usually done as a bilingual task) into a sim-\npler monolingual assessment task. We obtained the\nresults averaging the assessment of two annotators\nand the NMT systems received higher marks.\nAs it can be seen in Table 1, DA classified\nMoses as the worst rated. Therefore, we decided\nto include only the two NMT systems for the post-\nediting tasks.\n4.2 Corpora\nTo train the system we have used several publicly\navailable corpora in the English-Spanish pair:\n\u000fBiomedical translation repository (BMTR)5\n\u000fMedline abstracts training data provided by\nBiomedical Translation Task 20196\n\u000fThe UFAL Medical Corpus7v1.0.\n\u000fThe Khresmoi development data8\n5https://github.com/biomedical-translation-corpora/corpora\n6http://www.statmt.org/wmt19/biomedical-translation-\ntask.html\n7https://ufal.mff.cuni.cz/ufal medical corpus\n8https://lindat.mff.cuni.cz/repository/xmlui/handle/11234/1-\n2122\u000fThe IBECS9(Spanish Bibliographical Index\nin Health Sciences ) corpus.\n\u000fThe SciELO corpus10\n\u000fThe EMEA11(European Medicines Agency )\ncorpus.\nWe have also created several corpora from web-\nsites with medical content:\n\u000fMedline Plus12: we have compiled our own\ncorpus from the web and we have combined\nthis with the corpus compiled in MeSpEn.\n\u000fMSD Manuals13English-Spanish corpus,\ncompiled for this project under permission of\nthe copyright holders.\n\u000fPortal Cl ´ınic14English-Spanish corpus, com-\npiled by us for this project.\nWe have also used several glossaries and\nglossary-like databases treating them as corpora.\nThese resources contain a lot of useful terms\nand expressions in the medical domain. Namely,\nwe have used the English-Spanish glossary from\nMeSpEn, the 10th revision of the International\nStatistical Classification of ICD and SnowMedCT.\nWith all the corpora and glossaries we have cre-\nated an in-domain training corpus of 4,430,765\nsegments and entries.\n9http://ibecs.isciii.es\n10https://sites.google.com/view/felipe-soares/datasets\n11http://opus.nlpl.eu/EMEA.php\n12https://medlineplus.gov/\n13https://www.msdmanuals.com/\n14https://portal.hospitalclinic.org\nT1 (S2S) T2 (S2S) T3 (T) T4 (T)\nmean st. dev. mean st. dev. mean st. dev. mean st. dev.\nHTER 0.16 0.12 0.11 0.09 0.17 0.17 0.12 0.17\nHBLEU 0.53 0.27 0.65 0.27 0.56 0.29 0.67 0.33\nHEd 1.28 1.19 0.84 0.94 1.56 2.04 1.09 2.07\nKeys/tok 6.36 28.25 3.38 5.25 7.53 27.62 5.91 25.59\nPETpT 9.19 33.97 4.61 8.56 4.57 12.22 3.03 8.69\nTable 3: PE-based metrics (mean and standard deviation) for the task\nS2S NMT Transf. NMT\nmean st. dev. mean st. dev.\nHTER 0.13 0.10 0.11 0.09\nHBLEU 0.59 0.27 0.65 0.27\nHEd 1.06 1.06 0.84 0.94\nKeys/tok 4.87 16.75 3.38 5.25\nPETpT 6.90 21.26 4.61 8.56\nTable 4: Total PE-based metrics for each NMT model\nIn Table 2 the size of all corpora and glossaries\nused for training the MT systems is shown. 
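Merging all these corpora and glossaries into a single training set amounts to collecting the segment pairs and keeping each unique source-target pair once, as reflected in the counts of Table 2. A minimal sketch of that step follows; file handling and any text normalisation are omitted, and the function name is ours.

```python
def merge_and_deduplicate(corpora):
    """Merge several parallel resources and keep each (source, target) pair once.

    corpora: iterable of iterables of (source_segment, target_segment) tuples,
    e.g. one iterable per corpus or glossary listed in Table 2."""
    seen = set()
    merged = []
    for corpus in corpora:
        for source, target in corpus:
            pair = (source, target)
            if pair not in seen:
                seen.add(pair)
                merged.append(pair)
    return merged
```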
Figures\nare calculated eliminating all the repeated source\nsegment-target segment pairs in the corpora.\n5 Experiment\nWe used the two NMT systems (transformer and\ns2s) trained with the corpora described above to\ntranslate from English into Spanish three texts\n(1468, 631 and 2247 words respectively) from the\nmedical domain.\nFour professional translators with at least one\nyear of post-editing experience carried out the\ntask: two of them post-edited the s2s output (T1\nand T2) and the other two, the transformer output\n(T3 and T4). They were asked to produce publish-\nable quality translations. As we wanted to reduce\nthe external variables as much as possible, they\nall used PosEdiOn15, a computer-assisted transla-\ntion tool specifically designed for assessing post-\nediting effort, which logs both post-editing time\nand edits (keystrokes, insertions and deletions, that\nis, technical effort). The main characteristics of\nthe post-editing tool were also explained to them\nbefore starting the task.\nIn order to avoid any bias, translators never post-\nedited the same text twice. However, they were\ntold that an NMT system was used to produce the\noutput. They received previous information on\n15https://sourceforge.net/projects/posedion/the tool and a three day period to test it before\ndoing the task. They were paid their usual rate\nand had a two-week deadline. Two of them ex-\npressed concerns about the tool, as they preferred\nto work with their usual tools. However, they did\nnot think it would affect the final quality of their\njob or their usual working speed. While post-\nediting, they could search for all the required in-\nformation in order to produce the final translation.\nThey could also pause the post-editing task when-\never they wanted.\n6 Results\n6.1 PE effort indicators\nOnce translators finished post-editing, we calcu-\nlated the following task-specific (PE based) met-\nrics (showed in Table 3):\n\u000fPETpT , PE time in seconds normalised by\nthe length of the target segment in tokens.\n\u000fHTER , the TER value comparing the raw\nMT output with the post-edited segment.\n\u000fHBLEU , the BLEU score obtained by com-\nparing the raw MT output with the post-edited\nsegment.\n\u000fHEd , an edit distance value (Levenshtein dis-\ntance) calculated comparing the raw MT out-\nput with the post-edited segment.\n\u000fKeystrokes normalized by the number of to-\nkens.\nPost-editor Unmodified seg.\nT1 (S2S) 22\nT2 (S2S) 31\nT3 (T) 19\nT4 (T) 58\nTable 5: Unmodified segments after post-editing\nFigure 1: Scatter plot of keystrokes and time for all of the translators\nIn order to avoid the maximum number of out-\nliers, we did not include those segments in which\n(normalized) time or (normalized) keystrokes dou-\nbled the mean plus the standard deviation of the to-\ntal time or number of keystrokes. As usually hap-\npens in these types of tasks, post-editing effort in-\ndicators show a considerable variation among dif-\nferent translators. For the seg2seg model, transla-\ntors showed a difference of 4.58 PETpT between\nthem. This difference was reduced to 1.54 in the\ncase of the transformer model. However, if we\ncheck the total figures for each of the systems (see\nTable 4), post-editing time is clearly reduced for\nthe transformer model, as well as all the other\nscores.\nWe also used the distribution-agnostic Kol-\nmogorov–Smirnov test to compare the distribution\nof PETpT for the two translators of each NMT\nmodel. 
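To make the two steps just described concrete — the outlier filter and the distribution comparison — the following is a minimal sketch, not the analysis code actually used in the study. It assumes hypothetical per-segment PETpT values for the two post-editors of one NMT output, reads the outlier rule as "drop segments above twice the mean plus standard deviation" (one possible reading of the description above), and uses SciPy's two-sample Kolmogorov–Smirnov test.

```python
import numpy as np
from scipy.stats import ks_2samp

def filter_outliers(values):
    """Drop segments whose value exceeds twice (mean + standard deviation).

    This mirrors the filter described above under a simplifying assumption
    about what 'doubled the mean plus the standard deviation' means."""
    values = np.asarray(values, dtype=float)
    threshold = 2 * (values.mean() + values.std())
    return values[values <= threshold]

# Hypothetical per-segment PETpT values (seconds per target token) for the
# two post-editors working on the same NMT output.
petpt_t1 = [4.2, 3.9, 12.5, 2.8, 40.0, 5.1, 3.3]
petpt_t2 = [2.1, 2.4, 3.0, 1.8, 2.9, 15.0, 2.2]

t1 = filter_outliers(petpt_t1)
t2 = filter_outliers(petpt_t2)

# Two-sample Kolmogorov-Smirnov test: the null hypothesis is that both
# samples are drawn from the same underlying distribution.
stat, p_value = ks_2samp(t1, t2)
print(f"KS statistic = {stat:.3f}, p-value = {p_value:.3f}")
if p_value < 0.05:
    print("The two translators' PETpT distributions differ significantly.")
else:
    print("No significant difference between the two distributions.")
```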
We found no clear evidence of a common distribution for the two translators (at p < 0.05). This would seem to indicate the need to increase the number of translators for any given post-editing test in order to obtain a more representative mean.

Another interesting figure for understanding PE effort is the number of unmodified segments. Even though this does not mean those segments imply no PE effort, it can give an indication of MT output quality. Table 5 shows the number of unmodified segments per translator from a total of 224 segments. There is no clear tendency for either MT system, but rather a preference corresponding to the individual translator, especially T4, who left a high number of segments unmodified, which correlates with the low PE time recorded.

We also checked PETpT in relation to segment length, as research has shown that longer segments tend to imply higher PE effort (Bentivogli et al., 2016). We studied segments with more than 35 tokens to see if PETpT or any other PE effort indicator increased. We could find no statistically significant evidence linking segment length to translators' effort in our experiments. This could indicate that newer NMT models do not always reduce MT quality in longer segments.

Our results with a limited number of translators confirm previous studies (Castilho et al., 2017; Shterionov et al., 2018; Alvarez et al., 2019); further, more extensive experimentation is needed in order to obtain meaningful indicators of MT output quality.

        T1 (S2S)   T2 (S2S)   T3 (T)    T4 (T)     ALL
HTER    0.309*     0.545*     0.418*    0.00705*   0.49*
HBLEU   -0.072     -0.209     -0.148    -0.370*    -0.21*
HEd     0.043*     0.706      0.0770*   0.809*     0.66
Keys    0.823*     0.868*     0.824*    0.822*     0.82*
Table 6: Spearman's correlation with time as a gold standard for different effort indicators (*p < 0.001)

6.2 Correlation between scores
Once we established the overall results for each model, we tried to identify which metric produced scores that were closest to the total time spent per segment. We calculated Spearman's correlation coefficient between the total amount of time and all other metrics.

As can be seen in Table 6, the best overall correlation is found with the number of keys (see Figure 1), for all translators as well as for the total, followed by the calculated edit distance. Most of the results obtained show a statistically significant correlation, especially those figures relating to the number of keystrokes (*p < 0.001).

These results are in line with the conclusions reported by previous work (Graham et al., 2016; Scarton et al., 2019), which found no clear correlation between temporal effort and the most frequent metrics, even though the number of keystrokes was the most closely related metric.

6.3 Tails distribution
Figure 2: Correlation for best and worst segments

There was a lack of correlation between the distributions of PE time among translators, and between this indicator and the others. We wanted to take a closer look at the best and worst segments to analyse whether the correlation improved. We counted the number of common segments between the 50 best and worst time segments and all other metrics calculated.

As can be seen in Figure 2, there is a better correlation for the segments in which less time was spent. Furthermore, the edit distance shows the best correlation in these cases.
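As a sketch of the correlation and tails analyses in Sections 6.2–6.3 (not the scripts used for the study), the following uses SciPy's Spearman correlation with PE time as the gold standard and counts the overlap between the 50 fastest and slowest segments under time and under another indicator; the per-segment values are randomly generated stand-ins for the PosEdiOn logs.

```python
import numpy as np
from scipy.stats import spearmanr

rng = np.random.default_rng(0)
n_segments = 224  # same number of segments as in the task

# Hypothetical per-segment effort indicators; in the real study these come
# from the PosEdiOn logs and from comparing raw MT output with post-edits.
pe_time = rng.gamma(shape=2.0, scale=20.0, size=n_segments)        # seconds
keystrokes = pe_time * 0.8 + rng.normal(0, 5, n_segments)          # correlated
hter = np.clip(pe_time / pe_time.max() + rng.normal(0, 0.2, n_segments), 0, 1)

# Spearman's rank correlation with PE time as the gold standard.
for name, indicator in [("Keystrokes", keystrokes), ("HTER", hter)]:
    rho, p = spearmanr(pe_time, indicator)
    print(f"{name}: rho = {rho:.3f}, p = {p:.3g}")

# 'Tails' analysis: how many of the 50 fastest (or slowest) segments by PE
# time are also the 50 lowest (or highest) according to another indicator.
def overlap(a, b, k=50, largest=True):
    order_a = np.argsort(a)[::-1] if largest else np.argsort(a)
    order_b = np.argsort(b)[::-1] if largest else np.argsort(b)
    return len(set(order_a[:k]) & set(order_b[:k]))

print("Common segments among the 50 fastest:",
      overlap(pe_time, keystrokes, largest=False))
print("Common segments among the 50 slowest:",
      overlap(pe_time, keystrokes, largest=True))
```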
For the segments\nwith the higher time recorded, correlation is no-\ntably reduced in all cases and the edit distance and\nthe number of keystrokes show a higher correla-\ntion.\n7 Concluding remarks\nThere is a need for reliable metrics to evaluate MT\nquality in order to produce outputs which trans-\nlators can post-edit without too much effort. Our\nexperiments have shown that no single PE indica-\ntor can provide all the information necessary to as-\nsess the quality of the MT output. PE time pro-\nvides a useful measure, even though it does not\nalways correspond with other PE metrics and in-\ncludes a great variation among translators. The\nonly score that seems directly related to tempo-\nral effort are keystrokes (technical effort), but not\nHTER or HBLEU.\nIn industrial scenarios, the quality of a certain\nMT output is usually linked to PE time. The re-\nsults of our experiments suggest that the analy-\nsis of temporal effort can indicate the quality of\nthe MT output, but we believe a multidimensional\napproach that includes different effort indicators\nwould be a safer path to assess to convenience of\npost-editing a certain MT output.\nOur future work will study further indicators of\nMT quality for post-editing in depth, mainly the\ncharacterization of source text to assess PE effort.\nAcknowledgements: This work was supported\nby Universitat Pompeu Fabra (grant COMPLE-\nMENTA 2019).\nThe training of the neural machine translation\nsystems has been possible thanks to the NVIDIA\nGPU grant programme.\nReferences\nAllen, Jeffrey H. 2003. Post-editing. In Sommer,\nHarold, editor, Computers and Translation: A trans-\nlator’s guide , pages 297–317. John Benjamin, Ams-\nterdam.\nAlvarez, Sergi, Antoni Oliver, and Toni Badia. 2019.\nDoes NMT Make a Difference when Post-editing\nClosely Related Languages? The Case of Spanish-\nCatalan. In Proceedings of Machine Translation\nSummit XVII Volume 2: Translator, Project and User\nTracks , pages 49–56, Dublin, Ireland, August. Euro-\npean Association for Machine Translation.\nAranberri, Nora, Gorka Labaka, Arantza Ilarraza, and\nKepa Sarasola. 2014. Comparison of Post-Editing\nProductivity between Professional Translators and\nLay Users. In Proceedings of the Third Workshop\non Post-Editing Technology and Practice (WPTP -\n3), Vancouver, Canada.\nAziz, Wilker, Sheila C. M. De Sousa, and Lucia Specia.\n2012. PET: A Tool for Post-editing and Assessing\nMachine Translation. In Proceedings of the Eight In-\nternational Conference on Language Resources and\nEvaluation (LREC’12) , pages 3982–3987.\nBahdanau, Dzmitry, Kyunghyun Cho, and Yoshua Ben-\ngio. 2015. Neural Machine Translation by Jointly\nLearning to Align and Translate. In Bengio, Yoshua\nand Yann LeCun, editors, 3rd International Confer-\nence on Learning Representations, ICLR 2015, San\nDiego, CA, USA, May 7-9, 2015, Conference Track\nProceedings .\nBarrault, Lo ¨ıc, Ond ˇrej Bojar, Marta R. Costa-juss `a,\nChristian Federmann, Mark Fishel, Yvette Gra-\nham, Barry Haddow, Matthias Huck, Philipp Koehn,\nShervin Malmasi, Christof Monz, Mathias M ¨uller,Santanu Pal, Matt Post, and Marcos Zampieri. 2019.\nFindings of the 2019 Conference on Machine Trans-\nlation (WMT19). In Proceedings of the Fourth Con-\nference on Machine Translation (Volume 2: Shared\nTask Papers, Day 1) , pages 1–61, Florence, Italy,\nAugust. Association for Computational Linguistics.\nBentivogli, Luisa, Arianna Bisazza, Mauro Cettolo, and\nMarcello Federico. 2016. 
Neural versus Phrase-\nBased Machine Translation Quality: a Case Study.\nInProceedings of the 2016 Conference on Empiri-\ncal Methods in Natural Language Processing , pages\n257–267. Association for Computational Linguis-\ntics.\nCastilho, Sheila, Joss Moorkens, Federico Gas-\npari, Rico Sennrich, Vilelmini Sosoni, Yota Geor-\ngakopoulou, Pintu Lohar, Andy Way, Antonio Miceli\nBarone, and Maria Gialama. 2017. A Comparative\nQuality Evaluation of PBSMT and NMT using Pro-\nfessional Translators. In Proceedings of MT Summit\nXVI, vol.1: Research Track , pages 116–131, 9.\nChinchor, Nancy. 1992. MUC-4 Evaluation Metrics.\nInProceedings of the Fourth Message Understand-\ning Conference , pages 22–29.\nCho, Kyunghyun, Bart van Merri ¨enboer, Dzmitry Bah-\ndanau, and Yoshua Bengio. 2014. On the Properties\nof Neural Machine Translation: Encoder-Decoder\nApproaches. In Proceedings of SSST-8, Eighth\nWorkshop on Syntax, Semantics and Structure in\nStatistical Translation , Doha, Qatar. Association for\nComputational Linguistics.\nCosta-Juss `a, Marta R. 2017. Why Catalan-Spanish\nNeural Machine Translation? Analysis, Comparison\nand Combination with Standard Rule and Phrase-\nbased Technologies. Proceedings of the Fourth\nWorkshop on NLP for Similar Languages, Varieties\nand Dialects (VarDial) , pages 55–62.\nFederico, M., A. Catttelan, and M. Trombetti. 2012.\nMeasuring User Productivity in Machine Translation\nEnhanced Computer Assisted Translation. In Pro-\nceedings of the 10th Conference of the AMTA , pages\n44–56. AMTA.\nGraham, Yvette and Timothy Baldwin. 2014. Testing\nfor Significance of Increased Correlation with Hu-\nman Judgment. In Proceedings of the 2014 Con-\nference on Empirical Methods in Natural Language\nProcessing (EMNLP) , pages 172–176, Doha, Qatar,\nOctober. Association for Computational Linguistics.\nGraham, Yvette and Qun Liu. 2016. Achieving\nAccurate Conclusions in Evaluation of Automatic\nMachine Translation Metrics. In Proceedings of\nthe 15th Annual Conference of the North Ameri-\ncan Chapter of the Association for Computational\nLinguistics: Human Language Technologies , San\nDiego, CA. Association for Computational Linguis-\ntics.\nGraham, Yvette, Timothy Baldwin, Meghan Dowling,\nMaria Eskevich, Teresa Lynn, and Lamia Tounsi.\n2016. Is all that Glitters in Machine Translation\nQuality Estimation really Gold? In Proceedings of\nCOLING 2016, the 26th International Conference on\nComputational Linguistics: Technical Papers , pages\n3124–3134, Osaka, Japan, December. The COLING\n2016 Organizing Committee.\nGreen, Spence, Jeffrey Heer, and Christopher D. Man-\nning. 2013. The Efficacy of Human Post-editing\nfor Language Translation. In Proceedings of the\nSIGCHI Conference on Human Factors in Comput-\ning Systems - CHI 13 . ACM Press.\nGuerberof, Ana. 2009. Productivity and Quality in\nMT Post-editing. In Proceedings of MT Summit XII ,\npages 8–14. Association of Machine Translation.\nIsabelle, Pierre, Colin Cherry, and George F. Foster.\n2017. A Challenge Set Approach to Evaluating Ma-\nchine Translation. CoRR , abs/1704.07431.\nJunczys-Dowmunt, Marcin, Tomasz Dwojak, and Hieu\nHoang. 2016. Is Neural Machine Translation Ready\nfor Deployment? A Case Study on 30 Translation\nDirections. CoRR , abs/1610.01108.\nJunczys-Dowmunt, Marcin, Roman Grundkiewicz,\nTomasz Dwojak, Hieu Hoang, Kenneth Heafield,\nTom Neckermann, Frank Seide, Ulrich Germann,\nAlham Fikri Aji, Nikolay Bogoychev, Andr ´e F. T.\nMartins, and Alexandra Birch. 2018. 
Marian: Fast\nNeural Machine Translation in C++. Proceedings of\nACL 2018, System Demonstrations , pages 116–121,\nJuly.\nKoponen, Maarit, Leena Salmi, and Markku Nikulin.\n2019. A Product and Process Analysis of Post-editor\nCorrections on Neural, Statistical and Rule-based\nMachine Translation Output. Machine Translation ,\n(33, pages 61–90).\nKoponen, Maarit. 2012. Comparing Human Percep-\ntions of Post-editing Effort with Post-editing Opera-\ntions. Proceedings of the Seventh Workshop on Sta-\ntistical Machine Translation , pages 181–190.\nKoponen, Maarit. 2016. Is Machine Translation Post-\nediting Worth the Effort? A Survey of Research into\nPost-editing and Effort. The Journal of Specialised\nTranslation , pages 131–148.\nMoorkens, Joss, Sharon O ’brien, Igor A L Da Silva,\nNorma B De, Lima Fonseca, Fabio Alves, and\nNorma B De Lima Fonseca. 2015. Correlations of\nperceived post-editing effort with measurements of\nactual effort. Machine Translation , 29:267–284.\nPapineni, Kishore, Salim Roukos, Todd Ward, and\nWj Zhu. 2002. BLEU: A Method for Automatic\nEvaluation of Machine Translation. Number July,\npages 311–318.Parra Escart ´ın, Carla and Manuel Arcedillo. 2015.\nA Fuzzier Approach to Machine Translation Eval-\nuation: A Pilot Study on Post-editing Productiv-\nity and Automated Metrics in Commercial Set-\ntings. Proceedings of the ACL 2015 Fourth Work-\nshop on Hybrid Approaches to Translation (HyTra) ,\n1(2010):40–45.\nPlitt, Mirko and Franc ¸ois Masselot. 2010. A Produc-\ntivity Test of Statistical Machine Translation Post-\nEditing in a Typical Localisation Context. The\nPrague Bulletin of Mathematical Linguistics NUM-\nBER, 93:7–16.\nScarton, Carolina, Mikel L. Forcada, Miquel Espl `a-\nGomis, and Lucia Specia. 2019. Estimating Post-\nediting Effort: A Study on Human Judgements,\nTask-based and Reference-based Metrics of MT\nQuality. In Proceedings of IWSLT 2019 , volume\nabs/1910.06204, Hong Kong, China.\nShterionov, Dimitar, Riccardo Superbo, Pat Nagle,\nLaura Casanellas, Tony O’Dowd, and Andy Way.\n2018. Human versus Automatic Quality Evalua-\ntion of NMT and PBSMT. Machine Translation ,\n32(3):217–235, Septembre.\nSnover, Matthew, Bonnie Dorr, Richard Schwartz, Lin-\nnea Micciulla, and John Makhoul. 2006. A Study\nof Translation Edit Rate with Targeted Human An-\nnotation. Proceedings of Association for Machine\nTranslation in the Americas , (August):223–231.\nSpecia, Lucia and Atefeh Farzindar. 2010. Estimating\nMachine Translation Post-editing Effort with HTER.\nAMTA-2010 Workshop Bringing MT to the User: MT\nResearch and the Translation Industry , page 33–41.\nSutskever, Ilya, Oriol Vinyals, and Quoc V Le. 2014.\nSequence to Sequence Learning with Neural Net-\nworks. In Ghahramani, Z., M. Welling, C. Cortes,\nN. D. Lawrence, and K. Q. Weinberger, editors, Ad-\nvances in Neural Information Processing Systems\n27, pages 3104–3112. Curran Associates, Inc.\nToral, Antonio and V ´ıctor M. S ´anchez-Cartagena.\n2017. A Multifaceted Evaluation of Neural versus\nPhrase-Based Machine Translation for 9 Language\nDirections. In Proceedings of the 15th Conference\nof the European Chapter of the Association for Com-\nputational Linguistics , volume 1, Long Papers, pages\n1063–1073, East Stroudsburg. Association for Com-\nputational Linguistics (ACL).\nVaswani, Ashish, Noam Shazeer, Niki Parmar, Jakob\nUszkoreit, Llion Jones, Aidan N. Gomez, Lukasz\nKaiser, and Illia Polosukhin. 2017. Attention is all\nyou need. 
In Advances in Neural Information Pro-\ncessing Systems 30 (NIPS) .\nWu, Yonghui, Mike Schuster, Zhifeng Chen, Quoc V .\nLe, Mohammad Norouzi, Wolfgang Macherey,\nMaxim Krikun, Yuan Cao, Qin Gao, Klaus\nMacherey, Jeff Klingner, Apurva Shah, Melvin John-\nson, Xiaobing Liu, Lukasz Kaiser, Stephan Gouws,\nYoshikiyo Kato, Taku Kudo, Hideto Kazawa, Keith\nStevens, George Kurian, Nishant Patil, Wei Wang,\nCliff Young, Jason Smith, Jason Riesa, Alex Rud-\nnick, Oriol Vinyals, Gregory S. Corrado, Macduff\nHughes, and Jeffrey Dean. 2016. Google’s Neu-\nral Machine Translation System: Bridging the Gap\nbetween Human and Machine Translation. ArXiv ,\nabs/1609.08144.",
"main_paper_content": null
}
|
{
"decision": "Unknown",
"reviews": []
}
| 0 | 0 |
[] |
[] |
{
"id": "VrbkRP94saP",
"year": null,
"venue": "EAMT 2015",
"pdf_link": "https://aclanthology.org/W15-4919.pdf",
"forum_link": "https://openreview.net/forum?id=VrbkRP94saP",
"arxiv_id": null,
"doi": null
}
|
{
"title": "Unsupervised training of maximum-entropy models for lexical selection in rule-based machine translation",
"authors": [
"Francis M. Tyers",
"Felipe Sánchez-Martínez",
"Mikel L. Forcada"
],
"abstract": "Francis M. Tyers, Felipe Sánchez-Martínez, Mikel L. Forcada. Proceedings of the 18th Annual Conference of the European Association for Machine Translation. 2015.",
"keywords": [],
"raw_extracted_content": "Unsupervised training of maximum-entropy models for lexical selection in\nrule-based machine translation\nFrancis M. Tyers\nHSL-fakultehta,\nUiT Norgga ´arktala ˇs universitehta,\nN-9018 RomsaFelipe S ´anchez-Mart ´ınez\nDept. Lleng. i Sist. Inform.,\nUniversitat d’Alacant,\nE-03071 AlacantMikel L. Forcada\nDept. Lleng. i Sist. Inform.,\nUniversitat d’Alacant,\nE-03071 Alacant\nAbstract\nThis article presents a method of training\nmaximum-entropy models to perform lexi-\ncal selection in a rule-based machine trans-\nlation system. The training method de-\nscribed is unsupervised; that is, it does not\nrequire any annotated corpus. The method\nuses source-language monolingual corpora,\nthe machine translation (MT) system in\nwhich the models are integrated, and a sta-\ntistical target-language model. Using the\nMT system, the sentences in the source-\nlanguage corpus are translated in all possi-\nble ways according to the different transla-\ntion equivalents in the bilingual dictionary\nof the system. These translations are then\nscored on the target-language model and\nthe scores are normalised to provide frac-\ntional counts for training source-language\nmaximum-entropy lexical-selection mod-\nels. We show that these models can per-\nform equally well, or better, than using the\ntarget-language model directly for lexical\nselection, at a substantially reduced compu-\ntational cost.\n1 Introduction\nCorpus-based machine translation (MT) has been\nthe primary research direction in thefield of MT\nin recent years. However, rule-based MT (RBMT)\nsystems are still being developed, and there are\nmany successful commercial and non-commercial\nsystems. One reason for the continued development\nof RBMT systems is that in order to be successful,\nc�2015 The authors. This article is licensed under a Creative\nCommons 3.0 licence, no derivative works, attribution, CC-\nBY-ND.corpus-based MT requires parallel corpora in the\norder of tens of millions of words. Although for\nsome language pairs these exist, they only exist for\na fraction of the world’s languages.\nAn RBMT system typically consists of an analy-\nsis component,1a transfer component and a genera-\ntion component. As part of the transfer component\nit is necessary to make choices regarding words in\nthe source language (SL) which may have more\nthan one translation in the target language (TL).\nLexical selectionis the task of choosing, for a\ngiven SL word, the most adequate translation in the\nTL among a known set of alternatives. The task is\nrelated to the task of word-sense disambiguation\n(Ide and V ´eronis, 1998). However, it is different to\nword-sense disambiguation in that lexical selection\nis a bilingual problem, not a monolingual problem:\nits aim is tofind the most adequate translation, not\nthe most adequate sense. 
Thus, it is not necessary\nto choose among a series offine-grained senses if\nall these senses result in the samefinal translation;\nhowever, it may sometimes be necessary to choose a\ndifferent translation for the same sense, for example\nin a collocation.\n1.1 Prior work\nDagan and Itai (1994) used the termword sense dis-\nambiguationto refer to what is actually lexical se-\nlection in MT; they used a parser to identify syntac-\ntic relations such as subject–object or subject–verb.\nAfter generating all the possible translations for a\ngiven input sentence using an ambiguous bilingual\ndictionary, they extract the syntactic tuples from the\nTL and count the frequency in a previously-trained\nTL model of tuples. They use maximum-likelihood\nestimation to calculate the probability that a given\n1Such as a morphological or syntactic analyser.145\nTL tuple is the translation of a given SL tuple, with\nan automatically determined confidence threshold.\nLater, Berger et al. (1996) illustrated the use of\nmaximum-entropy classifiers on the specific prob-\nlem of lexical selection in IBM-style word-based\nstatistical MT. Other authors (Melero et al., 2007)\nhave used TL models to rank the translations re-\nsulting from all possible combinations of lexical\nselections. Nowadays, in state-of-the-art phrase-\nbased statistical MT (Koehn, 2010), lexical se-\nlection is taken care of by a combination of the\ntranslation model and the language model. The\ntranslation model provides probabilities of transla-\ntion between words or word sequences (often re-\nferred to asphrases) in the source and target lan-\nguage. The TL model provides probabilities of\nword sequences in the TL. Mare ˇcek et al. (2010)\ntrained a maximum-entropy lexical selector for\ntheir dependency-grammar-based transfer system\nTectoMT using a bilingual corpus. More recently,\nTyers et al. (2012) presented a method of lexical\nselection for RBMT based on rules which select or\nremove translations infixed-length contexts, along\nwith a training method for learning the rules from a\nword-aligned parallel corpus.2\n2 Method\nLexical selection in this paper considers for each\nword a simple SL context made up of neighbouring\nlemma+part-of-speech combinations. Contexts con-\nsidered include up to two words to the left and up to\ntwo words to the right of the word to be translated.\nLet the probability of a word tbeing the trans-\nlation of a word sin a SL context cbeps(t|c). In\nprinciple, this value could be estimated directly\nfrom the available corpora for every combination\nof(s, t, c) . This would however present two ques-\ntions: (1) how should the relevant contexts be cho-\nsen? and (2) what should be done when (s, t, c)\nis not found in the corpus? A maximum-entropy\nmodel answers both of these questions. It allows\nthe contexts that we consider to be linguistically\ninteresting to be defineda prioriand then integrate\nthese seamlessly into a probabilistic model (Man-\nning and Sch ¨utze, 1999). In answer to the second\nquestion, a maximum-entropy model maximises the\nentropy subject to match the expected counts of the\ndesigned features with those found in the training\n2The work by Ravi and Knight (2011) and Nuhn and Ney\n(2014), who decipher word-ciphered text using monolingual\ncorpora only may be seen as a generalised version of the prob-\nlem of lexical selection without parallel corpora.data. 
That is, if there is no information in the training data, then it assumes that all outcomes — that is, all possible translations — are equally likely. As previously mentioned, the principle of maximum entropy has been applied to the problem of lexical selection before; in particular, Berger et al. (1996) cast the problem of lexical selection in statistical MT as a classification problem. They learn a separate maximum-entropy classifier for each SL word form, using SL context to distinguish between possible translations. These classifiers are then incorporated into the translation model of their word-based statistical MT system. In their approach, a classifier consists of a set of binary feature functions and corresponding weights for each feature. In both Berger et al. (1996) and our method, features are defined in the form h^s_k(t, c),3 where t is a translation and c is a SL context. One difference is that Berger et al. (1996) take s, t and c to be based on word forms, whereas in our method they are based on lemma forms. An example would be the following feature, where the Spanish word pez ('fish' as a living animal) is seen as the translation of arrain ('fish') in the context arrain handi 'big fish', and would therefore be defined as:

\[
h^{arrain}_{+handi}(t, c) =
\begin{cases}
1 & \text{if } t = \textit{pez} \text{ and } \textit{handi} \text{ follows } \textit{arrain} \\
0 & \text{otherwise}
\end{cases}
\qquad (1)
\]

This feature considers a context of zero words to the left of the problem word and one word (+handi) to the right of it.

As a result of training, each of the n_F features h^s_k(t, c) in the classifier is assigned a weight λ^s_k. Combining the weights of active features as in equation (2) yields the probability of a translation t for word s in context c:

\[
p_s(t \mid c) = \frac{1}{Z_s(c)} \exp \sum_{k=1}^{n_F} \lambda^s_k\, h^s_k(t, c)
\qquad (2)
\]

In this equation, Z_s(c) is a normalising constant. Thus, the most probable translation t* can be found using

\[
t^{\star} = \arg\max_{t \in T_s(s)} p_s(t \mid c) = \arg\max_{t \in T_s(s)} \sum_{k=1}^{n_F} \lambda^s_k\, h^s_k(t, c),
\qquad (3)
\]

where T_s(s) is the set of possible translations for SL word s.

3 We follow the notation of Berger et al. (1996).

Figure 1: A schema of the lexical selection process: S → pre-lexsel → ({g_i}_{i=1}^{|G|}, S) → lexsel → (g*, S) → post-lexsel → τ(g*, S). The source sentence S has |G| lexical-selection paths g_i; lexsel selects one of them, g*, which is used to generate the translation τ(g*, S).

The approaches by Berger et al. (1996) and by Mareček et al. (2010) cited above both take advantage of a parallel corpus to collect counts of contexts and translations in order to train maximum-entropy models. However, parallel corpora are not available for the majority of the world's written languages. In this section we describe an unsupervised method to learn the models using only monolingual corpora and the components of the RBMT system in which they are used.

The input to our method consists of a collection of samples (S, G), where S = (s_1, s_2, ..., s_|S|) is a sequence of SL words, and G = {g_1, g_2, ..., g_|G|} is a set of possible lexical-selection paths. A lexical-selection path g = (t_1, t_2, ..., t_|S|) is a sequence of lexical-selection choices for those SL words, where t_i is an element of T_s(s_i), the set of possible translations of s_i.4 This is produced in the first stages of RBMT, just after morphological analysis, part-of-speech tagging, and bilingual dictionary lookup, and before any structural transfer takes place (we will call this pre-lexsel). In our model, it is after these first stages that lexical selection (lexsel) occurs.
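As an illustration of equations (2) and (3), the sketch below scores the candidate translations of a single SL word with a hand-written set of binary context features and weights; the word, the features and the weights are invented for the example and are not taken from the models trained later in the paper.

```python
import math

# Hypothetical weights lambda^s_k for the Basque word "arrain"; each feature
# pairs a candidate translation with a context test, as in equation (1).
weights = {
    ("pez",     "+1:handi"): 0.9,   # 'handi' immediately to the right
    ("pescado", "-1:jan"):   1.4,   # 'jan' ("to eat") immediately to the left
    ("pez",     "-1:jan"):  -0.7,
}

def active_features(translation, context):
    """Return the features h^s_k(t, c) that fire (value 1) for this pair."""
    return [(translation, pos_lemma) for pos_lemma in context
            if (translation, pos_lemma) in weights]

def score(translation, context):
    # Unnormalised log-linear score: sum of the weights of active features.
    return sum(weights[f] for f in active_features(translation, context))

def p_s(translations, context):
    """Equation (2): probability normalised over the candidate translations."""
    exp_scores = {t: math.exp(score(t, context)) for t in translations}
    z = sum(exp_scores.values())  # Z_s(c)
    return {t: v / z for t, v in exp_scores.items()}

candidates = ["pez", "pescado"]
context = ["-1:jan", "+1:handi"]      # lemmas at positions -1 and +1
probs = p_s(candidates, context)
best = max(probs, key=probs.get)      # equation (3): argmax over T_s(s)
print(probs, "->", best)
```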
After lexical selection, structural transfer and generation take place; a function τ(g_i, S) represents the result of these last stages, which we will call post-lexsel, and returns a finished translation of a specific lexical-selection path g_i of sentence S. Figure 1 shows this process schematically.

As our method is unsupervised, and therefore the occurrences of specific lexical-selection events (s, t, c) cannot be counted, a TL model P_TL(·) is used to compute a value for the fractional count of disambiguation path g_i, p(g_i|S), after suitable normalisation:

\[
p(g_i \mid S) = \frac{P_{TL}(\tau(g_i, S))}{\sum_{g_j \in G} P_{TL}(\tau(g_j, S))}
\qquad (4)
\]

The maximum-entropy model is trained instead using the fractional count p(g_i|S) for the events (s, t, c) found in g_i, that is, when in g_i the translation for s in context c is t — in other words, as if event (s, t, c) had been seen a fractional number p(g_i|S) of times. We prune events (s, t, c) occurring less than a certain number of times in the corpus, using a development corpus to guide pruning (see section 4). The method used here for lexical selection is analogous to the method used by Sánchez-Martínez et al. (2008) to train a hidden-Markov-model-based part-of-speech tagger in a RBMT system.

4 We deal only with single-word translations in this paper.

3 Experimental setting
This section describes the training and evaluation settings used in the remainder of this paper. The primary motivation behind the evaluation is that it should be automatic, meaningful, and performed over a test set which is large enough to be representative. It should evaluate both performance on the specific subtask of lexical selection and performance on the whole translation task. Evaluating lexical-selection performance is an intrinsic module-based evaluation. It measures how well the lexical-selection module disambiguates the lexical-transfer output as compared to a gold-standard corpus. The lexical-transfer output is the result of looking up the translations of the SL lexical forms — lemmas and tags — in the bilingual dictionary.

The whole-translation-task evaluation is an extrinsic evaluation, which tests how the system improves as regards final translation quality in a real system.

The lexical-selection module should be as language-independent as possible. To that end, the language pairs tested show a wide variety of linguistic phenomena. It is also important that the methodology be as applicable to lesser-resourced and marginalised languages as to major languages.

This section begins with a short description of the Apertium platform (Forcada et al., 2011). This is followed by an overview of each of the language pairs chosen for the evaluation. The corpora to be used for training and evaluation are subsequently described, along with the method used for annotating them. This is followed by a description of the performance measures to be used in the evaluation, and the reference results using these metrics for each of the language pairs.

3.1 Apertium
Apertium is a free/open-source RBMT platform; it comprises an engine, a toolbox and data to build RBMT systems. Translation is implemented as a pipeline consisting of the following modules: morphological analysis, morphological disambiguation, lexical transfer, lexical selection, structural transfer and morphological generation.

3.2 Language pairs
Evaluation will be performed using four Apertium (Forcada et al., 2011) language pairs.
These pairs\nhave been selected as they include languages with\ndifferent morphological complexity, and different\namounts of resources available — although for all\npairs there is a parallel corpus available for evalua-\ntion (see Section 3.3).5\nBreton–French (Tyers, 2010): Bilingual dictionar-\nies were not built with polysemy in mind from\nthe outset, but some entries were added later\nto start work on lexical selection.6\nMacedonian–English: The Macedonian–English\npair in Apertium was created specifically for\nthe purposes of running lexical-selection ex-\nperiments. The lexical resources for the pair\nwere tuned to the SETimes parallel corpus (Ty-\ners and Alperen, 2010). The most probable\nentry from automatic word alignment of this\ncorpus using GIZA ++ (Och and Ney, 2003)\nwas checked to ensure that it was an adequate\ntranslation, and if so marked as the default.7\nAs a result of attempting to include all possi-\nble translations, the average number of trans-\nlations per word is much higher than in other\npairs.8\nBasque–Spanish (Ginest ´ı-Rosell et al., 2009): al-\nternative translations were included in the\nbilingual dictionary.9\n5The Apertium revision (version) used is given in footnotes.\n6Revision 41375; https://svn.code.sf.net/p/\napertium/svn/trunk/apertium-br-fr\n7Bilingual dictionaries in Apertium (Forcada et al., 2011) may\ncontain several translations for a given word. Dictionary writ-\ners may mark aslinguistic defaultthe most general or most\nfrequent translation among the set of possible translations.\n8Revision 41476; https://svn.code.sf.net/p/\napertium/svn/trunk/apertium-mk-en\n9Revision 44846; https://svn.code.sf.net/p/\napertium/svn/trunk/apertium-eu-esEnglish–Spanish: The English–Spanish pair was\ndeveloped from a combination of the English–\nCatalan and Spanish–Catalan pairs, and con-\ntains a number of entries in the bilingual dic-\ntionary with more than one translation.10\n3.3 Performance measures\nThis section describes the measures that will be\nused to evaluate the performance of the lexical se-\nlection method proposed here: a (intrinsic)lexical\nselection performancemeasure and an (extrinsic)\nmachine translation performancemeasure.\n3.3.1 Lexical-selection performance\nThis is an intrinsic module-based evaluation of\nthe performance of the lexical-selection module.\nIt measures how well the lexical-selection mod-\nule disambiguates the output of the lexical-transfer\nmodule as compared to a gold-standard corpus. For\nthis task, we define a metric, the lexical-selection\nerror rate ( LER), that focuses on the problem of\nlexical selection by restricting the evaluation to this\nfeature; other features of the MT system, such as\nthe transfer rules and morphological generation, are\nnot taken into account.\nThe lexical-selection error rate is the fraction\nof times the given system chooses a translation\nfor a word which is not the one found in an anno-\ntated reference. The process uses a SL sentence,\nS= (s 1, s2, . . . , s |S|)and three functions. The\nfirst function, Ts(si), returns all possible transla-\ntions of siaccording to the bilingual dictionary.\nThe second function, Tt(si), returns the transla-\ntions of siselected by the lexical-selection mod-\nule: Tt(si)⊆T s(si); and usually |Tt(si)|= 1 .\nIf the lexical-selection module returns more than\none translation, thefirst translation is selected. 
The function T_r(s_i) returns the set of reference translations which are acceptable for s_i in sentence S.11 For a single sentence, we define the lexical-selection error rate (LER) of that sentence as

\[
\mathrm{LER} = \frac{\sum_{i=1}^{|S|} \mathrm{amb}(s_i)\, \mathrm{diff}(T_r(s_i), T_t(s_i))}{\sum_{i=1}^{|S|} \mathrm{amb}(s_i)},
\qquad (5)
\]

where

\[
\mathrm{amb}(s_i) =
\begin{cases}
1 & \text{if } |T_s(s_i)| > 1 \\
0 & \text{otherwise}
\end{cases}
\qquad (6)
\]

tests if a word is ambiguous, and the function

\[
\mathrm{diff}(T_r(s_i), T_t(s_i)) =
\begin{cases}
1 & \text{if } T_r(s_i) \cap T_t(s_i) = \emptyset \\
0 & \text{otherwise}
\end{cases}
\qquad (7)
\]

states that there is a difference if the intersection between the set of reference translations T_r(s_i) and the set of translations from the lexical-selection module T_t(s_i) is empty. Recall that, although T_t(s_i) returns a set, this set will be a singleton: when the lexical-selection module returns more than one translation, Apertium selects the default one if marked, or the first one if not.12

10 Revision 41387; https://svn.code.sf.net/p/apertium/svn/trunk/apertium-en-es
11 Depending on how the reference is built, the set returned by T_r(s_i) may not include all possible acceptable translations.
12 In practice this does not happen, as each ambiguous word has a default translation.

The table in Figure 2 gives an overview of the inputs.

                 L'estiu és una estació llarga
S                el      estiu      ser    un    estació             llarg
T_s(s_i)         {the}   {summer}   {be}   {a}   {station, season}   {long, lengthy}
T_r(s_i)         {the}   {summer}   {be}   {a}   {season}            {long}
T_t(s_i)         {the}   {summer}   {be}   {a}   {station}           {long}
amb(s_i)         0       0          0      0     1                   1
diff(T_r, T_t)   0       0          0      0     1                   0
Figure 2: An example input sentence in Catalan and the three sets of English translations used for calculating the lexical-selection error rate. The source sentence S = (s_1, s_2, ..., s_|S|) has two ambiguous words, estació and llarg (amb(s_i) = 1, eq. (6)). There is one difference (diff(T_r(s_i), T_t(s_i)) = 1, eq. (7)) between the reference set T_r(s_i) and the test set T_t(s_i) of translations; thus, the error rate for this sentence is 50%.

In the description it is assumed that the reference translation has been annotated by hand. However, hand annotation is a time-consuming process, and was not possible here. A description of how the reference was built is given in Section 3.4.

3.3.2 Machine translation performance
This is an extrinsic evaluation, which ideally would test how much the system improves as regards an approximate measurement of final translation quality in a real system. For this task, we use the widely-used BLEU metric (Papineni et al., 2002). This is not ideal for evaluating the task of a lexical-selection module, as the performance of the module will depend greatly on (a) the coverage of the bilingual dictionaries of the RBMT system in question, and (b) the number of reference translations. It is also worth noting that successful lexical selections may not lead to successful translations due to inadequate transfer of morphological features. The BLEU metric is included only as it is commonly used to evaluate MT systems.

3.3.3 Confidence intervals
Confidence intervals for both metrics will be calculated through bootstrap resampling (Efron and Tibshirani, 1994), as described by Koehn (2004). In all cases, bootstrap resampling will be carried out for 1,000 iterations.
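The following sketch (not the evaluation scripts used in the paper) computes LER as in equations (5)–(7) over a toy disambiguated corpus whose first sentence reproduces the Figure 2 example, and adds a simple sentence-level bootstrap confidence interval with 1,000 resamples; the data structures are assumptions made for the illustration.

```python
import random

# Each sentence is a list of words; each word is a triple
# (T_s: all dictionary translations, T_r: reference translations,
#  T_t: translations chosen by the lexical-selection module).
corpus = [
    [({"the"}, {"the"}, {"the"}),
     ({"summer"}, {"summer"}, {"summer"}),
     ({"be"}, {"be"}, {"be"}),
     ({"a"}, {"a"}, {"a"}),
     ({"station", "season"}, {"season"}, {"station"}),
     ({"long", "lengthy"}, {"long"}, {"long"})],
    [({"house", "home"}, {"house"}, {"house"})],
]

def ler(sentences):
    """Lexical-selection error rate, equations (5)-(7), pooled over sentences."""
    errors = ambiguous = 0
    for sentence in sentences:
        for t_s, t_r, t_t in sentence:
            if len(t_s) > 1:             # amb(s_i), eq. (6)
                ambiguous += 1
                if not (t_r & t_t):      # diff(T_r, T_t), eq. (7)
                    errors += 1
    return errors / ambiguous if ambiguous else 0.0

print(f"LER of the Figure 2 sentence = {ler(corpus[:1]):.2%}")  # 50.00%
print(f"Corpus LER = {ler(corpus):.2%}")

# Bootstrap confidence interval: resample sentences with replacement.
random.seed(0)
scores = sorted(ler(random.choices(corpus, k=len(corpus))) for _ in range(1000))
low, high = scores[int(0.025 * len(scores))], scores[int(0.975 * len(scores))]
print(f"95% CI for LER: [{low:.2%}, {high:.2%}]")
```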
Where thep= 0.05confi-\ndence intervals overlap, we will also perform paired\nbootstrap resampling (Koehn, 2004).\n3.4 Corpora\nFor creating the test corpora, providing a SL corpus\nfor training, and a TL corpus for scoring, we used\nfour parallel corpora:\n•Ofis ar Brezhoneg (OAB): This parallel cor-\npus of Breton and French has been col-\nlected specifically for lexical-selection experi-\nments from translations produced byOfis ar\nBrezhoneg‘The Office of the Breton language’.\nThe corpus has recently been made available\nonline through OPUS .13\n•South-East European Times (SETimes ):\nDescribed in Tyers and Alperen (2010), this\ncorpus is a multilingual corpus of the Balkan\nlanguages (and English) in the news domain.\nThe Macedonian and English part will be used.\n•Open Data Euskadi (OpenData ): This is a\nBasque–Spanish parallel corpus made from\nthe translation memories of theHerri Ardu-\nralaritzaren Euskal Erakundea‘Basque Insti-\ntute of Public Administration’.14\n•European Parliament Proceedings\n(EuroParl ): Described by Koehn (2005),\nthis is a multilingual corpus of the European\nUnion official languages. We are using the\nEnglish–Spanish data from version 7.15\n13http://opus.lingfil.uu.se\n14http://tinyurl.com/eu-es-tm\n15http://www.statmt.org/europarl/149\nThere are a number of approaches to creating\nevaluation corpora for lexical selection in the lit-\nerature. Vickrey et al. (2005) use a parallel cor-\npus to make annotated test and training sets for\nexperiments in lexical selection applied to a sim-\nplified translation problem in statistical MT. They\nuse word alignments from GIZA ++ (Och and Ney,\n2003) to annotate SL words with their translations\nfrom the reference translation in the parallel corpus.\nOne disadvantage of this method is that only one\ntranslation is annotated per SL word, meaning that\naccuracies may be lower because of missing trans-\nlations — this happens when the system chooses\na translation which is adequate, but is not found\nin the reference translation. A second disadvan-\ntage is that the word alignments may not be 100%\nreliable, which decreases the accuracy of the anno-\ntated corpus. An alternative method is described by\nZinovjeva (2000), who manually tags ambiguous\nwords in English sentences with their translation in\nSwedish.\nIdeally we would have had a hand-annotated eval-\nuation corpus, as described by Zinovjeva (2000),\nbut as this did not exist, we decided to automati-\ncally annotate a test set using a process similar to\nthat described by Vickrey et al. (2005).\nThe annotation process proceeds as follows: First\nwe word-align the corpus to extract a set of word\nalignments, which are correspondences between\nwords in sentences in the source side of the parallel\ncorpus and those in the target side. Any aligner may\nbe used, but in this paper we use GIZA ++ (Och and\nNey, 2003).16We then use these alignments along\nwith the bilingual dictionary of the MT system in\nquestion to extract only those sentences where: (a)\nthere is at least one ambiguous word; (b) that am-\nbiguous word is aligned to a single word in the\nTL; and (c) the word it is aligned to in the TL is\nfound in the bilingual dictionary of the MT system.\nSentences where there are no ambiguous words (ap-\nproximately 90%, see Table 1) are discarded. 
The\nsource side of the extracted sentence is then passed\nthrough the lexical transfer module, which returns\nall the possible translations, and for each ambigu-\nous word, the translation is selected which is found\naligned in the reference.\nAfter this process, we selected 1,000 sentence\npairs at random for testing ( test ), 1,000 for devel-\n16The exact configuration of GIZA ++ used is equivalent to\nrunning the M OSES toolkit (Koehn et al., 2007) in default\nconfiguration up to step three of training.Pair SL TL Amb. % amb.\nbr-fr 13,854 13,878 1,163 8.39\nmk-en 13,441 14,228 3,872 28.80\neu-es 7,967 11,476 1,360 17.07\nen-es 19,882 20,944 1,469 7.38\nTable 2: Statistics about the test corpora. The columns SL\nandTLgive the number of tokens in the source and target\nlanguages respectively. The columns amb. words and% am-\nbiggives the number of word with more than one translation\nand the percentage of SL words which have more than one\ntranslation respectively.\nopment ( dev)17and left the remainder for training.\nTable 1 gives statistics about the size of the input\ncorpora, and how many sentences were left after\nprocessing for testing, training and development.\nTable 2 gives information about the test corpora.\n3.5 Reference systems\nWe compare our method to the following reference\n(or baseline) systems:\n•Linguist-chosen defaults . A bilingual dictio-\nnary in an Apertium language pair contains\ncorrespondences between lexical forms. The\ndictionaries allow many lexical forms to trans-\nlate to one lexical form. But a single lexical\nform may not have more than one translation\nwithout further processing. If there are many\npossible translations of a lexical form, then\none must be marked as thedefaulttranslation.\n•Oracle . The results for the oracle system are\nthose achieved by passing the automatically\nannotated reference translation through the\nrest of the modules of the MT system. This\nis included to show the upper bound for the\nperformance of the lexical-selection module.\n•Target language model (TLM). One method\nof lexical selection is to use the existing MT\nsystem to generate all the possible translations\nfor an input sentence, and then score these\ntranslationson-lineon a model of the TL. The\nhighest scoring sentence is then output. This\nis the method used by Melero et al. (2007).\n4 Results\nAs we are working with binary features, we use\nthe implementation of generalised iterative scaling\n17The development corpus was used for checking the value for\nfrequency pruning of features.150\nPair Lines Extract. train dev test No. amb Av. amb\nbr-fr 57,305 4,668 2,668 1,000 1,000 603 3.06\nmk-en 190,493 19,747 17,747 1,000 1,000 13,134 3.06\neu-es 765,115 87,907 85,907 1,000 1,000 1,806 3.11\nen-es 1,467,708 312,162 310,162 1,000 1,000 2,082 2.28\nTable 1: Statistics about the source corpora. The column no. amb gives the number of unique tokens with more than one\npossible translation. The column av. amb gives the average number of translations per ambiguous word. This is calculated by\nlooking up each word in the corpus in the bilingual dictionary of the MT system and dividing the total number of translation by\nthe number of words. Bothav. ambandno. ambare calculated over the whole corpus.\nPair Pruned # features\nbr-fr <5 5,277\nmk-en <7 205,494\neu-es <7 196,024\nen-es <7 195,605\nTable 3:Features in each rule set and pruning frequency.\navailable in the YASMET18to calculate the feature\nweights. 
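To make the training step concrete, the sketch below fits a per-word maximum-entropy classifier from fractionally counted events (s, t, c), using scikit-learn's logistic regression with sample weights as a stand-in for the generalised iterative scaling implementation in YASMET; the contexts, translations and fractional counts are invented for the example, and the sample weights play the role of the fractional counts p(g_i|S) of equation (4).

```python
from sklearn.feature_extraction import DictVectorizer
from sklearn.linear_model import LogisticRegression

# Fractionally counted events (s, t, c) for one SL word, e.g. Basque "arrain".
# Each event: (context features, chosen translation, fractional count p(g_i|S)).
events = [
    ({"-1": "jan",   "+1": "gordin"}, "pescado", 0.8),
    ({"-1": "jan",   "+1": "gordin"}, "pez",     0.2),
    ({"-1": "ikusi", "+1": "handi"},  "pez",     0.7),
    ({"-1": "ikusi", "+1": "handi"},  "pescado", 0.3),
]

contexts = [c for c, _, _ in events]
labels = [t for _, t, _ in events]
fractional_counts = [w for _, _, w in events]

# Binary context features (lemma at each relative position), as in eq. (1).
vectorizer = DictVectorizer()
X = vectorizer.fit_transform(contexts)

# A maximum-entropy (multinomial logistic regression) model; the fractional
# counts are passed as sample weights instead of integer event counts.
model = LogisticRegression(max_iter=1000)
model.fit(X, labels, sample_weight=fractional_counts)

# Disambiguate a new context: 'arrain' preceded by 'jan' ("to eat").
new_context = vectorizer.transform([{"-1": "jan", "+1": "freskoa"}])
print(dict(zip(model.classes_, model.predict_proba(new_context)[0])))
```

After training, each per-word classifier reduces to exactly the weighted feature set of equation (2), so it can be applied at translation time without consulting the TL model.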
After learning the feature sets and weights,\nwe compute the evaluation measures described in\nSection 3.3. There is an option to remove events\n(s, t, c) which occur less than a certain number of\ntimes in the training corpus. This is referred to as\nthe feature pruning frequency threshold — features\noccurring less than the threshold are discarded. The\nvalue was set experimentally. Values of between\ntwo and seven were tested, and the ones which\nprovided the best improvement on the development\ncorpus were selected; they happen to come close\nto the rule-of-thumb value offive that Manning\nand Sch ¨utze (1999, p. 596) found to be effective.\nTable 3 shows the number of features that have\neventually been used for each language pair.\nEvaluation results are presented in table 4, which\ncompares the results of the new approach with re-\nspect to thedefault behaviour(the linguist-chosen\ndefaults), with respect to theoracle(which repre-\nsents the upper bound to performance), and with re-\nspect to the results obtained by using the TL model\nonline, for each of the language pairs in Apertium\nwith respect to our two evaluation metrics. Note\nthat the high error rate for the Breton–French pair\nmay be as a result of having the linguistic defaults\ntuned to a different domain than that of the corpus.\nSignificant improvements with respect to the re-\n18http://www-i6.informatik.rwth-aachen.\nde/web/Software/YASMET.html ; the compilable ver-\nsion we used is available as part of the Apertium lex-tools\npackage, http://downloads.sourceforge.net/\nproject/apertium/apertium-lex-tools/\napertium-lex-tools-0.1.0.tar.gz.sults obtained using the TL model online are appar-\nent with the Breton–French —the pair with the least\ndata— and the English–Spanish language pairs. In\nthe remaining cases, the maximum-entropy method\ncomes close to the TL model performance in terms\nof similar or better BLEU and LER scores, at a\nmuch smaller computational cost.\nImprovements with respect to the TL model per-\nformance are likely due to the effective use that\nthe maximum-entropy model makes of information\nabout the relevant SL contexts and their translations,\nthrough the weighting of features representing those\nSL contexts across the whole corpus.\n5 Conclusions\nThis paper has presented a method to perform lexi-\ncal selection in RBMT, and one that can be trained\nin an unsupervised way, that is, without the need for\nan annotated corpus, (in this case a word-aligned\nbilingual corpus): one just needs a SL corpus, a\nstatistical TL model, and the RBMT system itself.\nThe input to the method is simply the part-of-speech\ntagged source text in which each word is annotated\nwith all the translations provided by the bilingual\ndictionary in the system: this makes it applicable\nto almost any RBMT system. The system uses\na maximum-entropy formalism for lexical selec-\ntion, as Berger et al. (1996) and Mare ˇcek et al.\n(2010), but instead of counting actual lexical se-\nlection events in an annotated corpus, it counts frac-\ntional occurrences of these events as estimated by\na TL model. The method is evaluated both intrin-\nsically (just looking at the actual lexical selection\nevents) and extrinsically (measuring the quality of\nMT). 
Results on four language pairs using the Aper-\ntium (Forcada et al., 2011) MT system show that\nthe method obtains similar or better results than\nthose expensively obtained by scoring an exponen-\ntial number of lexical selections for each sentence\nusing the TL model online.151\nPair MetricSystem\nLing TLM MaxEnt Oracle\nbr-frLER(%) [54.8, 60.7] [44.2, 50.5] [40.8, 46.9] [0.0, 0.0]\nBLEU (%) [14.5, 16.4] [15.4, 17.3] [14.8, 16.6] [16.7, 18.6]\nmk-enLER(%) [28.8, 32.6] [26.8, 30.5] [25.2, 28.8] [0.0, 0.0]\nBLEU (%) [28.6, 31.0] [30.7, 32.3] [29.1, 31.5] [30.9, 33.3]\neu-esLER(%) [43.6, 48.8] [38.8, 44.2] [40.9, 46.2] [0.0, 0.0]\nBLEU (%) [10.1, 12.0] [10.6, 12.6] [10.3, 12.2] [11.5, 13.5]\nen-esLER(%) [20.5, 24.9] [15.1, 18.9] [10.4, 13.8] [0.0, 0.0]\nBLEU (%) [21.5, 23.4] [21.9, 23.8] [22.2, 24.1] [22.8, 24.7]\nTable 4: LER and BLEU scores with 95% confidence intervals for the reference systems on the test corpora. The max-ent\nsystem has been trained using fractional counts. The results in bold face show statistically significant improvements for the\nmaximum-entropy model compared to the TL model according to pair-bootstrap resampling.\nAcknowledgements: We acknowledge support\nfrom the Spanish Ministry of Industry and Compet-\nitiveness through project Ayutra (TIC2012-32615)\nand from the European Commission through\nproject Abu-Matran (FP7-PEOPLE-2012-IAPP, ref.\n324414) and thank all three anonymous referees for\nuseful comments on the paper.\nReferences\nBerger, A., Pietra, S. D., and Pietra, V . D. (1996). A maximum\nentropy approach to natural language processing.Compu-\ntational Linguistics, 22(1):39–71.\nDagan, I. and Itai, A. (1994). Word sense disambiguation using\na second language monolingual corpus.Computational\nLinguistics, 20(4):563–596.\nEfron, B. and Tibshirani, R. J. (1994).An Introduction to the\nBootstrap. CRC Press.\nForcada, M. L., Ginest ´ı-Rosell, M., Nordfalk, J., ORegan,\nJ., Ortiz-Rojas, S., P ´erez-Ortiz, J. A., S ´anchez-Mart ´ınez,\nF., Ram ´ırez-S ´anchez, G., and Tyers, F. M. (2011). Aper-\ntium: a free/open-source platform for rule-based machine\ntranslation.Machine Translation, 25(2):127–144.\nGinest ´ı-Rosell, M., Ram ´ırez-S ´anchez, G., Ortiz-Rojas, S., Ty-\ners, F. M., and Forcada, M. L. (2009). Development of a\nfree Basque to Spanish machine translation system.Proce-\nsamiento de Lenguaje Natural, (43):185–197.\nIde, N. and V ´eronis, J. (1998). Word sense disambiguation:\nThe state of the art.Computational Linguistics, 24(1):1–41.\nKoehn, P. (2004). Statistical significance tests for machine\ntranslation evaluation. InProc. of the Conference on\nEMNLP, pages 388–395.\nKoehn, P. (2005). Europarl: A parallel corpus for statistical\nmachine translation. InProc. of the 10th MT Summit, pages\n79–86.\nKoehn, P. (2010).Statistical machine translation. Cambridge\nUniversity Press.\nKoehn, P., Hoang, H., Birch, A., Callison-Burch, C., Federico,\nM., Bertoldi, N., Cowan, B., Shen, W., Moran, C., Zens,\nR., Dyer, C., Bojar, O., Constantin, A., and Herbst, E.\n(2007). Moses: Open source toolkit for statistical machine\ntranslation. InProc. of the Annual Meeting of the ACL,\ndemonstration session.\nManning, C. D. and Sch ¨utze, H. (1999).Foundations of Statis-\ntical Natural Language Processing. MIT Press.Mare ˇcek, D., Popel, M., and ˇZabokrtsk ´y, Z. (2010). Maxi-\nmum entropy translation model in dependency-based MT\nframework. InWMT ’10 Proc. 
of the Joint 5th Workshop\non SMT and MetricsMATR, pages 201–206.\nMelero, M., Oliver, A., Badia, T., and Su ˜nol, T. (2007). Deal-\ning with bilingual divergences in MT using target language\nn-gram models. InProc. of the METIS-II Workshop, pages\n19–26.\nNuhn, M. and Ney, H. (2014). Em decipherment for large\nvocabularies. InProceedings of the 52nd Annual Meeting\nof the Association for Computational Linguistics (Short\nPapers), pages 759–764.\nOch, F. J. and Ney, H. (2003). A systematic comparison\nof various statistical alignment models.Computational\nLinguistics, 29(1):19–51.\nPapineni, K., Roukos, S., Ward, T., and Zhu, W. J. (2002).\n”BLEU: a method for automatic evaluation of machine\ntranslation. InACL-2002: 40th Annual meeting of the ACL,\npages 311–318.\nRavi, S. and Knight, K. (2011). Deciphering foreign language.\nInProceedings of the 49th Annual Meeting of the Asso-\nciation for Computational Linguistics: Human Language\nTechnologies-Volume 1, pages 12–21. Association for Com-\nputational Linguistics.\nS´anchez-Mart ´ınez, F., P ´erez-Ortiz, J. A., and Forcada, M. L.\n(2008). Using target-language information to train part-of-\nspeech taggers for machine translation.Machine Transla-\ntion, 22(1-2):29–66.\nTyers, F. M. (2010). Rule-based Breton to French machine\ntranslation. InProc. of the 14th Annual Conference of the\nEAMT, pages 174–181.\nTyers, F. M. and Alperen, M. S. (2010). SETimes: A parallel\ncorpus of Balkan languages. InWorkshop on Exploitation\nof multilingual resources and tools for Central and (South)\nEastern European Languages at the Language Resources\nand Evaluation Conference, pages 1–5.\nTyers, F. M., S ´anchez-Mart ´ınez, F., and Forcada, M. L. (2012).\nFlexiblefinite-state lexical selection for rule-based machine\ntranslation. InProc. of the 16th Annual Conference of the\nEAMT, pages 213–220, Trento, Italy.\nVickrey, D., Biewald, L., Teyssier, M., and Koller, D. (2005).\nWord-sense disambiguation for machine translation. In\nProc. of HLT Conference and Conference on EMNLP, pages\n771–778.\nZinovjeva, N. (2000). Learning sense disambiguation rules for\nmachine translation. Master’s thesis, Uppsala University.152",
"main_paper_content": null
}
|
{
"decision": "Unknown",
"reviews": []
}
| 0 | 0 |
[] |
[] |
{
"id": "vzHb12ksE0x",
"year": null,
"venue": "EAMT 2009",
"pdf_link": "https://aclanthology.org/2009.eamt-1.29.pdf",
"forum_link": "https://openreview.net/forum?id=vzHb12ksE0x",
"arxiv_id": null,
"doi": null
}
|
{
"title": "Rule-Based Augmentation of Training Data in Breton-French Statistical Machine Translation",
"authors": [
"Francis M. Tyers"
],
"abstract": "Francis M. Tyers. Proceedings of the 13th Annual conference of the European Association for Machine Translation. 2009.",
"keywords": [],
"raw_extracted_content": "Proceedings of the 13th Annual Conference of the EAMT , pages 213–217,\nBarcelona, May 2009\nRule-Based Augmentation of Training Data in Breton–French Statistical\nMachine Translation\nFrancis M. Tyers\nDept. de Llenguatges i Sist. Inform `atics,\nUniversitat d’Alacant\nE-03071 Alacant (Spain)Prompsit Language Engineering\nAv. St. Francesc d’Ass ´ıs, 74, 1r-L\nE-03195 l’Altet (Spain)\[email protected]\nAbstract\nThis article describes an initial statistical\nmachine translation system between Bre-\nton, a Celtic language spoken in France,\nand French. It also describes a method\nfor leveraging existing resources from an\nincomplete rule-based machine translation\nsystem to improve the coverage and trans-\nlation quality of the statistical system by\ngenerating expanded bilingual vocabulary\nlists. Results are presented which show\nthat the use of this method improves the re-\nsults of the system with respect to both the\nbaseline, and the baseline with a lemma-\nto-lemma bilingual lexicon.\n1 Introduction\nBreton is a Celtic language of the Brythonic\nbranch largely spoken in Brittany in the north-\nwest of France. Historically it was spoken\nonly in the northern part of Brittany, Breizh-Izel\n(Lower Brittany). This contrasts with Breizh-Uhel\n(Higher Brittany) which is traditionally Romance-\nspeaking.\nAlthough some sources put the number of native\nspeakers at between 500,000 and 600,000 (Gor-\ndon, 2005), a more up-to-date estimate can be\nfound from the organisation Ya d’ar brezhoneg\nwhich gives a number of 201,083 as of the 11th\nNovember 2008, and states that the number is de-\ncreasing at a rate of at least one per hour.1Breton\nis classed as a language in serious danger of ex-\ntinction by the UNESCO Red Book on Endangered\nLanguages (Salminen, 1999), a situation exacer-\nc/circlecopyrt2009 European Association for Machine Translation.\n1http://www.yadarbrezhoneg.com/\n?article245 ; Accessed: 11th November, 2008bated by the laissez-faire policies of the French\nstate.\nLike other Celtic languages, Breton exhibits\nthe phenomenon of initial consonant mutation.\nThis occurs when the initial consonant of a word\nchanges based on morpho-syntactic context. For\nexample in the word tad“father”, the initial con-\nsonant mutates to a ‘z’ ( aspirant mutation) when\nthe word follows the possessive ma“my”, so tadis\n“father”, while “my father” is ma zad .\nAs for many less-resourced language pairs,\nwhile there is little aligned bilingual text, bilin-\ngual lexicons are more readily available. One so-\nlution would be to use these bilingual lexicons\nwithin a rule-based system that makes use of the\nfeatures found in the bilingual lexicon: part-of-\nspeech, gender, number etc. to try to compensate\nfor the lack of data with some level of generalisa-\ntion. Even if little parallel data is available, it is\nstill worthwhile to compare any attempt at a more\nlinguistically motivated system with a greater gen-\neralising power with a straight-forward, state-of-\nthe-art non-linguistic approach, such as phrase-\nbased statistical machine translation (SMT).\n2 Resources\n2.1 Parallel corpora\nFor any language pair, parallel corpora are the\nscarcest of all resources. In the case of a lan-\nguage with a small population of speakers and no\nofficial recognition, the hurdle is even greater. In\ncontrast with Welsh, there are no bilingual parlia-\nmentary proceedings that may be used. 
As an of-\nficial body for the defence of the Breton language,\ntheOfis ar Brezhoneg is a big producer of Bre-\nton translations and we were given the opportu-\nnity to access their translation memories, which\n213\nCorpus Number of aligned segments\ntraining 27,987\ntuning 1,000\ndevtest 1,000\ntest 1,000\nTable 1: Split of parallel corpus\nmostly contain short segments (with an average\nlength of approx. 9 words per segment) largely in\nthe domains of tourism and computer localisation.\nThis results in approx. 285,000 Breton words and\n282,073 French words distributed in 31,000 lines.\nAfter basic space- and punctuation-based tokeni-\nsation, the total number of distinct tokens for Bre-\nton was 36,435 and for French was 41,932. These\nwere split into training, tuning and two test sets as\ndescribed in table 1.2\n3 A rule-based system\nA rule-based MT system for Breton–French is\ncurrently being developed inside the Apertium\nproject.3Apertium (Armentano-Oller et al., 2006)\nis an open-source platform for creating rule-based\nmachine translation systems. It was initially de-\nsigned for closely-related languages, but has also\nbeen applied to work with more distant language\npairs, such as Welsh–English (Tyers and Donnelly,\n2009) and Basque–Spanish. The translation engine\nin the platform follows a largely shallow-transfer\napproach. Finite-state transducers are used for lex-\nical processing, first-order Hidden Markov Mod-\nels (HMMs) and optionally, Constraint Grammars\n(CGs) based on VISLCG34are used for part-of-\nspeech disambiguation, and multi-stage finite-state\nbased chunking is used for structural transfer.\nThe current status of the Breton–French sys-\ntem is as follows: the system has a morphologi-\ncal analyser for Breton with approximately 11,000\nlemmata (approx. 85% coverage on open-domain\ntext), a bilingual dictionary with 10,797 part-of-\nspeech tagged correspondences between Breton\nand French, and a very small number of trans-\nfer rules (e.g. for concordance and re-ordering\nwithin noun phrases, verbal conjugation and pro-\nnoun insertion) adapted from the Spanish–French\n2The data used in the experiment may be downloaded\nfrom http://elx.dlsi.ua.es/ ˜fran/brfr_OAB_\ncorpus.tgz and used under the terms of the GNU GPL.\n3http://www.apertium.org/\n4http://visl.sdu.dk/constraint_grammar.\nhtmllanguage pair. It is not currently considered a pro-\nduction system as the coverage of the transfer rules\nis very sparse.\nFor examples of entries from the morphological\nanalyser and bilingual lexicon, please see figures 1\nand 2 respectively. In the morphological anal-\nyser, there are two kinds of paradigms ( <par> )\nreferenced, the first for specifying the initial con-\nsonant mutations described above, the second for\nlisting all the morphological forms of a given word\nalong with their analyses. For example, in the\ncase of verbs, a single combination of lemma and\nparadigm generates between 37 surface forms (for\nunmutating initial consonants) and 193 (for mutat-\ning initial consonants).\n4 A statistical phrase-based system\nA phrase-based statistical model was trained us-\ning the training and tuning sets mentioned above.\nAlthough other language model software is fre-\nquently used in the literature, the IRSTLM (Mar-\ncello et al., 2008) implementation was chosen as\nit was available and open-source. A 3-gram lan-\nguage model was trained using the French side\nof the parallel data. 
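For readers who want a concrete picture of what such a trigram model amounts to, the following minimal Python sketch (not the IRSTLM tool actually used in the experiment) builds add-alpha-smoothed trigram counts from a tokenised French corpus and scores a sentence. The file name train.fr and the smoothing choice are assumptions made only for this example.

from collections import Counter
from math import log

def trigram_counts(sentences):
    """Collect trigram and bigram counts over lower-cased, whitespace-tokenised sentences."""
    tri, bi = Counter(), Counter()
    for sent in sentences:
        toks = ["<s>", "<s>"] + sent.lower().split() + ["</s>"]
        for i in range(2, len(toks)):
            tri[(toks[i - 2], toks[i - 1], toks[i])] += 1
            bi[(toks[i - 2], toks[i - 1])] += 1
    return tri, bi

def log_prob(sentence, tri, bi, vocab_size, alpha=1.0):
    """Add-alpha smoothed trigram log-probability of one sentence."""
    toks = ["<s>", "<s>"] + sentence.lower().split() + ["</s>"]
    lp = 0.0
    for i in range(2, len(toks)):
        num = tri[(toks[i - 2], toks[i - 1], toks[i])] + alpha
        den = bi[(toks[i - 2], toks[i - 1])] + alpha * vocab_size
        lp += log(num / den)
    return lp

# Toy usage, assuming one tokenised French sentence per line in train.fr.
corpus = [line.strip() for line in open("train.fr", encoding="utf-8")]
tri, bi = trigram_counts(corpus)
vocab = {w for s in corpus for w in s.lower().split()}
print(log_prob("le toit de la maison", tri, bi, len(vocab)))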
The rest of the training pro-\ncess followed the instructions for the baseline sys-\ntem for WMT08, the shared task in the ACL\n2008 workshop on statistical machine translation\n(Callison-Burch et al., 2008). Only a few modifi-\ncations in the tokeniser provided were necessary,\nto deal with the c’hcharacter in Breton. The train-\ning and tuning corpora were tokenised and lower-\ncased to try to alleviate the data sparseness. BLEU\nscores optimised with the MERT algorithm (Och,\n2003) on the tuning set and obtained on the test set\nare displayed in table 2.\n5 Extending the parallel corpus\nAs the corpus used for training was much smaller\nthan usually used in SMT, there was a problem\nof coverage. This was aggravated by the fact that\nBreton is an inflected language and as mentioned\npreviously also exhibits the phenomenon of initial\nconsonant mutation. Such a small corpus is un-\nlikely to contain the majority of frequent surface\nforms, and almost certainly would not contain the\nless frequent ones.\nTo try and alleviate the problem of low cover-\nage of the training data, it was decided to make\nuse of the resources available in the nascent rule-\nbased system described above. Two approaches214\n<e lm=\"labourat\">\n<i>labour</i>\n<par n=\"labour/at vblex\"/>\n</e>\n<e lm=\"kadarnaat\">\n<par n=\"initial-k\"/>\n<i>adarna</i>\n<par n=\"labour/at vblex\"/>\n</e>\nFigure 1: Example of morphological anal-\nyser entries for two verbs ( labourat ‘to work’\nand kadarnaat ‘to confirm’), including inflec-\ntional paradigm ( labour/at vblex ) and mutation\nparadigm ( initial-k )\n<e>\n<p>\n<l>labourat<s n=\"vblex\"/></l>\n<r>travailler<s n=\"vblex\"/></r>\n</p>\n</e>\n<e>\n<p>\n<l>kadarnaat<s n=\"vblex\"/></l>\n<r>confirmer<s n=\"vblex\"/></r>\n</p>\n</e>\nFigure 2: Example of bilingual lexicon entries for\ntwo verbs. The bilingual lexicon specifies corre-\nspondences between lemmata and parts of speech.\nwere taken. The first was to simply add the bilin-\ngual transfer lexicon from the system to the end of\nthe training data. This consisted of 10,797 lem-\nmata. The second was to automatically generate\nappropriate mappings between all of the surface\nforms of the given lemmata in the dictionaries of\nthis system.\nThere has been existing research in this area,\nfor example Dugast et al. (2008) generated a par-\nallel corpus from a rule-based system to train a\nphrase-based system, and Schwenk (2009) uses an\ninflected dictionary to produce training data for a\nstatistical system, albeit in a well resourced lan-\nguage pair (French–English).\nIn order to generate the surface-form mappings,\nan expansion of all possible surface forms was\ntaken, along with analyses in the Breton morpho-\nlogical analyser. These analyses were then passed\nthrough the rest of the Apertium pipeline in or-mignon,ami\nmignoned,amis\nvignon,ami\nvignoned,amis\ndale,retarde\ndale,il retarde\nlabouren,je travaillais\n...\nFigure 3: Example of output from the dictionary\nexpansion and translation – mignon ‘friend’, dale\n‘late’ and ‘He is delaying’ and labouren ‘I worked’\nder to produce all of the possible translations of\nsurface forms in French. This produces a bilin-\ngual inflected dictionary (see figure 3). It is worth\nmentioning that as a result of the transfer rules,\nentries for verbs, are generated, where appropri-\nate (e.g. finite verb tenses) with the correspond-\ning subject pronoun in French, and Breton tenses\nwhich are not found in French are converted into\nFrench tenses (e.g. 
past habitual is converted to\nimperfect).\nThis ‘expanded’ bilingual dictionary of surface\nforms was added to the end of the training corpus,\nand consisted of 116,514 mappings of inflected\nBreton forms to inflected French forms.\n6 Evaluation and error analysis\nAs time has not yet been found for a manual eval-\nuation, below are presented the BLEU (Papineni\net al., 2002) scores for the three statistical mod-\nels described above, along with a baseline word-\nfor-word translation generated by the unfinished\nRBMT system. As expected, the number of un-\nknown words decreases when the bilingual lexicon\nis added to the training data, and even more when\nthe fully-expanded bilingual lexicon is added. The\nrise in BLEU (keeping in mind that these are short\nsentences of ten words on average) is probably also\ndue to side effects such as a better word alignment\nand a better French context available to the lan-\nguage model scoring. When comparing systems 3\nand 4, a quick manual review may attribute most\nchanges to plural forms.\nSee examples in table 3. The first example\nshows how the plural form of the Breton for syl-\nlabe (syllable) could be matched thanks to the mor-\nphological extension of the lexicon. In example 2,\nanother kind of extension could be used by the de-\ncoder. In French, inflected verbs require the pres-\nence of subject pronouns, whereas in Breton this\nis not the case. This may lead to alignment errors215\nSystem Description BLEU Phrase pairs Unknown words in devtest\nsystem 1 word-for-word 0.16 n/a 1,191\nsystem 2 baseline phrase-based SMT 0.29 800k 623\nsystem 3 + uninflected dictionary 0.30 807k 562\nsystem 4 + inflected dictionary 0.36 843k 531\nTable 2: BLEU scores\nExample 1 Benveg troc’ha ˜n dre silabenno `u\nref outil de c ´esure par syllabe\ngloss hyphenation tool\nsystem 3 outil coupe par silabenno `u\nsystem 4 outil de coupure par syllabes\nExample 2 E rankit kevrea ˜n ouzh an holl darzhio `u roadenno `u\nref V ous devez vous connecter `a toutes les sources de donn ´ees\ngloss You should connect to all of the data sources\nsystem 3 Devez connexion de donn ´ees . les darzhio `u\nsystem 4 V ous devez se connecter tous les darzhio `u de donn ´ees\nExample 3 Emirelezhio `u Arab Unanet\nref ´emirats arabes unis\ngloss United Arab Emirates\nsystem 3 ´emirats arabes unies\nsystem 4 ´emirats arabes unissez\nTable 3: Translation examples\nespecially with sparse data. In this example, the\nsecond plural form in present tense of the Breton\nverb rankout (to have to), rankit was mapped to its\nFrench equivalent with the corresponding pronoun\nvous devez .\nIt is also worth noting that the error se connecter\nforvous connecter could be alleviated with a more\nrobust verb generation. In French the verb is re-\nflexive, and this is marked in the bilingual lexi-\ncon, but the appropriate reflexive pronoun is not\nyet generated by the rule.\nIn example 3, the translation of Emirelezhio\nArab Unanet (United Arab Emirates) displays the\nadjective for “united” with the incorrect gender.\nSystem 4 does not perform better, since it insteads\noutputs the imperative form of the correspond-\ning verb “unite!”. It is very likely that in a real\nparallel corpora the correct translation (as an ad-\njective) would have been more frequent than the\none picked up here in decoding from the extended\nbilingual lexicon.\n7 Conclusions and future work\nThis paper has presented, to my knowledge, the\nvery first results on Breton to French machinetranslation. 
While comparing BLEU scores on a\nrule-based and a statistical system is not mean-\ningful (Callison-Burch et al., 2006; Labaka et al.,\n2007), it has shown that the work on the linguistic\ncoding of dictionary entries helped improve a sta-\ntistical model that had to be trained on little data.\nOne of the avenues for improving the baseline\nstatistical system would be to add a larger language\nmodel on the target side. It would probably also be\npossible to try to learn probabilities for the rule-\nbased created phrase pairs as in Koehn and Knight\n(2000). Another option would be to try and cre-\nate “expanded” phrases based on chunks extracted\nfrom a bilingual corpus. For example if you have\nwar toenn an ti , “sur le toit de la maison” (on the\nroof of the house), it would be fairly straightfor-\nward given the rule-based system to generate all\npossible morphological combinations, viz. war\ntoenno `u an ti , “sur les toits de la maison” (on the\nroofs of the house), war toenno `u an tiez , “sur les\ntoits des maisons” (on the roofs of the houses), and\nwar toenn an tiez , “sur le toit des maisons” (on the\nroof of the houses) respectively.\nIt is also worth noting that at present the Breton–\nFrench lexicon in Apertium has only one (gener-216\nally the most frequent) translation per word. It\nwould be feasible to generate more than one entry\nper word, and then score these on language mod-\nels.\nThe method described here is knowledge-light,\nrequiring only a morphological analyser, bilin-\ngual dictionary and some very basic transfer rules\n(for verb conjugation) and could be applied to\nother under-resourced language pairs to improve\nthe coverage of a statistical system where little par-\nallel data is available.\nAcknowledgements\nI am very grateful to the Ofis ar Brezhoneg for\nmaking available their translation memory, and for\ntheir consistent help during the project. I would\nalso like to extend special thanks to: Fulup Jakez,\nthe director, for his work on verifying and ex-\npanding the Breton morphological analyser and\nBreton–French lexicon, the two contributors to this\npaper who do not wish to be named, and the re-\nviewers for the helpful comments I received.\nReferences\nArmentano-Oller, Carme. Carrasco, Rafael C., Corb ´ı-\nBello, Antonio M., Forcada, Mikel L., Ginest ´ı-\nRosell, Mireia, Ortiz-Rojas, Sergio, P ´erez-Ortiz,\nJuan Antonio, Ram ´ırez-S ´anchez, Gema, S ´anchez-\nMart ´ınez, Felipe and Scalco, Miriam A. 2006.\n“Open-source Portuguese-Spanish machine transla-\ntion” Proceedings of the 7th International Workshop\non Computational Processing of Written and Spoken\nPortuguese, PROPOR-2006\nCallison-Burch, Chris, Osbourne, Miles and Koehn\nPhilip 2006. “Re-evaluating the role of Bleu in ma-\nchine translation research” in 11th Conference of the\nEuropean Chapter of the Association for Computa-\ntional Linguistics , pp. 249–256\nCallison-Burch, Chris, Fordyce, Cameron, Koehn,\nPhilipp, Monz, Christof and Schroeder, Josh 2008.\n“Further Meta-Evaluation of Machine Translation”\ninProceedings of the Third Workshop on Statistical\nMachine Translation , pp. 70–106\nDugast, Lo ¨ıc, Senellart, Jean and Koehn, Philipp 2008.\n“Can we relearn an RBMT system?” in Proceedings\nof the Third Workshop on Statistical Machine Trans-\nlation , pp. 175–178\nGordon, Raymond G., Jr. (ed.) 2005. Ethnologue:\nLanguages of the World, Fifteenth edition (Dallas,\nTex.: SIL International)\nKoehn, Philip and Knight, Kevin 2000. 
“Estimat-\ning Word Translation Probabilities from UnrelatedMonolingual Corpora Using the EM Algorithm” in\nProceedings of the Seventeenth National Conference\non Artificial Intelligence pp. 711–715\nKoehn, Philip, Hoang, Hieu, Birch, Alexandra,\nCallison-Burch, Chris, Federico, Marcello, Bertoldi,\nNicola, Cowan, Brooke, Shen, Wade, Moran, Chris-\ntine, Zens, Richard, Dyer, Chris, Bojar, Ondrej Con-\nstantin, Alexandra and Herbst, Evan 2007. “Moses:\nOpen source toolkit for statistical machine transla-\ntion” in ACL 2007, demonstration session .\nLabaka, Gorka, Stroppa, Nicholas, Way, Andy and\nSarasola. Kepa 2007. “Comparing rule-based and\ndata-driven approaches to Spanish-to-Basque ma-\nchine translation” in Machine Translation Summit\nXI, Copenhagen, Denmark, pp. 297–304\nFederico, Marcello, Bertoldi, Nicola and Cettolo,\nMauro 2008. “IRSTLM: an Open Source Toolkit\nfor Handling Large Scale Language Models” Pro-\nceedings of the Interspeech 2008 , pp. 1618–1621\nOch, Franz J. 2003. “Minimum error rate training in\nstatistical machine translation” 41st Annual Meet-\ning of the Association for Computational Linguistics\npp. 160–167\nPapineni, Kishore, Roukos, Salim, Ward, Todd and\nZhu, Wei-jing 2002. “BLEU: a method for auto-\nmatic evaluation of machine translation” in 40th An-\nnual meeting of the Association for Computational\nLinguistics pp. 311–318\nSalminen, Tapani 1999. Unesco Red Book on Endan-\ngered Languages\nSchwenk, Holger 2009. “On the use of comparable\ncorpora to improve SMT performance” to appear\nEACL-2009\nTyers, Francis M. and Donnelly, Kevin 2009.\n“apertium-cy: a collaboratively-developed free\nRBMT system for Welsh to English” Prague Bul-\nletin of Mathematical Linguistics No. 91, pp. 57–66.217",
"main_paper_content": null
}
|
{
"decision": "Unknown",
"reviews": []
}
| 0 | 0 |
[] |
[] |
{
"id": "0dYM_fIZ3lF",
"year": null,
"venue": "EAMT 2009",
"pdf_link": "https://aclanthology.org/2009.eamt-1.17.pdf",
"forum_link": "https://openreview.net/forum?id=0dYM_fIZ3lF",
"arxiv_id": null,
"doi": null
}
|
{
"title": "Developing Prototypes for Machine Translation between Two Sami Languages",
"authors": [
"Francis M. Tyers",
"Linda Wiechetek",
"Trond Trosterud"
],
"abstract": "Francis M. Tyers, Linda Wiechetek, Trond Trosterud. Proceedings of the 13th Annual conference of the European Association for Machine Translation. 2009.",
"keywords": [],
"raw_extracted_content": "Proceedings of the 13th Annual Conference of the EAMT , pages 120–127,\nBarcelona, May 2009\nDeveloping Prototypes for Machine Translation between Two Sámi\nLanguages\nFrancis M. Tyers\nDepartament de Llenguatges\ni Sistemes Informàtics,\nUniversitat d’Alacant\nE-03071 Alacant (Spain)\[email protected] Wiechetek\nGiellatekno,\nUniversitetet i Tromsø,\nNorway\[email protected] Trosterud\nGiellatekno,\nUniversitetet i Tromsø,\nNorway\[email protected]\nAbstract\nThis paper describes the development of\ntwo prototype systems for machine trans-\nlation between North Sámi and Lule Sámi.\nExperiments were conducted in rule-based\nmachine translation (RBMT), using the\nApertium platform, and statistical ma-\nchine translation (SMT) using the Moses-\ndecoder. The experiments show that both\napproaches have their advantages and dis-\nadvantages, and that they can both make\nuse of pre-existing linguistic resources.\n1 Introduction\nIn this paper we describe the development of two\nprototype machine translation systems between\ntwo Sámi languages, North Sámi ( sme) and Lule\nSámi (smj), one rule-based (Apertium), and one\nstatistical (Moses). There are other systems which\nhave been developed with marginalised languages\nin mind (e.g. (Lavie, 2008)), but, as of writing,\nthese were not available under an open-source li-\ncence and thus could not be applied to the task at\nhand. The content will be split into several sec-\ntions. The first section will give a general overview\nof the languages in question, and sketch a typol-\nogy of MT scenarios for minority languages. The\nnext sections will describe the two machine trans-\nlation strategies in some detail and will outline\nhow the existing language technology was able to\nbe re-used and integrated. We will follow this by\na short evaluation and then some discussion and\nfuture work.\n1.1 The languages\nBoth North Sámi and Lule Sámi belong to the\nFinno-Ugric language family and are spoken in the\nc/circlecopyrt2009 European Association for Machine Translation.north of Norway and Sweden, North Sámi also\nin Finland. North Sámi has between 15,000 and\n25,000 speakers, while Lule Sámi has less than\n2,000 speakers.\nThe Sámi proto-language was originally an ag-\nglutinative language, but North and Lule Sámi\nhave developed features known from inflective lan-\nguages (case/number combinations are often ex-\npressed by one suffix only, certain morphologi-\ncal distinctions are expressed by means of conso-\nnant gradation (i.e. a non-segmental process) only,\netc.).\nThe main objective with the development of the\nprototype rule-based system was to evaluate how\nwell existing resources could be re-used, and if the\nshallow-transfer approach was suited to languages\nwith more agglutinative typologies.\n1.2 A typology of MT systems for minority\nlanguages\nMinority language speakers typically differ from\nthe majority in being bilingual, the minority speaks\nthe language of the majority, but not vice-versa.\nThis has some implications for the requirements\nsociety will put to machine translation systems.\nA majority to minority language system must be\nof high quality, so high that post-editing the output\nis faster than translating from scratch. The goal is\nto produce well-formed text, not to understand the\ncontent, since the minority language users will pre-\nfer the original to a bad translation. 
A minority to\nmajority language system, on the other hand, will\nbe useful even as a gist system, answering vital\nquestions such as “what are they writing about me\nin the minority language newspaper?”. The sys-\ntems presented here are minority to minority lan-\nguage systems. North and Lule Sámi are mutu-\nally intelligible, and also in this context a gist sys-\n120\ntem will not be that interesting. The importance of\nthe system lies in its ability to produce text. Here,\nNorth Sámi is the larger language, possessing close\nto a full curriculum of school textbooks. A high-\nquality MT system would help produce the same\nfor Lule Sámi, and moreover from the closely re-\nlated North Sámi than from Norwegian. The same\nsituation may found for many language communi-\nties.\n2 Rule-based machine translation\n2.1 Apertium\nApertium is an open-source platform for creating\nrule-based machine translation systems. It was ini-\ntially designed for closely-related languages, but\nhas also been adapted to work better for less-\nrelated languages. The engine largely follows a\nshallow-transfer approach. Finite-state transducers\n(Garrido-Alenda and Forcada, 2002) and (Roche\nand Schabes, 1997) are used for lexical processing,\nfirst-order hidden Markov models (HMM) are used\nfor part-of-speech tagging, and multi-stage finite-\nstate based chunking for structural transfer (For-\ncada, 2006). The original shallow-transfer Aper-\ntium system consists of a de-formatter, a morpho-\nlogical analyser, the categorial disambiguator, the\nstructural and lexical transfer module, the morpho-\nlogical generator, the post generator and the re-\nformatter.\n2.1.1 Analysis and generation\nFor the analysis and generation, we used exist-\ning finite-state transducers for the two languages.1\nlttoolbox has been widely used to model ro-\nmance language morphology, and although it has\nbeen used to model the morphology of other lan-\nguages with complex morphology (e.g. Basque), it\nis not ideal for these languages. lttoolbox is lack-\ning features for dealing with stem-internal varia-\ntion, diphthong simplification, and compounding.\nNorth Sámi word forms involve both conso-\nnant gradation, diphthong simplification and com-\npounding. The North Sámi noun guolli (‘fish’) al-\nternates between -ll-(strong stage) and -l-(weak\nstage).\nAdditionally, one has to deal with diphthong\nsimplification, the diphthong uochanges into a\nmonophtong uin e.g. accusative plural guliid .\nIn the Apertium lexicon, guolli is represented as\nin figure 1. gurepresents the stem, the item be-\n1http://giellatekno.uit.no/<pardef n=\"gu/olli__N\">\n<e>\n<p>\n<l>liid</l>\n<r>olli<s n=\"N\"/><s n=\"Pl\"/>\n<s n=\"Acc\"/></r>\n</p>\n</e>\n...\n</pardef>\nFigure 1: Section of inflectional paradigm for\ngu/olli__N\ntween<l></l> liidthe generated ending and the\nitems between <r></r> the analysis including the\nlemma and the morphological tags.\nThe Divvun and Giellatekno Sámi language\ntechnology projects2use finite-state transducers\nfor the morphological analyser and closed-\nsource finite-state tools from Xerox (Beesley and\nKarttunen, 2003). They tools handle two-level\nmorphology model with twolc (two-level com-\npiler) for morphophonological analysis together\nwith lexical tools in a single transducer, con-\nsonant gradation, diphthong simplification and\ncompounding are handled by two-level rules.\nConsonant gradation and diphthong simplification\nof the noun guolli are handled in the following\nway. 
guolli is listed in the root lexicon with lemma\nand continuation lexicon AIGI and redirected to\nthe sublexicon AIGI .\nguolli AIGI \"fish N\" ;\nLEXICON AIGI !Bisyll. V-Nouns.\n+N+Sg+Acc:%>X4 K ;\n+N:%>X5 GODII- ; ! weak gr dipth simpl\n...\nFrom there it is redirected to a further sublex-\nicon GODII- which redirects it to the sublexicon\nGODII- , which provides the plural accusative anal-\nysis.\nLEXICON GODII-\n+Pl+Acc:jd9 K ;\nAt the same time, a two-level rule handles diph-\nthong simplification when encountering the dia-\ncritical mark X5by removing the second vowel ( e\no a) in a diphthong ( ie uo ea ) if the suffix contains\nani.\nVx:0 <=> Vow _ Cns:+ i (...) X5: ;\nwhere Vx in (e o a) ;\n2To be found on http://www.divvun.no/index.\nhtml andhttp://giellatekno.uit.no/ .121\nConsonant gradation is handled in another rule\nwhere a consonant ( f l m n r s . . . ) is removed\nbetween a vowel, an identical consonant, another\nvowel, and a weak grade triggering diacritial mark\n(the rule is slightly simplified, noted by . . . ).\nCx:0 <=> Vow: _ Cy Vow (...) WeG: ;\nwhere Cx in (f l m n r s ...)\nCy in (f l m n r s ...)\nA general difficulty for generation and analysis\nare inconsistent tagsets in SL and TL. While verbs\nare specified with regard to transitivity ( V TV , V\nIV) for North Sámi, they were not specified in the\nLule Sámi dictionary (only V). Another matter of\nchoice and convenience is the degree of lexicali-\nsation as in the case of derived verbs. The North\nSámi verbform gohˇ coduvvo (‘he/she is called’) ei-\nther goes back to the form gohˇ cˇ cut (‘order’) or to\ngohˇ codit (‘call, name’), which is derived from go-\nhˇ cˇ cut but to some extent lexicalised.\ngohˇcˇcut+V+TV+Der1+Der/d+V\n+Der2+Der/PassL+V+Ind+Prs+Sg3\ngohˇcodit+V+TV+Der2+Der/PassL+V+Ind+Prs+Sg3\ngåhtjudit+V+TV+Der1+Der/Pass+V+Ind+Prs+Sg3\nIn Lule Sámi, gåhtjuduvvá only gets the analy-\nsis with the lexicalised verb gåhtjudit as a lemma.\nThe parallel derived form to North Sámi is not\nprovided in the analysis. For the construction\nof the bilingual sme-smj dictionary, that means\nthatgohˇ coduvvo is only matched with gåhtjuduvvá\nifgohˇ coduvvo is analysed with gohˇ codit as its\nlemma. In the bilingual dictionary, both pairs go-\nhˇ cˇ cut - gåhttjot andgohˇ codit - gåhtjudit exist. But\ngåhtjuduvvá cannot be generated from gåhttjot .\n<e><p><l>goh ˇcˇcut<s n=\"V\"/></l>\n<r>gåhttjot<s n=\"V\"/></r></p></e>\n<e><p><l>goh ˇcodit<s n=\"V\"/></l>\n<r>gåhtjudit<s n=\"V\"/></r></p></e>\nIn the previous case, tag assymetry is due to\nannotation-choices. In other cases tag inconsisten-\ncies are linguistically motivated as in the case of\nthe negation verb ii/ij(‘not (do)’), which is spec-\nified with regard to tense in Lule Sámi, but not in\nNorth Sámi. This is due to the fact, that Lule Sámi\nhas different present tense and past tense forms of\nthe verb. North Sámi, on the other hand only has\none form to express both present and past tense.\nThe tense distinction is made by means of the main\nverb following the negation verb as in ii boa ¯de\n(‘he/she does not come’) and ii boahtán (‘he/she\ndid not come’).\nii ii+V+IV+Neg+Ind+Sg3ij ij+V+Neg+Prs+Sg3\nittjij ij+V+Neg+Prt+Sg3\nBoth for generation and analysis that means that\none has to find a possibility to account for the\n‘missing’ tag in North Sámi. ‘Missing’ means here\nthe lack of tag specification for the temps (tempus)\nvariable in the transfer files.\nA number of multiword expressions differ from\neach other in SL and TL. 
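(Before turning to multiword expressions below, a rough illustration of what the two-level rules above encode may help. The Python sketch here is not a reimplementation of the Xerox twolc machinery; it hard-codes the uo/ie/ea diphthongs and a simple geminate reduction purely to show how the weak-grade, i-triggered accusative plural of guolli comes out.)

import re

def weak_grade(stem):
    """Very rough consonant gradation: reduce one geminate (ll, nn, mm, ...) to a single consonant."""
    return re.sub(r"([lmnrsf])\1", r"\1", stem, count=1)

def simplify_diphthong(stem, suffix):
    """Replace the first ie/uo/ea diphthong with its first vowel when the suffix contains 'i',
    loosely mimicking the X5-triggered two-level rule."""
    if "i" in suffix:
        stem = re.sub(r"(u)o|(i)e|(e)a", lambda m: m.group(m.lastindex), stem, count=1)
    return stem + suffix

# guolli 'fish': weak grade plus diphthong simplification plus plural accusative -iid
print(simplify_diphthong(weak_grade("guoll"), "iid"))   # -> guliid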
While in North Sámi\ngii beare (‘whoever’) has inner inflection, the Lule\nSámi vajku guhti does not. The initial pronoun gii\ncorresponds to the second component guhti in Lule\nSámi.\nThe last type of generation modification hap-\npens in a separate step. Orthographic variants and\ncontractions are handled by the postgenerator. The\nLule Sámi copula liehket (’to be’) has three forms\nfor the tag combination liehket+V+Ind+Prs+Sg3,\nle, la, l . While leandlaare interchangable vari-\nants, lis a shortened form of laafter wordforms\nthat end in a vowel. The postgeneration lexicon\nspecifies this change and outputs the correct form.\n2.1.2 Disambiguation (Constraint Grammar)\nDisambiguation of morphological and shallow\nsyntactic tags is handled by the North Sámi parser.\nThe parser uses Constraint Grammar, a formal-\nism based on Karlsson (1990) and Karlsson (1995)\nand further developed by Tapanainen (1996) and\nBick (2000).\nThe approach is bottom-up, which means that\nall input (ideally) receives one or more analyses.\nThose analyses are then one by one removed ex-\ncept for the last reading, which is never removed.\nThe parser uses the output of the morphological\ntransducer as an input and adds shallow syntactic\ntags. Syntax tags do not only function as the basis\nof a dependency tree structure representation, but\nalso disambiguate morphology, e.g. homonymous\ngenitive and accusative forms are distinguished on\nthe level of syntax (genitive premodifier @→Nvs.\naccusative object @←OBJ). The readings are then\ndisambiguated by means of context rules.\nThe disambiguation file itself consists of differ-\nent sections:\n•Sets: lexical, POS, morphological features,\nsyntactic, semantic lists one wants to abstract\nover\n•Syntactic annotation rules : operators MAP\nand ADD annotate syntactic tags such as\n@←OBJ122\n•Disambiguation rules : operators SELECT\nand REMOVE either pick or discard a read-\ning\nIn the Apertium engine, the Constraint Gram-\nmar module is added as a pre-disambiguator after\nthe morphological analyser and before the statisti-\ncal POS tagger. Apertium uses the r21668 version3\nof the parser, which is based on vislcg3.\nA syntactic (or even semantic) analysis of the\nSL is also useful in MT, and the structural trans-\nfer in thesme-smj Apertium engine profits from\nsyntactic information. By mapping the habitive\ntag@HAB onto locative nouns with habitive syn-\ntax/semantics, one can directly translate locative\ninto inessive and a structural transfer rule in one\nof the MT modules becomes redundant. In the\nprototype system, the accuracy of the CG disam-\nbiguator has made the HMM-based tagger almost\nredundant.\nIt would appear that the rule-based Constraint\nGrammer parser is able to give a better perfor-\nmance than an HMM based tagger.4\nTrigrams are not suitable for expressing syntac-\ntic structure.5CG on the other hand successfully\nexpresses syntactic structure as a product of con-\ntextual disambiguation. (Bick, 2000, p.137)\n2.1.3 Lexical transfer\nLexical transfer is handled in the bilingual dic-\ntionary, where entries have the form\n<e><p><l>beaivi<s n=\"N\"/></l>\n<r>biejvve<s n=\"N\"/></r></p></e>\nThe North Sámi lemma with its POS specifica-\ntion comes first embedded in <l></l> , followed\nby the Lule Sámi lemma and its corresponding\nPOS specification embedded in <r></r> In the\ncase of a one-to-many relation between SL and\nTL, i.e. 
if several TL items exist for one SL item,\nthe default translation is picked by means of the\nrestriction <e r=\"RL\"> .\n<e r=\"RL\"><p><l>dàl<s n=\"Adv\"/></l>\n<r>dàlla<s n=\"Adv\"/></r></p></e>\n32008-10-29 23:20:45 +0100\n4Samuelsson and V outilainen note in their comparison of a a\nlinguistic and stochastic tagger that “at ambiguity levels com-\nmon to both systems, the error rate of the statistical tagger\nwas 8.6 to 28 times higher than that of EngCG-2.\" (Samuels-\nson and V outilainen, 1997, p.251)\n5According to Bick (2000), the syntactic structure problem is\n“unique” to probabilistic HMM grammars and resides in the\n“Markov assumption” that p(tn|t1...tn−1) =p(tn|tn−1)\n(for bigrams), or =p(tn|tn−1tn−2)(for trigrams).\nFigure 2: Coverage of the bilingual sme-smj\ndictionary\n<e> <p><l>dàl<s n=\"Adv\"/></l>\n<r>dàl<s n=\"Adv\"/></r></p></e>\nA lexical selection module as described in the\nApertium documentation (Forcada, 2008) is not\nemployed by the system. Lexical transfer is con-\nsidered to be regular instead of context-dependent.\nHow close that is to the real situation is still to be\ndecided.\nThe transfer lexicon was constructed in the fol-\nlowing way: The orthographical differences be-\ntween North and Lule Sámi are mostly regular. We\nthus made a finite state transducer which turned\nNorth Sámi lemmata into Lule Sámi candidates.\nThe candidates were run through our Lule Sámi\nmorphological transducer. Words recognised with\nthe same POS as the input word were accepted,\nwhereas words not recognised were manually re-\nvised. Semantic pairs which were non-cognates\nwere manually added. Figure 4 shows the cov-\nerage of our transfer lexicon, for the n-thousand\nmost common North Sámi lemmata.\n2.1.4 Syntactic transfer\nThere are a number of structural differences be-\ntween North and Lule Sámi that require structural\ntransfer rules.\n•The North Sámi locative case expressing\nplace and source corresponds to either Lule\nSámi inessive (place) and elative (source) de-\npending on the context.\n•In simple object constructions, the unmarked\nword order in Lule Sámi tends to be SOV ,\nwhile it is SVO in North Sámi.\n•In negation construction as discussed above,\nthe Lule Sámi negation verb can inflect for123\ntense, while in North Sámi tense is expressed\nby means of the mainverb negation form\nAs the default translation of North Sámi locative\n(1) the Apertium system chooses Lule Sámi elative\nas in (2).\n(1) son ˇcokkii dávviriid ja dávttiid boares hávddi-in.\nson ˇcokkii dávviriid ja dávttiid boares hávddi-\nLOC.PL.\n‘(s)he collected things and bones old graves.from.’\n(2) sån tjåkkij dávverijt ja dávtijt boares hávdi-js.\nsån tjåkkij dávverijt ja dávtijt boares hávdi-ELA.PL.\n‘(s)he collected things and bones old graves.from.’\nThe default elative becomes inessive\n•in habitive constructions,\n•in place adverbials of stative verbs,\n•before certain adverbs such as gitta.\nIn (3) a structural rule chooses inessive as\na translation for locative when encountering the\nhabitive tag @HAB distributed by a CG-rule, a\nverb from the verbs_stative list such as ássat\n(‘live’), and an adverb from the ine_adv list such\nasgitta (‘dependent on’).\n(3) Sámit dahjege sápmela ˇcˇcat ásset Ruošša-s, Suoma-s\nja Norgga-s.\nSámit dahjege sápmela ˇcˇcat ásset Ruošša-LOC.SG,\nSuoma-LOC.SG ja Norgga-LOC.SG.\n‘Sámi or also ‘sápmela ˇcˇcat’ live in Russia, Finland\nand Norway.’\n(4) Sáme jali sábmelattja årru Ruossja-n, Suoma-n ja\nVuona-n.\nSáme jali sábmelattja årru Ruossja-INE.SG, 
Suoma-\nINE.SG ja Vuona-INE.SG.\n‘Sámi or also ‘sábmelattja’ live in Russia, Finland\nand Norway.’\nNorth and Lule Sámi differ with respect to word\norder. Especially in written texts, Lule Sámi allows\nfor a number of unmarked SOV (6) construction\nwhereas North Sámi prefers SVO (5).\n(5) Anne\nAnneráhkada\nmakesbiepmu.\nfood.\n(6) Anne\nAnnebiebmov\nfooddahká.\nmakes.\nWord order is treated in the second transfer\nmodule. The SOV rule in figure 3 captures the\npattern (subject, verb, object) and outputs them\nin the order subject–object–verb by reordering<rule>\n<pattern>\n<pattern-item n=\"SN_Subj\"/>\n<pattern-item n=\"FMainV\"/>\n<pattern-item n=\"SN_Obj\"/>\n</pattern>\n<action>\n<out>\n<chunk>\n<clip pos=\"1\" part=\"whole\"/>\n</chunk>\n<b pos=\"1\"/>\n<chunk>\n<clip pos=\"3\" part=\"whole\"/>\n</chunk>\n<b pos=\"2\"/>\n<chunk>\n<clip pos=\"2\" part=\"whole\"/>\n</chunk>\n<out>\n</action>\n</rule>\nFigure 3: Transfer rule to convert SVO →SOV\nthe chunks indicated by pos=\"1\", pos=\"2\" and\npos=\"3\" into 1–3–2.\nThe structural rules work successfully in trans-\nferring North Sámi to Lule Sámi structures. As the\nstructural differences are minimal, the construction\nof rules is not very time-consuming. Rather the\nidentification of structural differences is a new task\nas contrastive North-Lule Sámi grammar has been\na rather neglected area within syntactic research.\n3 Statistical machine translation\nFor the statistically based machine translation\nwe used the Moses decoder, the word aligner\nGIZA++, and the srilm language model.6\n3.1 Corpora\nMinority languages may roughly be divided into\nthree groups: The ones with a (limited) role in\npublic administration or similar domains, the ones\nwith a standardised written language and some text\n(more often than not the Bible comprises the bulk\nof the available corpus), and the ones with neither\nof these. Of our languages, North Sámi falls in\nthe first group and Lule Sámi in the second. This\nmeans that the parallel resources available are ex-\ntremely limited, they consist of the New Testament\n(approx. 150,000 words each), and a small cor-\npus of school curriculum texts (appr. 15,000 words\n6Available from the urls http://www.statmt.org/\nmoses/ ,http://www.fjoch.com/GIZA++.html ,\nhttp://www.speech.sri.com/projects/\nsrilm/ respectively124\neach, describing the content of the curriculum for\nthe Sámi schools in Norway). The two NT ver-\nsions have been translated in different countries\n(Norway/Finland and Sweden, respectively), with\ndifferent Bible versions as source texts, and they\ndiffer from each other more than an ordinary paral-\nlel corpus would have done. The curriculum texts\nare probably translations of the same original – in\nany case the sentences are better matches of each\nother.\n3.2 Training process\nFor the statistical machine translation, we build\nboth factored and unfactored models. For Lule\nSámi (the target language) we made both an un-\nfactored and a factored trigram language model on\nour Lule Sámi corpus, 278,000 words. Half of\nthe corpus (120,000 words) consists of New Tes-\ntament (NT) texts, 106,000 belongs to the fact cat-\negory, and 39,000 words is fiction. The factored\nmodel contained POS information, obtained from\nour Lule Sámi CG parser.\nWe then built various translation models. The\nmodels were severely limited by the availability of\nparallel corpora. 
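As a small aside on what the factored set-up expects as input: Moses-style factored corpora annotate every token with its factors, conventionally joined by a vertical bar (word|POS). The sketch below only illustrates that preprocessing step; the toy tags are assumed for the example and are not actual output of the Lule Sámi CG parser.

def to_factored(tagged_sentence):
    """Render a list of (surface, pos) pairs in the word|factor notation used for factored training data."""
    return " ".join(f"{surface}|{pos}" for surface, pos in tagged_sentence)

# Assumed toy analysis of a Lule Sámi fragment; the tags are illustrative only.
tagged = [("sáme", "N"), ("jali", "CC"), ("sábmelattja", "N"), ("årru", "V")]
print(to_factored(tagged))   # -> sáme|N jali|CC sábmelattja|N årru|V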
We had one corpus consisting of\nthe New Testament (9,200 parallel sentences), and\none containing curriculum texts (1700 parallel sen-\ntences).\n4 Evaluation\n4.1 Qualitative evaluation\nFor the development of the Apertium system 16\ntest sentences from Wikipedia were used as regres-\nsion tests.7Their target translations are based on a\nmanual translation. Out of the 16 test sentences, 12\nare successfully matched with the target translation\nat present. Remaining problems are not of a struc-\ntural kind, but are dependent on one-to-many re-\nlations in the bilingual dictionary, tag inconsisten-\ncies between the sme andsmj dictionaries, POS\nassymmetries and disambiguation errors from the\nConstraint Grammar disambiguator.\nFor evaluation purposes another independent\nmanual translation is used. The Apertium transla-\ntion deviates mostly with regard to lexical matters.\nOther lemmata were chosen. If they are synonyms\nor more idiomatic than the other ones remains to\nbe studied. With regard to structural deviations,\nthere was one deviating choice of case and one of\n7http://wiki.apertium.org/wiki/Northern_\nSami_and_Lule_Sami/Regression_tests\nFigure 4: BLEU result for three models\nword order. The word order deviation might hint\nat a rather optional SOV order.\nThe evaluation shows that while structural trans-\nfer seems to be mostly unproblematic, the choice\nof lexical tags and the lexical choices are the bigger\nchallenge. Tags should be consistently chosen for\nboth SL and TL whenever the deviation is not lin-\nguistically motivated. In the case of lexical choice,\none needs to have a closer look at the bilingual\nlexicon. Are the deviations interchangable trans-\nlations or is one of them the more idiomatic one?\n4.2 Quantitive evaluation\nFor evaluation, we used the same 16 North Sámi\nWikipedia test sentences (manually translated into\nLule Sámi). For the SMT system, they were tested\nby three different translation models, a factored\nand an unfactored model based upon the curricu-\nlum corpus, and an unfactored model based upon\nthe NT corpus (due to technical difficulties we\nwere not able to make a factored model of the NT\ncorpus).\nThe results are somewhat unexpected. Of the\ntwo versions of the curriculum translation model,\nthe unfactored one is better than the factored one,\nwith an average BLEU score of 0.3 as against 0.2.\nComparing the two unfactored models, the larger\none, containing NT and curriculum texts, performs\nsimilarly to the curriculum model for most sen-\ntences, but worse in some cases, resulting in a\nslightly worse overall score.125\nType of deviation Example\none-to-many relations dálla vs.dál(both ‘now’)\ntag inconsistencies iesjráddijiddje (‘self-governed’) is analysed both as a deverbal\nform and a lexicalised adjective\nPOS assymetries gullujiddje (‘belonging’) is analysed as a derived verb form\nCG disambiguation error liehket (infinitive) should be li(3rd person plural)\nTable 1: Remaining transfer problems\nType of deviation Example\nlexical matters tjiehpe vs.smidá ,moattegielak vs.ålogielak ,sáhttá vs.máhttá\ncase bargojn vs.bargoj\nword order manna l ulmmel SVO vs. man ulmmen la SOV (‘which is the purpose’)\nTable 2: Selection of divergences between North Sámi and Lule Sámi\nComparing the SMT and RBMT results is\nharder, as the lexicon for the rule-based system\nwas small, and the grammar rule set was restricted.\nThus, the RBMT did very well on known construc-\ntions (BLEU around 0.9 and better), but badly on\nnew text. 
The SMT did badly across the board,\nand much of its success was due to the similari-\nties of the languages (unknown words were passed\nthrough and now and then were correct).\nWith such a small training set, the result can-\nnot be but bad. From earlier cross-linguistic\nresearch, a morphology-rich language such as\nFinnish comes out with clearly worse results than\nthe more analytic German and French. Comparing\nBLEU score from (Banchs, 2005) with the token/-\ntype ratio of Banchs’ training set gives the picture\nin table 3.\nFrench German Finnish\nToken/type 189 74 29\nBLEU 0,302 0,245 0,203\nTable 3: Token/type ratio and BLEU for 4 source\nlanguages in a Europarl MT study\nThe token/type ratio changes from genre to\ngenre, but the relative distance between languages\nremain the same. This indicates that also an SMT\nsystem based upon a larger corpus would fare less\nthan good for a morphologically complex language\nlike Sámi.\n5 Discussion\nThe corpora for Sámi are not good enough for\nSMT systems to be able to replicate the goodRBMT results for North Sámi to Lule Sámi but\nmuch can be done both with tuning and corpus\ngathering. The corpora are probably good enough\nto build a gist system for North Sámi to Norwe-\ngian.\nApertium copes well with the structural trans-\nfer, but tag inconsistencies and many-to-many re-\nlations in the lexicon cause deviations between\nmanual and automatic translations. A good lexicon\nand a consistent tagset are the basis for successful\nRBMT.\nFor morphologically complex languages, the lt-\ntoolbox format for designing transducers might not\nbe ideal, and one might consider other morpholog-\nical transducers such as lexc and twolc.\nFuture plans in RBMT aim at making a full-\ncoverage system out of the Apertium prototype.\nWord alignment can help constructing a more com-\nplete and better bilingual dictionary, and statis-\ntical methods could be used to choose the most\nidiomatic wordform in the case of one/many-to-\nmany relations. Alternatively, a statistically-based\nlexical selection module as proposed in (Forcada,\n2008) may be included. For optimisation of struc-\ntural transfer, the Constraint Grammar could be\nenhanced by semantic roles that disambiguate be-\ntween an inessive locative (PLACE) and an elative\nlocative (SOURCE).\nThe available parallel corpora where Lule Sámi\nis one of the languages will not be large enough for\nSMT in the foreseable future.\nReturning to the typology of MT systems for mi-\nnority languages, we would like to explore the pos-\nsibility of using SMT to create a gisting system for\nNorth Sámi to Norwegian. A corpus of 1,000 sen-126\ntences has already been tested. For this language\npair, the linguistic distance is longer, but the em-\npirical base far better (the present corpus collec-\ntion contains appr. 120,000 sentences of parallel\n(but non-aligned) text). Although not much can be\nexpected from a North Sámi–Lule Sámi SMT sys-\ntem, the development of a North Sámi–Norwegian\nsystem should be possible.\nAcknowledgements\nMany thanks to the anonymous reviewers for their\nhelpful comments, and to Kevin Donnelly for re-\nviewing an earlier version of this paper.\nReferences\nBanchs, Rafael E. and Crego, Josep M. and de Gis-\npert, Adrià and Lambert, Patrik and Mariño, José B.\n2005. Statistical Machine Translation of Europarl\nData by using Bilingual N-grams, Proceedings of\nthe ACL Workshop on Building and Using Parallel\nTexts , pp. 133–136\nBeesley, K. R. and L. Karttunen. 2003. 
Finite State\nMorphology V ol. 1 CSLI Publications, Stanford.\nhttp://www.fsmbook.com/ .\nBick, E. 2000. The Parsing System ’Palavras’: Au-\ntomatic Grammatical Analysis of Portuguese in a\nConstraint Grammar Framework . Aarhus Univer-\nsity Press, Aarhus.\nForcada, M. L. 2006. Open-source machine transla-\ntion: an opportunity for minor languages. Strategies\nfor developing machine translation for minority lan-\nguages . 5th SALTMIL workshop on Minority Lan-\nguages. pp. 1–7\nForcada, M. L. and B. Ivanov Bonev and S.\nOrtiz Rojas and J. A. Pérez Ortiz and G.\nRamírez Sánchez and F. Sánchez Martínez and\nC. Armentano-Oller and M. A. Montava and F.\nM. Tyers. 2008. Documentation of the Open-\nSource Shallow-Transfer Machine Translation Plat-\nform Apertium .http://xixona.dlsi.ua.es/\n~fran/apertium2-documentation.pdf\nGarrido-Alenda A. and M. L. Forcada. 2002. Compar-\ning nondeterministic and quasideterministic finite-\nstate transducers built from morphological dictionar-\nies. Procesamiento del Lenguaje Natural . No. 29\npp. 73–80\nKarlsson, F., V outilainen, A., Heikkilä, J., and Anttila,\nA. (eds.). 1995. Constraint Grammar: A Language-\nIndependent System for Parsing Unrestricted Text .\nNatural Language Processing No. 4 Mouton de\nGruyter , Berlin and New York.\nKarlsson, F. 1990. Constraint Grammar As A Frame-\nwork For Parsing Running Text. Proceedings of\nCOLING V ol. 3 pp. 168–173Lavie, A. 2008. Stat-XFER: A General Search-based\nSyntax-driven Framework for Machine Translation\nProceedings of CICLing 2008 , pp. 362–375\nRoche, E. and Y . Schabes (eds.). 1997. Finite-State\nLanguage Processing. MIT Press , Cambridge, Mas-\nsachusetts.\nSamuelsson, C. and A. V outilainen. 1997. Comparing\na Linguistic and a Stochastic Tagger. Proceedings\nof the 35th Annual Meeting of the Association for\nComputational Linguistics and 8th Conference of the\nEuropean Chapter of the Association for Computa-\ntional Linguistics pp. 246–253\nTapanainen, P. 1996. The Constraint Grammar Parser\nCG-2 . University of Helsinki Publications V ol. 27\npp. 246–253127",
"main_paper_content": null
}
|
{
"decision": "Unknown",
"reviews": []
}
| 0 | 0 |
[] |
[] |
{
"id": "AFA_4Cwg7gS",
"year": null,
"venue": "EAMT 2012",
"pdf_link": "https://aclanthology.org/2012.eamt-1.54.pdf",
"forum_link": "https://openreview.net/forum?id=AFA_4Cwg7gS",
"arxiv_id": null,
"doi": null
}
|
{
"title": "Flexible finite-state lexical selection for rule-based machine translation",
"authors": [
"Francis M. Tyers",
"Felipe Sánchez-Martínez",
"Mikel L. Forcada"
],
"abstract": "Francis M. Tyers, Felipe Sánchez-Martínez, Mikel L. Forcada. Proceedings of the 16th Annual conference of the European Association for Machine Translation. 2012.",
"keywords": [],
"raw_extracted_content": "Flexible finite-state lexical selection for rule-based machine translation\nFrancis M. Tyers, Felipe Sánchez-Martínez, Mikel L. Forcada\nDepartament de Llenguatges i Sistemes Informàtics\nUniversitat d’Alacant\nE-03071 Alacant\n{ftyers,fsanchez,mlf}@dlsi.ua.es\nAbstract\nIn this paper we describe a module (rule for-\nmalism, rule compiler and rule processor)\ndesigned to provide flexible support for lex-\nical selection in rule-based machine trans-\nlation. The motivation and implementation\nfor the system is outlined and an efficient\nalgorithm to compute the best coverage of\nlexical-selection rules over an ambiguous\ninput sentence is described. We provide a\ndemonstration of the module by learning\nrules for it on a typical training corpus and\nevaluating against other possible lexical-\nselection strategies. The inclusion of the\nmodule, along with rules learnt from the\nparallel corpus provides a small, but con-\nsistent and statistically-significant improve-\nment over either using the highest-scoring\ntranslation according to a target-language\nmodel or using the most frequent aligned\ntranslation in the parallel corpus which is\nalso found in the system’s bilingual dictio-\nnaries.\n1 Introduction\nThis paper presents a module for lexical selection to\nbe used in rule-based machine translation (RBMT).\nThe module consists of an XML-based formalism\nfor specifying lexical-selection rules in the form\nof constraints, a compiler which converts the rules\nwritten in this format to a finite-state transducer,\nand a processor which applies the rule transducer to\nambiguous input sentences. The paper also presents\na method of learning lexical-selection rules from a\nparallel corpus.\nLexical selection is the task of choosing, given\nseveral source-language (SL) translations with the\nc\r2012 European Association for Machine Translation.same part-of-speech (POS), the most adequate\ntranslation among them in the target language (TL).\nThe task is related to the task of word-sense disam-\nbiguation (Ide and Véronis, 1998). The difference\nis that its aim is to find the most adequate trans-\nlation, not the most adequate sense. Thus, it is\nnot necessary to choose between a series of fine-\ngrained senses if all these senses result in the same\nfinal translation.\nThe dominant approach to MT for language pairs\nwith sufficient training data is phrase-based statis-\ntical machine translation; in this approach, lexical\nselection is performed by a combination of coocur-\nrence in the phrase table, and score from the target-\nlanguage model (Koehn, 2010). There have how-\never been attempts to improve on this by looking at\nglobal lexical selection over the whole sentence, see\ne.g. (Venkatapathy and Bangalore, 2007; Carpuat\nand Wu, 2007).\nIn order to test different approaches to lexical se-\nlection for RBMT, we use the Apertium (Forcada et\nal., 2011) platform. This free/open-source platform\nincludes 30 language pairs (as of February 2012).\nSánchez-Martínez et al. (2007) describe a\nmethod to perform lexical selection in Apertium\nbased on training a source-language bag-of-words\nmodel using TL cooccurrence statistics. 
This ap-\nproach was tested, but abandoned as it produced\nless adequate translations than using the transla-\ntion marked as default by a linguist in the bilingual\ndictionary.\nOther possible solutions would be to generate\nall possible combinations of translations, and score\nthem on a language model of the target language.\nThis approach is taken in the METIS -IIsystem\n(Melero et al., 2007). This has the benefit of being\neasy to implement, and only requiring a bilingual\ndictionary and a monolingual target language cor-\npus. It has the drawbacks of being both slow – many\nProceedings of the 16th EAMT Conference, 28-30 May 2012, Trento, Italy\n213\ntranslations must be performed – and not very cus-\ntomisable – control over the final translation is left\nto the TL model.\nAnother possible solution, and one that is already\nused in some Apertium language pairs (Brandt et\nal., 2011; Wiechetek et al., 2010) is to use con-\nstraint grammar (Karlsson et al., 1995) rules to\nchoose between possible alternative translations.\nAn advantage of this is that the constraint grammar\nformalism is well known, and powerful, allowing\ncontext searches of unlimited size. However, it is\ntoo slow to be able to be used for production sys-\ntems, as the speed is in the order of a few hundred\nwords per second as opposed to thousands of words\nper second for the slowest Apertium module.\nAnother approach not requiring a parallel cor-\npus is presented by Dagan and Itai (1994). They\nfirst parse the SL sentence and extract syntactic re-\nlations, such as verb + object, they then translate\nthese with a bilingual dictionary and use colloca-\ntion statistics from a TL corpus to choose the most\nadequate translation. While this method does not\nrely on the existence of a parallel corpus, it does\ndepend on some way of identifying SL syntactic\nrelations – which may not be available in all RBMT\nsystems.\nThe rest of the paper is laid out as follows: Sec-\ntion 2 presents some design decisions that were\nmade in the development of the module. Section 3\ndescribes in detail the rule formalism, the represen-\ntation of rules as a finite-state transducer, and the\nalgorithm for applying the rules to an ambiguous\ninput sentence. Section 4 shows how rules for the\nmodule may be learnt from a parallel corpus, and\nthen evaluated on a standard test set for MT. Finally,\nsection 6 offers some concluding remarks and ideas\nfor future work.\n2 Lexical selection in Apertium\nApertium is an free/open-source platform for cre-\nating shallow-transfer RBMT systems. The plat-\nform is being widely used to build MT systems\nfor a variety of language pairs, especially in those\ncases (mainly with related-language pairs) where\nshallow transfer suffices to produce good quality\ntranslations. It has, however, also proven useful\nin assimilation scenarios with more distant pairs\ninvolved.\nThe platform is designed to be: fast, in the order\nof thousands of words per second on a normal desk-\ntop computer; easy to develop; and standalone, no\nneed for existing data or large parallel corpora to\nbuild a system.Apertium uses a Unix pipeline architecture (see\nFigure 1) to perform translation: text is first stripped\nof format and morphologically analysed, then mor-\nphologically disambiguated. 
Then the unambigu-\nous analyses are passed through lexical and struc-\ntural transfer and finally morphological generation.\nThis translation strategy is very similar to other\ntransfer-based MT systems.\nThe Apertium platform does not currently have\na specific module for lexical selection. Some trans-\nlation ambiguity can be handled using multi-word\nexpressions (MWEs) encoded in the dictionaries of\nthe system, but the status quo is that for any given\nSL word, the most frequent, or most general trans-\nlation is given. This poses a translation problem, as\noften it may be difficult to choose the most frequent\nor the single most adequate translation of a word,\nor the selection strongly depends on the context.\n2.1 Requirements\nThe requirements of a lexical selection module are:\n\u000fIt should be efficient and fast, that is, it should\nprocess thousands of words per second on a\nnormal desktop computer. For rule sets of tens\nof thousands of rules.\n\u000fIt should not require any advanced resources,\nsuch as parallel corpora, but should be able to\ntake advantage of them if available.\n\u000fThe functioning of the module should be trace-\nable. In any given translation, it should be\npossible to identify the rules used.\n\u000fThe rules should be in a form suitable for read-\ning and writing by human beings so that users\ncan immediately change or add rules.\nIn the next section we describe a lexical selection\nmodule which fulfils these requirements.\nIn order to accomodate the new lexical selection\nmodule, a minor change was made to the pipeline\n(Figure 1). Where previously lexical transfer was\nperformed at the same time as structural transfer,\nnow lexical transfer is performed as a separate pro-\ncess before the structural transfer stage.\n3 Methodology\n3.1 Rule formalism\nThe rule formalism is based on context rules, con-\ntaining a sequence of the following features,\n\u000fA pattern matching a single SL lexical form\n214\nmorph.\nanalyserPOS\ntaggerlexical\ntransfer\nmorph.\ngeneratorpost-\ngeneratorSL\ntext\nTL\ntextdeformatter\nreformatterstructural\ntransferlexical\ntransferlexical\nselectionFigure 1: The Apertium architecture. The lexical transfer module (shadowed) has been moved from being called from the\nstructural transfer module to being a module in its own right (in bold face) and the lexical selection module has been inserted\nbetween lexical transfer and structural transfer.\n\u000fA pattern matching a single TL lexical form\n\u000fOne of the following operations:\nselect chooses the TL translation which\nmatches the lexical-form pattern and\nremoves all translations which do not\nmatch.\nremove removes the TL translation which\nmatch the given lexical-form pattern; and\nskip makes no changes and passes all the\ntranslations through unchanged; this is\nused when specifying the context of the\nrule.\nThe features are expressed by regular expres-\nsions, which may match any part of the input word\nstring (e.g. either the lemma, the tags or a combi-\nnation of both). As with the rest of the modules in\nthe Apertium platform, the rules are written in an\nXML-based format, which is processable by both\nhumans and machines.\nFigure 2 presents some examples of rules writ-\nten in this formalism. Each rule is enclosed in a\nrule element, with an optional cattribute for com-\nments. 
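The behaviour of the three operations on the candidate translations of a single SL word can be sketched in a few lines. The data structures below (plain lemma<tags> strings, and a never-empty fallback when nothing matches) are simplifications chosen for the example, not the module's actual implementation.

import re

def apply_operation(op, pattern, translations):
    """Apply a select, remove or skip operation to the candidate TL lexical forms of one SL word.
    Patterns are regular expressions over the lemma and tags."""
    matches = [t for t in translations if re.search(pattern, t)]
    if op == "select":
        return matches or translations              # keep only the matching candidates
    if op == "remove":
        rest = [t for t in translations if t not in matches]
        return rest or translations                 # never leave the word without a candidate
    return translations                             # "skip": context word, left unchanged

# Candidate translations of Spanish "estación", written as toy lemma<tags> strings.
candidates = ["station<n><sg>", "season<n><sg>"]
print(apply_operation("select", r"^season<", candidates))   # ['season<n><sg>']
print(apply_operation("remove", r"^station<", candidates))  # ['season<n><sg>']
print(apply_operation("skip", r".*", candidates))           # both kept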
The rule tag may have one or more match elements which describe sequences of SL context. Each match element may have either a lemma or a tags attribute, neither (in which case it will match any word), or both.
A match element may also contain a lexical selection operation, select or remove, the default one being skip.
The rules can be written by hand to solve specific translation issues with a given context. For example, given the Spanish word estación 'station, season' with a default translation of 'station', we may write rules (see Figure 2) which say that we want to translate the word as 'season' if it is followed by an adjective such as seca 'dry' or lluviosa 'rainy', or if it is followed by the preposition de 'of', a determiner (e.g. el 'the') and the noun año 'year'.

<rule c="default translation">
  <match lemma="estación"><select lemma="station"/></match>
</rule>
<rule>
  <match lemma="estación"><select lemma="season"/></match>
  <or>
    <match lemma="seco"/>
    <match lemma="lluvioso"/>
  </or>
</rule>
<rule>
  <match lemma="estación"><select lemma="season"/></match>
  <match lemma="de"/>
  <match tags="det.*"/>
  <match lemma="año"/>
</rule>
...

Figure 2: An example of the rules written by hand in the XML formalism for describing lexical selection rules. The formalism is the same for both hand-written and learnt rules. The order of rules is only important in calculating the rule number for tracing.

A weak point of the formalism is that rules can only take into account fixed-length, ordered contexts, so it is not possible to, e.g., make a rule which selects a given translation based on a given word at any position in the sentence (e.g. treating the sentence, or part of it, as a bag of words). However, a strength is that the rules may be compiled into a compact finite-state transducer, which is traceable; for each translation, it is possible to know exactly which rules were called.
3.2 Rule compilation
The set of rules R expressed in XML is not processed directly; they are compiled into a finite-state transducer (see Figure 3). In this transducer, each transition is labelled with a symbol representing an SL pattern and a symbol representing an operation on a TL pattern. Both SL and TL patterns are compiled into regular expressions (finite-state recognisers), and stored in a lookup table.
The transducer is defined as ⟨Q, V, δ, q0, qF⟩, where Q is the set of states; V = Σ × Γ is the alphabet of transition labels, with Σ the set of input symbols and Γ the set of output symbols; δ : Q × V → Q is the transition function; q0 is the initial state (nothing matched); and qF is the final state, indicating that a complete pattern has been matched. Rules in R are paths from q0 to qF.
[Figure 3: A finite-state transducer representing four lexical selection rules; each arc is a transition between a pattern matching an SL lexical form and an operation with a pattern matching a TL lexical form, e.g. estación : select('station'), estación : select('season'), seca : skip(), lluviosa : skip(), de : skip(), <det> : skip(), año : skip().]
3.3 Rule application
In order to apply the rules to an input sentence, we use a variant of the best-coverage algorithm described by Sánchez-Martínez et al. (2009). We try to cover the maximum number of words of each SL sentence by using the longest possible rules; the motivation for this is that the longer the rules, the more accurate their decisions may be expected to be, because they integrate more context.
To compute the best coverage, a dynamic-programming algorithm (Alg. 1) is applied, which starts a new search in the automaton at every new word in the sentence to be translated, and uses a set of alive states A in the automaton and a map M that, for each word in the sentence, returns the best coverage up to that word together with its score.
Algorithm 1 uses four external procedures: WORDCOUNT(s) returns the number of words in the string s; RULELENGTH(c) returns the number of words of the rule matched by state c; NEWCOVERAGE(cov, c) computes a new coverage by adding to coverage cov the rule recognised by state c; finally, BESTCOVERAGE(a, b) receives two coverages and returns the one using the least possible number of rules.
In the current implementation, if two different coverages use the same number of rules, then the former is overwritten. This may not be the most adequate approach to dealing with the problem, and we intend to study other approaches.
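To make the behaviour of the three operations concrete, the following minimal Python sketch applies one matched rule to the candidate translations of the words in its span. It is not taken from the Apertium implementation: the function name, the data layout and the toy patterns are invented for illustration, and a real implementation would be driven by the compiled transducer over full lexical forms (lemma plus tags) rather than by an explicit rule list over bare strings.

```python
import re

def apply_rule(rule, candidates):
    """Apply one matched lexical-selection rule to candidate translations.

    rule:       one (operation, tl_pattern) step per SL word in the rule's
                span; operation is 'select', 'remove' or 'skip'.
    candidates: one list of possible TL translations per SL word, as
                produced by lexical transfer.
    """
    out = []
    for (op, tl_pattern), translations in zip(rule, candidates):
        if op == "skip":                      # context word: pass through
            out.append(list(translations))
            continue
        matching = [t for t in translations if re.search(tl_pattern, t)]
        if op == "select":                    # keep only the matching ones
            kept = matching
        else:                                 # 'remove': drop the matching ones
            kept = [t for t in translations if t not in matching]
        # a choice of this sketch: never leave a word with no translation
        out.append(kept if kept else list(translations))
    return out

# Toy usage: 'estación' followed by 'seca' selects the reading 'season'.
rule = [("select", r"^season$"), ("skip", None)]
candidates = [["station", "season"], ["dry"]]
print(apply_rule(rule, candidates))           # [['season'], ['dry']]
```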
4 Experiment
In order to test the flexibility of the module, we decided to learn rules from an existing knowledge source, i.e. a parallel corpus, and test the module on a well-known task for the evaluation of MT.
The experimental setup follows the training of the baseline system in the shared task on MT at WMT11 (Callison-Burch et al., 2011), with the following differences: in place of the default Moses perl-based tokeniser, tokenisation was done using the Apertium morphological analyser (Cortés-Vaíllo and Ortiz-Rojas, 2011). The corpus was also not lowercased; instead the case of known words was changed to the dictionary case as found in the Apertium monolingual dictionary.
We use version 6.0 of the EuroParl corpus (Koehn, 2005), and take the first 1.4 million lines for training.¹ We used the Apertium English to Spanish pair apertium-en-es² as it is one of the few pairs that has dictionaries with more than one alternative translation per word.³
¹ The remaining lines were held out for future use.
² Available from http://wiki.apertium.org/wiki/SVN; SVN revision: 35684
³ The lexical selection module is available as free/open-source software in the package apertium-lex-tools. This paper uses SVN revision: 35799
4.1 Learning lexical selection rules from a parallel corpus
The procedure to learn rules from a parallel corpus is as follows. We first morphologically analyse and disambiguate for part-of-speech both the SL and TL sides of the corpus. These are then word-aligned with GIZA++ (Och and Ney, 2003).
We then pass the SL side of the corpus through the lexical-transfer stage of the MT system we are learning the rules for; this gives three sets of sentences: the tagged SL sentences, the tagged TL sentences and the possible translations of the SL words into the TL yielded by the bilingual dictionary.
We take these three sets, and extract from the parallel corpus those sentence pairs for which at least one lexically ambiguous SL word is aligned to a word in the TL which is also found in the bilingual dictionary. This step is necessary as, in order to be translated by the rest of the system, the alternative translation must appear in the bilingual dictionary.
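A minimal sketch of this extraction filter is given below; it assumes the word alignments are available as a position-to-lemma mapping and the bilingual dictionary as a plain lookup table, both simplifications of the real data structures used in the pipeline.

```python
def keep_sentence_pair(sl_words, alignments, bidix):
    """Decide whether a parallel sentence pair is useful for rule learning.

    sl_words:   list of SL lemmas in the sentence.
    alignments: dict mapping SL positions to the aligned TL lemma
                (from the GIZA++ word alignment).
    bidix:      dict mapping each SL lemma to its possible TL translations
                in the bilingual dictionary.
    Keep the pair if at least one ambiguous SL word (more than one
    dictionary translation) is aligned to a TL word that the bilingual
    dictionary can actually produce.
    """
    for i, sl in enumerate(sl_words):
        translations = bidix.get(sl, [])
        if len(translations) < 2:          # unambiguous word: nothing to learn
            continue
        tl = alignments.get(i)
        if tl is not None and tl in translations:
            return True
    return False

# Toy usage
bidix = {"estación": ["station", "season"], "seco": ["dry"]}
print(keep_sentence_pair(["estación", "seco"], {0: "season", 1: "dry"}, bidix))  # True
```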
After extracting these sentence pairs we have 332,525 sentences for training, that is, around 24% of them.
For each of these extracted sentences, we extract n-grams (trigrams and five-grams) of context around the ambiguous SL word(s) which belong to the categories of adjective, noun and verb. We then count up how many times we see this context appearing along with each of the translations in the TL. If a given possible translation appears aligned to a word in a given context more frequently than other possible translations, then we generate a rule which selects the aligned translation in that same context over other translations in that context.
4.2 Systems
To evaluate the lexical selection module, and our method for obtaining rules from a parallel corpus, we compare it against four baseline systems:
• freq: Frequency defaults; the MT system is tested with rules that select the most frequent translation in the TL corpus. This is equivalent to a unigram TL model.
• alig: The TL word which is most frequently aligned to the given SL word is chosen. This correspondence must also appear in the bilingual dictionary of the MT system.
• ling: The linguistic defaults, here the translations considered 'most adequate' by the human linguist who wrote the system, are selected.
• tlm: The highest scoring translation out of the possible translations for the whole sentence as chosen by a 5-gram language model of the Spanish side of the EuroParl corpus trained with IRSTLM (Federico et al., 2008).

Algorithm 1 OPTIMALCOVERAGE: Algorithm to compute the best coverage of an input sentence.
Require: s: SL sentence to translate
  A ← {q0}
  i ← 1
  while i ≤ WORDCOUNT(s) do
    M[i] ← ∅
    for all q ∈ A do
      for all c ∈ Q such that ∃t : δ(q, (s[i] : t)) = c do
        A ← A ∪ {c}
        if c = qF then
          M[i] ← BESTCOVERAGE(M[i], NEWCOVERAGE(M[i − RULELENGTH(c)], c))
        end if
      end for
      A ← A − {q}
    end for
    i ← i + 1
    A ← A ∪ {q0}   /* to start a new search from the next word */
  end while
  return M[i − 1]

We also tested three different sets of rules in our lexical-selection module:
• all: No filtering. All of the generated rules are included.
• filt1: The rules whose contexts appear only once in the training corpus are removed.
• filt2: Rules which include the tags for subordinating conjunction and full stop are excluded, as well as rules where the translation selected is under half of the total frequency of the word. So, for example, if a word has three translations with frequency 10 and one translation with frequency 15, the rule selecting this translation would be excluded, as 15 < (45 / 2), even though it is the most frequent.
The motivation for excluding rules which contain subordinating conjunctions and full stops is that they are likely to be noisy. The motivation for excluding rules with under half of the total frequency of the word is to try and keep only those rules that we are really sure will improve translation quality overall. These are rather coarse heuristics, and the subject of rule filtering merits further investigation (see section 6).
5 Evaluation
To evaluate the systems, we extracted the set of sentences from the 2,489-sentence News Commentary corpus which contained at least one ambiguous open-category word in the SL aligned with a TL word in the reference translation which could be generated by the MT system.
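(As an aside on the filtering heuristics above, the filt2 criterion can be stated compactly. The sketch below is illustrative only; in particular, the tag names standing for subordinating conjunction and full stop are placeholders rather than the pair's actual tag inventory.)

```python
def passes_filt2(selected_freq, all_freqs, context_tags,
                 excluded_tags=("cnjsub", "sent")):
    """Illustrative version of the filt2 heuristic.

    selected_freq: frequency of the translation the rule would select.
    all_freqs:     frequencies of all translations of the SL word.
    context_tags:  tags occurring in the rule's context pattern.
    excluded_tags: placeholder names for subordinating conjunction and
                   full stop, not the actual tag inventory.
    A rule is kept only if its context contains none of the excluded tags
    and the selected translation accounts for more than half of the total
    frequency mass of the word.
    """
    if any(tag in excluded_tags for tag in context_tags):
        return False
    return selected_freq > sum(all_freqs) / 2

# The example from the text: frequencies 10, 10, 10 and 15;
# 15 is the most frequent reading, but 15 < 45 / 2, so the rule is dropped.
print(passes_filt2(15, [10, 10, 10, 15], ["n"]))   # False
```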
The alignments be-\ntween SL and TL words in the corpus were obtained\nby adding it to a separate copy of the EuroParl\ncorpus to the one used for training, and running\nGIZA++ again.\nIn total, this gave 434 sentences (9,463 tokens)\nto be evaluated (approximately 17%). The average\nnumber of translations per word was 1.08.4We\nperformed two evaluation tasks, the first was the\nerror rate of the lexical selection module, and the\nsecond was a full translation task.\nFor the first, we made a labelled corpus (similar\nto that in (Vickrey et al., 2005)) by disambiguat-\ning the lexical transfer output using the reference\ntranslation. Out of the 434 sentences this gave us\na total of 604 disambiguated words. This could\nbe considered an oracle, that is the best result the\nMT system could get if it just chose the translation\nlooking at the reference translation. The column\nError in Table 2 gives the lexical-selection error\nrate over this test corpus, that is the number of times\nthe given system chooses a translation which is not\nequivalent to what the oracle would choose.\nThe second task was to compare the systems us-\ning the common evaluation metrics BLEU (Papineni\net al., 2002) and Word error rate (WER), based on\nthe Levenshtein distance (Levenshtein, 1965).\nThis second task is not ideal for evaluating the\ntask of a lexical selection module as the perfor-\n4This number is low and indicates that there is work to be done\non expanding the dictionaries of the system for lexical choice.\n218\nsrc: If it doesn’t reduce social benefits . . .\nref: Si no reduce los subsidios sociales . . .\nalig: Si no reduce beneficios sociales . . .\nfilt2: Si no reduce prestaciones sociales . . .\nTable 1: Translation of segment #56 in the News Commentary\ncorpus by two of the systems.\nmance of the module will depend greatly on (a) the\ncoverage of the bilingual dictionaries of the RBMT\nsystem in question, and (b) the number of reference\ntranslations. It is included only as it is a common\nmetric used to evaluate MT systems.\nIn addition, when there is only one reference\ntranslation (such as in the News Commentary cor-\npus), the system may easily generate a more ade-\nquate translation of a word, which is then not found\nin the reference. For example, in Table 1, presta-\nciones ‘benefits, provision, assistance’ is a more\nadequate translation for ‘benefits’ than beneficios\n‘profit, advantage, benefits’, but as it does not appear\nin the reference, this translation improvement is not\ncounted. However, without annotating a corpus\nmanually with all possible translation possibilities,\nor using several reference translations it is difficult\nto see how this problem may be overcome.\nTable 2 reports the 95% confidence interval for\ntheBLEU ,WER and ERROR scores achieved on the\ntest set by the seven systems. Confidence inter-\nvals were calculated through the bootstrap resam-\npling (Efron and Tibshirani, 1994) method as de-\nscribed by (Koehn, 2004; Zhang and V ogel, 2004).\nBootstrap resampling was carried out for 1,000 iter-\nations.\nGiven the small differences in score between the\nindividual systems, we also performed pair boot-\nstrap resampling between the two highest scoring\nsystems ( aligandfilt2) to see if the difference was\nstatistically significant. 
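A sketch of the pair bootstrap procedure is shown below; the sentence-level scoring functions it takes are an assumption of the sketch rather than part of the original tooling.

```python
import random

def pair_bootstrap(score_a, score_b, n_sentences, iterations=1000, seed=0):
    """Pair bootstrap resampling over sentence-level scores.

    score_a, score_b: functions mapping a list of sentence indices to a
                      corpus-level score for system A and system B
                      (e.g. BLEU recomputed over the resampled subset);
                      these scorers are assumptions of this sketch.
    Returns the fraction of resamples in which system A beats system B.
    """
    rng = random.Random(seed)
    wins = 0
    for _ in range(iterations):
        # resample the test set with replacement
        sample = [rng.randrange(n_sentences) for _ in range(n_sentences)]
        if score_a(sample) > score_b(sample):
            wins += 1
    return wins / iterations
```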
Over 1,000 iterations, the\nfilt2 system was shown to offer an improved transla-\ntion 95% of the time for both the BLEU and ERROR\nscores.\n6 Concluding remarks\nWe have presented a lexical-selection module suit-\nable for inclusion in a RBMT system, and shown\nhow the rules it uses may be learnt from a paral-\nlel corpus. In pair bootstrap resampling, the sys-\ntem offers a statistically significant improvement\nin translation quality over the next highest scoring\nsystem.\nIn the future we would like to investigate the\nfollowing: The first is the possibility of learning\nthe rules without any parallel corpus. We aim tofollow the same principles as (Sánchez-Martínez\net al., 2008) where a monolingual TL corpus was\nused to improve the performance of an HMM part-\nof-speech tagger. Some initial experiments have\nalready been conducted to this effect, however the\nobserved performance of the TL model in choos-\ning between different translations from an RBMT\nsystem gives an indiciation of the difficulty of im-\nproving over the ‘linguistic default’ baseline.\nWhile the learning from parallel corpora is only a\ndemonstration, we would like to look into methods\nto address the problem of filtering/pruning the gen-\nerated rules to remove those which do not offer an\nimprovement in translation quality, as it would also\napply to learning rules without parallel corpora.\nThe system has also been built with the possi-\nbility of weighted rules, we would like to investi-\ngate the possibility of automatically assigning rule\nweights to more reliable rules.\nAcknowledgements\nWe are thankful for the support of the Span-\nish Ministry of Science and Innovation through\nproject TIN2009-14009-C02-01, and the Universi-\ntat d’Alacant through project GRE11-20. We also\nthank Sergio Ortiz Rojas for his constructive com-\nments and ideas on the development of the system,\nand the anonymous reviewers for comments on the\nmanuscript.\nReferences\nBrandt, M. D., H. Loftsson, H. Sigurþórsson, and F. M.\nTyers. 2011. Apertium-icenlp: A rule-based ice-\nlandic to english machine translation system. In Pro-\nceedings of the 16th Annual Conference of the Eu-\nropean Association of Machine Translation, pages\n217–224.\nCallison-Burch, C., P. Koehn, C. Monz, and O. F.\nZaidan, editors. 2011. Proceedings of the Sixth\nWorkshop on Statistical Machine Translation. Asso-\nciation for Computational Linguistics.\nCarpuat, M. and D. Wu. 2007. How phrase sense\ndisambiguation outperforms word sense disambigua-\ntion for statistical machine translation. In Proceed-\nings of the 11th International Conference on Theo-\nretical and Methodological Issues in Machine Trans-\nlation, pages 43–52.\nCortés-Vaíllo, S. and S. Ortiz-Rojas. 2011. Using\napertium linguistic data for tokenization to improve\nmoses smt performance. In Proceedings of the In-\nternational Workshop on Using Linguistic Informa-\ntion for Hybrid Machine Translation, LIHMT-2011,\npages 29–35.\n219\nSystem Total rules Called Error BLEU WER\nfreq - - [42.8, 50.3] [0.1687, 0.1794] [0.712, 0.725]\nling 667 473 [25.4, 30.7] [0.1772, 0.1879] [0.710, 0.723]\nalig 600 533 [19.3, 25.8] [0.1786, 0.1892] [0.709, 0.723]\ntlm - - [37.0, 44.9] [0.1708, 0.1817] [0.714, 0.727]\nall 77,077 503 [21.3, 28.2] [0.1779, 0.1885] [0.710, 0.723]\nfilt1 9,978 503 [20.3, 26.9] [0.1782, 0.1889] [0.710, 0.723]\nfilt2 2,661 532 [17.9, 24.7] [0.1789, 0.1896] [0.709, 0.723]\nTable 2: Evaluation results for the seven systems on the news commentary test corpus.\nDagan, I. and A. Itai. 1994. 
Word sense disambigua-\ntion using a second language monolingual corpus.\nComputational Linguistics, 40(4):563–596.\nEfron, B. and R. J. Tibshirani. 1994. An introduction\nto the Bootstrap. CRC Press.\nFederico, M., N. Bertoldi, and M. Cettolo. 2008.\nIrstlm: an open source toolkit for handling large\nscale language models. In Proceedings of Inter-\nspeech, Brisbane, Australia.\nForcada, M. L., M. Ginestí-Rosell, J. Nordfalk,\nJ. O’Regan, S. Ortiz-Rojas, J. A. Pérez-Ortiz,\nF. Sánchez-Martínez, G. Ramírez-Sánchez, and F. M.\nTyers. 2011. Apertium: a free/open-source platform\nfor rule-based machine translation. Machine Trans-\nlation, 25(2):127–144.\nIde, N. and J. Véronis. 1998. Word sense disambigua-\ntion: The state of the art. Computational Linguistics,\n24(1):1–41.\nKarlsson, F., A. V outilainen, J. Heikkilä, and A. Anttila.\n1995. Constraint Grammar: A language indepen-\ndent system for parsing unrestricted text. Mouton de\nGruyter.\nKoehn, P. 2004. Statistical significance tests for ma-\nchine translation evaluation. In Proceedings of the\nConference on Empirical Methods in Natural Lan-\nguage Processing, pages 388–395.\nKoehn, P. 2005. Europarl: A parallel corpus for statis-\ntical machine translation. In Proceedings of the 10th\nMT Summit, pages 79–86.\nKoehn, P. 2010. Statistical Machine Translation. Cam-\nbridge University Press, United Kingdom.\nLevenshtein, V . I. 1965. Binary codes capable of\ncorrecting deletions, insertions, and reversals. Dok-\nlady Akademii Nauk SSSR, 163(4):845–848. English\ntranslation in Soviet Physics Doklady, 10(8), 707–\n710.\nMelero, M., A. Oliver, T. Badia, and T. Suñol. 2007.\nDealing with bilingual divergences in mt using tar-\nget language n-gram models. In Proceedings of the\nMETIS-II Workshop: New Approaches to Machine\nTranslation, CLIN17, pages 19–26.Och, F. J. and H. Ney. 2003. A systematic compari-\nson of various statistical alignment models. Compu-\ntational Linguistics, 29(1):19–51.\nPapineni, K., S. Roukos, T. Ward, and W-J. Zhu. 2002.\nBleu: A method for automatic evaluation of machine\ntranslation. In Proceedings of the 40th Annual Meet-\ning of the Assoc. Comp. Ling., pages 311–318.\nSánchez-Martínez, F., J. A. Pérez-Ortiz, and M. L. For-\ncada. 2007. Integrating corpus-based and rule-based\napproaches in an open-source machine translation\nsystem. In Proceedings of the METIS-II Workshop:\nNew Approaches to Machine Translation, CLIN17,\npages 73–82.\nSánchez-Martínez, F., J. A. Pérez-Ortiz, and M. L. For-\ncada. 2008. Using target-language information to\ntrain part-of-speech taggers for machine translation.\nMachine Translation, 22(1-2):29–66.\nSánchez-Martínez, F., M. L. Forcada, and A. Way.\n2009. Hybrid rule-based – example-based MT: Feed-\ning apertium with sub-sentential translation units. In\nProceedings of the 3rd Workshop on EBMT, pages\n11–18.\nVenkatapathy, S. and S. Bangalore. 2007. Three\nmodels for discriminative machine translation using\nglobal lexical selection and sentence reconstruction.\nInProceedings of SSST, NAACL-HLT/AMTA Work-\nshop on Syntax and Structure in Statistical Transla-\ntion, pages 152–159.\nVickrey, D., L. Biewald, M. Teyssier, and D. Koller.\n2005. Word-sense disambiguation for machine trans-\nlation. In Proceedings of Human Language Technol-\nogy Conference and Conference on Empirical Meth-\nods in Natural Language Processing, pages 771–\n778.\nWiechetek, L., F. M. Tyers, and T. Omma. 2010. Shoot-\ning at flies in the dark: Rule-based lexical selection\nfor a minority language pair. 
LNAI, 6233:418–429.
Zhang, Y. and S. Vogel. 2004. Measuring confidence intervals for the machine translation evaluation metrics. In Proceedings of The 10th International Conference on Theoretical and Methodological Issues in Machine Translation.",
"main_paper_content": null
}
|
{
"decision": "Unknown",
"reviews": []
}
| 0 | 0 |
[] |
[] |
{
"id": "eSGq2N6z5Oy",
"year": null,
"venue": "EAMT 2015",
"pdf_link": "https://aclanthology.org/W15-4918.pdf",
"forum_link": "https://openreview.net/forum?id=eSGq2N6z5Oy",
"arxiv_id": null,
"doi": null
}
|
{
"title": "Evaluating machine translation for assimilation via a gap-filling task",
"authors": [
"Ekaterina Ageeva",
"Mikel L. Forcada",
"Francis M. Tyers",
"Juan Antonio Pérez-Ortiz"
],
"abstract": "Ekaterina Ageeva, Mikel L. Forcada, Francis M. Tyers, Juan Antonio Pérez-Ortiz. Proceedings of the 18th Annual Conference of the European Association for Machine Translation. 2015.",
"keywords": [],
"raw_extracted_content": "Evaluating machine translation for assimilation via a gap-filling task\nEkaterina Ageeva\nSchool of Linguistics\nHigher School of Economics\nMoscow, Russia\[email protected]\nFrancis M. Tyers\nHSL-fakultetet\nUiT Norgga árktalaš universitehta\n9017 Romsa, Norway\[email protected] L. Forcada\nDept. Llenguatges i Sistemes Informàtics\nUniversitat d’Alacant\nE-03071 Alacant, Spain\[email protected]\nJuan Antonio Pérez-Ortiz\nDept. Llenguatges i Sistemes Informàtics\nUniversitat d’Alacant\nE-03071 Alacant, Spain\[email protected]\nAbstract\nThis paper provides additional observa-\ntions on the viability of a strategy indepen-\ndently proposed in 2012 and 2013 for eval-\nuation of machine translation (MT) for as-\nsimilation purposes. The strategy involves\nhuman evaluators, who are asked to restore\nkeywords (tofill gaps) in reference transla-\ntions. The evaluation method is applied to\ntwo language pairs, Basque–Spanish and\nTatar–Russian. To reduce the amount of\ntime required to prepare tasks and analyse\nresults, an open-source task management\nsystem is introduced. The evaluation re-\nsults show that the gap-filling task may be\nsuitable for measuring MT quality for as-\nsimilation purposes.\n1 Introduction\nAs suggested by Church and Hovy (1993), modern\nmachine translation (MT) systems may be divided\ninto two broad categories according to their pur-\npose: post-editing and assimilation systems. The\noutput of the former is intended to be transformed\ninto text comparable to human translation; the lat-\nter systems’ goal is to enhance user’s comprehen-\nsion of text. Both kinds may be evaluated, either\nto control for quality in the development process or\nto compare the systems. Importantly, according to\nChurch and Hovy (1993), the evaluation methods\nmust closely consider the system’s primary pur-\npose.\nDespite the fact that, as a result of widespread\nusage of online MT, assimilation (or gisting)\nc�2015 The authors. This article is licensed under a Creative\nCommons 3.0 licence, no derivative works, attribution, CC-\nBY-ND.is currently the most frequent application of\nMT (in 2012, daily output of Google Trans-\nlate matched the yearly output of human transla-\ntions1), few methodologies are established for as-\nsimilation evaluation of MT. The methods include\npost-editing and comparison by bilingual experts\n(Ginestí-Rosell et al., 2009), and multiple choice\ntests (Jones et al., 2007; Trosterud and Unham-\nmer, 2012). These approaches are often costly\nand prone to subjectivity: see the discussion by\nO’Regan and Forcada (2013). As an alternative,\nthe modification ofclozetesting (Taylor, 1953)\nwas introduced for assimilation evaluation,first by\nTrosterud and Unhammer (2012) as a supplemen-\ntary technique, and then by O’Regan and Forcada\n(2013) as a stand-alone method. Prior to this, cloze\ntests have been used to evaluate raw MT qual-\nity (Van Slype, 1979; Somers and Wild, 2000).\nWhile these authors ask informants tofill gaps in\nMT output, Trosterud and Unhammer (2012) and\nO’Regan and Forcada (2013) ask informants tofill\ngaps in the reference (human) translation. A des-\nignated number of keywords is removed from the\nhuman-translated sentences. The evaluators are\nthen asked tofill the gaps with suitable words with\nand without the help of MT output. The gap-filling\ntask models how well users comprehend the key\npoints of the text, as it is roughly equivalent with\nanswering questions. 
Thus, the method does not\ndirectly evaluate the quality of machine-produced\ntext, but rather its usefulness in understanding the\nmeaning of the original text.\nThe gap-filling method has been successfully\nused to evaluate the Basque–English Apertium\nlanguage pair. In this work we extend the evalua-\n1http://googleblog.blogspot.co.uk/2012/\n04/breaking-down-language-barriersix-years.\nhtml137\ntion to two more language pairs: Basque–Spanish\nand Tatar–Russian. The former pair, while not pro-\nducing output suitable for post-editing, is a good\nexample of an assimilation MT system. In ad-\ndition, Basque and Spanish are not mutually un-\nderstandable, and therefore constitute a good pair\nfor evaluation. For the latter pair, the evaluation\nserved as a quality check in the period of active\ndevelopment during the Google Summer of Code\n2014 programme. In addition to evaluating, we ex-\nplore the previously unconsidered aspects of the\nexperiment: the correlation between evaluators’\nscores, and the effects of the linguistic domain\nof texts and the percentage of gaps in a sentence.\nTo facilitate the evaluation, we introduce an auto-\nmated system which creates task sets from paral-\nlel corpora given a range of parameters (number\nof gaps in a sentence, hint type, gapfiller, etc.),\nchecks evaluators’ answers, and calculates and re-\nports generalized results. This system is integrated\ninto the Appraise MT evaluation platform (Feder-\nmann, 2012); the code is open-source and is avail-\nable on GitHub.2\nWe anticipate that the assessed MT systems will\ncontribute to the users’ understanding of text, that\nis, the users will show better results in gap-filling\ntasks when assisted with MT. We also expect to see\ndifferent results depending on text domain and the\nrelative number of gaps in a sentence.\nThe paper is organised as follows: in section 2\nwe describe the gap-filling method for assimilation\nevaluation: the task layout, the choice of words,\nand how the tasks are generated. Section 3 in-\ntroduces the experimental material, the evaluators,\nthe distribution of tasks and the evaluation proce-\ndure. In section 4 we describe and discuss the ex-\nperiment results. Finally, section 5 draws some\nconclusions. This paper is concerned primarily\nwith assimilation evaluation; for a deeper discus-\nsion on evaluation see e.g. (Koehn, 2010, ch. 8).\n2 Methodology\nThis section discusses the reasoning behind the\ngap-filling method and task structure. The gap-\nfilling method of evaluating machine translation\nfor assimilation purposes is based on the follow-\ning hypothesis: a reader’s understanding of a given\ntext correlates with the number of words they are\nable to correctly restore in the text. Therefore, the\nbase of an assimilation task is a (reference) sen-\n2https://github.com/Sereni/Appraisetence, where some of the words are blacked out,\nor removed. The sentence is produced by a human\n(as opposed to machine-translated), and it is in the\nlanguage known to evaluators, which is also the\ntarget languageof the machine translation system.\nThe additional elements of the task are what we\ncall hints, or extra sentences that help the partic-\nipant to understand the main sentence. There are\ntwo types of hints:first, thesource, which is se-\nmantically equivalent to thereference, also human-\nproduced, but in the source language of the pair.\nThe second type is themachine-translatedhint,\nwhich comes from the machine translation of the\nsourcesentence. 
Table 1 shows a sample task, and\nFigure 1 shows the task in the online evaluation\nenvironment.\nIn the course of the experiment, following\nO’Regan and Forcada (2013), we offer these hint\ncombinations:\nReference sentence only:The participants are\nasked tofill the gaps without being given\nany context. This task serves as a baseline\nscore and as an indicator of gaps that can be\ncompleted using common knowledge or lan-\nguage intuition (e.g. idioms and strong collo-\ncations). For example, in an English phrase\n‘Jack ordered <...> and chips’, one of the nat-\nural answers would be ‘fish’. Such an answer,\nhowever, may be unrelated to the meaning of\nthe source text, and may be given on the basis\nof collocation only.\nReference sentence and source sentence:By\nsetup, the participants have no command of\nthe source language, however, it may help\nthem tofill in proper nouns or loan words.\nReference sentence and MT hint:In addition to\nthe reference sentence, the participants see\nthe source sentence translated via the MT sys-\ntem, in this case Apertium (Forcada et al.,\n2011). This type of task is used for measur-\ning the contribution of machine translation to\nunderstanding the gist of the text.\nReference sentence and both hints:This task is\nadded to check whether MT and source pro-\nvide complementary hints.\nIn order to prepare the evaluation questions, we\ndetermine and remove keywords from the refer-\nence sentences. We consider two parameters: the\nlist of allowed parts of speech (PoS), and the num-\nber of gaps relative to sentence length (“gap den-138\nRef Ayudas económicas para el tratamiento de toxicomanías en comunidades terapéuticas no concertadas.\nTask Ayudas económicas para el { } de toxicomanías en comunidades terapéuticas no concertadas.\nSrc Komunitate terapeutiko itundu gabeetan toxikomaniak tratatzeko diru-laguntzak ematea.\nMT Comunidad terapéutico pactar gabeetan toxikomaniak las-ayudas de dinero para tratar dar.\nTable 1:An example group of sentences showing the gapped sentence and hint types. Reference, MT and task sentences are in\nSpanish, the source sentence is in Basque.\nRef Примерно полчаса;вам нужно выйти через7остановок,потом пройти ещё около100метров.\n10% Примерно полчаса;вам нужно выйти через7 { },потом пройти ещё около100метров.\n20% { }полчаса;вам нужно{ }через7остановок, { }пройти ещё около100метров.\n30% Примерно полчаса;вам нужно{ }через7 { },потом пройти{ }около100 { }.\nTable 2:Example of different gap percentage settings for a Russian reference sentence.\nFigure 1:An example set of sentences in the online environment. The task is Russian legal text with 30% gaps.139\nsity”). For the evaluations described in this paper\nwe use gap densities of 10, 20 and 30 percent (Ta-\nble 2), and the following parts of speech: noun (in-\ncluding proper nouns), adjective, adverb and lexi-\ncal verb (as opposed to auxiliary verb).\nFor each sentence, the list of candidate key-\nwords is prepared. It is composed of all the words\nthat fall into the allowed PoS list. The number of\ngaps in the sentence is calculated based on sen-\ntence length and specified gap density. All refer-\nence sentences are longer than 10 words. Finally,\nthe required number of keywords is selected from\nthe candidate list in such a manner that the gaps\nare distributed evenly throughout the sentence. We\nstart at a random word in a sentence and check\nwhether it is a keyword candidate. 
If yes, we re-\nmove it, and movenwords forward, going back to\nthe beginning of sentence if necessary. The step\nlengthnis the sentence length divided by the de-\nsired number of gaps. If the word is not a keyword,\nor has already been removed, we look at the next\nword instead. The process is repeated until the des-\nignated number of words has been removed, or un-\ntil there are no more words in the keyword list.\nKeyword removal could be one of the most\ntime-consuming steps in task preparation. It nor-\nmally requires human effort, because we would\nlike to determine the words that contribute the\nmost to understanding the text as opposed to re-\nmoving random words. In our automatic setup,\nthe above procedure is performed by a script in-\ntegrated into the task generation pipeline. Parts of\nspeech are determined with Apertium’s morpho-\nlogical analysers. To control for homonymy, we\nonly allow the word into the candidate list if all of\nits possible part of speech attributions are on the\nPoS list. For example, if we only allow nouns on\nthe word list, and the word \"fly\" receives two pos-\nsible part of speech attributions from the tagger,\nnoun and verb, it is not considered for the candi-\ndate list.\nHaving prepared the sentence sets, we assemble\nthem into XML formatted for the Appraise plat-\nform.\n3 Experimental set-up\nIn this section we will discuss the evaluators, the\nevaluation procedure, and the tasks in more detail.\nFor each experiment we called for native speak-\ners of target language of the language pair (i.e.\nSpanish and Russian) who had no command ofsource language of the pair (Basque and Tatar, re-\nspectively). The knowledge was self-reported, and\nthe participants were not asked about any other\nlanguages they may know. Eleven evaluators par-\nticipated in the Basque–Spanish experiment, and\n28 in Tatar-Russian (although not everyone com-\npleted the task in full, see discussion). The ma-\njority of Russian participants were aged 20–25,\nwith university degrees or in the process of ob-\ntaining them. Although we have not asked the\nparticipants about their knowledge of languages\nother than Tatar and Russian, it is reasonable to as-\nsume that most Russian participants knew English\nto some extent. The Spanish participants were uni-\nversity staff with background in computer science.\nBy design, our gap-filling tasks require a human\ntranslation (reference) of source sentences. Call-\ning for a human translator, however, would signif-\nicantly increase the resources needed for evalua-\ntion. We therefore use parallel text sources, which\nprovide the same sentence in two languages simul-\ntaneously:\n1. For Basque–Spanish, from the corpus of le-\ngal texts “Memorias de traducción del Servi-\ncio Oficial de Traductores del IV AP”;3\n2. For Tatar-Russian, from the following sources\non three different topics:\n(a) Casual conversations, from a textbook4\nof spoken Tatar;\n(b) Legal texts, from the Constitution and\nlaws5of Tatarstan;\n(c) News, from the President of Tatarstan\nwebsite6.\nEach set features 36 pairs of sentences. For the\nBasque–Spanish experiment the pairs were drawn\nrandomly from the corpora; for Tatar–Russian,\ncompiled by hand by the developer of the language\npair in Apertium. The Basque–Spanish experiment\nfeatured 94, 181 and 272 gaps in the 10, 20 and\n30 % tasks, respectively. 
For Tatar–Russian these\nnumbers are 272, 396 and 724, due to longer sen-\ntences used in task creation.\n3http://tinyurl.com/ivaptm2\n4Литвинов И.Л.Я начинаю говорить по-татарски.\nКазань:Татарское кн.изд-во, 1994. — 320с. ISBN\n5–298–00463–6 (стр. 219, 220, 232, 233, 234)\n5http://tatarstan.ru\n6http://president.tatarstan.ru/140\n3.1 Procedure\nThe evaluations took place online, in a sys-\ntem called Appraise (Federmann, 2012), which is\ndesigned specifically for various MT evaluation\ntasks. We adapted the code of Appraise to accom-\nmodate for the gap-filling tasks. The tasks were\nuploaded into the system and manually distributed\nbetween the participants by the following rules:\n1. Each participant evaluates every sentence\n(understood as a succession of words), a to-\ntal of 36;\n2. these sentences are divided into 4 groups of 9,\none for each evaluation mode (see section 2);\n3. in total, all sentences of the set are evaluated\nwith 10, 20 and 30% of words removed;\n4. each participant may encounter a given sen-\ntence in only one of the percentage variations;\n5. each sentence-mode-percentage combination\nis evaluated by more than one participant.\nThe participants are given the instructions in\ntheir native language; these instructions are re-\npeated above each task in the evaluation system.\nFor the participants’ convenience, the body of\nquestions is split into smaller groups which al-\nlow multiple evaluation sessions. The instructions\nare the following: read all the available hints and\nfill each gap with one suitable word, guessing if\nunsure. Participants’ answers are recorded and\nmarked correct or incorrect automatically. In addi-\ntion, the time taken tofill the gaps in one sentence\nis recorded.\nThis variety of the gap-filling task requires open\nanswers, and it is therefore possible that the par-\nticipants may provide words thatfit the gaps well,\nbut do not match the original answer. To account\nfor these cases, we process all the answers to de-\ntect possible synonyms (a method suggested by\nO’Regan and Forcada (2013)). An answer is con-\nsidered a candidate synonym if it is given by two or\nmore evaluators, and it does not match the answer\nword. We record each candidate synonym along\nwith the answer key and the context sentence. For\nexample, the wordasumiris the original answer\nin the Spanish sentenceAprender a jugar y di-\nvertirse en el agua sin asumir riesgos(’Learning\nto play and have fun in the water without taking\nrisks’). However, two or more evaluators gave a\ndifferent answer,correr(correr riesgos, ’running\nrisks’). Based on this data, an expert, who is na-\ntive speaker of the target language and who has notparticipated in the evaluations, decides whether the\ncandidate synonym is an acceptable replacement to\nthe answer key in the given context. We then check\nparticipants’ results against the compiled synonym\nlist and increase scores where appropriate. On\naverage, the scores improve by three percentage\npoints in all evaluation modes. Candidate syn-\nonyms are extracted automatically from the eval-\nuators’ responses, and each individual score is au-\ntomatically updated according to the synonym list.\nThe synonym lists for Basque–Spanish and\nTatar–Russian contain 52 and 25 words, respec-\ntively. The time taken to compile each list depends\non the number of candidate synonyms, and in our\ncase was approximately 30 minutes.\n4 Results and discussion\nThe results are presented in this section. 
Table\n3 shows the proportion and standard deviation of\ncorrect answers depending on evaluation mode and\ngap density. The evaluators’ correct answer per-\ncentage is averaged over the number of evaluators.\nIn addition to the percentage of correct answers we\nkept a record of the time taken tofill the gaps in\none sentence. To reduce the noise from partici-\npants who were distracted during evaluation, when\ncalculating times we remove all the results over 6\nminutes (the statistical mode is approximately two\nminutes). The typical time taken to complete one\nquestion varies from under one minute for tasks\nwithout hints and few gaps, to approximately two\nminutes for tasks with more hints and gaps.\nWe expect the scores obtained in different task\nmodes inside one gap density to decrease when go-\ning from tasks with MT and source hint to tasks\nwith MT hint only, to tasks with source hint only,\nandfinally, to tasks with no hint. We also expect\nthat with the increase in gap density, the time taken\ntofill the gaps should also increase, and the per-\ncentage of correct answers should decrease.\nThe latter trend holds: the average time taken\ntofill the gaps increases and the average percent-\nage of correct answers decreases as the relative\nnumber of gaps goes up. The larger number of\ngaps in the sentence makes it more difficult to\npredict the answer based on the context, and also\nleaves more room for mistakes. Exploring differ-\nent percentage-mode combinations, we may note\nthat the 10% no-hint tasks take the least time to\ncomplete. We would have expected longer com-\npletion time, since the participant must come up141\nDensityBasque–Spanish Tatar–Russian\nMT & Src MT Src No hint MT & Src MT Src No hint\n10% 62±32 58±28 40±39 49±40 57±42 64±41 54±43 46±41\n20% 65±30 70±27 31±28 31±30 65±31 60±33 46±31 39±32\n30% 48±26 40±24 26±20 18±18 59±28 56±26 40±28 35±30\nTable 3:Average number of gaps successfullyfilled (%), using a synonym list, for each language pair in all four task modes.\nwith their own answer unassisted. However, in\nthe no-hint task the participant is required to read\nonly one (reference) sentence, as opposed to two\nor three (reference and hints) in other tasks. Also,\nthe number of gaps in 10%-gap tasks is low, as it\nnever exceeds three. We found that, as opposed to\ntrying to devise the best word for no-hint gaps, the\nparticipants often resorted tofiling these gaps with\nrandom words, which takes little time.\nWe will now discuss the percentage of correct\nanswers based on task type. In general, tasks with\nMT hints score higher than tasks without MT hints.\nThis aligns well with our expectations and sug-\ngests than machine translation helps to understand\nthe provided text. In addition, tasks with source\nhints are completed better than tasks without hints,\nand the same relation holds between MT+source\nand MT-only types of tasks. In view of the rel-\natively large standard deviations, the significance\nof the hints’ contribution was tested using a linear\nregression model. The data points (y) were rep-\nresented as an individual evaluator’s average score\n(the number of correct answers divided by the to-\ntal number of answers) in each of the percentage-\nhint combinations. Two separate models were cre-\nated: one for no-hint (x= 0) vs MT-hint (x= 1)\ntasks, and another for no-hint (x= 0) vs source-\nhint (x= 1) tasks. 
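Such a model can be fitted, for instance, with SciPy's linregress; the sketch below assumes the per-evaluator average scores are available as plain lists and is meant only to illustrate the slope test, not to reproduce the exact analysis.

```python
from scipy.stats import linregress

def hint_effect(scores_without_hint, scores_with_hint):
    """Test whether a hint type changes evaluators' average scores.

    Each argument is a list of per-evaluator average scores (correct
    answers divided by total answers). The hint condition is coded as
    x = 0 (no hint) or x = 1 (hint present); the returned p-value is for
    the null hypothesis that the slope b of y = a + bx is zero.
    """
    x = [0] * len(scores_without_hint) + [1] * len(scores_with_hint)
    y = list(scores_without_hint) + list(scores_with_hint)
    result = linregress(x, y)
    return result.slope, result.pvalue
```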
Given the null hypothesis that\nthe slopebof the regression liney=a+bxequals\nzero, the contribution of MT hint is found to be\nsignificant on thep <0.001level, while the con-\ntribution of the source hint is significant only with\np <0.162.\nTwo records in the data do not align with our ex-\npectations: the no-hint 10% sentences in Basque–\nSpanish, which scored significantly higher than the\nsource-hint in the same category, and MT+source\n10% sentences in Tatar–Russian, which we would\nhave expected to score higher than the correspond-\ning MT-only task. In thefirst case, this is largely\ndue to the use of synonyms list. Before taking\nsynonyms into account, the scores were 32 and\n35 percent for source and no-hint tasks, respec-\ntively. This still shows a small difference in fa-vor of no-hint tasks. However, the latter percent-\nage increases significantly after we extend the an-\nswer list with synonyms. Such an increase sug-\ngests that, in this case, the content words were\nrestored by semantic context rather than through\nstrong collocation. The second pattern, low scores\nin Tatar–Russian 10% MT+source, does not stem\nfrom the task content. Instead, it is the result of the\nfixed order of tasks: the participants have always\nbeen given MT+source 10% sentencesfirst, fol-\nlowed by other task types. The participants have\nnot received any training tasks before the main\nevaluations. Therefore, it is possible that the ac-\ncommodation period is responsible for lower-than-\nexpected scores in this mode of evaluation.\nIt remains questionable whether we can com-\npare results for different gap densities. The 10%,\n20%, and 30% sets contained the same sentences.\nHowever, in each case different words were re-\nmoved. It appears that some content words are eas-\nier tofill than the others. This may explain why in\nBasque–Spanish the 20% MT tasks are completed\nwith better accuracy than 10% tasks.\nIt is worth noting that many participants re-\nported feeling frustrated in the course of evalu-\nations, especially while working on the no-hint\ntasks. The latter required suggesting the words\nwith very little context, which led some of the par-\nticipants to giving random words for answers, or\nleaving the space blank. 6 out of 49 participants\nquit the experiment before completing it. Consid-\nering the importance of receiving the full set of\nevaluations, we must address the issue of partici-\npant motivation in the upcoming experiments. It\nmay be beneficial to offer monetary compensation\nfor the evaluators’ efforts (in our case, they were\nvolunteers).\n4.1 Annotator agreement\nAfter obtaining the results we calculated Krippen-\ndorff’s alpha (Krippendorff, 1970) measure to rep-\nresent annotator agreement, shown in Table 4.\nWe selected this measure because of its com-\npatibility with more than two annotators per task142\nDensityBasque–Spanish Tatar–Russian\nMT & Src MT Src No hint MT & Src MT Src No hint\n10% 0.496 0.517 0.400 0.124 0.598 0.459 0.711 0.517\n20% 0.714 0.700 0.358 0.275 0.740 0.667 0.473 0.261\n30% 0.559 0.430 0.406 0.300 0.534 0.581 0.411 0.412\nTable 4:Krippendorff Alpha measure of annotator agreement, for each language pair in all four task modes.\nand missing data (not all the gaps were evaluated).\nTo calculate Krippendorff’s alpha we used an algo-\nrithm implementation by Thomas Grill,7dividing\nthe answers in each gap into two categories: cor-\nrect and incorrect. The previously obtained syn-\nonym lists were taken into account, i.e. 
if the two\nanswers are different but both correct, they fall into\none category. The measure was calculated sepa-\nrately for each hint and percentage combination.\nThe interpretation of Krippendorff’s alpha\nvaries depending on the application. One of the\ngeneral guidelines suggested by Landis and Koch\n(1977) for kappa-like measures (which includes\nKrippendorff’s Alpha) is as follows:k <0in-\ndicates “poor” agreement, 0 to 0.2 “slight”, 0.21\nto 0.4 “fair”, 0.41 to 0.6 “moderate”, 0.61 to 0.8\n“substantial”, and 0.81 to 1 “near perfect”.\nIn general, the level of annotator agreement is\nrelatively high. As the MT and MT+source hints\nare introduced, the agreement increases (measures\ncloser to 1): the annotators are more consistently\ncorrect or incorrect in each given sentence. The\nagreement measure for the same sentences with-\nout hints is closer to zero, which attests to the re-\nliability of our methodology. We note the outlier\nscore in Tatar–Russian 10% source tasks, which\nhas the most contribution from the news texts. This\nset of sentences contains many loan words, which\nhave similar form in Tatar and Russian (e.g. presi-\ndent, minister, championship), and are understood\nby Russian speakers. The gaps with loan words\nhave mostly beenfilled correctly, while there was\nsome disagreement in other gaps.\n4.2 Results for different domains\nFor the Tatar–Russian language pair the partici-\npants were offered texts from three different do-\nmains (in equal proportions): casual conversations,\nlegal texts and news. The results by domains are\ndisplayed in table 5. The MT system used in the\nevaluation has been targeted to translate texts from\nall three of the domains. Taking into consideration\n7http://grrrr.org/data/dev/krippendorff_\nalpha/the above discussion of 10% MT+Source tasks,\nwe observe similar results across the three cate-\ngories. Note that the source sentences paired with\nMT improve participants’ performance in casual\ntexts, compared to MT-only task mode. This may\nbe due to the fact that many words are borrowed\nfrom Russian into Tatar, and are in fact understood\nby Russian speakers.\n5 Conclusions\nWe have conducted assimilation evaluation of two\nApertium translation directions: Basque–Spanish\nand Tatar–Russian. The results suggest that this\nevaluation method reflects the contribution of MT\nto users’ understanding of text. The version of the\ntoolkit used in this experiment may be downloaded\nfrom our repository.8\nThe experiments may easily be repeated for any\nlanguage pair (provided a parallel corpus) and any\nmachine translation system. Based on our expe-\nrience, we would like to suggest the following\namendments to the procedure:\n1. As reported by O’Regan and Forcada (2013),\nunless the evaluation is targeted at a specific\ntext domain, it may be beneficial to include\na stylistic variety of texts in the initial corpus.\nNeighboring sentences on the same topic may\nassist the users in gap-filling tasks;\n2. If possible, increase the number of evaluators,\nor reduce the number of questions per partic-\nipant. In the above experiments each partici-\npantfilled from 110 to 187 gaps, divided into\nsmall groups. Reducing the amount of work\nmay increase task completion rate;\n3. 
To account for the adaptation period, pro-\nvide training tasks before the main evalua-\ntions take place.\nAs a consideration for future work, it may be\nbeneficial to compare the results of evaluation by\n8https://github.com/Sereni/Appraise/tree/\n1e9d735faee64d1b97fb343ab111ace6a64509d7143\nEvaluation mode\nDomain Gap percentage MT & Src MT Src No hint\nCasual10% 64±45 64±43 62±46 53±44\n20% 73±32 63±36 41±28 38±31\n30% 70±31 60±24 39±27 38±33\nLegal10% 53±40 68±35 39±38 33±35\n20% 61±25 66±24 50±34 48±34\n30% 50±26 48±29 40±27 34±29\nNews10% 53±38 60±44 57±42 49±39\n20% 59±34 49±35 47±32 29±29\n30% 58±22 61±22 41±30 35±27\nTable 5:Tatar–Russian Average number of gaps successfullyfilled (%), using a synonym list, for three different domains, in\nall four task modes.\ngap-filling method with the traditional evaluation\nmetrics, as well as with human evaluation.\nAcknowledgments:This work has been partly\nfunded by the Spanish Ministerio de Economía y\nCompetitividad through project TIN2012-32615.\nEkaterina Ageeva’s work has been supported by\nGoogle Summer of Code 2014 through the Aper-\ntium project. We thank the volunteers who partici-\npated in the evaluations.\nReferences\nChurch, K. W. and Hovy, E. H. (1993). Good ap-\nplications for crummy machine translation.Ma-\nchine Translation, 8(4):239–258.\nFedermann, C. (2012). Appraise: an open-source\ntoolkit for manual evaluation of MT output.The\nPrague Bulletin of Mathematical Linguistics,\n98:25–35.\nForcada, M. L., Ginestí-Rosell, M., Nordfalk, J.,\nO’Regan, J., Ortiz-Rojas, S., Pérez-Ortiz, J. A.,\nSánchez-Martínez, F., Ramírez-Sánchez, G.,\nand Tyers, F. M. (2011). Apertium: a free/open-\nsource platform for rule-based machine transla-\ntion.Machine translation, 25(2):127–144.\nGinestí-Rosell, M., Ramırez-Sánchez, G., Ortiz-\nRojas, S., Tyers, F. M., and Forcada, M. L.\n(2009). Development of a free Basque to Span-\nish machine translation system.Procesamiento\ndel Lenguaje Natural, 43:187–195.\nJones, D., Herzog, M., Ibrahim, H., Jairam, A.,\nShen, W., Gibson, E., and Emonts, M. (2007).\nILR-based MT comprehension test with multi-\nlevel questions. InHLT 2007: The Conference\nof the North American Chapter of the Associa-\ntion for Computational Linguistics; CompanionVolume, Short Papers, pages 77–80. Association\nfor Computational Linguistics.\nKoehn, P. (2010).Statistical machine translation.\nCambridge University Press.\nKrippendorff, K. (1970). Estimating the reliabil-\nity, systematic error and random error of interval\ndata.Educational and Psychological Measure-\nment, 30(1):61–70.\nLandis, J. R. and Koch, G. G. (1977). The mea-\nsurement of observer agreement for categorical\ndata.biometrics, pages 159–174.\nO’Regan, J. and Forcada, M. L. (2013). Peeking\nthrough the language barrier: the development\nof a free/open-source gisting system for Basque\nto English based onapertium.org.Proce-\nsamiento del Lenguaje Natural, 51:15–22.\nSomers, H. and Wild, E. (2000). Evaluating ma-\nchine translation: the Cloze procedure revisited.\nInTranslating and the Computer 22: Proceed-\nings of the Twenty-second International Confer-\nence on Translating and the Computer.\nTaylor, W. L. (1953). \"Cloze procedure\": a new\ntool for measuring readability.Journalism quar-\nterly.\nTrosterud, T. and Unhammer, K. B. (2012). Eval-\nuating North Sámi to Norwegian assimilation\nRBMT. InProceedings of the Third Inter-\nnational Workshop on Free/Open-Source Rule-\nBased Machine Translation (FreeRBMT 2012).\nVan Slype, G. (1979). 
Critical study of methods\nfor evaluating the quality of machine transla-\ntion.Prepared for the Commission of Euro-\npean Communities Directorate General Scien-\ntific and Technical Information and Information\nManagement. Report BR, 19142.144",
"main_paper_content": null
}
|
{
"decision": "Unknown",
"reviews": []
}
| 0 | 0 |
[] |
[] |
{
"id": "hFFkzgNiSjc",
"year": null,
"venue": "EAMT 2011",
"pdf_link": "https://aclanthology.org/2011.eamt-1.22.pdf",
"forum_link": "https://openreview.net/forum?id=hFFkzgNiSjc",
"arxiv_id": null,
"doi": null
}
|
{
"title": "Rapid rule-based machine translation between Dutch and Afrikaans",
"authors": [
"Pim Otte",
"Francis M. Tyers"
],
"abstract": "Pim Otte, Francis M. Tyers. Proceedings of the 15th Annual conference of the European Association for Machine Translation. 2011.",
"keywords": [],
"raw_extracted_content": "Rapid rule-based machine translation between Dutch and Afrikaans\nPim Otte\nMendelcollege\nPim Mulierlaan 4\n2024 BT Haarlem\[email protected] M. Tyers\nDept. Lleng. i Sist. Inform.\nUniversitat d’Alacant\nE-03070 Alacant\[email protected]\nAbstract\nThis paper describes the design, develop-\nment and evaluation of a machine transla-\ntion system between Dutch and Afrikaans\ndeveloped over a period of around a month\nand a half. The system relies heavily on\nthe re-use of existing publically available\nresources such as Wiktionary, Wikipedia\nand the Apertium machine translation plat-\nform. A method of translating compound\nwords between the languages by means of\nleft-to-right longest match lookup is also\nintroduced and evaluated.\n1 Introduction\nDutch is a West-Germanic language spoken by\nnearly 23 million people, mostly from the Nether-\nlands and Flanders, the Dutch-speaking part of\nBelgium, and a minority living in former colonies\nof the Netherlands, such as Suriname, Aruba and\nthe Netherlands Antilles. Dutch, as it is today,\nstarted developing in the 16th century in the ma-\njor trade cities, such as Amsterdam and Antwerp\n(Shetter and Ham, 2002). Afrikaans is spoken\nby at least 5 million people, mainly in South\nAfrica but also in Namibia. Afrikaans is a vari-\nety of Dutch that originates from that spoken by\nthe Dutch colonists of the Cape Colony. In 1925\nAfrikaans replaced Dutch as an official language\nin South Africa, to be the joint official language to-\ngether with English (Donaldson, 1993). Currently,\nAfrikaans is one of the eleven national languages.\nIn this paper we will describe the development\nofapertium-af-nl , a bi-directional Afrikaans\nand Dutch machine-translation system based on\nthe Apertium platform. As Afrikaans and Dutch\nc/circlecopyrt2011 European Association for Machine Translation.are largely mutually intelligible, this machine\ntranslation system focuses on dissemination, the\ntranslation of text for the purpose of being post-\nedited and then being published.\nThis is not the first system to work with this lan-\nguage pair, van Huyssteen and Pilon (2009) de-\nscribe a rule-based system to convert in a single\ndirection from Dutch to Afrikaans. The reason we\nhave chosen to work with a rule-based approach,\ninstead of the ubiquitous corpus-based/statistical\napproach, is that the latter needs parallel corpora\nfor the two languages. The only freely avail-\nable Afrikaans–Dutch corpus, is KDE41, which is\ntranslated via English and domain specific. We\nfeel that these corpora do not approach the quality\nrequired for the statistical approach, which makes\nthe rule-based approach favourable.\nThe paper is laid out as follows: firstly, we will\ndescribe the reuse and creations of resources. We\nwill then discuss several grammatical features of\nAfrikaans and Dutch and how these were treated\nin the machine-translation system. We will then\npresent a section in which the system is evaluated.\nFinally, we will discuss the system and future work\nthat could be done.\n2 Method\nThe system is based on the Apertium machine\ntranslation platform.2The platform was originally\naimed at the Romance languages of the Iberian\npeninsula, but has also been adapted for other,\nmore distantly related, language pairs. The whole\nplatform, both programs and data, are licensed un-\nder the Free Software Foundation’s General Pub-\n1http://opus.lingfil.uu.se/KDE4.php\n2http://www.apertium.orgMik el L. 
F orcada, Heidi Depraetere, Vincen t V andeghinste (eds.)\nPr o c e e dings of the 15th Confer enc e of the Eur op e an Asso ciation for Machine T r anslation , p. 153\u0015160\nLeuv en, Belgium, Ma y 2011\nlic Licence3(GPL) and all the software and data\nfor the 25 supported language pairs (and the other\npairs being worked on) is available for download\nfrom the project website.\napertium-af-nl has been developed over the\ncourse of one and a half months. The vast majority\nof the work has been done by a Dutch secondary\nschool student supervised by a PhD student. How-\never, since for the latter this was not paid and for\nthe former it was not for school, there were very\nfew full 8-hour days of work during this period.\n2.1 Existing resources\nOne existing resource was reused with very lit-\ntle modification: the morphological transducer for\nAfrikaans, which was created during a separate,\ncurrently dormant, project on English–Afrikaans\nmachine translation. However, some changes were\nmade. The structure of verb entries was wholly\nrevised, and both infrequent words and words for\nwhich a translation could not be found, were re-\nmoved.\n2.2 Resources created\n2.2.1 Dutch morphological transducer\nThere are a number of existing morphological\nanalysers for Dutch (Bosch et al., 2007; Laureys\net al., 2004; DePauw et al., 2004), some of which\nalso function as morphological generators. Our de-\ncision to make a new morphological analyser was\nbased on the following rationale:\n•Licence: Neither the CELEX morphological\ndatabase for Dutch (Laureys et al., 2004), nor\nthe finite-state morphological transducer in\nthe FLaV oR project (DePauw et al., 2004) are\navailable under a free licence. As our objec-\ntive is to publish and distribute the system de-\nscribed here, this made them unusable.\n•Bidirectional: We wanted the dictionary to be\nable to be used for both morphological anal-\nysis and generation. Other analysers, for ex-\nample the one described in Bosch et al. (2007)\nare only designed for analysis.\n•Tagset: When creating a new machine trans-\nlation system, it is convenient if the tags\nwhich represent the same or similar features\nare equivalent in the morphological analy-\nsers/generators for each of the languages, e.g.\n3http://www.fsf.org/licensing/licenses/\ngpl.htmlDutch\nEtymology\nhoofd- (“main, head”) + stad (“city”)\nPronunciation\nNoun\nhoofdstad m. (plhoofdsteden, dimin hoofdstadje, dimin pl hoofdstadjes)\n1. capital city\nFigure 2: English language Wiktionary article for Dutch\nhoofdstad ‘capital city’ http://en.wiktionary.org/\nwiki/hoofdstad\n<sg>on both sides for ‘Singular’ instead of\n<sg>on one side and ev4on the other.\nThe open categories (nouns, verbs, adjec-\ntives, adverbs) for the Dutch morphological anal-\nyser were extracted semi-automatically from Wik-\ntionary,5a free online, multilingual dictionary that\noften includes inflectional information. On the En-\nglish Wiktionary, in the case of Dutch nouns it of-\nten (although not always) gives the gender and the\nplural and diminutive forms (see for example Fig-\nure 2). The category Dutch nouns in the English\nWiktionary has a total of 10,610 entries, while the\ncorresponding category on the Dutch Wiktionary\nhas 13,176 entries.\nClosed categories were added by hand based on\na grammar of Dutch (Shetter and Ham, 2002).\n2.2.2 Bilingual dictionary\nThe bilingual dictionary has been developed in\nseveral ways. 
Exact matches from the dictionaries\nwere automatically added to the bilingual dictio-\nnary. Proper names were added in the way that is\ndescribed in Tyers and Pienaar (2008). After that\ncognates were added. There are several small com-\nmon spelling differences between Afrikaans and\nDutch. The bilingual dictionary was expanded fur-\nther by categorising these spelling differences and\nautomatically adding translations to the bilingual\ndictionary if the spelling difference was the only\ndifference between the two words. Finally, some\nentries were done by hand. This included closed\ncategories, but also words that frequently appeared\nin Wikipedia which were not yet in the bilingual\ndictionary.\n4ev stands for enkelvoud ‘singular’ and is from\nthe tagset of the Tadpole morphological analyser\n(http://ilk.uvt.nl/tadpole/).\n5http://www.wiktionary.org154\nmorph.\nanalyserPOS\ntaggerlexical\ntransfer\nmorph.\ngeneratorpost-\ngeneratorSL\ntext\nTL\ntextdeformatter\nreformatterchunker interchunk postchunkstructural transferFigure 1: Modular architecture of the Apertium MT platform. The compound analysis and generation modules are included at\nthe morphological analyser and morphological generator stages respectively.\n2.3 Transfer rules\nA total of 32 transfer rules have been written for\nthe direction Afrikaans →Dutch and 15 for the di-\nrection Dutch→Afrikaans. Some of these transfer\nrules are discussed below.\n2.3.1 Afrikaans to Dutch\nAfrikaans hardly uses the word-attached geni-\ntives(Donaldson, 1993). The word seis used to\nindicate possesion. Therefore, a transfer rule has\nbeen added to remove the seand instead make the\npreceding noun genitive. Note that in Dutch the\ngenitive is not the preferred translation. A con-\nstruction using the word van‘of’ would be prefer-\nable, but would need restructuring of the phrase.\nThe verb hˆe‘have’ is the only verb used in\nAfrikaans as auxiliary verb with a past participle.\nIn Dutch the verbs hebben ‘have’ and zijn‘be’ are\nboth used, the latter mostly in cases of movement\nand a few exceptional cases, the former in all oth-\ners. To handle this, two transfer rules have been\nadded, to handle the patterns ‘h ˆe + past participle’\nand ‘h ˆe + nie + past participle + nie’, which change\nthe verb ‘to have’ into the verb ‘to be’, when the\npast participle is found in a list of verbs that go\nwith ‘zijn’.\nNouns in Afrikaans do not exhibit gender, where\nnouns in Dutch can be one of four genders, neuter,\nmasculine, feminine or common. The definite ar-\nticle het,diein Dutch must agree with the noun it\nmodifies. A number of transfer rules were written\nfor patterns such as ‘determiner + noun’, ‘deter-\nminer + adjective + noun’, etc., which propagate\nthe gender of the head noun to the article.\nIn Afrikaans, finite verbs do not agree in person\nand number with the subject of the sentence, where\nin Dutch they do. A rule was added which transfers\nthe person and number of subject pronouns to the\nverb following them. This is a limited rule as it\ndoes not deal with non-pronominal subjects.\nNegation in Afrikaans and Dutch differs in the\nuse, in Afrikaans, of a negation scope marker nie.Translating from Afrikaans to Dutch this marker\nneeds to be removed after ‘nie’ and also after other\nnegatives such as niemand ‘no one’, niks ‘noth-\ning’ and geen ‘not’. 
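To make the behaviour of this rule concrete, a minimal sketch of the Afrikaans→Dutch direction is given below in Python. It operates on plain word lists rather than on the tagged lexical units that Apertium transfer rules actually manipulate, and the helper name and the set of negatives are purely illustrative, not part of apertium-af-nl:

NEGATIVES = {"nie", "niemand", "niks", "geen"}  # illustrative, not exhaustive

def drop_scope_marker(tokens):
    # tokens: lower-cased words of one Afrikaans clause
    out = list(tokens)
    # drop a clause-final "nie" only if an earlier negative licenses it
    if out and out[-1] == "nie" and any(t in NEGATIVES for t in out[:-1]):
        out.pop()
    return out

# drop_scope_marker(["ek", "het", "niks", "gesien", "nie"])
# -> ["ek", "het", "niks", "gesien"]

In the system itself this behaviour is expressed as a transfer rule over tagged words rather than raw word forms.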
Translating from Dutch to\nAfrikaans this marker needs to be added.\n2.3.2 Compound words\nBoth Afrikaans and Dutch are languages in\nwhich words combine very productively into com-\npounds. For example the words infrastruktuuron-\ntwikkelingsplan ‘infrastructure development plan’\nandlugmagbasis ‘air force base’. As it is imprac-\ntical to introduce all compound words into the lex-\nicons, compound word analysis is performed on\nall unknown words. The analysis process works\nlongest-match left-to-right using the same trans-\nducer as is used for morphological analysis. This\nprocess only looks for compounds made up of just\nnouns, because they are more frequent than other\ncompounds. Results are restricted by two spe-\ncial symbols which do not appear in the output,\ncompound-L and compound-R . The compound-L\nsymbol is used for forms that can only appear on\nthe left side (e.g. surface form) of a compound,\nwhere compound-R is used for forms that can ei-\nther appear in a compound, or end it. Epenthet-\nics, that is linking letters that occur between com-\npound words, are also taken care of heuristically in\nthis way. For example the -s- in ontwikkelingsplan ,\nthe -en- in pannenkoek and the -e- in paardebloem .\nNotice that the epenthetic -e- is not productive in\nDutch, that is, it is not used in new compounds.\nThere are some limitations to this method. For\nexample: although both macht- andmachts- can be\nanalysed as an internal part of a compound, only\none of them can be generated. Which one will\nbe generated is decided based on the inflectional\nparadigm to which the word belongs.\n2.3.3 Separable verbs\nAnother feature of Afrikaans and Dutch is sepa-\nrable verbs, for example the word afslaan ‘to turn,155\nto decline, to stop’. This can appear in the follow-\ning forms afslaan ,sla af ,afgeslagen . Additionally\nthe two constituent parts of the verb in sla af , the\nverb itself slaand the particle afmay be separated\nby a word or phrase, Ik sla het aanbod af. ‘I de-\ncline the offer’.\nThe following cases are supported,\n•Infinitive: afslaan→afslaan ‘to turn’\n•Participle: afgeslagen →afgeslaan ‘turned’\n•Non-separated: Ik sla af naar rechts. →Ek\nslaan af na regs ‘I turned to the right’\n•Subordinate: Toen ik de bal afsloeg →Toe ek\ndie bal afslaan ‘When I teed off the ball’\nVerbs separated by a word or phrase are cur-\nrently translated word-for-word, so the particle and\nverb are translated. This causes a problem when\nthe verb is not constructed equally in Afrikaans\nand Dutch. Also, when one part of the verb, does\nnot exist as a stand-alone verb, it is not recognised\nby the analyser. for example in aankondig ‘to an-\nnounce’ kondig is not a word. Thus ... kondig ...\naancannot be analysed currently.\nA module is under development to handle sepa-\nrable verbs, but is currently in the prototype stage.\nThere are currently 484 separable verbs defined\nin the bilingual dictionary. Of these, 439 are\nseparable in both languages, 33 are separable in\nAfrikaans but not in Dutch, and 12 are separable\nin Dutch but not Afrikaans.\n2.4 Current status\nIn terms of dictionary entries, the pair currently has\n7,258 entries in the Afrikaans morphological dic-\ntionary, 7,048 in the Dutch morphological dictio-\nnary and 5,982 in the Bilingual dictionary.\n3 Evaluation\nThe system was evaluated in five ways. The first\nwas the coverage6of the system. The second was\nan evaluation of the compound analysis part of the\nsystem – new with respect to other Apertium lan-\nguage pairs. 
The third was the word error rate\n(WER) of the translations produced when compar-\ning with a corrected sentence. The fourth was an\n6Here coverage is defined as na¨ıve coverage , that is for any\ngiven surface form at least one analysis is returned. This may\nnot be complete.Corpus Tokens Coverage\nafWikipedia 2,926,943 82.1%±0.8\nnlWikipedia 18,569,183 80.5%±0.7\nTable 1: Na¨ıve vocabulary coverage for the two morphologi-\ncal analysers.\nCorpus Corr. Seg. Corr. Trans.\ntop-1,000 914 776\nrandom-1,000 957 801\nTable 2: Compound word accuracy in analysis and transla-\ntion.\nanalysis of the errors found by the second evalua-\ntion and finally a comparative evaluation with ex-\nisting systems.\n3.1 Coverage\nLexical coverage of the system is calculated over\nthe Afrikaans and Dutch Wikipedias: Both corpora\nwere split into four sections and coverage calcu-\nlated over each of the sections in order to calculate\nthe standard deviation.\nThe database dump of the Dutch Wikipedia\nwas from the 1st November 2010, and that of\nthe Afrikaans Wikipedia from the 31st July 2009.\nBoth database dumps were stripped of formatting.\n3.2 Compound words\nIn order to test the accuracy of the word com-\npounding/decompounding strategy we tested two\nlists of words which received compound analyses\nfrom the Wikipedia. This test was only conducted\nin the Afrikaans→Dutch direction, but we expect\nsimilar results in the other direction. The first set\nof sentences was constructed by taking the 1,000\nmost frequent words which received a compound\nanalysis from the corpus, the second was by taking\na list of all the words and selecting 1,000 pseudo-\nrandomly.7A total of 6,866 unknown words from\nthe corpus received a compound analysis.\nWe include results for both correct segmenta-\ntion (meaning the word was decompounded cor-\nrectly) and correct translation (meaning the word\nwas translated correctly). This allows us to take\ninto account the free ride phenomenon, whereby\nan incorrect analysis may lead to a correct transla-\ntion. There were 19 free rides in the top-1,000, and\n5 free rides in the random-1,000.\n7Using the Unix unsort program.156\n3.3 Quantitative\nThe translation quality was measured using word\nerror rate (WER). This metric is based on the\nLevenshtein distance (Levenshtein, 1965) and was\ncalculated for each of the sentences using the\napertium-eval-translator tool.8A metric\nbased on word error rate was chosen to be able to\ncompare the system against systems based on sim-\nilar technology, and to assess the usefulness of the\nsystem in a real setting, that is of translating for\ndissemination.\nFour sets of 100 sentences were selected\npseudo-randomly from Wikipedia.9The first\ntwo sets (C1, C3) contained no unknown words,\nwhereas the second two sets could contain un-\nknown words (C2, C4). This is to give an idea of\nthe performance of the system in ‘ideal’ and ‘real-\nistic’ settings.\nFor the Dutch to Afrikaans direction, the sen-\ntences were translated by the system, and then\npostedited by a native speaker. 
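For reference, the word-level computation behind this metric is easy to reproduce; the function below is an illustrative Python re-implementation and not the apertium-eval-translator code used to produce the reported figures:

def word_error_rate(hypothesis, reference):
    # word-level Levenshtein distance, normalised by the reference length
    hyp, ref = hypothesis.split(), reference.split()
    d = [[0] * (len(hyp) + 1) for _ in range(len(ref) + 1)]
    for i in range(len(ref) + 1):
        d[i][0] = i
    for j in range(len(hyp) + 1):
        d[0][j] = j
    for i in range(1, len(ref) + 1):
        for j in range(1, len(hyp) + 1):
            cost = 0 if ref[i - 1] == hyp[j - 1] else 1
            d[i][j] = min(d[i - 1][j] + 1,        # word missing from hypothesis
                          d[i][j - 1] + 1,        # extra word in hypothesis
                          d[i - 1][j - 1] + cost) # match or substitution
    return d[len(ref)][len(hyp)] / max(len(ref), 1)

# word_error_rate("deze bezetting is erkend",
#                 "deze bezetting is in 1721 erkend")  # -> 2/6, i.e. about 33%

Here the edit count is normalised by the length of the post-edited reference, which is the usual convention for WER.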
For the Afrikaans to\nDutch direction, we took the reference translation,\nas postedited by the native speaker and used it as a\nsource of Dutch to be translated to Afrikaans, then\nused the original Afrikaans sentence as a reference\ntranslation.\nConfidence intervals were calculated through\nthe bootstrap resampling method as described by\nKoehn (2004).\n3.4 Qualitative\nIn order to inform ourselves of where the effort\ncould be expanded in order to improve the sys-\ntem we undertook a qualitative evaluation by re-\nviewing the translation errors from the Afrikaans\nto Dutch direction and categorising them as in Ta-\nble 4. An example of each of the kind of error is\nfound below. In all examples, the first sentence is\nAfrikaans, the second the Dutch machine transla-\ntion, the third the post-editted Dutch and the fourth\nis the English translation of the sentence.\n3.4.1 Unknown word\nThe example in (1) shows two errors caused by\nunknown words. The first error Nystad is afree\nride, meaning that although it is an error it does\nnot affect the final quality of the translation.\n8http://sourceforge.net/project/\nshowfiles.php?group_id=143781&package_\nid=206517 ; Version 1.0, 4th October 2006.\n9The test corpora can be downloaded from removed for reviewError type Count % of total\nSyntactic transfer 235 42.4\n- Verb concordance 99 17.9\n- Auxiliary verbs 13 2.3\n- Relative pronoun 11 2.0\n- Capitalisation 10 1.8\n- Chunking error 9 1.6\n- Other 93 16.8\nUnknown word 147 26.5\nDisambiguation 106 19.1\nMorphology 28 5.1\nPolysemy 23 4.2\nMultiword 6 1.1\nCompounding 6 1.1\nSeparable verb 3 0.5\nTotal 554 100\nTable 4: Contribution to total error by type. Syntactic transfer\nerrors are split into further categories.\n(1) Hierdie besetting is in 1721 met die Ver-\ndrag van Nystad erken.\nDeze bezetting is in 1721 met het Verdrag\nvan*Nystad *erken .\nDeze bezetting is in 1721 met het Verdrag\nvanNystad erkend .\n‘This occupation has been acknowledged\nin 1721 with the Treaty of Nystad.’\nThe unknown words are marked with asterisk.\n3.4.2 Morphology\nMost errors in the morphological analyser were\ncaused by a flaw in the automatic extraction pro-\ncess. The example in (2) shows a morphologi-\ncal error due to gender. The country DDR ‘GDR’\nis feminine, which should go with the determiner\n‘de’. However, because it is marked as neuter in\nthe morphological analyser, it is translated with\n‘het’. The vast majority of countries are in fact\nneuter, but DDR is not.\n(2) In die DDR volg Erich Honecker Walter\nUlbricht as partyleier op.\nInhetDDR volgen *Erich *Honecker\n*Walter *Ulbricht dan *partyleier op.\nIndeDDR volgt Erich Honecker Walter\nUlbricht als partijleider op.\n‘In the DDR Erich Honecker succeeds Wal-\nter Ulbricht as party leader.\nErrors of this type could be fixed with a more thor-\nough revision of the morphological analyser.157\nDir. System C1 C2 C3 C4\naf-nlApertium 16.625±1.465 23.405±1.235 15.225±1.735 22.195±2.515\nGoogle 9.485±1.115 10.575±1.795 7.63±1.45 12.185±1.545\nnl-afApertium 15.435±1.885 21.72±1.06 18.375±2.785 24.975±2.075\nGoogle 21.81±1.72 25.71±1.22 24.31±3.22 30.965±2.385\nTable 3: Accuracy for the test corpora for the two systems as measured by Word Error Rate with 95% confidence interval.\n3.4.3 Disambiguation\nOne of the biggest disambiguation problems for\nAfrikaans is distinguishing between short infinitive\nand present tense, which are morphologically the\nsame. 
In example (3), in the Afrikaans sentence,\nthe verb volg ‘follow’ could be present tense or\ninfinitive. It has been tagged as infinitive, where\npresent tense is the correct option.\n(3) Hier volg ’n lys van hoofstede.\nHier volgen een lijst van hoofdsteden.\nHier volgt een lijst van hoofdsteden.\n‘Here follows a list of capital cities.’\nDistinguishing between these two analyses is a dif-\nficult problem for a bigram part-of-speech tagger.\n3.4.4 Multiword\nExample (4) is causing problems because it is\nhard, if not impossible, to catch the meaning of\nthe Afrikaans dwarsoor in one Dutch word. An\nappropriate multiword has solved the initial prob-\nlem, but this causes additional issues with the ar-\nticle of wereld ‘world’ as that is included in the\nphrase over de hele ‘all over the’.\n(4) Duitse argitekte pak projekte dwarsoor die\nwˆereld aan.\nDuitse architecten pakken projecten over\nde hele de wereld aan.\nDuitse architecten pakken projecten over\nde hele wereld aan\n‘German architects are taking on projects\nall over the world.’\n3.4.5 Syntactic transfer\nIn (5) the singular verb does not match the plural\nsubject, the noun vrouwen ‘women’. This could be\nsolved by identifying the subject of the sentence\nand matching the plurality of the verb with it.\n(5) Die belangrikste rol wat die vroue egter in\ndie stryd teen apartheid gespeel het, ...\nDe belangrijkste rol wat de vrouwen echter\nin de strijd tegen apartheid gespeeld heeft ,...\nDe belangrijkste rol die de vrouwen echter\nin de strijd tegen apartheid gespeeld\nhebben , ...\n‘The most important part that women\nplayed in the struggle against apartheid, ...’\nAfrikaans uses the verb hˆe‘have’ with all past par-\nticiples, whereas Dutch uses the verb zijn‘be’ in\ncases of, amongst others, verbs that imply move-\nment. This could be fixed by tracking the auxiliary\nverb in a sentence and alter it if the past participle\nis in a list of movement verbs.\n(6) Die sand het dan saam met die water\nweggespoel.\nHet zand heeft dan samen met het water\nweggespoeld.\nHet zand isdan samen met het water\nweggespoeld.\n‘The sand was washed away along with the\nwater.’\nAnother issue is relative pronouns. Afrikaans al-\nways uses the word wat, where the equivalent\nDutch word depends on the antecedent. In Dutch\nwatis used when i.e. the antecedent is an entire\nsentence. In this case (7) the antecedent is for-\nmules , for which the appropriate relative pronoun\nisdie.\n(7) Pi kom voor in baie formules in meetkunde\nwat sirkels en sfere betrek.\nPi komen voor in vele *formules in\nmeetkunde watcirkels en *sfere betrekken.\nPi komt voor in vele formules in\nmeetkunde die cirkels en bollen be-\ntrekken.\n‘Pi appears in many formulas in geometry\nwhich concern circles and spheres.’\nCapitalisation is generally straightforward. An ex-\nception is when a sentence starts with an apostro-\nphe in one language and does not start with that\nin the other. The Afrikaans indefinite article is158\n’n, which cannot be capitalised. Therefore in (8)\nthe translation has a capitalisation error. The word\neenshould be capitalised, while Petshould not be.\nIn Apertium, changes in word case are performed\nin the syntactic transfer stage, thus this could be\nsolved by altering the set of transfer rules.\n(8) ’n Pet vorm ook deel van die uniform.\neen Pet vormen ook deel van het uniform.\nEen pet vormt ook deel van het uniform.\n‘A cap is also part of the uniform.’\nApertium uses fixed length chunks for transfer. 
In\nexample (9) there is an error due to this: preciese\n‘exact, precise’ is an adjective modifying grens\n‘border’. While there is a pattern ‘adj cc adj noun’,\nthere is no pattern ‘adj adj cc adj noun’. This\ncauses the chunker to put ‘preciese’ in a seperate\nchunk, which results in the predicative form, rather\nthan the attributive.\n(9) Daar is geen presiese geografiese of geolo-\ngiese grens tussen Europa en Asi ¨e nie.\nDaar is geen precies geografische of geolo-\ngische grens tussen Europa en Azi ¨e niet.\nEr is geen precieze geografische of geolo-\ngische grens tussen Europa en Azi ¨e.\n‘There is no exact geographical or geologi-\ncal border between Europe and Asia.’\nThis error could be fixed by adding the aforemen-\ntioned pattern.\nExample (10) is one of those that was included\nin the ‘other’ category of syntactic transfer er-\nrors. The words om te come before infinitives in\nboth Afrikaans and Dutch, much like toin En-\nglish. However, the behaviour is not identical in\nAfrikaans as in Dutch.\n(10) Jy kan aan Wikipedia meewerk sonder om\nenige besprekingsblaaie te lees.\nJij kunt aan Wikipedia *meewerk zonder\nomenig *besprekingsblaaie telezen.\nJij kunt aan Wikipedia meewerken zonder\nenige besprekingsbladen te lezen.\n‘You can work on Wikipedia without\nreading any talk pages.’\nDutch cannot have om te after a preposition, in\nthis case ‘zonder’ (without). A simple transfer rule\ncould fix this for the case that om te is next to each\nother. However, in the case that it is seperated it is\nharder to solve.3.4.6 Polysemy\nThe sentence in (11) has an error due to poly-\nsemy. The Afrikaans algemene , here as an attribu-\ntive adjective, can be translated into Dutch as ei-\nther algemeen orvoorkomend (the former means\n‘general’, the latter ‘common’ in English). While\nthe Afrikaans word algemeen is used for both of\nthese, they have a distinct meaning in Dutch.\n(11) Sink is die vierde mees algemene metaal\nin gebruik.\nZink is de vierde meest algemene metaal\nin gebruik.\nZink is het op drie na meest voorkomende\nmetaal in gebruik.\n‘Zinc is the fourth most common metal in\nuse.’\nChoosing the correct translation would require a\nmodule for lexical selection. However, it might\nalso be worth changing the default translation.\n3.4.7 Compounding\nThe error in example (12) is due to a spe-\ncific rule in Dutch to do with compounds, klink-\nerbotsing – which also exists in Afrikaans as\nvokaalopeenhoping . If a compound is built-up\nfrom two words as such that the two vowels around\nthe splitting point constitute a sound on their own,\nwhich means the word could be mispronounced, a\nhyphen should be used to distinguish the different\nparts of the compound.\n(12) Die motornywerheid is die ekonomiese\nbasis van Oshawa, ...\nDeautoindustrie is de economische basis\nvan *Oshawa, ...\nDeauto-industrie is de economische ba-\nsis van Oshawa, ...\n‘The car industry is the economic base of\nOshawa, ...’\n3.4.8 Separable verb\nExample (13) demonstrates the problem with\nseperable verbs. The Afrikaans ruk ... hand uit\ncorresponds with the Dutch expression loopt ... uit\nde hand . However, ruk‘to pull’ in itself could\nnever be translated as lopen ‘to walk’. 
Note that\n‘uit de hand lopen’ technically is not a seperable\nverb, but it poses the exact same problem as one.\nSolving this is a significant MT challenge and is\nnot easily fixable.159\n(13) Die situasie ruk deur massabetogings\nhand uit.\nDe situatie *ruk door *massabetogings\nhand uit .\nDe situatie loopt door massabetogingen\nuit de hand .\n‘The situation got out of control because\nof mass protests.’\n3.5 Comparative\nWe compared our system to the other available\nMT system for Afrikaans to Dutch and Dutch\nto Afrikaans, Google Translate10, a popular web-\nbased statistical machine translation system. The\nevaluation was performed in the same way, the\ntest corpora were translated with Google, and then\npost-edited.\nFor Afrikaans to Dutch, Google substantially\noutperforms the prototype Apertium system, with\nerror rates reduced by a half. For Dutch to\nAfrikaans, the Apertium system performs better,\nalthough this could be due to the method used for\ntesting the Dutch to Afrikaans direction favours\nmore literal translations. E.g. it does not rely on\npost-edition. Another possible explanation could\nbe that there are substantially bigger monolingual\ncorpora for Dutch than for Afrikaans for building\nlanguage models.\n4 Discussion\nWe have presented a bi-directional rule-based ma-\nchine translation between Dutch and Afrikaans,\ntwo closely-related Germanic languages. The sys-\ntem gives promising results, and offers an im-\nprovement in translation quality in the Dutch to\nAfrikaans direction over another publically avail-\nable system, but does not offer any improvement\nin translation quality in the Afrikaans to Dutch di-\nrection.\nWe have shown that the development of an\nRBMT system between closely-related languages\ndoes not necessarily take a long time, and can be\ncarried out by people with little formal training,\nand that the resulting system provides compara-\nble results, in one direction at least, with a leading\ncorpus-based machine translation system.\n4.1 Future work\nThe three biggest issues in the system come from\nlack of dictionary coverage, poor morphological\n10http://translate.google.com/disambiguation and insufficient syntactic transfer.\nThus these areas are ones that we intend to concen-\ntrate on. In addition, false friends have not specif-\nically been looked at. We could review the list of\nfalse friends in (van Huyssteen and Pilon, 2009) to\nsee if any translations could be improved.\nAcknowledgements\nDevelopment of this system was partially sup-\nported by the Google Code-in, a contest to in-\ntroduce pre-university students to contributing to\nopen-source software.\nReferences\nVan den Bosch, A., Busser, G.J., Daelemans, W., and\nCanisius, S. 2007. An efficient memory-based mor-\nphosyntactic tagger and parser for Dutch. Selected\nPapers of the 17th Computational Linguistics in the\nNetherlands Meeting 99–114.\nDe Pauw, G., Laureys, T., Daelemans, W., and Van\nHamme, H. 2004. A Comparison of Two Differ-\nent Approaches to Morphological Analysis of Dutch.\nProceedings of the Workshop of the ACL Special\nInterest Group on Computational Phonology (SIG-\nPHON), ACL2004\nDonaldson, Bruce C. 1993. A grammar of Afrikaans .\nWalter de Gruyter, Berlin\nKoehn, P. 2004. Statistical significance tests for ma-\nchine translation evaluation. Proceedings of the\nConference on Empirical Methods in Natural Lan-\nguage Processing 388–395.\nLaureys, T., De Pauw, G., Van hamme, H., Daele-\nmans, W., and Van Compernolle, D. 2004. 
Evaluation and adaptation of the Celex Dutch morphological database. Proceedings of the 4th International Conference on Language Resources and Evaluation (LREC 2004) 1247–1250.\nLevenshtein, Vladimir. 1965. Binary codes capable of correcting deletions, insertions, and reversals. Doklady Akademii Nauk SSSR 845–848.\nTyers, F. M. and Pienaar, J. A. 2008. Extracting bilingual word pairs from Wikipedia. Proceedings of the SALTMIL Workshop at the Language Resources and Evaluation Conference, LREC2008 19–22.\nShetter, William Z. and Ham, Esther. 2002. Dutch: An Essential Grammar, 9th edition. Routledge, Oxford.\nvan Huyssteen, Gerhard and Pilon, Suléne. 2009. Rule-based Conversion of Closely-related Languages: A Dutch-to-Afrikaans Convertor. Proceedings of the 2009 Conference of the Pattern Recognition Association of South Africa, Stellenbosch, SA 23–28.",
"main_paper_content": null
}
|
{
"decision": "Unknown",
"reviews": []
}
| 0 | 0 |
[] |
[] |
{
"id": "Y0xX2Oc4vYs",
"year": null,
"venue": "EAMT 2011",
"pdf_link": "https://aclanthology.org/2011.eamt-1.30.pdf",
"forum_link": "https://openreview.net/forum?id=Y0xX2Oc4vYs",
"arxiv_id": null,
"doi": null
}
|
{
"title": "Apertium-IceNLP: A rule-based Icelandic to English machine translation system",
"authors": [
"Martha Dís Brandt",
"Hrafn Loftsson",
"Hlynur Sigurþórsson",
"Francis M. Tyers"
],
"abstract": "Martha Dís Brandt, Hrafh Loftsson, Hlynur Sigurþórsson, Francis M. Tyers. Proceedings of the 15th Annual conference of the European Association for Machine Translation. 2011.",
"keywords": [],
"raw_extracted_content": "Apertium-IceNLP: A rule-based Icelandic to English\nmachine translation system\nMartha Dís Brandt, Hrafn Loftsson,\nHlynur Sigurþórsson\nSchool of Computer Science\nReykjavik University\nIS-101 Reykjavik, Iceland\n{marthab08,hrafn,hlynurs06}@ru.isFrancis M. Tyers\nDept. Lleng. i. Sist. Inform.\nUniversitat d’Alacant\nE-03071 Alacant, Spain\[email protected]\nAbstract\nWe describe the development of a pro-\ntotype of an open source rule-based\nIcelandic→English MT system, based on\nthe Apertium MT framework and IceNLP,\na natural language processing toolkit for\nIcelandic. Our system, Apertium-IceNLP ,\nis the first system in which the whole\nmorphological and tagging component of\nApertium is replaced by modules from\nan external system. Evaluation shows\nthat the word error rate and the position-\nindependent word error rate for our pro-\ntotype is 50.6% and 40.8%, respectively.\nAs expected, this is higher than the corre-\nsponding error rates in two publicly avail-\nable MT systems that we used for com-\nparison. Contrary to our expectations, the\nerror rates of our prototype is also higher\nthan the error rates of a comparable system\nbased solely on Apertium modules. Based\non error analysis, we conclude that bet-\nter translation quality may be achieved by\nreplacing only the tagging component of\nApertium with the corresponding module\nin IceNLP, but leaving morphological anal-\nysis to Apertium.\n1 Introduction\nOver the last decade or two, statistical machine\ntranslation (SMT) has gained significant momen-\ntum and success, both in academia and industry.\nSMT uses large parallel corpora, texts that are\ntranslations of each other, during training to derive\na statistical translation model which is then used to\nc/circlecopyrt2011 European Association for Machine Translation.translate between the source language (SL) and the\ntarget language (TL).\nSMT has many advantages, e.g. it is data-driven,\nlanguage independent, does not need linguistic ex-\nperts, and prototypes of new systems can by built\nquickly and at a low cost. On the other hand, the\nneed for parallel corpora as training data in SMT\nis also its main disadvantage, because such corpora\nare not available for a myriad of languages, espe-\ncially the so-called less-resourced languages , i.e.\nlanguages for which few, if any, natural language\nprocessing (NLP) resources are available. When\nthere is a lack of parallel corpora, other machine\ntranslation (MT) methods, such as rule-based MT,\ne.g.Apertium (Forcada et al., 2009), may be used\nto create MT systems.\nIn this paper, we describe the development\nof a prototype of an open source rule-based\nIcelandic→English ( is-en ) MT system based on\nApertium and IceNLP , an NLP toolkit for process-\ning and analysing Icelandic texts (Loftsson and\nRögnvaldsson, 2007b). A decade ago, the Ice-\nlandic language could have been categorised as\na less-resourced language. The current situation,\nhowever, is much better thanks to the development\nof IceNLP and various linguistic resources (Rögn-\nvaldsson et al., 2009). On the other hand, no large\nparallel corpus, in which Icelandic is one of the\nlanguages, is freely available. This is the main rea-\nson why the work described here was initiated.\nOur system, Apertium-IceNLP , is the first sys-\ntem in which the whole morphological and tagg-\ning component of Apertium is replaced by mod-\nules from an external system. 
Our motivation for developing such a hybrid system was to be able to answer the following research question: Is the translation quality of an is-en shallow-transfer MT system higher when using state-of-the-art Icelandic NLP modules in the Apertium pipeline as opposed to relying solely on Apertium modules?\nEvaluation results show that the word error rate (WER) of our prototype is 50.6% and the position-independent word error rate (PER) is 40.8%1. This is higher than the evaluation results of two publicly available MT systems for is-en translation, Google Translate2 and Tungutorg3. This was expected, given the short development time of our system, i.e. 8 man-months. For comparison, we know that Tungutorg has been developed by an individual, Stefán Briem, intermittently over a period of two decades4.\nUnexpectedly, the error rates of Apertium-IceNLP are also higher than the error rates of an is-en system based solely on Apertium modules. This “pure” Apertium version was developed in parallel with Apertium-IceNLP. Based on our error analysis, we conclude that better translation quality may be achieved by replacing only the tagging component of Apertium with the corresponding module in IceNLP, but leaving morphological analysis to Apertium.\nWe think that our work can be viewed as a guideline for other researchers wanting to develop hybrid MT systems based on Apertium.\n2 Apertium\nThe Apertium shallow-transfer MT platform was originally aimed at the Romance languages of the Iberian peninsula, but has also been adapted for other languages, e.g. Welsh (Tyers and Donnelly, 2009) and Scandinavian languages (Nordfalk, 2009). 
The whole platform, both programs\nand data, is free and open source and all the soft-\nware and data for the supported language pairs is\navailable for download from the project website5.\nThe Apertium platform consists of the following\nmain modules:\n•A morphological analyser : Performs token-\nisation and morphological analysis which for\na given surface form returns all of the possible\nlexical forms (analyses) of the word.\n•A part-of-speech (PoS) tagger : The HMM-\nbased PoS tagger, given a sequence of\n1See explanations of WER and PER in Section 5.\n2http://translate.google.com\n3http://www.tungutorg.is/\n4We do not have information on development months for the\nis-en part of Google Translate.\n5http://www.apertium.orgmorphologically analysed words, chooses the\nmost likely sequence of PoS tags.\n•Lexical selection : A lexical selection mod-\nule based on Constraint Grammar (Karlsson\net al., 1995) selects between possible transla-\ntions of a word based on sentence context.\n•Lexical transfer : For an unambiguous lex-\nical form in the SL, this module returns the\nequivalent TL form based on a bilingual dic-\ntionary.\n•Structural transfer : Performs local morpho-\nlogical and syntactic changes to convert the\nSL into the TL.\n•A morphological generator : For a given TL\nlexical form, this module returns the TL sur-\nface form.\n2.1 Language pair specifics\nFor each language pair, the Apertium platform\nneeds a monolingual SL dictionary used by the\nmorphological analyser, a bilingual SL-TL dictio-\nnary used by the lexical transfer module, a mono-\nlingual TL dictionary used by the morphological\ngenerator, and transfer rules used by the struc-\ntural transfer module. The dictionaries and transfer\nrules specific to the is-en pair will be discussed in\nSections 4.2 and 4.3, respectively.\nThe lexical selection module is a new module\nin the Apertium platform and the is-en pair is the\nfirst released pair to make extensive use of it. The\nmodule works by selecting a translation based on\nsentence context. For example, for the ambigu-\nous word bóndi ’farmer’ or ’husband’, the default\ntranslation is left as ’farmer’, but a lexical selec-\ntion rule chooses the translation of ’husband’ if\na possessive pronoun is modifying it. While the\ncurrent lexical selection rules have been written by\nhand, work is ongoing to generate them automati-\ncally with machine learning techniques.\n3 IceNLP\nIceNLP is an open source6NLP toolkit for pro-\ncessing and analysing Icelandic texts. Currently,\nthe main modules of IceNLP are the following:\n•A tokeniser. This module performs both\nword tokenisation and sentence segmenta-\ntion.\n6http://icenlp.sourceforge.net218\n•IceMorphy : A morphological analyser\n(Loftsson, 2008). The program provides the\ntag profile (the ambiguity class) for known\nwords by looking up words in its dictionary.\nThe dictionary is derived from the Icelandic\nFrequency Dictionary (IFD) corpus (Pind et\nal., 1991). The tag profile for unknown\nwords, i.e. words not known to the dictionary,\nis guessed by applying rules based on mor-\nphological suffixes and endings. IceMorphy\ndoes not generate word forms, it only carries\nout analysis.\n•IceTagger : A linguistic rule-based PoS\ntagger (Loftsson, 2008). The tagger produces\ndisambiguated morphosyntactic tags from the\ntagset of the IFD corpus. The tagger uses Ice-\nMorphy for morphological analysis and ap-\nplies both local rules and heuristics for dis-\nambiguation.\n•TriTagger : A statistical PoS tagger . 
This\ntrigram tagger is a re-implemenation of\nthe well-known HMM tagger described by\nBrants (2000). It is trained on the IFD cor-\npus.\n•Lemmald : A lemmatiser (Ingason et al.,\n2008). The method used combines a data-\ndriven method with linguistic knowledge to\nmaximise accuracy.\n•IceParser : A shallow parser (Loftsson and\nRögnvaldsson, 2007a). The parser marks\nboth constituent structure and syntactic func-\ntions using a cascade of finite-state transduc-\ners.\n3.1 The tagset and the tagging accuracy\nThe IFD corpus consists of about 600,000 tokens\nand the tagset of about 700 tags. In this tagset,\neach character in a tag has a particular function.\nThe first character denotes the word class . For\neach word class there is a predefined number of\nadditional characters (at most six), which describe\nmorphological features, like gender ,number and\ncase for nouns; degree anddeclension for adjec-\ntives; voice ,mood andtense for verbs, etc. To\nillustrate, consider the Icelandic word strákarnir\n’the boys’. The corresponding IFD tag is nkfng ,\ndenoting noun ( n), masculine ( k), plural ( f), nomi-\nnative ( n), and suffixed definite article ( g).\nPrevious work on PoS tagging Icelandic text\n(Helgadóttir, 2005; Loftsson, 2008; Dredze andWallenberg, 2008; Loftsson et al., 2009) has\nshown that the morphological complexity of the\nIcelandic language, and the relatively small train-\ning corpus in relation to the size of the tagset, is to\nblame for a rather low tagging accuracy (compared\nto related languages). Taggers that are purely\nbased on machine learning (including HMM tri-\ngram taggers) have not been able to produce high\naccuracy when tagging Icelandic text (with the ex-\nception of Dredze and Wallenberg (2008)). The\ncurrent state-of-the-art tagging accuracy of 92.5%\nis obtained by applying a hybrid approach, inte-\ngrating TriTagger into IceTagger (Loftsson et al.,\n2009).\n4 Apertium-IceNLP\nWe decided to experiment with using IceMorphy,\nLemmald, IceTagger and IceParser in the Aper-\ntium pipeline. Note that since Apertium is based\non a collection of modules that are connected by\nclean interfaces in a pipeline (following the Unix\nphilosophy (Forcada et al., 2009)) it is relatively\neasy to replace modules or add new ones. Figure 1\nshows the Apertium-IceNLP pipeline.\nOur motivation for using the above modules is\nthe following:\n1. Developing a good morphological analyser\nfor a language is a time-consuming task7.\nSince our system is unidirectional, i.e. is-en\nbut not en-is , we only need to be able to anal-\nyse an Icelandic surface form, but do not need\nto generate an Icelandic surface form from\na lexical form (lemma and morphosyntactic\ntags). We can thus rely on IceMorphy for\nmorphological analysis.\n2. As discussed in Section 3.1, research has\nshown that HMM taggers, like the one in-\ncluded in Apertium, have not been able to\nachieve high accuracy when tagging Ice-\nlandic. Thus, it seems logical to use the state-\nof-the-art tagger, IceTagger, instead.\n3. 
Morphological analysers in Apertium return\na lemma in addition to morphosyntactic tags.\nTo produce a lemma for each word, we\ncan instead rely on the Icelandic lemmatiser,\nLemmald.\n7Although there exists a morphological database for Ice-\nlandic ( http://bin.arnastofnun.is ), it is unfortu-\nnately not available as free/open source software/data.219\ninterchunk2IceMorphylexical\ntransfer\nmorph.\ngeneratorSL\ntext\nTL\ntextchunker interchunk1 postchunkstructural transfer\nlexical\nselectionIceTagger LemmaldFigure 1: The Apertium-IceNLP pipeline.\n4. Information about syntactic functions can be\nof help in the translation process. IceParser,\nwhich provides this information, can there-\nfore potentially be used (we have not yet\nadded IceParser to the pipeline).\n4.1 IceNLP enhancements\nIn order to use modules from IceNLP in the Aper-\ntium pipeline, various enhancements needed to be\ncarried out in IceNLP.\n4.1.1 Mappings\nVarious mappings from the output generated by\nIceTagger to the format expected by the Apertium\nmodules were necessary. All mappings were im-\nplemented by a single mapping file with different\nsections for different purposes. For example, mor-\nphosyntactic tags produced by IceTagger needed\nto be mapped to the tags used by Apertium. The\nmapping file thus contains entries for each possi-\nble tag from the IFD tagset and the corresponding\nApertium tags. For example, the following entry\nin the mapping file shows the mapping for the IFD\ntagnkfng (see Section 3.1).\n[TAGMAPPING]\n...\nnkfng <n><m><pl><nom><def>\nThe string “[TAGMAPPING]” above is a section\nname, whereas <n> stands for noun, <m> for mas-\nculine, <pl> for plural, <nom> for nominative, and\n<def> for definite.\nAnother example of a necessary mapping re-\ngards exceptions to tag mappings for particular\nlemmata. The following entries show that after\ntag mapping, the tags <vblex><actv> (verb, active\nvoice) for the lemmata vera ’to be’ and hafa ’to\nhave’ should be replaced by the single tag <vbser>\nand <vbhaver>, respectively. The reason is that\nApertium needs specific tags for these verbs.\n[LEMMA]\n...\nvera <vblex><actv> <vbser>\nhafa <vblex><actv> <vbhaver>The last example of a mapping concerns multi-\nword expressions (MWEs). IceTagger tags each\nword of a MWE, whereas Apertium handles them\nas a single unit because MWEs cannot be trans-\nlated word-for-word. Therefore, MWEs need to\nbe listed in the mapping file along with the cor-\nresponding Apertium tags. The following entries\nshow two MWEs, að einhverju leyti ’to some ex-\ntent’ and af hverju ’why’ along with the corre-\nsponding Apertium tags.\n[MWE]\n...\nað_einhverju_leyti <adv>\naf_hverju <adv><itg>\nInstead of producing tags for each component of a\nMWE, IceTagger searches for MWEs in its input\ntext that match entries in the mapping file and pro-\nduces the Apertium tag(s) for a particular MWE if\na match is found.\n4.1.2 Daemonising IceNLP\nThe versions of IceMorphy/IceTagger described\nin (Loftsson, 2008), and Lemmald described in\n(Ingason et al., 2008), were designed to tag and\nlemmatise large amounts of Icelandic text, e.g.\ncorpora. When IceTagger starts up, it creates an\ninstance of IceMorphy which in turn loads var-\nious dictionaries into memory. 
Similarily, when\nLemmald starts up, it loads its rules into memory.\nThis behaviour is fine when tagging and lemmatis-\ning corpora, because, in that case, the startup time\nis relatively small compared to the time needed to\ntag and lemmatise.\nOn the other hand, a common usage of a ma-\nchine translation system is translating a small num-\nber of sentences (for example, in online MT ser-\nvices) as opposed to a corpus. Using the modules\nfrom IceNLP unmodified as part of the Apertium\npipeline would be inefficient in that case because\nthe aforementioned dictionaries and rules would be\nreloaded every time the language pair is used.\nTherefore, we added a client-server function-\nality to IceNLP in order for it to run efficiently220\nas part of the Apertium pipeline. We added two\nnew applications to IceNLP: IceNLPServer and\nIceNLPClient . IceNLPServer is a server applica-\ntion, which contains an instance of the IceNLP\ntoolkit. Essentially, IceNLPServer is a daemon\nwhich runs in the background. When it is started\nup, all necessary dictionaries and rules are loaded\ninto memory and are kept there while the daemon\nis running. Therefore, the daemon can serve re-\nquests to the modules in IceNLP without any load-\ning delay.\nIceNLPClient is a console-based client for com-\nmunicating with IceNLPServer. This application\nbehaves in the same manner as the Apertium mod-\nules, i.e. it reads from standard input and writes to\nstandard output. Thus, we have replaced the Aper-\ntium tokeniser/morphological analyser/lemmatiser\nand the PoS tagger with IceNLPClient.\nThe client returns a PoS-tagged version of its in-\nput string. To illustrate, when the client is asked to\nanalyse the string Hún er góð ’She is good’:\necho \"Hún er góð\" | RunClient.sh\nit returns:\n^Hún/hún<prn><p3><f><sg><nom>$\n^er/vera<vbser><pri><p3><sg>$\n^góð/góður<adj><pst><f><sg><nom><sta>$\nThis output is consistent with the output gener-\nated by the Apertium tagger, i.e. for each word\nof an input sentence, the lexeme is followed by\nthe lemma followed by the (disambiguated) mor-\nphosyntactic tags. The output above is then fed di-\nrectly into the remainder of the Apertium pipeline,\ni.e. into lexical selection, lexical transfer, struc-\ntural transfer, and morphological generation (see\nFigure 1), to produce the English translation ’She\nis good’.\n4.2 The bilingual dictionary\nWhen this project was initiated, no is-en bilingual\ndictionary (bidix) was publicly available in elec-\ntronic format. Our bidix was built in three stages.\nFirst, the is-en dictionary was populated with\nentries spidered from the Internet from Wikipedia,\nWiktionary, Freelang, the Cleasby-Vigfusson Old\nIcelandic dictionary8and the Icelandic Word\nBank9. This provided a starting point of over\n5,000 entries in Apertium style XML format which\nneeded to be checked manually for correctness.\nAlso, since lexical selection was not an option in\n8http://www.ling.upenn.edu/~kurisuto/\ngermanic/oi_cleasbyvigfusson_about.html\n9http://www.ismal.hi.is/ob/index.en.htmlthe early stages of the project, only one entry could\nbe used. SL words that had multiple TL transla-\ntions had to be commented out, based on which\ntranslation seemed the most likely option. 
For ex-\nample, below we have three options for the SL\nword fíngerður in the bidix where it could be trans-\nlated as ’fine’, ’petite’ or ’subtle’, and the latter two\noptions are commented out.\n<e><p>\n<l>fíngerður<s n=\"adj\"/></l>\n<r>fine<s n=\"adj\"/><s n=\"sint\"/></r>\n</p></e>\n<!-- begin comment\n<e><p>\n<l>fíngerður<s n=\"adj\"/></l>\n<r>petite<s n=\"adj\"/></r>\n</p></e>\n<e><p>\n<l>fíngerður<s n=\"adj\"/></l>\n<r>subtle<s n=\"adj\"/></r>\n</p></e>\nend comment -->\nEach entry in the bidix is surrounded by element\ntags <e>...</e> and paragraph tags <p>...</p> .\nSL words are surrounded by left tags <l>...</l>\nand TL translations by right tags <r>...</r> .\nWithin the left and right tags the attribute value\n“adj” denotes that the word is an adjective and the\npresence of the attribute value “sint” denotes that\nthe adjective’s degree of comparison is shown with\n“-er/-est” endings (e.g. ’fine’, ’finer’, ’finest’).\nIn the second stage of the bidix development, a\nbilingual wordlist of about 6,000 SL words with\nword class and gender was acquired from an in-\ndividual, Anton Ingason. It required some pre-\nprocessing before it could be added to the bidix,\ne.g. determining which of these new SL words did\nnot already exist in the dictionary, and selecting a\ndefault translation in cases where more than one\ntranslation was given.\nLast, we acquired a bilingual wordlist from the\ndictionary publishing company Forlagið10, con-\ntaining an excerpt of about 18,000 SL words from\ntheir Icelandic-English online dictionary. This re-\nquired similar preprocessing work as described\nabove.\nCurrently, our is-en bidix contains 21,717 SL\nlemmata and 1,491 additional translations to be\nused for lexical selection.\n4.3 Transfer rules\nThe syntactic ( structural ) transfer stage (see Fig-\nure 1) in the translator is split into four stages.\n10http://snara.is/221\nThe first stage ( chunker ) performs local reorder-\ning and chunking. The second ( interchunk1 ) pro-\nduces chunks of chunks, e.g. chunking relative\nclauses into noun phrases. The third ( interchunk2 )\nperforms longer distance reordering, e.g. con-\nstituent reordering, and some tense changes. As\nan example of a tense change, consider: Hann\nvildi að verðlaunin færu til þeirra→‘He wanted\nthatthe awards went to them’→‘He wanted the\nawards to go to them’. Finally, the fourth stage\n(postchunk ) does some cleanup operations, and in-\nsertion of the indefinite article.\nThere are 78 rules in the first stage, the majority\ndealing with noun phrases, 3 rules in the second,\n26 rules in the third stage and 5 rules in the fourth\nstage.\nIt is worth noting that the development of the\nbilingual dictionary and the transfer rules benefit\nboth the Apertium-IceNLP system and the is-en\nsystem based solely on Apertium modules.\n5 Evaluation\nOur goal was to evaluate approximately 5,000\nwords, which corresponds to roughly 10 pages of\ntext, and compare our results to two other publicly\navailable is-en MT systems: Google Translate, an\nSMT system, and Tungutorg, a proprietary rule-\nbased MT system, developed by an individual. In\naddition, we sought a comparison to the is-en sys-\ntem based solely on Apertium modules.\nThe test corpus for the evaluation was extracted\nfrom a dump of the Icelandic Wikipedia on April\n24th 2010, which provided 187,906 lines of SL\ntext. 
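As an aside on the dictionary work of Section 4.2: entries such as the fíngerður example are regular enough to be generated mechanically from the word lists described there. The following rough Python sketch (the helper is hypothetical, and it is simplified to adjectives without the "sint" attribute) keeps the first translation as the live default and emits the remaining ones inside a comment:

ENTRY = ('<e><p>\n'
         '<l>{src}<s n="adj"/></l>\n'
         '<r>{tgt}<s n="adj"/></r>\n'
         '</p></e>')

def bidix_entries(source, translations):
    # first translation becomes the live default entry;
    # the rest are kept, commented out, for later lexical selection work
    live = ENTRY.format(src=source, tgt=translations[0])
    if len(translations) > 1:
        alts = "\n".join(ENTRY.format(src=source, tgt=t) for t in translations[1:])
        live += "\n<!-- begin comment\n" + alts + "\nend comment -->"
    return live

# print(bidix_entries("fíngerður", ["fine", "petite", "subtle"]))

As noted in Section 4.2, entries produced this way still had to be checked manually for correctness.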
The reason for choosing texts from Wikipedia\nis that the evaluation material can be distributed,\nwhich is not the case for other available corpora of\nIcelandic.\nThen, 1,000 lines were randomly selected from\nthe test corpus and the resulting file filtered semi-\nautomatically such that: i)each line had only one\ncomplete sentence; ii)each sentence had more than\nthree words; iii)each sentence had zero or one\nlower case unknown word (we want to test the\ntransfer, not the coverage of the dictionaries); iv)\nlines that were clearly metadata and traceable to\nindividuals were removed, e.g. user names; v)\nlines that contained incoherent strings of numbers\nwere removed, e.g. from a table entry; vi)lines\ncontaining non-Latin alphabet characters were re-\nmoved, e.g. if they contained Greek or Arabic font;\nvii)lines that contained extremely domain specificTranslator WER PER\nApertium-IceNLP 50.6% 40.8%\nApertium 45.9% 38.2%\nTungutorg 44.4% 33.7%\nGoogle Translate 36.5% 28.7%\nTable 1: Word error rate (WER) and position-independent\nword error rate (PER) over the test sentences for the publicly\navailable is-en machine translation systems.\nand/or archaic words were removed (e.g. words\nthat our human translator did not know how to\ntranslate); and viii) repetitive lines, e.g. multiple\nlines of the same format from a list, were removed.\nAfter this filtering process, 397 sentences re-\nmained which were then run through the four MT\nsystems. In order to calculate evaluation metrics\n(see below), each of the four output files had to\nbe post-edited. A bilingual human posteditor re-\nviewed each TL sentence, copied it and then made\nminimal corrections to the copied sentence so that\nit would be suitable for dissemination – meaning\nthat the sentence needs to be as close to grammat-\nically correct as possible so that post-editing re-\nquires less effort.\nThe translation quality was measured using two\nmetrics: word error rate (WER), and position-\nindependent word error rate (PER). The WER is\nthe percentage of the TL words that require cor-\nrection, i.e. substitutions, deletions and insertions.\nPER is similar to WER except that PER does\nnot penalise correct words in incorrect positions.\nBoth metrics are based on the well known Leven-\nshtein distance and were calculated for each of the\nsentences using the apertium-eval-translator\ntool11. Metrics based on word error rate were cho-\nsen so as to be able to compare the system against\nother Apertium systems and to assess the useful-\nness of the system in real settings, i.e. of translat-\ning for dissemination.\nNote that, in our case, the WER and PER\nscores are computed based on the difference be-\ntween the system output and a post-edited ver-\nsion of the system output. As can be seen in\nTable 1, the WER and PER for our Apertium-\nIceNLP prototype is 50.6% and 40.8%, respec-\ntively. This may seem quite high, but looking at\nthe translation quality statistics for some of the\nother language pairs in Apertium12, we see that\n11http://wiki.apertium.org/wiki/\nEvaluation\n12http://wiki.apertium.org/wiki/222\nthe WER for Norsk Bokmål-Nynorsk is 17.7%, for\nSwedish-Danish 30.3%, for Breton-French 38.0%,\nfor Welsh-English 55.7%, and for Basque-Spanish\n72.4%. It is worth noting however that each of\nthese evaluations had slightly different require-\nments for source language sentences. 
For instance,\nthe Swedish–Danish pair allowed any number of\nunknown words.\nWe expected that the translation quality of\nApertium-IceNLP would be significantly less than\nboth Google Translate and Tungutorg, and the re-\nsults in Table 1 confirm this expectation. The\nreason for our expectation was that the develop-\nment time of our system was relatively short (8\nman-months), whereas Tungutorg, for example,\nhas been developed intermittently over a period of\ntwo decades.\nUnexpectedly, the error rates of Apertium-\nIceNLP is also higher than the error rates of a sys-\ntem based solely on Apertium modules (see row\n“Apertium” in Table 1). We will discuss reasons\nfor this and future work to improve the translation\nquality in the next section.\n6 Discussion and future work\nIn order to determine where to concentrate ef-\nforts towards improving the translation quality of\nApertium-IceNLP, some error analysis was carried\nout on a development data set. This development\ndata was collected from the largest Icelandic online\nnewspaper mbl.is into 1728 SL files and then trans-\nlated by the system into TL files. Subsequently,\n50 files from the pool were randomly selected for\nmanual review and categorisation of errors.\nThe error categories were created along the way,\nresulting in a total of 6 error categories to iden-\ntify where it would be most beneficial to make\nimprovements. Analysis of the error categories\nshowed that 60.7% of the errors were due to words\nmissing from the bidix, mostly proper nouns and\ncompound words (see Table 2). This analysis sug-\ngests that improvement to the translation quality\ncan be achieved by concentrating on adding proper\nnouns to the bidix, on the one hand, and resolving\ncompound words, on the other.\nOne possible explanation for the lower er-\nror rates for the “pure” Apertium version than\nthe Apertium-IceNLP system is the handling of\nMWEs. MWEs most often do not translate liter-\nally nor even to the same number of words, which\nTranslation_quality_statisticsError category Freq. %\nMissing from the bidix 912 60.7%\nNeed further analysis 414 27.5%\nMultiword expressions 90 6.0%\nAbbreviations and initials 31 2.1%\nMore sophisticated patterns 31 2.1%\nOther 24 1.6%\nTotal 1502 100%\nTable 2: Error categories and corresponding frequencies.\ncan dramatically increase the error rate. The pure\nversion translates unlimited lengths of MWEs as\nsingle units and can deal with MWEs that con-\ntain inflectional words. In contrast, the length\nof the MWEs in IceNLP (and consequently also\nin Apertium-IceNLP) is limited to trigrams and,\nfurthermore, IceNLP cannot deal with inflectional\nMWEs.\nThe additional work required to get a better\ntranslation quality out of the Apertium-IceNLP\nsystem than a pure Apertium system raises the\nquestion as to whether “less is more”, i.e. whether\ninstead of incorporating tokenisation, morphologi-\ncal analysis, lemmatisation and PoS tagging from\nIceNLP into the Apertium pipeline, it may produce\nbetter results to only use IceTagger for PoS tagg-\ning but rely on Apertium for the other tasks. As\ndiscussed in Section 3.1, IceTagger outperforms\nan HMM tagger as the one used by the Apertium\npipeline.\nIn order to replace only the PoS tagger in the\nApertium pipeline, some modifications will have\nto be made to IceTagger. 
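The output side of such a change is essentially the tag mapping of Section 4.1.1 applied token by token. A minimal Python sketch is given below; it is illustrative only, models just the [TAGMAPPING] section with a single entry, and assumes the usual asterisk marking for unknown words:

TAG_MAPPING = {
    "nkfng": "<n><m><pl><nom><def>",  # noun, masculine, plural, nominative, definite
}

def to_apertium_stream(tagged_tokens):
    # tagged_tokens: iterable of (surface form, lemma, IFD tag)
    units = []
    for surface, lemma, ifd_tag in tagged_tokens:
        tags = TAG_MAPPING.get(ifd_tag)
        if tags is None:
            units.append("^{0}/*{0}$".format(surface))  # unknown-word marking (assumed)
        else:
            units.append("^{}/{}{}$".format(surface, lemma, tags))
    return " ".join(units)

# to_apertium_stream([("strákarnir", "strákur", "nkfng")])
# -> '^strákarnir/strákur<n><m><pl><nom><def>$'

Lemma exceptions such as vera→<vbser> and the MWE entries of Section 4.1.1 would be applied on top of this basic lookup.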
In addition to the mod-\nifications already carried out to make IceTagger\nreturn output in Apertium style format (see Sec-\ntion 4.1.1), the tagger will also have to be able\nto take Apertium style formatted input. More\nspecifically, instead of relying on IceMorphy and\nLemmald for morphological analysis and lemma-\ntisation, IceTagger would have to be changed to\nreceive the necessary information from the mor-\nphological component of Apertium.\n7 Conclusion\nWe have described the development of Apertium-\nIceNLP, an Icelandic →English ( is-en ) MT system\nbased on the Apertium platform and IceNLP, an\nNLP toolkit for Icelandic. Apertium-IceNLP is a\nhybrid system, the first system in which the whole\nmorphological and tagging component of Aper-223\ntium is replaced by modules from an external sys-\ntem.\nOur system is a prototype with about 8 man-\nmonths of development work. Evaluation, based\non word error rate, shows that our prototype does\nnot perform as well as two other available is-en\nsystems, Google Translate and Tungutorg. This\nwas expected and can mainly be explained by two\nfactors. First, our system has been developed over\na short time. Second, our system makes systematic\nerrors that we intend to fix in future work.\nContrary to our expectations, the Apertium-\nIceNLP system also performs worse than the is-\nensystem based solely on Apertium modules. We\nconjectured that this is mainly due to the fact\nthat the Apertium-IceNLP system does not han-\ndle MWEs adequately, whereas the handling of\nMWEs is an integrated part of the Apertium mor-\nphological analyser. Therefore, we expect that bet-\nter translation quality may be achieved by replac-\ning only the tagging component of Apertium with\nthe corresponding module in IceNLP, but leaving\nmorphological analysis to Apertium. This conjec-\nture will be verified in future work.\nAcknowledgments\nThe work described in this paper has been sup-\nported by: i) The Icelandic Research Fund, project\n“Viable Language Technology beyond English –\nIcelandic as a test case”, grant no. 090662012;\nand ii) The NILS mobility project (The Abel Pre-\ndoc Research Grant), coordinated by Universidad\nComplutense de Madrid.\nReferences\nBrants, Thorsten. 2000. TnT: A statistical part-of-\nspeech tagger. In Proceedings of the 6thConference\non Applied Natural Language Processing , Seattle,\nWA, USA.\nDredze, Mark and Joel Wallenberg. 2008. Icelandic\nData Driven Part of Speech Tagging. In Proceedings\nof the 46thAnnual Meeting of the Association for\nComputational Linguistics: Human Language Tech-\nnologies , Columbus, OH, USA.\nForcada, Mikel L., Francis M. Tyers, and Gema\nRamírez-Sánches. 2009. The Apertium machine\ntranslation platform: Five years on. In Proceedings\nof the First International Workshop on Free/Open-\nSource Rule-Based Machine Translation , Alacant,\nSpain.\nHelgadóttir, Sigrún. 2005. Testing Data-Driven\nLearning Algorithms for PoS Tagging of Icelandic.In Holmboe, H., editor, Nordisk Sprogteknologi\n2004 , pages 257–265. Museum Tusculanums Forlag,\nCopenhagen.\nIngason, Anton K., Sigrún Helgadóttir, Hrafn Lofts-\nson, and Eiríkur Rögnvaldsson. 2008. A Mixed\nMethod Lemmatization Algorithm Using Hierachy\nof Linguistic Identities (HOLI). In Nordström, B.\nand A. Rante, editors, Advances in Natural Lan-\nguage Processing, 6thInternational Conference on\nNLP , GoTAL 2008, Proceedings , Gothenburg, Swe-\nden.\nKarlsson, Fred, Atro V outilainen, Juha Heikkilä, and\nArto Anttila. 1995. 
Constraint Grammar: A\nLanguage-Independent System for Parsing Unre-\nstricted Text . Mouton de Gruyter, Berlin.\nLoftsson, Hrafn and Eiríkur Rögnvaldsson. 2007a.\nIceParser: An Incremental Finite-State Parser for\nIcelandic. In Proceedings of the 16thNordic Con-\nference of Computational Linguistics (NoDaLiDa\n2007) , Tartu, Estonia.\nLoftsson, Hrafn and Eiríkur Rögnvaldsson. 2007b.\nIceNLP: A Natural Language Processing Toolkit for\nIcelandic. In Proceedings of Interspeech 2007, Spe-\ncial Session: “Speech and language technology for\nless-resourced languages” , Antwerp, Belgium.\nLoftsson, Hrafn, Ida Kramarczyk, Sigrún Helgadóttir,\nand Eiríkur Rögnvaldsson. 2009. Improving the PoS\ntagging accuracy of Icelandic text. In Proceedings of\nthe17thNordic Conference of Computational Lin-\nguistics (NoDaLiDa 2009) , Odense, Denmark.\nLoftsson, Hrafn. 2008. Tagging Icelandic text: A lin-\nguistic rule-based approach. Nordic Journal of Lin-\nguistics , 31(1):47–72.\nNordfalk, Jacob. 2009. Shallow-transfer rule-based\nmachine translation for Swedish to Danish. In\nProceedings of the First International Workshop on\nFree/Open-Source Rule-Based Machine Translation ,\nAlacant, Spain.\nPind, Jörgen, Friðrik Magnússon, and Stefán Briem.\n1991. Íslensk orðtíðnibók [The Icelandic Frequency\nDictionary] . The Institute of Lexicography, Univer-\nsity of Iceland, Reykjavik.\nRögnvaldsson, Eiríkur, Hrafn Loftsson, Kristín Bjar-\nnadóttir, Sigrún Helgadóttir, Anna B. Nikulásdóttir,\nMatthew Whelpton, and Anton K. Ingason. 2009.\nIcelandic Language Resources and Technology: Sta-\ntus and Prospects. In Domeij, R., K. Kosken-\nniemi, S. Krauwer, B. Maegaard, E. Rögnvalds-\nson, and K. de Smedt, editors, Proceedings of the\nNoDaLiDa 2009 Workshop ’Nordic Perspectives on\nthe CLARIN Infrastructure of Language Resources’ .\nOdense, Denmark.\nTyers, Francis M. and Kevin Donnelly. 2009.\napertium-cy - a collaboratively-developed free\nRBMT system for Welsh to English. Prague Bul-\nletin of Mathematical Linguistics , 91:57–66.224",
"main_paper_content": null
}
|
{
"decision": "Unknown",
"reviews": []
}
| 0 | 0 |
[] |
[] |
{
"id": "D9ehu5tpe-3",
"year": null,
"venue": "EAMT 2020",
"pdf_link": "https://aclanthology.org/2020.eamt-1.12.pdf",
"forum_link": "https://openreview.net/forum?id=D9ehu5tpe-3",
"arxiv_id": null,
"doi": null
}
|
{
"title": "Double Attention-based Multimodal Neural Machine Translation with Semantic Image Regions",
"authors": [
"Yuting Zhao",
"Mamoru Komachi",
"Tomoyuki Kajiwara",
"Chenhui Chu"
],
"abstract": "Existing studies on multimodal neural machine translation (MNMT) have mainly focused on the effect of combining visual and textual modalities to improve translations. However, it has been suggested that the visual modality is only marginally beneficial. Conventional visual attention mechanisms have been used to select the visual features from equally-sized grids generated by convolutional neural networks (CNNs), and may have had modest effects on aligning the visual concepts associated with textual objects, because the grid visual features do not capture semantic information. In contrast, we propose the application of semantic image regions for MNMT by integrating visual and textual features using two individual attention mechanisms (double attention). We conducted experiments on the Multi30k dataset and achieved an improvement of 0.5 and 0.9 BLEU points for English-German and English-French translation tasks, compared with the MNMT with grid visual features. We also demonstrated concrete improvements on translation performance benefited from semantic image regions.",
"keywords": [],
"raw_extracted_content": "Double Attention-based Multimodal Neural Machine Translation\nwith Semantic Image Regions\nYuting Zhao1, Mamoru Komachi1, Tomoyuki Kajiwara2, Chenhui Chu2\n1Tokyo Metropolitan University, 6-6 Asahigaoka, Hino, Tokyo 191-0065, Japan\n2Osaka University, 2-8 Yamadaoka, Suita, Osaka 565-0871, Japan\[email protected]\[email protected]\nfkajiwara,[email protected]\nAbstract\nExisting studies on multimodal neural ma-\nchine translation (MNMT) have mainly\nfocused on the effect of combining vi-\nsual and textual modalities to improve\ntranslations. However, it has been sug-\ngested that the visual modality is only\nmarginally beneficial. Conventional vi-\nsual attention mechanisms have been used\nto select the visual features from equally-\nsized grids generated by convolutional\nneural networks (CNNs), and may have\nhad modest effects on aligning the vi-\nsual concepts associated with textual ob-\njects, because the grid visual features do\nnot capture semantic information. In con-\ntrast, we propose the application of se-\nmantic image regions for MNMT by in-\ntegrating visual and textual features us-\ning two individual attention mechanisms\n(double attention). We conducted ex-\nperiments on the Multi30k dataset and\nachieved an improvement of 0.5 and 0.9\nBLEU points for English !German and\nEnglish!French translation tasks, com-\npared with the MNMT with grid visual\nfeatures. We also demonstrated concrete\nimprovements on translation performance\nbenefited from semantic image regions.\n1 Introduction\nNeural machine translation (NMT) (Sutskever et\nal., 2014; Bahdanau et al., 2015) has achieved\nstate-of-the-art translation performance. Recently,\nc\r2020 The authors. This article is licensed under a Creative\nCommons 3.0 licence, no derivative works, attribution, CC-\nBY-ND.\nObject detection\nman in red shirt\nwatches dog on\nan agility\ncourse. \nRNNattention\nattentionun homme en\npolo rouge\nregarde son\nchien sur un\nparcours d's\nagilité .Image regionsFigure 1: Overview of our MNMT model.\nmany studies (Specia et al., 2016; Elliott et al.,\n2017; Barrault et al., 2018) have been increas-\ningly focusing on incorporating multimodal con-\ntents, particularly images, to improve translations.\nHence, researchers in this field have established a\nshared task called multimodal machine translation\n(MMT), which consists of translating a target sen-\ntence from a source language description into an-\nother language using information from the image\ndescribed by the source sentence.\nThe first MMT study by (Elliott et al., 2015)\ndemonstrated the potential of improving the trans-\nlation quality by using image. To effectively use\nan image, several subsequent studies (Gao et al.,\n2015; Huang et al., 2016; Calixto and Liu, 2017)\nincorporated global visual features extracted from\nthe entire image by convolutional neural networks\n(CNNs) into a source word sequence or hidden\nstates of a recurrent neural network (RNN). Fur-\nthermore, other studies started using local visual\nfeatures in the context of an attention-based NMT.\nThese features were extracted from equally-sized\ngrids in an image by a CNN. For instance, multi-\nmodal attention (Caglayan et al., 2016b) has been\ndesigned for a mix of text and local visual fea-\ntures. 
Additionally, double attention mechanisms\n(Calixto et al., 2017) have been proposed for text\nst - 1 sthomme en chemise rouge <eos>\nyt - 1 yt\nzt ctDecoder\nrp\nFaster R-CNN + ResNet-101100 semantic image region \nfeature vectores\nSource imagehi\nMan in a red shirt <eos>\nSource sentencexiAtt_img Att_textBi-directional GRU encoderwp (max)\nFigure 2: Our model of double attention-based MNMT with semantic image regions.\nand local visual features, respectively. Although\nprevious studies improved the use of local vi-\nsual features and the text modality, these improve-\nments were minor. As discussed in (Delbrouck and\nDupont, 2017), these local visual features may not\nbe suitable to attention-based NMT, because the at-\ntention mechanism cannot understand complex re-\nlationships between textual objects and visual con-\ncepts.\nOther studies utilized richer local visual features\nto MNMT such as dense captioning features (Del-\nbrouck et al., 2017). However, their efforts have\nnot convincingly demonstrated that visual features\ncan improve the translation quality. Caglayan et al.\n(2019) demonstrated that, when the textual context\nis limited, visual features can assist in generating\nbetter translations. MMT models disregard visual\nfeatures because the quality of the image features\nor the way in which they are integrated into the\nmodel are not satisfactory. Therefore, which types\nof visual features are suitable to MNMT, and how\nthese features should be integrated into MNMT,\nstill remain open questions.\nThis paper proposes the integration of seman-\ntic image region features into a double attention-\nbased NMT architecture. In particular, we com-\nbine object detection with a double attention mech-\nanism to fully exploit visual features for MNMT.\nAs shown in Figure 1, we use the semantic im-age region features extracted by an object detec-\ntion model, namely, Faster R-CNN (Ren et al.,\n2015). Compared with the local visual features ex-\ntracted from equally-sized grids, we believe that\nour semantic image region features contain ob-\nject attributes and relationships that are important\nto the source description. Moreover, we expect\nthat the model would be capable of making se-\nlective use of the extracted semantic image re-\ngions when generating a target word. To this end,\nwe integrate semantic image region features us-\ning two attention mechanisms: one for the se-\nmantic image regions and the other one for text.\nCode and pre-trained models are publicly avail-\nable at: https://github.com/Zhao-Yuting/MNMT-\nwith-semantic-regions.\nThe main contributions of this study are as fol-\nlows:\n\u000fWe verified that the translation quality can\nsignificantly improve by leveraging semantic\nimage regions.\n\u000fWe integrated semantic image regions into\na double attention-based MNMT, which re-\nsulted in the improvement of translation per-\nformance above the baselines.\n\u000fWe carried out a detailed analysis to identify\nthe advantages and shortcomings of the pro-\nposed model.\n2 MNMT with Semantic Image Regions\nIn Figure 2, our model comprises three parts: the\nsource-sentence side, source-image side, and de-\ncoder. Inspired by (Calixto et al., 2017), we in-\ntegrated the visual features using an independent\nattention mechanism. 
From the source sentence X\n= (x1,x2,x3,\u0001\u0001\u0001,xn) to the target sentence Y=\n(y1,y2,y3,\u0001\u0001\u0001,ym), the image-attention mech-\nanism focuses on all semantic image regions to\ncalculate the image context vector zt, while the\ntext-attention mechanism computes the text con-\ntext vector ct. The decoder uses a conditional\ngated recurrent unit (cGRU)1with attention mech-\nanisms to generate the current hidden state stand\ntarget wordyt.\nAt time step t, first, a hidden state proposal ^stis\ncomputed in cGRU, as presented below, and then\nused to calculate the image context vector ztand\ntext context vector ct.\n^\u0018t=\u001b(W\u0018EY[yt\u00001] +U\u0018st\u00001)\n^\rt=\u001b(W\rEY[yt\u00001] +U\rst\u00001)\nst= tanh (WEY[yt\u00001] + ^\rt\f(Ust\u00001))\n^st= (1\u0000^\u0018t)\fst+^\u0018t\fst\u00001(1)\nwhereW\u0018,U\u0018,W\r,U\r,W, andUare training\nparameters; EYis the target word vector.\n2.1 Source-sentence side\nThe source sentence side comprises a bi-\ndirectional GRU encoder and “soft” attention\nmechanism (Xu et al., 2015). Given a source sen-\ntenceX= (x1,x2,x3,\u0001\u0001\u0001,xn), the encoder up-\ndates the forward GRU hidden states by reading\nxfrom left to right, generates the forward annota-\ntion vectors (\u0000 !h1,\u0000 !h2,\u0000 !h3,\u0001\u0001\u0001,\u0000 !hn), and finally up-\ndates the backward GRU with the annotation vec-\ntors ( \u0000h1, \u0000h2, \u0000h3,\u0001\u0001\u0001, \u0000hn). By concatenating the\nforward and backward vectors hi=[\u0000 !hi; \u0000hi], every\nhiencodes the entire sentence while focusing on\nthexiword, and all words in a sentence are de-\nnoted asC= (h1,h2,\u0001\u0001\u0001,hn). At each time step t,\nthe text context vector ctis generated as follows:\netext\nt;i= (Vtext)Ttanh(Utext^st+Wtexthi)\n\u000btext\nt;i= softmax( etext\nt;i)\nct=nX\ni=1\u000btext\nt;ihi(2)\n1https://github.com/nyu-dl/dl4mt-\ntutorial/blob/master/docs/cgru.pdf\n(a)Grids.\n (b)Image regions.\nFigure 3: Comparing between (a) coarse grids and (b) se-\nmantic image regions.\nwhereVtext,Utext, andWtextare training param-\neters;etext\nt;iis the attention energy; \u000btext\nt;iis the at-\ntention weight matrix of the source sentence.\n2.2 Source-image side\nIn this part, we discuss the integration of semantic\nimage regions into MNMT using an image atten-\ntion mechanism.\nSemantic image region feature extraction. As\nshown in Figure 3, instead of extracting equally-\nsized grid features using CNNs, we extract se-\nmantic image region features using object detec-\ntion. This study applied the Faster R-CNN in con-\njunction with the ResNet-101 (He et al., 2016)\nCNN pre-trained on Visual Genome (Krishna et\nal., 2017) to extract 100 semantic image region\nfeatures from each image. Each semantic image\nregion feature is a vector rwith a dimension of\n2048, and all of these features in an image are de-\nnoted asR= (r1,r2,r3,\u0001\u0001\u0001,r100).\nImage-attention mechanism. The image-\nattention mechanism is also a type of “soft”\nattention. This mechanism focuses on 100 seman-\ntic image region feature vectors at every time step\nand computes the image context vector zt.\nFirst, we calculate the attention energy eimg\nt;p,\nwhich is an attention model that scores the degree\nof output matching between the inputs around po-\nsitionpand the output at position t, as follows:\neimg\nt;p= (Vimg)Ttanh(Uimg^st+Wimgrp)(3)\nwhereVimg,Uimg, andWimgare training param-\neters. 
Then the weight matrix \u000bimg\nt;pof eachrpis\ncomputed as follows:\n\u000bimg\nt;p= softmax( eimg\nt;p) (4)\nAt each time step, the image-attention mechanism\ndynamically focuses on the semantic image region\nfeatures and computes the image context vector zt,\nas follows:\nzt=\ft100X\np=1\u000bimg\nt;prp (5)\nForzt, at each decoding time step t, a gating scalar\n\ft2[0;1](Xu et al., 2015) is used to adjust the\nproportion of the image context vector according\nto the previous hidden state of the decoder st\u00001.\n\ft=\u001b(W\fst\u00001+b\f) (6)\nwhereW\fandb\fare training parameters.\n2.3 Decoder\nAt each time step tof the decoder, the new hidden\nstatestis computed in cGRU, as follows:\n\u0018t=\u001b(Wtext\n\u0018ct+Wimg\n\u0018zt+\u0016U\u0018^st)\n\rt=\u001b(Wtext\n\rct+Wimg\n\rzt+\u0016U\r^st)\n\u0016st= tanh (Wtextct+Wimgzt+\rt\f(\u0016U^st))\nst= (1\u0000\u0018t)\f\u0016st+\u0018t\f^st\n(7)\nwhereWtext\n\u0018,Wimg\n\u0018,\u0016U\u0018,Wtext\n\r,Wimg\n\r,\u0016U\r,Wtext,\nWimg, and \u0016Uare model parameters; \u0018tand\rtare\nthe output of the update/reset gates; \u0016stis the pro-\nposed updated hidden state.\nFinally, the conditional probability of generat-\ning a target word p(ytjyt\u00001;st;C;R )is computed\nby a nonlinear, potentially multi-layered function,\nas follows:\nsoftmax(Lotanh(Lsst+Lcct+Lzzt+LwEY[yt\u00001]))\n(8)\nwhereLo,Ls,Lc,Lz, andLware training param-\neters.\n3 Experiments\n3.1 Dataset\nWe conducted experiments for the\nEnglish!German (En!De) and English!French\n(En!Fr) tasks using the Multi30k dataset (Elliott\net al., 2016). The dataset contains 29k training and\n1,014 validation images. For testing, we used the\n2016 testset, which contains 1,000 images. Each\nimage was paired with image descriptions ex-\npressed by both the original English sentences and\nthe sentences translated into multiple languages.\nFor preprocessing, we lowercased and tokenized\nthe English, German, and French descriptions withthe scripts in the Moses SMT Toolkit.2Subse-\nquently, we converted the space-separated tokens\ninto subword units using the byte pair encoding\n(BPE) model.3Finally, the number of subwords\nin a description was limited to a maximum of 80.\n3.2 Settings\nOurs. We integrated the semantic image regions\nby modifying the double attention model of (Cal-\nixto et al., 2017). In the source-sentence, we\nreused the original implementation. In the source-\nimage, we modified the image attention mecha-\nnism to focus on 100 semantic image region fea-\ntures with a dimension of 2048 at each time step.\nThe parameter settings were consistent with the\nbaseline doubly-attentive MNMT model, wherein\nwe set the hidden state dimension of the 2-layer\nGRU encoder and 2-layer cGRU decoder to 500,\nsource word embedding dimension to 500, batch\nsize to 40, beam size to 5, text dropout to 0.3,\nand image region dropout to 0.5. We trained\nthe model using stochastic gradient descent with\nADADELTA (Zeiler, 2012) and a learning rate of\n0.002, for 25 epochs. Finally, after both the vali-\ndation perplexity and accuracy converged, we se-\nlected the converged model for testing.\nBaseline Doubly-attentive MNMT. We trained\na doubly-attentive MNMT model4as a baseline.\nFor the text side, the implementation was based\non OpenNMT model.5For the image side, atten-\ntion was applied to the visual features extracted\nfrom 7\u00027 image grids by CNNs. 
For the image\nfeature extraction, we compared three pre-trained\nCNN methods: VGG-19, ResNet-50, and ResNet-\n101.\nBaseline OpenNMT. We trained a text-only at-\ntentive NMT model using OpenNMT as the other\nbaseline. The model was trained on En !De and\nEn!Fr, wherein only the textual part of Multi30k\nwas used. The model comprised a 2-layer bidi-\nrectional GRU encoder and 2-layer cGRU decoder\nwith attention.\nFor baselines, we used the original implementa-\ntions and ensured the parameters were consistent\nwith our model.\n2https://github.com/moses-smt/mosesdecoder\n3https://github.com/rsennrich/subword-nmt\n4https://github.com/iacercalixto/MultimodalNMT\n5https://github.com/OpenNMT/OpenNMT-py\nEn!De En!Fr\nModel BLEU METEOR BLEU METEOR\nOpenNMT (text-only) 34.7\u00060.3 53.2\u00060.4 56.6\u00060.1 72.1\u00060.1\nDoubly-attentive MNMT (VGG-19) 36.4\u00060.2 55.0\u00060.1 57.4\u00060.4 72.4\u00060.4\nDoubly-attentive MNMT (ResNet-50) 36.5\u00060.2 54.9\u00060.4 57.5\u00060.4 72.6\u00060.4\nDoubly-attentive MNMT (ResNet-101) 36.5\u00060.3 54.9\u00060.3 57.3\u00060.2 72.4\u00060.2\nOurs (Faster R-CNN + ResNet-101) 37.0\u00060.1y55.3\u00060.2 58.2\u00060.5yz73.2\u00060.2\nvs. OpenNMT (text-only) (\"2.3) (\"2.1) (\"1.6) (\"1.1)\nvs. Doubly-attentive MNMT (ResNet-101) (\"0.5) (\"0.4) (\"0.8) (\"0.9)\nCaglayan et al. (2017) (text-only) 38.1\u00060.8 57.3\u00060.5 52.5\u00060.3 69.6\u00060.1\nCaglayan et al. (2017) (grid) 37.0\u00060.8 57.0\u00060.3 53.5\u00060.8 70.4\u00060.6\nCaglayan et al. (2017) (global) 38.8\u00060.5 57.5\u00060.2 54.5\u00060.8 71.2\u00060.4\nTable 1: BLEU and METEOR scores for different models on the En !De and En!Fr 2016 testset of Multi30k. All scores\nare averages of three runs. We present the results using the mean and the standard deviation. yandzindicate that the result\nis significantly better than OpenNMT and double-attentive MNMT at p-value <0.01, respectively. Additionally, we report\nthe best results of using grid and global visual features on Multi30k dataset according to (Caglayan et al., 2017), which is the\nstate-of-the-art system for En !De translation on this dataset.\n3.3 Evaluation\nWe evaluated the quality of the translation accord-\ning to the token level BLEU (Papineni et al., 2002)\nand METEOR (Denkowski and Lavie, 2014) met-\nrics. We trained all models (baselines and pro-\nposed model) three times and calculated the BLEU\nand METEOR scores, respectively. Based on the\ncalculation results, we report the mean and stan-\ndard deviation over three runs.\nMoreover, we report the statistical significance\nwith bootstrap resampling (Koehn, 2004) using the\nmerger of three test translation results. We defined\nthe threshold for the statistical significance test as\n0.01, and report only if the p-value was less than\nthe threshold.\n4 Results\nIn Table 1, we present the results for the Open-\nNMT, doubly-attentive MNMT and our model on\nMulti30k dataset. Additionally, we also compared\nwith Caglayan et al. (2017), which achieved the\nbest performance under the same condition with\nour experiments.\nComparing the baselines, the doubly-attentive\nMNMT outperformed OpenNMT. Because there\ndid not exist a big difference amongst the three\nimage feature extraction methods for the doubly-\nattentive MNMT model, we only used ResNet-101\nin our model.\nCompared with the OpenNMT baseline, the pro-posed model improved both BLEU scores and ME-\nTEOR scores for En !De and En!Fr tasks. 
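The significance testing mentioned in Section 3.3 can be pictured with the following schematic paired bootstrap resampling, in the spirit of Koehn (2004). It is only a sketch: the exact procedure and scorer behind the reported numbers may differ, and the metric is passed in as a function rather than implemented here.

import random

def paired_bootstrap(sys_a, sys_b, refs, metric, n_samples=1000, seed=0):
    # sys_a, sys_b: sentence-level outputs of two systems; refs: references.
    # metric: any corpus-level scorer called as metric(hypotheses, references),
    # e.g. a BLEU or METEOR implementation.
    rng = random.Random(seed)
    n = len(refs)
    a_wins = 0
    for _ in range(n_samples):
        sample = rng.choices(range(n), k=n)   # resample the test set with replacement
        score_a = metric([sys_a[i] for i in sample], [refs[i] for i in sample])
        score_b = metric([sys_b[i] for i in sample], [refs[i] for i in sample])
        if score_a > score_b:
            a_wins += 1
    return 1.0 - a_wins / n_samples           # approximate p-value for "A is not better than B"

A small returned value (e.g. below 0.01) is then read as system A being significantly better than system B on the given test set.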
Ad-\nditionally, the results of our proposed model are\nsignificantly better than the results obtained by the\nbaseline with a p-value <0.01 for both tasks.\nCompared with the doubly-attentive MNMT\n(ResNet-101) baseline, the proposed model also\nimproved the BLEU scores and METEOR scores\nfor both tasks. Moreover, the results are signif-\nicantly better than the baseline results with a p-\nvalue<0.01 for En!Fr task.\nFor comparison with Caglayan et al. (2017), we\nreport their results for the text-only NMT base-\nline, grid and global visual features for MNMT\nmethod. With the grid visual features, their results\nsurpassed the text-only NMT baseline for En !Fr,\nbut failed to surpass the text-only NMT baseline\nfor En!De with regard to both metrics. With the\nglobal visual features, their results surpassed the\ntext-only NMT baseline.\nFor En!De, though Caglayan et al. (2017)\n(global) achieved higher scores than our model,\nthe improvements were minor. In terms of relative\nimprovement compared with the text-only NMT\nbaseline, their results improved the BLEU score\nby 1.8% and METEOR score by 0.3%. In contrast,\nour model improved the BLEU score by 6.6% and\nMETEOR score by 3.9%.\nFor En!Fr, our results outperform Caglayan et\nal. (2017) (global) with regard to both metrics.\nIn terms of relative improvement compared with\nthe text-only NMT baseline, their results improved\nthe BLEU score by 1.9% and METEOR score by\n1.1% with the grid visual features and improved\nthe BLEU score by 3.8% and METEOR score by\n2.3% with the global visual features. Our model\nimproved the BLEU score by 2.8% and METEOR\nscore by 1.5%.\n5 Analysis\n5.1 Pairwise evaluation of translations\nWe randomly investigated 50 examples from the\nEn!Fr task to evaluate our model in detail. We\ncompared the translations of our model with the\nbaselines to identify improvement or deterioration\nin the translation. Then we categorized all ex-\namples into five types: 1) those whose transla-\ntion performance were better than both baselines;\n2) those whose translation performance were bet-\nter than the doubly-attentive MNMT (ResNet-101)\nbaseline; 3) those whose translation performance\nwere better than the OpenNMT baseline; 4) those\nwhose translation performance did not change; 5)\nthose whose translation performance deteriorated.\nWe counted the number and proportion of all types.\nIn Table 2, we can see that in nearly half of\nthe examples, the translation performance is bet-\nter than at least one baseline. Moreover, amongst\na total of 50 examples, 14 examples are better than\nthe doubly-attentive MNMT (ResNet-101) base-\nline and just two examples of local deterioration\nwere found compared with the baselines.\n5.2 Qualitative analysis\nIn Figure 4, we chose four examples to analyze\nour model in detail. The first two rows explain the\nadvantages of our model, while the last two rows\nexplain the shortcomings.\nAt each time step, the semantic image region\nis shown with deep or shallow transparency in the\nimage, according to its assigned attention weight.\nAs the weight increases, the image region becomes\nmore transparent. Considering the number of 100\nbounding boxes in one image and the overlapping\nareas, we visualized the top five weighted seman-\ntic image regions. The most weighted image re-\ngion is indicated by the blue lines, and the target\nword generated at that time step is indicated by the\nred text along with the bounding box. 
Then, we\nanalyzed whether the semantic image regions had\na positive or negative effect at the time step whenBetter than both baselines 8 (16%)\nBetter than MNMT baseline 6 (12%)\nBetter than NMT baseline 10 (20%)\nNo change 24 (48%)\nDeteriorated 2 (4%)\nTable 2: The amount and proportion of each type of examples\nin all investigated examples.\nthe target word was generated.\nAdvantages. In the first row, we can see that our\nmodel is better at translating the verb “grabbing”\ncompared with both baselines. For the text-only\nOpenNMT, the translation of the word “grabbing”\nis incorrect. In English it is translated as “strolling\nwith.” The doubly-attentive MNMT (ResNet-\n101) translated “grab” into “agrippe,” which failed\nto transform the verb into the present participle\nform. In contrast, although the reference is “sai-\nsissant” and our model generated “agrippant,” the\ntwo words are synonyms. Our approach improved\nthe translation performance both in terms of mean-\ning and verb deformation, owing to the semantic\nimage regions. We visualized the consecutive time\nsteps of generating the word “agrippant” in con-\ntext. Along with the generation of “agrippant,” the\nattention focused on the image region where the\naction was being performed, and thus captured the\nstate of the action at that moment.\nIn the second row, the noun “terrier” could not\nbe translated by the baselines. This word means\n“a lively little dog” in English. As we can see,\nwhen the target word “terrier” was generated in\nour model, the attented semantic image region at\nthat time step provided the exact object-level vi-\nsual feature to the translation.\nShortcomings. The example in the third row re-\nflects improvement and deficiency. Both base-\nlines lack the sentence components of the adver-\nbial “happily.” In contrast, our model translated\n“happily” into “joyeusement,” which is a better\ntranslation than both baselines. However, accord-\ning to the image, the semantic image region with\nthe largest attention weight did not carry the facial\nexpression of a boy.\nAlthough the maximum weight of the semantic\nimage region was not accurately assigned, other\nheavily weighted semantic image regions, which\ncontain the object attributes, may assist the trans-\nlation. There may be two reasons for this: the func-\na man in a blue coat grabbing a young boy’ s shoulder . 
Source (En)\nReference (Fr)\nNMT\nMNMT\nOursun homme en manteau bleu se baladant avec (strolling with) l’s épaule d’ s un jeune garçon .\nun homme en manteau bleu agrippe (grab) l’s épaule d’ s un jeune garçon .\nun homme en manteau bleu agrippant (grabbing) l’s épaule d’ s un jeune garçon .un homme en manteau bleu saisissant l’s épaule d's un jeune garçon .\nun terrier de boston court sur l's herbe verdoyante devant une clôture blanche .Source (En)\nReference (Fr)\nNMT\nMNMT\nOursun garde (guard) de boston court sur l's herbe souple devant une clôture blanche .\nun croreur (croror) court sur l's herbe verte devant une clôture blanche .\nun terrier (terrier) de boston terrier court sur l's herbe verte devant une clôture blanche .a boston terrier is running on lush green grass in front of a white fence .\nun petit enfant avec un t-shirt bleu et blanc tenant joyeusement un alligator en plastique jaune .Source (En)\nReference (Fr)\nNMT\nMNMT\nOursun petit enfant vêtu d's un t-shirt bleu et blanc \nbrandissant (brandishing) une bouteille (bottle) en plastique jaune .\nun petit enfant vêtu d's un t-shirt bleu et blanc \ntenant (holding) un fusil (rifle) en plastique jaune .\nun petit enfant vêtu d's un t-shirt bleu et blanc \nmet (put) joyeusement (happily) une forme (shape) en plastique jaune .a small child wearing a blue and white t-shirt happily holding a yellow plastic alligator .\ndes hommes jouant au volleyball , avec un joueur ratant le ballon mais avec les mains toujours en l's air .Source (En)\nReference (Fr)\nNMT\nMNMT\nOursdes hommes jouant au volleyball , un joueur à l's attraper , mais les autres mains ayant toujours dans les airs .\ndes hommes jouant au volley-ball , avec un joueur qui le regarde dans les airs (in the air) .\ndes hommes jouant au volleyball , avec un joueur qui passer le ballon mais les mains du vol (of the flight) . men playing volleyball , with one player missing the ball but hands still in the air .terrieragri@@ p@@ pant\nmet joyeusement forme\ndu volFigure 4: Translations from the baselines and our model for comparison. We highlight the words that distinguish the results.\nBlue words are marked for better translation and red words are marked for worse translation. We also visualize the semantic\nimage regions that the words attend to.\ntion of the attention mechanism is not sufficiently\neffective, or there exists an excessive amount of\nsemantic image regions.\nOn the other hand, for the generation of the word\n“holding” and “alligator,” the most weighted se-\nmantic image regions were not closely attended to.\nThere was a slight deviation between the image re-\ngions and semantics. Owing to the inaccuracy of\nthe image region that was drawn upon the object,\nthe semantic feature was not adequately extracted.\nThis indicates that the lack of specificity in the vi-\nsual feature quality can diminish the detail of the\ninformation being conveyed.\nIn the last row, we presented one of the two ex-\namples with local deterioration. The “air” is cor-\nrectly translated by baselines. However, our model\ntranslated “in the air” into “du vol (of the flight).”\nWe observed that the transparent semantic image\nregions with the five top weights in the image were\nvery scattered and unconnected. 
Amongst them,\nnone of the semantic image regions matched the\nfeature of “air.” We speculate that the word “air” is\ndifficult to interpret depending on visual features.\nOn the other hand, our model translated it into “vol\n(flight),” which is close to another meaning of the\npolysemous “air,” not something else.Summary. In our model, the improvement of\ntranslation performance benefits from semantic\nimage regions. The semantic image region visual\nfeatures include the object, object attributes, and\nscene understanding, may assist the model in per-\nforming a better translation on the verb, noun, ad-\nverb and so on.\nOn the other hand, there are some problems:\n\u000fIn some cases, although the translation\nperformance improved, the image attention\nmechanism did not assign the maximum\nweight to the most appropriate semantic im-\nage region.\n\u000fWhen the object attributes cannot be specifi-\ncally represented by image regions, incorrect\nvisual features conveyed by the semantic im-\nage regions may interfere with the translation\nperformance.\n\u000fIf the image attention mechanism leads to the\nwrong focused semantic image region, it will\nbring negative effects on translation perfor-\nmance.\nIn our investigation, we did not identify any\nclear examples of successful disambiguation. In\ncontrast, there is one example of detrimental re-\nsults upon disambiguation. If the semantic im-\nage regions did not have good coverage of the se-\nmantic features or the image attention mechanism\nworked poorly, the disambiguation of polysemous\nwords would not only fail, but ambiguous transla-\ntion would also take place.\n6 Related Work\nFrom the first shared task at WMT 2016,6many\nMMT studies have been conducted. Existing stud-\nies have fused either global or local visual image\nfeatures into MMT.\n6.1 Global visual feature\nCalixto and Liu (2017) incorporated global vi-\nsual features into source sentence vectors and en-\ncoder/decoder hidden states. Elliott and K ´ad´ar\n(2017) utilized global visual features to learn both\nmachine translation and visually grounding task si-\nmultaneously. As for the best system in WMT\n2017,7Caglayan et al. (2017) proposed differ-\nent methods to incorporate global visual features\nbased on attention-based NMT model such as ini-\ntial encoder/decoder hidden states using element-\nwise multiplication. Delbrouck and Dupont (2018)\nproposed a variation of the conditional gated re-\ncurrent unit decoder, which receives the global vi-\nsual features as input. Calixto et al. (2019) incor-\nporated global visual features through latent vari-\nables. Although their results surpassed the perfor-\nmance of the NMT baseline, the visual features of\nan entire image are complex and non-specific, so\nthat the effect of the image is not fully exerted.\n6.2 Local visual features\nGrid visual features. Fukui et al. (2016) applied\nmultimodal compact bilinear pooling to combine\nthe grid visual features and text vectors, but their\nmodel does not convincingly surpass an attention-\nbased NMT baseline. Caglayan et al. (2016a) inte-\ngrated local visual features extracted by ResNet-50\nand source text vectors into an NMT decoder us-\ning shared transformation. They reported that the\nresults obtained by their method did not surpass\nthe results obtained by NMT systems. Caglayan,\nBarrault, and Bougares (2016b) proposed a mul-\ntimodal attention mechanism based on (Caglayan\net al., 2016a). 
They integrated two modalities by\n6http://www.statmt.org/wmt16/multimodal-task.html\n7http://www.statmt.org/wmt17/multimodal-task.htmlcomputing the multimodal context vector, wherein\nthe local visual features were extracted by the\nResNet-50 CNN. Similarly, Calixto et al. (2016)\nincorporated multiple multimodal attention mech-\nanisms into decoder using grid visual features by\nVGG-19 CNN. Because the grid regions do not\ncontain semantic visual features, the multimodal\nattention mechanism can not capture useful infor-\nmation with grid visual features.\nTherefore, instead of multimodal attention, Cal-\nixto, Liu, and Campbell (2017) proposed two in-\ndividual attention mechanisms focusing on two\nmodalities. Similarly, Libovick ´y and Helcl (2017)\nproposed two attention strategies that can be ap-\nplied to all hidden layers or context vectors of\neach modality. But they still used grid visual fea-\ntures extracted by a CNN pre-trained on ImageNet.\nCaglayan et al. (2017) integrated a text context\nvector and visual context vectors by grid visual\nfeatures to generate a multimodal context vector.\nTheir results did not surpass those of the baseline\nNMT for the English–German task.\nHelcl, Libovick ´y, and Vari ˇs (2018) set an ad-\nditional attention sub-layer after the self-attention\nbased on the Transformer architecture, and in-\ntegrated grid visual features extracted by a pre-\ntrained CNN. Caglayan et al. (2018) enhanced\nthe multimodal attention into the filtered attention,\nwhich filters out grid regions irrelevant to transla-\ntion and focuses on the most important part of the\ngrid visual features. They made efforts to integrate\na stronger attention function, but the considered re-\ngions were still grid visual features.\nImage region visual features. Huang et al.\n(2016) extracted global visual features from en-\ntire images using a CNN and four regional bound-\ning boxes from an image by a R-CNN.8They in-\ntegrated the features into the beginning or end of\nthe encoder hidden states. Because the global vi-\nsual features were unable to provide extra sup-\nplementary information, they achieved slight im-\nprovement above the attention-based NMT. No-\ntably, detailed regional visual features lead to bet-\nter NMT translation performance.\nToyama et al. (2017) proposed a transformation\nto mix global visual feature vectors and object-\nlevel visual feature vectors extracted by a Fast R-\nCNN.9They incorporated multiple image features\ninto the encoder and the head of the source se-\n8https://github.com/rbgirshick/rcnn\n9https://github.com/rbgirshick/fast-rcnn\nquence and target sequence. Their model does not\nbenefit from the object-level regions because the\nintegration method cannot adequately handle vi-\nsual feature sequences. Delbrouck, Dupont, and\nSeddati (2017) used two types of visual features,\nwhich had been extracted by ResNet-50 pretrained\non ImageNet, and DenseCap10pretrained on Vi-\nsual Genome, respectively. They integrated the\nfeatures into their multimodal embeddings and\nfound that the regional visual features (extracted\nby DenseCap) resulted in improved translations.\nHowever, they did not clarify whether the improve-\nment in the regional visual features was brought by\nthe multimodal embeddings or the attention model.\nFor the best system in WMT 2018,11Gr¨onroos\net al. 
(2018) used different types of visual features,\nsuch as the scene type, action type, and object type.\nThey integrated these features into the transformer\narchitecture using multimodal settings. However,\nthey found that the visual features only exerted\na minor effect in their system. Anderson et al.\n(2018) proposed a bottom-up and top-down model,\nwhich calculates attention at the level of objects.\nThis model was used in visual question answering\nand image captioning tasks.\n7 Conclusion\nThis paper proposed a model that integrates se-\nmantic image regions with two individual attention\nmechanisms. We achieved significantly improved\ntranslation performance above two baselines, and\nverified that this improvement mainly benefited\nfrom the semantic image regions. Additionally, we\nanalyzed the advantages and shortcomings of our\nmodel by comparing examples and visualization of\nsemantic image regions. In the future, we plan to\nuse much finer visual information such as instance\nsemantic segmentation to improve the quality of\nvisual features. In addition, as English entity and\nimage region alignment has been manually anno-\ntated to the Multi30k dataset, we plan to use it as\nsupervision to improve accuracy of the attention\nmechanism.\nAcknowledgments\nThis work was supported by Microsoft Research\nAsia Collaborative Research Grant, Grant-in-Aid\nfor Young Scientists #19K20343 and Grant-in-Aid\nfor Research Activity Start-up #18H06465, JSPS.\n10https://github.com/jcjohnson/densecap\n11http://www.statmt.org/wmt18/multimodal-task.htmlReferences\nAnderson, Peter, Xiaodong He, Chris Buehler, Damien\nTeney, Mark Johnson, Stephen Gould, and Lei\nZhang. 2018. Bottom-up and top-down attention\nfor image captioning and visual question answering.\nInCVPR , pages 6077–6086.\nBahdanau, Dzmitry, Kyunghyun Cho, and Yoshua\nBengio. 2015. Neural machine translation by\njointly learning to align and translate. ICLR ,\nabs/1409.0473.\nBarrault, Lo ¨ıc, Fethi Bougares, Lucia Specia, Chiraag\nLala, Desmond Elliott, and Stella Frank. 2018.\nFindings of the third shared task on multimodal ma-\nchine translation. In WMT , pages 304–323.\nCaglayan, Ozan, Walid Aransa, Yaxing Wang,\nMarc Masana, Mercedes Garc ´ıa-Mart ´ınez, Fethi\nBougares, Lo ¨ıc Barrault, and Joost van de Weijer.\n2016a. Does multimodality help human and ma-\nchine for translation and image captioning? In\nWMT , pages 627–633.\nCaglayan, Ozan, Lo ¨ıc Barrault, and Fethi Bougares.\n2016b. Multimodal attention for neural machine\ntranslation. CoRR .\nCaglayan, Ozan, Walid Aransa, Adrien Bardet, Mer-\ncedes Garc ´ıa-Mart ´ınez, Fethi Bougares, Lo ¨ıc Bar-\nrault, Marc Masana, Luis Herranz, and Joost van de\nWeijer. 2017. LIUM-CVC submissions for WMT17\nmultimodal translation task. In WMT , pages 432–\n439.\nCaglayan, Ozan, Adrien Bardet, Fethi Bougares, Lo ¨ıc\nBarrault, Kai Wang, Marc Masana, Luis Herranz,\nand Joost van de Weijer. 2018. LIUM-CVC submis-\nsions for WMT18 multimodal translation task. In\nWMT , pages 597–602.\nCalixto, Iacer and Qun Liu. 2017. Incorporating global\nvisual features into attention-based neural machine\ntranslation. In EMNLP , pages 992–1003.\nCalixto, Iacer, Desmond Elliott, and Stella Frank.\n2016. DCU-UvA multimodal MT system report. In\nWMT , pages 634–638.\nCalixto, Iacer, Qun Liu, and Nick Campbell. 2017.\nDoubly-attentive decoder for multi-modal neural\nmachine translation. In ACL, pages 1913–1924.\nCalixto, Iacer, Miguel Rios, and Wilker Aziz. 
2019.\nLatent variable model for multi-modal translation.\nInACL, pages 6392–6405.\nDelbrouck, Jean-Benoit and St ´ephane Dupont. 2017.\nAn empirical study on the effectiveness of images in\nmultimodal neural machine translation. In EMNLP ,\npages 910–919.\nDelbrouck, Jean-Benoit and St ´ephane Dupont. 2018.\nUMONS submission for WMT18 multimodal trans-\nlation task. In WMT , pages 643–647.\nDelbrouck, Jean-Benoit, St ´ephane Dupont, and Omar\nSeddati. 2017. Visually grounded word embeddings\nand richer visual features for improving multimodal\nneural machine translation. In GLU , pages 62–67.\nDenkowski, Michael and Alon Lavie. 2014. Meteor\nuniversal: Language specific translation evaluation\nfor any target language. In WMT , pages 376–380.\nElliott, Desmond and ´Akos K ´ad´ar. 2017. Imagination\nimproves multimodal translation. In IJCNLP , pages\n130–141.\nElliott, Desmond, Stella Frank, and Eva Hasler. 2015.\nMulti-language image description with neural se-\nquence models. CoRR .\nElliott, Desmond, Stella Frank, Khalil Sima’an, and Lu-\ncia Specia. 2016. Multi30k: Multilingual English-\nGerman image descriptions. In VL, pages 70–74.\nElliott, Desmond, Stella Frank, Lo ¨ıc Barrault, Fethi\nBougares, and Lucia Specia. 2017. Findings of the\nsecond shared task on multimodal machine transla-\ntion and multilingual image description. In WMT ,\npages 215–233.\nFukui, Akira, Dong Huk Park, Daylen Yang, Anna\nRohrbach, Trevor Darrell, and Marcus Rohrbach.\n2016. Multimodal compact bilinear pooling for vi-\nsual question answering and visual grounding. In\nWMT , pages 457–468.\nGao, Haoyuan, Junhua Mao, Jie Zhou, Zhiheng Huang,\nLei Wang, and Wei Xu. 2015. Are you talking to a\nmachine? dataset and methods for multilingual im-\nage question answering. In NIPS , pages 2296–2304.\nGr¨onroos, Stig-Arne, Benoit Huet, Mikko Kurimo,\nJorma Laaksonen, Bernard Merialdo, Phu Pham,\nMats Sj ¨oberg, Umut Sulubacak, J ¨org Tiedemann,\nRaphael Troncy, and Ra ´ul V ´azquez. 2018. The\nMeMAD submission to the WMT18 multimodal\ntranslation task. In WMT , pages 603–611.\nHe, Kaiming, Xiangyu Zhang, Shaoqing Ren, and Jian\nSun. 2016. Deep residual learning for image recog-\nnition. In CVPR , pages 770–778.\nHelcl, Jind ˇrich, Jind ˇrich Libovick ´y, and Du ˇsan Vari ˇs.\n2018. CUNI system for the WMT18 multimodal\ntranslation task. In WMT , pages 616–623.\nHuang, Po-Yao, Frederick Liu, Sz-Rung Shiang, Jean\nOh, and Chris Dyer. 2016. Attention-based multi-\nmodal neural machine translation. In WMT , pages\n639–645.\nKoehn, Philipp. 2004. Statistical significance tests for\nmachine translation evaluation. In EMNLP , pages\n388–395.\nKrishna, Ranjay, Yuke Zhu, Oliver Groth, Justin John-\nson, Kenji Hata, Joshua Kravitz, Stephanie Chen,\nYannis Kalantidis, Li-Jia Li, David A. Shamma,Michael S. Bernstein, and F. Li. 2017. Vi-\nsual genome: Connecting language and vision us-\ning crowdsourced dense image annotations. IJCV ,\n123(1):32–73.\nLibovick ´y, Jind ˇrich and Jind ˇrich Helcl. 2017.\nAttention strategies for multi-source sequence-to-\nsequence learning. In ACL, pages 196–202.\nPapineni, Kishore, Salim Roukos, Todd Ward, and Wei-\nJing Zhu. 2002. BLEU: a method for automatic\nevaluation of machine translation. In ACL, pages\n311–318.\nRen, Shaoqing, Kaiming He, Ross Girshick, and Jian\nSun. 2015. Faster R-CNN: Towards real-time object\ndetection with region proposal networks. In ICCV ,\npages 91–99.\nSpecia, Lucia, Stella Frank, Khalil Sima’an, and\nDesmond Elliott. 2016. 
A shared task on multi-\nmodal machine translation and crosslingual image\ndescription. In WMT , pages 543–553.\nSutskever, Ilya, Oriol Vinyals, and Quoc V Le. 2014.\nSequence to sequence learning with neural networks.\nInNIPS , pages 3104–3112.\nToyama, Joji, Masanori Misono, Masahiro Suzuki, Ko-\ntaro Nakayama, and Yutaka Matsuo. 2017. Neu-\nral machine translation with latent semantic of image\nand text. ArXiv , abs/1611.08459.\nXu, Kelvin, Jimmy Ba, Ryan Kiros, Kyunghyun Cho,\nAaron Courville, Ruslan Salakhudinov, Rich Zemel,\nand Yoshua Bengio. 2015. Show, attend and tell:\nNeural image caption generation with visual atten-\ntion. In ICML , pages 2048–2057.\nZeiler, Matthew D. 2012. ADADELTA: an adaptive\nlearning rate method. CoRR .",
"main_paper_content": null
}
|
{
"decision": "Unknown",
"reviews": []
}
| 0 | 0 |
[] |
[] |
{
"id": "Hse7VS4UKfG",
"year": null,
"venue": "EAMT 2012",
"pdf_link": "https://aclanthology.org/2012.eamt-1.10.pdf",
"forum_link": "https://openreview.net/forum?id=Hse7VS4UKfG",
"arxiv_id": null,
"doi": null
}
|
{
"title": "Efficiency-based evaluation of aligners for industrial applications",
"authors": [
"Antonio Toral",
"Marc Poch",
"Pavel Pecina",
"Gregor Thurmair"
],
"abstract": "Antonio. Toral, Marc Poch, Pavel Pecina, Gregor Thurmair. Proceedings of the 16th Annual conference of the European Association for Machine Translation. 2012.",
"keywords": [],
"raw_extracted_content": "Efficiency-based evaluation of aligners for industrial applications\u0003\nAntonio Toral\nSchool of Computing\nDublin City University\nDublin, Ireland\[email protected] Poch\nIULA, Universitat\nPompeu Fabra\nBarcelona, Spain\[email protected] Pecina\nFaculty of Mathematics and\nPhysics, Charles University\nPrague, Czech Republic\[email protected] Thurmair\nLinguatec GmbH\nMunich, Germany\[email protected]\nAbstract\nThis paper presents a novel efficiency-\nbased evaluation of sentence and word\naligners. This assessment is critical in or-\nder to make a reliable use in industrial sce-\nnarios. The evaluation shows that the re-\nsources required by aligners differ rather\nbroadly. Subsequently, we establish lim-\nitation mechanisms on a set of aligners\ndeployed as web services. These results,\npaired with the quality expected from the\naligners, allow providers to choose the\nmost appropriate aligner according to the\ntask at hand.\n1 Introduction\nAligners refer in this paper to tools that, given a\nbilingual corpus, identify corresponding pairs of\nlinguistic items, be they sentences (sentence align-\ners) or words (word aligners). Alignment is a key\ncomponent in corpus-based multilingual applica-\ntions. First, alignment is one of the most time-\nconsuming tasks in building Machine Translation\n(MT) systems. In terms of quality, good align-\nment is decisive for the final quality of the MT\nsystem; bad alignment decreases MT quality and\ninflates the phrase table with spurious translations\nwith very low probabilities, which reduces system\nperformance. Finally, for terminology acquisition,\nthe choice of a good aligner determines whether\nthe results of a term extraction tool are usable or\nnot; alignment quality on phrase level differs from\n\u0003We would like to thank Daniel Varga and Adrien Lardilleux\nfor their feedback on Hunalign and Anymalign, respectively.\nWe would like to thank Joachim Wagner for his help on using\nthe cluster. This research has been partially funded by the EU\nproject PANACEA (7FP-ITC-248064).\n\u0003c\r2012 European Association for Machine Translation.less than 5% (usable) to more than 40% (unusable)\nerror rate (Aleksic and Thurmair, 2012).\nThe performance of aligners is commonly eval-\nuated extrinsically, i.e. by measuring their im-\npact in the result obtained by a MT system that\nuses the aligned corpus (Abdul-Rauf et al., 2010;\nLardilleux and Lepage, 2009; Haghighi et al.,\n2009). Intrinsic evaluations have also been car-\nried out, mainly by measuring the Alignment Error\nRate (AER), precision and recall (von Waldenfels,\n2006; Varga et al., 2005; Moore, 2002; Haghighi\net al., 2009). Intrinsic evaluation is less popular\ndue to two reasons (Fraser and Marcu, 2007): (i)\nit requires a gold standard and (ii) the correlation\nbetween AER and MT quality is very low. Both\ntypes of evaluation have, however, a common as-\npect; they focus on measuring the quality of the\noutput produced by aligners. Conversely, seldom\nif at all has it been considered to assess the ef-\nficiency of aligners, i.e. to measure the compu-\ntational resources consumed (e.g. execution time,\nuse of memory). 
However, this assessment is crit-\nical if the aligners are to be exploited in an indus-\ntrial scenario.\nThis work is part of a wider project, whose ob-\njective is to automate the stages involved in the\nacquisition, production, updating and maintenance\nof language resources required by MT systems.\nThis is done by creating a platform, designed as\na dedicated workflow manager, for the composi-\ntion of a number of processes for the production\nof language resources, based on combinations of\ndifferent web services.\nThe present work builds upon (Toral et al.,\n2011), where we presented a web service architec-\nture for sentence and word alignment. Here we\nextend this proposal by evaluating the efficiency\nof the aligners integrated, and subsequently im-\nProceedings of the 16th EAMT Conference, 28-30 May 2012, Trento, Italy\n57\nproving the architecture by implementing limita-\ntion mechanisms that take into account the results.\n2 Evaluation\nWe have integrated a range of state-of-the-art\nsentence and word aligners into the web ser-\nvice architecture. The sentence aligners included\nare Hunalign (Varga et al., 2005), GMA1and\nBSA (Moore, 2002). As for word aligners,\nthey are GIZA++ (Och and Ney, 2003), Berke-\nleyAligner (Haghighi et al., 2009) and Anyma-\nlign (Lardilleux and Lepage, 2009). For a detailed\ndescription of the integration please refer to (Toral\net al., 2011).\nIn order to evaluate the efficiency of the align-\ners, we have run them over different amounts of\nsentences of a bilingual corpus (from 5k to 100k\nadding 5k at a time for sentence alignment and\nfrom 100k to 1.7M adding 100k at a time for\nword alignment). For all the experiments we use\nsentences from the Europarl English–Spanish cor-\npus,2which contains over 1.7M sentence pairs.\nThe aligners are executed using the default val-\nues for their parameters. All the experiments have\nbeen run in a cluster node with 2 Intel Xeon X5670\n6-core CPUs and 96 GB of RAM. The OS is\nGNU/Linux. The resources consumed have been\nmeasured using the following parameters of the\nGNU command time:\n\u000f%S(CPU-seconds used by the system on be-\nhalf of the process) plus %T(CPU-seconds\nthat the process used directly), to measure the\nexecution time. We limit our experiments to\n100k seconds.\n\u000f%M(maximum resident set size of the process\nduring its lifetime, in Kilobytes), to measure\nthe memory used.\nFigure 1 shows the execution times (logarithmic\nscale) of the sentence aligners. It emerges that the\ntime required by GMA is considerable higher com-\npared to the other two aligners (e.g., for 45k sen-\ntences GMA takes approximately 16 and 20 times\nlonger than BSA and Hunalign, respectively). The\ngap grows exponentially with the input size.\nFigure 2 shows the memory consumed by the\nsentence aligners. 
Hunalign has a steeper curve\n(for 45k sentences, Hunalign uses 6 and 4 times\nmore memory than BSA and GMA, respectively).\n1http://nlp.cs.nyu.edu/GMA/\n2http://www.statmt.org/europarl/\n51015202530354045505560657075808590951001101001,00010,000100,000\nhunalign\nbsa\ngma\nInput size (thousand sentences)Time (seconds)Figure 1: Execution time for sentence aligners\nIn fact Hunalign was not able to align inputs of\nmore than 45k sentences due to memory issues.3\nTable 1 contains all the measurements for sentence\nalignment.\n510152025303540455055606570758085909510005,000,00010,000,00015,000,00020,000,00025,000,00030,000,00035,000,000\nhunalign\nbsa\ngma\nInput size (thousand sentences)Memory (kilobytes)\nFigure 2: Memory used by sentence aligners\nTime (seconds) Memory (M bytes)\ni hun bsa gma hun bsa gma\n5 11 54 103 584 684 3,677\n10 33 105 405 1,616 1,079 5,749\n15 66 185 950 3,146 1,337 5,305\n20 113 247 1,866 6,115 1,597 6,126\n25 168 305 3,004 8,803 1,807 5,878\n30 234 364 4,370 12,104 2,070 6,276\n35 319 436 6,578 19,211 2,559 6,390\n40 412 494 7,775 23,827 2,919 6,433\n45 510 659 10,609 28,892 4,679 6,415\n50 - 721 11,947 - 5,297 6,594\n55 - 797 13,768 - 5,824 6,915\n60 - 878 17,780 - 6,347 6,888\n65 - 973 25,787 - 6,872 7,061\n70 - 1,053 25,251 - 7,415 7,143\n75 - 1,120 30,513 - 7,940 7,692\n80 - 1,165 31,591 - 8,469 7,832\n85 - 1,277 34,664 - 8,991 7,872\n90 - 1,348 42,720 - 9,518 7,730\n95 - 1,391 48,823 - 10,043 7,969\n100 - 1,863 54,350 - 14,537 7,911\nTable 1: Detailed results for sentence aligners. i\ninput sentences (thousand), hun hunalign\nFigure 3 shows the execution times for word\naligners. GIZA++ is the most efficient word\naligner, consistently across the different inputs.\n3A constant in the source code of Hunalign establishes the\nmaximum amount of memory it will use, by default 4GB;\nwe increased it to 64GB. Moreover, it can split the input into\nsmaller chunks with partialAlign (it cuts the data into chunks\nof approximately 5,000 sentences each, based on hapax clues\nfound on each side), however we did not use this preprocess-\ning tool but only the aligner itself.\n58\nThe performance of Berkeley is similar to that of\nGIZA++ for the first runs but the difference of\nexecution time grows with the size of the input.\nThere are no results for Berkeley for over 1,1M\nsentences as the time limit is exceeded. Finally,\nthe behaviour of Anymalign does not correlate at\nall with the size of the input. This has to do with\nthe very nature of this aligner.4\n1234567891011121314151617020,00040,00060,00080,000100,000120,000\ngizapp\nberkeley\nanymalign\nInput size (thousand sentences)Time (seconds)\nFigure 3: Execution time for word aligners\nFigure 4 shows the memory required by word\naligners. Berkeley consistently requires more\nmemory than both GIZA++ and Anymalign. The\nrequirements of GIZA++ and Anymalign are sim-\nilar, although slightly lower for the latter. Table 2\ncontains all the measurements for word alignment.\n1234567891011121314151617010,000,00020,000,00030,000,00040,000,00050,000,00060,000,000\ngizapp\nberkeley\nanymalign\nInput size (thousand sentences)Memory (kilobytes)\nFigure 4: Memory used by word aligners\n3 Limiting web services\nThe previous section has shown that the computa-\ntional resources required by state-of-the-art align-\ners are very different. 
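For readers who want to reproduce this kind of measurement, a minimal Python sketch is given below. The numbers above were collected with the GNU time command, so this is only an approximation of the same idea, and the aligner command shown in the usage comment is a placeholder rather than a tested invocation.

import resource
import subprocess
import time

def run_and_measure(cmd):
    # cmd: the aligner invocation as a list of arguments (see placeholder below).
    start = time.time()
    before = resource.getrusage(resource.RUSAGE_CHILDREN)
    subprocess.run(cmd, check=True)
    after = resource.getrusage(resource.RUSAGE_CHILDREN)
    wall = time.time() - start
    cpu = (after.ru_utime - before.ru_utime) + (after.ru_stime - before.ru_stime)
    peak_rss_kb = after.ru_maxrss   # peak resident set size over terminated children (KB on Linux)
    return wall, cpu, peak_rss_kb

# hypothetical usage:
# wall, cpu, mem = run_and_measure(["some-aligner", "source.txt", "target.txt"])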
These resources are limited and must be taken into account when they are shared among users through web services.
We have studied ways of establishing limitations for the aligners deployed as web services. Two kinds of limitations are explored and implemented: (i) the number of concurrent executions and (ii) the input size allowed for each aligner.
[4] Anymalign runs are random; its stop criterion can be based on the number of alignments it finds per second. We set this parameter to the most conservative value supported, i.e. 1 alignment per second.
[5] http://soaplab.sourceforge.net/soaplab2/
Time (k seconds) Memory (Mbytes)
i giz brk any giz brk any
1 1.7 9.0 31.9 1,894 23,906 1,582
2 3.4 18.8 21.4 3,181 24,619 2,277
3 5.1 29.2 33.2 4,293 24,222 3,142
4 6.9 37.3 39.0 5,292 28,190 3,818
5 8.7 43.6 12.4 6,245 32,586 3,525
6 10.5 58.0 9.0 7,144 36,773 4,304
7 12.3 66.2 26.5 8,008 45,999 5,017
8 14.2 77.3 17.8 8,807 46,545 5,531
9 15.9 84.7 12.4 9,565 52,437 5,407
10 17.7 97.0 11.8 10,313 50,977 5,522
11 19.3 - 18.9 11,030 - 6,800
12 21.2 - 4.1 11,713 - 6,107
13 23.6 - 10.1 12,403 - 6,301
14 25.4 - 14.8 13,057 - 7,382
15 27.0 - 16.5 13,688 - 8,931
16 28.2 - 24.2 14,272 - 9,469
17 30.2 - 17.9 15,270 - 8,860
Table 2: Detailed results for word aligners. i = input sentences (hundred thousand), giz = GIZA++, brk = Berkeley, any = Anymalign.
The web services are developed using Soaplab2 [5]. This tool allows one to deploy web services on top of command-line applications by writing files that describe the parameters of these services in ACD format [6]. Soaplab2 then converts the ACD files to XML metadata files which contain all the necessary information to provide the services. The Soaplab server is a web application run by a server container (Apache Tomcat [7] in our setup) which is in charge of providing the services using the generated metadata.
Figure 5 shows the diagram of the program flow for web services that incorporates limitation mechanisms [8]. The modules are the following:
- tool.acd (e.g. bsa.acd), contains the metadata of the web service in ACD format.
- ws.sh, controls other modules that implement the waiting and execution mechanisms.
- init_ws.sh, contains the code that implements the limitation on the number of concurrent executions and the waiting queue. The web service is in waiting state while it is executing this script.
- tool.sh (e.g. bsa.sh), executes the tool. The web service is in executing state while it is executing this script.
- ws_vars.sh, contains all the variables used by the different web services.
- ws_common.sh, contains code routines shared by different web services.
[6] http://soaplab.sourceforge.net/soaplab2/MetadataGuide.html
[7] http://tomcat.apache.org/
[8] The code is available under the GPL-v3 license at BLIND
Figure 5: Diagram of the program flow
3.1 Limitation of concurrent executions
The limitation of concurrent executions is controlled by two variables, MAX_WS_WAIT and MAX_WS_EXE, set in ws_vars.sh. They hold the maximum number of web services that can be concurrently waiting and executing, respectively.
The following actions are carried out when a web service is executed. First, tool.acd calls ws.sh, which calls two scripts sequentially: init_ws.sh and tool.sh. init_ws.sh checks if the waiting queue is full and aborts the execution if so. Otherwise it puts the execution in waiting state and checks periodically whether the execution queue is full.
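A minimal sketch of this wait-then-execute gate (our own illustration in Python; the platform's actual mechanism is implemented in the shell scripts named above, and the bookkeeping directories and polling interval below are invented for the example):

    import os
    import time

    MAX_WS_WAIT = 5   # maximum number of concurrently waiting executions
    MAX_WS_EXE = 2    # maximum number of concurrently running executions
    WAIT_DIR, EXE_DIR = "/tmp/ws_wait", "/tmp/ws_exe"  # hypothetical token directories

    def count(directory):
        os.makedirs(directory, exist_ok=True)
        return len(os.listdir(directory))

    def acquire_execution_slot(job_id, poll_seconds=5):
        # Abort immediately if the waiting queue is already full.
        if count(WAIT_DIR) >= MAX_WS_WAIT:
            raise RuntimeError("waiting queue full: execution aborted")
        wait_token = os.path.join(WAIT_DIR, job_id)
        open(wait_token, "w").close()          # enter the waiting state
        try:
            while count(EXE_DIR) >= MAX_WS_EXE:
                time.sleep(poll_seconds)       # poll until an execution slot frees up
        finally:
            os.remove(wait_token)              # leave the waiting state
        open(os.path.join(EXE_DIR, job_id), "w").close()  # enter the executing state

    def release_execution_slot(job_id):
        os.remove(os.path.join(EXE_DIR, job_id))
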
When there is a free execution slot, init_ws.sh exits, returning control to ws.sh, which changes the state to executing and calls tool.sh.
3.2 Limitation of input size
The limitation of input/output data size can be performed at three levels: Tomcat, Soaplab and web service. Tomcat provides a parameter, MaxPostSize, which indicates the maximum size of the POST in bytes that will be processed. Soaplab allows us to put a size limit (in bytes) on the output of web services using a property. The user can establish a general limit that applies to every web service, and/or specific limits that apply to any particular web service.
Both these methods allow us to limit the input/output of web services in bytes. However, limiting the size according to different metrics might be useful. For example, the inputs of aligners are usually measured in number of sentences (rather than number of bytes). Limits on the number of input sentences have been established at the web service level for each aligner following the results obtained in the evaluation (Section 2). Variables with the desired maximum input size in number of sentences have been added for each aligner in ws_vars.sh. A function included in ws_common.sh checks the size of the input whenever an aligner is executed.
4 Conclusions
This paper has presented, to the best of our knowledge, the first efficiency-based evaluation of sentence and word aligners. This assessment is critical for the reliable use of aligners in industrial scenarios, especially when they are offered as services. The evaluation has shown that the resources required by aligners differ rather broadly. These results, paired with the quality expected from the aligners, allow providers to choose the most appropriate aligner according to the task at hand.
References
Abdul-Rauf, S., M. Fishel, P. Lambert, S. Noubours, and R. Sennrich. 2010. Evaluation of Sentence Alignment Systems (Project at the Fifth Machine Translation Marathon).
Aleksic, V. and G. Thurmair. 2012. Rule-based MT system adjusted for narrow domain (ACCURAT Deliverable D4.4). Technical report.
Fraser, A. and D. Marcu. 2007. Measuring Word Alignment Quality for Statistical Machine Translation. Computational Linguistics, 33:293–303.
Haghighi, A., J. Blitzer, J. DeNero, and D. Klein. 2009. Better word alignments with supervised ITG models. In Proceedings of the Joint Conference of the 47th Annual Meeting of the ACL and the 4th International Joint Conference on Natural Language Processing of the AFNLP, pages 923–931.
Lardilleux, A. and Y. Lepage. 2009. Sampling-based multilingual alignment. In Proceedings of RANLP, pages 214–218, Borovets, Bulgaria.
Moore, R. C. 2002. Fast and accurate sentence alignment of bilingual corpora. In Proceedings of AMTA, pages 135–144.
Och, F. J. and H. Ney. 2003. A systematic comparison of various statistical alignment models. Computational Linguistics, 29:19–51.
Toral, A., P. Pecina, A. Way, and M. Poch. 2011. Towards a User-Friendly Webservice Architecture for Statistical Machine Translation in the PANACEA project. In Proceedings of EAMT, pages 63–72, Leuven, Belgium.
Varga, D., L. Németh, P. Halácsy, A. Kornai, V. Trón, and V. Nagy. 2005. Parallel corpora for medium density languages. In Proceedings of RANLP, pages 590–596, Borovets, Bulgaria.
von Waldenfels, R. 2006. Compiling a parallel corpus of Slavic languages.
Text strategies, tools and the question of lemmatization in alignment. In Beiträge der Europäischen Slavistischen Linguistik, pages 123–138.",
"main_paper_content": null
}
|
{
"decision": "Unknown",
"reviews": []
}
| 0 | 0 |
[] |
[] |
{
"id": "elPoEanxWGZ",
"year": null,
"venue": "EAMT 2011",
"pdf_link": "https://aclanthology.org/2011.eamt-1.40.pdf",
"forum_link": "https://openreview.net/forum?id=elPoEanxWGZ",
"arxiv_id": null,
"doi": null
}
|
{
"title": "Towards Using Web-Crawled Data for Domain Adaptation in Statistical Machine Translation",
"authors": [
"Pavel Pecina",
"Antonio Toral",
"Andy Way",
"Vassilis Papavassiliou",
"Prokopis Prokopidis",
"Maria Giagkou"
],
"abstract": "Pavel Pecina, Antonio Toral, Andy Way, Vassilis Papavassiliou, Prokopis Prokopidis, Maria Giagkou. Proceedings of the 15th Annual conference of the European Association for Machine Translation. 2011.",
"keywords": [],
"raw_extracted_content": "Towards Using Web-Crawled Data for Domain Adaptation\nin Statistical Machine Translation\nPavel Pecina,\nAntonio Toral, Andy Way\nSchool of Computing\nDublin City Universiy\nDublin 9, Ireland\n{ppecina,atoral,away}@computing.dcu.ieVassilis Papavassiliou,\nProkopis Prokopidis, Maria Giagkou\nInstitute for Language & Speech Processing\nArtemidos 6 & Epidavrou\n151 25 Maroussi, Greece\n{vpapa,prokopis,mgiagkou}@ilsp.gr\nAbstract\nThis paper reports on the ongoing work fo-\ncused on domain adaptation of statistical\nmachine translation using domain-specific\ndata obtained by domain-focused web\ncrawling. We present a strategy for crawl-\ning monolingual and parallel data and their\nexploitation for testing, language mod-\nelling, and system tuning in a phrase-\n-based machine translation framework.\nThe proposed approach is evaluated on\nthe domains of Natural Environment and\nLabour Legislation and two language\npairs: English–French and English–Greek.\n1 Introduction\nPerformance of a statistical machine translation\n(SMT) system usually drops when it is applied on\ndata of a different nature than that the system was\ntrained on (Banerjee et al., 2010). As any other\nmachine-learning application, SMT is not guaran-\nteed to perform optimally if the data for training\nand testing are not identically (and independently)\ndistributed, which is often the case in practice. The\nmain problem is usually vocabulary coverage: spe-\ncific domain texts typically contain a lot of special\nvocabulary that is not likely to be found in texts\nfrom other domains. Problems are also caused by\ndivergence in style or genre, where the difference\nis not only in terminology but also in grammar.\nIn order to achieve optimal performance, an\nSMT system must be trained on data from the same\ndomain, of the same genre, and the same style\nas that it is applied on. For many domains, such\ntraining resources (monolingual and parallel data)\nare not available in large enough amounts to train\na system of a sufficient quality. However, even\nsmall amounts of such data can be used to adapt\n© 2011 European Association for Machine Translation.an existing (general-domain) system to the partic-\nular domain (Koehn and Schroeder, 2007). If the\ndata is not available at all, a possible solution is to\nexploit publicly available data from the web.\nIn this work, we present a strategy for crawling\ndomain-specific texts from the web and their ex-\nploitation for domain-adaptation in a phrase-based\nstatistical machine translation (PB-SMT) frame-\nwork. At the current stage, we focus on two re-\nsources: in-domain parallel data for parameter tun-\ning and in-domain monolingual data for language\nmodel training. As part of our approach, we also\ncreate domain-specific test sets. The evaluation is\ncarried out on the domains of Natural Environment\n(env) and Labour Legislation ( lab) and two lan-\nguage pairs: English–French and English–Greek.\nThe remaining part of the paper is organized as\nfollows. After an overview of related work, we dis-\ncuss the possibility of adapting a general-domain\nSMT system to a specific domain by using various\ntypes of in-domain data. Then, we describe our\nbaseline SMT system and the web-crawling strat-\negy for monolingual and parallel data. Finally, we\nreport on the results, make conclusions and outline\nthe future directions of our work.\n2 Domain adaptation in SMT\nDomain adaptation is an active topic in SMT. 
It was first introduced by Langlais (2002), who integrated in-domain lexicons into the translation model. His work was followed by many others.
Eck et al. (2004) presented a language model adaptation technique applying an information retrieval approach based on selecting similar sentences from the available training data. Hildebrand et al. (2005) applied the same approach to the translation model. Wu and Wang (2004) and Wu et al. (2005) proposed an alignment adaptation approach to improve domain-specific word alignment. Munteanu and Marcu (2005) automatically extracted in-domain bilingual sentence pairs from large comparable (non-parallel) corpora to enlarge the in-domain bilingual corpus. Koehn and Schroeder (2007) integrated in-domain and out-of-domain language models as log-linear features in the Moses (Koehn et al., 2007) PB-SMT system with multiple decoding paths for combining multiple domain translation tables. Nakov (2008) combined in-domain translation and reordering models with out-of-domain models, also in Moses. In this work, log-linear features were derived to distinguish between phrases of multiple domains by applying data source indicator features. Finch and Sumita (2008) employed a probabilistic mixture model combining two models for questions and declarative sentences with a general model. They used a probabilistic classifier to determine a vector of probabilities representing class membership.
languages (L1–L2) sentence pairs L1 tokens / vocabulary L2 tokens / vocabulary
English–French 1,725,096 47,956,886 73,645 53,262,628 103,436
English–Greek 964,242 27,446,726 61,497 27,537,853 173,435
Table 1: Europarl corpus statistics for relevant language pairs.
Domain adaptation of SMT can be approached in various ways depending on the availability of domain-specific data and their type. If the data is available, it can be directly used to improve components of the MT system: word alignment and phrase extraction (Wu and Wang, 2004), language models (Koehn and Schroeder, 2007), and translation models (Nakov, 2008), usually by merging the data with general-domain data or by training new models and using them together with the general-domain ones in the log-linear framework. If the data is not available, it can be extracted from a pool of texts from different domains (Eck et al., 2004; Hildebrand et al., 2005) or even from the web, which is the case in this work. We crawl monolingual data for language models and parallel data to improve parameter tuning. The crawled parallel data is also used to create domain-specific test sets.
3 Baseline system
Our baseline system is MaTrEx, a combination-based multi-engine architecture developed at Dublin City University (Penkale et al., 2010) exploiting aspects of both the Example-based Machine Translation (EBMT) and SMT paradigms. The architecture includes various individual systems: phrase-based, example-based, hierarchical phrase-based, and tree-based MT. In this work, we only exploit the SMT phrase-based component of the system, which is based on Moses, a well-known open-source toolkit for SMT.
In addition to Moses, MaTrEx provides a set of tools for easy-to-use preprocessing, training, tuning, decoding, postprocessing, and evaluation.
3.1 General-domain data
As for other data-driven MT systems, MaTrEx requires certain data to be trained on, namely parallel data for translation models, monolingual data for language models, and parallel development data for tuning of system parameters. Parameter tuning is not strictly required but has a big influence on system performance. For the baseline system we decided to exploit the widely used data provided by the organizers of the series of SMT workshops (WPT 2005, WMT 2006–2010) [1]: the Europarl parallel corpus (Koehn, 2005) version 5 as training data for translation models and language models, and the WPT 2005 test set as the development data for parameter optimization.
The Europarl parallel corpus is extracted from the proceedings of the European Parliament. For practical reasons we consider this corpus to contain general-domain texts. Version 5, released in Spring 2010, includes texts in 11 European languages, including all languages of our interest (English, French, and Greek; see Table 1). Note that the amount of parallel data for English and Greek is only about one half of what is available for English and French. Furthermore, Greek morphology is more complex than French morphology, so the Greek vocabulary size (count of unique lowercased alphabetical tokens) is much larger than the French one (see Table 1).
The WPT 2005 dev set is a set of 2,000 sentence pairs available in the same languages as Europarl, provided by the WPT 2005 workshop organizers as a development set for the translation shared task. Later WMT test sets do not include Greek data.
3.2 System setting
For training the baseline MT system, all training data is tokenized and lowercased using the standard Europarl tools [2]. The original (non-lowercased) versions of the target sides of the parallel data are kept for training the Moses recaser.
[1] http://www.statmt.org/
[2] http://www.statmt.org/europarl/
language dom websites docs sentences tokens vocabulary new vocab. sample size / accuracy %
English env 146 505 53,529 1,386,835 33,400 10,276 224 92.9
English lab 150 461 43,599 1,223,697 25,183 6,674 215 91.6
French env 106 543 31,956 1,196,456 36,097 9,485 232 95.7
French lab 64 839 35,343 1,217,945 23,456 5,756 268 98.1
Greek env 112 524 37,957 1,158,980 55,360 17,986 227 97.4
Greek lab 117 481 34,610 1,102,354 52,887 16,850 219 88.1
Table 2: Web-crawled monolingual data statistics.
The lowercased versions of the target sides are used for training an interpolated 5-gram language model with Kneser-Ney discounting using the SRILM toolkit (Stolcke, 2002). Translation models are trained on the relevant parts of the Europarl corpus, lowercased and filtered on the sentence level; we kept all sentence pairs having less than 100 words on each side and with a length ratio within the interval ⟨0.11, 9.0⟩. Minimum error rate training (Och, 2003, MERT) is employed to optimize the model parameters on the development set. For decoding, test sentences are tokenized, lowercased, and translated by the trained system.
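As an aside, the sentence-pair filter just described can be made concrete with a short sketch (ours, not the project's code; whitespace token counts and the source/target orientation of the ratio are assumptions):

    def keep_sentence_pair(src, tgt, max_len=100, min_ratio=0.11, max_ratio=9.0):
        """Return True if a tokenized sentence pair passes the length constraints."""
        len_src, len_tgt = len(src.split()), len(tgt.split())
        if len_src == 0 or len_tgt == 0:
            return False
        if len_src >= max_len or len_tgt >= max_len:
            return False                      # fewer than 100 words on each side
        return min_ratio <= len_src / len_tgt <= max_ratio

    # e.g. keep_sentence_pair("a very short sentence .", "une phrase tres courte .") -> True
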
Letter casing is then reconstructed by the recaser and extra blank spaces in the tokenized text are removed in order to produce human-readable text.
4 Acquisition of in-domain resources
4.1 Web crawling of monolingual data
Our workflow for acquiring in-domain monolingual data consists of the following steps: focused web crawling, text normalization, language identification, document clean-up and near-duplicate detection. For focused web crawling, we adapted the open-source Combine [3] crawler, which interacts with a text-to-topic classifier. Each web page visited by the crawler is classified as relevant to the domain with respect to a topic definition provided by the user. We used lists of triplets ⟨term, relevance weight, topic class⟩ as the basic entities of the topic definition. During crawling, a relevance score s for each web page is calculated as in the formula below (N is the number of terms in a topic definition; w^t_i is the weight of term i; w^l_j is the weight of location j; n_{ij} is the number of occurrences of term i at location j; l_j is the number of words at location j).
s = \sum_{i=1}^{N} \sum_{j=1}^{4} \frac{n_{ij} w^t_i w^l_j}{l_j}
[3] http://combine.it.lth.se/
We adopted Ardö's (2005) approach in considering four discrete locations in a web page (title, metadata, keywords, and plain text) and experimentally setting the corresponding weights for these locations to 10, 4, 2, and 1. If the score is greater than a predefined threshold, the web page is classified as relevant to the specific domain and the links of the page are extracted. Finally, the crawler follows the extracted links to visit new pages.
To construct the list of triplets of the topic definition, we selected English, French, and Greek terms (both single and multi-word entries) from the domains with identifiers 52 (Natural Environment) and 44 (Employment and Working Conditions) of the Eurovoc thesaurus v4.3 [4]. For each language, we extracted 209 terms for the env domain and 86 for the lab domain. The weights assigned to the terms were signed integers indicating the relevance of each term to a topic-class. Topic-classes correspond to possible sub-categories of the domain.
The other input for the crawler is a list of seed URLs relevant to the domain. The seeds for the env domain were selected from relevant lists in the Open Directory Project [5], a repository maintained by volunteer editors. For the lab domain, similar lists were not so easy to find. We therefore adopted a different method, namely using the BootCat toolkit (Baroni and Bernardini, 2004) to create random tuples (i.e. n-combinations of terms) from the terms included in the topic definition. We then ran a query for each tuple on the Yahoo! search engine [6], kept the first five URLs returned for each query and finally constructed the seed list with these URLs.
Normalization, the next step in the workflow, concerned encoding identification based on the content_charset header of each document, and, if needed, conversion to UTF-8.
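To make the relevance scoring above concrete, a small illustrative sketch follows (function names, tokenization and the toy inputs are ours; the actual classifier built into the crawler is not reproduced here):

    def relevance_score(page_locations, terms, location_weights):
        """page_locations: location name -> text found at that location.
        terms: list of (term, term_weight) pairs from the topic definition.
        location_weights: location name -> weight (e.g. title=10, metadata=4, ...)."""
        score = 0.0
        for location, text in page_locations.items():
            words = text.lower().split()           # simplistic tokenization
            l_j = len(words) or 1                  # number of words at location j
            w_l = location_weights[location]
            joined = " ".join(words)
            for term, w_t in terms:
                n_ij = joined.count(term.lower())  # occurrences of term i at location j
                score += n_ij * w_t * w_l / l_j
        return score

    # Toy usage with invented values, mirroring the location weights reported above.
    weights = {"title": 10, "metadata": 4, "keywords": 2, "plain text": 1}
    page = {"title": "natural environment protection policy", "metadata": "",
            "keywords": "environment climate", "plain text": "text of the page ..."}
    print(relevance_score(page, [("environment", 3), ("climate", 2)], weights))
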
Language identifi-\ncation was performed by a modified version of the\nn-gram-based Lingua::Identify7tool, which was\n4http://eurovoc.europa.eu/\n5http://www.dmoz.org/Science/Environment/\n6http://www.yahoo.com/\n7http://search.cpan.org/ ambs/Lingua-Identify-0.29/299\nlanguages (L1–L2) dom websites documents sentences all / filtered / sampled / corrected\nEnglish–French env 6 559 16,487 13,840 3,600 3,392\nlab 4 900 33,326 23,861 3,600 3,411\nEnglish–Greek env 6 151 4,543 3,735 3,600 3,000\nlab 4 125 3,094 2,707 2,700 2,506\nTable 3: Web-crawled parallel data statistics.\nused to discard documents not in the targeted lan-\nguage. Web pages often need to be cleaned from\n“noise” such as navigation links, advertisements,\ndisclaimers, etc. (a.k.a. boilerplate), which are of\nlimited or no use for the purposes of training an\nMT system. Such noise was removed by the Boil-\nerpipe tool (Kohlschütter et al., 2010).8The fol-\nlowing step in the workflow involved applying the\nSpotSigs algorithm (Theobald et al., 2008) to de-\ntect and remove near duplicate documents.\nThe collections consisted of documents origi-\nnating from as many different web sites as pos-\nsible, in order to avoid bias to the language of\nspecific sites. Documents originating from bilin-\ngual web sites were excluded, as these sites were\nused for the acquisition of the parallel data (Sec-\ntion 4.2). The only postprocessing steps performed\non the monolingual data prior to SMT training\nwere tokenization and sentence boundary identifi-\ncation by the Europarl tools.\nThe statistics of the data are provided in Ta-\nble 2. The vocabulary column contains the amount\nof unique lowercased alphabetical tokens (words)\nin each data set and the new vocabulary column\nthen shows counts of such tokens not appearing in\nthe Europarl corpus. The ratio of new vocabulary\nis around 30% for all these data sets, which is en-\ncouraging, as by using them a better coverage of\nin-domain test sets can be expected. To evaluate\nthe crawler’s accuracy, we asked two native speak-\ners for each language to classify a sample of the\ncrawled data (selected to achieve at least a ±5%\nconfidence interval at a 95% confidence level) as\nout-domain or in-domain. The accuracy measured\non documents judged as in-domain by both evalu-\nators ranges from 88% to 98% (details in Table 2).\n4.2 Web crawling of parallel data\nThe workflow for acquiring in-domain parallel\ndata consisted of the following steps: first, web\nsites containing texts in targeted domains and pairs\nof languages were manually identified from the\npool of web sites collected during the phase of\nmonolingual data acquisition (Section 4.1). Pages\n8http://code.google.com/p/boilerpipe/from those sites were then used as seed URLs and\nthe crawler was constrained to follow only links\ninternal to each site. This constraint was applied\nin order to force the crawler to stayon the selected\nmultilingual web sites. Each web page visited by\nthe crawler was classified as relevant with respect\nto a bilingual topic definition. After following the\nnormalization and language identification steps de-\nscribed in Section 4.1, we end up with in-domain\nEN–FR or EN–EL subsets of websites mirrored\nlocally. 
The next step concerned using Bitextor\n(Esplà-Gomis and Forcada, 2010),9an open source\ntool that uses shallow textual features to decide\nwhich documents could be considered translations\nof each other, and to identify pairs of paragraphs\nfrom which parallel sentences could be extracted.\nThe next steps of the procedure aimed at iden-\ntification of sentence pairs which are likely to be\nmutual translations. In each paragraph pair we\napplied the following steps: identification of sen-\ntence boundaries by the Europarl sentence splitter,\ntokenization by the Europarl tokenizer, and sen-\ntence alignment by Hunalign,10a widely used tool\nfor automatic identification of parallel sentences in\nparallel texts. For each sentence pair identified as\nparallel, Hunalign provides a score which reflects\nthe level of parallelness, the degree to which the\nsentences are mutual translations. We manually in-\nvestigated a sample of sentence pairs extracted by\nHunalign from the pool data for each domain and\nlanguage pair (45–49 sentence pairs for each lan-\nguage pair and domain), by relying on the judge-\nment of native speakers, and estimated that sen-\ntence pairs with a score above 0.4 are of a good\ntranslation quality. In the next step, we removed\nall sentence pairs with scores below this threshold.\nAdditionally, we also removed duplicate sentence\npairs. The filtering step reduced the number of sen-\ntence pairs by about 15–20% (details in Table 3).\n4.3 Manual corrections of parallel data\nThe translation quality of the parallel data obtained\nby the procedure described above is not guaranteed\n9http://bitextor.sourceforge.net/\n10http://mokk.bme.hu/resources/hunalign/300\nlanguages (L1–L2) dom set sentences L1 tokens / vocabulary L2 tokens / vocabulary\nEnglish–French env dev 1,392 35,094 5,245 40,919 6,024\nenv test 2,000 49,778 6,252 58,166 7,252\nlab dev 1,411 45,306 5,034 51,372 5,994\nlab test 2,000 62,070 6,031 70,534 7,274\nEnglish–Greek env dev 1,000 26,507 5,790 23,980 3,980\nenv test 2,000 55,090 8,715 49,925 5,503\nlab dev 506 14,169 3,509 13,201 2,453\nlab test 2,000 58,429 7,466 54,372 4,559\nTable 4: Development and test data set statistics.\nin any sense. Tuning the procedure and focusing\non high-quality translations is possible but leads to\na trade-off between quality and quantity. For trans-\nlation model training, high translation quality of\nthe data is not as essential as for parameter tuning\nand testing. Bad phrase pairs can be removed from\nthe translation tables based on their low translation\nprobabilities. However, a development set contain-\ning sentence pairs which are not exact translations\nof each other might lead to sub-optimal values of\nmodel weights which would harm system perfor-\nmance. If such sentence pairs are used in the test\nset, the evaluation would clearly be very unreliable.\nIn order to create reliable development and test\nsets for each language pair and domain, we per-\nformed the following low-cost procedure. From\nthe data obtained by the steps described in the\nprevious section, we selected a random sample of\n3,600 sentence pairs (2,700 for English–Greek in\nthe Labour Legislation domain, for which no more\ndata was available) and asked native speakers to\ncheck and correct them. The task consisted of:\n1. checking that the sentence pairs belonged to\nthe right domain,\n2. checking that the sentences within a sentence\npair were equivalent in terms of content,\n3. 
checking translation quality and correcting (if\nneeded) the sentence pairs.\nThe goal was to obtain at least 3,000 correct sen-\ntence pairs (2,000 test pairs and 1,000 development\npairs) for each domain and language pair; thus the\ncorrectors did not have to correct every sentence\npair. They were allowed to skip (remove) those\nsentence pairs which were misaligned. In addition,\nwe asked them to remove those sentence pairs that\nwere obviously from a very different domain (de-\nspite being correct translations). The number of\ncorrected sentences obtained is shown in the last\ncolumn of Table 3. As the final step, we took a ran-\ndom sample from the corrected sentence pairs andselected 2,000 pairs for the test set and left the re-\nmaining part for the development set.\nDuring the correction phase, we made the\nfollowing observations: 55% of sentence pairs\nwere accurate translations, 35% of sentence pairs\nneeded only minor corrections, 3–4% of sentence\npairs would require major corrections (which was\nnot necessary to do in most cases, as the accurate\nsentence pairs together with those requiring mi-\nnor corrections were enough to reach our goal of\nat least 3,000 sentence pairs), 4–5% of sentence\npairs were misaligned and would have had to be\ntranslated completely (which was not necessary in\nmost cases), and 3–4% of sentence pairs were from\na different domain. The correctors confirmed that\nthe process was about 5–10 times faster than trans-\nlating the sentences from scratch. Detailed statis-\ntics of the test and development sets obtained by\nthe procedure described above are given in Table 4.\n5 Experiments and results\nThe described approach was evaluated in eight\ndifferent scenarios involving: two language pairs\n(English–Greek, English–French), both translation\ndirections (to English and from English), and the\ntwo domains (Natural Environment, Labour Leg-\nislation), using the following automatic evaluation\nmeasures: WER, PER, and BLEU (Papineni et al.,\n2002), NIST (Doddington, 2002), and METEOR\n(Banerjee and Lavie, 2005).\nThe baseline MT systems (denoted as v0) were\nevaluated using these test sets and results are\nshown in Table 5. The BLEU, METEOR, PER,\nand WER scores are percentages; WER and PER\nare error rates; OOV (out-of-vocabulary) is a ratio\nof unknown words, i.e. the occurrence of words\nwhich do not appear in the parallel training data\nand thus cannot be translated. The scores among\ndifferent systems are not freely comparable but\nthey give us some idea of how difficult translation\nis for particular languages or domains.301\nlanguages dom BLEU NIST METEOR PER WER OOV\nEnglish→French env 28.03 7.03 63.32 63.70 46.71 0.98\nlab 22.26 6.27 56.73 69.93 50.06 0.85\nFrench→English env 31.79 7.77 66.25 57.09 40.02 0.81\nlab 27.00 7.07 59.90 61.57 43.24 0.68\nEnglish→Greek env 20.20 5.73 82.81 67.83 54.02 1.15\nlab 22.92 5.93 87.27 65.88 52.21 0.47\nGreek→English env 29.23 7.50 60.57 54.69 41.07 1.53\nlab 31.71 7.76 62.42 52.34 38.37 0.69\nTable 5: Baseline MT system results (all scores except NIST are percentages).\nThe baseline MT system was trained solely on\nout-of-domain data (parallel, monolingual, and de-\nvelopment data from Europarl). First, we exploited\nthe in-domain development data and used it in the\nfirst modification of the baseline system ( v1) in-\nstead of the out-of-domain (Europarl) data. In this\ncase, the individual system models (translation ta-\nbles, language model, etc.) 
remained the same, but\ntheir relative importance (optimal weights in the\nSMT log-linear framework) was different.\nThe in-domain monolingual data could be ex-\nploited in two ways: a) to join the general-domain\ndata and the new in-domain data into one set,\nuse it to train one language model and optimize\nits weight using MERT on the in-domain devel-\nopment data. b) to train a new separate lan-\nguage model from the new data and add it to the\nlog-linear framework and let MERT optimize its\nweight together with other model weights. We\ntested both approaches. In system v2we followed\nthe first option (retraining the language model on\nan enlarged data) and in system v3we followed\nthe second option (training an additional language\nmodel and optimizing).\nAll evaluation results are presented in detail in\nTables 6 to 9. Each table compares the perfor-\nmance of all systems ( v0–v3) in one translation di-\nrection. The comparison between the scores of v0\nandv1tells us the importance of using in-domain\ndata for parameter optimization. The improve-\nment in terms of BLEU varies between 16% and\n48% relative, which is quite substantial, especially\ngiven the fact that this modification requires sev-\neral hundreds of sentence pairs only.\nThe comparison between v1andv2/v3shows\nthe effect of using additional in-domain data for\nlanguage modelling, which turned out not to be\nvery substantial in most scenarios. With only one\nexception (see below), the BLEU scores improved\nby less than 1 point. This observation is not very\nsurprising given the fact that the general-domain\ntranslation models were not enhanced in any wayand thus the new in-domain language models had\nonly limited room for improvement: the high OOV\nrates remained the same. After improving the\ntranslation models which (hopefully) will decrease\nthe OOV rates, the language models might have\na better chance of contributing to improved scores.\nThe only exception was the translation from En-\nglish to Greek for the Labour Legislation domain,\nfor which the BLEU score increased massivelly\nfrom 28.79 to 33.43 points (Table 8). This is prob-\nably due to the richer morphology of Greek as the\ntarget language and the relatively low OOV rate on\nthe Labour Legislation data; here, the performance\nimproved even if the OOV rate did not change.\nAn analysis of the differences between the re-\nsults of v2andv3could explain the difference be-\ntween using in-domain monolingual data in one\nlanguage model (together with general-domain\ndata) vs. using two separate models (general-\n-domain plus in-domain). However, due to the\nfact that the addition of in-domain monolingual\ndata did not lead to any significant improvement\nin MT quality, the differences are not really mea-\nsurable. It is likely that this situation will change\nafter improving the translation models by adding\nin-domain parallel data.\n6 Conclusion and future work\nIn this work we described our first steps towards\ndomain adaptation of statistical machine transla-\ntion based on data obtained by domain-focused\nweb crawling. We evaluated four SMT systems in\neight scenarios and tested the impact of two types\nof web-crawled language resources (in-domain\nparallel development data, in-domain monolingual\ntraining data) on the MT quality. In terms of au-\ntomatic evaluation measures, the effect of using\nin-domain development data for parameter opti-\nmization in SMT is very substantial, in the range\nof 16–48% relative improvement. 
The impact of\nusing in-domain monolingual data for language302\nsys dom BLEU / ∆% NIST / ∆% METEOR / ∆% PER / ∆% WER / ∆%\nv0 env 28.03 0.00 7.03 0.00 63.32 0.00 63.70 0.00 46.71 0.00\nv1 env 35.81 27.76 8.10 15.22 68.44 8.09 53.78 -15.57 40.34 -13.64\nv2 env 36.13 28.90 8.14 15.79 68.40 8.02 53.14 -16.58 40.07 -14.22\nv3 env 36.32 29.58 8.19 16.50 68.50 8.18 52.82 -17.08 39.62 -15.18\nv0 lab 22.26 0.00 6.27 0.00 56.73 0.00 69.93 0.00 50.06 0.00\nv1 lab 30.84 38.54 7.42 18.34 62.94 10.95 57.99 -17.07 43.11 -13.88\nv2 lab 30.18 35.58 7.31 16.59 62.86 10.81 59.05 -15.56 43.81 -12.49\nv3 lab 30.12 35.31 7.28 16.11 62.88 10.84 59.48 -14.94 44.24 -11.63\nTable 6: Evaluation results: English →French.\nsys dom BLEU / ∆% NIST / ∆% METEOR / ∆% PER / ∆% WER / ∆%\nv0 env 31.79 0.00 7.77 0.00 66.25 0.00 57.09 0.00 40.02 0.00\nv1 env 39.04 22.81 8.75 12.61 69.17 4.41 48.26 -15.47 34.56 -13.64\nv2 env 39.27 23.53 8.77 12.87 69.26 4.54 48.16 -15.64 34.49 -13.82\nv3 env 38.84 22.18 8.72 12.23 69.06 4.24 48.55 -14.96 34.71 -13.27\nv0 lab 27.00 0.00 7.07 0.00 59.90 0.00 61.57 0.00 43.24 0.00\nv1 lab 33.52 24.15 7.98 12.87 63.70 6.34 53.39 -13.29 38.42 -11.15\nv2 lab 33.91 25.59 8.02 13.44 64.06 6.94 53.11 -13.74 38.22 -11.61\nv3 lab 33.72 24.89 8.00 13.15 64.11 7.03 53.30 -13.43 38.31 -11.40\nTable 7: Evaluation results: French →English.\nmodelling cannot be confirmed where a system has\na high OOV rate, which can be minimized only by\nimproving the coverage of the translation models.\nOur future work will focus on crawling more paral-\nlel data and enhancing the translation models. Re-\nsults of tests of statistical significance will also be\nprovided.\n7 Acknowledgements\nThis work is supported by PANACEA, a 7th\nFramework Research Programme of the European\nUnion, contract number 7FP-ITC-248064.\nReferences\nArdö, Anders. 2005. Focused crawling in the ALVIS\nsemantic search engine. In Proceedings of the 2nd\nEuropean Semantic Web Conference , pages 19–20,\nHeraklion, Greece.\nBanerjee, Satanjeev and Alon Lavie. 2005. METEOR:\nAn automatic metric for MT evaluation with im-\nproved correlation with human judgments. In Pro-\nceedings of the ACL Workshop on Intrinsic and Ex-\ntrinsic Evaluation Measures for Machine Transla-\ntion and/or Summarization , pages 65–72, Ann Ar-\nbor, Michigan.\nBanerjee, Pratyush, Jinhua Du, Baoli Li, Sudip Naskar,\nAndy Way, and Josef van Genabith. 2010. Com-\nbining multi-domain statistical machine translation\nmodels using automatic classifiers. In AMTA 2010:\nThe Ninth Conference of the Association for Machine\nTranslation in the Americas , pages 141–150.\nBaroni, M. and S. Bernardini. 2004. Bootcat: Boot-\nstrapping corpora and terms from the web. In Pro-ceedings of the 4th Language Resources and Evalu-\nation Conference , pages 1313–1316, Lisbon.\nDoddington, George. 2002. Automatic evaluation\nof machine translation quality using n-gram co-\noccurrence statistics. In Proceedings of the sec-\nond international conference on Human Language\nTechnology Research , HLT ’02, pages 138–145, San\nDiego, California.\nEck, Matthias, Stephan V ogel, and Alex Waibel. 2004.\nLanguage model adaptation for statistical machine\ntranslation based on information retrieval. In In-\nternational Conference on Language Resources and\nEvaluation , Lisbon, Portugal.\nEsplà-Gomis, Miquel and Mikel L. Forcada. 2010.\nCombining content-based and url-based heuristics to\nharvest aligned bitexts from multilingual sites with\nbitextor. 
The Prague Bulletin of Mathemathical Lin-\ngustics , 93:77–86.\nFinch, Andrew and Eiichiro Sumita. 2008. Dynamic\nmodel interpolation for statistical machine transla-\ntion. In Proceedings of the Third Workshop on\nStatistical Machine Translation , StatMT ’08, pages\n208–215, Columbus, Ohio, USA.\nHildebrand, Almut Silja, Matthias Eck, Stephan V o-\ngel, and Alex Waibel. 2005. Adaptation of the\ntranslation model for statistical machine translation\nbased on information retrieval. In Proceedings of the\n10th Annual Conference of the European Association\nfor Machine Translation , pages 133–142, Budapest,\nHungary.\nHua, Wu, Wang Haifeng, and Liu Zhanyi. 2005. Align-\nment model adaptation for domain-specific word\nalignment. In 43rd Annual Meeting on Association\nfor Computational Linguistics , ACL ’05, pages 467–\n474, Ann Arbor, Michigan, USA.303\nsys dom BLEU / ∆% NIST / ∆% METEOR / ∆% PER / ∆% WER / ∆%\nv0 env 20.20 0.00 5.73 0.00 82.81 0.00 67.83 0.00 54.02 0.00\nv1 env 26.18 29.60 6.57 14.66 84.19 1.67 60.80 -10.36 49.10 -9.11\nv2 env 26.50 31.19 6.63 15.71 84.35 1.86 60.65 -10.59 48.76 -9.74\nv3 env 26.41 30.74 6.57 14.66 83.85 1.26 60.58 -10.69 48.99 -9.31\nv0 lab 22.92 0.00 5.93 0.00 87.27 0.00 65.88 0.00 52.21 0.00\nv1 lab 28.79 25.61 6.80 14.67 87.91 0.73 58.20 -11.66 46.43 -11.07\nv2 lab 33.43 45.86 7.33 23.61 88.94 1.91 54.93 -16.62 43.77 -16.17\nv3 lab 34.03 48.47 7.44 25.46 88.94 1.91 54.37 -17.47 43.25 -17.16\nTable 8: Evaluation results: English →Greek.\nsys dom BLEU / ∆% NIST / ∆% METEOR / ∆% PER / ∆% WER / ∆%\nv0 env 29.23 0.00 7.50 0.00 60.57 0.00 54.69 0.00 41.07 0.00\nv1 env 34.16 16.87 8.01 6.80 64.98 7.28 51.15 -6.47 37.67 -8.28\nv2 env 34.24 17.14 8.02 6.93 64.99 7.30 51.12 -6.53 37.65 -8.33\nv3 env 34.15 16.83 8.01 6.80 64.75 6.90 51.09 -6.58 37.83 -7.89\nv0 lab 31.71 0.00 7.76 0.00 62.42 0.00 52.34 0.00 38.37 0.00\nv1 lab 37.55 18.42 8.28 6.70 67.36 7.91 49.02 -6.34 35.27 -8.08\nv2 lab 38.00 19.84 8.36 7.73 67.73 8.51 48.45 -7.43 34.83 -9.23\nv3 lab 37.70 18.89 8.32 7.22 67.40 7.98 48.76 -6.84 35.03 -8.70\nTable 9: Evaluation results: Greek →English.\nKoehn, Philipp and Josh Schroeder. 2007. Experi-\nments in domain adaptation for statistical machine\ntranslation. In Proceedings of the Second Work-\nshop on Statistical Machine Translation , StatMT\n’07, pages 224–227, Prague, Czech Republic.\nKoehn, Philipp, Hieu Hoang, Alexandra Birch, Chris\nCallison-Burch, Marcello Federico, Nicola Bertoldi,\nBrooke Cowan, Wade Shen, Christine Moran,\nRichard Zens, Chris Dyer, Ond ˇrej Bojar, Alexandra\nConstantin, and Evan Herbst. 2007. Moses: open\nsource toolkit for statistical machine translation. In\nProceedings of the 45th Annual Meeting of the ACL\non Interactive Poster and Demonstration Sessions ,\nACL ’07, pages 177–180, Prague, Czech Republic.\nKoehn, Philipp. 2005. Europarl: A Parallel Corpus\nfor Statistical Machine Translation. In Conference\nProceedings: the tenth Machine Translation Summit ,\npages 79–86, Phuket, Thailand.\nKohlschütter, Christian, Peter Fankhauser, and Wolf-\ngang Nejdl. 2010. Boilerplate detection using shal-\nlow text features. In Proceedings of the 3rd ACM\nInternational Conference on Web Search and Data\nMining , pages 441–450, New York.\nLanglais, Philippe. 2002. Improving a general-purpose\nstatistical translation engine by terminological lexi-\ncons. In COLING-02 on COMPUTERM 2002: sec-\nond international workshop on computational termi-\nnology - Volume 14 , pages 1–7, Taipei, Taiwan.\nMunteanu, Dragos Stefan and Daniel Marcu. 
2005.\nImproving machine translation performance by ex-\nploiting non-parallel corpora. Comput. Linguist. ,\n31:477–504.\nNakov, Preslav. 2008. Improving English-Spanish sta-\ntistical machine translation: experiments in domain\nadaptation, sentence paraphrasing, tokenization, andrecasing. In Proceedings of the Third Workshop on\nStatistical Machine Translation , StatMT ’08, pages\n147–150, Columbus, Ohio, USA.\nOch, Franz Josef. 2003. Minimum error rate train-\ning in statistical machine translation. In 41st Annual\nMeeting on Association for Computational Linguis-\ntics, ACL ’03, pages 160–167, Sapporo, Japan.\nPapineni, Kishore, Salim Roukos, Todd Ward, and Wei-\nJing Zhu. 2002. BLEU: a method for automatic\nevaluation of machine translation. In 40th Annual\nMeeting on Association for Computational Linguis-\ntics, ACL ’02, pages 311–318, Philadelphia, USA.\nPenkale, Sergio, Rejwanul Haque, Sandipan Dandapat,\nPratyush Banerjee, Ankit K. Srivastava, Jinhua Du,\nPavel Pecina, Sudip Kumar Naskar, Mikel L. For-\ncada, and Andy Way. 2010. MaTrEx: the DCU\nMT system for WMT 2010. In Proceedings of the\nJoint Fifth Workshop on Statistical Machine Trans-\nlation and MetricsMATR , pages 143–148, Uppsala,\nSweden.\nStolcke, Andreas. 2002. SRILM-an extensible lan-\nguage modeling toolkit. In Proceedings of Interna-\ntional Conference on Spoken Language Processing ,\npages 257–286, Denver, Colorado, USA.\nTheobald, Martin, Jonathan Siddharth, and Andreas\nPaepcke. 2008. Spotsigs: robust and efficient near\nduplicate detection in large web collections. In Pro-\nceedings of the 31st Annual International ACM SI-\nGIR Conference on Research and Development in\nInformation Retrieval , pages 563–570, Singapore.\nWu, Hua and Haifeng Wang. 2004. Improving domain-\nspecific word alignment with a general bilingual cor-\npus. In Proceedings of the 6th Conference of the As-\nsociation for Machine Translation in the Americas ,\npages 262–271, Washington, DC.304",
"main_paper_content": null
}
|
{
"decision": "Unknown",
"reviews": []
}
| 0 | 0 |
[] |
[] |
{
"id": "wa1OE55dTay",
"year": null,
"venue": "EAMT 2011",
"pdf_link": "https://aclanthology.org/2011.eamt-1.11.pdf",
"forum_link": "https://openreview.net/forum?id=wa1OE55dTay",
"arxiv_id": null,
"doi": null
}
|
{
"title": "Towards a User-Friendly Webservice Architecture for Statistical Machine Translation in the PANACEA project",
"authors": [
"Antonio Toral",
"Pavel Pecina",
"Marc Poch",
"Andy Way"
],
"abstract": "Antonio Toral, Pavel Pecina, Marc Poch, Andy Way. Proceedings of the 15th Annual conference of the European Association for Machine Translation. 2011.",
"keywords": [],
"raw_extracted_content": "Towards a User-Friendly Webservice Architecture for Statistical Machine\nTranslation in the PANACEA project∗\nAntonio Toral, Pavel Pecina, Andy Way\nSchool of Computing\nDublin City University\nDublin, Ireland\n{atoral,ppecina,away }@computing.dcu.ieMarc Poch\nIULA\nUniversitat Pompeu Fabra\nBarcelona, Spain\[email protected]\nAbstract\nThis paper presents a webservice archi-\ntecture for Statistical Machine Translation\naimed at non-technical users. A workflow\neditor allows a user to combine different\nwebservices using a graphical user inter-\nface. In the current state of this project,\nthe webservices have been implemented\nfor a range of sentential and sub-sentential\naligners. The advantage of a common in-\nterface and a common data format allows\nthe user to build workflows exchanging\ndifferent aligners.\n1 Introduction\nA human translator willing to use a state-of-the-\n-art Statistical Machine Translation (SMT) system\nhas two main options nowadays: to use an on-line\ntranslation system (e.g. Google Translate1) or to\ntrain his or her own MT system from scratch (e.g.\nMoses (Koehn et al., 2007)) using any available\nparallel corpus (e.g. Europarl2). Both options,\nhowever, present some important drawbacks. On\none hand, the on-line systems are not customisable\nand might be inadequate if the user wants to trans-\nlate sensitive data. On the other hand, installing\nan MT system with all its dependencies and train-\ning it requires technical expertise which is usually\nbeyond the competences of an average translator.\nThis paper presents an architecture for building\nSMT systems where each component is deployed\n∗We would like to thank the developers of Soaplab and Tav-\nerna for solving our questions and requests. This research\nhas been partially funded by the EU project PANACEA (7FP-\nITC-248064).\n∗c/circlecopyrt2011 European Association for Machine Translation.\n1http://translate.google.com/\n2http://www.statmt.org/europarl/as a webservice. This way the user does not have to\ndeal with technical issues regarding the tools, such\nas their installation, configuration or maintenance.\nA workflow editor allows the user to combine the\ndifferent webservices using a graphical user inter-\nface. In the current state of this project, webser-\nvices have been created for a range of sentential\nand sub-sentential aligners.\nThis work is part of the FP7 PANACEA\nproject,3which addresses the most critical aspect\nof MT: the language-resource bottleneck. Its ob-\njective is to build a factory of language resources\nthat automates the stages involved in the acquisi-\ntion, production, updating and maintenance of lan-\nguage resources required by MT systems. This is\ndone by creating a platform, designed as a dedi-\ncated workflow manager, for the composition of a\nnumber of processes for LR production, based on\ncombinations of different web services. The tech-\nniques developed are language independent, while\nthe use case language-pairs are English–French,\nEnglish–German and English–Greek.\nThe rest of the document is structured as fol-\nlows. The following section details the implemen-\ntation of webservices for each of the aligners con-\nsidered. In addition, we discuss the procedures de-\nveloped to convert the format of these aligners to\na common format and present a set of workflows\nthat demonstrate the usage of the aligners in pro-\ncessing pipelines. This is followed by a report on\nthe software that has been developed. 
Finally, we draw conclusions from this experience and outline avenues of future work.
[3] http://panacea-lr.eu
2 Aligners as Webservices
This section reports on the development of webservices for a range of widely-used state-of-the-art aligners, which can be easily exchanged in user workflows thanks to the common interface designed within the project. Table 1 describes the mandatory parameters of the interface shared across all the webservices. In addition, a webservice might accept optional parameters that allow the user to exploit specific functionality of the aligner wrapped by that webservice.
Name | Type
source language | 2-character ISO code
source corpus | text
target language | 2-character ISO code
target corpus | text
Table 1: Shared mandatory parameters.
Webservices created for sentential aligners (Hunalign, GMA and BSA) are covered in section 2.1, while section 2.2 deals with sub-sentential aligners (GIZA++, BerkeleyAligner and the OpenMaTrEx chunk aligner).
2.1 Sentential alignment
2.1.1 Hunalign
Hunalign (Varga et al., 2005) [4] can work in two modes. If a bilingual dictionary is available, this information is combined with sentence-length information (Gale and Church, 1991) and used to identify sentence alignment. In the absence of a bilingual dictionary, it first identifies the alignment using sentence-length information only, then builds an automatic dictionary based on this alignment and finally realigns the text using this dictionary.
This tool requires three parameters (filenames for the source corpus, target corpus and bilingual dictionary, although the dictionary file can be empty). As a webservice, the first two parameters are mandatory, together with the source and target languages. The bilingual dictionary file is optional; if none is provided, the webservice will create and use an empty file. Hunalign also provides a set of optional parameters. Some of them are offered by the webservice (bisent, cautious and text), while two of them are activated internally (realign and utf). The remaining ones concern evaluation purposes or post-filtering and have been hidden from users of the webservice.
[4] http://mokk.bme.hu/resources/hunalign
2.1.2 GMA
GMA – Geometric Mapping and Alignment (Argyle et al., 2004) [5] – is an implementation of the Smooth Injective Map Recognizer (Melamed, 1997) algorithm for mapping bitext correspondence and the Geometric Segment Alignment (Melamed, 1996) post-processor for converting general bitext maps to monotonic segment alignments. The tool employs word correspondences and cognates, as well as information from bilingual dictionaries.
This tool accepts a pair of parameters for the source and target corpus. Apart from that, it needs a parameter pointing to a configuration file, which contains several parameters, including language-dependent lists of stop words. The webservice offers two parameters for the source and target languages; these denote a language pair, which is internally assigned a configuration file. GMA provides configuration files and stop words for English–French.
Additional configuration files\nhave been created for the following language pairs:\nEnglish–German, English–Spanish and English–\nItalian (all the parameters have the same values\nacross language pairs except for those that are\nlanguage-dependent). The stop word lists used for\nEnglish, French, German, Spanish and Italian have\nbeen obtained from Universit ´e de Neuch ˆatel.6\n2.1.3 BSA\nBSA – Bilingual Sentence Aligner7(Moore,\n2002) – is a three-step hybrid approach. First,\nsentence-length based alignment is performed;\nsecond, statistical word alignment model is trained\non the high probability aligned sentences and third,\nall sentences are realigned based on the word\nalignments. It generates only 1:1 alignments.\nThis tool takes three parameters (filenames for\nthe source corpus, target corpus and an alignment\nprobability threshold). All of them are offered by\nthe webservice developed. The threshold is set to\n0.5by default. The output of this tool was altered\nby modifying the script filter-final-aligned-sents.pl\n5http://nlp.cs.nyu.edu/GMA/\n6http://members.unine.ch/jacques.savoy/\nclef/index.html\n7http://research.microsoft.\ncom/en-us/downloads/\naafd5dcf-4dcc-49b2-8a22-f7055113e656/64\nwhich carries out the last phase of the alignment; it\nreceives the set of word alignments found and out-\nputs only those with alignment probability above\nthe threshold. BSA originally outputs a couple\nof files with the aligned text (one sentence per\nline). Conversely, we are interested in a unique\nfile where each line contains an alignment by giv-\ning the sentence numbers of the source and target\nlanguages (see section 3). The output format has\nbeen modified to that of Hunalign.\n2.2 Sub-sentential alignment\n2.2.1 GIZA++\nGIZA++8(Och and Ney, 2003) is a statisti-\ncal toolkit that is used to train IBM Models 1-5\nand an HMM word alignment model. It performs\nword alignment in several steps that involve dif-\nferent tools ( plain2snt ,mkcls ,snt2cooc ,GIZA++ ,\ngiza2bal andsymal ). The webservice developed\nencapsulates all of them through one call.\nThis word aligner toolkit offers many fine-\ngrained input options, which are mainly nu-\nmeric parameters that modify the behaviour of the\naligner. Being the emphasis of the platform to pro-\nvide easy-to-use versions of the tools, most of the\nparameters have been kept fixed (i.e. they can-\nnot be changed through the webservice interface).\nThese values (taken from Moses) are:\nForGIZA++ :\n-model1iterations 5\n-model2iterations 0\n-model3iterations 3\n-model4iterations 3\n-model1dumpfrequency 1\n-model4smoothfactor 0.4\n-nodumps 1\n-nsmooth 4\n-onlyaldumps 1\n-p0(parameter p0 in IBM-3/4 )0.999\nForsymal :\n-alignment grow\n-diagonal yes\n-final yes\n-both yes\nApart from the mandatory parameters that fol-\nlow the common interface, the webservice in-\nterface offers two specific parameters for mkcls :\nnumber of iterations (default value 2) and number\nof classes (default value 50).\n8http://code.google.com/p/giza-pp/2.2.2 BerkeleyAligner\nBerkeleyAligner9(Liang et al., 2006; DeNero\nand Klein, 2007; Haghighi et al., 2009) is a word\nalignment toolkit combining unsupervised as well\nas supervised approaches to word alignment. It\nfeatures joint training of conditional alignment\nmodels (cross-EM), syntactic distortion model, as\nwell as posterior decoding heuristics.\nSimilarly to GIZA++, this tool provides a\nplethora of options (see documentation/manual.txt\nin its distribution for details). 
Most of them are not\nconsidered in the webservice interface. The only\nparameters that are offered by the webservice are\nthe mandatory ones following the common inter-\nface and the number of iterations to run the model\n(default value 2).\n2.2.3 OpenMaTrEx chunk aligner\nOpenMaTrEx10is a marker-driven example-\n-based machine translation system (Dandapat et\nal., 2010) that provides, among other capabilities,\nchunk alignment. The webservice developed per-\nforms chunk alignment by using several tools in-\ncluded in OpenMaTrEx, sequentially:\n•Marker-based chunking (Veale and Way,\n1997) (markers available for all the languages\nof the project but Greek are available).\n•Word alignment, relying on GIZA++.\n•Chunk alignment, using an algorithm based\non Levenshtein edit distance (Levenshtein,\n1966) and employing cognates and word\nprobabilities as distance knowledge.\nThe parameters offered by the webservice are\nthe mandatory ones according to the common in-\nterface. The output is produced by running the tool\nin the ebmt alignments ondisk with id mode ; this\nmode carries out chunk alignment adding sentence\nidentifiers to the output and has been modified to\nprovide not only the identifier of the sentence but\nalso the identifiers of the tokens that delimit each\nchunk in each language (OpenMaTrEx outputs the\ntextual chunks instead of their numeric identifiers).\n3 Travelling Object\nThis section deals with the conversion of the differ-\nent input/output formats of the aligners from and\n9http://code.google.com/p/\nberkeleyaligner/\n10http://openmatrex.org/65\nto a common format, the Travelling Object (TO)\n(Poch et al., 2010), which is based on the schema\nfor alignment in XCES.11\nA sample output for word alignment is shown\nin Figure 1. It aligns two documents ( docen.xml\nanddoces.xml ). There are two groups of links for\nthe first sentence of each file. The first token in the\nsource document is aligned to the first in the tar-\nget. The range of tokens two to three in the source\nare aligned to the second token in the target. The\nsame format can be used for sentential alignment\nchanging the values of the domains from sentences\nto paragraphs and the targets from tokens to sen-\ntences.\nSeveral webservices have been developed to\ndeal with the conversion between the formats of\nthe aligners and the TO. aligner2to is a web-\nservice that converts the output of any of the\naligners to the TO. Another webservice, sen-\ntalgtokto2word alg, takes three inputs in TO\nformat (source and target corpus, both sentence-\n-split and tokenised, and sentence alignment)\nand outputs two files (the subset of sentences\nin the source and target corpus that have 1-to-\n-1 sentence alignments) in the plain sentence-\n-split and tokenised format that the sub-sentential\nalignment webservices take as input. Finally,\nsentsplit tok2to is a webservice that was required\nbysentalg tokto2word alg; it converts sentence-\n-split and tokenised files to the TO format.\n4 Workflows\nThis section presents several workflows of web-\nservices showing possible usages of aligners in\nthe platform, the interactions that arise, etc.\nThese workflows have been created using Tav-\nerna12(Hull et al., 2006), a Workflow Manage-\nment System that allows the user to create work-\nflows by combining webservices using a graphical\ninterface.\nThe first workflow, depicted in Figure 2,\npresents a pipeline that performs sentence align-\nment on an English–Spanish parallel corpus (in-\nputs urlsland urltl). 
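The end product of such a pipeline is a Travelling Object alignment document; as a rough illustration of what one link group amounts to, here is a minimal sketch (ours, plain string templating rather than the project's conversion webservices; identifiers follow the pattern of the sample output discussed in Section 3):

    def link_group(src_sentence, tgt_sentence, pairs):
        """pairs: list of (source_ref, target_ref) fragment identifiers, e.g.
        ("s1_t1", "s1_t1") for a 1-to-1 token link, or an xpointer range such as
        ("xpointer(id('s1_t2')/range-to(id('s1_t3')))", "s1_t2") for a 2-3 -> 2 link."""
        lines = ['<linkGrp domains="%s %s" targType="t">' % (src_sentence, tgt_sentence)]
        for src, tgt in pairs:
            lines.append("  <link>")
            lines.append('    <align xlink:href="#%s"/>' % src)
            lines.append('    <align xlink:href="#%s"/>' % tgt)
            lines.append("  </link>")
        lines.append("</linkGrp>")
        return "\n".join(lines)

    print(link_group("s1", "s1", [("s1_t1", "s1_t1")]))
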
Each side of the cor-\npus is preprocessed with a sentence splitter\n(europarl sentence splitter ) and a tokeniser ( eu-\nroparl tokeniser ). Then the corpus is sentence-\naligned using Hunalign. Finally, the outputs of the\nalignment and the tokenisers are converted to the\n11http://www.xces.org/schema/#align\n12http://www.taverna.org.uk/TO using aligner2to andsentsplit tok2to respec-\ntively.\nFigure 3 and Figure 4 show modified versions\nof the workflow presented in Figure 2 to perform\nsentence alignment with the other sentence align-\ners, BSA and GMA, respectively. Figure 5 shows\na workflow that performs word alignment using\nGIZA++. The workflow in Figure 6 carries out\nword alignment using BerkeleyAligner. Figure 7\nperforms chunk alignment using OpenMaTrEx.\nFigure 2: Sentence alignment using Hunalign.\nFigure 3: Sentence alignment using BSA.66\n<cesAlign version=\"1.0\" xmlns=\"http://www.xces.org/schema/2003\">\n<cesHeader version=\"1.0\">\n<profileDesc>\n<translations>\n<translation trans.loc=\"doc_en.xml\" wsd=\"UTF-8\" n=\"1\"/>\n<translation trans.loc=\"doc_es.xml\" wsd=\"UTF-8\" n=\"2\"/>\n</translations>\n</profileDesc>\n</cesHeader>\n<linkList>\n<linkGrp domains=\"s1 s1\" targType=\"t\">\n<link>\n<align xlink:href=\"#s1_t1\"/>\n<align xlink:href=\"#s1_t1\"/>\n</link>\n<link>\n<align xlink:href=\"#xpointer(id(’s1_t2’)/range-to(id(’s1_t3’)))\"/>\n<align xlink:href=\"#s1_t2\"/>\n</link>\n[...]\n</linkGrp>\n<linkList>\n</cesAlign>\nFigure 1: Travelling Object sample for word alignment.\nFigure 4: Sentence alignment using GMA.\nFigure 5: Word alignment using GIZA++.\nFigure 6: Word alignment using BerkeleyAligner.\nFigure 7: Chunk alignment using OpenMaTrEx.67\nFinally, Figure 8 presents a combined pipeline\nthat performs both sentence and word alignment.\nIt consists of a pipeline made up of three nested\nworkflows that execute sequentially: sentential\nalignment, conversion from sentence alignment\nTO-compliant output to plain-text word alignment\ninput, and word alignment. The webservices used\nfor alignment are Hunalign for sentential align-\nment and GIZA++ for word alignment. However,\nthanks to the use of the common interface and the\nTO format, any of the aligners could be easily used\nto replace these.\n5 Software\nThe software and data developed can be classified\nin two categories: webservices and workflows.\nThe webservices are developed using\nSoaplab2.13This tool allows to deploy web-\nservices on top of command-line applications by\nwriting files that describe the parameters of these\nservices in ACD format.14Soaplab2 then converts\nthe ACD files to XML metadata files which\ncontain all the necessary information to provide\nthe services (script to be run, parameters, options,\nhelp messages, etc.). The Soaplab server is a web\napplication run by a server container (Apache\nTomcat15in our setup) which is in charge of pro-\nviding the services using the generated metadata.\nThe software pipeline for each webservice reflects\nthis schema:\nACD file→script→toolbinary\nThat is, an ACD file is created to define the in-\nput/output ports of the soaplab2 webservice. This\nfile links to an intermediate script, which, regard-\nless of the webservice, is in charge of three main\ntasks:\n•Parameter handling. The parameters of-\nfered by the corresponding webservice are\nchecked; if any of them are missing or if any\ndifferent parameter is used, the execution is\naborted.\n•Security. 
Execution is automatically aborted\nif any command fails or if any variable is not\nset.\n13http://soaplab.sourceforge.net/\nsoaplab2/\n14http://soaplab.sourceforge.net/\nsoaplab2/MetadataGuide.html\n15http://tomcat.apache.org/•Logging. A folder is created for each run and\nholds the input and output files and any log\nproduced by the tool itself.\nFinally the tool that the webservice wraps is\ncalled from the script. Soaplab is used in many\nfields like bioinformatics because of its ease of use,\nlow cost maintenance and features. With Soaplab,\nweb service providers do not need to be expert pro-\ngrammers, as they only need to describe their tools\nwith metadata.\nAll the software developed is released un-\nder the GPL version 316and can be accessed\nat http://www.computing.dcu.ie/\n˜atoral/#Resources . The source files of the\nworkflows detailed in section 4 are also included.\n6 Conclusions and Future Work\nThis paper has presented a webservice architec-\nture for Statistical Machine Translation aimed at\nnon-technical users. Compared to the existing on-\n-line systems, the proposal allows users to cus-\ntomise systems to their own personal needs. Com-\npared to the available decoders, it has the advan-\ntage of lowering the level of expertise required to\nbuild an MT system, thus allowing less computer-\n-literate users to benefit from access to the state-\n-the-art SMT systems used by researchers and sys-\ntem builders today. Moreover, the hardware and\nsoftware maintenance cost for the user is dramat-\nically lowered since most of the software is exe-\ncuted on remote machines where resources (cpu,\nmemory, etc.), temporary files and software up-\ndates are being managed.\nIn its current status, the architecture sup-\nports alignment, both sentential and sub-sentential.\nWebservices have been developed for a range of\nstate-of-the-art aligners. Each webservice acts as\na wrapper over an aligner, and offers a subset of\nits original functionality (the aim being to provide\neasy-to-use tools for final users).\nSample workflows for each of the aligners have\nbeen presented. These give a glimpse of the po-\ntential of the architecture for the final user. They\nalso show how the aligners interact with the other\nwebservices.\nAn important role is devoted to interoperability.\nThe usage of a common interface for all the align-\ners and of a common input/output format allows\nthe user to use any of the provided aligners.\n16http://www.gnu.org/licenses/gpl-3.0.\nhtml68\nFigure 8: Pipeline that performs sentence and word alignment.69\nRegarding future work, we consider three main\nlines: First, a user validation to measure to what\nextent the architecture meets the expectations of\nusers is currently under way. Second, we plan\nto incorporate components for the remaining steps\nof a SMT pipeline, mainly training and decod-\ning. Finally, a real-world usage of this architec-\nture will require the establishment of limitations\non the server side, which might be system-wide\n(e.g. taking into consideration load, memory, pro-\ncessing time or number of concurrent users) or per\nwebservice (e.g. taking into account the size of the\ninput). This kind of architectures open the door to\nnew exploitation models for tool developers or ser-\nvice providers who could be responsible for main-\ntaining those tools while helping the research com-\nmunity.\nReferences\nDandapat, Sandipan, Mikel L. Forcada, Declan Groves,\nSergio Penkale, John Tinsley, and Andy Way.\n2010. 
Openmatrex: a free/open-source marker-\ndriven example-based machine translation system.\nInProceedings of the 7th international confer-\nence on Advances in natural language process-\ning, IceTAL’10, pages 121–126, Berlin, Heidelberg.\nSpringer-Verlag.\nDeNero, John and Dan Klein. 2007. Tailoring word\nalignments to syntactic machine translation. In Pro-\nceedings of the 45th Annual Meeting of the Asso-\nciation of Computational Linguistics , pages 17–24,\nPrague, Czech Republic, June. Association for Com-\nputational Linguistics.\nGale, William A. and Kenneth W. Church. 1991. A\nprogram for aligning sentences in bilingual corpora.\nInProceedings of the 29th annual meeting on As-\nsociation for Computational Linguistics , ACL ’91,\npages 177–184, Stroudsburg, PA, USA. Association\nfor Computational Linguistics.\nHaghighi, Aria, John Blitzer, John DeNero, and Dan\nKlein. 2009. Better word alignments with super-\nvised itg models. In Proceedings of the Joint Confer-\nence of the 47th Annual Meeting of the ACL and the\n4th International Joint Conference on Natural Lan-\nguage Processing of the AFNLP: Volume 2 - Volume\n2, ACL ’09, pages 923–931, Stroudsburg, PA, USA.\nAssociation for Computational Linguistics.\nHull, Duncan, Katherine Wolstencroft, Robert Stevens,\nCarole Goble, Matthew Pocock, Peter Li, and\nThomas Oinn. 2006. Taverna: a tool for building\nand running workflows of services. Nucleic Acids\nResearch , 34(Web Server issue):729–732, July.\nKoehn, Philipp, Hieu Hoang, Alexandra Birch, Chris\nCallison-Burch, Marcello Federico, Nicola Bertoldi,Brooke Cowan, Wade Shen, Christine Moran,\nRichard Zens, Chris Dyer, Ond ˇrej Bojar, Alexandra\nConstantin, and Evan Herbst. 2007. Moses: open\nsource toolkit for statistical machine translation. In\nProceedings of the 45th Annual Meeting of the ACL\non Interactive Poster and Demonstration Sessions ,\nACL ’07, pages 177–180, Stroudsburg, PA, USA.\nAssociation for Computational Linguistics.\nLevenshtein, Vladimir I. 1966. Binary codes capa-\nble of correcting deletions, insertions, and reversals.\nTechnical Report 8.\nLiang, Percy, Ben Taskar, and Dan Klein. 2006. Align-\nment by agreement. In Proceedings of the main\nconference on Human Language Technology Con-\nference of the North American Chapter of the Asso-\nciation of Computational Linguistics , HLT-NAACL\n’06, pages 104–111, Stroudsburg, PA, USA. Associ-\nation for Computational Linguistics.\nMelamed, I. Dan. 1996. A geometric approach to map-\nping bitext correspondence. In Conference on Em-\npirical Methods in Natural Language Processing .\nMelamed, I. Dan. 1997. A word-to-word model of\ntranslational equivalence. In Proceedings of the\neighth conference on European chapter of the As-\nsociation for Computational Linguistics , EACL ’97,\npages 490–497, Stroudsburg, PA, USA. Association\nfor Computational Linguistics.\nMoore, Robert C. 2002. Fast and accurate sentence\nalignment of bilingual corpora. In Proceedings of\nthe 5th Conference of the Association for Machine\nTranslation in the Americas on Machine Translation:\nFrom Research to Real Users , AMTA ’02, pages\n135–144, London, UK. Springer-Verlag.\nOch, Franz Josef and Hermann Ney. 2003. A system-\natic comparison of various statistical alignment mod-\nels.Comput. Linguist. , 29:19–51, March.\nPoch, Marc, Prokopis Prokopidis, Gregor Thurmair,\nCarsten Schnober, Riccardo Del Gratta, and N ´uria\nBel. 2010. D3.1 - architecture and design of the\nplatform. 
Confidential deliverable, The PANACEA\nProject (7FP-ITC-248064).\nVarga, D ´aniel, L ´aszl´o N´emeth, P ´eter Hal ´acsy, Andr ´as\nKornai, Viktor Tr ´on, and Viktor Nagy. 2005. Paral-\nlel corpora for medium density languages. In Pro-\nceedings of the Recent Advances in Natural Lan-\nguage Processing , pages 590–596, Borovets, Bul-\ngaria.\nVeale, Tony and Andy Way. 1997. Gaijin: A Boot-\nstrapping, Template-Driven Approach to Example-\nBased MT. In Proceedings of New Methods in Nat-\nural Language Processing , pages 239–244, Sofia,\nBulgaria.70",
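Section 5 of the paper above describes each webservice as an ACD description pointing to an intermediate script that checks the declared parameters, aborts if any command fails or any variable is unset, and logs every run to its own folder before invoking the wrapped tool. A loose, hypothetical sketch of that script pattern follows; the tool name, parameter names and directory layout are invented for illustration and are not the project's code.

```python
# Hypothetical sketch of the intermediate wrapper script pattern of Section 5.
# The wrapped tool, its parameters and the directory layout are invented.
import subprocess
import sys
import uuid
from pathlib import Path

ALLOWED_PARAMS = {"source", "target", "threshold"}  # parameters declared in the ACD file

def run_wrapped_tool(params):
    # Parameter handling: abort if any declared parameter is missing or unknown.
    if set(params) != ALLOWED_PARAMS:
        sys.exit(f"unexpected or missing parameters: {sorted(set(params) ^ ALLOWED_PARAMS)}")

    # Logging: one folder per run, holding inputs, outputs and the tool's own log.
    run_dir = Path("runs") / uuid.uuid4().hex
    run_dir.mkdir(parents=True)

    # Security: check=True aborts execution as soon as the wrapped command fails,
    # mirroring the "abort if any command fails" behaviour described in Section 5.
    with (run_dir / "tool.log").open("w") as log:
        subprocess.run(
            ["aligner-binary", params["source"], params["target"], str(params["threshold"])],
            stdout=log, stderr=subprocess.STDOUT, check=True,
        )
    return run_dir
```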
"main_paper_content": null
}
|
{
"decision": "Unknown",
"reviews": []
}
| 0 | 0 |
[] |
[] |
{
"id": "XsPln7tWpbi",
"year": null,
"venue": "EAMT 2011",
"pdf_link": "https://aclanthology.org/2011.eamt-1.41.pdf",
"forum_link": "https://openreview.net/forum?id=XsPln7tWpbi",
"arxiv_id": null,
"doi": null
}
|
{
"title": "Discriminative Weighted Alignment Matrices For Statistical Machine Translation",
"authors": [
"Nadi Tomeh",
"Alexandre Allauzen",
"François Yvon"
],
"abstract": "Nadi Tomeh, Alexandre Allauzen, François Yvon. Proceedings of the 15th Annual conference of the European Association for Machine Translation. 2011.",
"keywords": [],
"raw_extracted_content": "Discriminative Weighted Alignment Matrices\nFor Statistical Machine Translation\nNadi Tomeh andAlexandre Allauzen andFrançois Yvon\nLIMSI-CNRS and Université Paris Sud\nBP 133, 91403 Orsay\n{nadi,allauzen,yvon}@limsi.fr\nAbstract\nIn extant phrase-based statistical machine\ntranslation (SMT) systems, the transla-\ntion model relies on word-to-word align-\nments, which serve as constraints for the\nsubsequent heuristic extraction and scor-\ning processes. Word alignments are usu-\nally inferred in a probabilistic framework;\nyet, only one single best alignment is re-\ntained, as if alignments were deterministi-\ncally produced. In this paper, we explore\nways to take into account the entire align-\nment matrix, where each alignment link\nis scored by its probability. By compari-\nson with previous attempts, we use an ex-\nponential model to compute these proba-\nbilities, which enables us to achieve sig-\nnificant improvements on the NIST MT’09\nArabic-English translation task.\n1 Introduction\nIn Phrase-Based SMT systems, a source sentence\nis translated by concatenating translation options,\nselected from an inventory called the phrase table .\nBuilding this inventory from a parallel corpus con-\nstitute the translation model training phase, which\nis usually performed in two main steps. 1) For\neach training sentence pair, a set of source-target\nphrase-pairs, that are translations of one another,\nare first extracted. 2) Phrase pairs accumulated\nover the entire training corpus are collected and\nscored using relative frequencies estimates. The\ncollection of phrase-pairs and their scores consti-\ntutes the translation model.\nDuring the extraction step, we would like to use\na phrase alignment model that enables the compu-\nc/circlecopyrt2011 European Association for Machine Translation.tation of corpus level statistics related to the joint\nsegmentation and alignment of source and target\nsentences. Unfortunately, generative models de-\nsigned for this purpose (Marcu and Wong, 2002;\nBirch et al., 2006) fail to deliver good performance\ndue to three key difficulties (DeNero et al., 2006).\nFirst, exploring the whole space of phrase-\nto-phrase alignment is intractable, which makes\nphrase alignment a NP-hard (DeNero and Klein,\n2008) problem. Second, including a latent seg-\nmentation variable in the model increases the risk\nof overfitting during EM training. Third, spuri-\nous segmentation ambiguity tends to populate the\nphrase table with more entries, each having too few\ntranslation options. A practical solution is to re-\nconfigure the phrase alignment problem in terms\nofwords instead of phrases: a fixed segmentation,\nbased on word boundaries, is used, and the result-\ning model is simpler to train using EM. Then, for\neach word-aligned sentence in the training corpus,\nan additional step is required to identify the set of\nphrase-pairs to be extracted. A heuristic which\nextracts phrase-pairs that are consistent with the\nViterbi word alignment is widely used in practice.\nDuring the scoring step, relative frequencies\ncomputed on the training corpus are used to as-\nsess each extracted phrase pair. Additional scores,\nbased on lexical probabilities are also used so as to\nsmooth the scores of rare phrase-pairs.\nThe training of the translation model is thus de-\ncomposed as a modular pipeline the components of\nwhich can be developed independently. 
The result-\ning modularity comes at the price of possible error\npropagation between consecutive steps: errors in\nthe 1-best word alignment can propagate to phrase\npair extraction and to probability estimation.\nThis problem can be alleviated by feeding more\ninformation from word alignment into the pipeline.Mik el L. F orcada, Heidi Depraetere, Vincen t V andeghinste (eds.)\nPr o c e e dings of the 15th Confer enc e of the Eur op e an Asso ciation for Machine T r anslation , p. 305\u0015312\nLeuv en, Belgium, Ma y 2011\nFor this purpose, a structure called the Weighted\nAlignment Matrix (WAM) (Liu et al., 2009), which\ncompactly encodes the distribution of all possible\nalignments of a sentence pair, can be used to ex-\ntract and score phrase-pairs. Each cell in this ma-\ntrix corresponds to a pair of (source, target) words;\nthe associated value measures the quality of the\nalignment link. Therefore, a weighted matrix en-\ncodes, in linear space, the probabilities of expo-\nnentially many alignments.\nThe authors of (Liu et al., 2009) estimate link\nprobabilities by calculating relative frequencies\nover a list of N-best alignments produced by gen-\nerative models, and show some improvements in\ntranslation quality. However, using small N-best\nlists as samples is known to yield poor estimates\nof the alignment posteriors, as these lists usually\ncontain too few variations. In this paper, we ar-\ngue that better estimation of alignment probabili-\nties helps achieving clearer improvements. Our so-\nlution is to directly model the weighted alignment\nmatrices using a discriminative aligner (Ayan and\nDorr, 2006; Tomeh et al., 2010).\nThe rest of this paper is organized as follows:\nwe start in Section 2 by a recap of related work.\nSection 3 revisits the standard translation model\nprocedures and its extensions to weighted matri-\nces. Our own approach is introduced in Section 4\nand experimentally contrasted to various baselines\nin 5. We discuss further prospects in Section 6.\n2 Related work\nAs pointed out in the introduction, the construc-\ntion of the translation model starts with a word\nalignment step during which relevant phrase-pairs\nare extracted and their probabilities are estimated.\nYet, word alignment outputs a probability distri-\nbution over all possible alignments. However, the\nmost common practice (Koehn et al., 2003) is to\nuse only the 1-best, Viterbi alignment, while dis-\ncarding all the other informations contained in this\ndistribution, which seems to adversely impact the\nquality of the resulting translation model.\nIn fact, several researchers have shown that in-\ncorporating more information from the posterior\ndistribution helps reducing the propagation of er-\nrors and improves performance. In (Mi and Huang,\n2008), some gains are achieved by exploiting a\npacked forest, which compactly encodes exponen-\ntially many parses, to extract rules for a syntax-\nbased translation system, instead of using onlythe 1-best tree. 
This compact representation has\nalready been shown to be efficient and effective\n(Galley et al., 2006; Wang et al., 2007).\nSimilarly, N-best alignments are used to extract\nphrase-pairs as in (Xue et al., 2006; Venugopal et\nal., 2008); in the latter, a probability distribution\nover N-best alignments and parses is used to gener-\nate posterior fractional counts for rules in a syntax-\nbased translation model.\nDue to the difficulty of computing statistics un-\nder IBM3 and IBM4 models, the previously de-\nscribed approaches use N-best alignments as sam-\nples to approximate word-to-word alignment pos-\nterior probabilities. While simpler models, such\nas HMM and IBM1, allow for such a computation\n(Brown et al., 1993; Venugopal et al., 2003; Deng\nand Byrne, 2005), they do not compete with Model\n4 in terms of performance. A solution to this prob-\nlem is described in (Deng and Byrne, 2005), where\naword-to-phrase HMM alignment model is pro-\nposed, which constitutes a competitive model to\nIBM4. Under this model, the necessary statistics\ncan be computed efficiently with the forward al-\ngorithm. The phrase pair induction procedure de-\nscribed in (Deng and Byrne, 2005), benefits from\nthis efficiency to estimate a phrase-to-phrase pos-\nterior distribution, which is used further in the ex-\ntraction and scoring of phrases. In (de Gispert et\nal., 2010), a similar procedure is shown to be use-\nful for extracting synchronous grammar rules.\nA structure analogous to the packed forest for\ntrees is presented in (Liu et al., 2009) and called\nWeighted Alignment Matrix. Each element in the\nmatrix is assigned a probability which measures\nthe confidence of a word alignment. An algorithm\nfor extracting phrase-pairs from weighted matrices\nand for estimating their scores is shown to be ben-\neficial to translation quality.\nIn this paper, we continue this line of research\nand show that additional improvements can be ob-\ntained by better estimating the word alignments in\na discriminative manner, using the MaxEnt-based\nword aligner described in (Ayan and Dorr, 2006;\nTomeh et al., 2010; Tomeh et al., 2011).\n3 WAM-based Translation Models\nThe translation model Tconstitutes the primary\nsource of knowledge in a phrase-based SMT sys-\ntem and plays a crucial role in determining the\nquality of its output. Recall that a translation\nmodel is simply a list of bilingual phrase-pairs that306\nare translations of one another. Each phrase-pair\nis associated with a set of scores assessing its rele-\nvance for the translation task, where each score is\nbased on statistics accumulated over some training\ncorpus. 
In this section, we present a general frame-\nwork which enables to frame both the standard and\nthe WAM-based approaches.\n3.1 A General Framework\nAlgorithm 1 sketches a general approach to con-\nstruct the translation model T, by extracting and\nscoring phrase-pairs from a parallel corpus C.\nFor all sentence pairs (eI\n1,fJ\n1)made up of\nJsource words and Itarget words, we would like\nto enumerate all possible phrase-pairs (fj2\nj1,ei2\ni1)\nand assign each of them a score ( fE) that can be\nused to inform the selection criteria and/or provide\na fractional count quantifying its quality (step 5).\nYet, extracting allpossible phrase-pairs found\nin the training corpus would cause practical prob-\nlems, as (1) the correlated growth in the solution\nspace would dramatically slow down decoding; (2)\nthe simplicity of the scoring procedure, based on\nrelative frequencies, can not distinguish relevant\nrare translation candidates from noisy ones and\nwould cause some probability mass to be wasted\non noisy phrase-pairs. Hence the need for a se-\nlection procedure implemented in steps 4 and 6,\nwhich discard all phrase-pairs that do not satisfy\nsome alignment constraints CA(based on the ma-\ntrixA) or do not fit some selection criteria CS.\nThe final step is to add a set of scores to each\nselected phrase-pair (step 10). These scores usu-\nally include a translation probability φ, estimated\nusing relative frequencies over the training corpus,\nwhere each occurence of a phrase-pair is evaluated\nusing the counting function fC. They also include\nlexical weights lex, based on lexical translation\nprobabilities w, as a smoothing method to improve\nthe estimates computed for rare phrase-pairs. A\nvaluable, and relatively easy to acquire, source of\ninformation is the word alignment represented by\nthe alignement matrix A, which is consulted by the\ndifferent steps of this algorithm: filtering, evalua-\ntion and scoring of phrase-pairs.\n3.2 Standard Instantiation\nThe most common instantiation of this framework\n(Koehn et al., 2003) considers a binary alignm-\nnent matrix A, where each cell represents a binary\nvariable indicating whether the associated words\nare aligned or not. The matrix is usually obtained\n0.9 0.5 0.8 \n0.8 0.6 0.7 0.2 \n0.4 0.8 0.7 0.1 0.3 0.1 \n0.4 0.8 0.2 0.3 0.1 \n0.1 0.6 0.3 0.8 0.2 \n0.1 0.4 0.8 0.7 0.2 0.1 \n0.9 0.1 0.3 \n0.4 0.5 0.6 0.5 \n0.9 0.4 0.8 0.8 \n0.8 1.0 i1 \ni2 j1 j2 \nFigure 1: Computation of fractional counts:\nfC(fj2\nj1,ei2\ni1) =α(j1,j2,i1,i2)×β(j1,j2,i1,i2).\nEmpty cells have zero probability.\nby applying the symmetrization heuristic to two\nViterbi alignments, one for each translation direc-\ntion. The alignment constraints CAare defined\nso that extracted phrase-pairs (fj2\nj1,ei2\ni1)are consis-\ntent with A:∀(i,j)∈L: (j∈[j1,j2]∧i∈\n[i1,i2])∨(j /∈[j1,j2]∧i /∈[i1,i2])where Lis\nthe set of all active links in A. The selection cri-\nteriaCShelps improve the practical efficiency by\nestablishing a limit on the admissible source/target\nphrase lengths. All selected phrase-pairs are uni-\nformly evaluated and counted using fE=fC= 1.\n3.3 WAM-based Instantiation\nSince the standard instantiation ignores alignment\nprobabilities, it tends to be sensitive to alignment’s\nprecision and recall errors. An erroneous link,\nas unlikely as it may be, can prevent the extrac-\ntion of many plausible phrase-pairs. 
Furthermore,\nthe extracted phrase-pairs are all considered of\nequal quality, regardless of how much evidence the\nalignment matrix provides for them. A more flex-\nible and robust alternative instantiation takes ad-\nvantage of a structure called Weighted Alignment\nMatrix (WAM), presented in (Liu et al., 2009). In\nthe weighted matrix Aw={p(ai,j|e,f) : 1≤i≤\nI,1≤j≤J}, each possible link is weighted by\na scorep(ai,j|e,f)quantifying the confidence as-\nsigned to it by the alignment model.\nEvaluation and Counting Functions The use\nof a weighted matrix allows for conceptualizing\nmore informative evaluation and counting func-\ntions, which can help mitigate the error propaga-\ntion problem. To incorporate alignment posterior\nprobabilities when computing fractional counts for\na phrase-pair, all possible alignments should be307\nAlgorithm 1 Translation Model Construction\nInput: Parallel CorpusC\nOutput: Translation Model T\n1:Initialize the phrase table P={}\n2:for all sentence pairs in the training parallel corpus (eI\n1,fJ\n1)∈Cdo\n3: Construct the alignment matrix A=align (eI\n1,fJ\n1)\n4:PA=/braceleftBig\n(fj2\nj1,ei2\ni1) : 1≤j≤J,1≤i≤I,(fj2\nj1,ei2\ni1)satisfies some alignment constraints CA/bracerightBig\n5:PE={/angbracketleftx,fE(x,A)/angbracketright:x∈PA}, wherefEis an evaluation function\n6:PS={x:x∈PE,xsatisfies some selection criteria CS}\n7:P=P∪P S\n8:end for\n9:for all/angbracketleft(e,f),fE(e,f)/angbracketright∈P do\n10:T=T∪{/angbracketleft (e,f),φ(e|f),φ(f|e),lex(e|f),lex(f|e)/angbracketright}wherefCis a counting function,\nφ(e|f) =fC(e,f)/summationtext\neifC(ei,f),andlex(e|f,ae,f) =length (e)/productdisplay\ni=11\n|{j: (i,j)∈ae,f}|/summationdisplay\n∀(i,j)∈ae,fw(ei|fj),\n11:end for\nexplicitly enumerated. Unlike for N-best (Venu-\ngopal et al., 2008) or HMM (de Gispert et al.,\n2010) alignments, this is unrealistic for a weighted\nmatrix. Instead, we follow (Liu et al., 2009)\nand use link probabilities to compute a frac-\ntional count, interpreted as the probability that the\nphrase-pair satisfies consistency constraints.\nGiven a weighted alignment matrix Awand\na phrase-pair (fj2\nj1,ei2\ni1), two regions (in gray\non Figure 1) are identified: in(j1,j2,i1,i2)and\nout(j1,j2,i1,i2)which respectively represents\nlinks inside and outside (on the same rows and\ncolumns) of a phrase-pair. Denoting the probabil-\nity that two words are unaligned as ¯p(ai,j|e,f) =\n1−p(ai,j|e,f), we can compute, for the inside re-\ngion, the probability that there is at least one word\ninside one phrase aligned to a word inside the other\nphrase as:\nα(j1,j2,i1,i2) = 1−/productdisplay\n(j,i)∈in(j1,j2,i1,i2)¯p(ai,j|e,f).\nSimilarily for the outside region, we compute\nthe probability that no word inside one phrase is\naligned to a word outside the other phrase:\nβ(j1,j2,i1,i2) =/productdisplay\n(j,i)∈out(j1,j2,i1,i2)¯p(ai,j|e,f).\nFinally, the same function is used for evaluation\nand counting ( fE=fC) and defined as the product\nof these two probabilities:\nfC(fj2\nj1,ei2\ni1) =α(j1,j2,i1,i2)×β(j1,j2,i1,i2).Alignment Constraints and Selection Criteria\nWeighted alignment matrices admit flexible align-\nment constraints and selection criteria. Threshold-\ning enables to better tune the balance between the\nnumber of extracted phrase-pairs and the accuracy\nof their assigned scores. CArequires at least one\nlink inside the phrase-pair to have a probability\np(ai,j|e,f)> ta. Similar constraints could be ap-\nplied on links outside the phrase-pair. 
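Concretely, the fractional count of a candidate phrase pair can be read directly off the weighted matrix without enumerating alignments. A minimal sketch, assuming 0-based Python indices and a matrix p[i][j] = p(a_{i,j}|e,f) with i ranging over the words of e and j over the words of f (the function names and the tiny example matrix are ours; the threshold value t_a = 0.5 is the one reported in the paper's footnotes):

```python
# Sketch of f_C(f_{j1..j2}, e_{i1..i2}) = alpha * beta over a weighted alignment matrix,
# following the alpha/beta definitions given above.

def fractional_count(p, j1, j2, i1, i2):
    I, J = len(p), len(p[0])
    alpha_prod = 1.0   # probability that no word pair inside the block is aligned
    beta = 1.0         # probability that no inside word links to an outside word
    for i in range(I):
        for j in range(J):
            inside_i = i1 <= i <= i2
            inside_j = j1 <= j <= j2
            if inside_i and inside_j:
                alpha_prod *= 1.0 - p[i][j]
            elif inside_i != inside_j:        # same row or column, but outside the block
                beta *= 1.0 - p[i][j]
    alpha = 1.0 - alpha_prod
    return alpha * beta

def satisfies_alignment_constraint(p, j1, j2, i1, i2, t_a=0.5):
    """C_A: at least one link inside the phrase pair with probability above t_a."""
    return any(p[i][j] > t_a
               for i in range(i1, i2 + 1) for j in range(j1, j2 + 1))

# Tiny illustration: a 3x3 matrix, candidate pair covering rows 0-1 and columns 0-1.
p = [[0.9, 0.1, 0.0],
     [0.2, 0.8, 0.0],
     [0.0, 0.0, 1.0]]
score = fractional_count(p, 0, 1, 0, 1)   # alpha = 1 - 0.1*0.9*0.8*0.2, beta = 1.0
```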
Likewise,\nCSadmits only phrase-pairs with an evaluation\nscore greater than a threshold fE(fj2\nj1,ei2\ni1)> tp,\nand setup a phrase length limit.\nTranslation Model Scores While the phrase\ntranslation probability estimated as φ(see step (10)\nof Algorithm 1) can be applied unchanged to the\nfractional counts fC, the lexical scores lexhave\nto be modified to incorporate link probabilities.\nThe main difference is the computation of the lex-\nical probabilities w(ei|fj)andw(fj|ei), which are\ncalculated using relative occurrence frequencies\n(Koehn et al., 2003). Instead of simply count-\ning every occurrence once count (ei,fj) = 1 ,\nlink probabilities provided by the weighted ma-\ntrix are used as fractional counts: count (ei,fj) =\np(ai,j|e,f)(Liu et al., 2009). Using fractional\ncounts forfE,fCandwenables a more accurate\nevaluation of phrase-pairs depending on the con-\ntext of the sentence-pair in which they occur, hence\na better estimation of their scores.308\n4 Discriminative Modeling of the\nWeighted Alignment Matrix\nPrevious attempts at taking advantage of WAMs\nhave relied on generative alignment models, aug-\nmented with some heuristics such as symmetriza-\ntion have been used to produce the alignment ma-\ntrices. This approch is less than optimal, since the\ngenerative paradigm is not well suited to incorpo-\nrate arbitrary and possibly interdependent sources\nof information. Furthermore, all symmetrization\nheuristics act locally at the sentence-pair level and\nlack a global view of the training corpus.\nTo overcome these limitations we propose to\nview the alignment problem as a structured clas-\nsification task and model the weighted matrix di-\nrectly as in (Tomeh et al., 2010). In the presence\nof manually annotated data with active orinactive\nlinks, a discriminative classifier can be trained to\nmodel the probability of each link being active us-\ning an exponential model:\np(li,j=active|x) =exp/summationtextK\nk=1λkgk(y,x)\nZ(x),\nwhere xdenotes the observation, Z(x)is a nor-\nmalization constant, (gk)K\nk=1defines a set of fea-\nture functions, and each gkis associated with a\nweightλk. We use features that describe the lin-\nguistic context of a given link, and depend on the\nsentence pair in which it occurs, augmented by\npart-of-speech tags and related corpus statistics.\nWe also incorporate the predictions of MGIZA++\nalignments as features, which can be viewed as a\nsolution to the symmetrization problem. Since the\nalignment matrix is typically sparse, with a ma-\njority of inactive links, the classification task in-\ntroduced above is imbalanced. Hence, we only\nconsider the links that occur in the union of all in-\nput alignments; all other links are deemed inactive.\nThe model is trained to optimize the log-likelihood\nof the parameters, regularized using a combination\nof/lscript1and/lscript2terms, allowing for efficient feature\nselection while maintaining numerical stability.\n5 Experiments\nIn our experiments, we aim (1) to compare the\nstandard translation model training method with\nthe method based on weighted alignment matrices;\nand (2) to contrast different approches to populate\nthe matrices with link posterior probabilities.\nFor this purpose we build several phrase-\nbased, Arabic to English, translation systems us-ing Moses1in its default configuration. 
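Before the experimental comparison, a minimal sketch of the exponential link model of Section 4 may be useful: each candidate link is scored independently as active versus inactive, and Z(x) normalises over the two labels. The feature names and weights below are stand-ins; the actual model uses contextual, part-of-speech and MGIZA++-prediction features trained with combined l1/l2 regularisation.

```python
# Sketch of p(l_{i,j} = active | x) = exp(sum_k lambda_k g_k(active, x)) / Z(x).
# Feature functions and weights here are invented placeholders.
import math

def link_probability(weights, features_by_label):
    """features_by_label maps each label ('active', 'inactive') to its feature values."""
    scores = {label: sum(weights.get(name, 0.0) * value
                         for name, value in feats.items())
              for label, feats in features_by_label.items()}
    z = sum(math.exp(s) for s in scores.values())        # Z(x) over the two labels
    return math.exp(scores["active"]) / z

weights = {"pos_pair=NN_NN": 1.2, "mgiza_predicts_link": 2.5, "bias": -0.7}
features = {
    "active":   {"pos_pair=NN_NN": 1.0, "mgiza_predicts_link": 1.0, "bias": 1.0},
    "inactive": {"bias": 1.0},
}
p_active = link_probability(weights, features)   # a value in (0, 1)
```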
In order\nto tune the parameters of the translation systems,\nMinimum Error-Rate Training (Och, 2003) is ap-\nplied on the development corpus, for which we\nused the NIST MT’06 evaluation’s test set, con-\ntaining 1,797 Arabic sentences (46K words) with\nfour English references (53K words). The per-\nformance of each system is assessed by calculat-\ning the multi-reference BLEU on NIST MT’08\nevaluation’s test set, which contains 1,360 Arabic\nsentences (43K words), each with four references\n(53K words). For training the various models used\nby the translation systems, we select a subset of\nthe LDC resources made available by the NIST\nMT’09 constrained track2. In order to validate\nthe obtained results on training corpora of vary-\ning sizes, we consider two training conditions, one\nwith 30K parallel sentence pairs, and another with\n130K. For each condition, we report below the\nAER, the BLEU scores on the test set, along with\nthe size of the obtained phrase tables. A 4-gram\nback-off language model, estimated with SRILM3\nis trained on the NIST MT’09 constrained English\ndata. All Arabic sentences are pre-processed using\nMADA+TOKAN4(Habash and Rambow, 2005),\nand segmented according to the D2 tokenization\nscheme. The IBM Arabic-English Word Align-\nment Corpus (Ittycheriah et al., 2006) is used to\ntrain both CRF and MaxEnt aligners and evaluate\nthem using Alignment Error Rate (AER).\n5.1 Translation Models Construction\nIn section 3, we have described a generic algorithm\nthat constructs the translation model in three steps:\nword alignment, phrase-pairs extraction, and scor-\ning. In this section, we compare different instan-\ntiations of these steps, and report the translation\nperformance of the resulting models.\nIn the word alignment step, we experiment two\nconfigurations of the alignment matrix: (i) a stan-\ndard alignment matrix , which contains the links of\nthe1-best alignment; and (ii) the weighted align-\nment matrix, which is populated with link proba-\nbilities. Note that we can obtain a matrix in con-\nfiguration (i) by thresholding the probabilities in\nthe weighted matrix according to a threshold ta5.\nHence, for each word aligner (briefly described be-\n1http://www.statmt.org/moses/\n2http://www.itl.nist.gov/iad/mig/tests/mt/2009/\n3http://www-speech.sri.com/projects/srilm/.\n4http://www1.ccls.columbia.edu/ cadim/MADA.html\n5In our experiments tais set to 0.5.309\nlow) that produce a weighted matrix, we derive\ntwo systems: standard andWAM-based6.\nThe two remaining steps depend on the form of\nthe alignment matrix computed in the first step.\nFor standard matrices (i) we use the standard\nheuristic for extraction, and relative frequencies\nfor scoring (Koehn et al., 2003). For weighted ma-\ntrices (ii), a phrase posterior fC(fj2\nj1,ei2\ni1)can be\ncalculated and used as a fractional count. Only\nphrase-pairs with a fractional count above certain\nthresholdtp7are extracted. The same fractional\ncounts are used for scoring with relative fractional\nfrequencies. In both configurations, only phrase-\npairs that do not exceed a length limit of 7, on the\nsource or the target side, are retained and scored.\n5.2 Results and Discussion\nIn this section, we describe five alignment systems\nand compare their performance (see Table 5.2).\nMGIZA++8These alignments are produced by\nthe multi-threaded and optimized alignment toolkit\nMGIZA++ (Gao and V ogel, 2008), which imple-\nments the IBM models. 
This tool only outputs de-\nterministic alignment matrices in configuration (i).\nThese models also deliver features for the discrim-\ninative word aligners described below.\nMGIZA++ IBM4 represents the performance of\nthe standard baseline: one IBM4 alignment in each\ndirection, which are symmetrized with grow-diag-\nfinal-and heuristic. This system deliver competi-\ntive BLEU scores of 35.9 and 40.2 on the 30K and\n130K respectively, with a much smaller phrase ta-\nble than all the other systems.\nN-best WAM9These alignments build weighted\nmatrices, by averaging link occurences over\nMGIZA++ N-best alignments produced by the\nIBM model 4, as described in (Liu et al., 2009).\nThis method slightly improves performance\nover the baseline. Gains of 0.3 BLEU point\non the small task and 0.2 on the larger one are\nobtained. Improvements are only obtained in\nthe weighted matrix configuration, while standard\nalignments obtained by thresholding the ( 10-best\nbased) weighted matrix seem to hurt performance\nfor the selected threshold (0.5). Phrase tables ob-\n6Our experiments show that post-processing the weighted ma-\ntrix to nullify all link probabilities, that are inferior to a thresh-\noldta, improves the performance. We use ta= 0.5.\n7In our experiments tpis set to 0.1.\n8http://geek.kyloo.net/\n9http://www.nlp.org.cn/ liuyang/wam/wam.htmltained from these systems are only slightly larger\nthan the baseline, which might explain the small\nimprovement. The N-best system achieves compa-\nrable AER to the MGIZA++ baseline.\nPostCAT10The Posterior Constrained Align-\nment Toolkit (Graça et al., 2007) implements an\nefficient and principled way to inject rich con-\nstraints on the posteriors of latent variables into\nthe EM algorithm, allowing it to satisfy additional,\notherwise intractable constraints. When applying\nconstraints such as a symmetry orbijectivity on a\nregular HMM alignment, it delivers models that\nare comparable in accuracy to IBM4 model, and\nunder which statistics to estimate posteriors can\nstill be collected efficiently. This allow us to con-\nstruct weighted matrices with posteriors estimated\nover constrained HMM models, by calculating for\neach link, the average of the posterior given by\ntwo HMM models in both translation direction, a\nmethod referred to as the soft union symmetriza-\ntion. Our experiments use Geppetto11(Ling et al.,\n2010), an implementation of the weighted align-\nment matrix integrated with PostCAT.\nFor the small task, both bijective and symmet-\nric PostCAT alignments, in the standard configu-\nration, outperform MGIZA++ and N-best WAM\nby≈0.8 BLEU point. The weighted matrix con-\nfiguration performs even better than the standard\none and increases BLEU scores by another ≈0.3\nBLEU point. Improvements are persistent but less\napparent on the larger task. We notice that the\nphrase table extracted from the weighted matrix is\nconsiderably larger than the standard one (by a fac-\ntor of at least 3). PostCAT also slightly decreases\nthe AER as compared to the MGIZA++ baseline.\nCRF12The alignment matrix is modeled with\na conditional random field (CRF), of which the\ngraphical structure is quite complex and contains\nmany loops (Niehues and V ogel, 2008). Therefore,\nneither training nor inference can be performed ex-\nactly, and the loopy belief propagation algorithm is\nused to approximate the posteriors. 
The CRF ap-\nproach differs from our MaxEnt model (Section 4)\nin two aspects: first, MaxEnt training only op-\ntimizes the log-likelihood, whereas CRF training\nalso aims at minimizing the AER. Second, while\nboth models use the same set of features, MaxEnt\n10http://www.seas.upenn.edu/ strctlrn/CAT/CAT.html\n11http://code.google.com/p/geppetto/\n12We thank J. Niehues (KIT) for sharing his implementation.310\nTranslation task: 30K 130K\nTranslation model construction: Standard(i) WAM(ii) Standard(i) WAM(ii)\nAlignment AER BLEU PT BLEU PT AER BLEU PT BLEU PT\nMGIZA++HMM 28.35 35.01 3,6 - - 26.77 39.15 9,7 - -GenerativeIBM4 24.97 35.90 2,4 - - 23.30 40.18 6,5 - -\n10-best IBM4 24.92 35.78 2,4 36.21 3,0 23.26 40.00 6,6 40.43 8,5\nPostCATBijective 22.53 36.62 3,3 36.94 10,2 20.49 40.08 9,1 40.61 29,5\nSymmetric 22.48 36.69 2,9 36.96 10,7 20.83 40.24 8,5 40.43 30,2\nCRFHMM 25.39 35.93 4,6 36.50 11,9 23.65 39.56 12,6 40.00 31,2DiscriminativeIBM4 23.51 36.07 3,4 36.93 8,4 22.04 40.34 8,7 40.32 21,3\nHMM+IBM 1,3,4 21.03 36.34 3,7 37.10 8,4 19.65 40.14 9,8 40.35 21,3\nMaxEntHMM 17.61 36.90 6,7 37.48 11,7 16.42 40.47 17,7 40.84 30,0\nIBM4 15.61 37.17 5,5 37.52 9,6 14.32 41.04 14,5 41.13 25,0\nHMM+IBM 1,3,4 14.69 37.12 5,2 37.92 8,6 13.92 40.82 13,4 41.08 22,2\nTable 1: Comparison of five word aligners: MGIZA++, 10-best, PostCAT, CRF and MaxEnt, in terms of\nAER, BLEU scores and Phrase Table size in millions (PT). We compare the standard to the WAM-based\ninstantiation of Algorithm 1. Two training corpus of different sizes (30K / 100K) are considered.\nturns real-valued features into discrete ones using\nunsupervised equal frequency interval binning.\nOn the small task, the CRF approach achieves\nimprovement up to ≈0.4 over the MGIZA++ base-\nline and up to≈1.2 over the WAM-based base-\nline. Using several input alignments as local fea-\ntures seems beneficial: approximatly 0.5 BLEU\npoint, for both configurations, is gained when us-\ning IBM3 and IBM4 features. Similar tendencies\nare observed for the larger task, albeit with smaller\ngains. The performance of CRF is comparable\nto that of PostCAT, but its translation models are\nhowever somewhat smaller. Even though the CRF\nmodel is trained to maximize the log-likelihood of\nthe manual alignment and to minimize its AER, it\nachieves only modest improvements in AER over\nMGIZA++ and PostCAT.\nMaxEnt This is the system of Section 4. Dis-\ncriminative weighted matrices significantly out-\nperform all the previous baselines in both configu-\nrations. For the 30K task and for the standard con-\nfiguration (i): when using only MGIZA++ HMM\nalignments as input to MaxEnt, we get 1 BLEU\npoint improvement over the standard MGIZA++\nIBM4 baseline, and 0.2 point over PostCAT. The\nextracted phrase table is twice as large as the ones\nused by the MGIZA++, 10-best or PostCAT. Fur-\nther improvements are obtained when using IBM4\nas input or combining several input alignments, 1.3\nBLEU point over MGIZA++ and 0.5 point over\nPostCAT. MaxEnt based matrices, in configura-\ntion (ii), achieve up to 2 BLEU point improvementover MGIZA++ IBM4 and up to 1 point over the\nbest weighted matrix baseline (PostCAT). It is no-\ntable that this later improvement is obtained with a\nsmaller phrase table ( ≈25% smaller).\nThese gains persist for the larger task: Max-\nEnt in standard (i) configuration is 0.8 BLEU\npoint better than MGIZA++ IBM4, and 0.6 better\nthan PostCAT. 
In the weighted matrix configura-\ntion (ii), these improvements allow us to outper-\nform MGIZA++/IBM4 by nearly 1 BLEU point,\n10-best by and approximately 0.7 point, and Post-\nCat by 0.5 point. As for the size of the phrase\ntable, MaxEnt uses smaller phrase tables (22,2M)\nthan PostCAT (30,2M), but much larger ones than\nMGIZA++ IBM4 (6,5M). Unlike all the other\nsystems, MaxEnt drastically decreases the AER,\nand achieves approximately 40% relative reduc-\ntion over MGIZA++ on both 30K and 130K tasks.\n6 Conclusion\nIn this paper we presented a generic algorithm to\nconstruct the translation model from a parallel cor-\npus, for which we described two instantiations:\nstandard and WAM-based. We compared sev-\neral generative and discriminative word aligners\nin both instantiations, and showed that the WAM-\nbased outperforms the standard procedure due to\nits improved use of the word alignment probability\ndistribution as compared to the Viterbi alignments.\nWe proposed a discriminative estimation scheme\nfor the probabilities in the weighted matrix using\nan exponential model and showed that significant311\nimprovements in BLEU scores can be achieved.\nOur MaxEnt modeling of the matrix led to approx-\nimately 2 BLEU points improvement over the stan-\ndard MGIZA++ baseline, using a small training\ncorpus and 1 BLEU point using a larger one. It is\nfinally interesting to see that, contrarily to the stan-\ndard training regime, WAM-based training seems\nto benefit from alignments with better AERs.\n7 Acknowledgments\nThis work was partly realized as part of the Quaero\nProgram, funded by OSEO, the French agency for\ninnovation.\nReferences\nAyan, Necip Fazil and Bonnie J. Dorr. 2006. A max-\nimum entropy approach to combining word align-\nments. In Proc. of NAACL-HLT , pages 96–103.\nBirch, Alexandra, Chris Callison-Burch, Miles Os-\nborne, and Philipp Koehn. 2006. Constraining the\nphrase-based, joint probability statistical translation\nmodel. In Proc. of WMT , pages 154–157, New York,\nNY .\nBrown, P. F., V . J. D. Pietra, S. A. D. Pietra, and R. L.\nMercer. 1993. The mathematics of statistical ma-\nchine translation: parameter estimation. Comput.\nLinguist. , 19(2):263–311.\nde Gispert, Adrià, Juan Pino, and William Byrne. 2010.\nHierarchical phrase-based translation grammars ex-\ntracted from alignment posterior probabilities. In\nProc. of EMNLP , pages 545–554.\nDeNero, John and Dan Klein. 2008. The complexity\nof phrase alignment problems. In Proc. of ACL’08:\nHLT, pages 25–28, Columbus, Ohio, June.\nDeNero, John, Dan Gillick, James Zhang, and Dan\nKlein. 2006. Why generative phrase models under-\nperform surface heuristics. In Proc. of WMT , pages\n31–38, New York City.\nDeng, Yonggang and William Byrne. 2005. HMM\nword and phrase alignment for statistical machine\ntranslation. In Proc. of the EMNLP , pages 169–176.\nGalley, M., J. Graehl, K. Knight, D. Marcu, S. De-\nNeefe, W. Wang, and I. Thayer. 2006. Scalable in-\nference and training of context-rich syntactic trans-\nlation models. In Proc. of the 21st ICCL and 44th\nACL, pages 961–968, Sydney, Australia.\nGao, Qin and Stephan V ogel. 2008. Parallel implemen-\ntations of word alignment tool. In SETQA-NLP ’08 ,\npages 49–57.\nGraça, João, Kuzman Ganchev, and Ben Taskar. 2007.\nExpectation maximization and posterior constraints.\nInNIPS .Habash, Nizar and Owen Rambow. 2005. Arabic tok-\nenization, part-of-speech tagging and morphological\ndisambiguation in one fell swoop. In Proc. 
of the\n43rd ACL , pages 573–580.\nIttycheriah, Abe, Yasser Al-Onaizan, and Salim\nRoukos. 2006. The IBM Arabic-English Word\nAlignment Corpus. Technical report.\nKoehn, Philipp, Franz Josef Och, and Daniel Marcu.\n2003. Statistical phrase-based translation. In Proc.\nNAACL-HLT 2003 , pages 48–54.\nLing, Wang, Tiago Luís, Joao Graça, Luísa Coheur, and\nIsabel Trancoso. 2010. Towards a General and Ex-\ntensible Phrase-Extraction Algorithm. In Proc. of\n7th IWSLT , pages 313–320.\nLiu, Yang, Tian Xia, Xinyan Xiao, and Qun Liu. 2009.\nWeighted alignment matrices for statistical machine\ntranslation. In Proc. of EMNLP , pages 1017–1026.\nMarcu, Daniel and William Wong. 2002. A phrase-\nbased, joint probability model for statistical machine\ntranslation. In Proc. of EMNLP , pages 133–139.\nMi, Haitao and Liang Huang. 2008. Forest-based\ntranslation rule extraction. In Proc. of the 2008\nEMNLP , pages 206–214, Honolulu, Hawaii.\nNiehues, Jan and Stephan V ogel. 2008. Discriminative\nword alignment via alignment matrix modeling. In\nProc. of WMT , pages 18–25.\nOch, Franz Josef. 2003. Minimum error rate training\nin statistical machine translation. In Proc. of the 41st\nAnnual Meeting on ACL , pages 160–167.\nTomeh, Nadi, Alexandre Allauzen, Guillaume Wis-\nniewski, and François Yvon. 2010. Refining word\nalignment with discriminative training. In Proc. of\nAMTA , Denver, CO.\nTomeh, Nadi, Thomas Lavergne, Allexandre Allauzen,\nand François Yvon. 2011. Designing an improved\ndiscriminative word aligner. In Proc. of CICLing ,\nTokyo, Japan.\nVenugopal, Ashish, Stephan V ogel, and Alex Waibel.\n2003. Effective phrase translation extraction from\nalignment models. In Proc. of the 41st Annual Meet-\ning of the ACL , pages 319–326.\nVenugopal, Ashish, Andreas Zollmann, Noah A. Smith,\nand Stephan V ogel. 2008. Wider pipelines: N-best\nalignments and parses in MT training. In Proc. of\nAMTA , pages 192–201.\nWang, Wei, Kevin Knight, and Daniel Marcu. 2007.\nBinarizing syntax trees to improve syntax-based ma-\nchine translation accuracy. In Proc. of EMNLP-\nCoNLL , pages 746–754, Prague, Czech Republic.\nXue, Yong-Zeng, Sheng Li, Tie-Jun Zhao, Mu-Yun\nYang, and Jun Li. 2006. Bilingual phrase extraction\nfrom n-best alignments. ICICIC , pages 410–414.312",
"main_paper_content": null
}
|
{
"decision": "Unknown",
"reviews": []
}
| 0 | 0 |
[] |
[] |
{
"id": "_953yQAAsEw",
"year": null,
"venue": "EAMT 2008",
"pdf_link": "https://aclanthology.org/2008.eamt-1.23.pdf",
"forum_link": "https://openreview.net/forum?id=_953yQAAsEw",
"arxiv_id": null,
"doi": null
}
|
{
"title": "Learning context-sensitive synchronous rules",
"authors": [
"Anders Søgaard"
],
"abstract": "Anders Søgaard. Proceedings of the 12th Annual conference of the European Association for Machine Translation. 2008.",
"keywords": [],
"raw_extracted_content": "Learning context-sensitive synchronous rules⋆\nAnd\ners Søgaard\nDpt. of Linguistics\nUniversity of Potsdam\[email protected]\nAbstract. Context-sensitive alignments are shown to be frequent in\nhand-aligned parallel corpora, e.g. in 24%–85% of the sentence pairs in\nthe corpora documented in [GPCC08]. A O(|G|n6) time strict extension\nof inversion transduction grammars (ITGs) [Wu97] called (2,2)-BRCGs\nis proposed in [Søg08] that induces such alignments. The increase in\ngenerative capacity comes from the ability to copy strings in derivations,\nwhich means that the (i) intersection of two translations and (ii) the\nunion of two alignment structures are easily defined. The problem for\nreal-life applications is how to induce the grammars from available re-\nsources; in particular, how to learn when copying is needed. This paper\npresents a quadratic time algorithm that reduces the problem of how\nto induce (2,2)-BRCGs from m:n-alignments to the same problem\nfor ITGs by unravelling alignment structures. The algorithm was run\non a parallel corpus in the Copenhagen Dependency Treebank [BK07]\n(Danish–English); the ratio of new alignment structures over the number\nof sentence pairs was 38.08%. For the ones in [GPCC08], the size of the\ncorpora increased by a factor of 1.74–2.0.\n1 Introduction\nConsider the simple example of a translation from English into Danish below:\n1. There was a discussion between two women.\n2. Der fandt en diskussion sted mellem to kvinder.\nThe discontinuous constituent in Danish, fandt sted (lit. ’found place’), is\nfully idiomatic and therefore necessarily a translation unit. The non-contiguous\nnoun-preposition pairs, discussion between anddiskussion mellem , are perhaps\nnot idiomatic, but conventionalized and idiosyncratic in the sense that the in-\nformation that the nouns select the prepositions in question must be stored in\ntheir lexical entries. It is thus best to treat them as indivisible translation units.\nThe most plausible alignment in this case is thus as follows:\nTh. was a disc. btw. tw. w.\nDer fandt en disk. sted ml. to kv.\n⋆This work was supported by the German Research Foundation in the Emmy Noether\nproject Ptolemaios on grammar learning from parallel corpora.\n12th EAMT conference, 22-23 September 2008, Hamburg, Germany\n170\nIt is not difficult to see that this alignment cannot be induced by m any\nformalisms for syntax-based machine translation, incl. inversion transduction\ngrammar (ITG) [Wu97], synchronous context-free grammar (SCFG) [Chi07] and\nsynchronous tree substitution grammar (STSG) [Eis03]. The important part of\nthe alignment is this:\naiaj\nbkblbmbn\nThe structure is called a cro ss-serial discontinuous translation unit (cross-\nserial DTU) below. The inability of the above theories to induce cross-serial\nDTUs follows from the observation that if bkandbmin the above are generated\nor recognized simultaneously in any of the above theories, blandbncannot be\ngenerated or recognized simulaneously. This is a straight-forward consequence of\nthe context-freeness of the component grammars. Context-sensitivity does not,\non the other hand, imply the ability to induce cross-serial DTUs. In synchronous\ntree-adjoining grammar (STAG) [SS90], for instance, the adjunction operation\nallows us to induce DTUs, but not cross-serial ones.\nCross-serial DTUs are frequent in hand-aligned parallel corpora, e.g. 
the\nratios of cross-serial DTUs over sentences modulo translation units in the cor-\npora documented in [GPCC08] are 24%–85%. The lowest ratio was for Spanish–\nFrench, the highest for English–Portuguese. The numbers are summarized in\nFigure 1.\nSnt. TUs DTUs DTUs/Snt. CDTU-ms CDTU-ms/Snt.\nEnglish–French: 100 937 95 95% 38 38%\nEng\nlish-Portuguese: 100 941 100 100% 85 85%\nEng\nlish–Spanish: 100 950 90 90% 50 50%\nPor\ntuguese–French: 100 915 77 77% 27 27%\nPor\ntuguese–Spanish: 100 991 80 80% 55 55%\nSpa\nnish–French 100 975 74 74% 24 24%\nFig\n.1.Frequency of cross-serial DTUs in the hand-aligned parallel corpora docu-\nmented in [GPCC08].\n[Søg08] introduces a O(|G|n6) strict extension of ITGs called binary two-\nvariable bottom-up non-erasing range concatenation grammars ((2 ,2)-BRCGs).\nIn (2,2)-BRCGs the above pair of input strings would be copied and parsed\ntwice; the alignment of each DTU is then induced by the parse of a separate\ncopy. Here’s an example of a clause that induces an alignment of one of the\nDTUs, but leaves the other nodes unaligned (see below for a formal definition\nof (2,2)-BRCGs):\nVP(wasX1X2,fandtX1stedX2)→NP(X1,X2)PP(X1,X2)\n12th EAMT conference, 22-23 September 2008, Hamburg, Germany\n171\nIntuitively, wast ranslates into fandt sted , but in Danish an object NP with\nits PP argument postponed may intervene between the two words.\nThe problem for real-life application of course is how to induce such grammars\nfrom available resources; in particular, how to learn when copying is needed. This\npaper presents a quadratic time algorithm that reduces the problem of how to\ninduce (2,2)-BRCGs from m:n-alignments to the same problem for ITGs. In\nparticular, it is linear in the size of the alignment structure and thereby quadratic\nin the length of the sentence pair.\n2 (2,2)-BRCGs\nThe introduction to (2,2)-BRCGs is very brief, but lengthier introductions and\nmore examples can be found in [Bou98] and [Søg08].\nDefinition 1 ((2,2)-RCGs). (2,2)-RCGs are 5-tuples G=/an}bracketle{tN,T,V,P,S /an}bracketri}ht.N\nis a finite set of predicate names with an arity function ρ:N→ {1,2},TandV\nare finite sets of terminal and non-terminal symbols. Pis a finite set of clauses of\nthe formψ0→φ,φ=ψ1...ψ m, where 0≤m≤2and each of the ψi,0≤i≤m,\nis a predicate of the form A(α1,...,α ρ(A)). Eachαj∈(T∪V)∗,1≤j≤ρ(A),\nis an argument. S∈Nis the start predicate name with ρ(S) = 2.\nA (2,2)-RCG is said to be bottom-up non-erasing , i.e. a (2,2)-BRCG, if and\nonly if for all clauses c∈Pall variables that occur in the RHS of calso occur\nin its LHS of c.\nThe language of a (2,2)-BRCG is based on the notion of range. For a string\npairw1...w n,vn+2...v n+ma range is a pair of indices /an}bracketle{ti,j/an}bracketri}htwith 0 ≤i≤j≤n\norn<i ≤j≤n+m, i.e. a string span, which denotes a substring wi+1...w j\nin the source string or a substring vi+1...v jin the target string. Only conse-\nqutive ranges can be concatenated into new ranges. Terminals, variables and\narguments in a clause are bound to ranges by a substitution mechanism. An\ninstantiated clause is a clause in which variables and arguments are consis-\ntently replaced by ranges; its components are instantiated predicates . For ex-\nampleA(/an}bracketle{tg...h /an}bracketri}ht,/an}bracketle{ti...j/an}bracketri}ht)→B(/an}bracketle{tg...h /an}bracketri}ht,/an}bracketle{ti+ 1...j−1/an}bracketri}ht) is an instantiation of\nthe clauseA(X1,aY1b)→B(X1,Y1) if the target string is such that vi+1=aand\nvj=b. 
Aderive relation = ⇒is defined on strings of instantiated predicates. In an\ninstantiated predicate is the LHS of some instantiated clause, it can be replaced\nby the RHS of that instantiated clause. The language of a (2,2)-BRCG G=\n/an}bracketle{tN,T,V,P,S /an}bracketri}htis the setL(G) ={/an}bracketle{tw1...w n,v1...v m/an}bracketri}ht |S(/an}bracketle{t0,n/an}bracketri}ht,/an}bracketle{t0,m/an}bracketri}ht)∗=⇒ǫ}.\nIn other words, an input string pair /an}bracketle{tw1...w n,v1...v m/an}bracketri}htis recognized if and\nonly if the empty string can be derived from S(/an}bracketle{t0,n/an}bracketri}ht,/an}bracketle{t0,m/an}bracketri}ht).\nExample 1 The grammar G=/an}bracketle{tN,T,V,P,S /an}bracketri}htwith the clauses Pbelow induces\nthe alignment structure discussed in the introduction. The initial substrings, there\nand der, and the final substrings, two women and to kvinder, are ignored for\nbrevity, since they translate directly into each other.\n12th EAMT conference, 22-23 September 2008, Hamburg, Germany\n172\n(1) A0(X1,Y1)→A1(X1,Y1)A2(X1, Y1)\n(2) A1(wasX1,fandtY1stedY2)→NPP(X1)NP(Y1)P(Y2)\n(3)A2(X1a disc.btw.,Y1en disk. Y2ml. Y2)→V(X1)V(Y1)Prt(Y2)\n(4) NPP(a disc.btw.)→ǫ\n(5) NP(en disk.)→ǫ\n(6) P(ml.)→ǫ\n(7) V(was)→ǫ\n(8) V(fandt)→ǫ\n(9) Prt(sted)→ǫ\nA possible derivation of /an}bracketle{twas a discussion between, fandt en diskussion sted\nmellem /an}bracketri}htis:\nA0(/an}bracketle{t0,4/an}bracketri}ht,/an}bracketle{t0,5/an}bracketri}ht)\n=⇒A1(/an}bracketle{t0,4/an}bracketri}ht,/an}bracketle{t0,5/an}bracketri}ht)A2(/an}bracketle{t0,4/an}bracketri}ht,/an}bracketle{t0,5/an}bracketri}ht) by (1)\n=⇒NPP(/an}bracketle{t1,4/an}bracketri}ht)NP(/an}bracketle{t1,3/an}bracketri}ht)P(/an}bracketle{t4,5/an}bracketri}ht)A2(/an}bracketle{t0,4/an}bracketri}ht,/an}bracketle{t0,5/an}bracketri}ht)by (2)\n=⇒A2(/an}bracketle{t0,4/an}bracketri}ht,/an}bracketle{t0,5/an}bracketri}ht) by (4–6)\n=⇒V(/an}bracketle{t0,1/an}bracketri}ht)V(/an}bracketle{t0,1/an}bracketri}ht)Prt(/an}bracketle{t3,4/an}bracketri}ht) by (3)\n=⇒ǫ by (7–9)\nNote, however, that what buys us the extra expressivity is clauses of the\nform:\nA0(X1,Y1)→A1(X1,Y1)A2(X1,Y1)\nClauses of this form allows us to take the intersection of two arbitrary transla-\ntions recognized by (2,2)-BRCGs. Since there is a simple translation from ITGs\ninto (2,2)-BRCGs, this means that (2,2)-BRCG recognizes the intersective clo-\nsure of translations recognized by ITGs, incl., for instance, {/an}bracketle{tanbmcndm,ancnbm\ndm/an}bracketri}ht |m,n≥0}.\n3 Unraveling alignments with DTUs\nThe following algorithm reduces the induction problem of (2,2)-BRCG to the\nsame problem for ITGs by unraveling the relevant subgraphs. Say Ais an align-\nment structure, and CoAligned (A) is the set of tuples of the ordered sequences\nof integers /an}bracketle{ti...j,k...l /an}bracketri}htsuch that the words in the source string at positions\ni...j and the words in the target string at positions k...l form a translation\nunit. Inside-out alignments [Wu97] are ignored, since the task is only to reduce\nthe induction problem to that for ITGs, but they are easily handled too. Simply\nadd a subprocedure insideout that removes a translation unit to a new alignment\nstructure if it is the left-most source string element in an inside-out alignment.\nCostly search is avoided if, as e.g. in the Copenhagen Dependency Treebank, A\nis read as an ordered sequence of the elements of CoAligned (A), ordered by the\nfirst elements in the sequences in the first arguments of the tuples. 
Otherwise, the overall runtime will turn quadratic in the size of the alignment structure, i.e. cubic in the length of the sentence pair.

function unravel(A):
    for α ∈ A
        if contiguous(α) returns false
            move α to A′
    print A′
    print A
end function

function contiguous(α):
    if α = ⟨. . . i (i + j) . . . , . . .⟩, j > 1
        return false
    elsif α = ⟨. . . , . . . i (i + j) . . .⟩, j > 1
        return false
    else return true
end function

The procedure only visits each translation unit once. Consequently, the overall runtime remains quadratic in the length of the sentence pair and linear in the size of the alignment structure. Say an induction algorithm for ITGs runs in time O(n^k). It now follows that there is an extension of this algorithm for (2,2)-BRCGs that runs in time O(n^2 + n^(k+1)), which for all k ≥ 1 equals O(n^(k+1)).

Say, for instance, we have the following alignment structure:

[Diagram: source positions 1–5 aligned to target positions 11–14 by the translation units τ1–τ4.]

Our algorithm reads the alignment as an ordered sequence of translation units: ⟨⟨1, 11 13⟩, ⟨2 4, 12⟩, ⟨3, 13⟩, ⟨5, 14⟩⟩. It then unravels the first two translation units, ⟨1, 11 13⟩ and ⟨2 4, 12⟩. The translation units ⟨3, 13⟩ and ⟨5, 14⟩ stay in the original structure, which is now reduced to:

[Diagram: the reduced alignment structure, containing only the translation units ⟨3, 13⟩ and ⟨5, 14⟩.]

The three new alignment structures can all be induced by ITGs.

The unravelling algorithm was applied to the Danish–English parallel corpus in the Copenhagen Dependency Treebank [BK07]. The texts are from the Parole corpora. The corpus is hand-aligned and contains 4,729 sentence pairs with a total of 110,511 translation units. Our unravelling algorithm produced 1801 new alignment structures. This number reflects that 1.63% of the translation units were DTUs.

Our unravelling algorithm was also run on the hand-aligned parallel corpora documented in [GPCC08], i.e. the first 100 sentences of the Europarl corpus for six different language pairs. The size of the corpora increased by a factor of 1.74–2.0. See Figure 1 for details.

4 Conclusion and future work

This paper provides empirical motivation for context-sensitive synchronous rules. The main obstacle for real-life machine translation applications is how to induce context-sensitive grammars from available resources. This paper describes a linear-time algorithm that reduces the induction problem for (2,2)-BRCGs to the induction problem for ITGs.

An alignment and translation system based on (2,2)-BRCGs is currently being implemented at the University of Potsdam. It assigns (2,2)-BRCG derivations to all aligned sentence pairs in a parallel corpus and estimates a probabilistic grammar from the derivations. It introduces copying clauses for all alignment structures that are unravelled and uses them to induce complex alignment structures.

References

[BK07] Matthias Buch-Kromann. Computing translation units and quantifying parallelism in parallel dependency treebanks. In Proceedings of the 45th Annual Meeting of the Association of Computational Linguistics, the Linguistic Annotation Workshop, pages 69–76, 2007.
[Bou98] Pierre Boullier.
Proposal for a natural language processing syntactic backbone. Technical report, INRIA, Le Chesnay, France, 1998.
[Chi07] David Chiang. Hierarchical phrase-based translation. Computational Linguistics, 33(2):201–228, 2007.
[Eis03] Jason Eisner. Learning non-isomorphic tree mappings for machine translation. In Proceedings of the 41st Annual Meeting of the Association for Computational Linguistics, pages 205–208, Sapporo, Japan, 2003.
[GPCC08] Joao Graca, Joana Pardal, Luísa Coheur, and Diamantino Caseiro. Building a golden collection of parallel multi-language word alignments. In Proceedings of the 6th International Conference on Language Resources and Evaluation, Marrakech, Morocco, 2008.
[Søg08] Anders Søgaard. Range concatenation grammars for translation. In Proceedings of the 22nd International Conference on Computational Linguistics, Manchester, England, 2008. To appear.
[SS90] Stuart Shieber and Yves Schabes. Synchronous tree-adjoining grammars. In Proceedings of the 13th Conference on Computational Linguistics, pages 253–258, Helsinki, Finland, 1990.
[Wu97] Dekai Wu. Stochastic inversion transduction grammars and bilingual parsing of parallel corpora. Computational Linguistics, 23(3):377–403, 1997.",
"main_paper_content": null
}
|
{
"decision": "Unknown",
"reviews": []
}
| 0 | 0 |
[] |
[] |
{
"id": "4D3kf6FFjKT",
"year": null,
"venue": "EAMT 2010",
"pdf_link": "https://aclanthology.org/2010.eamt-1.5.pdf",
"forum_link": "https://openreview.net/forum?id=4D3kf6FFjKT",
"arxiv_id": null,
"doi": null
}
|
{
"title": "Can inversion transduction grammars generate hand alignments",
"authors": [
"Anders Søgaard"
],
"abstract": "Anders Søgaard. Proceedings of the 14th Annual conference of the European Association for Machine Translation. 2010.",
"keywords": [],
"raw_extracted_content": "Can inversion transduction grammars generate hand alignments?\nAnders Søgaard\nUniversity of Copenhagen\nNjalsgade 140–2\nDK-2300 Copenhagen\[email protected]\nAbstract\nThe adequacy of inversion transduction\ngrammars (ITGs) has been widely de-\nbated, and the discussion’s crux seems to\nbe whether the search space is inclusive\nenough (Zens and Ney, 2003; Wellington\net al., 2006; Søgaard and Wu, 2009). Parse\nfailure rate when parses are constrained\nby word alignments is one metric that has\nbeen used, but no one has studied parse\nfailure rates of the full class of ITGs on\nrepresentative hand aligned corpora. It has\nalso been noted that ITGs in Chomsky nor-\nmal form induce strictly less alignments\nthan ITGs (Søgaard and Wu, 2009). This\nstudy is the first study that directly com-\npares parse failure rates for this subclass\nand the full class of ITGs.\n1 Introduction\nThe adequacy of grammar-based machine transla-\ntion formalisms is sometimes empirically evalu-\nated by running all-accepting grammars on large\namounts of automatically aligned text (Zens and\nNey, 2003). What is studied is called alignment\ncapacity (Søgaard and Wu, 2009) or translation\nequivalence modeling (Zhang et al., 2008), i.e. a\nformalism’s ability to generate observed align-\nments or translation equivalences; and the study\nis closely related to the study of translation model\nsearch spaces (Zens and Ney, 2003; Dreyer et al.,\n2007). All-acccepting grammars are simply gram-\nmars that contain all possible rules that can be ex-\npressed in a formalism. A grammar generates an\naligned sentence pair if it can generate the two sen-\ntences in such a way that all aligned words are gen-\nc\r2010 European Association for Machine Translation.erated simultaneously (Wu, 1997). What is studied\nis thus: Can an all-accepting grammar generate the\naligned sentence pairs observed in a text? The met-\nric in these studies is parse failure rate (PFR) or\nits inverse, i.e. the number of sentence pairs that\ncan be generated over the total number of sentence\npairs.\nMost alignment capacity studies use automati-\ncally aligned text, since hand-aligned text is hard to\ncome by. Recently three important data sets have\nbeen released (Pad ´o and Lapata, 2006; Graca et\nal., 2008; Buch-Kromann et al., 2010). Our ex-\nperiments include 12 hand-aligned parallel texts of\nvarying size.\nOur main contribution is evaluating the empir-\nical adequacy of inversion transduction grammars\n(ITGs) (Wu, 1997), a popular grammar-based ma-\nchine translation formalism, on these data sets.\nIt has been noted that the alignment capacity of\nthe full class of ITGs extends that of the class of\nITGs that are in Chomsky normal form (NF-ITGs)\n(Søgaard and Wu, 2009), i.e. while the normal\nform isa normal form in the sense that it does not\nalter the generative capacity in terms of sentence\npairs, the normal form restrictions doexclude cer-\ntain alignment configurations. Consequently, we\ncompare the adequacy of both ITGs and NF-ITGs.\nIt is shown that while ITGs are more adequate\nthan local reordering models (and in many cases\nalso to IBM models; cf. Zens and Ney (2003) and\nDreyer et al. (2007)), hand alignments are very\nhard to generate. While 1-PFR is >60% for most\ndata sets, ITGs and NF-ITGs fit four of our data\nsets rather poorly: 1-PFR is less than 50% for three\nof the data sets involving Danish, and for English-\nGerman.\nSect. 2 briefly summarizes related work. Sect. 
3\nintroduces a novel algorithm for simulating an all-\n[EAMT May 2010 St Raphael, France]\naccepting grammar. Finally, Sect. 4 presents our\nexperiments.\n2 Related work\nUnlike other studies that have studied the ade-\nquacy of ITG’s alignment capacity, Wellington et\nal. (2006) used hand-aligned data in their studies.\nHand alignments contain fewer errors than auto-\nmatic alignments, are supposed to reflect transla-\ntional equivalence more closely and will remain\nrelevant regardless of improvements in technol-\nogy for automatic word alignments. All the par-\nallel data used in the experiments of Wellington\net al. (2006) have English as one of the two lan-\nguages. This of course biases their study a bit. On\nthe other hand, translations to or from English are\neasier to come by, and more hand-aligned texts are\navailable.\nOur experiments include a total of 12 data sets,\nout of which five are translations to or from En-\nglish. Wellington et al. (2006) use a total of five\ndata sets, one of which is also used in our exper-\niments below (Canadian Hansard). The data sets\nin their study are of about the same size as those\nused in ours. They consider a total of 1427 sen-\ntence pairs, whereas we consider a total of 2852\nsentence pairs.\nThe methodology of Wellington et al. (2006)\nalso differs from ours in one important respect,\nnamely how incomplete coverage of multiword\ntranslation units is counted. The authors count\nmultiword translation units in what they refer to as\nadisjunctive manner, i.e. if at least one link in ev-\nery unit is generated, the alignment configuration\nis counted in as having been generated. So for in-\nstance if all words in a source sentence are aligned\nto all words in the target sentence, it only takes\nproducing a single link to “generate” the alignment\nconfiguration.\nIn our experiments, we count coverage of trans-\nlation units in a “conjunctive” manner, i.e. all\nlinks in every translation unit must be gener-\nated before the overall alignment configuration can\nbe said to have been generated. See Søgaard\nand Kuhn (2009) for a number of arguments for\nmeasuring alignment capacity in terms of exact\nmatches of translation units.\nSøgaard and Wu (2009) show that ITG and NF-\nITG generate different classes of alignment config-\nurations. This is in a way surprising, since the two\nformalisms are equivalent in terms of generativecapacity. The reason for the apparent paradox is of\ncourse that ITGs only align words that are simulta-\nneously generated (Wu, 1997). Consequently, two\nITG derivations of the same sentence pair may in-\nduce different alignment configurations. Zens and\nNey (2003) and Wellington et al. (2006) introduce\nweaker normal form conditions in their studies.\nSøgaard and Wu (2009) consider some of the\nsame data sets used in our experiments, but their\napproach is very different. They identify align-\nment configurations that cannot be generated by\nITGs or NF-ITGs, e.g. inside-out alignments or\ndiscontinuous translation units, and simply count\ntheir occurrences in the parallel corpora. This has\nthe advantage that they provide some error anal-\nysis on the fly, e.g. they can immediately see the\nspecific impact of inside-out alignments on error\nrates. On the other hand, the lower bounds that\nthey provide on PFRs, are very conservative lower\nbounds. 
Our more aggressive search shows that the lower bounds on PFRs that they induce can be increased by 15–25%.

3 Alignment validation

Algorithms that compose constituents out of word-to-word links and try to find constituents that cover entire sentence pairs have been used in similar studies (Wu, 1997; Zens and Ney, 2003; Wellington et al., 2006). This process seems to have no established name in the literature, but we refer to it as alignment validation, i.e. checking if an alignment is valid wrt. a formalism in the sense that it can be generated by an all-accepting grammar. Since we measure coverage in a “conjunctive” manner, alignment validation is a bit more complicated than in related work. Our input constituents are possibly discontinuous translation units.

The following alignment validation algorithm (which can no doubt be optimized) was used in our experiments. The subprocedure in Figure 1, which is called by the overall procedure described below, takes as input two parse charts, i.e. two matrices m and m′ with i < j for all i ∈ m and j ∈ m[i] (resp. m′), a derivation step counter c, a string position p and a variable x with values {0, 1}, and controls a chart-based parsing algorithm. The subprocedure complete simply checks if there is a constituent that covers the entire span on both sides. Note that since there is no normal form assumption our parsing algorithm has to scan the chart twice before it can return a failure (lines 21–24). The subprocedure check rule is left out for brevity. It checks that the application of the rule adding two new constituents ⟨i, j⟩ and ⟨i′, j′⟩ to the charts is possible and that it does not violate the alignment configuration, i.e. that the charts do not contain unvalidated links in these spans. Our naïve implementation of this procedure has asymptotic complexity O(n^8), since it needs to search for the maximal covered spans on both sides.

The subprocedure is embedded in the overall algorithm in Figure 2, which outputs the number of parsed sentence pairs, i.e. the number of sentence pairs whose alignment configuration can be generated.

The Boolean variable nf is set to true if normal form conditions are imposed, i.e. if translation units must be continuous. The call in line 5 adds all continuous translation units.

4 Experiments

This section describes our experiments, incl. the data sets used, the metric used in evaluation and the results obtained on the data sets.

4.1 Data Sets

The characteristics of the hand-aligned parallel texts used are presented in Figure 3.

The parallel texts that involve Danish are part of the Copenhagen Dependency Treebank (Buch-Kromann et al., 2010), based on translations of the balanced Parole corpus; English-German is from Padó and Lapata (2006) (Europarl); and the six combinations of English, French, Portuguese and Spanish are documented in Graca et al. (2008) (Europarl). We use the 200-sentence standard training section of the Canadian Hansard data set for supervised word alignment.

4.2 Alignment reachability

Similarly to Zens and Ney (2003), we use the inverse of PFR as metric. We refer to this below as alignment reachability, in analogy to translation reachability. Each experiment thus applies the above algorithm to a set of hand-aligned sentence pairs.
The algorithm either reaches an alignment,\nwhich means that the alignment canbe generated,\nor it does not, which mean that the alignment is\nbeyond the expressive power of the formalism in\nquestion. Parse failure rate is the number of fail-\nures over the total number of hand-aligned sen-\ntence pairs, whereas alignment reachability is thenumber of reached alignments over the total num-\nber of hand-aligned sentence pairs.\n4.3 Results\nWe compare our upper bounds on alignment reach-\nability for ITG and NF-ITG to the configuration-\nbased upper bounds obtained in Søgaard and\nWu (2009). We also introduce a simpler base-\nline system, namely ITG without inverse produc-\ntion rules. Such a system is a generalization of lo-\ncal reordering models (LR) such as MJ-1 and MJ-\n2 (Kumar and Byrne, 2005) whose expressivity is\nstudied by Dreyer et al. (2007).\nOur results are presented in Figure 4. Our base-\nline generates a superset of the alignments that can\nbe generated by MJ-1 and MJ-2. For the full class\nof ITGs the error rate is on average increased by\nmore than 25% compared to the bounds presented\nin Søgaard and Wu (2009). For normal form gram-\nmars, the increase is about 15%.\nA very interesting observation is that the differ-\nence in coverage between ITG and NF-ITG mea-\nsured in terms of true PFRs is much smaller than\nwhen estimated in a configuration-based manner.\nWhile the results in Søgaard and Wu (2009) indi-\ncate that the normal form proposed in Wu (1997)\ndrastically reduces the empirical adequacy of ITGs\nwhen evaluated on hand alignments, i.e. by more\nthan 10% on average over the data sets in Graca et\nal. (2008), our results show that decrease in cover-\nage is moderate ( <1.5%).\nIt is also clear from our results that only using\nthe Canadian Hansard in this type of studies leads\nto a significant bias. This data set does contain\ncomplex alignment configurations for which rules\nwith up to five nonterminals and 18 terminals in the\nright-hand side are required (Zhang et al., 2008),\nbut they are relatively infrequent.\nFinally our results seem to suggest that hand-\nalignments are not much more complex than\nautomatically generated alignments. Zens and\nNey (2003) estimate alignment reachability for\nGIZA-aligned Canadian Hansard data and report\ncomparable coverage. In one direction, NF-ITGs\ncover 81.3% of the alignments; in the other di-\nrection, they cover 73.6. The average is close to\nour 76.98% alignment reachability on the manu-\nally constructed alignments. On the other hand,\nthey report much higher scores than we do for\nsomething that remains a subset of the full class\nof ITGs. 
Interestingly, they show that coverage is also considerably better than that of the IBM models.

1:  begin chart parse(m, m′, c, p, x):
2:    c++
3:    if complete(m, m′) then
4:      return true
5:    else
6:      rule applied ← false
7:      for i = p to length(m) + 1 do
8:        for j ∈ m[i] do
9:          for i′ = 1 to length(m′) + 1 do
10:           for j′ ∈ m[i′] do
11:             if check rule(m, m′, i, j, i′, j′) then
12:               m[i][j] += [c]
13:               m[i′][j′] += [c]
14:               rule applied ← true
15:               return chart parse(m, m′, c, p, 1)
16:             end if
17:           end for
18:         end for
19:       end for
20:     end for
21:     if (not rule applied) and x = 1 then
22:       return chart parse(m, m′, c, 1, 0)
23:     else if (not rule applied) and x = 0 then
24:       return false
25:     end if
26:   end if

Figure 1: Subprocedure in our alignment validation algorithm.

1:  for ⟨s, s′⟩ ∈ T do
2:    m ← matrix(s)
3:    m′ ← matrix(s′)
4:    if (not nf) or continuous(⟨s, s′⟩) then
5:      ⟨m, m′, c⟩ ← add continuous(m, m′, s, s′, 0)
6:      if chart parse(m, m′, c, 1, 1) then
7:        parsed++
8:      end if
9:    end if
10: end for
11:
12: print parsed

Figure 2: Our alignment validation algorithm.

              Sentences   Links
Da-De              266    1314
Da-It               26    1386
Da-Ru               33     833
Da-Sp              966    8944
En-Fr              100    1279
En-Ge              987   23243
En-Po              100    1198
En-Sp              100    1198
Po-Fr              100    1290
Po-Sp              100    1189
Sp-Fr              100    1303
Total             2852   43086

Figure 3: Characteristics of the data sets used in our experiments.

              NF-ITG   SW09(NF)     ITG    SW09     LR
En-Fr          65.00      78.00   68.00   94.00  32.00
En-Po          65.00      81.00   67.00   95.00  25.00
En-Sp          73.00      85.00   74.00   93.00  30.00
Po-Fr          63.00      76.00   63.00   91.00  44.00
Po-Sp          80.00      92.00   81.00   99.00  53.00
Sp-Fr          68.00      77.00   68.00   93.00  51.00
AV             69.00      81.50   70.17   94.17      -
Da-De(25)      47.62          -   49.35       -      -
Da-It(25)      60.00          -   60.00       -      -
Da-Ru(25)      47.05          -   47.05       -      -
Da-Sp(25)      30.68     *59.50   35.54  *89.63      -
En-Ge(15)      38.97     *30.70   45.13  *52.68      -
Hansard(15)    76.98          -   81.75       -      -

Figure 4: Alignment reachability scores for NF-ITG, ITG and (an upper bound on) local reordering models, compared to results in Søgaard and Wu (2009). Sentence length cut-off given in parentheses. * means that results are incomparable to those in Søgaard and Wu (2009), because different cut-offs were used.

5 Conclusion

The status of alignments in machine translation is widely debated (Fraser and Marcu, 2007), but so are other metrics such as BLEU. Alignment reachability is related to BLEU oracle computation (Dreyer et al., 2007), but in a very indirect way. Our studies nevertheless show that there are translational equivalences that are very hard to capture in computational search spaces. While the ITG translation search space seems better fit than many other models, as indicated by our experiment as well as experiments cited above, there are still many alignment configurations that are beyond reach. Capturing those configurations may not be necessary from a practical point of view, but it may nevertheless be worth considering other ways of balancing expressivity and efficiency.

References

Matthias Buch-Kromann, Jürgen Wedekind, and Jakob Elming. 2010. The Copenhagen Danish–English Dependency Treebank. To appear.
Markus Dreyer, Keith Hall, and Sanjeev Khudanpur. 2007. Comparing reordering constraints for SMT using efficient BLEU oracle computation. In The 1st Workshop on Syntax and Structure in Statistical Translation, North American Chapter of the Association for Computational Linguistics - Human Language Technologies (NAACL-HLT) 2007, New York, NY.
Alexander Fraser and Daniel Marcu. 2007. Measuring word alignment quality for statistical machine translation.
Computational Linguistics , 33(3):293–303.\nJoao Graca, Joana Pardal, Lu ´ısa Coheur, and Dia-\nmantino Caseiro. 2008. Building a golden collec-\ntion of parallel multi-language word alignments. In\nLREC’08 , Marrakech, Morocco.\nShankar Kumar and William Byrne. 2005. Lo-\ncal phrase reordering models for statistical machine\ntranslation. In HLT-EMNLP , Vancouver, Canada.\nSebastian Pad ´o and Mirella Lapata. 2006. Optimal\nconstituent alignment with edge covers for semantic\nprojection. In ACL-COLING’06 , Sydney, Australia.\nAnders Søgaard and Jonas Kuhn. 2009. Empirical\nlower bounds on alignment error rates in syntax-\nbased machine translation. In NAACL-HLT’09,\nSSST-3 , Boulder, CO.Anders Søgaard and Dekai Wu. 2009. Empirical lower\nbounds on alignment error rates for the full class of\ninversion transduction grammars. In Proceedings of\nthe 11th International Conference on Parsing Tech-\nnologies , Paris, France.\nBenjamin Wellington, Sonjia Waxmonsky, and Dan\nMelamed. 2006. Empirical lower bounds on the\ncomplexity of translational equivalence. In ACL’06 ,\npages 977–984, Sydney, Australia.\nDekai Wu. 1997. Stochastic inversion transduction\ngrammars and bilingual parsing of parallel corpora.\nComputational Linguistics , 23(3):377–403.\nRichard Zens and Hermann Ney. 2003. A comparative\nstudy on reordering constraints in statistical machine\ntranslation. In ACL’03 , Sapporo, Japan.\nHao Zhang, Daniel Gildea, and David Chiang. 2008.\nExtracting synchronous grammar rules from word-\nlevel alignments in linear time. In Coling , Manch-\nester, England.",
"main_paper_content": null
}
|
{
"decision": "Unknown",
"reviews": []
}
| 0 | 0 |
[] |
[] |
{
"id": "i7i_zgRVuV_",
"year": null,
"venue": "EAMT 2020",
"pdf_link": "https://aclanthology.org/2020.eamt-1.20.pdf",
"forum_link": "https://openreview.net/forum?id=i7i_zgRVuV_",
"arxiv_id": null,
"doi": null
}
|
{
"title": "Reassessing Claims of Human Parity and Super-Human Performance in Machine Translation at WMT 2019",
"authors": [
"Antonio Toral"
],
"abstract": "Antonio Toral. Proceedings of the 22nd Annual Conference of the European Association for Machine Translation. 2020.",
"keywords": [],
"raw_extracted_content": "Reassessing Claims of Human Parity and Super-Human Performance in\nMachine Translation at WMT 2019\nAntonio Toral\nCenter for Language and Cognition\nUniversity of Groningen\nThe Netherlands\[email protected]\nAbstract\nWe reassess the claims of human parity\nand super-human performance made at the\nnews shared task of WMT 2019 for three\ntranslation directions: English !German,\nEnglish!Russian and German !English.\nFirst we identify three potential issues in\nthe human evaluation of that shared task:\n(i) the limited amount of intersentential\ncontext available, (ii) the limited transla-\ntion proficiency of the evaluators and (iii)\nthe use of a reference translation. We then\nconduct a modified evaluation taking these\nissues into account. Our results indicate\nthat all the claims of human parity and\nsuper-human performance made at WMT\n2019 should be refuted, except the claim\nof human parity for English !German.\nBased on our findings, we put forward a set\nof recommendations and open questions\nfor future assessments of human parity in\nmachine translation.\n1 Introduction\nThe quality of the translations produced by ma-\nchine translation (MT) systems has improved con-\nsiderably since the adoption of architectures based\non neural networks (Bentivogli et al., 2016). To\nthe extent that, in the last two years, there have\nbeen claims of MT systems reaching human parity\nand even super-human performance (Hassan et al.,\n2018; Bojar et al., 2018; Barrault et al., 2019). Fol-\nlowing Hassan et al. (2018), we consider that hu-\nman parity is achieved for a given task tif the per-\nformance attained by a computer on tis equivalent\nc\r2020 The authors. This article is licensed under a Creative\nCommons 3.0 licence, no derivative works, attribution, CC-\nBY-ND.to that of a human, i.e. there is no significant dif-\nference between the performance obtained by hu-\nman and by machine. Super-human performance\nis achieved for tif the performance achieved by a\ncomputer is significantly better than that of a hu-\nman.\nTwo claims of human parity in MT were re-\nported in 2018. One by Microsoft, on news trans-\nlation for Chinese!English (Hassan et al., 2018),\nand another at the news translation task of WMT\nfor English!Czech (Bojar et al., 2018), in which\nMT systems Uedin (Haddow et al., 2018) and\nCuni-Transformer (Kocmi et al., 2018) reached\nhuman parity and super-human performance, re-\nspectively. In 2019 there were additional claims\nat the news translation task of WMT (Barrault et\nal., 2019): human parity for German !English,\nby several of the submitted systems, and for\nEnglish!Russian, by system Facebook-FAIR (Ng\net al., 2019), as well as super-human performance\nfor English!German, again by Facebook-FAIR.\nThe claims of human parity and super-human\nperformance in MT made in 2018 (Hassan et al.,\n2018; Bojar et al., 2018) have been since refuted\ngiven three issues in their evaluation setups (L ¨aubli\net al., 2018; Toral et al., 2018): (i) part of the\nsource text of the test set was not original text\nbut translationese, (ii) the sentences were evalu-\nated in isolation, and (iii) the evaluation was not\nconducted by translators. 
However, the evaluation\nsetup of WMT 2019 was modified to address some\nof these issues: the first issue (translationese) was\nfully addressed, while the second (sentences eval-\nuated in isolation) was partially addressed, as we\nwill motivate in Section 2.1, whereas the third (hu-\nman evaluation conducted by non-translators) was\nnot acted upon. Given that some of the issues that\nled to refute the claims of human parity in MT\nmade in 2018 have been addressed in the set-up\nof the experiments leading to the claims made in\n2019, but that some of the issues still remain, we\nreassess these later claims.\nThe remainder of this paper is organised as fol-\nlows. Section 2 discusses the potential issues in\nthe setup of the human evaluation at WMT 2019.\nNext, in Section 3 we conduct a modified evalua-\ntion of the MT systems that reached human parity\nor super-human performance at WMT 2019. Fi-\nnally, Section 4 presents our conclusions and rec-\nommendations.\n2 Potential Issues in the Human\nEvaluation of WMT 2019\nThis section discusses the potential issues that we\nhave identified in the human evaluation of the news\ntranslation task at WMT 2019, and motivates why\nthey might have had contributed to the fact that\nsome of the systems evaluated therein reached hu-\nman parity or super-human performance. These is-\nsues concern the limited amount of intersentential\ncontext provided to the evaluators (Section 2.1),\nthe fact that the evaluations were not conducted by\ntranslators (Section 2.2) and the fact that the evalu-\nation was reference-based for one of the translation\ndirections (Section 2.3).\n2.1 Limited Intersentential Context\nIn the human evaluation at previous editions of\nWMT evaluators had no access to intersentential\ncontext since the sentences were shown to eval-\nuators in random order. That changed in WMT\n2019 (Barrault et al., 2019), which had two evalua-\ntion settings that contained intersentential context:\n\u000fDocument-level (DR+DC), inspired by\nL¨aubli et al. (2018), in which the whole doc-\nument is available and it is evaluated globally\n(see top of Figure 1). While the evaluator has\naccess to the whole document, this set-up has\nthe drawback of resulting in very few ratings\n(one per document) and hence suffers from\nlow statistical power (Graham et al., 2019).\n\u000fSentence-by-sentence with document context\n(SR+DC), in which segments are provided in\nthe “natural order as they appear in the docu-\nment” and they are assessed individually (see\nbottom of Figure 1). Such a set-up results in\na much higher number of ratings compared\nto the previous evaluation setting (DR+DC):\nFigure 1: A snapshot of an assessment using setting DR+DC\n(top) and SR+DC (bottom) at WMT 2019, taken from Bar-\nrault et al. (2019)\none per sentence rather than one per docu-\nment. The problem with the current setting\nis that the evaluator can access limited inter-\nsentential context since only the current sen-\ntence is shown. This poses two issues, with\nrespect to previous and following sentences in\nthe document being evaluated. With respect\nto previous sentences, while the evaluator has\nseen them recently, he/she might have forgot-\nten some details of a previous sentence that\nare relevant for the evaluation of the current\nsentence, e.g. in long documents. As for fol-\nlowing sentences, the evaluator does not have\naccess to them while evaluating the current\nsentence, which may be useful in some cases,\ne.g. 
when evaluating the first sentence of a\ndocument, i.e. the title of the newstory, since\nin some cases this may present an ambiguity\nfor which having access to subsequent sen-\ntences could be useful.\nSR+DC was the set-up used for the official rank-\nings of WMT 2019, from which the claims of hu-\nman parity and super-human performance were de-\nrived. The requirement of information from both\nprevious and following sentences in human evalu-\nation of MT has been empirically proven in con-\ntemporary research (Castilho et al., in press 2020).\nIn our evaluation setup, evaluators are shown lo-\ncal context (the source sentences immediately pre-\nceding and following the current one) and are pro-\nvided with global context: the whole source docu-\nment as a separate text file. Evaluators are told to\nuse the global context if the local context does not\nprovide enough information to evaluate a sentence.\nIn addition, evaluators are asked to evaluate all the\nsentences of a document in a single session.\n2.2 Proficiency of the Evaluators\nThe human evaluation of WMT 2019 was con-\nducted by crowd workers and by MT researchers.\nThe first type of evaluators provided roughly\ntwo thirds of the judgments (487,674) while the\nsecond type contributed the remaining one third\n(242,424). Of the judgments provided by crowd\nworkers, around half of them (224,046) were by\n“workers who passed quality control”.\nThe fact that the evaluation was not conducted\nby translators might be problematic since it has\nbeen found that crowd workers lack knowledge of\ntranslation and, compared to professional transla-\ntors, tend to be more accepting of (subtle) transla-\ntion errors (Castilho et al., 2017).\nTaking this into account, we will reassess the\ntranslations of the systems that achieved human\nparity or super-human performance at WMT 2019\nwith translators and non-translators. The latter are\nnative speakers of the target language who are not\ntranslators but who have an advanced level of the\nsource language (at least C1 in the Common Euro-\npean Framework of Reference for Languages).\n2.3 Reference-based Evaluation\nWhile for two of the translation directions for\nwhich there were claims of human parity at\nWMT 2019 the human evaluation was reference-\nfree (from English to both German and Russian),\nfor the remaining translation direction for which\nthere was a claim of parity (German to English),\nthe human evaluation was reference-based. In a\nreference-free evaluation, the evaluator assesses\nthe quality of a translation with respect to the\nsource sentence. Hence evaluators need to be pro-\nficient in both the source and target languages. Dif-\nferently, in a reference-based evaluation, the eval-\nuator assesses a translation with respect, not (only)\nto the source sentence, but (also) to a reference\ntranslation.The advantage of a reference-based evaluation\nis that it can be carried out by monolingual speak-\ners, since only proficiency in the target language is\nrequired. However, the dependence on reference\ntranslations in this type of evaluation can lead to\nreference bias. Such a bias is hypothesised to re-\nsult in (i) inflated scores for candidate translations\nthat happen to be similar to the reference transla-\ntion (e.g. in terms of syntactic structure and lexi-\ncal choice) and to (ii) penalise correct translations\nthat diverge from the reference translation. 
Recent\nresearch has found both evicence that this is the\ncase (Fomicheva and Specia, 2016; Bentivogli et\nal., 2018) and that it is not (Ma et al., 2017).\nIn the context of WMT 2019, in the transla-\ntion directions that followed a reference-free hu-\nman evaluation, the human translation (used as\nreference for the automatic evaluation) could be\ncompared to MT systems in the human evalua-\ntion, just by being part of the pool of transla-\ntions to be evaluated. However, in the trans-\nlation directions that followed a reference-based\nhuman evaluation, such as German !English,\nthe reference translation could not be evalu-\nated against the MT systems, since it was it-\nself the gold standard. A second human trans-\nlation was used to this end. In a nutshell, for\nEnglish!German and English !Russian there is\none human translation, referred to as H UMAN ,\nwhile for German!English there are two human\ntranslations, one was used as reference and the\nother was evaluated against the MT systems, to\nwhich we refer to as R EFand H UMAN , respec-\ntively.\nThe claim of parity for German !English re-\nsults therefore from the fact that H UMAN and the\noutput of an MT system (Facebook-FAIR) were\ncompared separately to a gold standard transla-\ntion, R EF, and the overall ratings that they obtained\nwere not significantly different from each other.\nIf there was reference bias in this case, it could\nbe that H UMAN was penalised for being different\nthan R EF. To check whether this could be the case\nwe use BLEU (Papineni et al., 2002) as a proxy to\nmeasure the similarity between all the pairs of the\nthree relevant translations: R EF, HUMAN and the\nbest MT system. Table 1 shows the three pairwise\nscores.1HUMAN appears to be markedly differ-\n1We use the multi-bleu.perl implementation of BLEU,\ngiving as parameters one of the translations as the reference\nand the other as the hypothesis. Changing the order of the\nparameters results in very minor variations in the score.\nent than MT and REF, which are more similar to\neach other.\nMT, R EF MT, H UMAN REF, HUMAN\n35.9 26.5 21.9\nTable 1: BLEU scores between pairs of three trans-\nlations (R EF, H UMAN and the best MT system) for\nGerman!English at the news translation task of WMT 2019.\nThese results indicate thus that H UMAN could\nhave been penalised for diverging from the ref-\nerence translation R EF, which could have con-\ntributed to the best MT system reaching parity. In\nour experiments, we will conduct a reference-free\nevaluation for this translation direction comparing\nthis MT system to both human translations.\n3 Evaluation\n3.1 Experimental Setup\nWe conduct a human evaluation2for the\nthree translation directions of WMT 2019 for\nwhich there were claims of human parity or\nsuper-human performance: German !English,\nEnglish!German and English !Russian. We\nevaluate the first twenty documents of the test set\nfor each of these language pairs. These amount\nto 317 sentences for German !English and 302\nfor both English!German and English !Russian\n(the English side of the test set in all from-English\ntranslation directions is common).\nWe conduct our evaluation with the Appraise\ntoolkit (Federmann, 2012), by means of relative\nrankings, rather than direct assessment (DA) (Gra-\nham et al., 2017) as in Barrault et al. (2019). While\nDA has some advantages over ranking, their out-\ncomes correlate strongly ( R > 0:9in Bojar et\nal. 
(2016)) and the latter is more appropriate for\nour evaluation for two reasons: (i) it allows us to\nshow the evaluator all the translations that we eval-\nuate at once, so that they are directly compared\n(DA only shows one translation at a time, entailing\nthat the translations evaluated are indirectly com-\npared to each other) and (ii) it allows us to show\nlocal context to the evaluator (DA only shows the\nsentence that is being currently evaluated).\nEvaluators are shown two translations for both\nEnglish!German and English !Russian: one by\na human (referred to as H UMAN ) and one by the\n2Code and data available at https://github.com/\nantot/human_parity_eamt2020best MT system3submitted to that translation di-\nrection (referred to as MT). For German !English\nthere are three translations (see Section 2.3): two\nby humans (H UMAN and R EF) and one by an MT\nsystem. The MT system is Facebook-FAIR for all\nthree translation directions. The order in which the\ntranslations are shown is randomised.\nFor each source sentence, evaluators rank the\ntranslations thereof, with ties being allowed. Eval-\nuators could also avoid ranking the translations of\na sentence, if they detected an issue that prevented\nthem from being able to rank them, by using the\nbutton flag error; they were instructed to do so only\nwhen strictly necessary. Figure 2 shows a snapshot\nof our evaluation.\nFrom the relative rankings, we extract the num-\nber of times one of the translations is better\nthan the other and the number of times they are\ntied. Statistical significance is conducted with two-\ntailed sign tests, the null hypothesis being that\nevaluators do not prefer the human translation over\nMT or viceversa (L ¨aubli et al., 2018). We report\nthe number of successes x, i.e. number of ratings\nin favour of the human translation, and the number\nof trialsn, i.e. number of all ratings except for ties.\nFive evaluators took part in the evaluation for\nEnglish!German (two translators and three non-\ntranslators), six took part for English !Russian\n(four translators and two non-translators) and three\ntook part for German !English (two translators\nand one non-translator).\nImmediately after completing the evaluation,\nthe evaluators completed a questionnaire (see Ap-\npendix A). It contained questions about their lin-\nguistic proficiency in the source and target lan-\nguages, their amount of translation experience,\nthe frequency with which they used the local\nand global contextual information, whether they\nthought that one of the translations was normally\nbetter than the other(s) and whether they thought\nthat the translations were produced by human\ntranslators or MT systems.\nIn the remaining of this section we present the\nresults of our evaluation for the three language\npairs, followed by the inter-annotator agreement\nand the responses to the questionnaire.\n3The MT system with the highest normalised average DA\nscore in the human evaluation of WMT 2019.\nFigure 2: A snapshot of our human evaluation, for the German !English translation direction, for the second segment of a\ndocument that contains nine segments. The evaluator ranks three translations, two of which are produced by human translators\n(REF and HUMAN) while the remaining one comes from an MT system (Facebook-FAIR), by comparing them to the source,\nsince no reference translation is provided. 
Local context (immediately preceeding and following source sentences) is provided\ninside the evaluation tool and global context (the whole source document) is provided as a separate file.\n3.2 Results for English !German\nFigure 3 shows the percentages of rankings4for\nwhich translators and non-translators preferred the\ntranslation by the MT system, that by the hu-\nman translator or both were considered equivalent\n(tie). Non-translators preferred the translation by\nthe MT engine slightly more frequently than the\nhuman translation (42.3% vs 36.7%) while the op-\nposite is observed for translators (36.9% for H U-\nMAN vs 34.9% for MT). However, these differ-\nences are not significant for either translators ( x=\n222,n= 432 ,p= 0:6) nor for non-translators\n(x= 332 ,n= 715 ,p= 0:06). In other words, ac-\ncording to our results there is no super-human per-\nformance, since MT is not found to be significantly\nbetter than H UMAN (which was the case at WMT\n2019) but H UMAN is not significantly better than\nMT either. Therefore our evaluation results in hu-\nman parity, since the performance of the MT sys-\ntem and H UMAN are not significantly different in\nthe eyes of the translators and the non-translators\nthat conducted the evaluation.\nFigure 4 shows the results for each evaluator\nseparately, with ties omitted to ease the visuali-\nsation. We observe a similar trend across all the\nnon-translators: a slight preference for MT over\n4We show percentages instead of absolute numbers in order\nto be able to compare the rankings by translators and non-\ntranslators, as the number of translators and non-translators is\nnot the same.\nMT better TieHuman better0.0%5.0%10.0%15.0%20.0%25.0%30.0%35.0%40.0%45.0%\n34.9%\n28.2%36.9%42.3%\n21.0%36.7%\nTranslators\nNon-translatorsFigure 3: Results for English !German for translators ( n=\n602) and non-translators ( n= 905 )\nt1t2nt1nt2nt30.0%10.0%20.0%30.0%40.0%50.0%60.0%\n45.3%51.5%52.5%53.3%54.8% 54.7%\n48.5%47.5%46.7%45.2%\nMT better\nHuman better\nFigure 4: Results for English !German for each evaluator\nseparately: translators t1 and t2 and non-translators nt1, nt2\nand nt3.\nHUMAN , where the first is preferred in 52.5% to\n54.8% of the times whereas the second is preferred\nin 45.2% to 47.5% of the cases. However, the two\ntranslators do not share the same trend; translator\nt1 prefers H UMAN more often than MT (54.7% vs\n45.3%) while the trend is the opposite for transla-\ntor t2, albeit more slightly (51.5% MT vs 48.5%\nHUMAN ).\n3.3 Results for English !Russian\nFigure 5 shows the results for English !Russian.\nIn this translation direction both translators and\nnon-translators prefer H UMAN more frequently\nthan MT: 42.3% vs 34.4% ( x= 499 ,n= 905 ,\np < 0:01) and 45.5% vs 35.8% ( x= 275 ,n=\n491,p< 0:01), respectively. Since the differences\nare significant in both cases, our evaluation refutes\nthe claim of human parity made at WMT 2019 for\nthis translation direction.\nMT better TieHuman better0.0%5.0%10.0%15.0%20.0%25.0%30.0%35.0%40.0%45.0%50.0%\n34.4%\n23.4%42.3%\n35.8%\n18.7%45.5%\nTranslators\nNon-translators\nFigure 5: Results for English !Russian for translators ( n=\n1181 ) and non-translators ( n= 604 )\nAgain we zoom in on the results by the individ-\nual evaluators, as depicted in Figure 6. 
It can be\nseen that all but one of the evaluators, translator t1,\nprefer H UMAN considerably more often than MT.\nHowever, the differences are only significant for t3\n(x= 114 ,n= 178 ,p< 0:001) and nt2 (x= 119 ,\nn= 202 ,p < 0:05), probably due to the small\nnumber of observations.\n3.4 Results for German !English\nAs explained in section 2.3, for this translation\ndirection there are two human translations, re-\nferred to as H UMAN and R EF, and one MT sys-\ntem. Hence we can establish three pairwise com-\nparisons: R EF–MT, H UMAN –MT and H UMAN –\nREF. The results for them are shown in Figure 7,\nFigure 8 and Figure 9, respectively.\nBoth translators preferred the translation by the\nMT system slightly more often than the human\nt1t2t3t4nt1nt20%10%20%30%40%50%60%70%\n50%\n45%\n36%45%46%\n41%50%55%64%\n55%54%59%\nMT better\nHuman betterFigure 6: Results for English !Russian for each evaluator\nseparately: translators t1, t2, t3 and t4 and non-translators nt1\nand nt2.\nMT better Tie Ref better0.0%10.0%20.0%30.0%40.0%50.0%60.0%70.0%\n40.1%\n21.1%38.8%46.4%\n12.0%41.6%58.7%\n19.6%21.8%t1\nt2\nnt1\nFigure 7: Results for German !English for R EFand MT,\nwith translators t1 and t2 and non-translator nt1.\ntranslation R EF, 40% vs 39% and 46% vs 42%,\nbut the difference is not significant ( x= 255 ,\nn= 529 ,p= 0:4). The non-translator pre-\nferred the translation by MT considerably more of-\nten than R EF: 59% vs 22%, with the diffence be-\ning significant ( x= 69 ,n= 255 ,p < 0:001). In\nother words, compared to R EF, the human transla-\ntion used as gold standard at WMT 2019, the MT\nsystem achieves human parity according to the two\ntranslators and super-human performance accord-\ning to the non-translator.\nMT better Tie Human better0.0%10.0%20.0%30.0%40.0%50.0%60.0%70.0%\n34.1%\n19.9%46.1%\n35.0%\n8.5%56.5%65.9%\n15.5%18.6%t1\nt2\nnt1\nFigure 8: Results for German !English for H UMAN and MT,\nwith translators t1 and t2 and non-translator nt1.\nNow we discuss the results of comparing the\nMT system to the other human translation, H U-\nMAN (see Figure 8). The outcome according to\nthe non-translator is, as in the previous comparison\nbetween R EFand MT, super-human performance\n(x= 59 ,n= 268 ,p < 0:001), which can be ex-\npected since this evaluator prefers MT much more\noften than H UMAN : 66% vs 19% of the times. We\nexpected that the results for the translators would\nalso follow a similar trend to their outcome when\nthey compared MT to the other human translation\n(REF), i.e. human parity. However, we observe\na clear preference for H UMAN over MT: 46% vs\n34% and 57% vs 35%, resulting in a significant\ndifference (x= 325 ,n= 544 ,p< 0:001).\nHuman better Tie Ref better0.0%10.0%20.0%30.0%40.0%50.0%60.0%\n48.9%\n15.8%35.3%56.2%\n6.6%37.2%\n29.7% 30.6%39.7%\nt1\nt2\nnt1\nFigure 9: Results for German !English for R EFand H U-\nMAN , with translators t1 and t2 and non-translator nt1.\nThe last comparison is shown in Figure 9 and\nconcerns the two human translations: R EFand\nHUMAN . 
The two translators exhibit a clear pref-\nerence for H UMAN over R EF: 49% vs 35% and\n56% vs 37%, ( x= 230 ,n= 563 ,p < 0:001).\nConversely, the non-translator preferred R EFsig-\nnificantly more often than H UMAN (x= 126 ,\nn= 220 ,p< 0:05): 40% vs 30%.\nGiven that (i) parity was found between MT\nand H UMAN in the reference-based evaluation of\nWMT, where R EFwas the reference translation,\nthat (ii) H UMAN is considerably different than R EF\nand MT (see Section 2.3) and that (iii) H UMAN is\nfound to be significantly better than R EFby trans-\nlators in our evaluation, it seems that reference bias\nplayed a role in the claim of parity at WMT.\n3.5 Results of the Inter-annotator Agreement\nWe now report the inter-annotator agreement\n(IAA) between the evaluators. Since we have two\ntypes of evaluators, translators and non-translators,\nwe report the IAA for both of them. IAA is calcu-\nlated in terms of Cohen’s kappa coefficient ( \u0014) as\nit was done at WMT 2016 (Bojar et al., 2016, Sec-tion 3.3).\nEvaluators\nDirection ts nts\nEnglish!German 0.326 0.266\nEnglish!Russian 0.239 0.238\nGerman!English 0.320 NA\nTable 2: Inter-annotator agreement with Cohen’s \u0014among\ntranslators (ts) and non-translators (nts) for the three transla-\ntion directions.\nTable 2 shows the IAA coefficients. For\nEnglish!German, the IAA among translators\n(\u0014= 0:326) is considerably higher, 23% rela-\ntive, than among non-translators ( \u0014= 0:266). For\nEnglish!Russian, both types of evaluators agree\nat a very similar level ( \u0014= 0:239and\u0014= 0:238).\nFinally, for German !English, we cannot establish\na direct comparison between the IAA of translators\nand non-translators, since there was only one non-\ntranslator. However, we can compare the IAA of\nthe two translators ( \u0014= 0:32) to that of each of\nthe translators and the non-translator: \u0014= 0:107\nbetween the first translator and the non-translator\nand\u0014= 0:125between the second translator and\nthe non-translator. The agreement between trans-\nlators is therefore 176% higher than between one\ntranslator and the non-translator.\nIn a nutshell, for the three translation directions\nthe IAA of translators is higher than, or equiva-\nlent to, that of non-translators, which corroborates\nprevious findings by Toral et al. (2018), where the\nIAA was 0.254 for translators and 0.13 for non-\ntranslators.\n3.6 Results of the Questionnaire\nThe questionnaire (see Appendix A) contained two\n5-point Likert questions about how often addi-\ntional context, local and global, was used. In both\ncases, translators made slightly less use of context\nthan non-translators: M= 2:9,SD = 2:0ver-\nsusM= 3:5,SD = 1:0for local context and\nM= 1:4,SD = 0:7versusM= 2,SD = 0:9\nfor global context. Our interpretation is that trans-\nlators felt more confident to rank the translations\nand thus used additional contextual information to\na lesser extent. If an evaluator used global con-\ntext, they were asked to specify whether they used\nit mostly for some sentences in particular (those at\nthe beginning, middle or at the end of the docu-\nments) or not. Out of 8 respondents, 5 reported to\nhave used global context mostly for sentences re-\ngardless of their position in the document and the\nremaining 3 mostly for sentences at the beginning.\nIn terms of the perceived quality of the trans-\nlations evaluated, all non-translators found one of\nthe translations to be clearly better in general. 
Five\nout of the eight translators gave that reply too while\nthe other three translators found all translations to\nbe of similar quality (not so good).\nAsked whether they thought the translations had\nbeen produced by MT systems or by humans, all\nevaluators replied that some were by humans and\nsome by MT systems, except one translator, who\nthought that all the translations were by MT sys-\ntems, and one non-translator who answered that\nhe/she did not know.\n4 Conclusions and Future Work\nWe have conducted a modified evaluation on the\nMT systems that reached human parity or super-\nhuman performance at the news shared task of\nWMT 2019. According to our results: (i) for\nEnglish!German, the claim of super-human per-\nformance is refuted, but there is human parity; (ii)\nfor English!Russian, the claim of human parity is\nrefuted; (iii) for German !English, for which there\nwere two human translations, the claim of human\nparity is refuted with respect to the best of the hu-\nman translations, but not with respect to the worst.\nBased on our findings, we put forward a set of\nrecommendations for human evaluation of MT in\ngeneral and for the assessment of human parity in\nMT in particular:\n1. Global context (i.e. the whole document)\nshould be available to the evaluator. Some of\nthe evaluators have reported that they needed\nthat information to conduct some of the rank-\nings and contemporary research (Castilho et\nal., in press 2020) has demonstrated that such\nknowledge is indeed required for the evalua-\ntion of some sentences.\n2. If the evaluation is to be as accurate as pos-\nsible then it should be conducted by profes-\nsional translators. Our evaluation has cor-\nroborated that evaluators that do not have\ntranslation proficiency evaluate MT systems\nmore leniently than translators and that inter-\nannotator agreement is higher among the lat-\nter (Toral et al., 2018).\n3. Reference-based human evaluation should be\nin principle avoided, given the reference biasissue (Bentivogli et al., 2018), which ac-\ncording to our results seems to have played\na role in the claim of human parity for\nGerman!English at WMT 2019. That said,\nwe note that there is also research that con-\ncludes that there is no evidence of reference\nbias (Ma et al., 2017).\nThe first two recommendations were put for-\nward recently (L ¨aubli et al., 2020) and are cor-\nroborated by our findings. We acknowledge that\nour conclusions and recommendations are some-\nwhat limited since they are based on a small num-\nber of sentences (just over 300 for each translation\ndirection) and evaluators (14 in total).\nClaims of human parity are of course not spe-\ncific to translation. Super-human performance has\nbeen reported to have been achieved in many other\ntasks, including board games, e.g. chess (Hsu,\n2002) and Go (Silver et al., 2017). However, we\nargue that assessing human parity in translation,\nand probably in other language-related tasks too,\nis not as straightforward as in other tasks such as\nboard games, and that the former task poses, at\nleast, two open questions, which we explore briefly\nin the following to close the paper.\n1. Against whom should the machine be eval-\nuated? In other words, should one claim\nhuman parity if the output of an MT sys-\ntem is perceived to be indistiguishable from\nthat by an average professional translator or\nshould we only compare to a champion pro-\nfessional translator? In other tasks it is the lat-\nter case, e.g. 
chess in which D EEPBLUE out-\nperformed world champion Gary Kasparov.\nRelated, we note that in tasks such as chess\nit is straightforward to define the concept of\na player being better than another: whoever\nwins more games, the rules of which are de-\nterministic. But in the case of translation, it\nis not so straightforward to define whether a\ntranslator is better than another. This ques-\ntion is pertinent since, as we have seen for\nGerman!English (Section 3.4), where we\nhad translations by two professional transla-\ntors, the choice of which one is used to evalu-\nate an MT system against can lead to a claim\nof human parity or not. In addition, the reason\nwhy one claim remains after our evaluation\n(human parity for English !German) might\nbe that the human translation therein is not as\ngood as it could be . Therefore, once the three\npotential issues that we have put forward (see\nSection 2) are solved, we think that an impor-\ntant potential issue that should be studied, and\nwhich we have not considered, has to do with\nthe quality of the human translation used.\n2. Who should assess claims of human parity\nand super-human performance? Taking again\nthe example of chess, this is straightforward\nsince one can just count how many games\neach contestant (machine and human) wins.\nIn translation, however, we need a person\nwith knowledge of both languages to assess\nthe translations. We have seen that the out-\ncome is dependent to some extent on the\nlevel of translation proficiency of the evalua-\ntor: it is more difficult to find human parity if\nthe translations are evaluated by professional\ntranslators than if the evaluation is carried out\nby bilingual speakers without any translation\nproficiency. Taking into account that most of\nthe users of MT systems are not translators,\nshould we in practice consider human parity\nif those users do not perceive a significant dif-\nference between human and machine trans-\nlations, even if an experienced professional\ntranslator does?\nAcknowledgments\nThis research has received funding from CLCG’s\n2019 budget for research participants. I am grate-\nful for valuable comments from Barry Haddow,\nco-organiser of WMT 2019. I would also like\nto thank the reviewers; their comments have def-\ninitely led to improve this paper.\nReferences\nBarrault, Lo ¨ıc, Ond ˇrej Bojar, Marta R. Costa-juss `a,\nChristian Federmann, Mark Fishel, Yvette Gra-\nham, Barry Haddow, Matthias Huck, Philipp Koehn,\nShervin Malmasi, Christof Monz, Mathias M ¨uller,\nSantanu Pal, Matt Post, and Marcos Zampieri. 2019.\nFindings of the 2019 conference on machine transla-\ntion (WMT19). In Proceedings of the Fourth Con-\nference on Machine Translation (Volume 2: Shared\nTask Papers, Day 1) , pages 1–61, Florence, Italy,\nAugust. Association for Computational Linguistics.\nBentivogli, Luisa, Arianna Bisazza, Mauro Cettolo, and\nMarcello Federico. 2016. Neural versus phrase-\nbased machine translation quality: a case study.\nInProceedings of the 2016 Conference on Empiri-\ncal Methods in Natural Language Processing , pages\n257–267, Austin, Texas.Bentivogli, Luisa, Mauro Cettolo, Marcello Federico,\nand Federmann Christian. 2018. Machine transla-\ntion human evaluation: an investigation of evaluation\nbased on post-editing and its relation with direct as-\nsessment. 
In 15th International Workshop on Spoken\nLanguage Translation 2018 , pages 62–69.\nBojar, Ond ˇrej, Rajen Chatterjee, Christian Federmann,\nYvette Graham, Barry Haddow, Matthias Huck,\nAntonio Jimeno Yepes, Philipp Koehn, Varvara\nLogacheva, Christof Monz, Matteo Negri, Aure-\nlie Neveol, Mariana Neves, Martin Popel, Matt\nPost, Raphael Rubino, Carolina Scarton, Lucia Spe-\ncia, Marco Turchi, Karin Verspoor, and Marcos\nZampieri. 2016. Findings of the 2016 conference\non machine translation. In Proceedings of the First\nConference on Machine Translation , pages 131–198,\nBerlin, Germany, August.\nBojar, Ondej, Christian Federmann, Mark Fishel,\nYvette Graham, Barry Haddow, Matthias Huck,\nPhilipp Koehn, and Christof Monz. 2018. Find-\nings of the 2018 conference on machine translation\n(wmt18). In Proceedings of the Third Conference on\nMachine Translation, Volume 2: Shared Task Papers ,\npages 272–307, Belgium, Brussels, October. Associ-\nation for Computational Linguistics.\nCastilho, Sheila, Joss Moorkens, Federico Gaspari,\nRico Sennrich, Vilelmini Sosoni, Panayota Geor-\ngakopoulou, Pintu Lohar, Andy Way, Antonio Va-\nlerio Miceli Barone, and Maria Gialama. 2017.\nA Comparative Quality Evaluation of PBSMT and\nNMT using Professional Translators. In MT Summit\n2017 , pages 116–131, Nagoya, Japan.\nCastilho, Sheila, Maja Popovic, and Andy Way. in\npress 2020. On Context Span Needed for Ma-\nchine Translation Evaluation. In Proceedings of\nthe Twelvfth International Conference on Language\nResources and Evaluation (LREC 2020) . European\nLanguage Resources Association (ELRA).\nFedermann, Christian. 2012. Appraise: An open-\nsource toolkit for manual evaluation of machine\ntranslation output. The Prague Bulletin of Mathe-\nmatical Linguistics , 98:25–35, September.\nFomicheva, Marina and Lucia Specia. 2016. Reference\nbias in monolingual machine translation evaluation.\nInProceedings of the 54th Annual Meeting of the As-\nsociation for Computational Linguistics (Volume 2:\nShort Papers) , pages 77–82, Berlin, Germany, Au-\ngust. Association for Computational Linguistics.\nGraham, Yvette, Timothy Baldwin, Alistair Moffat, and\nJustin Zobel. 2017. Can machine translation sys-\ntems be evaluated by the crowd alone. Natural Lan-\nguage Engineering , 23(1):3–30.\nGraham, Yvette, Barry Haddow, and Philipp Koehn.\n2019. Translationese in machine translation evalu-\nation. arXiv preprint arXiv:1906.09833 .\nHaddow, Barry, Nikolay Bogoychev, Denis Emelin,\nUlrich Germann, Roman Grundkiewicz, Kenneth\nHeafield, Antonio Valerio Miceli Barone, and Rico\nSennrich. 2018. The university of edinburgh’s sub-\nmissions to the wmt18 news translation task. In Pro-\nceedings of the Third Conference on Machine Trans-\nlation, Volume 2: Shared Task Papers , pages 403–\n413, Belgium, Brussels, October. Association for\nComputational Linguistics.\nHassan, Hany, Anthony Aue, Chang Chen, Vishal\nChowdhary, Jonathan Clark, Christian Federmann,\nXuedong Huang, Marcin Junczys-Dowmunt, Will\nLewis, Mu Li, Shujie Liu, Tie-Yan Liu, Renqian\nLuo, Arul Menezes, Tao Qin, Frank Seide, Xu Tan,\nFei Tian, Lijun Wu, Shuangzhi Wu, Yingce Xia,\nDongdong Zhang, Zhirui Zhang, and Ming Zhou.\n2018. Achieving Human Parity on Automatic Chi-\nnese to English News Translation.\nHsu, Feng-Hsiung. 2002. Behind Deep Blue: Build-\ning the Computer That Defeated the World Chess\nChampion . Princeton University Press, Princeton,\nNJ, USA.\nKocmi, Tom, Roman Sudarikov, and Ondej Bojar.\n2018. Cuni submissions in wmt18. 
In Proceed-\nings of the Third Conference on Machine Transla-\ntion, Volume 2: Shared Task Papers , pages 435–441,\nBelgium, Brussels, October. Association for Compu-\ntational Linguistics.\nL¨aubli, Samuel, Rico Sennrich, and Martin V olk. 2018.\nHas Machine Translation Achieved Human Parity?\nA Case for Document-level Evaluation. In Proceed-\nings of the 2018 Conference on Empirical Methods\nin Natural Language Processing , Brussels, Belgium.\nL¨aubli, Samuel, Sheila Castilho, Graham Neubig, Rico\nSennrich, Qinlan Shen, and Antonio Toral. 2020.\nA set of recommendations for assessing human–\nmachine parity in language translation. Journal of\nArtificial Intelligence Research , 67:653–672.\nMa, Qingsong, Yvette Graham, Timothy Baldwin, and\nQun Liu. 2017. Further investigation into reference\nbias in monolingual evaluation of machine transla-\ntion. In Proceedings of the 2017 Conference on\nEmpirical Methods in Natural Language Processing ,\npages 2476–2485, Copenhagen, Denmark, Septem-\nber. Association for Computational Linguistics.\nNg, Nathan, Kyra Yee, Alexei Baevski, Myle Ott,\nMichael Auli, and Sergey Edunov. 2019. Facebook\nfairs wmt19 news translation task submission. In\nProceedings of the Fourth Conference on Machine\nTranslation (Volume 2: Shared Task Papers, Day 1) ,\npages 314–319, Florence, Italy, August. Association\nfor Computational Linguistics.\nPapineni, Kishore, Salim Roukos, Todd Ward, and Wei-\nJing Zhu. 2002. Bleu: a method for automatic eval-\nuation of machine translation. In Proceedings of the\n40th Annual Meeting of the Association for Com-\nputational Linguistics , pages 311–318, Philadelphia,Pennsylvania, USA, July. Association for Computa-\ntional Linguistics.\nSilver, David, Thomas Hubert, Julian Schrittwieser,\nIoannis Antonoglou, Matthew Lai, Arthur Guez,\nMarc Lanctot, Laurent Sifre, Dharshan Kumaran,\nThore Graepel, Timothy P. Lillicrap, Karen Si-\nmonyan, and Demis Hassabis. 2017. Mastering\nchess and shogi by self-play with a general reinforce-\nment learning algorithm. CoRR , abs/1712.01815.\nToral, Antonio, Sheila Castilho, Ke Hu, and Andy Way.\n2018. Attaining the unattainable? reassessing claims\nof human parity in neural machine translation. In\nProceedings of the Third Conference on Machine\nTranslation, Volume 1: Research Papers , pages 113–\n123, Belgium, Brussels, October. Association for\nComputational Linguistics.\nA Post-experiment Questionnaire\n1. Rate your knowledge of the source language\n\u000fNone; A1; A2; B1; B2; C1; C2; native\n2. Rate your knowledge of the target language\n\u000fNone; A1; A2; B1; B2; C1; C2; native\n3. How much experience do you have translating from the\nsource to the target language?\n\u000fNone, and I am not a translator; None, but I am\na translator; Less than 1 year; between 1 and 2\nyears; between 2 and 5 years; more than 5 years\n4. During the experiment, how often did you use the lo-\ncal context shown in the web application (i.e. source\nsentences immediately preceding and immediately fol-\nlowing the current sentence)?\n\u000fNever; rarely; sometimes; often; always\n5. During the experiment, how often did you use the\nglobal context provided (i.e. the whole source docu-\nment provided as a text file)?\n\u000fNever; rarely; sometimes; often; always\n6. If you used the global context, was that the case for\nranking some sentences in particular?\n\u000fYes, mainly those at the beginning of documents,\ne.g. 
headlines
• Yes, mainly those in the middle of documents
• Yes, mainly those at the end of documents
• No, I used the global context regardless of the position of the sentences to be ranked
7. About the translations you ranked
• Normally one was clearly better
• All were of similar quality, and they were not so good
• All were of similar quality, and they were very good
8. The translations that you evaluated were in your opinion:
• All produced by human translators
• All produced by machine translation systems
• Some produced by humans and some by machine translation systems
• I don’t know",
"main_paper_content": null
}
|
{
"decision": "Unknown",
"reviews": []
}
| 0 | 0 |
[] |
[] |
{
"id": "jEElk0Jxkya",
"year": null,
"venue": "EAMT 2020",
"pdf_link": "https://aclanthology.org/2020.eamt-1.14.pdf",
"forum_link": "https://openreview.net/forum?id=jEElk0Jxkya",
"arxiv_id": null,
"doi": null
}
|
{
"title": "Fine-grained Human Evaluation of Transformer and Recurrent Approaches to Neural Machine Translation for English-to-Chinese",
"authors": [
"Yuying Ye",
"Antonio Toral"
],
"abstract": "Yuying Ye, Antonio Toral. Proceedings of the 22nd Annual Conference of the European Association for Machine Translation. 2020.",
"keywords": [],
"raw_extracted_content": "Fine-grained Human Evaluation of Transformer and Recurrent\nApproaches to Neural Machine Translation for English-to-Chinese\nYuying Ye\nDigital Humanities Programme\nUniversity of Groningen\nThe Netherlands\[email protected] Toral\nCenter for Language and Cognition\nUniversity of Groningen\nThe Netherlands\[email protected]\nAbstract\nThis research presents a fine-grained hu-\nman evaluation to compare the Trans-\nformer and recurrent approaches to neural\nmachine translation (MT), on the transla-\ntion direction English-to-Chinese. To this\nend, we develop an error taxonomy com-\npliant with the Multidimensional Quality\nMetrics (MQM) framework that is cus-\ntomised to the relevant phenomena of this\ntranslation direction. We then conduct an\nerror annotation using this customised er-\nror taxonomy on the output of state-of-the-\nart recurrent- and Transformer-based MT\nsystems on a subset of WMT2019’s news\ntest set. The resulting annotation shows\nthat, compared to the best recurrent sys-\ntem, the best Transformer system results in\na 31% reduction of the total number of er-\nrors and it produced significantly less er-\nrors in 10 out of 22 error categories. We\nalso note that two of the systems evaluated\ndo not produce any error for a category that\nwas relevant for this translation direction\nprior to the advent of NMT systems: Chi-\nnese classifiers.\n1 Introduction\nThe field of machine translation (MT) has been\nrevolutionised in the past few years by the emer-\ngence of a new approach: neural MT (NMT).\nNMT is a dynamic research area and we have\nwitnessed two mainstream architectures already,\nthe first of which is based on recurrent neural\nc\r2020 The authors. This article is licensed under a Creative\nCommons 3.0 licence, no derivative works, attribution, CC-\nBY-ND.networks (RNN) with attention (Bahdanau et al.,\n2014) while the second, referred to as Transformer,\nmakes use of the self-attention mechanism in non-\nrecurrent networks (Vaswani et al., 2017).\nSeveral studies have analysed in depth, using\nboth automatic and human evaluation methods, the\nresulting translations of NMT systems under the\nrecurrent architecture and compared them to the\ntranslations of the previous mainstream approach\nto MT: statistical MT (Koehn et al., 2003), e.g.\n(Bentivogli et al., 2016; Castilho et al., 2017;\nKlubi ˇcka et al., 2018; Popovi ´c, 2017; Shterionov\net al., 2018). However, while the Transformer ar-\nchitecture has brought, at least when trained with\nsufficient data, considerable gains over the recur-\nrent architecture (Vaswani et al., 2017), the re-\nsearch conducted to date that analyses the result-\ning translations of these two neural approaches is,\nto the best of our knowledge, limited to automatic\napproaches (Burlot et al., 2018; Lakew et al., 2018;\nTang et al., 2018a; Tang et al., 2018b; Tran et al.,\n2018; Yang et al., 2019).\nIn this paper we conduct a detailed human anal-\nysis of the outputs produced by state-of-the-art re-\ncurrent and Transformer NMT systems. Namely,\nwe manually annotate the errors found according\nto a detailed error taxonomy which is compliant\nwith the hierarchical listing of issue types defined\nas part of the Multidimensional Quality Metrics\n(MQM) framework (Lommel et al., 2014). We\ncarry out this analysis for the news domain in the\nEnglish-to-Chinese translation direction. To this\nend, we define an error taxonomy that is relevant to\nthe problematic linguistic phenomena of this trans-\nlation direction. 
This taxonomy is then used to an-\nnotate errors produced by NMT systems that fall\nunder the recurrent and Transformer architectures.\nThe main contributions of this paper can then be\nsummarised as follows:\n1. We develop an MQM-compliant error taxon-\nomy tailored to the English-to-Chinese trans-\nlation direction.\n2. We conduct, to the best of our knowledge,\nthe first human fine-grained error analysis of\nTransformer-based versus recurrent NMT.\nThe rest of the paper is arranged in the following\nway. Section 2 presents a brief review of related\nwork. Next, Section 3 outlines the recurrent- and\nTransformer-based NMT systems and the dataset\nused in our experiments. Subsequently, Section 4\npresents the methodology for error annotation and\nthe definition of the error taxonomy, followed by\nresults and statistical analysis of the annotation.\nFinally, Section 5 gives a conclusion and sugges-\ntions for future work.\n2 Related Work\nThis section provides an overview of related re-\nsearch on the two topics that correspond to our\nmain contributions: human error analysis of MT\noutputs for the language pair English–Chinese\n(Section 2.1) and analyses of MT systems based on\nthe recurrent and Transformer architectures (Sec-\ntion 2.2).\n2.1 Human Error Analyses of MT for\nChinese\nOne of the first taxonomies of MT errors, by Vilar\net al. (2006), had a specific error typology for the\nChinese-to-English translation direction, in accor-\ndance with the specific relevant phenomena of this\nlanguage pair. Compared to their base taxonomy,\na refined categorisation of word order was added\nto mark syntactic mistakes that appear in transla-\ntions of questions, infinitives, declarative and sub-\nordinate sentences. In addition, the error type\nUnknown words was refined into four sub-types:\nPerson ,Location ,Organisation andOther proper\nnames .\nLi et al. (2009) carried out an error analysis for\nthe Chinese-to-Korean translation direction with\nonly three categories from the taxonomy of Vilar\net al. (2006) ( Missing words ,Wrong word order\nandIncorrect words ), and they replaced Incorrect\nwords with two more specific categories: one for\nboth wrong lexical choices and extra words andanother for wrong modality. The simplified tax-\nonomy was used to check if their method of re-\nordering verb phrases, prepositional phrases and\nmodality-bearing words in the Chinese data re-\nsulted in an improved MT system.\nHsu (2014) adapted the classification scheme of\nFarr´us et al. (2010) to conduct an error analysis for\nthe Chinese-to-English translation direction. The\nerror taxonomy of Farr ´us et al. (2010) was origi-\nnally defined for Catalan !Spanish. Its first level\ncorresponded to five types of errors, related to dif-\nferent linguistic levels: orthographic, morphologi-\ncal, lexical, semantic and syntactic.\nCastilho et al. (2017) assessed the output of two\nMT systems (statistical and recurrent) on patents,\nalso for the Chinese-to-English translation direc-\ntion. For this, they used a custom error taxonomy\nconsisting of the error types Punctuation ,Part of\nspeech ,Omission ,Addition ,Wrong terminology ,\nLiteral translation , and Word form .\nHassan et al. (2018) analysed the output of\na Transformer-based MT system, again for the\nChinese-to-English translation direction, using a\ntwo-level taxonomy based on that by Vilar et\nal. (2006). 
The first level contains nine error\ntypes: Missing words ,Word repetition ,Named en-\ntity,Word order ,Incorrect words ,Unknown words ,\nCollocation ,Factoid , and Ungrammatical . Only\nthe error type Named entity has a second level,\nwith five subcategories: Person ,Location ,Organ-\nisation ,Event , and Other .\nAs we can observe in these related works, fine-\ngrained human evaluation for the English–Chinese\nlanguage pair has been hitherto conducted, to the\nbest of our knowledge, (i) only for the Chinese-\nto-English direction and (ii) with error taxonomies\nthat were either developed prior to the advent of\nthe MQM framework or that were designed ad-hoc\nand were not thoroughly motivated. The position\nof our paper in these regards is thus clearly novel:\n(i) our analysis is for the English-to-Chinese trans-\nlation direction and (ii) we devise and use an error\ntaxonomy that is compliant with the MQM frame-\nwork.\n2.2 Analyses of Recurrent versus\nTransformer MT Systems\nTang et al. (2018a) compared recurrent- and\nTransformer-based MT systems on a syntactic task\nthat involves long-range dependencies (subject-\nverb agreement) and on a semantic task (word\nsense disambiguation) The recurrent system out-\nperformed Transformer on the syntactic task while\nTransformer was better than the recurrent system\non the semantic task. The latter finding was cor-\nroborated by Tang et al. (2018b).\nTran et al. (2018) compared the recurrent and\nTransformer architectures with respect to their\nability to model hierarchical structure in a mono-\nlingual setting, by means of two tasks: subject-\nverb agreement and logical inference. On both\ntasks, the recurrent system outperformed Trans-\nformer, slightly but consistently.\nBurlot et al. (2018) confronted English !Czech\nTransformer- and recurrent-based MT systems\nsubmitted to WMT20181on a test suite that ad-\ndresses morphological competence, based on the\nerror typology by Burlot and Yvon (2017). The re-\ncurrent system outperformed Transformer on cases\nthat involve number, gender and tense, while both\narchitectures performed similarly on agreement. It\nis worth noting that agreement here regards lo-\ncal agreement (e.g. an adjective immediately fol-\nlowed by a noun), while the aforementioned cases\nof agreement in which a recurrent system outper-\nforms Transformer (Tang et al., 2018a; Tran et al.,\n2018) regard long-distance agreement.\nYang et al. (2019) assessed the ability of both ar-\nchitectures to learn word order. When trained on a\nspecific task related to word order, word reordering\ndetection, a recurrent system outperformed Trans-\nformer. However, when trained on a downstream\ntask, MT, Transformer was able to learn better po-\nsitional information.\nLakew et al. (2018) evaluated multilingual NMT\nsystems under the Transformer and recurrent ar-\nchitectures in terms of their morphological, lexi-\ncal, and word order errors. In both architectures\nlexical errors were found to be the most prominent\nones, followed by morphological, and lastly come\nreordering errors. 
The authors compared the num-\nber of errors in bilingual, multilingual and zero-\nshot systems, both for recurrent and Transformer,\nand found multilingual and zero-shot systems to be\nmore competitive with respect to bilingual models\nfor Transformer than for recurrent.\n3 Machine Translation Systems\nThis section reports on the MT systems and the\ndataset used in our experiments.\n1http://www.statmt.org/wmt18/We have used output from systems that fall un-\nder the recurrent and Transformer architectures\nand were top-ranked at the news translation shared\ntask at the Conference on Machine Translation\n(WMT). We chose the University of Edinburgh’s\nMT system (Sennrich et al., 2017) as our recurrent\nNMT system due to the fact that this system had\nthe highest BLEU score (36.3) for the translation\ndirection English!Chinese at WMT20172and it\nwas ranked first (tied with other two systems) in\nthe human evaluation.\nAs for the Transformer-based MT system used\nin our research, we have taken the PATECH sub-\nmission to WMT2019.3We conducted our exper-\niments before the human evaluation of WMT2019\nwas available, and therefore we chose the PAT-\nECH’s system based on the automatic evalua-\ntion of WMT2019, in which this system was the\nbest performing one.4However, PATECH’s sys-\ntem was not included in the human evaluation of\nWMT2019. Therefore we carried out an additional\nannotation on the top-performing system from that\nhuman evaluation: the Transformer system devel-\noped by Kingsoft AI Lab (Guo et al., 2019), here-\nafter referred to as KSAI.\nBefore our human error analysis, we would\nlike to compare the recurrent and Transformer\nMT systems in terms of an automatic evaluation\nmetric. This is not possible from their outputs\nsince they correspond to two different test sets\n(newstest2017 andnewstest2019 ). In or-\nder to be able to compare them, we asked the de-\nveloper of the recurrent system to provide us with\nthe output from their system for newstest2019 .\nAs shown in Table 1, the use of the Transformer\narchitecture leads to a considerable improvement\ncompared to the recurrent system (on average\n31.4% relative in terms of BLEU). While the gap\nbetween the two architectures is large based on\nBLEU, this is an overall metric and therefore does\nnot provide any insight into which aspects of the\ntranslation have improved with Transformer with\nrespect to the recurrent system. 
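As a side note, the corpus-level BLEU comparison in Table 1 could be reproduced with any standard implementation; the minimal sketch below assumes sacreBLEU with its built-in Chinese tokenisation, and the file names are placeholders rather than the authors' actual system outputs.

```python
# Minimal sketch of a corpus-level BLEU comparison like the one in Table 1.
# Assumes sacrebleu is installed; file paths are illustrative placeholders,
# not the actual WMT2019 outputs used by the authors.
import sacrebleu

def corpus_bleu_zh(hypothesis_file: str, reference_file: str) -> float:
    """Score one system output against the reference with Chinese tokenisation."""
    with open(hypothesis_file, encoding="utf-8") as f:
        hyps = [line.strip() for line in f]
    with open(reference_file, encoding="utf-8") as f:
        refs = [line.strip() for line in f]
    bleu = sacrebleu.corpus_bleu(hyps, [refs], tokenize="zh")
    return bleu.score

if __name__ == "__main__":
    for name, path in [("RNN", "rnn.zh"), ("Transformer-PATECH", "patech.zh"),
                       ("Transformer-KSAI", "ksai.zh")]:
        print(name, round(corpus_bleu_zh(path, "newstest2019.ref.zh"), 1))
```

Choosing `tokenize="zh"` scores Chinese output at the character level, which is the usual choice for this target language; plain whitespace tokenisation would make the scores largely meaningless.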
To gain further in-\nsight we conduct a fine-grained human error anal-\nysis in the following section.\n2http://www.statmt.org/wmt17/\n3http://matrix.statmt.org/systems/show/\n4243\n4http://matrix.statmt.org/matrix/systems_\nlist/1908\nRNNTransformer\n(PATECH)Transformer\n(KSAI)\n33.1 44.6 42.4\nTable 1: Automatic evaluation (BLEU scores) of the 3 MT\nsystems on the WMT 2019 news test set.\n4 Error Annotation\nThis section details the annotation setup (Sec-\ntion 4.1), explains how we defined our MQM-\ncompliant error taxonomy adapted to the relevant\ncharacteristics of translating from English into\nChinese and the challenges faced by NMT sys-\ntems in this translation direction (Section 4.2) and\npresents the results of the annotation, as well as\nanalysis and discussion thereof (Section 4.3).\n4.1 Annotation Setup\nWe use translate5 ,5an open-source web-\nbased tool, as the annotation environment.\ntranslate5 was installed on a cloud server, so\nthat it could be accessed remotely by annotators.\nThe source text and reference translation are pro-\nvided next to the NMT translations.\nThe annotation was performed by two annota-\ntors who are native Chinese speakers with fluent\nEnglish. They both had an academic background\nand experience in translation. Prior to annota-\ntion, they were fully informed on the annotation\nenvironment and were provided with annotation\ninstructions, comprising MQM’s usage guidelines\nand decision tree (Burchardt and Lommel, 2014).\nThe dataset used in our experiments is the\ntest set from WMT2019 ( newstest2019 ) for\nEnglish!Chinese. This test set is chosen due to\nthe fact that we have outputs for the RNN- and\nTransformer-based MT systems (see Section 3),\nand also because it is a commonly-used benchmark\nin the MT community. In our error annotation we\nuse two subsets of this test set.\n\u000fA calibration set, made of the first 40 sen-\ntences from the testset. This refers to a\nsmall sample of annotation data that annota-\ntors work on before the actual annotation task\ntakes place. Its purpose is twofold: (i) we use\nit to find out which error types occur in the\ntranslations and therefore use it to guide the\nrefinement of the error taxonomy in a data-\ndriven way; (ii) we also use it to identify dis-\nagreements between the annotators.\n5http://www.translate5.net\u000fAn evaluation set, made up of 100 sentences\nfrom the test set. In order to have intersen-\ntential context, these sentences are taken from\nsix documents (five full documents and the\nfirst sentences of the sixth document up to 100\nsentences are reached). Using this evaluation\nset led then to the annotation of 500 sentences\n(100 distinct sentences times two MT systems\n(RNN and PATECH) times two annotators,\nplus the annotation of the 100 sentences for\na third system (KSAI) by one annotator).\nThe annotators annotated the calibration set with\nour custom error taxonomy (see Figure 2), after\nwhich they discussed difficult cases and reached\nagreement on how to annotate them. Then they an-\nnotated the translations of the evaluation set. Once\nannotators started working on the evaluation set,\nthey were not allowed to discuss problems in an-\nnotation any more.\n4.2 Error Taxonomy\nWe decided to develop our error taxonomy based\non the MQM framework developed at the QT-\nLaunchPad project (Lommel et al., 2014), after\nreviewing different translation quality evaluation\nframeworks. 
MQM stands out with its extensive\nstandardised issue types6which are provided with\nclear definitions and explanations. In addition, a\nthorough guideline and decision tree7are avail-\nable to assist annotators. Furthermore, this frame-\nwork allows the building of customised error tax-\nonomies.\nFollowing the method of Klubi ˇcka et al. (2018),\nour customisation process started with the sam-\nple MQM-compliant hierarchy for diagnostic MT\nevaluation (Figure 1) as the initial stage of our\nerror taxonomy. The sample MQM tagset went\nthrough the preliminary selection of issue types to\nbe used for fine-grained MT evaluation.\nWe annotated the calibration set with the sam-\nple MQM-compliant hierarchy to find out what\ntypes of errors occur in the outputs of our MT sys-\ntems. Based on the results of the calibration set, we\ndefined the complete tagset (shown in Figure 2).\nIn the following subsections we provide detailed\ninformation concerning each of the modifications\nmade to the error taxonomy.\n6http://www.qt21.eu/mqm-definition/\nissues-list-2015-12-30.html\n7http://www.qt21.eu/downloads/\nMQM-usage-guidelines.pdf\nIssue T ypesFluency Accuracy\nUnintelligibleGrammarMistranslation\nOmission\nUntranslatedAddition\nFunction wordWord order\nIncorrectMissingTypography\nExtraneousWord formSpelling\nTense/aspect/moodPart of speech\nAgreementFigure 1: The sample MQM-compliant error hierarchy for diagnostic MT evaluation. The italicised issue types are not included\nin the standard MQM issue types.\nIssue T ypes\nFluencyAccuracy\nUnintelligibleGrammarMistranslation\nOmission\nUntranslatedAddition\nFunction word\nWord orderIncorrect\nMissing\nTypographyExtraneousOverly-\nliteral\nClassifierEntity\nUnpaired-marks\nPunctuationAdverb\nParticlePreposition\nFigure 2: The MQM-compliant error taxonomy for the translation direction English !Chinese. All the changes are marked by\nboxes with grey dotted lines and the issue types that are not included in the MQM issue types are italicised.\n4.2.1 Word Form & Spelling\nGiven that Chinese is an analytic language with-\nout inflection and its writing system is logographic,\nthe issue types Word form andSpelling are of no\ninterest to our research agenda.\n4.2.2 Classifier\nWe add one of the distinctive features of Chi-\nnese part-of-speech, the usage of classifiers, which\nhave been researched thoroughly in Chinese lin-\nguistics (Jin, 2018) and Chinese language process-\ning (Huang et al., 2017). In short, classifiers are\nspecial linguistic units located behind a number,\ndemonstrative or certain quantifiers. These classi-\nfiers do not have a counterpart in English, which\nmight give rise to translation problems. Examples\nof classifiers are shown in Table 2. How MT sys-\ntems handle such a specific linguistic phenomenon\nis of interest to us.\nPronoun Classifier Noun\n每(mei)个(ge)角落(jiaoluo )\nEvery corner\n一(yi)架(jia)飞机(feiji)\nOne plane\nTable 2: Examples of classifiers in Chinese. The classifiers\nare underscored.4.2.3 Typography\nWe extend the issue type Typography into two\nspecific subtypes, based on the result of the cal-\nibration set. Though an unpaired quote or a\nmisuse of punctuation is less likely to damage\nthe comprehension of the content critically than\nother errors, as stated in Vilar et al. (2006), the\nChinese!English error annotation conducted by\nHsu (2014) shows that punctuation accounts for\n10% of the errors. Such a high amount of punc-\ntuation mistakes could be a nuisance in the MT\noutput. 
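Such typography problems are also cheap to flag automatically. Purely as an illustration (the annotation in this study is entirely manual), a crude check for the two Typography subtypes might look as follows; the character lists are assumptions of this sketch, not part of the annotation protocol.

```python
# Illustrative only: two cheap automatic checks for the Typography subtypes
# (Punctuation and Unpaired-marks) added to the customised taxonomy.
# The character inventories below are an assumption for this sketch.
WESTERN_TO_CHINESE = {".": "。", ",": "，", "?": "？", "!": "！", ":": "："}
PAIRED_MARKS = [("《", "》"), ("“", "”"), ("（", "）")]

def suspicious_punctuation(sentence: str):
    """Western punctuation marks that usually should be full-width in Chinese text."""
    return [ch for ch in sentence if ch in WESTERN_TO_CHINESE]

def unpaired_marks(sentence: str):
    """Opening/closing marks whose counts in the sentence do not match."""
    return [(o, c) for o, c in PAIRED_MARKS if sentence.count(o) != sentence.count(c)]
```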
Incorrect usage of Typography could nega-\ntively influence the reception of a translation, since\nthe reader might consider such an error as a sign of\nlack of professionalism, and therefore react by dis-\ntrusting the content.\n4.2.4 Mistranslation\nPreliminarily, we observe that Mistranslation is\na major issue in the calibration set and that related\ntranslation errors Overly-literal andEntity appear\nfrequently. We have thus decided to specify them\nas sub-types of Mistranslation . Vilar et al. (2006)\nalso included entity errors in their error typology\nfor the Chinese!English language pair and fur-\nther divided them into specific sub-types. As their\nresult showed, this issue type only amounted to a\nsmall percentage of the errors. Therefore, the is-\nsue type Entity is not further specified in our tax-\nonomy.\n4.2.5 Function word\nFunction word is extended to one extra layer\nunder Extraneous with the intention of covering\nwesternised Chinese expressions that were ob-\nserved in the calibration set. Westernised Chinese\nrefers to a cross-lingual phenomenon of impos-\ning English grammar on Chinese, which is man-\nifested in many problematic forms, abuse of func-\ntion words especially (Tse, 2001). The relations\nbetween sentence parts, tenses and aspects are of-\nten shown through word order, particles or context\nin Chinese, due to its lack of inflection. Specifying\nthe types of extraneous function words into three\ncommon types, Preposition ,Adverb andParticle\ncould be useful to discuss whether there is differ-\nence among these word classes.\nThe two other sub-types of Function word (In-\ncorrect andMissing ) are not specified in confor-\nmity with the initial examination of the data. Not\nonly might adding the extra layer for both sub-\nerror types not prove practical, but it is also not\nadvised by the MQM guidelines to have the error\ntaxonomy so big that it could challenge annotators’\nmemory limit (Burchardt and Lommel, 2014).\n4.3 Results and Discussion\n4.3.1 Inter-annotator Agreement\nInter-annotator agreement (IAA) was calculated\nwith Cohen’s Kappa ( \u0014) (Cohen, 1960) on the an-\nnotations of the calibration and evaluation sets for\nthe RNN and PATECH’s Transformer systems (Ta-\nble 3). It is worth noting that the IAA values of the\nevaluation set improve considerably upon those of\nthe calibration set ( \u0014= 0:44versus 0:27). It shows\nthat the discussion of annotation disagreements\ncan contribute to improving the level of agreement\nnotably.\nIAA RNNTransformer\n(PATECH)Both\nCalibration set 0.31 0.22 0.27\nEvaluation set 0.45 0.43 0.44\nTable 3: Total and average inter-annotator agreement (Co-\nhen’sκvalues) for the MQM calibration set and evaluation\nset.\nAs shown in Table 3, the difference of IAA\nscores between Transformer and RNN is slight\nin our evaluation set. The average IAA value(0.44), corresponds to moderate agreement, ac-\ncording to Cohen (1960). When interpreting these\nresults, it should be taken into account that IAA\nscores are known to be low in human evaluation\nof MT. For example, Callison-Burch et al. (2007)\nobserved fair agreements for fluency and accuracy\nfor eight language pairs, and, though the MQM\nframework is rigorously defined and supported by\nclear guidelines, in the experiments by Lommel\nand Burchardt (2014) MQM led to relatively low\nIAA, due to span-level difference, ambiguous cat-\negorisation and differences of opinion. Klubi ˇcka\net al. 
(2018) reported a moderate agreement on\nEnglish–Croatian, higher than that by Lommel and\nBurchardt (2014), probably because the agreement\nwas calculated on errors annotated for each sen-\ntence, thus not taking the spans of the annotations\ninto account. Our own IAA results do not differ\ngreatly with aforementioned research.\nRNNTransformer\n(PATECH)Both\nAccuracy 0.60 0.61 0.61\nMistranslation 0.50 0.52 0.51\nEntity -0.03 0.39 0.18\nOverly-literal 0.24 0.21 0.23\nOmission 0.52 0.67 0.60\nAddition 0.37 0.00 0.19\nUntranslated 0.73 0.71 0.72\nFluency 0.01 0.07 0.04\nGrammar 0.36 0.24 0.30\nFunction word 0.17 -0.01 0.08\nExtraneous 0.32 -0.01 0.16\nPreposition 0.65 -0.01 0.32\nAdverb 0.00 N/A N/A\nParticle -0.02 -0.03 -0.03\nIncorrect -0.02 -0.01 -0.02\nMissing 0.32 0.00 0.16\nWord order 0.45 0.29 0.37\nClassifier N/A N/A N/A\nUnintelligible 0.20 -0.02 0.09\nTypography 0.22 0.28 0.25\nPunctuation 0.21 0.29 0.25\nUnpaired-mark N/A N/A N/A\nTable 4: Inter-annotator agreement (Cohen’s \u0014values) on\nthe evaluation set for the RNN and PATECH’s Transformer\nsystems and their average. Substantial scores (0.61–0.80) are\nshown in bold. N/A is given to the error categories that were\nnever used, since no data points could be used to calculate the\nIAA score.\nIn addition to overall IAA, Cohen’s ( \u0014) was also\ncalculated for each issue type in the evaluation set\nindividually (Table 4). For both systems, the IAA\nscores for Accuracy and its sub-types are consid-\nerably higher than those under Fluency . It is an\nexpected result taken into account that accuracy\nerrors are more straightforward and less open to\ninterpretation. The \u0014values are relatively consis-\ntent between Transformer and RNN, except a strik-\ning plunge in agreement scores for Transformer in\nsome categories ( Function word and its subtypes,\nWord order andUnintelligible ) and the opposite, a\nconsiderably lower agreement for RNN, for Entity .\nThe source of these disagreements can be traced\nback to the annotation output. For example, in\nthe case of Unintelligible , the evaluators annotated\ndifferent sentences with this error category. As\nforEntity , it is worth mentioning that disagree-\nment arose over this category in the annotation of\nthe calibration set. It seems that, despite the dis-\ncussion, the understanding of entity was still not\nshared by the two annotators. It is also possi-\nble that due to the improved translation quality of\nTransformer, mistakes such as Function word are\nmore subtle and harder to detect.\n4.3.2 Annotated Errors\nTable 5 presents the overall number of annotated\nerror tags in the output of each system by each an-\nnotator. One can clearly observe that both annota-\ntors have annotated relatively less errors in Trans-\nformer’s output (PATECH) than in RNN’s; the er-\nror reduction is of 35% in the case of annotator 1\nand of 27% in the case of the second annotator.\nThe Transformer system from KSAI only reduces\nthe number of errors by 12.5%, compared to the\nRNN system.\nSystem RNNTransformer\n(PATECH)Transformer\n(KSAI)\nAnnotator 1 168 109 147\nAnnotator 2 193 141\nTable 5: Total amounts of error per annotator and system, as\nannotated in MQM.\nTo delve deeper into the error distribution, we\nplot a histogram to show how many errors ap-\npear in each sentence and how many of these sen-\ntences are there in the output from each system.\nThe mean of both annotators’ annotations for the\nfirst two systems are used, amounting to 100 sen-\ntences per system. 
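A minimal sketch of this aggregation is given below; the record format (system, annotator, sentence id, error type) is only an assumption about how annotations could be exported from translate5, not the tool's actual export schema.

```python
# Minimal sketch of the aggregation behind the per-sentence error histogram:
# count annotated errors per sentence and, for the systems annotated by both
# evaluators, average the two annotators' counts. Sentence ids are assumed to
# run from 0 to n_sentences - 1.
from collections import Counter, defaultdict

def errors_per_sentence(annotations, system, annotators, n_sentences=100):
    """Return one (possibly fractional) error count per sentence for a system."""
    per_annotator = defaultdict(Counter)
    for sys_name, annotator, sent_id, _error_type in annotations:
        if sys_name == system and annotator in annotators:
            per_annotator[annotator][sent_id] += 1
    return [
        sum(per_annotator[a][i] for a in annotators) / len(annotators)
        for i in range(n_sentences)
    ]

def histogram(counts):
    """How many sentences have 0, 1, 2, ... errors (fractional means rounded down)."""
    return Counter(int(c) for c in counts)
```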
The histogram is shown in Fig-\nure 3. It can be observed that more than 35 sen-\ntences in the Transformer (PATECH) output are\nnot annotated with any error while only slightlyover 20 sentences in the RNN are marked as er-\nrorless. The two systems have similar amount of\nsentences with one mistake, while PATECH’s out-\nput contains considerably less sentences than RNN\nwith more than one error.\nFigure 3: Error distribution per system. For RNN and Trans-\nformer (PATECH), the average of annotation data from both\nannotators has been used.\nWe can also see notable differences between the\ntwo Transformer systems in Figure 3. Fewer sen-\ntences in the KSAI’s output are annotated with-\nout error, while considerably more sentences are\ntagged with two errors in this output than in PAT-\nECH’s system.\nWhile comparing the systems in terms of their\ntotal number of errors gives us a clear indica-\ntion of their relative performance, we note that a\nfairer comparison should take into account their\noutputs’ lengths. To that end, we make use of\nthe normalisation approach proposed by Klubi ˇcka\net al. (2018): tokens annotated with errors are\ncounted for each system’s output and they are then\nused to compute each system’s error ratio, which\nequals to the total number of erroneous tokens\n(Chinese characters) divided by the total number\nof tokens in the system’s output. This error ratio\ncan serve then as a general score for each system.\nWe also apply the same normalisation procedure\nto each issue type. Statistical significance for the\ntotal amount of errors and each issue type is com-\nputed with a pairwise chi-squared ( \u001f2) test (Plack-\nett, 1983), following its application to normalised\nMQM errors introduced by Klubi ˇcka et al. (2018).\nTable 6 shows the error ratios (both overall and\nfor each issue type) for each system, together with\nan indication of whether there are significant dif-\nferences between each pair of systems. In terms of\ntotal error ratio, compared to RNN, the error reduc-\ntion by PATECH amounts to 34% relative (11.85%\nversus 17.93%) and is significant ( p< 0:001). No\nsignificant difference is observed between the two\nTransformer-based systems.\nFor nearly half of the error types, the decrease in\nerror ratio for Transformer (PATECH), compared\nto RNN, is statistically significant. For exam-\nple, the number of tokens with Fluency errors de-\ncreased by 45% (6.45% verse 3.56%, p < 0:001).\nThe reduction is particularly notable for its child\ncategory Unintelligible , for which the number of\nerroneous tokens decreased by 55% (2.1% verse\n0.93%,p < 0:001). This Transformer-based sys-\ntem also managed to generate significantly less ex-\ntraneous Function words , gaining a decrease of\n47% (0.51% verse 0.27%, p < 0:05). In addition,\nTransformer manages to produce significantly less\nextraneous Prepositions , (0.2% verse 0.04%, p <\n0:05). Though it also produces less Overly-literal\ntranslations (1.20% verse 0.86%) and no extrane-\nousAdverb ( 0.06% verse 0%), these differences\nare not significant.\nConversely, this Transformer-based system un-\nderperforms on Punctuation (0.2% verse 0.37%),\nalthough the difference is not significant. By trac-\ning this back to the annotation, we can observe that\nTransformer (PATECH) produces several cases of\nmissing, wrong or redundant punctuation marks.\nFor example, in one instance an English period (.)\nwas used instead of a Chinese full stop ( 。). 
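For reference, the error-ratio normalisation and pairwise chi-squared test described above can be sketched as follows. The authors follow the procedure of Klubička et al. (2018); the use of SciPy here is an assumption, since the statistical package is not named.

```python
# Sketch of the normalisation and pairwise significance test described above:
# error ratio = erroneous tokens / total tokens, and a chi-squared test on a
# 2x2 contingency table of erroneous vs. non-erroneous token counts.
from scipy.stats import chi2_contingency

def error_ratio(erroneous_tokens: int, total_tokens: int) -> float:
    """Percentage of tokens (Chinese characters) annotated as erroneous."""
    return 100.0 * erroneous_tokens / total_tokens

def compare_systems(err_a, total_a, err_b, total_b):
    """Pairwise chi-squared test between two systems' erroneous-token counts."""
    table = [[err_a, total_a - err_a],
             [err_b, total_b - err_b]]
    chi2, p_value, _dof, _expected = chi2_contingency(table)
    return chi2, p_value
```

For the total error ratios in Table 6, the contingency table would hold erroneous versus non-erroneous character counts for the two systems being compared, and p < 0.001 corresponds to the "**" marker.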
This\nTransformer system also had issues with adding\nguillemets ( 《》 ) around newspaper names and\nputting commas after adverbials, which are re-\nquired in Chinese grammar.\nBetween the two Transformer systems, we can\nsee that except for the category Entity andUntrans-\nlated , the two Transformer systems do not produce\nstatistically significant different amount of errors.\nIt proves that there are few significant discrepan-\ncies between these two systems.\nFinally, we note that the error category\nUnpaired-mark , has not been used by any of the\nannotators for any of the three MT systems and\nthe category Classifier has only been used to anno-\ntate 6 tokens (0.16%) in the third system’s output.\nWhile these categories were relevant in MT in the\npast (see Section 2), our results seem to indicate\nthat they can be considered to have been solved by\nNMT.RNNTransf-\normer\n(PAT-\nECH)Transf-\normer\n(KSAI)\nAccuracy 11.48 8.29** 7.41\nMistranslation 7.49 4.50** 4.39\nEntity 0.24 0.23 0.59*\nOverly-literal 1.20 0.86 0.51\nOmission 0.61 0.33** 0.35\nAddition 0.23 0.19 0.22\nUntranslated 3.16 3.27 2.45*\nFluency 6.45 3.56** 3.02\nGrammar 3.08 1.83** 2.24\nFunction word 0.51 0.27** 0.40\nExtraneous 0.35 0.12** 0.30\nPreposition 0.20 0.04** 0.13\nAdverb 0.06 0 0.05\nParticle 0.07 0.08 0.08\nIncorrect 0.06 0.08 0\nMissing 0.10 0.07 0.11\nWord order 2.32 1.41** 1.46\nClassifier 0 0 0.16\nUnintelligible 2.10 0.93** 0\nTypography 0.20 0.37 0.59\nPunctuation 0.20 0.37 0.59\nUnpaired-mark 0 0 0\nTotal error ratio 17.93 11.85** 10.40\nTable 6: Error ratio (%) for each error type and overall. The\nannotations on RNN and Transformer (PATECH) from both\nannotators are concatenated. * indicates p-value<0.05 and\n**p-value<0.001, when a system is compared to the system\nadjacent to its left side. Numbers shown in bold indicate that\nthe system has significantly more erroneous tokens in the pair\ncomparison.\n5 Conclusion\nThis paper presented a fine-grained manual evalu-\nation for English!Chinese on the two mainstream\narchitectures of NMT: RNN and Transformer. The\nevaluation was approached in the form of a human\nerror annotation based on a customised MQM er-\nror taxonomy.\nThe error taxonomy was developed from the\nMQM core taxonomy for MT evaluation. Chinese\nlinguistic features and issues emerged in the cal-\nibration set were taken into account by including\ncustomised error types, such as Extraneous func-\ntion word ,Classifier andTypography . The error\ntype Extraneous function word underpins investi-\ngating westernised Chinese phenomena of extrane-\nous function words by specifying it into three word\nclasses: Preposition ,Adverb andParticle .\nFrom our analysis, it is clear that Transformer-\nbased systems generate significantly more accu-\nrate, fluent and comprehensible translation with\nless westernised Chinese expressions. However,\nTransformer systems do not handle typography as\nwell as RNN. We also note that none of the MT\nsystems did produce any errors related to unpaired-\nmarks and only one system produced errors related\nto classifiers, which were very unfrequent (0.16%\nof the tokens). We can conclude that Transformer\nsystems produce an overall better translation com-\npared to RNN when translating from English to\nChinese, which corroborates findings of prior stud-\nies on other language pairs. 
A limitation worth\nmentioning is that our annotation was conducted\nby only two annotators on a limited amount of\ndata.\nOur taxonomy could be of use for further error\nanalysis on Chinese MT quality. Future research\ncould include a larger annotation sample to inves-\ntigate if punctuation is a a common issue in NMT\nsystems based on Transformer and to verify that\nNMT is able to produce correct classifiers. Also,\nas Transformer still shows a major problem in mis-\ntranslation, the error taxonomy can be extended\nwith more specific categories to explore this issue\nin more detail.\nThe annotations for the three MT systems and\nthe code used for the analysis thereof are publicly\navailable.8\nAcknowledgements\nWe would like to thank Rico Sennrich, for trans-\nlating the test set used in this paper with a sys-\ntem he had co-developed for a previous edition of\nthe WMT news translation shared task, and Filip\nKlubi ˇcka, for providing us with the code to per-\nform the statistical analysis of MQM output.\nReferences\nBahdanau, Dzmitry, Kyunghyun Cho, and Yoshua Ben-\ngio. 2014. Neural Machine Translation by Jointly\nLearning to Align and Translate. In Proceedings\nof International Conference on Learning Represen-\ntations 2015 , pages 1–15, San Diego, CA, USA,\nSeptember.\nBentivogli, Luisa, Arianna Bisazza, Mauro Cettolo, and\nMarcello Federico. 2016. Neural versus Phrase-\nBased Machine Translation Quality: a Case Study.\n8https://github.com/yy-ye/mqm-analysisInProceedings of the 2016 Conference on Empiri-\ncal Methods in Natural Language Processing , pages\n257–267, Austin, Texas. Association for Computa-\ntional Linguistics.\nBurchardt, Aljoscha and Arle Lommel, 2014. Practi-\ncal Guidelines for the Use of MQM in Scientific Re-\nsearch on Translation Quality .\nBurlot, Franck and Franc ¸ois Yvon. 2017. Evaluating\nthe morphological competence of machine transla-\ntion systems. In Proceedings of the Second Confer-\nence on Machine Translation , pages 43–55, Copen-\nhagen, Denmark, September. Association for Com-\nputational Linguistics.\nBurlot, Franck, Yves Scherrer, Vinit Ravishankar,\nOndˇrej Bojar, Stig-Arne Gr ¨onroos, Maarit Ko-\nponen, Tommi Nieminen, and Franc ¸ois Yvon.\n2018. The WMT’18 morpheval test suites for\nEnglish-Czech, English-German, English-Finnish\nand Turkish-English. In Proceedings of the Third\nConference on Machine Translation: Shared Task\nPapers , pages 546–560, Belgium, Brussels, October.\nAssociation for Computational Linguistics.\nCallison-Burch, Chris, Cameron Fordyce, Philipp\nKoehn, Christof Monz, and Josh Schroeder. 2007.\n(Meta-) evaluation of machine translation. In Pro-\nceedings of the Second Workshop on Statistical Ma-\nchine Translation - StatMT ’07 , pages 136–158,\nPrague, Czech Republic. Association for Computa-\ntional Linguistics.\nCastilho, Sheila, Joss Moorkens, Federico Gaspari,\nIacer Calixto, John Tinsley, and Andy Way. 2017.\nIs Neural Machine Translation the New State of the\nArt? The Prague Bulletin of Mathematical Linguis-\ntics, 108(1):109–120.\nCohen, Jacob. 1960. A coefficient of agreement\nfor nominal scales. Educational and Psychological\nMeasurement , 20(1):37–46.\nFarr´us, Mireia, Marta Costa-jussa, Jose Bernardo\nMari ˜no Acebal, and Jos ´e Fonollosa. 2010.\nLinguistic-based evaluation criteria to identify sta-\ntistical machine translation errors. 
In Proceedings\nof the 13th Annual Conference of the EAMT , pages\n52–57, Barcelona, Spain, 01.\nGuo, Xinze, Chang Liu, Xiaolong Li, Yiran Wang,\nGuoliang Li, Feng Wang, Zhitao Xu, Liuyi Yang,\nLi Ma, and Changliang Li. 2019. Kingsoft’s neu-\nral machine translation system for WMT19. In\nProceedings of the Fourth Conference on Machine\nTranslation (Volume 2: Shared Task Papers, Day 1) ,\npages 196–202, Florence, Italy, August. Association\nfor Computational Linguistics.\nHassan, Hany, Anthony Aue, Chang Chen, Vishal\nChowdhary, Jonathan Clark, Christian Feder-\nmann, Xuedong Huang, Marcin Junczys-Dowmunt,\nWilliam Lewis, Mu Li, Shujie Liu, Tie-Yan Liu,\nRenqian Luo, Arul Menezes, Tao Qin, Frank Seide,\nXu Tan, Fei Tian, Lijun Wu, and Ming Zhou. 2018.\nAchieving human parity on automatic chinese to\nenglish news translation. arXiv:1803.05567 [cs] ,\npages 1–25, 03.\nHsu, John D. 2014. Error Classification of Ma-\nchine Translation A Corpus-based Study on Chinese-\nEnglish Patent Translation. Translation Studies\nQuarterly , 18:121–136.\nHuang, Chu-Ren, Shu-Kai Hsieh, Keh-Jiann Chen,\nShu-Kai Hsieh, and Keh-Jiann Chen. 2017. Man-\ndarin Chinese Words and Parts of Speech : A\nCorpus-based Study . Routledge, 7.\nJin, Jing. 2018. Partition and Quantity : Numeral\nClassifiers, Measurement, and Partitive Construc-\ntions in Mandarin Chinese . Routledge, 06.\nKlubi ˇcka, Filip, Antonio Toral, and V ´ıctor M. S ´anchez-\nCartagena. 2018. Quantitative Fine-Grained Hu-\nman Evaluation of Machine Translation Systems: a\nCase Study on English to Croatian. arXiv e-prints ,\n32(3):195–215.\nKoehn, Philipp, Franz Josef Och, and Daniel Marcu.\n2003. Statistical phrase-based translation. In Pro-\nceedings of the 2003 Conference of the North Amer-\nican Chapter of the Association for Computational\nLinguistics on Human Language Technology - Vol-\nume 1 , NAACL ’03, pages 48–54, Stroudsburg, PA,\nUSA. Association for Computational Linguistics.\nLakew, Surafel Melaku, Mauro Cettolo, and Marcello\nFederico. 2018. A comparison of transformer and\nrecurrent neural networks on multilingual neural ma-\nchine translation. In Proceedings of the 27th Inter-\nnational Conference on Computational Linguistics ,\npages 641–652, Santa Fe, New Mexico, USA, Au-\ngust. Association for Computational Linguistics.\nLi, Jin-Ji, Jungi Kim, Dong-Il Kim, and Jong-Hyeok\nLee. 2009. Chinese Syntactic Reordering for\nAdequate Generation of Korean Verbal Phrases in\nChinese-to-Korean SMT. In Proceedings of the\nFourth Workshop on Statistical Machine Translation ,\npages 190–196. Association for Computational Lin-\nguistics.\nLommel, Arle and Aljoscha Burchardt. 2014. As-\nsessing inter-annotator agreement for translation er-\nror annotation. In MTE: Workshop on Automatic and\nManual Metrics for Operational Translation Evalu-\nation , Reykjavik, Iceland.\nLommel, Arle, Hans Uszkoreit, and Aljoscha Bur-\nchardt. 2014. Multidimensional quality metrics\n(mqm): A framework for declaring and describing\ntranslation quality metrics. Tradum `atica: tecnolo-\ngies de la traducci ´o.\nPlackett, R. L. 1983. Karl pearson and the chi-squared\ntest. International Statistical Review / Revue Inter-\nnationale de Statistique , 51(1):59–72.Popovi ´c, Maja. 2017. Comparing language related\nissues for nmt and pbmt between german and en-\nglish. 
The Prague Bulletin of Mathematical Linguis-\ntics, 108(1):209–220.\nSennrich, Rico, Alexandra Birch, Anna Currey, Ulrich\nGermann, Barry Haddow, Kenneth Heafield, An-\ntonio Valerio Miceli Barone, and Philip Williams.\n2017. The University of Edinburgh’s Neural MT\nSystems for WMT17. In Proceedings of the Second\nConference on Machine Translation , pages 389–399,\nCopenhagen, Denmark, 09. Association for Compu-\ntational Linguistics.\nShterionov, Dimitar, Riccardo Superbo, Pat Nagle,\nLaura Casanellas, Tony O’dowd, and Andy Way.\n2018. Human versus automatic quality evaluation\nof nmt and pbsmt. Machine Translation , 32(3):217–\n235.\nTang, Gongbo, Mathias M ¨uller, Annette Rios, and Rico\nSennrich. 2018a. Why self-attention? a targeted\nevaluation of neural machine translation architec-\ntures. arXiv preprint arXiv:1808.08946 .\nTang, Gongbo, Rico Sennrich, and Joakim Nivre.\n2018b. An analysis of attention mechanisms: The\ncase of word sense disambiguation in neural ma-\nchine translation. In Proceedings of the Third Con-\nference on Machine Translation: Research Papers ,\npages 26–35, Brussels, Belgium, October. Associa-\ntion for Computational Linguistics.\nTran, Ke, Arianna Bisazza, and Christof Monz. 2018.\nThe importance of being recurrent for modeling hier-\narchical structure. In Proceedings of the 2018 Con-\nference on Empirical Methods in Natural Language\nProcessing , pages 4731–4736, Brussels, Belgium.\nAssociation for Computational Linguistics.\nTse, Yiu Kay 谢耀基. 2001. ”hanyu yufan ouhua\nzongshu” 汉语语法欧化综述[a review on the west-\nernised chineses grammar]. 语文研究, 1:17–22.\nVaswani, Ashish, Noam Shazeer, Niki Parmar, Jakob\nUszkoreit, Llion Jones, Aidan N. Gomez, Lukasz\nKaiser, and Illia Polosukhin. 2017. Attention is all\nyou need. abs/1706.03762.\nVilar, David, Jia Xu, Luis Fernando D’Haro, and Her-\nmann Ney. 2006. Error Analysis of Statistical Ma-\nchine Translation Output. In Proceedings of the Fifth\nInternational Conference on Language Resources\nand Evaluation (LREC’06) , pages 697–702.\nYang, Baosong, Longyue Wang, Derek F. Wong,\nLidia S. Chao, and Zhaopeng Tu. 2019. Assessing\nthe ability of self-attention networks to learn word\norder. In Proceedings of the 57th Annual Meet-\ning of the Association for Computational Linguistics ,\npages 3635–3644, Florence, Italy, July. Association\nfor Computational Linguistics.",
"main_paper_content": null
}
|
{
"decision": "Unknown",
"reviews": []
}
| 0 | 0 |
[] |
[] |
{
"id": "KYxhS5oX4oE",
"year": null,
"venue": "EAMT 2016",
"pdf_link": "https://aclanthology.org/W16-3422.pdf",
"forum_link": "https://openreview.net/forum?id=KYxhS5oX4oE",
"arxiv_id": null,
"doi": null
}
|
{
"title": "Collaborative Development of a Rule-Based Machine Translator between Croatian and Serbian",
"authors": [
"Filip Klubicka",
"Gema Ramírez-Sánchez",
"Nikola Ljubesic"
],
"abstract": "Filip Klubička, Gema Ramírez-Sánchez, Nikola Ljubešić. Proceedings of the 19th Annual Conference of the European Association for Machine Translation. 2016.",
"keywords": [],
"raw_extracted_content": "Baltic J. Modern Computing, V ol. 4 (2016), No. 2, pp. 361–367\nCollaborative development of a rule-based\nmachine translator between Croatian and Serbian\nFilip KLUBI ˇCKA1, Gema RAM ´IREZ-S ´ANCHEZ2, Nikola LJUBE ˇSI´C1;3\n1University of Zagreb, Ivana Lu ˇci´ca 3, HR-10000 Zagreb, Croatia\n2Prompsit Language Engineering, Avenida de la Universidad s/n, ES-03202 Elche, Spain\n3Joˇzef Stefan Institute, Jamova cesta 39, SI-1000 Ljubljana, Slovenia\[email protected], [email protected], [email protected]\nAbstract. This paper describes the development and current state of a bidirectional Croatian-\nSerbian machine translation system based on the open-source Apertium platform. It has been\ncreated inside the Abu-MaTran project with the aims of creating free linguistic resources as well\nas having non-experts and experts work together. We describe the collaborative way of collecting\nthe necessary data to build our system, which outperforms other available systems.\nKeywords: machine translation, collaboration, Apertium, open-source, Croatian, Serbian\n1 Introduction\nCroatian and Serbian are language varieties and official registers of the pluricentric\nBosnian-Croatian-Montenegrin-Serbian (BCMS) language. Although mutually intelli-\ngible, the national varieties are standardised differently, and both communities have a\nhigh interest to produce documentation that adheres to these standards, if for no other\nreason, then for the sake of producing standard documents for Serbian, the official lan-\nguage of an EU candidate state. Thus it is sensible to make use of a related language\nof a recent member state and employ machine translation between these two language\nvarieties to meet this aim.\nCreating machine translation (MT) systems for South-Slavic languages, both be-\ntween themselves and other languages, is also the aim of the Abu-MaTran project.4In\nthe first phase of the project, the focus was on MT between English and Croatian, while\nMT between South-Slavic is the focus of the second phase. The system presented in\nthis paper will be used within the project to increase the amount of English - Serbian\nparallel data by translating the Croatian side of English-Croatian parallel data to Ser-\nbian. It will also be added to another by-product of the Abu-MaTran project - AltLang\n- a service for translating between language varieties.5\n4http://abumatran.eu\n5http://www.altlang.net\n362 Klubi ˇcka et al.\n2 Related work\nForcada et al. (2011) and their open-source Apertium platform have shown that, when\ndoing machine translation between language variants or closely-related languages like\nSpanish and Catalan, a rule-based shallow transfer approach is often sufficient to pro-\nduce good quality translations. Indeed, work has already been done in building rule\nbased translators from BCMS into Macedonian and Slovene (Peradin et al, 2014). To\nour knowledge, however, no similar work has been done for the Croatian-Serbian lan-\nguage pair specifically. 
The only accessible state of the art system for this pair is Google\nTranslate ,6which reaches a BLEU score of 82.27 in the Serbian-Croatian direction.\nHowever, the statistical approach that Google uses, which has also been explored in\n(Popovi ´c et al, 2014) but only using small corpora, is not a feasible option for us, as\nthere are not enough parallel corpora available to train SMT systems that can deal with\nthe minute differences between the two languages without introducing additional noise.\nNonetheless, some free linguistic resources were initially available to us: the HBS\nmonolingual dictionary7built for other Apertium language pairs like HBS-Macedonian\n(Peradin and Tyers, 2012) and HBS-Slovene (Peradin et al, 2014), the SETimes news\ncorpora of both Croatian and Serbian8and the hrWaC and srWaC web corpora (Ljube ˇsi´c\nand Klubi ˇcka, 2014). This is always an advantage, as both monolingual and bilingual\ncorpora are extensively used to semiautomatically extract knowledge for Apertium such\nas frequent non-covered entries, bilingual correspondences, rules, development and test\nsets, and data needed to train statistical part-of-speech taggers.\nConsidering the amount of available data, coupled with the fact that differences\nbetween Croatian and Serbian occur mostly at the level of orthography and lexicon,\nwith only a bit of syntax (limited only to specific structures and verbal tenses), a rule-\nbased approach makes the most sense. We expect a high quality and a more controlled\noutput from such a system, reproducing other Apertium-based success stories such as\nthe Norwegian Nynorsk-Norwegian Bokm ˚al (Unhammer et al, 2006) or Spanish and\nAragonese (Cort ´es et al, 2012) language pairs.\n3 Apertium language pair\nThe structure of the Croatian-Serbian language pair is based on the same structure\nshared by other Apertium language pairs. This esentially includes two monolingual\ndictionaries (source and target) which are used as morphological anlysers/generators,\none set of morphological tags for the part-of-speech tagger (currently shared by the\ntwo languages involved), and two sets of structural transfer rules (one for each transla-\ntion direction). However, because there is significant overlap in the lexemes of the lan-\nguages, instead of two separate monolingual dictionaries, there is only one. In addition\nto pairing lemmas with inflectional paradigms, this monolingual dictionary - called the\nmetadix - explicitly encodes differences between the three language varieties (Bosnian,\nCroatian, Serbian) with regards to variant-specific lexemes and the reflex of the vowel\n6http://translate.google.com\n7HBS is the ISO 639-3 code for the macrolanguage covering the three languages in question\n8http://nlp.ffzg.hr/resources/corpora/setimes/\nCollaborative RBMT, HR-SR 363\nyat.9Furthermore, the language pair includes a bilingual dictionary which explicates\nlexical differences as one-to-one translations, a shared Hidden Markov Model (HMM)\ntagger, a transfer module for each translation direction and a transliterator for the cyrilic\nand latin alphabets.\nThe basic system on which we started making improvements was produced in only\na couple of weeks by extracting relevant components from existing language pairs. In\nother words, we took the dictionaries from HBS-Slovene, the tagger from the HBS\nmodule, a bilingual dictionary created from monolingual entries and the transfer rules\nfor agreement between basic noun phrases. 
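The metadix mechanism illustrated in footnote 9 marks the yat reflex so that a single entry yields both 'pjevačica' and 'pevačica'. The toy sketch below mimics that idea only; it is not how Apertium's XML dictionaries are actually compiled or processed.

```python
# Toy illustration of the idea behind the shared "metadix" entry in footnote 9:
# one lemma template marked for the yat reflex expands to the Croatian (ijekavian)
# or Serbian (ekavian) stem. This mimics the mechanism only and is not Apertium's
# actual implementation.
YAT = "{e_je}"  # placeholder marking the yat reflex inside a stem template

def expand_yat(template: str, variant: str) -> str:
    """Expand a yat-marked stem for a given variant ('hr' or 'sr')."""
    reflex = {"hr": "je", "sr": "e"}[variant]
    return template.replace(YAT, reflex)

# One shared entry covers both standards, cf. <par n="e_jeyat"/> in the metadix:
entry = "p" + YAT + "vačica"
print(expand_yat(entry, "hr"))  # pjevačica (Croatian)
print(expand_yat(entry, "sr"))  # pevačica  (Serbian)
```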
Additionally, the work presented in this\npaper also kicked off the efforts to enrich the HBS monolingual dictionary, which ran\nin parallel with our construction of the Croatian-Serbian language pair, and resulted in\nApertium’s largest lexicon to date, with 97,437 lemmas (Ljube ˇsi´c et al, 2016).\n4 Development\nEven though there is considerable overlap, the biggest source of differences between\nCroatian and Serbian is still the differing lexicon. Thus it was important to construct a\nlarge, high-coverage bilingual dictionary. Additionally, transfer rules needed to be de-\nfined to account for the few syntactic differences between the languages. Each of these\ntasks was tackled in two phases - at hands-on Abu-MaTran workshops held in Zagreb\nand within a course held during the winter semester of 2015/2016 at the University of\nZagreb, titled Selected chapters in Natural Language Processing .10\nThe approach to including non-experts in the process consisted of creating very\nfocused tasks for data which is needed for each of the Apertium modules based on\nmaterials created beforehand, e.g. in the form of precomputed bilingual entries that\nthey had to assess. When possible, user-friendly interfaces or very simple spreadsheets\nwere used to lower the technical barrier. After each task, the contributors were able to\nsee the impact of their collaborative work in the translator’s performance almost real-\ntime, which proved to be very motivating. While larger groups could work on dictionary\nentries (as this is an easy task), only a reduced group worked on writing transfer rules\n(as this requires an advanced level of technical knowledge).\n4.1 Adding bilingual entries\nFirst phase: The first workshop was focused on monolingual and bilingual dictionar-\nies.11We automatically produced bilingual candidates from comparable corpora - hrWaC\nand srWaC (Ljube ˇsi´c and Klubi ˇcka, 2014) - by identifying lexemes from the Serbian\ncorpus that, given their frequency in the Croatian corpus, were occurring much more\nfrequently than by chance. The workshop participants validated the candidates and\n9For example, the following lexical entry extracted from the metadix produces either\nthe surface form ’pjeva ˇcica’ or ’peva ˇcica’, depending on the chosen language variant:\n<e lm=”pjeva ˇcica”><i>p</i><par n=”e jeyat”/><i>vaˇcic</i><par n=”vodnic/a n”/></e>\n10Within this course, students were taught about machine translation and the Apertium frame-\nwork, among other things.\n11Materials available at http://www.abumatran.eu/?p=292\n364 Klubi ˇcka et al.\nadded additional linguistic information, such as pointing out parts of speech, morpho-\nlogical differences and translation direction details.12This workshop resulted in the ad-\ndition of approximately 485 new entries in a single day. These entries were additionally\nchecked by experts later on.\nSecond phase: During a one-semester course, our students collected bilingual data\nand produced many new entries for the bilingual dictionary using several methods, rang-\ning from running texts of their choice through our translator and filling the bilingual dic-\ntionary with the untranslated lexemes, to validating and adding bilingual candidates ex-\ntracted by using the output of a distributional similarity tool (Fi ˇser and Ljube ˇsi´c, 2011)\napplied to texts from the Croatian and Serbian Wikipedia. 
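The frequency-based extraction of bilingual candidates mentioned above (Serbian lexemes that, given their Croatian frequencies, occur much more often than chance would predict) is not specified in detail, but one simple way to approximate it is sketched below. The smoothed per-million frequency ratio, the thresholds and the toy word lists are assumptions made only for illustration, not the authors' procedure.

```python
# Hedged sketch: flag Serbian lexemes over-represented in one corpus relative
# to the other. The statistic (a smoothed per-million frequency ratio) and
# the thresholds are illustrative choices, not the authors' actual method.
from collections import Counter

def per_million(counts, total):
    return {w: 1e6 * c / total for w, c in counts.items()}

def overrepresented(sr_tokens, hr_tokens, ratio_threshold=10.0, min_count=5):
    sr_counts, hr_counts = Counter(sr_tokens), Counter(hr_tokens)
    sr_pm = per_million(sr_counts, sum(sr_counts.values()))
    hr_pm = per_million(hr_counts, sum(hr_counts.values()))
    candidates = []
    for word, pm in sr_pm.items():
        if sr_counts[word] < min_count:
            continue                             # ignore very rare words
        if pm / (hr_pm.get(word, 0.0) + 1.0) >= ratio_threshold:
            candidates.append(word)              # +1 smoothing avoids division by zero
    return sorted(candidates)

if __name__ == "__main__":
    sr = "hleb hleb hleb hleb hleb voda kuća".split()   # toy stand-in for srWaC
    hr = "kruh kruh kruh kruh voda kuća".split()        # toy stand-in for hrWaC
    print(overrepresented(sr, hr, ratio_threshold=2.0, min_count=2))   # ['hleb']
```

Candidates selected this way would still be validated and enriched by human contributors, as described above.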
By the end of the course, the\ndictionary contained 1694 bilingual entries, which is also its current size.\n4.2 Adding rules\nFirst phase: Our second workshop focused on transfer rules13from Serbian to Croatian.\nWe automatically extracted rules for Serbian to Croatian (S ´anchez et al., 2015) and our\nworkshop participants validated them based on actual examples of these rules, answer-\ning the simple question ”Is this a valid translation?”14We taught them how to formalise\nthe rules and presented them with 100 rules to be validated. The implementation of\nthe rules was done by experts after the workshop. In 1 week we implemented 25 new\nSerbian to Croatian rules and 10 basic Croatian to Serbian rules.\nSecond phase: Nearing the end of the course, after adding sufficient bilingual en-\ntries, the students were taught about shallow transfer rules. They once again looked into\nthe outputs of the texts they ran through the translator and annotated the syntactic mis-\ntakes occurring in the translations. Some of the rules that could be fixed via shallow\ntransfer were added during the course for demonstration purposes, but most were for-\nmalised and implemeted as a result of the joint work between a language expert and an\nApertium expert during a secondment at Prompsit Language Engineering. At the end\nof this stage, the number of Serbian to Croatian rules was extended to 99 rules, and\nCroatian to Serbian to 82, which is the current state of the system. Most of the rules\nimplemented cover a bit of syntax via short-distance word shifting (e.g. there are sev-\neral verbal constructions involving the daparticle which differ between the languages\nin regards to word order and whether the daparticle is present or not)15, as well as\nagreement rules (e.g. if the head noun of a noun phrase changes gender in translation,\nthe premodifying adjectives need to change gender as well).16\n12Participants would point out whether the translation of a given lexeme is bidirectional (like\ndirektorica-direktorka ), just from Croatian to Serbian (like zabava- ˇzurka ), or just from Serbian\nto Croatian (like kasnije-docnije )\n13Materials available at http://www.abumatran.eu/?p=418\n14[SR] Zemlje jugoisto ˇcne Evrope trebale bi da suraduju\n[HR] Zemlje jugoisto ˇcne Europe trebale bi suradivati\n15[SR] da li mo ˇzeˇs\n[HR] mo ˇzeˇs li\n16[SR] na ˇs brzi ra ˇcunar (masculine)\n[HR] na ˇse brzo ra ˇcunalo (neuter)\nCollaborative RBMT, HR-SR 365\n4.3 Tagger training\nAdditional insight gained during the workshops and coursework was that the HBS tag-\nger was in serious need of improvement. The tagger we had at the beginning of the\ndescribed process was using a constraint grammar, and it was producing many errors,\nwhich very palpably hindered the translation process. Fortunately, by the time the bilin-\ngual lexicon and transfer rules were extended, we had the newly created hr500k Croat-\nian training corpus (Ljube ˇsi´c et al, 2016) at our disposal, so we decided to train a statis-\ntical tagger17based on Hidden Markov Models (Rabiner, 1989) with the tools provided\nin Apertium. A necessary preprocessing step was to transfer the tags in the training cor-\npus from the MULTEXT-East Morphosyntactic Specifications, revised Version 418to\nApertium’s notation. This was done by automatically mapping the hr500k training cor-\npus to the Apertium tagset, retaining only sentences with full coverage and splitting this\ndataset into training and test data. 
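A minimal sketch of the preprocessing just described — mapping MULTEXT-East MSD tags to Apertium tags, keeping only fully covered sentences, and splitting the result into training and test portions — might look as follows. The two-entry tag mapping and the sentence representation are placeholders, not the actual mte4r-upos mapping file.

```python
# Minimal sketch of the preprocessing described above. The toy tag mapping and
# the list-of-(token, MSD) sentence format are assumptions for illustration.
import random

MSD_TO_APERTIUM = {"Ncfsn": "n.f.sg.nom", "Vmr3s": "vblex.pri.p3.sg"}   # toy mapping

def map_sentence(sentence):
    """Return the sentence with mapped tags, or None if any tag is unmappable."""
    mapped = []
    for token, msd in sentence:
        tag = MSD_TO_APERTIUM.get(msd)
        if tag is None:
            return None                       # sentence is not fully covered
        mapped.append((token, tag))
    return mapped

def prepare(sentences, test_size=500, seed=0):
    covered = [m for s in sentences if (m := map_sentence(s)) is not None]
    random.Random(seed).shuffle(covered)
    return covered[test_size:], covered[:test_size]    # train, test

if __name__ == "__main__":
    corpus = [[("pjevačica", "Ncfsn")], [("pjeva", "Vmr3s")], [("xyz", "Xx")]]
    train, test = prepare(corpus, test_size=1)
    print(len(train), len(test))              # 1 1 -- the third sentence is dropped
```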
This left us with 145,626 tokens (9,465 sentences) of\ntraining data and 7,682 tokens (500 sentences) of test data.\nAdditionally, a tagset file with ambiguity classes was defined so as to narrow down\nthe tagset as much as possible. This step makes learning the morphological disambigua-\ntion process feasible as the amount of training data that would be necessary to observe\nall the possible sequences of full tags, given the rich morphology of the languages, is\nmany orders higher than the amount of data currently available.\nWe performed a comparative intrinsic evaluation of both the constraint grammar and\nstatistical tagger on the 500-sentence test dataset. We evaluated both taggers via token-\nlevel accuracy. In this setting, the improvement in accuracy was quite substantial: while\nthe old constraint grammar-based tagger had an accuracy of 76%, the new HMM tagger\nachieved an accuracy of 90.19%.\n5 Evaluation\nFinally, we perform a comparative evaluation of our system, but we present an eval-\nuation of only the Serbian to Croatian direction as this direction was the initial fo-\ncus of the development and the other direction was still under development at the\nmoment of presenting these results. We compare our system to the output of Google\nTranslate ,19as this is the current state of the art system. For our baseline we assume\nthat the output is identical to the input, a setup which yields the lowest evaluation\nscores. Our SMT baseline was constructed by training a phrase-based Moses system on\n200k segments from the SETimes parallel corpus, with an additional 2 thousand seg-\nments of development data, while we use hrWaC2.0 for building the language model\n(Ljube ˇsi´c and Klubi ˇcka, 2014).\nFor the evaluation we use a test set consisting of 351 Serbian sentences gathered\nfrom newspaper texts that were manually translated into Croatian by students. We eval-\nuate the system with BLEU (Papineni et al, 2006) and TER (Snover et al, 2006). Table\n1 shows the results of the evaluation process.\n17It should be noted that even though using the same tagger for both Croatian and Serbian is not\nideal, previous experiments (Agi ´c et al., 2013) have shown that only a minor drop in accuracy\nshould be expected from this setting.\n18https://github.com/ffnlp/sethr/blob/master/mte4r-upos.mapping\n19Output retrieved on 2016-01-27\n366 Klubi ˇcka et al.\nBLEU TER\nbaseline 72.66 0.1300\nSMT 73.54 0.1255\nGoogle 82.27 0.0873\nApertium 82.97 0.0782\nTable 1. Results of the MT evaluation. Statistically significantly better results are in bold.\nWhen compared to our baseline systems, the evaluation scores are decidedly posi-\ntive. When compared to Google’s system, we also improve, but the question is whether\nthis improvement is statistically significant. To calculate this we use approximate ran-\ndomisation with 1000 iterations, and while the reported 0.7 point improvement in BLEU\nyields a p-value of 0.384, which is too high to prove statistical significance, the improve-\nment in TER by -0.0091 is in fact statistically significant, with a p-value of 0.018. Given\nthat BLEU is known to favour statistical machine translation in its evaluation, it is safe\nto claim that our system outperforms that of Google.\n6 Conclusion\nIn this paper we present a bidirectional machine translation system between Croatian\nand Serbian, which was collaboratively developed between the University of Zagreb\nand Prompsit Language Engineering in the framework of the Abu-MaTran project. 
To\nachieve this, we combine Apertium’s resources with the University of Zagreb’s man-\npower and resources, taking advantage of our researcher’s employment and second-\nments, as well as hands-on workshops organised as part of our Abu-MaTran activities\nto get other interested parties to help with the creation of additional necessary linguistic\nresources.\nThe result of this work is a system that has been developed in a total of approx-\nimately 6 person months (including experiments for semi-automatic extraction of vo-\ncabulary and data, work in dictionaries, HMM and implementation of rules, workshop\nand course materials, training of non-experts and evaluation) and which outperforms\nthe current state of the art. The contribution of this work for the wider community is\nthe release of numerous freely available linguistic tools and resources, as well as the\nconsiderable transfer of knowledge between all participating institutions. Additionally,\nthis system opens up the possibility of smoothing the way towards translating official\nEU documents that are and will be published in Croatian20into Serbian, the language\nof an EU candidate state.\nFuture work will go into extending the system and further evaluating both transla-\ntion directions, creating combinations with Bosnian, using it to create synthetic training\ndata, and adding it to AltLang to offer a commercial service that uses the current system\nto customise content to a specific language variant.\n20E.g. the acquis communautaire ; EU parallel corpora and translation memories such as DGT\n(https://ec.europa.eu/jrc/en/language-technologies/dgt-translation-memory); the Special Edi-\ntion of the EU Official Journal (http://eur-lex.europa.eu/eu-enlargement/hr/special.html)\nCollaborative RBMT, HR-SR 367\nAcknowledgements\nThe research leading to these results has received funding from the European Union\nSeventh Framework Programme FP7/2007-2013 under grant agreement PIAP-GA-2012-\n324414 (Abu-MaTran) and the Swiss National Science Foundation grant IZ74Z0 160501\n(ReLDI).\nReferences\nAgi´c,ˇZ., Ljube ˇsi´c, N., Merkler, D. (2013). Lemmatization and morphosyntactic tagging of Croa-\ntian and Serbian. In Proceedings of the Fourth Biennial International Workshop on Balto-\nSlavic Natural Language Processing , Sofia, Bulgaria.\nCort´es Mart ´ınez, J.P., O’Regan, J., Tyers, F.M. (2012). Free/Open Source Shallow-Transfer Based\nMachine Translation for Spanish and Aragonese. In Proceedings of the Eighth International\nConference on Language Resources and Evaluation , Istanbul, Turkey.\nFiˇser, D., Ljube ˇsi´c, N. (2011). Bilingual Lexicon Extraction from Comparable Corpora for\nClosely Related Languages. In Proceedings of the Recent Advances in Natural Langugage\nProcessing Conference , Hissar, Bulgaria.\nForcada, M.L., Ginest ´ı-Rosell, M., Nordfalk, J., O’Regan, J., Ortiz-Rojas, S., P ´erez-Ortiz, J.A.,\nS´anchez-Mart ´ınez, F., Ram ´ırez-S ´anchez, G., Tyers, F.M. (2011). Apertium: a free/open-\nsource platform for rule-based machine translation . Machine Translation. 25(2):127-144\nLjube ˇsi´c, N., Klubi ˇcka, F. (2014). fbs,hr,sr gWaC – Web corpora of Bosnian, Croatian and Ser-\nbian. In Proceedings of the 9th Web as Corpus Workshop (WaC-9) , Gothenburg, Sweden.\nLjube ˇsi´c, N., Klubi ˇcka, F., Agi ´c,ˇZ., Jazbec, I. (2016). New Inflectional Lexicons and Training\nCorpora for Improved Morphosyntactic Annotation of Croatian and Serbian. 
In Proceedings\nof the Tenth International Conference on Language Resources and Evaluation , Portoro ˇz,\nSlovenia.\nPapineni, K., Roukos, S., Ward, T., and Zhu, W.-J. (2002). BLEU: A method for automatic eval-\nuation of machine translation. In Proceedings of the 40th Annual Meeting on Association for\nComputational Linguistics , Philadelphia, Pennsylvania.\nPeradin, H., Tyers, F. (2012). A rule-based machine translation system from Serbo-Croatian\nto Macedonian . Third International Workshop on Free/Open-Source Rule-Based Machine\nTranslation (FreeRBMT 2012).\nPeradin, H., Petkovski, F., Tyers, F. (2014). Shallow-transfer rule-based machine translation for\nthe Western group of South Slavic languages. In Proceedings of the 9th SaLTMiL Workshop\non Free/open-Source Language Resources for the Machine Translation of Less-Resourced\nLanguages , Reykjavik, Iceland.\nPopovi ´c, M., Ljube ˇsi´c N. Exploring cross-language statistical machine translation for closely\nrelated South Slavic languages. Language Technology for Closely Related Languages and\nLanguage Variants (LT4CloseLang) , Doha, Qatar.\nRabiner, L. R. (1989). A tutorial on hidden Markov models and selected applications in speech\nrecognition. In Proceedings of the IEEE\nS´anchez-Cartagena, V .M., P ´erez-Ortiz, J.A., S ´anchez-Mart ´ınez, F. (2015). A generalised align-\nment template formalism and its application to the inference of shallow-transfer machine\ntranslation rules from scarce bilingual corpora. In Computer Speech & Language\nSnover, M., Dorr, B., Schwartz, R., Micciulla, L., and Makhoul, J. (2006). A Study of Translation\nEdit Rate with Targeted Human Annotation. In Proceedings of AMTA\nUnhammer, K., Trosterud, T. (2009). Reuse of free resources in machine translation between\nNynorsk and Bokm ˚al. In Proceedings of the First International Workshop on Free/Open-\nSource Rule-Based Machine Translation , Alicante, Spain.\nReceived May 2, 2016 , accepted May 18, 2016",
"main_paper_content": null
}
|
{
"decision": "Unknown",
"reviews": []
}
| 0 | 0 |
[] |
[] |
{
"id": "1G7VBHrWig",
"year": null,
"venue": "EAMT 2016",
"pdf_link": "https://aclanthology.org/W16-3421.pdf",
"forum_link": "https://openreview.net/forum?id=1G7VBHrWig",
"arxiv_id": null,
"doi": null
}
|
{
"title": "Dealing with Data Sparseness in SMT with Factured Models and Morphological Expansion: a Case Study on Croatian",
"authors": [
"Víctor M. Sánchez-Cartagena",
"Nikola Ljubesic",
"Filip Klubicka"
],
"abstract": "Victor M. Sánchez-Cartagena, Nikola Ljubešić, Filip Klubička. Proceedings of the 19th Annual Conference of the European Association for Machine Translation. 2016.",
"keywords": [],
"raw_extracted_content": "Baltic J. Modern Computing, V ol. 4 (2016), No. 2, pp. 354–360\nDealing with Data Sparseness in SMT with\nFactored Models and Morphological Expansion:\na Case Study on Croatian\nV´ıctor M. S ´ANCHEZ-CARTAGENA1, Nikola LJUBE ˇSI´C2;3, Filip KLUBI ˇCKA3\n1Prompsit Language Engineering, Av. Universitat, s/n. Edifici Quorum III, E-03202, Elx, Spain\n2Dept. of Knowledge Technologies, Jo ˇzef Stefan Institute, Jamova cesta 32, SI-1000, Ljubljana,\nSlovenia\n3Dept. of Information and Communication Sciences, University of Zagreb\[email protected], [email protected], [email protected]\nAbstract. This paper describes our experience using available linguistic resources for Croatian in\norder to address data sparseness when building an English-to-Croatian general domain phrase-\nbased statistical machine translation system. We report the results obtained with factored translation\nmodels and morphological expansion, highlight the impact of the algorithm used for tagging\nthe corpora, and show that the improvement brought by these methods is compatible with the\napplication of data selection on out-of-domain parallel corpora.\nKeywords: data sparseness, factored translation models, morphological expansion\n1 Introduction\nData sparseness is a well-known problem that phrase-based statistical machine transla-\ntion (SMT) systems suffer from when dealing with highly inflected languages, especially\nwhen the highly inflected one is the target language (TL). In these kinds of languages, a\nsingle word (lemma) can have dozens of different inflected forms. Translation perfor-\nmance is hampered because it is difficult to observe all the forms of a given word (in the\ndifferent contexts relevant for translation) in the training corpus.\nThis paper is part of the Abu-MaTran project, where we aim to provide machine\ntranslation support to Croatian, as the official language of a new EU country. Croatian\nis a highly inflected language and hence it is affected by data sparseness. For instance,\nadjectives inflect for 3genders, 2numbers and 7cases and the hrLex Croatian inflectional\nlexicon ( Ljube ˇsi´c et al, 2016b ) contains 939unique morphosyntactic description tags.\nIn this paper, we show how we dealt with that problem in a general-domain English-\nto-Croatian phrase-based SMT system by leveraging a Croatian inflectional lexicon\nand adapting existing solutions in the literature, namely factored translation models\nDealing with Data Sparseness in SMT: a Case Study on Croatian 355\n(Koehn and Hoang, 2007 ) and morphological expansion (Turchi and Ehrmann, 2011).\nSections 2 and 3 respectively describe these solutions, while Section 4 shows that they\ncan be successfully combined with a data selection strategy. The paper ends with a brief\ndescription of related approaches and some concluding remarks and future directions.\n2 Factored translation models\nFactored translation models ( Koehn and Hoang, 2007 ) split the translation of words in\nthe translation of different factors (surface forms, lemmas, lexical categories, morphosyn-\ntactic information, etc.). Among the different ways these factors can be combined, we\nopted for producing a surface form factor and a morphosyntactic description (MSD)\nfactor for each word in the output, and used two different language models (LMs), one\noperating on surface forms and another one on MSDs. 
This setup has been reported to\nbe effective (and efficient in terms of decoding time) when the TL is highly inflected but\nthe SL is not ( Skadin ˇs et al., 2010 ), since it helps the decoder to produce grammatically\ncorrect phrases that have not been observed in the training corpus. We considered the\nfollowing aspects when building our factored phrase-based SMT system.\n–Order of the MSD LM. The order of surface-form based LMs is usually set to 5. As\nthe number of different MSDs is several orders of magnitude lower than the number\nof different surface forms, a greater order can also be considered.\n–Corpora tagging algorithm. In order to obtain the MSD factor of the TL side of the\nparallel corpus and the TL monolingual corpus, a part-of-speech (PoS) tagger is\nneeded. We tested the effect of lexicon constraining in PoS tagging.\n2.1 Constrained part of speech tagging\nThe best performing tagger available for Croatian is a CRF-based tagger (Ljube ˇsi´c et\nal, 2016b). While the tagger makes use of the hrLex inflectional lexicon (Ljube ˇsi´c et al,\n2016b), the corresponding lexicon entries are not used for constraining the tagger to the\npotential tags, but just as features. As a result, the number of translations for each SL\nphrase in a factored system grows when compared to a non-factored phrase-based SMT\nsystem: in the system setups described in Section 2.2, the average number of translations\nper SL phrase in the phrase table of our non-factored system is 2:091. This value grows\nto2:119in the factored system that uses the CRF tagger. Moreover, we observed that the\nincrease in the number of translation options caused by unconstrained tagging is more\nrelevant in frequent SL phrases. For instance, the Croatian surface form ku´ca(house ) can\nonly be analysed as a feminine, singular, nominative, common noun or as a feminine,\nplural, genitive, common noun according to the lexicon. However, in our factored\nsystem with unconstrained tagging (Section 2.2), we can find 8phrase table entries\nwhose SL word is house and whose TL surface form factor is ku´ca. Additional MSDs\ninclude cardinal number, adjective, or proper noun. Since according to some studies\n(Ling et al., 2012 ), the presence of redundant phrase translations can hurt translation\nquality, we also tested a modified version of the tagger in which it only selects the MSDs\npresent in the lexicon.4As a result, the average number of translations per SL phrase\nwas2:111.\n4We post-processed the output of the CRF tagger: for each word tagged with an MSD not present\nin the lexicon, we replaced it with the most likely one from the lexicon according to a 3-gram\n356 S ´anchez-Cartagena et al.\nWe compared the original and the constrained tagger on the test set traditionally\nused to evaluate taggers on Croatian: 300 sentences (6306 tokens) from news, general\nweb and Wikipedia domains ( Agi´c and Ljube ˇsi´c, 2014 ): constraining the tagger slightly\nreduces accuracy from 0:9253 to0:9232 .\n2.2 Experiments and results\nWe built our phrase-based SMT system from corpora crawled from the web: we consider\nthem to be the most suitable ones for an open-domain system. In particular, we used\nhrenWaC as parallel corpus ( Ljube ˇsi´c et al., 2016a ) and hrWaC (Ljube ˇsi´c and Klubi ˇcka,\n2014) as TL monolingual corpus. The parallel corpus contains 1 166 732 sentences,\n32 908 281 English words and 29 199 856 Croatian words. 
The size of the vocabularies\nis605 929 (English) and 888 405 (Croatian): the ratio between them is 1:47, which gives\nus an idea of the morphological richness of Croatian as compared with English. The\nmonolingual corpus contains 67 403 231 sentences and 1 404 303 868 words. We used\nMoses5with the MIRA tuning algorithm ( Watanabe et al., 2007 ). We estimated a 5-gram\nsurface-form LM from the TL monolingual corpus. Our factored system contains an\nadditional MSD LM estimated from the same monolingual corpus. We experimented\nwith orders 3,5and7and the two tagging alternatives discussed in the previous section.\nWe used KenLM and Knesser-Ney discounting.6\nTable 1. Results of the evaluation of factored models. A score in bold means that the system\noutperforms the plain baseline by a statistically significant margin according to paired bootstrap\nresampling (Koehn, 2004) ( p= 0:05,1 000 iterations).\nMSD LM order constrained tagging BLEU TER METEOR\nbaseline - 0:2356 0:6351 0:2119\n3 N 0:2429 0:6296 0:2152\n5 N 0:2408 0:6327 0:2142\n7 N 0:2373 0:6352 0:2125\n3 Y 0:2458 0:6226 0:2167\n5 Y 0:2432 0:6256 0:2161\n7 Y 0:2413 0:6280 0:2154\nWe tuned the systems with newstest2012 and evaluated them with newstest2013 , as\nPirinen et al. (2016) did. Table 1 shows the values of the BLEU, TER and METEOR\nevaluation metrics for the basic phrase-based SMT system and the different factored\nalternatives. Results show that constraining the tagging brings a consistent improvement\n(for all evaluation metrics and MSD LM orders), thus confirming the observations by\nLM of MSDs estimated from the same annotated corpus from which the CRF tagger was trained\n(Ljube ˇsi´c et al, 2016b ). Words not found in the lexicon were assigned the special tag UNK(this\nwas not done when evaluating tagging accuracy in order to perform a fair comparison with the\nunconstrained tagger).\n5http://www.statmt.org/moses/. Corpora were normalized, tokenized and truecased with the tools\nprovided with Moses. Parallel sentences with more than 80tokens were removed.\n6https://kheafield.com/code/kenlm/\nDealing with Data Sparseness in SMT: a Case Study on Croatian 357\nLing et al. (2012). Concerning the order of the MSD LM, the best translation quality\nis obtained for order 3. We observed that, as we increase the value of the order, short-\ndistance agreement deteriorates, but long-distance agreement does not improve. A couple\nof examples can be found in Table 2. This is probably caused by the fact that the order\nof the constituents of the sentence is relatively free in Croatian. Thus, it is difficult to\npredict MSDs in the TL with n-grams that cross constituent boundaries.\nTable 2. Example sentences illustrating the difference in local agreement between different orders\nof the MSD LM. In the first example, the phrase obiˇcne smrt should be in the genitive case, but it\nis nominative in the order 7 alternative; in the second example, the adjective egipatske should be\nin neuter gender in order to agree with druˇstva, but it is feminine in the order 7 alternative.\nExample 1 Example 2\nsource The courage of ordinary death. [...] respect for the other elements of Egyptian society.\norder 3 hrabrost obiˇcne smrti . [...] po ˇstovanje za druge elemente egipatskog dru ˇstva.\norder 7 hrabrost obiˇcna smrt . [...] po ˇstovanja za druge elemente egipatske dru ˇstva.\nref. hrabrost obiˇcne smrti . [...] 
po ˇstovanja ostalih elemenata egipatskog dru ˇstva.\n3 Morphological expansion\nFactored systems with an additional MSD LM cannot produce surface forms in Croatian\nthat have not been observed in the training corpus. In order to further mitigate the data\nsparseness problem, we enhanced our system with morphological expansion. It consists\nof creating new phrase table entries from existing ones by means of changing values\nof morphological inflection attributes and inflecting words accordingly. We followed\na strategy7inspired by Turchi and Ehrmann (2011) but we restricted the process with\nlinguistically motivated rules so as to avoid the need to optimise filtering thresholds. We\ncreated new phrase pairs by changing only the TL side of existing phrase pairs, and only\nfor those phrase pairs whose TL side is a single word or a grammatically meaningful\nphrase. We select, among others, TL phrases that contain a noun, a noun phrase, an\nadjective, a verb, etc. Then, we generate new phrases with all the possible values of the\nmorphological inflection features not present in English.8We added the generated phrase\npairs to a new phrase table which is combined with the original one at decoding time by\nmeans of independent decoding paths (Koehn and Schroeder, 2007).\nWe added morphological expansion to the best factored system described in Sec-\ntion 2.2 and repeated the evaluation. In view of the positive results of reducing translation\nalternatives by constraining PoS tagging, we also evaluated an alternative morphological\nexpansion strategy in which only noun phrases were expanded. Our expansion rules\ngenerate only 3 alternatives for them (for nominative, accusative and instrumental cases),\nwhile the number of generated entries is higher for verbal phrases and adjectives. Results\ndisplayed in Table 3 show that there is not a clear difference between both expansion\n7Implementation is available at: https://github.com/vitaka/morph-xpand-smt\n8The file with the expansion rules used and some comments can be found at\nhttps://github.com/vitaka/morph-xpand-smt/blob/master/tags 29-1-2016-somecases.\n358 S ´anchez-Cartagena et al.\nstrategies, and that morphological expansion is not able to outperform the factored\nsystem (the difference is not statistically significant). We performed a manual analysis\non the agreement errors found in 40sentences randomly selected from the test set and\nfound that for only 25% of all the agreement errors proper word forms were not present\nin the phrase table, out of which only 23% were generated by morphological expansion.\nMost needed words were not generated because they were not present in the lexicon.\nTable 3. Results of the evaluation of morphological expansion.\nSystem BLEU TER METEOR\nbest factored 0:2458 0:6226 0:2167\n+ morph. expansion noun phrases 0:2470 0:6232 0:2174\n+ morph. expansion all 0:2460 0:6235 0:2179\n4 Combination with data selection\nA different way of dealing with data sparseness is to augment the training corpus by\nselecting the most suitable sentences from out-of-domain data. Pirinen et al. (2016;\nSection 2.6) followed that strategy and obtained a significant improvement over a plain\nphrase-based SMT system trained on hrenWaC (Table 4 shows the result of evaluating\ntheir approach; the evaluation setup defined in Section 2.2 was followed). 
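Returning briefly to the morphological expansion of Section 3, the sketch below illustrates how a phrase-table entry whose target side is a noun phrase could be expanded into nominative, accusative and instrumental alternatives via an inflectional-lexicon lookup. The miniature lexicon and the entry format are invented for illustration; the authors' actual expansion rules are available in the repository cited above.

```python
# Illustrative sketch of expanding the target side of a phrase-table entry to
# three case alternatives, in the spirit of the noun-phrase strategy above.
# The tiny lexicon and the (source, target) entry format are hypothetical.
LEXICON = {("kuća", "nom"): "kuća", ("kuća", "acc"): "kuću", ("kuća", "ins"): "kućom"}

def expand_entry(src_phrase, tgt_lemma, cases=("nom", "acc", "ins")):
    """Return one new (source, target) pair per requested case found in the lexicon."""
    return [(src_phrase, LEXICON[(tgt_lemma, case)])
            for case in cases if (tgt_lemma, case) in LEXICON]

if __name__ == "__main__":
    for pair in expand_entry("house", "kuća"):
        print(pair)    # ('house', 'kuća'), ('house', 'kuću'), ('house', 'kućom')
```

Entries generated in this way would go into a separate phrase table that is combined with the original one at decoding time, as described in Section 3.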
In order\nto make the most of the available resources for English–Croatian we combined both\napproaches: we built a system with factored translation models (following the best setup\nin Section 2.2) on the parallel corpora obtained as a result of data selection. Results,\nwhich are also depicted in Table 4, confirm that both approaches can be successfully\ncombined, allowing us to reach the state-of-the-art system, Google Translate .9\nTable 4. Results of the evaluation of the combination of data selection and the best factored setup.\nThere are not statistically significant differences between that combination and Google Translate .\nSystem BLEU TER METEOR\nhrenWaC + factored 0:2458 0:6226 0:2167\ndata selection 0:2576 0:6060 0:2264\nGoogle Translate 0:2673 0:5946 0:2321\ndata selection + factored 0:2700 0:5963 0:2338\n5 Related work\nFactored models have been deeply studied by Tamchyna and Bojar (2013), who con-\ncluded that automatically searching for the best factored model architecture in a given\n9http://translate.google.com\nDealing with Data Sparseness in SMT: a Case Study on Croatian 359\nlanguage pair is not feasible. Successful application of factored models to different\nlanguage pairs has been already reported by other authors, like Bojar (2007) and Koehn\net al. (2010). Regarding morphological expansion, to the best of our knowledge, the\napproach by Turchi and Ehrmann (2011) is the only one that addresses the expansion of\nthe TL side of the phrase table. Concerning other ways of adding linguistic information\nto an SMT system, we refer the reader to the survey by Costa-Juss `a and Farr ´us (2014).\n6 Conclusions and future work\nIn this paper, we presented a set of strategies on how to leverage existing Croatian linguis-\ntic resources to address data sparseness in a general-domain English-to-Croatian SMT\nsystem. Applying factored models showed to be successful. We observed that accuracy\nof PoS tagging and translation performance are not correlated and that increasing the\norder of the MSD LM is counterproductive. A combination of factored models and data\nselection allowed us to build a system that reaches state-of-the-art commercial tools.\nImprovement obtained with morphological expansion was negligible.\nSince most of the agreement errors were not caused by lack of inflected forms in the\nphrase table, and the best results were obtained with a low-order MSD LM because of\nthe free constituent order in Croatian, hybridisation with an RBMT system that performs\nfull syntactic analysis (Labaka et al., 2014) could further improve the results.\nAcknowledgements\nResearch funded by the European Union Seventh Framework Programme FP7/2007-\n2013 under grant agreement PIAP-GA-2012-324414 (Abu-MaTran).\nReferences\nAgi´c,ˇZ., Ljube ˇsi´c, N. (2014). The SETimes.HR Linguistically Annotated Corpus of Croatian. In\nProceedings of the Ninth International Conference on Language Resources and Evaluation ,\nReykjavik, Iceland.\nOndˇrej Bojar (2007). English-to-Czech factored machine translation. In Proceedings of the Second\nWorkshop on Statistical Machine Translation , Prague, Czech Republic.\nCosta-Juss `a, M. R., Farr ´us, M. (2014). Statistical machine translation enhancements through\nlinguistic levels: A survey. ACM Computing Surveys 46:3.\nKoehn. P. (2004). Statistical significance tests for machine translation evaluation. 
In Proceedings\nof the 2004 Conference on Empirical Methods in Natural Language Processing , Barcelona,\nSpain.\nKoehn, P., Haddow, B., Williams, P., Hoang, H. (2010). More Linguistic Annotation for Statistical\nMachine Translation. In Proceedings of the Joint Fifth Workshop on Statistical Machine\nTranslation and MetricsMATR , Uppsala, Sweden.\nKoehn, P., H. Hoang (2007). Factored translation models. In Proceedings of the 2007 Joint\nConference on Empirical Methods in Natural Language Processing and Computational\nNatural Language Learning , Prague, Czech Republic.\nKoehn, P. and Schroeder, J. (2007). Experiments in domain adaptation for statistical machine\ntranslation. In Proceedings of the Second Workshop on Statistical Machine Translation , Prague,\nCzech Republic.\n360 S ´anchez-Cartagena et al.\nLabaka, G., Espa ˜na-Bonet, C., M `arquez, L., Sarasola, K. (2014). A hybrid machine translation\narchitecture guided by syntax. Machine Translation , 28(2):91–125.\nLing, W., Gra c ¸a, J., Trancoso, I., Black, A. (2012). Entropy-based Pruning for Phrase-based\nMachine Translation. In Proceedings of the 2012 Joint Conference on Empirical Methods in\nNatural Language Processing and Computational Natural Language Learning , Jeju Island,\nKorea.\nLjube ˇsi´c, N., Espl `a-Gomis, M., Toral, A., Klubi ˇcka, F. (2016). Producing Monolingual Web\nCorpora and Bitext at the Same Time - SpiderLing and Bitextor’s Love Affair. In Proceedings\nof the Tenth International Conference on Language Resources and Evaluation , Portoro ˇz,\nSlovenia.\nLjube ˇsi´c, N., Klubi ˇcka, F. (2014). fbs,hr,sr gWaC – Web corpora of Bosnian, Croatian and Serbian.\nInProceedings of the 9th Web as Corpus Workshop (WaC-9) , Gothenburg, Sweden. 29–35.\nLjube ˇsi´c, N., Klubi ˇcka, F., Agi ´c,ˇZ., Jazbec, I. (2016). New Inflectional Lexicons and Training\nCorpora for Improved Morphosyntactic Annotation of Croatian and Serbian. In Proceedings\nof the Tenth International Conference on Language Resources and Evaluation , Portoro ˇz,\nSlovenia.\nPirinen, T., Rubino, R., S ´anchez-Cartagena, V .M., Klubi ˇcka, F., Toral, A. (2016). D5.1c\nEvaluation of the MT systems deployed in the third development cycle . Auto-\nmatic building of Machine Translation. FP7-PEOPLE-2012-IAPP project deliverable\n(http://www.abumatran.eu/?page id=59).\nSkadin ˇs, R., Goba, K., ˇSics, V . (2010). Improving SMT for Baltic Languages with Factored Models.\nInProceedings of the Fourth International Conference Baltic HLT, Frontiers in Artificial\nIntelligence and Applications , Riga, Latvia.\nTamchyna, A., Bojar, O. (2013). No Free Lunch in Factored Phrase-Based Machine Translation. In\nProceedings of the 14th international conference on Computational Linguistics and Intelligent\nText Processing - Volume 2 , Samos, Greece.\nTurchi, M., Ehrmann, M. (2011). Knowledge Expansion of a Statistical Machine Translation\nSystem using Morphological Resources. In Proceedings of the 12th International Conference\non Intelligent Text Processing and Computational Linguistics , Tokyo, Japan.\nWatanabe, T., Suzuki, J., Tsukada, H., Isozaki, H. (2007). Online large-margin training for\nstatistical machine translation. In Proceedings of the 2007 Joint Conference on Empirical\nMethods in Natural Language Processing and Computational Natural Language Learning ,\nPrague, Czech Republic.\nReceived May 2, 2016 , accepted May 16, 2016",
"main_paper_content": null
}
|
{
"decision": "Unknown",
"reviews": []
}
| 0 | 0 |
[] |
[] |
{
"id": "ZgmWFBWET6T",
"year": null,
"venue": "EAMT 2009",
"pdf_link": "https://aclanthology.org/2009.eamt-1.31.pdf",
"forum_link": "https://openreview.net/forum?id=ZgmWFBWET6T",
"arxiv_id": null,
"doi": null
}
|
{
"title": "Are Unaligned Words Important for Machine Translation?",
"authors": [
"Yuqi Zhang",
"Evgeny Matusov",
"Hermann Ney"
],
"abstract": "Yuqi Zhang, Evgeny Matusov, Hermann Ney. Proceedings of the 13th Annual conference of the European Association for Machine Translation. 2009.",
"keywords": [],
"raw_extracted_content": "Proceedings of the 13th Annual Conference of the EAMT , pages 226–233,\nBarcelona, May 2009\nAreUnaligned Words ImportantforMachineTranslation ?\nYuqiZhang EvgenyMatusov HermannNey\nHuman Language Technologyand PatternRecognition\nLehrstuhlf ¨ur Informatik 6–Computer ScienceDepartment\nRWTHAachenUniversity, D-52056 Aachen,Germany\n{yzhang,matusov,ney }@cs.rwth-aachen.de\nAbstract\nIn this paper, we deal with the problem of\nalargenumberofunalignedwordsinauto-\nmaticallylearnedwordalignmentsforma-\nchine translation (MT). These unaligned\nwordsarethereasonforambiguousphrase\npairsextractedbyastatisticalphrase-based\nMT system. In translation, this phrase am-\nbiguity causes deletion and insertion er-\nrors. We present hard and optional dele-\ntion approaches to remove the unaligned\nwords in the source language sentences.\nImprovements in translation quality are\nachieved both on large and small vocabu-\nlarytaskswiththepresentedmethods.\n1 Introduction\nWord alignment is a key part in the training of\na statistical MT system because it provides map-\npings of words between each source sentence and\nits target language translation. Because of the dif-\nference in the structure of the involved languages,\nnot all words in the source language have a corre-\nsponding word in the target language. So in the\nalignments, no matter manually created or auto-\nmatically learned, some words are aligned, some\narenot.\nCurrent state-of-the-art statistical machine\ntranslation is based on phrases. First the word\nalignments for the training corpus are generated.\nThen phrase alignments are inferred heuristically\nfrom the word alignments. This approach was\npresented by (Och et al., 1999) and implemented\nby e.g. (Koehn et al., 2003). Since this widely\nused phrase extraction method depends on word\nalignments, it is often assumed that the quality\nc/circlecopyrt2009 EuropeanAssociation forMachine Translation.of word alignment is critical to the success\nof translation. However, some research have\nshown that the large gains in alignment accuracy\noften lead to, at best, minor gains in translation\nperformance (Lopez and Resnik, 2006). They\nconcluded that it could be more useful to directly\ninvestigate ways to reduce the noise in phrase\nextraction than improving word alignment. The\nworkby(Maetal.,2007)showsthatagoodphrase\nsegmentation is important for translation result.\nEncouraged by the work, this paper explores the\ninfluence of the unaligned words on the phrase\nextraction and machine translation results. We\nshow that the presence of unaligned words causes\nextraction of “noisy” phrases which can lead to\ninsertion and deletion errors in the translation\noutput. Furthermore, we propose approaches\nfor “hard” and “soft” deletion of the unaligned\nwords on the source language side. We then show\nthat better way to deal with unaligned words can\nsubstantially improve translation quality, on both\nsmallandlargevocabularytasks.\nIn section 2, we briefly review the word align-\nment concept and point out that there is a large\nnumber of unaligned words in both manual and\nautomaticalignmentsusedforcommontranslation\ntasks. In section 3, we explain how the unaligned\nwords affect the phrase extraction and cause dele-\ntion and insertion errors. In section 4, we present\ntwoapproachestoprovethenegativeimpactofthe\nunalignedwordsontranslationquality. Theexper-\nimental results are given in sections 5 and 6. 
Fi-\nnally, section 7 presents a conclusion and future\nwork.\n2 Unalignedwordsin word alignment\nIn statistical translation models (Brown\net al., 1990), a “hidden” alignment\n226\naJ\n1:=a1, . . ., a j, . . ., a Jis introduced for\naligning the source sentence fJ\n1to the target\nsentence eI\n1. The source word at position jis\naligned to the target word at position i=aj.\nThe alignment aJ\n1may contain special alignment\naj= 0, which means that the source word at\nindexjis not aligned to any target word. Because\na word in the source sentence cannot be aligned\nto multiple words in the target sentence, the\nalignment is trained in both translation directions:\nsource to target and target to source. For each\ndirection, a Viterbi alignment (Brown et al.,\n1993) is computed: A1={(aj, j)|aj≥0}and\nA2={(i, bi)|bi≥0}. Here, aJ\n1is the alignment\nfrom the source language to the target language\nandbI\n1isthealignmentfromthetargetlanguageto\nthe source language. To obtain more symmetrized\nalignments, A1andA2can be combined into one\nalignment matrix Awith the following combina-\ntion methods. More details are described in (Och\nandNey, 2004):\n•intersect :A=A1∩A2\n•union:A=A1∪A2\n•refined: extend from the intersection.\nintersect ⊆refined ⊆union\nIn any of the alignments, there are many words\nwhich are unaligned. We have counted unaligned\nwordsinvariousChinese-Englishalignmentsboth\na small corpus (LDC2006E931) and a large cor-\npus (GALE-All2). Table 1 presents what percent-\nage of unaligned words occurs in each alignment.\nSince the released LDC2006E93 corpus contains\nmanual alignments, we can see that even in “cor-\nrect” alignments, more than 10%words are un-\naligned. intersect , the alignment with the best\nprecision, has around 50%unaligned words on\nboth sides. IN union, which has best recall, still\naround 10%of the words are unaligned. The most\noften used refined alignment, which has the bal-\nance between precision and recall, has about 25%\nunaligned words. Since phrase pairs are extracted\nfrom the word alignments, these unaligned words\nwillaffectthemasdescribedbelow.\n1LDC2006E93: LDC GALE Y1 Q4 Release - Word Align-\nment V1.0,LinguisticData Consortium(LDC)\n2GALE-ALL: all available training data for\nChinese-English translation released by LDC.\nhttp://projects.ldc.upenn.edu/gale/data/DataMatrix.htmlFigure 1: An alignment example with unaligned\nwords.\n/c77\n/c1c/c0e\n/c94\n/c1c/cad\n/c0bthat?\n/c36/cdbwhyis\n为什么 why\n为什么 why is\n那么 为什么 why\n那么 为什么 why is\n为什么这样why isthat\n为什么这样呢why isthat\n那么 为什么这样why isthat\n那么 为什么这样呢why isthat\n为什么这样呢?why isthat?\n那么 为什么这样呢?why isthat?\n那么is\n呢is\n这样is that\n这样that\n这样呢is that\n这样呢that\n这样呢?is that?\n这样呢?that ?\n呢??\n??\n3 Phraseextraction\nInthestate-of-the-artstatisticalphrase-basedmod-\nels, the unit of translation is any contiguous se-\nquence of words, which is called a phrase. The\nphrase extraction task is to find all bilingual\nphrases in the training data which are consistent\nwith the word alignment. This means that all\nwords within the source phrase must be aligned\nonly with the words of the target phrase; likewise,\nthewordsofthetargetphrasemustbealignedonly\nwith the words of the source phrase (Och et al.,\n1999)(Zensetal.,2002). 
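The symmetrisation heuristics and the unaligned-word statistic reported in Table 1 can be sketched in a few lines. Alignments are represented here as sets of (source position, target position) links; the toy links below are invented for illustration, and only the intersection and union heuristics are shown.

```python
# Minimal sketch: symmetrise two directional word alignments and measure the
# fraction of unaligned words (the statistic of Table 1). The toy sentence
# lengths and links are invented; the 'refined' heuristic is not sketched.
def symmetrize(a1, a2, method="union"):
    """a1, a2: sets of (source_index, target_index) links from the two directions."""
    if method == "intersect":
        return a1 & a2
    if method == "union":
        return a1 | a2
    raise ValueError("only 'intersect' and 'union' are sketched here")

def unaligned_ratio(links, src_len, tgt_len):
    src_aligned = {j for j, _ in links}
    tgt_aligned = {i for _, i in links}
    return 1 - len(src_aligned) / src_len, 1 - len(tgt_aligned) / tgt_len

if __name__ == "__main__":
    a1 = {(0, 0), (1, 1), (2, 2)}          # source-to-target Viterbi links
    a2 = {(0, 0), (1, 1), (3, 3)}          # target-to-source links, stored as (j, i)
    inter = unaligned_ratio(symmetrize(a1, a2, "intersect"), 5, 4)
    union = unaligned_ratio(symmetrize(a1, a2, "union"), 5, 4)
    print([round(x, 2) for x in inter])    # [0.6, 0.5]
    print([round(x, 2) for x in union])    # [0.2, 0.0]
```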
Atargetphrasecanhave\nmultiple consistent source phrases if there are un-\nalignedwordsattheboundaryofthesourcephrase\nand viceversa.\nFigure 1 gives an alignment example with un-227\nCorpus Sentence Alignment Unaligned Unaligned\nChinesewords Englishwords\nLDC2006E93 10,565 manual 14% 11%\nintersect 53% 40%\nrefined 23% 23%\nunion 7% 14%\nGALE-All 8,778,755 intersect 48% 55%\nrefined 24% 27%\nunion 9% 16%\nTable1: Thepercentages ofunaligned wordsinvariantalignments.\naligned words on both source and target sides and\nthephrasetableextractedfromthisalignment. The\nunaligned words will result in multiple extracted\nphrase pairs. All of these phrase pairs are kept be-\ncause the unaligned words are necessary to com-\nplete a good sentence though they have no cor-\nresponding translations. However, the translation\nmodels are not powerful enough to select the cor-\nrect phrase pair from these multiple pairs. As a\nresult, this ambiguity often causes insertion errors\nwhich is adding redundant words to the transla-\ntion and deletion errors which means that transla-\ntions of some source words are missing. We have\nused the phrase table in figure 1 to translate the\nsource sentence. (The translation system will be\ndescribedinthesection5). Sincetheexamplesen-\ntence is short, to see how the phrase pairs are con-\ncatenated, we limit the length of the used phrase\nfrom1to4. In the table 2 there is an insertion\nerror with slen= 1,tlen= 1, which is caused\nby the unaligned ’ is’ in the phrase ’呢# is’. With\nslen= 2,tlen= 2andslen= 3,tlen= 3there\naredeletionerrorswhereunaligned’ is’ismissing\ninphrase’那么为什么 whyis#is’.\n4 Deletion ofthe unalignedwordsin\nsourcesentences\nBased on the observations in the last section, we\naregoingtodisambiguatethemultiplephrasepairs\ncaused by unaligned words. In the automatically\ntrainedalignmentthereareafewpossiblecasesfor\ntheunaligned words.\ncorrect vs. wrong : an unaligned word is correct\nifitreallyhasnocorrespondingtranslationsandis\nleftunalignedbyahumanannotator. Anunaligned\nword is wrong if it has been aligned in the manual\nalignment.\nfunction words vs. content words : Compar-ing the alignment of function words and content\nwords, we could find that the correct unaligned\nwordsareroughlyfunctionwords,whilethewrong\nunaligned words are usually content words. The\nfunction words have little lexical meaning, but in-\nstead serve to express grammatical relationships\nwith other words within a sentence On the con-\ntrary, the content words usually carry meaning,\nwhich are “natural units” of translation between\nlanguages.\nIf we just focus on the disambiguation of mul-\ntiple phrases and not consider applying grammat-\nical information in function words to the transla-\ntion system, like the work done by (Setiawan et\nal., 2007), the simplest way of reducing the mul-\ntiple phrases is to delete the ’correct’ unaligned\nwords: the function words. The function words\nat the target sideshould not be touched, since they\nare necessary to complete a good sentence. How-\never,thefunctionwordsatthesourcesidecouldbe\nremoved, when they have no corresponding trans-\nlations.\n4.1 Deletion Candidates\nNot all unaligned words should be removed. Be-\nsides the content words, a source function word\ncould also have correct mappings to the target\nwords in some sentences. 
We have used two\nconstraints to filter out the words which can be\ndeleted..\nWe use relative frequencies to estimate the\nprobabilityof awordbeingaligned.\np(walign) =Nwalign\nN(w)(1)\nThe number of times a word wis aligned in the\ntraining data is denoted by Nwalign, andN(w)is\nthetotalnumberofoccurrencesoftheword w. The228\nslen=1tlen=1 why#为什么##is#那么##that#这样##is #呢##? #?whyisthat is?\nslen=2tlen=2 why#那么为什么 ##that#这样## ? #呢? whythat?\nslen=3tlen=3 why#那么为什么 ##that? #这样呢? whythat?\nslen=4tlen=4 whyis that#那么为什么这样##? #呢? whyisthat ?\nTable 2: The translations of the example with phrase length limitation. The symbol # # denotes concate-\nnationof phrasepairs.\nfirst constraint is that the probability of a word be-\ningaligned isbelowathreshold τ.\nConp(w) =/braceleftbigg1ifp(walign)≤τ\n0ifp(walign)> τ(2)\nThis constraint can be used with different thresh-\nolds. The smaller the threshold is, the more strict\nconstraintisappliedandfewerwordsaretobecon-\nsidered. When p(walign)is0.5, it means that the\nword has the same probability to be aligned and\nnot to be aligned. In order to filter out the deletion\ncandidates,thebestthresholdasdeterminedinour\nexperiments shouldbe lessthan 0.5.\nThe second constraint is to use the POS tags to\nmark the function words. In general, the content\nwords include nouns, verbs, adjectives, and most\nadverbs. We denote the POS tag set for content\nwords as S={noun, verb, adj, adv }. The con-\nstraintfor thefunctionwordis:\nConfun(w) =/braceleftbigg1ifPOS(w)/ne}ationslash∈S\n0otherwise(3)\nIn the experiments, we will test both Conp(w)\nandConp(w)+Confun(w). We will show that\nit is more important for a deletion candidate to\nbe constrained by Conp(w), since content and\nfunction words in linguistics are not always dis-\ntinguishedclearly.\n4.2 Hard deletion\nThe simplest way of deletion is directly removing\nthefoundwordsfromthesourcesentencesandthe\nalignments. The change of the alignment will af-\nfect not only the extracted phrase pairs around the\ndeletedword,butalsotheprobabilityestimationof\nall phrases. In this way, the source sentences be-\ncome relatively shorter. The size of the phrase ta-\nble will be smaller because of the reduction in the\nmultiple translation pairs. However, the drawback\nof the method is obvious. Most words are aligned\nornotindifferentcontexts. Whenweset τgreater\nthan0and delete the filtered words, there must be\nsome words which should actually be translated,\nwhichmeans that theyweredeletedwrongly.Hard deletion is an easy method to investigate\ntheinfluenceofunalignedwordsontranslationre-\nsults. Although the method will cause overdele-\ntion, it can reflect which multiple translation pairs\ncontaininganunalignedwordprovidemoreuseful\ninformation or more harmful information for ulti-\nmate translationquality.\n4.3 Optionaldeletion\nA better and more complicated method is to apply\noptional deletion. We do not make a firm decision\nto delete any words. Instead, we preserve ambigu-\nityand deferthedecisionuntillater stages.\nWe use a confusion network (CN) to represent\nthe ambiguous input. Some works are reported\nto use CNs in machine translation (Bertoldi et al.,\n2007; Koehn et al., 2007). A CN is a directed\nacyclic graph in which each path goes through all\nthe nodes from the start node to the end node. Its\nedgesarelabeledwithwords. 
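The sketch below puts equations (1) and (2) together with the construction of such a confusion-network input: a word whose relative frequency of being aligned falls below the threshold, and which passes a stand-in for the function-word constraint (3), receives a parallel ε arc. The toy counts and the function-word list are assumptions, chosen to mirror the probabilities in the example that follows.

```python
# Sketch of equations (1)-(2) and of the optional-deletion input: deletion
# candidates get a parallel epsilon arc with probability 1 - p(w aligned).
# The toy alignment counts and the function-word set are assumptions; a real
# system would use a POS tagger for constraint (3).
FUNCTION_WORDS = {"的", "了", "把"}            # stand-in for constraint (3)

def aligned_prob(n_aligned, n_total):
    return n_aligned / n_total                 # equation (1)

def deletion_candidates(stats, tau=0.4):
    """stats: word -> (times aligned, times seen); apply constraint (2)."""
    return {w for w, (na, n) in stats.items()
            if aligned_prob(na, n) <= tau and w in FUNCTION_WORDS}

def to_confusion_network(sentence, stats, tau=0.4):
    """Each column is a list of (word, prob); candidates also get an epsilon arc."""
    cands = deletion_candidates(stats, tau)
    columns = []
    for w in sentence:
        if w in cands:
            p = aligned_prob(*stats[w])
            columns.append([(w, p), ("ε", 1.0 - p)])
        else:
            columns.append([(w, 1.0)])         # non-candidates keep probability 1.0
    return columns

if __name__ == "__main__":
    stats = {"把": (38, 100), "机票": (95, 100), "了": (21, 100)}
    print(to_confusion_network(["把", "机票", "了"], stats))
```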
AnexampleofaCN\nfor optionaldeletionis shownintable 3.\n把.38机票1.0忘1.0在1.0家里1.0了.21\nε.62 ε.79\nTable3: A CNexample of optionaldeletion.\nThe special empty-word εrepresents a word\ndeletion. Also, the word aligned probability is\nattached to each edge. The probability is calcu-\nlated by equation (1). When the word is a content\nword,itsalignedprobabilityis 1.0. Thescorewith\nepsilonmeans the probability of the word in the\nsame column not to be aligned, which is equal to\n1−p(walign).\nInput source sentences are represented by CN.\nLike what is done in the hard deletion, the align-\nments are modified by deleting all deletion candi-\ndates and the corresponding points in the align-\nment matrix. However, to match the possible\nnon-deletion of the unaligned words, the original\nalignment is also needed. We combine the two\nalignments by merging the phrase counts and re-\ncompute thephraseprobabilities.229\n5 ExperimentalSetup\n5.1 Data\nWe carried out MT experiments for translation\nfromChinesetoEnglishontwodatasets: BTEC08\nandGALE08.\nThe BTEC08 data was provided within the\nIWSLT 2008 evaluation campaign (Paul, 2008),\nextracted from the Basic Traveling Expression\nCorpus(BTEC) (Takezawa et al., 2002). The data\nis a multilingual speech corpus which contains\nsentences which are usually found in books for\ntourists. The sentences are short, with less than\n10 words on average. The parallel training data\nisrelativelysmall. WeaddedtheofficialIWSLT08\ntrainingdata,theIWSLT06devdataandIWSLT06\nevaluation data and their references to the training\ndata. The development and test sets in the experi-\nmentsbelowarefromtheIWSLT04andIWSLT05\nevaluation data. We found that the two data sets\nare not similar, so we took the first half of each\nandcombinethemasdevdata. Theremainingtwo\nhalves arecombinedas testdata.\nThe large vocabulary GALE data were all\nprovided by LDC. The test data has four gen-\nres: broadcastnews(BN),broadcastconversations\n(BC),newswire(NW)andwebtext(WT).Thefirst\ntwo genres are for speech translation and the last\ntwo are for text translation. Here, we only car-\nried out experiments on NW. The sentences of the\nGALE task are longer (around 30 words per sen-\ntence) and moredifficulttotranslate.\nThecorpusstatisticsforbothtasksareshownin\nTable4:\n5.2 Baseline System\nOur baseline system is a standard phrase-based\nSMT system. Word alignments are obtained by\nusing GIZA++ (Och and Ney, 2003) with IBM\nmodel 43. We symmetrized bidirectional align-\nments using the refined heuristic (Och and Ney,\n2004). The phrase-based translation model is a\nlog-linear model that include phrase translation\nprobabilities and word-based translation probabil-\nities in both translation directions, phrase count\nmodels, word and phrase penalty, target language\nmodel (LM) and a distortion model. Language\nmodels were built using the SRI language mod-\n3Specifically, on GALE data we performed 5 iterations of\nModel 1, 5 iterations of HMM, 2 iterations of Model 4. On\nBTECdataweperformed4iterationsofModel1,5iterations\nof HMM,8iterations of Model4.BTEC Chinese English\nTrain: Sentences 23940\nRunning words 181486 232746\nDev: Sentences 503\nRunning words 3085 3887\nTest: Sentences 503\nRunning words 3109 3991\nGALE Chinese English\nTrain: Sentences 8778755\nRunning words 232799466 249514713\nDev08: Sentences 485\nRunning words 14750 16570\nTest08: Sentences 480\nRunning words 14800 16683\nTable4: CorpusStatisticsoftheBTECandGALE\ntranslation tasks. 
For BTEC dev and test sets, the\nnumber of English tokens is the average over 16\nhuman referencetranslations.\neling toolkit (Stolcke, 2002). On the small vo-\ncabulary BTEC task we used a 6-gram. On the\nlarge vocabulary GALE task we included 5-gram\nlanguage model probabilities. The model scaling\nfactors are optimized on the development set with\nthe goal of improving the BLEU score. We used a\nnon-monotonicphrase-basedsearchalgorithmthat\ncan take confusion networks as input in the style\nof (Matusovetal.,2008).\n6 ExperimentalResults\n6.1 Thedeletion candidates\nFirst, we tested different thresholds τin the range\nfrom0.2to0.5. The set with small τis a subset\nof the one with larget τ. We filtered out words\nwhichweremostfrequentlynotalignedintraining.\nWe performed the experiments on both BTEC and\nGALE tasks. The findings are reported in table 5.\nFor each threshold the table gives the number of\nuniquewordsremoved(num.) andsomeexamples.\nBy applying the two constraints\nConp+Confun, the number of deletion\ncandidates is reduced greatly. That means among\nthe unaligned words in alignments there are many\ncontent words. The content words, especially\nnouns, usually are expected to be translated.\nIt is not good if there are many content words\nunaligned.\nComparingbetweenBTECandGALEthereare\nfewerdeletioncandidatesinGALEdata,bothcon-230\nBTEC GALE\nConp Conp+Confun Conp Conp+Confun\nτnum. example num. example num. example num.example\n0.21的 1的 1恭 0 -\n0.34的了哭却 3的了却7的慧毛病. . .1的\n0.421叶以把战争. . .10以把. . .17的罗斯. . . 1的\n0.5152呀对着当时. . .20呀对着. . .62的中之兆. . .3的中之\nTable 5: Some statistics and examples of the words removed based on the cons traints defined in equa-\ntions 2and3.\ntent and function words. It implies that large\ndata leads to obtain better alignments which as-\nsign more mappings between source and target\nlanguages.\n6.2 Hard deletion\nSincetheharddeletioniseasytocarryout,weper-\nformedtheexperimentsonbothBTECandGALE\ntasks here, too. As the number of deletion can-\ndidates on GALE is small, we tested the small-\nest deletion candidate set “ 的” and the biggest set\nwhichisundertheconstraint Conpwithτ= 0.5.\nTranslation results are shown in table 6. The sec-\nond row “rm-1” is the hard deletion of 的and the\nthird row “rm-62” is for the deletion of the 62\nwordsas shownintable5.\nIt is interesting to see that the deletions of both\nthe small set and large set of words improve the\nbaseline on every metric. 的is the most common\nfunctionwordinChinesetoconnectadjectivesand\nnouns and it is also the word with lowest aligned\nprobability in the table 5. The BLEU and TER\nscoresbothimprove 0.5%absoluteondevandtest\ndata just by removing this single word. However,\nwhen we remove the 62 words including 的, the\nresult does not improve further. This means that\nthe deletion candidate set contains some content\nwords, the deletion of which has a negative influ-\nence ontranslationquality.\nThe BTEC data provides us with a larger dele-\ntion candidate set. Additionally, the small size of\nthe training data for the BTEC task makes it pos-\nsible to run some finer-grained experiments. We\nfocus on how the removable function words affect\nthetranslationquality. Theexperimentsarecarried\nonthewordsetwithdifferentthresholds τandun-\nder the constraint Conp+Confun. The transla-\ntionresultswithharddeletiononBTECareshown\nintable7.\nThe improvement in the BTEC data is not asmuch as on the GALE data. Only when τis set\nto0.4, we obtained slightly better scores. 
The reason is that the extracted phrases are very long compared to the sentence length. The maximum phrase length was set to 15 words, for both the BTEC and the GALE task. However, the average sentence length of the BTEC test set is around 7 words, vs. 30 words on the GALE task. When phrase pairs are longer, there are fewer cases in which unaligned words are at their boundaries. The translation examples in table 2 also reflect this phenomenon. That source sentence has 5 words. When the phrase length limitation is 4, the unaligned 'is' is an inner word in the phrase pair why is that # 那么为什么这样.
6.3 Optional deletion
In addition to the hard deletion experiments on BTEC, we carried out the optional deletion experiments in the same settings. The results are also shown in table 7. The optional deletion method achieved good performance. The BLEU score improves consistently with all settings, by at most 1.5% on the dev set and 0.7% on the test set with τ = 0.4.
Furthermore, we are also interested in the influence of individual deletion candidates on the translation results. It would be more useful to know which words are important for the deletion instead of just determining the optimal threshold. Since τ = 0.4 has achieved the best result both in hard deletion and optional deletion, we explore the 10 removable function words in the set one by one. The 10 words are listed in table 8. First, we sorted the 10 words according to their probability of being aligned. From the lowest to the highest probability, we add one word at a time to the deletion candidate set. The results are shown in table 8. The word 的, which has the lowest probability of being aligned, is the most important word in the set.
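As a rough illustration of how the deletion candidates and the word-by-word analysis above can be obtained, the following Python sketch computes p(w aligned) from alignment counts, keeps the words below a threshold τ (optionally restricted to function words, in the spirit of the Conp + Confun constraint), and builds the incremental candidate sets ordered from the least to the most frequently aligned word. The counts and the function-word list are hypothetical placeholders, not figures from the experiments reported here.

```python
# Minimal sketch (hypothetical data): deletion-candidate selection and the
# word-by-word incremental analysis described in the text.

def deletion_candidates(counts, tau, fun_words=None):
    """counts: word -> (times aligned, total occurrences in the training data).
    Returns word -> p(w aligned) for words below the threshold tau,
    optionally restricted to a function-word list."""
    candidates = {}
    for word, (aligned, total) in counts.items():
        p_aligned = aligned / total
        if p_aligned < tau and (fun_words is None or word in fun_words):
            candidates[word] = p_aligned
    return candidates


def incremental_sets(candidates):
    """Candidate sets grown one word at a time, lowest p(w aligned) first."""
    ordered = sorted(candidates, key=candidates.get)
    return [ordered[:i + 1] for i in range(len(ordered))]


# Hypothetical counts: word -> (aligned occurrences, total occurrences).
counts = {"的": (70, 10000), "了": (2100, 10000), "把": (3800, 10000)}
cands = deletion_candidates(counts, tau=0.4, fun_words={"的", "了", "把"})
for subset in incremental_sets(cands):
    print(subset)  # each subset is one deletion-candidate set to evaluate
```

Each of the resulting subsets would then be plugged into the hard or optional deletion setup and evaluated on the development set.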
Weclassifiedtheunalignedwords\nintofunctionwordsandcontentwordsandshowed\nthat unaligned function words have an important\ninfluence onphraseextraction.\nFurthermore, we have proposed two methods to\nimprove phrase extraction based on handling of\nunaligned words. Since it is important to keep the\nunaligned words on the target side to obtain com-\nplete and fluent translations, we have applied hard\ndeletion and optional deletion of the unaligned\nwords on the source side before phrase extraction.\nThoughthemethodsaresimple,theystillachieved\nnotableimprovementsinautomaticMTevaluation\nmeasuresonbothsmallandlargevocabularytasks.\nWe have shown that differentiating between use-\nful and “removable” unaligned words is importantforthequalityoftheextractedphrasesand,conse-\nquently, forthequality ofthe phrase-basedMT.\nThis paper pointed out the importance of un-\naligned words,but only consideredthe source lan-\nguage words. In the future, more work should be\ndone regarding the unaligned words in the target\nlanguage. The translations are more directly af-\nfectedbythequalityoftargetphrases. Sincedelet-\ning of unaligned words at the target side is clearly\nnot the right solution, some disambiguation mod-\nels aretobeinvestigated.\n8 Acknowledgments\nThis material is partly based upon work sup-\nportedbytheDefenseAdvancedResearchProjects\nAgency (DARPA) under Contract No. HR0011-\n06-C-0023 and partly realized as part of the\nQuaero Programme, funded by OSEO, French\nStateagency forinnovation.\nReferences\nP.F.Brown, J.Cocke, S.A.Della Pietra, V.J.Della Pietra,\nF. Jelinek, J.D.Lafferty, R.L.Mercer, and P.S.\nRoossin. 1990. A statistical approach to machine\ntranslation., In Computational Linguistics,16(2),\npages 79-85, Jun.\nP.F.Brown, S.A.Della Pietra, V.J.Della Pietra, and\nR.L.Mercer. 1993. The mathematics of statistical232\nHarddeletion optional deletion\ndev test dev test\np(walign)BLEU[%] BLEU[%] BLEU[%] BLEU[%]\nbaseline - 49.6 49.5 49.6 49.5\n的 0.007 49.1 49.7 51.1 49.6\n+了 0.21 50.0 49.3 51.2 49.9\n+却 0.27 50.0 49.3 51.2 49.9\n+以 0.35 50.0 49.4 51.2 49.9\n+把 0.38 50.0 49.7 51.1 50.2\n+对于 0.4 50.1 49.7 51.1 50.1\n+既 0.4 50.1 49.6 51.1 50.1\n+着 0.4 50.1 49.7 51.1 50.1\n+式 0.4 50.1 49.6 51.1 50.1\n+对 0.4 50.0 49.7 51.1 50.2\nTable8: Theinfluence ofdeleting individualwordsonthetranslationquality (BTECtask).\nmachinetranslation: Parameterestimation., In Com-\nputational Linguistics,19(2), pages 263-311\nNicola Bertoldi, Richard Zens and Marcello Federico.\n2007. Speech Translation by Confusion Network\nDecoding, In Proceedings of the International Con-\nferenceonAcoustics,Speech,andSignalProcessing\n(ICASSP), pages 1297-1300, Apr.\nPhilipp Koeln, Franz Josef Och, and Daniel Marcu.\n2003. Statistical phrase-based translation, In Pro-\nceedings of the 2003 Human Language Technol-\nogy onference of the North American Chapter of\nthe Association for Computational Linguistics(HLT-\nNAACL), pages 127-133, May.\nPhilipp Koehn, Nicola Bertoldi, Ondrej Bojar, Chris\nCallison-Burch, Alexandra Constantin, Brooke\nCowan,ChrisDyer,MarcelloFederico,EvanHerbst,\nHieu Hoang, Christine Moran, Wade Shen, and\nRichard Zens. 2007. Open Source Toolkit for Sta-\ntistical Machine Translation: Factored Translation\nModels and Confusion Network Decoding, CLSP\nSummer Workshop Final Report WS-2006,\nAdam Lopez and Philip Reesnik. 2006. 
Word-Based\nAlignment, Phrase-Based Translation: What’s the\nLink?, In Proceedings of the 7th Conference of the\nAssociation for Machine Translation in the Ameri-\ncas,pages 90-99, Aug.\nEvgeny Matusov and Bj ¨orn Hoffmeister and Hermann\nNey. 2008. ASR Word Lattice Translation with Ex-\nhaustive Reordering is Possible, In Proceedings of\nthe Interspeech 2008, pages 2342-2345, Sep.\nYanjun Ma and Nicolas Stroppa and Andy Way 2007.\nBootstrappingWordAlignmentviaWordPacking In\nProceedings of the 45th Annual Meeting of the As-\nsociation of Computational Linguistics, pages 304–\n311, Jun. Prague, Czech Republic.Franz Josef Och, Christoph Tillman, and Hermann\nNey. 1999. Improved alignment models for sta-\ntistical machine translation, In Proceedings of\nthe 1999 Joint SIGDAT Conference on Empirical\nMethos in Natural Language Processing and very\nLarge Corpora(EMNLP-VLC), pages 20-28, Jun.\nFranz Josef Och and Hermann Ney. 2003. A system-\naticcomparisonofvariousstatisticalalignmentmod-\nels, InComputational Linguistics,29(1):19-51\nFranz Josef Och and Hermann Ney. 2004. The\nAlignment Template Approach to Statistical Ma-\nchine Translation, In Computational Linguistics\nMichael Paul. 2008. Overview of the IWSLT 2008\nevaluation campaign, In Proceedings of the Interna-\ntional Workshop on Spoken Language Translation,\npages 1–17, Oct.\nAndrea Stolcke. 2002. SRILM - An extensible lan-\nguage modeling toolkit, In Proceedings of the Inter-\nnational Conference on Spoken Language Process-\ning,pages 901-904.\nHendra Setiawan and Min-Yen Kan and Haizhou Li.\n2007. Ordering Phrases with Function Words, In\nProceedingsofthe45thAnnualMeetingoftheAsso-\nciationofComputationalLinguistics pages712–719,\nJun.\nToshiyuki Takezawa and Eiichiro Sumita and Fumi-\nakiSugayaandHirofumiYamamotoandSeiichiYa-\nmamoto. 2002. Toward a broad-coverage bilingual\ncorpus for speech translation of travel conversations\nin the real world, In Proceedings of Third Interna-\ntionalConferenceonLanguageResourcesandEval-\nuation 2002 (LREC), pages 147-152\nRichard Zens, Franz Josef Och and Hermann Ney.\n2002. Phrase-based statistical machine translation,\nIn25th German Conference on Artificial Inteligence\n(LNAI),pages 18-32, Sep.233",
"main_paper_content": null
}
|
{
"decision": "Unknown",
"reviews": []
}
| 0 | 0 |
[] |
[] |
{
"id": "AWcmuCRHDs",
"year": null,
"venue": "EAMT 2020",
"pdf_link": "https://aclanthology.org/2020.eamt-1.34.pdf",
"forum_link": "https://openreview.net/forum?id=AWcmuCRHDs",
"arxiv_id": null,
"doi": null
}
|
{
"title": "A User Study of the Incremental Learning in NMT",
"authors": [
"Miguel Domingo",
"Mercedes García-Martínez",
"Álvaro Peris",
"Alexandre Helle",
"Amando Estela",
"Laurent Bié",
"Francisco Casacuberta",
"Manuel Herranz"
],
"abstract": "Miguel Domingo, Mercedes García-Martínez, Álvaro Peris, Alexandre Helle, Amando Estela, Laurent Bié, Francisco Casacuberta, Manuel Herranz. Proceedings of the 22nd Annual Conference of the European Association for Machine Translation. 2020.",
"keywords": [],
"raw_extracted_content": "A User Study of the Incremental Learning in NMT\nMiguel Domingo1andMercedes García-Martínez2andÁlvaro Peris3andAlexandre Helle2\nandAmando Estela2andLaurent Bié2andFrancisco Casacuberta1andManuel Herranz2\n1PRHLT Research Center - Universitat Politècnica de València\n{midobal, fcn}@prhlt.upv.es\n2Pangeanic / B.I Europa - PangeaMT Technologies Division\n{m.garcia, a.helle, a.estela, l.bie, m.herranz}@pangeanic.com\n3Independent Researcher\[email protected]\nAbstract\nIn the translation industry, human experts usu-\nally supervise and post-edit machine trans-\nlation hypotheses. Adaptive neural machine\ntranslation systems, able to incrementally up-\ndate the underlying models under an online\nlearning regime, have been proven to be use-\nful to improve the efficiency of this workflow.\nHowever, this incremental adaptation is some-\nwhat unstable, and it may lead to undesirable\nside effects. One of them is the sporadic ap-\npearance of made-up words, as a byproduct of\nan erroneous application of subword segmen-\ntation techniques. In this work, we extend pre-\nvious studies on on-the-fly adaptation of neu-\nral machine translation systems. We perform a\nuser study involving professional, experienced\npost-editors, delving deeper on the aforemen-\ntioned problems. Results show that adaptive\nsystems were able to learn how to generate the\ncorrect translation for task-specific terms, re-\nsulting in an improvement of the user’s produc-\ntivity. We also observed a close similitude, in\nterms of morphology, between made-up words\nand the words that were expected.\n1 Introduction\nDespite its improvements and obtaining admissible re-\nsults in many tasks, machine translation (MT) is still\nvery far from obtaining automatic high-quality trans-\nlations ( Dale,2016 ;Toral et al. ,2018 ). Thus, a hu-\nman agent needs to supervise and correct the outputs\ngenerated by an MT system. This process is known as\npost-editing and is a common use case of MT in the in-\ndustrial environment. As MT systems are continuously\nimproving their capabilities, it has acquired major rele-\nvance in the translation market ( Guerberof ,2008 ;Pym\net al.,2012 ;Hu and Cadwell ,2016 ;Turovsky ,2016 ).\nThroughout the post-editing process, new data are\ncontinuously generated. These new data have valuable\nproperties—they are domain-specific training samples.\nThus, it can be leveraged to continuously adapt the sys-\nc⃝2020 The authors. This article is licensed under a Creative\nCommons 3.0 licence, no derivative works, attribution, CC-\nBY-ND.tem towards a given domain or the style of the post-\neditor. A common way of achieving this consists in\nfollowing an online-learning paradigm ( Ortiz-Martínez ,\n2016 ;Peris and Casacuberta ,2019 ). Each time the\nuser validates a post-edit, the system’s models are up-\ndated incrementally with this new sample. Hence, when\nthe system generates the next translation, it will con-\nsider the previous post-edits made by the user and it is\nexpected to produce higher quality translations (or, at\nleast, more suited to the post-editor’s preferences).\nDomingo et al. (2019b ) conducted a preliminary user\nstudy for professional post-editors, who had a positive\nperception of the adaptive systems. However, they no-\nticed that, in some cases, there were occurrences of\nsome made-up words. In this work, we study the impact\nof this phenomenon. 
Additionally, we extend their user\nstudy by involving three more participants and provid-\ning additional measures for the increase in productivity\ngained with the adaptive system.\n2 Related work\nPost-editing MT hypotheses is a practice that was\nadopted in the translation industry a long time ago\n(e.g., Vasconcellos and León ,1985 ). Its relevance grew\nas MT technology advanced and improved. The ca-\npabilities of MT post-editing have been demonstrated\nthrough many user studies ( Aziz et al. ,2012 ;Bentivogli\net al.,2016 ;Castilho et al. ,2017 ;Green et al. ,2013a ).\nParallel to the rise of the post-editing protocol, using\nuser post-edits to adapt MT systems has also attracted\nthe attention of researches and industry. This was stud-\nied in the CasMaCAT ( Alabau et al. ,2013 ) and Mate-\nCAT ( Federico et al. ,2014 ) projects and phrase-based\nstatistical MT systems based on online learning were\ndeveloped ( Ortiz-Martínez ,2016 ). With the break-\nthrough in neural MT (NMT) technology ( Bahdanau\net al. ,2015 ;Wu et al. ,2016 ;Vaswani et al. ,2017 ),\nresearch shifted towards constructing adaptive systems\nvia online learning in this post-editing scenario. The\nuse of online learning to adapt an NMT system to a\nnew domain with post-edited samples was proposed by\nPeris et al. (2017 ) and Turchi et al. (2017 ). Other works\nrefined these adaptation techniques and applied them\nto new use cases ( Kothur et al. ,2018 ;Wuebker et al. ,\n2018 ;Peris and Casacuberta ,2019 ).\nThe evaluation of MT post-edits is a hard topic that\nCorpus #Sentences# Tokens # Types Average length\nEn Es En Es En Es\nTraining 23.4M 702M 786M 1.8M 1.9M 30.0 33.6\nDocument 1 150 1.7K - 618 - 11.3 -\nDocument 2 150 2.6K - 752 - 17.3 -\nTable 1: Corpora statistics in terms of number of sentences, number of tokens, number of types (vocabulary size) and average\nsentence length. K denotes thousands and M, millions.\nis currently being actively researched (e.g., Toral ,2019 ;\nFreitag et al. ,2020 ;Läubli et al. ,2020 ). Several works\nconducted user studies for MT post-editing systems,\neither phrase-based ( Alabau et al. ,2013 ;Green et al. ,\n2013b ;Denkowski et al. ,2014 ;Bentivogli et al. ,2016 )\nor NMT ( Daems and Macken ,2019 ;Koponen et al. ,\n2019 ;Jia et al. ,2019 ). Moreover, two studies showed\nimprovements in terms of productivity time and trans-\nlation quality with the application of an online learn-\ning protocol ( Karimova et al. ,2018 ;Domingo et al. ,\n2019b ). This latter study is tightly related to ours. We\nextend it by performing a finer-grained evaluation of the\noutputs of the adaptive systems.\n3 Experimental framework\nAs we extended the work of Domingo et al. (2019b ),\nwe used their same data and systems. The task at hand\nconsisted of a small medico-technical (description of\nmedical equipment) corpus from their production sce-\nnario. It contains specific vocabulary from a very closed\ndomain. It was conformed by two documents of 150\nsentences, which contained 1:7and2:7thousand words\nrespectively. The translation direction was English to\nSpanish. The system was trained using the data from\nWMT’13’s translation task ( Bojar et al. ,2013 ) and sam-\nples selected by the feature decay selection technique\n(Biçici and Yuret ,2015 ). The data features are summa-\nrized in Table 1 . We applied joint byte pair encoding\n(Sennrich et al. ,2016 ), using 32;000merge operations.\nThe system was built with OpenNMT-py ( Klein et al. ,\n2017 ), using a long short-term memory ( Gers et al. 
,\n2000 ) recurrent encoder–decoder with attention ( Bah-\ndanau et al. ,2015 ). All model dimensions were 512.\nThe system was trained using Adam ( Kingma and Ba ,\n2014 ) with a fixed learning rate of 0:0002 (Wu et al. ,\n2016 ) and a batch size of 60. A label smoothing of 0:1\n(Szegedy et al. ,2015 ) was applied. At inference time,\nwe used beam search with size 6.\nThe adaptation process followed the findings from\nPeris and Casacuberta (2019 ). We tuned the hyperpa-\nrameters for the adaptation process on our development\nset, under simulated conditions. For each new post-\nedited sample, we applied two plain SGD updates, with\na fixed learning rate of 0:05.\nAs translation environment we used the one designed\nbyDomingo et al. (2019a ). It connects our adaptive\nNMT engine with the SDL Trados Studio interface,\nwhich is used by the post-editors in our productionworkflow. In addition, it also allowed us to trace the\nproductivity metrics and user behavior.\n3.1 Evaluation\nWe evaluated two main aspects of the adaptation pro-\ncess: productivity of the post-editors and quality of the\nNMT systems. The former was assessed by computing\nthe average post-editing time per sentence and the num-\nber of words generated by the post-editor per hour. For\nthe latter, we employed two well-known MT metrics:\n(h)BLEU ( Papineni et al. ,2002 ) and (h)TER ( Snover\net al.,2006 ). In order to ensure consistent BLEU scores,\nwe used sacreBLEU ( Post,2018 ). Since we computed\nper-sentence BLEU scores, we used exponential BLEU\nsmoothing ( Chen and Cherry ,2014 ).\nWe applied approximate randomization tests ( Riezler\nand Maxwell ,2005 ), with 10;000repetitions and a p-\nvalue of 0:05, to determine whether two systems pre-\nsented statistically significant differences.\n3.2 Human post-editors\nSix professional translators were involved in the exper-\niment. Some profiling details about them can be found\ninTable 2 .\nUser Sex Age Professional experience\nUser 1 Male 24 1.5 years\nUser 2 Female 25 5 years\nUser 3 Female 30 5 years\nUser 4 Female 24 1 month\nUser 5 Female 22 1 year\nUser 6 Male 48 22 years\nTable 2: Information about the human post-editors that took\npart in the experiment, regarding their sex, age and years of\nprofessional experience.\nThe static experiment consisted in post-editing using\nthe initial NMT system, which remained fixed along\nthe complete process. For the adaptive experiment, all\nusers started with the initial system, which was adapted\nto each user through the process using their own post-\nedits. Therefore, at the end of the process, each user\nobtained a tailored system. 
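As a rough illustration of the adaptation scheme used in the adaptive scenario (two plain SGD updates with a fixed learning rate of 0.05 after each validated post-edit, as described above), the following sketch shows the per-sample update loop. The model, the loss function, and the batching and user-interaction helpers are placeholders; the actual systems were trained and adapted with OpenNMT-py rather than with this generic PyTorch loop.

```python
# Minimal sketch (assumptions noted above): incremental adaptation of an NMT
# model with the post-edit of each sentence, as soon as the user validates it.
import torch


def adapt_on_post_edit(model, loss_fn, src_batch, post_edit_batch,
                       updates=2, lr=0.05):
    """Two plain SGD updates on a single post-edited sentence pair."""
    optimizer = torch.optim.SGD(model.parameters(), lr=lr)
    model.train()
    for _ in range(updates):
        optimizer.zero_grad()
        scores = model(src_batch, post_edit_batch)  # teacher forcing on the post-edit
        loss = loss_fn(scores, post_edit_batch)
        loss.backward()
        optimizer.step()


# Simulated post-editing session (translate, to_batch and ask_user are
# hypothetical helpers, not part of the original system):
# for src in document:
#     hypothesis = translate(model, src)      # system suggestion
#     post_edit = ask_user(src, hypothesis)   # validated post-edit
#     adapt_on_post_edit(model, loss_fn, to_batch(src), to_batch(post_edit))
```

In the adaptive scenario this update runs once per validated sentence, which is how each user ends up with a system tailored to their own corrections.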
In order to avoid the influ-\nence of translating the same text multiple times, each\nparticipant post-edited a different document set under\neach scenario (static and adaptive), as shown in Table 3 .\nUser Document 1 Document 2\nUser 1 Static Adaptive\nUser 2 Adaptive Static\nUser 3 Static Adaptive\nUser 4 Adaptive Static\nUser 5 Static Adaptive\nUser 6 Adaptive Static\nTable 3: Distribution of users, document sets and scenarios.\nAll users conducted first the experiment which involved post-\nediting document 1 and then document 2 (e.g., user 2 first\npost-edited document 1 on an adaptive scenario and, then,\ndocument 2 on a static scenario).\n4 User study\nIn our study, we focus on the differences between static\nand adaptive systems based on three main aspects: the\nproductivity of post-editors, the quality of post-edits\nand the generation differences.\n4.1 On the productivity of the post-editors\nTable 4 shows the average gains obtained in terms of\ntranslation quality. These results demonstrate how the\nadaptive systems benefits from the user post-edits to im-\nprove the translation quality, yielding gains of up to 6:7\nTER points and 8:0BLEU points.\nTest System hTER [ #] hBLEU [ \"]\nDocument 1Static 39:5 47:9\nAdaptive 32:8y55:9y\nDocument 2Static 36:2 42:9\nAdaptive 34:3y50:5y\nTable 4: Results of the user experiments, in terms of trans-\nlation quality. These numbers are averages over the results\nobtained by the different post-editors. Static system stands\nfor conventional post-editing—without adaptation. Adaptive\nsystem refers to post-editing in an environment with online\nlearning. hTER andhBLEU refer to the TER and BLEU of\nthe system hypothesis computed against the post-edited sen-\ntences.yindicates statistically significant differences between\nthe static and the adaptive systems.\nTable 5 presents the productivity improvements\nyielded by the adaptive system. With two exceptions,\nthe adaptive system significantly reduced the averaged\ntime needed to post-edit a sentence (with gains from 4:0\nup to 33:5seconds per sentence). These two exceptions\nwere for user 2—whose average time was the same for\nboth systems—and user 4—whose average time was\nbigger when using the adaptive system. This last case\ncan be explained by taking into account that user 4 is\none of the least experienced users and that she con-\nducted the experiment involving the adaptive scenario\nfirst (see Tables 2 and3). Therefore, as time goes on,\nuser 4 feels more comfortable with the task and tools\nand, thus, the post-editing time decreases. This phe-\nnomenon was already observed during the CasMaCAT\nproject ( Alabau et al. ,2013 ).\nWhen measuring productivity in terms of numberUser System Time [ #]Words per\nhour [ \"]\nUser 1Static 37:9 1685\nAdaptive 33:0y1935y\nUser 2Static 30:5 2091\nAdaptive 30:4 2097y\nUser 3Static 38:0 1678\nAdaptive 27:0y2364y\nUser 4Static 37:5 1701\nAdaptive 47:4y1346y\nUser 5Static 80:2 795\nAdaptive 46:7y1367y\nUser 6Static 53:7 1188\nAdaptive 49:7y1284y\nTable 5: Results of the user experiments, in terms of pro-\nductivity. Static system stands for conventional post-editing,\nwithout adaptation. Adaptive system refers to post-editing in\nan environment with online learning. Time corresponds to the\naverage post-editing time per sentence, in seconds. Words per\nhour represents the number of words generated by the post-\neditors per hour. 
Users 4 to 6 have less experience, in this particular domain, than users 1 to 3. y indicates statistically significant differences between the static and the adaptive systems.
of words generated per hour, the adaptive systems achieved significant gains in all cases except for user 4—which is coherent with the results obtained in terms of time per sentence. These gains range from 6—for user 2, who took the same average time for both scenarios—to 686 words per hour. Therefore, both metrics showcase how adaptive systems are able to significantly improve productivity.
4.1.1 User feedback
Following Domingo et al. (2019b), post-editors filled in a questionnaire (see Appendix A) regarding the task they had just performed. We asked them about their level of satisfaction with the translations they had produced; whether they would have preferred translating from scratch instead of post-editing; and their opinion about the automatic translations, in terms of grammar, style and overall quality. Additionally, we also requested them to give, as an open-answer question, their feedback on the task.
While post-editors were generally satisfied with the system and the translations they produced (as also reported by Domingo et al. (2019b)), they spotted some issues regarding the adaptive NMT system: they noticed that domain-specific terms were "forgotten" by the system, being wrongly translated. In addition, the users spotted the occurrence of some nonexistent words in the target language (e.g., "absolvido"). We delve deeper into these problems in Section 5.
4.2 On the quality of the post-edits
In order to assess and compare the quality of the human post-edits produced with the static and adaptive systems, a human evaluation was conducted with the help of two professional translators—who had not taken part in the user study. In this evaluation, the evaluators were given a source sentence and the post-edits produced by each user—three of whom had used the static system, and the other three the adaptive system.
[Figure 1: Sentence-level adequacy scores (counts per 4-point adequacy score, adaptive vs. static), for (a) Document 1 and (b) Document 2. Count values are the average between both evaluators.]
[Figure 2: Sentence-level fluency scores (counts per 4-point fluency score, adaptive vs. static), for (a) Document 1 and (b) Document 2. Count values are the average between both evaluators.]
Following Castilho et al. (2019) and the TAUS adequacy/fluency guidelines¹, they were asked to assess, on a 4-point scale, the adequacy (how much of the meaning is represented in the translation) and the fluency (the extent to which the translation is well-formed grammatically, has correct spellings, adheres to common use of terms, titles and names, is intuitively acceptable and can be sensibly interpreted by a native speaker) of each post-edit.
In total, they evaluated 600 sentences: the post-edits of the first 50 sentences of Document 1 and the post-edits of the first 50 sentences of Document 2 (see Section 3). To avoid biases, evaluators were not given any information regarding the origin of the translations.
Figs.
1 and 2 present the results of the evaluation.
In terms of adequacy, the results show that, for both systems, most of the post-edits convey the full meaning of the original sentence or most of it (represented by the scores 4 and 3). Just a few of them convey little or none of the original meaning (represented by the scores of 2 and 1). While both systems behave similarly, we observe that a larger number of the post-edits generated using the adaptive system have the highest adequacy score. This difference is more noteworthy for the post-edits from document 1 than for those from document 2. Similar conclusions can be reached according to fluency: most post-edits, independently of the system used, are either flawless or good (represented by scores 4 and 3) regarding the extent to which they are constructed. Just a few are considered to be disfluent or incomprehensible (represented by a score of 2). Again, both systems are perceived to be similar in document 2, while the adaptive system is perceived as slightly more fluent.
¹ https://www.taus.net/academy/best-practices/evaluate-best-practices/adequacy-fluency-guidelines
Finally, it is worth noting some particularities of the task that may have influenced the results of the evaluation: the task consists of the description of medical equipment and, thus, contains several singularities such as specific acronyms (with which the target audience may be more familiar in their original language than with their translation) or descriptions of parts of the equipment (taking into account that the physical equipment may have tags in its original language). Since the evaluators were given no specific instruction about how to solve those particularities, their personal criteria may have had an impact on the evaluation results.
[Figure 3: Histogram of sentence-level BLEU scores (counts per bucket of range 10, from <10.0 to >=90.0), adaptive vs. static, for (a) Document 1 and (b) Document 2.]
4.3 On differences in the generation
Next, we compare the adaptive and static systems in terms of the translations they generated. To this end, we employed the discriminative language model method (Akabe et al., 2014) implemented in the compare-mt (Neubig et al., 2019) tool, comparing sentence-level BLEU and word n-grams.
In terms of translation quality, we show a histogram of sentence-level BLEU scores in Fig. 3. For both documents, we observe similar trends: the static system generated low-scored sentences more frequently than the adaptive systems. The adaptive systems placed more hypotheses from bucket [50,60) onward, for both test documents. Moreover, the differences in frequencies between adaptive and static systems were kept at a similar proportion along all high-score buckets. Hence, adaptive systems were able to outperform the static one in these high-score ranges.
The study of the different n-grams helped us to identify common patterns across all users: adaptive systems were able to effectively learn ad-hoc sequences for the task at hand. We discovered several phenomena among the most common n-gram matches of the adaptive systems: the correct translation of acronyms, entities relating to a particular device, and specific task terminology.
See Fig. 4 for examples of these phenomena. We\nfound these common constructions to be one of the ma-\njor causes of the differences in terms of translation qual-\nity.\n5 Generation of made-up words\nOn their feedback, the users reported that, in some\ncases, the system’s hypothesis contained words which\nwere not real words (e.g., “absolvido”). This phe-nomenon, although infrequent, was a bit cumbersome.\nMost likely, it is caused by an incorrect segmentation\nof a word via the byte pair encoding process which, ac-\ncording to their frequency, splits words into multiple\ntokens. In order to assess its impact, we start by quan-\ntifying the issue. Table 6 shows the total of made-up\nwords generated per user.\nUser System Words\nUser 1Static 3\nAdaptive 6\nUser 2Static 8\nAdaptive 5\nUser 3Static 3\nAdaptive 17\nUser 4Static 8\nAdaptive 5\nUser 5Static 3\nAdaptive 14\nUser 6Static 8\nAdaptive 4\nTable 6: Total made-up words generated per user.\nWhile this phenomenon is not very frequent (it repre-\nsents from 0:2up to 0:8%of all the words generated by\na given system), it is present in all systems. Depending\non the user, this problem was more present using the\nstatic or the adaptive system. Since users were using a\ndifferent document set for each scenario (see Table 3 )\nand there is a significant difference between documents\nin terms of total words and vocabulary (see Table 1 ), we\nneed to compute the average per document in order to\nevaluate how the problem of made-up words generation\naffects the different scenarios. These results are shown\ninTable 7 .\nAlthough it could be expected for the adaptive sys-\nPhenomenon System Example\nAcronymsSource QSE Number\nPost-edit Número de ESC\nAdaptive Número de ESC\nStatic Número QSE\nEntitiesSource Show the R Series ALS\nPost-edit Mostrar la serie R ALS\nAdaptive Mostrar la serie R ALS\nStatic Mostrar el R Series ALS\nTerminologySource There are several steps involved with sidestream end tidal CO2 setup.\nPost-edit La configuración del CO2 espiratorio final de flujo lateral se realiza en varios pasos.\nAdaptive Hay varias etapas de la configuración del CO2 espiratorio final del ajuste.\nStatic Hay varias etapas que involucran la configuración del CO2 maremoto del CO2 maremoto\nFigure 4: Examples of the n-gram differences between adaptive and static systems. In boldface we highlight the differences\nintroduced by adaptive systems.\nDocument System Words\nDocument 1Static 5\nAdaptive 4\nDocument 2Static 8\nAdaptive 12\nTable 7: Average of made-up words generated per document\nfor all users.\ntem—which has to deal with a higher number of out-\nof-vocabularies, introduced by the user—to generate\nmade-up words with a higher frequency, both systems\nbehave similarly: on document 1 case, the static sys-\ntem generated 0:1%more made-up words and, in the\nother case (document 2), it was the adaptive system\nwhich generated 0:1%more made-up words. Further-\nmore, when comparing both documents, we observe\nthat, despite document 2 having a bigger vocabulary,\nboth static systems generated the same percentage of\nmade-up words. However, Document 2’s adaptive sys-\ntems generated 0:2%of made-up words on average.\nMost likely, since we did not have an in-domain cor-\npus for training the systems (see Section 3 ), the big-\nger the document’s vocabulary is, the easier an out-of-\nvocabulary word may results in an incorrect subword\nsegmentation.\n5.1 Error analysis\nFig. 
5 shows some example of made-up words gener-\nated by the static system.\nFrom the examples, we observe that while the made-\nup words do not have any sense, they resemble real\nwords (e.g., pacio resembles espacio ;escaga resem-\nblesescala ; etc). However, the words they resemble are\nsemantically very different to the correct words (e.g.,\nwhile pacio resembles espacio , the correct word would\nbeestimulación ).\nThe adaptive systems generates similar made-up\nwords (see Fig. 6 for some examples). However, in this\ncase we observe that some made-up words are almost\ncorrect: while los válvulos does not exist ( valve is a1.La zona verde es para pacio .\n2.Roll al paciente a su lado, y luego rodar el electrodo ha-\ncia la espalda del paciente a la izquierda de su columna\ny debajo de la escaga .\n3.Presione la tecla del softón .\n4.Sin embargo, el metrónomo absolvido si las compre-\nsiones son inferiores a las directrices.\n5.Que el dispositivo puede hacer un choque de prueba de\n30jojuelas .\nFigure 5: Example of sentences with made-up words (de-\nnoted in bold) from the static system. The first word should\nhave been estimulación , the second one omóplato , the third\noneRCP, the fourth one sonará and the fifth one julios .\n1.Al mover el Selector de modo a Pacer se activará la\npuerta del pidante para abrir.\n2.Coloque el sensor con el adaptador instalado fuera de\ntodas las fuentes de CO2 (incluidos los válvulos de aire\nde respiración y respiratorio) exhalado.\n3.Las marcapasas de estimulación deben producirse\naproximadamente cada centímetro en la tira.\n4.El conector de autoprueba funciona solo cuando el en-\nvase del electrodo es inabierto y conectado a la serie R\nSeries.\n5.Para aplicar los electrodos OneStep, introduzca primero\nel electrodo trasero para evitar la herración del elec-\ntrodo delantero.\nFigure 6: Example of sentences with made-up words (de-\nnoted in bold) from the adaptive systems. The first word\nshould have been marcapasos , the second one válvulas , the\nthird one marcadores , the fourth one cerrado and the fifth one\ndeformación .\nfeminine word in Spanish), it would be correct, from a\nmorphological point of view, if valve were masculine.\nSomething similar, but with the opposite gender, hap-\npens with las marcapasas (which should be los marca-\npasos ) although, in this case, the correct word would\nbemarcadores . While we do not have the means for\nevaluating the impact in the cognitive effort, we believe\nthis kind of errors are more difficult for the users to\ndeal with due to their similarity with the correct words.\nHowever, we need to assess the real impact in a future\nwork.\nWhen comparing both type of systems, there are\ntimes in which the adaptive systems are able to gener-\nate the correct word when the static system had gener-\nated a made-up word; times in which the adaptive sys-\ntems generate the same made-up word than the static\nsystem; and times in which the adaptive systems gener-\nate a made-up word when the static system was able to\ngenerate the correct word. Note that the behavior of the\nadaptive systems depend on their user (see Fig. 
7 for an\nexample).\nStatic system: Coloque el sensor con el adaptador instalado\nfuera de todas las fuentes de CO2 (incluido el del pa-\nciente) y sus válvulas de escape para el aire libre exhal-\nado y el ventilador del ventilador.\nAdaptive system User 1 :Coloque el sensor con el adaptador\ninstalado fuera de todas las fuentes de CO2 (incluido el\ndel paciente y su respiración y el respirador exhalado) .\nAdaptive system User 3 :Coloque el sensor con el adaptador\ninstalado fuera de todas las fuentes de CO2 (incluidos\nlos válvulos de aire de respiración y respiratorio) exhal-\nado.\nAdaptive system User 5 :Coloque el sensor con el adaptador\nalejado de todas las fuentes de CO2 incluido el paciente,\nysus válvulas de respiración y respiración exhalados).\nFigure 7: Example of the different behaviors of the adaptive\nsystems. At a certain point of the translation hypothesis, the\nstatic system generates the words sus válvulas . In their place,\nthe adaptive system for user 1 generates the words su res-\npiración . However, the adaptive system for user 3 generates\nthe words los válvulos —making-up the word válvulos . Fi-\nnally, the adaptive system for user 5 coincides with the static\nsystem in generating the words sus válvulas .\nFinally, we tried to compare, using edit distance, the\nmade-up words with the closest words (in morphologi-\ncal terms) from the vocabulary in order to have a better\nunderstanding of this phenomenon. However, this study\ndid not show any significant information: in almost all\nthe cases, made-up words had a lot of morphological\nsimilitudes with words from the vocabulary but those\nwords had no semantic relation with the correct word.\n6 Conclusions and future work\nIn this work, we extended a previous user study\nof an adaptive NMT system. We conducted new\nexperiments with the help of professional transla-\ntors, and observed significant improvements of the\ntranslation quality—measured in terms of hTER and\nhBLEU—and significant improvements of the user’s\nproductivity—measured in terms of post-editing timeand number of words generated. We also conducted,\nwith the help of two additional professional translators,\na human evaluation that verified the quality of the post-\nedits generated during the user study.\nThe users were pleased with the system. They no-\nticed that corrections applied on a given segment gen-\nerally were reflected on the successive ones, making\nthe post-editing process more effective and less te-\ndious. When comparing the translations generated by\nboth kind of systems, we identified that adaptive sys-\ntems were able to generate the correct translation of\nacronyms, entities relating a particular device and spe-\ncific task terminology.\nAn undesirable side effect mentioned by the users\nwas the sporadic apparition of made-up words. We\nstudied this phenomenon and reached the conclusion\nthat due to the increase in the number of out-of-\nvocabularies as part of the post-editing process, adap-\ntive systems suffer this problem more than static sys-\ntems. Furthermore, sometimes these made-up words\nare very similar, in morphological terms, to the cor-\nrect words—such as a feminine word converted into its\nnon-existent masculine equivalent—which made them\nharder to detect. 
However, the cognitive impact in the\npost-editors will need to be assesed before reaching cat-\negorical conclusions.\nIn regards to future work, we should try to assess the\ncognitive impact of the made-up words phenomenon.\nWe would also like to study the degradation of domain-\nspecific terms, and analyze the impact on the amount\nof work required to post-edit subsequent sentences as\nthe user provides corrected examples. Additionally,\nwe will integrate our adaptive systems together with\nother translation tools, such as translation memories\nor terminological dictionaries, with the aim of foster-\ning the productivity of the post-editing process. With\nthis feature-rich system, we would like to conduct ad-\nditional experiments involving more diverse languages\nand domains, using domain-specialized NMT systems,\ntesting other models (e.g., Transformer, Vaswani et al. ,\n2017 ) and involving a larger number of professional\npost-editors. Finally, we also intend to implement\nthe interactive–predictive machine translation protocol\n(Lam et al. ,2018 ;Peris and Casacuberta ,2019 ) in our\ntranslation environment, and compare it with the regu-\nlar post-editing process.\nAcknowledgements\nThe authors wish to thank the anonymous reviewers\nfor their careful reading and in-depth criticisms and\nsuggestions. The research leading to these results has\nreceived funding from the Spanish Centre for Tech-\nnological and Industrial Development (Centro para el\nDesarrollo Tecnológico Industrial) (CDTI); the Euro-\npean Union through Programa Operativo de Crec-\nimiento Inteligente (Project IDI-20170964) and through\nPrograma Operativo del Fondo Europeo de Desar-\nrollo Regional (FEDER) from Comunitat Valenciana\n(20142020) under project Sistemas de frabricación in-\nteligentes para la indústria 4.0 (grant agreement ID-\nIFEDER/2018/025); and Generalitat Valenciana (GV A)\nunder project Deep learning for adaptive and multi-\nmodal interaction in pattern recognition (DeepPattern)\n(grant agreement PROMETEO/2019/121). We grate-\nfully acknowledge the support of NVIDIA Corporation\nwith the donation of a GPU used for part of this re-\nsearch; and the translators and project managers from\nPangeanic for their help with the user study.\nReferences\nAkabe, K., Neubig, G., Sakti, S., Toda, T., and Naka-\nmura, S. (2014). Discriminative language models as\na tool for machine translation error analysis. In Pro-\nceedings of COLING 2014, the 25th International\nConference on Computational Linguistics: Technical\nPapers , pages 1124–1132.\nAlabau, V., Bonk, R., Buck, C., Carl, M., Casacuberta,\nF., García-Martínez, M., González-Rubio, J., Koehn,\nP., Leiva, L. A., Mesa-Lao, B., Ortiz-Martínez, D.,\nSaint-Amand, H., Sanchis-Trilles, G., and Tsoukala,\nC. (2013). CASMACAT: An open source workbench\nfor advanced computer aided translation. The Prague\nBulletin of Mathematical Linguistics , 100:101–112.\nAziz, W., Castilho, S., and Specia, L. (2012). Pet: a\ntool for post-editing and assessing machine transla-\ntion. In In proceedings of The International Confer-\nence on Language Resources and Evaluation , pages\n3982–3987.\nBahdanau, D., Cho, K., and Bengio, Y. (2015). Neural\nmachine translation by jointly learning to align and\ntranslate. arXiv:1409.0473 .\nBentivogli, L., Bertoldi, N., Cettolo, M., Federico, M.,\nNegri, M., and Turchi, M. (2016). On the evalua-\ntion of adaptive machine translation for human post-\nediting. 
IEEE/ACM Transactions on Audio, Speech\nand Language Processing , 24(2):388–399.\nBiçici, E. and Yuret, D. (2015). Optimizing instance\nselection for statistical machine translation with fea-\nture decay algorithms. IEEE/ACM Transactions on\nAudio, Speech and Language Processing , 23(2):339–\n350.\nBojar, O., Buck, C., Callison-Burch, C., Haddow, B.,\nKoehn, P., Monz, C., Post, M., Saint-Amand, H.,\nSoricut, R., and Specia, L., editors (2013). Proceed-\nings of the Eighth Workshop on Statistical Machine\nTranslation .\nCastilho, S., Moorkens, J., Gaspari, F., Calixto, I., Tins-\nley, J., and Way, A. (2017). Is neural machine trans-\nlation the new state of the art? The Prague Bulletin\nof Mathematical Linguistics , 108(1):109–120.\nCastilho, S., Resende, N., Gaspari, F., Way, A.,\nODowd, T., Mazur, M., Herranz, M., Helle, A.,\nRamírez-Sánchez, G., Sánchez-Cartagena, V., Pin-\nnis, M. a., and Šics, V. (2019). Large-scale machinetranslation evaluation of the iADAATPA project. In\nProceedings of the Machine Translation Summit ,\npages 179–185.\nChen, B. and Cherry, C. (2014). A systematic compari-\nson of smoothing techniques for sentence-level bleu.\nInProceedings of the Ninth Workshop on Statistical\nMachine Translation , pages 362–367.\nDaems, J. and Macken, L. (2019). Interactive adaptive\nsmt versus interactive adaptive nmt: a user experi-\nence evaluation. Machine Translation , pages 1–18.\nDale, R. (2016). How to make money in the trans-\nlation business. Natural Language Engineering ,\n22(2):321–325.\nDenkowski, M., Dyer, C., and Lavie, A. (2014). Learn-\ning from post-editing: Online model adaptation for\nstatistical machine translation. In Proceedings of the\n14th Conference of the European Chapter of the As-\nsociation for Computational Linguistics , pages 395–\n404.\nDomingo, M., García-Martínez, M., Estela Pastor, A.,\nBié, L., Helle, A., Peris, Á., Casacuberta, F., and Her-\nranz Pérez, M. (2019a). Demonstration of a neural\nmachine translation system with online learning for\ntranslators. In Proceedings of the 57th Annual Meet-\ning of the Association for Computational Linguistics:\nSystem Demonstrations , pages 70–74.\nDomingo, M., García-Martínez, M., Peris, Á., Helle,\nA., Estela, A., Bié, L., Casacuberta, F., and Herranz,\nM. (2019b). Incremental adaptation of NMT for pro-\nfessional post-editors: A user study. In Proceedings\nof the Machine Translation Summit , pages 219–227.\nFederico, M., Bertoldi, N., Cettolo, M., Negri, M.,\nTurchi, M., Trombetti, M., Cattelan, A., Farina, A.,\nLupinetti, D., Martines, A., Massidda, A., Schwenk,\nH., Barrault, L., Blain, F., Koehn, P., Buck, C., and\nGermann, U. (2014). The matecat tool. In Proceed-\nings of the 25th International Conference on Compu-\ntational Linguistics: System Demonstrations , pages\n129–132.\nFreitag, M., Grangier, D., and Caswell, I. (2020).\nBleu might be guilty but references are not innocent.\narXiv:2004.06063 .\nGers, F. A., Schmidhuber, J., and Cummins, F. (2000).\nLearning to forget: Continual prediction with LSTM.\nNeural computation , 12(10):2451–2471.\nGreen, S., Heer, J., and Manning, C. D. (2013a). The\nefficacy of human post-editing for language transla-\ntion. In Proceedings of the SIGCHI Conference on\nHuman Factors in Computing Systems , pages 439–\n448.\nGreen, S., Wang, S., Cer, D., and Manning, C. D.\n(2013b). Fast and adaptive online training of feature-\nrich translation models. 
In Proceedings of the 51st\nAnnual Meeting of the Association for Computa-\ntional Linguistics , volume 1, pages 311–321.\nGuerberof, A. (2008). Productivity and quality in\nthe post-editing of outputs from translation memo-\nries and machine translation. Localisation Focus ,\n7(1):11–21.\nHu, K. and Cadwell, P. (2016). A comparative study of\npost-editing guidelines. In Proceedings of the 19th\nAnnual Conference of the European Association for\nMachine Translation , pages 34206–353.\nJia, Y., Carl, M., and Wang, X. (2019). Post-editing\nneural machine translation versus phrase-based ma-\nchine translation for english–chinese. Machine\nTranslation , pages 1–21.\nKarimova, S., Simianer, P., and Riezler, S. (2018). A\nuser-study on online adaptation of neural machine\ntranslation to human post-edits. Machine Transla-\ntion, 32(4):309–324.\nKingma, D. P. and Ba, J. (2014). Adam: A\nmethod for stochastic optimization. arXiv preprint\narXiv:1412.6980 .\nKlein, G., Kim, Y., Deng, Y., Senellart, J., and Rush,\nA. M. (2017). OpenNMT: Open-source toolkit for\nneural machine translation. In Proceedings of the\nAssociation for the Computational Linguistics , pages\n67–72.\nKoponen, M., Salmi, L., and Nikulin, M. (2019). A\nproduct and process analysis of post-editor correc-\ntions on neural, statistical and rule-based machine\ntranslation output. Machine Translation , pages 1–30.\nKothur, S. S. R., Knowles, R., and Koehn, P. (2018).\nDocument-level adaptation for neural machine trans-\nlation. In Proceedings of the 2nd Workshop on Neu-\nral Machine Translation and Generation , pages 64–\n73.\nLam, T. K., Kreutzer, J., and Riezler, S. (2018). A rein-\nforcement learning approach to interactive-predictive\nneural machine translation. In Proceedings of the Eu-\nropean Association for Machine Translation confer-\nence, pages 169–178.\nLäubli, S., Castilho, S., Neubig, G., Sennrich, R., Shen,\nQ., and Toral, A. (2020). A set of recommenda-\ntions for assessing human–machine parity in lan-\nguage translation. Journal of Artificial Intelligence\nResearch , 67:653–672.\nNeubig, G., Dou, Z.-Y., Hu, J., Michel, P., Pruthi, D.,\nand Wang, X. (2019). compare-mt: A tool for holistic\ncomparison of language generation systems. In Pro-\nceedings of the 2019 Conference of the North Amer-\nican Chapter of the Association for Computational\nLinguistics (Demonstrations) , pages 35–41.\nOrtiz-Martínez, D. (2016). Online learning for statisti-\ncal machine translation. Computational Linguistics ,\n42(1):121–161.\nPapineni, K., Roukos, S., Ward, T., and Zhu, W.-J.\n(2002). BLEU: a method for automatic evaluationof machine translation. In Proceedings of the An-\nnual Meeting of the Association for Computational\nLinguistics , pages 311–318.\nPeris, Á. and Casacuberta, F. (2019). Online learn-\ning for effort reduction in interactive neural machine\ntranslation. Computer Speech & Language. In Press.\nPeris, Á., Cebrián, L., and Casacuberta, F. (2017).\nOnline learning for neural machine translation post-\nediting. arXiv:1706.03196 .\nPost, M. (2018). A call for clarity in reporting bleu\nscores. In Proceedings of the Third Conference on\nMachine Translation: Research Papers , pages 186–\n191.\nPym, A., Grin, F., Sfreddo, C., and Chan, A. (2012).\nThe status of the translation profession in the euro-\npean union. Technical report.\nRiezler, S. and Maxwell, J. T. (2005). On some pitfalls\nin automatic evaluation and significance testing for\nmt. 
In Proceedings of the ACL workshop on intrinsic\nand extrinsic evaluation measures for machine trans-\nlation and/or summarization , pages 57–64.\nSennrich, R., Haddow, B., and Birch, A. (2016). Neu-\nral machine translation of rare words with subword\nunits. In Proceedings of the Annual Meeting of\nthe Association for Computational Linguistics , pages\n1715–1725.\nSnover, M., Dorr, B., Schwartz, R., Micciulla, L., and\nMakhoul, J. (2006). A study of translation edit rate\nwith targeted human annotation. In Proceedings of\nthe Association for Machine Translation in the Amer-\nicas, pages 223–231.\nSzegedy, C., Liu, W., Jia, Y., Sermanet, P., Reed, S.,\nAnguelov, D., Erhan, D., Vanhoucke, V., and Rabi-\nnovich, A. (2015). Going deeper with convolutions.\nInProceedings of the IEEE Conference on Computer\nVision and Pattern Recognition , pages 1–9.\nToral, A. (2019). Post-editese: an exacerbated transla-\ntionese. In Proceedings of Machine Translation Sum-\nmit XVII Volume 1: Research Track , pages 273–281.\nToral, A., Castilho, S., Hu, K., and Way, A. (2018).\nAttaining the unattainable? reassessing claims of hu-\nman parity in neural machine translation. In Pro-\nceedings of the Third Conference on Machine Trans-\nlation: Research Papers , pages 113–123, Brussels,\nBelgium. Association for Computational Linguistics.\nTurchi, M., Negri, M., Farajian, M. A., and Federico,\nM. (2017). Continuous learning from human post-\nedits for neural machine translation. The Prague Bul-\nletin of Mathematical Linguistics , 108(1):233–244.\nTurovsky, B. (2016). Ten years of Google Translate.\nVasconcellos, M. and León, M. (1985). SPANAM and\nENGSPAN: machine translation at the pan ameri-\ncan health organization. Computational Linguistics ,\n11(2-3).\nVaswani, A., Shazeer, N., Parmar, N., Uszkoreit, J.,\nJones, L., Gomez, A. N., Kaiser, Ł., and Polosukhin,\nI. (2017). Attention is all you need. In Advances\nin Neural Information Processing Systems , pages\n5998–6008.\nWu, Y., Schuster, M., Chen, Z., Le, Q. V., Norouzi,\nM., Macherey, W., Krikun, M., Cao, Y., Gao, Q.,\nMacherey, K., Klingner, J., Shah, A., Johnson, M.,\nLiu, X., Kaiser, Ł., Gouws, S., Kato, Y., Kudo, T.,\nKazawa, H., Stevens, K., Kurian, G., Patil, N., Wang,\nW., Young, C., Smith, J., Riesa, J., Rudnick, A.,\nVinyals, O., Corrado, G., Hughes, M., and Dean, J.\n(2016). Google’s neural machine translation system:\nBridging the gap between human and machine trans-\nlation. arXiv:1609.08144 .\nWuebker, J., Simianer, P., and DeNero, J. (2018). Com-\npact personalized models for neural machine trans-\nlation. 
In Proceedings of the 2018 Conference on\nEmpirical Methods in Natural Language Processing ,\npages 881–886.\nAppendix A User Questionnaire\nHow satisfied are you with the translation you have\nproduced?\n\u000fVery satisfied.\n\u000fSomewhat satisfied.\n\u000fNeutral.\n\u000fSomewhat dissatisfied.\n\u000fVery dissatisfied.\nWould you have preferred to work on your trans-\nlation from scratch instead of post-editing machine\ntranslation?\n\u000fYes.\n\u000fNo.\nDo you think that you will want to apply machine\ntranslation in your future translation tasks?\n\u000fYes, at some point.\n\u000fNo, never.\n\u000fI’m not sure yet.\nBased on the post-editing task you have per-\nformed, how much do you rate machine translation\noutputs on the following attributes?\nWell below average Below average Average Above average Well above average\nGrammatically\nStyle\nOverall quality\nBased on the post-editing task you have per-\nformed, which of these statements will you go for?\u000fI had to post-edit ALL the outputs.\n\u000fI had to post-edit about 75 % of the outputs.\n\u000fI had to post-edit 2550 % outputs.\n\u000fI only had to post-edit VERY FEW outputs.\nBased on the post-editing task you have per-\nformed, how often would you have preferred to\ntranslate from scratch rather than post-editing ma-\nchine translation?\n\u000fAlways.\n\u000fIn most of the cases (75 % of the outputs or more).\n\u000fIn almost half of the cases (approximately 50 %).\n\u000fOnly in a very few cases (less than 25 %).\nWhich of the tasks do you think was the one that\ncontained online learning? (Note: This question was\nonly asked after both tasks had been completed.)\n\u000fTask 1.\n\u000fTask 2.\nGive your opinion about the task you have per-\nformed.",
"main_paper_content": null
}
|
{
"decision": "Unknown",
"reviews": []
}
| 0 | 0 |
[] |
[] |
{
"id": "CV-QoGFgFMZ",
"year": null,
"venue": "EAMT 2016",
"pdf_link": "https://aclanthology.org/W16-3415.pdf",
"forum_link": "https://openreview.net/forum?id=CV-QoGFgFMZ",
"arxiv_id": null,
"doi": null
}
|
{
"title": "Interactive-Predictive Translation Based on Multiple Word-Segments",
"authors": [
"Miguel Domingo",
"Álvaro Peris",
"Francisco Casacuberta"
],
"abstract": "Miguel Domingo, Alvaro Peris, Francisco Casacuberta. Proceedings of the 19th Annual Conference of the European Association for Machine Translation. 2016.",
"keywords": [],
"raw_extracted_content": "Baltic J. Modern Computing, V ol. 4 (2016), No. 2, pp. 282–291\nInteractive-Predictive Translation based on\nMultiple Word-Segments\nMiguel DOMINGO, ´Alvaro PERIS, Francisco CASACUBERTA\nPattern Recognition and Human Language Technology Research Center\nCamino de Vera s/n, 46022 Valencia, Spain\[email protected], [email protected], [email protected]\nAbstract. Current machine translation systems require human revision to produce high-quality\ntranslations. This is achieved through a post-editing process or by means of an interactive hu-\nman–computer collaboration. Most protocols belonging to the last scenario follow a left-to-right\nstrategy, where the prefix of the translation is iteratively increased by successive validations and\ncorrections made by the user. In this work, we propose a new interactive protocol which allows\nthe user to validate all correct word sequences in the translation generated by the system, breaking\nthe left-to-right barrier. We evaluated our proposal through simulated experiments, obtaining large\nreductions of the human effort.\nKeywords: machine translation, computer-assisted translation, interactive-predictive machine\ntranslation\n1 Introduction\nMachine Translation (MT) technology is still far from producing perfect translations\n(Dale, 2016). Therefore, translation errors must be corrected by a human in a later\npost-editing stage.\nThe Interactive-Predictive Machine Translation (IMT) field arose as an alternative to\nclassic post-editing systems, aiming to reduce human post-editing effort and increase\nefficiency. This paradigm strives for combining the knowledge of a human translator and\nthe efficiency of an MT system. Notable contributions to IMT technology were carried\nout around the TransType (Foster et al., 1997; Langlais and Lapalme, 2002), TransType2\n(Barrachina et al., 2009; Casacuberta et al., 2009); and CasMaCat (Mart ´ınez-G ´omez\net al., 2012; Alabau et al., 2013; Gonz ´alez-Rubio et al., 2013; Sanchis-Trilles et al.,\n2014) projects, among others (Koehn, 2009; Huang et al., 2012; Cai et al., 2013; Green\net al., 2014; Torregrosa et al., 2014; Azadi and Khadivi, 2015; Marie and Max, 2015).\nEspecially interesting is the so-called prefix-based IMT (Barrachina et al., 2009).\nIn this approach, the user corrected the first wrong word (from left-to-right) of the\nInteractive-Predictive Translation based on Multiple Word-Segments 283\ntranslation suggested by the system. Then, the system proposed an alternative hypothesis,\ncompatible with the user feedback. A cumbersome phenomenon noticed in this protocol\nhappened when the non-validated part of the sentence contained correct words. If such\nwords were modified by the system in following predictions, the user had to edit words\nthat were correct in previous iterations. Therefore, the effort made by the user was\nincreased and the system had an annoying behavior.\nTo overcome this weakness, we propose a new IMT approach which allows the\nuser to select, at each interaction, all correctly translated word segments. Hence, correct\nparts of the current translation are kept in successive hypothesis produced during the\nhuman–machine interaction, reducing the number of corrections required and avoiding\nthe aforementioned issue. This approach relies on the idea from Gonz ´alez-Rubio et al.\n(2016) of breaking down the prefix constraint.\nThe proposed protocol shares some similarities with Marie and Max (2015) in that\nwe select word segments from a translation hypothesis. 
However, on the one hand, our\nprotocol contains more types of user interactions such as word corrections and word\ndeletions (see Section 2); and, on the other hand, we have different goals in mind: Marie\nand Max (2015) aim at increasing translation quality with the help of a human user, and\nwe aim at reducing the human effort of generating a translation in an IMT framework.\nThe rest of this paper is structured as follows: Section 2 describes our segment-based\nIMT approach. After that, in Section 3, we report the experiments conducted in order\nto assess our proposal and the results of those experiments. Finally, conclusions of the\nwork are drawn in Section 4.\n2 Segment-Based Search\nThe goal of the IMT protocol developed in this work is to offer more freedom to\nthe human agent, empowering the selection of the correct segments of a translation\nhypothesis. To achieve this, we allow the user to select, remove, or replace parts of a\ntranslation suggestion. The system then reacts to this human feedback, producing a new\ncompatible hypothesis. Fig. 1 shows an example of an IMT session using the proposed\nsegment-based approach.\n2.1 Statistical Framework\nBarrachina et al. (2009) proposed an statistical framework for the prefix-based IMT\napproach, where human and computer iteratively collaborated for translating a source\nsentence x. In this framework, at the beginning of the process, the system proposes\na translation hypothesis y. Then, the user searches, from left-to-right, the first wrong\nword in yand corrects it. With this action, the user defines a valid translation prefix\n^p. At the next iteration, the system generates a suffix ~sthat completes ^pin order to\n(hopefully) obtain a better translation of x:y0=^p~s. This process is repeated until the\nuser accepts the complete suggestion of the system. At each iteration, ~sis obtained as\nthe most probable of all possible suffixes s, given the prefix ^pand the source sentence x:\n~s= arg max\nsPr(sjx;^p) (1)\n284 Domingo et al.\nsource ( x):Et la question n ’ a pas encore ´et´e´evalu ´ee chez les patients atteints de cancer gastrique\ntarget translation ( ^ y):And the issue has not been evaluated in gastric cancer patients\nIT-0TAnd the issue has not yet been investigated among patients with gastric cancer\nIT-1UAnd the issue has not been evaluated among patients with gastric cancer\nTAnd the issue has not been evaluated not in gastric cancer patients with\nIT-2UAnd the issue has not been evaluated in gastric cancer patients #\nTAnd the issue has not been evaluated in gastric cancer patients\nEND UAnd the issue has not been evaluated in gastric cancer patients\nFig. 1: Segment-based IMT session to translate a French sentence into English. At the initial\niteration ( IT-0), the system suggests an initial translation. Then, at iteration 1, the user selects\nthose segments to keep (“ And the issue has not ”, “been ” and “ gastric cancer ”); deletes a word\n(“yet”); and substitutes “ investigated ” by “ evaluated ”, which is added to the segment. With this\ninformation, the system suggests a new hypothesis. Similarly, at iteration 2, the user selects new\nvalid segments (“ in” and “ patients ”), deletes words that are in the middle of two segments (“ not”),\nand inputs an end of sentence mark (illustrated as “ #”). 
The session ends when the user accepts\nthe last translation suggested by the system.\nThis equation can be straightforwardly rewritten as:\n~s= arg max\nsPr(^p;sjx) (2)\nTherefore, at each iteration, the process consists of a regular search in the space of\nthe translations but constrained by the prefix ^p.\nThe protocol proposed in our work follows this iterative procedure but, at each\niteration, the user is free to validate all correct subsequences of words (segments) from\ny. The user has also the possibility of deleting all words located between two segments\n(merging both segments into one), and either correcting a wrong word (as in the prefix-\nbased approach) or inserting a new word between two segments.\nLetf=^f1; : : : ; ^fNbe a feedback signal, where ^f1; : : : ; ^fNis the sequence of N\nsegments validated by the user in an interaction (including a one-word segment with\nthe new word). The goal is to generate a sequence h=~h1; : : : ; ~hNof new translation\nsegments (an ~hifor each pair of validated segments ^fi,^fi+1; being 1\u0014i < N ) to\nobtain a (hopefully) better translation of x:y0=^f1;~h1; : : : ; ^fN;~hN. In our statistical\nframework, the best translation segments are obtained as:\n~h1; : : : ; ~hN= arg max\nh1;:::;hNPr(h1; : : : ; hNjx;^f1; : : : ; ^fN) (3)\nwhich can be rewritten as:\n~h1; : : : ; ~hN= arg max\nh1;:::;hNPr(^f1;h1; : : : ; ^fN;hNjx) (4)\nThis last equation is very similar to the classical prefix-based IMT equation (Eq. (1)),\nwith the main difference being that the search process in Eq. (1) is limited to the space\nof suffixes constrained by ^p, while the search in Eq. (4) is in the space of possible\nsubstrings of the translations of x, constrained by the sequence of segments ^f1; : : : ; ^fN.\nInteractive-Predictive Translation based on Multiple Word-Segments 285\n3 Experiments\n3.1 Corpora\nWe tested our proposal in four tasks from different domains: the EMEA corpus1(Tiede-\nmann, 2009), formed by documents from the European Medical Agency ; the EUcor-\npus (Barrachina et al., 2009), extracted from the Bulletin of the European Union ; the\nTED corpus2(Federico et al., 2011), a collection of recordings of public speeches cover-\ning a variety of topics; and the Xerox corpus (Barrachina et al., 2009), extracted from\nXerox printer manuals. To the best of our knowledge, excluding EMEA, all corpora\nhave been used in previous IMT works (Tom ´as and Casacuberta, 2006; Barrachina et al.,\n2009; Gonz ´alez-Rubio et al., 2013). The partition sets used in this work are the same\nthan those used in the aforementioned works.\nAll datasets have been tokenized by means of the standard tool provided with the\nMoses toolkit (Koehn et al., 2007)—exempting Chinese sentences, which were split\ninto words using the Standford word segmenter (Tseng et al., 2005). Sentences have\nbeen kept truecased, except for the Zh–En language pair, since Chinese has no case\ninformation. Table 1 shows the corpora main features.\nTable 1: Corpora statistics. 
K denotes thousands and M millions. |S| stands for number of sentences, |W| for number of words and |V| for size of the vocabulary.

                EMEA (Fr/En)    EU (Es/En)    TED (Zh/En)    Xerox (Es/En)
Train   |S|     1.1M            214K          106.9K         55.6K
        |W|     14.3M/17.0M     6M/5.4M       1.9M/2.1M      750K/665K
        |V|     71K/80K         84K/70K       55K/41.7K      16.8K/14K
Dev.    |S|     500             400           934            1012
        |W|     12K/10K         12K/10K       21.5K/20.1K    16K/14.4K
        |V|     2.9K/2.7K       3K/2.7K       3.8K/3.2K      1.8K/1.6K
Test    |S|     1K              800           1.6K           1.1K
        |W|     27K/21K         23K/20K       33.2K/31.9K    10.1K/8.4K
        |V|     4.5K/4.5K       4.7K/4.2K     4.5K/3.7K      2K/1.9K

(The EMEA and TED corpora are available at http://www.statmt.org/wmt14/medical-task/ and https://wit3.fbk.eu/mt.php?release=2012-03-test, respectively.)
3.2 Metrics
The quality of our interactive protocol is assessed according to the following metrics:
Word Stroke Ratio (WSR) (Tomás and Casacuberta, 2006): Measures the number of words edited by the user, normalized by the number of words in the final translation. In this work, we assume that editing a word has a constant cost (one word stroke) independently of its length.
Mouse Action Ratio (MAR) (Barrachina et al., 2009): Measures the number of mouse actions made by the user, normalized by the number of characters in the final translation. In classic IMT, the user makes a mouse action each time she needs to edit a word (to position the prompt), and one more per sentence to validate the translation. In the protocol proposed in this work, in addition to those mouse actions, the user makes two actions each time she validates a segment (clicking at the beginning and at the end of the segment), and two more each time she deletes some words located between segments (the same procedure as selecting segments, but using the right mouse button; a single mouse action suffices for selecting or deleting a one-word segment, in which case the user simply clicks on the word).
Conceptually, WSR accounts for the physical effort of typing corrections, while MAR accounts for the cognitive effort of the supervision process (Macklovitch et al., 2005).
Additionally, to evaluate the quality of the initial translations, we have used the following well-known metric:
BiLingual Evaluation Understudy (BLEU) (Papineni et al., 2002): computes the geometric average of the modified n-gram precision, multiplied by a factor that penalizes short sentences.
3.3 Implementation
Our implementation of the segment-based IMT protocol is based on the Moses toolkit (Koehn et al., 2007). We profit from the decoder's XML markup scheme, which allows us to bring external knowledge to the decoder and plug in the translation of parts of a sentence without changing the models (see Fig. 2 for an example), thereby validating the translation of those parts. More precisely, we use the exclusive mode, which only takes into account the given translation for a part of a sentence—ignoring any phrases from the phrase table that overlap with that span. With this, we can constrain the search process to follow Eq. (4).
<x translation = “And the issue has not been evaluated” >Et la question n ’ a pas encore été évaluée </x><wall/> chez les patients atteints de <x translation = “gastric cancer” >cancer gastrique </x><wall/>
Fig. 2: Example of a sentence in XML markup language (corresponding to the sentence of the first iteration of Fig. 1), specifying the desired translation for some parts of the sentence: Et la question n ’ a pas encore été évaluée must be translated as And the issue has not been evaluated, and cancer gastrique must be translated as gastric cancer. 
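To make the markup mechanism concrete, the following is a minimal sketch — our own illustration, not the authors' prototype — of how validated target segments and their aligned source spans could be serialised into an XML-input sentence like the one in Fig. 2 (the decoder would then be run with XML input handling in the exclusive mode mentioned above). The function name, the span format and the example token indices are assumptions made for the example.

```python
from xml.sax.saxutils import escape

def to_moses_xml(source_tokens, constraints):
    """Serialise validated segments as a Moses XML-input sentence.

    `constraints` is a list of (start, end, translation) tuples: [start, end)
    is a span over `source_tokens`, and `translation` is the target string the
    decoder must use for that span (an empty string would request a deletion).
    Spans are assumed to be sorted and non-overlapping.
    """
    parts, cursor = [], 0
    for start, end, translation in constraints:
        # Unconstrained source words are passed through for normal decoding.
        parts.extend(source_tokens[cursor:start])
        span = escape(" ".join(source_tokens[start:end]))
        attr = escape(translation, {'"': "&quot;"})
        parts.append('<x translation="%s">%s</x><wall/>' % (attr, span))
        cursor = end
    parts.extend(source_tokens[cursor:])
    return " ".join(parts)

# Example mirroring the first iteration of Fig. 1 (token indices are ours).
src = ("Et la question n ' a pas encore été évaluée "
       "chez les patients atteints de cancer gastrique").split()
print(to_moses_xml(src, [(0, 10, "And the issue has not been evaluated"),
                         (15, 17, "gastric cancer")]))
```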
The tag <wall/> indicates to the decoder that those segments should not be reordered.
We implemented a prototype that manages the interaction between a human agent and the MT system. This is an iterative process in which the prototype, by means of the XML markup language, takes into account the feedback provided by the user, obtains a translation with Moses, and suggests the new hypothesis. All this takes place at the end of each iteration, with an average response time of 90 ms per iteration (tested on a machine with an Intel i5 CPU at 3.1 GHz). According to Nielsen (1993), this time is below “the limit for having the user feel that the system is reacting instantaneously”.
At each one of these iterations, the user has three different ways of interacting with the system (see Section 2). Each interaction affects the generation of the new XML markup sentence differently:
Segment selection: for each segment selected by the user, we align the words of that segment with their corresponding source words (phrase alignments), and generate an XML tag to plug in that segment (the desired translation) for those source words.
Word deletion: in the same fashion as with segments, for each word to delete, we align that word with its corresponding source words and generate a new XML tag indicating that we want to obtain an empty translation.
Word correction: each time the user corrects a word or inserts a new one, we align the new word with its corresponding source words using a hidden Markov alignment model (Vogel et al., 1996).
All the MT systems used in this work were trained with the standard configuration of Moses, with the weights of the log-linear model optimized by means of the Minimum Error Rate Training (MERT) procedure (Och, 2003). Lastly, a 5-gram word-based language model was estimated on the target side of the parallel corpora, using improved Kneser-Ney smoothing (Chen and Goodman, 1996), by means of the SRILM toolkit (Stolcke, 2002).
For the implementation of the classic prefix-based IMT systems, we performed the word graph exploration and the best suffix generation for a given prefix following the procedure described by Barrachina et al. (2009): We generated a word graph for each sentence to translate. After that, treating the word graph as a weighted finite-state automaton, we parsed the prefix over it, from the initial state to any other intermediate state, to find the best path that accounts for the prefix. Finally, we obtained the corresponding translation for the best path from the intermediate state to the final state. Therefore, our implementation of prefix-based IMT is consistent with Barrachina et al. (2009), considering that we generate word graphs with the current state-of-the-art SMT toolkit, Moses.
3.4 Evaluation on a Simulated Environment
Since evaluation with human agents is too slow and expensive to be applied frequently during system development, we carried out an automatic evaluation with simulated users. For this evaluation, we considered the references in the corpora as the translations the user desires. Furthermore, without loss of generality and for the sake of simplicity, we assumed that the user always corrected the left-most wrong word.
At each iteration of the IMT session, we selected those segments that were common with the reference. 
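One simple way to realise that selection is to take the maximal blocks of words the hypothesis shares, in order, with the reference. The sketch below is our own reading of this step, not the authors' simulation code; it uses difflib's matching blocks as a stand-in for the word-level comparison.

```python
import difflib

def common_segments(hypothesis, reference):
    """Return (start, end) spans of hypothesis words shared with the reference.

    Each block of words that also appears, in order, in the reference is kept
    as one validated segment, mimicking the simulated user's selections.
    """
    matcher = difflib.SequenceMatcher(a=hypothesis, b=reference, autojunk=False)
    return [(m.a, m.a + m.size)
            for m in matcher.get_matching_blocks() if m.size > 0]

hyp = "And the issue has not yet been investigated among patients with gastric cancer".split()
ref = "And the issue has not been evaluated in gastric cancer patients".split()
print([" ".join(hyp[s:e]) for s, e in common_segments(hyp, ref)])
```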
After that, following a left-to-right order, we compared each word of the current translation with those of the reference. When we found a word that differed between translation and reference, if that reference word was the first one of the next selected segment, we deleted all the words between those two segments; otherwise, we input that word (merging all previous segments into one). Once translation and reference were the same, we moved on to the next sentence.
3.5 Results
Table 2 shows the user-effort results of our segment-based protocol against the prefix-based approach. Prefix-based results were obtained following the work of Barrachina et al. (2009) and are similar to those reported in the literature (Tomás and Casacuberta, 2006; Barrachina et al., 2009; González-Rubio et al., 2013). The quality of the initial translation is also displayed as an indicator of the difficulty of each task. Our proposal clearly improves over prefix-based IMT in terms of the user's physical effort of typing corrections. The WSR is always reduced, with reductions of up to 29 points.
Table 2: Results of our segment-based IMT proposal, in comparison with the prefix-based approach. The quality of the initial translation is shown as an indicator of the difficulty of each task. All values are reported as percentages.

Corpus   Language   BLEU    Prefix-Based (WSR / MAR)   Segment-Based (WSR / MAR)
EMEA     Fr–En      31.3    57.8 / 12.4                34.4 / 18.8
         En–Fr      30.2    58.4 / 12.5                40.4 / 16.3
EU       Es–En      48.2    45.6 / 10.2                28.3 / 15.0
         En–Es      48.7    44.6 / 9.7                 29.8 / 13.5
TED      Zh–En      11.7    83.1 / 22.4                54.1 / 28.3
         En–Zh      8.7     86.3 / 55.7                59.2 / 72.4
Xerox    Es–En      54.5    35.8 / 10.5                23.2 / 16.9
         En–Es      62.2    28.3 / 7.9                 22.1 / 12.5

This reduction in typing effort comes with an increase in the number of mouse actions (from 4 up to 6.5 points of MAR), which is always smaller than the effort reduction. An exception is the En–Zh language pair: due to the nature of Chinese, words have fewer characters, which penalizes the MAR metric. This results in a greater increase in MAR, although the increase is still smaller than the effort reduction. Moreover, as mentioned before, WSR and MAR account for different phenomena and thus have different costs from a human point of view (Macklovitch et al., 2005). Therefore, the physical effort is substantially decreased, while the cognitive one is slightly increased. Nonetheless, we need to test these considerations with real human users before reaching categorical conclusions.
4 Conclusions
In this work, we have proposed a new IMT approach that overcomes the classic prefix-based IMT limitation of only correcting the prefix. Our proposal allows the user to select all correct word segments each time the system proposes a new translation. The system leverages this additional knowledge to offer better-informed hypotheses. Hence, the human typing effort should be reduced.
We tested the proposal in a simulated environment, which confirmed that our approach effectively reduces the physical effort required, at the expense of a slight increase in the cognitive effort. 
As future work, we should test the improvements of our proposal\nwith real users in order to obtain actual measures of the effort reduction.\nAcknowledgments\nThe research leading to these results has received funding from the Ministerio de\nEconom ´ıa y Sostenibilidad (MINECO) under project SmartWays (grant agreement\nRTC-2014-1466-4), and Generalitat Valenciana under project ALMAMATER (grant\nagreement PROMETEOII/2014/030).\nReferences\nAlabau, Vicent, Ragnar Bonk, Christian Buck, Michael Carl, Francisco Casacuberta,\nMercedes Garc ´ıa-Mart ´ınez, Jes ´us Gonz ´alez-Rubio, Philipp Koehn, Luis A. Leiva,\nBartolom ´e Mesa-Lao, Daniel Ortiz-Mart ´ınez, Herv ´e Saint-Amand, Germ ´an Sanchis-\nTrilles, and Chara Tsoukala (2013). “CASMACAT: An Open Source Workbench for\nAdvanced Computer Aided Translation”. In: The Prague Bulletin of Mathematical\nLinguistics 100, pp. 101–112.\nAzadi, Fatemeh and Shahram Khadivi (2015). “Improved Search Strategy for Interactive\nMachine Translation in Computer-Asisted Translation”. In: Proceedings of Machine\nTranslation Summit XV , pp. 319–332.\nBarrachina, Sergio, Oliver Bender, Francisco Casacuberta, Jorge Civera, Elsa Cubel,\nShahram Khadivi, Antonio Lagarda, Hermann Ney, Jes ´us Tom ´as, Enrique Vidal, and\nJuan-Miguel Vilar (2009). “Statistical Approaches to Computer-Assisted Transla-\ntion”. In: Computational Linguistics 35, pp. 3–28.\nCai, Dongfeng, Hua Zhang, and Na Ye (2013). “Improvements in Statistical Phrase-\nBased Interactive Machine Translation”. In: Proceedings of the International Confer-\nence on Asian Language Processing , pp. 91–94.\nCasacuberta, Francisco, Jorge Civera, Elsa Cubel, Antonio L. Lagarda, Guy Lapalme,\nElliott Macklovitch, and Enrique Vidal (2009). “Human Interaction for High-quality\nMachine Translation”. In: Communications of the Association for Computing Ma-\nchinery 52.10, pp. 135–138.\nChen, Stanley F. and Joshua Goodman (1996). “An Empirical Study of Smoothing\nTechniques for Language Modeling”. In: Proceedings of the Annual Meeting on\nAssociation for Computational Linguistics , pp. 310–318.\n290 Domingo et al.\nDale, Robert (2016). “How to make money in the translation business”. In: Natural\nLanguage Engineering 22.2, pp. 321–325.\nFederico, Marcello, Luisa Bentivogli, Michael Paul, and Sebastian St ¨uker (2011).\n“Overview of the IWSLT 2011 evaluation campaign”. In: International Workshop on\nSpoken Language Translation , pp. 11–27.\nFoster, George, Pierre Isabelle, and Pierre Plamondon (1997). “Target-Text Mediated\nInteractive Machine Translation”. In: Machine Translation 12, pp. 175–194.\nGonz ´alez-Rubio, Jes ´us, Daniel Ortiz-Mart ´ınez, Jos ´e-Miguel Bened ´ı, and Francisco\nCasacuberta (2013). “Interactive Machine Translation using Hierarchical Translation\nModels”. In: Proceedings of the Conference on Empirical Methods in Natural\nLanguage Processing , pp. 244–254.\nGonz ´alez-Rubio, Jes ´us, Jos ´e-Miguel Bened ´ı, and Francisco Casacuberta (2016). “Beyond\nPrefix-Based Interactive Translation Prediction”. Unpublished results.\nGreen, Spence, Jason Chuang, Jeffrey Heer, and Christopher D. Manning (2014). “Pre-\ndictive Translation Memory: A Mixed-Initiative System for Human Language Trans-\nlation”. In: Proceedings of the Annual Association for Computing Machinery Sympo-\nsium on User Interface Software and Technology , pp. 177–187.\nHuang, Chung-chi, Ping-che Yang, Keh-jiann Chen, and Jason S. Chang (2012). “TransA-\nhead: A Computer-Assisted Translation and Writing Tool”. 
In: Proceedings of the\nConference of the North American Chapter of the Association for Computational\nLinguistics , pp. 352–356.\nKoehn, Philipp (2009). “A Web-Based Interactive Computer Aided Translation Tool”. In:\nProceedings of the International Joint Conference on Natural Language Processing ,\npp. 17–20.\nKoehn, Philipp, Hieu Hoang, Alexandra Birch, Chris Callison-Burch, Marcello Federico,\nNicola Bertoldi, Brooke Cowan, Wade Shen, Christine Moran, Richard Zens, Chris\nDyer, Ond ˇrej Bojar, Alexandra Constantin, and Evan Herbst (2007). “Moses: Open\nSource Toolkit for Statistical Machine Translation”. In: Proceedings of the Annual\nMeeting of the Association for Computational Linguistics , pp. 177–180.\nLanglais, Philippe and Guy Lapalme (2002). “TransType: Development-Evaluation\nCycles to Boost Translator’s Productivity”. In: Machine Translation 17.2, pp. 77–98.\nMacklovitch, Elliot, Nam-Trung Nguyen, and Roberto Silva (2005). User evaluation\nreport . Tech. rep. Transtype2 (ISR-2001-32091).\nMarie, Benjamin and Aur ´elien Max (2015). “Touch-Based Pre-Post-Editing of Machine\nTranslation Output”. In: Proceedings of the Conference on Empirical Methods in\nNatural Language Processing , pp. 1040–1045.\nMart ´ınez-G ´omez, Pascual, Germ ´an Sanchis-Trilles, and Francisco Casacuberta (2012).\n“Online Adaptation Strategies for Statistical Machine Translation in Post-Editing\nScenarios”. In: Pattern Recognition 45.9, pp. 3193–3203.\nNielsen, Jakob (1993). Usability Engineering . Morgan Kaufmann Publishers Inc. ISBN :\n0125184050.\nOch, Franz Josef (2003). “Minimum Error Rate Training in Statistical Machine Transla-\ntion”. In: Proceedings of the Annual Meeting of the Association for Computational\nLinguistics , pp. 160–167.\nInteractive-Predictive Translation based on Multiple Word-Segments 291\nPapineni, Kishore, Salim Roukos, Todd Ward, and Wei-Jing Zhu (2002). “BLEU: a\nMethod for Automatic Evaluation of Machine Translation”. In: Proceedings of the\nAnnual Meeting of the Association for Computational Linguistics , pp. 311–318.\nSanchis-Trilles, Germ ´an, Vicent Alabau, Christian Buck, Michael Carl, Francisco\nCasacuberta, Mercedes Garc ´ıa-Mart ´ınez, Ulrich Germann, Jes ´us Gonz ´alez-Rubio,\nRobin Hill, Philipp Koehn, Luis A. Leiva, Bartolom ´e Mesa-Lao, Daniel Ortiz-\nMart ´ınez, Herv ´e Saint-Amand, Chara Tsoukala, and Enrique Vidal (2014). “Interac-\ntive Translation Prediction vs. Conventional Post-editing in Practice: A Study with\nthe CasMaCat Workbench”. In: Machine Translation 28.3–4, pp. 217–235.\nStolcke, Andreas (2002). “SRILM - An extensible language modeling toolkit”. In: Pro-\nceedings of the International Conference on Spoken Language Processing , pp. 257–\n286.\nTiedemann, J ¨org (2009). “News from OPUS - A Collection of Multilingual Parallel\nCorpora with Tools and Interfaces”. In: Recent Advances in Natural Language\nProcessing . V ol. V, pp. 237–248.\nTom ´as, Jes ´us and Francisco Casacuberta (2006). “Statistical Phrase-Based Models for\nInteractive Computer-Assisted Translation”. In: Proceedings of the International\nConference on Computational Linguistics/Association for Computational Linguistics ,\npp. 835–841.\nTorregrosa, Daniel, Mikel L. Forcada, and Juan Antonio P ´erez-Ortiz (2014). “An Open-\nSource Web-Based Tool for Resource-Agnostic Interactive Translation Prediction”.\nIn:Prague Bulletin of Mathematical Linguistics 102, pp. 69–80.\nTseng, Huihsin, Pichuan Chang, Galen Andrew, Daniel Jurafsky, and Christopher Man-\nning (2005). 
“A Conditional Random Field Word Segmenter”. In: Proceedings of the Special Interest Group of the Association for Computational Linguistics Workshop on Chinese Language Processing, pp. 168–171.
Vogel, Stephan, Hermann Ney, and Christoph Tillmann (1996). “HMM-based Word Alignment in Statistical Translation”. In: Proceedings of the Conference on Computational Linguistics. Vol. 2, pp. 836–841.
Received April 29, 2016, accepted May 15, 2016",
"main_paper_content": null
}
|
{
"decision": "Unknown",
"reviews": []
}
| 0 | 0 |
[] |
[] |
{
"id": "Npm6p-XLEd4",
"year": null,
"venue": "EAMT 2016",
"pdf_link": "https://aclanthology.org/W16-3418.pdf",
"forum_link": "https://openreview.net/forum?id=Npm6p-XLEd4",
"arxiv_id": null,
"doi": null
}
|
{
"title": "A Graphical Pronoun Analysis Tool for the PROTEST Pronoun Evaluation Test Suite",
"authors": [
"Christian Hardmeier",
"Liane Guillou"
],
"abstract": "Christian Hardmeier, Liane Guillou. Proceedings of the 19th Annual Conference of the European Association for Machine Translation. 2016.",
"keywords": [],
"raw_extracted_content": "Baltic J. Modern Computing, V ol. 4 (2016), No. 2, pp. 318–330\nA Graphical Pronoun Analysis Tool for the\nPROTEST Pronoun Evaluation Test Suite\nChristian HARDMEIER1, Liane GUILLOU2\n1Uppsala University\n2Ludwig-Maximilians-Universität München\[email protected], [email protected]\nAbstract. We present a graphical pronoun analysis tool and a set of guidelines for manual eval-\nuation to be used with the PROTEST pronoun test suite for machine translation (MT). The tool\nprovides a means for researchers to evaluate the performance of their MT systems and browse\nindividual pronoun translations. MT systems may be evaluated automatically by comparing the\ntranslation of the test suite pronoun tokens in the MT output with those in the reference transla-\ntion. Those translations that do not match the reference are referred for manual evaluation, which\nis supported by the graphical pronoun analysis tool and its accompanying annotation guidelines.\nBy encouraging the manual examination and evaluation of individual pronoun tokens, we hope\nto understand better how well MT systems perform when translating different categories of pro-\nnouns, and gain insights as to where MT systems perform poorly and why.\nKeywords: Evaluation, machine translation, pronouns, graphical interface, manual annotation\n1 Introduction\nPronoun translation poses a problem for statistical machine translation (SMT). Despite\nrecent efforts, little progress has been made (Le Nagard and Koehn, 2010; Hardmeier\nand Federico, 2010; Novák, 2011; Guillou, 2012; Hardmeier, 2014). Most recently, the\nresults of the DiscoMT 2015 shared task on pronoun translation (Hardmeier et al., 2015)\nrevealed that even discourse-aware Machine Translation (MT) systems were unable\nto beat a simple phrase-based SMT baseline. We believe that there are two important\nobstacles that currently limit progress in pronoun translation. Firstly, we need to obtain a\ndeeper understanding of the problems that MT systems face when translating pronouns,\nand of the performance of our systems when faced with these problems. Secondly,\nwe lack evaluation methodologies that specifically target pronoun translation and that\nare capable of providing a detailed overview of system performance. In this paper, we\npresent a graphical tool and an evaluation methodology for manual assessment and\ninvestigation of pronoun translation that address both of these factors.\nA Graphical Pronoun Analysis Tool for the PROTEST Test Suite 319\nWhen dealing with pronouns, many of the fundamental assumptions cherished by\nthe MT community break down. MT researchers routinely rely on automatic evalua-\ntion metrics such as BLEU (Papineni et al., 2002) to guide their development efforts.\nThese automated metrics typically assume that overlap of the MT output with a human-\ngenerated reference translation may be used as a proxy for correctness. This assumption\nfails for certain types of pronouns. In particular, it does not hold in the important case\nofanaphoric pronouns , which refer back to a mention introduced earlier in the dis-\ncourse (an antecedent ): If the pronoun’s antecedent is translated in a way that differs\nfrom the reference translation, a different pronoun may be required. One that matches\nthe reference translation may in fact be wrong. 
In less complex cases, too, the syntactic\nvariability in pronoun translation is generally high even in closely parallel texts, which\ncreates difficulties both for translation modelling and for MT evaluation. We hope that\nour contribution will make it easier for MT researchers to anchor their decisions in\ndescriptive corpus data and face the full complexity of pronoun translation.\n2 The PROTEST Pronoun Evaluation Test Suite\nTo address the problem of evaluation, Hardmeier (2015) suggests using a test suite\ncomposed of carefully selected pronoun tokens which can then be checked individu-\nally to evaluate pronoun correctness. In Guillou and Hardmeier (2016) we introduce\nPROTEST, a test suite comprising 250 hand-selected pronoun tokens exposing particu-\nlar problems in English-French pronoun translation, along with an automatic evaluation\nscript. The pronoun analysis tool and methodology presented here are specifically de-\nsigned to be used with the PROTEST test suite. They can be applied to any parallel cor-\npus with (manual or automatic) coreference resolution and word alignments, although\npro-drop languages might require changes to the guidelines.\nThe pronoun tokens in PROTEST are extracted from the DiscoMT2015.test dataset\n(Hardmeier et al., 2016), which has been manually annotated according to the ParCor\nannotation guidelines (Guillou et al., 2014). The pronoun tokens are categorised accord-\ning to a range of different problems that MT systems face when translating pronouns. At\nthe top level the categories capture pronoun function , with four different functions rep-\nresented in the test suite3(Fig. 1). Anaphoric pronouns refer to an antecedent. Pleonas-\nticpronouns, in contrast, do not refer to anything. Event reference pronouns refer to\na verb, verb phrase, clause or even an entire sentence. Finally, addressee reference\npronouns are used to refer to the reader/audience. At a second level of classification,\nwe distinguish other features like morphosyntactic properties, pronoun-antecedent dis-\ntance, and different types of addressee reference.\nThe PROTEST test suite comes with an automatic pronoun evaluation tool, which\ncompares the translation of each pronoun token in the MT output with that in the ref-\nerence translation. For the purpose of automatic evaluation, pronouns are broadly split\ninto two groups. Anaphoric pronouns must meet the following criteria: The translation\nof both the pronoun and the head of its antecedent must match that in the reference. For\nall other pronoun functions, only the translation of the pronoun is considered. Pronoun\n3Some categories in the corpus, e.g. speaker reference , were excluded from the test suite to\nfocus on systematic divergences between English and French (Guillou and Hardmeier, 2016).\n320 Hardmeier and Guillou\nanaphoric I have a bicycle .Itis red.\npleonastic Itis raining.\nevent He lost his job. Itcame as a total surprise.\naddressee reference You’re welcome.\nFig. 1. Examples of different pronoun functions\ntranslations that do no match the reference are not necessarily incorrect, but must be\nmanually checked. This is a prime use case of the pronoun analysis tool described here.\n3 Use Cases and Interface Design\nThe PROTEST pronoun analysis tool is intended as a platform for manual inspection\nandevaluation of pronoun translation examples in parallel text. 
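Before describing the interface, it may help to spell out the referral rule from Section 2 that decides which examples reach this manual stage. The snippet below is our own illustration of that rule, not the released PROTEST evaluation script; the field names and the index format are assumed.

```python
def needs_manual_check(example, sys_tokens, ref_tokens):
    """Refer a test-suite pronoun for manual evaluation unless it matches the reference.

    `example` is assumed to carry word-alignment-derived token indices into the
    system output and the reference for the pronoun and, for anaphoric pronouns,
    for its antecedent head.
    """
    def aligned(tokens, indices):
        return tuple(tokens[i].lower() for i in indices)

    pronoun_ok = (aligned(sys_tokens, example["sys_pronoun_idx"])
                  == aligned(ref_tokens, example["ref_pronoun_idx"]))

    if example["function"] == "anaphoric":
        antecedent_ok = (aligned(sys_tokens, example["sys_antecedent_idx"])
                         == aligned(ref_tokens, example["ref_antecedent_idx"]))
        return not (pronoun_ok and antecedent_ok)

    # Pleonastic, event and addressee-reference pronouns: only the pronoun counts.
    return not pronoun_ok
```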
Our tool provides the\nresearcher or MT system developer with a focused view of the pronoun translation and\nits context, and it enables the manual annotation of examples for correctness and other\nrelevant features according to the guidelines detailed in Section 4. On certain occasions,\nfor instance when evaluating major development steps in the system, the system devel-\noper may decide to conduct a more thorough evaluation involving external annotators.\nTo cater for this, the tool offers the functionality to prepare batches of examples for\nannotation, which can then be processed in a special, easy-to-use annotator mode. An-\nnotated batches can be fed back into the master file. A translation overview mode then\nallows the researchers to gain an overview of all annotations for a specific example.\nThe core component of the analysis tool is the translation window . On its left-hand\nside, the translation window displays a pronoun in the source language and its trans-\nlation by a given system. The amount of context shown in the translation window is\nvariable and depends on the pronoun function. In the case of anaphoric pronouns, it\nincludes the sentence(s) that contain the antecedent and the pronoun plus one addi-\ntional sentence of context. For other pronouns, it just shows the sentence containing\nthe pronoun and the one immediately preceding it. The pronoun and its translation are\nhighlighted in the source text and the translation, as too are the antecedent head and its\ntranslation, in the case of anaphoric pronouns.\nThe right-hand side of the translation window comes in two variants, which we\ncall the annotation panel (Fig. 2) and the overview panel (Fig. 3). The annotation panel\n(Fig. 2) is used by the developer or by annotators for the task of manually evaluating the\ntranslation of the pronouns. During manual evaluation, the annotator is asked to make\na yes/no judgement as to whether the pronoun has been correctly translated. These\njudgements are recorded via radio button groups in the top right-hand corner of the\nwindow. In the case of anaphoric pronouns, the correctness of the antecedent translation\nis evaluated in a separate question.\nIn addition to these formalised judgements, two additional input elements allow an-\nnotators to react flexibly to common annotation issues, to create meaningful annotations\nfor examples that are atypical in some way and to supply additional information. The tag\nboxmakes it possible to assign tags to an example. The guidelines contain instructions\nA Graphical Pronoun Analysis Tool for the PROTEST Test Suite 321\nFig. 2. Translation window with annotation panel\nFig. 3. Translation window with overview panel\n322 Hardmeier and Guillou\non the use of certain tags, and annotators are instructed to be as consistent as possible\nin their use of tags. The annotation tool does not constrain the tags to a predefined set,\nallowing annotators to define new tags as the need arises, but provides a drop-down list\nof existing tags in the corpus to support and encourage consistency. The remarks box in\nthe bottom-left corner stores free-form notes about the pronoun translation.\nIn practical annotation work, we found these two mechanisms extremely useful. An-\nnotation conflicts between our annotators typically arose in borderline cases, where the\nannotators agreed about their evaluation of the example in principle, but were uncertain\nabout how to encode this according to the formal guidelines. 
Frequently, they would\nleave very enlightening comments in the remarks field in those cases, making it easier\nfor us to understand the difficulties they had encountered and the reasoning behind their\nannotation choices. Moreover, the annotators’ free-form comments were very useful as\na form of tangible evidence of how they interpreted the guidelines and, consequently,\nwhat parts of the guidelines needed to be updated.\nIn addition to the annotations already discussed, the target text box on the left-hand\nside of the translation window offers the possibility to click on individual tokens in the\ntranslation of the pronoun or, in the anaphoric case, its antecedent to highlight them.\nWe use this functionality to identify, for each example labelled as correct, the mini-\nmum set of tokens constituting a correct translation of the pronoun or antecedent. This\nallows us to distinguish between the tokens making up the core translation and other sur-\nrounding tokens that also happen to be word-aligned to the source-language pronoun\nor antecedent. The annotation guidelines (Section 4) describe the process for assign-\ning judgements and tags to translations, and selecting minimal token sets for correct\ntranslations.\nWhenever the annotator clicks the “Prev” or “Next” button to navigate to another ex-\nample, a number of checks are made to detect annotation conflicts such as highlighting\ntokens in a translation marked as incorrect or failing to highlight tokens in a translation\nmarked as correct. If a conflict is detected, a pop-up dialogue appears, and the annotator\nhas the choice to amend the annotation or to leave it unchanged.\nTo use and compare annotations created by multiple annotators, the analysis tool\noffers another view of the translation window, in which the annotation panel is replaced\nby an overview panel (Fig. 3). The information displayed on this panel is the same as\ndescribed above, but it shows annotations from multiple annotators simultaneously, and\nit is not editable. The correctness judgements and tags are shown in tabular form and\nthe remarks field combines notes from all annotators. An additional set of navigation\nbuttons is provided (in the top-right corner) to browse between the tokens highlighted\nby different annotators.\n4 Manual Evaluation Methodology\nIn this section, we introduce a set of guidelines for manual annotation and evaluation of\npronoun translations in the context of our pronoun analysis tool. The aim of the anno-\ntation is to assess the ability of MT systems to translate pronouns. It is also possible to\nuse the examples annotated as correctly translated as additional reference translations\nin conjunction with the automatic pronoun evaluation tool in the PROTEST test suite.\nA Graphical Pronoun Analysis Tool for the PROTEST Test Suite 323\nIn the annotation, we focus on the correctness of the highlighted pronouns and their\nantecedents. The correctness of other words in the translated sentences is not consid-\nered, except where this makes it impossible to assess the correctness of the pronoun and\nantecedent head translations. For each example we gather the following information:\n–Overall assessment: Decide whether or not the pronoun is translated correctly. 
In\nthe case of anaphoric pronouns, the translation of the pronoun’s antecedent head\nmust also be assessed.\n–Token selection: For those translations marked as “correct”, select the minimum set\nof tokens that constitute a correct translation.\n–Tags: Certain recurring patterns are marked by assigning tags. The set of standard\ntags and their use is described in Section 4.3.\n–Remarks: Free-form notes may be added for each example. This function is used to\nrecord any information that may be useful in the interpretation or evaluation of the\nannotations. For example, the annotator may be unsure about the annotation of an\nexample, or may have made assumptions about the interpretation of the text.\nPronoun tokens are annotated according to the general guidelines outlined in Sec-\ntion 4.1. In the case of anaphoric pronouns, additional guidelines apply (see Section 4.2).\n4.1 General Guidelines: All Pronouns\nThe annotator is asked to answer the question: “Pronoun Correctly Translated?”. Pos-\nsible options are “yes” and “no”. This question should be answered for all source-\nlanguage pronouns, regardless of whether they are translated by the MT system. If a\npronoun remains untranslated, the annotator should assess whether or not this is a cor-\nrect translation strategy in this particular case. If the pronoun translation is marked as\ncorrect, the next step is to select the minimum number of highlighted tokens that con-\nstitutes a correct translation of the source-language pronoun.\nTo enable the use of the annotations as references in an automatic evaluation setting,\nwe emphasise precision over recall and instruct the annotators to reject doubtful cases.\nWe also emphasise natural language use over prescriptive grammar rules in cases where\nthey conflict. In practice the annotators are asked to mark translations as correct only\nif they feel that the translation is something “natural” that they might say themselves,\nor that they might expect to hear someone else say. An exception is made for singular\naddressee reference pronouns, where the correctness decision is made independently\nof the level of formality (“tu” or “vous”) of the French pronoun. The natural level of\nformality is annotated separately instead (see Section 4.3).\n4.2 Anaphoric Pronouns\nIf the pronoun is anaphoric, it is necessary to consider both the translation of the an-\ntecedent head and the pronoun. The head of the source-language pronoun’s antecedent\nwill be highlighted in the interface. If the antecedent head was translated by the MT\nsystem, the translations (consisting of one or more tokens) will also be highlighted.\n324 Hardmeier and Guillou\nThe annotator is first asked to answer the question: “Antecedent Correctly Trans-\nlated?”. Possible options are “yes” and “no”. If the antecedent head translation has\nbeen marked as correct, the next step is to select the minimum number of highlighted\ntokens that constitutes a correct translation of the source-language antecedent head. To\narrive at a truly minimal set, we include noun tokens, but not adjectives or determiners.\nMultiple tokens may be selected. It is not possible to select tokens that appear outside\nof the highlighted set of words aligned to the antecedent head in the source.\nThe annotator is then asked to answer the question: “Pronoun Correctly Translated\n(given antecedent head)?”, again using “yes/no” options. 
Here a correctly translated\npronoun is one that is compatible with the translation of the antecedent head, regardless\nof whether the antecedent head is translated correctly. Compatibility frequently coin-\ncides with the notion of morphosyntactic agreement, but it does not always do so. An\nexample of a compatible pronoun-antecedent pair violating morphosyntactic agreement\nis the use of “singular they” in English to refer to a single person – formally, the pro-\nnoun “they” is a plural and does not agree in number with its antecedent, but the use of\n“they” to refer to singular antecedents is acceptable in English (for example in the case\nwhere the gender of the person is unknown) and should therefore be marked as correct.\nIf the pronoun is marked as correct, the minimum number of tokens consisting a correct\npronoun translation should be highlighted as in the general case.\n4.3 Tags\nTags are used to denote specific recurring patterns, where errors may be present, or to\nprovide additional information that could be useful when interpreting annotations. The\nfollowing general purpose tags are provided for all pronoun categories.\nbad_translation is used when the overall sentence translation is so poor that it\nis not possible to judge whether the translation of the pronoun/antecedent is correct. In\nthis case the example should not be annotated for correctness.\nincorrect_word_alignment denotes that a pronoun/antecedent translation exists\nin the translation of the source-language text but is not highlighted due to a problem with\nthe word alignments. In this case the example should not be annotated for correctness.\nnoncompositional_translation is used when the translation as a whole is cor-\nrect, but the source-language pronoun is aligned to a pronoun with a different function\nin the target language. A typical example is a referring (event or anaphoric) English\npronoun that gets word-aligned to the pleonastic pronoun “il” in the French impersonal\nconstruction “il faut” (“it is necessary”). Often such translations are correct, but the\nFrench pronoun cannot be said to be a translation of the English one.\ndesc_vs_presc signals a conflict between something that a French speaker might\n(naturally) say and what French prescriptive grammar rules state.\nIn the case of anaphoric pronouns, ant_unsure indicates uncertainty as to whether\nthe antecedent has been correctly identified in the source language. The antecedents in\nPROTEST were extracted from manual annotations over the DiscoMT2015.test dataset.\nThese annotations are generally of high quality and sometimes the pronoun annotators’\ndoubts are due to the limited context displayed in the pronoun analysis tool, but the\npossibility of errors in the coreference annotation cannot be completely excluded.\nA Graphical Pronoun Analysis Tool for the PROTEST Test Suite 325\nIn the specific case of singular, deictic addressee reference pronouns, French makes\na distinction between two levels of formality, “tu” and “vous”. We view this as a\nseparate problem and do not consider it in the correctness judgements. Instead, the\nannotators are asked to add one of the tags politeness_tu ,politeness_vous or\npoliteness_unknown to each of the examples in this category. 
The latter tag signals\nthat neither possibility can be ruled out given the available context.\n5 Manual Annotation\nTo demonstrate the use of the pronoun analysis tool for the task of manual annota-\ntion, we asked two annotators to annotate a sample of pronoun translations from the\nDiscoMT 2015 shared task on pronoun translation, an English-to-French MT task. The\ntranslations were taken from the official DiscoMT data release (Hardmeier et al., 2016).\nBoth of our annotators are native speakers of French and have a very high standard of\nEnglish. We gave both of them the same set of 116 pronoun translations produced by\nMT systems, or taken from the reference translation. The sample set was randomly\nselected, with the aim of selecting at least 100 pronoun translations from the full set\nof 1,750 translations, in proportion to the relative size of each pronoun category in\nPROTEST, and ensuring that at least one translation was included for each category.\nThe full set comprises translations of the 250 pronoun tokens in the test suite, produced\nby five of the systems submitted to the shared task4and the official shared task baseline\nsystem, as well as from the human authored reference translation in DiscoMT2015.test .\n5.1 Results\nTable 1 displays the results of the manual annotation of the sample set, completed by\ntwo annotators. The “ 3” symbol denotes a correct translation, “ 7” an incorrect transla-\ntion and “ ?” a translation for which no judgement has been provided. Judgements are\nnot provided for bad translations or those with incorrect word alignments.\nInter-annotator agreement scores, calculated using Cohen’s Kappa (Cohen, 1960),\nare displayed in Table 2. Agreement for judgements on antecedent translation are very\nhigh, with only one disagreement out of 68 annotations. Agreement is lower for pronoun\ntranslations, suggesting that this aspect of the annotation task is more difficult. However,\nwe deem the Kappa score to be high enough to proceed with the annotation of the\nremaining test suite translations in future work.\nDisagreements between two or more annotators can provide a useful starting point\nfor understanding the difficulties of the manual annotation task. Whilst some indication\nis provided in Table 1, we cannot obtain a complete picture from raw counts alone. To\ngain a deeper understanding we need to look at the individual pronoun translations and\ntheir annotations using the translation window of the pronoun analysis tool (Fig. 3).\nWe can also use the tags and remarks to identify pronoun translations that represent\ninteresting cases. 
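For reference, the agreement figures in Table 2 follow the standard two-annotator Cohen's kappa computation over the yes/no judgements; a minimal sketch of that computation (our own illustration, with made-up labels) is:

```python
from collections import Counter

def cohens_kappa(labels_a, labels_b):
    """Cohen's kappa for two annotators judging the same items (e.g. 'yes'/'no')."""
    assert len(labels_a) == len(labels_b) and labels_a
    n = len(labels_a)
    observed = sum(a == b for a, b in zip(labels_a, labels_b)) / n
    counts_a, counts_b = Counter(labels_a), Counter(labels_b)
    expected = sum((counts_a[c] / n) * (counts_b[c] / n)
                   for c in set(counts_a) | set(counts_b))
    return (observed - expected) / (1 - expected)

# Toy example: ten pronoun judgements from two annotators.
print(cohens_kappa(list("yyyynnynyy"), list("yyynnnynyn")))
```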
Some examples are discussed in Section 5.2.\n4System A3-108 is omitted due to very poor results in the DiscoMT 2015 shared task evaluation\n326 Hardmeier and Guillou\nCategory Count Pronoun Antecedent\nAnnotator A Annotator B Annotator A Annotator B\n3 7 ?3 7 ?3 7 ?3 7 ?\nAnaphoric\nInter-sentential “it”\nSubject 12 7 3 2 5 5 2 12 0 0 12 0 0\nNon-subject 3 2 1 0 1 2 0 3 0 0 3 0 0\nIntra-sentential “it”\nSubject 11 10 1 0 10 1 0 11 0 0 11 0 0\nNon-subject 8 6 1 1 6 2 0 7 1 0 7 1 0\nInter-sentential “they” 13 9 4 0 8 5 0 13 0 0 13 0 0\nIntra-sentential “they” 10 6 4 0 5 5 0 9 0 1 10 0 0\nSingular “they” 7 7 0 0 5 1 1 5 2 0 5 2 0\nGroup “it/they” 4 4 0 0 3 0 1 4 0 0 4 0 0\nEvent Reference “it” 14 10 4 0 8 6 0 – – – – – –\nPleonastic “it” 11 10 1 0 10 1 0 – – – – – –\nAddressee Reference\nDeictic singular “you” 7 7 0 0 7 0 0 – – – – – –\nDeictic plural “you” 6 5 0 1 5 1 0 – – – – – –\nGeneric “you” 10 10 0 0 10 0 0 – – – – – –\nTotal 116 93 19 4 83 29 4 64 3 1 65 3 0\nTable 1. Annotation results over a sample set of 116 pronoun translations\nJudgement Total Annotations Disagreements Kappa Score\nPronoun 116 14 0.69\nAntecedent 68 1 0.85\nTable 2. Inter-Annotator Agreement Scores\n5.2 Discussion\nAs an example of where the two annotators disagreed, consider Example 1, in which the\nanaphoric, intra-sentential pronoun “they” refers to “things”. The MT system translated\nthe antecedent as “choses” [fem. pl.] and the pronoun as “ils” [masc. pl.]. Both annota-\ntors marked the translation of the antecedent as correct, but differed in their judgement\nof the pronoun. Annotator B marked the pronoun translation as incorrect. Annotator A\nmarked it as correct and added the desc_vs_presc tag, indicating that it is something a\nFrench speaker might say, in a very casual manner, despite it being incorrect according\nto French grammar rules. This difference in descriptive vs. prescriptive grammar high-\nlights a problem that researchers should consider: Whether to be guided by grammar\nrules or by what is observed in the data, i.e. what people actually say, or how they write.\nExample 1.\nSource: Yeah, I think many of the things we just talked about are like that, where\nthey’re really – I almost use the economic concept of additionality, which means that\nyou’re doing something that wouldn’t happen unless you were actually doing it.\nA Graphical Pronoun Analysis Tool for the PROTEST Test Suite 327\nMT Output: Oui, je pense que beaucoup des choses que nous avons seulement parlé\nsont comme ça, où ilssont vraiment – j’ai failli utiliser le concept économique de\nl’additionnalité, ce qui signifie que tu fais quelque chose qui n’arriverait pas si vous\nétiez réellement le faire.\nAnother problem for MT systems is the translation of named entities. Both annota-\ntors agreed that had the antecedent in the MT output of Example 2 been “Deep Mind”\n(rather than the literal translation “profond esprit”) then the pronoun translation “Ils”\n[pl.] would have been acceptable, despite not agreeing with the antecedent [sg.].\nExample 2.\nSource: So I think Deep Mind, what’s really amazing about Deep Mind is that it can\nactually – they’re learning things in this unsupervised way. They started with video\ngames. . .\nMT Output: Je pense donc que l’esprit profond, ce qui est vraiment incroyable profond\nesprit est qu’il peut en fait – ils apprennent des choses dans ce sans supervision. Ilsont\ncommencé avec des jeux vidéo . . .\nPoliteness is also a problem for MT systems. 
In Example 3, the correct translation of\nthe English pronoun “you” requires knowledge of the relationship between the speaker\nand addressee. Here the annotators commented that it would be unusual for a (modern)\nFrench speaker to use the formal “vous” when speaking to their Grandpa.\nExample 3.\nSource: I mean, I would call him, and I’d be like, “Listen, Grandpa, I really need this\ncamera. You don’t understand.\nMT Output: je compte, je l’appellerais et je serais comme, « listen, Grandpa, j’ai vrai-\nment besoin de cet appareil photo. vous ne comprenez pas.\nIn the set of 116 translations, 8 were marked as noncompositional_translation\nby at least one annotator, including this example taken from the reference translation:\nExample 4.\nSource: The big labs have shown that fusion is doable, and now there are small com-\npanies that are thinking about that, and they say, it’s not that it cannot be done, but it’s\nhow to make it cost-effectively.\nReference: Les grands labos ont montré qu’elle était faisable, et maintenant des petites\nentreprises y pensent et disent : certes, ce n’est pas impossible, mais [ ilfaut] que ce soit\nrentable.\nIn Example 4, the English pleonastic pronoun “it” is aligned to the French pronoun\n“il”. However, “il faut” (meaning “it is necessary”) is a fixed expression and as such, the\nFrench pronoun “il” cannot be considered a direct translation of “it”. In scenarios such\nas these, the annotators are instructed to evaluate the translation of the clause instead of\nthe pronoun in isolation. Both annotators marked the translation as correct, which one\nmight expect given that the French translation is taken from the reference. Examples\nsuch as this present a problem for both manual and automated evaluation of pronoun\ntranslation in MT, which until now has considered pronoun translation at the token level.\n328 Hardmeier and Guillou\n6 Related Work\nThe PROTEST pronoun analysis tool shares some similarities with the interface for the\npronoun selection task (Hardmeier, 2014) which has been used by Guillou and Webber\n(2015) and in the manual evaluation of the DiscoMT 2015 shared task on pronoun trans-\nlation (Hardmeier et al., 2015). In the pronoun selection task, pronouns in the source-\nlanguage text are highlighted and their corresponding translations in the MT output are\nreplaced with a placeholder. The role of the human annotator is to select, from a given\nlist of options, which pronoun should replace the placeholder. In this way, the annotator\nis not biased by the pronoun translation in the MT output. In contrast, our tool presents\nthe annotator with the translation of the pronoun in context and poses questions about\nits translation. Furthermore, the pronoun analysis tool is not just an annotation inter-\nface. It enables researchers to examine translations in detail and to browse and compare\ntranslations by different systems, and annotations by one or more annotators.\nIn spirit, the tool is similar to other user interfaces for manual data inspection such\nas the analysis.perl utility for BLEU score analysis distributed with Moses (Koehn\net al., 2007) or the Blast interface for manual error analysis in MT output (Stymne,\n2011). 
Our tool is novel in that it focuses on a specific linguistic problem in translation\nand links manual inspection and evaluation with a manually selected test suite and the\npossibility of feeding back the annotations into a semi-automatic evaluation process.\nThe underlying approach of the automatic evaluation script included as part of\nPROTEST is similar in its methodology to the ACT metric for assessing the translation\nof discourse connectives (Hajlaoui and Popescu-Belis, 2013). Like PROTEST, ACT at-\ntempts to match translations in the MT output with those in the reference translation and\nrefers mismatches for manual evaluation. ACT, however, is accompanied by neither an\ninterface for, nor guidelines for manual evaluation.\n7 Conclusions and Future Work\nWe have presented a graphical pronoun analysis tool for the PROTEST test suite. It sup-\nports the manual evaluation of pronoun translations through manual annotation by one\nor more annotators. Researchers are provided with the means to manually examine in-\ndividual pronoun translations and to browse and compare manual annotations. We have\nalso presented a set of annotation guidelines underlying a simple, but useful methodol-\nogy for manually and semi-automatically evaluating pronouns in MT output. We have\ntested the use of the tool and the guidelines by annotating a small set of pronoun to-\nkens translated by systems submitted to the DiscoMT 2015 shared task on pronoun\ntranslation, and demonstrated the type of insights that this methodology has to offer. A\npractical conclusion that we have already drawn for our own work is that the problem\nof translating event pronouns deserves greater attention in future research.\nIn future work we plan to complete the manual annotation of the translation of all\n250 PROTEST pronoun tokens by the DiscoMT 2015 systems. This will provide a set\nof manually verified translations for use with the automatic evaluation in PROTEST.\nBoth the annotation tool described in this paper and the data sets will be published in\nthe LINDAT data repository.\nA Graphical Pronoun Analysis Tool for the PROTEST Test Suite 329\nAcknowledgements\nWe would like to thank Marie Dubremetz and Miryam de Lhoneux for manually anno-\ntating the output of the DiscoMT 2015 systems. This work was funded by the Swedish\nResearch Council under grant 2012-916 Discourse-Oriented Statistical Machine Trans-\nlation (research) and the European Association for Machine Translation (annotation).\nReferences\nJacob Cohen. A Coefficient of Agreement for Nominal Scales. Educational and Psy-\nchological Measurement , 20(1), 1960.\nLiane Guillou. Improving pronoun translation for statistical machine translation. In\nProceedings of the Student Research Workshop at the 13th Conference of the Euro-\npean Chapter of the Association for Computational Linguistics , pages 1–10, Avignon\n(France), April 2012.\nLiane Guillou and Christian Hardmeier. PROTEST: A test suite for evaluating pronouns\nin machine translation. In Proceedings of the Eleventh Language Resources and\nEvaluation Conference (LREC’16) , Portorož (Slovenia), May 2016.\nLiane Guillou and Bonnie Webber. Analysing ParCor and its translations by state-\nof-the-art SMT systems. In Proceedings of the Second Workshop on Discourse in\nMachine Translation , pages 24–32, Lisbon, Portugal, September 2015.\nLiane Guillou, Christian Hardmeier, Aaron Smith, Jörg Tiedemann, and Bonnie Web-\nber. ParCor 1.0: A parallel pronoun-coreference corpus to support statistical\nMT. 
In Proceedings of the Tenth Language Resources and Evaluation Conference\n(LREC’14) , pages 3191–3198, Reykjavík (Iceland), 2014.\nNajeh Hajlaoui and Andrei Popescu-Belis. Assessing the accuracy of discourse connec-\ntive translations: Validation of an automatic metric. In 14th International Conference\non Intelligent Text Processing and Computational Linguistics , page 12. University of\nthe Aegean, Springer, March 2013.\nChristian Hardmeier. Discourse in Statistical Machine Translation , volume 15 of Studia\nLinguistica Upsaliensia . Acta Universitatis Upsaliensis, Uppsala, 2014.\nChristian Hardmeier. On statistical machine translation and translation theory. In Pro-\nceedings of the Second Workshop on Discourse in Machine Translation , pages 168–\n172, Lisbon (Portugal), September 2015.\nChristian Hardmeier and Marcello Federico. Modelling pronominal anaphora in statis-\ntical machine translation. In Proceedings of the Seventh International Workshop on\nSpoken Language Translation (IWSLT) , pages 283–289, Paris (France), 2010.\nChristian Hardmeier, Preslav Nakov, Sara Stymne, Jörg Tiedemann, Yannick Versley,\nand Mauro Cettolo. Pronoun-focused MT and cross-lingual pronoun prediction:\nFindings of the 2015 DiscoMT shared task on pronoun translation. In Proceedings\nof the 2nd Workshop on Discourse in Machine Translation (DiscoMT 2015) , pages\n1–16, Lisbon (Portugal), 2015.\nChristian Hardmeier, Jörg Tiedemann, Preslav Nakov, Sara Stymne, and Yannick\nVersely. DiscoMT 2015 Shared Task on Pronoun Translation, 2016. LIN-\nDAT/CLARIN digital library at Institute of Formal and Applied Linguistics, Charles\nUniversity in Prague. http://hdl.handle.net/11372/LRT-1611.\n330 Hardmeier and Guillou\nPhilipp Koehn, Hieu Hoang, Alexandra Birch, et al. Moses: Open source toolkit for\nStatistical Machine Translation. In Annual Meeting of the Association for Computa-\ntional Linguistics: Demonstration session , pages 177–180, Prague (Czech Republic),\n2007.\nRonan Le Nagard and Philipp Koehn. Aiding pronoun translation with co-reference\nresolution. In Proceedings of the Joint Fifth Workshop on Statistical Machine Trans-\nlation and MetricsMATR , pages 252–261, Uppsala (Sweden), July 2010.\nMichal Novák. Utilization of anaphora in machine translation. In Week of Doctoral\nStudents 2011 Proceedings of Contributed Papers, Part I , pages 155–160, Prague\n(Czech Republic), 2011.\nKishore Papineni, Salim Roukos, Todd Ward, and Wei-Jing Zhu. BLEU: A method\nfor automatic evaluation of machine translation. In Proceedings of the 40th Annual\nMeeting of the Association for Computational Linguistics , pages 311–318, Philadel-\nphia (Pennsylvania, USA), 2002.\nSara Stymne. Blast: A tool for error analysis of machine translation output. In Proceed-\nings of the ACL-HLT 2011 System Demonstrations , pages 56–61, Portland (Oregon,\nUSA), June 2011.\nReceived May 9, 2016 , accepted May 16, 2016",
"main_paper_content": null
}
|
{
"decision": "Unknown",
"reviews": []
}
| 0 | 0 |
[] |
[] |
{
"id": "2T6oFZNi0S-",
"year": null,
"venue": "EAMT 2016",
"pdf_link": "https://aclanthology.org/W16-3414.pdf",
"forum_link": "https://openreview.net/forum?id=2T6oFZNi0S-",
"arxiv_id": null,
"doi": null
}
|
{
"title": "Climbing Mont BLEU: The Strange World of Reachable High-BLEU Translations",
"authors": [
"Aaron Smith",
"Christian Hardmeier",
"Jörg Tiedemann"
],
"abstract": "Aaron Smith, Christian Hardmeier, Joerg Tiedemann. Proceedings of the 19th Annual Conference of the European Association for Machine Translation. 2016.",
"keywords": [],
"raw_extracted_content": "Baltic J. Modern Computing, V ol. 4 (2016), No. 2, pp. 269–281\nClimbing Mount BLEU: The Strange World of\nReachable High-BLEU Translations\nAaron SMITH1;2, Christian HARDMEIER1, J¨org TIEDEMANN3\n1Uppsala University\n2Convertus AB, Uppsala, Sweden\n3University of Helsinki\[email protected], [email protected],\[email protected]\nAbstract. We present a method for finding oracle BLEU translations in phrase-based statistical\nmachine translation using exact document-level scores. Experiments are presented where the\nBLEU score of a candidate translation is directly optimised in order to examine the properties of\nreachable translations with very high BLEU scores. This is achieved by running the document-\nlevel decoder Docent in BLEU-decoding mode, where proposed changes to the translation of\na document are only accepted if they increase BLEU. The results confirm that the reference\ntranslation cannot in most cases be reached by the decoder, which is limited by the set of phrases\nin the phrase table, and demonstrate that high-BLEU translations are often of poor quality.\nKeywords: Statistical machine translation, oracle decoding, BLEU, Docent\n1 Introduction\nThis paper presents a method for finding oracle translations in phrase-based (PB) statis-\ntical machine translation (SMT) using exact document-level BLEU scores. The method,\nwhich we call BLEU decoding, is implemented in the document-level machine transla-\ntion decoder Docent. BLEU decoding is a stochastic hill climbing algorithm: changes\nare proposed by the decoder to an initial translation and only accepted if they increase\nBLEU.\nAnalysing the translations obtained in this way we corroborate previous research\non the problem of reference reachability: perfect BLEU scores, corresponding to the\ndecoder finding the reference translation exactly, are rarely possible; meanwhile we add\nto the extensive literature on problems and biases with the BLEU metric itself, showing\nfor the first time clear examples of sentences from documents with high BLEU scores\nwith obvious poor translation quality.\nThe paper is structured in the following manner: Section 2 describes the BLEU\nmetric, Section 3 presents the Docent decoder and BLEU decoding, Section 4 details\n270 Smith et al.\nexperiments carried out with BLEU decoding and presents their results, while Section\n5 comprises a discussion.\n2 BLEU\nThe BLEU score, introduced by Papineni et al. (2002), is a metric for evaluating the\nquality of a candidate translation by comparing it to one or more reference translations.\nFor1\u0014n\u0014N, where normally N= 4, each n-gram in each candidate sentence is\nchecked against all of the references in order to calculate precision. To count towards\nprecision, the candidate n-gram need only appear in one of the references; this helps\nto account for possible variations in style and word choice. However, the same n-gram\nappearing more than once in the candidate is only counted multiple times if it also\nappears multiple times in a single reference. BLEU is then based on the geometric\naverage of these so-called modified n-gram precisions pn.\nAs multiple references are employed in calculating BLEU, it is difficult to take\nrecall into account, which could lead to short sentences scoring unfairly highly. To\nprevent this from occurring, a brevity penalty is introduced, lowering the BLEU score\nfor cases where the length of the candidate translation cis less than the length of the\nreference translation r. 
The equation for BLEU is as follows:

\mathrm{BLEU} = \min\bigl(\exp(1 - r/c),\, 1\bigr) \cdot \exp\Bigl(\sum_{n=1}^{N} \frac{\log p_n}{N}\Bigr)   (1)

Obvious problems with BLEU are that it gives all words equal weighting and harshly punishes synonyms and elaborations, as well as words such as 'thus' or 'however' spliced occasionally into a text (see Callison-Burch et al. (2006) for a full discussion of these shortcomings). Chiang et al. (2008) meanwhile describe several situations where they are able to obtain highly dubious improvements in BLEU. They point out, for example, that if translating multiple genres at the same time, one can generate longer sentences within a specific genre where the translation quality is known to be higher, and shorter sentences in other more difficult genres. This will generate higher overall BLEU scores due to the fact that the brevity penalty works on whole documents rather than sentence-by-sentence, but the final translation quality would clearly have been higher if combined systems had been used, each optimised for a particular genre.
Despite these and other issues, however, BLEU has been shown to correlate extremely well with human judgement of translation quality in many cases (Agarwal and Lavie, 2008; Farrús et al., 2012). There have been a lot of recent efforts to develop more sophisticated metrics that counteract some of BLEU's weaknesses (Macháček and Bojar, 2013), but for the time being it remains ubiquitous in SMT. For this reason, the computation of oracle BLEU hypotheses is an active field (Wisniewski et al., 2010; Sokolov et al., 2012). Oracle BLEU hypotheses are those in the search space of a PBSMT decoder with the highest BLEU scores. Ultimately we want our translation systems to find these hypotheses on unseen data; calculating them when a reference is available can help identify deficiencies in current systems and facilitate the development of new techniques. BLEU oracles are also useful during feature-weight tuning, though it has been pointed out that relying too heavily on BLEU here can lead to poor results (Liang et al., 2006; Chiang, 2012).
3 Docent
Docent is a decoder for phrase-based SMT (Hardmeier et al., 2013). In Docent's search algorithm, feature models have access to a complete translation of a whole document at all stages of the search process. The algorithm is a stochastic variant of standard hill climbing: at each step, the decoder generates a successor of the current translation by randomly applying one of a set of state-changing operations at a random location in the document, and accepts the new translation only if it has a better score than the previous translation. Implemented operations include changing the translation of a phrase, changing the word order by swapping the positions of two phrases or moving a sequence of phrases, and resegmenting phrases.
The original motivation behind Docent was to facilitate the development of models with cross-sentence dependencies.
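As an aside, Equation (1) above is easy to make concrete in code. The following is a minimal illustrative Python sketch of document-level BLEU with clipped n-gram precisions and the brevity penalty; it is not the paper's implementation, it assumes a single tokenised reference per sentence, and all names are our own.

import math
from collections import Counter

def ngrams(tokens, n):
    # all n-grams of a token list
    return [tuple(tokens[i:i + n]) for i in range(len(tokens) - n + 1)]

def document_bleu(candidates, references, max_n=4):
    # candidates, references: parallel lists of tokenised sentences
    cand_len = sum(len(c) for c in candidates)
    ref_len = sum(len(r) for r in references)
    log_precisions = []
    for n in range(1, max_n + 1):
        matches, total = 0, 0
        for cand, ref in zip(candidates, references):
            cand_counts = Counter(ngrams(cand, n))
            ref_counts = Counter(ngrams(ref, n))
            # clipped counts: a candidate n-gram is credited at most as
            # often as it occurs in the reference
            matches += sum(min(c, ref_counts[g]) for g, c in cand_counts.items())
            total += max(len(cand) - n + 1, 0)
        if matches == 0:
            return 0.0  # degenerate case; real implementations smooth this
        log_precisions.append(math.log(matches / total))
    brevity_penalty = min(math.exp(1.0 - ref_len / cand_len), 1.0)
    return brevity_penalty * math.exp(sum(log_precisions) / max_n)

In BLEU-decoding mode, a proposed change to a single sentence is kept only if a score of this kind increases over the whole document, which is why only the n-gram counts of the modified sentence ever need to be recomputed. To return to the motivation for document-level models: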
A classic problem is that of pronominal anaphora\nresolution: identifying the antecedents of pronouns in order, for example, to correctly\ntranslate from English into languages that have grammatical gender for inanimate nouns.\nThis type of problem is very difficult to solve in standard SMT decoders, which have\nhard-wired assumptions of sentence independence.\nThe standard tool-kit of sentence-level models, such as the phrase table, n-gram\nlanguage models and distortion cost are implemented in Docent, along with document-\nlevel models including a length parity model, a semantic language model and several\nreadability models. The initial translation can be created either by generating a random\nsegmentation and taking random translations from the phrase table in monotonic order,\nor by a run from Moses.\nDocent is not designed to perform better than Moses when only sentence-level fea-\ntures are used; its advantage lies in the ability to use features that disable recombination.\nInformation about Docent’s performance can be found in Hardmeier et al. (2012).\n3.1 BLEU decoding\nBLEU decoding is the name we have given to a particular mode of decoding in Do-\ncent whereby proposed changes to the translation are only accepted by the decoder if\nthey result in an increase in the BLEU score. A new feature model, BleuModel , was\nimplemented in Docent. Before decoding begins, BleuModel processes and stores\nthe lengths of the reference translations, as well as the lengths of individual sentences\nwithin those translations and n-gram counts for 1\u0014n\u00144. Once an initial candi-\ndate translation for each document has been created, BleuModel calculates the BLEU\nscore. The clipped counts for each sentence, required to calculate BLEU, are recorded\nalong with the length of the candidate translation. In this way the counts for a particular\nsentence need only be updated when Docent proposes a change to that sentence; this\nmakes BleuModel a particularly efficient feature model.\nIn the following section experiments are carried out in pure BLEU-decoding mode\nin Docent, that is to say the weights of all standard feature functions are set to zero, and\n272 Smith et al.\nonly changes to the translation that increase BLEU are accepted. The aim is to examine\nthe properties of translations with very high BLEU scores that are reachable by the\ndecoder.\n4 Experiments\nA German-English Moses translation model was trained on just over 1.5 million sen-\ntences from Europarl v7. The test data was a set of 3052 sentences from the new-\nstest2013 data, divided into 52 separate documents. Two types of experiments were\ncarried out, firstly with the candidate translation initialised by running Moses (with a\n5-gram language model trained with KenLM on 2.2 million Europarl sentences and\nfeature weights tuned using MERT on a development set of 2525 sentences from the\nnewstest2009 data), and secondly by random initialisation (i.e. random segmentation\nand random phrase translation). Docent was then run in BLEU-decoding mode: only\nchanges to the translation that increased BLEU were accepted. Model and BLEU scores\nwere monitored at exponentially increasing intervals, after iterations 28;29; :::; 225. The\nmotivation for this sampling is that many more proposed changes to the translation are\naccepted in the beginning: as decoding progresses and the translation improves, there\nare simply more iterations between each interesting event.\n4.1 Moses-based initial translation\nFig. 
1 shows how BLEU scores evolve across the 52 test documents during decoding\nfrom initial translations produced by Moses. The initial BLEU scores after Moses de-\n0510152025020406080100\nIteration (log2)BLEU score\nFig. 1. Progression of the minimum, mean, and maximum BLEU scores across 52 test documents\nduring BLEU decoding from an initial configuration based on a Moses run.\ncoding ranged from 7.2 to 33.1, with a mean of 19.3; after subsequently running Docent\nin BLEU-decoding mode, the mean had increased to 50.4, with a range from 25.9 to\n72.5. A substantial and consistent increase in BLEU, as expected, is thus observed.\nClimbing Mount BLEU 273\nGiven the huge increase in mean BLEU score from 19.3 to 50.4, conventional wis-\ndom would say that the quality of the translations after BLEU decoding should be much\nhigher. However, looking at our BLEU-decoded documents it quickly became clear that\nthis was not the case: many of the translations appeared to have deteriorated in quality.\nTo confirm this, we evaluated the first 100 sentences from the test data, randomising the\norder in which the two competing translations were presented so that it was not possi-\nble to know which translation was which, and judged which of the two was of better\ntranslation quality. We found that the Moses translation was judged to be superior in 59\ncases, the BLEU-decoded translation in 23, and in 18 cases the two translations were\njudged to be of equal quality.\nThis is a striking result that deserves restating: despite an increase in mean BLEU\nscore from 19.3 to 50.4, the translations are worse in 59 out of 100 sentences studied.\nMoreover, it is fair to say that sentences that got worse often got a lot worse, whereas\nsentences that improved generally did so only marginally. Although we have only stud-\nied 100 sentences systematically, it is clear to us that this pattern holds over the whole\ntest set, and even in other experiments with different data sets and language pairs. Let\nus take a look at some demonstrative examples to understand how this can happen:\n(Example 1)\nSRC: in diesem sinne untergraben diese maßnahmen teilweise das demokratische\nsystem der usa .\nREF: in this sense , the measures will partially undermine the american democratic\nsystem .\nMOS: in this sense , undermine these measures in the democratic system of the\nunited states .\nBLEU: thedemocratic system kin this sense , the measures kpartially undermine\nthe american .\nThe fragments in bold show n-grams for n\u00152where the Moses and BLEU trans-\nlations match the reference. The pipe symbol kis used to separate contiguous non-\noverlapping n-gram matches. We see here by comparing to the reference (REF) that the\nMoses translation (MOS) is quite poor, with these measures appearing as the object,\nrather than subject, of the verb undermines . With some effort, however, the true sense\nof the phrase can be understood from this translation. This is not the case, however,\nwith the BLEU-optimised translation, which is completely unintelligible. The problem\nis that BLEU decoding has worked hard to increase the number of n-gram matches,\nleading to the phrase partially undermine the american , which unbeknown to BLEU\nneeds to be followed by democratic system to retain the meaning of the original sen-\ntence. 
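The bookkeeping behind this preference is easy to reproduce. As a purely hypothetical check (not part of the paper's experiments), the short Python sketch below counts clipped n-gram matches against the reference for both candidates of Example 1, with the fragment-separator marks removed from the BLEU-decoded string; on these strings the BLEU-decoded candidate matches more n-grams at every order from 2 to 4, even though it is unintelligible.

from collections import Counter

def clipped_matches(candidate, reference, n):
    # candidate n-grams are credited at most as often as they occur in the reference
    cand, ref = candidate.split(), reference.split()
    cand_counts = Counter(tuple(cand[i:i + n]) for i in range(len(cand) - n + 1))
    ref_counts = Counter(tuple(ref[i:i + n]) for i in range(len(ref) - n + 1))
    return sum(min(c, ref_counts[g]) for g, c in cand_counts.items())

ref = "in this sense , the measures will partially undermine the american democratic system ."
mos = "in this sense , undermine these measures in the democratic system of the united states ."
opt = "the democratic system in this sense , the measures partially undermine the american ."

for n in (2, 3, 4):
    print(n, clipped_matches(mos, ref, n), clipped_matches(opt, ref, n))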
The Moses translation meanwhile includes the democratic system of the united\nstates , a perfectly acceptable equivalent to the american democratic system , but one that\nBLEU decoding does not like.\nBLEU decoding produces an even more nonsensical translation in the following\nexample:\n274 Smith et al.\n(Example 2)\nSRC: am wichtigsten ist es aber , mit seinem arzt zu sprechen , um zu bestimmen ,\nob er durchgef ¨uhrt werden sollte oder nicht .\nREF: but the important thing is to have a discussion with your doctor to determine\nwhether or not to take it .\nMOS: the most important thing is , however , with his doctor to speak , in order to\ndetermine whether it should be carried out or not .\nBLEU: the important thing is to have a doctor performed but , with to take it .\ntalking to determine whether or not to s\nAgain we see that while the original Moses translation, although far from perfect, has\nsome merit, the BLEU-decoded version is junk. It is telling that there are no 4-gram\nmatches at all in the Moses translation, while the long matching fragments in the BLEU\ntranslation ensure that there are as many as eight such matches. The BLEU translation\nalso has a higher unigram precision; indeed, for all 1\u0014n\u00144, the number of matching\nn-grams is much higher in the BLEU translation than the Moses translation.\nIn a third example BLEU decoding does in fact produce an intelligible translation:\n(Example 3)\nSRC: es ist auch ein risikofaktor f ¨ur mehrere andere krebsarten .\nREF: it is also a risk factor for a number of others .\nMOS: there is also a risk factor for a number of other types of cancer .\nBLEU: it is also a risk factor for a number of others . cancers\nIn this example the Moses translation is actually very good; a more literal translation\nof the source sentence than the reference, which lacks a direct translation of krebsarten\n(cancers ortypes of cancer ). After BLEU decoding the sentence has been transformed:\nit now matches the whole of the reference, but with the word cancers added after the\nfull-stop. It is straightforward to see why the BLEU translation leads to a higher BLEU\nscore: the extra couple of tokens at the end of the matching fragment increase the preci-\nsion for all n-grams. It is in many ways the reference itself here which is the problem:\nBLEU decoding has been tricked into trying to mimic a less-literal reference translation\nrather than stick with a perfectly valid translation from the standard log-linear model.\nDespite being intelligible and matching the reference, it is highly doubtable that there\nis any benefit to a system finding this translation over the Moses translation.\n4.2 Model scores during BLEU decoding\nIn the standard setting for statistical machine translation, we decode to maximise the\ncombined scores of a set of features, then use BLEU as an independent evaluation\nmetric. In pure BLEU-decoding mode we are able to turn the tables somewhat, and look\nat what happens to the model score as decoding proceeds. Of course, BLEU has been\nshown to correlate better with translation quality than model score, but we would still\nClimbing Mount BLEU 275\nexpect the two to correspond to some extent: this is why we normally build our systems\naround this set of features. With this in mind, Fig. 2 shows how the model score, for a\nstandard set of features with MERT-tuned weights, varies as BLEU increases.\nIteration (log2)0 510 15 20 25Model score\n-4000-3500-3000-2500-2000-1500-1000-5000\nFig. 2. 
Progression of the minimum, mean, and maximum model scores across 52 test documents\nduring BLEU decoding from an initial configuration based on a Moses run.\nWe observe that the model scores decrease as decoding progresses and BLEU in-\ncreases; Docent in BLEU-decoding mode is able to find translations with high BLEU\nscores that score poorly on the traditional set of PBSMT features. The Moses-based\ninitialisation procedure works of course to maximise the model score, so it would be\nunrealistic to expect it to increase much more during BLEU decoding, unless we had\nreason to believe that there was significant search error in the Moses decoding process.\nThe fact that the model score drops in this way however adds weight to the point made\nearlier by the example sentences, that we have high BLEU scores but many poor quality\nsentences. These results suggest that by letting BLEU run wild, we move far away from\nthe part of the search space containing good translations.\n4.3 Random initial translation\nFig. 3 shows how the BLEU scores evolve among the 52 test documents during de-\ncoding from an random initial translation. We again observe a large increase in BLEU\nscores; on this occasion the mean BLEU score at the beginning of the decoding process\nwas 3.6 (with range 0.0 to 6.6); after running Docent in BLEU-decoding mode it had\nincreased to 50.2 (with range 24.9 to 71.5). The figure for the mean at the end of de-\ncoding is very similar to that of 50.4 obtained when decoding from Moses-based initial\ntranslations, suggesting that the initial translation does not have a great effect on the\nfinal result.\n276 Smith et al.\n0510152025020406080100\nIteration (log2)BLEU score\nFig. 3. Progression of the minimum, mean, and maximum BLEU scores across 52 test documents\nduring BLEU decoding from a random initial configuration.\nWe can now go back to an example sentence from the previous experiment and add\ntwo new translations: the random initial translation (RAND) and the revised version of\nthis after BLEU decoding (BLEU2):\n(Example 2)\nSRC: am wichtigsten ist es aber , mit seinem arzt zu sprechen , um zu bestimmen ,\nob er durchgef ¨uhrt werden sollte oder nicht .\nREF: but the important thing is to have a discussion with your doctor to determine\nwhether or not to take it .\nMOS: the most important thing is , however , with his doctor to speak , in order to\ndetermine whether it should be carried out or not .\nBLEU: the important thing is to have a doctor performed but , with to take it .\ntalking to determine whether or not to s\nRAND: most important of all has it , which from his own medical with talking about\nwith a view to set , whether or not report implement to be or notkit .\nBLEU2: talking but the important thing is to itsto have akdoctor to determine\nwhether or not to take it . or report implement to\nWhile BLEU and BLEU2 are not identical, they are strikingly similar in that they\nshare many phrases and contiguous sets of words, as well as the property that they make\nvery little sense. This suggests that the type of translation in which BLEU decoding\nresults is independent of the initial translation; the initial translations – MOS and RAND\n– of BLEU and BLEU2 are clearly very different from each other. It is also interesting\nto compare BLEU2 with its antecedent RAND. While neither of these translations can\nbe said to convey much of the sense of the original German sentence, it could perhaps\nbe argued that BLEU2 is slightly more sensical than RAND. 
Perhaps the chunks that\nClimbing Mount BLEU 277\nmatch the reference do actually help to bring through some trace of meaning. One way\nto compare the random initial translation to the BLEU-decoded version is to look again\nat the model scores.\nIteration (log2)0 510 15 20 25Model score\n-4000-3500-3000-2500-2000-1500-1000-5000\nFig. 4. Progression of the minimum, mean, and maximum model scores across 52 test documents\nduring BLEU decoding from a random initial configuration.\nFig. 4 shows a slight increase in model scores at the beginning of decoding, fol-\nlowed by a gradual decline, but with final values still above the initial translation. We\ncan therefore draw the conclusion that BLEU decoding from a random initial transla-\ntion does result in translations that are slightly better, in some meaningful sense, than\nthe initial translation. It is however clear by looking at the example sentences that the\nimprovement in quality is nowhere near that which would be normally be expected\ngiven the jump in mean BLEU score from 3.6 to 50.2.\n4.4 BLEU decoding towards reachable translations\nWe saw in the previous sections that the mean BLEU score after 225iterations of BLEU\ndecoding was 50.4 when the initial translation came from Moses, and 50.2 when the ini-\ntial translation was randomly chosen. While these are undoubtedly high BLEU values,\nthey are still a long way from 100, which would represent the decoder finding the ref-\nerence translation exactly. It is natural to wonder why this is the case; what is stopping\nthe BLEU score getting much higher. BLEU decoding bears some resemblance to the\ntechnique of forced decoding, where the training data is decoded in such a way that\nguarantees the reference be found, in order to re-calculate phrase translation probabili-\nties. Wuebker et al. (2010) reported being able to match the reference 95% percent of the\ntime, while Foster and Kuhn (2012) report slightly lower performance. Note however\nthat in these cases it is the same training data used for the original phrase extraction that\nis force-decoded, unlike in our case where BLEU decoding is carried out on a separate\ntest/development data.\n278 Smith et al.\nThere are two obvious candidates to explain the failure of BLEU decoding to find\nthe reference exactly. One is the availability of the right phrases in the phrase table.\nReference reachability has long been known to be a problem in PBSMT (Liang et al.,\n2006). This is also the problem in forced decoding, where despite the fact that the\nphrases are extracted from the same data being decoded, it is not always possible to\nforce-decode every sentence (Foster and Kuhn, 2012). Another possibility might be\nthat the decoder’s hill-climbing algorithm tends to get stuck in local maxima. The fact\nthat the initial configuration apparently plays no role speaks against this hypothesis,\nbut not definitively. Another test that can be carried out is to give the decoder a pseudo-\nreference translation, that is not really a true reference at all, but simply another random\ntranslation generated by Docent. As Docent generates this translation from phrases in\nthe phrase table, it is guaranteed that the reference is theoretically reachable by the\ndecoder.\nThe experimental set-up was similar to that described in Section 4, the only differ-\nence being the switch from the genuine reference translation to a simulated reference\ngenerated randomly by Docent. 
The 52 test documents were decoded from random ini-\ntial translations.\nThe average BLEU score before decoding was 6.8, with range from 4.2 to 12.9; after\n225iterations it was 98.4, with range 96.8 to 99.6 (Fig. 5). The contrast between Fig. 5\n0510152025020406080100\nIteration (log2)BLEU score\nFig. 5. Progression of the minimum, mean, and maximum BLEU scores across 52 test documents\nduring BLEU decoding towards a reachable configuration.\nand Fig. 3 is striking. These results confirm that, when the necessary phrases are in the\nphrase table, the decoder in BLEU-decoding mode is able to get extremely close to the\nreference translation. Note that the monotonic way in which random initial translations\nare generated in Docent makes it somewhat easier for the decoder to find the reference\ntranslations than in a real-world case where extensive reordering is necessary. We also\ntested however decoding from a random initial translation towards a Moses translation\nClimbing Mount BLEU 279\nas the pseudo-reference, where more reordering is likely to be necessary, and again\nfound BLEU scores in the high 90s.\nThis suggests that the lower BLEU scores for decoding towards genuine reference\ntranslation are as high, or almost as high, as they can possibly get, on this data given\nthe phrases in the phrase table. BLEU scores near to 100 are simply impossible on this\ndata set given a genuine reference translation and the phrases available to the decoder.\nThe poor quality translations demonstrated by the examples in the previous section are\ntherefore probably as good as it gets in terms of BLEU score: there are no translations\nthat the decoder can conceivably reach with significantly higher BLEU scores.\n5 Discussion\nThis paper presented BLEU decoding, a method for finding oracle BLEU translations\nusing exact document-level scores for a phrase-based SMT decoder. Previous attempts\nto find oracle translations, for use in feature-weight tuning for example, have relied\non sentence-level approximations to BLEU. As other authors have shown, however,\noptimising BLEU at the sentence-level and document-level are not always equivalent\n(Chiang et al., 2008).\nBy performing experiments with BLEU decoding, we explored the view from the\ntop of Mount BLEU, examining in detail high-BLEU regions of the search space. While\nit might be assumed that translations in this region would be of high quality, results pre-\nsented here show this not to be the case if the reference translation is not reachable by\nthe decoder. Despite an increase in mean BLEU score from 19.3 to 50.4 across 52 docu-\nments from newstest2013 translated in BLEU-decoding mode from an initial translation\ngenerated by Moses, there was a clear drop in translation quality (59 out of 100 sen-\ntences were judged to be worse, and only 23 judged better). We observed long n-gram\nmatches interleaved with strings of nonsense, leaving many sentences unintelligible.\nThis makes sense given how BLEU works, favouring long n-gram matches and saying\nnothing about parts that do not match. An even larger increase in mean BLEU score,\nfrom 3.6 to 50.2, was observed when decoding from a random initial translation, but\nresults were similar in terms of translation quality.\nWhat do these results say about BLEU as an evaluation metric? Initial impressions\nmight suggest that the evidence presented here is damning for BLEU: it has been clearly\nshown that it can be ‘cheated’: very bad translations can get high BLEU scores. 
This\nis not the first time problems with BLEU have been highlighted (Callison-Burch et\nal., 2006; Chiang et al., 2008), and research into better metrics is a very active field\n(Mach ´aˇcek and Bojar, 2013). However, it must be remembered that the experiments pre-\nsented here used BLEU in a very different fashion from that for which it was designed.\nPapineni et al. (2002) demonstrated clearly that when translation quality is manipulated\nas the independent variable in experiments, there is a strong correlation with BLEU as\nthe dependent variable. This does not imply, and indeed the opposite has been shown in\nthis paper, that manipulating BLEU as an independent variable will necessarily result\nin high quality translations.\nAnother way of saying this is as follows: if translations are produced independently\nof BLEU, then BLEU is often a good metric to distinguish their quality; however this\n280 Smith et al.\ndoes not imply that actively looking for translations with high BLEU score will result\nin high quality. There is clearly a high-BLEU area of the search space with low quality\ntranslations. This problem has previously been encountered by researchers working on\nfeature-weight tuning (Liang et al., 2006; Chiang, 2012). Searching for weights that\nproduce high BLEU scores on development data is a central part of many standard\ntuning algorithms such as MERT (Och, 2003), PRO (Hopkins and May, 2011) and\nMIRA (Watanabe et al., 2007). In reality feature models are selected in such a way\nthat ending up in this strange region of the search space is unlikely, but if we blindly\noptimise feature weights using BLEU, we could run the risk of moving dangerously\nclose.\nDespite these problems, BLEU decoding may still have the potential to be applied\nduring tuning to improve translation quality. We have seen that in its pure form, BLEU\ndecoding leads us far from the area of the search space containing good translations,\nand optimising our models towards finding these regions is unlikely to be a good idea.\nHowever, by combining BLEU decoding with other regular SMT features, we may be\nable to keep the decoder in higher-quality areas of the search space, using the BLEU\nfeature model to find the best translations within this constrained region. The principle\nbehind BLEU decoding can also be implemented for other translation metrics, either to\ninclude as additional features in the tuning process, or in order to stress test the metric\nitself.\nReferences\nA. Agarwal and A. Lavie. 2008. METEOR, M-BLEU and M-TER: Evaluation Metrics\nfor High-Correlation with Human Rankings of Machine Translation Output. In\nProceedings of the Third Workshop on Statistical Machine Translation , pages 115–\n118.\nC. Callison-Burch, M. Osborne, and P. Koehn. 2006. Re-evaluating the Role of BLEU\nin Machine Translation Research. In 11th Conference of the European Chapter of\nthe Association for Computational Linguistics , pages 249–256.\nD. Chiang, S. DeNeefe, Y . S. Chan, and H. T. Ng. 2008. Decomposability of Transla-\ntion Metrics for Improved Evaluation and Efficient Algorithms. In Proceedings of\nthe 2008 Conference on Empirical Methods in Natural Language Processing , pages\n610–619.\nD. Chiang. 2012. Hope and Fear for Discriminative Training of Statistical Translation\nModels. Journal of Machine Learning Research , 13:1159–1187.\nM. Farr ´us, M. R. Costa-juss `a, and M. Popovi ´c. 2012. Study and Correlation Analysis\nof Linguistic, Perceptual and Automatic Machine Translation Evaluations. 
Journal\nof the American Society for Information Science and Technology , 63(1):174–184.\nG. Foster and R. Kuhn. 2012. Forced Decoding for Phrase Extraction. Technical\nReport, Universit ´e de Montreal.\nC. Hardmeier, J. Nivre, and J. Tiedemann. 2012. Document-Wide Decoding for\nPhrase-Based Statistical Machine Translation. In Proceedings of the 2012 Joint\nConference on Empirical Methods in Natural Language Processing and Computa-\ntional Natural Language Learning , pages 1179–1190.\nClimbing Mount BLEU 281\nC. Hardmeier, S. Stymne, J. Tiedemann, and J. Nivre. 2013. Docent: A Document-\nLevel Decoder for Phrase-Based Statistical Machine Translation. In Proceedings of\nthe 51st Annual Meeting of the Association for Computational Linguistics: System\nDemonstrations , pages 193–198.\nM. Hopkins and J. May. 2011. Tuning as Ranking. In Proceedings of the 2011 Con-\nference on Empirical Methods in Natural Language Processing , pages 1352–1362.\nP. Liang, A. Bouchard-C ˆot´e, D. Klein, and B. Taskar. 2006. An End-to-End Dis-\ncriminative Approach to Machine Translation. In Proceedings of the 44th Annual\nMeeting of the Association for Computational Linguistics.\nM. Mach ´aˇcek and O. Bojar. 2013. Results of the WMT13 Metrics Shared Task. In\nProceedings of the Eighth Workshop on Statistical Machine Translation , pages 45–\n51.\nF. J. Och. 2003. Minimum Error Rate Training in Statistical Machine Translation. In\nProceedings of the 41st Annual Meeting on Association for Computational Linguis-\ntics - Volume 1 , pages 160–167.\nK. Papineni, S. Roukos, T. Ward, and W. J. Zhu. 2002. BLEU: a Method for Automatic\nEvaluation of Machine Translation. In Proceeding of the 40th Annual Meeting of\nthe Association for Computationial Linguistics , pages 311–318.\nA. Sokolov, G. Wisniewski, and F. Yvon. 2012. Computing Lattice BLEU Oracle\nScores for Machine Translation. In Proceedings of the 13th Conference of the Eu-\nropean Chapter of the Association for Computational Linguistics , pages 120–129.\nT. Watanabe, J. Suzuki, H. Tsukada, and H. Isozaki. 2007. Online Large-Margin Train-\ning for Statistical Machine Translation. In Proceedings of the 2007 Joint Confer-\nence on Empirical Methods in Natural Language Processing and Computational\nNatural Language Learning , pages 764–773.\nG. Wisniewski, A. Allauzen, and F. Yvon. 2010. Assessing Phrase-Based Translation\nModels with Oracle Decoding. In Proceedings of the 2010 Conference on Empirical\nMethods in Natural Language Processing , pages 933–943.\nJ. Wuebker, A. Mauser, and H. Ney. 2010. Training Phrase Translation Models with\nLeaving-One-Out. In Proceeding of the 48th Annual Meeting of the Association for\nComputational Linguistics , pages 475–484.\nReceived May8, 2016 , accepted May 15, 2016",
"main_paper_content": null
}
|
{
"decision": "Unknown",
"reviews": []
}
| 0 | 0 |
[] |
[] |
{
"id": "HZMpzllpmK",
"year": null,
"venue": "EAMT 2011",
"pdf_link": "https://aclanthology.org/2011.eamt-1.32.pdf",
"forum_link": "https://openreview.net/forum?id=HZMpzllpmK",
"arxiv_id": null,
"doi": null
}
|
{
"title": "Improving Machine Translation Quality Prediction with Syntactic Tree Kernels",
"authors": [
"Christian Hardmeier"
],
"abstract": "Christian Hardmeier. Proceedings of the 15th Annual conference of the European Association for Machine Translation. 2011.",
"keywords": [],
"raw_extracted_content": "Improving Machine Translation Quality Prediction\nwith Syntactic Tree Kernels\nChristian Hardmeier\nUppsala universitet\nInst. f ¨or lingvistik och filologi\nSE-751 26 Uppsala\[email protected]\nAbstract\nWe investigate the problem of predicting\nthe quality of a given Machine Translation\n(MT) output segment as a binary classifi-\ncation task. In a study with four different\ndata sets in two text genres and two lan-\nguage pairs, we show that the performance\nof a Support Vector Machine (SVM) clas-\nsifier can be improved by extending the\nfeature set with implicitly defined syn-\ntactic features in the form of tree ker-\nnels over syntactic parse trees. Moreover,\nwe demonstrate that syntax tree kernels\nachieve surprisingly high performance lev-\nels even without additional features, which\nmakes them suitable as a low-effort initial\nbuilding block for an MT quality estima-\ntion system.\n1 Introduction\nEven though automatic high-quality translation\nfor general domains is still far beyond the reach\nof current Statistical Machine Translation (SMT)\nsystems, recent systems achieve levels of perfor-\nmance that make them viable for use as core el-\nements in commercial translation processes. In\ncertain text genres and with systems trained on\nsufficient amounts of in-domain data, producing\nraw translations with an SMT system and having\nhuman translators post-edit them can be a more\ncost-effective way of obtaining production-quality\ntranslations than doing fully human translation.\nOne example for this is the domain of TV film sub-\ntitles, where a good SMT system can output trans-\nlations closely similar to a translation produced\nby a professional translator for more than 30 % of\nthe subtitles under favourable conditions (V olk et\nal., 2010). Still, despite the large number of good\ntranslations, other subtitles in the SMT output can\nc/circlecopyrt2011 European Association for Machine Translation.be of very low quality, placing an unnecessary bur-\nden on the post-editors, who have to take a deci-\nsion to discard the bad raw translation before trans-\nlating the subtitle from scratch anyway.\nIn order to reduce the effort post-editors have to\nspend on acceptance decisions and make subtitle\npost-editing a more pleasant experience, it would\nbe desirable to predict the quality of a segment\nautomatically given the input, the output and the\nmodels of the SMT system, a task that has gone un-\nder the name of confidence prediction in the litera-\nture. While the SMT system itself internally scores\nalternative translations of each segment to find the\nbest one, raw SMT scores are not sufficient as a\nconfidence measure. As conditional probabilities,\nthey are not comparable across sentences. Fur-\nthermore, they are not properly normalised by the\nSMT decoder since performing the required nor-\nmalisation would render decoding intractable. Us-\ning a decoder-external confidence prediction mod-\nule also makes it possible to use certain features\nwhich by their nature are difficult to integrate in\nthe left-to-right beam search framework typical of\ncurrent phrase-based SMT decoders.\nPrevious research in Machine Translation (MT)\nconfidence estimation has used a variety of dif-\nferent features representing characteristics of the\ninput and the output sentence, their relation with\neach other as well as the relation of the input sen-\ntence to the training data. 
Successful feature sets\nare the result of considerable engineering effort;\nfeature extraction requires a collection of tools and\nmodels dealing with various aspects of the texts\nthat might affect translation quality. In the present\npaper, we explore the use of syntactic tree kernels\nover parsed representations of MT input and out-\nput strings in conjunction with Support Vector Ma-\nchine (SVM) classification for MT confidence es-\ntimation. Tree kernels are interesting since they\nallow us to define implicitly an immense space of\nstructural features and leave the feature selectionMik el L. F orcada, Heidi Depraetere, Vincen t V andeghinste (eds.)\nPr o c e e dings of the 15th Confer enc e of the Eur op e an Asso ciation for Machine T r anslation , p. 233\u0015240\nLeuv en, Belgium, Ma y 2011\nproblem to the SVM training algorithm. Struc-\ntural sentence characteristics are likely to be at the\nroot of many important problems, such as word re-\nordering, which is notoriously difficult for SMT,\nbut selecting the right structural features manually\nis difficult and tedious. Tree kernels are ideal as an\ninitial building block of an MT confidence system\nas they provide reasonable performance with min-\nimal effort – it is sufficient to parse the data to get\nstarted.\nIn this paper, we focus on the task of filtering\nout presumably bad translation from SMT output\nusing binary SVM classifiers. For four different\ndatasets in two language pairs and two text gen-\nres, we build and evaluate classifiers based on ex-\nplicitly extracted feature sets, syntactic tree kernels\nand their combination. We demonstrate that it is\nrelatively easy to build a reasonable classifier us-\ning the tree kernel approach alone and that syntac-\ntic tree kernels have something to contribute even\nin the presence of a traditional feature set.\n2 Related work\nThe problem of sentence-level confidence estima-\ntion for Machine Translation has been addressed\nwith various Machine Learning techniques in the\npast. Blatz et al. (2004) present a comparison\nof different Machine Learning algorithms for MT\nconfidence estimation and a set of features that has\nbecome the basis of much later work. They train\nclassifiers trained on data labelled automatically\nbased on the NIST and WER Machine Translation\nevaluation measures, accepting as good the top-\nscored 5 or 30 percent of the examples. A similar\nsetup and feature set were used by Quirk (2004),\nwho also ran some experiments with a very small\nmanually annotated corpus, using only 350 sen-\ntences for training and 150 sentences for testing.\nA comparable feature set was also used by Soricut\nand Echihabi (2010).\nSpecia et al. (2009a) use a fairly large feature set\nincluding most of the features proposed by Blatz et\nal. (2004) to train a Partial Least Squares (PLS) re-\ngressor on a variety of datasets, both manually and\nautomatically annotated. In another paper from the\nsame year (Specia et al., 2009b), they suggest a\nway to compute a threshold value to use the PLS\nregressor as classifier at a given target precision us-\ning Inductive Confidence Machines. They argue\nthat if the MT output is to be post-edited by pro-\nfessional translators, it may be more important toensure a reasonable level of precision by suppress-\ning bad translations to avoid flooding the transla-\ntors with bad MT output than to achieve high levels\nof recall. 
While this is an important point to con-\nsider, it seems at least doubtful, and very much de-\npendent on the particularities of a given workflow,\nwhether filtering out bad translations with a recall\nof less than 30 %, as reported in some of their ex-\nperiments, is really making the best use of an exist-\ning MT system. The research in these papers was\nlater published as a journal article (Specia et al.,\n2010b), which is interesting for us because it re-\nports some evaluation figures directly comparable\nto our work.\nWork presented in the papers discussed so far\nhas used explicitly engineered features based on\nvarious aspects of the input and output but not re-\nquiring syntactic parsing. Parse tree information\nhas been used e. g. by Liu and Gildea (2005), who\nuse a BLEU-inspired measure of parse tree simi-\nlarity as well as Subset Tree Kernels (Collins and\nDuffy, 2001) in the context of MT evaluation, i. e.\nfor scoring against a reference translation. They do\nnot train an SVM or a similar Machine Learning\nalgorithm with their tree kernels; instead, the tree\nkernel function is directly used to measure the sim-\nilarity between a candidate and a reference transla-\ntion. For this purpose, the BLEU-inspired “subtree\nmetric” proposed by the authors works much better\nthan the tree kernel function.\nIn an MT confidence estimation task, parse tree\nfeatures were used by Gamon et al. (2005). They\ntrained an SVM classifier to predict whether a\nsentence was more likely produced by a human\nor by an MT system, under the assumption that\n“machine-translated output is known a priori to be\nof much worse quality than human translations.”\nThis assumption is questioned by Specia et al.\n(2009a). Parse tree information is encoded as a set\nof binary features indicating the presence or ab-\nsence of particular context-free productions. Some\nsemantic features are also included.\nIn our experiments, we adopt the experimental\nsetup of Specia et al. (2009b) in terms of the data\nused and most parts of the experimental protocol.\nUnlike them, however, we train binary classifiers\nwith Support Vector Machines rather than PLS re-\ngressors and strive for balanced precision and re-\ncall scores. In terms of features, the main contri-\nbution of our work is the use of tree kernels as a\nway to define a large implicit feature space poten-234\ntially covering abstract linguistic phenomena with\nrelatively low effort compared to the explicit fea-\nture engineering approach of previous work.\n3 Datasets\nThe research presented in this paper was mainly\ndeveloped while working on a confidence estima-\ntion component for an MT system for film subti-\ntles, for which we had a specific dataset freshly\nannotated with quality scores at our disposal. Our\nannotations were modelled after a collection of an-\nnotated data published by Specia et al. (2010a),\non which we ran our experiments for comparison\nsince the subtitle dataset cannot be made publicly\navailable.\n3.1 Europarl datasets\nThe data collection provided by Specia et al.\n(2010a) is composed of 4,000 sentences randomly\ndrawn from the development and test sets of\nthe WMT 2008 Machine Translation shared task,\ntranslated from English into Spanish with four\ndifferent Statistical Machine Translation systems.\nThe quality of the MT output for each single sen-\ntence was judged by professional translators on a\nscale ranging from 1 to 4 with the following defi-\nnitions (Specia et al., 2010a):\n1. requires complete retranslation\n2. 
a lot of post-editing needed (but quicker than\nretranslation)\n3. a little post-editing needed\n4. fit for purpose\nThe datasets are distributed in lowercased and to-\nkenised form.\nIn this paper, we report experimental results\nonly for systems 1, 2 and 3 of this collection. Sys-\ntem 4 is a very unbalanced set with 93.5 % of the\nexamples belonging to the negative class. Like\nSpecia et al. (2010b), who used the same data col-\nlection, we observed that classifiers trained on this\ndata almost invariably learn to reject everything\nthey see, so these results are fairly uninteresting\nand therefore omitted here.\n3.2 Subtitle dataset\nOur subtitle dataset was composed of the subti-\ntle captions of 12 episodes of different TV series,\nwhich had been translated from their original lan-\nguage English into Swedish with a phrase-basedSMT system and then post-edited by professional\ntranslators to achieve a sufficient quality level to al-\nlow broadcasting the results. The total number of\nsubtitles (segments) amounted to 4,442, of which\n1,363 (3 files) had been post-edited independently\nby three different persons, whose scores had been\naveraged, while the other 3,079 subtitles (9 files)\nhad been post-edited by one person only. The post-\neditors had been asked to judge the quality of the\nraw MT output, assigning to each subtitle a score\nbetween 1 and 4. The definitions of the scores were\nvery similar to those used by Specia et al. (2010a),\nexcept for the fact that the definition of grade 3\nhad been slightly modified to focus more clearly\non post-editing speed, and the two intermediate\ngrades were illustrated with clarifying sentences to\nmake their use more consistent. The instructions\ngiven to the post-editors were as follows:\n1. MT output unusable, subtitle needs to be re-\ntranslated from scratch.\n2. Post-editing quicker than retranslation.\n(“I needed to think about whether or not the\nMT output was usable.”)\n3. Only quick post-editing required.\n(“I could see almost immediately what I had\nto change.”)\n4. MT output fit for purpose, no changes re-\nquired.\nOur experiments were set up as binary classi-\nfiers. Scale grades 1 and 2 were considered nega-\ntive, 3 and 4 positive examples.\nUnfortunately, the inter-annotator agreement\nachieved on the portion annotated by three post-\neditors was relatively low. Agreement as measured\nby Krippendorff’s α(Krippendorff, 2004) for or-\ndinally scaled data reached 0.495 for the 4-class\ndata and 0.319 after collapsing categories. There\nwas considerable variation between the individual\nsubtitle files, which we suspect is due partly to the\nfact that the film episodes came from different gen-\nres and presented different challenges to the SMT\nsystem and partly to the circumstance that the set\nof annotators scoring the files varied.\n4 Feature extraction\n4.1 Explicit features\nAs a baseline system, we created a classifier based\non a number of features explicitly extracted from\nthe datasets. Our feature set was modelled on a235\nsubset of the features used by Specia et al. 
(2009b).\nIt contained the following items:\n•number of words, length ratio\n•type-token ratio\n•number of tokens matching particular pat-\nterns:\n–numbers\n–opening and closing parentheses\n–strong punctuation signs\n–weak punctuation signs\n–ellipsis signs\n–hyphens\n–single and double quotes\n–apostrophe-s tokens\n–short alphabetic tokens ( ≤3 letters)\n–long alphabetic tokens ( ≥4 letters)\n•source and target language model (LM) and\nlog-LM scores\n•LM and log-LM scores normalised by sen-\ntence length\n•number and percentage of out-of-vocabulary\nwords\n•percentage of source 1-, 2-, 3- and 4-grams\noccurring in the source part of the training\ncorpus\n•percentage of source 1-, 2-, 3- and 4-grams in\neach frequency quartile of the training corpus\nWhenever applicable, features were computed\nfor both the source and the target language, and\nadditional features were added to represent the\nsquared difference of the source and target lan-\nguage feature values.\nFor the subtitle dataset only, we ran some exper-\niments with an extended feature set containing a\nnumber of additional features:\n•number of some particular tokens specific to\nsubtitles\n–discourse turn marker\n–marker for continuation in next subtitle\n–marker for continuation from previous\nsubtitle\n•a binary feature indicating that the output\ncontains more than three times as many al-\nphabetic tokens as the input•percentage of unaligned words and words\nwith 1 : 1, 1 : n,n: 1 and m:nalignments.\nThese features were not used in the Europarl ex-\nperiments, partly because they were not applica-\nble to the genre and tokenisation of those datasets,\npartly because alignment information from the MT\ndecoder, which we used for computing the align-\nment features, was not provided in the datasets.\n4.2 Parse trees\nWe annotated all datasets with both parse trees for\nboth the source and the target language. In the\nsource language, English, we were able to produce\nboth constituency and dependency parses. In the\ntarget languages, Swedish and Spanish, we lim-\nited our experiments to dependency parses because\nof the better availability of parsing models. En-\nglish constituency parses were produced with the\nStanford parser (Klein and Manning, 2003) using\nthe model bundled with the parser. For depen-\ndency parsing, we used the MaltParser (Nivre et\nal., 2006). POS tagging was done with HunPOS\n(Hal´acsy et al., 2007) for English and Swedish\nand SVMTool (Gim ´enez and M ´arquez, 2004) for\nSpanish, with the models provided by the OPUS\nproject (Tiedemann, 2009). A recaser based on\nthe Moses SMT system (Koehn et al., 2007) and\ntrained on the WMT 2008 training data was used\nto transform the lowercase-only Europarl datasets\ninto mixed-case form before tagging and parsing.\nThe MT output was parsed with a standard\nparser model trained on regular treebank data.\nSMT output contains many grammatically mal-\nformed sentences. We do not know of a reliable\nmethod to assess the impact of this problem on\nparsing accuracy, nor is it clear what effect re-\nduced parsing accuracy has on classifier perfor-\nmance, since the tree-kernel classifier may very\nwell be able to extract useful information from cor-\nrupted parse trees if the corruption is sufficiently\nsystematic. 
In the present work, we therefore treat the parsers as a black box and rely on the classifier to make sense of whatever input it receives.
To be used with tree kernels, the output of the dependency parser had to be transformed into a single tree structure with a unique label per node and unlabelled edges, similar to a constituency parse tree. We followed Johansson and Moschitti (2010) in using a tree representation which encodes part-of-speech tags, dependency relations and words as sequences of child nodes (see fig. 1).
Figure 1: Representation of the dependency tree fragment for the words Nicole's dad
5 Implicit feature modelling with tree kernels
To exploit parse tree information in our Machine Learning (ML) component, we used tree kernel functions. Kernel functions make it possible to represent very complex and high-dimensional feature spaces for certain ML techniques such as Support Vector Machines (SVM) in an efficient way. They take advantage of the fact that in the learning and inference algorithms for these ML methods, feature vectors are only ever evaluated in the form of dot products over pairs of data points. Dot products over certain types of feature spaces can be computed very efficiently without reference to the full feature representation.
Tree kernels (Collins and Duffy, 2001) are kernel functions defined over pairs of tree structures. They measure the similarity between two trees by counting the number of common substructures. Implicitly, they define an infinite-dimensional feature space whose dimensions correspond to all possible tree fragments. Features are thus available to cover different kinds of abstract node configurations that can occur in a tree. The important feature dimensions are effectively selected by the SVM training algorithm through the selection and weighting of the support vectors.
In our experiments, we used two different kinds of tree kernels (see fig. 2). The Subset Tree Kernel (Collins and Duffy, 2001) considers tree fragments consisting of more than one node with the restriction that if one child of a node is included, then all its siblings must be included as well so that the underlying production rule is completely represented. This kind of kernel is well suited for constituency parse trees and was used in our experiments with constituency trees. For the experiments with dependency trees, we used the Partial Tree Kernel (Moschitti, 2006a) instead. It extends the Subset Tree Kernel by permitting also the extraction of tree fragments comprising only part of the children of any given node. Lifting this restriction makes sense for dependency trees since a node and its children do not correspond to a grammatical production in a dependency tree in the same way as they do in a constituency tree.
Figure 2: Tree fragments extracted by the Subset Tree Kernel and by the Partial Tree Kernel. Illustrations by Moschitti (2006a).
6 Experiments and results
All our experiments were run with the SVMlight software with tree kernel extensions (Moschitti, 2006b; Joachims, 1999), using polynomial kernels of degree 3 for the explicit features. In experiments with both a polynomial kernel and a tree kernel, a linear combination with equal weights was used. Each of the results was obtained by randomly subsampling the complete dataset five times, dividing it into a training part (80 %) and a test part (20 %). The figures reported are the means of precision, recall and F1 score over the five runs for a binary classifier separating positive examples labelled 3 or 4 by the annotators from negative examples labelled 1 or 2.
The experimental results are presented in tables 1 (Europarl datasets) and 2 (subtitle dataset). Baseline scores were calculated for a majority class classifier which simply labels all examples as positive.
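Before turning to the figures, here is a small, purely illustrative sketch of the resampling protocol and majority-class baseline just described (five random 80/20 splits, scores 3–4 treated as positive, averaged precision/recall/F1). In the actual experiments the SVM classifiers with tree kernels take the place of the trivial baseline, and all names below are ours.

```python
# Illustrative sketch (ours): five random 80/20 resamplings, binary labels
# (scores 3-4 positive, 1-2 negative), mean P/R/F1 over the runs, with a
# majority-class baseline standing in for the real classifier.
import random

def prf(gold, pred):
    tp = sum(1 for g, p in zip(gold, pred) if g and p)
    fp = sum(1 for g, p in zip(gold, pred) if not g and p)
    fn = sum(1 for g, p in zip(gold, pred) if g and not p)
    prec = tp / (tp + fp) if tp + fp else 0.0
    rec = tp / (tp + fn) if tp + fn else 0.0
    f1 = 2 * prec * rec / (prec + rec) if prec + rec else 0.0
    return prec, rec, f1

def evaluate(examples, runs=5, train_frac=0.8, seed=0):
    rng = random.Random(seed)
    scores = []
    for _ in range(runs):
        data = examples[:]
        rng.shuffle(data)
        split = int(train_frac * len(data))
        test = data[split:]                       # training part unused by the baseline
        gold = [score >= 3 for _, score in test]  # annotator scores 3-4 are positive
        pred = [True] * len(test)                 # majority-class baseline: accept all
        scores.append(prf(gold, pred))
    return [sum(s[i] for s in scores) / runs for i in range(3)]

toy = [(f"segment {i}", random.Random(i).choice([1, 2, 3, 4])) for i in range(100)]
print(evaluate(toy))  # mean [precision, recall, F1] of the baseline
```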
This results in a precision equal to the proportion of positive examples in the dataset and, trivially, in a recall score of 100 %. It turns out that this baseline is relatively hard to beat in terms of balanced F-score for some datasets. This does not necessarily mean that a classifier with a lower performance is useless. Depending on the application scenario, it may be more important to obtain higher precision at the cost of somewhat lower recall in order to make the post-editors' job less tedious. This is the stance adopted by Specia et al. (2009b), who argue that more experienced translators make high demands on the quality of MT output, so "a larger proportion of positive examples" must potentially be discarded.
System 1 System 2 System 3
P R F P R F P R F
majority class 71.0 100.0 83.0 54.6 100.0 70.6 51.8 100.0 68.3
explicit features 73.2 96.7 83.5 67.1 82.7 74.0 74.5 66.7 70.4
explicit + constituency 80.2 90.7 85.1 74.4 73.3 73.9 73.6 73.2 73.4
explicit + dependency (src/tgt) 78.0 92.9 84.8 74.1 76.2 75.1 73.8 74.0 73.9
Table 1: Experimental results (Precision/Recall/F-score) for the Europarl datasets
P R F
majority class 50.2 100.0 66.8
all features 69.5 58.3 63.3
reduced features 72.3 48.1 57.7
all + constituency (S) 67.5 66.3 66.8
all + dependency (S+T) 68.7 68.8 68.8
red. + constituency (S) 68.2 67.8 68.0
red. + dependency (S+T) 68.3 67.6 67.9
Table 2: Experimental results (Precision/Recall/F-score) for the subtitle dataset
For the Europarl systems, classifiers based on the reduced explicit feature set we applied to all systems performed slightly better than the baseline, with gains ranging from 0.5 points in F-score for system 1 to 3.4 points for system 2. For the subtitle dataset, this is not the case: The performance of the reduced feature set, which is identical to the feature set used by the classifiers in the Europarl experiments, is more than 9 points below the baseline. By including the additional features listed at the end of section 4.1, the F-score can be improved from 57.7 % to 63.3 %, but it remains several points below the baseline of 66.8 %.
Several factors may have contributed to the low performance of the explicit feature set on the subtitle data. To begin with, the sentences in the subtitle data set are much shorter than the sentences in the Europarl datasets and mostly written in a casual oral style characterised, among other things, by low syntactic complexity. This may have the effect that some of our features that are supposed to measure sentence complexity in a crude way, such as the counts of various punctuation tokens, have little of interest to measure. The vocabulary coverage of the subtitle translation system is generally quite good and out-of-vocabulary words, when they occur, are often proper names that can be translated correctly by just copying them to the output, so the vocabulary coverage features may be less useful than in texts where out-of-vocabulary items are more frequent. Finally, the subtitle MT system is known to suffer from a specific problem that causes it to drop content words occasionally. Probably some of our additional features help detect items affected by this particular bug, partly explaining the difference in performance between the full and the reduced feature set.
The best overall performance is obtained by combining the explicit feature set with tree kernels (tables 1 and 2).
All experiments in these config-\nurations performed at least as well, and almost al-\nways better, than either the trivial baseline or the\nclassifiers with explicit features only. It is not clear\nwhether the constituency or the dependency parse\nconfiguration is to be preferred, but the former has\nthe advantage that it reaches similar levels of per-\nformance without parsing the MT output at all.\nIn table 3, we show the results of all experi-\nments in terms of accuracy. While we believe that\nprecision and recall scores are more informative,\nthis format has the advantage of being comparable\nwith the scores published by Specia et al. (2010b)\nfor the three Europarl test sets. As can be seen,\nour systems are generally competitive with the re-\nsults published in the recent literature. This ta-\nble also contains results for a number of systems\nthat use only tree kernels and do not make use\nof the explicit features at all. For these experi-238\nEuroparl sub-\n1 2 3 titles\nmajority class 71.0 54.6 51.8 50.2\nSpecia et al. (2010b), best results 76.8 66.0 69.8\nexplicit features, full set 66.4\nexplicit features, reduced set 72.6 68.7 70.3 64.3\nconstituency tree kernel (src) 66.4 66.9 64.7\ndependency tree kernel (src) 67.6 66.6 64.0\ndependency tree kernel (tgt) 65.5 65.2 62.6\ndependency tree kernel (src+tgt) 66.4 67.8 65.0\nfull set + constituency (src) 66.7\nfull set + dependency (src+tgt) 68.3\nreduced set + constituency (src) 77.8 71.1 72.5 65.6\nreduced set + dependency (src+tgt) 76.7 72.4 72.8 67.9\nTable 3: Experimental results in terms of accuracy\nments, the scores of Europarl system 1 are omit-\nted because the tree-kernel-only classifiers degen-\nerated into the uninteresting accept-all case for this\ndataset, and small score differences with respect to\nthe majority class baseline are exclusively due to\nthe sampling variance.\nWhile the results of the tree-kernel-only systems\nwere generally lower than the corresponding re-\nsults obtained with the explicit feature set, it is in-\nteresting to notice that this was the case only by\na relatively small margin. The constituency parse\nconfiguration performs well even though it only\nuses information from the source language. For the\ndependency parses, using only the source language\nworks slightly better than using only the target\nlanguage, and combining the two generally works\nbest. Taking into account the fact that setting up\nthe tree-kernel-only systems only requires a work-\ning parser for one or both languages, whereas con-\nstructing explicit feature sets takes a considerable\namount of engineering work, it seems reasonable\nto use tree kernels as an initial building block for\na new MT confidence estimation system that can\ndeliver a certain level of performance on its own,\nadding other features as required to improve per-\nformance.\n7 Conclusions\nSyntactic tree kernels are an easy way to ex-\nploit complex structural information in a Machine\nLearning system. This is especially true when us-\ning a constituency parser whose output can directly\nbe fed into the ML component, but dependency\ntrees can also be used after a simple conversionstep. The feature space expressed by syntax trees is\nvery expressive, and feature selection can be han-\ndled effectively by the SVM training algorithm. In\ncombination, these advantages make a tree-kernel-\nbased approach a perfect starting point for an MT\nquality prediction system. 
This is borne out by our\nexperimental results, which show that MT qual-\nity classifiers based on tree kernels alone perform\nonly slightly worse than traditional systems based\non explicit features while being considerably eas-\nier to build.\nThis is not to say, of course, that explicit fea-\ntures have nothing to contribute. Our best results\nwere obtained by combining syntactic tree kernels\nwith a traditional feature set, and this is not sur-\nprising considering that the tree kernels we used\nonly encode information about the MT input and\noutput segments in isolation and do not take into\naccount their relation to the SMT training data or\ntheir mutual relation with each other. At least the\nlatter point could certainly be addressed with a\nmore advanced tree kernel design as well, and it\nremains for future work to show whether this may\nlead to further improvements. For the time being,\nit is safe to conclude that tree kernels should have\ntheir place in MT quality estimation as an easy and\nversatile method to encode complex feature sets.\nAcknowledgements\nParts of this work were carried out while the au-\nthor was working at Fondazione Bruno Kessler,\nHLT unit, Trento, Italy. We gratefully acknowl-\nedge the help of Alessandro Moschitti, who gave\nadvice on using tree kernel classifiers, of J ¨orgen239\nAasa, who organised the preparation of our subti-\ntle dataset, and of Lucia Specia and Marco Turchi,\nwho explained many details of their own experi-\nmental setup.\nReferences\nBlatz, John, Erin Fitzgerald, George Foster, et al. 2004.\nConfidence estimation for machine translation. In\nProceedings of the 20th International Conference on\nComputational Linguistics (COLING 2004) , pages\n315–321.\nCollins, Michael and Nigel Duffy. 2001. Convolution\nkernels for natural language. In Proceedings of NIPS\n2001 , pages 625–632.\nGamon, Michael, Anthony Aue, and Martine Smets.\n2005. Sentence-level MT evaluation without refer-\nence translations: Beyond language modeling. In\nProceedings of the 10th Annual Conference of the\nEAMT , pages 103–111, Budapest.\nGim´enez, Jes ´us and Llu ´ıs M ´arquez. 2004. SVMTool:\nA general POS tagger generator based on Support\nVector Machines. In Proceedings of the 4th Con-\nference on International Language Resources and\nEvaluation (LREC-2004) , Lisbon.\nHal´acsy, P ´eter, Andr ´as Kornai, and Csaba Oravecz.\n2007. HunPos – an open source trigram tagger.\nInProceedings of the 45th Meeting of the Associa-\ntion for Computational Linguistics , pages 209–212,\nPrague.\nJoachims, Thorsten. 1999. Making large-scale SVM\nlearning practical. In Sch ¨olkopf, B., C. Burges, and\nA. Smola, editors, Advances in Kernel Methods –\nSupport Vector Learning . MIT Press.\nJohansson, Richard and Alessandro Moschitti. 2010.\nSyntactic and semantic structure for opinion expres-\nsion detection. In Proceedings of the 2010 Confer-\nence on Natural Language Learning , Uppsala.\nKlein, Dan and Christopher D. Manning. 2003. Ac-\ncurate unlexicalized parsing. In Proceedings of the\n41st Meeting of the Association for Computational\nLinguistics , pages 423–430, Sapporo.\nKoehn, Philipp, Hieu Hoang, Alexandra Birch, et al.\n2007. Moses: open source toolkit for statistical ma-\nchine translation. In Annual meeting of the Associ-\nation for Computational Linguistics: Demonstration\nsession , pages 177–180, Prague.\nKrippendorff, Klaus. 2004. Measuring the reliability\nof qualitative text analysis data. Quality and Quan-\ntity, 38:787–800.\nLiu, Ding and Daniel Gildea. 2005. 
Syntactic features\nfor evaluation of machine translation. In Proceed-\nings of the ACL Workshop on Intrinsic and Extrin-\nsic Evaluation Measures for Machine Translation ,\npages 25–32, Ann Arbor.Moschitti, Alessandro. 2006a. Efficient convolu-\ntion kernels for dependency and constituent syntactic\ntrees. In Proceedings of the 17th European Confer-\nence on Machine Learning , Berlin.\nMoschitti, Alessandro. 2006b. Making tree kernels\npractical for natural language learning. In Proceed-\nings of the Eleventh International Conference of the\nEuropean Association for Computational Linguis-\ntics, Trento.\nNivre, Joakim, Johan Hall, and Jens Nilsson. 2006.\nMaltParser: A language-independent system for\ndata-driven dependency parsing. In Proceedings of\nthe 5th Conference on International Language Re-\nsources and Evaluation (LREC-2006) , pages 2216–\n2219, Genoa.\nQuirk, Christopher B. 2004. Training a sentence-\nlevel machine translation confidence measure. In\nProceedings of the 4th Conference on International\nLanguage Resources and Evaluation (LREC-2004) ,\npages 825–828, Lisbon.\nSoricut, Radu and Abdessamat Echihabi. 2010.\nTrustRank: Inducing trust in automatic translations\nvia ranking. In Proceedings of the 48th Meeting of\nthe Association for Computational Linguistics , pages\n612–621, Uppsala.\nSpecia, Lucia, Nicola Cancedda, Marc Dymetman,\nMarco Turchi, and Nello Cristianini. 2009a. Esti-\nmating the sentence-level quality of Machine Trans-\nlation systems. In Proceedings of the 13th Annual\nConference of the EAMT , pages 28–35, Barcelona.\nSpecia, Lucia, Craig Saunders, Marco Turchi, Zhuoran\nWang, and John Shawe-Taylor. 2009b. Improving\nthe confidence of Machine Translation quality esti-\nmates. In Proceedings of MT Summit XII , Ottawa.\nSpecia, Lucia, Nicola Cancedda, and Marc Dymetman.\n2010a. A dataset for assessing machine translation\nevaluation metrics. In Proceedings of the 7th Con-\nference on International Language Resources and\nEvaluation (LREC-2010) , pages 3375–3378, Val-\nletta, Malta.\nSpecia, Lucia, Dhwaj Raj, and Marco Turchi. 2010b.\nMachine translation evaluation versus quality esti-\nmation. Machine Translation , 24:39–50.\nTiedemann, J ¨org. 2009. News from OPUS – a collec-\ntion of multilingual parallel corpora with tools and\ninterface. In Nicolov, N., K. Bontcheva, G. An-\ngelova, and R. Mitkov, editors, Recent Advances in\nNatural Language Processing , pages 237–248. John\nBenjamins, Amsterdam.\nV olk, Martin, Rico Sennrich, Christian Hardmeier, and\nFrida Tidstr ¨om. 2010. Machine translation of TV\nsubtitles for large scale production. In Proceedings\nof the Second Joint EM+/CNGL Workshop “Bring-\ning MT to the User: Research on Integrating MT in\nthe Translation Industry” (JEC 2010) , pages 53–62,\nDenver, CO.240",
"main_paper_content": null
}
|
{
"decision": "Unknown",
"reviews": []
}
| 0 | 0 |
[] |
[] |
{
"id": "0FnuaxA_XKG",
"year": null,
"venue": "EAMT 2020",
"pdf_link": "https://aclanthology.org/2020.eamt-1.24.pdf",
"forum_link": "https://openreview.net/forum?id=0FnuaxA_XKG",
"arxiv_id": null,
"doi": null
}
|
{
"title": "Document-level Neural MT: A Systematic Comparison",
"authors": [
"António V. Lopes",
"M. Amin Farajian",
"Rachel Bawden",
"Michael Zhang",
"André F. T. Martins"
],
"abstract": "António Lopes, M. Amin Farajian, Rachel Bawden, Michael Zhang, André F. T. Martins. Proceedings of the 22nd Annual Conference of the European Association for Machine Translation. 2020.",
"keywords": [],
"raw_extracted_content": "Document-level Neural MT: A Systematic Comparison\nAnt´onio V . Lopes1M. Amin Farajian1Rachel Bawden2\nMichael Zhang3Andr ´e F. T. Martins1\n1Unbabel, Rua Visc. de Santar ´em 67B, Lisbon, Portugal\n2University of Edinburgh, Scotland, UK\n3University of Washington, Seattle, WA, USA\nfantonio.lopes, amin, andre.martins [email protected]\[email protected] ,[email protected]\nAbstract\nIn this paper we provide a systematic com-\nparison of existing and new document-\nlevel neural machine translation solutions.\nAs part of this comparison, we introduce\nand evaluate a document-level variant of\nthe recently proposed Star Transformer ar-\nchitecture. In addition to using the tradi-\ntional metric BLEU, we report the accu-\nracy of the models in handling anaphoric\npronoun translation as well as coherence\nand cohesion using contrastive test sets.\nFinally, we report the results of human\nevaluation in terms of Multidimensional\nQuality Metrics (MQM) and analyse the\ncorrelation of the results obtained by the\nautomatic metrics with human judgments.\n1 Introduction\nThere has been undeniable progress in Machine\nTranslation (MT) in recent years, so much so that\nfor certain languages and domains, when sentences\nare evaluated in isolation, it has been suggested\nthat MT is on par with human translation (Has-\nsan et al., 2018). However, it has been shown\nthat human translation clearly outperforms MT at\nthe document level, when the whole translation is\ntaken into account (L ¨aubli et al., 2018; Toral et al.,\n2018; Laubli et al., 2020). For example, the Con-\nference on Machine Translation (WMT) now con-\nsiders inter-sentential translations in their shared\ntask (Barrault et al., 2019). This sets a demand for\ncontext-aware machine translation: systems that\ntake the context into account when translating, as\nopposed to translating sentences independently.\n© 2020 The authors. This article is licensed under a Creative\nCommons 3.0 licence, no derivative works, attribution, CC-\nBY-ND.Translating sentences in context (i.e. at the doc-\nument level) is essential for correctly handling\ndiscourse phenomena whose scope can go be-\nyond the current sentence and which therefore re-\nquire document context (Hardmeier, 2012; Baw-\nden, 2018; Wang, 2019). Important examples in-\nclude anaphora, lexical coherence and cohesion,\ndeixis and ellipsis; crucial aspects in delivering\nhigh quality translations which often are poorly\nevaluated using standard automatic metrics.\nNumerous context-aware neural MT (NMT)\napproaches have been proposed in recent years\n(Tiedemann and Scherrer, 2017; Zhang et al.,\n2018; Maruf et al., 2019; Miculicich et al., 2018;\nV oita et al., 2019b; Tu et al., 2018), integrat-\ning source-side and sometimes target-side context.\nHowever, they have often been evaluated on differ-\nent languages, datasets, and model sizes. Certain\nmodels have also previously been trained on few\nsentence pairs rather than in more realistic, high-\nresource scenarios. A direct comparison and anal-\nysis of the methods, particularly concerning their\nindividual strengths and weaknesses on different\nlanguage pairs is therefore currently lacking.\nWe fill these gaps by comparing a representa-\ntive set of context-aware NMT solutions under the\nsame experimental settings, providing:\n• A systematic comparison of context-aware NMT\nmethods using large datasets (i.e. 
pre-trained using large amounts of sentence-level data) for three language directions: English (EN) into French (FR), German (DE) and Brazilian Portuguese (PT br). We evaluate on (i) document translation using public data for EN→{FR, DE} and (ii) chat translation using proprietary data for all three directions. We use targeted automatic evaluation and human assessments of quality.
• A novel document-level method inspired by the Star transformer approach (Guo et al., 2019), which can leverage full document context from arbitrarily large documents.
• The creation of an additional open-source large-scale contrastive test set for EN→FR anaphoric pronoun translation.1
1The dataset and scripts are available at https://github.com/rbawden/Large-contrastive-pronoun-testset-EN-FR
2 Neural Machine Translation
2.1 Sentence-level NMT
NMT systems are based on the encoder-decoder architecture (Bahdanau et al., 2014), where the encoder maps the source sentence into word vectors, and the decoder produces the target sentence given these source representations. These systems, by assuming a conditional independence between sentences, are applied to sentence-level translation, i.e. ignoring source- and target-side context. As such, current state-of-the-art NMT systems optimize the negative log-likelihood of the sentences:
p(y^{(k)} \mid x^{(k)}) = \prod_{t=1}^{n} p(y^{(k)}_t \mid y^{(k)}_{<t}, x^{(k)}),   (1)
where x^{(k)} and y^{(k)} are the kth source and target training sentences, and y^{(k)}_t is the tth token in y^{(k)}.
In this paper, the underlying architecture is a Transformer (Vaswani et al., 2017). Transformers are usually applied to sentence-level translation, using the sentence independence assumption above. This assumption precludes these systems from learning inter-sentential phenomena. For example, Smith (2017) analyzes certain discourse phenomena that sentence-level MT systems cannot capture, such as obtaining consistency and lexical coherence of named entities, among others.
2.2 Context-aware NMT
Context-aware NMT relaxes the independence assumption of sentence-level NMT; each sentence is translated by conditioning on the current source sentence as well as other sentence pairs (source and target) in the same document. More formally, given a document D containing K sentence pairs {(x^{(1)}, y^{(1)}), (x^{(2)}, y^{(2)}), ..., (x^{(K)}, y^{(K)})}, the probability of translating x^{(k)} into y^{(k)} is:
p(y^{(k)} \mid x^{(k)}) = \prod_{t=1}^{n} p(y^{(k)}_t \mid y^{(k)}_{<t}, X, Y^{(<k)}),   (2)
where X := {x^{(1)}, ..., x^{(K)}} are the document's source sentences and Y^{(<k)} := {y^{(1)}, ..., y^{(k-1)}} the previously generated target sentences.
2.3 Chat translation
A particular case of context-aware MT is chat translation, where the document is composed of utterances from two or more speakers, speaking in their respective languages (Maruf et al., 2018; Bawden et al., 2019).
There are two main defining aspects of chat: the content type (shorter, less planned, more informal and ungrammatical and noisier), and the context available (past utterances only, from multiple speakers in different languages). Specifically, chat is an online task where only the past utterances are available and context-aware models (see §3) need to be adapted to cope with multiple speakers. In this work we introduce tokens to distinguish each speaker and modify the internal flow of the method to incorporate both speakers' context. There is also an additional challenge in how to handle both language directions and how using gold or predicted context affects chat models.
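To make the speaker tokens mentioned above concrete, the sketch below shows one way a speaker-tagged, concatenated source input could be assembled for chat translation, in the spirit of the concatenation-based approaches described in the next section. The tag and separator strings are our placeholders; the paper does not specify the exact vocabulary used.

```python
# Illustrative sketch (ours): building a speaker-aware source input by prefixing
# each utterance with a speaker token and concatenating the dialogue history.
def build_chat_source(context, current_speaker, current_utterance, sep="<SEP>"):
    """context: list of (speaker_id, utterance) pairs, oldest first."""
    parts = [f"<spk{spk}> {utt}" for spk, utt in context]
    parts.append(f"<spk{current_speaker}> {current_utterance}")
    return f" {sep} ".join(parts)

history = [(1, "Hi , I have a problem with my order ."),
           (2, "Sorry to hear that , can you give me the order number ?")]
print(build_chat_source(history, 1, "Sure , it is 12345 ."))
```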
In this work\nwe consider a simplification of this problem by as-\nsuming the language direction of the first speaker\nis always from a gold set, leaving for future work\nthe assessment of the impact of using predictions\nof the other speaker’s utterances.\n3 Context-aware NMT methods\nWe compare three previous context-aware ap-\nproaches (concatenation, multi-source and cache-\nbased) in our experiments. As well as illustrat-\ning different methods of integrating context, they\nvary in terms of which context (source/target, pre-\nvious/future) and how much context (number of\nsentences) they can exploit, as shown in Table 1.\nAlthough other context-aware methods do exist,\nwe choose these three methods as being represen-\ntative of the number of context sentences and usage\nof both source and target side context.\nConcatenation: Tiedemann and Scherrer (2017)\nuse the previous sentence as context, i.e. X(k\u00001)\nandY(k\u00001), concatenated to the current sentence,\ni.e.X(k)andY(k), separated by a special token. It\nis called 2to1 when just the source-side context\nis used, and 2to2 when the target is used too.\nMulti-source context encoder: Zhang et al.\n(2018) model the previous source sentences,\nX(<k)with an additional encoder. They modify\nthe transformer encoder and decoder blocks to in-\ntegrate this encoded context; they introduce an ad-\nditional context encoder in the source side that re-\nceives the previous two source sentences as con-\ntext (separated by a special token), encodes them\nand passes the context encodings to both the en-\ncoder and decoder, integrating them using addi-\ntional multi-head attention mechanisms. Similar\nto the concatenation-based approach, here the con-\ntext is limited to the previous few sentences.\nCache-based: Tu et al. (2018) model all previ-\nous source and target sentences, X(<k)andY(<k)\nwith a cache-based approach (Grave et al., 2016),\nwhereby, once a sentence has been decoded, its\ndecoder states and attention vectors are saved in\nan external key-value memory that can be queried\nwhen translating subsequent sentences. This is one\nof the first approaches that uses the global context.\nOther methods have been proposed to use both\nsource and target history with different ranges of\ncontext. (Miculicich et al., 2018) attends to words\nfrom previous sentences with a 2-stage hierarchi-\ncal approach, while (Maruf et al., 2019), simi-\nlarly, attends to words in specific sentences us-\ning sparse hierarchical selective attention. (V oita\net al., 2019a), which extends the concatenation-\nbased approach to four sentences in a monolingual\nAutomatic Post-Edition (APE) setting; whereas\nJunczys-Dowmunt (2019) proposes full document\nconcatenation with a BERT model to improve the\nword embeddings through document context and\nfull document APE. Ng et al. (2019) proposes a\nnoisy channel approach with reranking, where the\nlanguage model (LM) operates at document-level\nbut the reranking does not. Yu et al. (2019) extends\nthe previous work using conditionally dependent\nsentence reranking with the document-level LM.\n#Prev #Fut Src Trg\nConcat2to1 (1) 1 - X\nConcat2to2 (1) 1 - X X\nMulti-source context encoder (2) 2 - X\nCache-based (3) all - X X\nStar (4) - (see §4) all all (src) X X\nTarget APE (5) 3 3 X\nSparse Hierarchical attn. (6) all - X X\nTable 1: A summary of the methods compared (1-4). 
We also include (5-6) in this summary table for comparative purposes.
4 Doc-Star-Transformer
We propose a scalable approach to document-level NMT inspired by the Star architecture (Guo et al., 2019) for sentence-level NMT. We have an equivalent relay node and build sentence-level representations; we propagate this non-local information at document-level and enrich the word-level embeddings with context information.
To do this, we augment the vanilla sentence-level Transformer model of Vaswani et al. (2017) with two additional multi-headed attention sub-layers. The first sub-layer is used to summarize the global contribution of each sentence into a single embedding. The second layer then uses these sentence embeddings to update word representations throughout the document, thereby incorporating document-wide context.
In §4.1, we describe our model assuming it can attend to context from the entire document without practical memory constraints. Then in §4.2 we show how to extend the model to arbitrarily long contexts by introducing sentence-level recurrence.
4.1 Document-level Context Attention
We begin by describing the encoder of the Doc-Star-Transformer (Figure 1). We refer to the sentence and word representations of the kth sentence at layer i as s^{(k)}_i and w^{(k)}_i respectively. Our Doc-Star-Transformer model makes use of the Scaled Dot-Product Attention of Vaswani et al. (2017) to perform alternating updates to sentence and word embeddings across the document to efficiently incorporate document-wide context; our method can efficiently capture local and non-local context (at document level) and, like the Star Transformer, also eliminates the need to compute pairwise attention scores for each word in the document.
Intermediate word representations, H^{(k)}_i, are updated with sentence-level context. These intermediate representations are then used in a second stage of multi-headed attention to generate an embedding for each sentence in the document:
H^{(k)}_i = Transformer(w^{(k)}_{i-1}),   (3)
s^{(k)}_i = MultiAtt(s^{(k)}_{i-1}, H^{(k)}_i),   (4)
We then concatenate the newly constructed sentence representations and allow each word in sentence k to attend to all preceding sentences' representations.2 Finally, we apply a feed-forward network, which uses two linear transformations with a ReLU activation to get the layer's final output:
H^{(k)\prime}_i = MultiAtt(H^{(k)}_i, [s^{(k)}_i; s^{(k-1)}_i; \ldots; s^{(1)}_i]),   (5)
w^{(k)}_i = ReLU(H^{(k)\prime}_i),   (6)
2We describe our method in the online setting and to match the decoder side. In the document-MT setting, (5) concatenates all sentences' representations to include context from future source-side sentences during translation.
Figure 1: Doc-Star-Transformer encoder.
The Doc-Star-Transformer decoder follows a similar structure to the encoder, except that the decoder does not have access to the sentence representation of the current sentence k, thus removing sentence s^{(k)}_i from (5). Source-side context is added through concatenation of the previous sentence embeddings from the final layer of the encoder with the decoder's in (5).
4.2 Sentence-level Recurrence
To overcome practical memory constraints (due to very long documents), we introduce a sentence-level recurrence mechanism with state reuse, similar to that used by Dai et al. (2019). During training, a constant number of sentence embeddings are cached to provide context when translating the next segment. We cut off gradients to these cached sentence embeddings, but allow them to be used to model long-term dependencies without context fragmentation. More formally, we allow \tau to be the number of previous sentence embeddings maintained in the cache and update as follows:
H^{(k)\prime}_i = MultiAtt(H^{(k)}_i, [s^{(k)}_i; s^{(k-1)}_i; \ldots; s^{(B)}_i; SG(s^{(B)}_i); \ldots; SG(s^{(B-\tau)}_i)]),
where B is the index of the first sentence in the batch and SGs are the sentence representations with stopped gradients. In contrast with previous approaches, such as Hierarchical Attention (Maruf et al., 2019), this gradient caching strategy has the advantage of letting the model attend to full source context regardless of document lengths and therefore to avoid practical memory issues.
5 Evaluating Context-Aware NMT
The evaluation of context-aware MT is notoriously tricky (Hardmeier, 2012); standard automatic metrics such as BLEU (Papineni et al., 2002) are poorly suited to evaluating discourse phenomena (e.g. anaphoric references, lexical cohesion, deixis, ellipsis) that require document context. We therefore evaluate all models using a range of phenomenon-specific contrastive test sets.
Contrastive sets are an automatic way of evaluating the handling of particular phenomena (Sennrich, 2017; Rios Gonzales et al., 2017). The aim is to assess how well models rank correct translations higher than incorrect (contrastive) ones. For context-aware test sets, the correctness of translations depends on context. Several such sets exist for a range of discourse phenomena and for several language directions: EN→FR (Bawden et al., 2018), EN→DE (Müller et al., 2018) and EN→RU (Voita et al., 2019b). In this article, we evaluate using the following test sets for our two language directions of focus, EN→DE and EN→FR:
EN-FR: anaphora, lexical choice (Bawden et al., 2018):3 two manually crafted sets (200 contrastive pairs each), for which the previous sentence determines the correct translation. The sets are balanced such that each correct translation also appears as an incorrect one (a non-contextual baseline achieves 50% precision). Anaphora examples include singular and plural personal and possessive pronouns. In addition to standard contrastive examples, this set also contains contextually correct examples, where the antecedent is translated strangely, designed to test the use of past translation decisions. Lexical choice examples include cases of lexical ambiguity (cohesion) and lexical repetition (cohesion).
3https://github.com/rbawden/discourse-mt-test-sets
EN→DE: anaphoric pronouns (ContraPro) (Müller et al., 2018).4 A large-scale automatically created set from OpenSubtitles2018 (Lison et al., 2018), in which sentences containing the English anaphoric pronoun it (and its corresponding German translations er, sie or es) are automatically identified, and contrastive erroneous translations are automatically created. The test set contains 4,000 examples for each target pronoun type, and the disambiguating context can be found in any number of previous sentences.
EN→FR: large-scale pronoun test set. We automatically create a large-scale EN→FR test set from OpenSubtitles2018 (Lison et al., 2018) in the style of ContraPro, with some modifications to their protocol due to the limited quality of available tools.
The test set is created as follows:\n1. Instances of itandtheyand their antecedents are\ndetected using NEURALCOREF .5Unlike M ¨uller\net al. (2018), we only run English coreference\ndue to a lack of an adequate French tool.\n2. We align pronouns to their translations ( il,elle,\nils,elles) using FastAlign (Dyer et al., 2013).\n3. Examples are filtered to only include sub-\nject pronouns (using Spacy6) with a nominal\nantecedent, aligned to a nominal French an-\ntecedent matching the pronoun’s gender. We\nalso remove examples whose antecedent is\nmore than five sentences away to avoid cases\nof imprecise coreference resolution.\n4. Contrastive translations are created by inverting\nthe pronouns’ gender (cf. Figure 2). We modify\nthe gender of words that agree with the pronoun\n(e.g. adjectives and some past participles) using\nthe Le ffflexicon (Sagot, 2010)).\nThe test set consists of 3,500 examples for each\ntarget pronoun type (cf. Table 2 for the distribution\nof coreference distances).\n6 Experimental Setup\nAs mentioned in §1, we aim to provide a system-\natic comparison of the approaches over the same\n4https://github.com/ZurichNLP/ContraPro\n5https://github.com/huggingface/neuralcoref\n6https://spacy.ioContext sentence\nPies made from apples like these.\nDestartes ffaites avec des pommes comme celles-ci\nCurrent sentence\nOh,they do look delicious.\n\b Elles font l’air d ´elicieux.\n× Ilsmont l’air d ´elicieux.\nFigure 2: An example from the large-scale EN !FR test set.\n# examples at each distance\nPronoun 0 1 2 3 4 5\nil 1,628 1,094 363 213 127 75\nelle 1,658 1,144 356 166 106 70\nils 1,165 1,180 501 302 196 156\nelles 1,535 1,148 409 199 128 81\nTable 2: The distribution of each pronoun type according to\ndistance (in #sentences) from the antecedent.\ndatasets, training data sizes and language pairs. We\nstudy whether pre-training with larger resources\n(in a more realistic high-resource scenario) has an\nimpact on the methods on language directions that\nare challenging for sentence-level MT. We con-\nsider translation from English into French (FR),\nGerman (DE) and Brazilian Portuguese (PT br),\nwhich all have gendered pronouns corresponding\nto neutral anaphoric pronouns in English ( itfor all\nthree and they for FR and PT br).\nWe compare the three previous methods (§3)\nplus the Doc-Star-Transformer in two scenarios:\n(i) document MT, testing on TED talks (EN !FR\nand EN!PTbr), and (ii) chat MT testing on pro-\nprietary conversation data for all three directions.\n6.1 Data\nFor both scenarios, we pre-train baseline mod-\nels on large amounts of publicly available\nsentence-level parallel data ( \u001818M,\u001822Mand\n\u00185Msentence pairs for EN !DE, EN!FR, and\nEN!PTbr respectively). We then separately fine-\ntune them to each domain. For the document MT\ntask, we consider EN !DE and EN!FR and fine-\ntune on IWSLT17 (Cettolo et al., 2012) TED Talks,\nusing the test sets 2011-2014 as dev sets, and\n2015 as test sets. For the chat MT task, we fine-\ntune on (anonymized) proprietary data of 3dif-\nferent domains and on an additional language pair\n(EN!PTbr). 
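As background on how the contrastive test sets of section 5 are used at evaluation time, the following schematic sketch scores the correct and the contrastive translation with a model and counts an example as passed when the correct one scores higher. The scoring function here is a toy stand-in for a real NMT model's log-probability, and the example data is ours.

```python
# Schematic sketch (ours) of contrastive evaluation: a model "passes" an example
# if it assigns a higher score to the correct translation than to the contrastive
# one, given the same source and context. `score` stands in for a real NMT model.
def contrastive_accuracy(examples, score):
    """examples: list of dicts with 'src', 'context', 'correct', 'contrastive'."""
    passed = 0
    for ex in examples:
        good = score(ex["src"], ex["context"], ex["correct"])
        bad = score(ex["src"], ex["context"], ex["contrastive"])
        passed += good > bad
    return passed / len(examples)

def toy_score(src, context, hyp):
    # crude heuristic: prefer a feminine pronoun when the context mentions "tarte"
    return int(("tarte" in context) == hyp.startswith("Elle"))

examples = [{"src": "It is delicious.",
             "context": "La tarte est sur la table .",
             "correct": "Elle est délicieuse .",
             "contrastive": "Il est délicieux ."}]
print(contrastive_accuracy(examples, toy_score))
```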
Dataset sizes are shown in Table 3\n(sentence-level pre-training data) and Tables 4–5\n(document and chat task data respectively).\nTrain Dev\nEN-DE 18M 1K\nEN-FR 20M 1K\nEN-PT br 5M 1K\nTable 3: Sentence-level corpus sizes (#sentences)\nTrain Dev Test\nEN-DE 206K 5.4K 1.1K\nEN-FR 233K 5.8k 1.2K\nTable 4: TED talks document-level corpus sizes (#sentences)\nDomain1 Domain2 Domain3\nEN-DETrain 674k 62K 13K\nDev 37K 3.2K 0.6K\nTest 35K 3.6K 0.7K\nEN-FRTrain 395K 108K 110K\nDev 21K 6.3K 6.1K\nTest 22K 6.2K 6.3K\nEN-PT brTrain 235K 61K 13K\nDev 13K 3.4K 0.7K\nTest 13K 3.2K 0.7K\nTable 5: The corpora sizes of the chat translation task. We\nconsider both speakers for this count.\n6.2 Training Configuration\nFor all experiments we use the Transformer base\nconfiguration (hidden size of 512, feedforward size\nof 2048, 6 layers, 8 attention heads) with the\nlearning rate schedule described in (Vaswani et\nal., 2017). We use label smoothing with an ep-\nsilon value of 0:1(Pereyra et al., 2017) and early\nstopping of 5consecutive non-improving valida-\ntion points of both accuracy and perplexity. Self-\nattentive models are sensitive to batch size (Popel\nand Bojar, 2018), and so we use batches of 32kto-\nkens for all methods.7For all tasks, we use a sub-\nword unit vocabulary (Sennrich et al., 2016) with\n32koperations. We share source and target embed-\ndings, as well as target embeddings with the final\nvocab projection layer (Press and Wolf, 2017).\nFor the document translation experiments, we\nrun the same experimental setting with 3different\nseeds and average the scores of each model.\nFor the approaches that fine-tune just the\ndocument-level parameters (i.e. cache-based,\nmulti-source encoder, and Doc-Star-Transformer),\nwe reset all optimizer states and train with the\nsame configuration as the baselines (with the base\nparameters frozen), as described in (Tu et al., 2018;\nZhang et al., 2018). For Doc-Star-Transformer we\nuse multi-heads of 2 and 8 heads. All methods are\n7The optimizer update is delayed to simulate the 32ktokens.implemented in Open-NMT (Klein et al., 2017).\n6.3 Chat-specific modifications\nIn the case of the concatenation-based approaches,\nmulti-source context encoder, and the Doc-Star-\nTransformer, we add the speaker symbol as spe-\ncial token to the beginning of each sentence. For\nthe cache-based systems, we introduce two differ-\nent caches, one per speaker, and investigate dif-\nferent methods for deep fusing them (Tu et al.,\n2018): (i) deep fusing the first speaker’s cache first\nand next fusing with the second speaker’s cache,\n(ii) the same method but with the second speaker\nfirst, and (iii) jointly integrating the caches. In ad-\ndition, for the cache-based system we explore the\neffect of storing full words or subword units in\nthe external memory For the full word approach,\nwe use subword units in the vocab but merge the\nwords when adding to the cache.\n6.4 Evaluation setup\nWe perform both automatic and manual evalua-\ntion, in order to gain more insights into the dif-\nferences between the models.\nAutomatic evaluation: We first evaluate\nall methods with case-sensitive detokenized\nBLEU (Papineni et al., 2002).8We then evaluate\ncontext-dependent discourse-level phenomena us-\ning the previously described contrastive test sets.\nFor EN!DE this corresponds to the large-scale\nanaphoric pronoun test set of M ¨uller et al. 
(2018)\nand for EN!FR our own analogous large-scale\nanaphoric pronoun test set (described in §5),9as\nwell as the manually crafted test sets of Bawden et\nal. (2018) for anaphora and coherence/cohesion.\nManual evaluation: In the case of the chat\ntranslation task (using proprietary data), in addi-\ntion to BLEU, we also manually assess the perfor-\nmance of the systems with professional human an-\nnotators, who mark the errors of the systems with\ndifferent levels of severity (i.e. minor, major, crit-\nical). In the case of extra-sentential errors such as\nagreements we asked them to mark both the pro-\nnoun and its antecedent. We score the systems’\nperformance using Multidimensional Quality Met-\nrics (MQM) (Lommel, 2013):\nMQM =100\u0000minor +major\u00035 +critical\u000310\nWord count\n8Using Moses’ (Koehn et al., 2007) multi-bleu-detok .\n9For both large-scale test sets, we make sure to exclude the\ndocuments they include from the training data.\nBy having access to the full conversation, the\nannotators can annotate both intra- and extra-\nsentential errors (e.g. document-level error exam-\nples of agreement or lexical consistency).\nWe prioritize documents with a large number of\nedits compared to the sentence-level baseline (nor-\nmalized by document length) due to document-\nlevel systems tending to perform few edits with re-\nspect to the high performance non-context-aware\nsystems. We request annotations of approximately\n200 sentences per language pair and method.\n7 Results and analysis\n7.1 Document Translation Task\nTable 6 shows the results of the average perfor-\nmance of each system on IWSLT data according\nto BLEU. Although the approaches have previ-\nously shown improved performance compared to\na baseline, when a stronger baseline is used, we\nsee marginal to no improvements over the baseline\nfor both language directions.\nEN!DE EN !FR\nBaseline 32.08 40.92\nConcat2to1 31.84 40.67\nConcat2to2 30.89 40.57\nCache SubWords 32.10 40.91\nCache Words 32.12 40.88\nZhang et al. 2018 31.03 40.95\nStar, 2 heads, gold target ctx 31.76 41.00\nStar, 2 heads, predicted target ctx 31.39 40.72\nStar, 8 heads, gold target ctx 31.74 40.74\nStar, 8 heads, predicted target ctx 31.29 40.58\nTable 6: BLEU score results on the IWSLT15 test set (aver-\naged over 3 different runs for each method).\nTable 7 shows the average performance of each\nsystem for all contrastive sets. The results differ\ngreatly from BLEU results; methods on par or be-\nlow the baseline according to BLEU perform better\nthan the baseline when evaluated on the contrastive\ntest sets. This is notably the case of the Concat\nmodels, which achieve some of the best results on\nthe both large-scale pronoun sets (EN !DE and\nEN!FR), as shown by the high percentages on the\nmore difficult feminine pronoun Siefor EN!DE\nand all pronouns for EN !FR.\nMost models struggle to achieve high perfor-\nmances for the feminine Sieand neutral Er, which\nis likely due to masculine Esbeing the majority\nclass in the training data. For French, although\nthe feminine pronouns are also usually challeng-\ning, the high scores seen here are possibly due tothe fact that many examples have an antecedent\nwithin the same sentence. The Concat2to2 method\nhowever performs well across the board, proving\nto be an effective way of exploiting context. It also\nachieves the highest scores on both the anaphora\nand coherence/cohesion test set, which is only pos-\nsible when the context is actually being used, as\nthe test set is completely balanced. 
This appears to\nconfirm the findings of Bawden et al. (2018) that\ntarget-side context is most effectively used when\nchannelled through the decoder. Surprisingly, the\nmulti-source encoder approach degrades the base-\nline with respect to this evaluation, suggesting that\nthe context being used is detrimental to the han-\ndling of these phenomena.\nWe note that using OpenSubtitles as a resource\nfor context-dependent translation or scoring, has\nadditional challenges. Figure 3 illustrates four of\nthese, which could make translation more chal-\nlenging if they affect the context being exploited.\n7.2 Chat Translation Task\nTable 8 shows BLEU score results on the propri-\netary data, with the modifications described in §3\nto address the chat task. As expected, document-\nlevel information has a larger impact for the lowest\nresource language pair, EN !PTbr, with marginal\nimprovements on EN !FR and EN!DE.\nThe performance of these methods depends on\nthe language pair and domain. Although it is not\nconclusive which method performs best, our pro-\nposed method improves over the baseline consis-\ntently, whereas the cache-based and Concat2to2\nmethods also perform well in some scenarios. For\nour Doc-Star-Transformer approach, using predic-\ntions rather than the gold history harms the model\nat inference, showing that bridging this gap could\nlead to a better handling of target-side context.\nThere is little correlation between BLEU scores\nand the human MQM scores (as shown by the\ncomparison for 3 methods in Table 9). Although\nthe difference between BLEU scores are marginal,\nMQM indicates that quality differences can be\nseen by human evaluators: the document-level sys-\ntems (Cache and Star) both achieve higher results\nfor EN!PTbr (although the Star approach under-\nperforms for EN!FR). This shows that for cer-\ntain language directions, the document-level ap-\nproaches do learn to fix some errors and therefore\nimprove translation quality. This also confirms\nprevious suggestions that BLEU is not a good met-\nEN!DE EN !FR\nTotal Es Sie Er Totalit they AnaphoraCoherence/\ncohesion(%)\nelle il elles ils All All\nBaseline 45.0 91.9 22.9 20.2 79.7 88.1 82.7 76.1 72.2 50.0 50.0\nConcat2to1 48.0 91.6 27.1 25.3 80.9 88.4 83.3 77.2 73.9 50.0 52.5\nConcat2to2 70.8 91.8 61.9 58.7 83.2 89.2 86.2 80.4 77.6 82.5 55.0\nCache (Subwords) 45.2 92.1 23.5 19.9 79.7 88.0 82.7 76.0 72.0 50.0 50.0\nMulti-src Enc 42.6 62.3 33.9 31.5 59.0 62.0 61.3 57.2 57.3 47.0 46.5\nStar, 8 heads 45.9 91.3 27.0 19.5 79.6 88.0 82.6 76.1 72.0 50.0 50.0\nTable 7: Accuracies (in %) for the contrastive sets. 
Methods outperforming the baseline are in bold.\nDomain1 Domain2 Domain3\nEN-DE EN-FR EN-PT br EN-DE EN-FR EN-PT br EN-DE EN-FR EN-PT br\nBaseline 78.53 79.71 81.21 72.11 76 73.94 69.67 74.76 74.95\nConcat2to1S1,S2 + speaker tag 78.04 79.65 80.36 71 75.35 73.02 69.92 74.57 74.82\nS1 77.97 79.55 80.26 70.95 75.21 73.33 69.77 74.47 74.84\nConcat2to2S1,S2 + speaker tag 79.84 79.3 80.33 70.56 74.87 73.52 69.74 74.37 74.56\nS1 78.88 79.15 79.92 70.13 74.9 73.33 69.59 74.25 74.33\nCache S1 + CliJointPolicy Subwords 78.62 79.66 80.79 72.12 75.03 73.47 69.47 74.77 75.04\nJointPolicy Words 78.52 79.63 80.93 71.66 75.93 73.54 69.55 74.77 74.97\nCache S1 onlySubwods 78.41 79.46 81.17 71.73 75.92 74.41 69.68 74.8 74.94\nWords 78.28 79.54 81.04 71.9 75.87 74.33 69.51 74.82 74.94\nMulti-src enc SEP + speaker tag 78.23 79.64 81.04 71.5 75.87 73.78 - 74.66 74.82\nStarS1,S2 2 heads Gold target ctx 79.7 80.08 82.64 71.79 75.62 73.67 71.36 74.87 75.03\nS1,S2 2 heads Predicted target ctx 78.81 79.38 79.63 71.72 75.58 73.7 69.38 74.77 75.11\nS1 2 heads Gold target ctx 79.35 79.58 82.52 72.16 75.95 74.1 71.33 75.01 75.48\nS1 2 heads Predicted target ctx 78.17 79.24 79.83 72.24 75.68 73.9 70.24 74.65 75.21\nTable 8: BLEU scores on the chat translation task (proprietary data for 3 different domains and language pairs). S1 and S2\nrefer to the speakers in the case of chat translation task.\nEN!FR EN !PTbr\nBLEU MQM BLEU MQM\nBaseline 74.76 87.46 74.95 92.47\nCache 74.82 89.02 74,94 93.20\nStar 2 heads 75.01 86.80 75.48 95.20\nTable 9: The results of automatic and manual evaluation\nof the context-aware NMT methods in terms of BLEU and\nMQM on English !French and English !Portuguese.\nric to distinguish between strong NMT systems.\n8 Conclusion\nWe provided a systematic comparison of several\ncontext-aware NMT methods. One of the meth-\nods in this comparison was a new adaptation of\nthe recently proposed StarTransformer architec-\nture to document-level MT. In addition to BLEU,\nwe reported results of the contrastive evaluation\nof context-dependent phenomena (anaphora and\ncoherence/cohesion), creating an additional large-\nscale contrastive test set for EN !FR anaphoric\npronouns, and we carried out human evalua-\ntion in terms of Multidimensional Quality Met-\nrics (MQM). Our findings suggest that existing\ncontext-aware approaches are less advantageous in\nscenarios with larger datasets and strong sentence-\nlevel baselines. In terms of the targeted context-\ndependent evaluation, one of the promising ap-proaches is one of the simplest: the Concat2to2,\nwhere translated context is channelled through\nthe decoder, although our Doc-Star-Transformer\nmethod achieves good results according to the\nmanual evaluation of MT quality.\nAcknowledgments\nWe thank the anonymous reviewers for their\nvaluable feedback. This work is supported by\nthe EU in the context of the PT2020 project\n(contracts 027767 and 038510) and the H2020\nGoURMET project (825299), by the European Re-\nsearch Council (ERC StG DeepSPIN 758969), by\nthe Fundac ¸ ˜ao para a Ci ˆencia e Tecnologia through\ncontract UID/EEA/50008/2019 and by the UK En-\ngineering and Physical Sciences Research Council\n(MTStretch fellowship grant EP/S001271/1).\nReferences\nBahdanau, D., K. Cho, and Y . Bengio. 2014. Neural\nmachine translation by jointly learning to align and\ntranslate. arXiv preprint arXiv:1409.0473 .\nBarrault, Lo ¨ıc, Ond ˇrej Bojar, Marta R. 
Difficulty | English | French
Colloquialisms | Well, they just ain’t a-treatin’ me right | Eh bien, elles me traitent mal (‘Well, they’re treating me badly’)
Paraphrasing | Do not forget your friends, they are always with you heart and soul! | N’oubliez pas vos amis: ils sont toujours près de vous! (‘Don’t forget your friends: they are always near to you’)
Truncation | Neighbor. what have you done? | Voisin ? (‘Neighbour?’)
Free translation | I don’t understand either. | Moi non plus. (‘me neither’)
Figure 3: Examples of four challenges for MT of OpenSubtitles: (i) colloquialisms, (ii) paraphrasing, (iii) subtitle truncation (can be due to space constraints), and (iv) free translations that fulfill the same discursive role.",
"main_paper_content": null
}
|
{
"decision": "Unknown",
"reviews": []
}
| 0 | 0 |
[] |
[] |
{
"id": "s8eWXGOTsdO",
"year": null,
"venue": "EAMT 2012",
"pdf_link": "https://aclanthology.org/2012.eamt-1.43.pdf",
"forum_link": "https://openreview.net/forum?id=s8eWXGOTsdO",
"arxiv_id": null,
"doi": null
}
|
{
"title": "Mixture-Modeling with Unsupervised Clusters for Domain Adaptation in Statistical Machine Translation",
"authors": [
"Rico Sennrich"
],
"abstract": "Rico Sennrich. Proceedings of the 16th Annual conference of the European Association for Machine Translation. 2012.",
"keywords": [],
"raw_extracted_content": "Mixture-Modeling with Unsupervised Clusters for Domain Adaptation in\nStatistical Machine Translation\nRico Sennrich\nInstitute of Computational Linguistics\nUniversity of Zurich\nBinzm ¨uhlestr. 14\nCH-8050 Z ¨urich\[email protected]\nAbstract\nIn Statistical Machine Translation, in-\ndomain and out-of-domain training data\nare not always clearly delineated. This\npaper investigates how we can still use\nmixture-modeling techniques for domain\nadaptation in such cases. We apply un-\nsupervised clustering methods to split the\noriginal training set, and then use mixture-\nmodeling techniques to build a model\nadapted to a given target domain. We show\nthat this approach improves performance\nover an unadapted baseline, and several al-\nternative domain adaptation methods.\n1 Introduction\nAs the availability of parallel data for Statistical\nMachine Translation (SMT) increases, new op-\nportunities and challenges for domain adaptation\narise. Some corpora may contain text from a\nvariety of domains, especially if they are built\nfrom heterogeneous resources such as crawled web\npages. Many domain adaptation techniques do not\noperate on a single text, but require multiple mod-\nels which are then mixed.\nWe investigate domain adaptation in a scenario\nwhere we have a known target domain, including\ndevelopment and test data from this domain, but\nwhere there is only a single heterogeneous train-\ning corpus. While this training corpus does con-\ntain in-domain data, we assume that we have no\nsupervised means of extracting it.\nOur basic approach is divided into two steps.\nFirstly, we perform unsupervised clustering on the\nparallel training data to obtain a given number of\nclusters. Secondly, we apply domain adaptation\nc\r2012 European Association for Machine Translation.algorithms to compute a model from these clusters\nthat is adapted to the development set.\n2 Related Work\nThe general idea in domain adaptation is to obtain\nmodels that are specifically optimized for best per-\nformance in one domain, with a potentially nega-\ntive effect on its performance for other domains.\nThe classical domain adaptation scenario consists\nof a (small) in-domain corpus, a (large) out-of-\ndomain corpus, and in-domain development and\ntest sets. Mixture-modeling approaches such as\n(Koehn and Schroeder, 2007; Foster and Kuhn,\n2007; Sennrich, 2012) fall into this category.\nWe will here give an overview of adaptation\ntechniques that assume less prior knowledge about\nthe training set and/or target domains.\nYamamoto and Sumita (2008) operate without\nany predetermined domains, and without assum-\ning that either the training or the test data is ho-\nmogeneous. They cluster the training text into k\nclusters, and use unsupervised domain selection to\ntranslate each test set sentence by a cluster-specific\nmodel.\nFinch and Sumita (2008) distinguish between\ntwo classes of sentences: questions and declara-\ntives (i.e. non-questions). They split the training\ncorpus automatically according to a simple rule\n(does the target sentence end with ’?’), and for\ndecoding use a linear interpolation of the class-\nspecific and a general model, the interpolation\nweight depending on the class membership of each\nsentence.\nBanerjee et al. 
(2010) focus on a scenario in which the domains of the training texts are known, whereas the test sets are a mix of two domains. They use a sentence-level classifier to translate each sentence with a domain-specific SMT system.
Eck, Vogel and Waibel (2004) use information retrieval techniques to find the sentences in a parallel corpus that are closest to the translation input, then use the corresponding target sentences to build a language model. Their approach is similar to that of Yamamoto and Sumita (2008) in that both try to adapt models in a fully unsupervised manner. The main difference is that Yamamoto and Sumita (2008) compute the clusters (and the cluster-specific models) offline, and only do cluster prediction online, whereas in (Eck et al., 2004), the whole adaptation process, i.e. selecting a subset of training data, training a model, and translating with the specific model, happens online.
We will focus on a scenario which is slightly different from these prior studies in that we want to build a translation system for a specific target domain, but with in-domain and out-of-domain training data being mixed in a heterogeneous training set. For such a scenario, none of the outlined approaches are a perfect fit. Mixture-modeling techniques presume the existence of multiple models to mix, a condition which is not met in this scenario. The unsupervised methods, on the other hand, do not use sophisticated adaptation techniques, mostly because the target domain is unknown. We will test a hybrid approach that combines unsupervised methods to cluster the training text with known mixture-modeling techniques to obtain a model adapted to the target domain.
3 Clustering
We compare two unsupervised sentence clustering algorithms in order to split the training text into clusters that can later be recombined. Both algorithms are instances of k-means clustering, but with different distance functions. Yamamoto and Sumita (2008) use language models as centroids, trained on all sentences in a cluster, and the language model entropy as the distance between each sentence and cluster. Andrés-Ferrer et al. (2010) use word-sequence-kernels (WSK) (Cancedda et al., 2003) as distance metric between two documents. We initially followed their proposed normalization of the WSK, reproduced in equation 1:

K_m(x, x') = \sum_{n=1}^{m} \sum_{u \in \Sigma^n} \frac{f_x(u)}{\sqrt{\sum_{v \in \Sigma^n} f_x(v)}} \cdot \frac{f_{x'}(u)}{\sqrt{\sum_{v \in \Sigma^n} f_{x'}(v)}}    (1)

f_x(u) is the frequency of the n-gram u in document/sentence x.[1] Unfortunately, the normalization in the proposed equation is flawed and causes a bias towards assigning sentences to the largest cluster. The WSK should be normalized so that the string "a" is (at least) as similar to itself as to "a a" (if we only consider unigrams). However, \frac{1}{\sqrt{1}} \cdot \frac{1}{\sqrt{1}} < \frac{1}{\sqrt{1}} \cdot \frac{2}{\sqrt{2}}. We use an alternative normalization, shown in equation 2, that has no such numerical bias:

K_m(x, x') = \sum_{n=1}^{m} \sum_{u \in \Sigma^n} \sqrt{\frac{f_x(u)}{\sum_{v \in \Sigma^n} f_x(v)} \cdot \frac{f_{x'}(u)}{\sum_{v \in \Sigma^n} f_{x'}(v)}}    (2)

[1] For the full motivation of the equation, see (Andrés-Ferrer et al., 2010). In short, for all n-grams up to a maximum length m, the kernel sums over the product of their normalized frequency in two given documents.

Both algorithms are initialized with randomly generated clusters, and both can be expanded to clustering sentence pairs by taking the sum of the distance on both language sides. In terms of n-gram length, we follow the respective authors' practice, using unigram models for the implementation of (Yamamoto and Sumita, 2008), and m = 2 for equation 2.
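To make the corrected normalization concrete, here is a minimal Python sketch of the kernel in equation 2, assuming whitespace-tokenized input; the function names are illustrative and not part of the original implementation.

from collections import Counter

def ngram_counts(tokens, n):
    # frequency f_x(u) of every n-gram u of length n in the token sequence x
    return Counter(tuple(tokens[i:i + n]) for i in range(len(tokens) - n + 1))

def wsk_similarity(x_tokens, y_tokens, m=2):
    # normalized word sequence kernel of equation 2; larger values mean more similar
    score = 0.0
    for n in range(1, m + 1):
        fx, fy = ngram_counts(x_tokens, n), ngram_counts(y_tokens, n)
        tx, ty = sum(fx.values()), sum(fy.values())
        if tx == 0 or ty == 0:
            continue
        # n-grams that occur in only one of the two sentences contribute 0
        for u in fx.keys() & fy.keys():
            score += ((fx[u] / tx) * (fy[u] / ty)) ** 0.5
    return score

With this normalization, wsk_similarity("a".split(), "a".split()) and wsk_similarity("a".split(), "a a".split()) both evaluate to 1.0, so the bias towards longer documents and larger clusters disappears.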
Note that the clustering algorithm has the objective of minimizing LM entropy, whereas the WSK is a similarity function and thus is maximized.
3.1 Exponential Smoothing
One drawback of sentence-level clustering is that cluster assignment is made on the basis of very little information, i.e. the sentence itself. If we assume that the domain of a text does not rapidly change between sentences, it is sensible to consider a larger context for clustering.
We achieve this by using an exponentially decaying score for cluster assignment.[2] In the baseline without exponential decay (equation 3), we assign the sentence pair i to the cluster c that minimizes the distance (i.e. the LM entropy or the negative WSK score):

\hat{c}_i = \arg\min_c d(i, c)    (3)

In equation 4, the distance of sentence pair i to cluster c is smoothed by the weighted average of the distance of each sentence j to c, with the weight exponentially decaying as the textual distance between i and j increases, and with the decay factor \lambda determining how fast the weight decays:

\hat{c}_i = \arg\min_c \sum_{j=1}^{n} d(j, c) \cdot \lambda^{|i-j|}    (4)

[2] The most similar use of an exponential decay that we are aware of is by Zhong (2005), who proposes exponential decay to reduce the contribution of history data in a text stream clustering algorithm. However, the exponential decay affects a different component, namely the centroids, and does not serve the same purpose as our proposal.

Note that the equation is two-sided, meaning that both previous and subsequent sentences are considered for the assignment.
Algorithmically, two-sided exponential smoothing only slows down cluster assignment by a constant factor; we do not need to sum over all sentences for each assignment, but can store the weighted distance of all previous sentences in a single variable. Algorithm 1 shows the smoothed assignment step for n sentences and k clusters.

Algorithm 1 Cluster assignment with decay
Ensure: 0 ≤ decay ≤ 1
1: let d(x, y) be a distance function for a sentence x and a centroid y
2: let dmin[n], dcurr[n], ĉ[n] be arrays
3: set all elements of dmin to ∞
4: for c = 0 to k do
5:   cache ← 0
6:   set all elements of dcurr to 0
7:   for i = 0 to n do
8:     cache ← decay · cache
9:     cache ← cache + d(i, c)
10:    dcurr[i] ← cache
11:  end for
12:  cache ← 0
13:  for i = n to 0 do
14:    cache ← decay · cache
15:    dcurr[i] ← dcurr[i] + cache
16:    if dcurr[i] < dmin[i] then
17:      dmin[i] ← dcurr[i]
18:      ĉ[i] ← c
19:    end if
20:    cache ← cache + d(i, c)
21:  end for
22: end for

Note that the decay factor \lambda determines the extent of smoothing, i.e. how strongly context is taken into account for the assignment of each sentence. A decay factor of 0 corresponds to the unsmoothed sentence-level score (with 0^0 = 1). With a decay factor of 1, the algorithm returns the same distance for all sentence pairs. We use a decay factor of 0.5 throughout the experiments. This is a relatively fast decay: one third of the score is determined by the sentence itself; two thirds by the sentence and its two neighboring sentences. What decay factor is optimal may depend on the properties of the text, i.e.
how quickly documents and/or\ndomains change, so we will not evaluate different\ndecay factors in this paper.\nWe could extend the algorithm to reset the cache\nto0whenever we cross a known document bound-\nary, and thus implement document-level scoring\n(with a decay factor of 1), or a hybrid (with a decay\nfactor between 0 and 1). We did not do this since\nwe want to demonstrate that the approach does not\nrequire document boundaries in the training text.\nAnother point to note is that we slightly mod-\nify the LM entropy method by normalizing entropy\nby sentence length, which ensures that longer sen-\ntences have no inflated effect on their neighbors’\ncluster assignment.\n4 Model Combination\nHaving split the training text into clusters, there are\nvarious possibilities to exploit them. Yamamoto\nand Sumita (2008) use each cluster to train a\ncluster-specific model, which they interpolate with\na general model, using a constant interpolation co-\nefficient. Translating a text then consists of pre-\ndicting the cluster of each sentence, then translat-\ning it with this cluster-specific model. If we make\nthe assumption that the test set is relatively ho-\nmogeneous, with all sentences belonging to the\nsame domain, we can perform a more sophisti-\ncated adaptation to this target domain.\nOne potential shortcoming of the algorithm in\n(Yamamoto and Sumita, 2008) is that their do-\nmain prediction has little information to base its\nprediction on, and thus may not choose the best\ncluster. Additionally to predicting the domain for\neach sentence, we will test a document-level do-\nmain prediction, i.e. selecting the cluster with the\nshortest distance to the whole test set. Even this\nmight be suboptimal if the number of clusters is\nhigh. In this case, we can expect relevant data to\n187\nbe distributed over multiple clusters, in which case\nit might be beneficial to not be restricted to one\ncluster-specific model.\nA second shortcoming is the lack of model op-\ntimization. Yamamoto and Sumita (2008) set the\ninterpolation weights between the cluster-specific\nmodel and the general one manually after some\npreliminary experiments, and re-used the model\nparameters from the general model for all exper-\niments. Specifically, they use linear interpola-\ntion with interpolation coefficients of 0.7 and 0.3\nfor the cluster-specific and the general translation\nmodel, respectively, and a log-linear combination\nfor language models, with a slightly lower weight\nfor the domain-specific (0.4) than the general (0.6)\nmodel.\nBoth the inability to consider multiple rele-\nvant datasets and the need to manually set model\nweights can be solved by using automatic mixture-\nmodel methods. We will experiment with au-\ntomatic adaptation methods that use perplexity\nminimization to produce domain-specific models\ngiven a development set from the domain. The\nfirst step is again to train cluster-specific transla-\ntion and language models, which we then recom-\nbine into a single adapted model. We use a lin-\near interpolation with the interpolation coefficients\nset through perplexity minimization for language\nmodel and translation model adaptation, which has\nbeen demonstrated to be a successful technique\nin SMT (Foster and Kuhn, 2007). 
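One standard way to set such interpolation coefficients automatically is expectation maximization over per-token probabilities on the held-out development set, which maximizes development-set likelihood and therefore minimizes perplexity. The following Python sketch illustrates the general technique under the assumption that every component model has already scored each development token; it is not necessarily the exact implementation used here.

def em_interpolation_weights(probs, iterations=50):
    # probs[i][j]: probability that component model j assigns to dev token i
    # assumes smoothed component models, so every row has a non-zero mixture probability
    k = len(probs[0])
    weights = [1.0 / k] * k
    for _ in range(iterations):
        expected = [0.0] * k
        for row in probs:
            mix = sum(w * p for w, p in zip(weights, row))
            for j in range(k):
                # posterior responsibility of component j for this token
                expected[j] += weights[j] * row[j] / mix
        total = sum(expected)
        weights = [e / total for e in expected]
    return weights

The resulting weight vector is then used to interpolate the component models linearly.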
For transla-\ntion model interpolation, we use the approach de-\nscribed in (Sennrich, 2012), optimizing each trans-\nlation model feature separately on a parallel devel-\nopment set.\nThe optimization itself is convex, which means\nthat we can easily apply it to a high number of clus-\nters. The biggest risk is that the weight vector will\nbe overfitted if we optimize it for a high number of\nsmall models. Finally, we set new log-linear SMT\nweights through MERT (Och and Ney, 2003) for\neach experiment.\n5 Experiments\nThe main questions that we want to answer in our\nexperiments are:\n1. How well does unsupervised clustering split\na heterogeneous training text according to its\ndomains? How are the results affected by dif-\nferent distance functions and smoothing?Data set sentences words (fr)\nAlpine (in-domain) 200k 4 400k\nEuroparl 1 500k 44 000k\nJRC Acquis 1 100k 24 000k\nOpenSubtitles v2 2 300k 18 000k\nTotal train 5 100k 90 400k\nDev (perplexity) 1424 33 000\nDev (MERT) 1000 20 000\nTest 991 21 000\nTable 1: Parallel data sets for German – French\ntranslation task.\n2. How much translation quality do we lose or\ngain from mixture-modeling based on un-\nsupervised clusters, compared to a scenario\nwhere we start with multiple domain-specific\ncorpora.\n5.1 Data and Methods\nWe perform the experiments on a German–French\ndata set. The parallel data sets used are listed\nin table 1. The in-domain corpus is a collection\nof Alpine Club publications (V olk et al., 2010).\nAs parallel out-of-domain data sets, we use Eu-\nroparl, a collection of parliamentary proceedings\n(Koehn, 2005), JRC-Acquis, a collection of leg-\nislative texts (Steinberger et al., 2006), and Open-\nSubtitles v2, a parallel corpus extracted from film\nsubtitles3(Tiedemann, 2009).\nFor language model training, we used the same\n90 million word corpus, plus, on the target side, the\nnews corpus from WMT 2011 (appr. 610 million\ntokens), and appr. 8 million tokens monolingual\nin-domain data. We used the following language\nmodel settings: for clustering, unigram language\nmodels. For domain selection, 3-gram language\nmodels with Good-Turing smoothing. For trans-\nlation, 5-gram language models with interpolated\nKneser-Ney smoothing. We clustered additional\ntarget language data with the method described in\n(Yamamoto and Sumita, 2008), i.e. one cluster as-\nsignment step, starting from the bilingual clusters,\nand not assigning any sentences which are closest\nto the general LM.\nFor the clustering experiments, these data sets\nare concatenated to simulate a heterogeneous train-\ning set. The relative amount of in-domain data in\nthe training sets is 2% (monolingual) and 4% (par-\nallel). Note that this makes success of our method\n3http://www.opensubtitles.org\n188\nmore likely than in scenarios where there is no in-\ndomain training data in the training set. We do\nnot claim that any heterogeneous training text is\nequally suited for domain adaptation.\nIn (Andr ´es-Ferrer et al., 2010), clustering qual-\nity is measured intrinsically, i.e. by calculating\nthe intra-cluster language model perplexity. In\nour evaluation, we use an extrinsic evaluation that\ncompares the resulting clusters to the original four\nparallel datasets. For this evaluation, we assume\nthat clustering is felicitous if it clusters sentences\nfrom the same original data set together. 
We measure this using entropy (equation 5), with N being the total number of sentence pairs and orig(i) being the corpus to which sentence i originally belonged. p_c(orig(i)) is the probability that a sentence in cluster c is originally from corpus orig(i), estimated through relative frequency:

H(X) = -\sum_{c=0}^{k} \sum_{i \in c} \frac{1}{N} \log_2 p_c(orig(i))    (5)

If a cluster only contains sentences from one corpus, its entropy is 0. The baseline is a uniform distribution, which corresponds to an entropy of 1.698 (with the data sets from table 1).
The second evaluation is a translation task. In terms of tools and techniques used, we mostly adhere to the work flow described for the WMT 2011 baseline system.[4] The main tools are Moses (Koehn et al., 2007), SRILM (Stolcke, 2002), and GIZA++ (Och and Ney, 2003), with settings as described in the WMT 2011 guide. One exception is that we additionally filter the phrase table according to statistical significance tests, as described by (Johnson et al., 2007). We use two different development sets, one for domain adaptation (through perplexity optimization) and one for MERT, in order to rule out that MERT gives too much weight to the language and translation model which are optimized on the same dataset.

[4] http://www.statmt.org/wmt11/baseline.html

We measure translation performance through BLEU (Papineni et al., 2002) and METEOR 1.3 (Denkowski and Lavie, 2011). All results are lowercased and tokenized, measured with five independent runs of MERT (Och and Ney, 2003). We perform significance testing with MultEval (Clark et al., 2011), which uses approximate randomization to account for optimizer instability. Note that there are other causes of instability unaccounted for, e.g. the randomness of clustering. Word alignment has been kept constant across all experiments.

distance | k | entropy (mean) | entropy (stdev) | iterations (avg)
no smoothing
WSK | 10 | 0.727 | 0.022 | 21.4
LM | 10 | 0.439 | 0.034 | 20.2
LM | 100 | 0.344 | 0.008 | 38.8
exponential smoothing
WSK | 10 | 0.263 | 0.048 | 13.8
LM | 10 | 0.112 | 0.016 | 10.4
LM | 100 | 0.064 | 0.013 | 9.0
Table 2: Entropy comparison between clustering with different distance functions (with or without smoothing), and different numbers of clusters (k). Mean, standard deviation, and average number of iterations out of 5 runs are reported. WSK: word sequence kernels; LM: language model entropy.

5.2 Results
In all experiments, we perform k-means clustering with k = 10 and k = 100. A higher number of clusters typically increases the homogeneity of the resulting clusters, and may boost performance by allowing us to give high weights to very specific subdomains of the training set. On the downside, clusters will be smaller on average, which exacerbates data sparseness problems. In the trivial case, having one sentence per cluster results in an entropy of 0, but this granularity would be unsuitable for the domain adaptation methods that we evaluate because of data sparseness.
Table 2 shows entropy of both sentence-level clustering and exponential smoothing with word sequence kernels and LM entropy as distance functions. All methods achieve a strong reduction of entropy over the uniform baseline (1.698), but LM entropy as a distance measure outperforms word sequence kernels, with a mean entropy of 0.439 compared to 0.727 for 10 clusters. In all experiments, exponential smoothing reduces the entropy of the resulting clusters even further.
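For reference, a small sketch of the evaluation measure in equation 5, given one cluster label and one corpus-of-origin label per sentence pair (the variable and function names are illustrative):

import math
from collections import Counter

def clustering_entropy(cluster_of, origin_of):
    # cluster_of[i] and origin_of[i]: cluster id and original corpus of sentence pair i
    n = len(cluster_of)
    pair_counts = Counter(zip(cluster_of, origin_of))
    cluster_sizes = Counter(cluster_of)
    h = 0.0
    for c, o in zip(cluster_of, origin_of):
        p = pair_counts[(c, o)] / cluster_sizes[c]  # p_c(orig(i)) by relative frequency
        h -= math.log2(p) / n
    return h

A clustering in which every cluster contains sentences from a single corpus scores 0; the uniform baseline mentioned above corresponds to 1.698 on this data.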
With LM en-\ntropy as distance function, it is reduced from 0.439\nto 0.112 for k= 10 , and from 0.344 to 0.064\nwithk= 100. A second advantage of smooth-\ning is that the algorithm converges faster, and re-\nduces the number of iterations by a factor of 2–\n4. Thus, smoothing seems a good choice because\n189\nsystem BLEU METEOR\ngeneral 18.5 37.3\nadapted TM 18.8 37.8\nadapted LM 18.8 37.8\nadapted TM & LM 18.6 37.9\nTable 3: Baseline SMT results DE–FR. Concate-\nnation of all data and using domain adaptation with\noriginal four datasets.\nthe smoothed algorithm is both faster and better at\nclustering sentences from the same original dataset\ninto the same cluster. Whether this leads to bet-\nter SMT performance is tested in the evaluation of\ntranslation performance.\nWe can compare translation performance to four\nbaselines, shown in table 3. The general system\n(without domain adaptation) performs worst, with\na B LEU score of 18.5 and a METEOR score of\n37.3. Both TM and LM adaptation significantly\nincrease scores by 0.3 B LEU and 0.5 METEOR\npoints. The system that combines TM and LM\nadaptation is not significantly different from the\nsystems with only one model adapted in terms of\nBLEU, but performs best in terms of METEOR\n(0.6 points better than the general model).\nFor the experimental systems, we limit our-\nselves to LM entropy as distance function, and\nvary a number of parameters. k, the number of\nclusters, is 10 in table 4, and 100 in table 5.\nFor bothk, we test clustering without smooth-\ning (sentence-level clustering) and with exponen-\ntial smoothing and a decay factor of 0.5. For\neach variation of these parameters, we pick a sin-\ngle clustering run at random. For model combina-\ntion, we contrast the approach by Yamamoto and\nSumita (2008) (i.e. domain prediction with a fixed\ninterpolation), and the mixture models described in\nsection 4, i.e. perplexity-minimization to find the\noptimal weights for the linear interpolation of the\nlanguage and translation model (Sennrich, 2012).\nIn sections 3.1 and 4, we have identified possi-\nble shortcomings of the original approach by (Ya-\nmamoto and Sumita, 2008), and will now reiterate\nand discuss them.\nFirstly, we have hypothesized that unsmoothed\nsentence-level clustering may fail to cluster in-\ndomain data together, and have proposed expo-\nnential smoothing. The entropy results in table\n2 support this hypothesis; if we look at transla-\ntion results with document-level domain predic-tion, the performance differences are small. A look\nat the clusters that are selected in domain predic-\ntion shows that smoothing improved homogene-\nity (180 000 in-domain / 20 000 out-of-domain\nsentence pairs) over an unsmoothed sentence-level\nclustering (146 000 in-domain / 90 000 out-of-\ndomain), but both approaches cluster the majority\nof the 200 000 in-domain sentence pairs together\nand outperform the unadapted baseline.\nSecondly, we suspected that domain predic-\ntion on a sentence-level would suffer from sim-\nilar data-sparseness problems, and not pick the\noptimal cluster for translation. With 10 clusters,\nthere is little difference between sentence-level and\ndocument-level domain prediction, both in terms\nof performance and the cluster that is predicted\nin domain prediction. With (smoothed or un-\nsmoothed) sentence-level prediction, 80-90% of\ntest set sentences are predicted to belong to the\nsame cluster. With 100 clusters, the opposite of\nour hypothesis is true. 
Document-level domain\nprediction performs worse than (smoothed or un-\nsmoothed) sentence-level domain prediction, and\nno better than the unadapted baseline. For the in-\nterpretation of this result, we must also consider\nthe mixture-modeling results.\nAdapting models through perplexity optimiza-\ntion performs better than or equally well as the\nmethods with domain prediction and a fixed inter-\npolation between the domain-specific and the gen-\neral model. This is true for both domain predic-\ntion methods, and both smoothed and unsmoothed\nclustering. The best result is obtained with k= 10\nand smoothed clustering, with a B LEU score of\n19.2 and a METEOR score of 38.3, which is 0.7\nBLEU points and 1 METEOR points above the\nunadapted baseline. The system also beats the\nadapted baseline, which uses the same model com-\nbination algorithm on the original four datasets, by\n0.6 B LEU points and 0.4 METEOR points, and\nthe approach by (Yamamoto and Sumita, 2008)\n(sentence-level clustering and domain prediction)\nby 0.3 B LEU points and 0.4 METEOR points.\nWith 100 clusters, perplexity minimization\nyields no further performance gains, but remains\nsignificantly better than the systems with domain\nprediction and the baseline systems. As to the\nreason why document-level domain prediction per-\nforms poorly with 100 clusters, the main problem\nis that relevant data is spread out over multiple\nclusters, and that only a small amount of relevant\n190\nclustering domain prediction model combinationadapted TM adapted TM & LM\nBLEU METEOR BLEU METEOR\nsentence-levelsentence-level fixed weights 18.7 37.6 18.9 37.9\ndocument-level fixed weights 18.8 37.7 18.9 37.9\n- perplexity 18.8 38.0 18.9 38.2\nsmoothedsmoothed fixed weights 18.9 37.8 19.0 38.0\ndocument-level fixed weights 18.9 37.8 19.0 38.1\n- perplexity 19.1 38.3 19.2 38.3\nTable 4: SMT results DE–FR based on clustered training data (k = 10).\nclustering domain prediction model combinationadapted TM adapted TM & LM\nBLEU METEOR BLEU METEOR\nsentence-levelsentence-level fixed weights 18.8 37.7 18.6 37.6\ndocument-level fixed weights 18.5 37.5 18.5 37.5\n- perplexity 19.0 38.0 19.0 38.3\nsmoothedsmoothed fixed weights 18.6 37.5 18.5 37.5\ndocument-level fixed weights 18.6 37.5 18.4 37.4\n- perplexity 19.1 38.1 19.1 38.2\nTable 5: SMT results DE–FR based on clustered training data (k = 100).\ndata can be considered with document-level do-\nmain prediction. Sentence-level domain prediction\navoids this problem by choosing different cluster-\nspecific models to translate different sentences, the\nperplexity mixture-models by being able to give\nhigh weights to multiple cluster-specific models.\n6 Conclusion\nWe demonstrate that it is possible to apply\nmixture-modeling techniques to models that are\nobtained through unsupervised clustering of a het-\nerogeneous training text. We obtained a mod-\nest performance boost from applying mixture-\nmodeling on the clusters rather than the original\nparallel corpora. The main advantage of the clus-\ntering step, however, is that it reduces the require-\nments for mixture-modeling, eliminating the need\nfor a homogeneous, in-domain training corpus,\nand only requiring a development set from the tar-\nget domain. It is thus more general and could be\napplied to monolithic, heterogeneous data collec-\ntions.\nCompared to the fully unsupervised method by\n(Yamamoto and Sumita, 2008), we observed small\nperformance improvements of up to 0.3 B LEU\npoints. 
In a closed-domain setting, the approach\nalso has the advantage of moving the domain adap-\ntation cost into the offline phase, and not requir-\ning a domain prediction phase and multiple mod-\nels during decoding. To support multiple target do-mains, the approach could be combined with that\nof (Banerjee et al., 2010), who discuss the prob-\nlem of translating texts that contain sentences from\nmultiple (known) domains.\nWe also propose exponential smoothing during\ncluster assignment to better capture slow-changing\ntextual properties such as their domain member-\nship, and to combat data sparseness issues when\nhaving to do an assignment decision based on short\nsentences. While the effects on our translation ex-\nperiments were small, the increased homogeneity\nof the resulting clusters and the faster speed of con-\nvergence indicate that smoothing is a beneficial en-\nhancement to sentence-level k-means clustering.\nAcknowledgments\nThis research was funded by the Swiss National\nScience Foundation, grant 105215 126999.\nReferences\nAndr ´es-Ferrer, Jes ´us, Germ ´an Sanchis-Trilles, and\nFrancisco Casacuberta. 2010. Similarity word-\nsequence kernels for sentence clustering. In Pro-\nceedings of the 2010 joint IAPR international con-\nference on Structural, syntactic, and statistical pat-\ntern recognition, pages 610–619, Berlin, Heidelberg.\nSpringer-Verlag.\nBanerjee, Pratyush, Jinhua Du, Baoli Li, Sudip Kumar\nNaskar, Andy Way, and Josef Van Genabith. 2010.\n191\nCombining multi-domain statistical machine transla-\ntion models using automatic classifiers. In 9th Con-\nference of the Association for Machine Translation\nin the Americas (AMTA 2010).\nCancedda, Nicola, Eric Gaussier, Cyril Goutte, and\nJean Michel Renders. 2003. Word sequence kernels.\nJ. Mach. Learn. Res., 3:1059–1082, March.\nClark, Jonathan H., Chris Dyer, Alon Lavie, and\nNoah A. Smith. 2011. Better hypothesis testing for\nstatistical machine translation: Controlling for op-\ntimizer instability. In Proceedings of the 49th An-\nnual Meeting of the Association for Computational\nLinguistics: Human Language Technologies, pages\n176–181, Portland, Oregon, USA, June. Association\nfor Computational Linguistics.\nDenkowski, Michael and Alon Lavie. 2011. Meteor\n1.3: Automatic Metric for Reliable Optimization and\nEvaluation of Machine Translation Systems. In Pro-\nceedings of the EMNLP 2011 Workshop on Statisti-\ncal Machine Translation.\nEck, Matthias, Stephan V ogel, and Alex Waibel. 2004.\nLanguage model adaptation for statistical machine\ntranslation based on information retrieval. In 4th In-\nternational Conference on Languages Resources and\nEvaluation (LREC 2004).\nFinch, Andrew and Eiichiro Sumita. 2008. Dynamic\nmodel interpolation for statistical machine transla-\ntion. In Proceedings of the Third Workshop on\nStatistical Machine Translation, StatMT ’08, pages\n208–215, Stroudsburg, PA, USA. Association for\nComputational Linguistics.\nFoster, George and Roland Kuhn. 2007. Mixture-\nmodel adaptation for SMT. In Proceedings of the\nSecond Workshop on Statistical Machine Transla-\ntion, StatMT ’07, pages 128–135, Stroudsburg, PA,\nUSA. Association for Computational Linguistics.\nJohnson, Howard, Joel Martin, George Foster, and\nRoland Kuhn. 2007. Improving translation qual-\nity by discarding most of the phrasetable. 
In Pro-\nceedings of the 2007 Joint Conference on Empirical\nMethods in Natural Language Processing and Com-\nputational Natural Language Learning (EMNLP-\nCoNLL), pages 967–975, Prague, Czech Republic,\nJune. Association for Computational Linguistics.\nKoehn, Philipp and Josh Schroeder. 2007. Experi-\nments in domain adaptation for statistical machine\ntranslation. In Proceedings of the Second Work-\nshop on Statistical Machine Translation, StatMT\n’07, pages 224–227, Stroudsburg, PA, USA. Asso-\nciation for Computational Linguistics.\nKoehn, Philipp, Hieu Hoang, Alexandra Birch, Chris\nCallison-Burch, Marcello Federico, Nicola Bertoldi,\nBrooke Cowan, Wade Shen, Christine Moran,\nRichard Zens, Chris Dyer, Ond ˇrej Bojar, Alexandra\nConstantin, and Evan Herbst. 2007. Moses: Open\nSource Toolkit for Statistical Machine Translation.InACL 2007, Proceedings of the 45th Annual Meet-\ning of the Association for Computational Linguis-\ntics Companion Volume Proceedings of the Demo\nand Poster Sessions, pages 177–180, Prague, Czech\nRepublic, June. Association for Computational Lin-\nguistics.\nKoehn, Philipp. 2005. Europarl: A parallel corpus for\nstatistical machine translation. In Machine Transla-\ntion Summit X, pages 79–86, Phuket, Thailand.\nOch, Franz Josef and Hermann Ney. 2003. A system-\natic comparison of various statistical alignment mod-\nels.Computational Linguistics, 29(1):19–51.\nPapineni, Kishore, Salim Roukos, Todd Ward, and Wei-\nJing Zhu. 2002. Bleu: A method for automatic eval-\nuation of machine translation. In ACL ’02: Proceed-\nings of the 40th Annual Meeting on Association for\nComputational Linguistics, pages 311–318, Morris-\ntown, NJ, USA. Association for Computational Lin-\nguistics.\nSennrich, Rico. 2012. Perplexity minimization for\ntranslation model domain adaptation in statistical\nmachine translation. In Proceedings of the 13th Con-\nference of the European Chapter of the Association\nfor Computational Linguistics, pages 539–549, Avi-\ngnon, France. Association for Computational Lin-\nguistics.\nSteinberger, Ralf, Bruno Pouliquen, Anna Widiger,\nCamelia Ignat, Tomaz Erjavec, Dan Tufis, and\nDaniel Varga. 2006. The JRC-Acquis: A multilin-\ngual aligned parallel corpus with 20+ languages. In\nProceedings of the 5th International Conference on\nLanguage Resources and Evaluation (LREC’2006).\nStolcke, A. 2002. SRILM – An Extensible Language\nModeling Toolkit. In Seventh International Confer-\nence on Spoken Language Processing, pages 901–\n904, Denver, CO, USA.\nTiedemann, J ¨org. 2009. News from OPUS - a collec-\ntion of multilingual parallel corpora with tools and\ninterfaces. In Nicolov, N., K. Bontcheva, G. An-\ngelova, and R. Mitkov, editors, Recent Advances\nin Natural Language Processing, volume V , pages\n237–248. John Benjamins, Amsterdam/Philadelphia,\nBorovets, Bulgaria.\nV olk, Martin, Noah Bubenhofer, Adrian Althaus, Maya\nBangerter, Lenz Furrer, and Beni Ruef. 2010. Chal-\nlenges in building a multilingual alpine heritage cor-\npus. In Proceedings of the Seventh conference on\nInternational Language Resources and Evaluation\n(LREC’10), Valletta, Malta. European Language Re-\nsources Association (ELRA).\nYamamoto, Hirofumi and Eiichiro Sumita. 2008.\nBilingual cluster based models for statistical ma-\nchine translation. IEICE - Trans. Inf. Syst., E91-\nD:588–597, March.\nZhong, S. 2005. Efficient streaming text clustering.\nNeural Networks, 18(5-6):790–798, July.\n192",
"main_paper_content": null
}
|
{
"decision": "Unknown",
"reviews": []
}
| 0 | 0 |
[] |
[] |
{
"id": "4_aen4YAi0Fz",
"year": null,
"venue": "EAMT 2014",
"pdf_link": "https://aclanthology.org/2014.eamt-1.37.pdf",
"forum_link": "https://openreview.net/forum?id=4_aen4YAi0Fz",
"arxiv_id": null,
"doi": null
}
|
{
"title": "Handling technical OOVs in SMT",
"authors": [
"Mark Fishel",
"Rico Sennrich"
],
"abstract": "Mark Fishel, Rico Sennrich. Proceedings of the 17th Annual conference of the European Association for Machine Translation. 2014.",
"keywords": [],
"raw_extracted_content": "Handling Technical OOVs in SMT\nMark Fishel andRico Sennrich\nInstitute of Computational Linguistics\nUniversity of Zurich\nBinzm ¨uhlestr. 14\nCH-8050 Z ¨urich\nffishel,sennrich [email protected]\nAbstract\nWe present a project on machine transla-\ntion of software help desk tickets, a highly\ntechnical text domain. The main source of\ntranslation errors were out-of-vocabulary\ntokens (OOVs), most of which were either\nin-domain German compounds or techni-\ncal token sequences that must be preserved\nverbatim in the output. We describe our ef-\nforts on compound splitting and treatment\nof non-translatable tokens, which lead to a\nsignificant translation quality gain.\n1 Problem Setting\nIn this paper we focus on statistical machine trans-\nlation of a highly technical text domain: software\nhelp desk tickets, or put simply – bug reports.\nThe project described here was a collaboration be-\ntween the University of Zurich and Finnova AG\nand aimed at developing an in-domain translation\nsystem for the company’s bug reports from Ger-\nman into English. Here we present a general de-\nscription of the key project results, the main prob-\nlems we faced and our solutions to them.\nTechnical texts like bug reports present an in-\ncreased challenge for automatic processing. In ad-\ndition to having a highly specific lexicon, there\nis often a large amount of source code snippets,\nform and database field identifiers, URLs and other\n“technical” tokens that have to be preserved in the\noutput without translation – for example:\nGer: siehe auch ecl kd042 decrm basis\nMP-MAR-11, kapitel 9.2.1.1\nEng: see also ecl kd042 decrm basis\nMP-MAR-11, chapter 9.2.1.1\nc\r2014 The authors. This article is licensed under a Creative\nCommons 3.0 licence, no derivative works, attribution, CC-\nBY-ND.While these technical tokens need no transla-\ntion, our baseline system also suffers from a large\nnumber of out-of-vocabulary tokens (OOVs) that\nshould be translated. The concatenative morphol-\nogy of German compounds is a classical prob-\nlem for machine translation, as it leads to an in-\ncreased vocabulary and exacerbates data sparsity\n(Koehn and Knight, 2003). In our case the prob-\nlem is inflated due to the domain-specific com-\npound terms like Tabellenattribute (table attribute)\norNachbuchungen (subsequent postings): many of\nthese are not seen in the smaller in-domain paral-\nlel corpus and they are too specific to be present in\ngeneral-domain corpora.\nTechnical tokens like URLs and alphanumeric\nIDs do not require translation and should be trans-\nferred into the output verbatim. However, since\nthey are also unknown to the translation system,\nthey still present a number of problems. They\nare often broken by tokenization and not restored\nproperly by subsequent de-tokenization. Also,\nsplitting a technical token into several parts might\nresult in the internal order of those parts broken.\nEven tokens that are correctly preserved in their\noriginal form can cause problems: if they are un-\nknown to the language model, the model strongly\nfavours permutations of the output in which OOVs\nare grouped together.\nIn the following section we give a description of\nour project and baseline system. We then turn to\nthe problem of OOVs, and focus on handling the\ntechnical tokens that require no translation in Sec-\ntion 3, and on compound splitting strategies in Sec-\ntion 4. 
Experimental results constitute Section 5.\n2 Translating Help Desk Tickets\nThe aim of our project was to develop an in-\ndomain translation system for translating help desk\n159\nToken Type Regular Expression Examples\nDB and form field IDs [A-Z0-9][A-Z0-9_/-] *[A-Z0-9] BEG DAT BUCH\nnumbers -?[0-9]+([.’][0-9]+)? -124.30, 1’000\nUNIX paths and URLs ([ˆ ():] */){2,}[ˆ ():] * /home/user/readme.txt\ncode with dots, e.g. java [ˆ :.]{2,}(\\.[ˆ :.]{2,})+ java.lang.Exception\nTable 1: Examples of technical tokens and regular expressions for their detection.\ntickets from German to English for use in a post-\nediting work-flow.\nThe company had a set of manual translations\nfrom the target domain, which enabled us to\nuse statistical machine translation (SMT). The in-\ndomain parallel corpus composed of these transla-\ntions consisted of 227 000 parallel sentences (2.8\n/ 3.2 million German/English tokens). Additional\nmonolingual English data for the same domain was\nalso available (141 000 sentences, 1.9 million to-\nkens). As a baseline we used the Moses framework\n(Koehn et al., 2007) with settings identical to the\nbaseline of WMT shared tasks (Bojar et al., 2013).\nTo increase the vocabulary of the system we\nadded some publicly available general-domain and\nout-of-domain parallel corpora: Europarl (Koehn,\n2005), OPUS OpenSubtitles (Tiedemann, 2012)\nand JRC-Acquis (Steinberger et al., 2006). Each\nof these is at least 10 times bigger than our in-\ndomain corpus. To prefer in-domain translations\nin case of ambiguity, we combined all the available\ncorpora via instance weighting using TMCombine\nfrom the Moses framework (Sennrich, 2012).\nDespite the vast amount of general-domain data,\nthe improvement over an in-domain system is rel-\natively small: from 21.9 up to 22.3 BLEU points.1\nThis best confirms that our target domain is highly\nspecific. In fact, general-domain data actually\nhurts translation performance if its size is greater\nand no domain adaptation is performed: a simple\nconcatenation of the same corpora without weight-\ning causes a drop in translation quality to 21.3\nBLEU points.\nA post-editing set-up with our translation sys-\ntem resulted in an average efficiency gain of 30%\nover a pure translation work-flow, raising the num-\nber of ticket translations per hour from 4.5 to 5.9.\nIn the next sections, we describe further attempts\nto improve translation quality by addressing dif-\nferent types of OOVs in the system.\n1Measured on a test set of 1000 randomly held-out sentences,\ndetokenized and re-cased.3 Preserving Technical Tokens\nThe main problems with technical tokens that do\nnot require translation are preserving their orthog-\nraphy and internal order, and placing them at the\ncorrect position in a sentence.\nMost of these tokens are highly regular, which\nmeans that they can be detected with regular ex-\npressions and handled separately. We designed\na set of regular expressions for that purpose and\ntagged them with the type of tokens that they de-\ntect. Table 1 presents some examples of the regular\nexpressions and detected tokens. 8.8% of the to-\nkens are identified as “technical”, with the largest\ngroup being upper-case database and form field\nIDs (4.0% of the tokens) and numbers (1.6% of\nthe tokens).\nWe use XML mark-up to mark all technical to-\nkens (consequently referred to as masking ), and\npass masked tokens unchanged through all compo-\nnents of our translation pipeline, i.e. the tokenizer,\nlowercaser, and the Moses decoder. 
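As a rough illustration of the detection step, the following Python sketch applies regular expressions adapted from Table 1 and returns the spans that must be preserved verbatim; the driver function and labels are illustrative, and in the actual pipeline the detected spans are wrapped in XML mark-up for Moses (or replaced by a placeholder, as described next).

import re

# patterns adapted from Table 1
TECH_PATTERNS = [
    ("ID",     r"[A-Z0-9][A-Z0-9_/-]*[A-Z0-9]"),
    ("NUM",    r"-?[0-9]+([.'][0-9]+)?"),
    ("PATH",   r"([^ ():]*/){2,}[^ ():]*"),
    ("DOTTED", r"[^ :.]{2,}(\.[^ :.]{2,})+"),
]

def find_technical_tokens(sentence):
    # returns (start, end, label) spans; overlapping matches from different
    # patterns would still need to be resolved in a full implementation
    spans = []
    for label, pattern in TECH_PATTERNS:
        for m in re.finditer(pattern, sentence):
            spans.append((m.start(), m.end(), label))
    return sorted(spans)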
While mask-\ning ensures that the masked tokens themselves are\npreserved, their position in the output is deter-\nmined by the decoder. We observed that the n-\ngram language model that we use for decoding is\npoor at modelling the position of unknown words,\npreferring translation hypotheses where unknown\nwords are grouped together, often at the beginning\nor end of the sentence.\nAs a solution to this issue, we change the trans-\nlation pipeline as follows:\n\u000fthe input text is tokenized and the detected\ntechnical tokens are reduced to a single con-\nstant token __TECH__ .\n\u000fthe translation is done on reduced text; the\nphrase table, lexical reordering and the lan-\nguage model are trained on corpora with re-\nduced technical expressions.\n\u000fafter the translation step, the reduced expres-\nsions are restored based on the input text and\nthe word alignment between the input and the\noutput, which is reported by the decoder.\n160\nThis way, the original form of the technical tokens\nis preserved explicitly, and the feature functions of\nthe translation pipeline do not have to deal with\nadditional unknown input (the approach will be re-\nferred to as 1-token reduction ).\nAn alternative variant we explored is to repre-\nsent each token sequence with its type (like JAVA ,\nDATE ,URL, etc.) instead of a single token TECH .\nA higher level of detail could be useful to model\ndifferences in word order between different kinds\nof technical tokens. Also, in case a sentence\ncontains maskable tokens of different types, this\nreduces the number of duplicate tokens between\nwhich the model cannot discriminate (this alterna-\ntive will be referred to as type reduction ).\n4 Compound splitting\nThe German language has a productive compound-\ning system, which increases vocabulary size and\nexacerbates the data sparsity effect. Many com-\npounds are domain-specific and are unlikely to be\nlearned from larger general-domain corpora. Com-\npound splitting, however, has the potential to also\nwork on our in-domain texts.\nWe evaluate two methods of compound split-\nting. Koehn and Knight (2003) describe a purely\ndata-driven approach, in which frequency statis-\ntics are collected from the unsplit corpus, and\nwords are split so that the geometric mean of\nthe word frequencies of its parts is maximized.\nFritzinger and Fraser (2010) describe a hybrid ap-\nproach, which uses the same corpus-driven selec-\ntion method to choose the best split of a word\namong multiple candidates, but instead of consid-\nering all character sequences to be potential parts,\nthey only consider those splits that are validated by\na finite-state morphology tool.\nThe motivation for using the finite-state mor-\nphology is to prevent linguistically implausible\nsplittings such as Testsets!Test ETS . 
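The corpus-driven selection step that both variants share can be sketched as follows: every candidate segmentation is scored by the geometric mean of the corpus frequencies of its parts, and the unsplit word competes as the trivial segmentation. This is a simplified illustration of the method of Koehn and Knight (2003); linking elements such as the German filler "s" and the morphological filtering of candidates are omitted, and the names are illustrative.

def best_split(word, freq, min_len=4):
    # freq: corpus frequency of each lowercased word form
    def geo_mean(parts):
        prod = 1.0
        for p in parts:
            prod *= freq.get(p.lower(), 0)
        return prod ** (1.0 / len(parts))

    best, best_score = [word], geo_mean([word])
    for i in range(min_len, len(word) - min_len + 1):
        # split off a prefix and recursively segment the remainder,
        # so compounds with more than two parts are also covered
        candidate = [word[:i]] + best_split(word[i:], freq, min_len)
        score = geo_mean(candidate)
        if score > best_score:
            best, best_score = candidate, score
    return best

With frequencies estimated from the unsplit training corpus, a domain-specific compound such as Tabellenattribute can be split into Tabellen and attribute when both parts are better attested than the compound itself.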
We use the\nZmorge morphology (Sennrich and Kunz, 2014),\nwhich combines the SMOR grammar (Schmid et\nal., 2004) with a lexicon extracted from Wik-\ntionary.2With this hybrid approach, we only con-\nsider nouns for compound splitting; with the data-\ndriven approach on the other hand we have no con-\ntrol over which word classes are split.\n2http://www.wiktionary.orgSource: erweiterung tabellen TX VL und TXTSVL .\nReference: extension of tables TX VL and TXTSVL .\nMasking: extension of tables TX VL TXTSVL and .\nReduction: extension of tables TX VL and TXTSVL .\nTable 2: An example of the effect of reducing: the\ncorrect order of technical tokens is preserved.\n5 Experiments and Results\nWe evaluated our experiments on a held-out in-\ndomain test set. Translation quality is judged\nusing the MultEval package (Clark et al., 2011)\nand its default automatic metrics (BLEU, TER\nand METEOR); the package implements the met-\nrics and performs statistical significance testing\nto account for optimizer instability. We per-\nform three independent tuning runs, and use 95%\nas the significance threshold. Statistically non-\nsignificant results are shown in italics. Since to-\nkenization differs between experiments, we com-\npare de-tokenized and re-cased hypothesis and ref-\nerence translations.\nAs baseline, we use the weighted combination\nof in-domain and other corpora, described in Sec-\ntion 2. All modifications to tokenization and com-\npound splitting are done on all included training\ncorpora, both in-domain and others.\nMasking the detected technical tokens yields\nlarge quality gains over default tokenization:\nBLEU METEOR TER\nBaseline 22.3 26.1 62.2\nMasking 25.1 27.6 56.8\nThe system with masking better matches the\nlength of the reference translation than the base-\nline (99.5% vs. 103.7%); this can be attributed to\nthe technical tokens being broken in the baseline\nand not fixed by the default de-tokenization.\nThe reduced representation of technical tokens\nbrings a small improvement:\nBLEU METEOR TER\nJust masking 25.1 27.6 56.8\n1-token reduction 25.5 27.7 56.4\nType reduction 25.4 27.7 56.6\nA manual inspection supports the hypothesis\nthat the reduced representation improves word or-\nder for sentences with multiple OOVs; see Table 2\nfor an example. Representing the expressions with\ntheir type, however, does not seem to have any ad-\n161\nditional effect: statistically it is indistinguishable\nfrom 1-token reduction.\nCompound splitting yields gains of 0.8–1 BLEU\nwhen evaluated separately from technical token re-\nduction:\nBLEU METEOR TER\nJust masking 25.1 27.6 56.8\nData-driven split 26.1 28.9 55.1\nHybrid split 25.9 28.6 55.4\nIn contrast to the results reported by Fritzinger\nand Fraser (2010), we observe no gains of the hy-\nbrid method over the purely data-driven method\nby Koehn and Knight (2003). We attribute this\nto the fact that domain-specific anglicisms such\nasEventhandling (event handling) and Debugmel-\ndung (debug message) are unknown to the mor-\nphological analyzer, but are correctly split by the\ndata-driven method.\nFinally, we obtain the best system by combining\nmasking, 1-token reduction and data-driven seg-\nmentation.\nBLEU METEOR TER\nJust masking 25.1 27.6 56.8\n1-token reduction 25.5 27.7 56.4\nData-driven split 26.1 28.9 55.1\nFull combination 26.5 29.0 54.1\nTo conclude, we have shown that the modelling\nof OOVs has a large impact on translation quality\nin technical domains with high OOV rates. 
Over-\nall we observed an improvement of 4.2 BLEU, 2.9\nMETEOR and 8.1 TER points over the baseline.\nIn this paper, we focused on two types of OOV\ntokens: German compounds that can be split into\ntheir components, and technical tokens that need\nno translation. While our modelling of both these\ntypes was successful both individually and in com-\nbination, in the general case the handling of dif-\nferent types of OOVs are not necessarily indepen-\ndent steps. Also, additional strategies for handling\nOOVs may be required in other domains and lan-\nguage pairs, e.g. transliteration of named entities.\nRobustly choosing the right strategy for each OOV\ntoken independently of the domain could be the\ntarget of future research.References\nBojar, Ond ˇrej, Christian Buck, Chris Callison-\nBurch, Christian Federmann, Barry Haddow, Philipp\nKoehn, Christof Monz, Matt Post, Radu Soricut, and\nLucia Specia. 2013. Findings of the 2013 Work-\nshop on Statistical Machine Translation. In Proceed-\nings of the Eighth Workshop on Statistical Machine\nTranslation , pages 1–44, Sofia, Bulgaria.\nClark, Jonathan H, Chris Dyer, Alon Lavie, and Noah A\nSmith. 2011. Better hypothesis testing for statistical\nmachine translation: Controlling for optimizer insta-\nbility. In Proceedings of the 49th ACL , pages 176–\n181, Portland, Oregon, USA.\nFritzinger, Fabienne and Alexander Fraser. 2010. How\nto avoid burning ducks: Combining linguistic analy-\nsis and corpus statistics for German compound pro-\ncessing. In Proceedings of the Joint Fifth Work-\nshop on Statistical Machine Translation and Metric-\nsMATR , pages 224–234, Uppsala, Sweden.\nKoehn, Philipp and Kevin Knight. 2003. Empirical\nmethods for compound splitting. In Proceedings of\nthe 10th EACL , pages 187–193, Budapest, Hungary.\nKoehn, Philipp, Hieu Hoang, Alexandra Birch, Chris\nCallison-Burch, Marcello Federico, Nicola Bertoldi,\nBrooke Cowan, Wade Shen, Christine Moran,\nRichard Zens, Chris Dyer, Ond ˇrej Bojar, Alexan-\ndra Constantin, and Evan Herbst. 2007. Moses:\nopen source toolkit for statistical machine transla-\ntion. In Proceedings of the 45th ACL , pages 177–\n180, Prague, Czech Republic.\nKoehn, Philipp. 2005. Europarl: A parallel corpus for\nstatistical machine translation. In Proceedings of MT\nSummit X , volume 5, pages 79–86.\nSchmid, Helmut, Arne Fitschen, and Ulrich Heid.\n2004. A German Computational Morphology Cov-\nering Derivation, Composition, and Inflection. In\nProceedings of the 4th LREC , pages 1263–1266, Lis-\nbon, Portugal.\nSennrich, Rico and Beat Kunz. 2014. Zmorge: A\nGerman morphological lexicon extracted from Wik-\ntionary. In Proceedings of the 9th LREC , (in print),\nReykjavik, Iceland.\nSennrich, Rico. 2012. Perplexity minimization for\ntranslation model domain adaptation in statistical\nmachine translation. In Proceedings of the 13th\nEACL , pages 539–549, Avignon, France.\nSteinberger, Ralf, Bruno Pouliquen, Anna Widiger,\nCamelia Ignat, Tomaz Erjavec, Dan Tufis, and\nD´aniel Varga. 2006. The JRC-Acquis: A multi-\nlingual aligned parallel corpus with 20+ languages.\nInProceedings of the 5th LREC , pages 2142–2147,\nGenoa, Italy.\nTiedemann, J ¨org. 2012. Parallel data, tools and inter-\nfaces in OPUS. In Proceedings of the 8th LREC ,\npages 2214–2218, Istanbul, Turkey.\n162",
"main_paper_content": null
}
|
{
"decision": "Unknown",
"reviews": []
}
| 0 | 0 |
[] |
[] |
{
"id": "y1K9jTAgAja",
"year": null,
"venue": "EAMT 2011",
"pdf_link": "https://aclanthology.org/2011.eamt-1.14.pdf",
"forum_link": "https://openreview.net/forum?id=y1K9jTAgAja",
"arxiv_id": null,
"doi": null
}
|
{
"title": "Combining Multi-Engine Machine Translation and Online Learning through Dynamic Phrase Tables",
"authors": [
"Rico Sennrich"
],
"abstract": "Rico Sennrich. Proceedings of the 15th Annual conference of the European Association for Machine Translation. 2011.",
"keywords": [],
"raw_extracted_content": "Combining Multi-Engine Machine Translation and Online Learning\nthrough Dynamic Phrase Tables\nRico Sennrich\nInsitute of Computational Linguistics\nUniversity of Zurich\nBinzm ¨uhlestr. 14\nCH-8050 Z ¨urich\[email protected]\nAbstract\nExtending phrase-based Statistical Ma-\nchine Translation systems with a second,\ndynamic phrase table has been done for\nmultiple purposes. Promising results have\nbeen reported for hybrid or multi-engine\nmachine translation, i.e. building a phrase\ntable from the knowledge of external MT\nsystems, and for online learning. We argue\nthat, in prior research, dynamic phrase ta-\nbles are not scored optimally because they\nmay be of small size, which makes the\nMaximum Likelihood Estimation of trans-\nlation probabilities unreliable. We pro-\npose basing the scores on frequencies from\nboth the dynamic corpus and the primary\ncorpus instead, and show that this modifi-\ncation significantly increases performance.\nWe also explore the combination of multi-\nengine MT and online learning.\n1 Introduction\nTwo recent trends in Machine Translation are\nmulti-engine MT, and online learning. In multi-\nengine MT, the aim is to combine the strengths of\ndifferent MT systems to perform better than any\nsingle system. Online learning is of high interest\nin the field of interactive MT; In order to increase\ntranslation performance and user satisfaction, it is\nbeneficial to consider previous post-edits made by\nthe user of the system.\nBoth approaches can be implemented within the\nphrase-based SMT framework by adding a second,\ndynamic phrase table. This architecture was first\ndescribed in (Chen et al., 2007), who built a dy-\nnamic phrase table trained on translation hypothe-\nc/circlecopyrt2011 European Association for Machine Translation.ses by external MT systems. The online learn-\ning system described in (Hardt and Elming, 2010)\nuses a similar architecture, with the difference that,\nrather than translations by external systems, previ-\nous translations, post-edited by the user, constitute\nthe dynamic corpus.\nThe aim of this study is to: a) evaluate both ap-\nproaches in an independent reimplementation and\non a different corpus; b) implement and evaluate\nan alternative scoring procedure that promises fur-\nther performance gains; c) show the feasibility of\ncombining multi-engine MT and online learning in\na single framework.\n2 Related Work\nSystem combination for Machine Translation is\nan active research field. The last two Workshops\non Machine Translation (WMT) included a sys-\ntem combination task; an overview is given in\n(Callison-Burch et al., 2009; Callison-Burch et al.,\n2010).\nThe effectiveness of system combination\nstrongly depends on the relative performance of\nthe systems being combined. In the 2009 WMT,\n(Callison-Burch et al., 2009) conclude that “In\ngeneral, system combinations performed as well\nas the best individual systems, but not statistically\nsignificantly better than them.” A possible reason\nfor this failure to improve on individual systems\nis given in the following year: “This year we\nexcluded Google translations from the systems\nused in system combination. In last year’s evalua-\ntion, the large margin between Google and many\nof the other systems meant that it was hard to\nimprove on when combining systems. This year,\nthe system combinations perform better than their\ncomponent systems more often than last year.”\n(Callison-Burch et al., 2010)Mik el L. 
F orcada, Heidi Depraetere, Vincen t V andeghinste (eds.)\nPr o c e e dings of the 15th Confer enc e of the Eur op e an Asso ciation for Machine T r anslation , p. 89\u001596\nLeuv en, Belgium, Ma y 2011\nWe implemented a system combination archi-\ntecture similar to that described in (Chen et al.,\n2007). While some approaches treat all systems as\nblack boxes, needing only the 1-best output from\neach system and a language model, (e.g. (Barrault,\n2010; Heafield and Lavie, 2010)), the combined\nsystem described by (Chen et al., 2007) is an ex-\ntension of an existing SMT system. The combi-\nnation is achieved by taking a vanilla SMT system\nand adding a second, dynamic phrase table to the\nexisting primary one. (Chen et al., 2007) propose\nthat the dynamic phrase table be trained online on\nthe translation output of several rule-based trans-\nlation systems. (Chen et al., 2009) expand on this\nconcept by allowing for the inclusion of arbitrary\ntranslation systems. We think this distinction into\na primary system and several secondary ones is at-\ntractive for our translation scenario, as will be ex-\nplained in section 3.1.\n(Hardt and Elming, 2010) propose a technically\nsimilar approach, albeit for a different purpose.\nTheir idea is to keep post-edited translations in a\ndynamic corpus. This corpus grows with every\nsentence that is translated, and is periodically used\nto re-train a dynamic phrase table.\nIn a simulation of the approach, using reference\ntranslations instead of actual post-edited transla-\ntions, they show that this dynamic phrase table\nhelps to improve translation performance signifi-\ncantly in some domains. They argue for the exis-\ntence of a file-context effect , that is, that “transla-\ntion data from within a file has a striking effect on\ntranslation quality” (Hardt and Elming, 2010).\nSince dynamic phrase tables are typically small,\nword alignment has been recognized as a ma-\njor challenge in all related publications. A suc-\ncessfully tested solution is to train GIZA++ (Och\nand Ney, 2003) on the primary corpus first, then\nusing the obtained models to align the dynamic\ncorpus (Chen et al., 2009; Hardt and Elming,\n2010). (Hardt and Elming, 2010) then apply\nheuristic post-processing to improve these approx-\nimate alignments.\nMany approaches of combining and weighting\nphrase tables have been proposed. (Hardt and Elm-\ning, 2010) add the dynamic phrase table as an alter-\nnative decoding path to their Moses system, copy-\ning the parameter weights from the primary phrase\ntable. (Chen et al., 2007) concatenate the phrase\ntables and augment them by adding new features\nas system markers. (Chen et al., 2009) proposea combination that avoids duplicate phrase pairs,\ngiving priority to the primary phrase table.\nIn contrast to word alignment and phrase table\ncombination, little attention has been paid to the\nissue of obtaining translation probabilities for the\ndynamic phrase tables. (Hardt and Elming, 2010)\nand (Chen et al., 2009) report that they use stan-\ndard Moses procedures, i.e. Maximum Likelihood\nEstimation (MLE) for phrase translation probabili-\nties, and lexical weights that are based on the word\ntranslation probabilities estimated by MLE.1Since\nMLE is unreliable for low sample sizes, we ex-\npect the performance of systems that include a dy-\nnamic phrase table to seriously degrade as the dy-\nnamic phrase table becomes smaller. 
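To make the modified scoring concrete before it is described in detail in the next section, the following minimal Python sketch contrasts vanilla, dynamic-only MLE with the combined-frequency estimate; the function name and the plain (source phrase, target phrase) pair representation are illustrative assumptions, not the actual Moses training scripts.

```python
from collections import Counter

def rescore_dynamic_table(dyn_pairs, primary_pairs):
    """Rescore p(tgt | src) for phrase pairs observed in the dynamic corpus.

    dyn_pairs / primary_pairs: lists of (src_phrase, tgt_phrase) extractions.
    Vanilla scoring would divide by dynamic counts only; here numerator and
    denominator sum the frequencies of both corpora, so a pair seen once out
    of two dynamic occurrences of its source phrase no longer gets p = 0.5.
    """
    dyn_pair, prim_pair = Counter(dyn_pairs), Counter(primary_pairs)
    dyn_src = Counter(s for s, _ in dyn_pairs)
    prim_src = Counter(s for s, _ in primary_pairs)
    return {
        (s, t): (dyn_pair[(s, t)] + prim_pair[(s, t)])
        / (dyn_src[s] + prim_src[s])
        for (s, t) in dyn_pair
    }
```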
We propose\nto mitigate the problem by grounding the MLE-\nbased scoring of the dynamic phrase table in the\nfrequency counts of both the dynamic and the pri-\nmary phrase table. Since our implementation can\nbe used for both system combination and online\nlearning, we will test the effect of scoring on both\napproaches.\n3 System description\n3.1 Data and Tools\nWe conduct our experiments on the parallel part of\nthe Text+Berg corpus, a collection of Alpine texts\n(V olk et al., 2010). The collection so far consists of\nthe yearbooks of the Swiss Alpine Club from 1864\nto 1995. Since 1957, the yearbook has been pub-\nlished in two parallel editions, German and French.\nTable 1 shows the amount of training data. Note\nthat we use a relatively small amount of training\ndata, but that training is in-domain with respect to\nthe test set. As a consequence, the main weak-\nness of the baseline system is data sparseness. In\nthe 1000-sentence test set, 19% of the types (5% of\nthe tokens) are out-of-vocabulary words, i.e. words\nthat are not in the translation model. This can be\nmostly attributed to the morphological complexity\nof German, which is the source language in our ex-\nperiments. Incorporating rule-based MT systems,\nwhich are able to decompose German compounds\nand analyse inflected forms, into the translation\nprocess promises to mitigate this problem.\nOur motivation is to use external systems to fill\nlexical gaps in the baseline SMT system, which is\ntrained on a relatively small amount of in-domain\ntraining data and outperforms other systems not\nadapted to the domain (see section 4.1). Ideally,\n1See (Koehn et al., 2003) for the formulae.90\nCorpus segments words DE words FR\nTraining 151 000 2 840 000 3 200 000\nTuning 1135 23 100 25 800\nTest 991 19 200 21 600\nLM 490 000 - 9 510 000\nTable 1: Data used for training, tuning and testing,\nand for training the language model.\ntranslations by the external systems should only\nbe used for source words or phrases which are not\nwell-evidenced in the primary system. For this, the\nglass-box approach of taking an existing SMT sys-\ntem and extending it with a dynamic phrase table\nseems better suited than a black box combination\nof systems, in which we cannot know how well-\nevidenced different translation options are.\nAs external SMT systems for the multi-engine\ntranslation approach, we use the rule-based Per-\nsonal Translator 142, and Google Translate3.\nWhile Google Translate is a statistical system, it\npromises to be more robust to data sparseness than\nour in-domain system because the Google system\nis trained on significantly more training data.4\nWe build the SMT systems using Moses (Koehn\net al., 2007), SRILM (Stolcke, 2002), and\nMGIZA++ (Gao and V ogel, 2008). In terms of\nconfiguration, we adhere to the instructions for\nbuilding a baseline system by the 2010 ACL Work-\nshop on SMT.5Additionally, we prune the primary\nphrase table using the approach by (Johnson et al.,\n2007).\n3.2 Alignment\nWe compute the word alignment of the primary\ncorpus using the default configuration in Moses,\nbut saving all models to disk. We then force an\nalignment of all dynamic corpora on the basis of\nthese models with MGIZA++. Since we do not fo-\ncus on word alignment in this paper, we only com-\npute the alignments once for each dynamic corpus,\nre-using the alignment when we build phrase ta-\nbles from parts of the corpus. 
This allows us to rule\nout alignment differences as the reason for varia-\ntion in performance.\n2http://www.linguatec.net/products/tr/pt\n3http://translate.google.com\n4Even though we do not know the actual amount of training\ndata used for Google Translate DE-FR, this is a safe assump-\ntion (see table 1).\n5http://www.statmt.org/wmt10/baseline.\nhtml3.3 Scoring\nFor each of the experiments with dynamic phrase\ntables, the baseline scoring sytem is vanilla Moses,\ni.e. a scoring of translation probabilities based on\nthe dynamic corpus only. This is the implementa-\ntion described in (Chen et al., 2007) and (Hardt and\nElming, 2010). We propose to score the dynamic\nphrase table by also taking the primary corpus into\naccount, since MLE is unreliable for small sample\nsizes.\nOur modified approach to scoring is imple-\nmented as follows. The Moses training scripts\nare modified to not only return phrase translation\nprobabilities and lexical weights, but also the suffi-\ncient statistics , i.e. word and phrase (pair) frequen-\ncies, required to recompute all parameters. Each\ntime the dynamic corpus is updated, we train the\ndynamic phrase table using this modified script.\nThen, we rescore the translation probabilities and\nlexical weights in the dynamic phrase table with a\nclient-server architecture.\nThe server stores all relevant frequencies of the\nprimary corpus in memory, and upon receiving the\ncommand to rescore the dynamic phrase table, ex-\ntracts the frequencies of the dynamic corpus, then\ncomputes updated translation probabilities based\non the sum of the frequencies in both corpora.\nWe illustrate the motivation behind this modifi-\ncation with two examples, shown in table 2. The\ntwo sentences exemplify two different situations.\nIn the first, the German compound Konditionswun-\nder(roughly: one who is in miraculous shape ) is\nunknown by the primary system. Here, the multi-\nengine approach is shown to work, since this lexi-\ncal gap is filled with an adequate translation by one\nof the external systems.\nThe second sentence is translated well by the\ndomain-specific system, but improperly by the\nexternal systems. Most striking is the German\nword P¨asse (English: mountain passes ), correctly\ntranslated as cols by the primary system, but\nas either passeports (English: passports ) or la\npasse (English: pass [of the ball] ) by the ex-\nternal ones, both possible translations of Pass,\nbut unlikely ones in the mountaineering domain.\nP¨asse is well-evidenced in the primary corpus\n(136 observations), with p(cols|P¨asse)estimated\nat64/136 (0.47). 
We find that, 2 being the fre-\nquency of P¨asse in the dynamic corpus, estimating\np(passeports|P¨asse)andp(la passe|P¨asse)at\n1/(136 + 2) (0.007), rather than 1/2(0.5), better91\nSource Er ist ein Konditionswunder.\nHe is in miraculous shape.\nReference C’est un miracle de condition physique.\nSystem 1 (Moses) C’est un Konditionswunder.\nSystem 2 (PT 14) C’est un miracle de condition.\nSystem 3 (Google Translate) Il est un miracle de remise en forme.\nMulti-Engine (vanilla) C’est un miracle de condition.\nMulti-Engine (modified) C’est un miracle de condition.\nSource Wir konnten das Aussehen der P ¨asse nur ahnen.\nWe could only guess at the look of the mountain passes .\nReference Nous ne pouvions que deviner l’aspect des cols .\nSystem 1 (Moses) nous ne pouvions seulement deviner l’aspect des cols .\nSystem 2 (PT 14) Nous ne pouvions que nous douter de l’air des passeports .\nSystem 3 (Google Translate) Nous ne pouvions imaginer l’aspect de la passe .\nMulti-Engine (vanilla) nous ne pouvions de l’air des cols de la passe .\nMulti-Engine (modified) nous ne pouvions l’aspect des cols que deviner.\nTable 2: German–French translation examples.\nmodels our expectation. The numbers are simpli-\nfied, discussing only one of four scores computed\nfor the phrase table. Also, possible errors during\nword alignment and/or phrase extraction are not\nconsidered. If we look at the output of the vanilla\nmulti-engine system, we see that such a misalign-\nment has indeed occurred, with German nur ah-\nnen(English: only guess ) being mistranslated as\nde la passe . With modified scoring, this phrase\npair is given low scores6, preventing it from being\nselected during decoding.\nSumming the frequencies of different corpora is\nnot a new idea. If we simply concatenated the cor-\npora before scoring, we would achieve the same\neffect. However, working with two phrase tables,\none static and one dynamic, has distinct advan-\ntages: since we only rescore the dynamic phrase\ntable, training is much faster than if we had to\nrescore the primary model regularly. It also allows\nus to give different weights to the two phrase ta-\nbles. We show in the evaluation that this leads to a\nbetter performance.\n3.4 Combination of Phrase Tables\n(Chen et al., 2009) decided against using the pri-\nmary and dynamic phrase table as alternative de-\ncodings paths in Moses, since this increases the\nsearch space for MERT, especially since they ex-\ntend the system with additional features, for in-\n6This especially applies to the lexical weights; since both the\nsource and the target phrase are rare in the primary corpus, the\nphrase translation probabilities are only slightly penalized.stance to mark the origin of translation hypotheses.\n(Hardt and Elming, 2010), on the other hand, use\nalternative decoding paths and avoid the problem\nof tuning by using the same set of weights for both\nphrase tables.\nFor our experiments, we will use alternative de-\ncoding paths, keeping the search space under con-\ntrol by not adding any features, and never using\nmore than one dynamic phrase table. In the online\nlearning experiments, we will follow (Hardt and\nElming, 2010) in using the same set of weights for\nboth phrase tables.\n4 Evaluation\nData and tools used for our experiments are de-\nscribed in section 3.1. For the evaluation, we\nuse BLEU (Papineni et al., 2002) and METEOR\n(Banerjee and Lavie, 2005), applying bootstrap re-\nsampling to test for statistical significance (Riezler\nand Maxwell, 2005). 
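As a brief aside on the significance testing just mentioned, a simplified paired-bootstrap sketch over sentence-level scores is given below; the actual procedure of Riezler and Maxwell (2005) resamples sufficient statistics and recomputes corpus-level BLEU, so the per-sentence scores used here are only a stand-in that keeps the sketch self-contained.

```python
import random

def paired_bootstrap(scores_a, scores_b, n_resamples=1000, seed=0):
    """Return the fraction of bootstrap resamples in which system A beats B.

    scores_a / scores_b: per-sentence quality scores of two systems on the
    same test set. Values near 1.0 (or 0.0) suggest the observed difference
    is unlikely to be due to chance.
    """
    assert len(scores_a) == len(scores_b)
    rng = random.Random(seed)
    n = len(scores_a)
    wins = 0
    for _ in range(n_resamples):
        idx = [rng.randrange(n) for _ in range(n)]
        if sum(scores_a[i] for i in idx) > sum(scores_b[i] for i in idx):
            wins += 1
    return wins / n_resamples
```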
After establishing the base-\nline performance of our in-domain SMT system\nand the two external systems (i.e. Personal Trans-\nlator 14 and Google Translate), we describe three\nexperiments.\nThe first is a re-implementation of the multi-\nengine approach described in (Chen et al., 2009),\nthe second one of online learning by (Hardt and\nElming, 2010), and the third a combination of the\ntwo. In the first two experiments, we want to ad-\ndress the following research questions:\n•Can we reproduce the positive effect of both\napproaches in our evaluation scenario?92\nSystem BLEU METEOR\nown baseline 17.18 38.28\nPersonal Translator 14 13.29 35.68\nGoogle Translate 12.94 34.36\nTable 3: SMT performance DE–FR.\n•How does the multi-engine approach de-\nscribed here compare to system combination\nalgorithms that only use the translation hy-\npotheses and a language model, but not a par-\nallel corpus?\n•What is the effect of dynamic phrase table\nsize on translation performance, excluding\nword alignment as a factor?\n•Is our proposed modification to scoring effec-\ntive at improving system performance?\nIn the third experiment, our aim is to demonstrate\nthat both multi-engine MT and online learning can\nbe combined within a single framework.\n4.1 Baseline Experiments\nIn terms of baseline performance (table 3), we find\nthat our in-domain system obtains markedly bet-\nter scores than both Personal Translator 14 and\nGoogle Translate when evaluating it on our Alpine\ntest set.7However, the performance is relatively\nlow for the language pair DE–FR – when we\ntrained an SMT system on Europarl, we achieved\n28.24 BLEU points on a Europarl test set for this\nlanguage pair. This indicates that the mountaineer-\ning narratives which constitute our test set are rel-\natively hard to translate.\n4.2 Multi-Engine Translation\nFor the multi-engine translation experiments, we\nfirst built a dynamic phrase table encompassing\nboth the tuning and the test set (or approximately\n4000 sentence pairs).8We then conduct MERT\nwith this dynamic phrase table added as an alter-\nnative decoding path to Moses.\nThree alternative system combination meth-\nods are evaluated against the approach described\n7We are aware that BLEU scores might not give a fair assess-\nment of rule-based MT as compared to SMT (see (Callison-\nBurch et al., 2006)). If the rule-based system indeed performs\nbetter than the BLEU scores suggest, this is all the more rea-\nson for tapping its knowledge in a multi-engine approach.\n8This is twice the size of the tuning and test set: every source\nsentence is once paired with its translation by Personal Trans-\nlator 14, once with the one by Google Translate.here (called Dynamic ):Concat , an SMT system\ntrained on the concatenation of the parallel training\ncorpus and the translation hypotheses by Google\nTranslate and Personal Translator 14. MANY\n(Barrault, 2010) and MEMT (Heafield and Lavie,\n2010), both open source system combination soft-\nware with confusion network decoding.\nFor the experiments with a dynamic phrase ta-\nble, we test the effect of dynamic corpus size on\nSMT performance. We do this by varying the num-\nber of sentences that are translated at once, each\ntime building a dynamic phrase table that only in-\ncludes the translations of the sentences needed at\nthe time. 
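The batched protocol described above can be sketched roughly as follows; build_table and decode are placeholder callables standing in for the Moses training and decoding steps, not actual tool interfaces.

```python
def translate_in_batches(test_sents, external_hyps, batch_size,
                         build_table, decode):
    """Translate the test set in blocks of batch_size sentences.

    For each block, a dynamic phrase table is built only from the external
    MT hypotheses of that block (two per source sentence in these
    experiments); decoding then uses the primary table plus this table.
    """
    outputs = []
    for i in range(0, len(test_sents), batch_size):
        block = test_sents[i:i + batch_size]
        dyn_corpus = [(src, hyp) for src in block for hyp in external_hyps[src]]
        outputs.extend(decode(block, build_table(dyn_corpus)))
    return outputs
```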
In the extreme case, each sentence is\ntranslated independently, with a dynamic phrase\ntable built from two sentence pairs.\n4.2.1 Results\nAll combined systems shown in table 4 signif-\nicantly outperform the baseline of 17.18 BLEU\npoints. The score difference between MANY and\nMEMT is not statistically significant, but both are\nsignificantly better than the baseline and signifi-\ncantly worse than the approaches that make use\nof the in-domain parallel text. This validates our\nattempts to exploit the in-domain parallel corpus\nfor system combination. We have not analyzed at\nwhich stage MANY and MEMT fail to exploit the\nfull potential of the translation hypotheses; both\nalignment errors and decoding errors are conceiv-\nable.\nA concatenation of the primary corpus and the\ntranslation hypotheses, with the same training pro-\ncedure as in the baseline system, works surpris-\ningly well, yielding a BLEU score of 19.11. How-\never, this approach has little practical use, since\nretraining the entire SMT system is prohibitively\nslow. The experiments with a dynamic phrase table\nyield the best results. The modified scoring as pro-\nposed in section 3.3 achieves 20.06 BLEU points,\nas opposed to 19.33 BLEU points achieved with\nvanilla scoring.\nOne weakness of the vanilla scoring algorithm,\ni.e. the unreliability of MLE, is bound to become\nmore severe when we translate the test set in\nsmaller steps and build smaller dynamic phrase ta-\nbles. The results in table 5 confirm that this holds\ntrue, although the effect is smaller than we ex-\npected. Only with a dynamic corpus size of 2, i.e.\nwhen builiding a separate dynamic phrase table for\neach sentence that is translated, did we observe a\nstatistically significant drop in performance. With93\nCombination System BLEU METEOR\nMANY 18.23 39.68\nMEMT 18.39 39.01\nConcat 19.11 39.45\nDynamic (vanilla) 19.33 40.00\nDynamic (modified) 20.06 40.59\nTable 4: SMT performance DE–FR for multiple\nsystem combination approaches.\nsize of dynamicvanilla modifiedcorpus (sentence pairs)\n4000 19.33 20.06\n200 19.26 19.95\n20 19.17 19.96\n2 18.80 19.93\nTable 5: SMT performance DE–FR as a function\nof dynamic corpus size. BLEU scores.\nour modified scoring algorithm, we successfully\neliminated this dependence of SMT performance\non the size of the dynamic corpus.\n4.3 Online Learning\nOur test set consists of 7 held-out articles of the\nText+Berg corpus, spanning 991 sentences. The\nfact that the test sentences are not selected ran-\ndomly allows us to investigate file-context effects\nas observed in (Hardt and Elming, 2010).\nWe use the same translation process as in the last\nexperiments, with two differences: Firstly, we do\nnot perform MERT and use the weights of the pri-\nmary phrase table for the dynamic one. Secondly,\nthe dynamic corpus is different. Instead of using\nexternal translation hypotheses for the sentences\nthat are currently translated, we use the reference\ntranslation for all previously translated sentences\nof the test set. This simulates a post-editing ap-\nproach where the corrected translations are re-used\nto improve later translations. The dynamic corpus\nis thus retrained after every sentence, but its size\nincreases over time.\n4.3.1 Results\nThe results are shown in table 6. Our vanilla\nreimplementation of the online learning approach\nis worse than the baseline. (Hardt and Elming,\n2010) attributed the lack of improvement in one\nexperiment to a weak file-context effect in one of\nthe test sets. 
While our test set, which consists\nof mountaineering narratives, is also less repeti-System BLEU METEOR\nbaseline 17.18 38.28\nvanilla scoring 16.81 37.61\nmodified scoring 17.57 38.60\nTable 6: SMT performance DE–FR with online\nlearning system.\ntive than the technical domains in which (Hardt\nand Elming, 2010) found strong file-context ef-\nfects, this is not enough to explain why the scores\ngo down.\nThe reason is that translation probabilities are\nnot estimated well, as discussed in section 3.3.\nTo give another amusing example of the con-\nsequences, the German word farbige (English:\ncolourful ) is translated as tr`es color ´esby our base-\nline system, but as nous d ´eployons (English: we\ndeploy ) by the experimental one. The transla-\ntion, learned from a sentence about deploying\nparachutes, makes little sense in the context of\ncolourful birds . With modified scoring, the mis-\ntranslation no longer occurs.\nBy using the modified scoring procedure, we\nsignificantly outperform the baseline. Considering\nthe small amount of additional training material,\nand the elimination of other possible confounding\nfactors (all systems use the same weights), we con-\nclude that file-context effects exist in our test set.\nAlso, the experiment validates our modifications\nto scoring of the dynamic phrase table, turning a\nloss of 0.4 BLEU points with the experimental ap-\nproach into a gain of the same size.\n4.4 Combining Multi-Engine MT and Online\nLearning\nIt is attractive to combine the two prior experi-\nments, since both are based on the same architec-\nture, with the only difference being the corpus for\ntraining the dynamic phrase table, and the param-\neters for the models. Sadly, we observed compar-\natively little gains with online learning in our test\nset, which limits the potential of the combined ap-\nproach.\nWe decided against increasing the number of dy-\nnamic phrase tables, which would complicate the\nscoring, without offering additional benefits: the\nmain advantage of having multiple phrase tables\nwould be the possibility of using separate param-\neters for each table, but the explosion in the size\nof the search space makes it unlikely for MERT to94\nSystem BLEU METEOR\nbaseline 17.18 38.28\nonline learning 17.57 38.60\nmulti-engine MT 19.93 40.52\ncombined 20.05 40.61\nTable 7: SMT performance DE–FR with system\ncombining multi-engine MT and online learning.\nfind good weights.\nWe chose our best-performing experiment so\nfar, the multi-engine system with modified scor-\ning, as our new baseline. We performed retraining\nof the dynamic phrase table for every sentence, in-\ncluding the translation hypotheses by both exter-\nnal MT systems, and all previous translation pairs\nfrom the test set. Preliminary experiments have\nshown that the multi-engine system works signif-\nicantly worse than our best experiment when we\nsimply copy the MERT parameters of the primary\nphrase table (BLEU score: 18.04). Thus, we chose\nto adopt the parameters of the multi-engine exper-\niment for this one.\n4.4.1 Results\nThe combined system does not significantly out-\nperform the multi-engine MT system, as table 7\nshows. 
One problem is that the effect of online\nlearning is already small in our test set; the other,\nthat we cannot expect the improvements of the two\napproaches to be additive, since both mitigate the\nsame weakness of the primary system.\n5 Conclusion\nIn this paper, we reimplemented two differently\nmotivated but technically similar approaches that\nuse a dynamic phrase table, along with a static pri-\nmary one, to provide SMT systems with informa-\ntion relevant to the translation task at hand. (Chen\net al., 2009) built a dynamic phrase table from\ntranslation hypotheses by external MT systems,\nwhile (Hardt and Elming, 2010) used previous\ntranslation pairs from the same file to contribute\nto the translation of future ones. We successfully\nreimplemented both approaches and showed them\nto work in our translation setting that consists of\na German–French SMT system trained on a small\ndomain-specific translation corpus. We show that\nthis approach can outperform system combination\nalgorithms that only use information on the target\nside, i.e. the translation hypotheses and a language\nmodel. Additionally, we propose modification tothe scoring procedure for dynamic phrase tables.\nRather than basing MLE of translation probabil-\nities only on the dynamically created corpus, we\nshow that combining the frequencies of the pri-\nmary and the dynamic corpus for the purpose of\nscoring leads to significant gains in SMT perfor-\nmance. We have observed an increase by 0.73 and\n0.76 BLEU points over the vanilla scoring algo-\nrithm, and 2.88 BLEU points over the best indi-\nvidual system. We also identified the vanilla scor-\ning procedure as the reason for a decrease in per-\nformance in one experiment, namely the reimple-\nmentation of incremental retraining by (Hardt and\nElming, 2010). We conclude that it is advisable\nto score dynamic phrase tables with recourse to\nthe frequencies in larger corpora. With a client-\nserver architecture, where the frequencies are held\nin memory, there is little impact on translation\ntime, albeit at the cost of memory space.\nThe potential performance gain of both multi-\nengine MT and online learning approaches varies\nbetween translation scenarios, depending on the\navailability and quality of training data, exter-\nnal MT systems, and the file-context effect. We\ndemonstrated the feasibility of combining the two\napproaches, but found no statistically significant\nadditional score increase over the multi-engine\napproach. We suspect that bigger gains can be\nachieved when translating texts of a more repeti-\ntive nature, for which (Hardt and Elming, 2010)\ndemonstrated the beneficial effect of online learn-\ning.\nSo far, the only component of our experimental\nsystem that incorporates dynamic knowledge is the\nphrase table. For future research, we want to inves-\ntigate whether dynamically retraining other com-\nponents such as the language model or reordering\nmodel may lead to additional performance gains.\nAcknowledgments\nI would like to thank Lo ¨ıc Barrault and Kenneth\nHeafield for providing me with the newest source\ncode and technical support for their software. This\nresearch was funded by the Swiss National Science\nFoundation under grant 105215 126999.\nReferences\nBanerjee, Satanjeev and Alon Lavie. 2005. METEOR:\nAn automatic metric for MT evaluation with im-\nproved correlation with human judgments. 
In Pro-\nceedings of the ACL Workshop on Intrinsic and Ex-95\ntrinsic Evaluation Measures for Machine Transla-\ntion and/or Summarization , pages 65–72, Ann Ar-\nbor, Michigan, June. Association for Computational\nLinguistics.\nBarrault, Lo ¨ıc. 2010. MANY: Open source MT sys-\ntem combination at WMT’10. In Proceedings of the\nJoint Fifth Workshop on Statistical Machine Trans-\nlation and MetricsMATR , pages 277–281, Uppsala,\nSweden, July. Association for Computational Lin-\nguistics.\nCallison-Burch, C., M. Osborne, and P. Koehn. 2006.\nRe-evaluating the Role of Bleu in Machine Transla-\ntion Research. In Proceedings the Eleventh Confer-\nence of the European Chapter of the Association for\nComputational Linguistics , pages 249–256, Trento,\nItaly.\nCallison-Burch, Chris, Philipp Koehn, Christof Monz,\nand Josh Schroeder. 2009. Findings of the 2009\nWorkshop on Statistical Machine Translation. In\nProceedings of the Fourth Workshop on Statistical\nMachine Translation , pages 1–28, Athens, Greece,\nMarch. Association for Computational Linguistics.\nCallison-Burch, Chris, Philipp Koehn, Christof Monz,\nKay Peterson, Mark Przybocki, and Omar Zaidan.\n2010. Findings of the 2010 joint workshop on sta-\ntistical machine translation and metrics for machine\ntranslation. In Proceedings of the Joint Fifth Work-\nshop on Statistical Machine Translation and Metric-\nsMATR , pages 17–53, Uppsala, Sweden, July. Asso-\nciation for Computational Linguistics. Revised Au-\ngust 2010.\nChen, Yu, Andreas Eisele, Christian Federmann, Eva\nHasler, Michael Jellinghaus, and Silke Theison.\n2007. Multi-engine machine translation with an\nopen-source decoder for statistical machine transla-\ntion. In Proceedings of the Second Workshop on\nStatistical Machine Translation , StatMT ’07, pages\n193–196, Morristown, NJ, USA. Association for\nComputational Linguistics.\nChen, Yu, Michael Jellinghaus, Andreas Eisele,\nYi Zhang, Sabine Hunsicker, Silke Theison, Chris-\ntian Federmann, and Hans Uszkoreit. 2009. Com-\nbining multi-engine translations with Moses. In Pro-\nceedings of the Fourth Workshop on Statistical Ma-\nchine Translation , StatMT ’09, pages 42–46, Morris-\ntown, NJ, USA. Association for Computational Lin-\nguistics.\nGao, Qin and Stephan V ogel. 2008. Parallel imple-\nmentations of word alignment tool. In Software En-\ngineering, Testing, and Quality Assurance for Natu-\nral Language Processing , pages 49–57, Columbus,\nOhio. Association for Computational Linguistics.\nHardt, Daniel and Jakob Elming. 2010. Incremental re-\ntraining for post-editing SMT. In Conference of the\nAssociation for Machine Translation in the Americas\n2010 (AMTA 2010) , Denver, CO, USA.Heafield, Kenneth and Alon Lavie. 2010. CMU multi-\nengine machine translation for WMT 2010. In Pro-\nceedings of the Joint Fifth Workshop on Statistical\nMachine Translation and MetricsMATR , WMT ’10,\npages 301–306, Stroudsburg, PA, USA. Association\nfor Computational Linguistics.\nJohnson, Howard, Joel Martin, George Foster, and\nRoland Kuhn. 2007. Improving translation qual-\nity by discarding most of the phrasetable. In Pro-\nceedings of the 2007 Joint Conference on Empirical\nMethods in Natural Language Processing and Com-\nputational Natural Language Learning (EMNLP-\nCoNLL) , pages 967–975, Prague, Czech Republic,\nJune. Association for Computational Linguistics.\nKoehn, Philipp, Franz Josef Och, and Daniel Marcu.\n2003. Statistical phrase-based translation. 
In\nNAACL ’03: Proceedings of the 2003 Conference\nof the North American Chapter of the Association\nfor Computational Linguistics on Human Language\nTechnology , pages 48–54, Morristown, NJ, USA.\nAssociation for Computational Linguistics.\nKoehn, Philipp, Hieu Hoang, Alexandra Birch, Chris\nCallison-Burch, Marcello Federico, Nicola Bertoldi,\nBrooke Cowan, Wade Shen, Christine Moran,\nRichard Zens, Chris Dyer, Ond ˇrej Bojar, Alexandra\nConstantin, and Evan Herbst. 2007. Moses: Open\nSource Toolkit for Statistical Machine Translation.\nInMeeting of the Association for Computational Lin-\nguistics (ACL 2007) , pages 177–180, Prague, Czech\nRepublic.\nOch, Franz Josef and Hermann Ney. 2003. A system-\natic comparison of various statistical alignment mod-\nels.Computat. Linguist. , 29(1):19–51.\nPapineni, Kishore, Salim Roukos, Todd Ward, and Wei-\nJing Zhu. 2002. Bleu: A method for automatic eval-\nuation of machine translation. In ACL ’02: Proceed-\nings of the 40th Annual Meeting on Association for\nComputational Linguistics , pages 311–318, Morris-\ntown, NJ, USA. Association for Computational Lin-\nguistics.\nRiezler, Stefan and John T. Maxwell. 2005. On some\npitfalls in automatic evaluation and significance test-\ning for MT. In Proceedings of the ACL Workshop\non Intrinsic and Extrinsic Evaluation Measures for\nMachine Translation and/or Summarization , pages\n57–64, Ann Arbor, Michigan. Association for Com-\nputational Linguistics.\nStolcke, A. 2002. SRILM – An Extensible Language\nModeling Toolkit. In Seventh International Confer-\nence on Spoken Language Processing , pages 901–\n904, Denver, CO, USA.\nV olk, Martin, Noah Bubenhofer, Adrian Althaus, Maya\nBangerter, Lenz Furrer, and Beni Ruef. 2010. Chal-\nlenges in building a multilingual alpine heritage cor-\npus. In Proceedings of the Seventh conference on\nInternational Language Resources and Evaluation\n(LREC’10) , Valletta, Malta. European Language Re-\nsources Association (ELRA).96",
"main_paper_content": null
}
|
{
"decision": "Unknown",
"reviews": []
}
| 0 | 0 |
[] |
[] |
{
"id": "jMU8fV9Vy9",
"year": null,
"venue": "EAMT 2020",
"pdf_link": "https://aclanthology.org/2020.eamt-1.39.pdf",
"forum_link": "https://openreview.net/forum?id=jMU8fV9Vy9",
"arxiv_id": null,
"doi": null
}
|
{
"title": "On the differences between human translations",
"authors": [
"Maja Popovic"
],
"abstract": "Maja Popovic. Proceedings of the 22nd Annual Conference of the European Association for Machine Translation. 2020.",
"keywords": [],
"raw_extracted_content": "On the differences between human translations\nMaja Popovi ´c\nADAPT Centre\nSchool of Computing\nDublin City University, Ireland\[email protected]\nAbstract\nMany studies have confirmed that trans-\nlated texts exhibit different features than\ntexts originally written in the given lan-\nguage. This work explores texts trans-\nlated by different translators taking into ac-\ncount expertise and native language. A set\nof computational analyses was conducted\non three language pairs, English-Croatian,\nGerman-French and English-Finnish, and\nthe results show that each of the factors\nhas certain influence on the features of\nthe translated texts, especially on sentence\nlength and lexical richness. The results\nalso indicate that for translations used for\nmachine translation evaluation, it is im-\nportant to specify these factors, especially\nwhen comparing machine translation qual-\nity with human translation quality.\n1 Introduction\nMany studies have demonstrated that translated\ntexts (human translations, HTs) have different lex-\nical, syntactic and other textual features than texts\noriginally written in the given language (originals).\nThese special traits of HTs are result of a com-\npromise between two often antagonised aspects of\nthe translation process: fidelity to the source text\nand naturalness of the generated target language\ntext. Although all studies confirm the existence of\nunique HT features, two categories of these fea-\ntures are distinguished in the literature. One cate-\ngory, “translation universals”, represents a general\nset of features shared by all translations, indepen-\ndent of the characteristics of involved languages\nc\r2020 The authors. This article is licensed under a Creative\nCommons 3.0 licence, no derivative works, attribution, CC-\nBY-ND.(Baker et al., 1993). Another category, “interfer-\nence”, reflects the impact of the source language,\nthe “trace” which the source language leaves in the\ntranslation (Toury, 1979). Some studies investi-\ngate and demonstrate the existence of both cate-\ngories, sometimes called “source universals” and\n“target universals” (Chesterman, 2004; Koppel and\nOrdan, 2011).\nOur research aims to find out whether differ-\nences between translators have any influence on\nthe text features. We investigate impact of the\ntranslator’s expertise and native language. We\npresent results of a computational analysis of a\nset of HTs originating from the news domain and\ninvolving three distinct language pairs, English-\nCroatian, German-French and English-Finnish.\nThe analysis is guided by the following research\nquestions:\nRQ1 Are there differences between HTs related to\ntranslator’s expertise?\nRQ2 Are there differences between HTs related\nto translator’s native language and translation\ndirection? (from or into translator’s native\nlanguage)\nThe main contribution of this work is empiri-\ncal, showing evidence of differences between text\nfeatures of HTs produced by different translators.\nWe expect our findings to motivate and drive fu-\nture research in this direction in order to better\nunderstand these differences by identifying and\nanalysing underlying linguistic phenomena.\nMoreover, differences between HTs may have\npractical impact on evaluation of machine transla-\ntion (MT) systems. 
Several recent studies (Toral\net al., 2018; L ¨aubli et al., 2018; Zhang and Toral,\n2019; Freitag et al., 2019) have shown that the\ntranslation direction has impact on the results of\nevaluation of MT outputs, so that it is important\nto specify whether originals or HTs were used\nas source texts for MT systems. Taking into ac-\ncount these studies and the findings reported in this\nwork, potential effects of translators’ backgrounds\non MT should be investigated too.\n2 Related work\nAnalysis of translated texts A lot of work has\nbeen done exploring differences between HTs and\noriginals. Some studies (Baker et al., 1993) have\nemphasised the existence of “translation univer-\nsals”, general features of translated texts, “simpli-\nfication” and “explicitation” being the most well-\nknown. Other studies (Toury, 1979) have pointed\nout the influence of the source language, “interfer-\nence”, whereas some (Chesterman, 2004) concen-\ntrate on both categories, called “S-universals” and\n“T-universals”.\nSince many text features can be measured quan-\ntitatively, a number of publications demonstrated\nthat HTs can be automatically distinguished from\noriginals (Baroni and Bernardini, 2006; Koppel\nand Ordan, 2011; V olansky et al., 2015; Rabi-\nnovich and Wintner, 2015; Rubino et al., 2016).\nThe features used for the classifiers are partly\nmotivated by the theoretical categories mentioned\nabove, however many features are not directly re-\nlated to a particular category, and many can be-\nlong to more than one category. The most common\nfeatures are lexical variety (percentage of distinct\nwords in a text), lexical density (sometimes called\ninformation density, percentage of content words\nin a text), sentence length, word length, as well\nas frequencies of certain POS categories, function\nwords and collocations.\nRabinovich et al. (2016) include analysis of non-\nnative texts, namely texts originally written in the\ngiven language but by non-native speakers. They\nfound that these texts generally exhibit different\nfeatures than native originals and HTs, thus rep-\nresenting yet another text category. On the other\nhand, their features are closer to those of HTs than\nto native originals, indicating the influence (“inter-\nference”) of the native language.\nIn addition to analysis of HTs, more and more\npublications report analysis of machine translated\ntexts. Ahrenberg (2017) compares MT outputs\nwith HTs by means of automatically calculated\ntext features as well as by manual analysis of di-vergences (shifts) from the source text. The main\nfinding is that MT output is much more similar\nto the source text than HT. Another study of ma-\nchine translated texts (Vanmassenhove et al., 2019)\nreports significantly lower lexical richness in MT\noutputs in comparison to originals and HTs.\nPost-editing (PE) of MT outputs has lead to\nyet another type of translated text which has been\nanalysed extensively in the recent years ( ˇCulo and\nNitzke, 2016; Daems et al., 2017; Farrell, 2018;\nToral, 2019; Castilho et al., 2019). These studies\ndemonstrated that PEs represent an additional text\ncategory with the features lying between those of\nHTs and of MT outputs.\nRelations between machine and human trans-\nlation As machine translation (MT) technology\nimproves, more and more work has been done on\ninvestigating relations between different aspects of\nMT and HT direction. 
First publications on this\ntopic (Kurokawa et al., 2009; Lembersky et al.,\n2013) demonstrated that the direction of HT plays\nan important role for building a statistical MT sys-\ntem, and recommend training on parallel corpora\nwhich were translated in the same direction as the\nMT system (i.e. using originals as source and HTs\nas target).\nRecently, several publications (L ¨aubli et al.,\n2018; Toral et al., 2018; Freitag et al., 2019; Zhang\nand Toral, 2019) demonstrated that the translation\ndirection plays an important role both for human\nas well as for automatic evaluation of MT systems.\nBefore these findings were published, this aspect\nhas not been taken into account at all in the MT\ncommunity.1Afterwards, as a consequence, using\nonly originals as source test texts and HTs as ref-\nerence test texts has become a common practice in\nthe WMT shared tasks2from 2019. The main rea-\nson is to avoid all possible side effects, since Toral\net al. (2018) have shown that the use of HTs as\nsource texts facilitates the MT process mainly be-\ncause of the decreased lexical variety. On the other\nhand, Freitag et al. (2019) recommend using both,\nalbeit separated, original as well as HT source texts\nprecisely in order to be able to take into account\nand better understand all effects.\nApart from the impact of translation direction,\nthe impact of divergences from a source text in HT\n1For example, in the WMT shared tasks, even texts writ-\nten in an ”external” original language were used extensively,\ne.g. English HTs from Czech texts were used as source for\nEnglish-to-German MT systems.\n2http://www.statmt.org/wmt19/\nused as MT data has been investigated, too. The\npotential influence of different translation strate-\ngies and resulting divergences (shifts) on MT eval-\nuation was discussed in (Popovi ´c, 2019), whereas\nVyas et al. (2018) explored automatic identifica-\ntion of such divergences and their effects on MT\ntraining.\nTexts translated by different translators De-\nspite of a large body of work dealing with anal-\nysis of different translated texts in different con-\ntexts, there is, however, not much work about texts\ntranslated by different translators. Rubino et al.\n(2016) explored effects of translator’s expertise to\nsome extent, and reported that texts translated by\nstudents could be automatically distinguished from\noriginals with higher accuracy than texts translated\nby professional translators. This indicates that the\nfeatures of professional HTs are more similar to\nthe features of originals. To the best of our knowl-\nedge, our work is the first attempt to systematically\ncompare texts translated by different translators.\n3 Data sets\nFor our experiments, we used three avail-\nable parallel data sets involving three different\nlanguage pairs and five translation directions:\nEnglish!Croatian ( EnHr ), German$French\n(DeFr ) and English$Finnish ( EnFi ). All data\nsets belong to the news domain and originate from\nthe publicly available WMT shared tasks.\nIdeally, each data set should have been designed\nspecifically for one particular RQ, and created un-\nder the same conditions: each of the translators\nshould have translated the same, sufficiently large\nsource text. In addition, all source language texts\nshould be originally written in that language, not\nbeing translated from some other language. 
Ta-\nble 1 summarises the properties of our three data\nsets, and the following limitations can be noted:\n* None of the data sets were specifically de-\nsigned for one RQ: only the EnHr data set is (al-\nmost) ideal for translation expertise (RQ1). The\nDeFr is appropriate for both expertise (RQ1) and\nnative language (RQ2), whereas the EnFi data set\nis suitable for native language (RQ2).\n* The EnHr data set is, as mentioned above,\nalmost ideal for exploring translation expertise, al-\nthough one HT was generated from a different\nsource text than the other three. The main draw-\nback is that both source language texts were not\nwritten originally in English, but are HTs: theywere translated from Czech in the framework of\nthe WMT 2012 and WMT 2013 shared task How-\never, this fact has no influence on the results of our\nexperiment, because all HTs are coming from the\nsame original language.\n* The main limitation of the DeFr data set is\nits small size: while in the other two data sets\nat least 1000 source sentences were translated by\neach translator, this data set ranges from 100 to\n750 sentences. Another drawback is the lack of\na common source text for all translators – there-\nfore, different HTs represent a comparable corpus\ninstead of a parallel corpus. The same limitation\nrepresents the main drawback of the EnFi data\nset.\nIn addition, several domains/genres should ide-\nally be covered, whereas all our data sets come\nfrom the news domain. Nevertheless, an ideal data\nset is, to the best of our knowledge, currently not\navailable for any of our research questions. There-\nfore, we carried out our first experiments on the\ndescribed available texts which, despite of their\nflaws, represent a good starting point for this re-\nsearch direction. The data sets are made publicly\navailable for further research.3No personal infor-\nmation (such as name, gender, age, working place)\nabout the translators are shared. The details and\nstatistics of each of the texts used are presented to-\ngether with the results in the corresponding sec-\ntions.\n4 Text features\nThe set of text features used in our experiments is\ninspired by the features frequently used in the lit-\nerature (Baroni and Bernardini, 2006; Koppel and\nOrdan, 2011; V olansky et al., 2015; Toral, 2019).\nAlthough they are also motivated by two theoret-\nical categories, simplification (Baker et al., 1993)\nand interference (Toury, 1979), they do not rep-\nresent any of these categories exclusively. The\nchoice of features is based on a hypothesis that\nthe selected features might vary depending on the\nfactors addressed in our work, namely translator’s\nexperience, native language and translation strate-\ngies.\nFor all features, punctuation marks were sepa-\nrated and counted as words. POS tags for all lan-\nguages are generated by TreeTagger.4The features\n3https://github.com/m-popovic/\ndifferent-HTs\n4https://www.cis.uni-muenchen.de/ ˜schmid/\nproperty EnHr DeFr EnFi\nRQ1: expertise + + \u0000\nRQ2: translation direction and native language \u0000 + +\nsame source language text for each translator \u0006 \u0000 \u0000\n\u00151000 sentences per translator +\u0000 +\nsource language is the original language \u0000 + +\nTable 1: Properties of the three data sets; ” \u0000” denotes lack of a specific property.\nare defined and calculated in the following way:\nSentence length: Number of words in each sen-\ntence of the text.\nSome translators might tend to generate longer\nsentences in the target text than others. 
Some\ntranslators might keep the number of words in the\ntranslated sentences closer to the number of source\ntext words than others.\nMean word length: The total number of charac-\nters in the text divided by total number of words.\nSome translators might prefer longer (poten-\ntially more complex) words than others.\nLexical variety: The total number of distinct\nwords in the text divided by the total number of\nwords in the text.\nlexV ar =N(distinct words )\nN(words )(1)\nPrevious work has shown that vocabulary of HTs\nis generally less rich than vocabulary of originals.\nHowever, some translators might use more distinct\nwords (a richer vocabulary) than others.\nMorpho-syntactic variety: The total number of\ndistinct POS tags in the text divided by the total\nnumber of words.\nmorphsynV ar =N(distinct POS )\nN(words )(2)\nSome translators might use more complex and/or\nmore diverse grammatical structures than others.\nSome might keep the grammatical structure of\ntranslated sentences closer to the one of the source\ntext than others.\nLexical density: The ratio between the total\nnumber of content words (adverbs, adjectives,\nnouns and verbs) and the total number of words.\nlexDens =N(content words )\nN(words )(3)\ntools/TreeTagger/HTs have been found to have a lower percentage\nof content words than originals. However, some\ntranslators might use more content words than oth-\ners.\n5 Experimental set-up\nFor each feature, we calculate relative difference\nbetween the feature value of the original source\ntextf(source )and the feature value of its trans-\nlation f(ht).\n\u0001(f) =f(source )\u0000f(ht)\nf(source )(4)\nThe main benefits of reporting relative differences\nare:\n\u000frelative difference reduces impact of distinct\nsource languages (language pairs);\n\u000frelative difference minimalises effects of us-\ning comparable instead of parallel HTs.\nTable 2 shows an example of lexical varieties\nof two comparable HTs. The values of the two\ntarget language lexical varieties f(ht)imply that\nthe second HT is lexically richer. However, the\nreason for that difference might simply be the ini-\ntially higher lexical variety of the second source\ntext. Relative difference, though, clearly demon-\nstrates that the second translation is lexically less\nrich and also closer to the source text.\nf(ht)f(source ) \u0001( f)\nsource 1 0.721 0.434 66.1%\nsource 2 0.832 0.548 (!) 51.8%\nTable 2: Example of analysing lexical varieties of two com-\nparable HTs and advantage of using relative differences.\nFor each text and each feature, relative differ-\nence is calculated as average value over chunks\nof 100 sentences5(approximately 2000 words),\nsimilarly to some previous work (V olansky et al.,\n550 sentences (1000 words) for the DeFr corpus due to the\nsmall size\n2015). The purpose of averaging over small\nchunks is manifold: to make sure that the length of\na text does not interfere with the feature values, to\navoid issues related to the small size of some texts,\nand to further minimise the potential effects of us-\ning comparable instead of parallel translations.\nFor each of the research questions, the obtained\nvalues are reported and discussed in the following\nsection. It is worth noting that the numbers differ\nbetween the data sets due to distinct properties of\nthe language pairs. 
For example, relative differ-\nence between Finnish and English lexical and POS\nvarieties are much larger than those between Ger-\nman and French.\nWe did not perform any text classification in\nthis experiment, because the sizes of the currently\navailable texts are not sufficient for training a clas-\nsifier.\n6 Results\n6.1 RQ1: Influence of expertise and different\ncohorts\nTheEnHr data set and the appropriate part of the\nDeFr data set were used to examine the potential\ninfluence of different translator cohorts on text fea-\ntures. The statistics showing number of sentences\nand translator cohorts for both data sets is shown\nin Table 3. All translators were native speakers of\nthe target language.\nTheEnHr data set was created in the frame-\nwork of the Abu-MaTran project.6A subset of\nthe English test set7from WMT 2012 (1011 sen-\ntences) was translated into Croatian in two ways:\nprofessional translation and crowdsourcing via the\nCrowdFlower platform.8The options on the plat-\nform were configured in a way that enables the best\npossible translation quality: geography was lim-\nited to Croatia, and only the contributors on the top\nperformance level were considered. In this way, 30\ndifferent crowdsourcing contributors participated\nin translation. In total, three HTs were created\nfrom this English source text: one by a profes-\nsional translator and two by different crowd con-\ntributors. In a later phase of the project, 1000 En-\nglish sentences9from the WMT 2013 were trans-\nlated by a student, thus representing a third trans-\nlator cohort, although as a comparable text.\n6https://www.abumatran.eu/\n7HT from Czech, as mentioned in Section 3\n8http://crowdflower.com/\n9also HT from CzechTheDeFr data set was created for the WMT\n2019 shared task. A subset of 1327 sentences\nwas originally written in German and translated\nby translators with three different levels of exper-\ntise: student (326 sentences), professional trans-\nlator (756 sentences), and specialist10(245 sen-\ntences).\n6.1.1 Results on the EnHr data set\nThe main tendencies which can be observed in\nTable 4 are variations in sentence length and lex-\nical variety, and to a lesser extent, in morpho-\nsyntactic variety. In addition, the features of the\ntwo crowd HTs are very similar, and more dis-\ntinct than the features of the other two HTs. The\nsentence length indicates that the crowd produced\nshorter Croatian translations than the professional\ntranslator and the student. Higher lexical and mor-\nphosyntactic varieties are probably a consequence\nof a large number of different contributors which\nlead to a decrease in consistency. Here, it should\nbe noted that a large lexical and/or grammatical va-\nriety as well as a large divergence from the source\ntext are not necessarily positive.\nEffects on automatic MT evaluation Since the\nEnHr data set is the only one containing paral-\nlel (instead of comparable) HTs, it represents a\nperfect data set for testing the behaviour of auto-\nmatic MT evaluation scores calculated on distinct\nreference translations. For this purpose, we trans-\nlated the English source text by two online MT sys-\ntems,11Google Translate12and Bing Translator.13\nWe then calculated the widely used BLEU score\n(Post, 2018) and two recently proposed character-\nbased metrics, F-score (Popovi ´c, 2015) and edit\ndistance (Wang et al., 2016). 
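A sketch of computing such scores against a given reference translation, assuming the sacrebleu package (Post, 2018) for BLEU and chrF (characTER has its own tool and is omitted here); the translator labels are illustrative.

```python
import sacrebleu

def score_against_each_reference(hypotheses, refs_by_translator):
    """Score one MT output against each human reference translation in turn.

    hypotheses: list of MT output sentences.
    refs_by_translator: dict mapping a translator label (e.g. 'T_hr1') to a
    list of reference sentences aligned with the hypotheses.
    """
    results = {}
    for label, refs in refs_by_translator.items():
        results[label] = {
            "BLEU": sacrebleu.corpus_bleu(hypotheses, [refs]).score,
            "chrF": sacrebleu.corpus_chrf(hypotheses, [refs]).score,
        }
    return results
```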
All scores are cal-\nculated by comparing MT output with each of the\nHTs.\nThe resulting scores in Table 5 lead to different\nconclusions depending on the used reference HT.\nAccording to the professional HT, the Google MT\noutput is substantially better than the Bing output\nin terms of all three evaluation metrics. If the first\ncrowd HT is used as a reference, the differences\nbetween the two systems become small accord-\ning to BLEU and chrF, whereas characTER even\nsays that the Bing MT output is better. A simi-\nlar tendency can be observed if the student HT is\n10not a professional translator by vocation, but experienced\n11in November 2019\n12https://translate.google.com/\n13https://www.bing.com/translator\ndata parallel translation number of\nset text translator expertise sentence pairs\nEnHr2012 en!hr T hr1 professional\n2012 en!hr T hr2 crowd 1011\n2012 en!hr T hr3 crowd\n2013 en!hr T hr4 student 1000\nDeFr2019 de!fr T fr1 student 326\n2019 de!fr T fr2 specialist 245\n2019 de!fr T fr3 professional 756\nTable 3: Characteristics of the texts used to examine the influence of translation expertise: language pair, translator, translator’s\nexpertise and number of sentences.\nEnHr translator T hr1 Thr2 Thr3 Thr4\nen!hr expertise prof. crowd crowd stud.\nSRC (en)\u0000HT(hr)\nSRC (en)\u0001(sentence length) 8.06 12.8 13.4 11.3\n\u0001(word length) -13.7 -14.3 -15.0 -14.8\n\u0001(lexical variety) -32.6 -40.3 -41.6 -35.6\n\u0001(POS variety) -413 -426 -423 -406\n\u0001(lexical density) 51.9 51.6 51.8 53.1\nTable 4: Relative differences (%) between features of the original texts and features of the translated texts for English !Croatian\ntexts translated by translators with different expertises: professional, crowd and student.\nused, albeit the comparison is not completely ap-\npropriate since the source text is different. If the\nsecond crowd HT is used, the BLEU score of the\nBing output becomes slightly better, the charac-\nTER score becomes substantially better, whereas\nthe chrF score is slightly worse than Google.\nThe fact that automatic scores calculated on dif-\nferent reference translations are different is, of\ncourse, nothing new. However, here we point out\nthat translator cohort providing the reference HT\ncan have influence on the scores and perceptions of\nsystems’ quality, and therefore represents a factor\nwhich should be taken into account in MT evalua-\ntion.\n6.1.2 Results on the DeFr data set\nTable 6 shows the text features of the DeFr data\nset. In spite of differences between this corpus\nand the EnHr corpus in terms of expertise lev-\nels, languages, as well as comparable HTs instead\nof parallel HTs, the same general tendencies can\nbe observed, namely variations in sentence length,\nlexical variety and morpho-syntactic variety. The\nsentences in the professional HT are longest and\nthe lexical variety is highest, which could be intu-\nitively expected – professional translators tend to\ndivert more from the source language and to use\nricher vocabulary. Morpho-syntactic variety, how-\never, is highest in the specialist HT, although not\nmuch higher than in the other two. All the findings\nindicate that translation expertise has influence on\nsentence length, lexical and morpho-syntactic va-riety, however a deeper analysis is needed in the\nfuture to identify the nature of these differences.\nLexical density, however, varies only in the\nDeFr data set, especially for the specialist’s trans-\nlation. 
This feature should certainly be analysed\nfurther in order to determine whether the variations\nare related to the translator expertise, or maybe to\nsome other factors such as distinct nature of the\nlanguage pair, translator’s individual preferences,\netc.\n6.2 RQ2: Influence of native language and\ntranslation direction\nThe differences between native and non-native\nHTs were analysed on appropriate portions of the\nDeFr andEnFi data sets. The statistics of the\ntexts used are shown in Table 7. As already men-\ntioned in Section 3, both data sets contain compa-\nrable HTs.\nTheDeFr texts, created for the WMT 2019\nshared task, enable two ways of investigating in-\nfluence of (non-)native language. One is to com-\npare two translation directions of one transla-\ntor: a French native specialist T fr2was trans-\nlating in both directions: from French into Ger-\nman (from their native language) and from German\ninto French (into their native language). Another\nway is to compare two translators working on the\nsame translation direction: a French native special-\nist Tfr2and a German native specialist T de1both\nwere translating from French into German.\nTheEnFi data set enables only the first type\nEnHr , en!hr BLEU\" chrF\" characTER#\nreference Google Bing Google Bing Google Bing\nThr1(professional) 41.9 34.9 65.5 60.3 30.3 33.4\nThr2(crowd) 32.9 32.6 59.8 59.0 34.1 33.4\nThr3(crowd) 29.5 29.6 57.6 57.4 36.2 35.0\nThr4(student) 34.7 31.2 58.8 57.5 35.9 35.7\nTable 5: Three automatic evaluation scores (BLEU, chrF and characTER) for English-to-Croatian on-line MT systems calcu-\nlated on reference translations produced by translators with different expertises: professional, crowd and student.\nDeFr translator T fr1Tfr2Tfr3\nde!fr expertise stud. spec. prof.\nSRC (de)\u0000HT(fr)\nSRC (de)\u0001(sentence length) -21.3 -23.4 -26.1\n\u0001(word length) 10.6 11.4 10.9\n\u0001(lexical variety) 12.8 10.3 14.8\n\u0001(POS variety) 39.8 40.9 38.7\n\u0001(lexical density) -10.2 -5.62 -13.2\nTable 6: Relative differences (%) between features of the original texts and features of the translated texts for German !French\ntexts translated by translators with different expertises: student, specialist and professional.\nof analysis, namely comparison of two transla-\ntion directions done by one translator. It contains\nthree HTs produced by a Finnish native profes-\nsional: one English into Finnish (into their native\nlanguage) translation and two Finnish to English\n(from their native language) translations.\nTable 8 presents text features for all native and\nnon-native HTs. Texts translated by one trans-\nlator in two different translation directions are\ncompared in Table 8(a). The following general ten-\ndencies can be observed for both translators and\nboth language pairs: sentence length, word length\nand lexical variety substantially differ depending\non the translation direction. Word length and lexi-\ncal variety are higher when translating into the na-\ntive language, indicating that the translators tend\nto choose longer words more often and to use a\nricher vocabulary in their native language, as intu-\nitively can be expected. As for sentence length, the\ndifferences tend in opposite directions: for DeFr ,\nthe length of non-native HTs is closer to the source\ntext (which can be intuitively expected), whereas\nforEnFi is the other way round. The reason\nmight lay in sheer differences between the two lan-\nguage pairs, which should be investigated in future\nwork. 
Deeper analysis of reasons and underlying\nphenomena is also needed for POS variety and lex-\nical density, because the tendencies are very dif-\nferent for the two language pairs: in DeFr texts,\nlexical density varies whereas there are no large\ndifferences in POS variety, and in EnFi texts is\nthe other way round.\nTable 8(b) shows the features of texts trans-\nlated from French into German by two trans-lators with different native languages . It can be\nseen that the variations in sentence length, word\nlength and lexical variety observed in Table 8(a)\nare confirmed. Furthermore, word length and lex-\nical variety are again higher in the native trans-\nlations. Sentence length of the non-native HT is\ncloser to the source language, same as the other\nDeFr non-native HT presented in Table 8(a) – this\nalso indicates that the reason for the opposite ten-\ndency observed on the EnFi language pair might\nindeed be the different nature of the language pair\nitself. In any case, a detailed analysis is definitely\nnecessary, as well as for morpho-syntactic variety\nand lexical density.\nDespite the fact that certain tendencies should\nbe investigated further, it can be noted that na-\ntive and non-native translated texts generally ex-\nhibit different traits, especially regarding sentence\nlength, word length and lexical variety. Therefore,\nthe native language of the translator should also be\ntaken into account for MT evaluation.\n7 Conclusions\nThis work presents results of a set of computa-\ntional analyses on three data sets containing three\nlanguage pairs and five translation directions with\nthe aim of finding out whether different human\ntranslations exhibit different traits. Despite certain\nlimitations, our findings represent a good base for\nanalysing different human translations.\nThe main contribution of this work is empirical,\nshowing that each of the investigated factors has\ncertain influence on the features of translated texts.\nSentence length and lexical variety are affected by\ndata parallel translation number of\nset text translator direction sentence pairs\nDeFr2019 de!fr T fr2 into native 245\n2019 fr!de T fr2 from native 235\n2019 fr!de T de1 into native 100\nEnFi2017 en!fi T fi1 into native 1502\n2017 fi!en T fi1 from native 1500\n2019 fi!en T fi1 from native 1996\nTable 7: Characteristics of the texts used to examine the influence of native language and translation direction: language pair,\ntranslator, translation direction, and number of sentences.\n(a) one translator, two translation directions\nDeFr translation direction de !fr fr!de\nTfr 2 (into native) (from native)\nSRC\u0000HT\nSRC\u0001(sentence length) -23.4 0.21\n\u0001(word length) 11.4 -4.96\n\u0001(lexical variety) 10.3 -2.82\n\u0001(POS variety) 40.9 -41.3\n\u0001(lexical density) -5.62 19.0\nEnFi translation direction en !fi fi !en\nTfi1 (into native) (from native) (from native)\nSRC\u0000HT\nSRC\u0001(sentence length) 21.5 -51.4 -46.8\n\u0001(word length) -55.5 36.2 35.0\n\u0001(lexical variety) -46.7 39.4 36.3\n\u0001(POS variety) -393 79.8 79.3\n\u0001(lexical density) -26.5 24.3 26.2\n(b) one translation direction, two translators\nDeFr translator T de1 Tfr2\nfr!de (into native) (from native)\nSRC (fr)\u0000HT(de)\nSRC (fr)\u0001(sentence length) 5.46 0.21\n\u0001(word length) -9.64 -4.96\n\u0001(lexical variety) -3.42 -2.82\n\u0001(POS variety) -24.4 -41.3\n\u0001(lexical density) 22.9 19.0\nTable 8: Relative differences (%) between features of native and non-native HTs; one translator working 
on two translation\ndirections (a) and two translators working on one translation direction (b).\nall factors, whereas word length varies depending\non native language. As for POS variety and lexi-\ncal density, a deeper analysis is needed to under-\nstand the observed tendencies. While we believe\nthat the trends observed in the reported results are\nnot incidental, more research is needed to find lin-\nguistic explanations. Our study is based on rather\nsuperficial text features at word and POS level –\ntherefore, for future work, different HTs should be\nanalysed in depth, including over- or under-using\nparticular words, collocations and POS categories,\nas well as presence or absence of different types of\ntranslation shifts and semantic divergences. Fur-\nthermore, as described in Section 3, this study is\ncarried out on sub-optimal data sets – providing\nand investigating larger data sets containing par-\nallel HTs generated from the same source text is\nnecessary. More data will also enable another lineof work, namely automatic discrimination between\ndifferent HTs.\nMore (ideal) data will also enable better analysis\nof potential effects on human and automatic MT\nevaluation. Nevertheless, even the presented pre-\nliminary results suggest that it is important to spec-\nify which kind of HTs were used for MT evalua-\ntion, especially for evaluations which involve com-\nparing human and machine translation quality. As\nMT quality improves, such comparisons are be-\ncoming more and more frequent, and are also be-\ncoming a part of WMT shared tasks – at the WMT\n2019 shared task ((Barrault et al., 2019), Section\n3.8), for the German-English language pair it is\nreported that “many systems are tied with human\nperformance”, as well as that “Facebook-FAIR\nsystem achieves super-human translation perfor-\nmance”. For this type of evaluation, we highly\nrecommend that researchers/evalutaors specify the\ndetails about the HTs used.\n8 Acknowledgments\nThe ADAPT SFI Centre for Digital Media Tech-\nnology is funded by Science Foundation Ireland\nthrough the SFI Research Centres Programme and\nis co-funded under the European Regional Devel-\nopment Fund (ERDF) through Grant 13/RC/2106.\nSpecial thanks to Maarit Koponen, Antonio\nToral, Barry Haddow, Lo ¨ıc Barrault, Franck Bur-\nlot, Tereza V ojt ˇechov ´a and M ¯arcis Pinnis for all\ninvaluable information and support.\nReferences\nAhrenberg, Lars. 2017. Comparing machine transla-\ntion and human translation: A case study. In Pro-\nceedings of the 1st Workshop on Human-Informed\nTranslation and Interpreting Technology (HiT-IT\n2017) , pages 21–28, Varna, Bulgaria, September.\nBaker, Mona, Gill Francis, and Elena Tognini-Bonelli.\n1993. Corpus linguistics and translation studies: Im-\nplications and applications. Text and Technology: in\nHonour of John Sinclair , pages 233–250.\nBaroni, Marco and Silvia Bernardini. 2006. A new\napproach to the study of translationese: Machine-\nlearning the difference between original and trans-\nlated text. Literary and Linguistic Computing ,\n21(3):259–274, September.\nBarrault, Lo ¨ıc, Ond ˇrej Bojar, Marta R. Costa-juss `a,\nChristian Federmann, Mark Fishel, Yvette Gra-\nham, Barry Haddow, Matthias Huck, Philipp Koehn,\nShervin Malmasi, Christof Monz, Mathias M ¨uller,\nSantanu Pal, Matt Post, and Marcos Zampieri. 2019.\nFindings of the 2019 Conference on Machine Trans-\nlation (WMT19). 
In Proceedings of the Fourth Con-\nference on Machine Translation (WMT 2019) , pages\n1–61, Florence, Italy, August.\nCastilho, Sheila, Natalia Resende, and Ruslan Mitkov.\n2019. What influences the features of post-editese?\na preliminary study. In Proceedings of the 2nd Work-\nshop on Human-Informed Translation and Interpret-\ning Technology (HiT-IT 2019) , pages 20–28, Varna,\nBulgaria, September.\nChesterman, Andrew. 2004. Beyond the particular. In\nTranslation universals: Do they exist? , pages 33–50.\nJohn Benjamins.\nˇCulo, Oliver and Jean Nitzke. 2016. Patterns of termi-\nnological variation in post-editing and of cognate use\nin machine translation in contrast to human transla-\ntion. In Proceedings of the 19th Annual Conference\nof the European Association for Machine Translation\n(EAMT 2016) , pages 106–114, Riga, Latvia.Daems, Joke, Orph ´ee De Clercq, and Lieve Macken.\n2017. Translationese and post-editese: how com-\nparable is comparable quality? Linguistica Antver-\npiensia New Series – Themes in Translation Studie ,\n16:89–103.\nFarrell, Michael. 2018. Machine translation markers\nin post-edited machine translation output. In Pro-\nceedings of the 40th Conference on Translating and\nthe Computer (TC40) , pages 50–59, London, UK,\nNovember.\nFreitag, Markus, Isaac Caswell, and Scott Roy. 2019.\nAPE at Scale and Its Implications on MT Evaluation\nBiases. In Proceedings of the Fourth Conference\non Machine Translation (WMT 2019) , pages 34–44,\nFlorence, Italy, August.\nKoppel, Moshe and Noam Ordan. 2011. Transla-\ntionese and its dialects. In Proceedings of the 49th\nAnnual Meeting of the Association for Computa-\ntional Linguistics: Human Language Technologies\n(ACL-HLT 2011) , pages 1318–1326, Portland, Ore-\ngon, June.\nKurokawa, David, Cyril Goutte, and Pierre Isabelle.\n2009. Automatic detection of translated text and its\nimpact on machine translation. In In Proceedings of\nMT Summit XII , pages 81–88, Ottawa, Canada, Au-\ngust.\nL¨aubli, Samuel, Rico Sennrich, and Martin V olk. 2018.\nHas machine translation achieved human parity? a\ncase for document-level evaluation. In Proceedings\nof the 2018 Conference on Empirical Methods in\nNatural Language Processing (EMNLP 2018) , pages\n4791–4796, Brussels, Belgium, October-November.\nLembersky, Gennadi, Noam Ordan, and Shuly Wintner.\n2013. Improving statistical machine translation by\nadapting translation models to translationese. Com-\nputational Linguistics , 39(4):999–1023.\nPopovi ´c, Maja. 2015. chrF: character n-gram f-score\nfor automatic MT evaluation. In Proceedings of the\nTenth Workshop on Statistical Machine Translation ,\npages 392–395, Lisbon, Portugal, September.\nPopovi ´c, Maja. 2019. On reducing translation shifts\nin translations intended for MT evaluation. In Pro-\nceedings of Machine Translation Summit XVII , pages\n80–87, Dublin, Ireland, 19–23 August.\nPost, Matt. 2018. A call for clarity in reporting BLEU\nscores. In Proceedings of the Third Conference on\nMachine Translation: Research Papers , pages 186–\n191, Brussels, Belgium, October.\nRabinovich, Ella and Shuly Wintner. 2015. Unsu-\npervised identification of translationese. Transac-\ntions of the Association for Computational Linguis-\ntics, 3:419–432, December.\nRabinovich, Ella, Sergiu Nisioi, Noam Ordan, and\nShuly Wintner. 2016. On the similarities between\nnative, non-native and translated texts. 
In Proceed-\nings of the 54th Annual Meeting of the Association\nfor Computational Linguistics (ACL 2016) , pages\n1870–1881, Berlin, Germany, August.\nRubino, Raphael, Ekaterina Lapshinova-Koltunski, and\nJosef van Genabith. 2016. Information density and\nquality estimation features as translationese indica-\ntors for human translation classification. In Proceed-\nings of the 2016 Conference of the North American\nChapter of the Association for Computational Lin-\nguistics: Human Language Technologies (NAACL-\nHLT 2016) , pages 960–970, San Diego, California,\nJune.\nToral, Antonio, Sheila Castilho, Ke Hu, and Andy Way.\n2018. Attaining the Unattainable? Reassessing\nClaims of Human Parity in Neural Machine Trans-\nlation. In Proceedings of the 3rd Conference on Ma-\nchine Translation (WMT 2018) , pages 113–123, Bel-\ngium, Brussels, October.\nToral, Antonio. 2019. Post-editese: an exacerbated\ntranslationese. In Proceedings of Machine Transla-\ntion Summit XVII , pages 273–281, Dublin, Ireland,\n19–23 August.\nToury, Gideon. 1979. Interlanguage and its manifesta-\ntions in translation. Meta , 24(2):223–231.\nVanmassenhove, Eva, Dimitar Shterionov, and Andy\nWay. 2019. Lost in translation: Loss and decay\nof linguistic richness in machine translation. In Pro-\nceedings of Machine Translation Summit XVII , pages\n222–232, Dublin, Ireland, 19–23 August.\nV olansky, Vered, Noam Ordan, and Shuly Wintner.\n2015. On the features of translationese. Digital\nScholarship in the Humanities DSH , 30(1):98–118,\nApril.\nVyas, Yogarshi, Xing Niu, and Marine Carpuat. 2018.\nIdentifying semantic divergences in parallel text\nwithout annotations. In Proceedings of the 2018\nConference of the North American Chapter of the\nAssociation for Computational Linguistics: Human\nLanguage Technologies (NAACL-HLT 2018) , pages\n1503–1515, New Orleans, Louisiana, June.\nWang, Weiyue, Jan-Thorsten Peter, Hendrik\nRosendahl, and Hermann Ney. 2016. Charac-\nter: Translation edit rate on character level. In\nProceedings of the First Conference on Machine\nTranslation , pages 505–510, Berlin, Germany,\nAugust.\nZhang, Mike and Antonio Toral. 2019. The effect\nof translationese in machine translation test sets.\nInProceedings of the 4th Conference on Machine\nTranslation (WMT 2019) , pages 73–81, Florence,\nItaly, August. Association for Computational Lin-\nguistics.",
"main_paper_content": null
}
|
{
"decision": "Unknown",
"reviews": []
}
| 0 | 0 |
[] |
[] |
{
"id": "yKP1luPolOA",
"year": null,
"venue": "EAMT 2020",
"pdf_link": "https://aclanthology.org/2020.eamt-1.52.pdf",
"forum_link": "https://openreview.net/forum?id=yKP1luPolOA",
"arxiv_id": null,
"doi": null
}
|
{
"title": "QRev: Machine Translation of User Reviews: What Influences the Translation Quality?",
"authors": [
"Maja Popovic"
],
"abstract": "Maja Popovic. Proceedings of the 22nd Annual Conference of the European Association for Machine Translation. 2020.",
"keywords": [],
"raw_extracted_content": "QRev : Machine Translation of User Reviews:\nWhat Influences the Translation Quality?\nMaja Popovi ´c\nADAPT Centre\nSchool of Computing\nDublin City University, Ireland\[email protected]\nAbstract\nThis project aims to identify the important\naspects of translation quality of user re-\nviews which will represent a starting point\nfor developing better automatic MT met-\nrics and challenge test sets, and will be\nalso helpful for developing MT systems\nfor this genre. We work on two types\nof reviews: Amazon products and IMDb\nmovies, written in English and translated\ninto two closely related target languages,\nCroatian and Serbian.\n1 Description\nData sets used for MT research include mainly\n”formal written text” (such as news) and ”for-\nmal speech” (such as TED talks). Recently, there\nhas been an increase of interest in the translation\nof ”informal written text” which focuses on very\nnoisy texts originating from sources like What-\nsApp, Twitter and Reddit. On the other hand, other\ntypes of ”informal written text” such as user re-\nviews have not been investigated thoroughly, al-\nthough they are important both from commercial\nand from a user perspective – user reviews of prod-\nucts have become an important feature that many\ncustomers expect to find.\nThis project focusses on user reviews in order to\ninvestigate which new challenges this ”mid-way”\nkind of text poses for current MT systems. The\nmain goal is to identify important quality aspects\nfor MT of user reviews which will enable:\n\u000fdevelopment of appropriate automatic evalu-\nation metrics;\nc\r2020 The authors. This article is licensed under a Creative\nCommons 3.0 licence, no derivative works, attribution, CC-\nBY-ND.\u000fdesign of test suites specialised for important\nfactors;\n\u000fdefinition of directions for improving MT\nsystems.\nAlthough the focus of the project are user reviews\ntranslated into Serbian and Croatian (as a case in-\nvolving mid-size less-resourced morphologically\nrich European languages), the proposed evaluation\nstrategy is completely genre/domain/language in-\ndependent, so it can be applied to any genre, do-\nmain and language pair.\n2 Data sets\nWe are working with two types of publicly avail-\nable user reviews:\n\u000fIMDb movie reviews1\n\u000fAmazon product reviews2\n3 MT systems\nThe main goal of the project is to find the common\naspects important for the translation quality, and\nnot to evaluate or compare particular MT systems.\nWe are currently analysing MT outputs3of three\non-line systems: Google Translate4, Bing5and\nAmazon translate6. We are also developing our\nown system using publicly available data, which\nwill be analysed in the later stages of the project.\n1https://ai.stanford.edu/ ˜amaas/data/\nsentiment/\n2http://jmcauley.ucsd.edu/data/amazon/\n3generated at the end of January 2020\n4https://translate.google.com/\n5https://www.bing.com/translator\n6https://aws.amazon.com/translate/\n4 Evaluation procedure\nOur evaluation procedure is based on comprehen-\nsibility and fidelity (adequacy) (Roturier and Ben-\nsadoun, 2011), and it is being carried out on the re-\nview level (not on the sentence level). It should be\nnoted that comprehensibility is not fluency – a flu-\nent text can be incomprehensible, and vice versa.\nThe novelty of our procedure is asking the annota-\ntors to concentrate on problematic parts of the text\nand to mark them, without assigning any scores\nor classifying errors. 
The procedure can also be\nguided by other evaluation criteria, not only com-\nprehensibility and adequacy. The annotators were\ncomputational linguistics students and researchers\nfluent in the source language and native speakers\nof the target language. The annotation consisted of\ntwo independent subsequent tasks with the follow-\ning guidelines:\nComprehensibility A monolingual task without\naccess to the original source language text. Which\nparts of the translated review are not understand-\nable? Distinguish two levels: ”completely incom-\nprehensible” and ”not fully clear due to grammatic\nor stylistic errors”.\nFidelity (Adequacy) A bilingual task with ac-\ncess to the original source language text. Which\nparts of the translated review do not correspond to\nthe meaning of the original? Distinguish two lev-\nels: ”the meaning of the original text is changed”\nand ”not an optimal translation choice”. If there\nare any problems in the source language, mark it,\ntoo (spelling or other errors, incomprehensible, un-\nfinished, etc.).\nThe annotation started on 2 February 2020 and\nfinished in April 2020. The annotated texts will\nbe further analysed in order to identify common\nmistakes and linguistic phenomena which have the\nlargest influence on comprehensibility and ade-\nquacy. The main aim of the analysis is to find\nthe most important patterns and aspects which then\ncan serve as a basis for automatic metrics, test\nsuites, as well as for system improvements. In ad-\ndition, the analysis will show in which way and\nto which extent particular phenomena contribute to\ncomprehensibility and adequacy.\nIn total, 28 IMDb and 122 Amazon reviews (16807\nuntokenised English source words) are covered in\nthis evaluation. However, not all generated MT hy-\npotheses (6 for each review) are included. Each ofthe 270 included hypotheses is annotated by two\nannotators. The annotated data sets will be pub-\nlicly released under the Creative Commons CC-\nBY licence.\n5 First results: inter-annotator\nagreement and percentage of issues\nInter-annotator agreement (IAA) is shown in Ta-\nble 1 in the form of F-score and normalised edit\ndistance (WER).\nIAA (%) C F (A)\nF-score\" 85.5 86.6\nWER# 27.2 23.9\nTable 1: Inter-annotator agreement (IAA): F-score and nor-\nmalised edit distance WER.\nPercentage of words with issues for the two tar-\nget languages is shown in Table 2.\n% of issues C F (A)\nhr major 9.0 8.0\nminor 12.3 12.5\nsr major 13.1 12.1\nminor 19.4 14.4\nTable 2: Percentages of words problematic for comprehensi-\nbility (C) and fidelity/adequacy (F (A)).\nAcknowledgments\nThis research is being conducted with the financial\nsupport of the European Association for Machine\nTranslation under its programme “2019 Sponsor-\nship of Activities” at the ADAPT Research Centre\nat Dublin City University. The ADAPT SFI Cen-\ntre for Digital Media Technology is funded by Sci-\nence Foundation Ireland through the SFI Research\nCentres Programme and is co-funded under the\nEuropean Regional Development Fund (ERDF)\nthrough Grant 13/RC/2106.\nWe would like to thank all the evaluators for pro-\nviding us with annotations and feedback.\nReferences\nLohar, Pintu, Maja Popovi ´c, and Andy Way. 2019.\nBuilding English-to-Serbian Machine Translation\nSystem for IMDb Movie Reviews. In Proceedings of\nthe 7th Workshop on Balto-Slavic Natural Language\nProcessing (BSNLP 2019) , Florence, Italy, August.\nRoturier, Johann and Anthony Bensadoun. 2011. 
Evaluation of MT Systems to Translate User Generated Content. In Proceedings of the MT Summit XIII, Xiamen, China, September.",
"main_paper_content": null
}
|
{
"decision": "Unknown",
"reviews": []
}
| 0 | 0 |
[] |
[] |
{
"id": "OC01MxbGmp4",
"year": null,
"venue": "EAMT 2016",
"pdf_link": "https://aclanthology.org/W16-3410.pdf",
"forum_link": "https://openreview.net/forum?id=OC01MxbGmp4",
"arxiv_id": null,
"doi": null
}
|
{
"title": "Potential and Limits of Using Post-edits as Reference Translations for MT Evaluation",
"authors": [
"Maja Popovic",
"Mihael Arcan",
"Arle Lommel"
],
"abstract": "Maja Popovic, Mihael Arčan, Arle Lommel. Proceedings of the 19th Annual Conference of the European Association for Machine Translation. 2016.",
"keywords": [],
"raw_extracted_content": "Baltic J. Modern Computing, V ol. 4 (2016), No. 2, pp. 218–229\nPotential and Limits of Using Post-edits as\nReference Translations for MTEvaluation\nMaja POPOVI ´C1, Mihael AR ˇCAN2, Arle LOMMEL3\n1Humboldt University of Berlin\n2Insight Centre for Data Analytics, NUI Galway, Ireland\n3Common Sense Advisory (CSA Research)\[email protected], [email protected],\[email protected]\nAbstract. This work investigates the potential use of post-edited machine translation ( MT) out-\nputs as reference translations for automatic machine translation evaluation, focusing mainly on\nthe following important question: Is it necessary to take into account the machine translation\nsystem and the source language from which the given post-edits are generated?\nIn order to explore this, we investigated the use of post-edits originating from different machine\ntranslation systems (two statistical systems and two rule-based systems), as well as the use of\npost-edits originating from two different source languages (English and German). The obtained\nresults shown that for comparison of different systems using automatic evaluation metrics, a good\noption is to use a post-edit originating from a high-quality (possibly distinct) system. A better op-\ntion is to use it together with other references and post-edits, however post-edits originating from\npoor translation systems should be avoided. For tuning or development of a particular system,\npost-edited output of this same system seems to be the best reference translation.\nKeywords: machine translation evaluation, reference translations, post-edited translations\n1 Introduction\nThe evaluation of the machine translation ( MT) output is an important and difficult task.\nThe fastest way is to use an automatic evaluation metric, which compares the obtained\noutput with a human translation of the same source text and calculates a numerical\nscore related to their similarity. Despite all disadvantages and criticisms, such metrics\nare still irreplaceable for many tasks (such as rapid development of a new system, tuning\nof a statistical MTsystem, etc.) and are considered as at least baseline metrics for MT\nquality evaluation. All these metrics (n-gram based such as BLEU [Papineni et al., 2002]\nand METEOR [Banerjee and Lavie, 2005], edit-distance based such as TER [Snover et\nal., 2006], etc.) are reference-based, i.e. a human reference translation is needed as a\ngold standard. Since there is usually not only one single best translation of a text, the\nbest way of evaluating an MToutput would be to compare it with many references\nPost-edits as References 219\n– nevertheless, creating each reference translation is a time consuming and expensive\nprocess. Therefore, automatic MTevaluation is usually carried out using only a single\nreference.\nOn the other hand, MThas considerably improved in the recent years so that the use\nofMToutputs as a starting point for human translation has become a common practice.\nTherefore, ever-increasing amounts of post-edited machine translation outputs ( PEs) are\nbeing collected. These represent very valuable data and are being used for a number of\napplications, such as automatic quality prediction, adaptation, etc. Among other things,\npost-edits are more similar to MToutputs than “independent” references, thus being\npotentially more useful for automatic evaluation and/or tuning. 
However, their use as\nreference translations has been scarcely investigated so far.\nThis work explores two scenarios: comparing four distinct MTsystems using PEs\noriginating from these systems, as well as comparing translations from two different\nsource languages using PEs originating from these source languages. In addition, the\neffects of using multiple references are reported in terms of variations and standard\ndeviations of automatic scores for different number of references.\n1.1 Related work\nPost-edited translations have been used for many applications, such as automatic pre-\ndiction of translation quality [Specia, 2011], analysing various aspects of post-editing\neffort [Tatsumi and Roturier, 2010, Blain et al., 2011], human and automatic analy-\nsis of performed edit operations [Koponen, 2012, Wisniewski et al., 2013], as well as\nimproving translation and language model of an SMT system by learning from post-\nedits [Bertoldi et al., 2013, Denkowski et al., 2014, Mathur et al., 2014]. The cache-\nbased approach, introduced in [Bertoldi et al., 2013], makes it possible to periodically\nadd knowledge from PEs into an SMT system in real-time, without the need to stop it.\nThe main idea behind the cache-based models is to mix a large global (static) model\nwith a small local (dynamic) model estimated from recent items observed in the his-\ntory of the input stream. In [Wisniewski et al., 2013], the PEs are used as references\nfor automatic estimation of performed edit operations, namely substitutions, deletions,\ninsertions and shifts. [Denkowski et al., 2014] report the improvements of the BLEU\nscores calculated on independent references as well as on PEs in order to emphasise the\nsuitability of their methods for the post-editing task.\nA number of publications deals with the usage of multiple references for automatic\nMTevaluation. Using pseudo-references, i.e. raw translation outputs from different MT\nsystems has been investigated in [Albrecht and Hwa, 2007, Albrecht and Hwa, 2008]\nand it is shown that, even though these are not correct human translations, it is benefi-\nciary to add pseudo-references instead of using one single reference. Adding automat-\nically generated paraphrases together to a set of standard human references for tuning\nhas been investigated in [Madnani et al., 2008], and it is shown that the paraphrases\nare improving automatic scores BLEU and TER when the number of multiple human\nreferences is less than four. Recently, multiple references have been explored in [Qin\nand Specia, 2015] in terms of using recurring information in these references in order to\ngenerate better version of BLEU and NIST [Doddington, 2002] metrics by better n-gram\nweighting.\n220 Popovi ´c et al.\nTo the best of our knowledge, no systematic investigation regarding the use of post-\nedited translation outputs as reference translations has been carried out yet.\n2 Research questions\nAlthough the PEs are intuitively better suitable for MTevaluation than standard human\nreferences because they are closer to the MToutput structure, there are several important\nquestions which have to be taken into account:\n1. Should the PEoriginate from the very same MTsystem, or is it acceptable to use\nany PE?\n2. Is the source language of any importance?\n3. 
Does the system type (statistical or rule-based) have any impact?\nIn order to systematically explore the potential and limits of post-edits and answer\nthese questions, following scenarios are investigated:\n–using PEs produced by four distinct MTsystems;\n–using PEs generated from two different source languages;\nThe PEs are used for system comparison in order to explore variations and possible\nbias of the obtained automatic scores. Apart from the use of each post-edit separately,\nthe effects of combining them in the form of multiple references has been investigated.\nIn addition, the effect of the source language has also been explored in terms of tuning\nanSMT system. The details about the experiments and the obtained results are described\nin the next two sections.\n3 Experiments\n3.1 Data sets\nFor investigation of effects described in the previous section, two suitable data sets\ncontaining different language pairs, target languages and domains were available:\n1.TARA XÜ texts [Avramidis et al., 2014] containing German-to-English, German-\nto-Spanish and English-to-German raw translations and PEs of WMT news texts\ngenerated by two SMT (phrase-based and hierarchical) and two RBMT systems;\n2. O PENSUBTITLES texts from the PE2rr corpus [Popovi ´c and Ar ˇcan, 2016] contain-\ning Serbian and Slovenian subtitle raw translations and PEs generated by phrase-\nbased SMT systems from English and from German.\nBoth data sets contain single standard reference translations, as well as sentence-level\nhuman rankings.\nFor the TARA XÜ WMT texts, post-editing and ranking were performed by profes-\nsional translators, and for the PE2rr O PENSUBTITLES texts by researchers familiar with\nmachine and human translation highly fluent both in source and in target languages. De-\ntails about the texts can be seen in Table 1.4\n4Although the texts are already publicly available, they are also available in the exact form used\nin this work at https://github.com/m-popovic/multiple-edits-refs .\nPost-edits as References 221\nTable 1: Data statistics\ndomain language # source avg. target # of\npair sentences sent. length PE\nWMT de-en 240 22.9 4\n(TARA XÜ) de-es 40 26.8 4\nen-de 272 21.9 4\nes-de 101 23.2 4\nOPEN en-sr 440 8.3 2\nSUBTITLES de-sr 440 8.1 2\n(PE2rr) en-sl 440 8.7 2\nde-sl 440 8.5 2\nIt should be noted that, although there are more (larger) publicly available data sets\ncontaining post-edited MToutputs, none of these sets contains post-edits originating\nfrom different translation systems or from different source languages, which are re-\nquested to answer the questions posed in Section 2.\n3.2 Evaluation methods\nFor all experiments, BLEU scores [Papineni et al., 2002] and character n-gram F scores,\ni.e. CHRF3 scores [Popovi ´c, 2015], calculated using different PEs are reported. BLEU\nis used as a well-known and widely used metric, and CHRF3 as a simple tokenisation-\nindependent metric, which has shown very good correlations with human judgements\non the WMT -2015 shared metric task [Stanojevi ´c et al., 2015], both on the system level\nas well as on the segment level, especially for morphologically rich(er) languages.\nFor both scores, Pearson’s system-level correlation coefficient ris reported for each\nPE. For CHRF3, segment-level Kendall’s \u001ccorrelation coefficient is presented as well.\nFor both correlation coefficients, the ties in human rankings were excluded from calcu-\nlation. 
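These coefficients can be computed with standard statistical libraries. The sketch below is only an illustration: it assumes SciPy (the tooling actually used is not stated) and a simplified treatment of ties, in which segments with tied human judgements are dropped before computing Kendall's tau.

# Illustrative only; SciPy is an assumption, not the authors' stated tooling.
from scipy.stats import pearsonr, kendalltau

def system_level_pearson(auto_scores, human_scores):
    # Pearson's r between system-level automatic scores and human judgements
    return pearsonr(auto_scores, human_scores)[0]

def segment_level_kendall(auto_scores, human_ranks):
    # Kendall's tau over segments; tied human judgements (marked None here,
    # a simplification of the exclusion described above) are dropped first
    pairs = [(a, h) for a, h in zip(auto_scores, human_ranks) if h is not None]
    xs, ys = zip(*pairs)
    return kendalltau(xs, ys)[0]

# Example with the S1-pe reference row and human ranks of Table 2 (Section 4.1);
# this prints approximately -0.40, the system-level correlation reported there.
print(system_level_pearson([41.0, 25.8, 22.5, 20.0], [57.6, 47.6, 69.3, 67.4]))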
In all tables, post-edited MToutputs are marked withpe.\nInitially, for each of the two data sets the scores were calculated separately for each\ntarget language. Nevertheless, since no differences related to the target language were\nobserved, the results were merged.\n4 Results\n4.1 Post-edits from (four) different translation systems\nIn order to investigate PEs originating from different MTsystems, the TARA XÜ corpus\nwas used, where each source sentence was translated by four MTsystems. Although\nthere is certain overlap, i.e. some of the source sentences are human translations of\nother source sentences, the majority of them are unique. BLEU and CHRF3 scores are\ncalculated separately using each of PEs as reference, as well as for combinations of\n222 Popovi ´c et al.\nmultiple PEs. The scores, together with system-level and segment-level correlation co-\nefficients, are presented in Table 2.\nTable 2: BLEU (left) and CHRF3 (right) scores calculated on PEs originating from four\ndistinct MTsystems (two SMT and two RBMT ) and on an independent reference trans-\nlation; the scores are strongly biased towards the particular system and slightly biased\ntowards the system type; the best option is to use PEof a high performance system or\nmultiple references without PEs of poor quality systems.\nBLEU scores translation output corr.\n# reference(s) S1 S2RB1RB2sys\n1S1pe41.0 25.8 22.5 20.0 -.40\nS2pe27.8 35.4 21.2 19.7 -.99\nRB1pe22.4 19.4 46.6 25.9 .75\nRB2pe21.7 19.4 28.8 41.3 .77\nreference 12.7 11.4 12.0 10.6 -.15\n2 two SMTpe43.4 38.0 27.0 24.7 -.97\ntwo RBMTpe27.0 23.6 48.7 43.5 .99\n3 no S1pe34.8 38.1 49.5 44.4 .93\nnoS2pe43.8 31.1 49.5 44.5 .98\nnoRB1pe44.3 38.9 35.6 43.9 .20\nnoRB2pe44.4 38.8 48.9 32.2 .19\n4 allpe44.9 39.4 50.0 45.1 .92\n5 allpe+ref 46.5 40.8 51.6 45.8 .88\nhuman ranks 57.6 47.6 69.3 67.4CHRF3 scores translation output corr.\n# reference(s) S1 S2RB1RB2sysseg\n1S1pe67.9 55.8 55.7 54.4 -.24 .03\nS2pe58.1 63.5 54.6 53.8 -.98 .12\nRB1pe54.0 50.7 72.3 58.6 .83.29\nRB2pe53.3 50.6 59.4 69.5 .80.24\nreference 43.4 41.4 44.0 43.6 .93.13\n2 two SMTpe68.6 64.2 58.1 57.0 -.71 .30\ntwo RBMTpe56.6 53.4 72.8 70.2 .96.34\n3 no S1pe60.9 64.3 73.0 70.4 .78.38\nnoS2pe68.6 58.0 73.0 70.4 .95.30\nnoRB1pe68.8 64.4 62.5 70.2 .10.14\nnoRB2pe68.9 64.5 72.9 61.4 .26.20\n4 allpe69.0 64.7 73.2 70.6 .97.30\n5 allpe+ref 69.1 64.8 73.2 70.6 .97.30\nhuman ranks 32.0 22.0 54.8 46.5\nThe following can be observed:\n–each system gets the highest score when its own PEis used as a reference (bold);\nsystem level correlations are very low if the worse ranked system’s PEs are used –\nin such scenario, worst systems obtain the highest automatic scores;\n–the scores for both of SMT systems are higher if the two SMTPE s are used; analo-\ngously applies for the RBMT systems;\n–the best options in terms of correlations are\n\u000fusing PEof the best ranked system;\n\u000fnot using PEof the worst ranked system;\n\u000fusing all PEs (and reference).\nTable 3 presents edit distances between PEs as well as between PEs and the refer-\nence, and it can be seen that the differences are not negligible, which explains the strong\nbias towards the particular system. 
It can also be seen that the post-edits of the same\nsystem types are slightly closer ( \u001835%) than those of the two different system types\nPost-edits as References 223\nTable 3: Edit distances between PEs originating from four distinct systems and refer-\nence; the PEs of the same system types are slightly closer than those of the two different\nsystem types; the reference is significantly different from all PEs.\nedit distance S1peS2peRB1peRB2peref\nS1pe/ 34.3 41.0 42.9 70.0\nS2pe34.2 / 41.9 42.4 70.1\nRB1pe40.5 41.5 / 35.4 70.9\nRB2pe41.7 41.3 34.7 / 69.6\nref 69.0 69.2 70.6 70.5 /\n(\u001842%), as well as that there is a large distance ( \u001870%) between the reference and\neach of the PEs.\nAn example of German-to-English translation outputs, PEs and the corresponding\nreference is presented in Table 4.\nTable 4: Example of post-edited German-to-English MToutputs originating from four\ndistinct translation systems.\nsystem translation output PE\nS1 There are also a few cars off the road. There are also a few cars off the road.\nS2 Few cars are off the road. A few cars are also off the road.\nRB1 Also a few Pkws lie in the street ditch. Also, a few cars are lying on the side\nof the street.\nRB2 Also a few car lies in the ditch. A few cars are also lying in the ditch.\nreference: Also several cars ended up in a ditch.\n4.2 Post-edits from (two) different source languages\nFor exploring influence of the source language, the O PENSUBTITLES texts were used,\nwhere each of the parallel German and English source sentences was translated by a\ncorresponding phrase-based SMT system. The effects of the source language on the\nautomatic scores are shown in Table 5. Since there are only two systems to compare,\nsystem-level Pearson’s correlation coefficient can be either 1 or -1.\nIt can be noted that:\n–the source language strongly influences the results: for each translation output, the\nautomatic scores are always higher when its own PEis used;\n224 Popovi ´c et al.\nTable 5: BLEU (left) and CHRF3 (right) scores calculated on SMTPE s originating from\ntwo different source languages and on an independent reference translation; the results\nare strongly biased towards the source language; the best option is to use PEof a high\nperformance system or multiple references without PEs of poor systems.\nBLEU scores translation output corr.\nreference(s) en!x de!x sys\n1 en!xpe47.7 23.9 1\nde!xpe24.4 45.5 -1\nreference 24.8 17.2 1\n2 en!xpe+ref 51.3 28.2 1\nde!xpe+ref 35.9 47.9 -1\nbothpe50.4 48.0 1\n3 bothpe+ref 53.0 49.2 1\nhuman ranks 38.6 21.1CHRF3 scores translation output corr.\nreference(s) en!x de!x sys seg\n1 en!xpe64.7 44.6 1.42\nde!xpe45.8 62.6 -1.13\nreference 47.8 39.4 1.44\n2 en!xpe+ref 65.6 47.8 1.42\nde!xpe+ref 54.6 63.4 -1.28\nbothpe66.5 63.8 1.48\n3 bothpe+ref 67.2 64.3 1.50\nhuman ranks 38.6 21.1\n–using PEof the better ranked system yields good correlation, whereas using PEfrom\nthe worse system claims that this system is better;\n–the scores obtained by the independent reference are more similar to those obtained\nby the PEgenerated from English.\nFurthermore, Table 6 shows that edit distances between PEs are rather large, about\n45%. Similar edit distance can be seen between the independent reference and the PE\noriginating from English, whereas for the PEoriginating from German it is much larger\n– over 55%. 
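The edit distances in Tables 3 and 6 are word-level Levenshtein distances between two versions of the same text, expressed as percentages. The following sketch is illustrative only; normalising by the length of the text taken as reference is an assumption, and the example reuses the two SMT post-edits from Table 4.

# Minimal sketch of a normalised word-level edit distance (WER), in percent.
def wer(hyp_words, ref_words):
    d = [[0] * (len(ref_words) + 1) for _ in range(len(hyp_words) + 1)]
    for i in range(len(hyp_words) + 1):
        d[i][0] = i
    for j in range(len(ref_words) + 1):
        d[0][j] = j
    for i in range(1, len(hyp_words) + 1):
        for j in range(1, len(ref_words) + 1):
            cost = 0 if hyp_words[i - 1] == ref_words[j - 1] else 1
            d[i][j] = min(d[i - 1][j] + 1,         # deletion
                          d[i][j - 1] + 1,         # insertion
                          d[i - 1][j - 1] + cost)  # substitution
    return 100.0 * d[-1][-1] / len(ref_words)

pe_s1 = "There are also a few cars off the road .".split()
pe_s2 = "A few cars are also off the road .".split()
print(wer(pe_s1, pe_s2))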
At this point, it is important to note that the original source language of\nall used texts is English – the German source text as well as the Serbian and Slovenian\nreferences are human translations of the English original. Therefore, the fact that the\nPEoriginating from German source is an “outlier” confirms the previous findings about\nthe importance of the original source language, e.g. [Kurokawa et al., 2009,Lembersky\net al., 2013], namely that (i) a translated text has different characteristics than the same\ntext written directly in the given language, as well as that (ii) the direction of human\ntranslation has impact on MT, so that it is better to train MTsystem in the corresponding\ndirection, i.e. using original texts as the source language and human translations as the\ntarget language.\n4.3 Multiple reference effects\nApart from the main questions posed in Section 2, an additional question has been\nraised during the realisation of the described experiments – what are the actual effects\nof the use of multiple references vs. the use of a single reference?\nThe advantage of multiple references is surely well known as mentioned in Sec-\ntion 1.1, however our question is – what is exactly happening with the automatic scores?\nIn order to answer it, we explored the variations in automatic scores when different\nPost-edits as References 225\nTable 6: Edit distances between post-edits originating from two different source lan-\nguages and an independent reference translation.\nedit distance en-xpede-xpereference\nen!xpe0 44.6 44.7\nde!xpe45.7 0 56.4\nreference 45.2 55.6 0\nnumbers of multiple references are used. For this experiment, apart from the two data\nsets described in previous sections, an additional small data set5was explored as well.\nThis data set consists of only 20 English source sentences from technical domain, how-\never each source sentence corresponds to 12 different human translations into German,\ni.e. 12 multiple references. Each source sentence has been automatically translated by\nfour distinct translation systems, two statistical and two rule-based (albeit not the same\nas those used for experiments in Section 4.1), but no post-editing has been performed.\nFor each of the three data sets, average BLEU and CHRF3 values and their standard\ndeviations (\u001b) for different numbers of available reference translations are calculated\nand results are presented in Table 7. 
It can be seen that:\n–average values are logarithmically increasing with increasing number of multiple\nreferences;\n–standard deviations are\n\u000fdropping with increasing number of multiple references\n\u000fclose to zero only for more than 10 references\n\u000fsmaller for the MTsystems of lower performance\nThese tendencies can be equally observed for all data sets, no matter how many PEs\n(more similar to MToutputs) and how many independent references (less similar to MT\noutputs) are used.\n4.4 Tuning\nA preliminary experiment regarding tuning on PEs originating from different source\nlanguages has been carried out using the O PENSUBTITLES data set: (i) the translation\nsystem was tuned with MERT [Och, 2003] on BLEU using (i) the independent reference\n(standard method), (ii) using the PEoriginating from the corresponding language and\n(iii) using the PEoriginating from the other language.\nThe results for another test set (not the one used for tuning) containing 2000 sen-\ntences6are presented in Table 8 showing that tuning on the post-edit from the corre-\nsponding source language produces best BLEU and METEOR scores.\nThis confirms the effect of the source language bias and indicates a potential of\nusing PEs of a MTsystem for tuning and development of this system.\n5also available at https://github.com/m-popovic/multiple-edits-refs\n6also available at the aforementioned repository\n226 Popovi ´c et al.\nTable 7: Effects of the number of multiple references: average BLEU and CHRF3 scores\nwith standard deviations for different number of (independent) references ranging from\n1 to 12. The results are obtained on the texts used in previous sections (a), (b) as well\nas on a small text with a large number (12) of independent reference translations (c).\n(a) PEs of four different systems + one reference\ntranslation output\nnumber of SMT1 SMT2 RBMT 1 RBMT 2\nreferences avg. \u001bavg. \u001bavg. \u001b avg. \u001b\nBLEU 125.1 9.3 22.3 8.0 26.2 11.5 23.5 10.2\n235.0 7.0 30.8 5.9 37.3 9.5 33.3 8.3\n340.3 5.3 35.4 4.4 43.6 7.6 38.8 6.7\n443.9 3.4 38.5 2.8 48.2 5.2 42.8 4.6\nall (5) 46.5 / 40.8 / 51.6 / 45.8 /\nCHRF3 1 55.3 7.9 52.4 7.2 57.2 9.1 56.0 8.4\n262.0 5.7 58.4 5.0 64.6 7.0 62.9 6.2\n365.1 4.5 61.3 3.8 68.3 5.7 66.2 5.0\n467.4 2.9 63.3 2.6 71.0 4.2 68.7 3.5\nall (5) 69.1 / 64.8 / 73.2 / 70.6 /\n(b) PEs of two source languages + one reference\ntranslation output\nnumber of en!x de!x\nreferences avg. \u001b avg. \u001b\nBLEU 1 32.3 10.9 28.9 12.1\n2 45.9 7.0 41.4 9.3\nall (3) 53.0 / 49.2 /\nCHRF3 1 52.8 8.5 48.9 9.9\n2 62.2 5.4 58.3 7.4\nall (3) 67.2 / 64.3 /\n(c) twelve references\ntranslation output\nnumber of sys1 sys2 sys3 sys4\nreferences avg. \u001bavg. \u001bavg. \u001bavg. \u001b\nBLEU 132.1 8.4 29.4 9.7 23.2 6.5 13.3 5.0\n241.6 6.0 39.2 9.2 29.2 4.2 17.7 2.9\n10 61.2 1.7 62.4 2.0 40.5 0.6 26.0 0.6\n11 62.0 1.2 63.3 1.3 40.8 0.4 26.3 0.4\nall (12) 62.8 / 64.0 / 41.1 / 26.6 /\nCHRF3 1 63.6 7.8 61.5 8.6 56.0 5.6 54.2 5.7\n271.1 5.1 69.3 6.8 62.0 3.3 60.0 3.7\n10 80.5 0.5 81.3 1.1 68.6 0.3 67.1 0.3\n11 80.7 0.3 81.7 0.6 68.8 0.2 67.3 0.2\nall (12) 81.0 / 82.0 / 69.0 / 67.5 /\nPost-edits as References 227\nTable 8: Effects of source language on tuning of an SMT system: MERT tuning on BLEU\nusing independent reference, post-edit from the corresponding source language and\npost-edit from another source language. 
The best BLEU and METEOR scores are ob-\ntained when the corresponding source language post-edit is used.\ntranslating tuned on BLEU METEOR\nen!sr ref 20.1 39.2\nen!srpe21.6 39.9\nde!srpe20.8 39.8\nde!sr ref 17.2 35.4\nen!srpe16.8 35.5\nde!srpe18.0 35.5translating tuned on BLEU METEOR\nen!sl ref 26.0 45.2\nen!slpe26.5 45.3\nde!slpe25.5 44.4\nde!sl ref 18.1 36.6\nen!slpe18.4 36.7\nde!slpe18.8 37.1\n5 Discussion\nKnowing how difficult the generation of (even a single) references/ PEs is, the following\nfindings from the results described in Section 4 can be summarised:\n–for comparison of different systems, using single PEof a high quality translation\noutput yields reliable automatic scores; the scores are even more reliable if the PE\nis generated by an external system – otherwise, the ranking would be still correct\nbut the scores will be biased to this particular system;\n–using multiple PEs (and references) is generally beneficial – however, it is better\nto have fewer PEs of high quality translation outputs than more PEs of low quality\ntranslation outputs;\n–evaluation of low quality translation outputs is less prone to variability and is gen-\nerally more reliable, except if (one of) the used reference(s) is its own PE; on the\nother hand, high quality translation outputs can easily be underestimated if using a\nsingle reference/ PE;\n–for tuning and development of a particular system, the PEfrom this very system\nshould be used.\n6 Summary and outlook\nThis work has examined the potential and limits of the use of post-edited MToutputs as\nreference translations for automatic MTevaluation. The experiments have shown that\nthe post-edited translation outputs are definitely useful as reference translations, but\nit should be kept in mind that the obtained automatic evaluation scores are strongly\nbiased towards the actual system by which the used PEis generated, as well as towards\nthe source language from which the used PEoriginates. The best option for comparison\nof different systems using a single PEis to use PEof a high quality translation output\nwhich is, if possible, generated by an independent system.\n228 Popovi ´c et al.\nMultiple references are in principle beneficial, although PEs generated from low\nquality translation outputs should be avoided. Further investigation concerning both\nquality and quantity of multiple references should be carried out.\nFor tuning an SMT system, the best option is to use a PEgenerated by this same\nsystem. Nevertheless, it should be noted that this was a preliminary experiment, so that\nfurther confirmation of reported findings on more data and language pairs is necessary.\nAcknowledgments\nThis publication has emanated from research supported by the T RAMOOC project\n(Translation for Massive Open Online Courses), partially funded by the European Com-\nmission under H2020-ICT-2014/H2020-ICT-2014-1 under Grant Agreement Number\n644333, and by a research grant from Science Foundation Ireland (SFI) under Grant\nNumber SFI/12/RC/2289 (Insight)\nReferences\nJoshua S. Albrecht, Rebecca Hwa (2007). Regression for Sentence-level MT Evaluation with\nPseudo-references. In Proceedings of the 45rd Annual Meeting of the Association for Com-\nputational Linguistics (ACL-07), pages 296–303, Prague, Czech Republic, July.\nJoshua S. Albrecht and Rebecca Hwa. 2008. 
The Role of Pseudo-references in MT evaluation.\nInProceedings of the 3rd Workshop on Statistical Machine Translation (WMT-08) , pages\n187–190, Columbus, Ohio, June.\nEleftherios Avramidis, Aljoscha Burchardt, Sabine Hunsicker, Maja Popovi ´c, Cindy Tscher-\nwinka, David Vilar Torres, and Hans Uszkoreit. 2014. The taraXÜ Corpus of Human-\nAnnotated Machine Translations. In Proceedings of the 9th International Conference on Lan-\nguage Resources and Evaluation (LREC-14) , pages 2679–2682, Reykjavik, Iceland, May.\nSatanjeev Banerjee and Alon Lavie. 2005. METEOR: An Automatic Metric for MT Evaluation\nwith Improved Correlation with Human Judgements. In Proceedings of the ACL-05 Workshop\non Intrinsic and Extrinsic Evaluation Measures for MT and/or Summarization , pages 65–72,\nAnn Arbor, MI, June.\nNicola Bertoldi, Mauro Cettolo, and Marcello Federico. 2013. Cache-based Online Adaptation\nfor Machine Translation Enhanced Computer Assisted Translation. In Proceedings of MT\nSummit XIV , Nice, France.\nFrédéric Blain, Jean Senellart, Holger Schwenk, Mirko Plitt, and Johann Roturier. 2011. Qual-\nitative Analysis of Post-Editing for High Quality Machine Translation. In Proceedings of\nMachine Translation Summit XIII , Xiamen, China, September.\nMichael Denkowski, Chris Dyer, and Alon Lavie. 2014. Learning from Post-Editing: Online\nModel Adaptation for Statistical Machine Translation. In Proceedings of the 14th Conference\nof the European Chapter of the Association for Computational Linguistics (EACL-14) , pages\n395–404, Gothenburg, Sweden, April.\nGeorge Doddington. 2002. Automatic Evaluation of Machine Tanslation Quality using n-gram\nCo-occurrence Statistics. In Proceedings of the ARPA Workshop on Human Language Tech-\nnology , pages 128–132, San Diego, CA, March.\nMaarit Koponen. 2012. Comparing human perceptions of post-editing effort with post-editing\noperations. In Proceedings of the 7th Workshop on Statistical Machine Translation (WMT-\n12), pages 181–190, Montréal, Canada, June.\nPost-edits as References 229\nDavid Kurokawa, Cyril Goutte, and Pierre Isabelle. 2009. Automatic Detection of Translated\nText and its Impact on Machine Translation. In Proceedings of MT Summit XII , pages 81–88,\nOttawa, Canada, August.\nGennadi Lembersky, Noam Ordan, and Shuly Wintner. 2013. Improving Statistical Machine\nTranslation by Adapting Translation Models to Translationese. Computational Linguistics ,\n39(4):999–1024, December.\nNitin Madnani, Philip Resnik, Bonnie J. Dorr, and Richard Schwartz. 2008. Are multiple refer-\nence translations necessary? Investigating the value of paraphrased reference translations in\nparameter optimization. In Proceedings of the 8th Conference of the Association for Machine\nTranslation in the Americas (AMTA-08) , Waikiki, Hawaii, October.\nPrashant Mathur, Mauro Cettolo, Marcello Federico, and José Guillerme Carmago de Souza.\n2014. Online Multi-User Adaptive Statistical Machine Translation. In Proceedings of the\n11th Conference of the Association for Machine Translation of the Americas (AMTA-14) ,\npages 152–165, Vancouver, Canada, October.\nFranz Josef Och. 2003. Minimum Error Rate Training in Statistical Machine Translation. In\nProceedings of the 41th Annual Meeting of the Association for Computational Linguistics\n(ACL 03) , pages 160–167, Sapporo, Japan, July.\nKishore Papineni, Salim Roukos, Todd Ward, Wei-Jing Zhu (2002). BLEU: a Method for Auto-\nmatic Evaluation of Machine Translation. 
In Proceedings of the 40th Annual Meeting of the\nAssociation for Computational Linguistics (ACL-02) , pages 311–318, Philadelphia, PA, July.\nMaja Popovi ´c. 2015. chrF: Character n-gram F-score for Automatic MT Evaluation. In Proceed-\nings of the 10th Workshop on Statistical Machine Translation (WMT-15) , pages 392–395,\nLisbon, Portugal, September.\nMaja Popovi ´c, Mihael Ar ˇcan. 2016. PE2rr corpus: Manual Error Annotation of Automatically\nPre-annotated MT Post-edits. In Proceedings of the 10th International Conference on Lan-\nguage Resources and Evaluation (LREC-16) , Portorož, Slovenia, May.\nYing Qin and Lucia Specia. 2015. Truly Exploring Multiple References for Machine Translation\nEvaluation. In Proceedings of the 18th Annual Conference of the European Association for\nMachine Translation (EAMT-15) , pages 113–120, Antalya, Turkey, May.\nMatthew Snover, Bonnie Dorr, Richard Schwartz, Linnea Micciulla, and John Makhoul. 2006. A\nStudy of Translation Error Rate with Targeted Human Annotation. In Proceedings of the 7th\nConference of the Association for Machine Translation in the Americas (AMTA-06) , pages\n223–231, Boston, MA, August.\nLucia Specia. 2011. Exploiting Objective Annotations for Measuring Translation Post-editing Ef-\nfort. In Proceedings of the 15th Annual Conference of the European Association for Machine\nTranslation (EAMT-11) , pages 73–80, Leuven, Belgium, May.\nMiloš Stanojevi ´c, Amir Kamran, Philipp Koehn, and Ond ˇrej Bojar. 2015. Results of the WMT15\nMetrics Shared Task. In Proceedings of the 10th Workshop on Statistical Machine Translation\n(WMT-15) , pages 256–273, Lisbon, Portugal, September.\nMidori Tatsumi and Johann Roturier. 2010. Source Text Characteristics and Technical and Tem-\nporal Post-editing Effort: What is their Relationship? In Proceedings of the Second Joint\nEM+/CGNL Worskhop Bringing MT to the user (JEC-10) , pages 43–51, Denver, Colorado,\nNovember.\nGuillaume Wisniewski, Anil Kumar Singh, Natalia Segal, and François Yvon. 2013. Design and\nAnalysis of a Large Corpus of Post-edited Translations: Quality Estimation, Failure Analysis\nand the Variability of Post-edition. In Proceedings of MT Summit XIV , pages 117–124, Nice,\nFrance, September.\nReceived May 2, 2016 , accepted May11, 2016",
"main_paper_content": null
}
|
{
"decision": "Unknown",
"reviews": []
}
| 0 | 0 |
[] |
[] |
{
"id": "fJ-_SYlH2Lk",
"year": null,
"venue": "EAMT 2015",
"pdf_link": "https://aclanthology.org/W15-4914.pdf",
"forum_link": "https://openreview.net/forum?id=fJ-_SYlH2Lk",
"arxiv_id": null,
"doi": null
}
|
{
"title": "Poor man's lemmatisation for automatic error classification",
"authors": [
"Maja Popovic",
"Mihael Arcan",
"Eleftherios Avramidis",
"Aljoscha Burchardt",
"Arle Lommel"
],
"abstract": "Maja Popović, Mihael Arčan, Eleftherios Avramidis, Aljoscha Burchardt, Arle Lommel. Proceedings of the 18th Annual Conference of the European Association for Machine Translation. 2015.",
"keywords": [],
"raw_extracted_content": "Poor man’s lemmatisation for automatic error classification\nMaja Popovi ´c1Mihael Ar ˇcan2Eleftherios Avramidis1\nAljoscha Burchardt1Arle Lommel1\n1DFKI – Language Technology Lab, Berlin, Germany\[email protected]\n2Insight Centre for Data Analytics, National University of Galway, Ireland\[email protected]\nAbstract\nThis paper demonstrates the possibility to\nmake an existing automatic error classi-\nfier for machine translations independent\nfrom the requirement of lemmatisation.\nThis makes it usable also for smaller and\nunder-resourced languages and in situa-\ntions where there is no lemmatiser at hand.\nIt is shown that cutting all words into the\nfirst four letters is the best method even\nfor highly inflective languages, preserving\nboth the detected distribution of error types\nwithin a translation output as well as over\nvarious translation outputs.\nThe main cost of not using a lemmatiser\nis the lower accuracy of detecting the in-\nflectional error class due to its confusion\nwith mistranslations. For shorter words,\nactual inflectional errors will be tagged as\nmistranslations, for longer words the other\nway round. Keeping all that in mind, it is\npossible to use the error classifier without\ntarget language lemmatisation and to ex-\ntrapolate inflectional and lexical error rates\naccording to the average word length in the\nanalysed text.\n1 Introduction\nFuture improvement of machine translation (MT)\nsystems requires reliable automatic evaluation and\nerror classification tools in order to minimise ef-\nforts of time and money consuming human clas-\nsification. Therefore automatic error classification\ntools have been developed in recent years (Zeman\nc�2015 The authors. This article is licensed under a Creative\nCommons 3.0 licence, no derivative works, attribution, CC-\nBY-ND.et al., 2011; Popovi ´c, 2011) and are being used to\nfacilitate the error analysis. Although these tools\nare completely language independent, for obtain-\ning a precise error distribution over classes a lem-\nmatiser for the target language is required. For\nthe languages strongly supported in language re-\nsources and tools this does not pose a problem.\nHowever, for a number of languages a lemmatiser\nmight not be at hand, or it does not exist at all.\nThis paper investigates possibilities for obtaining\nreasonable error classification results without lem-\nmatisation. To the best of our knowledge, this issue\nhas not been investigated so far.\n2 Motivation and explored methods\nWe investigate the edit-distance i.e. word error\nrate ( WER) approach implemented in the Hjer-\nson tool (Popovi ´c, 2011), which enables detec-\ntion offive error categories:inflectional errors,\nword order errors,missing words(omissions),ex-\ntra words(additions) andlexical errors(mistrans-\nlations). For a given MT output and reference\ntranslation, the classication results are provided in\nthe form of thefive error rates, whereby the num-\nber of errors for each category is normalised over\nthe total number of words.\nThe detailed description of the approach can be\nfound in (Popovi ´c and Ney, 2011). The starting\npoint is to identify actual words contributing to the\nWord Error Rate ( WER), recall (reference) error\nrate (R PER) and precision (hypothesis) error rate\n(HPER). The WER errors are marked as substitu-\ntions, deletions and insertions. 
Then, the lemmas are used: first, to identify the inflectional errors – if the lemma of an erroneous word is correct and the full form is not. Second, the lemmas are also used for detecting omissions, additions and mistranslations. It is also possible to calculate WER based on lemmas instead of full words in order to increase the precision with regard to human error annotation, which makes the algorithm even more susceptible to a possible lack of lemmas.
If the full word forms were used as a replacement for lemmas, it would not be possible to detect any inflectional error, thus setting the inflectional error rate to zero, and noise would be introduced in the omission, addition and mistranslation error rates. Therefore, a simple use of the full forms instead of lemmas is not advisable, especially for the highly inflective languages. The goal of this work is to examine possible methods for processing the full words in a more or less simple way in order to yield reasonable error classification results by using them as a replacement for lemmas.
The following methods for word reduction are explored:
• first four letters of the word (4let)
The simplest way for word reduction is to use only its first n letters. The choice of the first four letters has been shown to be successful for improvement of word alignments (Fraser and Marcu, 2005), therefore we decided to set n to four.
• first two thirds of the word length (2thirds)
In order to take the word length into account, the words are reduced to 2/3 of their original length (rounded down).
• word stem (stem)
A more refined method which splits words into stems and suffixes based on the harmonic mean of their frequencies is used, similar to the compound splitting method described in (Koehn and Knight, 2003). The suffix of each word is removed and only the stem is preserved. For the calculation of stem and suffix frequencies, both the translation output and its corresponding reference translation are used.
Examples of two English sentences processed by each of the methods are shown in Table 1; a short code sketch of the two fixed-cut reductions (4let and 2thirds) is given further below.
Method     Example
full       The visit will reach its peak in the afternoon .
lemma      The visit will reach its peak in the afternoon .
4let       The visi will reac its peak in the afte .
2thirds    Th vis wi rea it pe in th aftern .
stem       The visi wil rea its pea in the afternoo .
full       President is receiving the Minister of Finance .
lemma      President be receive the Minister of Finance .
4let       Pres be rece the Mini of Fina .
2thirds    Presid is receiv th Minis of Fina .
stem       Presiden is receiv the Minist of Financ .
Table 1: Examples for each of the word reduction methods.
The methods are tested on various distinct target languages and domains, some of the languages being very morphologically rich. A detailed description of the texts can be found in the next section.
3 Experiments and results
The two main objectives of an automatic error classifier are:
• to estimate the error distribution within a translation output
• to compare different translation outputs in terms of error categories
Therefore we tested the described methods for both these aspects by comparing the results with those obtained when using lemmatised words, i.e. we used the error rates obtained with lemmas as the "reference" error rates. The best way for the assessment would be, of course, a comparison with human error classification. 
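Before turning to the results, the two fixed-cut reductions described above can be stated in a few lines of code. The sketch below is only an illustration and is not taken from the paper or from Hjerson; in particular, how punctuation and very short words are treated is an assumption, since the published examples do not fully pin it down.

    def reduce_4let(word, n=4):
        # 4let: keep only the first n letters of the word (n = 4 in the paper)
        return word[:n]

    def reduce_2thirds(word):
        # 2thirds: keep the first two thirds of the word length, rounded down
        return word[: (2 * len(word)) // 3]

    tokens = "The visit will reach its peak in the afternoon .".split()
    print(" ".join(reduce_4let(t) for t in tokens))
    # -> The visi will reac its peak in the afte .  (matches the 4let row of Table 1)
    print(" ".join(reduce_2thirds(t) for t in tokens))
    # e.g. "visit" -> "vis", "afternoon" -> "aftern"; very short tokens and
    # punctuation may be handled differently in the original tool

Under either reduction, a word is counted as an inflectional error when its reduced form matches the reference but its full form does not, which is why overly aggressive cuts blur the distinction between inflectional and lexical errors. 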
Nevertheless, this has\nnot been done for two reasons:first, the original\nmethod using lemmas is already thoroughly tested\nin previous work (Popovi ´c and Ney, 2011) and is\nshown to correlate well with human judgements.\nSecond, human evaluation is resource and time-\nconsuming.\nThe explored target languages in this work are\nEnglish, Spanish, German, Slovenian and Czech106\noriginating from news, technical texts, client data\nof Language Service Providers, pharmaceutical\ndomain, Europarl (Koehn, 2005), as well as the\nOpenSubtitles1spoken language corpus. In addi-\ntion, one Basque translation output from technical\ndomain has been available as well. The publicly\navailable texts are described in (Callison-Burch et\nal., 2011), (Specia, 2011) and (Tiedemann, 2012).\nThe majority of translation outputs has been cre-\nated by statistical systems but a number of trans-\nlations has been produced by rule-based systems.\nIt should be noted that not all target languages\nwere available for all domains, however the to-\ntal amount of texts and the diversity of languages\nand domains are sufficient to obtain reliable re-\nsults – about 36000 sentences with average num-\nber of words ranging from 8 (subtitles) through 15\n(domain-specific corpora) up to 25 (Europarl and\nnews) have been analysed.\nLemmas for English, Spanish and German texts\nare generated using TreeTagger,2Slovenian lem-\nmas are produced by the Obeliks tagger (Gr ˇcar et\nal., 2012), and Czech texts are lemmatised using\ntheCOMPOST tagger (Spoustov ´a et al., 2009).\nIt should be noted that all the reported results are\ncalculated using WER of lemmas (or corresponding\nsubstitutions) since no changes related to lemma\nsubstitution techniques were observed in compari-\nson with the use of the standard full word WER.\n3.1 Error distributions within a translation\noutput\nOurfirst experiment consisted of calculating dis-\ntributions offive error rates within one translation\noutput using all word reduction methods described\nin Section 2 and comparing the obtained results\nwith the reference distributions of error rates ob-\ntained using lemmas. The results for three distinct\ntarget languages are presented in Table 2: English\nas the least inflective, Spanish having very rich\nverb morphology, and Czech as generally highly\ninflective.\nReference distributions are presented in thefirst\nrow, followed by the investigated word reduction\nmethods; in the last row the results obtained us-\ning full words are shown as well, and the intu-\nitively suspected effects can be clearly seen: no\ninflectional errors are detected, and the vast major-\nity of them are tagged as lexical error (mistransla-\n1http://www.opensubtitles.org/\n2http://www.ims.uni-stuttgart.de/\nprojekte/corplex/TreeTagger/tion). Furthermore, it is confirmed that the varia-\ntions in word order errors, omissions and additions\nare small, whereas the most affected error classes\nare inflections and mistranslations.\nAs for different target languages, in the English\noutput the differences between the error rates are\nsmall for all error classes, but for the more in-\nflected Spanish text and the highly inflected Czech\ntext the situation is fairly different:4letdistribu-\ntion is closest to the reference lemma error dis-\ntribution, whereas2thirdsandstemdistributions\nare lying between the lemma and the full word\ndistributions. 
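The closeness of these distributions to the lemma reference is judged by inspection in the paper. Purely as an illustration (this measure is not used by the authors), the comparison can be made explicit as a total absolute difference over the five error rates, here with the Czech figures from Table 2:

    # Czech error rates from Table 2, in the order [infl, order, miss, add, lex]
    reference = [10.4, 10.6, 7.1, 7.6, 36.4]   # lemma(ref)
    candidates = {
        "4let":    [10.0, 10.8, 7.0, 7.7, 36.9],
        "2thirds": [ 5.6, 11.0, 6.8, 7.6, 41.4],
        "stem":    [ 7.2, 10.9, 7.0, 7.7, 39.7],
        "full":    [ 0.0, 11.3, 6.8, 7.6, 47.1],
    }

    def distance(a, b):
        # total absolute difference between two error-rate vectors, in % points
        return sum(abs(x - y) for x, y in zip(a, b))

    for name, rates in candidates.items():
        print(name, round(distance(reference, rates), 1))
    # -> 4let 1.3, 2thirds 10.5, stem 7.0, full 22.1

Under this simple measure, 4let is indeed closest to the lemma reference and the full-word variant is farthest, in line with the qualitative description. 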
In addition, it can be observed that the stem method performs better than the 2thirds method.
Target     Method       Error Rates [%]
Language                infl    order   miss   add    lex
English    lemma(ref)    1.5     7.6     5.2   3.0    8.7
           4let          1.9     7.6     5.2   3.0    8.2
           2thirds       0.9     7.5     5.3   3.0    9.3
           stem          1.2     7.6     5.3   3.0    9.0
           full          0       7.6     5.4   3.1   10.1
Spanish    lemma(ref)    4.6     6.4     5.9   3.6   13.5
           4let          4.0     6.6     6.0   3.6   13.9
           2thirds       2.6     6.4     6.0   3.5   15.5
           stem          3.1     6.6     6.1   3.6   14.8
           full          0       6.7     6.1   3.6   17.9
Czech      lemma(ref)   10.4    10.6     7.1   7.6   36.4
           4let         10.0    10.8     7.0   7.7   36.9
           2thirds       5.6    11.0     6.8   7.6   41.4
           stem          7.2    10.9     7.0   7.7   39.7
           full          0      11.3     6.8   7.6   47.1
Table 2: Comparison of error rates obtained by each of the described word reduction methods with the reference lemma error rates for three translation outputs: English (above), Spanish (middle) and Czech (below). Error rates using full words as lemma replacement are shown as well, illustrating why this method is not recommended.
In Table 3, the parts of the reference translations from Table 1 containing inflectional errors are shown together with the corresponding parts of the translation output in order to better understand the different performance of the methods. Each of the sentences contains one (verb) inflectional error. The first error, "receives" instead of "receiving", is correctly detected by all methods. The second one, "reached" instead of "reach", is correctly tagged by all methods except by 2thirds, because the reduced word forms are not the same in the translation and in the reference. The stem method often exhibits the same problem, however less frequently.
Method     Reference translation      MT output
full       The visit will reach       Visit reached
lemma      The visit will reach       Visit reach
4let       The visi will reac         Visit reac
2thirds    Th vis wi rea              Vis rea
stem       The visi will rea          Vis rea
full       President is receiving     President receives
lemma      President be receive       President receive
4let       Pres be rece               Pres rece
2thirds    Presid is receiv           President recei
stem       Presiden is receiv         President receiv
Table 3: Illustration of the main problem for inflectional error detection: if the reduced word form is not exactly the same in the reference and in the translation output (bold in the original), the error will not be tagged as inflectional. This phenomenon occurs most frequently for the 2thirds method, therefore this method exhibits the poorest performance.
3.2 Comparing translation outputs
For the comparison of different translation outputs, only the 4let method has been investigated because it produces the best error distributions (closest to those obtained by lemmas) and it is also the simplest to perform.
Figure 1 illustrates the results for the two highly inflectional languages, namely Slovenian (above) and Czech (below). Slovenian translations originating from six statistical MT systems (dealing with three different domains and two source languages) and Czech outputs produced by four different MT systems have been analysed. Only the two most critical error classes are presented, namely inflectional (left) and lexical (right) error rates – for other error categories no significant performance differences between the reduction methods were observed.
[Figure 1 (plots not reproduced): (a) Comparing six Slovenian MT outputs; (b) Comparing four Czech MT outputs. Comparison of translation outputs for highly inflective languages based on the two most critical error classes, i.e. inflectional (left) and lexical errors (right) – six Slovenian (above) and four Czech (below) translation outputs. Reference lemma error rates are presented by full lines, 4let error rates by dashed lines.]
For the Slovenian translations, the correlation between 4let and reference lemma system rankings is 1, both for the inflectional and for the lexical error rates. The same applies to the Czech lexical error rates, but not to the Czech inflections: the lemma method ranks the error rates (from highest to lowest) 1, 3, 2, 4, whereas the 4let ranking is 2, 3, 4, 1. However, the important fact is that the relative differences between the systems are very small for inflectional errors; all the systems contain a high number of inflectional errors (between 9.6 and 10.8%), whereas the absolute differences between the systems range only between 0.2 and 1%. This means that the 4let method is generally well capable of system comparison, but it is not able to capture very small relative differences correctly.
3.3 Analysis of confusions
In the previous sections it is shown that the 4let method, despite certain disadvantages, is well capable of substituting the lemmas, both for estimating error distributions within an output and for comparing error rates across the translation outputs. However, an important remaining question is: what exactly is happening? Results presented in the previous sections indicate that a number of inflectional errors are substituted by lexical errors. However, they also show that the 4let inflectional error rates are sometimes lower and sometimes higher than the lemma-based ones, thus indicating that not only a simple substitution of inflectional errors by mistranslations is taking place.
In order to explore these underlying phenomena, accuracies and confusions between error classes are calculated, and the confusion matrix is presented in Table 4. Since there are practically no variations in reordering error rates, the confusions are presented only for inflections, additions and lexical errors.
As a first step, the confusions are calculated for all merged texts and the results are presented in the first row. It is confirmed that the low accuracy of the inflections and their confusions with mistranslations are indeed the main problems; however, there is a number of reverse confusions, i.e. certain mistranslations are tagged as inflectional errors. Apart from that, there is also a certain amount of confusions between inflections and additions.
Since some of the used reference translations were independent ("free") human translations and some were post-edited translation outputs, we separated the texts into two sets and calculated confusions for each one. 
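The accuracies and confusions reported in Table 4 can be read as row-normalised percentages over word-level label pairs. The sketch below shows one plausible way to compute such figures; it assumes that per-word error labels are available from both the lemma-based run and the reduction-based run, and it is an illustration rather than the authors' actual code.

    from collections import Counter

    def confusions(lemma_labels, reduced_labels, cls):
        # lemma_labels / reduced_labels: per-word error labels (e.g. "infl",
        # "lex", "add", ...) from the lemma-based and the reduction-based run
        pairs = [(a, b) for a, b in zip(lemma_labels, reduced_labels) if a == cls]
        if not pairs:
            return {}
        counts = Counter(b for _, b in pairs)
        return {b: 100.0 * n / len(pairs) for b, n in counts.items()}

    # confusions(lemma_labels, fourlet_labels, "infl") would then yield something
    # like {"infl": 57.1, "lex": 36.0, "add": 5.6, ...}, cf. the Overall row of Table 4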
Nevertheless, no important\n3The situation regarding omissions is analogous to the one\nregarding additions.differences could be observed, as it can be seen in\nthe corresponding rows in Table 4.\nThe next step was to analyse each of the target\nlanguages separately, and the results are presented\nfurther below in the table. Although the numbers\nare more diverse, all the important phenomena are\npractically same for all languages, namely low ac-\ncuracy of inflections due to confusion with mis-\ntranslations. Only for the Basque translation the\npercentage is similar for confusions in both direc-\ntions.\nLast step was division of texts into written text\nand spoken language transcriptions, and, contrary\nto the other set-ups, several notable differences\nwere observed. First of all, the accuracy of inflec-\ntions is significantly lower for spoken language,\nand the percentage of confusions with mistransla-\ntions is much higher. On the other hand, in written\ntext much more mistranslations are substituted by\ninflections.\n3.3.1 Word length effects\nThe differences between written and spoken\nlanguage, together with the observations about\nBasque where the words can be very long, showed\nthat the word length is an important factor which is\nneglected by the simple cutting of words intofirst\nfour letters. The inflections of very short words\nsuch as articles and auxiliary verbs cannot be cap-\ntured, and some long words which are not related\nat all can be easily tagged as inflectional errors\nonly because they share thefirst four characters\n– see Table 5. Furthermore,reception,receipt,\nrecentandreceiverall sharefirst four letters and\ncould possibly be tagged as inflectional error. On\nthe other hand, such coincidences are not very fre-\nquent and therefore there are less substitutions of\nlexical errors. We calculated the average lengths\nof words for which each of the two substitution\ntypes occur, and obtained an average word length\nof 3.44 for inflection→mistranslation substitution\nand 8.64 for the reverse one.\nNeglecting the word length by the4letmethod\nwas the reason to explore the other two methods\n(2thirdsandstem) in thefirst place. However,\nthey produced significantly worse error distribu-\ntions due to the often inconsistent word cutting.\nSince thestemmethod could be potentially im-\nproved (contrary to the2thirdsmethod), we anal-\nysed its confusions and compared with those of the\n4letmethod in order to better understand the dif-\nferences. The confusions for all merged translation110\n4letinflinfl→lex infl→add lex lex→infladd add→infl\nOverall57.1 36.05.6 89.5 8.0 88.9 8.0\nReference 56.2 37.4 5.5 90.5 7.2 90.4 3.7\nPost-edit 57.9 34.3 5.8 87.5 9.8 86.8 6.1\nEnglish 47.1 46.8 5.5 93.2 5.4 94.7 2.8\nSpanish* 55.6 35.2 5.3 89.4 8.6 91.8 2.6\nGerman 43.2 47.0 7.3 87.9 8.5 84.9 8.1\nSlovenian 51.6 41.8 6.2 91.9 6.2 86.7 2.1\nCzech* 66.3 28.4 5.2 90.0 7.3 81.3 6.3\nBasque* 79.216.43.8 84.013.486.3 5.6\nWritten 65.7 27.4 5.087.0 10.187.66.4\nSpoken44.4 49.26.0 94.1 4.6 89.7 1.6\nTable 4: Accuracies and confusions between reference lemma error categories and those obtained by the\n4letmethod; for all texts (Overall), separately for post-editions and for references, separately for each\ntarget language, and separately for written and spoken language.\nMethod Reference translation MT output\nfull There were ergonomic There was ergonomische\nproblems . problems .\nlemmaThere be ergonomic Therebe inflergonomische lex\nproblem . problem .\n4letTher were ergo prob . 
Therwas lexergo inflprob .\nTable 5: Illustration of the word length problem for the4letmethod: inflectional errors for short words\n(were/was) are impossible to detect and are considered as lexical errors; on the other hand, a lexical\nerror (untranslated German wordergonomische) is tagged as inflectional error because it sharesfirst four\nletters with the reference translationergonomic.\nMethod inflinfl→lex infl→add lex lex→infladd add→infl\nOverall4let57.1 36.0 5.689.5 8.0 88.9 8.0\nstem48.4 44.2 6.094.2 4.8 89.7 4.3\nTable 6: Comparison of overall4letandstemaccuracies and confusions.111\noutputs (Overall) presented in Table 6 show that\nthestemmethod is better in avoiding substitutions\nof mistranslations and additions with inflections,\nbut the problem with low inflection error accuracy\nis worse. One possible reason is that the stem\nand the suffix frequencies are estimated from the\nvery small amount of data (only the reference and\nthe translation output) and therefore is often not\nable to perform consistent cuttings for all words.\nThis method should be investigated in future work,\ntrained on the large target language corpus as well\nas in combination with the4letmethod.\n4 Conclusions and Future Work\nThe experiments presented in this paper show that\nit is possible to use an existing automatic error\nclassifier without target language lemmas. It is\nshown that cutting all words intofirst four letters\nis the best method even for highly inflective lan-\nguages, preserving both the distribution of error\ntypes within a system as well as distribution of\neach error type over various systems. However, it\nmight not be able to capture very small variations\ncorrectly.\nThe main issue is the low accuracy of inflec-\ntional error class due to confusions with mistrans-\nlations. For shorter words, actual inflectional er-\nrors tend to be tagged as mistranslations, for longer\nwords the other way round. Keeping all that in\nmind, it is possible to use the error classifier with-\nout target language lemmatisation and to extrapo-\nlate inflectional and lexical error rates according to\nthe dominant word length in the analysed text.\nOur further work will concentrate on combining\nthe4letmethod with more refined methods which\ntake into account the word length, and also inves-\ntigating otherfixed reduction lengths, e.g. 5 and\n6. Comparison with human error classification re-\nsults as well as manual inspection of problematic\nwords and error confusion types should be carried\nout as well.\nAcknowledgments\nThis publication has emanated from research sup-\nported by QTL EAPproject – ECs FP7 (FP7/2007-\n2013) under grant agreement number 610516:\n“QTL EAP: Quality Translation by Deep Lan-\nguage Engineering Approaches” and by a research\ngrant from Science Foundation Ireland (SFI) under\nGrant Number SFI/12/RC/2289. We are grateful to\nthe reviewers for their valuable feedback.References\nCallison-Burch, Chris, Philipp Koehn, Christof Monz,\nand Omar Zaidan. 2011. Findings of the 2011\nWorkshop on Statistical Machine Translation. In\nProceedings of the Sixth Workshop on Statistical Ma-\nchine Translation (WMT 2011), pages 22–64, Edin-\nburgh, Scotland, July.\nFraser, Alexander and Daniel Marcu. 2005. ISI’s Par-\nticipation in the Romanian-English Alignment Task.\nInProceedings of the ACL Workshop on Building\nand Using Parallel Texts, pages 91–94, Ann Arbor,\nMichigan, June.\nGrˇcar, Miha, Simon Krek, and Kaja Dobrovoljc. 
2012.\nObeliks: statisti ˇcni oblikoskladenjski ozna ˇcevalnik\nin lematizator za slovenski jezik. InProceedings\nof the 8th Language Technologies Conference, pages\n89–94, Ljubljana, Slovenia, October.\nKoehn, Philipp and Kevin Knight. 2003. Empirical\nmethods for compound splitting. InProceedings of\nthe 10th Conference of the European Chapter of the\nAssociation for Computational Linguistics (EACL\n03), pages 347–354, Budapest, Hungary, April.\nKoehn, Philipp. 2005. Europarl: A Parallel Corpus for\nStatistical Machine Translation. InProceedings of\nthe 10th Machine Translation Summit, pages 79–86,\nPhuket, Thailand, September.\nPopovi ´c, Maja and Hermann Ney. 2011. Towards Au-\ntomatic Error Analysis of Machine Translation Out-\nput.Computational Linguistics, 37(4):657–688, De-\ncember.\nPopovi ´c, Maja. 2011. Hjerson: An Open Source\nTool for Automatic Error Classification of Machine\nTranslation Output.The Prague Bulletin of Mathe-\nmatical Linguistics, (96):59–68, October.\nSpecia, Lucia. 2011. Exploiting Objective Annota-\ntions for Measuring Translation Post-editing Effort.\nInProceedings of the 15th Annual Conference of\nthe European Association for Machine Translation\n(EAMT 11), pages 73–80, Leuven, Belgium, May.\nSpoustov ´a, Drahom ´ıra “Johanka”, Jan Haji ˇc, Jan Raab,\nand Miroslav Spousta. 2009. Semi-Supervised\nTraining for the Averaged Perceptron POS Tagger.\nInProceedings of the 12th Conference of the Euro-\npean Chapter of the ACL (EACL 2009), pages 763–\n771, Athens, Greece, March.\nTiedemann, J ¨org. 2012. Parallel data, tools and inter-\nfaces in OPUS. InProceedings of the 8th Interna-\ntional Conference on Language Resources and Eval-\nuation (LREC12), pages 2214–2218, May.\nZeman, Daniel, Mark Fishel, Jan Berka, and Ond ˇrej\nBojar. 2011. Addicter: What Is Wrong with My\nTranslations?The Prague Bulletin of Mathematical\nLinguistics, 96:79–88, October.112",
"main_paper_content": null
}
|
{
"decision": "Unknown",
"reviews": []
}
| 0 | 0 |
[] |
[] |
{
"id": "wAJF9Yo9aXk",
"year": null,
"venue": "EAMT 2015",
"pdf_link": "https://aclanthology.org/W15-4913.pdf",
"forum_link": "https://openreview.net/forum?id=wAJF9Yo9aXk",
"arxiv_id": null,
"doi": null
}
|
{
"title": "Identifying main obstacles for statistical machine translation of morphologically rich South Slavic languages",
"authors": [
"Maja Popovic",
"Mihael Arcan"
],
"abstract": "Maja Popović, Mihael Arčan. Proceedings of the 18th Annual Conference of the European Association for Machine Translation. 2015.",
"keywords": [],
"raw_extracted_content": "Identifying main obstacles for statistical machine translation of\nmorphologically rich South Slavic languages\nMaja Popovi ´c\nDFKI – Language Technology Lab\nBerlin, Germany\[email protected] Ar ˇcan\nInsight Centre for Data Analytics\nNational University of Galway, Ireland\[email protected]\nAbstract\nThe best way to improve a statistical ma-\nchine translation system is to identify con-\ncrete problems causing translation errors\nand address them. Many of these prob-\nlems are related to the characteristics of\nthe involved languages and differences be-\ntween them. This work explores the main\nobstacles for statistical machine transla-\ntion systems involving two morphologi-\ncally rich and under-resourced languages,\nnamely Serbian and Slovenian. Systems\nare trained for translations from and into\nEnglish and German using parallel texts\nfrom different domains, including both\nwritten and spoken language. It is shown\nthat for all translation directions structural\nproperties concerning multi-noun colloca-\ntions and exact phrase boundaries are the\nmost difficult for the systems, followed by\nnegation, preposition and local word order\ndifferences. For translation into English\nand German, articles and pronouns are the\nmost problematic, as well as disambigua-\ntion of certain frequent functional words.\nFor translation into Serbian and Slovenian,\ncases and verb inflections are most diffi-\ncult. In addition, local word order involv-\ning verbs is often incorrect and verb parts\nare often missing, especially when trans-\nlating from German.\n1 Introduction\nThe statistical approach to machine translation\n(SMT), in particular phrase-based SMT, has be-\nc�2015 The authors. This article is licensed under a Creative\nCommons 3.0 licence, no derivative works, attribution, CC-\nBY-ND.come widely used in the last years: open source\ntools such as Moses (Koehn et al., 2007) have\nmade it possible to build translation systems for\nany language pair, domain or text type within days.\nDespite the fact that for certain language pairs, e.g.\nEnglish-French, high quality SMT systems have\nbeen developed, a large number of languages and\nlanguage pairs have not been (systematically) in-\nvestigated. The largest study about European lan-\nguages in the Digital Age, the META-NET Lan-\nguage White Paper Series1in year 2012 showed\nthat only English has good machine translation\nsupport, followed by moderately supported French\nand Spanish. More languages are only fragmen-\ntary supported (such as German), whereby the ma-\njority of languages are weakly supported. Many\nof those languages are also morphologically rich,\nwhich makes the SMT task more complex, espe-\ncially if they are the target language. A large part\nof weakly supported languages consists of Slavic\nlanguages, where both Serbian and Slovenian be-\nlong. Both languages are part of to the South\nSlavic language branch, Slovenian2being the third\nofficial South Slavic language in the EU and Ser-\nbian3is the official language of a candidate mem-\nber state. 
For all these reasons, a systematic inves-\ntigation of SMT systems involving these two lan-\nguages and defining the most important errors and\nproblems can be very very beneficial for further\nstudies.\nIn the last decade, several SMT systems have\nbeen built for various South Slavic languages and\nEnglish, and for some systems certain morpho-\nsyntactic transformations have been applied more\n1http://www.meta-net.eu/whitepapers/\nkey-results-and-cross-language-comparison\n2together with Croatian and Bulgarian\n3together with Bosnian and Montenegrin97\nor less successfully. However, all the experiments\nare rather scattered and no systematic or focused\nresearch has been carried out in order to define ac-\ntual translation errors as well as their causes.\nThis paper reports results of an extensive er-\nror analysis for four language pairs: Serbian and\nSlovenian with English as well as with German,\nwhich is also a challenging language – complex\n(both as a source and as a target language) and\nstill not widely supported. SMT systems have\nbeen built for all translation directions using pub-\nlicly available parallel texts originating from sev-\neral different domains including both written and\nspoken language.\n2 Related work\nOne of thefirst publications dealing with SMT sys-\ntems for Serbian-English (Popovi ´c et al., 2005)\nand Slovenian-English (Sepesy Mau ˇcec et al.,\n2006) are reportingfirst results using small bilin-\ngual corpora. Improvements for translation into\nEnglish are also reported by reducing morpho-\nsyntactic information in the morphologically rich\nsource language. Using morpho-syntactic knowl-\nedge for the Slovenian-English language pair was\nshown to be useful for both translation directions\nin (ˇZganec Gros and Gruden, 2007). However, no\nanalysis of results has been carried out in terms\nof what were actual problems caused by the rich\nmorphology and which of those were solved by the\nmorphological preprocessing.\nThrough the transLectures project,4the\nSlovenian-English language pair became available\nin the 2013 evaluation campaign of IWSLT .5They\nreport the BLEU scores of TED talks translated by\nseveral systems, however a deeper error analysis\nis missing (Cettolo et al., 2013).\nRecent work in SMT also deals with the Croat-\nian language, which is very closely related to Ser-\nbian. First results for Croatian-English are re-\nported in (Ljube ˇsi´c et al., 2010) on a small weather\nforecast corpus, and an SMT system for the tourist\ndomain is presented in (Toral et al., 2014). Fur-\nthermore, SMT systems for both Serbian and Croa-\ntian are described in (Popovi ´c and Ljube ˇsi´c, 2014),\nhowever only translation errors caused by lan-\nguage mixing are analysed, not the problems re-\nlated to the languages themselves.\n4https://www.translectures.eu/\n5International Workshop on Spoken Language Translation,\nhttp://workshop2013.iwslt.org/Different SMT systems for subtitles were devel-\noped in the framework of the SUMAT project,6\nincluding Serbian and Slovenian (Etchegoyhen et\nal., 2014). However, only the translations be-\ntween them have been carried out as an example\nof closely related and highly inflected languages.\n3 Language characteristics\nSerbian (referred to as “sr”) and Slovenian (“sl”),\nas Slavic languages, have quite free word order and\nare highly inflected. The inflectional morphology\nis very rich for all word classes. 
There are six\ndistinct cases affecting not only common nouns,\nbut also proper nouns as well as pronouns, adjec-\ntives and some numbers. Some nouns and adjec-\ntives have two distinct plural forms depending on\nthe number (less thanfive or not). There are also\nthree genders for the nouns, pronouns, adjectives\nand some numbers leading to differences between\nthe cases and also between the verb participles for\npast tense and passive voice.\nAs for verbs, person and many tenses are ex-\npressed by the suffix, and, similarly to Spanish and\nItalian, the subject pronoun (e.g. I, we, it) is of-\nten omitted. In addition, negation of three quite\nimportant verbs, “biti(sr/sl)” (to be), “imati(sr) /\nimeti(sl)” (to have) and “hteti(sr) /hoteti(sl)” (to\nwant), is formed by adding the negative particle\nto the verb as a prefix. In addition, there are two\nverb aspects, namely many verbs have perfective\nand imperfective form(s) depending on the dura-\ntion of the described action. These forms are either\ndifferent although very similar or are distinguished\nonly by prefix.\nAs for syntax, both languages have a quite free\nword order, and there are no articles, neither indef-\ninite nor definite.\nAlthough the two languages share a large degree\nof morpho-syntactic properties and mutual intel-\nligibility, a speaker of one language might have\ndifficulties with the other. The language differ-\nences are both lexical (including a number of false\nfriends) as well as grammatical (such as local word\norder, verb mood and/or tense formation, question\nstructure, dual in Slovenian, usage of some cases).\n4SMT systems\nIn order to systematically explore SMT issues re-\nlated to the targeted languages,five different do-\nmains were used in total. However, not all do-\n6http://www.sumat-project.eu98\n(a) number of sentences\n# of Sentences sl-en sl-de sr-en sr-de\nDGT 3.2M 3M / /\nEuroparl 600k 500k / /\nEMEA 1M 1M / /\nOpenSubtitles 1.8M 1.8M 1.8M 1.8M\nSEtimes / / 200k /(b) average sentence length\nAvg. Sent. Length sl sr en de\nDGT 16.0 / 17.3 16.6\nEuroparl 23.4 / 27.0 25.4\nEMEA 12.7 / 12.3 11.8\nOpenSubtitles 7.7 7.6 9.2 8.9\nSEtimes / 22.4 23.8 /\nTable 1: Corpora characteristics.\nmains were used for all language pairs due to\nunavailability. It should be noted that accord-\ning to the META-NET White Papers, both lan-\nguages have minimal support, with only fragmen-\ntary text and speech resources. For the Slovenian-\nEnglish and Slovenian-German language pairs,\nfour domains were investigated: DGT transla-\ntion memories provided by the JRC (Steinberger\net al., 2012), Europarl (Koehn, 2005), Euro-\npean Medicines Agency corpus (EMEA) in the\npharmaceutical domain, as well as the Open-\nSubtitles7corpus. All the corpora are down-\nloaded from the OPUS web site8(Tiedemann,\n2012). For the Serbian language, only two do-\nmains were available: the enhanced version of\nthe SEtimes corpus9(Tyers and Alperen, 2010)\ncontaining “news and views from South-East Eu-\nrope” for Serbian-English, and OpenSubtitles for\nthe Serbian-English and Serbian-German language\npairs. It should be noted that all the corpora\ncontain written texts except OpenSubtitles, which\ncontains transcriptions and translations of spoken\nlanguage thus being slightly peculiar for machine\ntranslation. 
On the other hand, this is the only cor-\npus containing all language pairs of interest.\nTable 1 shows the amount of parallel sentences\nfor each language pair and domain (a) as well as\nthe average sentence length for each language and\ndomain (b). For each domain, a separate system\nhas been trained and tuned on an unseen portion of\nin-domain data. Since the sentences in OpenSubti-\ntles are significantly shorter than in other texts, the\ntuning and test sets for this domain contain 3000\nsentences whereas all other sets contain 1000 sen-\ntences. Another remark regarding the OpenSubti-\ntles corpus is that we trained our systems only on\nthose sentence pairs, which were available in En-\n7http://www.opensubtitles.org/\n8http://opus.lingfil.uu.se/\n9http://nlp.ffzg.hr/resources/corpora/\nsetimes/glish as well as in German in order to have a com-\npletely same condition for all systems.\nAll systems have been trained using phrase-\nbased Moses (Koehn et al., 2007), where the word\nalignments were build with GIZA++ (Och and\nNey, 2003). The 5-gram language model was build\nwith the SRILM toolkit (Stolcke, 2002).\n5 Evaluation and error analysis\nThe evaluation has been carried out in three steps:\nfirst, the BLEU scores were calculated for each of\nthe systems. Then, the automatic error classifica-\ntion has been applied in order to estimate actual\ntranslation errors. After that, manual inspection of\nlanguage related phenomena leading to particular\nerrors is carried out in order to define the most im-\nportant issues which should be addressed for build-\ning better systems and/or develop better models.\n5.1 BLEU scores\nAs afirst evaluation step, the BLEU scores (Pap-\nineni et al., 2002) have been calculated for each of\nthe translation outputs in order to get a rough idea\nabout the performance for different domains and\ntranslation directions.\nThe scores are presented in Table 2:\n•the highest scores are obtained for transla-\ntions into English;\n•the scores for translations into German are\nsimilar to those for translations into Slovenian\nand Serbian;\n•the scores for Serbian and Slovenian are bet-\nter when translated from English than when\ntranslated from German;\n•the best scores are obtained for DGT (which\ncontains a large number of repetitions), fol-\nlowed by EMEA (which is very specific do-\nmain); the worst scores are obtained for spo-\nken language OpenSubtitles texts.99\nDomain/Lang. pair sl-en sr-en sl-de sr-de en-sl de-sl en-sr de-sr\nDGT 77.3 / 59.3 / 72.1 58.6 / /\nEuroparl 58.9 / 33.8 / 56.0 36.5 / /\nEMEA 69.7 / 53.8 / 66.0 56.2 / /\nOpenSubtitles 38.4 33.2 21.5 22.4 26.2 19.6 22.8 18.4\nSEtimes / 43.8 / / / / 35.8 /\nTable 2: BLEU scores for all translation outputs.\nIn addition, all the BLEU scores are compared\nwith those of Google Translate10outputs of all\ntests. All systems built in this work outperform\nthe Google translation system by absolute differ-\nence ranges from 1 to 10%, confirming that the\nlanguages are weakly supported for machine trans-\nlation.\n5.2 Automatic error classification\nAutomatic error classification has been performed\nusing Hjerson (Popovi ´c, 2011) and the error distri-\nbutions are presented in Figure 1. 
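As a side note on reproducibility, the BLEU comparison in Table 2 can be recomputed with any standard implementation. The sketch below uses the sacrebleu package purely as an illustration; it is not the scorer used by the authors, and the file names are placeholders. Exact values depend on tokenisation and casing, so small deviations from the published numbers are to be expected.

    import sacrebleu  # pip install sacrebleu

    # hypothetical file names; one sentence per line, same order in both files
    hypotheses = open("output.sl-en.txt", encoding="utf-8").read().splitlines()
    references = open("reference.en.txt", encoding="utf-8").read().splitlines()

    bleu = sacrebleu.corpus_bleu(hypotheses, [references])
    print(f"BLEU = {bleu.score:.1f}")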
For the sake of brevity and clarity, as well as for avoiding redundancies, the error distributions are not presented for all translation outputs, but have been averaged in the following way: since no important differences were observed either between domains (except that OpenSubtitles translations exhibit more inflectional errors than others) or between Serbian and Slovenian (neither as source nor as target language), the errors are averaged over domains and over the two languages, jointly called "x". Thus, four error distributions are shown: translation from and into English, and translation from and into German.
[Figure 1 (bar chart not reproduced): Error rates for five error classes (word form, word order, omission, addition and mistranslation) for the four averaged directions x-en, en-x, x-de and de-x; each error rate represents the percentage of a particular (word-level) error type normalised over the total number of words.]
The following can be observed:
• translations into English are "the easiest", mostly due to the small number of morphological errors; however, the English translation outputs contain more word order errors than the Serbian and Slovenian ones;
• all error rates are higher in German translations than in English ones, but the mistranslation rate is especially high;
• German translation outputs have fewer morphological errors than Serbian and Slovenian translations from German; on the other hand, more reordering errors can be observed in German outputs;
• all error rates are higher in translations from German than from English, except inflections.
The results of the automatic error analysis are valuable and already indicate some promising directions for improving the systems, such as word order treatment and handling morphological generation. Nevertheless, more improvement could be obtained if more precise guidance about problems and obstacles related to the language properties and differences were available (apart from the general ones already partly investigated in related work).
5.3 Identifying linguistic related issues
Automatic error analysis has already shown that different language combinations show different error distributions. This often relates to linguistic characteristics of the involved languages as well as to divergences between them. In order to explore those relations, a manual inspection of about 200 sentences from each domain and language pair, annotated by Hjerson, together with their corresponding source sentences, has been carried out.
As the result of this analysis, the following has been discovered:
• there is a number of frequent error patterns, i.e. obstacles (issues) for SMT systems
• the nature and frequency of many issues depend on the language combination and translation direction
• some of the translation errors depend on the domain and text type, mostly differing for written and spoken language
• issues concerning Slovenian and Serbian both as source and as target languages are almost the same – there are only a few language-specific phenomena.
The most frequent general issues, i.e. relevant for all translation directions, are:
• noun collocations (written language)
Multi-word expressions consisting of a head noun and additional nouns and adjectives in English pose large problems for all translation directions, especially from and into English. 
Their formation is different in other\nlanguages and often the boundaries are not\nwell detected so that they are translated to un-\nintelligible phrases.\nsource 12th ”Tirana Fall” art and\nmusic festival\noutput∗12th ”Tirana collection fall of\nart and a music festival\nreference the ratings agency’sfirst\nEmerging Europe Sensitivity\nIndex ( EESI )\noutput thefirst Index sensitivity\nEurope in the development of\n( EESI ) this agency\n11Non-English parts of examples are literally translated into\nEnglish and marked with∗.•negation\nDue to the distinct negation forming, mainly\nconcerning multiple negations in Serbian and\nSlovenian, negation structures are translated\nincorrectly.\nreference the prosecution has done\nnothing so far\nsource∗the prosecution has not done\nnothing so far\noutput the prosecution is not so far\nhad done nothing\nsource Being a rector does not give\nsomeone the freedom\nreference∗Being a rector does not give\nnobody the freedom\noutput∗Being a rector does not\ngive some freedom\n•phrase boundaries and order\nPhrase boundaries are not detected properly\nso that the group(s) of words are misplaced,\noften accompanied with morphological errors\nand mistranslations.101\nreference of which about afifth is used\nfor wheat production\noutput of which used to produce\nabout onefifth of wheat\nreference But why have I brought\nthis thing here?\noutput This thing, but why\nam I here?\nreference The US ambassador to Sofia\nsaid on Wednesday .\noutput Said on Wednesday ,\nUS ambassador to Sofia .\n•prepositions\nPrepositions are mostly mistranslated, some-\ntimes also omitted or added.\n•word order\nLocal word permutations mostly affecting\nverbs and surrounding auxiliary verbs, pro-\nnouns, nouns and adverbs.\nSome of the frequent issues are strongly dependent\non translation direction. For translation into En-\nglish and German the issues of interest are:\n•articles\nDue to the absence of articles in Slavic lan-\nguages, a number of articles in English and\nGerman translation outputs are missing, and\na certain number is mistranslated or added.\n•pronouns\nThe source languages are pro-drop, therefore\na number of pronouns is missing in the En-\nglish and German translation outputs.\n•possessive pronoun “svoj”\nThis possessive pronoun can be used for all\npersons (“my”, “your”, “her”, “his”, “our”,\n“their”) and it is difficult to disambiguate.\n•verb tense\nDue to different usage of verb tenses in some\ncases, the wrong verb tense from the source\nlanguage is preserved, or sometimes mis-\ntranslated. 
The problem is more prominent\nfor translation into English.\n•agreement(German target)\nA number of cases and gender agreements in\nGerman output is incorrect.•missing verbs(German target)\nVerbs or verb parts are missing in German\noutput, especially when they are supposed to\nappear at the end of the clause.\n•conjunction “i” (and)(Serbian source)\nThe main meaning of this conjunction is\n“and”, but another meaning “also, too, as\nwell” is often used too; however, it is usually\ntranslated as “and”.\n•adverb “lahko”(Slovenian source)\nThis word is used for Slovenian conditional\nphrases which correspond to English con-\nstructions with modal verbs “can”, “might”,\n“shall”, or adverbs “possibly”; the entire\nclause is often not translated correctly due to\nincorrect disambiguation.\nFor translation into Serbian and Slovenian, the\nmost important obstacles are:\n•case\nIncorrect case is used, often in combination\nwith incorrect singular/plural and/or gender\nagreement.\n•verb inflection\nVerb inflection does not correspond to the\nperson and/or the tense; a number of past\nparticiples also has incorrect singular/plurar\nand/or gender agreement.\n•missing verbs\nVerb or verb parts are missing, especially for\nconstructions using auxiliary and/or modal\nverbs. The problem is more frequent when\ntranslating from German.\n•question structure(spoken language)\nQuestion structure is incorrect; the problem\nis more frequent in Serbian where additional\nparticles (“li” and “da li”) should be used but\nare often omitted.\n•conjunction “a”(Serbian target)\nThis conjunction does not exist in other lan-\nguages, it can be translated as “and” or “but”,\nand its exact meaning is something in be-\ntween. Therefore it is difficult to disam-\nbiguate the corresponding source conjunc-\ntion.102\n•“-ing” forms(English source)\nEnglish present continuous tense does not ex-\nist in other languages, and in addition, it is\noften difficult to determine if the word with\nthe suffix “-ing” is a verb or a noun. There-\nfore words with the “-ing” suffix are difficult\nfor machine translation.\n•noun-verb disambiguation(English source)\nApart from the words ending with the suf-\nfix “-ing”, there is a number of other English\nwords which can be used both as a noun as\nwell as a verb, such as ”offer”, “search”, etc.\n•modal verb “sollen”(German source)\nThis German modal verb can have differ-\nent meanings, such as “should”, “might”\nand “will” which is often difficult to disam-\nbiguate.\nIt has to be noted that some of the linguistic\nphenomena known to be difficult are not listed –\nthe reason is that their overall number of occur-\nrences in the analysed texts is low and therefore\nthe number of related errors too. For example, Ger-\nman compounds are well known for posing prob-\nlems to natural language processing tasks among\nwhich is machine translation – however, in the\ngiven texts only a few errors related to compounds\nwere observed, as well as a low total number of\ncompounds. Another similar case is the verb as-\npect in Serbian and Slovenian – some related errors\nwere detected, but their count as well as the overall\ncount of such verbs in the data is very small.\nTherefore the structure and nature of the texts\nfor a concrete task should always be taken into ac-\ncount. 
For example, for improvements of spoken\nlanguage translation more effort should be put in\nquestion treatment than in noun collocation, and in\ntechnical texts the compound problem would prob-\nably be significant.\n6 Conclusions and future work\nIn this work, we have examined several SMT\nsystems involving two morphologically rich and\nunder-resourced languages in order to identify\nthe most important language related issues which\nshould be dealt with in order to build better sys-\ntems and models. The BLEU scores are reported\nas afirst evaluation step, followed by automatic\nerror classification which has captured interestinglanguage related patterns in distributions of error\ntypes. The main part of the evaluation consisted\nof (manual) analysis of errors taking into account\nlinguistic properties of both target and source lan-\nguage. This analytic analysis has defined a set\nof general issues which are causing errors for all\ntranslation directions, as well as sets of language\ndependent issues. Although many of these issues\nare already known to be difficult, they can be ad-\ndressed only with the precise identification of con-\ncrete examples.\nThe main general issues are shown to be struc-\ntural properties concerning multi-noun colloca-\ntions and exact phrase boundaries, followed by\nnegation formation, wrong, missing or added\npreposition as well as local word order differences.\nFor translation into English and German, article\nand pronoun omissions are the most problematic,\nas well as disambiguation of certain frequent func-\ntional words. For translation into Serbian and\nSlovenian, cases and verb inflections are most dif-\nficult to handle. In addition, other problems con-\ncerning verbs are frequent as well, such as local\nword order involving verbs and missing verb parts\n(which is especially difficult when translating from\nGerman).\nIn future work we plan to address some of the\npresented issues practically and analyse the effects.\nAn important thing concerning system improve-\nment is that although most of the described issues\nare common for various domains, the exact nature\nof the texts desired for the task at hand should\nalways be kept in mind. Analysis of issues for\ndomains and text types not covered by this paper\nshould be part of future work too.\nAcknowledgments\nThis publication has emanated from research sup-\nported by the QT21 project – European Union’s\nHorizon 2020 research and innovation programme\nunder grant number 645452 as well as a research\ngrant from Science Foundation Ireland (SFI) under\ngrant number SFI/12/RC/2289. We are grateful to\nthe anonymous reviewers for their valuable feed-\nback.\n103\nReferences\nCettolo, Mauro, Jan Niehues, Sebastian St ¨uker, Luisa\nBentivogli, and Marcello Federico. 2013. Report on\nthe 10th IWSLT evaluation campaign. InProceed-\nings of the International Workshop on Spoken Lan-\nguage Translation (IWSLT), Heidelberg, Germany,\nDecember.\nEtchegoyhen, Thierry, Lindsay Bywood, Mark Fishel,\nPanayota Georgakopoulou, Jie Jiang, Gerard Van\nLoenhout, Arantza Del Pozo, Mirjam Sepesy\nMau ˇcec, Anja Turner, and Martin V olk. 2014.\nMachine Translation for Subtitling: A Large-Scale\nEvaluation. 
InProceedings of the Ninth Interna-\ntional Conference on Language Resources and Eval-\nuation (LREC14), Reykjavik, Iceland, May.\nKoehn, Philipp, Hieu Hoang, Alexandra Birch, Chris\nCallison-Burch, Marcello Federico, Nicola Bertoldi,\nBrooke Cowan, Wade Shen, Christine Moran,\nRichard Zens, Chris Dyer, Ond ˇrej Bojar, Alexandra\nConstantin, and Evan Herbst. 2007. Moses: Open\nsource toolkit for statistical machine translation. In\nProceedings of the 45th Annual Meeting of the ACL\non Interactive Poster and Demonstration Sessions,\nStroudsburg, PA, USA.\nKoehn, Philipp. 2005. Europarl: A Parallel Corpus for\nStatistical Machine Translation. InProceedings of\nthe 10th Machine Translation Summit, pages 79–86,\nPhuket, Thailand, September.\nLjube ˇsi´c, Nikola, Petra Bago, and Damir Boras. 2010.\nStatistical machine translation of Croatian weather\nforecast: How much data do we need? In Lu ˇzar-\nStiffler, Vesna, Iva Jarec, and Zoran Beki ´c, editors,\nProceedings of the ITI 2010 32nd International Con-\nference on Information Technology Interfaces, pages\n91–96, Zagreb. SRCE University Computing Centre.\nOch, Franz Josef and Hermann Ney. 2003. A Sys-\ntematic Comparison of Various Statistical Alignment\nModels.Computational Linguistics, 29(1):19–51,\nMarch.\nPapineni, Kishore, Salim Roukos, Todd Ward, and Wie-\nJing Zhu. 2002. BLEU: a method for automatic\nevaluation of machine translation. pages 311–318,\nPhiladelphia, PA, July.\nPopovi ´c, Maja and Nikola Ljube ˇsi´c. 2014. Explor-\ning cross-language statistical machine translation for\nclosely related South Slavic languages. InProceed-\nings of the EMNLP14 Workshop on Language Tech-\nnology for Closely Related Languages and Language\nVariants, pages 76–84, Doha, Qatar, October.\nPopovi ´c, Maja, David Vilar, Hermann Ney, Slobodan\nJovi ˇci´c, and Zoran ˇSari´c. 2005. Augmenting a\nSmall Parallel Text with Morpho-syntactic Language\nResources for Serbian–English Statistical Machine\nTranslation. pages 41–48, Ann Arbor, MI, June.\nPopovi ´c, Maja. 2011. Hjerson: An Open Source\nTool for Automatic Error Classification of MachineTranslation Output.The Prague Bulletin of Mathe-\nmatical Linguistics, (96):59–68, October.\nSepesy Mau ˇcec, Mirjam, Janez Brest, and Zdravko\nKaˇciˇc. 2006. Slovenian to English Machine Trans-\nlation using Corpora of Different Sizes and Morpho-\nsyntactic Information. InProceedings of the 5th\nLanguage Technologies Conference, pages 222–225,\nLjubljana, Slovenia, October.\nSteinberger, Ralf, Andreas Eisele, Szymon Klocek,\nSpyridon Pilos, and Patrick Schl ¨uter. 2012. DGT-\nTM: A freely available Translation Memory in 22\nlanguages. InProceedings of the 8th International\nConference on Language Resources and Evaluation\n(LREC12), pages 454–459, Istanbul, Turkey, May.\nStolcke, Andreas. 2002. SRILM – an extensible lan-\nguage modeling toolkit. volume 2, pages 901–904,\nDenver, CO, September.\nTiedemann, J ¨org. 2012. Parallel data, tools and inter-\nfaces in OPUS. InProceedings of the 8th Interna-\ntional Conference on Language Resources and Eval-\nuation (LREC12), pages 2214–2218, May.\nToral, Antonio, Raphael Rubino, Miquel Espl `a-Gomis,\nTommi Pirinen, Andy Way, and Gema Ramirez-\nSanchez. 2014. Extrinsic Evaluation of Web-\nCrawlers in Machine Translation: a Case Study on\nCroatianEnglish for the Tourism Domain. InPro-\nceedings of the 17th Conference of the European\nAssociation for Machine Translation (EAMT), pages\n221–224, Dubrovnik, Croatia, June.\nTyers, Francis M. and Murat Alperen. 2010. 
South-\nEast European Times: A parallel corpus of the\nBalkan languages. InProceedings of the LREC\nWorkshop on Exploitation of Multilingual Resources\nand Tools for Central and (South-) Eastern European\nLanguages, pages 49–53, Valetta, Malta, May.\nˇZganec Gros, Jerneja and Stanislav Gruden. 2007. The\nvoiceTRAN machine translation system. InPro-\nceedings of the 8th Annual Conference of the In-\nternational Speech Communication Association (IN-\nTERSPEECH 07), pages 1521–1524, Antwerp, Bel-\ngium, August. ISCA.\n104",
"main_paper_content": null
}
|
{
"decision": "Unknown",
"reviews": []
}
| 0 | 0 |
[] |
[] |
{
"id": "etdspuf1lPZ",
"year": null,
"venue": "EAMT 2016",
"pdf_link": "https://aclanthology.org/W16-3411.pdf",
"forum_link": "https://openreview.net/forum?id=etdspuf1lPZ",
"arxiv_id": null,
"doi": null
}
|
{
"title": "Can Text Simplification Help Machine Translation?",
"authors": [
"Sanja Stajner",
"Maja Popovic"
],
"abstract": "Sanja Štajner, Maja Popovic. Proceedings of the 19th Annual Conference of the European Association for Machine Translation. 2016.",
"keywords": [],
"raw_extracted_content": "Baltic J. Modern Computing, V ol. 4 (2016), No. 2, pp. 230–242\nCan Text Simplification\nHelp Machine Translation?\nSanja ˇSTAJNER1and Maja POPOVI ´C2\n1Data and Web Science Group, University of Mannheim, Germany\n2Humboldt University of Berlin, Germany\[email protected], [email protected]\nAbstract. This article explores the use of text simplification as a pre-processing step for statisti-\ncal machine translation of grammatically complex under-resourced languages. Our experiments\non English-to-Serbian translation show that this approach can improve grammaticality (fluency)\nof the translation output and reduce technical post-editing effort (number of post-edit operations).\nFurthermore, the use of more aggressive text simplification methods (which do not only simplify\nthe given sentence but also discard irrelevant information thus producing syntactically very sim-\nple sentences) also improves meaning preservation (adequacy) of the translation output.\n1 Introduction\nMachine translation for under-resourced languages is facing a number of problems.\nFirst, there is not enough parallel data to build robust statistical machine translation\n(SMT) systems. Second, most of these languages (including Serbian) have a very rich\nmorphology and suffer from data sparsity when it comes to less frequently used cases,\ntenses, etc. Third, there is a number of syntactic differences which are difficult to cap-\nture. For English-to-Serbian SMT, a number of language related problems has been\nidentified so far (Popovi ´c and Ar ˇcan, 2015). Most of them are related to syntactic dif-\nferences, e.g. missing verb parts due to distinct structure of certain verb tenses, incorrect\nprepositions, or incorrect translations of English sequences of nouns.\nIn this paper, we explore whether it is possible to improve the performance of the\nmachine translation for under-resourced languages by introducing a pre-processing step\nin which source sentences are first simplified by an automatic text simplification (ATS)\nsystem. We focus on English-to-Serbian MT and apply two state-of-the-art ATS systems\nas a pre-processing step for simplifying the original English sentence before feeding it\ninto a phrase-based SMT system.\nWe exploit two different types of ATS systems, a more conservative one (which,\nwhile simplifying the input sentence lexically and syntactically, retain all the informa-\ntion contained in the original sentence), and the more aggressive one (which, while sim-\nCan Text Simplification help Machine Translation? 231\nplifying the input sentence lexically and syntactically, also tries to reduce the amount of\ninformation by discarding irrelevant information and high-level details). In this way, we\naddress two different usage scenarios in MT: (1) when it is important to maintain all the\ninformation contained in the source text (e.g. translations of whole texts or documents);\nand (2) when it is enough to get a gist of the source text (e.g. skimming through news\narticles and looking for the most important news).\nThe results of the human evaluation of the news articles translated using the two\nabove-mentioned approaches, in terms of grammaticality (fluency) and meaning preser-\nvation (accuracy) of the output, and the analysis of the post-editing effort (number of\npost-edit operations) shows that both approaches improve the MT output.\nThe remainder of the article is structured as follows. 
Section 2 briefly reports on\nthe existing approaches to automatic text simplification and motivates our choice of\nATS systems. Section 3 describes the chosen ATS systems in more details, presents the\ndatasets and the SMT system used in experiments and describes the evaluation proce-\ndure. Section 4 presents and discusses the results of our experiments, while Section 5\nsummarises the main findings and presents ideas for future research.\n2 Related Work\nAutomatic text simplification (ATS) systems aim to transform original texts into their\nlexically and syntactically simpler variants. In theory, they could also simplify texts on\nthe discourse level, but most of the systems still operate only on the sentence level.\nThe motivation for building the first ATS systems was to improve the performance\nof machine translation systems and other text processing tasks, e.g. parsing, information\nretrieval, and summarisation (Chandrasekar et al., 1996). It was argued that simplified\nsentences (which have simpler sentential structures and reduced ambiguity) could lead\nto improvements in the quality of machine translation (Chandrasekar, 1994).\nSince then, a great number of ATS systems has been proposed not only for En-\nglish, but also for other languages, e.g. Basque (Aranzabe et al., 2013), Portuguese\n(Specia, 2010), Spanish (Saggion et al., 2015), French (Brauwers et al., 2014), and Ital-\nian (Barlacchi and Tonelli, 2013).\nFor English, the state-of-the-art ATS systems range from those performing only lex-\nical (Glava ˇs and ˇStajner, 2015) or only syntactic (Siddharthan, 2011) simplification, to\nthose combining lexical and syntactic simplification (Angrosh and Siddharthan, 2014).\nRecently, several ATS systems have been proposed which do not only simplify given\ntext/sentences but also reduce the amount of information contained by removing high-\nlevel details, such as appositions, adverbial phrases, or purely descriptive sentences\n((Glava ˇs and ˇStajner, 2013), (Siddharthan et al., 2014), (Narayan and Gardent, 2014)).\nHowever, in these twenty years, the motivation for building ATS systems has shifted\nfrom improving text processing systems to making texts more accessible to wider au-\ndiences (e.g. children, non-native speakers, people with low literacy levels, and people\nwith various language or learning disabilities). Therefore, ATS systems have only been\nevaluated for the quality of the generated output, its readability levels, and usefulness\nin making texts more accessible to target populations (reducing reading speed and im-\nproving comprehension). To the best of our knowledge, there has been no evaluation of\n232 ˇStajner and Popovi ´c\nthe state-of-the-art ATS systems in terms of how much (if at all) they can help improve\nMT systems (which was, as previously mentioned, their first intended goal and main\nmotivation).\n3 Experiments\nIn this study, we use two state-of-the-art ATS systems:\n1.TS-A : A combination of lexical TS system (Glava ˇs and ˇStajner, 2015) with the\nEventSimplify (Glava ˇs and ˇStajner, 2013) which performs syntactic simplification\nwith a significant content reduction. This is the most “aggressive” system of all\nabove-mentioned systems which perform content reduction (Section 2), i.e. 
it is the\nsystem which performs the highest level of content reduction and achieves the most\nreadable (simplest) output (due to a high number of sentence splitting operations).\n2.TS-C : The lexico-syntactic TS system proposed by Angrosh and Siddhathan (2014)\nwhich belong to the “conservative” ATS systems which do not perform any content\nreduction and thus, completely preserve the original meaning of the sentence;\nWe used 100 news articles from the EMM NewsBrief3for which the output of the\nEventSimplify (Glava ˇs and ˇStajner, 2013) ATS system was freely available.4We fur-\nther focused on the output of the event-wise simplification scheme (which achieved the\nhighest readability of all four provided schemes) and applied the lexical simplification\nsystem (Glava ˇs and ˇStajner, 2015) on top of it in order to obtained a full simplification\nsystem which encompasses lexical simplification, syntactic simplification and content\nreduction (TS-A). Next, we applied the TS-C system on all those 100 original articles.\n3.1 Text Simplification Systems\nThe examples presented in Table 1 illustrate the potential of the two ATS systems used\n(TS-C andTS-A ) and differences among them. In general, the TS-A performs more\nsentence splitting than the TS-C (see examples 2, 3, and 4 in Table 1, with the extreme\ncase of producing four simplified sentences instead of one original sentence in the fourth\nexample). The TS-A system also removes some details (e.g. “several minutes later” in\nthe third example, or “in Port St. John” in the second example), or entire subordinate\nclauses (e.g. “a steep fall from.. ” in the first example).\nThe main focus of both ATS systems is on structural simplification, although there\nare occasional cases of lexical simplification as well (e.g. “arrived”!“came” in\nthe second example, or “recieved”!“got” and“refuge”!“shelter” in the fourth\nexample).\nIt is interesting to note that both systems (though TS-A more frequently) also sim-\nplify the tense of the verbs, as in the following examples: “before turning the gun!\n“After that, ... turned the gun (ex. 2), “Deputies arrived... to hear ... ”!“Deputies\ncame ... Deputies heard ... ”(ex. 3), and “ had received ”!“got” (ex. 4). Furthermore,\nthe AT-C system consistently changes constructions of the type “<clause >, X said.”\ninto“X said that <clause >”(as illustrated in the second example in Table 1).\n3http://emm.newsbrief.eu/NewsBrief/clusteredition/en/latest.html\n4http://takelab.fer.hr/data/evsimplify/\nCan Text Simplification help Machine Translation? 233\nTable 1. Examples of sentence simplification performed by the two ATS systems (TS-C and\nTS-A). Differences to the original sentences are shown in bold.\nEx. Version Sentence\n1 Original Vladimir Putin’s United Russia party won less than 50% of Sunday’s vote, a\nsteep fall from its earlier two-thirds majority, according to preliminary results.\nTS-C Vladimir Putin’s United Russia party won less than 50% Sunday’s vote , ac-\ncording to preliminary results. This is a steep fall from its earlier two-\nthirds majority.\nTS-A Putin’s United Russia party won less than 50%.\n2 Original A Florida mother shot her four children early Tuesday morning before turning\nthe gun on herself at her home in Port St. John, police said.\nTS-C Police said that a Florida mother shot her four children early Tuesday morn-\ning before turning the gun on herself at her home in Port St. John.\nTS-A A Florida mother shot her four children early Tuesday morning . 
After that, a\nFlorida mother turned the gun on herself at her home.\n3 Original Deputies arrived at the house several minutes later to hear more shots fired.\nTS-C Deputies came at the house several minutes later to hear more shots fired.\nTS-A Deputies came to the house . Deputies heard more shots.\n4 Original The Chinese Embassy said it had received a report that a dozen Chinese fish-\ning boats had taken refuge in a lagoon of Huangyan Island to escape foul\nweather when the Philippine gunboat blocked the lagoon entrance and sent 12\nPhilippine soldiers to harass the Chinese fishermen.\nTS-C The Chinese Embassy said it also got a report that a dozen Chinese fishing\nboats had taken refuge in a lagoon of Huangyan Island to escape foul weather.\nThen the Philippine gunboat sent 12 Philippine soldiers to harass the Chi-\nnese fishermen. At that time, the gunboat blocked the lagoon entrance.\nTS-A The Chinese Embassy had received a report . Adozen Chinese fishing ships\nhad taken shelter in a lagoon of Huangyan Island . The Philippine gunboat\nblocked the lagoon entrance . The Philippine gunboat sent 12 Philippine sol-\ndiers to harass the Chinese fishermen.\n234 ˇStajner and Popovi ´c\nAs ATS systems do not always produce perfectly grammatical output and lexical\nsimplification sometimes lead to changed meaning (Angrosh and Siddharthan, 2014;\nGlava ˇs and ˇStajner, 2015), we manually inspected a randomly selected subset of 65\noriginal sentences and their automatically simplified sentences produced by both sys-\ntems (TS-A and TS-C).5In those cases where the meaning or grammaticality was in-\ncorrect, we performed a minimal post-editing (PE) necessary to restore the original\nmeaning and grammaticality of the sentence. As the goal of this PE is not to make any\nfurther simplifications and the mistakes were easy to notice, this type of PE was very\nfast (11.3 seconds per sentence for TS-A and 15.2 seconds per sentence for TS-C) and\ndid not even require a native speaker or trained annotator, but only someone with the\nproficiency level of English. For illustration, several sentences are given in Table 2.\nTable 2. Examples of post-editing performed on the automatically simplified sentences generated\nby the TS-C and TS-A systems. Differences between the automatically simplified sentences and\ntheir PE versions are shown in bold.\nEx. Version Sentence\n1 Original Ex-Soviet leader Mikhail Gorbachev says Russian authorities must an-\nnul the parliamentary vote results and hold a new election.\nTS-C (no PE) Ex-Soviet leader Mikhail Gorbachev says .Russian authorities must an-\nnul the parliamentary vote results. These authorities hold a new election.\nTS-C (PE) Ex-Soviet leader Mikhail Gorbachev says that Russian authorities must\nannul the parliamentary vote results. These authorities must hold a new\nelection.\n2 Original A 21-year-old man was arrested on April 30, on suspicion of murder\nand was released on bail until May 29 pending further enquiries.\nTS-C (no PE) A 21-year-old man was arrested on April 30, on suspicion of murder.\nThis man was followed until May 29 pending further enquiries.\nTS-C (PE) A 21-year-old man was arrested on April 30, on suspicion of murder.\nThis man was released until May 29 pending further enquiries.\nTS-A (no PE) A 21-year-old man was arrested on April 30 on suspicion. A 21-year-old\nman was released on jailuntil May 29.\nTS-A (PE) A 21-year-old man was arrested on April 30 on suspicion of murder. 
A\n21-year-old man was released on bailuntil May 29.\n5This subset of sentences was later used for MT experiments and human evaluation and post-\nediting.\nCan Text Simplification help Machine Translation? 235\n3.2 Statistical Machine Translation System\nFor the machine translation from English to Serbian, we used the ASISTENT system.6\nIt is a freely available SMT system, based on the widely used phrase-based SMT frame-\nwork (Koehn et al., 2003) and it supports translations from English to Slovene, Croatian\nand Serbian and vice versa. Additionally, translations between those three Slavic lan-\nguages are also possible.\nThe system was trained using the Moses toolkit (Koehn et al., 2007). The word\nalignments were built with GIZA++ (Och and Ney, 2003), and the 5-gram language\nmodel was built using the SRILM toolkit (Stolcke, 2002) The training dataset origi-\nnates from the OPUS website7(Tiedemann, 2012) where three domains were avail-\nable for the Serbian-English language pair: the enhanced version of the SEtimes cor-\npus8(Tyers and Alperen, 2010) containing “news and views from South-East Europe”,\nOpenSubtitles9, and the KDE localisation documents and manuals, i.e. technical do-\nmain. Approximately 20.7M sentences, in total, were used for training (20.5M subtitles,\n200,000 news, 30,000 technical), and 2,000 sentences were used for tuning (retaining\nthe same proportions of the sentences from the three corpora as in the training dataset).\nThe English-to-Serbian part of the ASISTENT system (Ar ˇcan et al., 2016) was tested\non 2,000 sentences from the three corpora used for training and tuning (the 2,000 sen-\ntences which were not used for training and tuning) and achieved a 38.88 BLEU score\n(Papineni et al., 2002), a 31.18 METEOR score (Denkowski and Lavie, 2014), and a\n61.62 chrF3 score (Popovi ´c, 2015).\n3.3 Evaluation Procedure\nFrom the initial set of 100 news articles, we randomly selected 65 original sentences and\nevaluated all translation outputs (from original sentences, and TS-A and TS-C systems,\nwhich led to a total of 195 target sentences) with respect to the following aspects:\n1. adequacy, i.e. meaning preservation\n2. fluency, i.e. grammaticality\n3. technical post-editing effort, i.e. amount of necessary edit operations\nEach of the tasks has been carried out separately, i.e. 
the evaluation of adequacy and\nfluency were carried out in two separate passes, and post-editing was carried out in the\nthird pass.\nFor adequacy, a quality score from 1 to 5 was assigned to each segment according\nto the following guidelines:\n–1 = very bad (regardless of a potentially good grammaticality)\n–2 = difficult to understand and different from the source meaning\n–3 = the main idea is preserved but some parts are unclear/different from the source\n–4 = understandable with minor ambiguities/differences\n6http://server1.nlp.insight-centre.org/asistent/\n7http://opus.lingfil.uu.se/\n8http://nlp.ffzg.hr/resources/corpora/setimes/\n9http://www.opensubtitles.org/\n236 ˇStajner and Popovi ´c\n–5 = perfectly understandable (regardless of a potentially poor grammar)\nFor fluency scores, the following guidelines were used:\n–1 = very bad (regardless of a potentially good meaning preservation)\n–2 = many grammatical errors\n–3 = a number of grammatical errors but mostly minor ones\n–4 = almost correct (a small number of minor errors)\n–5 = perfectly grammatical (regardless of possible loss/change of meaning)\nThe post-editing effort was analysed in the following way:\n–Each translated segment was post-edited by looking into the corresponding source\nsegment, i.e. using English originals for translations of originals, using the corre-\nsponding simplified English sentences for translations of simplified segments.\n–The raw edit counts and edit rates (raw counts normalised with the segment length)\nwere calculated using Hjerson (Popovi ´c, 2011) for:\n\u000ffive classes of edits/errors\n\u000fall edit operations\nReference translations were not available.\n4 Results and Discussion\nThe average adequacy and fluency scores, and the percentages of sentences with each\nof the scores are presented in Table 3. It can be noted that the use of TS-C does not\nimprove the overall adequacy, but it might improve fluency, whereas the use of TS-A\nimproves MT in both aspects.\nTable 3. Average scores for adequacy and fluency (first row) and percentage of sentences for each\nof the five scores (1–5).\nScoreAdequacy Fluency\nOrig TS-C TS-A Original TS-C TS-A\nAverage 3.17 3.02 3.63 2.91 3.13 3.45\n1 15.2 13.0 6.5 4.3 2.3 2.3\n2 10.9 17.4 8.7 23.9 17.4 11.4\n3 32.6 32.6 30.4 47.8 45.6 31.8\n4 23.9 28.3 23.9 23.9 34.8 47.7\n5 17.4 8.7 30.4 0 0 6.8\nA closer look into the distribution of the sentence scores indicates that the use of the\nTS-C system in MT decreases the number of sentences with very bad accuracy score,\nbut it also decreases the number of sentences with perfect adequacy scores. The TS-A\nCan Text Simplification help Machine Translation? 237\nsystem, however, significantly increases the number of sentences with perfect adequacy\nscores, at the same time decreasing the number of sentences with low adequacy scores.\nAs for the fluency, both TS systems significantly increase the number of sentences\nwith high fluency scores (score 4, and in the case of TS-A, score 5 as well) while at\nthe same time they decrease the number of sentences with low fluency scores. It should\nbe noted that the fluency is generally problematic for the SMT system – none of the\noriginal English sentences has been translated into a perfectly grammatical sentence,\nand the use of TS-C does not succeed in improving this either. However, the use of the\nTS-A system leads to a 6.8% of sentences being translated into perfectly grammatical\nsentences.\nTable 4. 
Percentage of changes in adequacy and fluency scores.\n(a) Adequacy\nOriginalTS-C TS-A\n1 2 3 4 5 1 2 3 4 5\n1 10.9 2.2 2.2 0 0 4.3 2.2 2.2 4.3 2.2\n2 0 4.3 4.3 2.2 0 0 2.2 6.5 0 2.2\n3 2.2 8.7 15.2 6.5 0 2.2 2.2 15.2 2.2 10.9\n4 0 0 4.3 17.4 2.2 0 0 4.3 13.0 6.5\n5 0 2.2 6.5 2.2 6.5 0 2.2 2.2 4.3 8.7\n(b) Fluency\nOriginalTS-C TS-A\n1 2 3 4 5 1 2 3 4 5\n1 2.2 2.2 0 0 0 0 4.3 0 0 0\n2 0 4.3 17.4 2.2 0 0 2.2 8.7 13.0 0\n3 0 8.7 21.8 17.4 0 2.2 2.2 17.4 19.6 6.5\n4 0 2.2 6.5 15.2 0 0 2.2 4.3 13.0 4.4\n5 0 0 0 0 0 0 0 0 0 0\nTable 4 presents the results of further analysis, showing the percentage of each par-\nticular change in adequacy and fluency scores for each of the TS systems. The desired\nchanges (from lower to higher score) are presented in bold.\nFor the TS-C system, it is confirmed that a number of sentences with a bad adequacy\nscore is improved, and on the other hand, a number of sentences with a good adequacy\nscore is deteriorated. The majority of sentences does not change. As for the fluency,\nthe main improvement comes from improving poor sentences into medium ones and\nmedium sentences into almost good ones. The majority of sentences does not change.\nFor the TS-A system, the main changes in adequacy originate from improving sen-\ntences with very bad adequacy scores even up to perfect, and from the improvement\n238 ˇStajner and Popovi ´c\nof sentences with a medium adequacy score into perfect. The main contribution for\nfluency, using the TS-A system, comes from improving medium sentences into almost\nperfect, and from improving poor ones into medium and almost good.\nFor illustration, Table 5 contains several examples of original sentences, their auto-\nmatically simplified sentences by both systems and the fluency and adequacy scores for\nthe produced translations into Serbian. The first example shows how a strong reordering\nof clauses within a sentence (without any sentence splitting) can improve both fluency\nand adequacy of the translation output. The second example demonstrates how even one\nlexical change (replacement of a phrasal verb with a more frequently used non-phrasal\nverb) can also improve the fluency and adequacy of the translation. The third exam-\nple shows how much sentence splitting and its combination with lexical simplification\ncan improve the translation in the case of a long source sentence. In the penultimate\nexample, we again see how much sentence splitting in a combination with tense sim-\nplification and discarding details can improve translation, leading to a perfect fluency\nand adequacy. The last example demonstrates how retaining only the most important\ninformation can improve the fluency of the translation output.\n4.1 Post-Editing Effort\nResults for the post-editing effort are shown in Table 6. The overall raw count of\nedit operations decreases for both TS systems albeit significantly more for the TS-A,\nwhich is expected since the sentences are shorter. 
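To make the distinction between raw edit counts and length-normalised edit rates concrete, here is a minimal Python sketch; the counts, lengths and field names are hypothetical illustrations, not the values behind Table 6:

```python
# Hypothetical per-segment edit counts (one error class) and output lengths.
# Shorter output (e.g. after aggressive simplification) can show a large drop
# in raw edits while the normalised edit rate changes far less.
segments_orig = [{"edits": 6, "length": 30}, {"edits": 5, "length": 24}]
segments_ts_a = [{"edits": 2, "length": 12}, {"edits": 2, "length": 10}]

def raw_count(segments):
    return sum(s["edits"] for s in segments)

def edit_rate(segments):
    # Edit rate = raw edit count normalised by the number of words (in %).
    return 100.0 * raw_count(segments) / sum(s["length"] for s in segments)

for name, segs in [("original", segments_orig), ("TS-A", segments_ts_a)]:
    print(name, raw_count(segs), round(edit_rate(segs), 1))
```

With these toy numbers the raw count drops sharply while the normalised rate barely moves, which is the pattern discussed around Table 6.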
Edit rates also decrease for both TS\nsystems, but more for the TS-C due to the reduced sentence lengths of the TS-A system.\nFurthermore, the TS-A reduces raw counts for each of the five error classes, whereas\nthe improvement with the TS-C comes mainly from the reduction of reordering errors.\nThis is still an important improvement since it has been shown that the reordering edit\noperations strongly correlate with the cognitive post-editing effort (Popovi ´c et al, 2014).\nTable 7 shows the percentage of improved, deteriorated and unchanged sentences\nfor both TS systems with regard to all evaluation aspects, i.e. adequacy, fluency, edit\nrate, and raw count of edit operations.\nFor about one half of the sentences (54.3% for the TS-C and 43.5% for the TS-\nA) the adequacy scores do not change. Among those sentences which do change the\nadequacy score, in the case of the TS-C, more sentences deteriorate their score than\nimprove it (26.1% as opposed to 19.6%), while in the case of the TS-A, in contrast,\nmore sentences improve their adequacy instead of deteriorating it (39.1% as opposed to\n17.4%).\nThe number of sentences that improve their fluency is higher than the number of\nsentences that deteriorate it for both TS systems, and it is particularly pronounced for\nTS-A.\nEdit rates are improved significantly with using the TS-C (47.8%) and for the ma-\njority of sentences (54.3%) using the TS-A. Raw counts of edit operations are improved\nfor more than one half of the sentences by the TS-C (60.9%) and for more than 82% of\nthe sentences by the TS-A.\nCan Text Simplification help Machine Translation? 239\nTable 5. Examples of the adequacy and fluency scores received for the translation of original\nsentence and two automatically simplified sentences (using the TS-C and TS-A systems), for the\ncases where TS led to improvements in the output. Differences between the original sentences\nand their automatically simplified versions are shown in bold.\nEx. Version A F Sentence\n1 Original 2 3 ”As we emerge from a decade of conflict abroad and economic crisis\nat home, it’s time to renew America,” Obama said, speaking against a\nbackdrop of armored vehicles and a U.S. flag.\nTS-C 4 4 Speaking against a backdrop of armored vehicles and a U.S. flag,\nObama said it’s time to renew America as we emerge from a decade\nof conflict abroad and economic crisis at home.\n2 Original 3 2 Several Israeli security delegations have visited Egypt during the past\ntwo months to decide on a new embassy location.\nTS-C 4 4 Several Israeli security delegations have visited Egypt during the past\ntwo months to choose a new embassy location.\n3 Original 1 2 The Chinese Embassy said it had received a report that a dozen Chinese\nfishing boats had taken refuge in a lagoon of Huangyan Island to escape\nfoul weather when the Philippine gunboat blocked the lagoon entrance\nand sent 12 Philippine soldiers to harass the Chinese fishermen.\nTS-C 2 3 The Chinese Embassy said it also got a report that a dozen Chinese\nfishing boats had taken refuge in a lagoon of Huangyan Island to escape\nfoul weather. Then the Philippine gunboat sent 12 Philippine soldiers\nto harass the Chinese fishermen. At that time, the gunboat blocked\nthe lagoon entrance.\nTS-A 3 3 The Chinese Embassy had received a report . Adozen Chinese fishing\nships had taken shelter in a lagoon of Huangyan Island . The Philippine\ngunboat blocked the lagoon entrance . 
The Philippine gunboat sent 12\nPhilippine soldiers to harass the Chinese fishermen.\n4 Original 4 3 A Florida mother shot her four children early Tuesday morning before\nturning the gun on herself at her home in Port St. John, police said.\nTS-A 5 5 A Florida mother shot her four children early Tuesday morning . After\nthat, a Florida mother turned the gun on herself at her home.\n5 Original 4 3 Vladimir Putin’s United Russia party won less than 50% of Sunday’s\nvote, a steep fall from its earlier two-thirds majority, according to pre-\nliminary results.\nTS-A 4 4 Putin’s United Russia party won less than 50%.\n240 ˇStajner and Popovi ´c\nTable 6. Raw counts and edit rates (%) normalised with the segment length.\nEdit Raw counts Edit rates (%)\nOperations Orig. TS-C TS-A Orig. TS-C TS-A\n\u0006errors 565 542 321 46.2 43.0 45.0\nMorphology 209 210 132 17.2 16.9 18.6\nOrder 100 66 43 8.2 5.3 6.1\nOmission 76 80 38 5.8 5.9 5.0\nAddition 21 26 10 1.7 2.1 1.4\nMistranslation 159 160 98 13.1 12.8 13.8\nTable 7. Percentage of sentences with better/worse/same sentences with respect to adequacy (A),\nfluency (F), edit rate (ER) and raw edit counts (REC).\n%TS-C TS-A\nA F ER REC M(A) G(F) ER REC\nbetter 19.6 39.1 60.9 47.8 39.1 52.1 54.3 82.6\nworse 26.1 17.4 34.8 39.1 17.4 15.2 45.6 8.7\nsame 54.3 43.5 4.3 13.0 43.5 32.6 0 8.7\n5 Summary and Outlook\nIn this article, we investigated whether the state-of-the-art automatic text simplification\nsystems (ATS) can improve English-to-Serbian machine translation (MT) if used as a\npre-processing step to simplify source sentences before translating them with the SMT\nsystem. We tested this hypothesis by using two ATS systems, a more “conservative” one\n(TS-C) which only performs lexical and syntactic simplifications, and a more “aggres-\nsive” one (TS-A) which performs more lexical and syntactic changes but also performs\na significant content reduction thus leading to a loss of some information details.\nAll the presented results indicate that the use of the TS-C can improve the fluency\nof the MT output and reduce technical and cognitive post-editing effort through reduc-\ntion of reordering errors. The use of the TS-A introduces even more improvements for\nadequacy, fluency and all types of edit operations, but at the cost of losing some details\nin the information. This approach, however, could be very useful for tasks where the\nmain meaning of the text is crucial and the loss of some details is affordable.\nIn addition, our results show that the use of a TS system as a pre-processing step in\na MT pipeline is only useful for a subset of sentences, whereas the rest of the sentences\neither deteriorates or remains unchanged. Therefore a method for filtering sentences\ninto two or three classes (TS improves/TS worsens or TS improves/TS does not influ-\nence/TS worsens) would be very useful and should be investigated in the future work.\nIn future research, we will also include more language pairs and domains.\nCan Text Simplification help Machine Translation? 241\nAcknowledgements\nWe would like to thank Mihael Ar ˇcan for the help with the English-to-Serbian SMT\nsystem, and to Goran Glava ˇs, Advaith Siddharthan and Mandya Angrosh for the help\nwith the automatic text simplification systems.\nReferences\nMandya Angrosh and Advaith Siddharthan. 2014. Hybrid text simplification using synchronous\ndependency grammars with hand-written and automatically harvested rules. 
In Proceedings\nof the 14th Conference of the European Chapter of the Association for Computational Lin-\nguistics (EACL) , Gothenburg, Sweden, pages 722–731.\nMar´ıa Jes ´us Aranzabe, Arantza D ´ıaz de Ilarraza, and Itziar Gonzalez-Dios. 2013. Transforming\nComplex Sentences using Dependency Trees for Automatic Text Simplification in Basque.\nProcesamiento del Lenguaje Natural , V olume 50, pages 61–68.\nMihael Ar ˇcan, Maja Popovi ´c and Paul Buitelaar. Asistent – a machine translation system for\nSlovene, Serbian and Croatian. In Proceedings of the 10th Conference on Language Tech-\nnologies and Digital Humanities , Ljubljana, Slovenia.\nGianni Barlacchi and Sara Tonelli. 2013. ERNESTA: A Sentence Simplification Tool for Chil-\ndren’s Stories in Italian. In Computational Linguistics and Intelligent Text Processing , pages\n476–489.\nLaetitia Brouwers, Delphine Bernhard, Anne-Laure Ligozat and Thomas Franc ¸ois. 2014. Syntac-\ntic Sentence Simplification for French. In Proceedings of the EACL Workshop on Predicting\nand Improving Text Readability for Target Reader Populations (PITR) , Gothenburg, Sweden,\npp. 47–56.\nRaman Chandrasekar. 1994. Hybrid Approach to Machine Translation using Man Machine\nCommunication. PhD Thesis . Tata Institute of Fundamental Research, University of Bombay,\nBombay, India.\nRaman Chandrasekar, Christine Doran and Bangalore Srinivas. 1996. Motivations and Meth-\nods for Text Simplification. In Proceedings of the Sixteenth International Conference on\nComputational Linguistics (COLING) , pages 1041–1044.\nMichael Denkowski and Alon Lavie. 2014. Meteor Universal: Language Specific Translation\nEvaluation for Any Target Language. In Proceedings of the EACL 2014 Workshop on Statis-\ntical Machine Translation , pages 376–380.\nGoran Glava ˇs and Sanja ˇStajner. 2013. Event-Centered Simplication of News Stories. In Proceed-\nings of the Student Workshop held in conjunction with RANLP Conference , Hissar, Bulgaria,\npages 71–78.\nGoran Glava ˇs and Sanja ˇStajner. 2015. Simplifying Lexical Simplification: Do We Need Simpli-\nfied Corpora? In Proceedings of the 53rd Annual Meeting of the Association for Computa-\ntional Linguistics and the 7th International Joint Conference on Natural Language Process-\ning (Volume 2: Short Papers) , pages 63–68.\nPhilipp Koehn and Franz Josef Och and Daniel Marcu 2003. Statistical phrase-based transla-\ntion. in Proceedings of the Conference of the North American Chapter of the Association for\nComputational Linguistics on Human Language Technology - Volume 1 , pages 48–54.\nPhilipp Koehn, Hieu Hoang, Alexandra Birch, Chris Callison-Burch, Marcello Federico, Nicola\nBertoldi, Brooke Cowan, Wade Shen, Christine Moran, Richard Zens, Chris Dyer, Ond ˇrej\nBojar, Alexandra Constantin, Evan Herbst. 2007. Moses: Open source toolkit for statistical\nmachine translation. In Proceedings of the 45th Annual Meeting of the ACL on Interactive\nPoster and Demonstration Sessions , Stroudsburg, PA, USA.\n242 ˇStajner and Popovi ´c\nShashi Narayan and Claire Gardent. 2014. Hybrid Simplification using Deep Semantics and\nMachine Translation. In Proceedings of the 52nd Annual Meeting of the Association for\nComputational Linguistics (ACL) , pages 435–445.\nFranz Josef Och and Hermann Ney. 2003. A Systematic Comparison of Various Statistical\nAlignment Models. Computational Linguistics , 29(1):19–51.\nKishore Papineni, Salim Roukos, Todd Ward and Wei-Jing Zhu. 2002. BLEU: a Method for\nAutomatic Evaluation of Machine Translation. 
In Proceedings of the 40th Annual Meeting\nof the Association for Computational Linguistics (ACL) , pages 311–318.\nMaja Popovi ´c. 2011. Hjerson: An Open Source Tool for Automatic Error Classificatio n of\nMachine Translation Output. The Prague Bulletin of Mathematical Linguistics , pages 59–\n68, Prague, Czech Republic, October.\nMaja Popovi ´c, Arle Lommel, Aljoscha Burchardt, Eleftherios Avramidis, Hans Uszkoreit. 2014.\nRelations between different types of post-editing operations, cognitive effort and temporal ef-\nfort. In Proceedings of the 17th Annual Conference of the European Association for Machine\nTranslation (EAMT 14) , pages 191–198, Dubrovnik, Croatia.\nMaja Popovi ´c. 2015. chrF: character n-gram F-score for automatic MT evaluation. In Proceed-\nings of the 10th Workshop on Statistical Machine Translation , pages 392–395.\nMaja Popovi ´c, Mihael Ar ˇcan. 2015. Identifying main obstacles for statistical machine trans-\nlation of morphologically rich South Slavic languages. In Proceedings of the 18th Annual\nConference of the European Association for Machine Translation (EAMT-15) , pages 97–104,\nAntalya, Turkey.\nHoracio Saggion, Sanja ˇStajner, Stefan Bott, Luz Rello, Simon Mille and Biljana Drndarevi ´c.\n2015. Making It Simplext: Implementation and Evaluation of a Text Simplification System\nfor Spanish. ACM Transactions on Accessible Computing , V olume 6, Chapter 14.\nAdvaith Siddharthan. 2011. Text Simplification using Typed Dependencies: A Comparison of\nthe Robustness of Different Generation Strategies. In Proceedings of the 13th European\nWorkshop on Natural Language Generation (ENLG) , pages 2–11.\nAngrosh Mandya, Tadashi Nomoto and Advaith Siddharthan. 2014. Lexico-syntactic text simpli-\nfication and compression with typed dependencies. In Proceedings of the 25th International\nConference on Computational Linguistics (COLING) , Dublin, Ireland, pages 1996–2006.\nLucia Specia. 2010. Translating from complex to simplified sentences. In Proceedings of the 9th\ninternational conference on Computational Processing of the Portuguese Language , pages\n30–39.\nAndreas Stolcke. 2002. SRILM – an extensible language modeling toolkit. volume 2, pages\n901–904, Denver, CO, September.\nJ¨org Tiedemann. 2012. Parallel data, tools and interfaces in OPUS. In Proceedings of the 8th\nInternational Conference on Language Resources and Evaluation (LREC) , pages 2214–2218.\nFrancis M. Tyers and Murat Alperen. 2010. South-East European Times: A parallel corpus of\nthe Balkan languages. In Proceedings of the LREC Workshop on Exploitation of Multilingual\nResources and Tools for Central and (South-) Eastern European Languages , pages 49–53,\nValetta, Malta.\nReceived May 2, 2016 , accepted May 12, 2016",
"main_paper_content": null
}
|
{
"decision": "Unknown",
"reviews": []
}
| 0 | 0 |
[] |
[] |
{
"id": "rzblf_JXMut",
"year": null,
"venue": "EAMT 2014",
"pdf_link": "https://aclanthology.org/2014.eamt-1.41.pdf",
"forum_link": "https://openreview.net/forum?id=rzblf_JXMut",
"arxiv_id": null,
"doi": null
}
|
{
"title": "Relations between different types of post-editing operations, cognitive effort and temporal effort",
"authors": [
"Maja Popovic",
"Arle Lommel",
"Aljoscha Burchardt",
"Eleftherios Avramidis",
"Hans Uszkoreit"
],
"abstract": "Maja Popović, Arle Lommel, Aljoscha Burchardt, Eleftherios Avramidis, Hans Uszkoreit. Proceedings of the 17th Annual conference of the European Association for Machine Translation. 2014.",
"keywords": [],
"raw_extracted_content": "Relations between different types of post-editing operati ons,\ncognitive effort and temporal effort\nMaja Popovi ´c, Arle Lommel, Aljoscha Burchardt,\nEleftherios Avramidis, Hans Uszkoreit\nDFKI – Berlin, Germany\[email protected]\nAbstract\nDespite the growing interest in and use\nof machine translation post-edited out-\nputs, there is little research work explor-\ning different types of post-editing opera-\ntions, i.e. types of translation errors cor-\nrected by post-editing. This work in-\nvestigates five types of post-edit oper-\nations and their relation with cognitive\npost-editing effort (quality level) and post-\nediting time. Our results show that for\nFrench-to-English and English-to-Spanish\ntranslation outputs, lexical and word or-\nder edit operations require most cogni-\ntive effort, lexical edits require most time,\nwhereas removing additions has a low im-\npact both on quality and on time. It is also\nshown that the sentence length is an impor-\ntant factor for the post-editing time.\n1 Introduction and related work\nIn machine translation research, ever-increasing\namounts of post-edited translation outputs are be-\ning collected. These have been used primarily for\nautomatic estimation of translation quality. How-\never, they enable a large number of applications,\nsuch as analysis of different aspects of post-editing\neffort. (Krings, 2001) defines three aspects: tem-\nporal, referring to time spent on post-editing, cog-\nnitive, referring to identifying the errors and the\nnecessary steps for correction, and technical, refer-\nring to edit operations performed in order to pro-\nduce the post-edited version. These aspects of ef-\nfort are not necessary equal in various situations.\nc/circlecopyrt2014 The authors. This article is licensed under a Creative\nCommons 3.0 licence, no derivative works, attribution, CC-\nBY-ND.Since the temporal aspect is important for the\npractice, post-editing time is widely used for mea-\nsuring post-editing effort (Krings, 2001; Tatsumi,\n2009; Tatsumi et Roturier, 2010; Specia, 2011).\nHuman quality scores based on the needed amount\nof post-editing are involved as assessment of the\ncognitive effort in (Specia et al., 2010; Specia,\n2011). Using edit distance between the original\nand the post-edited translation for assessment of\nthe technical effort is reported in (Tatsumi, 2009;\nTatsumi et Roturier, 2010; Temnikova, 2010; Spe-\ncia, 2011; Blain et al., 2011).\nMore details about the technical effort can be\nobtained by analysing particular edit operations.\n(Blain et al., 2011) defined these operations on\na linguistic level as post-editing actions and per-\nformed comparison between statistical and rule-\nbased systems. (Temnikova, 2010) proposed the\nanalysis of edit operations for controlled language\nin order to explore cognitive effort for different\nerror types – post-editors assigned one of ten er-\nror types to each edit operation which were then\nranked by difficulty. In (Koponen, 2012) post-edit\noperations are analysed in sentences with discrep-\nancy between the assigned quality score and the\nnumber of performed post-edits. In one of the ex-\nperiments described in (Wisniewski et al., 2013)\nan automatic analysis of post-edits based on Lev-\nenshtein distance is carried out considering only\nthe basic level of substitutions, deletions, inser-\ntions and TER shifts. These edit operations are\nanalysed on the lexical level in order to determine\nthe most frequent affected words. 
General user\npreferences regarding different types of machine\ntranslation errors are explored in (Kirchhoff et al.,\n2012) for English-Spanish translation of texts from\npublich health domain, however without any rela-\ntion to post-editing task. (Popovi´ c and Ney(, 2011)\n191\nnumber of quality level\nsentences ok edit+ edit edit- bad\nfr-en 2011 323 1559 0 544 99\nen-es 2011 31 399 0 550 20\nen-es 2012 200 548 856 576 74\nTable 1: Corpus statistics: number of sentences as-\nsigned to each of the quality levels.\ndescribe a method for automatic classification of\nmachine translation errors into five categories, but\nonly using independent human reference transla-\ntions, not post-edited translation outputs.\nThe aim of this work is to systematically explore\nthe relations of five different types of edit opera-\ntions with the cognitive and the temporal effort. To\nthe best of our knowledge, such study has not yet\nbeen carried out. Classification of edit operations\nis based on the edit distance and is performed auto-\nmatically, and human quality level scores are used\nas a measure of cognitive effort.\n2 Method and data\nExperiments are carried out on 2525 French-to-\nEnglish and 1000 English-to-Spanish translated\nsentences described in (Specia, 2011) as well\nas 2254 English-to-Spanish sentences used for\ntraining in the 2013 Quality Estimation shared\ntask (Callison-Burch et al., 2012). All translation\noutputs were generated by statistical machine sys-\ntems. For each sentence in these corpora, a human\nannotator assigned one of four or five quality levels\nas a measure for the cognitive effort:\n•acceptable (ok)\n•almost acceptable, easy to post-edit (edit+)\n•possible to edit (edit)\n•still possible to edit, better than from scratch\n(edit-)\n•very low quality, better to translate from\nscratch than try to post-edit (bad)\nNumbers of sentences assigned to each quality\nlevel are presented in Table 1.\nAll sentences were post-edited by the same two\nhuman translators1which were instructed to per-\nform the minimum number of edits necessary to\n1One for French-English and one for English-Spanish output.make the translation acceptable. Post-editing time\nis measured on the sentence level in a controlled\nway in order to isolate factors such as pauses be-\ntween sentences.\nThe technical effort is represented by following\nfive types of edit operations:\n•correcting word form\n•correcting word order\n•adding omission\n•deleting addition\n•correcting lexical choice\nThe performed edit operations are classified on\nthe word level using the Hjerson automatic tool\n(Popovi´ c, 2011) for error analysis. The post-edited\ntranslation output was used as a reference transla-\ntion, and the results are available in the form of\nraw counts and edit rates for each category. Edit\nrate is defined as the raw count of edited words\nnormalised over the total number of words i.e. sen-\ntence length of the given translation output.\n3 Results\n3.1 Edit operations and quality level\nThe distributions of five edit rates for different\nquality levels are presented in Figure 1. All edit\nrates increase with the decrease of quality, lexi-\ncal choice and word order being the most promi-\nnent. 
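For concreteness, per-category edit rates of the kind plotted in Figure 1 can be computed from sentence-level counts roughly as follows. This is an illustrative sketch: the records and field names are assumptions made for the example, not the actual output format of Hjerson or the authors' scripts.

```python
from collections import defaultdict

CATEGORIES = ("form", "order", "omission", "addition", "lexical")

# Invented sentence-level records: per-category edit counts against the
# post-edited output, the output length in words, and the assigned quality level.
sentences = [
    {"quality": "ok",    "length": 20, "edits": {"form": 0, "order": 0, "omission": 0, "addition": 0, "lexical": 1}},
    {"quality": "edit+", "length": 25, "edits": {"form": 1, "order": 1, "omission": 0, "addition": 1, "lexical": 2}},
    {"quality": "bad",   "length": 18, "edits": {"form": 2, "order": 3, "omission": 1, "addition": 0, "lexical": 5}},
]

counts = defaultdict(lambda: defaultdict(int))
words = defaultdict(int)
for s in sentences:
    words[s["quality"]] += s["length"]
    for cat in CATEGORIES:
        counts[s["quality"]][cat] += s["edits"][cat]

# Edit rate per quality level and category: edited words normalised by the
# total number of words of the translation output, in percent.
for quality in ("ok", "edit+", "bad"):
    rates = {cat: round(100.0 * counts[quality][cat] / words[quality], 1) for cat in CATEGORIES}
    print(quality, rates)
```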
The main difference between two edit types is that the number of lexical edits increases monotonically whereas the number of reordering edits is relatively low for high quality translations and relatively high for low quality translations.\nImpact of reordering distance: In addition to five basic error types, we analysed reordering distances, i.e., the number of word positions by which a particular word is shifted. Reordering distances for different quality levels are presented in Figure 2. It can be seen that the distant reorderings are not an important issue, even for low quality translations, whereas the number of local and longer range reorderings both increase as quality decreases. The increase of longer ones, however, is more prominent for the low-quality translations: this relationship means that the increase of overall reordering errors presented in Figure 1 is primarily due to these reorderings. It should be noted that the experiments were carried out only on the language pairs with prevailing local structure differences – future experiments should include languages with different structure, such as German.\n[Figure 1: Distribution of five edit types for different quality levels in (a) one French-to-English and (b) two English-to-Spanish translation outputs. Panels: (a) French→English 2011, (b) English→Spanish 2011, (c) English→Spanish 2012; y-axis: edit rates [%] for the categories form, order, omission, addition, lexical; bars per quality level (ok, edit+, edit, edit-, bad). Plots omitted.]\n[Figure 2: Distribution of reordering distances for different quality levels in (a) one French-to-English and (b),(c) two English-to-Spanish translation outputs. x-axis: reordering distance [words]; y-axis: reordering rate [%]. Plots omitted.]\n3.1.1 Almost acceptable translations\nIn addition to exploring different quality levels, we carried out an analysis only on almost acceptable translations for different language pairs. Almost acceptable translations are of the special interest for high-quality machine translation – they are namely close to perfect translations and do not require much post-editing effort.
The main ques-\ntion is which types of errors are keeping these\ntranslations from perfect.\nFor analysis of almost acceptable translations,\napart from the sentences assigned to the “edit+”\ncategory in Table 1, an additional corpus was avail-\nable, namely a portion of the German-to-English\n(778 sentences) and English-to-German (955 sen-\ntences) translations obtained by the best ranked\nstatistical and rule based systems in the frame-\nwork of the 2011 shared task (Callison-Burch et\nal., 2011).\n 0 2 4 6 8 10 12 14\nform orderomission addition lexicaledit rates [%]de-en\nen-de\nen-es1\nen-es2\nfr-en\n(a) edit operations in almost acceptable translations\n 0 0.1 0.2 0.3 0.4 0.5 0.6 0.7 0.8 0.9\n 1 2 3 4 5 6 7 8 9 10 11 12 13 14 15 16 17 18 19 20reordering rate [%]\nreordering distance [words]de-en\nen-de\nen-es1\nen-es2\nfr-en\n(b) reordering distances in almost acceptable translation s\nFigure 3: Distribution of (a) five edit operations\nand (b) reordering distances in almost accept-\nable translations: French-to-English, two English-\nto-Spanish, German-to-English and English-to-\nGerman outputs.\nDistributions of five edit types as well as re-\nordering distances in five almost acceptable sets\nare shown in Figure 3 and it can be seen that theyare largely dependent on language pair and trans-\nlation direction. The lexical edits are the most\nprominent for all translation directions indicating\nthat even in the high-quality translations, large por-\ntions of texts are mistranslated. Inflectional errors\nare rare in high-quality English outputs, but still\nrelatively high in Spanish and German translations.\nAs for reordering errors, for French-English and\nEnglish-Spanish translations the reordering edit\nrates are low, less than 4%, however for German-\nto-English translations it is almost 8% being not\nmuch lower than the lexical edit rate. This high\nrate indicates that, for this translation direction,\neven high-quality translations contain a significant\nnumber of syntactic errors. English-to-German,\nconversely, is quite difficult in general and the re-\nordering edit rate is comparable to the rates for\nother types of operations; since all the edit rates\nare similar, improving any of them should lead to\nquality increase. As for reordering distances, short\nrange reorderings are dominant in all high-quality\ntranslations, and the main difference for German-\nto-English outputs is due longer range reordering\nedits. Further analysis (e.g. based on POS tags)\nis needed to determine exact nature of reordering\nproblems in the high quality translations.\n3.2 Edit operations and post-editing time\nPost-editing times are available for the 2011 data\n(first two rows in Table 1). The post-editing times\nfor the English output are much shorter than for\nthe Spanish output, probably due to language dif-\nferences and/or to the different annotators. In any\ncase, this difference does not represent an issue for\nestimating distribution of post-editing time over\nfive edit operation classes. 
For each edit operation\ntype, average post-editing time is calculated in the\nfollowing way:\n•for each sentence, divide the raw count of\neach edit type by the total number of edit op-\nerations thus obtaining weights;\n•for each edit type in the sentence, estimate its\npost-editing time by multiplicating its weight\nwith the whole sentence post-editing time;\n•finally, for each edit type average the post-\nediting time over all sentences.\nIt should be noted that using uniform weights\nmight be debatable on the sentence level but is suf-\nficiently reliable on the document level. For exam-\nple, if one sentence contains two lexical errors and\n194\none word order error and the editing took 30 sec-\nonds, the estimated time for correcting each error\ntype in this sentence is 10 seconds. However, it is\ntheoretically possible that the reordering error ac-\ntually took 20s and each of the lexical errors took\nonly 5s. Nevertheless, many other sentences with\ndifferent error distributions will be able to reflect\nthis correctly. Therefore, averaging over all sen-\ntences gives a good estimate of post-editing time\ndistribution over edit types. Distribution of post-\nediting time over reordering distances is calculated\nin a similar way, and all the results are presented in\nFigure 4.\nIt can be seen that the lexical edits require the\nlargest portion of the time for both outputs. For\nthe English translation output, the shortest time is\nneeded for correction of the word form, and the\ntimes for other three edit types are similar. For\nthe Spanish output, the deletion of extra words re-\nquires much less time than other edit types. As for\nreordering distances, as expected, longer reorder-\nings require more time.\n3.3 Quality level and post-editing time\nIn previous sections, we compared five edit oper-\nation types with cognitive effort and with tempo-\nral effort separately. Nevertheless, the relation be-\ntween these two aspects in the given context is also\nimportant to better understand all effects.\nPost-editing times for different quality levels for\nthe 2011 data are presented in Figure 5. Although\nan overall increase of the post-editing time can be\nobserved when quality level decreases (i.e. cogni-\ntive effort increases), there is a discrepance for a\nsignificant number of sentences, especially for the\nsentences with low quality level score. 
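The per-category time attribution described in Section 3.2 above can also be sketched in code. The snippet below is one possible reading of that procedure, using invented sentences and assumed field names rather than the authors' data or implementation:

```python
from collections import defaultdict

CATEGORIES = ("form", "order", "omission", "addition", "lexical")

# Invented records: per-category edit counts and the measured post-editing
# time of the whole sentence in seconds.
sentences = [
    {"time": 30.0, "edits": {"form": 0, "order": 1, "omission": 0, "addition": 0, "lexical": 2}},
    {"time": 45.0, "edits": {"form": 1, "order": 0, "omission": 1, "addition": 0, "lexical": 1}},
]

attributed = defaultdict(list)
for s in sentences:
    total = sum(s["edits"].values())
    if total == 0:
        continue  # no edits, nothing to attribute
    for cat in CATEGORIES:
        # Weight = share of this category among all edits in the sentence;
        # the category is credited with weight * sentence time (e.g. 2 lexical
        # + 1 order edits in a 30 s sentence -> 20 s lexical, 10 s order,
        # i.e. 10 s per individual edit under uniform weights).
        attributed[cat].append(s["time"] * s["edits"][cat] / total)

averages = {cat: round(sum(v) / len(v), 1) for cat, v in attributed.items()}
print(averages)
```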
In order to explore the reasons for differences between cognitive and temporal effort, further analysis of edit operations is carried out taking into account both quality level and post-editing time.\n[Figure 4: Average post-editing time (a) for five types of edit operations and (b) for different reordering distances: French-to-English and English-to-Spanish translation outputs. Panel (a): time [s] per category (form, order, omission, addition, lexical); panel (b): time [s] vs. reordering distance [words]. Plots omitted.]\n[Figure 5: Distribution of post-editing times for different quality levels (ok, edit+, edit-, bad); y-axis: time [s]. Plot omitted.]\n3.4 Analysis of discrepancies\nIn order to examine differences between the cognitive and the temporal effort, we divided the texts into four parts:\n• create two quality subsets: high-quality (edit+ and ok) and low-quality (edit- and bad) sentences;\n• calculate the median post-editing time for the low-quality sentences (which is 40 seconds for the English and 100 seconds for the Spanish output) and use it as a threshold;\n• create two time subsets for both quality subsets according to this threshold: “short-time” and “long-time” sentences.\nAs a first step, edit rates for each subset are calculated and the results are shown in Figure 6. The distributions for the same quality are very close – all edit rates are higher for the low-quality sentences regardless of the post-editing time. This indicates that the cognitive effort is tightly related to the amount of particular translation errors, mainly lexical and reordering errors, as already stated in Section 3.1.\n[Figure 6: Edit rates for five edit operations – analysing discrepancies between quality and time; (a) French-to-English and (b) English-to-Spanish output. Bars show the four subsets (low/high quality × long/short time); y-axis: edit rate [%]. Plots omitted.]\nThe next step was the analysis of post-editing time – what are the causes of long post-editing time for high quality translations and short post-editing time for low quality translations? For each sentence subset, average time distributions over five edit operation types are calculated as described in Section 3.2 and presented in Figure 7.
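The quality/time subsetting used here is also easy to express in code. The sketch below is illustrative only: the records are invented and the field names are assumptions, and a real run would compute the median threshold separately per translation output, as in the paper (40 s for English, 100 s for Spanish):

```python
import statistics

# Invented sentence records for one translation output.
sentences = [
    {"quality": "ok",    "time": 15.0, "length": 18, "edits": {"lexical": 1, "order": 0}},
    {"quality": "edit+", "time": 55.0, "length": 40, "edits": {"lexical": 2, "order": 1}},
    {"quality": "bad",   "time": 20.0, "length": 12, "edits": {"lexical": 3, "order": 2}},
    {"quality": "edit-", "time": 90.0, "length": 35, "edits": {"lexical": 6, "order": 3}},
]

HIGH, LOW = {"ok", "edit+"}, {"edit-", "bad"}

# Threshold = median post-editing time of the low-quality sentences.
threshold = statistics.median(s["time"] for s in sentences if s["quality"] in LOW)

def subset(quality_set, long_time):
    return [s for s in sentences
            if s["quality"] in quality_set and (s["time"] > threshold) == long_time]

for name, qs, lt in [("high quality, short time", HIGH, False),
                     ("high quality, long time", HIGH, True),
                     ("low quality, short time", LOW, False),
                     ("low quality, long time", LOW, True)]:
    sub = subset(qs, lt)
    words = sum(s["length"] for s in sub)
    rates = ({cat: round(100.0 * sum(s["edits"][cat] for s in sub) / words, 1)
              for cat in ("lexical", "order")} if words else {})
    print(name, len(sub), rates)
```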
The\nsame tendencies can be observed for both trans-\nlation outputs:\n•all edit types required significantly more timein the long-time sentences than in the short-\ntime sentences regardless of the quality level;\n•low-quality translations required more time\nthan high-quality translations in the same\ntime subset;\n–this effect is larger for the long-time sen-\ntences,\n–especially for reordering errors, omis-\nsions and lexical corrections.\n 0 5 10 15 20 25 30\nform orderomission addition lexicaltime [s]low quality, long time\nhigh quality, long time\nlow quality, short time\nhigh quality, short time\n(a) French →English 2011\n 0 20 40 60 80 100\nform orderomission addition lexicaltime [s]low quality, long time\nhigh quality, long time\nlow quality, short time\nhigh quality, short time\n(b) English →Spanish 2011\nFigure 7: Average post-editing times for five\nedit operations – analysing discrepances between\nquality and time; (a) French-to-English and (b)\nEnglish-to-Spanish output.\nThe results confirm that the lexical and reorder-\ning errors require more post-editing effort than the\nothers. In addition, post-editing time for low-\nquality translations is also affected by omissions,\nwhereas this class has no significant importance in\nthe high-quality translations.\nThese results also indicate the importance of the\nsentence length for the post-editing time (which\n196\nhas also been observed in other studies, e.g. (Tat-\nsumi, 2009; Koponen, 2012)). Edit rates are\nnamely raw counts of edit operations normalised\nover the sentence length: since there is no sig-\nnificant variation of edit rates between the long-\ntime and the short-time subset, the only remaining\nfactor is the sentence length. On the other hand,\na number of high-quality sentences require long\npost-editing time despite of low edit rates: the pos-\nsible reason is that those sentences are longer.\nIn order to confirm this assumption, average\nsentence lengths were calculated for each sentence\nsubset and the results are given in Table 2. As ex-\npected, long-time sentences are longer than short-\ntime sentences regardless of the quality level. In\naddition, the relations of the sentence length with\npost-editing time and with quality level are pre-\nsented in Figure 8: the post-editing time increases\nalmost linearly with the increase of the sentence\nlength, whereas the correspondence between the\nsentence length and the quality level is not straight-\nforward, mainly due to the large number of short\nlow-quality sentences.\nquality time fr-en en-es\nhigh short 22.7 19.6\nlong 43.2 31.4\nlow short 21.2 19.0\nlong 40.6 35.5\nTable 2: Average sentence lengths for four sen-\ntence subsets based on different quality levels and\npost-editing times.\n4 Summary and outlook\nWe presented an experiment aiming to explore the\nrelations of five different types of post-edit oper-\nations with the cognitive and the temporal post-\nediting effort. We performed automatic analysis\nof edit operations for different quality levels and\nestimated post-editing time for each of the five cat-\negories. The results showed that the reordering\nedits (shifts) and correcting mistranslations corre-\nlated most strongly with quality level i.e. cogni-\ntive effort, as well as that the lexical errors require\nthe largest portion of post-editing time. 
Analysis\nof reordering distances showed that longer range\nreorderings have more effects both to the quality\nlevel and to the post-editing time, however very\nlong ranges do not represent an issue.In addition, we analysed the edit operations and\nreordering distances in almost acceptable transla-\ntions in order to investigate which error types are\npresent in almost perfect high-quality translations\npreventing them to be completely perfect. It is\nshown that the error distributions are dependent\non the language pair and the translation direction:\nhowever, mistranslations are the dominant error\ntype for all translation outputs.\nFurthermore, we showed that the edit rates, es-\npecially for mistranslations and reorderings, cor-\nrelate strongly with quality level regardless of the\ntime spent on post-editing. On the other hand,\npost-editing time strongly depends on the sentence\nlength.\nOur experiment offers many directions for fu-\nture work. First of all, it should be kept in mind\nthat the French-English and English-Spanish lan-\nguage pairs are very similar in the terms of struc-\nture and morphology – word order differences are\nmostly of the local character, and both French and\nSpanish morphologies are rich mostly due to verbs.\nIn future work, languages with more distinct struc-\ntural differences (such as German) and richer mor-\nphology (such as Czech or Finnish) should be anal-\nysed. Furthermore, more details about edit opera-\ntion types can be obtained by the use of additional\nknowledge such as POS tags.\nAcknowledgments\nThis work has been supported by the project\nQTL AUNCH PAD(EU FP7 CSA No. 296347).\nReferences\nBlain, Fr´ ed´ eric, Jean Senellart, Holger Schwenk, Mirko\nPlitt, and Johann Roturier. 2011. Qualitative analy-\nsis of post-editing for high quality machine transla-\ntion. In Machine Translation Summit XIII , Xiamen,\nChina, September.\nCallison-Burch, Chris, Philipp Koehn, Christof Monz,\nand Omar Zaidan. 2011. Findings of the 2011\nWorkshop on Statistical Machine Translation. In\nProceedings of the Sixth Workshop on Statistical Ma-\nchine Translation (WMT 2011) , pages 22–64, Edin-\nburgh, Scotland, July.\nCallison-Burch, Chris, Philipp Koehn, Christof Monz,\nMatt Post, Radu Soricut, and Lucia Specia. 2012.\nFindings of the 2012 Workshop on Statistical Ma-\nchine Translation. In Proceedings of the Seventh\nWorkshop on Statistical Machine Translation , page\n1051, Montral, Canada, June. Association for Com-\nputational Linguistics.\n197\n 0 20 40 60 80 100 120 140\n 0 10 20 30 40 50 60 70time [s]\nlength [words]\n(a) French →English 2011 0 100 200 300 400 500 600 700 800 900\n 0 10 20 30 40 50 60 70time [s]\nlength [words]\n(b) English →Spanish 2011\n 0 20 40 60 80 100 120\nok edit+ edit- bad\n(c) French →English 2011 0 10 20 30 40 50 60 70 80 90\nok edit+ edit- bad\n(d) English →Spanish 2011\nFigure 8: Distribution of post-editing times for (a),(b) di fferent sentence lengths and (c),(d) different\nquality levels; (a),(c) French-to-English and (b),(d) Eng lish-to-Spanish output.\nKirchhoff, Katrin, Daniel Capurro, and Anne Turner.\n2012. Evaluating user preferences in machine trans-\nlation using conjoint analysis. In Proceedings of the\n16th Annual Conference of the European Association\nfor Machine Translation (EAMT 12) , pages 119–126,\nTrento, Italy, May.\nKoponen, Maarit. 2012. Comparing human percep-\ntions of post-editing effort with post-editing oper-\nations. 
In Proceedings of the Seventh Workshop\non Statistical Machine Translation , pages 181–190,\nMontral, Canada, June. Association for Computa-\ntional Linguistics.\nKrings, Hans. 2001. Repairing texts: empirical in-\nvestigations of machine translation post-editing pro-\ncesses. Kent, OH. Kent State University Press.\nPopovi´ c, Maja. 2011. Hjerson: An Open Source\nTool for Automatic Error Classification of Machine\nTranslation Output. The Prague Bulletin of Mathe-\nmatical Linguistics , (96):59–68, October.\nPopovi´ c, Maja and Hermann Ney. 2011. Towards Au-\ntomatic Error Analysis of Machine Translation Out-\nput. Computational Linguistics , 37(4), pages 657–\n688, December.\nSpecia, Lucia, Nicola Cancedda, and Marc Dymet-\nman. 2010. A Dataset for Assessing Machine Trans-\nlation Evaluation Metrics. In Proceedings of the\nSeventh conference on International Language Re-\nsources and Evaluation (LREC’2010) , pages 3375–\n3378, Valletta, Malta, May.Specia, Lucia. 2011. Exploiting Objective Annota-\ntions for Measuring Translation Post-editing Effort.\nInProceedings of the 15th Annual Conference of\nthe European Association for Machine Translation\n(EAMT 11) , pages 73–80, Leuven, Belgium, May.\nTatsumi, Midori. 2009. Correlation between auto-\nmatic evaluation metric scores, post-editing speed\nand some other factors. In Proceedings of MT Sum-\nmit XII , pages 332–339, Ottawa, Canada, August.\nTatsumi, Midori, Roturier, Johann. 2010. Source\ntext characteristics and technical and temporal post-\nediting effort: what is their relationship?. In Pro-\nceedings of the Second Joint EM+/CGNL Worskhop\nBringing MT to the user (JEC 10) , pages 43–51,\nDenver, Colorado, November.\nTemnikova, Irina. 2010. Cognitive evaluation ap-\nproach for a controlled language post-editing exper-\niment. In Proceedings of the Seventh International\nConference on Language Resources and Evaluation\n(LREC’10) , Valletta, Malta, May.\nWisniewski, Guillaume, Anil Kumar Singh, Natalia Se-\ngal, and Franc ¸ois Yvon. 2013. Design and analysis\nof a large corpus of post-edited translations: qual-\nity estimation, failure analysis and the variability of\npost-edition. In Proceedings of the MT Summit XIV ,\npages 117–124, Nice, France, September.\n198",
"main_paper_content": null
}
|
{
"decision": "Unknown",
"reviews": []
}
| 0 | 0 |
[] |
[] |
{
"id": "y2LVxUwLYU",
"year": null,
"venue": "EAMT 2014",
"pdf_link": "https://aclanthology.org/2014.eamt-1.38.pdf",
"forum_link": "https://openreview.net/forum?id=y2LVxUwLYU",
"arxiv_id": null,
"doi": null
}
|
{
"title": "Using a new analytic measure for the annotation and analysis of MT errors on real data",
"authors": [
"Arle Lommel",
"Aljoscha Burchardt",
"Maja Popovic",
"Kim Harris",
"Eleftherios Avramidis",
"Hans Uszkoreit"
],
"abstract": "Arle Lommel, Aljoscha Burchardt, Maja Popović, Kim Harris, Eleftherios Avramidis, Hans Uszkoreit. Proceedings of the 17th Annual conference of the European Association for Machine Translation. 2014.",
"keywords": [],
"raw_extracted_content": "Using a New Analytic Measure for the Annotation and Analysis of MT\nErrors on Real Data\nArle Lommel1, Aljoscha Burchardt1, Maja Popovi ´c1,\nKim Harris2, Eleftherios Avramidis1, Hans Uszkoreit1\n1DFKI / Berlin, Germany\n2text&form / Berlin, Germany\[email protected]\[email protected]\nAbstract\nThis work presents the new flexible Mul-\ntidimensional Quality Metrics (MQM)\nframework and uses it to analyze the per-\nformance of state-of-the-art machine trans-\nlation systems, focusing on “nearly accept-\nable” translated sentences. A selection\nof WMT news data and “customer” data\nprovided by language service providers\n(LSPs) in four language pairs was anno-\ntated using MQM issue types and exam-\nined in terms of the types of errors found\nin it.\nDespite criticisms of WMT data by the\nLSPs, an examination of the resulting er-\nrors and patterns for both types of data\nshows that they are strikingly consistent,\nwith more variation between language\npairs and system types than between text\ntypes. These results validate the use of\nWMT data in an analytic approach to as-\nsessing quality and show that analytic ap-\nproaches represent a useful addition to\nmore traditional assessment methodolo-\ngies such as BLEU or METEOR.\n1 Introduction\nFor a number of years, the Machine Translation\n(MT) community has used “black-box” measures\nof translation performance like BLEU (Papineni\net al., 2002) or METEOR (Denkowski and Lavie,\n2011). These methods have a number of advan-\ntages in that they can provide automatic scores for\nc\r2014 The authors. This article is licensed under a Creative\nCommons 3.0 licence, no derivative works, attribution, CC-\nBY-ND.MT output in cases where there are existing refer-\nence translations by calculating similarity between\nthe MT output and the references. However, such\nmetrics do not provide insight into the specific\nnature of problems encountered in the translation\noutput and scores are tied to the particularities of\nthe reference translations.\nAs a result of these limitations, there has been\na recent shift towards the use of more explicit er-\nror classification and analysis (see, e.g., Vilar et al.\n(2006)) in addition to automatic metrics. The error\nprofiles used, however are typically ad hoc cate-\ngorizations and specific to individual MT research\nprojects, thus limiting their general usability for\nresearch or comparability with human translation\n(HT) results. In this paper, we will report on an-\nnotation experiments that use a new, flexible er-\nror metric and that showcase a new type of MT\nresearch involving collaboration between MT re-\nsearchers, human translators, and Language Ser-\nvice Providers (LSPs).\nWhen we started to prepare our annota-\ntion experiments, we teamed up with LSPs\nand designed a custom error metric based\non the “Multidimensional Quality Metric”\nMQM designed by the QTLaunchPad project\n(http://www.qt21.eu/launchpad). The metric was\ndesigned to facilitate annotation of MT output by\nhuman translators while containing analytic error\nclasses we considered relevant to MT research\n(see Section 2 , below). This paper represents the\nfirst publication of results from use of MQM for\nMT quality analysis.\nPrevious research in this area has used er-\nror categories to describe error types. For in-\nstance, Farr ´us et al. (2010) divide errors into five\nbroad classes (orthographic, morphological, lexi-\ncal, semantic, and syntactic). 
By contrast, Flana-\n165\ngan (1994) uses 18 more fine-grained error cate-\ngories with additional language-pair specific fea-\ntures, while Stymne and Ahrenberg (2012) use ten\nerror types of somewhat more intermediate granu-\nlarity (and specifically addresses combinations of\nmultiple error types). All of these categorization\nschemes are ad hoc creations that serve a particular\nanalytic goal. MQM, however, provides a general\nmechanism for describing a family of related met-\nrics that share a common vocabulary. This metric\nwas based upon a rigorous examination of major\nhuman and machine translation assessment met-\nrics (e.g., LISA QA Model, SAE J2450, TAUS\nDQF, ATA assessment, and various tool-specific\nmetrics) that served as the basis for a descriptive\nframework for declaring what a particular metric\naddresses. While the metric described in this pa-\nper is still very much a purpose-driven metric, it is\ndeclared in this general framework, which we pro-\npose for use to declare specific metrics for general\nquality assessment and error annotation tasks.\nFor data, we chose WMT data (Bojar et al.,\n2013) to represent the state of the art output for MT\nin research. However, LSPs frequently reported to\nus that the mostly journalistic WMT data does not\nrepresent their business data (mostly technical doc-\numentation) or typical applications of MT in busi-\nness situations. In addition, it turned out that jour-\nnalistic style often contains literary flourishes, id-\niosyncratic or mixed styles, and deep embedding\n(e.g., nested quotations) that sometimes make it\nvery difficult to judge the output.\nAs a result, we decided to use both WMT data\nand customer MT data that LSPs contributed from\ntheir daily business to see if the text types gener-\nate different error profiles. This paper accordingly\npresents and compares the results we obtained for\nboth types of sources. For practical purposes, we\ndecided to analyze only “near miss” translations,\ntranslations which require only a small effort to\nbe converted into acceptable translations. We ex-\ncluded “perfect” translations and those translations\nthat human evaluators judged to have too many\nerrors to be fixed easily (because these would be\ntoo difficult to annotate). We therefore had human\nevaluators select segments representing this espe-\ncially business-relevant class of translations prior\nto annotation.\nA total of nine LSPs participated in this task,\nwith each LSP analyzing from one to three lan-\nguage pairs. Participating LSPs were paid up toe1000 per language pair. The following LSPs par-\nticipated: Beo, Hermes, iDisc, Linguaserve, Lo-\ngrus, Lucy, Rheinschrift, text&form, and Welocal-\nize.\n2 Error classification scheme\nThe Multidimensional Quality Framework\n(MQM) system1provides a flexible system for\ndeclaring translation quality assessment methods,\nwith a focus on analytic quality, i.e., quality\nassessment that focuses on identifying specific\nissues/errors in the translated text and categorizing\nthem.2MQM defines over 80 issue/error types\n(the expectation is that any one assessment task\nwill use only a fraction of these), and for this\ntask, we chose a subset of these issues, as defined\nbelow.\n\u000fAccuracy . Issues related to whether the in-\nformation content of the target is equivalent\nto the source.\n– Terminology . Issues related to the use\nof domain-specific terms.\n– Mistranslation . Issues related to the im-\nproper translation of content.\n– Omission . 
Content present in the source\nis missing in the target.\n– Addition . Content not present in the\nsource has been added to the target.\n– Untranslated . Text inappropriately ap-\npears in the source language.\n\u000fFluency . Issues related to the linguistic prop-\nerties of the target without relation to its status\nas a translation.\n– Grammar . Issues related to the gram-\nmatical properties of the text.\n\u0003Morphology (word form) . The text\nuses improper word forms.\n\u0003Part of speech . The text uses the\nwrong part of speech\n1http://www.qt21.eu/mqm-definition/\n2This approach stands in contrast to “holistic” methods that\nlook at the text in its entirety and provide a score for the as\na whole in terms of one or more dimensions, such as over-\nall readability, usefulness, style, or accuracy. BLEU, ME-\nTEOR, and similar automatic MT evaluation metrics used for\nresearch can be considered holistic metrics that evaluate texts\non the dimension of similarity to reference translations since\nthey do not identify specific, concrete issues in the transla-\ntion. In addition, most of the options in the TAUS Dynamic\nQuality Framework (DQF) (https://evaluation.taus.net/about)\nare holistic measures.\n166\n\u0003Agreement . Items within the text do\nnot agree for number, person, or gen-\nder.\n\u0003Word order . Words appear in the\nincorrect order.\n\u0003Function words . The text uses func-\ntion words (such as articles, prepo-\nsitions, “helper”/auxiliary verbs, or\nparticles) incorrectly.\n– Style . The text shows stylistic problems.\n– Spelling . The text is spelled incorrectly\n\u0003Capitalization . Words are capital-\nized that should not be or vice versa.\n– Typography . Problems related to typo-\ngraphical conventions.\n\u0003Punctuation . Problems related to\nthe use of punctuation.\n– Unintelligible . Text is garbled or oth-\nerwise unintelligible. Indicates a major\nbreakdown in fluency.\nNote that these items exist in a hierarchy. An-\nnotators were asked to choose the most specific is-\nsue possible and to use higher-level categories only\nwhen it was not possible to use one deeper in the\nhierarchy. For example, if an issue could be cate-\ngorized as Word order it could also be categorized\nasGrammar , but annotators were instructed to use\nWord order as it was more specific. Higher-level\ncategories were to be used for cases where more\nspecific ones did not apply (e.g., the sentence He\nslept the baby features a “valency” error, which is\nnot a specific type in this hierarchy, so Grammar\nwould be chosen instead).\n3 Corpora\nThe corpus contains Spanish !English,\nGerman!English, English !Spanish, and\nEnglish!German translations. To prepare the\ncorpus, for each translation direction a set of\ntranslations were evaluated by expert human\nevaluators (primarily professional translators) and\nassigned to one of three classes:\n1.perfect (class 1) . no apparent errors.\n2.almost perfect or “near miss” (class 2) .\neasy to correct, containing up to three errors.\n3.bad (class 3) . 
more than three errors.\nBoth WMT and “customer” data3were rated\nin this manner and pseudo-random selections (se-\n3WMT data was from the top-rated statistical, rule-based, and\nhybrid systems for 2013; customer data was taken from a vari-lections were constrained to prevent annotation of\nmultiple translations for the same source segment\nwithin a given data set in order to maximize the\ndiversity of content from the data sources) taken\nfrom the class 2 sentences, as follows:\n\u000fCalibration set . For each language pair we\nselected a set of 150 “near miss” (see be-\nlow) translations from WMT 2013 data (Bo-\njar et al., 2013).\n–For English!German and English !\nSpanish, we selected 40 sentences from\nthe top-ranked SMT, RbMT, and hybrid\nsystems, plus 30 of the human-generated\nreference translations.\n–For German!English and Spanish !\nEnglish, we selected 60 sentences from\nthe top-ranked SMT and RbMT systems\n(no hybrid systems were available for\nthose language pairs), plus 30 of the\nhuman-generated reference translations.\n\u000fCustomer data . Each annotator was pro-\nvided with 200 segments of “customer” data,\ni.e., data taken from real production systems.4\nThis data was translated by a variety of sys-\ntems, generally SMT (some of the German\ndata was translated using an RbMT system).\n\u000fAdditional WMT data . Each annotator was\nalso asked to annotate 100 segments of previ-\nously unannotated WMT data. In some cases\nthe source segments for this selection over-\nlapped with those of the calibration set, al-\nthough the specific MT outputs chosen did\nnot (e.g., if the SMT output for a given seg-\nment appeared in the calibration set, it would\nnot reappear in this set, although the RbMT,\nhybrid, or human translation might). Note\nthat the additional WMT data provided was\ndifferent for each LSP in order to maximize\ncoverage of annotations in line with other re-\nsearch goals; as such, this additional data\ndoes not factor into inter-annotator agreement\ncalculations (discussed below).\nety of in-house systems (both statistical and rule-based) used\nin production environments.\n4In all but one case the data was taken from actual projects;\nin the one exception the LSP was unable to obtain permission\nto use project data and instead took text from a project that\nwould normally not have been translated via MT and ran it\nthrough a domain-trained system.\n167\nIt should be noted that in all cases we selected\nonly translations for which the source was origi-\nnally authored in the source language. The WMT\nshared task used human translations of some seg-\nments as source for MT input: for example, a sen-\ntence authored in Czech might be translated into\nEnglish by humans and then used as the source for\na translation task into Spanish, a practice known\nas “relay” or “pivot” translation. As we wished to\neliminate any variables introduced by this practice,\nwe eliminated any data translated in this fashion\nfrom our task and instead focused only on those\nwith “native” sources.\n3.1 Annotation\nThe annotators were provided the data described\nabove and given access to the open-source trans-\nlate55annotation environment. Translate5 pro-\nvides the ability to mark arbitrary spans in seg-\nments with issue types and to make other annota-\ntions. All annotators were invited to attend an on-\nline training session or to view a recording of it and\nwere given written annotation guidelines. 
They\nwere also encouraged to submit questions concern-\ning the annotation task.\nThe number of annotators varied for individ-\nual segments, depending on whether they were in-\ncluded in the calibration sets or not. The numbers\nof annotators varied by segment and language pair:\n\u000fGerman!English : Calibration: 3; Cus-\ntomer + additional WMT: 1\n\u000fEnglish!German : Calibration: 5; Cus-\ntomer + additional WMT: 1–3\n\u000fSpanish!English : Calibration: 4; Customer\n+ additional WMT: 2–4\n\u000fEnglish!Spanish : Calibration: 4; Customer\n+ additional WMT: 1–3\nAfter annotation was complete some post-\nprocessing steps simplified the markup and ex-\ntracted the issue types found by the annotators to\npermit comparison.\n3.2 Notes on the data\nThe annotators commented on a number of aspects\nof the data presented to them. In particular, they\nnoted some issues with the WMT data. WMT is\nwidely used in MT evaluation tasks, and so en-\njoys some status as the universal data set for tasks\n5http://www.translate5.netsuch as the one described in this paper. The avail-\nable translations represent the absolute latest and\nmost state-of-the-art systems available in the in-\ndustry and are well established in the MT research\ncommunity.\nHowever, feedback from our evaluators indi-\ncated that WMT data has some drawbacks that\nmust be considered when using it. Specifically,\nthe text type (news data) is rather different from\nthe sorts of technical text typically translated in\nproduction MT environments. News does not rep-\nresent a coherent domain (it is, instead, a genre),\nbut rather has more in common with general lan-\nguage. In addition, an examination of the human-\ngenerated reference segments revealed that the hu-\nman translations often exhibited a good deal of\n“artistry” in their response to difficult passages,\nopting for fairly “loose” translations that preserved\nthe broad sense, but not the precise details.\nThe customer data used in this task does not all\ncome from a single domain. Much of the data\ncame from the automotive and IT (software UI) do-\nmains, but tourism and financial data were also in-\ncluded. Because we relied on the systems available\nto LSPs (and provided data in a few cases where\nthey were not able to gain permission to use cus-\ntomer data), we were not able to compare different\ntypes of systems in the customer data and instead\nhave grouped all results together.\nAn additional factor is that the sentences in the\ncalibration sets were much longer (19.4 words,\nwith a mode of 14, a median of 17, and a range\nof 3 to 77 words) than the customer data (average\n14.1 words, with a mode of 11, a median of 13,\nand a range of 1 to 50 words). We believe that\nthe difference in length may account for some dif-\nference between the calibration and customer sets\ndescribed below.\n4 Error analysis\nIn examining the aggregate results for all language\npairs and translation methods, we found that four\nof the 21 error types constitute the majority (59%)\nof all issues found:\n\u000fMistranslation: 21%\n\u000fFunction words: 15%\n\u000fWord order: 12%\n\u000fTerminology: 11%\nNone of the remaining issues comprise more\nthan 10% of annotations and some were found so\n168\ninfrequently as to offer little insight. 
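Distributions of this kind can be recomputed directly from the exported annotations. The sketch below is purely illustrative: it assumes a flat list of issue labels (one label per annotated span) rather than the actual translate5 export format, and the function name is hypothetical.

    from collections import Counter

    def issue_distribution(annotations):
        """Relative frequency (in %) of each issue type.

        `annotations` is assumed to be an iterable of issue-type labels,
        one per annotated span (a simplification of the real export).
        """
        counts = Counter(annotations)
        total = sum(counts.values())
        return {issue: 100.0 * n / total for issue, n in counts.items()}

    # Hypothetical example with four annotated spans:
    example = ["Mistranslation", "Function words", "Word order", "Mistranslation"]
    print(issue_distribution(example))
    # {'Mistranslation': 50.0, 'Function words': 25.0, 'Word order': 25.0}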
We also found\nthat some of the hierarchical distinctions were of\nlittle benefit, which led us to revise the list of is-\nsues for future research (see Section 4.2 for more\ndetails).\n4.1 Inter-Annotator Agreement\nBecause we had multiple annotators for most of the\ndata, we were able to assess inter-annotator agree-\nment (IAA) for the MQM annotation of the cal-\nibration sets. IAA was calculated using Cohen’s\nkappa coefficient. At the word level (i.e., seeing if\nannotators agreed for each word, we found that the\nresults lie between 0.2 and 0.4 (considered “fair”),\nwith an average of pairwise comparisons of 0.29\n(de-en), 0.25 (es-en), 0.32 (en-de), and 0.34 (en-\nes), with an overall average of 0.30\n4.2 Modifications\nThis section addresses some of the lessons learned\nfrom an examination of the MQM annotations de-\nscribed in Section 4.1 , with a special emphasis on\nways to improve inter-annotator agreement (IAA).\nAlthough IAA does not appear to be a barrier to\nthe present analytic task, we found a number of ar-\neas where the annotation could be improved and\nsuperfluous distinctions eliminated. For example,\n“plain” Typography appeared so few times that it\noffered no value separate from its daughter cat-\negory Punctuation . Other categories appeared to\nbe easily confusible, despite the instructions given\nto the annotators (e.g., the distinction between\n“Terminology” and “Mistranslation” seemed to be\ndriven largely by the length of the annotated issue:\nthe average length of spans tagged for “Mistrans-\nlation” was 2.13 words (with a standard deviation\nof 2.43), versus 1.42 (with a standard deviation of\n0.82) for “Terminology’.’ (Although we had ex-\npected the two categories to exhibit a difference in\nthe lengths of spans to which they were applied,\na close examination showed that the distinctions\nwere not systematic with respect to whether actual\nterms were marked or not, indicating that the two\ncategories were likely not clear or relevant to the\nannotators. In addition, “Terminology” as a cat-\negory is problematic with respect to the general-\ndomain texts in the WMT data sets since no termi-\nnology resources are provided.)\nBased on these issues, we have undertaken the\nfollowing actions to improve the consistency of\nfuture annotations and to simplify analysis of the\npresent data.\u000fThe distinction between Mistranslation and\nTerminology was eliminated. (For calculation\npurposes Terminology became a daughter of\nMistranslation .)\n\u000fThe Style/Register category was eliminated\nsince stylistic and register expectations were\nunclear and simply counted as general Flu-\nency for calculation purposes.\n\u000fThe Morphology (word form) category was\nrenamed Word form and Part of Speech ,\nAgreement , and Tense/mood/aspect were\nmoved to become its children.\n\u000fPunctuation was removed, leaving only Ty-\npography, and all issues contained in either\ncategory were counted as Typography\n\u000fCapitalization , which was infrequently en-\ncountered, was merged into its parent\nSpelling .\nIn addition, to address a systematic problem\nwith the Function words category, we added ad-\nditional custom children to this category: Extra-\nneous (for function words that should not appear),\nMissing (for function words that are missing from\nthe translation), and Incorrect (for cases in which\nthe incorrect function word is used). 
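As a purely illustrative aside, the revised issue set just described can be written down as a small nested structure; the names follow the revised hierarchy shown in Figure 1 below, and the sketch covers only the subset used in this task, not the full MQM definition.

    # Illustrative sketch of the revised issue-type hierarchy (cf. Figure 1).
    REVISED_MQM_SUBSET = {
        "Accuracy": {
            "Omission": {},
            "Mistranslation": {},  # Terminology now counted under Mistranslation
            "Untranslated": {},
            "Addition": {},
        },
        "Fluency": {
            "Spelling": {},        # Capitalization merged into Spelling
            "Typography": {},      # Punctuation issues counted as Typography
            "Grammar": {
                "Word order": {},
                "Word form": {
                    "Part of speech": {},
                    "Tense/aspect/mood": {},
                    "Agreement": {},
                },
                "Function words": {
                    "Extraneous": {},
                    "Missing": {},
                    "Incorrect": {},
                },
            },
            "Unintelligible": {},
        },
    }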
These Function words subcategories were added to provide better insight into the specific problems and to address a tendency for annotators to categorize problems with function words as Accuracy issues when the function words were either missing or added. This revised issue type hierarchy is shown in Figure 1.\n[Figure 1: Revised issue-type hierarchy. Accuracy: Omission, Mistranslation, Untranslated, Addition. Fluency: Spelling, Typography, Unintelligible, and Grammar with the children Word order, Word form (Part of speech, Tense/aspect/mood, Agreement) and Function words (Extraneous, Missing, Incorrect).]\nThis revised hierarchy will be used for ongoing annotation in our research tasks. We also realized that the guidelines to annotators did not provide sufficient decision-making tools to help them select the intended issues. To address this problem we created a decision tree to guide their annotations. We did not recalculate IAA from the present data set with the change in categories, since we have also changed the guidelines and both changes will together impact IAA. We are currently running additional annotation tasks using the updated error types that will result in new scores.\nRefactoring the existing annotations according to the above description gives the results for each translation direction and translation method in the calibration sets, as presented in Figure 2 (with averages across all language pairs as presented in Figure 3). Figure 4 presents the same results for each language pair in the customer data. As previously mentioned, we were not able to break out results for the customer data by system type.\n[Figure 3: Average sentence-level error rates [%] for all language pairs, for SMT, RbMT, hybrid and human translation.]\n4.3 Differences between MT methods\nDespite considerable variation between language pairs, an examination of the annotation revealed a number of differences in the output of different system types. While many of the differences are not unexpected, the detailed analytic approach taken in this experiment has enabled us to provide greater insight into the precise differences rather than relying on isolated examples.
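Profiles such as those in Figures 2-4 can be thought of as simple per-system aggregations over the annotated sentences. The sketch below assumes one plausible reading of "sentence-level error rate [%]", namely the share of a system's sentences containing at least one issue of a given type, and a hypothetical input format; the exact computation behind the figures may differ.

    from collections import defaultdict

    def sentence_level_error_rates(annotated_sentences):
        """Percentage of sentences per system containing each issue type.

        `annotated_sentences` is assumed to be an iterable of
        (system, issue_types) pairs, where issue_types is the set of
        MQM issues found in one annotated sentence.
        """
        totals = defaultdict(int)
        hits = defaultdict(lambda: defaultdict(int))
        for system, issue_types in annotated_sentences:
            totals[system] += 1
            for issue in issue_types:
                hits[system][issue] += 1
        return {system: {issue: 100.0 * n / totals[system]
                         for issue, n in issues.items()}
                for system, issues in hits.items()}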
The overall results for all language pairs are presented in Figure 3 (which includes the results for the human-translated segments as a point of comparison). The main observations for each translation method include:\n• Statistical machine translation:\n– Performs the best in terms of Mistranslation.\n– Most likely to drop content (Omission); otherwise it would be the most accurate translation method considered.\n– Had the lowest number of Function words errors, indicating that SMT gets this aspect substantially better than alternative systems.\n– Weak in Grammar, largely due to significant problems in Word order.\n• Rule-based machine translation:\n– Generated the worst results for Mistranslation.\n– Was least likely to omit content (Omission).\n– Was weak for Function words; statistical enhancements (moving in the direction of hybrid systems) would offer considerable potential for improvement.\n• Hybrid machine translation (available only for English→Spanish and English→German):\n– Tends to perform in between SMT and RbMT in most respects.\n– Most likely method to produce mistranslated texts (Mistranslation).\nWhen compared to the results of human translation assessment, it is apparent that all of the near-miss machine translations are somewhat more accurate than near-miss human translation and significantly less grammatical. Humans are far more likely to make typographic errors, but otherwise are much more fluent. Note as well that humans are more likely to add information to translations than MT systems, perhaps in an effort to render texts more accessible. Thus, despite substantial differences, all of the MT systems are overall more similar to each other than they are to human translation. However, when one considers that a far greater proportion of human translation sentences were in the "perfect" category and a far lower proportion in the "bad" category, and that these comparisons focus only on the "near miss" sentences, it is apparent that outside of the context of this comparison, human translation still maintains a much higher level of Accuracy and Fluency.\n[Figure 2: Sentence-level error rates [%] for each translation direction and each translation method for WMT data; panels for Spanish→English, German→English, English→Spanish and English→German.]\nIn addition, a number of the annotators commented on the poor level of translation evident in the WMT human translations. Despite being professional translations, there were numerous instances of basic mistakes and interpretive translations that resulted in translations that would generally be considered poor references for MT evaluation (since MT cannot make interpretive translations).\n[Figure 4: Sentence-level error rates [%] for each translation direction for customer data (DE>EN, ES>EN, EN>DE, EN>ES).]
However, at least in part, these problems with translation may be attributed to the uncontrolled nature of the source texts, which tended to be more literary than is typical for industry uses of MT. In many cases the WMT source sentences presented translation difficulties for the human translators, and the meaning of the source texts was not always clear out of context. As a result, the WMT texts provide difficulties for both human and machine translators.\n4.4 Comparison of WMT and customer data\nBy contrast, the customer data was more likely to consist of fragments (such as "Drive vibrates" or section headings) or split segments (i.e., one logical sentence was split with a carriage return, resulting in two fragments) that caused confusion for the MT systems. It also, in principle, should have had advantages over the WMT data because it was translated with domain-trained systems.\nDespite these differences, however, the average profiles for all calibration data and all customer data across language pairs look startlingly similar, as seen in Figure 5. There is thus significantly more variation between language pairs and between system types than there is between the WMT data and customer data in terms of the error profiles. (Note, however, that this comparison addresses only the "near-miss" translations and cannot address profiles outside of this category; it also does not address the overall relative distribution into the different quality bands for the text types.)\n[Figure 5: Sentence-level error rates [%] for calibration vs. customer data (average of all systems and language pairs).]\n5 Conclusions and outlook\nThe experiment here shows that analytic quality analysis can provide a valuable adjunct to automatic methods like BLEU and METEOR. While more labor-intensive to conduct, it provides insight into the causes of errors and suggests possible solutions. Our research treats the human annotation as the first phase in a two-step approach. In the first step, described in this paper, we use MQM-based human annotation to provide a detailed description of the symptoms of MT failure. This annotation also enables us to detect the system type- and language-specific distribution of errors and to understand their relative importance.\nIn the second step, which is ongoing, linguists and MT experts will use the annotations from the first step to gain insight into the causes of MT failures on the source side or into MT system limitations. For example, our preliminary research into English source-language phenomena indicates that -ing verbal forms, certain types of embedding in English (such as relative sentences or quotations), and non-genitive uses of the preposition "of" are particularly contributory to MT failures. Further research into MQM human annotation will undoubtedly reveal additional source factors that can guide MT development or suggest solutions to systematic linguistic problems. Although many of these issues are known to be difficult, it is only with the identification of concrete examples that they can be addressed.\nIn this paper we have shown that the symptoms of MT failure are the same between WMT and customer data, but it is an open question as to whether the causes will prove to be the same.
We therefore advocate for a continuing engagement with language service providers and translators using these different types of data. These approaches will help further the acceptance of MT in commercial settings by allowing MT output to be compared to HT output, and will also help research to go forward in a more principled and requirements-driven fashion.\nAcknowledgments\nThis work has been supported by the QTLaunchPad (EU FP7 CSA No. 296347) project.\nReferences\nBojar, O., Buck, C., Callison-Burch, C., Federmann, C., Haddow, B., Koehn, P., Monz, C., Post, M., Soricut, R., and Specia, L. (2013). Findings of the 2013 workshop on statistical machine translation. In Proceedings of the Eighth Workshop on Statistical Machine Translation, pages 1–44.\nDenkowski, M. and Lavie, A. (2011). Meteor 1.3: Automatic metric for reliable optimization and evaluation of machine translation systems. In Proceedings of the EMNLP 2011 Workshop on Statistical Machine Translation.\nFarrús, M., Costa-jussà, M. R., Mariño, J. B., and Fonollosa, J. A. (2010). Linguistic-based evaluation criteria to identify statistical machine translation errors. In Proceedings of the EAMT 2010.\nFlanagan, M. (1994). Error classification for MT evaluation. In Proceedings of AMTA, pages 65–72.\nPapineni, K., Roukos, S., Ward, T., and Zhu, W.-J. (2002). BLEU: A method for automatic evaluation of machine translation. In Proceedings of the 40th Annual Meeting of the Association for Computational Linguistics (ACL), pages 311–318.\nStymne, S. and Ahrenberg, L. (2012). On the practice of error analysis of machine translation evaluation. In Proceedings of LREC 2012, pages 1785–1790.\nVilar, D., Xu, J., D'Haro, L. F., and Ney, H. (2006). Error analysis of statistical machine translation output. In Proceedings of the International Conference on Language Resources and Evaluation (LREC), pages 697–702.",
"main_paper_content": null
}
|
{
"decision": "Unknown",
"reviews": []
}
| 0 | 0 |
[] |
[] |
{
"id": "wRHbJqdBbla",
"year": null,
"venue": "EAMT 2011",
"pdf_link": "https://aclanthology.org/2011.eamt-1.36.pdf",
"forum_link": "https://openreview.net/forum?id=wRHbJqdBbla",
"arxiv_id": null,
"doi": null
}
|
{
"title": "From Human to Automatic Error Classification for Machine Translation Output",
"authors": [
"Maja Popovic",
"Aljoscha Burchardt"
],
"abstract": "Maja Popović, Aljoscha Burchardt. Proceedings of the 15th Annual conference of the European Association for Machine Translation. 2011.",
"keywords": [],
"raw_extracted_content": "FromHumantoAutomatic Error Classification\nfor Machine TranslationOutput\nMaja Popovi ´candAljoscha Burchardt\nGerman Research CenterforArtificial Intelligence(DFKI)\nLanguageTechnologyGroup (LT)\nBerlin, Germany\n{maja.popovic,aljoscha.burchardt }@dfki.de\nAbstract\nFuture improvement of machine transla-\ntion systems requires reliable automatic\nevaluation and error classification mea-\nsures to avoid time and money consuming\nhuman classification. In this article, we\npropose a new method for automatic er-\nror classification and systematically com-\npare its results to those obtained by hu-\nmans. We show that the proposed auto-\nmatic measures correlate well with human\njudgments across different error classes as\nwell as across different translation outputs\non four out of five commonly used error\nclasses.\n1 Introduction and relatedwork\nThe evaluation of machine translation output is\nan intrinsically difficult task. Human evaluation\nis expensive and time consuming. Therefore a\ngreat deal of effort has been spent on finding mea-\nsures that correlate well with human judgements\nwhen ranking translation systems for quality (see\nforexample(Callison-Burchetal.,2009;Callison-\nBurch et al., 2010)). A considerable amount of\nwork has been put into the improvement of these\nmeasures. However,mostoftheworkhasbeenfo-\ncused just on ranking between different machine\ntranslation systems. While ranking systems is an\nimportant first step towards their improvement, it\ndoes not provide enough scientific insights. Re-\nsearchers often would find it helpful to get an-\nswers to questions such as What is a particular\nstrength/weakness of my system? What kind of\nerrors does the system make most often? Does\na particular modification improve some aspect of\nc/circlecopyrt2011 European Association for Machine Translation.the system, even if it does not improve the overall\nscore? Does a worse–ranked system outperform a\nbetter–ranked one in any aspect? , etc.\nIn order to answer such questions, a framework\nfor human error analysis and error classification\nhas been proposed in (Vilar et al., 2006), where a\nclassificationschemebasedon(Llitj´ osetal.,2005)\nispresentedtogether withadetailedanalysisofthe\nobtained results. The method has become widely\nused in recent years (Avramidis and Koehn, 2008;\nMax et al., 2008; Khalilov and Fonollosa, 2009;\nLi et al., 2009). Still, human error classification\nisresource-intensive andmightbecomepractically\nunfeasible when translating into manylanguages.\nAs for automatic methods, an approach for au-\ntomaticidentificationofpatternsintranslationout-\nput using POS sequences is proposed in (Lopez\nand Resnik, 2005) in order to see how well a\ntranslation system is capable of capturing system-\natic reordering patterns. Using relative differences\nbetween Word Error Rate (WER) and Position-\nindependentWordErrorRate(PER)fornouns,ad-\njectives and verbs has been proposed in (Popovi´ c\net al., 2006) for the estimation of inflectional and\nreordering errors. A method based on WER and\nPER decomposition for discovering inflectional\nerrors andmissing words ispresented in (Popovi´ c\nand Ney, 2007). Zhou (2008) proposed a diagnos-\ntic evaluation of linguistic check-points obtained\nautomatically by aligning parsed source and target\nsentences. 
However, to our best knowledge, there has been no attempt to design a set of automatic metrics which covers the error categories from (Vilar et al., 2006) in a systematic manner.\nIn this work, we first define five error categories based on those described in (Vilar et al., 2006) and present the results for these categories obtained by human evaluators and by a novel automatic tool based on the method proposed in (Popović and Ney, 2007). We calculate correlations between human and automatic error classification results, both across different error classes as well as across different translation outputs. Finally, we perform a deep analysis of the obtained results in order to better understand the differences between human and automatic evaluation.\n2 Error classification\nThe two main goals of the proposed automatic method for error analysis and classification are to be able:\n• to estimate the distribution of errors over the error classes in order to determine which error types are particularly problematic for a given translation system;\n• to estimate the differences between the numbers of errors in each class for different translation outputs in order to compare translation systems.\nThe starting point for the automatic error classification proposed in this work is the identification of the actual words contributing to the Word Error Rate (WER) (Levenshtein, 1966) and to the recall- and precision-based Position-independent Error Rates called Reference PER (RPER) and Hypothesis PER (HPER) (Popović and Ney, 2007). The WER errors are marked as substitutions, deletions or insertions. The RPER errors represent the words in the reference which do not appear in the hypothesis, and the HPER errors the words in the hypothesis which do not appear in the reference. If multiple reference translations are available, the reference with the lowest WER score is chosen for all metrics.\nOnce these words have been identified, the following error categories based on the classification scheme used in (Vilar et al., 2006) are defined:\n• inflectional errors — an inflectional error occurs if the base form of the generated word is correct but the full form is not.\n• reordering errors — a word which occurs both in the reference and in the hypothesis, thus not contributing to RPER or HPER, but is marked as a WER error is considered as a reordering error.\n• missing words — a word which occurs as a deletion among the WER errors and at the same time occurs as an RPER error without sharing the base form with any hypothesis error is considered as missing.\n• extra words — a word which occurs as an insertion among the WER errors and at the same time occurs as an HPER error without sharing the base form with any reference error is considered as extra.\n• incorrect lexical choice — a word which belongs neither to inflectional errors nor to missing or extra words is considered as a lexical error.\nThe presented method is language-independent; however, availability of base forms for the particular target language is a requisite. A schematic sketch of this assignment procedure is given below.\nHuman error classification\nAs there are often several correct translations of a given source sentence that correspond more or less to the given reference translation(s), human error analysis can be carried out in various ways. 
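The assignment procedure implied by these definitions can be sketched in a few lines of code. The fragment below is only an illustration of the definitions above, not the actual tool: it treats words as bags, assumes a single reference, and takes the base-form lookup (e.g., derived from a tagger such as TreeTagger) as a given function. It classifies hypothesis-side error words; reference-side words would be handled symmetrically, yielding the missing-word class instead of the extra-word class.

    def classify_hyp_errors(wer_ops, hper_errors, rper_errors, lemma):
        """Assign an error category to each erroneous hypothesis word.

        wer_ops     : list of (op, ref_word, hyp_word) tuples from the WER
                      alignment, where op is 'ok', 'sub', 'del' or 'ins'
        hper_errors : set of hypothesis words not appearing in the reference
        rper_errors : set of reference words not appearing in the hypothesis
        lemma       : function mapping a word to its base form
        """
        ref_error_lemmas = {lemma(w) for w in rper_errors}
        labels = {}
        for op, ref_word, hyp_word in wer_ops:
            if op == 'ok' or hyp_word is None:
                continue  # correct words and deletions carry no hypothesis word
            if hyp_word not in hper_errors:
                # present in both reference and hypothesis, but misplaced
                labels[hyp_word] = 'reordering'
            elif lemma(hyp_word) in ref_error_lemmas:
                labels[hyp_word] = 'inflection'
            elif op == 'ins':
                labels[hyp_word] = 'extra'
            else:
                labels[hyp_word] = 'lexical'
        return labels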
Er-\nrors can be counted by doing a direct strict com-\nparison between the given reference and the trans-\nlation outputs, but much more flexibility can be\nallowed: substitution of words and expresions by\nsynonyms, syntactically correct different word or-\nder, etc, which is a more natural way. It is also\npossibletousethereferencesonlyforthesemantic\naspect, i.e. tolookonlywhether themainmeaning\nis preserved. It is even possible not to use a refer-\nence translation at all, but compare the translation\noutput withthe source text.\nThe human error classification is definitely not\nunambigous — often it is not easy to determine\nin which particular error category some error ex-\nactly belongs, sometimes one word can be as-\nsigned to more than one category, and varia-\ntions between different human evaluators are pos-\nsible. Especially difficult is disambiguating be-\ntween incorrect lexical choice and missing words\nor extra words. Furthermore, a choice of words\nto be assigned to reordering class may vary.\nSome typical examples are shown in Table 1.\nIn the first example, one possible interpretation\nis that All-People Headquarters are missing\nwords and Pan Country are extra words. How-\never, it could also be considered that all words\nrepresent incorrect lexical choice. In addition, in266\nreference translation obtained output error classes\nin the General Assembly resolution, PanCountry inthe General missing+extra\nAll-People Headquarters said ... Assembly resolution, said ... or lexical?\namore serious problem ... a problem more serious ... reordering errors?\nTable1: Examples of ambiguous error classficiation.\nthe General Assembly may or may not be con-\nsidered as reordering error. The second example\npresents a typical example of reordering ambigu-\nity: which words should be assigned to this class:\nmore serious , orproblem, or all of them? De-\nspite of these ambiguities, such scheme for error\nclassification has proven to be useful and an au-\ntomatisation oftheprocess isneeded. Moreelabo-\nrate classification schemes using sets of errors per\nword areleft for future work.\nIn this work, two types of human error analysis\nusing reference translations are carried out in or-\nder to make a fair comparison with the automatic\nmethod: a strict one (comparing with a reference)\nandaflexibleone(syntactically correctdifferences\nand word order and substitutions by synonyms are\nnotconsideredaserrors). Theflexibletypeoferror\nanalysis identifies muchfewer words aserrors.\n3 Experimental results\n3.1 Experimental set-up\nFor the human and automatic error classifications\ndescribed in the previous sections, we used six\nEnglish translation outputs obtained by state-of-\nthe-art statistical phrase-based translation systems\nin the framework of the GALE1project and the\nfourth Workshop on Statistical Machine Transla-\ntion2(WMT).TwoGALEoutputsaretranslations\nfrom Arabic into English, and the third is a re-\nsult of Chinese–to–English translation. All three\nWMT outputs aretranslations from the sameGer-\nman text into English, thus being appropriate for\ncomparison of different translation systems. For\neach translation output, only one reference trans-\nlation was available. For the GALE texts, the\nstrict human error analysis is carried out, and for\nthe WMT texts the flexible one. 
TreeTagger3was\nused for obtaining the base forms of the words for\nthe automatic error classification.\n1GALE – Global Autonomous Language Exploitation.\nhttp://www.arpa.mil/ipto/programs/gale/index.htm\n2FourthWorkshop onStatisticalMachine Translation.\nhttp://www.statmt.org/wmt09/\n3http://www.ims.uni-stuttgart.de/projekte/corplex/Tr eeTagger/3.2 Results andcorrelations\nThe results of both human and automatic error\nanalysis for all texts and all error classes are pre-\nsented in Table 2 in the form of raw error counts.\nInaddition, thePearson( r)andSpearmanrank( ρ)\ncorrelation coefficients between human and auto-\nmatic results are calculated for each translation\noutput across error classes (rightmost column).\nSince all WMT outputs are translations of the\nsame source text, the correlations are presented\nalso for each error class across different transla-\ntion outputs (last row). These correlation coeffi-\ncients are very high, both across the error classes\nand across the translation outputs. For the WMT\noutputs, the correlations across the error classes\nare slightly lower than for the GALE outputs; this\ncould be expected due to the more flexible crite-\nria for the human error classification. In addition,\nit can be noted that the extra words category has\ntheweakest correlation across different translation\noutputs.\nThe results show that the automatic method can\nsuccessfully substitute human analysis in order to\nanswer the questions that the overall ranking eval-\nuation metrics cannot. The automatic method is\nwell capable of detecting weak and strong points\nof particular translation system as well as of com-\nparing different translation systems. Neverthe-\nless, there are certain differences between human\nand automatic classification, and it would be use-\nful to better understand them: Are all errors de-\ntectedbyhumanssuccessfullycoveredbytheauto-\nmaticmethod? Whydoestheautomatictoolassign\nmuch more reordering errors than human evalua-\ntors? How does the automatic method cope with\ndisambiguation between lexical errors and miss-\ning/extrawords? Whyarethecorrelationsforextra\nwords lower than others?\n3.3 Analysis of thedifferences\nIn order to answer the above questions, recall and\nprecision of all error classes are presented in the\nform of a confusion matrix. Recall shows how\nmany errors classified by humans are successfully267\n(a) GALE translationoutputs\nGALE (BLEU) inflection order missing extra lexical ρ r\nArEn1(59.7%) 20/23 39/66 79/63 127/137 135/147 0.90 0.96\nArEn2(72.1%) 22/24 30/41 97/102 73/76 140/131 1.00 0.99\nCnEn(58.0%) 38/40 127/171 288/244 95/117 203/239 1.00 0.93\n(b) WMT translation outputs\nWMT (BLEU) inflection order missing extralexical ρ r\nDeEn1(16.9%) 12/32 60/235 204/199 52/40189/521 0.70 0.72\nDeEn2(18.4%) 16/44 41/212 172/200 30/56163/495 0.7 0.74\nDeEn3(17.2%) 17/46 100/274 107/153 68/99171/508 0.90 0.91\nρ 1.00 1.00 0.60 0.51.00\nr 0.90 0.99 0.90 0.62 0.96\nTable 2: Raw error counts Nhum/Nautobained by human (left) and automatic (right) error analysi s for\nthe GALE (a) and for the WMT (b) translation outputs; Spearma n (left) and Pearson (right) correlation\ncoefficients for each translation output across error categ ories (last two columns) and for each error\ncategory across different WMT translation outputs (last tw o rows). For each translation output, the\nBLEU score isgiven as illustration.\ncovered by the automatic tool, and precisions how\nmany automatically classified errors are correct,\ni.e. 
assigned to the same class by humans. The\nresults are presented for one translation output of\neach set, namely for the GALE output ArEn1 and\nfor the WMT output DeEn1.\n3.3.1 GALE translation outputs\nTable 3 shows the results for the GALE ArEn1\noutput. For each error class, both recall and\nprecision are high – errors detected by humans\nare sucesfully detected by the automatic tool too,\nand at the same time errors detected by the auto-\nmatic method are marked as errors by humans as\nwell. However, the precision of reordering errors\nis lower than for other categories – about a third\nof automatically detected reordering errors are not\nconsideredaserroneouswordsbyhumans. Further\ninspection showed that the majority of such words\nare articles, punctuations, the conjunctions ”and”\nand ”or” as well as some prepositions, i.e. words\nwhich occur frequently. Since they often appear\nseveral times in one sentence, the automatic tool\ndoes not see them as RP ER/HPERerrors, but de-\npending on their position they are often marked as\nWER errors and thus classified as reordering er-\nrors. Asimilarphenomenonalsoleadstoanumber\nof extra words and lexical errors in the hypothesis\nwhich are not detected as errors by the automatic\ntool. Andifthere weremorefrequent wordsinthe\nreference than in the hypothesis, the same couldhappen with missing words and reference lexi-\ncal errors. Introducing some kind of information\naboutthewordpositionintotheprocesscandimin-\nish these discrepances. Apart from that, a certain\nnumber of reordering errors is distributed differ-\nentlyoverwordsasexplained inTable1thuslead-\ning to confusions ”x-reord” and ”reord-x”. As for\ndisambiguation of lexical errors vs missing/extra\nwords, bothrecall andprecision confusions canbe\nobserved, though lexical errors have a rather high\nrecall. This means that lexical errors detected by\nhumans are very well covered by the automatic\ntool, but a number of human annotated extra and\nmissingwordsarealsoconsideredaslexicalerrors.\nExamples of human and automatic error analy-\nsis are presented in Table 4. The first sentence il-\nlustrates atotal agreement between thehumanand\nautomatic error classification. In the second sen-\ntence, thewords Japanese andfriendly areclas-\nsified into the same category both by the human\nandbytheautomaticanalysis. Thewords feeling\nforrepresent an example where the human analy-\nsisassigns theerror tothemissing wordscategory,\nbut the automatic analysis classifies it as a lexical\nerror. Similarly, the words can feel are consid-\nered as extra words by humans, but as lexical er-\nrors by theautomatic tool.\n3.3.2 WMT translation outputs\nTheresultsforthe WMT DeEn1output arepre-\nsented in Table 5. The main difference in compar-268\n(a) Reference translationfor the GALE ArEn1output.\nArEn1ref inflection order missing lexical x\ninflection 78.9/78.9 /2.2/10.5 0.8/5.3 0.1/5.3\norder /92.5/51.4 8.8/11.1 3.2/5.6 1.8/31.9\nmissing / /53.8/81.7 4.8/10.0 0.4/8.3\nlexical 15.8/2.1 2.5/0.7 29.7/19.3 85.5/75.7 0.2/2.1\nx 5.3/0.1 5.0/0.2 5.5/0.4 5.6/0.5 97.5/98.8\n(b) ArEn1hypothesis translation.\nArEn1hyp inflection order extra lexical x\ninflection 81.0/89.5 /0.7/5.3 /0.1/5.3\norder 4.8/1.4 90.2/51.4 3.6/6.9 5.8/12.5 1.5/27.8\nextra 4.8/1.0 /53.3/72.3 15.4/23.8 0.2/3.0\nlexical 4.8/0.8 2.4/0.8 15.3/15.9 64.1/75.8 0.8/6.8\nx 4.8/0.1 7.3/0.2 27.0/2.8 14.7/1.7 97.5/95.2\nTable3: Recall (left) and precision (right) values for the G ALE ArEn1translation output:\n(a) reference translation, (b) hypothesis. 
The columns rep resent the error classes obtained by human\nevaluators, the rows represent the classes obtained automa tically. The class “x” stands for “no error\ndetected”.\nreference: ... of local party committees . Secretaries of the Commission ...\nhypothesis: ... of local party committees of the provincial Commission ...\nerrors: Secretaries –missing(hum,aut)\nprovincial –extra(hum,aut)\nreference: ... ,although the Japanese friendly feelings for China added anincrease ,...\nhypothesis: ... ,although China can feeltheJapanese increase ,...\nerrors: Japanese –order(hum,aut)\nfriendly –missing(hum,aut)\nfeelings for –missing(hum)/lexical(aut)\ncan feel –extra(hum)/lexical(aut)\nTable 4: Examples of human and automatic error analysis from the GALE translation outputs: words\nin bold italic are assigned to the same error category both by human and automatic error analysis, and\nwords only in bold represent differences.\nison with the GALE results is that the precisions\nof all error classes aremuch lower –theautomatic\ntoolidentifiesmuchmoreerrorsthanhumanevalu-\nators. Thisisespecially notable for reordering and\nforlexical errors–thereasonistheflexiblehuman\nevaluation which allows synonyms and different\nword orders. Nevertheless, the recall values are\nvery high (except for extra words), meaning that\nthe automatic tool is capable of discovering errors\ndetected by human evaluators also when the flex-\nible (more natural) human classification is carried\nout.\nThe high number of reordering errors is again\nmostly due to the frequent words, but there is alsoa number of other words as well since the flexible\nhuman classification allows more word orders. In\naddition, identifying different words as reordering\nerrors happens more often. There is also anumber\nof reordering errors which the automatic method\nconsiders as lexical errors: the reason for that are\nthe synonyms or different expressions. A different\nway of expression is also the reason for the higher\nnumber of automatically detected inflectional er-\nrors, for example patients’ health -- health\nof the patient oris building -- builds .\nFor this set, the confusion between lexical er-\nrors vs missing/extra words is also present, espe-\ncially for the extra words – the major part of extra269\n(a) Reference translationfor the WMT DeEn1output.\nDeEn1ref inflection order missing lexical x\ninflection 92.3/37.51.6/3.1 2.0/12.5 1.6/9.4 1.1/37.5\norder /61.3/15.35.9/4.8 2.6/2.0 17.3/77.8\nmissing /6.5/2.1 45.8/48.4 16.6/16.7 5.7/32.8\nlexical 7.7/0.2 11.3/1.4 42.9/17.5 78.2/30.322.6/50.6\nx /19.4/1.9 3.4/1.1 1.0/0.3 53.4/96.6\n(b) DeEn1hypothesis translation.\nDeEn1hyp inflection order extra lexical x\ninflection 92.3/37.55.4/12.5 /2.6/12.5 1.1/37.5\norder /51.4/15.314.8/3.2 4.5/2.8 17.8/78.6\nextra /1.4/3.2 16.7/29.0 3.2/16.1 1.5/51.6\nlexical 7.7/0.2 24.3/4.0 57.4/6.985.8/29.624.4/59.3\nx /17.6/2.1 11.1/1.0 3.9/1.0 55.3/96.0\nTable5: Recall (left) and precision (right) values for the W MT DeEn1translation output:\n(a) reference translation, (b) hypothesis. The columns rep resent the error classes obtained by human\nevaluators, the rows represent the classes obtained automa tically. The class “x” stands for “no error\ndetected”.\nwords recall is actually confusion with lexical er-\nrors. There is also a number of extra words which\nare assigned to reordering errors or correct words\n–theseareagainmostlyfrequent words. 
Theseare\nthe reasons why extra words have the lowest cor-\nrelation coefficients across the translation outputs\n– this error category is not particularly reliable for\ncomparing different translation systems. Lexical\nerrors on the other hand have very high recall – as\nfor the GALE task, those detected by humans are\nsuccessfully covered by the automatic tool. How-\never,becauseofthesynonyms,theprecisionislow\n– the major part of the automatically detected lex-\nical errors are actually correct words. Using syn-\nonym lists can increase this precision and also de-\ncreasethenumberofreorderingerrorsclassifiedas\nlexical errors.\nTable 6 presents examples of human and au-\ntomatic error analysis for the WMT data. The\nfirst example illustrates total agreement. In ad-\ndition, it also illustrates a case where the re-\nordering errors could be defined in a differ-\nent way, both by human evaluators and by the\nautomatic tool: the word group coffee and\nnewspapers could be considered as reordering\nerror. This phenomenon can also be seen in\nthe second sentence, namely famous journalist\nGustav Chalupa may also be considered as a re-\nordering error. Furthermore, this sentence illus-\ntrates confusions between lexical errors and miss-ing/extra words, as well as why the number of\nthe lexical errors is significantly higher for the au-\ntomatic tool. The tool considers this,theand\nLamborgini as lexical errors, as well as born in\nˇCesk´e Budˇejovice/from Budweis . However,the\nhumans considered the first three words as miss-\ning or extra words, and the rest beeing synonyms\nis not considered as error at all — Budweis is En-\nglish name for the Czechtown ˇCesk´ e Budˇ ejovice.\n4 Conclusions and outlook\nIn this work we have proposed a systematic\nmethod for automatic error classification of ma-\nchine translation output. The method detects five\nerror classes commonly used in human error anal-\nysis: inflectional errors, reordering errors, missing\nwords, extra words and incorrect lexical choice.\nWe have shown that the error classification results\nobtained by this approach correlate very well with\nthe results of human error analysis with Spearman\nand Pearson correlation coefficients over 0.7 and\nmostly around 0.9, both across different error cat-\negories within one translation output as well as\nacross different translation outputs within one er-\nrorcategory. Theautomaticmetricsalsohavehigh\nrecall, i.e. the method is well capable of finding\nthe errors detected by human evaluators. 
Hence,\nthe presented automatic method can successfully\nreplace human error analysis in order to get bet-270\nreference: Passengers can get coffee and newspapers whenboarding .\nhypothesis: Coffee and newspapers can passengers in boarding .\nerrors: Passengers can –order(hum,aut)\nget –missing(hum,aut)\nwhen –lexical(hum,aut)\nin –lexical(hum,aut)\nreference: Thefamous journalist GustavChalupa ,bornin\nˇCesk´eBudˇejovice,also confirms this.\nhypothesis: Thealso confirms thefamous Austrian journalist GustavChalupa ,\nfrom BudweisLamborghini .\nerrors: famous journalist Gustav Chalupa –order(aut)\nborn inˇCesk´ e Budˇ ejovice –lex(aut)\nalso confirms –order(hum,aut)\nthis –missing(hum)/lexical(aut)\nthe –extra(hum)/lexical(aut)\nAustrian –extra(hum,aut)\nfrom Budweis– lexical(aut)\nLamborghini –extra(hum)/lexical(aut)\nTable 6: Examples of human and automatic error analysis from the WMT translation outputs: words\nin bold italic are assigned to the same error category both by human and automatic error analysis, and\nwords only in bold represent differences.\nter insight about the strengths and weaknesses of\none translation system as well as about the differ-\nences between various translation systems. Only\nthe extra word class has proven not to be stable\nand reliable enough.\nInadetailedqualitative analysisoftypical prob-\nlemsrelated toboth human and automatic classifi-\ncationoftranslationerrors,wepointedoutpromis-\ningfuturework. Introducing synonym listsandin-\nformation about word positions can further help\nthe presented automatic method to increase pre-\ncision, as could going from word to phrase level.\nOther directions that would address the intrinsic\ndifficultyoftheerrorclassificationtasksareadding\nprobabilities to error classes and allowing the as-\nsignment of multiple errors per word.\nThe method is currently being tested and fur-\nther developed in the framework of the TARAX¨U\nproject4. In this project, three industry and one\nresearch partner aim to develop a hybrid machine\ntranslation architecture that satisfiescurrent indus-\ntry needs, which includes a number of large-scale\nevaluation rounds involving various target lan-\nguages: English,French,German,Czech,Spanish,\nRussian, Chinese and Japanese.\n4http://taraxu.dfki.de/Acknowledgments\nThis work has partly been developed within the\nTARAX¨U project financed by TSB Technologies-\ntiftung Berlin –Zukunftsfonds Berlin, co-financed\nby the European Union – European fund for re-\ngional development. Special thanks toDavidVilar\nand Eleftherios Avramidis.\nReferences\nAvramidis, Eleftherios and Philipp Koehn. 2008. En-\nriching morphologically poor languages for statisti-\ncal machine translation. In Proceedings of the 46rd\nAnnual Meeting of the Association for Computa-\ntionalLinguistics(ACL08) ,pages763–770,Colum-\nbus,Ohio,June.\nCallison-Burch, Chris, Philipp Koehn, Christof Monz,\nand Josh Schroeder. 2009. Findings of the 2009\nWorkshop on Statistical Machine Translation. In\nProceedings of the Fourth Workshop on Statistical\nMachine Translation , pages 1–28, Athens, Greece,\nMarch.\nCallison-Burch, Chris, Philipp Koehn, Christof Monz,\nKay Peterson, Mark Przybocki, and Omar Zaidan.\n2010. Findings of the 2010 joint workshop on sta-\ntistical machine translation and metrics for machine\ntranslation. In Proceedings of the Joint Fifth Work-\nshoponStatisticalMachineTranslationandMetric-\nsMATR (WMT 10) , pages 17–53, Uppsala, Sweden,\nJuly.271\nKhalilov, Maxim and Jos´ e A. R. Fonollosa. 
2009.\nN-gram-based statistical machine translation versus\nsyntax augmented machine translation: comparison\nandsystem combination. In Proceedingsof the 12th\nConference of the European Chapter of the Associ-\nation for Computation Linguistic (EACL 09) , pages\n424–432,Athens,Greece,March.\nLevenshtein,Vladimir Iosifovich. 1966. Binary Codes\nCapable of Correcting Deletions, Insertionsand Re-\nversals. Soviet Physics Doklady , 10(8):707–710,\nFebruary.\nLi, Jin-Ji, Jungi Kim, Dong-Il Kim, and Jong-Hyeok\nLee. 2009. Chinese syntactic reordering for ade-\nquategenerationofkoreanverbalphrasesinchinese-\nto-korean smt. In Proceedings of the Fourth Work-\nshop on Statistical Machine Translation (WMT 09) ,\npages190–196,Athens,Greece,March.\nLlitj´ os, Ariadna Font, Jaime G. Carbonell, and Alon\nLavie. 2005. A framework for interactive and au-\ntomatic refinement of transfer-based machine trans-\nlation. In Proceedings of the 10th Annual Con-\nference of the European Association for Machine\nTranslation (EAMT 05) , pages 87–96, Budapest,\nHungary,May.\nLopez, Adam and Philip Resnik. 2005. Pattern visual-\nization for machine translation output. In Proceed-\ningsofHLT/EMNLPonInteractiveDemonstrations ,\npages12–13,Vancouver,Canada,October.\nMax, Aur´ elien, Rafik Makhloufi, and Philippe\nLanglais. 2008. Explorations in using grammat-\nical dependencies for contextual phrase translation\ndisambiguation. In Proceedings of the 12th Annual\nConferenceoftheEuropeanAssociationforMachine\nTranslation (EAMT 08) , pages 114–119, Hamburg,\nGermany,September.\nPopovi´ c, Maja and Hermann Ney. 2007. Word Error\nRates: Decomposition over POS classes and Appli-\ncationsforErrorAnalysis. In Proceedingsofthe2nd\nACL 07 Workshop on Statistical Machine Transla-\ntion(WMT 07) ,pages48–55,Prague,CzechRepub-\nlic,June.\nPopovi´ c, Maja, Adri` a de Gispert, Deepa Gupta, Pa-\ntrik Lambert, Hermann Ney, Jos´ e B. Mari˜ no, Mar-\ncello Federico, and Rafael Banchs. 2006. Morpho-\nsyntactic Information for Automatic Error Analysis\nof Statistical Machine Translation Output. In Pro-\nceedingsof the 1st NAACL 06 Workshop on Statisti-\ncalMachineTranslation(WMT06) ,pages1–6,New\nYork,NY, June.\nVilar, David, Jia Xu, Luis Fernando D’Haro, and Her-\nmann Ney. 2006. Error Analysis of Statistical Ma-\nchine Translation Output. In Proceedings of the 5th\nInternational Conference on Language Resources\nand Evaluation (LREC 06) , pages 697–702, Genoa,\nItaly,May.Zhou, Ming, Bo Wang, Shujie Liu, Mu Li, Dongdong\nZhang, and Tiejun Zhao. 2008. Diagnostic evalua-\ntion of machine translation systems using automati-\ncally constructed linguistic check-points. Proceed-\nings of the 22nd International Conference on Com-\nputational Linguistics (CoLing 2008) , pages 1121–\n1128,August.272",
"main_paper_content": null
}
|
{
"decision": "Unknown",
"reviews": []
}
| 0 | 0 |
[] |
[] |
{
"id": "oTsnD-No8OU",
"year": null,
"venue": "EAMT 2005",
"pdf_link": "https://aclanthology.org/2005.eamt-1.29.pdf",
"forum_link": "https://openreview.net/forum?id=oTsnD-No8OU",
"arxiv_id": null,
"doi": null
}
|
{
"title": "Exploiting phrasal lexica and additional morpho-syntactic language resources for statistical machine translation with scarce training data",
"authors": [
"Maja Popovic",
"Hermann Ney"
],
"abstract": "Maja Popovic, Hermann Ney. Proceedings of the 10th EAMT Conference: Practical applications of machine translation. 2005.",
"keywords": [],
"raw_extracted_content": "Exploiting Phrasal Lexica and Additional Morpho-syntactic\nLanguage Resources for Statistical Machine Translation with\nScarce Training Data\nMaja Popovi ´c and Hermann Ney\nLehrstuhl f ¨ur Informatik VI, Computer Science Department\nRWTH Aachen University\nD-52056 Aachen, Germany\n{popovic,ney }@informatik.rwth-aachen.de\nAbstract. In this work, the use of a phrasal lexicon for statistical machine translation is proposed,\nand the relation between data acquisition costs and translation quality for different types and sizes oflanguage resources has been analyzed. The language pairs are Spanish-English and Catalan-English,and the translation is performed in all directions. The phrasal lexicon is used to increase as well as toreplace the original training corpus. The augmentation of the phrasal lexicon with the help of additional\nmonolingual language resources containing morpho-syntactic information has been investigated for the\ntranslation with scarce training material. Using the augmented phrasal lexicon as additional training data,a reasonable translation quality can be achieved with only 1000 sentence pairs from the desired domain.\n1 Introduction and Related Work\nThe goal of statistical machine translation (SMT)\nis to translate an input word sequence fJ\n1 =\nf1...f j...f Jinto a target word sequence eI\n1=\ne1...e i...e Iby maximising the probability\nP(eI\n1|fJ\n1). This probability can be factorised into\nthe translation model probability P(fJ\n1|eI\n1), which\ndescribes the correspondence between the words in\nthe source and the target sequence and the language\nmodel probability P(eJ\n1), which describes the well-\nformedness of the produced target sequence. These\ntwo probabilities can be modelled independently of\neach other. For detailed descriptions of SMT mod-\nels, see for example (Brown et al., 1993). Trans-\nlation probabilities are extracted from a bilingual\nparallel text corpus, whereas language model prob-abilities are learnt from a monolingual text corpus\nin the target language. Usually, the larger the avail-\nable training corpus, the better the performance of\na translation system. However, acquisition of a\nlarge high-quality bilingual parallel text for the de-\nsired domain and language pair requires lot of time\nand effort, and, for many language pairs, is not\neven possible. Therefore, the strategies for exploit-\ning limited amounts of bilingual data are receiving\nmore and more attention (Al-Onaizan et al., 2000;\nNießen and Ney, 2004; Matusov et al., 2004).Conventional dictionaries (one word and its\ntranslation(s) per entry) have been proposed\nin (Brown et al., 1993) and are shown to be valu-\nable resources for SMT systems. They can be used\nto augment and also to replace the training corpus.\nNevertheless, the main draw-back is that they typ-\nically contain only base forms of the words and\nnot inflections. The use of morpho-syntactic in-\nformation for overcoming this problem is investi-\ngated in (Nießen and Ney, 2004) for translationfrom German into English and in (V ogel and Mon-\nson, 2004) for translation from Chinese into En-\nglish. Still, the dictionaries normally contain one\nword per entry and do not take into account phrases,\nidioms and similar complex expressions.\nIn our work, we have exploited a phrasal lexicon\n(one short phrase and its translation(s) per entry) as\na bilingual knowledge source for SMT which has\nnot been examined so far. 
A phrasal lexicon is ex-\npected to be especially helpful to overcome some\ndifficulties which cannot be handled well with stan-\ndard dictionaries.\nWe have used the phrasal lexicon to increase the\nexisting training corpus as well as to replace it. We\nhave also investigated the augmentation of the lex-\nicon by the use of additional morpho-syntactically\nannotated language resources in order to obtain a\n212 EAMT 2005 Conference Proceedings\nreasonable translation quality with minimal amount\nof training data.\nThe language pairs in our experiments are\nSpanish-English and Catalan-English, and transla-\ntion is performed in all four directions using thephrase-based SMT system with optimised scaling\nfactors (Och and Ney, 2002).\n2 Language Resources\nThe parallel trilingual corpus used in our experi-ments has been successively built in the framework\nof the LC-STAR project (Arranz et al., 2003). It\nconsists of spontaneous dialogues in Spanish, Cata-\nlan and English in the tourism and travelling do-\nmain. The development and test set are randomly\nextracted from this corpus and the rest is used for\ntraining (referred to as 40k).\nIn order to investigate the scenario with scarce\ntraining material, a small training corpus (referred\nto as 1k) has been constructed by random selection\nof 1000 sentences from the original trilingual train-\ning set.\n2.1 Phrasal Lexicon\nThe phrasal lexicon used in our experiments (PL)\nconsists of a list of English phrases and their trans-\nlations into Spanish and Catalan. These English\nphrases have been extracted partly from various di-\nalogue corpora and web-sites and have partly been\ncreated manually. The average phrase length is\nshort, with about 4 words per entry. However, the\nvocabularies are rather large for all three languages,\nas can be seen in the Table 1. Besides full forms\nof the words, POS tags for all three languages aswell as base forms for Spanish and Catalan are also\navailable.\n2.2 Word Expansion Lists\nFor Spanish and Catalan, there was an additional\nmonolingual language resource available: a list of\nword base forms along with all possible POS tags\nand all full forms that can be derived from them.\nThis expansion list was extracted from the morpho-logical analyzer by UPC (Carmona et al., 1998).\nSince Spanish and Catalan have an especially\nrich morphology for verbs, we used these lists only\nfor verb expansions for the experiments reported in\nthis paper. However, it might be reasonable to in-clude expansions of other word classes in some fu-\nture experiments.\n3 Experiments and Results\n3.1 Experimental Settings\nThe experiments have been done on the full training\ncorpus containing about 40k sentences and 500k\nrunning words as well as on the small training cor-\npus containing about 1k sentences and 12k running\nwords. We also present the results obtained usingonly the phrasal lexicon as training corpus. The\ncorpus statistics is shown in Table 1.\nExtensions of the phrasal lexicon have been\ndone using base forms and POS information for the\nSpanish and Catalan verbs and word expansion lists\nfor those two languages. Each base form of theverb seen in the lexicon more than five times has\nbeen expanded with all POS tags seen in the lexi-\ncon. For each base form and POS tag, the possi-\nble English equivalents are extracted using lexical\nprobabilities and then manually checked and even-\ntually corrected. 
For example, for the Spanish base\nform and POS tag combination “ir VMIP1S0”, the\ncorrect English equivalents are “I go” and “I am\ngoing”. Finally, each base form and POS tag is\nmapped to the corresponding full form by using the\nword expansion list, e.g. “ir VMIP1S0” is mappedto “voy” and “ir VMIF1P0” is mapped to “iremos”\n(we will go). In this way, the lexicon was enriched\nwith some previously unseen full forms of the verb.\nThe translation with the system trained only\non the phrasal lexicon is done without a language\nmodel as well as with the language model trained\non the full target language corpus. The same\nset-up has been used for the small training corpus -\nonce with the small language model trained on 1k\nsentences and once with the full language model\ntrained on 40k sentences.\nIn order to investigate the effects of the phrasal\nlexicon and the size of bilingual corpus available for\ntraining on translation quality, the following set-ups\nhave been defined:\n1.full training corpus;\n2.full training corpus with phrasal lexicon;\n3.small training corpus;Exploiting phrasal lexica and additional morpho-syntactic language resources ...\nEAMT 2005 Conference Proceedings 213\n4.small training corpus with phrasal lexicon;\n5.small training corpus with extended phrasal\nlexicon;\n6.small training corpus with extended phrasal\nlexicon and language model trained on the full\ncorpus;\n7.phrasal lexicon without language model;\n8.extended phrasal lexicon without language\nmodel;\n9.extended phrasal lexicon and full language\nmodel.\nBesides the standard development and test set\ndescribed in Section 2, we also performed transla-\ntion on an external test text which does not comefrom the same domain as the training set. This text\nhas been translated with the systems 1, 3, 6, 7 and\n9.\n3.2 Results\nThe translation results can be seen in Table 2 and\nTable 3. As expected, the best results are ob-tained using the full training corpus with additional\nphrasal lexicon. It can be seen that the use of the\nphrasal lexicon yields improvements of the trans-\nlation quality even when a large bilingual corpus\nfrom the domain is available, although these im-\nprovements are relatively small.\nHowever, for the small bilingual corpus, the im-\nportance of the phrasal lexicon is significant. The\ndegradation in terms of WER and PER by using\nonly 2.5% of the original corpus is not higher than\n25% relative if the additional phrasal lexicon and\nlanguage resources containing morphological infor-mation are available. This can be further improved\nby up to 1.9% absolute in WER if monolingual in-\ndomain data is available, so that a better language\nmodel can be trained. This effect is even more sig-\nnificant for the external test corpus, although all the\nerror rates are higher since this text does not come\nfrom the same domain. As can be seen in Table 4,\nthe degradation of error rates by reducing the train-\ning corpus is not higher than 6% relative if the ex-\ntended phrasal lexicon is added. The big advantage\nof using such a small corpus is that its acquisitionshould not require any particular effort since pro-\nducing 1000 parallel sentences in two or three lan-\nguages can also be done manually.\nUsing only the phrasal lexicon and additional re-\nsources, the obtained error rates are similar to those\nfor the small training corpus alone. 
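The lexicon extension described in Section 3 above (expanding frequent verb base forms into inflected full forms via the monolingual expansion lists) amounts to a join between the checked English equivalents and the morphological expansion table. The sketch below is illustrative only: the container names (english_for, expansion_list, base_counts) and the thresholding details are assumptions, not the actual implementation used for these experiments.

```python
def expand_verb_entries(english_for, expansion_list, base_counts, min_count=5):
    """Create new (full_form, english) lexicon entries for inflected verb forms.

    english_for:     dict (base_form, pos_tag) -> list of checked English equivalents,
                     e.g. ("ir", "VMIP1S0") -> ["I go", "I am going"]
    expansion_list:  dict (base_form, pos_tag) -> inflected full form,
                     e.g. ("ir", "VMIF1P0") -> "iremos"
    base_counts:     mapping base_form -> frequency of the base form in the lexicon
    """
    new_entries = []
    for (base, pos), full_form in expansion_list.items():
        if base_counts.get(base, 0) <= min_count:
            continue  # only base forms seen more than five times are expanded
        for english in english_for.get((base, pos), []):
            new_entries.append((full_form, english))
    return new_entries
```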
These error\nrates are rather high, but they might be acceptable\nfor tasks where only the gist of the translated text is\nneeded, like for example document classification or\nmulti-lingual information retrieval.\nSome translation examples for the direction\nEnglish →Spanish using only the phrasal lexicon\nwith and without verb expansions are shown in Ta-\nble 5. It can be seen that the extension of the\nlexicon enables the system to find the correct full\nform of the verb in the inflected language more of-\nten. For the translation Spanish →English, Table 6\nshows that, for the extended lexicon, the system is\nable to produce correct or approximatively correct\ntranslations even for full forms that have not beenseen in the original training corpus. In the baseline\nsystem, those words remain untranslated and are\nmarked by UNKNOWN\n. The effects of the lexi-\ncon extension for the other language pair (Catalan-\nEnglish) as well as for the scenarios with the small\ntraining corpus are basically the same.\n4 Conclusion\nIn this work, we have examined the possibilities forthe use of a phrasal lexicon for statistical machine\ntranslation, especially as an additional resource for\nscarce training material.\nWe presented different translation scenarios:\nwith the full training corpus, with only 1000 sen-\ntence pairs (2.5% of the full corpus), both with and\nwithout phrasal lexicon as additional data and also\nonly with the phrasal lexicon as training corpus. We\nalso studied the effects of extending a lexicon usingmorpho-syntactic knowledge sources. We showed\nthat with the extended phrasal lexicon as additional\ntraining data, an acceptable translation quality can\nbe achieved with only 1000 sentence pairs of in-\ndomain text. The big advantage of such a small cor-\npus is that the costs of its acquisition are rather low\n- such a corpus basically can be produced manually\nin relatively short time.\nIn our future research, we plan to examine com-\nbinations of phrasal lexica and conventional dictio-Popovic and Ney\n214 EAMT 2005 Conference Proceedings\nTable 1. Corpus Statistics\nTraining: Spanish Catalan English\nfull corpus (40k) Sentences 40574\nRunning Words + Punct. 482290 485514 516717\nV ocabulary 14327 12772 8116\nSingletons 6743 5930 3081\nsmall corpus (1k) Sentences 1014\nRunning Words + Punct. 12138 12215 12972\nV ocabulary 1880 1823 1436\nSingletons 1150 1070 744\nphrasal lexicon (PL) Entries 10520\nRunning Words + Punct. 44289 46002 41850\nV ocabulary 10797 10460 11167\nSingletons 6573 6218 7153\nDevelopment: Sentences 972\nRunning Words + Punct. 12883 13039 13983\nOOVs - 40k 209 (1.4%) 179 (1.4%) 95 (1.2%)\nOOVs - 1k 1105 (58.8%) 1029 (56.4%) 766 (53.3%)\nOOVs - PL 726 (6.7%) 627 (6.0%) 328 (2.9%)\nTest: Sentences 972\nRunning Words + Punct. 12771 12973 13922\nOOVs - 40k 206 (1.4%) 171 (1.3%) 117 (1.4%)\nOOVs - 1k 1095 (58.2%) 1008 (55.3%) 777 (54.1%)\nOOVs - PL 733 (6.8%) 611 (5.8%) 365 (3.1%)\nExternal Test: Sentences 200\nRunning Words + Punct. 2949 3020 3117\nOOVs - 40k 19 (0.1%) 36 (0.3%) 59 (0.7%)\nOOVs - 1k 275 (14.6%) 284 (15.6%) 234 (16.3%)\nOOVs - PL 176 (1.6%) 184 (1.8%) 107 (0.9%)\nnaries, and also to investigate effects for other lan-\nguage pairs and tasks.\nAcknowledgement\nThis work was partly funded by the TC-STARproject by the European Community (FP6-506738)\nand by the Deutsche Forschungsgemeinschaft\n(DFG) under the project “Statistical Methods for\nWritten Language Translation” (Ne572/5).\n5 References\nY . Al-Onaizan, U. Germann, U. Hermjakob,\nK. 
Knight, P. Koehn, D. Marcu, K. Yamada. 2000.\nTranslating with scarce resources. In Proc. of the\n17th National Conference on Artificial Intelligence\n(AAAI) , pages 672–678, Austin, TX, August.V . Arranz, N. Castell, and J. Gim ´enez. 2003.\nDevelopment of language resources for speech-\nto-speech translation. In Proc. of RANLP’03 ,\nBorovets, Bulgaria, September.\nP. F. Brown, S. A. Della Pietra, V . J. Della Pietra,\nand R. L. Mercer. 1993. The mathematics of sta-\ntistical machine translation: Parameter estimation.\nComputational Linguistics , 19(2):263–311\nP. F. Brown, S. A. Della Pietra, V . J. Della Pietra,\nand M. J. Goldsmith. 1993. But Dictionaries are\nData Too. In Proc. ARPA Human Language Tech-\nnology Workshop ’93 , pages 202–205, Princeton,\nNJ, March.\nJ. Carmona, S. Cervell, L. M `arquez, M. Mart ´ı,Exploiting phrasal lexica and additional morpho-syntactic language resources ...\nEAMT 2005 Conference Proceedings 215\nTable 2. Translation Error Rates [%] for the language pair Spanish-English\nSpanish →English Development Test\nTraining Corpus WER PER 1-BLEU WER PER 1-BLEU\n40k(full corpus) 40.8 33.3 59.9 41.5 33.9 60.8\n+PL 39.5 32.1 58.9 40.8 32.8 59.9\n1k(small corpus) 53.6 44.9 75.9 54.8 46.0 76.8\n+PL 49.8 40.9 70.9 50.7 41.7 71.7\n+verb expansions 48.7 39.4 69.8 50.1 40.3 70.9\n+full LM (40k) 48.2 39.1 68.9 49.2 39.8 69.8\nPL(only lexicon) 57.8 47.1 78.8 59.3 47.8 80.7\n+verb expansions 56.7 45.9 78.1 57.6 46.5 78.8\n+full LM (40k) 53.9 44.0 75.0 56.0 45.7 76.6\nEnglish →Spanish\n40k(full corpus) 41.3 34.9 56.1 43.2 35.7 57.8\n+PL 40.7 34.4 55.8 42.9 35.9 57.8\n1k(small corpus) 57.5 49.2 74.3 58.4 49.9 76.5\n+PL 52.4 43.9 68.0 53.6 44.9 68.7\n+verb expansions 51.4 43.0 67.7 52.8 44.0 68.4\n+full LM (40k) 50.4 42.3 66.2 52.1 43.6 67.6\nPL(only lexicon) 61.3 51.8 74.6 62.3 52.7 75.7\n+verb expansions 58.1 49.7 73.2 59.5 50.4 74.7\n+full LM (40k) 57.6 49.4 72.5 59.1 50.1 73.9\nTable 3. Translation Error Rates [%] for the language pair Catalan-English\nCatalan →English Development Test\nTraining Corpus WER PER 1-BLEU WER PER 1-BLEU\n40k(full training) 39.7 32.5 59.8 41.4 33.6 61.1\n+PL 38.9 31.8 58.6 41.1 33.2 60.2\n1k(reduced training) 53.4 44.8 76.4 54.2 45.0 76.6\n+PL 49.2 40.4 71.8 50.9 41.4 72.7\n+verb expansions 48.9 39.3 70.9 50.1 39.7 71.4\n+full LM (40k) 48.1 38.7 69.8 49.4 39.3 70.4\nPL(only lexicon) 57.2 47.3 79.2 59.0 47.9 80.2\n+verb expansions 55.0 44.6 76.9 56.1 45.1 76.7\n+full LM (40k) 54.0 44.0 75.2 55.6 44.7 75.9\nEnglish →Catalan\n40k(full training) 41.5 35.5 58.3 43.3 36.3 60.0\n+PL 41.0 35.0 57.8 43.3 36.3 60.1\n1k(reduced training) 57.2 49.3 75.2 57.9 49.8 75.1\n+PL 52.3 44.2 69.8 53.2 44.9 69.4\n+verb expansions 51.6 43.7 69.3 53.0 44.8 70.0\n+full LM (40k) 49.7 42.0 66.2 51.2 43.0 67.2\nPL(only lexicon) 63.0 53.8 76.3 65.0 55.1 78.2\n+verb expansions 58.5 49.8 73.6 59.8 50.6 74.6\n+full LM (40k) 57.7 49.3 73.4 59.1 50.2 74.6Popovic and Ney\n216 EAMT 2005 Conference Proceedings\nTable 4. 
Translation Error Rates [%] for the external test corpus\nSpanish →English WER PER 1-BLEU\n40k 61.6 49.4 79.1\n1k 67.3 56.1 83.1\n+PL+exp+lm40k 61.6 49.1 76.7\nPL 69.9 57.2 87.6\n+exp+lm40k 66.5 54.5 82.0\nEnglish →Spanish\n40k 71.3 58.7 76.8\n1k 78.8 67.3 84.0\n+PL+exp+lm40k 72.5 60.7 77.5\nPL 78.8 66.6 84.0\n+exp+lm40k 75.2 63.4 79.1\nCatalan →English\n40k 64.3 51.5 80.5\n1k 71.8 60.5 88.0\n+PL+exp+lm40k 68.9 55.7 85.9\nPL 73.5 59.7 89.1\n+exp+lm40k 70.1 57.0 83.7\nEnglish →Catalan\n40k 70.4 58.9 79.8\n1k 78.3 67.6 86.8\n+PL+exp+lm40k 74.4 62.7 83.1\nPL 79.8 68.9 88.0\n+exp+lm40k 75.1 64.2 80.6\nTable 5. Translation examples English →Spanish using the Phrasal Lexicon (PL) as training data without and with\nadditional verb expansions\nsource sentence well I am pretty interested .\nPL bien estoy bastante interesa .\nPL with expansions bien estoy bastante interesado .\nsource sentence there is no problem .\nPL hay no es problema\nPL with expansions no hay problema .\nsource sentence I do not know , what kind of sport do you like best ?\nphrasal lexicon no me sabe ,q u´e tipo de deporte te gusta mejor ?\nphrasal lexicon with expansions no lo s ´e,q u´e tipo de deporte te gusta mejor ?\nL. Padr ´o, R. Placer, H. Rodr ´ıguez, M. Taul ´ee, and\nJ. Turmo. 1998. An environment for morphosyn-tactic processing of unrestricted Spanish text. In\nProc. of the first Int. Conf. on Language Resources\nand Evaluation (LREC) , Granada, Spain.\nE. Matusov, M. Popovi ´c, R. Zens, and H. Ney.\n2004. Statistical Machine Translation of Sponta-neous Speech with Scarce Resources. In Proc. of\nthe Int. Workshop on Spoken Language Translation(IWSLT) , pages 139–146, Kyoto, Japan, September.\nS. Nießen and H. Ney. 2004. Statistical Machine\nTranslation with Scarce Resources Using Morpho-\nsyntactic Information. Computational Linguistics ,\n30(2):181–204Exploiting phrasal lexica and additional morpho-syntactic language resources ...\nEAMT 2005 Conference Proceedings 217\nTable 6. Translation examples Spanish →English using the Phrasal Lexicon (PL) as training data without and with\nadditional verb expansions\nsource sentence si , esto , cu ´antas personas ser ´an ?\nPL if , this , how many people UNKNOWN ser´an?\nPL with expansions if , that , how many people will be ?\nsource sentence quer´ıamos un poco de fruta , yogur , este tipo de cosas , algunas galletas .\nPL UNKNOWN quer´ıamos a little fruit , yogurt , this kind of things , some salted .\nPL with expansions we wanted a little fruit , yogurt , this kind of things , some salted .\nsource sentence s´ı , los hoteles , nos movemos entre las tres y las cinco estrellas .\nPL yes , hotels , we UNKNOWN movemos between the three and the five-star .\nPL with expansions yes , the hotels , we are moving within the three and the five-star .\nF. J. Och and H. Ney. 2002. Discriminative Train-\ning and Maximum Entropy Models for Statistical\nMachine Translation. In Proc. 40th Annual Meet-\ning of the Association for Computational Linguis-\ntics (ACL) , Philadelphia, PA, July.\nK. Papineni, S. Roukos, T. Ward, and W.J. Zhu.\n2002. BLEU: a method for automatic evaluation of\nmachine translation. In Proc. 40th Annual Meeting\nof the Assoc. for Computational Linguistics , pages\n311–318, Philadelphia, PA, July.\nS. V ogel and C. Monson. 2004. Augmenting Man-\nual Dictionaries for Statistical Machine Translation\nSystems. In Proc. 4th Int. Conf. 
on Language Resources and Evaluation (LREC), pages 1589–1592, Lisbon, Portugal, May.",
"main_paper_content": null
}
|
{
"decision": "Unknown",
"reviews": []
}
| 0 | 0 |
[] |
[] |
{
"id": "kKkQ_DLaE7H",
"year": null,
"venue": "EAMT 2012",
"pdf_link": "https://aclanthology.org/2012.eamt-1.56.pdf",
"forum_link": "https://openreview.net/forum?id=kKkQ_DLaE7H",
"arxiv_id": null,
"doi": null
}
|
{
"title": "Crowd-based MT Evaluation for non-English Target Languages",
"authors": [
"Michael Paul",
"Eiichiro Sumita",
"Luisa Bentivogli",
"Marcello Federico"
],
"abstract": "Michael Paul, Eiichiro Sumita, Luisa Bentivogli, Marcello Federico. Proceedings of the 16th Annual conference of the European Association for Machine Translation. 2012.",
"keywords": [],
"raw_extracted_content": "Crowd-based MT Evaluation for non-English Target Languages\nMichael Paul andEiichiro Sumita\nNI\nCT\nHikaridai 3-5\n619-0289 Kyoto, Japan\n<Firstname>.<Lastname>@nict.go.jpLuisa Bentivogli andMarcello Federico\nFBK-irst\nVia Sommarive, 18\n38123 Povo-Trento, Italy\n{bentivo,federico} @fbk.eu\nAbstract\nThis paper investigates the feasibility of\nusing crowd-sourcing services for the hu-\nman assessment of machine translation\nquality of translations into non-English tar-\nget languages. Non-expert graders are\nhired through the CrowdFlower interface\nto Amazon’s Mechanical Turk in order to\ncarry out a ranking-based MT evaluation\nof utterances taken from the travel conver-\nsation domain for 10 Indo-European and\nAsian languages. The collected human as-\nsessments are analyzed for their worker\ncharacteristics, evaluation costs, and qual-\nity of the evaluations in terms of the agree-\nment between non-expert graders and ex-\npert/oracle judgments. Moreover, data\nquality control mechanisms including “lo-\ncale qualification” “qualificatio testing”,\nand “on-the-fl verification are investi-\ngated in order to increase the reliability of\nthe crowd-based evaluation results.\n1 Introduction\nThis paper focuses on the evaluation of machine\ntranslation (MT) quality for target languages other\nthan English. Although human evaluation of MT\noutput provides the most direct and reliable as-\nsessment, it is time consuming, costly, and subjec-\ntive. Various automatic evaluation measures were\nproposed to make the evaluation of MT outputs\ncheaper and faster (Przybocki et al., 2008), but au-\ntomatic metrics have not yet proved able to con-\nsistently predict the usefulness of MT technolo-\ngies. To counter the high costs in human assess-\nment of MT outputs, the usage of crowdsourc-\ning services such as Amazon’s Mechanical Turk1\n(MTurk) and CrowdFlower2(CF) were proposed\nrecently (Callison-Burch, 2009; Callison-Burch et\nal., 2010; Denkowski and Lavie, 2010).\n1http://www.mturk.com\n2http://cro wdfl\nwer.comThe feasibility of crowd-based MT evaluations\nwas investigated for shared tasks such as the WMT\n(Callison-Burch, 2009) and the IWSLT (Federico\net al., 2011) evaluation campaigns. Their re-\nsults showed that agreement rates for non-experts\nwere comparable to those for experts, and that\nthe crowd-based rankings correlated very strongly\nwith the expert-based rankings. Most of the\ncrowd-based evaluation experiments focused on\nEnglish as the target language, with the exception\nof (Callison-Burch et al., 2010) evaluating Czech,\nFrench, German, and Spanish translation outputs\nand (Federico et al., 2011) evaluating translations\ninto French.\nThis paper investigates the feasibility of using\ncrowdsourcing services for the human assessment\nof translation quality of translation tasks where the\ntarget language is notEnglish, with a focus on\nnon-European languages. In order to identify non-\nEnglish target languages for which we can expect\nto fin qualifie workers, we referred to existing\nsurveys that analyze the demographics of MTurk\nworkers (see Section 2). 
In total, we selected 7\nnon-European languages consisting of Arabic (ar),\nChinese (zh), Hindi (hi), Japanese (ja), Korean\n(ko), Russian (ru), and Tagalog (tl), as well as 3\nEuropean languages covering English (en), French\n(fr), and Spanish (es) as the target languages for\nour translation experiments.\nThe MT evaluation was carried out using utter-\nances taken from the domain of travel conversa-\ntions. A description of the utilized language re-\nsources and the MT engines are summarized in\nSection 3. The translation quality of the MT en-\ngines was evaluated using (1) the automatic eval-\nuation metric BLEU (Papineni et al., 2002) and\n(2) human assessment of MT quality based on the\nRanking metric (Callison-Burch et al., 2007).\nFor the 10 investigated language pairs, non-\nexpert graders were hired through the CF interface\nto MTurk in order to carry out the ranking-based\nMT evaluation as described in Section 4. In ad-\ndition, expert graders were employed for four of\nProceedings of the 16th EAMT Conference, 28-30 May 2012, Trento, Italy\n229\nthe target languages (en, ja, ko, zh) to carry out\nexactly the same evaluation task\nas the non-expert\nworkers. For all target languages without expert\ngraders, we used an oracle ranking metric based\non the “Training Size Preference” assumption, i.e.,\nthe larger the training size, the better the transla-\ntion quality can be expected to be, to evaluate the\nquality of the worker judgments.\nBesides a thorough analysis of the obtained non-\nexpert grading results, we also investigated differ-\nent data quality control mechanisms in order to\nincrease the reliability of crowd-based evaluation\nresults (see Section 5). The experiments carried\nout in this paper revealed that the quality of the\ncrowd-based MT evaluation is closely related to\nthe demographics of the online work marketplace.\nAlthough high-quality evaluation results could be\ncollected for the majority of the investigated non-\nEnglish languages, the need for multi-layered data\nquality control mechanisms causes an increase in\nevaluation time. The finding of this paper con-\nfi m that crowdsourcing is an effective way of re-\nducing the costs of MT evaluation without sacrific\ning quality even for non-English target languages\ngiven that control mechanisms carefully tailored to\nthe evaluation task at hand are in place.\n2 MechanicalTurkDemographics\nPast surveys on the demographics of MTurk users\nindicated that most of the workers come from the\nUS. (Ipeirotis, 2010) conducted a recent survey on\nthe demographics of MTurk users which showed a\nshift in the “country of origin” of workers, i.e., a\ndecrease in US workers to 47% and an increase of\nIndian workers to 34%, with the remaining 19%\nof workers coming from 66 different countries3.\nBased on the country information from MTurk\nworkers taking part in the survey, we analyzed\nwhich languages are used by these workers.\nThe language distribution shows that the major-\nity of workers speak English, followed by Hindi,\nRomanian, Tagalog, and Spanish. At least 5 work-\ners were native speakers of Dutch, Arabic, Italian,\nGerman, and Chinese. 
However, taking into ac-\ncount officia languages spoken in the respective\ncountries, we can expect larger contributions of\nworkers speaking Spanish, French, and Arabic.\n3 MT Evaluation Task\nThe crowd-based MT evaluation is carried out us-\ning the translation results of phrase-based statis-\n3Details on the surve ycan be found at http://hdl.handle.\nnet/2451/29585tical machine translation (SMT) systems that are\ntrained on parallel corpora. The translation quality\nof SMT engines heavily depends on the amount\nof bilingual language resources available to train\nthe statistical models. We exploited this charac-\nteristic of data-driven MT approaches to defin an\n“oracle” ranking metric ( ORACLE ) according to the\n“Training Size Preference” assumption, in which\nan MT output of a system A wins (or ties in) a com-\nparison with the MT output of a system B, where\nthe training corpus of system B is a subset of the\none of system A.\nThe language resources used to build MT en-\ngines are described in Section 3.1. We selected 10\nIndo-European and Asian languages based on the\nfollowing criteria:\n•“Worker Availability” covering languages with ‘many’\n(en, hi), ‘several’ (es, tl), ‘few’ (ar, fr, ja, ru, zh), ‘almost\nnone’ (ko) MTurk workers available.\n•“Usage for MT Research” covering ‘frequently’ (ar, fr,\nzh), ‘often’ (es, ru), ‘sporadically’ (ja, ko) used lan-\nguages as well as under-resourced languages (tl, hi).\n•“AvailabilityofLanguageResources ” used for the train-\ning and evaluation of MT engines.\nThe training corpus consisting of 160k relatively\nshort sentences was split into three subsets of 80k,\n20k, and 10k sentence pairs, respectively. Each\nsubset was used to train an MT engine whose\ntranslation quality significantl differed from the\nothers, with the MT engine trained on the full cor-\npus achieving the best translation quality.\nThis translation experiment setup renders the\nmanual evaluation relatively reliable due to (1) a\nrelatively easy translation task and (2) large differ-\nences in translation performance between the uti-\nlized MT engines. Moreover, the ORACLE metric\ncan be exploited to judge the quality of crowd-\nbased evaluation results for all languages where\nexpert graders were not available.\n3.1 Language Resources\nThe crowd-based MT evaluation experiments are\ncarried out using the multilingual BasicTravelEx-\npressions Corpus (BTEC), which is a collection\nof sentences that bilingual travel experts consider\nuseful for people going to or coming from another\ncountry (Kikui et al., 2006). The sentence-aligned\ncorpus consists of 160k sentences and covers all 10\nlanguages investigated in this paper.\nThe parallel text corpus was randomly split into\nthree subsets: for evaluating translation quality\n(eval, 300 sentences), for tuning the SMT model\nweights (dev, 1000 sentences) and for training the\n230\nstatistical models (train, 160k sentences). Further-\nmor\ne, three subsets of varying sizes (80k, 20k, and\n10k sentences) were randomly extracted from the\ntraining corpus and used to train four SMT engines\non the respective training data sets for each of the\ninvestigated language pairs.\n3.2 Translation Engines\nThe translation results evaluated in this paper were\nobtained using fairly typical phrase-based SMT\nengines built within the framework of a feature-\nbased exponential model. For the training of the\nSMT models, standard word alignment (Och, 2003)\nand language modeling (Stolcke, 2002) tools were\nused. 
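The corpus split described in Section 3.1 above (300 eval and 1,000 dev sentences, plus a 160k training set from which the 80k/20k/10k training sets are drawn) can be reproduced in a few lines. One detail is an assumption here: the paper only says the smaller subsets were randomly extracted, but the sketch nests them inside each other, since subset relations between training sets are what the ORACLE "Training Size Preference" metric relies on.

```python
import random

def make_splits(parallel_corpus, seed=0):
    """Split a sentence-aligned corpus into eval/dev/train and nested training subsets.

    parallel_corpus: list of aligned sentence tuples (one per sentence ID).
    Sizes follow Section 3.1; nesting of the 10k/20k/80k subsets inside the
    full training set is an assumption (see the note above).
    """
    rng = random.Random(seed)
    corpus = list(parallel_corpus)
    rng.shuffle(corpus)
    eval_set = corpus[:300]
    dev_set = corpus[300:1300]
    train = corpus[1300:]                      # the "160k" full training set
    subsets = {
        "160k": train,
        "80k": train[:80_000],
        "20k": train[:20_000],
        "10k": train[:10_000],
    }
    return eval_set, dev_set, subsets
```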
Minimum error rate training ( MERT ) was\nused to tune the decoder’s parameters and was per-\nformed on the devset using the technique proposed\nin (Och, 2003). For the translation, an in-house\nmulti-stack phrase-based decoder was used.\nIn order to maximize the gains4from an in-\ncreased training data size and therefore allow for\nreliable ORACLE judgments, we selected English as\nthe source language for the translations into Ara-\nbic, Japanese, Korean, and Russian. For all other\ntranslation experiments, Japanese source sentences\nwere used as the input for the SMT decoder.\n3.3 Automatic Evaluation\nFor the automatic evaluation of translation quality,\nwe applied the BLEU metric (Papineni et al., 2002).\nScores range between 0 (worst) and 1 (best).\nThe results of the translation engines de-\nscribed in Section 3.2 are summarized in Ta-\nble 1, where the BLEU scores are given as\npercent figu es (%BLEU). The obtained scores\nconfi m the “Training Size Preference” assump-\ntion (160k> 80k> 20k> 10k) of the ORACLE met-\nric. Concerning the target languages, the high-\nest BLEU scores were achieved for Korean and\nJapanese, followed by English, Chinese, Spanish\nand French. Arabic and Hindi seem to be the\nmost difficul target languages for the given trans-\nlation and evaluation tasks obtaining the lowest au-\ntomatic evaluation scores for each of the investi-\ngated tasks.\n3.4 Subjective Evaluation\nHuman assessments of translation quality were\ncarried out using the Ranking metrics where hu-\nman graders were asked to “rank each whole sen-\ntence translation from Bestto Worst relative to the\n4For relatively simple translation tasks, the amount of training\ndata affects the translation quality of closely related languages\nfar less than for more distinct languages.Table 1: Translation Quality ( %BLEU )\nLanguage MT Engine\nSource Target 160k 80k 20k 10k\nen ar 12.90 12.45 10.89 9.97\nja 28.58 25.38 21.00 19.41\nko 29.53 26.42 21.43 18.66\nru 16.15 15.84 13.90 12.36\nja en 24.47 19.95 15.35 12.57\nes 19.52 17.43 13.30 11.73\nfr 19.35 18.84 14.67 14.43\nhi 14.17 12.57 9.97 8.24\ntl 18.93 17.81 15.78 13.58\nzh 21.22 17.08 13.03 12.64\nother choices (ties are allowed)” (Callison-Burch\net al., 2007).\nThe unit of\nevaluation was the ranking set,\nwhich is composed of a source sentence, the main\nreference provided as an acceptable translation,\nand the MT outputs of all four MT engines to be\njudged. The order of the MT outputs was changed\nrandomly for each ranking set to avoid bias. The\nRanking evaluation was carried out using a web-\nbrowser interface and graders had to order four\nsystem outputs by assigning a grade between 1\n(best) and 4 (worse).\n4 Crowd-basedMT Evaluation\nTo counter the high costs in human assessment\nof MT outputs, crowdsourcing services such as\nMTurk and CF have attracted a lot of attention\nboth from industry and academia as a means for\ncollecting data for human language technologies\nat low cost. MTurk is an on-line work market-\nplace, where people are paid small sums of money\nto work on Human Intelligence Tasks (HITs), i.e.\ntasks that machines have hard time doing. The\nCF platform works across multiple crowdsourcing\nservices, including MTurk. CF gives unrestricted\naccess, making it possible for non US-based re-\nquesters to place HITs on MTurk.\n4.1 DataQuality Control Mechanism\nOne of the most crucial issues to consider when\ncollecting crowdsourced data is how to ensure their\nquality. 
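For the Ranking evaluation introduced in Section 3.4 above, each judged ranking set boils down to a mapping from the four systems to a grade between 1 (best) and 4 (worst), with ties allowed. The sketch below shows how such judgments can be turned into the "percent better" scores reported later (Section 5.2 and Table A.5); normalising by the number of pairwise comparisons is an assumption, since the paper only gives the verbal definition "average number of times that a system was judged better than any other system".

```python
def ranking_scores(judged_sets):
    """Fraction of pairwise comparisons each system wins outright.

    judged_sets: iterable of dicts {system_name: grade}, with grade 1 (best)
    to 4 (worst) and ties allowed, one dict per judged ranking set.
    """
    wins, comparisons = {}, {}
    for grades in judged_sets:
        systems = list(grades)
        for a in systems:
            for b in systems:
                if a == b:
                    continue
                comparisons[a] = comparisons.get(a, 0) + 1
                if grades[a] < grades[b]:      # a lower grade means a better rank
                    wins[a] = wins.get(a, 0) + 1
    return {s: wins.get(s, 0) / comparisons[s] for s in comparisons}

# Example with a single judged ranking set in which the 160k system is ranked best.
print(ranking_scores([{"160k": 1, "80k": 2, "20k": 3, "10k": 3}]))
```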
MTurk and CF provide requesters with\nquality control mechanisms including the “locale\nqualification option to restrict workers by coun-\ntry. Preliminary qualification for workers can be\nset by requiring workers to complete a qualifica\ntion test using training ranking sets. Only workers\npassing the test are allowed to accept a HIT for the\nevaluation task at hand. Moreover, CF provides\na mechanism to verify the workers’ reliability on-\nthe-fl . The HIT design interface provided by CF\nallows including so called “gold units”, i.e. items\n231\nwith known labels, along with the other units com-\nposing the requested HIT\n. Gold units are randomly\nmixed with the other units by CF when it cre-\nates the worker assignments. These control units5\nallows distinguishment between trusted workers\n(those who correctly replicate the gold units) and\nuntrusted workers (those who fail the gold units).\nUntrusted workers are automatically blocked and\nnot paid, and their labels are filte ed out from the\nfina data set. CF uses the workers’ history to apply\nconfidenc scores (the “trust level” feature) to their\nannotations. In order to be considered trusted in a\njob, workers are required to judge a minimum of\nfour gold units and to be above an accuracy thresh-\nold of 70%. As a further control, CF pauses a job\n(the “auto-takedown” feature), if workers are fail-\ning too many gold units.\nIn this paper, we investigated the dependency of\nthe quality of the evaluation results for the follow-\ning quality control features:\n•locale qualification (LOC): restriction to officia lan-\nguage countries; the most important control mechanism\nto prevent workers from tainting the evaluation results.\n•qualificationtesting (PRI): training phase assessment of\nworker’s eligibility prior to the evaluation task.\n•on-the-flyverification (GOLD): identificatio of trusted\nworkers using control units with a known answer.\n4.2 Control Units\nControl units have to be unambiguous, not too triv-\nial, and also not too difficult For the translation\ntask at hand, we selected the original corpus sen-\ntence as the main reference translation. From para-\nphrased reference translations6, we selected a sin-\ngle reference as the goldtranslation to be included\nin the control units. A paraphrased reference to\nbe selected as a gold translation should have the\nfollowing characteristics: (1) it should be similar\nto the main reference and (2) its translation qual-\nity should be better than the best MT output for all\ntranslation hypotheses of the same input. If native\nspeakers are available, the gold translation quality\nshould be checked manually. However, for most\nof the investigated target languages, native speak-\ners were not available. Thus, we automatically se-\nlected a gold translation based on the edit distance\nof each paraphrased reference to (a) the main refer-\nence and (b) the ORACLE -best (=160k) MT output\nfor all sentence IDs of the evalset. We selected the\nmost appropriate paraphrased reference according\n5The suggested amount of gold units tobe provided is around\n10% of the requested units.\n6Up to 15 paraphrased reference translations are available for\nthe data sets described in Section 3.1.to its minimal distance to the main reference and\nits maximal distance to the MT output. 
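The automatic gold-translation selection just described (the paraphrased reference closest to the main reference and farthest from the ORACLE-best MT output) can be sketched with a plain word-level edit distance. How the two distances are combined into a single ranking score is not stated in the paper, so the simple difference used below is an assumption.

```python
def edit_distance(a_tokens, b_tokens):
    """Word-level Levenshtein distance between two token lists."""
    m, n = len(a_tokens), len(b_tokens)
    prev = list(range(n + 1))
    for i in range(1, m + 1):
        cur = [i] + [0] * n
        for j in range(1, n + 1):
            cost = 0 if a_tokens[i - 1] == b_tokens[j - 1] else 1
            cur[j] = min(prev[j] + 1, cur[j - 1] + 1, prev[j - 1] + cost)
        prev = cur
    return prev[n]

def pick_gold_translation(paraphrases, main_reference, best_mt_output):
    """Pick the paraphrase closest to the main reference and farthest from
    the ORACLE-best (160k) MT output; the combined score is an assumption."""
    def score(paraphrase):
        ref_dist = edit_distance(paraphrase.split(), main_reference.split())
        mt_dist = edit_distance(paraphrase.split(), best_mt_output.split())
        return ref_dist - mt_dist          # smaller is better
    return min(paraphrases, key=score)
```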
The top-\n30 sentence IDs with the best gold translation dis-\ntance scores were selected as control units for the\nrespective translation task.\nFor each control unit sentence ID, a random MT\noutput was replaced in the ranking set with the\ngold translation. For our experiments, we distin-\nguished two GOLD annotation schemes:\n•“best-only” (GOLDb): check only the best translation,\ni.e., force rank ‘1’ assignment for the gold translation.\n•“best+worse” (GOLDbw): check the best and the worst\ntranslation, i.e., allow rank ‘1’ or ‘2’ for the gold and\nrank ‘3’ or ‘4’ for the ORACLE-worst (10k) translation.\n4.3 Evaluation Interface\nCF provides two interfaces: (1) an external one for\nMTurk workers and (2) an internal one for which\nyou have to prepare your own work force. The\ninternal interface is (currently) free of charge and\nwas used to collect judgments from in-house ex-\npert graders using exactly the same HITs and the\nsame online interface as the MTurk workers.\n4.4 Experiment Setup\nFor each target language ( TRG), we repeated the\nsame MT evaluation experiment using the follow-\ning data quality control settings7:\n1.NONE : no quality control (all TRGs)\n2.GOLD: on-the-fl only (all TRGs)\n3.LOC+GOLD: locale+on-the-fl (all TRGs)\n4.LOC+GOLD+PRI : locale+testing+on-the-fl (hi, ko)\nAll experiments using the same control set-\nting were carried out simultaneously, i.e., a single\nworker might take part in more than one evalua-\ntion experiment. A HIT consisted of 3 ranking sets\nper page and is paid 6 cents for all experiments. In\ntotal, the evaluation costs8for all the experiments\nadded up to $390 for 30 experiments, resulting in\nan average of $13 for the crowd-based evaluation\nof 4 MT outputs for 300 input sentences.\n5 Evaluation Results\nIn order to investigate the effects of the data quality\ncontrol mechanisms, the analysis of the evaluation\nresults is conducted experiment-wise. i.e., we do\nnot differentiate between single workers, but treat\nall the collected judgments of the respective exper-\niment as a “single” grader result. This enables a\n7India was excluded by default for all experiments besides the\nones having Hindi as the target language.\n8The requester’s payment includes a fee to MTurk of 10% of\nthe amount paid to the workers. In addition, CF takes a 33%\nshare of the payments by the requester.\n232\ncomparison of non-expert vs. expert/oracle grad-\ning results and the impact of\neach control setting\non the quality of the collected judgments. The de-\ntails of the experiment results for each target lan-\nguage are listed in Appendix A.\n5.1 Worker Characteristics\nTable A.1. summarizes the amount of participat-\ning workers. For each control setting, we list the\namount of workers ( total) and the percentage of\nworkers coming from a country where the lan-\nguage is the officia language ( native). The worker\ndemographics are summarized in Table A.2.\nWithout any control mechanism in place, the\njudgments mainly originated from non-native\nworkers. 53% of the workers submitted HITs for\nat least two tasks, with the largest overlap being\nf ve tasks. 
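The cost figures quoted in Section 4.4 above can be checked with a short back-of-envelope computation. The worker payment for one full pass over the test set follows directly from the stated HIT design; the gap up to the reported 13 USD average per experiment is consistent with the requester fees mentioned earlier (10% to MTurk plus a 33% CrowdFlower share) and with the fact that more than 300 judgments were collected per task (Table A.3), although that breakdown is an inference rather than something the paper states.

```python
sentences = 300
ranking_sets_per_hit = 3
pay_per_hit = 0.06                                      # USD, as stated in Section 4.4

hits_per_pass = sentences // ranking_sets_per_hit       # 100 HITs for one full pass
worker_cost_per_pass = hits_per_pass * pay_per_hit      # 6.00 USD paid to workers

reported_total, experiments = 390, 30
print(worker_cost_per_pass)                             # 6.0
print(reported_total / experiments)                     # 13.0 USD per experiment on average
```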
Although some workers might be able\nto speak and evaluate more than two languages, the\nresults indicate that thelargertheoverlap,theless\nreliable the judgments are expected to be.\nThe on-the-fl verificatio based on gold trans-\nlations only (GOLDb) resulted in a high percent-\nage of judgments obtained from trusted workers\n(65∼ 100%) for the majority of tasks, but achieved\nworse figu es with respect to native worker contri-\nbutions. These finding indicate that single gold\ntranslations are not sufficient to identify workers\nassigninggrades based on fixed patterns.\nAs a counter-measure, we limited the worker\norigin to the officia language countries and the\nUS, and annotated both the best and worst trans-\nlation of the control units. As a results, 47%\nof theLOC+GOLDbwgradings were collected from\nnative speakers. These results show that the lo-\ncale and on-the-fl control enable the collection of\nless tainted judgments and the identificatio of un-\ntrusted workers, respectively. Table A.3. summa-\nrizes the amount of judgments collected for each\ntask. The total count depends on the number of\nnon-trusted workers accepting HITs for the respec-\ntive language.\nAlthough high-quality control units positively\naffect the quality of the evaluations as shown in\nSection 5.2, the average time needed to collect the\ndata increased by a factor of 8. The evaluation\nperiod, i.e., the number of days needed to col-\nlect all the data, the grading time, i.e., the hours\nspent on actually grading the translations, and the\naverage grading time per assignment are summa-\nrized in Table A.4. The grading time for each task\nranged from 2.5h to 6.5h for the LOC+GOLDbwex-\nperiments. However, the evaluation period largelydepends on the language, ranging from 2 days (hi,\ntl, es) to over 2 weeks (ru, zh, ko). The analysis\nof the average time needed to judge a single HIT\nindicates that the shorter the evaluation time, the\nless reliable the judgments are expected to be.\nThe most problematic languages are Korean and\nHindi. For Korean, the evaluation experiments\nlasted 3 months due to the lack of trusted work-\ners. Moreover, the Hindi LOC+GOLDbwtask could\nnot be finishe because the large amount of un-\ntrusted workers triggered CF’s auto-takedown fea-\nture. In order to prevent an auto-takedown for\njobs where low trust levels of workers are to be\nexpected, a training phase assessing the worker’s\neligibility prior to the evaluation task needs to be\nincluded. Only workers passing the qualificatio\ntest were allowed to accept HITs for the respec-\ntive task. The Korean and Hindi results given\nin Appendix A were therefore obtained using the\nLOC+GOLDbw+PRI data quality control setting.\n5.2 Ranking Results\nTheRanking scores were obtained as the average\nnumber of times that a system was judged better\nthan any other system. The results summarized\nin Table A.5. differ largely for the investigated\ndata quality settings. System ranking scores result-\ning in an MT system ordering other than the ex-\npert rankings are marked in boldface. For most of\nthe uncontrolled tasks, worker rankings are differ-\nent from expert rankings. The GOLDbsetting tasks\nachieved a higher correlation with expert rank-\nings, but still differ for 3 out of the 10 languages.\nTheLOC+GOLDbwtasks ranked all the MT systems\nidentically to the experts. 
Interestingly, the ranking scores obtained for the better controlled evaluation experiments are much higher, indicating the collected evaluation data is of good quality.
5.3 Grading Consistency
The most informative indicator of the quality of a dataset is given by the agreement rate, or grading consistency, both between different judges and the same judge. To this purpose, the agreement between non-expert graders of experiments using different data quality control mechanisms was calculated for the MTurk data and compared to the results obtained by expert/oracle judgments. Agreement rates are calculated using the Fleiss' kappa coefficient κ (Fleiss, 1971):
κ = (Pr(a) − Pr(e)) / (1 − Pr(e)),
where Pr(a) is the observed agreement among graders, and Pr(e) is the hypothetical probability of chance agreement. In our task, Pr(a) is given by the proportion of times that two judges assessing the same pair of systems on the same source sentence agree that A>B, A=B, or A<B. Grader agreement scores can be interpreted as follows: "none" κ<0, "slight" κ≤0.2, "fair" κ≤0.4, "moderate" κ≤0.6, "substantial" κ≤0.8, and "almost perfect" κ≤1.0 (Landis and Koch, 1977).
The quality of the judgments is confirmed by the ranking agreement scores listed in Table A.6. Comparing the worker vs. the expert judgments, only slight agreement was obtained for the less controlled settings, but the proposed data quality control mechanisms achieved levels of up to substantial agreement. The comparison of agreement scores for oracle and expert judgments indicates that at least fair agreement is to be expected for languages where expert graders are not available.
6 Conclusions
In this paper, we investigated the use of the data quality control mechanisms of online work marketplaces for the collection of high-quality MT evaluation data for non-English target languages. The analysis of the worker characteristics revealed that locale qualification control settings enable the collection of less tainted judgments and that bad workers can be identified by short HIT grading times, large overlaps of evaluation tasks run simultaneously, and low trust levels measured either prior to or during the evaluation task.
Due to the lack of expert graders for 6 out of 10 languages, the creation of control units was carried out automatically, where the proposed similarity-based gold translation selection method proved to be a practical alternative to manual selection by native speakers. The improved setting of control units to verify not only the best but also the worst translation helped to identify untrusted workers using fixed grading schemes.
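A small two-grader instance of the κ computation used in Section 5.3 above is sketched below. The paper only gives the formula itself; estimating Pr(e) from the pooled category marginals and restricting the comparison to items both graders judged are assumptions of this sketch.

```python
from collections import Counter

def pairwise_kappa(judgments_a, judgments_b):
    """kappa = (Pr(a) - Pr(e)) / (1 - Pr(e)) for two graders' pairwise verdicts.

    judgments_a / judgments_b map an item key, e.g. (sentence_id, system_pair),
    to one of '<', '=', '>' stating how the grader compared the two systems.
    """
    common = sorted(set(judgments_a) & set(judgments_b))
    if not common:
        return float("nan")
    pr_a = sum(judgments_a[k] == judgments_b[k] for k in common) / len(common)
    marginals = Counter(judgments_a[k] for k in common)
    marginals.update(judgments_b[k] for k in common)
    total = sum(marginals.values())
    pr_e = sum((count / total) ** 2 for count in marginals.values())
    if pr_e == 1.0:                      # both graders used a single category throughout
        return 1.0 if pr_a == 1.0 else 0.0
    return (pr_a - pr_e) / (1.0 - pr_e)
```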
Finally, the combina-\ntion of multiple control mechanism proved to be\nessential for collecting high-quality data for all the\ninvestigated non-English languages.\nBased on the obtained findings we recommend\ncarrying out crowd-based MT evaluations by (1)\nlimiting the access to workers in countries where\nthe target language is the officia language, al-\nthough for languages lacking workers, the US\nmight be included if evaluation time is a crucial\nfactor and (2) definin control units so that ex-\npected rankings for the best and the worst systems\nare preserved and grading variations of non-expert\ngraders are taking into account.As future work, we are planning to investigate\nthe effectiveness of other control mechanisms such\naspayment and the applicability of the proposed\ncrowd-based MT evaluation method to more com-\nplex translation tasks, ranking more MT systems,\nas well as covering other domains such as the\ntranslation of public speeches.\nReferences\nCallison-Burch, C., C. Fordyce, P. Koehn, C. Monz,\nand J. Schroeder. 2007. (Meta-) Evaluation of Ma-\nchine Translation. In Proc. of the Second Workshop\non SMT , pages 136–158.\nCallison-Burch, C., P. Koehn, C. Monz, K. Peterson,\nM. Przybocki, and O. Zaidan. 2010. Findings of\nthe 2010 Joint Workshop on SMT and Metrics for\nMT. InProc. of the Joint Fifth Workshop on SMT\nand MetricsMATR , pages 17–53.\nCallison-Burch, C. 2009. Fast, Cheap, and Creative:\nEvaluating MT Quality Using Amazon’s Mechanical\nTurk. InProc. of the EMNLP , pages 286–295.\nDenkowski, M. and A. Lavie. 2010. Exploring Nor-\nmalization Techniques for Human Judgments of Ma-\nchine Translation Adequacy Collected Using Ama-\nzon Mechanical Turk. In Proc. of the NAACL HLT\nWorkshop on Creating Speech and Language Data\nwith Amazon’s Mechanical Turk , pages 57–61.\nFederico, M., L. Bentivogli, M. Paul, and S. St ¨ucker.\n2011. Overview of the IWSLT 2011 Evaluation\nCampaign. In Proc. of IWSLT , pages 11–27.\nFleiss, J. 1971. Measuring nominal scale agree-\nment among many raters. PsychologicalBulletin , 76\n(5):378–382.\nIpeirotis, P. 2010. New demographics of Mechanical\nTurk. http://hdl.handle.net/2451/29585.\nKikui, G., S. Yamamoto, T. Takezawa, and E. Sumita.\n2006. Comparative study on corpora for speech\ntranslation. IEEE Transactions on Audio, Speech\nand Language Processing , 14(5):1674–1682.\nLandis, J. and G. Koch. 1977. The measurement of\nobserver agreement for categorical data. Biometrics,\n33 (1):159–174.\nOch, F.J. 2003. Minimum Error Rate Training in SMT.\nInProc. of the 41st ACL , pages 160–167.\nPapineni, K., S. Roukos, T. Ward, and W. Zhu. 2002.\nBLEU: a Method for Automatic Evaluation of MT.\nInProc. of the 40th ACL , pages 311–318.\nPrzybocki, M., K. Peterson, and S. Bronsart. 2008.\nMetrics for MAchine TRanslation Challenge. http://\nnist.gov/speech/tests/metricsmatr/2008/results.\nStolcke, A. 2002. SRILM: an extensible language\nmodeling toolkit. In Proc. of the ICSLP .\n234\nAppendix A. Crowd-based MT Evaluation\nA.1. 
Amount of Workers\nThe total\nnumber of participating workers, as well as the number and\nthe percentage of trusted/native workers for each evaluation task.\nData Quality Control Mechanism\nLOC+GOLDbwGOLDbNONE\nTRG total trusted native total trusted [native] total trusted native\ncount (%oftotal) [ %oftotal] count (%oftotal) [%oftotal] count (%oftotal) [%oftotal]\nen 23 18 (78.3%) 13 [56.5%] 38 30 (78.9%) 10 [26.3%] 8 – 4 [50.0%]\nar 41 26 (63.4%) 23 [56.0%] 29 19 (65.5%) 6 [20.6%] 14 – 0 [ 0.0%]\nes 19 19 (100.0%) 15 [78.9%] 12 11 (91.6%) 2 [16.6%] 8 – 0 [ 0.0%]\nfr 10 9 (90.0%) 4 [40.0%] 10 9 (90.0%) 0 [0.0%] 14 – 2 [14.2%]\nhi 31∗28∗(90.3%) 27∗[8 7.0%] 85 37 (43.5%) 34 [40.0%] 47 – 33 [70.2%]\nja 14 11 (78.5%) 3 [21.4%] 15 13 (86.6%) 0 [0.0%] 10 – 1 [10.0%]\nko 45∗43∗(95.5%) 2∗[4.4 %] 24 17 (70.8%) 0 [0.0%] 5 – 0 [ 0.0%]\nru 30 20 (66.6%) 4 [13.3%] 7 7 (100.0%) 0 [0.0%] 14 – 0 [ 0.0%]\ntl 10 9 (90.0%) 5 [50.0%] 6 6 (100.0%) 0 [0.0%] 2 – 1 [50.0%]\nzh 18 11 (61.1%) 3 [16.6%] 16 12 (75.0%) 0 [0.0%] 7 – 0 [ 0.0%]\n∗marked results are obtained using the LOC+GOLDbw+PRI data quality control setting.\nA.2. Countryof Origin\nThe total number of countries and workers per country participating in each evaluation task.\nData Quality Control Mechanism\nLOC+GOLDbwGOLDbNONE\nTRG country: workers country: workers country: workers\nen 9 countries 11 countries 4 countries\nUSA:15, AUS:1, CAN:1, GBR:1, MYS:1, PHL:1, USA:15, MKD:9, CHN:2, NLD:2, ROU:2, JPN:2, USA:5, AUS:1, JPN:1, MKD:1\nBGD:1, CMR:1, SGP:1 PAK:2, AUS:1, BGD:1, CMR:1, MDV:1\nar 11 countries 15 countries 10 countries\nJOR:12, EGY:8, USA:7, TUN:3, LBN:3, SAU:2, MKD:6, TUN:3, JOR:3, EGY:2, USA:2, BGD:2, MKD:3, EGY:2, PAK:2, CHN:1, DZA:1, GBR:1,\nMAR:2, DZA:1, KWT:1, ARE:1, OMN:1 ARE:2, GBR:2, DZA:1, CHN:1, ESP:1, MDV:1, LBN:1, TUN:1, ARE:1, USA:1\nROU:1, OMN:1, SAU:1\nes 8 countries 5 countries 7 countries\nESP:5, MEX:4, USA:4, COL:2, ARG:1, GTM:1, MKD:7, ESP:2, USA:1, BGD:1, ROU:1 USA:2, BHS:1, ESP:1, PRT:1, MKD:1, PAK:1,\nURY:1, VEN:1 ROU:1\nfr 4 countries 5 countries 8 countries\nUSA:5, FRA:3, CAN:1, CMR:1 MKD:6, USA:1, CMR:1, NLD:1, ROU:1 MKD:3, PAK:3, FRA:2, ROU:2, CAN:1, CMR:1,\nNLD:1, USA:1\nhi 2 countries 4 countries 8 countries\nIND:30∗, USA:1∗IND:80, PAK:3, USA:1, ROU:1 IND:33, MKD:6, CHN:2, PAK:2, SGP:1, ARE:1,\nROU:1, USA:1\nja 2 countries 8 countries 5 countries\nUSA:10, JPN:4 MKD:6, ROU:2, PAK:2, BGD:1, CHN:1, JPN:1, USA:4, JPN:2, MKD:2, PAK:1, PHL:1\nMDV:1, NLD:1\nko 2 countries 10 countries 3 countries\nUSA:41, KOR:2 MKD:9, ROU:3, PHL:3, USA:2, CHN:2, POL:1, CHN:2, USA:2, MKD:1\nBGD:1, MDV:1, PAK:1, ESP:1\nru 2 countries 5 countries 7 countries\nUSA:25, RUS:5 PAK:2, ROU:2, GBR:1, SRB:1, MKD:1 MKD:8, MDA:1, POL:1, SRB:1, UKR:1, CHN:1,\nPAK:1\ntl 2 countries 3 countries 1 country\nPHL:7, USA:3 MKD:3, ROU:2, PAK:1 PHL:2\nzh 4 countries 6 countries 4 countries\nUSA:12, CHN:3, SGP:2, HKG:1 MKD:9, USA:3, ROU:1, NLD:1, CHN:1, BGD:1 USA:3, CHN:2, SGP:1, MKD:1\n∗marked results are obtained using the LOC+GOLDbw+PRI data quality control setting.\nA.3. 
Judgments\nThe total number of rankings sets judged by all/trusted/native workers for each evaluation task.\nData Quality Control Mechanism\nLOC+GOLDbwGOLDbNONE\nTRG total trusted native total trusted native total trusted native\ncount (%oftotal) [ %oftotal] count (%oftotal) [% oftotal] count (%oftotal) [%oftotal]\nen 564 495 (87.8%) 168 [29.8%] 664 568 (85.5%) 128 [19.3%] 442 – 78 [17.6%]\nar 693 543 (78.4%) 432 [62.3%] 559 463 (82.8%) 117 [20.9%] 465 – 0 [ 0.0%]\nes 581 581 (100.0%) 542 [93.3%] 428 416 (97.2%) 86 [20.1%] 421 – 0 [ 0.0%]\nfr 463 409 (88.3%) 178 [38.4%] 416 404 (97.1%) 0 [ 0.0%] 495 – 18 [ 3.6%]\nhi 580∗505∗(87.1%) 49 6∗[85.5%] 1013 531 (52.4%) 477 [47.1%] 723 – 314 [43.5%]\nja 386 356 (92.2%) 60 [15.5%] 472 448 (94.9%) 0 [ 0.0%] 447 – 0 [ 0.0%]\nko 642∗603∗(93.9%) 66∗[1 0.3%] 583 523 (89.7%) 0 [ 0.0%] 408 – 0 [ 0.0%]\nru 657 555 (84.5%) 96 [14.6%] 370 370 (100.0%) 0 [ 0.0%] 504 – 0 [ 0.0%]\ntl 437 428 (97.9%) 91 [20.8%] 344 344 (100.0%) 0 [ 0.0%] 371 – 36 [ 9.7%]\nzh 575 481 (83.6%) 354 [61.6%] 462 429 (92.9%) 0 [ 0.0%] 476 – 0 [ 0.0%]\n∗marked resul ts are obtained using the LOC+GOLDbw+PRI data quality control setting.\n235\nA.4. Evaluation Time\nThe evaluation period (gi\nven in days), the total grading time (given in hours, “(hh:mm:ss)”), and the average\ntime per HIT (given in seconds, “[mm:ss]”) of the trusted gradings obtained for each evaluation task.\nData Quality Control Mechanism\nEXPERT LOC+GOLDbwGOLDbNONE\nevaluation (grading [avg. time per evaluation (grading [avg. time per evaluation (grading [avg. time per evaluation (grading [avg. time per\nTRG period time) assignment] period time) assignment] period time) assignment] period time) assignment]\nen 6.9 days (06:41:09) [00:13] 4.8 days (04:30:13) [00:39] 0.9 days (03:24:45) [00:25] 0.4 days (01:12:32) [00:17]\nar – 4.7 days (06:29:32) [00:47] 0.7 days (02:48:34) [00:24] 0.1 days (00:45:50) [00:07]\nes – 2.2 days (06:06:55) [00:47] 0.3 days (01:49:34) [00:16] 0.1 days (00:48:22) [00:07]\nfr – 3.9 days (04:19:36) [00:40] 0.2 days (03:13:52) [00:29] 0.2 days (00:55:04) [00:14]\nhi – 1.2 days∗(03:27:34 )∗[00:35]∗0.2 days (02:44:42) [00:19] 0.1 days (00:52:55) [00:08]\nja 1.1 days (05:48:35) [01:07] 12.8 days (02:22:28) [00:27] 0.7 days (01:39:58) [00:14] 0.1 days (01:07:17) [00:10]\nko 7.1 days (11:29:41) [00:16] 88.9 days∗(04:45:05 )∗[00:41]∗3.1 days (01:10:46) [00:10] 0.1 days (01:07:52) [00:11]\nru – 17.0 days (06:48:44) [00:52] 0.1 days (01:55:05) [00:18] 0.2 days (01:12:47) [00:11]\ntl – 2.1 days (03:03:16) [00:26] 0.1 days (00:43:59) [00:07] 0.1 days (01:07:17) [00:10]\nzh 1.1 days (07:32:56) [01:26] 23.7 days (05:09:30) [00:43] 2.1 days (01:29:36) [00:13] 0.1 days (01:52:16) [00:16]\n∗marked results are obtained using the LOC+GOLDbw+PRI data quality control setting.\nA.5. 
Ranking Results( %better)\nThe subjective evaluation of translation quality of 4 MT engines trained on different training data sizes (160k, 80k, 20k, 10k).\nTheRanking scores were obtained as the average number of times that a system was judged better than any other system.\nData Quality Control Mechanism\nEXPERT LOC+GOLDbwGOLDbNONE\nTRG 160k 80k 20k 10k 160k 80k 20k 10k 160k 80k 20k 10k 160k 80k 20k 10k\nen 0.5245 0.4755 0.3272 0.1453 0.4766 0.3481 0.2343 0.1138 0.2853 0.2620 0.1673 0.0750 0.1605 0.1714 0.1020 0.06 80\nar – 0.4319 0.3038 0.1943 0.1497 0.1816 0.1135 0.0837 0.0723 0.0008 0.0009 0.0019 0.0081\nes – 0.4899 0.4062 0.2342 0.1176 0.1983 0.1474 0.0758 0.0620 0.0000 0.0000 0.0000 0.0000\nfr – 0.4823 0.4020 0.1652 0.0908 0.1929 0.1631 0.0879 0.10 350.04000.0326 0.0370 0.0370\nhi – 0.2837∗0.2068∗0.1094∗0.08 89∗0.1872 0.1587 0.0868 0.09 470.02010.0111 0.0191 0.0040\nja 0.5735 0.4803 0.2528 0.1027 0.4811 0.3695 0.1461 0.0755 0.2355 0.1639 0.1281 0.0675 0.0724 0.0678 0.0470 0.0165\nko 0.4690 0.3746 0.2625 0.1136 0.3809∗0.3185∗0.1740∗0.09 19∗0.0862 0.0689 0.0532 0.0517 0.0000 0.0000 0.0000 0.0000\nru – 0.3459 0.2957 0.1830 0.1078 0.2588 0.2390 0.1887 0.1613 0.0606 0.0552 0.0433 0.0400\ntl – 0.3914 0.2679 0.1428 0.1027 0.0679 0.0648 0.0340 0.03 400.00220.0011 0.0022 0.0044\nzh 0.5482 0.4313 0.3318 0.2133 0.6367 0.5128 0.4110 0.2811 0.1371 0.1331 0.1223 0.1035 0.0802 0.0552 0.0542 0.0427\n∗marked results are obtained using the LOC+GOLDbw+PRI data quality control setting.\nA.6. Ranking Agreement\nFleiss’ kappa correlation coefficient comparing the obtained crowd-based evaluation results to the oracle and\nexpert judgments for each translation task. The κscores are interpreted in (Landis and Koch, 1977) as follows:\nκ < 0 : “none” κ≤0.6 : “moderate”\nκ≤0.2 : “slight ”κ≤0.8 : “substantial ”\nκ≤0.4 : “fair ” κ≤1.0 : “almost perfect ”\nWorker vs. Oracle/Expert Agreement\nκ Data Quality Control Mechanism\nLOC+GOLDbwGOLDbNONE\nTRG oracle expert oracle expert oracle expert\nen 0.45 0.62 0.19 0.30 0.39 0.43\nar 0.22 – 0.09 – 0.11 –\nes 0.35 – 0.08 – 1.00 –\nfr 0.26 – 0.04 – 0.53 –\nhi 0.05∗– 0.00 – -0.02 –\nja 0.38 0.66 0.10 0.22 0.01 0.23\nko 0.56 0.50 0.79 0.14 -0.01 0.17\nru 0.32 – 0.08 – 0.15 –\ntl 0.21 – 0.04 – -0.02 –\nzh 0.62 0.56 0.07 0.09 0.17 0.20\n∗marked results are obtained using the LOC+GOLDbw+PRI data quality control setting.\n236\nReadability and Translatability Judgments for ‘Controlled Japanese ’ \nAnthony Hartley \nToyohashi University of Techno logy \[email protected] Midori Tatsumi \nToyohashi University of Techno logy \[email protected] \nHitoshi Isahara \nToyohashi U of Techno logy \[email protected] Kyo Kageura \nUniversity of Tokyo \[email protected] Rei Miyata \nUniversity of Tokyo \[email protected] \nAbstract \nWe report on an experiment to test the e f-\nficacy of ‘controlled language’ authoring \nof technical documents in Japanese, with \nrespect both to the readability of the Jap-\nanese source and the quality of the En g-\nlish machine-translated output. Using \nfour MT systems, we tested two sets of \nwriting rules designed for two document \ntypes written by authors with contrasting \nprofessional profiles. We elicited jud g-\nments from native speakers to establish \nthe positive or negative impact of each \nrule on readability and translation quality. 
\n1 Introduction\nIt is widely acknowledged that the typological ‘distance’ between Japanese and English (the most common European target language for MT from Japanese) hampers the achievement of high-quality translation. We seek to address this challenge by investigating the feasibility of developing a ‘controlled Japanese’ with explicit restrictions on vocabulary, syntax and style adequate for authoring technical documentation.\nOur starting point is sentences extracted from two types of document: consumer user manuals (UM) and company-internal documents articulating the know-how of key employees (KH). UM are produced by professional technical authors, while KH are written as ‘one-offs’ by the employees themselves, capturing their own know-how. Thus, there is a sharp difference in the effort the two groups of writers can be expected to invest and the linguistic knowledge they bring to a controlled authoring task.\n© 2012 European Association for Machine Translation.\nIn outline, our experiment entailed formulating a set of writing rules (‘authoring guidelines’) for each document type. Sentences violating the rules were extracted from the original data and rewritten (‘pre-edited’ in this experimental setting) in accordance with the respective rule. The original and rewritten sentences were then translated by different MT systems; finally, the inputs and outputs were submitted to human evaluation.\nSince the readers of the original Japanese and the readers of the translated English are equally important, we devised protocols to assess what we termed the ‘readability’ of the Japanese source sentences and their ‘translatability’ as gauged by the perceived quality of the English target sentences.\nIn interpreting the results, we try to identify the most promising avenues for further development.\n2 Controlled Language and MT\nThe general principles of controlled language (CL) and the challenges posed by its deployment are clearly summarised by (Kittredge, 2003; Nyberg et al., 2003). Evidence of the effectiveness of CL in cutting translation costs has been in the public domain for some 30 years, from (Pym, 1990) in the automotive domain to (Roturier, 2009) in the software domain.\nMore specific studies have been undertaken to identify those rules which have the greatest impact on the usability of MT output (e.g., O'Brien and Roturier, 2007).\nProceedings of the 16th EAMT Conference, 28-30 May 2012, Trento, Italy",
"main_paper_content": null
}
|
{
"decision": "Unknown",
"reviews": []
}
| 0 | 0 |
[] |
[] |
{
"id": "n8uX4fcCYKW",
"year": null,
"venue": "EAMT 2016",
"pdf_link": "https://aclanthology.org/W16-3416.pdf",
"forum_link": "https://openreview.net/forum?id=n8uX4fcCYKW",
"arxiv_id": null,
"doi": null
}
|
{
"title": "A Contextual Language Model to Improve Machine Translation of Pronouns by Re-ranking Translation Hypotheses",
"authors": [
"Ngoc-Quang Luong",
"Andrei Popescu-Belis"
],
"abstract": "Ngoc Quang Luong, Andrei Popescu-Belis. Proceedings of the 19th Annual Conference of the European Association for Machine Translation. 2016.",
"keywords": [],
"raw_extracted_content": "EAMT 2016, V ol. 4 (2020), No. 2, pp. 292–304\nA Contextual Language Model to\nImprove Machine Translation of Pronouns by\nRe-ranking Translation Hypotheses\nNgoc-Quang LUONG, Andrei POPESCU-BELIS\nIdiap Research Institute, CH-1920 Martigny, Switzerland\nfngoc-quang.luong, andrei.popescu-belis [email protected]\nAbstract. This paper addresses the translation divergencies of pronouns from English to French,\nspecifically itandthey, which have several gendered and non-gendered possible translations into\nFrench. Instead of using anaphora resolution, which is error-prone, we build a target language\nmodel that estimates the probabilities of a tuple of consecutive nouns followed by a pronoun. We\nbring evidence for the linguistic validity of the model, showing that the probability of observing\na pronoun with a given gender and number increases with the proportion of nouns with the same\ngender and number preceding it. We use this French language model to re-rank the translation\nhypotheses generated by a phrase-based statistical machine translation system. While none of\nthe pronoun-focused translation systems at the DiscoMT 2015 shared task improved over the\nbaseline, our proposal achieves a modest but statistically significant improvement over it.\nKeywords: statistical machine translation, pronoun translation, context modeling\n1 Introduction\nPronoun systems do not strictly map across languages, and therefore translation diver-\ngencies of pronouns must often be addressed in machine translation (MT). For instance,\ndepending on its function (referential or pleonastic) and on its actual referent, an oc-\ncurrence of the English itcould be translated into French by il,elle,ce/c’ orcela, to\nmention only the most frequent possibilities.\nWhile designers of MT systems have tried to address the problem since the early\nyears of MT, it is only in recent years that specific strategies for translating pronouns\nhave been proposed and evaluated (see Hardmeier, 2014, Section 2.3.1). However, in\nthe culmination of these recent efforts at the DiscoMT 2015 shared task on pronoun-\nfocused translation (Hardmeier et al., 2015), none of the submitted systems was able to\nbeat a well-trained phrase-based statistical MT baseline. A large proportion of previous\nstudies have attempted to convey information from anaphora resolution systems, albeit\nContextual Language Model for Pronouns 293\nimperfect, to statistical MT ones (Hardmeier and Federico, 2010; Le Nagard and Koehn,\n2010), or have advocated distinguishing first the functions of pronouns (Guillou, 2016).\nIn this paper, we present a simple yet effective approach to improve the translation\nof neuter English pronouns itandthey into French, which outperforms the DiscoMT\n2015 baseline by about 5% (relative improvement on an automatic metric). The method\nstems from the observation that the antecedent of a pronoun is likely to be one of the\nnoun phrases preceding it closely; therefore, if a majority of these nouns exhibit the\nsame gender and number, it is more likely that the correct French pronoun agrees in\ngender and number with them. This does not require any hypothesis on which of the\nnouns is the antecedent.\nIn what follows, we explain how to represent these intuitions in a formal probabilis-\ntic model that is instantiated from French data (Section 3), and we report on empirical\nobservations supporting the validity of our idea (Section 4). 
Then, we show how our\npronominal language model (PLM) is used to re-rank the hypotheses generated by a\nphrase-based statistical MT system (Section 5) and we analyze its results with respect\nto a baseline (Section 6). But first, we present the state of the art in pronoun translation\nand compare briefly our proposal with it.\n2 State of the art\nUsing rule-based or statistical methods for anaphora resolution, several studies have\nattempted to improve pronoun translation by integrating anaphora resolution with sta-\ntistical MT, as reviewed by Hardmeier (2014, Section 2.3.1). Le Nagard and Koehn\n(2010) trained an English-French translation model on an annotated corpus in which\neach occurrence of English pronouns itandthey was annotated with the gender of its\nantecedent in the target side, but this solution could not outperform a baseline that was\nnot aware of coreference links.\nIntegrating anaphora resolution with English-Czech statistical MT, Guillou (2012)\nstudied the role of imperfect coreference and alignment results. Hardmeier and Fed-\nerico (2010) integrated a word dependency model into an SMT decoder as an addi-\ntional feature function, which keeps track of pairs of source words acting as antecedent\nand anaphor in a coreference link. This model helped to improve slightly the English-\nGerman SMT performance (F-score customized for pronouns) on the WMT News\nCommentary 2008 and 2009 test sets.\nFollowing a similar strategy, Luong et al. (2015) linearly combined the score ob-\ntained from a coreference resolution system with the score from the search graph of\nthe Moses decoder, to determine whether an English-French SMT pronoun translation\nshould be post-edited into the opposite gender (e.g. il!elle). Their system performed\nbest among six participants on the pronoun-focused shared task at the 2015 DiscoMT\nworkshop (Hardmeier et al., 2015), but still remained below the SMT baseline.\nA considerable set of coreference features, used in a deep neural network architec-\nture, was presented by Hardmeier (2014, Chapters 7–9), who observed significant im-\nprovements on TED talks and News Commentaries. Alternatively, to avoid extracting\nfeatures from an anaphora resolution system, Callin et al. (2015) developed a classi-\nfier based on a feed-forward neural network, which considered mainly the preceding\n294 Luong and Popescu-Belis\nnouns, determiners and their part-of-speech as features. Their predictor worked partic-\nularly well (over 80% of F-score) on ceandilspronouns, and reached an overall macro\nF-score of 55.3% for all classes at DiscoMT 2015 pronoun prediction task, which aimed\nat restoring hidden pronouns from a given translation of a source text. However, at this\ntask, none of the participants could outperform a statistical baseline using a powerful\nlanguage model (Hardmeier et al., 2015). Therefore, the goal of this paper – although\nin the framework of pronoun-focused translation – is to extend such a language model\nwith anaphora-inspired information, and to demonstrate improvement over a purely n-\ngram-based baseline.\n3 Construction of a pronoun-aware language model\n3.1 Overall idea of the model\nThe key intuition behind our proposal is that additional, probabilistic constrains on\ntarget pronouns can be obtained by examining the gender and number of the nouns\npreceding them, without any attempt to perform anaphora resolution, which is error-\nprone. 
For instance, considering the EN/FR translation divergency “ it!il/elle/:::”,\nthe higher the number of French masculine nouns preceding the pronoun, the higher\nthe probability that the correct translation is il(masculine).\nOf course, such an intuition, if used unconditionally, might be even more error-\nprone than post-editing based on anaphora resolution. Therefore, to make it operational,\nwe propose two key solutions:\n1. We estimate from parallel data the probabilistic connection between the target-side\ndistribution of gender and number features among the nouns preceding a pronoun\nand the actual translation of this pronoun into French (focusing on translations of it\nandthey which exhibit strong EN/FR divergencies).\n2. We use the above information in a probabilistic way by re-ranking the translation\nhypotheses made by a standard phrase-based SMT system, so that this informa-\ntion comes into play only when the constraints from the baseline system cannot\ndiscriminate significantly before several translation options for a pronoun.\nThe two solutions above are implemented as a pronoun-aware language model\n(PLM), which is trained as explained in the next subsection, and is then used for re-\nranking translation hypotheses as explained in Section 5.\n3.2 Learning the PLM\nThe data used for training the PLM is the target side (French) of the WIT3parallel\ncorpus (Cettolo et al., 2012) distributed by the IWSLT workshops. This corpus is made\nof transcripts of TED talks, i.e. lectures that typically last 18 minutes, on various topics\nfrom science and the humanities with high relevance to society. The TED talks are given\nin English, then transcribed and translated by volunteers and TED editors. The French\nside contains 179,404 sentences, with a total of 3,880,369 words. We will later use the\nparallel version, with the same number of sentence pairs, to train our baseline SMT\nsystem in Section 5 below.\nContextual Language Model for Pronouns 295\nTo obtain the morphological tag of each word, specifically the gender and number\nof every noun and pronoun, we employ a French part-of-speech (POS) tagger, Morfette\n(Chrupala et al., 2008).\nWe process the data sequentially, word by word, from the beginning to the end. We\nkeep track of the gender and number of the Nmost recent nouns and pronouns in a\nlist, which is initialized as empty and is then updated when a new noun or pronoun is\nencountered. In these experiments, we set N= 5, i.e. we will examine up to four nouns\nor pronouns before a pronoun. This value is based on the intuition that the antecedent\nseldom occurs too far before the anaphor. When a French pronoun is encountered, the\nsequence formed by the gender/number features of the Nprevious nouns or pronouns,\nacquired from the above list, and the pronoun itself is appended to a data file which\nwill be used to train the PLM. If the lexical item can have multiple lexical functions,\nincluding pronoun – e.g. leorlacan be object pronouns or determiners – then their POS\nassigned by Morfette is used to filter out the non-pronoun occurrences. 
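As a rough, illustrative sketch of the training-data extraction just described (our own simplification, not the authors' code; the token format, the tag values and the reduced pronoun list are assumptions), one pass over a tagged French text could look like this:

```python
# Sketch: emit PLM training sequences ("gender.number. ... pronoun") from tagged French text.
# Token fields, tag values and the reduced pronoun list below are illustrative assumptions.
from collections import deque

TRACKED = {"il", "ils", "elle", "elles", "le", "la", "lui", "l'", "on", "ce", "ça", "c'"}

def plm_training_sequences(tokens, window=4):
    """tokens: iterable of (lemma, pos, gender, number); yields one sequence per pronoun."""
    features = deque(maxlen=window)          # gender/number of the last `window` (pro)nouns
    for lemma, pos, gender, number in tokens:
        if pos == "pronoun" and lemma in TRACKED:
            yield list(features) + [lemma]   # e.g. ['fem.sing.', 'elle']
        if pos in ("noun", "pronoun") and gender and number:
            features.append(f"{gender}.{number}.")

# Tiny hand-tagged example (hypothetical annotation):
toks = [("le", "det", None, None),
        ("commission", "noun", "fem", "sing"),
        ("elle", "pronoun", "fem", "sing")]
for seq in plm_training_sequences(toks):
    print(" ".join(seq))                     # -> fem.sing. elle
```

Each emitted sequence would then be written out as one "sentence" for the SRILM training step described next.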
We only process\nthe French pronouns that are potential translations of the English itandthey, namely the\nfollowing list: il, ils, elle, elles, le, la, lui, l’, on, ce, c ¸a, c’, c ¸, ceci, cel `a, celui, celui-ci,\ncelui-l `a, celle, celle-ci, celle-l `a, ceux, ceux-ci, ceux-l `a, celles, celles-ci, celles-l `a.\nIn the next step, we apply the SRILM language modeling toolkit (Stolcke, 2002),\nwith modified Kneser-Ney smoothing, to build a 5-gram language model over the train-\ning dataset collected above, which includes 179,058 of the aforementioned sequences.\nThe sequences are given to SRILM as separate “sentences”, i.e. two consecutive se-\nquences are never joined and are considered independently of each other. The pronouns\nare always ending a sequence in the training data, but not necessarily in the n-grams\ngenerated by SRILM (exemplified in Figure 1), which include n-grams that do not end\nwith a pronoun (e.g. the fifth and the sixth ones in the figure). These will be needed for\nback-off search and are kept in the model used below.\n-2.324736 masc.sing. masc.plur. elle\n-1.543632 fem.sing. fem.plur. fem.sing. elle\n-0.890777 masc.sing. masc.sing. masc.sing. masc.sing. il\n-1.001423 masc.sing. masc.plur. masc.plur. masc.plur. ils\n-1.459787 masc.plur. masc.plur. masc.plur.\n-1.398654 masc.sing. masc.plur. masc.sing. masc.sing.\nFig. 1. Examples of PLM n-grams, starting with their log-probabilities, learned by SRILM.\n4 Empirical validation of the PLM\nWe investigate in this section, using the observations collected in the PLM, the influ-\nence of the (pro)nouns preceding a pronoun on the translation of itorthey into French.\nThe goal is to test the intuition that a larger number of (pro)nouns of a given gender\nand number increases the probability of a translation of itwith the same gender and\nnumber. We consider also the ‘number’ parameter because it is possible, under some\n296 Luong and Popescu-Belis\n-3,5-3-2,5-2-1,5-1-0,50il ce ils elle elles\n1 2 3 4\n-3,5-3-2,5-2-1,5-1-0,50il ce ils elle elles\n1 2 3 4\n(a) masculine singular nouns (b) feminine singular nouns\nFig. 2. Log-probabilities to observe a given pronoun depending on the number of (pro)nouns of\na given gender/number preceding it, either masculine singular in (a) or feminine singular in (b).\nIn (a), the probability of ilincreases with the number of masculine singular (pro)nouns preceding\nit (four bars under il, 1 to 4 (pro)nouns from left to right), while the probabilities of all other\npronouns decrease with this number. A similar result for ellewith respect to the other pronouns\nis observed in (b), depending on the number of feminine singular (pro)nouns preceding elle.\ncircumstances, that it, although singular, is translated into a plural (e.g. if it co-refers\nwith a word such as “ the funeral ”, in French “ les fun ´erailles ”), or conversely that they\nis translated into a singular (e.g. if it co-refers with a word such as “ the police ” or\nrepresents a gender-neutral singular referent).\nWe inspect the learned PLM and observe how the log-probability, e.g., of French\nmasculine singular ilvaries with the number of masculine singular (pro)nouns preced-\ning it, as represented in Figure 2(a), first four bars. To do that, we compute the average\nlog-probability over all PLM n-grams containing exactly ntime(s) (nfrom 1 to 4 for\nthe bars from left to right) a masculine singular noun and finishing with il. 
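The averaging behind Figure 2 and Table 1 is straightforward to reproduce once the PLM n-grams are available; the following sketch (our illustration, with invented n-gram entries) groups n-grams ending in a given pronoun by how many of the preceding feature tokens carry a chosen gender/number and averages their log-probabilities:

```python
# Sketch: average log-probabilities of PLM n-grams ending in `pronoun`, grouped by the
# number of preceding tokens equal to `feature` (1..4). The entries below are invented.
from collections import defaultdict

def average_by_feature_count(ngrams, pronoun, feature):
    buckets = defaultdict(list)
    for logprob, toks in ngrams:                 # toks: feature tokens, pronoun last
        if toks[-1] == pronoun:
            n = toks[:-1].count(feature)
            if 1 <= n <= 4:
                buckets[n].append(logprob)
    return {n: sum(v) / len(v) for n, v in sorted(buckets.items())}

example = [(-0.89, ["masc.sing.", "masc.sing.", "masc.sing.", "masc.sing.", "il"]),
           (-1.17, ["masc.sing.", "fem.plur.", "il"])]
print(average_by_feature_count(example, "il", "masc.sing."))
# {1: -1.17, 4: -0.89}: more masc.sing. tokens before "il" -> higher average log-probability
```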
The same\noperation can be done for other pronouns, such as ce,ils,elleorelles, as represented in\nthe subsequent groups of bars in Figure 2(a), which all show the evolution of the prob-\nability to observe the respective pronoun after 1 or 2 or 3 or 4 masculine singular nouns\n(bars from left to right for each pronoun). The main result supporting our model is that\nthis log-probability increases for ilwith the number of masculine singular (pro)nouns\npreceding it, and decreases for all the other pronouns, except for the neutral ce, for\nwhich it remains constant.\nSimilar observations can be made for the log-probability to observe one of the five\npronouns listed above after 1 or 2 or 3 or 4 feminine singular nouns, as shown in Fig-\nure 2(b). Again, our proposal is supported by the fact that this probability increases for\nelleand decreases for all other pronouns.\nFor completeness, we provide in Table 1 the log-probabilities for four combinations\nof features (fmasculine, feminine g\u0002f singular, pluralg) and the twelve most frequent\nFrench pronouns which are translations of itandthey. These numbers allow a more\nprecise view than the bar charts shown above, and confirm the variations of the proba-\nbilities observed above, as synthesized in the last columns: we indicate with \"a strictly\nincreasing series of four log-probabilities, and with #a decreasing one. For instance,\nthe average log-probability of elleis quite low (\u00001:839) when it has only one feminine\nContextual Language Model for Pronouns 297\nN. of preceding nouns\nPronoun 1 2 3 4Var.\nmasculine, singular\nil -1.166 -1.048 -0.962 -0.891\"\nelle -1.875 -1.941 -1.942 -1.943#\nils -1.353 -1.445 -1.588 -1.768#\nelles -1.898 -2.081 -2.390 -2.957#\nce -1.070 -1.056 -1.039 -1.037\"\nc’ -1.165 -1.100 -1.066 -1.058\"\non -1.376 -1.318 -1.264 -1.272\u0000\nc ¸a -1.628 -1.552 -1.464 -1.462\"\nle -2.069 -1.970 -1.820 -1.682\"\nla -2.681 -2.749 -2.743 -2.730\u0000\nlui -2.658 -2.538 -2.311 -2.025\"\nl’ -2.147 -2.045 -1.908 -1.753\"\nfeminine, singular\nil -1.161 -1.233 -1.328 -1.440#\nelle -1.839 -1.465 -1.168 -0.980\"\nils -1.347 -1.421 -1.538 -1.700#\nelles -1.887 -2.083 -2.174 -2.552#\nce -1.084 -1.074 -1.065 -1.050\"\nc’ -1.167 -1.119 -1.054 -1.036\"\non -1.409 -1.398 -1.370 -1.431\u0000\nc ¸a -1.677 -1.694 -1.662 -1.746\u0000\nle -2.052 -2.175 -2.238 -2.234\u0000\nla -2.615 -2.402 -2.391 -2.274\"\nlui -2.602 -2.614 -2.550 -2.480\u0000\nl’ -2.141 -2.098 -2.104 -1.944\u0000N. of preceding nouns\nPronoun 1 2 3 4Var.\nmasculine, plural\nil -1.162 -1.196 -1.227 -1.244#\nelle -1.871 -2.046 -2.319 -2.744#\nils -1.309 -1.135 -1.000 -0.883\"\nelles -1.920 -2.020 -2.033 -2.197#\nce -1.072 -1.041 -1.036 -1.044\u0000\nc’ -1.183 -1.190 -1.189 -1.291\u0000\non -1.411 -1.460 -1.492 -1.383\u0000\nc ¸a -1.665 -1.657 -1.568 -1.567\u0000\nle -2.038 -1.893 -1.750 -1.752\u0000\nla -2.604 -2.626 -2.805 -2.937#\nlui -2.663 -2.689 -2.863 -3.296\u0000\nl’ -2.110 -2.083 -2.060 -2.135\u0000\nfeminine, plural\nil -1.160 -1.204 -1.365 -1.441#\nelle -1.914 -2.101 -2.169 N.A.\u0000\nils -1.319 -1.350 -1.550 -1.599#\nelles -1.759 -1.340 -1.059 -0.817\"\nce -1.078 -1.076 -1.139 -1.441\u0000\nc’ -1.169 -1.228 -1.240 -1.379#\non -1.395 -1.401 -1.473 -1.277\u0000\nc ¸a -1.668 -1.742 -1.916 -2.290#\nle -2.095 -2.172 -2.190 N.A.\u0000\nla -2.759 -2.763 N.A. N.A.\u0000\nlui -2.683 -2.810 N.A. N.A.\u0000\nl’ -2.210 -2.344 -2.160 N.A.\u0000\nTable 1. 
The fluctuation of average log-probability of n-grams as the number of a occurrences\nof a specific gender/number value increases, computed over 12 frequent French pronouns. The\nlast column (Observations) indicates the overall trend: \"for monotonic increase, #for monotonic\ndecrease, and\u0000for undecided. ‘N.A.’ means that no instance is found.\nsingular (pro)noun among the four (pro)nouns preceding it, but increases to \u00001:465and\nthen\u00001:168as two then three of these words are feminine singular, and finally reaches\na high value of\u00000:980when all of the four nouns preceding it are feminine singular.\nOverall, for most third-person pronouns ( il,elle,ils,elles,le,la) the average log-\nprobability of the pronoun gradually increases when more and more nouns (or pro-\nnouns) of the same gender and number are found before it. By contrast, the log-proba-\nbility decreases with the presence of more words of a different gender and number. For\ninstance, for masculine plural ils, its log-probability drops as it is preceded by more and\nmore masculine singular words.\nHowever, such tendencies are not observed for the neuter indefinite pronoun on, the\nvowel-preceding object pronoun l’, or the indirect object pronoun lui, for a good reason:\nthese pronouns can have antecedents of both genders (and sometimes, both numbers),\nand are expected to be independent from the investigated factor. Among the neuter\n298 Luong and Popescu-Belis\nimpersonal pronouns ( c’,ce, and c ¸a), we observe that the log-probabilities of c’and\nceincrease with the number of masculine or feminine singular nouns, and similarly for\nc ¸awith masculine singular nouns.\nAnother important observation, which holds for all four possible combinations of\ngender and number values, is that the log-probability of the n-gram containing four\nnouns of the same gender and number as the pronoun (e.g. four masculine singular\nnouns followed by il) is always higher than those containing a different pronoun (e.g.\nfour masculine singular nouns followed by elleorelles orils. In Figure 2(a)), for exam-\nple, if all four preceding words are masculine singular, then the most likely pronoun is\nil(\u00000:891). Moreover, among the remaining pronouns, the PLM prioritizes the neuter\nones (e.g. ce,c’, or ca) over those of the opposite gender or number. This is indeed\nbeneficial for pronoun selection by re-ranking hypotheses from an SMT decoder, since\nit is preferable to reward neutral or pleonastic pronouns rather than rewarding a pronoun\nwith a gender and number which is not shared with any of the four nouns preceding it.\n5 Re-ranking translation hypotheses with the PLM\nThe Moses statistical MT system (Koehn et al., 2007) used in this study outputs on\ndemand a list of N-best translation hypotheses, for every source sentence, together with\ntheir score. In production mode, only the 1-best hypothesis is output as the translation\nof the source. However, in this study, we will consider several translation hypotheses\nfor the source sentences containing the pronouns itorthey, and re-rank them based on\nadditional information from the pronoun language model presented above. As a result,\nthe 1-best hypothesis may change, and we will demonstrate in Section 6 that pronoun\ntranslation is on average improved.\nFor every source sentence containing at least one occurrence of itorthey we re-\nrank the SMT hypotheses through the following steps. 
In the implementation, we will consider the 1000-best hypotheses for each source sentence.\n1. Determine the gender and number of the four preceding nouns or pronouns, by examining the current sentence but possibly also the previous ones from the same document (TED lecture).\n2. Shorten the N-best list, to avoid considering multiple translation hypotheses that have the same pronouns, as the PLM cannot change their ranking with respect to each other. Therefore, in the N-best list, we retain only the highest-ranked hypothesis among all those that have identical translated values of the source pronouns it and they. E.g., if the source sentence contains only one pronoun, we keep only the highest-ranked translation for each of the different translation possibilities that occur in the N-best list. If the source sentence contains several pronouns, we consider the tuples of translation possibilities instead of a single value. If the N-best list contains no variations in the translation of pronouns, then no re-ranking is attempted. This step thus increases the efficiency of our method, without changing its results.\n3. Format the shortened list of hypotheses so that they can be scored by the PLM. We add before all the target pronouns, translations of it or they determined from the alignment provided by Moses, the gender and number features of the four preceding nouns or pronouns. We illustrate this step in Figure 3, where the four nouns preceding the (wrong) translation of it are all feminine singular. Moreover, the '*' on il-PRN* indicates that the target pronoun il agrees in number with the source one – a feature that will be used below.\n4. Obtain the PLM score for each pronoun of each translation hypothesis. We invoke the “ngram -debug 2” command of the SRILM toolkit with the PLM to generate the scores of all possible n-grams of each hypothesis, and we select among them those ending by the pronoun(s) appearing in the hypothesis. As SRILM only outputs the maximal n-gram ending with each word, we only obtain one score per pronoun, either from a PLM 5-gram ending with a pronoun, or from a shorter one. The score is noted S_PLM(pronoun).\n5. Compute a new score for each formatted hypothesis from the shortened list. The new score of each hypothesis, noted S'(sentence), is the weighted sum of the score obtained from the Moses decoder, S_DEC(sentence), and of the PLM scores of its pronouns, weighted by a factor α = 5. Moreover, we reward the PLM scores of the pronouns which have the same number as the source pronoun (marked with a '*' as shown in Fig. 3) by a factor β = 5 (these values of α and β could be optimized in the future on a new data set). Therefore, the new score of each hypothesis s depending on its pronouns p ∈ s is given by:\nS'(s) = S_{DEC}(s) + \alpha \cdot \Bigl( \sum_{p \in s \mid \text{diff. nb.}} S_{PLM}(p) + \beta \cdot \sum_{p \in s \mid \text{same nb.}} S_{PLM}(p) \Bigr)\n6. Finally, the hypothesis with the highest S' score is selected as the new best translation of the sentence. Moreover, its pronoun(s) are also used to update the list of gender/number features of (pro)nouns used for scoring subsequent pronouns with the PLM.\nSRC−1: The house of my mother in law was damaged by a heavy storm.\nSRC : When my wife came, it had lost its roof.\nHYP−1: La maison de ma belle-mère a été endommagée par une violente tempête.\nHYP : Lorsque ma femme est venue, il-PRN* avait perdu son toit .\nNP : fem.sing. fem.sing. fem.sing. 
fem.sing.\nF-HYP : Lorsque ma femme est venue, fem.sing. fem.sing. fem.sing. fem.sing. il-PRN*\navait perdu son toit .\nFig. 3. Example of formatting of a translation hypothesis: we add the gender and number of the\nfour nouns preceding the pronoun il, which is tagged as PRN by Morfette (wrong translation\nof the source itinstead of elle). ‘SRC\u00001’ and ‘HYP\u00001’ denote the source and target sentences\nbefore the one being processed, and ‘F-HYP’ denotes the formatted sentence.\n300 Luong and Popescu-Belis\n6 Experiments\n6.1 Settings and evaluation metrics\nWe trained the Moses phrase-based SMT system (Koehn et al., 2007) on the following\nparallel and monolingual datasets: aligned TED talks from the WIT3corpus (Cettolo\net al., 2012), Europarl v. 7 (Koehn, 2005), News Commentary v. 9 and other news data\nfrom WMT 2007–2013 (Bojar et al., 2014). The system was tuned on a development set\nof 887 sentences from IWSLT 2010 provided for the shared task on pronoun translation\nof the DiscoMT 2015 workshop (Hardmeier et al., 2015). Our test set was also the\none of the DiscoMT 2015 shared task, with 2,093 English sentences extracted from\n12 recent TED talks (French gold-standard translations were made available after the\ntask). The test set contains 809 occurrences of itand 307 of they, hence a total of 1,116\npronouns.\nWe compare two systems: (1) the Moses phrase-based SMT system trained as above,\nnoted ‘BL’ (baseline); and (2) the system which re-ranks the N-best list generated by\nBL using the PLM, as described in the previous section, noted ‘RR’.\nTheir performances are computed automatically in terms of the number of pronouns\nwhich are identical between a system and the reference translation. We use four scores\nnotedC1throughC4, inspired from the metric for Accuracy of Connective Translation\n(Hajlaoui and Popescu-Belis, 2013). C1is the number of candidate pronouns which\ncorrespond identically to the ones in the reference translation, while C2is the number\nof “similar” pronouns in the reference and the candidate. “Similarity” accounts for the\nvariants of ceandc ¸a, with or without apostrophe, and for the two different apostrophe\ncharacters, resulting in two equivalence classes only: fce, c', c’, cgandfc ¸a, ca, c ¸', c ¸’,\ncg. TheC3score is the number of candidate pronouns which differ from the reference,\nwhileC4is the number of source pronouns left untranslated in the candidate translation.\nOverall, we will compare C1andC1+C2between the BL and RR systems, as well as\naccuracy, namely C1+C2divided by the total number of pronouns (1,116).\nThese scores rely only on the comparison of the system’s pronouns (candidates)\nwith the ones in the reference translation. Although such a metric is only an imperfect\nreflection of translation correctness, it is likely that increasing the first two scores ( C1\nandC2) indicates an improved quality. In theory, the target pronoun does not need to be\nidentical to the reference one to be correct: it must only point to the same antecedent.\nTherefore, some variation would be acceptable to a human evaluator, but not to our\nmetrics, which yield lower scores.\n6.2 Results\nThe upper part of Table 2 displays the scores of the BL and RR systems in terms of\npronoun metrics. The results demonstrate that RR outperforms BL on both exact trans-\nlations (C1) or acceptable translations ( C1+C2), with improvements of 21 and, re-\nspectively, 22 occurrences. 
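As a quick sanity check (our own illustration, not part of the paper's tooling), the accuracy figures reported here can be recomputed directly from the C1..C4 counts of Table 2:

```python
# Recompute the accuracy column of Table 2 from the C1..C4 counts given in the paper.
TOTAL = 1116                      # occurrences of "it" and "they" in the test set
systems = {"BL": (395, 78, 551, 92), "RR": (416, 79, 560, 61)}
for name, (c1, c2, c3, c4) in systems.items():
    assert c1 + c2 + c3 + c4 == TOTAL
    print(name, round((c1 + c2) / TOTAL, 3))   # BL 0.424, RR 0.444
```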
Besides, although RR generates more translations that are\ndifferent from the reference than BL ( C3of 560 vs. 551), this is balanced by the fact\nthat RR leaves fewer untranslated source pronouns ( C4of 61 vs. 92). The accuracy of\nRR is 2% (absolute) or 5% (relative) higher than that of BL.\nContextual Language Model for Pronouns 301\nIn addition, to understand more deeply about the method’s performance, we also\ncomputeC1::C4scores of all submitted systems at DiscoMT 2015 pronoun-focused\ntranslation task (Hardmeier et al., 2015) and show in the lower part of Table 2. Com-\npared with these systems, RR is still the best-performing one, whose accuracy is 2.07%\n(absolute) higher than that of the best system of DiscoMT 2015 (BASELINE).\nSystem C1 C2 C3 C4 C1+C2 Accuracy\nBL 395 78 551 92 473 .424\nRR 416 79 560 61 495 .444\nComparison to DiscoMT 2015 submitted systems\nBASELINE 400 66 522 128 466 .417\nUU-TIEDEMANN 388 69 491 168 457 .409\nIDIAP 392 70 516 138 462 .414\nUU-HARDMEIER 362 80 573 101 442 .396\nAUTO-POSTEDIT 297 102 620 97 399 .358\nITS2 9 10 1056 41 19 .017\nTable 2. Performances of BL, RR and all submitted systems at DiscoMT 2015 pronoun-focused\nshared task in terms of C1::C4scores and accuracy ( (C1+C2)=Total ). RR outperforms the\nremaining systems on both C1andC1+C2scores.\nAs for BLEU scores, which measure the overall quality and are not expected to be\nsensitive enough to the improvement of a small proportion of words, the baseline system\nreaches 37.80 BLEU points, while the re-ranked translations reach a marginally higher\nvalue of 37.96. These numbers show that the improvement of pronoun translation by re-\nranking is not done at the expense of the overall quality, and might even be marginally\nbeneficial to it.\nTo verify the significance of the improvement on pronouns, we perform a McNemar\ntest comparing the scores of BL and RR for each pronoun, either in terms of identity\nto the reference (criterion C1) or of similarity to the reference (criterion C1+C2). The\np-values of the two comparisons are respectively 0.0294 and 0.0218, showing that RR\nis significantly better than BL with 95% confidence. Given that at the DiscoMT 2015\nshared task none of the systems was able to outperform the baseline (which was the\nsame as the BL system presented here), we believe that this is a promising result that\nimproves over the state of the art.\nTo understand in more detail the effect of our method on specific pronouns, we\nanalyze per pronoun type the cases where the translations proposed by RR differ from\nthose of BL. An ‘improvement’ means that the translation of RR is in the C1orC2case\n(i.e. identical or similar to the reference) and that of BLis not, while a ‘degradation’\nmeans the contrary. Overall, there are 92 pronouns (out of 1,116) changed between BL\nand RR, amounting to 57 improvements and 35 degradations.\nTable 3 shows that most modifications are made on the third person singular subject\npronouns: 23 on iland 24 on elle. Among them, the improvements brought by RR\nsurpass the degradations: +5 on iland +8 on elle. Similarly, third person plural subject\npronouns are improved (+2 in both cases), although they are less affected (14 changes\n302 Luong and Popescu-Belis\nonilsand 4 on elles). RR produces quite often the neuter pronouns c’(7 times), c ¸a(12\ntimes) and ce(2 times), which is likely due to their rather high PLM score, regardless\nof the preceding gender and number features. However, only the occurrences of c’are\nclearly improved (+5). 
In contrast, the object pronouns are practically untouched by RR\n(only +1 on le), which is related to the rather weak influence observed in the PLM of\nthe preceding gender and number on object pronouns.\nPronoun Improved Degraded \u0001\nil 14 9 5\nelle 16 8 8\nils 8 6 2\nelles 3 1 2\nce 1 1 0\nc’ 6 1 5\non 2 2 0Pronoun Improved Degraded \u0001\nc ¸a 6 6 0\nle 1 0 1\nla 0 0 0\nlui 0 0 0\nl’ 0 0 0\ny 0 1 -1\nTotal 57 35 22\nTable 3. Performance of the re-ranking system (RR) on specific pronoun translations, in terms\nof improved vs. degraded pronouns with respect to the baseline (BL). The difference for each\npronoun type, noted \u0001, is always positive, except for the single occurrence of ‘y’.\nWe illustrate a contribution of RR vs. BL in Figure 4. BL wrongly translates itinto\nilin the 1-best hypothesis, and the translation into elleappears in the hypotheses ranked\nlower. However, this pronoun is preceded by a majority of feminine singular nouns\nin the French translation of BL (namely commission ,urgence , and contre-r ´evolution ,\nwhile only sabotage is masculine). The PLM log-probability of the 5-gram formed by\nelleand the gender/number of the four preceding nouns is higher than that of the same\nn-gram ending with il:\u00001.0185 vs.\u00001.1871. As a result, RR succeeds in promoting\nthe translation with elleas the new 1-best translation.\nSRC\u00001 : in 1917 , the russian communists founded the emergency commission for com-\nbating counter-revolution and sabotage .\nSRC : it was led by felix dzerzhinsky .\nHYP\u00001 : en 1917 , les communistes russes ont cr ´e´e la commission d’ urgence pour com-\nbattre la contre-r ´evolution et sabotage .\nHYP/BL: ila´et´e entra ˆın´e par felix dzerzhinsky .\nHYP/RR: ellea´et´e emmen ´ee par felix dzerzhinsky .\nREF : elle´etait dirig ´ee par f ´elix dzerjinski .\nFig. 4. Example of translation improved by RR, thanks to a majority of feminine nouns.\nContextual Language Model for Pronouns 303\n7 Conclusion\nIn this paper, we presented a method to improve the machine translation of pronouns,\nwhich relies on learning a pronoun-aware language model (PLM). The PLM encodes\nthe likelihood of generating a target pronoun given the gender and number of the nouns\nor pronouns preceding it. For every source sentence of the test set containing itorthey,\nthe method re-ranks the translation hypotheses produced by a phrase-based SMT base-\nline, combining the decoder scores and the PLM scores of the pronoun and preceding\nnouns or pronouns.\nOur re-ranking method outperforms the DiscoMT 2015 baseline by 5% relative im-\nprovement, while none of the systems participating in that shared task could outperform\nit. The method performs particularly well on all third person singular subject pronouns,\nbut also on the neuter impersonal or pleonastic pronouns, despite the fact that they are\nmore independent from the gender and nouns of preceding words than the subject ones.\nIn the near future, the performance of the PLM will be tested at the shared task on\npronoun prediction at the First Conference on Machine Translation (WMT 2016).\nWe will attempt to increase the accuracy of our model by training it on more data\nsets, increasing the order of n-grams ( N) and optimizing the \u000band\fparameters on a\ndevelopment set. Besides, we will attempt to put more weight on n-grams where the\npreceding (pro)nouns of the same gender and number with the given pronoun are closer\nto it. 
Longer-term future work will focus on integrating the proposed PLM into the\ndecoder’s log-linear function, although extracting gender-number n-grams at decoding\ntime is non-trivial. Furthermore, it would be interesting to model the cases when the\ngender and number of preceding nouns are not the same, because in these cases, we be-\nlieve that using solely the PLM scores is inadequate. Using information from anaphora\nresolution, or at least from features that are relevant anaphora resolution, should help\naddress these cases.\nAcknowledgments\nWe are grateful for their support to the Swiss National Science Foundation (SNSF)\nunder the Sinergia MODERN project (www.idiap.ch/project/modern/, grant n. 147653)\nand to the European Union under the Horizon 2020 SUMMA project (www.summa-\nproject.eu, grant n. 688139).\nReferences\nBojar, O., Buck, C., Federmann, C., Haddow, B., Koehn, P., Leveling, J., Monz, C.,\nPecina, P., Post, M., Saint-Amand, H., Soricut, R., Specia, L., Tamchyna, A., 2014.\nFindings of the 2014 Workshop on Statistical Machine Translation. In: Proceedings\nof the Ninth Workshop on Statistical Machine Translation. Baltimore, MD, USA, pp.\n12–58.\nCallin, J., Hardmeier, C., Tiedemann, J., 2015. Part-of-speech driven cross-lingual pro-\nnoun prediction with feed-forward neural networks. In: Proceedings of the Second\nWorkshop on Discourse in Machine Translation (DiscoMT). Lisbon, Portugal, pp.\n59–64.\n304 Luong and Popescu-Belis\nCettolo, M., Girardi, C., Federico, M., 2012. WIT3: Web inventory of transcribed and\ntranslated talks. In: Proceedings of the 16th Conference of the European Association\nfor Machine Translation (EAMT). Trento, Italy, pp. 261–268.\nChrupala, G., Dinu, G., van Genabith, J., 2008. Learning morphology with Morfette. In:\nProceedings of the 6th International Conference on Language Resources and Evalu-\nation (LREC). Marrakech, Morocco.\nGuillou, L., 2012. Improving pronoun translation for statistical machine translation. In:\nProceedings of EACL 2012 Student Research Workshop (13th Conference of the\nEuropean Chapter of the ACL). Avignon, France, pp. 1–10.\nGuillou, L., 2016. Incorporating pronoun function into statistical machine translation.\nPhD thesis, University of Edinburgh, UK.\nHajlaoui, N., Popescu-Belis, A., 2013. Assessing the accuracy of discourse connective\ntranslations: Validation of an automatic metric. In: Proceedings of the 14th Inter-\nnational Conference on Intelligent Text Processing and Computational Linguistics\n(CICLING). Samos, Greece.\nHardmeier, C., 2014. Discourse in statistical machine translation. PhD thesis, Uppsala\nUniversity, Sweden.\nHardmeier, C., Federico, M., 2010. Modelling pronominal anaphora in statistical ma-\nchine translation. In: Proceedings of International Workshop on Spoken Language\nTranslation (IWSLT). Paris, France.\nHardmeier, C., Nakov, P., Stymne, S., Tiedemann, J., Versley, Y ., Cettolo, M., 2015.\nPronoun-focused MT and cross-lingual pronoun prediction: Findings of the 2015\nDiscoMT shared task on pronoun translation. In: Proceedings of the Second Work-\nshop on Discourse in Machine Translation (DiscoMT). Lisbon, Portugal, pp. 1–16.\nKoehn, P., 2005. Europarl: A parallel corpus for statistical machine translation. In: Pro-\nceedings of the 10th Machine Translation Summit. Phuket, Thailand, pp. 79–86.\nKoehn, P., Hoang, H., Birch, A., Callison-Burch, C., Federico, M., Bertoldi, N., Cowan,\nB., Shen, W., Moran, C., Zens, R., Dyer, C., Bojar, O., Constantin, A., Herbst, E.,\n2007. 
Moses: Open source toolkit for statistical machine translation. In: Proceedings\nof the 45th Annual Meeting of the Association for Computational Linguistics (ACL).\nPrague, Czech Republic, pp. 177–180.\nLe Nagard, R., Koehn, P., 2010. Aiding pronoun translation with co-reference resolu-\ntion. In: Proceedings of the Joint 5th Workshop on Statistical Machine Translation\nand Metrics (MATR). Uppsala, Sweden, pp. 258–267.\nLuong, N. Q., Miculicich Werlen, L., Popescu-Belis, A., 2015. Pronoun translation and\nprediction with or without coreference links. In: Proceedings of the Second Work-\nshop on Discourse in Machine Translation (DiscoMT). Lisbon, Portugal, pp. 94–100.\nStolcke, A., 2002. SRILM – an extensible language modeling toolkit. In: Proceedings of\nthe 7th International Conference on Spoken Language Processing (ICSLP). Denver,\nCO, USA, pp. 901–904.\nReceived May 2, 2016 , accepted May 15, 2016",
"main_paper_content": null
}
|
{
"decision": "Unknown",
"reviews": []
}
| 0 | 0 |
[] |
[] |
{
"id": "zrj_PqeTMZ",
"year": null,
"venue": "EAMT 2011",
"pdf_link": "https://aclanthology.org/2011.eamt-1.21.pdf",
"forum_link": "https://openreview.net/forum?id=zrj_PqeTMZ",
"arxiv_id": null,
"doi": null
}
|
{
"title": "Cognate Identification for a French - Romanian Lexical Alignment System: Empirical Study",
"authors": [
"Mirabela Navlea",
"Amalia Todirascu"
],
"abstract": "Mirabela Navlea, Amalia Todiraşcu. Proceedings of the 15th Annual conference of the European Association for Machine Translation. 2011.",
"keywords": [],
"raw_extracted_content": "Cognate Identification for a French - Romanian Lexical Alignment \nSystem: Empirical Study \nMirabela Navle a \nLinguistique, Langues, Parole (LiLPa) \nUniversité de Strasbourg \n22, rue René Descartes \nBP, 80010, 67084 S trasbourg, cedex \[email protected] Amalia Todira/cfa/c46/c58 \nLinguistique, Langues, Parole (LiLPa) \nUniversité de Strasbourg \n22, rue René Descartes \nBP, 80010, 67084 S trasbourg, cedex \[email protected] \n \n/c03Abstract \nThis paper describes a cognate identifica-\ntion method, used by a lexical alignment \nsystem for French and Romanian. We \ncombine statistical techniques and lin-\nguistic information to extract cognates \nfrom lemmatized, tagged and sentence-\naligned parallel corpora. We evaluate the \ncognate identification model and we \ncompare it to other methods using pure \nstatistical techniques. We show that the \nuse of linguistic information in the cog-\nnate identification system improves sig-\nnificantly the results. \n1 Introduction \nWe present a new cognate identification module \nrequired for a French - Romanian lexical align-\nment system. This system is used for French - \nRomanian law corpora. Cognates are translation \nequivalents presenting orthographic or phonetic \nsimilarities (common etymology, borrowings, \nand calques). They represent very important ele-\nments in a lexical alignment system for legal \ntexts for two reasons: \n- French and Romanian are two close \nRomance languages with a rich morphology; \n- Romanian language borrowed and cal-\nqued legal terminology from French. So, cog-\nnates are very useful to identify bilingual legal \nterminology from parallel corpora, while we do \nnot use any external terminological resources for \nthese languages. \nCognate identification is one of the main steps \napplied for lexical alignment for MT systems. If \nwe have several efficient tools for several Euro-\n \n© 2011 European Association for Machine Translation. \n pean languages, few lexically aligned corpora or \nlexical alignment tools (Tufi/c08/c03/c48/c57/c03/c44/c4f/c11/c0f/c03/c15/c13/c13/c18/c0c/c03/c44/c55/c48/c03\navailable for Romanian - English or Romanian - \n/c2a/c48/c55/c50/c44/c51/c03/c0b/c39/c48/c55/c57/c44/c51/c03/c44/c51/c47/c03/c2a/c44/c59/c55/c4c/c4f/c03/c0f/c03/c15/c13/c14/c13/c0c/c11/c03/c2c/c51/c03/c4a/c48/c51/c48/c55/c44/c4f/c0f/c03\nfew linguistic resources and tools for Romanian \n(dictionaries, parallel corpora, terminological \ndata bases, MT systems) are currently available. \nSome MT systems use resources for the English - \nRomanian language pair (Marcu and Munteanu, \n/c15/c13/c13/c18/c1e/c03/c2c/c55/c4c/c50/c4c/c44/c0f/c03/c15/c13/c13/c1b/c1e/c03/c26/c48/c44/c58/cfa/c58/c0f/c03/c15/c13/c13/c1c/c0c/c11/c03/c32/c57/c4b/c48/c55/c03/c30/c37/c03\nsystems develop resources for German - Roma-\n/c51/c4c/c44/c51/c03/c0b/c2a/c44/c59/c55/c4c/c4f/c03/c0f/c03/c15/c13/c13/c1c/c1e/c03/c39/c48/c55/c57/c44/c51/c03/c44/c51/c47/c03/c2a/c44/c59/c55/c4c/c4f/c03/c0f/c03/c15/c13/c14/c13/c0c/c03\nor for French - Romanian (N/c44/c59/c4f/c48/c44/c03/c44/c51/c47/c03/c37/c52/c47/c4c/c55/c44/c08/c46/c58/c0f/c03\n2010). Most of the cognate identification mod-\nules used by these systems were purely statistic-\nal. No cognate identification method is available \nfor the studied languages. \nCognate identification is a difficult problem, es-\npecially to detect false friends. Inkpen et al. 
\n(2005) classify bilingual words pairs in several \ncategories such as: \n- cognates (reconnaissance (FR) - reco-\ngnition (EN)); \n- false friends (blesser /c0b/cb5/c57/c52/c03/c4c/c51/c4d/c58/c55/c48/cb6/c0c/c03/c0b/c29/c35/c0c/c03-\nbless (EN)); \n- partial cognates (facteur (FR) - factor or \nmailman (EN)); \n- genetic cognates (chef (FR) - head \n(EN)); \n- unrelated pairs of words (glace (FR) - ice \n(EN) and glace (FR) - chair (EN)). \nIn our method, we rather identify cognates and \npartial cognates to improve lexical alignment. \nThus, we aim to obtain a high precision of our \nmethod and to eliminate some false friends using \nstatistical techniques and linguistic information. \nTo identify cognates from parallel corpora, sev-\neral approaches exploit the orthographic similari-\nty between two words of a bilingual pair. A sim-\nple method is the 4-grams method (Simard et al., \n1992). This method considers that two words are \ncognates if they contain at least 4 characters and Mik el L. F orcada, Heidi Depraetere, Vincen t V andeghinste (eds.)\nPr o c e e dings of the 15th Confer enc e of the Eur op e an Asso ciation for Machine T r anslation , p. 145\u0015152\nLeuv en, Belgium, Ma y 2011\nat least their first 4 characters are identical. Other \n/c50/c48/c57/c4b/c52/c47/c56/c03/c48/c5b/c53/c4f/c52/c4c/c57/c03/c44/c56/c56/c52/c46/c4c/c44/c57/c4c/c52/c51/c03/c56/c46/c52/c55/c48/c56/c03/c44/c56/c03/c27/c4c/c46/c48/cb6/c56/c03\ncoefficient (Adamson and Boreham, 1974) or a \nvariant of this coefficient (Brew and McKelvie, \n1996). This measure computes the ratio between \nthe number of common character bigrams of the \ntwo words and the total number of two words \nbigrams. Also, two words are considered as cog-\nnates if the ratio between the length of the maxi-\nmum common substring of ordered (and not nec-\nessarily contiguous) characters and the length of \nthe longest word is greater than or equal to a cer-\ntain empirically determined threshold (Melamed, \n1999; Kraif, 1999). Similarly, other methods \ncompute the distance between two words, that \nrepresent the minimum number of substitutions, \ninsertions and deletions used to transform one \nword into another (Wagner and Fischer, 1974). \nOn the other hand, other methods compute the \nphonetic distance between two words belonging \nto a bilingual pair (Oakes, 2000). Kondrak \n(2009) proposes methods identifying three cha-\nracteristics of cognates: recurrent sound corres-\npondences, phonetic similarity and semantic af-\nfinity. \nWe present a French - Romanian cognate identi-\nfication module. We combine statistical tech-\nniques and linguistic information (lemmas, POS \ntags) to improve the results of the cognate identi-\nfication method. We compare it with other me-\nthods using exclusively statistical techniques. \nThe cognate identification system is integrated \ninto a lexical alignment system. \nIn the next section, we present our lexical align-\nment method. We present our parallel corpora \nand the tools used to preprocess our parallel cor-\npora, in section 3. In section 4, we describe our \ncognate identification method. We present the \n/c55/c48/c56/c58/c4f/c57/c56/cb6/c03/c48/c59/c44/c4f/c58/c44/c57/c4c/c52/c51/c03/c4c/c51/c03/c56/c48/c46/c57/c4c/c52/c51/c03/c18/c11/c03/c32/c58/c55/c03/c46/c52/c51/c46/c4f/c58/c56/c4c/c52/c51/c56/c03\nand further works figure in section 6. 
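For concreteness, the orthographic similarity measures surveyed above are easy to spell out; the sketch below (ours, not the authors' implementation) gives the 4-gram prefix test of Simard et al. (1992), a bigram Dice coefficient, and the longest-common-subsequence ratio used by Melamed (1999) and Kraif (1999), applied to a diacritic-stripped French-Romanian pair:

```python
# Illustrative versions of three cognate-similarity tests mentioned above (not the paper's code).

def prefix_ngram_cognates(w1, w2, n=4):
    """Simard et al. (1992): both words have >= n characters and share their first n."""
    return len(w1) >= n and len(w2) >= n and w1[:n] == w2[:n]

def dice_bigrams(w1, w2):
    """Dice coefficient over character bigrams (Adamson and Boreham, 1974)."""
    b1 = [w1[i:i + 2] for i in range(len(w1) - 1)]
    b2 = [w2[i:i + 2] for i in range(len(w2) - 1)]
    shared = sum(min(b1.count(b), b2.count(b)) for b in set(b1))
    return 2 * shared / (len(b1) + len(b2)) if b1 and b2 else 0.0

def lcs_ratio(w1, w2):
    """Longest common (possibly non-contiguous) subsequence length divided by the
    length of the longer word (Melamed, 1999; Kraif, 1999)."""
    dp = [[0] * (len(w2) + 1) for _ in range(len(w1) + 1)]
    for i, a in enumerate(w1, 1):
        for j, b in enumerate(w2, 1):
            dp[i][j] = dp[i - 1][j - 1] + 1 if a == b else max(dp[i - 1][j], dp[i][j - 1])
    return dp[-1][-1] / max(len(w1), len(w2))

pair = ("autorite", "autoritate")            # FR 'autorité' vs. RO 'autoritate', diacritics stripped
print(prefix_ngram_cognates(*pair))          # True: shared 4-character prefix "auto"
print(dice_bigrams(*pair))                   # 0.875
print(lcs_ratio(*pair))                      # 0.8: "autorite" is a subsequence of "autoritate"
```

A threshold on the last two scores, as in the works cited above, then decides whether the pair is kept as a cognate candidate.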
\n2 The Lexical Align ment Module \nThe output of the cognate identification module \nis exploited by a French - Romanian lexical \nalignment system. \nOur lexical alignment system combines statistical \nmethods and linguistic heuristics. We use GI-\nZA++ (Och and Ney, 2000, 2003) implementing \nIBM models (Brown et al., 1993). These models \nrealize word-based alignments. Indeed, each \nsource word has zero, one or more translation \nequivalents in the target language, computed \nfrom aligned sentences. Due to the fact that these \nmodels do not provide many-to-many align-ments, we also use some heuristics (Koehn et al., \n2003; Tufi/c08 et al., 2005) in order to detect \nphrase-based alignments such as chunks: nomin-\nal, adjectival, verbal, adverbial or prepositional \nphrases. \nWe use lemmatized, tagged and annotated at \nchunk level parallel corpora. These corpora are \ndescribed in details in the next section. \nTo improve the lexical alignment, we use lem-\nmas and morpho-syntactic properties. We pre-\npare the corpus in the input format required by \nGIZA++, providing also the lemma and the two \nfirst characters of the morpho-syntactic tag. This \noperation morphologically disambiguates the \nlemmas (Tufi/c08 et al., 2005). For example, the \nsame French lemma traité (=treaty, treated) can \nbe a common noun or a participial adjective: \ntraité_Nc vs. traité_Af. This disambiguation pro-\n/c46/c48/c47/c58/c55/c48/c03/c4c/c50/c53/c55/c52/c59/c48/c56/c03/c57/c4b/c48/c03/c2a/c2c/c3d/c24/c0e/c0e/c03/c56/c5c/c56/c57/c48/c50/cb6/c56/c03/c53/c48/c55/c49/c52r-\nmance. \nIn order to obtain high accuracy of the lexical \nalignment, we realize bidirectional alignments \n(FR-RO and RO-FR) with GIZA++, and then we \nintersect them (Koehn et al., 2003). This heuris-\ntic only selects sure links, because these align-\nments are detected in the two lexical alignment \nprocess directions. \nTo obtain sure word alignments, we also use a \nset of automatically identified cognates. Indeed, \nwe filter the list of translation equivalents ob-\ntained by alignment intersection, using a list of \ncognates. To extract cognates from parallel cor-\npora, we developed a method adapted to the stu-\ndied languages. This method combines statistical \ntechniques and linguistic information. The me-\nthod is presented in section 4. \nWe obtain sure word alignments using multiword \nexpressions such as collocations. They represent \npolylexical expressions whose words are related \nby lexico-syntactic rela/c57/c4c/c52/c51/c56/c03/c0b/c37/c52/c47/c4c/c55/c44/cfa/c46/c58/c03/c48/c57/c03/c44/c4f/c11/c0f/c03\n2008). We use a multilingual dictionary of ver-\nbo-/c51/c52/c50/c4c/c51/c44/c4f/c03/c46/c52/c4f/c4f/c52/c46/c44/c57/c4c/c52/c51/c56/c03/c0b/c37/c52/c47/c4c/c55/c44/cfa/c46/c58/c03/c48/c57/c03/c44/c4f/c11/c0f/c03/c15/c13/c13/c1b/c0c/c03\nto align them. This dictionary is available for \nFrench, Romanian and German. The dictionary is \ncompleted by data extracted from legal texts and \nit contains the most frequent verbo-nominal col-\nlocations from this domain. The external re-\nsource is used to align this class of collocations \n(for legal corpora), but it do not resolve the \nalignment problems of other classes (noun + \nnoun, adv erb + adjective, etc.). \nFinally, we apply a set of linguistically motivated \nheuristic rules (Tufi/cfa/c03/c48/c57/c03/c44/c4f/c11/c0f/c03/c15/c13/c13/c18/c0c/c03/c4c/c51/c03/c52/c55/c47/c48/c55/c03/c57/c52/c03\naugment the recall of the lexical alignment me-\nthod: 146\ni. 
we define some POS affinity classes (a \nnoun can be translated by a noun, a verb \nor an adjective); \nii. we align content-words such as nouns, \nadjectives, verbs, and adverbs, according \nto the POS affinity classes; \niii. we align chunks containing translation \nequivalents aligned in a previous step; \niv. we align elements belonging to chunks \nby linguistic heuristics; At this level, we \ndeveloped a supplementary module de-\npending on the two studied languages. \nThis module uses a set of 27 morpho-\nsyntactic contextual heuristics rules. \nThese rules are defined according to \nmorpho-syntactic differences between \nFrench and Romanian (Navlea and \nTodira/ce2cu, 2010). For example, in Ro-\nmanian relative clause, the direct object \nis simultaneously realized by the relative \npronoun care 'that' (preceded by the pe \npreposition) and by the personal pronoun \nîl, -l, o, îi, -i, le. In French, it is expressed \nby que relative pronoun. Thus, we define \na morpho-syntactic heuristic rule to align \nthe supplementary elements from the \nsource and from the target language (see \nFigure 1). The rule aligns que with the \nsequence pe care (accusative) le. \n \nFrench Romanian \nles problemele \nproblèmes pe \nque care \ncréerait le- \nla ar \npublication genera \n publicarea \nFigure 1 Example of lexical alignment using \nmorpho-syntactic heuristics rules (the case of \nrelative clause) \n \nWe focus here on the development of the cognate \nidentification method used by our French - Ro-\nmanian lexical alignment system. In the next sec-\ntion, we present the parallel corpora used for our \nexperiments. \n3 French - Romanian Parallel Corpora \nIn our project, we use two freely available sen-\ntence-aligned legal parallel corpora as JRC-\nAcquis (Steinberger et al., 2006) and DGT-TM1. \n \n1 http://langtech.jrc.it/DGT-TM.html These corpora are based on the Acquis Commu-\nnautaire multilingual corpus available in 22 offi-\ncial languages of EU. It is composed of laws \nadopted by EU member states and EU candidates \nsince 1950. For our project, we use a subset of \n228,174 pairs of 1-1 aligned sentences from the \nJRC-Acquis, selected from the common docu-\nments available in French and in Romanian. We \nalso use a subset of 490,962 pairs of 1-1 aligned \nsentences extracted from the DGT-TM. \nAs the JRC-Acquis and the DGT-TM are legal \ncorpora, we built other multilingual corpora for \nother domains (politics, aviation). Thus, we ma-\nnually selected French - Romanian available \ntexts from several websites according to several \ncriteria: availability of the bilingual texts, relia-\nbility of the sources, translation quality, and do-\nmain. The used corpora are described in the Ta-\nble 1: \n \nCorpora Sou rce Number \nof words / \nFrench Number of \nwords / \nRomanian \nJRC-Acquis 5,828,169 5,357,017 \nDGT-TM 9,953,360 9,142,291 \nEuropean Par-\n/c4f/c4c/c44/c50/c48/c51/c57/cb6/c56/c03/c5a/c48b-\nsite 137,422 126,366 \n \nEuropean \n/c26/c52/c50/c50/c4c/c56/c56/c4c/c52/c51/cb6/c56/c03\nwebsite 200,590 185,476 \n \nRomanian air-\nplane compa-\n/c51/c4c/c48/c56/cb6/c03/c5a/c48/c45/c56/c4c/c57/c48/c56 33,757 29,596 \n \nTable 1 French - Romanian parallel corpora \n \nWe preprocess our corpora by applying the TTL2 \ntagger (Ion, 2007). This tagger is available for \nFrench and for Romanian as Web service. Thus, \nthe parallel corpora are tokenized, lemmatized, \nPOS tagged and annotated at chunk level. 
TTL uses the set of morpho-syntactic descriptors (MSD) proposed by the Multext Project3 for French (Ide and Véronis, 1994) and for Romanian (Tufiş and Barbu, 1997). TTL's results are available in XCES format (see Figure 2). \n
2 Tokenizing, Tagging and Lemmatizing free running texts \n
3 http://aune.lpl.univ-aix.fr/projects/multext/ \n
<seg lang=\"fr\"><s id=\"ttlfr.3\"> \n
<w lemma=\"voir\" ana=\"Vmps-s\">vu</w> \n
<w lemma=\"le\" ana=\"Da-fs\" chunk=\"Np#1\">la</w> \n
<w lemma=\"proposition\" ana=\"Ncfs\" chunk=\"Np#1\">proposition</w> \n
<w lemma=\"de\" ana=\"Spd\" chunk=\"Pp#1\">de</w> \n
<w lemma=\"le\" ana=\"Da-fs\" chunk=\"Pp#1,Np#2\">la</w> \n
<w lemma=\"commission\" ana=\"Ncfs\" chunk=\"Pp#1,Np#2\">Commission</w> \n
<c>;</c> \n
</s></seg> \n
Figure 2 TTL's output for French \n
In the example of Figure 2, the lemma attribute represents the lemmas of the lexical units, the ana attribute provides morpho-syntactic information and the chunk attribute marks nominal and prepositional phrases. We exploit this linguistic information in order to adapt the lexical alignment algorithm to French and Romanian. Thus, we study the influence of linguistic information on the quality of the lexical alignment. \n
4 Cognate Identification \n
We did our lexical alignment and cognate identification experiments on a legal parallel corpus extracted from the Acquis Communautaire. We automatically selected 1,000 1:1 aligned complete sentences (starting with a capital letter and finishing with a punctuation sign). Each selected sentence has no more than 80 words. This corpus contains 33,036 tokens in French and 28,645 tokens in Romanian. We tokenized, lemmatized and tagged our corpus as mentioned in the previous section. \n
Thus, to extract French - Romanian cognates from the lemmatized, tagged and sentence-aligned parallel corpus, we exploit linguistic information: lemmas and POS tags. In addition, we use orthographic and phonetic similarities between cognates. To detect such similarities, we focus on the beginning of the words and ignore their endings. First, we use n-gram methods (Simard et al., 1992), where n=4 or n=3. Second, we compare ordered sequences of bigrams (ordered pairs of characters). Then, we apply some data input disambiguation strategies, such as: \n
- we iteratively extract sure cognates, such as invariant strings (abbreviations, numbers, etc.) or similar strings (3- and 4-grams); at each iteration, we delete them from the input data; \n
- we use cognate pair frequencies in the studied corpus. \n
We consider as cognates the words belonging to a bilingual pair simultaneously respecting the following linguistic conditions: \n
1) their lemmas are translation equivalents in two parallel sentences; \n
2) they have identical lemmas or have orthographic or phonetic similarities between lemmas; \n
3) they are content-words (nouns, verbs, adverbs, etc.) having the same POS tag or showing POS affinities. So, we filter out short words such as prepositions and conjunctions to limit noisy output. Thus, we do not generally restrict lemma length. We also detect short cognates such as il 'he' vs. el (personal pronouns), or cas 'case' vs. caz (nouns). We avoid ambiguous pairs such as lui 'him' (personal pronoun) (FR) vs. 
lui (possessive determiner) (RO), or ce 'this' (demonstrative determiner) (FR) vs. ce 'that' (relative pronoun) (RO). \n
We classify the French - Romanian cognates (detected in the studied parallel corpus) into several categories: \n
1) cross-lingual invariants (numbers, certain acronyms and abbreviations). In this category, we also consider punctuation signs; \n
2) identical cognates (civil vs. civil); \n
3) similar cognates (at the orthographic or phonetic level): \n
a) 4-grams (Simard et al., 1992): the first 4 characters of the lemmas are identical, and the length of these lemmas is greater than or equal to 4 (autorité 'authority' vs. autoritate); \n
b) 3-grams: the first 3 characters of the lemmas are identical and the length of the lemmas is greater than or equal to 3 (mars 'March' vs. martie); \n
c) 8-bigrams: the lemmas have a common sequence of characters (possibly discontinuous) among the first 8 bigrams. At least one character of each bigram is common to the two words. This condition allows the jump of a non-identical character (fonctionnement 'function' vs. funcţionare). This applies only to long lemmas, with length greater than 7; \n
d) 4-bigrams: the lemmas have a common sequence of characters (possibly discontinuous) among the first 4 bigrams: rembourser 'refund' vs. rambursa; objet 'object' vs. obiect. This applies to long lemmas (length > 7) but also to short lemmas (length less than or equal to 7). \n
Our method mainly follows three stages. In the first place, we apply a set of empirically established orthographic adjustments between French - Romanian lemmas, such as: remove diacritics, detect phonetic mappings, etc. (see Table 2). As French uses an etymological writing and Romanian a phonetic writing, we identify phonetic correspondences between lemmas and make some orthographic adjustments from French to Romanian. For example, the cognates phase 'phase' (FR) vs. fază (RO) become faze (FR) vs. faza (RO). In this example, we make two adjustments: the French consonant group ph [f] becomes f (as in Romanian) and the French intervocalic s [z] becomes z (as in Romanian). We also make adjustments in the ambiguous cases, by replacing with both variants (ch ([ş] or [k])): machine vs. maşină 'car'; chlorure 'chloride' vs. clorură. \n
Levels of orthographic adjustments (French - Romanian, with examples): \n
Diacritics: dépôt - depozit \n
Double contiguous letters: rapport - raport \n
Consonant groups: ph - f [f] (phase - fază); th - t [t] (méthode - metodă); dh - d [d] (adhérent - aderent); cch - c [k] (bacchante - bacantă); ck - c [k] (stockage - stocare); cq - c [k] (grecque - grec); ch - ş [ş] (fiche - fişă); ch - c [k] (chapitre - capitol) \n
q: q (final) - c [k] (cinq - cinci); qu(+i) (medial) - c [k] (équilibre - echilibru); qu(+e) (medial) - c [k] (marquer - marca); qu(+a) - c(+a) [k] (qualité - calitate); que (final) - c [k] (pratique - practică) \n
Intervocalic s: v + s + v - v + z + v (présent - prezent) \n
w: w - v (wagon - vagon) \n
y: y - i (yaourt - iaurt) \n
Table 2 French - Romanian cognates orthographic adjustments \n
Secondly, we apply seven cognate extraction steps (see Table 3). To extract cognates from parallel corpora, we aim to improve the precision of our method. 
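The following is a minimal, self-contained Python sketch of how the orthographic adjustments of Table 2 and the similarity categories above could be combined; the rule subset, function names and thresholds are ours, chosen only for illustration, and do not reproduce the exact implementation of the module.

```python
# Illustrative sketch (not the authors' exact implementation) of the
# similarity tests described above: orthographic adjustments followed
# by 4-gram/3-gram prefix checks and a discontinuous bigram check.
import re

# A few of the French -> Romanian adjustments from Table 2 (assumed subset).
ADJUSTMENTS = [
    (r"ph", "f"), (r"th", "t"), (r"dh", "d"),
    (r"(?<=[aeiou])s(?=[aeiou])", "z"),   # intervocalic s -> z
    (r"w", "v"), (r"y", "i"),
]

def adjust_fr(lemma: str) -> str:
    """Apply a subset of the orthographic adjustments to a French lemma."""
    for pattern, repl in ADJUSTMENTS:
        lemma = re.sub(pattern, repl, lemma)
    return lemma

def bigrams(lemma: str, limit: int):
    """Return up to `limit` character bigrams from the start of the lemma."""
    return [lemma[i:i + 2] for i in range(min(limit, len(lemma) - 1))]

def are_cognates(fr: str, ro: str) -> bool:
    fr = adjust_fr(fr.lower())
    ro = ro.lower()
    if fr == ro:                                              # identical cognates
        return True
    if len(fr) >= 4 and len(ro) >= 4 and fr[:4] == ro[:4]:    # 4-gram prefix
        return True
    if len(fr) >= 3 and len(ro) >= 3 and fr[:3] == ro[:3]:    # 3-gram prefix
        return True
    # n-bigram test: each of the first n bigrams shares at least one character
    n = 8 if len(fr) > 7 and len(ro) > 7 else 4
    pairs = list(zip(bigrams(fr, n), bigrams(ro, n)))
    return bool(pairs) and all(set(a) & set(b) for a, b in pairs)

print(are_cognates("autorité", "autoritate"))   # True (4-gram prefix)
print(are_cognates("phase", "faza"))            # True after the ph -> f adjustment
```

In the actual module, the checks are applied in separate extraction steps with frequency filtering and deletion of reliable pairs, as described next.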
Thus, we extract cognates by applying the categories 1 - 3 (a-d) (see Table 3). Moreover, in order to decrease the noise of the cognate identification method, we apply two supplementary strategies. We filter out ambiguous cognate candidates (the same source lemma occurs with several target candidates) by computing their frequencies in the corpus; we keep the most frequent candidate pair. This strategy is very effective at increasing precision, but it might decrease recall in certain cases. Indeed, there are cases when French - Romanian cognates have one form in French, but two different forms in Romanian (information 'information' vs. informaţie or informare; manifestation 'manifestation' vs. manifestaţie or manifestare). We recover these pairs by using regular expressions based on specific lemma endings (ion (FR) vs. ţie|re (RO)). \n
Then, we delete the reliable cognate pairs (high precision) from the input data at the end of the extraction step. Thus, we disambiguate the data input. For example, the identical cognates transport vs. transport, obtained in a previous extraction step and deleted from the input data, eliminate the occurrence of the candidate transport vs. tranzit as a 4-gram cognate in a subsequent extraction step. \n
These strategies allow us to increase the precision of our method. We give below some examples of correctly extracted cognates: autorité 'authority' (FR) - autoritate (RO); disposition 'layout' (FR) - dispoziţie (RO); directive 'directive' (FR) - directivă (RO). We also eliminate some false friends: autorité 'authority' (FR) - autorizare 'authorization' (RO); disposition 'layout' (FR) - dispozitiv 'device' (RO); direction 'direction' (FR) - directivă 'directive' (RO). \n
Extraction steps by category of cognates F Deletion from input data P (%) \n
1: cross-lingual invariants x 100 \n
2: identical cognates x 100 \n
3: 4-grams (lemmas' length >= 4) x x 99.05 \n
4: 3-grams (lemmas' length >= 3) x x 93.13 \n
5: 8-bigrams (long lemmas, lemmas' length > 7) x 95.24 \n
6: 4-bigrams (long lemmas, lemmas' length > 7) 75 \n
7: 4-bigrams (short lemmas, lemmas' length =< 7) x 65.63 \n
Table 3 Precision of cognates' extraction steps; F=Frequency; P=Precision \n
We apply the same method for cognates having POS affinity (N-V; N-ADJ). We keep only 4-gram cognates, due to a significant decrease of precision for the other categories (3-grams, 8-bigrams and 4-bigrams). \n
Finally, we recover the initial cognate lemmas for both languages. \n
However, our system extracts some false candidates, such as: numéro 'number' (FR) - 
nume 'name' (RO); consommation 'consumption' (FR) - considerare 'consideration' (RO); compléter 'complete' (FR) - compune 'compose' (RO). \n
5 Evaluation \n
We evaluate our method on the parallel corpus of 1,000 sentences described in the previous section. We compare the results with two other methods (see Table 4): \n
a) the method exclusively based on 4-grams; \n
b) a combination of the 4-gram approach and the orthographic adjustments. \n
Methods P (%) R (%) F (%) \n
4-grams 90.85 47.84 62.68 \n
4-grams + Orthographic Adjustments 91.55 72.42 80.87 \n
Our method 94.78 89.18 91.89 \n
Table 4 Evaluation results; P=Precision; R=Recall; F=F-measure \n
We manually built a reference list of cognates containing 2,034 pairs from the studied parallel sentences. Then, we compare the extracted cognate list to this reference list. Our method extracted 1,814 correct cognate pairs (from a total of 1,914 extracted pairs), which represents a precision of 94.78%. The 4-grams method has good precision (90.85%), but low recall (47.84%). The orthographic adjustment method significantly improves the recall of the 4-grams method. The various extraction steps using statistical techniques and linguistic filters, applied after the orthographic adjustment step, improve both recall (from 72.42% to 89.18%) and precision (from 91.55% to 94.78%). These results show that the use of some linguistic information provides better results than purely statistical methods. \n
6 Conclusions and Further Work \n
We present here a cognate identification module for two morphologically rich languages, French and Romanian. Cognates are very important elements used by a lexical alignment system. Thus, we aim to obtain high precision and recall for our cognate identification method by combining statistical techniques and linguistic information. We show that an orthographic adjustment step between French - Romanian bilingual lemma pairs and a set of linguistic filters significantly improve the module's performance. \n
The cognate identification method is integrated into a French - Romanian lexical alignment module. The alignment module is part of a larger project aiming to develop a French - Romanian factored phrase-based statistical machine translation system. \n
References \n
Adamson, George W., and Jillian Boreham. 1974. The use of an association measure based on character structure to identify semantically related pairs of words and document titles, Information Storage and Retrieval, 10(7-8):253-260. \n
Brew, Chris, and David McKelvie. 1996. 
Word-pair extraction for lexicography, in Proceedings of the International Conference on New Methods in Natural Language Processing, Bilkent, Turkey, 45-55. \n
Brown, Peter F., Vincent J. Della Pietra, Stephen A. Della Pietra, and Robert L. Mercer. 1993. The mathematics of statistical machine translation: Parameter estimation, Computational Linguistics, 19(2):263-312. \n
Ceauşu, Alexandru. 2009. Tehnici de traducere automată şi aplicabilitatea lor limbii române ca limbă sursă, Ph.D. Thesis, Romanian Academy, Bucharest, April 2009, 123 pp. \n
Gavrilă, Monica. 2009. SMT experiments for Romanian and German using JRC-Acquis, in Proceedings of the Workshop on Multilingual Resources, Technologies and Evaluation for Central and Eastern European Languages, 7th Recent Advances in Natural Language Processing (RANLP), 17 September 2009, Borovets, Bulgaria, pp. 14-18. \n
Ide, Nancy, and Jean Véronis. 1994. Multext (multilingual tools and corpora), in Proceedings of the 15th International Conference on Computational Linguistics, CoLing 1994, Kyoto, August 5-9, pp. 90-96. \n
Inkpen, Diana, Oana Frunza, and Grzegorz Kondrak. 2005. Automatic Identification of Cognates and False Friends in French and English, in Proceedings of Recent Advances in Natural Language Processing, RANLP-2005, Bulgaria, Sept. 2005, pp. 251-257. \n
Ion, Radu. 2007. Metode de dezambiguizare semantică automată. Aplicaţii pentru limbile engleză şi română, Ph.D. Thesis, Romanian Academy, Bucharest, May 2007, 148 pp. \n
Irimia, Elena. 2008. Experimente de Traducere Automată Bazată pe Exemple, in Proceedings of the Workshop Resurse Lingvistice Româneşti şi Instrumente pentru Prelucrarea Limbii Române, Iasi, 19-21 November 2008, pp. 131-140. \n
Koehn, Philipp, Franz Josef Och, and Daniel Marcu. 2003. Statistical Phrase-Based Translation, in Proceedings of the Human Language Technology Conference of the North American Chapter of the Association for Computational Linguistics, HLT-NAACL 2003, Edmonton, May-June 2003, pp. 48-54. \n
Koehn, Philipp, and Hieu Hoang. 2007. Factored translation models, in Proceedings of the 2007 Joint Conference on Empirical Methods in Natural Language Processing and Computational Natural Language Learning (EMNLP-CoNLL), Prague, June 2007, pp. 868-876. \n
Kondrak, Grzegorz. 2009. Identification of Cognates and Recurrent Sound Correspondences in Word Lists, in Traitement Automatique des Langues (TAL), 50(2):201-235. \n
Kraif, Olivier. 1999. 
Identification des cognats et alignement bi-textuel : une étude empirique, dans Actes de la 6ème conférence annuelle sur le Traitement Automatique des Langues Naturelles, TALN 99, Cargèse, 12-17 juillet 1999, 205-214. \n
Marcu, Daniel, and Dragoş Munteanu. 2005. Statistical Machine Translation: An English-Romanian Experiment, in 7th International Summer School EUROLAN 2005, July 25 - August 6, Cluj-Napoca, Romania. \n
Melamed, I. Dan. 1999. Bitext Maps and Alignment via Pattern Recognition, in Computational Linguistics, 25(1):107-130. \n
Navlea, Mirabela, and Amalia Todiraşcu. 2010. Linguistic Resources for Factored Phrase-Based Statistical Machine Translation Systems, in Proceedings of the Workshop on Exploitation of Multilingual Resources and Tools for Central and (South) Eastern European Languages, 7th International Conference on Language Resources and Evaluation (LREC 2010), Valletta, Malta, May 2010, pp. 41-48. \n
Oakes, Michael P. 2000. Computer Estimation of Vocabulary in Protolanguage from Word Lists in Four Daughter Languages, in Journal of Quantitative Linguistics, 7(3):233-243. \n
Och, Franz Josef, and Hermann Ney. 2000. Improved Statistical Alignment Models, in Proceedings of the 38th Conference of the Association for Computational Linguistics, ACL 2000, Hong Kong, pp. 440-447. \n
Och, Franz Josef, and Hermann Ney. 2003. A Systematic Comparison of Various Statistical Alignment Models, in Computational Linguistics, 29(1):19-51. \n
Simard, Michel, George Foster, and Pierre Isabelle. 1992. Using cognates to align sentences, in Proceedings of the Fourth International Conference on Theoretical and Methodological Issues in Machine Translation, Montréal, pp. 67-81. \n
Steinberger, Ralph, Bruno Pouliquen, Anna Widiger, Camelia Ignat, Tomaž Erjavec, Dan Tufiş, and Dániel Varga. 2006. The JRC-Acquis: A Multilingual Aligned Parallel Corpus with 20+ Languages, in Proceedings of the 5th International Conference on Language Resources and Evaluation (LREC 2006), Genoa, May 2006, pp. 2142-2147. \n
Todiraşcu, Amalia, Ulrich Heid, Dan Ştefănescu, Dan Tufiş, Christopher Gledhill, Marion Weller, and François Rousselot. 2008. Vers un dictionnaire de collocations multilingue, in Cahiers de Linguistique, 33(1):161-186, Louvain, août 2008. \n
Tufiş, Dan, and Ana Maria Barbu. 1997. A Reversible and Reusable Morpho-Lexical Description of Romanian, in Dan Tufiş and Poul Andersen (eds.), Recent Advances in Romanian Language Technology, pp. 83-93, Editura Academiei Române, Bucureşti, 1997. ISBN 973-27-0626-0. 
\nTufiş, Dan, Radu Ion, Alexandru Ceauşu, and Dan Ştefănescu. 2005. Combined Aligners, in Proceedings of the Workshop on Building and Using Parallel Texts: Data-Driven Machine Translation and Beyond, pp. 107-110, Ann Arbor, USA, Association for Computational Linguistics. ISBN 978-973-703-208-9. \n
Vertan, Cristina, and Monica Gavrilă. 2010. Multilingual applications for rich morphology language pairs, a case study on German-Romanian, in Dan Tufiş and Corina Forăscu (eds.): Multilinguality and Interoperability in Language Processing with Emphasis on Romanian, Romanian Academy Publishing House, Bucharest, pp. 448-460, ISBN 978-973-27-1972-5. \n
Wagner, Robert A., and Michael J. Fischer. 1974. The String-to-String Correction Problem, Journal of the ACM, 21(1):168-173.",
"main_paper_content": null
}
|
{
"decision": "Unknown",
"reviews": []
}
| 0 | 0 |
[] |
[] |
{
"id": "I04N-65x_o9",
"year": null,
"venue": "EAMT 2020",
"pdf_link": "https://aclanthology.org/2020.eamt-1.32.pdf",
"forum_link": "https://openreview.net/forum?id=I04N-65x_o9",
"arxiv_id": null,
"doi": null
}
|
{
"title": "An English-Swahili parallel corpus and its use for neural machine translation in the news domain",
"authors": [
"Felipe Sánchez-Martínez",
"Víctor M. Sánchez-Cartagena",
"Juan Antonio Pérez-Ortiz",
"Mikel L. Forcada",
"Miquel Esplà-Gomis",
"Andrew Secker",
"Susie Coleman",
"Julie Wall"
],
"abstract": "Felipe Sánchez-Martínez, Víctor M. Sánchez-Cartagena, Juan Antonio Pérez-Ortiz, Mikel L. Forcada, Miquel Esplà-Gomis, Andrew Secker, Susie Coleman, Julie Wall. Proceedings of the 22nd Annual Conference of the European Association for Machine Translation. 2020.",
"keywords": [],
"raw_extracted_content": "An English–Swahili parallel corpus and its use for\nneural machine translation in the news domain\nFelipe S ´anchez-Mart ´ınez,zV´ıctor M. S ´anchez-Cartagena,zJuan Antonio P ´erez-Ortiz,z\nMikel L. Forcada,zMiquel Espl `a-Gomis,zAndrew Secker,ySusie Coleman,yJulie Wally\nzDep. de Llenguatges i Sistemes Inform `atics, Universitat d’Alacant\nE-03690 Sant Vicent del Raspeig (Spain)\nffsanchez,vmsanchez,japerez,mlf,mespla [email protected]\nyThe British Broadcasting Corporation\nBBC Broadcasting House, Portland Place, London, W1A 1AA. (UK)\nfandrew.secker,susie.coleman,julie.wall [email protected]\nAbstract\nThis paper describes our approach to cre-\nate a neural machine translation system to\ntranslate between English and Swahili (both\ndirections) in the news domain, as well as\nthe process we followed to crawl the neces-\nsary parallel corpora from the Internet. We\nreport the results of a pilot human evalua-\ntion performed by the news media organisa-\ntions participating in the H2020 EU-funded\nproject GoURMET.\n1 Introduction\nLarge news media organisations often work in a\nmultilingual space in which they both publish their\nmaterial in numerous languages and monitor the\nworld’s media across video, audio, printed and on-\nline sources. As regards content creation , one way\nin which efficient use is made of journalistic endeav-\nour is the republication of news originally authored\nin one language into another; by using machine\ntranslation, and with the appropriate user interfaces,\na journalist is able to take a news story or script, in\nthe case of an audio or video report, and quickly\nobtain a preliminary translation that will be then\nmanually post-edited to ensure it has the quality\nrequired to be presented to the audience. Concern-\ningnews gathering , expert monitors and journalists\nhave to currently perform a lot of manual work to\nkeep up with a growing amount of broadcast and\nsocial media streams of data; it is becoming im-\nperative to automate tasks, such as translation, in\norder to free monitors and journalists to perform\nmore journalistic tasks that cannot be achieved with\ntechnology.\nc\r2020 The authors. This article is licensed under a Creative\nCommons 3.0 licence, no derivative works, attribution, CC-\nBY-ND.In order to cope with these requirements, promot-\ning both the reach of the news published to under-\nserved audiences and the world-wide broadcasting\nof local information, the H2020 EU-funded project\nGoURMET (Global Under-Resourced Media Trans-\nlation),1aims at improving neural machine trans-\nlation (NMT) for under-resourced language pairs\nwith special emphasis in the news domain. The\ntwo partner media organisations in the GoURMET\nproject, the BBC in the UK and Deutsche Welle\n(DW) in Germany, publish news content in 40 and\n30 different languages, respectively, and gather\nnews in over 100 languages. In particular, both\nmedia partners gather news in and produce content\nin Swahili.\nAccording to Wikipedia, Swahili has between\n2 and 15 million first-language speakers and 90\nmillion second-language speakers. As one of the\nlargest languages in Africa and the recognised lin-\ngua franca of the East African community, BBC\nand DW see Swahili as an important language in\nwhich to make content available. 
The NMT systems\ndescribed and evaluated herein can be deployed to\nsupport them in this domain specific context.\nThe rest of the paper is organised as follows.\nNext section describes the corpora we used to train\nour English–Swahili NMT systems in both transla-\ntion directions. Section 3 then describes the crawl-\ning of the additional corpora we used and made\npublicly available. Section 4 describes the main lin-\nguistic contrasts between English and Swahili and\nthe challenges they pose for building MT systems\nbetween them. Section 5 describes the resources,\nother than corpora, that we used to build our own\nsystems and the technical details of the training of\nthe NMT systems. Section 6 discussed the results of\n1https://gourmet-project.eu/\nCorpus Sent’s en tokens sw tokens\nGoURMET v1 156 061 3 334 886 2 981 699\nSAWA 272 544 1 553 004 1 206 757\nTanzil v1 138 253 2 376 908 1 734 247\nGV v2017q3 29 698 534 270 546 107\nGV v2015 26 033 467 353 476 478\nUbuntu v14.10 986 2 486 2 655\nEUbookshop v2 17 191 228\nGNOME v1 40 168 170\ntotal 623 632 8 269 266 6 948 341\nTable 1: Parallel English–Swahili corpora used to train the\nNMT systems described in this work. GVstands for the Glob-\nalV oices corpus.\nautomatic evaluation measures, describes a manual\nevaluation we are conducting and provides prelimi-\nnary results. The paper ends with some concluding\nremarks.\n2 Monolingual and bilingual corpora\nParallel data is the basic resource required to\ntrain NMT. Additionally, it is common practice\nto use synthetic parallel corpora obtained by\nback-translating monolingual data (Sennrich et al.,\n2016b). This section describes the corpora we used\nto train the NMT systems described in Section 5.\nTables 1 and 2 describe the parallel and mono-\nlingual corpora we used, respectively. As regards\nparallel corpora, with the exception of GoURMET\nand SAWA, all of them were downloaded from the\nOPUS website,2one of the largest repositories of\nparallel data on the Internet.3We used two addi-\ntional parallel corpora: the SAWA corpus (De Pauw\net al., 2011), that was kindly provided by their edi-\ntors, and the GoURMET corpus, that was crawled\nfrom the web following the method described in\nSection 3.\nAs regards monolingual data, only three corpora\nwere used: the NewsCrawl (Bojar et al., 2018)\nfor English ( en) and for Swahili ( sw),4and the\nGoURMET monolingual corpus for sw. The first\ntwo corpora were chosen because they belong to\nthe news domain, the same domain of application\nof our NMT systems. Given that the size of the sw\nmonolingual corpus is much smaller than the size\nof the enmonolingual corpus, additional monolin-\ngual data in swwas obtained as a by-product of the\nprocess of crawling parallel data from the web.\n2http://opus.nlpl.eu/\n3Table 1 contains the parallel corpora available at OPUS at the\ntime of training our systems. 
New corpora have been added\nrecently, such as the large JW300 corpus (Agi ´c and Vuli ´c,\n2019), which we did not use.\n4http://data.statmt.org/news-crawl/sw/Corpus Sent’s Tokens\nNewsCrawl (en) 18 113 311 359 823 264\nNewsCrawl (sw) 174 425 3 603 035\nGoURMET (sw) 5 687 000 174 867 482\nTable 2: Monolingual Swahili and English corpora used to\nbuild synthetic parallel data through back-translation.\n3 Crawling of additional corpora\nThe amount of data for en–swis clearly low, even\nif one compares it to the amount of data available\nfor other under-resourced language pairs, such as\nEnglish–Maltese or English–Icelandic.5For this\nreason, a new corpus was crawled from the Internet\n(see the GoURMET corpus in Table 1). This corpus\nhas been made publicly available.6\nThe GoURMET corpus was obtained by using\nBitextor (Espl `a-Gomis and Forcada, 2010; Espl `a-\nGomis et al., 2019), a free open/source software that\nallows to identify parallel content on multilingual\nwebsites. Bitextor is organised as a pipeline that\nperforms a sequence of steps to obtain parallel data\nfrom a list of URLs; for each of these steps, Bitextor\nsupports different approaches that require different\nresources. In this section, the specific configuration\nof Bitextor for this work is described, as well as the\nresulting corpora crawled from the Web.\nCrawling. Crawling is the first step of the\npipeline implemented in Bitextor and consists of\ndownloading any document containing text from\nthe websites specified by the user. We used wget7\nto crawl documents from 3 751 websites;8these\nwebsites were obtained by leveraging automatic-\nlanguage-identification metadata from the Com-\nmonCrawl corpus:9we consider those websites\nwith at least 5 kB of text in enand in sw.\nEvery website was crawled during a period of 12\nhours and only documents in enorswwere kept;\nCLD210was used for automatic language identifi-\ncation. Plain text was extracted from HTML/XML\nand, after this, sentence splitting was applied to\nevery document. From the collection of 3 751 pre-\nselected websites, 519 were not available at the time\n5For example, in OPUS one can find about 3M sentence pairs\nfor English–Icelandic and 7.6M sentence pairs for English–\nMaltese, whereas only 1.2M are available for en–sw.\n6http://data.statmt.org/gourmet/corpora/\nGoURMET-crawled.en-sw.zip\n7https://www.gnu.org/software/wget/\n8The list of crawled websites can be found in the hosts.gz\nfile accompanying the corpus.\n9https://commoncrawl.github.io/\ncc-crawl-statistics/plots/languages\n10https://github.com/CLD2Owners/cld2\nof crawling and, from the remaining 3 232, only 908\nended up containing data in both languages.\nDocument alignment. In this step, documents\nthat are likely to contain parallel data are identified.\nBitextor supports two strategies for document align-\nment: one based on bilingual lexicons and another\nbased on MT. The last option was not feasible in\nthis work as no high-quality MT system between\nswandenwas available; therefore, the first one\nwas used. This method combines information from\nbilingual lexicons, the HTML structure of the docu-\nments, and the URL to obtain a confidence score for\nevery pair of documents to be aligned (Espl `a-Gomis\nand Forcada, 2010). The bilingual lexicon used was\nautomatically obtained from the word alignments\nobtained with mgiza++ (Gao and V ogel, 2008) for\nthe following corpora: EUBookshop v2, Ubuntu\nand Tanzil (see Table 1). 
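The following is a generic sketch, under our own simplifying assumptions, of how a probabilistic bilingual lexicon can be derived from word-aligned sentence pairs by relative frequency; it only illustrates the kind of resource used for document alignment and is not the exact mgiza++/Bitextor recipe.

```python
# Generic sketch (assumptions ours): turn word-aligned sentence pairs into a
# probabilistic bilingual lexicon, i.e. relative frequencies p(target | source).
from collections import Counter, defaultdict

def build_lexicon(aligned_corpus):
    """aligned_corpus: iterable of (src_tokens, tgt_tokens, links),
    where links is a list of (src_index, tgt_index) alignment pairs."""
    counts = defaultdict(Counter)
    for src, tgt, links in aligned_corpus:
        for i, j in links:
            counts[src[i]][tgt[j]] += 1
    lexicon = {}
    for source_word, target_counts in counts.items():
        total = sum(target_counts.values())
        lexicon[source_word] = {t: n / total for t, n in target_counts.items()}
    return lexicon

# Tiny toy example with hypothetical alignments.
corpus = [(["the", "law"], ["sheria"], [(1, 0)]),
          (["a", "law"], ["sheria"], [(1, 0)])]
print(build_lexicon(corpus)["law"])   # {'sheria': 1.0}
```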
A total of 180 520 pairs\nof documents were obtained by using this method.\nSentence alignment. In this step, aligned docu-\nments are segmented and aligned at the sentence\nlevel. Two sentence-alignment tools are supported\nby Bitextor: Hunalign (Varga et al., 2007) and\nBLEUalign (Sennrich and V olk, 2010). We used\nHunalign because BLEUalign requires an MT sys-\ntem to be available. The same bilingual dictionary\nused for document alignment was provided to Hu-\nnalign in order to improve the accuracy of the align-\nment. After applying Hunalign, 2 051 678 unique\nsegment pairs were obtained.\nCleaning. Bicleaner11(S´anchez-Cartagena et al.,\n2018) was used to clean the raw corpora obtained\nafter sentence alignment. Cleaning implies remov-\ning the noisy sentence pairs that are either incor-\nrectly aligned or not in the expected languages.12\nBicleaner cleaning models require some language-\ndependent resources:\n\u000fTwo probabilistic bilingual dictionaries, one\nfor each direction for the language pair, built\nfrom the corpora used to build the bilingual\nlexica for document alignment.\n\u000fA parallel (ideally clean) corpus to train the\nregressor used to score the segment pairs in\nthe raw corpus: the preexisting GlobalV oices\nv2015 parallel corpus was used, as Bicleaner\n11https://github.com/bitextor/bicleaner/\n12This additional language checking is required as document-\nlevel language identification may be too general and small\nfragments in other languages can be included in the sentence-\naligned corpus.requires parallel data used to train the dictio-\nnaries and the regressor to be different.\n\u000fA collection of pairs of segments that are\nwrongly aligned to train a language model:\nfollowing Bicleaner’s documentation, this col-\nlection was obtained from the raw parallel cor-\npus by applying the “hard rules” implemented\nin Bicleaner.\nBicleaner was used to score all the sentence pairs in\nthe raw corpus with two different scores: one com-\ning from the regressor, which may be interpreted as\nthe probability that the pair of sentences are parallel,\nand one coming from the language model, which\nis the probability that one of the sentences in the\npair is malformed. After sampling a small fraction\nof the corpus, the score thresholds were set to 0.68\nand 0.5, respectively. The resulting parallel corpus\nconsisted of 156 061 pairs of segments.\nIn addition to the parallel corpus obtained after\ncleaning, a large amount of Swahili monolingual\ndata was obtained as a by-product of crawling and\nreleased as a monolingual corpus. Monolingual\ndata cleaning consisted of discarding those sen-\ntences not deemed fluent enough to be used for\nNMT training. Sentences were ranked by perplex-\nity computed by a character-based 7-gram language\nmodel and only the 6 million sentences with the\nlowest perplexity were kept. The language model\nwas trained13on the concatenation of the swside\nof the parallel corpora listed in Table 1, excluding\nGoURMET. Moreover, those sentences that were\nautomatically identified not to be in sw,14or con-\ntained more numeric or punctuation characters than\nalphabetic characters were also discarded.\n4 Contrasts and challenges for MT\nSwahili belongs to a very large African language\nfamily, the Niger–Congo family, and more specifi-\ncally to the Bantu group. 
Swahili is currently written in the Latin script, with no diacritics; the apostrophe is used in the seldom-occurring combination ng', which represents the sound of ng in singer (not finger); one common example is ng'ombe ('cow'). \n
Swahili is morphologically and syntactically quite different from English, in spite of the fact that both are subject–verb–object languages. Swahili verb morphology is rich and agglutinative, and a large number of morphologically-marked nominal genders participate in nominal and verbal agreement. Table 3 provides a summary of the main linguistic contrasts between en and sw; some examples are from Perrott (1965) and the table is mostly based on https://wals.info. \n
13The language model was trained with KenLM (Heafield, 2011) with modified Kneser-Ney smoothing (Ney et al., 1994). \n
14Automatic language identification was carried out by using CLD3: https://github.com/google/cld3 \n
The challenges in building an MT system for news translation between en and sw are twofold. On the one hand, parallel corpora are rather scarce. On the other hand, a number of challenges stem from the linguistic divergences between the two languages: \n
- The absence of definite and indefinite articles in sw may make the generation of grammatical en tricky. \n
- Genders in sw do not mark sex (in fact, all nouns designating people are in the same gender or class); generating the correct en 3rd-person pronouns and possessives may be challenging. \n
- When translating into sw, the presence of many noun classes and their agreement inside noun phrases and with verbal affixes may be an important obstacle. \n
- Swahili interrogatives have to be reordered when translating to en. \n
- Fortunately, most word-order differences seem to occur locally (basically inside the noun phrase). This may only be a problem for longer noun phrases. \n
5 Neural machine translation model \n
This section describes the steps followed to build en→sw and sw→en NMT systems from the corpora described in Section 2. We first describe corpus preprocessing and give details about the NMT architecture used and the process followed to choose it. Secondly, we present the strategies followed in order to take advantage of monolingual corpora and to integrate linguistic information into the NMT systems. \n
5.1 Corpus preparation \n
In order to properly train NMT systems, we need a development corpus to help the training algorithm decide when to finish, and a test corpus that allows us to estimate the quality of the systems. \n
We obtained both of them from the GlobalVoices parallel corpus. We randomly selected 4 000 parallel sentences from the concatenation of GlobalVoices-v2015 and GlobalVoices-v2017q3, and split them into two halves (with 2 000 sentences each), which were used respectively as development and test corpora. The half reserved to be used as the test corpus was further filtered to remove the sentences that could be found in any of the monolingual corpora. \n
The remaining sentences from GlobalVoices-v2015 and GlobalVoices-v2017q3, together with the other parallel corpora listed in Table 1, were deduplicated to obtain the final parallel corpus used to train the NMT systems. \n
All corpora were tokenised with the Moses tokeniser (Koehn et al., 2007) and truecased. Parallel sentences with more than 100 tokens on either side were removed. Words were split into sub-word units with byte pair encoding (BPE; Sennrich et al., 2016c). 
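As a rough illustration of this preprocessing stage, the sketch below shows only the length filter applied before BPE; the function name and the whitespace-token assumption are ours, and the actual pipeline relied on the Moses scripts and a BPE tool rather than on this code.

```python
# Minimal sketch (ours, for illustration) of the pre-BPE length filter:
# drop parallel sentence pairs with more than 100 tokens on either side.
MAX_TOKENS = 100

def filter_long_pairs(pairs, max_tokens=MAX_TOKENS):
    """pairs: iterable of (english_sentence, swahili_sentence) strings,
    assumed already tokenised (whitespace-separated tokens)."""
    for en, sw in pairs:
        if len(en.split()) <= max_tokens and len(sw.split()) <= max_tokens:
            yield en, sw

pairs = [("this is fine .", "hii ni sawa ."),
         ("word " * 150, "neno " * 150)]          # second pair is too long
print(len(list(filter_long_pairs(pairs))))        # 1
```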
Table 4 reports the size of the corpora\nafter this pre-processing.\n5.2 Neural machine translation architecture\nWe trained the NMT models with the Marian\ntoolkit (Junczys-Dowmunt et al., 2018). Since train-\ning hyper-parameters can have a large impact in the\nquality of the resulting system (Lim et al., 2018),\nwe carried out a grid search in order to find the\nbest hyper-parameters for each translation direc-\ntion. We explored both the Transformer (Vaswani\net al., 2017) and recurrent neural network (RNN)\nwith attention (Bahdanau et al., 2014) architectures.\nOur starting points were the Transformer hyper-\nparameters15described by Sennrich et al. (2017)\nand the RNN hyper-parameters16described by Sen-\nnrich et al. (2016a).\nFor each translation direction and architecture,\nwe explored the following hyper-parameters:\n\u000fNumber of BPE operations: 15 000 ,30 000 ,\nor85 000 .\n\u000fBatch size: 8 000 tokens (trained on one GPU)\nor16 000 tokens (trained on two GPUs).\n\u000fWhether to tie the embeddings for both lan-\nguages (Press and Wolf, 2017)\nWe trained a system for each combination of\nhyper-parameters, using only the parallel data de-\nscribed above. Early stopping was based on per-\nplexity on the development set and patience was set\nto5. We selected the checkpoint that obtained the\n15https://github.com/marian-nmt/\nmarian-examples/tree/master/\nwmt2017-transformer\n16https://github.com/marian-nmt/\nmarian-examples/tree/master/\ntraining-basics\nFeature Value in English Value in Swahili Examples\nCoding of plural-\nity in nounsPlural suffix Plural prefix kichwa (‘head’), vichwa (‘heads’); jicho (‘eye’),\nmacho(‘eyes’)\nNumber of cat-\negories encoded\nin a single-word\nverbFew (number, per-\nson, tense)Many (“STROVE”,\nthat is, number and\nperson of subject,\ntense, aspect and\nmood, optional\nrelatives, number and\nperson of object, verb\nroot, and optional\nextensions)nimekinunua kitabu ‘I have bought the book’, where:\nni‘I’, subject; me, present perfect; ki, ‘it’, object;\nnunua , ‘buy’, verb root.\nDefinite articles Definite word dis-\ntinct from demon-\nstrativeDemonstrative (sel-\ndom) used as definite\narticlekitabu (‘book’, ‘the book’, ‘a book’).\nNoun Phrase Con-\njunctionAnd different from\nwithAnd identical to with Lete chai na maziwa (‘Bring tea and milk’); Yesu\nalikuja na Baba yake (‘Jesus came with his Father’).\nInflectional mor-\nphologySuffixing Mainly prefixing kitabu (‘book’), vitabu (‘books’); nilinunua (‘I\nbought’), ulinunua (‘You bought’); but jenga (‘build’),\njengwa (‘be built’)\nReduplication No productive redu-\nplicationProductive full and\npartial reduplicationMimi ninasoma kitabu ’I am reading the book’; mimi\nninasomasoma kitabu ’I am reading the book bit by\nbit’\nNumber of gen-\ndersThree, sex-based,\nonly in 3rd person\nsingular pronouns\nand possessivesMany, not based on\nsex (called classes )kitabu ‘book’ ( ki-vi-class): plural vitabu ‘books’ ;\nmtoto ‘child’ ( m-wa -class): plural watoto ‘children’\n; etc. Note that adjectives and verbs have to agree:\nkitabu kidogo ‘small book’, vitabu vidogo ‘small\nbooks’; mtoto mdogo ‘small child’, etc.\nOrder of genitive\nand nounNo dominant order Noun–genitive gari la mama ‘Mom’s ( mama ) car ( gari)’;paa la\nnyumba ‘The roof ( paa) of the house ( nyumba )’.\nOrder of adjetive\nand nounadjective–noun noun–adjective mtoto mdogo ‘small child’, lit. ‘child small’\nOrder of demon-\nstrative and noundemonstrative–\nnounnoun–demonstrative gari hili ‘this car’, lit. 
‘car this’\nOrder of numeral\nand nounnumeral–noun noun–numeral vitabu viwili (‘two books’, lit. ‘books two’)\nExpression of\nPronominal\nSubjectsObligatory pronouns\nin subject positionSubject affixes on\nverbNilinunua (‘I bought’), ulinunua (‘You bought’)\nNegation Particle or construc-\ntionNegative form of verb Ninasoma (‘I am reading’), Sisomi (‘I am not read-\ning’); Unasoma (‘You are reading’), husomi (‘You are\nnot reading’);\nPosition of Inter-\nrogative Phrases\nin Content Ques-\ntionsInitial interrogative\nphraseNot initial interroga-\ntive phraseUnasoma vitabu (‘You are reading books’); Unasoma\nnini? (‘What are you reading’, lit. ‘you are reading\nwhat?’)\nPolar questions Change in word or-\nder, use of auxil-\niariesNo change in word or-\nderAmesoma (‘He has read’); Amesoma? (‘Has he read?’)\nComparative Comparative form\nof adjective (‘-er’) or\n‘more’Absolute form of ad-\njectiveVirusi ni ndogo (‘A virus is small’) Virusi ni ndogo\nkuliko bakteria (‘A virus is smaller than a bacterium’,\nlit. ‘A virus is small where there is a bacterium’)\nPredicative Pos-\nsession’have’ conjunctional (‘to be\nwith’)Nina swali (‘I have a question’, lit. ‘I-am-with ques-\ntion’)\nTable 3: A summary of linguistic contrasts between English and Swahili.\nhighest BLEU (Papineni et al., 2002) score on the\ndevelopment set.\nWe obtained the highest test BLEU scores for\nen!swwith an RNN architecture, 30 000 BPE\noperations, tied embeddings and single GPU, while\nthe highest ones for sw!enwere obtained with a\nTransformer architecture, 30 000 BPE operations,\ntied embeddings and two GPUs.5.3 Leveraging monolingual data\nOnce the best hyper-parameters were identified, we\ntried to improve the systems by making use of the\nmonolingual corpora via back-translation. Back-\ntranslation (Sennrich et al., 2016b) is a widespread\nmethod for integrating target-language (TL) mono-\nlingual corpora into NMT systems. The quality of\na system trained on back-translated data is usually\nCorpus Sentences en tokens sw tokens\nparallel 424 821 7 536 537 6 191 959\nNewsCrawl\n(en)40 000 000 796 199 072 -\nNewsCrawl\n(sw)414 598 - 8 377 157\nGoURMET\nmono (sw)5 687 000 -174 867 482\ndevelopment 2 000 41 726 42 037\ntest (en-sw) 1 863 41 097 41 188\ntest (sw-en) 1 969 43 149 43 174\nTable 4: Size of the corpora used to build the NMT systems\nafter preprocesing. For the enNewsCrawl corpus, only the\nsize of the subset that has been used for training is displayed.\nToken counts were calculated before BPE splitting.\ncorrelated with the quality of the system that trans-\nlates the TL monolingual corpus into the source\nlanguage (SL) (Hoang et al., 2018, Sec. 3). We\ntook advantage of the fact that we are building sys-\ntems for both the en!swandsw!endirections\nand applied an iterative back-translation (Hoang et\nal., 2018) algorithm that simultaneously leverages\nmonolingual swand monolingual endata. 
It can\nbe outlined as follows:\n1.With the best identified hyper-parameters for\neach direction we built a system using only\nparallel data.\n2.enandswmonolingual data were back-\ntranslated with the systems built in the pre-\nvious step.\n3.Systems in both directions were trained on the\ncombination of the back-translated data and\nthe parallel data.\n4.Steps 2–3were re-executed 3more times.\nBack-translation in step 2was always carried\nout with the systems built in the most recent ex-\necution of step 3, hence the quality of the sys-\ntem used for back-translation improved with\neach iteration.\nTheswmonolingual corpus used in step 2was\nthe GoURMET monolingual corpus. The enmono-\nlingual corpus was a subset of the NewsCrawl cor-\npus, the size of which was duplicated after each\niteration. It started at 5million sentences.\nSince the swNewsCrawl corpus was made avail-\nable near the end of the development of our MT\nsystems, it could not be used during the iterative\nback-translation process. Nevertheless, we added it\nafterwards: the swNewsCrawl was back-translated\nwith the last available sw!ensystem obtained af-\nter completing all the iterations, concatenated to theexisting data for the en!swdirection and the MT\nsystem was re-trained.\n5.4 Integrating linguistic information\nIn addition to the corpora described above, linguis-\ntic information encoded in a more explicit represen-\ntation was also employed to build the MT systems.\nIn particular, we explored the interleaving (Nade-\njde et al., 2017) of linguistic tags in the TL side of\nthe training corpus with the aim of enhancing the\ngrammatical correctness of the translations.\nMorphological taggers were used to obtain the in-\nterleaved tags added to the training corpus. The sw\ntext was tagged with TreeTagger (Schmid, 2013).\nWe used a model17trained on the Helsinki Corpus\nof Swahili.18Theentext was tagged with the Stan-\nford tagger (Qi et al., 2018), which was trained on\nthe English Web Treebank (Silveira et al., 2014).\nFigure 1 shows examples of en!swand\nsw!entraining parallel sentences with inter-\nleaved tags. While the tags returned by the sw\ntagger were just part-of-speech tags, entags con-\ntained also morphological inflection information.\nInterleaved tags are removed from the final transla-\ntions produced by the system.\n6 Evaluation\nThis section reports the scores obtained on the test\ncorpus using automatic evaluation metrics. It then\ndescribes the manual evaluation we are conduct-\ning at the time of writing these lines and provides\npreliminary results.\n6.1 Automatic evaluation\nTable 5 shows the BLEU and chrF2++ scores, com-\nputed on the test set, for the different steps in the\ndevelopment of the MT systems. All systems were\ntrained with the hyper-parameters described in Sec-\ntion 5.2. 
As a reference, we also show the scores\nobtained by the translation obtained with Google\nTranslate19on 6th March 2020 using the web inter-\nface.\nIt is worth noting the positive effect of\nadding monolingual data during the iterative back-\ntranslation iterations and that interleaved tags also\nhelp to improve the systems according to the auto-\nmatic evaluation metrics.\n17Available at https://www.cis.uni-muenchen.\nde/˜schmid/tools/TreeTagger/\n18https://korp.csc.fi/download/HCS/a-v2/\nhcs-a-v2-dl\n19https://translate.google.com/\nen(SL): he ’s studying law at No @@tre D @@ame .\nsw(TL): VFIN A@@nj@@ifunza Nsheria PRON hukoPROPNAME No@@trePROPNAME D@@ame\nsw(SL): A @@nj@@ifunza sheria huko Notre Dame\nen(TL): PRON|Nom|Masc|Sing|3|Prs heAUX|Ind|Sing|3|Pres|Fin ’sVERB|Pres|Part studying\nNOUN|Sing lawADP atPROPN|Sing No@@trePROPN|Sing D@@ame\nFigure 1: Examples of parallel sentences after interleaving linguistic tags. The @@symbol is placed at the end of each BPE\nsub-word when it is not the last sub-word of a token. The tag corresponding to the morphological analysis of a token is interleaved\nbefore the first sub-word unit of the token.\nStrategy it. BLEU chrF++\nen!sw\nonly parallel - 22:23 46:34\niter. backt. 1 25:59 50:08\niter. backt. 2 26:22 50:91\niter. backt. 3 26:36 51:09\niter. backt. 4 26:58 51:39\n+ NewsCrawl 4 26:77 51:46\n+ NewsCrawl + tags 4 27:42 52:11\nGoogle Translate - 23:24 48:80\nsw!en\nonly parallel - 22:66 44:62\niter. backt. 1 29:29 51:19\niter. backt. 2 29:70 51:82\niter. backt. 3 29:99 51:98\niter. backt. 4 30:19 52:10\n+ tags 4 30:55 52:72\nGoogle Translate - 30:36 53:32\nTable 5: Automatic evaluation results obtained for the dif-\nferent development steps of the MT systems: only parallel\nstands for the systems trained only on parallel data with the\nbest hyper-parameters; iter. backt. represents systems obtained\nafter iteratively back-translating monolingual data (iteration\nnumber is shown in column it.);+NewsCrawl means that the\nswNewsCrawl corpus was back-translated and added; and +\ntags indicates that TL linguistic tags were interleaved.\nFinally, our system clearly outperforms Google\nTranslate for the en!swdirection, while their per-\nformances are close for the opposite direction. We\nnoticed that the sw!enGoogle Translate system\nimproved dramatically since we built our systems,\nwhich suggests that their systems may be trained\non data that was not available at OPUS website at\nthat time.\n6.2 Manual evaluation\nManual evaluation requires the use of humans to\ngive subjective feedback on the quality of transla-\ntion, either directly or indirectly. All manual evalua-\ntion undertaken within the GoURMET project uses\nin-domain data, i.e. test data derived from news\nsources. Two types of subjective evaluation have\nbeen selected and applied in order to generate the\nmost insight for the media partners:\n\u000fDirect assessment (Graham et al., 2016a;\nGraham et al., 2016b) (DA) is used to testen!sw. This corresponds to the content cre-\nation use case which will use translation pre-\ndominantly in this direction, and where the\ncorrectness of the translation is key.\n\u000fGap filling (Forcada et al., 2018) (GF) is used\nto test sw!en. 
This corresponds to the media\nmonitoring use case which will use translation\nalmost exclusively in this direction and where\ngetting the gist of the meaning of a sentence\nis enough to fulfil the use-case, perfect trans-\nlation of sentence structure is less important.\nCustom interfaces were created to support both\nevaluations; see figures 2 and 3 for DA and GF,\nrespectively.\nEvaluators were recruited from within the media\npartner organisations to complete the DA and GF\ntasks. Evaluators were required to have an excel-\nlent level of comprehension in the TL (i.e. swfor\nDA and enfor GF) and precedence was given to\njournalists who write exclusively or predominately\nin one of the two target languages.\nMedia partners (BBC, DW) prepared test data\nusing previously published articles. For DA this\nconsisted of 205 sentences drawn at random from\nsix different articles originally published in enby\nDW. The test data was further augmented with 5\nsentences written in the TL by a human and used\nascalibration examples resulting in a total of 210\nsentences shown to each evaluator in random order.\nAll evaluators were asked to rate the quality of\nthe translated sentence on a sliding scale from 0\nto 100 for two criteria according to the statement\n“For the pair of sentences below read the text and\nstate how much you agree that: Q1) The black text\nadequately expresses the meaning of the grey text\nand Q2) The black text is a well written phrase or\nsentence that is grammatically and idiomatically\ncorrect ”. The ratings for the first five sentences\nwere discarded as practice evaluations while the\nresults for the five sentences used for calibration\nwere discarded, leaving 200 pairs of results for each\nevaluator. Four evaluators completed the task.\nFigure 2: Custom Direct Assessment interface.\nFor GF 30 sentences were selected from six dif-\nferent articles originally published in swby DW.\nEach sentence was translated into enby a pro-\nfessional translator and it was ensured that once\ntranslated, each sentence was 15 words or more in\nlength. For each sentence in en, 20% of the content\nwords were removed, making sure there were no\ntwo consecutive gaps, typically leaving between 1\nand 8 missing words in each sentence, averaging\n2.67, for a total of 70 different missing-word prob-\nlems. Each sentence in swwas translated into en\nby the GoURMET MT system described here, and\nGoogle Translate. The work of seventeen human\nevaluators was collected and their work on each of\nthe 30 sentences was evaluated in three different\nways: one evaluator saw the gapped sentence with\nno hint; one evaluator saw the gapped sentence with\nthe GoURMET MT output as a hint; finally, one\nevaluator saw the gapped sentence with the Google\nTranslate output as a hint. A total of 210 different\nmissing-word/hint type configurations were there-\nfore evaluated by an average of 17/3=5.67 evalu-\nators. Sentences were distributed in such a way\nthat no evaluator ever saw the same sentence twice.\nThe GF evaluation requires the evaluator to fill in\nthe missing words using the hint (if present). The\naccuracy is simply a success rate : the fraction of\ngaps correctly filled.\n6.3 Manual evaluation results\nGap-filling (GF) success rates are shown in Ta-\nble 6. As may be seen, Google Translate seems to\nbe more helpful in this gisting task than the sys-\ntem created in this paper. 
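For clarity, the success rate can be computed as in the small sketch below; the exact-match criterion and the grouping by hint type are our own illustrative assumptions, not a description of the evaluation tooling actually used.

```python
# Sketch (ours): gap-filling success rate as the fraction of gaps whose
# filled-in word matches the reference, grouped by hint type.
from collections import defaultdict

def success_rates(answers):
    """answers: iterable of (hint_type, filled_word, reference_word)."""
    correct = defaultdict(int)
    total = defaultdict(int)
    for hint, filled, reference in answers:
        total[hint] += 1
        correct[hint] += int(filled.strip().lower() == reference.strip().lower())
    return {hint: correct[hint] / total[hint] for hint in total}

answers = [("no_hint", "bought", "bought"), ("no_hint", "sold", "bought"),
           ("gourmet", "bought", "bought"), ("google", "bought", "bought")]
print(success_rates(answers))   # {'no_hint': 0.5, 'gourmet': 1.0, 'google': 1.0}
```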
To get an idea of how\nsignificant this difference is, Figure 4 shows a box-\nand-whisker plot of the distribution of success rates\nfor each hint type by evaluator. As may be seen, the\nboxes for Google Translate and GoURMET clearly\nFigure 3: Custom Gap Filling interface.\nHint type Success rate\nNo hint 26.45%\nGoogle 60.60%\nGoURMET 54.34%\nTable 6: Gap-filling success rates for each hint type\noverlap, meaning that the difference in usefulness\nis not significant. However we also notice a slight\noverlap between the GoURMET success-rate distri-\nbution and that when there is no hint (NONE); this\noverlap does not occur with Google Translate.\nDirect asssesment (DA): evaluators 1 and 2\nscored the calibration sentences with values close\nto the expected ones (0 or 100 depending on the\nsentence), but evaluators 3 and 4 provided relatively\ninconsistent scores. Besides that, there is a weak\npositive correlation among the evaluators’ answers\n(Pearson correlation coefficients between 0.22 and\n0.46 for Q1, and between 0.24 and 0.49 for Q2, the\nhighest values corresponding to evaluators 1 and\n2 in both cases). Consequently, Table 7 shows the\naverage score per evaluator. Unfortunately, these\nscores do not allow us to extract reliable conclu-\nsions.\n7 Concluding remarks\nWe have described the development and evaluation\nof an NMT system to translate in the news domain\nbetween English and Swahili in both directions. We\nhave also described the crawling of a new parallel\ncorpus from the Internet which we have made pub-\nlicly available.\nWe performed an automatic evaluation of both\nsystems. According to it, the en!swNMT sys-\ntem performs better than Google Translate , whereas\nthesw!ensystems performs on par with it. In\naddition, the sw!enNMT system was manually\nevaluated to ascertain it usefulness for gisting pur-\nposes, and the en!swNMT system as regards\nFigure 4: Gap-filling success rate distribution across evalua-\ntors for each hint type.\nQ1 Q2\nEvaluator 1 77:65\u00063:97 70:80\u00064:30\nEvaluator 2 47:30\u00066:20 52:94\u00065:97\nEvaluator 3 48:42\u00063:28 60:20\u00063:75\nEvaluator 4 54:40\u00063:92 56:53\u00064:02\nTable 7: Average score and confidence intervals (estimated\nvia standard significance testing) for questions Q1 and Q2 in\nthe direct assessment evaluation.\nits fluency and adequacy. The preliminary results\nof both evaluations show that the sw!ensystem\nperforms similarly to Google Translate (which is\nconsistent with the automatic evaluation), and that\ntheen!swsystem needs to be further evaluated\nbecause evaluators provided quite different scores.\nAs future work, and in view of the scarcity of\nbilingual resources available, we plan to try ap-\nproaches based on monolingual corpora (Artetxe et\nal., 2018). We also plan to study if a correct seg-\nmentation of verbs, which are very rich and com-\nplex (see Table 3), as a pre-processing step helps\nimprove performance.\nAcknowledgements: Work funded by the Euro-\npean Union’s Horizon 2020 research and innovation\nprogramme under grant agreement number 825299,\nproject Global Under-Resourced Media Translation\n(GoURMET). We thank the editors of the SAWA\ncorpus for letting us use it for training. We also\nthank Wycliffe Muia (BBC) for help with Swahili\nexamples and DW for helping in the manual evalu-\nation.\nReferences\nAgi´c,ˇZ. and I. Vuli ´c. 2019. JW300: A wide-coverage\nparallel corpus for low-resource languages. 
In Pro-\nceedings of the 57th Annual Meeting of the Asso-ciation for Computational Linguistics , pages 3204–\n3210, Florence, Italy, July.\nArtetxe, M., G. Labaka, and E. Agirre. 2018. Unsuper-\nvised statistical machine translation. arXiv preprint\narXiv:1809.01272 .\nBahdanau, D., K. Cho, and Y . Bengio. 2014. Neural\nmachine translation by jointly learning to align and\ntranslate. CoRR , abs/1409.0473.\nBojar, O., C. Federmann, M. Fishel, Y . Graham, B. Had-\ndow, P. Koehn, and C. Monz. 2018. Findings of the\n2018 conference on machine translation (WMT18).\nInProceedings of the Third Conference on Machine\nTranslation: Shared Task Papers , pages 272–303,\nBelgium, Brussels, October. Association for Compu-\ntational Linguistics.\nDe Pauw, G., P.W. Wagacha, and G.-M. De Schryver.\n2011. Exploring the SAWA corpus: collection and\ndeployment of a parallel corpus English–Swahili.\nLanguage resources and evaluation , 45(3):331.\nEspl`a-Gomis, M., M.L. Forcada, G. Ram ´ırez-S ´anchez,\nand H. Hoang. 2019. ParaCrawl: Web-scale parallel\ncorpora for the languages of the EU. In Proceed-\nings of Machine Translation Summit XVII Volume 2:\nTranslator, Project and User Tracks , pages 118–119,\nDublin, Ireland, August.\nEspl`a-Gomis, M and M.L. Forcada. 2010. Combin-\ning content-based and url-based heuristics to harvest\naligned bitexts from multilingual sites with bitex-\ntor. The Prague Bulletin of Mathematical Linguis-\ntics, 93:77–86.\nForcada, M.L., C. Scarton, L. Specia, B. Haddow,\nand A. Birch. 2018. Exploring gap filling as a\ncheaper alternative to reading comprehension ques-\ntionnaires when evaluating machine translation for\ngisting. CoRR , abs/1809.00315.\nGao, Q. and S. V ogel. 2008. Parallel implementa-\ntions of word alignment tool. In Software Engineer-\ning, Testing, and Quality Assurance for Natural Lan-\nguage Processing , pages 49–57, Columbus, Ohio,\nUSA, June.\nGraham, Y ., T. Baldwin, M. Dowling, M. Eskevich,\nT. Lynn, and L. Tounsi. 2016a. Is all that glit-\nters in machine translation quality estimation really\ngold? In Proceedings of the 26th International Con-\nference on Computational Linguistics: Technical Pa-\npers, pages 3124–3134, Osaka, Japan.\nGraham, Y ., T. Baldwin, A. Moffat, and J. Zobel.\n2016b. Can machine translation systems be evalu-\nated by the crowd alone. Natural Language Engi-\nneering , 23(1):3–30.\nHeafield, K. 2011. KenLM: faster and smaller lan-\nguage model queries. In Proceedings of the EMNLP\n2011 Sixth Workshop on Statistical Machine Transla-\ntion, pages 187–197, Edinburgh, Scotland, UK, July.\nHoang, V .C.D., P. Koehn, G. Haffari, and T. Cohn.\n2018. Iterative back-translation for neural machine\ntranslation. In Proceedings of the 2nd Workshop on\nNeural Machine Translation and Generation , pages\n18–24, Melbourne, Australia, July.\nJunczys-Dowmunt, M., R. Grundkiewicz, T. Dwojak,\nH. Hoang, K. Heafield, T. Neckermann, F. Seide,\nU. Germann, A. Fikri Aji, N. Bogoychev, A.F.T. Mar-\ntins, and A. Birch. 2018. Marian: Fast neural ma-\nchine translation in C++. In Proceedings of ACL\n2018, System Demonstrations , pages 116–121, Mel-\nbourne, Australia, July.\nKoehn, P., H. Hoang, A. Birch, C. Callison-Burch,\nM. Federico, N. Bertoldi, B. Cowan, W. Shen,\nC. Moran, R. Zens, C. Dyer, O. Bojar, A. Constantin,\nand E. Herbst. 2007. Moses: Open source toolkit\nfor statistical machine translation. 
In Proceedings of\nthe 45th Annual Meeting of the Association for Com-\nputational Linguistics Companion Volume Proceed-\nings of the Demo and Poster Sessions , pages 177–\n180, Prague, Czech Republic, June.\nLim, R.V ., K. Heafield, H. Hoang, M. Briers, and A.D.\nMalony. 2018. Exploring hyper-parameter optimiza-\ntion for neural machine translation on GPU architec-\ntures. CoRR , abs/1805.02094.\nNadejde, M., S. Reddy, R. Sennrich, T. Dwojak,\nM. Junczys-Dowmunt, P. Koehn, and A. Birch. 2017.\nPredicting target language CCG supertags improves\nneural machine translation. In Proceedings of the\nSecond Conference on Machine Translation, Volume\n1: Research Papers , pages 68–79, Copenhagen, Den-\nmark, September.\nNey, H., U. Essen, and R. Kneser. 1994. On structur-\ning probabilistic dependences in stochastic language\nmodelling. Computer Speech & Language , 8(1):1 –\n38.\nPapineni, K., S. Roukos, T. Ward, and W.-J. Zhu. 2002.\nBLEU: a method for automatic evaluation of ma-\nchine translation. In Proceedings of the 40th Annual\nMeeting on Association for Computational Linguis-\ntics, pages 311–318, Philadelphia, PA, USA, July.\nPerrott, D.V . 1965. Teach Yourself Swahili . English\nUniversities Press.\nPress, O. and L. Wolf. 2017. Using the output em-\nbedding to improve language models. In Proceed-\nings of the 15th Conference of the European Chap-\nter of the Association for Computational Linguistics:\nVolume 2, Short Papers , pages 157–163, Valencia,\nSpain, April.\nQi, P., T. Dozat, Y . Zhang, and C.D. Manning. 2018.\nUniversal dependency parsing from scratch. In Pro-\nceedings of the CoNLL 2018 Shared Task: Multilin-\ngual Parsing from Raw Text to Universal Dependen-\ncies, pages 160–170, Brussels, Belgium, October.S´anchez-Cartagena, V .M., M. Ba ˜n´on, S. Ortiz-Rojas,\nand G. Ram ´ırez-S ´anchez. 2018. Prompsit’s submis-\nsion to wmt 2018 parallel corpus filtering shared task.\nInProceedings of the Third Conference on Machine\nTranslation, Volume 2: Shared Task Papers , Brussels,\nBelgium, October.\nSchmid, H. 2013. Probabilistic part-of-speech tagging\nusing decision trees. In New methods in language\nprocessing , page 154.\nSennrich, R. and M. V olk. 2010. MT-based sentence\nalignment for ocr-generated parallel texts. In Pro-\nceedings of the Ninth Conference of the Association\nfor Machine Translation in the Americas , Denver,\nUSA, October.\nSennrich, R., B. Haddow, and A. Birch. 2016a. Ed-\ninburgh Neural Machine Translation Systems for\nWMT 16. In Proceedings of the First Conference\non Machine Translation , pages 371–376, Berlin, Ger-\nmany, August.\nSennrich, R., B. Haddow, and A. Birch. 2016b. Improv-\ning neural machine translation models with monolin-\ngual data. In Proceedings of the 54th Annual Meet-\ning of the Association for Computational Linguistics\n(Volume 1: Long Papers) , pages 86–96, Berlin, Ger-\nmany, August.\nSennrich, R., B. Haddow, and A. Birch. 2016c. Neu-\nral machine translation of rare words with subword\nunits. In Proceedings of the 54th Annual Meeting of\nthe Association for Computational Linguistics (Vol-\nume 1: Long Papers) , volume 1, pages 1715–1725,\nBerlin, Germany, August.\nSennrich, R., A. Birch, A. Currey, U. Germann,\nB. Haddow, K. Heafield, A.V . Miceli Barone, and\nP. Williams. 2017. The University of Edinburgh’s\nNeural MT Systems for WMT17. In Proceedings of\nthe Second Conference on Machine Translation, Vol-\nume 2: Shared Task Papers , pages 389–399, Copen-\nhagen, Denmark, September.\nSilveira, N., T. Dozat, M.-C. de Marneffe, S. 
Bowman, M. Connor, J. Bauer, and C.D. Manning. 2014.
A gold standard dependency corpus for English.
In Proceedings of the Ninth International Conference
on Language Resources and Evaluation (LREC-2014),
pages 2897–2904, Reykjavik, Iceland, May.
Varga, D., P. Halácsy, A. Kornai, V. Nagy, L. Németh,
and V. Trón. 2007. Parallel corpora for medium density
languages. Amsterdam Studies in the Theory and
History of Linguistic Science Series 4, 292:247.
Vaswani, A., N. Shazeer, N. Parmar, J. Uszkoreit,
L. Jones, A.N. Gomez, Ł. Kaiser, and I. Polosukhin.
2017. Attention is all you need. In Advances in Neural
Information Processing Systems 30, pages 5998–6008.",
"main_paper_content": null
}
|
{
"decision": "Unknown",
"reviews": []
}
| 0 | 0 |
[] |
[] |
{
"id": "opwAhRCRZb3",
"year": null,
"venue": "EAMT 2014",
"pdf_link": "https://aclanthology.org/2014.eamt-1.4.pdf",
"forum_link": "https://openreview.net/forum?id=opwAhRCRZb3",
"arxiv_id": null,
"doi": null
}
|
{
"title": "An efficient method to assist non-expert users in extending dictionaries by assigning stems and inflectional paradigms to unknknown words",
"authors": [
"Miquel Esplà-Gomis",
"Víctor M. Sánchez-Cartagena",
"Felipe Sánchez-Martínez",
"Rafael C. Carrasco",
"Mikel L. Forcada",
"Juan Antonio Pérez-Ortiz"
],
"abstract": "Miquel Esplà-Gomis, Víctor M. Sánchez-Cartegna, Felipe Sánchez-Martínez, Rafael C. Carrasco, Mikel L. Forcada, Juan Antonio Pérez-Ortiz. Proceedings of the 17th Annual conference of the European Association for Machine Translation. 2014.",
"keywords": [],
"raw_extracted_content": "An efficient method to assist non-expert users in extending dictionaries by\nassigning stems and inflectional paradigms to unknown words\nMiquel Espl `a-Gomis\[email protected]\nRafael C. Carrasco\[email protected]´ıctor M. S ´anchez-Cartagena\[email protected]\nMikel L. Forcada\[email protected]\nDept. de Llenguatges i Sistemes Inform `atics, Universitat d’Alacant, 03071 Alacant, SpainFelipe S ´anchez-Mart ´ınez\[email protected]\nJuan Antonio P ´erez-Ortiz\[email protected]\nAbstract\nA method is presented to assist users with\nno background in linguistics in adding the\nunknown words in a text to monolingual\ndictionaries such as those used in rule-\nbased machine translation systems. Adding\na word to these dictionaries requires identi-\nfying its stem and the inflection paradigm\nto be used in order to generate all its word\nforms. Our method is based on a previous\ninteractive approach in which non-expert\nusers were asked to validate whether some\ntentative word forms were correct forms\nof the new word; these validations were\nthen used to determine the most appropriate\nstem and paradigm. The previous approach\nwas based on a set of intuitive heuristics\ndesigned both to obtain an estimate of the\neligibility of each candidate stem/paradigm\ncombination and to determine the word\nform to be validated at each step. Our new\napproach however uses formal models for\nboth tasks (a hidden Markov model to esti-\nmate eligibility and a decision tree to select\nthe word form) and achieves significantly\nbetter results.\n1 Introduction\nCreation of the linguistic data (such as mono-\nlingual dictionaries, bilingual dictionaries, trans-\nfer rules, etc.) required by rule-based machine\ntranslation (RBMT) systems has usually involved\nteams of trained linguists. However, development\ncosts could be significantly reduced by involving a\nbroader group of non-expert users in the extension\nc\r2014 The authors. This article is licensed under a Creative\nCommons 3.0 licence, no derivative works, attribution, CC-\nBY-ND.of these resources. This may include, for instance,\nthe very same users of the machine translation (MT)\nsystem or accidental collaborators recruited through\ncrowdsourcing platforms (Wang et al., 2013). The\nscenario considered in this paper is that of non-\nexpert users (in a general sense) who have to in-\ntroduce into the two monolingual dictionaries1of\na RBMT system the unknown words found in an\ninput text so that the system is subsequently able to\ncorrectly translate them.2Note, however, that our\nmethod could be applied to the addition of entries\ninto the morphological dictionaries used in many\nother natural language processing applications. 
The\nobjective of our work is to obtain a system which\ncan be used not only to add the particular unknown\nword form (for example, wants ) to the dictionary,\nbut also to assist in discovering an appropriate stem\nand a suitable inflection paradigm so that all the\nword forms of the unknown word and their associ-\nated morphological inflection information (such as\nwants, verb, present, 3rd person orwanting, verb,\ngerund ) can be inserted as well.\nInflection paradigms are commonly introduced\nin RBMT systems in order to group regularities\nin the inflection of a set of words;3a paradigm is\nusually defined as a collection of suffixes and their\ncorresponding morphological information; e.g., the\nparadigm assigned to many common English verbs\nindicates that by adding the suffix -ingto the stem,4\n1One source-language dictionary used for morphological anal-\nysis and one target-language dictionary used for morphological\ngeneration.\n2It could also happen that the word form is not completely un-\nknown, but it is assigned to a different paradigm; for example,\nthe word flycould already be included in a dictionary as a verb,\nbut a user may need to insert it as a noun.\n3Paradigms ease the management of dictionaries in two ways:\nby reducing the quantity of information that needs to be stored,\nand by simplifying revision and validation because of the\nexplicit encoding of regularities in the dictionary.\n4The stem is the part of a word that is common to all its\n19\nthe gerund is obtained; by adding the suffix -ed,\nthe past is obtained; etc. Adding a new entry to\na monolingual dictionary therefore implies deter-\nmining the stem of the new word and a suitable\ninflection paradigm among those defined by the\nMT system for the corresponding language. In this\nwork we assume that the paradigms for all possi-\nble words in the language are already included in\nthe dictionary.5We will focus on monolingual dic-\ntionaries because insertion of information in the\nbilingual dictionaries of RBMT systems is usually\nstraightforward (S ´anchez-Cartagena et al., 2012a).\nOur approach improves a previous interactive\nmethod (Espl `a-Gomis et al., 2011) that was based\non a number of intuitive heuristics; the improve-\nment presented here is twofold: on the one hand,\nmore coherent and principled models are intro-\nduced; on the other hand, the results are signifi-\ncantly better.\nThe rest of the paper is organised as follows. Sec-\ntion 2 discusses other works related to our proposal.\nSection 3 introduces the concepts on monolingual\ndictionaries that will be used in the remainder of the\npaper. An overview of the previous method (Espl `a-\nGomis et al., 2011) for dictionary extension is pre-\nsented in Section 4, followed by the description\nof our new approach in Section 5. Section 6 dis-\ncusses our experimental setting in which a Spanish\nmonolingual dictionary is used, while the results\nobtained are presented and discussed in Section 7.\nFinally, some concluding remarks are presented in\nSection 8.\n2 Related work\nIn this section, related works in literature are com-\nmented and compared with the common features in\nour new approach and in the work by Espl `a-Gomis\net al. (2011).\nTwo of the most prominent works in literature in\nrelation to the elicitation of knowledge to build or\nimprove RBMT systems are those by Font-Llitj ´os\n(2007) and McShane et al. (2002). 
The former pro-\nposes a strategy for improving both transfer rules\nand dictionaries by analysing the postediting pro-\ncess performed by a non-expert user through a ded-\nicated interface. McShane et al. (2002) design a\nframework to elicit linguistic knowledge from in-\nformants who are not trained linguists and use this\ninformation in order to build MT systems which\ninflected forms.\n5This can be easily expected as most unknown words belong\nto regular paradigms.translate into English; their system provides users\nwith a lot of information about different linguistic\nphenomena to ease the elicitation task. Unlike these\ntwo approaches, our method is aimed at transfer-\nbased MT systems in which a single translation is\ngenerated and no language model is used in order\nto rank a number of translation hypothesis; this\nkind of systems are notably more sensitive to er-\nroneous linguistic information. We also want to\nrelieve users from acquiring linguistic skills.\nAdditional tools that ease the creation of linguis-\ntic resources for MT by users with some linguis-\ntic background have also been developed. To this\nend, the smart paradigms devised by D ´etrez and\nRanta (2012) help users to obtain the right inflec-\ntion paradigm for a new word to be inserted in\nan MT system dictionary. A smart paradigm is a\nfunction that returns the most appropriate paradigm\nfor a word given its lexical category, some of its\nword forms and, in some cases, some morphologi-\ncal inflection. There are two important differences\nwith our approach: firstly, smart paradigms are cre-\nated exclusively by human experts; and secondly,\nusers of smart paradigms need to have some lin-\nguistic background. For instance, an expert could\ndecide that in order to correctly choose the inflec-\ntion paradigm of most verbs in French the infinitive\nand the first person plural present indicative forms\nare needed; dictionary developers must then pro-\nvide these two forms when inserting a new verb.\nBartuskov ´a and Sedl ´acek (2002) also present a tool\nfor semi-automatic assignment of words to declen-\nsion patterns; their system is based on a decision\ntree with a question in every node. Their proposal,\nunlike ours, works only for nouns and is aimed\nat experts because of the technical nature of the\nquestions. Desai et al. (2012) focus on verbs and\npresent a system for paradigm assignment based on\nthe information collected from a corpus for each\ncompatible paradigm; if the automatic method fails,\nusers are then required to manually enter the correct\nparadigm.\nAs regards the automatic acquisition of mor-\nphological resources for MT, the work by ˇSnajder\n(2013) is of particular interest: he turns the choice\nof the most appropriate paradigm for a given word\ninto a machine learning problem. Given the values\nof a set of features extracted from a monolingual\ncorpus and from the orthographic properties of the\nlemmas, each compatible paradigm is classified\nas correct/incorrect by a support vector machine\nclassifier. The main difference with our approach\n20\nlies in the fact that their method is designed to\nbe used in a fully-automatic pipeline, while we\nuse the inferred models in order to minimise the\nnumber of queries posed to non-expert users. 
Fi-\nnally, the automatic identification of morpholog-\nical rules to segment a word into morphemes (a\nproblem for which paradigm identification is a po-\ntential resolution strategy) has also been recently\naddressed (Monson, 2009; Walther and Nicolas,\n2011).\n3 Preliminaries\nLetP=fpigbe the set of paradigms in a monolin-\ngual dictionary. Each paradigm pidefines a set of\npairs (fij;mij), wherefijis a suffix6which is ap-\npended to stems to build new word forms , andmij\nis the corresponding morphological information.\nGiven a stem/paradigm pairc=t=picomposed of\na stemtand a paradigm pi, the expansionI(c)is\nthe set of possible word forms resulting from ap-\npending each of the suffixes in pitot. For instance,\nan English dictionary may contain the stem want-\nassigned to a paradigm with suffixes7pi=f-,-\ns,-ed,-ingg; the expansion I(want=pi)consists\nof the set of word forms want ,wants ,wanted and\nwanting .\nGiven a new word form wto be added to a mono-\nlingual dictionary, our objective is to find both the\nstemt2Pr(w)8and the paradigm pisuch that\nI(want=pi)is the set of word forms which are all\nthe correct forms of the unknown word. To that\nend, a setLcontaining all the stem/paradigm pairs\ncompatible with wis determined by using a gener-\nalised suffix tree (McCreight, 1976) containing all\nthe possible suffixes included in the paradigms in\nP.\nThe following example illustrates the previous\ndefinitions. Consider a simple dictionary with only\nfour paradigms: p1=f-,-sg;p2=f-y,-iesg;\np3=f-y,-ies,-ied,-yingg; andp4=f-a,-\numg. Let’s assume that the new word form is\nw=policies (actually, the noun policy ); the com-\npatible stem/paradigm pairs which will be obtained\nafter this stage are: c1=policies /p1;c2=policie /p1;\nc3=polic /p2; andc4=polic /p3.\n6Although our approach focuses on languages generating word\nforms by adding suffixes to stems (for example, Romance\nlanguages), it could be easily adapted to inflectional languages\nbased on different ways of adding morphemes.\n7We hereinafter omit the morphological information contained\ninpiand show only the suffixes.\n8Pr(w)is the set of all possible prefixes of w.4 Previous approach\nEspl`a-Gomis et al. (2011) have already proposed\nan interactive method for extending the dictionar-\nies of RBMT systems with the collaboration of\nnon-expert users. In their work, the most appro-\npriate stem/paradigm pair is chosen by means of\na sequence of simple yes/no questions whose an-\nswer only requires speaker-level understanding of\nthe language. Basically, users are asked to validate\nwhether some word forms resulting from tentatively\nassigning different compatible stem/paradigm pairs\ninL(see Section 3) to the new word are correct\nword forms of it. The specific forms that are pre-\nsented to the users for validation are automatically\nobtained by estimating the most informative ones\nwhich allow the system to discard the greatest num-\nber of wrong candidate paradigms at each step. The\nresults showed (Espl `a-Gomis et al., 2011) that the\naverage number of queries posed to the users for\na Spanish monolingual dictionary was around 5,\nwhich is reasonably small considering that the aver-\nage number of initially compatible paradigms was\naround 56. 
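To make the Section 3 definitions concrete, the sketch below enumerates the compatible stem/paradigm pairs L for the running example policies and expands one of them. It is only an illustration: paradigms are reduced to tuples of suffixes (the real dictionaries also attach morphological information to each suffix), and a brute-force scan stands in for the generalised suffix tree.

```python
def candidates(word, paradigms):
    """Return every compatible (stem, paradigm) pair for `word`.

    A pair is compatible when `word` can be split as stem + suffix for some
    suffix of the paradigm; the empty string stands for the bare-stem suffix.
    """
    pairs = set()
    for name, suffixes in paradigms.items():
        for suffix in suffixes:
            if word.endswith(suffix):
                stem = word[:len(word) - len(suffix)] if suffix else word
                pairs.add((stem, name))
    return sorted(pairs)

def expand(stem, suffixes):
    """I(stem/paradigm): all word forms generated by the pair."""
    return [stem + s for s in suffixes]

paradigms = {
    "p1": ("", "s"),
    "p2": ("y", "ies"),
    "p3": ("y", "ies", "ied", "ying"),
    "p4": ("a", "um"),
}
print(candidates("policies", paradigms))
# -> [('polic', 'p2'), ('polic', 'p3'), ('policie', 'p1'), ('policies', 'p1')]
print(expand("polic", paradigms["p3"]))
# -> ['policy', 'policies', 'policied', 'policying']
```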
Furthermore, S ´anchez-Cartagena et al.\n(2012a) have shown that when the source-language\nword has already been inserted, the system is able\nto more accurately predict the right target-language\nparadigm by exploiting the correlations between\nparadigms in both languages from the correspond-\ning bilingual dictionary, thus reducing significantly\nthe number of questions.\nAfter obtaining the list of compatible\nstem/paradigm pairs L, the original method\nperforms three tasks: stem/paradigm pair scoring,\nselection of word forms to be offered to the user\nfor validation an discrimination between equivalent\nparadigms.\nParadigm scoring. Afeasibility score is com-\nputed for each compatible stem/paradigm pair cn2\nLusing a large monolingual corpus C. Candidates\nproducing a set of word forms which occur more\nfrequently in the corpus get higher scores. Fol-\nlowing our example, the word forms for the dif-\nferent candidates would be: I(c1)=fpolicies ,poli-\nciessg;I(c2)=fpolicie ,policiesg;I(c3)=fpolicy ,\npoliciesg; andI(c4)=fpolicy ,policies ,policied ,\npolicyingg. Using a large English corpus, word\nforms policies andpolicy will be easily found, and\nthe rest of them ( policie ,policiess ,policied and\npolicying ) probably will not. Therefore, c3would\nprobably obtain the highest feasibility score.\nSelection of word forms. The best candidate is\nchosen from Lby querying the user about a reduced\n21\nset of the word forms for some of the compatible\nstem/paradigm pairs cn2L. To do so, the system\nfirst sortsLin descending order using the feasibility\nscore. Then, users are asked (following the order\ninL) to confirm whether some of the word forms\nin each compatible stem/paradigm pair are correct\nforms ofw. In this way, when a word form w0is ac-\ncepted by the user, all cn2Lfor whichw0=2I(cn)\nare removed from L; otherwise, all cn2Lfor\nwhichw02I(cn)are removed from L. In order\nto attempt to maximise the number of word forms\ndiscarded and consequently minimise the amount\nof yes/no questions, users are iteratively asked to\nvalidate the word form from the first compatible\nstem/paradigm pair in Lwhich exists in the mini-\nmum number of other compatible stem/paradigm\npairs. This process is repeated until only one can-\ndidate (or a set of equivalent candidates; see next)\nremains inL.\nEquivalent paradigms. When more than one\nparadigm provides exactly the same set of suffixes\nbut with different morphological information, no\nadditional question can be asked in order to dis-\ncriminate between them.9For example, in the case\nof Spanish, many adjectives such as alto(’high’)\nand nouns such as gato (’cat’) are inflected iden-\ntically. Therefore, two paradigms producing the\nsame collection of suffixes f-o(masculine, singu-\nlar),-a(feminine, singular), -os(masculine, plural),\n-as(feminine, plural)gbut with different morpho-\nlogical information are defined in the monolingual\ndictionary, the stems alt-andgat-assigned to one of\nthem each. This issue also affects paradigms with\nthe same lexical category: abeja andabismo are\nnouns that are inflected identically; abeja is how-\never feminine, whereas abismo is masculine. When\nadding unknown words such as gato orabeja , no\nyes/no question can consequently be asked in order\nto discriminate between both paradigms. S ´anchez-\nCartagena et al. 
(2012b) proposed a solution to\nthis issue that consisted of introducing an n-gram-\nbased model of lexical categories and inflection\ninformation which was used as a final step10to\nautomatically choose the right stem/paradigm pair\nwith success rates between 56% and 96%.\n9Around 81% of the word forms in a Spanish dictionary\nhave been reported (S ´anchez-Cartagena et al., 2012b) to be\nassignable to more than one equivalent paradigm.\n10Note that this model is disconnected from the models used\nfor scoring the compatible paradigms and deciding the word\nforms to be shown to the user.5 Method\nThe approach discussed in the preceding section\nprovides a complete framework for dictionary ex-\ntension, but this framework can still be improved\nif more rigorous and principled models rather than\nintuitive heuristics are used. We propose conse-\nquently to replace those heuristics with hidden\nMarkov models (HMMs) (Rabiner, 1989) and bi-\nnary decision trees as follows. For a given un-\nknown word form, first the set Lof compatible\nstem/paradigm pairs is determined (see Section 3).\nThe probability of each of them is then estimated\nby means of a first-order HMM. After that, these\nprobabilities are used in order to build a decision\ntree which is used to guide the selection of words\nto be offered to the non-expert user for validation.\nNote that, unlike in the original method in which\nisolated unknown words were inserted into the dic-\ntionary, the HMM in our new method explicitly\nconsiders the sentence in which the new word ap-\npears and uses this contextual information in order\nto better estimate the likelihood of each compati-\nble stem/paradigm pair. The objective here is to\nminimise the interaction with the user so that the\naddition of new words is made as fast as possible.\nHidden Markov models. A first-order HMM is\ndefined as\u0015= (\u0000;\u0006;A;B;\u0019 ), where \u0000is the set\nof states, \u0006is the set of observable outputs, Ais the\nj\u0000j\u0002j\u0000jmatrix of state-to-state transition probabili-\nties,Bis thej\u0000j\u0002j\u0006jmatrix with the probability of\neach observable output \u001b2\u0006being emitted from\neach state\r2\u0000, and the vector \u0019, with dimension-\nalityj\u0000j, defines the initial probability of each state.\nThe system produces an output each time a state\nis reached after a transition. In our method, \u0000is\nmade up of all the paradigms in the dictionary and\n\u0006corresponds to the set of suffixes produced by all\nthese paradigms.\nOur HMMs are trained in a way very similar to\nHMMs used in unsupervised part-of-speech tag-\nging (Cutting et al., 1992), that is, by using the\nBaum-Welch algorithm (Baum, 1972) with an un-\ntagged corpus. The training corpus is built from\na text corpus as follows: (i) the monolingual dic-\ntionary is used in order to obtain the set Fof all\npossible word forms; (ii) the word forms in the\ntext corpus that belong to Fare assigned all their\ncorresponding suffix and paradigm pairs; (iii) the\nword forms not in Fare assigned the set of suffix\nand paradigm pairs obtained from the set Lof their\ncompatible candidates, as described in Section 4.\nOnce the HMM is trained, the probability qt(cn)\n22\nof assigning the word form located at position tin\nthe sentence to the compatible candidate cn2L\ncan be computed by applying the following equa-\ntion, which corresponds to Eq. 
(27) in the tutorial by Rabiner (1989):

q_t(c_n) = \frac{\alpha_t(c_n)\,\beta_t(c_n)}{\sum_{m=1}^{|L|} \alpha_t(c_m)\,\beta_t(c_m)}     (1)

This equation computes the probability that the
model is in state c_n when at position t. In the
equation, α_t(c_n) accounts for the (forward) probability
of the sub-sentence from the beginning of
the sentence to position t given state c_n at position
t, whereas β_t(c_n) corresponds to the (backward)
probability of the sub-sentence from position t+1
to the end of the sentence, given state c_n at position
t (Rabiner, 1989).
Decision trees. Decision trees are commonly
used to learn classifiers: the internal nodes (decision
nodes) of a decision tree are labelled with an
input feature, an arc coming from an internal node
exists for each possible feature value, and leaves
are labelled with classes. The ID3 algorithm (Quinlan,
1986) has been proposed in order to build these
trees. This algorithm follows a greedy approach
(the resulting trees are therefore sub-optimal) by
selecting, at each iteration, the most appropriate
attribute on which to split the data set. The algorithm
starts from the root of the tree with the whole data
set S. At each iteration, an attribute A is picked for
splitting S, A being the attribute providing the highest
information gain. A child node is then created for each
possible value of A, with a new test set containing
only the elements matching this attribute value. The
information gain measures the difference in entropy
before and after S is split; for computing this entropy,
the probability of each class is approximated
by using the proportion of elements belonging to
each of them.
Our method uses ID3 in order to build a binary
(each node corresponds to a yes/no question) decision
tree for each new word. Each class corresponds
to a compatible stem/paradigm pair and the attribute
set is made up of the set of different word forms, i.e.
∪_{c_i∈L} I(c_i). The entropy in the ID3 algorithm could
in principle be computed as stated before, i.e. by
using the proportion of word forms derived from
every stem/paradigm combination. In our approach,
however, a more accurate computation of the entropy
is proposed by using the class probability
provided by the hidden Markov model.
A weakness that this method shares with the one
described in Section 4 is that candidate paradigms
producing the same collection of suffixes cannot be
differentiated with yes/no questions. Therefore, at
the end of the querying process, it is possible for
more than one candidate to remain. In order to deal
with this, the already computed HMM contextual
probabilities could be used rather than the additional
n-gram model of morphological information
proposed by Sánchez-Cartagena et al. (2012b).11
For this work, as in the one by Sánchez-Cartagena
et al. (2012a), we considered paradigms producing
the same word forms as equivalent and, therefore,
they count as a single paradigm.
6 Experimental Setting
In order to ensure an accurate comparison between
the methods described in Sections 4 and 5, our
experimental framework replaces non-expert users,
to which this method is eventually addressed, with
an oracle so that interference caused by human
errors is avoided.
The evaluation consisted of simulating\nthe addition of a set of words to the Spanish mono-\nlingual dictionary of the Spanish–Catalan Apertium\nMT system (Forcada et al., 2011).\nSix test sets were built consisting of sentences\nin Spanish containing at least an unknown word.\nUsing an oracle, the average number of questions\nneeded in order to obtain the correct paradigm was\ncomputed for the following three methods: the\noriginal approach by Espl `a-Gomis et al. (2011) de-\nscribed in Section 4, a decision tree using propor-\ntions rather than probabilities,12and a decision tree\nassigning the probabilities estimated by an HMM.\nIt is worth noting that this metric ignores the fact\nthat, depending on the word form posed, a user\ncould need more time to decide whether to accept\nor reject it. This will be evaluated in a future work.\nIn addition to the average number of questions, the\nHMM probabilities and the feasibility scores of the\noriginal approach were compared by evaluating the\nsuccess in detecting the correct paradigm, that is,\nin assigning the highest score or probability to the\ncorrect paradigm. This second metric is aimed at\nmeasuring the relation between the relative correct-\nness in the probability/score assignment and the\nnumber of queries posed to the user.\nEach of the six data sets consists of (i) a mono-\n11Although out of the scope of this work, it could be inter-\nesting to compare both approaches to the task of choosing\n(or supporting a user to choose) the best correct compatible\nstem/paradigm combination.\n12As in this approach there is only one element per class, this\nis equivalent to consider all classes as equiprobable.\n23\nlingual dictionary D; (ii) a collection of text sen-\ntencesScontaining each at least one word form\nof a word not in D; and (iii) the list of the cor-\nrect stem/paradigm combination for the target word\nforms to be added to the dictionary, which is used\nas the oracle for our evaluation.\nIn order to measure the feasibility of these meth-\nods at different times in the development of a dic-\ntionary, the revision history of the dictionary in the\nApertium project Subversion repository was used.13\nThis strategy also allowed us to use the different\nrevisions in order to build the oracle for the experi-\nments: given a pair of dictionary revisions (R1;R2)\nwithR1being an earlier revision than R2, the evalu-\nation task consisted of adding to R1the words in R2\nbut not inR1(i.e., the relative complement of R1in\nR2), which will be called, henceforth, target words.\nIn order to ensure that all the paradigms assigned to\nthese words were also available in R1, we sequen-\ntially checked all the revisions of the dictionary and\ngrouped them according to their paradigm defini-\ntions, thus obtaining ranges of compatible revisions .\nWe then computed the number of words differing\nbetween the oldest and newest revisions of each\nrange, and manually picked for the experiments six\nrevision pairs among those with the greatest number\nof different words.\nIn order to obtain sentences containing the target\nwords, the Spanish side of the parallel corpus News\nCommentary (Bojar et al., 2013) was used.14The\ncorpus was split into two parts, one containing 90%\nof the sentences, which were used for training the\nHMM, and another one including the remaining\n10%, which were used for testing. Sentences not\ncontaining at least one word form of one of the\ntarget words were removed from each test set. 
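Since the metric is just the number of yes/no validations asked before a single candidate (or set of equivalent candidates) remains, the simulation can be sketched in a few lines. The sketch below is an illustration rather than the actual evaluation code: the oracle answers yes exactly when the proposed word form belongs to the expansion of the gold stem/paradigm pair, and the next form to validate is chosen by a greedy probability-balancing criterion that stands in for the information-gain computation of the ID3 tree.

```python
def expansion(stem, suffixes):
    return {stem + s for s in suffixes}

def count_queries(candidates, gold, probs=None):
    """Simulate the yes/no dialogue with an oracle and count the questions.

    `candidates` maps (stem, paradigm) pairs to suffix tuples, `gold` is the
    correct pair, and `probs` optionally assigns each candidate a probability
    (e.g. an HMM posterior).  Candidates generating identical word forms are
    assumed to have been merged beforehand, as in the paper.
    """
    alive = {c: expansion(c[0], sfx) for c, sfx in candidates.items()}
    gold_forms = alive[gold]
    questions = 0
    while len(alive) > 1:
        p = probs if probs else {c: 1.0 for c in alive}
        forms = set().union(*alive.values())
        # Greedily pick the form whose yes/no answer splits the remaining
        # probability mass most evenly (a stand-in for information gain).
        def imbalance(form):
            yes = sum(p.get(c, 0.0) for c, f in alive.items() if form in f)
            no = sum(p.get(c, 0.0) for c, f in alive.items() if form not in f)
            return abs(yes - no)
        asked = min(forms, key=imbalance)
        questions += 1
        keep_yes = asked in gold_forms          # the oracle's answer
        alive = {c: f for c, f in alive.items() if (asked in f) == keep_yes}
    return questions

candidates = {
    ("policies", "p1"): ("", "s"),
    ("policie", "p1"): ("", "s"),
    ("polic", "p2"): ("y", "ies"),
    ("polic", "p3"): ("y", "ies", "ied", "ying"),
}
print(count_queries(candidates, gold=("polic", "p2")))  # -> 2
```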
Ta-\nble 1 shows the list of revision pairs, the number\nof words differing between them and the number\nof word forms included in the evaluation text. For\nboth the training and testing corpora, the text was\nprocessed by following the strategy described in\nSection 5 using the revision R1of each test set. A\ndifferent HMM was therefore trained for each test\nset; in all cases, the Baum–Welch algorithm was\nstopped after 9 iterations.\nFinally, following the experimental setting pro-\nposed by S ´anchez-Cartagena et al. (2012a), a word\n13https://svn.code.sf.net/p/apertium/svn/\ntrunk/apertium-en-es/apertium-en-es.es.\ndix\n14This corpus was chosen because it belongs to an heteroge-\nneous domain and it is already segmented into sentences.Revision pair Target Target word\nR1 R2 words forms in corpus\n7217 7287 109 485\n11762 12415 1802 550\n17582 20212 700 362\n27241 27627 1048 297\n34649 35985 1194 79\n36838 44118 1039 650\nTable 1: Revision pairs of the Spanish monolingual dictionary\nin the Apertium Spanish–Catalan MT system used in the ex-\nperiments, number of target words (added from R1toR2),\nand number of target word forms appearing in the corpus.\nlist obtained from the Spanish Wikipedia dump15\nwas used as the monolingual corpus to compute\nthe the feasibility scores in the heuristic-based ap-\nproach in Section 4.\n7 Results and Discussion\nTable 2 shows the average number of questions\nneeded to determine the correct paradigm for the\ntarget words evaluated. Since the objective of\nour method is to reduce the interaction with the\nuser as much as possible, lower values represent\nbetter results. Cells in bold correspond to statis-\ntically significant differences between the corre-\nsponding method and the two other approaches with\np\u00140:05.16Those values which are significantly\nbetter are marked with the symbol \", whereas val-\nues significantly worse are marked with #. As can\nbe seen, the two decision-tree-based approaches\nare, in general, better than the heuristic-based ap-\nproach. Contrary evidence however is seen for the\nsole case of the test set corresponding to revision\npair(7217;7287) . Furthermore, using the HMM\nprobabilities for computing information gain in the\nID3 algorithm results in a statistically significant\nimprovement to the original ID3 method in four out\nof the six test sets evaluated. In order to shed some\nlight on these results, additional experiments were\nperformed in order to check how well the feasibil-\nity scores and the HMM-based probabilistic model\nranked the candidate paradigms. Table 3 shows\nthe average position of the correct paradigm in the\nsorted candidate list, as well as the percentage of\ntimes that the correct paradigm was ranked as the\nfirst one. 
Overall, the results in this table suggest\nthat the quality of the ranking has a higher impact\n15http://dumps.wikimedia.org/eswiki/\n20110114/eswiki-20110114-pages-articles.\nxml.bz2\n16Statistical significance tests were performed with sigf,\navailable at http://www.nlpado.de/ ˜sebastian/\nsoftware/sigf.shtml\n24\nRevision pairs mean number of queries\nR1 R2 ID3+HMM ID3 Original\n7217 7287 3.26 5.50#3.08\"\n11762 12415 5.22 5.26 10.71#\n17582 20212 4.74\"5.65#5.18\n27241 27627 4.35\"5.72 5.85\n34649 35985 6.22 6.32 8.67#\n36838 44118 5.83\"6.11 7.48#\nTable 2: Mean number of yes/no questions needed by the\ntree approaches under evaluation (ID3-trained decision tree\nusing HMM probabilities, ID3-trained decision tree using pro-\nportions, and heuristic-based approach) for each of the test\nsets.\nin the heuristic-based original approach: in the case\nof the revisions pair (7217;7287) , the good results\nin ranking end up producing a significantly smaller\nnumber of yes/no questions. However, for the re-\nmaining test sets, in which the ranking is not so\ngood, even in the cases when it is better than the\none obtained by the HMM, the mean number of\nquestions is larger. Note that comparing both ap-\nproaches in terms of ranking is difficult: while the\nheuristic-based approach uses a ranked list as the\nbase for choosing the word forms to be posed to\nthe user, the new approach uses a decision tree for\nthis. In the case of the tree, the accumulation of\nprobability in the correct candidate is notably more\nimportant than its position in a ranking, since this\naccumulation is what allows to reduce the number\nof questions to the user. This information neverthe-\nless helps to understand the quality of the prediction\nof each strategy.\nIt is also important to analyse the impact of dic-\ntionary size in these results. Note that in the case\nof the decision-tree-based approaches, as the dic-\ntionary becomes larger, the number of yes/no ques-\ntions necessary to determine the correct paradigm\nis also larger, although the growth rate is very slow.\nSimilarly, the heuristic-based approach requires a\nlarger number of questions as the dictionary size\ngrows, although the heuristic strategy followed by\nthe approach makes it more unstable and the differ-\nences between revisions larger. In the case of the\napproach using decision trees and HMM, the rising\nnumber of questions seems to be mitigated by the\nricher information available for disambiguating the\ntraining corpus.\nAlthough a deeper analysis of the behaviour of\nthe different approaches needs to be carried out,\nit can be concluded that decision-tree-based ap-\nproaches are more stable and, in general, providebetter results in terms of number of yes/no ques-\ntions than the previous heuristic-based approach.\n8 Conclusions and future work\nIn this work we have presented an approach that\ncombines a hidden Markov model (HMM) and a\nbinary decision tree in order to assist non-expert\nusers in adding new words to monolingual dictio-\nnaries. This approach has been compared to the\nheuristic-based method proposed by Espl `a-Gomis\net al. (2011). The results have confirmed that the\nmethods based on a decision tree are more stable\nand usually better than the original one. In addition,\nthe comparison between the method using deci-\nsion trees only and that combining decision trees\nand HMMs concluded that the number of queries\nasked in the second case is significantly lower. 
The\nJava code for the resulting system is available17\nunder the free/open-source GNU General Public\nLicense.18\nAs regards future work, an extended evaluation\nincluding more pairs of languages and corpora\nwould be necessary to confirm the results obtained\nhere. It could be also interesting to try to improve\nthe training corpus used, for example, by using a\npart-of-speech tagger to further reduce the number\nof compatible paradigms in Lfor each word form.\nMoreover, as pointed out in Section 5, a second part\nof the evaluation should still be performed to deter-\nmine the feasibility of replacing the n-gram model\nproposed by S ´anchez-Cartagena et al. (2012b) with\nthe probabilities obtained with the HMM for choos-\ning the correct paradigm among a set of equivalent\nones.\nAcknowledgements\nThis work has been partially funded by the Span-\nish Ministerio de Ciencia e Innovaci ´on through\nprojects TIN2009-14009-C02-01 and TIN2012-\n32615, by Generalitat Valenciana through grant\nACIF/2010/174 from V ALi+d programme, and by\nthe European Commission through project PIAP-\nGA-2012-324414 (Abu-MaTran).\nReferences\nBartuskov ´a, D. and R. Sedl ´acek. 2002. Tools for\nsemi-automatic assignment of Czech nouns to dec-\n17https://apertium.svn.sourceforge.\nnet/svnroot/apertium/branches/\ndictionary-enlargement\n18http://www.gnu.org/licenses/gpl.html\n25\nRevision R1Revision R2Mean position of correct Rate correct is first\nHMM Feasibility score HMM Feasibility score\n7217 7287 1.47 0.51 70.31 72.99\n11762 12415 5.66 10.45 28.00 8.36\n17582 20212 1.87 1.72 52.49 40.88\n27241 27627 7.11 4.67 39.73 42.76\n34649 35985 6.66 5.18 45.57 45.57\n36838 44118 1.08 3.51 81.08 70.52\nTable 3: For the approach using decision trees and HMM and for the heuristic-based approach, mean position for each test set of\nthe correct paradigm in the ranking of feasibility scores or probabilities and percentage of times that the correct candidate was\nthe one with the highest score or probability.\nlination patterns. In Proceedings of the 5th Inter-\nnational Conference on Text, Speech and Dialogue ,\npages 159–164.\nBaum, L. E. 1972. An inequality and associated maxi-\nmization technique in statistical estimation for prob-\nabilistic functions of a Markov process. Inequalities ,\n3:1–8.\nBojar, O., C. Buck, C. Callison-Burch, C. Federmann,\nB. Haddow, P. Koehn, C. Monz, M. Post, R. Soricut,\nand L. Specia. 2013. Findings of the 2013 Work-\nshop on Statistical Machine Translation. In Proceed-\nings of the Eighth Workshop on Statistical Machine\nTranslation , pages 1–44.\nCutting, D., J. Kupiec, J. Pedersen, and P. Sibun. 1992.\nA practical part-of-speech tagger. In Proceedings of\nthe Third Conference on Applied Natural Language\nProcessing , pages 133–140.\nDesai, S., J. Pawar, and P. Bhattacharyya. 2012. Au-\ntomated paradigm selection for FSA based Konkani\nverb morphological analyzer. In COLING (Demos) ,\npages 103–110.\nD´etrez, G. and A. Ranta. 2012. Smart paradigms and\nthe predictability and complexity of inflectional mor-\nphology. In Proceedings of EACL , pages 645–653.\nEspl`a-Gomis, M., V .M. S ´anchez-Cartagena, and J.A.\nP´erez-Ortiz. 2011. Enlarging monolingual dictionar-\nies for machine translation with active learning and\nnon-expert users. In Proceedings of RANLP , pages\n339–346.\nFont-Llitj ´os, A. 2007. Automatic improvement of ma-\nchine translation systems . Ph.D. thesis, Carnegie\nMellon University.\nForcada, M.L., M. Ginest ´ı-Rosell, J. Nordfalk,\nJ. O’Regan, S. Ortiz-Rojas, J.A. 
P ´erez-Ortiz,\nF. S´anchez-Mart ´ınez, G. Ram ´ırez-S ´anchez, and F.M.\nTyers. 2011. Apertium: a free/open-source platform\nfor rule-based machine translation. Machine Trans-\nlation , 25(2):127–144.\nMcCreight, E.M. 1976. A space-economical suffix tree\nconstruction algorithm. Journal of the Association\nfor Computing Machinery , 23:262–272, April.\nMcShane, M., S. Nirenburg, J. Cowie, and R. Zacharski.\n2002. Embedding knowledge elicitation and MT sys-\ntems within a single architecture. Machine Transla-\ntion, 17:271–305.Monson, C. 2009. ParaMor: From Paradigm Structure\nto Natural Language Morphology Induction . Ph.D.\nthesis, Carnegie Mellon University.\nQuinlan, J. R. 1986. Induction of decision trees. Ma-\nchine Learning , 1(1):81–106.\nRabiner, L.R. 1989. A tutorial on hidden Markov mod-\nels and selected applications in speech recognition.\nProceedings of the IEEE , 77(2):257–286.\nˇSnajder, J. 2013. Models for Predicting the Inflec-\ntional Paradigm of Croatian Words. Sloven ˇsˇcina 2.0:\nempirical, applied and interdisciplinary research ,\n1(2):1–34.\nS´anchez-Cartagena, V .M., M. Espl `a-Gomis, and J.A.\nP´erez-Ortiz. 2012a. Source-language dictionaries\nhelp non-expert users to enlarge target-language dic-\ntionaries for machine translation. In Proceedings of\nLREC , pages 3422–3429.\nS´anchez-Cartagena, V .M., M. Espl `a-Gomis, F. S ´anchez-\nMart ´ınez, and J.A. P ´erez-Ortiz. 2012b. Choosing\nthe correct paradigm for unknown words in rule-\nbased machine translation systems. In Proceedings\nof the Third International Workshop on Free/Open-\nSource Rule-Based Machine Translation , pages 27–\n39.\nWalther, G. and L. Nicolas. 2011. Enriching mor-\nphological lexica through unsupervised derivational\nrule acquisition. In Proceedings of the International\nWorkshop on Lexical Resources , Ljubljana, Slovenia.\nWang, A., C. Hoang, and M.Y . Kan. 2013. Perspec-\ntives on crowdsourcing annotations for natural lan-\nguage processing. Language Resources and Evalua-\ntion, 47(1):9–31.\n26",
"main_paper_content": null
}
|
{
"decision": "Unknown",
"reviews": []
}
| 0 | 0 |
[] |
[] |
{
"id": "cCxI2rziwWC",
"year": null,
"venue": "EAMT 2005",
"pdf_link": "https://aclanthology.org/2005.eamt-1.12.pdf",
"forum_link": "https://openreview.net/forum?id=cCxI2rziwWC",
"arxiv_id": null,
"doi": null
}
|
{
"title": "An open-source shallow-transfer machine translation engine for the Romance languages of Spain",
"authors": [
"Antonio M. Corbí-Bellot",
"Mikel L. Forcada",
"Sergio Ortiz-Rojas",
"Juan Antonio Pérez-Ortiz",
"Gema Ramírez-Sánchez",
"Felipe Sánchez-Martínez",
"Iñaki Alegria",
"Aingeru Mayor",
"Kepa Sarasola"
],
"abstract": "Antonio M. Corbi-Bellot, Mikel L. Forcada, Sergio Ortíz-Rojas, Juan Antonio Pérez-Ortiz, Gema Ramírez-Sánchez, Felipe Sánchez-Martínez, Iñaki Alegria, Aingeru Mayor, Kepa Sarasola. Proceedings of the 10th EAMT Conference: Practical applications of machine translation. 2005.",
"keywords": [],
"raw_extracted_content": "EAMT 2005 Conference Proceedings 79 An Open-Source Shallow-Transfer Machine Translation Engine \nfor the Romance La nguages of Spain \nAntonio M. Corbí-Bellot1, Mikel L. Forcada1, Sergio Ortiz-Rojas1, Juan Antonio Pérez-\nOrtiz1, Gema Ramírez-Sánchez1, Felipe Sánchez-Martínez1, Iñaki Alegria2, Aingeru \nMayor2, Kepa Sarasola2 \n1Transducens group, Departament de Llenguatges i Sistemes Informàtics \nUniversitat d'Alacant, E-03071 Alacant \n2IXA Taldea, Informatika Fakultatea, \nEuskal Herriko Uniberts itatea, E-20071 Donostia \n \[email protected], [email protected], sortiz@d lsi.ua.es, [email protected], ge-\[email protected], [email protected], ac [email protected], [email protected], ksara-\[email protected] \nAbstract. We present the current status of develo pment of an open-source shallow-transfer \nmachine translation engine for the Romance languages of Spain (the main ones being Span-\nish, Catalan and Galician) as part of a larger government-funded project which includes non-Romance languages such as Basque and involving both universities and linguistic \ntechnology companies. The mach ine translation architecture uses finite-state transducers for \nlexical processing, hidden Markov models for part-of-speech tagging, and finite-state based chunking for structural transfer, and is largely based upon that of systems already devel-\noped by the Transducens group at the Univ ersitat d'Alacant, such as interNOSTRUM \n(Spanish—Catalan) and Traductor Universia (Spanish—Portuguese). The possible scope of the project, however, is wider, since it will be possible to use the resulting machine transla-\ntion system with new pairs of languages; to th at end, the project also aims at proposing \nstandard formats to encode the linguistic data needed. This paper briefly describes the ma-\nchine translation engine, the formats it uses for linguistic data, and the compilers that con-\nvert these data into an effici ent format used by the engine. \n1. Introduction \nThis paper presents the current status of deve-\nlopment and the main motivations of an open-source shallow-transfer m achine translation (MT) \nengine for the Romance languages of Spain (the \nmain ones being Spanish (\nes), Catalan ( ca) and \nGalician1 (gl)) as part of a larger government-\nfunded project which will also include MT en-\ngines for non-Romance languages such as \n \n1Most scholars consider Galician and Portuguese \n(pt) the same language; however, the official or-\nthography of Galician is very different from the ones \nused for European and Brazilian Portuguese. There-fore, while grammatical resources will be rather re-usable, lexical resources will not easily be. Basque (eu) and involving four universities and \nthree linguistic technology enterprises.2 The \nshallow-transfer architecture will also be suit-able for other pairs of closely related languages which are not Romance, for example, Czech—Slovak, Danish—Swedish, etc. \nThe multilingual nature of Spain is recog-\nnized, to a varying extent, in laws and regula-tions corresponding to the various levels of go-\n \n \n2TALP (Universitat Politècnica de Catalunya), SLI \n(Universidade de Vigo), Transducens (Universitat d’Ala-cant), IXA (Euskal Herriko Unibertsitatea), ima-xin|software (Santiago de Compostela), Elhuyar Fundazioa (Usurbil), and Eleka Ingeniaritza Linguis-tikoa (Usurbil, coordinator). \nCorbí-Bellot et al. 
\n80 EAMT 2005 Conference Proceedings vernment (the Constitution of Spain and the \nStatutes of Autonomy granted to Aragon, the \nBalearic Islands, Cata lonia and Valencia ( ca), \nGalicia (gl), and Navarre and the Basque Count-\nry (eu)). On the one hand, demand by many \ncitizens in these territories make private com-\npanies increasingly interested in generating in-formation (documentati on for products and ser-\nvices, customer support, etc.) in languages dif-ferent from Spanish. On the other hand, the va-rious levels of government ( national , autono-\nmic, provincial, municipal) must respect, in the \nmentioned territories, the linguistic rights rec-ognized to their citizens and promote the use of such languages. Machine translation is a key technology to meet these goals and demands. \nExisting MT programs for the \nes—ca and \nthe es—gl pairs (there are no programs for the \nes—eu pair) are mostly commercial or use \nproprietary technologies, which makes them \nvery hard to adapt to new usages, and use dif-ferent technologies acro ss language pairs, which \nmakes it very difficult to integrate them in a single \nmultilingual content management system. \nThe MT architecture proposed here uses fi-\nnite-state transducers for lexical processing, hidden Markov models fo r part-of-speech tag-\nging, and finite-state based chunking for struc-tural transfer, and is largely based upon that of systems already developed by the Transducens group such as interNOSTRUM\n3 (Spanish—Ca-\ntalan, Canals-Marote et al. 2001) and Traductor Universia\n4 (Spanish—Portuguese, Garrido-Alen-\nda et al. 2003); these systems are publicly ac-cessible through the net and used on a daily ba-sis by thousands of users. \nOne of the main novelties of this architec-\nture is that it will be released under an open-source license\n5 (together with pilot linguistic \ndata derived from other open-source projects such as Freeling (Carreras et al. 2004) or cre-ated specially for this purpose) and will be dis-tributed free of charge. This means that anyone having the necessary computational and linguis-\n \n \n3 http://www.internostrum.com/ \n4 http://traductor.universia.net/ \n5 The license has still to be determined. Most likely, \nthere will be two different licenses: one for the ma-chine translation engine and tools, and another one for the linguistic data. tic skills will be able to adapt or enhance it to \nproduce a new MT system, even for other pairs of related languages. The whole system will be released at the beginning of 2006.\n6 \nWe expect that the introduction of a unified \nopen-source MT architecture will ease some of the mentioned problems (having different tech-nologies for different pairs, closed-source archi-tectures being hard to adapt to new uses, etc.). It will also help shift the current business model from a licence-centred one to a services-centred one, and favour the interc hange of existing lin-\nguistic data through the use of the XML-based formats defined in this project. \nIt has to be mentioned that this is the first \ntime that the government of Spain funds a large project of this kind, although the adoption of open-source software by administrations in Spain is not new.\n7 \nThe following sections give an overview of \nthe architecture (sec. 2), the formats defined for the encoding of linguistic data (sec. 3), and the compilers used to convert these data into an ex-ecutable form (sec. 4); finally, we give some concluding remarks (sec. 5). \n2. 
The MT architecture \nThe MT strategy used in the system has already \nbeen described in detail (Canals-Marote et al. 2001; Garrido-Alenda et al. 2003); a sketch will be given here. The engine is a classical shallow-\ntransfer or transformer system consisting of an 8-module assembly line; we have found that this strategy is sufficient to achieve a reason-able translation quality between related lan-\nguages such as \nes, ca or gl. While, for these \nlanguages, a rudimentary word-for-word MT \nmodel may give an adequate translation for 75% of the text, the addition of homograph dis-\n \n \n6 Other attempts at open-source implementations of \nMT systems have been initiated, such as GPLTrans (http://www.translator.cx) and Traduki (http://tradu-ki.sourceforge.net), but the level of activity in these projects is low and far fr om reaching the usability \nlevels of the existing interNOSTRUM—Traductor Universia engine inspiring the one in this project. \n7 The most remarkable case being the success of \nLinex (http://www.linex.org) , the Linux distribution promoted by the autonomous government of Extre-madura. \nAn open-source shallow-transfer machine translation engine for the Romance languages of Spain \nEAMT 2005 Conference Proceedings 81 ambiguation, management of contiguous multi-\nword units, and local reordering and agreement rules may raise the fraction of adequately trans-lated text above 90%. This is the approach used in the engine presented here. \nTo ease diagnosis and independent testing, \nmodules communicate between them using text streams\n8 (examples below give an idea of the \ncommunication format used). This allows for some of the modules to be used in isolation, in-dependently from the rest of the MT system, for other natural-language processing tasks. As in interNOSTRUM or Traductor Universia, im-plementations of the systems for Linux and Windows architectures will be made available. \nThe modules are shown in figure 1. Most of the modules are capable of process-\ning tens of thousands of words per second on current desktop workstations; only the structu-ral transfer module lags behind at several thou-sands of words per second. \nAs has been mentioned in the introduction, \nin addition to this shallow-transfer MT architec-ture, the project is also designing a deeper-\ntransfer architecture for the \nes—eu pair (Díaz \nde Ilarraza et al. 2000) . Even though the cur-\nrent prototype is being programmed in an ob-ject-oriented framework using code and data \nfrom Freeling (Carreras et al. 2004) for \nes and \n \n8Information will circulate in two different formats: \ncurrently, an ad-hoc text format derived from in-terNOSTRUM and Traductor Universia, is being used and will be illustrated in this paper; a new format based on XML (World Wide Web Con-sortium 2004) is being designed for enhanced in-teroperability. conventional syntactical and lexical generation \nfor eu, its architecture could easily be rewritten \ninto one which would share modules and format \nspecifications with the shallow-transfer archi-tecture described here (for instance, morpho-logical analysis and generation, part-of-speech tagging, or lexical transfer). \nThe following sections describe each mod-\nule of the shallow-transfer architecture in detail. \n2.1. The de-formatter \nThe de-formatter separates the text to be trans-\nlated from the format information (RTF, HTML, etc.). Format information is encapsu-lated so that the rest of the modules treat it as blanks between words. 
For example, the HTML text in Spanish: \nvi <em>una señal</em> \n(“I saw a signal”) would be processed by the de-formatter so that it would encapsulate the HTML tags between brackets and deliver \nvi[ <em>]una señal[</em>] \nThe character sequences in brackets are treated as simple blanks between words by the rest of the modules. \n \nSource text → De-formatter \n ↓ \n Morphological Analyser \n ↓ \n Part-of-speech tagger \n ↓ \n Structural transfer ↔ Lexical transfer \n ↓ \n Morphological generator \n ↓ \n Post-generator \n ↓ \n Re-formatter → Target text \nFigure 1: The eight modules of the MT system \nCorbí-Bellot et al. \n82 EAMT 2005 Conference Proceedings 2.2. The morphological analyser \nThe morphological analyser tokenizes the text \nin surface forms (lexical units as they appear in \ntexts) and delivers, for each surface form, one or more lexical forms consisting of lemma , lexi-\ncal category and morphological inflection in-\nformation. Tokenization is not straightforward due to the existence, on the one hand, of con-tractions, and, on the other hand, of multi-word lexical units. For contractions, the system reads in a single surface form and delivers the corre-sponding sequence of lexical forms (for in-\nstance, the \nes preposition—article contraction \ndel would be analysed into two lexical forms, \none for the preposition de and another one for \nthe article el). Multi-word surface forms are \nanalysed in a left-to-right, longest-match fash-\nion; for instance, the analysis for the es prepo-\nsition a would not be delivered when the input \ntext is a través de (“through”), which is a multi-\nword preposition in es. Multi-word surface \nforms may be invariable (such as a multi-word \npreposition or conjunction) or inflected (for ex-\nample, in es, echaban de menos, “they missed”, \nis a form of the imperfect indicative tense of the \nverb echar de menos, “to miss”). Limited sup-\nport for some kinds of discontinuous multi-word units is also available. The module reads in a binary file compiled from a source-language morphological dictionary (see section 3.1). \nUpon receiving the example text in the pre-\nvious section, the morphological analyser would deliver \n^vi/ver<vblex><ifi><1><sg>$[ \n<em>]^una/un<det><ind><f><sg>/unir<vblex><prs><1><sg>/unir<vblex><prs><3><sg>$ ^señal/señal<n><f><sg>$[</em>] \nwhere each surface form is analysed into one or \nmore lexical forms. For example, vi is analysed \ninto lemma ver, lexical category lexical verb \n(vblex ), indefinite indicative ( ifi), 1st person, \nsingular, whereas una (a homograph) receives \nthree analyses: un, determinant, indefinite, \nfeminine singular, and two forms of the present \nsubjunctive ( prs) of the verb unir (to join). The \ncharacters “ ^” and “$” delimit the analyses for \neach surface form; lexical forms for each sur-\nface form are separated by “ /”; angle brackets \n“<...>”are used to delimit grammatical symbols. The string after the “ ^” and before the first “ /” \nis the surface form as it appears in the source \ninput text. \n2.3. The part-of-speech tagger \nAs has been shown in the previous example, some \nsurface forms (about 30% in Romance lan-guages) are homographs, ambiguous forms for which the morphological analyser delivers more than one lexical form. The part-of-speech tagger \nchooses one of them, according to the lexical forms of neighbouring words. When translating between related languag es, ambiguous surface \nforms are one of the main sources of errors when incorrectly solved. 
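A minimal sketch of how this stream format could be read back into lexical forms, using only the delimiters described above (^, $, / and angle-bracketed grammatical symbols); the parser is an illustration for the reader, not a component of the engine.

import re

UNIT = re.compile(r"\^(.*?)\$")      # ^ ... $ delimits the analyses of one surface form
TAGS = re.compile(r"<([^>]+)>")      # angle brackets delimit grammatical symbols

def parse_stream(stream):
    # Yield (surface_form, [(lemma, [symbols]), ...]) for each analysed unit.
    for m in UNIT.finditer(stream):
        fields = m.group(1).split("/")
        surface, analyses = fields[0], []
        for lexical_form in fields[1:]:
            lemma = lexical_form.split("<", 1)[0]
            analyses.append((lemma, TAGS.findall(lexical_form)))
        yield surface, analyses

example = ("^vi/ver<vblex><ifi><1><sg>$[ <em>]"
           "^una/un<det><ind><f><sg>/unir<vblex><prs><1><sg>/unir<vblex><prs><3><sg>$"
           " ^señal/señal<n><f><sg>$[</em>]")

for surface, analyses in parse_stream(example):
    print(surface, analyses, "(homograph)" if len(analyses) > 1 else "")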
\nThe part-of-speech tagger (an open-source \nprogram) reads in a file containing a hidden Markov model (HMM) which has been trained on representative source-language texts (using an open-source training program). Two training modes are possible: one can use either a larger amount (millions of words) of untagged text processed by the morphological analyser or a small amount of tagged text (tens of thousands of words) where a lexical form for each homo-graph has been manually selected. The second method usually leads to a slightly better per-formance (about 96% correct part-of-speech tags). We are currently building a collection of open corpora (both untagged and tagged) using texts published on the web under Creative Com-mons\n9 licenses. The behaviour of the part-of-\nspeech tagger and the training program are both controlled by a tagger definition file (see sec-tion 3.2). \nThe result of processing the example text de-\nlivered by the morphological analyser with the part-of-speech tagger would be \n^ver<vblex><ifi><1><sg>$[ \n<em>]^un<det><ind><f><sg>$ ^señal<n><f><sg>$[</em>] \nwhere the correct lexical form (determiner) has \nbeen selected for the word una. \n2.4. The lexical transfer module \nThe lexical transfer module is called by the struc-\ntural transfer module (see next section); it reads each source-language lexical form and delivers \n \n \n9 http://creativecommons.org/ \nAn open-source shallow-transfer machine translation engine for the Romance languages of Spain \nEAMT 2005 Conference Proceedings 83 a corresponding target-language lexical form. \nThe module reads in a binary file compiled from a bilingual dictionary (see section 3.1). The dictionary contains a single equivalent for each source-language entry; that is, no word-sense disambiguation is performed. For some words, multi-word entries are used to safely \nselect the correct equivalent in frequently-oc-curring fixed contexts. This approach has been used with very good results in Traductor Uni-versia and interNOSTRUM. \nEach of the lexical forms in the running ex-\nample would be translated into Catalan as fol-lows: \n• ver<vblex> → veure<vblex> \n• un<det> → un<det> \n• señal<n><f> → senyal<n><m> \nwhere the remaining grammatical symbols for \neach lexical form would be simply copied to the target-language output. Note the gender change to masculine when translating señal into Cata-\nlan. \n2.5. The structural transfer module \nThe structural transfer module uses finite-state \npattern matching to detect (in the usual left-to-right, longest-match way) fixed-length patterns of lexical forms ( chunks or phrases ) needing \nspecial processing due to grammatical diver-gences between the two languages (gender and number changes to ensure agreement in the tar-get language, word reorderings, lexical changes such as changes in prepositions, etc.) and per-forms the corresponding transformations. This module is compiled from a transfer rule file (see section 3.3). In the running example, a deter-miner-noun rule is used to change the gender of the determiner so that it agrees with the noun; the result is \n^veure<vblex><ifi><1><sg>$[ \n<em>]^un<det><ind><m><sg>$ ^senyal<n><m><sg>$[</em>] \n2.6. The morphological generator \nThe morphological generator delivers a target-\nlanguage surface form for each target-language lexical form, by suitably inflecting it. The mo-dule reads in a binary file compiled from a tar-\nget-language morphological dictionary (see sec-tion 3.1). 
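Purely as an illustration of this generation step, a toy lookup for the running example is sketched below; the entries are invented for the example, and the actual module reads a binary compiled from the target-language morphological dictionary rather than an in-memory table.

# Toy generation table: target-language lexical form -> inflected surface form.
GENERATION = {
    ("veure",  ("vblex", "ifi", "1", "sg")): "vaig veure",
    ("un",     ("det", "ind", "m", "sg")):   "un",
    ("senyal", ("n", "m", "sg")):            "senyal",
}

def generate(lemma, symbols):
    # Return the surface form for a lexical form; mark unknown forms for debugging.
    return GENERATION.get((lemma, tuple(symbols)), "#" + lemma)

print(generate("veure", ["vblex", "ifi", "1", "sg"]))   # -> vaig veure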
The result for the running example would be:

vaig veure[ <em>]un senyal[</em>]

2.7. The post-generator

The post-generator performs orthographical operations such as contractions and apostrophations. The module reads in a binary file compiled from a rule file expressed as a dictionary (section 3.1). The post-generator is usually dormant (it just copies the input to the output) until a special alarm symbol contained in some target-language surface forms wakes it up to perform a particular string transformation if necessary; then it goes back to sleep.

For example, in Catalan, clitic pronouns in contact may change before a verb: em ("to me") and ho ("it") contract into m'ho, em and els ("them") contract into me'ls, and em and la ("her") are written me la. To signal these changes, linguists prepend an alarm to the target-language surface form em in target-language dictionaries and write post-generation rules to ensure the changes described.

2.8. The re-formatter

Finally, the re-formatter restores the format information encapsulated by the de-formatter into the translated text and removes the encapsulation sequences used to protect certain characters in the source text. The result for the running example would be the correct translation of the HTML text:

vaig veure <em>un senyal</em>

3. Formats for linguistic data

An adequate documentation of the code and auxiliary files is crucial for the success of open-source software. In the case of an MT system, this implies carefully defining a systematic format for each source of linguistic data used by the system. The formats used by this architecture (which will not be described in detail for lack of space) are modified versions of the formats currently used by interNOSTRUM and Traductor Universia. These programs used an ad-hoc text-based format; in the current project, these formats have been converted into XML (World Wide Web Consortium, 2004) for interoperability; in particular, for easier parsing, transformation, and maintenance. The XML formats for each type of linguistic data are defined through conveniently-designed XML document-type definitions (DTDs).

On the one hand, the success of the open-source machine translation engine heavily depends on the acceptance of these formats by other groups; [10] acceptance may be eased by the use of an interoperable XML-based format which simplifies the transformation of data from and towards it, and also by the availability of tools to manage linguistic data in these formats; the current project is expected to produce transformation and management tools in a later phase. But, on the other hand, acceptance of the formats also depends on the success of the translation engine itself.

3.1. Dictionaries (lexical processing)

The format for monolingual morphological dictionaries and bilingual dictionaries may be seen as an XML version of the format already used in interNOSTRUM or Traductor Universia, which was defined in Garrido et al. 1999. The current DTD [11] and examples of morphological and bilingual dictionaries may be found at http://www.torsimany.ua.es/eamt2005/. 
\nMorphological dictionaries establish the cor-\nrespondences between surface forms and lexical forms and contain (a) a definition of the alpha-bet (used by the tokenizer), (b) a section defin-ing the grammatical symbols used in a particu-lar application to specify lexical forms (sym-bols representing concepts such as noun , verb, \nplural , present , feminine , etc.), (c) a section de-\nfining paradigms (describing reusable groups of correspondences between parts of surface forms and parts of lexical forms), and (d) one or more labelled dictionary secti ons containing lists of \nsurface form—lexical form correspondences for whole lexical units (including contiguous multi-\n \n \n10 This is indeed the mechanism by which de facto \nstandards appear. \n11 Subject to minor modifications as the project pro-\ngresses. word units). Paradigms may be used directly in \nthe dictionary sections or to build larger para-digms (at the conceptual level, paradigms rep-\nresent the regularities in the inflective system of the corresponding language). Bilingual diction-aries have a very similar structure and establish correspondences between source-language lexi-cal forms and target-language lexical forms, but seldom use paradigms. Finally, post-generation dictionaries are used to establish correspondences between input and output strings corresponding to the orthographical transformations to be per-formed by the post-gener ator on the target-lan-\nguage surface forms generated by the generator. \n3.2. Tagger definition \nSource -language lexical forms delivered by the \nmorphological analyser are defined in terms of fine part-of-speech tags (for example, the word \ncantábamos [\nes] has lemma cantar , category \nverb, and the following inflection information: \nindicative , imperfect , 1st person , plural ), which \nare necessary in some parts of the MT engine (structural transfer, morphological generation); however, for the purpose of efficient disam-biguation, these fine part-of-speech tags may be grouped in coarser part-of-speech tags (such as \nverb in personal form ). \nThe tagger definition file is also an XML \nfile (the corresponding DTD may also be found in http://www.torsimany.ua.es/eamt2005/) where (a) coarser tags are defined in terms of fine tags, both for single-word and for multi-word units, (b) constraints may defined to forbid or enforce certain sequences of part-of-speech tags, and (c) priority lists are used to decide which fine part-of-speech tag to pass on to the structural trans-\nfer module when the coarse part-of-speech tag contains more than a fine tag. The tagger defini-tion file is used to define the behaviour of the part-of-speech tagger both when it is being trained on a source-language corpus and when it is running as part of the MT system. \n3.3. Structural transfer \nAn XML format for shallow structural transfer \nrules has been recently drafted; a commented DTD may be found in http://www.torsimany.ua.es/eamt2005/. \nThe rule files contain pattern–action rules \ndescribing what has to be done for each pattern \nAn open-source shallow-transfer machine translation engine for the Romance languages of Spain \nEAMT 2005 Conference Proceedings 85 (much like in languages such as perl or lex) . 
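To make the pattern–action idea concrete, the determiner–noun agreement rule used in the running example (section 2.5) could be restated as follows; this is an illustrative sketch in Python, not the XML rule format itself, and the feature names are assumptions.

# A lexical form is represented here as a dict with invented feature names.
def det_noun_pattern(window):
    # Pattern part: a determiner immediately followed by a noun.
    return len(window) == 2 and window[0]["cat"] == "det" and window[1]["cat"] == "n"

def det_noun_action(window):
    # Action part: copy gender and number from the noun onto the determiner.
    det, noun = window
    det["gender"], det["number"] = noun["gender"], noun["number"]
    return window

RULES = [(det_noun_pattern, det_noun_action)]

def transfer(forms):
    # Left-to-right application of the rules (only two-word patterns in this sketch).
    i, out = 0, []
    while i < len(forms):
        for pattern, action in RULES:
            if pattern(forms[i:i + 2]):
                out.extend(action(forms[i:i + 2]))
                i += 2
                break
        else:
            out.append(forms[i])
            i += 1
    return out

print(transfer([{"lemma": "un", "cat": "det", "gender": "f", "number": "sg"},
                {"lemma": "senyal", "cat": "n", "gender": "m", "number": "sg"}]))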
\nUsing a declarative notation such as XML is \nrather straightforward for the pattern part of rules but using it for the action (procedural) part means stretching it a bit; we have, however, found a reasonable way to translate the ad-hoc C-style action language used in the correspond-ing module of interNOSTRUM and Traductor Universia, which was defined in detail in Gar-rido-Alenda and Forcada (2001), into a simple XML notation having the same expressiveness. In this way, we follow as close as possible the declarative approach used in the XML files de-\nfining the linguistic data used for the tagger and for the lexical processing modules. \n3.4. De-formatter and re-formatter \nThe current de-formatter and re-formatter used \nin Traductor Universia and interNOSTRUM are different for each of the three formats supported (plain ISO-8859-1 text, HTML and RTF). Their behaviour is specified following a pattern–ac-tion scheme, with patterns specified as regular expressions and actions written in C code, using \nlex to generate the executable code. At the \ntime of writing these lines we are still studying \nwhether a declarative XML-based definition of the behaviour of these and new format-process-ing modules is feasible; the main obstacle be-ing nested dependencies in RTF-like formats, \nwhich make a \nlex-style finite-state reader hard \nto specify in a declarative way. \n4. Compilers \nCompilers to convert the linguistic data into the \ncorresponding efficient form used by the mod-ules of the engine are currently under develop-ment. Two compilers are used in this project: one for the four lexical processing modules of the system and another one for the structural transfer. \n4.1. Lexical processing \nThe four lexical processing modules (morpho-\nlogical analyser, lexical transfer, morphological generator, post-generator) are currently being implemented as a single program\n12 which reads \n \n12 The MT programs interNOSTRUM and Traductor \nUniversia use four different compilers, one for each binary files containing a compact and efficient \nrepresentation of a class of finite-state transdu-cers ( letter transducers , Roche & Schabes \n1997); in particular, augmented letter transduc-\ners (Garrido-Alenda et al. 2002). These binaries are an improved version of those used in inter-NOSTRUM and Traductor Universia and are generated from XML dictionaries (specified in section 3.1) using a new compiler, completely rewritten from scratch. The new compiler is much faster (taking seconds instead of minutes to compile the current dictionaries in inter-NOSTRUM and Traductor Universia) and uses much less memory, thanks to the use of new transducer building strategies and to the mini-mization of partial finite-state transducers dur-ing construction. This makes linguistic data de-velopment much easier, because the effect on the whole system of changing a rule or a lexical item may be tested almost immediately. \n4.2. Structural transfer \nAs has been explained in section 3.3, the format \nof the structural transfer rules is still in a draft phase. When it is fixed, a compiler will be built (largely based on that of Garrido-Alenda and Forcada 2001) which will generate a module which will use finite-state technology to detect the patterns of source-language lexical forms needing processing and will contain fast code to generate the corresponding target-language le-xical form patterns.\n13 \n5. 
Concluding remarks \nThis paper has shown the current state of devel-\nopment of an open-source shallow-transfer ma-chine translation engine for the Romance lan-guages of Spain (the main ones being Spanish, \n \n \nlexical processing task (morphological analysis, le-\nxical transfer, morphological generation and post-generation); here, a single module will show the re-quired input-output behavi our in each case accord-\ning to the arguments with which it is invoked. \n13 In the first prototypes, the rules will likely be \ntranslated from the new XM L format into the format \nused in Garrido-Alenda and Forcada (2001) using a combination of macro processing and XSLT style sheets, and the old compiler will be used. This will serve to refine the XML rule file language before wri-ting a new compiler. \nCorbí-Bellot et al. \n86 EAMT 2005 Conference Proceedings Catalan and Galician). This is one of the ma-\nchine translation engines that will be developed in a large, government-funded open-source de-velopment project (the other one is a deeper-transfer engine for the Spanish—Basque pair, which will be described elsewhere). Further-more, as a well-documented open-source en-gine, it could be adapted to translating between other Romance languages of Europe (French, Portuguese, Italian, Occitan, etc.) or even be-tween related language pairs outside the Ro-mance group (Swedish—Danish, Czech—Slo-vak, etc.). \nSome of the components (modules, data \nformats and compilers) from this architecture will also be useful in the design of deeper-trans-fer architectures for more difficult language pairs; indeed, the project is also building a MT system for one such pair, Spanish—Basque, and some components of the architecture pre-sented here will be tested on that language pair. \nIn particular, the shallow-transfer engine \nwill not be designed from scratch but may ra-ther be seen as a complete open-source rewrit-ing of an existing closed-source engine (inter-NOSTRUM, Canals-Marote et al. 2001; Tra-ductor Universia, Garrido-Alenda et al. 2003) which is currently used daily by thousands of \npeople through the net, and the corresponding redesign of linguistic data formats and rewriting of compilers. \nThe code, together with pilot Spanish—Ca-\ntalan and Spanish—Galicia n linguistic data to \ndemonstrate it, will be released at the beginning of 2006 through the project web page (currently under construction). \nAcknowledgements : Work funded by pro-\njects FIT-340101-2004-3 (Spanish Ministry of Industry, Commerce and Tourism) and TIC2003-08681-C02-01 (Spanish Ministry of Science and Technology). Felipe Sánchez-Martínez is supported by the Spanish Ministry of Science and Education and the European Social Fund through grant BES-2004-4711. \n6. References \nCANALS -MAROTE , R., A. ESTEVE -GUILLÉN , A. GAR-\nRIDO -ALENDA , M.I. GUARDIOLA -SAVALL , A. ITUR-RASPE -BELLVER , S. MONTSERRAT -BUENDIA , S. OR-\nTIZ-ROJAS , H. PASTOR -PINA, P.M. PÉREZ -ANTÓN , \nM.L. FORCADA (2001). “The Spanish-Catalan ma-\nchine translation system interNOSTRUM”, in B. \nMaegaard, ed., Proceedings of MT Summit VIII: Ma-\nchine Translation in the Information Age , 73-76. \nCARRERAS , X., I. CHAO, L. PADRÓ AND M. PADRÓ \n(2004). “FreeLing: An Open-Source Suite of Lan-\nguage Analyzers”, in M.T. Lino, M. F. Xavier, F. Ferreira, R. Costa, R. Silva, ed., Proceedings of the \n4th International Conference on Language Resour-\nces and Evaluation (LREC’04). Lisbon, Portugal . \nD\nÍAZ DE ILARRAZA , A., A. MAYOR , K. SARASOLA \n(2000). 
“Reusability of wide-coverage linguistic resources in the construction of a multilingual machine translation system”, in Lewis, D., Mitkov, R., ed., Proceedings of MT 2000 (Univ. of Exeter, UK, 19-22 Nov. 2000).

GARRIDO, A., AMAIA ITURRASPE, SANDRA MONTSERRAT, HERMÍNIA PASTOR, MIKEL L. FORCADA (1999). “A compiler for morphological analysers and generators based on finite-state transducers”, Procesamiento del Lenguaje Natural, 25, 93-98.

GARRIDO-ALENDA, A., M.L. FORCADA (2001). “MorphTrans: un lenguaje y un compilador para especificar y generar módulos de transferencia morfológica para sistemas de traducción automática”, Procesamiento del Lenguaje Natural, 27, 157-162.

GARRIDO-ALENDA, A., MIKEL L. FORCADA, RAFAEL C. CARRASCO (2002). “Incremental construction and maintenance of morphological analysers based on augmented letter transducers”, in Mitamura, T., Nyberg, E., ed., Proceedings of TMI 2002 (Theoretical and Methodological Issues in Machine Translation, Keihanna/Kyoto, Japan, March 2002), 53-62.

GARRIDO-ALENDA, A., PATRÍCIA GILABERT ZARCO, JUAN ANTONIO PÉREZ ORTIZ, ANTONIO PERTUSA-IBÁÑEZ, GEMA RAMÍREZ-SÁNCHEZ, FELIPE SÁNCHEZ-MARTÍNEZ, MÍRIAM A. SCALCO, MIKEL L. FORCADA (2004). “Shallow parsing for Portuguese-Spanish Machine Translation”, in Branco, A. and Mendes, A., Ribeiro, R., Language technology for Portuguese: shallow processing tools and resources, 135-144.

ROCHE, E., SCHABES, Y. (1997). “Introduction”, in Roche, E., Schabes, Y., Finite-state language processing, 1-65.

WORLD WIDE WEB CONSORTIUM (2004). “Extensible Markup Language (XML)”, http://www.w3.org/XML/.",
"main_paper_content": null
}
|
{
"decision": "Unknown",
"reviews": []
}
| 0 | 0 |
[] |
[] |
{
"id": "CXiMxBneKGM",
"year": null,
"venue": "EAMT 2016",
"pdf_link": "https://aclanthology.org/W16-3405.pdf",
"forum_link": "https://openreview.net/forum?id=CXiMxBneKGM",
"arxiv_id": null,
"doi": null
}
|
{
"title": "Stand-off Annotation of Web Content as a Legally Safer Alternative to Bitext Crawling for Distribution",
"authors": [
"Mikel L. Forcada",
"Miquel Esplà-Gomis",
"Juan Antonio Pérez-Ortiz"
],
"abstract": "Mikel L. Forcada, Miquel Esplà-Gomis, Juan Antonio Pérez-Ortiz. Proceedings of the 19th Annual Conference of the European Association for Machine Translation. 2016.",
"keywords": [],
"raw_extracted_content": "Baltic J. Modern Computing, V ol. 4 (2016), No. 2, pp. 152–164\nStand-off Annotation of Web Content as a Legally\nSafer Alternative to Crawling for Distribution\nMikel L. FORCADA, Miquel ESPL `A-GOMIS, Juan Antonio P ´EREZ-ORTIZ\nDepartament de Llenguatges i Sistemes Inform `atics,\nUniversitat d’Alacant, E-03071 Alacant, Spain\nfmlf,mespla,japerez [email protected]\nAbstract. Sentence-aligned web-crawled parallel text or bitext is frequently used to train statistical\nmachine translation systems. To that end, web-crawled sentence-aligned bitext sets are sometimes\nmade publicly available and distributed by translation technologies practitioners. Contrary to what\nmay be commonly believed, distribution of web-crawled text is far from being free from legal\nimplications, and may sometimes actually violate the usage restrictions. As the distribution and\navailability of sentence-aligned bitext is key to the development of statistical machine translation\nsystems, this paper proposes an alternative: instead of copying and distributing copies of web\ncontent in the form of sentence-aligned bitext, one could distribute a legally safer stand-off\nannotation of web content, that is, files that identify where the aligned sentences are, so that end\nusers can use this annotation to privately recrawl the bitexts. The paper describes and discusses\nthe legal and technical aspects of this proposal, and outlines an implementation.\nKeywords: bitext, parallel text, stand-off annotation, legal issues, statistical machine translation\n1 The importance of sentence-aligned crawled bitext\nThe importance of bitext orparallel text in current translation technologies is hard to\nemphasize. Isabelle et al. (1993) —but also Simard et al. (1993)— are famously quoted\nfor saying that “Existing translations contain more solutions to more translation problems\nthan any other currently available resource”, but the formulation of the concept of bitext\nas a translation object can be traced back to Harris (1988).\nFor bitexts to be used in two key translation technologies, namely corpus-based\nmachine translation —particularly statistical machine translation (Koehn, 2009), but\nalsoexample-based machine translation (Carl and Way, 2003)— and computer-aided\ntranslation (Bowker and Fisher, 2010), they have to be segmented and aligned, usually\nsentence by sentence. Sentence-aligned bitexts, frequently in the form of translation\nmemories , are usually obtained as a by-product of computer-aided translation processes,\nand many of them have been made publicly available, such as DGT-MT, the translation\nStand-off Annotation of Web Content 153\nmemory of the European Commission’s Directorate General for Translation (Steinberger\net al., 2012); a comprehensive repository of such sentence-aligned bitexts is provided by\nOPUS1(Tiedemann, 2012).\nBut in view of the fact that the Internet is packed with webpages which are mutual\ntranslations, it is not uncommon for researchers and practitioners to build sentence-\naligned bitext by harvesting these webpages, pairing them, sentence-aligning them,\nand making the resulting corpora publicly available. The most famous example would\nprobably be the Europarl corpus (Koehn, 2005).\nContrary to what may be commonly believed, distribution of web-crawled bitext is\nfar from being free from legal implications,2and may sometimes actually violate the\nusage restrictions of web content, as will be discussed in Section 2. 
As the distribution\nand availability of sentence-aligned bitext is key to the development of statistical machine\ntranslation systems —in particular when it comes to adapt an existing system to a specific\ndomain (Pecina et al., 2012)—but also to save professional translation effort, Section 3\nproposes an alternative: instead of copying and distributing copies of web content in the\nform of sentence-aligned bitext, one could distribute a legally safer stand-off annotation\nof web content, that is, files that identify where the aligned sentences are, so that end\nusers can use software and this annotation to privately or locally recrawl the bitexts they\nneed. Section 4 surveys related standards and technologies, and an implementation is\nsketched in Section 5. Concluding remarks (Section 6) end the paper.\n2 Legal problems\nConsidering that a sentence-aligned bitext is an example of the general concept of corpus ,\nand that web-crawling is an example of compiling , the statement by Baker et al. (2006),\np. 48, is clearly pertinent, even if obvious: “Corpus compilers need to observe copyright\nlaw by ensuring that they seek permission from the relevant copyright holders to include\nparticular texts. This can only be a difficult and time-consuming process as copyright\nownership is not always clear [...]. If the corpus is likely to be made publicly available,\ncopyright holders may require a fee for allowing their text(s) to be included”.\nOne might think that web content is not subject to copyright, but this is seldom\nthe case. On the one hand, some web content has explicitly stated licenses which may\nimpact on products derived from it. For instance, Wikipedia3uses the Creative Commons\nAttribution-Sharealike license,4which is quite open about the reuse of content, but\nrequires all derivatives to carry the same license. Web-based newspapers usually have\nmore restrictive terms: for instance, the web edition of The New York Times uses a typical\ncopyright notice: “You may not modify, publish, transmit, participate in the transfer\nor sale of, reproduce [...], create new works from, distribute, perform, display, or for\n1http://opus.lingfil.uu.se\n2Many parallel corpora crawled from the Internet are distributed disregarding the copyright on\nthe original documents from which they were extracted. A clear example is the case of the\nEuroparl corpus for which authors claim (see http://www.statmt.org/europarl/ ) that:\nwe are not aware of any copyright restrictions of the material .\n3https://www.wikipedia.org/\n4https://creativecommons.org/licenses/by-sa/3.0/\n154 Forcada et al.\nany way exploit, any of the Content [...] in whole or in part.”5In another example,\nparticipants in the Microblog Track of the Text Retrieval Conference (TREC) interact\nwith a corpus of tweets stored remotely through a search API since 2013. The motivation\nbehind this arrangement —as opposed to the one used in former editions, where the\ncorpus could be downloaded— is to adhere to Twitter’s terms of service as they “forbid\nredistribution of tweets, and thus it would not be permissible for an organization to host\na collection of tweets for download” (Lin and Efron, 2013).\nNote that usage rights management in the case of bitext corpora compiled from\nvarious sources with different licenses may be very complex, which would be particularly\nhard for non-experts. But what happens when web content is provided without an explicit\ncopyright statement? 
One would think that it might be possible to use it freely, but this is\nnot the case. According to customary interpretations of the Berne Convention,6the most\nimportant international agreement dealing with copyright joined by 170 states, copyright\nnotices are optional, works are automatically copyrighted when they are created, and, by\ndefault, this means that acts of copying, distribution or adaptation without the author’s\nconsent are forbidden. Therefore, in most countries, copyright is automatic and “all\nrights reserved”. The Berne Convention, as an international agreement, may not take\ninto account the variations that copyright law may have in each country.7However,\nit authorizes countries to allow a fair use of copyrighted works. In line with this, the\nCopyright Directive of the European Union8states that:\n“Member States may provide for exceptions or limitations to the rights [...] in\nthe following cases: [...] use for the sole purpose of illustration for teaching or\nscientific research [...] and to the extent justified by the non-commercial purpose\nto be achieved” (Article 5.3).\nIn the UK, for instance, there is a prominent exception to copyright dealing with text\nand data mining for non-commercial purposes,9which does not exist in other countries.\nAlong these lines, the European Commission recently10outlined its vision to modernise\nEuropean Union copyright rules in order to “make it easier for researchers to use text\nand data mining technologies to analyse large sets of data”; note, however, that corpus\nredistribution may still face a lot of risks and uncertainties.\nAll this means that, depending on the copyright terms of the source material, web-\ncrawled bitexts may not be freely distributed. Tsiavos et al. (2014) discuss in detail the\nlegal issues involved in the distribution of web-crawled data, and even give a number of\nworked examples. Two main conclusions are:\n5http://www.nytimes.com/content/help/rights/terms/terms-of-service.html\n6Berne Convention for the Protection of Literary and Artistic Works, 9 September 1886, as last\nrevised at Paris on 24 July 1971, 1161 U.N.T.S. 30.\n7“Copyright law is not fully harmonized at the international level and, hence, it is extremely\ndifficult to provide a generic answer for the entirety of the situations involving more than one\njurisdiction, where possible act of infringement takes place.” (Arranz et al., 2013)\n8Directive 2001/29/EC of the European Parliament and of the Council of 22 May 2001.\n9https://www.gov.uk/guidance/exceptions-to-copyright\n10http://europa.eu/rapid/press-release_IP-15-6261_en.htm\nStand-off Annotation of Web Content 155\n–In general, publish only after clearing copyright of the content with the holder (if all\nof the crawled content has the same public license and it allows redistribution under\nspecific terms, one can of course avoid clearing copyright).\n–Abide by a notice and take down procedure,11much in the way in which online hosts\nremove content following notice such as court orders or allegations that content\ninfringes copyright.\nAlso, they suggest that if one cannot clear copyright, it may be safer to publish a derivative\nof the crawled content from which it is impossible to reconstruct the original source. In\nfact, when discussing annotations as a special case of derivative works, Tsiavos et al.\n(2014, p. 41) conclude that “unless [the annotations] reproduce part of the original work\nthey do not constitute a problem”. Similarly, Arranz et al. 
(2013) analyse the legal status\nof different acts involving web crawling of data and web services built around them, and\nstate that “if what is communicated to the public is the actual data either in their original\nor their derivative form, then this constitutes yet another act restricted by copyright law.\nIf, however, the end user is only the recipient of a web service that implements the web\ncrawling and processing without any direct communication of the actual web data, then\ncopyright law is not activated at all”.\nIt is in this context that avoiding redistribution and moving usage rights management\nto the final user shows its advantages: as content is not republished but referred to, there\nis no need to handle copyright, and use after recrawling chez the end-user is more likely\nto be considered fair use.\n3 The proposal: stand-off annotation\nFollowing the rationale in the previous section, it is proposed that instead of publicly\ndistributing web-crawled sentence-aligned bitexts, a stand-off annotation of the Internet\nwill be distributed, an annotation detailed enough for the end user to efficiently recrawl\nlocally the sentence-aligned bitext using appropriate software, on the grounds that an\nannotation cannot be considered a derived work but rather a description of existing\ncontent geared at a specific purpose, not too different from the concepts of metadata or\nbibliographical reference as used in scholarly publishing. Public distribution is avoided,\nand, as a result, the need to clear copyright disappears altogether for corpus compilers,\nand the responsibility of rights management is passed on to the end user.\nMany of the usages by end users could actually fall into what is called fair use :\nfor instance, a translator may use and modify selected segments of a web-crawled\ntranslation memory to produce the translation of a new document. The legal status of\nmore extensive usages such as when a web-crawled sentence-aligned bitext is used to\ntrain or domain-adapt a statistical machine translation is less clear, but some machine\ntranslation systems available on the web (Google Translate12and Bing Translator13) rely\nin part on web-crawled content14and this usage, to the best of our knowledge, has not\nbeen the subject of any solid legal challenge.\n11https://en.wikipedia.org/wiki/Notice_and_take_down\n12http://translate.google.com\n13https://www.bing.com/translator/\n14http://v.gd/tausgt (shortened URL)\n156 Forcada et al.\nThe Text Encoding Initiative15defines “Stand-off markup (also known as remote\nmarkup or stand-off annotation)” as “the kind of markup that resides in a location\ndifferent from the location of the data being described by it. It is thus the opposite of\ninline markup, where data and annotations are intermingled within a single location”.\nThe idea of stand-off annotation of corpora is not new, but to the best of our knowl-\nedge, it has not been used before to directly annotate web content at large , that is, in the\nwild. However, there are some examples of stand-off annotation for building bitexts from\ncollections of documents, as it is the case of the JRC-Acquis (Steinberger et al., 2006)\ncorpus, which is distributed as a collection of monolingual documents and a stand-off\nannotation file that describes the segment-aligned bitexts that can be obtained for every\npair of languages with different alignment tools. 
In this case, this stand-off annotation is\nrather simple, given that the monolingual documents are preprocessed so every segment\nof the text is identified with a code that is later used to relate parallel segments across\nbitexts. This is, in fact, the usual stand-off approach to corpus annotation, where some\nauxiliary inline annotation is involved:\n“A middle course is for the original corpus publication to have a scheme for\nidentifying any sub-part. Each sentence, tree, or lexical entry, could have a\nglobally unique identifier, and each token, node or field (respectively) could\nhave a relative offset. Annotations, including segmentations, could reference\nthe source using this identifier scheme (a method which is known as stand-off\nannotation). This way, new annotations could be distributed independently of\nthe source, and multiple independent annotations of the same source could be\ncompared and updated without touching the source.” (Bird et al., 2009, ch. 11).\nWe could call this impure stand-off, as the object being annotated has to be segmented\nand provided with identifiers. As this is not possible with read-only web content at\nlarge, we have to resort to pure stand-off annotation, as described below. The following\nproposals for crawled bitext andcrawled translation memory are based on the concept\nofstand-off annotation of the web as it is found at the time of crawling.\n3.1 Deferred bitext crawl\nThe core of the proposal for crawled bitext, which will be called a deferred bitext crawl\nisa pair of uniform resource identifiers (URIs) , one pointing at the left document , and\nanother one pointing at the right document , such that they are selected as being mutual\ntranslations at the time of crawling. To the pair of URIs, one has to add some metadata:\n–Thedate and time of annotation.\n–Thelanguages of the two texts, each one with an optional indicator of how confident\nthe annotating crawler is that they are actually written in those languages.\n–Checksum information for both the left and right documents, that will be used to\nensure that the texts have not changed since they were crawled. Note that while\nchecksum information could be weakly considered as a derivative, it does not allow\nthe reconstruction the original content: it would have to be recrawled.\n15http://wiki.tei-c.org/index.php/Stand-off_markup\nStand-off Annotation of Web Content 157\n–Optionally, one or more indicators expressing the confidence with which the two\ntexts are taken to be mutual translations.\nThis information may be used to recrawl the two sides of the bitexts and check that they\nhave not changed since they were crawled and classified as being a bitext. 
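A minimal sketch of such a record and of the recrawl-and-verify step; the field names, and the choice of computing the checksum over the raw recrawled bytes, are illustrative assumptions rather than part of the proposal's specification.

import hashlib
import urllib.request
from dataclasses import dataclass

@dataclass
class DeferredBitextCrawl:
    left_uri: str
    right_uri: str
    crawl_time: str          # e.g. "20161105T153005Z"
    left_lang: str
    right_lang: str
    left_md5: str
    right_md5: str
    confidence: float = 0.0  # optional translation-equivalence confidence

def md5_of(url):
    # Recrawl one document and digest its raw bytes.
    with urllib.request.urlopen(url) as response:
        return hashlib.md5(response.read()).hexdigest()

def still_valid(record):
    # True if neither side has changed since annotation time; otherwise discard.
    return (md5_of(record.left_uri) == record.left_md5 and
            md5_of(record.right_uri) == record.right_md5)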
Those bitexts\nnot passing the test should be discarded.16\n3.2 Deferred translation memory crawl\nA product that could be derived by selecting sentence pairs from a set of deferred\nbitext crawls , after aligning their sentences, is the deferred sentence-aligned bitext crawl\n(also deferred translation memory crawl ordeferred training corpus crawl ): a set (not\nnecessarily ordered) of sentence pairs, each one completely independent, in which every\npair is described by:\n–Thedate and time of annotation.\n–Thelanguages of the two sentences, each one with an optional confidence indicator.\n–The URI of the file from which each sentence is taken.\n–A record indicating the location of each sentence, such as the position of the first\ncharacter of the sentence, and either the position of the last character or the length\nin characters of the sentence.\n–The checksum value (or other values that ease integrity check ) at annotation time.\n–Optionally, one or more indicators expressing the confidence with which the two\nsentences are taken to be mutual translations (derived from the bitext confidences\nabove, but optionally refined for this specific pair of sentences).\n4 Relevant standards and technologies\nThis paper does not aim at proposing a final solution, but rather at trying to convince the\nreader that existing technologies may make the sketch in Section 3 technically feasible\nby actually advancing the main features of the solution. To that end, a survey of related\nstandards and technologies is provided in this section. The main technical requirement is\nto have locators that allow us to point at specific fragments in an HTML document.17\nIdeally, these locators should be sufficiently specific so that changes in the original\ndocument can be detected and, in addition to this, error recovery strategies could be\nimplemented in order to find the segment in a different location.\n4.1 Integrity checks\nThe W3C Web Annotation Working Group launched in 2014 with the aim of developing\na set of recommendations for web annotation, which will include specifications regarding\n16“It is better to cause stand-off annotations to break on such components of the new version than\nto silently allow [them] to refer to incorrect locations.” (Bird et al., 2009, ch 11).\n17All of the discussion in this paper assumes that webpage content will be in HTML, some XML-\nbased text format, and in some cases plain text: an extension to deal with PDF or wordprocessor\ndocuments published in websites falls out of the scope of this paper.\n158 Forcada et al.\nrobust anchoring into third-party documents. Robustness against modifications in the\nURL, in the content text or in the underlying structure of the HTML document is an\nimportant feature for the systems processing this kind of locators. A common solution is\nto extend the locator with information about the matched text along with some of the text\nimmediately before and after it,18but this practice could lead to copyright infringement.\nA more covenient option would be in that case to rely on character positions.19\nThere is also a plethora of message-digest and checksum algorithms that may be\nused to detect changes in the segments pointed at by the stand-off annotation in the\ndeferred crawls described in Section 3. In addition to the MD520message digests,21\nthere are alternatives such as SHA-2:22most have publicly available implementations.\nLink death is obviously a major issue here. 
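Before turning to link persistence, the per-segment integrity check mentioned above can be sketched as follows; the record layout and field names are invented for the illustration, and the offsets are assumed to be character positions in the recrawled document.

import hashlib

def span_digest(document_text, start, length):
    # MD5 of one referenced sentence, located by character offset and length.
    return hashlib.md5(document_text[start:start + length].encode("utf-8")).hexdigest()

# Hypothetical entry for one side of a deferred translation memory sentence pair.
side = {
    "uri": "http://example.org/en.html",   # invented URL
    "lang": "en",
    "start": 120,
    "length": 11,
    "md5": None,                           # filled in at annotation time
}

def check_side(side, recrawled_text):
    # Integrity check: the referenced span must still hash to the stored value.
    return span_digest(recrawled_text, side["start"], side["length"]) == side["md5"]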
A number of studies have analysed\nthe persistence of URLs over time: Gomes and Silva (2006) found that the lifetime of\nURLs follows a logarithmic distribution in which only a minority persists for periods\nlonger than a few months; Lawrence et al. (2001) studied a database of computer science\npapers and found that around 30-40% of links were broken, but they could manually\nfound the new location of the page (or highly related information) 80% of the times.\nIn fact, solutions to find the new location of the content when it has been moved, have\nbeen proposed ranging from the use of uniform resource names (URNs)23to heuristic\nstrategies for automatic fixing of dead links (Morishima et al., 2009). Park et al. (2004)\nfound that a lexical signature consisting of several key words is usually sufficient to\nobtain the new location, which suggests that these key words could be incorporated into\nthe extended locators proposed in our paper. A different, more limited24approach (Resnik\nand Smith, 2003) crawls only non-volatile resources such as the Internet Archive.25\n4.2 Linking to a fragment of a document\nIn the definition of URI,26the only provision to refer to parts of a webpage occurs\nthrough the use of fragment identifiers using the symbol “#”, as in the example: http://\nserver.info/folder/page.html#section2 ; however, this presumes the existence\nof identified anchors in the HTML document. A standard that could be repurposed to\n18Seehttps://w3c.github.io/web-annotation/model/wd/#text-quote-selector or\nhttps://hypothes.is/blog/fuzzy-anchoring/ .\n19The project Emphasis by The New York Times (see http://open.blogs.nytimes.com/\n2011/01/11/emphasis-update-and-source/ ) uses keys made up of the first characters\nfrom the first and last words in the segment, which constitutes a more compact description and\navoids the need to copy text verbatim.\n20https://en.wikipedia.org/wiki/MD5\n21https://www.ietf.org/rfc/rfc1321.txt\n22https://en.wikipedia.org/wiki/SHA-2\n23https://www.w3.org/TR/uri-clarification/\n24Even though it is possible to use this repository for a more stable version of some contents, it is\nworth noting that: (a) it does not cover every website on the Internet, and (b) the websites stored\nin the Internet Archive are not continuously crawled, which means that some live contents may\nnot be available until a new crawl is carried out.\n25https://archive.org/\n26RFC 3986, https://www.ietf.org/rfc/rfc3986.txt .\nStand-off Annotation of Web Content 159\nrefer to specific character offsets in a webpage is RFC 5147,27“text/plain fragment\nidentifiers”, which however deals only with content of the text/plain media type, but\nnot with text/html which would be the usual media type for webpage content.28Note\nthat RFC 5147 already provides the means to implement integrity checks and explicitly\nsupports the MD5 message digest standard.29\nWhile RFC 5147 could be repurposed for general web content, it does not take into\naccount the structure of the document; indeed, most edits to a webpage usually occur in\na way that its structure is only modified locally. 
Using character offsets would mean that\nall text after each single edit could fail the integrity check and therefore be discarded: a\nstructure-aware approach could be beneficial to avoid such massive losses of content,\nthe closest candidates being:\n–XPointer ,30a system to address components of an XML document, can only be\napplied to valid XML documents and most webpages are not (they would have to\nbe univocally transformed or normalized into valid XML documents, and pointing\nwould be through the intermediate normalized document). Specific characters inside\nthe contents of an XML element can be linked via the substring function.\n–Cascaded style sheet (CSS) selectors,31used to provide a presentation for an HTML\ndocument,32do not require it to be a valid XML document; they can therefore operate\non a wider range of webpages but they cannot address specific characters. There is\nsome interest in extending the standard in this direction,33and indeed extensions to\naddress specific letters34have been implemented as JavaScript libraries.\n–Canonical Fragment Identifier for EPUB,35a method for referencing arbitrary\ncontent within electronic books in EPUB format (a format based on HTML). Its\nlinking notation uses a combination of child sequences36(similar to those defined in\nXPointer with the element scheme) and anchor identifiers, but it is not as robust\nand expressive as CSS selectors or XPointer. It also allows for character offsets in\nthe form of ranges such as 2:5.\nA combination of one or more of the mentioned standards could form the basis for\nspecifying locators that could be used to point at any character span in the web.\n27https://tools.ietf.org/rfc/rfc5147.txt\n28RFC 7111 ( https://tools.ietf.org/rfc/rfc7111.txt ) provides fragment identifiers\nfor the text/csv media type.\n29See also the work by Hellmann et al. (2012) for more character-level proposals.\n30https://www.w3.org/TR/xptr-xpointer/\n31https://www.w3.org/TR/css3-selectors/\n32CSS selectors are also used to point at elements in the document in JavaScript.\n33https://css-tricks.com/a-call-for-nth-everything/\n34http://letteringjs.com/\n35http://www.idpf.org/epub/linking/cfi/epub-cfi.html\n36An example of a child sequence is 3/1which represents the second child (counts start at zero)\nof an element that is the fourth child in the current context.\n160 Forcada et al.\n4.3 Leveraging TMX\nA modified version of TMX, the translation memory exchange format37could be used to\ndistribute deferred training corpus crawls —also called deferred translation memory\ncrawls — (see section 3.2); this would allow an easy conversion into TMX —basically\nby retrieving the content pointed at—, ready for use as a translation memory in most\ncomputer-aided translation software; converting them to training corpora for statistical\nmachine translation would also be quite simple and could leverage existing software to\ndo so. The main change would affect the seg(segment) element, which would have to\nbe substituted by a stand-off annotation of the segment, which could be called webseg ,\nand which would contain the URL of the source document and a specification of the\nactual fragment inside the document; integrity check information could be either added\ndirectly to this webseg element or as a property using the standard prop element. As\nregards date and time, TMX already supports this information as a property of each\ntranslation unit. 
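A sketch of how such a locator could be resolved, using the third-party lxml library as one possible tool; the XPath-plus-character-range notation anticipates the example of section 4.4, and the end offset is treated as exclusive here, although the final specification would have to fix that convention.

from lxml import html   # third-party library; one possible choice for HTML parsing

def resolve_locator(page_source, xpath, start, end):
    # Return characters [start:end) of the text content of the element selected
    # by the XPath expression, mimicking a webseg-style locator.
    nodes = html.fromstring(page_source).xpath(xpath)
    if not nodes:
        raise LookupError("locator no longer resolves: " + xpath)
    return nodes[0].text_content()[start:end]

page = ('<html><body><div id="parteSuperiorPagina"><div>'
        '<h1>About the UA</h1></div></div></body></html>')
print(resolve_locator(page, '//*[@id="parteSuperiorPagina"]/div/h1', 0, 12))  # About the UA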
To avoid repeating URLs in webseg s, the header could contain an\nelement assigning an identifier to each unique source document.\n4.4 An example of the TMX-inspired format\nFigure 1 illustrates how the TMX format could be transformed into an XML format\ncapable of representing deferred sentence-aligned bitext crawls anddeferred translation\nmemory crawls . This file contains a single sentence pair or translation unit (tu), having\ntwovariants (tuv), one in English and another one in Spanish (the actual texts are About\nthe UA andSobre la UA ). Aproperties element ( prop ) in each variant contains the MD5\nchecksum of the text. Instead of using the standard TMX segment element ( seg), aweb\nsegment element ( webseg ) contains a pointer to a particular segment, made up of an\nURL, a fragment identifier using Xpointer notation, and a character range inside the\nselected element ( 0:11 in English and 0:10 in Spanish).\n5 Implementation: stand-off crawlers\nGiven the fact that there is a number of bilingual web crawlers able to harvest bitexts\nfrom the Internet, such as Bitextor (Espl `a-Gomis and Forcada, 2010), ILSP Focused\nCrawler (Mastropavlos and Papavassiliou, 2011), STRAND (Resnik and Smith, 2003),\nBITS (Ma and Liberman, 1999), or WeBiText (D ´esilets et al., 2008), it seems more\nreasonable to consider adapting an existing parallel data crawler to produce deferred\ntranslation memories than implementing a new stand-off crawler from scratch. In general,\nmost of these parallel data crawlers work following a similar process:\n1. several documents from a given website are downloaded;\n2. documents are pre-processed and their language is identified;\n3. parallel documents are identified (document alignment) using heuristics;\n4. optionally, parallel documents are segment-aligned.\n37https://www.gala-global.org/tmx-14b\nStand-off Annotation of Web Content 161\n<? xml version = \" 1.0 \" encoding = \"UTF \u00008\"?>\n<tmx version = \" 1.4 \" >\n<header creationtool = \" Deferred Corpus Creator \"\ncreationtoolversion = \" 0.95 \"\ndatatype = \" text / html \" segtype = \" sentence \"\nadminlang = \"en\" srclang = \"en\" o\u0000tmf = \" web \" />\n<body>\n<tu tuid = \"1\">\n<prop type = \"x\u0000alignment confidence \" >0.86</ prop >\n<tuv xml:lang = \"en\" date = \" 20161105 T153005Z \" >\n<prop type = \"x\u0000lang confidence \" >0.91</ prop >\n<prop type = \"x\u0000md5 \">\n28709 ee845d8efaf62318210ecd8ca82\n</ prop >\n<webseg >\nhttp: // web .ua.es/en/about \u0000the\u0000ua. html # fragment (// \u0003[ @id =&\nquot ; parteSuperiorPagina & quot ;]/ div /h1 /0 :11 )\n</ webseg >\n</ tuv>\n<tuv xml:lang = \"es\" date = \" 20161105 T153013Z \" >\n<prop type = \"x\u0000lang confidence \" >0.73</ prop >\n<prop type = \"x\u0000md5 \">\nd502972dbfc178f2c1085875890c2144\n</ prop >\n<webseg >\nhttp: // web .ua.es/va/sobre \u0000la\u0000ua. html # fragment (// \u0003[ @id =&\nquot ; parteSuperiorPagina & quot ;]/ div /h1 /0 :10 )\n</ webseg >\n</ tuv>\n</tu>\n</ body >\n</ tmx>\nFig. 1. Example of a deferred translation memory crawl containing a single translation unit (see\ntext for details).\nTherefore, the problems faced when adapting any parallel data crawler to the purposes\nof our work would be similar in any of them. 
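As a sketch of the last step of such a pipeline, the fragment below builds a translation unit that carries webseg locators instead of copied text; the element and attribute names loosely follow the format of section 4.4, and the offset bookkeeping it presupposes is the subject of the following discussion.

import xml.etree.ElementTree as ET

def deferred_tu(left, right, confidence):
    # left/right are dicts with keys url, fragment, start, end, lang, md5, where
    # start/end refer to the original downloaded document, not the cleaned copy.
    tu = ET.Element("tu")
    ET.SubElement(tu, "prop", {"type": "x-alignment confidence"}).text = str(confidence)
    for side in (left, right):
        tuv = ET.SubElement(tu, "tuv", {"xml:lang": side["lang"]})
        ET.SubElement(tuv, "prop", {"type": "x-md5"}).text = side["md5"]
        ET.SubElement(tuv, "webseg").text = (
            side["url"] + "#fragment(" + side["fragment"]
            + "/" + str(side["start"]) + ":" + str(side["end"]) + ")")
    return tu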
This section discusses how these crawlers\ncould be adapted to produce deferred translation memories (Section 4.3).\nOne of the main obstacles to adapt a state-of-the-art parallel data crawler for the pur-\npose of our work is that, in most of the cases, they do not obtain the translation memories\ndirectly from the original documents downloaded from the web: these documents are\npre-processed before segment-aligning them. For example, Bitextor and ILSP Focused\nCrawler normalise HTML documents into XHTML by using the tool Apache Tika ,38and\nremove boilerplates with the tool Boilerpipe .39In addition, most crawlers remove the\nHTML mark-up before segment alignment. This means that both the HTML structure\nand the content of the documents may be modified before obtaining the final segment\nalignment. To deal with this problem it would be necessary to annotate the text in the\ndocument with the reference of its position in the original document. This could be done\nby using additional HTML mark-up, which would be preserved during pre-processing.\nAfter document alignment and HTML mark-up cleaning, every document would\nconsist of a collection of text blocks for which their current offset is mapped to their\n38http://tika.apache.org/\n39http://code.google.com/p/boilerpipe/\n162 Forcada et al.\nposition in the original document. At this point, sentence splitting is carried out, which\nyields several segments from a single text block for which its position in the original\ndocument is known. It will therefore be necessary to obtain the position of every segment\nin the original document, which should be straightforward knowing that every text block\nappears in a known position of the HTML tree in the original document. In this case,\nit is sufficient to keep track of the offset of the first and last characters of the segments\nobtained taking the position identifier of the original document as a reference.\nBy adapting existing parallel data crawlers to keep track of the processing carried\nout to transform the original documents to the final segment-aligned parallel corpus,\na TMX-like document such as the one described in Section 4.3 could be obtained by\nreplacing the actual sentence pairs obtained after sentence alignment by the mapping to\ntheir original locations.\n6 Concluding remarks\nThis paper has laid the foundations and advanced a proposal for a new way to distribute\nweb-crawled sentence-aligned bitext to avoid legal problems associated to distribution.\nThe main idea is to distribute a stand-off annotation of the wild web content that makes\nup the aligned sentences or translation units, which is called a deferred translation\nmemory crawl ordeferred training corpus crawl . It is proposed that a modification of\nthe existing TMX standard for translation memories is used as the basis of the new\nstandoff format. This makes it easy to modify existing crawlers such as Bitextor and\nILSP Focused Crawler to produce this kind of output. Although in this paper a tentative\nsyntax to point at the linked segments has been outlined, it could change and evolve\nas specifications regarding robust anchoring to third-party documents are developed\nby the recently created W3C Web Annotation Working Group. 
If the proposal in this\npaper is adopted, we could be looking at massive repositories of deferred translation\nmemories that could be legally distributed without having to manage the copyright of\nthe original content, and which could be used by end users (professional translators,\nstatistical machine translation practitioners) to recrawl the web and use the selected\ncontent under fair use provisions.\nAcknowledgements\nFunding from the European Union Seventh Framework Programme FP7/2007-2013\nunder grant agreement PIAP-GA-2012-324414 (Abu-MaTran) is acknowledged. The\nauthors would like to thank the anonymous reviewers for their valuable suggestions.\nReferences\nVictoria Arranz, Khalid Choukri, Olivier Hamon, N ´uria Bel, and Prodromos Tsi-\navos. PANACEA project deliverable 2.4, annex 1: Issues related to data crawling\nand licensing. http://cordis.europa.eu/docs/projects/cnect/4/248064/080/\ndeliverables/001-PANACEAD24annex1.pdf , 2013.\nStand-off Annotation of Web Content 163\nPaul Baker, Andrew Hardie, and Tony McEnery. A glossary of corpus linguistics . Edinburgh\nUniversity Press, 2006.\nSteven Bird, Ewan Klein, and Edward Loper. Natural Language Processing with Python . O’Reilly,\n2009. http://www.nltk.org/book/ch11.html .\nLynne Bowker and Des Fisher. Computer-aided translation. Handbook of Translation Studies , 1:\n60, 2010.\nMichael Carl and Andy Way. Recent advances in example-based machine translation , volume 21.\nSpringer Science & Business Media, 2003.\nAlain D ´esilets, Benoit Farley, M Stojanovic, and G Patenaude. WeBiText: Building large hetero-\ngeneous translation memories from parallel web content. In Proceedings of Translating and the\nComputer , pages 27–28, London, UK, 2008.\nMiquel Espl `a-Gomis and Mikel L. Forcada. Combining content-based and URL-based heuris-\ntics to harvest aligned bitexts from multilingual sites with bitextor. The Prague Bulletin of\nMathematical Linguistics , 93:77–86, 2010.\nDaniel Gomes and M ´ario J. Silva. Modelling information persistence on the web. In Proceedings\nof the 6th International Conference on Web Engineering , ICWE ’06, 2006.\nBrian Harris. Bi-text, a new concept in translation theory. Language Monthly , 54:8–10, 1988.\nSebastian Hellmann, Jens Lehmann, and S ¨oren Auer. Linked-data aware URI schemes for\nreferencing text fragments. In Proceedings of the 18th International Conference on Knowledge\nEngineering and Knowledge Management , pages 175–184, Galway City, Ireland, 2012.\nPierre Isabelle, Marc Dymetman, George Foster, Jean-Marc Jutras, Elliott Macklovitch, Fran c ¸ois\nPerrault, Xiaobo Ren, and Michel Simard. Translation analysis and translation automation.\nInProceedings of the 1993 conference of the Centre for Advanced Studies on Collaborative\nresearch: distributed computing-Volume 2 , pages 1133–1147. IBM Press, 1993.\nPhilipp Koehn. Europarl: A Parallel Corpus for Statistical Machine Translation. In Proceedings of\nthe 10th Machine Translation Summit , pages 79–86, Phuket, Thailand, 2005.\nPhilipp Koehn. Statistical machine translation . Cambridge University Press, 2009.\nSteve Lawrence, David M. Pennock, Gary William Flake, Robert Krovetz, Frans M. Coetzee, Eric\nGlover, Finn ˚Arup Nielsen, Andries Kruger, and C. Lee Giles. Persistence of web references in\nscientific research. Computer , 34(2):26–31, 2001.\nJ. Lin and M. Efron. Overview of the TREC-2013 Microblog Track. In Proceedings of the\nTwenty-Second Text REtrieval Conference (TREC 2013) , 2013.\nXiaoyi Ma and Mark Liberman. 
Bits: A method for bilingual text search over the web. In Machine Translation Summit VII, pages 538–542, Singapore, Singapore, 1999.

Nikos Mastropavlos and Vassilis Papavassiliou. Automatic acquisition of bilingual language resources. In Proceedings of the 10th International Conference of Greek Linguistics, 2011.

Atsuyuki Morishima, Akiyoshi Nakamizo, Toshinari Iida, Shigeo Sugimoto, and Hiroyuki Kitagawa. Bringing your dead links back to life: A comprehensive approach and lessons learned. In Proceedings of the 20th ACM Conference on Hypertext and Hypermedia, 2009.

Seung-Taek Park, David M. Pennock, C. Lee Giles, and Robert Krovetz. Analysis of lexical signatures for improving information persistence on the world wide web. ACM Transactions on Information Systems, 22(4):540–572, 2004.

Pavel Pecina, Antonio Toral, Vassilis Papavassiliou, Prokopis Prokopidis, Josef van Genabith, and RIC Athena. Domain adaptation of statistical machine translation using web-crawled resources: a case study. In Proceedings of the 16th Annual Conference of the European Association for Machine Translation, pages 145–152, 2012.

Philip Resnik and Noah A. Smith. The Web as a parallel corpus. Computational Linguistics, 29(3):349–380, 2003.

Michel Simard, George F. Foster, and François Perrault. TransSearch: A bilingual concordance tool. Centre d'innovation en technologies de l'information, Laval, Canada, 1993.

R. Steinberger, B. Pouliquen, A. Widiger, C. Ignat, T. Erjavec, and D. Tufiş. The JRC-Acquis: A multilingual aligned parallel corpus with 20+ languages. In Proceedings of the 5th International Conference on Language Resources and Evaluation, pages 2142–2147, Genoa, Italy, 2006.

Ralf Steinberger, Andreas Eisele, Szymon Klocek, Spyridon Pilos, and Patrick Schlüter. DGT-TM: A freely available translation memory in 22 languages. In Proceedings of the 8th International Conference on Language Resources and Evaluation (LREC'2012), 2012.

Jörg Tiedemann. Parallel data, tools and interfaces in OPUS. In LREC, pages 2214–2218, 2012.

Prodromos Tsiavos, Stelios Piperidis, Maria Gavrilidou, Penny Labropoulou, and Tasos Patrikakos. QTLaunchPad public deliverable D4.5.1: Legal framework. http://www.qt21.eu/launchpad/system/files/deliverables/QTLP-Deliverable-4_5_1_0.pdf, 2014.

Received May 2, 2016; accepted May 9, 2016",
"main_paper_content": null
}
|
{
"decision": "Unknown",
"reviews": []
}
| 0 | 0 |
[] |
[] |
{
"id": "tf9cFMdYQ6",
"year": null,
"venue": "EAMT 2009",
"pdf_link": "https://aclanthology.org/2009.eamt-1.10.pdf",
"forum_link": "https://openreview.net/forum?id=tf9cFMdYQ6",
"arxiv_id": null,
"doi": null
}
|
{
"title": "Gappy Translation Units under Left-to-Right SMT Decoding",
"authors": [
"Josep Maria Crego",
"François Yvon"
],
"abstract": "Josep M. Crego, François Yvon. Proceedings of the 13th Annual conference of the European Association for Machine Translation. 2009.",
"keywords": [],
"raw_extracted_content": "Proceedings of the 13th Annual Conference of the EAMT , pages 66–73,\nBarcelona, May 2009\nGappy Translation Units under Left-to-Right SMT Decoding\nJosep M. Crego\nSpoken Language Processing Group\nLIMSI-CNRS, BP 133\n91430 Orsay cedex, France\[email protected]¸ cois Yvon\nUniv Paris-Sud 11\nLIMSI-CNRS, BP 133\n91430 Orsay cedex, France\[email protected]\nAbstract\nThis paper presents an extension for\na bilingual n-gram statistical machine\ntranslation (SMT) system based on al-\nlowing translation units with gaps. Our\ngappy translation units can be seen as\na first step towards introducing hierar-\nchical units similar to those employed\nin hierarchical MT systems. Our goal\nis double. On the one hand we aim\nat capturing the benefits of the higher\ngeneralization power shown by hierar-\nchical systems. On the other hand, we\nwant to avoid the computational bur-\nden of decoding based on parsing tech-\nniques, which among other drawbacks,\nmake difficult the introduction of the\nrequired target language model costs.\nOur experiments show slight but con-\nsistent improvements for Chinese-to-\nEnglish machine translation. Accu-\nracy results are competitive with those\nachieved by a state-of-the-art phrase-\nbased system.\n1 Introduction\nWork in SMT has evolved from the traditional\nword-based (Brown et al., 1993) to the cur-\nrent phrase-based (Och et al., 1999; Zens et\nal., 2002; Koehn et al., 2003) and hierarchical-\nbased (Melamed, 2004; Chiang, 2007) trans-\nlation models. Phrase-based and hierarchical\nsystems are also characterized by the underly-\ning formal device employed to produce transla-\ntions (Knight, 2008): finite-state transducers\n(FST) on the one hand, and tree transducers\nc/circlecopyrt2009 European Association for Machine Translation.(TT) on the other hand, specified respectively\nby rational and context-free grammars, thus\nimplying clear differences in generative power.\nA thorough comparison between phrase-\nbased and hierarchical MT can be read\nin (Zollmann et al., 2008), concluding\nthat hierarchical models slightly outperform\nphrase-based models under “sufficiently non-\nmonotonic language pairs”. One of the reasons\nfor the gap in performance seems to be the abil-\nity to generalize using non-terminal categories\nbeyond the strictly lexicalized knowledge rep-\nresented in phrase-based models.\nAn illustrative example is given below. It\nconsist of the translation from English to\nFrench of negative verb phrases, which yields\nthe alignment of don’t X ;ne X pas , where X\ncould be replaced by almost any finite verb. 
In\nthis example, the English token don’t is trans-\nlated into the French non-contiguous words ne\nandpas1.\nThe right translation can only be achieved\nunder phrase-based systems, if X(saywant)\nhas been seen in training next to don’t, yielding\nthe translation unit:\ndon′t want :ne veux pas\nIn contrast, under hierarchical systems, it\nis possible to obtain the right generalization,\ndecomposing the previous pattern as:\nX→don′t Y:ne Y pas\nY→want :veux\n1This example is only used for illustrative purposes.\nThe contracted form don’t is not a real issue as most\ntokenizers split the form as do not , thus solving the\nalignment problem.\n66\nThis ability to capture better generalization\ncomes at a double price: translation as pars-\ning is typically cubic with respect to the source\nsentence length; furthermore, in this formal-\nism, target constituent are no longer produced\nmonotonically from left-to-right, thus render-\ning the application of the language model score\ndifficult (Chiang, 2007).\nThis example also suggests that hierarchi-\ncal rules tend to be less sparse, given that the\nholistic unit in the phrase-based (PB) model is\ndivided into two smaller, more reusable, rules.\nNotice that, in this specific case, the rich mor-\nphology of French verbs increases the sparse-\nness problem of phrase-based translation units.\nFinally, by using discontinuous patterns, hier-\narchical translation models can capture large\nspan (bilingual) dependencies.\nOther than modeling discontinuous con-\nstituents, a major difference between FST- and\nCFG-based approaches to translation, has to\ndo with the size of the search space, or more\nprecisely with the kind of pruning that takes\nplace to make the search feasible.\nAs previously outlined, when considering the\nuse of translation units with gaps under the\nleft-to-right decoding approach, the main dif-\nficulty arises motivated by the appearance of\ndiscontinuities in the output side. In this work,\nwe make use of an input word lattice to natu-\nrally avoid this problem, allowing to monoton-\nically compose translation.\nRelated Work\nWe follow the work in (Simard et al., 2005),\nwhich, to the best of our knowledge is the first\nMT system that within a left-to-right decoding\napproach, introduces the idea of phrases with\ngaps. A main limitation of their work arised\nfrom the difficulties of left-to-right decoders to\nhandle gaps in the target side, again because\nof the non-monotonic generation of the target.\nSuch gaps are to be filled in further steps of\nthe search, thus, increasing the complexity of\ndecoding and at the same time that hindering\nthe use of the target language model.\nSuch translation units are more naturally\nused under systems employing parsing tech-\nniques to perform the search (hierarchical\nMT). Different kind of hierarchical transla-\ntion units have been proposed, which mostly\ndiffer from the level of syntactical informa-tion they use. 
We mainly differentiate here\nbetween translation units that are formally\nsyntax-based, like those appearing in (Chi-\nang, 2007), which employ non-terminal cate-\ngories without linguistic motivation, working\nas placeholders to be filled by words in further\ntranslation steps; and hierarchical units that\nare more linguistically motivated, as in (Zoll-\nmann and Venugopal, 2006).\nMore recently, (Watanabe et al., 2006)\npresents a hierarchical system in which the tar-\nget sentence is generated in left-to-right order,\nthus enabling a straightforward integration of\nthen-gram language models during search.\nThe authors employ a top-down strategy to\nparse the foreign language side, using a syn-\nchronous grammar having a GNF2-like struc-\nture. This means that the target side body of\neach translation rule takes the form bβ, where b\nis a string of terminal symbols and βa (possi-\nbly empty) string of non-terminals. This en-\nsures that the target is built monotonously.\n(Venugopal et al., 2007) present a hierarchical\nsystem that derives translations in two steps,\nso as to mitigate the computational impact re-\nsulting from the intersection of a probabilis-\ntic synchronous CFG and and the n-gram lan-\nguage model. Firstly, a CYK-style decoding\nconsidering first-best chart item approxima-\ntions is used to generate an hypergraph of tar-\nget language derivations. In the second step,\na detailed exploration of the previous hyper-\ngraph is performed. The language model is\nused to drive the second step search process\nand to recover from search errors made during\nthe first step.\nOur work differs from theirs crucially in that\nour system employs a different set of trans-\nlation structures (units), and because our de-\ncoder follows strictly the FST-based approach.\nThe remaining of this paper is organized as\nfollows. In Section 2, we outline the n-gram-\nbased approach used in the rest of this work.\nSections 3 and 3.2 detail the use of transla-\ntion units with gaps in a left-to-right decoding\napproach. Translation accuracy results are re-\nported for the Chinese-English language pair\nin section 4. Finally, in section 5, we draw\nconclusions and outline further work.\n2Greibach Normal Form67\n2N-gram-based SMT\nThe baseline translation system described in\nthis paper implements a log-linear combina-\ntion of several models. In contrast to stan-\ndard phrase-based approaches (Koehn et al.,\n2003), the translation model is expressed in tu-\nples(instead of phrases), and is estimated as\nanN-gram language model over such units. It\nactually defines a joint probability between the\nlanguage pairs under consideration (Mari˜ no et\nal., 2006).\nWe have reimplemented the decoder de-\nscribed in (Crego and Mari˜ no, 2007a), that we\nhave extended to decode input lattices. At de-\ncoding time, only those reordering hypotheses\nencoded in the word lattice are to be exam-\nined. Reordering hypotheses are introduced\nfollowing a set of reordering rules automati-\ncally learned from the bi-text corpus word-to-\nword alignments. 
Hence, reordering rules are\napplied on top of the source sentences to be\ntranslated.\nMore formally, given a source sentence, f, in\nthe form of a linear word automaton, and N\noptional reordering rules to be applied on the\ngiven sentence in the form of string transducers\n(τi), the resulting lattice containing reordering\nhypotheses, f∗, is obtained by the sequential\ncomposition of FSTs, as:\nf∗=τN◦τN−1· · · ◦ · · · τ1◦f\nwhere ◦denotes the composition operation.\nNote that the sequence of FSTs (reordering\nrules) is sorted according to the length of the\nleft-hand side (LHS) of the rule. More specific\nrules, having a larger LHS, are applied (com-\nposed) first, in order to ensure the recursive\napplication of the rules. Hence, some paths are\nobtained by applying reordering on top of al-\nready reordered paths. Figure 1 illustrates an\nexample where two reordering rules: abc;cab\n(τ1) and ab;ba(τ2) are applied on top of the\nsentence abcd(s). As it can be seen, the re-\nsulting word lattice contains the path of the\noriginal sentence s:abcd, as well as the ad-\nditional paths appeared by the composition of\nreordering rules: τ1(s) :cab,τ2(s) :baand\nτ2(τ1(s)) :cba.\nPart-of-speech (POS) and syntactic informa-\ntion are used to increase the generalizationpower of our rules. Hence, instead of raw\nwords, the LHS of the reordering rules typi-\ncally make reference to POS-tags patterns, or\nto dependency sub-trees.\nFor instance, the rule NN JJ ;JJ NN\nis defined in terms of POS-tags, and produces\nthe swap of the sequence noun adjective that is\nobserved for the pair French-to-English. Addi-\ntional details regarding the syntax-based rules\nare given in section 3.\na:bb:a *:*\na:c b:ac:b *:*\na b c db a\ncb\naa\nba b c d\nFigure 1: Initial linear automaton (top). Re-\nordering rules in the form of string transducers\n(middle) and final word lattice after rule com-\nposition.\nFor the experiments reported in this paper,\nwe consider that all paths in the input lattice\nare equally likely, a simplification we may wish\nto remove in further research.\n3 Translation units with gaps\nIn this section we give details of the gappy\ntranslation units introduced in this work.\n3.1 Split rules and reordering\nSome phrase-based systems have been able\nto introduce some levels of syntactical infor-\nmation. In (Habash, 2007) the author em-\nploys automatically learned syntactic reorder-\ning rules to preprocess the input, aiming at\nsolving the reordering problem, before passing\nthe reordered input to a phrase-based decoder\nfor Arabic-English translation. However, this\nkind of systems cannot produce the translation68\nneeded in our original English-to-French exam-\nple because of the left-to-right decoding ap-\nproach used in the underlying system. Transla-\ntion is sequentially composed from left to right,\nand none of the word orderings of the source\nsentence, don’t + want andwant + don’t , pro-\nduces the desired translation. Instead, they\nproduce respectively: ne pas + veux andveux\n+ ne pas .\nWe propose a method that allows phrase-\nbased systems to introduce gappy units similar\nto those typically employed in hierarchical sys-\ntems, while keeping the left-to-right decoding\napproach.\nTo collect gappy units, we analyze the (sym-\nmetric) word alignments of the training corpus.\nThe method basically consists of identifying, in\nthe source sentence, single tokens translated\ninto multiple ( n >1) non-contiguous target\ntokens. 
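The following Python fragment sketches this identification and splitting step under simple assumptions: word alignments are given as 0-based (source index, target index) pairs, and only the relabelling of source tokens is shown; the subsequent reordering and monotonization of the source side, handled by the lattice rules described in the rest of this section, is not reproduced here.

```python
# Rough sketch of the splitting step: a source token aligned to several
# non-contiguous target segments is split into one indexed copy per segment
# (e.g. don't -> don't_1, don't_2). Alignments are 0-based (source index,
# target index) pairs. Reordering/monotonization is left to the lattice rules.

def target_runs(tgt_positions):
    """Group sorted target positions into maximal contiguous runs."""
    runs, current = [], []
    for p in sorted(tgt_positions):
        if current and p != current[-1] + 1:
            runs.append(current)
            current = []
        current.append(p)
    if current:
        runs.append(current)
    return runs


def split_source_tokens(src_tokens, alignment):
    """Return the source tokens, with discontinuously aligned ones split."""
    per_src = {}
    for i, j in alignment:
        per_src.setdefault(i, set()).add(j)
    out = []
    for i, tok in enumerate(src_tokens):
        runs = target_runs(per_src.get(i, set()))
        if len(runs) <= 1:
            out.append(tok)
        else:
            out.extend(f"{tok}_{k}" for k in range(1, len(runs) + 1))
    return out


# "don't want" / "ne veux pas": don't is aligned to ne (0) and pas (2)
print(split_source_tokens(["don't", "want"], [(0, 0), (1, 1), (0, 2)]))
# -> ["don't_1", "don't_2", 'want']
```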
Figure 2 shows an example.\ndon'twant\nneveuxpas\ndon't1 want\nneveux pasdon't2don't1 want\nneveuxpasdon't2\nsplit\nFigure 2: Original tuple (top left), introduction\nof split words (top right) and tuples obtained\nafter reordering source words (bottom).\nIn the example, the English token don′t\nis translated into a sequence of discontinu-\nous word segments ne... pas . Once identi-\nfied, the original source token is split so as to\nmatch the number of discontinuous segments.\nTo continue with our example, don′tis split\nintodon′t1anddon′t2to match the two dis-\ncontinuous segments ne...pas. Hence, simi-\nlar to (Crego and Mari˜ no, 2007b), we aim at\nmonotonizing the word-to-word alignment, the\nmain novelty being here the introduction of\nsplit tokens.\nAs it can be seen in the example, the target\nside of translation units remains unchanged,\nmeaning that we can continue to generate the\ntarget in left-to-right fashion. Word reorder-ings and split words are introduced in the\nsource sentence only, motivating the use of a\nword lattice. During training, the alignment is\nentirely monotonized before extracting tuples,\nonly keeping those one-to-many andmany-to-\nonealignments where the tokens on the many\nare contiguous; when this is not the case, split-\nting takes place.\nNote that when translating the same ex-\nample in the opposite direction, that is from\nFrench to English, the right translation is\nachieved without needing to split tokens. In\nsuch a case, the system would proceed by first\nreordering source words, obtaining ne pas veux ,\nand then monotonically translating using the\nunits: ne pas : don’t andveux : want , yielding\nthe right translation don’t want .\nWhen decoding test sentences, the word lat-\ntice is used to encode the most promising re-\norderings/splits of the input sentence, so as\nto reproduce the modifications introduced in\nthe source sentences of the training corpus (as\nshown in figure 2). Thus, we slightly extend\nthe reordering formalism introduced in 2 to al-\nlow the insertion of split tokens. Following the\nprevious example, the new rule consists of:\ndon′t want ;don′t1want don′t2\nmeaning that whenever you find in the input\nsentence the word sequence don’t want , the in-\nput lattice is extended with the path don′t1\nwant don′t2, as represented on figure 3.\ndon't1want\ndon't2\ndon't want ... ...\nFigure 3: Monotonic input graph extended with\na split rule.\nSo far, the method presented does not pro-\nduce gappy units, but standard tuples with\nhigher monotonization levels. However, with\nthe addition of split rules, they become very\nsimilar to the units used in hierarchical trans-\nlation systems. Note that the resulting ex-\ntended input graph (figure 3) contains exactly69\nthe units extracted by the splitting procedure\n(figure 2 bottom).\nThe fully lexicalized split rules previously in-\ntroduced would however be useless, failing to\ngeneralize to novel patterns. Therefore, as is\ndone with “standard” reordering rules, split\nrules are defined over patterns of POS tags,\ninstead words. Of course, the identity of split\nword has to be preserved, as it would make\nno sense to split, during decoding, words for\nwhich no translation units have been collected\nin training. Finally, the split rule induced for\nthe previous example is:\ndon′t V ;don′t1V don′t2\nwhere Vis a POS tag standing for a verb.\nThis strategy has two additional benefits.\nFirst, it yields smaller translation units, whose\nprobability are better estimated. 
Going back\nto the example of figure 2, the original trans-\nlation unit (left) is larger than the new one\n(right), and more likely to cause estimation\nproblems. Second, it allows to better use the\ninformation available in the training corpus.\nTo see why, consider again our running exam-\nple. Leaving the original unit undecomposed\nprevents to extract the match between want\nandveux, which is correctly extracted in the\nnovel formalism.\nIn the next section, we detail how the gener-\nalization power of split/reordering rules can be\nfurther increased by using dependency parse\ntrees.\n3.2 Syntax aware split rules\nSyntactic reordering rules employed in this\nwork are similar to those detailed in (Crego\nand Mari˜ no, 2007b). These rules introduce\nreorderings at the level of syntactic nodes.\nHence, long reorderings can be achieved with\nshort rules, as nodes may dominate arbitrary\nlong sequences of words. Thus, the LHS of\nthe rules is referred to the parse nodes of the\noriginal source sentences, while the RHS spec-\nifies the permutation that is introduced. Fig-\nure 4 shows the parse tree and POS tags of the\nChinese sentence: Aozhou shi yu Beihan you\nbangjiao de shaoshu guojia zhiyi , an example\nborrowed from (Chiang, 2007).\nFigure 5 illustrates how, by applying three\nrules to the previous Chinese example, we canget the reorderings/split required to derive the\ncorrect English translation: Australia is one of\nthe few countries that have diplomatic relations\nwith North Korea . As previously stated, rules\n(FSTs) are sorted before applied (composed).\nNote that in the case of syntactic rules, the\nlength of a rule is based on the number of words\nappearing in the LHS of the rule.\nFigure 4: Dependency parse tree and POS tags\nof the Chinese sentence: ’Aozhou shi yu Bei-\nhan you bangjiao de shaoshu guojia zhiyi’.\nFigure 5: Chinese sentence rewritten by means\nof reordering/split rules.\nConsidering the first rule applied in figure 5,\nthe tree in its LHS contains four nodes (eight\nwords), which cover the following sequences\nof Chinese words: yu Beihan you bangjiao de ,\nshaoshu ,guojia andzhiyi. Words matched by\nthe rules are displayed above the rules using\nbold characters.\nNote that equivalently to POS rules, words\nto be split in syntactical rules appear fully lex-\nicalized. The second rule in figure 5 splits the\nwordde. Thus, it appears fully lexicalized in\nthe LHS of the rule.\nFinally, the last rule is formed of POS tags.\nIt reorders the words yu Beihan you bangjiao\nintoyou bangjiao yu Beihan . The monotonic\ntranslation of the resulting reordered path\nyields the correct English translation.70\nSyntactical reordering/split rules are auto-\nmatically extracted from the training bi-texts,\nmaking use of the word-to-word alignments\nand the source dependency trees.\nTo conclude this section, notice that gappy\nunits introduced in this work are only those\nthat are motivated by word structures where\nwords of the source side are aligned to multiple\nnon-contiguous words of the target side. As a\nresult, we approximate the behavior of a hier-\narchical system employing only a very limited\nset of rule patterns.\n4 Experiments\nIn this section, we give details regarding the\nevaluation framework and report on the ex-\nperimental work carried out to evaluate the\nimprovements.\n4.1 Evaluation Framework\nWe have used the BTEC (Takezawa et al.,\n2002) corpus focusing on translations from\nChinese to English. 
It consists of the data\nmade available for the IWSLT 2007 evaluation\ncampaign. Some statistics regarding the cor-\npora used, namely number of sentences, words,\nvocabulary, average sentence length and num-\nber of references per language are shown in ta-\nble 1.\nSent Words Voc Avg Refs\nTrain\nen 377k 11k 9.5\nzh40k354k 9,6k 8.91\nTune / Test (zh)\ntune 506 3,564 871 7 16\ntst2 500 3,608 921 7.22 16\ntst3 506 3,889 916 7.69 16\ntst4 489 5,476 1,094 11.2 7\ntst5 500 5,846 1,292 11.69 7\ntst6 489 3,325 864 6.8 6\nTable 1: BTEC Corpus (Chinese-to-English).\nChinese words were segmented by means\nof the ICTCLAS (Zhang et al., 2003) tag-\nger/segmenter. Word alignments were com-\nputed for the training data in the original word\norder, using GIZA++3. The grow-final-diag-\nand heuristic is used to refine the alignments\n3www.fjoch.com/GIZA++before the translation units extraction. The\nChinese side was parsed using the freely avail-\nable Stanford Chinese Dependency Parser4.\nWe have used the SRILM toolkit5to estimate\ntheN-gram language models, using respec-\ntively 4 and 5 as n-gram orders for the transla-\ntion LM and target LM (Kneser-Ney smooth-\ning and interpolation of lower and higher n-\ngrams are always used).\nFor tuning, optimal log-linear coefficients\nwere found using an in-house implementation\nof the downhill SIMPLEX method. The BLEU\nscore was used as the objective function.\n4.2 Results\nAccuracy results are reported for different con-\nfigurations in table 2. System configurations\nconsist of: basefor which translation units do\nnot introduce the ability to split source words\ninto multiple tokens, and +split where the\nprevious technique is used. The POS config-\nuration employs POS tags in the source side\nof the reordering rules while +SYN employs\nboth POS tag and syntactic rules.\nbase +splitSetPOS +SYN POS +SYNMoses\ntst2 47.25 48.15 47.42 48.39 48.14\ntst3 55.82 56.88 56.44 57.17 55.95\ntst4 15.72 16.82 16.48 17.08 18.06\ntst5 15.89 16.32 16.34 16.89 15.91\ntst6 29.56 30.81 29.81 31.67 31.76\nTable 2: Accuracy results measured using the\nBLEU score.\nThe last column shows accuracy results ob-\ntained by Moses (Koehn et al., 2007), a state-\nof-the-art phrase-based SMT system.\nIt is worth saying that the Moses system\nwas built using the same data sets and align-\nments that were used for our system (Moses\nperforms lexicalized reordering with a maxi-\nmum reordering distance of 8 words). In this\ncase, we run a different optimization for each of\nthe system configurations. BLEU confidence\nintervals range depending on the test set ap-\nproximately from ±2.0 to±3.0 points BLEU.\nAs it can be seen, the system built using\nthe+split technique obtains higher accuracy\nresults than the baseline one ( base), in all test\n4nlp.stanford.edu/downloads/lex-parser.shtml\n5www.speech.sri.com/projects/srilm71\nsets and for both reordering rule configurations\n(POS and+SYN ).\nEven if results show a clear tendency to\nhighly score the +split system, differences in\nall BLEU results fall within the confidence\nmargin. However, when inspecting transla-\ntions obtained by the system +split +SYN ,\nwe find several examples, such as the one\nshown in figure 6, where the decoder succeeds\nto apply the proposed gappy units.\nFigure 6: Sequence of translation units output\nby the decoder.\nAs it can be seen, motivated by a gappy\nunit, the first Chinese word is translated in\ntwo distant steps, yielding how much andcost\nrespectively. 
The gap between both fragments\nis correctly filled by the English words does it\nas translation of the second and third Chinese\nwords.\nConsidering the base systems, the same\ntranslation could only be produced if the first\nthree Chinese words had been seen in training\naligned to how much does it . In other words,\nlarger units are needed to account for the cor-\nrect translation.\nThe increment in the total number of trans-\nlation units extracted when moving from the\nbase to the +split configurations (from 267 k\nto 285 k), as well as the increment in units\nused to translate the test sets (from 18 ,345 to\n19,150) supports the fact that higher mono-\ntonizations levels of the training corpus have\nbeen achieved. All together, the resulting vo-\ncabulary of translation units, including all the\nnew split units (13 ,706), contains 63 ,036 units\nto be compared with the 56 ,046 units in the\nbaseline system.\nConsidering search efficiency, decoding time\nwas increased about 1 .5 times when build-\ning the system using the split technique, for\nboth reordering rule configurations ( POS and\n+SYN ). Using gappy translation units does\nnot increase the complexity of the search.5 Conclusions and Further Work\nIn this paper, we have presented an exten-\nsion to a bilingual n-gram translation system\nin which we allow translation units with gaps.\nThe use of word lattices allowed us to introduce\nthe concept of gappy translation units into an\nn-gram-based system, as an attempt to bridge\nthe gap between phrase-based and hierarchi-\ncal systems. Our decoder additionally benefits\nfrom the simplicity of left-to-right decoders, in\ncontrast to the cost in complexity incurred by\nperforming decoding as parsing. This have\nbeen achieved by means of standard tuples\ntightly coupled with reordering/split rules, in-\ntroduced into the overall search through an in-\nput word lattice.\nOur small but consistent accuracy improve-\nments can mainly be attributed to the fact\nthat a higher level of monotonization of\nthe training corpus allows the extraction of\nsmaller/more reusable units. As explained\nabove, the split/reordering rules used in this\nstudy are costless, meaning that all reorder-\nings are equally likely. As a consequence, the\nreward of using a split rule only comes from\nthe translation models’ score, which are com-\nputed separately for each instance of a split\ntoken. We believe that devising an appropri-\nate weighting scheme for these split/reordering\nrules is needed to take full advantage of the ex-\ntra expressiveness allowed by gappy units.\nWith the objective that our translation\nmodel highly benefits from the advantages of\nadditional context, each gappy translation unit\nmust be entirely weighted with a single proba-\nbility. Instead, in our current implementation,\neach gappy unit is multiply weighted with par-\ntial probabilities. An open issue to definitely\ntackle in further research.\nAdditionally, we believe that the slight im-\nprovements achieved can be increased if ad-\nditional gappy units are acquired from bilin-\ngual structures other than the one-to-many\nemployed in the present experiments. 
We plan\nto extend the framework proposed in this pa-\nper with more complex gappy units, simil-\niar to those used by hierarchical MT systems,\nthereby, taking full advantage of additional\ntranslation context provided by these units.\nWe also plan to further investigate other as-\npects of hierarchical units, such as different72\nlevels of lexicalization in both the source and\nthe target side.\nAcknowledgments\nThis work has been partially funded by OSEO\nunder the Quaero program.\nReferences\nBrown, P., S. Della Pietra, V. Della Pietra, and\nR. Mercer. 1993. The mathematics of statisti-\ncal machine translation: Parameter estimation.\nComputational Linguistics , 19(2):263–311.\nChiang, David. 2007. Hierarchical phrase-\nbased translation. Computational Linguistics ,\n33(2):201–228.\nCrego, J.M. and J.B. Mari˜ no. 2007a. Extending\nmarie: an n-gram-based smt decoder. 45rd An-\nnual Meeting of the Association for Computa-\ntional Linguistics , April.\nCrego, J.M. and J.B. Mari˜ no. 2007b. Syntax-\nenhanced n-gram-based smt. Proc. of the MT\nSummit XI , pages 111–118, September.\nHabash, N. 2007. Syntactic preprocessing for sta-\ntistical machine translation. Proc. of the MT\nSummit XI , September.\nKnight, Kevin. 2008. Capturing practical natural\nlanguage transformations. Machine Translation ,\n21(2):121–133.\nKoehn, Ph., F.J. Och, and D. Marcu. 2003. Sta-\ntistical phrase-based translation. Proc. of the\nHuman Language Technology Conference, HLT-\nNAACL’2003 , May.\nKoehn, Philipp, Hieu Hoang, Alexandra Birch,\nChris Callison-Burch, Marcello Federico, Nicola\nBertoldi, Brooke Cowan, Wade Shen, Christine\nMoran, Richard Zens, Chris Dyer, Ondrej Bojar,\nAlexandra Constantin, and Evan Herbst. 2007.\nMoses: Open source toolkit for statistical ma-\nchine translation. In Proceedings of the 45th An-\nnual Meeting of the Association for Computa-\ntional Linguistics Companion Volume Proceed-\nings of the Demo and Poster Sessions , pages\n177–180, Prague, Czech Republic, June. Asso-\nciation for Computational Linguistics.\nMari˜ no, J.B., R.E. Banchs, J.M. Crego, A. de Gis-\npert, P. Lambert, J.A.R. Fonollosa, and M.R.\nCostajuss` a. 2006. N-gram based machine trans-\nlation. Computational Linguistics , 32(4):527–\n549.\nMelamed, D. 2004. Statistical machine translation\nby parsing. 42nd Annual Meeting of the Associ-\nation for Computational Linguistics , pages 653–\n661, July.Och, F.J., Ch. Tillmann, and H. Ney. 1999. Im-\nproved alignment models for statistical machine\ntranslation. Proc. of the Joint Conf. of Empiri-\ncal Methods in Natural Language Processing and\nVery Large Corpora , pages 20–28, June.\nSimard, M., N. Cancedda, B. Cavestro, M. Dymet-\nman, E. Gaussier, C. Goutte, K. Yamada,\nP. Langlais, and A. Mauser. 2005. Translating\nwith non-contiguous phrases. pages 755 – 762,\nOctober 6-8.\nTakezawa, T., E. Sumita, F. Sugaya, H Yamamoto,\nand S. Yamamoto. 2002. Toward a broad-\ncoverage bilingual curpus for speech translation\nof travel conversations in the real world. 3rd\nInt. Conf. on Language Resources and Evalua-\ntion, LREC’02 , pages 147–152, May.\nVenugopal, Ashish, Andreas Zollmann, and Vogel\nStephan. 2007. An efficient two-pass approach\nto synchronous-CFG driven statistical MT. In\nHuman Language Technologies 2007: The Con-\nference of the North American Chapter of the\nAssociation for Computational Linguistics; Pro-\nceedings of the Main Conference , pages 500–\n507, Rochester, New York, April. Association for\nComputational Linguistics.\nWatanabe, T., H. 
Tsukada, and H Isozaki. 2006.\nLeft-to-right target generation for hierarchical\nphrase-based translation. Proc. of the 21st Int.\nConf. on Computational Linguistics and 44th\nAnnual Meeting of the Association for Compu-\ntational Linguistics , July.\nZens, R., F.J. Och, and H. Ney. 2002. Phrase-\nbased statistical machine translation. In Jarke,\nM., J. Koehler, and G. Lakemeyer, editors, KI\n- 2002: Advances in artificial intelligence , vol-\nume LNAI 2479, pages 18–32. Springer Verlag,\nSeptember.\nZhang, H., H. Yu, D. Xiong, and Q. Liu. 2003.\nHHMM-based chinese lexical analyzer ictclas. In\nProc. of the 2nd SIGHAN Workshop on Chi-\nnese language processing , pages 184–187, Sap-\nporo, Japan.\nZollmann, Andreas and Ashish Venugopal. 2006.\nSyntax augmented machine translation via chart\nparsing. In Proceedings on the Workshop on\nStatistical Machine Translation , pages 138–141,\nNew York City, June. Association for Computa-\ntional Linguistics.\nZollmann, Andreas, Ashish Venugopal, Franz Och,\nand Jay Ponte. 2008. A systematic compar-\nison of phrase-based, hierarchical and syntax-\naugmented statistical MT. In Proceedings of\nthe 22nd International Conference on Compu-\ntational Linguistics (Coling 2008) , pages 1145–\n1152, Manchester, UK, August. Coling 2008 Or-\nganizing Committee.73",
"main_paper_content": null
}
|
{
"decision": "Unknown",
"reviews": []
}
| 0 | 0 |
[] |
[] |
{
"id": "b_Cy-5jz04j",
"year": null,
"venue": "EAMT 2015",
"pdf_link": "https://aclanthology.org/W15-4907.pdf",
"forum_link": "https://openreview.net/forum?id=b_Cy-5jz04j",
"arxiv_id": null,
"doi": null
}
|
{
"title": "The role of artificially generated negative data for quality estimation of machine translation",
"authors": [
"Varvara Logacheva",
"Lucia Specia"
],
"abstract": "Varvara Logacheva, Lucia Specia. Proceedings of the 18th Annual Conference of the European Association for Machine Translation. 2015.",
"keywords": [],
"raw_extracted_content": "The role of artificially generated negative data for quality estimation of\nmachine translation\nVarvara Logacheva\nUniversity of Sheffield\nSheffield, United Kingdom\[email protected] Specia\nUniversity of Sheffield\nSheffield, United Kingdom\[email protected]\nAbstract\nThe modelling of natural language tasks\nusing data-driven methods is often hin-\ndered by the problem of insufficient nat-\nurally occurring examples of certain lin-\nguistic constructs. The task we address\nin this paper – quality estimation (QE) of\nmachine translation – suffers from lack of\nnegative examples at training time, i.e.,\nexamples of low quality translation. We\npropose various ways to artificially gener-\nate examples of translations containing er-\nrors and evaluate the influence of these ex-\namples on the performance of QE models\nboth at sentence and word levels.\n1 Introduction\nThe task of classifying texts as “correct” or “incor-\nrect” often faces the problem of unbalanced train-\ning sets: examples of the “incorrect” class can be\nvery limited or even absent. In many cases, natu-\nrally occurring instances of these examples are rare\n(e.g. incoherent sentences, errors in human texts).\nIn others, the labelling of data is a non-trivial task\nwhich requires expert knowledge.\nConsider the task of quality estimation (QE) of\nmachine translation (MT) systems output. When\nperforming binary classification of automatically\ntranslated sentences one should provide examples\nof both bad and good quality sentences. Good\nquality sentences can be taken from any parallel\ncorpus of human translations, whereas there are\nvery few corpora of sentences annotated as having\nlow quality. These corpora need to be created by\nc�2015 The authors. This article is licensed under a Creative\nCommons 3.0 licence, no derivative works, attribution, CC-\nBY-ND.human translators, who post-edit automatic trans-\nlations, mark errors in translations, or rate transla-\ntions for quality. This process is slow and expen-\nsive. It is therefore desirable to devise automatic\nprocedures to generate negative training data for\nQE model learning.\nPrevious work has followed the hypothesis that\nmachine translations can be assumed to have low\nquality (Gamon et al., 2005). However, this is not\nthe case nowadays: many translations can be con-\nsideredflawless. Particularly for word-level QE, it\nis unrealistic to presume that every single word in\nthe MT output is incorrect. Another possibility is\nto use automatic quality evaluation metrics based\non reference translations to provide a quality score\nfor MT data. Metrics such as BLEU (Papineni\net al., 2002), TER (Snover et al., 2006) and ME-\nTEOR (Banerjee and Lavie, 2005) can be used to\ncompare the automatic and reference translations.\nHowever, these scores can be very unreliable, es-\npecially for word-level QE, as every word that dif-\nfers in form or position would be annotated as bad.\nPrevious efforts have been made for negative\ndata generation, including random generation of\nsentences from word distributions and the use of\ntranslations in low-ranked positions in n-best lists\nproduced by statistical MT (SMT) systems. These\nmethods are however unsuitable for QE at the word\nlevel, as they provide no information about the\nquality of individual words in a sentence.\nIn this paper we adopt a different strategy: we\ninsert errors in otherwise correct sentences. 
This\nprovides control over the proportion of errors in\nthe negative data, as well as knowledge about the\nquality of individual words in the generated sen-\ntences. The goals of the research presented here\nare to understand the influence of artificially gener-\nated data (by various methods and in various quan-51\ntities) on the performance of QE models at both\nsentence and word levels, and ultimately improve\nupon baseline models by extending the training\ndata with suitable artificially created examples. In\nSection 2 we further review existing strategies for\nartificial data generation. We explain our genera-\ntion strategies in Section 3. In Section 4 we de-\nscribe our experiment and their results.\n2 Previous work\n2.1 Discriminative language modelling\nOne example of task that requires low quality\nexamples is discriminative language modelling\n(DLM), i.e., the classification of sentences as\n”good” or ”bad”. It wasfirst introduced in a mono-\nlingual context within automatic speech recogni-\ntion (Collins et al., 2005), and later applied to MT.\nWhile in speech recognition negative examples can\nbe created from system outputs that differ from\nthe reference (Bhanuprasad and Svenson, 2008), in\nMT there are multiple correct outputs, so negative\nexamples need to be defined more carefully.\nIn Okanohara (2007) bad sentences used as neg-\native training instances are drawn from the dis-\ntributionP(w i|wi−N+1 , ..., w i−1):first the start\nsymbol< s >is generated, then the next words\nare taken based on the word probability given the\nalready generated words.\nOther approaches to discriminative LMs use the\nn-best list of the MT system as training data (Li\nand Khudanpur, 2008). The translation variant\nwhich is closest to the oracle (e.g. has the highest\nBLEU score) is used as a positive example, while\nthe variant with high system score and low BLEU\nscore is used as a negative example. Such dataset\nallows the classifier to reduce the differences be-\ntween the model score and the actual quality score\nof a sentence.\nLi et al. (2010) simulate the generation of an\nn-best list using translation tables from SMT sys-\ntems. By taking entries from the translation table\nwith the same source side they create a set of alter-\nnative translations for a given target phrase. For\neach sentence, these are combined, generating a\nconfusion set for this sentence.\n2.2 Quality estimation for MT\nQE can be modelled as a classification task where\nthe goal is to distinguish good from bad transla-\ntions, or to provide a quality score to each trans-\nlation. Therefore, examples of bad sentences orwords produced by the MT system are needed. To\nthe best of our knowledge, the only previous work\non adding errors to well-formed sentences is that\nby Raybaud et al. (2011).\nIn (Raybaud et al., 2011), the training data\nfor the negative data generation process consists\nof a set of MT hypotheses manually post-edited\nby a translator. Hypotheses are aligned with the\ncorresponding post-editions using the TERp tool\n(Snover et al., 2008). The alignment identifies the\nedit operations performed on the hypothesis in or-\nder to convert it to the post-edited version: leave\nword as is (no error), delete word, insert new word,\nsubstitute word with another word. Two models of\ngeneration of error strings from a well-formed sen-\ntence are proposed. 
Both are based on the observed\nfrequency of errors in the post-edited corpus and\ndo not account for any relationships between the\nerrors and the actual words. Thebigram error\nmodeldraws errors from the bigram probabilities\nP(C i|Ci−1)whereC iis an error class. Theclus-\nter error modelgenerates clusters of errors based\non the distribution of lengths of erroneous word\nsequences in the training data. Substituting words\nare chosen from a probability distribution defined\nas the product of these words’ probabilities in the\nIBM-1 model and a 5-gram LM. A model trained\nonly on artificial data performs slightly better than\none trained on a small manually annotated corpus.\n2.3 Human error correction\nAnother task that can benefit from artificially gen-\nerated examples is language learner error correc-\ntion. The input for this task is text that potentially\ncontains errors. The goal is tofind these errors,\nsimilarly to QE at the word level, and additionally\ncorrect them. While the text is written by humans,\nit is assumed that these are non-native speakers,\nwho possibly translate the text from their native\nlanguage. The difference is that in this task the\nsource text is a hidden variable, whereas in MT it\nis observed.\nThe strategy of adding errors to correct sen-\ntences has also been used for this task. Human\nerrors are more intuitive to simulate as language\nlearners explicitly attempt to use natural language\ngrammars. Therefore, rule-based systems can be\nused to model some grammar errors, particularly\nthose affecting closed class words, e.g. determiner\nerrors (Izumi et al., 2003) or countability errors\n(Brockett et al., 2006).52\nMore recent statistical methods use the distribu-\ntions of errors in corpora and small seed sets of\nerrors. They often also concentrate on a single er-\nror type, usually with closed class words such as\narticles and prepositions (Rozovskaya and Roth,\n2010). Felice and Yuan (2014) go beyond closed\nclass words to evaluate how errors of different\ntypes are influenced by various linguistic param-\neters: text domain, learner’sfirst language, POS\ntags and semantic classes of erroneous words. The\napproach led to the generation of high-quality ar-\ntificial data for human error correction. However,\nit could not be used for MT error identification,\nas MT errors are different from human errors and\nusually cannot be assigned to a single type.\n3 Generation of artificial data\nThe easiest choice for artificial data generation is\nto create a sentence by taking all or some of its\nwords from a probability distribution of words in\nsome monolingual corpus. The probability can\nbe defined for unigrams only or conditioned on\nthe previous words (as it was done for discrimina-\ntive LMs). This however is a target language-only\nmethod that does not suit the QE task as the “qual-\nity” of a target word or sentence is dependent on\nthe source sentence, and disregarding it will cer-\ntainly lead to generation of spurious data.\nRandom target sentences based on a given\nsource sentence could be generated with bilingual\nLMs. However another limitation of this approach\nis the assumption that all words in such sentences\nare wrong, which makes the data useless for word-\nlevel QE.\nAlternatively, the artificial sentences can be gen-\nerated using MT systems for back-translation. The\ntarget sentences arefirst fed to a target–source\nMT system, and then its output is passed to a\nsource–target system. 
However, according to our\nexperiments, if both systems are statistical the\nback-translation is too similar to the original sen-\ntence, and the majority of their differences are in-\nterchangeable paraphrases. Rule-based systems\ncould be more effective, but the number of rule-\nbased systems freely available would limit the\nwork to a small number of language pairs.\n3.1 A two-stage error generation method\nAs previously discussed, existing methods that ar-\ntificially generate entire sentences have drawbacks\nthat make them difficult or impossible to use forQE. Therefore, following Raybaud et al. (2011)\nand previous work on human error correction, our\napproach is to inject errors into otherwise correct\ntexts. This process consists of two stages:\n•labelling of a sentence with error tags,\n•insertion of the errors into that sentence.\nThefirst stage assigns an error tag to every word\nin a sentence. The output of this stage is the initial\nsentence where every word is assigned a tag de-\nnoting a type of error that needs to be incurred on\nthis word. We usefive tags corresponding to edit\noperations in the TERp tool: no error (OK), sub-\nstitution (S), deletion (D), insertion (I) and shift\n(H). During the second stage the words in the sen-\ntence are changed according to their tag: substi-\ntuted, deleted, shifted, or left in place if word has\nthe tagOK. Figure 1 gives an example of the com-\nplete generation process.\n3.1.1 Error tagging of sentences\nWe generate errors based on a corpus of post-\nedited machine translations. We align transla-\ntions and post-editions using the TERp tool (ex-\nact matching) and extract counts on the number\nof shifts, substitutions, insertions and deletions.\nTERp does not always capture the true errors, in\nparticular, it fails to identify phrase substitutions\n(e.g.was→has been). However, since editors\nare usually asked to minimise the number of ed-\nits, translations and post-editions are often close\nenough and the TERp alignment provide a good\nproxy to the true error distribution.\nThe TERp alignments can be used to collect the\nstatistics on errors alone or to combine the fre-\nquency of errors with the words they are incurred\non. We suggest three methods of generation of an\nerror string for a sentence:\n•bigramEG : thebigramerror generation that\nuses a bigram error model regardless of the\nactual words (Raybaud et al., 2011).\n•wordprobEG : the conditional probability of\nan error given a word.\n•crfEG : the combination of the bigram error\nmodel and error probability conditioned on a\nword. This generation method can be mod-\nelled with Hidden Markov Model (HMM) or\nconditional randomfields (CRF).\nThefirst model has the advantage of keeping\nthe distribution of errors as in the training data,\nbecause the probability distributions used depend\n53\nFigure 1: Example of the two-stage artificial data generation process\nonly on the frequency of errors themselves. The\nsecond model is more informed about which words\ncommonly cause errors. Our implementation of\nthe third method uses CRFs to train an error model.\nWe use all unigrams, bigrams and trigrams that in-\nclude the target word as features for training. This\nmethod is expected to produce more plausible er-\nror tags, but it can have the issue that the vocab-\nulary we want to tag is not fully covered by the\ntraining data, so some words in the sentences to\ntag will be unknown to the trained model. 
If an\nunknown word needs to be tagged, it will more of-\nten be tagged with the most frequent tag, which is\n“Good” in our case. In order to avoid this problem\nwe replace rare words in training set with a default\nstring or with the word class, e.g. a POS tag.\n3.1.2 Insertion of errors\nWe consider errors of four types:insertion,\ndeletion,substitutionandshift. Word marked\nwith the ‘deletion’ error tag are simply removed.\nShift errors require the distribution of shift dis-\ntances which are computed based on a TERp-\naligned corpus. Substitutions and insertions re-\nquire word insertion (WI) and the new words need\nto be drawn from some probability distribution.\nWe suggest two methods for the generation of\nthese distributions:\n•unigramWI : word frequencies computed\nbased on a large monolingual corpus.\n•paraphraseWI : distributions of words that\ncan be used instead of the current word in the\ntranslation. This computation is performed\nas follows:first all possible sources of a tar-\nget word are extracted from an SMT system’s\ntranslation table, then all possible targets for\nthese sources. That gives us a confusion set\nfor each target word.\n4 Experiments\nWe conducted a set of experiments to evaluate the\nperformance of artificially generated data on dif-\nferent tasks of QE at the sentence and word levels.4.1 Tools and datasets\nThe tools and resources required for our experi-\nments are: a QE toolkit to build QE models, the\ntraining data for them, the data to extract statistics\nfor the generation of additional examples.\nThe for sentence-level QE we used the Q UEST\ntoolkit (Specia et al., 2013). It trains QE mod-\nels usingsklearn1versions of Support Vec-\ntor Machine (SVM) classifier (for ternary clas-\nsification task, Section 4.4) and SVM regression\n(for HTER prediction, Section 4.5). The word-\nlevel version of Q UEST2was used for word-level\nfeature extraction. Word-level classifiers were\ntrained withCRFSuite3. The CRF error mod-\nels were trained withCRF++4. POS tagging\nwas performed withTreeTagger(Schmid, 1994).\nSentence-level QuEst uses 17 baseline features5\nfor all tasks. Word-level QuEst reimplements the\nset of 30 baseline features described in (Luong\net al., 2014). The QE models were built and\ntested based on the data provided for the WMT14\nEnglish–Spanish QE shared task (Section 4.3).\nThe statistics on error distributions were com-\nputed using the English–Spanish part of training\ndata for WMT13 shared task on QE6. The statis-\ntics on the distributions of words, alignments and\nlexical probabilities were extracted from the Eu-\nroparl corpus (Koehn, 2005). We trained the align-\nment model withFastAlign(Dyer et al., 2013) and\nextracted the lexical probabilities tables for words\nusing scripts for phrase table building inMoses\n(Koehn et al., 2007). For all the methods, errors\nwere injected into the News Commentary corpus7.\n1http://scikit-learn.org/\n2http://github.com/ghpaetzold/quest\n3http://www.chokkan.org/software/crfsuite/\n4https://code.google.com/p/crfpp/\n5http://www.quest.dcs.shef.ac.uk/\nquest files/features blackbox baseline 17\n6http://www.quest.dcs.shef.ac.uk/\nwmt13 qe.html\n7http://statmt.org/wmt14/\ntraining-parallel-nc-v9.tgz54\n4.2 Generated data\nCombining three methods of errors generation and\ntwo methods of errors insertion into sentences re-\nsulted in a total of six artificial datasets. 
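For concreteness, the sketch below mimics the two-stage procedure of Section 3.1 that produced these datasets, combining a bigramEG-style error tagger with unigramWI-style word insertion. The transition probabilities and the substitution vocabulary are toy placeholders rather than the values estimated from the TERp-aligned post-editions and the monolingual corpus, and shift errors are omitted.

```python
# Minimal sketch of the two-stage generation of Section 3.1, in the spirit of
# bigramEG error tagging combined with unigramWI word insertion. Probabilities
# and vocabulary are toy placeholders; shift errors ("H") are omitted.

import random

TAGS = ["OK", "S", "D", "I"]
BIGRAM = {                       # P(tag_i | tag_{i-1}), toy values
    "<s>": [0.80, 0.10, 0.05, 0.05],
    "OK":  [0.80, 0.10, 0.05, 0.05],
    "S":   [0.60, 0.20, 0.10, 0.10],
    "D":   [0.60, 0.20, 0.10, 0.10],
    "I":   [0.70, 0.15, 0.10, 0.05],
}
VOCAB = ["la", "de", "que", "en", "los"]   # stand-in for the unigramWI distribution

def tag_sentence(words, rng):
    """First stage: draw an error tag for every word from the bigram error model."""
    prev, tags = "<s>", []
    for _ in words:
        tag = rng.choices(TAGS, weights=BIGRAM[prev])[0]
        tags.append(tag)
        prev = tag
    return tags

def apply_errors(words, tags, rng):
    """Second stage: rewrite the sentence and keep word-level OK/BAD labels."""
    out, labels = [], []
    for w, t in zip(words, tags):
        if t == "OK":
            out.append(w)
            labels.append("OK")
        elif t == "S":                      # substitute the word
            out.append(rng.choice(VOCAB))
            labels.append("BAD")
        elif t == "D":                      # delete the word
            continue
        elif t == "I":                      # keep the word, insert a spurious one
            out.append(w)
            labels.append("OK")
            out.append(rng.choice(VOCAB))
            labels.append("BAD")
    return out, labels

rng = random.Random(0)
sentence = "el gato se sienta en la alfombra".split()
noisy, labels = apply_errors(sentence, tag_sentence(sentence, rng), rng)
```

Each generated sentence thus comes with word-level OK/BAD labels and a known number of edits, which is what makes such data usable for both the sentence-level and the word-level tasks.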
Here we\nperform some analysis on the generated data.\nThe datasets differ in the percentage of errors\ninjected into the sentences.BigramEGdatasets\nhave 23% of edits which matches the distribution\nof errors on the real data.WordprobEGdatasets\ncontain fewer errors — 17%.\nThecrfEGmodels contain the lowest number\nof errors — 5% of the total number of words. As it\nwas expected, data sparsity makes the CRF model\ntag the majority of the words with the most fre-\nquent tag (“Good”). Replacing rare words with a\ndefault word token or with a POS tag did not im-\nprove these statistics.\nWord insertersUnigram Paraphrase\nError generators\nBigram 699.9 888.64\nWordprob 538.84 673.61\nCRF + default word 165.36 172.97\nCRF + POS tag 161.59 167.23\nTable 1: Perplexities of the artificial datasets\nWe computed the perplexity of all datasets with\nrespect to an LM trained on the Spanish part of the\nEuroparl corpus (see Table 1). Thefigures match\nthe error percentages in the data — the lower the\nnumber of errors, the more is kept from the original\nsentence, and thus the more natural it looks (lower\nperplexity). Note that sentences where errors were\ninserted from a general distribution (unigramWI)\nhave lower perplexity than those generated using\nusing paraphrases. This can be because theun-\nigramWImodel tends to choose high-frequency\nwords with lower perplexity, while the constructed\nparaphrases contain more noise and rare words.\n4.3 Experimental setup\nWe evaluated the performance of the artificially\ngenerated data in three tasks: the ternary clas-\nsification of sentences as “good”, “almost good”\nor “bad”, the prediction of HTER (Snover et al.,\n2009) score for a sentence, and the classification\nof words in a sentence as “good” or “bad” (tasks\n1.1, 1.2 and 2 of WMT14 QE shared task8, respec-\ntively).\n8http://statmt.org/wmt14/\nquality-estimation-task.htmlThe goal of the experiments was to check\nwhether it is possible to improve upon the baseline\nresults by adding artificially generated examples\nto the training sets. The baseline models for all\ntasks were trained on the data provided for the cor-\nresponding shared tasks for the English–Spanish\nlanguage pair. All models were tested on the offi-\ncial test sets provided for the corresponding shared\ntasks.\nSince we know how many errors were injected\ninto the sentences, we know the TER scores for our\nartificial data. The discrete labels for the ternary\nclassification task are defined as follows: “bad”\nsentences have four or more non-adjacent errors\n(two adjacent erroneous words are considered one\nerror), “almost good” sentences contain one er-\nroneous phrase (possibly of several words), and\n“good” sentences are error-free.\nThe new training examples were added to the\nbaseline datasets. We ran a number of experiments\ngradually increasing the number of artificially gen-\nerated sentences used. At every run, the new data\nwas chosen randomly in order to reduce the influ-\nence of outliers. In order to make the results more\nstable, we ran each experiment 10 times and aver-\naged the evaluation scores.\n4.4 Sentence-level ternary QE task\nThe original dataset for this task contains 949\n“good”, 2010 “almost good”, and 857 “bad” sen-\ntences, whereas the test set has 600 entries: 131\n“good”, 333 “almost good”, 136 “bad”. The re-\nsults were evaluated using F1-score.\nThe addition of new “bad” sentences leads to\nan improvement in quality, regardless of the sen-\ntence generation method used. 
Models trained on\ndatasets generated by different strategies display\nthe same trend: adding up to 400 sentences results\nin a considerable increase in quality, while fur-\nther addition of data only slightly improves qual-\nity. Figure 2 shows the results of the experiments\n– here for clarity we included only the results\nfor datasets generated with theunigramWI, al-\nthough theparaphraseWIdemonstrates a similar\nbehaviour with slightly lower quality. The best F1-\nscore of 0.49 is achieved by a model trained on the\ndata generated with thecrferror generator, which\nis an absolute improvement of 1.9% over the base-\nline.\nHowever, adding only negative data makes the\ndistribution of classes in the training data less55\nFigure 2: Ternary classification: performance of\nerror generators\nsimilar to that of the test set, which might af-\nfect performance negatively. Therefore, we con-\nducted other three sets of experiments: we added\n(i) equal amount of artificial data for the “good”\nand “bad” classes (ii) batches of artificial data\nfor all classes that keep the original proportion of\nclasses in the data (iii) artificial data for only the\n“good” class. The latter setting is tested in order to\ncheck whether the classifier benefits from negative\ninstances, or just from having new data added to\nthe training sets.\nThe results are shown in Figure 3. We plot only\nthe results for thebigramEG+unigramWIset-\nting as it achieved the best result in absolute val-\nues, but the trends are the same for all data gen-\neration techniques. The best strategy was to add\nboth “good” and “bad” sentences: it beats the mod-\nels which uses only negative examples, but after\n1000 artificial sentences its performance degrades.\nKeeping the original distribution of classes is not\nbeneficial for this task: it performs worse than\nany other tested scenario since it decreases the F1-\nscore for the “good” class dramatically.\nOverall, the additional negative training data im-\nproves the ternary sentence classification. The ad-\ndition of both positive and negative examples can\nfurther improve the results, while providing addi-\ntional instances of the “almost good” class did not\nseem to be as helpful.\n4.5 Sentence-level HTER QE task\nFigure 4 shows that the addition of any type of ar-\ntificial data leads to substantial improvements in\nquality for this task. The results were evaluated\nin terms of Mean Absolute Error (MAE). The ini-Figure 3: Ternary classification: artificial exam-\nples of different classes\ntial training dataset was very small – 896 sentences\n(200 sentences for test), which may explain the\nsubstantial improvements in prediction quality as\nnew data is added. We also noticed that the perfor-\nmance of the generated datasets was primarily de-\nfined by the method of errors generation, whereas\ndifferent word choice strategies did not impact the\nresults as much. Figure 4 depicts the results for the\nunigramWIwords selection method only with all\nerror generation methods.\nThe addition of data from datasets generated\nwithcrfEGgives the largest drop in MAE (from\n0.161 to 0.14). This result is achieved by a model\nthat uses 1200 artificial sentences. Further addi-\ntion of new data harms performance. The data\ngenerated by other error generators does not cause\nsuch a large improvement in quality, although it\nalso helps reduce the error rate.\nAs it was described earlier, thecrfEGmodel\ngenerates sentences with a small number of er-\nrors. 
Since the use of this dataset leads to the\nlargest improvements, we can suggest that in the\nHTER prediction task, using the baseline dataset\nonly, the majority of errors is found in sentences\nwhose HTER score is low. However, the reason\nmight also be that the distributions of scores in the\nbaseline training and test sets are different: the test\nset has lower average score (0.26 compared to 0.31\nin the training set) and lower variance (0.03 versus\n0.05 in the training set). The use of artificial data\nwith a small number of errors changes this distri-\nbution.\nWe also experimented with training a model us-\ning only artificial data. The results of models\ntrained on only 100 artificial sentences for each56\nFigure 4: HTER regression results\ngeneration method were surprisingly good: their\nMAE ranged from 0.149 to 0.158 (compared to\nthe baseline result of 0.161 on the original data).\nHowever, the further addition of new artificial sen-\ntences did not lead to improvements. Thus, despite\nthe positive impact of the artificial data on the re-\nsults, the models cannot be further improved with-\nout real training examples.\n4.6 Word-level QE task\nHere we tested the impact of the artificial data on\nthe task of classifying individual words as “good”\nor “bad”. The baseline set contains 47335 words,\n35% of which have the tag “bad”. The test set has\n9613 words with the same label distribution.\nAll the datasets led to similar results. Overall,\nthe addition of artificial data harms prediction per-\nformance: the F1-score goes down until 1500 sen-\ntences are added, and then levels off. The perfor-\nmance for all datasets is similar. However, analo-\ngously to the previous tasks, there are differences\nbetweencrfEGand the other two error generation\ntechniques: the former leads to faster deterioration\nof F1-score. No differences were observed among\nthe word insertion techniques tested.\nFigure 5 shows the average weighted F1-score\nand F1-scores for both classes. Since all datasets\nbehave similarly, we show the results for two\nof them that demonstrate slightly different per-\nformance:crfEG+unigramWIis shown with\nsolid blue lines, whilebigramEG+unigramWIis\nshown with dotted red lines. The use of data gen-\nerated with CRF-based methods results in slightly\nfaster decline in performance than the use of data\ngenerated withbigramEGorwordprobEG. One\npossible reason is that the CRF-generated datasetsFigure 5: Word-level QE. Blue solid lines – results\nforcrfEG, red dotted lines –bigramEG\nhave fewer errors, hence they change the original\ntags distribution in the training data. Therefore,\ntest instances are tagged as “bad” less often. That\nexplains why the F1-score of the “bad” class de-\ncreases, whereas the F1-score of the “good” class\nstays at the same.\nTo summarise ourfindings for word-level QE,\nthe strategies of data generation proposed and\ntested thus far do not lead to improvements. The\nword-level predictions are more sensitive to indi-\nvidual words in training sentences, so the replace-\nment of tokens with random words may confuse\nthe model. Therefore, the word-level task needs\nmore elaborate methods for substituting words.\n5 Conclusions and future work\nWe presented and experimented with a set of new\nmethods of simulation of errors made by MT sys-\ntems. 
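As a side note on the word-level evaluation just discussed, the per-class and weighted F1-scores reported in Figure 5 correspond to standard computations; a minimal sketch using scikit-learn, assuming the tags are the strings "good" and "bad":

```python
from sklearn.metrics import f1_score

def word_level_scores(gold_tags, predicted_tags):
    """Per-class and weighted F1 for the binary word-level task above."""
    return {
        "f1_good": f1_score(gold_tags, predicted_tags, pos_label="good"),
        "f1_bad": f1_score(gold_tags, predicted_tags, pos_label="bad"),
        "f1_weighted": f1_score(gold_tags, predicted_tags, average="weighted"),
    }
```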
Sentences with artificially added errors were\nused as training data in models that predict the\nquality of sentences or words.\nThe addition of artificial data can help improve\nthe output of sentence-level QE models, with sub-\nstantial improvements in HTER score prediction\nand some improvements in sentences classification\ninto “good”, “almost good” and “bad”. However,\nthe largest improvements are related to the fact that\nthe additional data changes the overall distribution\nof scores in the training set, making it more sim-\nilar to the test set. On the other hand, the fact\nthat the artificial sentences did not decrease the\nquality in such cases proves that it can be used\nto counter-balance the large number of positive\nexamples. Unlike sentence-level QE, the task of57\nword-level QE did not benefit from the artificial\ndata. That may relate to our choice of method to\nreplace words in artificial sentences.\nWhile thus far we analysed the usefulness of ar-\ntificial data for the QE task only, it would be in-\nteresting to check if this data can also improve the\nperformance of discriminative LMs.\nAcknowledgements\nThis work was supported by the EXPERT (EU\nMarie Curie ITN No. 317471) project.\nReferences\nBanerjee, Satanjeev and Alon Lavie. 2005. METEOR:\nAn Automatic Metric for MT Evaluation with Im-\nproved Correlation with Human Judgments. InACL-\n2005, MTSumm workshop, pages 65–72.\nBhanuprasad, Kamadev and Mats Svenson. 2008.\nErrgrams A Way to Improving ASR for Highly\nInflected Dravidian Languages. InIJCNLP-2008,\npages 805–810.\nBrockett, Chris, William B. Dolan, and Michael Ga-\nmon. 2006. Correcting esl errors using phrasal smt\ntechniques. InColing-ACL-2006.\nCollins, Michael, Brian Roark, and Murat Saraclar.\n2005. Discriminative Syntactic Language Modeling\nfor Speech Recognition. InACL-2005.\nDyer, Chris, Victor Chahuneau, and A. Noah Smith.\n2013. A simple, fast, and effective reparameteriza-\ntion of ibm model 2. InNAACL-HLT-2013, pages\n644–648.\nFelice, Mariano and Zheng Yuan. 2014. Generating\nartificial errors for grammatical error correction. In\nEACL-2014, pages 116–126.\nGamon, Michael, Anthony Aue, and Martine Smets.\n2005. Sentence-level MT evaluation without refer-\nence translations: beyond language modeling. In\nEAMT-2005.\nIzumi, Emi, Kiyotaka Uchimoto, Toyomi Saiga, Thep-\nchai Supnithi, and Hitoshi Isahara. 2003. Automatic\nerror detection in the japanese learners’ english spo-\nken data. InACL-2003, pages 145–148.\nKoehn, Philipp, Hieu Hoang, Alexandra Birch, Chris\nCallison-burch, Marcello Federico, Nicola Bertoldi,\nBrooke Cowan, Wade Shen, Christine Moran,\nRichard Zens, Chris Dyer, Ondej Bojar, Alexandra\nConstantin, and Evan Herbst. 2007. Moses: Open\nSource Toolkit for Statistical Machine Translation.\nInACL-2007, Demo session, pages 177–180.\nKoehn, Philipp. 2005. Europarl: A Parallel Corpus\nfor Statistical Machine Translation. InMT-Summit\n2005, pages 79–86.Li, Zhifei and Sanjeev Khudanpur. 2008. Large-scale\nDiscriminative n -gram Language Models for Sta-\ntistical Machine Translation. InAMTA-2008, pages\n21–25.\nLi, Zhifei, Ziyuan Wang, Sanjeev Khudanpur, and Ja-\nson Eisner. 2010. Unsupervised Discriminative\nLanguage Model Training for Machine Translation\nusing Simulated Confusion Sets. InColing-2010.\nLuong, Ngoc Quang, Laurent Besacier, and Benjamin\nLecouteux. 2014. Lig system for word level qe task\nat wmt14. InWMT-2014, pages 335–341.\nOkanohara, Daisuke. 2007. A Discriminative Lan-\nguage Model with Pseudo-Negative Samples. 
In\nACL-2007, pages 73–80.\nPapineni, Kishore, Salim Roukos, Todd Ward, and Wei-\njing Zhu. 2002. BLEU: a Method for Automatic\nEvaluation of Machine Translation. InACL-2002,\npages 311–318.\nRaybaud, Sylvain, David Langlois, and Kamel Sma ¨ıli.\n2011. This sentence is wrong. Detecting errors in\nmachine-translated sentences.Machine Translation,\n25(1):1–34.\nRozovskaya, Alla and Dan Roth. 2010. Generating\nconfusion sets for context-sensitive error correction.\nInEMNLP-2010, pages 961–970.\nSchmid, Helmut. 1994. Probabilistic part-of-speech\ntagging using decision trees. InInternational Con-\nference on New Methods in Language Processing,\npages 44–49.\nSnover, Matthew, Bonnie Dorr, Richard Schwartz, Lin-\nnea Micciulla, and John Makhoul. 2006. A Study of\nTranslation Edit Rate with Targeted Human Annota-\ntion. InAMTA-2006, pages 223–231.\nSnover, Matthew, Nitin Madnani, Bonnie Dorr, and\nRichard Schwartz. 2008. TERp System Description.\nInAMTA-2008, MetricsMATR workshop.\nSnover, Matthew, Nitin Madnani, Bonnie J. Dorr, and\nRichard Schwartz. 2009. Fluency, adequacy, or\nhter?: Exploring different human judgments with a\ntunable mt metric. InWMT-2009, pages 259–268.\nSpecia, Lucia, Kashif Shah, Jose G C de Souza, and\nTrevor Cohn. 2013. QuEst - A translation quality\nestimation framework. InACL-2013, Demo session.\n58",
"main_paper_content": null
}
|
{
"decision": "Unknown",
"reviews": []
}
| 0 | 0 |
[] |
[] |
{
"id": "59ZfR3b8D9u",
"year": null,
"venue": "EAMT 2015",
"pdf_link": "https://aclanthology.org/W15-4943.pdf",
"forum_link": "https://openreview.net/forum?id=59ZfR3b8D9u",
"arxiv_id": null,
"doi": null
}
|
{
"title": "Multi-Dialect Machine Translation (MuDMat)",
"authors": [
"Fatiha Sadat"
],
"abstract": "Fatiha Sadat. Proceedings of the 18th Annual Conference of the European Association for Machine Translation. 2015.",
"keywords": [],
"raw_extracted_content": "�\n�������������������������������������������\n�������������������������������������������������������������������\n������������������\n����������������� �\n�\n�����������������\n���������������������������������������������������������������������������\n�\n�\n�����������������������������������������������\n�\n��������\n���� �������������� �������� ������������ ��������� �������� ����� ��� ���������� ��������� ����\n�������������������������������������������������������������������������������������������\n���������������������������������������������������������������������������������� �������\n�����������������������������������������������������������������������������������������������\n�������������������������������������������������������������������������������������������\n������������������������������\n��������������������������������������������������������������������������������������������������\n���� ����� ������������ ����� ������� ����� ������������ ���� ������� ��������� ������� ������ ���\n�������������������������������������������������������������������������������������������������\n���������������������������������������������������������������������������������������������\n�������������������������������������������\n��� ���� �������� ������� ������� �������� ��������� ������� ������������ ���� ����������� ��������\n���������������������������������������������������������������������������������������������\n���������������������������������������������������������������������������������������������\n����������������������������������������������������������������������������������������������\n��������������������������������������������������������������������������������������������\n������������������������������������������������������������������������������������������������\n������������������������������������������������������������������������������������������\n���������������������������������������������������������������������������������������������\n����� ����� ���� �������������� ���������� ����� ���� ���� ��� ���������������� ������ ���������\n������������������������������������������������������������������������\n�����������������������������������������������������������������������������������������������\n����������������������������������������������������������������������������������������������\n���� �������� �������� ���� ������������� ��� ����� ����������� ���������� ����� ��� ����������� ����\n������������������������������������������������������������������������������������������������\n������������������������������������������������������������������������������������������\n�������������������������������������������������������������\n���� ������������� ��� ���� �������������� �������� ������������ ������� ����� ����\n���������������������������������������\n�226",
"main_paper_content": null
}
|
{
"decision": "Unknown",
"reviews": []
}
| 0 | 0 |
[] |
[] |
{
"id": "ZMqOqncjBbb",
"year": null,
"venue": "EAMT 2010",
"pdf_link": "https://aclanthology.org/2010.eamt-1.31.pdf",
"forum_link": "https://openreview.net/forum?id=ZMqOqncjBbb",
"arxiv_id": null,
"doi": null
}
|
{
"title": "Learning an Expert from Human Annotations in Statistical Machine Translation: the Case of Out-of-Vocabulary Words",
"authors": [
"Wilker Aziz",
"Marc Dymetman",
"Lucia Specia",
"Shachar Mirkin"
],
"abstract": "Wilker Aziz, Marc Dymetman, Lucia Specia, Shachar Mirkin. Proceedings of the 14th Annual conference of the European Association for Machine Translation. 2010.",
"keywords": [],
"raw_extracted_content": "Learning an Expert from Human Annotations in Statistical Machine\nTranslation: the Case of Out-of-Vocabulary Words\nWilker Aziz\u0003, Marc Dymetmany, Shachar Mirkinx, Lucia Speciaz, Nicola Cancedday, Ido Daganx\n\u0003University of S ˜ao Paulo,yXerox Research Centre Europe\nxBar-Ilan University,zUniversity of Wolverhampton\[email protected], fdymetman,[email protected]\[email protected], fmirkins,[email protected]\nAbstract\nWe present a general method for incorporating an\n“expert” model into a Statistical Machine Transla-\ntion (SMT) system, in order to improve its perfor-\nmance on a particular “area of expertise”, and ap-\nply this method to the specific task of finding ade-\nquate replacements for Out-of-V ocabulary (OOV)\nwords. Candidate replacements are paraphrases\nand entailed phrases, obtained using monolin-\ngual resources. These candidate replacements are\ntransformed into “dynamic biphrases”, generated\nat decoding time based on the context of each\nsource sentence. Standard SMT features are en-\nhanced with a number of new features aimed at\nscoring translations produced by using different\nreplacements. Active learning is used to discrimi-\nnatively train the model parameters from human\nassessments of the quality of translations. The\nlearning framework yields an SMT system which\nis able to deal with sentences containing OOV\nwords but also guarantees that the performance\nis not degraded for input sentences without OOV\nwords. Results of experiments on English-French\ntranslation show that this method outperforms pre-\nvious work addressing OOV words in terms of ac-\nceptability.\n1 Introduction\nWhen translating a new sentence, Statistical Ma-\nchine Translation (SMT) systems often encounter\n“Out-of-V ocabulary” (OOV) words, that is, words\nfor which no translation is provided in the system\nphrase table. The problem is particularly severe\nwhen bilingual data are scarce or the text to be\ntranslated is not from the same domain as the data\nused to train the system.\nOne approach consists in replacing the OOV\nword by a paraphrase, i.e. a word that is equiv-\nalent and known to the phrase-table. For instance,\nin the sentence “The police hit the protester ”, ifthe source word “hit ” is OOV , it could be replaced\nby its paraphrase “struck ”. In previous work such\nparaphrases are learnt by “pivoting” through par-\nallel texts involving multiple languages (Callison-\nBurch et al., 2006) or on the basis of monolingual\ndata and distributional similarity metrics (Marton\net al., 2009).\nMirkin et al. (2009) go beyond the use of para-\nphrase to incorporate the notion of an entailed\nphrase, that is, a word which is implied by the\nOOV word, but is not necessarily equivalent to\nit — for example, this could result in “hit ” being\nreplaced by the entailed phrase “attacked ”. Both\nparaphrases and entailed phrases are obtained us-\ning monolingual resources such as WordNet (Fell-\nbaum, 1998). This approach results in higher\ncoverage and human acceptability of the transla-\ntions produced relative to approaches based only\non paraphrases.\nIn (Mirkin et al., 2009) a replacement for the\nOOV word is chosen based on a score represent-\ning how well it fits the context of the input sen-\ntence, combined with the global SMT score ob-\ntained after translating multiple alternative sen-\ntences produced by alternative replacements. 
The\ncombination of source and target language scores\nis heuristically defined as their product, and en-\ntailed phrases are only used when paraphrases are\nnot available. This approach has several short-\ncomings: translating each replacement variant is\nwasteful and does not capitalize on the search ca-\npabilities of the decoder; the ad hoc combination\nof scores makes it difficult to tune the contribution\nof each score or to extend the approach to incorpo-\nrate new features; and the enforced preference to\nparaphrases may result in inadequate paraphrases\ninstead of acceptable entailed phrases.\nWe propose an approach which also takes into\naccount both paraphrased and entailed words and\nuses a context model score, but differs from\n(Mirkin et al., 2009) in several crucial aspects,\n[EAMT May 2010 St Raphael, France]\nmostly stemming from the fact that we integrate\nthe selection of the replacement words into the\nSMT decoder. This has implications for both the\ndecoding and training processes.\nAtdecoding time, when translating a source\nsentence with an OOV word, besides the collec-\ntion of biphrases1stored in the system phrase ta-\nble, we generate a set of dynamic biphrases on the\nfly, based on the context of that specific source\nsentence, to address the OOV word. For exam-\nple, we could derive the dynamic biphrases (hit,\na frapp ´e)and(hit, a attaqu ´e)from the static ones\n(struck, a frapp ´e)and(attacked, a attaqu ´e).\nSuch dynamic biphrases are assigned several\nfeatures that characterize different aspects of the\nprocess that generated them, such as the appro-\npriateness of the replacement in the context of\nthe specific source sentence, allowing for example\nreach to be preferred to strike orattack in replac-\ninghitin “We hit the city at lunch time ”. Dynamic\nand static biphrases compete during the search for\nan optimal translation.\nAttraining time, standard techniques such as\nMERT (Minimum Error Rate Training) (Och,\n2003), which attempt to maximize automatic met-\nrics like BLEU (Papineni et al., 2002) based on\na bilingual corpus, are directly applicable. How-\never, as has been discussed in (Callison-Burch et\nal., 2006; Mirkin et al., 2009), such automatic\nmeasures are poor indicators of improvements in\ntranslation quality in presence of semantic modifi-\ncations of the kind we are considering here. There-\nfore, we perform the training and evaluation on the\nbasis of human annotations. We use a form of ac-\ntive learning to focus the annotation effort on a\nsmall set of candidates which are useful for the\ntraining.\nSentences containing OOV words represent a\nfairly small fraction of the sentences to be trans-\nlated2. Thus, to avoid human annotation of a\nlarge sample with relatively few cases of OOV\nwords, for the purpose of yielding a statistically\nunbiased sample, we perform a two-phase train-\ning: (a) the standard SMT model is first trained on\nan unbiased bilingual sample using MERT and N-\nbest lists; and (b) this model is extended with ad-\nditional dynamic features and iteratively updated\nby using other samples containing only sentences\nwith OOV words annotated for quality by humans.\n1Biphrases are the standard source and target phrase pairs.\n2In our experimental setting, in 50K sentences from News\ntexts, 15% contain at least one OOV content word.We update the model parameters in such a way\nthat the new model does not modify the scores\nof translations of cases without OOV words. 
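Returning to the decoding-time mechanism described earlier, a minimal sketch of how dynamic biphrases could be derived from the static entries of the candidate replacements (a simplified stand-in for the real biphrase library; feature values are omitted, and this is an illustration rather than the authors' code):

```python
def dynamic_biphrases(oov_phrase, replacements, phrase_table):
    """Derive dynamic biphrases for a source phrase containing an OOV word
    from the static biphrases of its replacements; the OOV word itself is
    kept as a candidate to cover cognates."""
    dynamic = []
    for rep in replacements + [oov_phrase]:
        for target in phrase_table.get(rep, []):
            dynamic.append((oov_phrase, target, {"replacement": rep}))
    return dynamic

table = {"struck": ["a frappé"], "attacked": ["a attaqué"]}
print(dynamic_biphrases("hit", ["struck", "attacked"], table))
# [('hit', 'a frappé', ...), ('hit', 'a attaqué', ...)]
```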
This\nis done through an adaptation of the online learn-\ning algorithm MIRA (Crammer et al., 2006) which\npreserves linear subspaces of parameters. This ap-\nproach consists therefore in learning an expert that\nis able to improve the performance of the transla-\ntion system on a specific set of inputs, while pre-\nserving its performance on all other inputs.\nThe main contributions of this paper can be\nsummarized as follows: an efficient mechanism\nintegrated into the decoder for handling contextual\ninformation; a method for adding expertise to an\nSMT model relative to a specific task, relying on\nhighly informative, biased, samples and on human\nscores; expert models that affect only a specific set\nof inputs related to a particular problem, improv-\ning the translation performance in such cases.\nIn the remainder of this paper, we introduce the\nframework proposed in this paper for learning an\nexpert for the task of handling sentence contain-\ning OOV words (Section 2), then present our ex-\nperimental setup (Section 3) and finally our results\n(Section 4).\n2 Learning an expert for OOV words\nOur approach to learning an OOV expert for\nSMT is motivated by several general require-\nments. First, for efficiency reasons, we want the\nexpert to be tightly integrated with the SMT de-\ncoder. Second, we need to rely on human judg-\nments of the translations produced, since auto-\nmatic evaluation measures such as BLEU are poor\npredictors of translation quality in the presence of\nsemantic approximations of the kind we are con-\nsidering (Mirkin et al., 2009) . Third, because hu-\nman annotations are costly, we need to use them\nsparingly. In particular: (i) we want to focus the\nannotation task on the specific problem of sen-\ntences containing OOV words, and (ii) even for\nthese sentences, we should only hand the anno-\ntators a small, well-chosen, sample of translation\ncandidates to assess, not an exhaustive list. Fi-\nnally, we need to be careful not to bias training to-\nwards the human annotated sample in such a way\nthat the integrated decoder becomes better on the\nOOV sentences, but is degraded on the “normal”\nsentences. We address these requirements as fol-\nlows.\nIntegrated decoding The integrated decoder con-\nsists of a standard phrase-based SMT decoder\n(Lopez, 2008; Koehn, 2010) enhanced with the\nability to add dynamic biphrases at runtime and\nattempting to maximize a variant of the stan-\ndard “log-linear” objective function. The stan-\ndard SMT decoder tries to find argmax (a;t)\u0003\u0001\nG(s;t;a ), where \u0003is a vector of weights, and\nG(s;t;a )a vector of features depending on the\nsource sentence s, the target sentence tand the\nphrase-level alignment a. The integrated decoder\ntries to find\nargmax (a;t)\u0003\u0001G(s;t;a ) +M\u0001H(s;t;a )\nwhereMis an additional vector of weights and\nH(s;t;a )an additional vector of “dynamic” fea-\ntures associated with the dynamic biphrases and\nassessing different characteristics of their associ-\nated replacements (see Section 2.2). The inte-\ngrated model is thus completely parametrized by\nthe concatenated weight vector. We call this model\n\u0003\bMfor short.\nHuman annotations We select at random a set of\nOOV sentences from our test domain to compose\nourOOV training set , and for each of these sen-\ntences, provide the human annotators with a sam-\nple of candidate translations for different choices\nof replacements. 
They are then asked to rank these\ncandidates according to how well they approxi-\nmate the meaning of the source. In order not to\nforce the annotators to decide on fine-grained dis-\ntinctions that they are not confident about, which\ncould be confusing and increase noise for the\nlearning module, we provide guidelines and an an-\nnotation interface that encourage ranking the can-\ndidates in a few distinct “clusters”, where the rank\nbetween clusters is clear, but the elements inside\neach cluster are considered indistinguishable. The\nannotators are also asked to concentrate their judg-\nment on the portions of the sentences which are\naffected by the different replacements. To cover\npotential cases of cognates, annotators can choose\nthe actual OOV as the best “replacement”.\nActive sampling In order to keep the sample of\ncandidate translations to be annotated for a given\nOOV source sentence small, but still informative\nfor training, we adopt an active learning scheme\n(Settles, 2010; Haffari et al., 2009; Eck et al.,\n2005). We do not extract a priori a sample of\ntranslation candidates for each sentence in the\nOOV training set and ask the annotators to workon these samples — which would mean that they\nmight have to compare candidates that have little\nchance of being selected by the end-model after\ntraining. Instead, This is an iterative process, with\na slice of the OOV training set selected for each\niteration. When sampling candidate translations\n(out of a given slice of the OOV training set) to be\nannotated in the next iteration, we use the transla-\ntions produced by the model \u0003\bMobtained so\nfar, after training on all previous samples. This\nguarantees that we sample the overall best can-\ndidates for each OOV sentence according to the\ncurrent model. Additionally, we sample several\nother translations corresponding to top candidates\naccording to individual features used in the model,\nincluding the context model score, as we will de-\ntail in Section 3. This ensures a diversity of candi-\ndates to compare, while avoiding having to ask the\nannotators to give feedback on candidates that do\nnot stand a chance of being selected by the model.\nAvoiding bias We train the model \u0003\bMaim-\ning to guarantee that when the integrated decoder\nfinds a new sentence containing OOV words, it\nwill rank the translation candidates in a way con-\nsistent with the ranks that the human judges would\ngive to these candidates; in particular it should out-\nput as its best translation a candidate that the anno-\ntators would rank in top position. However, if we\ntune both\u0003andMto attain this goal, the value of\n\u0003in the integrated decoder can differ significantly\nfrom its value in the standard decoder, say \u00030. In\nthat case, when decoding a non-OOV sentence, for\nwhich the dynamic features H(s;t;a )are null, the\nintegrated decoder would use \u0003instead of \u00030, pos-\nsibly degrading its performance on such sentences.\nTo avoid this problem, while training \u0003\bMwe\nkeep\u0003fixed at the value \u00030, in other words, we al-\nlow onlyMto be updated in the iterative learning\nprocess. In such a way, we preserve the original\nbehavior of the system on standard inputs. 
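Before turning to the learning algorithm, a minimal sketch of the integrated objective Λ·G(s,t,a) + M·H(s,t,a) introduced above; since the dynamic features are null for hypotheses that use no dynamic biphrase, keeping Λ fixed at Λ0 preserves the baseline scores on such inputs:

```python
def integrated_score(static_feats, dynamic_feats, lam, m):
    """Score of a hypothesis under the concatenated model Λ ⊕ M.
    For a non-OOV sentence, dynamic_feats is all zeros and the score
    reduces to the baseline Λ·G."""
    return (sum(l * g for l, g in zip(lam, static_feats))
            + sum(w * h for w, h in zip(m, dynamic_feats)))
```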
This\nrequires a learning technique that can be adapted\nin a way that the parameter vector \u0003\bMvaries\nonly in the linear subspace for which \u0003 = \u0003 0; no-\ntice that this is different from training \u0003andM\nseparately and then learning the best mixing fac-\ntor between the two models. One technique which\nprovides a mathematically neat way to handle this\nrequirement is MIRA (Crammer et al., 2006), an\nonline training method in which each learning step\nconsists in updating the current parameter vector\nminimally (in the sense of Euclidian distance) so\nthat it lies in a certain subspace determined by the\ncurrent training point. It is then quite natural to\nadd the constraint that it also lies on the subspace\n\u0003 = \u0003 0.\n2.1 Learning to rank OOV candidates with\nMIRA\nLet us first write \n\u0011\u0003\bM, andF(s;t;a )\u0011\nG(s;t;a )\bH(s;t;a ), and also introduce notation\nfor the two projection operators \u0019(\u0003)(\u0003\bM) = \u0003\nand\u0019(M)(\u0003\bM) =M.\nOur goal when training from human annota-\ntions is that, whenever the annotators say that the\ntranslation candidate (s;t;a )is strictly better than\nthe translation candidate (s;t0;a0), then the model\nscores give the same result, namely are such that\n\n\u0001F(s;t;a )>\n\u0001F(s;t0;a0). Our approach to\nlearning can then be outlined as follows. Based\non the value of \nlearned on previous iterations\nwith other samples of OOV sentences, we ac-\ntively sample, as previsouly described, a few can-\ndidate translations (s;tj;aj)for each source sen-\ntencesin the current slice of the data, and have\nthem ranked by human annotators, preferably in\na few distinct clusters. We extract at random a\ncertain number of pairs of translation candidates\nyj;k\u0011((s;tj;aj);(s;tk;ak)), where (s;tj;aj)\nand(s;tk;ak)are assumed to belong to two dif-\nferent clusters. We then define a feature vec-\ntor on candidate pairs \b(yj;k)\u0011F(s;tj;aj)\u0000\nF(s;tk;ak).\nThe basic learning step is the following. We\nassume \nto be the current value of the parame-\nters, andyj;kthe next pair of annotated candidates,\nwith (without loss of generality) (s;tj;aj)being\nstrictly preferred by the annotator to (s;tk;ak).\nThe update from \nto\n0is then performed as fol-\nlows:\nIf\n:\b(yj;k)\u00150then\n0:= \nElse\n0:=argmin!k!\u0000\nk2(a)\ns.t.!:\b(yj;k)\u0000!:\b(yk;j)\u00151(b)\nand\u0019(\u0003)(!) = \u0003 0 (c)\nIn other words, we are learning to rank the candi-\ndates through a “pairwise comparison” approach\n(Li, 2009), in which whenever a candidate pair\nyj;kis ordered in opposite ways by the annota-\ntor and the model, an update of \nis performed.\nThis update is a simple variant of the MIRA al-\ngorithm (as presented for instance in (Crammer,\n2007)), where we update the parameter \nmini-\nmally in terms of Euclidian distance (a) such thatthe new parameter respects two conditions. The\ncondition (b) forces the classification margin for\nthe pair to become larger with the updated model\nthan the loss currently incurred on that pair, con-\nsidering that this loss is 0 when the model chooses\nthe correct order yj;k, and 1 when it chooses the\nwrong order yk;j. The second condition (c), which\nis our original addition to MIRA, forces the new\nparameter to have an invariant \u0003-projection. The\nsolution \n0to the constrained optimization prob-\nlem above can be obtained through Lagrange mul-\ntipliers (proof omitted). 
Assuming that we already\nstart from a parameter \nsuch that\u0019(\u0003)(\n) = \u0003 0,\nthen the update is given by:\n\n0= \n +\u001c \u0019(M)(X);\nwhereX\u0011\b(yj;k)\u0000\b(yk;j) = 2 \b(yj;k)and\n\u001c\u00111\u0000\n:X\nk\u0019(M)(X)k2.3As is standard with MIRA, the\nfinal value for the model is found by averaging the\n\nvalues found by iterating the basic learning step\njust described.\n2.2 Dynamic Features\nGiven an OOV word, similar to (Mirkin et al.,\n2009), we search for a set of candidate replace-\nments in WordNet, considering both synonyms\nand hypernyms of the OOV word which are avail-\nable in the biphrase table. To this set we add the\nOOV word itself to account for proper nouns and\npotential cognates. Unlike previous work, we do\nnot explicitly give preference to any type of candi-\ndate (e.g. synonyms over hypernyms), but instead\ndistinguish them through features associated with\nthe new biphrases. Given a source sentence swith\nan OOV word ( oov), we compute several feature\nscores for each candidate replacement ( rep):\nContext model score Score indicating the degree\nby whichrepfits the context of s. Following the\nresults reported by Mirkin et al. (2009) we apply\nLatent Semantic Analysis (LSA) (Deerwester et\nal., 1990) as the method for computing this score,\nusing 100-dimension vectors constructed based on\na corpus of the same domain as the test set. Given\nsandrep, we compute the cosine similarity be-\ntween their LSA vectors, where the sentence’s\nvector is the average of the vectors of all the con-\ntent words in it.\n3Technically, this ratio is only defined for \u0019(M)(X) 6= 0,\ni.e. for cases where the pair of translations differ in their M\nprojections; in the rare instances where this might not be true,\nwe can simply ignore the pair in the learning process.\nDomain similarity Score representing how well\nrepcan replaceoovin general in texts of a given\ndomain. It is computed as the cosine similarity\nbetween the LSA vectors of the two words and is\nintended to give preference to replacements which\ncorrespond to more frequent senses of the OOV\nword in that domain (McCarthy et al., 2004).\nInformation loss Measures the distance in Word-\nNet’s hierarchy, denoted d, betweenoovandrep:\nS(unk;sub ) = 1\u0000(1\nd+1), where the distance be-\ntween synonyms is 0, and the further the hyper-\nnym is up the hierarchy, the smaller the score. This\ncan be considered a simple approximation to the\nnotion of information loss , that is, the further the\nrepis from theoovin a hierarchy, the fewer se-\nmantic traits exist between the two, and therefore\nthe more information is lost if we use rep.\nIdentity Binary feature to mark the cases where\nthe OOV is kept in the sentence, what we call an\n“identity” replacement.\nSynonyms vs hypernyms Binary feature to dis-\ntinguish between synonym and hypernym replace-\nments.\nStatic plus dynamic Dynamic biphrases for a\ngiven source sentence can be derived from all\nthe static biphrases containing the chosen replace-\nment. For example, when replacing the OOV at-\ntacked byaccused , a number of static biphrases\nhaving accused in the source side could be used\nto generate (was attacked, a ´et´e accus ´e),(he was\nattacked, il a ´et´e accus ´e),(attacked, a incrimin ´e),\n(attacked, le) . Although these dynamic biphrases\nare very different, they will be assigned the same\ndynamic features values. 
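Putting the closed-form update above into code, one basic learning step could look like the following sketch (numpy vectors; `dyn_mask` is a hypothetical 0/1 vector marking the coordinates of M, so the Λ-projection stays at Λ0):

```python
import numpy as np

def mira_step(omega, phi_jk, dyn_mask):
    """One constrained MIRA update for a misordered pair: X = 2·Φ(y_jk),
    Ω' = Ω + τ·π_M(X) with τ = (1 − Ω·X) / ||π_M(X)||²."""
    if omega @ phi_jk >= 0:          # pair already ranked correctly: no update
        return omega
    x = 2.0 * phi_jk
    proj = x * dyn_mask              # π_M(X): zero out the static (Λ) coordinates
    norm_sq = proj @ proj
    if norm_sq == 0.0:               # candidates identical on dynamic features: skip
        return omega
    return omega + (1.0 - omega @ x) / norm_sq * proj
```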
To allow for the decoder\nto distinguish among such biphrases, we define an\nadditional feature as the linear combination of the\nfeature values of the static biphrase from which the\ndynamic biphrase was derived.\nAll static features are assigned a null value in the\ndynamic biphrases, and all dynamic features are\nassigned a null value in the static biphrases.4\n3 Experimental Setting\nData We consider the English-French translation\ntask and a scenario where an SMT system is used\n4Thus, what we have mnemonically called “dynamic\nfeatures” are features that are non-null only in dynamic\nbiphrases; some are contextual, others not.to translate texts of a different domain from the\none it was trained on. We train a standard phrase-\nbased SMT system on Europarl-v4 ( \u00181Msen-\ntence pairs) and use it to decode sentences from\nthe News domain. The standard log-linear model\nparameters are tuned using 2Kunseen sentences\nfrom Europarl-v4 through MERT. A 3-gram tar-\nget language model is trained using 7Msen-\ntences of French News texts. All datasets are\ntaken from the the WMT-09 competition5. For\nthe learning framework, we take all sentences in\nthe News Commentary domain (training, devel-\nopment and test sets) from WMT-09 ( \u001875K)\nand extract those containing one OOV word that\nis not a proper name, symbol or number ( \u001815%\nof the sentences). Of these, we then randomly se-\nlected 1Ksentences for tuning the context model\n(LSA tuning set ), other 1Ksentences for tuning\nthe SMT feature weights ( SMT tuning set ), and\n500sentences for evaluating all methods ( test set ).\nThe data used for computing the context model\nand domain similarity scores is the Reuters Cor-\npus, V olume 1 (RCV1), which is also of the News\ndomain6.\nWe experiment with the following systems:\nBaseline SMT The SMT system we use, MA-\nTRAX (Simard et al., 2005), without any special\ntreatment for OOV words, where these are simply\ncopied to the translations.\nMonolingual retrieval Method described in\n(Marton et al., 2009) where paraphrases for OOV\nwords are extracted from a monolingual corpus\nbased on similarity metrics. We use their best-\nperforming setting with single-word paraphrases\nextracted from a News domain corpus with 10M\nsentences. The additional biphrases are statically\nadded into the system’s biphrase library and the\nsimilarity score is used as a new feature. The log-\nlinear model is then entirely retrained with MERT\nand the SMT tuning set .\nLexical entailment Two best performing meth-\nods described in (Mirkin et al., 2009). For each\nsentence with an OOV word a set of alternative\nsource sentences is generated by directly replac-\ning each OOV word by synonyms from Word-\nNet or – if synonyms are not found – by hyper-\nnyms. These two settings do not add features to\n5http://www.statmt.org/wmt09/.\n6http://trec.nist.gov/data/reuters/reuters.html\nthe model, hence they do not require retraining:\n\u000fSMT All alternative source sentences are\ntranslated using a standard SMT system and\nthe “best” translation is the one with the high-\nest global SMT score.\n\u000fLSA-SMT The 20-best alternative source\nsentences are selected according to an LSA\ncontext model score and translated by the a\nstandard SMT system. The “best” translation\nis the one that maximizes the product of the\nLSA and global SMT scores.\nOOV expert Method proposed in this paper, as\ndescribed in Section 2. 
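For concreteness, the information-loss feature listed above reduces to a one-liner (the WordNet distance d is assumed to be computed elsewhere):

```python
def information_loss(wordnet_distance):
    """S(unk, sub) = 1 − 1/(d+1): 0 for synonyms (d = 0), growing towards 1
    the further up the hierarchy the replacing hypernym is found."""
    return 1.0 - 1.0 / (wordnet_distance + 1)
```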
The expert model with\nall dynamic features is trained on the basis of hu-\nman annotations using the SMT tuning set. At\neach iteration of the learning process we sample\n80sentences for annotation by bilingual (English\nand French) speakers. For a given source sen-\ntence, the sampled options at each iteration con-\nsist of a random choice of 8dynamic biphrases\ncorresponding to different replacements, 4addi-\ntional dynamic biphrases corresponding to differ-\nent ways of translating those replacements, and the\ntop candidates according to each of our main dy-\nnamic features: 1-best given by the information\nloss feature, 2-best given by the context model fea-\nture, 1-best given by the domain similarity feature\nand1-best given by the identity feature. In total\nat most 17 non-identical candidates can be pro-\nduced for annotation, but typically only a dozen\nare found. The results reported in Section 4 are\nobtained after only 6iterations.\nMERT Baseline with the same settings as the\nOOV expert , but where the tuning of allmodel\nparameters (both static and dynamic) is done au-\ntomatically using standard MERT with reference\ntranslations for the SMT tuning set, instead of our\nlearning framework and human annotations.\n4 Results\nTest set Following the same guidelines used for\nthe annotation task, two native speakers of French\n(and fluent speakers of English) were asked to\njudge translations produced by different systems\non 500 source sentences, according to how well\nthey reproduced the meaning of the source sen-\ntence. They were asked to rank the translations in\na few clearly distinct clusters and to discard use-\nless translations.Features \u0016 \u001b Best Acceptance\nLID 2.477 1.465 0.4728 0.5252\nID 2.491 1.463 0.4668 0.5211\nLI 2.547 1.457 0.4427 0.5050\nI 2.561 1.463 0.4447 0.4970\nD 2.924 1.414 0.3360 0.3722\nLD 2.930 1.412 0.3340 0.3702\nL 3.056 1.361 0.2857 0.3300\nBaseline 3.219 1.252 0.2093 0.2918\nTable 1: Comparison between different feature combina-\ntions and the baseline showing the percentage of times each\ncombination outputs a translation that is acceptable, i.e. is\nnot discarded (Acceptance), a translation that is ranked in the\nfirst cluster (Best), as well as the the mean rank ( \u0016) and stan-\ndard deviation ( \u001b) of each combination, where the discarded\ntranslations are conventionally assigned a rank of 5, lower\nthan the rank of any acceptable cluster observed among the\nannotations. (L) context model score, (I) information-loss,\n(D) domain similarity, (Baseline) SMT system.\nWe computed inter-annotator agreement con-\ncerning both acceptance and ranking, for trans-\nlations of 30 randomly sampled source sentences\nthat were evaluated by both annotators. For rank-\ning, we followed (Callison-Burch et al., 2008),\nchecking for each two translations, AandB,\nwhether the annotators agreed that A=B,A>B\norA < B . This resulted in kappa coefficient\nscores (Cohen, 1960) of 0.87 for translation ac-\nceptance and 0.83 for ranking.\nCombinations of dynamic features In order to\nhave a picture of the contribution of each dynamic\nfeature to the expert model, we compare the per-\nformance on the test set of different combinations\nof our main features. The results are shown in Ta-\nble 1. 
The features not mentioned in the table,\nsuch as the identity flag, are secondary features in-\ncluded in all combinations.\nThe baseline SMT system (i.e., only identity\nreplacements ) reaches 29.18% of acceptance (a\ntranslation is said to be acceptable if it is not dis-\ncarded), which is related to the fact that, for the\ngiven domain, copying an OOV English word into\nthe French output often results in a cognate. The\nbest performance is obtained with combination of\nall features (LID).\nEvolution of learning For the complete feature\nvector LID we compared the performance (on the\ntest data) of models corresponding to different it-\nerations of the online learning scheme. The results\nare presented in Table 2. We see a large increase\nin performance from M0toM1, then smaller in-\ncreases. After two or three iterations the perfor-\nIterations \u0016 \u001b Best Acceptance\nM6 2.487 1.458 0.4628 0.5252\nM5 2.491 1.459 0.4628 0.5231\nM4 2.489 1.458 0.4628 0.5252\nM3 2.493 1.455 0.4588 0.5252\nM2 2.501 1.456 0.4567 0.5211\nM1 2.519 1.456 0.4507 0.5151\nM0 2.944 1.407 0.328 0.3642\nBaseline 3.237 1.228 0.1932 0.2918\nTable 2: Each iteration adds 80 annotated sentences to the\ntraining set, from which the next vector of weights is com-\nputed. The dynamic vector M0was initialized with zero for\nthe replacement-related features and 1 for the source-target\nfeature. (Baseline) SMT system without OOV handling.\nmance changes are negligible, indicating that an-\nnotation effort for training the system could be\nroughly divided by two without affecting its end\nperformance.\nComparison with other systems We now com-\npare our LID model, in different decoding and\ntraining setups, with the methods proposed in pre-\nvious work and described in Section 3. Table 3\npresents the results in terms of mean rank and stan-\ndard deviation (note that the rank is relative to the\nother systems in the comparison and is not directly\ncomparable to the rank of the same system in a dif-\nferent comparison), percentage of time each sys-\ntem outputs a first-ranked translation and the per-\ncentage of time it outputs an acceptable one, using\nthe same conventions as in Table 1.\nLet us first focus on the lines other than b-LID\nin the table, corresponding to systems mentioned\nin Section 3. These results are consistent across\ndifferent measures: acceptance, mean rank, or be-\ning ranked in the best cluster. In particular we\nsee that both the LID, trained on human annota-\ntions, and LID-MERT systems, trained by MERT\nfrom reference translations, considerably outper-\nform the baseline and the Monolingual Retrieval\nmethod, with LID being better than LID-MERT\nparticularly in terms of acceptability. A somewhat\ndisappointing result, however, is that LID is infe-\nrior to both SMT-LSA and SMT on all measures.\nBy observing the outputs of SMT and SMT-\nLSA, we noticed that, although they can theoret-\nically produce identity replacements, they never\nactually do so on the test set. 
This is probably due\nto the fact that the language model that is part of\nthe scoring function in both SMT and SMT-LSA\ncontributes to giving a very bad score to identity\nreplacements, unless they happen to belong to theset of possible French forms (“exact” cognates),\nand therefore these models tend to strongly favor\nentailment replacements.\nOn the other hand, our LID model does actually\nproduce identity replacements quite often, some of\nwhich are acceptable (perhaps even ranked first)\nto the annotators, but a majority of which lead\nto non-acceptability. This is due to the fact that,\nat training time, the LID model actually learns\nto score the identity replacements relatively well\n(often overcoming the repulsion of the language\nmodel feature in the underlying baseline SMT sys-\ntem), due to the fact that many of them are ac-\ntually preferred by the annotators, typically those\nthat correspond to approximate cognates of exist-\ning French words (the annotation guidelines did\nnot discourage them from doing so). Thus the LID\nmodel has a tendency to sometimes favor identities\nover entailments. However , it is not clever enough\nto distinguish the “good” identities (namely, the\nquasi-cognates) from the bad ones (namely, En-\nglish words with no obvious French connotation),\ngiven that all identity replacements are only identi-\nfied by a binary feature (identity vs. non-identity)\ninstead of being associated with any features that\ncould predict their understandability in a French\nsentence. Thus LID, when it selects an identity\nreplacement, often selects an unacceptable one.\nMotivated by this uncertainty concerning the\nuse of identity replacements, we defined a sys-\ntemb-LID which uses the same model as LID,\nbut the identity replacements are blocked at decod-\ning time . In this way the system is forced to pro-\nduce an entailment replacement instead of an iden-\ntity one, but otherwise ranks the different entail-\nment replacements in the same order as the origi-\nnal LID. 
We can then see from Table 3 that b-LID\noutperforms every other system by a large mar-\ngin:7it is excellent at distinguishing between true\nentailments, and while it misses some good iden-\ntity replacements, is not handicapped in this re-\nspect relative to the other systems, which are also\nunable to model them.\n5 Conclusions\nWhile our approach is motivated by a specific\nproblem (OOV terms), we believe that some of\nthe innovations we have introduced are of a larger\n7A Wilcoxon signed rank test (Wilcoxon, 1945) shows\nthatb-LID is better ranked than its closest competitor SMT\nwith a p-value of less than 2%.\nSystem \u0016 \u001b Best Acceptance\nb-LID 2.274 1.803 0.6258 0.7002\nSMT 2.736 1.933 0.5172 0.5822\nSMT-LSA 2.744 1.931 0.5132 0.5822\nLID 3.018 1.913 0.4145 0.5252\nLID-mert 3.153 1.928 0.4024 0.4849\nBaseline 3.998 1.603 0.1549 0.2918\nMonRet 4.107 1.584 0.1690 0.2495\nTable 3: (LID) complete dynamic vector trained on the ba-\nsis of human assessments; ( b-LID) as LID, but blocking iden-\ntity replacements; (LID-MERT) complete dynamic vector\ntrained on the basis of automatic assessments; (SMT, SMT-\nLSA) and (MonRet or Monolingual retrieval) as described in\nSection 3; (Baseline) SMT system without OOV handling.\ngeneral interest for SMT: our use of dynamic\nbiphrases and features for incorporating complex\nadditional run-time knowledge into a standard\nphrase-based SMT system, our approach to in-\ntegrating a MERT-trained log-linear model with\na model actively trained from a small sample\nof human annotations addressing a specific phe-\nnomenon, and finally the formal techniques used\nin order to guarantee that the expert that is thus\nlearned from a focussed, biased, sample, is able\nto improve performance on its domain of exper-\ntise while preserving the baseline system’s perfor-\nmance on the standard cases.\nAcknowledgments\nThis work was supported in part by the ICT Pro-\ngramme of the European Community, under the\nPASCAL-2 Network of Excellence, ICT-216886.\nWe thank Binyam Gebrekidan Gebre and Ibrahim\nSoumana for performing the annotations and the\nanonymous reviewers for their useful comments.\nThis publication only reflects the authors’ views.\nReferences\nChris Callison-Burch, Philipp Koehn, and Miles Osborne.\n2006. Improved Statistical Machine Translation Using\nParaphrases. In Proceedings of HLT-NAACL .\nChris Callison-Burch, Cameron Fordyce, Philipp Koehn,\nChristof Monz, and Josh Schroeder. 2008. Further Meta-\nEvaluation of Machine Translation. In Proceedings of\nWMT .\nJacob Cohen. 1960. A Coefficient of Agreement for Nomi-\nnal Scales. Educational and Psychological Measurement ,\n20(1):37–46.\nKoby Crammer, Ofer Dekel, Joseph Keshet, Shai Shalev-\nShwartz, and Yoram Singer. 2006. Online passive-\naggressive algorithms. Journal of Machine Learning Re-\nsearch , 7:551–585.Koby Crammer. 2007. Online learning of real-\nworld problems. Tutorial given at ICML.\nwww.cis.upenn.edu/˜crammer/icml-tutorial-index.html.\nScott Deerwester, S.T. Dumais, G.W. Furnas, T.K. Landauer,\nand R.A. Harshman. 1990. Indexing by Latent Semantic\nAnalysis. Journal of the American Society for Information\nScience, 41 , pages 391–407.\nMatthias Eck, Stephan V ogel, and Alex Waibel. 2005. Low\ncost portability for statistical machine translation based on\nn-gram coverage. In MT Summit X , pages 227–234.\nChristiane Fellbaum, editor. 1998. WordNet: An Electronic\nLexical Database (Language, Speech, and Communica-\ntion) . 
The MIT Press.\nGholamreza Haffari, Maxim Roy, and Anoop Sarkar. 2009.\nActive learning for statistical phrase-based machine trans-\nlation. In Proceedings of Human Language Technologies\n/ North American Chapter of the Association for Compu-\ntational Linguistics , pages 415–423.\nPhilipp Koehn. 2010. Statistical Machine Translation . Cam-\nbridge University Press.\nHang Li. 2009. Learning to rank. Tutorial given\nat ACL-IJCNLP, August. research.microsoft.com/en-\nus/people/hangli/li-acl-ijcnlp-2009-tutorial.pdf.\nAdam Lopez. 2008. Statistical machine translation. ACM\nComputing Surveys , 40(3):1–49.\nYuval Marton, Chris Callison-Burch, and Philip Resnik.\n2009. Improved statistical machine translation using\nmonolingually-derived paraphrases. In Proceedings of the\n2009 Conference on Empirical Methods in Natural Lan-\nguage Processing , pages 381–390.\nDiana McCarthy, Rob Koeling, Julie Weeds, and John Car-\nroll. 2004. Finding predominant word senses in untagged\ntext. In In Proceedings of ACL , pages 280–287.\nShachar Mirkin, Lucia Specia, Nicola Cancedda, Ido Da-\ngan, Marc Dymetman, and Idan Szpektor. 2009. Source-\nlanguage entailment modeling for translating unknown\nterms. In Proceedings of ACL , Singapore.\nFranz Josef Och. 2003. Minimum error rate training in statis-\ntical machine translation. In ACL ’03: Proceedings of the\n41st Annual Meeting on Association for Computational\nLinguistics , pages 160–167.\nKishore Papineni, Salim Roukos, Todd Ward, and Wei-Jing\nZhu. 2002. BLEU: a Method for Automatic Evaluation\nof Machine Translation. In Proceedings of ACL .\nBurr Settles. 2010. Active learning literature survey. Tech-\nnical report, University of Wisconsin-Madison.\nM. Simard, N. Cancedda, B. Cavestro, M. Dymetman,\nE. Gaussier, C. Goutte, and K. Yamada. 2005. Trans-\nlating with Non-contiguous Phrases. In Proceedings of\nHLT-EMNLP .\nFrank Wilcoxon. 1945. Individual Comparisons by Ranking\nMethods. Biometrics Bulletin , 1(6):80–83.",
"main_paper_content": null
}
|
{
"decision": "Unknown",
"reviews": []
}
| 0 | 0 |
[] |
[] |
{
"id": "TrjF2wloc__",
"year": null,
"venue": "EAMT 2008",
"pdf_link": "https://aclanthology.org/2008.eamt-1.17.pdf",
"forum_link": "https://openreview.net/forum?id=TrjF2wloc__",
"arxiv_id": null,
"doi": null
}
|
{
"title": "Explorations in using grammatical dependencies for contextual phrase translation disambiguation",
"authors": [
"Aurélien Max",
"Rafik Makhloufi",
"Philippe Langlais"
],
"abstract": "Aurélien Max, Rafik Makhloufi, Philippe Langlais. Proceedings of the 12th Annual conference of the European Association for Machine Translation. 2008.",
"keywords": [],
"raw_extracted_content": "Explorations in Using Grammatical\nDependencies for Contextual\nPhrase Translation Disambiguation\nAur´ elien Max1, Rafik Makhloufi1, and Philippe Langlais2\n1LIMSI-CNRS & Universit´ e Paris-Sud 11, Orsay, France\n2DIRO, Universit´ e de Montr´ eal, Canada\nAbstract. Recent research has shown the importance of using source\ncontext information to disambiguate source phrases in phrase-based Sta-\ntistical Machine Translation. Although encouraging results have been\nobtained, those studies mostly focus on translating into a less inflected\ntarget language. In this article, we present an attempt at using source\ncontext information to translate from English into French. In addition\nto information extracted from the immediate context of a source phrase,\nwe also exploit grammatical dependencies information. While automatic\nevaluation does not exhibit a significant difference, a manual evaluation\nwe conducted provides evidence that our context-aware translation en-\ngine outperforms its context-insensitive counterpart.\n1 Introduction\nOne notable shortcoming of the now standard phrase-based approach to Statis-\ntical Machine Translation (PBSMT) [1] is that a unique conditional probability\ndistribution p(e|f ) is considered for translating all the occurrences of a source\nphrase f,3while obviously, differences in translation may happen due to the\ncontext in which the phrase occurs.\nA systematic inspection of the output of a state-of-the-art PBSMT engine has\nbeen conducted by [2]. Among other interesting things, it shows that sense errors\n(including wrong lexical choice and source ambiguity) account for 21.9% of the\nobserved errors when translating from English into Spanish, and for 28.2% when\ntranslating from Spanish into English. Incorrect forms (including incorrect verb\ntense or person, and incorrect gender or number agreement) account for 33.9%\nof errors when translating from English into Spanish, and for 9.9% from Spanish\ninto English. Although the latter type of errors may be recovered to some extent\nby a target language model, those translation errors essentially arise because the\nengine fails to use grammatical dependencies present in the source sentences,\nsuch as subject-verb or verb-object relations.\nThe integration of source contextual information in phrase-based SMT al-\nready gave rise to a number of interesting works [3–7]. [3] and [4] embedded\n3We use the standard notation where fis a source phrase, and eis a target phrase.\n12th EAMT conference, 22-23 September 2008, Hamburg, Germany\n114\nexisting word-sense disambiguation systems into a phrase-based SMT system.\nBoth systems use local collocations and word and POS from the immediate con-\ntext. In [3], bag-of-word context and basic dependency features are used. [5]\ntrained Support Vector Machine classifiers for every possible candidate trans-\nlation of a source phrase. They considered words of the immediate context (5\ntokens to the left and to the right), n-grams (up to size 3) of words, POS and\nlemmas, as well as chunking labels. [6] trained a global memory-based classi-\nfier that performs implicit smoothing of the probability estimates. The classifier\nmakes use of words and POS information of the immediate context of the phrase\n(2 tokens to the left and to the right).\nThe experimental conditions and the gains reported in the aforementioned\nstudies differ significantly. 
However, it is interesting to note that all but one\nattempts are considering English as the target language. Given the fact that\nthis type of integration only requires linguistic analysis for the source language,\nwe may interpret this as a particular difficulty in improving performances when\ntranslating into a highly inflected language. Translating into a morphologically\nrich language from a highly inflected language (e.g. Spanish →French) should\nintuitively be easier than translating from a less inflected language. The work by\n[7] is, to our knowledge, the only attempt that deals with the latter. Their exper-\niments conducted on English →German do not show significant improvements\nwhen considering context information.\nIn this paper, we address translating from English into French using features\nfrom the immediate context of the source phrases, as well as features from the\nlarger context through the use of grammatical dependencies. To our knowledge,\nthe impact of this latter type of features has not been previously investigated.4\nThe paper is organized as follows. We describe our system in section 2. We\npresent experimental results in section 3, and conclude our work and discuss\nfuture avenues in section 4.\n2 Context-aware PBSMT system\nOur context-aware system is directly inspired by the approach of [6]. It consists\nin the addition to the so-called log-linear combination of features traditionally\nmaximized by a phrased-based engine, context-informed features in the form of\nconditional probabilities that a target phrase eis the translation of a source\nphrase f:\nhm(f, C (f), e) = log P(e|f, C (f))\nwhere the context C(f) can be any information extracted from the source sen-\ntence to translate. Since data sparsity severely impacts the estimation of such\nprobabilities,5we followed [6] and used a memory-based approach for estimating\nthe conditional probabilities.\n4[3] do mention that their classifiers use basic dependency features, but their nature\nis not further described.\n5Relative frequency would for instance largely overestimate most of the parameters.\n12th EAMT conference, 22-23 September 2008, Hamburg, Germany\n115\nIn a nutshell, we ask a decision-tree based classifier to produce for the input\n/angbracketleftf, C (f)/angbracketright, a set of weighted class labels representing the possible translations of\nf. We obtain a posterior probability P(e|f, C (f)) by simply normalizing those\nweights. We used the Tribl hybrid algorithm6of the TiMBL software package\n[8] as a classifier. Building such a classifier is mainly a matter of collecting\ntraining examples {(/angbracketleftf, C (f)/angbracketright, e)} for all the phrases fseen in context C(f) and\ntranslated as e.7This classification implicitly performs smoothing by returning\nthe example in the tree matching on most features. In case of an exact match,\nthe actual class (i.e. target phrase) seen in the training material is returned. In\ncase of a mismatch, a majority vote is performed.\nApproaches such as [5, 6, 4] relied on information extracted from the imme-\ndiate context of a source phrase. To begin with, we considered this information\nas well, and represented an example by a fixed-length vector encoding the words\nof the source phrase f, their POS tags, the POS tags of two words on the left\nand the two words on the right of f, as well as the associated words. 
This is\nillustrated in Figure 1.\nsource our1/prp [declaration 2/nnof3/prp] rights 4/nns is5/vbz unique 6/jj\ntarget notre 1d´ eclaration 2des3droits 4est5unique 6\nexample (/angbracketleftdeclaration of,nnprp,nil,prp,nn s,vbz,nil,our,rights,is/angbracketright,d´ eclaration des)\nFig. 1. Encoding of the context information of the English phrase declaration of aligned\nwith the French phrase d´ eclaration des. Numbers show the word alignment. The symbol\nnilis used in place of missing information.\nWe also took into account information extracted from the dependency parse\nof the source sentence. More precisely, we considered the dependencies linking\ntokens of a given phrase to tokens outside this phrase, such as the dependency\nposs(declaration,our) in the above example, which links the inside word decla-\nration to the outside one our. We selected a set of 16 dependency types (e.g. ,\nposs-out, a possessive dependency out-linking to an outside word) thanks to\ntheir Information Gain value we computed on a held-out data set. Each depen-\ndency type is represented in the vector by the outside word8it involves, or by\nthe symbol nil, which indicates that this type of dependency does not occur in\nthe phrase under consideration.\nThe ordering of the features according to Information Gain values were con-\nsistent with that obtained for the Italian →English system of [6]. As expected,\nthe source phrase as well as its concatenated POS tags are the most discrimi-\nnative features for predicting the translation of the phrase. Immediate context\nwords and POS tags are the next promising features, the right context being\n6It does perform slightly better than the IGTree algorithm used in [6].\n7We relied on an in-house version of the standard phrase extraction procedure [1] for\ncollecting the context information required for each source phrase.\n8Using POS instead of words as dependency targets slightly decreases performance.\n12th EAMT conference, 22-23 September 2008, Hamburg, Germany\n116\nmore discriminative than the left one. Dependency-based features were found\nless informative, which can be explained by the fact that often, the immediate\ncontext already captures the discriminative information.\nIn an attempt to boost the selection of the most probable target phrase\naccording to context disambiguation, we added to the log-linear combination a\nbinary feature proposed by [6] which equals 1 for the target phrase that obtains\nthe highest probability P(e|f, C (f)), and 0 otherwise.\n3 Experiments and evaluation\n3.1 Baseline and context-aware systems\nWe used the French-English Europarl bitext compiled by [9]. To keep the data to\na manageable size, we considered a subset of 95,734 sentences that the Stanford\nparser [10] could parse,9for training our global classifier. Following the stan-\ndard practice, we performed phrase extraction for at most 7 word-long phrases\nusing Giza++ and the grow-diag-final-and heuristics [1]. We obtained ap-\nproximately 11,5M phrases; 3,7M of which are potentially useful for translating\nthe dev and test corpora gathering 475 and 472 bisentences respectively.\nWe built a baseline system from the set of contextless extracted biphrases.\nWe investigated two context-aware systems as well. System S1uses the features\nfrom the immediate context only and is a replication of the system described in\n[6]. System S2uses the same features plus the 16 most informative dependency\nfeatures found empirically. 
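To make the encoding of Figure 1 concrete, the following minimal Python sketch shows one way such a fixed-length vector could be assembled; the function names, the window of two tokens on each side and the handling of the dependency slots are illustrative assumptions, not the actual implementation used here. During training, each vector would be paired with the aligned target phrase (e.g. déclaration des) as its class label.

```python
# Illustrative sketch only: names and exact feature layout are assumptions.
NIL = "nil"

def encode_context(tokens, pos_tags, start, end, window=2):
    """Encode the source phrase tokens[start:end] and its immediate context as
    <phrase, phrase POS, POS of the 2 left/2 right words, those words>,
    filling missing positions with 'nil'."""
    word = lambda i: tokens[i] if 0 <= i < len(tokens) else NIL
    pos = lambda i: pos_tags[i] if 0 <= i < len(pos_tags) else NIL
    context = list(range(start - window, start)) + list(range(end, end + window))
    return ([" ".join(tokens[start:end]), " ".join(pos_tags[start:end])]
            + [pos(i) for i in context] + [word(i) for i in context])

def add_dependency_features(vector, tokens, span, dependencies, selected_types):
    """Append one slot per selected dependency type (e.g. 'poss-out'): the
    outside word linked to the phrase by such a dependency, or 'nil'.
    dependencies: iterable of (type, head_index, dependent_index) triples."""
    start, end = span
    slots = {t: NIL for t in selected_types}
    for dep_type, head, dep in dependencies:
        inside = [i for i in (head, dep) if start <= i < end]
        outside = [i for i in (head, dep) if not (start <= i < end)]
        if inside and outside and dep_type in slots and slots[dep_type] == NIL:
            slots[dep_type] = tokens[outside[0]]
    return vector + [slots[t] for t in selected_types]

# The example of Figure 1: 'declaration of' in 'our declaration of rights is unique'
tokens = ["our", "declaration", "of", "rights", "is", "unique"]
tags = ["prp", "nn", "prp", "nns", "vbz", "jj"]
vector = encode_context(tokens, tags, 1, 3)
# -> ['declaration of', 'nn prp', 'nil', 'prp', 'nns', 'vbz', 'nil', 'our', 'rights', 'is']
```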
Following a practical note in [7], we filtered out from\nthe phrase tables of S1 and S2 entries for which p(e|f )<0.0002. This reduces\nexperimentation time dramatically without impacting translation results very\nmuch. No such filtering was done for the baseline system.\nModels weights were optimized using Minimum Error Rate Training [11], and\ndecoding was performed using the Moses10open source PBSMT decoder. All\nof our translation engines share the same target language model: a Kneser-Ney\nsmoothed trigram model we trained on the French part of the whole Europarl\ncorpus thanks to the SRILM toolkit [12].\nFor running S1 and S2 systems, we proceeded sentence by sentence. We first\nclassified each sequence of words of a given sentence offline, thus producing a\ncontext-aware phrase-table. This table was merged to the main one. Since Moses\nis not designed to handle context-dependent phrases, we had to serialize each\ntoken in the source sentence in order to differentiate each repetition of a source\nphrase in a sentence. We dynamically modified the phrase table accordingly, and\napplied the reversed operation to the translation produced.\n3.2 Evaluation results\nTable 1 reports results obtained for automatic and manual evaluation. For the\nBLEU and NIST metrics, significance testing using paired bootstrap showed no\n9We used the POS tags output by this parser as well.\n10http://www.statmt.org/\n12th EAMT conference, 22-23 September 2008, Hamburg, Germany\n117\nsignificant results using 300 samples and p <0.05. As far as BLEU is concerned,\nS1is slightly below the baseline, whereas S2is just above the baseline. Analysis\nshows that the baseline system tends to use shorter source phrases than the\ncontext-aware systems, and in turn S1uses shorter phrases than S2. These\nresults are coherent with those of [13] which indicate that context-aware systems\ntend to make more “truly phrasal” lexical choices.\nWe carried out a manual evaluation to see if differences not shown by auto-\nmatic metrics would appear. Four native speakers were asked to rank 100 one-\nbest outputs chosen randomly from the test set, and were presented in random\norder. Each output was thus assigned a rank in {1,2,3}; ties were allowed.11\nContext-aware systems obtained better average mean rank than the baseline\nsystem and were preferred more often (an absolute 10% increase). Globally, two\njudges found S1andS2to have similar performance, while the other two pre-\nferred S2. Figure 2 shows an example where S2 was found better than S1 and\nthe baseline by all judges.\nautomatic manual\nBLEU NIST avg. rank %1st %2nd %3rd\nBaseline 30.89 6.72 1.54 64.7 9.3 26.0\nS130.54 6.70 1.39 74.0 12.0 14.0\nS231.06 6.70 1.34 74.7 13.3 12.0\nTable 1. Automatic and manual evaluation results.\nSrc it should be made clear to all countries that accession to the European Union is\nnot quite . . .\nBaseline/S1 elle doit ˆ etre clair pour tous les pays que l’ adh´ esion ` a l’ Union eu-\nrop´ eenne n’ est pas tout ` a fait . . .\nS2il faut pr´ eciser clairement ` a tous les pays que l’ adh´ esion ` a l’ Union europ´ eenne n’\nest pas tout ` a fait . . .\nFig. 2. Translation outputs produced by the three systems.\n4 Discussion and future work\nWhereas the need for exploiting source context information in SMT has been\nclearly identified, results showing improvements in translation quality only con-\nsider translation pairs with a less inflected target language. We have presented\n11Incidentally, S1 (resp. S2) and the baseline produced 11 (resp. 
22) identical transla-\ntions.\n12th EAMT conference, 22-23 September 2008, Hamburg, Germany\n118\nexperiments in using source context information including grammatical depen-\ndency targets while translating from a less inflected to a highly inflected lan-\nguage. While automatic metrics did not show clear gains on translation quality,\na manual evaluation we conducted confirms that context-aware systems can pro-\nduce better translations for highly inflected target languages as well.\nWe plan to repeat our experiments on more test sets of a bigger size, as well\nas to increase the size of the corpus used for training the classifier. We then plan\nto carry out two further experiments: the first one between two highly inflected\nlanguages (e.g. French →Spanish) to see if gains are observed in accord with the\nexperiments presented here; the second one between a highly inflected and a less\ninflected language (e.g. French →English) to see if our approach is competitive\nwith other systems, and, in particular to see if adding grammatical dependency\ninformation is beneficial.\nMoreover, several classifiers (different methods and/or vector representa-\ntions) could be used to learn the weights of the individual features of the log-\nlinear combination, as in [7].\nReferences\n1. Koehn, P., Och, F.J., , Marcu, D.: Statistical phrase-based translation. In: Pro-\nceedings of NAACL/HLT, Edmonton, Canada (2003)\n2. Vilar, D., Xu, J., d’Haro, L.F., Ney, H.: Error Analysis of Statistical Machine\nTranslation Output. In: Proceedings of LREC, Genoa, Italy (2006)\n3. Carpuat, M., Wu, D.: Context-dependent phrasal translation lexicons for SMT.\nIn: Proceedings of Machine Translation Summit XI, Copenhagen, Denmark (2007)\n4. Chan, Y.S., Ng, H.T., Chiang, D.: Word sense disambiguation improves statistical\nmachine translation. In: Proceedings of ACL’07, Prague, Czech Republic (2007)\n5. Gim´ enez, J., M` arquez, L.: Context-aware discriminative phrase selection for SMT.\nIn: Proceedings of WMT at ACL, Prague, Czech Republic (2007)\n6. Stroppa, N., van den Bosch, A., Way, A.: Exploiting source similarity for smt using\ncontext-informed features. In: Proceedings of TMI, Skvde, Sweden (2007)\n7. Gimpel, K., Smith, N.A.: Rich source-side context for SMT. In: Proceedings of\nWMT at ACL, Columbus, USA (2008)\n8. Daelemans, W., Zavrel, J., van der Sloot, K., van den Bosch, A.: TiMBL: Tilburg\nMemory Based Learner, v6.1, Reference Guide. Technical report, ILK 07-xx (2007)\n9. Koehn, P.: Europarl: A parallel corpus for statistical machine translation. In:\nProceedings of MT Summit, Phuket, Thailand (2005)\n10. de Marneffe, M.C., MacCartney, B., Manning, C.D.: Generating typed dependency\nparses from phrase structure parses. In: Proceedings of LREC, Genoa, Italy (2006)\n11. Och, F.J.: Minimum error rate training for statistical machine translation. In:\nProceedings of ACL, Sapporo, Japan (2003)\n12. Stolcke, A.: SRILM - an extensible language modeling toolkit. In: Proceedings of\nICSLP, Denver, Colorado (Sept 2002)\n13. Carpuat, M., Wu, D.: Evaluation of context-dependent phrasal translation lexicons\nfor SMT. In: Proceedings of LREC, Marrakech, Morroco (2008)\n12th EAMT conference, 22-23 September 2008, Hamburg, Germany\n119",
"main_paper_content": null
}
|
{
"decision": "Unknown",
"reviews": []
}
| 0 | 0 |
[] |
[] |
{
"id": "RWDfmzBtFJI",
"year": null,
"venue": "EAMT 2009",
"pdf_link": "https://aclanthology.org/2009.eamt-1.4.pdf",
"forum_link": "https://openreview.net/forum?id=RWDfmzBtFJI",
"arxiv_id": null,
"doi": null
}
|
{
"title": "TS3: an Improved Version of the Bilingual Concordancer TransSearch",
"authors": [
"Stéphane Huet",
"Julien Bourdaillet",
"Philippe Langlais"
],
"abstract": "Stéphane Huet, Julien Bourdaillet, Philippe Langlais. Proceedings of the 13th Annual conference of the European Association for Machine Translation. 2009.",
"keywords": [],
"raw_extracted_content": "Proceedings of the 13th Annual Conference of the EAMT , pages 20–27,\nBarcelona, May 2009\nTS3: an Improved Version of the Bilingual Concordancer TransSearch\nSt´ephane Huet, Julien Bourdaillet andPhilippe Langlais\nDIRO - Universit ´e de Montr ´eal\nC.P. 6128, succursale Centre-ville\nH3C 3J7, Montr ´eal, Qu ´ebec, Canada\n{huetstep,bourdaij,felipe }@iro.umontreal.ca\nAbstract\nComputer Assisted Translation tools re-\nmain the preferred solution of human\ntranslators when publication quality is\nof concern. In this paper, we present\nour ongoing efforts conducted within\nTS3, a project which aims at improv-\ning the commercial bilingual concordancer\nTransSearch . The core technology of\nthis Web-based service mainly relies on\nsentence-level alignment. In this study, we\ndiscuss and evaluate the embedding of sta-\ntistical word-level alignment.\n1 Introduction\nAlthough the last decade witnessed an impressive\namount of effort devoted to improving the current\nstate of Machine Translation (MT), professional\ntranslators still prefer Computer Assisted Transla-\ntion (CAT) tools, among which translation mem-\nory(TM) systems and bilingual concordancers .\nBoth tools exploit a TM composed of a bitext : a\nset of pairs of units (typically sentences) that are\nin translation relation. Whereas a TM system is\na translation device, a bilingual concordancer is\nconceptually simpler, since its main purpose is to\nretrieve from a bitext, the pairs of units that con-\ntain a query (typically a phrase) that a user man-\nually submits. It is then left to the user to locate\nthe relevant material in the retrieved target units.\nAs simple as it may appear, a bilingual concor-\ndancer is nevertheless a very popular CAT tool. In\n(Macklovitch et al., 2008), the authors report that\nTransSearch ,1the commercial web-based con-\ncordancer we focus on in this study, received an av-\nc/circlecopyrt2009 European Association for Machine Translation.\n1www.tsrali.comerage of 177 000 queries a month over a one-year\nperiod (2006–2007).\nFigure 1 provides a screenshot of a session with\nthe current concordancer TransSearch . A user\nsubmitted the multi-word query in keeping\nwith to which the system responded with a web-\npage showing the first 25 pairs of sentences in the\nTM that contain an occurrence of the query. As\ncan be observed, nothing in the target material re-\ntrieved is emphasized, which forces the user to\nread the examples retrieved until an appropriate\ntranslation was found.\nFigure 1: Screenshot of TransSearch . Two of\nthe first 25 matches returned to the user for the\nquery in keeping with .\nThe main objective of the TS3 project is to auto-\nmatically identify (highlight) in the retrieved mate-\nrial the different translations of a user query. Iden-\ntifying translations offers interesting prospects for\nuser-efficient interactions. Although the definitive\nlook-and-feel of the new prototype is not settled\nyet, Figure 2 shows an interface where the user can\nconsult the most likely translations automatically\n20\nidentified. Of course, she can still consult the pairs\nof sentences containing the query, but can as well\nclick a given translation to see related matches.\nFigure 2: A hypothetical interface which exploits\ntranslation spotting.\nThe reminder of this paper is organized as fol-\nlows. We first describe in Section 2 the translation\nspotting technique we implemented. 
Since translation spotting is a notoriously difficult problem, we discuss two novel issues that we think are essential to the success of a concordancer such as TransSearch: the identification of erroneous alignments (Section 3) and the grouping of translation variants (Section 4). We present the data we used in Section 5 and report on experiments in Section 6. We conclude our discussion and propose ongoing avenues in Section 7.
2 Transpotting
Translation spotting, or transpotting, is the task of identifying the word-tokens in a target-language (TL) translation that correspond to the word-tokens of a query in a source language (SL). It is therefore an essential part of the TS3 project. We call transpot the target word-tokens automatically associated with a query in a given pair of units (sentences). For instance in Figure 2, conformément à and va dans le sens de are two of the six transpots displayed to the user for the query in keeping with.
2.1 Word Alignment
As mentioned in (Simard, 2003), translation spotting can be seen as a by-product of word-level alignment. Since the seminal work of (Brown et al., 1993), statistical word-based models are still the core technology of today's Statistical MT. This is therefore the alignment technique we consider in this study.
Formally, given an SL sentence $S = s_1 \ldots s_n$ and a TL sentence $T = t_1 \ldots t_m$ in translation relation, an IBM-style alignment $a = a_1 \ldots a_m$ connects each target token to a source one ($a_j \in \{1, \ldots, n\}$) or to the so-called NULL token, which accounts for untranslated target tokens and which is arbitrarily set to the source position 0 ($a_j = 0$). This defines a word-level alignment space between $S$ and $T$ whose size is in $O((n+1)^m)$.
Several word-alignment models are introduced and discussed in (Brown et al., 1993). They differ by the expression of the joint probability of a target sentence and its alignment, given the source sentence. For computational reasons, we focus here on the simplest form, which corresponds to IBM models 1&2:
$$p(t_1^m, a_1^m \mid s_1^n) = \prod_{j=1}^{m} \sum_{i \in [0,n]} p(t_j \mid s_i) \times p(i \mid j, m, n)$$
where the first term inside the summation is the so-called transfer distribution and the second one is the alignment distribution.
Given this decomposition of the joint probability, it is straightforward to compute the so-called Viterbi alignment, that is, the one maximizing the quantity $p(a_1^m \mid t_1^m, s_1^n)$. This approach often produces discontiguous alignments, which poses problems in practice. Furthermore, most of the queries in our logfile are contiguous ones, therefore we expect their translations to be contiguous as well.
2.2 Transpotting Algorithm
In order to enforce contiguous transpots, we implemented a variant of the transpotting algorithm initially proposed by Simard (2003), and which shares close similarities with the phrase extraction technique described in (Venugopal et al., 2003). The idea is to compute, for each pair $\langle j_1, j_2 \rangle \in [1, m] \times [1, m]$, two Viterbi alignments: one between the phrase $t_{j_1}^{j_2}$ and the query $s_{i_1}^{i_2}$, and one between the remaining material in the sentences, $\bar{s}_{i_1}^{i_2} \equiv s_1^{i_1-1} s_{i_2+1}^{n}$ and $\bar{t}_{j_1}^{j_2} \equiv t_1^{j_1-1} t_{j_2+1}^{m}$. This method, which finds the translation of the query according to
$$\hat{t}_{\hat{j}_1}^{\hat{j}_2} = \operatorname*{argmax}_{(j_1, j_2)} \left\{ \max_{a_{j_1}^{j_2}} p(a_{j_1}^{j_2} \mid s_{i_1}^{i_2}, t_{j_1}^{j_2}) \times \max_{\bar{a}_{j_1}^{j_2}} p(\bar{a}_{j_1}^{j_2} \mid \bar{s}_{i_1}^{i_2}, \bar{t}_{j_1}^{j_2}) \right\},$$
has a complexity in $O(nm^3)$. 
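The following short sketch illustrates the span-based search just described, under simplifying assumptions: only IBM model 1 lexical probabilities are used (the alignment distribution of model 2 is dropped), and all names and data structures are illustrative rather than the actual TS3 code.

```python
import math

def viterbi_ibm1_logprob(src_tokens, tgt_tokens, p_lex):
    """Log-probability of the best (Viterbi) IBM-1 style alignment of tgt_tokens
    given src_tokens: each target token links to its best source token or to NULL.
    p_lex maps (source, target) word pairs to lexical probabilities."""
    sources = list(src_tokens) + ["NULL"]
    score = 0.0
    for t in tgt_tokens:
        score += math.log(max(p_lex.get((s, t), 1e-10) for s in sources))
    return score

def transpot(src_tokens, tgt_tokens, i1, i2, p_lex):
    """Return the contiguous target span (j1, j2) that maximizes the product of
    the alignment score of the span against the query src_tokens[i1:i2] and the
    score of the remaining target material against the rest of the source."""
    query = src_tokens[i1:i2]
    src_rest = src_tokens[:i1] + src_tokens[i2:]
    best_score, best_span = float("-inf"), None
    for j1 in range(len(tgt_tokens)):
        for j2 in range(j1 + 1, len(tgt_tokens) + 1):
            span = tgt_tokens[j1:j2]
            rest = tgt_tokens[:j1] + tgt_tokens[j2:]
            score = (viterbi_ibm1_logprob(query, span, p_lex)
                     + viterbi_ibm1_logprob(src_rest, rest, p_lex))
            if score > best_score:
                best_score, best_span = score, (j1, j2)
    return best_span
```

With the query span fixed, the double loop over (j1, j2) combined with the inner Viterbi computation accounts for the O(nm³) complexity of the method described above.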
It ranked first among\nseveral other alternatives we investigated (Bour-\ndaillet et al., 2009).\n2.3 The Need for Post-processing\nFrequent queries in the TM receive numerous\ntranslations by the previously described transpot-\nting process. Figure 3 illustrates the many trans-\npots returned by our transpotting algorithm for the\nquery in keeping with . As can be observed,\nsome transpots (those marked by a star) are clearly\nwrong ( e.g.`a), many others (in italics) are only\npartially correct ( e.g.conform ´ement ). Also, it\nappears that many transpots are indeed very simi-\nlar (e.g.conforme `aandconformes `a).\nconforme `a(45) conform ´ement `a(29)\n`a⋆(21) dans⋆(20)\n. . .\nconforme au (12) conformes `a(11)\navec⋆(9) conform ´ement (9)\n. . .\ncorrespond `a(1)respectent (1)\nd’actualit ´e⋆(1)gestes en⋆(1)\nFigure 3: Subset of the 273 different transpots re-\ntrieved for the query in keeping with . Their\nfrequency is shown in parentheses.\nSince in TS3 we want to offer the user a list of\nretrieved translations for a query, strategies must\nbe devised for bypassing alignment errors and de-\nlivering as many as possible translation variants\nto the user. We investigated two avenues in this\nstudy: detecting erroneous transpots (Section 3)\nand merging together variants of the same canoni-\ncal translation (Section 4).\n3 Refining Transpotting\nWe investigated the learning of classifiers trained\nto distinguish good transpots from bad ones.\nWe tried several popular classifiers:2avoted-\nperceptron algorithm (Freund and Schapire, 1999)\n2We used Weka in our experiments www.cs.waikato.\nac.nz/ml/weka .which has been reported to work well in a number\nof NLP tasks (Collins, 2002); a support vector ma-\nchine (SVM), commonly used in supervised learn-\ning (Cristianini and Shawe-Taylor, 2000); a deci-\nsion stump , a very simple one-level decision tree;\nAdaBoost using a decision stump as weak classifier\n(Freund and Schapire, 1996); and a majority vot-\ningclassifier between a voted-perceptron, an SVM\nand AdaBoost (Kittler et al., 1998).\nEach classifier was trained in a supervised way\nthanks to an annotated corpus we devised (see Sec-\ntion 5). We computed three groups of features\nfor each example, that is, each query/transpot pair\n(q, t). The first group is made up of features related\nto the size (counted in words) of qandt, with the\nintuition that they should be related. The second\ngroup gathers various alignment scores computed\nwith word-alignment models (min and max like-\nlihood values, etc.). The last group gathers clues\nthat are more linguistically flavored, among which\nthe ratio of grammatical words in qandt, or the\nnumber of prepositions and articles. In total, each\nexample is represented by at most 40 numerical\nfeatures.\n4 Merging Variants\nOnce erroneous transpots have been filtered out,\nthere usually remain many translations for a given\nquery. For instance, the best classifier we trained\nidentified 91 bad transpots among the 273 candi-\ndat ones. Among the remaining transpots, some\nof them are very similar and are therefore redun-\ndant for the user (see Figure 3). This phenomenon\nis particularly acute for the French language with\nits numerous conjugated forms for verbs. Another\nproblem that often shows up is that many transpots\ndiffer only by punctuation marks or by a few gram-\nmatical words.\nOf the 182 transpots surviving the filter, we\nestimate that no less than 37 interesting canoni-\ncaltranslations exist for the query in keeping\nwith . 
Therefore, it is important from the user per-\nspective to identify them. We investigated ways to\nmerge together close variants. This raises several\ndifficulties. First, the transpots must be compared\ntogether, which represents both a tricky and time\nconsuming process. Second, we need to identify\ngroups of similar variants. We describe our solu-\ntions to these problems in the sequel.22\n4.1 Word-Based Edit Distance\nA word-level specific edit distance was empirically\ndeveloped to meet the constraints of our applica-\ntion. Different substitution, deletion and insertion\ncosts are set according to the grammatical classes\nor possible inflections of the words; it is therefore\nlanguage dependent. We used an in-house lexicon\nthat lists, for both French and English, the lem-\nmas of each inflected form and its possible parts-\nof-speech.\nA minimal substitution cost was empirically\ngiven between two inflected forms of the same\nlemma. A score has been engineered which in-\ncreasingly penalizes in that order edit operations\ninvolving punctuation marks, articles, grammatical\nwords (prepositions, conjunctions and pronouns),\nauxiliary verbs and lexical words (verbs, nouns,\nadjectives and adverbs).\n4.2 Neighbor-Joining Algorithm\nComparing the transpots pairwise with the dis-\ntance we defined is an instance of multiple se-\nquence alignment, a well studied problem in bioin-\nformatics (Chenna et al., 2003). We adopted\nthe approach of progressive alignment construc-\ntion. This method first computes the word-\nbased edit-distance between every pair of trans-\npots and stores the results in an edit-matrix. Sec-\nond, a greedy bottom-up clustering method called\nneighbor-joining (Saiou and Nei, 1987) is con-\nducted; it builds a tree by joining together either\ntwo transpots, that is two leaves of the tree, or a\ntranspot and a node in the tree already aggregating\nseveral translations. At each step, the most simi-\nlar pair is merged and added to the tree, until no\ntranspot remains unaligned.\nFinally, the neighbor-joining algorithm returns a\ntree whose leaves are transpots. Closest leaves in\nthis tree correspond to the most similar variants.\nTherefore, clusters of variants can be formed by\ntraversing the tree in a post-order manner. The\ntranspots associated with two neighboring leaves\nand which differ only by grammatical words or by\ninflectional variants are considered as sufficiently\nsimilar to be merged into a single cluster. This\nprocess is repeated until all the leaves have been\ncompared with their nearest neighbor and no more\nsimilar variants remain.\nFigure 4 illustrates this process. The two neigh-\nbor transpots conforme `aandconformes `a\nare first grouped together, so are conforme auandconforme aux . Then, those two groups\nare merged into a single cluster. The transpot\ncorrespondant `abeing too different is not\naggregated into this cluster.\nFigure 4: Merging of close transpots.\n4.3 Naive Joining Algorithm\nWe also implemented a conceptually simpler\nmerging algorithm which relies on the frequencies\nof transpots. The algorithm compares the most fre-\nquent variant with all the other ones. Those that are\nclose enough (according to our distance) are aggre-\ngated into a cluster. 
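As a rough illustration of the grouping step, the sketch below combines a word-level edit distance with class-dependent costs and the greedy, frequency-driven joining just outlined; the cost values, the threshold and the lemma/part-of-speech helpers are placeholders rather than the settings actually used, and the neighbor-joining variant of Section 4.2 differs mainly in how candidate pairs are selected (via a tree built from the full pairwise distance matrix).

```python
# Hypothetical cost classes, ordered roughly as in Section 4.1: punctuation marks,
# articles, other grammatical words, auxiliaries, lexical words.
COST = {"punct": 0.1, "art": 0.2, "gram": 0.4, "aux": 0.6, "lex": 1.0}

def sub_cost(w1, w2, lemma, pos_class):
    if w1 == w2:
        return 0.0
    if lemma(w1) == lemma(w2):          # inflected forms of the same lemma
        return 0.05
    return max(COST[pos_class(w1)], COST[pos_class(w2)])

def edit_distance(a, b, lemma, pos_class):
    """Word-level edit distance over token lists a and b, with class-dependent
    insertion, deletion and substitution costs."""
    d = [[0.0] * (len(b) + 1) for _ in range(len(a) + 1)]
    for i in range(1, len(a) + 1):
        d[i][0] = d[i - 1][0] + COST[pos_class(a[i - 1])]
    for j in range(1, len(b) + 1):
        d[0][j] = d[0][j - 1] + COST[pos_class(b[j - 1])]
    for i in range(1, len(a) + 1):
        for j in range(1, len(b) + 1):
            d[i][j] = min(
                d[i - 1][j] + COST[pos_class(a[i - 1])],      # deletion
                d[i][j - 1] + COST[pos_class(b[j - 1])],      # insertion
                d[i - 1][j - 1] + sub_cost(a[i - 1], b[j - 1], lemma, pos_class))
    return d[len(a)][len(b)]

def naive_join(transpots, lemma, pos_class, threshold=0.3):
    """transpots: dict mapping each variant (string) to its frequency.
    Greedily cluster the variants around the most frequent remaining one."""
    remaining = dict(transpots)
    clusters = []
    while remaining:
        head = max(remaining, key=remaining.get)
        cluster = [v for v in remaining
                   if edit_distance(head.split(), v.split(), lemma, pos_class) <= threshold]
        clusters.append(cluster)
        for v in cluster:
            del remaining[v]
    return clusters
```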
This process is iteratively ap-\nplied on the remaining variants until no more clus-\nter can be formed.\n5 Corpora\n5.1 Translation Memory\nThe largest collections in TransSearch come\nfrom the Canadian Hansards, that is, parallel texts\nin English and French drawn from official records\nof the proceedings of the Canadian Parliament. For\nour experiments, we indexed with Lucene3a TM\ncomprising 3.3 million pairs of French-English\nsentences aligned at the sentence-level by an in-\nhouse aligner. This was the maximum amount\nof material we could train a statistical word-\nalignment model on, running the giza++ (Och\nand Ney, 2003) toolkit on a computer equipped\nwith 16 gigabytes of memory.\n5.2 Automatic Reference Corpus\nWe developed a reference corpus ( REF) by inter-\nsecting our TM with a bilingual lexicon, and some\nuser queries. We used an in-house bilingual-phrase\nlexicon we collected over various projects, which\nincludes 59 057 English phrases with an average of\n1.4 French translations each. We extracted from\nthe logs of TransSearch the 5 000 most fre-\nquent queries submitted by users to the system.\n4 526 of those queries actually occurred in our TM,\nand of these, 2 130 had a sanctioned translation in\nour bilingual lexicon. We collected up to 5 000\npairs of sentences for each of those 2 130 queries,\n3http://lucene.apache.org23\nleading to a set of 1 102 357 pairs of sentences,\nwith an average of 517 pairs of sentences per\nquery. For each of the 2 130 queries, the bilingual\nlexicon enabled us to extract a mean of 3.5 differ-\nent transpots (and a maximum of 37). This results\nin a set of 7 472 different pairs of query/translation.\n5.3 Human Reference\nIn order to train the classifiers described in Sec-\ntion 3, four human annotators were asked to iden-\ntify bad transpots among those proposed by our\ntranspotting algorithm. We decided to annotate the\nquery/transpot pairs without their contexts of oc-\ncurrence, which allows a relatively fast annotation\nprocess,4but leaves some cases difficult to anno-\ntate. For instance, in our running example, a trans-\npot such as conforme `ais straightforward to\nannotate, but others such as dans le sens de\nortenir compte de gave annotators a harder\ntime since both can be valid translations in some\ncontexts. We ended up with a set of 531 queries\nthat have an average of 22.9 transpots each, for\na total of 12 144 annotated examples. We com-\nputed the inter-annotator agreement and observed\na 0.76 kappa score, which indicates a high degree\nof agreement.\n6 Experiments\n6.1 Transpotting\nFor each of the 1 102 357 pairs of sentences of REF,\nwe evaluated the ability of the transpotting algo-\nrithm described in Section 2.2 to find the reference\ntranslation ˆtfor the query q, according to recall and\nprecision ratios computed as follows:\nrecall =|t∩ˆt|/|ˆt|precision =|t∩ˆt|/|t|\nF-measure = 2× |t∩ˆt|/(|t|+|ˆt|)\nwhere tis the transpot identified by the algo-\nrithm, and the intersection operation is to be un-\nderstood as the portion of words shared by tand\nˆt. A point of detail is in order here: since sev-\neral pairs of sentences often contain the same given\nquery/reference translation pair (q,ˆt), we first av-\nerage for a given pair the ratios measured for all\nthe occurrences of that pair in the reference cor-\npus. Then, we average the scores over the set of all\ndifferent pairs (q,ˆt)in the corpus. 
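For clarity, the sketch below spells out this two-level averaging: precision, recall and F-measure are first averaged over all occurrences of a given (query, reference translation) pair and then macro-averaged over the distinct pairs. It assumes transpots and references are available as token lists and is only an illustration, not the evaluation script used here.

```python
from collections import defaultdict

def overlap(t, t_ref):
    """Number of word tokens shared by the transpot t and the reference."""
    shared, pool = 0, list(t_ref)
    for w in t:
        if w in pool:
            pool.remove(w)
            shared += 1
    return shared

def prf(t, t_ref):
    inter = overlap(t, t_ref)
    precision = inter / len(t) if t else 0.0
    recall = inter / len(t_ref) if t_ref else 0.0
    f_measure = 2 * inter / (len(t) + len(t_ref)) if (t or t_ref) else 0.0
    return precision, recall, f_measure

def evaluate(occurrences):
    """occurrences: list of ((query, reference), transpot) items, one per sentence
    pair; scores are averaged per (query, reference) pair before the final mean."""
    mean = lambda xs: sum(xs) / len(xs)
    per_pair = defaultdict(list)
    for (query, reference), transpot in occurrences:
        per_pair[(query, tuple(reference))].append(prf(transpot, reference))
    pair_scores = [tuple(mean(col) for col in zip(*scores))
                   for scores in per_pair.values()]
    return tuple(mean(col) for col in zip(*pair_scores))   # (precision, recall, F)
```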
This avoids bi-\nasing our evaluation metrics toward frequent pairs\nin the REFcorpus.5\n4On the order of 40 seconds per query.\n5Without this normalization, results would be increased by a\nrange of 0.2 to 0.4 points.prec. rec. F-meas.\ntranspotting 0.30 0.60 0.38\ntranspotting0.37 0.76 0.46+ voting\nTable 1: Transpotting results before and after fil-\ntering ( REF). See next section for an explanation\nof line 2.\nOur transpotting algorithm (see line 1 of Ta-\nble 1) achieves a precision of 0.30, and a recall\nof0.60. At a first glance, these figures might seem\nrather low. However, recall that our normalization\nprevents frequent queries that are often correctly\naligned from being counted several times. Thus,\nthis reinforces the score measured for infrequent\nqueries, which in turn tend to be worse aligned.\nAlso, we observed that very often the reference\ntranslation is a subset of the transpot found, which\nlowers precision. This is the case of the example\nshown in Figure 5.\nUne telle restriction ne s’\ninscrit pas dans le sens des\npratiques actuelles.\nFigure 5: Transpot (underlined) and reference\ntranslation (in bold) for the query in keeping\nwith .\n6.2 Training Classifiers\nAs described in Section 3, we trained various clas-\nsifiers to identify spurious transpots, representing\nan example (a query/transpot pair) by three kinds\nof feature sets. All these variants plus a few chal-\nlenging baselines are evaluated according to the ra-\ntio of Correctly Classified Instances (CCI). Since\nin our application we are interested in filtering\nout bad transpots, precision, recall and F-measure\nrates related to this class are computed as well.\nWe report in Table 2 the figures we measured\nby a 10-fold stratified cross-validation procedure.\nTo begin with, the simplest baseline we built (line\n1) classifies all instances as good. This results in\na useless filter with a CCI ratio of 0.62. A more\nsensible baseline —that we engineered after we in-\nvestigated the usefulness of different feature sets—\nclassifies as bad the transpots whose ratio of gram-\nmatical words is above 0.75. It is associated with\na CCI ratio of 0.78 (line 2).\nWe started by investigating the voted-perceptron24\nBad\nClassifier Features CCI precision recall F-measure\nBaseline: all good 0.62 0.00 0.00 0.00\nBaseline: grammatical ratio >0.75 0.78 0.88 0.49 0.63\nVoted-Perceptron (VP)size 0.73 0.75 0.47 0.58\nIBM 0.78 0.69 0.78 0.73\ngrammatical 0.79 0.88 0.52 0.65\nall 0.83 0.81 0.73 0.77\nSVM\nall0.83 0.84 0.70 0.76\nDecision Stump 0.81 0.77 0.70 0.74\nAdaBoost 0.83 0.71 0.83 0.76\nMajority-Voting (VP+SVM+AdaBoost) 0.84 0.84 0.71 0.77\nTable 2: Performance of different algorithms for identifying bad transpots.\nand the contribution of each feature sets on its per-\nformance.6When the voted-perceptron is trained\nusing only one set of features, the one making use\nof the grammatical features obtains the best CCI\nratio of 0.79and an F-measure of 0.65. Even\nif the configuration based on IBM model 2 word\nalignment scores obtains a slightly inferior CCI\nratio of 0.78, it has a much higher F-measure of\n0.73and can be considered as the best feature set.\nWhen using all feature sets, the voted-perceptron\nclearly surpasses the baseline with a CCI of 0.83\nand an F-measure of 0.77. It should be noticed\nthat while the best baseline has a better precision\nthan the best voted-perceptron, precision and recall\nare more balanced for the latter. 
Because it is not\nclear whether precision or recall should be favored\nfor the task of bad transpot filtering, optimizing the\nF-measure is preferred.\nWhen training the other classifiers using all fea-\nture sets, no significant gain can be observed. Nev-\nertheless the majority-voting classifier obtains the\nbest CCI ratio of 0.84and an F-measure of 0.77.\nThe figures obtained by the decision stump, a one-\nlevel decision tree, are surprisingly high. The rule\nused by this classifier considers the minimal word\nalignment probability inside a Viterbi alignment\nbased on an IBM model 2. At the very least, this\nconfirms the interest of this feature set.\nOnce the best classifier had been obtained, i.e.\nmajority-voting, we evaluated the impact of trans-\npotting filtering against the REF corpus. Results\nare shown in Table 1 (line 2). We observe a signif-\nicant gain in F-measure which increases from 0.38\nto0.46. The higher gain is in recall, from 0.60to\n6Similar results concerning the different feature set have been\nobserved for the other classifiers and are not presented here.0.76. Referring to the example of Section 6.1, this\nmeans that filtering eliminates too short transpots.\nInspections revealed that short bad transpots, such\nas grammatical words, are frequently identified as\nbad by the classifier. This demonstrates the interest\nof filtering bad transpots.\n6.3 Merging Variants\nThe interest of grouping together similar variants is\nclear from a user perspective. However, the gran-\nularity with which we should aggregate variants is\nnot obvious.7We studied two approaches. The\nfirst method aims at grouping together transpots\nthat differ by punctuation marks or that are inflec-\ntional variants of the same lemma. It is based on an\nedit distance, called D1, which uses the same edit-\ncosts for grammatical and lexical words. The sec-\nond method groups together variants with looser\nconstraints. It resorts to an edit distance, named\nD2, that associates lower edit-costs with grammat-\nical words than with lexical words.\nFrom the transpots obtained for the 5 000\nqueries of the REFcorpus (and filtered by our best\nclassifier), this method leads to an average of 69\nclusters per query (Table 3, columns 2 and 4),\nwhereas there are on average 85 unique transpots\nper query (Table 3, column 1). The same level\nof grouping is observed for the two joining algo-\nrithms described in Section 4.3.\nAs expected, the use of D2dramatically re-\nduces the number of clusters to an average of 45\nper query (Table 3, columns 3 and 5). Contrary\ntoD1,D2allows the merging process to gather\nsimilar variants such as sur des ann ´ees and\n7This would certainly require tests with real users.25\nbaselinenaive joining neighbor-joining\nD1 D2 D1 D2\n85 69 45 69 45\nTable 3: Average number of responses per query.\ndurant des ann ´ees. However, it occa-\nsionally leads to erroneous groupings such as\ntout `a fait (fully ) and fait tout (do\neverything ).\nIn what follows, we measure quantitatively the\nimprovement from the point of view of the quality\nof the first responses suggested for each query.\nExperimental Setup Table 4 shows the 5 most\nfrequent transpots computed for two queries by the\noriginal transpotting algorithm (baseline) and ob-\ntained after grouping together variants (with the\nnaive joining method). 
We observe the tendency\nof the baseline to propose inflectional variants\nof the same translation, while merging variants\nleads to more diversity, which is preferable since\nthe number of variants that can be displayed in\nTransSearch without scrolling is limited. In-\ndeed, we think that presenting the user with around\n5 transpots and some sentences where they oc-\ncurred is a good compromise (see Figure 2).\nIn order to simulate this, we measure in what\nfollows the diversity of the best 5 transpots pro-\nposed by different methods. The baseline keeps\nthe 5 most frequent transpots as returned by our\ntranspotting algorithm, while the other methods al-\nlow for clustering the transpots. The 5 most fre-\nquent clusters are considered,8and the most fre-\nquent variant in each cluster is retained. Therefore\neach method delivers at most 5 transpots.\nThe best 5 transpots are considered as bags\nof unigrams, bigrams or trigrams and compared\nto reference translations turned also in bags of\nn-grams. All the words are lemmatized, and\nshort words (less than 4 characters) are discarded\nas a proxy to remove grammatical words. For\ninstance, the transpots returned by the baseline\nmethod in Table 4 for the first query are turned into\n{d´ecrire ,comme }.\nResults The comparison of the generated bags-\nof-words with the reference ones is done by com-\nputing precision and recall. The reference used\nhere is the resource described in Section 5.3. Ta-\n8The frequency of a cluster is the cumulative frequency of all\nthe variants it groups.ble 5 reports results obtained with the metrics\nbased on bags of n-grams without joining variants\n(line 1) and when using either the neighbor-joining\nalgorithm (lines 2 and 3) or the naive method\n(lines 4 and 5). Their comparison shows an im-\nprovement in terms of F-measure for unigrams, bi-\ngrams and trigrams when variants are merged. If\nthe precision slightly decreases for unigrams w.r.t.\nthe baseline, a significant improvement is obtained\nespecially with the edit distance D2. These results\nare correlated with the more diversified transla-\ntions obtained when variants are grouped together.\n7 Discussion\nIn this study, we have investigated the use of sta-\ntistical word-alignment for improving the commer-\ncial concordancer TransSearch . A transpotting\nalgorithm has been proposed and evaluated. We\ndiscussed two novel issues that are essential to the\nsuccess of our new prototype: detecting erroneous\ntranspots, and grouping together similar variants.\nWe proposed our solutions to these two problems\nand evaluated their efficiency. In particular, we\ndemonstrated that it is possible to detect erroneous\ntranspots better than a fair baseline, and that merg-\ning variants leads to transpots of better diversity.\nFor the time being, it is difficult to compare our\nresults to others in the community. This is princi-\npally due to the uniqueness of the TransSearch\nsystem, which archives a huge TM. To give a point\nof comparison, in (Callisson-Burch et al., 2005)\nthe authors report alignment results they obtained\nfor 120 selected queries and a TM of 50 000 pairs\nof sentences. This is several orders of magnitude\nsmaller than the experiments we conducted in this\nstudy.\nThere are several issues we are currently in-\nvestigating. First, we only considered simple\nword-alignment models in this study. Higher-level\nIBM models can potentially improve the quality\nof the word alignments produced. 
At the very\nleast, HMM models (Vogel et al., 1996), for which\nViterbi alignments can be computed efficiently,\nshould be considered. The alignment method used\nin current phrase-based SMT is another alternative\nwe are considering.\nAcknowledgements\nThis research is being funded by an NSERC grant\nin collaboration with Terminotix.9\n9www.terminotix.com26\nbaseline d´ecrits d ´ecrite d ´ecrit tel que d ´ecrit comme l’a\nnaive joining D2 d´ecrits pr ´evu comme l’a tel que prescrit comme le propose\nbaseline s’est r ´ev´el´e s’est av ´er´e s’est av ´er´ee s’est r ´ev´el´ee a ´et´e\nnaive joining D2s’est r ´ev´el´e s’est av ´er´e a ´et´e s’est montr ´e a prouv ´e\nTable 4: 5 most frequent responses for the queries as described andhas proven to be when\na joining method is used or not.\nunigrams bigrams trigrams\nprec. rec. FM prec. rec. FM prec. rec. FM\nbaseline 0.93 0.45 0.61 0.82 0.35 0.49 0.68 0.30 0.41\nnaive D10.93 0.51 0.65 0.86 0.40 0.55 0.72 0.33 0.45\njoining D20.90 0.57 0.69 0.79 0.40 0.53 0.72 0.33 0.45\nneighbor- D10.93 0.50 0.65 0.86 0.41 0.55 0.72 0.34 0.46\njoining D20.90 0.56 0.69 0.80 0.40 0.53 0.71 0.34 0.46\nTable 5: Evaluation of quality of the variants merging process for the 5 most frequent groups retrieved\nfor 531 queries.\nReferences\nBourdaillet, J., S. Huet, F. Gotti, G. Lapalme, and\nP. Langlais. 2009. Enhancing the bilingual concor-\ndancer TransSearch with word-level alignment. In\n22nd Conference of the Canadian Society for Com-\nputational Studies of Intelligence , Kelowna, Canada.\nBrown, P., V. Della Pietra, S. Della Pietra, and R. Mer-\ncer. 1993. The mathematics of statistical machine\ntranslation: parameter estimation. Computational\nLinguistics , 19(2):263–311.\nCallisson-Burch, C., C. Bannard, and J. Schroeder.\n2005. A compact data structure for searchable trans-\nlation memories. In 10th European Conference of\nthe Association for Machine Translation (EAMT) ,\npages 59–65, Budapest, Hungary.\nChenna, R., H. Sugawara, T. Koike, R. Lopez, T. J.\nGibson, D. G. Higgins, and J. D. Thompson. 2003.\nMultiple sequence alignment with the Clustal series\nof programs. Nucleic Acids Research , 31(13):3497–\n3500.\nCollins, M. 2002. Discriminative training methods for\nhidden Markov models: theory and experiments with\nperceptron algorithms. In Conference on Empirical\nMethods in Natural Language Processing (EMNLP) ,\npages 1–8, Philadelphia, PA, USA.\nCristianini, N. and J. Shawe-Taylor. 2000. An In-\ntroduction to Support Vector Machines and Other\nKernel-Based Learning Methods. Cambridge Uni-\nversity Press.\nFreund, Y. and R. Schapire. 1996. Experiments with a\nnew boosting algorithm. In 13th International Con-\nference on Machine Learning (ICML) , pages 148–\n156, Bari, Italy.Freund, Y. and R. Schapire. 1999. Large margin clas-\nsification using the perceptron algorithm. Machine\nLearning , 37(3):277–296.\nKittler, J., M. Hatef, R. P.W. Duin, and J. Matas.\n1998. On combining classifiers. IEEE Transac-\ntions on Pattern Analysis and Machine Intelligence ,\n20(3):226–239.\nMacklovitch, E., G. Lapalme, and F. Gotti. 2008.\nTransSearch: What are translators looking for? In\n18th Conference of the Association for Machine\nTranslation in the Americas (AMTA) , pages 412–\n419, Waikiki, Hawai’i, USA.\nOch, F. J. and H. Ney. 2003. A systematic comparison\nof various statistical alignment models. Computa-\ntional Linguistics , 29(1):19–51.\nSaiou, N. and M. Nei. 1987. The neighbor-joining\nmethod: a new method for reconstructing phylo-\ngenetic trees. 
Molecular Biology and Evolution ,\n4(4):406–425.\nSimard, M. 2003. Translation spotting for transla-\ntion memories. In HLT-NAACL 2003 Workshop on\nBuilding and Using Parallel Texts: Data Driven Ma-\nchine Translation and beyond , pages 65–72, Edmon-\nton, Canada.\nVenugopal, A., S. Vogel, and A. Waibel. 2003. Ef-\nfective phrase translation extraction from alignment\nmodels. In 41st Annual Meeting of the Association\nfor Computational Linguistics (ACL) , pages 319–\n326, Sapporo, Japan.\nVogel, S., H. Ney, and Tillmann C. 1996. HMM-based\nword alignment in statistical translation. In 16th\nConference on Computational Linguistics , pages\n836–841, Copenhagen, Denmark.27",
"main_paper_content": null
}
|
{
"decision": "Unknown",
"reviews": []
}
| 0 | 0 |
[] |
[] |
{
"id": "LF9knQYDNJG",
"year": null,
"venue": "EAMT 2008",
"pdf_link": "https://aclanthology.org/2008.eamt-1.18.pdf",
"forum_link": "https://openreview.net/forum?id=LF9knQYDNJG",
"arxiv_id": null,
"doi": null
}
|
{
"title": "Multilingual summarization in practice: the case of patent claims",
"authors": [
"Simon Mille",
"Leo Wanner"
],
"abstract": "Simon Mille, Leo Wanner. Proceedings of the 12th Annual conference of the European Association for Machine Translation. 2008.",
"keywords": [],
"raw_extracted_content": "Multilingual Summarization in Practice: \nThe Case of Patent Claims \nSimon Mille1 and Leo Wanner1,2 \n \n1Department of Information and Communicati on Technologies, Pompeu Fabra University, \nOcata, 1, 08003 Barcelona, Spain \n2Catalan Institute for Research and Advanced Studies (ICREA), \nLluis Companys, 23, 08010 Barcelona, Spain \[email protected], [email protected] \nAbstract. Hardly any other type of textual material is as difficult to read and \ncomprehend as patents. Especially the cl aims in a patent reveal very complex \nsyntactic constructions which are difficult to process even for native speakers, \nlet alone for foreigners who do not master well the language in which the patent \nis written. Therefore, multilingual summarization is very attractive to \npractitioners in the field. We propose a multilingual summarizer that operates at \nthe Deep-Syntactic Structures (DSyntSs) as introduced in the Meaning-Text Theory. Firstly, the original claims are linguistically simplified and analyzed \ndown to DSyntSs. Then, syntactic and discursive summarization criteria are \napplied to the DSyntSs to remove summary irrelevant DSyntS-branches. The pruned DSyntS are transferred into DSyntSs of the language in which the \nsummary is to be generated. For the generation of the summary from the \ntransferred DSyntSs, we use the fu ll fledged text generator MATE. \nKeywords: patent claims, summary, machine translation, Meaning-Text \nTheory, Deep-Syntactic Structure. \n1 Introduction \nHardly any other kind of text material is as notoriously difficult to read and \ncomprehend as patents. This is first of a ll due to their abstract vocabulary and very \ncomplex syntactic constructions. Especially th e claims in a patent are a challenge: in \naccordance with international patent writing regulations, each claim must be rendered \nin a single sentence. 
As a result, sentence s containing more than 250 words are not \nuncommon; consider a still “rather short” claim from the patent EP0137272A2: \n \n(1) An automatic focusing device comprising: an objective lens for focusing a light beam emitted by a \nlight source on a track of an information recording medium; a beam splitter for separating a \nreflected light beam reflected by the information recording medium at a focal spot thereon and through the objective lens from the light beam emitted by the light source; an astigmatic optical system including an optical element capable of causing the astigmatic aberration of the separated reflected light beam; a light detector having a light receiving surface divided, except the central \n12th EAMT conference, 22-23 September 2008, Hamburg, Germany\n120\nportion thereof, into a plurality of light receiving sections which are arranged symmetrically with \nrespect to a first axis extending in parallel to the axial direction of the optical element and to a second axis extending perpendicularly to the first axis and adapted to receive the reflected beam transmitted through the optical element and to give a light reception output signal corresponding to the shape of the spot of the reflected light beam formed on the light receiving surface; a focal position detecting circuit capable of giving an output signal corresponding to the displacement of the objective lens from the focused position, on the basis of the output signal given by the light detector; and a lens driving circuit which drives the objective lens along the optical axis on the basis of the output signal given by the focal position detecting circuit.\n \n \nA sentence of this length and complexity is difficult to process even for native \nspeakers of English, let alone for foreigners who do not master English well. Given that professionals have to sift through the claims of a large number of patents returned as response to a search in a patent DB (which makes a quick assessment of the relevance of patent essential), it is not surprising that multili ngual summarization of \npatent claims is very attractive to practitioners in the field. Nonetheless, only little work has been done so far in the area; cf . as an example [1], who proposes a reading \naid based on the segmentation of claims into smaller and simpler sentences. The focus has been on the machine translation – es pecially in the light of the recently \ndramatically increased prominence of patents in languages not widely spoken in the West (e.g., Korean and Chinese). \nAs far as summarization of patent material is concerned, up to date, the \noverwhelming share of it is manual.\n1 One explication for this unsatisfactory state of \naffairs is that the peculiarities of the genre of patent claims require new approaches to summarization: the application of surface level criteria such as term frequency, \nposition etc., term level criteria such as similarity, word co-occurrence, etc. or text or \ndiscourse level criteria such as lexical chains, discourse relation trees, etc. to claims \nin their original form is not appropriate. The linguistic style of patent claims requires a novel summarization strategy that implies prior segmentation, simplification and text structure and discourse analysis. \nWe present an experimental rule-based module for the production of multilingual \nsummaries from English patent claims developed in the framework of the PATExpert \npatent processing service.\n2 The target languages are French, Spanish and German. 
The \nmodule currently undergoes an extensive evaluation and further extension. However, \nalready in it present state it shows promising performance. \nThe remainder of the paper is structured as follows. In the next section, we assess \nthe different ways to address summarization of patent material and briefly outline our \napproach to multilingual summarization. Sec tion 3 presents the strategy in more \ndetail. In Section 4, the evaluation of the performance of both summarization and \nmultilinguality is presented. Section 5, fi nally, summarizes the main points of our \nwork and gives hints to related work. \n \n1 Thomson Derwent is the world leading company in services for semi-manual patent \nabstracting; see http://scientific.thomson.com/derwent/ \n2 PATExpert has been partially funded by th e European Commission under the contract number \nFP6-028116. See [2] for a general presentation of the PATExpert service. \n12th EAMT conference, 22-23 September 2008, Hamburg, Germany\n121\n2 How to Do Multilingual Summarization of Patent Claims? \nThe abstract vocabulary of patent claims a nd their complex linguistic structures make \na deep analysis needed for abstraction very hard, such that linguistically less \nchallenging shallow summarization seems more promising. \nOne option is to exploit the claim tree structure, which defines the dependency \nbetween claims, cutting branches of the tree at depth n in accordance with the length \nof the summary desired by the user. This strategy reflects that claims at depth n are \nmore general (and thus more relevant to the summary) than claims at depth n+1. But \nit does not increase the readability of the summary and is still very difficult to \ntranslate. Therefore, it is mo re appropriate to identify clai m chunks (rather than entire \nclaims) as relevant/irrelevant to the summary. \nSince standard parsing algorithms are not able to cope with a reasonable outcome \nwith sentences of such a length, a prior two-step simplification procedure of the \noriginal is needed: (i) segmentation into simpler chunks and (ii) repair of chunks \nwhich are not grammatical clauses by intr oducing missing constituents or referential \nlinks, or by modifying available constituents. The output of the simplification can \nserve for two extraction based summarization strategi es: (a) discourse structure \noriented summarization; (b) syntactic structure oriented summarization. \nDiscourse structure oriented summarization as proposed by [3] uses the depth of \nthe subtree “controlled” by an element of a discourse relation in the sense of the \nRhetorical Structure Theory [4] – under the assumption that the nucleus of a relation \ncontrols an elementary tree formed by the nucleus and satellite of a relation. See [5] \nfor the application of this strategy to the summarization of patent claims. \nThe syntactic structure based summarization often uses syntactic dependency \ncriteria which indicate the importance of syntactic tree branches, drawing on \ndependency relations [6,7]. To the best of our knowledge, the syntax oriented strategy has not been applied so far to patents. \nIn PATExpert, three different summarizat ion strategies are implemented: (i) a \nstrategy based on the claim structure, (ii) a strategy based on the discourse structure, and (iii) a strategy based on the deep-syntactic (or shallow semantic) structure. Cf. \nFigure 1 for the architecture of the summarization module. \n \n \n \n \n \n \nFig. 1. 
Architecture of the mu ltilingual summarization module \nThe shallow semantic (or deep-syntactic) structure summarization is most suitable for \nmultilingual summarization. It presupposes two preprocessing stages: (a) claim Structuring/ Coref. \nAnalysis Simplification/ \nDiscourse analysis Claim structure \nsummarization Discourse structure summarization \nDependent \nclaim fusion Shallow sem. structure summarization \nMate \ntransfer summarization \ngeneration Full Parsing \n(Minipar) \n12th EAMT conference, 22-23 September 2008, Hamburg, Germany\n122\ndependency structure determination, simplification, and discourse analysis, and (b) \nfull parsing of the simplified claim sentences. For parsing, we use MINIPAR [8]. \nDespite some shortcomings such as syst ematic right-attachment, we chose MINIPAR \nsince it produces syntactic structures wh ich roughly correspond to the Surface \nSyntactic Structures (SSyntSs) of the linguistic framework underlying the linguistic workbench MATE [9] we use for generation: the Meaning-Text Theory, MTT [10]. \nThe summarization and multilingual transfer stages are performed on the Deep-\nSyntactic Structures (DSyntSs) of the MTT, such that prior to these stages, the \nMINIPAR structures are mapped onto SSyntSs and the SSyntSs onto DSyntSs; for details on the preprocessing stages, see [11]. The abstract nature of the DSyntS, which \neliminates the surface-syntactic idiosyncrasies of the linguistic constructions, ensures, on the one hand, quasi-semantic criteria for summarization, and, on the other hand, \nsimplified transfer between the structures of different languages; cf., e.g., [12]. \n3 Multilingual Summarization of Patent Claims \nStarting from the DSyntSs of the simplified claims, the multilingual summarization of \npatents consists of the following steps: (1) summarization of the original claims, (2) \ntransfer of DSyntS of the s ource language to the target language, (3) generation of the \nsummary in the target language. \n3.1 Deep-Syntactic Summarization \nThe summarization criteria are based on speci fic patterns recognized within the input \nDSyntS. These patterns trigger the applic ation of summarization rules from the \nsummarization grammar define d in MATE. Consider some of these patterns and the \neffect of the application of the corresponding summarization rules, namely removing \nof the chunks (in reality, branches of the DSyntS) that appear in brackets: \n \n1. A noun has a postponed attribute: \n(a) The optical component is a shading member [arranged near the optical \naxis around the aperture plane of the optical system ]. \n(b) The recesses are formed in the uppe r face and extend from a land surface \n[adjacent to said cutting edge ]. \n2. A definite noun is modified by a full statement: \n(a) An automatic focusing apparatus comprises the actuator [which controls \nthe focusing means depending upon the output of the phase detector ]. \n3. A noun in a dependent claim is modifi ed by a “has-part” relation (in an \nindependent claim, it can bear important information): \n(a) A unitary ridge is formed on the top face [having side surfaces \nconstituting the first and second side chip deflector surfaces ]. \n4. A noun in a dependent claim is modified by a PURPOSE relation ( for + \nGerund): \n(a) The apparatus comprises a lens [for convert ing the light from the signal \nplane ]. \n12th EAMT conference, 22-23 September 2008, Hamburg, Germany\n123\n5. 
A number appears in a senten ce of a dependent claim: \n(a) The reflective component-containing layer has a film thickness of 0.01µm \nto 0.5 µm.3 \n(b) [The film thickness is 0.01µm to 0.09 µm ]. \n \nOnce the DSyntSs are cleared of redundant information in the summarization stage, \nthey are aggregated in that coordination conjunctions, ellipses, and relative clauses are \nintroduced to produce a more natural, fluent text; for details, see [11]. \n3.2 Multilingual Transfer \nAggregated structures serve as a starting point for the multilingual transfer. The prior \nsimplification guarantees that the source lan guage DSyntSs are rather simple – with \nthe effect that the mismatches between the source and target DS yntSs are minimized \n(for handling of the mismatches at the DSyntS- level of transfer, see [12]). As a result, \nthe transfer becomes to a large extent a lexical transfer . The transfer procedure proper \nis preceded by word disambiguation. \nDisambiguation. The disambiguation of words must be addressed in order to obtain \nthe correct translation from the transfer dictionary (see below). For instance, the \nEnglish OPEN can be translated by two French verbs S’OUVRIR and OUVRIR. \nWhich one is correct depends on the number of semantic actants (one or two) of OPEN. In other words, S’OUVRIR and OUVRIR correspond to two different senses \nof OPEN: OPEN1 and OPEN2. \nAn important criterion for the disambiguation is the subcategorization information \navailable in the dictionary. Several simple rules retrieve from the dictionary the right entry for the verb according to the number of syntactic actants f ound in the DSyntS: \n \n(4) ?X {–I →?Y} | ?X–II→ ?N ?X.voice=passive disambiguation::(?X.dlex).(I).(lex) \n \n ?X {dlex=disambiguation::(?X.dlex).(I).(lex)} \n \nThe above rule states that if a node bound to the variable ?X has a DSynt actancial \nrelation “I” with the node ?Y, but no relation “II”; if it is not in the passive and has an \nentry in the “disambiguation” dictionary, then the name (“dlex”) of ?X is the value of \nthe attribute “lex”, which is the non-atomic value of the attribute “I” in the entry of \n“?X” in the disambiguation dictionary. Applied to OPEN this rule gives us OPEN1: \n \n(5) open {dpos=V \n I = {lex=open_1 \n gp = {I = {dpos=N}}} \n II = {lex=open_2 \n gp = {I = {dpos=N} \n II = {dpos=N}}}} \n \n3 In (5a), the first sentence is an independent claim which has a dependent claim that contains \nthe second sentence. \n12th EAMT conference, 22-23 September 2008, Hamburg, Germany\n124\n(‘gp’ stands for “government pattern”, i.e., valency structure). Monolingual \ndictionaries of this kind are available for each of the source and target languages (in \nour case: English, French, German and Spanish). \nMultilingual Transfer Proper. The entries in the transfer dictionary have the \nfollowing form: \n \n(6) open_1 {V ={ FRE = {trad=ouvrir_1} \n GER = {trad=öffnen_1} \n SPA = {trad=abrir_1}}} \n \nThe transfer itself is simple and straightforward: the nodes of the disambiguated, \nsummarized and aggregated DSyntSs are mapped almost one-to-one to the target \nDSyntSs by getting the translations from th e transfer dictionary. Consider a very \nsimple example sequence DSynt Eng Disambiguated DSynt Eng DSynt Fr: \n \n(7) \n \n \n \n \n \n \nMost transfer rules are language-independent, but some of them preprocess the tree \nfor the language-dependent surface-syntactic structural mismatches. 
For instance, the \nEnglish construction N 1-Ving-N24 as in signal processing circuit is more naturally \nrendered in French or in Spanish via a relative clause pattern N 2-that-V-N 1. For \nDSyntS, this only means adding an attribute to the node of the verb which will trigger \nthe introduction of the relative clause in SSy ntS: relative pronouns are considered as a \npossible surface-syntactic manifestation of the ATTR DSynt-relation. The following \nrule establishes this equivalence: \n \n(8) ?N 2 { –?r→ ?X} | ?X.finiteness=GERUND ?N2.dpos=Prep \n \n ?X {rel_dep=subj finiteness=fin tense=Pres mood =IND} \n \nThe value of the attribute “rel_dep” sta nds for the dependency relation that the \nrelative pronoun has with its verbal governor; it indicates at the same time the presence of the relative pronoun in the SSynt S. This configuration is exemplified in \n(9) for An [energy absorbing ] element opens vs. Un élément [qui absorbe l’énergie ] \ns’ouvre ‘An element which absorb s the energy’. The actual structural difference \nbetween the English and the French sentence s is only surface-syntactic – such that it \nwill appear only in the SSyntS, as shown in the next subsection. \n \n \n4 N2 is the syntactical governor of the group, hence it is the top node in the rule below \n \n12th EAMT conference, 22-23 September 2008, Hamburg, Germany\n125\n(9) \n\n\n\n\n\n \n\n\n\n\n\n\n\nMultilingual Generation. During generation of the target language summary, the \nDSynt-SSynt transition is central. The DSynt-SSynt rules ca ll the monolingual \ndictionaries in order to retrieve language-specific information such as governed \nprepositions, auxiliaries, pronominal status, etc. For instance, FROM is a preposition \nrequested by the third actant of the English verb PREVENT. This preposition does not appear in the DSyntS and has to be generated in the SSyntS. Therefore, the \ncorresponding preposition – if there is one – of the French equivalent EMPÊCHER \nmust appear in the GP of EMPÊCHER in the monolingual dictionary; cf. (10). Similarly, thanks to the monolingual information, we know that OUVRIR1 is a \npronominal verb: \n \n(10)\n empêcher : verb_dt { //eng=keep/prevent \n elision = yes \n gp = {III = {dpos = V \n rel = obl_obj \n Prep = de}}} \n \nIn French, the feature “pronominal” is rea lized by the clitic SE, which introduced by \nthe following rule: \n \n(11) ?V | lexicon::(?V.dlex).pronominal=yes language=FR \n \n ?V {–clitic → ?X} \n \n(11) checks the attribute “pronominal” in th e entry for the verb in the lexicon. If the \nvalue is “yes”, a node “?X” and an edge “clitic” connecting ?X to the verb are \ncreated. The same kind of mechanism operate s, for instance, for the introduction of \nrelative pronouns and determiners. \n(12) shows the SSyntSs that correspond to the DSyntSs in (9); the structural \ndifference between English and French is now visible (the value ?r of the rel_dep \nattribute is “subj”). \n \n\nouvrir_1: verb { //eng:open_1 \n lemma = ouvrir \n elision = yes \n pronominal =yes \n past_aux = être} \n12th EAMT conference, 22-23 September 2008, Hamburg, Germany\n126\n \n(12) \n \n \n \n \n \n \n \n \n \n \n \n \nThe rest of the surface generation, i.e., the linearization and morp hological processing \nof the lexical units is detailed in [11]. 
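To make the lexical-transfer step of Section 3.2 more concrete, the following Python sketch mimics the dictionary-driven disambiguation by number of actants (cf. (4)-(5)) and the near one-to-one node mapping via the transfer dictionary (cf. (6)-(7)). It is an illustrative reconstruction, not the MATE/PATExpert implementation: the Node class, the dictionary layouts, and the example tree are all assumptions made for the example.

# Minimal sketch of DSyntS lexical transfer (Section 3.2); illustrative only.

# Monolingual disambiguation dictionary: maps a lemma to its senses,
# keyed here by the number of DSynt actants (cf. example (5) for OPEN).
DISAMBIGUATION = {
    "open": {1: "open_1", 2: "open_2"},
}

# Transfer dictionary: maps a disambiguated sense to its equivalents (cf. (6)).
TRANSFER = {
    "open_1": {"FRE": "ouvrir_1", "GER": "öffnen_1", "SPA": "abrir_1"},
}

class Node:
    """A DSyntS node: a lexeme plus its actants (I, II, ...) as child nodes."""
    def __init__(self, dlex, actants=None):
        self.dlex = dlex
        self.actants = actants or {}   # e.g. {"I": Node(...), "II": Node(...)}

def disambiguate(node):
    """Pick the sense whose government pattern matches the number of actants."""
    senses = DISAMBIGUATION.get(node.dlex)
    if senses:
        node.dlex = senses.get(len(node.actants), node.dlex)
    for child in node.actants.values():
        disambiguate(child)

def transfer(node, target_lang):
    """Map the nodes of the source DSyntS onto the target DSyntS one by one."""
    entry = TRANSFER.get(node.dlex, {})
    target_actants = {rel: transfer(child, target_lang)
                      for rel, child in node.actants.items()}
    return Node(entry.get(target_lang, node.dlex), target_actants)

# Example: OPEN with a single actant I is resolved to open_1 and transferred.
lid_opens = Node("open", {"I": Node("lid")})
disambiguate(lid_opens)            # open -> open_1 (one actant)
french = transfer(lid_opens, "FRE")
print(french.dlex)                 # ouvrir_1

The actual MATE rules additionally check features such as voice (see (4)) and insert language-specific material (prepositions, clitics, relative pronouns) during the DSynt-SSynt transition, which this simplified lookup omits.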
\n4 Evaluation of the Multilingual Summarization of Patent Claims \nGiven that no reliable unique evaluation metrics exists as yet for multilingual \nsummarization, we performed a preliminary evaluation of our strategy of multilingual \nsummarization from the perspective of the quality of the summary and from the perspective of the quality of the multilinguality. \nThe evaluation of the quality of our summary has been performed using ROUGE \n[13]. As baseline, we used the MS Word automatic summarizer (MSAS), with the summarization parameter set to 50%. \nOut of a list of 50 patents that underwent simplification, 30 were randomly selected \nand summarized with our summarization mo dule and MSAS. The summaries used as \nreference have been done by a patent specialist. Our summarization obtained an overall f-score of 61% over quadrigrams and trigrams, while MSAS reached 43%. \nThat we did not surpass 61% can be partially explained by the object/method \ndichotomy in some patent claims, which we cannot identify reliably in an automatic \nway. If a patent claim section contains claims referring to both the invented object and \nthe method of applying this object, both kinds of claims tend to contain largely the \nsame information. Human created referenc e summaries avoid the repetition of this \ninformation, while our module is currently not able to differentiate an object-related claim from a method-related one. Furthermore, it is worth noting that the evaluation that has been carried out so far does not take into account the quality of the text, for \nwhich a qualitative evaluation would be necessary. \nFor the evaluation of the quality of th e multilinguality, we chose human evaluation \nin order to balance the purely statistical metrics of the ROUGE evaluation and to obtain some objective opinions from native speakers and experts. For this purpose, six \nnative speakers were asked to rate twelve different claim descriptions in their native \n12th EAMT conference, 22-23 September 2008, Hamburg, Germany\n127\ntongue produced by PATExpert with summarization switched off (such that only \nsimplification, transfer and regeneration we re effective) against the online-Google \ntranslation of the origin al claims as baseline.5 Given that the recall of our multilingual \ngenerator is still very much hampered by the shortage of multilingual resources, we \nconsider this evaluation a general indication of the potential of “deep” translation \ntechniques when combined with the preprocessing of the claims. \nThe evaluation was based on a questionnaire which has been largely inspired by \n[14]. It consists of three categories: “intelligibility”, “simplicity” and “accuracy”. The first two deal with the quality of the transfe rred text; both have a five value scale. The \nthird category, which has a seven value scale, captures how the content from the \nEnglish input is conveyed in the transferred text. Due to th e lack of space, we do not \ncite here the questionnaire itself. Table 1 shows the accuracy regarding each of the \nthree quality categories for PATExpert and the baseline. \nTable 1. Accuracy of the PATExpert Multil ingual Summarizer against a baseline \n Google Translator (baseline) PATExpert Multilingual Summarizer \nIntelligibility 0,49 0,58 \nSimplicity 0,49 0,74 \nAccuracy 0,47 0,51 \n \nAs expected, the complexity of our multilingual summarization module is much \nlower, hence the intelligibility is about 9% higher. 
But surprisingly, there is no \nsignificant difference regard ing the accuracy of the two translations, which might \nshow that no meaningful information is lo st during the simplification stage compared \nto a non-simplified output. \n5 Summary \nFrom the practitioners’ side, there is a high demand for multilingual summarization of patent claims. However, traditional appr oaches to summarization do not show the \nrequired performance due to the particular linguistic styl e and abstract vocabulary of \nthe claims. In this paper, we proposed a strategy that makes use of a number of preprocessing stages for a prior linguistic simplification of the material and that \nintegrates de facto the summarization into generation. This allows us, on the one \nhand, to perform the summarization at a rath er abstract level and thus to use “deep” \nsummarization criteria, and, on the other hand, to reduce the transfer to a large extent \nto lexical transfer. The resu lts are encouraging. Still, the three central components \ninvolved in the process: summarization, transfer and generation, are continuously \nbeing extended and improved, such that in the full paper, we will be able to present \nevaluation figures that are likely to be cons iderably superior to those presented above. \nThere are some related works. The most similar ones are the MUSI-summarizer by \n[15] and the summarizer within VERBMOBIL described in [16]. As our strategy, \n \n \n5 Since our goal was to evaluate the multilingual output of our system with the original claims \nas input, we consider it correct to run th e Google translator on the original claims. \n12th EAMT conference, 22-23 September 2008, Hamburg, Germany\n128\nMUSI implies a deep analysis stage and a regeneration stage. However, MUSI’s \nsummarization strategy consists in sentence extraction using surface-oriented criteria \n(cue phases and positions of sentences). The analysis is applied to the extracted \nsentences and the resulting syntactic structures are mapped onto conceptual \nrepresentations from which then the (possibly multilingual) summary is generated. \n[16] describes multilingual summary genera tion in a speech-to-speech system. The \ndifference between their system and ours ag ain mainly consists in the summarization \nstrategy, with ours being considerably deeper. \n \nAcknowledgments. Many thanks to the members of the TALN group at UPF as well \nas to all colleagues of the PATExpert Consortium for their valuable support. \nReferences \n1. Sheremetyeva, S. Natural language analysis of patent clai ms. In: ACL Workshop on Patent \nCorpus Processing, pp. 66 – 73. Sapporo (2003) \n2. Wanner, L. et al. Towards Content-Oriented Patent Processing. Worl d Patent Information. \n30, 21–33 (2008) \n3. Marcu, D. From discourse structures to text summarie s. In: ACL/EACL Workshop on \nIntelligent Scalable Text Summarization, pp. 82–88. Madrid (1997) \n4. Mann, W.C., Thompson, S.A. Rhetorical Structure Theory: Toward a Functional Theory of \nText Organization. Text, 8, 243–281 (1988) \n5. Shinmori, A., Okumura, M., Marukawa, Y., Iw ayama, M. Patent Processing for Readability. \nStructure analysis and Term Explanation. In: ACL Workshop on Patent Corpus Processing, \npp. 56–65. Sapporo (2003) \n6. Farzindar, A., Lapalme, G., Desclés, J-P. Résumé de textes juridiques par identification de \nleur structure thématique. Traitement automatique des langues. 45, 39–64 (2004) \n7. da Cunha, I., Wanner, L., Cabré, T. 
Summarization of Special Discourse: The Case of medical articles in Spanish. Terminology. 13, 249–286 (2007) \n8. Lin, D. Dependency-based Evaluation of MINIPAR. In: Workshop on the Evaluation of Parsing Systems, pp. 234–241. Granada (1998) \n9. Bohnet, B., Langjahr, A., Wanner, L. A development environment for an MTT-based sentence generator. In: INLG 2000, pp. 260–263. Mitzpe Ramon (2000) \n10. Mel'čuk, I. Dependency Syntax. SUNY Press, Albany (1988) \n11. Mille, S., Wanner, L. Making Text Resources Accessible to the Reader: The Case of Patent Claims. In: LREC'08. Marrakech (2008) \n12. Mel'čuk, I., Wanner, L. Syntactic Mismatches in Machine Translation. Machine Translation. 20, 81–138 (2006) \n13. Lin, C. ROUGE: A Package for Automatic Evaluation of Summaries. In: ACL Workshop Text Summarization Branches Out, pp. 25–26. Barcelona (2004) \n14. Nagao, M., Tsujii, J., Nakamura, J. The Japanese Government Project for Machine Translation. Computational Linguistics. 11, 91–109 (1985) \n15. Lenci, A. et al. Multilingual summarization by integrating linguistic resources in the MLIS-MUSI project. In: LREC'02, pp. 1464–1471. Las Palmas (2002) \n16. Alexandersson, J., Poller, P., Kipp, M., Engel, R. Multilingual Summary Generation in a Speech-To-Speech Translation System for Multilingual Dialogues. In: INLG-2000, pp. 148–155. Mitzpe Ramon (2000)",
"main_paper_content": null
}
|
{
"decision": "Unknown",
"reviews": []
}
| 0 | 0 |
[] |
[] |
{
"id": "e5pg9_Ydp7o",
"year": null,
"venue": "EAMT 2008",
"pdf_link": "https://aclanthology.org/2008.eamt-1.20.pdf",
"forum_link": "https://openreview.net/forum?id=e5pg9_Ydp7o",
"arxiv_id": null,
"doi": null
}
|
{
"title": "Two-step flow in bilingual lexicon extraction from unrelated corpora",
"authors": [
"Rogelio Nazar",
"Leo Wanner",
"Jorge Vivaldi"
],
"abstract": "Rogelio Nazar, Leo Wanner, Jorge Vivaldi. Proceedings of the 12th Annual conference of the European Association for Machine Translation. 2008.",
"keywords": [],
"raw_extracted_content": "Two-Step Flow in Bilingual Lexicon Extraction from \nUnrelated Corpora\nRogelio Nazar1, Leo Wanner2 and Jorge Vivaldi1\n1 Institut Universitari de Lingüística Aplicada, Pompeu Fabra University\nPl. de la Mercè 10-12. Barcelona.\n2 Fundació Barcelona Media, Pompeu Fabra University\nOcata 1. Barcelona.\n{rogelio.nazar; leo.wanner; jorge.vivaldi}upf.edu\nAbstract. This paper presents a language independent methodology for \nautomatically extracting bilingual lexicon entries from the web without the need \nof resources like parallel or comparable corpora, POS tagging, nor an initial \nbilingual lexicon. It is suitable for specialized domains where bilingual lexicon \nentries are scarce. The input for the process is a corpus in the source language \nto use as example of real usage of the units we need to translate. It is a two-step \nflow process because first we extract single-word units from the source \nlanguage and then the multi-word units where the initial single units are \ninstantiated. For each of the multi-word units, we see if they appear in texts \nfrom the web in the target language. The unit of the target language that appears \nmore frequently across the sets of multi-word units is usually the correct \ntranslation of the initial single-word source language entry. \nKeywords: Bilingual Lexicon Extraction, Specialized Terminology, Machine \nTranslation, Corpus Linguistics, Knowledge-poor methods, statistical methods.\n1 Introduction\nStrategies that involve the use of parallel corpora were among the first attempts to \nextract bilingual lexicons, using measures of statistical association to study the co-\noccurrence of pairs of entries in the aligned sentences ([1]; [2]; among others). This \nmethodology has yielded accurate results. However, the shortcoming is that parallel \ncorpora are not easy to compile, particularly in the case of specialized domains. \nThere have also been a number of attempts to extract bilingual lexicons without the \nneed of parallel corpora, but using bilingual dictionaries as seed words. In this line of \nresearch there are two main trends. The first one is represented by authors such as [3]; \n[4]; [5]; [6] and [7]. Briefly, most of these approaches involve a similarity metric \nbetween a word in the source language and a candidate for translation in the target \nlanguage. The rationale behind this strategy is that both the source language word and \nits equivalent are supposed to share the same profile of co-occurrence, in the same \n12th EAMT conference, 22-23 September 2008, Hamburg, Germany\n140\nmanner that synonyms do ([8]; [9]; [10]). The process is then to study the units that \nco-occur significantly with the input word, and then try to translate as many as \npossible co-occurrents with the help of the initial bilingual lexicon. Once this \ninformation is gathered, the next step is to select as a candidate for translation the unit \nof the target language that co-occurs more often (in some corpus) with the greatest \nnumber of the translations obtained with the bilingual lexicon. \nThe other trend in the literature ([11]; [12]) is more related to the present proposal. \nIn order to obtain documents where equivalents co-occur, [11] exploited search \nengines using pairs of equivalent terms in Japanese and English obtained from a \nbilingual dictionary as queries. This yields bilingual glossaries as well as other \npartially parallel texts among the downloaded collection. 
In the case of [12], they \nmine English-to-Chinese bilingual translations and transliterations from monolingual \nChinese Web pages. Their idea is to extract equivalent pairs searching for a pattern of \nan English expression enclosed by parenthesis in a Chinese document. All possible \nsequences of words before the English words are considered possible translation \ncandidates. In this way they collect a large number of translation candidates keeping \nonly the most probable ones. The ranking of the equivalent pairs depends on different \nfeatures but mainly on a machine learning method trained with an initial bilingual \nlexicon. With this they build a character bigram language model which yields a \ntransliteration probability from English to Chinese. \nIn this paper we present a different approach that has encouraging results even \nwhen we do not use any of the resources that other authors need, such as comparable \ncorpora, lemmatization, POS tagging or initial bilingual lexicons. The only input \nneeded is Internet access and a corpus of the studied domain in the source language \nwhere the words we need to translate occur, with an extension of at least 40,000 \ntokens. Hereafter, this corpus will be called DSCSL, which stands for Domain \nSpecific Corpus in the Source Language. The purpose of this knowledge-poor \napproach is to determine to what extent we can have a quality result with the \nminimum resources and the maximum amount of generalization possible. Not using \nthis type of resources means that our conclusions can be extrapolated to other \nlanguages and domains. In future work we will explore hybrid methods, that is, also \ntaking into account knowledge of the domain and the language, although at the \nmoment we are casting the problem of bilingual terminology acquisition purely as a \nmathematical problem, in the line of previous work ([13]). \nThe rest of the paper is organized as follows: the next section gives a basic outline \nof the algorithm; section 3 explains some support actions that improve accuracy and \nsection 4 shows some evaluation figures for the results. In section 5 we discuss \nconclusions and in section 6 a few promising lines of future work.\n2 Basic Algorithm\nEnglish is a widely used language in scientific and technical domains, therefore it is \nnot surprising to find terms or fragments of text in English in specialized literature \neven when it is written in other languages. Abstracts and keywords in English and in \nthe language of the document are commonly included in scientific papers; titles in the \n12th EAMT conference, 22-23 September 2008, Hamburg, Germany\n141\nbibliographical references may include terms in English relevant to the topic of the \ndocument and even authors often include the English version of the terminology they \nintroduce in their native languages. As a consequence, many specialized terms that \nare currently found on the web are statistically associated to their equivalents in \ndifferent languages. Thus, we can obtain equivalent terms in different languages using \nthe DSCSL as input and Internet access. The process is as follows:\n1. Take as initial units single words from the DSCSL. This is a set V .\n2. Extract multi-word units from the DSCSL where the single words appear. Then \nfor each element Vi we have another set Vi = {V i,1, Vi,2, Vi,3 ... Vi,n }, where every \nVi,j is a unit that has Vi as a component (including its only component). 
For \ninstance, if V i is light, then V i,1 is the same element, light, Vi,2 is incident light ; \nVi,3 is transmitted light ; Vi,4 is light beams , and so on.\n3. For each multi-word download n documents in the target language and sort their \nvocabulary by decreasing frequency order, as shown in tables 1, 2 and 3.\n4. For each source language single word, see which is the single word that has \noccurred more times in the multi-word alignments. If Vi is light and the target \nlanguage is Spanish, then the most recurrent element is luz .\n5. Return to the multi-word alignments and select the candidate that shares an \nassociated pair of words. For instance, in table 1, link incident light to luz \nincidente because they share the associated pair luz-light.\nTable 1. Expressions appearing with incident light in Spanish documents. \nRank Term Frequency\n1) sekonic 76\n2) luz incidente 44\n3) incident light 22\nTable 2. Expressions appearing with transmitted light in Spanish documents. \nRank Term Frequency\n1) microscope 158\n2) illumination 59\n3) scales 47\n4) transmitted light 39\n5) luz transmitida 32\nTable 3. Expressions appearing with light beams in Spanish documents. \nRank Term Frequency\n1) photoshop 231\n2) optics 70\n3) photoshop comentarios 58\n4) light beams 41\n... ... ...\n13) haces de luz 5\n12th EAMT conference, 22-23 September 2008, Hamburg, Germany\n142\n3 Some Support Strategies\nThere are several possibilities to improve the performance of the algorithm explained \nabove that do not need external knowledge sources like bilingual dictionaries. In this \nsection we explain those that are already implemented at the time of writing. Other \nstrategies, that also seem interesting but have not yet been tested, are included in \nsection 6, future work.\n3.1 Including a Reference Corpus of General Language\nWe use reference corpora of general language of the source and the target language as \na model of the expected frequency of a word in a text. Such reference corpora can be \nautomatically acquired from the web using random queries of high or mid-frequency \nwords. Two million tokens of text include approximately 20,000 types with a \nfrequency greater than 5. We can use this model to eliminate false candidates that are \nfrequent but non informative, like would or has been, because they are very frequent \nin the source language reference corpus. In addition, we can expect that both the \nsource unit and the correct equivalent in the target language have a similar frequency \nin their respective reference corpora. Using the model of the language we can infer \nthat este trabajo is not a good candidate for the translation of alkyl group, because \neste trabajo is more frequent in the Spanish reference corpus than alkyl group. In \ncontrast, the correct candidate, which is grupo alquilo, has the same zero frequency in \nthe reference corpus as alkyl group.\n3.2 Using a Measure of Dispersion\nWe do not expect the correct translation to be only the most frequent among the \ndownloaded collection, but also the most dispersed. If the downloaded collection has \nmore than five documents, we can safely remove all units that appear in only one or \ntwo documents, and then the vocabulary size and computational cost will be \nsignificantly reduced. A simple measure of dispersion for candidates can be tf.df \nbeing tf the term frequency in the collection and df the number of documents where it \noccurs. 
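To make the two-step selection concrete, the sketch below reproduces steps 3-5 of Section 2 together with the tf.df dispersion score of Section 3.2 on toy data mirroring Tables 1-3. It is an illustration under our own data structures (a Counter of target-language vocabulary per multi-word unit), not the authors' code; all function and variable names are assumptions.

# Illustrative sketch of the two-step candidate selection and tf.df dispersion.
from collections import Counter

def tf_df(candidate, documents):
    """Dispersion score of Section 3.2: collection frequency times document frequency."""
    tf = sum(doc.count(candidate) for doc in documents)
    df = sum(1 for doc in documents if candidate in doc)
    return tf * df

def best_single_word_equivalent(multiword_vocab):
    """multiword_vocab maps each multi-word unit V_i,j (e.g. 'incident light')
    to a Counter of target-language words found in the downloaded documents.
    The word that recurs across the most sets (e.g. 'luz') is selected."""
    presence = Counter()
    for counts in multiword_vocab.values():
        for word in counts:
            presence[word] += 1
    return presence.most_common(1)[0][0] if presence else None

# Toy data for the English head-word "light" (cf. Tables 1-3).
vocab = {
    "incident light":    Counter({"sekonic": 76, "luz": 44, "incidente": 44}),
    "transmitted light": Counter({"microscope": 158, "luz": 32, "transmitida": 32}),
    "light beams":       Counter({"photoshop": 231, "luz": 5, "haces": 5}),
}
print(best_single_word_equivalent(vocab))   # luz

docs = ["la luz incidente se mide", "la luz incidente y reflejada", "otro documento"]
print(tf_df("luz incidente", docs))         # 2 occurrences * 2 documents = 4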
\n3.3 Using a Similarity Measure\nIt is usually observed, specially in scientific or technical domains, that term \nequivalents in different languages are cognates. Thus we can use some similarity \nmeasure to detect morphological resemblance between the candidates and the unit we \nare trying to translate. The unit reflected energy in our corpus generates a set of \nequivalent candidates. Among these we find the Spanish term energía reflejada. It is \npossible to automatically detect the relation between those two using a vector \nsimilarity measure. First we transform both units to vectors X and Y that have \nsequences of two characters as components. We compute a Dice similarity \n12th EAMT conference, 22-23 September 2008, Hamburg, Germany\n143\ncoefficient, which is defined as (1). The correct translation is not always the cognate, \nhowever the result of this similarity measure will add some points to the final score of \na candidate. \n .(1)\n3.4 Length Ratio\nEquivalent terms should have a relatively similar length. Therefore, assuming that \nlr(t) is the length in characters of term t, we can define a ratio ln( i,j) for a term i and a \ntranslation candidate j as (2). Since we are not interested in an exact match in length \n(equivalents rarely have exactly the same length) we will take into account this \nvariable only if it is less than a threshold of .7. Otherwise, it has a value of 1.\n.(2)\n3.5 Statistical Noise Reduction\nAnother problem that we can observe is the presence of a repetitive noise that is \ndomain specific. Candidates such as Buenos Aires , Facultad de Ciencias , \nUniversidad Nacional , Departamento de Ciencias , etc., appear frequently as \ncandidates for translation. We can reduce this noise statistically using a distributional \ncriterion. These units have an exaggerated dispersion among the sets, thus we can \nreduce their weight accordingly to make up for their high frequency. \n.(3)\nIn (3), if w(t) is the weight that t had as a translation candidate for some term, d(t) \nwould be the number of times t has been proposed as a candidate. With a threshold h, \nthe size of the initial single-word sample over 7, we reduce the effect of d(t) as in (4). \n.(4)\n12th EAMT conference, 22-23 September 2008, Hamburg, Germany\n144\n3.6 Analyzing the Language of the Context\nA simple strategy is to study the contexts of occurrence of a translation candidate. If \nthe contexts are in the target language, then the probability of being a correct \ntranslation is higher, except if it is enclosed by parenthesis or marked with some \ntypographical convention such as italics, which may indicate that the sign is \nextraneous to the language of the text. We can confidently eliminate a candidate if the \nratio of Spanish words vs. English words in the contexts where the candidate occurs is \nbelow a certain threshold. If s(t) is the total number of Spanish words found in the \ncontexts of term t and e(t) the total number of English words found in the same \ncontext, then we can define the condition (5).\n \n.(5)\n3.7 Final Weighting of Candidates\nAs stated earlier, the final weighting technique is divided into two-steps. The first step \nis to find, for each single word unit, the equivalent candidates for the multi-word units \nwhere the single unit appears. 
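Two of the support scores above lend themselves to a direct sketch: the character-bigram Dice coefficient of Section 3.3 and the length ratio of Section 3.4. The snippet below is illustrative only; since formulas (1) and (2) are not legible in this extraction, the standard set-based Dice definition and a min/max length ratio are assumed, and all function names are ours.

# Sketch of the character-bigram Dice similarity (Section 3.3) and the
# length ratio of Section 3.4; thresholds follow the text.

def bigrams(term):
    s = term.lower()
    return {s[i:i + 2] for i in range(len(s) - 1)}

def dice(term_i, term_j):
    """2|X intersect Y| / (|X| + |Y|) over character-bigram sets (assumed form of (1))."""
    x, y = bigrams(term_i), bigrams(term_j)
    if not x or not y:
        return 0.0
    return 2 * len(x & y) / (len(x) + len(y))

def length_ratio(term_i, term_j):
    """min/max ratio of character lengths; only taken into account below 0.7, else 1."""
    ratio = min(len(term_i), len(term_j)) / max(len(term_i), len(term_j))
    return ratio if ratio < 0.7 else 1.0

print(round(dice("reflected energy", "energía reflejada"), 2))  # ~0.52: cognates score high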
We have defined a collection of measures of weighting, \nsuch as the frequency of the term in the collection, fr( t), that we will express as (6).\n.(6)\nThe frequency of a word in a reference target language corpus, Sfr( t), is also \nexpressed in log scale, as Efr( t), the frequency of that term in the English reference \ncorpus. Naturally, we will prefer as a Spanish translation a unit that has a greater \nfrequency on the Spanish corpus than in the English one. We can define a binary \nvalue m(t) with value of 1 if this is the case. The frequency on the reference corpus of \na term i and its translation candidate j, as explained in section 3.1, also gives us pr( i, \nj), defined as (7).\n.(7)\nOther variables we defined are df( t), in section 3.2; sim( t), the similarity metric of \nsection 3.3. and two more binary variables, y(t) and n(t). The first one has value 1 if \nthe term t has as an internal component (not at the beginning nor at the end) a very \nfrequent Spanish word, such as “de” in medio de almacenamiento, while n(t) will \npunish with value 1 a candidate with a very frequent English word inside. For every \n12th EAMT conference, 22-23 September 2008, Hamburg, Germany\n145\nVi,j, as a multi-word instance of English word Vi , the first weighting of a candidate \nVi,j,k is then defined as (8). \n.(8)\nOnce the score for each alignment V i,j,k is defined, we will calculate the best \ncandidate for V i. Some of the variables are the same, only changing the parameters: \npr(i,k) or sim(i,k), remembering that V i is the initial single word in the target language \nand k the term of the weighted alignment Vi,j,k. Weights are normalized before the final \nscore is defined, ranging then from 0 to 1. The final weight of the alignment of \nelements V i and k is shown in (9). The only variable that is new here is st( Vi, k), which \nis the number of subsets of Vi (Vij) where k is present. In the example given in section \n2, k would be how recurrent is the element luz among the translation sets of the \nphrases with the element light.\n.(9)\nCertainty over multi-word alignment can be recalculated now on the basis of the \nresults of (9). If two single-word units are highly associated, the multi-word units \nwhere they appear will be associated too, as explained at the end of section 2. Thus, if \nw(Vi, k) is above a certain threshold, for instance, if dispositivo and device are \nstrongly associated, then, from all the candidates available for safety device the \nalgorithm will select dispositivo de seguridad because it contains a member of an \nassociated pair. If two multi-word equivalent candidates i and j share an associated \npair, then the final certainty score sc( i,j) is defined as (10). The new element here is \nsw(i,j) that is the number of associated pairs in common. \n.(10)\n4 Evaluation\nAs a preliminary and small scale evaluation, we show an experiment translating from \nEnglish to Spanish1, 76 randomly selected single-words from the DSCSL. For each \n1A complete evaluation for the claim of language independence would be to translate, for \nexample, from French to Spanish using English as intermediate step. 
It is not yet clear then if \nproblems would be magnified by the iteration or if, on the contrary, the triangle could be \nused as an extra source of information and greater certainty.\n12th EAMT conference, 22-23 September 2008, Hamburg, Germany\n146\nsingle-word, the computer program written for this evaluation selected up to 10 \ndifferent multi-word units (of a maximum extension of five words). For each multi-\nword unit, the program attempted to download up to 100 documents in Spanish where \nthe unit occurs. After downloading thousands of documents from the web and \nselecting the most frequent, informative, dispersed and similar units (as explained in \nsections 2 and 3), the system outputs tables of candidates ordered by their final score \nas equivalents. Table 4 shows the results we obtained.\nTable 4. Accuracy by position in the rank of candidates on a random sample of 76 English \nhead-words.\n1# 2# 3# 5# 10# 15#\n39% 44% 50% 51% 57% 59%\nTable 5 shows some of the alignments. It must be borne in mind that these \nequivalences are valid only in the domain we are studying. For instance, an \nequivalence between emitting and emisor may seem strange because it is a gerund and \ntherefore it should be translated as emitiendo . But in this context it is correct because \nwe have terms like light emitting diode that are translated as diodo emisor de luz .\nTable 5. A few examples of the obtained single-word alignments.\nEnglish Term Spanish Equivalent\nacetate acetato\napparatus aparatos\nbeam rayo / haz\nclock reloj\ncuring curado\ncutting corte\ndeflection deflexión\ndevice dispositivo\nIn respect to the alignment of multi-word units, we need to minimize the \ncompounding effect of errors made during the single-word alignment running again in \nthe multi-word alignment step. Therefore, we rank the multi-word pairs by the degree \nof certainty explained in subsection 3.7-(10). In this case, the program has to select \nonly one translation for each multi-word unit. The trial is considered a success if the \naligned pair is correct, such as carbon material and materiales de carbono or position \nsensor and sensor de posición . All other cases, including partial matches such as \noptical disc drive and disco óptico were considered failures. From a sample of 150 \nmulti-word alignments, only a small subset has a minimum degree of certainty. \nHowever, certainty and precision are linked, as shown in Figure 1. The vertical axis \nindicates the cumulative precision while the results are ranked in the horizontal axis \naccording to certainty. We can see that, among the first 40 positions on the ranking, \nprecision is above 50%. From that point, the curve rapidly decreases and gets steady \nfrom position 100 at around 25% precision. These figures are consistent with the \nsmall proportion of terms in a random sample of n-grams.\n12th EAMT conference, 22-23 September 2008, Hamburg, Germany\n147\nFig. 1. Precision vs. Certainty in multi-word alignment using only the top candidate. Most of \nthe correct trials are in the first positions of the ranking. \n6 Conclusions\nWe have presented a method for the extraction of a bilingual lexicon requires \npractically no external resources except a corpus with the units to translate and \nInternet access. It is an interesting methodology from an engineering, terminographic \nor lexicographic point of view. 
However, it is also an attractive subject of research \nfrom a purely theoretical perspective, since it states a fact about macroscopic and \nstructural regularities of language that are visible only now, when the massive amount \nof data from the web offer us the possibility to extract valid conclusions out of a great \nnumber of apparently chaotic individual behaviors, the decisions made by each author \nin every language and discipline. From this unorganized social behavior, a remarkable \nregularity emerges, that is the statistical association of equivalent terms in different \nlanguages. \n7 Future Work\nWe are extending this work in different directions. Most new ideas will be included as \nsupport and refinement strategies, like those described in section 2. The most \nimportant pending work now is replication of this experiment with bigger data sets \nand with different domains and languages. The second most important thing will be to \n12th EAMT conference, 22-23 September 2008, Hamburg, Germany\n148\nuse a terminology extraction system for the selection of the units to translate. This \nwould have undoubtedly yielded better results than with the simple random sampling \nwe used, and its replacement will not affect the general architecture of this system. \nAnother line is to try a hybrid method. Using different degrees of knowledge of the \nlanguage and/or the domain in question may improve the quality of the results. There \nis yet another strategy that is conceptually simple but computationally costly. One of \nthe possible ways to eliminate false candidates would be to iterate the process in the \nopposite direction. That means, repeating the process with each of the equivalent \ncandidates this time as input to find their translation in what was originally the source \nlanguage. The correct translation will have the original term among the equivalent \ncandidates in the original source language.\nAcknowledgments\nThis paper was possible thanks to fundings from the project RICOTERM3 lead by Dr. \nMercè Lorente, which is in turn funded by the Ministry of Education and Science of \nthe Government of Spain (HUM2007-65966-C02-01/FILO). We would like to thank \nthe anonymous reviewers for their comments and to Edmund Maklouf for \nproofreading.\nReferences\n1. Brown, P.F., Cocke, J., Della Pietra, S.A., Della Pietra, V.J., Jelinek, F., Lafferty, J.D., \nMercer, R.L., Roosin, P.: A Statistical Approach to Machine Translation. Computational \nLinguistics, 16, pp. 79--85 (1990).\n2. Gale, W., Church, K.: Identifying word correspondences in parallel texts. Proceedings of the \nDARPA SNL Workshop (1991).\n3. Fung. P.: Compiling Bilingual Lexicon Entries from a Non-Parallel English-Chinese Corpus. \nProceedings of the Third Workshop on Very Large Corpora, pp. 173--183 (1995).\n4. Fung, P.: A Statistical View on Bilingual Lexicon Extraction: From Parallel Corpora to Non-\nParallel Corpora. Proceedings of the AMTA Conference, pp. 1--16 (1998).\n5. Fung, P., McKeown, K.: Finding Terminology Translations From Non-Parallel Corpora. The \n5th Annual WVLC, pp. 192--202, Hong Kong (1997).\n6. Rapp, R.: Automatic Identification of Word Translations from Unrelated English and \nGerman Corpora. Proceedings of 37th ACL Annual Meeting, pp. 5190--526 (1999).\n7. Tanaka, T., Matsuo, Y.: Extraction of Translation Equivalents from Non-Parallel Corpora. \nProceedings of the 8th TMI Conference, pp. 109--119 (1999).\n8. Harris, Z.: Distributional Structure. In: Katz, J.J. 
The Philosophy of Linguistics, pp. 26--47. Oxford University Press, New York (1954/1985).\n9. Grefenstette, G.: Explorations in Automatic Thesaurus Discovery. Kluwer Academic Publishers, Norwell, MA (1994).\n10. Schütze, H., Pedersen, J.: A Co-occurrence-based Thesaurus and Two Applications to Information Retrieval. Information Processing and Management. 33(3), pp. 307--318 (1997).\n11. Nagata, M., Saito, T., Suzuki, K.: Using the Web as a Bilingual Dictionary. Proceedings of ACL DD-MT Workshop (2001).\n12. Cao, G., Gao, J., Nie, J.-Y.: A System to Mine Large-Scale Bilingual Dictionaries from Monolingual Web Pages. Proceedings of MT Summit XI (2007).\n13. Nazar, R.: Bilingual Terminology Acquisition from Unrelated Corpora. Proceedings of the XIII EURALEX Congress, Barcelona (2008).",

"main_paper_content": null
}
|
{
"decision": "Unknown",
"reviews": []
}
| 0 | 0 |
[] |
[] |
{
"id": "JCepRd4SpGg",
"year": null,
"venue": "EAMT 2020",
"pdf_link": "https://aclanthology.org/2020.eamt-1.17.pdf",
"forum_link": "https://openreview.net/forum?id=JCepRd4SpGg",
"arxiv_id": null,
"doi": null
}
|
{
"title": "Fine-Grained Error Analysis on English-to-Japanese Machine Translation in the Medical Domain",
"authors": [
"Takeshi Hayakawa",
"Yuki Arase"
],
"abstract": "Takeshi Hayakawa, Yuki Arase. Proceedings of the 22nd Annual Conference of the European Association for Machine Translation. 2020.",
"keywords": [],
"raw_extracted_content": "Fine-Grained Error Analysis on English-to-Japanese Machine T ranslation\nin the Medical Domain\nT akeshi Hayakawa\nGraduate School of Information\nScience and T echnology ,\nOsaka University , Osaka, Japan\nASCA Corporation, Osaka, Japan\[email protected] uki Arase\nGraduate School of Information\nScience and T echnology ,\nOsaka Un iversity , Osaka, Japan\[email protected]\nAbstract\nW e performed a detailed error analy-\nsis in domain-specific neural machine\ntranslation (NMT) for the English\nand Japanese language pair with fine-\ngrained manual annotation. Despite\nits importance for advancing NMT\ntechnologies, research on the perfor-\nmance of domain-specific NMT and\nnon-European languages has been lim-\nited. In this study , we designed\nan error typology based on the er-\nror types that were typically gener-\nated by NMT systems and might cause\nsignificant impact in technical transla-\ntions: “Addition,” “Omission,” “Mis-\ntranslation,” “Grammar,” and “T ermi-\nnology . ” The error annotation was tar-\ngeted to the medical domain and was\nperformed by experienced professional\ntranslators specialized in medicine un-\nder careful quality control. The\nannotation detected 4,912 errors on\n2,480 sentences, and the frequency and\ndistribution of errors were analyzed.\nW e found that the major errors in\nNMT were “Mistranslation” and “T er-\nminology” rather than “Addition” and\n“Omission,” which have been reported\nas typical problems of NMT. Interest-\ningly , more errors occurred in docu-\nments for professionals compared with\nthose for the general public. The results\nof our annotation work will be pub-\nlished as a parallel corpus with error la-\n© 2020 The authors. This article is licensed under a\nCreative Commons 3.0 licence, no derivative works,\nattribution, CC-BY-ND.bels, which are expected to contribute\nto developing better NMT models, au-\ntomatic evaluation metrics, and quality\nestimation models.\n1 Introduction\nW e performed a manual annotation of trans-\nlation errors using fine-grained error typol-\nogy in domain-specific neural machine transla-\ntion (NMT) of Japanese and English language\npairs. Although several approaches have been\nproposed to evaluate the performance of NMT,\nit has been commonly presented as scores of\nautomatic evaluation, and detailed analysis of\nproblems in NMT is limited. Previous stud-\nies ( Specia et al., 2017 ;Kepler et al., 2019 ) an-\nnotated errors in MT outputs; however, they\ntargeted only on a general domain and Euro-\npean languages. Detailed error detection is es-\nsential, especially in the domain-specific set-\ntings, where tiny mistakes, such as incorrect\ntranslation of a technical term, leads to signif-\nicant misunderstanding.\nT o tackle this problem, we performed an\nannotation-based analysis of errors that oc-\ncurred in NMT for a specific technical do-\nmain. Professional translators annotated types\nand positions of errors that occurred in trans-\nlation from English to Japanese. The error\ntypology was designed based on an existing\nframework, Multidimensional Quality Metrics\n(MQM) ( Lommel et al., 2014 ), which was cus-\ntomized to our study . W e selected medicine\nas the domain field because medical transla-\ntion is in growing demand in the society to\nenrich healthcare information, which requires\nhighly specific domain expertise. 
Recent issues\nregarding public health, such as the pandemic\nof coronavirus disease 2019, highlight demands\non sharing correct and understandable infor-\nmation throughout the world including Asian\ncountries. W e prepared five medical contents\nwith English-to-Japanese translation data us-\ning state-of-the-art NMT systems. As a result,\n4,912 errors in five types were annotated on\n2,480 sentences. W e also analyzed the anno-\ntation results in detail to reveal distributions\nand characteristics of errors produced by cur-\nrent NMT systems.\nThe results of annotation will be published\nas a parallel corpus with error labels. This\nis the first corpus of error annotation (1)\non domain-specific and (2) on English-to-\nJapanese NMT outputs. Such corpora anno-\ntating errors in machine translation (MT) are\nvaluable resources to understand problems in\nNMT models, develop automatic evaluation\nmetrics, and estimate the quality of machine\ntranslation ( Blatz et al., 2004 ).\n2 Related W ork\nOur annotation corpus is based on the er-\nror typology that conforms to structured cat-\negories of quality metrics for translation qual-\nity . Previous studies employed a few differ-\nent typologies, such as MQM and SCA TE\n(Smart Computer-aided T ranslation Environ-\nment) ( T ezcan et al., 2017 ). Among them,\nMQM is one of the most common frameworks\nfor quality assessment of human translation.\nThe framework of the typology in our study\nalso refers to the MQM.\nQT21 Consortium has published post edited\nand error annotated data for machine transla-\ntions in four languages: Czech, English, Ger-\nman, and Latvian ( Specia et al., 2017 ) based\non MQM. This data just included languages\nin Europe, and prior studies that used the\nMQM have evaluated translation of European\nlanguages ( Klubička et al., 2018 ;V an Brus-\nsel et al., 2018 ). Our corpus in English to\nJapanese will add a useful resource of anno-\ntation. The shared task of quality estima-\ntion in the Conference on Machine T ransla-\ntion (WMT) has also employed the MQM for\ndocument-level quality estimation since 2018.\nApproaches of quality estimation tasks with\nMQM include word-level annotation ( Specia et\nal., 2018 ) and the estimation of MQM scorewith prediction models ( Kepler et al., 2019 ).\nNonetheless, there has been a limited resource\nfor domain-specific translation ( Rigouts T er-\nryn et al., 2019 ), which is indispensable to de-\nvelop an evaluation strategy for appropriate-\nness of word choice in the technical context.\n3 Error Typology & Development of\nAnnotation Guidelines\nIn this study , we developed customized error-\ntypology criteria for the evaluation of domain-\nspecific NMT. Our typology was based on\nMQM. The major error categories in MQM\nare “Accuracy ,” “Fluency ,” “Design,” “Lo-\ncale convention,” “Style,” “T erminology ,” and\n“V erity ,” of which subcategories are defined for\na specific type of incorrectness.\nW e selected and customized several error\nsubtypes in the original MQM for annotation\nthat were applicable to translations by NMT\nsystems. 
In this paper, we focused on subtypes\nthat annotation results confirmed as the major\nproblems of the current NMT systems, namely ,\n“Addition,” “Omission,” and “Mistranslation”\nfrom “Accuracy;” “T erminology;” and “Gram-\nmar” from “Fluency;” as summarized in T able\n1.\nW e customized these error subtypes to han-\ndle domain specificity and the Japanese lan-\nguage due to different systems of grammar\nand sociolinguistic register from W estern lan-\nguages. The following sections describe these\nerror types and guidelines given to annotators\nto identify each error.1\n3.1 Addition and Omission\nOver- and under-generations are typical errors\nin NMT because of the lack of a mechanism to\nexplicitly track source-sentence coverage ( T u\net al., 2016 ). These were categorized as “Ad-\ndition” and “Omission,” respectively .\n“Addition” and “Omission” errors occur\nonly in target and source sentences, respec-\ntively . Our guidelines instructed annotators\nto assign a label of “Addition” on the word(s)\nof target sentence that does not semantically\ncorrespond to any word in the source sentence.\nOn the contrary , the guidelines required to at-\ntach a label of “Omission” to the word(s) of\n1The guidelines are attached to our corpus to be re-\nleased.\nError type Description of error Annotation span Annotation side\nAdditionThe target text includes text not\npresent in the source.*Word/Phrase Target\nOmissionContent is missing from the trans-\nlation that is present in the source.*Word/Phrase Source\nMistranslationThe target content does not accurate-\nly represent the source content.*Word/Phrase Source\nTerminologyThe target text is not suitable in\nterms of the domain of document.Word/Phrase Source\nGrammarSyntax or function words are\npresented incorrectly.Word/Phrase Target\nTable 1: Error typology (Descriptions with asterisks are cited from MQM Issue Types.)\nthe source sentence of which translation did\nnot appear in the target sentence. In cases\nthat grammatical words specific to the target\nlanguage were not translated, this kind of er-\nrors was not considered as “Omission” but as\n“Grammar. ”\nRelevant error subtypes to “Addition” and\n“Omission” defined in MQM are “Over-\ntranslation” and “Under-translation. ” These\napply to a translation output that is more\nor less specific than the source sentence, re-\nspectively . Different from human translation,\nour annotation results revealed that Over- and\nUnder-translations were far infrequent in cur-\nrent NMT systems.\n3.2 Mistranslation\nThis type of error refers to the semantic differ-\nence between words or phrases in source and\ntarget sentences. The wrong choice of meaning\nin polysemous words was included in the “Mis-\ntranslation,” as well as incorrect translation.\nThe guidelines instructed annotators to as-\nsign a label of “Mistranslation” on the word(s)\nof a source sentence that was incorrectly\ntranslated. W e distinguished mistranslation\nand terminological errors to identify domain-\nspecific errors. 
Hence, inappropriate use of\nwords with the same or similar meaning in\ntranslation was categorized to “Mistransla-\ntion,” as discussed below.\n3.3 T erminology\nW e incorporated the appropriateness of word\nchoice to our typology as the category of “T er-\nminology ,” to ensure applicability to measure\nthe domain specificity of translation outputs.\nW e defined terminology errors as a translated\nword that was unsuitable to the description in\nthe medical field, even though the meaning of\nthe word was acceptable in the translation ofthe general domain.\nThe “Mistranslation” and “T erminology” er-\nrors were distinguished whether a translation\noutput correctly reflected the meaning of the\nsource sentence.\nOur guidelines instructed annotators that\nthe errors in the choice of technical terms with\nsimilar meaning should be labeled as “T ermi-\nnology ,” instead of “Mistranslation. ” On the\ncontrary , if a translated word(s) was seman-\ntically incorrect, the word was assigned the\n“Mistranslation” label, irrespective of the pres-\nence of “T erminology” error. The labels of\n“T erminology” were placed on the source sen-\ntence.\nF or example, the word “primary” means\n“most important” or “coming earliest” in gen-\neral, but when used as “primary tumor” in the\ncontext of medicine, it means “the originally\ndeveloped cancer cells in the body . ” Hence,\ntranslating “primary tumor” as “most impor-\ntant tumor” is regarded as “T erminology” er-\nror, while translating into “new tumor” is re-\ngarded as a “Mistranslation” error.\n3.4 Grammar\nGrammatical errors in English-to-Japanese\ntranslation affect the quality of translation\nmore significantly . This is because grammat-\nical errors in English-to-Japanese translation\nare characterized by incorrect understanding\nof syntax, which often changes the meaning of\nsource sentence. F or example, incorrect trans-\nlation output of Japanese particles may be pre-\nsented as the conversion between subjective\nand objective cases.\nThe guidelines instructed annotators to as-\nsign a label of “Grammar” on the target sen-\ntence for the errors of incorrect syntax rep-\nresentation, grammatically inappropriate out-\nput, and wrong order of words.\n3.5 Sides of Annotation\nThe right-most column of T able 1 shows\nwhether annotations were conducted on source\nsentences or translation outputs for each error\ntype. Since MQM has not determined which\nside of the sentence the error should be labeled,\nin this study , we defined the annotation side\nspecific to each error type. “Addition” and\n“Omission” were marked on target and source\nsides, respectively , because their occurrences\nare one-sided. As for “Mistranslation” and\n“T erminology ,” we attached the labels on only\nsource sentences for simplicity of the annota-\ntion process. The alignment of these source\nwords and phrases to the target-side is sub-\nject to our future work. The “Grammar” error\nwas marked in the target-side because anno-\ntators can identify ungrammatical parts in a\nsentence, but it was hard to determine what\ncaused these grammatical errors.\n4 Annotation Setup\nIn this section, we describe the annotation pro-\ncedure and resources used to perform the an-\nnotation.\n4.1 Annotation Procedure\nFirst of all, annotators were instructed to\nread through the annotation guidelines be-\nfore starting the annotation and to be famil-\niar with the standards. 
The annotators were\nprovided triples of a source sentence, refer-\nence translation, and MT output, and worked\nfor annotation through October to Decem-\nber 2018 . The annotators identified spans\nof word/phrase/sentence presenting errors and\nassigned the corresponding error types as la-\nbels on the sentence level. Annotation could\nbe overlapped on the same spans for different\ntypes of errors.\n4.2 NMT Systems\nDistribution of the occurrence of errors might\ndepend on a certain translation system; there-\nfore, we used multiple systems to reduce the\neffect of such dependency . W e used state-of-\nthe-art NMT systems for English-to-Japanese\ntranslation available in October 2018 at the\ntime of annotation, as described below.\n• Google’s neural machine translation sys-\ntem (GNMT) ( W u et al., 2016 )• NICT’s neural machine translation sys-\ntem ( W ang et al., 2018 ) (NICT NMT)\nThe preliminary investigation confirmed that\nthere was no substantial difference between\nboth systems. The corpus-level BLEU scores\nof GNMT and NICT NMT were 36.20 and\n35.70, respectively . The mean normalized Lev-\nenshtein distance2of each sentence between\nreferences and translation outputs of GNMT\nand NICT NMT were 0.64 (±0.23) and 0.64\n(±0.22), respectively . Paired bootstrap resam-\npling test ( Koehn, 2004 ) showed no significant\ndifference in the two NMT systems for corpus\nBLEU ( p= 0.17) as well as Student’s t-test for\nnormalized Levenshtein distance ( p= 0 .63);\nhence, we did not distinguish their outputs in\nthe later processes.\n4.3 Corpora for Annotation\nOur annotation corpus consisted of 2,480 sen-\ntences from the medical/pharmaceutical do-\nmain in English. W e collected the sentences\nfrom five sources of documents with differ-\nent types: MSD Manual Consumer V ersion\n(Merck and Co., Inc., 2015a ), MSD Man-\nual Professional V ersion ( Merck and Co., Inc.,\n2015c ), New England Journal of Medicine\n(Massachusetts Medical Society, 2019 ), Jour-\nnal of Clinical Oncology ( American Society\nof Clinical Oncology, 2019 ), and ICH guide-\nlines ( Singh, 2015 ). T wo versions of MSD\nmanual are for the same topics of medical in-\nformation but differentiated by expertise lev-\nels of contents: Professional V ersion includes\nhighly technical terms for health profession-\nals, and Consumer V ersion is written for the\ngeneral population without domain knowledge.3\nNew England Journal of Medicine and Jour-\nnal of Clinical Oncology are standard academic\njournals of medicine. ICH guidelines consist\nof international regulations for pharmaceuti-\ncal manufacturing processes. The source sen-\ntences were randomly extracted from each doc-\nument.\nW e obtained the Japanese translation of the\ncorpora from the two NMT systems. 
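For reference, the sentence-level normalized Levenshtein distance used in the system comparison above (footnote 2) can be computed as in the following sketch. This is not the authors' evaluation script; the footnote does not state the exact normalizer, so division by the longer of the two strings is assumed, and all function names are hypothetical.

# Pure-Python, character-level normalized Levenshtein distance (illustrative).

def levenshtein(a, b):
    prev = list(range(len(b) + 1))
    for i, ca in enumerate(a, start=1):
        curr = [i]
        for j, cb in enumerate(b, start=1):
            cost = 0 if ca == cb else 1
            curr.append(min(prev[j] + 1,          # deletion
                            curr[j - 1] + 1,      # insertion
                            prev[j - 1] + cost))  # substitution
        prev = curr
    return prev[-1]

def normalized_levenshtein(reference, hypothesis):
    if not reference and not hypothesis:
        return 0.0
    return levenshtein(reference, hypothesis) / max(len(reference), len(hypothesis))

def mean_distance(pairs):
    """Mean over (reference, MT output) pairs, as reported per system above."""
    return sum(normalized_levenshtein(r, h) for r, h in pairs) / len(pairs)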
The set\nof target sentence was produced by randomly\n2Levenshtein distance divided by the length of refer-\nence and target sentences.\n3Therefore, the Consumer and Professional versions\nconsist of comparable sentences with different exper-\ntise levels but are not exactly parallel.\nSourceExpertise\nLevelNumber of\nsentencesMean number of\nwords per sentenceBLEUnormalized Leven-\nshtein distance\nMSD Manual Consumer Version General 580 17.88(±7.89) 31.58 0 .66(±0.23)\nMSD Manual Professional Version Professional 560 19.50(±9.48) 38.93 0 .59(±0.24)\nNew England Journal of Medicine Professional 420 29.96(±17.12) 37.65 0 .62(±0.21)\nJournal of Clinical Oncology Professional 420 22.99(±12.09) 36.29 0 .69(±0.24)\nICH guidelines Professional 500 18.08(±5.77) 33.67 0 .66(±0.21)\nTotal 2,480 21.20(±11.61) 35.95 0 .64(±0.23)\nTable 2: Statistics of language resource for annotation\nselecting each translated sentence from the two\nNMT outputs ( 50% for each), to prepare bilin-\ngual pairs of the 2,480 sentences. T able 2\nshows the statistics of our annotation corpus.\nThese source sentences have corresponding\nJapanese versions, which were prepared by\nhuman translation with the professional re-\nview ( Merck and Co., Inc., 2015d ;Merck and\nCo., Inc., 2015b ; Nankodo Co.,Ltd., 2019 ;\nAmerican Society of Clinical Oncology, 2018 ;\nPharmaceuticals and Medical Devices Agency,\n2018 ). These Japanese versions were used as\nthe reference translations.4\n4.4 Annotators\nT o ensure the quality of annotation, we re-\ncruited three professional translators in the\nmedical/pharmaceutical field. All the anno-\ntators were native Japanese translators with\nan academic background in biology or pharma-\ncology . Y ear of translation experience ranged\nfrom three to eight years. The annotators iden-\ntified errors and their types in an NMT output\nreferring to corresponding source and reference\ntranslations.\n5 Quality Control of Annotation\nThis kind of error annotation is inevitably sub-\njective, because the ability to detect errors\nin translation depends on the level of exper-\ntise. In addition, determination of the type\nand span of errors should be contingent on the\npreference of each annotator, which may cause\nthe variation of the annotation work.5\n4Some of the Japanese articles in the MSD manual are\ncomparable but not parallel translations because of\ndifference in edition and local regulation. Therefore,\nwe manually selected sentences ensuring the equiva-\nlence of the translation pairs.\n5Due to this variation, a common metric to measure\nthe agreement of annotations, i.e., Fleiss’ Kappa, is\nnot applicable.In this study , to collect reliable annotations\nalleviating such subjectivity , we conducted a\npilot study and reconciliation of annotated la-\nbels.\n5.1 Pilot Study\nW e performed a pilot study with the annota-\ntors using an independent data, consisted of\n100 pairs of sentences.\nAnnotations on the pilot study were thor-\noughly reviewed by the authors and feed-\nbacked to the annotators when there were mis-\nunderstandings of the guidelines. Also, ques-\ntions raised by any annotator and the answers\nwere shared to ensure that annotators have the\nsame understanding of the task.\n5.2 Reconciliation of Annotation\nOnce the annotators completed the annota-\ntion, they reviewed all the annotation re-\nsults from the other annotators. They judged\nwhether to accept or reject each annotation la-\nbel. 
When two or more annotators voted to\naccept an annotation label, the corresponding\nannotation is retained, otherwise discarded.\nThe first annotation process identified 7,424\nerrors. The three annotators assigned 3,115 la-\nbels on average, with a standard deviation of\n37.82. After the reconciliation process, the to-\ntal number of errors with types was reduced\nto4,912 . Among these, 4,572 annotations\nwere agreed by all the three annotators, and\nthe rest 340 were agreed by two, which shows\nthat our final annotation results are highly re-\nliable. Note that 2,352 errors with the same\nlabels and spans were consolidated as one er-\nror. Errors with overlapping span but with\ndifferent labels were kept as independent anno-\ntations. Annotations on partially overlapping\nspan with same error type were combined to\none annotation that had larger span (e.g. T wo\nannotations on “a condition” and “condition”\nwere combined to that on “a condition. ”).\nW e confirmed that “T erminology ,” “Ad-\ndition,” and “Omission” errors were highly\nagreed ( 96.8%,71.4%, and 64.1% of errors were\naccepted by at least two annotators). On the\nother hand, “Mistranslation” and “Grammar”\nerrors had an opposite tendency ( 46.0% and\n47.4% were accepted by at least two annota-\ntors). The disagreement of annotation separat-\ning “Mistranslation” and “T erminology” was\neffectively combined through the reconciliation\nwork. The judgment of “Mistranslation” and\n“T erminology” errors tended to be more sub-\njective, which caused disagreement. These re-\nsults imply that the many cases of disagree-\nment were reconciled as “T erminology” error,\nrejecting the annotation of “Mistranslation. ”\nIn addition, annotators commented that “Ad-\ndition” and “Omission” errors were harder to\ndetect and large part of disagreement in these\nerrors were due to oversight. Therefore, the\nreconciliation resulted in the high acceptance\nratios.\n5.3 Annotation Examples\nT able 3 shows examples of annotation re-\nsults after reconciliation, in which underlined\nphrases in the text indicate errors. The first\ncase is an example of “Addition,” in which the\nsame words of “ 長期的な (long-term)” appear\ntwice in the target sentence. The second ap-\npearance was annotated as “Addition. ” In the\nsecond case, the translation corresponding to\nthe words “both of” in the source sentence is\nnot included in the target sentence. This type\nof error was annotated as “Omission. ” The\nthird and fourth cases represented “T erminol-\nogy” errors. In the third case, the word “at 90\ndays” was used to mean a time point; however,\nthe MT output referred to duration, and thus\nannotated as “Mistranslation. ” In the fourth\ncase, “may” was used to express a possibility ,\nwhich was not reflected in the target output.\nThe fifth case is an example of “Grammar. ” In\nthis case, the coordination in the source sen-\ntence means “low vitamin D intake or low cal-\ncium intake;” however, the translation in the\ntarget text means “low vitamin D, and calcium\nintake. ” This type of syntax error was anno-\ntated as “Grammar. ” The sixth and seventh\ncases represented “T erminology” errors. In the\nsixth case, “fluid” specifically had the mean-ing of water, which was translated into a word\nsuggesting general liquid. 
In the seventh case,\nthe word “response” corresponded to several\nwords in Japanese, and the selection of words\nwas not correct to represent the reduction of\ncancer cells.\nBoth “Mistranslation” and “T erminology”\nare the issue of word choice; however, there is\na substantial difference in the two error types,\nas presented in these examples. Our typology\ndesign allowed distinguishing these two error\ntypes in a specific domain by fine-grained an-\nnotation.\n6 Analysis of Annotation Results\nW e conducted an in-depth analysis of annota-\ntion results from four perspectives:\n• F requency and distribution of errors in\ncurrent NMT systems (Section 6.1 ),\n• Possible factors affecting error occurrence\n(Section 6.2 ),\n• Co-occurrence of errors to reveal depen-\ndence among error types (Section 6.3 ),\nand\n• Correlation with conventional automatic\nmetrics for machine translation evalua-\ntion to investigate their powers of the test\n(Section 6.4 ).\n6.1 Error Distribution\nThe rate of error occurrence was 1.98 per sen-\ntence, with a standard deviation of 2.07. The\nrate of error occurrence per source word was\n0.09. This means that, on average, NMT out-\nputs included approximately two errors within\none sentence, although the high standard de-\nviation suggested that the distribution of the\npresence of errors was somewhat dispersed. As\nshown in Figure 1, most of the sentences had\nerrors of five or less ( 94.60%), and 572 sen-\ntences ( 23.06%) had no error.\nT able 4 shows the distribution of errors\nby error types. Errors in terms of “T ermi-\nnology” accounted for more than one-third.\nThe second-largest proportion was “Mistrans-\nlation” ( 22.78%) followed by “Grammar” er-\nrors ( 20.38%).\n6.2 F actors affecting to Error Occurrence\nW e investigated possible factors that may af-\nfect the occurrence of errors in NMT out-\nputs. 
Namely , we investigated the effects of\nError type Source T arget Reference\nAddition Even former athletes who stop\nexercising do not retain mea-\nsurable long-term benefits.運動をやめた元スポーツ選手でさ\nえ、 長期的な (long-term) 長期的な\n(long-term) 利益を維持すること\nはできない。元運動選手であっても、運動を\nやめてしまえば、その効果を長\n期間維持することはできません。\nOmission Regular exercise can improve\nboth of these qualities.通常の運動は、これらの性質を改\n善することができる。定期的な運動によってその両方\n(both of) を向上させることが\nできます。\nMistranslation The primary end point was a\ncomposite of death, the need\nfor dialysis, or a persistent in-\ncrease of at least 50 % from\nbaseline in the serum creati-\nnine level at 90 days .主要なエンドポイントは、死\n亡、透析の必要性、または 90日間\n(for 90 days)の血清クレアチニン\nレベルのベースラインからの少な\nくとも 50%の持続的な増加の複合\n物であった。90日の時点 (at90 days)にお\nける死亡、透析の必要性、血清\nクレアチニン値のベースライン\nから 50%以上の上昇の持続の\n複合を主要評価項目とした。\nMistranslation When men with BPH urinate,\nthe bladder may not empty\ncompletely .BPHの男性が排尿すると、膀胱が\n完全に空になることはありません\n(will not empty) 。前 立 腺 肥 大 症 の 男 性 が 排\n尿 す る 場 合、 膀 胱 が 完 全\nに空にならないことがあります\n(may not empty) 。\nGrammar Aging, estrogen deficiency , low\nvitamin D or calcium intake,\nand certain disorders can de-\ncrease the amounts of the com-\nponents that maintain bone\ndensity and strength.老 化、 エ ス ト ロ ゲ ン 欠\n乏、低ビタミン Dまたは\nカルシウム摂取 (low vitamin\nD, and calcium intake) 、および\nある種の障害は、骨密度および強\n度を維持する成分の量を減少させ\nる可能性がある。加齢、エストロゲンの不足、\nビタミン Dやカルシウムの\n摂取不足 (low vitamin D or\ncalcium intake) 、およびある種\nの病気によって、骨密度や骨の\n強度を維持する成分の量が減少\nすることがあります。\nT erminology Maintaining adequate levels of\nfluid and sodium helps prevent\nheat illnesses.十分な量の液体 (liquid)とナトリ\nウムを維持することは、熱病予防\nに役立ちます。十分な水分 (water)およびナト\nリウム値を維持することが、熱\n中症の予防に役立つ。\nT erminology The rate of any complete or\npartial response to cabozan-\ntinib, vandetanib, and suni-\ntinib was 37 %,18 %, and 22 %,\nrespectively .カボザンチブ、バンデタニブ、お\nよびスニチニブに対する完全また\nは部分応答 (answer) の割合は、そ\nれぞれ 37%、18%および 22%であ\nった。完全 /部分奏効 (response) 率は\nCabozantinib 37%、 V andetanib\n18%、および Sunitinib 22%で\nあった。\nTable 3: Examples of annotation results (Underlines indicate the errors with corresponding English translations\nin parentheses. Underlines and parentheses are for explanation and do not included in the actual annotation\ncorpus.)\nSubtype Occurrence (%) Mean per sentence (SD)\nAddition 230(4.68%) 0.09(±0.40)\nOmission 794(16.16%) 0.32(±0.73)\nMistranslation 1,119(22.78%) 0.45(±0.75)\nGrammar 1,001(20.38%) 0.40(±0.74)\nTerminology 1,768(35.99%) 0.71(±0.95)\nTotal 4,912(100.00%) 1.98(±2.07)\nTable 4: Error occurrence based on the typology\nFigure 1: Distribution of errors in sentence ()\nthe length of source sentences, expertise levelof source documents, and terminology .6\n6.2.1 Length of Source Sentence\nOne of the most intuitive factors that affect\nthe quality of NMT outputs is the length of\nthe source sentence, i.e., longer sentences are\nmore difficult to translate. As expected, source\nlength was confirmed to have a high correlation\nwith error occurrence. The correlation coeffi-\ncients were ρ= 0.65 for the number of words\nin a sentence ( p <0.0001 ).\n6These are dependent factors for each other, but we\nindependently investigated their effects for simplicity.\n6.2.2 Effect of Expertise Levels of\nDocuments\nW e assumed that sentences from documents\nfor experts were more challenging for NMT\nsystems due to discrepancies in terminologies\nfrom those of the general domain. 
Among the\nsources of our corpora, two versions of MSD\nManuals were about the same topics of medical\ninformation but distinguished by the levels of\nexpertise: the Consumer V ersion was targeted\nat the general population, and the Professional\nV ersion was at health professionals. Source\nsentences of the Professional V ersion and the\nConsumer V ersion had 2,819 and2,123 unique\nwords, respectively , of which overlapped pres-\nence was limited to 984 words.\nThe difference in error occurrence was sum-\nmarized in T able 5. Overall, translations of the\nProfessional V ersion had a larger number of er-\nrors ( 1,108 ) than those of Consumer V ersion\n(770 ). Specifically , the errors of “Mistrans-\nlation,” “Grammar,” and “T erminology” were\nsignificantly more frequent on translations of\nProfessional V ersion than on those of Con-\nsumer V ersion.7These results confirm our as-\nsumption that expertise levels of source docu-\nments negatively affect to the translation qual-\nity of current NMT systems.\n6.2.3 Error Occurrence Dependent on T erms\nT able 4shows that the most common error\ntypes in NMT outputs are incorrect transla-\ntions of terms, i.e., “Mistranslation” and “T er-\nminology ,” which took up in total of 58.77% of\nerrors. In this section, we further investigated\nwhat kind of words tend to cause these errors.\nT able 6ranks the most frequent words that\nwere annotated as “Mistranslation” and “T er-\nminology ,” respectively .8F requent “Mistrans-\nlation” words included numbers and units\n(“days,” and “months”), comparative words\n(“more,” “less,” and “versus”), and auxiliaries\n(“may”). In our analysis, these types of words\nmore frequently produced incorrect translation\nthan proper nouns, verbs, or other specific\nwords in medicine. These words look simple\n7Although a significant difference was also confirmed\non “Addition,” we omit it due to their small numbers\nof occurrences.\n8Stop words, such as short function words and punc-\ntuation marks, were filtered out from the ranking for\nbrevity.but require different translations depending on\nco-occurring words and the context.\n“T erminology” errors list different types of\nwords from “Mistranslation. ” The high-ranked\nwords such as “primary” and “response” are\npolysemous in the domain of medicine, which\nwas failed to translate correctly by NMT sys-\ntems.\n6.3 Co-occurrence of Error Types\nIn this section, we investigated the interaction\nbetween error types to examine if some errors\ntend to lead to other types of errors. T o de-\ntermine the tendency of co-occurrence of the\nerrors, we computed correlation coefficients of\ncombinations of error types.\nT able 7shows combinations of error types\nwhose correlation coefficients were larger than\n0.3. The highest co-occurrence was observed\nin the combination of “Addition” and “Omis-\nsion. ” Notably , in the total of 176 occur-\nrences of “Addition” errors, 100 (56.82%) were\naccompanied by “Omission” errors. The er-\nrors of “Addition” and “Omission” were typ-\nically caused by over-generation and under-\ngeneration in NMT, respectively . This result\nrevealed that over and under generations af-\nfect each other; over-generation of unnecessary\nphrases may lead to under generation of nec-\nessary phrases, and vice versa.\nIt is reasonable that “Addition” and “Omis-\nsion” co-occur with “Grammar” errors, be-\ncause the insertion of unnecessary words or\ndeletion of necessary words may corrupt gram-\nmatical structures. 
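To make the co-occurrence analysis of Section 6.3 concrete, the sketch below shows how such pairwise coefficients could be computed from per-sentence error counts. This is our illustration, not the authors' code; the paper reports ρ but does not name the statistic, so Spearman's rank correlation is assumed here.

```python
# Illustrative computation of pairwise error-type co-occurrence (cf. Table 7).
# Assumption: Spearman's rho over per-sentence error counts; the exact
# correlation statistic is not specified in the text.
from itertools import combinations
from scipy.stats import spearmanr

ERROR_TYPES = ["Addition", "Omission", "Mistranslation", "Grammar", "Terminology"]

def cooccurring_error_types(sentence_errors, min_rho=0.3):
    """sentence_errors: one dict per sentence mapping error type -> count."""
    results = []
    for t1, t2 in combinations(ERROR_TYPES, 2):
        x = [s.get(t1, 0) for s in sentence_errors]
        y = [s.get(t2, 0) for s in sentence_errors]
        rho, p_value = spearmanr(x, y)
        if rho > min_rho:  # threshold used for Table 7
            results.append((t1, t2, rho, p_value))
    return sorted(results, key=lambda r: -r[2])
```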
The other way around is\nalso possible, i.e., source sentences that an\nNMT system fails to capture correct grammat-\nical structures are difficult to translate, which\nresults in “Addition” and “Omission” errors.\nThe high co-occurrence of these errors sug-\ngests that the common problems of machine\ntranslation may mutually have causal correla-\ntions.\n6.4 Correlation with Automatic Metrics\nFinally , we investigated the correlation be-\ntween annotated errors and BLEU scores as\nthe most commonly used automatic evaluation\nmetric. Specifically , we calculated a correla-\ntion coefficient between the number of errors\nin a sentence and sentence BLEU score. In ad-\ndition, we also calculated the correlation with\nSubtype Occurrence p-value\nConsumer\n(Merck and Co., Inc., 2015a )Professional\n(Merck and Co., Inc., 2015c )\nAddition 26 46 0.0300\nOmission 142 168 0.1071\nMistranslation 225 265 0.0489\nGrammar 102 224 <0.0001\nTerminology 275 405 0.0001\nTotal 770 1,108\nTable 5: Error occurrence by expertise levels of documents (Student t-test was used to calculate p-values)\nMistranslation Terminology\ncount word count word\n27 may 61primary\n15 more 33response\n14 days 33common\n12 less 28survival\n11pneumonitis 28outcome\n10 rate 26 end\n10 versus 22 point\n9common 19 fluid\n9 therapy 18 active\n9 months 17 benefit\n9 active 17therapy\n8 falls 16 rate\n7 medical 16analysis\n7 benefit 15Secondary\n7 drug 14 drug\n7 ratio 14 overall\n7 arms 14ovarian\n6 illness 14studies\n6 disease 13outcomes\n6 number 12 cancer\nTable 6: Ranking of words with “Mistranslation” and\n“Terminology” errors\nError Combination ρp value\nAddition & Omission 0.43 <0.0001\nOmission & Grammar 0.35 <0.0001\nAddition & Grammar 0.31 <0.0001\nTable 7: Highly correlate error types ( ρ >0.3)\nfairly simple metric, normalized Levenshtein\ndistance between the translation outputs and\nreference translations as a baseline.\nThe correlation coefficient of error occur-\nrence and sentence BLEU was ρ=−0.18\n(p < 0.0001 ) while that of normalized Leven-\nshtein distance was ρ= 0.27 (p <0.0001 ). The\nsentence BLEU showed an even lower correla-\ntion than the normalized Levenshtein distance.\nThis result indicates that sentence BLEU is\nnot only ignorant of errors in translation out-\nput but also fails to evaluate the overall trans-\nlation quality . Our annotation corpus con-\ntributes to design new automatic evaluation\nmetrics that have the power to discriminate\nerrors.7 Discussion and F uture W ork\nW e performed the error analysis of NMT for\nthe English and Japanese language pair in\nthe medical domain, based on fine-grained and\nquality-controlled manual annotation.\nIn the analysis of detected 4,912 errors on\n2,480 sentences, we found that the major er-\nrors in NMT were “Mistranslation” and “T er-\nminology ,” rather than “Addition” and “Omis-\nsion. ” The errors of “Addition” and “Omis-\nsion” have been deemed typical in NMT as\nover-generation and under-generation, respec-\ntively; however, our results revealed that the\nsemantic and terminology errors were more\ncommon in domain-specific technical docu-\nments. Interestingly , these errors were of-\nten observed in quantitative and polysemous\nwords. This finding suggests future challenges\nin machine translation research targeting in\nthe representation of numeric and multi-sense\nwords.\nW e found more errors in documents for\nhealth-care professionals compared with those\nfor the general public, specifically in terms of\nerrors in “Grammar” and “T erminology . 
” This\nfinding encourages further research to improve\nthe performance of NMT in documents that\ninclude sentences with complex syntax and\nhighly-specialized technical terms.\nThe results of annotation will be published\nas a parallel corpus with detailed error labels,\nwhich is expected to be a valuable resource to\nimprove NMT models, develop automatic eval-\nuation metrics, and estimate qualities of ma-\nchine translation. The limitations in current\nautomatic evaluation metrics are partly at-\ntributable to insufficient understanding of the\nreal performance of NMT systems. F urther-\nmore, the dependence on the reference transla-\ntion is problematic. The similarity to the refer-\nence does not necessarily represent the seman-\ntic accordance of the translation to the source\nsentence. Natural language is characterized\nby its ambiguity , such as multiple meanings\nand contextual implications, and thus transla-\ntion should not have the unique correct answer.\nWhile verbatim similarly to the reference en-\nforces a strict constraint, it does not ensure the\nactual quality of translation. Better estima-\ntion of translation quality should incorporate\nfeatures reflecting the actual quality of transla-\ntion, such as semantic accuracy and linguistic\nfluency .\nW e believe our corpus contributes to re-\nsearch on evaluation or estimation models of\nNMT performance to overcome these limita-\ntions. Essentially , it is a valuable resource\nfor assessing the domain-specificity of transla-\ntion outputs. As future works, we will develop\nquality estimation models using the corpus to\nallow fine-grained and domain-specific evalua-\ntion. Also, we will extend the annotation cor-\npus in other domains and language pairs.\nAcknkowlegement\nThis work was supported by NTT communi-\ncation science laboratories.\nReferences\nAmerican Society of Clinical Oncology. 2018.\nJournal of Clinical Oncology (Japanese V ersion).\nhttp://usaco.jcoabstracts.jp/contents/ .\nAmerican Society of Clinical Oncology. 2019.\nJournal of Clinical Oncology. http://ascopubs.\norg/journal/jco/ .\nJ. Blatz et al. 2004. Confidence estimation for\nmachine translation. In COLING 2004: Pro-\nceedings of the 20th International Conference on\nComputational Linguistics, pages 315–321.\nF. Kepler et al. 2019. Unbabel’s participation\nin the WMT19 translation quality estimation\nshared task. In Proceedings of the F ourth Con-\nference on Machine T ranslation, pages 78–84.\nF. Klubička, A. T oral, V. M. Sánchez-Cartagena.\n2018. Quantitative fine-grained human evalua-\ntion of machine translation systems: a case study\non english to croatian. Machine T ranslation,\n32(3):195–215.\nP . Koehn. 2004. Statistical significance tests for\nmachine translation evaluation. In Proceedings\nof the 2004 Conference on Empirical Methods in\nNatural Language Processing, pages 388–395.\nA. Lommel, H. Uszkoreit, A. Burchardt. 2014.\nMultidimensional quality metrics (mqm): A\nframework for declaring and describing transla-\ntion quality metrics. T radumàtica, (12):0455–\n463.Massachusetts Medical Society. 2019. The New\nEngland Journal of Medicine. https://www.\nnejm.org/ .\nMerck and Co., Inc. 2015a. MSD MANUAL Con-\nsumer V ersion. https://www.msdmanuals.com/\nhome/ .\nMerck and Co., Inc. 2015b. MSD MANUAL\nConsumer V ersion (Japanese V ersion). https:\n//www.msdmanuals.com/ja-jp/ .\nMerck and Co., Inc. 2015c. MSD MANUAL Pro-\nfessional V ersion. https://www.msdmanuals.\ncom/professional/ .\nMerck and Co., Inc. 2015d. 
MSD MANUAL Pro-\nfessional V ersion (Japanese V ersion). https:\n//www.msdmanuals.com/ja-jp/ .\nNankodo Co.,Ltd. 2019. The New England Journal\nof Medicine (Japanese V ersion). https://www.\nnejm.jp/ .\nPharmaceuticals and Medical Devices Agency.\n2018. ICH guidelines (Japanese V ersion).\nhttps://www.pmda.go.jp/int-activities/\nint-harmony/ich/0070.html .\nA. Rigouts T erryn et al. 2019. Pilot study on med-\nical translations in lay language: Post-editing by\nlanguage specialists, domain specialists or both?\nIn T ranslating and the Computer 41. Editions\nT radulex.\nJ. Singh. 2015. International conference on har-\nmonization of technical requirements for regis-\ntration of pharmaceuticals for human use. Jour-\nnal of pharmacology & pharmacotherapeutics,\n6(3):185.\nL. Specia et al. 2017. T ranslation quality and\nproductivity: A study on rich morphology lan-\nguages. In Proceedings of Machine T ranslation\nSummit XVI, pages 55–71.\nL. Specia et al. 2018. Findings of the wmt 2018\nshared task on quality estimation. In Proceed-\nings of the Third Conference on Machine T rans-\nlation: Shared T ask Papers, pages 689–709.\nA. T ezcan, V. Hoste, L. Macken. 2017. Scate tax-\nonomy and corpus of machine translation errors.\nIn T rends in e-tools and resources for translators\nand interpreters, pages 219–244.\nZ. T u et al. 2016. Modeling coverage for\nneural machine translation. arXiv preprint\narXiv:1601.04811.\nL. V an Brussel, A. T ezcan, L. Macken. 2018. A\nfine-grained error analysis of nmt, smt and rbmt\noutput for english-to-dutch. In Proceedings of\nthe Eleventh International Conference on Lan-\nguage Resources and Evaluation (LREC 2018).\nR. W ang et al. 2018. Sentence selection and\nweighting for neural machine translation domain\nadaptation. IEEE/ACM T ransactions on Audio,\nSpeech, and Language Processing, 26(10):1727–\n1741.\nY. W u et al. 2016. Google’s neural machine trans-\nlation system: Bridging the gap between hu-\nman and machine translation. arXiv preprint\narXiv:1609.08144.",
"main_paper_content": null
}
|
{
"decision": "Unknown",
"reviews": []
}
| 0 | 0 |
[] |
[] |
{
"id": "6GLbth-cPqi",
"year": null,
"venue": "EAMT 2011",
"pdf_link": "https://aclanthology.org/2011.eamt-1.24.pdf",
"forum_link": "https://openreview.net/forum?id=6GLbth-cPqi",
"arxiv_id": null,
"doi": null
}
|
{
"title": "Oracle-based Training for Phrase-based Statistical Machine Translation",
"authors": [
"Marion Potet",
"Emmanuelle Esperança-Rodier",
"Hervé Blanchon",
"Laurent Besacier"
],
"abstract": "Marion Potet, Emmanuelle Esperança-Rodier, Hervé Blanchon, Laurent Besacier. Proceedings of the 15th Annual conference of the European Association for Machine Translation. 2011.",
"keywords": [],
"raw_extracted_content": "Oracle-based Training for Phrase-based Statistical Machine Translation\nAnkit K. Srivastava\nCNGL, School of Computing\nDublin City University, Ireland\[email protected] Ma\nBaidu Inc.\nBeijing, China\[email protected] Way\nCNGL, School of Computing\nDublin City University, Ireland\[email protected]\nAbstract\nA Statistical Machine Translation (SMT)\nsystem generates an n-best list of candidate\ntranslations for each sentence. A model er-\nror occurs if the most probable translation\n(1-best) generated by the SMT decoder is\nnot the most accurate as measured by its\nsimilarity to the human reference transla-\ntion(s) (an oracle). In this paper we inves-\ntigate the parametric differences between\nthe 1-best and the oracle translation and at-\ntempt to try and close this gap by propos-\ning two rescoring strategies to push the or-\nacle up the n-best list. We observe modest\nimprovements in METEOR scores over the\nbaseline SMT system trained on French–\nEnglish Europarl corpora. We present a de-\ntailed analysis of the oracle rankings to de-\ntermine the source of model errors, which\nin turn has the potential to improve overall\nsystem performance.\n1 Introduction\nPhrase-based Statistical Machine Translation (PB-\nSMT) systems typically learn translation, reorder-\ning, and target-language features from a large\nnumber of parallel sentences. Such features are\nthen combined in a log-linear model (Och and Ney,\n2002), the coefficients of which are optimized on\nan objective function measuring translation quality\nsuch as the BLEU metric (Papineni et al., 2002),\nusing Minimum Error Rate Training (MERT) as\ndescribed in Och (2003).\nAn SMT decoder non-exhaustively explores the\nexponential search space of translations for each\nsource sentence, scoring each hypothesis using the\nc/circlecopyrt2011 European Association for Machine Translation.formula (Och and Ney, 2002) in (1).\nP(e|f) =exp(M/summationdisplay\ni=1λihi(e,f)) (1)\nThe variable hdenotes each of the Mfea-\ntures (probabilities learned from language models,\ntranslation models, etc.) and λdenotes the associ-\nated feature weight (coefficient).\nThe candidate translation (in the n-best list) hav-\ning the highest decoder score is deemed to be the\nbest translation (1-best) according to the model.\nAutomatic evaluation metrics measuring similarity\nto human reference translations can be modified to\ngenerate a score on the sentence level instead of at\nsystem level. These scores can, in turn, be used\nto determine the quality or goodness of a transla-\ntion. The candidate having the highest sentence-\nlevel evaluation score is deemed to be the most ac-\ncurate translation (oracle).\nIn practice, it has been found that the n-best list\nrankings can be fairly poor (i.e. low proportion\nof oracles in rank 1), and the oracle translations\n(the candidates closest to a reference translation as\nmeasured by automatic evaluation metrics) occur\nmuch lower in the list. Model errors (Germann et\nal., 2004) occur when the optimum translation (1-\nbest) is not equivalent to the most accurate transla-\ntion (oracle). 
The aim of this paper is to investigate\nthese model errors by quantifying the differences\nbetween the 1-best and the oracle translations, and\nevaluate impact of the features used in decoding\n(tuned using MERT) on the positioning of oracles\nin the n-best list.\nAfter a brief overview of related approaches in\nsection 2, we describe in section 3 a method to\nidentify the oracles in the n-best lists, and our an-\nalytical approach to determine whether the basic\nfeatures (used in decoding) help or hurt the oracle\nrankings. Section 4 lists our experiments on mod-\nifying the feature weights to help push the oraclesMik el L. F orcada, Heidi Depraetere, Vincen t V andeghinste (eds.)\nPr o c e e dings of the 15th Confer enc e of the Eur op e an Asso ciation for Machine T r anslation , p. 169\u0015176\nLeuv en, Belgium, Ma y 2011\nup the n-best list, followed by discussion in sec-\ntion 5. We conclude with our remarks on how to\nobtain the best of the available ntranslations from\nthe MT system together with avenues for further\nresearch on incorporating our methods in main-\nstream reranking paradigms.\n2 Related Work\nOne manner to minimize the problem of low rank-\ning of higher quality translation candidates in the\nn-best lists has been to extract additional features\nfrom the n-best lists and rescore them discrimina-\ntively. These reranking approaches differ mainly\nin the type of features used for reranking and the\ntraining algorithm used to determine the weights\nneeded to combine these features.\nOch et al. (2004) employed nearly 450 syn-\ntactic features to rerank 1000-best translation can-\ndidates using MERT optimized on BLEU. These\nsame features were then trained in a discrimina-\ntive reranking model by replacing MERT with a\nperceptron-like splitting algorithm and ordinal re-\ngression with an uneven margin algorithm (Shen et\nal., 2004). Unlike the aforementioned approaches,\nYamada and Muslea (2009) trained a perceptron-\nbased classifier on millions of features extracted\nfrom shorter n-best lists of size 200 of the entire\ntraining set for reranking, and computed BLEU on\na sentence level rather than corpus level as we do\nhere.\nHasan et al. (2007) observed that even after the\nreference translations were included in the n-best\nlist, less than 25% of the references were actually\nranked as the best hypotheses in their reranked sys-\ntem. They concluded that better reranking mod-\nels were required to discriminate more accurately\namongst the n-best lists. In this paper we take a\nstep in that direction by trying to observe the im-\npact of existing features (used in MERT and de-\ncoding) on the positioning of oracle-best hypothe-\nses in the n-best lists to motivate new features for\na reranking model.\nOur work is most related to Duh and Kirchhoff\n(2008) in that they too devise an algorithm to re-\ncompute the feature weights tuned in MERT. How-\never, they focus on iteratively training the weights\nof additional reranking features to move towards a\nnon-linear model, using a relatively small dataset.\nWhile most papers cited above deal with feature-\nbased reranking (and as such are not directly re-\nlated to our proposed approach), they constitutea firm foundation and serve as motivation for our\noracle-based study. 
We focus on the features used\nin decoding itself and recompute their weights to\ndetermine the role of these features in moving ora-\ncles up (and down) the n-best list.\n3 Methodology\nThe central thrust of our oracle-based training is\nthe study of the position of oracle translations in\nthen-best lists and an analysis of sentences where\nthe most likely translation (1-best) does not match\nwith the best-quality translation (oracle). In this\nsection, we describe the selection procedure for\nour oracles followed by an overview of the base-\nline system settings used in all our experiments,\nthe rescoring strategies, and a filtering strategy to\nincrease oracle confidence.\n3.1N-best Lists and Oracles\nThe oracle sentence is selected by picking the\ncandidate translation from amongst an n-best list\nclosest to a given reference translation, as mea-\nsured by an automatic evaluation metric. We chose\nBLEU for our experiments, as despite shortcom-\nings such as those pointed out by (Callison-Burch\net al., 2006), it remains the most popular met-\nric, and is most often used in MERT for opti-\nmizing the feature weights. Our rescoring exper-\niments focus heavily on these weights. Note that\nBLEU as defined in (Papineni et al., 2002) is a\ngeometric mean of precision n-grams (usually 4),\nand was not designed to work at the sentence-\nlevel, as is our requirement for the oracle selection.\nSeveral sentence-level implementations known as\nsmoothed BLEU have been proposed (Lin and\nOch, 2004; Liang et al., 2006). We use the one\nproposed in the latter, as shown in (2).\nSBLEU =4/summationdisplay\ni=1BLEUi(cand,ref )\n24−i+1(2)\nFigure 1 shows a sample of 10 candidate En-\nglish translations from an n-best list for a French\nsentence. The first column gives the respective\ndecoder cost (log-linear score) used to rank an n-\nbest list and the third column displays the sBLEU\n(sentence-level BLEU score) for each candidate\ntranslation. The candidate in the first position in\nthe figure is the 1-best according to the decoder.\nThe 7th-ranked sentence is most similar to the ref-\nerence translation and hence awarded the highest170\nDecoder Sentence sBLEU\n-5.32 is there not here two weights , two measures ? 0.0188\n-5.50 is there not here double standards ? 0.147\n-5.66 are there not here two weights , two measures ? 0.0125\n-6.06 is there not double here ? 0.025\n-6.15 is there not here double ? 0.025\n-6.17 is it not here two sets of standards ? 0.0677\n-6.28 is there not a case of double standards here ? 0.563\n-6.37 is there not here two weights and two yardsticks ? 0.0188\n-6.38 is there no double here ? 0.0190\n-6.82 is there not here a case of double standards ? 0.563Figure 1: Sample from an n-best list of\ntranslation candidates for the input sentence\nN’y a-t-il pas ici deux poids, deux mesures? ,\nwhose reference translation is: Is this not a\ncase of double standards?\nsBLEU score. This sentence is the oracle trans-\nlation for the given French sentence. Note that\nthere may be ties where the oracle is concerned\n(the 7th and the 10th ranking sentence have the\nsame sBLEU score). Such issues are discussed and\ndealt with in section 3.4. Oracle-best hypotheses\nare a good indicator of what could be achieved if\nour MT models were perfect, i.e. 
discriminated\nproperly between good and bad hypotheses.\n3.2 Baseline System\nThe set of parallel sentences for all our exper-\niments is extracted from the WMT 20091Eu-\nroparl (Koehn, 2005) dataset for the language\npair French–English after filtering out sentences\nlonger than 40 words (1,050,398 sentences for\ntraining and 2,000 sentences each for development\n(test2006 dataset) and testing (test2008 dataset)).\nWe train a 5-gram language model using SRILM\n2with Kneser-Ney smoothing (Kneser and Ney\n, 1995). We train the translation model us-\ning GIZA++3for word alignment in both di-\nrections followed by phrase-pair extraction using\ngrow-diag-final heuristic described in Koehn et al.,\n(2003). The reordering model is configured with\na distance-based reordering and monotone-swap-\ndiscontinuous orientation conditioned on both the\nsource and target languages with respect to previ-\nous and next phrases.\nWe use the Moses (Koehn et al., 2007) phrase-\nbased beam-search decoder, setting the stack size\nto 500 and the distortion limit to 6, and switch-\ning on the n-best-list option. Thus, this baseline\nmodel uses 15 features, namely 7 distortion fea-\ntures ( d1through d7), 1 language model feature\n(lm), 5 translation model features ( tm1 through\ntm5), 1 word penalty ( w), and 1 unknown word\npenalty feature. Note that the unknown word fea-\n1http://www.statmt.org/wmt09/\n2http://www-speech.sri.com/projects/srilm/\n3http://code.google.com/p/giza-pp/ture applies uniformly to all the candidate transla-\ntions of a sentence, and is therefore dropped from\nconsideration in our experiments.\n3.3 Recalculating Lambdas\nIn contrast to mainstream reranking approaches in\nthe literature, this work analyzes the 14 remaining\nbaseline features optimized with MERT and used\nby the decoder to generate an initial n-best list of\ncandidates. No new features are added, the exist-\ning feature values are not modified, and we only\nalter the feature weights used to combine the indi-\nvidual features in a log-linear model. We are in-\nterested in observing the influence of each of these\nbaseline features on the position of oracles in the\nn-best lists. This is achieved by comparing a spe-\ncific feature value for a 1-best translation against\nits oracle. These findings are then used in a novel\nway to recompute the lambdas using one of the fol-\nlowing two formulae.\n•RESCsum: For each of the 14 features, the\nnew weight factors in the difference between\nthe mean feature value of oracles and the\nmean feature value of the 1-bests.\nλnew=λold+ (¯foracle−¯f1best)(3)\n•RESCprod: For each of the 14 features, the\nnew weight factors in the ratio of the mean\nfeature value of oracles to the mean feature\nvalue of the 1-bests.\nλnew=λold∗¯foracle\n¯f1best(4)\nBoth formulae aim to close the gap between\nfeature values of oracle translations and those of\nthe baseline 1-best translations. The recalculated\nweights are then used to rescore the n-best lists, as\ndescribed in section 4.171\nAccordingly, our experiments are essentially fo-\ncused on recomputing the original set of feature\nweights rather than the feature values. We reiterate\nthat the huge mismatch between oracles and 1-best\ntranslations implies that MERT is sub-optimal (He\nand Way , 2009) despite being tuned on translation\nquality measures such as (document-level) BLEU.\nIn recomputing weights using oracle translations,\nthe system tries to learn translation hypotheses\nwhich are closest to the reference. 
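For clarity, the two update rules of Equations (3) and (4) can be written out as below. This is a minimal sketch with our own variable names, not the authors' implementation; it assumes the per-feature means over the filtered oracle and 1-best hypotheses have already been computed.

```python
# Minimal sketch of the two lambda-rescoring rules (Eqs. 3 and 4).
# lambda_old, mean_oracle, mean_1best: dicts keyed by the 14 feature names.
def resc_sum(lambda_old, mean_oracle, mean_1best):
    """RESCsum: add the gap between mean oracle and mean 1-best feature values."""
    return {f: lambda_old[f] + (mean_oracle[f] - mean_1best[f]) for f in lambda_old}

def resc_prod(lambda_old, mean_oracle, mean_1best):
    """RESCprod: scale by the ratio of mean oracle to mean 1-best feature values."""
    # Assumes non-zero mean 1-best feature values.
    return {f: lambda_old[f] * (mean_oracle[f] / mean_1best[f]) for f in lambda_old}
```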
These compu-\ntations and rescorings are learned on the develop-\nment set ( devset ), and then carried over to rescor-\ning the n-best lists of the testset (blind dataset).\n3.4 Oracle Filtering\nA system composed of all the oracle hypotheses\nserves as an upper bound on any improvement due\nto reranking. However, one must carefully eval-\nuate these so-called oracle translations. There is\ninherent noise due to:\n•the existence of a large population of identical\nsurface-level hypotheses (but different phrase\nsegmentations) in the n-best list;\n•the tendency of BLEU and other metrics to\naward the same score to sentences differing\nin the order or lexical choice of one or two\nwords only.\nRevisiting the n-best list given in Figure 1, note\nthat both the 7th and the 10th sentence as well as\nthe 1st and 8th sentence were awarded the same\nsBLEU score. There is no way to distinguish be-\ntween the two as far as the oracle is concerned.\nFurthermore, note that this sample was carefully\nselected to show the variety of the n-best list. That\nis, in reality, approximately 20 hypotheses (iden-\ntical to the 1-best hypothesis at the surface-level)\noccur between the 1st and the 2nd sentence in the\nfigure.\nN-BEST DIFF DIVERSE ACCEPTED\n100 62.10% 48.55% 27.10%\n500 55.50% 57.75% 30.50%\n1000 54.05% 61.40% 32.80%\nTable 1: Statistics of % of oracle sentences consid-\nered for rescoring experiments\nSince the underlying strength of all our experi-\nments relies primarily on the goodness of oracles,we explore a combination of two filtering strate-\ngies to increase the confidence in oracles, namely\nDIFFERENCE and D IVERSITY .\nThe D IFFERENCE filter computes the difference\nin the sentence-level BLEU scores of the hypothe-\nses at rank 1 and rank 2. Note that it is often\nthe case that more than one sentence occupies the\nsame rank. Thus when we compute the difference\nbetween rank 1 and rank 2, these are in actuality a\ncluster of sentences having the same scores. The\npurpose of this filter is to ensure that oracles (rank\n1) are “different enough” compared to the rest of\nthe sentences (rank 2 and beyond).\nThe D IVERSITY filter aims at ensuring that the\nspecific sentence has a wide variety of hypothe-\nses leading to a distinguishing oracle (selected us-\ning the previous filter). This is computed from the\nproportion of n-best translations represented by the\nsentences in rank 1 and rank 2 clusters (based on\nhow many sentences are present in rank 1 or 2).\nThe motivation behind this filter is to drop sen-\ntences whose n-best lists contain no more than 2\nor 3 clusters. In such cases, all the hypotheses\nare very similar to each other, when scored by the\nsBLEU metric. We used both filters in tandem be-\ncause this ensured that the sentences selected in\nour final list had an oracle which was significantly\ndifferent from the rest of the n-best list, and the n-\nbest list itself had a good variety of hypotheses to\nchoose from.\nThresholds for both filters were empirically de-\ntermined to approximate the average of their re-\nspective mean and median values. Sentences\nwhich possessed a value above both thresholds\nconstituted the set of true oracles used to recal-\nculate the lambdas for our rescoring experiments.\nTable 1 shows the number of sentences passing\nthe Difference filter (column 2), the Diversity filter\n(column 3) and both (column 4: the accepted set\nof true oracles). Experiments were carried out for\n3 different sizes of n-best lists. 
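The two filters can be sketched roughly as follows. This is our reconstruction of the description above; the exact definition of a score cluster, and the thresholds (set near the average of mean and median in the paper), are assumptions.

```python
# Rough sketch of the DIFFERENCE and DIVERSITY filters of Section 3.4
# (our reconstruction; a "cluster" is taken to be a distinct sBLEU value).
def difference_score(sbleu_scores):
    """Gap between the top sBLEU cluster (rank 1) and the next cluster (rank 2)."""
    distinct = sorted(set(sbleu_scores), reverse=True)
    return distinct[0] - distinct[1] if len(distinct) > 1 else 0.0

def diversity_score(sbleu_scores):
    """Fraction of hypotheses falling outside the two highest-scoring clusters."""
    top_two = set(sorted(set(sbleu_scores), reverse=True)[:2])
    return sum(1 for s in sbleu_scores if s not in top_two) / len(sbleu_scores)

def is_trusted_oracle(sbleu_scores, diff_threshold, div_threshold):
    """Keep a sentence only if both filters exceed their empirical thresholds."""
    return (difference_score(sbleu_scores) > diff_threshold
            and diversity_score(sbleu_scores) > div_threshold)
```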
It is observed that\nall three sets follow the same trend.\n4 Experimental Analysis\nOur analyses of the differences between the 1-best\nand the oracle translations follows. We perform\nall our experiments on 3 different n-best list sizes–\n100, 500, and 1000.172\n(a)DEVSET (b)TESTSET\nRANGE 100-BEST 500-BEST 1000-BEST 100-BEST 500-BEST 1000-BEST\nRank 1 725 402 308 725 415 324\nRank 2 to 5 194 87 68 176 95 69\nRank 6 to 10 121 52 37 125 67 53\nRank 11 to N 960 1459 1587 974 1423 1554\nTable 2: Number of times an oracle occurs in a particular range of ranks in the n-best lists of (a)D EVSET\nand (b)T ESTSET\n4.1 Distribution of Oracles\nBefore proceeding with our rescoring experiments,\nit is important to determine how the oracle trans-\nlations are distributed across the space of the base-\nline systems. Table 2 gives a summary of where (at\nwhat rank) each oracle candidate is placed in the\nn-best list of the development and test sets of 2000\nsentences each. It is evident that with increasing n-\nbest list size, the number of oracles in the top ranks\ndecreases. This is alarming as this increases the\ncomplexity of our problem with increasing n-best\nlist sizes. This is another reason why we filter or-\nacles, as described in the previous section. Oracle\nfiltering clearly shows that not all sentences have a\ngood quality oracle. This balances the tendency of\nhigh-ranking translations to be placed lower in the\nlist.\n4.2 System-level Evaluation\nWe extract the 14 baseline features for sentences\nfrom the devset of 2000 sentences using the\ntest2006 dataset selected via oracle filtering men-\ntioned previously. For each of these sentences, we\ncompare the 1-best and oracle-best features and\ncompute the mean value per feature. This is then\nused to compute two new sets of weights using the\nRESCsum and R ESCprod rescoring strategies, de-\nscribed in the previous section. We implemented\nour rescoring strategies on the devset and then ap-\nplied the 2 new sets of weights computed on the\ntestset of n-bests. Evaluation is done at a sys-\ntem level for both the development and testsets us-\ning BLEU (Papineni et al., 2002) and METEOR\n(Banerjee and Lavie, 2005). We also evaluate how\nmany sentences contain the oracle candidates in\nthe top position (rank 1). This is shown in Table\n3. The last row in each subsection labeled O R-\nACLE gives the upper bound on each system, i.e.\nperformance if our algorithm was perfect and all\nthe oracles were placed at position 1.\nWe also perform a Top5-BLEU oracle evalua-\ntion (shown in Table 4). The difference between\nthe evaluations in Tables 3 and 4 is that the lat-ter evaluates on a list of top-5 hypotheses for each\nsentence instead of the usual comparison of a sin-\ngle translation hypothesis with the reference trans-\nlation. The sentences used in Table 3 are present\nin the top 1 position of sentences used in Table\n4. This means that when BLEU and METEOR\nscores are evaluated at system-level, for each sen-\ntence, the translation (among 5) with the highest\nsBLEU score is selected as the translation for that\nsentence. This is similar to the post-editing sce-\nnario where human translators are shown ntrans-\nlations and are asked to either select the best or\nrank them. 
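The Top-5 evaluation reported in Table 4 can be paraphrased as the small helper below (ours, not the authors'): for each sentence, the hypothesis with the highest sentence-level BLEU among the five top-ranked candidates is retained before computing system-level scores.

```python
# Sketch of the Top-5 selection used for Table 4 (our paraphrase of the procedure).
def best_of_top_k(nbest, k=5):
    """nbest: decoder-ordered list of (hypothesis, sbleu) pairs for one sentence."""
    return max(nbest[:k], key=lambda pair: pair[1])[0]
```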
Some studies have used as many as 10\ntranslations together (Koehn and Haddow, 2009).\nWe only use 5 in our evaluation.\nWe observe that overall the R ESCsum system\nshows a modest improvement over the baseline in\nterms of METEOR scores, but not BLEU scores.\nThis trend is consistent across all the 3 n-best list\nsizes. We speculate that perhaps the reliance of\nMETEOR on both precision and recall as opposed\nto precision-based BLEU is a factor for this dis-\nagreement between metrics. We also observe that\nthe degree of improvement in the BLEU and ME-\nTEOR scores of each system from top-1 (Table 3)\nto top-5 (Table 4) is more obvious in the rescored\nsystems R ESCsumand R ESCprod compared to the\nbaseline. This gives weight to our observation that\nthe oracles have moved up, just not to the top po-\nsition.\n4.3 Per feature Comparison\nFigure 2 analyses which features favour how many\noracles over 1-best translations. The figures are\nin percentages. We only give values for 1000-best\nlists, because the results are consistent across the\nvarious n-best list sizes.\nThe oracles seems to be favoured by d2 (mono-\ntone orientation) and tm5 (phrase penalty) fea-\ntures. Note that this selection is arbitrary and\nchanges when the dataset changes. This means\nthat if we use a different D EVSET , a different set\nof features will favour the oracle rankings. Further173\n(a)DEVSET (b)TESTSET\nSYSTEM BLEU MET ORC BLEU MET ORC\nrescored on 100-best list\nBASE 32.17 61.34 36.25 32.47 61.80 36.25\nRESC sum 31.99 61.45 36.55 32.33 61.75 35.65\nRESC prod 32.13 61.35 36.30 32.46 61.78 35.60\nORACLE 34.90 63.65 100 35.26 64.01 100\nrescored on 500-best list\nBASE 32.17 61.34 20.10 32.47 61.80 20.75\nRESC sum 31.56 61.62 20.15 31.99 62.00 19.65\nRESC prod 32.08 61.30 20.15 32.43 61.75 20.65\nORACLE 36.45 64.70 100 36.80 65.12 100\nrescored on 1000-best list\nBASE 32.17 61.34 15.4 32.47 61.80 16.2\nRESC sum 31.45 61.48 15.7 31.84 61.87 15.45\nRESC prod 32.04 61.26 15.6 32.41 61.73 16.2\nORACLE 37.05 65.14 100 37.50 65.65 100\nTable 3: Summary of the Fr–En translation results on WMT (a)test2006 (devset) and (b)test2008 (testset)\ndata, using BLEU and METEOR metrics. The column labeled ORC refers to the % of sentences selected\nas the oracle w.r.t. BLEU metric.\n(a)DEVSET (b)TESTSET\nSYSTEM BLEU MET ORC BLEU MET ORC\nrescored on 100-best list\nBASE 5 32.83 61.95 45.95 33.17 62.34 45.05\nRESC sum 5 32.72 62.04 45.75 33.08 62.40 45.65\nRESC prod 5 32.78 61.92 45.80 33.16 62.34 45.00\nORACLE 34.90 63.65 100 35.26 64.01 100\nrescored on 500-best list\nBASE 5 32.83 61.95 24.45 33.17 62.34 25.50\nRESC sum 5 32.49 62.31 27.20 32.95 62.71 27.90\nRESC prod 5 32.74 61.89 24.75 33.12 62.30 25.80\nORACLE 36.45 64.70 100 36.80 65.12 100\nrescored on 1000-best list\nBASE 5 32.83 61.95 18.80 33.17 62.34 19.65\nRESC sum 5 32.45 62.27 20.90 32.85 62.68 21.85\nRESC prod 5 32.70 61.88 18.60 33.13 62.30 19.85\nORACLE 37.05 65.14 100 37.50 65.65 100\nTable 4: Top5 Eval: Summary of the Fr–En translation results on WMT (a)test2006 (devset) and\n(b)test2008 (testset) data, using BLEU and METEOR metrics on best of top 5 hypotheses. The col-\numn labeled ORC refers to the % of sentences selected as the oracle w.r.t. BLEU metric.\nexperimentation is required to determine whether\nthere is a pattern to this. 
Nevertheless, this com-\nputation provides some clue as to how the baseline\nfeature weights change during rescoring.\n4.4 Movement in Rankings\nTable 5 shows the number (n) of sentences (out of\n2000) which were moved up ( ↑), moved up to a\nposition in the top-5, moved down ( ↓), or moved\ndown from a position in the top-5, and the average\nnumber of positions moved (p) for both our rescor-\ning strategies. We observe that R ESCsumis more\neffective in promoting oracles than R ESCprod. Per-\nhaps it is no surprise that the R ESCsum formula\nresembles the highly effective perceptron formula\n(without the iterative loop) of Liang et al., (2006).\nThe similarity between the number of positionsmoved up and down explains why our rescoring\nstrategies fail to record a more marked improve-\nment at the system level.\n5 Discussion and Future Work\n5.1 Impact of MERT features on oracles\nWe try to re-estimate the weights of the baseline\nfeatures and observe the impact of them on oracle\nreranking. While a substantial amount of oracles\nare moved to the top-5 ranks (not necessarily to\nthe top-1), it does not automatically imply a better\nBLEU score. However, there is up to a 0.5% rela-\ntive improvement in the METEOR scores. Perhaps\nthis implies low quality oracles for at least some of\nthe sentences. Note that although we filter away\nsentences before recomputing lambdas, we imple-174\n(a)DEVSET (b)TESTSET\nSYS n↑ p↑ n5↑ n↓ p↓ n5↓ n↑ p↑ n5↑ n↓ p↓ n5↓\nrescored on 100-best list\nRsum 637 24 267 776 23 278 627 24 260 794 22 278\nRprod 590 10 94 534 11 89 559 10 93 587 12 93\nrescored on 500-best list\nRsum 840 122 212 875 121 185 869 129 277 850 111 199\nRprod 856 54 75 722 74 64 831 55 84 739 69 80\nrescored on 1000-best list\nRsum 908 237 180 878 248 147 933 247 198 870 215 176\nRprod 918 114 63 758 163 51 895 117 73 785 148 66\nTable 5: Movement of oracles in n-bests of (a) development set and (b) test set after rescoring the baseline\nsystem with weights learned from R ESCsumand R ESCprod: how many & how much?\nFigure 2: Results for a 1000-best list of filtered or-\nacles: For how many sentences (% given on the X-\naxis) does a baseline feature (given on the Y-axis)\nfavour the oracle translation (black bar) over the\n1-best translation (light grey bar). The dark grey\nbar (third band in each bar) denotes percentage of\nsentences having the same value for its oracle and\n1-best hypothesis\n.\nment our rescoring strategies on the entire set (i.e.\nno filtering). Therefore the devset and testset may\ncontain noise which makes it difficult for any im-\nprovements to be seen. Overall, there are certain\nbaseline features (see section 4.3), which favour\noracles and help in pushing them up the n-best list.\nDuh and Kirchhoff, (2008) conclude that log-\nlinear models often underfit the training data in\nMT reranking and that is the main reason for the\ndiscrepancy between oracle-best hypothesis and\nreranked hypothesis of a system. We agree with\nthis statement (cf. figure 2). However, we believe\nthat there is scope for improvement on the baseline\nfeatures (used in decoding) before extracting more\ncomplex features for reranking.5.2 Role of oracles in boosting translation\naccuracy\nWe believe oracle-based training to be a viable\nmethod. In future work, we intend to explore more\nfeatures (especially those used in the reranking lit-\nerature such as Och et al., (2004)) to help promote\noracles. We believe that our oracle-based method\ncan help select better features for reranking. 
We\nalso plan to use a host of reranking features (Shen\net al., 2004) and couple them with our R ESCsum\nrescoring strategy. We will also generate a feature\nbased on our rescoring formula and use it as an ad-\nditional feature in discriminative reranking frame-\nworks. We have used here sentence-level BLEU as\nopposed to system-level BLEU as used in MERT\nfor oracle identification. We plan to use metrics\nbetter suited for sentence-level like TER (Snover\net al., 2006).\n6 Conclusion\nWe analyze the relative position of oracle transla-\ntions in the n-best list of translation hypotheses to\nhelp reranking in a PB-SMT system. We propose\ntwo new rescoring strategies. In general, the im-\nprovements provided by reranking the n-best lists\nis dependent on the size of nand the type of trans-\nlations produced in the n-best list. We see an im-\nprovement in METEOR scores. To conclude, ora-\ncles have much to contribute to the ranking of bet-\nter translations and reducing the model errors.\nAcknowledgements\nThis work is supported by Science Foundation Ire-\nland (grant number: 07/CE/I1142). This work\nwas carried out during the second author’s time\nat CNGL in DCU. The authors wish to thank the\nanonymous reviewers for their helpful insight.175\nReferences\nBanerjee, Satanjeev and Alon Lavie. 2005. METEOR:\nAn Automatic Metric for MT Evaluation with Im-\nproved Correlation with Human Judgments. ACL\n2005, Proceedings of the Workshop on Intrinsic and\nExtrinsic Evaluation Measures for MT and/or Sum-\nmarization at the 43rd Annual Meeting of the Asso-\nciation for Computational Linguistics , Ann Arbor,\nMichigan. 65–72.\nCallison-Burch, Chris, Miles Osborne, and Philipp\nKoehn. 2006. Re-evaluating the role of BLEU in\nMachine Translation Research. EACL 2006, Pro-\nceedings of the 11th Conference of the European\nChapter of the Association for Computational Lin-\nguistics , Trento, Italy. 249–256.\nDuh, Kevin and Katrin Kirchhoff. 2008. Beyond Log-\nLinear Models: Boosted Minimum Error Rate Train-\ning for N-best Re-ranking. ACL 2008, Proceedings\nof the 48rd Annual Meeting of the Association for\nComputational Linguistics Short Papers , Columbus,\nOhio. 37–40.\nGermann, Ulrich, Michael Jahr, Kevin Knight, Daniel\nMarcu, and Kenji Yamada. 2004. Fast and Optimal\nDecoding for Machine Translation. Artificial Intelli-\ngence , 154. 127–143.\nHasan, Sa ˇsa, Richard Zens, and Hermann Ney. 2007.\nAre Very Large N-best List Useful for SMT?. In\nProceedings of NAACL-HLT ’07 , Rochester, New\nYork. 57–60.\nHe, Yifan and Andy Way. 2009. Improving the Ob-\njective Function in Minimum Error Rate Training.\nInProceedings of Machine Translation Summit XII ,\nOttawa, Canada. 238–245.\nKneser, R. and Hermann Ney. 1995. Improved\nBacking-off for n-gram Language Modeling. In Pro-\nceedings IEEE International Conference on Acous-\ntics, Speech, and Signal Processing , V ol. 1, Detroit,\nMichigan. 181–184.\nKoehn, Philipp, Franz Och, and Daniel Marcu. 2003.\nStatistical Phrase-Based Translation. In Proceedings\nof NAACL ’03 , Edmonton, Canada. 48–54.\nKoehn, Philipp. 2005. Europarl: A Parallel Corpus\nfor Statistical Machine Translation. In Proceedings\nof Machine Translation Summit X , Phuket, Thailand.\n79–86.\nKoehn, Philipp, Hieu Hoang, Alexandra Birch, Chris\nCallison-Burch, Marcello Federico, Nicola Bertoldi,\nBrooke Cowan, Wade Shen, Christine Moran,\nRichard Zens, Chris Dyer, Ondrej Bojar, Alexandra\nConstantin, and Evan Herbst. 2007. 
Moses: Open\nSource Toolkit for Statistical Machine Translation.\nACL 2007, 45th Annual Meeting of the Association\nfor Computational Linguistics, demonstration ses-\nsion, Prague, Czech Republic. 177–180.Koehn, Philipp and Barry Haddow. 2009. Interac-\ntive Assistance to Human Translators using Statis-\ntical Machine Translation Methods. In Proceedings\nof MT Summit XII , Ottawa, Canada. 73–80.\nLiang, Percy, Alexandre Bouchard-Cote, Dan Klein,\nand Ben Taskar. 2006. An end-to-end Discrimi-\nnative Approach to Machine Translation. COLING-\nACL 2006, 21st International Conference on Compu-\ntational Linguistics and 44th Annual Meeting of the\nAssociation for Computational Linguistics , Sydney,\nAustralia. 761–768.\nLin, Chin-Yew and Franz J Och. 2004. ORANGE: A\nMethod for Evaluating Automatic Evaluation Met-\nrics for Machine Translation. COLING 2004, Pro-\nceedings of the 20th International Conference on\nComputational Linguistics , Geneva, Switzerland.\n501–507.\nOch, Franz J and Hermann Ney. 2002. Discriminative\nTraining and Maximum Entropy Models for Statis-\ntical Machine Translation. ACL 2002, 40th Annual\nMeeting of the Association for Computational Lin-\nguistics , Philadelphia, PA. 295–302.\nOch, Franz J. 2003. Minimum Error Rate Training in\nStatistical Machine Translation. ACL 2003, 41st An-\nnual Meeting of the Association for Computational\nLinguistics , Sapporo, Japan. 160–167.\nOch, Franz J., Dan Gildea, Sanjeev Khudanpur, Anoop\nSarkar, Kenji Yamada, Alex Fraser, Shankar Kumar,\nLibin Shen, David Smith, Katherine Eng, Viren Jain,\nZhen Jin, and Dragomir Radev. 2004. A Smor-\ngasbord of Features for Statistical Machine Transla-\ntion. HLT-NAACL 2004, the Human Language Tech-\nnology Conference and the North American Chap-\nter of the Association for Computational Linguistics ,\nBoston, MA. 161–168.\nPapineni, Kishore, Salim Roukos, Todd Ward, and Wei-\nJung Zhu. 2002. BLEU: A Method for Automatic\nEvaluation of Machine Translation. ACL 2002, 40th\nAnnual Meeting of the Association for Computa-\ntional Linguistics , Philadelphia, PA. 311–318.\nShen, Libin, Anoop Sarkar, and Franz J. Och. 2004.\nDiscriminative Reranking for Machine Transation.\nHLT-NAACL 2004, the Human Language Technol-\nogy Conference and the North American Chapter\nof the Association for Computational Linguistics ,\nBoston, MA. 177–184.\nSnover, Matthew, Bonnie Dorr, Richard Schwartz, Lin-\nnea Micciula, and John Makhoul. 2006. A Study of\nTranslation Edit Rate with targeted Human Annota-\ntion. AMTA 2006, 7th Conference of the Association\nfor Machine Translation in the Americas , Cambrdge,\nMA. 223–231.\nYamada, Kenji and Ion Muslea. 2009. Reranking\nfor Large-Scale Statistical Machine Translation. In\nCyril Goutte, Nicola Cancedda, Marc Dymetman,\nand George Foster (eds.), Learning Machine Trans-\nlation . MIT Press, Cambridge, MA. 151–168.176",
"main_paper_content": null
}
|
{
"decision": "Unknown",
"reviews": []
}
| 0 | 0 |
[] |
[] |
{
"id": "21CzJfpwdcQ",
"year": null,
"venue": "EAMT 2010",
"pdf_link": "https://aclanthology.org/2010.eamt-1.6.pdf",
"forum_link": "https://openreview.net/forum?id=21CzJfpwdcQ",
"arxiv_id": null,
"doi": null
}
|
{
"title": "A fully unsupervised approach for mining parallel data from comparable corpora",
"authors": [
"Thi-Ngoc-Diep Do",
"Laurent Besacier",
"Eric Castelli"
],
"abstract": "Thi Ngoc Diep Do, Laurent Besacier, Eric Castelli. Proceedings of the 14th Annual conference of the European Association for Machine Translation. 2010.",
"keywords": [],
"raw_extracted_content": "A Fu\nlly Unsupervised Approach for Mining Parallel Data from \nComparable Corpora \nDo Thi \nNgoc Diep1\n,2, \nLaurent Besacier1,\n Eric Castelli2 \n(1)\n LIG Laboratory, CNRS/UMR-5217, Grenoble, France \n(2) MICA Center, CNRS/UMI-2954, Hanoi, Vietnam \[email protected] \n \n Abstr\nact \nT\nhis paper presents an unsupervised \nmethod for extracting parallel sentence \npairs from a comparable corpus. A trans-\nlation system is used to mine the compa-\nrable corpus and to detect parallel sen-\ntence pairs. An iterative process is im-plemented not only to increase the num-\nber of extracted parallel sentence pairs \nbut also to improve the overall quality of the translation system. A comparison be-tween this unsupervised method and a \nsemi-supervised method is also pre-\nsented. The unsupervised method was tested in a hard condition: no available \nparallel corpus to bootstrap the process \nand the comparable corpus contained up to 50% of non parallel data. The experi-\nments conducted show that the unsuper-\nvised method can be really applied in the case of lacking parallel data. While pre-liminary experiments are conducted on \nFrench-English translation, this unsuper-\nvised method is also applied successfully to a low e-resourced language pair \n(French-Vietnamese). \n1 Intr\noduction \nOve\nr the past fifty years of development \n(Hutchins, 2001), machine translation (MT) has obtained good results when applied to several \npairs of languages such as English, French, Italia, Japanese, etc. Many approaches for MT have \nbeen proposed, such as: rule-based (direct \ntranslation, interlingua-based, transfer-based), corpus-based (statistical, example-based) as well as hybrid approaches. However, research on \n \n \n© \n2010 European Association for Machine Translation. \n sta\ntistical MT for low e-resourced languages \nalways faces the challenge of getting enough data to support any particular approach. \nStatistical machine translation (SMT) uses sta-\ntistical method based on large parallel bilingual corpora of source and target languages to build a \nstatistical translation model for source/target lan-\nguages and a statistical language model for target language. The two models and a search module are then used to decode the best translation \n(Brown et al, 1993; Koehn et al, 2003). Thus, a \nlarge parallel bilingual text corpus is a prerequi-site. However, such a corpus is not always avail-\nable, especially for low e-resourced languages. \nThe most common methods to build parallel \ncorpora consist in automatic methods which col-lect parallel sentence pairs from the Web (Resnik \nand Smith, 2003; Kilgarriff and Grefenstette, 2003), or alignment methods which extract paral-lel documents/sentences from two monolingual \ncorpora (Koehn, 2005; Gale and Church, 1993, \nPatry and Langlais, 2005). There is also the method of extracting parallel sentence pairs from \na comparable corpus (Zhao and Vogel, 2002; \nFung and Cheung, 2004; Munteanu and Marcu, 2006). Abdul-Rauf and Schwenk (2009) present a semi-supervised extracting method requiring an \ninitial parallel corpus in order to build a first \nSMT system that will be used during the semi-supervised extraction (see more in section 2.1). \nWe assume that in the case of a low e-resourced \nlanguage pair, even a small parallel corpus might not be available to start developing a SMT sys-\ntem. 
So, does a fully unsupervised method, start-\ning with a highly noisy parallel corpus, allow to solve the problem of lacking parallel data? \nFirstly, it is important to note that we consider \nthat “comparable” and “noisy parallel” have equivalent meanings in the context of our work, since a “noisy parallel” corpus can be extracted \nfrom a “comparable” corpus using a minimal \ninformation retrieval component (based on basic \n[EAMT May 2010 St Raphael, France]\nfeatures like publishing date, sentence length, \netc.). Advanced IR approaches for mining com-\nparable corpora are outside of the scope of this \npaper whose goal is exactly to get rid of complex \nIR approaches by using an iterative process \nbased on SMT. \nThis paper presents a fully unsupervised ex-\ntracting method, which is compared to a semi-\nsupervised extracting method. The first results \nshow that the unsupervised method can be really \napplied in the case of lacking parallel data. The \nrest of the paper is organized as follows. Section \n2 describes the methods of extracting parallel \nsentence pairs from a noisy parallel corpus: semi-\nsupervised method and fully unsupervised \nmethod. Section 3 presents our experiments and \nour results on testing the unsupervised method. \nThe next section presents an application of this \nmethod for a real low e-resourced language pair: \nVietnamese-French. The last section concludes \nand gives some perspectives. \n2 Mining parallel data from compara-\nble corpora \n2.1 Extracting methods \nA comparable corpus contains data which are not \nparallel but “still closely related by conveying \nthe same information” (Zhao and Vogel, 2002). \nIt may contain “non-aligned sentences that are \nnevertheless mostly bilingual translations of the \nsame document” (Fung and Cheung, 2004) or \ncontain “various levels of parallelism, such as \nwords, phrases, clauses, sentences, and \ndiscourses, depending on the corpora \ncharacteristics” (Kumano et al., 2007). \nExtracting parallel data from comparable cor-\npus has been presented in some previous works. \nZhao and Vogel (2002) propose a maximum like-\nlihood criterion which combines sentence length \nmodel and a statistical translation lexicon model \nextracted from an already existing aligned paral-\nlel corpus. An iterative process is applied to re-\ntrain the translation lexicon model with the ex-\ntracted data. Munteanu and Marcu (2006) present \na method for extracting parallel sub-sentential \nfragments from a very non-parallel corpus. Each \nsource language document is translated into tar-\nget language using a bilingual lexicon/dictionary. \nThe target language document which matches \nthis translation is extracted from a collection of \ntarget language documents. A probabilistic trans-\nlation lexicon based on the log likelihood-ratio is \nused to detect parallel fragments from this docu-\nment pair. Abdul-Rauf and Schwenk (2009) pre-sent a similar technique, but a proper statistical \nmachine translation system is used instead of the \nbilingual dictionary, and the evaluation metric \nTER is used to decide the degree of parallelism \nbetween sentences. Sarikaya et al. (2009) intro-\nduce an iterative bootstrapping approach in \nwhich the extracted sentence pairs are then added \nto the initial parallel corpus to rebuild the SMT \nsystem. All these methods are presented as effec-\ntive methods to extracting parallel frag-\nments/sentences from a comparable corpus. 
\n2.2 Semi-supervised v/s Unsupervised \nlearning method \nThese above methods can be modeled as fig-\nure 1a, with a translation phase and a filtering \nphase (with or without iterations). The source \nside of a comparable corpus D is translated by \nusing translation module S\n0 \n(a translation lexicon \nmodel or a proper statistical machine translation \nsystem). The translated output is then compared \nwith the target side of the corpus D and filtered \nby filtering module (using a score or an evalua-\ntion metric). These methods can be considered as \nsemi-supervised methods which require an initial \nparallel corpus C\n1\n (or at least a bilingual diction-\nary) to build the translation module. We assume \nthat in the case of low e-resourced languages, \nthis parallel corpus, even small, may not be \navailable. So, we try to propose a fully unsuper-\nvised method, here, where the starting point is \njust a simple noisy comparable corpus, without \nusing additional parallel data. \n \n \n \n \n \nFigure 1. Semi-supervised v/s unsupervised \nmethods. \nIn the unsupervised learning scheme (figure \n1b), the translation module S\n0\n is built based on \nanother comparable corpus C\n2\n and the iterative \nscheme is recommended. One of the challenges \nof this work is to see if such a different starting \npoint (noisy comparable corpus, versus truly par-\nF\nilter\ning \nmodule \nComparable corpus: C2 \nComparable \ndata: D\n \nUnsupervised\n \nTransl\na\nt\nion \nmodule (S\n0\n)\n \n~ \nPara\nl-\nlel data \nF\nilter\ning \nmodule \nParallel corpus: C1 \nComparable \ndata: D\n \nSemi-supervised\n \nTransl\na\ntion \nmodule (S\n0\n)\n \nPara\nl\nlel \ndata \nallel corpus) can still lead to the design of an ex-\ntracting system and also improve the quality of \nthe overall translation system. \nIn our research, we focus on mining the paral-\nlel sentence pairs. The translation module S\n0\n is a \nstatistical machine translation system, and filter-\ning module bases on evaluation metric estimated \nfor each sentence pair. Several evaluation metrics \nare used to determine which one is the most suit-\nable: BLEU (Papineni et al., 2002), NIST (Dod-\ndington, 2002), TER (Snover et al., 2006) and a \nmodified PER* (see details in section 3.3). A \npair is considered as parallel if its evaluation \nmetric is larger (for BLEU, NIST, PER* metrics) \nor smaller (for TER metric) than a threshold. \nThe extracted sentence pairs are then com-\nbined with the system S\n0\n in several ways to cre-\nate a new translation module. An iterative proc-\ness is performed which re-translates the source \nside by this new translation system, re-calculates \nthe evaluation metric and then re-filters the paral-\nlel sentence pairs. We hope that each iteration \nnot only increases the number of extracted paral-\nlel sentence pairs but also improves the quality of \nthe translation system. \nThe extracted parallel data are re-used in dif-\nferent combinations: \n- W1: the translation system at step i is re-\ntrained on a training corpus consisting of C2 and \nE\ni-1\n (the extracted data from the last iteration); E\n0\n \nbeing the data extracted when translation system \nis trained on C2 only (S\n0\n). \n- W2: the translation system at step i is re-\ntrained on training corpus consisting of C2 and \nE\n0\n+E\n1\n+…+E\ni-1\n (the extracted data from the pre-\nvious iterations). \n- W3: at iteration i, a new separate phrase-\ntable is built based on the extracted data E\ni-1\n. 
The \ntranslation system decodes using both phrase-\ntable of S\n0\n and this new one (log-linear model) \nwithout weighting them. \n- W4: the same combination as W3, but the \nphrase-table of S\n0\n and the new one are weighted, \ne.g. 1:2. \n3 Preliminary experiments for French-\nEnglish SMT \nIn this section, we present experiments on unsu-\npervised method, in comparison with those on \nsemi-supervised method. Two systems were \nbuilt, one based on semi-supervised method \n(Sys1), another based on unsupervised method \n(Sys2). 3.1 Data preparation \nWe chose French-English languages for these \npreliminary experiments. A noisy parallel corpus \nwas “simulated” by gathering parallel and non-\nparallel sentence pairs in order to control the pre-\ncision and the recall of the extracting method. \nThe correct parallel sentence pairs were taken \nfrom the Europarl corpus, version 3 (Koehn, \n2005). A significant number of wrong sentence \npairs were added in the data (about 50%). \nTo make it comparable with the real case \ntreated in section 4 (a low e-resourced language \npair), the size of data was chosen small for this \npreliminary setup. The corpus C1 contains only \n50K correct parallel sentence pairs. The corpus \nC2 contains 25K correct parallel sentence pairs \n(withdrawn from C1) and 25K wrong sentence \npairs. The corpus D, the input data for extracting \nprocess, was built from 10K correct parallel sen-\ntence pairs and 10K wrong sentence pairs, which \nwere different from sentence pairs of C1 and C2. \nThe correct and the wrong sentence pairs of D \nwere marked to calculate the precision and the \nrecall later. \n3.2 System construction \nBoth systems Sys1 and Sys2 were constructed \nusing the Moses toolkit (Koehn et al., 2007). \nThis toolkit contains all of components needed to \ntrain the translation model. It also contains tools \nfor tuning these models using minimum error \nrate training and for evaluating the translation \nresult using the BLEU score. We used the default \nsettings in Moses: \n- GIZA++ (Och and Ney 2003) was used for \nword alignments, the “ -alignment” option \nfor phrase extraction was “ grow-diag-final-\nand” \n- 14 features in total were used in the log-\nlinear model: distortion probabilities (6 fea-\ntures), one tri-gram language model prob-\nability, bidirectional translation probabilities \n(2 features) and lexicon weights (2 features), \na phrase penalty, a word penalty and a dis-\ntortion distance penalty. \n- A 3-gram target language model was built \nusing the SRILM Toolkit (Stolcke, 2002). \nThe target (English) language model was built \nfrom the English part of the entire Europarl cor-\npus. The baseline translation models were built \nfrom corpus C1 and C2 respectively. \n3.3 Starting with parallel or comparable \ncorpus? \nOne question that we want to answer first is \nwhether the translation system based on a noisy \nparallel corpus can be used to filter the input data \nlike the translation system based on parallel cor-\npus does. To examine this problem, the French \nside of corpus D was translated by Sys1 and \nSys2. Then, the translated outputs were com-\npared with the English side of the corpus D. Four \nevaluation metrics were used for this compari-\nson: BLEU, NIST, TER and PER*. Our modified \nposition-independent word error rate (PER*) is \ncalculated based on the similarity, while the PER \n(Tillmann et al., 1997) measures an error (the \ndifference of words occurring in hypotheses and \nreference). 
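The iterative translate-filter-retrain scheme and the PER*-based filtering described above can be summarised with a short sketch. The paper does not give an explicit formula for PER*, so the bag-of-words overlap below is only one plausible reading of "calculated based on the similarity"; likewise, train_smt and its translate method are hypothetical stand-ins for a full Moses training/decoding run, not the authors' actual code.

```python
from collections import Counter

def per_star(hypothesis: str, reference: str) -> float:
    """Position-independent word overlap in [0, 1] (higher = more parallel).
    This is an assumed reading of PER*: the exact definition is not given."""
    hyp, ref = hypothesis.split(), reference.split()
    if not hyp or not ref:
        return 0.0
    matches = sum((Counter(hyp) & Counter(ref)).values())
    return matches / max(len(hyp), len(ref))

def filter_pairs(corpus_d, translate, threshold=0.3):
    """Keep (source, target) pairs whose translated source side overlaps the
    target side by at least `threshold` (0.3 maximises recall below)."""
    return [(src, tgt) for src, tgt in corpus_d
            if per_star(translate(src), tgt) >= threshold]

def iterate_w2(corpus_c2, corpus_d, train_smt, iterations=6, threshold=0.3):
    """Iterative extraction with the W2 combination: at every step the system
    is retrained on C2 plus everything extracted so far. `train_smt` is a
    hypothetical wrapper around a Moses training run whose result exposes a
    `translate(sentence)` method."""
    system = train_smt(corpus_c2)            # S0, built from the noisy corpus C2 only
    extracted = []                            # union E0 + E1 + ... + E_{i-1}
    for _ in range(iterations):
        new_pairs = filter_pairs(corpus_d, system.translate, threshold)
        extracted = list({*extracted, *new_pairs})   # W2 keeps all pairs extracted so far
        system = train_smt(corpus_c2 + extracted)    # retrain on C2 + extracted data
    return system, extracted
```

Each pass re-translates D with the latest system, re-scores the pairs and re-filters them; this is the loop whose behaviour is analysed in the experiments that follow.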
\n \nThe distributions of evaluation scores for cor-\nrect parallel sentence pairs and wrong sentence \npairs were calculated and presented in figure 2. \nFrom these distributions, we can make the fol-\nlowing comments: first, the distributions of \nscores have the same shape between Sys1 and \nSys2. Especially, the distributions of scores for \nthe wrong pairs were nearly identical in both sys-\ntems. So, a noisy parallel corpus can replace a \nparallel corpus for constructing an initial transla-\ntion system. Remember that the initial corpus \nhere contains up to 50% non-parallel sentence \npairs. Another important result is that the PER*, \na simple and easily calculated score, can be con-\nsidered as the best score to filter the correct par-\nallel sentence pairs (while TER gave poor result \nfor our experimental setup). Table 1 presents the \nprecision and the recall of filtering parallel sen-\ntence pairs from both systems: Sys1 and Sys2. \n \nSys1 – semi-supervised method \nFiltered by\n \nFound Correct\n \nPrecision\n \nRecall\n \nF1-score \nBleu=0.1 6908\n \n6892\n \n99.76\n \n68.92\n \n81.52\n \nNist=0.4 8350\n \n8347\n \n99.96\n \n83.47\n \n90.97\n \nPer*=0.3\n \n10342\n \n9785\n \n94.61\n \n97.85\n \n96.20\n \nPer*=0.4\n \n9390\n \n9333\n \n99.39\n \n93.33\n \n96.27\n \nSys2 – unsupervised method \nFiltered by\n \nFound Correct\n \nPrecision\n \nRecall\n \nF1-score \nBleu=0.1 6233\n \n6218\n \n99.75\n \n62.18\n \n76.61\n \nNist=0.4 7110\n \n7108\n \n99.97\n \n71.08\n \n83.08\n \nPer*=0.3\n \n10110\n \n9468\n \n93.65\n \n94.68\n \n94.16\n \nPer*=0.4\n \n8682\n \n8629\n \n99.38\n \n86.29\n \n92.37\n \n \nTable 1. Precision and recall of filtering parallel \nsentence pairs (given 10K correct pairs). \n \nFigure 2. Score distributions for semi-supervised \n(Sys1) and unsupervised (Sys2) methods. \n3.4 The iterations of the unsupervised \nmethod \nSection 3.3 has shown that translation system \nbased on a noisy parallel corpus can be used to \nfilter parallel data from another corpus. However \nthe result of filtering in Sys2 is lower than that in \nSys1 (for example, the number of correct ex-\ntracted sentence pairs is reduced (table1)). So, we \npropose, in this section, an iterative process in \norder to improve the quality of the translation \nsystem, and then to increase the number of cor-\nrectly extracted sentence pairs. \n \nIncreasing the number of correct extracted \nsentence pairs : In Sys2, the extracted sentence \npairs were combined with the baseline system S\n0\n \nin four ways (as mentioned in section 2.2). In \norder to receive the maximum number of correct \nextracted sentence pairs, for all iterations we \nchose the evaluation score PER* with the thresh-\nold=0.3, which gave the maximum re-\ncall=94.68% for the baseline system. \nFigure 3 presents the number of correctly ex-\ntracted sentence pairs after 6 iterations for four \ndifferent combinations: W1, W2, W3 and W4. \nThe number of correct extracted pairs was in-\ncreased in all cases; however the combination \nW2 brought the largest number of correct ex-\ntracted sentence pairs. \n \nFigure 3. Number of correctly extracted sentence \npairs after 6 iterations for four different combina-\ntions. \n \nIncreasing the precision and the recall of the \nfiltering process : The precision and the recall of \nthese four combinations are presented in figure 4. \nBecause the filtering process focused on extract-\ning the largest number of correct extracted sen-\ntence pairs, the precision was decreased. 
How-\never, using the combination W2, the recall after 6 \niterations (97.77) nearly reached the recall of the \nsemi-supervised system Sys1 (97.85). \n \n \nFigure 4. Precision and recall of filtering using \ndifferent combinations. Translation system evaluation : The quality of \nthe translation systems was also evaluated. A test \nset containing 400 French-English parallel sen-\ntence pairs was extracted from Europarl corpus. \nEach French sentence had only one English ref-\nerence. The quality was reported in BLEU and \nTER. Figure 5 gives the evaluation scores for the \nsystems after each iteration. \nThe translation system evaluation revealed an \nimportant result. The quality of the translation \nsystem was increased quickly during some first \niterations, but decreased after that. It can be ex-\nplained by the fact that, in the first iterations, a \nlot of new parallel sentence pairs were extracted \nand included to the translation model. However, \nin the next iterations, when the precision of the \nextracting process was decreased, more wrong \nsentence pairs were added to the system so the \ntranslation model got worse and the quality of \nthe translation system was reduced. \nIn fact, Sarikaya et al. (2009) presents a simi-\nlar system using a different evaluation metric for \nfiltering (Bleu), and use a combination similar to \nour W2 type. However, their research does not \nprovide a full explanation about why they choose \nBleu and this combination method, and further \nmore, the problem of decreasing the quality of \ntranslation system after several iterations is not \nmentioned. \n \n \nFigure 5. Translation system evaluations. \n \nAfter about 3 iterations, the Bleu score can in-\ncrease of about 2 points. Note that there is no \ntuning for the statistical models (no development \ndata set was used for this experimental setup). \n4 Application for Vietnamese - French \nlanguage pair \nVietnamese is the 14th widely-used language in \nthe world; however research on MT for Viet-\nnamese is rare. The earliest MT system for Viet-\nnamese is the system from the Logos Corpora-\ntion, developed as an English-Vietnamese system \nfor translating aircraft manuals during the 1970s \n(Hutchins, 2001). Until now, in Vietnam, there \nare only four research groups working on MT \n(Ho, 2005). \nWe focus on mining a bilingual news corpus \nfrom the Web and building a Vietnamese-French \nstatistical machine translation (SMT) system. In \na former paper (Do et al., 2009), we have pre-\nsented a mining method (named Method1 ) based \non publication date, special words and sentence \nalignment result. Firstly, possible parallel docu-\nment pairs are filtered by using publishing date \nand special words (numbers, attached symbols, \nnamed entities). Secondly, sentences in a possi-\nble parallel document pair are aligned using \nChampollion toolkit (Ma, 2006), which uses \nlexical information (lexemes, stop words, a bi-\nlingual dictionary, etc.). Finally, parallel sen-\ntences pairs are extracted based on the sentence \nalignment information, which combines docu-\nment length information and lexical information. \nThis method was applied to mine a text corpus \nfrom a Vietnamese daily news website, the Viet-\nnam News Agency\n1\n (VNA) (containing 20,884 \nFrench documents and 54,406 Vietnamese \ndocuments). This corpus used is a really compa-\nrable corpus because it tends to contain parallel \nsentences or rough translations of sentences on \nthe same topics. 
50,322 parallel sentence pairs were extracted using Method1. An SMT system for Vietnamese-French was then built using the Moses toolkit with the same default settings as described in section 3.2.

In this paper, the proposed unsupervised method is applied to the same corpus VNA. Instead of aligning sentences and filtering on sentence alignment information, we create a comparable corpus and apply the proposed unsupervised method to extract parallel sentence pairs. We then compare the unsupervised method with Method1.

1 http://www.vnagency.com.vn/

4.1 Preparing the data
Firstly, from the comparable corpus VNA, the number of possible parallel document pairs was reduced by using a publishing date filter. Then each sentence in a Vietnamese document was paired with all sentences in the possibly parallel French document. So a pair of one Vietnamese document (containing m sentences) and one French document (containing n sentences) produced m x n pairs of sentences. From the corpus VNA, we obtained a comparable corpus of 1,442,448 pairs of sentences, which is really noisy parallel. We only kept the pairs with a ratio of French sentence length to Vietnamese sentence length between 0.8 and 1.3. This gave a comparable corpus of 345,575 pairs of sentences (named C_all).

4.2 Building the initial translation system
In order to apply the proposed unsupervised method, we split the corpus C_all into two sets: an initial training corpus C2 and a mining corpus D (C2 and D are those referred to in figure 1b). To ensure a minimum quality for C2 (and consequently for the initial translation system S0), we propose the following cross-filtering process to extract C2 (sketched in code at the end of this section):
- Split the corpus C_all into 4 sub-corpora containing different sentence pairs: SC1 (85,011 sentence pairs), SC2 (85,008 sentence pairs), SC3 (86,529 sentence pairs), SC4 (89,027 sentence pairs).
- Build 4 different translation systems from the 4 sub-corpora: SC1 -> SMTsc1, SC2 -> SMTsc2, SC3 -> SMTsc3, SC4 -> SMTsc4.
- Apply the proposed unsupervised method to each pair (SC1, SMTsc2), (SC2, SMTsc1), (SC3, SMTsc4), (SC4, SMTsc3), with one iteration and a PER* threshold of 0.45, to ensure the reliability of the extracted sentence pairs (according to figure 2) while keeping an acceptable number of pairs to build the SMT system. We obtain the extracted sentence pair sets C2_1, C2_2, C2_3, C2_4; their union is considered reliable enough to serve as the C2 corpus. The rest is treated as corpus D.

Figure 6. Process to extract corpus C2, for the pairs (SC1, SMTsc2), (SC2, SMTsc1), etc.

Sub-corpus   Translated by   Nbr. of extracted pairs (C2)   Nbr. of remaining pairs (D)
SC1          SMTsc2          C2_1: 2916                     82095
SC2          SMTsc1          C2_2: 3495                     81513
SC3          SMTsc4          C2_3: 3820                     82709
SC4          SMTsc3          C2_4: 3892                     85135

Table 2. Extracted data for C2 and D.

After this step, we obtained a corpus C2 containing 14,123 sentence pairs and a corpus D containing 331,452 sentence pairs. The fully unsupervised method described in section 2.2 was then applied on C2 and D to filter more parallel sentence pairs.
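The cross-filtering procedure above can be summarised in code. The split into four sub-corpora, the cross pairing and the PER* threshold of 0.45 follow the description in this section; train_smt, its translate method and the per_star scoring function (the same assumed bag-of-words definition as in the earlier sketch) are illustrative placeholders rather than the authors' implementation.

```python
from collections import Counter

def per_star(hyp: str, ref: str) -> float:
    # Same assumed position-independent word-overlap score as in the earlier sketch.
    h, r = hyp.split(), ref.split()
    return sum((Counter(h) & Counter(r)).values()) / max(len(h), len(r)) if h and r else 0.0

def cross_filter(c_all, train_smt, threshold=0.45):
    """Split the noisy corpus C_all into four sub-corpora, train one SMT system
    per sub-corpus, and score each sub-corpus with a system trained on a
    *different* sub-corpus; pairs above the threshold form C2, the rest form D."""
    quarter = len(c_all) // 4
    sub = [c_all[i * quarter:(i + 1) * quarter] for i in range(3)]
    sub.append(c_all[3 * quarter:])                       # SC1 ... SC4
    systems = [train_smt(s) for s in sub]                  # SMTsc1 ... SMTsc4
    pairing = {0: 1, 1: 0, 2: 3, 3: 2}                     # (SC1,SMTsc2), (SC2,SMTsc1), ...
    c2, d = [], []
    for i, chunk in enumerate(sub):
        scorer = systems[pairing[i]]
        for src, tgt in chunk:
            bucket = c2 if per_star(scorer.translate(src), tgt) >= threshold else d
            bucket.append((src, tgt))
    return c2, d            # C2: reliable training pairs; D: remaining mining corpus
```

Scoring each sub-corpus with a system trained on different data avoids rewarding a system for reproducing its own training sentences, which is what makes the extracted C2 reliable enough to bootstrap S0.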
\n4.3 Applying unsupervised method \nThe initial translation system S\n0\n was built from \nthe training corpus C2 of 14,123 French-\nVietnamese sentence pairs. The corpus D con-\ntains 331,452 French-Vietnamese sentence pairs. \nThe unsupervised method was applied with the \ntype of combination W2 and the evaluation met-\nric PER*. There is no tuning process for the sta-\ntistical models. The number of extracted sentence \npairs after each iteration is reported in figure 7. \nAfter 5 iterations, we obtained 39,758 sentence \npairs. The quality of the translation systems was \nalso evaluated on a test set of 400 manually ex-\ntracted Vietnamese-French parallel sentence \npairs (same test set as in the implementation of \nMethod1 ). The Vietnamese sentences were ini-\ntially segmented into syllables (no word segmen-\ntation pre-processing was applied). Each Viet-\nnamese sentence has only one French reference. \nThe evaluation scores after each iteration were \nreported in table 3. \n \nFigure 7. Number of extracted sentence pairs \nafter each iteration\n \nSMT \niter. Training data \n(nbr. of pairs) Bleu Nist Ter \n0 14,123 30.67 6.45 0.59 \n1 26,517 32.18 6.70 0.57 \n2 37,210 32.42 6.75 0.56 \n3 38,530 32.45 6.77 0.55 \n4 39,254 32.14 6.73 0.56 \n5 39,758 31.85 6.68 0.56 \nTable 3. Evaluation scores after each iteration. The results in this case are similar to those in \npreliminary test: the number of extracted sen-\ntence pairs was increased after iterations; the \nquality of translation system was increased in \nsome first iterations and decreased after that. Al-\nthough the number of training sentence pairs in-\ncreased about two times from iteration 0 to itera-\ntion 1, the evaluation score increased only 2 \npoints for Bleu. One reason may be that the ini-\ntial system (S\n0\n) has already a good performance \ndue to our cross-filtering process described in \nsection 4.2. Moreover, the evaluation is only \nconducted with automatic metrics using one ref-\nerence only and a deeper analysis should be con-\nducted with human evaluations. \nFurthermore, to compare with the Method1 , \nthe quality of the translation systems trained by \nextracted sentence pairs from two methods is \ngiven in table 4. Although the number of ex-\ntracted sentence pairs in our method is lower \nthan that in the Method1 , the quality of the SMT \nsystem is comparable. Note that the Method1 \ndepends highly on the additional data such as the \nquality of bilingual dictionary or filtering heuris-\ntics. \nFrom these results, we can say that the unsu-\npervised method was applied successfully in a \nreal low e-resourced language pair: Vietnamese - \nFrench. The result shows that this method can be \nreally applied in the case of lacking parallel data. \nMoreover, the quality of the translation system \nbuilt from extracted data is comparable with the \ntranslation system built from other method using \nlexical information (bilingual dictionary, etc.) \nand data filtering heuristics. This proposed \nmethod requires no more additional data. We \nintend to apply this method on a larger scale for \nmining a bigger comparable data stream ex-\ntracted from the web. \n \nMining method\n \nNbr. of train\ning \ndata Bleu\n \nNist Ter \nLexical info. + \nHeuristics \n(Method1 )\n 50,322 32.74\n \n6.78 0.55 \nUnsupervised \nmethod 38,530 32.45\n \n6.77 0.56 \nTable 4. 
Comparison between mining Method1 \nand unsupervised method.\n \n5 Conclusion and perspectives \nThis paper presents an unsupervised method for \nextracting parallel sentence pairs from a compa-\nrable/noisy parallel corpus. An initial translation \nsystem was built based on a noisy parallel cor-\npus, instead of a truly parallel corpus. The initial \ntranslation system was then used to translate an-\nother comparable corpus, to withdraw the paral-\nlel sentence pairs. An iterative process was \nevaluated to increase the number of extracted \nparallel sentence pairs and to improve the quality \nof translation system. The method was prelimi-\nnary tested in a hard condition: the parallel cor-\npus does not exist and the initial corpus contains \nup to 50% of non parallel sentence pairs. How-\never, the result shows that this method can be \nreally applied, especially in the case of lacking \nparallel data. Several ways of filtering and use \nthe extracted data were also presented (different \nevaluation metrics for filtering and different \nways of combining the extracted data with the \ninitial translation system). An interesting result is \nthat the quality of the translation system can be \nimproved during some first iterations, but it be-\ncomes worse later because of adding noisy data \ninto the statistical models. Moreover, the quality \nof the translation system built by extracted data \nfrom this unsupervised method is comparable \nwith that of another method which requires better \nquality data for bootstrapping (bilingual diction-\nary, etc.). \nOur future works will focus on deeper analysis \nof the best filtering and data inclusion tech-\nniques, on experiments at a larger scale and on \nhuman evaluations to confirm improvements ob-\ntained with our unsupervised method. \nReferences \nAbdul-Rauf, S. and H. Schwenk. 2009. On the use of compara-\nble corpora to improve smt performance, Proceedings of the \n12th Conference of the European Chapter of the Association \nfor Computational Linguistics . \nBrown, P.F., S.A.D. Pietra, V.J.D. Pietra and R.L. Mercer. \n1993. The mathematics of statistical machine translation: \nparameter estimation. Computational Linguistics . Vol. 19, \nno. 2. \nDoddington G. 2002. Automatic evaluation of machine transla-\ntion quality using n-gram co-occurrence statistics. In Human \nLanguage Technology Proceedings. \nFung, P., P Cheung. 2004. Mining very-non-parallel corpora: \nparallel sentence and lexicon extraction via bootstrapping \nand EM. Conference on Empirical Methods on Natural \nLanguage Processing. \nGale, W.A. and K.W. Church. 1993. A program for aligning \nsentences in bilingual corpora. Proceedings of the 29th an-\nnual meeting on Association for Computational Linguistics . \nHo, T.B. 2005. Current status of machine translation research in \nvietnam, towards asian wide multi language machine trans-\nlation project. Vietnamese Language and Speech Processing \nWorkshop . \nHutchins, W.J. 2001. Machine translation over fifty years. His-\ntoire, epistemologie, langage. ISSN 0750-8069. Kilgarriff, A. and G. Grefenstette. 2003 . Introduction to the \nspecial issue on the Web as corpus. Computational Linguis-\ntics, volume 29 . \nKoehn, P. 2005. Europarl: a parallel corpus for statistical ma-\nchine translation. Machine Translation Summit . \nKoehn, P., F.J. Och and D. Marcu. 2003 . Statistical phrase-\nbased translation. 
Conference of the North American Chap-\nter of the Association for Computational Linguistics on Hu-\nman Language Technology Vol. 1. \nKoehn, P., H. Hoang, A. Birch, C. Callison-Burch, R. Zens, M. \nFederico, N. Bertoldi, B. Cowan, W. Shen and C. Moran. \n2007. Moses: open source tool-kit for statistical machine \ntranslation. Proceedings of the Association for Computa-\ntional Linguistics . \nKumano, T., H. Tanaka, T. Tokunaga. 2007. Extracting phrasal \nalignments from comparable corpora by using joint prob-\nability SMT model. Conference on Theoretical and Meth-\nodological Issues in Machine Translation. \nMa, Xiaoyi. 2006. Champollion: A robust parallel text sentence \naligner. LREC: Fifth International Conference on Language \nResources and Evaluation. \nMunteanu, D.S. and D. Marcu. 2006. Extracting parallel sub-\nsentential fragments from non-parallel corpora. 44th annual \nmeeting of the Association for Computational Linguistics . \nOch, Franz Josef, and Hermann Ney. 2003. A systematic com-\nparison of various statistical alignment models. Computa-\ntional Linguistics 29.1 \nPapineni K., S. Roukos, T. Ward, and W. Zhu. 2002. BLEU:a \nmethod for automatic evaluation of machine translation. In \nProceedings of the 40th Annual Meeting of the Association \nfor Computational Linguistics . \nPatry, A. and P. Langlais. 2005. Paradocs: un système \nd’identification automatique de documents parallèles. 12e \nConference sur le Traitement Automatique des Langues \nNaturelles . \nResnik, P. and N.A. Smith. 2003. The Web as a parallel corpus. \nComputational Linguistics. \nSarikaya R., S. Maskey, R. Zhang, E. Jan, D. Wang, B. Ramab-\nhadran, S. Roukos. 2009. Iterative sentence–pair extraction \nfrom quasi–parallel corpora for machine translation. Inter-\nspeech . \nSnover M., B. Dorr, R. Schwartz, L. Micciulla, and J. Makhoul. \n2006. A study of translation edit rate with targeted human \nannotation. Proceedings of Association for Machine Trans-\nlation in the Americas . \nStolcke, Andreas. 2002. SRILM an extensible language model-\ning toolkit. Intl. Conf. on Spoken Language Processing. \nTillmann C., S. Vogel, H. Ney, A. Zubiaga, and H. Sawaf. \n1997. Accelerated DP based search for statistical translation. \nIn 5th European Conf. on Speech Communication and \nTechnology . \nDo T.N.D., V.B. Le, B. Bigi, L. Besacier, E. Castelli. 2009. \nMining a comparable text corpus for a Vietnamese-French \nstatistical machine translation system. 4\nth\n Workshop on Sta-\ntistical Machine Translation . \nZhao B., S. Vogel. 2002. Adaptive parallel sentences mining \nfrom Web bilingual news collection. International Confer-\nence on Data Mining .",
"main_paper_content": null
}
|
{
"decision": "Unknown",
"reviews": []
}
| 0 | 0 |
[] |
[] |
{
"id": "RJZ1yUxGLfa",
"year": null,
"venue": "EAMT 2015",
"pdf_link": "https://aclanthology.org/W15-4902.pdf",
"forum_link": "https://openreview.net/forum?id=RJZ1yUxGLfa",
"arxiv_id": null,
"doi": null
}
|
{
"title": "Building hybrid machine translation systems by using an EBMT preprocessor to create partial translations",
"authors": [
"Mikel Artetxe",
"Gorka Labaka",
"Kepa Sarasola"
],
"abstract": "Mikel Artetxe, Gorka Labaka, Kepa Sarasola. Proceedings of the 18th Annual Conference of the European Association for Machine Translation. 2015.",
"keywords": [],
"raw_extracted_content": "Building hybrid machine translation systems by using an EBMT\npreprocessor to create partial translations\nMikel Artetxe, Gorka Labaka, Kepa Sarasola\nIXA NLP Group, University of the Basque Country (UPV/EHU)\n{martetxe003@ikasle., gorka.labaka@, kepa.sarasola@}ehu.eus\nAbstract\nThis paper presents a hybrid machine\ntranslation framework based on a pre-\nprocessor that translates fragments of the\ninput text by using example-based ma-\nchine translation techniques. The pre-\nprocessor resembles a translation mem-\nory with named-entity and chunk general-\nization, and generates a high quality par-\ntial translation that is then completed by\nthe main translation engine, which can\nbe either rule-based (RBMT) or statisti-\ncal (SMT). Results are reported for both\nRBMT and SMT hybridization as well as\nthe preprocessor on its own, showing the\neffectiveness of our approach.\n1 Introduction\nThe traditional approach to Machine Transla-\ntion (MT) has been rule-based (RBMT), but\nit has been progressively replaced by Statisti-\ncal Machine Translation (SMT) since the 1990s\n(Hutchins, 2007). Example-Based Machine Trans-\nlation (EBMT), the other main MT paradigm, has\nnever attracted that much attention: even though\nit gives excellent results with repetitive text for\nwhich accurate matches are found in the parallel\ncorpus, its quality quickly degrades as more gen-\neralization is needed. Nevertheless, it has been ar-\ngued that, along with the raise of hybrid systems\nthat try to combine multiple paradigms, EBMT can\nhelp to overcome some of the weaknesses of the\nother approaches (Dandapat et al., 2011)1.\nc�2015 The authors. This article is licensed under a Creative\nCommons 3.0 licence, no derivative works, attribution, CC-\nBY-ND.\n1This paper refers to as hybridization to any combination of\nMT paradigms, no matter if they are integrated in a singleIn this paper, we propose one such system based\non a multi-pass system combination: an EBMT\npreprocessor translates those fragments of the in-\nput text for which accurate matches are found in\nthe parallel corpus, generating a high-quality par-\ntial translation that is then completed by the main\ntranslator, which can be either rule-based or statis-\ntical.\nThe function of the EBMT preprocessor is\ntherefore similar to that of Translation Memories\n(TM), with the difference that previously made\ntranslations are not reused to aid human translators\nbut a MT engine. Needless to say, if the EBMT\npreprocessor was only able to reuse full sentences\nas traditional TM systems do at the most basic\nlevel, the quality of its partial translations would\nmatch that of humans, but its contribution would\nbe negligible in most situations. At the same time,\ntrying to increase the coverage by generalizing too\nmuch at the expense of translation quality, as tra-\nditional EBMT systems do, would make the whole\nsystem pointless if the preprocessor is not able to\noutperform the main MT engine for the fragments\nit translates. This way, for our approach to work as\nintended, it is necessary tofind a trade-off between\ncoverage and translation quality. 
In this work, we\ntake a preprocessor that reuses full sentences as our\nstarting point and explore two generalization tech-\nniques similar to those used by second and third\ngeneration TM systems (Gotti et al., 2005):\n•Named-entity (NE) generalization, giving\nthe option to replace NEs like proper names\nand numerals in the parallel corpus with any\nother found in the text to translate.\nengine or not. However, some authors distinguish between\nhybridization for systems that meet this requirement and com-\nbination for systems that do not.11\n•Chunk generalization, giving the option to\nreuse examples in a subsentential level.\nSeveral other methods that combine EBMT and\nTM with other MT paradigms have been proposed\nin the literature. Koehn and Senellart (2010) use\nan SMT system tofill the mismatched parts from\na fuzzy search in a TM. Similarly, Shirai et al.\n(1997) use a RBMT engine to complete the mis-\nmatched fragments from an EBMT system and\nsmooth the resulting output using linguistic rules.\nOn the other hand, Dandapat et al. (2012) integrate\nSMT phrase tables into an EBMT framework. Fol-\nlowing the opposite approach, Groves and Way\n(2005) feed an SMT system with alignments ob-\ntained using EBMT techniques. S ´anchez-Mart ´ınez\net al. (2009) use EBMT techniques to obtain bilin-\ngual chunks that are then integrated into a RBMT\nsystem. Lastly, Alegria et al. (2008) propose a\nmulti-engine system that selects the best transla-\ntion created by a RBMT, an SMT and an EBMT\nengine. However, to the best of our knowledge, the\nuse of a generic multi-pass hybridization method\nfor EBMT that works with both SMT and RBMT\nhas never been reported so far.\nThe remaining of this paper is structured as fol-\nlows. The proposed method is presented in Section\n2. Section 3 explains the experimental settings un-\nder which the system was tested, and the results\nobtained are then discussed in Section 4. Section 5\nconcludes the paper.\n2 Method\nOur method follows the so-called compiled ap-\nproach to EBMT, which differs from runtime or\npure EBMT in that it requires a training phase to\ncompile translation units below the sentence level\n(Dandapat, 2012). Therefore, the system we pro-\npose consists of three elements: the compiling\ncomponent presented in Section 2.1, which ana-\nlyzes and aligns the parallel corpus to be used by\nthe EBMT preprocessor; the EBMT preprocessor\nitself as presented in Section 2.2, which creates a\nhigh-quality partial translation of the input text us-\ning the data created by the previous module; and\nthe integration with the main translator presented\nin Section 2.3, which completes the partial transla-\ntion given by the previous module by using either\na RBMT or an SMT engine.2.1 Compiling\nThe compiling phase involves processing a\nsentence-aligned parallel corpus to be used by the\nEBMT preprocessor. Two steps are required for\nthis: the analysis step, presented in Section 2.1.1,\nand the alignment step, presented in Section 2.1.2.\nThe resulting data is encoded in a custom binary\nformat based on suffix arrays (Manber and Myers,\n1990) for its efficient retrieval by the EBMT pre-\nprocessor.\n2.1.1 Analysis\nThe analysis step involves the tokenization, NE\nrecognition and classification, lemmatization and\nparsing of each side of the parallel corpus. 
We\nhave used Freeling (Padr ´o and Stanilovsky, 2012)\nas our analyzer for Spanish, Stanford CoreNLP\n(Socher et al., 2013) for English and Eustagger\n(Ezeiza et al., 1998) for Basque, with a custom\nregex-based handling for numerals. The result-\ning constituency-based parse tree is simplified by\nremoving inner nodes that correspond to part-of-\nspeech tags and representing NEs as single leaves.\nIn the case of Basque, our analyzer is only capable\nof shallow parsing, so we have generated a dummy\ntree in which chunks are the only inner nodes.\n2.1.2 Alignment\nThe alignment step involves establishing the\ntranslation relationships among the tokens2and\nNEs of the parallel corpus. This is done separately\nbecause the latter serves as the basis for NE gener-\nalization as discussed in Section 1, so we allow the\noption of not aligning NEs in this level if there is\nnot enough evidence to do so.\nThis way, word-alignment produces a setA nfor\neachnth sentence pair where(i, j)∈A nif and\nonly if there is a translation relationship between\ntheith token in the source language and thejth\ntoken in the target language, as well as the lexi-\ncal weightings or translation probabilities in both\ndirections, that is, a set ofp(e|f)andp(f|e)prob-\nabilities that express the likelihood of theftoken\nto be translated aseand theetoken to be trans-\nlated asf, respectively. Our system has been in-\ntegrated both with GIZA++ (Och and Ney, 2003)\nand Berkeley Aligner (Liang et al., 2006).\nAs for NE alignment, we align NEs if and only\nif they have the same written form, are equivalent\n2We refer as tokens to the leaves of the parse tree obtained\nin the analysis phase, which implies that NEs are considered\n(multiword) tokens.12\nnumerals or are found in either of the following\ndictionaries:\n•A manually built dictionary, mostly con-\nsisting of translation relationships between\nproper names like countries.\n•An automatically generated dictionary from\nWikipedia article titles with support for redi-\nrections.\n•An automatically generated dictionary from\nword-alignment, consisting of every NE pair\nf−efor whichp(e|f)+p(f|e)\n2>θandfande\nappear a minimum ofltimes in the corpus.3\n2.2 EBMT preprocessing\nThe goal of the EBMT preprocessing is to create a\nhigh-quality partial translation of the input text. As\nit is common in EBMT, this is done in three steps:\nmatching, alignment and recombination, which are\ndescribed in the following subsections.\n2.2.1 Matching\nThe matching phase involves looking for frag-\nments of the input text in the training corpus. For\nthis purpose, the input text isfirst analyzed as de-\nscribed in Section 2.1.1, and chunks of each of the\ninput sentences are then searched in the parallel\ncorpus according to the following criteria:\n1. The searched chunks must be syntactic units\n(either inner nodes or groups of consecutive\ninner siblings).\n2. The searched chunks must contain a mini-\nmum ofktokens to avoid trivial translations\nthat would have a negative impact on the\noverall translation quality. After some pre-\nliminary experiments, we have setkto 4.\n3. The search process is hierarchical, that is,\nnodes that are closer to the root have priority\nover the rest in case of overlapping matches.\nIf overlapping matches are found in the same\nlevel of the parse tree, the chunk with the\nbiggest number of tokens has priority over the\nrest.\n4. 
Full syntactic match requirement, that is, not\nonly the leaves of the searched chunks have to\nmatch but also their corresponding subtrees.\n3Based on some preliminary experiments, we setθ= 0.5and\nl= 10.5. The generalization of aligned NEs in the\ntraining corpus. According to this criterion,\naligned NEs in the training corpus are con-\nsidered to be valid matches for any NE in the\ninput text, whereas unaligned NEs are pro-\ncessed as plain tokens.\n2.2.2 Alignment\nThe next step in the EBMT preprocessing is to\nbuild a translation for each match,filtering those\nthat are not valid. For that purpose, wefirst iden-\ntify the translation that corresponds to each match\nin the parallel corpus, and we then translate the\naligned NEs it contains.\nFor thefirst point, given a match of a chunk\nin the source language, we select the shortest se-\nquence in the target language that satisfies the fol-\nlowing conditions. If there is no possible transla-\ntion that satisfies all these conditions for a given\nmatch, the match is rejected.\n1. It must contain at least one aligned token.\n2. No token in either fragment can be aligned\nwith a token outside the other fragment.\n3. The translation must be a syntactic unit as de-\nfined in Section 2.2.1, but without the require-\nment for the matched nodes to be inner ones\n(i.e. they could also be leaves).\nDue to NE generalization, the translation gener-\nated this way might contain NEs that do not corre-\nspond to the searched ones. These NEs are trans-\nlated as follows:\n1. Identify the searched NE for each aligned NE\nin the translation. This is done by following\nthe translation relationships as defined by NE\nalignment in the compiling phase.\n2. Translate the lemma of the searched NEs.\nThe set of dictionaries described in Section\n2.1.2 is used for that purpose with a cus-\ntom processing for numerals. NEs that can-\nnot be translated by these means are left un-\nchanged, as they would presumably corre-\nspond to proper names of persons or loca-\ntions.\n3. Inflect the translated lemma by applying the\nsame morphological tags that the aligned NE\nhad. We only apply this step for morpho-\nlogically rich languages as it is the case of\nBasque.\n13\nFor instance, if “Putin claims victory in Rus-\nsia elections” is matched with “Pe ˜na Nieto claims\nvictory in Mexico elections”, and “Pe ˜na Nietok\nMexikoko hauteskundeak irabazi ditu” is selected\nas its translation, we wouldfirst identify that the\nBasque “Pe ˜na Nietok” is aligned with the English\n“Pe˜na Nieto”, which was matched with “Putin”,\nand “Mexikoko” is aligned with “Mexico”, which\nwas matched with “Russia”. We would then trans-\nlate “Putin” as “Putin” and “Russia” as “Erru-\nsia” according to the dictionaries described in Sec-\ntion 2.1.2. Lastly, we would inflect these lemmas\nto match the lexical form of their corresponding\naligned NE. In this case, “Pe ˜na Nietok” was the\nergative form of “Pe ˜na Nieto”, so we would in-\nflect “Putin” in ergative giving “Putinek”. Sim-\nilarly, “Mexikoko” was the local-genitive form\nof “Mexiko”, so we would inflect “Errusia” in\nlocal-genitive giving “Errusiako”. This way, we\nwould obtain thefinal translation “Putinek Errusi-\nako hauteskundeak irabazi ditu”.\n2.2.3 Recombination\nAfter the alignment phase, it is possible to have\neither zero, one, or several translation candidates\nfor each searched chunk. 
Thanks to the hierarchi-\ncal searching process, it is guaranteed that these\ntranslations will not overlap, so rather than com-\nbining them we try to select the best candidate\nfor each searched chunk. For that purpose, we\nchoose the most frequent translation in each case\nand, in case of a tie, the one with the highest lexi-\ncal weighting.\n2.3 Integration\nAs discussed in the previous section, the EBMT\npreprocessor creates a partial translation of the in-\nput text by translating chunks that are matched in\nthe training corpus. The next and last phase in-\nvolves building the full translation by completing\nit with the help of the main MT system. This is\ndone differently depending on the type of system\nit is:\n•When hybridizing with RBMT systems, the\ninput text is translated as it is, and a postpro-\ncessor replaces translation fragments that cor-\nrespond to matched chunks with the ones pro-\nposed by the EBMT preprocessor. In order to\nidentify these fragments, the original chunks\nare marked with XML tags that the main MT\nsystem keeps in the translation it generates.•When hybridizing with SMT systems,\nMoses’ XML markup is used in its “inclu-\nsive” mode to make the translations generated\nby the EBMT preprocessor compete with the\nentries in the phrase table. It is remarkable\nthat the “exclusive” and “constraint” modes,\nwhich force the decoder to choose the pro-\nposed translation or others that contain it,\nrespectively, gave consistently worse results.\nWe speculate that this could be due to the\nboundary friction problem, as the EBMT\nsystem translates fragments without taking\ntheir context into account, and the language\nmodel might be able to choose a better\ntranslation for the given context.\n3 Experimental settings\nAs discussed in Section 1, it is expected that the\nperformance of our method will greatly depend on\nthe similarity between the input text and the ex-\namples given in the training corpus. Taking that\ninto account, we decided to train our system in two\ndifferent domains: the particularly repetitive do-\nmain of collective bargaining agreements, and the\nmore common domain of parliamentary proceed-\nings. For the former, we used the Spanish-Basque\nIV AP corpus, consisting of a total of 81 collec-\ntive bargaining agreements to which we added the\nlarger Elhuyar’s administrative corpus to aid word-\nalignment. For the latter, we used the Spanish-\nEnglish Europarl corpus as given in the shared task\nof the ACL 2007 workshop on statistical machine\ntranslation, consisting of proceedings of the Euro-\npean Parliament. Table 1 summarizes their details.\nAs for the testing data, we used an in-domain test\nset for each corpus as well as an out-of-domain one\nfor Europarl as shown in Table 2.\nIn order to evaluate the performance of our\nmethod we carried out the following two experi-\nments:\n•A manual evaluation of the EBMT prepro-\ncessorto measure both the coverage and the\nquality of its partial translations. 
For this pur-\npose, we randomly selected 100 sentences for\neach in-domain test set and asked 5 volun-\nteers to score the quality of each translated\nfragment in its context in a scale between 1\n(incorrect translation) and 4 (correct transla-\ntion).\n•An automatic evaluation of the whole sys-\ntemusing the Bilingual Evaluation Under-14\nLanguage Domain Sentences\nIV AP + Elhuyar es-eu collective bargaining agreements + administrative 50,824 + 4,747,332\nEuroparl es-en parliament proceedings 1,254,414\nTable 1: Training corpus\nLanguage Domain In domain? Sentences Tokens Tokens / sentence\nIV AP es-eu collective bargaining agreement yes 1,928 39,625 20.55\nEuroparl es-en parliamentary proceedings yes 2,000 56,213 28.01\nNews commentary es-en news no 2,007 61,341 30.67\nTable 2: Test set\nFull sentences Full sentences with NEChunks with NE\nGIZA++ Berkeley (HMM) Berkeley (synt.)\nIV AP 18,284 (46.14%) 18,691 (47.17%) 23,962 (60.47%) 26,436 (66.72%) -\nEuroparl 379 (0.62%) 548 (0.89%) 10,565 (17.22%) 10,986 (17.91%) 9,653 (15.74%)\nNews commentary 12 (0.02%) 12 (0.02%) 5,365 (9.54%) 5,566 (9.90%) 4,674 (8.31%)\nTable 3: Tokens translated by the EBMT preprocessor\nstudy (BLEU) metric (Papineni et al., 2002).\nFor this automatic evaluation, we hybridized\nour system both with a RBMT and an SMT\nsystem. Our RBMT translator of choice was\nMatxin (Mayor et al., 2011) for Spanish-\nBasque and Apertium (Forcada et al., 2011)\nfor Spanish-English, whereas we used Moses\n(Koehn et al., 2007) as our SMT engine for\nboth language pairs.\n4 Results and discussion\nThis section presents the outcomes of the experi-\nments described in Section 3. The results for the\nquality and coverage experiment are discussed in\nSection 4.1, and the RBMT and SMT hybridiza-\ntion in Sections 4.2 and 4.3.\n4.1 Quality and coverage of EBMT\nTable 3 shows the number of tokens translated by\nthe EBMT preprocessor according to each gener-\nalization mechanism. In the case of chunk gen-\neralization, we tried both GIZA++ and Berke-\nley aligner with and without syntactic tailoring\n(DeNero and Klein, 2007), which could presum-\nably generate more chunk alignments that meet\nthe restrictions of our translation process. How-\never, contrary to our expectations syntactic tailor-\ning gave the worst results by far both in terms\nof coverage and translation quality, apparently be-\ncause it is still an experimental feature, and it was\nthe default HMM mode of Berkeley Aligner which\nclearly outperformed the rest. We will conse-\nquently refer to the results obtained by this aligner\nin the remaining of this section.\nAs we expected, Table 3 reflects that the cover-age of the EBMT preprocessing clearly depends on\nthe similarity between the input text and the train-\ning corpus. For the domain of collective bargain-\ning agreements, our EBMT preprocessor is able\nto translate around two thirds of the input tokens.\nEven though the results we obtain for the other\ntest sets are poorer, the impact of our method is\nstill very significant, as the EBMT preprocessor is\nable to translate 17.91% and 9.90% of the tokens\nin the in-domain and out-of-domain test sets for\nEuroparl, respectively. 
As for the distribution of\nthese partial translations, we observe that most of\nthe translations in IV AP come from the traditional\nTM behavior of our preprocessor4, but the relative\ncontribution of the generalization mechanisms gets\nconsiderably higher as the distance between the in-\nput text and the training corpus increases5.\nAs far as the quality of the partial translations\nis concerned, Tables 4 and 5 show the results of\nthe manual evaluation we carried out for both in-\ndomain test sets. The overall results are very pos-\nitive in both cases, with an average score of 3.45\nand 3.39 out of 4 for IV AP and Europarl, respec-\ntively. In spite of the average scores being sim-\nilar, it is worth mentioning that there is a con-\nsiderable difference in the variance of the eval-\nuations, with Europarl obtaining much more co-\nherent scores than IV AP (3.30-3.49 range for Eu-\nroparl and 3.02-3.73 range for IV AP). We believe\n469.16% of the tokens translated by the EBMT preprocessor\nwhen using all the generalization mechanisms correspond to\nfull sentences (18,284 out of 26,436 as shown in Table 3)\n5Only 3.45% and 0.22% of the tokens translated by the EBMT\npreprocessor when using all the generalization mechanisms\ncorrespond to full sentences in Europarl and News commen-\ntary, respectively (379 out of 10,986 and 12 tokens out of\n5,566 as shown in Table 3)15\n1 2 3 4 Average\nEvaluator 1 2 (1.56%) 5 (3.91%) 19 (14.84%) 102 (79.69%) 3.73\nEvaluator 2 5 (3.91%) 4 (3.13%) 18 (14.06%) 101 (78.91%) 3.68\nEvaluator 3 11 (8.59%) 8 (6.25%) 9 (7.03%) 100 (78.13%) 3.55\nEvaluator 4 13 (10.16%) 14 (10.94%) 25 (19.53%) 76 (59.38%) 3.28\nEvaluator 5 19 (14.96%) 23 (18.11%) 21 (16.54%) 64 (50.39%) 3.02\nAverage 10 (7.82%) 10.8 (8.45%) 18.4 (14.4%) 88.6 (69.33%) 3.45\nTable 4: Results of the manual evaluation in IV AP (es-eu)\n1 2 3 4 Average\nEvaluator 1 8 (4.79%) 11 (6.59%) 40 (23.95%) 108 (64.67%) 3.49\nEvaluator 2 14 (8.38%) 11 (6.59%) 28 (16.77%) 114 (68.26%) 3.45\nEvaluator 3 11 (6.71%) 20 (12.2%) 25 (15.25%) 108 (65.85%) 3.40\nEvaluator 4 16 (9.58%) 14 (8.38%) 38 (22.75%) 99 (59.28%) 3.32\nEvaluator 5 17 (10.24%) 20 (12.05%) 25 (15.06%) 104 (62.65%) 3.30\nAverage 13.2 (7.94%) 15.2 (9.15%) 31.2 (18.77%) 106.6 (64.14%) 3.39\nTable 5: Results of the manual evaluation in Europarl (es-en)\nRBMT baseline RBMT + full sentences RBMT + full sentences with NE RBMT + chunks with NE (Berkeley HMM)\nIV AP 0.0498 0.3350 0.3330 0.3168\nEuroparl 0.1755 0.1786 0.1790 0.1983\nNews commentary 0.2173 0.2173 0.2173 0.2227\nTable 6: BLEU scores with RBMT hybridization\nSource Finalmente,Se ˜nor´ıas, los medios de comunicacin debenjugar tambi ´en un papel importanteen esta tarea.\nBaseline Finally,Se ˜nor´ıas, the media have toplay also an important paperin this task.\nSystem Finally,ladies and gentlemen, the media have toplay an important role tooin this task.\nReference Finally,ladies and gentlemen, the media mustalso play an important rolein this task.\nTable 7: An example of RBMT hybridization in Europarl\nthat the reason behind that is the unfamiliarity of\nsome evaluators with machine translation and the\nregister used for legal documents in Basque, which\ncould have made them penalize minor mistakes\nthat were sometimes even found in the reference\ntranslations too severely6. As a matter of fact,\nsome full sentence translations that were equal to\nthe reference ones got 1 and 2 scores. 
In any case,\nthe reported results reflect that our EBMT prepro-\ncessor produces high-quality partial translations,\nwith less than 20% of them obtaining a negative\n(1 or 2) score in average for both test sets.\n4.2 RBMT hybridization\nTable 6 shows the BLEU scores obtained when\nhybridizing with RBMT translators. As it can be\nseen, we obtain very good results, with our system\noutperforming the baseline in all the test sets. The\ngain in BLEU is particularly remarkable in the case\nof IV AP, with an improvement of 26.7 points, but\nstill notable for the other more standard in-domain\nand out-of-domain test sets, with an improvement\nof 2.28 and 0.54 points, respectively.\nAs far as the contribution of each generalization\n6Note that not all the evaluators for both test sets were the\nsamestep is concerned, it can be observed that, in the\ncase of IV AP, all the improvement comes from the\nTM behavior of our preprocessor, and the gener-\nalization steps themselves have a negative impact.\nWe believe that this is due to an integration prob-\nlem with Matxin, as wefind that it often misplaces\nour XML tags in its translations, yielding to sense-\nless replacements that have a negative impact in\nthe overall translation quality. In the case of both\nApertium test sets, which do not suffer from this\nproblem, the generalization steps work as expected\nand, in fact, practically all the improvement comes\nfrom them. Table 7 shows one such case, where\nthe proposed system is able to properly translate\nthe out-of-vocabulary word “Se ˜nor´ıas” and the id-\niomatic expression “jugar un papel importante”\nunlike the baseline.\n4.3 SMT hybridization\nThe BLEU scores obtained with SMT hybridiza-\ntion are shown in Table 8. As it can be seen, our\nsystem is not able to beat the baseline for either of\nthe Spanish-English test sets, although there are in-\nstances in which the hybrid system gives better re-\nsults as it is the case of the example in Table 9. We\nthink that, as shown in Table 10, the reason behind16\nSMT baseline SMT + full sentences SMT + full sentences with NE SMT + chunks with NE (Berkeley HMM)\nIV AP 0.3368 0.4483 0.4472 0.4593\nEuroparl 0.3307 0.3307 0.3304 0.3251\nNews commentary 0.2984 0.2982 0.2982 0.2967\nTable 8: BLEU scores with SMT hybridization\nSource De ser as ´ı, se comete un error, ya que se trata de la credibilidad yfiabilidad que tiene la Uni ´on Europea [...]\nBaseline For example, we are making a mistake, because that is the credibility and reliability of the European Union [...]\nSystem If that is the case, it is a mistake, because that is the credibility and reliability of the European Union [...]\nReference If it were to be the case then it is a miscalculationbecause this is about the credibility and reliability of the European Union [...]\nTable 9: An example of SMT hybridization in Europarl\nFull sentences Full sentences with NE Chunks with NE\nIV AP 15.10 11.31 8.09\nEuroparl 7.02 9.39 5.10\nNews commentary 6.00 - 4.74\nTable 10: Average length of the fragments translated by the EBMT preprocessor\nthat is that the fragments translated by the EBMT\npreprocessor are too short for these test sets, as the\nbaseline SMT system would be able to properly\nhandle this size n-grams. 
Increasing the minimum\nnumber of tokenskto be searched by the EBMT\npreprocessor as discussed in Section 2.2.1 would\nsolve this problem, but it would also decrease its\ncoverage, considerably reducing the impact of the\nwhole system.\nNevertheless, we obtain very good results in\nIV AP, where we achieve an overall improvement of\n12.25 BLEU points from which 1.1 come from the\ngeneralization steps. We therefore conclude that\nour system works with SMT hybridization as long\nas the domain is repetitive enough to reuse long\ntext chunks that traditional SMT systems are not\nable to handle effectively.\n5 Conclusions and future work\nIn summary, this paper develops a generic multi-\npass hybridization method based on an EBMT pre-\nprocessor that creates partial translations making\nuse of NE and chunk generalization. The effective-\nness of the preprocessor is experimentally demon-\nstrated both in terms of coverage and translation\nquality. Furthermore, our experiments show that\nthe proposed method considerably improves the\nbaseline with RBMT hybridization, and we also\nobtain very good results with SMT hybridization\nin repetitive enough domains.\nIn the future, we intend to further optimize our\nsystem by using heuristics to detect wrong align-\nments, improve our processing for Spanish con-\ntractions, which often led to parsing errors, and\nintroduce a better handling for NEs with com-mon nouns, which were incorrectly left unchanged\nwhen not found in any dictionary. In addition, we\nplan to improve SMT integration by increasing the\nminimum number of tokens to be translated by the\nEBMT preprocessor and optimizing the weight as-\nsigned to our partial translations. We also want to\nexplore the possibility of selecting more than one\ntranslation for each chunk that would then com-\npete with each other and the rest of the entries in\nthe phrase table. Furthermore, we would like to\nfix the integration problems with Matxin and use a\nfull syntactic analyzer for Basque. We also intend\nto try more metrics to better understand the behav-\nior of the whole system. Lastly, we plan to release\nour system as an open source project.\nAcknowledgments\nThe research leading to these results was carried\nout as part of the TACARDI project (Spanish Min-\nistry of Education and Science, TIN2012-38523-\nC02-011, with FEDER funding) and the QTLeap\nproject funded by the European Commission (FP7-\nICT-2013.4.1-610516).\nReferences\nAlegria, I ˜naki, Arantza Casillas, Arantza D ´ıaz De Ilar-\nraza, Jon Igartua, Gorka Labaka, Mikel Lersundi,\nAingeru Mayor, Kepa Sarasola, Xabier Saralegi, and\nB Laskurain. 2008. Mixing Approaches to MT\nfor Basque: Selecting the best output from RBMT,\nEBMT and SMT.MATMT 2008: Mixing Ap-\nproaches to Machine Translation, pages 27–34.\nDandapat, Sandipan, Sara Morrissey, Andy Way, and\nMikel L Forcada. 2011. Using example-based MT17\nto support statistical MT when translating homoge-\nneous data in a resource-poor setting. InProceed-\nings of the 15th annual meeting of the European\nAssociation for Machine Translation (EAMT 2011),\npages 201–208.\nDandapat, Sandipan, Sara Morrissey, Andy Way, and\nJoseph van Genabith. 2012. Combining EBMT,\nSMT, TM and IR technologies for quality and scale.\nInProceedings of the Joint Workshop on Exploiting\nSynergies between Information Retrieval and Ma-\nchine Translation (ESIRMT) and Hybrid Approaches\nto Machine Translation (HyTra), pages 48–58. Asso-\nciation for Computational Linguistics.\nDandapat, Sandipan. 
2012.Mitigating the Problems of\nSMT using EBMT. Ph.D. thesis, Dublin City Univer-\nsity.\nDeNero, John and Dan Klein. 2007. Tailoring Word\nAlignments to Syntactic Machine Translation. In\nProceedings of the 45th Annual Meeting of the Asso-\nciation of Computational Linguistics, pages 17–24,\nPrague, Czech Republic, June. Association for Com-\nputational Linguistics.\nEzeiza, Nerea, I ˜naki Alegria, Jos ´e Mar ´ıa Arriola,\nRub´en Urizar, and Itziar Aduriz. 1998. Combin-\ning stochastic and rule-based methods for disam-\nbiguation in agglutinative languages. InProceed-\nings of the 36th Annual Meeting of the Associa-\ntion for Computational Linguistics and 17th Inter-\nnational Conference on Computational Linguistics-\nVolume 1, pages 380–384. Association for Computa-\ntional Linguistics.\nForcada, Mikel L, Mireia Ginest ´ı-Rosell, Jacob Nord-\nfalk, Jim ORegan, Sergio Ortiz-Rojas, Juan An-\ntonio P ´erez-Ortiz, Felipe S ´anchez-Mart ´ınez, Gema\nRam´ırez-S ´anchez, and Francis M Tyers. 2011.\nApertium: a free/open-source platform for rule-\nbased machine translation.Machine Translation,\n25(2):127–144.\nGotti, Fabrizio, Philippe Langlais, Elliott Macklovitch,\nDidier Bourigault, Benoit Robichaud, and Claude\nCoulombe. 2005. 3GTM: A third-generation trans-\nlation memory. InProceedings of the 3rd Computa-\ntional Linguistics in the North-East Workshop, pages\n8–15.\nGroves, Declan and Andy Way. 2005. Hybrid\nexample-based SMT: the best of both worlds? In\nProceedings of the ACL Workshop on Building and\nUsing Parallel Texts, pages 183–190. Association for\nComputational Linguistics.\nHutchins, John. 2007. Machine translation: A con-\ncise history.Computer aided translation: Theory\nand practice.\nKoehn, Philipp and Jean Senellart. 2010. Convergence\nof translation memory and statistical machine trans-\nlation. InProceedings of AMTA Workshop on MT\nResearch and the Translation Industry, pages 21–31.Koehn, Philipp, Hieu Hoang, Alexandra Birch, Chris\nCallison-Burch, Marcello Federico, Nicola Bertoldi,\nBrooke Cowan, Wade Shen, Christine Moran,\nRichard Zens, et al. 2007. Moses: Open source\ntoolkit for statistical machine translation. InPro-\nceedings of the 45th Annual Meeting of the ACL\non Interactive Poster and Demonstration Sessions,\npages 177–180. Association for Computational Lin-\nguistics.\nLiang, Percy, Ben Taskar, and Dan Klein. 2006. Align-\nment by agreement. InProceedings of the main con-\nference on Human Language Technology Conference\nof the North American Chapter of the Association of\nComputational Linguistics, pages 104–111. Associ-\nation for Computational Linguistics.\nManber, Udi and Gene Myers. 1990. Suffix Arrays:\nA New Method for On-line String Searches. InPro-\nceedings of the First Annual ACM-SIAM Symposium\non Discrete Algorithms, SODA ’90, pages 319–327,\nPhiladelphia, PA, USA. Society for Industrial and\nApplied Mathematics.\nMayor, Aingeru, I ˜naki Alegria, Arantza D ´ıaz De Ilar-\nraza, Gorka Labaka, Mikel Lersundi, and Kepa Sara-\nsola. 2011. Matxin, an open-source rule-based ma-\nchine translation system for Basque.Machine trans-\nlation, 25(1):53–82.\nOch, Franz Josef and Hermann Ney. 2003. A system-\natic comparison of various statistical alignment mod-\nels.Computational linguistics, 29(1):19–51.\nPadr´o, Llu ´ıs and Evgeny Stanilovsky. 2012. FreeLing\n3.0: Towards Wider Multilinguality. InProceedings\nof the Language Resources and Evaluation Confer-\nence (LREC 2012), Istanbul, Turkey, May. 
ELRA.\nPapineni, Kishore, Salim Roukos, Todd Ward, and Wei-\nJing Zhu. 2002. BLEU: a method for automatic\nevaluation of machine translation. InProceedings of\nthe 40th annual meeting on association for compu-\ntational linguistics, pages 311–318. Association for\nComputational Linguistics.\nS´anchez-Martınez, Felipe, Mikel L Forcada, and Andy\nWay. 2009. Hybrid rule-based-example-based\nMT: feeding Apertium with sub-sentential trans-\nlation units. In3rd International Workshop on\nExample-Based Machine Translation, page 11. Cite-\nseer.\nShirai, Satoshi, Francis Bond, and Yamato Takahashi.\n1997. A hybrid rule and example-based method for\nmachine translation. InProceedings of NLPRS, vol-\nume 97, pages 49–54. Citeseer.\nSocher, Richard, John Bauer, Christopher D Manning,\nand Andrew Y Ng. 2013. Parsing with composi-\ntional vector grammars. InIn Proceedings of the\nACL conference. Citeseer.18",
"main_paper_content": null
}
|
{
"decision": "Unknown",
"reviews": []
}
| 0 | 0 |
[] |
[] |
{
"id": "QUq1alcNYn-",
"year": null,
"venue": "EAMT 2009",
"pdf_link": "https://aclanthology.org/2009.eamt-1.11.pdf",
"forum_link": "https://openreview.net/forum?id=QUq1alcNYn-",
"arxiv_id": null,
"doi": null
}
|
{
"title": "Relevance of Different Segmentation Options on Spanish-Basque SMT",
"authors": [
"Arantza Díaz de Ilarraza",
"Gorka Labaka",
"Kepa Sarasola"
],
"abstract": "Arantza Díaz de Ilarraza, Gorka Labaka, Kepa Sarasola. Proceedings of the 13th Annual conference of the European Association for Machine Translation. 2009.",
"keywords": [],
"raw_extracted_content": "Proceedings of the 13th Annual Conference of the EAMT , pages 74–80,\nBarcelona, May 2009\nRelevance of Different Segmentation Options on Spanish-Basque SMT\nArantza D ´ıaz de Ilarraza, Gorka Labaka and Kepa Sarasola\nEuskal Herriko Univertsitatea/Universidad del Pa ´ıs Vasco\[email protected], [email protected], [email protected]\nAbstract\nSegmentation is widely used in adapting\nStatistical Machine Translation to highly\ninflected languages as Basque. The way\nthis segmentation is carried out impacts\non the quality of the translation. In or-\nder to look for the most adequate segmen-\ntation for a Spanish-Basque system, we\nhave tried different segmentation options\nand analyzed their effects on the transla-\ntion quality.\nAlthough all segmentation options used in\nthis work are based on the same morpho-\nlogical analysis, translation quality varies\nsignificantly depending on the segmen-\ntation criteria used. Most of the seg-\nmentation options outperform the base-\nline according to all metrics, except the\none which splits words according the mor-\npheme boundaries. From here we can con-\nclude the importance of the development\nof the segmentation criteria in SMT.\n1 Introduction\nIn this paper we present the work done for adapting\na baseline SMT system to carry out the translation\ninto a morphologically-rich agglutinative language\nsuch as Basque. In translation from Spanish to\nBasque, some Spanish words, such as prepositions\nor articles, correspond to Basque suffixes, and, in\ncase of ellipsis, more than one of those suffixes can\nbe added to the same word. In this way, based on\nthe Basque lemma ’etxe’ /house/ we can generate\n’etxeko’ /of the house/ , ’etxekoa’ /the one of the\nhouse/ , ’etxekoarengana’ /towards the one of the\nhouse/ and so on.\nc/circlecopyrt2009 European Association for Machine Translation.Besides, Basque is a low-density language and\nthere are few corpora available comparing to other\nlanguages more widely used as Spanish, English,\nor Chinese. For instance, the parallel corpus avail-\nable for this work is 1M word for Basque (1.2M\nwords for Spanish), much smaller than the corpora\nusually used on public evaluation campaigns such\nas NIST.\nIn order to deal with the problems presented\nabove, we have split up Basque words into the\nlemma and some tags which represent the mor-\nphological information expressed on the inflection.\nDividing Basque words in this way, we expect to\nreduce the sparseness produced by the agglutina-\ntive being of Basque and the small amount of train-\ning data.\nAnyway, there are several options to define\nBasque segmentation. For example, considering\nall the suffixes all together as a unique segment,\nconsidering each suffix as a different segment, or\nconsidering any other of their intermediate com-\nbinations. In order to define the most adequate\nsegmentation for our Spanish-Basque system, we\nhave tried some of those segmentation options and\nhave measured their impact on the translation qual-\nity.\nThe remainder of this paper is organized as fol-\nlows. In Section 2, we present a brief analysis of\nprevious works adapting SMT to highly inflected\nlanguages. In Section 3, we describe the systems\ndeveloped for this paper (the baseline and the mor-\npheme based systems) and the different segmenta-\ntion used by those systems. In Section 4, we eval-\nuate the different systems, and report and discuss\nour experimental results. 
Section 5 concludes the\npaper and gives avenues for future work.\n74\n2 Related work\nMany researchers have tried to use morphologi-\ncal information in improving machine translation\nquality. In (Koehn and Knight, 2003), the au-\nthors got improvements splitting compounds in\nGerman. Nießen and Ney (2004) achieved a simi-\nlar level of alignment quality with a smaller cor-\npora restructuring the source based on morpho-\nsyntactic information when translating from Ger-\nman to English. More recently, on (Goldwater\nand McClosky, 2005) the authors achieved im-\nprovements in Czech-English MT optimizing a set\nof possible source transformations, incorporating\nmorphology.\nIn general most experiments are focused on\ntranslating from morphologically rich languages\ninto English. But last years some works have\nexperimented on the opposite direction. For ex-\nample, in (Ramanathan et al., 2008), the authors\nsegmented Hindi in English-Hindi statistical ma-\nchine translation separating suffixes and lemmas\nand, in combination with the reordering of the\nsource words based on English syntactic analysis,\nthey got a significant improvement both in auto-\nmatic and human evaluation metrics. In a simi-\nlar way Oflazer and El-Kahlout (2007) also seg-\nmented Turkish words when translate from En-\nglish. The isolated use of segmentation does not\nget any improvement at translation, but combining\nsegmentation with a word-level language model\n(incorporated by using n-best list re-scoring) and\nsetting as unlimited the value of the distortion limit\n(in order to deal with the great order difference be-\ntween both languages) they achieve a significant\nimprovement over the baseline.\nSegmentation is the most usual way to trans-\nlate into highly inflected languages, but other ap-\nproaches have been also tried. In (Bojar, 2007)\nfactored translation have been used on English-\nCzech translation. Words of both languages are\ntagged with morphological information creating\ndifferent factors which are translated indepen-\ndently and combined in a generation stage. Finally,\nin (Minkov et al., 2007) the authors have divided\ntranslation in two steps where they first use usual\nSMT system to translate from English to Russian\nlemmas and in a second step they decide the inflec-\ntion of each lemma using bilingual information.3 SMT systems\nThe main deal of this work is to measure the\nimpact of different segmentation options on a\nSpanish-Basque SMT system. In order to mea-\nsure this impact we have compared the quality of\nthe baseline system which does not use segmenta-\ntion at all, with systems that use different segmen-\ntation options. the development of those systems\nhas been carried out using freely available tools:\n•GIZA++ toolkit (Och and H. Ney, 2003) was\nused for training the word alignment.\n•SRILM toolkit (Stolcke, 2002) was used for\nbuilding the language model.\n•Moses Decoder (Koehn et al., 2007) was used\nfor translating the test sentences.\n3.1 Baseline\nWe have trained Moses on the tokenized cor-\npus (without any segmentation) as baseline sys-\ntem. Moses and the scripts provided with it al-\nlow to easily train a state-of-the-art phrase-based\nSMT system. 
We have used a log-linear (Och and\nNey, 2002) combination of several common fea-\nture functions: phrase translation probabilities (in\nboth directions), word-based translation probabil-\nities (lexicon model, in both directions), a phrase\nlength penalty and a target language model.\nThe decoder also relies on a target language\nmodel. The language model is a simple 5-gram\nlanguage model trained on the Basque portion of\nthe training data, using the SRI Language Mod-\neling Toolkit, with modified Kneser-Ney smooth-\ning. Finally, we have also used a lexical reorder-\ning model (one of the advanced features provided\nby Moses1), trained using Moses scripts and ’msd-\nbidirectional-fe’ option. The general design of the\nbaseline system is presented on Figure 1.\nMoses also implements Minimum-Error-Rate\nTraining (Och, 2003) within a log-linear frame-\nwork for parameter optimization. The metric used\nto carry out this optimization is BLEU (Papineni et\nal., 2002).\n3.2 Morpheme-based statistical machine\ntranslation\nBasque is an agglutinative language, so words\nmay be made up several morphemes. Those mor-\nphemes are added as suffixes to the last word of\n1http://www.statmt.org/moses/?n=Moses.AdvancedFeatures75\nFigure 1: Basic design of a SMT system\nnoun phrases and verbal chains. Suffixes repre-\nsent the morpho-syntactic information associated\nto the phrase, such as number, definiteness, gram-\nmar case and postposition.\nAs a consequence, many words only occur once\nin the training corpus, leading to serious sparse-\nness problems when extracting statistics from the\ndata. In order to overcome this problem, we seg-\nmented each word into a sequence of morphemes,\nand then we worked at this representation level.\nWorking at the morpheme level we reduced the\nnumber of tokens that occur only once and, at\nthe same time, we reduce the 1-to-n alignments.\nAlthough 1-to-n alignments are allowed in IBM\nmodel 4, training can be harmed when the paral-\nlel corpus contains many cases.\nAdapting the baseline system to work at the\nmorpheme level mainly consists on training Moses\non the segmented text (same training options are\nused in baseline and morpheme-based systems).\nThe system trained on these data will generate a\nsequence of morphemes as output and a genera-\ntion post-process will be necessary in order to ob-\ntain the final Basque text. After generation, we\nhave integrated a word-level language model us-\ning n-best list re-ranking. The general design of\nthe morpheme-based system is presented on Fig-\nure 2.\n3.2.1 Segmentation options for Basque\nSegmentation of Basque words can be made in\ndifferent ways and we want to measure the impact\nthose segmentation options have on the translation\nquality. In order to measure this impact, we have\ntried different ways to segment Basque words and\nwe have trained a different morpheme-based sys-\ntem on each segmentation.\nThe different segmentation options we have\ntried are all based on the analysis obtained by\nFigure 2: Design of the morpheme-based SMT\nsystem\nEustagger (Aduriz and D ´ıaz de Ilarraza, 2003), a\ntagger for Basque based on two-level morphol-\nogy (Koskeniemmi, 1983) and statistical disam-\nbiguation. Based on those analysis we have di-\nvided each Basque word in different ways. From\nthe most fine-grained segmentation, where each\nmorpheme is represented as a token, to the most\ncoarse-grained segmentation where all morphemes\nlinked to the same lemma are put together in an\nunique token. 
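To make this granularity range concrete, the toy function below regroups one analysed word, the 'aukeratzerakoan' /at the election time/ example presented in Figure 3 just below, into tokens at the two extremes. It is an illustrative Python sketch, not the preprocessing code actually used:

def tokens(lemma, morphemes, one_suffix=False):
    # one_suffix=False: one token per morpheme (the finest-grained option).
    # one_suffix=True: lemma plus a single token joining all suffixes (the coarsest option).
    if not morphemes:
        return [lemma]
    if one_suffix:
        return [lemma, "+" + "".join(morphemes)]
    return [lemma] + ["+" + m for m in morphemes]

lemma = "aukeratu<adi><sin>"
morphemes = ["<adize>", "<ala>", "<gel>", "<ine>"]
print(tokens(lemma, morphemes))                   # 5 tokens: Eustagger-style segmentation
print(tokens(lemma, morphemes, one_suffix=True))  # 2 tokens: one suffix per word

The two intermediate options described next simply fix different groupings of these same morphemes.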
Figure3 shows an analysis obtained\nby Eustagger the lemma and the morphological in-\nformation added by the morphemes is represented\nmarking the morphemes boundaries with a ’+’.\nFollowing we define the four segmentation op-\ntions we are experimenting with.\nEustagger Segmentation : In our first approach\nwe have strictly based on the lexicon of Eustag-\nger, and we have created a separate token for each\nmorpheme recognized by the analyzer. This lex-\nicon has been created following a linguistic per-\nspective and, although it has been proved very\nuseful for the develop of several applications, it\nis probably not the most adequate for this work.\nAs the lexicon is very fine-grained, some suffixes,\nwhich could be considered as a unique morpheme,\nare represented as a concatenation of several fine-\ngrained morphemes in the Eustagger lexicon. Fur-\nthermore, some of those morphemes have not any\neffect on the word form, and they only adds some\nmorphological features. Figure 3 shows segmen-\ntation of ’aukeratzerakoan’ /at the election time/\nword according to the segmentation produced by\nEustagger.\nOne suffix per word : Taking into account that\nthe Eustagger lexicon is too fine-grained and that\nit generates too many tokens at segmentation, our\nnext approach consisted on putting together all suf-\nfixes linked to a lemma in one token. So, at split-\nting one Basque word we will generate at most76\nAnalysis aukeratu <adi><sin>+<adize>+<ala><gel>+<ine>\nEustagger seg. aukeratu <adi><sin> +<adize> +<ala> +<gel> +<ine>\nAutomatic seg. aukeratu <adi><sin> +<adize><ala> +<gel> +<ine>\nHand defined seg. aukeratu <adi><sin><adize> +<ala><gel><ine>\nOneSuffix seg. aukeratu <adi><sin> +<adize><ala><gel><ine>\nFigure 3: Analysis obtained by Eustagger for ’aukeratzerakoan’ /at the election time/ word. And the\ndistinct segmentation inferred from it.\nthree tokens (prefixes, lemma and suffixes). We\ncan see ’aukeratzerakoan’ /at the election time/\nword’s segmentation on Figure 3.\nManual morpheme-grouping : After realizing\nthe impact of the segmentation in translation, we\ntried to obtain an intermediate segmentation which\noptimizes the translation quality. Our first at-\ntempt consists on defining by hand which mor-\nphemes can be grouped together in one token and\nwhich ones can be considered a token by their own.\nIn order to decide which morphemes to group,\nwe have analyzed the alignment errors occurred\nat previous segmentation experiments, defining a\nsmall amount of rules to grouping morphemes.\nFor instance, ’+ <adize>’2morpheme is usually\nwrongly aligned when it is considered as a token,\nso we have decided to join it to the lemma at seg-\nmentation. On Figure 3 we can see the segmen-\ntation corresponding to ’aukeratzerakoan’ /at the\nelection time/ word.\nAutomatic morpheme-grouping : Anyway, the\nmorpheme-grouping defined by hand depends on\nthe language pair and if we change it, we should\nredefine the grouping criteria, analyzing again the\ndetected errors. So, in order to find a language in-\ndependent way to define the most appropriate seg-\nmentation, we focus our research in establishing\na statistical method to decide which morphemes\nhave to be put into the same token. We observed\nthat the morphemes which generates most of the\nerrors are those which have not their own mean-\ning, those that need another morpheme to complete\ntheir meaning. We thought on using the mutual in-\nformation metric in order to measure statistical de-\npendence between two morphemes. 
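The paper does not spell out the exact formulation of this dependence measure, but a pointwise-mutual-information-style score over adjacent morphemes, feeding the thresholded grouping described just below, could be sketched roughly as follows (function and variable names are our own assumptions, not the authors' implementation):

import math
from collections import Counter

def morpheme_dependence(segmented_corpus):
    # segmented_corpus: iterable of words, each given as a list of morpheme tokens.
    unigrams, bigrams = Counter(), Counter()
    for word in segmented_corpus:
        unigrams.update(word)
        bigrams.update(zip(word, word[1:]))
    n_uni, n_bi = sum(unigrams.values()), sum(bigrams.values())
    scores = {}
    for (a, b), n_ab in bigrams.items():
        p_ab = n_ab / n_bi
        p_a, p_b = unigrams[a] / n_uni, unigrams[b] / n_uni
        scores[(a, b)] = math.log2(p_ab / (p_a * p_b))  # pointwise mutual information
    return scores

def pairs_to_group(segmented_corpus, threshold=0.5):
    # Morpheme pairs whose dependence exceeds the grouping threshold.
    return {pair for pair, score in morpheme_dependence(segmented_corpus).items() if score > threshold}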
We will group\nthose morphemes that are more dependent than a\nthreshold. On this experiment we tried different\nthresholds and we obtained the best results when\nit is set to 0.5 (value that involve grouping most of\nthe morphemes). In Figure 3 we can see ’auker-\natzerakoan’ /at the election time/ word segmented\nin this way.\n2suffix for verb normalisation3.2.2 Generating words from morphemes\nWhen working at the morpheme level, the out-\nput of our SMT system is a sequence of mor-\nphemes. In order to produce the proper Basque\ntext, we need to generate the words based on this\nsequence, so the output of the SMT system is post-\nprocessed to produce the final Basque translation.\nTo develop generation post-processing, we\nreuse the lexicon and two-level rules of our mor-\nphological tool Eustagger. The same generation\nengine is useful for all the segmentation options\ndefined in section 3.2.1 since we have produced\nthem based on the same analysis. However, we\nhave to face two main problems:\n•Unknown lemmas: some lemmas such as\nproper names are not in the Eustagger lexicon\nand could not be generated by it. To solve this\nproblem and to be able to generate inflection\nof those words, the synthesis component has\nbeen enriched with default rules for unknown\nlemmas.\n•Invalid sequences of morphemes: the output\nof the SMT system is not necessarily a well-\nformed sequence from a morphological point\nof view. For example, morphemes can be\ngenerated in a wrong order or they can be\nmissed or misplaced (i.e. a nominal inflec-\ntion can be assigned to a verb). In the current\nwork, we did not try to correct these mistakes,\nand when the generation module can not gen-\nerate a word it outputs the lemma without any\ninflection. A more refined treatment is left for\nfuture work.\n3.3 Incorporation of word-level language\nmodel\nWhen training our SMT system over the seg-\nmented test the language model used in decod-\ning is a language model of morphemes (or groups\nof morphemes depending on the segmentation op-\ntion). Real words are not available at decoding,\nbut, after generation we can incorporate a second77\nsentences words morph word-vocabulary morph-vocabulary\ntrainingSpanish58,2021,284,089 - 46,636 -\nBasque 1,010,545 1,699,988 87,763 35,316\ndevelopmentSpanish1,45632,740 - 7,074 -\nBasque 25,778 43,434 9,030 5,367\ntestSpanish1,44631,002 - 6,838 -\nBasque 24,372 41,080 8,695 5,170\nTable 1: Some statistics of the corpora.\nlanguage model based on words. The most appro-\npriate way to incorporate the word-level language\nmodel is using n-best list as was done in (Oflazer\nand El-Kahlout, 2007). We ask Moses to produce\na n-best list, and after generating the final transla-\ntion based on Moses output, we estimate the new\ncost of each translation incorporating word-level\nlanguage model. Once new cost is calculated the\nsentence with the lowest cost is selected as the final\ntranslation.\nThe weight for the word-level language model\nis optimized at Minimum Error Rate Training with\nthe weights of the rest of the models. Minimum\nError Rate Training procedure has been modified\nto post-process Moses output and to include word-\nlevel language model weight at optimization pro-\ncess.\n4 Experimental results\n4.1 Data and evaluation\nIn order to carry out this experiment we used the\nConsumer Eroski parallel corpus. 
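Before moving on to the data, the n-best re-scoring just described in Section 3.3 can be condensed into a few lines (an illustrative Python sketch; the interface of the word-form language model is assumed, not taken from the authors' implementation):

def rerank(nbest, wordform_lm_cost, lm_weight):
    # nbest: list of (decoder_cost, word_sequence) pairs, where decoder_cost is the total
    # cost Moses assigned at the morpheme level and word_sequence is the candidate after
    # the morphological generation post-process.
    # lm_weight: weight of the word-form language model, tuned during MERT.
    rescored = [(cost + lm_weight * wordform_lm_cost(words), words) for cost, words in nbest]
    return min(rescored, key=lambda pair: pair[0])[1]  # candidate with the lowest total cost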
This corpus is a\ncollection of 1036 articles written in Spanish (Jan-\nuary 1998 to May 2005, Consumer Eroski mag-\nazine, http://revista.consumer.es) along with their\nBasque, Catalan and Galician translations. It con-\ntains more than 1,200,000 Spanish words and more\nthan 1,000,000 Basque words. This corpus was\nautomatically aligned at sentence level3and it is\navailable4for research. Consumer Eroski maga-\nzine is composed by the articles which compare\nthe quality and prices of commercial products and\nbrands.\nWe have divided this corpus in three sets, train-\ning set (60,000 sentences), development set (1,500\nsentences) and test set (1,500 sentences), more de-\ntailed statistics on Table 1.\n3corpus was collected and aligned by Asier Alc ´azar from the\nUniversity of Missouri-Columbia\n4The Consumer corpus is accessible on-line via Universidade\nde Vigo (http://sli.uvigo.es/CLUVI/, public access) and Uni-\nversidad de Deusto (http://www.deli.deusto.es, research in-\ntranet).In order to assess the quality of the translation\nobtained using the systems, we used four auto-\nmatic evaluation metrics. We report two accuracy\nmeasures: BLEU, and NIST (Doddington, 2002);\nand two error measures: Word Error Rate (WER)\nand Position independent word Error Rate (PER).\nIn our test set, we have access to one Basque ref-\nerence translation per sentence. Evaluation is per-\nformed in a case-insensitive manner.\n4.2 Results\nThe evaluation results for the test corpus is re-\nported in Table 2. These results show that the\ndifferences at segmentation have a significant im-\npact at translation quality. Segmenting words ac-\ncording to the morphemes boundaries of the Eu-\nstagger lexicon does not involve any improvement.\nCompared to the baseline, which did not use any\nsegmentation, the results obtained for the evalua-\ntion metrics are not consistent and varies depend-\ning on the metric. According to BLEU segmenta-\ntion harms translation, but according the rest of the\nmetrics the segmentation slightly improves transla-\ntion, but this improvement is probably not statisti-\ncally significant.\nThe rest of the segmentation options, which are\nbased on the same analysis of Eustagger and con-\ntains the same morpheme sequences, consistently\noutperforms baseline according to all the metrics.\nBest results are obtained using the hand defined\ncriteria (based on the alignment errors), but au-\ntomatically defined segmentation criteria obtains\nsimilar results.\nDue to the small differences on the results ob-\ntained for the evaluation metrics we have carried\nout a statistical significance test (Zhang et al.,\nMay 2004) over BLEU. According with this, the\nsystem using hand defined segmentation signifi-\ncantly outperforms both the system using OneSuf-\nfix segmentation and the system using segmenta-\ntion based on mutual information. Difference be-\ntween the system using OneSuffix segmentation\nand the system based on mutual information are78\nBLEU NIST WER PER\nBaseline 10.78 4.52 80.46 61.34\nMorphemeBased-Eustagger 10.52 4.55 79.18 61.03\nMorphemeBased-OneSuffix 11.24 4.74 78.07 59.35\nMorphemeBased-AutoGrouping 11.24 4.66 79.15 60.42\nMorphemeBased-HandGrouping 11.36 4.69 78.92 60.23\nTable 2: BLEU, NIST, WER and PER evaluation metrics.\nSegmentation option Running tokens Vocabulary size BLEU\nNo Segmentation 1,010,545 87,763 10.78\nHand Defined grouping 1,546,304 40,288 11.36\nOne Suffix per word 1,558,927 36,122 11.24\nStatistical morph. grouping 1,580,551 35,549 11.24\nEustagger morph. 
boundaries 1,699,988 35,316 10.52\nTable 3: Correlation between token amount on the train corpus and BLEU evaluation results\nnot statistically significant.\nFinally, given the low scores obtained, we would\nlike to make two additional remarks. First, it shows\nthe difficulty of the task of translating into Basque,\nwhich is due to the strong syntactic differences\nwith Spanish. Second, the evaluation based on\nwords (or n-grams of words) always gives lower\nscores to agglutinative languages like Basque. Of-\nten one Basque word is equivalent to two or three\nSpanish or English words, so a 3-gram matching in\nBasque is harder to obtain having a highly negative\neffect on the automatic evaluation metrics.\n4.3 Correlation between segmentation and\nBLEU\nAnalyzing the obtained results, we have realized\nthat there are a correlation between the amount of\ntokens generated at segmentation and the results\nobtained at evaluation. Before segmentation, there\nare 1M words for Basque, which together with the\n1.2M words for Spanish, make the word align-\nment more difficult (due to the 1-to-n alignment\namount). Anyway, after segmenting the Basque\nwords according with the morpheme boundaries of\nEustagger, the Basque text contains 1.7M tokens\n(the same alignment problem is generated but in\nthe opposite direction) see Table 3.\nIntermediate segmentation options, where mor-\nphemes marked by Eustagger are grouped in dif-\nferent ways, get better results when the amount of\nthe generated tokens is closer to the amount of to-\nkens we have in Spanish part. We leave for future\nwork to experiment ways to reduce the different\nnumber of tokens of both languages.5 Conclusions and Future work\nWe have proved that the quality of the transla-\ntion varies significantly when applying different\noptions for word segmentation. Based on the same\noutput of morphological analyzer, we have seg-\nmented words in different ways creating more fine\nor coarse grained segments (from one token per\neach morpheme to a unique token for all suffixes of\na word). Surprisingly, the criteria based on consid-\nering each morpheme as a separate token obtains\nworse results than the system without segmenta-\ntion. Other segmentation options outperforms the\nbaseline, getting the best results with a hand de-\nfined intermediate grouping based on an alignment\nerror analysis.\nAnyway, the work done by hand is language de-\npendent and could not be reused for a different pair\nof languages, so we also tried a statistical way to\ndetermine the morpheme grouping criteria which\ngets almost as accurate results as those obtained\nwith the hand defined criterion. So we could use\nthis statistical grouping criteria to adapt our sys-\ntem to a different language pair such as English-\nBasque.\nAs future work, we thought on trying a differ-\nent measure to determine the statistical indepen-\ndence of the morphemes, as χ2. Besides, as the\ndependence between morphemes is calculated on\nthe monolingual text, a bigger monolingual corpus\ncould be used (instead of using just the Basque side\nof the bilingual corpus) for this.\nTaking into account the obtained correlation be-\ntween the token amount and translation quality.\nWe want to redefine the segmentation criteria to\nreduce the amount of tokens obtained. 
In such a\nway that the difference in the number of tokens of79\nboth languages would be reduced.\nAcknowledgement\nThis research was supported in part by the Span-\nish Ministry of Education and Science (OpenMT:\nOpen Source Machine Translation using hybrid\nmethods, TIN2006-15307-C03-01) and the Re-\ngional Branch of the Basque Government (An-\nHITZ 2006: Language Technologies for Multi-\nlingual Interaction in Intelligent Environments.,\nIE06-185). Gorka Labaka is supported by a PhD\ngrant from the Basque Government (grant code,\nBFI05.326).\nConsumer corpus has been kindly supplied by\nAsier Alc ´azar from the University of Missouri-\nColumbia and by Eroski Fundazioa.\nReferences\nAduriz, I. and A. D ´ıaz de Ilarraza. 2003. Morphosyn-\ntactic disambiguation ands shallow parsing in Com-\nputational Processing of Basque. In Inquiries into\nthe lexicon-syntax relations in Basque. Bernarrd Oy-\nharabal (Ed.) , Bilbao.\nBojar, Ondrej. 2007. English-to-Czech Factored\nMachine Translation. In Proceedings of the Sec-\nond Workshop on Statistical Machine Translation ,\nPrague, Czech Republic.\nDoddington, G. 2002. Automatic evaluation of Ma-\nchine Translation quality using n-gram cooccurrence\nstatistics. In Proceedings of HLT 2002 , San Diego,\nCA.\nGoldwater, S. and D. McClosky. 2005. Improving\nStatistical MT Through Morphological Analysis. In\nProceedings of the Conference on Empirical Meth-\nods in Natural Language Processing (EMNLP) , Van-\ncouver.\nKoehn, P. and K. Knight. 2003. Empirical Methods for\ncompound splitting. In Proceedings of EACL 2003 ,\nBudapest, Hungary.\nKoehn, Philipp, Hieu Hoang, Alexandra Birch, Chris\nCallison-Burch, Marcello Federico, Nicola Bertoldi,\nBrooke Cowan, Wade Shen, Christine Moran,\nRichard Zens, Chris Dyer, Ondrej Bojar, Alexandra\nConstantin, and Evan Herbst. 2007. Moses: Open\nSource Toolkit for Statistical Machine Translation.\nInAnnual Meeting of the Association for Compu-\ntational Linguistics (ACL) , Prague, Czech Republic,\nJune.\nKoskeniemmi, K. 1983. Two-level Model for Morpho-\nlogical Analysis. In Proceedings of the Eigth Inter-\nnational Joint Conference on Artificial Intelligence ,\nKarlsruhe, Germany.Minkov, E., K. Toutanova, and H. Suzuki. 2007. Gen-\nerating Complex Mophology for Machin Transla-\ntion. In Proceedings of 45th ACL , Prague, Czech\nRepublic.\nNießen, S. and H. Ney. 2004. Statistical machine trans-\nlation with scarce resources using morpho-syntactic\ninformation. Comput. Linguist. , 30(2):181–204.\nOch, F. and H. Ney. 2003. A systematic comparison of\nvarious statistical alignment models. Computational\nLinguistics , 29(1):19–51.\nOch, Franz Josef and Hermann Ney. 2002. Discrimi-\nnative training and maximum entropy models for sta-\ntistical machine translation. In ACL, pages 295–302.\nOch, Franz Josef. 2003. Minimum error rate training in\nstatistical machine translation. In ACL, pages 160–\n167.\nOflazer, Kemal and Ilknur Durgar El-Kahlout. 2007.\nExploring Different Representation Units in English-\nto-Turkish Statistical Machine Translation. In Pro-\nceedings of the Second Workshop on Statistical Ma-\nchine Translation , Prague, Czech Republic.\nPapineni, K., S. Roukos, T. Ward, and W. Zhu. 2002.\nBLEU: a method for automatic evaluation of ma-\nchine translation. In Proceedings of 40th ACL ,\nPhiladelphia, PA.\nRamanathan, Ananthakrishnan, Pushpak Bhat-\ntacharyya, Jayprasad Hegde, Ritesh M.Shah,\nand Sasikumar M. 2008. 
Simple Syntactic and Morphological Processing Can Help English-Hindi Statistical Machine Translation. In IJCNLP 2008: Third International Joint Conference on Natural Language Processing, Hyderabad, India.
Stolcke, Andreas. 2002. SRILM - An Extensible Language Modeling Toolkit. In Proc. Intl. Conf. Spoken Language Processing, Denver, Colorado, September.
Zhang, Ying, Stephan Vogel, and Alex Waibel. May 2004. Interpreting Bleu/NIST scores: How much improvement do we need to have a better system? In Proceedings of LREC 2004, Lisbon, Portugal.",
"main_paper_content": null
}
|
{
"decision": "Unknown",
"reviews": []
}
| 0 | 0 |
[] |
[] |
{
"id": "sarW6KyBRSY",
"year": null,
"venue": "EAMT 2015",
"pdf_link": "https://aclanthology.org/W15-4901.pdf",
"forum_link": "https://openreview.net/forum?id=sarW6KyBRSY",
"arxiv_id": null,
"doi": null
}
|
{
"title": "Exploiting portability to build an RBMT prototype for a new source language",
"authors": [
"Nora Aranberri",
"Gorka Labaka",
"Arantza Díaz de Ilarraza",
"Kepa Sarasola"
],
"abstract": "Nora Aranberri, Gorka Labaka, Arantza Díaz de Ilarraza, Kepa Sarasola. Proceedings of the 18th Annual Conference of the European Association for Machine Translation. 2015.",
"keywords": [],
"raw_extracted_content": "Exploiting portability to build an RBMT prototype\nfor a new source language\nNora Aranberri, Gorka Labaka, Arantza D ´ıaz de Ilarraza and Kepa Sarasola\nIXA Group\nUniversity of the Basque Country\nManuel Lardizabal 1, 20018 Donostia, Spain\n{nora.aranberri,gorka.labaka,a.diazdeilarraza,kepa.sarasola}@ehu.eus\nAbstract\nThis paper presents the work done to port\na deep-transfer rule-based machine trans-\nlation system to translate from a differ-\nent source language by maximizing the ex-\nploitation of existing resources and by lim-\niting the development work. Specifically,\nwe report the changes and effort required\nin each of the system’s modules to ob-\ntain an English-Basque translator, ENEUS,\nstarting from the Spanish-Basque Matxin\nsystem. We run a human pairwise compar-\nison for the new prototype and two statis-\ntical systems and see that ENEUS is pre-\nferred in over 30% of the test sentences.\n1 Introduction\nBuilding a corpus-based system is undeniably\nquicker than building a rule-based machine trans-\nlation (RBMT) system, given the availability of\nlarge quantities of parallel text. However, this is\noften not the case for many language pairs, which\nmakes building a mainstream statistical system\nsuboptimal. Usually, lesser-resourced languages\nopt for RBMT systems, where language-specific\nNLP tools and resources are crafted.\nHeavy investment and long development peri-\nods have been attributed to RBMT systems but\n(Surcin et al., 2013) pointed out that a large part of\nthe systems’ code is reusable. They state that 80%\nof Systran’s code belongs to the analysis mod-\nule, whereas the remaining 20% is equally divided\ninto transfer and generation. Transfer is language-\npair specific, but analysis and generation are built\nc�2015 The authors. This article is licensed under a Creative\nCommons 3.0 licence, no derivative works, attribution, CC-\nBY-ND.with information about one language only and they\nare therefore reusable for systems that use those\nlanguages. Rapid development of new language\npairs benefits from existing resources but also from\nmodular, stable infrastructures where new pairs\ncan be developed by modifying the linguistic data.\nAn example of RBMT portability attempts for\nlesser-resourced languages is the Apertium project\n(Forcada et al., 2011). Apertium is a free/open-\nsource shallow-transfer MT platform. Researchers\nhave been active in porting the system to different\nlanguage pairs (Peradin et al., 2014; Otte and Ty-\ners, 2011). The system specializes in translation\nbetween related languages where shallow transfer\nsuffices to produce good quality translations.\nShallow parsing is sometimes too limited for\ndissimilar language pairs. Unrelated languages of-\nten require a richer and moreflexible deeper trans-\nfer architecture to tackle differing linguistic fea-\ntures. Examples are (Gasser, 2012) and Matxin\n(Mayor et al., 2011). In this work we present an\nattempt to port the deep-transfer RBMT Matxin1,\ndesigned to cope with dissimilar languages.\nThe remaining work is organized as follows:\nSection 2 gives a brief overview of the architec-\nture of the Matxin system. Section 3 describes\nthe work done in each of the system’s modules.\nSection 4 provides the results of the new Matxin\nENEUS prototype’s evaluation. 
Finally, Section 5\npresents the conclusions and future work.\n2 General system features\nMatxin is a modular RBMT system originally de-\nveloped to translate from Spanish into Basque\n(Mayor et al., 2011). It follows the standard three-\nstep architecture, consisting of separate modules\n1Matxin:https://matxin.sourceforge.net\n3\nFigure 1: The general Matxin architecture.\nfor analysis, transfer and generation (Figure 1). It\nwas devised to translate between dissimilar lan-\nguages, that is, pairs that require deep analysis to\nenable translation and to do so, it works on de-\npendency trees and chunks, and includes a mod-\nule for reordering. Because it was developed with\nthe Spanish-Basque pair in mind, the architecture\ncan handle translation from analytic to agglutina-\ntive languages, thus dealing with rich morphology.\nThe portability exercise we present aims to ex-\namine the strengths and limitations of the Matxin\narchitecture, by measuring theflexibility of the\ninfrastructure and by specifying the language re-\nsource development needed for a new language\npair. In particular, we examine the work effort re-\nquired to change the source language and obtain\nEnglish to Basque translations.\n3 Portability exercise\nGiven the three-step architecture of Matxin, when\nmodifying the system to translate from a different\nsource language, wefirst need a completely new\nanalysis module. Next, the transfer rules need to\nbe updated to synchronize the new source with the\ntarget language. The generation module is mostly\nreusable and remains intact. In what follows, we\ndescribe the work done in each of the modules.\n3.1 Analysis\nPackages that analyze text at different levels are\navailable, even more so for mainstream languages\nsuch as English. Therefore, what needs to be con-\nsidered when selecting a package is whether it\nextracts the relevant information that the genera-\ntion module will require. The information Matxin\nneeds to translate into Basque is word forms, lem-\nmas, part-of-speech categories, chunks, and de-\npendency trees with named relations.\nNote that chunks and dependency trees are dif-\nferent ways of representing sentence structure and\nboth are necessary when translating into Basque.Chunks identify word groupings whereas depen-\ndency trees specify the relations between words. In\nBasque, postpositions are attached to the last word\nof the chunk they modify.2Therefore, chunks al-\nlow us to easily identify the word that needs to be\nflexed. Dependency relations provide the MT sys-\ntem with predicate-argument structures.\nTwo main contenders were found: Freeling, a\nrule-based analyzer developed at the Universitat\nPolit ´ecnica de Catalunya (Carreras et al., 2004)\nand the statistical analysis package developed at\nStanford University (de Marneffe et al., 2006).\nThe original architecture uses Freeling for Spanish\nanalysis, and using their English package would\nmake the integration easier, as tags are already\nknown by the system. Yet we carried out a small\ncomparison to opt for the best performing system.\nWe analyzed 50 sentences with both systems, 25\nregular sentences and 25 news headlines. We in-\ncluded both simple and complex sentences show-\ning a wide variety of features and structures. A\nsentence was to be correctly analyzed if all the\nlemmas, POS categories and the dependency tree\nwere correctly annotated. 28% of the sentences\nwere correctly analyzed by Freeling and 38% by\nStanford. 
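For reference, the per-token information listed above (word form, lemma, part-of-speech category, chunk membership and a named dependency relation to a head) that the transfer stage expects from whichever analysis package is chosen can be pictured as a record like the following; this container is a hypothetical Python illustration, not Matxin's actual internal format:

from dataclasses import dataclass

@dataclass
class AnalysedToken:
    # Per-token analysis the transfer module needs, regardless of the parser that produced it.
    form: str       # surface word form
    lemma: str
    pos: str        # part-of-speech category
    chunk_id: int   # chunk the token belongs to
    head: int       # index of the governing token in the dependency tree
    deprel: str     # named dependency relation to that head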
The remaining sentences show one or\nmore errors, which would have varying impact on\nthe translation process. Overall, the number of\nerrors made by Freeling was higher compared to\nStanford, 48 and 27 errors respectively. Freeling\ninserted 18 POS errors whereas Stanford inserted\n17 (12 in headings). Dependency tree analyses in-\nclude errors at different levels. One of the most se-\nvere error is the incorrect identification of the root\n(typically the main verb), which usually leads to\nthe whole translation being wrong. Freeling failed\nto identify the root in 6 occasions. Stanford, in\nturn, did not make this type of error.\nOverall, we saw that Stanford made fewer errors\n2We include subject and object case-markers within this class\nbecause they are processed equally.4\ncompared to Freeling. The popularity and devel-\nopment activity of this system at the time (Bach,\n2012; Sagodkar and Damani, 2012) made us opt\nfor the second package. The initial Spanish analy-\nsis component in Matxin was ported to English by\nintegrating a new analysis package and by updat-\ning tag equivalences to allow for interoperability.\n3.2 Transfer\nThe most labor-intensive component is the transfer\nmodule. In what follows, we examine the dictio-\nnaries and grammars that need to be updated in the\norder in which the architecture applies them.\nLexical transfer\nBilingual dictionaries are the basis for trans-\nlation and these had to be compiled to include\nEnglish-Basque equivalences. We used two main\nsources to build the new dictionaries. First, an\nEnglish-Basque dictionary was made available for\nresearch purposes by Elhuyar, a Basque language\ntechnology company. We obtained 16,000 pairs\nand 1,047 multi-word units from this resource.\nThe words in the Elhuyar dictionary are prob-\nably enough to translate the most frequent En-\nglish words and understand a general text. How-\never, we decided to try to increase the coverage\nof Matxin ENEUS with a second resource, that is,\nWordNet (Miller, 1995). It is a lexical database\nthat was initially built for English, where nouns,\nverbs, adjectives and adverbs are grouped around\ncognitive synonyms that refer to the same concept\ncalled synsets. Synsets are linked to each other\nthrough conceptual and lexical relations, making\nup a conceptual web of meaning-relations. Even\nif it wasfirst built for English, WordNets have\nbeen developed for other languages, as is the case\nof Basque (Pociello et al., 2010). Words in dif-\nferent languages share synsets and therefore it is\npossible to extract equivalences, creating a bilin-\ngual pseudo-dictionary. The Basque WordNet has\n33,442 synsets that are mapped to their English\ncounterparts. We have paired the variants of each\nmapped synset in all possible combinations, ob-\ntaining over 82,000 pairs after discarding multi-\nword units. These provide us with Basque equiv-\nalents for almost 32,000 English lemmas. Even if\nWordNet was not designed to be used as a dictio-\nnary and the equivalences have not been reviewed\nby an expert, we decided to include them in the\nsystem’s dictionary even if priority was given to\nthe Elhuyar data. 
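The pairing of synset variants and the priority merge with the Elhuyar entries can be sketched as follows (a minimal Python illustration; the data structures and function names are assumed, and the lack of expert review of the WordNet pairs is not modelled):

from itertools import product

def wordnet_pairs(mapped_synsets):
    # mapped_synsets: iterable of (english_variants, basque_variants) for each shared synset.
    # Pair the variants in all possible combinations, discarding multi-word units.
    pairs = set()
    for en_variants, eu_variants in mapped_synsets:
        for en, eu in product(en_variants, eu_variants):
            if " " not in en and " " not in eu:
                pairs.add((en, eu))
    return pairs

def merge_dictionaries(elhuyar, wordnet):
    # Union of both resources; Elhuyar equivalents are listed first so that they keep priority.
    merged = {}
    for en, eu in list(elhuyar) + sorted(wordnet):
        merged.setdefault(en, [])
        if eu not in merged[en]:
            merged[en].append(eu)
    return merged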
The union of both resources ac-plane + pos=[NN]→hegazkin + pos=[IZE][ARR] + num=[NUMS]\nplane + pos=[NN]→plano + pos=[IZE][ARR] + num=[NUMS]\nbig + pos=[JJ]→handi + pos=[ADJ IZO]\nbig+ pos=[JJR]→handi + pos=[ADJ IZO] + suf=[GRA][KONP]\nbig + pos=[JJS]→handi + pos=[ADJ IZO] + suf=[GRA][SUP]\ngo + pos=[VB]→joan + pos=[ADI]\ngo + pos=[VBZ]→joan + pos=[ADI]\ngo + pos=[VBG]→joan + pos=[ADI]\nFigure 2: Dummy examples of dictionary rules.\ncounts for around 35,000 entries.\nThe dictionary lists the source lemma and its\nPOS tag and points to the equivalent target lemma\ntogether with its POS and morphological infor-\nmation (Figure 2). The information for both lan-\nguages is the same, but the tag set used is different\nand generator-dependent. The information in the\nEnglish tag is itemized into one or more Basque\ntags. For example, the EnglishNNtag referring to\ncommon singular nouns is broken down into three\nseparate tags,IZE,ARRandNUMSreferring to\nnoun, common and singular, respectively.3\nThe dictionary lists all the possible equivalences\ngathered from the bilingual resources. Yet, Matxin\nENEUS selects thefirst available equivalent re-\ngardless of the context of use. The order in which\nalternatives are coded in the dictionary is based\non frequency in the case of the Elhuyar dictionary\nand therefore, this already introduces some sort\nof selection rule. The architecture allows creat-\ning context-specific selection rules and other word\nsense disambiguation (WSD) techniques can be in-\ntegrated but this is out of the scope of this work.\nAfter the information from the bilingual dic-\ntionary is collected, the selected target word is\nsearched for in a semantic dictionary (D ´ıaz et al.,\n2002) and features added if available.\nPreposition transfer\nEnglish prepositions are translated into Basque\nmainly through postpositions. As previously men-\ntioned, these postpositions are attached to the last\nword of the postpositional phrase (chunk) and the\ninformation about it must be moved to the relevant\nword. To allow for this, prepositions are processed\ndifferently, using a purposely-built dictionary. It\nconsists of English prepositions and their Basque\npostposition equivalents, where the lemmas and\nmorphological tags are specified.\n3Note that verbs are handled separately, and therefore, all\nforms carry the same neutral target tag in the dictionary.\n4Statistics for work in progress when only 20 prepositions\nhave been addressed. The level of ambiguity tends to increase\nas detailed disambiguation work is done.\n5\nSimple\nprepositionUnique\nequivalentMultiple4\nequivalentsAverage\nambiguity\nEnglish 66 20 46 3.8\nSpanish 20 7 13 3.9\nTable 1: Statistics for the preposition dictionary.\nWe have worked with a list of 66 English simple\nprepositions. We have identified 20 with a unique\ntranslation. 
The remaining 46 have an average of\n3.8 translations (ranging from 2 to 10) (Table 1).\nEquivalence rule Example\nby→er gative written by Wilde→Wildeekidatzia\nby→instrumental travel by plane→hegazkinezbidaiatu\nby→genitive abook by Shelly→Shellyrenliburu bat\nby→genitive + ondoan bythe door→atearen ondoan\nby→inessive bycandlelight→kandelaren argipean\nby→ablative hold by the hand→eskutikheldu\nby→genitive + arabera bythe barometer→barometroaren arabera\nby→adlative +\ntime-location genitivebynow→honezkero\nby→+ bider 3multiplied by 2→3bider2\nby→+ aurretik drive by your house→zure etxeaurretik\nFigure 3: Basque equivalences forby.\nThe linguistic work has to identify the different\nuses for the multiple equivalences, define contexts\nand write rules that will allow for the appropriate\nequivalent to be selected (Figure 3). Rules can in-\nclude different types of knowledge. By default, the\ndesign of Matxin allows including elements that\nare in direct dependency (lemma, POS, morpho-\nlogical, syntactic and semantic features). At the\ntime of write-up, 27 selection rules have been cre-\nated and further effort is envisaged. If we compare\nthe effort required for the English-Basque pair with\nthe existing work for the Spanish-Basque system,\nwe observe that the list includes 20 simple prepo-\nsitions, that is, about a third, out of which 7 have a\nsingle translation and the ambiguous ones have an\naverage of 3.9 translation options (ranging from 2\nto 11). This reveals that the linguistic work neces-\nsary to set up the preposition transfer for the new\npair is more labor-intensive. Rules are given full\npriority during selection, and translation equiva-\nlences which do not have a selection rule assigned\nto them are listed by frequency of appearance.\nIn addition to the equivalence table, Matxin\navails of two other sources of information, which\nare used when no selection rules apply: lexical-\nized syntactic dependency triplets and verb sub-\ncategorisation, both automatically extracted from\na monolingual corpus (Agirre et al., 2009).\nLexicalized triplets are groupings of verbs, lem-mas and argument cases with which each verb ap-\npears in the corpus (Figure 4). In the cases where\nselection rules are not sufficient, the verb is identi-\nfied and the lemma of the word to which the post-\nposition needs to be attached is searched for. If the\nverb-lemma combination is present, the candidate\nargument cases from the dictionary are checked\nagainst the triplets and thefirst matching selected.\nVerb Lemma Argument case\nemanunibertsitate inessive\nPaul ergative\ndative\namore absolutive\npartitive\n... ...\nFigure 4: Examples of triplets foreman(give).\nThe information contained in lexicalized triplets\nis often too precise and restrictive. If triplets do\nnot cover the verb-lemma combination, we turn\nto verb subcategorisation. This resource includes,\nordered by frequency, a list of the most common\nargument case combinations for each verb (Fig-\nure 5). The possible postpositions for each of the\nprepositions that depend on a verb are collected\nfrom the dictionary and matched against the sub-\ncategorisation information until the combination\nthat suits best is selected.\nVerb Paradigm Subject case Arg case Arg case\nsuntsitusubj-dObj ergative absolutive -\nsubj absolutive - -\nsubj-dObj ergative absolutive instrumental\n... ... ... 
...
Figure 5: Examples of verb subcategorization for suntsitu (destroy).
Because both Spanish and English use prepositions, the design of Matxin has been adequate for our goal. The preposition dictionary and selection rules were replaced, but verb subcategorisation and lexical triplets were reused, as they are Basque-specific and source-language-independent.
Verb transfer
Basque verbs carry considerable information, such as person and number of the subject and objects, tense, aspect and mood. In Spanish, information about the objects is not present. In English, the verbs carry even less information: tense, aspect and mood are present, but it is only in the case of the present tense third person singular that we know about the subject, thanks to the s mark attached to
Elimination of unnecessary information (4\nrules in total).\nType Number of rules\nauxiliary verb selection 20\naspect of main verb or auxiliary 65\nmodal-specific 2\nnegation 4\nparadigm selection and feature assignment 107\ntense 24\nTotal 222\nTable 2: Verb transfer rules by type.\nWhen building the prototype, considerable ef-\nfort was made to ensure wide verb coverage. Most\nof the tenses in the indicative have been covered,\nfor all four paradigms in Basque (subj, subj-dObj,\nsubj-dObj-iObj, subj-iObj) in the affirmative, neg-\native and questions, for active and passive voices.\nThe imperative was also included.\nWork was also done for modals, even if to a\nmore limited extent. Matxin ENEUS identifies the\nmost common modals: ability (can, could, would),\npermission and prohibition (must, mustnt, can,\nhave to), advice (should) and probability (may,\nmight, will) for affirmative and negative cases. De-\npending on the context, modals acquire a slightly\ndifferent meaning. At the time of writing, only one\nsense per modal was covered by the system.\nComplex sentences\nThe modifications mentioned so far describe\nhow simple sentences and their components are\ntreated. However, complex sentences require a\nmore intricate approach. The transfer rules that\nso far handledfinite verbs now need to consider\nthe varying translations of non-finite verbs as well\nas the permutations subordinate markers require.\nAlso, information movements are directed by more\nelaborate rules. For Matxin ENEUS, we ad-\ndressed, in their simplest forms, relative clauses,\ncompletives, conditionals and a number of adver-\nbial clauses (time, place and reason).\n3.3 Movements\nIt is theflexibility to move information along the\ndependency tree-nodes that provides the Matxin\narchitecture with the capacity to tackle dissimilar7\nlanguages (Mayor et al., 2011). In thisfirst porta-\nbility exercise few changes were introduced to the\nmovement rule-sets as basic structures in Span-\nish and English required similar basic movements.\nGenerally, Basque chunks (verbs aside) consist of\na number of lemmas and a last word to whichflex-\nion information is attached. Therefore, the basic\ninformation movements for both Spanish and En-\nglish have been (1) preposition information moved\nto the last word of the chunk, and (2) number\nand definiteness information of the source chunk\nmoved to the last word of the target chunk.\nAdditionally, the movement rule-set preced-\ning verb transfer was modified to address certain\nEnglish-specific structures. For example, English\nverb+toandverb+ingstructures, e.g.want to eat,\nintend to goand similar, require that the second\nverb is treated differently to how main verbs are\ntreated. This needs to be noted before the verbs\narrive in the verb transfer component. In order\nto do that, a special attribute needs to be passed\non to the verb phrase. We tested these two cases\nand saw that Matxin’s design can be appropriate\nfor language-specific structures.\n3.4 Generation\nThe generation component of an RBMT system\nis usually developed using target-language knowl-\nedge only to increase reuse possibilities. In\nMatxin, the three modules included in the gener-\nation component avail of Basque knowledge only\n(with the exception of the rule-set to address non-\ncanonical source language word order). 
First,\nthe sentence-level ordering rules in the genera-\ntion component establish the canonical word order\ngiven the elements in the dependency tree.\nSecondly, the chunk-level information stored at\nthe chunk-level node is passed on to the word that\nneeds to beflexed. Again, this set of rules avails\nof target language knowledge only. The rule-set is\nused as is for different source languages.\nFinally, the information collected over the trans-\nlation process (lemmas and corresponding tag se-\nquences) is passed on to the word generation mod-\nule, a morphological generator specifically devel-\noped for Basque, which was fully reused.\n4 System evaluation\nWe used human evaluation as the main indicator\nfor the prototype’s performance. Also, we ran au-\ntomatic metrics to compare their scores against thehuman evaluation even when it is known that au-\ntomatic scores tend to favor SMT systems over\nRBMT systems because they do not consider the\ncorrectness of the output but rather compare the\ndifference between the output and the reference\ntranslations (Callison-Burch et al., 2006). And the\nuse of a single reference accentuates this.\nTo get a perspective on the overall performance,\nwe ran the evaluation for two additional systems,\nan in-house statistical system, SMTs, and Google\nTranslate, as well as Matxin ENEUS. Our SMT\nsystem was trained on a parallel corpus of 12 mil-\nlion Basque words and 14 million English words\ncomprising user manuals, academic books and\nweb data. We implemented a phrase-based sys-\ntem using Moses (Koehn et al., 2007). To better\ndeal with the agglutinative nature of Basque, we\ntrained the system on morpheme-level segmented\ndata (Labaka, 2010). As a result, we need a gen-\neration postprocess to obtain real word forms for\nthe decoder. We incorporated a second language\nmodel (LM) based on real word forms to be used\nafter the morphological postprocess. We imple-\nmented the word form-based LM by using an n-\nbest list following (Olafzer and El-Kahlout, 2007).\nWefirst generate a candidate ranking based on the\nsegmented training. Next, these candidates are\npostprocessed. We then recalculate the total cost\nof each candidate by including the cost assigned\nby the new word form-based LM in the models\nused during decoding. Finally, the candidate list\nis re-ranked according to the new total cost. This\nrevises the candidate list to promote those that are\nmore likely to be real word form sequences. The\nweight for the word form-based LM was optimized\nwith minimum error rate training together with the\nweights for the rest of the models.\nWe used the same evaluation set for both the hu-\nman evaluation and the automatic metrics. It is a\nset of 500 sentences consisting of 250 sentences\nset aside from the training corpus and 250 out-of-\ndomain sentences from online news sites and mag-\nazines. All sentences contain at least one verb, are\nself-contained and have 5 to 20 tokens.\n4.1 Human evaluation\nWe performed a human evaluation for the three\nsystems mentioned above as part of a wider eval-\nuation campaign. We carried out a pairwise com-\nparison evaluation with non-expert volunteer par-\nticipants who accessed an evaluation platform on-8\nline. They were presented with a source sentence\nand two machine translations. They were asked\nto compare the translations and decide which was\nbetter. 
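The n-best re-ranking step described above for the in-house SMT baseline (decode over morpheme-segmented text, generate real word forms, add a word-form LM cost, re-rank) can be sketched as follows. All names and the lower-is-better cost convention are illustrative assumptions, not the actual Moses/LM setup used here.

```python
# Minimal sketch of re-ranking an n-best list with an extra word-form LM cost.
def rerank_nbest(nbest, generate_word_forms, wordform_lm_cost, lm_weight):
    """`nbest` is a list of (segmented_hypothesis, decoder_cost) pairs."""
    rescored = []
    for segmented, decoder_cost in nbest:
        surface = generate_word_forms(segmented)      # morphological postprocess
        total = decoder_cost + lm_weight * wordform_lm_cost(surface)
        rescored.append((surface, total))
    # Promote candidates that are more likely to be real word-form sequences.
    rescored.sort(key=lambda pair: pair[1])
    return rescored
```

As in the text, the weight given to the word-form LM would be tuned together with the other model weights, e.g. with minimum error rate training.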
They were given the options1st is better,\n2nd is betterandthey are both of equal quality.\nOver 551 participants provided responses in the\ncampaign which allowed us to collect over 7,500\ndata points for the systems we show here. We col-\nlected at least 5 evaluations per source sentence for\neach system-pair (2,500 evaluations per pair).\nWe adopted the following strategy to decide on\na winning system for each evaluated sentence in\neach system-pair comparison: if the difference in\nvotes between two systems is larger than 2, the sys-\ntem with the highest number of votes is the undis-\nputed winner (System X++). If the difference in\nvotes is 1 or 2, the system scoring higher is the\nwinner (System X+). If both systems score the\nsame amount of votes, the result is a draw (equal).\nFrom the evaluations collected (Figure 7), we\nsee that the output of Matxin ENEUS is considered\nbetter than its competitors 31-34% of the time, a\nsignificant proportion given the prototype’s rapid\ndevelopment and limited coverage. This is par-\nticularly interesting for hybridization purposes. It\nwould be invaluable to pinpoint the specific struc-\ntures in which this system succeeds and its specific\nstrengths to guide future hybridization attempts.\nSMTs and Google are preferred over the proto-\ntype. When compared against each other, the dif-\nference in sentences allocated to each system is not\nsignificant, with only 8 additional sentences allo-\ncated to SMTs (229 vs 221, 50 equal).\n4.2 Automatic scores\nWe provide BLEU and TER scores in Table 3. Low\nBLEU scores are common for agglutinative target\nlanguages when using word-based metrics. A uni-\ngram match in these languages can easily equate to\na 3-gram match in analytic languages, i.e., a word\nin Basque often consists of a lemma and number,\ndefiniteness and postpositional suffixes.\nThe human comparison evaluation tells us\nwhich translation candidate is preferred over an-\nother but it does not capture the distance between\ntheir quality. On the other hand, BLEU tries to pro-\nvide the difference in the overall quality of the sys-\ntems. Our results seem to suggest that Google has\na better overall quality whereas SMTs has more\nvariability in terms of quality, and this leads to our\nsystem being preferred for over 40% of the sen-\nFigure 7: Human comparison results.\nSystem BLEU TER\nSMTs 8.37 75.893\nGoogle 11.64 72.997\nMatxin ENEUS 4.27 83.940\nTable 3: Automatic scores.\ntences, despite having a lower BLEU score.\nIn the case of Matxin ENEUS, the overall qual-\nity seems to be lower, but it still surpasses the\nstatistical systems in over 30% of the sentences,\nwhich is not captured by BLEU.\n5 Conclusions\nWe have ported the Matxin deep-transfer rule-\nbased system to work with a different source lan-\nguage and described the requirements and effort\ninvolved in the process. More precisely, we have\nreplaced the analysis module with an existing En-\nglish package which provided us with the nec-\nessary lemma, morphological, chunk and depen-\ndency information. Most of the work was devoted\nto the transfer module: we compiled a new bilin-\ngual dictionary from an existing electronic ver-\nsion and WordNet; we wrote a preposition-specific\ndictionary with several disambiguation rules; we\nwrote the verb transfer grammar and we specified\na number of information movements across the de-\npendency tree to address complex sentences and\nnon-finite structures. 
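The winner-decision strategy used in the pairwise human evaluation above maps the vote counts collected for a sentence to an undisputed winner, a winner or a draw. A small illustrative sketch (function and label names are ours):

```python
# Sketch of the per-sentence winner decision from pairwise comparison votes.
def decide_winner(votes_x, votes_y):
    """Return 'X++', 'X+', 'Y+', 'Y++' or 'equal' from the vote counts."""
    diff = votes_x - votes_y
    if diff == 0:
        return "equal"
    winner = "X" if diff > 0 else "Y"
    return winner + ("++" if abs(diff) > 2 else "+")

print(decide_winner(4, 1))  # -> 'X++'  (difference larger than 2)
print(decide_winner(3, 2))  # -> 'X+'   (difference of 1 or 2)
print(decide_winner(2, 2))  # -> 'equal'
```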
The generation module was\nfully reused as the target language remained the\nsame. We estimate that this process required about\n8 person month full-time work for a linguist and\n1 person month full-time work for a computer sci-\nentist, although this estimates will vary depending\non each professional’s skills and familiarity with\nthe architecture and linguistic work.\nOverall, we have gathered evidence that, thanks\nto its modularity, the use of trees and theflex-9\nibility it offers to move information across tree-\nnodes, Matxin can be a suitable architecture to de-\nvelop systems for dissimilar languages or those for\nwhich deep-transfer is necessary.\nWe have evaluated the new English-to-Basque\nprototype by a human pair-wise comparison to-\ngether with two statistical systems. Although these\nsystems are generally preferred, Matxin ENEUS\nsurpasses statistical competitors in 30% of the\ncases. Apart from continuing with development\nwork for the new language pair, we now aim tofind\nout the characteristics of those cases, in particular,\nfor hybridization opportunities.\nAcknowledgements\nThe research leading to this work received funding\nfrom the People Programme (Marie Curie Actions)\nof the European Union’s Seventh Framework Pro-\ngramme (FP7/2007/2013) under REA agreement\n302038, FP7-ICT-2013-10-610516 (QTLeap) and\nSpanish MEC agreement TIN2012-38523-C02\n(Tacardi) with FEDER funding.\nReferences\nAgirre, Eneko, Aitziber Atutxa, Gorka Labaka, Mikel\nLersundi, Aingeru Mayor and Kepa Sarasola. 2009.\nUse of rich linguistic information to translate prepo-\nsitions and grammar cases to Basque.EAMT 2009,\nBarcelona, Spain. 58–65.\nAlegria, I ˜naki, Arantza D ´ıaz de Ilarraza, Gorka Labaka,\nMikel Lersundi, Aingeru Mayor and Kepa Sarasola.\n2005. An FST grammar for verb chain transfer in\na Spanish-Basque MT System.FSMNLP , Lecture\nNotes in Computer Science, 4002:295–296.\nBach, Nguyen. 2012. Dependency structures for statis-\ntical machine translation.SMNLP-2012, Donostia,\nSpain. 65–69.\nCallison-Burch, Chris, Miles Osborne, and Philipp\nKoehn. 2006. Re-evaluating the role of BLEU in\nmachine translation research.EACL-2006, Trento,\nItaly. 249–256.\nCarreras, Xavier, Isaac Chao and Llu ´ıs Padr ´o and\nMuntsa Padr ´o. 2012. FreeLing: An Open-Source\nSuite of Language Analyzers.LREC-2004, Lisbon.\nD´ıaz de Ilarraza, Arantza, Aingeru Mayor and Kepa\nSarasola. 2002. Semiautomatic labelling of seman-\ntic features.COLING-2002, Taipei, Taiwan.\nForcada, Mikel, Mireia Ginest ´ı-Rosell, Jacob Nord-\nfalk, Jim O’Regan, Sergio Ortiz-Rojas, Juan An-\ntonio P ´erez-Ortiz, Felipe S ´anchez-Martnez, Gema\nRam´ırez-Snchez and Francis Tyers. 2011. Aper-\ntium: a free/open-source platform for rule-basedmachine translation.Machine Translation Journal,\n25(2):127–144.\nGasser, Michael. 2012. Toward a rule-based system\nfor English-Amharic translation.SALTMIL-AfLaT-\n2012, Istanbul, Turkey.\nKoehn, Philipp, Hieu Hoang, Alexandra Birch, Chris\nCallison-Burch, Marcello Federico, Nicola Bertoldi,\nBrooke Cowan, Wade Shen, Christine Moran,\nRichard Zens, Chris Dyer, Ondej Bojar, Alexan-\ndra Constantin and Evan Herbst. 2007. Moses:\nopen source toolkit for statistical machine transla-\ntion.ACL-2007, Interactive Poster and Demonstra-\ntion Sessions, Prague, Czech Republic.\nGorka Labaka. 2010. EUSMT: Incorporating Linguis-\ntic Information into SMT for a Morphologically Rich\nLanguage.PhD, University of the Basque Country.\nde Marneffe, Marie-Catherine, Bill MacCartney and\nChristopher Manning. 2006. 
Generating Typed\nDependency Parses from Phrase Structure Parses.\nLREC-2006, Genoa, Italy.\nMayor, Aingeru, I ˜naki Alegria, Arantza Diaz de Ilar-\nraza, Gorka Labaka, Mikel Lersundi and Kepa Sara-\nsola. 2011. Matxin, an open-source rule-based ma-\nchine translation system for Basque.Machine Trans-\nlation Journal, 25(1):53–82.\nMayor, Aingeru, Mans Hulden and Gorka Labaka.\n2012. Developing an Open-Source FST Grammar\nfor Verb Chain Transfer in a Spanish-Basque MT\nSystem.FSMNLP-2012, Donostia, Spain.\nMiller, George. 1995. WordNet: A Lexical\nDatabase for English.Communications of the ACM,\n38(11):39–41.\nOflazer, Kemal and Ilknur Durgar El-Kahlout. 2007.\nExploring Different Representation Units in English-\nto-Turkish Statistical Machine Translation.WMT-\n2007, Prague, Czech Republic. 25–32.\nOtte, Pim and Francis. Tyers. 2011. Rapid rule-based\nmachine translation between Dutch and Afrikaans.\nEAMT-2011, Leuven, Belgium. 153–160.\nPeradin, Hrvoje, Filip Petkovski and Francis. Tyers.\n2014. Shallow-transfer rule-based machine transla-\ntion for the Western group of South Slavic.LREC-\n2012, Reykjav ´ık, Iceland. 25–30.\nPociello, Elisabete, Aitziber Atutxa and Izaskun Aldez-\nabal. 2010. Methodology and construction of the\nBasque WordNet.Language Resources and Evalu-\nation, 45:121–14.\nSangodkar, Amit and Om Damani. 2012. Re-ordering\nSource Sentences for SMT.LREC-2012, Istambul,\nTurkey. 2164–2171.\nSurcin, Sylvain, Elke Lange and Jean Senellart. 2007.\nRapid Development of New Language Pairs at SYS-\nTRAN.XI MT Summit, Copenhagen, Denmark.\n443–449.10",
"main_paper_content": null
}
|
{
"decision": "Unknown",
"reviews": []
}
| 0 | 0 |
[] |
[] |
{
"id": "pqKunKo3w9h",
"year": null,
"venue": "EAMT 2011",
"pdf_link": "https://aclanthology.org/2011.eamt-1.15.pdf",
"forum_link": "https://openreview.net/forum?id=pqKunKo3w9h",
"arxiv_id": null,
"doi": null
}
|
{
"title": "Choosing the best machine translation system to translate a sentence by using only source-language information",
"authors": [
"Felipe Sánchez-Martínez"
],
"abstract": "Felipe Sánchez-Martínez. Proceedings of the 15th Annual conference of the European Association for Machine Translation. 2011.",
"keywords": [],
"raw_extracted_content": "Choosing thebest machine translation system totranslate asentence by\nusing only source-language information\nFelipe S´anchez-Mart ´ınez\nDep. deLlenguatges iSistemes Inform `atics\nUniversitat d’Alacant\nE-03071, Alacant, Spain\[email protected]\nAbstract\nThis paper describes anovelapproach\naimed toidentify apriori which subset of\nmachine translation (MT) systems among a\nknownsetwill produce themost reliable\ntranslations foragivensource-language\n(SL) sentence. Weaimtoselect thissub-\nsetofMTsystems byusing only informa-\ntionextracted from theSLsentence tobe\ntranslated, andwithout access totheinner\nworkings oftheMT systems being used.\nAsystem able toselect inadvance, with-\nouttranslating, thatsubset ofMTsystems\nwillallowmulti-engine MTsystems tosave\ncomputing resources andfocus onthecom-\nbination oftheoutput ofthebest MTsys-\ntems. Theselection ofthebestMTsystems\nisdone byextracting asetoffeatures from\neach SLsentence andthen using maximum\nentrop yclassifiers trained overasetofpar-\nallel sentences. Preliminary experiments on\ntwoEuropean language pairs showasmall,\nnon-statistical significant impro vement.\n1Introduction\nMachine translation (MT) hasbecome aviable\ntechnology thathelps individuals inassimilation\n—to get the gistofatextwritten inalanguage the\nreader does notunderstand— anddissemination\n—to produce adraft translation tobepost-edited for\npublication— tasks. However,none ofthediffer-\nentapproaches toMT,whether statistical (Koehn,\n2010), example-based (Carl andWay,2003), rule-\nbased (Hutchins andSomers, 1992) orhybrid (Thur -\nmair,2009), always pro vide thebest results. This\nc/circlecopyrt2011 European Association forMachine Translation.iswhysome researchers haveinvestig ated thede-\nvelopment ofmulti-engine MT(MEMT) systems\n(Eisele, 2005; Machere yandOch, 2007; Duetal.,\n2009; Duetal.,2010) aimed toprovide transla-\ntions ofhigher quality than those produced bythe\nisolated MTsystems inwhich theyarebased on.\nMEMT systems canbeclassified according to\nhowtheywork. Ononehand, wefind systems\nthatcombine thetranslations provided byseveral\nMTsystems intoaconsensus translation (Bang a-\nloreetal.,2001; Bang alore etal.,2002; Matuso v\netal.,2006; Heafield etal.,2009; Duetal.,2009;\nDuetal.,2010); theoutput ofthese MEMT sys-\ntems may differfrom those provided bytheindi-\nvidual MTsystem stheyarebased on.Ontheother\nhand, wehavesystems thatdecide which transla-\ntion, among allthetranslations computed bythe\nMTsystems theyarebased on,isthemost appro-\npriate one(Nomoto, 2004; ZwartsandDras, 2008)\nandoutput thistranslation without changing itin\nanyway.In-between, wefindtheMEMT systems\nthatbuildaconsensus translation from areduced\nsetoftranslations, i.e.systems thatfirstchose the\nsubset with themost promising translations, and\nthen combine these translations toproduce asingle\noutput (Machere yandOch, 2007).\nEventhough MEMT systems thatselect themost\npromising translation and those that workona\nreduced subset oftranslations donotuseallthe\ntranslations computed byalltheMTsystem, both\nkinds ofMEMT systems need totranslate theinput\nsource-language (SL) sentence asmanytimes as\ndifferent MTsystems theyuse. This factmakes\nitdifficulttointegrate MEMT systems inenviron-\nments wher eresponse timeandrequir edresources\n(mainly amount ofmemory andcomputing speed)\nareconstrained. Inaddition, thisalsoforces MEMT\nsystems tokeeptheamount ofMTsystems they\nMik el L. F orcada, Heidi Depraetere, Vincen t V andegh i n ste (ed s. 
)\nPr o c e e dings of the 15th Confer enc e of the Eur op e an A sso c i a t ion for Machine T r anslation , p. 97\u0015104\nLeuv en, Belgium, Ma y 2011*\n* See erratum at the end of the paper.*\nusetoaminimum inorder tokeeptheamount of\nneeded resources low.\nInthispaper wedescribe anovelapproach aimed\ntoidentify apriori which subset ofMTsystems\namong aknownsetwillproduce themost reliable\ntranslations foragivenSLsentence. Asystem able\ntoselect inadvance, without translating, thatsubset\nofMTsystems from aknownsetofMTsystems\nwill allowMEMT systems tosavecomputing re-\nsources andfocus onthecombinat ionoftheoutput\nofthebest MTsystems. Atthesame time, such a\ntool will allowthenumber ofMTsystems inwhich\ncurrent MEMT systems arebased tobeincreased.\nTheselection ofthebestMTsystems isdone by\nextracting asetoffeatures from each SLsentence\nandthen using maximum entrop yclassifiers trained\noverasetofparallel sentences. During training the\nsource sentences inthetraining parallel corpus are\nautomatically translated with thedifferent MTsys-\ntems being considered, andthen thetargetsentences\nareevaluated against thereference translations in\nthetraining parallel corpus. Toautomatically deter -\nmine theMTsystem producing thebesttranslation\nduring training wehavetried severalMTevaluation\nmeasures atthesentence level.\nThe restofthepaper isorganised asfollo ws.\nNextsection presents theSLfeatures used todis-\ncriminate between thedifferent MTsystems, and\nexplains thetraining procedure and thewayin\nwhich theclassifiers areused forthetask athand.\nSection 3then describes theexperiments conducted,\nwhereas results arediscussed inSection 4.Thepa-\nperends with some concluding remarks andplans\nforfuture work.\n2System selection asaclassification\nproblem\nWeaimtoselect thesubset ofMTsystems that\nwillproduce thebesttranslations byusing only in-\nformation extracted from thesource sentence to\ntranslate, without access totheinner workings of\ntheMTsystems being used. Toachie vethisgoal we\nhaveused binary maximum entrop yclassifiers (see\nbelow)andtried severalfeatures, some ofwhich\nneeds theinput sentence tobeparsed bymeans of\nastatistical parser (see Section 3toknowabout\ntheparser wehaveused),1while the others canbe\n1Itmay beargued thatparsing asentence may beastime con-\nsuming astranslating it;however,inMEMT asentence is\ntranslated severaltimes, andthus avoiding toperform such\ntranslations, evenbyusing computationally expensi veproce-\ndures such asparsing, helps saving computational resourceseasily obtained from theSLsentence. Note that\nsome ofthe(SL) features wehaveused havealso\nbeen used incombination with other features for\nsentence-le velconfidence estimation (Blatz etal.,\n2003; Quirk, 2004; Specia etal.,2009), arelated\ntask aimed atassessing thecorrectness ofatrans-\nlation. 
Adescription ofthefeatures wehavetried\nfollows:\n•maximum depth oftheparse tree[gmaxd ],\n•mean depth oftheparse tree[gmeand ],\n•joint likelihood oftheparse treetandthe\nwordswinthesentence, i.e.p(t,w)[gjl],\n•likelihood oftheparse treegiventhewords,\ni.e.p(t|w)[gcl],\n•sentence likelihood asprovided bythemodel\nused toparse thesentence, i.e.summing out\nallpossible parse trees [gsentl ],\n•maximum number ofchild nodes pernode\nfound intheparse tree[gmaxc ],\n•mean number ofchild nodes pernode\n[gmeanc ],\n•number ofinternal nodes [gint ],\n•number ofwords whose mean shift (seebelow)\nisgreaterthan agiventhreshold (values used:\n1, 2, 3, 4, 5)[smean ],\n•number ofwords whose variance overtheshift\nisgreaterthan agiventhreshold (values used:\n2, 4, 6, 8, 10)[svar ],\n•number ofwords whose mean fertility ,i.e.\nthemean number oftargetwords towhich a\nsource wordisaligned, isgreater than agiven\nthreshold (values used: 0.25, 0.5,0.75, 1,1.25,\n1.50, 1.75, 2)[fmean ],\n•number ofwords whos evariance overthefer-\ntility isgreater than agiventhreshold (values\nused: 0.25, 0.5,0.75, 1,1.25, 1.50, 1.75, 2)\n[fvar ],\n•sentence length inwords [len],\n•number ofwords notappearing inthecorpora\nused totrained thecorpus-based MTsystem\nused [unk],and\nbecause each sentence isparsed only once.\n98\n•likelihood ofthesentence asprovided byann-\ngram language model trained onaSLcorpus\n[slm].\nThe shift ofasource wordatposition iisde-\nfined asabs(j−i),where jistheposition of\nthefirsttargetwordtowhich thatsource wordis\naligned. Intheexperiments wecomputed themean\nandvariance ofboth theshift andthefertility from\naparallel corpus bycomputing wordalignments in\ntheusual way,i.e.byrunning GIZA++(Och and\nNey,2003) inboth translation directions andthen\nsymmetrising both setsofalignments through the\n“grow-diag-final-and” heuristic (Koehn etal.,2003)\nimplemented inMOSES(Koehn etal.,2007). We\nthen usethese pre-computed values when obtaining\nthefeatures ofaninput sentence.\nThefeatures obtained from theparse treeofthe\nsentence trytodescribe thesentence interms ofthe\ncomple xityofitsstructure. Thefeatures related to\ntheshift andthefertility ofthewords tobetrans-\nlated areintended todescribe thesentence interms\nofthecomple xityofitswords.Therestoffeatures\n—sentence length, likelihood ofthesentence tobe\ntranslated andnumber ofwords notappearing inthe\nparallel corpora used totrain thecorpus-based MT\nsystems— might behelpful todiscriminate between\ntherule-based MTsystems andthecorpus-based\nones.\nTofindthesetofrelevantfeatures wehaveused\nthechi-square method (Liu andSetiono, 1995) that\nevaluates features individually .Werankedallthe\nfeatures according totheir chi-squared statistic (De-\nGroot andSchervish, 2002, Sec. 7.2)with respect\ntotheclasses andselect thefirstNfeatures inthe\nranking. Todetermine thebestvalue of Nweeval-\nuated thetranslation performance achie vedona\ndevelopment corpus with allpossible values ofN.\nTraining .Foreach MT system used wehave\ntrained amaximum entrop ymodel (Bergeretal.,\n1996) thatwillallowoursystem tocompute foran\ninput sentence theprobability ofthatsentence be-\ningbesttranslated byeach system. 
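The shift and fertility statistics behind the smean, svar, fmean and fvar features above are pre-computed once from a word-aligned parallel corpus. The following is a minimal sketch of that pre-computation and of the threshold-count features; the data layout (a list of (source_tokens, alignment) pairs, with alignment a set of (i, j) links from the symmetrised GIZA++ output) and all names are our own assumptions, not the paper's code.

```python
from collections import defaultdict
from statistics import mean, pvariance

def word_statistics(corpus):
    shifts = defaultdict(list)       # source word -> observed shifts abs(j - i)
    fertilities = defaultdict(list)  # source word -> number of target words linked to it
    for source_tokens, alignment in corpus:
        links = defaultdict(set)
        for i, j in alignment:
            links[i].add(j)
        for i, word in enumerate(source_tokens):
            targets = links.get(i, set())
            fertilities[word].append(len(targets))
            if targets:
                # Shift: distance to the first target word aligned with position i.
                shifts[word].append(abs(min(targets) - i))
    stats = {}
    for word in fertilities:
        s = shifts.get(word, [0])
        f = fertilities[word]
        stats[word] = {"shift_mean": mean(s), "shift_var": pvariance(s),
                       "fert_mean": mean(f), "fert_var": pvariance(f)}
    return stats

def count_above(sentence_tokens, stats, key, threshold):
    """Sentence-level feature: number of words whose statistic exceeds a threshold."""
    return sum(1 for w in sentence_tokens
               if w in stats and stats[w][key] > threshold)
```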
Inorder totrain\nthese classifiers, andforeach different evaluation\nmeasure wehavetried, each parallel sentence inthe\ntraining corpus ispreprocessed asfollows:\n1.theSLsentence istranslated into theTL\nthrough alltheMTsystems;\n2.each translation isevaluated against therefer-\nence translation inthetraining parallel corpus;3.allthemachine translated sentences areranked\naccording totheevaluation scores obtained,\nandthesubset ofMTsystem producing the\nbest translation aredetermined; note thatit\nmay happen thatseveralMTsystems produce\nthesame translation, orthatseveralmachine\ntranslated sentences areassigned thesame\nscore.\nAfter thispreprocessing, thecorpus ofinstances\nfrom which thebinary classifier associated toan\nMTsystem istrained consist of as manyinstances\nasparallel sentences inthetraining corpus. Each\ninstance inthiscorpus isclassified asbelonging\ntotheclass ofthatMTsystem ifitappears inthe\nsubset ofMTsystems producing thetranslat ion(s)\nleading with thebestevaluation score.\nSystem selection. When aSLsentence istobe\ntranslated, firstthesentence isparsed, andthefea-\ntures described aboveareextracted; then, theprob-\nability ofeach MTsystem being thebest system\ntotranslate thatsentence isestimated bymeans of\nthedifferent maximum entrop ymodels. Thesys-\ntems finally selected totranslate theinput sentence\naretheones with thehighest probabilities. Inthis\npapers wehavetested thisapproach byselecting\nonly asingle MTsystem, theonewith thehighest\nprobability .\n3Experimental settings andresour ces\nWehavetested ourapproach inthetranslation of\nEnglish andFrench textsintoSpanish. Thesystems\nwehaveused are:theshallo w-transfer rule-based\nMT system APERTIUM(Forcada etal.,2011),2\ntherule-based MT system SYSTRAN(Surcin et\nal.,2007),3thephrase-ba sedstatistical MTsystem\nMOSES(Koehn etal.,2007),4theMOSES-CHART\nhierarchical phrase-based MT(Chiang, 2007) sys-\ntem, andthehybridexample-based–statistical MT\nsystem CUNEI(Phillips andBrown,2009).5\nThethree corpus-based systems, namely MOSES,\nMOSES-CHARTandCUNEI,were trainedusing the\ndatasetreleased aspartoftheWMT10 shared trans-\nlation task.6Thecorpora used totrain andevaluate\nthefivebinarymaximum entrop yclassifiers were\n2http://www.apertium.org\n3Wehaveused theversion ofSystran provided byYahoo!\nBabelfish:http://babelfish.yahoo.com\n4http://www.statmt.org/moses/\n5http://www.cunei.org\n6http://www.statmt.org/wmt10/\ntraining- parallel.tgz\n99\nPair Corpus Num. sent. Num. words\nen-esTraining 98,480en:2,996,310; es:3,420,636\nDevelopment 1,984en:49,003;es:57,162\nTest 1,985en:55,168;es:65,396\nfr-esTraining 99,022fr:3,513,404; es:3,449,999\nDevelopment 1,987fr:60,352;es:59,551\nTest 1,982fr:64,392;es:64,440\nTable 1:Number ofsentences andwords inthecorpora used totrain andevaluate ourMTsystem(s)\nselection approach.\nextracted from thecorpus oftheUnited Nations\nthatisalsodistrib uted aspartoftheWMT10 shared\ntranslation task. TheFrench–Spanish parallel cor-\npuswasobtained from theEnglish–French andthe\nEnglish–Spanish parallelcorpora bypairing French\nandSpanish sentence shaving astranslation the\nsame English sentence.7After remo ving duplicated\nsentences andsentences longer than 200words, we\nused thefirst2,000 sentences fordevelopment, the\nsecond 2,000 sentences fortesting, andthenext\n100,000 sentences fortraining. 
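A compact sketch of the training-instance labelling and of the selection step described in Section 2. The interfaces are assumptions made for illustration only: per-system translation functions, a sentence-level score oriented so that higher is better (TER and PER would have to be negated), and per-system classifiers returning the probability of being the best system.

```python
# Illustrative sketch, not the paper's code.
def best_systems(source, reference, systems, sentence_score):
    """Names of the systems whose translation obtains the best sentence-level score."""
    scores = {name: sentence_score(translate(source), reference)
              for name, translate in systems.items()}
    best = max(scores.values())
    return {name for name, score in scores.items() if score == best}

def training_instances(parallel_corpus, systems, sentence_score, features):
    """One binary instance per system and per training sentence pair (ties allowed)."""
    instances = {name: [] for name in systems}
    for source, reference in parallel_corpus:
        winners = best_systems(source, reference, systems, sentence_score)
        x = features(source)
        for name in systems:
            instances[name].append((x, name in winners))
    return instances

def select_system(source, classifiers, features):
    """Pick the system whose binary classifier assigns the highest probability."""
    x = features(source)
    return max(classifiers, key=lambda name: classifiers[name](x))
```

As in the paper, only the single most probable system is selected here; selecting the top-k systems would simply take the k highest probabilities instead of the argmax.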
Note thatsome sen-\ntences inthese corpora could notbeparsed with\ntheparser wehaveused (seebelow)and, therefore,\ntheywere remo vedbefore running theexperiments.\nTable 1provides detailed information about these\ncorpora andthenumber ofsentences finally used in\ntheexperiments.\nToparse theinput SLsentences weused the\nBerk eleyParser (Petro vetal.,2006; Petro vand\nKlein, 2007) together with theparsing models avail-\nable forEnglish andFrench from theparser web-\nsite.8Tocompute thelikelihood oftheSLsen-\ntences weused a5-gram language model trained by\nmeans oftheIRSTLM language modelling toolkit9\n(Federico etal.,2008) byusing theSLcorpora dis-\ntributed aspartoftheWMT10 shared translation\ntask. Variance andmean shifts andfertili tieswere\ncalculated onthesame corpora used totrain the\ncorpus-based MTsystems.\nAfter translating theSLsentences inthetrain-\ningcorpora through alltheMTsystems being con-\nsidered, weused theASIYAevaluation toolkit10\n(Gim ´enez andM`arquez, 2010) toevaluate, atthe\nsentence level,thetranslation provided byeach\nMTsystem against theTLreference inthetraining\n7Original corpora can bedownloaded from http://\nwww.statmt.org/wmt10/un.en- fr.tgz andhttp:\n//www.statmt.org/wmt10/un.en- es.tgz\n8http://code.google.com/p/berkeleyparser/\n9http://hlt.fbk.eu/en/irstlm\n10http://www.lsi.upc.edu/ ˜nlp/Asiya/parallel corpora. Forthatweused theprecision-\noriented measure BLEU (Papineni etal.,2002),\ntwoeditdistance-based measures, PER andTER\n(Sno veretal.,2006); andMETEOR (Lavieand\nAgarwal,2007), ameasure aimed atbalancing pre-\ncision andrecall thatconsiders stemming and, only\nforsome languages, synon ymy lookup using Word-\nNet. Inourexperiments weonly used stemming\nwhen computing thelexical similarity oftwowords.\nTotrain and test thefivebinary maximum\nentrop yclassifiers we used the WEKA ma-\nchine learning toolkit (Witten and Frank,\n2005) with default parameters; theclass im-\nplementing themaximum entrop yclassifier is\nweka.classifiers.functions.Logis-\ntic.The class implementing the chi square\nmethod weused toselect thesetofrelevantfeatures\nonadevelopment corpus isweka.attribute-\nSelection.ChiSquaredAttributeEval .\nWithrespect totheinstances used totrain thefive\nbinary maximum entrop yclassifiers andhowmany\ntimes aninstance happens tobelong tomore than a\nclass (MT system), Table 2reports thepercentage\nofsentences inthetraining corpora forwhich the\ntranslation ortranslations being assigned thebest\nevaluation score areproduced byMdifferent MT\nsystems. Recall thatMmay begreater than one\nbecause more than anMTsystem may produce the\nsame translation orbecause more than amachine\ntranslated sentence may beassigned thesame eval-\nuation score. 
Itisworthnoting thatthepercentage\nofsentence forwhich theoutput ofmore than an\nMTsystem gets thehighest score islargerinthe\ncase ofTER andPER than inthecase oftheother\ntwoevaluation measures.\n4Results anddiscussion\nTable 3reports, forthetwolanguage pairs wehave\ntried, thetranslation performance, asmeasured by\ndifferent MTevaluation measures, achie vedbythe\n100\nPair Measure M=1M=2M=3M=4M=5\nen-esBLEU 82.8% 9.9% 5.6% 0.8% 0.9%\nPER 58.7% 23.6% 12.5% 3.5% 1.7%\nTER 62.1% 22.3% 11.1% 2.9% 1.6%\nMETEOR 83.5% 9.3% 5.4% 0.7% 1.1%\nfr-esBLEU 74.4% 12.8% 6.4% 3.3% 3.1%\nPER 51.6% 21.9% 13.7% 7.2% 5.6%\nTER 52.6% 22.1% 13.2% 6.7% 5.4%\nMETEOR 74.0% 11.9% 5.9% 3.0% 5.2%\nTable 2:Percentage ofsentences inthetraining corpora forwhich thebestevaluation score isassigned to\nthetranslation ortranslations produced byMdifferent MTsystems.\nPair Configuration BLEU PER TER METEOR\nen-esBest system 0.3481 (M)0.3581 (MC)0.4851 (M)0.2745 (C)\nSystem selection 0.3529 (11) 0.3582 (3) 0.4838 (8) 0.2762 (13)\nOracle 0.3905 0.3299 0.4409 0.2965\nfr-esBest system 0.3146 (C)0.4128 (C) 0.5880 (C)0.2281 (C)\nSystem selection 0.3192 (19) 0.4109 (16) 0.5861 (16) 0.2286 (22)\nOracle 0.3467 0.3913 0.5548 0.2389\nTable 3:Performance achie vedbythebest MTsystem, bythesystems selected through ourapproach\n(system selection), andbythecombination oftranslations providing thebestpossible perform ance (oracle).\nThesystem achie ving thebest performance atthecorpus levelandthenumber offeatur esused byour\napproach arereported between brack ets.Mstands forMOSES,MCforMOSES-CHART,andCforCUNEI.\nbest MTsystem atthecorpus level(reported be-\ntween brack ets), theperformance achie vedbyour\napproach, andthatoftheoracle, i.ethebest pos-\nsible performance. The latter wascalculated by\ntranslating alltheSLsentences inthetestcorpus\nthrough alltheMTsystem being used, andthen se-\nlecting foreach sentence thetranslation getting the\nbestevaluation score. Thenumber offeatures used\nbyourapproach after feature selection isreported\nbetween brack ets.\nResults inTable 3showthatourmethod very\nslightly impro vestheperformance achie vedby\nthebest MT system forboth language pairs, al-\nthough this small impro vement islargerinthe\ncase ofEnglish–Spanish. 95% confidenc einterv als\ncomputed bybootstrap resampling (Koehn, 2004)\nshowalargeoverlapping between theperformance\nachie vedbythebestsystem andthatofoursystem\nselection approach. Note thatnooverlapping oc-\ncurs between theconfidence interv alsofthebest\nsystem andthatoftheoracle. Itisworthnoting that\nonthedevelopment corpus theimpro vement was\nlargerforfr-es than foren-es ,although still\nverysmall tobestatistically significant.\nAmanual inspection ofthefirst500sentencesintheen-es testcorpus together with their auto-\nmatic translations showthatmost ofthetimes the\nMTsystems produce translations ofsimilar qual-\nity,andtherefore itishard tochose oneofthem\nasthebest translation. 
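The pairwise bootstrap resampling test (Koehn, 2004) used above to compare the selected-system output with the best single system can be sketched as follows; the number of resamples and all names are illustrative, and `corpus_score` stands for any corpus-level metric computed over lists of hypotheses and references.

```python
import random

def bootstrap_wins(hyps_a, hyps_b, refs, corpus_score, samples=1000):
    """Fraction of resampled test sets on which system A outscores system B."""
    n = len(refs)
    wins = 0
    for _ in range(samples):
        idx = [random.randrange(n) for _ in range(n)]
        score_a = corpus_score([hyps_a[i] for i in idx], [refs[i] for i in idx])
        score_b = corpus_score([hyps_b[i] for i in idx], [refs[i] for i in idx])
        if score_a > score_b:
            wins += 1
    return wins / samples
# A value close to 1.0 (e.g. >= 0.95) indicates that A is significantly better.
```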
For the first 500 sentences in the en-es test corpus we ranked the translations provided by the different MT systems we have used, without access to the reference translation, and found out that the difference between the BLEU score achieved by the best performing MT system for the first 500 sentences of the en-es test corpus, i.e. MOSES (0.3926), and that of the best translation manually selected (0.3928) is even lower than the one obtained through our approach. This may be explained by the fact that the three corpus-based systems we have used were trained on the same parallel corpora, and also because of the homogeneity of the corpora we have used for training and testing.
With respect to the number of times each system is chosen by our approach when translating the test corpora, Table 4 reports the percentage of times this happens for each system and MT evaluation measure. Note that when the en-es system selection is trained using PER, most of the times it chooses MOSES-CHART; it may be concluded that the reduced number of features chosen by the feature selection method on the development corpus for this language pair and evaluation measure does not allow the system to discriminate between the different MT systems.
Table 4: Percentage of times each system is chosen when translating the test corpora. M stands for MOSES, MC for MOSES-CHART, C for CUNEI, A for APERTIUM, and S for SYSTRAN.
  Pair   Measure   M       MC      C       A      S
  en-es  BLEU      32.9%   51.1%   2.6%    0.1%   13.3%
         PER       2.9%    95.8%   0.0%    0.0%   1.3%
         TER       53.6%   36.0%   5.5%    0.0%   4.9%
         METEOR    28.8%   18.5%   41.8%   0.0%   10.9%
  fr-es  BLEU      0.2%    42.5%   38.1%   0.0%   19.2%
         PER       0.0%    28.4%   59.8%   0.0%   11.8%
         TER       0.2%    36.7%   53.7%   0.0%   9.4%
         METEOR    0.0%    26.6%   63.2%   0.0%   10.2%
Finally, the features that happen to be relevant with the majority of evaluation measures are (see Section 2 for a description of each one):
• for en-es: gmaxd, gmeand, gcl, gsentl, smean for thresholds 1 and 2, and svar for thresholds 2, 4 and 6; and
• for fr-es: len, gcl, gsentl, gint, smean for thresholds 1 and 2, svar for thresholds 2, 4, 6 and 10, fmean for thresholds 0.25, 0.5 and 0.75, and slm.
5 Concluding remarks
In this paper we have presented a novel approach aimed to select the subset of MT systems, among a known set of systems, that will produce the most reliable translations for a given sentence by using only information extracted from that sentence. Preliminary experiments in the translation of English and French texts into Spanish show a small, non-statistically-significant improvement compared to the translation provided by the MT system performing best on the whole test corpus. In addition, a manual selection of the best MT system on a per-sentence basis shows that it is hard to perform such a selection because most of the sentences are translated similarly by most of the MT systems.
As future work we plan to try different configurations of WEKA as well as use a development corpus to tune the trained classifiers. 
Wealsoplan to\nincorporate newfeatures, useMTsystems trained\nondifferent corpora, usecorpora with sentencescoming from different sources, andevaluate the\ntranslation performance whenafixednumber of\nMTsystems areselected through ourapproach and\nthen their translations arecombined using MANY\n(Barrault, 2010).\nAckno wledgements\nWethank SergioOrtiz-Rojas forhishelp andideas,\nInmaculada Ruiz L´opez forperforming themanual\nranking ofthefirst500sentences intheEnglish–\nSpanish testcorpus, andMikelL.Forcada andFran-\ncisM.Tyers fortheir help with themanuscript.\nWorkfunded bytheEuropean Association forMa-\nchine Translation through its2010 sponsorship of\nactivities program andbytheSpanish Ministry of\nScience andInnovation through project TIN2009-\n14009-C02-01.\nRefer ences\nBang alore, S.,G.Bordel, andG.Riccardi. 2001. Com-\nputing consensus translation from multiple machine\ntranslation systems. InProceedings oftheIEEE\nWorkshop onAutomatic Speec hReco gnition andUn-\nderstanding ,pages 351–354.\nBang alore, S.,V.Murdock, and G.Ricca rdi. 2002.\nBootstrapping bilingual data using consensus trans-\nlation foramultilingual instant messaging system.\nInProceedings of19th International Confer ence on\nComputational Linguistics ,pages 1–7, Taipei, Tai-\nwan.\nBarrault, L.2010. MANY: open source machine trans-\nlation system combination. PragueBulletin ofMath-\nematical Linguistics ,(93):147–155. Fourth Machine\nTranslation Marathon. Dublin, Ireland.\nBerger,A.L.,S.A.Della Pietra, and V.J.Della\nPietra. 1996. Amaximum entrop yapproach tonatu-\nrallanguage processing. Computational Linguistics ,\n22(1):39–71.\n102\nBlatz, J.,E.Fitzgerald, G.Foster ,S.Gandrab ur,\nC.Goutte, A.Kulesza, A.Sanchis, and N.Ueff-\ning. 2003. Confidence estimation formachine\ntranslation. Technical report, Technical Report Nat-\nuralLanguage Engineering Workshop Final Report,\nJohns Hopkins University .\nCarl, M.andA.Way,editors .2003. Recent Advances\ninExample-Based Machine Translation ,volume 21.\nSpringer .\nChiang, D.2007. Hierarchical phrase-based transla-\ntion. Computational Linguistics ,33(2):201–228.\nDeGroot, M.H.andM.J.Schervish. 2002. Probability\nandStatistics .Addison-W esley,third edition.\nDu,J.,Y.Ma,andA.Way.2009. Source-side conte xt-\ninformed hypothesis alignment forcombining out-\nputs from machine translation systems. InPro-\nceedings oftheTwelfthMachine Translation Summit ,\npages 230–237, Ottawa,ON, Canada.\nDu,J.,P.Pecina, andA.Way.2010. Anaugmented\nthree-pass system combination frame work: DCU\ncombination system forWMT 2010. InProceedings\noftheFifthACLWorkshop onStatistical Machine\nTranslation ,pages 271–276, Uppsala, Sweden.\nEisele, A.2005. First steps towards multi-engine ma-\nchine translation. InProceedings oftheACLWork-\nshop onBuilding and Using Parallel Texts,pages\n155–158, Ann Arbor ,MI,USA.\nFederico, M,N.Bertoldi, and M.Cettolo. 2008.\nIRSTLM: anopen source toolkit forhandling large\nscale language models. InProceedings ofInter -\nspeec h,pages 1618–1621, Melbourne, Australia.\nForcada, M. L., M. Ginest ´ı-Rosell, J.Nordf alk,\nJ.O’Re gan, S.Ortiz-Rojas, J.A.P´erez-Ortiz,\nF.S´anchez-Mart ´ınez, G.Ram´ırez-S ´anchez, andF.M.\nTyers. 2011. Apertium: afree/open-source platform\nforrule-based machine translation. Machine Trans-\nlation .Special Issue onFree/Open-Source Machine\nTranslation (Inpress).\nGim´enez, J.and L.M`arquez. 2010. Asiya: an\nopen toolkit forautomatic machine translation (me-\nta-)evaluation. Prague Bulletin ofMathemati cal\nLinguistics ,(94):77–86. 
FifthMachine Translation\nMarathon. LeMans, France.\nHeafield, K.,G.Hanneman, andA.Lavie. 2009. Ma-\nchine translation system combination with flexible\nwordordering. InProceedings oftheFourth ACL\nWorkshop onStatistical Machine Translation ,pages\n56–60, Suntec, Singapore.\nHutchins, W.J.andH.L.Somers. 1992. AnIntroduc-\ntion to Machine Translation .Academic Press.\nKoehn, P.,F.J.Och, andD.Marcu. 2003. Statistical\nphrase-based translation. InProceedings ofthe2003\nConfer ence oftheNorth Ameri canChapter oftheAs-\nsociation forComputational Linguistics onHumanLangua geTechnolo gy,pages 48–54, Morristo wn,NJ,\nUSA.\nKoehn, P.,H.Hoang, A.Birch, C.Callison-Burch,\nM.Federico, N.Bertoldi, B.Cowan,W.Shen,\nC.Moran, R.Zens, C.Dyer ,O.Bojar ,A.Constantin,\nandE.Herbst. 2007. Moses: open source toolkit\nforstatistical machine translation. InProceedings of\nthe45th Annual Meeting oftheACL,pages 177–180,\nMorristo wn,NJ,USA.\nKoehn, P.2004. Statistical significance tests forma-\nchine translation evaluation. InProceedings ofthe\n2004 Confer ence onEmpiri calMethods inNatur al\nLangua geProcessing ,pages 388–395, Barcelona,\nSpain.\nKoehn, P.2010. Statistical Machine Translation .Cam-\nbridge University Press.\nLavie,A.andA.Agarwal.2007. METEOR: Anau-\ntomatic metric forMT evaluation with high levels\nofcorrelation with human judgments. InProceed-\nings oftheSecond Workshop onStatistical Machine\nTranslation ,pages 228—-231, Prague, Czech Repub-\nlic.\nLiu, H.andR.Setiono. 1995. Chi2: Feature selec-\ntionanddiscretiz ation ofnumeric attrib utes. InPro-\nceedings oftheIEEE 7thInternational Confer ence\nonTools with Artificial Intellig ence,pages 388–391.\nMachere y,W.andF.J.Och. 2007. Anempirical study\noncomputing consensus translations from multiple\nmachine translation systems. InProceedings ofthe\nConfer ence onEmpirical Methods inNatur alLan-\nguageProcessing ,pages 986–995, Prague, Czech\nRepublic.\nMatuso v,E.,N.Ueffing, andH.Ney.2006. Computing\nconsensus translation from multiple machine transla-\ntion systems using enhanced hypotheses alignment.\nInProceedings ofthe11th Confer ence oftheEuro-\npean Chapter oftheAssociation forComputational\nLinguistics ,pages 33–40, Trento, Italy.\nNomoto, T.2004. Multi-engine machine translation\nwith voted language model. InProceedings ofthe\n42nd Annual Meeting oftheAssociation forCom-\nputational Linguistics ,pages 494–501, Barcelona,\nSpain.\nOch, F.J.andH.Ney.2003. Asystematic compari-\nsonofvarious statistical alignment models. Compu-\ntational Linguistics ,29(1):19–51.\nPapineni, K.,S.Rouk os,T.Ward,andW.J.Zhu. 2002.\nBLEU: amethod forautomatic evaluation ofma-\nchine translat ion. InProceedings ofthe40th An-\nnual meeting oftheAssociation forComputational\nLinguistics ,pages 311–318, Philadelphia, PA,USA.\nPetro v,S.andD.Klein. 2007. Impro vedinference for\nunlexicalized parsing. InProceedings oftheAnnual\nConfer ence oftheNorth Amer ican Chapter oftheAs-\nsociation forComputational Linguistics ,pages 404–\n411, Rochester ,NY,USA.\n103\nPetro v,S.,L.Barrett, R.Thibaux, andD.Klein. 2006.\nLearning accurate, compact, andinterpretable tree\nannotation. InProceedings ofthe21st International\nConfer ence onComputational Linguistics and the\n44th annual meeting oftheAssociation forCompu-\ntational Linguistics ,pages 433–440, Sydne y,Aus-\ntralia.\nPhillips, A.B.andR.D.Brown.2009. Cunei machine\ntranslation platform: system description. 
InProceed-\nings oftheThirdWorkshop onExample-Based Ma-\nchine Translation ,pages 29–36, Dublin, Ireland.\nQuirk, C.2004. Training asentence-le velmachine\ntranslation confidence measure. InProceedings of\ntheThefourth international confer ence onLangua ge\nResour cesandEvaluation ,pages 525–828, Lisbon,\nPortug al.\nSnover,M.,B.Dorr,R.Schw artz, L.Micciulla, and\nJ.Makhoul. 2006. Astudy oftranslation editrate\nwith targeted human annotation. InProceedings of\nthe7thConfer ence oftheAssociation forMachine\nTranslation intheAmericas ,pages 223–231, Cam-\nbridge, MA, USA.\nSpecia, L.,M.Turchi, Z.Wang, J.Shawe-T aylor ,and\nC.Saunders. 2009. Impro ving theconfidence ofma-\nchine translation quality estimates .InProceedings\nofTwelfth Machine Translation Summit ,pages 136–\n143, Ottawa,Canada.\nSurcin, S.,E.Lange, andJ.Senellart. 2007. Rapid de-\nvelopment ofnewlanguage pairs atSYSTRAN. In\nProceedings oftheEleventh MTSummit ,pages 443–\n449, Copenhagen, Denmark.\nThurmair ,G. 2009. Comparing different architec-\ntures ofhybrid machine translation systems. InPro-\nceedings oftheTwelfth Machine Translation Summit ,\npages 340–347, Ottawa,ON, Canada.\nWitten, I.H.andE.Frank. 2005. Data mining: practi-\ncalmachine learning tools andtechniques .Elsevier,\nsecond edition.\nZwarts, S.andM.Dras. 2008. Choosing theright\ntranslation: asyntactically informed classification\napproach. InProceedings ofthe22nd International\nConfer ence onComputational Linguistics ,pages\n1153–1160, Manchester ,UK.\n104Erratum_________________________\nStatistical significant tests performed by pair\nbootstrap resampling (Koehn, 2004) show\nthat the difference in performance between\nthe system performing best at the document level\nand that of the system selection approach\ndescribed in this paper is statistically significant\nmetrics we have used, with the exception of with p=0.05 for all the automatic MT evaluation\nthe METEOR scores obtained for the French-Spanish\nlanguage pair.",
"main_paper_content": null
}
|
{
"decision": "Unknown",
"reviews": []
}
| 0 | 0 |
[] |
[] |
{
"id": "ugxJ-vz2YGj",
"year": null,
"venue": "EAMT 2011",
"pdf_link": "https://aclanthology.org/2011.eamt-1.13.pdf",
"forum_link": "https://openreview.net/forum?id=ugxJ-vz2YGj",
"arxiv_id": null,
"doi": null
}
|
{
"title": "Using word alignments to assist computer-aided translation users by marking which target-side words to change or keep unedited",
"authors": [
"Miquel Esplà-Gomis",
"Felipe Sánchez-Martínez",
"Mikel L. Forcada"
],
"abstract": "Miquel Esplà, Felipe Sánchez-Martínez, Mikel L. Forcada. Proceedings of the 15th Annual conference of the European Association for Machine Translation. 2011.",
"keywords": [],
"raw_extracted_content": "Using word alignments to assist computer-aided translation users by\nmarking which target-side words to change or keep unedited\nMiquel Espl `aand Felipe S ´anchez-Mart ´ınez and Mikel L. Forcada\nDepartament de Llenguatges i Sistemes Inform `atics\nUniversitat d’Alacant, E-03071 Alacant, Spain\n{mespla,fsanchez,mlf }@dlsi.ua.es\nAbstract\nThis paper explores a new method to im-\nprove computer-aided translation (CAT)\nsystems based on translation memory (TM)\nby using pre-computed word alignments be-\ntween the source and target segments in the\ntranslation units (TUs) of the user’s TM.\nWhen a new segment is to be translated by\nthe CAT user, our approach uses the word\nalignments in the matching TUs to mark\nthe words that should be changed or kept\nunedited to transform the proposed trans-\nlation into an adequate translation. In this\npaper, we evaluate different sets of align-\nments obtained by using GIZA++. Experi-\nments conducted in the translation of Span-\nish texts into English show that this ap-\nproach is able to predict which target words\nhave to be changed or kept unedited with\nan accuracy above 94% for fuzzy-match\nscores greater or equal to 60%. In an ap-\npendix we evaluate our approach when new\nTUs (not seen during the computation of\nthe word-alignment models) are used.\n1 Introduction\nComputer-aided translation (CAT) systems based\non translation memory (TM) (Bowker, 2002;\nSomers, 2003) and, optionally, additional tools such\nas terminology databases (Bowker, 2003), are the\ntranslation technology of choice for most profes-\nsional translators, especially when translation tasks\nare very repetitive and effective recycling of previ-\nous translations is feasible.\nWhen using a TM-based CAT system to trans-\nlate a source segment s/prime, the system provides the\nc/circlecopyrt2011 European Association for Machine Translation.set of translation units (TUs) {(si, ti)}N\ni=1whose\nfuzzy-match score is above a given threshold Θ,\nand marks which words in each source-language\n(SL) segment sidiffer from those in s/prime. It is how-\never up to the translator to identify which target\nwords in the corresponding target-language (TL)\nsegments tishould be changed to convert tiintot/prime,\nan adequate translation of s/prime.\nThe method we propose and evaluate in this pa-\nper is aimed at recommending the CAT user which\nwords of tishould be changed by the translator\nor kept unedited to transform tiintot/prime. To do so,\nwe pre-process the user’s TM to compute the word\nalignments between the source and target segments\nin each TU. Then, when a new segment s/primeis to be\ntranslated, the TUs with a fuzzy-match score above\nthe threshold Θare obtained and the alignment be-\ntween the words in siandtiare used to mark which\nwords in tishould be changed or kept unedited.\nRelated work. In the literature one can find dif-\nferent approaches that use word or phrase align-\nments to improve existing TM-based CAT systems;\nalthough, to our knowledge, none of them use word\nalignments for the purpose we study in this pa-\nper. 
Simard (2003) focuses on the creation of TM-\nbased CAT systems able to work at the sub-segment\nlevel by proposing as translation sub-segments ex-\ntracted from longer segments in the matching TUs.\nTo do this, he implements the translation spotting\n(V´eronis and Langlais, 2000) technique by using\nstatistical word-alignment methods (Och and Ney,\n2003); translation spotting consists of identifying,\nfor a pair of parallel sentences, the words or phrases\nin a TL segment that correspond to the words in a\nSL segment. The work by Bourdaillet et al. (2009)\nfollows a similar approach, although it does not\nfocus on traditional TM-based CAT systems, butMik el L. F orcada, Heidi Depraetere, Vincen t V andeghinste (eds.)\nPr o c e e dings of the 15th Confer enc e of the Eur op e an Asso ciation for Machine T r anslation , p. 81\u001588\nLeuv en, Belgium, Ma y 2011\non the use of a bilingual concordancer to assist\nprofessional translators.\nMore similar to our approach is the one by Kra-\nnias and Samiotou (2004) which is implemented on\ntheESTeam CAT system. Kranias and Samiotou\n(2004) align the source and target segments in each\nTU at different sub-segment levels by using a bilin-\ngual dictionary (Meyers et al., 1998), and then use\nthese alignments to (i) identify the sub-segments in\na translation proposal tithat need to be changed,\nand (ii) propose a machine translation for them.\nIn this paper we propose a different way of using\nword alignments in a TM-based CAT system to\nalleviate the task of professional translators. The\nmain difference between our approach and those\npreviously described is that in our approach word\nalignments are used only to recommend the words\nto be changed or kept unedited, without proposing\na translation for them, so that the user can focus\non choosing a translation where words have to be\nchanged. It is worth noting that as we do not change\nthe translation proposals in any way, our approach\ndoes not affect the predictability of TM proposals\nand the way in which fuzzy-match scores (Sikes,\n2007) are interpreted by the CAT user. In addition,\nour system is independent of any external resources,\nsuch as MT systems or dictionaries, as opposed to\nthe work by Kranias and Samiotou (2004).\nThe rest of the paper is organized as follows. Sec-\ntion 2 presents the way in which word alignments\nare used by our approach and the different word\nalignment methods we have tried. Section 3 then de-\nscribes the experimental framework, whereas Sec-\ntion 4 discusses the results obtained. Section 5\nincludes some concluding remarks and plans for fu-\nture work. In Appendix A we evaluate our approach\nwhen it is applied to new TUs not seen during the\ncomputation of the word-alignment models used.\n2 Methodology\nLetwijbe the word in the j-th position of segment\ntiwhich is aligned with word vikin the k-th posi-\ntion of its counterpart segment si. Ifvikis part of\nthe match between siands/prime(the new segment to\nbe translated), then this indicates that wijmight be\npart of the translation of that word and, therefore, it\nshould be kept unedited. 
Conversely, if vik′ is not part of the match between si and s′, this indicates that wij′ might not be the translation of any of the words in s′ and it should be changed (see Figure 1). Note that wij may not be aligned with any word in si, and that in these cases nothing can be said about it. This information may be shown using colour codes, for example, red for the words to be changed, green for the words to be kept unedited and yellow for those unaligned words for which nothing can be said.
Figure 1: Target word wij may have to be kept unedited because it is aligned with source word vik, which is in the part of si that matches s′. Target word wij′ may have to be changed because it is aligned with source word vik′, which is in the part of si that does not match s′. As target word wij″ is not aligned to any source word in si, nothing can be said about it.
To determine if word wij in the target proposal ti should be changed or kept unedited, we compute the fraction of words vik aligned to wij which are common to both si and s′:

fK(wij, s′, si, ti) = ( Σ_{vik ∈ aligned(wij)} matched(vik) ) / |aligned(wij)|

where aligned(wij) is the set of source words in si which are aligned with target word wij in ti, and matched(vik) equals 1 if word vik is part of the match between si and s′ (the segment to be translated) and 0 otherwise. Function matched(x) is based on the optimal edit path, obtained as a result of the word-based Levenshtein distance (Levenshtein, 1966) between the segment to be translated and the SL segment of the matching TU.
The fraction fK(wij, s′, si, ti) may be interpreted as the likelihood that word wij has to be kept unedited. If |aligned(wij)| happens to be zero, fK(wij, s′, si, ti) is arbitrarily set to 1/2, meaning “do not know”.
We have chosen the likelihood that word wij will be kept unedited to depend on how many SL words aligned with it are matched with the SL segment to be translated. It may happen that wij is aligned with one or more words in si that are matched with words in s′ and, at the same time, with one or more unmatched words in si. In the experiments we have tried two ways of dealing with this: one that requires all SL words aligned with wij to be matched, and another one that only requires the majority of the words aligned with wij to be matched. These strategies have been chosen because of their simplicity, although it could also be possible to use, for example, a maximum entropy classifier (Berger et al., 1996) in order to determine which words should be changed or kept unedited; in that case, fK would be one of the features used by the maximum entropy classifier.
To illustrate these ideas, Figure 2 shows an example of a word-aligned pair of segments (si and ti) and a segment s′ to be translated. As can be seen, the word he in ti is aligned with the word él in si, which does not match with any word in s′. Therefore, he should be marked to be changed. Conversely, the words his and brother are aligned with su and hermano, respectively, which are matched in s′ and should therefore be kept unedited. Finally, the word missed is aligned with three words in si: echó and de, which are matched in s′, and menos, which is not matched. 
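A minimal sketch of the computation just defined: matched() derived from one optimal edit path of the word-based Levenshtein distance, fK, and the resulting keep/change/unknown marks, replayed on the example of Figure 2. The hand-written alignment and all names are ours, not the authors' implementation (a real CAT system would take the alignments from GIZA++).

```python
def matched_words(s_prime, s_i):
    """Positions of s_i that are matched with s_prime along one optimal edit path."""
    n, m = len(s_prime), len(s_i)
    d = [[0] * (m + 1) for _ in range(n + 1)]
    for i in range(n + 1):
        d[i][0] = i
    for j in range(m + 1):
        d[0][j] = j
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            sub = 0 if s_prime[i - 1] == s_i[j - 1] else 1
            d[i][j] = min(d[i - 1][j] + 1, d[i][j - 1] + 1, d[i - 1][j - 1] + sub)
    matched = set()
    i, j = n, m
    while i > 0 and j > 0:           # trace back one optimal path
        if s_prime[i - 1] == s_i[j - 1] and d[i][j] == d[i - 1][j - 1]:
            matched.add(j - 1)
            i, j = i - 1, j - 1
        elif d[i][j] == d[i - 1][j - 1] + 1:
            i, j = i - 1, j - 1
        elif d[i][j] == d[i][j - 1] + 1:
            j -= 1
        else:
            i -= 1
    return matched

def f_k(j, alignment, matched):
    """fK for target position j; `alignment` maps a target position to the
    set of source positions aligned with it."""
    aligned = alignment.get(j, set())
    if not aligned:
        return 0.5                   # unaligned target word: "do not know"
    return sum(1 for k in aligned if k in matched) / len(aligned)

def mark(t_i, s_i, s_prime, alignment, criterion="majority"):
    matched = matched_words(s_prime, s_i)
    labels = []
    for j, _ in enumerate(t_i):
        fk = f_k(j, alignment, matched)
        if criterion == "unanimity":
            labels.append("keep" if fk == 1 else "change" if fk == 0 else "?")
        else:
            labels.append("keep" if fk > 0.5 else "change" if fk < 0.5 else "?")
    return labels

# The example of Figure 2 (alignment written by hand for illustration).
s_i = "él echó de menos a su hermano".split()
t_i = "he missed his brother".split()
s_new = "ella echó de casa a su hermano".split()
alignment = {0: {0}, 1: {1, 2, 3}, 2: {5}, 3: {6}}
print(list(zip(t_i, mark(t_i, s_i, s_new, alignment, criterion="unanimity"))))
# -> [('he', 'change'), ('missed', '?'), ('his', 'keep'), ('brother', 'keep')]
```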
In this last case (the word missed), if the criterion of unanimity is applied, the word would be marked neither as “keep” nor as “change”. Otherwise, if the criterion of majority is applied, the word would be marked to be changed.
Figure 2: Example of alignment and matching.
  ti: he [edit]   missed [?]   his [keep]   brother [keep]
  si: él echó de menos a su hermano
  s′: ella echó de casa a su hermano
For the experiments in this paper we have used word alignments obtained by means of the free/open-source GIZA++ tool (http://code.google.com/p/giza-pp/) (Och and Ney, 2003), which implements standard word-based statistical machine translation models (Brown et al., 1993) as well as a hidden-Markov-model-based alignment model (Vogel et al., 1996). GIZA++ produces alignments in which a source word can be aligned with many target words, whereas a target word is aligned with, at most, one source word. Following common practice in statistical machine translation (Koehn, 2010, Ch. 4), we have obtained a set of symmetric word alignments by running GIZA++ in both translation directions and then symmetrizing both sets of alignments. In the experiments we have tried the following symmetrization methods:
• the union of both sets of alignments,
• the intersection of the two alignment sets, and
• the use of the grow-diag-final-and heuristic (Koehn et al., 2003) as implemented in Moses (Koehn et al., 2007).
3 Experimental settings
We have tested our approach in the translation of Spanish texts into English by using two TMs: TMtrans and TMtest. Evaluation was carried out by simulating the translation of the SL segments in TMtrans by using the TUs in TMtest. We firstly obtained the word alignments between the parallel segments of TMtest by training and running GIZA++ on the TM itself. Then, for each source segment in TMtrans, we obtained the TUs in TMtest having a fuzzy-match score above threshold Θ, and tagged the words in their target segments as “keep” or “change”.
3.1 Fuzzy-match score function
As in most TM-based CAT systems, we have chosen a fuzzy-match score function based on the Levenshtein distance (Levenshtein, 1966):

score(s′, si) = 1 − D(s′, si) / max(|s′|, |si|)

where |x| stands for the length (in words) of string x and D(x, y) refers to the word-based Levenshtein distance (edit distance) between x and y.
3.2 Corpora
The TMs we have used were extracted from the JRC-Acquis corpus version 3 (Steinberger et al., 2006; http://wt.jrc.it/lt/Acquis/), which contains the total body of European Union (EU) law. Before extracting the TMs used, this corpus was tokenized and lowercased, and then segment pairs in which either of the segments was empty or contained more than 9 times as many words as its counterpart were removed. Finally, segments longer than 40 words (and their corresponding counterparts) were removed because of the inability of GIZA++ to align longer segments.
Table 1: Average number of matching TUs per segment and number of words to tag for different fuzzy-match score thresholds (Θ).
  Θ (%)   TUs   N. words
  50      9.5   484,523
  60      6.0   303,193
  70      4.5   220,304
  80      3.5   166,762
  90      0.9    42,708
Finally, the segment pairs in TMtrans and TMtest were randomly chosen without repetition from the resulting corpus. TMtest consists of 10,000 parallel segments, whereas TMtrans consists of 5,000 segment pairs. 
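The fuzzy-match score of Section 3.1 and the retrieval of the TUs scoring above the threshold Θ used in this simulation can be sketched as follows; the names and the use of a greater-or-equal comparison are our own choices.

```python
def levenshtein(a, b):
    """Word-based edit distance between token lists a and b."""
    prev = list(range(len(b) + 1))
    for i, x in enumerate(a, 1):
        cur = [i]
        for j, y in enumerate(b, 1):
            cur.append(min(prev[j] + 1, cur[j - 1] + 1, prev[j - 1] + (x != y)))
        prev = cur
    return prev[-1]

def fuzzy_match_score(s_new, s_i):
    return 1.0 - levenshtein(s_new, s_i) / max(len(s_new), len(s_i))

def matching_tus(s_new, memory, threshold=0.6):
    """Return the TUs (s_i, t_i) of the TM whose source side scores above Θ."""
    return [(s_i, t_i) for s_i, t_i in memory
            if fuzzy_match_score(s_new, s_i) >= threshold]
```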
3.2 Corpora

The TMs we have used were extracted from the JRC-Acquis corpus version 3 (Steinberger et al., 2006; http://wt.jrc.it/lt/Acquis/), which contains the total body of European Union (EU) law. Before extracting the TMs used, this corpus was tokenized and lowercased, and then segment pairs in which either of the segments was empty, or had more than 9 times as many words as its counterpart, were removed. Finally, segments longer than 40 words (and their corresponding counterparts) were removed because of the inability of GIZA++ to align longer segments.

The segment pairs in TM_trans and TM_test were then randomly chosen without repetition from the resulting corpus. TM_test consists of 10,000 parallel segments, whereas TM_trans consists of 5,000 segment pairs. It is worth noting that these TMs may contain incorrect TUs as a result of wrong segment alignments, and this can negatively affect the results obtained.

With respect to the number of TUs found in TM_test when simulating the translation of the SL segments in TM_trans, Table 1 reports, for the different fuzzy-match score thresholds we have used, the average number of TUs per segment to be translated and the total number of words to classify as “keep” or “change”. These data provide an idea of how repetitive the corpora we have used to carry out the experiments are.

Table 1: Average number of matching TUs per segment and number of words to tag for different fuzzy-match score thresholds (Θ).
Θ (%)   TUs   N. words
50      9.5   484,523
60      6.0   303,193
70      4.5   220,304
80      3.5   166,762
90      0.9    42,708

3.3 Evaluation

We evaluate our approach for different fuzzy-match score thresholds Θ by computing the accuracy, i.e. the percentage of times the recommendation of our system is correct, and the coverage, i.e. the percentage of words for which our system is able to say something. For that purpose we calculate the optimal edit path between the target segments in TM_trans and the translation proposals in TM_test to determine the actual word-editing needs in each translation proposal.

For each SL segment s′ in TM_trans we compute the set of matching TUs {(s_i, t_i)}, i = 1..N, in TM_test whose fuzzy-match score is above threshold Θ. We then calculate the fraction f_K(w_ij, s′, s_i, t_i) representing the likelihood that word w_ij in t_i will be kept unedited, and use it to mark w_ij as having to be changed or kept unedited by using the two different criteria (unanimity or majority) mentioned above:

unanimity: if f_K(·) = 1 the word is tagged as “keep”, whereas if f_K(·) = 0 it is tagged as “change”; in the rest of cases no recommendation is made for that word.

majority: if f_K(·) > 0.5 the word is tagged as “keep”, whereas it is tagged as “change” if f_K(·) < 0.5; in the unlikely case of having f_K(·) = 0.5 no recommendation is made about that word.

The first criterion requires all the source words aligned with word w_ij to be matched (conversely, unmatched) with a word in the new segment to be translated, while the second criterion only requires the majority of the source words aligned with w_ij to be matched (conversely, unmatched).

4 Results and discussion

We evaluated our approach with the different sets of word alignments obtained through the symmetrization methods described in Section 2, for values of the fuzzy-match score threshold Θ between 50% and 90%.

Tables 2 and 3 report the accuracy and the coverage obtained with each set of alignments, together with their confidence intervals for a statistical significance level p = 0.99 (DeGroot and Schervish, 2002, Sec. 7.5), when the majority criterion and the unanimity criterion, respectively, are used to mark the words as “keep” or “change”.

As can be seen, with both criteria the best accuracy is achieved with the set of alignments obtained through the intersection method, although this set of alignments yields a smaller coverage than the other two. The use of either the union or the grow-diag-final-and set of alignments seems to have a small impact on the accuracy, although the coverage obtained with the union is slightly better. Note that with the alignments obtained by means of the intersection method both criteria are equivalent, because each word is aligned with at most one word in the other language.

The use of the unanimity criterion causes the accuracy to grow slightly, as compared to the majority criterion, while the coverage gets slightly worse, as expected.
It is worth noting that for fuzzy-match\nscore thresholds above 50% differences in accuracy\nbetween both criteria are insignificant, whereas the\ndifferences in coverage are small, but significant\nfor values of 60% and 70% of Θ.\nFinally, it is important to remark that for values\nofΘgreater or equal to 60%, which are the values\nthat professional translators tend to use (Bowker,84\nΘ(%)Union Intersection GDFA\nAcc. (%) Cover. (%) Acc. (%) Cover. (%) Acc. (%) Cover. (%)\n50 92.35±.10 97.33±.06 93.80±.10 90.78±.11 92.34±.10 96.73±.07\n60 94.62±.11 98.06±.07 95.80±.10 92.44±.12 94.72±.11 97.70±.07\n70 97.19±.10 98.69±.06 98.04±.08 94.03±.13 97.31±.09 98.37±.07\n80 98.31±.08 99.05±.06 98.82±.07 95.50±.13 98.44±.08 98.78±.07\n90 97.97±.18 99.24±.11 98.75±.14 95.41±.26 98.25±.16 98.75±.14\nTable 2: For different fuzzy-match score thresholds ( Θ), accuracy (Acc.) and coverage (Cover.) obtained\nby the majority criterion for the three different sets of word alignments: intersection, union and grow-\ndiag-final-and (GDFA).\nΘ(%)Union Intersection GDFA\nAcc. (%) Cover. (%) Acc. (%) Cover. (%) Acc. (%) Cover. (%)\n50 92.53±.10 96.87±.06 93.80±.10 90.78±.11 92.43±.10 96.50±.07\n60 94.73±.11 97.78±.07 95.80±.10 92.44±.12 94.76±.11 97.57±.07\n70 97.26±.10 98.50±.07 98.04±.08 94.03±.13 97.35±.09 98.30±.07\n80 98.35±.08 98.96±.06 98.82±.07 95.50±.13 98.45±.08 98.75±.07\n90 98.02±.18 99.17±.11 98.75±.14 95.41±.26 98.26±.16 98.73±.14\nTable 3: For different fuzzy-match score thresholds ( Θ), accuracy (Acc.) and coverage (Cover.) obtained\nby the unanimity criterion for the three different sets of word alignments: intersection, union and grow-\ndiag-final-and (GDFA).\n2002, p. 100), with the three sets of alignments and\nwith both criteria accuracy is always above 94%.\n5 Concluding remarks\nIn this paper we have presented and evaluated a\nnew approach to guide TM-based CAT users by rec-\nommending the words in a translation proposal that\nshould be changed or kept unedited. The method\nwe propose requires the TM to be pre-processed in\nadvance in order to get the alignment between the\nwords in the source and target segments of the TUs.\nIn any case, this pre-processing needs to be done\nonly once, although to consider new TUs created\nby the user it may be worth to re-run the alignment\nprocedure (see Appendix A). The experiments con-\nducted in the translation of Spanish texts into En-\nglish show an accuracy above 94% for fuzzy-match\nscore thresholds greater or equal to 60% and above\n97% for fuzzy-match score thresholds above 60%.\nOur approach is intended to guide the TM-based\nCAT user in a seamless way, without distorting\nthe known advantages of the TM-based CAT sys-\ntems, namely, high predictability of the translation\nproposals and easy interpretation of fuzzy-match\nscores. We plan to field-test this approach with\nprofessional translators in order to measure the pos-\nsible productivity improvements. To do this wewill integrate this method in OmegaT,3a free/open-\nsource TM-based CAT system.\nA Adding new TUs to the TM\nIn our experiments, we obtained the word alignment\nmodels from TM testand used them to align the\nwords in the TUs of the same TM. In this way, we\nused the most information available to obtain the\nbest word alignments possible. However, TMs are\nnot always static and new TUs can be added to them\nduring a translation job. 
In this case, the previously\ncomputed alignment models could be less effective\nto align the segments in the new TUs.\nIn this appendix, we evaluate the re-usability\nof previously computed alignment models on new\nTUs for our approach. To do so, we used an in-\ndomain TM ( TM in) and an out-of-domain TM\n(TM out) to obtain the alignment models and used\nthem to align the segments in the TUs of TM test.\nWe then repeated the same experiments described\nin Section 3.3 in order to compare the results ob-\ntained.\nTM inwas built with 10,000 pairs of segments\nextracted from the JCR-Acquis corpus. These pairs\nof segments were chosen so as to avoid any com-\nmon TU between TM inandTM test, or between\n3http://www.omegat.org85\nΘ(%)Union Intersection GDFA\nAcc. (%) Cover. (%) Acc. (%) Cover. (%) Acc. (%) Cover. (%)\n50 91.95±.10 94.03±.09 93.44±.10 87.19±.12 92.10±.10 93.42±.09\n60 94.34±.11 94.06±.11 95.53±.10 88.26±.15 94.51±.11 93.74±.11\n70 97.05±.10 93.99±.13 97.86±.08 89.23±.17 97.21±.09 93.71±.13\n80 98.22±.09 93.64±.15 98.74±.07 90.05±.19 98.35±.08 93.42±.16\n90 97.88±.19 93.61±.31 98.69±.15 89.81±.38 98.10±.18 93.28±.31\nTable 4: For different fuzzy-match score thresholds ( Θ), accuracy (Acc.) and coverage (Cover.) obtained\nby the majority criterion for the three different sets of word alignments (intersection, union and grow-\ndiag-final-and (GDFA)) when the alignment models are learned from TM in.\nΘ(%)Union Intersection GDFA\nAcc. (%) Cover. (%) Acc. (%) Cover. (%) Acc. (%) Cover. (%)\n50 92.07±.10 93.70±.09 93.44±.10 87.19±.12 92.16±.10 93.25±.09\n60 94.39±.11 93.87±.11 95.53±.10 88.26±.15 94.53±.11 93.66±.11\n70 97.07±.10 93.87±.13 97.86±.08 89.23±.17 97.22±.09 93.66±.13\n80 98.22±.09 93.60±.15 98.74±.07 90.05±.19 98.35±.08 93.42±.16\n90 97.88±.19 93.60±.31 98.69±.15 89.81±.38 98.10±.18 93.27±.31\nTable 5: For different fuzzy-match score thresholds ( Θ), accuracy (Acc.) and coverage (Cover.) obtained\nby the unanimity criterion for the three different sets of word alignments (intersection, union and grow-\ndiag-final-and (GDFA)) when the alignment models are learned from TM in.\nΘ(%)Union Intersection GDFA\nAcc. (%) Cover. (%) Acc. (%) Cover. (%) Acc. (%) Cover. (%)\n50 90.57±.12 88.03±.12 93.83±.10 77.13±.16 90.37±.12 88.27±.12\n60 93.66±.12 88.50±.15 96.04±.10 79.88±.19 93.64±.12 88.45±.15\n70 96.77±.10 88.77±.17 98.34±.08 82.48±.21 96.87±.10 88.53±.18\n80 98.10±.09 88.29±.20 98.96±.06 84.39±.23 98.23±.09 88.05±.21\n90 97.86±.19 90.71±.36 98.87±.14 84.98±.45 98.15±.18 90.24±.37\nTable 6: For different fuzzy-match score thresholds ( Θ), accuracy (Acc.) and coverage (Cover.) obtained\nby the majority criterion for the three different sets of word alignments (intersection, union and grow-\ndiag-final-and (GDFA)) when the alignment models are learned from TM out.\nΘ(%)Union Intersection GDFA\nAcc. (%) Cover. (%) Acc. (%) Cover. (%) Acc. (%) Cover. (%)\n50 91.15±.11 87.22±.12 93.83±.10 77.13±.16 90.87±.11 87.74±.12\n60 93.94±.12 88.10±.15 96.04±.10 79.88±.19 93.88±.12 88.20±.15\n70 96.94±.10 88.54±.18 98.34±.08 82.48±.21 97.02±.10 88.40±.18\n80 98.16±.09 88.22±.20 98.96±.07 84.39±.23 98.29±.09 87.99±.21\n90 97.89±.19 90.68±.36 98.87±.14 84.98±.45 98.17±.18 90.22±.37\nTable 7: For different fuzzy-match score thresholds ( Θ), accuracy (Acc.) and coverage (Cover.) 
obtained\nby the unanimity criterion for the three different sets of word alignments (intersection, union and grow-\ndiag-final-and (GDFA)) when the alignment models are learned from TM out.86\nTM inandTM trans.TM outwas built with 10,000\npairs of segments extracted from the EMEA corpus\nversion 0.3 (Tiedemann, 2009),4which is a compi-\nlation of documents from the European Medicines\nAgency, and, therefore, it clearly belongs to a differ-\nent domain. Before extracting the TUs, the EMEA\ncorpus was pre-processed in the same way that the\nJRC-Acquis was (see Section 3.2).\nTables 4 and 5 show the results of the experi-\nments when using the alignment models learned\nfrom TM infor the majority criterion and for the\nunanimity criterion, respectively. Analogously, ta-\nbles 6 and 7 show the analogous results when the\nalignment models learned from TM outare used.\nAs can be seen, the accuracy obtained by our\napproach when re-using alignment models from an\nin-domain corpus is very similar to that obtained\nwhen these alignments are learned from the TM\nwhose TUs are aligned. Even when the alignment\nmodels are learned from an out-of-domain corpus,\nthe loss of accuracy is, in the worst case, lower\nthan 2%. The main problem is the loss of coverage,\nwhich is about 6% for the in-domain training and\nhigher than a 10% for the out-of-domain training.\nOn the one hand, these results show that our ap-\nproach is able to re-use alignment models computed\nfor a TM on subsequently added TUs keeping a rea-\nsonable accuracy in the recommendations. On the\nother hand, it is obvious that our method becomes\nless informative for these new TUs as their domain\ndiffers from the domain from which the alignment\nmodels have been learned.\nAcknowledgements: Work supported by Span-\nish government through project TIN2009-14009-\nC02-01. The authors thank Yanjun Ma, Andy Way\nand Harold Somers for suggestions.\nReferences\nBerger, A.L., V .J. Della Pietra, and S.A. Della Pietra.\n1996. A maximum entropy approach to natural\nlanguage processing. Computational Linguistics ,\n22(1):39–71.\nBourdaillet, J., S. Huet, F. Gotti, G. Lapalme, and\nP. Langlais. 2009. Enhancing the bilingual concor-\ndancer TransSearch with word-level alignment. In\nProceedings of the 22nd Canadian Conference on Ar-\ntificial Intelligence , volume 5549 of Lecture Notes in\nArtificial Intelligence , pages 27–38. Springer.\nBowker, L., 2002. Computer-aided translation technol-\nogy: a practical introduction , chapter Translation-\n4http://opus.lingfil.uu.se/EMEA.phpMemory Systems, pages 92–127. University of Ot-\ntawa Press.\nBowker, L. 2003. Terminology tools for translators. In\nSomers, H., editor, Computers and Translation: A\nTranslator’s Guide , pages 49–65. John Benjamins.\nBrown, P.F., S.A. Della Pietra, V .J. Della Pietra, and\nR.L. Mercer. 1993. The mathematics of statistical\nmachine translation: Parameter estimation. Compu-\ntational Linguistics , 19(2):263–311.\nDeGroot, M. H. and M. J. Schervish. 2002. Probability\nand Statistics . Addison-Wesley, third edition.\nKoehn, P., F.J. Och, and D. Marcu. 2003. Statistical\nphrase-based translation. In Proceedings of the 2003\nConference of the North American Chapter of the As-\nsociation for Computational Linguistics on Human\nLanguage Technology , pages 48–54, Morristown, NJ,\nUSA.\nKoehn, P., H. Hoang, A. Birch, C. Callison-Burch,\nM. Federico, N. Bertoldi, B. Cowan, W. Shen,\nC. Moran, R. Zens, C. Dyer, O. Bojar, A. Constantin,\nand E. Herbst. 2007. 
Moses: open source toolkit for\nstatistical machine translation. In Proceedings of the\n45th Annual Meeting of the Association for Compu-\ntational Linguistics , pages 177–180, Prague, Czech\nRepublic.\nKoehn, P. 2010. Statistical Machine Translation . Cam-\nbridge University Press.\nKranias, L. and A. Samiotou. 2004. Automatic trans-\nlation memory fuzzy match post-editing: A step be-\nyond traditional TM/MT integration. In Proceedings\nof the 4th International Conference on Language Re-\nsources and Evaluation , pages 52–57, Lisbon, Portu-\ngal.\nLevenshtein, V .I. 1966. Binary codes capable of cor-\nrecting deletions, insertions and reversals. Soviet\nPhysics Doklady. , 10(8):707–710.\nMeyers, A., M. Kosaka, and R. Grishman. 1998. A\nmultilingual procedure for dictionary-based sentence\nalignment. In Machine Translation and the Informa-\ntion Soup , volume 1529 of Lecture Notes in Com-\nputer Science , pages 187–198. Springer.\nOch, F.J. and H. Ney. 2003. A systematic compari-\nson of various statistical alignment models. Compu-\ntational Linguistics , 29(1):19–51.\nSikes, R. 2007. Fuzzy matching in theory and practice.\nMultiLingual , 18(6):39–43.\nSimard, M. 2003. Translation spotting for transla-\ntion memories. In Proceedings of the HLT-NAACL\n2003, Workshop on Building and Using Parallel\nTexts: Data Driven Machine Translation and Be-\nyond , pages 65–72, Morristown, NJ, USA.\nSomers, H., 2003. Computers and translation: a trans-\nlator’s guide , chapter Translation Memory Systems,\npages 31–48. John Benjamins.87\nSteinberger, R., B. Pouliquen, A. Widiger, C. Ignat,\nT. Erjavec, and D. Tufis ¸. 2006. The JRC-Acquis:\nA multilingual aligned parallel corpus with 20+ lan-\nguages. In Proceedings of the 5th International\nConference on Language Resources and Evaluation ,\npages 2142–2147, Genoa, Italy.\nTiedemann, J. 2009. News from OPUS - a collection\nof multilingual parallel corpora with tools and inter-\nfaces. In Recent Advances in Natural Language Pro-\ncessing , volume V , pages 237–248. John Benjamins,\nBorovets, Bulgaria.\nV ogel, S., H. Ney, and C. Tillmann. 1996. HMM-based\nword alignment in statistical translation. In Proceed-\nings of the 16th International Conference on Com-\nputational Linguistics , pages 836–841, Copenhagen,\nDenmark.\nV´eronis, J. and P. Langlais, 2000. Parallel Text Pro-\ncessing: Alignment and Use of Translation Corpora\n(Text, Speech and Language Technology) , chapter\nEvaluation of Parallel Text Alignment Systems – The\nARCADE Project, pages 369–388. Kluwer Aca-\ndemic Publishers.88",
"main_paper_content": null
}
|
{
"decision": "Unknown",
"reviews": []
}
| 0 | 0 |
[] |
[] |
{
"id": "1XsNUBVo-4S",
"year": null,
"venue": "EAMT 2015",
"pdf_link": "https://aclanthology.org/W15-4903.pdf",
"forum_link": "https://openreview.net/forum?id=1XsNUBVo-4S",
"arxiv_id": null,
"doi": null
}
|
{
"title": "Using on-line available sources of bilingual information for word-level machine translation quality estimation",
"authors": [
"Miquel Esplà-Gomis",
"Felipe Sánchez-Martínez",
"Mikel L. Forcada"
],
"abstract": "Miquel Esplà-Gomis, Felipe Sánchez-Martínez, Mikel L. Forcada. Proceedings of the 18th Annual Conference of the European Association for Machine Translation. 2015.",
"keywords": [],
"raw_extracted_content": "Using on-line available sources of bilingual information for word-level\nmachine translation quality estimation\nMiquel Espl `a-Gomis Felipe S ´anchez-Mart ´ınez Mikel L. Forcada\nDepartament de Llenguatges i Sistemes Inform `atics\nUniversitat d’Alacant, E-03071 Alacant, Spain\n{mespla,fsanchez,mlf}@dlsi.ua.es\nAbstract\nThis paper explores the use of external\nsources of bilingual information available\non-line for word-level machine translation\nquality estimation (MTQE). These sources\nof bilingual information are used as ablack\nboxto spot sub-segment correspondences\nbetween a source-language (SL) sentence\nSto be translated and a given translation\nhypothesis Tin the target-language (TL).\nThis is done by segmenting both Sand\nTinto overlapping sub-segments of vari-\nable length and translating them into the\nTL and the SL, respectively, using the avail-\nable bilingual sources of informationon the\nfly. A collection of features is then obtained\nfrom the resulting sub-segment translations,\nwhich is used by a binary classifier to de-\ntermine which target words in Tneed to be\npost-edited.\nExperiments are conducted based on the\ndata sets published for the word-level\nMTQE task in the 2014 edition of the Work-\nshop on Statistical Machine Translation\n(WMT 2014). The sources of bilingual\ninformation used are: machine translation\n(Apertium and Google Translate) and the\nbilingual concordancer Reverso Context.\nThe results obtained confirm that, using\nless information and fewer features, our ap-\nproach obtains results comparable to those\nof state-of-the-art approaches, and even out-\nperform them in some data sets.\nc�2015 The authors. This article is licensed under a Creative\nCommons 3.0 licence, no derivative works, attribution, CC-\nBY-ND.1 Introduction\nRecent advances in thefield of machine translation\n(MT) have led to the adoption of this technology\nby many companies and institutions all around the\nworld in order to bypass the linguistic barriers and\nreach out to broader audiences. Unfortunately, we\nare still far from the point of having MT systems\nable to produce translations with the level of qual-\nity required for dissemination in formal scenarios,\nwhere human supervision and MT post-editing are\nunavoidable. It therefore becomes critical to min-\nimise the cost of this human post-editing. This\nhas motivated a growing interest in thefield of MT\nquality estimation (Blatz et al., 2004; Specia et al.,\n2010; Specia and Soricut, 2013), which is thefield\nthat focuses on developing techniques that allow to\nestimate the quality of the translation hypotheses\nproduced by an MT system.\nMost efforts in MT quality estimation (MTQE)\nare aimed at evaluating the quality of whole trans-\nlated segments, in terms of post-editing time, num-\nber of editions needed, and other related metrics\n(Blatz et al., 2004). Our work is focused on the\nsub-field ofword-level MTQE. The main advantage\nof word-level MTQE is that it allows not only to\nestimate the effort needed to post-edit the output\nof an MT system, but also to guide post-editors on\nwhich words need to be post-edited.\nIn this paper we describe a novel method which\nuses black-box bilingual resources from the Inter-\nnet for word-level MTQE. 
Namely, we combine\ntwo on-line MT systems, Apertium1and Google\nTranslate,2and the bilingual concordancer Reverso\nContext3to spot sub-segment correspondences be-\ntween a sentence Sin the source language (SL) and\n1http://www.apertium.org\n2http://translate.google.com\n3http://context.reverso.net/translation/19\na given translation hypothesis Tin the target lan-\nguage (TL). To do so, both SandTare segmented\ninto overlapping sub-segments of variable length\nand they are translated into the TL and the SL, re-\nspectively, by means of the bilingual sources of\ninformation mentioned above. These sub-segment\ncorrespondences are used to extract a collection\nof features that is then used by a binary classifier\nto determine the words to be post-edited. Our ex-\nperiments confirm that our method provides results\ncomparable to the state of the art using considerably\nfewer features. In addition, given that our method\nuses (on-line) resources which are publicly avail-\nable on the Internet, once the binary classifier is\ntrained it can be used for word-level MTQE on the\nfly for new translations.\nThe rest of the paper is organised as follows.\nSection 2 briefly reviews the state of the art in\nword-level MTQE. Section 3 describes our binary-\nclassification approach, the sources of information,\nand the collection of features used. Section 4 de-\nscribes the experimental setting used for our experi-\nments, whereas Section 5 reports and discusses the\nresults obtained. The paper ends with some con-\ncluding remarks and the description of ongoing and\npossible future work.\n2 Related work\nSome of the early work on word-level MTQE can\nbe found in the context of interactive MT (Gan-\ndrabur and Foster, 2003; Ueffing and Ney, 2005).\nGandrabur and Foster (2003) obtain confidence\nscores for each word tin a given translation hypoth-\nesisTof the SL sentence Sto help the interactive\nMT system to choose the translation suggestions\nto be made to the user. Ueffing and Ney (2005)\nextend this application to word-level MTQE also\nto automatically reject those target words twith\nlow confidence scores from the translation propos-\nals. This second approach incorporates the use of\nprobabilistic lexicons as a source of translation in-\nformation.\nBlatz et al. (2003) introduce a more complex\ncollection of features for word-level MTQE, using\nsemantic features based on WordNet (Miller, 1995),\ntranslation probabilities from IBM model 1 (Brown\net al., 1993), word posterior probabilities (Blatz et\nal., 2003), and alignment templates from statistical\nMT (SMT) models. All the features they use are\ncombined to train a binary classifier which is used\nto determine the confidence scores.\nUeffing and Ney (2007) divide the features usedby their approach in two types: those which are\nindependent of the MT system used for transla-\ntion (system-independent), and those which are\nextracted from internal data of the SMT system\nthey use for translation (system-dependent). These\nfeatures are obtained by comparing the output of\nan SMT system T1to a collection of alternative\ntranslations {Ti}NT\ni=2obtained by using the N-best\nlist from the same SMT system. 
Several distance\nmetrics are then used to check how often word tj,\nthe word in position jofT, is found in each trans-\nlation alternative Ti, and how far from position j.\nThese features rely on the assumption that a high\noccurrence frequency in a similar position is an\nevidence that tjdoes not need to be post-edited.\nBic ¸ici (2013) proposes a strategy for extending this\nkind of system-dependent features to what could\nbe called a system-independent scenario. His ap-\nproach consists in choosing parallel data from an\nadditional parallel corpus which are close to the\nsegment Sto be translated by means of feature-\ndecay algorithms (Bi c ¸ici and Yuret, 2011). Once\nthis parallel data are extracted, a new SMT system\nis trained and its internal data is used to extract\nthese features.\nThe MULTILIZER approach to (sentence-level)\nMTQE (Bojar et al., 2014) also uses other MT\nsystems to translate Sinto the TL and Tinto the\nSL. These translations are then used as a pseudo-\nreference and the similarity between them and the\noriginal SL and TL sentences is computed and taken\nas an indication of quality. This approach, as well\nas the one by Bi c ¸ici and Yuret’s (2011) are the most\nsimilar ones to our approach. One of the main\ndifferences is that they translate whole segments,\nwhereas we translate sub-segments. As a result,\nwe can obtain useful information about specific\nwords in the translation. As the approach in this pa-\nper, MULTILIZER also combines several sources\nof bilingual information, while Bi c ¸ici and Yuret\n(2011) only uses one MT system.4\nAmong the recent works on MTQE, it is worth\nmentioning the QuEst project (Specia et al., 2013),\nwhich sets a framework for MTQE, both at the\nsentence level and at the word level. This frame-\nwork defines a large collection of features which\ncan be divided in three groups: those measuring the\ncomplexity of the SL segment S, those measuring\nthe confidence on the MT system, and those mea-\nsuring bothfluency and adequacy directly on the\n4To the best of our knowledge, there is not any public descrip-\ntion of the internal workings of MULTILIZIER.20\ntranslation hypothesis T. In fact, some of the most\nsuccessful approaches in the word-level MTQE task\nin the Workshop on Statistical Machine Translation\nin 2014 (WMT 2014) (Bojar et al., 2014) are based\non some of the features defined in that framework\n(Camargo de Souza et al., 2014).\nThe work described in this paper is aimed at\nbeing a system-independent approach that uses\navailable on-line bilingual resources for word-level\nMTQE. This work is inspired by the work by Espl `a-\nGomis et al. (2011), in which several on-line MT\nsystems are used for word-level quality estimation\nin translation-memory-based computer aided trans-\nlation tasks. In the work by Espl `a-Gomis et al.\n(2011), given a translation unit (S, T) suggested to\nthe translator for the SL segment to be translated\nS�, MT is used to translate sub-segments from S\ninto the TL, and TL sub-segments from Tinto the\nSL. Sub-segment pairs obtained through MT that\nare found both in SandTare an evidence that they\nare related. The alignment between SandS�, to-\ngether with the sub-segment translations between\nSandThelp to decide which words in Tshould\nbe modified to get T�, the desired translation of\nS�. Based on the same idea, we built a brand-new\ncollection of word-level features to extend this ap-\nproach to MTQE. 
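The core mechanism, segmenting S into overlapping sub-segments, translating them with an external source of bilingual information used as a black box, and checking whether the translations occur in T, can be sketched as follows. The translate function is a hypothetical stand-in for any such source (an on-line MT system or a bilingual concordancer); it and the toy dictionary are assumptions made only for illustration, not part of the approach described here.

def subsegments(tokens, max_len):
    # All overlapping sub-segments of up to max_len words.
    return [tuple(tokens[i:i + n])
            for n in range(1, max_len + 1)
            for i in range(len(tokens) - n + 1)]

def contains(haystack, needle):
    # True if the word sequence `needle` occurs in the word sequence `haystack`.
    return any(tuple(haystack[i:i + len(needle)]) == needle
               for i in range(len(haystack) - len(needle) + 1))

def spot_correspondences(S, T, max_len, translate):
    # Pairs (sigma, tau) such that a translation tau of sigma is found in T.
    S_tok, T_tok = S.split(), T.split()
    pairs = []
    for sigma in subsegments(S_tok, max_len):
        for tau in translate(sigma):
            if contains(T_tok, tau):
                pairs.append((sigma, tau))
    return pairs

# Toy black-box source of bilingual information.
toy = {("la", "casa"): [("the", "house")], ("casa",): [("house",)], ("la",): [("the",)]}
print(spot_correspondences("la casa", "the house", 2, lambda s: toy.get(s, [])))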
One of the main advantages of this approach as compared to other approaches described in this section is that it uses light bilingual information extracted from any available source. Obtaining this information directly from the Internet allows us to obtain on-the-fly confidence estimates for the words in T without having to rely on more complex sources, such as probabilistic lexicons, part-of-speech information or word nets.

3 Word-level quality estimation using bilingual sources of information from the Internet

The approach proposed in this work for word-level MTQE uses binary classification based on features obtained through sources of bilingual information available on-line. We use these sources of bilingual information to detect connections between the original SL segment S and a given translation hypothesis T in the TL, following the same method proposed by Esplà-Gomis et al. (2011): all the overlapping sub-segments of S and T, up to a given length L, are obtained and translated into the TL and the SL, respectively, using the sources of bilingual information available. The resulting collections of sub-segment translations M_{S→T} and M_{T→S} can then be used to spot sub-segment correspondences between T and S. In this section we describe a collection of features designed to identify these relations for their exploitation for word-level MTQE.

Positive features. Given a collection of sub-segment translations M (either M_{S→T} or M_{T→S}), one of the most obvious features consists in computing the amount of sub-segment translations (σ, τ) ∈ M that confirm that word t_j in T should be kept in the translation of S. We consider that a sub-segment translation (σ, τ) confirms t_j if σ is a sub-segment of S, and τ is a sub-segment of T that covers position j. Based on this idea, we propose the collection of positive features Pos_n:

Pos_n(j, S, T, M) = |{τ : τ ∈ conf_n(j, S, T, M)}| / |{τ : τ ∈ seg_n(T) ∧ j ∈ span(τ, T)}|

where seg_n(X) represents the set of all possible n-word sub-segments of segment X, and function span(τ, T) returns the set of word positions spanned by the sub-segment τ in the segment T (note that a sub-segment τ may be found more than once in segment T: function span(τ, T) returns all the possible positions spanned). Function conf_n(j, S, T, M) returns the collection of sub-segment pairs (σ, τ) that confirm a given word t_j, and is defined as:

conf_n(j, S, T, M) = {(σ, τ) ∈ M : τ ∈ seg_n(T) ∧ σ ∈ seg_*(S) ∧ j ∈ span(τ, T)}

where seg_*(X) is similar to seg_n(X) but without length constraints (two variants of function conf_n were tried: one applying length constraints also when segmenting S, with the consequent increment in the number of features, and one not applying length constraints at all; preliminary results confirmed that constraining only the length of τ was the best choice).

Additionally, we propose a second collection of features, which use the information about the translation frequency between the pairs of sub-segments in M. This information is not available for MT, although it is for the bilingual concordancer we have used (see Section 4). This frequency determines how often σ is translated as τ and, therefore, how reliable this translation is. We define Pos_n^freq to obtain these features as:

Pos_n^freq(j, S, T, M) = ( Σ_{(σ,τ) ∈ conf_n(j,S,T,M)} occ(σ, τ, M) ) / ( Σ_{(σ,τ′) ∈ M} occ(σ, τ′, M) )

where function occ(σ, τ, M) returns the number of occurrences in M of the sub-segment pair (σ, τ).

Both positive features, Pos_n(·) and Pos_n^freq(·), are computed for t_j for all the values of sub-segment length n up to L. In addition, they can be computed for both M_{S→T} and M_{T→S}, producing 4L positive features in total for each word t_j.

Negative features. Our negative features, i.e. those features that help to identify words that should be post-edited in the translation hypothesis T, are also based on sub-segment translations (σ, τ) ∈ M, but they are used in a different way. Negative features use those sub-segments τ that fit two criteria: (a) they are the translation of a sub-segment σ from S but cannot be matched in T; and (b) when they are aligned to T using the Levenshtein edit-distance algorithm (Levenshtein, 1966), both their first word θ_1 and last word θ_{|τ|} can be aligned, therefore delimiting a sub-segment τ′ of T. Our hypothesis is that those words t_j in τ′ which cannot be aligned to τ are likely to need to be post-edited. We define our negative feature collection Neg_{mn′} as:

Neg_{mn′}(j, S, T, M) = Σ_{τ ∈ NegEvidence_{mn′}(j,S,T,M)} 1 / alignmentsize(τ, T)

where alignmentsize(τ, T) returns the length of the sub-segment τ′ delimited by τ in T. Function NegEvidence_{mn′}(·) returns the set of τ sub-segments that are considered negative evidence and is defined as:

NegEvidence_{mn′}(j, S, T, M) = {τ : (σ, τ) ∈ M ∧ σ ∈ seg_m(S) ∧ |τ′| = n′ ∧ τ ∉ seg_*(T) ∧ IsNeg(j, τ, T)}

In this function, length constraints are set so that sub-segments σ take lengths m ∈ [1, L] (in contrast to the positive features, preliminary results showed an improvement in the performance of the classifier when constraining the length of the σ sub-segments used for each feature in the set). However, the case of the sub-segments τ is slightly different: n′ does not stand for the length of the sub-segments, but for the number of words in τ which are aligned to T, that is, the length of the longest common sub-segment of τ and T. Function IsNeg(·) defines the set of conditions required to consider a sub-segment τ a negative evidence for word t_j:

IsNeg(j, τ, T) = ∃ j′, j″ ∈ [1, |T|] : j′ < j < j″ ∧ aligned(t_{j′}, θ_1) ∧ aligned(t_{j″}, θ_{|τ|}) ∧ ¬∃ θ_k ∈ seg_1(τ) : aligned(t_j, θ_k)

where aligned(X, Y) is a binary function that checks whether words X and Y are aligned or not.

Negative features Neg_{mn′}(·) are computed for t_j for all the values of SL sub-segment lengths m ∈ [1, L] and all the numbers n′ ∈ [2, L] of TL words which are aligned to words θ_k in sub-segment τ. Note that the number of aligned words between T and τ cannot be lower than 2, given the constraints set by function IsNeg(j, τ, T). This results in a collection of L×(L−1) negative features. Obviously, for these features only M_{S→T} is used, since in M_{T→S} all the sub-segments τ can be found in T.
4 Experimental setting

The experiments described in this section compare the results of our approach to those in the word-level MTQE task in WMT 2014 (Bojar et al., 2014), which are considered the state of the art in the task. In this section we describe the sources of bilingual information used for our experiments, as well as the binary classifier and the data sets used for evaluation.

4.1 Evaluation data sets

Four data sets for different language pairs were published for the word-level MTQE task in WMT 2014: English–Spanish (EN–ES), Spanish–English (ES–EN), English–German (EN–DE), and German–English (DE–EN). The data sets contain the original SL segments and their corresponding translation hypotheses, tokenised at the level of words. Each word is tagged by hand using three levels of granularity:

• binary: words are classified only taking into account whether they need to be post-edited (class BAD) or not (class OK);
• level 1: an extension of the binary classification which differentiates between accuracy errors and fluency errors;
• multi-class: a fine-grained classification of errors divided into 20 categories.

In this work we focus on the binary classification, which is the base for the other classification granularities.

Four evaluation metrics were defined for this task:

• The F1 score weighted by the rate ρ_c of instances of a given class c in the data set:

F_1^w = Σ_{c ∈ C} ρ_c · (2 · p_c · r_c) / (p_c + r_c)

where C is the collection of classes defined for a given level of granularity (OK and BAD for the binary classification), and p_c and r_c are the precision and recall for a class c ∈ C, respectively;

• The F1 score of the less frequent class in the data set (class BAD, in the case of binary classification):

F_1^BAD = (2 × p_BAD × r_BAD) / (p_BAD + r_BAD);

• The Matthews correlation coefficient (MCC), which takes values in [−1, 1] and is more reliable than the F1 score for unbalanced data sets (Powers, 2011):

MCC = (T_OK × T_BAD − F_OK × F_BAD) / √(A_OK × A_BAD × P_OK × P_BAD)

where T_OK and T_BAD stand for the number of instances correctly classified for each class, F_OK and F_BAD stand for the number of instances wrongly classified for each class, P_OK and P_BAD stand for the number of instances classified as OK or as BAD, and A_OK and A_BAD stand for the actual number of instances of each class; and

• Total accuracy (ACC):

ACC = (T_OK + T_BAD) / (P_OK + P_BAD)

The comparison between the approach presented in this work and those described by Bojar et al. (2014) is based on the F_1^BAD score, because this was the main metric used to compare the different approaches participating in WMT 2014. However, all the metrics are reported for a better analysis of the results obtained.

4.2 Sources of Bilingual Information

As already mentioned, two different sources of information were used in this work: MT and a bilingual concordancer. For our experiments we used two MT systems which are freely available on the Internet: Apertium and Google Translate. These MT systems were exploited by translating the sub-segments, for each data set, in both directions (from SL to TL and vice versa). It is worth noting that the language pairs EN–DE and DE–EN are not available in Apertium; for these data sets only Google Translate was used.

The bilingual concordancer Reverso Context was also used for translating sub-segments. Namely, the sub-sentential translation memory of this system was used, which is a much richer source of bilingual information and provides, for a given SL sub-segment, the collection of TL translation alternatives, together with the number of occurrences of the sub-segment pair in the translation memory. Furthermore, the sub-segment translations obtained from this source of information are more reliable, since they are extracted from manually translated texts. On the other hand, its main weakness is the coverage: although Reverso Context uses a large translation memory, no translation can be obtained for those SL sub-segments which cannot be found in it. In addition, the sub-sentential translation memory contains only those sub-segment translations with a minimum number of occurrences.
On the contrary, MT systems will always produce a translation, even though it may be wrong or contain untranslated out-of-vocabulary words. Our hypothesis is that combining both sources of bilingual information can lead to reasonable results for word-level MTQE.

For our experiments, we computed the features described in Section 3 separately for both sources of information. The value of the maximum sub-segment length L was set to 5, which resulted in a collection of 40 features from the bilingual concordancer and 30 from MT (as already mentioned, the features based on translation frequency cannot be obtained for MT).

4.3 Binary classifier

Esplà-Gomis et al. (2011) use a simple perceptron classifier for word-level quality estimation in translation-memory-based computer-aided translation. In this work, a more complex multilayer perceptron (Duda et al., 2000, Section 6) is used, as implemented in Weka 3.6 (Hall et al., 2009). Multilayer perceptrons (also known as feedforward neural networks) have a complex structure which incorporates one or more hidden layers, consisting of a collection of perceptrons, placed between the input of the classifier (the features) and the output perceptron. This hidden layer makes multilayer perceptrons suitable for non-linear classification problems (Duda et al., 2000, Section 6). In fact, Hornik et al. (1989) proved that neural networks with a single hidden layer containing a finite number of neurons are universal approximators and may therefore be able to perform better than a simple perceptron for complex problems. In our experiments, we have used a batch training strategy, which iteratively updates the weights of each perceptron in order to minimise a total error function. A subset of 10% of the training examples was extracted from the training set before starting the training process and used as a validation set. The weights were iteratively updated on the basis of the error computed on the other 90%, but the decision to stop the training (usually referred to as the convergence condition) was based on this validation set. This is a usual practice whose objective is to minimise the risk of overfitting.
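A rough present-day analogue of this setup could be written with scikit-learn as sketched below; the paper itself used Weka 3.6, so this is only an approximation of the authors' configuration, and the early-stopping options shown only emulate the validation-based stopping rule detailed just after this sketch. The hyper-parameter values are those reported in the hyper-parameter optimisation below; everything else is an assumption for illustration.

# Approximate scikit-learn analogue of the Weka multilayer perceptron described above.
from sklearn.neural_network import MLPClassifier

n_features = 70  # 40 concordancer-based + 30 MT-based features
clf = MLPClassifier(
    hidden_layer_sizes=(n_features,),  # one hidden layer with as many nodes as features
    solver="sgd",
    learning_rate_init=0.9,            # learning rate reported in the paper
    momentum=0.07,                     # momentum reported in the paper
    early_stopping=True,               # hold out part of the training data...
    validation_fraction=0.1,           # ...10% used as a validation set
    n_iter_no_change=20,               # tolerate 20 iterations without improvement
    max_iter=1000,
)
# clf.fit(X_train, y_train) would then train the binary OK/BAD classifier.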
The training process stops when the total error\nobtained in an iteration is worse than that obtained\nin the previous 20 iterations.10\nHyperparameter optimisation was carried out us-\ning a grid search (Bergstra et al., 2011) in a 10-fold\ncross-validation fashion in order to choose the hy-\nperparameters optimising the results for the metric\nto be used for comparison,F 1for classBAD:\n•Number of nodes in the hidden layer: Weka\n(Hall et al., 2009) makes it possible to choose\nfrom among a collection of predefined net-\nwork designs; the design performing best in\nmost cases happened to have the same number\nof nodes in the hidden layer as the number of\nfeatures.\n•Learning rate: this parameter allows the di-\nmension of the weight updates to be regulated\nby applying a factor to the error function after\neach iteration; the value that best performed\nfor most of our training data sets was 0.9.\n•Momentum: when updating the weights at the\nend of a training iteration, momentum smooths\nthe training process for faster convergence by\nmaking it dependent on the previous weight\nvalue; in the case of our experiments, it was\nset to 0.07.\n5 Results and discussion\nTable 1 shows the results obtained by the base-\nline consisting on marking all the words as BAD,\nwhereas Table 2 shows the reference results ob-\ntained by the best performing system according to\nthe results published by Bojar et al. (2014). These\n10It is usual to set a number of additional iterations after the er-\nror stops improving, in case the function is in a local minimum,\nand the error starts decreasing again after a few more iterations.\nIf the error continues to worsen after these 20 iterations, the\nweights used are those obtained after the iteration with the\nlowest error.language weighted BAD\npairF 1 F1 MCC accuracy\nEN–ES 18.71 52.53 0.00 35.62\nES–EN 5.28 29.98 0.00 17.63\nEN–ES 12.78 44.57 0.00 28.67\nDE–EN 8.20 36.60 0.00 22.40\nTable 1: Results of the “always BAD” baseline for the differ-\nent data sets.\nlanguage weighted BAD\npairF 1 F1 MCC accuracy\nEN–ES 62.00 48.73 18.23 61.62\nES–EN 79.54 29.14 25.47 82.98\nEN–DE 71.51 45.30 28.61 72.97\nDE–EN 72.41 26.13 16.08 76.14\nTable 2: Results of the best performing systems for the dif-\nferent data sets according to the results published by Bojar et\nal. (2014).\ntables are used as a reference for the results ob-\ntained with the approach described in this work.\nTable 3 shows the results obtained when using\nReverso Context as the only source of information.\nUsing only Reverso Context leads to reasonably\ngood results for language pairs EN–ES and EN–\nDE, while for the other two language pairs results\nare much worse, basically because no word was\nclassified as needing to be post-edited. This situ-\nation is caused by the fact that, in both cases, the\namount of examples of words to be post-edited in\nthe training set is very small (lower than 21%). In\nthis case, if the features are not informative enough,\nthe strong bias leads to a classifier that always rec-\nommends to keep all words untouched. 
However, it\nis worth noting that with a small amount of features\n(40 features) state-of-the-art results were obtained\nfor two data sets.11Namely, in the case of the\nEN–ES data set, the one with the largest amount of\ntraining instances, the results for the main metric\n(F1score for the less frequent class, in this case\nBAD) were better than those of the state of the art.\nIn the case of the EN–DE data set the results are\nnoticeably lower than the state of the art, but they\nare still comparable to them.\nTable 4 shows the results obtained when com-\nbining the information from Reverso Context and\nthe MT systems Apertium and Google Translate.\nAgain, one of the best results is obtained for the\nEN–ES data set, which would again beat the state\nof the art for the F1score for theBADclass, and\n11We focus our comparison on the F1score for theBADclass\nbecause this was the metric on which the classifiers were opti-\nmised.24\nlanguage weighted BAD\npairF 1 F1 MCC accuracy\nEN–ES 60.18 49.09 16.28 59.46\nES–EN 74.41 0.00 0.00 82.37\nEN–DE 65.88 41.24 17.05 65.71\nDE–EN 67.82 0.00 0.00 77.60\nTable 3: Results of the approach proposed in this paper for the\nsame data sets used to obtain Table 2 using Reverso Context\nas the only source of bilingual information.\nlanguage weighted BAD\npairF 1 F1 MCC accuracy\nEN–ES 61.43 49.03 17.71 60.91\nES–EN 75.87 10.44 9.61 81.82\nEN–DE 66.75 43.07 19.38 78.71\nDE–EN 75.00 40.33 25.85 76.03\nTable 4: Results of the approach proposed in this work for the\nsame data sets used to obtain Table 2 using both Reverso Con-\ntext and both Google Translate and Apertium as the sources of\nbilingual information.\nwhich obtained results still closer to those of the\nstate of the art for the rest of metrics. In addition,\nthe biased classification problem for data sets DE–\nEN and ES–EN is alleviated. Actually, the results\nfor the DE–EN language pair are particularly good,\nand outperform the state of the art for all the met-\nrics. The low F1score obtained for the ES–EN data\nset may be explained by the unbalanced amount of\npositive and negative instances. Actually, the ratio\nof negative instances is somewhat related to the re-\nsults obtained: 35% for EN–ES, 17% for ES–EN,\n30% for EN–DE and 21% for DE–EN. A closer\nanalysis of the results shows that our approach is\nbetter when detecting errors in theTerminology,\nMistranslation, andUnintelligiblesubclasses. The\nratio of this kind of errors over the total amount\nof negative instances for each data set is again re-\nlated to the results obtained: 73% for EN–ES, 27%\nfor ES–EN, 47% for EN–DE and 35% for DE–EN.\nThis information may explain the differences in the\nresults obtained for each data set.\nAgain, it is worth noting that this light method\nusing a reduced set of 70 features can obtain, for\nmost of the data sets, results comparable to those\nobtained by approaches using much more features.\nFor example, the best system for the data set EN–ES\n(Camargo de Souza et al., 2014) used 163 features,\nwhile the winner system for the rest of data sets\n(Bic ¸ici and Way, 2014; Bi c ¸ici, 2013) used 511,000\nfeatures. The sources of bilingual information used\nin this work are rather rich; however, given that\nany source of bilingual information could be used\non thefly, simpler sources of bilingual informationcould also be used. 
It would therefore be interesting\nto carry out a deeper evaluation of the impact of\nthe type and quality of the resources used with this\napproach.\n6 Concluding remarks\nIn this paper we describe a novel approach for word-\nlevel MTQE based on the use of on-line available\nbilingual resources. This approach is aimed at being\nsystem-independent, since it does not make any as-\nsumptions about the MT system used for producing\nthe translation hypotheses to be evaluated. Further-\nmore, given that this approach can use any source\nof bilingual information as a black box, it can be\neasily used with few resources. In addition, adding\nnew sources of information is straightforward, pro-\nviding considerable room for improvement. The\nresults described in Section 5 confirm that our ap-\nproach can reach results comparable to those in the\nstate of the art using a smaller collection of features\nthan those used by most of the other approaches.\nAlthough the results described in this paper are\nencouraging, it is worth noting that it is difficult to\nextract strong conclusions from the small data sets\nused. A wider evaluation should be done, involving\nlarger data sets and more language pairs. As future\nwork, we plan to extend this method by using other\non-line resources to improve the on-line coverage\nwhen spotting sub-segment translations; namely,\ndifferent bilingual concordancers and on-line dic-\ntionaries. Monolingual target-language information\ncould also be obtained from the Internet to deal with\nfluency issues, for example, getting the frequency\nof a given n-gram from search engines. We will\nalso study the combination of these features with\nfeatures used in previous state-of-the-art systems\n(see Section 2) Finally, it would be interesting to\ntry the new features defined here in word-level qual-\nity estimation for computer-aided translation tools,\nas in Espl `a-Gomis et al. (2011).\nAcknowledgements\nWork partially funded by the Spanish Ministerio de\nCiencia e Innovaci ´on through projects TIN2009-\n14009-C02-01 and TIN2012-32615 and by the\nEuropean Commission through project PIAP-GA-\n2012-324414 (Abu-MaTran). We specially thank\nReverso-Softissimo and Prompsit Language Engi-\nneering for providing the access to Reverso Context,\nand to the University Research Program for Google\nTranslate that granted us access to the Google Trans-\nlate service.25\nReferences\nBergstra, James S., R ´emi Bardenet, Yoshua Bengio,\nand Bal ´azs K ´egl. 2011. Algorithms for hyper-\nparameter optimization. In Shawe-Taylor, J., R.S.\nZemel, P.L. Bartlett, F. Pereira, and K.Q. Weinberger,\neditors,Advances in Neural Information Processing\nSystems 24, pages 2546–2554. Curran Associates,\nInc.\nBic ¸ici, Ergun and Andy Way. 2014. Referential trans-\nlation machines for predicting translation quality. In\nProceedings of the 9th Workshop on Statistical Ma-\nchine Translation, pages 313–321, Baltimore, USA.\nBic ¸ici, Ergun. 2013. Referential translation machines\nfor quality estimation. InProceedings of the 8th\nWorkshop on Statistical Machine Translation, pages\n343–351, Sofia, Bulgaria.\nBic ¸ici, Ergun and Deniz Yuret. 2011. Instance selec-\ntion for machine translation using feature decay al-\ngorithms. InProceedings of the 6th Workshop on\nStatistical Machine Translation, pages 272–283.\nBlatz, John, Erin Fitzgerald, George Foster, Simona\nGandrabur, Cyril Goutte, Alex Kulesza, Alberto San-\nchis, and Nicola Ueffing. 2003. Confidence estima-\ntion for machine translation. 
Technical Report Final\nReport of the Summer Workshop, Center for Lan-\nguage and Speech Processing, Johns Hopkins Uni-\nversity, Baltimore, USA.\nBlatz, John, Erin Fitzgerald, George Foster, Simona\nGandrabur, Cyril Goutte, Alex Kulesza, Alberto San-\nchis, and Nicola Ueffing. 2004. Confidence esti-\nmation for machine translation. InProceedings of\nthe 20th International Conference on Computational\nLinguistics, COLING ’04.\nBojar, Ondrej, Christian Buck, Christian Federmann,\nBarry Haddow, Philipp Koehn, Johannes Leveling,\nChristof Monz, Pavel Pecina, Matt Post, Herve Saint-\nAmand, Radu Soricut, Lucia Specia, and Ale ˇs Tam-\nchyna. 2014. Findings of the 2014 workshop on\nstatistical machine translation. InProceedings of\nthe 9th Workshop on Statistical Machine Translation,\npages 12–58.\nBrown, Peter F., Vincent J. Della Pietra, Stephen\nA. Della Pietra, and Robert L. Mercer. 1993.\nThe mathematics of statistical machine translation:\nParameter estimation.Computational Linguistics,\n19(2):263–311.\nCamargo de Souza, Jos ´e Guilherme, Jes ´us Gonz ´alez-\nRubio, Christian Buck, Marco Turchi, and Matteo\nNegri. 2014. FBK-UPV-UEdin participation in\nthe wmt14 quality estimation shared-task. InPro-\nceedings of the 9th Workshop on Statistical Machine\nTranslation, pages 322–328, Baltimore, USA, June.\nAssociation for Computational Linguistics.\nDuda, R. O., P. E. Hart, and D. G. Stork. 2000.Pattern\nClassification. John Wiley and Sons Inc., second edi-\ntion.Espl`a-Gomis, Miquel, Felipe S ´anchez-Mart ´ınez, and\nMikel L. Forcada. 2011. Using machine translation\nin computer-aided translation to suggest the target-\nside words to change. InProceedings of the Ma-\nchine Translation Summit XIII, pages 172–179, Xi-\namen, China.\nGandrabur, Simona and George Foster. 2003. Confi-\ndence estimation for translation prediction. InPro-\nceedings of the 7th Conference on Natural Language\nLearning at HLT-NAACL 2003 - Volume 4, CONLL\n’03, pages 95–102.\nHall, Mark, Eibe Frank, Geoffrey Holmes, Bernhard\nPfahringer, Peter Reutemann, and Ian H. Witten.\n2009. The WEKA Data Mining Software: an Up-\ndate.SIGKDD Explorations, 11(1):10–18.\nHornik, K., M. Stinchcombe, and H. White. 1989. Mul-\ntilayer feedforward networks are universal approxi-\nmators.Neural Networks, 2(5):359–366, July.\nLevenshtein, V.I. 1966. Binary codes capable of cor-\nrecting deletions, insertions and reversals.Soviet\nPhysics Doklady, 10(8):707–710.\nMiller, George A. 1995. Wordnet: A lexical\ndatabase for English.Communications of the ACM,\n38(11):39–41.\nPowers, David M. W. 2011. Evaluation: From\nprecision, recall and F-measure to ROC, informed-\nness, markedness & correlation.Journal of Machine\nLearning Technologies, 2.\nSpecia, Lucia and Radu Soricut. 2013. Quality esti-\nmation for machine translation: preface.Machine\nTranslation, 27(3-4):167–170.\nSpecia, Lucia, Dhwaj Raj, and Marco Turchi. 2010.\nMachine translation evaluation versus quality estima-\ntion.Machine Translation, 24(1):39–50.\nSpecia, Lucia, Kashif Shah, Jos ´e GC De Souza, and\nTrevor Cohn. 2013. QuEst-a translation quality es-\ntimation framework. InACL (Conference System\nDemonstrations), pages 79–84.\nUeffing, Nicola and Hermann Ney. 2005. Applica-\ntion of word-level confidence measures in interactive\nstatistical machine translation. InProceedings of\nthe 10th European Association for Machine Transla-\ntion Conference ”Practical applications of machine\ntranslation”, pages 262–270.\nUeffing, Nicola and Hermann Ney. 2007. 
Word-level\nconfidence estimation for machine translation.Com-\nputational Linguistics, 33(1):9–40, March.\n26",
"main_paper_content": null
}
|
{
"decision": "Unknown",
"reviews": []
}
| 0 | 0 |
[] |
[] |