metadata: dict
paper: dict
review: dict
citation_count: int64 (0, 0)
normalized_citation_count: int64 (0, 0)
cited_papers: listlengths (0, 0)
citing_papers: listlengths (0, 0)
{ "id": "M2-IV12D_X", "year": null, "venue": "EAMT 2023", "pdf_link": "https://aclanthology.org/2023.eamt-1.61.pdf", "forum_link": "https://openreview.net/forum?id=M2-IV12D_X", "arxiv_id": null, "doi": null }
{ "title": "HPLT: High Performance Language Technologies", "authors": [ "Mikko Aulamo", "Nikolay Bogoychev", "Shaoxiong Ji", "Graeme Nail", "Gema Ramírez-Sánchez", "Jörg Tiedemann", "Jelmer van der Linde", "Jaume Zaragoza" ], "abstract": "Mikko Aulamo, Nikolay Bogoychev, Shaoxiong Ji, Graeme Nail, Gema Ramírez-Sánchez, Jörg Tiedemann, Jelmer van der Linde, Jaume Zaragoza. Proceedings of the 24th Annual Conference of the European Association for Machine Translation. 2023.", "keywords": [], "raw_extracted_content": "HPLT: High Performance Language Technologies\nMikko Aulamo⋆, Nikolay Bogoychev‡, Shaoxiong Ji⋆, Graeme Nail‡, Gema Ram ´ırez-S ´anchez†,\nJ¨org Tiedemann⋆, Jelmer van der Linde‡, Jaume Zaragoza†\n⋆University of Helsinki,‡University of Edinburgh,†Prompsit Language Engineering\nhttps://hplt-project.org/\nAbstract\nWe describe the High Performance Lan-\nguage Technologies project (HPLT), a 3-\nyear EU-funded project started in Septem-\nber 2022. HPLT will build a space combin-\ning petabytes of natural language data with\nlarge-scale model training. It will derive\nmonolingual and bilingual datasets from\nthe Internet Archive and CommonCrawl\nand build efficient and solid machine trans-\nlation (MT) as well as large language mod-\nels (LLMs). HPLT aims at providing free,\nsustainable and reusable datasets, mod-\nels and workflows at scale using high-\nperformance computing (HPC).\n1 Introduction\nThe HPLT project aims at innovating the cur-\nrent language and translation modelling landscape\nby building the largest collection of free and re-\nproducible models and datasets for around 100\nlanguages. Datasets will be derived from web-\ncrawled data using already established processing\npipelines from the ParaCrawl1and MaCoCu cor-\npora.2They will be adapted and improved to run\nefficiently on HPC centres in order to produce con-\nsistent datasets at scale. HPLT will also build open,\nsustainable and efficient LLMs and MT models\nwith significant language coverage using the pow-\nerful supercomputing infrastructure of European\nHPC centres. Datasets, models, pipelines and soft-\nware to build them will be shared along with addi-\ntional tools to ease data management, model build-\ning and evaluation.\nAn HPC-powered consortium: The consortium\ngathers research groups, the experience of an in-\n© 2023 The authors. This article is licensed under a Creative\nCommons 4.0 licence, no derivative works, attribution, CC-\nBY-ND.\n1https://paracrawl.eu/\n2https://macocu.eudustry partner, and the computational infrastruc-\nture and involvement of two HPC centres in Eu-\nrope. Most of the processing will happen on\nLUMI, a pre-exascale supercomputer, which will\nbe made NLP-aware to pave the way for fur-\nther initiatives and exploitation of the project out-\ncomes. The 8 partners in the consortium are:\nCharles University in Prague, University of Edin-\nburgh, University of Helsinki, University of Oslo,\nUniversity of Turku, Prompsit Language Engineer-\ning, CESNET, and Sigma2 HPC centres.\n2 Expected Results\nDatasets: Starting from 7 PB of web-crawled\ndata from the Internet Archive3and 5 from Com-\nmonCrawl,4we will derive monolingual and bilin-\ngual datasets for systematic LLM and MT build-\ning with a large language coverage. Data cura-\ntion, a crucial part of the process, will be based\non adapted versions of the Bitextor and Mono-\ntextor pipelines5. Filtered and anonymized ver-\nsions enriched with genre information will be re-\nleased. 
Output formats will follow commonly\nadopted standards and their distribution will be\nhandled through OPUS6and LINDAT7with open-\nsource licenses along with analytics and metadata.\nModels: Efficient and high-quality language and\ntranslation models will be built and released. Re-\ngarding LLMs, when sizes and computational re-\nsources allow, we aim at building BERT (Devlin et\nal., 2019), T5 (Raffel et al., 2020), and GPT-like\nmodels (Brown et al., 2020) for all the targeted\nlanguages. We will opt for multilingual models\nwhere necessary to mitigate the lack of sufficient\ntraining data that is expected for some of the tar-\ngeted languages. For MT models, we plan to build\n3https://archive.org/\n4https://commoncrawl.org/\n5https://github.com/bitextor/\n6https://opus.nlpl.eu/\n7https://lindat.mff.cuni.cz/\nnot only English-centric models but also other lan-\nguage combinations including multilingual mod-\nels depending on data availability and interest. We\nwill share HPLT models through OPUS-MT and\nHuggingFace with open-source licenses. The first\nHPLT LLMs have already been published: GPT3-\nlike models for Finnish8, still under evaluation.\nPipelines and Tools: HPLT wants to ease data\nmanagement and model building, making HPC\ncentres in Europe ready to run the same pipelines\nand tools in a transparent and straightforward man-\nner even on other datasets and languages. Below,\nwe describe two of the tools that HPLT is develop-\ning in this direction.\nOpusCleaner9is a one-stop dataset down-\nload/examine/filter toolkit built with modern large-\nscale NLP models in mind. It is based on python\nand uses a web interface to make it easy to run\non HPC clusters. The workflow is as follows: (1)\ndataset selection: downloads to the host running\nthe web server, not the local machine; (2) filter\nselection: allows filtering and visualizing the ef-\nfect interactively on a random sample of each se-\nlected dataset; (3) labeling: allows categorising\neach dataset; (4) batch filter execution: applies fil-\nters and labeling to all datasets from a one-line run-\nme command and (5) dataset (near-)deduplication\nacross collections.\nOpusTrainer10is a large-scale data shuf-\nfler/augmenter which takes a collection of datasets\nand feeds it to a neural network training toolkit ac-\ncording to a set schedule. Its design aims to solve\nneural network training problems at scale. It fea-\ntures: (1) sampling and mixing of data from multi-\nple sources; (2) per-source shuffling and indepen-\ndent dataset mixing avoiding out-of-memory is-\nsues; (3) curriculum learning with the definition of\ntraining stages, each one having its own mixture of\ndatasets; (4) stochastic modifications of the train-\ning batch to support end-user requirements like\nsupport for title case, all caps, placeholders, etc.\n3 MT at HPLT\nHPLT’s ambition is to democratise access to effi-\ncient MT. We will use our large curated datasets\nwith robust software pipelines to train high-quality\nMT systems and, by leveraging the HPC capacity\navailable to the project, over an extensive set of\n8https://turkunlp.org/gpt3-finnish\n9shorturl.at/boLW7\n10shorturl.at/pDKPTlanguages. All models will be properly evaluated\nand documented using standard metrics. Releas-\ning all models with appropriate metadata and op-\ntimised training recipes will also help to avoid un-\nnecessary computation for sub-optimal and repet-\nitive procedures. 
Beyond large systems, we aim\nto build lightweight models using knowledge dis-\ntillation (Kim and Rush, 2016). An ensemble of\nlarge teacher models can produce compact stu-\ndents that mimic their teacher’s quality, with neg-\nligible degradation but much lower computational\ncosts during inference. Quantisation and other ef-\nficiency techniques can further increase speed and\nlower the memory footprint, which is essential for\nresponsive and large-scale translation tasks.\nAcknowledgment\nThis project has received funding from the Eu-\nropean Union’s Horizon Europe research and in-\nnovation programme under Grant agreement No\n101070350 and from UK Research and Innovation\n(UKRI) under the UK government’s Horizon Eu-\nrope funding guarantee [grant number 10052546].\nThe contents of this publication are the sole re-\nsponsibility of its authors and do not necessarily\nreflect the opinion of the European Union.\nReferences\n[Brown et al.2020] Brown, Tom, Benjamin Mann, Nick\nRyder, Melanie Subbiah, Jared D Kaplan, Pra-\nfulla Dhariwal, Arvind Neelakantan, Pranav Shyam,\nGirish Sastry, Amanda Askell, et al. 2020. Lan-\nguage models are few-shot learners. Advances in\nNeural Information Processing Systems , 33:1877–\n1901.\n[Devlin et al.2019] Devlin, Jacob, Ming-Wei Chang,\nKenton Lee, and Kristina Toutanova. 2019. BERT:\nPre-training of deep bidirectional transformers for\nlanguage understanding. In Proceedings of the 2019\nConference of the North American Chapter of the\nAssociation for Computational Linguistics: Human\nLanguage Technologies .\n[Kim and Rush2016] Kim, Yoon and Alexander M\nRush. 2016. Sequence-level knowledge distillation.\nInProceedings of the 2016 Conference on Empiri-\ncal Methods in Natural Language Processing , pages\n1317–1327.\n[Raffel et al.2020] Raffel, Colin, Noam Shazeer, Adam\nRoberts, Katherine Lee, Sharan Narang, Michael\nMatena, Yanqi Zhou, Wei Li, and Peter J Liu. 2020.\nExploring the limits of transfer learning with a uni-\nfied text-to-text transformer. The Journal of Machine\nLearning Research , 21(1):5485–5551.", "main_paper_content": null }
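An illustrative sketch of the data-mixing idea behind OpusTrainer may help here: per-source shuffling plus weighted sampling under a simple two-stage curriculum. The code below is plain Python with invented corpus names and weights; it is not the OpusTrainer API or configuration format, only a minimal stand-in for the behaviour described above.

```python
# Minimal sketch (not the OpusTrainer API): sample training lines from several
# independently shuffled sources according to per-stage mixture weights.
import random

def shuffled_stream(pairs):
    """Shuffle one corpus independently and yield sentence pairs forever."""
    pairs = list(pairs)
    while True:
        random.shuffle(pairs)
        yield from pairs

def mixed_batches(sources, weights, batch_size, num_batches):
    """Draw batches, picking a source per line according to `weights`."""
    streams = {name: shuffled_stream(data) for name, data in sources.items()}
    names = list(sources)
    probs = [weights[n] for n in names]
    for _ in range(num_batches):
        yield [next(streams[random.choices(names, probs)[0]])
               for _ in range(batch_size)]

sources = {
    "clean":   [("ein Haus", "a house"), ("ein Baum", "a tree")],
    "crawled": [("guten Tag", "good day"), ("danke", "thanks")],
}
# A tiny two-stage "curriculum": mostly clean data first, then more crawled data.
for stage_weights in ({"clean": 0.9, "crawled": 0.1},
                      {"clean": 0.5, "crawled": 0.5}):
    for batch in mixed_batches(sources, stage_weights, batch_size=4, num_batches=2):
        pass  # feed `batch` to the NMT training toolkit here
```

Shuffling each source separately and sampling per line keeps memory use flat regardless of corpus size, which is the out-of-memory concern the feature list above refers to.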
{ "decision": "Unknown", "reviews": [] }
0
0
[]
[]
{ "id": "O3vZokfejfhG", "year": null, "venue": "EAMT 2015", "pdf_link": "https://aclanthology.org/W15-4946.pdf", "forum_link": "https://openreview.net/forum?id=O3vZokfejfhG", "arxiv_id": null, "doi": null }
{ "title": "Smart Computer Aided Translation Environment - SCATE", "authors": [ "Vincent Vandeghinste", "Tom Vanallemeersch", "Frank Van Eynde", "Geert Heyman", "Sien Moens", "Joris Pelemans", "Patrick Wambacq", "Iulianna Van der Lek-Ciudin", "Arda Tezcan", "Lieve Macken", "Véronique Hoste", "Eva Geurts", "Mieke Haesen" ], "abstract": "Vincent Vandeghinste, Tom Vanallemeersch, Frank Van Eynde, Geert Heyman, Sien Moens, Joris Pelemans, Patrick Wambacq, Iulianna Van der Lek - Ciudin, Arda Tezcan, Lieve Macken, Véronique Hoste, Eva Geurts, Mieke Haesen. Proceedings of the 18th Annual Conference of the European Association for Machine Translation. 2015.", "keywords": [], "raw_extracted_content": "Smart Computer Aided Translation Environment – SCATE\nIWT – Agentschap voor Innovatie door Wetenschap en Technologie\nStrategic basic research\nProject Nr. 130041\nhttp://www.ccl.kuleuven.be/scateUniversity of Leuven (CCL - ESAT/PSI - LIIR – Fac. Arts Antwerp), Belgium\nUniversity of Ghent (LT3), Belgium\nHasselt University (tUL - iMinds, Expertise Centre for Digital Media), Belgium\nProject duration: March 2014 – February 2018\nSummary\nWe aim at improving the translators' efficiency through five different scientific objectives. \nConcerning improvements in translation technology, we are investigating syntax-based fuzzy \nmatching in which we estimate similarity based on syntactic edit distance or similar measures. We \nare working on syntax-based MT using synchronous tree substitution grammars induced from \nparallel node-aligned treebanks, and are building a decoder to use these grammars in translation.\nConcerning improvements in evaluation of computer-aided translation, we have developed a \ntaxonomy of typical MT errors and are constructing a manually annotated corpus of 3000 segments \nof Google Translate MT errors. Post-editing behaviour of translators is being monitored. \nConcerning improvements in automated terminology extraction from comparable corpora, we \nhave developed C-BiLDA, a multilingual topic model. It does not assume linked documents to have \nidentical topic distributions. On the task of cross-lingual document categorization, we trained it on a \ncomparable corpus of Wikipedia documents, and inferred cross-lingual document representations on \na dataset for document categorization. The document representations and category labels are fed to \nan SVM classifier: we train on the source language and predict the labels for the target language \ndocuments. C-BiLDA outperforms the state-of-the-art in multilingual topic modeling.\nConcerning improvements in speech recognition accuracy, we clustered words by their \ntranslations in multiple languages. If words share a translation in many languages, they are \nconsidered synonyms. By adding context and by filtering out those that do not belong to the same \npart of speech, we find meaningful word clusters to incorporate into a language model. We found no \nimprovements, and attribute this in part to errors made by the MT system and to the incorporation \ntechnique (hard clustered class-based n-grams). We will take context into account during evaluation \nand/or further improve the word clusters by using the translations as features in vector space \nmodeling techniques.\nConcerning improvements in work flows and personalised user interfaces, we reviewed existing \ntranslation systems, and created an inventory of the various features and configuration options of \nthe systems. 
Six Flemish companies have been interviewed regarding their practices and their vision for future CAT tools. A worldwide survey has been conducted with more than 135 responses. Detailed analyses of translators' practices have been conducted by observing more than 7 translators in a contextual inquiry.\nIn the upcoming period, the results of the different studies will be analysed in order to obtain a model of how CAT tools can support workflows for specific translators. This model will be used as a base for the personalised visualisations as part of interfaces for translation work. In contrast with traditional engineering approaches, this model will also be usable by translators as part of the configuration of their personal CAT tool.", "main_paper_content": null }
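The cross-lingual document categorization experiment described above can be sketched in a few lines: documents from both languages are represented in a shared topic space (C-BiLDA output), an SVM is trained on labelled source-language documents and then applied to target-language documents. The snippet uses random vectors as stand-ins for the topic representations and scikit-learn's LinearSVC as the classifier; both choices are illustrative assumptions, not details taken from the project summary.

```python
# Sketch of train-on-source / predict-on-target categorization with an SVM.
import numpy as np
from sklearn.svm import LinearSVC

rng = np.random.default_rng(0)
n_topics = 20  # dimensionality of the shared (cross-lingual) topic space

# Stand-ins for C-BiLDA document representations.
source_docs = rng.random((200, n_topics))      # e.g. labelled English articles
source_labels = rng.integers(0, 3, size=200)   # category labels
target_docs = rng.random((50, n_topics))       # e.g. unlabelled Dutch articles

clf = LinearSVC().fit(source_docs, source_labels)  # train on the source language
predicted = clf.predict(target_docs)                # label the target language
print(predicted[:10])
```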
{ "decision": "Unknown", "reviews": [] }
0
0
[]
[]
{ "id": "LUMC7t7dL8", "year": null, "venue": "EAMT 2018", "pdf_link": "https://aclanthology.org/2018.eamt-main.57.pdf", "forum_link": "https://openreview.net/forum?id=LUMC7t7dL8", "arxiv_id": null, "doi": null }
{ "title": "Smart Computer-Aided Translation Environment (SCATE): Highlights", "authors": [ "Vincent Vandeghinste", "Tom Vanallemeersch", "Bram Bulté", "Liesbeth Augustinus", "Frank Van Eynde", "Joris Pelemans", "Lyan Verwimp", "Patrick Wambacq", "Geert Heyman", "Marie-Francine Moens", "Iulianna Van der Lek-Ciudin", "Frieda Steurs", "Ayla Rigouts Terryn", "Els Lefever", "Arda Tezcan", "Lieve Macken", "Sven Coppers", "Jens Brulmans", "Jan Van den Bergh", "Kris Luyten", "Karin Coninx" ], "abstract": "Vincent Vandeghinste, Tom Vanallemeersch, Bram Bulté, Liesbeth Augustinus, Frank Van Eynde, Joris Pelemans, Lyan Verwimp, Patrick Wambacq, Geert Heyman, Marie-Francine Moens, Iulianna van der Lek-Ciudin, Frieda Steurs, Ayla Rigouts Terryn, Els Lefever, Arda Tezcan, Lieve Macken, Sven Coppers, Jens Brulmans, Jan Van Den Bergh, Kris Luyten, Karin Coninx. Proceedings of the 21st Annual Conference of the European Association for Machine Translation. 2018.", "keywords": [], "raw_extracted_content": "Smart Computer -Aided Translation Environment ( SCATE ): Highlights \nVincent Vandeghinste \nTom Vanallemeersch \nBram Bulté \nLiesbeth Augustinus \nFrank Van Eynde \nJoris Pelemans \nLyan Verwimp \nPatrick Wambacq \nGeert Heyman \nMarie -Francine Moens \nIulianna van der Lek-Ciudin \nFrieda Steurs \nKU Leuven \nfirst.lastname @kuleuven.be Ayla Rigouts Terryn \nEls Lefever \nArda Tezcan \nLieve Macken \nGhent University \[email protected] \nSven Coppers \nJens Brulmans \nJan Van den Bergh \nKris Luyten \nKarin Coninx \nUHasselt – tUL – EDM \[email protected] \nAbstract \nWe present the highlights of the now fin-\nished 4 -year SCATE project . It was com-\npleted in February 2018 and f unded by the \nFlemish Government IWT -SBO , project \nNo. 130041.1 \nWe present key results of SCATE (Smart Com-\nputer -Aided Translation Environment) . The pro-\nject investigated algorithms, user interface s and \nmethods that can contribute to the development of \nmore efficient tools for translation work. \nImproved fuzzy matching: Levenshtein dis-\ntance is not the best predictor for post -editing ef-\nfort. Linguistic metrics and different metrics (such \nas TER) combined show better results . \nIntegration of Translation Memory (TM) \nand Machine Translation (MT): Combining \nTM matches, fuzzy match repair and SMT show s \nimprovements over a baseline SMT . \nInformed Quality Estimation: Accuracy and \nfluency error detection systems form the basis of \nthe sentenc e-level Quality Estimation system, \nwhich results in better correlations with temporal \npost-editing effort compared to the Quest++ base-\nline. Detected errors can additionally be high-\nlighted in the MT output. \n Identifying bilingual terms in comparable \ntexts : We found improvements when combining \nword embeddings with character -based models \n \n © 201 8 The authors. This article is licensed under \na Creative Commons 3.0 licence, no derivative works, attrib-\nution, CC -BY-ND. \n using a neural classifier trained on a seed lexicon. \nThis includes short multi -word term phrases. \nPost-Editing via Automated Speech Recog-\nnition (ASR): ASR for post -editing can benefit \nfrom additional information sources, such as the \nsource language, the MT translation model and \nthe activation of domain -specific terminology, for \nwhich w e boost ed ASR language model probabil-\nities. 
The ASR language model is also enriched with character-level information, making it possible to model out-of-vocabulary words, which are very common in new domains.\nIntelligible Translator Interfaces: We iteratively developed a functional prototype that integrates several of the aforementioned translation aids. In contrast with other approaches, our system applies the design concept of intelligibility to support translators’ decision-making process when they interact with their translation environment. The evaluation showed that the prototype allows translators to better evaluate translation suggestions from MT, TM and term base, but it had no major impact on their performance in terms of speed and quality. Furthermore, a small-scale lab experiment revealed no significant difference in efficiency between translating with the prototype and with a commercial tool, which shows fewer suggestions by default.\nIntegration: We created an interactive demo so that translators can experience and evaluate our research results: http://scate.edm.uhasselt.be/\n1 http://www.ccl.kuleuven.be/scate", "main_paper_content": null }
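For context on the fuzzy-matching finding above, the sketch below shows the baseline being improved upon: a plain word-level edit-distance match score against a translation memory. The TM segments and scoring are invented for illustration; SCATE's point is that such a single surface metric predicts post-editing effort less well than combinations with TER-like and linguistically informed metrics.

```python
# Word-level Levenshtein fuzzy matching against a tiny translation memory.
def edit_distance(a, b):
    """Word-level Levenshtein distance between two token lists."""
    prev = list(range(len(b) + 1))
    for i, wa in enumerate(a, 1):
        cur = [i]
        for j, wb in enumerate(b, 1):
            cur.append(min(prev[j] + 1, cur[j - 1] + 1, prev[j - 1] + (wa != wb)))
        prev = cur
    return prev[-1]

def fuzzy_score(source, tm_segment):
    """Similarity in [0, 1]; 1.0 is an exact match."""
    s, t = source.split(), tm_segment.split()
    return 1.0 - edit_distance(s, t) / max(len(s), len(t))

tm = ["the printer is out of paper", "please restart the machine"]
query = "the printer is out of order"
best = max(tm, key=lambda seg: fuzzy_score(query, seg))
print(best, round(fuzzy_score(query, best), 2))  # -> "... out of paper" 0.83
```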
{ "decision": "Unknown", "reviews": [] }
0
0
[]
[]
{ "id": "lWinDKTqrh", "year": null, "venue": "EAMT 2023", "pdf_link": "https://aclanthology.org/2023.eamt-1.53.pdf", "forum_link": "https://openreview.net/forum?id=lWinDKTqrh", "arxiv_id": null, "doi": null }
{ "title": "SignON: Sign Language Translation. Progress and challenges", "authors": [ "Vincent Vandeghinste", "Dimitar Shterionov", "Mirella De Sisto", "Aoife Brady", "Mathieu De Coster", "Lorraine Leeson", "Josep Blat", "Frankie Picron", "Marcello Paolo Scipioni", "Aditya Parikh", "Louis ten Bosch", "John O'Flaherty", "Joni Dambre", "Jorn Rijckaert", "Bram Vanroy", "Víctor Ubieto Nogales", "Santiago Egea Gómez", "Ineke Schuurman", "Gorka Labaka", "Adrián Núñez-Marcos", "Irene Murtagh", "Euan McGill", "Horacio Saggion" ], "abstract": "Vincent Vandeghinste, Dimitar Shterionov, Mirella De Sisto, Aoife Brady, Mathieu De Coster, Lorraine Leeson, Josep Blat, Frankie Picron, Marcello Paolo Scipioni, Aditya Parikh, Louis ten Bosch, John O’Flaherty, Joni Dambre, Jorn Rijckaert, Bram Vanroy, Victor Ubieto Nogales, Santiago Egea Gomez, Ineke Schuurman, Gorka Labaka, Adrián Núnez-Marcos, Irene Murtagh, Euan McGill, Horacio Saggion. Proceedings of the 24th Annual Conference of the European Association for Machine Translation. 2023.", "keywords": [], "raw_extracted_content": "SignON Sign Language Translation: Progress and Challenges\nVincent Vandeghinste†a, Dimitar Shterionov∗, Mirella De Sisto∗, Aoife Brady‡, Mathieu De Coster§\nLorraine Leeson¶, Josep Blat∗∗, Frankie Picron††, Marcello Paolo Scipioni‡‡\nAditya Parikh§§, Louis ten Bosch§§, John O’Flaherty∥, Joni Dambre§, Jorn Rijckaertx\nBram Vanroya, Victor Ubieto Nogales∗∗, Santiago Egea Gomez∗∗, Ineke Schuurmana\nGorka Labakab,Adri ´an N ´u˜nez-Marcosb,Irene Murtaghc,Euan McGill∗∗,Horacio Saggion∗∗\n∗Tilburg University,†Instituut voor de Nederlandse Taal,‡ADAPT,§Ghent University,\n¶Trinity College Dublin,∗∗Universitat Pompeu Fabra,††European Union of the Deaf,\n‡‡Fincons,§§Radboud University,∥mac.ie,xVlaams Gebarentaalcentrum,aKU Leuven,\nbUniversity of the Basque Country UPV/EHU,cTU Dublin\nSignON1is a Horizon 20202project, running\nfrom January 2021 until December 2023, address-\ning the lack of technology and services for MT be-\ntween sign languages (SLs) and spoken languages\n(SpLs), through an inclusive, human-centric solu-\ntion, contributing to the repertoire of communica-\ntion media for deaf, hard of hearing (DHH) and\nhearing individuals. Even though there are esti-\nmates that over 70 million DHH individuals have\nSLs as their primary means of communication,\nSLs are often not targeted by new language tech-\nnologies, due to challenges, such as the scarcity\nof data and the lack of a standardized written rep-\nresentation. This paper presents an update of the\nproject status, describing how we address the chal-\nlenges and peculiarities of SLMT.\nWe built an MT framework between SLs and\nSpLs, in all possible combinations, focusing on\nIrish, Dutch, Flemish, Spanish and British SL and\non Irish, Dutch, Spanish and English SpLs (spoken\nand written). To limit the computational complex-\nity and allow the effective development of compo-\nnents in parallel, we develop a translation pipeline\nthat employs an interlingual representation (In-\nterL) (Figure 1). Inputs can be an SpL utterance\nin audio or text or an SL utterance in video. The\ninput is processed via the corresponding compo-\nnent: automatic speech recognition (ASR) con-\nverts audio into text; SL recognition (SLR) con-\nverts SL videos into latent representations. The in-\ntegration of all of these components is currently\nongoing. We develop ASR for both typical and\natypical speech, such as speech of DHH persons.\n© 2023 The authors. 
This article is licensed under a Creative\nCommons 4.0 licence, no derivative works, attribution, CC-\nBY-ND.\n1https://signon-project.eu/\n2Research and Innovation Programme Grant Agreement No.\n101017255A use case sub-project collects speech data from\nthis specific user group. Both conventional ‘mod-\nular’ approaches as well as more recently devel-\noped end-to-end approaches based on deep learn-\ning (DL) are employed.\nSLR uses a pose estimator (Lugaresi et al.,\n2019) and post-processing of the predicted key-\npoints. This yields robust representations: miss-\ning data are imputed and keypoints are normalised\nto account for camera position. These representa-\ntions are further processed into embeddings, which\nare fine-tuned on SL data, using glosses as target\nlabels. However, we do not predict glosses but ex-\ntract visual embeddings which are used as input for\nthe SL MT models.\nWe use mBART (Liu et al., 2020) for text-to-\ntext translation, fine-tuned to also support Irish\nand SL-to-text translation, trained to work with\nvisual embeddings coming from SLR. We also\noperationalise knowledge-based approaches. We\nuse Abstract Meaning Representation (AMR) (Ba-\nnarescu et al., 2013) as an InterL to “extract”\nmeaning. mBART was fine-tuned on automatically\ntranslated versions of the AMR Bank 3.0 (Knight\net al., 2020) to create a multilingual text-to-AMR\nmodel.3Because of the lack of SL data we work on\na knowledge-based alternative and use rule-based\nmethods for data-augmentation (Chiruzzo et al.,\n2022). Schuurman et al. (to appear) investigate\nwhether SL WordNet (“SignNets”) can be linked\nto existing WordNets or whether the difference in\nmodality warrants its own approach.\nThe output of the InterL (AMR or embeddings)\nis decoded into the target language. In case of\na target SL, this is a representation for avatar\nmovement, such as BML (Behaviour Markup Lan-\n3https://huggingface.co/spaces/\nBramVanroy/text-to-amr\nVideo (SL)\nKeypoint \nrepresentation\nSymbolic representation \n/ Embeddings\nInterLAvatar (SL)\nBML\nSymbolic representation \n/ Vocabulary\nTextAudio Input text\nRecognized Text\nNLU-processed text\nText2AudioFigure 1: The SignON MT pipeline facilitating the translation between all supported sign and spoken languages.\nguage) (Murtagh et al., 2022) or SiGML (Signing\nGesture Markup Language). In case of SpLs it is\ntext, which can be converted to speech through a\ntext-to-speech system.\nTo allow users acces to the SignON services,\nwe have developed a mobile app (for iOS and An-\ndroid) that has access to the SignON MT pipeline.\nDevelopment of SLR and SLMT tools is slowed\ndown due to resource scarcity and standardization\nissues in the available data. De Sisto et al. (2022)\ncompare various SL corpora and machine learn-\ning datasets and propose a framework to unify the\navailable resources and facilitate SL research. We\nhave initiated a number of data collection efforts.\nVandeghinste et al. (2022) compiled a corpus of\nBelgian COVID-19 press conferences, annotated\nwith keypoints and speech recognition, providing a\nparallel VGT-NL dataset. GostParcSign (De Sisto\net al., submitted) and NGT-HoReCo are two small\ndatasets in which professional SL translators trans-\nlate VGT into Dutch and Dutch into NGT, respec-\ntively. Another approach towards data collection\nis through the SignON ML app, which allows SL\nusers to upload SL recordings and their associated\ntranslation in a written language.\nSignON is in a continuous dialogue with target\nusers. 
We regularly organize co-creation events\n(e.g. round tables, focus groups, and workshops)\nto receive feedback on the project’s progress,\nwhich is then used to steer and refine further de-\nvelopments.\nConclusions Up till now we have conducted a\nsignificant amount of research in the fields of SLR,\nSL(M)T, SLS, ASR, (SL) linguistics, ethics, and\nothers. We continue the development and testing\nof models as well as their validation by the com-\nmunity. We have co-developed the inference as\nwell as ML Apps. We have established a fruitful\nco-creation that allows hearing, deaf and hard ofhearing professionals and potential users to work\ntogether.\nReferences\nBanarescu, L., C. Bonial, S. Cai, et al. 2013. Ab-\nstract Meaning Representation for Sembanking. In\n7th Linguistic Annotation Workshop and Interoper-\nability with Discourse , pages 178–186, August.\nChiruzzo, L., E. McGill, S. Egea-G ´omez, and H. Sag-\ngion. 2022. Translating Spanish into Spanish Sign\nLanguage: Combining rules and data-driven ap-\nproaches. In LoResMT .\nDe Sisto, M., V . Vandeghinste, S. Egea G ´omez, et al.\n2022. Challenges with Sign Language Datasets for\nSign Language Recognition and Translation. In\nLREC , pages 2478–2487.\nDe Sisto, Mirella, Vincent Vandeghinste, and Dimitar\nShterionov. submitted. GoSt-ParC-Sign: Gold Stan-\ndard Parallel Corpus of Sign and spoken language.\nInEAMT 2023 .\nKnight, Kevin, Bianca Badarau, Laura Baranescu, et al.\n2020. Abstract Meaning Representation (AMR) An-\nnotation Release 3.0, January.\nLiu, Yinhan, Jiatao Gu, Naman Goyal, et al. 2020.\nMultilingual Denoising Pre-training for Neural Ma-\nchine Translation. Transactions of the Association\nfor Computational Linguistics , 8:726–742.\nLugaresi, C., J. Tang, H. Nash, et al. 2019. MediaPipe:\nA Framework for Perceiving and Processing Reality.\nInWorkshop at CVPR .\nMurtagh, Irene, V ´ıctor Ubieto Nogales, and Josep Blat.\n2022. Sign Language Machine Translation and the\nSign Language Lexicon: A Linguistically Informed\nApproach. In AMTA , pages 240—-251.\nSchuurman, I., T. Declerck, C. Brosens, et al. to appear.\nAre there just WordNets or also SignNets? In Global\nWordNet Conference .\nVandeghinste, V ., B. Van Dyck, M. De Coster, et al.\n2022. BeCoS Corpus: Belgian Covid-19 Sign Lan-\nguage Corpus. A Corpus for Training Sign Lan-\nguage Recognition and Translation. CLIN Journal ,\n12:7–17.", "main_paper_content": null }
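The routing logic of the SignON pipeline sketched in Figure 1 (input modality, then ASR or SLR, then an interlingual representation, then target-side decoding into text, speech or an avatar script) can be summarised as below. Every component is a stub with an invented name; the sketch only mirrors the control flow described in the text, not the project's actual components.

```python
# Schematic routing of the SignON-style pipeline; all components are stubs.
from dataclasses import dataclass

@dataclass
class Utterance:
    modality: str    # "audio", "text", or "sl_video"
    payload: object

# Stubs standing in for ASR, SLR, the InterL encoder/decoder, TTS and
# avatar rendering (BML/SiGML output).
def recognize_speech(audio): return "hello"
def estimate_pose(video): return [[0.0, 0.0]]
def encode_text(text): return ("interl", text)
def encode_sign(keypoints): return ("interl", "<sign utterance>")
def decode_text(interl): return interl[1]
def synthesize_speech(text): return b"wav:" + text.encode()
def render_avatar(interl): return "<bml>...</bml>"

def to_interlingua(utt):
    if utt.modality == "audio":
        return encode_text(recognize_speech(utt.payload))   # ASR path
    if utt.modality == "sl_video":
        return encode_sign(estimate_pose(utt.payload))       # SLR path
    return encode_text(utt.payload)                           # plain text input

def from_interlingua(interl, target_modality):
    if target_modality == "sign_avatar":
        return render_avatar(interl)
    text = decode_text(interl)
    return synthesize_speech(text) if target_modality == "audio" else text

print(from_interlingua(to_interlingua(Utterance("sl_video", "<frames>")), "text"))
```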
{ "decision": "Unknown", "reviews": [] }
0
0
[]
[]
{ "id": "bQDuibQ0D6Y", "year": null, "venue": "EAMT 2022", "pdf_link": "https://aclanthology.org/2022.eamt-1.53.pdf", "forum_link": "https://openreview.net/forum?id=bQDuibQ0D6Y", "arxiv_id": null, "doi": null }
{ "title": "DeepSPIN: Deep Structured Prediction for Natural Language Processing", "authors": [ "André F. T. Martins", "Ben Peters", "Chrysoula Zerva", "Chunchuan Lyu", "Gonçalo M. Correia", "Marcos V. Treviso", "Pedro Henrique Martins", "Tsvetomila Mihaylova" ], "abstract": "André F. T. Martins, Ben Peters, Chrysoula Zerva, Chunchuan Lyu, Gonçalo Correia, Marcos Treviso, Pedro Martins, Tsvetomila Mihaylova. Proceedings of the 23rd Annual Conference of the European Association for Machine Translation. 2022.", "keywords": [], "raw_extracted_content": "DeepSPIN: Deep Structured Prediction for Natural Language Processing\nAndré F. T. Martins, Ben Peters, Chrysoula Zerva, Chunchuan Lyu,\nGonçalo Correia, Marcos Treviso, Pedro Martins, Tsvetomila Mihaylova\nInstituto de Telecomunicações and Unbabel,\nLisbon, Portugal\[email protected]\nAbstract\nDeepSPIN is a research project funded\nby the European Research Council (ERC),\nwhose goal is to develop new neural struc-\ntured prediction methods, models, and al-\ngorithms for improving the quality, inter-\npretability, and data-efficiency of natural\nlanguage processing (NLP) systems, with\nspecial emphasis on machine translation\nand quality estimation. We describe in this\npaper the latest findings from this project.\n1 Description\nThe DeepSPIN project1is an ERC Starting Grant\n(2019–2023) hosted at Instituto de Telecomuni-\ncações. Part of the work has been done in col-\nlaboration with Unbabel, an SME in the crowd-\nsourcing translation industry. The main goal of\nDeepSPIN is to bring together deep learning and\nstructured prediction techniques to solve struc-\ntured problems in NLP. The three main objectives\nare: developing better decoding strategies; making\nneural networks more interpretable through the in-\nduction of sparse structure; and incorporating of\nweak supervision to reduce the need for labeled\ndata. We focus here on the applications to MT, in-\ncluding some of the recent results obtained in the\nproject.\nBetter Decoding Strategies. Our initial work on\nsparse sequence-to-sequence models (Peters et al.,\n2019) proposed a new class of decoders (called\n“entmax decoders”, shown in Fig. 1) which op-\nerate over a sparse probability distribution over\n© 2022 The authors. This article is licensed under a Creative\nCommons 3.0 licence, no derivative works, attribution, CC-\nBY-ND.\n1Project website: https://deep-spin.github.io .\nat\non\n,This \nSo\nAnd \nHere \nthe tree of life . view \nlook \nglimpse \nkind \nlooking \nway \nvision \ngaze \nis another 92.9% \n 5.9% \n1.3% \n<0.1% \n49.8% \n27.1% \n19.9% \n2.0%\n0.9% \n0.2% \n<0.1% \n<0.1% \n95.7% \n5.9% \n1.3% Figure 1: Forced decoding using entmax for the German\nsource sentence “Dies ist ein weiterer Blick auf den Baum\ndes Lebens.” Only predictions with nonzero probability are\nshown at each time step. When consecutive predictions con-\nsist of a single word, we combine their borders to showcase\nauto-completion potential.\nwords, which prunes hypotheses automatically. In\n(Peters and Martins, 2021), we have shown that\nentmax decoders are better calibrated and less\nprone to the length bias problem and developed\na new label smoothing technique. We also pre-\nsented entmax sampling for text generation, with\nimproved generation quality (Martins et al., 2020).\nAnother line of work concerns modeling of context\nin machine translation. 
We introduced conditional\ncross-mutual information (CXMI), a technique to\nmeasure the effective use of contextual informa-\ntion by context-aware systems, and context-aware\nword dropout , which increases its use, leading to\nimprovements (Fernandes et al., 2021). We also\ncompared the models’ use of context to that of hu-\nmans for translating ambiguous words, using the\nlatter as extra supervision (Yin et al., 2021).\nSparse Attention and Explainability. A key\nobjective of DeepSPIN is to make neural networks\nmore interpretable to humans. Building upon\nour work on sparse attention mechanisms (Correia\net al., 2019), we presented a framework to pre-\ndict attention sparsity in transformer architectures,\navoiding comparison of queries and keys which\nMT DA C OMET UA-C OMET\nОна сказала, -0.815 0.586 0.149\n’Это не собирается [-0.92, 1.22]\nработать.\nGloss: “ She said, ‘that’s not willing to work ”\nОна сказала: 0.768 1.047 1.023\n«Это не сработает. [0.673, 1.374]\nGloss: “ She said, «That will not work ”\nTable 1: Example of uncertainty-aware MT evaluation.\nShown are two Russian translations of the same English\nsource “ She said, ‘That’s not going to work. ” with refer-\nence “Она сказала: “Не получится.” For the first sen-\ntence, C OMET provides a point estimate (in red) that over-\nestimates quality, as compared to a human direct assessment\n(DA), while our UA-C OMET (ingreen ) returns a large 95%\nconfidence interval which contains the DA value. For the\nsecond sentence UA-C OMET is confident and returns a nar-\nrow 95% confidence interval. Taken from (Glushkova et al.,\n2021).\nwill lead to zero attention probability (Treviso et\nal., 2022). To model long-term memories, we pro-\nposed a new framework based on continuous atten-\ntion, the∞-former (Martins et al., 2022). We also\ncompared different strategies for explainability of\nquality estimation scores, which led to an award in\nthe EvalNLP workshop (Treviso et al., 2021).\nTransfer Learning. We leveraged large pre-\ntrained models to build state-of-the-art models for\nquality estimation (Zerva et al., 2021) and for\nmachine translation evaluation (Rei et al., 2021).\nBuilding upon the recently proposed deep-learned\nMT evaluation metric C OMET (Rei et al., 2020),\nwhich tracks human judgements, we presented a\nnew framework for uncertainty-aware MT eval-\nuation (Glushkova et al., 2021), which endows\nCOMET with confidence intervals for segment-\nlevel quality assessments (Table 1).\nReleased Code and Datasets. To promote re-\nsearch reproducibility, the DeepSPIN project has\nreleased software code and datasets, including:\nOpenKiwi,2an open-source toolkit for quality es-\ntimation (Kepler et al., 2019); the entmax pack-\nage3for sparse attention and sparse losses; a\ndataset with post-editor activity data (Góis and\nMartins, 2019) and various datasets for quality es-\ntimation, used at WMT 2018–2021 shared tasks\n(Specia et al., 2021).\n2http://github.com/Unbabel/OpenKiwi\n3https://github.com/deep-spin/entmaxAcknowledgments. This work was supported\nby ERC StG DeepSPIN 758969 with AM as PI.\nReferences\nCorreia, Gonçalo, Vlad Niculae, and André F. T. Martins.\n2019. Adaptively sparse Transformers. In EMNLP .\nFernandes, Patrick, Kayo Yin, Graham Neubig, and André FT\nMartins. 2021. Measuring and increasing context usage in\ncontext-aware machine translation. In ACL.\nGlushkova, Taisiya, Chrysoula Zerva, Ricardo Rei, and An-\ndré FT Martins. 2021. Uncertainty-aware machine trans-\nlation evaluation. 
In Findings of EMNLP .\nGóis, António and André FT Martins. 2019. Translator2vec:\nUnderstanding and representing human post-editors. In\nMT Summit .\nKepler, Fabio, Jonay Trénous, Marcos Treviso, Miguel Vera,\nand André F. T. Martins. 2019. Openkiwi: An open source\nframework for quality estimation. In ACL System Demon-\nstrations .\nMartins, Pedro Henrique, Zita Marinho, and André FT Mar-\ntins. 2020. Sparse text generation. In EMNLP .\nMartins, Pedro Henrique, Zita Marinho, and André FT Mar-\ntins. 2022. ∞-former: Infinite memory transformer. In\nACL.\nPeters, Ben and André FT Martins. 2021. Smoothing and\nshrinking the sparse seq2seq search space. In NAACL .\nPeters, Ben, Vlad Niculae, and André F. T. Martins. 2019.\nSparse sequence-to-sequence models. In ACL.\nRei, Ricardo, Craig Stewart, Ana C Farinha, and Alon Lavie.\n2020. Comet: A neural framework for mt evaluation. In\nProceedings of the 2020 Conference on Empirical Meth-\nods in Natural Language Processing (EMNLP) , pages\n2685–2702.\nRei, Ricardo, Ana C Farinha, Chrysoula Zerva, Daan van\nStigt, Craig Stewart, Pedro Ramos, Taisiya Glushkova,\nAndré FT Martins, and Alon Lavie. 2021. Are references\nreally needed? Unbabel-IST 2021 submission for the met-\nrics shared task. In WMT .\nSpecia, Lucia, Frédéric Blain, Marina Fomicheva, Chrysoula\nZerva, Zhenhao Li, Vishrav Chaudhary, and André Mar-\ntins. 2021. Findings of the wmt 2021 shared task on qual-\nity estimation. In WMT .\nTreviso, Marcos, Nuno M Guerreiro, Ricardo Rei, and An-\ndré FT Martins. 2021. IST-Unbabel 2021 submission\nfor the explainable quality estimation shared task. In\nEvalNLP .\nTreviso, Marcos, António Góis, Patrick Fernandes, Erick\nFonseca, and André FT Martins. 2022. Predicting atten-\ntion sparsity in transformers. In SPNLP Workshop .\nYin, Kayo, Patrick Fernandes, Danish Pruthi, Aditi Chaud-\nhary, André FT Martins, and Graham Neubig. 2021. Do\ncontext-aware translation models pay the right attention?\nInACL.\nZerva, Chrysoula, Daan van Stigt, Ricardo Rei, Ana C Far-\ninha, Pedro Ramos, José GC de Souza, Taisiya Glushkova,\nMiguel Vera, Fabio Kepler, and André FT Martins. 2021.\nIst-unbabel 2021 submission for the quality estimation\nshared task. In WMT .", "main_paper_content": null }
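The entmax decoders mentioned above assign exactly zero probability to low-scoring words, which is what makes the automatic pruning in Figure 1 possible. As a self-contained illustration, the snippet below implements sparsemax (the alpha = 2 member of the entmax family) in NumPy and contrasts it with softmax; the project's released entmax package provides these transformations and their losses for PyTorch, and this NumPy version is only a sketch.

```python
import numpy as np

def sparsemax(z):
    """Euclidean projection of scores z onto the probability simplex;
    many entries come out exactly zero."""
    z = np.asarray(z, dtype=float)
    z_sorted = np.sort(z)[::-1]
    cssv = np.cumsum(z_sorted)
    k = np.arange(1, len(z) + 1)
    support = 1 + k * z_sorted > cssv          # which sorted entries stay active
    k_z = k[support][-1]
    tau = (cssv[support][-1] - 1) / k_z        # threshold
    return np.maximum(z - tau, 0.0)

scores = np.array([3.2, 2.9, 0.5, -1.0, -2.3])   # decoder logits for 5 words
print(sparsemax(scores))                          # [0.65 0.35 0.   0.   0.  ]
print(np.exp(scores) / np.exp(scores).sum())      # softmax keeps every word > 0
```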
{ "decision": "Unknown", "reviews": [] }
0
0
[]
[]
{ "id": "YuxO36DLgv", "year": null, "venue": "EAMT 2022", "pdf_link": "https://aclanthology.org/2022.eamt-1.65.pdf", "forum_link": "https://openreview.net/forum?id=YuxO36DLgv", "arxiv_id": null, "doi": null }
{ "title": "Automatic Video Dubbing at AppTek", "authors": [ "Mattia Di Gangi", "Nick Rossenbach", "Alejandro Pérez", "Parnia Bahar", "Eugen Beck", "Patrick Wilken", "Evgeny Matusov" ], "abstract": "Mattia Di Gangi, Nick Rossenbach, Alejandro Pérez, Parnia Bahar, Eugen Beck, Patrick Wilken, Evgeny Matusov. Proceedings of the 23rd Annual Conference of the European Association for Machine Translation. 2022.", "keywords": [], "raw_extracted_content": "Automatic Video Dubbing at AppTek\nMattia Di Gangi, Nick Rossenbach, Alejandro P ´erez, Parnia Bahar\nEugen Beck, Patrick Wilken, Evgeny Matusov\nAppTek GmbH\nAachen, Germany\[email protected]\nAbstract\nAutomatic Video Dubbing is the process of\nautomatically revoicing a video with a new\nscript to make it accessible to a new audi-\nence. In this paper, we describe AppTek\nDubbing, a product that will be available\nin Q3 2022 to automatically dub a video\ninto a target language. We plan multiple\nreleases of the product with incremental\nfeatures, as well as the possibility to allow\nhuman intervention for increased quality.\n1 Introduction\nVideo dubbing is the activity of revoicing a video\nwhile offering a viewing experience equivalent to\nthe original video. The revoicing usually comes\nwith a new script, and it should reproduce the orig-\ninal emotions, coherent with the body language,\nand be lip synchronized. ¨Oktem et al. (2019) and\nFederico et al. (2020) introduced two automatic\ndubbing systems as a cascade of automatic speech\nrecognition (ASR), machine translation (MT) and\nText-to-Speech (TTS), enhanced with a prosodic\nalignment (PA) component to transfer prosody\nthrough the pipeline. In this project, we aim to\nbuild an AD system in two phases: (1) voice-\nover; (2) full dubbing, and enhance it with human-\nin-the-loop capabilities for a higher quality. The\nproduct will be released in the form of REST APIs\nand a web interface in Q3 of the current year.\nThe pricing will follow a pay-per-use scheme, with\npossible variations according to requested quality\ncontrol or if the script to dub is provided by the\nuser for higher quality dubbing.\n© 2022 The authors. This article is licensed under a Creative\nCommons 3.0 licence, no derivative works, attribution, CC-\nBY-ND.2 Current Features\nOur current system is designed as an enhanced\npipeline of ASR, MT and TTS. Our ASR sys-\ntem includes speaker diarization (the task of de-\ntecting “who speaks when”) so that consecutive\nsegments from the same speaker can be translated\nas coherent units, and each speaker is assigned a\nunique voice. Our MT system is a Transformer-\nbased encoder-decoder, augmented with metadata\nfeatures for style adaptation (Matusov et al., 2020)\nand output length control (Lakew et al., 2019). The\ntranslations are performed from and to subtitle files\nto preserve the timestamps and use them as bound-\naries for the synthesized voices. Additionally, we\nuse speaker-adaptive TTS to reproduce the voice\nfeatures of the original actor for the given seg-\nment in the new language. Finally, the background\nsound, obtained via source separation, is merged\nwith the synthesized voices for the final audio and\nvideo rendering. This system can already trans-\nlate video contents and dub the output videos in a\nvoice-over style.\n3 Voice-over\nV oice-over is a simpler solution than dubbing,\nwhere the original voice’s volume is lowered\ndown, and the new voice is rendered with a nat-\nural volume over it, usually with a delay of some\nframes. 
Our system is already capable of perform-\ning voice-over for some language pairs1but some\naspects can be improved:\nDiarization: speaker diarization can be im-\nproved in the cases when the audio quality is low,\nor one speaker speaks for less than one second.\n1see demo at https://www.apptek.com/post/automatic-\ndubbing-for-user-generated-content\nProsody Alignment: we plan to add prosody\nalignment for transferring the pauses from the\nsource to the target speech, but also the emphasis\napplied to sentences and single words.\nMT Output Length: although in voice-over we\nhave time constraints less strict than in dubbing,\nsome translations do not fit the allocated space, and\nit is important to have a fine-grained control over\nthe MT output length.\n4 Emotional Voice-over\nThe main limitation of the current system is the\nsynthesized voice speaking with a “flat” tone,\nwhich does not match the emotions expressed in\nthe original video. Our research effort for achiev-\ning emotional speech is aimed to release the fea-\nture in 2023 and will affect the whole pipeline:\nEmotion Detection: emotions need to be de-\ntected from the source audio and matched with the\nrecognized text, in order to annotate the latter with\nemotions tags.\nEmotion-aware MT: Expand AppTek’s MT\nsystems to support emotions as part of their meta-\ndata. Additional research effort will focus on let-\nting the MT system annotate the output text with\nemotions at a word level, to be used from our TTS\nsystem.\nEmotion-aware TTS: develop TTS systems\nthat can generate emotional speech for different\nemotions. Such a task can be challenging given\nthe low data availability, particularly for languages\nother than English.\n5 Full Dubbing\nA fully-fledged AD system improves the voice-\nover approach by fully synchronizing audio and\nvideo time. Lip-syncing is a strict requirement that\ncan be achieved using orthogonal technologies:\nIsometric translations: improve the methods to\ngenerate translations under length constraints.\nLips motion: modify the lips’ movement in the\nvideo to match the synthesized speech, building\nover the work described in (Furukawa et al., 2016).\n6 Language Support\nOur initial release will include English-to-Arabic\nand English-to-Spanish. In the following two yearswe plan to expand it to English to many European\nlanguages, including French, German, Italian, Pol-\nish and Ukrainian, plus Russian and Chinese. The\nreverse directions will also be rolled out soon after.\n7 Human in the Loop\nAn AD system can make errors in multiple points\nof its pipeline, and the earlier the errors occur, the\nmore harmful they can be for the final result. For\nthis reason, we plan to let users adding manual\ntranscripts or the final scripts to obtain a higher-\nquality video at the cost of more manual work, us-\ning our internal tool for easy editing parallel data.\n8 Conclusion\nAppTek Dubbing is an ambitious pioneering\nproject that combines MT with other technologies\nto provide a high-quality and localized translated\nvideo, with the goal of making dubbing accessible\nbeyond the movie industry. Intermediate product\nreleases will support simpler re-voicing modes and\na human-in-the-loop approach to allow the users to\ntrade-off costs with quality.\nReferences\nFederico, M., R. Enyedi, R. Barra-Chicote, R. Giri,\nU. Isik, A. Krishnaswamy and H. Sawaf. 2020.\nFrom Speech-to-Speech Translation to Automatic\nDubbing. 
Proceedings of the 17th International Conference on Spoken Language Translation, pp. 257–264.\nFurukawa, S., T. Kato, P. Savkin and S. Morishima. 2016. Video Reshuffling: Automatic Video Dubbing without Prior Knowledge. ACM SIGGRAPH 2016 Posters, pp. 1–2.\nLakew, S. M., M. A. Di Gangi, and M. Federico. 2019. Controlling the Output Length of Neural Machine Translation. 16th International Workshop on Spoken Language Translation.\nMatoušek, J. and J. Vít. 2012. Improving Automatic Dubbing with Subtitle Timing Optimisation Using Video Cut Detection. 2012 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP), pp. 2385–2388.\nMatusov, E., P. Wilken, and C. Herold. 2020. Flexible Customization of a Single Neural Machine Translation System with Multi-dimensional Metadata Inputs. Proceedings of the 14th Conference of the Association for Machine Translation in the Americas (Volume 2: User Track), pp. 204–216.\nÖktem, A., M. Farrús, and A. Bonafonte. 2019. Prosodic Phrase Alignment for Machine Dubbing. Proceedings of Interspeech 2019, pp. 4215–4219.", "main_paper_content": null }
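The voice-over system described above is a cascade: ASR with speaker diarization, MT with output-length control, speaker-adaptive TTS per segment, and remixing with the separated background track. The sketch below mirrors that control flow only; every component is a stub with invented example data, not AppTek's implementation.

```python
# Schematic voice-over cascade; all components are stubs.
def transcribe_with_speakers(audio):
    """ASR + diarization: (speaker, start, end, text) per segment."""
    return [("spk1", 0.0, 2.1, "guten Tag"), ("spk2", 2.3, 4.0, "wie geht es dir")]

def translate(text):
    """MT stub; the real system adds metadata features and length control."""
    return {"guten Tag": "good day", "wie geht es dir": "how are you"}[text]

def synthesize(text, speaker):
    """Speaker-adaptive TTS stub."""
    return f"<wav voice={speaker}: {text}>"

def dub(audio):
    background = "<separated background track>"   # source-separation stub
    voiced = [(start, end, synthesize(translate(text), speaker))
              for speaker, start, end, text in transcribe_with_speakers(audio)]
    return background, voiced                      # to be mixed and rendered

print(dub("<original audio>"))
```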
{ "decision": "Unknown", "reviews": [] }
0
0
[]
[]
{ "id": "-ugkmigKoKB", "year": null, "venue": "EAMT 2009", "pdf_link": "https://aclanthology.org/2009.eamt-1.23.pdf", "forum_link": "https://openreview.net/forum?id=-ugkmigKoKB", "arxiv_id": null, "doi": null }
{ "title": "A Phrase-Based Hidden Semi-Markov Approach to Machine Translation", "authors": [ "Jesús Andrés-Ferrer", "Alfons Juan" ], "abstract": "Jesús Andrés-Ferrer, Alfons Juan. Proceedings of the 13th Annual conference of the European Association for Machine Translation. 2009.", "keywords": [], "raw_extracted_content": "Proceedings of the 13th Annual Conference of the EAMT , pages 168–175,\nBarcelona, May 2009\nAPhrase-BasedHidden Semi-MarkovApproach toMachine Tran slation\nJes´us Andr´es-Ferrer\nInstitutoTecnol´ ogicodeInform´ atica\nUniversidadPolit´ ecnicadeValencia\[email protected]\nDpto. desist. inform´ aticosycomputaci´ on\nUniversidadPolit´ ecnicadeValencia\[email protected]\nAbstract\nStatistically estimated phrase-based mod-\nels promised to further the state-of-the-art,\nhowever, several works reported a perfor-\nmance decrease with respect to heuristi-\ncally estimated phrase-based models. In\nthis work we present a latent variable\nphrase-based translationmodelinspiredby\nthehidden semi-Markov models, that does\nnot degrade the system. Experimental re-\nsults report animprovement over thebase-\nline. Additionally, it is observed that both\nBaum-Welch and Viterbi trainings obtain\nthe very same result, suggesting that most\noftheprobabilitymassisgatheredintoone\nsingle bilingual segmentation.\n1 Introduction\nThe machine translation problem is stated as the\nproblem of translating a sourcesentence, xJ\n1,into\natargetsentence, yI\n1. In accordance with the sta-\ntistical approach to machine translation, the opti-\nmal translation ˆyof a source sentence xis given\nbythe fundamental equation of statistical machine\ntranslation (Brownand others, 1993)\nˆy= arg max\ny∈Y⋆p(x|y)p(y) (1)\nwhere p(x|y)is approximated by an inverse\ntranslation model andp(y)ismodelledwitha lan-\nguage model ; which is usually instanced by a n-\ngramlanguagemodel (ChenandGoodman,1996).\nThe first approaches to model the translation\nprobability in Eq. (1), were based on word dic-\ntionaries. These word-based models, the so-called\nIBMtranslation models (Brownandothers, 1993),\nc/circlecopyrt2009 European Association for Machine Translation.tackled the problem with word-level dictionaries\nplus alignments between words. However, current\nsystems model the inverse conditional probability\nin Eq. (1) using phrase dictionaries . A phrase is\nunderstood here as any sequence of source or tar-\nget words. This phrase-based methodology stores\nspecific sequences of target words ( target phrase )\ninto which a sequence of source words ( source\nphrase) istranslated.\nHowever, a key concept of this approach is the\nprocedure through which these phrase pairs are\ninferred. The common approach consists in us-\ningtheIBMalignment models (Brownand others,\n1993) to obtain a symmetrised alignment matrix\nfrom which coherent phrases are extracted (Och\nand Ney, 2004). Then, a simple count normalisa-\ntion is carried out in order to obtain a conditional\nphrase dictionary.\nAlternatively, some approaches infer the phrase\ndictionaries statistically. Forinstance, ajointprob-\nability model for phrase-based estimation is pro-\nposed in (Marcu and Wong, 2002). In that work,\nallpossiblesegmentationswereextractedusingthe\nEMalgorithm(Dempsteretal.,1977),withoutany\nmatrix alignment constraint, in contrast to the ap-\nproach followed in (Och and Ney, 2004). 
Based\non this work, another work (Alexandra Birch and\nKoehn, 2006), constrained the EM to only con-\nsider phrases which agree with the alignment ma-\ntrix, thus reducing the size of the phrase dictionar-\nies (or tables).\nA possible drawback of the above phrase-\nbased models is that they are not conditional, but\njoint models that require a re-normalisation post-\nprocessing in order to obtain a conditional model.\nHowever, a generative conditional phrase-based\nmodel presented in (DeNero et al., 2006) showed\naworsening of phrase dictionaries.\n168\nIn this work, we propose a conditional phrase-\nbasedhiddensemi-Markovmodel(PBHSMM) that\nimproves the phrase-dictionary estimation. Al-\nthough, theimprovements arenotimpressive, bare\nin mind that the main property of this model is its\nclear theoretical foundation, since it is based on\na well-known statistical modelling technique, the\nso-called HSMM(Ostendorf et al.,1996). Thisal-\nlow us to include several statistical improvements\ninto future revisions of the model (see section 5).\nA previous work (Andr´ es-Ferrer and Juan-C´ ıscar,\n2007) already presented a conditional phrase-\nbasedhiddenMarkovmodel(HMM).Howeverour\nmodel presents significant improvements, both in\ntheory and practice.\nThemodelisdetailedinsection2,whileitsEM-\nbasedtrainingalgorithmsareanalysedinsection3.\nExperiments are reported in section 4. Finally,\nconcluding remarks are gathered in section 5.\n2 The model\nIn this section, we present our phrase-based\nhidden semi-Markov model (PBHSMM) for ma-\nchine translation. Hidden semi-Markov models\n(HSMMs) (Ostendorf et al., 1996) are a varia-\ntion on HMM that allow the emission of segments\nxj+l−1\njat each state instead of constraining the\nemission to one element xjas HMM do. There-\nfore,theprobabilityofemittinganobjectsequence\nxj+l−1\njinanystatedepends onthesegment length\nl. Note that in hidden Markov models (HMMs),\nthe probability of emitting a segment of length l\nstaying in the same state q, can only be simulated\nby transitions to the same state q. This yields the\nexponential decaying length probability expressed\nasfollows\np(l|q) = [p( q|q)]l−1, (2)\nwhich isnot appropriate for manysituations.\nThe HSMM model introduced in this section\nis clearly inspired in the phrase-based translation\nmodels (Koehn et al., 2003). The idea behind this\nmodelistoprovidedawell-definedmonotonicfor-\nmalism that, while remaining close to the phrase-\nbased models, explicitly introduces the statistical\ndependencies needed to define a phrased mono-\ntonic translation process. Although the mono-\ntonic constraint is an obvious disadvantage for\nthis primer HSMM translation model, it can be\nextended to non-monotonic processes. However,these extensions lay far beyond the aim of this\nwork.\nAlbeit there are several ways to formalise a\nHSMM, we advocate for a similar formalisation\nof that found in (Murhpy, 2007). Let x∈ X⋆\nbe the source sentence and y∈ Y⋆the target\nsentence, then we start by decomposing the con-\nditional translation probability, p(x|y,I,J). We\nassume that the monotonic translation process has\nbeen carried out from left to right in sequences\nof words or phrases. For this purpose, both sen-\ntences should be segmented into the same amount\nof phrases. Figure 1, depicts an example of a pos-\nsible monotonic bilingual segmentation in which\nthe source sentence has a length of 9words, while\nthe target sentence is made up of 11words. 
Note\nthat each bilingual phrase makes up a concept; for\ninstance c1,c2,c3andc4are concepts in Figure 1.\nTorepresent thesegmentation process, weusetwo\nsegmentation variables for both source, l, and tar-\nget,m, sentences.\nThe target segmentation variable mstores each\ntarget segment length at the position at which\nthe segment begins. Therefore, if the target seg-\nment length variable mhas a value greater than\n0at position i, then a segment with length mi\nstarts at this position i. For instance, the target\nsegmentation represented in Figure 1 is given by\nm=m11\n1= (3,0,0,3,0,0,2,0,3,0,0). Note\nthat values for the segment length variable such\nas,m= (3,0,0,3,0,0,2,0,1,0,0)orm=\n(3,0,0,3,0,0,1,0,3,0,0),are invalid . It is also\nworth noting that the domain of the segmenta-\ntion ranges among all the possible segmentation\nlengths.\nThe source segmentation variable lrepresents\nthe length of each source segment at the position\nat which its corresponding target segment ends. If\nthe source segment length variable lhas a value\ngreater than 0at position i; then the length of the\nsource segment corresponding to the target seg-\nment that starts at position i, isli. For instance,\nin Figure 1 the source segment length variable is\nl=l11\n1= (3,0,0,2,0,0,3,0,1,0,0).\nGivenatargetsegmentationvariable,say m,we\ndefine itsprefix counterpart, ¯mas follows\n¯mi=i/summationdisplay\nk=1mki= 0,1,... ,I . (3)\nIn Figure 1, the prefix segments lengths are\n¯m11\n0= (0,3,3,3,6,6,6,8,8,11,11,11) and169\nc1 c2 c3 c4x1 x2 x3 x4 x5 x6 x7 x8 x9\ny1 y2 y3 y4 y5 y6 y7 y8 y9 y10 y11\nFigure1: Agenerative exampleofthephrase-based hidden se mi-Markov modelformachinetranslation.\n¯l11\n0= (0,3,3,3,5,5,5,8,8,9,9,9), for target and\nsource segment length variables respectively.\nMathematically, weexpress theidea depicted in\nFigure 1 unhiding the former segmentation length\nvariables\np(x|y) =/summationdisplay\nl/summationdisplay\nmp(x,l,m|y,I,J).(4)\nThe completed model in Eq. (4) is decomposed as\nfollows\np(x,l,m|y):=p(m)p(l|m)p(x|m,l,y)(5)\nwhere we have dropped the dependence on yfor\nthe segment variables. Note that for clarity we\nhave omitted the dependency on the lengths Jand\nIin all probabilities; and we will henceforth pro-\nceed this way.\nBothlengthprobabilitiesinEq.(5)arebeingde-\ncomposed left-to-right. However, in order to keep\nthe training as fast as possible, a special decom-\nposition of such probabilities is going to be made.\nWedetail herethedecomposition ofthetarget seg-\nmentlengthprobability model,omittingdetailsfor\nthe remaining random variables.\nThe probability of the target segment length\nvariable is given by\np(m) =I/productdisplay\ni=1p(mi|mi−1\n1).(6)\nAt first stage, we had assumed that each partial\nprobability in Eq. (6) does not depend neither on\ny,nor onboth lengths ( IandJ). Hence, the prob-\nability p(mi|mi−1\n1)ismodelled as follows\np(mi|mi−1\n1) =/braceleftBigg\np(mi) ¯mi−1+ 1 = i, m i/ne}ationslash= 0\n1 ¯ mi−1+ 1/ne}ationslash=i, m i= 0\n(7)\nFinally the segment length probability is ex-\npressed as follows\np(m) :=/productdisplay\ni∈Z(m)1/productdisplay\ni/negationslash∈Z(m)p(mi),(8)where Z(m)or simply Zstands for the set of po-\nsitions tfor which mtis0. For instance, in the\nexample in Figure 1, Zis instanced to Z(m) =\n{2,3,5,6,8,10,11}.\nProvided that one of the two products in Eq. (8)\nsimplifies to 1, the segment length probability is\nexpressed as\np(m) :=/productdisplay\ni/negationslash∈Zp(mi). 
Since explicitly showing these details makes the discourse awkward, we will omit them. Therefore, we will use equations resembling the following

p(m) := \prod_{t} p(m_t) ,   (10)

where the constraint t \notin Z(m) is left implicit, and we have changed the index i to t to summarise the preceding simplification. This approach resembles the state probability decomposition in HSMMs (Ostendorf et al., 1996).

Similarly to the target segment length model, the source segment length yields the following decomposition

p(l \mid m) := \prod_{t} p(l_t \mid m_t) .   (11)

Finally, knowing the segment length variables, the emission probability is also decomposed left-to-right as follows

p(x \mid l, m, y) := \prod_{t} p(x(t) \mid y(t)) ,   (12)

where y(t) stands for y_t^{t+m_t-1} and x(t) stands for x_{\bar{l}_{t-1}+1}^{\bar{l}_t}; i.e., the t-th "emitted" source phrase and its respective t-th target phrase. Note that since t is a boundary of a target segment, \bar{l}_t is equal to \bar{l}_{t-1} + l_t.

Summarising, the proposed (completed) conditional translation model is defined by

p(x, l, m \mid y) := \prod_{t} p(m_t) \, p(l_t \mid m_t) \, p(x(t) \mid y(t))   (13)

Then, the incomplete model introduced in Eq. (4) is parameterised as follows

p(x \mid y) := \sum_{l} \sum_{m} \prod_{t} p(m_t) \, p(l_t \mid m_t) \, p(x(t) \mid y(t))   (14)

with the following parameter set \theta

\theta = \{ p(m), \, p(l \mid m), \, p(u \mid v) \}   (15)

where l and m are positive integers, u is a source phrase, i.e., u \in X^*, and v is a target phrase, v \in Y^*.

It is important to smooth the phrase translation probabilities to avoid over-training. To do so, we use IBM model 1 (Brown et al., 1993) as follows

\tilde{p}(u \mid v) = (1 - \epsilon) \, p(u \mid v) + \epsilon \, p_{IBM1}(u \mid v)   (16)

Note that in this model, each target phrase y(t) is understood as the "state" of an HSMM in which the source phrase x(t) is emitted. Obviously, this is not a pure HSMM in which we have a latent state variable. The omission of this latent variable is more an assumption than a requirement. Recall that in Figure 1 we have depicted each bilingual phrase pair being emitted by a concept. Therefore, we could theoretically model this latent variable as well. This inclusion would not significantly change the algorithms proposed here. However, this idea is left as future work, since it is first necessary to check whether this initial model degrades system performance, as some similar works have previously reported (DeNero et al., 2006; Marcu and Wong, 2002).

3 The training

Since the proposed PBHSMM assumes that the segment length variables are not given in the training data, an approximate inference algorithm such as EM (Dempster et al., 1977) is needed. We omit here the EM derivations, which lead to the well-known Baum-Welch algorithm (Rabiner, 1990). This algorithm follows the iterative scheme of all EM instantiations. First, we guess an adequate parameter set, \theta^{(0)}, as a starting point. Then, we compute the forward, \alpha^{(0)}_{tl}(x, y), and backward, \beta^{(0)}_{tl}(x, y), recurrences for each sample. These recurrences are used to compute the fractional counts \gamma^{(0)}_{t l t' l'}(x, y); afterwards, a new \theta^{(1)} is estimated from those fractional counts. The re-estimated parameter set \theta^{(1)} can be used again to re-compute the recurrences, defining an iterative process that guarantees that the log-likelihood does not decrease from one iteration to the next. This process goes on until either convergence or a maximum number of iterations is reached.
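Schematically, the training loop just described can be sketched as follows (our illustration, not the authors' implementation; the per-sample routines forward, backward, fractional_counts and reestimate are passed in as arguments and correspond to the procedures derived in Sections 3.1-3.4 below):

import math

def train_pbhsmm(corpus, theta0, forward, backward, fractional_counts, reestimate,
                 max_iters=10, tol=1e-4):
    # corpus: iterable of (x, y) sentence pairs; theta0: initial parameter set.
    theta, prev_ll = theta0, float("-inf")
    for _ in range(max_iters):
        log_likelihood, all_counts = 0.0, []
        for x, y in corpus:
            alpha = forward(x, y, theta)                        # Section 3.1
            beta = backward(x, y, theta)                        # Section 3.2
            log_likelihood += math.log(alpha[len(y)][len(x)])   # p_theta(x|y), Eq. (22)
            all_counts.append(fractional_counts(x, y, alpha, beta, theta))  # Section 3.3
        theta = reestimate(all_counts)                          # Section 3.4
        if log_likelihood - prev_ll < tol:                      # EM never decreases it
            break
        prev_ll = log_likelihood
    return theta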
3.1 Forward recurrence

The forward recurrence \alpha_{tl} is defined as the prefix probability

\alpha_{tl} = \alpha_{tl}(x, y) = p_\theta(x_1^l, \bar{l}_t = l, \bar{m}_t = t \mid y)   (17)

where \bar{l}_t = l and \bar{m}_t = t mean that a source and a target phrase end/start at position l of the input and t of the output. This prefix probability is recursively computed as follows

\alpha_{tl} = \begin{cases} 1 & t = 0, \; l = 0 \\ \sum_{t'} \sum_{l'} \alpha_{t'l'} \, p(l - l', t - t') \, p(x_{l'+1}^{l} \mid y_{t'+1}^{t}) & 0 < t \leq I, \; 0 < l \leq J \\ 0 & \text{otherwise} \end{cases}   (18)

where the sum over t' ranges from 0 to t-1 and likewise the sum over l' ranges from 0 to l-1, and where we have used p(l - l', t - t') to denote the product of length probabilities

p(l - l', t - t') = p(t - t') \, p(l - l' \mid t - t') ,   (19)

in order to compress notation.

3.2 Backward recurrence

The backward recurrence \beta_{tl} is defined as the following suffix probability

\beta_{tl} = \beta_{tl}(x, y) = p_\theta(x_{l+1}^{J} \mid \bar{l}_t = l, \bar{m}_t = t, y)   (20)

where \bar{l}_t = l and \bar{m}_t = t mean that a source and a target phrase ended/started at position l of the input and t of the output. This suffix probability is recursively computed as follows

\beta_{tl} = \begin{cases} 1 & t = I, \; l = J \\ \sum_{t'} \sum_{l'} \beta_{t'l'} \, p(l' - l, t' - t) \, p(x_{l+1}^{l'} \mid y_{t+1}^{t'}) & 0 \leq t < I, \; 0 \leq l < J \\ 0 & \text{otherwise} \end{cases}   (21)

where the sum over t' ranges from t+1 to I and likewise the sum over l' ranges from l+1 to J.

These two recurrences yield the probability of a given pair of sentences,

p_\theta(x \mid y) = \alpha_{IJ} = \beta_{00} .   (22)

Both the forward and the backward recurrence require a matrix of size O(IJ). Computing these recurrences requires a time complexity of O(I^2 J^2); however, it can be reduced to O(I J M^2) by defining a maximum phrase length M.

3.3 Fractional counts

Using the previously defined recursions, we can compute the probability of segmenting a given sample at the source positions (l, l') and at the target positions (t, t'):

\gamma_{t l t' l'} = \frac{\alpha_{tl} \, p(l' - l, t' - t) \, p(x_{l+1}^{l'} \mid y_{t+1}^{t'}) \, \beta_{t'l'}}{p_\theta(x \mid y)}   (23)

This fractional count is very helpful throughout the Baum-Welch training.

3.4 Re-estimation

Once we have computed the recurrences and the fractional counts, the phrase translation probabilities are re-estimated as follows

p(u \mid v) = \frac{N(u, v)}{\sum_{u'} N(u', v)}   (24)

with

N(u, v) = \sum_{n} \sum_{l < l'} \sum_{t < t'} \gamma_{n t l t' l'} \, \delta(x_{l+1}^{l'}, u) \, \delta(y_{t+1}^{t'}, v)   (25)

where \delta(a, b) is the Kronecker delta function, which is 1 if a = b and 0 otherwise.

The target phrase length probabilities are estimated as follows

p(m) = \frac{N(m)}{\sum_{m'} N(m')}   (26)

with

N(m) = \sum_{n} \sum_{l < l'} \sum_{t} \gamma_{n, t, l, (t+m), l'}   (27)

Finally, the source phrase length probabilities are re-estimated by

p(l \mid m) = \frac{N(l, m)}{\sum_{l'} N(l', m)}   (28)

with

N(l, m) = \sum_{n} \sum_{l'} \sum_{t} \gamma_{n, t, l', (t+m), (l'+l)}   (29)

where l denotes a source phrase length and m a target phrase length.

An alternative training algorithm is obtained by computing the maximum segmentation instead of the recurrences. This training, the so-called Viterbi training (Rabiner, 1990), is an iterative training process as well. Each iteration comprises two stages: computing the maximum segmentation and re-estimating the parameters.
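Before the Viterbi recursion itself is given below, the following sketch (ours; p_m, p_l and p_phr stand for assumed lookup functions for p(m), p(l | m) and p(u | v), and M is the maximum phrase length) spells out the forward pass of Eq. (18) and the fractional counts of Eq. (23); the nested loops over phrase lengths make the O(IJM^2) cost explicit.

def forward(x, y, p_m, p_l, p_phr, M):
    # x: source sentence (length J), y: target sentence (length I).
    J, I = len(x), len(y)
    alpha = [[0.0] * (J + 1) for _ in range(I + 1)]
    alpha[0][0] = 1.0
    for t in range(1, I + 1):
        for l in range(1, J + 1):
            total = 0.0
            for dt in range(1, min(M, t) + 1):        # target phrase length
                for dl in range(1, min(M, l) + 1):    # source phrase length
                    tp, lp = t - dt, l - dl
                    phr = p_phr(tuple(x[lp:l]), tuple(y[tp:t]))  # p(x_{l'+1..l} | y_{t'+1..t})
                    total += alpha[tp][lp] * p_m(dt) * p_l(dl, dt) * phr
            alpha[t][l] = total
    return alpha

def fractional_counts(x, y, alpha, beta, p_m, p_l, p_phr, M):
    # gamma_{t,l,t',l'} of Eq. (23), keyed by the four boundary positions.
    J, I = len(x), len(y)
    norm = alpha[I][J]                                # p_theta(x | y), Eq. (22)
    gamma = {}
    if norm == 0.0:
        return gamma
    for t in range(I):
        for l in range(J):
            for t2 in range(t + 1, min(t + M, I) + 1):
                for l2 in range(l + 1, min(l + M, J) + 1):
                    num = (alpha[t][l] * p_m(t2 - t) * p_l(l2 - l, t2 - t)
                           * p_phr(tuple(x[l:l2]), tuple(y[t:t2])) * beta[t2][l2])
                    gamma[(t, l, t2, l2)] = num / norm
    return gamma

# Replacing the sums over (dt, dl) with a max, and keeping back-pointers,
# turns this forward pass into the Viterbi recursion of Eq. (30) below.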
The Viterbi recursion is used to obtain the maximum segmentation:

\delta_{tl} = \begin{cases} 1 & t = 0, \; l = 0 \\ \max_{t', l'} \{ \delta_{t'l'} \, p(l - l', t - t') \, p(x_{l'+1}^{l} \mid y_{t'+1}^{t}) \} & 0 < t \leq I, \; 0 < l \leq J \\ 0 & \text{otherwise} \end{cases}   (30)

A traceback of the decisions made to compute \delta_{IJ} provides the maximum segmentation \hat{m} and \hat{l}.

Afterwards, the re-estimation equations are similar to Eqs. (24), (26) and (28), but in this case the counts N(u, v), N(m) and N(l, m) are actual counts, since the latent segmentation is assumed to be the maximum segmentation.

4 Experiments

The aim of the experimentation is to see how the proposed method and algorithms improve the quality of any phrase dictionary given as input. To this end, we have tested our algorithm on two corpora: Europarl-10 and Europarl-20. The former comprises all the sentences from the English-to-Spanish part of Europarl (version 3) (Koehn, 2005) with a length of 10 words or less. The latter is made up of all the English-to-Spanish Europarl sentences with a length of 20 words or less. For both corpora we have randomly selected 5,000 sentences for testing the algorithms. Note that we have constrained the training length of the standard Europarl because of the time required for training the proposed PBHSMM. Table 1 gathers some basic statistics of the training partition, and Table 2 is the counterpart for testing.

  Training            Europarl-10           Europarl-20
                      En        Sp          En        Sp
  sentences               76,996                306,897
  avg. length         7.01      7.0         12.6      12.7
  running words       546K      540K        3.86M     3.91M
  voc. size           16K       22K         37K       58K
Table 1: Basic statistics of the training sets.

  Test                Europarl-10           Europarl-20
                      En        Sp          En        Sp
  sentences                5,000                 5,000
  avg. length         7.2       7.0         12.8      13.0
  running words       35.8K     35.2K       62.1K     63.0K
  ppl (3-gram)        53.4      64.4        77.6      86.8
Table 2: Basic statistics of the test sets.

All the experiments were carried out using a 4-gram language model computed with the standard tool SRILM (Stolcke, 2002) and modified Kneser-Ney smoothing. To define a translation baseline, we compare our results with Moses (Koehn et al., 2007), constraining the model to only use a phrase-based inverse model.

For evaluating the quality of the translations we have used two error measures: the bilingual evaluation understudy (BLEU) (Papineni et al., 2001) and the translation edit rate (TER) (Snover et al., 2006).

The proposed training algorithms need an initial guess. To this end, we have computed the IBM word-model alignments with GIZA++ (Och and Ney, 2003) for both translation directions. Then, we have applied the symmetrisation heuristic (Och and Ney, 2004) and extracted all the consistent phrases (Och and Ney, 2004). Afterwards, we have computed our initial guess by counting the occurrences of each bilingual phrase and then normalising the counts. Instead of directly using the Moses system to do this work, we have implemented our own version of this process.

Since the training algorithm strongly depends on the maximum phrase length, for most of the experimentation we have limited it to 4. In Table 3, the results obtained for both translation directions are summarised for Europarl-10. Surprisingly, Viterbi training obtains almost the same results as the Baum-Welch training, probably because most of the sentences accumulate all the probability mass in just one possible segmentation. Maybe that is why our algorithm is not able to obtain a large improvement with respect to the initialisation.
                        En→Sp               Sp→En
  Iterations          TER     BLEU        TER     BLEU
  Moses p(x|y) baseline
                      50.0    32.9        47.2    32.7
  Baum-Welch
  0                   51.4    31.9        48.2    33.2
  1                   51.4    31.9        47.9    33.1
  2                   51.5    31.9        47.9    33.1
  4                   51.2    32.6        48.1    33.1
  8                   51.4    31.8        48.0    33.0
  Viterbi
  0                   51.4    31.9        48.2    33.2
  1                   51.4    31.9        47.9    33.1
  2                   51.1    32.6        48.0    33.2
  4                   51.2    32.6        48.0    33.0
  8                   51.4    31.8        48.0    33.0
Table 3: Results obtained with the Europarl-10 corpus with a maximum phrase length of 4.

Note that since the proposed system and Moses use different phrase tables, the comparison of these two numbers is not fair. Therefore, the Moses baseline is only given as a reference and not as a system to improve upon. The important question is whether the model produces an improvement with respect to the initialisation, i.e., the result at iteration 0. Note that this corpus is small, and although its complexity allows us to check some PBHSMM properties, we cannot draw further conclusions from it.

On the other hand, Table 4 summarises the results obtained with Europarl-20. This table only reports results for the Viterbi training since, again, Baum-Welch training has no advantage with respect to it. Typically, 4 iterations suffice to avoid over-training and to maximise system performance. The results show a minor improvement over the initialisation. Although the improvement is small, its magnitude is similar to the improvement obtained when extending the maximum phrase length, as shown in Table 5. For instance, extending the maximum phrase length from 4 to 5 yields the same improvement as performing 4 Viterbi iterations with a maximum phrase length of 4. In most of the cases the Viterbi training improves the translation quality.

Although in most cases the training does not yield a significant improvement over the baseline, in practice the quality of the translations is increased by the training. In Table 6 we have selected some translation examples. A detailed analysis of the system translations suggests that most cases belong to cases A or B.

Case A: training improves the evaluation measures
  REF.    I sincerely believe that the aim of the present directive is a step in the right direction .
  IT. 0   I am convinced that the aim of this directive is a step in the right direction .
  IT. 4   I sincerely believe that the aim of the directive before us is a step in the right direction .
  MOSES   I sincerely believe that the aim behind the directive is also a step in the right direction .
Case B: training improves the translation but not the evaluation measures
  REF.    Mr president , i wish to endorse mr posselt 's comments .
  IT. 0   Mr president , i support for to our .
  IT. 4   Mr president , i join in good faith to our colleague , mr posselt .
  MOSES   mr president , i would like to join in good faith in the words of our colleague , mr rbig .
Case C: training degrades the evaluation measures
  REF.    BSE has already cost the uk gbp 1.5 billion in lost exports .
  IT. 0   BSE has cost the uk 1.5 million losses exports .
  IT. 4   BSE already has cost in the uk alone 1500 million pounds into loss of exports .
  MOSES   BSE has already claimed to britain 1500 million pounds into loss of trade .
Case D: other cases
  REF.    Are there any objections to amendment nos 3 and 14 being considered as null and void from now on ?
  IT. 0   Are there any objections to give amendments nos 3 and 14 .
  IT. 4   Are there any objections to adopt amendments nos 3 and 14 ?
  MOSES   Are there any objections to consider amendments nos 3 and 14 ?
Table 6: Some translation examples (Sp→En) before and after training the phrase table for 4 iterations with Viterbi training and a maximum phrase length of 4.

                        En→Sp               Sp→En
  Iterations          TER     BLEU        TER     BLEU
  Moses p(x|y) baseline
                      57.3    23.5        55.1    24.10
  Viterbi
  0                   57.7    25.0        56.0    26.0
  1                   57.7    25.1        55.8    26.4
  2                   57.7    25.1        55.9    26.4
  4                   57.7    25.2        55.8    26.5
  8                   57.7    25.2        55.8    26.5
Table 4: Results obtained with the Europarl-20 corpus with a maximum phrase length of 4.

                        En→Sp               Sp→En
  Iterations          TER     BLEU        TER     BLEU
  Maximum phrase length 2
  0                   60.5    21.2        57.9    23.5
  4                   60.5    21.2        58.1    23.5
  Maximum phrase length 3
  0                   58.6    24.1        56.1    25.7
  4                   58.3    24.1        56.4    25.5
  Maximum phrase length 4
  0                   57.7    25.0        56.0    26.0
  4                   57.7    25.1        55.8    26.5
  Maximum phrase length 5
  0                   57.7    25.1        55.8    26.6
  4                   57.4    25.3        55.3    26.9
  Maximum phrase length 6
  0                   57.7    25.4        55.9    26.6
  4                   57.3    25.6        55.4    26.8
Table 5: Results obtained with the Europarl-20 corpus for several maximum phrase lengths.

5 Conclusions and Future work

We have presented a phrase-based hidden semi-Markov model for machine translation inspired by both phrase-based models and classical hidden semi-Markov models. The idea behind this model is to provide a well-defined monotonic formalism that explicitly introduces the statistical dependencies needed to define the monotonic translation process with theoretical correctness and without moving away from the phrase-based models.

A detailed practical analysis showed a slight improvement from applying the estimation algorithms with respect to the baseline. Surprisingly, we have observed that both trainings, Viterbi and Baum-Welch, show the same practical behaviour. Therefore, we recommend the use of the fastest: the Viterbi training. However, we have not used the proposed PBHSMM as a feature inside a log-linear model, as most current state-of-the-art systems do. We leave this comparison as future work.

As discussed in section 2, one outstanding and simple extension to the proposed model is to unhide the concept variable by having a mixture of phrase-based dictionaries, p(x \mid y, c). Actually, the requirements of this modification would not significantly affect the proposed estimation algorithms. We are already extending the model in this direction.

Finally, the most undesirable property of the proposed model is its monotonicity at the phrase level. Although the monotonic constraint is a clear disadvantage of this initial PBHSMM translation model, it can be extended to non-monotonic processes. However, we leave these extensions as future work.

Acknowledgement

Work partially supported by the Spanish research programme Consolider Ingenio 2010: MIPRCV (CSD2007-00018), by the EC (FEDER), the Spanish MEC under grant TIN2006-15694-CO2-01 and the Valencian "Conselleria d'Empresa, Universitat i Ciència" under grant CTBPRA/2005/004.

References

Birch, Alexandra, Chris Callison-Burch, Miles Osborne and Philipp Koehn. 2006. Constraining the phrase-based, joint probability statistical translation model. In Proceedings of the NAACL 2006 Workshop on Statistical Machine Translation.

Andrés-Ferrer, J. and A. Juan-Císcar. 2007. A phrase-based hidden Markov model approach to machine translation. In Proceedings of New Approaches to Machine Translation, pages 57-62, January.

Brown, P. F. et al. 1993.
The Mathematics of Statistical Machine Translation: Parameter Estimation. Computational Linguistics, 19(2):263-311.

Chen, S. F. and J. Goodman. 1996. An empirical study of smoothing techniques for language modeling. In Proc. of ACL'96, pages 310-318, Morristown, NJ, USA, June. Association for Computational Linguistics.

Dempster, A. P., N. M. Laird, and D. B. Rubin. 1977. Maximum likelihood from incomplete data via the EM algorithm. J. Royal Statist. Soc. Ser. B, 39(1):1-22.

DeNero, J., D. Gillick, J. Zhang, and D. Klein. 2006. Why generative phrase models underperform surface heuristics. In Proceedings of the Workshop on Statistical Machine Translation, pages 31-38, New York City, June. Association for Computational Linguistics.

Koehn, P. et al. 2007. Moses: Open source toolkit for statistical machine translation. In Proc. of ACL'07: Demo and Poster Sessions, pages 177-180, Morristown, NJ, USA, June. Association for Computational Linguistics.

Koehn, P., F. J. Och, and D. Marcu. 2003. Statistical phrase-based translation. In Proc. of NAACL'03, pages 48-54, Morristown, NJ, USA. Association for Computational Linguistics.

Koehn, P. 2005. Europarl: A parallel corpus for statistical machine translation. In Proc. of the MT Summit X, pages 79-86, September.

Marcu, Daniel and William Wong. 2002. A phrase-based, joint probability model for statistical machine translation. In Proceedings of the Conference on Empirical Methods in Natural Language Processing (EMNLP-2002), Philadelphia, PA, July.

Murphy, Kevin P. 2007. Hidden semi-Markov Models (HSMMs). Technical report, University of British Columbia.

Och, F. J. and H. Ney. 2003. A systematic comparison of various statistical alignment models. Computational Linguistics, 29(1):19-51.

Och, F. J. and H. Ney. 2004. The alignment template approach to statistical machine translation. Computational Linguistics, 30(4):417-449.

Ostendorf, M., V. Digalakis, and O. A. Kimball. 1996. From HMMs to segment models: a unified view of stochastic modeling for speech recognition. IEEE Trans. on Speech and Audio Processing, 4:360-378.

Papineni, K., S. Roukos, T. Ward, and W. Zhu. 2001. BLEU: a Method for Automatic Evaluation of Machine Translation. Technical Report RC22176, Thomas J. Watson Research Center.

Rabiner, Lawrence R. 1990. A tutorial on hidden Markov models and selected applications in speech recognition. pages 267-296.

Snover, M. et al. 2006. A study of translation edit rate with targeted human annotation. In Proc. of AMTA'06, pages 223-231, Boston, Massachusetts, USA, August. Association for Machine Translation in the Americas.

Stolcke, A. 2002. SRILM - an extensible language modeling toolkit. In Proc. of ICSLP'02, pages 901-904, September.", "main_paper_content": null }
{ "decision": "Unknown", "reviews": [] }
0
0
[]
[]
{ "id": "sZJ-6KneLjt", "year": null, "venue": "EAMT 2012", "pdf_link": "https://aclanthology.org/2012.eamt-1.41.pdf", "forum_link": "https://openreview.net/forum?id=sZJ-6KneLjt", "arxiv_id": null, "doi": null }
{ "title": "Domain Adaptation in SMT of User-Generated Forum Content Guided by OOV Word Reduction: Normalization and/or Supplementary Data", "authors": [ "Pratyush Banerjee", "Sudip Kumar Naskar", "Johann Roturier", "Andy Way", "Josef van Genabith" ], "abstract": null, "keywords": [], "raw_extracted_content": "Domain Adaptation in SMT of User-Generated Forum Content Guided by\nOOV Word Reduction: Normalization and/or Supplementary Data?\nPratyush Banerjee, Sudip Kumar Naskar, Johann Roturier1, Andy Way2, Josef van Genabith\nCNGL, School of Computing, Dublin City University, Dublin, Ireland\nfpbanerjee,snaskar,[email protected]\n1Symantec Limited, Dublin, Ireland\njohann [email protected]\n2Applied Language Solutions, Delph, UK\[email protected]\nAbstract\nThis paper reports a set of domain adap-\ntation techniques for improving Statisti-\ncal Machine Translation (SMT) for user-\ngenerated web forum content. We inves-\ntigate both normalization and supplemen-\ntary training data acquisition techniques,\nall guided by the aim of reducing the num-\nber of Out-Of-V ocabulary (OOV) items in\nthe target language with respect to the\ntraining data. We classify OOVs into a set\nof types, and address each through ded-\nicated normalization and/or supplemen-\ntary training material selection-based ap-\nproaches. We investigate the effect of\nthese methods both in an additive as well\nas a contrastive scenario. Our findings\nshow that (i) normalization and supple-\nmentary training material techniques can\nbe complementary, (ii) for general forum\ndata, fully automatic supplementary train-\ning data acquisition can perform as well\nor sometimes better than semi-automatic\nnormalization (although tackling different\ntypes of OOVs) and (iii) for very noisy\ndata, normalization really pays off.\n1 Introduction\nWeb-forums are rich sources of user-generated\ncontent on the web. The increasing popularity of\ntechnical forums have motivated major IT compa-\nnies like Symantec to create and support forums\naround their products and services. For individual\nusers or larger customers, such forums provide an\neasy source of information and a viable alternative\nto traditional customer service options. Being a\nc\r2012 European Association for Machine Translation.multinational company, Symantec hosts its forums\nin different languages (English, German, French\netc), but currently the content is siloed in each\nlanguage. Clearly, translating the forums to make\ninformation available across languages would be\nbeneficial for Symantec as well as its multilingual\ncustomer base. This forms the primary motivation\nof techniques presented here.\nDespite growing interest in translation of forum\ndata (Flournoy and Rueppel, 2010), to date, sur-\nprisingly little research has actually focussed on\nforum data translation (Roturier and Bensadoun,\n2011). Compared to professionally edited text,\nuser-generated forum data is often more noisy, tak-\ning some liberty with commonly established gram-\nmar, punctuation and spelling norms. For our re-\nsearch, we use translation memory (TM) data from\nSymantec, which is part of their corporate doc-\numentation, professionally edited and generally\nconforming to the Symantec controlled language\nguidelines. On the other hand, our target data (fo-\nrum) is only lightly moderated and does not con-\nform to any publication quality guidelines. Hence\ndespite being from the same IT domain, there is a\nsignificant difference in style between the training\nand the test data. 
In this paper, we focus our efforts on systematically reducing this difference through the use of both normalization and supplementary training material acquisition techniques.

Our research was conducted on the English-to-German (En-De) and English-to-French (En-Fr) language directions. To identify the differences between the TM and forum data, we focus on the OOV words in the English forum data with respect to the source side (English) of the TM data. We classify OOVs into different categories which require independent attention. In order to optimally handle each individual category, different techniques were developed to make the forum-based test sets better resemble the training data. For the first category - containing tokens such as URLs, paths, registry entries, and memory addresses - regular expressions were used to capture the tokens and replace them with unique place-holders. The second category included valid words inadvertently fused by punctuation characters (especially '.'), which required a training data-guided splitting technique. The third category, comprising spelling errors, was handled by an off-the-shelf automatic spell checker. Additionally, the spell checker was trained with 'in-domain' data to make it aware of domain-specific terms and improve the quality of spell checking. For the fourth category of OOVs - valid words not occurring in the training data - various supplementary 'out-of-domain' bitext training data were automatically searched. For every OOV in this category, parallel sentence pairs from different 'out-of-domain' data were added to the 'in-domain' training data to improve the coverage of the translation models.

While improving translation quality by reducing OOVs is the primary objective of our research, we are particularly interested in the effect of spell checking on the translation quality of forum data with various degrees of noise. Furthermore, we compare the relative improvements provided by the normalization to those of supplementary data selection to justify the effectiveness of the respective techniques. The rest of the paper is organized as follows: Section 2 briefly reviews relevant related work. Section 3 provides a detailed discussion of the normalization techniques as well as the acquisition of supplementary training material. Section 4 presents the datasets, the experiments and the corresponding results, followed by our conclusions and pointers to future work in Section 5.

2 Related Work

The technique of using 'out-of-domain' datasets to supplement 'in-domain' training data has been widely used in domain adaptation of SMT. Information retrieval techniques were used by Eck et al. (2004) to propose a language model adaptation technique for SMT. Hildebrand et al. (2005) utilized this approach to select similar sentences from available bitext to adapt translation models, which improved translation performance. Habash (2008) used spelling expansion, morphological expansion, dictionary term expansion and proper name transliteration to enhance or reuse existing phrase table entries to handle OOVs in Arabic-English MT.
More recently, an effort to adapt MT by mining bilingual dictionaries from comparable corpora using untranslated OOV words was carried out by Daume III and Jagarlamudi (2011).

Our current line of work is related to the work reported in Daume III and Jagarlamudi (2011) and that of Habash (2008). In our case, however, the target domain (web-forum) differs from the training data (Symantec TMs) more in terms of style than of actual domain (Banerjee et al., 2011). Secondly, in contrast to mining comparable data for bilingual dictionary extraction (Daume III and Jagarlamudi, 2011), we exploit sentence pairs from available parallel training data to handle untranslated OOVs. Moreover, mining supplementary parallel data guided by OOVs is used as a technique complementing the normalization-based approaches to reduce specific types of OOVs in the target domain. We classify OOVs into different categories and treat each of them separately. In contrast to extending the phrase table entries (Habash, 2008), our normalization methods mostly comprise pre- and post-processing techniques. Finally, we also present a comparison between the normalization and supplementary training data acquisition techniques for different error-density-based scenarios of the target domain. To the best of our knowledge, the use of 'domain-adapted' spell checkers to reduce OOV rates in the target domain is novel, and is one of the other main contributions of the paper.

3 Normalization and Supplementary Data Selection Techniques

This section introduces the datasets used for the experiments, followed by the adaptation techniques used in the experiments.

3.1 Datasets

The primary training data for our experiments consisted of En-De and En-Fr bilingual datasets in the form of Symantec TMs. Monolingual Symantec forum posts in German and French, along with the target side of the TM training data, served as language modelling data. In addition, we also had a collection of posts from the original Symantec English forums acquired over a period of two years, which formed the basis of our OOV category estimation. The development (dev) and test sets used in our experiments were randomly selected from this particular data set. Table 1 reports the amount of data used for all our experiments.

  Data Set                          En-De      En-Fr
  Bi-text       Symantec TM         832,723    702,267
                Development Set         500        500
                Test 1                2,022      2,022
                Test 2                  600        600
  Monolingual   English Forum            1,129,749
                German Forum                42,521
                French Forum                41,283
Table 1: Number of sentences in the training, development and test sets, and in the forum data sets.

As reported in Table 1, we used two different test sets for our experiments. The first one (Test-1) was randomly chosen from the English forum data. Since one of our objectives was also to investigate a scenario with a high density of spelling errors, typical of some forum posts, the second test set (Test-2) was selected to simulate a higher proportion of noise (approximately one spelling error in every two test set sentences). This was achieved by flagging the remaining forum dataset (after removing the Test-1 sentences) using an automatic spell checker, and randomly selecting sentences with spelling errors, followed by a manual review. Both these test sets were manually translated following basic guidelines for quality assurance.
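A rough sketch of how such a high-error-density test set can be drawn is given below (our illustration, not the actual selection script; has_spelling_error stands for any spell-checker-backed predicate, and the function name is ours):

import random

def sample_noisy_test_set(forum_sentences, has_spelling_error, size, seed=0):
    # Flag sentences containing at least one spelling error, then sample from
    # the flagged pool; the sample would still be reviewed manually afterwards.
    flagged = [s for s in forum_sentences if has_spelling_error(s)]
    random.seed(seed)
    return random.sample(flagged, min(size, len(flagged)))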
The randomly chosen dev set was translated using Google Translate (http://translate.google.com/) and manually post-edited by professional translators following guidelines for achieving 'good enough quality' (http://www.translationautomation.com/machine-translation-post-editing-guidelines.html).

3.2 OOV Categorization

The remaining English forum data (after dev and test set selection), comprising over 1.13M sentences (around 17.5M words), were used to compute OOV words in the forum domain with respect to the training data, using a unigram language model estimated on the source side of the training data. Manual inspection of the OOV word list identifies the following general categories:

1. Maskable Tokens (MASK): URLs, paths, registry entries, email addresses, memory locations, date and time tokens and IP addresses or version numbers.
2. Fused Words (FW): Two or more valid tokens concatenated using punctuation marks like '.'.
3. Spelling Errors (SPERR): Spelling errors or typos.
4. Valid Words (VAL): Valid words not occurring in the training data.
5. Non-Translatable (NTR): Tokens comprising standalone product and service names and numbers (not part of Category-1 tokens) which ideally should not be translated.

Table 2 depicts the percentage of the OOV word categories in the English forum data and the two test sets with respect to the En-De and En-Fr TM-based source data sets. Comparing the category-wise percentages on the two test sets (Test-1 and Test-2) clearly shows that the distribution of the categories in Test-1 is similar to that of the original forums. Test-2 shows a higher percentage of SPERR tokens, as it had been consciously designed to have a high spelling error density. The figures also depict the relative importance of the specific OOV categories in forum-style data, with non-translatable (NTR) and maskable tokens (MASK) covering nearly 75% of the OOV range.

  OOV                 En-De                          En-Fr
  Type       Forum    Test-1   Test-2       Forum    Test-1   Test-2
  MASK       25.68    21.33     9.93        25.47    19.43     9.83
  FW          8.89     4.11     2.05         8.75     3.71     2.00
  SPERR      10.41    12.64    52.91        10.45    12.29    52.67
  VAL         6.38    14.06    12.33         6.74    18.86    12.17
  NTR        48.64    47.87    22.77        48.60    45.71    23.33
Table 2: Category-based percentage of OOVs in the English forum and the two test data sets.

Different normalization techniques used to independently address each of these OOV categories are detailed below.

3.3 Regular Expression-based Normalization

For the normalization of MASK OOVs we developed a set of regular expressions to identify such tokens. These were replaced with unique place-holders. These replacements were then applied uniformly over all data sets (TM and forum) in a pre-processing step. Most of the tokens in this category were multi-word tokens, and this method allowed them to be treated as single tokens during the translation process. This not only helped in maintaining the internal ordering of words within such tokens but also ensured that none of the terms within such a token were translated.

3.4 Fused Word Splitting

To handle FW tokens, which comprise two or more valid words fused using a period ('.') symbol, we identified all tokens which had a period symbol flanked by alphabetic characters. However, since a large number of valid file names, website names or abbreviations (e.g. N.I.S., explorer.exe, shopping.aol.com, etc.) were also identified, we used heuristics based on the training data to identify the valid ones. Lists of known file extensions (e.g.
exe,\njar, pdf, etc.) and website domain extensions (e.g.\ncom, edu, net, gov, co.uk, etc.) were used to fil-\nter out file names and website names. Finally we\nused a dictionary built on the training data. Every\nsplit was validated against the dictionary, with the\nconstraint that all its constituent splits had to oc-\ncur in this dictionary. This normalization was only\napplied on the dev and test sets as the TM training\ndata was assumed to be clean of such fused words.\n3.5 spell checker-based Normalization\nA considerable amount of the OOVs in the un-\nnormalized forum data comprise spelling errors\nor typos (SPERR). We used an off-the-shelf spell\nchecker (cf. Section 4.2) to identify and correct\nthese tokens so that they mapped to valid words\n(preferably in the training data). While the ready-\nto-use spell checker worked well for most of the\nspelling errors in general-purpose English words,\nit flagged a lot of ‘in-domain’ ( technical) words.\nHence we adapted the spell checker to the do-\nmain. This was achieved by generating glossary\nlists from the source side of the TMs and adding\nthem to the spell checker dictionary. Furthermore,\nthe spell checking models had to be retrained using\nthe source side of ‘in-domain’ data from TMs. The\nadaptation of the spell checker helped us to elimi-\nnate most of the false positives flagged by the orig-\ninal unadapted spell checker. The errors flagged by\nthe spell checker were replaced with the highest\nranking suggestion from the spell checker. As in\nSection 3.4, the spelling corrections were applied\nonly to the test sets to ensure a reduction in the\nnumber of spelling error-based OOVs.\n3.6 Supplementary Data Selection\nTo take care of the V AL tokens which are valid\nwords but absent in the training data, we explored\ntechniques of mining supplementary data to im-\nprove the chances of successfully translating these\ntokens. We used the following freely available par-\nallel data collections as potential sources of sup-\nplementary data:\n1. Europarl (Koehn, 2005): Parallel corpus com-\nprising of the proceedings of the EuropeanParliament.\n2. News Commentary Corpus: Released as a\npart of the WMT 2011 Translation Task.3\n3. OpenOffice Corpus: Parallel documentation\nof the Office package from OpenOffice.org,\nreleased as part of the OPUS corpus (Tiede-\nmann, 2009).\n4. KDE4 Corpus: A parallel corpus of the\nKDE4 localization files released as part of\nOPUS.\n5. PHP Corpus: Parallel corpus generated from\nmultilingual PHP manuals also released as\npart of OPUS.\n6. OpenSubtitles2011 Corpus:4A collection of\ndocuments released as part of OPUS.\n7. EMEA Corpus: A parallel corpus from the\nEuropean Medical Agency also released as\npart of OPUS corpus.\nTo select relevant parallel data, we queried each\nof the parallel corpora with the V AL OOV words\nand added sentence pairs containing the OOVs into\nthe existing ‘in-domain’ parallel corpora. During\nthe selection process, the number of parallel sen-\ntences selected for any particular OOV item was\nrestricted to a threshold of 500 for En–De and 67\nfor En–Fr. This was done to limit the size of the\nselected ‘out-of-domain’ supplementary data such\nthat it did not exceed the size of the TM-based (in-\ndomain) training data. The target sentences of the\nselected parallel data were added to the language\nmodel to ensure language model adaptation. 
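The selection process just described can be sketched as follows (our illustration; the data structures, the tokenisation and the per-OOV cap handling are assumptions rather than the exact implementation):

def select_supplementary_data(oov_words, parallel_corpus, cap_per_oov):
    # parallel_corpus: iterable of (source_sentence, target_sentence) pairs drawn
    # from the out-of-domain collections; returns the selected pairs and coverage.
    oov_set = set(oov_words)
    counts = {w: 0 for w in oov_words}
    selected = []
    for src, tgt in parallel_corpus:
        hits = oov_set.intersection(src.lower().split())
        for w in hits:
            if counts[w] < cap_per_oov:
                counts[w] += 1
                selected.append((src, tgt))
                break                      # add the pair once, crediting one OOV
    coverage = sum(1 for w in oov_words if counts[w] > 0) / max(1, len(oov_words))
    return selected, coverage

With cap_per_oov set to 500 for En-De and 67 for En-Fr, this mirrors the thresholds used above.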
This process allowed us to cover 87.55% and 92.13% of the VAL OOVs for the En-De and En-Fr language pairs, respectively.

3.7 OOV Tokens Unsuitable for Translation

The last remaining category of OOVs (NTR) represents tokens for which translation was usually unnecessary. Most of these comprised product or service names, names of forum users, or numeric tokens. This class of tokens was not explicitly handled, under the assumption that, due to their absence from the training data (and hence from the phrase table), they would be preserved during the translation process in the standard SMT setup.

3 http://www.statmt.org/wmt11/translation-task.html
4 http://www.opensubtitles.org/

4 Experiments and Results

4.1 Pre- and Post-Processing

Prior to training, all the bilingual and monolingual data were subjected to tokenization and lowercasing using the standard Moses pre-processing scripts. However, for the regular expression-based normalization, the standard tokenizer is slightly modified to ensure that the unique placeholders (Section 3.3) are not tokenized. During the replacement process, a mapping is maintained between the unique placeholders, the line number and the actual token replaced. This mapping file is used later in the post-processing step to substitute the actual tokens in the position of the unique placeholders. For target sentences having multiple placeholders of the same type, the corresponding actual tokens are replaced in the order in which they appeared in the source.

4.2 Tools

For all our translation experiments we used OpenMaTrEx (Dandapat et al., 2010), an open-source SMT system which wraps the standard log-linear phrase-based SMT system Moses (Koehn et al., 2007). Word alignment was performed with Giza++ (Och and Ney, 2003). The phrase and reordering tables were built on the word alignments using the Moses training script. The feature weights for the log-linear combination of the feature functions were tuned using Minimum Error Rate Training (Och, 2003) on the dev set in terms of BLEU (Papineni et al., 2002). We used 5-gram language models in all our experiments, created using the IRSTLM (Federico et al., 2008) language modelling toolkit with Modified Kneser-Ney smoothing. Results of translations in every phase of our experiments were evaluated using BLEU and TER (Snover et al., 2006).

For the spell checking task we used a combination of two off-the-shelf spelling correction toolkits. Using the 'After the Deadline' toolkit (AtD, http://open.afterthedeadline.com/) as our primary spell checker, we also used a Java wrapper around Google's spellchecking API (http://www.google.com/tbproxy/spell?lang=en&hl=en) to supplement the AtD spell checking results. However, the 'in-domain' adaptation of the spell checker (Section 3.5) could only be achieved for the AtD spell checker.

4.3 Experimental Results

Table 3 shows the different BLEU and TER scores for translations subject to each category of normalization and supplementary data selection, along with the percentage of OOV word reduction they result in, for both the test sets under consideration. The last row of the table reports the results for translating only regular expression-based normalized test sets (without the other normalizations) using the supplementary-data-enhanced models.

The experiments were carried out in five different phases, each focussing on reducing one category of OOV words in the English forum data.
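To make the placeholder round trip of Section 4.1 concrete, here is a small illustrative sketch; the regular expression, the placeholder naming scheme and the in-memory mapping are our own assumptions, not the project's actual patterns or file format:

import re

URL_RE = re.compile(r"https?://\S+")   # one illustrative MASK pattern

def mask(sentence, mapping):
    # Replace each matched token with a unique placeholder and remember the original.
    def repl(match):
        placeholder = "__URL{}__".format(len(mapping))
        mapping[placeholder] = match.group(0)
        return placeholder
    return URL_RE.sub(repl, sentence)

def unmask(translated, mapping):
    # Post-processing step: put the original tokens back in place of the placeholders.
    for placeholder, original in mapping.items():
        translated = translated.replace(placeholder, original)
    return translated

mapping = {}
masked = mask("see http://example.com/help for details", mapping)
# masked == "see __URL0__ for details"; after translation, unmask() restores the URL.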
For the baseline translation and language models, the TM and forum data were subjected to only basic clean-up, such as dropping empty lines and very long sentences (more than 100 tokens). The baseline test sets were then subjected to the following adaptations in a cumulative, step-by-step manner:

1. Regex: Regular expression-based normalization for the reduction of MASK OOVs.
2. Wrd-Split: Heuristic-based tokenization for normalization of FW OOVs.
3. Spell-Chk: Off-the-shelf spell-checking-based normalization for reducing SPERR.
4. Adapted-Spell-Chk (Ada-SpChk): Spell checking using domain-adapted spell checkers to reduce false positive flags.
5. Sup-data: Supplementary data selection and addition to enrich existing models to reduce VAL OOVs.

The final experimental step (Regex+Sup) did not involve any specific normalization, but was rather performed to investigate the effect of supplementary data selection on regex-normalized test sets without any other normalizations.

                   En-De, Test-1           En-De, Test-2           En-Fr, Test-1           En-Fr, Test-2
  Normalization    OOV    BLEU    TER      OOV    BLEU    TER      OOV    BLEU    TER      OOV    BLEU    TER
  Baseline         -      25.98   0.6407   -      21.32   0.6361   -      34.14   0.5250   -      30.27   0.5405
  Regex            21.33  26.53*  0.6372   9.42   21.63   0.6332   19.43  34.80*  0.5179   9.67   30.65   0.5402
  Wrd-Split        3.48   26.59   0.6380   1.54   21.68*  0.6284   3.14   34.89   0.5178   1.50   30.77*  0.5386
  Spell-Chk        8.06   26.78   0.6365   37.16  22.50*  0.6279   8.57   35.10   0.5158   36.17  31.60*  0.5303
  Ada-SpChk        4.27   26.92   0.6299   11.30  23.17*  0.6174   3.57   35.33   0.5121   11.00  32.28*  0.5128
  Sup-data         13.74  27.86*  0.6207   13.53  24.08*  0.5923   17.43  36.04*  0.5024   15.17  33.75*  0.5043
  Regex-Sup        13.74  27.45   0.6242   13.53  23.01   0.6191   17.43  35.55   0.5068   15.17  31.96   0.5178
Table 3: Translation results after normalization and supplementary data selection. The OOV column indicates the percentage of total OOVs reduced in each step. * denotes a statistically significant improvement over the score in the previous row.

As the results in Table 3 show, regular expression-based normalization results in a 0.55 absolute (2.12% relative) BLEU point improvement for En-De translations and a 0.66 absolute (1.93% relative) BLEU point improvement for En-Fr translations on Test-1. For Test-2, the improvements are 0.31 absolute (1.45% relative) BLEU points and 0.38 absolute (1.26% relative) BLEU points for En-De and En-Fr, respectively. While the Test-1 improvements are statistically significant at the p=0.05 level using bootstrap resampling (Koehn, 2004), the Test-2 improvements are not statistically significant. The TER scores also show a decreasing trend, which also suggests an improvement in translation quality. The reason behind this may be attributed to the larger percentage of category-1 tokens in Test-1 compared to Test-2. The number of OOV words is reduced by 135 and 136 on Test-1 and by 55 and 58 on Test-2, with respect to the different training data sets. The improvements result from the fact that this normalization helps to maintain intra-word ordering within MASK tokens and avoids translation of their constituent sub-tokens. The first example in Table 4 clearly depicts this particular behaviour for MASK tokens.

Using the fused word splitting technique on the regex-processed test sets, the scores improve only by 0.06 absolute (0.23% relative) BLEU points and 0.09 absolute (0.26% relative) BLEU points on Test-1 over the previous normalization scores, for En-Fr and En-De respectively.
For Test-2 the\nimprovements are 0.05 absolute (0.23% relative)\nBLEU points and 0.12 absolute (0.39%) BLEU\npoints for En–De and En–Fr translations, respec-\ntively. Despite the marginal improvement, the im-\nprovements for Test-2 were statistically significant\nat p=0.05 level. Improvements in Test-1 were not\nsignificant. The reason for the marginal improve-\nment becomes apparent when observing the low\npercentage of OOV’s (Table 3) reduced by this\nmechanism. However, the percentage of category-\n2 tokens in test-2 is nearly double that of Test-1\nwhich may explain the statistical significance of\nthe improvements gained.\nAs expected, handling the spelling errors using\nspell checkers had a profound effect on the reduc-\ntion of OOV words for the high density spelling er-\nror testset, Test-2. Using the adapted spell checker\non this test set, we achieve an improvement of 1.49\nabsolute (6.87% relative) BLEU points for En–De\nand 1.51 absolute (4.9%) BLEU points for En–Fr\ntranslations. This corresponds to a total reduction\n(combining reductions for unadapted and adapted\nspell checking) of 283 OOVs for both En–De andEn–Fr test sets. The overall improvement when us-\ning spell checkers over the previous normalization\nresults were statistically significant at the p=0.05\nlevel. However, for Test-1, with spelling error den-\nsity reflecting that of average forum data, the im-\nprovements are much lower. Adapted spell check-\ning results in a total improvement of 0.33 absolute\n(1.24% relative) BLEU points for En–De and 0.44\nabsolute (1,26% relative) BLEU points for En–Fr\ntranslations. These are not statistically significant\nand correspond to a reduction of 78 and 85 OOVs\nfor En–De and En–Fr test sets, respectively. The\nTER scores also reflect the same level of improve-\nments across the two different test sets.\nThe fourth phase of experiments, where differ-\nent parallel data sources are mined guided by the\nlist of V AL OOV words, results in further reduc-\ntion in the OOV rates and improvement in trans-\nlation scores. The guided selection process im-\nproves the scores by 0.94 absolute (3.49% rel-\native) and 0.71 absolute (2.01% relative) BLEU\npoints for En–De and En–Fr translations, respec-\ntively on Test-1. For Test-2 the improvement\nfigures are 0.91 absolute (3.93% relative) BLEU\npoints and 1.47 absolute (4.55% relative) BLEU\npoints for En–De and En–Fr translation, respec-\ntively, over the previous normalization results. The\nTER scores also show similar improvements for\nboth language pairs and test sets. All improve-\nments are statistically significant at the p=0.05\nlevel. Furthermore, this technique further reduces\nthe number of OOVs by 79 for the En–De test set\nand 91 counts for the En–Fr on Test-2. The corre-\nsponding reductions for Test-1 are 87 and 122 for\nEn–De and En–Fr, respectively.\nIn summary, using supplementary data selection\ntechniques to complement the normalization re-\nsulted in statistically significant overall improve-\nments of 1.88 absolute (7.24% relative) and 1.9\nabsolute (5.57% relative) BLEU points over the\nbaseline scores on Test-1. 
On Test-2, the improvements were 2.76 absolute (12.95% relative) and 3.48 absolute (11.49% relative) BLEU points for En-De and En-Fr translations, respectively. By translating the regex-normalized test sets (without word splitting and spell checking) with the supplementary-data-enhanced models, we aimed to assess the impact of the supplementary data selection technique in contrast to that of the normalization methods. For Test-1, the results show that this process yields scores slightly better (by 0.53 absolute BLEU for En-De and 0.22 absolute BLEU for En-Fr) than those achieved by complete normalization (the adapted spell checking scores, row 5 in Table 3). For Test-2, however, the scores are lower than the adapted spell checking scores by 0.16 and 0.32 absolute BLEU points for En-De and En-Fr, respectively. Overall, the results clearly show that for general forum data (with average spelling error density), fully automatic supplementary training data acquisition can perform as well as, and sometimes better than, semi-automatic normalization, although they target different types of OOVs. Finally, for very noisy data, normalization complemented with supplementary data selection really pays off.

In order to substantiate the improvements observed in the automatic evaluation scores, we present some examples from our test sets (both Test-1 and Test-2) to depict how the normalization or data selection methods actually affect the translations.

  Type       Sentence
  Src        5 . click on the folder button and navigate to c :\documents and settings\all users\application data\ and select the carbonite folder
  Ref        5. klicken sie auf die ordnerschaltfläche und öffnen sie den ordner " c :\documents and settings\all users\application data\carbonite "
  Baseline   5. klicken sie auf den ordner " und navigieren sie zu c :\dokumente und einstellungen\alle benutzer\anwendungsdaten\ und wählen sie die carbonite ordner
  Regex      5. klicken sie auf die schaltfläche " und wechseln sie zum ordner c :\documents and settings\all users\application data\carbonite und wählen sie die carbonite ordners

  Src        re : nis09 did not detect 8 threats & 23 infected objects.and 16 suspicious objects ?
  Ref        re : nis09 n' a pas détecté 8 menaces , 23 objets infectés et 16 objets suspects ?
  Baseline   re : nis09 n' a pas détecter 8 menaces et 23 infecté objects.and 16 les objets ?
  Wrd-Split  re : nis09 n' a pas détecter 8 menaces et 23 infecté objets . et 16 les objets ?

  Src        and no for somthing completly different .
  Ref        und nun zu etwas völlig anderem .
  Baseline   und keine für somthing completly anders .
  Spell-Chk  und nicht für etwas völlig anders .

  Src        pretty disappointed with nis parental control not blocking websites on blocked list as well as through their category of websites to block .
  Ref        je suis assez déçu que le contrôle parental de nis ne bloque pas les sites web figurant dans la liste bloqués aussi bien que ceux de la catégorie des sites web à bloquer .
  Baseline   assez disappointed avec contrôle parental de nis pas le blocage de sites web sur liste bloqués ainsi que par l' intermédiaire de leur catégorie de sites web à bloquer .
  Sup-data   assez déçu de contrôle parental de nis pas le blocage de sites web sur liste bloqués ainsi que dans leur catégorie de sites web à bloquer .
Table 4: Translation examples for each normalization and supplementary data selection technique.
Table 4 presents 4 different examples of\ntranslations each highlighting the effect of a single\nnormalization or data selection technique. The first\nexample clearly shows how regular expression-\nbased masking allows internal parts of the pathstructure to be left untranslated, unlike in the base-\nline set-up. The second sentence (row 5) is an\nexample of the fused word splitting technique en-\nabling better translation of the token ‘objects.and’\nwhich had been treated as an OOV in the base-\nline. The third example (rows 9-12) highlights the\neffect of spell checking on the translation quality\nof the source sentence. Automatic spell check-\ning changes the tokens ‘somthing completly’ into\n‘something completely’ thereby allowing them to\nbe translated. The final set of sentences is an ex-\nample of how supplementary data selection allows\nthe translation of the valid yet OOV word ‘disap-\npointed’ appearing in the source sentence. As is\nevident from the examples, the normalization tech-\nniques discussed in the paper do work towards bet-\nter translations for sentences with specific OOV\ntypes. However, the relative densities of each type\nleads to varied improvements in scores reported in\nTable 4.3.\n5 Conclusion and Future Work\nIn this paper we have explored a set of normaliza-\ntion techniques to achieve better translation quality\nfor user-generated forum content. We have shown\nthat supplementary data selection techniques posi-\ntively complement normalization in terms of trans-\nlation quality. For test data with spelling error den-\nsity representative of the overall forum data (Test-\n1), supplementary data selection on its own can\nproduce improvements similar to those achieved\nthrough normalisation (targeting different OOVs).\nWhile data normalization carried out at the level\nreported in this paper (with different OOV cate-\ngories and different normalisation approaches for\neach) is a semi-automatic process which requires\nsome manual analysis, supplementary data selec-\ntion is fully automatic and involves much less over-\nall effort. Thus, for moderately noisy datasets\n(such as Test-1), normalization may not always\nbe worth the effort. For more noisy datasets\n(e.g. Test-2) however, normalization does improve\ntranslation quality more effectively than data sup-\nplementation.\nIn this research, the classification of OOV words\nwas done in a semi-automatic fashion. Using auto-\nmatic classification techniques to identify the dif-\nferent categories in OOV words would be one of\nthe prime future directions here. Furthermore, a\ndetailed investigation of the individual contribu-\ntions of multiple resources used for supplementary\n175\ndata selection is required to better understand the\ncause of the improvements in scores. Finally we\nwould also like to work towards developing au-\ntomatic threshold detection techniques for optimal\nsupplementary data selection.\nAcknowledgments\nThis work is supported by Science Foundation Ire-\nland (Grant No. 07/CE/I1142) as part of the Centre\nfor Next Generation Localisation (www.cngl.ie) at\nDublin City University. We thank the reviewers for\ntheir insightful comments.\nReferences\nBanerjee, Pratyush, Sudip Kumar Naskar, Johann Ro-\nturier, Andy Way, and Josef van Genabith. 2011.\nDomain Adaptation in Statistical Machine Transla-\ntion of User-Forum Data using Component Level\nMixture Modelling. In Proceedings of the Thirteenth\nMachine Translation Summit, pages 285–292, Xia-\nmen, China.\nDandapat, S., M. L. Forcada, D. 
Groves, S. Penkale,\nJ. Tinsley, and A. Way. 2010. OpenMaTrEx:\nA Free/Open-Source Marker-Driven Example-Based\nMachine Translation System. In Proceedings of the\n7th International Conference on Natural Language\nProcessing (IceTAL 2010), page 121–126, Reyk-\njav´ık, Iceland.\nDaume III, Hal and Jagadeesh Jagarlamudi. 2011.\nDomain adaptation for machine translation by min-\ning unseen words. In Proceedings of the 49th An-\nnual Meeting of the Association for Computational\nLinguistics: Human Language Technologies, pages\n407–412, Portland, Oregon, USA.\nEck, Matthias, Stephan V ogel, and Alex Waibel. 2004.\nLanguage model adaptation for statistical machine\ntranslation based on information retrieval. In Pro-\nceedings of 4th International Conference on Lan-\nguage Resources and Evaluation, (LREC 2004),\npages 327–330, Lisbon, Portugal.\nFederico, Marcello, Nicola Bertoldi, and Mauro Cet-\ntolo. 2008. IRSTLM: an open source toolkit for\nhandling large scale language models. In Inter-\nspeech 2008: 9th Annual Conference of the Inter-\nnational Speech Communication Association, pages\n1618–1621, Brisbane, Australia.\nFlournoy, Raymond and Jeff Rueppel. 2010. One\nTechnology : Many Solutions. In AMTA 2010: Pro-\nceedings of the Ninth Conference of the Association\nfor Machine Translation in the Americas, pages 6–\n12, Denver, Colorado, USA.\nHabash, Nizar. 2008. Four techniques for online han-\ndling of out-of-vocabulary words in arabic-english\nstatistical machine translation. In Proceedings of the46th Annual Meeting of the Association for Compu-\ntational Linguistics on Human Language Technolo-\ngies: Short Papers, pages 57–60, Columbus, Ohio.\nHildebrand, Almut Silja, Matthias Eck, Stephan V ogel,\nand Alex Waibel. 2005. Adaptation of the Transla-\ntion Model for Statistical Machine Translation based\non Information Retrieval. In 10thEAMT Confer-\nence: Practical Applications of Machine Transla-\ntion, Conference Proceedings, pages 119–125, Bu-\ndapest, Hungary.\nKoehn, Philipp, Hieu Hoang, Alexandra Birch, Chris\nCallison-Burch, Marcello Federico, Nicola Bertoldi,\nBrooke Cowan, Wade Shen, Christine Moran,\nRichard Zens, Chris Dyer, Ond ˇrej Bojar, Alexan-\ndra Constantin, and Evan Herbst. 2007. Moses:\nopen source toolkit for statistical machine transla-\ntion. In ACL 2007, Proceedings of the Interactive\nPoster and Demonstration Sessions, pages 177–180,\nPrague, Czech Republic.\nKoehn, Phillipe. 2004. Statistical Significance Tests\nfor Machine Translation Evaluation. In Proceedings\nof the Conference on Empirical Methods in Natural\nLanguage Processing, (EMNLP 2004), pages 388–\n395, Barcelona, Spain.\nKoehn, P. 2005. Europarl: A Parallel Corpus for\nStatistical Machine Translation. In MT Summit X:\nThe 10th Machine Translation Summit, pages 79–86,\nPhuket, Thailand.\nOch, Franz Josef and Hermann Ney. 2003. A system-\natic comparison of various statistical alignment mod-\nels.Computational Linguistics, 29:19–51.\nOch, Franz Josef. 2003. Minimum error rate training\nin statistical machine translation. In Proceedings of\nthe 41st Annual Meeting on Association for Compu-\ntational Linguistics - Volume 1, pages 160–167, Sap-\nporo, Japan.\nPapineni, Kishore, Salim Roukos, Todd Ward, and Wei-\nJing Zhu. 2002. Bleu: a method for automatic eval-\nuation of machine translation. In 40th Annual Meet-\ning of the Association for Computational Linguistics,\n(ACL 2002), pages 311–318, Philadelphia, Pennsyl-\nvania.\nRoturier, Johann and Anthony Bensadoun. 2011. 
Eval-\nuation of MT Systems to Translate User Generated\nContent. In Proceedings of the Thirteenth Machine\nTranslation Summit, pages 244–251, Xiamen,China.\nSnover, Matthew, Bonnie Dorr, Richard Schwartz, Lin-\nnea Micciulla, and John Makhoul. 2006. A study\nof translation edit rate with targeted human anno-\ntation. In Proceedings of Association for Machine\nTranslation in the Americas, pages 223–231, Cam-\nbridge, MA.\nTiedemann, J ¨org. 2009. News from OPUS - A collec-\ntion of multilingual parallel corpora with tools and\ninterfaces. pages 237–248.\n176", "main_paper_content": null }
{ "decision": "Unknown", "reviews": [] }
0
0
[]
[]
{ "id": "hbO-vMF0aDk", "year": null, "venue": "EAMT 2015", "pdf_link": "https://aclanthology.org/W15-4935.pdf", "forum_link": "https://openreview.net/forum?id=hbO-vMF0aDk", "arxiv_id": null, "doi": null }
{ "title": "TraMOOC: Translation for Massive Open Online Courses", "authors": [ "Valia Kordoni", "Kostadin Cholakov", "Markus Egg", "Andy Way", "Lexi Birch", "Katia Kermanidis", "Vilelmini Sosoni", "Dimitrios Tsoumakos", "Antal van den Bosch", "Iris Hendrickx", "Michael Papadopoulos", "Panayota Georgakopoulou", "Maria Gialama", "Menno van Zaanen", "Ioana Buliga", "Mitja Jermol", "Davor Orlic" ], "abstract": "Valia Kordoni, Kostadin Cholakov, Markus Egg, Andy Way, Lexi Birch, Katia Kermanidis, Vilelmini Sosoni, Dimitrios Tsoumakos, Antal van den Bosch, Iris Hendrickx, Michael Papadopoulos, Panayota Georgakopoulou, Maria Gialama, Menno van Zaanen, Ioana Buliga, Mitja Jermol, Davor Orlic. Proceedings of the 18th Annual Conference of the European Association for Machine Translation. 2015.", "keywords": [], "raw_extracted_content": "�\n�����������������������������������������������������\n����������������������������������������\n������������������������������������������������������������\n������������������������������������\n��������������������������\n�����������������������\n�\n�����������������\n��������������������������������������������������������������\n��������������������������������������\n����������������������������������������\n���������������������������������\n���������������������������������������������������������������������������������������\n�������������������������������������������������������������\n������������������������������������������������������\n������������������������������������������������������������������������������������\n���������������������������������������������������������������������������\n��������������������������������������������������������\n�����������������������������������������������������������������������������������������������\n������������������������������������������������������������������������������������������������������\n�����������������������������������������������������������������������������������������������\n�����������������������������������������������������������������������������������������������\n�������������� �����������������������������������������������������������������������������������\n��������������������������������������������������������������������������������������������������������\n������������������������������������ ��������������������������������������������������������\n���������������������������������������������������������������������������������������������������\n����������������������������������������������������������������������������������������������������������\n������������������������������������������������������������������������������������������������������\n����������������������������������������������������������������������������������������������������������\n���������������������������������������������������������������������������������������������������������\n���������������������������������������������������������������������������������������������������\n������������������������������������������������������������������������������������������������������\n������������������������������������������������������������������������������������������������������\n�������������������������������������������������������������������������������������������������������\n����� ����� ��������� ���� ��� ���� ������������������ �������� ������ �������� ��������217", "main_paper_content": null }
{ "decision": "Unknown", "reviews": [] }
0
0
[]
[]
{ "id": "jij0lw2d9i8", "year": null, "venue": "EAMT 2023", "pdf_link": "https://aclanthology.org/2023.eamt-1.7.pdf", "forum_link": "https://openreview.net/forum?id=jij0lw2d9i8", "arxiv_id": null, "doi": null }
{ "title": "Exploiting large pre-trained models for low-resource neural machine translation", "authors": [ "Aarón Galiano Jiménez", "Felipe Sánchez-Martínez", "Víctor M. Sánchez-Cartagena", "Juan Antonio Pérez-Ortiz" ], "abstract": "Aarón Galiano-Jiménez, Felipe Sánchez-Martínez, Víctor M. Sánchez-Cartagena, Juan Antonio Pérez-Ortiz. Proceedings of the 24th Annual Conference of the European Association for Machine Translation. 2023.", "keywords": [], "raw_extracted_content": "Exploiting large pre-trained models\nfor low-resource neural machine translation\nAar´on Galiano-Jim ´enez, Felipe S ´anchez-Mart ´ınez,\nV´ıctor M. S ´anchez-Cartagena, Juan Antonio P ´erez-Ortiz\nDep. de Llenguatges i Sistemes Inform `atics, Universitat d’Alacant\nE-03690 Sant Vicent del Raspeig (Spain)\[email protected], {fsanchez,vmsanchez,japerez }@dlsi.ua.es\nAbstract\nPre-trained models have revolutionized the\nnatural language processing field by lever-\naging large-scale language representations\nfor various tasks. Some pre-trained mod-\nels offer general-purpose representations,\nwhile others are specialized in particu-\nlar tasks, like neural machine translation\n(NMT). Multilingual NMT-targeted sys-\ntems are often fine-tuned for specific lan-\nguage pairs, but there is a lack of evidence-\nbased best-practice recommendations to\nguide this process. Additionally, deploying\nthese large pre-trained models in computa-\ntionally restricted environments, typically\nfound in developing regions where low-\nresource languages are spoken, has be-\ncome challenging. We propose a pipeline\nto tune the mBART50 pre-trained model\nto 8 diverse low-resource language pairs,\nand then distill the resulting system to\nobtain lightweight and more sustainable\nNMT models. Our pipeline conveniently\nexploits back-translation, synthetic corpus\nfiltering, and knowledge distillation to de-\nliver efficient bilingual translation models\nthat are 13 times smaller, while maintain-\ning a close BLEU performance.\n1 Introduction\nIn the field of natural language processing (NLP),\nmost of the so called pre-trained or foundation\nmodels (Bommasani et al., 2021) fall into one\nof three categories, based on whether the under-\nlying architecture corresponds to the encoder of\nthe transformer (Vaswani et al., 2017), the de-\ncoder or both. Encoder-like models consist of a\n© 2023 The authors. This article is licensed under a Creative\nCommons 4.0 licence, no derivative works, attribution, CC-\nBY-ND.number of bidirectional self-attention layers that\nlearn deep general-purpose representations with\nself-supervised denoising learning objectives —\nsuch as predicting the original token for masked\nor perturbed tokens in the input— and can then\nbe adapted to a wide range of downstream tasks.\nMonolingual models such as BERT (Devlin et al.,\n2019) and cross-lingual variations like mBERT or\nXLM-R (Conneau et al., 2020) have been obtained\nthis way. Decoder-like pre-trained models —such\nas GPT-3 (Brown et al., 2020) or LLaMA (Tou-\nvron et al., 2023)— are trained to auto-regressively\npredict the next token in the sequence by us-\ning causal self-attention layers. Pre-trained mod-\nels involving the whole encoder-decoder trans-\nformer architecture —e.g. 
DeltaLM (Ma et al.,\n2021), BART (Lewis et al., 2020) and its cross-\nlingual variation mBART (Liu et al., 2020)— are\nalso pre-trained to denoise perturbations in the in-\nput, and then fine-tuned for particular text-to-text\ndownstream tasks such as neural machine transla-\ntion (NMT).\nIn addition to models pre-trained to obtain\ngeneral-purpose neutral representations, there ex-\nist a number of multilingual encoder-decoder mod-\nels specifically pre-trained to translate between\nmany different language pairs. Well-known sys-\ntems in this group include mBART50 (Tang et al.,\n2021), or NLLB-200 (NLLB Team et al., 2022).\nAll these pre-trained models attain high translation\nquality (Tran et al., 2021) because they leverage\ninformation from multiple language pairs, thus be-\ncoming an interesting realization of the possibili-\nties of transfer learning. In this paper, we focus on\nmBART50 and leave the exploration of other pre-\ntrained models to future work. mBART50 (Tang\net al., 2021) was obtained by additionally training\nmBART in a supervised manner to translate be-\ntween English and 49 languages, and vice versa.1\n1mBART50 can be considered as a fine-tuned model on its\nAs a consequence of the relatively recent release\nof pre-trained models specifically aimed at NMT,\nthere are just a few studies (see Sect. 5) on how\nto adapt them to a certain language pair. In this\npaper we focus on low-resource languages in low-\nresource settings, since low-resource languages are\nusually spoken in impoverished or conflicting ar-\neas with limited computational resources.\nWe propose a pipeline to tune the English-to-\nmany mBART50 model for the translation be-\ntween English and a specific low-resource lan-\nguage (or vice versa with the many-to-English pre-\ntrained model) and, afterwards, distill the knowl-\nedge in the fine-tuned mBART50 teacher model\nto build a lightweight student model that has a\nmuch smaller number of parameters. In this re-\ngard, our pipeline considers mBART50 as an ini-\ntial resource-hungry model which is conveniently\nexploited to generate synthetic parallel sentences\nthat are conveniently filtered before training a\nsmaller student NMT system that can then be run\non low-end devices. We prove that filtering is ben-\neficial in most cases, without being detrimental in\nany of them. We chose mBART50 for our ex-\nperiments based on its performance in the litera-\nture (Liu et al., 2021; Lee et al., 2022; Chen et al.,\n2022), as it has been shown to provide comparable\nor better BLEU scores than alternatives like M2M-\n100, mT5, CRISS, and SixT, at least for language\npairs including English.\nOur pipeline is evaluated on eight translation\ntasks involving four low-resource languages and\nEnglish. In order to evaluate the transferabil-\nity of the pre-trained model to unseen languages,\ntwo of our languages were not considered during\nmBART50’s pre-training. Languages were cho-\nsen so that each one belongs to a different lan-\nguage family. The results show that when English\nis the source language, our student models outper-\nform the teacher models or perform comparably.\nHowever, when English is the target language, the\nteachers perform better that the students. In either\ncase, the student models are 92% faster than the\nteacher models when they are executed on a CPU.\nThe rest of the paper is organized as follows.\nNext section describes our pipeline for fine-tuning\nand knowledge distillation of pre-trained NMT\nmodels. Sect. 
3 then presents the experimental set-\nown, as it results from adapting a pre-trained model to a par-\nticular task, or as a pre-trained model used as the seed to ob-\ntain specific bilingual machine translation (MT) models as we\ndo in this paper.\nParallel\ncorpus\n[en-xx]Fine tune\nmBAR T50\nen → xxMonolingual\ncorpus [en]\nSynthetic\nparallel\ncorpus\nxx-enFine-tune\nXLM-RSynthetic\nparallel\ncorpus\nen-xxTranslate\nen → xx\nFine tune \nmBAR T50\nxx → enTranslate\nxx → en\nMonolingual\ncorpus [xx]Filter\nsynthetic\ncorpus\nFilter\nsynthetic\ncorpusBicleaner-AI\nmodelIterative processFigure 1: Pipeline for fine-tuning mBART50 to translate En-\nglish ( en) into a low-resource language ( xx), and vice versa,\nusing parallel and monolingual corpora.\ntings with eight different translation tasks involv-\ning four low-resource languages, whereas Sect. 4\nreports the main results and discusses the most rel-\nevant observed patterns. The paper ends with a re-\nview of related work, followed by some conclud-\ning remarks and future work plans.2\n2 Approach\nOur pipeline consists of two different stages: a\nfirst stage aimed at improving the pre-trained mod-\nels by combining iterative back-translation, paral-\nlel corpus filtering and fine-tuning; and a second\nstage aimed at distilling the knowledge from the\nfine-tuned models to train a student model with far\nfewer parameters but comparable performance.\nFine-tuning of pre-trained models. This pro-\ncess, depicted in Figure 1, combines fine-\ntuning of the pre-trained models with back-\ntranslation (Hoang et al., 2018) and synthetic\nparallel corpus filtering via a fine-tuned XLM-R\nmodel (Conneau et al., 2020). For our English-\ncentric scenario and a particular low-resource lan-\nguage, this consists of the following steps:\n1. Use the available parallel corpora to train\na Bicleaner-AI (Zaragoza-Bernabeu et al.,\n2022) model. Bicleaner-AI learns a classifier\non top of XLM-R that predicts if a pair of in-\nput sentences are mutual translation or not.\n2. Fine-tune both the English-to-many and the\nmany-to-English mBART50 models with the\noriginal parallel corpora.\n2The code for our training pipeline is available at https:\n//github.com/transducens/tune-n-distill\n3. Perform incremental iterative back-\ntranslation.\n(a) Translate the available English monolin-\ngual corpora into the low-resource lan-\nguage, and vice versa, using the last fine-\ntuned mBART50 models.\n(b) Filter the synthetic corpora using the\nXLM-R model trained in step 1.\n(c) Use the filtered synthetic corpora and the\navailable parallel corpora to further fine-\ntune the last fine-tuned mBART50 mod-\nels translating to and from English.\n(d) Evaluate the performance of the two re-\nsulting models on a development set. If\nnone improves, stop the iterative pro-\ncess. Otherwise, increase the size of\nboth monolingual corpora and jump to\nstep 3(a).\nTo filter the synthetic corpora generated in\neach iteration, a threshold in the interval [0,1] is\nused to discretize the output of Bicleaner-AI. This\nthreshold is set in the first iteration of the back-\ntranslation process —step 3(b)— by exploring all\nthresholds in [0.0,0.9]at steps of 0.1. The thresh-\nold for the remaining iterations is the one that pro-\nduces the synthetic corpus that leads to the best\nmBART50 models on the development set. 
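As a concrete illustration of the filtering in step 3(b) and of the threshold sweep just described, a minimal sketch follows; the tab-separated score-file layout, the file names and the helper function are assumptions made for illustration and are not the pipeline's actual interface.

```python
# Minimal sketch of step 3(b): filtering a back-translated synthetic corpus
# using per-pair Bicleaner-AI-style cleanliness scores.  The input format
# (source <TAB> target <TAB> score) and all file names are assumptions made
# for illustration only.

def filter_synthetic(scored_path: str, out_path: str, threshold: float) -> int:
    """Keep only sentence pairs whose cleanliness score reaches the threshold."""
    kept = 0
    with open(scored_path, encoding="utf-8") as fin, \
         open(out_path, "w", encoding="utf-8") as fout:
        for line in fin:
            fields = line.rstrip("\n").split("\t")
            if len(fields) < 3:
                continue  # skip malformed lines
            src, tgt, score = fields[0], fields[1], float(fields[2])
            if score >= threshold:
                fout.write(f"{src}\t{tgt}\n")
                kept += 1
    return kept

# Threshold sweep for the first back-translation iteration: one filtered
# corpus per threshold in [0.0, 0.9] at steps of 0.1; the threshold whose
# corpus yields the best fine-tuned model on the development set is then
# reused in later iterations.
for t in [round(0.1 * i, 1) for i in range(10)]:
    n = filter_synthetic("synthetic.scored.tsv", f"synthetic.filtered.{t}.tsv", t)
    print(f"threshold={t}: kept {n} pairs")
```

A hard threshold is used here simply because it matches the discretization of the Bicleaner-AI output described above; score-based ranking or binning would be equally possible in such a sketch.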
We\nstart the iterative back-translation with 1 million\nmonolingual sentences in each language (or the\nwhole corpus if the amount is smaller) and we add\n1 million sentences in each language (if available)\nafter step 3(d).\nTraining of student models. Knowledge distil-\nlation is usually implemented in NLP at token\nlevel, but in tasks like NMT performing it at se-\nquence level (Kim and Rush, 2016) is usually\nequivalent and easier to implement: the student is\ntrained on a synthetic corpus obtained by trans-\nlating with the teacher the source segments of\nthe original training parallel corpus, if available.\nHowever, in the case of third-party-developed pre-\ntrained models, this corpus may not be available.\nWe hypothesize that, in its absence, as well as for\nlanguages never seen by pre-trained models, we\ncan generate synthetic training samples by translat-\ning monolingual data with the teacher model and\nthen filtering the synthetic data generated to dis-\ncard low-quality or noisy sentence pairs.\nOnce the pre-trained models have been prop-\nerly fine-tuned, we train a student model byperforming standard sentence-level knowledge\ndistillation (Kim and Rush, 2016). To this\nend, monolingual English data is automatically\ntranslated into the low-resource language with\nthe best fine-tuned English-to-many mBART50\nsystem and the resulting synthetic bilingual corpus\n(opportunely cleaned with the same Bicleaner-AI\nmodel) together with the true bilingual corpus\nare used to train the student model translating the\nlow-resource language into English. Conversely,\nmonolingual data available for the low-resource\nlanguage is automatically translated into En-\nglish with the best fine-tuned many-to-English\nmBART50 model and the resulting cleaned corpus\ntogether with the bilingual corpus are used to train\nthe system translating from English into the low-\nresource language. In addition to this approach\nbased on back-translation, we will also explore\ntwo other approaches to student training: using\nforward-translated texts (Li and Specia, 2019) and\nusing both, forward- and back-translated ones.\n3 Experimental settings\nSelection of low-resource languages. We con-\nducted experiments for the translation from four\nlow-resource languages into English, and vice\nversa. These low-resource languages are Swahili\n(sw), Kyrgyz ( ky), Burmese ( my) and Macedo-\nnian (mk).3They belong to different language fam-\nilies and use different alphabets. Swahili belongs\nto the Niger-Congo language family and is written\nin the Latin script. Kyrgyz is a Turkic language\nwritten in a Cyrillic alphabet in Kyrgyzstan, and\nin a Perso-Arabic alphabet in Xinjiang. Burmese\nis a Sino-Tibetan language that has its own writ-\ning system. The presence of blank spaces between\nwords is optional in Burmese, but they are com-\nmonly used in a non-standard manner to ease legi-\nbility. Finally, Macedonian is a Slavic language us-\ning the Cyrillic alphabet, but differs in some char-\nacters from other languages with the same script.\n3It should be emphasized that the term low-resource fre-\nquently used to categorize languages in the literature is inher-\nently ambiguous and relative. In order to more precisely de-\nfine the degree of data sparseness of human languages, Joshi\net al. (2020) have proposed a six-class taxonomy based on\nthe number of available resources, ranging from class 0 lan-\nguages (labeled as the left-behinds ) with no representation\nin any existing resource, to class 5 (the winners ). 
Under\nthis classification, Swahili belongs to class 2 (the hopefuls ),\nwhereas Kyrgyz, Macedonian and Burmese belong to class 1\n(thescraping-bys ).\nModel architecture. The pre-trained model ex-\nploited in this paper is mBART50 (Tang et\nal., 2021), a multilingual sequence-to-sequence\nencoder-decoder pre-trained on large-scale mono-\nlingual corpora using the BART denoising ob-\njective (Lewis et al., 2020) and then fine-tuned\nfor multilingual MT. mBART50 was trained on a\nset of 50 languages, including English, Burmese\nand Macedonian, but neither Swahili nor Kyr-\ngyz. mBART50 uses a standard transformer ar-\nchitecture (Vaswani et al., 2017) with 12 layers\nfor both the encoder and the decoder, embedding\ndimension of 1024, feed-forward inner-layer di-\nmension of 4096, and 16 attention heads. This\nadds up to approximately 680M parameters. Our\nbilingual baselines and student models consist of\na transformer architecture with 6 layers for both\nthe encoder and the decoder, embedding dimen-\nsion of 512, feed-forward inner-layer dimension\nof 2048, and 8 attention heads. These mod-\nels have near 50M parameters, approximately 13\ntimes fewer parameters than the mBART50 mod-\nels. All our models were trained or fine-tuned us-\ning the Fairseq toolkit.4\nData. Most of the training corpora used for each\nlanguage pair comes from OPUS.5In addition,\nparallel corpora from GoURMET6and JW300\nwere also used. The ALT corpora7was addi-\ntionally used for Burmese and SAWA (De Pauw\net al., 2009) for Swahili. We used monolingual\ntexts from NewsCrawl, except for Burmese, for\nwhich we used OSCAR (Ortiz Su ´arez et al., 2020).\nWe added the monolingual corpora available in\nGoURMET to Kyrgyz and Macedonian. For\nMacedonian, an in-house corpus was used, repre-\nsenting 48% of the Macedonian monolingual sen-\ntences shown in Table 1. Burmese texts were pre-\nprocessed with the Pyidaungsu8word segmenter.\nParallel sentences longer than 100 words in either\nside were discarded for all languages. Table 1 pro-\nvides information about the training corpora after\ntheir pre-processing.\nFor development and testing, we used the\nFLORES-101 (Goyal et al., 2021) dataset which\n4https://github.com/facebookresearch/fairseq\n5https://opus.nlpl.eu/\n6https://gourmet-project.eu/\ndata-model-releases/\\#ib-toc-anchor-0\n7https://www2.nict.go.jp/astrec-att/\nmember/mutiyama/ALT/\n8https://github.com/kaunghtetsan275/\npyidaungsuLanguage pair sentences\nEnglish–Burmese 87 432\nEnglish–Swahili 232 133\nEnglish–Kyrgyz 311 705\nEnglish–Macedonian 756 746\nLanguage sentences\nEnglish 3 000 000\nBurmese 1 192 914\nSwahili 455 488\nKyrgyz 1 125 488\nMacedonian 2 393 325\nTable 1: Number of sentences in the parallel and monolingual\ncorpora used for mBART50 fine-tuning and student training.\ncontains the same set of sentences translated\nby professional translators across 101 languages.\nWe use the 927 sentences in the dev directory\nfor development and the 1,012 sentences in the\ndevtest directory for testing.9\nSub-word splitting. When using mBART50,\nsentences in all languages are tokenized with\nthe SentencePiece model (Kudo and Richardson,\n2018) provided with mBART50 (same model for\nall languages). To be consistent with mBART,\nwhose parameters are used to initialize mBART50\nbefore pre-training, mBART50 uses mBART’s\nSentencePiece model, which in turn was ob-\ntained using monolingual data for the 101 lan-\nguages in the XLM-R pre-trained model (Con-\nneau et al., 2020). 
Consequently, this Senten-\ncePiece model (with a vocabulary of 250k to-\nkens) already supports languages beyond the 50\nlanguages in mBART50 pre-training, including\nSwahili and Kyrgyz. Sub-word tokens for these\nlanguages are thus present in the embedding table\nof mBART50, but their parameters were not up-\ndated during mBART50’s pre-training10except for\nthose tokens shared with some of the 50 languages.\nMoreover, as the SentencePiece model is jointly\ncomputed for 101 languages, it may split words in\nSwahili or Kyrgyz in sub-optimal ways. To avoid\nthese issues, we obtained two new joint Sentence-\nPiece models of 10,000 tokens each for English–\nSwahili and English–Kyrgyz. We then filtered the\nembedding table of mBART50 out by removing\n9FLORES-101 contains a third of sentences from Wikinews\n(news articles), a third from Wikijunior (non-fiction children\nbooks), and a third from Wikivoyage (a travel guide).\n10They were not updated during mBART’s denoising pre-\ntraining, since neither Swahili nor Kyrgyz corpora were in\nthe training data of mBART.\nthose tokens that were not included in the new Sen-\ntencePiece vocabulary. Finally, we extended the\nembedding table to include every new token in the\nSentencePiece vocabulary.11The already learned\nembeddings are thus kept for those tokens already\nincluded in the original token set. This procedure\nmay also be applied to new languages not in the\noriginal mBART50’s SentencePiece model, even if\nthey have a new alphabet. As regards the students\nand the baseline bilingual models, we computed\na different joint bilingual SentencePiece model for\neach language pair using the bilingual training cor-\npora and a vocabulary of 10,000 tokens.\nTraining. When training and fine-tuning,\nwe used a learning rate of 0.0007 with the\nAdam (Kingma and Ba, 2015) optimizer ( β1=0.9,\nβ2=0.98), 8,000 warm-up updates and 4,000\nmax tokens. We trained with a dropout of\n0.1 and updated the model every 5,000 steps.\nValidation-based early stopping on the FLORES-\n101 development set was carried out as a form of\nregularization to prevent over-fitting. The cross-\nentropy loss with label smoothing was computed\non the development set after every epoch and the\nbest checkpoint was selected after 6 validation\nsteps with no improvement.\n4 Results and discussion\nTable 2 shows, for the different language pairs and\nsystems evaluated, the mean and standard devia-\ntion of the BLEU score computed on the test set\nafter three different runs. The systems evaluated\nare the following: i) baseline models trained on the\navailable parallel corpora, using the same architec-\nture as the students, followed by iterative back-\ntranslation with the same monolingual corpora\nused in other set-ups for the teacher; ii) mBART50\nwithout further fine-tunning; iii) teacher models\nafter their fine-tuning; and iv) the three different\nstudent configurations explained next. Note that\nfor the teacher models only the results of a sin-\ngle run are provided as their parameters are ini-\ntialized to those of the pre-trained model. 
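Before turning to the individual configurations, the embedding-table trimming described in the sub-word splitting paragraph above can be sketched as follows; how the original vocabulary and embedding matrix are extracted from the mBART50 checkpoint is toolkit-specific and only assumed here, and special tokens and language codes are omitted for brevity.

```python
import torch
import sentencepiece as spm

# Sketch of the embedding-table trimming described above: build the new
# bilingual SentencePiece vocabulary, reuse the pre-trained vectors for tokens
# the model already covers, and randomly initialise vectors for new tokens.
# `old_vocab` (token -> row index) and `old_embeddings` are assumed to have
# been read from the checkpoint; special tokens and language codes would also
# need to be carried over in a real setting.

def trim_embeddings(old_vocab: dict, old_embeddings: torch.Tensor,
                    spm_model: str) -> torch.Tensor:
    sp = spm.SentencePieceProcessor(model_file=spm_model)
    dim = old_embeddings.size(1)
    new_size = sp.get_piece_size()
    new_embeddings = torch.empty(new_size, dim).normal_(mean=0.0, std=dim ** -0.5)
    reused = 0
    for new_idx in range(new_size):
        piece = sp.id_to_piece(new_idx)
        old_idx = old_vocab.get(piece)
        if old_idx is not None:          # token already known to the pre-trained model
            new_embeddings[new_idx] = old_embeddings[old_idx]
            reused += 1
    print(f"reused {reused}/{new_size} embeddings from the pre-trained model")
    return new_embeddings
```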
The\nthree different student configurations are “Student\nBack”, which refers to the student models trained\non synthetic parallel corpora generated by running\nthe teacher model from target to source; “Student\nFwd”, which refers to the students trained on syn-\nthetic parallel corpora obtained by translating from\n11The number of model parameters after this trimming proce-\ndure decreases from 680M to approximately 370M.source to target with the teacher model; and “Stu-\ndent All”, which refers to students trained on both\nforward and backward translations.\nAs can be seen, when English is the target lan-\nguage, the student models lag further behind the\nteacher models as compared to when English is the\nsource language: the difference with the best stu-\ndent models (“Student All” in all cases) is around\n3 BLEU points, being the minimum difference of\n1.82 BLEU points ( ky-en) and the maximum dif-\nference of 3.80 BLEU points ( my-en). This is\nclearly motivated by the fact that the English-to-\nmany mBART50 translates from one language to\n50 languages, whereas the many-to-English model\nonly generates English. The latter is therefore spe-\ncialized in generating English texts. As the stu-\ndent models have been trained on much less En-\nglish corpora than mBART50, they are not able to\nmatch the performance of mBART50 when trans-\nlating into English. Alternative evaluation met-\nrics, such as chrF (Popovi ´c, 2015) or spBLEU (see\nbelow), show the same trend; consequently, only\nBLEU scores are reported in Table 2.\nThe best student models consistently improve\nthe results of the bilingual baselines by a wide mar-\ngin, thus confirming the appropriateness of con-\nsidering large pre-trained models as the seed for\nNMT models and the effectiveness of our pipeline.\nAs regards the low BLEU scores attained by the\nbilingual baseline models involving Kyrgyz, our\nresults match the pattern described by Nekoto et al.\n(2020), who observed that 8 out of 9 low-resource\nNMT systems for African languages trained on\nJW300 generalized very poorly in human evalu-\nations when shifting to domains such as TED talks\nor COVID-19 surveys; they concluded that the val-\nidation score on the JW300 test set was misleading\nas it overestimated the model quality.\nImpact of forward and backward translations.\nAs seen in Table 2, the models trained using\nboth forward and backward translations gener-\nated by the teacher model (Student All) are the\nbest performing ones (except for en-my where\nStudent Fwd performs slightly better). Contrary\nto intuition, the use of forward translations when\nEnglish is the source language results in better\nperformance than the use of backward translations\nwhen English is the target. This may be due\nto the fact that the amount of monolingual text\nused in Student Fwd is much larger than that of\nStudent Back, because the amount of monolingual\nModel en-mk mk-en en-my my-en en-sw sw-en en-ky ky-en\nBaseline 28.7±.234.1±.113.4±.417.5±.426.3±2.427.2±5.10.1±.11.1±.1\nmBART50 23.1 33.1 13.5 22.5 – – – –\nTeacher 32.1 40.0 16.5 24.6 31.8 36.3 9.1 17.0\nStudent All 31.0±.536.3±.316.9±.720.8±.533.3±.133.1±.29.2±.215.2±.4\nStudent Back 28.8±.834.9±.611.7±.520.7±.429.8±.132.5±.38.3±.315.0±.3\nStudent Fwd 30.5±.534.7±.517.0±.11.0±.332.7±.430.3±.18.9±.113.8±.2\nTable 2: BLEU scores for the different NMT models. 
Burmese reference has been processed with Pyidaungsu.\nModel Synthetic Discarded ∆BLEU\nen-mkBack 2 292 343 29.49% -0.01\nFwd 2 994 928 18.84% 1.18\nmk-enBack 2 994 928 18.84% 0.39\nFwd 2 292 343 29.49% 0.08\nen-myBack 600 934 76.40% 11.35\nFwd 2 934 522 6.10% 0.21\nmy-enBack 2 934 522 6.10% -0.07\nFwd 600 934 76.40% 0.94\nen-swBack 454 796 7.69% 0.14\nFwd 2 986 535 4.58% -0.10\nsw-enBack 2 986 535 4.58% 0.42\nFwd 454 796 7.69% 0.31\nen-kyBack 1 109 097 29.88% 0.26\nFwd 2 988 350 10.25% -0.16\nky-enBack 2 988 350 10.25% 0\nFwd 1 109 097 29.88% -0.20\nTable 3: Number of synthetic sentences and percentage of\nsentences discarded by Bicleaner-AI. The ∆BLEU column\nshows the improvement in terms of BLEU when the student\nmodels are trained with the filtered corpora (see Table 2) over\nusing the whole corpus.\ncorpora available in English is higher, and in each\niteration of back-translation one million English\nsentences are added and translated. The my-en\nStudent Fwd model produces remarkably poor\nresults, most probably because of the differences\nin Burmese segmentations between our texts and\nthe original training corpora, which may challenge\nmBART50’s processing capabilities and result in\ntranslation errors or hallucinations that hinder the\nstudent model’s learning. The impact of using\nsynthetic English as the target language is more\npronounced, as demonstrated by the performance\nof the en-myStudent Back model trained on the\nsame corpus. A more thorough investigation of\nthis phenomenon is leaved for future work.\nImpact of synthetic corpus filtering. Table 3\nshows the percentage of synthetic corpora dis-\ncarded when using the same scores we used dur-ing the incremental iterative back-translation fine-\ntuning of the teacher model. The differences in\nBLEU scores between the student models trained\non the filtered corpus and those trained on the\nwhole synthetic corpus is shown in the ∆BLEU\ncolumn, where a positive value means that filter-\ning is effective. Note that only a few small neg-\native values exist and that most of them are posi-\ntive, even though in some cases the proportion of\ndiscarded sentences is quite significant.\nAs regards the average threshold used with\nBicleaner-AI for each language pair, it is around\n0.4, although it ranges from 0.0 to 0.7 depend-\ning on the language pair. In addition to this,\nthe amount of synthetic sentence pairs discarded\nvaries considerably between language pairs. The\nlanguage pair for which this difference in more\npronounced is English–Burmese:12while for\nen-my the percentage of segments discarded is\n6.1% (threshold of 0.4), for my-en it is 76.4%\n(threshold of 0.3).13\nAs can be seen, when English is the synthetic\nlanguage, the percentage of discarded sentences is\nhigher. This could be due to the specialization of\nmBART50 in English generation, which may make\nit generate fluent sentences but not correct transla-\ntions. Although there could be noise in the corpus,\nthis noise has a different effect depending on the\nsize of the corpus and whether the synthetic lan-\nguage is used as the source or the target. Trans-\nformer’s noise tolerance can explain why, in the\nmajority of cases, corpus filtering does not affect\nthe BLEU scores. 
All in all, filtering is a good\npractice as it may lead to better scores or, at least,\nto a reduction in training time due to the removal\nof noisy sentence pairs.\n12Bicleaner-AI was trained on the same corpora in both cases.\n13The large number of discarded segments contributes to the\nextremely low score of the Student Fwd my-en model in Ta-\nble 2.\nImpact of distillation on efficiency. Compared\nto the teacher models, the student models with 13\ntimes fewer parameters demonstrate a remarkable\nincrease in inference speed: 61% faster on one\nGPU NVIDIA A100, and 92% on an Intel i5 2.9\nGHz CPU (both measured as the fraction of the\nteacher’s execution time we can save by switching\nto the student). For example, on the GPU, using\nfairseq interactive with a beam search\nof 5 and maximum number of tokens of 4,000,\ntheen-mk teacher model takes around 900 sec-\nonds to translate the FLORES 101 devtest (31 to-\nkens/second), whereas the student model produces\nthe output in approximately 350 seconds (97 to-\nkens/second). The same teacher and student mod-\nels executed on CPU take 4,800 seconds (6 to-\nkens/second) and 400 seconds (87 tokens/second),\nrespectively.\nComparison with other models. Table 4 shows\na comparison in terms of spBLEU14between\nour models, including mBART50 without fine-\ntuning, and three prominent multilingual mod-\nels: M2M-124 (Goyal et al., 2021) and\nDeltaLM+Zcode (Yang et al., 2021) —the baseline\nand winner system at WMT 2021, respectively—\nand NLLB-200 (NLLB Team et al., 2022). As\ncan be seen, student models perform considerably\nbetter than DeltaM+Zcode when the target lan-\nguage is not English, except for en-mk . When\nthe target language is English, DeltaM+Zcode\nclearly outperforms the teacher and student mod-\nels. NLLB-200 matches or exceeds the results of\nother models in all languages, but is by far the\nlargest model in the comparison. Our students are\nnoticeably smaller, but note that both M2M-124\nand DeltaLM+Zcode are one-size-fits-all models\nwhich have not been bilingually fine-tuned.\n5 Related work\nMultilingual NMT models. A large amount of\npre-trained multilingual NMT models15have been\n14As good tokenizers are not always available for low-\nresource languages, spBLEU (Goyal et al., 2021) has been\nproposed as an evaluation metric. spBLEU applies Senten-\ncePiece (Kudo and Richardson, 2018) to both the output and\nthe reference translation before computing BLEU. As all our\nlanguages are part of FLORES-101, the pre-computed Sen-\ntencePiece model of 256k tokens provided by its develop-\ners at https://github.com/facebookresearch/\nflores\\#spm-bleu has been used.\n15We omit discussion of general multilingual text-to-text\nmodels such as DeltaLM (Ma et al., 2021), mT5 (Xue et al.,\n2021) or mT6 (Chi et al., 2021) that were not specifically de-developed in the last years: NLLB-200 (NLLB\nTeam et al., 2022), CRISS (Tran et al., 2020),\nDeltaLM (Ma et al., 2021), M2M-100 (Fan et\nal., 2021), M2M-12416(Goyal et al., 2021),\nmBART50 (Tang et al., 2021), SixT (Chen et al.,\n2021), and SixT +(Chen et al., 2022), to name\nbut a few. In most cases, their encoders and de-\ncoders are initialized from cross-lingual encoder-\nlike pre-trained models, mainly XLM-R (Conneau\net al., 2020), or full cross-lingual models such as\nmBART (Liu et al., 2020).\nThe number of supported languages varies,\nranging from a few to around 100, mainly those\nin the OPUS-10017or FLORES-101 (Goyal et al.,\n2021) corpora. 
Recently, larger models supporting\nup to 200 (NLLB Team et al., 2022) or even around\n1000 (Bapna et al., 2022) languages have ap-\npeared. mBART50 can be seen as a medium-size\nEnglish-centric model supporting 50 languages.\nA number of common training techniques such\nas iterative back-translation are exploited by most\nmodels. Additionally, every model incorpo-\nrates distinctive elements: language-specific lay-\ners (Zhang et al., 2020; Fan et al., 2021); remov-\ning of residual connections in the encoder to mi-\nnorate language-specific representations by reduc-\ning the influence of positional information (Chen\net al., 2022); adding a mixture of experts sub-\nlayer to significantly improve the representabil-\nity of low-resource languages while maintaining\nthe same inference and training efficiency (NLLB\nTeam et al., 2022); modification of the decoder\nto have interleaved layers with self-attention and\ncross-attention so that the former are randomly ini-\ntialized but the latter can be paired with the cor-\nresponding layers in an encoder-like pre-trained\nmodel (Ma et al., 2021); or rescaling the gradients\nso that performance for low-resource languages\nimproves (Li and Gong, 2021).\nPre-training is based on monolingual mask-\ning/corruption and, optionally, translation pair\nmasking/corruption, but for some models, such as\nDeltaLM+Zcode (Yang et al., 2021), this kind of\ndenoising tasks are learned at the same time they\nare fine-tuned for MT. DeltaLM+Zcode (Yang et\nal., 2021) is based on DeltaLM (Ma et al., 2021)\nand can be considered as one of the best current\nsigned for MT, although they could be fine-tuned to do so.\n16An extended version of M2M-100 that includes all the lan-\nguages in the FLORES-101 dataset.\n17https://opus.nlpl.eu/opus-100.php\nModel # params en-mk mk-en en-my my-en en-sw sw-en en-ky ky-en\nNLLB-200 54.5B 42.4 47.9 24.2 33.7 37.9 48.729.927.5\nM2M-124 615M 33.8 33.7 - 10.0 26.9 30.4 4.511.4\nDeltaLM+Zcode 1013M 42.4 45.6 - 24.2 34.4 36.719.822.1\nDeltaLM+Zcode 711M 35.9 42.4 - 19.7 27.7 32.813.620.9\nmBART50 680M 28.3 34.9 26.8 23.7 - - - -\nTeacher 680M 39.1 41.5 31.1 26.2 36.3 37.221.919.0\nOur best student 50M 38.1 38.0 31.3 22.1 38.0 33.822.517.3\nTable 4: spBLEU scores on the FLORES-101 testset for three large, non-English-centric multilingual pre-trained models (Yang\net al., 2021) and our fine-tuned English-centric mBART50-based teachers and best performing student models. The results for\nthe en-my column were calculated after segmenting the reference and model output with pyidaungsu; as the output translations\nof some of the models have not been published, the corresponding scores in that column are not provided.\nmultilingual NMT systems,18translating all direc-\ntions across the 101 languages in the FLORES-101\ndataset. Its training process exploits multiple fac-\ntors such as an incremental architecture, genera-\ntion of pseudo-parallel synthetic data, curriculum\nlearning to progressively reduce the influence of\nthe denoising tasks, and iterative back-translation.\nFine-tuning of multilingual models. Birch et\nal. (2021) fine-tuned mBART50 via curriculum\nlearning and back-translation to obtain competitive\nEnglish–Pashto NMT systems. Lee et al. (2022)\nevaluated mBART50 on 10 languages, all disjoint\nwith ours. Liu et al. 
(2021) improved mBART’s\nperformance on NMT with new languages by pre-\ntraining with a denoising task on mixed-language\nsentences containing masked tokens, removed\ntokens, or words replaced by their English coun-\nterparts obtained from unsupervised bilingual\ndictionaries (Lample et al., 2018). Similar mixed-\nlanguage sentences that allow the system to align\nrepresentations between English and the new\nlanguages were also used in the mRASP2 (Pan et\nal., 2021) model. Adelani et al. (2022) fine-tuned\nM2M-100 for African languages by mapping the\ncodes of languages not included in the pre-training\nto the codes of already included languages. A par-\nallel line of research ( ¨Ust¨un et al., 2021; Stickland\net al., 2021) adds language-specific information\nfor unseen languages in the form of adapters which\nare pre-trained with monolingual data and then\nfine-tuned with bilingual data. The NMT-Adapt\nmethod (Ko et al., 2021) initializes the transformer\nwith mBART and then jointly optimizes a combi-\nnation of tasks including high-resource translation,\nlow-resource back-translation, monolingual de-\nnoising of all languages, and adversarial training\n18DeltaLM+Zcode won the task on Large-Scale Multilingual\nMachine Translation of WMT 2021 (Wenzek et al., 2021).to obtain universal representations. Finally, Alabi\net al. (2022) perform monolingual fine-tuning\nof pre-trained multilingual models on unseen\nrepresentative African languages.\n6 Concluding remarks\nIn this paper, we have presented a pipeline to\ntune large NMT pre-trained models, and distill the\nknowledge in the fine-tuned teachers to build stu-\ndent models using far fewer parameters. In order\nto fine-tune the teacher model we apply an iter-\native back-translation procedure that integrates a\nBicleaner-AI classifier based on XLM-R to dis-\ncard poor quality translations. We have demon-\nstrated that filtering yields benefits in the majority\nof cases, without causing harm in any instance.\nOur approach has been tested on the English-\ncentric mBART50 pre-trained model and on four\ndifferent low-resource languages, translating to\nand from English. The languages belong to dif-\nferent language families and two of them were not\npart of the pre-training stage of mBART50. The re-\nsults show two clear trends, depending on whether\nEnglish is the source or the target language. When\ntranslating from English, our student models out-\nperform the teacher models or perform compara-\nbly. When translating into English, the teacher\nmodels clearly outperform the student models. In\nany case, the student models have 13 times fewer\nparameters and are 92% faster when translating on\na regular CPU, which makes them suitable for af-\nfordable computational devices.\nWe leave the in-depth exploration of alternative\nmodels such as SixT +, NLLB-200 or DeltaLM as\nfuture work. We also plan to extend our pipeline\nwith monolingual and bilingual denoising tasks,\nespecially for unseen languages, as well as to ex-\nplore a larger number of language combinations.\nAcknowledgments\nThis paper is part of the R+D+i project PID2021-\n127999NB-I00 funded by the Spanish Ministry\nof Science and Innovation (MCIN), the Spanish\nResearch Agency (AEI/10.13039/501100011033)\nand the European Regional Development Fund\nA way to make Europe. The computational re-\nsources used were funded by the European Re-\ngional Development Fund through project ID-\nIFEDER/2020/003.\nReferences\nAdelani, David Ifeoluwa, Jesujoba Oluwadara Alabi,\nAngela Fan, et al. 
2022. A few thousand translations\ngo a long way! Leveraging pre-trained models for\nafrican news translation.\nAlabi, Jesujoba Oluwadara, David Ifeoluwa Adelani,\nMarius Mosbach, and Dietrich Klakow. 2022. Mul-\ntilingual language model adaptive fine-tuning: A\nstudy on african languages. In 3rd Workshop on\nAfrican Natural Language Processing .\nBapna, Ankur, Isaac Caswell, Julia Kreutzer, et al.\n2022. Building machine translation systems for the\nnext thousand languages.\nBirch, Alexandra, Barry Haddow, Antonio Valerio\nMiceli Barone, et al. 2021. Surprise language chal-\nlenge: Developing a neural machine translation sys-\ntem between Pashto and English in two months. In\nProc. of Machine Translation Summit XVIII: Re-\nsearch Track , pages 92–102.\nBommasani, Rishi, Drew A. Hudson, Ehsan Adeli,\net al. 2021. On the opportunities and risks of foun-\ndation models. CoRR , abs/2108.07258.\nBrown, Tom, Benjamin Mann, Nick Ryder, et al.\n2020. Language models are few-shot learners. In\nAdvances in Neural Information Processing Systems ,\nvolume 33, pages 1877–1901.\nChen, Guanhua, Shuming Ma, Yun Chen, et al. 2021.\nZero-shot cross-lingual transfer of neural machine\ntranslation with multilingual pretrained encoders. In\nProc. of the 2021 Conference on Empirical Methods\nin Natural Language Processing , pages 15–26.\nChen, Guanhua, Shuming Ma, Yun Chen, et al. 2022.\nTowards making the most of multilingual pretraining\nfor zero-shot neural machine translation. In Proc. of\nACL.\nChi, Zewen, Li Dong, Shuming Ma, et al. 2021. mT6:\nMultilingual pretrained text-to-text transformer with\ntranslation pairs. In Proc. of the 2021 Conference on\nEmpirical Methods in Natural Language Processing ,\npages 1671–1683.Conneau, Alexis, Kartikay Khandelwal, Naman Goyal,\net al. 2020. Unsupervised cross-lingual representa-\ntion learning at scale. In Proc. of the 58th Annual\nMeeting of the ACL , pages 8440–8451.\nDe Pauw, Guy, Peter Waiganjo Wagacha, and Gilles-\nMaurice de Schryver. 2009. The SAWA corpus: A\nparallel corpus English–Swahili. In Proc. of the First\nWorkshop on Language Technologies for African\nLanguages , pages 9–16.\nDevlin, Jacob, Ming-Wei Chang, Kenton Lee, and\nKristina Toutanova. 2019. BERT: Pre-training of\ndeep bidirectional transformers for language under-\nstanding. In Proc. of the 2019 Conference of the\nNorth American Chapter of the Association for Com-\nputational Linguistics: Human Language Technolo-\ngies, pages 4171–4186.\nFan, Angela, Shruti Bhosale, Holger Schwenk, et al.\n2021. Beyond English-centric multilingual machine\ntranslation. Journal of Machine Learning Research ,\n22(107):1–48.\nGoyal, Naman, Cynthia Gao, Vishrav Chaudhary, et al.\n2021. The FLORES-101 evaluation benchmark for\nlow-resource and multilingual machine translation.\nCoRR , abs/2106.03193.\nHoang, Vu Cong Duy, Philipp Koehn, Gholamreza\nHaffari, and Trevor Cohn. 2018. Iterative back-\ntranslation for neural machine translation. In Proc.\nof the 2nd Workshop on Neural Machine Translation\nand Generation , pages 18–24.\nJoshi, Pratik, Sebastin Santy, Amar Budhiraja, et al.\n2020. The state and fate of linguistic diversity and\ninclusion in the NLP world. In Proc. of the 58th An-\nnual Meeting of the ACL , pages 6282–6293.\nKim, Yoon and Alexander M. Rush. 2016. Sequence-\nlevel knowledge distillation. In Proc. of the 2016\nConference on Empirical Methods in Natural Lan-\nguage Processing , pages 1317–1327.\nKingma, Diederik P. and Jimmy Ba. 2015. Adam: A\nmethod for stochastic optimization. 
In 3rd Inter-\nnational Conference on Learning Representations,\nICLR 2015, Conference Track Proc.\nKo, Wei-Jen, Ahmed El-Kishky, Adithya Renduchin-\ntala, et al. 2021. Adapting high-resource NMT mod-\nels to translate low-resource related languages with-\nout parallel data. In Proc. of the 59th Annual Meeting\nof the ACL and the 11th IJCNLP , pages 802–812.\nKudo, Taku and John Richardson. 2018. Sentence-\nPiece: A simple and language independent subword\ntokenizer and detokenizer for neural text process-\ning. In Proc. of the 2018 Conference on Empirical\nMethods in Natural Language Processing: System\nDemonstrations , pages 66–71.\nLample, Guillaume, Alexis Conneau, Marc’Aurelio\nRanzato, et al. 2018. Word translation without par-\nallel data. In International Conference on Learning\nRepresentations .\nLee, En-Shiun Annie, Sarubi Thillainathan, Shra-\nvan Nayak, et al. 2022. Pre-trained multilingual\nsequence-to-sequence models: A hope for low-\nresource language translation?\nLewis, Mike, Yinhan Liu, Naman Goyal, et al.\n2020. BART: Denoising sequence-to-sequence pre-\ntraining for natural language generation, translation,\nand comprehension. In Proc. of the 58th Annual\nMeeting of the ACL , pages 7871–7880.\nLi, Xian and Hongyu Gong. 2021. Robust optimization\nfor multilingual translation with imbalanced data. In\nAdvances in Neural Information Processing Systems .\nLi, Zhenhao and Lucia Specia. 2019. Improving neu-\nral machine translation robustness via data augmen-\ntation: Beyond back-translation. In Proc. of the 5th\nWorkshop on Noisy User-generated Text (W-NUT\n2019) , pages 328–336.\nLiu, Yinhan, Jiatao Gu, Naman Goyal, et al. 2020. Mul-\ntilingual denoising pre-training for neural machine\ntranslation. Transactions of the Association for Com-\nputational Linguistics , pages 726–742.\nLiu, Zihan, Genta Indra Winata, and Pascale Fung.\n2021. Continual mixed-language pre-training for ex-\ntremely low-resource neural machine translation. In\nFindings of the Association for Computational Lin-\nguistics: ACL-IJCNLP 2021 , pages 2706–2718.\nMa, Shuming, Li Dong, Shaohan Huang, et al. 2021.\n∆LM: Encoder-decoder pre-training for language\ngeneration and translation by augmenting pretrained\nmultilingual encoders.\nNekoto, Wilhelmina, Vukosi Marivate, Tshinondiwa\nMatsila, et al. 2020. Participatory research for low-\nresourced machine translation: A case study in\nAfrican languages. In Findings of the Association\nfor Computational Linguistics: EMNLP 2020 , pages\n2144–2160.\nNLLB Team, Marta R. Costa-juss `a, James Cross, et al.\n2022. No language left behind: Scaling human-\ncentered machine translation.\nOrtiz Su ´arez, Pedro Javier, Laurent Romary, and Beno ˆıt\nSagot. 2020. A monolingual approach to contextual-\nized word embeddings for mid-resource languages.\nInProc. of the 58th Annual Meeting of the ACL ,\npages 1703–1714.\nPan, Xiao, Mingxuan Wang, Liwei Wu, and Lei Li.\n2021. Contrastive learning for many-to-many mul-\ntilingual neural machine translation. In Proc. of the\n59th Annual Meeting of the ACL and the 11th IJC-\nNLP, pages 244–258.\nPopovi ´c, Maja. 2015. chrF: character n-gram F-score\nfor automatic MT evaluation. In Proceedings of the\nTenth Workshop on Statistical Machine Translation ,\npages 392–395, Lisbon, Portugal, September. Asso-\nciation for Computational Linguistics.Stickland, Asa Cooper, Xian Li, and Marjan\nGhazvininejad. 2021. Recipes for adapting pre-\ntrained monolingual and multilingual models to ma-\nchine translation. In Proc. 
of the 16th Conference of\nthe EACL , pages 3440–3453.\nTang, Yuqing, Chau Tran, Xian Li, et al. 2021. Mul-\ntilingual translation from denoising pre-training. In\nFindings of the Association for Computational Lin-\nguistics: ACL-IJCNLP 2021 , pages 3450–3466.\nTouvron, Hugo, Thibaut Lavril, Gautier Izacard, et al.\n2023. Llama: Open and efficient foundation lan-\nguage models.\nTran, Chau, Yuqing Tang, Xian Li, and Jiatao Gu. 2020.\nCross-lingual retrieval for iterative self-supervised\ntraining. In Advances in Neural Information Pro-\ncessing Systems , volume 33, pages 2207–2219.\nTran, Chau, Shruti Bhosale, James Cross, et al. 2021.\nFacebook AI WMT21 news translation task submis-\nsion. In Proc. of the Sixth Conference on Machine\nTranslation (WMT) , pages 205–215.\n¨Ust¨un, Ahmet, Alexandre Berard, Laurent Besacier,\nand Matthias Gall ´e. 2021. Multilingual unsuper-\nvised neural machine translation with denoising\nadapters. In Proc. of the 2021 Conference on Empiri-\ncal Methods in Natural Language Processing , pages\n6650–6662.\nVaswani, Ashish, Noam Shazeer, Niki Parmar, et al.\n2017. Attention is all you need. In Advances in Neu-\nral Information Processing Systems , volume 30.\nWenzek, Guillaume, Vishrav Chaudhary, Angela Fan,\net al. 2021. Findings of the WMT 2021 shared task\non large-scale multilingual machine translation. In\nProc. of the Sixth Conference on Machine Transla-\ntion, pages 89–99.\nXue, Linting, Noah Constant, Adam Roberts, et al.\n2021. mT5: A massively multilingual pre-trained\ntext-to-text transformer. In Proc. of the 2021 Con-\nference of the North American Chapter of the Asso-\nciation for Computational Linguistics: Human Lan-\nguage Technologies , pages 483–498.\nYang, Jian, Shuming Ma, Haoyang Huang, et al. 2021.\nMultilingual machine translation systems from Mi-\ncrosoft for WMT21 shared task. In Proc. of the Sixth\nConference on Machine Translation , pages 446–455.\nZaragoza-Bernabeu, Jaume, Marta Ba ˜n´on, Gema\nRam´ırez-S ´anchez, and Sergio Ortiz-Rojas. 2022.\nBicleaner AI: Bicleaner goes neural. In Proc. of\nthe Language Resources and Evaluation Conference\n(LREC) .\nZhang, Biao, Philip Williams, Ivan Titov, and Rico Sen-\nnrich. 2020. Improving massively multilingual neu-\nral machine translation and zero-shot translation. In\nProc. of the 58th Annual Meeting of the ACL , pages\n1628–1639.", "main_paper_content": null }
{ "decision": "Unknown", "reviews": [] }
0
0
[]
[]
{ "id": "VMwzE4T9Qg", "year": null, "venue": "EAMT 2023", "pdf_link": "https://aclanthology.org/2023.eamt-1.10.pdf", "forum_link": "https://openreview.net/forum?id=VMwzE4T9Qg", "arxiv_id": null, "doi": null }
{ "title": "Empirical Analysis of Beam Search Curse and Search Errors with Model Errors in Neural Machine Translation", "authors": [ "Jianfei He", "Shichao Sun", "Xiaohua Jia", "Wenjie Li" ], "abstract": null, "keywords": [], "raw_extracted_content": "Empirical Analysis of Beam Search Curse and Search Errors with Model\nErrors in Neural Machine Translation\nJianfei He1, Shichao Sun2, Xiaohua Jia1, Wenjie Li2\n1City University of Hong Kong, Tat Chee Avenue, Kowloon, Hong Kong\n2The Hong Kong Polytechnic University, Hung Hom, Kowloon, Hong Kong\[email protected], [email protected]\[email protected], [email protected]\nAbstract\nBeam search is the most popular decoding\nmethod for Neural Machine Translation\n(NMT) and is still a strong baseline com-\npared with the newly proposed sampling-\nbased methods. To better understand the\nbeam search, we investigate its two well-\nrecognized issues, beam search curse and\nsearch error, not only on the test data as\na whole but also at the sentence level. We\nfind that only less than 30% of sentences in\nthe WMT17 En–De and De–En test set ex-\nperience these issues. Meanwhile, there is\na related phenomenon. For the majority of\nsentences, their gold references get lower\nprobabilities than the predictions from the\nbeam search. We also test with differ-\nent levels of model errors including a spe-\ncial test using training samples and mod-\nels without regularization. In this test, the\nmodel has an accuracy of 95% in predict-\ning the tokens on the training data. We\nfind that these phenomena still exist even\nfor such a model with very high accuracy.\nThese findings show that it is not promis-\ning to improve the beam search by seeking\nhigher probabilities and further reducing\nthe search errors in decoding. The relation-\nship between the quality and the probabil-\nity at the sentence level in our results pro-\nvides useful information to find new ways\nto improve NMT.\n© 2023 The authors. This article is licensed under a Creative\nCommons 4.0 licence, no derivative works, attribution, CC-\nBY-ND.1 Introduction\nBeam search has been the most popular decoding\n(inference) method for Neural Machine Transla-\ntion (NMT) (Bahdanau et al., 2014). Fernandes\net al. (2022)1and our experimental results (in Ap-\npendix A) show that the beam search is still a very\nstrong baseline compared with the recently pro-\nposed sampling-based methods, including Top-k\nsampling, Nucleus (Top-p) sampling (Holtzman et\nal., 2019) and Minimum Bayes Risk (MBR) de-\ncoding (Eikema and Aziz, 2021; Freitag et al.,\n2022). This is verified with different evaluation\nmethods: BLEU, Meteor, and Comet (Rei et al.,\n2020).\nMeanwhile, there are still open issues deserving\nfurther exploration for the beam search.\nOne widely recognized issue is a phenomenon\ncalled beam search curse (Koehn and Knowles,\n2017; Yang et al., 2018; Meister et al., 2020).\nBeam search tends to get worse performance when\nthe beam size increases. This issue is counter-\nintuitive. Usually, it is expected that using a larger\nbeam size finds a sequence with higher probability\nin the search space and gets better quality.\nAnother issue is search error (Stahlberg and\nByrne, 2019; Shi et al., 2020), which means\nthat the beam search as a heuristic method is not\nguaranteed to find the sequence with the largest\nprobability in the search space. Stahlberg and\nByrne (2019) implement exact search which can\nfind the global maximum for experiments. 
They\nuse it to assess the search errors in the beam search.\nThis paper aims to better understand these two\n1Their conclusion is that MBR with Comet as the utility func-\ntion outperforms the beam search if Comet is also used as the\nmetrics. But if BLEU is used as the metrics, the beam search\nis still the best for the large models as shown in their Table 1\nand Table 2.\nissues via empirical analysis.\nWe look into beam search curse at the sentence\nlevel. Although the beam search curse is consis-\ntently verified on the whole test set at the cor-\npus level, only a small portion of sentences suffer\nfrom this issue. One-sixth of sentences in WMT17\nEn–De and De–En test sets get worse translations\nwhen the beam size increases, meanwhile a similar\nnumber of sentences get better translations. One of\nthe reasons for the beam search curse is model er-\nror, which means that the model is not well fitted\nto the data. We investigate the beam search curse\nusing the model checkpoints with different valida-\ntion accuracies. We find that there is no strong cor-\nrelation between the beam search curse and model\naccuracy if the corpus BLEU score is used for eval-\nuation. But there is an obvious correlation using\ntheoracle BLEU score.\nWe assess search error using exact search with\na length constraint. Exact search can be regarded\nas a beam search with its beam size as large as\nthe size of vocabulary. We find that only less than\n30% of sentences suffer from search errors using\nthe beam search even with a small beam size like\n5. For the majority of sentences, beam search can\ngenerate the sequences with the largest probability.\nWe also compare exact search with beam search\nin terms of the quality of the predictions. Exact\nsearch gets significantly worse BLEU scores than\nbeam search at the corpus level. At the sentence\nlevel, the number of sentences with worse quality\nfrom exact search is only slightly larger than those\nwith better quality. This result is consistent with\nthe experiments in the beam search curse issue.\nOur experiments also demonstrate one phe-\nnomenon that is related to these two issues. The\nmajority of the gold references get lower probabil-\nities than the predictions from beam search. Al-\nthough beam search seeks the sequences with high\nprobability in principle, this result shows that it is\nthe wrong direction to further pursue larger proba-\nbilities and smaller search errors.\nTo investigate how beam search performs under\nvery low model errors, we test a special case. We\nuse models without regularization which have an\naccuracy of around 95% on training data. The test\ndata in this case are samples from training sets to\nreduce the mismatch of data distributions between\ntraining and testing. In this case, the phenomena\nabout exact search and gold references are still ob-\nserved.These findings may contribute to future im-\nprovements in decoding and training methods.\n2 Related Work\nThere are two approaches for decoding to-\nday: mode-seeking decoding and sampling-\nbased stochastic decoding. Mode-seeking is also\nknown as Maximum-A-Posteriori (MAP) decod-\ning (Smith, 2011; Eikema and Aziz, 2020). Its\nobjective is to predict a translation by searching\na sequence y?that maximizes log P (yjsrc;\u0012),\nwheresrc is the source sentence and \u0012is the\nmodel parameter set. Exact search (Stahlberg and\nByrne, 2019) aims to find the global maximum in\nthe whole search space. 
Due to the vast search space, exact search is intractable in real applications. Beam search (Lowerre, 1976; Graves, 2012) is used as a viable approximation: at each decoding step it extends only the N most probable partial solutions, where N is called the beam size. Beam search is widely used for NMT.

Recently, sampling-based stochastic decoding (Fan et al., 2018; Holtzman et al., 2019; Eikema and Aziz, 2021; Freitag et al., 2022) has been actively investigated. Sampling methods are used in decoding to obtain a set of candidate sequences, and a decision rule is then used to choose the final prediction among these candidates. Although these methods are used for open-ended text generation tasks such as story generation, Fernandes et al. (2022) and our experimental results (in Appendix A) show that beam search is still a very strong baseline compared with these sampling-based methods for NMT.

The beam search curse is recognized as one of six challenges in NMT (Koehn and Knowles, 2017). Murray and Chiang (2018) and Yang et al. (2018) attribute its root cause to the length ratio problem via empirical study: as the beam size increases, beam search tends to produce shorter predictions, which results in lower BLEU due to the brevity penalty in the definition of the BLEU score. However, length normalization is common practice and significantly mitigates the issue of short predictions, and the beam search curse also consistently appears with other evaluation methods such as Meteor and Comet. Cohen and Beck (2019) investigate the discrepancy gap, defined as the difference in log-probability between the most likely token and the chosen token. They find that the majority of discrepancies happen in early positions and that increasing the beam width leads to more early discrepancies. We investigate the beam search curse at the sentence level, which is orthogonal to their conclusion about the position of tokens.

Search error in NMT is intensively investigated by Stahlberg and Byrne (2019). They use an algorithm based on depth-first search to explore whether there is a sequence with a higher probability than the prediction from beam search. They also implement exact search to find the sequence with the largest probability in the search space.

In these studies, the beam search curse and search error are mainly investigated on the whole test set at the corpus level, not at the sentence level, and it is not investigated how these issues are related to model errors. Model error means that the model is not well fitted to the data.

3 Methodology

We choose the widely used language pairs En–De and De–En. Besides a standard test, we conduct a special cleanroom test to investigate the issues with very low model errors. Figure 1 depicts the distribution of sentence length in all test sets; comparing it with our experimental results shows that sentence length is not an influential factor in our conclusions.

Standard test In this test, we use Transformer Big and Transformer Base models and the corpora from WMT17 (http://www.statmt.org/wmt17): Europarl v7, News-commentary-v12 and Common Crawl for training, Newstest2014 for validation, and Newstest2017 (3004 sentence pairs) for testing.

Cleanroom test In this test, we investigate how the decoding methods work when the model fits the test data well, so that the model errors are very small.
For this purpose, we randomly select 2000 sentences from the training set and use them as the test data. To further reduce the model errors in this test, we use models without regularization: dropout (Srivastava et al., 2014) and label smoothing (Szegedy et al., 2016) are used in the Transformer as regularization methods to prevent neural networks from overfitting, and the models used in this test are trained with both methods turned off.

Models We use the notations below for the three models in our experiments.

• Base and Big for the normal Transformer Base and Transformer Big models. They use the regularization methods.

• NoReg for models based on Transformer Big except that they are trained with dropout and label smoothing turned off. These models have an accuracy larger than 95% on the training data.

Decoding methods For beam search, we use two beam sizes, 5 and 100, and compare their results to investigate the beam search curse. For exact search, we reimplement the algorithm of Stahlberg and Byrne (2019). In this algorithm, the search only extends a partial sequence if its probability is larger than a baseline value, and a large baseline value can speed up the exact search. We obtain the probabilities of the predictions from beam search with a series of beam sizes (1–20, 50, and 100) as well as the probability of the gold reference under the model, take the largest probability among these 23 instances for each sentence in the test set, and use it as the baseline value for the exact search. We sort the test sets by the baseline values in descending order, so that sentences with higher baseline values are translated before those with lower baseline values. We run the search on one Nvidia GF1080Ti GPU for nearly 100 days; Table 3 lists how many sentences are translated with the exact search. We apply one of the length constraints used by Stahlberg and Byrne (2019) for exact search: the length of the target sentence is constrained to be no less than 1/4 of the length of its source sentence. Stahlberg and Byrne (2019) also use tighter constraints to further mitigate the search errors, but since we aim to investigate the sentence-level details of exact search, we choose a loose and practical constraint.

Training and Evaluations Our implementation is based on the OpenNMT-tf toolkit (Klein et al., 2020) with a typical configuration (https://opennmt.net/OpenNMT-py/FAQ.html#how-do-i-use-the-transformer-model). The Base models are trained for 200,000 steps on 4 GPUs, while the Big and NoReg models are trained for 300,000 steps on 8 GPUs; all GPUs are Nvidia GF1080Ti. We use the unigram model (Kudo, 2018) in SentencePiece (https://github.com/google/sentencepiece) for subwords with 32,000 updates and use a shared vocabulary for source and target.

Figure 1: Histograms of sentence length for the test sets; the number of subwords is counted for each sentence. (a) Standard test set (En); (b) Standard test set (De); (c) 2000 samples from the training set (En); (d) 2000 samples from the training set (De).
For evaluation, we use BLEU, Meteor, and Comet to compare beam search with the sampling-based stochastic decoding methods; since the results are consistent, we stick to BLEU in the investigation of beam search. For BLEU, we use SacreBLEU (Post, 2018; https://github.com/mjpost/sacreBLEU) with the signature case.mixed+numrefs.1+smooth.exp+tok.13a+version.1.4.14. For Meteor (http://www.cs.cmu.edu/~alavie/METEOR/), we use version 1.5. For Comet (https://github.com/Unbabel/COMET), we use the wmt20-comet-da model.

4 Beam Search Curse

4.1 Only a Small Portion of Sentences Experience the Beam Search Curse

The beam search curse has been consistently verified at the corpus level. Our results in Table 1 demonstrate this issue through the comparison between beam size 5 and beam size 100, denoted as Beam5 and Beam100 respectively. However, our experiments reveal that the issue is not ubiquitous at the sentence level.

Table 1: Performance of beam search with beam sizes 5 and 100, denoted as Beam5 and Beam100 respectively. Columns: En–De Base (BLEU, Meteor, Comet), En–De Big (BLEU, Meteor, Comet), De–En Base (BLEU, Meteor, Comet), De–En Big (BLEU, Meteor, Comet).
Beam5   28.2 29.1 0.490 | 28.9 29.2 0.498 | 33.5 36.5 0.520 | 33.8 36.7 0.539
Beam100 27.7 26.0 0.450 | 27.4 28.8 0.426 | 33.5 36.5 0.521 | 33.2 36.5 0.527

We investigate the gap in sentence BLEU between Beam100 and Beam5 for each sentence. The results from the standard test using the Big model are shown in Table 2, which lists how many sentences in the standard test set get a larger, equal, or smaller sentence BLEU score from Beam100 compared with Beam5. A smaller sentence BLEU score from Beam100 implies the beam search curse for that sentence. Only about one-sixth of the sentences have this issue. For En–De, the number of sentences with the beam search curse is less than the number of sentences for which Beam100 performs better than Beam5.

Table 2: The number of sentences for which Beam100 gets a larger, equal, or smaller sentence BLEU than Beam5, denoted as >Beam5, =Beam5 and <Beam5 respectively.
        Total Sent.  >Beam5  =Beam5  <Beam5
En–De   3004         506     1968    530
De–En   3004         515     1976    513

Figure 2a illustrates the gap of sentence BLEU scores for En–De; sentences with a zero BLEU gap are not counted in this figure.

We also investigate the relationship between the gap of sentence BLEU and the gap of log-probability for each sentence, as illustrated in Figure 2b (sentences with a zero log-probability gap are not counted in that figure). For most sentences, Beam100 gets larger log-probabilities than Beam5: beam search with a larger beam size has more opportunities to find sequences with larger log-probabilities. The majority of sentences have small log-probability gaps, and for these sentences the gap of sentence BLEU is about equally likely to be positive or negative. When the log-probability gap increases, the BLEU gap tends to be more negative. This small portion of sentences results in worse quality at the corpus level. Potentially we can find a way to identify these sentences and apply a small beam size to them, while using a large beam size to improve the quality of the other sentences.

Figure 2: Investigating the beam search curse at the sentence level for En–De. (a) Gap of sentence BLEU: Beam100 minus Beam5. (b) Gap of log-probability (x-axis) and gap of sentence BLEU (y-axis): Beam100 minus Beam5.
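As an illustration of the sentence-level comparison behind Table 2, the sketch below counts how many sentences gain, tie, or lose in sentence BLEU when moving from Beam5 to Beam100. It assumes detokenized hypothesis and reference strings and the `sentence_bleu` helper of recent sacreBLEU versions; the file names are placeholders, not files from the paper.

```python
from collections import Counter
import sacrebleu  # assumes a recent sacreBLEU release with the sentence_bleu helper

def bleu_gap_counts(hyps_beam5, hyps_beam100, refs):
    """Count sentences where Beam100 gets a larger / equal / smaller
    sentence BLEU than Beam5 (the three columns of Table 2)."""
    counts = Counter({">Beam5": 0, "=Beam5": 0, "<Beam5": 0})
    for h5, h100, ref in zip(hyps_beam5, hyps_beam100, refs):
        b5 = sacrebleu.sentence_bleu(h5.strip(), [ref.strip()]).score
        b100 = sacrebleu.sentence_bleu(h100.strip(), [ref.strip()]).score
        if b100 > b5:
            counts[">Beam5"] += 1
        elif b100 < b5:
            counts["<Beam5"] += 1   # sentences showing the beam search curse
        else:
            counts["=Beam5"] += 1
    return counts

# Placeholder file names; one sentence per line, aligned across the three files.
with open("beam5.txt") as f5, open("beam100.txt") as f100, open("ref.txt") as fr:
    print(bleu_gap_counts(f5, f100, fr))
```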
We conduct experiments using out-of-domain test sets and obtain consistent results, which are illustrated in Appendix B.

4.2 Correlation between Beam Search Curse and Model Accuracy

It is an interesting question whether the beam search curse is mitigated for a model with higher accuracy. We record checkpoints every 10,000 steps, up to 300,000 steps, while training the Big model; their validation accuracies are depicted in Figure 3a. As shown in Figure 3b, we surprisingly find that there is no strong correlation between model accuracy and the beam search curse in terms of corpus BLEU.

However, we find two correlations related to model accuracy. One is the number of sentences with a zero gap: when model accuracy increases, Beam100 and Beam5 tend to have more sentences with the same BLEU score, as illustrated in Figure 3c. The other is the oracle corpus BLEU, which is calculated by using the gold references to pick the best predictions from the candidates. More candidates usually contain better oracle hypotheses, so it is not surprising that Beam100 has much better oracle BLEU scores than Beam5. The interesting result in Figure 3d is the strong correlation between the gap of the oracle corpus BLEU and model accuracy. This means that with higher model accuracy there are better candidates among the top 100, but the current Beam100 cannot make use of them to produce better predictions, because the usual beam search method uses the probabilities of the candidates to decide the final output. Better candidates do not necessarily have the larger probabilities, and they are probably discarded in the final decision. This implies a potential way to improve beam search: beam search may benefit from models with lower model errors, provided that we have a suitable reranking method for the candidates.

Figure 3: Investigating the correlation between the beam search curse and model accuracy. (a) Validation accuracy over training steps. (b) Gap of corpus BLEU: Beam100 minus Beam5. (c) Number of sentences with a zero BLEU gap. (d) Gap of oracle BLEU: Beam100 minus Beam5.

Table 3: Corpus BLEU of exact search (denoted as Exact) and comparison with Beam5. Total Sent. is the total number of sentences for which the exact search finishes translation; Δ is Exact minus Beam5. Columns <Beam5, =Beam5 and >Beam5 count the sentences for which exact search gets a lower, equal, or greater BLEU than Beam5.
Setting        Pair   Total Sent.  Exact  Beam5  Δ      <Beam5  =Beam5  >Beam5
Std+Big        En–De  2319         27.33  30.49  -3.16  431     1638    250
Std+Big        De–En  2375         32.80  35.70  -2.90  424     1701    250
Sample+NoReg   En–De  2000         52.47  53.80  -1.33  259     1606    135
Sample+NoReg   De–En  2000         58.51  60.23  -1.72  264     1623    113

5 Zero Search Error Gets Worse Quality

We compare the BLEU scores from exact search with Beam5 at both the corpus level and the sentence level. In our experiments, we find that a zero gap in sentence BLEU usually implies a zero probability gap as well, which means zero search error for Beam5.

The results at the sentence level in Table 3 reveal that beam search works quite well in terms of search error.
Even with a small beam size such as 5, beam search is capable of finding the sequence with the largest probability for about 70% of the sentences. Table 3 also shows that exact search gets significantly worse corpus BLEU scores than Beam5.

Figure 4a and Figure 4b show the results of the standard test with the Big model, and Figure 4c and Figure 4d show the results of the training samples with the NoReg model. In the latter case, where the model errors are very small, the gap in corpus BLEU is mitigated. But in both cases, when the gap of log-probability between the two methods increases, the gap of BLEU is more likely to be negative. In all four figures, sentences with a zero BLEU gap are not counted.

Figure 4: Comparison between exact search and Beam5 for En–De; all gaps are exact search minus Beam5. (a) Gaps of sentence BLEU: Std+Big. (b) Gaps of probability (x-axis) and gaps of sentence BLEU (y-axis): Std+Big. (c) Gaps of sentence BLEU: Sample+NoReg. (d) Gaps of probability (x-axis) and gaps of sentence BLEU (y-axis): Sample+NoReg.

6 Gold References Get Lower Probability than Predictions from Beam Search

The experiments above show that sequences with higher log-probabilities do not necessarily get better BLEU scores. This leads us to investigate the log-probabilities of the gold references. We find that gold references get lower log-probabilities than the predictions from beam search, even with very low model errors.

Figure 5a illustrates the gap of log-probability between the gold references and Beam5 for En–De. Only for a few sentences do the gold references have higher log-probabilities than the predictions of Beam5. Figure 5b demonstrates the strong correlation between the gap of log-probability (x-axis) and the sentence BLEU score of Beam5 (y-axis): when the gold references get lower log-probabilities than Beam5, the sentence BLEU scores of Beam5 decrease. These two figures are results from the standard test with the Big model. We also test using the training samples with the models without regularization; the results are illustrated in Figure 5c and Figure 5d. Comparing the two test cases, we find that the gaps are reduced in the latter case, where the model errors are smaller. However, the correlation between the log-probability and the sentence BLEU still exists, even for a model with an accuracy of 95% in the cleanroom test.

Figure 5: The gap of log-probability between the gold references and Beam5 for En–De; all gaps are gold reference minus Beam5. (a) Gap of log-probability: Std+Big. (b) Gap of log-probability and the related BLEU: Std+Big. (c) Gap of log-probability: Sample+NoReg. (d) Gap of log-probability and the related BLEU: Sample+NoReg.
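The gold-reference log-probabilities behind Figure 5 can be obtained by force-decoding the reference under the model (teacher forcing) and summing the per-token log-probabilities. The sketch below shows the idea for a generic encoder-decoder; the `model` call signature is a hypothetical stand-in, not the OpenNMT-tf API used by the authors.

```python
import torch
import torch.nn.functional as F

def sequence_log_prob(model, src_ids: torch.Tensor, tgt_ids: torch.Tensor) -> float:
    """Sum of log P(t_k | src, t_<k) over a target sequence (teacher forcing).

    `model(src, tgt_in)` is assumed to return logits of shape [tgt_len, vocab_size];
    `tgt_ids` is the token id sequence including <bos> ... <eos>.
    """
    tgt_in, tgt_out = tgt_ids[:-1], tgt_ids[1:]          # shift for teacher forcing
    with torch.no_grad():
        logits = model(src_ids, tgt_in)                  # [tgt_len, vocab_size]
        log_probs = F.log_softmax(logits, dim=-1)
        token_lp = log_probs.gather(-1, tgt_out.unsqueeze(-1)).squeeze(-1)
    return token_lp.sum().item()

# The same routine scores a beam-search output, so the gap plotted in Figure 5a is
# sequence_log_prob(model, src, reference) - sequence_log_prob(model, src, beam5_output).
```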
Case study and analysis Table 4 illustrates an example from the test using training samples and models without regularization. There is only one token that differs between the gold reference and the prediction of Beam5, and this small difference results in a significantly lower log-probability for the gold reference.

Table 4: An example in which a gold reference gets a lower log-probability than Beam5; only one token differs between the prediction of Beam5 and the gold reference.
Source: Die Aktionspläne der hochrangigen Arbeitsgruppe zielen zwar auf die zukünftige Begrenzung des Einwanderungsstroms ab, doch tragen sie in keiner Weise zur Verbesserung der Situation hinsichtlich der Menschenrechte und der Grundfreiheiten sowie der wirtschaftlichen Situation der betroffenen Länder bei.
Prediction (log-probability: -2.4142): Although action plans established by the high-level working group aim to limit migratory flows in the future, these plans do nothing to improve human rights, civil liberties and the economic situation of the countries concerned.
Gold reference (log-probability: -6.9390): Although action plans established by the high-level working group aim to limit migratory flows in the future, these plans do nothing to improve human rights, civil liberties and the economic situation in the countries concerned.

This result can be explained by the objective used in training. We use s and t_i to denote the source sequence and the ground-truth token on the target side at step i, and t'_i to denote a token different from t_i at step i. At step k, the usual training objective is to maximize log P(t_k | s, t_1, ..., t_{k-1}). If the model is effectively trained, this implies

log P(t_k | s, t_1, ..., t_{k-1}) > log P(t'_k | s, t_1, ..., t_{k-1})    (1)

However, the inequality below is not part of the training objective:

log P(t_k | s, t_1, ..., t_{k-1}) > log P(t_k | s, t_1, ..., t'_{k-1})    (2)

This can lead to the phenomenon that gold references get lower probabilities than other potential sequences in the search space, even for a model with very small model errors.

7 Conclusion

Experiments show that beam search still outperforms most stochastic decoding methods in NMT. We investigate beam search in detail at the sentence level and find that two well-recognized issues, the beam search curse and search error, only affect a small portion of sentences in the test set. Meanwhile, for the majority of sentences, the gold references get lower log-probabilities than the predictions from beam search. We also test with different levels of model errors, including a cleanroom test using training samples and models without regularization; the results show that these issues still exist even for a model with an accuracy of 95%. These findings show that we cannot improve beam search by further seeking higher log-probability during the search; in other words, further reducing search errors is not promising. Our results on the relationship between quality and the gap of log-probability provide useful information for two potential ways to improve NMT. One is to find better reranking methods or decision rules for picking good translations among the candidates from beam search. The other is to find a new way to train the model so that sequences with higher log-probabilities get better performance.

References

Bahdanau, Dzmitry, Kyunghyun Cho, and Yoshua Bengio. 2014. Neural machine translation by jointly learning to align and translate. arXiv preprint arXiv:1409.0473.

Cohen, Eldan and Christopher Beck. 2019.
Empiri-\ncal analysis of beam search performance degradation\nin neural sequence models. In International Con-\nference on Machine Learning , pages 1290–1299.\nPMLR.\nEikema, Bryan and Wilker Aziz. 2020. Is MAP decod-\ning all you need? the inadequacy of the mode in neu-\nral machine translation. In Proceedings of the 28th\nInternational Conference on Computational Linguis-\ntics, pages 4506–4520, Barcelona, Spain (Online),\nDecember. International Committee on Computa-\ntional Linguistics.\nEikema, Bryan and Wilker Aziz. 2021. Sampling-\nbased minimum bayes risk decoding for neural ma-\nchine translation. arXiv preprint arXiv:2108.04718 .\nFan, Angela, Mike Lewis, and Yann Dauphin. 2018.\nHierarchical neural story generation. In Proceedings\nof the 56th Annual Meeting of the Association forComputational Linguistics (Volume 1: Long Papers) ,\npages 889–898.\nFernandes, Patrick, Ant ´onio Farinhas, Ricardo Rei,\nJos´e GC de Souza, Perez Ogayo, Graham Neubig,\nand Andr ´e FT Martins. 2022. Quality-aware decod-\ning for neural machine translation. arXiv preprint\narXiv:2205.00978 .\nFreitag, Markus, David Grangier, Qijun Tan, and\nBowen Liang. 2022. High quality rather than high\nmodel probability: Minimum bayes risk decoding\nwith neural metrics. Transactions of the Association\nfor Computational Linguistics , 10:811–825.\nGraves, Alex. 2012. Sequence transduction\nwith recurrent neural networks. arXiv preprint\narXiv:1211.3711 .\nHoltzman, Ari, Jan Buys, Li Du, Maxwell Forbes, and\nYejin Choi. 2019. The curious case of neural text de-\ngeneration. In International Conference on Learning\nRepresentations .\nKlein, Guillaume, Franc ¸ois Hernandez, Vincent\nNguyen, and Jean Senellart. 2020. The opennmt\nneural machine translation toolkit: 2020 edition. In\nProceedings of the 14th Conference of the Associa-\ntion for Machine Translation in the Americas (AMTA\n2020) , pages 102–109.\nKoehn, Philipp and Rebecca Knowles. 2017. Six chal-\nlenges for neural machine translation. In Proceed-\nings of the First Workshop on Neural Machine Trans-\nlation , pages 28–39, Vancouver, August. Association\nfor Computational Linguistics.\nKudo, Taku. 2018. Subword regularization: Improv-\ning neural network translation models with multiple\nsubword candidates. In Proceedings of the 56th An-\nnual Meeting of the Association for Computational\nLinguistics (Volume 1: Long Papers) , pages 66–75,\nMelbourne, Australia, July. Association for Compu-\ntational Linguistics.\nLowerre, Bruce T. 1976. The harpy speech recognition\nsystem. Carnegie Mellon University.\nMeister, Clara, Ryan Cotterell, and Tim Vieira. 2020.\nIf beam search is the answer, what was the question?\nInProceedings of the 2020 Conference on Empirical\nMethods in Natural Language Processing (EMNLP) ,\npages 2173–2185.\nMurray, Kenton and David Chiang. 2018. Correct-\ning length bias in neural machine translation. WMT\n2018 , page 212.\nPost, Matt. 2018. A call for clarity in reporting bleu\nscores. In Proceedings of the Third Conference on\nMachine Translation: Research Papers , pages 186–\n191.\nRei, Ricardo, Craig Stewart, Ana C Farinha, and Alon\nLavie. 2020. COMET: A neural framework for MT\nevaluation. In Proceedings of the 2020 Conference\non Empirical Methods in Natural Language Process-\ning (EMNLP) , pages 2685–2702, Online, November.\nAssociation for Computational Linguistics.\nShi, Xing, Yijun Xiao, and Kevin Knight. 2020. Why\nneural machine translation prefers empty outputs.\narXiv preprint arXiv:2012.13454 .\nSmith, Noah A. 2011. 
Linguistic structure prediction.\nSynthesis lectures on human language technologies ,\n4(2):1–274.\nSrivastava, Nitish, Geoffrey Hinton, Alex Krizhevsky,\nIlya Sutskever, and Ruslan Salakhutdinov. 2014.\nDropout: a simple way to prevent neural networks\nfrom overfitting. The journal of machine learning\nresearch , 15(1):1929–1958.\nStahlberg, Felix and Bill Byrne. 2019. On NMT search\nerrors and model errors: Cat got your tongue? In\nProceedings of the 2019 Conference on Empirical\nMethods in Natural Language Processing and the\n9th International Joint Conference on Natural Lan-\nguage Processing (EMNLP-IJCNLP) , pages 3356–\n3362, Hong Kong, China, November. Association\nfor Computational Linguistics.\nSzegedy, Christian, Vincent Vanhoucke, Sergey Ioffe,\nJon Shlens, and Zbigniew Wojna. 2016. Rethink-\ning the inception architecture for computer vision.\nInProceedings of the IEEE conference on computer\nvision and pattern recognition , pages 2818–2826.\nYang, Yilin, Liang Huang, and Mingbo Ma. 2018.\nBreaking the beam search curse: A study of (re-)\nscoring methods and stopping criteria for neural ma-\nchine translation. In Proceedings of the 2018 Con-\nference on Empirical Methods in Natural Language\nProcessing , pages 3054–3059.\nA Comparing Beam Search to other\nDecoding Methods\nTable 6 shows the comparison between beam\nsearch and some of sampling-based decoding\nmethods. We use the notations below for the de-\ncoding methods.\n• Beam5: beam search, the beam size is 5.\n• Top5k10 and Top5k30: Top-k sampling, us-\ning top 10 and top 30 for the range for sam-\npling respectively, the beam size is 5.\n• Top5p75 and Top5p90: Nucleus (Top-p) sam-\npling, using 75% and 90% for the sampling\nprobability mass respectively. The beam size\nis 5.\n• MBR300: the MBR decoding using 300 can-\ndidates from the unbiased sampling. The de-\ncision rule (utility function) is the similarityin terms of the sentence BLEU score between\nany two candidates. Fernandes et al. (2022)\nalso use other utility functions such as Comet.\nThese methods use some pre-trained models\nand introduce extra knowledge in the deci-\nsion rule. Since we focus on the comparison\nof different decoding methods, we only use\nthe ngram-based decision rule for MBR in our\nexperiments.\nB Out-of-Domain Test sets\nWe use the test sets in EMEA9for out-of-domain\n(OOD) tests.\nFigure 6a illustrates the gap of sentence BLEU\nscores for En–De. Figure 6b illustrates the rela-\ntionship between the gap of sentence BLEU and\nthe gap of log-probability for each sentence. Ta-\nble 5 shows the number of sentences that Beam100\ngets larger ,equal and smaller sentence BLEU\ncompared with Beam 5 These results are consis-\ntent with the in-domain tests, shown in Figure 2a,\nFigure 2b and Table 2 in Section 4.1 respectively.\nTotal Sent. 
>Beam5 =Beam5 <Beam5\nEn–De 1267 347 434 486\nDe–En 1267 275 646 346\nTable 5: Out-of-domain (OOD) tests: the number of sen-\ntences that Beam100 gets larger ,equal andsmaller sentence\nBLEU compared with Beam 5, denoted as >Beam5 ,=Beam5\nand<Beam5 respectively.\n9http://https://opus.nlpl.eu/EMEA.php\nEn–De De–En\nModel Base Big Base Big\nMetrics BLEU Meteor Comet BLEU Meteor Comet BLEU Meteor Comet BLEU Meteor Comet\nBeam5 28.2 29.1 0.490 28.9 29.2 0.498 33.5 36.5 0.520 33.8 36.7 0.539\nTop5k10 22.5 26.0 0.391 23.9 26.8 0.426 28.1 34.2 0.442 29.5 34.8 0.481\nTop5k30 21.4 25.5 0.357 23.2 26.3 0.413 27.2 33.5 0.420 28.5 34.3 0.456\nTop5p75 24.6 27.2 0.415 25.7 27.7 0.457 30.0 35.1 0.462 31.4 35.6 0.502\nTop5p90 20.6 24.9 0.292 22.5 25.9 0.379 26.4 32.8 0.357 28.1 33.8 0.420\nMBR300 24.9 27.0 0.181 26.5 27.9 0.298 30.7 34.2 0.301 31.9 35.0 0.377\nTable 6: Comparison between beam search, Top-k sampling, Nucleus (Top-p) sampling and MBR decoding for En–De and\nDe–En.\n(a)Gap of sentence BLEU: Beam100 minus Beam5\n (b)Gap of log-probability as the x-axis and gap of sentence\nBLEU as the y-axis: Beam100 minus Beam5\nFigure 6: Out-of-domain (OOD) tests: investigate the beam search curse at sentence level for En–De.", "main_paper_content": null }
{ "decision": "Unknown", "reviews": [] }
0
0
[]
[]
{ "id": "-s__L9MSOE", "year": null, "venue": "EAMT 2022", "pdf_link": "https://aclanthology.org/2022.eamt-1.63.pdf", "forum_link": "https://openreview.net/forum?id=-s__L9MSOE", "arxiv_id": null, "doi": null }
{ "title": "Multi3Generation: Multitask, Multilingual, Multimodal Language Generation", "authors": [ "Anabela Barreiro", "José G. C. de Souza", "Albert Gatt", "Mehul Bhatt", "Elena Lloret", "Aykut Erdem", "Dimitra Gkatzia", "Helena Moniz", "Irene Russo", "Fábio N. Kepler", "Iacer Calixto", "Marcin Paprzycki", "François Portet", "Isabelle Augenstein", "Mirela Alhasani" ], "abstract": "Anabela Barreiro, José GC de Souza, Albert Gatt, Mehul Bhatt, Elena Lloret, Aykut Erdem, Dimitra Gkatzia, Helena Moniz, Irene Russo, Fabio Kepler, Iacer Calixto, Marcin Paprzycki, François Portet, Isabelle Augenstein, Mirela Alhasani. Proceedings of the 23rd Annual Conference of the European Association for Machine Translation. 2022.", "keywords": [], "raw_extracted_content": "Multi3Generation: Multitask, Multilingual, Multimodal Language\nGeneration\nAnabela Barreiro1José GC de Souza2Albert Gatt3,4Mehul Bhatt5Elena Lloret6\nAykut Erdem7Dimitra Gkatzia8Helena Moniz9,1Irene Russo10Fabio Kepler2\nIacer Calixto11Marcin Paprzycki12François Portet13Isabelle Augenstein14\nMirela Alhasani15\n1INESC-ID, Portugal2Unbabel, Portugal3University of Malta, Malta\n4Utrecht University, The Netherlands5Örebro University, Sweden\n6University of Alicante, Spain7Koç University, Turkey\n8Edinburgh Napier University, United Kingdom9University of Lisbon, Portugal\n10National Research Council, Italy11Amsterdam University Medical Centers, The Netherlands\n12Polish Academy of Sciences, Poland13Grenoble Alpes University, France\n14University of Copenhagen, Denmark15Epoka University, Albania\[email protected]\nAbstract\nThis paper presents the Multitask, Mul-\ntilingual, Multimodal Language Genera-\ntion COST Action – Multi3Generation\n(CA18231), an interdisciplinary network\nof research groups working on different as-\npects of language generation. This \"meta-\npaper\" will serve as reference for citations\nof the Action in future publications. It\npresents the objectives, challenges and a\nthe links for the achieved outcomes.\n1 Introduction\nMulti3Generation1fosters the development of a\nnetwork of researchers and technologists across in-\nterdisciplinary fields working on topics related to\nlanguage generation (LG). We frame LG broadly\nas the set of tasks where the ultimate goal in-\nvolves generating language. In contrast to the\nmore classical definition of natural language gen-\neration (NLG), this also includes tasks not con-\ncerned with LG in an immediate sense, but that can\ninform or improve LG models. The action focuses\non four core challenges: (a) data and information\nrepresentation challenges, such as those involving\ninputs of different sources: images, videos, knowl-\nedge bases (KBs) and graphs; (b) machine learn-\ning (ML) challenges of modern approaches, such\nas mapping of inputs to different correct outputs,\ne.g. structured prediction and representation learn-\ning; (c) interaction in applications of LG, such as\n© 2022 The authors. This article is licensed under a Creative\nCommons 3.0 licence, no derivative works, attribution, CC-\nBY-ND.\n1https://www.cost.eu/actions/CA18231/ . 
The\nAction is funded by the European Commission and is running\nfrom June 2019 till September 2023.dialogue systems, conversational search interfaces\nand human-robot interaction due to the uncertainty\nderived from the changing environment and the\nnon-deterministic fashion of interaction; (d) KB\nexploitation: structured knowledge is key to nat-\nural language processing (NLP) tasks, including\nNLG, supporting ML methods that require expan-\nsion, filtering, disambiguation or user adaptation\nof generated content. The Action addresses these\nchallenges by answering the following questions:\n1. How can we efficiently exploit common-\nsense, world knowledge and multimodal in-\nformation from various inputs such as KBs,\nimages and videos to address LG tasks such\nas multimodal machine translation (MT),\nvideo description and summarisation?\n2. How can ML methods such as multi–task\nlearning (MTL), representation learning and\nstructured prediction be leveraged for LG?\n3. How can the models from (1) and (2) be ex-\nploited to develop dialogue-based, conversa-\ntional human-computer and human-robot in-\nteraction methods?\n2 Objectives\nMulti3Generation created an interdisciplinary Eu-\nropean LG research network targeting scientific\nadvances and societal benefits in the following\nfour focus themes: (T1) grounded multimodal rea-\nsoning and generation; (T2) efficient ML algo-\nrithms, methods, and applications to LG; (T3) di-\nalogue, interaction and conversational LG applica-\ntions; and (T4) exploiting large KBs and graphs.\nThe following are the research coordination ob-\njectives:\n• Foster knowledge exchange by sharing of re-\nsources including semantic annotation guide-\nlines, benchmarking corpora, ML and align-\nment tools.\n• Create multimodal and multilingual bench-\nmarks for NLG involves experimenting with\nautomatic mapping between existing re-\nsources, crawling of web data, definition\nof annotation guidelines and launching of\ncrowdsourcing campaigns for bigger datasets,\nalso as games-with-a-purpose).\n• Facilitate interactions, collaborations, knowl-\nedge building and dissemination between the\nAction’s participants via online tools, as web-\nsite, blogs, downloadable publications.\n• Promote the generation of novel ideas and in-\ntroduce the new joint Multi3Generation disci-\npline to other researchers.\n• Provide opportunities for joint research\nprojects by the Action’s members on multi-\ntask, multilingual and multimodal processing\nduring exchange visits of Early Career Inves-\ntigators (ECIs), and other activities that en-\ncourage young researchers to establish links\nwith industry and senior academics.\n• Disseminate the results of the Action through\nconferences, scientific and industrial gather-\nings, which will have substantial impact in the\nparticipating countries and beyond.\n• Create synergies between participants via\njoint publications in books, journals and con-\nferences; reports from working group meet-\nings and training materials from training\nschools.\nThe overall expected impact of the Action is to\nbring about a significant change in progress to-\nwards effective solutions for computational chal-\nlenges involving LG with respect to multitask,\nmultilingual and multimodal aspects. In particular,\nMulti3Generation is focusing on the integration of\nthese three aspects and how they can benefit LG\nsolutions. 
The Action’s specific objectives for ca-\npacity building are:\n• Strengthen European research on theory,\nmethodology and real-world technology in\nLG, particularly in the four Multi3Generation\nfocus research themes (T1–T4);• Facilitate collaboration, networking and in-\nterdisciplinary community building by yearly\nconferences and workshops and biannual in-\nternational training schools;\n• Drive scientific progress by liaising exten-\nsively with industry and end-users, and by\nincreasing joint collaboration and knowledge\ntransfer by the end of the Action;\n• To coordinate the development of benchmark\ndata resources for tasks relating to the focus\nthemes above and to organise corresponding\nshared-task competitions.\nIn order to accomplish the objectives of the Ac-\ntion, its members are encouraged to produce novel\noutcomes and establish critical mass, as well as\nto engage in joint applications for European and\nnational funding for research projects within the\nfields covered by the Action.\n3 Outcomes\nSince its inception, the action fostered collabo-\nrations that has produced more than 24 publica-\ntions2, ranging from surveys to approaches to spe-\ncific LG problems. Among the collaborations are\nthe short term missions (STMs), visits among re-\nsearchers that take part in the Action3. Further-\nmore, a series of datasets4have been developed\nand made available for diverse number of LG-\nrelated problems. Another important outcome of\nthe Action is the organization of training schools\nin 2022, one on the topic of “representation medi-\nated multimodality”5and another one on the topic\nof “automatically creating text from data”6.\n4 Acknowledgements\nThis publication is based upon work from\nCOST Action Multi3Generation - Multitask,\nMultilingual, Multimodal Language Generation\n(CA18231), supported by COST (European Coop-\neration in Science and Technology).\n2https://multi3generation.eu/outcomes/pub\nlications/\n3https://multi3generation.eu/funding-oppo\nrtunities/short-term-scientific-missions\n/\n4https://multi3generation.eu/outcomes/dat\na/\n5https://codesign-lab.org/school2022/inde\nx.html\n6https://multi3generation.eu/category/eve\nnts/training-schools/", "main_paper_content": null }
{ "decision": "Unknown", "reviews": [] }
0
0
[]
[]
{ "id": "NG4jKuntfBB", "year": null, "venue": "EAMT 2022", "pdf_link": "https://aclanthology.org/2022.eamt-1.7.pdf", "forum_link": "https://openreview.net/forum?id=NG4jKuntfBB", "arxiv_id": null, "doi": null }
{ "title": "Passing Parser Uncertainty to the Transformer. Labeled Dependency Distributions for Neural Machine Translation", "authors": [ "Dongqi Pu", "Khalil Sima'an" ], "abstract": null, "keywords": [], "raw_extracted_content": "Passing Parser Uncertainty to the Transformer: Labeled Dependency\nDistributions for Neural Machine Translation\nDongqi Liu Khalil Sima’an\[email protected] [email protected]\nInstitute for Logic, Language and Computation\nUniversity of Amsterdam\nAbstract\nExisting syntax-enriched neural machine\ntranslation (NMT) models work either\nwith the single most-likely unlabeled parse\nor the set of n-best unlabeled parses com-\ning out of an external parser. Passing a\nsingle or n-best parses to the NMT model\nrisks propagating parse errors. Further-\nmore, unlabeled parses represent only syn-\ntactic groupings without their linguisti-\ncally relevant categories. In this paper\nwe explore the question: Does passing\nboth parser uncertainty and labeled syn-\ntactic knowledge to the Transformer im-\nprove its translation performance? This\npaper contributes a novel method for in-\nfusing the whole labeled dependency dis-\ntributions (LDD) of the source sentence’s\ndependency forest into the self-attention\nmechanism of the encoder of the Trans-\nformer. A range of experimental results on\nthree language pairs demonstrate that the\nproposed approach outperforms both the\nvanilla Transformer as well as the single\nbest-parse Transformer model across sev-\neral evaluation metrics.\n1 Introduction\nNeural Machine Translation (NMT) models based\non the seq2seq schema, e.g., Kalchbrenner and\nBlunsom (2013); Cho et al. (2014); Sutskever et\nal. (2014); Bahdanau et al. (2014), first encode the\nsource sentence into a high-dimensional content\nvector before decoding it into the target sentence.\n© 2022 The authors. This article is licensed under a Creative\nCommons 3.0 licence, no derivative works, attribution, CC-\nBY-ND.Several prior studies (Shi et al., 2016; Belinkov\nand Bisk, 2018) have pointed out that although\nNMT models may induce aspects of syntactic re-\nlations, they still cannot capture the subtleties of\nsyntactic structure that should be useful for accu-\nrate translation, particularly by bridging long dis-\ntance relations.\nPrevious work provides support for the hypoth-\nesis that explicit incorporation of source syntactic\nknowledge could result in better translation per-\nformance, e.g., Eriguchi et al. (2016); Bastings et\nal. (2017). Most models condition translation on a\nsingle best parse syn:\narg max\ntP(t|s,syn) (1)\nwhere sandtare the source and target sentences\nrespectively. Other models incorporate the n-best\nparses or forest (without parser probabilities and\nlabels), e.g., Neubig and Duh (2014). The idea\nhere is that the syntactically richer input (s, syn)\nshould be better than the bare sequential word or-\nder of s, leading to a more accurate and sharper\ntranslation distribution P(t|s,syn).\nWhile most syntax-enriched strategies result in\nperformance improvements, there are two note-\nworthy gaps in the literature addressing source\nsyntax. Firstly, none of the existing works con-\nditions on the probability distributions over source\nsyntactic relations. And secondly, none of the ex-\nisting approaches conditions on the dependency\nlabels, thereby conditioning only on the binary\nchoice whether there is an unlabeled dependency\nrelation between two words.\nTu et al. (2010); Ma et al. 
(2018); Zaremoodi\nand Haffari (2018) showed that the whole depen-\ndency forest provides better performance than a\nsingle best parse approach. In this paper we go\none step further and propose that a syntactic parser\nis more useful if it conveys to the NMT model\nalso its remaining uncertainty, expressed as the\nwhole probability distributions over dependency\nrelations rather than a mere forest.\nTo the best of our knowledge, there is no pub-\nlished work that incorporates a parser’s distribu-\ntions over dependency relations into the Trans-\nformer model (Vaswani et al., 2017), let alone in-\ncorporating distributions over labeled dependency\nrelations into NMT models at large.\nThis paper contributes a generic approach for\ninfusing labeled dependency distributions into the\nencoder’s self-attention layer of the Transformer.\nWe represent a labeled dependency distributions\nas a three-dimensional tensor of parser probabil-\nities, where the first and second dimensions con-\ncern word-positions and the third concerns the de-\npendency labels.\nThe resulting tensor is infused into the compu-\ntation of the multi-head self-attention, where every\nhead is made to specialize in a specific dependency\nclass. We contribute empirical evidence that pass-\ning uncertainty to the Transformer and passing la-\nbeled dependencies both give better performance\nthan passing a single unlabeled parse, or an unla-\nbeled/labeled set of dependency relations with uni-\nform probabilities.\n2 Related Work\nThe role of source syntactic knowledge in better\nreordering was appreciated early on during the Sta-\ntistical Machine Translation (SMT) era. For exam-\nple, Mylonakis and Sima’an (2011) propose that\nsource language parses should play a crucial role\nin guiding the reordering within translation, and\ndo so by integrating constituency labels of varying\ngranularity into the source language. Although,\nNMT encoders have been claimed to have the abil-\nity to learn syntax, work on RNNs-based mod-\nels shows the value of external source syntax in\nimproving translation performance, e.g., Eriguchi\net al. (2016), by refining the encoder component,\nleading to a combination of a tree-based encoder\nand a sequential encoder.\nNoteworthy to recall here that the atten-\ntion mechanism was originally aimed to capture\nall word-to-word relations, including syntactic-\nsemantic relations. whereas, the work of Bastings\net al. (2017) has shown that a single unlabeled de-\npendency parse, encoded utilizing Graph Convo-lutional Networks (GCNs), can help improve MT\nperformance. Ma et al. (2018) and Zaremoodi and\nHaffari (2018) attempt to incorporate parse forests\ninto RNNs-based NMT models, mitigating parsing\nerrors by providing more candidate options. How-\never, these two works only rely on the binary (un-\nlabeled) relations in all the sub-trees, ignoring the\nelaborate probability relations between word posi-\ntions and the type of these relations.\nAlthough the Transformer (Vaswani et al.,\n2017) is considered to have a better ability to\nimplicitly learn relations between words than the\nRNNs-based models, existing work (Zhang et al.,\n2019; Currey and Heafield, 2019) shows that even\nincorporating a single best parse could improve the\nTransformer translation performance. 
Followup\nwork (Bugliarello and Okazaki, 2020; Peng et\nal., 2021) provides similar evidence by changing\nthe Transformer’s self-attention mechanism based\non the distance between the input words of de-\npendency relations, exploiting the single best un-\nlabeled dependency parse.\nThe work of Pham et al. (2019) suggests that\nthe benefits of incorporating a single (possibly\nnoisy) parse (using data manipulation, linearized\nor embedding-based method) can be explained as\na mere regularization effect of the model, which\ndoes not help the Transformer to exploit the ac-\ntual syntactic knowledge. Interestingly, Pham et\nal. (2019) arrive at a similar hypothesis, but they\nconcentrate on exploring how to train one of the\nheads of the self-attention in the Transformer for a\ncombined objective of parsing and translation. The\nparsing-translation training objective focuses the\nself-attention of a single head at learning the distri-\nbution of unlabeled dependencies while learning to\ntranslate as well, i.e., the distribution is not taken\nas source input but as a gold training objective. By\ntraining a single head with syntax, they leave all\nother heads without direct access to syntax.\nOur work confirms the intuition of Pham et\nal. (2019) regarding the utility of the parser’s full\ndependency distributions, but in our model these\ndistributions are infused directly into the self-\nattention while maintaining a single training ob-\njective (translation). Furthermore, we propose that\nonly when the full probability distribution matri-\nces over labeled dependency relations is infused\ndirectly into the transformer’s self-attention mech-\nanism (not as training objective), syntax has a\nchance to teach the Transformer to better learn\nsyntax-informed self-attention weights.\n3 Proposed Approach\nA parser can be seen as an external expert sys-\ntem that provides linguistic knowledge to assist the\nNMT models in explicitly taking into account syn-\ntactic structure. For some sentences, the parser\ncould be rather uncertain and spread its proba-\nbility over multiple parses almost uniformly, but\nin the majority of cases the parser could have a\nrather sharp distribution over the alternative parses.\nTherefore, simply passing a dependency forest\namounts merely to passing all alternative parses\naccompanied with zero information on parser con-\nfidence (maximum perplexity) to the Transformer\nNMT model, which does not help it to distinguish\nbetween the parsing information of the one input\nfrom that of another. This could increase the com-\nplexity of learning the NMT model unnecessarily.\nAn alternative is then to use for each sentence\na dependency distribution in the form of condi-\ntional probabilities, which could be taken to rep-\nresent the degree of confidence of the parser in the\nindividual dependency relations. Furthermore, we\npropose that each dependency relation type (label),\nprovides a more granular local probability distri-\nbution that could assist the Transformer model in\nmaking more accurate estimation of the context\nvector. This might enhance the quality of encod-\ning the source sentence, particularly because the\nTransformer model relies on a weak notion or word\norder, which is input in the form of positional en-\ncoding outside the self-attention mechanism.\nNote that the word-to-word dependency proba-\nbilities is not equivalent to using a distribution over\ndependency parses. 
This is because in some cases the word-to-word dependencies (just like word-to-word attention) could combine into general graphs (not necessarily trees). We think that using relations between pairs of words (rather than upholding strict tree or forest structures) fits well with the self-attention mechanism.

3.1 Dependency Distributions

Denote with |T| the target sentence length and with encode(·) the NMT model's encoder. We contrast different syntax-driven models:

P(t | s, syn) ≈ ∏_{i=1}^{|T|} P(t_i | t_{<i}, encode(s, syn))    (2)

with syn ∈ {{L,U}DD, U{L,U}DD, {L,U}DP}, where {L,U}DD is the labeled/unlabeled dependency distribution (the unlabeled dependency distribution is the sum of the labeled dependency distributions along the z-axis, which is the same as the 1-best unlabeled dependency parse), U{L,U}DD is the uniform labeled/unlabeled dependency distribution (used for ablation experiments; the value of each point in the 3-dimensional tensor is identical), and {L,U}DP is the 1-best labeled/unlabeled dependency parse. We also use LDA to stand for a model where the attention weights are fixed equal to the LDD (i.e., not learned).

Our primary idea is to exert a soft influence on the self-attention in the encoder of the Transformer, to allow it to fit its parameters with both syntax and translation awareness together. For infusing the labeled dependency distributions, we start with a "matrixization" of the labeled dependency distributions, which results in a compact tensor representation suitable for NMT models.

Figure 1: Labeled dependency distributions.

Figure 1 illustrates by example how we convert the labeled dependency distribution (LDD) into a three-dimensional LDD tensor. The x-axis and y-axis of the tensor are the words in the source sentence, and the z-axis represents the type of dependency relation. Each point represents a conditional probability p(i, j, l) = p(s_j, l | s_i) ∈ [0,1] ⊆ R of source word s_i modifying another source word s_j with relation l.

LDD matrix for a specific label l: The matrix LDD_l extracted from the LDD tensor for a dependency label l is defined as the matrix in which every entry (i, j) contains the probability of a word s_i modifying a word s_j with dependency relation l.

3.2 Parser-Infused Self-attention

Inspired by Bugliarello and Okazaki (2020), we propose a novel Transformer NMT model that incorporates the LDD into the first layer of the encoder side. Figure 2 shows our LDD sub-layer.

The standard self-attention layer employs a multi-head attention mechanism with h heads. For an input sentence of length T, the input of self-attention head h_i in the LDD layer is the word embedding matrix X ∈ R^{T×d_model} and the dependency distribution matrix LDD_{l_i} ∈ R^{T×T} for the label l_i uniquely assigned to head h_i (we group the original dependency labels into 16 alternative group labels; the grouping is provided in Appendix A). Hence, when we refer to head h_i, we refer also to its uniquely assigned dependency label l_i, but we omit l_i to avoid complicating the notation.

As usual in multi-head self-attention (h being the number of heads), for head h_i we first linearly map the three input vectors q, k, v ∈ R^{1×d_model} of each token, resulting in three matrices Q^{h_i} ∈ R^{T×d}, K^{h_i} ∈ R^{T×d}, and V^{h_i} ∈ R^{T×d}, where d_model is the dimension of the input vectors and d = d_model/h. Subsequently, an attention weight for each position is obtained by:

S^{h_i} = Q^{h_i} · K^{h_i⊤} / √d    (3)

At this point we infuse the resulting self-attention weight matrix S^{h_i} for head h_i with the specific LDD matrix LDD_{l_i} for label l_i using element-wise multiplication.
Assuming that d^{l_i}_{p,q} ∈ LDD_{l_i}, this is to say:

n^{h_i}_{p,q} = s^{h_i}_{p,q} × d^{l_i}_{p,q},  for p, q = 1, ..., T    (4)

The purpose of the element-wise multiplication is to nudge the attention mechanism to "dynamically" learn weights that optimize the translation objective but also diverge the least from the parser probabilities in the dependency distribution matrix.

Next, the resulting weights are softmaxed to obtain the final syntax-infused distribution matrix for head h_i and the label l_i attached to this head:

N^{h_i} = softmax(S^{h_i} ⊙ LDD_{l_i})    (5)

We stress that every attention head is infused with a different dependency relation matrix LDD_{l_i} for a particular dependency relation l_i. By focusing every head on a different label we hope to "soft label", or specialize, it for that label.

Now that we have the syntax-infused weights N^{h_i}, we multiply them with the value matrix V^{h_i} to get the attention weight matrix of attention head h_i for the relation l_i:

M^{h_i} = N^{h_i} · V^{h_i}    (6)

Subsequently, the multi-head attention linearly maps the concatenation of all the heads with a parameter matrix W_o ∈ R^{d_model×d_model}, and sends this hidden representation to the standard Transformer encoder layers for further computations:

MultiHead(Q, K, V) = Concat(M^{h_1}, ..., M^{h_m}) W_o    (7)

Finally, the objective function for training our model with syntax knowledge is identical to that of the vanilla Transformer (Vaswani et al., 2017):

Loss = - Σ_{t=1}^{T} [ y_t ln(o_t) + (y_t - 1) ln(1 - o_t) ]    (8)

where y_t and o_t are, respectively, the true and the model-predicted value at state t, and T represents the number of states. The syntactic distribution matrices are not an object of optimization in the model; they are incorporated into the model in the form of a parameter-free matrix.

4 Experiments and Analysis

Experimental Setup We establish seven distinct sets of experiments; see Table 1. Specifically, we conduct experiments to validate the empirical performance under both medium-size and small-size training parallel corpora. Apart from the different network structures used in the models, the number of network layers is identical across all models in the same language-pair translation experiments. Additionally, the seven models in each experiment use the same parameter settings, loss function, and optimizer algorithm. The experiments employ the BLEU-{1,4} score (Papineni et al., 2002), the RIBES score (Isozaki et al., 2010), the TER score (Snover et al., 2006), and the BEER score (Stanojevic and Sima'an, 2014) as criteria for evaluating the models' effectiveness.

Parser: We employ an external dependency parser, SuPar (Zhang et al., 2020), to automatically parse the source sentences. Since this parser was trained using the biaffine method (Dozat and Manning, 2016), we can extract dependency distributions by changing its source code.

Data: We evaluate the translation tasks for three language pairs from three different language families: English-Chinese (En→Zh), English-Italian (En→It), and English-German (En→De). We chose dev2010 and test2010 as our validation and test datasets for the IWSLT2017 En→De and En→It tasks. For En→Zh, we randomly selected a 110K subset of the IWSLT2015 dataset as the training set and used dev2010 as the validation set and tst2010 as the test set.
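Before turning to the dataset statistics, here is a rough sketch of how the LDD tensor of Section 3.1 might be assembled from a biaffine parser's internal scores. The arc-score and label-score shapes are assumptions about what a biaffine parser such as SuPar computes internally, not its public API.

```python
import torch

def build_ldd_tensor(arc_scores: torch.Tensor, label_scores: torch.Tensor) -> torch.Tensor:
    """Assemble the labeled dependency distribution tensor of Section 3.1.

    arc_scores:   [T, T]     assumed score of word i attaching to head j
    label_scores: [T, T, L]  assumed score of relation label l for the arc i -> j
    returns:      [T, T, L]  p(j, l | i) = p(head = j | i) * p(label = l | i, j)
    """
    head_probs = torch.softmax(arc_scores, dim=-1)       # p(head = j | i)
    label_probs = torch.softmax(label_scores, dim=-1)    # p(label = l | i, j)
    return head_probs.unsqueeze(-1) * label_probs        # joint distribution over (j, l)

# Each slice ldd[:, :, l] is the T x T matrix LDD_l that Section 3.2 multiplies
# element-wise into the attention weights of the head assigned to label group l.
```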
Table 2 exhibits the division and statistics of the datasets.

For training, we first filtered out the source sentences that SuPar cannot parse and the sentences that exceed 256 tokens in length. We then used SuPar (https://github.com/yzhangcs/parser) to parse each source-language sentence and obtain the labeled dependency distributions, and applied spaCy (https://spacy.io/) to tokenize the source and target languages, respectively. Finally, we replaced words with a corpus frequency of less than two by "<unk>", added "<bos>" and "<eos>" tokens at the beginning and end of the sentences in each mini-batch, and, for sentences of inconsistent length within a mini-batch, added a corresponding number of "<pad>" tokens at the end of the sentences to keep the batch length consistent.

Hyperparameters: In the low-resource experiments, the batch size was 256, the number of layers for the encoder and decoder was 4, and the number of warm-up steps was 400. In the medium-resource experiments, their values were 512, 6 and 4000, respectively. For the rest, we use the base configuration of the Transformer (Vaswani et al., 2017). All experiments were optimized using Adam (Kingma and Ba, 2015) (with β1 = 0.9, β2 = 0.98, ε = 10^-9), and the initial learning rate was set to 0.0001 and gradually reduced during training as follows:

lr = d_model^{-0.5} · min(step_num^{-0.5}, step_num · warmup_steps^{-1.5})    (9)

The number of heads in the multi-head attention was set to 8 (16 in the LDD layer), the dimension of the model was 512, the dimension of the inner fully-connected layers was set to 2048, and the loss function was the cross-entropy loss. During training, the checkpoint with the highest BLEU-4 score on the validation set was saved for model testing. The number of epochs was set to 50 (one epoch represents a complete pass over the training data). To prevent over-fitting, we set the dropout rate (also in our LDD layer) to 0.1.

Table 1: Description of the seven experimental groups.
Baseline (BL): The original Transformer model.
+Labeled dependency attention only (LDA): Replace the S matrix directly with the labeled dependency distributions.
+1-best labeled dependency parse (LDP): Incorporate the 1-best dependency tree with a specific label (e.g. l1).
+1-best unlabeled dependency parse (UDP): Incorporate the 1-best dependency tree, regardless of the type of dependency relations.
+Uniform labeled dependency distributions (ULDD): Incorporate uniform labeled dependency distributions.
+Uniform unlabeled dependency distributions (UUDD): Incorporate uniform unlabeled dependency distributions.
+Labeled dependency distributions (LDD): Incorporate labeled dependency distributions with the standard Transformer self-attention.

Table 2: Dataset statistics.
Task              Corpus       Training set  Validation set  Test set
English→German    Multi30k     29000         1014            1000
English→German    IWSLT 2017   206112        888             1568
English→Italian   IWSLT 2017   231619        929             1566
English→Chinese   IWSLT 2015   107860        802             1408

4.1 Experimental Results

The experimental results for each model under the low- and medium-resource scenarios are shown in Tables 3 to 6. The first group represents the baseline model, while the remaining groups represent the control models.
It is necessary to note that the\nlast group is the model proposed in this paper.\nAs compared to the baseline model, either form\nof modeling the syntactic knowledge of the source\nlanguage could be beneficial to the NMT models.\nWhether it was in the choice of lexical (BLEU-\n1) or in the order of word (RIBES), there was a\ncertain degree of improvement, which also sup-\nports the validity and rationality of incorporating\nsyntactic knowledge. The proposed model (LDD)\nachieved the best score in at least three of the five\ndifferent evaluation metrics, regardless of the lan-\nguage translation tasks. The proposed model con-\nsistently reached the highest results on BLEU-4,Table 3: Multi30k evaluation results (En →De)\nModel BLEU-1 RIBES BLEU-4 TER BEER\nBL 58.13 78.86 30.14 62.95 0.59\n+LDA 54.10 80.10 30.49 63.47 0.61\n+LDP 54.26 79.58 30.71 79.58 0.61\n+UDP 55.84 78.96 31.05 63.38 0.60\n+ULDD 52.20 79.50 27.80 63.02 0.59\n+UUDD 53.38 79.75 29.09 63.34 0.60\n+LDD 55.65 79.97†‡31.29†‡62.66†‡0.61\nLDD compared to BL −∆2.48 +∆1.11 +∆1.15 +∆0.29 +∆0.02\nLDD compared to UDP −Φ0.19 +Φ1.01 +Φ0.24 +Φ0.72 +Φ0.01\n1The black bold in the table represents the best experimental\nresults under the same test set.\n2∆andΦrepresent the improvement of our model compared\nto baseline and 1-best unlabeled parse system respectively.\n3†and‡indicate statistical significance (p <0.05) against\nbaseline and 1-best unlabeled parse system via T-test and\nKolmogorov-Smirnov test respectively.\nTable 4: IWSLT2017 evaluation results (En →De)\nModel BLEU-1 RIBES BLEU-4 TER BEER\nBL 51.63 68.64 26.13 83.34 0.53\n+LDA 49.89 69.04 26.16 83.53 0.53\n+LDP 51.12 68.91 26.38 83.93 0.53\n+UDP 50.90 69.20 26.39 84.65 0.53\n+ULDD 50.80 69.56 25.10 82.76 0.53\n+UUDD 48.85 68.90 25.41 86.19 0.53\n+LDD 54.98†‡68.83†27.78†‡81.85†‡0.54\nLDD compared to BL +∆3.35 +∆0.19 +∆1.65 +∆1.49 +∆0.01\nLDD compared to UDP +Φ4.08 −Φ0.37 +Φ1.39 +Φ2.80 +Φ0.01\n1The black bold in the table represents the best experimental\nresults under the same test set.\n2∆andΦrepresent the improvement of our model compared\nto baseline and 1-best unlabeled parse system respectively.\n3†and‡indicate statistical significance (p <0.05) against\nbaseline and 1-best unlabeled parse system via T-test and\nKolmogorov-Smirnov test respectively.\nwhich increased by at least one point when com-\npared to the baseline model, with an average in-\ncrease rate of more than 5%. Furthermore, in most\ntranslation experiments, incorporating labeled de-\npendency distributions provided better outcomes\nthan the 1-best unlabeled dependency parse system\n(UDP)6. This indicates the efficacy of providing\nmore parsing information, particularly the depen-\ndency probabilities. In the low resource scenarios,\nthe models of incorporating syntactic knowledge\n6All previous work uses only 1-best unlabeled parse, which is\nalso our main comparison object. 
We will refer to it as 1-best\nparse or 1-best tree below.\nTable 5: IWSLT2017 evaluation results (En →It)\nModel BLEU-1 RIBES BLEU-4 TER BEER\nBL 54.14 68.58 27.11 77.52 0.56\n+LDA 51.25 69.90 26.13 81.23 0.56\n+LDP 51.72 68.26 25.65 80.03 0.55\n+UDP 53.17 69.90 28.13 76.18 0.56\n+ULDD 51.30 67.83 25.23 80.62 0.54\n+UUDD 54.00 66.83 25.23 78.41 0.55\n+LDD 56.73†‡69.69†29.34†‡76.34†0.57\nLDD compared to BL +∆2.59 +∆1.11 +∆2.23 +∆1.18 +∆0.01\nLDD compared to UDP +Φ3.56 −Φ0.21 +Φ1.21 −Φ0.16 +Φ0.01\n1The black bold in the table represents the best experimental\nresults under the same test set.\n2∆andΦrepresent the improvement of our model compared\nto baseline and 1-best unlabeled parse system respectively.\n3†and‡indicate statistical significance (p <0.05) against\nbaseline and 1-best unlabeled parse system via T-test and\nKolmogorov-Smirnov test respectively.\nTable 6: IWSLT2015 evaluation results (En →Zh)\nModel BLEU-1 BLEU-4 TER BEER\nBL 46.53 18.31 67.96 0.20\n+LDA 44.91 18.25 70.96 0.20\n+LDP 47.34 18.85 70.02 0.20\n+UDP 46.92 19.71 67.29 0.20\n+ULDD 40.67 17.89 77.04 0.19\n+UUDD 34.14 18.05 79.27 0.18\n+LDD 47.62†‡20.25†‡67.38†0.20\nLDD compared to BL +∆1.09 +∆1.94 +∆0.58 +∆0.00\nLDD compared to UDP +Φ0.70 +Φ0.54 −Φ0.09 +Φ0.00\n1The black bold in the table represents the best exper-\nimental results under the same test set.\n2∆andΦrepresent the improvement of our model\ncompared to baseline and 1-best unlabeled parse sys-\ntem respectively.\n3†and‡indicate statistical significance (p <0.05)\nagainst baseline and 1-best unlabeled parse sys-\ntem via T-test and Kolmogorov-Smirnov test respec-\ntively.\npaid less attention to the neighboring words in\nthe corpus sentence because syntactic knowledge\nmay assist models in focusing on distant words\nwith syntactic relations, which was reflected in the\ndecrease of BLEU-1 scores. This problem was\nalleviated in the richer-resource scenarios, which\nalso showed that the robustness of the models im-\nproved.\nFor ablation experiments, passing the uniform\ndependency distributions verifies our hypothesis.\nA uniform probability tensor cannot provide valu-\nable information to the Transformer model and\nrisks misleading the model, resulting in the worst\nperformance. Another notable finding is that sim-\nply incorporating labeled dependency distributions\n(replacing the KandQmatrices in the attention\nmatrices) as dependency attention outperformed\nthe baseline model on average. The benefit of this\nstrategy is that by replacing KandQmatrices and\ntheir associated calculation process can drasticallydecrease the number of parameters and computing\nrequirements.\n4.2 Qualitative Analysis\nBLEU-4 Scores Comparison: We also at-\ntempted to visualize the results to understand the\nperformance of the proposed model better. In Fig-\nure 3, although the 1-best parse model performs\nbetter than the baseline model, the model we pro-\npose has higher scores than the baseline model\nand the 1-best parse model in all the median, up-\nper and lower quartile scores. 
From the original\nscatter diagram, we can observe the scatter distri-\nbution of the proposed model at the upper posi-\ntion in general, indicating that, our model can earn\nhigher scores for translated results than the base-\nline model and 1-best parse model.\nFigure 3: Box plot of baseline model, 1-best tree model and\nproposed model results\nImpact of Sentence Length: We investigated\ntranslation performance for different target sen-\ntence lengths, by grouping the target sentences in\nthe IWSLT datasets by sentence length intervals.\nWe choose to group the target sentence lengths\nrather than source sentence lengths because, cf.\nMoore (2002), the source sentence and target sen-\ntence lengths are proportional. Second, since the\ntarget languages are different, and the source lan-\nguage is English, we are particularly concerned\nabout the change in the length of sentences across\ndifferent target languages.\nOverall, our model outperformed the baseline\nsystem and 1-best parse system, as shown in Fig-\nure 4. Among them, the increase in the length\nrange (20,30], (30,40] and (40,50] were more pro-\nnounced over the baseline system and 1-best parse\nsystem. The BLEU-4 scores of both our model\nand 1-best parse model were in danger of slipping\nFigure 4: BLEU-4 comparison in sentences length\nbelow the baseline model in the sentence length\ninterval (0,10]. Corpus analysis shows that this\nlength interval contains many fragments, remain-\ning after slicing long sentences. Because the syn-\ntactic structures of these fragments were incom-\nplete, they may negatively impact on the model’s\ntranslation performance. As sentence length in-\ncreased further, all models saw substantial declines\nin BLEU-4 scores, following similar downward\npatterns. When the sentence length exceeds 50,\nthe BLEU-4 scores of our method remained sig-\nnificantly different from both the baseline model\nand the 1-best parse model. These showed that\nour proposed model has better translation perfor-\nmance in lengthy sentences, but BLEU-4 scores\nwere still relatively low, indicating that the NMT\nmodels have much room for improvement.\nAttention Weights Visualization: The final\nlayer’s attention weights of the 1-best parse model\nand the model we proposed are depicted in Figures\n5 and 6, respectively. Judging from the compar-\nison of the figures, we find that there are certain\nconsistencies; for example, each word has higher\nattention weights to the words around it. However,\nthe distinction is also discernible.\nSpecifically, for the word “A”, the word “A” and\nthe word “man” have a syntactic relation, which\nwas represented in both figures. However, the 1-\nbest parse model also provided “staring” a higher\nFigure 5: An example of 1-best parse model’s attention\nweights\nFigure 6: An example of proposed model’s attention weights\nattention weight, which is contrary to the syntac-\ntic structures, and the model we proposed resolved\nthis problem. For the word “man”, the 1-best parse\nmodel did not pay proper attention to distance but\nwith syntactic relation word “staring”, on the con-\ntrary, in the proposed model, “staring” was paid at-\ntention with a very high value. 
In a nutshell, both\nthe 1-best parse model and the proposed model are\nbetter than the baseline model in terms of attention\nalignment which demonstrates that the syntactic\nknowledge contained in dependency distributions\ncan guide the weight computation of the attention\nmechanism, directing it to pay more attention to\nwords with syntactic relations, thereby improving\nthe alignment quality to a certain extent.\n5 Conclusion\nThis paper presented a novel supervised con-\nditional labeled dependency distributions Trans-\nformer network (LDD-Seq). This method primar-\nily improves the self-attention mechanism in the\nTransformer model by converting the dependency\nforest to conditional probability distributions; each\nself-attention head in the Transformer learns a de-\npendency relation distribution, allowing the Trans-\nformer to learn source language’s dependency con-\nstraints, and generates attention weights that are\nmore in line with the syntactic structures. The\nexperimental outcomes demonstrated that the pro-\nposed method was straightforward, and it could\neffectively leverage the source language depen-\ndency syntactic structures to improve the Trans-\nformer’s translation performance without increas-\ning the complexity of the Transformer network or\ninterfering with the highly parallelized character-\nistic of the Transformer model.\nReferences\nBahdanau, Dzmitry, Kyunghyun Cho, and Yoshua Ben-\ngio. 2014. Neural machine translation by jointly\nlearning to align and translate. arXiv preprint\narXiv:1409.0473 .\nBastings, Jasmijn and Ivan Titov and Wilker Aziz and\nDiego Marcheggiani and Khalil Sima’an. 2017.\nGraph Convolutional Encoders for Syntax-aware\nNeural Machine Translation. Proceedings of the\n2017 Conference on Empirical Methods in Natural\nLanguage Processing . 1957–1967.\nBelinkov, Yonatan and Yonatan Bisk. 2018. Syn-\nthetic and Natural Noise Both Break Neural Machine\nTranslation. International Conference on Learning\nRepresentations .\nBugliarello, Emanuele and Naoaki Okazaki. 2020.\nEnhancing Machine Translation with Dependency-\nAware Self-Attention. Proceedings of the 58th An-\nnual Meeting of the Association for Computational\nLinguistics , Online. 1618–1627.\nChen, Kehai and Rui Wang and Masao Utiyama and\nEiichiro Sumita and Tiejun Zhao. 2018. Syntax-\ndirected attention for neural machine translation.\nProceedings of the AAAI Conference on Artificial In-\ntelligence .\nCho, Kyunghyun and Bart van Merri ´enboer and\nCaglar Gulcehre and Dzmitry Bahdanau and Fethi\nBougares and Holger Schwenk and Yoshua Ben-\ngio. 2014. Learning Phrase Representations us-\ning RNN Encoder–Decoder for Statistical Machine\nTranslation. Proceedings of the 2014 Conference on\nEmpirical Methods in Natural Language Processing\n(EMNLP) . 1724–1734.\nCurrey, Anna and Kenneth Heafield. 2019. Incorpo-\nrating Source Syntax into Transformer-Based Neu-\nral Machine Translation. Proceedings of the FourthConference on Machine Translation (Volume 1: Re-\nsearch Papers . 24–33.\nDeguchi, Hiroyuki and Akihiro Tamura and Takashi\nNinomiya. 2019. Dependency-based self-attention\nfor transformer NMT. Proceedings of the Interna-\ntional Conference on Recent Advances in Natural\nLanguage Processing (RANLP 2019) . 239–246.\nDozat, Timothy and Christopher D Manning. 2016.\nDeep biaffine attention for neural dependency pars-\ning. arXiv preprint arXiv:1611.01734 .\nDuan, Sufeng and Hai Zhao and Junru Zhou and Rui\nWang. 2019. Syntax-aware transformer encoder\nfor neural machine translation. 
2019 International\nConference on Asian Language Processing (IALP) .\nIEEE. 396–401.\nEriguchi, Akiko and Kazuma Hashimoto and Yoshi-\nmasa Tsuruoka. 2016. Tree-to-Sequence Atten-\ntional Neural Machine Translation. Proceedings\nof the 54th Annual Meeting of the Association for\nComputational Linguistics (Volume 1: Long Papers) ,\nBerlin, Germany 823–833.\nIsozaki, Hideki and Tsutomu Hirao and Kevin Duh and\nKatsuhito Sudoh and Hajime Tsukada. 2010. Au-\ntomatic evaluation of translation quality for distant\nlanguage pairs. Proceedings of the 2010 Conference\non Empirical Methods in Natural Language Process-\ning. 944–952.\nKalchbrenner, Nal and Phil Blunsom. 2013. Recurrent\nContinuous Translation Models. Proceedings of the\n2013 Conference on Empirical Methods in Natural\nLanguage Processing . 1700–1709.\nKingma, Diederik P and Jimmy Ba. 2015. Adam: A\nMethod for Stochastic Optimization. ICLR (Poster) .\nMa, Chunpeng and Akihiro Tamura and Masao\nUtiyama and Tiejun Zhao and Eiichiro Sumita.\n2018. Forest-Based Neural Machine Translation.\nProceedings of the 56th Annual Meeting of the As-\nsociation for Computational Linguistics (Volume 1:\nLong Papers) , Melbourne, Australia. 1253–1263.\nMoore, Robert C. 2002. Fast and accurate sentence\nalignment of bilingual corpora. Conference of the\nAssociation for Machine Translation in the Ameri-\ncas. Springer. 135–144.\nOmote, Yutaro and Akihiro Tamura and Takashi Ni-\nnomiya. 2019. Dependency-based relative posi-\ntional encoding for transformer NMT. Proceed-\nings of the International Conference on Recent Ad-\nvances in Natural Language Processing (RANLP\n2019) . 854–861.\nMylonakis, Markos and Khalil Sima’an. 2011. Learn-\ning hierarchical translation structure with linguistic\nannotations. Proceedings of the 49th Annual Meet-\ning of the Association for Computational Linguis-\ntics: Human Language Technologies . 642–652.\nNeubig, Graham and Kevin Duh. 2014. On the ele-\nments of an accurate tree-to-string machine transla-\ntion system. Proceedings of the 52nd Annual Meet-\ning of the Association for Computational Linguistics\n(Volume 2: Short Papers) . 143–149.\nPapineni, Kishore and Salim Roukos and Todd Ward\nand Wei-Jing Zhu. 2002. Bleu: a method for au-\ntomatic evaluation of machine translation. Proceed-\nings of the 40th annual meeting of the Association\nfor Computational Linguistics . 311–318.\nPeng, Ru and Tianyong Hao and Yi Fang. 2021.\nSyntax-aware neural machine translation directed by\nsyntactic dependency degree. Neural Computing\nand Applications . 16609–16625.\nPham, Thuong Hai and Dominik Mach ´aˇcek and Ond ˇrej\nBojar. 2019. Promoting the Knowledge of Source\nSyntax in Transformer NMT Is Not Needed. Com-\nputaci ´on y Sistemas . 923–934.\nShi, Xing and Inkit Padhi and Kevin Knight. 2016.\nDoes String-Based Neural MT Learn Source Syn-\ntax? Proceedings of the 2016 Conference on Em-\npirical Methods in Natural Language Processing ,\nAustin, Texas. 1526–1534.\nSnover, Matthew and Bonnie Dorr and Richard\nSchwartz and Linnea Micciulla and John Makhoul.\n2006. A study of translation edit rate with targeted\nhuman annotation. Proceedings of the 7th Confer-\nence of the Association for Machine Translation in\nthe Americas: Technical Papers . 223–231.\nStanojevi ´c, Milo ˇs and Khalil Sima’an. 2014. Fitting\nSentence Level Translation Evaluation with Many\nDense Features. Proceedings of the 2014 Confer-\nence on Empirical Methods in Natural Language\nProcessing (EMNLP) , Doha, Qatar. 
202–206.\nSutskever, Ilya and Oriol Vinyals and Quoc V Le. 2014.\nSequence to sequence learning with neural networks.\nAdvances in neural information processing systems .\nTu, Zhaopeng and Yang Liu and Young-Sook Hwang\nand Qun Liu and Shouxun Lin. 2010. Dependency\nforest for statistical machine translation. Proceed-\nings of the 23rd International Conference on Com-\nputational Linguistics (Coling 2010) . 1092–1100.\nVaswani, Ashish and Noam Shazeer and Niki Parmar\nand Jakob Uszkoreit and Llion Jones and Aidan\nN Gomez and Łukasz Kaiser and Illia Polosukhin.\n2017. Attention is all you need. Advances in neural\ninformation processing systems . 5998–6008.\nZaremoodi, Poorya and Gholamreza Haffari. 2018.\nIncorporating Syntactic Uncertainty in Neural Ma-\nchine Translation with a Forest-to-Sequence Model.\nProceedings of the 27th International Conference on\nComputational Linguistics . 1421–1429.Zhang, Tianfu and Heyan Huang and Chong Feng and\nLongbing Cao. 2021. Self-supervised bilingual syn-\ntactic alignment for neural machine translation. Pro-\nceedings of the AAAI Conference on Artificial Intel-\nligence . 14454–14462.\nZhang, Meishan and Zhenghua Li and Guohong Fu and\nMin Zhang. 2019. Syntax-Enhanced Neural Ma-\nchine Translation with Syntax-Aware Word Repre-\nsentations. Proceedings of the 2019 Conference of\nthe North American Chapter of the Association for\nComputational Linguistics: Human Language Tech-\nnologies, Volume 1 (Long and Short Papers , Min-\nneapolis, Minnesota. 1151–1161.\nZhang, Yu and Zhenghua Li and Min Zhang. 2020.\nEfficient Second-Order TreeCRF for Neural Depen-\ndency Parsing. Proceedings of the 58th Annual\nMeeting of the Association for Computational Lin-\nguistics , Online. 3295–3305.\nA Appendix: Dependency group labels\nTable A: 16 alternative dependency group labels\nDependency group labels Original dependency labels\nl1 root\nl2 aux, auxpass, cop\nl3 acomp, ccomp, pcomp, xcomp\nl4 dobj, iobj, pobj\nl5 csubj, csubjpass\nl6 nsubj, nsubjpass\nl7 cc\nl8 conj, preconj\nl9 advcl\nl10 amod\nl11 advmod\nl12 npadvmod, tmod\nl13 det, predet\nl14 num, number, quantmod\nl15 appos\nl16 punct", "main_paper_content": null }
{ "decision": "Unknown", "reviews": [] }
0
0
[]
[]
{ "id": "cBVJQnIHvqr", "year": null, "venue": "EAMT 2012", "pdf_link": "https://aclanthology.org/2012.eamt-1.34.pdf", "forum_link": "https://openreview.net/forum?id=cBVJQnIHvqr", "arxiv_id": null, "doi": null }
{ "title": "Can Automatic Post-Editing Make MT More Meaningful", "authors": [ "Kristen Parton", "Nizar Habash", "Kathleen R. McKeown", "Gonzalo Iglesias", "Adrià de Gispert" ], "abstract": "Kristen Parton, Nizar Habash, Kathleen McKeown, Gonzalo Iglesias, Adrià de Gispert. Proceedings of the 16th Annual conference of the European Association for Machine Translation. 2012.", "keywords": [], "raw_extracted_content": "Can Automatic Post-Editing Make MT More Meaningful?\nKristen Parton1Nizar Habash1Kathleen McKeown1Gonzalo Iglesias2Adri `a de Gispert2\n1Columbia University, NY , USA\nfkristen, kathy, [email protected]\n2University of Cambridge, Cambridge, UK\nfgi212, [email protected]\nAbstract\nAutomatic post-editors (APEs) enable the\nre-use of black box machine translation\n(MT) systems for a variety of tasks where\ndifferent aspects of translation are impor-\ntant. In this paper, we describe APEs\nthat target adequacy errors, a critical\nproblem for tasks such as cross-lingual\nquestion-answering, and compare different\napproaches for post-editing: a rule-based\nsystem and a feedback approach that uses\na computer in the loop to suggest improve-\nments to the MT system. We test the APEs\non two different MT systems and across\ntwo different genres. Human evaluation\nshows that the APEs significantly improve\nadequacy, regardless of approach, MT sys-\ntem or genre: 30-56% of the post-edited\nsentences have improved adequacy com-\npared to the original MT.\n1 Introduction\nAutomatic post-editors (APEs) seek to perform the\nsame task as human post-editors: correcting errors\nin text produced by machine translation (MT) sys-\ntems. APEs have been used to target a variety of\ndifferent types of MT errors, from determiner se-\nlection (Knight and Chander, 1994) to grammatical\nagreement (Mare ˇcek et al., 2011). There are two\nmain reasons that APEs can improve over decoder\noutput: they can exploit information unavailable\nto the decoder, and they can carry out deeper text\nanalysis that is too expensive to do in a decoder.\nWe describe APEs that target three types of\nadequacy errors: deleted content words, content\nwords that were translated into function words, and\nmistranslated named entities. These types of er-\nrors are common across statistical MT (SMT) sys-\ntems and can significantly degrade translation ade-\nquacy, the amount of information preserved dur-\ning translation. Adequacy is critical to the suc-\ncess of many cross-lingual applications, partic-\nularly cross-lingual question answering (CLQA),\nc\r2012 European Association for Machine Translation.where adequacy errors can significantly decrease\ntask performance. The APEs utilize word align-\nments, source- and target-language part-of-speech\n(POS) tags, and named entities to detect phrase-\nlevel errors, and draw on several external resources\nto find a list of corrections for each error.\nOnce the APEs have a list of errors with pos-\nsible corrections, we experiment with different ap-\nproaches to apply the corrections: an approach that\nuses phrase-level editing rules, and two techniques\nfor passing the corrections as feedback back to\nthe MT systems. The rule-based APE uses word\nalignments to decide where to insert the top-ranked\ncorrection for each error into the target sentence.\nThis approach rewrites the word or phrase where\nthe error was detected, but does not modify the\nrest of the sentence. 
We test these MT system-\nindependent rules on two MT systems, MT A and\nMT B (described in more detail in section ??).\nThe feedback APE passes multiple suggestions\nfor each correction back to the MT system, and\nallows the MT decoder to determine whether to\ncorrect each error and how to correct each error\nduring re-translation. Many MT systems have a\nmechanism for “pre-editing,” or providing certain\ntranslations in advance (e.g., for named entities\nand numbers). We exploit this mechanism to pro-\nvide post-editor feedback to the MT systems dur-\ning a second-pass translation. While post-editing\nvia feedback is a general technique, the mecha-\nnism the decoder uses is dependent upon the im-\nplementation of each MT system: in our experi-\nments, MT A accepts corpus-level feedback from\nthe APE, while MT B can handle more targeted,\nphrase-level feedback from the APE.\nOur evaluation using human judgments shows\nthat the APEs always improve the overall transla-\ntion adequacy: across all conditions, whether rule-\nbased or feedback, MT A or MT B, newswire or\nweb genre, adequacy improved in 30-56% of post-\nedited sentences, and the improved sentences sig-\nnificantly outnumbered sentences that got worse.\nWe also collected judgments on fluency, which\nhighlighted the relative advantages of each APE\nProceedings of the 16th EAMT Conference, 28-30 May 2012, Trento, Italy\n111\napproach. The rule-based approach affords more\ncontrol for error correction, at the expense of flu-\nency. The feedback approach improves adequacy\nonly when it can maintain some level of fluency,\nwhich results in more fluent post-edits than the\nrule-based approach. Due to the fluency con-\nstraints, the feedback APEs do not modify as many\nsentences as the rule-based APE, and therefore im-\nprove fewer sentences. Our analysis suggests ways\nin which feedback may be improved in the future.\n2 Motivation\nAs MT has increased in quality and speed, its us-\nage has gone beyond open-ended translation to-\nwards a variety of applications: cross-lingual sub-\njectivity analysis, cross-lingual textual entailment,\ncross-lingual question-answering, and many oth-\ners. Open-ended MT systems are task-agnostic,\nso they seek to balance fluency and adequacy.\nDepending on the task, however, adequacy may\ntake precedence over fluency (or vice versa). We\npropose using the framework of automatic post-\nediting (Knight and Chander, 1994) to detect and\ncorrect task-specific MT errors at translation time.\n(In this paper, we use the term “post-editing” to\nrefer to automatic post-editing only.)\nThe advantage of post-editing is that the APE\ncan adapt any MT output to the needs of each task\nwithout having to re-train or re-tune a specific MT\nsystem (Isabelle et al., 2007). Acquiring parallel\ntext, training and maintaining an SMT system is\ntime-consuming and resource-intensive, and there-\nfore not feasible for everyone who wishes to use\nMT in an application. Ideally, an APE can adapt\nthe output of a black-box MT system to the needs\nof a specific task in a light-weight and portable\nmanner. Since APEs are not tied to a specific\nMT system, they also allow application develop-\ners flexibility in switching MT systems as better\nsystems become available.\nOur focus on adequacy in automatic post-editing\nis motivated by CLQA with result translation. In\nthis task, even when the correct answer in the\nsource language is retrieved, it may be perceived\nas irrelevant in the target language if not translated\ncorrectly. 
The MT errors that have the biggest im-\npact on CLQA include missing or mistranslated\nnamed entities and missing content words (Parton\nand McKeown, 2010; Boschee et al., 2010).\nManual error analysis of MT has shown that\nmissing content words produce adequacy errors\nacross different language pairs and different types\nof SMT systems. Condon et al. (2010) found that\n26% of their Arabic-English MT errors were verb,noun or pronoun deletions. Similarly, Vilar et al.\n(2006) found that 22% of Chinese-English MT\nerrors were content deletion. Popovi ´c and Ney\n(2007) reported that 68% deleted tokens from their\nSpanish-English MT system were content words.\nWe address these errors via automatic post-editing,\nwith the ultimate goal of improving MT output for\nadequacy-oriented tasks.\n3 Related Work\nThe goal of APE is to automatically correct trans-\nlated sentences produced by MT. Adaptive APEs\ntry to learn how to improve the translation output\nby adapting to the mistakes made by a specific MT\nsystem. In contrast, general APEs target specific\ntypes of errors, such as English determiner selec-\ntion (Knight and Chander, 1994), certain types of\ngrammar errors in English (Doyon et al., 2008) and\nSwedish (Stymne and Ahrenberg, 2010), and com-\nplex grammatical agreement in Czech (Mare ˇcek et\nal., 2011). The APEs in this paper are more similar\nto general APEs, since they target specific kinds of\nadequacy errors.\nAPEs may utilize information unavailable to the\ndecoder to improve translation output. Previous\ntask-based MT approaches have used task con-\ntext to select verb translations in CLQA at query\ntime (Ma and McKeown, 2009) and to identify\nand correct name translations in CLIR (Parton et\nal., 2008). The rule-based APE we describe ex-\ntends those APEs to cover additional types of ad-\nequacy errors. The feedback APEs are most sim-\nilar to (Suzuki, 2011), which uses confidence es-\ntimation to select poorly translated sentences and\nthen passes them to an adaptive SMT post-editor.\nOther work in confidence estimation (Specia et al.,\n2011) aims to predict translation adequacy at run-\ntime without using reference translations, which is\nsimilar to our error detection step.\nMany APEs use sentence-level analysis tools to\nmake improvements over decoder output. Since\nthese tools rely on having a fully resolved trans-\nlation hypothesis (and since they are expensive),\nthey are infeasible to run during decoding. The\nDepFix post-editor (Mare ˇcek et al., 2011) parses\ntranslated sentences, and uses the bilingual parses\nto correct Czech morphology. While syntax-based\nMT systems use POS and parses, most systems do\nnot use other types of annotations (e.g., informa-\ntion extraction, event detection or sentiment anal-\nysis). An alternative approach would be to incor-\nporate these features directly into the MT system;\nthe focus of this paper is on adapting translations\nto the task without changing the MT system.\n112\n4 Post-Editing Techniques\nOur APEs carry out three steps: 1) detect errors,\n2) suggest and rank corrections for the errors, and\n3) apply the suggestions. All the APEs use iden-\ntical algorithms for steps 1 and 2, and only differ\nin how they apply the suggestions. 
The algorithms\nare language-pair independent, though we carried\nout all of our experiments on Arabic-English MT.\n4.1 Pre-Processing\nThe Arabic source text was analyzed and tokenized\nusing MADA+TOKAN (Habash et al., 2009).\nEach MT system used a different tokenization\nscheme, so the source sentences were processed\nin two separate pipelines. Separate named en-\ntity recognizers (NER) were built for each pipeline\nusing the Stanford NER toolkit (Finkel et al.,\n2005), by training on CoNLL and ACE data.\nEach translated English sentence was re-cased us-\ning Moses and then analyzed using the Stanford\nCoreNLP pipeline to get part-of-speech (POS) tags\n(Toutanova et al., 2003) and NER (Finkel et al.,\n2005).\n4.2 Detecting Errors and Suggesting\nCorrections\nThe APEs address specific adequacy errors that we\nhave found to be most detrimental for the CLQA\ntask: content words that are not translated at all,\ncontent words that are translated to function words,\nand mistranslated named entities. In the error de-\ntection step, these types of errors are detected via\nan algorithm from prior work that uses bilingual\nPOS tags and word alignments (Parton and McK-\neown, 2010). Each flagged error consists of one\nor more source-language tokens and zero or more\ntarget-language tokens. In the error correction\nstep, the source and target sentences and all the\nflagged errors are passed to the suggestion genera-\ntor, which uses the following three resources.\nPhrase Table: The phrase table from MT B is\nused as a phrase dictionary (described in more de-\ntail in ??).\nDictionaries: We also use a translation dictio-\nnary extracted from Wikipedia, a bilingual name\ndictionary extracted from the Buckwalter analyzer\n(Buckwalter, 2004) and an English synonym dic-\ntionary from the CIA World Factbook.1They are\nhigh precision and low recall: most errors do not\nhave matches in the dictionaries, but when they do,\nthey are often correct, particularly for NEs.\n1http://www.cia.gov/library/publications/the-world-factbookBackground MT corpus: Since our motiva-\ntion is CLQA, we also draw on a resource specific\nto CLQA: a background corpus of about 120,000\nArabic newswire and web documents that have\nbeen translated into English by a state-of-the-art\nindustry MT system. Ma and McKeown (2009)\nwere able to exploit a similar pseudo-parallel cor-\npus to correct deleted verbs, since words deleted in\none sentence are frequently correctly translated in\nother sentences.\nFor each error, the source-language phrase is\nconverted into a query to search all three resources.\nThen the target-language results are aggregated\nand ranked by overall confidence scores. The\nconfidence scores are a weighted combination of\nphrase translation probability, number of dictio-\nnary matches and term frequencies in the back-\nground corpus. The weights were set manually on\na development corpus.\n4.3 Rule-Based APE\nTable 1 shows examples of sentences post-edited\nby the different APEs. For each error, the rule-\nbased post-editor applies the top-ranked correc-\ntion using one of two operations: replace orin-\nsert. An error can be replaced if there is an exist-\ning translation, and all of the source- and target-\nlanguage tokens aligned to the error are flagged as\nerrors. (This is to avoid over-writing a correct par-\ntial phrase translation, as in example 2a where the\nword “their” is not replaced.) 
If the error cannot be\nreplaced, the new correction is inserted.\nDuring replace, all the original target tokens are\ndeleted, and the correction is inserted at the index\nof the first target token. For insert, the algorithm\nfirst chooses an insertion index, and then inserts\nthe correction. The insertion index is chosen based\non the indices of the target tokens in the error. If\nthere are no target tokens, the insertion index is\ndetermined by the alignments of the neighboring\nsource tokens. If they are aligned to neighbor-\ning translations, the correction is inserted between\nthem. Or, if only one of them is aligned to a trans-\nlation, the correction is inserted adjacent to it. If\nan insertion index cannot be determined via rules,\nthe error is not corrected.\nThese editing rules are MT system-independent,\nlanguage-independent and relatively simple. The\nword order is copied from the original transla-\ntion or from the source sentence. This sim-\nple model worked for (Parton et al., 2008) be-\ncause they were rewriting mistranslated NEs that\nwere already present in the translation. Simi-\nlarly, Ma and McKeown (2009) successfully re-\ninserted deleted verbs into English translations us-\n113\nSentence Sentence\nReference Vanunu was released in April, 2004 . . . Why does Aramco donate 8thousand dollars . . .\nMT A orig. And was released in April, 2004 . . . Why ARAMCO to $ thousands . . .\nRule-Based And was vanunu released in April, 2004 . . . He donates why ARAMCO the amount of dollars to $ thousands . . .\nCorpus-Level Vanunu was released in April, 2004 . . . Why Aramco donate $ 8 of thousands of dollars . . .\n1a) Both APEs re-insert the deleted name,\nbut the rule-based version has poor word\norder.1b) Both APEs re-insert the deleted verb, but the feedback word order\nis better. $is incorrectly detected as a function word, and both APEs\nincorrectly re-insert “dollars”. The feedback APE avoids adding the\nredundant “the amount of”.\nReference . . . in proportion to the efforts they make .. . . Ministry of Interior Starts to Define Committee’s Authority!!\nMT B orig. . . . commensurate with their. . . . The Ministry of Interior started to define the terms of the !\nRule-Based . . . commensurate with effort exert their. . . . The Ministry of Interior started to define the terms of body !\nPhrase-Level . . . commensurate with the work they do .. . . The Interior Ministry started the authority of the board !\n2a) The rule-based APE makes two sepa-\nrate edits to insert “effort” and “exert. ”\nThe feedback APE produces a more fluent\nsentence by handling both at once.2b) The original sentence deletes the noun Committee. The rule-\nbased version has the wrong translation and is ungrammatical. The\nphrase-level feedback selects a better translation, but the verb (de-\nfine) is now deleted.\nTable 1: Examples of the kinds of edits (both good and bad) made by different APEs.\ning only word alignments, assuming that local Chi-\nnese SVO word order would linearly map to En-\nglish word order.\nHowever, our APEs need to deal with a much\nwider range of error types, including phrases that\nwere mistranslated, partially translated or never\ntranslated; and content words of any POS, not just\nNEs or verbs. 
Since Arabic word order differs\nfrom English, these rules often produce poorly or-\ndered words: verbs may appear before their sub-\njects, and adjectives may appear after their nouns.\nIn this case, we are explicitly trading off fluency\nfor adequacy, under the assumption that the end\ntask is adequacy-oriented. In example 1a, the sub-\nject comes after the auxiliary verb, but the sentence\ncan still be understood. On the other hand, since\nadequacy and fluency are not independent, degrad-\ning the fluency of a sentence can often negatively\nimpact the adequacy as well.\nEven when the error detection and correction\nsteps work correctly, not all errors can be fixed\nwith these simple operations. The original MT\nmay be too garbled to correct, or may have no\nplace to insert the corrected translation so that it\ncarries the appropriate meaning.\n4.4 Feedback APEs\nTo mitigate the problems of the rule-based APE,\nwe developed an approach that is more powerful\nand flexible. The feedback APEs take as input the\nsame list of errors and corrections as the rule-based\nAPE, and then convert the corrections into feed-\nback for the MT system. Sentences with detected\nerrors are decoded a second time with feedback.\nPassing feedback to the MT system is a general\ntechnique: many MT systems allow users to spec-\nify certain fixed translations ahead of time, such as\nnumbers, dates and named entities. The underlying\nimplementation of how these fixed translations arehandled by the decoder is MT system-specific, and\nwe describe two such implementations in section\n4.5: corpus-level feedback and phrase-level feed-\nback.\nThe difference between pre-editing and post-\nediting in this case is that the post-editor is reac-\ntiveto the first-pass translation. The APE only\npasses suggestions to the MT system when it de-\ntects an error in the first-pass translation, and has\nsome confidence that it can provide a reasonable\ncorrection. Since the post-editing is actually done\nby the decoder, the effectiveness of the feedback\nAPE will vary across different MT systems.\nThis is similar to the error correction approach\ndescribed in (Parton and McKeown, 2010), where\nsentences with detected errors are re-translated us-\ning a much better (but slower) MT system. They\nfound that the second-pass translations were much\nbetter than the first-pass translations, but most of\nthe detected errors were still present. The feed-\nback post-editor allows us to pass specific infor-\nmation about which errors to correct and how to\ncorrect them to the original MT system. Unlike\nadaptive post-editors, where the second translation\nstep translates from “bad” target-language text to\n“good” target-language text, the feedback APEs\nre-translate from the source text, and only one MT\nsystem is needed.\nThe biggest advantage the feedback APEs have\nover the rule-based APE is that the MT system can\nmodify the whole sentence during re-translation,\nwhile taking the feedback into account, rather than\njust replacing or inserting a single phrase at a time.\nThe decoder will not permit local disfluencies that\nmight occur from a simple insertion (e.g., “they\ngoes” or “a impact”), and will often prefer the cor-\nrect word order, as in example 1a in Table 1. Fur-\nthermore, the decoder can take all of the feedback\ninto account at once, whereas the rule-based ap-\n114\nproach makes each correction in the sentence sep-\narately, as in example 2a. 
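To make the contrast with the rule-based editor concrete, the two edit operations from Section 4.3 can be sketched roughly as follows. This is our illustrative simplification, not the authors' implementation: the function and argument names are invented, and the alignment checks described above are assumed to have been collapsed into pre-computed inputs (the indices of flagged target tokens and an insertion position derived from neighbouring alignments).

```python
def apply_rule_based_edit(tgt_tokens, flagged_tgt_idx, insertion_idx, correction):
    """Apply one 'replace' or 'insert' post-edit to a translated sentence.

    tgt_tokens      : target-side tokens of the first-pass translation
    flagged_tgt_idx : sorted indices of target tokens flagged for this error
                      (empty if the source phrase was dropped entirely)
    insertion_idx   : position derived from the alignments of neighbouring
                      source tokens, or None if no safe position was found
    correction      : top-ranked correction string for the error
    """
    tokens = list(tgt_tokens)
    new_words = correction.split()
    if flagged_tgt_idx:                      # replace: delete flagged tokens and
        first = flagged_tgt_idx[0]           # insert the correction at the first index
        for i in reversed(flagged_tgt_idx):
            del tokens[i]
        tokens[first:first] = new_words
    elif insertion_idx is not None:          # insert next to aligned neighbours
        tokens[insertion_idx:insertion_idx] = new_words
    return tokens                            # otherwise the sentence is left unchanged

# Mirroring example 1a from Table 1 (a deleted name re-inserted after "was"):
# apply_rule_based_edit(["And", "was", "released", "in", "April"], [], 2, "vanunu")
# -> ["And", "was", "vanunu", "released", "in", "April"]
```

The feedback variants, by contrast, hand the same lists of errors and candidate corrections back to the decoder and let it decide whether and how to apply them during re-translation.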
Finally, the rule-based\napproach always picks the top-ranked correction\nfor each error, and almost always edits every er-\nror. The feedback APEs can pass multiple correc-\ntions to the MT system, often along with proba-\nbilities, which proves helpful in example 2b. One\ndrawback of the feedback APEs is that they are\nslower than the rule-based APE since they require\na second-pass decoding. Also, the decoder may\nultimately decide not to use any of the corrections,\nwhich may be an advantage if low-confidence sug-\ngestions are discarded, or could be a disadvantage,\nsince fewer errors will get corrected.\n4.5 Corpus-Level vs. Phrase-Level Feedback\nEach of our MT systems has a different mecha-\nnisms for accepting feedback on-the-fly, and han-\ndles the feedback differently. MT A allows corpus-\nlevel feedback without translation probabilities. In\nother words, the APE passes all of the translation\nsuggestions for the entire corpus back to the MT\nsystem during re-translation. MT B allows phrase-\nlevel feedback with translation probabilities. Each\nsource phrase flagged as an error is annotated with\nthe list of possible corrections and their transla-\ntion probabilities. Both MT systems allow mul-\ntiple corrections for each detected error, unlike the\nrule-based APE. Both also allow the post-edited\ncorrections to compete with existing translations\nin the system, so the re-translation may not use\nthe suggested translations. Note that both forms\nof feedback are used in an online manner by the\nSMT systems; no re-training or re-tuning is done.\nOverall, the phrase-level feedback mechanism is\nmore fine-grained because corrections are targeted\nat specific errors. On the other hand, the coarser,\ncorpus-level feedback could result in unexpected\nimprovements in sentences where errors were not\ndetected, since the translation corrections can be\nused in any re-translated sentence.\n5 Experiments\nWe tested our APEs on two different MT sys-\ntems using the NIST MT08 newswire (nw) and\nweb (wb) testsets, which had 813 and 547 sen-\ntences, respectively. The translations were eval-\nuated with multiple automatic metrics as well as\ncrowd-sourced human adequacy judgments.\n5.1 MT Systems\nWe used state-of-the art Arabic-English MT\nsystems with widely different implementations.\nMT A was built using HiFST (de Gispert et al.,2010), a hierarchical phrase-based SMT system\nimplemented using finite state transducers. It is\ntrained on all the parallel corpora in the NIST\nMT08 Arabic Constrained Data track (5.9M par-\nallel sentences, 150M words per language). The\nfirst-pass 4-gram language model (LM) is trained\non the English side of the parallel text and a sub-\nset of Gigaword 3. The second-pass 5-gram LM\nis a zero-cutoff stupid-backoff (Brants et al., 2007)\nestimated using 6.6B words of English newswire\ntext.\nMT B was built using Moses (Koehn et al.,\n2007), and is a non-hierarchical phrase-based sys-\ntem. It is trained on 3.2M sentences of par-\nallel text (65M words on the English side) us-\ning several LDC corpora including some avail-\nable only through the GALE program (e.g.,\nLDC2004T17, LDC2004E72, LDC2005E46 and\nLDC2004T18). The data includes some sentences\nfrom the ISI corpus (LDC2007T08) and UN cor-\npus (LDC2004E13) selected to specifically add vo-\ncabulary absent in the other resources. The Ara-\nbic text is tokenized and lemmatized using the\nMADA+TOKAN system (Habash et al., 2009).\nLemmas are used for Giza++ alignment only. 
The\ntokenization scheme used is the Penn Arabic Tree-\nbank scheme (Habash, 2010; Sadat and Habash,\n2006). The system uses a 5-gram LM that was\ntrained on Gigaword 4. Both systems are tuned\nfor BLEU score using MERT.\n5.2 Automatic and Human Evaluation\nWe ran several automatic metrics on the baseline\nMT output and the post-edited MT output: BLEU\n(Papineni et al., 2002), Meteor-a (Denkowski and\nLavie, 2011) and TERp-a (Snover et al., 2009).\nBLEU is based on n-gram precision, while Meteor\ntakes both precision and recall into account. TERp\nalso implicitly takes precision and recall into ac-\ncount, since it is similar to edit distance. Both Me-\nteor and TERp allow more flexible n-gram match-\ning than BLEU, since they allow matching across\nstems, synonyms and paraphrases. Meteor-a and\nTERp-a are both tuned to have high correlation\nwith human adequacy judgments.\nIn contrast to automatic system-level metrics,\nhuman judgments can give a nuanced sentence-\nlevel view of particular aspects of the MT. In or-\nder to compare adequacy across APEs, we used\nhuman annotations crowd-sourced from Crowd-\nFlower.2Since our annotators are not MT experts,\nwe used a head-to-head comparison rather than a\n5-point scale. Adequacy scales have been shown\n2http://www.crowdflower.com\n115\nsents sents\nMT set APE w/err. mod.\nA nw rule-based 48% 41%\ncorpus feed. 48% 40%\nwbrule-based 69% 64%\ncorpus feed. 69% 62%\nB nw rule-based 24% 24%\nphrase feed. 24% 15%\nwbrule-based 34% 34%\nphrase feed. 34% 25%\nTable 2: The percentage of all sen-\ntences with errors detected, and the\npercentage of all sentences modified\nby each APE.\u0001BLEU \u0001TERp-adeq \u0001Meteor-adeq\nbase rule feed base rule feed base rule feed\nMT set MT based back MT based back MT based back\nA nw 51.32 \u00000.91 \u00000.41 37.49 \u00000.54 \u00000.74 69.48 +0.15 +0.32\nwb 36.15 \u00001.41 +0.03 60.66 \u00001.34 \u00002.69 55.24 +0.15 +0.88\nB nw 51.23 \u00000.49 +0.05 35.31 \u00000.22 \u00000.26 70.38 +0.00 +0.17\nwb 37.60 \u00000.50 \u00000.12 55.97 \u00000.26 \u00000.23 57.06 \u00000.07 +0.13\nTable 3: The effect of APEs on automatic metric scores. Base columns show the\nscore for the original MT and the other columns show the difference between the\npost-edited MT and the original MT. The rule-based APE is the same for both sys-\ntems, and the feedback APE is corpus-level for MT A and phrase-level for MT B.\nto have low inter-annotator agreement (Callison-\nBurch et al., 2007). Each annotator was asked to\nselect which of two sentences matched the mean-\ning of one reference sentence the best, or to se-\nlect “about the same.” The tokens that differed\nbetween the translations were automatically high-\nlighted, and their order was randomized. The in-\nstructions explicitly said to ignore minor gram-\nmatical errors and focus only on how the meaning\nof each translation matched the reference, and in-\ncluded a number of example judgments.\nWe compared each post-edited sentence to the\nbaseline MT. For each comparison, we collected\nfive “trusted” judgments (as defined by Crowd-\nFlower) according to how well they did on our\ngold-standard questions. For clarity, we are re-\nporting results using macro aggregation, in other\nwords, the number of times overall that a particu-\nlar APE was voted better than, worse than, or about\nthe same as the original MT.\n6 Results\nTable 2 shows the percentage of sentences with\ndetected errors for which the correction algorithm\nfound a suggested solution. 
These sentences were\npassed to each APE, which could then decide to\nmodify the sentence or leave it unchanged. The\npercentage of all sentences that were changed by\neach APE is also shown in Table 2.\nThe web genre has more errors than the\nnewswire genre, likely because informal text is\nmore difficult for both MT systems to translate.\nMT A has twice as many sentences with detected\nerrors as MT B. This is not a reflection of relative\nMT quality (both systems have comparable BLEU\nscores), but rather a limitation of the error detect-\ning algorithm. When MT A deletes a word, it is\nfrequently dropped as a single token, which is sim-\nple to detect as a null alignment. Missing words in\nMT B are frequently deleted as part of a phrase, so\nthey are more difficult to detect (e.g., mistranslat-ing “white house” as “white” does not get flagged).\nThe impact of the APEs also varies depend-\ning on how many sentences with detected errors\nwere actually changed by the APE. The rule-based\nAPE almost always applies the edits. The corpus-\nlevel APE also modified most of the sentences,\nsince all of the corrections were applied to all of\nthe re-translated sentences. However, the phrase-\nlevel feedback APE frequently retained the origi-\nnal translation.\nBoth of these factors mean that the potential\nimprovement from post-editing varies significantly\nby experimental setting, from only 15% of the sen-\ntences by the phrase-based feedback (MT B) on the\nnews corpus, up to 64% of the corpus by the rule-\nbased APE for MT A on the web corpus.\n6.1 Automatic Metric Results\nTable 3 shows the automatic metric scores for both\nMT systems, across both datasets. For the base-\nline MT output, the raw score is shown, and for the\nAPEs, the change in score between the post-edited\nMT and the baseline MT is shown. (Since post-\nediting only changes a fraction of sentences in the\ncorpus, the score changes are generally small.)\nAll APEs improve the TERp-a score across all\nconditions3, with the feedback APEs often outper-\nforming the rule-based APE. The feedback APEs\nalso improve the Meteor-a score across all condi-\ntions, while the rule-based APE has mixed Me-\nteor results. None of the APEs improve the BLEU\nscore: the rule-based APE is always significantly\nworse than the original MT, while the feedback\nAPEs have either a negative or negligible impact.\nThe positive improvements in TERp-a and\nMeteor-a suggest that the APEs are improving ade-\nquacy. In general, the feedback APEs improve the\nautomatic scores more than the rule-based APE,\nalthough the rule-based APE actually edits more\nsentences in the corpus than the feedback APEs.\n3Since TERp is an error metric, smaller scores are better.\n116\n0%10%20%30%40%50%60%\nr\nule-based corpus\nfeedbackrule-based corpus\nfeedbackrule-based phrase\nfeedbackrule-based phrase\nfeedback\nmt08-nw mt08-wb mt08-nw mt08-wbPE more adequate Base more adequate About the same Not edited\nMT A MT BFigure 1: Percentage of post-edited sentences that were judged more adequate, less adequate or about the same as the original\nMT. “Not edited” is the percentage of sentences with errors that the APE decided not to modify.\nThe feedback APEs also always have better BLEU\nscores than the rule-based APE. 
The negative im-\npact of APEs on BLEU score is not surprising,\nsince they work by adding content to the transla-\ntions, which is more likely to improve translation\nrecall than precision.\n6.2 Human-Annotated Adequacy Results\nFigure 1 shows the percentage of post-edited sen-\ntences that were judged more adequate, less ade-\nquate or the same as the original MT, and the per-\ncentage of sentences with errors that the APE did\nnot edit. Of the sentences that were post-edited,\nthe APEs improved adequacy 30-56% of the time.\nAcross both MT systems and both datasets, post-\nediting improved adequacy much more often than\nit degraded it: the ratio of improved sentences to\ndegraded sentences varied from 1.7 to 4.1. For\nboth MT systems, the APEs had a larger impact\non the web corpus than the newswire corpus, both\nbecause more errors were detected in the web cor-\npus and because the APEs edited errors more often\nin the web corpus.\nWe were surprised to find that the rule-based\nAPE improved adequacy more often than the feed-\nback APEs, across both MT systems and genres,\nespecially given that the automatic metrics favored\nthe feedback APEs. To understand the results\nbetter, we did another crowd-sourced evaluation,\ncomparing the fluency of the rule-based and feed-\nback post-edited sentences (when both APEs made\nchanges). The sentences produced by the feedback\nAPEs were judged more fluent than the rule-based\nAPE sentences across all conditions.\nThe fluency evaluation shows the relative ad-\nvantages of the different approaches. The rule-\nbased APE does introduce new, correct informa-\ntion into the translations, but at the expense of flu-\nency. With extra effort, the meaning of these sen-\ntences can usually be inferred, especially when the\nrest of the sentence is fluent (as in example 1a).On the other hand, the feedback APEs try to bal-\nance the post-editor’s request to include more in-\nformation in the sentence against the goal of the\ndecoder to produce fluent output. But the need for\nfluency also led to fewer modified sentences, par-\nticularly for phrase-level feedback. In cases where\nboth APE approaches improve the adequacy, the\nfeedback approach is better because it produces\nmore fluent sentences. But in cases where the feed-\nback approach does not modify the sentence, the\nrule-based approach can often still improve the ad-\nequacy of the translation at the expense of fluency.\n7 Conclusions and Future Work\nWe described several APE techniques: rule-based\nin addition to corpus-level and phrase-level feed-\nback. Whereas previous APEs focused primar-\nily on translation fluency and grammaticality, our\nAPEs targeted adequacy errors. Manual analysis\nshowed that post-editing was effective in improv-\ning the adequacy of the original MT output 30-\n56% of the time, across two MT systems and two\ntext genres. The APEs had a larger impact on the\nweb text than the newswire, indicating that they are\nparticularly useful for hard-to-translate genres.\nManual evaluation of the APEs revealed a trade-\noff between fluency and control. The rule-based\nAPE allowed control over which errors to correct\nand exactly how to correct them, but was limited\nto two basic edit operations that often led to dis-\nfluent sentences. The feedback APEs produced\nsentences that were more fluent, but they relied on\nMT decoders that might or might not carry out the\ncorrections. 
The corpus-level feedback APE was\nthe least targeted, because suggestions passed to\nthe MT system could affect any re-translated sen-\ntence, even those where the phrase was translated\ncorrectly. Surprisingly, it was still able to improve\nadequacy. The phrase-level feedback APE allowed\nmore targeted error correction, yet had the least\n117\nimpact because it often ignored the corrections.\nIn future work, we plan to improve the error de-\ntection module to handle additional types of ade-\nquacy errors, in order to detect more of the ade-\nquacy errors made by MT B. We would also like\nto encourage the phrase-level APE to carry out\nour corrections more often. Another direction for\nresearch is including syntactic information in the\nrule-based APE, for more fluent translations.\nThe APEs were motivated by the CLQA task,\nwhere adequacy errors can make correct answers\nappear incorrect after translation. We believe that\nAPE is particularly suitable for task-oriented MT,\nwhere black box MT systems must be adapted to\nthe needs of a specific task. We plan to do a task-\nbased evaluation of the adequacy-oriented APEs,\nto measure their impact on CLQA relevance.\nAcknowledgments\nThis material is based upon work supported by DARPA under\nContract Nos. HR0011-12-C-0016 and HR0011-12-C-0014.\nAny opinions, findings, and conclusions expressed in this ma-\nterial do not necessarily reflect the views of DARPA. The re-\nsearch leading to these results has received funding from the\nEuropean Union Seventh Framework Programme (FP7-ICT-\n2009-4) under grant agreement number 247762.\nReferences\nBoschee, Elizabeth, Marjorie Freedman, Roger Bock, John\nGraettinger, and Ralph Weischedel. 2010. Error analy-\nsis and future directions for distillation. In Handbook of\nNatural Language Processing and Machine Translation.\nBrants, Thorsten, Ashok C. Popat, Peng Xu, Franz J. Och, and\nJeffrey Dean. 2007. Large language models in machine\ntranslation. In EMNLP-CoNLL, pp. 858–867.\nBuckwalter, Tim. 2004. Buckwalter arabic morphological\nanalyzer version 2.0. LDC2004L02, ISBN 1-58563-324-0.\nCallison-Burch, Chris, Cameron Fordyce, Philipp Koehn,\nChristof Monz, and Josh Schroeder. 2007. (meta-) evalu-\nation of machine translation. In StatMT ’07: Proc. of the\nSecond WMT, pp. 136–158.\nCarpuat, Marine, Yuval Marton, and Nizar Habash. 2012.\nImproved arabic-to-english statistical machine translation\nby reordering post-verbal subjects for word alignment.\nMachine Translation, 26:105–120.\nCondon, Sherri L., Dan Parvaz, John S. Aberdeen, Christy\nDoran, Andrew Freeman, and Marwan Awad. 2010. Eval-\nuation of machine translation errors in English and Iraqi\nArabic. In LREC.\nde Gispert, Adri `a, Gonzalo Iglesias, Graeme Blackwood, Ed-\nuardo R. Banga, and William Byrne. 2010. Hierarchical\nphrase-based translation with weighted finite-state trans-\nducers and shallow-n grammars. Computational Linguis-\ntics, 36(3):505–533.\nDenkowski, Michael and Alon Lavie. 2011. Meteor 1.3: Au-\ntomatic Metric for Reliable Optimization and Evaluation\nof Machine Translation Systems. In EMNLP 2011: Proc.\nof the Sixth WMT.\nDoyon, Jennifer, Christine Doran, C. Donald Means, and\nDomenique Parr. 2008. Automated machine translation\nimprovement through post-editing techniques: analyst and\ntranslator experiments. In AMTA, pp. 346–353.\nElming, Jakob. 2006. Transformation-based corrections of\nrule-based MT. In EAMT, pp. 219–226.Finkel, Jenny Rose, Trond Grenager, and Christopher Man-\nning. 2005. 
Incorporating non-local information into in-\nformation extraction systems by Gibbs sampling. In ACL,\npp. 363–370.\nHabash, Nizar, Owen Rambow, and Ryan Roth. 2009.\nMADA+TOKAN: A toolkit for Arabic tokenization, di-\nacritization, morphological disambiguation, pos tagging,\nstemming and lemmatization. Proc. of the 2nd Inter-\nnational Conference on Arabic Language Resources and\nTools (MEDAR), pp. 242–245.\nHabash, Nizar. 2010. Introduction to Arabic Natural Lan-\nguage Processing. Morgan & Claypool Publishers.\nIsabelle, Pierre, Cyril Goutte, and Michel Simard. 2007. Do-\nmain adaptation of MT systems through automatic post-\nediting. MT Summit XI.\nKnight, Kevin and Ishwar Chander. 1994. Automated poste-\nditing of documents. In AAAI ’94, pp. 779–784.\nKoehn, Philipp, Hieu Hoang, Alexandra Birch, Chris\nCallison-Burch, Marcello Federico, Nicola Bertoldi,\nBrooke Cowan, Wade Shen, Christine Moran, Richard\nZens, Chris Dyer, Ond ˇrej Bojar, Alexandra Constantin,\nand Evan Herbst. 2007. Moses: open source toolkit for\nstatistical machine translation. In ACL ’07: Interactive\nPoster and Demonstration Sessions, pp. 177–180.\nMa, Wei-Yun and Kathleen McKeown. 2009. Where’s the\nverb?: correcting machine translation during question an-\nswering. In ACL-IJCNLP, pp. 333–336.\nMare ˇcek, David, Rudolf Rosa, Petra Galu ˇsˇc´akov ´a, and Ondrej\nBojar. 2011. Two-step translation with grammatical post-\nprocessing. In Proc. of the Sixth WMT, pp. 426–432.\nPapineni, Kishore, Salim Roukos, Todd Ward, and Wei jing\nZhu. 2002. BLEU: a method for automatic evaluation of\nmachine translation. In ACL, pp. 311–318.\nParton, Kristen and Kathleen McKeown. 2010. MT error de-\ntection for cross-lingual question answering. In COLING\n(Posters), pp. 946–954.\nParton, Kristen, Kathleen McKeown, James Allan, and En-\nrique Henestroza. 2008. Simultaneous multilingual search\nfor translingual information retrieval. In CIKM, pp. 719–\n728.\nPopovi ´c, Maja and Hermann Ney. 2007. Word error rates:\nDecomposition over POS classes and applications for error\nanalysis. In Proc. of the Second WMT, pp. 48–55.\nSadat, Fatiha and Nizar Habash. 2006. Combination of ara-\nbic preprocessing schemes for statistical machine transla-\ntion. In Proceedings of the Conference of the Association\nfor Computational Linguistics, Sydney, Australia.\nSimard, Michel, Cyril Goutte, and Pierre Isabelle. 2007.\nStatistical phrase-based post-editing. In HLT-NAACL, pp.\n508–515.\nSnover, Matthew, Nitin Madnani, Bonnie J. Dorr, and Richard\nSchwartz. 2009. Fluency, adequacy, or HTER?: exploring\ndifferent human judgments with a tunable MT metric. In\nStatMT ’09: Proc. of the Fourth WMT, pp. 259–268.\nSpecia, Lucia, Najeh Hajlaoui, Catalina Hallett, and Wilker\nAziz. 2011. Predicting machine translation adequacy. In\nMT Summit XIII.\nStymne, Sara and Lars Ahrenberg. 2010. Using a grammar\nchecker for evaluation and postprocessing of statistical ma-\nchine translation. In Proc. of the Seventh International\nConference on Arabic Language Resources and Tools.\nSuzuki, Hirokazu. 2011. Automatic post-editing based on\nSMT and its selective application by sentence-level auto-\nmatic quality evaluation. MT Summit XIII.\nToutanova, Kristina, Dan Klein, Christopher D. Manning, and\nYoram Singer. 2003. Feature-rich part-of-speech tagging\nwith a cyclic dependency network. In NAACL-HLT, pp.\n173–180.\nVilar, David, Jia Xu, Luis Fernando D’Haro, and Hermann\nNey. 2006. Error analysis of machine translation output.\nInLREC, pp. 
697–702.
{ "decision": "Unknown", "reviews": [] }
0
0
[]
[]
{ "id": "55bgHz3aFgB", "year": null, "venue": "EAMT 2006", "pdf_link": "https://aclanthology.org/2006.eamt-1.18.pdf", "forum_link": "https://openreview.net/forum?id=55bgHz3aFgB", "arxiv_id": null, "doi": null }
{ "title": "Leveraging Recurrent Phrase Structure in Large-scale Ontology Translation", "authors": [ "G. Craig Murray", "Bonnie J. Dorr", "Jimmy Lin", "Jan Hajic", "Pavel Pecina" ], "abstract": "G. Craig Murray, Bonnie J. Dorr, Jimmy Lin, Jan Hajič, Pavel Pecina. Proceedings of the 11th Annual conference of the European Association for Machine Translation. 2006.", "keywords": [], "raw_extracted_content": null, "main_paper_content": null }
{ "decision": "Unknown", "reviews": [] }
0
0
[]
[]
{ "id": "CWU_aUal6A", "year": null, "venue": "EAMT 2020", "pdf_link": "https://aclanthology.org/2020.eamt-1.6.pdf", "forum_link": "https://openreview.net/forum?id=CWU_aUal6A", "arxiv_id": null, "doi": null }
{ "title": "Incorporating External Annotation to improve Named Entity Translation in NMT", "authors": [ "Maciej Modrzejewski", "Miriam Exel", "Bianka Buschbeck", "Thanh-Le Ha", "Alexander Waibel" ], "abstract": null, "keywords": [], "raw_extracted_content": "Incorporating External Annotation to improve Named Entity Translation\nin NMT\nMaciej Modrzejewski Thanh-Le Ha Alexander Waibel\nInstitute for Anthropomatics and Robotics\nKIT - Karlsruhe Institute of Technology, Germany\[email protected]\[email protected]\nMiriam Exel Bianka Buschbeck\nSAP SE, Walldorf, Germany\[email protected]\nAbstract\nThe correct translation of named entities\n(NEs) still poses a challenge for conven-\ntional neural machine translation (NMT)\nsystems. This study explores methods\nincorporating named entity recognition\n(NER) into NMT with the aim to improve\nnamed entity translation. It proposes an\nannotation method that integrates named\nentities and inside–outside–beginning\n(IOB) tagging into the neural network\ninput with the use of source factors. Our\nexperiments on English →German and\nEnglish→Chinese show that just by\nincluding different NE classes and IOB\ntagging, we can increase the BLEU score\nby around 1 point using the standard test\nset from WMT2019 and achieve up to\n12% increase in NE translation rates over\na strong baseline.\n1 Introduction\nThe translation of named entities (NE) is challeng-\ning because new phrases appear on a daily basis\nand many named entities are domain specific, not\nto be found in bilingual dictionaries. Improving\nnamed entity translation is important to transla-\ntion systems and cross-language information re-\ntrieval applications (Jiang et al., 2007). Conven-\ntional neural machine translation (NMT) systems\nare expected to translate NEs by learning complex\nlinguistic aspects and ambiguous terms from the\ntraining corpus only. When faced with named en-\ntities, they are found to be occasionally distorting\nc/circlecopyrt2020 The authors. This article is licensed under a Creative\nCommons 3.0 licence, no derivative works, attribution, CC-\nBY-ND.location, organization or person names and even\nsometimes ignoring low-frequency proper names\naltogether (Koehn and Knowles, 2017).\nThis paper explores methods incorporating\nnamed entity recognition (NER) into NMT with\nthe aim to improve NE translation. NER systems\nare often adopted as an early annotation step\nin many Natural Language Processing (NLP)\npipelines for applications such as question an-\nswering and information retrieval. This work\nexplores an annotation method that integrates\nnamed entities and inside–outside–beginning\n(IOB) (Ramshaw and Marcus, 1999) tagging into\nthe neural network input with the use of source\nfactors. In our experiments, we focus on three NE\nclasses: organization, location and person, and use\nthe state-of-the-art encoder-decoder Transformer\nnetwork. We also investigate how the granularity\nof NE class labels influences NE translation\nquality and conclude that specific labels contribute\nto the NE translation improvement. Further,\nwe execute an extensive evaluation of the MT\noutput assessing the influence of our annotation\nmethod on NE translation. 
Our experiments on English→German and English→Chinese show that by just including different NE classes and IOB tagging, we can increase the BLEU score by around 1 point using the standard test set from WMT2019 and achieve up to a 12% increase in NE translation rates over a strong baseline.

2 Related Work

Several research groups propose translating named entities prior to the translation of the whole sentence by an external named entity translation model. Li et al. (2018a), Yan et al. (2018) and Wang et al. (2017) follow the "tag-replace" training method, using an external character-level sequence-to-sequence model to translate named entities. Li et al. (2018b) explore inserting inline annotations into the data that provide information about named entity features. Such annotations are inserted into the source sentence in the form of XML tags, consisting of XML boundary tags and NE class labels.

En BPE only:                   Belfast - Gi@@ ants won thanks to Patri@@ ck D@@ w@@ yer
En fine-grained:               Belfast|2 -|0 Gi@@|3 ants|3 won|0 thanks|0 to|0 Patri@@|1 ck|1 D@@|1 w@@|1 yer|1
En coarse-grained:             Belfast|1 -|0 Gi@@|1 ants|1 won|0 thanks|0 to|0 Patri@@|1 ck|1 D@@|1 w@@|1 yer|1
En IOB tagging:                Belfast|B -|O Gi@@|B ants|I won|O thanks|O to|O Patri@@|B ck|I D@@|I w@@|I yer|I
En Inline Ann. (fine-grained): <LOC>Belfast</LOC> - <ORG>Gi@@ ants</ORG> won thanks to <PER>Patri@@ ck D@@ w@@ yer</PER>
Table 1: Different annotation configurations; i. fine-grained: (0) for a regular sub-word (default), (1) for NE class Person, (2) for NE class Location, (3) for NE class Organization; ii. coarse-grained: (0) default, (1) to denote a NE

Recently, researchers have shown the benefit of explicitly encoding linguistic features, in the form of source factors, into NMT (Sennrich and Haddow, 2016; García-Martínez et al., 2016). Dinu et al. (2019) use source factors successfully to enforce terminology. The work of Ugawa et al. (2018) is similar to ours in that they also incorporate NE tags with the use of source factors into the NMT model to improve named entity translation. They, however, introduce a chunk-level long short-term memory (LSTM) (Hochreiter and Schmidhuber, 1997) layer over a word-level LSTM layer into the encoder to better handle compound named entities. Furthermore, they use a different network architecture (LSTM) and apply a different annotation technique (IO tagging) than we explore (IOB tagging). Finally, the work at hand provides an extensive evaluation of NE translation quality (Section 5.2), including a human assessment (Section 5.3).

3 NMT with NE tagging

We explore incorporating NE information as additional parallel streams (source factors) to signal NE occurrence in the fashion described in Sennrich and Haddow (2016). Source factors provide additional word-level information, are applied to the source language only, and take the form of supplementary embeddings that are either added or concatenated to the word embeddings. This is illustrated with the following formula:

E · x = ⊕_{f ∈ F} E_f · x_{if}    (1)

where ⊕ ∈ {Σ, ‖}, (·) denotes a matrix-vector multiplication, E_f is a feature embedding matrix, x_i is the i-th word from the source sentence, and F is a finite, arbitrary set of word features.
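To make Equation (1) concrete, the following minimal NumPy sketch (our own illustration, not the paper's implementation; the helper name combine_factor_embeddings, the array shapes and the toy values are assumptions) shows how a word-embedding stream and a single source-factor stream could be combined by summation or concatenation, mirroring the "sum" and "concat" variants used later in the experiments.

import numpy as np

def combine_factor_embeddings(word_ids, factor_ids, E_word, E_factor, mode="sum"):
    """Combine word embeddings with one source-factor embedding stream.

    word_ids / factor_ids: integer indices, one per (sub-)word position.
    E_word: (vocab_size, d_word) word embedding matrix.
    E_factor: (num_factor_values, d_factor) factor embedding matrix.
    mode: "sum" requires d_factor == d_word; "concat" appends the factor vector.
    """
    word_vecs = E_word[np.asarray(word_ids)]        # (seq_len, d_word)
    factor_vecs = E_factor[np.asarray(factor_ids)]  # (seq_len, d_factor)
    if mode == "sum":
        return word_vecs + factor_vecs
    return np.concatenate([word_vecs, factor_vecs], axis=-1)

# Toy example: 3 sub-words, factor values 0 (default) and 1 (Person).
rng = np.random.default_rng(0)
E_word = rng.normal(size=(10, 4))       # tiny vocabulary, d_word = 4
E_factor_sum = rng.normal(size=(4, 4))  # same width as d_word for "sum"
E_factor_cat = rng.normal(size=(4, 2))  # d_factor = 2 for "concat"

print(combine_factor_embeddings([1, 2, 3], [1, 1, 0], E_word, E_factor_sum, "sum").shape)     # (3, 4)
print(combine_factor_embeddings([1, 2, 3], [1, 1, 0], E_word, E_factor_cat, "concat").shape)  # (3, 6)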
While we\nuse a state-of-the-art encoder-decoder Transformer\nnetwork, our approach does not modify the stan-\ndard NMT model architecture, thus can be applied\nto any sequence-to-sequence NMT model.\nFurther, we also explore whether the NE class\ngranularity may influence translation quality and\nhelp decrease word ambiguity. For this purpose,\nwe define a “fine-grained” case, where we use spe-\ncific NE class labels (e.g. person, location, orga-\nnization) and also a “coarse-grained” case, where\nwe use two different source factor values only:\n(0) as default and (1) to denote a named entity\nin a generic manner. Additionally, we investi-\ngate whether inside–outside–beginning (IOB) tag-\nging (Ramshaw and Marcus, 1999) used to sig-\nnalize where a NE begins and ends as a second\ninput feature may guide models to translate com-\npound named entities better. In IOB tagging, (B)\nindicates the beginning, (I) the inside and (O) the\noutside of a NE (a regular word or a sequence of\nwords).\nWe annotate source sentences with an external\nNER system. Examples for the different annota-\ntion strategies (that we experiment with) are pre-\nsented in Table 1. Each sub-word is assigned an in-\ndex denoting its corresponding source factor value.\nAs our goal resembles that of Li et al. (2018b),\nwe compare our approach against their inline an-\nnotation method with XML boundary tags. Li et\nal. (2018b) use specific NE class labels, which cor-\nrespond to the “fine-grained” case in our work.\nWe refer to their approach as “Inline Ann. (fine-\ngrained)” and present this annotation method in\nTable 1.\n4 Experiments\n4.1 Parallel data & pre-processing\nWe train NMT systems for English →German and\nEnglish→Chinese on data of the WMT2019 news\nEn→De En →Zh\nNo. of sentences 2,146,644 2,128,234\nNo. of sentences with NE 1,082,873 1,153,545\nPercentage ≈50.44% ≈53.95%\nORG labels 983,558 (53%) 1,325,462 (57%)\nPER labels 223,309 (12%) 211,892 (9%)\nLOC labels 639,304 (35%) 796,269 (34%)\nTable 2: Occurrences of NE annotations in the training\ndatasets\ntranslation task.1For English →German we use the\ndata from Europarl v9 and news commentary data\nv14. For English →Chinese the models are trained\non news commentary v14 and UN Parallel Corpus\nv1.0. The latter dataset is shortened to match the\nsize of the training dataset for English →German\nby using the newest data from the end of the corpus\nfor training, see also Table 2.\nAs NE Recognition is an active research field\nand the search for best recognition methods con-\ntinues, the quality of NER systems may vary under\ndifferent research scenarios and domains (Goyal\net al., 2018). Incorrect NE annotation in the data\nmay influence the results of this work negatively.\nTherefore, we focus on three well-researched NE\nclasses: Person ,Location andOrganization , limit-\ning, thus, the possibility of incorrect annotation.\nWe use spaCy Named Entity Recognition\n(NER) system2to recognize named entities in\nthe source sentences. The ratio of sentences in\nthe training data with at least one named entity\noccurrence (based on three NE classes) in the\nsource sentence amounts to 50.44% for En–De and\n53.95% for En–Zh. Table 2 presents the details.\nWe tokenize the English and German corpora\nusing the spaCy Tokenizer3, and use the Open-\nNMT Tokenizer4(mode aggressive) on the Chi-\nnese side. 
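As a rough illustration of the two factor streams described above (NE class and IOB), the sketch below propagates word-level NER tags onto BPE sub-words marked with the usual "@@" continuation symbol. It is a simplified stand-in for the actual data-preparation pipeline: the function name factor_streams and the tag-to-index mapping are our own assumptions, and in practice the word-level tags would come from spaCy.

# Hypothetical helper, not the authors' pipeline: map word-level NER tags
# (IOB + entity type) onto BPE sub-words and emit the two factor streams.
CLASS_TO_FACTOR = {"PER": "1", "LOC": "2", "ORG": "3"}  # fine-grained scheme of Table 1

def factor_streams(bpe_tokens, word_tags):
    """bpe_tokens: sub-words where "@@" marks a non-final piece of a word.
    word_tags: one (iob, ent_type) pair per *word*, e.g. ("B", "ORG") or ("O", None).
    Returns (class_factors, iob_factors), one value per sub-word."""
    class_factors, iob_factors = [], []
    word_idx, first_piece = 0, True
    for token in bpe_tokens:
        iob, ent_type = word_tags[word_idx]
        class_factors.append(CLASS_TO_FACTOR.get(ent_type, "0"))
        # A "B" tag only applies to the first sub-word of the word;
        # the remaining pieces of that word are inside the entity -> "I".
        if iob == "B" and not first_piece:
            iob_factors.append("I")
        else:
            iob_factors.append(iob)
        if token.endswith("@@"):
            first_piece = False
        else:
            word_idx += 1
            first_piece = True
    return class_factors, iob_factors

bpe = ["Belfast", "-", "Gi@@", "ants", "won"]
tags = [("B", "LOC"), ("O", None), ("B", "ORG"), ("O", None)]
print(factor_streams(bpe, tags))
# (['2', '0', '3', '3', '0'], ['B', 'O', 'B', 'I', 'O'])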
Further, we perform a joint source\nand target Byte-Pair encoding (BPE) (Sennrich et\nal., 2016) for English →German and disjoint for\nEnglish→Chinese, both with 32,000 merge oper-\nations. For every source sentence in the training\ndata (after applying BPE), we generate two files\nwith source factors: i. marking named entities (ei-\nther the coarse-grained or the fine-grained case),\nii. marking IOB tagging. The baseline model is\ntrained with no external annotation.\n1http://www.statmt.org/wmt19/translation-task.html\n2https://spacy.io/usage/linguistic-features \\#named-entities\n3https://spaCy.io/api/tokenizer\n4https://github.com/OpenNMT/TokenizerLabel type Variant IOB En →De En→Zh\nfine-grained sum no 33.61 26.29\nfine-grained concat 8 yes 33.11 26.45\nfine-grained sum yes 33.07 26.26\ncoarse-grained concat 8 yes 32.90 26.08\ncoarse-grained sum yes 32.70 26.34\nBaseline no 32.60 26.29\nInline Ann. (fine-grained) no 32.50 26.05\nTable 3: BLEU scores on newstest2019 (WMT2019)\n4.2 NMT architecture\nWe use the Sockeye machine translation frame-\nwork (Hieber et al., 2017) for our experiments\nand train our models with a Transformer network\n(Base) (Vaswani et al., 2017) with 6 encoding and\n6 decoding layers all with 2048 hidden units. We\nuse word embeddings of size 512, dropout prob-\nability for multi-head attention of size 0.1, batch\nsize of 4096 tokens, a maximum sequence length\nof 100 and source factor embedding of size 8 for\nthe concatenation case. Each model is trained on 1\nGPU Tesla T4. Training finishes if there is no im-\nprovement for 32 consecutive checkpoints on the\nvalidation data newstest2018 (validation data from\nthe WMT2019 news translation task).\n5 Results\n5.1 General translation quality\nWe perform the evaluation on the standard test\ndataset newstest2019 from the WMT2019 news\ntranslation task. It has identical content for En–De\nand En–Zh and contains 1997 sentences, in which\n63.95% of the sentences on the English side con-\ntain at least one named entity. There are 2681\nnamed entity occurrences; 908 belong to the la-\nbelLocation (34% of all NEs), 870 to the label\nPerson (32%) and 903 to the label Organization\n(34%); annotated with spaCy NER. Each sentence\nwith named entity occurrence contains, on aver-\nage, approx. 2 NEs. To assess the general transla-\ntion quality, we calculate the BLEU score using the\nevaluation script multi-bleu-detok.perl from Moses\n(Koehn et al., 2007). We detokenize the MT out-\nput with detokenizer.perl (Koehn et al., 2007) for\nEn–De and use OpenNMT detokenize function to\ndo the same for En–Zh.\nTable 3 displays the results. Column “Label\ntype” denotes whether specific (“fine-grained”) or\ngeneric (“coarse-grained”) NE labels are used; col-\numn “Variant” describes whether source factors\nare added (“sum”) or concatenated (“concat”) to\nEn→De\nLabel type Variant IOB LOC PER ORG Total\nfine-grained sum no 73.68 70.11 61.79 69.89\nfine-grained concat 8 yes 72.87 71.96 63.41 70.67\nfine-grained sum yes 75.71 70.85 69.11 72.39\ncoarse-grained concat 8 yes 74.09 71.22 62.60 70.67\ncoarse-grained sum yes 75.30 71.22 65.04 71.61\nBaseline no 74.09 71.59 60.16 70.36\nInline Ann. 
(fine-grained) no 70.45 67.16 61.79 67.39\nTable 4: Results of the automatic in-depth analysis on ran-\ndom300 dataset for En–De with spaCy NER, NE match rate\nin %\nthe word embeddings; column “IOB” describes\nwhether IOB tagging is used as a second source\nfactor stream.\nAlmost all models annotated with source fac-\ntors show improvements w.r.t BLEU in compar-\nison to the baseline; with one En–Zh model be-\ning insignificantly worse. Overall, the fine-grained\nmodel with source factors added and no use of IOB\ntagging seems to perform best and achieves around\none BLEU point more than the baseline (for En–\nDe). As the BLEU score only assesses the qual-\nity of NE translation indirectly, we do not deem it\nto be a reliable evaluation metric to assess the NE\ntranslation quality. As named entities affect only\na small part of a sentence, we do not expect high\nBLEU variations and continue with the in-depth\nnamed entity analysis in the next section.\n5.2 Automatic hit/miss NE evaluation\nIn this section we execute an automatic in-depth\nanalysis of NE translation quality with spaCy\n(German models) and Stanford NER (Finkel et al.,\n2005) (Chinese models). For this purpose, we\nrandomly select 100 sentences from newstest2019\ncontaining at least one named entity for each of\nthe three classes (PER, LOC, ORG) on the English\nside of the corpus, in total 300 sentences. We re-\nfer to this dataset in later part of this work as ran-\ndom300 . We annotate the reference sentence with\nan external NER system (spaCy or Stanford NER)\nto find named entities and compare if they appear\nin the hypothesis in the same form (string-based).\nIf yes, we define this case as a “hit”, otherwise as a\n“miss” and calculate the result according to the NE\nmatch rate formula:hit\nhit+miss. Table 4 and Table 5\ndisplay the results. Column “Total” calculates the\naccumulated NE match rate for three named entity\nclasses.\nAt first glance, we see that the result values\nfor En–De are significantly higher than for En–En→Zh\nLabel type Variant IOB LOC PER ORG Total\nfine-grained sum no 41.67 20.07 31.62 24.41\nfine-grained concat 8 yes 33.33 23.36 36.76 27.96\nfine-grained sum yes 41.67 20.44 33.09 25.12\ncoarse-grained concat 8 yes 33.33 22.63 33.09 26.30\ncoarse-grained sum yes 33.33 21.90 38.97 27.73\nBaseline no 33.33 18.98 35.29 24.64\nInline Ann. (fine-grained) no 33.33 19.71 34.56 24.88\nTable 5: Results of the automatic in-depth analysis on ran-\ndom300 dataset for for En–Zh with Stanford NER, NE match\nratein %\nZh. We attribute this to the transliteration issues\nwhich emerge while translating from English to\nChinese and, thus, occurring mismatch between\nthe reference and hypothesis translation. In gen-\neral, the baseline models show high performance\nas a certain amount of NEs has already been seen\nby the network in the training data. Furthermore,\nwe observe improvements in named entity trans-\nlation for En–De and En–Zh among almost all\nclasses, showing that augmenting source sentences\nwith NE information leads to their improved trans-\nlation. There is, however, no consistent improve-\nment in the models not using IOB tagging annota-\ntion. Their total NE match rate values are lower\nthan that one of the baseline models. As such,\nIOB tagging, indicating compound named enti-\nties, proves to be an important piece of informa-\ntion for the NMT systems. 
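The hit/miss counting behind the NE match rate reported above can be summarised in a few lines. The snippet below is our own simplification of the described procedure (string-based containment of reference NEs in the hypothesis), not the evaluation script actually used; the function name and the example sentences are illustrative assumptions.

def ne_match_rate(examples):
    """examples: iterable of (reference_entities, hypothesis_text) pairs,
    where reference_entities are the NE strings found in the reference
    translation by an external NER system."""
    hits, misses = 0, 0
    for ref_entities, hypothesis in examples:
        for entity in ref_entities:
            if entity in hypothesis:   # string-based "hit"
                hits += 1
            else:
                misses += 1
    total = hits + misses
    return hits / total if total else 0.0

examples = [
    (["Alaska State Troopers", "Wasilla"],
     "Palin, 29, aus Wasilla, wurde laut einem Bericht von Alaska verhaftet."),
]
print(round(ne_match_rate(examples), 2))  # 0.5 -> one hit ("Wasilla"), one miss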
Further, augmenting\nthe model with exact NE class labels (fine-grained\ncase) seems to achieve higher NE match rates in\ncomparison to the coarse-grained case. Addition-\nally, coarse-grained models perform better than the\nbaseline. This finding indicates that the mere in-\nformation that a word is a NE proves to be use-\nful to the NMT system even if the class is not\nclearly specified. Inline Annotation does not de-\nliver promising results, contrary to the findings of\nLi et al. (2018b), with the total NE match rate be-\nlow that one of the baseline system (En–De) or in-\nsignificantly above (En–Zh).\nValidation of the NE match rates After hav-\ning executed the automatic in-depth analysis with\nspaCy NER, we wish to validate the results of the\nEn–De models with a second state-of-the-art NER\nsystem: Stanford NER. The analysis is conducted\nin an identical way as earlier and only the En–De\nmodels are analyzed. At the point of writing this\npaper, spaCy does not provide a Chinese model.\nTable 6 presents the results. Column “Total” cal-\nEn→De\nLabel type Variant IOB LOC PER ORG Total\nfine-grained sum no 76.25 76.14 60.00 73.70\nfine-grained concat 8 yes 75.62 77.16 64.62 74.88\nfine-grained sum yes 80.00 78.68 69.23 76.78\ncoarse-grained concat 8 yes 75.62 77.66 67.69 75.36\ncoarse-grained sum yes 77.50 76.65 69.23 76.48\nBaseline no 78.75 76.65 60.00 74.64\nInline Ann. (fine-grained) no 73.75 74.11 60.00 71.80\nTable 6: Results of the automatic in-depth analysis on ran-\ndom300 dataset for En–De with Stanford NER, NE match rate\nin %\nculates the accumulated NE match rate for three\nnamed entity classes.\nFirst, we observe that the overall NE match rates\nare higher than in Table 4. We attribute this phe-\nnomenon to the fact that Stanford NER recognizes\na different set of NEs in the reference sentences\nthan spaCy does. This, however, is not problematic\nas we are interested in the variations in NE match\nrates between the models. In general, there are no\ndifferences in the results of the automatic in-depth\nanalysis, regardless whether spaCy or Stanford is\nused to conduct it. All models trained with IOB\ntags translate NEs more accurately than the base-\nline model does. Again, fine-grained model trained\nwith IOB tags and source factors added to the word\nembeddings achieves the highest NE match rate .\nThe model trained without IOB tags has a lower\nNE match rate than the baseline re-confirming thus\nthe usefulness of the IOB tags.\n5.3 Human hit/miss NE evaluation\nAs NER systems are prone to delivering inaccurate\nresults,5we also perform a human evaluation. It\nconsists in recognizing NEs in the reference trans-\nlation, comparing them to the corresponding NE\ntranslation in the MT output and calculating the NE\nmatch rate on the random300 dataset. We compare\nthe baseline and the best model (highest total NE\nmatch rate in Tables 4 and 5) for En–De and En–\nZh and refer to them as annotated models. If a\nNE is in a different form in the hypothesis than the\nreference proposes or a NE is transliterated into or\nfrom Chinese, but its form is still grammatically\nand semantically correct, its occurrence is counted\nas correct. Human evaluation is executed by one\nnative speaker for each language pair. 
Table 7\n5spaCy’s German model has 83% F1-Score (https://spaCy.io/\nmodels/de) with a warning that it may “perform inconsistently\non many genres”, the same holds for Stanford NER:\nhttps://nlp.stanford.edu/projects/project-ner.shtml.En→De\nLabel type Variant IOB LOC PER ORG Total\nfine-grained sum yes 93.02 83.52 78.01 85.17\nBaseline no 89.77 82.05 70.92 82.14\nEn→Zh\nfine-grained concat 8 yes 73.85 67.04 64.27 68.05\nBaseline no 71.43 61.90 57.35 63.24\nTable 7: Results of the human in-depth evaluation on ran-\ndom300 dataset, NE match rate in %\npresents the results of the human hit/miss evalu-\nation. Column “Total” calculates the accumulated\nNE match rate for three named entity classes.\nThe NE match rate for human hit/miss evalu-\nation is higher than for its automatic counterpart.\nThis is due to the fact that all false positives in the\nreference and false negatives in the hypothesis are\neliminated. Most importantly, we can state that\ntheannotated models perform consistently better\nthan the baseline and, in fact, the incorporation of\nexternal annotation in form of source factors into\nthe source sentence leads to an improvement in NE\ntranslation. There is an increase of 3.67% in the to-\ntalNE match rate value for En–De and 7.61% for\nEn–Zh. Furthermore, we observe the greatest NE\nmatch rate improvement when translating organi-\nzations’ names (+9.99% for En–De, and +12.07%\nfor En–Zh).\n5.4 Accuracy of spaCy NER\nWhile executing the human hit/miss NE evalua-\ntion, we also annotated false positives and false\nnegatives in the reference, executing, thus, a qual-\nity check of spaCy NER on data from the news\ndomain (on random300 dataset, German model\nonly). Precision value is 84.43% and recall\namounts to 85.93%. The above observation leads\nto the conclusion that incorrect NE annotation may\noccur relatively frequently in the training data. We\nhypothesize that NE annotation with source fac-\ntors may lead to better results if the training data is\nfully correctly annotated.\n5.5 Discussion\nIn this section we discuss our observations based\non the human evaluation and provide translation\nexamples. The use of source factors seems to\nalleviate the problem of ignoring low-frequency\nproper names as the annotated models appear to\nconsistently react to NE occurrence by produc-\ning a translation. The baseline, however, may ig-\nnore more complex NEs, producing, thus, under-\nSource Palin, 29, of Wasilla, Alaska, was arrested (...) according to a report released Saturday by Alaska State Troop-\ners.\nReference Palin, 29, aus Wasilla, Alaska, wurde (...) verhaftet. Gegen ihn liegt be reits ein Bericht (...), so eine Meldung,\ndie am Samstag von den Alaska State Troopers ver¨offentlicht wurde.\nAnnotated Palin, 29 von Wasilla, Alaska, wurde (...) verhaftet (...), wie ein am S amstag von Alaska State Troopers\nver¨offentlichter Bericht besagt.\nBaseline Laut einem Bericht von Alaska , der Samstag ver ¨offentlicht wurde, wurde Palin, 29 von Wasilla, Alaska, (...)\nverhaftet (...).\nSource Saipov, 30, allegedly used a Home Depot rental truck (...).\nReference Saipov, 30, hat (...) angeblich einen Leihwagen von Home Depot (...) 
benutzt (...).\nAnnotated Saipov, 30, soll einen Mietwagen aus dem Home Depot benutzt haben (...).\nBaseline Saipov, 30, soll einen Home Department Depot Rental benutzt haben (...).\nSource The pair’s business had been likened to Gwyneth Paltrow’s Goop brand.\nReference Das Gesch ¨aft der beiden war mit der Marke Goop vonGwyneth Paltrow verglichen worden.\nAnnotated Das Gesch ¨aft des Paares wurde mit der Marke Gop vonGwyneth Paltrow verglichen.\nBaseline Das Gesch ¨aft des Paares wurde mit der Marke von Gwyneth Palop verglichen.\nSource TheGiants got an early two-goal lead through strikes from Patrick Dwyer and Francis Beauvillier.\nReference DieGiants hatten durch Treffer von Patrick Dwyer und Francis Beauvillier eine fr ¨uhe Zwei-Tore-F ¨uhrung.\nAnnotated DieGiganten bekamen durch die Streiks von Patrick Dwyer und Franziskus Beauvillier ein fr ¨uhes Ziel.\nBaseline DieGiganten erhielten durch die Streiks von Patrick Dwyer und Francis Beauvillier ein fr ¨uhes Ziel.\nTable 8: Translation examples: Comparison of the annotated model and baseline for En–De\ntranslation as in the Alaska State Troopers exam-\nple in Table 8. Furthermore, source factors seem\nto guide the annotated models better (in compar-\nison to the baseline) to prevent over-translation,\nas shown in the Home Depot example or miss-\ntranslation ( Gwyneth Paltrow’s Goop ), both exam-\nples are in Table 8.\nOn the other hand, a frequent cause of errors in\ntheannotated models stems from the fact that or-\nganizations’ or persons’ names are translated ver-\nbatim instead of being kept in their original forms,\nas in the Francis/Franziskus andGiants/Giganten\nexample in Table 8. This problem concerns both\ntheannotated model and the baseline. This be-\nhavior may not be desirable for persons’ names,\nyet for organizations’ names the desired output is\ndependent on the context and translation language\npairs.\n6 Conclusion\nOur work focused on establishing if annotating\nnamed entities with the use of source factors leads\nto their more accurate translation. We can state\nthat the general translation quality with the anno-\ntated models improves (improvements in BLEU\nscore). Additionally, in-depth automatic and hu-\nman named entity evaluation prove that the same\nholds true for NE translation.\nThe accuracy of named entity annotation plays\na crucial role during the annotation of named en-\ntities in the training data as well as during evalua-\ntion (automatic hit/miss analysis). By establishing\nspaCy’s F1-Score on random300 during the hu-man hit/miss analysis to amount to approx. 85%,\nwe conclude that the accuracy of any NER sys-\ntem greatly influences the practicability of our ap-\nproach. Therefore, the improvement of named en-\ntity translation is closely related to the improve-\nment of NER systems.\nAcknowledgements\nWe would like to thank Zihan Chen for her help\nwith the human evaluation of the En–Zh transla-\ntion.\nReferences\nDinu, Georgiana, Prashant Mathur, Marcello Federico,\nand Yaser Al-Onaizan. 2019. Training neural ma-\nchine translation to apply terminology constraints.\nInProceedings of the 57th Annual Meeting of the\nAssociation for Computational Linguistics , pages\n3063–3068.\nFinkel, Jenny Rose, Trond Grenager, and Christopher\nManning. 2005. Incorporating non-local informa-\ntion into information extraction systems by gibbs\nsampling. In Proceedings of the 43rd annual meet-\ning on Association for Computational Linguistics ,\npages 363–370. 
Association for Computational Lin-\nguistics.\nGarc ´ıa-Mart ´ınez, Mercedes, Lo ¨ıc Barrault, and Fethi\nBougares. 2016. Factored neural machine transla-\ntion. arXiv preprint arXiv:1609.04621 .\nGoyal, Archana, Vishal Gupta, and Manish Kumar.\n2018. Recent named entity recognition and classi-\nfication techniques: a systematic review. Computer\nScience Review , 29:21–43.\nHieber, Felix, Tobias Domhan, Michael Denkowski,\nDavid Vilar, Artem Sokolov, Ann Clifton, and Matt\nPost. 2017. Sockeye: A toolkit for neural machine\ntranslation. arXiv preprint arXiv:1712.05690 .\nHochreiter, Sepp and J ¨urgen Schmidhuber. 1997.\nLSTM can solve hard long time lag problems. In\nAdvances in neural information processing systems ,\npages 473–479.\nJiang, Long, Ming Zhou, Lee-Feng Chien, and Cheng\nNiu. 2007. Named entity translation with web min-\ning and transliteration. In Proceedings of the 20th in-\nternational joint conference on Artifical Intelligence ,\npages 1629–1634.\nKoehn, Philipp and Rebecca Knowles. 2017. Six chal-\nlenges for neural machine translation. In Proceed-\nings of the First Workshop on Neural Machine Trans-\nlation , pages 28–39.\nKoehn, Philipp, Hieu Hoang, Alexandra Birch, Chris\nCallison-Burch, Marcello Federico, Nicola Bertoldi,\nBrooke Cowan, Wade Shen, Christine Moran,\nRichard Zens, et al. 2007. Moses: Open source\ntoolkit for statistical machine translation. In Pro-\nceedings of the 45th annual meeting of the Associ-\nation for Computational Linguistics companion vol-\nume proceedings of the demo and poster sessions ,\npages 177–180.\nLi, Xiaoqing, Jinghui Yan, Jiajun Zhang, and\nChengqing Zong. 2018a. Neural name transla-\ntion improves neural machine translation. In China\nWorkshop on Machine Translation , pages 93–100.\nSpringer.\nLi, Zhongwei, Xuancong Wang, Aiti Aw, Eng Siong\nChng, and Haizhou Li. 2018b. Named-entity tag-\nging and domain adaptation for better customized\ntranslation. In Proceedings of the Seventh Named\nEntities Workshop , pages 41–46.\nRamshaw, Lance A and Mitchell P Marcus. 1999. Text\nchunking using transformation-based learning. In\nNatural language processing using very large cor-\npora, pages 157–176. Springer.\nSennrich, Rico and Barry Haddow. 2016. Linguistic\ninput features improve neural machine translation.\nInProceedings of the First Conference on Machine\nTranslation: Volume 1, Research Papers , pages 83–\n91.\nSennrich, Rico, Barry Haddow, and Alexandra Birch.\n2016. Neural machine translation of rare words with\nsubword units. In Proceedings of the 54th Annual\nMeeting of the Association for Computational Lin-\nguistics (Volume 1: Long Papers) , pages 1715–1725.\nUgawa, Arata, Akihiro Tamura, Takashi Ninomiya, Hi-\nroya Takamura, and Manabu Okumura. 2018. Neu-\nral machine translation incorporating named entity.\nInProceedings of the 27th International Conference\non Computational Linguistics , pages 3240–3250.Vaswani, Ashish, Noam Shazeer, Niki Parmar, Jakob\nUszkoreit, Llion Jones, Aidan N Gomez, Łukasz\nKaiser, and Illia Polosukhin. 2017. Attention is all\nyou need. In Advances in neural information pro-\ncessing systems , pages 5998–6008.\nWang, Yuguang, Shanbo Cheng, Liyang Jiang, Jiajun\nYang, Wei Chen, Muze Li, Lin Shi, Yanfeng Wang,\nand Hongtao Yang. 2017. Sogou neural machine\ntranslation systems for wmt17. In Proceedings of the\nSecond Conference on Machine Translation , pages\n410–415.\nYan, Jinghui, Jiajun Zhang, JinAn Xu, and Chengqing\nZong. 2018. The impact of named entity translation\nfor neural machine translation. 
In China Workshop\non Machine Translation , pages 63–73. Springer.", "main_paper_content": null }
{ "decision": "Unknown", "reviews": [] }
0
0
[]
[]
{ "id": "EiYf47LJG8n", "year": null, "venue": "EAMT 2023", "pdf_link": "https://aclanthology.org/2023.eamt-1.56.pdf", "forum_link": "https://openreview.net/forum?id=EiYf47LJG8n", "arxiv_id": null, "doi": null }
{ "title": "First WMT Shared Task on Sign Language Translation (WMT-SLT22)", "authors": [ "Mathias Müller", "Sarah Ebling", "Eleftherios Avramidis", "Alessia Battisti", "Michèle Berger", "Richard Bowden", "Annelies Braffort", "Necati Cihan Camgöz", "Cristina España-Bonet", "Roman Grundkiewicz", "Zifan Jiang", "Oscar Koller", "Amit Moryossef", "Regula Perrollaz", "Sabine Reinhard", "Annette Rios Gonzales", "Dimitar Shterionov", "Sandra Sidler-Miserez", "Katja Tissi", "Davy Van Landuyt" ], "abstract": "Mathias Müller, Sarah Ebling, Eleftherios Avramidis, Alessia Battisti, Michèle Berger, Richard Bowden, Annelies Braffort, Necati Cihan Camgoz, Cristina España-Bonet, Roman Grundkiewicz, Zifan Jiang, Oscar Koller, Amit Moryossef, Regula Perrollaz, Sabine Reinhard, Annette Rios Gonzales, Dimitar Shterionov, Sandra Sidler-Miserez, Katja Tissi, Davy Van Landuyt. Proceedings of the 24th Annual Conference of the European Association for Machine Translation. 2023.", "keywords": [], "raw_extracted_content": "First WMT Shared Task on Sign Language Translation (WMT-SLT22)\nMathias M ¨uller\nUniversity of ZurichSarah Ebling\nUniversity of ZurichEleftherios Avramidis\nDFKI BerlinAlessia Battisti\nUniversity of Zurich\nMich `ele Berger\nHfH ZurichRichard Bowden\nUniversity of SurreyAnnelies Braffort\nUniversity of Paris-SaclayNecati Cihan Camg ¨oz\nMeta Reality Labs\nCristina Espa ˜na-Bonet\nDFKI Saarbr ¨uckenRoman Grundkiewicz\nMicrosoftZifan Jiang\nUniversity of ZurichOscar Koller\nMicrosoft\nAmit Moryossef\nBar-Ilan UniversityRegula Perrollaz\nHfH ZurichSabine Reinhard\nHfH ZurichAnnette Rios\nUniversity of Zurich\nDimitar Shterionov\nTilburg UniversitySandra Sidler-Miserez\nHfH ZurichKatja Tissi\nHfH ZurichDavy Van Landuyt\nEuropean Union of the Deaf\nAbstract\nThis paper is a brief summary of the First\nWMT Shared Task on Sign Language\nTranslation (WMT-SLT22), a project\npartly funded by EAMT. The focus of\nthis shared task is automatic translation\nbetween signed and spoken languages.\nDetails can be found on our website1or in\nthe findings paper (M ¨uller et al., 2022).\n1 Project duration\nThe project ran roughly from July 2021 (when the\norganizing commitee was assembled) to December\n2022 (presentation of final results at WMT).\n2 Description of the project\nThis project entailed planning and realizing a\nWMT shared task on automatic translation be-\ntween signed and spoken2languages. Recently,\nYin et al. (2021) called for including signed lan-\nguages in natural language processing (NLP) re-\nsearch. We regard our shared task as a direct an-\nswer to this call. While WMT has a long history\nof shared tasks for spoken languages (Akhbardeh\n© 2023 The authors. This article is licensed under a Creative\nCommons 4.0 licence, no derivative works, attribution, CC-\nBY-ND.\n1https://www.wmt-slt.com/\n2In this paper we use the word “spoken” to refer to any lan-\nguage that is not signed, no matter whether it is represented as\ntext or audio, and no matter whether the discourse is formal\n(e.g. writing) or informal (e.g. 
dialogue).et al., 2021), this is the first time that signed lan-\nguages are included in a WMT shared task.\nThe task is novel in the sense that it requires pro-\ncessing visual information (such as video frames\nor human pose estimation) beyond the well-known\nparadigm of text-to-text machine translation (MT).\nAs a consequence, solutions need to consider a\ncombination of NLP and computer vision (CV)\ntechniques.\nThe task featured two tracks, translating from\nSwiss German Sign Language (DSGS) to German\nand vice versa.\n3 Objectives\nThe project envisioned that there would be bene-\nfits both for Deaf sign language users and for the\nresearch community.\nFor Deaf communities, the shared task aimed for\nbetter access to linguistic tools, including MT, in\ntheir native languages and also to improve recog-\nnition for sign languages.\nFor the MT research community, our goal was to\ninclude sign languages in WMT shared tasks as a\nway of informing researchers about sign languages\nand boosting research on sign language translation.\nMore concretely, we were looking to produce\npublic benchmark data for MT systems, transla-\ntions by many state-of-the-art systems and judge-\nments of translation quality by humans. For sign\nlanguages, such resources did not exist before the\nshared task.\n4 Final results\nMain outcome Seven teams (including one from\nthe University of Zurich whose submission we\nconsider a baseline) participated in our task. All\nof them submitted to the DSGS-to-German track,\nwhile there were no submissions for the second\ntranslation direction, presumably because this di-\nrection is more challenging.\nSeven teams is a high turnout, considering that\nother comparable efforts (such as a shared task\non Taiwanese sign language translation co-located\nwith LoResMT 2021 (Ojha et al., 2021) or the\nworkshop on sign language recognition, transla-\ntion and production (SLRTP) 20223) had fewer\nparticipants.\nWe presented the final results at WMT 2022 in\nAbu Dhabi in December 20224. The shared task\nwas well received and sparked considerable inter-\nest in the machine translation community.\nOther important artifacts Besides a system\nranking and system papers describing state-of-the-\nart techniques, our shared task made the follow-\ning scientific contributions: novel corpora, repro-\nducible baseline systems and new protocols and\nsoftware for human evaluation. Finally, the task\nalso resulted in the first publicly available set of\nsystem outputs and human evaluation scores for\nsign language translation.\n5 Funding agencies\nThis shared task was funded by EAMT (through\nthe call “Sponsorship of Activities”) and by Mi-\ncrosoft AI for Accessibility. We are grateful for\ntheir support which enabled us to provide test data,\nhuman evaluation and interpretation in Interna-\ntional Sign during the WMT conference.\nThe organizing committee further acknowledge\nfunding from the following projects: the EU Hori-\nzon 2020 projects EASIER (grant agreement num-\nber 101016982) and SignON (101017255), the\nSwiss Innovation Agency (Innosuisse) flagship\nIICT (PFFS-21-47) and the German Ministry of\nEducation and Research through the project So-\ncialWear (01IW20002).\n3https://slrtp-2022.github.io/\n4https://www.project-easier.eu/news/2023/\n01/09/easier-at-emnlp-and-wmt-2022/References\nAkhbardeh, Farhad, Arkady Arkhangorodsky, Mag-\ndalena Biesialska, Ond ˇrej Bojar, Rajen Chatter-\njee, Vishrav Chaudhary, Marta R. 
Costa-jussa,\nCristina Espa ˜na-Bonet, Angela Fan, Christian Fe-\ndermann, Markus Freitag, Yvette Graham, Ro-\nman Grundkiewicz, Barry Haddow, Leonie Har-\nter, Kenneth Heafield, Christopher Homan, Matthias\nHuck, Kwabena Amponsah-Kaakyire, Jungo Kasai,\nDaniel Khashabi, Kevin Knight, Tom Kocmi, Philipp\nKoehn, Nicholas Lourie, Christof Monz, Makoto\nMorishita, Masaaki Nagata, Ajay Nagesh, Toshi-\naki Nakazawa, Matteo Negri, Santanu Pal, Allah-\nsera Auguste Tapo, Marco Turchi, Valentin Vydrin,\nand Marcos Zampieri. 2021. Findings of the 2021\nConference on Machine Translation (WMT21). In\nProceedings of the Sixth Conference on Machine\nTranslation , pages 1–88, Online, November. Asso-\nciation for Computational Linguistics.\nM¨uller, Mathias, Sarah Ebling, Eleftherios Avramidis,\nAlessia Battisti, Mich `ele Berger, Richard Bowden,\nAnnelies Braffort, Necati Cihan Camg ¨oz, Cristina\nEspa ˜na-bonet, Roman Grundkiewicz, Zifan Jiang,\nOscar Koller, Amit Moryossef, Regula Perrollaz,\nSabine Reinhard, Annette Rios, Dimitar Shterionov,\nSandra Sidler-miserez, and Katja Tissi. 2022. Find-\nings of the first WMT shared task on sign language\ntranslation (WMT-SLT22). In Proceedings of the\nSeventh Conference on Machine Translation (WMT) ,\npages 744–772, Abu Dhabi, United Arab Emirates\n(Hybrid), December. Association for Computational\nLinguistics.\nOjha, Atul Kr., Chao-Hong Liu, Katharina Kann, John\nOrtega, Sheetal Shatam, and Theodorus Fransen.\n2021. Findings of the LoResMT 2021 shared task\non COVID and sign language for low-resource lan-\nguages. In Proceedings of the 4th Workshop on\nTechnologies for MT of Low Resource Languages\n(LoResMT2021) , pages 114–123, Virtual, August.\nAssociation for Machine Translation in the Ameri-\ncas.\nYin, Kayo, Amit Moryossef, Julie Hochgesang, Yoav\nGoldberg, and Malihe Alikhani. 2021. Including\nSigned Languages in Natural Language Processing.\nInProceedings of the 59th Annual Meeting of the\nAssociation for Computational Linguistics and the\n11th International Joint Conference on Natural Lan-\nguage Processing (Volume 1: Long Papers) , pages\n7347–7360, Online, August. Association for Com-\nputational Linguistics.", "main_paper_content": null }
{ "decision": "Unknown", "reviews": [] }
0
0
[]
[]
{ "id": "KXuLqUYbn4", "year": null, "venue": "EAMT 2014", "pdf_link": "https://aclanthology.org/2014.eamt-1.23.pdf", "forum_link": "https://openreview.net/forum?id=KXuLqUYbn4", "arxiv_id": null, "doi": null }
{ "title": "An efficient two-pass decoder for SMT using word confidence estimation", "authors": [ "Ngoc-Quang Luong", "Laurent Besacier", "Benjamin Lecouteux" ], "abstract": "Ngoc-Quang Luong, Laurent Besacier, Benjamin Lecouteux. Proceedings of the 17th Annual conference of the European Association for Machine Translation. 2014.", "keywords": [], "raw_extracted_content": "An Efficient Two-Pass Decoder for SMT Using Word Confidence\nEstimation\nNgoc-Quang Luong Laurent Besacier\nLIG, Campus de Grenoble\n41, Rue des Math ´ematiques,\nUJF - BP53, F-38041 Grenoble Cedex 9, France\nfngoc-quang.luong,laurent.besacier,benjamin.lecouteux [email protected] Lecouteux\nAbstract\nDuring decoding, the Statistical Machine\nTranslation (SMT) decoder travels over all\ncomplete paths on the Search Graph (SG),\nseeks those with cheapest costs and back-\ntracks to read off the best translations. Al-\nthough these winners beat the rest in model\nscores, there is no certain guarantee that\nthey have the highest quality with respect\nto the human references. This paper ex-\nploits Word Confidence Estimation (WCE)\nscores in the second pass of decoding to\nenhance the Machine Translation (MT)\nquality. By using the confidence score of\neach word in the N-best list to update the\ncost of SG hypotheses containing it, we\nhope to “reinforce” or “weaken” them re-\nlied on word quality. After the update, new\nbest translations are re-determined using\nupdated costs. In the experiments on our\nreal WCE scores andideal (oracle) ones ,\nthe latter significantly boosts one-pass de-\ncoder by 7.87 BLEU points, meanwhile\nthe former yields an improvement of 1.49\npoints for the same metric.\n1 Introduction\nBeside plenty of commendable achievements, the\nconventional one-pass SMT decoders are still not\nsufficient yet in yielding human-acceptable trans-\nlations (Zhang et al., 2006; Venugopal et al., 2007).\nTherefore, a number of methods to enhance them\nare proposed, such as: post-editing, re-ranking\nor re-decoding, etc. Post-editing (Parton et al.,\nc\r2014 The authors. This article is licensed under a Creative\nCommons 3.0 licence, no derivative works, attribution, CC-\nBY-ND.2012) is in fact known to be a human-inspired\ntask where the machine post edits translations in\na second automatic pass. In re-ranking (Zhang\net al., 2006; Duh and Kirchhoff, 2008; Bach et al.,\n2011), more features are integrated with the exist-\ning multiple model scores for re-selecting the best\ncandidate among N-best list. Meanwhile, the re-\ndecoding process intervenes directly into the de-\ncoder’s search graph (SG), driving it to the optimal\npath (cheapest hypothesis).\nThe two-pass decoder has been built by several\ndiscrepant ways in the past. Kirchhoff and Yang\n(2005); Zhang et al. (2006) train additional Lan-\nguage Models (LM) and combine LM scores with\nexisting model scores to re-rank the N-best list.\nAlso focusing on the idea of re-ranking, yet Bach\net al. (2011); Luong et al. (2014) employ sen-\ntence and word confidence scores in the second\npass. Meanwhile, Venugopal et al. (2007) do a first\npass translation without LM, but use it to score the\npruned search hyper-graph in the second pass.\nThis work concentrates on a second automatic pass\nwhere the costs of all hypotheses in the decoder’s\nSG containing words of the N-best list will be\nupdated regarding the word quality predicted by\nWord Confidence Estimation (Ueffing and Ney,\n2005) (WCE) system. 
In single-pass decoding, the\ndecoder searches among complete paths (i.e. those\ncover all source words) for obtaining the optimal-\ncost ones. Essentially, the hypothesis cost is a\ncomposite score, synthesized from various SMT\nmodels (reordering, translation, LMs etc.). Al-\nthough the N-bests beat other SG hypotheses in\nterm of model scores, there is no certain clue that\nthey will be the closest to the human references.\nAs the reference closeness is the users’ most piv-\notal concern on SMT decoder, this work estab-\nlishes one second pass where model-independent\n117\nscores related to word confidence prediction are in-\ntegrated into the first-pass SG to re-determine the\nbest hypothesis. Inheriting the first pass’s N-best\nlist, the second one involves three additional steps:\n\u000fFirstly, apply a WCE classifier on the N-best\nlist to assign the quality labels (“Good” or\n“Bad”) along with the confidence probabili-\nties for each word.\n\u000fSecondly, for each word in the N-best list, up-\ndate the cost of all SG’s hypotheses contain-\ning it by adding the update score ( see Section\n3.2 for detailed definitions).\n\u000fThirdly, search again on the updated SG for\nthe cheapest-cost hypothesis and trace back-\nward to form the new best translation.\nBasically, this initiative originates from an intu-\nition that all parts of hypotheses corresponding to\ncorrect (predicted) words should be appreciated\nwhile those containing wrong ones must be weak-\nened. The use of novel decoder-independent and\nobjective features like WCE scores is expected to\nraise up the better candidate, rather than accept-\ning the current sub-optimal one. The new decoder\ncan therefore use both real andoracle word con-\nfidence estimates. In the next section, we intro-\nduce the SG’s structure. Section 3 depicts our\napproach about using WCE scores to modify the\nfirst-step SG. The experimental settings and re-\nsults, followed by in-depth analysis and compar-\nison to other approaches are discussed in Section 4\nand Section 5. The last section concludes the paper\nand opens some outlooks.\n2 Search Graph Structure\nThe SMT decoder’s Search Graph (SG) can be\nroughly considered as a “vast warehouse” storing\nall possible hypotheses generated by the SMT sys-\ntem during decoding for a given source sentence.\nIn this large directed acyclic graph, each hypoth-\nesis is represented by a path, carrying all nodes\nbetween its begin and end ones, along with the\nedges connecting adjacent nodes. One hypothe-\nsis is called complete when all the source words\nare covered and incomplete otherwise. Starting\nfrom the empty initial node, the SG is gradually\nenlarged by expanding hypotheses during decod-\ning. To avoid the explosion of search space, some\nweak hypotheses can be pruned or recombined. Inorder to facilitate the access and the cost calcula-\ntion, each hypothesis His further characterized by\nthe following fields (we can access the value of the\nfield fof hypothesis Hby using the notion f(H)):\n\u000fhyp: hypothesis ID\n\u000fstack : the stack (ID) where the hypothesis is\nplaced, also the number of foreign (source)\nwords translated so far.\n\u000fback : the backpointer pointing to its previous\ncheapest path.\n\u000ftransition : the cost to expand from the pre-\nvious hypothesis (denoted by pre(H) ) to this\none.\n\u000fscore : the cost of this hypothesis. Apparently,\nscore (H) =score (pre(H)) +transition .\n\u000fout: the last output (target) phrase. 
It is worth accentuating that out can contain multiple words.

- covered: the source coverage of out, represented by the start and the end position of the source words translated into out.
- forward: the forward pointer pointing to the cheapest outgoing path expanded from this one.
- f-score: the estimated future cost from this partial hypothesis to the complete one (end of the SG).
- recombined: the pointer pointing to its recombined hypothesis.¹

Figure 1 illustrates a simple SG generated for the source sentence "identifier et mesurer les facteurs de mobilization". The attributes "t" and "c" refer to the transition cost and the source coverage, respectively. Hypothesis 175541 is extended from 57552, when the three words from 3rd to 5th of the source sentence ("les facteurs de") are translated into "the factors of" with a transition cost of −8.5746. Hence, its cost is: score(175541) = score(57552) + transition(175541) = −16.1014 + (−8.5746) = −24.6760. The three rightmost hypotheses, 204119, 204109 and 198721, are complete since they cover all source words. Among them, the cheapest-cost one² is 198721, from which the model-best translation is read off by following the track back to the initial node 0: "identify the causes of action .".

¹ In the SG, sometimes we recombine hypotheses to reduce the search space in a risk-free way. Two hypotheses can be recombined if they agree in (1) the source words covered so far, (2) the last two target words generated, and (3) the end of the last source phrase covered.
² It is important to note that the concept of the cheapest-cost hypothesis means that it has the highest model score value. In other words, the higher the model score value, the "cheaper" the hypothesis is.

Figure 1: An example of search graph representation

3 Our Approach: Integrating WCE Scores into SG

In this section, we present the idea of using additional scores computed from WCE output (labels and confidence probabilities) to update the SG. We also depict the way that update scores are defined. Finally, the detailed algorithm followed by an example illustrates the approach.

3.1 Principal Idea

We assume that the decoder generates the N best hypotheses T = {T_1, T_2, ..., T_N} at the end of the first pass. Using the WCE system (which can only be applied to sequences of words, and not directly to the search graph, which is why the N-best hypotheses are used), we are able to assign the j-th word in hypothesis T_i, denoted by t_ij, one appropriate quality label c_ij (e.g. "G" (Good: no translation error), "B" (Bad: needs to be edited)), followed by the confidence probabilities (P_ij(G), P_ij(B), or P(G), P(B) for short). Then, the second pass is carried out by considering every word t_ij and its labels and scores c_ij, P(G), P(B). Our principal idea is that, if t_ij is a positive (good) translation, i.e. c_ij = "G" or P(G) ≈ 1, all hypotheses H_k ∈ SG containing it should be "rewarded" by reducing their cost. On the contrary, those containing a negative (bad) translation will be "penalized". Let reward(t_ij) and penalty(t_ij) denote the reward or penalty score of t_ij. The new transition cost of H_k after being updated is formally defined by:

transition′(H_k) = transition(H_k) + reward(t_ij)   if t_ij is a good translation
transition′(H_k) = transition(H_k) + penalty(t_ij)  otherwise    (1)

The update finishes when all words in the N-best list have been considered.
We then re-compute the new score of the complete hypotheses by tracing backward via back-pointers and aggregating the transition cost of all their edges. Essentially, the re-decoding pass reorders SG hypotheses: the more "G" words (predicted by the WCE system) they contain, the more cost reduction is made and, consequently, the more opportunity they get to be admitted to the N-best list. The re-decoding performance depends largely on the accuracy of the confidence scores, in other words, on the WCE quality.

It is vital to note that, during the update process, we might face the phenomenon that the word t_ij (corresponding to the same source words) occurs in different sentences of the N-best list. In this case, for the sake of simplicity, we process it only at its first occurrence (in the highest-ranked sentence) instead of updating the hypotheses containing it multiple times. In other words, if we meet the exact t_ij once again in the next N-best sentence(s), no further score update is done in the SG.

3.2 Update Score Definitions

Defining the update scores is obviously a nontrivial task, as there is no correlation between WCE labels and the SG costs. Furthermore, we have no clue about how the SMT model and the WCE (penalty or reward) scores should be weighted against each other in order to ensure that both of them are appreciated. In this article, we propose several types of update scores, deriving from the global or local cost.

3.2.1 Definition 1: Global Update Score

In this type, a unique score derived from the cost of the current best hypothesis H* (by the first pass) is used for all updates. We propose to compute this score in two ways: (a) exploiting the WCE labels {c_ij}; or (b) only the WCE confidence probabilities {P(G), P(B)} matter, and the WCE labels are left aside.

Definition 1a:

penalty(t_ij) = −reward(t_ij) = α · score(H*) / #words(H*)    (2)

where #words(H*) is the number of target words in H*, and the positive coefficient α accounts for the impact level of this score on the hypothesis's final cost and can be optimized during experiments. Here, penalty(t_ij) gets a negative sign (since score(H*) < 0) and is added to the transition cost of all hypotheses containing t_ij when this word is labelled as "B"; whereas reward(t_ij) (same value, opposite sign) is used in the other case.

Definition 1b:

update(t_ij) = α · P(B) · score(H*) / #words(H*) − β · P(G) · score(H*) / #words(H*)
             = (α · P(B) − β · P(G)) · score(H*) / #words(H*)    (3)

where P(G) and P(B) (P(G) + P(B) = 1) are the probabilities of the "Good" and "Bad" class of t_ij. The positive coefficients α and β can be tuned in the optimization phase. In this definition, whether update(t_ij) is a reward (reward(t_ij)) or a penalty (penalty(t_ij)) depends on t_ij's goodness. Indeed, we have update(t_ij) = reward(t_ij) if update(t_ij) > 0, which means α · [1 − P(G)] − β · P(G) < 0 (since score(H*) < 0), therefore P(G) > α / (α + β). On the contrary, if P(G) is under this threshold, update(t_ij) takes a negative value and therefore becomes a penalty.
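For concreteness, the following Python snippet (our own sketch, not the project's implementation; the function names and the choices of α, β and score(H*) are illustrative) computes the global update scores of Definitions 1a and 1b.

def update_def_1a(label, score_best, n_words, alpha):
    """Definition 1a: a single global reward/penalty derived from H*."""
    penalty = alpha * score_best / n_words      # negative, since score_best < 0
    return -penalty if label == "G" else penalty

def update_def_1b(p_good, score_best, n_words, alpha, beta):
    """Definition 1b: soft version driven by P(G) and P(B) = 1 - P(G)."""
    p_bad = 1.0 - p_good
    return (alpha * p_bad - beta * p_good) * score_best / n_words

# Example with the values of the worked example in Section 3.3:
# score(H*) = -29.9061, 6 target words, alpha = 0.5 (beta is our own choice).
print(round(update_def_1a("G", -29.9061, 6, 0.5), 4))       # +2.4922 (reward)
print(round(update_def_1a("B", -29.9061, 6, 0.5), 4))       # -2.4922 (penalty)
print(round(update_def_1b(0.9, -29.9061, 6, 0.5, 0.5), 4))  # > 0, a reward, since P(G) > alpha/(alpha+beta)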
3.2.2 Definition 2: Local Update Score

The update score of each (local) hypothesis H_k depends on its current transition cost, even when the hypotheses cover the same word t_ij. Similarly to Definition 1, two sub-types are defined as follows:

Definition 2a:

penalty(t_ij) = −reward(t_ij) = α · transition(H_k)    (4)

Definition 2b:

update(t_ij) = α · P(B) · transition(H_k) − β · P(G) · transition(H_k)
             = (α · P(B) − β · P(G)) · transition(H_k)    (5)

where transition(H_k) denotes the current transition cost of hypothesis H_k, and the meanings of the coefficient α (Definition 2a) or α, β (Definition 2b) are analogous to those of Definition 1a (Definition 1b), respectively.

3.3 Re-decoding Algorithm

The pseudo-code below depicts our re-decoding algorithm using WCE labels (Definition 1a and Definition 2a).

Algorithm 1: Using WCE labels in SG re-decoding
Input: SG = {H_k}; T = {T_1, T_2, ..., T_N}; C = {c_ij}
Output: T′ = {T′_1, T′_2, ..., T′_N}
1:  {Step 1: Update the Search Graph}
2:  Processed ← ∅
3:  for T_i in T do
4:    for t_ij in T_i do
5:      p_ij ← position of the source words aligned to t_ij
6:      if (t_ij, p_ij) ∈ Processed then
7:        continue  {ignore if t_ij appeared in the previous sentences}
8:      end if
9:      Hypos ← {H_k ∈ SG | out(H_k) ∋ t_ij}
10:     if c_ij = "Good" then
11:       for H_k in Hypos do
12:         transition(H_k) ← transition(H_k) + reward(t_ij)  {reward hypothesis}
13:       end for
14:     else
15:       for H_k in Hypos do
16:         transition(H_k) ← transition(H_k) + penalty(t_ij)  {penalize hypothesis}
17:       end for
18:     end if
19:     Processed ← Processed ∪ {(t_ij, p_ij)}
20:   end for
21: end for
22: {Step 2: Trace back to re-compute the score for all complete hypotheses}
23: for H_k in Final (set of complete hypotheses) do
24:   score(H_k) ← 0
25:   while H_k ≠ initial hypothesis do
26:     score(H_k) ← score(H_k) + transition(H_k)
27:     H_k ← pre(H_k)
28:   end while
29: end for
30: {Step 3: Select the N cheapest hypotheses and output the new list T′}

Rank  Cost      Hypothesis (with WCE labels)
1     -29.9061  identify the cause of action .                     (G G G G B B)
2     -40.0868  identify and measure the factors of mobilization   (G G G G G G G)
Table 1: The N-best (N=2) list generated by the SG in Figure 1 and WCE labels

Figure 2: Details of the update process for the SG in Figure 1. The first loop (when the 1st-ranked hypothesis is used) is represented in red, while the second one is in blue. For edges with multiple updates, all transition costs after each update are logged. The winning cost is also emphasized in red.

The algorithm in the case of using WCE confidence probabilities (Definition 1b and Definition 2b) is essentially similar, except that the update step (lines 10 to 18) is replaced by the following part:

for H_k in Hypos do
  transition(H_k) ← transition(H_k) + update(t_ij)
end for

During the update process, each pair consisting of the visited word t_ij and the position of its aligned source words p_ij is consequently admitted to Processed, so that all the analogous pairs (t′_ij, p′_ij) occurring in the latter sentences can be discarded. For each t_ij, a list of hypotheses in the SG containing it, called Hypos, is formed, and its confidence score c_ij (or P(G)) determines whether all members H_k in Hypos are rewarded or penalized. Once all words in the N-best list have been visited, we obtain a new SG with updated transition costs for all edges containing them. The last step is to travel over all complete hypotheses (stored in Final) to re-compute their scores and then backtrack the cheapest-cost hypothesis to output the new best translation.
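A compact Python rendering of Algorithm 1 may help; it is a sketch under our own simplified data structures (a Hypothesis object with transition, back and out fields, and global reward/penalty constants as in Definition 1a), not the decoder-level implementation used in this work.

from dataclasses import dataclass

@dataclass
class Hypothesis:
    hyp_id: int
    transition: float            # cost of the last expansion
    back: "Hypothesis | None"    # backpointer to the previous hypothesis
    out: tuple = ()              # last output (target) phrase, possibly several words
    score: float = 0.0

def redecode(search_graph, nbest, labels, reward, penalty, finals):
    """nbest: list of sentences, each a list of (word, src_pos) pairs;
    labels[(word, src_pos)] is the WCE label "G" or "B";
    finals: the complete hypotheses of the search graph."""
    processed = set()
    for sentence in nbest:                       # Step 1: update the SG
        for word, src_pos in sentence:
            if (word, src_pos) in processed:
                continue                         # only the first occurrence matters
            delta = reward if labels[(word, src_pos)] == "G" else penalty
            for h in search_graph:
                if word in h.out:
                    h.transition += delta
            processed.add((word, src_pos))
    for h in finals:                             # Step 2: recompute complete paths
        node, total = h, 0.0
        while node is not None:
            total += node.transition
            node = node.back
        h.score = total
    # Step 3: higher model score = "cheaper", so sort descending to get the new best first.
    return sorted(finals, key=lambda h: h.score, reverse=True)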
The above description can be clarified by taking another look at the example in Figure 1: from this SG, the N-best list (for the sake of simplicity we choose N = 2) is generated as the result of the single-pass decoder. According to our approach, the second pass starts by tagging all words in the list with their confidence labels, as shown in Table 1. Then the graph update process is performed for each word in the list, sentence by sentence; the details are tracked in Figure 2. In this example, we apply Definition 1a to calculate the reward or penalty score, with α = 1/2, resulting in: penalty(t_ij) = −reward(t_ij) = 1/2 · (−29.9061 / 6) = −2.4922. First, all hypotheses containing words of the 1st-ranked sentence are considered. Since the word "identify" is labelled "G", its corresponding edge (connecting nodes 0 and 1) is rewarded and updated with a new cost: t_new = t_old + reward = −1.8411 + 2.4922 = +0.6511. On the contrary, the edge between nodes 121252 and 182453 is penalized and takes the new cost t_new = t_old + penalty = −5.8272 + (−2.4922) = −8.3194, due to the bad quality of the word "action". Naturally, edges covering several of the considered words (e.g. the one between nodes 19322 and 121252) are updated multiple times, and the transition costs after each update can also be observed in Figure 2 (e.g. t1, t2, etc.). Next, when the 2nd-best sentence is taken into consideration, all repeated words (e.g. "identify", "the" and "of") are skipped, since they were already visited in the first loop, whereas the remaining ones are processed identically. The only untouched edge in this SG corresponds to the word "mobilizing", as this word does not belong to the list. Once the update process is finished, the remaining job is to recalculate the final cost of every complete path and return the new best translation: "identify and measure the factors of mobilization" (new cost = −22.6414).
4 Experimental Setup
4.1 Datasets and SMT Resources
From a dataset of 10,881 French sentences, we applied a Moses-based SMT system to generate their English hypotheses. Next, human translators were invited to correct the MT outputs, giving us the post-editions. The set of triples (source, hypothesis, post-edition) was then divided into a training set (the first 10,000 triples) and a test set (the remaining 881). The WCE model was trained over all 1-best hypotheses of the training set. More details on our WCE system can be found in the next section.
The N-best list (N = 1000) with the associated alignment information is also obtained on the test set (1000 × 881 = 881,000 sentences) by using the Moses (Koehn et al., 2007) options "-n-best-list" and "-print-alignment-info-in-n-best". Besides, the SGs are extracted with the following parameter settings: "-output-search-graph", "-search-algorithm 1" (cube pruning) and "-cube-pruning-pop-limit 5000" (adds 5000 hypotheses to each stack). They are compactly encoded in a plain-text file format that is convenient to transform into user-defined structures for further processing. We then store the SG for each source sentence in a separate file; the average size is 43.8 MB.
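For concreteness, a possible way to assemble the Moses invocation with exactly these options is sketched below in Python. Only the option names are taken from the text above; the file names, the n-best size placement and the surrounding call are illustrative assumptions about a local Moses installation.

import subprocess

cmd = [
    "moses", "-f", "moses.ini",
    "-n-best-list", "nbest.txt", "1000",        # 1000-best hypotheses per sentence
    "-print-alignment-info-in-n-best",          # alignment information in each n-best entry
    "-output-search-graph", "searchgraph.txt",  # dump the search graph (SG)
    "-search-algorithm", "1",                   # cube pruning
    "-cube-pruning-pop-limit", "5000",          # 5000 hypotheses added to each stack
]
with open("test.fr") as source, open("hyp.en", "w") as output:
    subprocess.run(cmd, stdin=source, stdout=output, check=True)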
4.2 WCE Scores and Oracle Labels
We employ Conditional Random Fields (CRFs) (Lafferty et al., 2001) as our machine learning method, with the WAPITI toolkit (Lavergne et al., 2010), to train the WCE model. A number of knowledge sources are employed to extract system-based, lexical, syntactic and semantic characteristics of each word, resulting in a total of 25 major feature types, as follows:
• Target side: target word; bigram (trigram) backward sequences; number of occurrences
• Source side: source word(s) aligned to the target word
• Alignment context (Bach et al., 2011): the combinations of the target (source) word and all aligned source (target) words in a window of ±2
• Word posterior probability (Ueffing et al., 2003)
• Pseudo-reference (Google Translate): does the word appear in the pseudo-reference?
• Graph topology (Luong et al., 2013): number of alternative paths in the confusion set, maximum and minimum values of the posterior probability distribution
• Language model (LM) based: length of the longest sequence formed by the current word and its predecessors that appears in the target (resp. source) LM. For example, for the target word w_i: if the sequence w_{i−2} w_{i−1} w_i appears in the target LM but the sequence w_{i−3} w_{i−2} w_{i−1} w_i does not, the n-gram value for w_i is 3.
• Lexical features: the word's part-of-speech (POS); the sequence of POS tags of all its aligned source words; POS bigram (trigram) backward sequences; punctuation; proper name; numerical
• Syntactic features: null link (Xiong et al., 2010); constituent label; depth in the constituent tree
• Semantic features: number of word senses in WordNet.
In the next step, the word's reference labels (the so-called oracle labels) are initially set using the TERp-A toolkit (Snover et al., 2008) to one of the following classes: "I" (insertions), "S" (substitutions), "T" (stem matches), "Y" (synonym matches), "P" (phrasal substitutions), "E" (exact matches), and are then regrouped into a binary class: "G" (good word) or "B" (bad word). Once the prediction model is available, we apply it to the test set (881 × 1000-best = 881,000 sentences) and obtain the required WCE labels along with their confidence probabilities. In terms of F-score, our WCE system reaches very promising performance in predicting the "G" label (87.65%) and acceptable performance for the "B" label (42.29%). Both WCE and oracle labels will be used in the experiments.
4.3 Experimental Decoders
We would like to investigate WCE's contribution in two scenarios: real WCE and ideal WCE (where all predicted labels are identical to the oracle ones). Therefore, we experiment with the following seven decoders:
• BL: baseline (1-pass decoder)
• BL+WCE(1a, 1b, 2a, 2b): four 2-pass decoders, using our estimated WCE labels and confidence probabilities to update the SGs, with the update scores calculated by Definition (1a, 1b, 2a, 2b)
• BL+OR(1a, 2a): two 2-pass decoders, computing the reward or penalty scores by Definition (1a, 2a) on the oracle labels
It is important to note that, when using oracle labels, Definition 1b becomes Definition 1a and Definition 2b becomes Definition 2a, since if a word t_ij is labelled "G", then P(G) = 1 and P(B) = 0, and vice versa.
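Among the feature types listed in Section 4.2 above, the LM-based one is the most procedural; as an illustration only, the sketch below computes it under the assumption that membership of an n-gram in the LM can be tested with a callable in_lm. The function name and the maximum order are illustrative, not part of the original feature extractor.

def lm_ngram_length(words, i, in_lm, max_order=4):
    # Length of the longest sequence ending at words[i] that is found in the LM.
    # With the example given above, the value for w_i is 3 when (w_{i-2}, w_{i-1}, w_i)
    # is in the LM but (w_{i-3}, w_{i-2}, w_{i-1}, w_i) is not.
    length = 0
    for n in range(1, max_order + 1):
        if i - n + 1 < 0:
            break
        if in_lm(tuple(words[i - n + 1 : i + 1])):
            length = n
    return length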
In order to tune the coefficients α and β, we carry out a 2-fold cross-validation on the test set. First, the set is split into two equal parts, S1 and S2. Playing the role of a development set, S1 is used to train the parameter(s), which are then used to compute the update scores for the re-decoding of S2, and vice versa. The optimization steps are handled by the CONDOR toolkit (Berghen, 2004), in which we vary α and β within the interval [0.00; 5.00] (starting point 1.00), with the maximum number of iterations fixed at 50. The test set is further divided in order to launch the experiments in parallel on our cluster using an open-source batch scheduler, OAR (Nicolas and Joseph, 2013). This mitigates the overall processing time on such huge SGs. Finally, the re-decoding results are merged for evaluation.
5 Results
Table 2 shows the translation performance of all experimental decoders along with the percentage of sentences that outperform, remain equivalent to, or degrade the baseline hypotheses (when matched against the references, measured by TER). The results suggest that using oracle labels to re-direct the graph search boosts the baseline quality dramatically. BL+OR(1a) gains 7.87 BLEU points and reduces TER (TERp-A) by 0.0607 (0.0794) compared to BL. Meanwhile, with BL+OR(2a) these gains are 7.67, 0.0565 and 0.0514, in that order. The contribution of our real WCE system scores is less prominent, yet positive: the best-performing BL+WCE(1a) improves over BL by 1.49 BLEU points (with 0.0029 and 0.0136 gained for TER and TERp-A). More remarkably, the small p-values (in the range [0.00; 0.02], see Table 2), estimated between the BLEU score of each BL+WCE system and that of BL using the hypothesis-testing method of (Clark et al., 2011), indicate that these performance improvements are significant.
The results also reveal that using WCE labels is slightly more beneficial than using confidence probabilities: BL+WCE(1a) and BL+WCE(2a) outperform BL+WCE(1b) and BL+WCE(2b). In both scenarios, we observe that the global update score (Definition 1) performs better than the local one (Definition 2).
For a more insightful understanding of the accuracy of the WCE scores, we make a comparison with the best achievable hypotheses in the SG (the oracles), based on the "LM Oracle" approximation approach presented in (Sokolov et al., 2012). This method simplifies oracle decoding to the problem of searching for the cheapest path in an SG in which all transition costs are replaced by the n-gram LM scores of the corresponding words. The LM is built for each source sentence using only its target post-edition. We update the SG by assigning to every edge the LM back-off score of the word it contains (instead of using the current transition cost). Finally, combining the oracles of all sentences yields a BLEU oracle of 66.48.
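To make the oracle approximation concrete, a minimal sketch of this "LM Oracle" pass over the toy edge structure used earlier is given below. The sentence-specific LM built from the post-edition is abstracted behind an assumed callable lm_score, and normalization and sign conventions of the back-off scores are glossed over; this is not the original implementation.

def lm_oracle_path(edges, final_edges, lm_score):
    # Replace every transition cost by the LM score of the word(s) the edge emits.
    for edge in edges.values():
        edge["cost"] = sum(lm_score(word) for word, _ in edge["words"])
    # Return the best-scoring complete path, found by backtracking over predecessors.
    best, best_score = None, float("-inf")
    for eid in final_edges:
        total, cur = 0.0, eid
        while cur is not None:
            total += edges[cur]["cost"]
            cur = edges[cur]["pred"]
        if total > best_score:
            best, best_score = eid, total
    return best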
To better understand the benefit of SG re-decoding, we compare the obtained performance with that of our previous attempt at using WCE for N-best list re-ranking (green zone of Table 2). The idea there is to build sentence-level features from the WCE labels and integrate them with the existing SMT model scores to recalculate the objective function value and thus re-order the N-best list (Luong et al., 2014). Both approaches are implemented in analogous settings, e.g. identical SMT system, WCE system and test set. The results suggest that the contribution of WCE in SG re-decoding outperforms that in N-best re-ranking in both the "oracle" and the real scenario. BL+OR(1a) surpasses its corresponding oracle re-ranker BL+OR(Nbest RR) by 2.08 BLEU points and reduces TER (TERp-A) by 0.0253 (0.0280). Meanwhile, BL+WCE(1a) beats the real WCE re-ranker BL+WCE(Nbest RR) by 1.03 (BLEU), 0.0015 (TER) and 0.0103 (TERp-A). These gains might originate from the following reasons: (1) in re-ranking, WCE scores are integrated at sentence level, so word translation errors are not fully penalized; and (2) in re-ranking, the selection of the best translation is limited to the N-best list, whereas here we afford a search over the entire updated SG (in which not only the N-best list paths but also all paths containing at least one word of this list are altered).

Systems           | BLEU↑ | TER↓   | TERp-A↓ | Better (%) | Equivalent (%) | Worse (%) | p-value
BL                | 52.31 | 0.2905 | 0.3058  | –          | –              | –         | –
BL+WCE(1a)        | 53.80 | 0.2876 | 0.2922  | 28.72      | 57.43          | 13.85     | 0.00
BL+WCE(1b)        | 53.24 | 0.2896 | 0.2995  | 26.45      | 59.26          | 14.29     | 0.00
BL+WCE(2a)        | 53.32 | 0.2893 | 0.3018  | 23.68      | 60.11          | 16.21     | 0.02
BL+WCE(2b)        | 53.07 | 0.2900 | 0.3006  | 22.27      | 55.17          | 22.56     | 0.01
BL+OR(1a)         | 60.18 | 0.2298 | 0.2264  | 62.52      | 24.36          | 13.12     | –
BL+OR(2a)         | 59.98 | 0.2340 | 0.2355  | 60.18      | 28.82          | 11.00     | –
BL+OR(Nbest RR)   | 58.10 | 0.2551 | 0.2544  | 58.68      | 29.63          | 11.69     | –
BL+WCE(Nbest RR)  | 52.77 | 0.2891 | 0.3025  | 18.04      | 68.22          | 13.74     | 0.01
Oracle BLEU score: BLEU = 66.48 (from SG)
Table 2: Translation quality of the conventional decoder and of the 2-pass decoders using scores from real or "oracle" WCE, followed by the percentage of better, equivalent or worse sentences compared to BL.

6 Conclusion and Perspectives
We have presented a novel re-decoding approach for enhancing SMT quality. Inheriting the result of the first pass (the N-best list), we predict word labels and confidence probabilities and then employ them to seek a better (cheaper) path through the SG during the re-decoding stage. While "oracle" WCE labels lift the MT quality dramatically (approaching the oracle score), real WCE also achieves positive and promising gains. The method strengthens the contribution of WCE to SMT. As future work, we will focus on estimating word quality in more detail using the MQM metric (http://www.qt21.eu/launchpad/content/training) as an error typology, making the WCE labels more informative. In addition, the update scores used in this article will be further revised towards consistency with the SMT graph scores in order to obtain a better updated SG.

References
Nguyen Bach, Fei Huang, and Yaser Al-Onaizan. Goodness: A method for measuring machine translation confidence. In Proceedings of the 49th Annual Meeting of the Association for Computational Linguistics, pages 211–219, Portland, Oregon, June 19–24, 2011.
Frank Vanden Berghen. CONDOR: a constrained, non-linear, derivative-free parallel optimizer for continuous, high computing load, noisy objective functions. PhD thesis, University of Brussels (ULB - Université Libre de Bruxelles), Belgium, 2004.
Jonathan Clark, Chris Dyer, Alon Lavie, and Noah Smith. Better hypothesis testing for statistical machine translation: Controlling for optimizer instability. In Proceedings of the Association for Computational Linguistics, 2011.
Kevin Duh and Katrin Kirchhoff. Beyond log-linear models: Boosted minimum error rate training for n-best re-ranking. In Proc. of ACL, Short Papers, 2008.
Katrin Kirchhoff and Mei Yang.
Improved language modeling for statistical machine translation. In Proceedings of the ACL Workshop on Building and Using Parallel Texts, pages 125–128, Ann Arbor, Michigan, June 2005.
Philipp Koehn, Hieu Hoang, Alexandra Birch, Chris Callison-Burch, Marcello Federico, Nicola Bertoldi, Brooke Cowan, Wade Shen, Christine Moran, Richard Zens, Chris Dyer, Ondrej Bojar, Alexandra Constantin, and Evan Herbst. Moses: Open source toolkit for statistical machine translation. In Proceedings of the 45th Annual Meeting of the Association for Computational Linguistics, pages 177–180, Prague, Czech Republic, June 2007.
John Lafferty, Andrew McCallum, and Fernando Pereira. Conditional random fields: Probabilistic models for segmenting and labeling sequence data. In Proceedings of ICML-01, pages 282–289, 2001.
Thomas Lavergne, Olivier Cappé, and François Yvon. Practical very large scale CRFs. In Proceedings of the 48th Annual Meeting of the Association for Computational Linguistics, pages 504–513, 2010.
Ngoc Quang Luong, Laurent Besacier, and Benjamin Lecouteux. Word confidence estimation and its integration in sentence quality estimation for machine translation. In Proceedings of the Fifth International Conference on Knowledge and Systems Engineering (KSE 2013), Hanoi, Vietnam, October 17–19, 2013.
Ngoc Quang Luong, Laurent Besacier, and Benjamin Lecouteux. Word confidence estimation for SMT n-best list re-ranking. In Proceedings of the Workshop on Humans and Computer-assisted Translation (HaCaT), Gothenburg, Sweden, April 2014.
Capit Nicolas and Emeras Joseph. OAR Documentation - User Guide. LIG laboratory, Laboratoire d'Informatique de Grenoble, Bat. ENSIMAG - antenne de Montbonnot, ZIRST 51, avenue Jean Kuntzmann, 38330 Montbonnot Saint Martin, 2013.
Kristen Parton, Nizar Habash, Kathleen McKeown, Gonzalo Iglesias, and Adrià de Gispert. Can automatic post-editing make MT more meaningful? In Proceedings of the 16th EAMT, pages 111–118, Trento, Italy, 28–30 May 2012.
Matthew Snover, Nitin Madnani, Bonnie Dorr, and Richard Schwartz. TERp system description. In MetricsMATR workshop at AMTA, 2008.
Artem Sokolov, Guillaume Wisniewski, and François Yvon. Computing lattice BLEU oracle scores for machine translation. In Proceedings of the 13th Conference of the European Chapter of the Association for Computational Linguistics, pages 120–129, Avignon, France, April 2012.
Nicola Ueffing and Hermann Ney. Word-level confidence estimation for machine translation using phrase-based translation models. In Proceedings of the Human Language Technology Conference and Conference on Empirical Methods in Natural Language Processing, pages 763–770, Vancouver, 2005.
Nicola Ueffing, Klaus Macherey, and Hermann Ney. Confidence measures for statistical machine translation. In Proceedings of the MT Summit IX, pages 394–401, New Orleans, LA, September 2003.
Ashish Venugopal, Andreas Zollmann, and Stephan Vogel. An efficient two-pass approach to synchronous-CFG driven statistical MT. In Proceedings of Human Language Technologies 2007: The Conference of the North American Chapter of the Association for Computational Linguistics, April 2007.
Deyi Xiong, Min Zhang, and Haizhou Li. Error detection for statistical machine translation using linguistic features. In Proceedings of the 48th Association for Computational Linguistics, pages 604–611, Uppsala, Sweden, July 2010.
Ying Zhang, Almut Silja Hildebrand, and Stephan Vogel.
Distributed language modeling for n-best list re-ranking. In Proceedings of the 2006 Conference on Empirical Methods in Natural Language Processing (EMNLP 2006), pages 216–223, Sydney, July 2006.
{ "decision": "Unknown", "reviews": [] }
0
0
[]
[]
{ "id": "fSlHB_Pydw", "year": null, "venue": "EAMT 2022", "pdf_link": "https://aclanthology.org/2022.eamt-1.52.pdf", "forum_link": "https://openreview.net/forum?id=fSlHB_Pydw", "arxiv_id": null, "doi": null }
{ "title": "Sign Language Translation: Ongoing Development, Challenges and Innovations in the SignON Project", "authors": [ "Dimitar Shterionov", "Mirella De Sisto", "Vincent Vandeghinste", "Aoife Brady", "Mathieu De Coster", "Lorraine Leeson", "Josep Blat", "Frankie Picron", "Marcello Paolo Scipioni", "Aditya Parikh", "Louis ten Bosch", "John O'Flaherty", "Joni Dambre", "Jorn Rijckaert" ], "abstract": "Dimitar Shterionov, Mirella De Sisto, Vincent Vandeghinste, Aoife Brady, Mathieu De Coster, Lorraine Leeson, Josep Blat, Frankie Picron, Marcello Paolo Scipioni, Aditya Parikh, Louis ten Bosh, John O’Flaherty, Joni Dambre, Jorn Rijckaert. Proceedings of the 23rd Annual Conference of the European Association for Machine Translation. 2022.", "keywords": [], "raw_extracted_content": "Sign Language Translation: Ongoing Development, Challenges and\nInnovations in the SignON Project\nDimitar Shterionov∗, Mirella De Sisto∗, Vincent Vandeghinste†,, Aoife Brady‡, Mathieu De Coster§,\nLorraine Leeson¶, Josep Blat∗∗, Frankie Picron††, Marcello Paolo Scipioni‡‡,\nAditya Parikh§§, Louis ten Bosch§§, John O’Flaherty∥,\nJoni Dambre§, Jorn Rijckaertx\n∗Tilburg University,†Instituut voor de Nederlandse Taal,‡ADAPT,§Ghent University\n¶Trinity College Dublin,∗∗Universitat Pompeu Fabra,††European Union of the Deaf,\n‡‡Fincons,§§Radboud University,∥mac.ie,xVlaams Gebarentaalcentrum\n1 Introduction\nSignON1focuses on the research and develop-\nment of a sign language (SL) translation mobile\napplication and an open communications frame-\nwork . SignON addresses the lack of technol-\nogy and services for the automatic translation\nbetween signed and spoken languages, through\nan inclusive, human-centric solution which facili-\ntates communication between deaf, hard of hearing\n(DHH) and hearing individuals.\nWe present an overview of the status of the\nproject, describing the milestones and the ap-\nproaches developed to address the challenges and\npeculiarities of SL machine translation (SLMT).\nSLs are the primary means of communication\nfor over 70 million DHH individuals.2Despite\nthis, they are rarely included in ongoing develop-\nments of natural-language processing (NLP) ad-\nvancements (Yin et al., 2021). Machine transla-\ntion (MT) research which targets SLs is still in its\ninfancy, due mainly to the lack of data and effec-\ntive representation of signs (including the lack of a\nstandardized written form for SLs).\nBoth the low volume of available resources, as\nwell as the linguistic properties of SLs provide\nchallenges for MT. Furthermore, SLs are visual\nlanguages, which presents yet another challenge:\n© 2022 The authors. This article is licensed under a Creative\nCommons 3.0 licence, no derivative works, attribution, CC-\nBY-ND.\n1SignON is a Horizon 2020 (Research and Innovation Pro-\ngramme Grant Agreement No. 101017255) project that\nruns from 2021 until the end of 2023. https://\nsignon-project.eu/ . 
The consortium is constituted by 17 partners, among which Instituut voor de Nederlandse Taal, Tilburg University, ADAPT, Ghent University, Trinity College Dublin, Universitat Pompeu Fabra, European Union of the Deaf, Fincons, Radboud University, mac.ie, Vlaams Gebarentaalcentrum, and Dublin City University.
2According to the World Federation of the Deaf.
the recognition and synthesis of a signing human.
2 The SignON approach to SLMT
The objective of the SignON project is MT between signed and spoken languages in all possible combinations, as well as the delivery of this service to the primary user groups: DHH and hearing users.
The project revolves around 4 spoken languages (English, Spanish, Dutch, Irish) and 5 SLs (ISL, NGT, VGT, LSE, and BSL, namely Irish, Dutch, Flemish, Spanish and British SL). Addressing this many language pairs and directions on a pair-by-pair basis would require a substantial amount of time and effort, far beyond the scope of the project. SignON employs an MT approach that (i) focuses on processing and understanding individual languages, (ii) employs a common multilingual representation (InterL) to facilitate translation and (iii) uses symbolic as well as deep-learning methods for the synthesis of a 3D virtual signer. This approach involves automatic SL and speech recognition (SLR and ASR, respectively), NLP, sign and speech synthesis, text generation and, most importantly, representation of utterances in a common frame of reference: an interlingual representation space based on embeddings and/or symbolic structures, the InterL. The complexity and diversity of these processing steps require multi-domain knowledge and expertise. Furthermore, we chose this approach as there are only limited parallel resources available between signed and spoken/written languages. Relying on techniques such as transfer learning and pre-built NLP models (i.e. mBART (Lewis et al., 2020)) will improve MT performance.
We have built state-of-the-art models and components for SLR, exploiting convolutional neural network-, recurrent neural network- and transformer-based models, natural-language understanding and MT based on mBART. We are developing approaches through wordnets and abstract semantic representation, and synthesis based on language-specific logical structures for SL, behavioural markup language and a 3D avatar rendering system.
Figure 1: General approach of the SignON translation system
The ASR component will be tuned to the use cases and to the speaker (including atypical speech from deaf speakers and speakers with cochlear implants). The ASR addresses (i) privacy challenges, (ii) adaptation to communicative settings and (iii) extension to new data and languages. Currently, English and Dutch are ready; Spanish is in progress. The transfer learning approach is adapted for Irish. The ASR works as a web service via a secure RESTful API.
3 SignON application and open framework
The general architecture (Figure 1) consists of a mobile application which connects users to the cloud-based MT platform. The SignON app is the interface between the user and the SignON framework, which handles the internal data flow and processing. The framework executes the following steps. The source message (audio, video or text) and any relevant metadata coming from the mobile app is processed by an orchestrator which queues it towards the translation pipeline through a message broker. A dispatcher subscribed to the appropriate queue receives the message, invoking the relevant component depending on the type of input. After the required processing is complete, the message passes to the next stage of the pipeline until, finally, once the translation tasks are completed, the output message is produced in the requested format (text, audio or a sign language avatar). The output is delivered to the app via the orchestrator. Each component is encapsulated in a Docker container, and the containers are distributed over different machines.
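A minimal Python sketch of this orchestrator / message-broker / dispatcher flow is given below. The queue names, message fields and stage order are illustrative assumptions; the actual SignON components and broker configuration are not described at this level of detail here.

import queue

def deliver_to_app(message):
    # Placeholder for returning the translated message to the mobile app.
    print("output:", message)

broker = {"asr": queue.Queue(), "slr": queue.Queue(), "mt": queue.Queue(), "synthesis": queue.Queue()}

def orchestrate(message):
    # The orchestrator queues the source message towards the pipeline stage matching its modality.
    first_stage = {"audio": "asr", "video": "slr", "text": "mt"}[message["modality"]]
    broker[first_stage].put(message)

def dispatch(stage, component, next_stage=None):
    # A dispatcher subscribed to one queue invokes the relevant component and forwards
    # the processed message to the next stage, or delivers it back to the app.
    while not broker[stage].empty():
        message = component(broker[stage].get())
        if next_stage is None:
            deliver_to_app(message)
        else:
            broker[next_stage].put(message)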
The first release of the SignON mobile application is due in June 2022 and will then evolve towards its final release at the end of the project (December 2023). The app will be available as open source and free of charge.
4 Societal impact
Along with the technological and academic innovations in terms of new models and methods for SLMT, SignON strives towards a large societal impact. Currently we face societal challenges such as clashes between the views of DHH and hearing people with respect to use cases, technological importance and communication needs. We organized two sets of interviews with deaf participants and an online survey, and we have two round tables planned. Via workshops we inform both the research and user communities about the progress of SignON and the state of the art in SLMT.
5 Progress and next steps
In the first 15 months of this project, 8 academic papers were accepted for publication. These papers discuss SLR, NLP, SLMT as well as SL representations. At the time of writing, more than 5 papers are under review. We have conducted focus group interviews with VGT, ISL, LSE and NGT signers, as well as public and internal surveys.
References
Lewis, Mike, Yinhan Liu, Naman Goyal, Marjan Ghazvininejad, Abdelrahman Mohamed, Omer Levy, Veselin Stoyanov, and Luke Zettlemoyer. 2020. BART: denoising sequence-to-sequence pre-training for natural language generation, translation, and comprehension. In D. Jurafsky et al., editor, Proc. of the 58th Annual Meeting of the Assoc. for Computational Linguistics, ACL 2020, Online, July 5-10, 2020, pages 7871–7880. ACL.
Yin, Kayo, Amit Moryossef, Julie Hochgesang, Yoav Goldberg, and Malihe Alikhani. 2021. Including signed languages in natural language processing. In Proc. of the 59th Annual Meeting of the ACL and the 11th Int. Joint Conference on NLP (Volume 1: Long Papers), pages 7347–7360, Online, August. ACL.
{ "decision": "Unknown", "reviews": [] }
0
0
[]
[]
{ "id": "bPyD5UDykRW", "year": null, "venue": "EAMT 2005", "pdf_link": "https://aclanthology.org/2005.eamt-1.25.pdf", "forum_link": "https://openreview.net/forum?id=bPyD5UDykRW", "arxiv_id": null, "doi": null }
{ "title": "Efficient statistical machine translation with constrained reordering", "authors": [ "Evgeny Matusov", "Stephan Kanthak", "Hermann Ney" ], "abstract": "Evgeny Matusov, Stephan Kanthak, Hermann Ney. Proceedings of the 10th EAMT Conference: Practical applications of machine translation. 2005.", "keywords": [], "raw_extracted_content": "Efficient Statistical Machine Translation with Constrained\nReordering\nEvgeny Matusov, Stephan Kanthak, and Hermann Ney\nLehrstuhl f ¨ur Informatik VI, Computer Science Department,\nRWTH Aachen University\nD-52056 Aachen, Germany\n{matusov,kanthak,ney }@informatik.rwth-aachen.de.\nAbstract. This paper describes how word alignment information makes machine translation more effi-\ncient. Following a statistical approach based on finite-state transducers, we perform reordering of sourcesentences in training using automatic word alignments and estimate a phrase-based translation model.Using this model, we translate monotonically taking a permutation graph as input. The permutationgraph is constrained using an efficient and flexible reordering framework. We then propose to automati-\ncally identify source word sequences which should always be translated monotonically and keep the word\norder of these sequences in search. This allows us to obtain fast good-quality translations. We presentcompetitive experimental results on the Verbmobil German-to-English and BTEC Chinese-to-Englishtranslation tasks.\n1 Introduction\nWord reordering is of crucial importance for ma-\nchine translation. Most of the phrase-based statisti-cal approaches like the Alignment Template system\nof (Och et al., 2004) rely on reorderings which are\nimplicitly memorized with each pair of source andtarget phrases in training. Additional reorderingson phrase level are fully integrated into the decod-ing process, which increases the complexity of the\nsystem and makes it hard to modify.\nOther statistical approaches make use of the ef-\nficient search representation with weighted finite-\nstate transducers (WFSTs). Many of these ap-\nproaches use joint probabilities of the source andthe target language string. The automated trans-ducer inference techniques OMEGA (Vilar, 2000)and GIATI (Casacuberta et al., 2004) estimate\nphrase-based models, but capture reordering only\nimplicitly in bilingual corpus representations. Thisleads to a strong degradation of translation qualitywhen translating into a language with a completelydifferent word order. In (Bangalore et al., 2000)weighted reordering has been applied to target sen-\ntences. In order to reduce the computational com-\nplexity, this approach considers only a set of plau-sible reorderings seen on training data.In this paper, we follow a phrase-based joint-\nprobability WFST translation approach, in whichsource sentence reordering is applied on word level,both in training and for translation. This is a novel\napproach inspired by the work of (Knight et al.,\n1998) and (Kumar et al., 2003). In this approach, areordering graph is computed on-demand and takenas input for monotonic translation. The approach ismodular and allows easy introduction of different\nreordering constraints and probabilistic dependen-\ncies. Here, we extend this approach by introduc-ing additional restrictions on reorderings. We de-scribe our efficient finite-state implementations ofIBM (Berger et al., 1996), inverse IBM and local re-ordering constraints. 
Furthermore, we apply these constraints in the search, but keep the word order within source phrases which were consistently aligned monotonically in training.
In the next section we review the general theory of our translation system based on weighted finite-state transducers and describe the use of word alignments for reordering in training. We then discuss three modeling techniques which use alignment information to establish connections between source and target words. These connections are needed in order to estimate the joint translation probability. Section 3 describes the on-demand computable framework for permutation models and the various types of reordering constraints that are applied in the search. In Section 4 we propose how the reordering process can be further constrained by keeping the order of monotonic source sequences. We conclude the paper with experimental results, which show the advantages of these constraints on two translation tasks.
2 Basics of the Translation System
2.1 Bayes Decision Rule
In statistical machine translation, we are looking for a target language sentence e_1^I which translates a source sentence f_1^J. We formulate the Bayes decision rule for maximizing the posterior probability:
  ê_1^Î = argmax_{I, e_1^I} Pr(e_1^I | f_1^J)
        = argmax_{I, e_1^I} Pr(f_1^J, e_1^I)
        = argmax_{I, e_1^I} Σ_A Pr(A) · Pr(f_1^J, e_1^I | A)
        ≅ argmax_{I, e_1^I} max_A Pr(A) · Pr(f_1^J, e_1^I | A)
Here, the posterior probability Pr(e_1^I | f_1^J) is rewritten as a joint probability of the input and the output sentence. The stochastic finite-state transducer approach allows for convenient modeling of joint probabilities. We also assume that we have word-level alignments A of all sentence pairs from a bilingual training corpus and introduce such alignments as a hidden variable.
2.2 Word Alignments
The statistical word alignments are used in two ways. First, we reorder the words in each training source sentence based on an alignment which is a function of source words and naturally defines their permutation (Figure 1). This allows us to train a monotonic translation model.
Next, the goal is to establish connections between the reordered source and the target words in order to reliably estimate the joint translation probability with statistical language modeling techniques. To this end, most of the WFST approaches aim at a "bilanguage" representation of each pair of sentences in the training corpus with K bilingual phrases (f̃_k, ẽ_k), k = 1, ..., K, of varying length. All of these methods do not require, but work especially well with, fully monotonic alignments. Here, we discuss the most common techniques.
The representation used by e.g. (Bangalore et al., 2000) allows source and target phrases f̃ and ẽ to have a length of either 0 or 1. This means that each pair of training sentences is written with bilingual tuples (f, e) where either f or e can be a normal word or an "empty word", which we denote with $. To create a corpus of such bilingual pairs, a one-to-one alignment is used. The number of bilingual phrases K can vary from max(I, J) to (I + J). Whereas the vocabulary size of the corpus in this representation is relatively limited, m-gram models with a long history m have to be built to capture enough phrasal context.
Also, the complexity of the WFST search increases, since epsilon arcs have to be used in order to hypothesize non-aligned target words.
In the representation of (Casacuberta et al., 2004), f̃ is one real source word only, and ẽ is a contiguous target phrase of 0 or more words. This representation arises from one-to-many alignments which are functions of target words. The advantage of this representation is that the search effort is proportional to the length of the source sentence. However, the vocabulary size of the "bilanguage" increases. This may result in data sparsity problems, which can at least partially be solved with smoothing techniques.
Finally, (de Gispert et al., 2002) describe bilingual X-grams (f̃, ẽ) without restrictions on the length of the source or target phrase. This representation can be derived from a general alignment with many-to-many connections. The drawback of this representation is the enormous vocabulary size, which may not allow for reliable estimation of the translation probability. Another disadvantage is the inability to translate individual words in f̃ if, for example, they do not appear in the training corpus in another context. In our opinion, however, in at least two cases it may be reasonable to include some phrases f̃ with length > 1. The first case is when several source words are always translated with one target word (e.g. translating an English noun phrase into a German compound). The second case usually involves non-literal phrase-to-phrase translations, when translating individual source words does not convey the meaning of the source phrase.
An example of the source sentence reordering, as well as of the three described bilingual corpus representations, labeled (A), (B) and (C), respectively, is given in Figure 1.
Figure 1 (example): the source sentence "mir wuerde sehr gut Anfang Mai passen ." is reordered to "sehr gut Anfang Mai wuerde passen mir ." according to its alignment with "the very beginning of May would suit me ."; the three bilingual corpus representations of this pair are:
(A) $|the sehr|very gut|$ Anfang|beginning $|of Mai|May wuerde|would passen|suit mir|me .|.
(B) sehr|the_very gut|$ Anfang|beginning Mai|of_May wuerde|would passen|suit mir|me .|.
(C) sehr_gut|the_very Anfang|beginning Mai|of_May wuerde|would passen|suit mir|me .|.
Figure 1: An example of alignment, source sentence reordering, monotonization and some alternative bilingual corpus representations; different alignment links are used for the reordering and/or for the individual representations (A)-(C), or are ignored due to the monotonicity requirements.
In our approach, we can avoid various heuristics and learn these and other types of corpus representations by using the flexible alignment framework presented in (Matusov et al., 2004). Following this work, we efficiently compute optimal, minimum-cost alignments which satisfy certain constraints. The constraints may include the requirement for each word to be aligned at least once, functional form, or full monotonicity. Local alignment costs between a source word f_j and a target word e_i are estimated statistically using state occupation probabilities of the HMM and IBM-4 models as trained by the GIZA++ toolkit (Och et al., 2003).
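As an illustration of one of these representations only, the sketch below builds representation (B) from a one-to-many alignment that is a function of target words, after source reordering. The function and variable names are illustrative; only the representation itself and the Figure 1 example are taken from the text above.

def bilingual_tuples_repr_b(src_reordered, tgt, align_t2s):
    # align_t2s maps each target position to exactly one source position.
    # Each (reordered) source word is paired with the concatenation of its aligned
    # target words, or with "$" when no target word is aligned to it.
    phrases = [[] for _ in src_reordered]
    for t_pos, s_pos in sorted(align_t2s.items()):
        phrases[s_pos].append(tgt[t_pos])
    return [f + "|" + ("_".join(p) if p else "$") for f, p in zip(src_reordered, phrases)]

# With the Figure 1 example this yields tuples such as
# "sehr|the_very", "gut|$", "Anfang|beginning", "Mai|of_May", "wuerde|would", ...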
2.3 Optimization Criterion
Using one of the corpus representations (f̃, ẽ) via a certain (constrained) alignment A, we rewrite the joint translation probability in the decision rule as follows:
  ê_1^I = argmax_{I, e_1^I} max_A Pr(A) · Pr(f_1^J, e_1^I | A)
        = argmax_{ẽ_1^K} max_{A, K} Pr(A) · Pr(f̃_1^K, ẽ_1^K | A, K)
        ≅ argmax_{ẽ_1^K, A, K} ∏_{k=1}^{K} Pr(f̃_k, ẽ_k | f̃_1^{k−1}, ẽ_1^{k−1}, A, K)
        ≅ argmax_{ẽ_1^K, A, K} ∏_{k=1}^{K} p(f̃_k, ẽ_k | f̃_{k−m}^{k−1}, ẽ_{k−m}^{k−1}, A, K)
In other words: the translation problem is mapped to the problem of estimating an m-gram language model over a learned set of bilingual tuples (f̃_k, ẽ_k). Mapping the bilingual language model to a WFST T is canonical.
3 Reordering in Search
Since we chose to reorder source sentences in training and translate monotonically, we can properly translate only sentences which have the word order of the target language. To overcome this obstacle, the input sentence has to be permuted, and the translation model then selects the best path through the permutation graph in a global decision process.
Figure 2: Permutations of a) positions j = 1, 2, 3, 4 of a source sentence f1 f2 f3 f4 using a window size of 2 for b) IBM constraints, c) inverse IBM constraints and d) local constraints.
When searching for the best translation ẽ_1^K of a given source sentence f_1^J, we first represent this input sentence as a linear automaton with word-labeled arcs (see the top of Figure 3). We then compute permutations of this automaton as described in (Knight et al., 1998). The overall search problem can be rewritten using finite-state terminology (Kanthak et al., 2004):
  ê_1^I = project-output( best( permute(f_1^J) ∘ T ) )
This implementation of the search problem with weighted finite-state transducers is very efficient. However, permuting an input sequence of J symbols results in J! possible permutations, i.e. in exponential complexity. Therefore, we compute a constrained permutation automaton on-demand while optionally applying beam pruning in the search.
For the on-demand computation of an automaton we specify a state description and an algorithm that calculates all outgoing arcs of a state from the state description. In our case, each state represents a permutation of a subset of the source words f_1^J which are already translated. This can be described by a bit vector b_1^J. Each bit of the state bit vector corresponds to an arc of the linear input automaton and is set to one if the arc has been used on any path from the initial to the current state. The bit vectors of two states connected by an arc differ only in a single bit. Note that bit vectors elegantly solve the problem of recombining paths in the automaton, as states with the same bit vector can be merged. As a result, a fully minimized permutation automaton has only a single initial and a single final state.
Even with on-demand computation, the complexity of full permutations is unmanageable for long sentences.
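To illustrate the on-demand state expansion described above, the minimal sketch below enumerates the outgoing arcs of one permutation state encoded as a bit vector. The names are illustrative; this is the unconstrained version, and the window constraints of the next section would simply filter the candidate positions.

def successors(bitvector, words):
    # A state is the set of already translated source positions, encoded as an integer
    # bit vector; every still-uncovered position j yields one outgoing arc labeled
    # words[j], leading to the state with bit j set.
    out = []
    for j, word in enumerate(words):
        if not (bitvector >> j) & 1:              # position j not yet covered
            out.append((word, bitvector | (1 << j)))
    return out

# Starting from the empty state 0 and repeatedly expanding successors enumerates all J!
# permutations; states reached in different orders but with identical bit vectors merge,
# so a fully minimized permutation automaton has a single initial and final state.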
We further reduce the complexity by additionally limiting the permutations with constraints, which we describe in the following. For all of these constraints, we use implementations with bit-vector state descriptions to compute constrained permutation graphs on-demand. Refer to Figure 2 for their visualizations.
The IBM reordering constraints are well known in the field of machine translation and were first described in (Berger et al., 1996). The idea behind these constraints is to deviate from monotonic translation by postponing the translation of a limited number of words. More specifically, at each state we can translate any of the first l yet uncovered word positions. For consistency we associate the window size with the parameter l.
For some language pairs, it is beneficial to translate some words at the end of the sentence first and to translate the rest of the sentence nearly monotonically. Following this idea we can define the inverse IBM constraints. Let j be the first uncovered position. We can choose any position for translation, unless l − 1 words at positions j' > j have already been translated; if this is the case, we must translate the word at position j.
For some language pairs, e.g. Italian–English, words are moved only a few positions to the left or right. The IBM constraints provide too many alternative permutations to choose from, as each word can be moved to the end of the sentence. A solution that allows only local permutations and therefore has very low complexity is given by the following permutation rule: the next word for translation comes from the window of l positions (both covered and uncovered) counting from the first yet uncovered position. Note that the local constraints define a true subset of the permutations defined by the IBM constraints. Figure 3 illustrates these most restrictive, but efficient, constraints with a window size of 2 when permuting the German sentence "ja, wir können mein Auto nehmen".
Figure 3: An example of local constraints with a window size of 2.
We also introduce weights for the constraints and normally give a higher probability to the arcs of the monotonic path through the reordering graph, while penalizing the non-monotonic ones.
4 Monotonic Sequences
Even with constrained reordering, the search space, especially for long sentences, may become too large to handle. However, many paths in the reordering graph are not relevant for translation and may even be harmful for the performance. Usually, each input source sentence can be viewed as several sequences of n ≥ 1 words, each of which should be translated monotonically.
We propose to identify such sequences in training and to forbid permutations which change the word order within such sequences, or break them up. To this end, we collect statistics over the training corpus by considering alignments which are functions of source words. We extract consecutive source phrases of various lengths (≤ 10) which were consistently aligned with some target words in a monotonic way.
When translating a source sentence, we search for monotonic sequences observed in training and perform longest match. We then concatenate all the words in the found monotonic sequences and use them to label only one arc in the linear automaton (see e.g. the top of Figure 4). When overlapping matches exist, we unite the matched sequences and are thus able to identify longer monotonic sequences not observed in training.
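As an illustration of the longest-match step only, the sketch below collapses known monotonic source sequences into single multi-word arc labels; the uniting of overlapping matches is omitted for brevity, and the names are illustrative rather than the original implementation.

def merge_monotonic_sequences(words, known_sequences, max_len=10):
    # known_sequences: set of word tuples that were consistently aligned monotonically
    # in training (length <= 10, as described above). Greedy longest match from the left.
    arcs, i = [], 0
    while i < len(words):
        match = 1
        for n in range(min(max_len, len(words) - i), 1, -1):   # try the longest match first
            if tuple(words[i:i + n]) in known_sequences:
                match = n
                break
        arcs.append("_".join(words[i:i + match]))
        i += match
    return arcs

# For example, with {("ja", ",", "wir", "koennen"), ("mein", "Auto")} known, the sentence
# "ja , wir koennen mein Auto nehmen" is reduced to the three arcs
# "ja_,_wir_koennen", "mein_Auto", "nehmen", as at the top of Figure 4.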
We permute the transformed linear automaton under some constraints using on-demand computation. Next, we make the reverse transformation and replace each arc in the reordering graph which is labeled with n words by n single-word arcs. This allows us to apply the bilingual m-gram language model transducer to the original lexical entries and make use of its generalization capability. All of these steps are efficiently realized at runtime with generic composition operations. The resulting permutation graph is shown in Figure 4. Note that it is significantly more compact than the corresponding graph in Figure 3 and contains only the most plausible reorderings. In particular, the movements of the verb "nehmen" are not restricted, which makes it possible for the system to choose the sequence of arcs "können nehmen" for the correct phrasal translation "can take".
It is also possible to generalize from the monotonic sequences seen in training by matching corresponding sequences of word classes or part-of-speech tags. Another application of the presented technique would be to explicitly forbid reorderings of word sequences which must a priori be translated monotonically, such as sequences of digits, time and date expressions, multi-word names, spelled letters, etc. Such restrictions are especially important for the subjective user appreciation of the system's performance.
Figure 4: Reordering with local constraints and a window size of 2, with non-reordered monotonic sequences.
5 Experimental Results
5.1 Corpus Statistics
The translation experiments were carried out on the Basic Travel Expression Corpus (BTEC), a multilingual speech corpus which contains tourism-related sentences usually found in travel phrase books. We tested our system on the Chinese-to-English Supplied Task, the corpus for which was provided during the International Workshop on Spoken Language Translation (IWSLT 2004) (Akiba et al., 2004). The corpus statistics for the BTEC corpus are given in Table 1. We evaluate the impact of the proposed reordering restrictions on the CSTAR 2003 test set with 506 Chinese sentences and 16 reference translations.
Table 1: Statistics of the Basic Travel Expression corpus.
                         Chinese   English
Train   sentences         20 000
        words            182 904   160 523
        singletons         3 525     2 948
        vocabulary         7 643     6 982
Test    sentences            506
        words              3 515     3 595
We also present results on the Verbmobil task (Wahlster, 2000). The domain of this corpus is appointment scheduling, travel planning, and hotel reservation. It consists of transcriptions of spontaneous speech. Table 2 shows the statistics of this corpus.
Table 2: Statistics of the Verbmobil corpus.
                         German    English
Train   sentences         58 073
        words            519 523   549 921
        vocabulary         7 939     4 672
        singletons         3 453     1 698
        lexicon entries    12 779
Test    sentences            251
        words              2 628     2 871
5.2 Evaluation Criteria
For the automatic evaluation, we used the word error rate (WER), the position-independent word error rate (PER), and the BLEU score (Papineni et al., 2002). The BLEU score measures accuracy, i.e. larger scores are better.
The three measures were computed with respect to multiple reference translations, when available. To indicate this, we label the error rate acronyms with an m. On the Chinese-to-English BTEC task, both training and evaluation were performed using corpora and references in lowercase and without punctuation marks.
5.3 Experiments
As described in Sec. 2.2, we reordered the source sentences in training. We then created a bilingual corpus of tuples (f_j, ẽ_j) (i.e. representation (B) in Figure 1) based on a fully monotonic alignment that is a function of target words. Using this corpus, we estimated a smoothed m-gram language model (m = 4 on the BTEC task, m = 3 on the Verbmobil task) and represented it as a finite-state transducer.
When translating, we applied moderate beam pruning to the search graph only when necessary. This allowed for reasonable translation times and memory consumption without a significant negative impact on performance. In the baseline experiments, we did not reorder source sentences in the search. In all other experiments, where constrained reordering was permitted, we obtained the best results when we restricted reordering in matched word sequences which had been monotonically aligned in training more than 50% of the time. With this setting, the average number of arcs in the linear automaton representation of a sentence decreased from 7 to about 5 for the BTEC test set, and dramatically from more than 10 to 6 for the Verbmobil test set.
Figure 5: Word error rate [%] as a function of the reordering window size for different reordering constraints (INV-IBM, INV-IBM + monotonic sequences, LOCAL, LOCAL + monotonic sequences): Chinese-to-English translation.
Table 3: Translation quality and efficiency on the BTEC task, development corpus (*: full search; ♦: with fixed word order in monotonic sequences).
              mWER [%]  mPER [%]  BLEU [%]  speed [w/s]  mem [MB]
baseline*        54.0      42.3      23.0        110        28
4-inv-ibm        48.0      39.4      33.2        0.5       402
3-inv-ibm♦       49.1      40.3      30.1          7        94
5-local♦         49.4      40.2      31.2         56        62
5.3.1 Chinese-to-English Translation
Word order in Chinese and English is somewhat similar. However, a few word reorderings over quite large distances may be necessary. This is especially true for questions, in which question words like "where" and "when" are placed, unlike in English, at the end of a sentence. Based on these observations, we expected that identifying monotonic sequences would result in faster and better translations under reordering constraints with small window sizes.
The best translation results for this task were achieved under inverse IBM reordering constraints with window size ≥ 4. Figure 5 shows that, using monotonic sequences in which the words are not permuted in search, we can achieve similar performance with window size 3. The local constraints generally perform well on this task only for very large window sizes (≥ 9). By keeping the word order in monotonic sequences fixed, we are able to reach similar performance with a window size of 5 or 6. Table 3 presents all error measures, as well as time and memory usage, for three configurations with similar word error rate.
Keeping the word order in monotonic sequences fixed, we observed dramatic improvements in translation speed, from 0.5 to 7, or even to 56 words per second (measured on a 2 x Pentium III 600 MHz machine with 1 GB RAM), without a large degradation of the performance.
The increase in the word error rate for larger window sizes with the proposed restrictions can be explained by insufficient alignment quality. In some alignments in training, source word sequences were incorrectly aligned monotonically. Their permutation may be useful, but it is not performed in the translation process.
5.3.2 German-to-English Translation
German differs in word order from English mainly in the position of verbs and verb prefixes, which often appear at the end of a sentence. Reordering is very important to achieve good translation performance.
The Alignment Template system of (Och et al., 2004) performs phrasal reordering using a complicated graph search algorithm with extensive pruning and heuristic functions for rest cost estimation. It also incorporates several features, such as lexicon scores and a word penalty, whose scaling factors have to be optimized. In contrast, our system uses only the translation model score and limited computational resources, so that pruning is often not necessary and search errors can be avoided altogether. Nevertheless, using weighted constrained reorderings in search, we can report competitive translation results.
For three different types of reordering constraints, the window size and the probability of the monotonic path were optimized on a development set. The best word error rate on the test set is achieved under IBM constraints with a window size of 4 (see Table 4). Pruning is necessary, and the translation speed is 8 words per second. Using inverse IBM constraints, we are able to reach the lowest position-independent error rate of 26.5% reported in (Och et al., 2004). Here we perform full search and translate at a speed of 2 words per second. At the same time, when we keep the word order in the monotonic sequences fixed, we can produce translations of almost the same quality using the efficient local constraints at a rate of 35 words per second. Thus, the efficiency of the translation increases without a significant loss in performance. Fast translations are quite important in applications similar to the original Verbmobil project – speech-to-speech dialogue translation.
Table 4: Translation quality and efficiency on the Verbmobil task (⋆: with beam pruning; ♦: with fixed word order in monotonic sequences).
              mWER [%]  PER [%]  BLEU [%]  speed [w/s]  mem [MB]
baseline         41.5     29.1      40.6        170        28
3-inv-ibm        37.5     26.5      50.5          2        80
4-ibm⋆           36.2     27.4      49.1          8        62
2-inv-ibm        36.9     26.9      50.3         13        37
3-local♦         36.3     27.3      49.9         35        53
6 Conclusion
In this paper, we described a novel extension to a reordering framework which performs source sentence reordering on the word level. We employed a monotonic phrase-based translation system that takes a reordering graph as input. Based on statistics of monotonically aligned source word sequences in training, we identified source phrases in the input sentences whose word order should be kept fixed. Using an efficient finite-state implementation, we included the modeling of such phrases into the framework, which realizes constrained, weighted, on-demand computable permutations.
We showed that this new component significantly improves the efficiency of the search, while allowing quality translations into a language with a different word order. We achieved competitive results on Chinese-to-English and German-to-English tasks. In the future, we would like to explore more sophisticated probability distributions for the reordering alternatives.
Acknowledgement
This work was partially funded by the Deutsche Forschungsgemeinschaft (DFG) under the project "Statistische Textübersetzung" (Ne572/5) and by the European Union under the integrated project TC-STAR – Technology and Corpora for Speech to Speech Translation (IST-2002-FP6-506738).
7 References
Akiba, Y., Federico, M., Kando, N., Nakaiwa, H., Paul, M., and Tsujii, J. (2004). Overview of the IWSLT04 Evaluation Campaign. Proc. Int. Workshop on Spoken Language Translation, pp. 1–12, Kyoto, Japan.
Bangalore, S. and Riccardi, G. (2000). Stochastic Finite-State Models for Spoken Language Machine Translation. Proc. Workshop on Embedded Machine Translation Systems, pp. 52–59.
Berger, A. L., Brown, P. F., Della Pietra, S. A., Della Pietra, V. J., Gillett, J. R., Kehler, A. S., and Mercer, R. L. (1996). Language Translation Apparatus and Method of Using Context-based Translation Models. United States Patent 5510981.
Casacuberta, F. and Vidal, E. (2004). Machine Translation with Inferred Stochastic Finite-State Transducers. Computational Linguistics, vol. 30(2):205–225.
de Gispert, A. and Mariño, J. (2002). Using X-grams for Speech-to-Speech Translation. Proc. of the 7th Int. Conf. on Spoken Language Processing, ICSLP'02.
Kanthak, S. and Ney, H. (2004). FSA: An Efficient and Flexible C++ Toolkit for Finite State Automata using On-demand Computation. Proc. 42nd Annual Meeting of the ACL, pp. 510–517, Barcelona, Spain.
Knight, K. and Al-Onaizan, Y. (1998). Translation with Finite-State Devices. Lecture Notes in Artificial Intelligence, Springer-Verlag, vol. 1529, pp. 421–437.
Kumar, S. and Byrne, W. (2003). A Weighted Finite State Transducer Implementation of the Alignment Template Model for Statistical Machine Translation. Proc. Human Language Technology Conf. NAACL, pp. 142–149, Edmonton, Canada.
Matusov, E., Zens, R., and Ney, H. (2004). Symmetric Word Alignments for Statistical Machine Translation. Proc. 20th Int. Conf. on Computational Linguistics, pp. 219–225, Geneva, Switzerland.
Och, F. J. and Ney, H. (2003). A Systematic Comparison of Various Statistical Alignment Models. Computational Linguistics, vol. 29, number 1, pp. 19–51.
Och, F. J. and Ney, H. (2004). The Alignment Template Approach to Statistical Machine Translation. Computational Linguistics, vol. 30(4):417–449.
Papineni, K., Roukos, S., Ward, T., and Zhu, W.-J. (2002). BLEU: a Method for Automatic Evaluation of Machine Translation. Proc. 40th Annual Meeting of the ACL, Philadelphia, PA, pp. 311–318.
Vilar, J. M. (2000). Improve the Learning of Sub-sequential Transducers by Using Alignments and Dictionaries. Lecture Notes in Artificial Intelligence, Springer-Verlag, vol. 1891, pp. 298–312.
Wahlster, W., editor. (2000). Verbmobil: Foundations of speech-to-speech translations. Springer Verlag, Berlin, Germany.
{ "decision": "Unknown", "reviews": [] }
0
0
[]
[]
{ "id": "ZHT8EMybsh", "year": null, "venue": "EAMT 2022", "pdf_link": "https://aclanthology.org/2022.eamt-1.44.pdf", "forum_link": "https://openreview.net/forum?id=ZHT8EMybsh", "arxiv_id": null, "doi": null }
{ "title": "MTee: Open Machine Translation Platform for Estonian Government", "authors": [ "Toms Bergmanis", "Marcis Pinnis", "Roberts Rozis", "Janis Slapins", "Valters Sics", "Berta Bernane", "Guntars Puzulis", "Endijs Titomers", "Andre Tättar", "Taido Purason", "Hele-Andra Kuulmets", "Agnes Luhtaru", "Liisa Rätsep", "Maali Tars", "Annika Laumets-Tättar", "Mark Fishel" ], "abstract": "Toms Bergmanis, Marcis Pinnis, Roberts Rozis, Jānis Šlapiņš, Valters Šics, Berta Bernāne, Guntars Pužulis, Endijs Titomers, Andre Tättar, Taido Purason, Hele-Andra Kuulmets, Agnes Luhtaru, Liisa Rätsep, Maali Tars, Annika Laumets-Tättar, Mark Fishel. Proceedings of the 23rd Annual Conference of the European Association for Machine Translation. 2022.", "keywords": [], "raw_extracted_content": "MT EE: Open Machine Translation Platform for Estonian Government\nToms Bergmanis, M ¯arcis Pinnis, Roberts Rozis, J ¯anis ˇSlapin ¸ ˇs, Valters ˇSics,\nBerta Bern ¯ane, Guntars Pu ˇzulis, Endijs Titomers\nTilde, Latvia {name.surname }@tilde.lv\nAndre T ¨attar, Taido Purason, Hele-Andra Kuulmets, Agnes Luhtaru, Liisa R ¨atsep,\nMaali Tars, Annika Laumets-T ¨attar, Mark Fishel\nUniversity of Tartu, Estonia {name.surname }@ut.ee\nAbstract\nWe present the MT EEproject—a research\ninitiative funded via an Estonian public\nprocurement to develop machine transla-\ntion technology that is open-source and\nfree of charge. The MT EEproject de-\nlivered an open-source platform serving\nstate-of-the-art machine translation sys-\ntems supporting four domains for six lan-\nguage pairs translating from Estonian into\nEnglish, German, and Russian and vice-\nversa. The platform also features gram-\nmatical error correction and speech trans-\nlation for Estonian and allows for format-\nted document translation and automatic\ndomain detection. The software, data and\ntraining workflows for machine translation\nengines are all made publicly available for\nfurther use and research.\n1 Project Background\nMT EEis an Estonian governmental project to de-\nvelop high-quality machine translation (MT) plat-\nform that is open-source and free of charge. The\nproject was motivated by the COVID-19 pan-\ndemic. It was aimed to address the country’s need\nfor fast and cheap translation of information to and\nfrom Estonian and the languages most relevant to\nEstonia’s society: English, German, and Russian.\nMT EEwas funded by the Ministry of Education\nand Research via a public procurement through\nthe Language Technology Competence Center at\nthe Institute of the Estonian Language. The dura-\ntion of MT EEproject was nine months, and it con-\n© 2022 The authors. This article is licensed under a Creative\nCommons 3.0 licence, no derivative works, attribution, CC-\nBY-ND.cluded in January 2022. It was fulfilled as a col-\nlaboration between Tilde and the Institute of Com-\nputer Science of the University of Tartu. A demon-\nstration of the platform1is made publicly available\nby hosting using the infrastructure of the High Per-\nformance Computing Center of the University of\nTartu.\n2 Data\nTo train MT systems, we used parallel data\nfrom OPUS (Tiedemann, 2009), ELRC-SHARE\n(Piperidis et al., 2018) and EU Open Data Portal,2\nas well as data donors and industry partners. In\ncontrast, monolingual data were mainly obtained\nfrom the public web. To classify data as belong-\ning to legal, military, crisis, or general domains,\nwe used its source information. 
Furthermore, we\nused terminology provided by the Institute of the\nEstonian Language to automatically obtain addi-\ntional data for individual domains. The result-\ning data sets ranged from 5 to 20 million parallel\nsentences for the general domain. However, data\nsets were much smaller for niche domains and lan-\nguage pairs, such as the German–Estonian crisis\ndomain, where only a few dozen sentence pairs\nwere identified. We observed a similar pattern for\nthe monolingual data, for which data sizes ranged\nfrom 50 million sentences for the general domain\nto only 8 thousand sentences for the Russian mili-\ntary domain.\nWe used random held-out subsets of training\ndata for testing and development, which, depend-\ning on the language pair and domain, were 500 to\n2000 sentences large. Held-out subsets, however,\nare part of pre-existing parallel corpora, which\n1https://mt.cs.ut.ee/\n2https://data.europa.eu/\nmay be present in training data of other (also third\nparty) MT systems, which would make a fair com-\nparison of the MT system quality impossible. For\nthis reason, we also created entirely novel transla-\ntion benchmarks3by ordering professional transla-\ntions of recent news.\n3 Models\nFollowing the implementation by Lyu et al. (2020),\nwe trained modular multilingual transformer-\nbased models (Vaswani et al., 2017) using fairseq\n(Ott et al., 2019) with separate encoders and de-\ncoders for each input and output language. We\nselected this architecture because it showed bet-\nter results for lower-resourced language pairs and\ndomains. The final set of models was trained on\na combination of parallel and back-translated data\nand fine-tuned for each domain.\nTo evaluate MT EEMT systems, we compared\nthem against the public systems by Tilde, Google,\nDeepL and Neurot ˜olge.4The evaluation using the\nnewly created translation benchmarks yielded re-\nsults5on average favouring MT EEsystems for all\ndomains. These results suggest that, at least as\nthese tests can tell, MT EEsystems are competitive\nand of high quality.\n4 Platform\nThe MT EEplatform serves the MT systems and\nprovides functionality for text, document (.docx,\n.xlsx, .odt, .tmx, .pptx, .txt), and web page trans-\nlation for all domains and language pairs. Be-\nfore the translation request is routed to the corre-\nsponding MT model, adherence to one of the four\ndomains is automatically detected using a fine-\ntuned XLM-RoBERTa (Conneau et al., 2020) lan-\nguage model. For translation directions where Es-\ntonian is the source language, the platform also\nprovides hints for grammatical error correction6\nand speech translation via a cascade of automatic\nspeech recognition7followed by an MT system.\nThese components can be accessed through the\n3https://github.com/Project-MTee/MTee_\ntranslation_benchmarks\n4https://www.neurotolge.ee\n5https://raw.githubusercontent.com/wiki/\nProject-MTee/mtee-platform/WP3.pdf\n6https://github.com/tartunlp/grammar-api/\npkgs/container/grammar-api\n7https://github.com/tartunlp/\nspeech-to-text-api/pkgs/container/\nspeech-to-text-apitranslation website or their REST APIs. 
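The domain detection step described above can be illustrated with a short inference sketch using the Hugging Face transformers API. The checkpoint identifier and the label order below are placeholders introduced for illustration only; they are not the exact artefacts released by the project.

```python
"""Illustrative sketch of domain detection with a fine-tuned XLM-RoBERTa
classifier. The model name and label set are assumptions, not the released
MTee components."""

import torch
from transformers import AutoModelForSequenceClassification, AutoTokenizer

MODEL_NAME = "your-org/mtee-domain-detector"  # hypothetical checkpoint id
LABELS = ["general", "legal", "military", "crisis"]  # assumed label order

tokenizer = AutoTokenizer.from_pretrained(MODEL_NAME)
model = AutoModelForSequenceClassification.from_pretrained(MODEL_NAME)
model.eval()


def detect_domain(text: str) -> str:
    """Return the most probable domain label for a translation request."""
    inputs = tokenizer(text, return_tensors="pt", truncation=True)
    with torch.no_grad():
        logits = model(**inputs).logits
    return LABELS[int(logits.argmax(dim=-1))]


print(detect_domain("Troops were deployed to assist with flood evacuation."))
```

A request routed through such a classifier would then be sent to the MT model fine-tuned for the predicted domain.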
All com-\nponents developed for the platform are dockerized\nand released under the MIT license.8\n5 Current Status of MT EE\nThe MT EEproject concluded in January 2022, and\nits results were handed over to the Language Tech-\nnology Competence Center at the Institute of the\nEstonian Language.\nThe High Performance Computing Center of the\nUniversity of Tartu is hosting the MT EEplatform’s\ndemonstration for at least another year. Tilde and\nthe Institute of Computer Science of the Univer-\nsity of Tartu also continue to provide their techni-\ncal and scientific support during this period.\nUltimately, when the Institute of the Estonian\nLanguage has approbated the technical and scien-\ntific results of the project, they should possess the\nknowledge and the know-how to extend and main-\ntain the platform independently.\nReferences\nConneau, Alexis, Kartikay Khandelwal, Naman Goyal,\nVishrav Chaudhary, Guillaume Wenzek, Francisco\nGuzm ´an,´Edouard Grave, Myle Ott, Luke Zettle-\nmoyer, and Veselin Stoyanov. 2020. Unsupervised\nCross-lingual Representation Learning at Scale. In\nProceedings of ACL 2020 , pages 8440–8451.\nLyu, Sungwon, Bokyung Son, Kichang Yang, and\nJaekyoung Bae. 2020. Revisiting Modularized\nMultilingual NMT to Meet Industrial Demands. In\nProceedings of EMNLP 2020 , pages 5905–5918,\nNovember.\nOtt, Myle, Sergey Edunov, Alexei Baevski, Angela\nFan, Sam Gross, Nathan Ng, David Grangier, and\nMichael Auli. 2019. fairseq: A Fast, Extensible\nToolkit for Sequence Modeling. In Proceedings of\nNAACL 2019 (Demonstrations) , pages 48–53.\nPiperidis, Stelios, Penny Labropoulou, Miltos Deli-\ngiannis, and Maria Giagkou. 2018. Managing Pub-\nlic Sector Data for Multilingual Applications Devel-\nopment. In Proceedings of LREC 2018 , pages 1289–\n1293.\nTiedemann, J ¨org. 2009. News from OPUS-A Collec-\ntion of Multilingual Parallel Corpora with Tools and\nInterfaces. In Recent Advances in Natural Language\nProcessing , volume 5, pages 237–248.\nVaswani, Ashish, Noam Shazeer, Niki Parmar, Jakob\nUszkoreit, Llion Jones, Aidan N Gomez, Łukasz\nKaiser, and Illia Polosukhin. 2017. Attention is\nAll You Need. Advances in Neural Information Pro-\ncessing Systems , 30.\n8https://github.com/orgs/Project-MTee/\npackages", "main_paper_content": null }
{ "decision": "Unknown", "reviews": [] }
0
0
[]
[]
{ "id": "CLYZt5aVqnK", "year": null, "venue": "EAMT 2020", "pdf_link": "https://aclanthology.org/2020.eamt-1.53.pdf", "forum_link": "https://openreview.net/forum?id=CLYZt5aVqnK", "arxiv_id": null, "doi": null }
{ "title": "ELITR: European Live Translator", "authors": [ "Ondrej Bojar", "Dominik Machácek", "Sangeet Sagar", "Otakar Smrz", "Jonás Kratochvíl", "Ebrahim Ansari", "Dario Franceschini", "Chiara Canton", "Ivan Simonini", "Thai-Son Nguyen", "Felix Schneider", "Sebastian Stüker", "Alex Waibel", "Barry Haddow", "Rico Sennrich", "Philip Williams" ], "abstract": "Ondřej Bojar, Dominik Macháček, Sangeet Sagar, Otakar Smrž, Jonáš Kratochvíl, Ebrahim Ansari, Dario Franceschini, Chiara Canton, Ivan Simonini, Thai-Son Nguyen, Felix Schneider, Sebastian Stücker, Alex Waibel, Barry Haddow, Rico Sennrich, Philip Williams. Proceedings of the 22nd Annual Conference of the European Association for Machine Translation. 2020.", "keywords": [], "raw_extracted_content": "ELITR: European Live Translator\nOnd ˇrej Bojar1Dominik Mach ´aˇcek1Sangeet Sagar1Otakar Smr ˇz1\nJon´aˇs Kratochv ´ıl1Peter Pol ´ak1Ebrahim Ansari1\nDario Franceschini2Chiara Canton2Ivan Simonini2\nThai-Son Nguyen3Felix Schneider3Sebastian St ¨uker3Alex Waibel3\nBarry Haddow4Rico Sennrich4Philip Williams4\n1Charles University,2PerV oice,3Karlsruhe Institute of Technology,4University of Edinburgh\nCoordinator email: [email protected]\nAbstract\nELITR (European Live Translator) project\naims to create a speech translation system\nfor simultaneous subtitling of conferences\nand online meetings targetting up to 43\nlanguages. The technology is tested by\nthe Supreme Audit Office of the Czech Re-\npublic and by alfaview®, a German online\nconferencing system. Other project goals\nare to advance document-level and mul-\ntilingual machine translation, automatic\nspeech recognition, and meeting summa-\nrization.\n1 Description\nELITR (European Live Translator, elitr.eu ) is\na three-year EU H2020 Research and Innovation\nProgramme running from 2019 to 2021. The con-\nsortium consists of Charles University, University\nof Edinburgh, Karlsruhe Institute of Technology\n(research partners), PerV oice (integrator) and alfa-\ntraining (user partner).\n2 Objectives\nELITR objectives are research and innovations in\nthe field of spoken language and text translation\nand automatic summarization of meetings.\n2.1 Simultaneous Subtitling\nIn ELITR, we aim to develop a system for simulta-\nneous subtitling of conferences and online meet-\nings. Our affiliated user partner is the Supreme\nAudit Office of the Czech Republic. It is hosting\na congress of EUROSAI (European Organization\n© 2020 The authors. This article is licensed under a Creative\nCommons 3.0 licence, no derivative works, attribution, CC-\nBY-ND.of Supreme Audit Institutions). The congress par-\nticipants are natives of 43 languages, and many of\nthem have difficulties in understanding any of the\nsix congress official languages, into which it is in-\nterpreted by humans, or to understand some non-\nnative accents. For this and other similar cases,\nwe develop a simultaneous speech translation sys-\ntem from 7 spoken languages (English, German,\nRussian, Italian, French, Spanish, and experimen-\ntally Czech) subtitling into 43 languages, including\nthose for which a human interpreter would not be\navailable for capacity reasons. The 43 languages\nare 24 EU official languages and 19 others, spoken\nbetween Morocco and Kazachstan.\nWith our other user partner, alfatraining, we\nconnect our system with an online meeting plat-\nform, alfaview®.\n2.2 Other Research Topics\nThe most visible application goal of live subti-\ntling is supported by our advancements in the re-\nlated areas. 
We research into document-level ma-\nchine translation to enable conference participants\nto translate documents between all the 43 lan-\nguages in high-quality, taking inter-sentential phe-\nnomena into account (V oita et al., 2019a; V oita et\nal., 2019b; V ojt ˇechov ´a et al., 2019; Popel et al.,\n2019; Rysov ´a et al., 2019).\nWe research into multilingual machine transla-\ntion to reduce the cost of targeting many languages\nat once, and to leverage multiple language variants\nof the source for higher quality (Zhang et al., 2019;\nZhang and Sennrich, 2019).\nTo face challenges of simultaneous translation,\nsuch as robustness to noise, out-of-vocabulary\nwords, domain adaptation, and non-standard ac-\ncents (Mach ´aˇcek et al., 2019), latency and qual-\nity trade-off, we aim to improve automatic speech\nrecognition. We also explore cascaded and fully\nend-to-end neural spoken language translation\n(Pham et al., 2019; Nguyen et al., 2019; Nguyen\net al., 2020) and co-organize shared tasks at WMT\nand IWSLT.\n2.3 Automatic Minuting\nThe last objective of our project is an automatic\nsystem for structured summaries of meetings. It\nis a challenging and high-risk goal, but potentially\nvery profitable. We aim to lay the necessary foun-\ndations for research in this area by collecting and\nreleasing relevant datasets (Nedoluzhko and Bo-\njar, 2019; C ¸ ano and Bojar, 2019a; C ¸ ano and Bojar,\n2019b) and plan to run shared tasks.\n3 ELITR SLT System\nELITR’s integration of components of spoken lan-\nguage translation builds on a proprietary software\nsolution by the project integrator PerV oice. The\ncentral point is a server called the “mediator”.\n“Workers” for SLT subtasks, such as automatic\nspeech recognition, machine translation, and in-\ntermediate punctuating component for specific lan-\nguages, potentially in event-specific or experimen-\ntal versions, are provided by research labs in the\nconsortium, ran on their hardware and connected\nto the mediator. A client requests a specific task,\nfor example, German audio into Czech translation,\nand the mediator connects a cascade of workers to\ndeliver the requested output. The last worker fi-\nnally publishes subtitles on a webpage. Meeting\nparticipants follow subtitles and slides on personal\ndevices.\nThe system provides simultaneous low latency\ntranslation. We follow the re-translation approach\nof Niehues et. al (2018). The translation is first\ndisplayed around 1 second after the speaker, and\nthen it is occasionally corrected and finalized after\napproximately 7 seconds.\nAcknowledgement\nThis project has received funding from the Eu-\nropean Union’s Horizon 2020 Research and In-\nnovation Programme under Grant Agreement No.\n825460.\nReferences\nC ¸ ano, Erion and Ond ˇrej Bojar. 2019a. Efficiency Met-\nrics for Data-Driven Models: A Text Summarization\nCase Study. arXiv e-prints , page arXiv:1909.06618.C ¸ ano, Erion and Ond ˇrej Bojar. 2019b. Keyphrase\ngeneration: A text summarization struggle. In\nNAACL/HLT , June.\nMach ´aˇcek, Dominik, Jon ´aˇs Kratochv ´ıl, Tereza\nV ojtˇechov ´a, and Ond ˇrej Bojar. 2019. A Speech\nTest Set of Practice Business Presentations with\nAdditional Relevant Texts. In SLSP .\nNedoluzhko, Anna and Ond ˇrej Bojar. 2019. Towards\nAutomatic Minuting of Meetings. In ITAT: Sloven-\nskoˇcesk´y NLP workshop (SloNLP 2019) .\nNguyen, Thai-Son, Sebastian Stueker, Jan Niehues,\nand Alex Waibel. 2019. Improving sequence-\nto-sequence speech recognition training with\non-the-fly data augmentation. 
arXiv preprint\narXiv:1910.13296 .\nNguyen, Thai-Son, Sebastian St ¨uker, and Alex Waibel.\n2020. Toward Cross-Domain Speech Recogni-\ntion with End-to-End Models. arXiv preprint\narXiv:2003.04194 .\nNiehues, Jan, Ngoc-Quan Pham, Thanh-Le Ha,\nMatthias Sperber, and Alex Waibel. 2018. Low-\nlatency neural speech translation. arXiv preprint\narXiv:1808.00491 .\nPham, Ngoc-Quan, Thai-Son Nguyen, Jan Niehues,\nMarkus M ¨uller, and Alex Waibel. 2019. Very Deep\nSelf-Attention Networks for End-to-End Speech\nRecognition. Proc. Interspeech, Graz, Austria .\nPopel, Martin, Dominik Mach ´aˇcek, Michal\nAuersperger, Ond ˇrej Bojar, and Pavel Pecina.\n2019. English-Czech Systems in WMT19:\nDocument-Level Transformer. In WMT Shared Task\nPapers .\nRysov ´a, Kate ˇrina, Magdal ´ena Rysov ´a, Tom ´aˇs Musil,\nLucie Pol ´akov ´a, and Ond ˇrej Bojar. 2019. A Test\nSuite and Manual Evaluation of Document-Level\nNMT at WMT19. In WMT Shared Task Papers .\nV oita, Elena, Rico Sennrich, and Ivan Titov. 2019a.\nContext-Aware Monolingual Repair for Neural Ma-\nchine Translation. In EMNLP/IJCNLP .\nV oita, Elena, Rico Sennrich, and Ivan Titov. 2019b.\nWhen a Good Translation is Wrong in Context:\nContext-Aware Machine Translation Improves on\nDeixis, Ellipsis, and Lexical Cohesion. In ACL.\nV ojtˇechov ´a, Tereza, Michal Nov ´ak, Milo ˇs Klou ˇcek, and\nOndˇrej Bojar. 2019. SAO WMT19 Test Suite: Ma-\nchine Translation of Audit Reports. In WMT Shared\nTask Papers .\nZhang, Biao and Rico Sennrich. 2019. Root Mean\nSquare Layer Normalization. In NIPS , Vancouver,\nCanada.\nZhang, Biao, Ivan Titov, and Rico Sennrich. 2019.\nImproving Deep Transformer with Depth-\nScaled Initialization and Merged Attention. In\nEMNLP/IJCNLP .", "main_paper_content": null }
{ "decision": "Unknown", "reviews": [] }
0
0
[]
[]
{ "id": "TlJKlwbZaA", "year": null, "venue": "EAMT 2022", "pdf_link": "https://aclanthology.org/2022.eamt-1.33.pdf", "forum_link": "https://openreview.net/forum?id=TlJKlwbZaA", "arxiv_id": null, "doi": null }
{ "title": "Towards Readability-Controlled Machine Translation of COVID-19 Texts", "authors": [ "Fernando Alva-Manchego", "Matthew Shardlow" ], "abstract": null, "keywords": [], "raw_extracted_content": "Towards Readability-Controlled Machine Translation of COVID-19 Texts\nFernando Alva-Manchego\nCardiff University\[email protected] Shardlow\nManchester Metropolitan University\[email protected]\nAbstract\nThis project investigates the capabilities\nof machine translation (MT) models for\ngenerating translations at varying levels\nof readability, focusing on texts about\nCOVID-19. Funded by the European As-\nsociation for Machine Translation and by\nthe Centre for Advanced Computational\nSciences at Manchester Metropolitan Uni-\nversity, we collected manual simplifica-\ntions for English and Spanish texts in the\nTICO-19 dataset, and assessed the perfor-\nmance of neural MT models in this new\nbenchmark. Future work will implement\nmodels that jointly translate and simplify,\nand develop suitable evaluation metrics.\n1 Introduction\n“Multilingual Translation with Readability-\nControlled Output Generation” is a project that\nreceived funding from the European Association\nfor Machine Translation (under its programme\n“2021 Sponsorship of Activities”) and from the\nCentre for Advanced Computational Sciences at\nManchester Metropolitan University. We aim to\ndevelop machine translation (MT) models that\ngenerate translations that can be understood by\nnon-expert readers, focusing on texts with medical\ninformation. This is pertinent in the context of the\nCOVID-19 pandemic, where there is a disparity in\nthe availability of health-related content produced\nin English, compared to other languages.\nThe project has the following objectives: (1) to\ncollect a dataset with simplified versions of paral-\nlel texts in English and Spanish about COVID-19;\n(2) to assess how well existing state-of-the-art MT\nmodels perform on our new benchmark; and (3) to\n© 2022 The authors. This article is licensed under a Creative\nCommons 3.0 licence, no derivative works, attribution, CC-\nBY-ND.Lang. Complexity W/S Sy/W FRE ↑ S-P↑\nEnglishOriginal 23.015 6.444 45.69 –\nSimplified 21.838 6.308 52.70 –\nSpanishOriginal 27.623 6.287 – 75.17\nSimplified 24.749 6.271 – 79.49\nTable 1: Statistics of Simple TICO-19: average number of\nwords per sentence (W/S), average number of syllables per\nword (Sy/W), and estimated readability with Flesch Reading\nEase (FRE) for English and Szigriszt-Pazos (S-P) for Spanish.\ninvestigate additional model architectures and/or\nresources that are needed to generate and evaluate\nsimplified in-domain translations.\nThe first two goals of the project were carried\nout from January 2021 to December 2021, and re-\nsulted in the release of the Simple TICO-19 dataset\n(Shardlow and Alva-Manchego, 2022).1We con-\ntinue to work with the new dataset to further in-\nvestigate the nature of readability-controlled out-\nput generation in the MT context. We hope to ap-\nply for further funding at a national and European\nlevel as a result of this work.\n2 The Simple TICO-19 Dataset\nWe leveraged the TICO-19 benchmark (Anasta-\nsopoulos et al., 2020), which contains 3,000 sen-\ntences related to the COVID-19 pandemic, trans-\nlated from English into 36 languages and from sev-\neral sources (e.g. academic publications, speech\ncorpora, news articles, etc.). 
For our project, we collected manual simplifications for the English and Spanish subsets, resulting in the Simple TICO-19 dataset, where each sentence has either a simplified version of itself, or a decision has been taken that the sentence is already sufficiently simple. Table 1 shows some high-level statistics of the resulting corpus, including readability indices such as Flesch Reading Ease (FRE) (Flesch, 1948) for English, and Szigriszt-Pazos (S-P) (Szigriszt Pazos, 2001) for Spanish. These indices, in particular, showcase the improvements in readability from the original sentences in the dataset to their simplified versions, for both languages.

¹ https://github.com/MMU-TDMLab/SimpleTICO19

Table 1: Statistics of Simple TICO-19: average number of words per sentence (W/S), average number of syllables per word (Sy/W), and estimated readability with Flesch Reading Ease (FRE) for English and Szigriszt-Pazos (S-P) for Spanish.

  Lang.     Complexity   W/S      Sy/W    FRE ↑   S-P ↑
  English   Original     23.015   6.444   45.69     –
            Simplified   21.838   6.308   52.70     –
  Spanish   Original     27.623   6.287     –     75.17
            Simplified   24.749   6.271     –     79.49

3 Machine Translation Baselines
To obtain baseline results, we leveraged models pre-trained on opus-mt-en-es with MarianMT as architecture. Table 2 reports BLEU (Papineni et al., 2002) and BERTScore (Zhang et al., 2020) as evaluation metrics on all the test set and per data source therein, considering original–en as source and two targets: original–es and simplified–es. The highest scores are obtained when original–es is the target, showing that standard neural MT models cannot generate simplified texts by default. Also, performance varies depending on the data source, indicating the effect of the style of text.

Table 2: Results per data source of our baseline models on the test set of Simple TICO-19.

                orig–en → orig–es          orig–en → simp–es
  Data Source   BLEU     BERTScore         BLEU     BERTScore
  CMU           33.51    0.678             17.05    0.581
  PubMed        51.63    0.819             42.69    0.757
  Wikinews      55.41    0.826             40.22    0.732
  Wikipedia     52.16    0.875             44.83    0.836
  Wikisource    39.98    0.715             31.85    0.647
  All           51.42    0.841             43.15    0.788

4 Future Work
Translation and Simplification. In order to incorporate simplification capabilities into MT models, we will first experiment with pipeline systems that translate and then simplify (and vice-versa) leveraging state-of-the-art models for each task. We will then work on models that perform both tasks jointly, exploring multi-task architectures.

Controllable Translation. We will study how to train models that generate outputs at diverse readability levels. We will explore varying the proportion of translation and simplification training instances to control the readability of the translations. We will rank target-side simple sentences according to the proportion of complex words and syntactic complexity, and use this ranked list to create different readability levels that allow training models for multiple degrees of complexity.

Evaluation. We will develop novel metrics suitable for the joint translation and simplification task, specifically for the medical domain. For instance, we will combine traditional similarity-based metrics, such as BLEU and BERTScore, with readability indices. While the latter are more suitable for analysing documents, we plan to adapt them for sentence-level assessment using complex word identification approaches and heuristics.
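One possible shape of such a combined metric is sketched below. The syllable counter, the rescaling of the readability score and the interpolation weight are simplifying assumptions made for illustration; they are not the project's final metric.

```python
"""Illustrative sketch of combining a similarity-based MT score with a
sentence-level readability index. Assumptions: a crude vowel-group syllable
counter and a fixed interpolation weight."""

import re


def count_syllables(word: str) -> int:
    """Very rough English syllable estimate: count vowel groups."""
    return max(1, len(re.findall(r"[aeiouy]+", word.lower())))


def flesch_reading_ease(sentence: str) -> float:
    """Flesch Reading Ease applied to a single sentence (higher = easier)."""
    words = re.findall(r"[A-Za-z]+", sentence)
    syllables = sum(count_syllables(w) for w in words)
    return 206.835 - 1.015 * len(words) - 84.6 * (syllables / max(1, len(words)))


def combined_score(similarity: float, hypothesis: str, alpha: float = 0.7) -> float:
    """Interpolate an adequacy score in [0, 1] (e.g. BERTScore F1) with a
    readability score rescaled to [0, 1]."""
    readability = min(max(flesch_reading_ease(hypothesis), 0.0), 100.0) / 100.0
    return alpha * similarity + (1 - alpha) * readability


print(combined_score(0.83, "The vaccine protects most people from severe illness."))
```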
We\nwill then measure the correlation of our new met-\nrics with human judgements on adequacy and sim-\nplicity of automatic translations.\nAcknowledgements\nThis project was funded by the European Associ-\nation for Machine Translation (EAMT) under its\nprogramme “2021 Sponsorship of Activities”, and\nby the Centre for Advanced Computational Sci-\nences at Manchester Metropolitan University.\nReferences\nAnastasopoulos, Antonios, Alessandro Cattelan, Zi-\nYi Dou, Marcello Federico, Christian Federmann,\nDmitriy Genzel, Franscisco Guzm ´an, Junjie Hu,\nMacduff Hughes, Philipp Koehn, Rosie Lazar,\nWill Lewis, Graham Neubig, Mengmeng Niu, Alp\n¨Oktem, Eric Paquin, Grace Tang, and Sylwia Tur.\n2020. TICO-19: the translation initiative for COvid-\n19. In Proceedings of the 1st Workshop on NLP for\nCOVID-19 (Part 2) at EMNLP 2020 , Online, De-\ncember. Association for Computational Linguistics.\nFlesch, Rudolph. 1948. A new readability yardstick.\nJournal of Applied Psychology , 32(3):221.\nPapineni, Kishore, Salim Roukos, Todd Ward, and Wei-\nJing Zhu. 2002. Bleu: a method for automatic eval-\nuation of machine translation. In Proceedings of the\n40th Annual Meeting of the Association for Com-\nputational Linguistics , pages 311–318, Philadelphia,\nPennsylvania, USA, July. ACL.\nShardlow, Matthew and Fernando Alva-Manchego.\n2022. Simple TICO-19: A dataset for joint trans-\nlation and simplification of covid-19 texts. In Pro-\nceedings of the 13th Language Resources and Evalu-\nation Conference , Marseille, France, June. European\nLanguage Resources Association.\nSzigriszt Pazos, Francisco. 2001. Sistemas predic-\ntivos de legilibilidad del mensaje escrito: f ´ormula de\nperspicuidad . Universidad Complutense de Madrid,\nServicio de Publicaciones.\nZhang, Tianyi, Varsha Kishore, Felix Wu, Kilian Q.\nWeinberger, and Yoav Artzi. 2020. Bertscore: Eval-\nuating text generation with bert. In International\nConference on Learning Representations .", "main_paper_content": null }
{ "decision": "Unknown", "reviews": [] }
0
0
[]
[]
{ "id": "CWeXVVOu774", "year": null, "venue": "EAMT 2020", "pdf_link": "https://aclanthology.org/2020.eamt-1.57.pdf", "forum_link": "https://openreview.net/forum?id=CWeXVVOu774", "arxiv_id": null, "doi": null }
{ "title": "The Multilingual Anonymisation Toolkit for Public Administrations (MAPA) Project", "authors": [ "Eriks Ajausks", "Victoria Arranz", "Laurent Bié", "Aleix Cerdà-i-Cucó", "Khalid Choukri", "Montse Cuadros", "Hans Degroote", "Amando Estela", "Thierry Etchegoyhen", "Mercedes García-Martínez", "Aitor García Pablos", "Manuel Herranz", "Alejandro Kohan", "Maite Melero", "Mike Rosner", "Roberts Rozis", "Patrick Paroubek", "Arturs Vasilevskis", "Pierre Zweigenbaum" ], "abstract": "riks Ajausks, Victoria Arranz, Laurent Bié, Aleix Cerdà-i-Cucó, Khalid Choukri, Montse Cuadros, Hans Degroote, Amando Estela, Thierry Etchegoyhen, Mercedes García-Martínez, Aitor García-Pablos, Manuel Herranz, Alejandro Kohan, Maite Melero, Mike Rosner, Roberts Rozis, Patrick Paroubek, Artūrs Vasiļevskis, Pierre Zweigenbaum. Proceedings of the 22nd Annual Conference of the European Association for Machine Translation. 2020.", "keywords": [], "raw_extracted_content": "The Multilingual Anonymisation Toolkit for Public Administrations\n(MAPA) Project\n¯E. Ajausks y, V . Arranz z, L. Bi ´e*, A. Cerd `a-i-Cuc ´o*, K. Choukri z, M. Cuadros x,\nH. Degroote*, A. Estela*, T. Etchegoyhen x, M. Garc ´ıa-Mart ´ınez*,\nA. Garc ´ıa-Pablos x, M. Herranz*, A. Kohan*, M. Melero**, M. Rosner {,\nR. Rozis y, P. Paroubek\u0005, A. Vasil ¸evskis y, P. Zweigenbaum\u0005 \u0003\nyTildeferiks.ajausks,roberts.rozis,arturs.vasilevskis [email protected]\nzELDA/ELRAfarranz,choukri [email protected]\n*Pangeanic - PangeaMT fl.bie,a.cerda,h.degroote,a.estela,m.garcia,m.herranz,a.kohan [email protected]\n**Barcelona Supercomputing Center [email protected]\u0005Universit ´e Paris-Saclay, CNRS, LIMSI fpap,[email protected]\n{University of Malta [email protected] xVicomtechfagarciap,mcuadros,tetchegoyhen [email protected]\nAbstract\nWe describe the MAPA project, funded un-\nder the Connecting Europe Facility pro-\ngramme, whose goal is the development of\nan open-source de-identification toolkit for\nall official European Union languages. It\nwill be developed since January 2020 until\nDecember 2021.\n1 Introduction\nDe-identification may provide the means to share\nlanguage data while also protecting private or sen-\nsitive data by spotting then deleting, obfuscating,\npseudoymising or encrypting personally identifi-\nable information. De-identification is typically\nperformed for the purpose of protecting an individ-\nual’s private activities while maintaining the use-\nfulness of the gathered data for research and de-\nvelopment purposes.\nThe Multilingual Anonymisation toolkit for\nPublic Administrations (MAPA) project aims to\nleverage natural language processing tools to de-\nvelop an open-source toolkit for effective and re-\nliable text de-identification, focusing on the med-\nical and legal domains. The project is funded by\nthe Connecting Europe Facility (CEF) programme,\nunder grant NoA2019/1927065, and will run from\nJanuary 2020 until December 2021.\nThe toolkit developed by the MAPA partners\n(Pangeanic1, Tilde2, CNRS3, ELDA4, Univer-\n\u0003All authors have contributed equally to this work.\n\u0003c\r2020 The authors. 
This article is licensed under a Creative Commons 3.0 licence, no derivative works, attribution, CC-BY-ND.

2 Approach
At its core, the MAPA anonymisation toolkit will rely on Named Entity Recognition and Classification (NERC) techniques based on neural networks and deep learning. The latest deep learning architectures and the availability of pre-trained multilingual language models, such as BERT (Devlin et al., 2019), have pushed the state of the art in NERC to new levels of performance.
In addition, thanks to the transfer learning capabilities shown by this type of deep learning models, new systems can be trained using smaller datasets of manually labelled data, and the knowledge acquired for a given domain or language can be re-used in a cross-domain or cross-language setting (García-Pablos et al., 2020). MAPA will leverage the most innovative technology to provide robust models for the 24 official European languages, trained to detect named entities that involve sensitive information, depending on the application domain (e.g., medical, legal).
MAPA will contain a general NERC model that will be further fine-tuned to detect domain-specific entities. The system will then be tailored to fulfil the specific needs of each use case. Since some severely under-resourced languages such as Maltese, one of the official EU languages, are not included in the pre-trained multilingual BERT model, a separate solution will be developed in this case.
The deep learning NERC approach will be complemented with other configurable mechanisms, such as pattern detection based on regular expressions, to deal with pattern-based entities: email addresses, ID numbers, telephone numbers, bank accounts, etc. It will also be capable of using user-defined dictionaries to detect specific uses of entity names known in advance.
All these subsystems will be seamlessly combined into an integrated system that will provide a powerful and customisable de-identification engine. For each EU language, a separate docker image will be published, which will take text as input and return it in de-identified form.

3 Use cases
The project includes two specific deployment cases for public institutions in an EU country: one for the health domain and one for the legal domain. Both domains were selected given their strong de-identification requirements prior to any sharing of the data.
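The pattern-based layer described in Section 2 can be sketched in a few lines. The regular expressions, placeholder labels and dictionary handling below are simplified examples chosen for illustration; they are not the MAPA toolkit's actual configuration.

```python
"""Illustrative sketch of regex- and dictionary-based de-identification for
pattern-like entities. Patterns and labels are simplified assumptions."""

import re

PATTERNS = {
    "EMAIL": re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.-]+\b"),
    "PHONE": re.compile(r"\+?\d[\d ()/-]{7,}\d"),
    "IBAN": re.compile(r"\b[A-Z]{2}\d{2}[A-Z0-9]{10,30}\b"),
}


def deidentify(text: str, dictionary=None) -> str:
    """Replace pattern-based entities and known names with placeholder tags."""
    for label, pattern in PATTERNS.items():
        text = pattern.sub(f"[{label}]", text)
    for name in (dictionary or []):  # user-defined dictionary of known entities
        text = re.sub(re.escape(name), "[PERSON]", text, flags=re.IGNORECASE)
    return text


sample = "Contact Jane Doe at [email protected] or +356 2340 2340."
print(deidentify(sample, dictionary=["Jane Doe"]))
```

In the full toolkit such rules would run alongside the neural NERC models, with both outputs merged before the text is returned in de-identified form.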
In each deployment case, the system will\nbe tailored to the specific needs of the relevant in-\nstitution.\nL-Universit `a ta’ Malta (University of Malta)\nwill take care of the deployment case for the\nMAPA toolkit in Malta. In Spain, the deployment\ncase will be executed under the umbrella of the\nLanguage Technology Plan9, which is already run-\nning actions in the Health sector in close collabo-\nration with the Ministry and regional institutions,\nand is willing to expand its activities to the Legal\npublic sector.\n4 Data Collection\nMAPA will count on a data collection activity to\nprovide the necessary training and testing data for\nthe toolkit development. Data is currently being\n9https://www.plantl.gob.es/tecnologias-\nlenguaje/PTL/Paginas/plan-impulso-tecnologias-\nlenguaje.aspxidentified and collected for the 24 relevant Euro-\npean languages. One million sentence corpora are\ntargeted per language, prioritising both medical\nand legal data, but also containing some general-\nlanguage data for training. Testing will make\nuse of sample data sets which will be manually\nannotated with named entities addressing the de-\nidentification needs of the covered domains in the\n24 languages. Specific annotation guidelines are\ncurrently being defined for that purpose.\nThe performance of the produced system will\nbe evaluated for each language on held-out sample\ndata sets for each of the two prioritized domains.\nThis evaluation will inform use case designers and\nusers about the expected performance of the base\nsystem so that they can assess their need for further\nadaptation.\n5 Conclusion\nThe MAPA project will develop an open-source\nanonymisation toolkit for all official EU lan-\nguages, which will support public administrations\nin sharing their data while complying with the\nGDPR requirements. The toolkit will be publicly\navailable and particularly targeted to public admin-\nistrations in the health and legal domains, as a re-\nsult of the specific use cases addressed during the\ndevelopment of the project.\nReferences\nDevlin, Jacob, Ming-Wei Chang, Kenton Lee, and\nKristina Toutanova. 2019. BERT: Pre-training of\nDeep Bidirectional Transformers for Language Un-\nderstanding. In Proceedings of the 2019 Conference\nof the North American Chapter of the Association for\nComputational Linguistics: Human Language Tech-\nnologies, Volume 1 (Long and Short Papers) , pages\n4171–4186.\nGarc ´ıa-Pablos, Aitor, Naiara Perez, and Montse\nCuadros. 2020. Sensitive Data Detection and Clas-\nsification in Spanish Clinical Text: Experiments with\nBERT. In Proceedings of the 12th International\nConference on Language Resources and Evaluation\n(LREC’20) .", "main_paper_content": null }
{ "decision": "Unknown", "reviews": [] }
0
0
[]
[]
{ "id": "I7mC_rkijl3Y", "year": null, "venue": "EAMT 2014", "pdf_link": "https://aclanthology.org/2014.eamt-1.3.pdf", "forum_link": "https://openreview.net/forum?id=I7mC_rkijl3Y", "arxiv_id": null, "doi": null }
{ "title": "Combining bilingual terminology mining and morphological modeling for domain adaptation in SMT", "authors": [ "Marion Weller", "Alexander M. Fraser", "Ulrich Heid" ], "abstract": null, "keywords": [], "raw_extracted_content": "Combining Bilingual Terminology Mining and Morphological Modeling\nfor Domain Adaptation in SMT\nMarion Weller1;2Alexander Fraser2Ulrich Heid3\n1Institut f ¨ur Maschinelle2Centrum f ¨ur Informations-3Institut f. Informationswissen-\nSprachverarbeitung und Sprachverarbeitung schaft u. Sprachtechnologie\nUniversit ¨at Stuttgart LMU M ¨unchen Universit ¨at Hildesheim\[email protected] [email protected] [email protected]\nAbstract\nTranslating in technical domains is a well-\nknown problem in SMT, as the lack of par-\nallel documents causes significant problems\nof sparsity. We discuss and compare differ-\nent strategies for enriching SMT systems\nbuilt on general domain data with bilingual\nterminology mined from comparable cor-\npora. In particular, we focus on the target-\nlanguage inflection of the terminology data\nand present a pipeline that can generate pre-\nviously unseen inflected forms.\n1 Introduction\nAdapting statistical machine translation (SMT) sys-\ntems to a new domain is difficult when the domain\nlacks sufficient amounts of parallel data, as is the\ncase in many technical or medical domains. SMT\nsystems trained on general language (e.g. govern-\nment proceedings) face data-sparsity issues when\ntranslating texts from such domains, particularly if\ntranslating into a morphologically rich language.\nIn this paper, we compare different strategies to\nadapt an EN-FR SMT system built on Europarl to\na technical domain (wind energy) by making use\nof term-translation pairs mined from comparable\ndomain-specific corpora. In a first series of experi-\nments, we study two methods of integrating bilin-\ngual terminology into a phrase-based SMT system:\nadding term translation pairs via XML mark-up and\nas pseudo-parallel training data. In particular, we\ncompare the effects of integrating translation candi-\ndates for multi-word terms vs. single-word terms\nand show that the use of single-word terms can be\nc\r2014 The authors. This article is licensed under a Creative\nCommons 3.0 licence, no derivative works, attribution, CC-\nBY-ND.harmful. Using bilingual terminology in the form\nof pseudo-parallel data significantly outperforms\nthe the baseline.\nHowever, it also becomes evident that termi-\nnology handling requires morphological modeling:\nwhen the integrated term-translation pairs are re-\nstricted to the inflected forms seen in the (domain-\nspecific) data, this ignores the fact that other forms\nmight be needed when translating. Furthermore,\ntranslation-relevant morphological features (e.g.\nnumber) must be maintained during the translation\nprocess. As a way to address these problems, we\npresent a morphology-aware translation system that\ntreats inflection as a target-side generation problem.\nCombining the integration of term-translation pairs\nand the modeling of target-side morphology allows\nfor the generation of unseen word forms and the\npreservation of translation-relevant features. In the\nsecond part of the paper, we describe and discuss a\nnovel pipeline for morphology-aware integration of\nbilingual terminology. 
While this system’s improve-\nment over the baseline is not statistically significant,\nour analysis highlights the need for explicit mor-\nphological modeling, which, as far as we know, has\nnot been addressed previously.\nIssues in translating out-of-domain data.\nWhen translating texts of domains that are not well\nrepresented by the training data, there are two main\nproblems: (i) data sparsity: many domain-specific\nwords do not appear in the parallel data and\nthus cannot be translated (e.g. the English term\ntorque which does not occur in Europarl), and\n(ii) polysemy: words can have different meanings\nwhen used in general vs. specialized language. For\nexample, the word boss means either manager or\nrefers to a rivet-type object. In a general language\ntext, the meaning of manager is predominant,\n11\nwhereas in a text of a technical domain, that sense\nis less likely to be correct. Because a translation\nmodel trained on general language data learns\nthat boss!manager is a good translation, this\ntranslation is likely to be used when translating\ndata from a technical domain. In order to make\npreviously unknown terms available and to model\ndomain-specific preferences, we enrich the SMT\nsystem with domain-specific term-translation pairs\nthat are not contained in the general language\nparallel data.\nModeling morphology. Another type of data\nsparsity occurs in translations to languages with\nrich (noun) inflection, as the parallel training data\nis unlikely to cover the full inflection paradigms\nof all words. As a result, some inflected forms are\nunavailable to the SMT system. This problem in-\ncreases considerably when translating terms which\nare not well represented in the parallel training data,\nas is the case in the domain-adaptation scenario pre-\nsented in this work. Modeling target-side morphol-\nogy helps to reduce this kind of data-sparsity: we\npresent a two-step approach, in which we separate\nthe translation process from target-side inflection\nby first translating into a lemmatized representation,\nwith a post-processing component for generating\ninflected forms. This simplifies the translation task,\nas information concerning only the target language\nhas been removed. Also, this two-step approach\nallows us to generate forms which are not contained\nin the parallel data, which is of particular interest for\ndomain-adaptation scenarios, where the full inflec-\ntional paradigm of term-translation pairs might not\neven be covered by the domain-specific data used\nfor term mining. Furthermore, this setup allows us\nto specifically indicate how a term in a given con-\ntext should be translated. For example, it provides\nthe means to guarantee that a source-language term\nin plural is translated by the corresponding target-\nlanguage term in plural, regardless of whether the\nrequired inflected form occurs in the training data.\nAlthough there are exceptions such as furniture SG\n!meubles PL, we believe they play a negligible\nrole when translating under-resourced domains.\n2 Related work\nThere has been considerable interest in mining\ntranslations directly from comparable corpora. 
A\nfew representative examples are (Daille and Morin,\n2005; Haghighi et al., 2008; Daum ´e III and Ja-\ngarlamudi, 2011; Prochasson and Fung, 2011), allof which mine terms using distributional similarity.\nThese approaches tend to favor recall over precision.\nIn contrast, we use a high-precision method consist-\ning in recognizing term candidates by means of part-\nof-speech patterns with an alignment method rely-\ning on dictionary entries (Weller and Heid, 2012).\nA second strand of relevant work is the inte-\ngration of terms into SMT decoding. H ´alek et al.\n(2011) integrated named entity translations mined\nfrom Wikipedia using the XML mode of Moses,\nwhich creates new phrase table entries dynami-\ncally. Pinnis and Skadins (2012) also studied min-\ning named entities, as well as using a high quality\nterminological database, and added these resources\nto the parallel training data. We compare these two\noptions (XML vs. added parallel data) and show\nthat adding the terms to the parallel training data\nleads to better results.\nTo deal with the issue of obtaining the proper\ninflection of mined terms, we implemented a\nmorphology-aware English to French translation\nsystem that separates the translation task into two\nsteps (translation + inflection generation), following\nToutanova et al. (2008) and Fraser et al. (2012).\nFormiga et al. (2012) use a component for target-\nside morphological generation to translate news\nand web-log data. In contrast to our work, they do\nnot deal with nominal morphology, but model verb\ninflection: this is important for web-log data, as\nsecond-person verb forms rarely appear in Europarl-\ntype training data. Wu et al. (2008) use dictionary\nentries for adapting a system trained on Europarl\nto news, but without applying morphological mod-\nelling to their EN-FR system. Furthermore, news\nand also web-log data are considerably more similar\nto Europarl than technical data.\nOur main contribution is that we show how to\ncombine three areas of research: bilingual term\nmining, using terms in SMT, and generation of\ninflection for SMT. We describe a novel end-to-end\nmorphology-aware solution for using bilingual term\nmining in SMT decoding.\n3 Bilingual terminology mining\nIn contrast to parallel corpora, which are difficult\nto obtain in larger quantities, comparable corpora\nof a particular domain are relatively easy to obtain.\nComparable corpora are expected to have similar\ncontent and consequently similar domain-specific\nterms in both languages and thus constitute a suit-\nable basis for the mining of term-translation pairs.\n12\nFor both source and target-language, term candi-\ndates are extracted based on part-of-speech patterns,\nfocusing on nominal phrases. The resulting sets of\nterm candidates are then aligned.\nWe use all available domain-specific training\ndata (cf. section 7) for monolingual term extraction\non the target language. Source language terms are\nonly extracted for the input data to the SMT sys-\ntem (tuning/test set) because our methods for term\nintegration are restricted to terms contained in the\nsentences to be translated.\nTerm alignment. 
The task of term alignment\nconsists in finding the equivalent of a source lan-\nguage term in a set of target language terms.\nOne method is pattern-based compositional term-\nalignment : all components of a multi-word term\nare first translated individually using a (general lan-\nguage) dictionary, and then recombined according\nto handcrafted translation patterns such as\n(EN)noun1 noun2$(FR)noun2 denoun11.\nAs the recombination of individual translations\nleads to over-generation, the generated translation\ncandidates are filtered against the list of extracted\ntarget-language terms. A principal assumption is\nthat the term pairs are semantically transparent and\nof a similar morpho-syntactic structure. The exam-\nple for the term glass fibre illustrates the process:\n(1) individual translation:\nnoun1: glass!verre (glass ),\nloupe (magnifying glass )\nnoun2: fibre!fibre\n(2) recombination2of translations:\nfibre de verre ,fibre de loupe\n(3) filtering against target terms:\nfibre de verre ,fibre deloupe\nSource and target terms are not necessarily of the\nsame word class; such shifts are dealt with by sim-\nple morphological rules, as shown by the term\nenergy Nyield assessment\n!estimation du rendement ´energ ´etique ADJ\n(assessment of energetical ADJyield )\nAdding the entry energy!´energ ´etique ADJto the\ndictionary allows to cover morphological variation\nbetween source and target terms.\nFor the alignment, terms are lemmatized and\nneed to be mapped to the respective inflected forms\nbefore being integrated into the SMT system. The\n1de: French preposition meaning of.\n2Working with translation patterns, non-content words such as\nprepositions can be easily inserted in this step.MWT SWT\ntuning set total 440 1014\ntest set total 442 1015\ntuning set not in phrase-table 156 18\ntest set not in phrase-table 192 15\nTable 1: Number of terms (types) for which one or\nmore translation candidates were found.\ntranslation probabilities are computed based on the\nrelative frequencies of the inflected forms of all\ntranslation possibilities in the domain-specific data:\nEN FR freq prob\nhub heighthauteur du moyeu 14 87.5\nhauteur de moyeu 2 12.5\nTable 1 gives an overview of the number of ob-\ntained translation pairs for terms extracted from the\ntest/tuning data (cf. section 7); we differentiate be-\ntween single-word terms (SWTs) and multi-word\nterms (MWTs). This is motivated by the fact that\nMWTs provide more context in step (3) and are\ntherefore more likely to be correctly aligned. In the\ncase of SWTs, every translation listed in the dictio-\nnary can be output as a valid alignment provided it\noccurred in the corpus, regardless of context. Ta-\nble 1 also shows the amount of term-translation\npairs not covered by the phrase-table: in the case\nof MWTs, a reasonable amount of term-translation\npairs are new to the system, whereas the number of\nnew SWTs is very low in comparison to the number\nof found SWT term-translation pairs.\nThe pattern-based compositional term alignment\ntends to favor precision over recall. This general\noutcome is observed in earlier work (Weller and\nHeid, 2012)3; we assume that the findings for DE-\nEN largely also apply to our EN-FR alignment\nscenario. Moreover, it is not guaranteed that the\ntranslation of a source term occurs in the target-\nlanguage data when working with comparable cor-\npora. 
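The compositional alignment procedure illustrated with the glass fibre example above can be sketched compactly. The toy dictionary, the single EN "noun1 noun2" to FR "noun2 de noun1" pattern and the hard filtering step are simplifications of the actual pattern set and morphological rules.

```python
"""Illustrative sketch of pattern-based compositional term alignment for
two-word English noun-noun terms. Dictionary and pattern are toy assumptions."""

from itertools import product

DICTIONARY = {
    "glass": ["verre", "loupe"],
    "fibre": ["fibre"],
}


def align_noun_noun(en_term, target_term_list):
    """Generate FR candidates for 'noun1 noun2' and keep only those that were
    actually extracted from the target-language domain corpus."""
    noun1, noun2 = en_term.split()
    candidates = [
        f"{t2} de {t1}"
        for t1, t2 in product(DICTIONARY.get(noun1, []), DICTIONARY.get(noun2, []))
    ]
    return [c for c in candidates if c in target_term_list]


extracted_fr_terms = {"fibre de verre", "hauteur du moyeu"}
print(align_noun_noun("glass fibre", extracted_fr_terms))  # ['fibre de verre']
```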
Another problem are structural mismatches of\nthe source term and its target-language equivalent.\nWhile the translation occurs in the target language\nterm list, it is of a different morpho-syntactic struc-\nture in a way that is not captured by the patterns\nand morphological rules. Finally, lack of dictio-\nnary coverage is also responsible for not finding\ntarget-language equivalents. We focus on integrat-\ning moderate amounts of good-quality term pairs,\nmotivated by our method for integrating term pairs:\nour results indicate that the SMT-system is sensitive\nto incorrect translations, particularly for SWTs.\n3We use alignment patterns adapted from this work.\n13\nSMT-output pred. gen. post- gloss\n+ stem-markup feat. forms proc.\nle<+DET>[ART] M.Sg le l’ the\nexc`es<M.Sg>[N] M.Sg exc`es exc`es excess\nde[P] – de d’ of\n´energie< F.Sg>[N] F.Sg ´energie ´energie energy\npeut[VFIN-peut] – peut peut can\nˆetre[VINF] – ˆetre ˆetre be\nvendre[VPP] M.Sg vendu vendu sold\n`a[P] – `a au to\nle<+DET>[ART] M.Sg le the\nr´eseau< M.Sg>[N] M.Sg r´eseau r´eseau grid\nTable 2: Processing steps for the EN input sentence\n[ ... ] excess energy can be sold back to the grid .\n4 Inflection prediction system\nTo build the morphology-aware system, the target-\nside data (parallel and language model data) is trans-\nformed into a stemmed format, based on the annota-\ntion of a morphological tagger (Schmid and Laws,\n2008). This representation contains translation-\nrelevant feature markup: nouns are marked with\ngender (considered part of the stem) and number .\nAssuming that source-side nouns are translated by\nnouns with the same number value, this feature is\nindirectly determined by the source-side input. The\nnumber markup is thus needed to ensure that the\nsource-side number information is transferred to\nthe target side. For a better generalization, we split\nportmanteau prepositions into article and preposi-\ntion ( au!`a+le :to+the ).\nFor predicting the morphological features of the\nSMT output ( number andgender ), we use a linear\nchain CRF (Lavergne et al., 2010). In the predic-\ntion step, the values specified in the stem-markup\n(number andgender on nouns) are propagated over\nthe rest of the phrase, as illustrated in column 2 of\ntable 2. Based on the stems and the morphological\nfeatures, inflected forms can be generated using a\nmorphological tool for analyzing and generating\ninflected forms (cf. section 7), as illustrated in col-\numn 3. In order to generate correct French surface\nforms, a post-processing step is required, includ-\ning the re-insertion of apostrophes and portmanteau\nmerging ( `a+le!au), cf. column 4.\n5 Integration of term-translation pairs\nIn this section, we compare two methods to inte-\ngrate bilingual terminology, using a standard SMT-\nsystem (to be referred to as the “inflected” system):\nusing XML-markup and in the form of pseudo-\nparallel data. In section 6, we discuss the integra-\ntion of terms into the “morphology-aware” system.Using XML input to add translation options.\nOne way to integrate term-translation pairs into\nan SMT system is to list translation options with\ntheir translation probabilities for a word or word\nsequence in the input sentence by means of XML-\nmarkup. This approach has been applied by H ´alek\net al. (2011) (cf. section 2) to translations of named\nentities mined from Wikipedia in an English-Czech\nSMT system. 
In contrast, we integrate translation\npairs of nominal phrases: this requires modelling\nfeatures that are dependent on the source-side (e.g.\nnumber) which is not to the same extent necessary\nfor names. Named entities are in many cases easier\nto deal with than terminology, as they are usually\nthe same on the source side, even though their in-\nflection can vary, e.g. in the form of case-markers,\nwhich depend on the target-language. This means\nthat source-side information plays a negligible role,\nwhereas for nominal phrases, number information\n(as contained in the stem markup) is important for\nthe generation of inflected forms.\nFor the integration of term translation pairs, po-\ntential source terms are identified in the input sen-\ntence using the same pattern-based approach as\nfor monolingual term identification (cf. section 3).\nLonger terms are preferred in the case of several an-\nnotation possibilities in order to provide the system\nwith long translations, but also to avoid that phrasal\nunits are interrupted: [wind Nenergy N] site Nvs.\n[wind Nenergy NsiteN].\nWe compare the effects of integrating multi-word\nand single-word terms vs. only multi-word terms.\nAs a variant, only term-translation pairs of which\nthe source-side term does not occur in the phrase\ntable are integrated: assuming that the translation\nmodel already has more reliable statistics for terms\nin the phrase-table, only term-translation pairs that\nare not covered by the parallel data are used. Partic-\nularly for SWTs, this drastically reduces the amount\nof term-translation pairs. When restricting the in-\ntegration to “new” terms, however, the problem\nof polysemy (e.g. boss!manager orrivet-type\nobject ) is not resolved. In such cases, it is even\nlikely that the wrong sense, i.e. the general lan-\nguage meaning, is output by the translation system.\nNevertheless, this variant leads to the best results.\nAs term alignment is based on lemmas, a map-\nping between surface forms and lemmas is needed:\nfirst, inflected EN surface forms are projected to\ntheir lemmas, which are then aligned to FR lemmas.\nThen, the aligned target-side lemmas are mapped\n14\nInput clean the <term translation=’’fer au rotor||pale de rotor||pales de rotor\n||pale du rotor||pales du rotor’’ prob=’’0.0385||0.0385||0.2692||0.1153||\n0.5384’’> rotor blades </term> with a mild soap and water .\nBaseline nettoyage du rotor des lames de savon avec une l ´eg`ere et de l’ eau .\ncleaning of the rotor of the blades (of a knife) of soap with a mild and water.\nWith terms nettoyer les pales du rotor avec un savon mod ´er´ee et de l’ eau .\ncleaning the blades of the rotor with a moderate soap and water.\nReference Nettoyez les pales du rotor au savon doux et `a l’eau.\nTable 3: Adding translation options for the term rotor blades to the input sentence.\nto the respective inflected forms observed in the\ndomain-specific corpus. As a result, some of the\ninflected forms can be incorrect in terms of number\nby mapping the lemma to both singular and plural\nforms, regardless of the input term. Filtering for\nnumber in this step is useful only to a limited ex-\ntent, as it will prevent a translation entirely if the\ninflected forms of the required number value do not\noccur in the domain-specific data. 
While a good\ntranslation in the wrong number is clearly better\nthan no translation, it is still desirable to have the\npossibility to model number : we consider this a\nstrong motivation for a morphology-aware integra-\ntion of terminology.\nAnother crucial point is the language model data\nwhich needs to contain the target-language terms\noffered to the translation model. As all target lan-\nguage terms are extracted from a domain-specific\ncorpus, this data is used in the language model.\nThe example in table 3 illustrates how the sys-\ntem benefits from the translations for the term rotor\nblades in the input sentence: while FR pale (blades\non a wind mill) occurs once in the parallel data,\nthere is no alignment to EN blade . As a result,\nblades is translated as lames (blades on a knife).\nProviding the translation options leads to the cor-\nrect translation of blades!pales in the context of\nthe term rotor blades . In addition, the system with\nterminology information produces a well-formed\nFrench sentence in contrast to the meaningless out-\nput of the baseline system, because the correct\ntranslation allows for matching a plausible word\nsequence with the language model.\nAdding terms to parallel data. In our experi-\nments, adding translation options via XML markup\ndid not work as well as hoped for; this is in line\nwith the findings of H ´alek et al. (2011): adding\ntranslation pairs directly into the SMT system can\nbe too intrusive, causing more harm than benefit.\nWe tested a different approach: the term-translation\npairs are added as a pseudo parallel corpus to theparallel training data. Adding each term-translation\npair once is not likely to help if the word is ambigu-\nous and already occurs in the parallel data with its\ngeneral language translation. Instead, term trans-\nlation pairs are added according to their frequency\nin the target-side corpus. As before, all observed\ninflected forms are listed as possible translations.\n6 Morphology-aware integration of\nterm-translation pairs\nThe setup described in the previous sections has\ntwo shortcomings: the data might not provide the\nfull inflection paradigm of the terms, and it is not\npossible to model features such as number : integrat-\ning stemmed terms to the inflection prediction sys-\ntem allows us to handle these two problems as the\nnumber information of a source-term can simply be\ntransferred as number markup to the stemmed trans-\nlation candidate and specific forms not occurring\nin the data used for term mining can be generated\nusing a morphological resource.\nFor the terminology integration into a mor-\nphology-aware translation system, we opted for the\nvariant of adding pseudo parallel data to the training\ndata of the SMT system as this led to the best re-\nsults in the previous experiments. First, the aligned\nterms are transferred to the stemmed representation.\nFor the number markup, the source-side is tagged\nand the number values are transferred to the corre-\nsponding stems based on the alignment patterns (cf.\nsection 3). In this step, the number markup in the\ngenerated target-side text is determined by transfer\nfrom the source-side. In comparison, the number\nmarkup in the “original” parallel data (Europarl) is\ngiven by the target-side, i.e. the parse-annotation.\nGenerating target phrases depending on the re-\nquirements of the source-side, i.e. creating unseen\nforms, can lead to stem+markup combinations that\ndo not occur in the data used to build the language\nmodel. 
Words not contained in the language model score very badly during decoding and are thus effectively not available to the SMT system. In order to make all stems accessible, the generated pseudo-parallel data is added to the language model data.
An alternative way to avoid the generation of forms not represented in the language model consists in foregoing number markup. Instead of keeping it through the translation in the form of stem markup, number information can be reinstated in the feature prediction step using source-side features. However, this creates two new problems: first, the representation without number markup loses discriminatory power (see also the experiments on re-inflecting surface forms, "Method 1", in Toutanova et al. (2008)). For example, there is no way to guarantee subject-verb agreement without number information on nouns. The second problem is that parallel domain-specific data is needed to train the models for feature prediction. While we believe that removing number markup in the translation step is a sounder way to deal with target-side morphology in this application, we leave this extension of our model to future work due to the practical problems that arise with this.
7 Data and resources
Our experiments are carried out on an EN-FR standard phrase-based Moses system (http://www.statmt.org/moses) which is adapted to the domain of wind energy. As a basis for terminology mining, we compiled a target-language corpus for that domain. This included documents obtained by automatic crawling (de Groc, 2011) and manually obtained data from various websites. In total, the corpus consists of 161,367 sentences (4,136,751 words). For the tuning/test data, we manually collected and sentence-aligned parallel texts from various internet resources, including manuals for setting up and maintaining wind energy towers, multi-lingual scientific journal articles and data about regulations and administrative aspects. The resulting 1290 parallel sentences were evenly divided into test/tuning sets.
The parallel training data for the EN-FR SMT system consists of 2,159,501 sentences (Europarl and News data from the 2013 WMT shared task). For the language model, we used a combination of the FR part of the parallel data and the wind energy corpus. As the domain-specific corpus is considerably smaller, we built individual language models for each corpus and interpolated them using weights optimized on the tuning data, following the approach of Schwenk and Koehn (2008).
For the feature prediction, we used the Wapiti toolkit (Lavergne et al., 2010) to train CRFs on combinations of the wind corpus and the FR part of the parallel data. The CRF has access to the basic features stem and POS tag as well as gender and number within a window of 5 positions to each side of the current word.
The morphological analysis of the French training data is obtained using RFTagger, which is designed for annotating fine-grained morphological tags (Schmid and Laws, 2008). For generating inflected forms based on stems and morphological features, we use an extended version of the finite-state morphology FRMOR (Zhou, 2007). FRMOR is a morphology tool similar to SMOR (Schmid et al., 2004), which allows inflected word forms to be analyzed and generated.
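The language-model combination described above (one model per corpus, interpolation weights optimized on the tuning data) can be illustrated with a small sketch. This is not the authors' actual setup, which relied on standard LM toolkits; here p_domain and p_general stand for whatever n-gram probability functions such a toolkit exposes:

```python
import math
from typing import Callable, List

def interpolate(p_domain: Callable[[str], float],
                p_general: Callable[[str], float],
                lam: float) -> Callable[[str], float]:
    """Linear interpolation of a domain-specific and a general language model."""
    return lambda event: lam * p_domain(event) + (1.0 - lam) * p_general(event)

def tune_lambda(p_domain, p_general, dev_events: List[str]) -> float:
    """Pick the interpolation weight that minimises development-set perplexity.
    Each 'event' is an n-gram together with its history."""
    best_lam, best_ppl = 0.5, float("inf")
    for lam in [i / 20 for i in range(1, 20)]:
        p = interpolate(p_domain, p_general, lam)
        log_sum = sum(math.log(max(p(e), 1e-12)) for e in dev_events)
        ppl = math.exp(-log_sum / len(dev_events))
        if ppl < best_ppl:
            best_lam, best_ppl = lam, ppl
    return best_lam
```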
The term alignment requires a general language dictionary (with entries from www.dict.cc and www.freelang.net), from which we use the 36,963 1-to-1 entries.
8 Experiments and results
We present results for the integration of bilingual terminology into an inflected system and a morphology-aware translation system.
Integrating terminology into the inflected system. An easy way to adapt an SMT system to a new domain consists in adding language model data of that domain. This does not help with the problem of out-of-vocabulary words, but it can enhance translations with low probabilities and provide plausible contexts for the generated sentences. The systems in row 1 in table 4 show that adding domain-specific data leads to a considerable increase in BLEU; all further systems in table 4 use this enlarged language model and are compared to baseline b.

Table 4: Results for the integration of terminology into an inflected EN-FR translation system (*: significantly better than baseline b at the 0.05 level).
  system                               BLEU
1 Baseline a: general LM               18.93
  Baseline b: + domain-spec. LM        21.59
2 XML-markup (MWT + SWT)               20.56
  XML-markup (MWT)                     20.71
3 XML-markup-filt. (MWT + SWT)         21.68
  XML-markup-filt. (MWT)               21.57
4 Added parallel (MWT + SWT)           21.68
  Added parallel (MWT)                 21.87
  Added parallel (MWT + filt. SWT)     22.03*
  Added parallel filt. (MWT + SWT)     21.96*

Moses' XML mode offers two possibilities: forcing the SMT system to use the given translations (exclusive) or allowing for an optional usage (inclusive). As preliminary experiments, as well as the findings of Hálek et al. (2011), showed that the inclusive setting leads to better results, we only report BLEU scores for this variant (particularly for SWTs, forcing the system to use the provided translations with the exclusive setting can hurt performance considerably, as it goes against Moses' tendency to use long translation units). We compare two versions: providing only the translations of multi-word terms (MWTs) and providing the translations of both multi-word and single-word terms (SWTs). This is motivated by the assumption that adding translations of single words is likely to be more harmful, as it is to some extent incompatible with Moses' tendency to prefer longer phrases.
The translation probabilities of term-translation pairs given in the XML markup are usually considerably higher than the ones in the phrase table and might thus have an undue advantage, particularly when assuming that the statistics in the phrase table are more reliable for terms that are not restricted to the domain. Furthermore, the generated translations of multi-word terms are more likely to be correct, as they provide more context in the alignment step. While the system with only MWTs is slightly better, both variants are worse than the baseline (row 2 in table 4). Restricting the added term-translation pairs to those where the source phrase does not occur in the phrase table helps, but does not outperform the baseline (row 3 in table 4). Here, using both MWTs and SWTs leads to a slightly better score, presumably because the added SWTs are unknown to the system and even a translation by a one-word phrase is beneficial.
Integrating bilingual terminology in the form of pseudo-parallel data leads to the best results (row 4 in table 4).
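Producing the pseudo-parallel variant amounts to writing each term-translation pair out as a tiny sentence pair, repeated according to the frequency of the target form in the domain corpus, with every observed inflected target form listed as a possible translation. A minimal sketch, in which the file names and the frequency cap are assumptions rather than details taken from the paper:

```python
from typing import Dict, List, Tuple

def write_pseudo_parallel(term_pairs: List[Tuple[str, str]],
                          tgt_freq: Dict[str, int],
                          src_path: str, tgt_path: str,
                          cap: int = 100) -> None:
    """Emit one pseudo sentence pair per (source term, inflected target form),
    repeated according to the target-side corpus frequency."""
    with open(src_path, "w", encoding="utf-8") as fs, \
         open(tgt_path, "w", encoding="utf-8") as ft:
        for src_term, tgt_form in term_pairs:
            n = min(tgt_freq.get(tgt_form, 1), cap)   # the cap is an assumption
            for _ in range(n):
                fs.write(src_term + "\n")
                ft.write(tgt_form + "\n")

# toy usage
pairs = [("rotor blades", "pales du rotor"), ("rotor blades", "pale du rotor")]
write_pseudo_parallel(pairs, {"pales du rotor": 14, "pale du rotor": 3},
                      "terms.en", "terms.fr")
```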
Again, restricting the data to MWTs is slightly better than using all term-translation pairs. The score for the MWT-only system (21.87) is on the verge of being statistically significantly better than baseline b. Adding single-word translations which do not occur in the phrase-table leads to a statistically significant improvement (22.03), as does filtering both SWTs and MWTs (21.96).
Integrating terminology into the morphology-aware system. The score of the morphology-aware system (21.54) is comparable to that of the inflected system (21.59), as shown in table 5.

Table 5: Adding pseudo-parallel data to the training data for a morphology-aware system. (a): LM from the baseline system; (b): MWT translations added to the LM data; (c): MWT+SWT translations added to the LM data.
  system                    CRF trained on   BLEU
1 Baseline                  wind+news        21.47
                            wind+europarl    21.54
2 MWT (a)                   wind+europarl    21.77
3 MWT + SWT (c)             wind+europarl    21.11
4 MWT + filt. SWT (b)       wind+europarl    21.74
5 filt. (MWT + SWT) (b)     wind+europarl    21.48

The importance of in-domain training data for the CRF is illustrated by the results obtained when training the CRF on wind+news (318,112 sentences) and on wind+europarl (2,161,367 sentences): even though the second training set is considerably larger, there is basically no gain in BLEU. Considering this outcome, we assume that more in-domain training data for the CRF would lead to better overall results.
In order to make better use of the in-domain training data, singletons were replaced by their part-of-speech tag (experiments with replacing out-of-vocabulary words by a special tag were also not effective in terms of BLEU). However, the stem feature considerably contributes to the prediction result: this is illustrated by the results in table 5, where a CRF trained on a combination of Europarl and wind energy data is only marginally better in terms of BLEU than a system trained on a much smaller amount of general language data and data of the wind energy domain.
It is important to keep in mind that the CRF is trained on fluent data whereas the SMT output is heavily disfluent. As a result, there is a mismatch between ill-formed translation output and the well-formed data used to train the CRF; the gap between training data and the text for which features are to be predicted gets larger with increasing difficulty of the translation task, as is the case here.
Effects caused by sparse data do also affect the language model data: forms which are not contained in the parallel data cannot be produced by the translation system. In order to deal with out-of-vocabulary words, stem markup+tags are stripped of all those words in the language model data that do not occur in the parallel data. This enables the SMT system to score unknown words (e.g. names) in the language model, but also leads to side-effects due to sparsity: for example, the French term rotors occurs once in the parallel data and is correctly stemmed as rotor<Masc.Pl>[N], while all occurrences of rotor in the singular form are stripped of the markup and treated as a name and thus do not undergo the inflection process.
As the method of adding term translation pairs to the parallel data led to the best results for the inflected system, we opted for this method for the integration of terms into the morphology-aware system. While the MWT-only system (2 in table 5) gets a better score than the baseline (1 in table 5) (21.77 vs.
21.54 using the larger CRF), the differ-\nence is not statistically significant. In contrast to\nthe results for the inflected system, adding the set\nof SWTs filtered against the phrase-table slightly\ndecreases BLEU, whereas adding all SWTs leads\nto a considerable decrease in BLEU. We assume\nthat this outcome is partially caused by a problem\nwith the language model: while all generated target\nterms are added to the language model data, they\nare not embedded in the context of a sentence, or,\nif also adding SWTs (system 3 in table 5), not even\nin the context of a term.\n9 Conclusion\nWe presented different approaches to integrate bilin-\ngual terminology of a technical domain into an\nSMT system. First, we compared two integrating\nmethods (providing translation options vs. term-\ntranslation pairs as pseudo-parallel data) and stud-\nied the effects of using only multi-word terms in\ncomparison to both single-word and multi-word\nterms. Then, we applied the best term integration\nstrategy to a morphology-aware translation system.\nWith the inflected system, we obtained a signif-\nicant improvement over the baseline when adding\nterms as pseudo-parallel data. Our evaluation also\nclearly showed that Moses’ XML mode has consid-\nerable problems in dealing with single-word terms.\nFurthermore, we highlighted the need for explicit\nmodeling of morphological features for the integra-\ntion of bilingual terminology.\nWhile the morphology-aware system enriched\nwith term pairs was not able to outperform the base-\nline on a statistically significant level, it outlines a\npipeline that tackles two central problems of adapt-\ning translation systems to under-resourced domains:\n(i) preservation of translation-relevant features and\n(ii) generation of previously unseen inflected forms.\n10 Acknowledgements\nThis work was funded by the DFG Research Projects “Distri-\nbutional Approaches to Semantic Relatedness” and “Models\nof Morpho-Syntax for Statistical Machine Translation”.\nThe research leading to these results has received funding fromthe European Community’s Seventh Framework Programme\n(FP7/2007-2013) under Grant Agreement n. 248005.\nReferences\nDaille, B. and E. Morin. 2005. French-English terminology\nextraction from comparable corpora. In Proceedings of\nIJCNLP 2005 .\nDaum ´e III, H. and J. Jagarlamudi. 2011. Domain adapta-\ntion for machine translation by mining unseen words. In\nProceedings of ACL 2011 .\nde Groc, C. 2011. Babouk: Focused web crawling for corpus\ncompilation and automatic terminology extraction. In Inter-\nnational Conferences on Web Intelligence and Intelligent\nAgent Technology .\nFormiga, L., A. Hern ´andez, J. Mari ˜no, and E. Monte. 2012.\nImproving English to Spanish out-of-domain translations by\nmorphology generalization and generation. In Proceedings\nof AMTA 2012 .\nFraser, A., M. Weller, A. Cahill, and F. Cap. 2012. Modeling\nInflection and Word-Formation in SMT. In Proceedings of\nEACL 2012 .\nHaghighi, A., P. Liang, T. Berg-Kirkpatrick, and D. Klein.\n2008. Learning bilingual lexicons from monolingual cor-\npora. In Proceedings of ACL 2008 .\nH´alek, O., R. Rosa, A. Tamchyna, and O. Bojar. 2011. Named\nentities from wikipedia for machine translation. In Pro-\nceedings of the Conference on Theory and Practice of In-\nformation Technologies .\nLavergne, T., O. Capp ´e, and F. Yvon. 2010. Practical very\nlarge scale CRFs. In Proceedings of ACL 2010 .\nPinnis, M. and R. Skadins. 2012. 
MT adaptation for under-\nresourced domains - what works and what not. In Proceed-\nings of HLT - the baltic Perspective .\nProchasson, E. and P. Fung. 2011. Rare word translation\nextraction from aligned comparable documents. In Pro-\nceedings of ACL 2011 .\nSchmid, H. and F. Laws. 2008. Estimation of conditional\nprobabilities with decision trees and an application to fine-\ngrained pos tagging. In Proceedings of COLING 2008 .\nSchmid, H., A. Fitschen, and U. Heid. 2004. SMOR: a\nGerman Computational Morphology Covering Derivation,\nComposition, and Inflection. In Proceedings of LREC 2004 .\nSchwenk, H. and P. Koehn. 2008. Large and diverse language\nmodels for statistical machine translation. In Proceedings\nof IJCNLP 2008 .\nToutanova, K., H. Suzuki, and A. Ruopp. 2008. Applying\nMorphology Generation Models to Machine Translation.\nInProceedings of ACL-HLT 2008 .\nWeller, M. and U. Heid. 2012. Analyzing and aligning german\ncompound nouns. In Proceedings of LREC 2012 .\nWu, Hua, Haifeng Wang, and Chengqing Zong. 2008. Domain\nadaptation for statistical machine translation with domain\ndictionary and monolingual corpora. In Proceedings of\nCOLING 2008 .\nZhou, Z. 2007. Entwicklung einer franz ¨osischen Finite-State-\nMorphologie. University of Stuttgart.\n18", "main_paper_content": null }
{ "decision": "Unknown", "reviews": [] }
0
0
[]
[]
{ "id": "t83dounR4BPK", "year": null, "venue": "EAMT 2015", "pdf_link": "https://aclanthology.org/W15-4923.pdf", "forum_link": "https://openreview.net/forum?id=t83dounR4BPK", "arxiv_id": null, "doi": null }
{ "title": "Target-Side Generation of Prepositions for SMT", "authors": [ "Marion Weller", "Alexander M. Fraser", "Sabine Schulte im Walde" ], "abstract": "Marion Weller, Alexander Fraser, Sabine Schulte im Walde. Proceedings of the 18th Annual Conference of the European Association for Machine Translation. 2015.", "keywords": [], "raw_extracted_content": "Target-side Generation of Prepositions for SMT\nMarion Weller1,2, Alexander Fraser2, Sabine Schulte im Walde1\n1IMS, University of Stuttgart – (wellermn|schulte)@ims.uni-stuttgart.de\n2CIS, Ludwig-Maximilian University of Munich – [email protected]\nAbstract\nWe present a translation system that mod-\nels the selection of prepositions in a target-\nside generation component. This novel ap-\nproach allows the modeling of all subcate-\ngorized elements of a verb as either NPs or\nPPs according to target-side requirements\nrelying on source and target side features.\nThe BLEU scores are encouraging, but\nfail to surpass the baseline. We addi-\ntionally evaluate the preposition accuracy\nfor a carefully selected subset and discuss\nhow typical problems of translating prepo-\nsitions can be modeled with our method.\n1 Introduction\nThe translation of prepositions is a difficult task\nfor machine translation; a preposition must convey\nthe source-side meaning while also meeting target-\nside constraints. This requires information that\nis not always directly accessible in an SMT sys-\ntem. Prepositions are typically determined by gov-\nernors, such as verbs (to believe in sth.) or nouns\n(interest in sth.). Functional prepositions tend to\nconvey little meaning and mostly depend on target-\nside restrictions, whereas content-bearing preposi-\ntions are largely determined by the source-side, but\nmay also be subject to target-side requirements, as\nin the following example:go to the cinema/to the\nbeach→ins Kino/an den Strand gehen.\nIn this paper, we treat prepositions as a target-\nside generation problem and move the selection\nof prepositions out of the translation system into\na post-processing component. During translation,\nc�2015 The authors. This article is licensed under a Creative\nCommons 3.0 licence, no derivative works, attribution, CC-\nBY-ND.we use an abstract representation of prepositions\nas a place-holder that serves as a basis for the gen-\neration of prepositions in the post-processing step.\nIn this step, all subcategorized elements of a verb\nare considered and allotted to their respective func-\ntions – as PPs with an overt preposition, but also as\nNPs with an “empty” preposition, e.g.to call for\nsth.→ ∅ etw. erfordern. In a standard SMT system,\nsubcategorization is difficult to capture in the lan-\nguage model or by the translation rules if the verb\nand its subcategorized elements are not adjacent.\nIn the following, we outline a method to handle\nprepositions with a target-side generation model in\nan English-German morphology-aware SMT sys-\ntem. We study two aspects: (i) features for a mean-\ningful abstract representation of prepositions and\n(ii) how to predict prepositions in the translation\noutput using a combination of source and target-\nside information. In addition, we compare prepo-\nsitions in the machine translation output with those\nin the reference translation for a selected subset.\nFinally, we discuss examples illustrating typical\nproblems of translating prepositions.\n2 Related Work\nMost research on translating prepositions has been\nreported for rule-based systems. 
Naskar and\nBandyopadhyay (2006) outline a method to han-\ndle prepositions in an English-Bengali MT system\nusing WordNet and an example base for idiomatic\nPPs. Gustavii (2005) uses bilingual features and\nselectional constraints to correct translations in\na Swedish-English system. Agirre et al. (2009)\nmodel Basque prepositions and grammatical case\nusing syntactic-semantic features such as subcat-\negorization triples for a rule-based system which\nleads to an improved translation quality for prepo-\nsitions. Shilon et al. (2012) extend this approach177\ninput lemmatized SMT output prep morph. feat. inflected gloss\n∅ −→ PREP ∅-Acc –\nwhat welch<PWAT> Acc Acc.Fem.Sg.Wk welche which\nrole Rolle<+NN><Fem><Sg> Acc Acc.Fem.Sg.Wk Rolle role\n∅ −→ PREP ∅-Nom –\nthe die<+ART><Def> Nom Nom.Masc.Sg.St der the\ngiant riesig<ADJ> Nom Nom.Masc.Sg.Wk riesige giant\nplanet Planet<+NN><Masc><Sg> Nom Nom.Masc.Sg.Wk Planet planet\nhas gespielt<VVPP> – – gespielt played\nplayed hat<VAFIN> – – hat has\nin−→ PREP bei-Dat – bei for\nthe die<+ART><Def> Dat Dat.Fem.Sg.St der the\ndevelopment Entwicklung<+NN><Fem><Sg> Dat Dat.Fem.Sg.Wk Entwicklung development\nof−→ PREP ∅-Gen –\nthe die<+ART><Def> Gen Gen.Neut.Sg.St des of-the\nsolar system Sonnensystem<+NN><Neut><Sg> Gen Gen.Neut.Sg.Wk Sonnensystems solar system\nFigure 1: Prediction of prepositions, morphological features and generation of inflected forms for the\nlemmatized SMT output. German cases: Acc-Accusative, Nom-Nominative, Dat-Dative, Gen-Genitive.\nwith a statistical component for ranking transla-\ntions. Weller et al. (2014) use noun class informa-\ntion as tree labels in syntactic SMT to model selec-\ntional preferences of prepositions. The presented\nwork is similar to that of Agirre et al. (2009), but\nis applied to a fully statistical MT system. The\nmain difference is that Agirre et al. (2009) use lin-\nguistic information to select appropriate transla-\ntion rules, whereas we generate prepositions in a\npost-processing step.\nA related task to generating prepositions is the\ngeneration of determiners, which are problematic\nwhen translating from languages without definite-\nness morphemes, e.g. Czech or Russian. Tsvetkov\net al. (2013) create synthetic translation options to\naugment a standard phrase-table. They use a clas-\nsifier trained on local contextual features to pre-\ndict whether to generate or remove determiners for\nthe target-side of translation rules. Another related\ntask is error correction of second language learn-\ners, e.g. Rozovskaya and Roth (2013), which also\ncomprises the correction of prepositions.\nIn addition to the standard evaluation metric\nBLEU, we evaluate the accuracy of prepositions in\ncases where the governing verb and governed noun\nin the translation output match with the reference\ntranslation. Conceptually, this is loosely related\nto semantically focused metrics (e.g. MEANT, Lo\nand Wu (2011)), as we go beyond a “flat” n-gram\nmatching but evaluate a meaningful entity, in our\ncase a preposition-noun-verb triple.\n3 Methodology\nOur approach is integrated into an English-German\nmorphology-aware SMT system whichfirst trans-\nlates into a lemmatized representation with a com-ponent to generate fully inflected forms in a second\nstep, an approach similar to the work by Toutanova\net al. (2008) and Fraser et al. (2012). The inflection\nrequires the modeling of the grammaticalcaseof\nnoun phrases (among other features), which cor-\nresponds to determining the syntactic function1.\nWeller et al. 
(2013) describe modelingcasein\nSMT; we want to treat all subcategorized elements\nof a verb in one step and extend their setup to cover\nthe prediction of prepositions in both PP and NPs\n(i.e., the “empty” preposition).\n3.1 Translation and Prediction Steps\nTo build the translation model, we use an abstract\ntarget-language representation in which nouns, ad-\njectives and articles are lemmatized and preposi-\ntions are substituted with place-holders. Addi-\ntionally, “empty” place-holder prepositions are in-\nserted at the beginning of noun phrases. To ob-\ntain a symmetric data structure, place-holders for\n“empty” prepositions are also added to source NPs.\nWhen generating surface forms for the trans-\nlation output, a phrase containing a place-holder\ncan be realized as a noun phrase (with an “empty”\npreposition) or as an overt prepositional phrase (by\ngenerating the preposition’s surface form).\nFigure 1 illustrates the process: for the English\ninput with extra null-prepositions (column 1), the\nSMT system outputs a lemmatized representation\nwith place-holder prepositions (column 2). In a\nfirst step, prepositions andcasefor the SMT out-\nput are predicted (column 3). Then, the three re-\nmaining inflection-relevant morphological features\nnumber,genderandstrong/weakare predicted on\n“regular” sentences without place-holders, given\n1The subject usually is innominativecase and direct/indirect\nobjects areaccusative/dative.178\nthe prepositions from the previous step (column\n4). In the last step, fully inflected forms2are pro-\nduced based on features and lemmas (column 5).\nAs the inflected forms are generated at the end of\nthe pipeline, portmanteau prepositions, i.e. prepo-\nsitions merged with an article in certain conditions,\nsuch aszu+dem=zum(to+the), are easily handled.\nDue to the lemmatized representation, all sub-\ncategorized elements of a verb are available in an\nabstract form and can be allotted to their respective\nfunctions (subject, object, PPs) and be inflected ac-\ncordingly. Furthermore, the generation of (func-\ntional) prepositions is independent of structural\nmismatches of source and target side: for example,\nas translation ofto pay attention to sth., bothauf\netw. achtenand∅ etw. beachtenare possible, but\nrequire a different realization of the place-holder\n(∅vs. overt preposition).\nFor the prediction of prepositions, we combine\nsource and target-side features into afirst-order\nlinear chain CRF which provides aflexible frame-\nwork to make use of different knowledge sources.\nWe use distributional information about subcate-\ngorization preferences to model functional prepo-\nsitions, whereas source-side features (such as the\naligned word) tend to be more important for pre-\ndicting prepositions conveying content. These fea-\ntures address both functional and content-bearing\nprepositions, but are designed to not require an ex-\nplicit distinction between the two categories be-\ncause the model is optimized on the relevant fea-\ntures for each context during training.\nDuring the generation step, the relevant infor-\nmation (such as governing verb/noun and subcat-\negorization preferences) is presented in a refined\nform, as opposed to the limited information avail-\nable in a standard SMT system (such as immediate\ncontext in a translation rule or language model). 
It is thus able to bridge large distances between the verb and its subcategorized elements.
4 Abstract Representation of Prepositions
In addition to providing a means to handle subcategorized elements by target-side generation, one objective of the reduced representation of prepositions is to obtain a more general SMT system with a generally improved translation performance. Our experiments will show, however, that replacing prepositions by simple place-holders decreases the translation quality. The effect that a simplified SMT system loses discriminative power has also been observed by e.g. Toutanova et al. (2008), who found that keeping morphological information during translation can be preferable to removing it from the system despite the problem of increased data sparseness. We will thus evaluate systems with varying levels of information annotated to the place-holders (cf. section 6.2).
As an extension to the basic approach with plain place-holders, we experiment with enriching the place-holders such that they contain more relevant information and represent the content of a preposition while still being abstract. To this end, we enrich the place-holders with syntactically motivated features. For example, the representation can be enriched by annotating the place-holder with the grammatical case of the preposition it represents: for overt prepositions, case is often an indicator of the content (such as direction/location), whereas for empty prepositions (NPs), case indicates the syntactic function. As a further extension, we mark whether a place-holder is governed by a noun or a verb.
Furthermore, we take into account whether a preposition is functional or conveys content: based on a subcategorization lexicon (Eckle, 1999), we decide whether a place-holder in a given context is subcategorized or not. This idea is extended to a system containing both place-holder and normal prepositions: assuming that merely functional prepositions contribute less in terms of meaning, these are replaced by an abstract representation (case and type of governor), whereas for all non-functional prepositions, the actual preposition with annotation (case and type of governor) is kept.
5 Predicting Prepositions
In this section, we explain the features used to predict the values of the place-holder prepositions and evaluate the prediction quality on clean data.
5.1 Features for Predicting Prepositions
Table 1 illustrates the features for predicting prepositions: in addition to target-side context in the form of adjacent lemmas and POS-tags (5 words left/right), we combine three types of features: (1) source-side features, (2) projected source-side information and (3) target-side subcategorization frames. The source-side information consists of
• the word aligned to the place-holder preposition: a source-side overt or empty preposition ("prp" in column "source-side" in table 1)
• its governing verb or noun (column "g.verb")
• the governed noun and its syntactic function in relation to its governor (col. "func,noun")

Table 1: Prediction features in the training data. Source sentence with inserted empty prepositions: "..., ∅ we too are having to endure ∅ the greenhouse effects".
lemma            gloss              source-side (prp | func,noun | g.verb)   projected source-side (noun | g.verb)   target-side subcat                label
aber             but                – | – | –                                – | –                                    –                                 -
PRP              PRP                ∅ | subj, we | endure                    wir | leiden                             ∅-Nom:5  ∅-Acc:0  unter-Dat:4     ∅-Nom
wir              we                 – | – | –                                – | –                                    –                                 Nom
leiden           suffer             – | – | –                                – | –                                    –                                 -
...
auch             too                – | – | –                                – | –                                    –                                 -
PRP              PRP                ∅ | obj, effect | endure                 Treibhauseffekt | leiden                 ∅-Nom:5  ∅-Acc:0  unter-Dat:4     unter-Dat
die              the                – | – | –                                – | –                                    –                                 Dat
Treibhauseffekt  greenhouse effect  – | – | –                                – | –                                    –                                 Dat

These source-side features, extracted from dependency parses (Choi and Palmer, 2012), are then projected to the target side based on the word alignment (column "projected source-side"). Using source-side projections to identify the governor on the target side eliminates the need to parse the disfluent MT output.
Finally, we use distributional subcategorization information as our third feature type (column "target-side subcat"). Relying on distributional subcategorization information (cf. section 6.1), we provide subcategorization preferences for the observed verb in the form of verb-preposition-case tuples. The grammatical case indicates whether the noun is predominantly used as subject or direct/indirect object with an empty preposition.
From the tuples, the system can learn, for example, that unter etwas leiden is a lot more plausible than ∅ etwas leiden, even though the English sentence contains no preposition (to endure sth.). For each preposition, including ∅, we list how often the verb occurred with the respective preposition-case combination, with values ranging from 0 (no evidence) to 5 (high amount of observations); table 1 only shows three of these pairs.
From this training example, the model can learn that the second place-holder, even though aligned to an empty preposition governing an object on the English side, is not likely to be realized as a direct object, as there is no evidence of the verb leiden (to suffer) with an accusative object, but a strong preference for the preposition unter+Dat. The projected noun (Treibhauseffekt) should rule out the possibility of ∅-Nom, as it is an unlikely subject of leiden. On the other hand, for the first place-holder preposition, all features point to a realization as ∅-Nom (subject). This example illustrates how the features can bridge the gap between the verb leiden and the place-holder to be realized as unter (the middle part of the sentence is omitted in the table).
In addition to tuples of the form verb-preposition-case, we also use noun-noun genitive tuples (not shown in table 1) to help the system decide whether two adjacent nouns headed with a place-holder should be realized as a noun-noun genitive construction (equivalent to the English noun-of-noun construction), a noun-prep-noun construction, or as two adjacent (subcategorized) NPs, for example NP_Acc NP_Dat (direct/indirect object).
5.2 Evaluation of Prediction Accuracy
The success of generating prepositions in SMT depends to a large extent on the quality of the prediction component. Before beginning with the MT experiments, we thus evaluate the quality of predicting prepositions on clean data, the tuning set.
We use the Wapiti toolkit (see section 6.1) to train a CRF to predict prepositions. We opted for a sequence model to take into account decisions from previous positions.
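For concreteness, the information summarised in Table 1 can be serialised as one feature row per token, which is the kind of input a CRF toolkit such as Wapiti consumes. The field layout below is our own illustrative assumption, not the authors' actual feature templates:

```python
def feature_row(lemma, gloss, src_prp="-", src_func="-", src_gov="-",
                proj_noun="-", proj_gov="-", subcat=None, label="-"):
    """One token = one whitespace-separated feature row, with the label last
    (the usual convention for CRF training files)."""
    subcat = subcat or {}
    # subcategorization evidence is bucketed from 0 (no evidence) to 5
    subcat_feats = [f"{prep}:{min(count, 5)}" for prep, count in subcat.items()]
    fields = [lemma, gloss, src_prp, src_func, src_gov,
              proj_noun, proj_gov, "|".join(subcat_feats) or "-", label]
    return " ".join(fields)

# the second place-holder of the example sentence from Table 1
print(feature_row("PRP", "PRP", src_prp="∅", src_func="obj,effect", src_gov="endure",
                  proj_noun="Treibhauseffekt", proj_gov="leiden",
                  subcat={"∅-Nom": 5, "∅-Acc": 0, "unter-Dat": 4},
                  label="unter-Dat"))
```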
Even though the sequence model only looks at previous decisions at the bigram level, the annotation of case on all elements of noun phrases should prevent two adjacent noun phrases from being assigned the same value for case.
Table 2 shows the performance of predicting prepositions on clean data. In the column "prep+case", we evaluate the accuracy of the prediction of both the preposition and its grammatical case, whereas the column "prep" gives the accuracy when only looking at the predicted preposition. We compare a model using source-side and projected source-side features (1) and a model with additional subcategorization information (2). Source-side information and its target-side projection are crucial – without source information, content-conveying prepositions would need to be guessed – the addition of subcategorization information does not lead to further gains, though.

Table 2: Results on clean data (3000 sentences).
  Features                     prep+case   prep
1 basic + source               73.58       85.76
2 basic + source + subcat      73.42       85.78

Table 3 lists the prediction results for some of the prepositions to be modeled, ranging from 95% to 22%. The realization as empty preposition constitutes by far the majority. In the list of the top-3 predicted prepositions, it becomes obvious that the realization as ∅ instead of an overt preposition is also the most frequent error; similarly, the prepositions von/in (of/in), both high-frequency prepositions, are often output instead of the correct preposition.

Table 3: Individual prediction results.
  prep    acc.    top-3 predicted (freq)
  ∅       95.17   ∅ (10235), in (134), von (95)
  in      79.19   in (1123), ∅ (170), von (21)
  vor     77.14   vor (81), ∅ (10), bei (3)
  nach    68.70   nach (90), ∅ (22), in (4)
  zu      64.67   zu (238), ∅ (60), in (21)
  an      61.09   an (179), ∅ (47), in (22)
  unter   60.71   unter (34), ∅ (12), von (4)
  auf     59.56   auf (215), ∅ (59), in (32)
  aus     55.38   aus (72), ∅ (25), von (19)
  wegen   22.22   wegen (4), für (4), ∅ (3)

6 Experiments and Evaluation
Here, we present the setup and results of our experiments. In addition to the traditional metric BLEU, we assess the quality of the translated prepositions for a subset where the relevant elements (verb, noun) match with the reference. Finally, we discuss some examples before concluding the paper.
6.1 Data and Experimental Setup
We trained a standard phrase-based Moses system on 4.3M lines of EN–DE data (WMT'14) with a 10.3M sentence language model. For the lemmatized representation of the morphology-aware SMT system, the German part was parsed with BitPar (Schmid, 2004) and analyzed with the morphological tool SMOR (Schmid et al., 2004). The models for predicting inflectional features and prepositions were built with the Wapiti toolkit (Lavergne et al., 2010). The inflectional models (case, number, gender, strong/weak) were trained on lemma and tag information of the German part of the parallel data. The models to predict prepositions were trained on half of the parallel data due to the considerably larger number of labels that can be predicted. The subcategorization tuples were extracted from German web data (Scheible et al. (2013), Faaß and Eckart (2013)) and Europarl.
We\nused WMT’13 as tuning and WMT’14 as test sets3.\n6.2 Evaluation with BLEU\nTable 4 shows the results of experiments with the\nbaseline system (a), a morphology-aware SMT\nsystem with no special treatment for prepositions4.\nAs a variant of the baseline system (b), we re-\nmoved all prepositions from the translation output\nto be re-predicted. This does not lead to much\nchange in BLEU, illustrating that the prediction\nstep itself is not harmful. However, only chang-\ning existing prepositions is not sufficient and it is\nnot possible to model empty vs. overt prepositions.\nTable 5 shows results for the variants of the\nplace-holder systems. Using a basic place-holder\n(✷) representation (S1) leads to a considerably\ndrop in relation to the baseline in table 4. Anno-\ntating the place-holder withcase(S2) leads to an\nimprovement of ca. 0.4, indicating that the abstract\nrepresentation of the place-holders plays a signifi-\ncant role here.\nIn (S3), we mark whether the preposition is gov-\nerned by a verb or a noun, to no avail. As an\nextension, we annotate the status of the place-\nholder: subcategorized or non-subcategorized in\n(S4), which seems to slightly help, even though\nthe observed differences are very small. Assuming\nthat functional prepositions contribute only little\nin terms of meaning, only subcategorized prepo-\nsitions are represented by place-holders, whereas\nnon-functional prepositions are kept. Again, we\nshow two variants: in (S5a), all prepositions are\nre-predicted, while in (S5b), the forms of non-\nfunctional prepositions in the MT output are kept\nand only those for functional prepositions are pre-\ndicted – this last result reaches the baseline level.\nWhile none of the variants outperforms the base-\nline, we consider the results encouraging as they\nillustrate (i) that the representation of prepositions\nduring the translation step considerably influences\nthe MT quality (S2) and (ii) that applying the pre-\ndiction step to a carefully selected subset of prepo-\n3In the current version, we only work with the 1-best output\nof the MT system, and do not consider the n-best list.\n4For comparison, Baseline surface shows the score for a non-\nmorphology-aware system operating on surface forms.181\nSystem Prepositions BLEU CRF\nBaseline surface – 16.84 –\nBaseline (a) – 17.38 –\nBaseline (b) re-predict17.36 src\n17.31 src+subcat\nTable 4: Baseline variants (3003 sentences).\nRepresentation BLEU BLEU\nof place-holders source src+sub\nS1 ✷ 16.81 16.77\nS2 ✷+Case 17.23 17.23\nS3 ✷+Case+(V|N) 16.91 16.89\nS4 ✷+Case+(V|N)+subcat 17.09 17.08\nS5a✷+Case+(V|N): functional17.12 17.06prp +Case+(V|N): non-func.\nS5b✷+Case+(V|N): functional17.29 17.29prp +Case+(V|N): non-func.\nTable 5: Results for place-holder systems.\nsitions improves the results (S5a vs. S5b).\n6.3 Evaluation of Prepositions\nBLEU is known to not capture subtle differences\nbetween two translation systems very well. Thus,\nwe present a second evaluation in which we ana-\nlyze the translation accuracy of prepositions.\nIt is difficult to automatically assess the quality\nof the translation of prepositions as the choice of\na preposition depends on its context, mainly the\nverbs and/or nouns it occurs with. It is not suffi-\ncient to compare the prepositions occurring in the\nreference translation with those in the translation\noutput, as the used verbs/nouns or even the en-\ntire structure of the sentence might differ. 
We will\nthus restrict the evaluation to cases where the rele-\nvant parts, namely the governing verb and the noun\ngoverned by the preposition are the same in the ref-\nerence sentence and in the translation output5: in\nsuch cases, an automatic comparison of the prepo-\nsition in the MT output with the preposition in the\nreference sentence is possible.\nTo obtain the set for which to evaluate the prepo-\nsitions, we took each preposition in the reference\nsentence6governing a proper noun or named en-\ntity. The governing verb is identified relying on\ndependency parses of the reference translation.\nFor extracting the equivalents of the relevant parts\n(preposition, noun, verb) in the translation output,\nwe made use of the alignments with the English\nsource sentence as pivot. The matching is made on\nlemma-level.\n5We ignore PPs governed by nouns (such asN von/an N(N of\nN)) as they are often equivalent with genitive structures.\n6The preposition needs to be in the group of the 17 preposi-\ntions which are subject of modeling in this work.BL S2 S5\nverb MT= verb REF 502 469 503\nverb MT= verb REF, noun MT= noun REF 270 260 271\nTable 6: Subsets where governing verb/governed\nnoun are the same in MT output and reference.\nBL S2 S5a S5b\nverb MT= verb REF245 233 261 250\n48.8% 49.7% 51.9% 49.7%\nverb MT= verb REF, 179 174 188 178\nnoun MT= noun REF 66.3% 66.9% 69.4% 65.7%\nTable 7: Percentage of correct prepositions for the\nsubsets from table 6.\nTable 6 gives an overview of the amount of cases\nwhere the reference contains a preposition and its\nnoun and governing verb are the same in the MT\noutput; in the set of 3003 sentences, this is the\ncase for a subset of 270 (baseline), 260 (S2, the\nbest place-holder-only system) and 271 (S5). Note\nthat the slightly less prep-noun-verb triples of S2\nthat match the reference compared to the baseline\nare not per-se a sign for inferior translation quality\nas we did not consider the possibility of synony-\nmous translations.\nTable 7 shows the amount of prepositions for the\nrespective subsets that were considered correct, i.e.\nmatch with the reference. While the difference is\nvery small, the percentage of correct prepositions\nis slightly higher for the systems S2/5a. Systems\n5a/b are based on the same MT output; however, 5a\nfares better in this evaluation even though 5b had\na higher BLEU score. We thus assume that BLEU\ndid not improve based on the examined subset.\nThis analysis also shows that the translation\nquality of prepositions is a problem in need of\nmore attention7. It has to be noted, though, that\nthis evaluation only gives partial insights into the\nperformance of the systems. The main problem is\nthat the evaluation is centered around prepositions\nin the reference translation, which often is (struc-\nturally) different from the source sentence and con-\nsequently also the translation output. Thus, sen-\ntences with prepositions in the translation, but not\nin the reference, are not considered. Nevertheless,\nwe regard this evaluation as suitable to evaluate the\ncorrectness of prepositions in an automatic way.\n6.4 Examples\nHere, we discuss outputs from the baseline and\nsystem 2 (cf. table 5) that cover the different syn-\n7In some cases however, prepositions in the MT output are\nacceptable even if they do not match with the reference.182\n1SRC ... malmon ’s team will have to improveonrecent performances .\nBL ... malmon das Team wird ¨uberdie j ¨ungsten Leistungen zu verbessern.\n... 
malmon the team willoverthe recent performances improve.\nNEW ... malmon das Team hat∅die j ¨ungsten Leistungen zu verbessern .\n... malmon the team has-to∅the recent performances improve\nREF ... muss sich das Malmon-Team im Vergleich zu den vergangenen Auftritten auf jeden Fall steigern .\n... must -refl- the malmon-team in comparison to the past performances in any case improve.\n2SRC outer space offers many possibilities for studying∅substances under extreme conditions ...\nBL in den Weltraum bietet viele M ¨oglichkeiten f ¨ur das Studium∅Stoffe unter extremen Bedingungen ...\nin the space offers many possibilities study noun∅substances under extreme conditions ...\nNEW der Raum bietet viele M ¨oglichkeiten zum StudiumvonStoffen unter extremen Bedingungen ...\nin the space offers many possibilities for study nounofsubstances under extreme conditions ...\nREF Das Weltall bietet viele M ¨oglichkeiten, Materie unter extremen Bedingungen zu studieren ...\nthe universe offers many possibilities , substances under extreme conditions to study ...\n3SRC nowadays there are specialistsinrenovation to suit the needs of the elderly.\nBL heutzutage gibt es Spezialisteninder Renovierung der Bed ¨urfnisse der ¨alteren Menschen.\nnowadays there are specialistsinthe renovation of the needs of the elderly.\nNEW heutzutage gibt es Spezialistenf ¨urRenovierung , die die Bed ¨urfnisse der ¨alteren Menschen.\nnowadays there are specialistsforrenovation, that the needs of the elderly.\nREF heute gibt es auchf ¨urden altersgerechten Umbau Spezialisten .\ntody there are also for the age-appropriate renovation specialists.\n4SRC ... what role the giant planet has playedinthe development of the solar system.\nBL ... welche Rolle der riesige Planet gespielt hat,inder Entwicklung des Sonnensystems.\n... which role the giant planet played has,inthe development of-the solar system.\nNEW ... welche Rolle der riesige Planet gespielt hatbeider Entwicklung des Sonnensystems.\n... which role the giant planet played hasinthe development of-the solar system.\nREF ... welche Rolle der Riesenplanet bei der Entwicklung des Sonnensystems gespielt hat .\n... which role the giant-planet in the development of the solar-system played has.\nTable 8: Example sentences.\ntactic phenomena, namely different types of struc-\ntural differences in source and target language, re-\nferred to in the introductory sections.\nIn (1), the prepositiononshould not be trans-\nlated, as the verbverbessern(to improve) subcate-\ngorizes a direct object (Leistungen/performances).\nWhile there is a preposition ( ¨uber) in the base-\nline, no preposition is produced by the new system,\nleading to a correct translation. As the reference\ndoes not match with the MT output, this sentence\nis not counted in the evaluation from the previous\nsection or given credit from BLEU, even though it\nimproved over the baseline.\nIn (2), the constellation is opposite: with no\npreposition in the English sentence, the baseline\noutput is missing a preposition, marked with∅.\nHere, the German structure is different as the\nverbstudyingis expressed by a noun (Studium).\nIn this construction, the phrase containingStoffe\n(substances) needs to be expressed as the PPvon\nStoffen(of substances). Alternatively, anoun-\nnoun genitive structure is possible – our system is\nable to produce both versions.\nIn (3), the literal translation ofinin the baseline\nis not grammatical and the translation does not ex-\npress the meaning of the source sentence. 
The new\ntranslation contains the appropriate prepositionf ¨urand also correctly reproduces the source sentence.\nSimilarly, the prepositionbeiin (4) is a better\nchoice thaninin the baseline, even though the\nbaseline sentence is understandable. This sentence\npair is counted in the evaluation from the previous\nsection, as the verb (gespielt) and noun (Sonnen-\nsystem) each match with the reference translation.\n7 Conclusion and Future Work\nWe presented a novel system with an abstract rep-\nresentation for prepositions during translation and\na post-processing component for generating target-\nside prepositions. In this setup, we effectively\ncombine relevant source-side and target-side fea-\ntures. By making use of an abstract representation\nand then assigning all subcategorized elements to\ntheir respective functions to be inflected accord-\ningly, our method can explicitly handle structural\ndifferences in source and target language. We thus\nbelieve that this is a sound strategy to handle the\ntranslation of prepositions.\nWhile the systems fail to improve over the base-\nline, our experiments show that a meaningful rep-\nresentation of prepositions is crucial for translation\nquality. In particular, the annotation ofcasere-\nsulted in the best of all placeholder-only systems –183\nthis information can be considered as a “light” se-\nmantic annotation. Consequently, a more seman-\ntically motivated annotation representing the se-\nmantic class of a preposition (e.g. temporal, local)\nmight lead to a more meaningful representation\nand remains an interesting idea for future work.\nAlternatively, integrating the generation step of the\nprepositions into the decoding process, e.g. fol-\nlowing (Tsvetkov et al., 2013), might be another\npromising strategy.\nIn our evaluation we discussed typical problems\narising when translating prepositions. Further-\nmore, we addressed the problem of automatically\nevaluating the quality of prepositions in sentences\nthat are often structured differently than the refer-\nence sentence by considering only the respective\nrelevant elements. As the translation of preposi-\ntions remains a difficult problem in machine trans-\nlation, an automatic method that takes into account\nboth the morpho-syntactic as well as the semantic\naspects of the realization of prepositions in their\nrespective contexts is needed. In our evaluation,\nwe takefirst steps into this direction.\nAcknowledgments\nThis project has received funding from the Eu-\nropean Union’s Horizon 2020 research and in-\nnovation programme under grant agreement No\n644402, the DFG grantsDistributional Ap-\nproaches to Semantic RelatednessandModels of\nMorphosyntax for Statistical Machine Translation\nand a DFG Heisenberg Fellowship.\nReferences\nAgirre, Eneko, Aitziber Atutxa, Gorka Labaka, Mikel\nLersundi, Aingeru Mayor, and Kepa Sarasola. 2009.\nUse of Rich Linguistic Information to Translate\nPrepositions and Grammatical Cases to Basque. In\nProceedings of EAMT.\nChoi, Jinho D. and Martha Palmer. 2012. Getting the\nMost out of Transition-Based Dependency Parsing.\nInProceedings of ACL.\nEckle, Judith. 1999.Linguistisches Wissen zur\nautomatischen Lexikon-Akquisition aus deutschen\nTextcorpora. Ph.D. thesis, Universit ¨at Stuttgart.\nFaaß, Gertrud and Kerstin Eckart. 2013. SdeWaC –\na Corpus of Parsable Sentences from the Web. In\nProceedings of GSCL.\nFraser, Alexander, Marion Weller, Aoife Cahill, and Fa-\nbienne Cap. 2012. Modeling Inflection and Word-\nFormation in SMT. 
InProceedings of EACL.Gustavii, Ebba. 2005. Target-Language Preposi-\ntion Selection - an Experiment with Transformation-\nBased Learning and Aligned Bilingual Data. InPro-\nceedings of EAMT.\nLavergne, Thomas, Olivier Capp ´e, and Franc ¸ois Yvon.\n2010. Practical very large scale CRFs. InProceed-\nings of ACL.\nLo, Chi-kiu and Dekai Wu. 2011. MEANT: An in-\nexpensive, high-accuracy, semi-automatic Metric for\nEvaluating Translation Utility via Semantic Frames.\nInProceedings of ACL.\nNaskar, Sudip Kumar and Sivaji Bandyopadhyay.\n2006. Handling of Prepositions in English to Ben-\ngali Machine Translation. InProceedings of ACL-\nSIGSEM.\nRozovskaya, Alla and Dan Roth. 2013. Joint Learning\nand Inference for Grammatical Error Correction. In\nProceedings of EMNLP.\nScheible, Silke, Sabine Schulte im Walde, Marion\nWeller, and Max Kisselew. 2013. A Compact but\nLinguistically Detailed Database for German Verb\nSubcategorisation relying on Dependency Parses\nfrom a Web Corpus. InProceedings of WaC.\nSchmid, Helmut, Arne Fitschen, and Ulrich Heid.\n2004. SMOR: a German Computational Morphol-\nogy Covering Derivation, Composition, and Inflec-\ntion. InProceedings LREC 2004.\nSchmid, Helmut. 2004. Efficient Parsing of Highly\nAmbiguous Context-Free Grammars with Bit Vec-\ntors. InProceedings of COLING.\nShilon, Reshef, Hanna Fadida, and Shuly Wintner.\n2012. Incorporating Linguistic Knowledge in Statis-\ntical Machine Translation: Translating Prepositions.\nInProceedings of the Workshop on Innovative Hy-\nbrid Approaches to the Processing of Textual Data.\nToutanova, Kristina, Hisami Suzuki, and Achim\nRuopp. 2008. Applying Morphology Generation\nModels to Machine Translation. InProceedings of\nACL.\nTsvetkov, Yulia, Chris Dyer, Lori Levin, and Archna\nBhatia. 2013. Generating English Determiners in\nPhrase-Based Translation with Synthetic Translation\nOptions. InProceedings of WMT.\nWeller, Marion, Alexander Fraser, and Sabine Schulte\nim Walde. 2013. Using Subcategorization Knowl-\nedge to Improve Case Prediction for Translation to\nGerman. InProceedings of ACL.\nWeller, Marion, Sabine Schulte im Walde, and Alexan-\nder Fraser. 2014. Using Noun Class Informa-\ntion to Model Selectional Preferences for Translating\nPrepositions in SMT. InProceedings of AMTA.184", "main_paper_content": null }
{ "decision": "Unknown", "reviews": [] }
0
0
[]
[]
{ "id": "AapQAyzixf", "year": null, "venue": "EAMT 2022", "pdf_link": "https://aclanthology.org/2022.eamt-1.29.pdf", "forum_link": "https://openreview.net/forum?id=AapQAyzixf", "arxiv_id": null, "doi": null }
{ "title": "Post-editing in Automatic Subtitling: A Subtitlers' perspective", "authors": [ "Alina Karakanta", "Luisa Bentivogli", "Mauro Cettolo", "Matteo Negri", "Marco Turchi" ], "abstract": null, "keywords": [], "raw_extracted_content": "Post-editing in Automatic Subtitling: A Subtitlers’ Perspective\nAlina Karakanta1,2, Luisa Bentivogli1, Mauro Cettolo1,\nMatteo Negri1, Marco Turchi1\n1Fondazione Bruno Kessler2University of Trento\n{akarakanta,bentivo,cettolo,negri,turchi }@fbk.eu\nAbstract\nRecent developments in machine trans-\nlation and speech translation are open-\ning up opportunities for computer-assisted\ntranslation tools with extended automation\nfunctions. Subtitling tools are recently be-\ning adapted for post-editing by providing\nautomatically generated subtitles, and fea-\nturing not only machine translation, but\nalso automatic segmentation and synchro-\nnisation. But what do professional sub-\ntitlers think of post-editing automatically\ngenerated subtitles? In this work, we con-\nduct a survey to collect subtitlers’ impres-\nsions and feedback on the use of automatic\nsubtitling in their workflows. Our find-\nings show that, despite current limitations\nstemming mainly from speech processing\nerrors, automatic subtitling is seen rather\npositively and has potential for the future.\n1 Introduction\nMachine Translation (MT) is today widely adopted\nin most areas of translation and post-editing has\nbeen established as a professional practice, shap-\ning the landscape of the translation industry. Au-\ndiovisual Translation (A VT) is one area where MT\nhas for long found limited success (Burchardt et\nal., 2016). Among the main reasons are the in-\nability of MT systems to deal with creative texts\n(Guerberof-Arenas and Toral, 2022) and the mul-\ntimodality of the source, since the translation de-\npends on visual, acoustic and textual elements\n(Taylor, 2016). For subtitling, additional chal-\nlenges are posed by the formal requirements of the\n© 2022 The authors. This article is licensed under a Creative\nCommons 3.0 licence, no derivative works, attribution, CC-\nBY-ND.target: subtitles should not exceed a specific length\nand should be synchronised with the speech (Car-\nroll and Ivarsson, 1998). However, recent devel-\nopments in neural machine translation (NMT) and\nspeech translation (ST) are paving the way for vi-\nable and usable (semi-)automatic solutions for sub-\ntitling. Compared to solutions providing MT for\nsubtitling, automatic subtitling tools do not simply\ntranslate human-generated source language subti-\ntles, but incorporate automatic transcription of the\nspeech, MT, automatic synchronisation (spotting)\nand segmentation of the translated speech into sub-\ntitles. Altogether, these technologies come with\nthe promise of reducing the human effort in the\nsubtitling process, but, to date, automatic subtitil-\ning has still to be put to test by the actual users.\nEven though translators are fundamental for the\nadvance of new technologies, their views are of-\nten not sufficiently considered (Guerberof-Arenas,\n2013). The study of subtitlers’ perceptions of\nthe technology they are interacting with can be\nbeneficial for all stakeholders in the A VT indus-\ntry. Furthermore, the inclusion of subtitlers in\nthe process of technological change can alleviate\ntheir resistance to adopting technologies (Cadwell\net al., 2018). 
Developers can direct their imple-\nmentation efforts in the right direction to provide\nuser-friendly tools and interfaces (Moorkens and\nO’Brien, 2017), and A VT trainers can identify nec-\nessary skills for teaching and training (Bola ˜nos-\nGarc ´ıa-Escribano et al., 2021). A better under-\nstanding of subtitlers’ interaction with technology\ncan help define the rising profession of the sub-\ntitler post-editor (Bywood et al., 2017), and es-\ntablish metrics and standards to protect subtitlers\nagainst dropping rates and ensure fairness (Nikolic\nand Bywood, 2021).\nIn response to the challenges brought about by\nincreasing technologisation, in this work we con-\nduct a survey of subtitlers’ perspectives on the de-\nveloping paradigm of automatic subtitling. This\nsurvey is a timely contribution to take stock of\nthis nascent technology and its implementation in\nthe subtitling profession from the very beginning,\nwhile setting the stage for further developments.\nThe survey focuses on the subtitlers’ user expe-\nrience when post-editing automatically-generated\nsubtitles from and into different Western European\nlanguages. It also aims at collecting feedback on\nthe main issues and benefits of the technology,\nas well as on the impact of automatic subtitling\non the subtitler’s profession. Based on qualitative\nand quantitative analysis of a survey questionnaire,\nwe provide a participant-based evaluation of auto-\nmatic subtitling and a comprehensive view of sub-\ntitlers’ attitudes towards this new paradigm. Our\nfindings indicate that despite its current limitations\nmainly related to challenges in speech processing,\nautomatic subtitling has potential and its benefits\nare already recognised by the users. Based on the\nreceived criticisms, we provide a list of recommen-\ndations for future improvements in automatic sub-\ntitling tools, which we hope will serve as a guide\nfor technology developers. We further release the\nquestionnaire and responses to foster replication\nand reproducibility in automatic subtitling.1\n2 Related work\nAutomatising subtitling has recently received\ngrowing interest. One research direction aims at\ncontrolling the generation of captions and subtitles\nbased on particular variables and properties, such\nas genre (Buet and Yvon, 2021), length (Lakew et\nal., 2019; Liu et al., 2020) or alignment between\nsource and machine-translated subtitles (Cherry et\nal., 2021). Though relevant from the technological\nstandpoint, this line of research has employed au-\ntomatic metrics for the evaluation of MT and has\nnot included subtitlers in the evaluation process.\nOther studies have tested the usability of MT\nfor subtitles by focusing on quality and productiv-\nity, mainly through the task of post-editing (PE).\nThe human evaluation, however, did not always in-\nvolve professional subtitlers. Some studies used\nvolunteers (C. M. de Sousa et al., 2011), native\nspeakers (Popowich et al., 2000; O’Hagan, 2003)\nor translators (Melero et al., 2006). Nevertheless,\nsubtitling requires special training and skills which\n1https://github.com/fatalinha/subtitlers-have-a-saynative speakers or translators do not necessarily\npossess. Larger scale evaluations involved pro-\nfessional subtitlers, but focused on machine trans-\nlating human-generated source language subtitles.\nThis setting has less challenges than automatic\nsubtitling, since the source text is error-free and\nalready compressed, while the spotting and seg-\nmentation are performed by a human. 
V olk et\nal. (2010) built an MT system between Scandina-\nvian languages, which was tested by professional\nsubtitlers, and collected their feedback in a non-\nstructured way. The large-scale SUMAT project\n(Etchegoyhen et al., 2014) involved two profes-\nsional subtitlers per language pair, who performed\npost-editing and rated their perceived PE effort.\nMatusov et al. (2019) evaluated the productivity\ngains of their proposed English into Spanish sys-\ntem with two post-editors, who were additionally\nasked to rank the adequacy, fluency and design of\nthe subtitles. User feedback was collected in a non-\nstructured way, where subtitlers commented on the\npost-editing process and on their perception of MT\nin their workflows. Lastly, Koponen et al. (2020b)\nperformed a comprehensive human evaluation of\ntheir MT systems for Scandinavian languages. The\nevaluation included the collection of product and\nprocess (keystrokes) data, as well as rich feedback\nbased on a mixed methods approach using ques-\ntionnaires and semi-structured interviews.\nOur present study builds upon the work by Ko-\nponen et al. (2020b) by extending the feedback\ncollection to a larger participant sample (22 com-\npared to 12) working in a variety of Western Eu-\nropean language pairs. One main difference is the\ntechnology behind the generation of the target sub-\ntitles. In our study, respondents are asked to evalu-\nate their user experience after post-editing subtitles\ngenerated through a three-step fully automatic pro-\ncess involving transcription, synchronisation and\ntranslation. On the contrary, in (Koponen et al.,\n2020b) source subtitles were first obtained by a hu-\nman (subtitle template), and then machine trans-\nlated and aligned to the original frames. In addi-\ntion, the subtitlers used their preferred subtitling\nsoftware in the PE tasks. However, as the authors\nadmit, the subtitling tools are not designed for MT\nPost-editing (MTPE), and may therefore not be op-\ntimal for the task. Our work has the benefit of eval-\nuating the PE experience using a professional tool\nspecifically tailored for post-editing automatically\ngenerated subtitles as a case study.\n3 Methodology\nThe survey described in this paper was conducted\nin December 2021 and consisted in respondents\nfilling in a questionnaire after having taken part in\ntesting sessions of an automatic subtitling tool.\n3.1 The task\nIn the PE task, subtitlers were required to post-edit\nthe automatically-generated subtitles of 8 video\nclips. The clips were self-contained excerpts from\ndifferent TV series (drama), each around 3 min-\nutes long, amounting to a total duration of 30 min-\nutes. TV series were selected as the material to\npost-edit since they are representative examples of\nreal subtitling tasks. In addition, they contain ele-\nments which are particularly challenging both for\nhuman subtitlers and automatic systems, such as\nbackground noise, slang, overlapping speech and\nmulti-speaker events. The original language of the\nseries was English. Since all subtitlers edited the\nsame clips but not all of them worked with English\nas source language, we used the dubbed version for\nsubtitlers working from Spanish and Italian.\nThe task was performed over two consecutive\ndays and the subtitlers took sufficient breaks be-\ntween each video to avoid fatigue effects. The sub-\ntitlers worked from their personal office without\nany explicit time limit. 
Before starting the task,\nall participants, regardless of their previous expe-\nrience with the subtitling tool, were asked to famil-\niarise themselves with it by watching a video tuto-\nrial, in which the functionalities of the tool were\nexplained. This setting resulted in a homogeneous\ntask for all participants, with a sufficient duration\nto develop reliable judgements and a robust opin-\nion on their user experience.\n3.2 The tool\nThe automatic subtitling system selected for this\nstudy is integrated in a novel subtitling tool, Mate-\nsub.2Matesub is a typical instance of an automatic\nsubtitling tool. It features a state-of-the-art ST sys-\ntem, with automatic generation of timestamps for\nthe translated subtitles – a process called automatic\nspotting (or auto-spotting) – and automatic seg-\nmentation of the translated audio into subtitles.\nFigure 1 shows a screenshot of the tool. The\nsubtitlers are presented with a list of the automat-\nically generated subtitles (upper left box) and the\nvideo on which the subtitles appear (upper right).\n2https://matesub.com/The boxes corresponding to each subtitle appear at\nthe bottom of the screen, superimposed on a wave-\nform which allows the subtitler to identify parts of\nthe video corresponding to the selected speech seg-\nments. The position and length (duration) of the\nboxes can be adjusted to match the beginning and\nthe end of the spoken utterance and to accommo-\ndate the time the subtitle will appear on screen.\nMoreover, the tool has a quality assurance fea-\nture which raises an issue whenever pre-defined\nsubtitling constraints are violated, for example if\na subtitle is too long (length) or disappears too\nearly (reading speed). All these elements, along\nwith other useful features, such as keyboard short-\ncuts and positioning or colour settings, are im-\nplemented in most subtitling editors not offering\nMT integration, therefore post-editing subtitles in\nMatesub has the benefit of being representative of\nsubtitlers’ real working settings. The tool is free,\ntested in real-life use cases and is already being\nused by professional subtitlers.\n3.3 Respondents\nThe respondents were professional subtitlers who\ntook part in the post-editing task with the Mate-\nsub tool. They were recruited through a language\nservice provider (Translated.com). Participation to\nthe survey was voluntary and the responses were\ncollected anonymously. Before starting the sur-\nvey, participants were informed about the objec-\ntive of the research, the purposes of the data col-\nlection and gave their consent. In total, 22 out of\n24 subtitlers responded to the questionnaire (91%\nresponse rate). The subtitlers worked in different\nlanguage pairs. Table 1 shows the number of sub-\ntitlers for each language pair. Subtitlers worked in\nfrom-English, into-English, but also non-English\nlanguage pairs, which are often disregarded in MT\nresearch (Fan et al., 2021). The focus of the sur-\nvey is to obtain a broad overview of subtitlers’\nopinions on automatic subtitling, regardless of the\nlanguage-specific performance of the technology.\nTherefore we opted for selecting respondents so as\nto cover a wide range of language pairs.\n3.4 Survey and questionnaire\nThe questionnaire was set up as an online form\ncontaining open and closed questions. It was deliv-\nered in English for all respondents and contained\nthree parts. 
The first part collected factual information about the subtitlers, such as years of experience in subtitling, years of experience in MTPE and how often they use Matesub. Three questions focused on the working settings and the diffusion of MT in subtitling jobs. These questions asked how often their subtitling jobs involved using master templates, working directly from the video, and editing machine-translated subtitles.
Figure 1: The Matesub subtitling tool.
Language pair      Subtitlers
Spanish→English    2
Spanish→Italian    3
Spanish→German     3
Italian→French     3
English→French     2
English→Spanish    3
English→Polish     3
English→Dutch      3
Table 1: Respondents per language pair.
The second part of the questionnaire focused on the respondents' user experience with the task of PE automatically generated subtitles. We used the User Experience Questionnaire (UEQ) by Koponen et al. (2020a), a version of the UEQ of Laugwitz et al. (2008) for end-user evaluation of software products, which has been adapted to the post-editing experience. This choice of questionnaire facilitates comparison of PE in automatic subtitling with the PE experience based on a different system. By using an existing questionnaire, we respond to the need for standardisation in experimental research in AVT and MT. The questionnaire contained 13 pairs of adjectives related to the post-editing experience, in the form Post-editing was... (difficult/easy, unpleasant/pleasant, stressful/relaxed, laborious/effortless, slow/fast, inefficient/efficient, boring/exciting, tedious/fun, complicated/simple, annoying/enjoyable, limiting/creative, demotivating/motivating, impractical/practical). Since the tool features auto-spotting and automatic segmentation, we included evaluations on the quality of spotting and segmentation and the perceived effort of editing them. The responses are provided on a scale of -3 to +3, with 0 representing a neutral mid-point. As in the UEQ, average scores between -0.8 and +0.8 are considered neutral evaluations, while scores below -0.8 correspond to negative evaluations and scores above +0.8 to positive evaluations.
The last part of the questionnaire contained open questions on the quality of MT, auto-spotting and automatic segmentation, as well as the subtitlers' opinion on the benefits of automatic subtitling, whether it helps the work of subtitlers and whether they see any dangers for the profession of subtitlers from using automatic subtitling. The open questions were analysed based on thematic analysis (Braun and Clarke, 2006) using the Taguette software (https://www.taguette.org/). This analysis aimed at identifying the main issues with the technologies implemented in the tool, as well as the main benefits from using automatic subtitling. The general opinion on usability is coded as positive, neutral/mixed or negative.
4 Results
4.1 Subtitlers' profiles and working settings
The respondents had on average 2.3 years of experience as subtitlers (SD=1.5, range 1-5 years) and 2.6 years of experience with MTPE (SD=2.4, range 0-10 years). In terms of working settings, there is large variability in the way subtitling is performed.
Figure 2: User experience (UX) scores. Interrupted vertical lines mark the -0.8/+0.8 threshold for neutral evaluations. Horizontal lines mark standard deviation.
To the question How often do your subti-\ntling jobs involve master templates , 5 subtitlers re-\nsponded they never work with templates, 4 rarely,\n6 sometimes and 7 often. When asked How of-\nten do your subtitling jobs involve working directly\nfrom the video , 3 subtitlers responded that they al-\nways work from the video, 4 often, 6 sometimes, 5\nrarely and 4 never. When it comes to the question\nHow often do your jobs involve editing machine-\ntranslated subtitles , 4 subtitlers mentioned that\nthey always edit machine-translated subtitles, 3 of-\nten, 4 sometimes, 6 rarely and 5 never. This shows\nthat there is variability in the professional condi-\ntions in subtitling when it comes to the use of tools,\nsettings and requirements but, despite this, MT is\na reality for subtitling. In addition, the responses\nconfirm that our respondent sample covers differ-\nent levels of expertise and a broad skill range.\n4.2 User experience\nThe mean scores for the user experience across\nsubtitlers and language pairs are shown in Figure 2.\nOverall, the post-editing experience can be con-\nsidered as neutral to positive, with all except one\nmean scores leaning on the positive side of the\nscale. The subtitlers found the post-editing pro-\ncess simple and practical. Even though still in the\nneutral range, the lowest scores were observed for\nthe quality of autospotting and automatic segmen-\ntation, where mean scores are close to 0.\nWhen comparing the scores with the study of\nKoponen et al. (2020a), our scores are more dis-\ntributed towards the positive side, even though adirect comparison of the user experience of the dif-\nferent subtitling systems is not the focus of this\npaper. It should also be noted that our sample\nis larger (22 respondents instead of 12) and with\na larger variety in language pairs (8 compared to\n4). In (Koponen et al., 2020a), the lowest aver-\nage scores were found for the adjectives labori-\nous/effortless andlimiting/creative . This adjective\npairs received low scores in our study too, how-\never with slow/fast having the lowest score and a\nvery large deviation. Similarly, the quality of au-\ntospotting and segmentation had lower scores than\nthe effort to fix them. All in all, the user experi-\nence scores show that PE in automatic subtitling\nis a task found acceptable by the subtitlers and\npointed out particular limitations, mainly related to\nthe technical aspects of spotting and segmentation.\n4.3 Subtitlers’ feedback\nMain issues with automatic subtitling Ta-\nble 2 shows the main issues for automatic transla-\ntion, auto-spotting and segmentation, as identified\nbased on the thematic analysis of the subtitlers’ re-\nsponses to the open questions. For automatic trans-\nlation, speech recognition errors seem to be the\nmost common reason for errors in the translation\n(10 statements). Subtitlers mentioned that transla-\ntion quality was highly influenced by the speaker’s\naccent, audio quality and the speed of speech.\nFor example, they mentioned that muffled or fast\nspeech ,music and background noises can often\nconfuse the AI . 
Automatic translation: speech/audio recognition errors (10); lexical, punctuation, case (7); missing context, inconsistencies (5); worked well (3).
Autospotting: inaccurate, starting too early or too late (10); false negatives, no subtitle when there is speech (5); false positives, subtitle when there is no speech (3); not respecting visual elements such as shot changes (2); worked well (6).
Segmentation: oversegmentation, too many short subtitles (6); no respect of syntactic/semantic units (5); no respect of constraints and guidelines (4); undersegmentation, too long subtitles (3); worked well (5).
Table 2: Main issues related to automatic translation, autospotting and segmentation, and number of statements.
Speech recognition errors have indeed been identified as the main issue for speech translation systems, regardless of whether they are direct or cascaded architectures (Bentivogli et al., 2021). The second group contained lexical errors, such as the translation of slang, idioms, colloquial expressions, figurative language and named entities, and in some cases, casing and punctuation (7 statements), with subtitlers reporting that automatic translation still tends to be a bit too literal. Translations out of context or words translated individually or inconsistently across the video were also mentioned as common issues (5 statements). A subtitler noted that inconsistent translation suggestions by the system may lead the human translator to lose consistency as well. Three subtitlers thought translation worked well.
For autospotting, lack of accuracy was the main reported issue (10 statements), since subtitlers thought that subtitles often started too early or too late and were not properly synchronised with the speaker. False negatives (no subtitle created when there is speech) and false positives (subtitles created when there is no speech) were also reported in 5 and 3 statements respectively. All these factors are related to common speech recognition issues, for example when speech is not recognised due to bad audio quality or when background noise is recognised as speech. Some subtitlers (2 statements) mentioned that automatically-spotted subtitles did not respect shot changes and other visual elements. Six subtitlers reported that autospotting worked pretty well or did not report any issues.
For automatic segmentation, oversegmentation (unnecessarily segmenting subtitles into small pieces) and undersegmentation (failing to segment too long subtitles) were mentioned in 6 and 3 statements respectively. Other issues were that the segmentation did not respect the norms of the target language because of splitting semantic/syntactic units (5 statements), and that segmentation resulted in subtitles not respecting the guidelines and length/reading speed constraints.4 Five subtitlers affirmed that automatic segmentation worked well.
Main benefits of automatic subtitling
When asked about the main benefits of automatic subtitling, speed was considered the main benefit by almost all subtitlers (18/22). Surprisingly, this is in contrast with the low mean score for slow/fast in the UX questionnaire. When looking into the benefits reported by subtitlers who rated the PE experience as slow (negative values for slow/fast), all of them mentioned that it saves time, but only on the creation of subtitle boxes and setting the timestamps.
This shows the importance of not\nrelying only on quantitative scores in participant-\nbased studies, but complementing the judgements\nwith quantitative explanations. Additionally, effi-\nciency was noted as a benefit in 10 statements and\nreduction of effort related to technical aspects in\n6 statements. Specifically, subtitlers reported that\nautomatic subtitling saves a lot of tedious work ,\ncreates a guideline of what needs to be translated\ninstead of watching the whole video andserves as a\nstarting template , which, as a result, allows focus-\ning more on the translation rather than having to\nspend time on technical aspects. The provision of\nuseful suggestions was mentioned in 2 statements,\nrelated to subtitling solutions that the subtitler had\nnot considered or to terminology and vocabulary.\nGeneral impressions for the subtitling profes-\nsion To the question whether they think that au-\ntomatic subtitling helps the work of subtitlers,\n14 subtitlers responded positively, 5 gave neu-\ntral/mixed statements and 3 claimed that in most\ncases automatic subtitling does not help. The sub-\ntitlers who responded neutrally mentioned as con-\ncerns that the quality depends on the language,\n4Netflix guidelines: https://partnerhelp.netflixstudios.com/hc/en-\nus/articles/360051554394-Timed-Text-Style-Guide-Subtitle-\nTiming-Guidelines\naudio quality, and that it may be useful only for\nsome applications (e.g. template creation, other\naudiovisual products, such as online conferences\nor courses, documentaries ).\nWhen asked whether they see any possible dan-\nger to the profession because of automatic subti-\ntling, 8 subtitlers mentioned they see no dangers\nat all, 8 subtitlers saw no dangers for the time be-\ning, given the current state of the technology and\nits low diffusion, while 9 subtitlers identified some\ntype of danger. Possible dangers were the loss\nin the quality of the final subtitles (4), dropping\nrates (2) and having less or no work if clients se-\nlect cheaper, automatic options (5). Another dan-\nger identified was the improper application of the\ntechnology (3 statements), where subtitlers consid-\nered that the profession is not at risk only as long\nas a human is involved in the final phase.\n5 Discussion\nThis study focused on subtitlers’ user experience\nand perspectives on the task of post-editing auto-\nmatically generated subtitles. Our findings suggest\na neutral to positive experience. Even though there\nare those who still see no benefits from this new\ntechnology, automatic subtitling was welcomed\nwith enthusiasm by many subtitlers, as an aid to\nsave time and effort. As with studies on MTPE\nexperience (Guerberof-Arenas, 2013; Bundgaard,\n2017), subtitlers have expressed disfavour towards\nautomatic subtitling in respect to technological\nflaws, but also acknowledged its positive aspects\nand expected technology to shape their profession\nin the near future. As for the dangers to the pro-\nfession, most criticisms were not rooted in the fear\nof being outperformed by automatic systems, but\nrather in the effect of technology on the final prod-\nuct and market consequences (Vieira, 2020). The\npositive aspects of technology can only be appre-\nciated when combined with respectful and ethical\nprofessional and market practices.\nPrevious work reporting feedback of subtitlers\nfocused on a setting where MT was applied to\nhuman-generated subtitles. 
The views of the sub-\ntitlers involved did not lead to auspicious conclu-\nsions in favour of the use of MT in subtitling. In\nspite of encouraging automatic evaluation scores,\nsubtitlers were cautious in reporting productivity\ngains in (V olk et al., 2010), while in (Etchegoyhen\net al., 2014) PE experience was rated as rather neg-\native (2.37 on a 1-5 scale), with MT being usefulonly for simple and short sentences. An increase\nin productivity for simple sentences was reported\nin (Matusov et al., 2019), where the two subtitlers\nrated their experience as fair. In (Koponen et al.,\n2020a) the participants did not find PE particularly\ndifficult but characterised it as negative or limit-\ning and did not think MTPE increased productiv-\nity. Similar criticisms were reported for MT qual-\nity in our study, with MT described as too literal,\nunable to properly translate spoken and figurative\nlanguage. However, most subtitlers acknowledged\nthat automatic subtitling makes their work faster\nand more efficient, especially when compared to\nold-style subtitling . The difference of our study\ncompared to studies of MT for subtitling is the au-\ntomatisation not only of the translation, but also\nof the technical aspects of spotting and segmen-\ntation. Subtitlers recognised the importance of\nautomatising these aspects, which are often char-\nacterised as tiresome and dull. By not focusing\nonly on the translation but the automation of the\ntechnical aspects, automatic subtitling allows sub-\ntitlers to spare time and effort on the tedious part of\nthe work (spotting and segmentation) and unleash\ntheir creativity in adjusting the final text.\nOur study aimed at providing a broad view of\nsubtitlers’ perspectives, by complementing quan-\ntitative scores with open questions, attempting to\ncover several language pairs and a range of sub-\ntitler profiles. However, we acknowledge that\nthe findings should be interpreted with some cau-\ntion. Questionnaire-based studies have a context-\nbound nature and may be affected by factors such\nas the system (quality, language), the participants\n(age, familiarity with technology) and the setting\n(Tuominen, 2018). Therefore, some limitations\nshould be considered when drawing conclusions.\nFirstly, responses and user experience scores\nmay have been affected by the language pair, due\nto differences in the subtitling quality depending\non the ASR and MT performance, despite keeping\nall other settings (videos, instructions) equal. Still,\nwe opted for not reporting results separately for\neach language pair, since the sample size per pair\n(2-3) would be too small to draw robust and gener-\nalizable conclusions on a per-language basis. Sec-\nond, even though we attempted to include a broad\nrange of professional subtitler profiles, the group\nis not necessarily representative of the subtitlers’\ngeneral population. For example, the respondents’\nage, a variable not collected in our survey, may\naffect their technological acceptance. Moreover,\ntheir experience in subtitling, template translation\nand MTPE varies. We found in statistical tests that\nthe only variable affecting the user experience is\nMTPE experience. 
Subtitlers with less experience\n(<= 2years) had significantly higher user experi-\nence scores than the more experienced ones.5It is\npossible that experts, already being used to a cer-\ntain level of MT output quality and to their pre-\nferred interfaces, are less willing to change tasks\nand tools, while novices, having less consolidated\nworking practices, are more open and less critical\nagainst new interfaces and workflows. Accepting\nto take part in a task involving automatic subtitling\nalready means the subtitlers were willing, curious\nor even familiar with the technology, and therefore\nmay have been positively inclined towards automa-\ntisation in subtitling, contrary to many A VT pro-\nfessionals (Audiovisual Translators Europe, 2021).\nLastly, the interface used in PE has a great influ-\nence on user experience. We selected Matesub as\na typical instance of an automatic subtitling tool.\nHowever, the generalisability to other tools is not\nguaranteed. In an attempt to test whether previ-\nous experience with Matesub had an effect on user\nexperience, we separated the respondents in two\ngroups based on their responses to the question\nHow often do you use Matesub in your subtitling\njobs: regular users (often, sometimes) and occa-\nsional (never, rarely). We found that familiarity\nwith the tool did not have an effect on the average\nuser experience scores.6This shows that the tool is\nuser-friendly, with a steep learning curve, and does\nnot require extensive training. Less user-friendly\ntools may negatively affect the post-editing experi-\nence. Despite these limitations, this study presents\na screenshot of the current state of the quickly\nevolving technology, necessary to drive implemen-\ntation efforts in the right direction.\n5.1 Recommendations for improvement\nOur findings have identified some limitations of\ncurrent automatic subtitling systems. Based on the\nsubtitlers’ feedback, we present a list of sugges-\ntions for improving automatic subtitling tools in\na direction that benefits the user experience. The\nsuggestions are listed in order of priority.\n5Novices ( N=14, M=1.0, SD=0.7) vs Experts ( N=8,\nM=−0.4,SD=1.1). Based on an equal-variance independent\nsamples t-test: ( t(20) = 3 .82, p=.001)\n6Regular ( N=14, M=0.6, SD=1.2) vs Occasional ( N=8,\nM=0.4,SD=0.9). ( t(20) = 0 .42, p=.679)•Improving autospotting and segmentation .\nThe main benefit of automatic subtitling accord-\ning to the subtitlers was eliminating tedious work\nand leaving more space for creativity. Given that\nmany criticisms were addressed to the quality of\nautospotting and segmentation, improvements in\nthe automation of technical aspects are a prior-\nity. Except for improving the accuracy of auto-\nspotting through enhanced audio processing and\na more syntactically-informed segmentation, in-\nteraction with these elements could become more\nuser-friendly. For example, it could be useful to\nimplement interactive features such as automatic\nadjustment of subtitle boxes to match length and\nreading speed constraints after subtitlers translate\nor finish editing one subtitle.\n•Improved audio pre-processing . Most prob-\nlems in the translation, autospotting and segmen-\ntation stemmed from the segmentation of the au-\ndio. This is an open problem in speech process-\ning (Gaido et al., 2021; Tsiamas et al., 2022); au-\ndio segmentation is typically approached by break-\ning the audio on speaker silences, considered as\na proxy of clause boundaries, and not on syntac-\ntic information. 
A syntax-unaware segmentation is\nresponsible for translations out of context and the\nissues in segmentation (over-undersegmentation,\nno respect of syntactic units). In addition, the re-\nported cases of false positives/false negatives in\nautospotting (see Table 2) indicate that voice ac-\ntivity detection technologies should be improved\nto properly distinguish speech from noise.\n•Improving in-video consistency . Consis-\ntency of MT suggestions is important for easily\nspotting errors and for avoiding repetitive correc-\ntions. Consistency can be improved through adap-\ntive MT (Bic ¸ici and Yuret, 2011) or document-\nlevel MT (Lopes et al., 2020).7Another direction\ncould be the integration of external resources, such\nas termbases and translation memories. These aids\nhave passed the test of time and are usually the first\nrequirement of users before overshooting with MT\nsolutions (Audiovisual Translators Europe, 2021).\n•User experience vs. automatic metrics .\nPunctuation and casing was reported as an issue\nfor automatic translation. However, WER, the\nmetric used to evaluate ASR systems, is normally\ncomputed in a case/punctuation insensitive way.\nCasing and punctuation cannot be derived directly\n7However, it should be noted that (Koponen et al., 2020b)\nfound no preference for document-level MT compared to\nsentence-level MT in subtitling.\nfrom the audio and therefore these errors are tra-\nditionally considered as less relevant by the scien-\ntific community. On the contrary, in the context of\nautomatic subtitling they must be weighed appro-\npriately. This points out the need for task-specific\nevaluation metrics, which take into account ele-\nments that shape user experience.\n•Incorporation of elements from the visual\nmodality . Since subtitling is highly multimodal\nand intersemiotic, ignoring elements from the vi-\nsual modality can result to errors. Some fea-\ntures from the visual modality are already inte-\ngrated in many (non-MT) tools, e.g. marking of\nshot changes. Another useful feature could be the\nrecognition of on-screen text.\n6 Conclusions\nIn this work we presented findings on subti-\ntlers’ user experience and perspectives when post-\nediting automatically generated subtitles, based on\na survey questionnaire. Subtitlers’ experience was\nmarked as neutral to positive. Thematic analysis of\nthe open questions showed that the main issues of\nautomatic subtitling stem from failures in speech\nrecognition and pre-processing, which result in er-\nror propagation, translations out of context, inac-\ncuracies in auto-spotting and suboptimal segmen-\ntation. However, subtitlers acknowledge the posi-\ntive sides of the technology, which are speed and\nreduction of effort, especially related to the techni-\ncal aspects, as well as the provision of useful sug-\ngestions. We conclude that, despite current limita-\ntions, automatic subtitling tools can be beneficial\nfor subtitlers, as long as improvements consider\nsubtitlers’ opinions, and ethical and professional\nstandards are respected. We expect that as au-\ntomatic subtitling tools mushroom, larger studies\nwill be needed to explore different variables and\nmonitor the progress in automatic subtitling.\nAcknowledgements\nWe kindly thank all the subtitlers who took part in\nthe survey, and Anna Matamala and Mar ´ıa Euge-\nnia Larreina Morales for their useful feedback on\nquestionnaire analysis.\nReferences\nAudiovisual Translators Europe. 2021. A VTE Ma-\nchine Translation Manifesto. 
https://avteurope.eu/\nwp-content/uploads/2021/09/Machine-Translation-\nManifesto ENG.pdf. Last accessed: 31/03/2022.Bentivogli, Luisa, Mauro Cettolo, Marco Gaido, Alina\nKarakanta, Alberto Martinelli, Matteo Negri, and\nMarco Turchi. 2021. Cascade versus Direct Speech\nTranslation: Do the Differences Still Make a Dif-\nference? In Proceedings of the 59th Annual Meet-\ning of the Association for Computational Linguistics\nand the 11th International Joint Conference on Natu-\nral Language Processing , pages 2873–2887, Online,\nAugust. Association for Computational Linguistics.\nBic ¸ici, Ergun and Deniz Yuret. 2011. Instance Se-\nlection for Machine Translation using Feature Decay\nAlgorithms. In Proceedings of the 6th Workshop on\nStatistical Machine Translation , pages 272–283, Ed-\ninburgh. Association for Computational Linguistics.\nBola ˜nos-Garc ´ıa-Escribano, Alejandro, Jorge D ´ıaz-\nCintas, and Serenella Massidda. 2021. Latest\nadvancements in audiovisual translation education.\nThe Interpreter and Translator Trainer , 15(1):1–12.\nBraun, Virginia and Victoria Clarke. 2006. Using the-\nmatic analysis in psychology. Qualitative Research\nin Psychology , 3(2):77–101.\nBuet, Franc ¸ois and Franc ¸ois Yvon. 2021. Toward\nGenre Adapted Closed Captioning. In Interspeech\n2021 , pages 4403–4407, Brno (virtual), Czech Re-\npublic, August. ISCA.\nBundgaard, Kristine. 2017. Translator Attitudes to-\nwards Translator-Computer Interaction - Findings\nfrom a Workplace Study. HERMES - Journal of Lan-\nguage and Communication in Business , 56:125–144.\nBurchardt, Aljoscha, Arle Lommel, Lindsay Bywood,\nKim Harris, and Maja Popovi ´c. 2016. Machine\ntranslation quality in an audiovisual context. Target ,\n28(2):206–221.\nBywood, Lindsay, Panayota Georgakopoulou, and\nThierry Etchegoyhen. 2017. Embracing the threat:\nmachine translation as a solution for subtitling. Per-\nspectives , 25(3):492–508.\nC. M. de Sousa, Sheila, Wilker Aziz, and Lucia Spe-\ncia. 2011. Assessing the Post-Editing Effort for Au-\ntomatic and Semi-Automatic Translations of DVD\nSubtitles. In Proceedings of the International Con-\nference Recent Advances in Natural Language Pro-\ncessing 2011 , pages 97–103, Hissar, Bulgaria. Asso-\nciation for Computational Linguistics.\nCadwell, Patrick, Sharon O’Brien, and Carlos S. C.\nTeixeira. 2018. Resistance and accommodation:\nfactors for the (non-) adoption of machine transla-\ntion among professional translators. Perspectives ,\n26(3):301–321.\nCarroll, Mary and Jan Ivarsson. 1998. Code of Good\nSubtitling Practice . Simrishamn: TransEdit.\nCherry, Colin, Naveen Arivazhagan, Dirk Padfield,\nand Maxim Krikun. 2021. Subtitle Translation as\nMarkup Translation. In Proceedings of Interspeech\n2021 , pages 2237–2241.\nEtchegoyhen, Thierry, Lindsay Bywood, Mark Fishel,\nPanayota Georgakopoulou, Jie Jiang, Gerard van\nLoenhout, Arantza del Pozo, Mirjam Sepesy\nMau ˇcec, Anja Turner, and Martin V olk. 2014.\nMachine Translation for Subtitling: A Large-Scale\nEvaluation. In Proceedings of the 9th International\nConference on Language Resources and Evaluation\n(LREC) , pages 46–53, May.\nFan, Angela, Shruti Bhosale, Holger Schwenk, Zhiyi\nMa, Ahmed El-Kishky, Siddharth Goyal, Man-\ndeep Baines, Onur Celebi, Guillaume Wenzek,\nVishrav Chaudhary, Naman Goyal, Tom Birch, Vi-\ntaliy Liptchinsky, Sergey Edunov, Edouard Grave,\nMichael Auli, and Armand Joulin. 2021. 
Beyond\nEnglish-Centric Multilingual Machine Translation.\nJournal of Machine Learning Research , pages 1–48.\nGaido, Marco, Matteo Negri, Mauro Cettolo, and\nMarco Turchi. 2021. Beyond voice activity detec-\ntion: Hybrid audio segmentation for direct speech\ntranslation. In Proceedings of The Fourth Interna-\ntional Conference on Natural Language and Speech\nProcessing , pages 55–62, Trento, Italy. Association\nfor Computational Linguistics.\nGuerberof-Arenas, Ana and Antonio Toral. 2022. Cre-\nativity in translation: Machine translation as a con-\nstraint for literary texts. Translation Spaces .\nGuerberof-Arenas, Ana. 2013. What do professional\ntranslators think about post-editing? JoSTrans - The\njournal of specialised translation , 19.\nKoponen, Maarit, Umut Sulubacak, Kaisa Vitikainen,\nand J ¨org Tiedemann. 2020a. MT for subtitling: In-\nvestigating professional translators’ user experience\nand feedback. In Proceedings of 1st Workshop on\nPost-Editing in Modern-Day Translation , pages 79–\n92, Virtual. AMTA.\nKoponen, Maarit, Umut Sulubacak, Kaisa Vitikainen,\nand J ¨org Tiedemann. 2020b. MT for subtitling:\nUser evaluation of post-editing productivity. In Pro-\nceedings of the 22nd Annual Conference of the Eu-\nropean Association for Machine Translation , pages\n115–124, Lisboa, Portugal, November. European\nAssociation for Machine Translation.\nLakew, Surafel Melaku, Mattia Di Gangi, and Marcello\nFederico. 2019. Controlling the Output Length of\nNeural Machine Translation. In Proceedings of the\n16th International Workshop on Spoken Language\nTranslation, (IWSLT) .\nLaugwitz, Bettina, Theo Held, and Martin Schrepp.\n2008. Construction and evaluation of a user expe-\nrience questionnaire. In Holzinger, Andreas, editor,\nHCI and Usability for Education and Work , pages\n63–76, Berlin, Heidelberg. Springer.\nLiu, Danni, Jan Niehues, and Gerasimos Spanakis.\n2020. Adapting end-to-end speech recognition for\nreadable subtitles. In Proceedings of the 17th Inter-\nnational Conference on Spoken Language Transla-\ntion, pages 247–256. Association for Computational\nLinguistics.Lopes, Ant ´onio, M. Amin Farajian, Rachel Bawden,\nMichael Zhang, and Andr ´e F. T. Martins. 2020.\nDocument-level neural MT: A systematic compari-\nson. In Proceedings of the 22nd Annual Conference\nof the European Association for Machine Transla-\ntion, pages 225–234, Lisboa, Portugal. European As-\nsociation for Machine Translation.\nMatusov, Evgeny, Patrick Wilken, and Yota Geor-\ngakopoulou. 2019. Customizing Neural Machine\nTranslation for Subtitling. In Proceedings of the\nFourth Conference on Machine Translation (Volume\n1: Research Papers) , pages 82–93, Florence, Italy,\nAugust. Association for Computational Linguistics.\nMelero, Maite, Antoni Oliver, and Toni Badia. 2006.\nAutomatic Multilingual Subtitling in the eTITLE\nProject. In Proceedings of ASLIB Translating and\nthe Computer 28 , November.\nMoorkens, Joss and Sharon O’Brien. 2017. Assess-\ning User Interface Needs of Post-Editors of Machine\nTranslation. Human Issues in Translation Technol-\nogy: The IATIS Yearbook , pages 109–130.\nNikolic, Kristijan and Lindsay Bywood. 2021. Au-\ndiovisual Translation: The Road Ahead. Journal of\nAudiovisual Translation , 4(1):50–70, Apr.\nO’Hagan, Minako. 2003. Can language technology\nrespond to the subtitler’s dilemma? - a preliminary\nstudy. In Proceedings of the 25th International Con-\nference on Translation and the Computer .\nPopowich, Fred, Paul McFetridge, Davide Turcato, and\nJanine Toole. 2000. 
Machine Translation of Closed\nCaptions. Machine Translation , pages 311–341.\nTaylor, Christopher. 2016. The multimodal approach\nin audiovisual translation. Target , 2(28), December.\nTsiamas, Ioannis, Gerard I G ´allego, Jos ´e AR Fonollosa,\nand Marta R Costa-juss `a. 2022. Shas: Approaching\noptimal segmentation for end-to-end speech transla-\ntion. arXiv e-prints , pages arXiv–2202.\nTuominen, Tiina. 2018. Multi-method research - re-\nception in context. In Giovanni, Elena Di and Yves\nGambier, editors, Reception Studies and Audiovisual\nTranslation , volume 141, pages 69–90. BTL.\nVieira, Lucas Nunes. 2020. Automation anxiety and\ntranslators. Translation Studies , 13(1):1–21.\nV olk, Martin, Rico Sennrich, Christian Hardmeier, and\nFrida Tidstr ¨om. 2010. Machine Translation of TV\nSubtitles for Large Scale Production. In Zhechev,\nVentsislav, editor, Proceedings of the Second Joint\nEM+/CNGL Workshop ”Bringing MT to the User:\nResearch on Integrating MT in the Translation In-\ndustry (JEC’10) , pages 53–62, Denver.", "main_paper_content": null }
{ "decision": "Unknown", "reviews": [] }
0
0
[]
[]
{ "id": "oG3Pi66YOE", "year": null, "venue": "EAMT 2022", "pdf_link": "https://aclanthology.org/2022.eamt-1.57.pdf", "forum_link": "https://openreview.net/forum?id=oG3Pi66YOE", "arxiv_id": null, "doi": null }
{ "title": "Towards a methodology for evaluating automatic subtitling", "authors": [ "Alina Karakanta", "Luisa Bentivogli", "Mauro Cettolo", "Matteo Negri", "Marco Turchi" ], "abstract": null, "keywords": [], "raw_extracted_content": "Towards a methodology for evaluating automatic subtitling\nAlina Karakanta1,2, Luisa Bentivogli1, Mauro Cettolo1,\nMatteo Negri1, Marco Turchi1\n1Fondazione Bruno Kessler\n2University of Trento\n{akarakanta,bentivo,cettolo,negri,turchi }@fbk.eu\nAbstract\nIn response to the growing interest towards\nautomatic subtitling, the 2021 EAMT-\nfunded project “Towards a methodology\nfor evaluating automatic subtitling” aimed\nat collecting subtitle post-editing data in a\nreal use case scenario where professional\nsubtitlers edit automatically generated sub-\ntitles. The post-editing setting includes,\nfor the first time, automatic generation of\ntimestamps and segmentation, and focuses\non the effect of timing and segmentation\nedits on the post-editing process. The col-\nlected data will serve as the basis for in-\nvestigating how subtitlers interact with au-\ntomatic subtitling and for devising evalua-\ntion methods geared to the multimodal na-\nture and formal requirements of subtitling.\n1 Project overview\nAutomatic subtitling is the task of generating tar-\nget language subtitles for a given video without\nany intermediate human transcription and timing\nof the source speech. The source speech in the\nvideo is automatically transcribed, translated and\nsegmented into subtitles, which are synchronised\nwith the speech – a process called automatic spot-\nting (or auto-spotting). Automatic subtitling is be-\ncoming a task of increasing interest for the MT\ncommunity, practitioners and the audiovisual in-\ndustry. Despite the technological advancements,\nthe evaluation of automatic subtitling still repre-\nsents a significant research gap. Popular MT eval-\nuation metrics consider only content-related pa-\nrameters (translation quality), but not form-related\n© 2022 The authors. This article is licensed under a Creative\nCommons 3.0 licence, no derivative works, attribution, CC-\nBY-ND.parameters, such as format (length and segmen-\ntation) and timing (synchronisation with speech,\nreading speed), which are important features for\nhigh-quality subtitles (Carroll and Ivarsson, 1998).\nMoreover, the way subtitlers interact with auto-\nmatically generated subtitles has not been yet ex-\nplored, since the majority of works which con-\nducted human evaluations of the post-editing effort\nin MT for subtitling have focused on edits in the\ntextual content (V olk et al., 2010; Bywood et al.,\n2017; Matusov et al., 2019; Koponen et al., 2020).\nThis project seeks to investigate automatic sub-\ntitling, the factors contributing to post-editing ef-\nfort and their relation to the quality of the out-\nput. This is achieved through the collection of\nrich, product- and process-based subtitling data in\na real use case scenario where professional subti-\ntlers edit automatically translated, spotted and seg-\nmented subtitles in a dedicated subtitling environ-\nment. 
The richness of the data collected during this\none-year project is ideal for understanding the op-\nerations performed by subtitlers while they inter-\nact with automatic subtitling in their professional\nenvironment and for applying mixed methods ap-\nproaches to:\n•Investigate the correlation between amount of\ntext editing, adjustments in auto-spotting and post-\nediting temporal/technical effort\n•Explore the effect of auto-spotting edits on the\ntotal post-editing process\n•Investigate the variability in subtitle segmenta-\ntion decisions among subtitlers\n•Propose tentative metrics for auto-spotting\nquality and subtitle segmentation\n2 Data collection\nThree professional subtitlers with experience in\npost-editing tasks (two subtitlers en →it, one\nen→de) were asked to post-edit 9 single-speaker\nTED talks from the MuST-Cinema test set,1the\nonly publicly available speech subtitling corpus\n(Karakanta et al., 2020), amounting to one hour\nof video (10,000 source words) in total. The post-\nediting task was performed in a novel PE subtitling\ntool, Matesub,2which features automatic speech\nrecognition, machine translation, automatic gener-\nation of timestamps and automatic segmentation of\nthe translations into subtitles.\nFor each subtitler, we collected the following\ndata: 1) original automatically-generated subti-\ntle files and the corresponding final human post-\nedited subtitle files in SubRip .srt format; 2)\nprocess logs from the Matesub tool, which records\nthe original and final subtitle, original and fi-\nnal timestamps and total time spent on the sub-\ntitle; 3) keystrokes, using InputLog3(Leijten and\nVan Waes, 2013). Screen recordings were also\ncollected to trace the translation and segmenta-\ntion decisions of the subtitlers and identify possi-\nble outliers. At the end of the task, the subtitlers\ncompleted a questionnaire giving feedback on their\nuser experience with automatic subtitling, particu-\nlar problems faced, and their general impressions\non automatic subtitling.\nFor en →it, we collected in total 1,199 subti-\ntles from the first subtitler (it1) and 1,208 subtitles\nfrom the second subtitler (it2), while for en →de\n1,198 subtitles. Based on the process logs we can\ndefine the status of each subtitle: new – a new\nsubtitle is added by the subtitler; deleted – an au-\ntomatically generated subtitle is discarded by the\nsubtitler; or edited – any subtitle that is not new\nor deleted, regardless of whether it was confirmed\nexactly as generated by the system or changed. Ta-\nble 1 shows the distribution of subtitles based on\ntheir status, with edited being the majority.\nSubtitler Edited New Deleted\nit1 1,015 (84,7%) 59 (4.9%) 125 (10.4%)\nit2 953 (78.9%) 68 (5.7%) 187 (15.4%)\nde 1,051 (87.7%) 59 (4.9%) 88 (7.4%)\nTable 1: Distribution of subtitles based on their status.\n3 Final remarks\nThis project focuses on automatic subtitling and\nthe challenges in its evaluation due to the multi-\n1https://ict.fbk.eu/must-cinema/\n2https://matesub.com/\n3https://www.inputlog.net/modal nature of the source medium (video, audio)\nand the formal requirements of the target (format\nand timing of subtitles). 
The data collected con-\nstitute the basis for future multi-faceted analyses\nto explore correlations between translation qual-\nity, spotting quality, and post-editing effort, possi-\nbly leading to new metrics for automatic subtitling.\nThe subtitling data collected will be publicly re-\nleased to promote research in automatic subtitling.\nAcknowledgements\nThis project has been partially funded by the\nEAMT programme “2021 Sponsorship of Activi-\nties - Students’ edition”. We kindly thank the sub-\ntitlers Giulia Donati, Paolo Pilati and Anastassia\nFriedrich for their participation in the PE task.\nReferences\nBywood, Lindsay, Panayota Georgakopoulou, and\nThierry Etchegoyhen. 2017. Embracing the threat:\nmachine translation as a solution for subtitling. Per-\nspectives , 25(3):492–508.\nCarroll, Mary and Jan Ivarsson. 1998. Code of Good\nSubtitling Practice . Simrishamn: TransEdit.\nKarakanta, Alina, Matteo Negri, and Marco Turchi.\n2020. MuST-Cinema: a Speech-to-Subtitles cor-\npus. In Proceedings of the 12th Language Resources\nand Evaluation Conference , pages 3727–3734, Mar-\nseille, France. ELRA.\nKoponen, Maarit, Umut Sulubacak, Kaisa Vitikainen,\nand J ¨org Tiedemann. 2020. MT for subtitling: User\nevaluation of post-editing productivity. In Proceed-\nings of the 22nd Annual Conference of the European\nAssociation for Machine Translation , pages 115–\n124, Lisboa, Portugal, November. European Asso-\nciation for Machine Translation.\nLeijten, Mari ¨elle and Luuk Van Waes. 2013. Keystroke\nlogging in writing research: Using inputlog to an-\nalyze writing processes. Written Communication ,\n30:358–392.\nMatusov, Evgeny, Patrick Wilken, and Yota Geor-\ngakopoulou. 2019. Customizing Neural Machine\nTranslation for Subtitling. In Proceedings of the\nFourth Conference on Machine Translation (Volume\n1: Research Papers) , pages 82–93, Florence, Italy,\nAugust. Association for Computational Linguistics.\nV olk, Martin, Rico Sennrich, Christian Hardmeier, and\nFrida Tidstr ¨om. 2010. Machine Translation of TV\nSubtitles for Large Scale Production. In Zhechev,\nVentsislav, editor, Proceedings of the Second Joint\nEM+/CNGL Workshop ”Bringing MT to the User:\nResearch on Integrating MT in the Translation In-\ndustry (JEC’10) , pages 53–62, Denver.", "main_paper_content": null }
{ "decision": "Unknown", "reviews": [] }
0
0
[]
[]
{ "id": "ihjlbOL30Ft", "year": null, "venue": "EAMT 2022", "pdf_link": null, "forum_link": "https://openreview.net/forum?id=ihjlbOL30Ft", "arxiv_id": null, "doi": null }
{ "title": "Extending the MuST-C Corpus for a Comparative Evaluation of Speech Translation Technology", "authors": [ "Luisa Bentivogli", "Mauro Cettolo", "Marco Gaido", "Alina Karakanta", "Matteo Negri", "Marco Turchi" ], "abstract": null, "keywords": [], "raw_extracted_content": null, "main_paper_content": null }
{ "decision": "Unknown", "reviews": [] }
0
0
[]
[]
{ "id": "NkrA8Y39Kotf", "year": null, "venue": "EAMT 2012", "pdf_link": "https://aclanthology.org/2012.eamt-1.62.pdf", "forum_link": "https://openreview.net/forum?id=NkrA8Y39Kotf", "arxiv_id": null, "doi": null }
{ "title": "Hierarchical Sub-sentential Alignment with Anymalign", "authors": [ "Adrien Lardilleux", "François Yvon", "Yves Lepage" ], "abstract": "Adrien Lardilleux, François Yvon, Yves Lepage. Proceedings of the 16th Annual conference of the European Association for Machine Translation. 2012.", "keywords": [], "raw_extracted_content": "Hierarchical Sub-sentential Alignment with Anymalign\nAdrien Lardilleux\nLIMSI-CNRS\nOrsay, France\[email protected] ¸ois Yvon\nLIMSI-CNRS/University Paris Sud\nOrsay, France\[email protected] Lepage\nWaseda University, IPS\nWaseda, Japan\[email protected]\nAbstract\nWe present a sub-sentential alignment al-\ngorithm that relies on association scores\nbetween words or phrases. This algorithm\nis inspired by previous work on alignment\nby recursive binary segmentation and on\ndocument clustering. We evaluate the re-\nsulting alignments on machine translation\ntasks and show that we can obtain state-of-\nthe-art results, with gains up to more than\n4 BLEU points compared to previous work,\nwith a method that is simple, independent\nof the size of the corpus to be aligned, and\ndirectly computes symmetric alignments.\nThis work also provides new insights re-\ngarding the use of “heuristic” alignment\nscores in statistical machine translation.\n1 Introduction\nSub-sentential alignment consists in identifying\ntranslation units in sentence-aligned parallel cor-\npora, i.e. in texts in which each sentence has been\nmatched with its translation. This task constitutes\nthe first step in the process of training most data-\ndriven machine translation (MT) systems (statistical\nor example-based). The most prominent approach\nnowadays is phrase-based statistical machine trans-\nlation (SMT), where the core model is a translation\ntable derived from sub-sentential mappings. This ta-\nble consists in a pre-computed list of phrase1pairs,\nwhere each (source, target ) pair is associated with\na certain number of scores loosely reflecting the\nlikelihood that source translates to target.\nThe problem of identifying sub-sentential map-\npings from parallel texts, e.g. between isolated\nwords or n-gram s of words, is well-known, and nu-\nmerous proposals have been put forward to perform\nthis task. Those methods roughly fall into two main\nc\r2012 European Association for Machine Translation.\n1In this context, a phrase is a sequence of words and does not\nnecessarily correspond to a syntactic phrase.categories. On the one hand, the probabilistic ap-\nproach, introduced by Brown et al. (1988), consid-\ners the problem of identifying links between words\nor groups of words in parallel sentences. This ap-\nproach consists in defining a probabilistic model of\nthe parallel corpus, the parameters of which are es-\ntimated by a global maximization process which si-\nmultaneously considers all possible associations in\nthe corpus. The goal is to determine the best set of\nalignment links between all source and target words\nof every parallel sentence pair. The most famous\nrepresentatives in this category are the IBM models\n(Brown et al., 1993) for aligning isolated words,\nwhich have given rise to an impressive series of\nvariants and amendments (see e.g. (V ogel et al.,\n1996; Wu, 1997; Deng and Byrne, 2005; Liang\net al., 2006; Fraser and Marcu, 2007; Ganchev et\nal., 2008), to cite a few). 
Generalizing word align-\nment models to phrase alignment proves to be a\nmuch more difficult problem, and in the view of\nwork of Marcu and Wong (2002) and V ogel (2005),\nsuch alignments are generally produced by heuristi-\ncally combining asymmetric 1– nword alignments\n(“oriented”) in both directions (Koehn et al., 2003;\nDeNero and Klein, 2007). Once the set of align-\nment links is constituted, it is possible to assign\nscores to each pair of segments extracted.\nOn the other hand, associative approaches (also\ncalled heuristic by Och and Ney (2003)), were in-\ntroduced by Gale and Church (1991). They do\nnot rely on an alignment model: in order to detect\ntranslations, they rely on independence statistical\nmeasures such as, for instance, Dice coefficient,\nmutual information (Gale and Church, 1991; Fung\nand Church, 1994), or likelihood ratio (Dunning,\n1993)—see also more recent work by Melamed\n(2000) and by Moore (2005). Computations are\ngenerally limited to a list of association candidates\nprecomputed using patterns and filters, for instance,\nby focusing exclusively on the most frequent word\nn-gram s. In this approach, a local maximisation\nprocess is used, where each sentence is processed\nProceedings of the 16th EAMT Conference, 28-30 May 2012, Trento, Italy\n279\nindependently. Alignment links can then be com-\nputed, using for instance the greedy algorithm pro-\nposed by Melamed (2000) (competitive linking).\nThe probabilistic approach is the most widely\nused, mainly due to its tight integration with SMT,\nof which it constitutes a cornerstone since the in-\ntroduction of IBM models (Brown et al., 1993).\nThe two approaches have shown complementary\nstrengths and weaknesses, as acknowledged by e.g.\nJohnson et al. (2007), where phrase associations\nextracted from word alignments are filtered out ac-\ncording to statistical association measures.\nAnymalign, introduced in (Lardilleux and Lep-\nage, 2009; Lardilleux et al., 2011a), aims at ex-\ntracting sub-sentential associations, addressing a\nnumber of issues that are often overlooked. It can\nprocess any number of languages simultaneously, it\ndoes not make any distinction between source and\ntarget, is amenable to massive parallelism, scales\neasily, and is very simple to implement. Anyma-\nlign’s association scores have proven to produce bet-\nter results than state-of-the-art methods on bilingual\nlexicon constitution tasks (evaluation performed by\ncomparing word associations with reference dic-\ntionaries). However, Anymalign’s phrase tables\nare not as good as those obtained with standard\nmethods (evaluation performed with standard MT\nmetrics) (Lardilleux et al., 2011b).\nOne possible explanation for these contrasted re-\nsults is that, Anymalign does not compute any align-\nment at the word or at the phrase level; instead, it\ndirectly computes translation tables along with their\nassociated scores. Those tables have very different\nprofiles than those obtained with probabilistic meth-\nods, mainly in terms of their n-gram distribution\n(Luo et al., 2011). In particular, despite recent im-\nprovements (Lardilleux et al., 2011b), the quantity\nof long n-gram s produced remains relatively small\ncompared with Moses’s translation tables.\nIn this paper, we complement Anymalign with a\nsimple alignment algorithm, so as to better under-\nstand its current limitations. The resulting align-\nments improve Anymalign’s phrase tables to a point\nwhere they can be used to obtain state-of-the art re-\nsults. 
In passing, we also propose a computationally cheap way to compute ITG alignments based on arbitrary word-level association scores.
The rest of this paper is organized as follows: Section 2 describes the alignment method in detail, Section 3 presents an evaluation on machine translation tasks and an analysis of the results, and Section 4 concludes and discusses further prospects.
2 Description of the Method
In a nutshell, our method segments pairs of parallel sentences in two parts, linking the two resulting target segments with their proper translation amongst the two source segments (monotonous or inverted translation), and repeats this process recursively on the segment pairs thus obtained.
This work is strongly inspired by that of Wu (1997) and Deng et al. (2006). The former introduces inversion transduction grammars, which generate synchronized binary parse trees in source and target languages. This formalism models both variable-length associations at leaf (terminal) nodes, and reorderings (inversions) at any level of the parse tree. As we are only interested in computing alignments based on arbitrary lexical association scores, we dispense here with the full apparatus of stochastic grammars, yielding algorithms that are computationally much cheaper. The latter uses a similar concept, where more or less coarse bi-segments are extracted from non-sentence-aligned parallel texts by iteratively applying a top-down recursive binary segmentation algorithm. We reproduce the same approach here at the sentence level, using different local association scores.
2.1 Alignment Matrix
Our starting points are (1) a sentence-aligned bitext, and (2) a function w measuring the strength of the translation link between any (source, target) pair of words. Several definitions of w are possible; it is nevertheless natural to define it endogenously from word occurrences in the bitext. The scores we will first use will be obtained from Anymalign's output. We will see later that they lead to better results than scores obtained using other standard measures.
In the following, the score w(s, t) between a source word s and a target word t is defined as the product of the two translation probabilities p(s|t) × p(t|s), produced by Anymalign:
$$w(s,t) = p(s \mid t) \times p(t \mid s) = \frac{\sum_{n=1}^{N} [\![(s,t) \in (S_n,T_n)]\!]\, k_n}{\sum_{n'=1}^{N} [\![s \in S_{n'}]\!]\, k_{n'}} \times \frac{\sum_{n=1}^{N} [\![(s,t) \in (S_n,T_n)]\!]\, k_n}{\sum_{n'=1}^{N} [\![t \in T_{n'}]\!]\, k_{n'}} = \frac{\left(\sum_{n=1}^{N} [\![(s,t) \in (S_n,T_n)]\!]\, k_n\right)^2}{\left(\sum_{n'=1}^{N} [\![s \in S_{n'}]\!]\, k_{n'}\right)\left(\sum_{n'=1}^{N} [\![t \in T_{n'}]\!]\, k_{n'}\right)}$$
where:
- [[x]] = 1 if x is true, 0 otherwise;
- N is the number of entries (source–target phrase pairs) in Anymalign's translation table;
- S_n (resp. T_n) is the source (resp. target) part of an entry in the translation table;
- k_n is the count associated to the pair (S_n, T_n) in the translation table. This figure is not by itself an indicator of the quality of the entry; it is just the number of times the translation pair has been produced by Anymalign (see (Lardilleux et al., 2011a) for details).
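To make the score definition above concrete, here is a minimal Python sketch of how w(s, t) could be computed from an Anymalign-style table given as (source phrase, target phrase, count) triples; the function names and the triple-based input format are our own illustrative assumptions, not Anymalign's actual interface.

    from collections import defaultdict

    def association_scores(table):
        # Build w(s, t) = p(s|t) * p(t|s) from (source phrase, target phrase, count) entries,
        # crediting every word of an entry with that entry's count, as in the formula above.
        pair_mass = defaultdict(float)  # sum of k_n over entries whose source contains s and target contains t
        src_mass = defaultdict(float)   # sum of k_n over entries whose source contains s
        tgt_mass = defaultdict(float)   # sum of k_n over entries whose target contains t
        for src_phrase, tgt_phrase, k in table:
            src_words = set(src_phrase.split())
            tgt_words = set(tgt_phrase.split())
            for s in src_words:
                src_mass[s] += k
            for t in tgt_words:
                tgt_mass[t] += k
            for s in src_words:
                for t in tgt_words:
                    pair_mass[(s, t)] += k
        def w(s, t):
            joint = pair_mass.get((s, t), 0.0)
            if joint == 0.0:
                return 0.0
            return (joint / src_mass[s]) * (joint / tgt_mass[t])
        return w

    # Toy check against the Figure 1 example (a subset of an Anymalign table):
    table = [("pays", "countries", 151190), ("pays", "country", 17717),
             ("pays tiers", "third countries", 10865), ("les pays", "countries", 6284),
             ("mon pays", "my country", 4057), ("ces pays", "these countries", 3742),
             ("pays .", "country .", 2007), ("état", "country", 122)]
    w = association_scores(table)
    print(round(w("pays", "country"), 3))  # approximately 0.121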
2,007\n´etat country 122\nw(pays; country) = p(paysj country)\u0002p(countryj pays)\n=17,717 + 4,057 + 2,007\n151,190 + 17,717 + 10,865 + 6,284 + 4,057 + 3,742 + 2,007\n\u000217,717 + 4,057 + 2,007\n17,717 + 4,057 + 2,007 + 122\n'0:121\nFigure 1: Computing a score between source word\npays and target word country from a subset of a\ntranslation table produced by Anymalign with the\nFrench and English parts of the Europarl corpus\n(Koehn, 2005).\nan indicator of the quality of the entry; it is just\nthe number of times the translation pair has\nbeen produced by Anymalign (see (Lardilleux\net al., 2011a) for details).\nThis computation is illustrated on Figure 1.\nWhat we do here is tantamount to a very simpli-\nfied version of the algorithm that is used to train\nstandard translation models: starting with lexical\nassociations, we derive by heuristic means an opti-\nmal (Viterbi) alignment, from which the translation\ntables are finally computed. Our procedure is much\nsimpler, though, as we do not iterate the procedure\n(like in EM training) and directly manipulate sym-\nmetric representations at the phrase level.\n2.2 Segmentation Criterion\nThe segmentation criterion described hereafter is\ninspired by the work of Zha et al. (2001) on docu-\nment clustering. Their problem consists in comput-\ning the optimal joint clustering of a bipartite graph\nrepresenting occurrences of terms inside a set of\ndocuments. We adapt it to the search of the best\nalignment between words of a source sentence and\nthose of a target sentence.\nTo this end, we consider a pair of sentences (S;T)\nfrom the parallel corpus, where the source sentence\nSis made up of Isource words and the target sen-\ntence Tis made up of Jtarget words: S= [s1:::sI]\nandT= [t1:::tJ]. Moreover, we consider “split”\nindices xandywhich define a binary segmentation\nof the source and target sentences (the “.” symbol\nrefers to the concatenation of word strings):\nS=A:¯Awith A= [s1:::sx\u00001]and ¯A= [sx:::sI]\nT=B:¯Bwith B= [t1:::ty\u00001]and ¯B= [ty:::tJ]B ¯B\nt1 . . . ty\u00001ty . . . tJ\ns1\nA... W(A;B) W(A;¯B)\nsx\u00001\nsx\n¯A... W(¯A;B) W(¯A;¯B)\nsI\nFigure 2: Schematic representation of the segmen-\ntation of a pair of sentences S=A:¯AandT=B:¯B.\nThe choice of xandywill be guided by the sum W\nof the association scores between each source and\ntarget words of a block (X;Y)2fA;¯Ag\u0002fB; ¯Bg:\nW(X;Y) =å\ns2X;t2Yw(s;t)\nThese notations are summarized in Fig. 2.\nThen, we define the total score of a segmentation:\ncut(X;Y) =W(X;¯Y)+W(¯X;Y)\nNote that cut(X;Y) =cut(¯X;¯Y). In our case, a low\nvalue indicates that the association scores between\nthe words of Xand that of ¯Yon the one hand, and\nbetween the words of ¯Xand that of Yon the other\nhand, are low; in other words, those two blocks are\nunlikely to correspond to good translations, con-\ntrarily to (X;Y)and(¯X;¯Y). We would thus like\nto identify the pair (x;y)that leads to the lowest\npossible value of cut( X;Y).\nAs pointed out by Zha et al. 
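To make the definition above concrete, the following short Python sketch derives w from a translation table given as (source phrase, target phrase, count) triples. The paper does not prescribe an implementation, so the data structures and function names here are our own assumptions; the toy entries simply reproduce the counts of Figure 1.

from collections import defaultdict

def lexical_weights(table):
    # table: iterable of (source_tokens, target_tokens, k_n) triples,
    # one triple per entry of an Anymalign-style translation table.
    joint = defaultdict(float)      # mass of entries containing both s and t
    src_mass = defaultdict(float)   # mass of entries containing s on the source side
    tgt_mass = defaultdict(float)   # mass of entries containing t on the target side
    for src, tgt, k in table:
        for s in set(src):
            src_mass[s] += k
        for t in set(tgt):
            tgt_mass[t] += k
        for s in set(src):
            for t in set(tgt):
                joint[(s, t)] += k

    def w(s, t):
        # w(s, t) = p(s|t) * p(t|s), as defined in Section 2.1.
        num = joint.get((s, t), 0.0)
        return 0.0 if num == 0.0 else (num / src_mass[s]) * (num / tgt_mass[t])

    return w

# Toy table reproducing the counts of Figure 1.
entries = [
    (("pays",), ("countries",), 151190),
    (("pays",), ("country",), 17717),
    (("pays", "tiers"), ("third", "countries"), 10865),
    (("les", "pays"), ("countries",), 6284),
    (("mon", "pays"), ("my", "country"), 4057),
    (("ces", "pays"), ("these", "countries"), 3742),
    (("pays", "."), ("country", "."), 2007),
    (("état",), ("country",), 122),
]
w = lexical_weights(entries)
print(round(w("pays", "country"), 3))   # prints 0.121, as in Figure 1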
2.2 Segmentation Criterion

The segmentation criterion described hereafter is inspired by the work of Zha et al. (2001) on document clustering. Their problem consists in computing the optimal joint clustering of a bipartite graph representing occurrences of terms inside a set of documents. We adapt it to the search for the best alignment between the words of a source sentence and those of a target sentence.

To this end, we consider a pair of sentences (S, T) from the parallel corpus, where the source sentence S is made up of I source words and the target sentence T is made up of J target words: S = [s_1 ... s_I] and T = [t_1 ... t_J]. Moreover, we consider "split" indices x and y which define a binary segmentation of the source and target sentences (the "." symbol refers to the concatenation of word strings):

S = A.Ā  with  A = [s_1 ... s_{x-1}]  and  Ā = [s_x ... s_I]
T = B.B̄  with  B = [t_1 ... t_{y-1}]  and  B̄ = [t_y ... t_J]

The choice of x and y will be guided by the sum W of the association scores between the source and target words of a block (X, Y) ∈ {A, Ā} × {B, B̄}:

W(X, Y) = \sum_{s \in X,\, t \in Y} w(s, t)

These notations are summarized in Fig. 2.

Figure 2: Schematic representation of the segmentation of a pair of sentences S = A.Ā and T = B.B̄. (The alignment matrix is divided into four blocks: A × B with score W(A, B), A × B̄ with W(A, B̄), Ā × B with W(Ā, B), and Ā × B̄ with W(Ā, B̄).)

Then, we define the total score of a segmentation:

cut(X, Y) = W(X, \bar{Y}) + W(\bar{X}, Y)

Note that cut(X, Y) = cut(X̄, Ȳ). In our case, a low value indicates that the association scores between the words of X and those of Ȳ on the one hand, and between the words of X̄ and those of Y on the other hand, are low; in other words, those two blocks are unlikely to correspond to good translations, contrary to (X, Y) and (X̄, Ȳ). We would thus like to identify the pair (x, y) that leads to the lowest possible value of cut(X, Y).

As pointed out by Zha et al. (2001), this quantity tends to produce unbalanced segments (document clusters in their case) because of the absence of normalisation, which warrants its replacement by:

Ncut(X, Y) = \frac{cut(X, Y)}{cut(X, Y) + 2 \times W(X, Y)} + \frac{cut(\bar{X}, \bar{Y})}{cut(\bar{X}, \bar{Y}) + 2 \times W(\bar{X}, \bar{Y})}

This variant adds a density constraint on (X, Y) and (X̄, Ȳ), which is partially satisfied by the introduction of the denominators in the above expression. Its values are in the range [0, 2].

Our problem eventually consists in determining the pair (x, y) that minimizes Ncut. Although efficient search methods exist and are commonly used in graph theory, our "graphs" (pairs of sentences) are small in practice: about 30 words per sentence on average in the Europarl corpus used in the following experiments. We thus content ourselves with determining the best segmentation through an exhaustive enumeration.

2.3 Alignment Algorithm

We can now recursively segment and align a pair of sentences. At each step, we test every possible pair (x, y) of indices in order to determine the lowest Ncut. The worst case happens when the matrix is cut in the most unbalanced possible way; the complexity of the algorithm is thus cubic (O(I × J × min(I, J))) in the length of the input sentences. Using a greedy strategy only delivers suboptimal solutions, yet it does so much faster than exact ITG parsing, which is cubic in the product I × J (Wu, 1997). For a given pair (x, y), two values are computed: one corresponds to a monotonous alignment (Ncut(A, B)) and the other one to an inversion of the two segments (Ncut(A, B̄)). We then apply the process recursively on each of the two segment pairs that correspond to the minimal Ncut. It ends when one of the segments contains only one word, producing 1–n or n–1 alignments. In this approach, all words are aligned. By considering different stopping criteria, e.g. based on thresholds on Ncut, variants of the algorithm are readily obtained, which make it possible to balance the granularity of the alignment against its precision, by choosing to build larger and safer blocks (m–n alignments) instead of smaller and less certain ones. We leave this for future work. Figure 3 presents the complete algorithm, and Fig. 4 illustrates the process on two actual examples. In the following, we refer to this algorithm under the name "Cutnalign."

procedure align(S, T):
    if length(S) = 1 or length(T) = 1:
        link each word of S to each word of T
        stop procedure
    minNcut = 2
    (X, Y) = (S, T)
    for each (x, y) ∈ {2 ... I} × {2 ... J}:
        if Ncut(A, B) < minNcut:
            minNcut = Ncut(A, B)
            (X, Y) = (A, B)
        if Ncut(A, B̄) < minNcut:
            minNcut = Ncut(A, B̄)
            (X, Y) = (A, B̄)
    align(X, Y)
    align(X̄, Ȳ)

Figure 3: Recursive alignment algorithm.

The algorithm itself is independent of the size of the parallel corpus to align, because each sentence pair is processed independently. Aligning a corpus can thus easily be parallelized: the total running time is divided by the number of available processors. Another advantage is that the alignments produced are symmetric throughout the whole process, contrary to more widespread models such as the IBM models, which produce better results when run in both translation directions with their outputs combined using heuristics.
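The pseudocode in Figure 3 is straightforward to turn into a runnable routine. The sketch below is our own minimal Python rendering of it (the paper mentions a Python implementation but does not publish code): blocks are represented as lists of word positions, W is recomputed naively rather than with cumulative sums, and the small epsilon only guards against empty denominators; none of these details come from the paper itself.

def cutnalign(src_words, tgt_words, w, eps=1e-12):
    # Recursive binary segmentation of one sentence pair (cf. Figure 3).
    # src_words, tgt_words: token lists; w: word-pair association function.
    # Returns a symmetric set of links (source position, target position).

    def block_mass(xs, ys):
        # W(X, Y): sum of association scores over a block of positions.
        return sum(w(src_words[i], tgt_words[j]) for i in xs for j in ys)

    def ncut(X, Xbar, Y, Ybar):
        # Ncut(X, Y); cut(X, Y) = cut(Xbar, Ybar), so one cut value suffices.
        cut = block_mass(X, Ybar) + block_mass(Xbar, Y)
        return (cut / (cut + 2 * block_mass(X, Y) + eps)
                + cut / (cut + 2 * block_mass(Xbar, Ybar) + eps))

    links = set()

    def align(S, T):
        # S and T are the lists of word positions still to be aligned.
        if len(S) == 1 or len(T) == 1:
            links.update((i, j) for i in S for j in T)
            return
        best_score, best_split = 2.0, None
        for x in range(1, len(S)):
            A, Abar = S[:x], S[x:]
            for y in range(1, len(T)):
                B, Bbar = T[:y], T[y:]
                mono = ncut(A, Abar, B, Bbar)   # keep (A, B) and (Abar, Bbar)
                inv = ncut(A, Abar, Bbar, B)    # keep (A, Bbar) and (Abar, B)
                if mono < best_score:
                    best_score, best_split = mono, (A, B, Abar, Bbar)
                if inv < best_score:
                    best_score, best_split = inv, (A, Bbar, Abar, B)
        X, Y, Xbar, Ybar = best_split
        align(X, Y)
        align(Xbar, Ybar)

    align(list(range(len(src_words))), list(range(len(tgt_words))))
    return links

Fed with the w function from the previous sketch, cutnalign(src_tokens, tgt_tokens, w) returns a symmetric set of 1–n and n–1 links of the kind shown in boldface in Figure 4.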
3 Evaluation

3.1 Description of Experiments

Our alignment method is evaluated within a phrase-based SMT system. We use the Moses toolkit (Koehn et al., 2007) and data extracted from the Europarl corpus (Koehn, 2005) for three language pairs: Finnish–English (agglutinating language–isolating language), French–English, and Portuguese–Spanish (very close languages). For each pair, we use a training set made up of 350,000 sentence pairs (avg.: 30 words/sentence in English), and development and test sets made up of 2,000 sentence pairs each. The systems are optimized with MERT (Och, 2003). Unless otherwise specified, a lexicalized reordering model is used. Translations are evaluated using BLEU (Papineni et al., 2002) and TER² (Snover et al., 2006).

² Contrary to BLEU, lower scores are better.

Five approaches are compared:

MGIZA++ (Gao and Vogel, 2008) implements the IBM models (Brown et al., 1993) and the HMM of Vogel et al. (1996). Integrated with Moses, it remains the reference in the domain. It is run with default settings: 5 iterations of IBM1, HMM, IBM3, and IBM4, in both directions (source to target and target to source). The alignments are then made symmetric and a translation table is produced from the alignments using the Moses tools (grow-diag-final-and heuristic for phrase pair extraction).

Anymalign (Lardilleux et al., 2011a) is used to directly build the translation tables. As this tool can be stopped at any time, its running time is set so that it runs for the same duration as MGIZA++. The same experiment is repeated by varying the length of output phrases from 1 to 4 (see (Lardilleux et al., 2011b) for details). In the following, we refer to these configurations as "Anymalign-1" to "Anymalign-4." The reordering model used in this configuration is a simple distance-based model, because Anymalign alone cannot provide the information required for a lexicalized reordering model.

Anymalign + Cutnalign: we apply the algorithm described in the previous section to each of the four translation tables produced by Anymalign-1 to Anymalign-4. Although every intermediary segmentation step (all possible rectangles in Fig. 4) actually corresponds to a phrase pair that could be extracted and put in a phrase table, in our experiments we only rely on terminal alignment points, which are then passed to the Moses toolkit to build new translation tables (using again the grow-diag-final-and heuristic). This approach yields more phrase pairs, as it allows segments on both sides of a split point to be extracted together, e.g. le niveau/the level.

Figure 4: Two examples of segmentation-alignment, for the sentence pairs le niveau d'exécution budgétaire ; / the level of budgetary implementation ; and enfin , c'est un droit à l'information que réclament nos concitoyens . / finally , what our fellow citizens are demanding is the right to information . The number in each cell of the alignment matrix corresponds to the value of the function w, with 0 < ε ≤ 0.001. A null value indicates that the two words never appear together in the translation table. Alignment points retained by the algorithm, i.e. at the maximum level of recursion, are shown in boldface. In the first example, the translation is monotonous except for the noun/adjective inversion (exécution budgétaire/budgetary implementation), therefore most alignment links are along the diagonal. The second example, more complex, attests to the inversion of propositions inside the sentence.
Simple probabilities + Cutnalign: the purpose of this configuration is to evaluate the choice of w, rather than the algorithm itself. To this end, we use a very simple association score: the probability that a source word and a target word are translations of one another (the product of the two translation probabilities), where this probability is computed from their co-occurrence counts over the training corpus. The definition of w is thus the same as in Sec. 2.1, with two minor differences: (1) counts are directly computed over the training bitext; and (2) k_n = 1 for all n.

Anymalign + Cutnalign / MGIZA++: this is a combination of the MGIZA++ and Anymalign + Cutnalign approaches. We do this by taking the union of the two alignment sets. In practice, we simply concatenate the two alignment files produced by the aligners, and duplicate the training bicorpus, so that we end up with a new, twice as large, training bicorpus and alignment file, from which the phrase table is extracted.

In terms of runtime, although Cutnalign is currently implemented in a high-level programming language (Python) and its complexity is cubic in the length of the sentence pairs to process, the fact that each sentence pair can be aligned independently makes it amenable to massive parallelism if numerous CPUs are available.

3.2 Results

Results are given in Table 1. For each task, using the basic version of Anymalign yields worse scores than the MGIZA++-based system, even though extending the phrase length reduces this gap by roughly a half, except for the Finnish–English pair. These results are in line with (Lardilleux et al., 2011b).

Cutnalign leads to significant gains in all configurations: from 1.6 to 4.6 BLEU points (fr–en, Anymalign-1 + Cutnalign), with an average gain of 2.6 BLEU and 2.7 TER points. Anymalign + Cutnalign is still 1.1 to 1.6 BLEU points below MGIZA++ in Finnish–English, but it produces results of comparable quality in French–English and Portuguese–Spanish.

The "simple probabilities + Cutnalign" configuration produces results of intermediate quality, generally between "basic" Anymalign and Anymalign + Cutnalign. This shows that the function w has a significant impact on the behavior of the alignment method. Given that the function used in these experiments is one of the simplest possible, there is ample room here for improvements. Merging both phrase tables is almost always the best strategy, at the cost of much larger models.

Task: fi–en
System | BLEU (%) | TER (%) | Entries (millions) | Length of entries | Links | Length of extracted blocks
MGIZA++ | 22.27 | 62.92 | 22.2 | 3.24 | 26 | 1.16
Anymalign-1 | 18.68 | 67.30 | 11.8 | 1.87 | – | –
Anymalign-2 | 17.86 | 68.60 | 4.4 | 2.09 | – | –
Anymalign-3 | 18.06 | 68.13 | 3.0 | 2.32 | – | –
Anymalign-4 | 18.06 | 68.53 | 2.1 | 2.42 | – | –
Anymalign-1 + Cutnalign | 21.14 | 63.74 | 7.7 | 3.26 | 62 | 1.45
Anymalign-2 + Cutnalign | 21.14 | 64.69 | 7.5 | 3.27 | 69 | 1.48
Anymalign-3 + Cutnalign | 20.83 | 64.18 | 7.3 | 3.29 | 73 | 1.50
Anymalign-4 + Cutnalign | 20.64 | 64.52 | 7.1 | 3.29 | 78 | 1.53
Simple prob. + Cutnalign | 19.09 | 67.09 | 5.5 | 3.23 | 74 | 1.78
Anymalign-1 + Cutnalign / MGIZA++ | 22.66 | 62.45 | 27.0 | 3.24 | 44 | 1.30
Anymalign-2 + Cutnalign / MGIZA++ | 22.68 | 62.91 | 26.9 | 3.24 | 47 | 1.31
Anymalign-3 + Cutnalign / MGIZA++ | 22.73 | 62.82 | 26.8 | 3.24 | 49 | 1.32
Anymalign-4 + Cutnalign / MGIZA++ | 22.78 | 62.11 | 26.7 | 3.24 | 52 | 1.33

Task: fr–en
System | BLEU (%) | TER (%) | Entries (millions) | Length of entries | Links | Length of extracted blocks
MGIZA++ | 29.65 | 55.25 | 25.6 | 4.29 | 31 | 1.17
Anymalign-1 | 25.10 | 59.36 | 6.1 | 1.27 | – | –
Anymalign-2 | 26.60 | 58.16 | 6.3 | 1.99 | – | –
Anymalign-3 | 27.02 | 57.96 | 3.9 | 2.29 | – | –
Anymalign-4 | 26.85 | 58.00 | 2.6 | 2.42 | – | –
Anymalign-1 + Cutnalign | 29.65 | 55.22 | 12.9 | 4.21 | 50 | 1.49
Anymalign-2 + Cutnalign | 29.69 | 55.44 | 13.1 | 4.22 | 48 | 1.48
Anymalign-3 + Cutnalign | 29.26 | 55.49 | 13.0 | 4.23 | 50 | 1.49
Anymalign-4 + Cutnalign | 29.16 | 55.46 | 12.8 | 4.23 | 52 | 1.51
Simple prob. + Cutnalign | 27.97 | 56.85 | 10.2 | 3.95 | 54 | 1.62
Anymalign-1 + Cutnalign / MGIZA++ | 30.02 | 54.81 | 31.9 | 4.24 | 41 | 1.32
Anymalign-2 + Cutnalign / MGIZA++ | 29.91 | 54.88 | 31.9 | 4.24 | 40 | 1.32
Anymalign-3 + Cutnalign / MGIZA++ | 30.22 | 54.94 | 31.9 | 4.24 | 41 | 1.32
Anymalign-4 + Cutnalign / MGIZA++ | 29.91 | 54.87 | 31.8 | 4.24 | 42 | 1.33

Task: pt–es
System | BLEU (%) | TER (%) | Entries (millions) | Length of entries | Links | Length of extracted blocks
MGIZA++ | 38.53 | 48.46 | 32.2 | 4.30 | 30 | 1.09
Anymalign-1 | 35.20 | 50.89 | 5.7 | 1.26 | – | –
Anymalign-2 | 36.80 | 49.60 | 5.9 | 1.99 | – | –
Anymalign-3 | 36.82 | 49.67 | 3.7 | 2.26 | – | –
Anymalign-4 | 36.96 | 49.80 | 2.4 | 2.37 | – | –
Anymalign-1 + Cutnalign | 37.35 | 49.55 | 17.9 | 4.30 | 50 | 1.32
Anymalign-2 + Cutnalign | 38.96 | 48.04 | 18.0 | 4.30 | 48 | 1.32
Anymalign-3 + Cutnalign | 38.55 | 48.40 | 17.7 | 4.31 | 50 | 1.33
Anymalign-4 + Cutnalign | 38.56 | 48.37 | 17.3 | 4.31 | 54 | 1.35
Simple prob. + Cutnalign | 37.71 | 49.04 | 13.9 | 4.09 | 50 | 1.41
Anymalign-1 + Cutnalign / MGIZA++ | 38.77 | 48.12 | 37.7 | 4.25 | 40 | 1.20
Anymalign-2 + Cutnalign / MGIZA++ | 38.69 | 48.39 | 37.9 | 4.25 | 39 | 1.20
Anymalign-3 + Cutnalign / MGIZA++ | 38.94 | 48.12 | 37.8 | 4.25 | 40 | 1.20
Anymalign-4 + Cutnalign / MGIZA++ | 38.82 | 48.18 | 37.8 | 4.25 | 42 | 1.21

Table 1: Summary of results obtained in our experiments. The first two numeric columns (BLEU and TER) report machine translation performance. The two middle columns display characteristics of the translation tables: the number of entries and their length in words. The last two columns present characteristics of the alignments prior to the production of the translation table: the average number of alignment links per training sentence pair and the average length of the source part of the minimal blocks extracted (translations of the phrases that are consistent with the word alignments). Dashes mark cells left empty, as the Anymalign-only configurations produce no word alignments.
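As a side note, the phrase-table combination evaluated in the "/ MGIZA++" rows of Table 1 (Section 3.1) amounts to doubling the training bitext so that the phrase extractor sees the union of both alignment sets. The following Python sketch illustrates this preprocessing step; the file layout (one alignment line per sentence pair) and all names are our assumptions, not part of the original toolchain.

def build_union_training_data(src_path, tgt_path, giza_align_path, cutn_align_path, out_prefix):
    # Sketch of the preprocessing behind the "Anymalign + Cutnalign / MGIZA++"
    # systems: the two alignment files are concatenated and the training
    # bicorpus is written out twice, so that the phrase extractor sees the
    # union of both alignment sets over a doubled corpus.
    with open(out_prefix + ".src", "w") as out_src, \
         open(out_prefix + ".tgt", "w") as out_tgt, \
         open(out_prefix + ".align", "w") as out_aln:
        for align_path in (giza_align_path, cutn_align_path):
            with open(src_path) as src, open(tgt_path) as tgt, open(align_path) as aln:
                for s_line, t_line, a_line in zip(src, tgt, aln):
                    out_src.write(s_line)
                    out_tgt.write(t_line)
                    out_aln.write(a_line)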
3.3 Analysis of Alignments

One motivation for proposing this new alignment method is that Anymalign still lacks the ability to extract long n-gram translations in sufficient quantity. In this section, we study some characteristics of the alignments thus produced (see Table 1).

Regarding the translation tables first, we observe that those obtained from Cutnalign contain many more entries than those produced by Anymalign alone³ (three times more on average), except for Anymalign-1 in Finnish–English. Nevertheless, they are still much smaller than the tables obtained from MGIZA++, as they contain half as many entries on average. In addition, the average length of their entries is almost equal to that of the entries in MGIZA++'s translation tables, while the entries produced by Anymalign are much shorter: producing a translation table from alignment links makes up for the lack of long n-grams, as desired.

³ These tables were produced by running Anymalign for an identical amount of time in all configurations, which explains why larger values of the length parameter lead to smaller tables; see details in (Lardilleux et al., 2011b).

Secondly, we study the alignment links themselves. The column "Links" of Table 1 shows that our method produces more alignment links than MGIZA++: between 1.5 and 3 times more, depending on the task. The last column gives the main reason: the alignment blocks extracted by our method, i.e. the rectangles obtained at maximal recursion depth, are always longer than the minimal blocks obtained from MGIZA++'s alignments (+26% on average). Since we systematically align all source words with all target words in such a rectangle, and since all words of a sentence pair are therefore necessarily aligned, the total number of alignments produced is naturally high. This also explains the fact that the number of entries in our translation tables is always much lower than the number obtained from MGIZA++, as the latter produces 0–1 alignments that are at the origin of the numerous phrases extracted during the constitution of the table by Moses (grow-diag-final-and heuristic by default) (Ayan and Dorr, 2006). Despite this, the alignments produced by our method lead to state-of-the-art scores in two of the three machine translation tasks in our experiments.

4 Conclusion

We have presented a sub-sentential alignment method based on a recursive binary segmentation process of the alignment matrix between a source sentence and its translation. Inspired by work on alignment by Wu (1997) and Deng et al. (2006) and by work on document clustering by Zha et al. (2001), we have shown that, despite its simplicity, this method leads to state-of-the-art results in two of the three tasks in our experiments. When fed with Anymalign's scores, it yields significant gains (up to 4.6 BLEU points in French–English) in comparison with Anymalign alone. These experiments confirm that Anymalign's main handicap concerns the translation of long n-grams. A complementary alignment step, strictly speaking, is thus desirable in order to improve its results in machine translation. The alignment method proposed here is simple, symmetric with respect to the translation direction, and its use of local computations makes it scale up easily. Many improvements are possible, among which the use of early stopping criteria during segmentation of the alignment matrix so as to trade alignment granularity for confidence, the use of more sophisticated metrics for scoring blocks, or the exploration of richer (e.g. ternary) segmentation schemes that would make it possible to account for more complex linguistic constructs.

References

Ayan, Necip Fazil and Bonnie J. Dorr. 2006. Going beyond AER: An extensive analysis of word alignments and their impact on MT. In Proc. of Coling/ACL'06, pages 9–16, Sydney, Australia.

Brown, Peter, John Cocke, Stephen Della Pietra, Vincent Della Pietra, Fredrick Jelinek, Robert Mercer, and Paul Roossin. 1988. A statistical approach to language translation. In Proc. of Coling'88, pages 71–76, Budapest.

Brown, Peter, Stephen Della Pietra, Vincent Della Pietra, and Robert Mercer. 1993. The mathematics of statistical machine translation: Parameter estimation. Computational Linguistics, 19(2):263–311.

Dagan, Ido and Ken Church. 1994. Termight: identifying and translating technical terminology. In Proc. of the 4th conference on Applied natural language processing, pages 34–40, Stuttgart.

DeNero, John and Dan Klein. 2007. Tailoring word alignments to syntactic machine translation. In Proc. of ACL'07, pages 17–24, Prague.

Deng, Yonggang and William Byrne. 2005.
HMM\nword and phrase alignment for statistical machine\ntranslation. In Proc. of HLT/EMNLP’05, pages 169–\n176, Vancouver, British Columbia, Canada, October.\nDeng, Yonggang, Shankar Kumar, and William Byrne.\n2006. Segmentation and alignment of parallel text\nfor statistical machine translation. Natural Lan-\nguage Engineering, 13(3):235–260.\nDunning, Ted. 1993. Accurate methods for the statis-\ntics of surprise and coincidence. Computational Lin-\nguistics, 19(1):61–74.\n285\nFraser, Alexander and Daniel Marcu. 2007. Getting the\nstructure right for word alignment: LEAF. In Proc.\nof EMNLP/CoNLL’07), pages 51–60, Prague.\nFung, Pascale and Kenneth Church. 1994. K-vec: A\nnew approach for aligning parallel texts. In Proc. of\nColing’94, volume 2, pages 1096–1102, Ky ¯oto.\nFung, Pascale and Lo Yuen Yee. 1998. An IR approach\nfor translating new words from nonparallel, compa-\nrable texts. In Proc. of Coling/ACL’98, volume 1,\npages 414–420, Montreal.\nGale, William and Kenneth Church. 1991. Identify-\ning word correspondences in parallel texts. In Proc.\nof the 4th DARPA workshop on Speech and Natural\nLanguage, pages 152–157, Pacific Grove.\nGanchev, Kuzman, Jo ˜ao Grac ¸a, and Ben Taskar. 2008.\nBetter alignments = better translations? In Proc. of\nACL’08, pages 986–993, Columbus, Ohio.\nGao, Qin and Stephan V ogel. 2008. Parallel imple-\nmentations of word alignment tool. In Software En-\ngineering, Testing, and Quality Assurance for Nat-\nural Language Processing, pages 49–57, Columbus\n(Ohio, USA).\nGaussier, ´Eric and Jean-Marc Lang ´e. 1995. Mod `eles\nstatistiques pour l’extraction de lexiques bilingues.\nTraitement Automatique des Langues, 36(1-2):133–\n155.\nJohnson, Howard, Joel Martin, George Foster, and\nRoland Kuhn. 2007. Improving translation quality\nby discarding most of the phrasetable. In Proc. of\nEMNLP/CoNLL’07, pages 967–975, Prague.\nKoehn, Philipp, Franz Och, and Daniel Marcu. 2003.\nStatistical phrase-based translation. In Proc. of\nHLT/NAACL’03), pages 48–54, Edmonton.\nKoehn, Philipp, Hieu Hoang, Alexandra Birch, Chris\nCallison-Burch, Marcello Federico, Nicola Bertoldi,\nBrooke Cowan, Wade Shen, Christine Moran,\nRichard Zens, Chris Dyer, Ondrej Bojar, Alexandra\nConstantin, and Evan Herbst. 2007. Moses: Open\nsource toolkit for statistical machine translation. In\nProc. of ACL’07, pages 177–180, Prague.\nKoehn, Philipp. 2005. Europarl: A parallel corpus for\nstatistical machine translation. In Proc. of MT Sum-\nmit X), pages 79–86, Phuket.\nLardilleux, Adrien and Yves Lepage. 2009. Sampling-\nbased multilingual alignment. In Proc. of RANLP,\npages 214–218, Borovets.\nLardilleux, Adrien, Yves Lepage, and Franc ¸ois Yvon.\n2011a. The contribution of low frequencies to multi-\nlingual sub-sentential alignment: a differential asso-\nciative approach. International Journal of Advanced\nIntelligence, 3(2):189–217.\nLardilleux, Adrien, Franc ¸ois Yvon, and Yves Lep-\nage. 2011b. G ´en´eralisation de l’alignement\nsous-phrastique par ´echantillonnage. In Proc. of\nTALN 2011, volume 1, pages 507–518, Montpellier,\nFrance.\nLiang, Percy, Ben Taskar, and Dan Klein. 2006. Align-\nment by agreement. In Proc. of the HLT/NAACL’06,\npages 104–111, New York City.Luo, Juan, Adrien Lardilleux, and Yves Lepage. 2011.\nImproving sampling-based alignment by investigat-\ning the distribution of n-grams in phrase translation\ntables. In Proc. of PACLIC 25, pages 150–159, Sin-\ngapour.\nMarcu, Daniel and Daniel Wong. 2002. 
A phrase-\nbased, joint probability model for statistical machine\ntranslation. In Proc. of EMNLP’02, pages 133–139,\nPhiladelphia.\nMelamed, Dan. 2000. Models of translational equiv-\nalence among words. Computational Linguistics,\n26(2):221–249.\nMoore, Robert. 2004. On log-likelihood-ratios and the\nsignificance of rare events. In Proc. of EMNLP’04,\npages 333–340, Barcelona.\nMoore, Robert. 2005. Association-based bilingual\nword alignment. In Proc. of the ACL Workshop on\nBuilding and Using Parallel Texts, pages 1–8, Ann\nArbor.\nOch, Franz and Hermann Ney. 2003. A systematic\ncomparison of various statistical alignment models.\nComputational Linguistics, 29:19–51.\nOch, Franz. 2003. Minimum error rate training in\nstatistical machine translation. In Proc. of ACL’03,\npages 160–167, Sapporo.\nPapineni, Kishore, Salim Roukos, Todd Ward, and Wei-\nJing Zhu. 2002. B LEU: a method for automatic eval-\nuation of machine translation. In Proc. of ACL’02,\npages 311–318, Philadelphia.\nSmadja, Frank, Vasileios Hatzivassiloglou, and Kath-\nleen McKeown. 1996. Translating collocations for\nbilingual lexicons: A statistical approach. Computa-\ntional Linguistics, 22(1):1–38.\nSnover, Matthew, Bonnie Dorr, Richard Schwartz, Lin-\nnea Micciulla, and John Makhoul. 2006. A study of\ntranslation edit rate with targeted human annotation.\nInProc. of AMTA’06), pages 223–231, Cambridge,\nAugust.\nV ogel, Stephan, Hermann Ney, and Christoph Tillman.\n1996. Hmm-based word alignment in statistical\ntranslation. In Proc. of Coling’96, pages 836–841,\nCopenhague.\nV ogel, Stephan. 2005. PESA: Phrase pair extraction as\nsentence splitting. In Proc. of MT Summit X, pages\n251–258, Phuket.\nWu, Dekai. 1997. Stochastic inversion transduction\ngrammar and bilingual parsing of parallel corpora.\nComputational Linguistics, 23(3):377–404.\nZha, Hongyuan, Xiaofeng He, Chris Ding, Horst Si-\nmon, and Ming Gu. 2001. Bipartite graph partition-\ning and data clustering. In Proc. of the 10th inter-\nnational conference on Information and knowledge\nmanagement, pages 25–32, Atlanta.\n286", "main_paper_content": null }
{ "decision": "Unknown", "reviews": [] }
0
0
[]
[]
{ "id": "YKRiuYirlKV", "year": null, "venue": "EAMT 2018", "pdf_link": "https://aclanthology.org/2018.eamt-main.8.pdf", "forum_link": "https://openreview.net/forum?id=YKRiuYirlKV", "arxiv_id": null, "doi": null }
{ "title": "Reading Comprehension of Machine Translation Output: What Makes for a Better Read?", "authors": [ "Sheila Castilho", "Ana Guerberof Arenas" ], "abstract": null, "keywords": [], "raw_extracted_content": "Reading Comprehension of Machine Translation Output: What Makes\nfor a Better Read?\nSheila Castilho\nADAPT Centre\nDublin City University\[email protected] Guerberof Arenas\nADAPT Centre/SALIS\nDublin City University\[email protected]\nAbstract\nThis paper reports on a pilot experiment\nthat compares two different machine trans-\nlation (MT) paradigms in reading com-\nprehension tests. To explore a suitable\nmethodology, we set up a pilot experi-\nment with a group of six users (with En-\nglish, Spanish and Simplified Chinese lan-\nguages) using an English Language Test-\ning System (IELTS), and an eye-tracker.\nThe users were asked to read three texts\nin their native language: either the original\nEnglish text (for the English speakers) or\nthe machine-translated text (for the Span-\nish and Simplified Chinese speakers). The\noriginal texts were machine-translated via\ntwo MT systems: neural (NMT) and sta-\ntistical (SMT). The users were also asked\nto rank satisfaction statements on a 3-point\nscale after reading each text and answering\nthe respective comprehension questions.\nAfter all tasks were completed, a post-task\nretrospective interview took place to gather\nqualitative data. The findings suggest that\nthe users from the target languages com-\npleted more tasks in less time with a higher\nlevel of satisfaction when using transla-\ntions from the NMT system.\n1 Introduction\nRecently, there has been an increase in Neural Ma-\nchine Translation (NMT) research as contempo-\nrary hardware supports much more powerful com-\nputation during the creation process. Research\nc/circlecopyrt2018 The authors. This article is licensed under a Creative\nCommons 3.0 licence, no derivative works, attribution, CC-\nBY-ND.on the translation quality of NMT engines show\nthat, in general, when compared against Statistical\nMachine Translation (SMT) engines, the output\nquality of NMT systems is higher when measured\nusing automatic metrics (Bahdanau et al., 2014;\nJean et al., 2015; Bojar et al., 2016; Koehn and\nKnowles, 2017). However, results are not as pos-\nitive when human evaluators compare these out-\nputs (Bentivogli et al., 2016; Castilho et al., 2017a;\nCastilho et al., 2017b).\nHuman evaluation of MT output, although not\nalways implemented in quality evaluation, has\nbeen increasingly endorsed by researchers who ac-\nknowledge the need for human assessments. Some\nof the most commonly-used manual metrics are\nfluency and adequacy, error analysis, translation\nranking, as well as post-editing effort. Despite the\nconsiderable focus on MT quality evaluation, the\nimpact of MT on the end user has been under-\nresearched. Measuring the usability of MT out-\nput allows for identification of the impact that the\ntranslation might have on the end user (Castilho\net al., 2014). 
With the intention of exploring the\ncognitive effort required to read texts originating\nfrom SMT and NMT engines by the end users of\nthose texts, we set-up a pilot experiment that aims\nto measure the reading comprehension of Spanish\nand Simplified Chinese users of texts produced by\nboth paradigms using an eye-tracker (using the En-\nglish users’ data as a baseline).\nThe remainder of this paper is organized as fol-\nlows: in Section 2, we survey the existing literature\nconcerning reading comprehension for MT eval-\nuation and the use of eye-tracking techniques for\ntranslation assessment; in Section 3, we describe\nthe research questions and hypotheses which guide\nthis pilot experiment, as well as the methodology\napplied to carry out the experiment with EnglishP\u0013 erez-Ortiz, S\u0013 anchez-Mart\u0013 \u0010nez, Espl\u0012 a-Gomis, Popovi\u0013 c, Rico, Martins, Van den Bogaert, Forcada (eds.)\nProceedings of the 21st Annual Conference of the European Association for Machine Translation , p. 79{88\nAlacant, Spain, May 2018.\n(EN), Spanish (ES) and Simplified Chinese (ZH)\nnative speakers; the results are discussed in Sec-\ntion 4, and finally, in Section 5, we draw the main\nconclusions of the pilot study and outline promis-\ning avenues for future work.\n2 Related Work\n2.1 Reading Comprehension for Machine\nTranslation Evaluation\nDespite the considerable focus on MT quality eval-\nuation, there has not been much research focused\non the impact of MT on the end user. With the\ncurrent shift of paradigm in the MT landscape, it\nhas become essential to also test the reading com-\nprehension of NMT models by the end users of\nthose translations. A few studies have attempted\nto measure reading comprehension (Scarton and\nSpecia, 2016) and usability of MT output. Tomita\net al. (1993) use reading comprehension tests to\ncompare different MT systems. The content for\nreading and comprehension was extracted from an\nEnglish proficiency exam and then translated into\nJapanese via three commercial MT systems as well\nas through the process of human translation. Sixty\nnative speakers of Japanese were asked to read the\ntext and answer the questions. The authors show\nthat reading comprehension is a valid evaluation\nmethodology for MT; however, their experiment\nonly takes into consideration the informativeness,\ni.e. the number of correct answers for the compre-\nhension questions.\nFuji (1999) proposes reading comprehension\ntasks in order to measure informativeness and,\nmoreover, the author adds comprehensiveness and\nfluency to the evaluation measures. The content\nused comprises several texts from official exami-\nnations of English language designed for Japanese\nstudents. Participants were asked to read the text,\nanswer the comprehension questions and judge\nhow comprehensible and how fluent the text is, us-\ning a 4 point scale. Following on from this, Fuji et\nal. (2001) examined the “usefulness” of machine-\ntranslated text from two commercial MT systems\ncompared to the English version. The experiment\nconsisted of participants reading the texts and an-\nswering comprehension questions. The authors\nclaim that presenting the source with the MT out-\nput results in higher comprehension performance.\nJones et al. (2005) ask 84 English native speak-\ners to answer questions from a machine-translated\nand human-translated version of the Defense Lan-guage Proficiency Test for Arabic language. 
Task\ntime and subjective rating were also measured.\nTheir results suggest that MT may enable a lim-\nited working proficiency but it is not suitable for a\ngeneral professional proficiency.\nUsefulness, comprehensibility, and acceptabil-\nity of MT technical documents are examined by\nRoturier (2006). The author claims that a text is\ndeemed useful when readers are able to solve their\nproblem with the help of the translation. The study\nuses a customer satisfaction questionnaire to deter-\nmine whether controlled English rules can have a\nsignificant impact from a Web users perspective.\nThe main drawback of Roturiers approach is that\nthere is no task being performed by the end user\nas the methodology consists of an online question-\nnaire.\n2.2 Eye tracking in Translation Research\nDoherty and O’Brien (2012) is the first study to use\neye-tracking techniques to measure the usability\nof translated texts via the end user. They conduct\na study to compare the usability of raw machine-\ntranslated output for four target languages (Span-\nish, French, German and Japanese) against the us-\nability of the source content (English). The result\nof this first phase compared the machine-translated\ngroup against the source group, and found signifi-\ncant difference for goal completion, efficiency, and\nuser satisfaction between the source and the MT\noutput. In the second phase of the study, Doherty\nand O’Brien (2014) analyse the results according\nto target languages compared to the source. The\nresults show that the raw MT output scores lower\nfor usability measurements, requiring more cogni-\ntive effort for all target languages when compared\nwith the source language content.\nStymne et al. (2012) present a preliminary study\nusing eye tracking as a complement to MT error\nanalysis. In this methodology, although the main\nfocus is to identify and classify MT errors, a com-\nprehension task is also applied. For the perception\nquestions, the human translation scored better than\nall the MT options. For both perceived and ac-\ntual reading comprehension questions, their results\nshow that participants are more efficient when us-\ning the MT output of a system trained using a large\ncorpus. Regarding gaze data, MT errors are asso-\nciated with both longer gaze times and more fixa-\ntions than correct passages, and average gaze time\nis dependent on the type of errors which may sug-80\ngest that some error types are more disturbing for\nreaders than others.\nKlerke et al. (2015) present an experimental\neye-tracking usability test with text simplification\nand machine translation (for both the original and\nsimplified versions) of logic puzzles. Twenty na-\ntive speakers of Danish were asked to solve and\njudge 80 different logic puzzles while having their\neye movements recorded. A greater number of\nfixations on the MT version of the original text\n(with no simplification) was observed and partic-\nipants were less efficient when using the MT ver-\nsion of the original puzzles; however, the simpli-\nfied MT version seemed to ease task performance\nwhen compared to the original English version.\nCastilho et al. (2014) had two groups of 9\nusers each performing tasks using either the raw\nMT or the post-edited version of instructions for\na PC-based security product, and cognitive and\ntemporal effort indicators were gathered using an\neye-tracker. Their results show that lightly post-\nedited instructions present a higher level of usabil-\nity when compared to raw MT. 
Building on this,\nCastilho and O’Brien (2016) perform similar ex-\nperiments with German and English native speak-\ners, with instructions for spreadsheet software. Re-\nsults show that the post-editing group is faster,\nmore efficient, and more satisfied than the MT\ngroup. No significant differences appear in cog-\nnitive effort between raw and post-edited instruc-\ntions, but differences exist between the post-edited\nversions and the source language. Moreover, the\nauthors claim that the cognitive data should not be\nviewed in isolation, and highlight the importance\nof collecting qualitative data for measuring usabil-\nity. Finally, Castilho (2016) extended previous\nexperiments using Simplified Chinese, Japanese,\nGerman and English for the same set of instruc-\ntion of the spreadsheet software. Results show that\nparticipants who used the post-editing instructions\nwere more effective, more efficient, and faster than\nparticipants who used the raw MT instructions, es-\npecially for Simplified Chinese and German. An-\nother interesting finding is that the source mostly\ndid not differ from the post-editing groups, sug-\ngesting that the post-editing output is of equiva-\nlent quality. Regarding satisfaction, the author re-\nports that German participants who use the MT in-\nstructions, even though they are able to success-\nfully perform more tasks than other MT groups,\nare the least satisfied with the instructions, whilethe Japanese participants do not present any dif-\nference between the MT and post-editing groups\nfor satisfaction even though the MT group was the\nleast efficient. The author notes that these findings\nare likely to be related to cultural characteristics,\nas the Japanese participants are more tolerant and\nless likely to complain. Another interesting finding\nis that all groups, including the English-speaking\nparticipants, suggest that the instructions need im-\nprovements.\nFinally, Jordan-Nez et al. (2017) compare three\nMT systems for assimilation, namely Systran (hy-\nbrid corpus based and rule-based MT); Google\nTranslate (at the time of the experiment, a SMT\nsystem); and Apertium (a rule-based system),\nagainst professional translations. Results show\nthat the MT output into a language in the same\nfamily as the readers first language may facilitate\ncomprehension of texts originally written in a lan-\nguage from a different family. The authors note,\nhowever, that the level of usefulness depends on\nthe field and on the MT system used as well as on\nthe level of speciality.\nFollowing previous work, we expect that the\nMT system that shows closer efficiency measures\nto the source text and lower task time, as well as\nlower cognitive effort indicators, is more likely to\nbe rated higher for the satisfaction.\n3 Methodology\nHypothesis and Research Questions As men-\ntioned in Section 1, the primary aim of this ex-\nperiment is to gather more information about the\nuser experience when reading for comprehension\nmachine-translated texts. With this aim in mind,\nwe identified the following research questions:\nRQ1: Which MT engine offers better efficiency\nto participants, i.e. with which one are they able\nto successfully answer more comprehension ques-\ntions? 
Or with which one are they able to complete\nthe tasks faster?\nRQ2: To what extent are there differences in\nparticipants cognitive processes due to different\nengines (NMT and SMT)?\nRQ3: What is the participants level of satisfac-\ntion with SMT and NMT when reading for com-\nprehension?\nContent and Design In order to answer the re-\nsearch questions, we measured participants read-\ning comprehension according to the number of81\ncorrect answers (goal completion) to a set of com-\nprehension questions about each text, and task\ntime. Eye-tracking fixation count and duration are\nalso computed, as well as satisfaction indexes after\neach reading task. After all tasks were completed,\nwe interviewed the participants by means of a\nsemi-structured retrospective interview to gauge\nthe understanding of the texts from a qualitative\nperspective.\nFor this pilot, we recruited two native speakers\nper language, a total of six participants (English,\nSpanish and Simplified Chinese languages). In this\ncase, we used a sample of convenience. The par-\nticipants were part of the student and staff body of\nDublin City University. There were three female\nand three male participants, average age was 30.6\nyears, and all of them had received education to\na post-graduate level. Half of them had previous\nexperience in reading comprehension tests, either\nas part of their education or work. The Spanish\nand Simplified Chinese participants had a univer-\nsity level standard of English as they have taken\nEnglish Proficiency tests and have been working\nand studying in an English-speaking country for\nsome time.\nAs for the reading texts, two were taken from\nthe International English Language Testing Sys-\ntem (IELTS)1that measures English language pro-\nficiency by assessing four language skills: listen-\ning, reading, writing and speaking. IELTS has two\ntypes of tests: General and Academic. Since we\nwere trying to assess the reception of raw output\nfor a general user, we decided to use the Gen-\neral Training IELTS, reading modality, which con-\ntains a text and comprehension questions about\nthat same text. The total number of words in the\nsource content amounted to 1090 words.\nThe two English texts selected and their accom-\npanying comprehension questions were then trans-\nlated using Microsoft Translator Try and Compare\nfeature2that allowed one to generate output in both\nSMT and NMT, and compare their quality. The\nfirst text (Text 2), entitled “Beneficial work prac-\ntices for the keyboard operator”, contained seven\ncomprehension questions in which the users were\nrequired to choose the correct heading for each\nsection of the text from a list of headings. The sec-\nond text (Text 3), entitled “Workplace dismissals”,\n1https://www.ielts.org\n2The feature on the website has changed to a comparison\nbetween Microsoft’s production and research engines. See\nhttps://translator.microsoft.com/neural.contained five comprehension questions for which\nthe users were required to match each description\nfrom a list with a correct term displayed in a box.\nOne short text was also extracted from the IELTS\nwebsite to be used as baseline. This baseline text\n(Text 1) was available in English, Spanish and\nSimplified Chinese on the IELTS website.3More-\nover, ten questions in the style of the test (write\nTrue, False or Not given) were created in English\nfor this baseline text and translated into Spanish by\na Spanish translator and into Simplified Chinese\nby a native speaker. 
The baseline was used to test\nparticipants attention and reading comprehension\nwith a human-translated version. The total num-\nber of words in the source baseline text amounted\nto 229 words. The baseline text was presented first\nfollowed by the Text 2 and Text 3 (SMT and NMT)\nwhich were randomised.4Figure 1 shows the set\nup of the task.\nAfter each task (text and comprehension ques-\ntions), four statements were presented (in English)\nin a three-point Likert scale (1- disagree, 2- neither\nagree or disagree, 3- agree) for the participants:\n1. The subject of the text was easy to under-\nstand.\n2. The language was easy to understand.\n3. The question was easy to understand.\n4. I was able to answer the question confidently.\nThe eye tracker used was a Tobii T60XL with the\nfilter set for I-VT (Velocity-Threshold Identifica-\ntion), as this is the filter recommended by Tobii\nfor reading experiments. The participants were\nrecorded during the post-task interview using the\nFlashback application that allows recording of all\nmovements, sounds, and webcam output on the\ncomputer. This retrospective post-task interview\nwas designed so that participants could watch their\nrecordings and give their feedback regarding the\nsubject matter, language used, questions, and per-\nsonal experience when completing the whole task.\n3As this text was available on the target languages on the\nIELTS official website, we assume that the translations were\neither direct human translation of the source or they were\ncomparable texts, i.e. texts with the same information but\noriginally written in the target language.\n4The same order of texts were presented for the English par-\nticipants (Text 1, Text 2 and Text 3) but in the source EN\nlanguage.82\nFigure 1: Task set-up\nFigure 2: Goal Completion (%)\nFigure 3: Task Time (in seconds)\n4 Results\n4.1 Comprehension\nAs mentioned previously, the baseline (Text 1)\ncontained 10 questions, while Text 2 contained 7\nquestions, and Text 3 contained 5 questions. Goal\ncompletion is the number of successfully com-\npleted tasks, while task time is the total task time\nthe participants needed to complete the tasks.\nGoal Completion Figure 2 shows the results for\ngoal completion for all participants (P01, P02, P04\nand so on), where light gray cells are SMT while\ndark gray cells are NMT results. We can see\nthat on average, participants who read the NMT\ntext had a higher rate of goal completion (ES and\nZH: 93%) when compared to the participants who\nread the SMT texts (ES: 66%, ZH: 86%), evenwhen compared to participants who used the En-\nglish source (79%). Interestingly, Simplified Chi-\nnese participants who used the SMT tests also had\nhigher rates of goal completion when compared to\nthe average for the English text.\nWhen looking at the average score per system\nfor each text (last column), participants of all lan-\nguages had higher goal completion when reading\nText 3 when compared to Text 2, which may indi-\ncate that Text 3 was easier to understand5. 
This is\nmentioned during the retrospective interviews by\nthe participants (see Section 4.4).\nTask Time Regarding the amount of time re-\nquired for participants to read the texts and answer\nthe comprehension questions, Figure 3 shows that,\n5Text 3 contained 5 questions, whereas Text 2 contained seven\nquestion which could also have impacted goal completion83\non average, participants who read the NMT output\n(ES: 375, ZH: 387) were faster than participants\nwho read the SMT output (ES: 412, ZH: 444). Ad-\nditionally, participants who used the NMT texts,\nfor both ES and ZH, have closer average task time\nto participants who used the source text. Interest-\ningly, the Simplified Chinese participants seemed\nto spend slightly more time on the task than the ES\nand EN participants, which could be related to the\nfact that the ZH participants were able to answer\nmore questions correctly.\n4.2 Eye-Tracking Data\nAs previously mentioned, we used an eye tracker\nto collect empirical data to analyse cognitive ef-\nfort. Due to the low number of participants for the\nfirst part of this study, it is not possible to report\nany statistically significant results. However, we\nbelieve that these preliminary results may indicate\na tendency in cognitive effort between NMT and\nSMT.\nFixation Duration (FD) is the length of fixa-\ntions (in seconds) within an area of interest (AOI).\nThe longer the fixations are, the higher the cogni-\ntive effort may be expected. Figure 4 shows the\nresults for the length of fixations. The average\nfixation duration per system indicates that SMT\npresents longer fixations (sum) when compared to\nthe NMT system for both ES and ZH. However,\nthe mean length does not seem to differ much, and,\nin fact, for ZH it presents a slightly shorter mean\n(0.25 secs) than the NMT system (0.26 secs). In\ngeneral, ZH participants present longer FD mean\nresults when compared to ES and EN for both sys-\ntems, including for the baseline (Text 1), which\ncorrelates with the time ZH participants spent on\ntasks (Figure 3).\nFixation Count (FC) is the total number of fixa-\ntions within an AOI. The more there are, the higher\nthe cognitive effort is deemed to be. The average\nFC per system for each language in Figure 5 indi-\ncates that, in general, SMT presents a higher num-\nber of fixations when compared to the fixation for\nthe NMT system for both ES and ZH languages.\nInterestingly, ZH does not show higher means for\nFC as previously observed for FD. In fact, ZH par-\nticipants show lower FC when compared to Span-\nish, and in the case of NMT, lower than the English\nas well.4.3 Satisfaction\nAs stated previously in Section 3, after the par-\nticipants had completed each text and answered\nthe comprehension questions, they were presented\nwith four statements that measured their level of\nsatisfactions with the subject of the text (the sub-\nject of the text was easy to understand), language\n(the language was easy to understand), questions\n(the question was easy to understand) as well\nas their perceived confidence (I was able to an-\nswer the question confidently) when answering the\nquestions, in a 3-point Likert scale (3-agree, 1-\ndisagree). 
Figure 6 presents the results for all lan-\nguages.\nIn Figure 6, the average per system for each lan-\nguage shows that participants who used the EN\ntexts have the highest satisfaction levels (2.56).\nFor ES, participants who used the NMT system\nseem to be slightly more satisfied (1.6) than par-\nticipants who used the SMT system (1.5). The\nsame pattern can be seen in the ZH participants’\nsatisfaction scores, the average for the NMT was\nconsiderably higher (2.37) than for the SMT sys-\ntem (1.37). This is in line with the task time (Fig-\nure 1) and goal completion (Figure 2) for the ZH\nlanguage, in which participants were able to com-\nplete 93% of the tasks in an average of 387 secs\nusing NMT translations, while using SMT transla-\ntion they were able to complete 86% of the tasks in\nover 444 seconds. These results also illustrate the\ncomments from the participants presented in the\nfollowing section.\n4.4 Retrospective Interviews\nTo triangulate the data from the eye-tracker and\nthe statements presented to the participants after\neach task is completed (satisfaction scores), and\nobtain a more accurate account of the differences\nbetween SMT and NMT in reading comprehen-\nsion tests, we carried out retrospective interviews\nwith all participants. After each participant had\ncompleted the three tasks, we replayed the video\nof their eye movements in the Replay window of\nTobii Studio, and recorded these interviews using\nFlashback as part of a Retrospective Think Aloud\nprotocol. We asked the participants to watch the\nvideo showing their fixations on the screen and\nto describe freely their recollection of what they\nwere thinking or doing at that time in the exercise.\nWe clarified that they should not be worried about\nany grammar mistakes since four out of six of the84\nFigure 4: Fixation Duration - in seconds.(** is the sum for both EN participants for both Text 2 and 3)\nFigure 5: Fixation Count (** is the sum for both EN participants for both Text 2 and 3)\nFigure 6: Ratings of Satisfaction (the higher score, the better)\nparticipants did not have English as their mother\ntongue, the language in which the interviews were\nconducted. At the time of writing this paper, we\nhave not completed a full qualitative analysis ofthese interviews, that is transcription and coding\nof the recordings, therefore what we provide here\nis a summary of the preliminary results.\nAll participants in all languages indicated that85\nText 1 (the baseline text: original English or hu-\nman translation) was easy to understand. They\nfound the text to be short, the content easy to un-\nderstand, and the language clear. Regarding Text\n2, although most participants mentioned that it was\nmore time consuming mainly due to the number\nof questions and options available (seven questions\nand ten options to choose from), their assessment\nof the language quality varied depending on the\nlanguage and the type of engine used for this ex-\nperiment. The same applies for Text 3, although\nthe participants indicated that it was faster to com-\nplete because there were fewer questions and they\nalready knew the dynamic of the exercises.\nIn the case of the English-speaking participants,\nthey did not mention any aspects of the language\nor content that they found particularly difficult, al-\nthough one participant (P02) had difficulties with\nthe coding system to answer the questions in Text\n1 (True, False, Not given). 
This participant also\nmentioned that he was not happy with certain com-\nmas or double negatives on Text 2. He did not find\nany linguistic issues on Text 3. The other English\nparticipant, P04, found the language to be satisfac-\ntory.\nIf we look at the Spanish language, P01 men-\ntioned that Text 2 (NMT engine) was “more con-\nfusing” than Text 1 (Human translation). There\nwere keywords that were “tricky” and she thought\nthey were probably wrong, such as sostenedor in-\nstead of atril forholder , also she mentioned words\nthat seemed to be completely out of context, such\nashechizo for spell. Regarding Text 3 (SMT), the\nparticipant said that it was “really, really tricky”\nand “the language was really difficult” not be-\ncause of words but because of incorrect grammar,\nand she stated that sentences were difficult to un-\nderstand. She commented that “there were times\nwhere it came to my mind that these were direct\ntranslations from English”. Because of the incor-\nrect translations provided by the engine (two En-\nglish options were translated in the same way in\nSpanish by the SMT engine), the participant an-\nswered two questions incorrectly. Participant 5\nmentioned that in Text 2 (SMT, in this case), he no-\nticed grammar mistakes “straight away”, and then\nhe realised that “it was translated by a machine”\nas “almost every sentence had something wrong”.\nHe mentioned that, although he had to read the\nsentences several times to try and make sense of\nthe meaning, the content was not difficult for him.On the other hand, he found Text 3 (NMT, in this\ncase) easier because there were fewer questions\nto answer, but he also mentioned that Text 3 was\nmachine-translated. He noticed a few grammar er-\nrors and inconsistencies. For example, he noticed\nDespido sumario andResumen despido as a trans-\nlation for Summary Dismissal , and Constructivo\nDespido andConstructivo despido forConstruc-\ntive dismissal , and this created confusion when he\nwas answering the comprehension questions. He\nthought that the language was more technical than\nin the other documents but at the same time that\nthe questions were easier to answer. When asked\nif he saw any difference between Text 2 and Text 3,\nhe said that he had no reasons to assume a different\nMT system was used.\nRegarding the Simplified Chinese language, P08\nstated that Text 2 (SMT in this case) was the most\ndifficult text of the three. According to him, Text 2\n“was not fluent”, some words were “weird”, and\nhe had to guess a lot of the text by the context\nand the questions. For him, the first two para-\ngraphs, for example, were difficult to understand.\nTherefore, both contents and language were diffi-\ncult. Regarding Text 3 (NMT), P08 found that it\nwas “in the middle of the three”. The paragraphs\nwere “better” and the questions were “clear”. Al-\nthough, the content was new to the participant, he\nfound the language easier to understand in Text 3\nthan in Text 2 but worse than in Text 1, as “the\nwords were correct”, but the order was wrong, and\nthere were also characters missing. As for P09,\nshe found that the structure of Text 2 (NMT, in this\ncase) was “okay” but she was not familiar with the\ntopic. She thought the language was also “okay”;\nalthough there were errors and sometimes the vo-\ncabulary was incorrect, she could understand it. In\nthis text, she found the headings difficult to place\nin the corresponding section. 
P09 found that Text\n3 (SMT in this case) was the most difficult one.\nShe understood that the text was about dismissals,\nbut she found the language “strange”, “totally un-\nclear”, “the structure was not that good” and it was\n“hard to understand”. She found that Text 2 and\nText 3 were stressful, especially Text 3. She com-\nmented that she could understand 60 percent of\nText 2, but only 20 percent of Text 3.\nIn summary, the EN participants found Text 2\nmore cumbersome to resolve than Text 1 and Text\n3, and therefore more time was required, but only\nP02 mentioned that the language was an issue and86\nthat it could be improved in Text 2 with regards to\ncommas and double negatives. This is very inter-\nesting as it suggests that the difficulties EN partic-\nipants found in the source could have been trans-\nlated in the target languages. For ES and ZH, the\nfour participants found Text 1 (human translation)\neasy in content and language, while they were di-\nvided on Text 2 and Text 3. In Simplified Chi-\nnese, the texts translated with NMT, regardless of\nwhether they were Text 2 or 3, were viewed as bet-\nter linguistically than their counterparts translated\nwith SMT, even when the NMT texts had certain\nterms or grammar turns that were wrong, and this\ninfluenced the participants’ responses. In Spanish,\none of the participants found the NMT option bet-\nter linguistically, while the other participant found\nthat both options were comparable and possibly\ncame from the same MT system.\n5 Conclusions and Future Work\nThe aim of this pilot experiment was to verify the\nmethodology to measure the impact of the quality\nof two MT paradigms - NMT and SMT - on the\nend user. For that, we established three research\nquestions regarding efficiency (goal completion),\ncognitive effort, and satisfaction.\nRegarding RQ1 (Which MT engine offers bet-\nter efficiency to participants?), results show that\nparticipants (Figure 2) in the two target languages\n- Spanish and Simplified Chinese - were able to\ncomplete more tasks successfully when using the\nNMT translated texts when compared to the SMT\ntranslations, as well as when compared to partic-\nipants who used the original EN texts. Regard-\ning the time spent to complete the texts, again, we\nnoted that when using the NMT translations, par-\nticipants were faster than when using SMT trans-\nlations and, moreover, have task completion times\ncloser to participants who used the English text\nthan the results for SMT.\nRegarding RQ2 (To what extent are there dif-\nferences in participants cognitive processes due to\ndifferent engines?), results for the FD (Figure 4)\nand FC (Figure 5) show that cognitive effort does\nnot seem to differ much for ES, and presents a\nbit of mixed results for ZH, were FD are slightly\nlonger for the NMT system, whereas FC are lower.\nWe believe that with a greater number of partici-\npants, a clearer tendency would be observed.\nRegarding our last research question (RQ3:\nWhat is the participants level of satisfaction withSMT and NMT when reading for comprehen-\nsion?), participants rated NMT higher and also\ncommented that the language in NMT texts was\neasier to understand in the post-task retrospective\ninterviews. It is also necessary to point out that\nES and ZH participants commented on the fact that\nthe language in the human translation (Text 1) was\neasy to understand, while they struggled in certain\nsections in both NMT and SMT texts (Texts 2 and\n3). 
This was not the case with the EN participants, who only made slight remarks on the quality of the English but did not mention any misunderstandings of the texts.
We are aware of the limitations of the results presented here, since the number of participants was very low and there were few texts for each MT system. Our next steps are to add more languages, especially those languages which have been showing greater improvement with NMT over the SMT paradigm, as well as to gather more participants. Another consideration to bear in mind is the nature of the texts; we noted that the combination of difficult text with easy questions and vice-versa could cloud the findings.
Furthermore, we believe that this research could benefit from computing more eye-tracking measures, such as visit count, which is the number of visits to an area of interest, as the shifts of attention between the questions and the text may be an indicator of cognitive effort (Castilho et al., 2014).
Acknowledgements: We would like to thank Dag Schmidtke from Microsoft Ireland and Joss Moorkens for invaluable help, and the participants for their support on this pilot experiment. This research was supported by the Edge Research Fellowship programme that has received funding from the European Union's Horizon 2020 research and innovation programme under the Marie Sklodowska-Curie grant agreement No. 713567, and by the ADAPT Centre for Digital Content Technology, funded under the SFI Research Centres Programme (Grant 13/RC/2106) and co-funded under the European Regional Development Fund.
References
Bahdanau, Dzmitry, Kyunghyun Cho, and Yoshua Bengio. 2014. Neural Machine Translation by Jointly Learning to Align and Translate. CoRR, abs/1409.0473.
Bentivogli, Luisa, Arianna Bisazza, Mauro Cettolo, and Marcello Federico. 2016. Neural versus Phrase-Based Machine Translation Quality: a Case Study. CoRR, abs/1608.04631.
Bojar, Ondřej, Rajen Chatterjee, Christian Federmann, Yvette Graham, Barry Haddow, Matthias Huck, Antonio Jimeno Yepes, Philipp Koehn, Varvara Logacheva, Christof Monz, Matteo Negri, Aurelie Neveol, Mariana Neves, Martin Popel, Matt Post, Raphael Rubino, Carolina Scarton, Lucia Specia, Marco Turchi, Karin Verspoor, and Marcos Zampieri. 2016. Findings of the 2016 Conference on Machine Translation. In Proceedings of the First Conference on Machine Translation, pages 131–198, Berlin, Germany, August. Association for Computational Linguistics.
Castilho, Sheila and Sharon O'Brien. 2016. Evaluating the impact of light post-editing on usability. In LREC, pages 310–316, Portoroz, Slovenia, May.
Castilho, Sheila, Sharon O'Brien, Fabio Alves, and Morgan O'Brien. 2014. Does post-editing increase usability? A study with Brazilian Portuguese as Target Language. In Proceedings of the European Association for Machine Translation (EAMT), pages 183–190, Dubrovnik, Croatia.
Castilho, Sheila, Joss Moorkens, Federico Gaspari, Iacer Calixto, John Tinsley, and Andy Way. 2017a. Is neural machine translation the new state of the art? The Prague Bulletin of Mathematical Linguistics, 108(1):109–120.
Castilho, Sheila, Joss Moorkens, Federico Gaspari, Rico Sennrich, Vilelmini Sosoni, Panayota Georgakopoulou, Pintu Lohar, Andy Way, Antonio Valerio Miceli Barone, and Maria Gialama. 2017b. A Comparative Quality Evaluation of PBSMT and NMT using Professional Translators. In MT Summit 2017, Nagoya, Japan.
Castilho Monteiro de Sousa, Sheila. 2016. Measuring acceptability of machine translated enterprise content. Ph.D. thesis, Dublin City University.
Doherty, Stephen and Sharon O'Brien. 2012. A user-based usability assessment of raw machine translated technical instructions. In Proceedings of the Tenth Conference of the Association for Machine Translation in the Americas, pages 1–10, San Diego, California, USA.
Fuji, Masaru, N. Hatanaka, E. Ito, S. Kamei, H. Kumai, T. Sukehiro, T. Yoshimi, and Hitoshi Isahara. 2001. Evaluation method for determining groups of users who find MT useful. In MT Summit VIII: Machine Translation in the Information Age, pages 103–108.
Fuji, Masaru. 1999. Evaluation experiment for reading comprehension of machine translation outputs. In Proceedings of MT Summit VII, pages 285–289.
Jean, Sébastien, Orhan Firat, Kyunghyun Cho, Roland Memisevic, and Yoshua Bengio. 2015. Montreal Neural Machine Translation Systems for WMT'15. In Proceedings of the Tenth Workshop on Statistical Machine Translation, pages 134–140, Lisbon, Portugal, September.
Jones, Douglas, Edward Gibson, Wade Shen, Neil Granoien, Martha Herzog, Douglas Reynolds, and Clifford Weinstein. 2005. Measuring human readability of machine generated text: three case studies in speech recognition and machine translation. In Acoustics, Speech, and Signal Processing, 2005. Proceedings (ICASSP'05), IEEE International Conference on, volume 5, pages v–1009. IEEE.
Jordan-Núñez, Kenneth, Mikel L. Forcada, and Esteve Clua. 2017. Usefulness of MT output for comprehension: an analysis from the point of view of linguistic intercomprehension. Volume 1, pages 241–253, September.
Klerke, Sigrid, Sheila Castilho, Maria Barrett, and Anders Søgaard. 2015. Reading metrics for estimating task efficiency with MT output. In Proceedings of the Sixth Workshop on Cognitive Aspects of Computational Language Learning, pages 6–13.
Koehn, Philipp and Rebecca Knowles. 2017. Six challenges for neural machine translation. CoRR, abs/1706.03872.
Roturier, Johann. 2006. An investigation into the impact of controlled English rules on the comprehensibility, usefulness and acceptability of machine-translated technical documentation for French and German users. Ph.D. thesis, Dublin City University.
Scarton, Carolina and Lucia Specia. 2016. A reading comprehension corpus for machine translation evaluation. In Nicoletta Calzolari (Conference Chair), Khalid Choukri, Thierry Declerck, Sara Goggi, Marko Grobelnik, Bente Maegaard, Joseph Mariani, Helene Mazo, Asuncion Moreno, Jan Odijk, and Stelios Piperidis, editors, Proceedings of the Tenth International Conference on Language Resources and Evaluation (LREC 2016), Paris, France, May. European Language Resources Association (ELRA).
Stymne, Sara, Henrik Danielsson, Sofia Bremin, Hongzhan Hu, Johanna Karlsson, Anna Prytz Lillkull, and Martin Wester. 2012. Eye tracking as a tool for machine translation error analysis. In LREC, pages 1121–1126.
Tomita, Masaru, Masako Shirai, Junya Tsutsumi, Miki Matsumura, and Yuki Yoshikawa. 1993. Evaluation of MT systems by TOEFL. In Proceedings of the Theoretical and Methodological Implications of Machine Translation (TMI-93), pages 252–265.", "main_paper_content": null }
{ "decision": "Unknown", "reviews": [] }
0
0
[]
[]
{ "id": "edogEs_Lja", "year": null, "venue": "EAMT 2023", "pdf_link": "https://aclanthology.org/2023.eamt-1.32.pdf", "forum_link": "https://openreview.net/forum?id=edogEs_Lja", "arxiv_id": null, "doi": null }
{ "title": "Migrant communities living in the Netherlands and their use of MT in healthcare settings", "authors": [ "Susana Valdez", "Ana Guerberof Arenas", "Kars Ligtenberg" ], "abstract": null, "keywords": [], "raw_extracted_content": "© 2023 The authors. This article is licensed under a Creative Commons 4.0 license, no derivative works, attribution, CC-BY-ND. Migrant communities living in the Netherlands and their use of MT in healthcare settings Susana Valdez Leiden University [email protected] Ana Guerberof Arenas University of Groningen [email protected] Kars Ligtenberg Radboud University [email protected] Abstract As part of a larger project on the use of MT in healthcare settings among migrant com-munities, this paper investigates if, when, how, and with what (potential) challenges migrants use MT based on a survey of 201 non-native speakers of Dutch currently liv-ing in the Netherlands. Three main findings stand out from our analysis. First, the data shows that most migrants use MT to under-stand health information in Dutch and com-municate with health professionals. How MT is used and received varies depending on the context and the L2 language level, as well as age, but not on the educational level. Sec-ond, some users face challenges of different kinds, including a lack of trust or perceived inaccuracies. Some of these challenges relate to comprehension, bringing us to our third point. We argue that more research is needed to understand the needs of migrants when it comes to translated expert-to-non-expert health communication. This questionnaire helped us identify several topics we hope to explore in the project's next phase. 1 Introduction Access to health information has been recognized as essential (Royston et al., 2020; WHO and UNICEF, 2018), including in meeting the health-related Sus-tainable Development Goals (United Nations, 2020). Evidence, however, suggests that language barriers remain a significant factor contributing to disparities in the quality of care (Bernard et al., 2006; Khoong and Rodriguez, 2022; Liebling et al., 2020). When health information is not available in a lan-guage that the patient can understand, most people resort to public online machine translation (MT) as the only available alternative (Vieira et al., 2021:1519). In the context of healthcare, MT can thus be seen as a potential facilitator of a “multilin-gual health system,” where people from different cultural and linguistic backgrounds, such as mi-grants, can have access to health information and medical care in a language that they understand (e.g., Torres-Hostench, 2022:6). However, unin-formed users with limited MT literacy may face potential risks when using this technology, such as assuming MT output is accurate without fully under-standing its limitations (Vieira et al., 2021:1527) or assuming that MT provides privacy (Vieira et al., 2022b:18). To tackle this topic, this paper reports on a specific use of MT to facilitate communication in healthcare settings between experts and non-experts in migrant communities in the Netherlands. The paper first reviews related work on MT-mediated communication, with a special focus on health-related contexts; then describes the survey methodology adopted and reports the results. Finally, the paper discusses the findings and shares conclusions. 2 Related Work This section covers the work done in MT usability and MT in healthcare. 
2.1 MT use initiated by non-language profes-sionals The first studies on the usability of MT have focused on how users of applications, tools, or webs under-stand MT-mediated communication. Using ques-tionnaires, interviews, eye-trackers, and retrospec-tive think-aloud methods, this research explores comprehensibility and/or acceptability, but also usa-bility, defined as effectiveness, efficiency, and satis-faction. Examples of these studies are Gaspari (2004), Stewart et al. (2010), Doherty and O’Brien (2012, 2014), Castilho (2016), Castilho and O’Brien (2018) and Guerberof-Arenas et al. (2019; 2021). This pioneering work seeks to include the final user \n in the translation cycle and explore how they receive MT in depth. More recently, with the growing use of public MT engines, there has been an increasing interest in examining how MT is used in various social contexts. This research has mainly examined the use of MT for gisting purposes.1 Much has been participant-oriented in nature. Often with the use of questionnaires and less frequently with interviews, researchers have focused on “everyday” users of MT. For instance, Nurminen and Papula (2018) combined usage statistics with an end-user question-naire to explore the use of the desktop version of PDF Translator, and Vieira et al. (2022a) investigat-ed typical uses and perceptions of MT based on a questionnaire aimed at United Kingdom residents. A great deal of research has also been carried out on the use of MT for L2 acquisition (Lee, 2020) or in academic settings (Bowker, 2019, 2021; Dorst et al., 2022; Loock et al., 2022). These studies have argued for the importance of training in Machine Translation Literacy. This training would entail gaining an understanding of when and where MT is unsuitable and developing the skills to effectively manage and correct translation errors (cf. Bowker and Ciro, 2019). 2.2 MT use in healthcare settings In comparison, there are fewer empirical studies on the use of MT in healthcare settings to facilitate expert-to-non-expert communication, and, therefore, many questions remain unanswered. On the use of MT initiated by asylum seekers, case studies conducted at detention centers in Leipzig and Ljubljana suggest that the use of MT to access offi-cial information, some of which in healthcare set-tings, is widespread (Fiedler and Wohlfarth, 2018; Pokorn and Čibej, 2018). On MT use initiated by health professionals with the purpose of communicating with patients, Me-handru et al. (2022) conducted a qualitative inter-view study to examine how MT is currently used in these settings. They found that healthcare providers experience difficulties in the presence of language barriers due to limited time and resources, cultural differences, inadequate medical literacy rates, and accountability for communication errors. Healthcare providers relied on a combination of MT, interpret-ing, and their own knowledge of the patients’ lan-guages and developed communication strategies to assess if doctors-patient communication had been ——————————————————————— 1 MT gisting can be defined as “knowingly consuming raw machine translation with the aim of understanding as much of its meaning as needed for a specific purpose” (Nurminen, 2021:30) successful, including back-translation and testing patient comprehension. On MT use initiated by health services to com-municate public health information, Pym et al. 
(2022), focusing on COVID-19 vaccination infor-mation in 2021 and 2022, conducted a survey on using Google Translate on the official website of the Catalan health service. They analyzed the strategic advantages of MT and the nature of the main errors and argued for a multilingual communication policy. Turner et al. (2015) conducted a feasibility study where raters were asked to assess machine-translated public health texts from English to Chinese com-pared to PE versions, consistently selecting HT over PE. Finally, Vieira et al. (2021) conducted a qualitative meta-analysis of the literature on MT in relation to medical and legal communication. From their review, we can conclude that, in healthcare, the use of MT is often described as high-risk given its implications for health, but it is also often perceived as the only available solution in these settings. The article also discusses the need for cross-disciplinary research on the use of MT in healthcare, as current research often overlooks the complexities of language and translation. The review emphasizes the importance of increasing awareness of the potential for MT to exacerbate social inequalities and put specific communities at risk. 2.3 Expert to non-expert medical translation Translation in healthcare settings, or medical transla-tion, is usually understood as a specific and highly specialized type of professional translation that fo-cuses on medicine and other fields closely related to health and disease (Montalt, 2012). In healthcare settings, communication can range from highly spe-cialized and written by experts addressing experts (e.g., clinical trial protocols or scientific papers) to those that are meant to be read and understood by non-experts or laypeople (e.g., informed consent forms or patient information leaflets). Recent research on medical translation has mostly focused on the latter. Adopting reception-oriented approaches and mainly using offline methods (see Krings, 2005:348 for the distinction between online and offline methods), translation researchers have looked at the lay-friendliness of translated patient package inserts (Askehave and Zethsen, 2003, 2014), patients’ needs for information and the suita-bility and readability of written resources available in hospitals (García Izquierdo, 2016; García-Izquierdo and Muñoz-Miquel, 2015), or how ex-plicitation in translated medical texts is received by Spanish speakers living in the US (Jiménez-Crespo, \n 2017), among other topics. One of the aspects that these studies have in com-mon is that they focus on how laypeople receive medical texts translated by translation professionals or experts in medical communication (including health professionals). To the best of our knowledge, no empirical study focuses on migrants’ use of MT, specifically in healthcare settings. 3 Methodology This study is part of a larger research project aiming to explore for the first time migrants’ use of MT in healthcare settings in the Netherlands. In the first phase, a questionnaire elicited data mainly on if, when, and how migrants use MT in healthcare settings and their (potential) main challenges. Following this, 12 respondents participated in follow-up in-depth interviews to further explore the challenges identified in the first phase. Our idea was to obtain qualitative data to understand not only the usage but also the participants’ difficulties, emotions and MT training needs. 
To collect this data, we applied the vignette technique, which makes use of a short story to elicit perceptions, opinions, and beliefs to typical scenarios to clarify participants’ decision-making processes and allow for the exploration of actions in context (Finch, 1987). This project has the long-term goal of co-creating training material with target community members as part of an action research initiative. For reasons of space, in this paper, we report the findings from the project’s first phase. 3.1 Questionnaire design and data collection Considering the outlined research gaps, we designed a questionnaire guided by the following research questions (RQ): RQ1: Do migrants currently living in the Nether-lands use MT in health-related contexts? RQ2: If they do, when and how do they use it? RQ3: What are migrants’ challenges when using MT in health contexts? The questionnaire was designed in English using the online survey tool Qualtrics and following the best practices associated with using online question-naires in Translation Studies (Mellinger and Baer, 2021). To make the questionnaire more accessible to specific targeted communities, it was professionally translated into Arabic, Italian, Portuguese, Spanish, Tigrinya, and Turkish. Nevertheless, participation was open to any non-native speaker of Dutch cur-rently living in the Netherlands. The questionnaire consisted of thirty-seven ques-tions, grouped into four sections. Besides the eligi-bility criteria (currently living in the Netherlands and being a non-native Dutch speaker) and profile-related questions (demographic characteristics and background) of sections 1 and 2, respondents were asked in section 3 a series of multiple choice closed-ended questions to understand their use of MT in specific health-related contexts. For instance, re-spondents were asked if and how they use MT at a pharmacy or during a doctor’s appointment. These questions were followed by open-ended questions aimed at eliciting other related contexts where MT was used and the problems participants faced when using MT in healthcare settings. In the last section, respondents were asked about their experiences using MT in day-to-day life, which included questions about frequency of use, the type of MT system, level of satisfaction, and easiness or difficulty of use. The questionnaire in English and its translations can be accessed here: https://github.com/susanavaldez/-Health-information-accessibility-in-migrant-communities. With respect to the analysis of the respondents’ an-swers to open questions, the data were exported to the qualitative data analysis software ATLAS.ti where the answers were coded and organized around recurring themes using inductive coding (Saldaña, 2016). The questionnaire was pre-tested by six non-native speakers of Dutch and received approval from Lei-den University’s Ethics Committee of the Faculties of Humanities and Archaeology (ref. 2022/22), which included the corresponding data management plan. The questionnaire was released in April 2022 and was available until December 2022. It was cir-culated online through social media and WhatsApp dedicated groups of migrants living in the Nether-lands, institutions working with migrant communi-ties, Dutch universities’ newsletters and networks, and personal acquaintances. The call for respondents also took place offline by distributing flyers at local libraries and markets. 3.2 Respondents The survey was completed by 296 participants. 
From these, 91 were excluded as they did not com-ply with the requirements (that is, non-native speak-ers of Dutch currently living in the Netherlands), they filled in the survey more than once, or did not answer at least 1 question of the non-demographic sections. The total number of participants was 201. The majority of respondents, 150, moved to the Netherlands in the last ten years. Most of them are in paid work (72%) and/or studying (15%), and they hold an MA or equivalent (37%), followed by those that hold a BA or equivalent (29%) and a high \n school degree (16%). Most participants are aged between 35–44 (38%) and 25–34 (29%). Finally, there was a higher number of responses from female participants (73%). Concerning native languages, the distribution of the number of participants above 1% is as follows: Portuguese (39%), Italian (16%), Spanish (10%), English (6%), Arabic (3%), Turkish (3%), and Chi-nese (2%). Perhaps the higher number of participa-tion from Portuguese, Italian and Spanish speakers is due to the native languages of the authors and col-laborators of this project. Even though we reached out to institutions that work with migrant communi-ties, this did not always translate into a high en-gagement level. Regarding Dutch proficiency, a relevant number of respondents reported not knowing any Dutch (23%) or being a Beginner user in the A1 or A2 level2 (37%). The remaining respondents reported in smaller percentages being Intermediate users or B1 (20%), Advanced users or B2 (11%), and Proficient users or C1/C2 (8%). Given these numbers, it is not surprising that most respondents reported English as the most common language used at work and in educational contexts. One hundred forty employed respondents reported English as the language used at work for reading, writing, and speaking; and 31 respondents studying also reported English as the language used in educational contexts for reading, writing, and speaking. The participants reported that the most frequently used MT engine is Google Translate (79%), fol-lowed by DeepL (11%), and Bing Microsoft Trans-lator (1%). 4 Results In this section, we present the results from the questionnaire by grouping the findings into six areas: usage of MT, methods of MT usage, level of easiness and satisfaction, the importance of features, factors such as Dutch language, age, and education, MT features of value and challenges when using MT. 4.1 MT usage by migrant communities To understand the role of MT in health contexts, the participants were asked if they use MT in six common health situations. These were face-to-face medical appointments, health-related letters, calling the doctor, buying medication, and going to a vaccination center or emergency room. For each multiple-choice question, respondents were ——————————————————————— 2 According to the Common European Framework of Reference for Languages (CEFR). presented with statements to choose from (they could choose more than one), such as “I don’t use machine translation,” “I use machine translation by typing on my mobile phone,” or “Not applicable.” Table 1 shows a summary of these responses. The number of respondents varies per question, and this can be seen in column N. 
I use MT I don't use MT Other N/A N Health letters 70.16% 19.76% 6.05% 4.03% 201 Buying medication 57.14% 35.52% 5.02% 2.32% 198 Medical appoint-ments 47.06% 31.62% 13.24% 8.09% 201 Emergency room 30.99% 27.27% 6.20% 35.54% 201 In a medical call 25.76% 50.66% 15.72% 7.86% 196 Vaccination center 26.27% 51.61% 9.22% 12.90% 196 Table 1: MT usage in healthcare settings In total, respondents mentioned using MT in these health situations 641 times (55%) vs. 521 times (45%) where MT was not used. We can observe that most use MT to read health-related letters sent by their doctor or the Health Ministry (70.16%) and buy medication at the pharmacy or supermarket (57.14%). Respondents also reported using MT to communicate with health professionals in face-to-face medical appointments in meaningful numbers (47.06% use MT vs. 31.62% that do not use MT), indicating that MT is used in healthcare contexts also in synchronous situations. To communicate at the vaccination center or over the phone with health professionals, respondents reported using MT in smaller percentages. Respondents that chose the “Other” option used this opportunity to explain that, instead of using MT in these health situations, they spoke in English with health professionals (68 mentions) or resorted to family members and friends to interpret for them (15 mentions). Some respondents (6) also used this op-tion to clarify that instead of using an MT phone app, they used the web version or the browser exten-sion. Other types of responses were doctors or recep-tionists translating documents when asked. 4.2 Methods of MT usage Table 2 shows that participants use MT primarily by typing directly on the phone app or using the camera function, followed by preparing beforehand with the help of MT. Using MT by dictating or family and friends using MT for the user are the less frequent options. \n I use MT Before-hand Dictate Type Camera Family Health letters ND 5.17% 32.18% 60.34% 2.30% Buying medica-tion 14.86% 4.73% 37.84% 41.89% 0.68% Medical appointments 33.59% 4.69% 60.94% ND 0.78% Emergency room 13.33% 4% 36% 37.33% 9.33% In a medical call 64.41% 3.39% 16.95% 10.17% 5.08% Vaccination center 17.54% 7.02% 29.82% 43.86% 1.75% Table 2. How MT in healthcare settings is used (For N, see Table 1) It is when reading health-related letters that re-spondents use the camera function the most (60.34%), followed by typing directly in the phone app (32.18%). As Table 2 shows, when buying med-ication at the pharmacy or the supermarket, respond-ents also report opting more often for the camera function (41.89%), followed by typing directly on the phone app (37.84%). Respondents opt more often to prepare before-hand by using MT when calling the doctor to ask a question or making an appointment (64.41%) and in face-to-face medical appointments (33.59%), fol-lowed by when buying medication at the pharmacy or the supermarket (14.86%). This is expected since these are immediate situations where using MT (synchronously) might be more complex than in interactions like reading correspondence. 4.3 Level of satisfaction and easiness of MT After the section on MT usage in health contexts, respondents were also asked about MT in their day-to-day life. Participants were asked, “How easy or difficult is it to use machine translation?” and “Overall, how satisfied or dissatisfied are you with machine translation?” For both questions, the partic-ipants selected a statement on a 5-point Likert. Fig-ures 1 and 2 show these results (N = 186 partici-pants). Figure 1. 
How easy or difficult is it to use machine translation? \n Figure 2. Overall, how satisfied or dissatisfied are you with machine translation? The results in Figure 1 show that 62% found MT extremely easy to use, 26% Somewhat Easy to use, 11% Neither easy nor difficult, and 1% Somewhat difficult. The results in Figure 2 show that 29% are Extremely satisfied, 52% Somewhat satisfied, 14% Neither satisfied nor dissatisfied, 4% Somewhat dissatisfied, and 2% Extremely dissatisfied. Participants seem to find that MT is a tool easy to use and overall satisfying for their purposes. 4.4 Importance of features of MT Another question concerned the importance of cer-tain features of MT in deciding whether or not to use it. These characteristics were: accuracy (in terms of maintaining meaning), ease of use, being free of charge, the speed of the MT service, and confidenti-ality and privacy. The respondents were asked to rate these characteristics on a Likert scale ranging from 1 (Not at all important) to 5 (Extremely im-portant). The results are shown in Figure 3 (n= 186). \n\n Figure 3. How important are certain features for deciding whether to use MT? The results clearly show that respondents care greatly about all of these characteristics, as for most of these 80% or more of the respondents considered the characteristic to be either ‘Very important’ or ‘Extremely important.’ The only aspect that stands out is that of confidentiality and privacy, which is still positively skewed, but only just over half (61%) of the respondents considered it very or extremely important. This seems to suggest that privacy is not as important as the other features, even though this is one of the issues that professional translators find very relevant when using MT, since they signed confidentiality agreements. The questionnaire data does not help us understand the underlying causes, but this is a topic that warrants further exploration in the next phase of the project. 4.5 Dutch language knowledge, age, and educa-tion level Another important factor we wanted to explore was if participants’ Dutch level influenced their recep-tion of MT. The participants had self-reported their level in the questionnaire as follows (in absolute numbers): Beginners (74), Intermediate (40), Ad-vanced (23), Proficient (16), I do not know any Dutch (47), and Other (1). To see if the variable Dutch language level affect-ed the level of Easiness and Satisfaction that the participants had rated from 1 to 5 (from negative to positive), a Kruskal-Wallis test for non-parametric data was run on the data. The results show no statis-tically significant difference between Dutch Level and Easiness/ Satisfaction. Figure 4. Dutch language level and Satisfaction To analyze the data further, the Dutch levels were regrouped into three wider levels: Beginners 0-A2, Intermediate B1-B2, and Advanced C1+. A Kruskal-Wallis test for non-parametric data reveals that there are statistically significant differences between Dutch level and Satisfaction only (H(2) = 9.03, p < .01) and not between Dutch Level and Easiness. Post-hoc comparisons show statistically significant differences between Advanced and Beginner (Z = 0.13; p = -2.85) levels but not between Advanced and Intermediate or Beginner and Intermediate. This seems to indicate that the lower the Dutch level of the participants, the more satisfied they are with the MT proposals. Therefore, MT has a more prominent role when the Dutch language has not been mas-tered. 
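To make the statistical procedure above concrete, the following Python sketch shows how a Kruskal-Wallis test and a simple pairwise follow-up can be run on 1–5 Likert ratings grouped by language level. It is a minimal illustration with made-up data and assumed column names, not the analysis code used in this study; the post-hoc Z values reported here would normally come from a dedicated procedure such as Dunn's test, for which a pairwise Mann-Whitney U test is only a rough stand-in.

import pandas as pd
from scipy.stats import kruskal, mannwhitneyu

# Toy data: one satisfaction rating per respondent (1 = extremely dissatisfied ... 5 = extremely satisfied),
# with the regrouped Dutch level as the grouping factor.
df = pd.DataFrame({
    "dutch_level": ["Beginner", "Beginner", "Beginner", "Intermediate", "Intermediate", "Advanced", "Advanced"],
    "satisfaction": [5, 4, 5, 4, 3, 3, 2],
})

# Kruskal-Wallis test across the three groups (non-parametric, suitable for ordinal Likert data).
groups = [g["satisfaction"].to_numpy() for _, g in df.groupby("dutch_level")]
h_stat, p_value = kruskal(*groups)
print(f"Kruskal-Wallis: H = {h_stat:.2f}, p = {p_value:.3f}")

# Rough pairwise follow-up between two of the groups; a real post-hoc analysis should
# correct for multiple comparisons (e.g. Bonferroni) or use Dunn's test instead.
adv = df.loc[df["dutch_level"] == "Advanced", "satisfaction"]
beg = df.loc[df["dutch_level"] == "Beginner", "satisfaction"]
u_stat, p_pair = mannwhitneyu(adv, beg, alternative="two-sided")
print(f"Advanced vs Beginner: U = {u_stat:.2f}, p = {p_pair:.3f}")
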
To better explore the factor Age, we regrouped the original six age ranges into three: Young adult (18–24 and 25–34), Middle age (35–44 and 45–54), and Older adult (55–64 and 65–74). A Kruskal-Wallis test for non-parametric data reveals that there are statistically significant differences between Age and Easiness only (H(2) = 10.07, p < .00), but not be-tween Age and Satisfaction. Post-hoc comparisons show statistically significant differences between Middle age and Older adults (Z = 3.27; p = 0.00) and Older and Young adults (Z = 2.90; p = 0. 00) but not between Middle-aged and Young adults. This shows that the participants in the 55 to 74 age brack-et found MT more difficult to use, but they were not less satisfied. The Education Level of the participants reveals no statistically significant differences. In conclusion, the participants’ Dutch level seems to have an effect on their level of satisfaction with MT, while their Age seems to have an effect on the ease of use of MT. \n\n Figure 5. Age group and Easiness 4.6 Challenges when using MT in health con-texts In an open-ended question, we asked respondents, “Tell us what problems you face when using ma-chine translation in a health-related context?” The main themes that emerged from the analysis of the answers are shown in Table 3. This question gath-ered 117 answers. The most common view amongst respondents, mentioned 51 times, is related to the inaccuracy of the MT output. Respondents referred to “inaccu-rate,” “wrong,” or “bad” translations as challenging but also to the misunderstandings that can arise from these translations. As one respondent reported: “às vezes as traduções de frases complexas (ou até mesmo termos específicos) não são exatas e isso pode gerar mal entendimento” [sometimes transla-tions of complex sentences (or even specific terms) are not exact and this can lead to misunderstand-ings].3 As a solution for this perceived inaccuracy, 11 of these respondents reported a preference for indirect translation or using English as a pivot language. For example, one respondent commented: “La traduzione dall'olandese non è accurata. Uso la traduzione dall'olandese all'inglese” [The transla-tion from Dutch is not accurate. I use the translation from Dutch to English]. The second most recurrent theme, expressed 17 times, was related to comprehensibility. Respond-ents who reported this as a challenge referred to unclear translations or nonsensical translations, as these responses illustrate: ——————————————————————— 3 Respondents’ answers are quoted verbatim, including typos. When the answer is not in English, our own translation is pro-vided in squared brackets. “certe volte la traduzione non e' chiara” [sometimes the translation is not clear] “A veces no tiene sentido lo que plantea la traduc-ción automática” [Sometimes what MT proposes does not make sense] Themes Mentions Inaccurate translations 51 Comprehensibility issues 17 Context-related issues 12 Lack of trust in MT 10 Technical issues 10 Terminology difficult to translate 5 Slow and time-consuming 4 Table 3. Most common themes (above two mentions). Other respondents alluded to another type of com-prehension challenge. What these respondents found challenging was understanding the medical language and terminology, not necessarily the MT output. For example, one respondent wrote: “Tampoco conozco la terminología médica en español. Me baso en imagenes” [I also do not know the medical termi-nology in Spanish. I rely on images]. 
And another commented: “Technical vocabulary is sometimes difficult to understand.” Context-related issues was the third most recurrent theme (12 mentions). Respondents commented that one of the challenges they face when using MT in health situations is that the translations appear cor-rect but do not apply to the health context. Other respondents, when referring to context-related chal-lenges, observed that health information could be culture-specific. One respondent gave the example of symptoms and pain to explain that it cannot be translated literally: “Certain terms to describe a symptom are very culture-specific and/or don’t translate literally. E.g.: the way different types of pain are described in different languages.” And an-other gave the example of definitions: “Credo che uno dei problemi più comuni sia che molte definizioni cambino molto da cultura a cultura” [I believe one of the most common problems is that many definitions change considerably from culture to culture]. The fourth most recurrent theme that emerged from the analysis is related to not trusting the MT output (10 mentions). When discussing trust, some respondents expressed concerns about trusting MT to translate specifically health information, while others expressed a more generalized lack of trust for, \n\n in the words of one of the respondents, “translation apps”. Another noteworthy perspective was also shared by some respondents. For them, the problem relies on not knowing if the translation is accurate. Com-menting on this, one of the respondents wrote: \"I sometimes prepare before going [to a health-related situation] by checking specific phrases, but of course I can never be sure if the phrase the translator gives me is the correct one or is in common usage (...).\" Another respondent commented along the same lines: “Nunca estoy segura al 100% de si la traduc-ción que Google me está dando es correcta. (...) y siempre suelo quedar satisfecha con las traduccio-nes, pero sin tener completa certeza de si un hu-mano que entienda ambos idiomas lo traduciría igual que Google.” [I am never 100% sure if Google’s translation is correct (...) and I am always pleased with the translations, but I am never com-pletely sure if a human who understands both lan-guages would translate it like Google.] As evident from these elucidative answers, the lack of trust in the MT output is associated with the lack of knowledge of the source language and the user's inability to check the translation accuracy for them-selves. This lack of trust can lead to hesitation or reluctance in using the MT output, as explained by another respondent: “(...) so sometimes it doesn't help or I don't feel very confident”. Technical issues were also mentioned by respond-ents (10 mentions). These were related to the diffi-culty of translating scanned files, handwritten text or PDFs, as well as using the camera option or the browser extension to translate websites. A smaller number of respondents referred to the difficulty of translating technical terminology (5 mentions), while others commented on how slow and time-consuming it is to use MT in a health con-text (4 mentions). 5 Conclusion The responses from the participants shed some light on the use of MT by migrant communities in the Netherlands. First and foremost, the majority of migrants use MT in several health contexts to access and understand health information presented to them in Dutch, but also to communicate with health pro-fessionals. This usage is different depending on the situation. 
When the situation is asynchronous, for example reading a letter from the Health Ministry or the family doctor, they use the phone’s camera func-tion. When the communicative situation is synchro-nous, they use MT more in a face-to-face appoint-ment than in emergency situations, opting to type in the app or to prepare beforehand using MT. Participants find MT easy to use and are satisfied overall, with only a small percentage finding it diffi-cult or extremely dissatisfying. This seems logical. MT is used then as a tool to communicate when there is a lack of knowledge of the source language and not as a tool to improve the speed of communi-cation. They also care greatly about MT being accu-rate, free of charge, fast, easy to use, and to a lesser extent about privacy which is somewhat surprising but in line with previous research (see Vieira et al., 2022b). The findings suggest then that, on the one hand, MT provided access to health information that per-haps otherwise would not have been possible. On the other hand, some users are facing specific challenges of various kinds. For example, they reported chal-lenges such as perceived inaccuracy or lack of trust in MT output in healthcare settings. Our findings also suggest that some migrants face comprehension difficulties associated with unclear translations but also understanding MT-mediated health texts. Based on the users’ statements, we argue that there is a need for a more nuanced understanding of migrants’ needs regarding translated expert-to-non-expert communication that goes beyond a more literal translation of medical language and terminology, involving interlingual but importantly also intralin-gual translation. The second part of the project will certainly bring more qualitative data that will expand the information presented here. We are also aware of the limitations of this study, as we mentioned before, the number of participants (majority of Portuguese, Italian and Spanish) are only a sample of all the migrant communities in the Netherlands. This questionnaire helped us identify several topics to explore further in the follow-up interviews and we will address the issues identified and answer these new questions in our future work. Acknowledgements This research has been funded by the Leiden Uni-versity Centre for Digital Humanities. The authors would also like to thank the participants, as well as the professional translators that translated the questionnaire. References Askehave, Inger, and Karen Korning Zethsen. 2003. “Communication Barriers in Public Discourse.” Information Design Journal 4(1):23–41. Askehave, Inger, and Karen Korning Zethsen. 2014. “A Comparative Analysis of the Lay-Friendliness of Danish EU Patient Information Leaflets from 2000 to 2012.” Communication and Medicine 11(3):209–22. Bernard, Andrew, Misty Whitaker, Myrna Ray, Anna \n Rockich, Marietta Barton-Baxter, Stephen L. Barnes, Bernard Boulanger, Betty Tsuei, and Paul Kearney. 2006. “Impact of Language Barri-er on Acute Care Medical Professionals Is De-pendent Upon Role.” Journal of Professional Nursing 22(6):355–58. Bowker, Lynne. 2019. “Machine Translation Literacy as a Social Responsibility.” Pp. 104–7 in Proceed-ings of the Language Technologies for All (LT4All). Paris. Bowker, Lynne. 2021. “Promoting Linguistic Diversity and Inclusion.” The International Journal of In-formation, Diversity, & Inclusion (IJIDI) 5(3). Bowker, Lynne, and Jairo Buitrago Ciro. 2019. Machine Translation and Global Research. Bingley: Em-erald Publishing. Castilho, Sheila. 
2016. “Acceptability of Machine Trans-lated Enterprise Content.” Ph.D. Thesis, Dublin City University. Castilho, Sheila, and Sharon O’Brien. 2018. “Acceptabil-ity of Machine-Translated Content: A Multi-Language Evaluation by Translators and End-Users.” Linguistica Antverpiensia, New Series – Themes in Translation Studies 16. doi: 10.52034/lanstts.v16i0.430. Doherty, Stephen, and Sharon O’Brien. 2012. “A User-Based Usability Assessment of Raw Machine Translated Technical Instructions.” 10th Confer-ence of the Association for Machine Translation in the Americas, San Diego, California, USA. Doherty, Stephen, and Sharon O’Brien. 2014. “Assessing the Usability of Raw Machine Translated Out-put.” International Journal of Human-Computer Interaction 30(1):40–51. Dorst, Aletta G., Susana Valdez, and Heather Bouman. 2022. “Machine Translation in the Multilingual Classroom.” Translation and Translanguaging in Multilingual Contexts 8(1):49–66. Fiedler, Sabine, and Agnes Wohlfarth. 2018. “Language Choices and Practices of Migrants in Germany.” Language Problems and Language Planning 42(3):267–87. Finch, Janet. 1987. “The Vignette Technique in Survey Research.” Sociology 21(1):105–14. García Izquierdo, Isabel. 2016. “At the Cognitive and Situational Interface.” Translation Spaces 5(1):20–37. García-Izquierdo, Isabel, and Ana Muñoz-Miquel. 2015. “Los Folletos de Información Oncológica En Contextos Hospitalarios.” Panacea 16(42):225–31. Gaspari, Federico. 2004. “Online MT Services and Real Users’ Needs.” Pp. 74–85 in Machine Transla-tion, edited by R. E. Frederking and K. B. Tay-lor. Berlin: Springer. Guerberof, Ana, Joss Moorkens, and Sharon O’Brien. 2019. “What Is the Impact of Raw MT on Japa-nese Users of Word: Preliminary Results of a Usability Study Using Eye-Tracking.” P. 11 in Proceedings of XVII Machine Translation Sum-mit. Dublin: European Association for Machine Translation (EAMT). Guerberof-Arenas, Ana, Joss Moorkens, and Sharon O’Brien. 2021. “The Impact of Translation Mo-dality on User Experience: An Eye-Tracking Study of the Microsoft Word User Interface.” Machine Translation. doi: 10.1007/s10590-021-09267-z. Jiménez-Crespo, Miguel A. 2017. “Combining Corpus and Experimental Studies: Insights into the Re-ception of Translated Medical Texts.” JoSTrans 28:2–22. Khoong, Elaine C., and Jorge A. Rodriguez. 2022. “A Research Agenda for Using Machine Translation in Clinical Medicine.” Journal of General Inter-nal Medicine 37(5):1275–77. Krings, Hans P. 2005. “Wege Ins Labyrinth – Fragestel-lungen Und Methoden Der Übersetzungsprozess-forschung Im Überblick.” Meta 50(2):342–58. Lee, Sangmin-Michelle. 2020. “The Impact of Using Machine Translation on EFL Students’ Writing.” Computer Assisted Language Learning 33(3):157–75. Liebling, Daniel J., Michal Lahav, Abigail Evans, Aaron Donsbach, Jess Holbrook, Boris Smus, and Lindsey Boran. 2020. “Unmet Needs and Oppor-tunities for Mobile Translation AI.” Pp. 1–13 in Proceedings of the 2020 CHI Conference. NY: Association for Computing Machinery. Loock, Rudy, Sophie Léchauguette, and Benjamin Holt. 2022. “The Use of Online Translators by Stu-dents Not Enrolled in a Professional Translation Program: Beyond Copying and Pasting for a Pro-fessional Use.” Pp. 23–29 in Proceedings of the 23rd Annual Conference of the European Asso-ciation for Machine Translation. Ghent, Bel-gium: European Association for Machine Trans-lation. Mehandru, Nikita, Samantha Robertson, and Niloufar Salehi. 2022. “Reliable and Safe Use of Machine Translation in Medical Settings.” Pp. 
2016–25 in 2022 ACM Conference on Fairness, Accounta-bility, and Transparency. Seoul Republic of Ko-rea: ACM. Mellinger, Christopher D., and Brian James Baer. 2021. “Research Ethics in Translation and Interpreting Studies.” Pp. 365–80 in Routledge Handbook of Translation and Ethics, edited by K. Koskinen and Nike K. Pokorn. New York and London: Routledge. Montalt, Vicent. 2012. “Medical Translation.” Pp. 3649–53 in The Encyclopedia of Applied Linguistics, edited by C. A. Chapelle. Oxford: Blackwell. Nurminen, Mary. 2021. “Investigating the Influence of Context in the Use and Reception of Raw Ma-chine Translation.” Tampere University. Nurminen, Mary, and Niko Papula. 2018. “Gist MT Us-ers: A Snapshot of the Use and Users of One Online MT Tool.” Proceedings of the 21st An-nual Conference of the European Association for Machine Translation, 28-30 May 2018, Univer-sitat d’Alacant, Spain. 199–208. Pokorn, Nike K., and Jaka Čibej. 2018. “‘It’s so Vital to Learn Slovene.’” LPLP 42(3):288–307. Pym, Anthony, Nune Ayvazyan, and Jonathan Prioleau. 2022. “Should Raw Machine Translation Be \n Used for Public-Health Information?” Just. Journal of Language Rights & Minorities 1(1–2):71–99. Royston, Geoff, Neil Pakenham-Walsh, and Chris Ziel-inski. 2020. “Universal Access to Essential Health Information.” BMJ Global Health 5(5):e002475. Saldaña, Johnny. 2016. The Coding Manual for Qualita-tive Researchers. London: SAGE. Stewart, Osamuyimen, David Lubensky, Scott Macdon-ald, and Julie Marcotte. 2010. “Using Machine Translation for the Localization of Electronic Support Content.” Torres-Hostench, Olga. 2022. “Europe, Multilingualism and Machine Translation.” Pp. 1–21 in Machine translation for everyone, edited by D. Kenny. Berlin: Language Science Press. Turner, Anne M., Kristin N. Dew, Loma Desai, Nathalie Martin, and Katrin Kirchhoff. 2015. “Machine Translation of Public Health Materials From English to Chinese.” JMIR Public Health Sur-veill 1(2). United Nations. 2020. Policy Guidelines for Inclusive Sustainable Development Goals - Good Health and Well-Being. OHCHR. Vieira, Lucas Nunes, Minako O’Hagan, and Carol O’Sullivan. 2021. “Understanding the Societal Impacts of Machine Translation.” Information, Communication & Society 24(11):1515–32. Vieira, Lucas Nunes, Carol O’Sullivan, Xiaochun Zhang, and Minako O’Hagan. 2022a. “Machine Transla-tion in Society: Insights from UK Users.” Lang Resources & Evaluation. Vieira, Lucas Nunes, Carol O’Sullivan, Xiaochun Zhang, and Minako O’Hagan. 2022b. “Privacy and Eve-ryday Users of Machine Translation.” Transla-tion Spaces. doi: https://doi.org/10.1075/ts.22012.nun. WHO/UNICEF. 2018. A Vision for Primary Care in the 21st Century. Geneva: World Health Organiza-tion and United Nations Children’s Fund.", "main_paper_content": null }
{ "decision": "Unknown", "reviews": [] }
0
0
[]
[]
{ "id": "wDsm-fD_GWJ", "year": null, "venue": "EAMT 2022", "pdf_link": "https://aclanthology.org/2022.eamt-1.68.pdf", "forum_link": "https://openreview.net/forum?id=wDsm-fD_GWJ", "arxiv_id": null, "doi": null }
{ "title": "CREAMT: Creativity and narrative engagement of literary texts translated by translators and NMT", "authors": [ "Ana Guerberof Arenas", "Antonio Toral" ], "abstract": null, "keywords": [], "raw_extracted_content": "CREAMT: Creativity and narrative engagement of literary texts \ntranslated by translators and NMT \nAna Guerberof Arenas \nUniversity of Surrey/ \nUniversity of Groningen \na.guerberof.arenas @rug.nl Antonio Toral \nUniversity of Groningen \[email protected] \nAbstract \nWe present here the EU -funded project \nCREAMT that seeks to understand what is \nmeant by creativity in different translation \nmodalities, e.g. machine translation, post -\nediting or professional translation. Focus-\ning on the textual elements that determine \ncreativity in translated literary texts and \nthe reader experience, CREAMT uses a \nnovel, interdisciplinary approach to assess \nhow effective machine translation is in lit-\nerary translation considering creativity in \ntranslation and the ultimate user: the \nreader . \n1 Introduction \nResearch has shed some light on the usability of \nmachine translation (MT) in literary texts (Toral, \nWieling, and Way 2018), showing that MT might \nhelp li terary translators when it comes to \nproductivity. At the same time, translators’ \nperception is that the “more creative” the literary \ntext, the less useful MT is (Moorkens et al. 2018). \nBut can we quantify the creativity in texts \ntranslated by humans as opp osed to those \nproduced with the aid of machines? And, since \none of the aims of the translation of a literary text \nis to preserve the reading experience of the \noriginal, what is the reader’s experience when \nfaced with machine -translated texts? Do users \nexpo sed to different translation modalities have \ndifferent reading experiences? \nTo provide answers to these questions, the \nCREAMT is articulated in two main axes with a \ntwo-year duration . The first axis proposes to \n \n© 2022 The authors. This article is licensed under a Crea-\ntive Commons 3.0 licence, no derivative works, attribution, \nCCBY -ND. identify creative shifts (see section 2.2) while the \nsecond axis seeks to identify reader’s narrative \nengagement and gather data on enjoyment and \ntranslation reception. \n2 First axis \nWe translated two stories: Murder in the Mall by \nSherwin B. Nuland (1995) was translated into \nCatalan for a pilot proj ect and 2BR02B by Kurt \nVonnegut (1999) was translated into Catalan and \nDutch for the main experiment. \n2.1 Translation Process \nThe conditions human translation (HT) and post -\nediting (PE) were processed by two professional \nliterary translators. To reduce the ef fect of the \ntranslator, each professional translated and post -\nedited 50% of each modality . \nThe MT condition was based on the output of \nstate-of-the-art literary -adapted neural MT \nsystems based on the transformer architecture \n(Vaswani et al. 2017) trained t o translate from \nEnglish to Catalan (Toral, Oliver, and Ribas -\nBellestín 2020) and to Dutch (Toral, van \nCranenburgh, and Nutters 2021). The training \ndata did not contain the text used for the \nexperiment nor any by these authors . \n2.2 Creativity \nThe source text ( ST) was first annotated for units \nof creative potential (e.g. metaphors, wordplay \nand puns , comparisons). A team of five \nprofessional reviewers annotated the target texts \n(TT) as either reproduction, omission, or creative \nshift (Bayer -Hohenwarter 2011). 
The creative \nshifts could be 1) modification (i.e. ST is modified \nfor the target culture), 2) concretisation (i.e. ST is \nreplaced by a more concrete example in the TT) \nand 3) abstraction (i.e. ST examples are replaced \nby generic ones in the TT). The texts were also \nchecked for acceptability (number and type of \nerrors) with the Multidimensional Quality Metrics \n(MQM) .2 The number of creative shifts minus the \nerror points divided by the number of ST words \nresulted in a creativity sco re. \n3 Second Axis \nAn on -line questionnaire consisting of three parts \nwas distributed to 88 Catalan participants in the \npilot and 223 Catalan and Dutch participants in \nthe main project using an on-line survey software . \n3.1 Demographics and Reading Patterns \nThis section cover s questions that serve to analyze \nvariables affecting narrative engagement (e.g. \n“What genre do you usually read?”) . \n3.2 Narrative Engagement \nAfter reading the text (the translation modality \nwas assigned randomly), the participants answer \nten four -option questions we created to assess \ncomprehensibility . Afterwards, they filled in a 12 -\nitem narrative engagement questionnaire \n(Busselle and Bilandzic 2009), e.g. “At points, I \nhad a hard time making sense of what was going \non in the story”, “Wh ile reading, I found myself \nthinking about other things” or “I felt sorry for \nsome of the characters in the story”). \n3.3 Readers’ Reception Questionnaire \nParticipants responded to questions designed to \naddress understanding of the text (e.g. “How easy \nwas the text to understand?”), enjoyment (e.g. \n“How did you enjoy the text?”), translation \nassessment (e.g. “How would you like to read a \ntext by the same author and translator?”). \n4 Outcomes \nA pilot was run in Catalan in 2020. The results \nshowed that HT presented a higher creativity \nscore if compared to PE and MT. HT also ranked \nhigher in narrative engagement, and translation \nreception, while PE ranked marginally higher in \nenjoyment. (Guerberof -Arenas and Toral 2020). \nThe main experiment for Dutch and Catalan \nconfi rmed these results for Axis 1: HT has the \nhighest creativity score, followed by PE, and \nlastly, MT, in both languages. Post-editing MT \noutput constrains the creativity of translators, \n \n2 https://www.taus.net/qt21 -project resulting in a poorer translation often not fit for \npublication accordin g to experts. (Guerberof \nArenas and Toral 2022). Axis 2 was finished in \nMarch 2022 and it is under evaluation. \n5 Acknowledgements \nThis project has received funding from the \nEuropean Union’s Horizon 2020 research and \ninnovation programme under the Marie \nSkłod owska -Curie grant agreement No. 890697. \nReference s \nBayer -Hohenwarter, Gerrit. 2011. Creative Shifts as a \nMeans of Measuring and Promoting Translational \nCreativity. Meta 56 (3): 663 –692. \nBusselle, Rick, and Helena Bilandzic. 2009. Measuring \nNarrative Engage ment. Media Psychology 12 (4): \n321–347. \nGuerberof Arenas, Ana, and Antonio Toral. 2022. Cre-\nativity in Translation: Machine Translation as a \nConstraint for Literary Texts. Translation Spaces . \nAvailable https://doi.org/10.1075/ts.21025.gue \nGuerberof -Arenas, Ana, and Antonio Toral. 2020. The \nImpact of Post -Editing and Machine Translation on \nCreativity and Reading Experience. Translation \nSpaces 9 (2): 255 –282. \nMoorkens, Joss, Antonio Toral, Sheila Castilho, and \nAndy Way. 2018. 
Translators’ Perceptions of Liter-\nary Post -Editing Using Statistical and Neural Ma-\nchine Translation. Translation Spaces 7 (2): 240 –\n262. \nNuland, Sherwin B. 1995. Muder in the Mall. In How \nWe Die: Reflections of Life’s Final Chapter , New \nEdition. 1st edition. Vintage , New York, USA . \nToral, Antonio, Andreas van Cranenburgh, and Tia \nNutters. 2021. Literary -Adapted Machine Transla-\ntion in a Well -Resourced Language Pair. Book of \nabstracts 7th Confe rence of The International Asso-\nciation for Translation and Inter -Cultural Studies \n(IATIS). Barcelona : 257. \nToral, Antonio, Antoni Oliver, and Pau Ribas -\nBellestín. 2020. Machine Translation of Novels in \nthe Age of Transformer. Maschinelle Übersetzung \nfür Üb ersetzungsprofis , edited by Jörg Porsiel. BDÜ \nFachverlag , Berlin, Germany : 276–96. \nToral, Antonio, Martijn Wieling, and Andy Way. 2018. \nPost-Editing Effort of a Novel with Statistical and \nNeural Machine Translation. Frontiers in Digital \nHumanities 5: 1–11. \nVaswani, Ashish, Noam Shazeer, Niki Parmar, Jakob \nUszkoreit, Llion Jones, Aidan N. Gomez, Lukasz \nKaiser, and Illia Polosukhin. 2017. Attention Is All \nYou Need. ArXiv:1706.03762 [Cs], December. \nhttp://arxiv.org/abs/1706.03762. \nVonnegut, Kurt. 1999. 2BR02B in Bagombo Snuff Box . \nG. P. Putnam’s Sons , New York, USA.", "main_paper_content": null }
{ "decision": "Unknown", "reviews": [] }
0
0
[]
[]
{ "id": "bx9_ritH96j", "year": null, "venue": "EAMT 2023", "pdf_link": "https://aclanthology.org/2023.eamt-1.34.pdf", "forum_link": "https://openreview.net/forum?id=bx9_ritH96j", "arxiv_id": null, "doi": null }
{ "title": "Coming to Terms with Glossary Enforcement: A Study of Three Approaches to Enforcing Terminology in NMT", "authors": [ "Fred Bane", "Anna Zaretskaya", "Tània Blanch Miró", "Celia Soler Uguet", "João Torres" ], "abstract": null, "keywords": [], "raw_extracted_content": "Coming to Terms with Glossary Enforcement: A Study of Three\nApproaches to Enforcing Terminology in NMT\nFred Bane Anna Zaretskaya Tània Blanch Miró Celia Soler Uguet João Torres\nTransPerfect\n{fbane,azaretskaya,tblanch,csuguet,joao.torres}@translations.com\nAbstract\nEnforcing terminology constraints is less\nstraight-forward in neural machine trans-\nlation (NMT) than statistical machine\ntranslation. Current methods, such as\nalignment-based insertion or the use of\nfactors or special tokens, each have their\nstrengths and drawbacks. We describe the\ncurrentstateofresearchonterminologyen-\nforcementintransformer-basedNMTmod-\nels,andpresenttheresultsofourinvestiga-\ntionintotheperformanceofthreedifferent\napproaches. In addition to reference based\nquality metrics, we also evaluate the lin-\nguistic quality of the translations thus pro-\nduced. Ourresultsshowthateachapproach\nis effective, though a negative impact on\ntranslation fluency remains evident.\n1 Introduction\nEnsuring translations use the preferred term can\nbe business-critical for commercial translation\nproviders. While there are existing methods to\nensure the correct translation of specified terms,\nthe impact of these methods on translation qual-\nity merits closer inspection. Typically, they have\nbeenevaluatedintermsofgeneraltranslationmet-\nrics such as BLEU, in addition to the accuracy of\nthe terminology translation. However, there is a\ndearth of more detailed linguistic analysis of the\nperformance of different techniques; for example,\nhowoftendothetermsagreemorphologicallywith\nthe rest of the sentence? What are the potential is-\nsues when unruly, real-world, client glossaries are\n©2023Theauthors. ThisarticleislicensedunderaCreative\nCommons 4.0 licence, no derivative works, attribution, CC-\nBY-ND.applied to models trained in more controlled lab-\noratory conditions, and what steps can be taken to\nmitigate these issues?\nIn the present work we implement three ap-\nproaches to glossary/terminology enforcement in\ntwolanguagepairs(English-RussianandJapanese-\nEnglish) and compare their performance on the\nterminology enforcement task. In particular, we\ninvestigate two methods based on interventions in\nthe training data and one post-processing method\nwhich uses the model’s attention mechanism to\nidentify the tokens representing the translation of\nthe input term in the output and replaces these to-\nkens with the translation from the glossary. In ad-\nditiontoautomatedevaluation(chrF,COMET,and\naccuracy), we also engaged professional linguists\nto design a test set of edge cases from their partic-\nular language pairs, and evaluate the performance\nof each approach using this bespoke test set.\nThe ultimate objective of this research is to in-\nform the implementation of a glossary feature for\nuse by machine translation project managers and\nendusers,andthuswemustanticipatethatthefea-\nture will be applied in a multitude of unexpected\nways. For a guide to what our feature may be\nsubjected to, we turned to a database of historical\nglossary enforcement requests kept by our com-\npany. These requests were created by a mixture of\nlinguists, clients, and project managers in transla-\ntion projects. 
The contents of these glossaries are very noisy and diverse, including nouns, adjectives, verbs, prepositions, numbers, and acronyms, and ranging in length from single characters to entire sentences. This resource served both as the source material to annotate our training data for the methods using data intervention, and as the inspiration for our test cases.
In addition to the practical motivation of our research, we hope to provide the MT community with an insight on the linguistic effects that each of these methods has on the translation output. Below we share our methodology and the results of our experiments.
2 Related Work
The first approaches to introducing terminology enforcement in NMT were quite limited in terms of handling languages with inflections. For example, in one approach, a special placeholder token was used to mask the term in the source sentence, and then replaced with the correct term after the translation (Crego et al., 2016). In the more sophisticated alignment method, one of the attention heads of the transformer is trained with statistical word alignments, and the output of this attention head at translation time is used to identify the tokens in the translation that correspond to the source term and replace them with the translation from the glossary. While this method provides an improvement, it still poses a problem for languages with inflections, since the target term is inserted in its glossary form, and dependencies may be produced in the wrong form.
In the constrained decoding method, the NMT decoder is guided to produce translation candidates that include the specified translation of a given source term that is present in the input sentence (Chatterjee et al., 2017; Hasler et al., 2018; Hokamp and Liu, 2017). This method, while certainly producing more fluent translations, adds a significant computational overhead (Post and Vilar, 2018). Since our applications of MT include several time-sensitive use cases, such as chat and instant website translation, we did not consider the constrained decoding method for our experiments.
Later, Dinu et al. (2019) proposed a method where the intervention is made in the training data: they insert the target term directly in the source sentence and use factors to signal which tokens are actual source text and which are target translations. Factor embeddings are concatenated to the token embeddings and the two are learned in parallel. Through training, the model learns to essentially copy the input tokens marked as translations. More information on the practical implications of implementing this approach in a real-life production setting can be found in Exel et al. (2020), and Bergmanis and Pinnis (2021) address the application of this method to morphologically rich languages.
Ailem et al. (2021) propose another approach to manipulating the training data: instead of using source factors, they use special tokens to mark the source and target terms inserted in the source sentence. In addition, the authors apply token masking, which helps the model generalize better on unseen terms, and adapt the weighted cross-entropy loss to bias the model towards generating constraint tokens, resulting in improved translation quality and correctly generated constraint terms. This approach also accounts for different morphological variations of terms both on the source and on the target side by applying string-based approximate matching.
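To make the two data-intervention formats more tangible, here is a rough sketch of what a single annotated training example might look like. The special-token strings are inferred from the appendix examples in this paper and the factor numbering is an assumption for illustration; neither is guaranteed to match the exact schemes of Dinu et al. (2019) or Ailem et al. (2021).

```python
import re

def annotate_special_tokens(src: str, src_term: str, tgt_term: str) -> str:
    """Ailem-et-al.-style source annotation: wrap the matched source term and
    its desired translation in special tokens (the token inventory is an
    assumption inferred from the appendix examples)."""
    pattern = re.compile(rf"\b{re.escape(src_term)}\b")
    return pattern.sub(lambda m: f"<S> {m.group(0)} <C> {tgt_term} </C>", src, count=1)

def annotate_factors(src: str, src_term: str, tgt_term: str):
    """Dinu-et-al.-style annotation: inline the translation after the source
    term and emit a parallel factor stream (0 = source token, 1 = source term,
    2 = inline target translation); the numbering is an assumption."""
    tokens, factors = [], []
    for tok in src.split():
        is_term = tok.strip(".,;:!?") == src_term
        tokens.append(tok)
        factors.append(1 if is_term else 0)
        if is_term:
            for t in tgt_term.split():
                tokens.append(t)
                factors.append(2)
    return " ".join(tokens), " ".join(str(f) for f in factors)

src = "One subject experienced an SAE during study treatment."
print(annotate_special_tokens(src, "subject", "пациент"))
print(annotate_factors(src, "subject", "пациент"))
```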
Until recently, most works only evaluated their results in terms of BLEU scores and the accuracy of the terminology enforcement. However, they did not provide any insight into how well the term fits in the sentence, whether the surrounding translations are correct, etc. For this reason, Alam et al. (2021a) proposed new metrics that can reflect the correctness of terminology. In particular, they suggest looking at the tokens surrounding the term and comparing them to the reference translation (Window Overlap) and computing terminology-focused TER (Snover et al., 2006). These metrics are designed to complement the exact-match accuracy and the holistic MT quality metrics and were subsequently used in the first shared task dedicated to terminology in NMT (Alam et al., 2021b).
Since the experiments described above demonstrate that terminology constraints can be successfully applied in NMT without a significant overall performance loss or computational overhead, we choose the two methods most suitable for our production settings, as well as a baseline method (replacing target tokens with the correct term translation based on the word alignments), to analyse each method's advantages. Our goal is not only to measure terminology accuracy and overall model performance, but also to gain insight into how naturally the terms are incorporated into the target sentence.
3 Materials and Methods
We implemented three approaches to glossary enforcement: alignment-based replacement, annotation with special tokens as per Ailem et al. (2021), and factorization as per Dinu et al. (2019). As a control, we also obtain translations from a model trained with the same data without any terminology intervention.
3.1 Glossaries
Both the annotation and factors methods rely on a glossary to prepare the training data. Glossaries can be compiled in multiple ways, such as using existing bilingual dictionaries or learning dictionaries in an unsupervised manner. We chose to use data from historical translation projects as our glossaries, assuming that these may be the best approximation of the distribution of inputs our glossary feature will see in production.
As these data were extremely noisy, some filtering was required. We filtered out terms containing no alphabetic, hiragana, katakana, or kanji characters, pairs with very unusual length ratios for the language pair (many terms contained lists of possible translations in the target field), pairs containing more than five whitespace-separated tokens, etc. For English-Russian, our database contained around 223k unique terminology pairs, of which 78k were retained after heuristic filtering. For Japanese-English, the database contained approximately 240k unique pairs, of which 156k were retained after filtering. Many of these retained pairs were near duplicates, such as varying US/UK dialectal forms, pairs differing only in capitalization, or terms in their singular and plural forms. Of these terms, some 24k term pairs were actually found in the English-Russian training data, and 64k were found in the Japanese-English training data. We defer to later work a more in-depth investigation of the effects of different glossaries on model capabilities.
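A minimal sketch of the kind of heuristic term-pair filtering described above; the character classes and thresholds are illustrative assumptions rather than the exact values used in these experiments.

```python
import re

# Accept Latin or Cyrillic letters plus hiragana, katakana and kanji.
HAS_LETTER = re.compile(r"[A-Za-zА-яЁё\u3040-\u30ff\u4e00-\u9fff]")

def keep_term_pair(src: str, tgt: str, max_tokens: int = 5, max_ratio: float = 4.0) -> bool:
    """True if a glossary pair survives the heuristics sketched above: it must
    contain letters, stay within five whitespace-separated tokens, and avoid an
    extreme source/target length ratio (the ratio threshold is an assumption)."""
    if not (HAS_LETTER.search(src) and HAS_LETTER.search(tgt)):
        return False
    if len(src.split()) > max_tokens or len(tgt.split()) > max_tokens:
        return False
    longer, shorter = max(len(src), len(tgt)), max(1, min(len(src), len(tgt)))
    return longer / shorter <= max_ratio

pairs = [("run", "бегать"), ("§§§", "###"), ("turn off the main power supply now", "выключить")]
print([p for p in pairs if keep_term_pair(*p)])  # [('run', 'бегать')]
```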
3.2 Data Resources
The training data were composed of data from CC Matrix (Schwenk et al., 2019) and internal data resources, containing approximately 122 million sentence pairs for the English-Russian direction and 56 million for the Japanese-English direction. The data were filtered with hand-crafted heuristics (for example very long or very short inputs, sentence pairs with unusual length ratios, sentence pairs with excessive punctuation or no detectable linguistic content, etc.) and cross-entropy scores from an NMT model. For the annotation and factors methods, sentences from these corpora containing source and target glossary pairs included in our glossaries were identified and prepared as required for these techniques. The original versions of these sentences were retained in the corpora, to ensure that the model would still learn to translate these terms in the absence of guidance at inference time, and the modified versions were appended. Thus, the corpora increased in size by approximately 10 million and 7.7 million sentence pairs, respectively.
We elected to perform such modification only where the source and target terms appeared in exactly the same form as in the glossary, surrounded by word boundaries on either side for the English and Russian corpora (as Japanese does not separate words with white space, this constraint was not applicable for this language). Though lemmatization has been productively used to match other word forms not in the glossary – which appears to increase the ability of the model to adapt the term appropriately to the translation (Bergmanis and Pinnis, 2021) – we chose to use only exact matches for our benchmarking experiment to maximize the clarity of the training signal.
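The augmentation pass just described might look roughly like the sketch below; it is a simplified illustration under the exact-match assumption, with annotate standing in for whichever of the two formats (special tokens or factors) is being produced.

```python
import re

def find_glossary_hit(src: str, tgt: str, glossary: dict):
    """Return the first (src_term, tgt_term) pair that occurs, bounded by word
    boundaries, in both halves of a sentence pair (the exact-match setting used
    here for English and Russian; Japanese would need a plain substring test)."""
    for src_term, tgt_term in glossary.items():
        if re.search(rf"\b{re.escape(src_term)}\b", src) and \
           re.search(rf"\b{re.escape(tgt_term)}\b", tgt):
            return src_term, tgt_term
    return None

def augment_corpus(sentence_pairs, glossary, annotate):
    """Keep every original pair and append an annotated copy whenever a glossary
    pair is found, mirroring the augmentation described above. `annotate` is a
    placeholder for the chosen formatting function (special tokens or factors)."""
    out = []
    for src, tgt in sentence_pairs:
        out.append((src, tgt))
        hit = find_glossary_hit(src, tgt, glossary)
        if hit:
            out.append((annotate(src, *hit), tgt))
    return out
```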
3.3 Training
Aside from the settings required for each approach, all models used identical standard transformer (base) configurations (Vaswani et al., 2017). We allowed models to train for 50 epochs or until perplexity failed to improve for ten consecutive validation checkpoints. Models were trained using the Marian framework (Junczys-Dowmunt et al., 2018) on eight Quadro RTX 6000 GPUs. Each model was trained twice and the best performing model was used for the experiment.
4 Evaluation
Human and automated evaluation methods were used to judge the performance of each approach. For the human evaluation, we worked with linguists to design test sets covering different morphological forms and specific edge cases identified for their languages. The morphological forms covered included adjectives, verbs, simple nouns in nominative, plural, and genitive forms, phrasal nouns and verbs, and entire clauses. For example, the ENRU test set contained, among regular nouns and noun phrases, terms like men's, go back, turned off. These terms are usually not recommended to be applied in the MT context, but they are often found in client glossaries, so we wanted to understand the behavior of different terminology enforcement methods in these scenarios. Among the edge cases tested were the Japanese elision of the subject and other cases where grammatical differences between the languages create ambiguity. In total, there were 27 terms in the ENRU test set and 26 terms in the JAEN test set. Once we had the test sets created, we requested native linguists in the target language to provide two different translations for each selected term. Then, we found sentences that contained the source terms among our internal datasets or asked the linguists to artificially create them. These sentences were used for the human evaluation.
During the human evaluation stage, evaluators were presented with translations of these sentences from the four different systems: the control system with no glossary enforcement, the system trained with the annotation approach, the system trained with the factors approach, and the system where the target term is inserted based on the alignments. For each source sentence, we first enforced the first translation of the term and then the second one.
The linguists were asked the following questions about each of the translations: (a) Is the term present in the translation? (b) Is the term in the correct grammatical form? (c) Are the grammatical dependencies on the term in the correct form? (d) Does the term assume a non-existent form? (e) Are there any duplicated words? (f) Rate the overall accuracy of the translation from 1 to 10. (g) Rate the overall fluency of the translation from 1 to 10. As the size of these bespoke test sets was necessarily quite small, the statistical significance of the results was not calculated and only the raw results are presented.
For the automated evaluation, we used publicly available corpora for comparability. For the English-Russian language pair, data from the WMT shared task on terminology enforcement were used. Due to the lack of a public corpus designed for terminology enforcement in the Japanese-English language pair, the Bilingual Corpus of Wikipedia's Kyoto Articles¹ and its accompanying lexicon were adapted. We selected terms without non-letter characters that were identified as organizations, proper names, or works of art using Spacy's NER function. Finally, we filtered both corpora to remove any sentences that did not contain terms to be enforced. For terms with multiple glossary translations, the form used in the reference translation was enforced.
Translations were scored with COMET and chrF, and the number of exact and fuzzy matches was counted. Exact match was defined as a 100% sub-string match with word boundaries on either side, and a fuzzy match was defined as at least an 80% sub-string match. The threshold for statistical significance was established as p<0.01.
¹ https://github.com/venali/BilingualCorpus
5 Results
5.1 Human Evaluation
The results of the human evaluation for each language pair are shown in Tables 1 and 2. We provide counts of each of the parameters we evaluated for each of the term translations (Term 1 and Term 2). The only exception is the No glossary approach, where we did not explicitly provide any instructions to the MT engine, so we provide cumulative numbers. We find it useful, however, to show which of the two term translations was preferred by the engine.
Overall, the alignment method had the best performance when it comes to including the term in the translation, which is expected by design. In the English-to-Russian language pair, this method also predictably was the worst when it comes to the morphological agreement (of the term itself and of the surrounding words).
However, this was not\nthe case for Japanese into English, where all the\nmethods performed similarly well in this aspect.\nThis suggests that this limitation of the alignment\nmethod may be more evident in morphologically\nrich target languages.\nWhentheglossarytermwasacorrecttranslation\nbutnotintheappropriateformforthesentence,the\nannotation andfactorsmodels sometimes modi-\nfied the term into the appropriate form (examples\nof this are provided in Table 3 below and Table 7\nin Appendix A), and sometimes modified the sen-\ntence structure in order to use the glossary form\nof the term in an appropriate way. In these cases,\nthefactorsapproachwasmostlikelytomodifythe\nterm to an appropriate form, but the translations\nwithoutglossaryenforcementwerejudgedtobeof\nthebestquality. The alignment methodmaintained\nthetermexactlyinitsglossaryformandoftenpro-\nducedungrammaticalsentencesinresponsetosuch\ninputs. Analysis of the evaluation results grouped\nby part of speech showed no clear pattern. Thus,\nweseenoindicationthatanypartofspeechismore\ndifficultthananyother,northatanyapproachmore\norlesscapableofapplyingtheglossaryconstraints\ndepending on their part of speech.\nOther limitations of the alignment method were\nmuch more common in the Japanese-English lan-\nguage pair. Namely, we observed a higher number\nNo glossary Annotation Factors Alignment\nTerm 1 Term 2 Term 1 Term 2 Term 1 Term 2 Term 1 Term 2\nTerm is present 14 (+1) 3 (+1) 23 20 2313 (+2) 24 23\nCorrect form 19 19 15 17 12 10 11\nCorrect dependencies 19 23 19 21 15 18 12\nNon-existent form 1 1 0 0 0 0 0\nDuplicated words 0 0 0 0 0 2 2\nAverage accuracy 9.4 8.9 8.3 8.9 8.4 8.8 8\nAverage fluency 9.6 8.9 8.8 8.9 8.5 8.1 7.5\nTable 1: English-Russian human evaluation results. When the term is present only partially (i.e. the term consists of multiple\ntokens and only one of them is present), its count is indicated in parentheses. The highest scores are marked in bold and are\nconsidered separately for terms 1 and 2. The total number of source sentences was 27.\nNo glossary Annotation Factors Alignment\nTerm 1 Term 2 Term 1 Term 2 Term 1 Term 2 Term 1 Term 2\nTerm is present 9 (+4) 3 (+1) 20 (+4) 20 (+4) 16 (+7) 16 (+6) 24 (+2) 22 (+3)\nCorrect form 17 24 22 21 21 23 23\nCorrect dependencies 17 24 22 21 21 23 23\nNon-existent form 1 0 1 2 3 3 2\nDuplicated words 0 0 0 0 0 1 1\nAverage accuracy 6.9 7.1 8.8 7.6 7.6 6.8 6.9\nAverage fluency 8.6 8.6 8.4 9.1 98.1 8.1\nTable 2: Japanese-Englishhumanevaluationresults. Whenthetermispresentonlypartially(i.e. thetermconsistsofmultiple\ntokens and only one of them is present), it is shown in parentheses. The highest scores are marked in bold and are considered\nseparately for terms 1 and 2. The total number of source sentences was 26.\nof non-existent grammatical form and duplicated\nwords. The latter is typically due to the failure\nof the alignment mechanism in cases when a term\ncorresponds to multiple target words, which may\nnot be contiguous.\nWhen it comes to the general translation qual-\nity,intheEnglish-Russianlanguagepairthemodel\nwith no glossary enforcement achieved the best\nscores, even though its translation did not neces-\nsarily contain the required terms. Out of the three\nterminologyenforcementmethods, annotation and\nfactorsmethods were the best with the annotation\nmethod slightly outperforming in fluency. 
The\nJapanese-English language pair paints a slightly\ndifferent picture, with the annotation andfactors\nmodels sharing the first positions in accuracy and\nfluency.\nThe results show significantly more partial\nmatches in the Japanese-English language pair.\nMany of these correspond to terms that were verb\nphraseswhereapronounintheglossarytranslation\nwas replaced by the subject of the sentence in the\nMT output (see examples in Table 6 in Appendix\nA).Overall, based on the results of the human eval-\nuation for English-Russian, it seems like the most\noptimal terminology approach is the annotation\none. It has relatively good term accuracy as well\nas the general translation quality, and is the best in\nmaintaining morphological agreement within the\nsentence. In the Japanese-English direction, mor-\nphological agreement plays a less significant role,\nso these results are more even across the different\napproaches. The alignment methodhasthehighest\nterm accuracy, but at the same time is more prone\nto producing errors such as duplicated words and\nnon-existent forms. The factorsmethod has the\nhighest position in the overall translation quality\nbut underperforms in terminology accuracy. The\nannotation methodshowsthemostbalancedscores\noverall.\n5.2 Automated Evaluation\nThe results of the automated evaluation, shown in\nTable 4 below, are similar to the results of the\nhuman evaluation. The factorsmethod obtained\nthe best COMET and chrF scores in the Japenese-\nEnglishdirection,whileintheEnglish-Russiandi-\nSource I’m going for a run. I see him run. Run!!!!!\nNo glossary Я собираюсь а пробежку. Я вижу, как он бежит. Бегите!!!!!\nAnnotation Я собираюсь бегать. Я вижу, как он бегает. Выполнить бегать!!!!!\nFactors Я иду на бегать. Я вижу, как он бегает. Бегать!!!\nAlignment Я еду на бегать. Я вижу, как он бегать. бегать!!!!\nTable 3: Translations when the glossary form is a correct translation but not in the appropriate morphological form for the\nsentence. In this case, our glossary pair was ’run’: ’ бегать’.\nrection the annotation model showed the best per-\nformance. The alignment method achieved com-\npetitive results in all categories, and was clearly\nthemostconsistentinitsadherencetotheimposed\nglossary constraints. The performance of all mod-\nels was quite poor on the Japanese-English auto-\nmatedtestdata,wespeculatethis isduetothesig-\nnificant domain gap between the training and test\ndata. TheEnglish-Russianautomatedtestdatawas\nCOVID-related, and thus more in-domain, which\nwe believe explains the superior performance in\nthis language pair.\n6 Discussion\nOurresultsshowthateachmethodofenforcingter-\nminology tested, which we have referred to in this\npaper asalignment ,annotation , andfactors, is ef-\nfectiveinpromotingtheuseoftherequestedtrans-\nlation. In both languages the approaches outper-\nformedthebaselineinthisregard. Theapproaches\ndid well in a wide variety of test cases, even test\ncases that may strain credulity. The benefit of giv-\ning this sort of guidance to the model seems to be\nmore significant for input content that is out-of-\ndomain for the training data, but this improvement\ninterminologyusedoeslittletomitigatethequality\ndrop observed in such translation scenarios. 
Model             chrF   COMET   Exact match %   Fuzzy match %
JAEN No glossary  33.2   -0.54   27.62           33.56
JAEN Annotation   35.1*  -0.44*  91.7*           94.24*
JAEN Factors      36.1*  -0.4*   90.36*          95.21*
JAEN Alignment    35.3*  -0.48*  100*            100*
ENRU No glossary  60.7   0.7     68.95           85.9
ENRU Annotation   61.2*  0.7     76.19*          95.05*
ENRU Factors      60     0.65    68.17           88.38
ENRU Alignment    61.1*  0.62    98.28*          99.81*
Table 4: Automated evaluation metrics for the Japanese-English (JAEN) and English-Russian (ENRU) language pairs. The highest scores for each language pair are marked in bold, * indicates a statistically significant (p<0.01) improvement over the translation without glossary constraints.
The alignment method seemed to have a larger negative impact on translation quality, as measured by accuracy, fluency, and morphological agreement, but was also the most likely to have the correct term present in the sentence.
Additionally, our results show that the use of noisy source material for glossary creation is viable. Some intervention may still be required to retain only good-quality term pairs. It remains to be seen how well this glossary actually approximates the distribution of input terms in production.
Contrary to the fears of Bergmanis and Pinnis (2021), using only exact matches in data preparation does not limit the model to simple copying behavior. However, a tendency to restructure the output sentence so as to properly use the exact term provided is noticeable. Users of glossary features should be guided on how best to work with polysemous terms in NMT.
None of the methods emerged as clearly superior, with different models performing better in different tasks and different language pairs. We believe that this suggests that each approach can be viable, but must be carefully adapted for the specific language pair and usage scenario. A solution combining the annotation or factors method with the alignment method may present a good option. In such a solution, input data would be prepared according to the requirements for the former method, and alignment-based insertion can be used as a fallback when the model does not produce the expected term. The use of lemmatization in this fallback method may help reduce the incidence of false positives for cases where the model has used the term correctly but in a morphological form different to that of the glossary term.
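A rough sketch of the fallback logic suggested above: run the annotation/factors model first and only resort to alignment-based insertion when the requested term is missing. The prefix check is a deliberately naive stand-in for proper lemmatization, and both callables are placeholders for whichever constrained model and insertion routine are in use.

```python
import re

def term_realised(output: str, tgt_term: str, min_stem: int = 4) -> bool:
    """True if the enforced term shows up in the MT output, either exactly
    (word boundaries) or as a word sharing a long prefix with it - a crude
    stand-in for the lemma matching suggested above."""
    if re.search(rf"\b{re.escape(tgt_term)}\b", output, flags=re.IGNORECASE):
        return True
    stem = tgt_term[: max(min_stem, len(tgt_term) - 2)].lower()
    return any(w.lower().startswith(stem) for w in re.findall(r"\w+", output))

def translate_with_fallback(src, tgt_term, constrained_translate, alignment_insert):
    """Try the annotation/factors model first; only fall back to alignment-based
    insertion when the requested term is still missing from the hypothesis."""
    hyp = constrained_translate(src)
    return hyp if term_realised(hyp, tgt_term) else alignment_insert(src, hyp, tgt_term)
```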
7 Future Work
This research suggests multiple potential paths for future research. Firstly, our assumption that historical terminology enforcement requests approximate the distribution at inference time calls for proper scrutiny. Research comparing the effects of using different glossaries to prepare training data under controlled conditions can show if there is any significant downstream effect in the translation task.
Furthermore, there are many avenues of investigation stemming from the data preparation procedure. What is the appropriate ratio of samples with and without glossary enforcement signals in the dataset? What are the effects of lemmatization or fuzzy matching of glossary pairs in the dataset? What would be the effect of adding the glossary signal at the start or end of the sequence instead of at the location where the source term occurs? Should there be a limit to how many times a particular term appears? The frequency distribution of terms in our datasets showed roughly an inverse rank-frequency curve (Zipf's law), with some terms appearing with great frequency and a long tail of terms appearing only once.
Lastly, more research into interventions in the decoding algorithm is warranted. Techniques such as adaptive MT and constrained decoding, or some yet undiscovered technique, may still prove to be superior to the methods investigated in this work. While progress thus far has been remarkable, the issue of terminology enforcement is far from solved, so close attention to new research is necessary.
References
Ailem, Melissa, Jingshu Liu, and Raheel Qader. 2021. Encouraging neural machine translation to satisfy terminology constraints. In Findings of the Association for Computational Linguistics: ACL-IJCNLP 2021, pages 1450–1455, Online, August. Association for Computational Linguistics.
Alam, Md Mahfuz ibn, Antonios Anastasopoulos, Laurent Besacier, James Cross, Matthias Gallé, Philipp Koehn, and Vassilina Nikoulina. 2021a. On the evaluation of machine translation for terminology consistency. arXiv.
Alam, Md Mahfuz ibn, Ivana Kvapilíková, Antonios Anastasopoulos, Laurent Besacier, Georgiana Dinu, Marcello Federico, Matthias Gallé, Philipp Koehn, Vassilina Nikoulina, and Kweon Woo Jung. 2021b. Findings of the WMT shared task on machine translation using terminologies. In Proceedings of the 6th Conference on Machine Translation (WMT21), Online, November. Association for Computational Linguistics.
Bergmanis, Toms and Mārcis Pinnis. 2021. Facilitating terminology translation with target lemma annotations. In Proceedings of the 16th Conference of the European Chapter of the Association for Computational Linguistics: Main Volume, pages 3105–3111, Online, April. Association for Computational Linguistics.
Chatterjee, Rajen, Matteo Negri, Marco Turchi, Marcello Federico, Lucia Specia, and Frédéric Blain. 2017. Guiding neural machine translation decoding with external knowledge. In Proceedings of the Second Conference on Machine Translation, pages 157–168, Copenhagen, Denmark, September. Association for Computational Linguistics.
Crego, Josep, Jungi Kim, Guillaume Klein, Anabel Rebollo, Kathy Yang, Jean Senellart, Egor Akhanov, Patrice Brunelle, Aurelien Coquard, Yongchao Deng, Satoshi Enoue, Chiyo Geiss, Joshua Johanson, Ardas Khalsa, Raoum Khiari, Byeongil Ko, Catherine Kobus, Jean Lorieux, Leidiana Martins, and Peter Zoldan. 2016. Systran's pure neural machine translation systems. 10.
Dinu, Georgiana, Prashant Mathur, Marcello Federico, and Yaser Al-Onaizan. 2019. Training neural machine translation to apply terminology constraints. In Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics, pages 3063–3068, Florence, Italy, July. Association for Computational Linguistics.
Exel, Miriam, Bianka Buschbeck, Lauritz Brandt, and Simona Doneva. 2020. Terminology-constrained neural machine translation at SAP. In Proceedings of the 22nd Annual Conference of the European Association for Machine Translation, pages 271–280, Lisboa, Portugal, November. European Association for Machine Translation.
Hasler, Eva, Adrià de Gispert, Gonzalo Iglesias, and Bill Byrne. 2018. Neural machine translation decoding with terminology constraints.
In Proceedings\nofthe2018ConferenceoftheNorthAmericanChap-\nteroftheAssociationforComputationalLinguistics:\nHumanLanguageTechnologies,Volume2(ShortPa-\npers),pages506–512,NewOrleans,Louisiana,June.\nAssociation for Computational Linguistics.\nHokamp, Chris and Qun Liu. 2017. Lexically con-\nstraineddecodingforsequencegenerationusinggrid\nbeam search. In Proceedings of the 55th Annual\nMeeting of the Association for Computational Lin-\nguistics(Volume1: LongPapers) ,pages1535–1546,\nVancouver, Canada, July. Association for Computa-\ntional Linguistics.\nJunczys-Dowmunt, Marcin, Roman Grundkiewicz,\nTomasz Dwojak, Hieu Hoang, Kenneth Heafield,\nTom Neckermann, Frank Seide, Ulrich Germann,\nAlham Fikri Aji, Nikolay Bogoychev, André F. T.\nMartins, and Alexandra Birch. 2018. Marian: Fast\nneuralmachinetranslationinC++. In Proceedingsof\nACL 2018, System Demonstrations , pages 116–121,\nMelbourne, Australia, July. Association for Compu-\ntational Linguistics.\nPost, Matt and David Vilar. 2018. Fast lexically con-\nstrained decoding with dynamic beam allocation for\nneural machine translation. In Proceedings of the\n2018 Conference of the North American Chapter of\nthe Association for Computational Linguistics: Hu-\nman Language Technologies, Volume 1 (Long Pa-\npers), pages 1314–1324, New Orleans, Louisiana,\nJune. Association for Computational Linguistics.\nSchwenk, Holger, Guillaume Wenzek, Sergey Edunov,\nEdouard Grave, and Armand Joulin. 2019. Ccma-\ntrix: Mining billions of high-quality parallel sen-\ntences on the web.\nSnover, Matthew, Bonnie Dorr, Rich Schwartz, Lin-\nnea Micciulla, and John Makhoul. 2006. A study\nof translation edit rate with targeted human annota-\ntion. In Proceedings of the 7th Conference of the\nAssociation for Machine Translation in the Ameri-\ncas: Technical Papers , pages 223–231, Cambridge,\nMassachusetts, USA, August 8-12. Association for\nMachine Translation in the Americas.\nVaswani, Ashish, Noam Shazeer, Niki Parmar, Jakob\nUszkoreit, Llion Jones, Aidan N Gomez, Ł ukasz\nKaiser, and Illia Polosukhin. 2017. Attention is all\nyou need. In Guyon, I., U. Von Luxburg, S. Ben-\ngio, H. Wallach, R. Fergus, S. Vishwanathan, and\nR. Garnett, editors, Advances in Neural Information\nProcessing Systems , volume 30. Curran Associates,\nInc.\nAppendix A. Supplementary Materials\nSource sentence あなたが許可を取り消した場合、あなたや赤ちゃんの 身元を特定\nする情報を新たに収集することはありません 。\nTranslation without glos-\nsary enforcementIf you withdraw your permission, no new information that identifies you or\nyour baby will be collected .\nAnnotation 1 あなたが許可を取り消した場合、あなたや赤ちゃんの 身元を特定\nする情報を新たに<S><C>we </C> 収集することはありません 。\nAnnotation 1 translation If you withdraw your permission, wewill not collect any new information\nthat identifies you or your baby.\nAnnotation 2 あなたが許可を取り消した場合、あなたや赤ちゃんの 身元を特定\nする情報を新たに<S><C>the research center </C> 収集することはあ\nりません 。\nAnnotation 2 translation If you withdraw your permission, no new information identifying you or\nyour baby will be collected by the research center .\nTable 5: Examplelanguage-specificedgecase. IntheJapanesesource,thesubjectiselided,asitmaybeinferredfromcontext.\nWithout glossary guidance, the model chooses a passive voice. With glossary guidance, an active voice can be induced. As no\nsourcetermexists,weaddedtheannotationwithanemptysourcefieldwherethesubjectwouldappear. 
Boldfaceforemphasis.\nSource term Target term Source sentence Target sentence ( annotation method)\n言い続けてThey keep\nsayingこれは死亡が宣告された\n日から遺族がずっと言い\n続けてきたことだ 。This is because the surviving family\nhas always kept saying , starting from\nthe day the death was declared.\n戻って来たThey have\nreturned市職員や住民、観光客ら\nがそのうちの 何頭かを引\nきずり、なんとか 沖へ帰\nしたものの 、その多くが\n戻って来たという 。City officials, residents, and tourists\ndragged some of them, and they some-\nhow returned to the offshore, but many\nof them said they had returned .\nTable 6: Japanese-English examples of partial term matches. Boldface for emphasis.\nSource term Target term Source sentence Original translation Annotation method\nsubject пациент One subject experi-\nenced an SAE (pneu-\nmonia) during study\ntreatment with FSC.У одного пациен-\nта развилось СНЯ\n(пневмония) во вре-\nмя исследуемого ле-\nчения КФС.Один пациент пере-\nнес СНЯ (пневмо-\nнию) во время ис-\nследуемого лечения\nКФС.\nTable 7: Sentence adaptation to match the glossary form of the term in English-Russian.", "main_paper_content": null }
{ "decision": "Unknown", "reviews": [] }
0
0
[]
[]
{ "id": "3aH3fk5UK_", "year": null, "venue": "EAMT 2022", "pdf_link": "https://aclanthology.org/2022.eamt-1.26.pdf", "forum_link": "https://openreview.net/forum?id=3aH3fk5UK_", "arxiv_id": null, "doi": null }
{ "title": "Comparing Multilingual NMT Models and Pivoting", "authors": [ "Celia Soler Uguet", "Fred Bane", "Anna Zaretskaya", "Tània Blanch Miró" ], "abstract": null, "keywords": [], "raw_extracted_content": "Comparing Multilingual NMT Models and Pivoting\nCelia Soler Uguet Fred Bane Anna Zaretskaya T `ania Blanch Mir ´o\nTransPerfect\n{csuguet,fbane,azaretskaya,tblanch }@translations.com\nAbstract\nFollowing recent advancements in multi-\nlingual machine translation at scale, our\nteam carried out tests to compare the per-\nformance of pre-trained multilingual mod-\nels (M2M-100 from Facebook and multi-\nlingual models from Helsinki-NLP) with a\ntwo-step translation process using English\nas a pivot language. Direct assessment\nby linguists rated translations produced by\npivoting as consistently better than those\nobtained from multilingual models of sim-\nilar size, while automated evaluation with\nCOMET suggested relative performance\nwas strongly impacted by domain and lan-\nguage family.\n1 Background and Motivation\nAs a translation company, our work involves\nhundreds of distinct translation directions across\ndozens of languages. However, demand is not\nevenly distributed across all language pairs. The\nvast majority of our translation requests involve\nEnglish as either the source or target language,\nwith most other requests concentrated in a few ma-\njor languages, such as German, French, Italian,\nJapanese, and Chinese.\nOur fleet of machine translation (MT) engines\nis developed considering both the demand and the\nresources available for training. Currently, we\nuse mostly bilingual models with some many-to-\none models (such as for Scandinavian languages),\nbut no many-to-many models. For language pairs\nwhere only a few hundred words are translated\n© 2022 The authors. This article is licensed under a Creative\nCommons 3.0 licence, no derivative works, attribution, CC-\nBY-ND.each year, the demand does not justify the costs\nincurred in training, deploying, and maintaining\nan engine for that language pair. Moreover, these\nlanguage pairs often have scant high-quality re-\nsources available for training. Thus, in situations\nwhere demand for machine translation exists, but\nin insufficient amount to offset training and de-\nployment costs, we have historically chosen to use\na two-step translation process: pivoting through a\nrelated, high-resource language.\nIn recent years, multilingual models have shown\ngrowing potential to wholly or partially replace a\nfleet of bilingual models. The benefits are clear:\nno error propagation resulting from using the out-\nput of one model as the input to another as in the\npivot scenario; reduced overhead and complexity\nby using one model for multiple language direc-\ntions instead of separate models for each direction;\nimproved translation quality in low-resource lan-\nguages due to knowledge transfer from related lan-\nguages; the potential for zero-shot translation for\nlanguage directions for which no direct data ex-\nist, and so on. 
However, these models also have\ntheir drawbacks, including the expense and diffi-\nculty of retraining the models, the inability to add\nadditional languages without retraining the model\nentirely, and the near impossibility of fine-tuning\nthe model for particular clients.\nBelow we report the results of an experiment\ncomparing bilingual base transformers (Vaswani et\nal., 2017) with pre-trained M2M-100 from Face-\nbook obtained from Hugging Face (Fan et al.,\n2020) and multilingual models made public by\nHelsinki-NLP (Tiedemann and Thottingal, 2020),1\nusing data drawn from our previous translation\nwork and out-of-domain corpora.\n1https://github.com/Helsinki-NLP/Opus-MT\n2 Related Research\nInteresting and very promising work has been car-\nried out recently on multilingual MT approaches,\nwhere instead of training one NMT model for each\nlanguage pair separately, a single model is trained\nthat can translate from a single source into multi-\nple target languages, or even many-to-many mod-\nels that can translate in any direction between the\nlanguages they are trained on. Apart from improv-\ning MT performance for low-resource languages\nthat can benefit from such models, these works\nalso show competitive performance for resource-\nrich languages, suggesting the possibility of fully\nreplacing the bilingual approach in the near future.\nMost recently, the Facebook AI research group\nproposed a single multilingual translation model\nable to translate within any pair of the 100 lan-\nguages included (Fan et al., 2020). The authors ob-\nserved a significant improvement in performance\nin non-English language pairs, and a competitive\nperformance in language pairs that include En-\nglish compared to the WMT baseline from previ-\nous years (Barrault et al., 2019; Bojar et al., 2017;\nBojar et al., 2018)\nMultilingual MT models have been a subject\nof research for a few years now. In most cases,\nthe goal has been to leverage parallel data avail-\nable for resource-rich languages to improve MT\nperformance for languages with scarce resources.\nAs early as in 2015, Dong et al. (2015) ex-\nplored an approach for simultaneously translating\nthe same source sentence into multiple target sen-\ntences. They obtained a better performance on\nall language pairs (English into French, Spanish,\nDutch and Portuguese) when using the multilin-\ngual model as opposed to single-target RNN mod-\nels. However, statistical significance of the deltas\nare not indicated in the paper.\nA few other works report significant improve-\nment for low-resource languages thanks to multi-\nlingual models. Fira et al. (2016) propose a multi-\nway multilingual model trained on WMT’15 data.\nHa et al. (2016) explore a multilingual NMT ap-\nproach and report on promising results for low-\nresource languages, as well as in scenarios where\nthere are not enough parallel data available in or-\nder to train a bilingual NMT model while achiev-\ning good performance.\nA simpler multilingual NMT approach was pro-\nposed by Johnson et al. (2016). It does not re-\nquire any change to the model architecture, butinstead introduces a token at the beginning of\nthe input sentence to indicate the target language.\nThe authors report improvement for low-resource\nlanguages but, unlike the majority of other simi-\nlar works, they observed a degradation on high-\nresource languages compared to bilingual models.\nFinally, Tan et al. 
(2019) propose one more interesting approach, namely to use NMT with knowledge distillation, where bilingual models act as teachers. The authors report similar or improved results compared to the bilingual models used in the experiment.
It is notable that most of these works report very encouraging results: multilingual models always seem to outperform bilingual ones for low-resource languages, and perform on par or better for resource-rich languages. This contributes to the intuition that they will perform mostly better than two-level systems that pivot through English.
3 Materials and Methods
For this research, we set out to compare the performance of our company's pivoting system with open-source pre-trained multilingual models. For the pivoting system, we used general-purpose models trained to handle the different content types we have historically received in our translation work. These models were trained with between ten and thirty million sentence pairs, for fifty epochs or until the early stopping criterion was met (no improvement in validation set perplexity for 6 successive validation checkpoints). We used the transformer-base architecture with guided alignment using alignments from fast align (Dyer et al., 2013), and to limit potential confounding factors we use English as the pivot language for all language pairs. We chose to compare our system with two M2M-100 systems (the 480 million and the 1.2 billion parameter models) (Fan et al., 2020) and the multilingual models made public by Helsinki-NLP (Tiedemann and Thottingal, 2020). While there are other pre-trained multilingual SOTA models such as mT5 that could be fine-tuned for the downstream task of multilingual translation (Xue et al., 2021), we believe that the M2M-100 and Helsinki-NLP models were easily accessible and ready to be used with no further fine-tuning. Moreover, since all these systems were released around the same time, there is no published or reliable research to suggest that one model outperforms the rest.
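Before listing the language pairs, a rough sketch of what such a two-step pivot looks like in practice. Publicly available OPUS-MT checkpoints from Helsinki-NLP are used here purely as stand-ins for the in-house general-purpose Marian models described above; the checkpoint names and pipeline calls are assumptions for illustration rather than the production setup.

```python
from transformers import pipeline

# Stand-in checkpoints; the actual pivot system used in-house Marian models.
it_en = pipeline("translation", model="Helsinki-NLP/opus-mt-it-en")
en_fr = pipeline("translation", model="Helsinki-NLP/opus-mt-en-fr")

def pivot_translate(sentence: str) -> str:
    """Two-step pivot translation: Italian -> English -> French."""
    english = it_en(sentence, max_length=512)[0]["translation_text"]
    return en_fr(english, max_length=512)[0]["translation_text"]

print(pivot_translate("La siringa contiene 1 ml di soluzione iniettabile."))
```

A practical property of this arrangement is that any new direction can be served by chaining two existing bilingual models, at the cost of propagating any errors made in the first step through the intermediate English.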
Being a monolingual database, we can be\nquite confident that none of those texts were used\nfor the training of any of the engines we were com-\nparing. We extracted text from the news domain\nand from the most recent year available for each\nsource language. These test data were considered\nto be “out-of-domain” for our engines.\nSince no reference translations were available\nfor any of the input sentences, we performed au-\ntomated, reference-free evaluation using COMET,\nwhich was Unbabel’s submission for the WMT\n2020 Quality Estimation Shared Task (Rei et al.,\n2020). The reason behind this decision was that\nthis model ended on the top 5 of best models in\nall tasks and language pairs but one. Moreover, it\ncan be used for document-level assessment, it is\neasily accessible, it can be run on GPU, and it of-\nfers a command to compare multiple systems with\nstatistical testing. Additionally, we also engagedhuman linguists to carry out blinded direct assess-\nment (DA) for each language pair. Ordinarily we\nwould commission multiple linguists for each lan-\nguage pair to mitigate the effects of bias and hu-\nman error. However, for these less common trans-\nlation directions, only one linguist was available\nper language pair. Nevertheless, we consider these\nscores reliable as the linguists were selected from\nour pool of certified translators for the language\npair. This means that the annotators were not sim-\nply bilingual speakers, but held translation certifi-\ncation and actively performed translation tasks in\nthis language pair.\nEach linguist scored 200 segments chosen at\nrandom (100 from the in-domain data and 100\nfrom the out-of-domain data) using a scale from 0\nto 100. Linguists were instructed to score the seg-\nments based on the general quality of the MT out-\nput – how well it represented the main message of\nthe input sentence – rather than small errors which\nwould be more heavily penalized when evaluating\nhuman translations. The scoring criteria provided\nto the linguists were as below:\n• 0: Completely unintelligible and useless\ntranslation;\n• 25: Most of the target needs editing, but part\nof the MT can be preserved;\n• 50: Half of the output is usable and half needs\nto be edited;\n• 75: Edits needed, but MT output is usable;\n• 100: Perfect translation, fully accurate.\nStatistical significance for automated metrics\nwas calculated using the bootstrap t-test from\nCOMET (Koehn, 2004), and statistical signifi-\ncance for human DA was determined using un-\npaired t-test with p<0.01 considered statistically\nsignificant.\n4 Results\nThe results of the human and automated evalua-\ntions are presented in Tables 1 through 3 below. In\nevery case, human evaluation favored the transla-\ntion from the pivot system, often by a large margin.\nThis was true for both test sets as well as the over-\nall scores. The difference was more pronounced\nfor language pairs from different families than for\nlanguage pairs where both the source and target\nwere European languages (average difference of\n10.99 in the overall scores for FR–JA, FR–ZH, and\nFR–AR vs. 3.59 for IT–FR, ES–IT, FR–PT, and\nIT–DE).\nCOMET scores were less conclusive, suggest-\ning that relative performance was more dependent\non the domain of the content and the language fam-\nilies to which the source and target belonged. 
On\nthe in-domain test set, scores for the pivot sys-\ntem were better than the small M2M-100 model\nin all but one language pair (FR–PT), and even\noutperformed the larger M2M-100 model in the\nthree inter-language-family language pairs (FR–\nJA, FR–ZH, and FR–AR). For the European lan-\nguage pairs, the larger M2M-100 system obtained\nscores significantly higher than those for the pivot\nsystem.\nFor the out-of-domain test set, on the other\nhand, the M2M-100 models obtained higher scores\nin all language pairs, though we may again observe\nthat scores for language pairs from different lan-\nguage families are roughly 50 percent lower than\nthose for European language pairs.\n4.1 Divergence Between COMET and DA\nScores\nIn a number of instances, we noted pronounced di-\nvergence between the scores assigned by COMET\nand those from human linguists. To better un-\nderstand this phenomenon, we manually analyzed\nsome of these sentences and provide some exam-\nples in Table 4.\nWe find that in general those segments being\ngiven a low score by COMET but a higher score\nby human reviewers tend to contain a large number\nof punctuation marks, numbers, or proper nouns\n(especially those written in Latin characters when\nthe language uses a different script). We speculate\nthat low scores due to proper nouns may suggest\na difference between COMET’s linguistic knowl-\nedge and world knowledge, while the low scores\nfor sentences in the former two categories may be\nrelated to the composition of the training data used\nto train the COMET system.\nWe present as well a comparison of the agree-\nment between human reviewers and COMET. The\nplots for each language pair can be found from Fig-\nures 1 and 2 in Appendix A. X values represent\nthe normalized difference in COMET scores be-\ntween the M2M-100 translation and the translation\nof the pivot system; Y values represent the nor-\nmalized difference in human scores respectively.\nPositive values represent a better score from theM2M-100 system, and negative values represent a\nbetter score from the pivoting system. Data points\nin quadrants I and III represent agreement between\nthe human evaluation and COMET, while those in\nquadrants II and IV represent disagreement.\n5 Discussion\nIn this study we compared translations from differ-\nent models using human DA and automated eval-\nuation with the COMET quality estimation model.\nWe tested model performance using a combination\nof data sampled from the same distribution as our\ntraining data (in-domain) and news data (which\nwere out-of-domain for our models used in the\npivot system). Single-blind human DA showed\na clear preference for the translations obtained\nthrough pivoting, while automated evaluation with\nCOMET was less conclusive: the domain of the\ncontent and whether or not the source and target\nlanguages belonged to the same language family\nappeared to have a significant effect on the scores.\nBeyond translation quality, as a translation com-\npany we must also take other aspects into consid-\neration. 
While these fall outside the scope of this\nwork, there are many other relevant factors, such\nas:\n• Simplicity in production: It might be more\ndesirable to have one model instead of many;\n• Resource requirements: While one model can\ntake the place of many, multiple instances\nof the model would be needed, and each in-\nstance requires greater resources, so the ulti-\nmate effect on hosting and inference costs is\nuncertain;\n• Updating problems: With a multilingual\nmodel it is more complex and costly to update\nor fix problems that are discovered during in-\nference. It is much easier to retrain bilingual\nmodels in response to issues;\n• Adding more languages: It is not possible\nto add more languages to an already-trained\nmultilingual model, whereas a pivoted ap-\nproach can be deployed on-demand for any\ntwo languages that are supported with bilin-\ngual models;\n• Client customization: It is unclear how, if\nat all, a multilingual model may be adapted\nfor particular clients, especially clients with\nOverall\nLg. Pair Pivot M2M Helsinki\nIT–FR 73.64 68.35 64.66\nFR–JA 69.86 *58.84 N/A\nFR–ZH 73.18 *65.56 N/A\nES–IT 83.3 78.98 76.02\nFR–PT 90.63 88.21 84.59\nIT–DE 86.2 83.85 N/A†\nFR–AR 67.8 53.46 N/AIn-Domain\nPivot M2M Helsinki\n70.04 67.25 64.54\n71.34 *56.15 N/A\n78.23 *66.56 N/A\n88.3 81.53 79.2\n91.79 87.78 83.23\n78.95 76.58 N/A†\n76.73 51.72 N/AOut-Of-Domain\nPivot M2M Helsinki\n76.89 69.42 64.77\n68.45 *61.4 N/A\n68.07 64.56 N/A\n78.3 76.43 72.9\n89.47 88.65 85.94\n93.68 91.28 N/A†\n58.86 55.2 N/A\nTable 1: Human direct assessment scores for each system. The M2M-100 system used here is the smaller of the two (480M),\nso as to be directly comparable with the base transformers used in the pivot system. * Indicates scores with a statistically\nsignificant difference ( p<0.01). †Indicates that no multilingual model was available, only a direct bilingual model.\nLanguage Pair Pivot M2M (480M) M2M (1.2B) Helsinki-NLP\nIT–FR 0.3773 0.3608 0.4035 * 0.3216\nFR–JA 0.2305 0.1937 0.2222 N/A\nFR–ZH 0.1944 0.1563 0.1728 N/A\nES–IT 0.4704 0.4464 0.4877 * 0.3903\nFR–PT 0.3711 0.3782 0.4026 * 0.3372\nIT–DE 0.3271 0.2901 0.3498 * N/A†\nFR–AR 0.2003 0.1875 0.1574 N/A\nTable 2: COMET scores for each system on in-domain data. * Indicates scores with a statistically significant improvement\ncompared to the Pivot column ( p<0.01). †Indicates that no multilingual model was available, only a direct bilingual model.\nLanguage Pair Pivot M2M (480M) M2M (1.2B) Helsinki-NLP\nIT–FR 0.3158 0.3223 0.3934 * 0.2698\nFR–JA 0.1816 0.1889 0.227 * N/A\nFR–ZH 0.1376 0.1401 0.1783 * N/A\nES–IT 0.3771 0.3987* 0.4487 * 0.3418\nFR–PT 0.3394 0.4042* 0.4543 * 0.3395\nIT–DE 0.2302 0.229 0.3158 * N/A†\nFR–AR 0.1943 0.2141 * 0.1751 N/A\nTable 3: COMET scores for each system on out-of-domain data. * Indicates scores with a statistically significant improvement\ncompared to the Pivot column ( p<0.01). 
†Indicates that no multilingual model was available, only a direct bilingual model.\nLanguage Pair Source Target COMET2Linguist\nIT–FR La siringa contiene <<mL COUNT >>ml di\nsoluzione iniettabile, da <>mg<>,<>mg\n<>o placebo.La seringue contient <<mL COUNT >>ml\nde solution injectable, de <> mg<>,<>\nmg<> ou placebo.27.69 100\nFR–JA C’est une rentr ´ee pleine d’incertitudes `a\nl’hˆopital , confirme M ´elanie Meier, de la\nCFDT.「これは不確実性に満ちた病院への帰還\nだ」とCFDTのメラニー・メイエ氏は述\nべている。0 80\nFR–ZH Je travaille pendant les vacances `a Dour et\n`a Pukkelpop et j’ai normalement beaucoup\nd’argent de poche l’ ´et´e.我在Dour和Pukkelpop 度假期工作 ,我通常\n在夏天有很多。0 90\nES–IT jersey de rayas anchas con cuello a la caja. Maglia a righe larghe con scollo. 0 90\nFR–PT Tribunal de Paris – Corruption : Apr `es\nLamine Diack, Papa Massata condamn ´e. . .Tribunal de Paris – Corrupc ¸ ˜ao: depois de\nLamine Diack, Papa Massata condenada...29.34 99\nIT–DE 2.2 Come meglio descritto nel dettaglio al\nsuccessivo art.2.2 Wie besser in der Kunst ausf ¨uhrlich\nbeschrieben.0 98\nFR–AR CHRU DE LILLE - H ˆopital Albert Calmette\nCHRU DE LILLE - \u0010IJ\nÖÏA¿\u0010HQ\u001e.Ë\r@ ù\t®\u0011‚\u0010\u001c‚Ó20.30 90\nTable 4: Some examples of segments with a low COMET score in comparison to the score given by the linguist.\nsmall translation memories or those who\ntranslate in only one language pair;\n• Trade-off between low- and high-resource\nlanguages: Performance in low-resource lan-\nguages can be improved through knowledge\ntransfer from higher-resource languages, but\ndecreased performance in these higher-\nresource languages may outweigh these gains\ndue to the greater volume of demand.\nContrary to our intuitions prior to undertaking\nthis study, our results suggest that pivoting is a rea-\nsonable choice for language pairs where no direct\nmodel exists, at least in terms of translation qual-\nity. The strength of the conclusions are limited\nby the relatively small sample size, and we antici-\npate these results will need to be revisited as mul-\ntilingual models become more capable. Moreover,\nfine-tuning other pre-trained multilingual models\nsuch as mT5 and comparing those with the pivot-\ning approach could lead to different conclusions.\nFurther research is needed to more comprehen-\nsively weigh the advantages and disadvantages of\nreplacing multiple bilingual models with a single\nmultilingual model.\nReferences\nBarrault, Lo ¨ıc, Ond ˇrej Bojar, Marta R. Costa-juss `a,\nChristian Federmann, Mark Fishel, Yvette Gra-\nham, Barry Haddow, Matthias Huck, Philipp Koehn,\nShervin Malmasi, Christof Monz, Mathias M ¨uller,\nSantanu Pal, Matt Post, and Marcos Zampieri. 2019.\nFindings of the 2019 conference on machine transla-\ntion (WMT19). In Proceedings of the Fourth Con-\nference on Machine Translation (Volume 2: Shared\nTask Papers, Day 1) , pages 1–61, Florence, Italy,\nAugust. Association for Computational Linguistics.\nBojar, Ondrej, Chatterjee Rajen, Christian Federmann,\nYvette Graham, Barry Haddow, Matthias Huck,\nPhilipp Koehn, Qun Liu, Varvara Logacheva, and\nChristof et al. Monz. 2017. Findings of the 2017\nconference on machine translation (WMT17). In\nSecond Conference on Machine Translation , pages\n169–214. The Association for Computational Lin-\nguistics.\nBojar, Ond ˇrej, Christian Federmann, Mark Fishel,\nYvette Graham, Barry Haddow, Philipp Koehn, and\nChristof Monz. 2018. Findings of the 2018 con-\nference on machine translation (WMT18). 
In Pro-\nceedings of the Third Conference on Machine Trans-\nlation: Shared Task Papers , pages 272–303, Bel-\ngium, Brussels, October. Association for Computa-\ntional Linguistics.Dong, Daxiang, Hua Wu, Wei He, Dianhai Yu, and\nHaifeng Wang. 2015. Multi-task learning for mul-\ntiple language translation. In Proceedings of the\n53rd Annual Meeting of the Association for Compu-\ntational Linguistics and the 7th International Joint\nConference on Natural Language Processing (Vol-\nume 1: Long Papers) , pages 1723–1732, Beijing,\nChina, July. Association for Computational Linguis-\ntics.\nDyer, Chris, Victor Chahuneau, and Noah A. Smith.\n2013. A simple, fast, and effective reparameteriza-\ntion of ibm model 2. In In Proc. NAACL .\nFan, Angela, Shruti Bhosale, Holger Schwenk, Zhiyi\nMa, Ahmed El-Kishky, Siddharth Goyal, Man-\ndeep Baines, Onur Celebi, Guillaume Wenzek,\nVishrav Chaudhary, Naman Goyal, Tom Birch, Vi-\ntaliy Liptchinsky, Sergey Edunov, Edouard Grave,\nMichael Auli, and Armand Joulin. 2020. Beyond\nenglish-centric multilingual machine translation. 10.\nFirat, Orhan, KyungHyun Cho, and Yoshua Bengio.\n2016. Multi-way, multilingual neural machine trans-\nlation with a shared attention mechanism. CoRR ,\nabs/1601.01073.\nGoldhahn, Dirk, Thomas Eckart, and Uwe Quasthoff.\n2012. Building large monolingual dictionaries at the\nLeipzig corpora collection: From 100 to 200 lan-\nguages. In Proceedings of the Eighth International\nConference on Language Resources and Evaluation\n(LREC’12) , pages 759–765, Istanbul, Turkey, May.\nEuropean Language Resources Association (ELRA).\nHa, Thanh-Le, Jan Niehues, and Alexander H. Waibel.\n2016. Toward multilingual neural machine trans-\nlation with universal encoder and decoder. CoRR ,\nabs/1611.04798.\nJohnson, Melvin, Mike Schuster, Quoc V . Le, Maxim\nKrikun, Yonghui Wu, Zhifeng Chen, Nikhil Tho-\nrat, Fernanda B. Vi ´egas, Martin Wattenberg, Greg\nCorrado, Macduff Hughes, and Jeffrey Dean. 2016.\nGoogle’s multilingual neural machine translation\nsystem: Enabling zero-shot translation. CoRR ,\nabs/1611.04558.\nKoehn, Philipp. 2004. Statistical significance tests\nfor machine translation evaluation. In Proceed-\nings of the 2004 Conference on Empirical Methods\nin Natural Language Processing , pages 388–395,\nBarcelona, Spain, July. Association for Computa-\ntional Linguistics.\nRei, Ricardo, Craig Stewart, Ana C Farinha, and Alon\nLavie. 2020. COMET: A neural framework for MT\nevaluation. In Proceedings of the 2020 Conference\non Empirical Methods in Natural Language Process-\ning (EMNLP) , pages 2685–2702, Online, November.\nAssociation for Computational Linguistics.\nTan, Xu, Yi Ren, Di He, Tao Qin, Zhou Zhao, and\nTie-Yan Liu. 2019. Multilingual neural ma-\nchine translation with knowledge distillation. CoRR ,\nabs/1902.10461.\nTiedemann, J ¨org and Santhosh Thottingal. 2020.\nOPUS-MT – building open translation services for\nthe world. In Proceedings of the 22nd Annual\nConference of the European Association for Ma-\nchine Translation , pages 479–480, Lisboa, Portu-\ngal, November. European Association for Machine\nTranslation.\nVaswani, Ashish, Noam Shazeer, Niki Parmar, Jakob\nUszkoreit, Llion Jones, Aidan N. Gomez, Lukasz\nKaiser, and Illia Polosukhin. 2017. Attention is all\nyou need. CoRR , abs/1706.03762.\nXue, Linting, Noah Constant, Adam Roberts, Mi-\nhir Kale, Rami Al-Rfou, Aditya Siddhant, Aditya\nBarua, and Colin Raffel. 2021. mT5: A massively\nmultilingual pre-trained text-to-text transformer. 
In\nProceedings of the 2021 Conference of the North\nAmerican Chapter of the Association for Computa-\ntional Linguistics: Human Language Technologies ,\npages 483–498, Online, June. Association for Com-\nputational Linguistics.\nAppendix A. Comparison of COMET and Human DA Scores\nFigure 1: Comparison of difference between COMET and human annotations: language pairs in the same language family\nFigure 2: Comparison of difference between COMET and human annotations: language pairs in different language families", "main_paper_content": null }
{ "decision": "Unknown", "reviews": [] }
0
0
[]
[]
{ "id": "9HCI5wAvGg", "year": null, "venue": "EAMT 2010", "pdf_link": "https://aclanthology.org/2010.eamt-1.29.pdf", "forum_link": "https://openreview.net/forum?id=9HCI5wAvGg", "arxiv_id": null, "doi": null }
{ "title": "Domain Adaptation in Statistical Machine Translation using Factored Translation Models", "authors": [ "Jan Niehues", "Alex Waibel" ], "abstract": null, "keywords": [], "raw_extracted_content": "Domain Adaptation in Statistical Machine Translation using Factored\nTranslation Models\nJan Niehues and Alex Waibel\nInstitute for Anthropomatics\nKarlsruhe Institute for Technology\nKarlsruhe, Germany\nfjan.niehues,[email protected]\nAbstract\nIn recent years the performance of SMT\nincreased in domains with enough train-\ning data. But under real-world conditions,\nit is often not possible to collect enough\nparallel data. We propose an approach to\nadapt an SMT system using small amounts\nof parallel in-domain data by introducing\nthe corpus identifier (corpus id) as an ad-\nditional target factor. Then we added fea-\ntures to model the generation of the tags\nand features to judge a sequence of tags.\nUsing this approach we could improve the\ntranslation performance in two domains by\nup to 1 BLEU point when translating from\nGerman to English.\n1 Introduction\nStatistical machine translation (SMT) is currently\nthe most promising approach to machine transla-\ntion of large vocabulary tasks. The approach was\nfirst presented in Brown et al. (1993) and has been\nused in many translation systems since then.\nOne drawback of this approach is that large\namounts of training data are needed. Furthermore,\nthe performance of the SMT system improves if\nthis data is selected from a similar topic and from\na similar genre. Since this is not possible for many\nreal-world scenarios, one approach to overcome\nthis problem is to use all available data to train a\ngeneral system and to adapt the system using in-\ndomain training data.\nFactored translation models as presented in\nKoehn and Hoang (2007) are able to tightly in-\ntegrate additional knowledge into a phrase-based\nstatistical machine translation system. In most\nc\r2010 European Association for Machine Translation.cases, the approach is used to incorporate linguis-\ntic knowledge, such as morphological, syntactic\nand semantic information. In contrast, we will use\nthe approach to integrate domain knowledge into\nthe system by introducing a corpus identifier (cor-\npus id) tag.\nUsing the corpus id as a target word factor en-\nables us to adapt the SMT system by introducting\ntwo new types of features in the log-linear model in\nphrase-based SMT systems. First, we will use rela-\ntive frequencies to model the generation of the cor-\npus id tags similar to the translation model features\nthat are used to model the generation of the target\nwords in a standard phrase-based system. Further-\nmore, we can use features comparable to the word\ncount and language model features to judge the\ngenerated sequence of corpus id tags. Moreover,\nusing the general framework of factored transla-\ntion models leads to a simple integration of this ap-\nproach into state-of-the-art phrase-based systems.\nThe remaining part of the paper is structured as\nfollows: First, we present some related work in the\nnext section. Afterwards, in Section 3, a motiva-\ntion for and an overview over the presented model\nis given. In the following two sections the new\ntypes of features are introduced. In Section 6 the\napproaches are evaluated and in the end a conclu-\nsion is given.\n2 Related Work\nIn recent years different methods were proposed\nto adapt translation systems to a domain. 
Some\nauthors adapted only the language model inspired\nby similar approaches in speech recognition (Bu-\nlyko et al., 2007). The main advantage of language\nmodel adaptation in contrast to translation model\nadaptation is that only monolingual in-domain data\nis needed.\n[EAMT May 2010 St Raphael, France]\nTo be able to adapt the translation model in these\nconditions, other authors tried to generated a syn-\nthetic parallel text by translating the monolingual\ncorpus with a baseline system and use this corpus\nto train a new system or to adapt the baseline sys-\ntem (Ueffing et al., 2007), (Schwenk and Senellart,\n2009), (Bertoldi and Federico, 2009).\nSnover et al. (2008) used cross-lingual informa-\ntion retrieval to find similar target language cor-\npora. This data was used to adapt the language\nmodel as well as to learn new possible translations.\nWu et al. (2008) presented an approach to adapt the\nsystem using hand-made dictionaries and monolin-\ngual source and target language text.\nIn cases where also in-domain parallel data is\navailable, authors also tried to adapt the translation\nmodel. Koehn and Schroeder (2007) adapted the\nlanguage model by linear and log-linear interpola-\ntion. Furthermore, they could improve the trans-\nlation performance using two translation models\nand combining them with the alternate decoding\npath model as described in Birch et al. (2007). Al-\nthough this approach does also use factored trans-\nlation models, the way they integrate the domain\nand the type of features they use is different from\nours.\nAn approach based on mixture models was pre-\nsented by Foster and Kuhn (2007). They tried\nto use linear and log-linear, language model and\ntranslation model adaptation. Furthermore, they\ntried to optimize the weights for the different do-\nmains on a development set as well as to set\nthe weights according to text distance measures.\nMatsoukas et al. (2009) also adapt the system by\nchanging the weights of the phrase pairs. In their\napproach this is done by assigning discriminative\nweights for the sentences of the parallel corpus.\nIn contrast, Hildebrand et al. (2005) proposed\na method to adapt the translation towards similar\nsentences which are automatically found using in-\nformation retrieval techniques.\nFactored translation models were introduced by\nKoehn and Hoang (2007) to enable the straight-\nforward integration of additional annotations at the\nword-level. This is done by representing a word by\na vector of factors for the different types of annota-\ntions instead of using only the word token. The ad-\nditional factors can be used to better judge the gen-\nerated output as well as to generate the target word\nfrom the other factors, if no direct translation of the\nsource word is possible. Factored translation mod-els are mainly used to incorporate additional lin-\nguistic knowledge, for example, part-of-speech in-\nformation. It could be shown that the performance\nof SMT systems can be improved by incorporating\nlinguistic information using this approach.\n3 Factored Domain Model\nIn this section, we will first give a short motiva-\ntion for modeling the domain of the training data.\nAfterwards, we will describe how to introduce the\ndomain information into a phrase-based translation\nmodel using the framework of factored translation\nmodels.\n3.1 Motivation\nIn the phrase-based translation approach every\ntraining sentence is equally important for gener-\nating the translations. 
In many cases this simpli-\nfication is acceptable, but it no longer holds if the\ntraining corpus consists of an in-domain and out-\nof-domain set. In this case, the information learned\nfrom the in-domain set should be more important\nthan the one from the out-of-domain set.\nThe simplification mentioned before does lead\nto many translation errors, if the size of the in-\ndomain training data is small compared to the out-\nof-domain data, which is the case in most applica-\ntions. In these cases, a small amount of in-domain\ntraining data is not able to improve the translations\nquality as much as possible. To be able to make\nbetter use of examples, the in-domain translations\nshould be more important in the training process\nthan the ones from out-of-domain data.\nFor example, for the German-English transla-\ntion task, the biggest available parallel corpus are\nthe Proceedings of the European Parliament. In\nthis context, some words have different English\ntranslations than they would have in a news docu-\nment. If all sentences are treated equally, the prob-\nability of the translations specific to the proceed-\nings would be more probable, since they were seen\nmore often.\n3.2 Model\nTo be able to overcome the problems mentioned\nbefore, we tried to model the influence of the in-\ndomain and out-of-domain data explicitly. To be\nable to do this, we store with every phrase pair the\ncorpus it is extracted from. Using this information,\nthe phrase pairs are no longer equally important,\nbut we can, for example, prefer phrase pairs that\nFigure 1: Example of German to English translation with corpus id factors\nwere extracted from the in-domain corpus.\nTo model this idea we used the general frame-\nwork of factored translations models. In this\nframework we add one factor to the target words\nrepresenting the corpus id.\nThe resulting representation of an example sen-\ntence is shown in Figure 1. In the example we use\ntwo corpus ids. One for the in-domain and one\nfor the out-of-domain part of the corpus. Phrases\nextracted from the out-of-domain corpus, gener-\nate the OUT factor on the target side. In contrast,\nphrase pairs learned from the in-domain part will\ngenerate an INfactor on the target side.\nMany phrase pairs occur in different parts of\nthe corpus. For example, the phrase pair Ein # A\nshown in the example in Figure 1 does also occur\nin the out-of-domain part of the corpus. In these\ncases, both phrase pairs will be extracted and the\ndecoder will select one of them depending on the\nmodels described in the following.\nWith this approach it is possible to see which\nparts of the translation are learned from the in-\ndomain training examples and which parts are\ntranslated by using phrase pairs from the out-of-\ndomain corpus. This information can then be used\nto judge the quality of the translation. That means,\ntranslations which are generated from in-domain\nphrase pairs will more probably be a better trans-\nlations than the ones generated merely by phrase\npairs extracted from the out-of-domain corpus.\nTo be able to model this, we add two types of\nfeatures to the log-linear model used in a phrase-\nbased translation system. The first one that we call\ntheDomain Factors Translation Model , models the\nprobability that a sequence of corpus id tags is gen-\nerated. As it is done for the translation model of\nwords, we use features based on relative frequen-\ncies to model this probability. 
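As a concrete illustration of this representation, the following sketch annotates extracted phrase pairs with the corpus they were found in. The data structures and the toy phrase pairs are invented for illustration and do not reflect the actual phrase-table format of our system.

```python
from collections import Counter

# Toy phrase pairs annotated with the corpus they were extracted from:
# "IN" = in-domain (here: news-commentary), "OUT" = out-of-domain (here: EPPS).
phrase_pairs = [
    ("im Osten", "in the east", "IN"),
    ("im Osten", "in Eastern Europe", "OUT"),
    ("im Osten", "in the east", "OUT"),
    ("Ein", "A", "IN"),
    ("Ein", "A", "OUT"),
]

# cooc(s, t, d): co-occurrence counts that also record the corpus id.
# These are the counts the domain factor features are built from.
cooc_std = Counter((s, t, d) for s, t, d in phrase_pairs)
print(cooc_std[("im Osten", "in the east", "IN")])  # 1

# A translation hypothesis can then be inspected for the corpus each phrase
# pair came from, i.e. the tag sequence IN OUT IN ... of Figure 1.
hypothesis = [("Ein", "A", "IN"), ("im Osten", "in the east", "IN")]
print([d for _, _, d in hypothesis])  # ['IN', 'IN']
```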
The different features are described in detail in the next section.
A second group of features is used to judge the corpus id tag sequence, similar to the target language model used in SMT systems. For example, we count the number of in-domain tags and use this as a Domain Factors Sequence Model. Different approaches to model this probability will be described in Section 5.
Since we use the general framework of factored translation models, the weights for these features can be optimized on the development data during the training of the log-linear model using, for example, Minimum Error Rate training. The resulting weights prefer in-domain phrase pairs in a way that leads to the best translation performance on the development data.
4 Domain Factors Translation Model
The domain factors translation model is used to describe the probability that a sequence of corpus id tags is generated. If we look at the example mentioned before, the features of the model should model the probability of generating the sequence IN OUT IN IN IN IN IN IN IN if the input sentence is Ein blauer Bogen (demokratischer) Staaten im Osten.
As mentioned before, this is similar to the phrase translation model in state-of-the-art SMT approaches. In most cases, this is described by a log-linear combination of four different probabilities. First, the probability of the target phrase given the source phrase, P(t|s), is approximated by the relative frequency

P(t \mid s) = \frac{\mathrm{cooc}(s, t)}{\mathrm{cooc}(s, \cdot)} \quad (1)

where \mathrm{cooc}(s, t) is the number of co-occurrences of the source phrase s and the target phrase t in the parallel training corpus, and \mathrm{cooc}(s, \cdot) is the number of phrase pairs extracted from the parallel corpus with source phrase s.
The second probability, P(s|t), the probability of the source phrase given the target phrase, is approximated analogously:

P(s \mid t) = \frac{\mathrm{cooc}(s, t)}{\mathrm{cooc}(\cdot, t)} \quad (2)

In addition, the lexical translation probabilities of both directions are used in a default configuration of an SMT system.
In our factored model we no longer have only the co-occurrence count depending on the source and target phrase, \mathrm{cooc}(s, t), but, in addition, a co-occurrence count depending on three parameters, \mathrm{cooc}(s, t, d), where d is the sequence of corpus id tags. Consequently, we can extend the existing probabilities by three more possible features.
First, we can use the probability of the corpus id tags given the source and target phrase, P(d|s, t), which can be approximated analogously to the existing translation probabilities by

P(d \mid s, t) = \frac{\mathrm{cooc}(s, t, d)}{\mathrm{cooc}(s, t, \cdot)} \quad (3)

Secondly, we can also use the probability of the target phrase given the source phrase and the domain tag sequence, P(t|s, d). Since we cannot extract a phrase pair partly from one corpus and partly from another one, the corpus id tags for all words of one phrase pair are the same. Consequently, this probability equals the probability of the target phrase given the source phrase, restricted to the phrases extracted from the corpus indicated by the corpus id tags. It can be approximated by:

P(t \mid s, d) = \frac{\mathrm{cooc}(s, t, d)}{\mathrm{cooc}(s, \cdot, d)} \quad (4)

Finally, we can define the probability with switched roles of the source and target phrase, P(s|t, d). This can be approximated by:

P(s \mid t, d) = \frac{\mathrm{cooc}(s, t, d)}{\mathrm{cooc}(\cdot, t, d)} \quad (5)

5 Domain Factors Sequence Model
In the last section we described how to model the generation of the corpus id tags.
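Before turning to the sequence models, the sketch below illustrates how the domain-factored relative frequencies of Equations (3)–(5) could be computed from corpus-id-annotated co-occurrence counts; the standard scores (1)–(2) are obtained the same way without the corpus id. It is a simplified illustration with an assumed data layout, not the Moses-based extraction pipeline used in this work.

```python
from collections import Counter

def domain_factor_scores(phrase_pairs):
    """Relative-frequency features of Eqs. (3)-(5) from (source, target, corpus_id) tuples."""
    cooc_std = Counter((s, t, d) for s, t, d in phrase_pairs)  # cooc(s, t, d)
    cooc_st = Counter((s, t) for s, t, d in phrase_pairs)      # cooc(s, t, .)
    cooc_sd = Counter((s, d) for s, t, d in phrase_pairs)      # cooc(s, ., d)
    cooc_td = Counter((t, d) for s, t, d in phrase_pairs)      # cooc(., t, d)

    scores = {}
    for (s, t, d), c in cooc_std.items():
        scores[(s, t, d)] = {
            "p_d_given_st": c / cooc_st[(s, t)],  # Eq. (3)
            "p_t_given_sd": c / cooc_sd[(s, d)],  # Eq. (4)
            "p_s_given_td": c / cooc_td[(t, d)],  # Eq. (5)
        }
    return scores

pairs = [
    ("im Osten", "in the east", "IN"),
    ("im Osten", "in Eastern Europe", "OUT"),
    ("im Osten", "in the east", "OUT"),
]
for entry, feats in sorted(domain_factor_scores(pairs).items()):
    print(entry, feats)
```

In the actual system these scores enter the log-linear model as additional phrase-table features whose weights are tuned on the development data.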
After generating\nthe corpus id tag sequence, one main advantage of\nthis approach is that we are able to introduce mod-\nels to judge the different possible domain tag se-\nquences for a given source sentence.\nIn the example given in Figure 1 another possi-\nble translation of the source sentence would gener-\nate the tag sequence IN OUT OUT IN IN IN OUT\nOUT OUT OUT . By only looking at the corpus id\nsequence we should prefer the translation shown\nin the figure, since it uses more phrase pairs that\noccur in the in-domain corpus. To model this we\npropose two features.\nIn contrast to the translation model, in this case\nwe cannot simply extend the language model ap-\nproach used for the target words. This wouldmean, we train a language model on the corpus id\ntag sequence and then use this language model to\nevaluate the tag sequence. The problem is that a\ntraining sentence is always only from one docu-\nment. Consequently, all words have got the same\ncorpus id tag. But in the test case, we do not only\nwant to generate sentences using phrase pairs ex-\ntracted from the same corpus. Consequently, there\nis no corpus to train a language model and we have\nto use different types of models.\nTherefore, we use a unigram model in the ex-\nperiments, although the framework supports gen-\neral sequence models. One possibility is to do\nthis at phrase level leading to a model similar\nto the phrase count model used in nearly every\nphrase-based SMT decoder. Instead of counting\nall phrases we can just count the phrases which\nhave got in-domain corpus id tags. In the exam-\nple shown in Figure 1 this would lead to a value of\n4 for this feature.\nAnother approach is to use a in-domain word\ncount feature similar to the already existing word\ncount feature. We evaluated both types of feature\nand present the results in Section 6.\n6 Evaluation\nWe evaluated our approach to adapt an SMT sys-\ntem on the German-English translation task. For\nthis language pair, the biggest parallel corpus\navailable are the Proceedings of the European Par-\nliament (EPPS).\nThe first system we built was designed to trans-\nlate documents from the news-commentary do-\nmain. As test set we used the test set from the\nWMT Evaluation in 2007. As parallel training data\nwe used the EPPS corpus as well as an in-domain\nnews-commentary corpus with about 1M words.\nIn contrast, the corpus from the EPPS domain has\nabout 39M words.\nIn a preprocessing step we cleaned the data and\nperformed a compound splitting on the German\ntext based on the frequency method described in\nKoehn et al. (2003). We generated a word align-\nment for the parallel corpus using a discrimina-\ntive approach as described in Niehues and V ogel\n(2008) trained on a set of 500 hand-aligned sen-\ntences. Afterwards, the phrase pairs were extracted\nusing the training scripts from the Moses package\n(Koehn et al., 2007).\nWe use two language models in our SMT sys-\ntem. The first one, a general language model, was\ntrained on the English Gigaword corpus. In addi-\ntion, we use a second one, trained only on the En-\nglish part of the parallel in-domain corpus. Using\nthis additional language model, our baseline sys-\ntem was already partly adapted to the target do-\nmain.\nTo be able to model the quite difficult reorder-\ning between German and English we used a part-\nof-speech based reordering model as described in\nRottmann and V ogel (2007) and Niehues and Kolss\n(2009). 
In this approach reordering rules based on\npart-of-speech tags are learned from the parallel\ncorpus. For every test sentence, different possi-\nble reorderings of the source sentence are encoded\nin a word lattice. Then the decoder translates this\nlattice instead of the original input sentence.\nWe use a phrase-based decoder as described in\nV ogel (2003) using the language models, phrase\nscores as well as a word count and phrase count\nmodel. The optimization was done by MER train-\ning described in Venugopal et al. (2005).\nWe performed a second series of experiments on\nthe translation task of lectures from German to En-\nglish. The system was trained on the data from the\nEuropean Parliament, the news-commentary data,\nGerman-English BTEC data and a small amount\nof translated lectures. The in-domain corpus con-\ntained only around 210K words.\nThe system was built similar to the systems\nin the other experiments except to some small\nchanges due to speech translation. Instead of doing\na separate compound splitting, we used the same\nsplitting as it was used by the speech recognizer.\nSince the output of the speech recognition system\nis lower-cased, we lower-cased the source part of\nthe phrase table. In the case where this lead to two\nidentical phrase pairs, we kept both.\n6.1 Results for News Task\nWe first evaluated our approach on the task of\ntranslating documents from the news-commentary\ndomain. We performed experiments to analyze the\ninfluence of the domain sequence model and a sec-\nond group of experiments to look at the domain\ntranslation model.\nThe results using different sequence models are\nshown in Table 1. The baseline system does not\nuse any adaptation. Then we added a second in-\ndomain target language mode. In addition, the\nother three systems use the domain translation\nmodel as described in Section 4 using only the do-Table 1: Different sequence models for domain\nfactors (BLEU)\nSystem Dev Test\n1 Baseline 25.90 29.03\n2 (1) + LM Adaptation 26.68 29.24\n3 (2) + Domain Rel. Frequency 26.80 29.21\n4 (3) + Word Count Model 27.03 29.63\n5 (3) + Phrase Count Model 27.09 29.54\nmain relative frequency as introduced in Equation\n3.\nThe first system does not apply the domain se-\nquence model. As it can be seen in the table, in\nthis case the approach could not improve the trans-\nlation quality compared to the baseline system.\nUsing one of the two different types of sequence\nmodeling as described in Section 5 could improve\nthe performance by 0.3 or 0.4 BLEU points com-\npared to the system where only the LM is adapted.\nIf we compare both sequence models they perform\nquite similar.\nTable 2: Different translation models for domain\nfactors (BLEU)\nSystem Dev Test\n1 Baseline 25.90 29.03\n2 (1) + LM Adaptation 26.68 29.24\n3 (2) + Word Count Model 26.13 29.17\n4 (3) + Domain Frequency 27.03 29.63\n5 (3) + Target Frequency 27.00 29.51\n6 (3) + Source Frequency 26.95 29.84\n7 (3) + All 27.07 29.69\nAfter taking a look at the sequence model, we\nevaluated the different approaches to model the\ngeneration of the corpus id tags. The results us-\ning different translation model scores for the do-\nmain factors are displayed in Table 2. In this ex-\nperiments we always used the word count model\nto judge the corpus id sequence.\nThe first experiment, using only a domain se-\nquence model and no domain translation model\ndid not lead to any improvement in the transla-\ntion quality. Then we used the scores introduced\nin Equations 3, 4 and 5 separately. 
This means,\nthat system 4 is equal to system 4 in Table 1. On\nthe development set, this did lead to quite similar\nresults, while the results on the test set vary a little\nFigure 2: Example translations with and without domain adaptation\nInput: Ein blauer Bogen (demokatischer) Staaten im Osten, ...\nReference: An arc of blue (Democratic) states in the East, ...\nBaseline: A blue sheet (democratic) countries in Eastern Europe, ...\nAdapted: A blue arc (democratic) states in the east, ...\nbit more. But all models lead to an improvement\nof translation quality compared to the baseline sys-\ntem.\nIn a last experiment we use all three domain\ntranslation model scores. With the resulting sys-\ntem, we get the best performance on the develop-\nment data, but a slightly worse performance on the\ntest set than by using the source frequency only.\n6.2 Results for Lecture Task\nTable 3: Evaluation of the speech translation sys-\ntem (BLEU)\nSystem Dev Test\n1 LM Adaptation 36.93 29.84\n2 (1) + Source Frequency 37.90 31.12\n3 (1) + Target Frequency 37.63 30.73\n4 (1) + Domain Frequency 37.28 30.16\n5 (1) + All 37.74 31.53\n6 (2) + All Sep. corpus ids 38.01 31.51\nA second group of experiments was performed\non the lecture translation task. The results for these\nexperiments are displayed in Table 3.\nWe use a baseline system that is already adapted\nto the target domain by an in-domain target lan-\nguage model. Then we add the word count domain\nsequence model and the different domain transla-\ntions model scores separately. In the next config-\nuration we combined the different domain trans-\nlation models. The system using the Source Fre-\nquency preformed best on the development set and\nled to an improvement of 1.3 BLEU points on the\ntest set compared to the baseline system.\nIn a last system, we extended the best preform-\ning system to not only use two corpus id tags for\nthe in-domain and out-of-domain part of the cor-\npus, but we use a separate one for every part of\nthe corpus. This led to four corpus id tags ( EPPS ,\nNEWS ,BTEC ,LECTURE ). Then we use a differ-\nent word count feature for each of this tags. If we\nlook at the weights assigned to the different fea-\ntures after the optimization, we see that the sys-tem prefers phrases extracted from the lecture do-\nmain most. Then, translations from the BTEC do-\nmain are preferred over phrases from the EPPS and\nNEWS domain. These additional features could\nimprove the performance even more by 0.4 BLEU\npoints leading to a BLEU score of 31.51 BLEU on\nthe test set.\n6.3 Examples\nFigure 2 shows different translations of the exam-\nple sentence introduced in Figure 1. The first trans-\nlation was generated by a baseline system that does\nnot use the adaptation technique and the second\ntranslation by a system using the technique.\nThe translations of both systems differ in three\nwords. Applying the adaptation model could im-\nprove the lexical selection in these cases. One ex-\nample is the German word Osten (Engl. East). In\nthe big out-of-domain corpus containing the Pro-\nceedings of the European Parliament, quite often\nthis word is used as abbreviation for Eastern Eu-\nrope. But in the example sentence, this is a wrong\ntranslation. In this case, we get a better transla-\ntion, if we also use the information, how words are\ntranslated differently in the in-domain corpus.\nFurthermore, the example shows the impor-\ntance of combining the better matching in-domain\nknowledge with the broader knowledge of the\nwhole corpus. 
Since there is no translation for\nblauer in the in-domain corpus, we use the out-of-\ndomain phrase pair. For other phrases for which in-\ndomain and out-of-domain phrase pairs are avail-\nable, we prefer the better matching in-domain\nphrase pairs.\n7 Conclusion\nWe presented a new approach to adapt a phrase-\nbased translation system using factored translation\nmodels. Consequently, this approach is easy to\nintegrate into state-of-the-art phrase-based transla-\ntions systems. Instead of incorporating linguistic\nknowledge with the framework, we used it to inte-\ngrate domain knowledge by introducing the corpus\nid as additional factor.\nUsing this approach we can prefer phrase pairs\nfrom a specific domain. Therefore, we introduced\na model to estimate the probability of the phrase\npair belonging to a certain domain and a model\nto judge the generated sequence of corpus id tags.\nThe weights of the different models can be op-\ntimized using MER training leading to the best\ntranslation quality on the development data.\nWe could show an improvement by up to 1\nBLEU point on two different tasks when translat-\ning from German to English.\nIn the future, we will investigate more complex\ndomain sequence models, to judge better where\nto use in-domain and out-of-domain phrase pairs.\nFurthermore, we will try to automatically split the\ntraining corpus into segments according to differ-\nent topics to obtain more fine-grained corpus ids.\nAcknowledgement\nThis work was realized as part of the Quaero Pro-\ngramme, funded by OSEO, French State agency\nfor innovation.\nReferences\nBertoldi, Nicola and Marcello Federico. 2009. Do-\nmain Adaptation for Statistical Machine Translation\nwith Monolingual Resources. In Fourth Workshop\non Statistical Machine Translation , Athens, Greece.\nBirch, Alexandra, Miles Osborne, and Philipp Koehn.\n2007. CCG Supertags in Factored Satistical Ma-\nchine Translation. In Second Workshop on Statistical\nMachine Translation , Prague, Czech Republic.\nBrown, Peter F., Stephen A. Della Pietra, Vincent\nJ. Della Pietra, and Robert L. Mercer. 1993. The\nMathematics of Statistical Machine Translation: Pa-\nrameter Estimation. Computational Linguistics ,\n19(2):263–311.\nBulyko, Ivan, Spyros Matsoukas, Richard Schwartz,\nLong Nguyen, and John Makhoul. 2007. Lan-\nguage Model Adaptation in Machine Translation\nfrom Speech. In ICASSP 2007 , Honolulu, USA.\nFoster, George and Roland Kuhn. 2007. Mixture-\nModel Adaptation for SMT. In ACL 2007 , Prague,\nCzech Republic.\nHildebrand, Silja, Matthias Eck, Stephan V ogel, and\nAlex Waibel. 2005. Adapatation of the Transla-\ntion Model for Statistical Machine Translation based\non Information Retrieval. In EAMT 2005 , Budapest,\nHungary.\nKoehn, Philipp and Hieu Hoang. 2007. Factored\nTranslation Models. In EMNLP-CoNLL , Prague,\nCzech Republic.Koehn, Philipp and Josh Schroeder. 2007. Experi-\nments in Domain Adaptation for Statistical Machine\nTranslation. In Second Workshop on Statistical Ma-\nchine Translation , Prague, Czech Republic.\nKoehn, Philipp, Franz Josef Och, and Daniel Marcu.\n2003. Statistical Phrase-Based Translation. In\nHLT/NAACL 2003 .\nKoehn, P., H. Hoang, A. Birch, Ch. Callison-Burch,\nM. Federico, N. Bertoldi, B. Cowan, W. Shen, Ch.\nMoran, R. Zens, Ch. Dyer, O. Bojar, A. Constantin,\nand E. Herbst. 2007. Moses: Open Source Toolkit\nfor Statistical Machine Translation. In ACL 2007,\nDemonstration Session , Prague, Czech Republic.\nMatsoukas, Spyros, Antti-Veikko I. Rosti, and Bing\nZhang. 2009. 
Discriminative Corpus Weight Es-\ntimation for Machine Translation. In Conference on\nEmpirical Methods on Natural Language Processing\n(EMNLP 2009) , Singapore.\nNiehues, Jan and Muntsin Kolss. 2009. A POS-Based\nModel for Long-Range Reorderings in SMT. In\nFourth Workshop on Statistical Machine Translation\n(WMT 2009) , Athens, Greece.\nNiehues, Jan and Stephan V ogel. 2008. Discriminative\nWord Alignment via Alignment Matrix Modeling.\nInThird Workshop on Statistical Machine Transla-\ntion, Columbus, USA.\nRottmann, Kay and Stephan V ogel. 2007. Word\nReordering in Statistical Machine Translation with\na POS-Based Distortion Model. In TMI, Sk¨ovde,\nSweden.\nSchwenk, Holger and Jean Senellart. 2009. Transla-\ntion Model Adapation for an Arabic/French News\nTranslation System by Lightly-Supervised Training.\nInMT Summit XII , Ottawa, Canada.\nSnover, Matthew, Bonnie Dorr, and Richard Schwartz.\n2008. Language and Translation Model Adaptation\nusing Comparable Corpora. In Conference on Em-\npirical Methods on Natural Language Processing\n(EMNLP 2008) , Honolulu, USA.\nUeffing, Nicola, Gholamerza Haffari, and Anoop\nSarkar. 2007. Semi-Supervised Model Adaptation\nfor Statistical Machine Translation. Machine Trans-\nlation , 21(2):77–94.\nVenugopal, Ashish, Andreas Zollman, and Alex\nWaibel. 2005. Training and Evaluation Error Min-\nimization Rules for Statistical Machine Translation.\nInWorkshop on Data-drive Machine Translation and\nBeyond (WPT-05) , Ann Arbor, MI.\nV ogel, Stephan. 2003. SMT Decoder Dissected: Word\nReordering. In Int. Conf. on Natural Language Pro-\ncessing and Knowledge Engineering , Beijing, China.\nWu, Hua, Haifend Wang, and Chengqing Zong. 2008.\nDomain Adaptation for Statistical Machine Transla-\ntion with Domain Dictionary and Monolingual Cor-\npora. In Coling 2008 , Manchester, UK.", "main_paper_content": null }
{ "decision": "Unknown", "reviews": [] }
0
0
[]
[]
{ "id": "Z84UDD8BAw2M", "year": null, "venue": "EAMT 2015", "pdf_link": "https://aclanthology.org/W15-4917.pdf", "forum_link": "https://openreview.net/forum?id=Z84UDD8BAw2M", "arxiv_id": null, "doi": null }
{ "title": "Stripping Adjectives: Integration Techniques for Selective Stemming in SMT Systems", "authors": [ "Isabel Slawik", "Jan Niehues", "Alex Waibel" ], "abstract": null, "keywords": [], "raw_extracted_content": "Stripping Adjectives: Integration Techniques for Selective Stemming in\nSMT Systems\nIsabel Slawik Jan Niehues\nInstitute for Anthropomatics and Robotics\nKIT - Karlsruhe Institute of Technology, Germany\[email protected] Waibel\nAbstract\nIn this paper we present an approach to re-\nduce data sparsity problems when translat-\ning from morphologically rich languages\ninto less inflected languages by selectively\nstemming certain word types. We de-\nvelop and compare three different integra-\ntion strategies: replacing words with their\nstemmed form, combined input using al-\nternative lattice paths for the stemmed and\nsurface forms and a novel hidden combina-\ntion strategy, where we replace the stems in\nthe stemmed phrase table by the observed\nsurface forms in the test data. This allows\nus to apply advanced models trained on the\nsurface forms of the words.\nWe evaluate our approach by stem-\nming German adjectives in two\nGerman→English translation scenar-\nios: a low-resource condition as well as a\nlarge-scale state-of-the-art translation sys-\ntem. We are able to improve between 0.2\nand 0.4 BLEU points over our baseline and\nreduce the number of out-of-vocabulary\nwords by up to 16.5%.\n1 Introduction\nStatistical machine translation (SMT) is currently\nthe most promising approach to automatically\ntranslate text from one natural language into an-\nother. While it has been successfully used for\na lot of languages and applications, many chal-\nlenges still remain. Translating from a morpholog-\nically rich language is one such challenge where\nc�2015 The authors. This article is licensed under a Creative\nCommons 3.0 licence, no derivative works, attribution, CC-\nBY-ND.the translation quality of modern systems is often\nstill not sufficient for many applications.\nTraditional SMT approaches work on a lexical\nlevel, that is every surface form of a word is treated\nas its own distinct token. This can create data spar-\nsity problems for morphologically rich languages,\nsince the occurrences of a word are distributed over\nall its different surface forms. This problem be-\ncomes even more apparent when translating from\nan under-resourced language, where parallel train-\ning data is scarce.\nWhen we translate from a highly inflected lan-\nguage into a less morphologically rich language,\nnot all syntactic information encoded in the surface\nforms may be needed to produce an accurate trans-\nlation. For example, verbs in French must agree\nwith the noun in case and gender. When we trans-\nlate these verbs into English, case and gender in-\nformation may be safely discarded.\nWe therefore propose an approach to overcome\nthese sparsity problems by stemming different\nmorphological variants of a word prior to transla-\ntion. This allows us to not only estimate transla-\ntion probabilities more reliably, but also to trans-\nlate previously unseen morphological variants of\na word, thus leading to a better generalization of\nour models. To fully maximize the potential of our\nSMT system, we looked at three different integra-\ntion strategies. 
We evaluated hard decision stem-\nming, where all adjectives are replaced by their\nstem, as well as soft integration strategies, where\nwe consider the words and their stemmed form as\ntranslation alternatives.\n2 Related Work\nThe specific challenges arising from the transla-\ntion of morphologically rich languages have been\nwidely studied in thefield of SMT. The factored129\ntranslation model (Koehn and Hoang, 2007) en-\nriches phrase-based MT with linguistic informa-\ntion. By translating the stem of a word and its\nmorphological components separately and then ap-\nplying generation rules to form the correct surface\nform of the target word, it is possible to generate\ntranslations for surface forms that have not been\nseen in training.\nTalbot and Osborne (2006) address lexical\nredundancy by automatically clustering source\nwords with similar translation distributions,\nwhereas Yang and Kirchhoff (2006) propose\na backoff model that uses increasing levels of\nmorphological abstractions to translate previously\nunseen word forms.\nNiehues and Waibel (2011) present quasi-\nmorphological operations as a means to translate\nout-of-vocabulary (OOV) words. The automati-\ncally learned operations are able to split off po-\ntentially inflected suffixes, look up the translation\nfor the base form using a lexicon of Wikipedia1ti-\ntles in multiple languages, and then generate the\nappropriate surface form on the target side. Sim-\nilar operations were learned for compound parts\nby Macherey et al. (2011).\nHardmeier et al. (2010) use morphological re-\nduction in a German→English SMT system by\nadding the lemmas of every word output as a by-\nproduct of compound splitting as an alternative\nedge to input lattices. A similar approach is used\nby Dyer et al. (2008) and Wuebker and Ney (2012).\nThey used word lattices to represent different\nsource language alternatives for Arabic→English\nand German→English respectively.\nWeller et al. (2013a) employ morphological\nsimplification for their French→English WMT\nsystem, including replacing inflected adjective\nforms with their lemma using hand-written rules,\nand their Russian→English (Weller et al., 2013b)\nsystem, removing superfluous attributes from the\nhighly inflected Russian surface forms. Their sys-\ntems are unable to outperform the baseline system\ntrained on the surface forms. Weller et al. argue\nthat human translators may prefer the morpholog-\nically reduced system due to better generalization\nability. Their analysis showed the Russian system\noften produces an incorrect verb tense, which in-\ndicates that some morphological information may\nbe helpful to choose the right translation even if the\ninformation seems redundant.\n1http://www.wikipedia.org3 Stemming\nIn order to address the sparsity problem, we try\nto cluster words that have the same translation\nprobability distribution, leading to higher occur-\nrence counts and therefore more reliable transla-\ntion statistics. Because of the respective morpho-\nlogical properties of our source and target lan-\nguage, word stems pose a promising type of clus-\nter. Moreover, stemming alleviates the OOV prob-\nlem for unseen morphological variants. Because of\nthese benefits, we chose stem clustering in this pa-\nper, however, our approach can work on different\ntypes of clusters, e.g. synonyms.\nMorphological stemming prior to translation has\nto be done carefully, as we are actively discarding\ninformation. 
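To make the count-pooling argument concrete, the toy sketch below groups inflected forms of a German adjective under a shared stem. The forms, counts and stemming rule are invented for illustration, and the comparative ambiguity discussed below is ignored here.

```python
from collections import Counter

# Invented counts of surface forms of the adjective "schön" in a small corpus.
surface_counts = Counter({
    "schöne": 3, "schönem": 1, "schönen": 4, "schöner": 2, "schönes": 1,
})

def toy_stem(adjective):
    # Illustrative only: strip one of the five German adjective endings.
    for suffix in ("em", "en", "er", "es", "e"):
        if adjective.endswith(suffix) and len(adjective) > len(suffix) + 2:
            return adjective[: -len(suffix)]
    return adjective

stem_counts = Counter()
for form, count in surface_counts.items():
    stem_counts[toy_stem(form)] += count

print(surface_counts.most_common(1))  # [('schönen', 4)] - sparse per-form counts
print(stem_counts)                    # Counter({'schön': 11}) - pooled count
```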
Indiscriminately stemming the whole\nsource corpus hurts translation performance, since\nstemming algorithms make mistakes and often too\nmuch information is lost.\nAdding the stem of every word as an alterna-\ntive to our source sentence greatly increases our\nsearch space. Arguably the majority of the time\nwe need the surface form of a word to make an in-\nformed translation decision. We therefore propose\nto keep the search space small by only stemming\nselected word classes which have a high diversity\nin inflections and whose additional morphological\ninformation content can be safely disregarded.\nFor our use case of translating from German to\nEnglish, we chose to focus only on stemming ad-\njectives. Adjectives in German can havefive dif-\nferent suffixes, depending on the gender, number\nand case of the corresponding noun, whereas in\nEnglish adjectives are only rarely inflected. We\ncan therefore discard the information encoded in\nthe suffix of a German adjective without losing any\nvital information for translation.\n3.1 Degrees of Comparison\nWhile we want to remove gender, number and case\ninformation from the German adjective, we want\nto preserve its comparative or superlative nature.\nIn addition to its base form (e.g.sch ¨on[pretty]),\na German adjective can have one offive suffixes\n(-e, -em, -en, -er, -es). However, we cannot sim-\nply remove all suffixes usingfixed rules, because\nthe comparative base form of an adjective is identi-\ncal to the inflected masculine, nominative, singular\nform of an attributive adjective.\nFor example, the inflected formsch ¨onerof the\nadjectivesch ¨onis used as an attributive adjective in130\nthe phrasesch ¨oner Mann[handsome man] and as\na comparative in the phrasesch ¨oner wird es nicht\n[won’t get prettier]. We can stem the adjective in\nthe attributive case to its base form without any\nconfusion (sch ¨on Mann), as we generate a form\nthat does not exist in proper German. However,\nwere we to apply the same stemming to the com-\nparative case, we would lose the degree of com-\nparison and still generate a valid German sentence\n(sch¨on wird es nicht[won’t be pretty]) with a dif-\nferent meaning than our original sentence. In or-\nder to differentiate between cases in which stem-\nming is desirable and where we would lose infor-\nmation, a detailed morphological analysis of the\nsource text prior to stemming is vital.\n3.2 Implementation\nWe used readily available part-of-speech (POS)\ntaggers, namely the TreeTagger (Schmid, 1994)\nand RFTagger (Schmid and Laws, 2008), for mor-\nphological analysis and stemming. In order to\nachieve accurate results, we performed standard\nmachine translation preprocessing on our corpora\nbefore tagging. We discarded exceedingly long\nsentences and sentence pairs with a large length\ndifference from the training data. Special dates,\nnumbers and symbols were normalized and we\nsmart-cased thefirst letter of every sentence. Typi-\ncally preprocessing for German also includes split-\nting up compounds into their separate parts. How-\never, this would confuse the POS taggers, which\nhave been trained on German text with proper\ncompounds. Furthermore, our compound splitting\nalgorithm might benefit from a stemmed corpus,\nproviding higher occurrence counts for individual\nword components. We therefore refrain from com-\npound splitting before tagging and stemming.\nWe only stemmed words tagged as attributive\nadjectives, since only they are inflected in Ger-\nman. 
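A rough sketch of this selective stemming step is given below. The tag names, the morphological fields and the suffix rules are illustrative assumptions, not the exact RFTagger tagset or the rules used in our system.

```python
def stem_adjectives(tokens):
    """Selectively stem attributive adjectives in a POS-tagged sentence.

    tokens: list of (word, coarse_tag, degree, lemma) tuples, e.g. produced by an
    RFTagger-style morphological analysis. Tag and field names are invented here.
    """
    suffixes = ("em", "en", "er", "es", "e")  # the five German adjective endings
    stemmed = []
    for word, tag, degree, lemma in tokens:
        if tag != "ADJA":                # only attributive adjectives are inflected
            stemmed.append(word)
        elif degree in ("comp", "sup"):  # keep the degree of comparison: strip only the ending
            stemmed.append(next((word[: -len(s)] for s in suffixes if word.endswith(s)), word))
        else:
            stemmed.append(lemma)        # positive forms: fall back to the lemma
    return stemmed

sentence = [("schöner", "ADJA", "pos", "schön"), ("Mann", "NN", "none", "Mann")]
print(stem_adjectives(sentence))  # ['schön', 'Mann']
```

In practice the lemma and the fine-grained morphological information come from the TreeTagger and RFTagger output rather than from fixed suffix lists.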
Predicative adjectives are not inflected and\ntherefore were left untouched. Since we want to\nretain the degree of comparison, we used thefine-\ngrained tags of the RFTagger to decide when and\nhow to stem. Adjectives tagged as comparative or\nsuperlative were stemmed through the use offixed\nrules. For all others, we used the lemma output by\nthe TreeTagger, since it is the same as the stem and\nwas already available in our system.\nFinally, our usual compound splitting (Koehn\nand Knight, 2003) was trained and performed on\nthe stemmed corpus.4 Integration\nAfter clustering the words into groups that can be\ntranslated in the same or at least in a similar way,\nthere are different possibilities to use them in the\ntranslation system. A naive strategy is to replace\neach word by its cluster representative, calledhard\ndecision stemming. However, this carries the risk\nof discarding vital information. Therefore we in-\nvestigated techniques to integrate both, the surface\nforms as well as the word stems, into the transla-\ntion system. In thecombined input, we add the\nstemmed adjectives as translation alternatives to\nthe preordering lattices. Since this poses problems\nfor the application of more advanced translation\nmodels during decoding, we propose the novelhid-\nden combinationtechnique.\n4.1 Hard Decision Stemming\nAssuming that the translation probabilities of the\nword stems can be estimated more reliably than\nthose of the surface forms, the most intuitive strat-\negy is to consequently replace each surface form\nby its stem. In our case, we replaced all adjec-\ntives with their stems. This has the advantage that\nafterwards the whole training pipeline can be per-\nformed in exactly the same manner as it is done\nin the baseline system. For tuning and testing,\nthe adjectives in the development and test data are\nstemmed and replaced in the same manner as in the\ntraining data.\n4.2 Combined Input\nMistakes made during hard decision stemming\ncannot be recovered. Soft integration techniques\navoid this pitfall by deferring the decision whether\nto use the stem or surface form of a word until de-\ncoding. We enable our system to choose by com-\nbining both the surface form based (default) phrase\ntable and the word stem based (stemmed) phrase\ntable log-linearly. The weights of the phrase scores\nare then learned during optimization.\nIn order to be able to apply both phrase tables\nat the same time, we need to modify the input of\nthe decoder. Our baseline system already uses pre-\nordering lattices, which encode different reorder-\ning possibilities of the source sentence. We re-\nplaced every edge in the lattice containing an ad-\njective by two edges: one containing the surface\nform and the other the word stem. This allows the\ndecoder to choose which word form to use depend-\ning on the word and its context.131\nFigure 1: Workflow for unstemming the PT.\n4.3 Hidden Combination\nWhile we are able to modify our phrase table to\nuse both surface forms and stems in the last strat-\negy, other models in our log-linear system suffer\nfrom the different types of source input. For ex-\nample, the bilingual language model (Niehues et\nal., 2011) is based on tokens of target words and\ntheir aligned source words. In training, we can use\neither the stemmed corpus or the original one, but\nduring decoding a mixture of stems and surface\nforms occurs. For the unknown word forms the\nscores will not be accurate and the performance\nof our model will suffer. 
Similar problems occur\nwhen using other translation models such as neu-\nral network based translation models.\nWe therefore developed a novel strategy to in-\ntegrate the word stems into the translation system.\nInstead of stemming the input tofit the stemmed\nphrase table, we modified the stemmed phrase ta-\nble so that it can be applied to the surface forms.\nThe workflow is illustrated in Figure 1. We ex-\ntracted all the stem mappings from the develop-\nment and test data and compiled a stem lexicon.\nThis maps the surface forms observed in the dev\nand test data to their corresponding stems. We\nthen applied this lexicon in reverse to our stemmed\nphrase table, in effect duplicating every entry con-\ntaining a stemmed adjective with the inflected form\nreplacing the stem. Afterwards this “unstemmed”\nphrase table is log-linearly combined with the de-\nfault phrase table and used for translation.\nThis allows us to retain our generalization won\nby using word clusters to estimate phrase proba-\nbilities, and still use all models trained on the sur-face forms. Using the hidden combination strat-\negy, stemming can easily be implemented into cur-\nrent state-of-the-art SMT systems without the need\nto change any of the advanced models beyond the\nphrase table. This makes our approach highly ver-\nsatile and easy to implement for any number of\nsystem architectures and languages.\n5 Experiments\nSince we expect stemming to have a larger impact\nin cases where training data is scarce, we evalu-\nated the three presented strategies on two different\nscenarios: a low-resource condition and a state-of-\nthe-art large-scale system. In both scenarios we\nstemmed German adjectives and translated from\nGerman to English.\nIn our low-resource condition, we trained an\nSMT system using only training data from the\nTED corpus (Cettolo et al., 2012). TED trans-\nlations are currently available for 107 languages2\nand are being continuously expanded. Therefore,\nthere is a high chance that a small parallel corpus\nof translated TED talks will be available in the cho-\nsen language.\nIn the second scenario, we used a large-scale\nstate-of-the-art German→English translation sys-\ntem. This system was trained on significantly more\ndata than available in the low-resource condition\nand incorporates several additional models.\n5.1 System Description\nThe low-resource system was trained only on the\nTED corpus provided by the IWSLT 2014 machine\ntranslation campaign, consisting of 172k lines. As\nmonolingual training data we used the target side\nof the TED corpus.\nThe large-scale system was trained on the Euro-\npean Parliament Proceedings, News Commentary,\nTED and Common Crawl corpora provided for the\nIWSLT 2014 machine translation campaign (Cet-\ntolo et al., 2014), encompassing 4.69M lines. For\nthe monolingual training data we used the target\nside of all bilingual corpora as well as the News\nShuffle and the Gigaword corpus.\nBefore training and translation, the data is pre-\nprocessed as described in Section 3.2. The noisy\nCommon Crawl corpus wasfiltered with an SVM\nclassifier as described by Mediani et al. (2011).\nAfter preprocessing, the parallel corpora are word-\naligned with the GIZA++ toolkit (Gao and V o-\n2http://www.ted.com/participate/translate132\ngel, 2008) in both directions. The resulting align-\nments are combined using thegrow-diag-final-and\nheuristic. The Moses toolkit (Koehn et al., 2007)\nis used for phrase extraction. 
For the large-scale\nsystem, phrase table adaptation combining an in-\ndomain and out-of-domain phrase table is per-\nformed (Niehues and Waibel, 2012). All transla-\ntions are generated by our in-house phrase-based\ndecoder (V ogel, 2003).\nWe used 4-gram language models (LMs) with\nmodified Kneser-Ney smoothing, trained with the\nSRILM toolkit (Stolcke, 2002) and scored in the\ndecoding process with KenLM (Heafield, 2011).\nAll our systems include a reordering model\nwhich automatically learns reordering rules based\non part-of-speech sequences and, in case of\nthe large-scale system, syntactic parse tree con-\nstituents to better match the target language word\norder (Rottmann and V ogel, 2007; Niehues and\nKolss, 2009; Herrmann et al., 2013). The resulting\nreordering possibilities for each source sentence\nare encoded in a lattice.\nFor the low-resource scenario, we built two sys-\ntems. One small baseline with only one phrase ta-\nble and language model, as well as aforementioned\nPOS-based preordering model, and an advanced\nsystem using an extended feature set of models\nthat are also used in the large-scale system. The\nextended low-resource and the large-scale system\ninclude the following additional models.\nA bilingual LM (Niehues et al., 2011) is used\nto increase the bilingual context during transla-\ntion beyond phrase boundaries. It is built on to-\nkens consisting of a target word and all its aligned\nsource words. We also used a 9-gram cluster LM\nbuilt on 100 automatically clustered word classes\nusing the MKCLS algorithm (Och, 1999).\nThe large-scale system also uses an in-domain\nLM trained on the TED corpus and a word-based\nmodel trained on 10M sentences chosen through\ndata selection (Moore and Lewis, 2010).\nIn addition to the lattice preordering, a lexical-\nized reordering model (Koehn et al., 2005) which\nstores reordering probabilities for each phrase pair\nis included in both extended systems.\nWe tune all our systems using MERT (Venu-\ngopal et al., 2005) against the BLEU score. Since\nthe systems have a varying amount of features, we\nreoptimized the weights for every experiment.\nFor the low-resource system, we used IWSLT\ntest 2012 as a development set and IWSLT testSystem Dev Test\nBaseline 28.91 30.25\nHard Decision 29.01 30.30\nCombined Input 29.13 30.47\nHidden Combination29.25 30.62\nTable 1: TED low-resource small systems results.\n2011 as test data. For the large-scale system, we\nused IWSLT test 2011 as development data and\nIWSLT test 2012 as test data.\nAll results are reported as case-sensitive BLEU\nscores calculated with one reference translation.\n5.2 Low-resource Condition\nThe results for the systems built only on the TED\ncorpus are summarized in Table 1 for the small sys-\ntem and Table 2 for the extended system. The base-\nline systems reach a BLEU score on the test set of\n30.25 and 31.33 respectively.\nIn the small system we could slightly improve\nto 30.30 using only stemmed adjectives. However,\nin the extended system the hard decision strategy\ncould not outperform the baseline. This indicates\nthat for words with sufficient data it might be better\nto translate the surface forms.\nAdding the stemmed forms as alternatives to the\npreordering lattice leads to an improvement of 0.2\nBLEU points over the small baseline system. In\nthe larger system with the extended features set,\nthe combined input performed better than the hard\ndecision stemming, but is still 0.1 BLEU points be-\nlow the baseline. 
With this strategy we do not tap\nthe full potential of our extended system, as there\nis still a mismatch between the combined input and\nthe training data of the advanced models.\nThe hidden combination strategy rectifies this\nproblem, which is reflected in the results. Using\nthe hidden combination we could achieve our best\nBLEU score for both systems. We could improve\nby almost 0.4 BLEU points over the small baseline\nsystem and 0.3 BLEU points on the system using\nextended features.\nSystem Dev Test\nBaseline 29.73 31.33\nHard Decision 29.74 30.84\nCombined Input29.9731.22\nHidden Combination 29.8731.61\nTable 2: TED extended features systems results.133\nSystem Dev Test\nBaseline 38.30 30.89\nHard Decision 38.25 30.82\nCombined Input38.65 31.10\nHidden Combination 38.40 31.08\nTable 3: IWSLT large-scale systems results.\n5.3 Large-scale System\nIn order to assess the impact of our stemming on\na state-of-the-art system, we tested our techniques\non a large-scale system using training data from\nseveral domains. The results of these experiments\nare summarized in Table 3. The baseline system\nachieved a BLEU score of 30.89 on the test set.\nAs in the low-resource condition, the hard deci-\nsion to use only the stems causes a slight drop in\nperformance. Given the large amount of training\ndata, the problem of having seen a word few times\nis much less severe than before.\nWhen we combine the inputs, we can improve\nthe translation quality to our best score of 31.10\nBLEU points. The hidden combination performs\nsimilarly. By using combined input or hidden com-\nbination, we achieved a gain of 0.2 BLEU points\nover the baseline.\n5.4 Further Analysis\nIn this work we have focused on selectively stem-\nming only a small subset of our input text, namely\nadjectives. We therefore do not expect to see a\nlarge difference in BLEU score in our systems and\nindeed the improvements, while existent, are mod-\nerate. It is a well known shortcoming of automatic\nmetrics that they cannot differentiate between ac-\nceptable translation alternatives and errors. Since\ntime and monetary constraints did not allow us to\nperform a full-scale human evaluation, we use the\nOOV rate and manual inspection to demonstrate\nthe benefits of our approach.\nFor a monolingual user of machine translation\nsystems, even an imperfect translation will be bet-ter than no translation at all. We therefore looked at\nthe out-of-vocabulary (OOV) rate of our systems.\n477 OOV words occurred in the test set of the\nlow-resource baseline. This means of the 1433\nlines in our test set, on average every third con-\ntained an untranslated word. With stemming we\nwere able to translate 79 of those words and re-\nduce the number of OOV words by 16.5%. Even in\nthe large-scale system, which is trained on a large\namount of data and therefore has an already low\nOOV rate, we achieved a decrease of 4%. Figure 2\nshows an example sentence where we managed to\ntranslate two previously OOV words using the hid-\nden combination strategy. Furthermore, stemming\ncan also improve our word choices as shown in the\nexample in Figure 3.\nSRC Aber es war sehr traurig .\nREF But it was very sad .\nBASE But it was really upset .\nH.C. But it was very sad .\nFigure 3: Example of improved word choice.\nStemming certain words in a corpus not only af-\nfects the translation of that word, but the whole\nsystem. 
For example, stemming changes the occurrence statistics of the stemmed words, and therefore the output of empirical algorithms such as compound splitting and word alignment is subject to change. By combining the stemmed and default phrase tables, we gave our decoder the chance to use a phrase from the stemmed phrase table even if the phrase contains no stemmed words. A manual evaluation of the output of the hidden combination system compared to the hard decision stemmed system showed that the difference was largely in word order, as exemplified in Figure 4.

SRC  Während Schimpansen von großen , furchteinflößenden Kerlen geführt werden , wird die Bonobo - Gesellschaft von ermächtigten Weibchen geführt .
REF  While chimpanzees are dominated by big , scary guys , bonobo society is run by empowered females .
BASE As chimpanzees by large , fear einflößenden guys are , the Bonobo-society led by ermächtigten females .
H.C. During the chimpanzees of big , scary guys are , the Bonobo is society of empowered females .
Figure 2: Example translations of the baseline and hidden combination low-resource systems. OOV phrases have been marked in bold.

SRC  Nun ja , eine Erleuchtung ist für gewöhnlich etwas , dass man findet weil man es irgendwo fallen gelassen hat .
REF  And you know , an epiphany is usually something you find that you dropped someplace .
H.D. Well , there is an epiphany usually , something that you can find because it has somewhere dropped .
H.C. Well , an epiphany is usually something that you can find because it has dropped somewhere .
Figure 4: Example of improved word order of the hidden combination over the hard decision system.

6 Conclusion
In this paper we addressed the problem of translating from morphologically rich languages into less inflected languages. The problem of low occurrence counts for surface forms and high out-of-vocabulary rates for unobserved surface forms can be alleviated by stemming words.
We showed that stemming has to be done carefully, since SMT systems are highly sensitive to lost information. Given our use case of German to English translation, we chose to only stem adjectives, which can have five suffixes depending on gender, number and case of the corresponding noun. We took special care to ensure comparative and superlative adjectives retained their degree of comparison after stemming.
As an alternative to the hard decision strategy, where every word is replaced by its stem, we proposed two soft integration techniques incorporating the stems and surface forms as alternative translation paths in the preordering lattices. State-of-the-art SMT systems consist of a log-linear combination of many advanced models. Combining the surface forms and word stems posed problems for models relying on source side tokens. We therefore developed a novel hidden combination technique, where the word stems in the phrase table are replaced by the observed surface forms in the test data. This allowed us to use the more reliably estimated translation probabilities calculated on the word stems in the decoder while simultaneously applying all our other models to the surface forms of the words.
We evaluated our approach on German→English translation in two scenarios, one low-resource condition and a large-scale state-of-the-art SMT system. Given the low-resource condition, we evaluated a small, basic system as well as a more sophisticated system using an extended feature set.
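To make the adjective-stemming step concrete, here is a deliberately naive sketch of stripping the inflectional endings from tokens tagged as attributive adjectives (STTS tag ADJA). It is an illustration only, not the stemmer used in the experiments, and in particular it does not implement the special handling of comparative and superlative forms mentioned above.

```python
INFLECTIONAL_ENDINGS = ("em", "en", "er", "es", "e")  # the five adjective endings

def stem_adjective(token: str) -> str:
    """Naively strip one inflectional ending from a German adjective."""
    for ending in INFLECTIONAL_ENDINGS:
        if token.endswith(ending) and len(token) - len(ending) >= 3:
            return token[: -len(ending)]
    return token

# (word, POS) pairs as produced by a tagger such as TreeTagger (format assumed).
tagged = [("die", "ART"), ("großen", "ADJA"), ("Hunde", "NN")]
print([stem_adjective(w) if pos == "ADJA" else w for w, pos in tagged])
# ['die', 'groß', 'Hunde']
```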
Using the hidden\ncombination strategy, we were able to outperform\nthe baseline systems in all three experiments by\n0.2 up to 0.4 BLEU points. While these improve-\nments may seem moderate, they were achieved\nsolely through the modification of adjectives.\nWe were also able to show that our systems\ngeneralized better than the baseline as evidenced\nby the OOV rate, which could be decreased by\n16.5% in the low-resource condition.Acknowledgments\nThe project leading to this application has received\nfunding from the European Union’s Horizon 2020\nresearch and innovation programme under grant\nagreement n◦645452.\nReferences\nCettolo, M., C. Girardi, and M. Federico. 2012. WIT3:\nWeb Inventory of Transcribed and Translated Talks.\nInProceedings of the 16th Annual Meeting of the Eu-\nropean Association for Machine Translation, Trento,\nItaly.\nCettolo, M., J. Niehues, S. St ¨uker, L. Bentivogli, and\nM. Federico. 2014. Report on the 11th IWSLT\nEvaluation Campaign, IWSLT 2014. InProceedings\nof the 11th International Workshop on Spoken Lan-\nguage Translation, Lake Tahoe, California, USA.\nDyer, C., S. Muresan, and P. Resnik. 2008. Generaliz-\ning Word Lattice Translation. InProceedings of the\n46th Annual Meeting of the ACL: Human Language\nTechnologies, Columbus, Ohio, USA.\nGao, Q. and S. V ogel. 2008. Parallel Implementations\nof Word Alignment Tool. InProceedings of the Soft-\nware Engineering, Testing, and Quality Assurance\nfor Natural Language Processing, Columbus, Ohio,\nUSA.\nHardmeier, C., A. Bisazza, M. Federico, and F.B.\nKessler. 2010. FBK at WMT 2010: Word Lattices\nfor Morphological Reduction and Chunk-based Re-\nordering. InProceedings of the Fifth Workshop on\nStatistical Machine Translation and Metrics MATR,\nUppsala, Sweden.\nHeafield, K. 2011. KenLM: Faster and Smaller Lan-\nguage Model Queries. InProceedings of the Sixth\nWorkshop on Statistical Machine Translation, Edin-\nburgh, UK.\nHerrmann, T., J. Niehues, and A. Waibel. 2013. Com-\nbining Word Reordering Methods on Different Lin-\nguistic Abstraction Levels for Statistical Machine\nTranslation. InSeventh Workshop on Syntax, Seman-\ntics and Structure in Statistical Translation, Atlanta,\nGeorgia, USA.\nKoehn, P. and H. Hoang. 2007. Factored Transla-\ntion Models. InProceedings of the Joint Conference\non Empirical Methods in Natural Language Process-\ning and Computational Natural Language Learning,\nPrague, Czech Republic.135\nKoehn, P. and K. Knight. 2003. Empirical Methods\nfor Compound Splitting. InProceedings of the 10th\nConference of the European Chapter of the ACL, Bu-\ndapest, Hungary.\nKoehn, P., A. Axelrod, A.B. Mayne, C. Callison-Burch,\nM. Osborne, and D. Talbot. 2005. Edinburgh Sys-\ntem Description for the 2005 IWSLT Speech Trans-\nlation Evaluation. InProceedings of the 2nd Inter-\nnational Workshop on Spoken Language Translation,\nPittsburgh, Pennsylvania, USA.\nKoehn, P., H. Hoang, A. Birch, C. Callison-Burch,\nM. Federico, N. Bertoldi, B. Cowan, W. Shen,\nC. Moran, R. Zens, C. Dyer, O. Bojar, A. Con-\nstantin, and E. Herbst. 2007. Moses: Open Source\nToolkit for Statistical Machine Translation. InPro-\nceedings of the 45th Annual Meeting of the ACL,\nPrague, Czech Republic.\nMacherey, K., A. Dai, D. Talbot, A. Popat, and F. Och.\n2011. Language-independent Compound Splitting\nwith Morphological Operations. InProceedings of\nthe 49th Annual Meeting of the ACL: Human Lan-\nguage Technologies, Portland, Oregon, USA.\nMediani, M., E. Cho, J. Niehues, T. Herrmann, and\nA. Waibel. 2011. 
The KIT English-French Transla-\ntion Systems for IWSLT 2011. InProceedings of the\nEights International Workshop on Spoken Language\nTranslation, San Francisco, California, USA.\nMoore, R.C. and W. Lewis. 2010. Intelligent Selec-\ntion of Language Model Training Data. InProceed-\nings of the 48th Annual Meeting of the ACL, Uppsala,\nSweden.\nNiehues, J. and M. Kolss. 2009. A POS-based Model\nfor Long-Range Reorderings in SMT. InProceed-\nings of the Fourth Workshop on Statistical Machine\nTranslation, Athens, Greece.\nNiehues, J. and A. Waibel. 2011. Using Wikipedia to\nTranslate Domain-Specific Terms in SMT. InPro-\nceedings of the Eights International Workshop on\nSpoken Language Translation, San Francisco, Cal-\nifornia, USA.\nNiehues, J. and A. Waibel. 2012. Detailed Analysis of\nDifferent Strategies for Phrase Table Adaptation in\nSMT. InProceedings of the Tenth Conference of the\nAssociation for Machine Translation in the Ameri-\ncas, San Diego, California, USA.\nNiehues, J., T. Herrmann, S. V ogel, and A. Waibel.\n2011. Wider Context by Using Bilingual Language\nModels in Machine Translation. InProceedings of\nthe Sixth Workshop on Statistical Machine Transla-\ntion, Edinburgh, UK.\nOch, F.J. 1999. An Efficient Method for Determin-\ning Bilingual Word Classes. InProceedings of the\nNinth Conference of the European Chapter of the\nACL, Bergen, Norway.Rottmann, K. and S. Vogel. 2007. Word Reordering\nin Statistical Machine Translation with a POS-Based\nDistortion Model. InProceedings of the 11th In-\nternational Conference on Theoretical and Method-\nological Issues in Machine Translation, Sk ¨ovde,\nSweden.\nSchmid, H. and F. Laws. 2008. Estimation of Condi-\ntional Probabilities with Decision Trees and an Ap-\nplication to Fine-Grained POS Tagging. InProceed-\nings of the 22nd International Conference on Com-\nputational Linguistics, Manchester, UK.\nSchmid, H. 1994. Probabilistic Part-of-Speech Tag-\nging Using Decision Trees. InProceedings of the\nInternational Conference on New Methods in Lan-\nguage Processing, Manchester, UK.\nStolcke, A. 2002. SRILM – An Extensible Language\nModeling Toolkit. InProceedings of the Interna-\ntional Conference of Spoken Language Processing,\nDenver, Colorado, USA.\nTalbot, D. and M. Osborne. 2006. Modelling Lexical\nRedundancy for Machine Translation. InProceed-\nings of the 21st International Conference on Compu-\ntational Linguistics and the 44th Annual Meeting of\nthe ACL, Sydney, Australia.\nVenugopal, A., A. Zollman, and A. Waibel. 2005.\nTraining and Evaluation Error Minimization Rules\nfor Statistical Machine Translation. InProceedings\nof the Workshop on Data-driven Machine Transla-\ntion and Beyond, Ann Arbor, Michigan, USA.\nV ogel, S. 2003. SMT Decoder Dissected: Word Re-\nordering. InProceedings of the IEEE International\nConference on Natural Language Processing and\nKnowledge Engineering, Beijing, China.\nWeller, M., A. Fraser, and S. Schulte im Walde.\n2013a. Using Subcategorization Knowledge to Im-\nprove Case Prediction for Translation to German. In\nProceedings of the 51st Annual Meeting of the ACL,\nSofia, Bulgaria.\nWeller, M., M. Kisselew, S. Smekalova, A. Fraser,\nH. Schmid, N. Durrani, H. Sajjad, and R. Farkas.\n2013b. Munich-Edinburgh-Stuttgart Submissions of\nOSM Systems at WMT13. InProceedings of the\nEighth Workshop on Statistical Machine Translation,\nSofia, Bulgaria.\nWuebker, J. and H. Ney. 2012. Phrase Model Train-\ning for Statistical Machine Translation with Word\nLattices of Preprocessing Alternatives. 
In Proceedings of the Seventh Workshop on Statistical Machine Translation, Montreal, Canada.
Yang, M. and K. Kirchhoff. 2006. Phrase-Based Backoff Models for Machine Translation of Highly Inflected Languages. In Proceedings of the 11th Conference of the European Chapter of the ACL, Trento, Italy.
{ "decision": "Unknown", "reviews": [] }
0
0
[]
[]
{ "id": "oQx-yF41t0", "year": null, "venue": "EAMT 2022", "pdf_link": "https://aclanthology.org/2022.eamt-1.41.pdf", "forum_link": "https://openreview.net/forum?id=oQx-yF41t0", "arxiv_id": null, "doi": null }
{ "title": "MaCoCu: Massive collection and curation of monolingual and bilingual data: focus on under-resourced languages", "authors": [ "Marta Bañón", "Miquel Esplà-Gomis", "Mikel L. Forcada", "Cristian García-Romero", "Taja Kuzman", "Nikola Ljubesic", "Rik van Noord", "Leopoldo Pla Sempere", "Gema Ramírez-Sánchez", "Peter Rupnik", "Vít Suchomel", "Antonio Toral", "Tobias van der Werff", "Jaume Zaragoza" ], "abstract": "Marta Bañón, Miquel Esplà-Gomis, Mikel L. Forcada, Cristian García-Romero, Taja Kuzman, Nikola Ljubešić, Rik van Noord, Leopoldo Pla Sempere, Gema Ramírez-Sánchez, Peter Rupnik, Vít Suchomel, Antonio Toral, Tobias van der Werff, Jaume Zaragoza. Proceedings of the 23rd Annual Conference of the European Association for Machine Translation. 2022.", "keywords": [], "raw_extracted_content": "MaCoCu: Massive collection and curation of monolingual and bilingual\ndata: focus on under-resourced languages\nMarta Ba ˜n´on†, Miquel Espl `a-Gomis⋆, Mikel L. Forcada⋆, Cristian Garc ´ıa-Romero⋆,\nTaja Kuzman‡, Nikola Ljube ˇsi´c‡, Rik van Noord♦, Leopoldo Pla Sempere⋆,\nGema Ram ´ırez-S ´anchez†, Peter Rupnik‡, V´ıt Suchomel‡, Antonio Toral♦,\nTobias van der Werff♦, Jaume Zaragoza†\n‡Joˇzef Stefan Institute,†Prompsit,♦Rijksuniversiteit Groningen,⋆Universitat d’Alacant\n‡{taja.kuzman,nikola.ljubesic,peter.rupnik }@ijs.si ,\[email protected]\n†{mbanon,gramirez,jzaragoza }@prompsit.com\n♦{r.i.k.van.noord,a.toral.ruiz,t.n.van.der.werff }@rug.nl\n⋆{mespla,mlf,cgarcia,lpla }@dlsi.ua.es\nAbstract\nWe introduce the project MaCoCu: Mas-\nsive collection and curation of monolin-\ngual and bilingual data: focus on under-\nresourced languages , funded by the Con-\nnecting Europe Facility, which is aimed at\nbuilding monolingual and parallel corpora\nfor under-resourced European languages.\nThe approach followed consists of crawl-\ning large amounts of textual data from se-\nlected top-level domains of the Internet, and\nthen applying a curation and enrichment\npipeline. In addition to corpora, the project\nwill release the free/open-source web crawl-\ning and curation software used.\n1 Introduction\nThis paper describes the project MaCoCu: Massive\ncollection and curation of monolingual and bilin-\ngual data: focus on under-resourced languages ,\nfunded by the Connecting Europe Facility in the\n2020 CEF Telecom Call - Automated Translation\n(2020-EU-IA-0078).1This project started on June\n1, 2021, and will last for two years. It is aimed\nat building large and high-quality monolingual\nand parallel (with English) corpora for five under-\nresourced official EU languages: Maltese, Bulgar-\nian, Slovenian, Croatian, and Icelandic;2and for the\nlanguages of the five candidate states to become EU\nmembers: Turkish, Albanian, Macedonian, Mon-\n©2022 The authors. This article is licensed under a Creative\nCommons 3.0 licence, no derivative works, attribution, CC-\nBY-ND.\n1https://ec.europa.eu/inea/connecting-eur\nope-facility/cef-telecom/2020-eu-ia-0078\n2Maltese and Icelandic were chosen since they are especially\nunder-resourced official EU languages; Bulgarian, Slovenian\nand Croatian were chosen due to the interest of the consortium\non South-Slavic languages, a decision that extends previous\nefforts in the Abu-MaTran project (Toral et al., 2015).tenegrin, and Serbian. 
Existing initiatives produc-\ning similar corpora, such as Paracrawl (Ba ˜n´on et al.,\n2020) or Oscar (Abadji et al., 2022) exploit existing\nresources such as Common Crawl3or the Internet\nArchive.4In contrast, our strategy consists in auto-\nmatically crawling top-level domains (TLD) with\nthe potential to contain substantial amounts of tex-\ntual data in the targeted languages,5and then apply-\ning a monolingual and a parallel curation pipelines\non the downloaded data. This approach aims at\nobtaining more and higher-quality data than that\navailable in existing compilations.6\nOne of the objectives of the project is to iden-\ntify data relevant for Digital Service Infrastructures\n(DSIs). Our corpora will be enriched with infor-\nmation about the relevance of the data collected\nfor ten DISs: e-Health, e-Justice, Online Dispute\nResolution, Europeana, Open Data Portal, Business\nRegisters Interconnection System, e-Procurement,\nSafer Internet, Cybersecurity, and Electronic Ex-\nchange of Social Security Information.\n1.1 International consortium\nFour partners are involved in this project: Institut\nJoˇzef Stefan (Slovenia), Rijksuniversiteit Gronin-\ngen (Netherlands), Prompsit Language Engineer-\ning S.L. (Spain), and Universitat d’Alacant (Spain;\ncoordinator). The consortium has a strong back-\nground in the task of building corpora, as several\npartners have been also part of the consortiums\nbehind projects such as Paracrawl (Ba ˜n´on et al.,\n2020), GoURMET (Birch et al., 2019), EuroPat7\nand Abu-MaTran (Toral et al., 2015).\n3https://commoncrawl.org/\n4https://archive.org/\n5National TLDs such as .hr for Croatian, or .is for Ice-\nlandic, and also generic TLDs such as .com ,.org , or.eu.\n6Preliminary automatic evaluation seem to confirm the quality\nof the data in the first data release (see Table 1).\n7https://ec.europa.eu/inea/connecting-eur\nope-facility/cef-telecom/2018-eu-ia-0061\n2 Outcomes of the project\nThe main results of the project will be parallel and\nmonolingual corpora, as well as the code used to\nbuild them. In this section, we briefly describe the\nmost relevant features of these outcomes.\n2.1 Corpora\nThe main goal of this project is to build monolin-\ngual and parallel corpora for the ten languages men-\ntioned in Section 1. Since the project is aimed at\nproducing high-quality corpora, a thorough clean-\ning process will be carried out, which will include\nautomatic noise cleaning/fixing, removal of near-\nduplicates and irrelevant data, such as boilerplates,\nand automatic detection of machine translated con-\ntent. The corpora produced will be enriched with:\n•Identifiers that allow to re-construct the orig-\ninal paragraphs or documents from the seg-\nments in the corpora, enabling to leverage in-\nformation beyond the sentence-level;\n•Language variety (e.g. British/American En-\nglish) for some covered languages;\n•Document-level affinity to the DSIs covered,\nwhich will be automatically identified through\ndomain modelling;\n•Personal information identification, to allow\nfinal users to remove it for specific use cases;\n•Translationese , or the identification of the\ntranslation direction (only for parallel data);\n•Identification of machine translation (only for\nparallel data), so that such crawled documents\ncan be filtered out by the user.\nCurrently, monolingual and parallel data have\nbeen released for seven out of the ten languages\ntargeted. 
Table 1 provides information about the\nsizes of the current version of these corpora.\n2.2 Free/open-source pipeline\nAll the code developed within the project to crawl,\ncurate and enrich the corpora built will be made\navailable under free/open-source licences on Ma-\nCoCu8and Bitextor9GitHub organisations.10\n3 Acknowledgment\nThis action has received funding from the Euro-\npean Union’s Connecting Europe Facility 2014-\n2020 - CEF Telecom, under Grant Agreement No.\n8https://github.com/macocu\n9https://github.com/bitextor\n10Two code releases will be made, one at the end of the first\nyear of the project, and the second one at the end of the project.Monolingual Parallel\nLanguage Docs. Words Segs. Words\nTurkish 16.0 4346.3 10.3 513.5\nBulgarian 10.5 3508.9 3.9 158.7\nCroatian 7.3 2318.3 3.1 134.9\nSlovene 5.8 1779.1 3.2 137.0\nMacedonian 2.0 524.1 0.5 23.9\nIcelandic 1.7 644.5 0.4 14.4\nMaltese 0.5 347.9 1.2 69.6\nTable 1: Sizes for the monolingual and parallel corpora for\nthe first data release. Monolingual corpora are measured in\nmillions of documents (Docs.) and millions of words. Parallel\ncorpora are measured in millions of parallel segments (Segs.)\nand millions of words in the language other than English.\nINEA/CEF/ICT/A2020/2278341. The contents of\nthis publication are the sole responsibility of its au-\nthors and do not necessarily reflect the opinion of\nthe European Union.\nReferences\nAbadji, Julien, Pedro Ortiz Suarez, Laurent Romary, and\nBeno ˆıt Sagot. 2022. Towards a Cleaner Document-\nOriented Multilingual Crawled Corpus. arXiv e-\nprints , page arXiv:2201.06642, January.\nBa˜n´on, Marta, Pinzhen Chen, Barry Haddow, Ken-\nneth Heafield, Hieu Hoang, Miquel Espl `a-Gomis,\nMikel L. Forcada, Amir Kamran, Faheem Kirefu,\nPhilipp Koehn, Sergio Ortiz Rojas, Leopoldo Pla Sem-\npere, Gema Ram ´ırez-S ´anchez, Elsa Sarr ´ıas, Marek\nStrelec, Brian Thompson, William Waites, Dion Wig-\ngins, and Jaume Zaragoza. 2020. ParaCrawl: Web-\nscale acquisition of parallel corpora. In Proceedings\nof the 58th Annual Meeting of the Association for\nComputational Linguistics , pages 4555–4567, Online,\nJuly.\nBirch, Alexandra, Barry Haddow, Ivan Tito, Antonio Va-\nlerio Miceli Barone, Rachel Bawden, Felipe S ´anchez-\nMart ´ınez, Mikel L. Forcada, Miquel Espl `a-Gomis,\nV´ıctor S ´anchez-Cartagena, Juan Antonio P ´erez-Ortiz,\nWilker Aziz, Andrew Secker, and Peggy van der\nKreeft. 2019. Global under-resourced media transla-\ntion (GoURMET). In Proceedings of Machine Trans-\nlation Summit XVII: Translator, Project and User\nTracks , pages 122–122, Dublin, Ireland, August.\nToral, Antonio, Tommi Pirinen, Andy Way, Rapha ¨el\nRubino, Gema Ram ´ırez-S ´anchez, Sergio Ortiz-\nRojas, V ´ıctor S ´anchez-Cartagena, Jorge Ferr ´andez-\nTordera, Mikel Forcada, Miquel Espla-Gomis, Nikola\nLjube ˇsi´c, Filip Klubi ˇcka, Prokopis Prokopidis, and\nVassilis Papavassiliou. 2015. Automatic acquisi-\ntion of machine translation resources in the Abu-\nMaTran project. Procesamiento del Lenguaje Natu-\nral, (55):185–188.", "main_paper_content": null }
{ "decision": "Unknown", "reviews": [] }
0
0
[]
[]
{ "id": "WqKtTE6s1Q3", "year": null, "venue": "EAMT 2022", "pdf_link": "https://aclanthology.org/2022.eamt-1.19.pdf", "forum_link": "https://openreview.net/forum?id=WqKtTE6s1Q3", "arxiv_id": null, "doi": null }
{ "title": "Automatic Discrimination of Human and Neural Machine Translation: A Study with Multiple Pre-Trained Models and Longer Context", "authors": [ "Tobias van der Werff", "Rik van Noord", "Antonio Toral" ], "abstract": null, "keywords": [], "raw_extracted_content": "Automatic Discrimination of Human and Neural Machine Translation:\nA Study with Multiple Pre-Trained Models and Longer Context\nTobias van der Werff\nBernoulli Institute\nUniversity of Groningen\[email protected] van Noord\nCLCG\nUniversity of Groningen\[email protected] Toral\nCLCG\nUniversity of Groningen\[email protected]\nAbstract\nWe address the task of automatically\ndistinguishing between human-translated\n(HT) and machine translated (MT) texts.\nFollowing recent work, we fine-tune pre-\ntrained language models (LMs) to perform\nthis task. Our work differs in that we use\nstate-of-the-art pre-trained LMs, as well\nas the test sets of the WMT news shared\ntasks as training data, to ensure the sen-\ntences were not seen during training of the\nMT system itself. Moreover, we analyse\nperformance for a number of different ex-\nperimental setups, such as adding transla-\ntionese data, going beyond the sentence-\nlevel and normalizing punctuation. We\nshow that (i) choosing a state-of-the-art\nLM can make quite a difference: our\nbest baseline system ( DEBERTA ) outper-\nforms both BERT and ROBERTA by over\n3% accuracy, (ii) adding translationese\ndata is only beneficial if there is not much\ndata available, (iii) considerable improve-\nments can be obtained by classifying at the\ndocument-level and (iv) normalizing punc-\ntuation and thus avoiding (some) shortcuts\nhas no impact on model performance.\n1 Introduction\nGenerally speaking, translations are either per-\nformed manually by a human, or performed au-\ntomatically by a machine translation (MT) sys-\ntem. There exist many use cases in Natural Lan-\nguage Processing in which working with a human-\ntranslated text is not a problem, as they are usually\n© 2022 The authors. This article is licensed under a Creative\nCommons 3.0 licence, no derivative works, attribution, CC-\nBY-ND.of high quality, but in which we would like to fil-\nter out automatically translated texts. For example,\nconsider training an MT system on a parallel cor-\npus crawled from the Internet: we would prefer-\nably only keep the high-quality human-translated\nsentences.\nIn this paper, we will address this task of dis-\ncriminating between human-translated (HT) and\nmachine-translated texts automatically. Studies\nthat have analysed MT outputs and HTs compar-\natively have found evidence of systematic differ-\nences between the two (Ahrenberg, 2017; Van-\nmassenhove et al., 2019; Toral, 2019). 
These out-\ncomes provide indications that an automatic classi-\nfier should in principle be able to discriminate be-\ntween these two classes, at least to some extent.\nThere is previous related work in this direc-\ntion (Arase and Zhou, 2013; Aharoni et al., 2014;\nLi et al., 2015), but they used Statistical Machine\nTranslation (SMT) systems to get the translations,\nwhile the introduction of Neural Machine Trans-\nlation (NMT) has considerably improved general\ntranslation quality and has led to more natural\ntranslations (Toral and Sánchez-Cartagena, 2017).\nArguably, the discrimination between MT and HT\nis therefore more difficult with NMT systems than\nit was with previous paradigms to MT.\nWe follow two recent publications that have\nattempted to distinguish NMT outputs from\nHTs (Bhardwaj et al., 2020; Fu and Nederhof,\n2021) and work with MT outputs generated by\nstate-of-the-art online NMT systems. Addition-\nally, we also build a classifier by fine-tuning\na pre-trained language model (LM), given the\nfact that this approach obtains state-of-the-art\nperformance in many text-based classification\ntasks.\nThe main differences with previous work are:\n• We experiment with state-of-the-art LMs, in-\nstead of only using BERT - and ROBERTA -\nbased LMs;\n• We empirically check the performance im-\npact of adding translationese training data;\n• We go beyond sentence-level by training and\ntesting our best system on the document-\nlevel;\n• We analyse the impact of punctuation short-\ncuts by normalizing the input texts;\n• We use the test sets of WMT news shared task\nas our data sets, to ensure reproducibility and\nthat the MT system did not see the transla-\ntions during its training.\nThe rest of the paper is organised as follows.\nSection 2 outlines previous work on the topic. Sec-\ntion 3 details our methodology, focusing on the\ndata sets, classifiers and evaluation metrics. Sub-\nsequently, Section 4 presents our experiments and\ntheir results. These are complemented by a dis-\ncussion and further analyses, in Section 5. Finally,\nSection 6 presents our conclusions and suggestions\nfor future work. All our data, code and results is\npublicly available at https://github.com/\ntobiasvanderwerff/HT-vs-MT\n2 Related Work\nAnalyses Previous work has dealt with finding\nsystematic and qualitative differences between HT\nand MT. Ahrenberg (2017) compared manually an\nNMT system and a HT for one text in the trans-\nlation direction English-to-Swedish. They found\nthat the translation by NMT was closer to the\nsource and exhibited a more restricted repertoire of\ntranslation procedures than the HT. Related, an au-\ntomatic analysis by Vanmassenhove et al. (2019)\nfound that translations by NMT systems exhibit\nless lexical diversity than HTs. A contemporary\nautomatic analysis corroborated the finding about\nless lexical diversity and concluded also that MT\nled to translation that had lower lexical density,\nwere more normalised and had more interference\nfrom the source language (Toral, 2019).\nSMT vs HT classification Given these findings,\nit is no surprise that automatic classification to dis-\ncriminate between MT and HT has indeed been\nattempted in the past. Most of this work targetsSMT since it predates the introduction of NMT and\nuses a variety of approaches. For example, Arase\nand Zhou (2013) relied on fluency features, while\nAharoni et al. (2014) used part-of-speech tags and\nfunction words, and Li et al. (2015) parse trees,\ndensity and out-of-vocabulary words. 
Their meth-\nods reach quite high accuracies, though indeed rely\non SMT systems, which are of considerable lower\nquality than the current NMT ones.\nNMT vs HT classification To the best of our\nknowledge only two publications have tackled this\nclassification with the state-of-the-art paradigm,\nNMT (Bhardwaj et al., 2020; Fu and Nederhof,\n2021). We now outline these two publications and\nplace our work with respect to them.\nBhardwaj et al. (2020) work on automatically\ndetermining if a French sentence is HT or MT,\nwith the source sentences in English. They test\na variety of pre-trained language models, either\nmultilingual –XLM-R (Conneau et al., 2020) and\nmBERT (Devlin et al., 2019a)– or monolingual for\nFrench: CamemBERT (Martin et al., 2020) and\nFlauBERT (Le et al., 2020). Moreover, they test\ntheir trained models across different domains and\nMT systems used during training. They find that\npre-trained LMs can perform this task quite well,\nwith accuracies of over 75% for both in-domain\nand cross-domain evaluation. Our work follows\ntheirs quite closely, though there are a few impor-\ntant differences. First, we use publicly available\nWMT data, while they use a large private data set,\nwhich unfortunately limits reproducibility. Sec-\nond, we analyze the impact of punctuation-type\n“shortcuts”, while it is unclear to what extent this\ngets done in Bhardwaj et al. (2020).1Third, we\nalso test our model on the document-level, instead\nof just the sentence-level.\nFu and Nederhof (2021) work on the WMT18\nnews commentary data set for translating Czech,\nGerman and Russian into English. By fine-tuning\nBERT they obtain an accuracy of 78% on all lan-\nguages. However, they use training sets from\nWMT18, making it highly likely that Google\nTranslate (which they use to get the translations)\nhas seen these sentences during training.2This\nmeans that the MT outputs they get are likely\nof higher quality than it would be the case in a\n1They do apply 12 conservative regular expressions, but, as\nthere is no code available, it is unclear what these are and\nwhat impact this had on their results.\n2This likely does not apply to Bhardwaj et al. (2020), as they\nuse a private data set.\nreal-world scenario, and thus closer to HT, which\nwould make the task unrealistically harder for the\nclassifiers. On the other hand, an accuracy of 78%\nis quite high on this challenging task, so perhaps\nthis is not the case. This accuracy might even be\nsuspiciously high: it could be that the model over-\nfit on the Google Translations, or that the data con-\ntains artifacts that the model uses as a shortcut.\nOriginal vs MT Finally, there are three related\nworks that attempt to discriminate between MT\nand original texts written in a given language,\nrather than human translations as is our focus.\nNguyen-Son et al. (2019a) tackles this by matching\nsimilar words within paragraphs and subsequently\nestimating paragraph-level coherence. Nguyen-\nSon et al. (2019b) approaches this task by round-\ntrip translating original and machine-translated\ntexts and subsequently using the similarities be-\ntween the original texts and their round-trip trans-\nlated versions. Nguyen-Son et al. 
(2021) extends\nthe former work improving the detection of MT\neven if a different system is used.\n3 Method\n3.1 Data\nWe will experiment with the test sets from the\nWMT news shared tasks.3We choose this data set\nmainly for these four reasons:\n(i) it is publicly available so it guarantees repro-\nducibility;\n(ii) it has the translation direction annotated,\nhence we can inspect the impact of having\noriginal text or human-translated text (i.e.\ntranslationese ) in the source side;\n(iii) the data sets are also available at the\ndocument-level, meaning we can train and\nevaluate systems that go beyond sentence-\nlevel;\n(iv) these sets are commonly used as test sets, so it\nis unlikely that they are used as training data\nin online MT systems, which we use in our\nexperiments.\nWe will use the German-English data sets, and\nwill focus on the translation direction German-to-\nEnglish. This language pair has been present the\nlongest at WMT’s news shared task, from 2008\ntill the present day. Hence, it is the language pair\n3For example, https://www.statmt.org/wmt20/\ntranslation-task.htmlData set # SNT O# SNT T# DOC O# DOC T\nWMT08 361 0 15 0\nWMT09 432 448 17 21\nWMT10 500 505 15 22\nWMT11 601 598 16 18\nWMT12 611 604 14 18\nWMT13 500 500 7 9\nWMT14 1,500 1,503 96 68\nWMT15 736 1,433 33 48\nWMT16 1,499 1,500 87 68\nWMT17 1,502 1,502 66 64\nWMT18 (dev) 1,498 — 69 —\nWMT19 (test) 2,000 — 145 —\nWMT08-17 8,242 8,593 366 336\nWMT14-17 5,237 5,938 282 248\nTable 1: Statistics of the data sets. # SNT stands for number\nof sentences, # DOC for number of documents, Ofor number\nof sentences or documents in which the source side is original,\nwhile Tstands for translationese . WMT08-17 and WMT14-\n17 indicate the sizes of the two training sets used.\nwith the most test data available. We use 2008 to\n2017 as training, 2018 as dev and 2019 as test. Full\nstatistics are shown in Table 1.\nTranslationese For each of these sets, roughly\nhalf of the data was originally written in our source\nlanguage (German) and human-translated to our\ntarget language (English), while the other half was\noriginally written in our target language (English)\nand translated to our source language (German) by\na human translator. We thus make a distinction be-\ntween text that originates from text written in the\nsource language (German), and text that originates\nfrom a previous translation (i.e. English to Ger-\nman). We will refer to the latter as translationese .\nHalf of the data can thus be considered a dif-\nferent category: the source sentences are actually\nnot original, but a translation, which means that\nthe machine-translated output will actually be an\nautomatic translation of a human translation, in-\nstead of an automatic translation of original text.\nIn that part of the data, the texts in the HT cat-\negory are not human translations of original text,\nbut the original texts themselves. Since this data\nmight exhibit different characteristics, given that\nthe translation direction is the inverse, we only use\nthe sentences and documents that were originally\nwritten in German for our dev and test sets (indi-\ncated with Oin Table 1). 
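To make the construction of the labelled data concrete: each German source sentence contributes two instances, its WMT reference translation (labelled HT) and an automatic translation of the same source (labelled MT). The sketch below illustrates this pairing; the file names are hypothetical and the integer labels (0 = HT, 1 = MT) are a choice made for this example.

```python
import csv

def build_dataset(ht_path: str, mt_path: str, out_path: str) -> None:
    """Pair WMT reference translations (HT) with MT outputs of the same sources."""
    with open(ht_path, encoding="utf-8") as f_ht, \
         open(mt_path, encoding="utf-8") as f_mt, \
         open(out_path, "w", newline="", encoding="utf-8") as f_out:
        writer = csv.writer(f_out, delimiter="\t")
        writer.writerow(["text", "label"])
        for ht, mt in zip(f_ht, f_mt):
            writer.writerow([ht.strip(), 0])  # human translation
            writer.writerow([mt.strip(), 1])  # machine translation

# Hypothetical files: English references and DeepL outputs for the same German sources.
build_dataset("wmt08-17.ref.en", "wmt08-17.deepl.en", "train.tsv")
```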
Moreover, we empiri-\ncally evaluate in Section 4 whether removing the\nextra translationese data from the training set is\nactually beneficial for the classifier.\nMT Since we are interested in contrasting HT\nvs state-of-the-art NMT, we automatically trans-\nlate the sentences using a general-purpose and\nwidely used online MT system, DeepL.4We trans-\nlate from German to British English,5specifically.\nWe use this MT system for the majority of our ex-\nperiments, though we do experiment with cross-\nsystem classification by testing on data that was\ntranslated with other MT systems, such as Google\nTranslate, using their paid API.6We manually\nwent through a subset of the translations by both\nDeepL and Google Translate and indeed found\nthem to be of high quality.\nTo be clear, in our experiments, the machine\ntranslations actually double the size of the train,\ndev and test sets as indicated in Table 1. For each\nGerman source sentence, the data set now contains\na human translation (HT, taken from WMT) and\na machine translated variant (MT, from DeepL or\nGoogle), which are labelled as such. As an exam-\nple, if we train on both the original andtransla-\ntionese sentence-level data of WMT08-17, we ac-\ntually train on 8,242·2 + 8 ,593·2 = 33 ,670in-\nstances. Note that this also prevents a bias in topic\nor domain towards either HT or MT.\nCeiling To get a sense of what the upper ceil-\ning performance of this task will be, we check\nthe number of cases where the machine translation\nis the exact same as the human translation. For\nDeepL, this happened for 3.0% of the WMT08-\n17 training set sentences, 3.1% of the dev set and\n3.9% of the test set. For Google, the percent-\nages are 2.4%, 2.0% and 3.5%, respectively.7Of\ncourse, in practice, it is likely impossible to get\nanywhere near this ceiling, as the MT system also\nsometimes offers arguably better translations (see\nSection 5 for examples).\n4https://www.deepl.com/translator - used in\nNovember 2021.\n5DeepL forces the user to choose a variety of English (either\nBritish or American). This implies that the MT output could\nbe expected to be (mostly) British English while the HT is a\nmix of both varieties. Hence, one could argue that variety is\nan aspect that could be picked up by the classifier. We also\nuse Google Translate, which does not allow the user to select\nan English variety.\n6We noticed that the free Python library googletrans had\nclearly inferior translations. The paid APIs for Google and\nDeepL obtain COMET (Rei et al., 2020) scores of 59.9 and\n61.9, respectively, while the googletrans library obtains 21.0.\n7If we apply a bit more fuzzy matching by only keeping ascii\nletters and numbers for each sentence, the percentages go up\nby around 0.5%.Parameter Range\nLearning rate 5×10−6,10−5,3×10−5\nBatch size {32,64}\nWarmup {0.06}\nLabel smoothing {0.0 ,0.1,0.2}\nDropout {0.0,0.1}\nTable 2: Hyperparameter range and final values (bold) for our\nfinal DEBERTA models. Hyperparameters not included are left\nat their default value.\n3.2 Classifiers\nSVM We will experiment with a number of dif-\nferent classifiers. As a baseline model, we use\na linear SVM with unigrams and bigrams as fea-\ntures trained with scikit-learn (Pedregosa et\nal., 2011), for which the data is tokenized with\nSpacy .8The use of a SVM is mainly to find out\nhow far we can get by just looking at the superficial\nlexical level. It also allows us to identify whether\nthe classifier uses any shortcuts, i.e. 
features that\nare not necessarily indicative of a human or ma-\nchine translation, but due to artifacts in the data\nsets, which can still be picked up as such by our\nmodels. An example of this is punctuation, which\nwas mentioned in previous work (Bhardwaj et al.,\n2020). MT systems might normalize uncommon\npunctuation,9while human translators might opt\nfor simply copying the originally specified punc-\ntuation in the source sentence (e.g. quotations,\ndashes). We analyse the importance of normaliza-\ntion in Section 5.\nFine-tuning LMs Second, we will experiment\nwith fine-tuning pre-trained language models.10\nFu and Nederhof (2021) only used BERT (Devlin\net al., 2019b) and Bhardwaj et al. (2020) used a\nset of BERT - and ROBERTA -based LMs, but there\nexist newer pre-trained LMs that generally obtain\nbetter performance. We will empirically decide the\nbest model for this task, by experimenting with a\nnumber of well-established LMs: BERT (Devlin et\nal., 2019b), RoBERTa (Liu et al., 2019), DeBERTa\n(He et al., 2021b; He et al., 2021a), XLNet (Yang\net al., 2019), BART (Lewis et al., 2020) and Long-\nformer (Beltagy et al., 2020). For all these models,\nwe only tune the batch size and learning rate. The\n8https://spacy.io/\n9The normalisation of the punctuation as a pre-processing\nstep when training an MT system is a widespread technique,\nso that e.g. «, »,′′, “ and „ are all converted to e.g.′′.\n10Implemented using HuggingFace (Wolf et al., 2020).\nAcc.\nBART -large Lewis et al. (2020) 64.9\nBERT -large Devlin et al. (2019b) 61.9\nDEBERTA -v3-large He et al. (2021a) 68.6\nLongformer-large Beltagy et al. (2020) 63.5\nROBERTA -large Liu et al. (2019) 65.5\nXLNET -base Yang et al. (2019) 62.3\nDEBERTA -v3-large (optim) 68.9\nTable 3: Best development set results (all in %) for MT vs\nHT classification for a number of pre-trained LMs. On the test\nset, DEBERTA -v3-large (optim) obtains an accuracy of 66.1.\nbest model from these experiments is then tuned\nfurther (on the dev set). We tune a single parameter\nat a time and do not perform a full grid search due\nto efficiency and environmental reasons. Hyperpa-\nrameter settings and range of values experimented\nwith are shown in Table 2.\nEvaluation We evaluate the models looking at\nthe accuracy and F1-score. When standard de-\nviation is reported, we averaged over three runs.\nFor brevity, we only report accuracy scores, as\nwe found them to correlate highly with the F-\nscores. We include additional metrics, such as the\nF-scores, on our GitHub repository.\n4 Experiments\nSVM The SVM classifier was trained on the\ntraining set WMT08–17 O(i.e. part of the data set\nwith original source side), where the MT output\nwas generated with DeepL. It obtained an accu-\nracy of 57.8 on dev and 54.9 on the test set. This is\nin line with what would be expected: there is some\nsignal at the lexical level, but other than that the\ntask is quite difficult for a simple SVM classifier.\nFinding the best LM As previously indicated,\nwe experimented with a number of pre-trained\nLMs. For efficiency reasons, we perform these\nexperiments with a subset of the training data\n(WMT14-17 O, i.e. with only translations from\noriginal text). The results are shown in Table 3. We\nfind the best performance by using the DeBERTa-\nv3 model, which quite clearly outperformed the\nother LMs. 
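A compressed sketch of what sentence-level fine-tuning of this kind looks like with the HuggingFace Trainer is given below. It is not the authors' training script: the data loading, the sequence length, the number of epochs and the column names are assumptions; warmup follows Table 2, while learning rate, batch size and label smoothing are drawn from the ranges explored there (the finally selected values are not recoverable from the extracted table).

```python
from datasets import load_dataset
from transformers import (AutoModelForSequenceClassification, AutoTokenizer,
                          Trainer, TrainingArguments)

MODEL = "microsoft/deberta-v3-large"
tokenizer = AutoTokenizer.from_pretrained(MODEL)
model = AutoModelForSequenceClassification.from_pretrained(MODEL, num_labels=2)

# TSV files with "text" and "label" (0 = HT, 1 = MT) columns; names hypothetical.
data = load_dataset("csv", data_files={"train": "train.tsv", "validation": "dev.tsv"},
                    delimiter="\t")
data = data.map(lambda b: tokenizer(b["text"], truncation=True, max_length=128),
                batched=True)

args = TrainingArguments(
    output_dir="ht-vs-mt",
    learning_rate=1e-5,              # within the range explored in Table 2
    per_device_train_batch_size=32,  # one of the two batch sizes in Table 2
    warmup_ratio=0.06,               # Table 2
    label_smoothing_factor=0.1,      # within the range explored in Table 2
    num_train_epochs=3,              # assumption, not reported
)

Trainer(model=model, args=args, train_dataset=data["train"],
        eval_dataset=data["validation"], tokenizer=tokenizer).train()
```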
We obtain a 6.7 point absolute increase\nin accuracy over BERT (61.9 to 68.6), the LM\nused by Fu and Nederhof (2021)), and a 3.7 point\nincrease over the second best performing model,\nBART -large. We tune some of the remaining hyper-\nparameters further (see Table 2) and obtain an ac-\ncuracy of 68.9. We will use this model in our next\nexperiments.Trained on → DeepL Google\n↓Evaluated on Acc. Acc.\nDeepL 66.1±1.1 56 .3±0.3\nGoogle 63.8±1.6 64 .9±1.1\nFAIR (Ng et al., 2019) 62.6±1.9 57 .7±1.8\nRWTH (Rosendahl et al., 2019) 61.9±1.5 58 .3±1.8\nPROMT (Molchanov, 2019) 50.3±0.9 52 .1±3.3\nonline-X 57.5±1.1 56 .6±3.4\nTable 4: Test set scores (all in %) for training and testing\nour best DEBERTA across different MT-systems (DeepL and\nGoogle) and 4 WMT19 submissions. online-X refers to an\nanonymous online MT system evaluated at WMT19.\nCross-system performance A robust classifier\nthat discriminates between HT and MT should\nnot only recognize MT output that is produced by\na particular MT system (the one the classifier is\ntrained on), but should also work across different\nMT systems. Therefore, we test our DeepL-trained\nclassifier on the translations of Google Translate\n(instead of DeepL) and vice versa. In this experi-\nment we train the classifier on all the training data\n(i.e. WMT08-17 O+T) and evaluate on the test set.\nIn Table 4, we find that this cross-system eval-\nuation leads to quite a drop in accuracy: 2.3% for\nDeepL and even 8.6% for Google. It seems that\nthe classifier does not just pick up general features\nthat discriminate between HTs and NMT outputs,\nbut also MT-system specific features that do not al-\nways transfer to other MT systems.\nIn addition, we test both classifiers on a set of\nMT systems submitted to WMT19. We pick the\ntwo top and two bottom submissions according to\nthe human evaluation (Barrault et al., 2019). The\nmotivation is to find out how the classifiers per-\nform on MT outputs of different levels of transla-\ntion quality. We also notice a considerable drop in\nperformance here. Interestingly, the classifiers per-\nform best on the high-quality translations of FAIR\nand RWTH (81.6 and 81.5 human judgment scores\nat WMT19, respectively), and perform consider-\nably worse on the two bottom-ranked WMT19 sys-\ntems (71.8 and 69.7 human judgment scores). It\nseems that the classifier does not learn to recognize\nlower-quality MT outputs if it only saw higher-\nquality ones during training.\nThis inability to deal with lower-quality MT\nwhen trained only on high-quality MT seems\ncounterintuitive and was quite surprising to us. Af-\nter all, the difference between high-quality MT\nand human translation tends to be more subtle\nthan in the case of low-quality MT. However,\nDev Test\nWMT14-17 O+T71.1±1.3 64 .9±0.6\nWMT14-17 O 68.9±1.4 64 .0±1.1\nWMT08-17 O+T71.2±0.9 66 .1±1.1\nWMT08-17 O 71.5±0.8 66 .3±0.5\nWMT08-17 T 63.7±0.8 59 .5±0.3\nTable 5: Dev and test scores for training our best DEBERTA\nmodel on either WMT14-17 or WMT08-17 translated with\nDeepL, compared with training on the same data sets but not\nadding the translationese data (T) and only using T.\nthe learned features most useful for distinguish-\ning high-quality MT from HT are likely differ-\nent in nature than the features that are most use-\nful for distinguishing low-quality MT from HT\n(e.g., simple lexical features versus features related\nto word ordering). 
From this perspective, feed-\ning low-quality MT to a system trained on high-\nquality MT can be seen as an instance of out-of-\ndistribution data that is not modelled well during\nthe training stage. Nevertheless, this featural dis-\ncrepancy could likely be resolved by supplying ad-\nditional examples of low-quality MT to the classi-\nfier at training time.\nRemoving translationese data In our previous\nexperiment we used the full training data (i.e.\nWMT08-17 O+T). However, most of the WMT\ndata sets only consist for 50% of sentences that\nwere originally written in German; the other\nhalf were originally written in English (see Sec-\ntion 3.1). We ask the question whether this addi-\ntional data (which we refer to as translationese )\nis actually beneficial to the classifier. On the one\nhand, it is in fact a different category than human\ntranslations from original text. On the other, its us-\nage allows us to double the amount of training data\n(see Table 1).\nIn Table 5 we show that the extra data helps if\nthere is not much training data available (WMT14-\n17), but that this effect disappears once we in-\ncrease the amount of training data (WMT08-17).\nIn fact, the translationese data seems to be clearly\nof lower quality (for this task), since a model\ntrained on only this data (WMT08-17 T), which is\nof the same size as the WMT08-17 Oexperiments,\nresults in quite a drop in accuracy (59.5 vs 66.3 on\nthe test set). We have also experimented with pre-\ntraining on WMT08-17 O+Tand then fine-tuning\non WMT08-17 O. Our initial results were mixed,\nbut we plan on investigating this in future work.Beyond sentence-level In many practical use-\ncases, we actually have access to full documents,\nand thus do not have to restrict ourselves to look-\ning at just sentences. This could lead to better\nperformance, since certain problems of NMT sys-\ntems only come to light in a multi-sentence set-\nting (Frankenberg-Garcia, 2021). Since WMT also\ncontains document-level information, we can sim-\nply use the same data set as before. Due to the\nnumber of instances being very low at document\nlevel (see Table 1), and to the fact that the addition\noftranslationese data showed to be beneficial with\nlimited amounts of training data (see Table 5), we\nuse all the data available for our document-level\nexperiments, i.e. WMT08-17 O+T.\nWe have four document-level classifiers: (i) a\nSVM, similar to the one used in our sentence-level\nexperiments, but for which each training instance\nis a document; (ii) majority voting atop our best\nsentence-level classifier, DEBERTA , i.e. we aggre-\ngate its sentence-level predictions for each docu-\nment by taking the majority class; (iii) DEBERTA\nfine-tuned on the document-level data, truncated\nto 512 tokens; and (iv) Longformer (Beltagy et\nal., 2020) fine-tuned on the document-level data,\nas this LM was designed to handle documents.\nFor document-level training, we use gradient ac-\ncumulation and mixed precision to avoid out-of-\nmemory errors. Additionally, we truncate the input\nto 512 subword tokens for the DEBERTA model.\nFor the dev and test set, this means discarding 11%\nand 2% of the tokens per document on average, re-\nspectively.11A potential approach for dealing with\nlonger context without resorting to truncation is to\nuse a sliding window strategy, which we aim to ex-\nplore in future work.\nThe results are presented in Table 6. 
First, we\nobserve that the document-level baselines obtain,\nas expected, better accuracies than their sentence-\nlevel counterparts (e.g. 60.7 vs 54.9 for SVM and\n72.5 vs 66.1 for DEBERTA on test). Second, we\nobserve large differences between dev and test, as\nwell as large standard deviations. The instability\nof the results could be due, to some extent, to the\nlow number of instances in these data sets (138 and\n290, as shown in Table 1). Moreover, the test set is\nlikely harder in general than the dev set, since it on\naverage has fewer sentences per document (13.8 vs\n21.7).\n11The median subword token count in the HT document-level\ndata is 376, with a minimum of 47 and maximum of 3,254.\nDeepL Google\nDev Test Dev Test\nSVM 74.8 60.7 84.7 64.8\nDEBERTA (mc) 84.7±8.0 72 .5±5.2 93 .2±1.1 67 .6±3.4\nDEBERTA 91.1±2.4 76 .8±4.4 95 .9±1.5 60 .8±1.2\nLongformer 80.2±2.7 82 .0±7.2 94 .2±1.3 63 .2±0.9\nTable 6: Accuracies of training and evaluating on document-\nlevel DeepL and Google data. For DEBERTA , we try two\nversions: a sentence-level model applied to each sentence in\na document followed by majority classification (mc), and a\nmodel trained on full documents (truncated to 512 tokens).\n5 Discussion & Analysis\nThus far we have reported results in terms of an au-\ntomatic evaluation metric: classification accuracy.\nNow we would like to delve deeper by conducting\nanalyses that allow us to obtain further insights. To\nthis end, we exploit the fact that the SVM classifier\noutputs the most discriminative features for each\nclass: HT and MT.\n5.1 Punctuation Normalization\nIn this first analysis we looked at the best features\nof the SVM to find out whether there is an obvious\nindication of “shortcuts” that the pre-trained lan-\nguage models can take. The best features for both\nHT and MT are shown in Table 8.\nFor comparison, we also show the best features\nafter applying Moses’ (Koehn et al., 2007) punc-\ntuation normalization,12which is commonly used\nas a preprocessing step when training MT systems.\nIndeed, there are punctuation-level features that by\nall accounts should not be indicative of either class,\nbut still show up as such. The backtick (`) and dash\nsymbol (–) show up as the best unigram features\nindicating HT, but are not present after the punctu-\nation is normalized.\nNow, to be clear, one might make a case of still\nincluding these features in HT vs MT experiments.\nAfter all, if this is how MT sentences can be spot-\nted, why should we not consider them? On the\nother hand, the shortcuts that work for this partic-\nular data set and MT system (DeepL) might not\nwork for texts in different domains or texts that are\ntranslated by different MT systems. Moreover, the\nshortcuts might obscure an analysis of the more in-\nteresting differences between human and machine\ntranslated texts.\n12https://github.com/moses-smt/\nmosesdecoder/blob/master/scripts/\ntokenizer/normalize-punctuation.perlOriginal Normalized\nSent-level\nSVM 54.9 54.5\nDEBERTA -v3 66.1±1.1 67 .0±0.6\nDoc-level\nSVM 60.7 60.0\nDEBERTA (majority) 72.5±5.2 72 .0±4.1\nDEBERTA 76.8±4.4 77 .2±4.7\nLongformer 82.0±7.2 83 .7±2.1\nTable 7: Test set accuracies of training and evaluating on\nsentence-level and document-level data on either the original\nor normalized (by Moses) input texts, translated with DeepL.\nIn any case, we want to determine the impact\nof punctuation-level shortcuts by comparing the\noriginal scores versus the scores of our classi-\nfiers trained on punctuation-normalized texts. 
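The normalization itself is done with Moses' normalize-punctuation.perl (footnote 12); an equivalent check can be run from Python through the sacremoses port of that script, as sketched below with an illustrative sentence.

```python
from sacremoses import MosesPunctNormalizer

# Python port of Moses' normalize-punctuation.perl.
mpn = MosesPunctNormalizer(lang="en")

sentence = "He said „so it goes“ and left."
print(mpn.normalize(sentence))
# Typographic quotation marks are mapped to plain ASCII quotes, removing
# the punctuation cues discussed above.
```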
The\nresults of our baseline and best sentence- and\ndocument-level systems with and without normal-\nization are shown in Table 7. We observe that,\neven if the two best unigram features were initially\npunctuation, normalizing does not affect perfor-\nmance in a major way. There is even a small in-\ncrease in performance for DEBERTA -v3 and Long-\nformer, though likely not significant.\n5.2 Unigram Analysis\nIn our second analysis we manually went through\nthe data set to analyse the 10 most indicative uni-\ngram features for MT (before normalization).13In-\nterestingly, some are due to errors by the human\ntranslator: the MT system correctly used school-\nyard instead of the split school yard , and it also\nused the correct name Olympiakos Piraeus instead\nof the incorrect Olypiacos Piraeus (typo in the first\nword). Some are indeed due to a different (and\nlikely better) lexical choice by the human transla-\ntor, though the translation is not necessarily wrong:\ncompeting gang instead of rival gang ,espionage\nscandal instead of spy affair ,judging panel instead\nofjury andradiation instead of rays. Finally, the\nfeature disclosure looks to be an error on the MT\nside. It occurs a number of times in the machine-\ntranslated version of a news article discussing Wik-\nileaks, in which the human translator chose the\ncorrect Wikileaks publication instead of Wikileaks\ndisclosure andwhistleblower activists instead of\ndisclosure activists .\n13Of course, since we only look at unigrams here, and the per-\nformance of the sentence-level SVM is not very high anyway,\nall these features have in common that they do not necessarily\ngeneralize to other domains or MT-systems.\nBefore normalization After normalization\nMost indicative for MT Most indicative for HT Most indicative for MT Most indicative for HT\n1-grams 2-grams 1-grams 2-grams 1-grams 2-grams 1-grams 2-grams\nolympiakos are said ` the riders olympiakos \" proctor u.s. the riders\naffair \" proctor – the 2015 affair are said program consequently ,\nforsa 2010 , u.s. consequently , forsa book \" nearly the 2015\nrival per cent nearly projects , rays 2010 , anticipated . the\nrays almost the program . the rival per cent everybody projects ,\nschoolyard the flat anticipated life \" disclosure almost the premier <93>the hunting\ndisclosure in view <93>the - weiss jury be put lama <92>s a part\njury with industry premier a part succeed and later weiss as for\nTable 8: Best features (1-gram and 2-gram models) in the SVM classifier per class, before and after normalizing punctuation.\nFor the best unigrams indicative of HT, there are\nsome signs of simplification by the MT system.\nIt never uses nearly oranticipate , instead gener-\nally opting for almost andexpected . Similarly, hu-\nman translators sometimes used U.S. to refer to the\nUnited States, while the MT system always uses\nUS. The fact that we used British English for the\nDeepL translations might also play a role: program\nis indicative for HT since the MT system generally\nused programme .\n6 Conclusions\nIn this paper we trained classifiers to automat-\nically distinguish between human and machine\ntranslations for German-to-English. Our classifiers\nare built by pre-training state-of-the-art language\nmodels. We use the test sets of the WMT shared\ntasks, to ensure that the machine translation sys-\ntems we use (DeepL and Google) did not see the\ndata already during training. 
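The "most indicative" features discussed here and shown in Table 8 can be read directly off the weights of the linear model; a minimal sketch with scikit-learn follows (loading of the training texts and labels is assumed to follow the setup above, and CountVectorizer's default tokenization is used for brevity instead of the spaCy tokenization used in the paper).

```python
import numpy as np
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.svm import LinearSVC

def top_features(texts, labels, k=10):
    """Return the k n-grams with the largest weights towards each class."""
    vec = CountVectorizer(ngram_range=(1, 2))
    X = vec.fit_transform(texts)
    clf = LinearSVC().fit(X, labels)               # labels: 'HT' / 'MT'
    names = np.array(vec.get_feature_names_out())
    order = np.argsort(clf.coef_[0])
    # Most negative weights push towards clf.classes_[0], most positive
    # towards clf.classes_[1].
    return {clf.classes_[0]: names[order[:k]].tolist(),
            clf.classes_[1]: names[order[-k:]][::-1].tolist()}

# texts and labels would be the sentence-level training data described above.
```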
Throughout a number\nof experiments, we show that: (i) the task is quite\nchallenging, as our best sentence-level systems ob-\ntain around 65% accuracy, (ii) using translationese\ndata during training is only beneficial if there is\nlimited data available, (iii) the accuracy drops con-\nsiderably when performing cross MT-system eval-\nuating, (iv) accuracy improves when performing\nthe task on the document-level and (v) normalizing\npunctuation (and thus avoiding certain shortcuts)\ndoes not have an impact on model performance.\nIn future work, we aim to do a number of things.\nFor one, we want to experiment with both trans-\nlation directions and different source languages\ninstead of just German. Second, we want to\nperform cross-domain experiments (as in Bhard-\nwaj et al. (2020)), as we currently only lookedat news texts.14Third, we want to look at the\neffect of the source language: does a monolin-\ngual model that is trained on English translations\nfrom German still work on translations into En-\nglish from different source languages? This can\nshed on light on the question in what sense gen-\neral source language-independent features that dis-\ncriminate between HT and MT are actually identi-\nfied by the model. Fourth, we plan to also use the\nsource sentence, with a multilingual pre-trained\nLM, following Bhardwaj et al. (2020). This ad-\nditional information is expected to lead to better\nresults. While the source sentence is not always\navailable, there are real-world cases in which it is,\ne.g. filtering crawled parallel corpora. Fifth, we\nwould like to expand the task to a 3-way classi-\nfication, as in the least restrictive scenario, given\na text in a language, it could be either originally\nwritten in that language, human translated from\nanother language or machine translated from an-\nother language.\n7 Acknowledgements\nThe authors received funding from the Euro-\npean Union’s Connecting Europe Facility 2014-\n2020 - CEF Telecom, under Grant Agreement No.\nINEA/CEF/ICT/A2020/2278341 (MaCoCu). This\ncommunication reflects only the authors’ views.\nThe Agency is not responsible for any use that\nmay be made of the information it contains. We\nthank the Center for Information Technology of\nthe University of Groningen for providing access\nto the Peregrine high performance computing clus-\nter. Finally, we thank all our MaCoCu colleagues\nfor their valuable feedback throughout the project.\n14Note that this domain has a real-world application: the de-\ntection of fake news, given the fact that MT could be use to\nspread such news in other languages (Bonet-Jover, 2020).\nReferences\nAharoni, Roee, Moshe Koppel, and Yoav Goldberg.\n2014. Automatic detection of machine translated\ntext and translation quality estimation. In Proceed-\nings of the 52nd Annual Meeting of the Association\nfor Computational Linguistics (Volume 2: Short Pa-\npers) , pages 289–295.\nAhrenberg, Lars. 2017. Comparing machine transla-\ntion and human translation: A case study. In Pro-\nceedings of the Workshop Human-Informed Trans-\nlation and Interpreting Technology , pages 21–28,\nVarna, Bulgaria, September. Association for Com-\nputational Linguistics, Shoumen, Bulgaria.\nArase, Yuki and Ming Zhou. 2013. Machine transla-\ntion detection from monolingual web-text. In Pro-\nceedings of the 51st Annual Meeting of the Associa-\ntion for Computational Linguistics (Volume 1: Long\nPapers) , pages 1597–1607.\nBarrault, Loïc, Ond ˇrej Bojar, Marta R. 
Costa-jussà,\nChristian Federmann, Mark Fishel, Yvette Gra-\nham, Barry Haddow, Matthias Huck, Philipp Koehn,\nShervin Malmasi, Christof Monz, Mathias Müller,\nSantanu Pal, Matt Post, and Marcos Zampieri. 2019.\nFindings of the 2019 conference on machine transla-\ntion (WMT19). In Proceedings of the Fourth Con-\nference on Machine Translation (Volume 2: Shared\nTask Papers, Day 1) , pages 1–61, Florence, Italy,\nAugust. Association for Computational Linguistics.\nBeltagy, Iz, Matthew E Peters, and Arman Cohan.\n2020. Longformer: The long-document transformer.\narXiv preprint arXiv:2004.05150 .\nBhardwaj, Shivendra, David Alfonso Hermelo,\nPhillippe Langlais, Gabriel Bernier-Colborne, Cyril\nGoutte, and Michel Simard. 2020. Human or neural\ntranslation? In Proceedings of the 28th Interna-\ntional Conference on Computational Linguistics ,\npages 6553–6564, Barcelona, Spain (Online), De-\ncember. International Committee on Computational\nLinguistics.\nBonet-Jover, Alba. 2020. The disinformation battle:\nLinguistics and artificial intelligence join to beat it.\nConneau, Alexis, Kartikay Khandelwal, Naman Goyal,\nVishrav Chaudhary, Guillaume Wenzek, Francisco\nGuzmán, Edouard Grave, Myle Ott, Luke Zettle-\nmoyer, and Veselin Stoyanov. 2020. Unsupervised\ncross-lingual representation learning at scale. In\nProceedings of the 58th Annual Meeting of the Asso-\nciation for Computational Linguistics , pages 8440–\n8451, Online, July. Association for Computational\nLinguistics.\nDevlin, Jacob, Ming-Wei Chang, Kenton Lee, and\nKristina Toutanova. 2019a. BERT: Pre-training of\ndeep bidirectional transformers for language under-\nstanding. In Proceedings of the 2019 Conference of\nthe North American Chapter of the Association for\nComputational Linguistics: Human Language Tech-\nnologies, Volume 1 (Long and Short Papers) , pages4171–4186, Minneapolis, Minnesota, June. Associa-\ntion for Computational Linguistics.\nDevlin, Jacob, Ming-Wei Chang, Kenton Lee, and\nKristina Toutanova. 2019b. BERT: Pre-training of\ndeep bidirectional transformers for language under-\nstanding. In Proceedings of the 2019 Conference of\nthe North American Chapter of the Association for\nComputational Linguistics: Human Language Tech-\nnologies, Volume 1 (Long and Short Papers) , pages\n4171–4186, Minneapolis, Minnesota, June. Associa-\ntion for Computational Linguistics.\nFrankenberg-Garcia, Ana. 2021. Can a corpus-driven\nlexical analysis of human and machine translation\nunveil discourse features that set them apart? Tar-\nget. International Journal of Translation Studies , 09.\nFu, Yingxue and Mark-Jan Nederhof. 2021. Auto-\nmatic classification of human translation and ma-\nchine translation: A study from the perspective of\nlexical diversity. In Proceedings for the First Work-\nshop on Modelling Translation: Translatology in the\nDigital Age , pages 91–99, online, May. Association\nfor Computational Linguistics.\nHe, Pengcheng, Jianfeng Gao, and Weizhu Chen.\n2021a. Debertav3: Improving deberta using electra-\nstyle pre-training with gradient-disentangled embed-\nding sharing. arXiv preprint arXiv:2111.09543 .\nHe, Pengcheng, Xiaodong Liu, Jianfeng Gao, and\nWeizhu Chen. 2021b. DeBERTa: Decoding-\nenhanced BERT with disentangled attention. 
In\nInternational Conference on Learning Representa-\ntions .\nKoehn, Philipp, Hieu Hoang, Alexandra Birch, Chris\nCallison-Burch, Marcello Federico, Nicola Bertoldi,\nBrooke Cowan, Wade Shen, Christine Moran,\nRichard Zens, Chris Dyer, Ond ˇrej Bojar, Alexandra\nConstantin, and Evan Herbst. 2007. Moses: Open\nsource toolkit for statistical machine translation. In\nACL Companion Volume Proceedings of the Demo\nand Poster Sessions , pages 177–180, Prague, Czech\nRepublic, June.\nLe, Hang, Loïc Vial, Jibril Frej, Vincent Segonne, Max-\nimin Coavoux, Benjamin Lecouteux, Alexandre Al-\nlauzen, Benoit Crabbé, Laurent Besacier, and Didier\nSchwab. 2020. FlauBERT: Unsupervised language\nmodel pre-training for French. In Proceedings of\nthe 12th Language Resources and Evaluation Con-\nference , pages 2479–2490, Marseille, France, May.\nEuropean Language Resources Association.\nLewis, Mike, Yinhan Liu, Naman Goyal, Mar-\njan Ghazvininejad, Abdelrahman Mohamed, Omer\nLevy, Veselin Stoyanov, and Luke Zettlemoyer.\n2020. BART: Denoising sequence-to-sequence pre-\ntraining for natural language generation, translation,\nand comprehension. In Proceedings of the 58th An-\nnual Meeting of the Association for Computational\nLinguistics , pages 7871–7880, Online, July. Associ-\nation for Computational Linguistics.\nLi, Yitong, Rui Wang, and Hai Zhao. 2015. A machine\nlearning method to distinguish machine translation\nfrom human translation. In Proceedings of the 29th\nPacific Asia Conference on Language, Information\nand Computation: Posters , pages 354–360.\nLiu, Yinhan, Myle Ott, Naman Goyal, Jingfei Du, Man-\ndar Joshi, Danqi Chen, Omer Levy, Mike Lewis,\nLuke Zettlemoyer, and Veselin Stoyanov. 2019.\nRoberta: A robustly optimized bert pretraining ap-\nproach. arXiv preprint arXiv:1907.11692 .\nMartin, Louis, Benjamin Muller, Pedro Javier Or-\ntiz Suárez, Yoann Dupont, Laurent Romary, Éric\nde la Clergerie, Djamé Seddah, and Benoît Sagot.\n2020. CamemBERT: a tasty French language model.\nInProceedings of the 58th Annual Meeting of the\nAssociation for Computational Linguistics , pages\n7203–7219, Online, July. Association for Computa-\ntional Linguistics.\nMolchanov, Alexander. 2019. Promt systems for wmt\n2019 shared translation task. In Proceedings of the\nFourth Conference on Machine Translation (Volume\n2: Shared Task Papers, Day 1) , pages 302–307, Flo-\nrence, Italy, August. Association for Computational\nLinguistics.\nNg, Nathan, Kyra Yee, Alexei Baevski, Myle Ott,\nMichael Auli, and Sergey Edunov. 2019. Facebook\nFAIR’s WMT19 news translation task submission.\nInProceedings of the Fourth Conference on Machine\nTranslation (Volume 2: Shared Task Papers, Day 1) ,\npages 314–319, Florence, Italy, August. Association\nfor Computational Linguistics.\nNguyen-Son, Hoang-Quoc, Tran Phuong Thao, Seira\nHidano, and Shinsaku Kiyomoto. 2019a. Detecting\nmachine-translated paragraphs by matching similar\nwords. arXiv preprint arXiv:1904.10641 .\nNguyen-Son, Hoang-Quoc, Tran Phuong Thao, Seira\nHidano, and Shinsaku Kiyomoto. 2019b. Detect-\ning machine-translated text using back translation.\narXiv preprint arXiv:1910.06558 .\nNguyen-Son, Hoang-Quoc, Tran Thao, Seira Hidano,\nIshita Gupta, and Shinsaku Kiyomoto. 2021. Ma-\nchine translated text detection through text similar-\nity with round-trip translation. In Proceedings of the\n2021 Conference of the North American Chapter of\nthe Association for Computational Linguistics: Hu-\nman Language Technologies , pages 5792–5797.\nPedregosa, F., G. Varoquaux, A. 
Gramfort, V . Michel,\nB. Thirion, O. Grisel, M. Blondel, P. Prettenhofer,\nR. Weiss, V . Dubourg, J. Vanderplas, A. Passos,\nD. Cournapeau, M. Brucher, M. Perrot, and E. Duch-\nesnay. 2011. Scikit-learn: Machine learning in\nPython. Journal of Machine Learning Research ,\n12:2825–2830.\nRei, Ricardo, Craig Stewart, Ana C Farinha, and Alon\nLavie. 2020. COMET: A neural framework for MT\nevaluation. In Proceedings of the 2020 Conferenceon Empirical Methods in Natural Language Process-\ning (EMNLP) , pages 2685–2702, Online, November.\nAssociation for Computational Linguistics.\nRosendahl, Jan, Christian Herold, Yunsu Kim, Miguel\nGraça, Weiyue Wang, Parnia Bahar, Yingbo Gao,\nand Hermann Ney. 2019. The RWTH Aachen Uni-\nversity machine translation systems for WMT 2019.\nInProceedings of the Fourth Conference on Machine\nTranslation (Volume 2: Shared Task Papers, Day 1) ,\npages 349–355, Florence, Italy, August. Association\nfor Computational Linguistics.\nToral, Antonio and Víctor M. Sánchez-Cartagena.\n2017. A multifaceted evaluation of neural versus\nphrase-based machine translation for 9 language di-\nrections. In Proceedings of the 15th Conference of\nthe European Chapter of the Association for Compu-\ntational Linguistics: Volume 1, Long Papers , pages\n1063–1073, Valencia, Spain, April. Association for\nComputational Linguistics.\nToral, Antonio. 2019. Post-editese: an exacerbated\ntranslationese. In Proceedings of Machine Transla-\ntion Summit XVII: Research Track , pages 273–281,\nDublin, Ireland, August. European Association for\nMachine Translation.\nVanmassenhove, Eva, Dimitar Shterionov, and Andy\nWay. 2019. Lost in translation: Loss and decay of\nlinguistic richness in machine translation. In Pro-\nceedings of Machine Translation Summit XVII: Re-\nsearch Track , pages 222–232, Dublin, Ireland, Au-\ngust. European Association for Machine Translation.\nWolf, Thomas, Lysandre Debut, Victor Sanh, Julien\nChaumond, Clement Delangue, Anthony Moi, Pier-\nric Cistac, Tim Rault, Rémi Louf, Morgan Funtow-\nicz, Joe Davison, Sam Shleifer, Patrick von Platen,\nClara Ma, Yacine Jernite, Julien Plu, Canwen Xu,\nTeven Le Scao, Sylvain Gugger, Mariama Drame,\nQuentin Lhoest, and Alexander M. Rush. 2020.\nTransformers: State-of-the-art natural language pro-\ncessing. In Proceedings of the 2020 Conference on\nEmpirical Methods in Natural Language Process-\ning: System Demonstrations , pages 38–45, Online,\nOctober. Association for Computational Linguistics.\nYang, Zhilin, Zihang Dai, Yiming Yang, Jaime Car-\nbonell, Russ R Salakhutdinov, and Quoc V Le. 2019.\nXlnet: Generalized autoregressive pretraining for\nlanguage understanding. Advances in neural infor-\nmation processing systems , 32.", "main_paper_content": null }
{ "decision": "Unknown", "reviews": [] }
0
0
[]
[]
{ "id": "9y_VHB5DHkB", "year": null, "venue": "EAMT 2023", "pdf_link": "https://aclanthology.org/2023.eamt-1.55.pdf", "forum_link": "https://openreview.net/forum?id=9y_VHB5DHkB", "arxiv_id": null, "doi": null }
{ "title": "MaCoCu: Massive collection and curation of monolingual and bilingual data: focus on under-resourced languages", "authors": [ "Marta Bañón", "Malina Chichirau", "Miquel Esplà-Gomis", "Mikel L. Forcada", "Aarón Galiano Jiménez", "Taja Kuzman", "Nikola Ljubesic", "Rik van Noord", "Leopoldo Pla Sempere", "Gema Ramírez-Sánchez", "Peter Rupnik", "Vit Suchomel", "Antonio Toral", "Jaume Zaragoza-Bernabeu" ], "abstract": "Marta Bañón, Mălina Chichirău, Miquel Esplà-Gomis, Mikel Forcada, Aarón Galiano-Jiménez, Taja Kuzman, Nikola Ljubešić, Rik van Noord, Leopoldo Pla Sempere, Gema Ramírez-Sánchez, Peter Rupnik, Vit Suchomel, Antonio Toral, Jaume Zaragoza-Bernabeu. Proceedings of the 24th Annual Conference of the European Association for Machine Translation. 2023.", "keywords": [], "raw_extracted_content": "MaCoCu: Massive collection and curation of monolingual and bilingual\ndata: focus on under-resourced languages\nMarta Ba ˜n´on†, M˘alina Chichir ˘au♦, Miquel Espl `a-Gomis⋆, Mikel L. Forcada⋆,\nAar´on Galiano-Jim ´enez⋆, Taja Kuzman‡, Nikola Ljube ˇsi´c‡, Rik van Noord♦,\nLeopoldo Pla Sempere⋆, Gema Ram ´ırez-S ´anchez†, Peter Rupnik‡, V´ıt Suchomel‡,\nAntonio Toral♦, Jaume Zaragoza†\n‡Joˇzef Stefan Institute,†Prompsit,♦Rijksuniversiteit Groningen,⋆Universitat d’Alacant\n‡{taja.kuzman,nikola.ljubesic,peter.rupnik }@ijs.si ,\[email protected]\n†{mbanon,gramirez,jzaragoza }@prompsit.com\n♦{r.i.k.van.noord,a.toral.ruiz,m.chichirau }@rug.nl\n⋆{mespla,mlf,cgarcia,lpla }@dlsi.ua.es\nAbstract\nWe present the most relevant results of the\nproject MaCoCu: Massive collection and\ncuration of monolingual and bilingual data:\nfocus on under-resourced languages in its\nsecond year. Parallel and monolingual cor-\npora have been produced for eleven low-\nresourced European languages by crawling\nlarge amounts of textual data from selected\ntop-level domains of the Internet; both hu-\nman and automatic evaluation show its use-\nfulness. In addition, several large language\nmodels pretrained on MaCoCu data have\nbeen published, as well as the code used to\ncollect and curate the data.\n1 Introduction\nThis paper describes the main outcomes of the\nproject MaCoCu: Massive collection and curation\nof monolingual and bilingual data: focus on under-\nresourced languages (Ba˜n´on et al., 2022), span-\nning from June 2021 to July 2023. MaCoCu is\naimed at building large and high-quality monolin-\ngual and parallel (with English) corpora for ten low-\nresourced European languages (see Table 1). The\ninternational consortium behind this project con-\nsists of four partners: Jo ˇzef Stefan Institute (Slove-\nnia), Rijksuniversiteit Groningen (Netherlands),\nPrompsit Language Engineering S.L. (Spain), and\nUniversitat d’Alacant (Spain; coordinator).\nOther existing initiatives, such as Paracrawl1or\nOscar2exploit existing resources such as Common\nCrawl3or the Internet Archive.4Our strategy con-\n©2022 The authors. 
This article is licensed under a Creative\nCommons 3.0 licence, no derivative works, attribution, CC-\nBY-ND.\n1https://paracrawl.eu/\n2https://oscar-project.org/\n3https://commoncrawl.org/\n4https://archive.org/sists in automatically crawling top-level domains\n(TLD), potentially containing substantial amounts\nof text in the targeted languages,5and then apply-\ning a monolingual and a parallel curation pipelines.\nThe evaluation of the first data release (van Noord\net al., 2022a) confirms the usefulness of these data\nfor different natural-language processing tasks.\n2 Collected corpora\nMonolingual and parallel corpora are built from\ncrawled data by applying a thorough cleaning\nprocess, including noise fixing/filtering and re-\nmoval of near-duplicate/boilerplate text. Corpora\nare then automatically annotated with: (a) doc-\nument and paragraph IDs; (b) language variety\n(e.g. British/American English); (c) document-\nlevel affinity to DSIs identified through domain\nmodelling (van Noord et al., 2022b); (d) personal\ninformation; and (e) identification of translated text:\neither human or machine translations (only for par-\nallel corpora). Table 1 shows the size of the corpora\nfor the second data release, published in April 2023.\n2.1 Data evaluation\nTo the date, evaluation only covers the seven lan-\nguages included in the first data release of the ac-\ntion, made public in April of 2022.\nMono-lingual A set of pre-trained language mod-\nels (LMs)6has been built and released for Icelandic,\nMaltese and Bulgarian/Macedonian by continuing\nthe training of multilingual XLM-RoBERTa-large\n(Conneau et al., 2020) using only MaCoCu data for\nall languages. These models outperform monolin-\ngual baselines, and XLM-R and large models on\nthe POS, NER and COPA (Roemmele et al., 2011)\n5National TLDs such as .hr for Croatian, or .is for Ice-\nlandic, and also generic TLDs such as .com ,.org , or.eu.\n6https://huggingface.co/MaCoCu\nMonolingual Parallel\nLanguage Docs. Words Segs. Words\nTurkish 16.0 4344.9 1.6 89.2\nBulgarian 10.5 3506.2 1.8 72.1\nCroatian 8.1 2363.7 2.3 99.5\nSlovenian 6.3 1920.1 1.9 85.0\nMacedonian 2.0 524.1 0.4 18.3\nIcelandic 1.7 644.5 0.3 10.6\nMaltese 0.5 347.9 0.9 53.9\nAlbanian 1.7 625.7 0.5 24.3\nSerbian 7.5 2491.0 2.1 95.9\nMontenegrin 0.6 161.4 0.2 11.2\nBosnian 2.8 730.3 0.5 22.2\nTable 1: Sizes for corpora in the 2nd data release. Monolingual\ncorpora are measured in millions of documents (Docs.) and\nmillions of words. Parallel corpora are measured in millions\nof parallel segments (Segs.) and millions of words. Bosnian is\na bonus language as it was not initially covered in the action.\nbg is mk mt tr\nXLM-R-base 56.9 55.2 55.3 52.2 53.2\nXLM-R-large 53.1 54.3 52.5 54.0 50.5\nMonolingual LM — 54.6 — 55.6 56.4\nXLM-R + MaCoCu 54.6 59.6 55.6 54.4 58.5\nTable 2: Test set COPA scores for baseline LMs compared to\ncontinuing training XLM-R-large on MaCoCu data.\nevaluation tasks. Table 2 shows the results for the\nCOPA test set, the most challenging evaluation task.\nFor Bulgarian/Macedonian we also train an LM\nfrom scratch using the RoBERTa (Liu et al., 2019)\narchitecture, dubbed BERTovski, which reached\ncompetitive performance with XLM-R.\nParallel Parallel data were extrinsically evaluated\nfirst training neural machine translation systems\non large data sets available on OPUS7(ParaCrawl,\nCommonCrawl, Tilde), and comparing the results\nobtained when adding the MaCoCu data to the train-\ning set. 
Results show improved performance for\nall languages across different evaluation sets and\nmetrics. These results were confirmed by human\nevaluation (van Noord et al., 2022a).\n3 Free/open-source pipeline\nThe curation pipelines used to produce MaCoCu\ncorpora, Monotextor8and Bitextor,9have been re-\n7https://opus.nlpl.eu/\n8https://github.com/bitextor/monotextor\n9https://github.com/bitextor/bitextorleased under free/open-source licences. Crawling\nand corpora-enrichment software have been also\nreleased under the MaCoCu10GitHub organisation.\n4 Acknowledgment\nThis action has received funding from the Euro-\npean Union’s Connecting Europe Facility 2014-\n2020 - CEF Telecom, under Grant Agreement No.\nINEA/CEF/ICT/A2020/2278341. The contents of\nthis publication are the sole responsibility of its au-\nthors and do not necessarily reflect the opinion of\nthe European Union.\nReferences\nBa˜n´on, Marta, Miquel Espl `a-Gomis, Mikel L. For-\ncada, Cristian Garc ´ıa-Romero, Taja Kuzman, Nikola\nLjube ˇsi´c, Rik van Noord, Leopoldo Pla Sempere,\nGema Ram ´ırez-S ´anchez, Peter Rupnik, V ´ıt Suchomel,\nAntonio Toral, Tobias van der Werff, and Jaume\nZaragoza. 2022. MaCoCu: Massive collection and\ncuration of monolingual and bilingual data: focus on\nunder-resourced languages. In Proceedings of the\n23rd Annual Conference of the EAMT , pages 303–\n304, Ghent, Belgium, June.\nConneau, Alexis, Kartikay Khandelwal, Naman Goyal,\nVishrav Chaudhary, Guillaume Wenzek, Francisco\nGuzm ´an, Edouard Grave, Myle Ott, Luke Zettle-\nmoyer, and Veselin Stoyanov. 2020. Unsupervised\ncross-lingual representation learning at scale. In Pro-\nceedings of the 58th Annual Meeting of the ACL ,\npages 8440–8451, Online, July.\nLiu, Yinhan, Myle Ott, Naman Goyal, Jingfei Du, Man-\ndar Joshi, Danqi Chen, Omer Levy, Mike Lewis, Luke\nZettlemoyer, and Veselin Stoyanov. 2019. RoBERTa:\nA robustly optimized BERT pretraining approach.\narXiv preprint arXiv:1907.11692 .\nRoemmele, Melissa, Cosmin Bejan, and Andrew Gor-\ndon. 2011. Choice of plausible alternatives: An eval-\nuation of commonsense causal reasoning. In AAAI\nSpring Symposium - Technical Report , pages 90–95.\nvan Noord, Rik, Miquel Espl `a-Gomis, Nikola Ljube ˇsi´c,\nTaja Kuzman, Gema Ram ´ırez-S ´anchez, Peter Rupnik,\nand Antonio Toral. 2022a. MaCoCu Evaluation\nReport.\nvan Noord, Rik, Cristian Garc ´ıa-Romero, Miquel Espl `a-\nGomis, Leopoldo Pla Sempere, and Antonio Toral.\n2022b. Building domain-specific corpora from the\nweb: the case of European digital service infrastruc-\ntures. In Proceedings of the BUCC Workshop within\nLREC 2022 , pages 23–32, Marseille, France, June.\n10https://github.com/macocu", "main_paper_content": null }
{ "decision": "Unknown", "reviews": [] }
0
0
[]
[]
{ "id": "aQnSt193Az", "year": null, "venue": "EAMT 2023", "pdf_link": "https://aclanthology.org/2023.eamt-1.21.pdf", "forum_link": "https://openreview.net/forum?id=aQnSt193Az", "arxiv_id": null, "doi": null }
{ "title": "Automatic Discrimination of Human and Neural Machine Translation in Multilingual Scenarios", "authors": [ "Malina Chichirau", "Rik van Noord", "Antonio Toral" ], "abstract": null, "keywords": [], "raw_extracted_content": "Automatic Discrimination of Human and Neural\nMachine Translation in Multilingual Scenarios\nMalina Chichirau\nBernoulli Institute\nUniversity of Groningen\[email protected] van Noord\nCLCG\nUniversity of Groningen\[email protected] Toral\nCLCG\nUniversity of Groningen\[email protected]\nAbstract\nWe tackle the task of automatically dis-\ncriminating between human and machine\ntranslations. As opposed to most previ-\nous work, we perform experiments in a\nmultilingual setting, considering multiple\nlanguages and multilingual pretrained lan-\nguage models. We show that a classifier\ntrained on parallel data with a single source\nlanguage (in our case German–English)\ncan still perform well on English transla-\ntions that come from different source lan-\nguages, even when the machine transla-\ntions were produced by other systems than\nthe one it was trained on. Additionally, we\ndemonstrate that incorporating the source\ntext in the input of a multilingual classifier\nimproves (i) its accuracy and (ii) its robust-\nness on cross-system evaluation, compared\nto a monolingual classifier. Furthermore,\nwe find that using training data from mul-\ntiple source languages (German, Russian,\nand Chinese) tends to improve the accu-\nracy of both monolingual and multilingual\nclassifiers. Finally, we show that bilin-\ngual classifiers and classifiers trained on\nmultiple source languages benefit from be-\ning trained on longer text sequences, rather\nthan on sentences.\n1 Introduction\nIn many NLP tasks one may want to filter out ma-\nchine translations (MT), but keep human transla-\ntions (HT). Consider, for example, the construc-\n© 2023 The authors. This article is licensed under a Creative\nCommons 4.0 licence, no derivative works, attribution, CC-\nBY-ND.tion of parallel corpora used for training MT sys-\ntems: filtering out MT output is getting progres-\nsively harder, given the ever-increasing quality of\nneural MT (NMT) systems. Moreover, the exis-\ntence of such high-quality NMT systems might ag-\ngravate the problem, as people are getting more\nlikely to employ them when creating texts. In addi-\ntion, it is also hard to get fairtraining data to build\na classifier that can distinguish between these two\ntypes of translations, since publicly-available par-\nallel corpora with human translations were likely\nused in the training of well-known publicly avail-\nable MT systems (such as Google Translate or\nDeepL). Therefore, we believe that making the\nmost of the scarcely available (multilingual) train-\ning data is a crucial research direction.\nPrevious work aiming at discriminating between\nHT and NMT operates mostly in a monolingual\nsetting (Fu and Nederhof, 2021; van der Werff\net al., 2022), To our knowledge, the only ex-\nception is Bhardwaj et al. (2020), who targeted\nEnglish–French, and fine-tuned not only French\nLMs (monolingual target-only setting), but also\nmultilingual LMs, so that the classifier had also ac-\ncess to the source text. However, this work used an\nin-house data set, therefore limiting reproducibil-\nity and practical usefulness. There is also older\nwork that tackled statistical MT (SMT) vs HT clas-\nsification (Arase and Zhou, 2013; Aharoni et al.,\n2014; Li et al., 2015). 
Nevertheless, since both the\nMT paradigm (SMT) and the classifiers used are\nnot state-of-the-art anymore, less recent studies are\nof limited relevance today.\nCompared to previous work, this paper explores\nthe classification of HT vs NMT in the multi-\nlingual scenario in more depth, considering sev-\neral languages and multilingual LMs. We demon-\nstrate that classifiers trained on parallel data with\na single source language still work well when ap-\nplied to translations from other source languages\n(Experiment 1 ). We show improved performance\nfor fine-tuning multilingual LMs by incorporating\nthe source text ( Experiment 2 ), which also dimin-\nishes the gap between training and testing on dif-\nferent MT systems ( Experiment 3 ). Moreover,\nwe improve performance when training on addi-\ntional training data from different source languages\n(Experiment 4 ) and full documents instead of iso-\nlated sentences ( Experiment 5 ).\n2 Method\n2.1 Data\nTo get the source texts and human translation part\nof the data set, we use the data sets provided across\nthe WMT news shared tasks of the past years.1\nAs explained in the previous section, we only use\nthe WMT test sets, to (reasonably) ensure that the\npopular MT systems we will be using (Google\nTranslate and DeepL) did not use this as training\ndata. Note that if any of the MT systems had used\nthis data for training the task would actually be\nharder , since their translations for the data would\nbe expected to resemble more human translations\nthan if this data had not been used for training.\nAn alternative would be to use in-house datasets,\nlike Bhardwaj et al. (2020), but that also comes\nwith an important drawback, namely limited repro-\nducibility.\nWe run experiments across 7 language pairs\n(German, Russian, Chinese, Finnish, Gujarati,\nKazakh, and Lithuanian to English) and use only\nthe source texts that were originally written in the\nsource language, following the findings by Zhang\nand Toral (2019). The data of WMT19 functions as\nthe test set for all languages, while WMT18 is the\ndevelopment set (only used for German, Russian\nand Chinese, since the other languages are tested in\na zero-shot fashion). Detailed data splits are shown\nin Table 1.\n2.2 Translations\nWe obtain the MT part of the data set by trans-\nlating the non-English source texts to English by\nusing Google Translate or DeepL. The translations\nwere obtained in November-December 2022, ex-\ncept for the German translations, which we take\n1For example, https://www.statmt.org/wmt19/\ntranslation-task.htmlTrain Dev Test\nSentence-level\nGerman (WMT08-19) 8,242 1,498 2,000\nRussian (WMT15-19) 4,136 1,386 1,805\nChinese (WMT17-19) 878 2,260 1,838\nFinnish (WMT19) — — 1,996\nGujarati (WMT19) — — 1,016\nKazakh (WMT19) — — 1,000\nLithuanian (WMT19) — — 1,000\nDocument-level\nGerman (WMT08-19) 366 69 145\nRussian (WMT15-19) 249 115 196\nChinese (WMT17-19) 123 222 163\nTable 1: Number of sentences and documents per split for the\nlanguages used throughout this paper.\nfrom van der Werff et al. (2022) and were obtained\nin November 2021.2\nThe data set we feed to our classification model\nis built by selecting exactly one human transla-\ntion and one machine translation (either Google or\nDeepL) per source text. This way, we ensure there\nis no influence of the domain of the texts, while\nsimultaneously ensuring a perfectly balanced data\nset for each experiment. 
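As a rough illustration of this construction (our own sketch, not the authors' released code; the field names src/ht/google/deepl are hypothetical), a balanced set can be built along these lines:

    def build_balanced_set(examples, mt_system='google'):
        # examples: list of dicts with hypothetical keys 'src', 'ht', 'google', 'deepl'
        # label 0 = human translation, label 1 = machine translation
        dataset = []
        for ex in examples:
            dataset.append({'src': ex['src'], 'text': ex['ht'], 'label': 0})
            dataset.append({'src': ex['src'], 'text': ex[mt_system], 'label': 1})
        return dataset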
Note that this also means\nthat we actually train and test on twice as much\ndata as is reported in Table 1. Target-only ormono-\nlingual classifiers are trained only on the English\ntranslations, while source + target ormultilingual\nclassifiers are trained on both the source text and\nthe English translation thereof. For evaluation, we\nalso use MT outputs from selected WMT2019’s\nsubmissions.3\n2.3 Classifiers\nWe follow previous work (Bhardwaj et al., 2020;\nvan der Werff et al., 2022) in fine-tuning a pre-\ntrained language model on our task. We use\nDEBERTA -V3 (He et al., 2021) for the target-only\nclassifiers since this was the best model by van der\nWerff et al. (2022). For the source + target classi-\nfiers we test M-BERT (Devlin et al., 2019), M-BART\n(Lewis et al., 2020), XLM -R(Conneau et al., 2020)\nand M-DEBERTA (He et al., 2021), while Bhardwaj\net al. (2020) only used M-BERT and XLM -R.\n2We translated the German test set in April 2023 with both\nGoogle and DeepL and compared them to the original trans-\nlation of November 2021. We found BLEU scores of 98.27\nand 98.54 for Google and DeepL, respectively, leading us to\nconclude that there are no substantial differences between the\ntwo versions of the MT systems.\n3Details in Appendix A (Table 8).\nTrained on Google translations Trained on DeepL translations\n↓Eval de-d de-t fi gu kk lt ru zh de-d de-t fi gu kk lt ru zh\nDeepL 66.0 57.4 64.8 — — 57.6 54.6 53.8 71.7 66.9 68.7 — — 68.6 59.5 67.7\nGoogle 75.0 65.6 70.8 62.0 68.6 70.3 63.5 58.5 70.0 64.8 65.7 59.5 65.1 65.0 60.6 61.8\nWMT 1 57.3 70.7 67.0 65.2 66.8 62.9 58.8 58.2 60.9 66.8 66.5 60.5 64.3 66.9 58.6 65.7\nWMT 2 58.1 70.2 68.5 63.1 68.2 63.8 56.9 57.1 60.6 65.9 64.5 58.9 65.9 68.0 49.1 63.2\nWMT 3 58.9 64.9 65.2 59.4 70.9 67.0 56.4 53.7 55.7 47.1 49.2 38.6 64.5 53.9 46.2 48.5\nWMT 4 57.0 64.1 47.6 61.8 54.7 61.5 59.4 53.5 47.2 39.8 30.5 52.9 41.9 47.0 51.2 55.2\nTable 2: Accuracies for the target-only DEBERTA -V3 model when training on English translations (by Google or DeepL) from\nGerman and testing on translations from a different source language and different MT system on the test set. For German we\nreport results both on the development (de-d) and test (de-t) sets. DeepL does not offer translations from Gujarati or Kazakh.\nWe fine-tuned our pre-trained language models\nby using the Transformers library from Hugging-\nFace (Wolf et al., 2020). We use the ForSequence-\nClassification implementations, both for the target-\nonly as well as the source + target models. For\nthe latter, this means that the source and target are\nconcatenated by adding the [SEP] special charac-\nter, which is the default implementation when pro-\nviding two input sentences. We did experiment\nwith adding source and target in the reverse order,\nbut did not obtain improved performance. We did\nnot experiment with adding a language tag to the\nsource text.\n2.4 Experimental details\nThe results for Experiment 1 were obtained with-\nout any hyper-parameter tuning - we simply took\nthe settings of van der Werff et al. (2022). For\nfinding the best multi-lingual language model (Ex-\nperiment 2), we did perform a search over batch\nsize and learning rate on the development set. We\nperformed separate searches for the Google and\nDeepL translations, as well as the monolingual and\nbilingual settings. The final settings are shown in\nTable 9 in Appendix B. For Experiment 3 and Ex-\nperiment 4, we used the settings of the previously\nfound best models. 
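As a concrete illustration of the set-up in Sections 2.3 and 2.4, the sketch below (ours, not the authors' code; the checkpoint name and hyperparameter values are only plausible placeholders) fine-tunes a multilingual DeBERTa classifier on (source, target) pairs with the HuggingFace Transformers library; passing two text fields to the tokenizer produces the [SEP]-joined input described above:

    from datasets import Dataset
    from transformers import (AutoModelForSequenceClassification, AutoTokenizer,
                              Trainer, TrainingArguments)

    name = 'microsoft/mdeberta-v3-base'
    tok = AutoTokenizer.from_pretrained(name)
    model = AutoModelForSequenceClassification.from_pretrained(name, num_labels=2)

    # toy data: label 0 = human translation, 1 = machine translation
    data = Dataset.from_dict({
        'src': ['Das ist ein Beispiel.', 'Noch ein Satz.'],
        'tgt': ['This is an example.', 'Another sentence.'],
        'label': [0, 1],
    })

    def encode(batch):
        # two text arguments are joined as 'src [SEP] tgt'
        return tok(batch['src'], batch['tgt'], truncation=True, max_length=256)

    data = data.map(encode, batched=True)

    args = TrainingArguments(output_dir='ht-vs-mt', learning_rate=1e-5,
                             per_device_train_batch_size=16, num_train_epochs=3)
    Trainer(model=model, args=args, train_dataset=data, tokenizer=tok).train()

Dropping the first tokenizer argument gives the corresponding target-only variant of the classifier.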
For the document-level systems in Experiment 5 we used the hyperparameters listed in Table 10 in Appendix B. Reported accuracies are averaged over three runs for the sentence-level experiments (Exp 1–4) and over ten runs for the document-level experiments (Exp 5). Standard deviations (generally in the range 0.2–2.0) are omitted for brevity, except for the document-level experiments, since they tend to be higher in the latter setting. All our code, data and results are publicly available.4
4 https://github.com/Malina03/macocu-ht-vs-mt/
3 Results
3.1 Experiment 1: Testing on Translations from Different Source Languages
In our first experiment (with results in Table 2), we analyse the performance of our classifier when testing a target-only model on English translations from a different source language. Here, the machine translations for training our classifier come from Google or DeepL, while we evaluate on translations from Google, DeepL and the two top-ranked (WMT1, WMT2) and two bottom-ranked (WMT3, WMT4) WMT2019 submissions (Barrault et al., 2019). See Appendix A for additional details on these WMT submissions.
The results in Table 2 show that human and machine translations from a different source language can still be reasonably well distinguished. For certain languages, we are even very close to the performance on German (the original source language). The other languages do seem to show an influence of the source language, as the accuracies are generally slightly lower, but they are usually still comfortably above chance level. However, there are a few cases where the classifier now performs below chance level. This happened only for the bottom-ranked WMT systems (WMT3 and WMT4), which might not be representative of high-quality MT systems.
MT quality vs accuracy We are also interested in how the quality of the translations influences accuracy scores. Since we have the human (reference) translations, we plotted the accuracy score of our classifier versus an automatic MT evaluation metric, BLEU (Papineni et al., 2002), in Figure 1.5 What is quite striking here is that we actually obtain increased performance for higher-quality translations. When training on DeepL translations we actually find a significant correlation between accuracy and BLEU (R = 0.696, p < 0.0001), though for Google translations we did not (R = 0.249, p = 0.094). Intuitively, it should be easier to distinguish between low-quality MT and HT, so this is likely a side-effect of training on the high-quality translations from Google and DeepL. We consider this an important lesson for future work: if a classifier learns to distinguish high-quality MT from HT, this does not mean that distinguishing lower-quality MT comes for free.
5 Plots for COMET (Rei et al., 2020) instead of BLEU are in Appendix C.
[Figure 1 plot: two panels (Google, DeepL); x-axis: BLEU score; y-axis: accuracy (%); points distinguished by source language (German test/dev, Kazakh, Finnish, Russian, Gujarati, Chinese, Lithuanian) and by translating system (DeepL, Google, WMT1–WMT4).]
Figure 1: Accuracy versus BLEU scores for each system in Table 2, using Google or DeepL translations during training.
3.2 Experiment 2: Source-only vs Source+Target Classifiers
In our second experiment, we aim to determine whether having access to the source sentence improves classification performance.
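(As a brief aside to the quality analysis of Experiment 1 above, such an accuracy–BLEU correlation can be computed with SciPy; the sketch below uses invented numbers, not the values behind Figure 1.)

    from scipy.stats import pearsonr

    # hypothetical (BLEU, accuracy) pairs, one per evaluated MT system
    bleu = [42.1, 38.5, 33.0, 27.4, 21.8, 18.9]
    accuracy = [68.0, 66.2, 63.5, 58.1, 52.4, 49.0]

    r, p = pearsonr(bleu, accuracy)
    print('Pearson r = %.3f, p = %.4f' % (r, p))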
We test a va-\nriety of multilingual LMs, comparing their per-\nGoogle DeepL\ntgt-only src + tgt tgt-only src + tgt\nDEBERTA -V375.0 — 71.7 —\nM-BERT 65.9 71.7 65.5 66.1\nM-BART 69.3 71.7 61.9 68.1\nXLM -R 66.0 69.3 62.4 66.9\nM-DEBERTA 70.4 74.9 65.1 71.8\nTable 3: Development set accuracies of the best monolingual\nLM by van der Werff et al. (2022) ( DEBERTA -V3) and multi-\nlingual LMs, comparing the use of target-only and source +\ntarget data. The classifiers are trained and evaluated on the\nGerman–English data (Google or DeepL). Best result per col-\numn in bold.formance when having access only to the transla-\ntion (target-only) to when also having access to the\nsource sentence (source + target). Table 3 shows\nthat accuracies indeed clearly improve for all of\nthe tested LMs, with M-DEBERTA being the mul-\ntilingual LM that leads to the highest accuracy.\nNote that this model performs similarly to the best\ntarget-only monolingual LM ( DEBERTA -V3, with\nthe scores taken from van der Werff et al. (2022))\non the development set, likely due to the higher\nquality of the latter LM for English. However, on\nthe test set (also shown in Table 4), which was\nnever seen during development of the classifiers,\nthe multilingual model is actually clearly superior\n(72.3% versus 65.6%).\n3.3 Experiment 3: Cross-system Evaluation\nThe study of van der Werff et al. (2022) showed\nthat MT vs HT classifiers are sensitive to the\nMT system that was used to generate the training\ntranslations, as performance dropped considerably\nwhen doing a cross-system evaluation. However,\nwe hypothesize that giving the classifier access to\nthe source sentence will make it more robust to\nseeing translations from different MT systems at\ntraining and test times.\nWe show the results of the cross-MT sys-\ntem evaluation for the best performing target-only\n(DEBERTA -V3) and source + target ( M-DEBERTA )\nmodels in Table 4. For training on Google and test-\ning on DeepL, we still see a considerable drop in\nperformance for the source + target model (around\n9 points in both the dev and test sets for both\nEvaluated on → Dev Test\n↓Trained on Google DeepL Google DeepL\nDEBERTA -V3\nGoogle 75.0 66.0 65.6 57.4\nDeepL 70.0 71.7 64.8 66.9\nM-DEBERTA\nGoogle 74.9 66.2 72.3 63.8\nDeepL 71.3 71.8 72.7 72.0\nTable 4: Dev and test set accuracies of DEBERTA -V3 (target-\nonly) and M-DEBERTA (source + target) when trained and\nevaluated on Google and DeepL. First two rows of results\ntaken from van der Werff et al. (2022). Best score per col-\numn and classifier in bold.\nthe target-only and source + target classifiers).\nHowever, when training on DeepL and testing on\nGoogle, we do see a clear effect on the test set: the\ntarget-only model dropped 2.1% in accuracy (66.9\n→64.8), while the source + target model actually\nimproved by 0.7% (72.0 →72.7).\n3.4 Experiment 4: Training on Multiple\nSource Languages\nHere, we investigate if we can actually combine\ntraining data from different source languages to\nimprove performance. We run experiments for\nGerman, Russian and Chinese for both the target-\nonly and the source + target model, of which\nthe results are shown in Table 5. Having ad-\nditional training data from different source lan-\nguages clearly helps, even for the multilingual\nsource + target model. 
The only exception is the\nexperiment on Chinese for the multilingual model,\nas the best performance (68%) is obtained by only\ntraining on the Chinese training data.6There does\nseem to be a diminishing effect of incorporat-\ning training data from different source languages\nthough, as the best score is only once obtained\nby combining all three languages as training data.\nNevertheless, given the improved performance for\neven only small amounts of additional training data\n(Chinese has only 1,756 training instances), we see\nthis as a promising direction for future work.\n3.5 Experiment 5: Sentence- vs\nDocument-level\nWe perform a similar experiment as van der Werff\net al. (2022) by testing our classifiers on the\ndocument-level, as the WMT data sets include this\n6The best performance on Chinese, in general, was, surpris-\ningly, obtained by the target-only model (76.1% accuracy).Eval→ DEBERTA -V3 M-DEBERTA\n↓Train de zh ru de zh ru\nGerman (de) 65.6 64.2 63.3 72.3 55.1 66.1\nChinese (zh) 58.1 75.4 53.4 63.5 68.0 61.6\nRussian (ru) 56.7 52.3 63.1 64.3 56.7 69.0\nde + zh 66.6 76.1 63.7 72.7 66.2 68.7\nde + ru 66.3 62.0 67.1 73.6 58.5 71.6\nru + zh 59.7 75.5 66.2 66.3 66.0 69.3\nde + zh + ru 66.5 75.2 68.1 72.8 65.8 71.3\nTable 5: Test set accuracies on discriminating between HT\nand Google Translate with DEBERTA -V3 (target-only) and M-\nDEBERTA (source + target) when training on data from one\nversus multiple source languages. Best score per column\nshown in bold.\ninformation. We expect that the task is (a lot) eas-\nier if the classifier has access to full documents\ninstead of just sentences. We test this with both\nthe best monolingual ( DEBERTA -V3) and multi-\nlingual ( M-DEBERTA ) models on Google transla-\ntions from German.\nTruncation DeBERTa models can in principle\nwork with sequence lengths up to 24,528 tokens,\nbut that does not mean this is optimal, espe-\ncially when taking speed and memory require-\nments into account. In Table 6 we compare ac-\ncuracies for different values of maximum length,\nor in other words, different levels of truncation.\nFor DEBERTA -V3, the preferred truncation value\nis 1,024 tokens, while for M-DEBERTA we opt for\n3,072. For both models, the input documents are\nbarely truncated. The larger value for M-DEBERTA\nis expected, as those experiments have roughly\ntwice the amount of input tokens (source- + target-\nlanguage data versus just target data). Lengths\nof 3,072 ( DEBERTA -V3) or 4,096 ( M-DEBERTA )\ndid not fit into our GPU memory (NVIDIA V100)\neven with a batch size of 1, but looking at the\nscores and truncation percentages, this does not\nseem to be an issue.\nEvaluation We evaluate the models using the\npreferred truncation settings found above.7We\ntrain on either just German, or German, Russian,\nand Chinese data, and evaluate on the German\ndata.8We evaluate the performance on three dif-\nferent classifiers: (i) applying the best sentence-\nlevel model on the documents sentence by sen-\n7Hyperparameters used are shown in Appendix B (Table 10).\n8Results for Russian and Chinese are in Appendix C (Ta-\nble 11).\nmax DEBERTA -V3 M-DEBERTA\nlength T (%) T (avg) Acc. T (%) T (avg) Acc.\n512 38 132 79.4 77 793 75.9\n768 17 62 95.0 62 617 80.1\n1,024 8 32 96.4 50 472 85.3\n2,048 0.8 4 93.4 16 155 89.7\n3,072 0.0 0.0 — 5 20 91.9\nTable 6: Document-level accuracies ( Acc.) for different val-\nues of maximum length (number of tokens) on the German\ndevelopment set, trained on German data. 
T (%) indicates the percentage of training documents that were truncated. T (avg) indicates the average number of tokens that were truncated across the training set. Best score per classifier in bold.
tence, and taking the majority vote, (ii) simply training on the documents instead of sentences, and (iii) fine-tuning the best sentence-level model on documents. The last classifier is motivated by the fact that there are far fewer document-level training instances than sentence-level ones (Table 1).
Document-level classifiers The results are shown in Table 7, which allows us to draw the following conclusions. For one, fine-tuning the sentence-level model on documents is clearly preferable over simply training on documents, while also comfortably outperforming the majority-vote baseline. Fine-tuning not only leads to the highest accuracies, but also to the lowest standard deviations, indicating that this classifier is more stable than the other two. Second, we confirm our two previous findings: the models can improve performance when training on texts from a different source language (Chinese and Russian in this case) and the models clearly benefit from having access to the source text itself during training and evaluation.
Trained on → German (de) de + ru + zh
DEB M-DEB DEB M-DEB
Majority vote 68.5±8.7 73.1±5.7 75.6±6.7 76.5±4.7
Doc-level 62.6±3.6 75.3±3.9 67.3±10.7 83.0±2.2
Doc-level (ft) 81.1±2.7 86.0±1.2 87.0±2.6 88.7±1.4
Table 7: Document-level accuracies and standard deviations with DEBERTA-V3 (target-only, denoted as DEB) and M-DEBERTA (source + target, M-DEB) when evaluating on the test set that has German as the source language, using Google as the MT system. Best result per column shown in bold.
4 Conclusion
This paper has investigated the discrimination between neural machine translation (NMT) and human translation (HT) in multilingual scenarios, using as classifiers monolingual and multilingual language models (LMs) that are fine-tuned with small amounts of task-specific labelled data.
We have found that a monolingual classifier trained on English translations from a given source language still performs well above chance on English translations from other source languages. Using a multilingual LM, and therefore having access also to the source sentence, results overall in better performance than an equivalent LM that only has access to the target sentence. Such a classifier seems more robust in a cross-system situation, i.e. when the MT systems used to train and evaluate the classifier are different. Moreover, as task-specific data is limited, we experimented with (i) training on data from different source languages and (ii) training on the document level instead of the sentence level, with improved performance in both settings.
4.1 Future work
In this work, we took an important step toward developing an accurate, reliable, and accessible classifier that can distinguish between HT and MT. There are, of course, still many research directions to explore, in particular regarding combining different source languages and MT systems during training. Moreover, in many practical applications, it is unknown whether a text is actually a translation, as it can also be an original text.
Therefore, in\nfuture work, we aim to develop a classifier that can\ndistinguish between original texts, human transla-\ntions, and machine translations.\nAcknowledgements\nThe authors received funding from the Euro-\npean Union’s Connecting Europe Facility 2014-\n2020 - CEF Telecom, under Grant Agreement No.\nINEA/CEF/ICT/A2020/2278341 (MaCoCu). This\ncommunication reflects only the authors’ views.\nThe Agency is not responsible for any use that\nmay be made of the information it contains. We\nthank the Center for Information Technology of\nthe University of Groningen for providing access\nto the Peregrine high performance computing clus-\nter. Finally, we thank all our MaCoCu colleagues\nfor their valuable feedback throughout the project.\nReferences\nAharoni, Roee, Moshe Koppel, and Yoav Goldberg.\n2014. Automatic detection of machine translated\ntext and translation quality estimation. In Proceed-\nings of the 52nd Annual Meeting of the Association\nfor Computational Linguistics (Volume 2: Short Pa-\npers) , pages 289–295.\nArase, Yuki and Ming Zhou. 2013. Machine transla-\ntion detection from monolingual web-text. In Pro-\nceedings of the 51st Annual Meeting of the Associa-\ntion for Computational Linguistics (Volume 1: Long\nPapers) , pages 1597–1607.\nBarrault, Lo ¨ıc, Ond ˇrej Bojar, Marta R. Costa-juss `a,\nChristian Federmann, Mark Fishel, Yvette Gra-\nham, Barry Haddow, Matthias Huck, Philipp Koehn,\nShervin Malmasi, Christof Monz, Mathias M ¨uller,\nSantanu Pal, Matt Post, and Marcos Zampieri. 2019.\nFindings of the 2019 conference on machine transla-\ntion (WMT19). In Proceedings of the Fourth Con-\nference on Machine Translation (Volume 2: Shared\nTask Papers, Day 1) , pages 1–61, Florence, Italy,\nAugust. Association for Computational Linguistics.\nBawden, Rachel, Nikolay Bogoychev, Ulrich Germann,\nRoman Grundkiewicz, Faheem Kirefu, Antonio Va-\nlerio Miceli Barone, and Alexandra Birch. 2019.\nThe University of Edinburgh’s submissions to the\nWMT19 news translation task. In Proceedings of the\nFourth Conference on Machine Translation (Volume\n2: Shared Task Papers, Day 1) , pages 103–115, Flo-\nrence, Italy, August. Association for Computational\nLinguistics.\nBei, Chao, Hao Zong, Conghu Yuan, Qingming Liu,\nand Baoyong Fan. 2019. GTCOM neural machine\ntranslation systems for WMT19. In Proceedings of\nthe Fourth Conference on Machine Translation (Vol-\nume 2: Shared Task Papers, Day 1) , pages 116–121,\nFlorence, Italy, August. Association for Computa-\ntional Linguistics.\nBhardwaj, Shivendra, David Alfonso Hermelo,\nPhillippe Langlais, Gabriel Bernier-Colborne, Cyril\nGoutte, and Michel Simard. 2020. Human or neural\ntranslation? In Proceedings of the 28th Interna-\ntional Conference on Computational Linguistics ,\npages 6553–6564, Barcelona, Spain (Online), De-\ncember. International Committee on Computational\nLinguistics.\nBic ¸ici, Ergun. 2019. Machine translation with parfda,\nMoses, kenlm, nplm, and PRO. In Proceedings of\nthe Fourth Conference on Machine Translation (Vol-\nume 2: Shared Task Papers, Day 1) , pages 122–128,\nFlorence, Italy, August. Association for Computa-\ntional Linguistics.\nBriakou, Eleftheria and Marine Carpuat. 2019. The\nUniversity of Maryland’s Kazakh-English neural\nmachine translation system at WMT19. In Proceed-\nings of the Fourth Conference on Machine Transla-\ntion (Volume 2: Shared Task Papers, Day 1) , pages134–140, Florence, Italy, August. 
Association for\nComputational Linguistics.\nBudiwati, Sari Dewi, Al Hafiz Akbar Maulana Siagian,\nTirana Noor Fatyanosa, and Masayoshi Aritsugi.\n2019. DBMS-KU interpolation for WMT19 news\ntranslation task. In Proceedings of the Fourth Con-\nference on Machine Translation (Volume 2: Shared\nTask Papers, Day 1) , pages 141–146, Florence, Italy,\nAugust. Association for Computational Linguistics.\nConneau, Alexis, Kartikay Khandelwal, Naman Goyal,\nVishrav Chaudhary, Guillaume Wenzek, Francisco\nGuzm ´an, Edouard Grave, Myle Ott, Luke Zettle-\nmoyer, and Veselin Stoyanov. 2020. Unsupervised\ncross-lingual representation learning at scale. In\nProceedings of the 58th Annual Meeting of the Asso-\nciation for Computational Linguistics , pages 8440–\n8451, Online, July. Association for Computational\nLinguistics.\nDabre, Raj, Kehai Chen, Benjamin Marie, Rui Wang,\nAtsushi Fujita, Masao Utiyama, and Eiichiro Sumita.\n2019. NICT’s supervised neural machine translation\nsystems for the WMT19 news translation task. In\nProceedings of the Fourth Conference on Machine\nTranslation (Volume 2: Shared Task Papers, Day 1) ,\npages 168–174, Florence, Italy, August. Association\nfor Computational Linguistics.\nDevlin, Jacob, Ming-Wei Chang, Kenton Lee, and\nKristina Toutanova. 2019. BERT: Pre-training of\ndeep bidirectional transformers for language under-\nstanding. In Proceedings of the 2019 Conference of\nthe North American Chapter of the Association for\nComputational Linguistics: Human Language Tech-\nnologies, Volume 1 (Long and Short Papers) , pages\n4171–4186, Minneapolis, Minnesota, June. Associa-\ntion for Computational Linguistics.\nFu, Yingxue and Mark-Jan Nederhof. 2021. Auto-\nmatic classification of human translation and ma-\nchine translation: A study from the perspective of\nlexical diversity. In Proceedings for the First Work-\nshop on Modelling Translation: Translatology in the\nDigital Age , pages 91–99, online, May. Association\nfor Computational Linguistics.\nGoyal, Vikrant and Dipti Misra Sharma. 2019. The\nIIIT-H Gujarati-English machine translation system\nfor WMT19. In Proceedings of the Fourth Con-\nference on Machine Translation (Volume 2: Shared\nTask Papers, Day 1) , pages 191–195, Florence, Italy,\nAugust. Association for Computational Linguistics.\nGuo, Xinze, Chang Liu, Xiaolong Li, Yiran Wang,\nGuoliang Li, Feng Wang, Zhitao Xu, Liuyi Yang,\nLi Ma, and Changliang Li. 2019. Kingsoft’s neu-\nral machine translation system for WMT19. In\nProceedings of the Fourth Conference on Machine\nTranslation (Volume 2: Shared Task Papers, Day 1) ,\npages 196–202, Florence, Italy, August. Association\nfor Computational Linguistics.\nHe, Pengcheng, Jianfeng Gao, and Weizhu Chen. 2021.\nDebertav3: Improving deberta using electra-style\npre-training with gradient-disentangled embedding\nsharing. arXiv preprint arXiv:2111.09543 .\nLewis, Mike, Yinhan Liu, Naman Goyal, Mar-\njan Ghazvininejad, Abdelrahman Mohamed, Omer\nLevy, Veselin Stoyanov, and Luke Zettlemoyer.\n2020. BART: Denoising sequence-to-sequence pre-\ntraining for natural language generation, translation,\nand comprehension. In Proceedings of the 58th An-\nnual Meeting of the Association for Computational\nLinguistics , pages 7871–7880, Online, July. Associ-\nation for Computational Linguistics.\nLi, Zhenhao and Lucia Specia. 2019. A compari-\nson on fine-grained pre-trained embeddings for the\nWMT19Chinese-English news translation task. 
In\nProceedings of the Fourth Conference on Machine\nTranslation (Volume 2: Shared Task Papers, Day 1) ,\npages 249–256, Florence, Italy, August. Association\nfor Computational Linguistics.\nLi, Yitong, Rui Wang, and Hai Zhao. 2015. A machine\nlearning method to distinguish machine translation\nfrom human translation. In Proceedings of the 29th\nPacific Asia Conference on Language, Information\nand Computation: Posters , pages 354–360.\nLi, Bei, Yinqiao Li, Chen Xu, Ye Lin, Jiqiang Liu,\nHui Liu, Ziyang Wang, Yuhao Zhang, Nuo Xu,\nZeyang Wang, Kai Feng, Hexuan Chen, Tengbo Liu,\nYanyang Li, Qiang Wang, Tong Xiao, and Jingbo\nZhu. 2019. The NiuTrans machine translation sys-\ntems for WMT19. In Proceedings of the Fourth Con-\nference on Machine Translation (Volume 2: Shared\nTask Papers, Day 1) , pages 257–266, Florence, Italy,\nAugust. Association for Computational Linguistics.\nMolchanov, Alexander. 2019. PROMT systems for\nWMT 2019 shared translation task. In Proceedings\nof the Fourth Conference on Machine Translation\n(Volume 2: Shared Task Papers, Day 1) , pages 302–\n307, Florence, Italy, August. Association for Com-\nputational Linguistics.\nMondal, Riktim, Shankha Raj Nayek, Aditya Chowd-\nhury, Santanu Pal, Sudip Kumar Naskar, and Josef\nvan Genabith. 2019. JU-Saarland submission to the\nWMT2019 English–Gujarati translation shared task.\nInProceedings of the Fourth Conference on Machine\nTranslation (Volume 2: Shared Task Papers, Day 1) ,\npages 308–313, Florence, Italy, August. Association\nfor Computational Linguistics.\nNg, Nathan, Kyra Yee, Alexei Baevski, Myle Ott,\nMichael Auli, and Sergey Edunov. 2019. Facebook\nFAIR’s WMT19 news translation task submission.\nInProceedings of the Fourth Conference on Machine\nTranslation (Volume 2: Shared Task Papers, Day 1) ,\npages 314–319, Florence, Italy, August. Association\nfor Computational Linguistics.\nPapineni, Kishore, Salim Roukos, Todd Ward, and Wei-\nJing Zhu. 2002. Bleu: a method for automatic eval-\nuation of machine translation. In Proceedings of the\n40th Annual Meeting of the Association for Com-\nputational Linguistics , pages 311–318, Philadelphia,Pennsylvania, USA, July. Association for Computa-\ntional Linguistics.\nPinnis, Marcis, Rihards Kri ˇslauks, and Mat ¯ıss Rikters.\n2019. Tilde’s machine translation systems for WMT\n2019. In Proceedings of the Fourth Conference on\nMachine Translation (Volume 2: Shared Task Pa-\npers, Day 1) , pages 327–334, Florence, Italy, Au-\ngust. Association for Computational Linguistics.\nPirinen, Tommi. 2019. Apertium-fin-eng–rule-based\nshallow machine translation for WMT 2019 shared\ntask. In Proceedings of the Fourth Conference on\nMachine Translation (Volume 2: Shared Task Pa-\npers, Day 1) , pages 335–341, Florence, Italy, Au-\ngust. Association for Computational Linguistics.\nRei, Ricardo, Craig Stewart, Ana C Farinha, and Alon\nLavie. 2020. COMET: A neural framework for MT\nevaluation. In Proceedings of the 2020 Conference\non Empirical Methods in Natural Language Process-\ning (EMNLP) , pages 2685–2702, Online, November.\nAssociation for Computational Linguistics.\nRosendahl, Jan, Christian Herold, Yunsu Kim, Miguel\nGrac ¸a, Weiyue Wang, Parnia Bahar, Yingbo Gao,\nand Hermann Ney. 2019. The RWTH Aachen Uni-\nversity machine translation systems for WMT 2019.\nInProceedings of the Fourth Conference on Machine\nTranslation (Volume 2: Shared Task Papers, Day 1) ,\npages 349–355, Florence, Italy, August. 
Association\nfor Computational Linguistics.\nSun, Meng, Bojian Jiang, Hao Xiong, Zhongjun He,\nHua Wu, and Haifeng Wang. 2019. Baidu neu-\nral machine translation systems for WMT19. In\nProceedings of the Fourth Conference on Machine\nTranslation (Volume 2: Shared Task Papers, Day 1) ,\npages 374–381, Florence, Italy, August. Association\nfor Computational Linguistics.\nvan der Werff, Tobias, Rik van Noord, and Antonio\nToral. 2022. Automatic discrimination of human\nand neural machine translation: A study with multi-\nple pre-trained models and longer context. In Pro-\nceedings of the 23rd Annual Conference of the Eu-\nropean Association for Machine Translation , pages\n161–170, Ghent, Belgium, June. European Associa-\ntion for Machine Translation.\nWolf, Thomas, Lysandre Debut, Victor Sanh, Julien\nChaumond, Clement Delangue, Anthony Moi, Pier-\nric Cistac, Tim Rault, Remi Louf, Morgan Funtow-\nicz, Joe Davison, Sam Shleifer, Patrick von Platen,\nClara Ma, Yacine Jernite, Julien Plu, Canwen Xu,\nTeven Le Scao, Sylvain Gugger, Mariama Drame,\nQuentin Lhoest, and Alexander Rush. 2020. Trans-\nformers: State-of-the-art natural language process-\ning. In Proceedings of the 2020 Conference on Em-\npirical Methods in Natural Language Processing:\nSystem Demonstrations , pages 38–45, Online, Oc-\ntober. Association for Computational Linguistics.\nXia, Yingce, Xu Tan, Fei Tian, Fei Gao, Di He, We-\nicong Chen, Yang Fan, Linyuan Gong, Yichong\nLeng, Renqian Luo, Yiren Wang, Lijun Wu, Jinhua\nZhu, Tao Qin, and Tie-Yan Liu. 2019. Microsoft Re-\nsearch Asia’s systems for WMT19. In Proceedings\nof the Fourth Conference on Machine Translation\n(Volume 2: Shared Task Papers, Day 1) , pages 424–\n433, Florence, Italy, August. Association for Com-\nputational Linguistics.\nZhang, Mike and Antonio Toral. 2019. The effect of\ntranslationese in machine translation test sets. In\nProceedings of the Fourth Conference on Machine\nTranslation (Volume 1: Research Papers) , pages 73–\n81, Florence, Italy, August. Association for Compu-\ntational Linguistics.\nA WMT MT Systems\nTable 8 shows the specific WMT19 systems that\nwere used during Experiment 1. Barrault et al.\n(2019) did not specify which specific online sys-\ntems were used.\nB Hyperparameters\nSentence-level hyperparameters used in our exper-\niments are shown in Table 9, while the document-\nlevel settings are shown in Table 10.\nWMT1 WMT2 WMT3 WMT4\nde Ng et al. (2019) Rosendahl et al. (2019) Molchanov (2019) online-X\nfi Xia et al. (2019) online-Y Bic ¸ici (2019) Pirinen (2019)\ngu Li et al. (2019) Bawden et al. (2019) Goyal and Sharma (2019) Mondal et al. (2019)\nkk online-B Li et al. (2019) Briakou and Carpuat (2019) Budiwati et al. (2019)\nlt Bei et al. (2019) Pinnis et al. (2019) JUMT online-X\nru Ng et al. (2019) online-G online-X Dabre et al. (2019)\nzh Sun et al. (2019) Guo et al. (2019) Li and Specia (2019) online-X\nTable 8: WMT systems used in our analysis. WMT1 and WMT2 are the two top-ranked systems, while WMT3 and WMT4\nare the two bottom-ranked systems. The JUMT system did not submit a paper.\nMonolingual Multilingual\nLearning Rate Batch Size Learning Rate Batch Size\nGoogle DeepL Google DeepL Google DeepL Google DeepL\nDEBERTA -V31e−51e−532 32 — — — —\nM-BERT 1e−51e−516 32 1e−51e−516 32\nM-BART 1e−55e−616 32 5e−61e−516 16\nXLM -R 1e−51e−516 32 1e−51e−516 16\nM-DEBERTA 1e−51e−532 32 5e−51e−532 16\nTable 9: Final hyper-parameter settings for the models used throughout the paper. 
We experimented with a batch size of {16, 32, 64} and a learning rate of {1e-4, 1e-5, 5e-5, 1e-6, 5e-6}.

C Additional Results

Figure 2 shows additional results for Experiment 1, specifically scatter plots of the accuracy of the classifier versus COMET scores for each system for both Google and DeepL. This complements Figure 1 in Section 3.1, in which BLEU was used instead of COMET. The trends are very similar in both figures.

Table 11 shows additional evaluation results on document-level classification (Experiment 5), as opposed to Table 7, in which we evaluated on the test set that has German as the source language. We observe that fine-tuning the sentence-level model is still generally preferable, though there are a few cases in which just training on documents resulted in the best performance. A curious observation is that for Chinese, including the source text generally does not lead to improved performance, while this is not the case for Russian and German.

              Max Sequence Length   Learning Rate   Batch Size   Gradient Accumulation
DEBERTA-V3    1,024                 1e-5            2            8
M-DEBERTA     3,072                 1e-5            1            8
Table 10: Final hyper-parameter settings for the models trained on the document level. We experimented with batch sizes of {1, 2, 4, 8} and different gradient accumulation values such that the effective batch size was at most 16 due to the hardware limitations. The learning rates tested were {1e-6, 2e-6, 5e-6, 1e-5}.

Tested on →              Russian                              Chinese
Trained on →     German (de)         de + ru + zh      German (de)          de + ru + zh
                 DEB       M-DEB     DEB       M-DEB   DEB        M-DEB     DEB        M-DEB
Majority vote    67.2±9.1  68.8±2.2  71.0±5.5  69.5±4.7  62.5±10.8  53.8±5.2  92.6±11.9  65.2±4.6
Doc-level        58.7±3.8  73.3±2.7  66.0±6.4  71.7±2.4  53.6±3.6   69.0±8.8  84.8±13.6  84.4±6.9
Doc-level (ft)   78.4±1.8  72.8±2.4  75.3±2.3  80.6±1.5  76.5±6.4   52.6±1.2  96.0±0.9   78.0±1.8
Table 11: Document-level accuracies when evaluating on test sets where the source language is either Russian or Chinese (see Table 7 for results on German). We train source-only DEBERTA-V3 (DEB) and source + target M-DEBERTA (M-DEB) models on either German, or German, Russian and Chinese combined.

[Figure 2: two scatter plots (Google, left; DeepL, right) of classifier accuracy (%) against COMET score; points are distinguished by source language (German-test, German-dev, Kazakh, Finnish, Russian, Gujarati, Chinese, Lithuanian) and by translating system (deepl, google, wmt1–wmt4).]
Figure 2: Accuracy versus COMET scores for each system in Table 2, using Google (left) or DeepL (right) translations during training. Accuracy versus BLEU scores can be found in Figure 1.", "main_paper_content": null }
{ "decision": "Unknown", "reviews": [] }
0
0
[]
[]
{ "id": "rtT23CRH2G", "year": null, "venue": "EAMT 2010", "pdf_link": "https://aclanthology.org/2010.eamt-1.18.pdf", "forum_link": "https://openreview.net/forum?id=rtT23CRH2G", "arxiv_id": null, "doi": null }
{ "title": "On the Use of Confidence Measures within an Interactive-predictive Machine Translation System", "authors": [ "Jesús González-Rubio", "Daniel Ortiz-Martínez", "Francisco Casacuberta" ], "abstract": null, "keywords": [], "raw_extracted_content": "On the Use of Confidence Measures within an Interactive-predictive\nM\nachine Translation System\nJes´usGonz´alez-Rubio\nInst. Tec. de Inform ´atica\nUniv. Polit ´ec. de Valencia\n46021 Valencia, Spain\[email protected] Ortiz-Mart ´ınez\nDpto. de Sist. Inf. y Comp.\nUniv. Polit ´ec. de Valencia\n46021 Valencia,Spain\[email protected]\nDpto. de Sist. Inf. y Comp.\nUniv. Polit ´ec. de Valencia\n46021 Valencia, Spain\[email protected]\nAbstract\nIn this work, we address the question of\nhow to integrate confidence measures into\na interactive-predictive machine transla-\ntion system and reduce user effort. Specif-\nically, we propose to use word confidence\nmeasures to aid the user in validating cor-\nrect prefixes from the outputs given by the\nsystem. Experimental results obtained on\na corpus of the Bulletin of the European\nUnion show that confidence information\ncan help to reduce usereffort.\n1 Introduction\nThe research in the field of machine translation\n(MT)aimstodevelopcomputersystemswhichare\nable to translate text or speech without human in-\ntervention. However, present translation technol-\nogy has not been able to deliver fully automated\nhigh-quality translations (Kay, 1997; Hutchins,\n1999;Arnold,2003). Typicalsolutionstoimprove\nthe quality of the translations supplied by an MT\nsystem require manual post-editing. This serial\nprocess prevents the MT system from taking ad-\nvantage of the knowledge of the human translator\nandthehumantranslatorcannottakeadvantageof\nthe adapting ability of the MTsystem.\nAn alternative way to take advantage of the ex-\nisting MT technologies is to use them in collabo-\nrationwith human translators within a computer-\nassisted translation (CAT) or interactive frame-\nwork (Isabelle and Church, 1997). Interactivity\nin CAT has been explored for a long time. Sys-\ntems have been designed to interact with human\ntranslators in order to solve ambiguities or update\nc/circlecopyrt2010 European Association forMachine Translation.user dictionaries (Slocum, 1985; Whitelock et al.,\n1986).\nAn important contribution to CAT technology\nwas pioneered by the TransType project (Foster et\nal., 1997; Langlais and Lapalme, 2002; Foster et\nal., 2002). It entailed a focus shift in which inter-\nactiondirectlyaimedattheproductionofthetarget\ntext,ratherthanatthedisambiguationofthesource\ntext, as in former interactive systems. The idea\nproposed in that work was to embed data driven\nMT techniques within the interactive translation\nenvironment. FollowingtheTransTypeideas,Bar-\nrachina et al. (2009) proposed, in the TransType-\n2project, the use of fully-fledged statistical MT\n(SMT) systems to produce full target sentences\nhypotheses, or portions thereof, which can be ac-\ncepted or amended by a human translator. Each\ncorrect text segment is then used by the MT sys-\ntemasadditionalinformationtoachieveimproved\nsuggestions. More specifically, in each iteration,\na prefix1of the target sentence is fixed by the hu-\nmantranslatorand,inthenextiteration,thesystem\npredicts a best (or N-best) translation suffix(es)1\nto complete this prefix. 
This process is known as Interactive-predictive Machine Translation (IMT). In this paper, we also focus on the IMT approach to CAT.

Figure 1 illustrates a typical IMT session. Initially, the user is given an input sentence f to be translated. The provided reference e is the translation that the user would like to achieve at the end of the IMT session. At iteration 0, the user does not supply any correct text prefix to the system, for this reason the prefix e_p is shown as empty. Therefore, the IMT system has to provide an initial complete translation ê_s, as if it were a conventional SMT system. In the next iteration, the user accepts a prefix of this suffix a and introduces a correction k. This being done, the system suggests a new suffix hypothesis ê_s, subject to e_p ≡ ak. Again, the user validates a new prefix, introduces a new correction and so forth. The process continues until the whole sentence is correct. A correct sentence is validated by introducing the special word "#".

(1) The terms prefix and suffix denote any substring at the beginning and end (respectively) of a string of characters, with no implication of morphological significance as is usually implied by these terms in linguistics.

SOURCE (f):    Para encender la impresora:
REFERENCE (e): To power on the printer:
ITER-0   e_p  ( )
         ê_s  To switch on a printer:
ITER-1   a    To
         k    power
         e_p  To power
         ê_s  on a printer:
ITER-2   a    on
         k    the
         e_p  To power on the
         ê_s  printer:
FINAL    a    printer:
         k    #
         e_p = e   To power on the printer:
Figure 1: IMT session to translate a Spanish sentence into English. System suggestions are in italics, accepted prefixes are printed in normal font and user inputs are in boldface font.

As can be seen from the IMT session described above, IMT aims at reducing the effort and increasing the productivity of translators, while preserving high-quality translation.

In this work, we intend to further reduce the user effort. As explained above, in each iteration the user is asked to validate a prefix of the hypothesis generated by the system and then to make a correction. To do that, the user only has information about the source sentence to be translated. We propose to provide the user with information about the correctness of each word in the suffix. This confidence measure (CM) will guide the user to locate possible translation errors in the suffixes given by the IMT system.

2 Confidence Measures

Sentences generated by an MT system are often incorrect but may contain correct substrings. Using CMs makes it possible to identify these correct substrings and find possible errors. For this purpose, each word in the generated target sentence is assigned a value expressing the confidence that it is correct. Confidence estimation can be seen as a conventional pattern classification problem in which a feature vector is obtained for each hypothesised word in order to classify it as either correct or incorrect. Confidence estimation has been extensively studied for speech recognition. Only recently have researchers started to investigate CMs for MT (Gandrabur and Foster, 2003; Blatz et al., 2004; Quirk, 2004; Ueffing and Ney, 2007; Sanchis et al., 2007; Specia et al., 2009).

Different TransType-style MT systems use confidence information to improve translation prediction accuracy (Foster et al., 2002; Gandrabur and Foster, 2003; Ueffing and Ney, 2005).
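As a concrete illustration of the protocol shown in Figure 1, the following minimal Python sketch simulates one prefix-based IMT session against a reference translation. It is an editorial illustration only: the suffix proposer below is a toy stand-in for the statistical suffix search of a real IMT engine, and none of the names come from an existing toolkit.

```python
def simulate_imt_session(source, reference, propose_suffix):
    """Simulate one IMT session: the user validates the longest correct prefix of
    every suggested suffix and then types a single corrected word."""
    reference = reference.split()
    prefix = []          # e_p: prefix validated so far
    word_strokes = 0     # number of corrections k typed by the user
    while len(prefix) < len(reference):
        suffix = propose_suffix(source, prefix)          # system suggestion ê_s
        remaining = reference[len(prefix):]
        accepted = []                                    # a: validated part of ê_s
        for suggested, wanted in zip(suffix, remaining):
            if suggested != wanted:
                break
            accepted.append(suggested)
        prefix += accepted
        if len(prefix) < len(reference):                 # type the correction k
            prefix.append(reference[len(prefix)])
            word_strokes += 1
    return " ".join(prefix), word_strokes                # finally the user types "#"


def toy_propose(source, prefix):
    """Toy stand-in for the suffix search: always completes one fixed hypothesis."""
    hypothesis = "To switch on a printer:".split()
    return hypothesis[len(prefix):]


print(simulate_imt_session("Para encender la impresora:",
                           "To power on the printer:", toy_propose))
# -> ('To power on the printer:', 2), i.e. the two corrections of Figure 1
```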
In this work, we propose a focus shift in which confidence information is used to aid the user in validating correct prefixes by locating incorrectly translated words in the suffixes given by the IMT system.

2.1 Selecting a Confidence Measure for IMT

Two problems have to be solved in order to compute CMs. First, suitable confidence features have to be computed. Second, a binary classifier has to be defined, which decides whether a word is correct or not.

In this work, we implement a word CM based on the IBM Model 1 (Brown et al., 1993), similar to the one described in (Blatz et al., 2004). We choose this because it relies only on the source sentence and the proposed extension, and not on an N-best list or an additional confidence estimation layer as many other word CMs do. Thus, it can be calculated very fast during search, which is crucial given the time constraints of IMT systems. Moreover, its performance in identifying correct words is similar to that of other word CMs, as the results presented in (Blatz et al., 2003; Blatz et al., 2004; Sanchis et al., 2007) show. However, we modified this CM by replacing the average by the maximal lexicon probability, because work by Ueffing and Ney (2005) shows that the average is dominated by this maximum. The confidence value of word e_i, c(e_i), is then given by

    c(e_i) = max_{0 <= j <= J} p(e_i | f_j),    (1)

where p(e_i|f_j) is the lexicon probability based on the IBM Model 1, f_0 is the empty source word and J is the number of words in the source sentence. Ueffing and Ney (2005) report that even this relatively simple CM yields a significant improvement in the quality of the suffixes proposed by an IMT system.

After computing the confidence value, each word is classified as either correct or incorrect, depending on whether or not its confidence exceeds a classification threshold.

SOURCE (f):    Para encender la impresora:
REFERENCE (e): To power on the printer:
ITER-0   e_p  ( )
         ê_s  To switch on a printer:
ITER-1   a    To switch on
         k    the
         e_p  To switch on the
         ê_s  printer:
FINAL    a    printer:
         k    #
         e_p ≡ e   To switch on the printer:
Figure 2: IMT session with confidence information using our proposed user simulation. System suggestions are in italics, accepted prefixes are printed in normal font and user inputs are in boldface font. Words classified as incorrect are displayed underlined and translation errors are printed in typewriter font. The final output is different from the reference translation e, but it is also a correct translation of the source sentence f.

3 IMT with Confidence Measures

In the IMT approach (see Figure 1), the user interaction with the IMT system consists of validating a correct prefix for each suffix ê_s given by the system. To do that, the user has to check the correctness of each word in the given suffix, looking for the first incorrectly translated word. We propose the use of CMs as a new source of information to aid the user in locating these incorrectly translated words.

In a conventional IMT system, the only information available to the user is the source sentence to be translated, so all the words of the target sentence are equally likely to be correct or incorrect. In contrast, we propose to provide the user with information about the correctness of each of the words in the suffix. In our proposal, the user has more information available, which can help her to easily validate the correct prefix.

To appropriately evaluate the impact of providing the user with confidence information within the IMT scenario, experimentation involving human translators should be carried out.
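The confidence measure of Equation (1) and the threshold rule that turns it into a correct/incorrect decision can be sketched in a few lines. The IBM Model 1 lexicon is represented as a plain dictionary whose probabilities are invented here, chosen only so that the example reproduces the behaviour shown in Figure 2.

```python
def word_confidence(target_word, source_words, lexicon):
    """Equation (1): maximal IBM Model 1 lexicon probability of the target word
    given any source word, including the empty source word f_0 (NULL)."""
    candidates = ["NULL"] + list(source_words)
    return max(lexicon.get((target_word, f), 0.0) for f in candidates)


def classify_suffix(suffix_words, source_words, lexicon, threshold):
    """Tag each suffix word as correct (True) or incorrect (False), depending on
    whether its confidence exceeds the classification threshold."""
    return [word_confidence(e, source_words, lexicon) > threshold
            for e in suffix_words]


# Invented lexicon entries, for illustration only.
lexicon = {("To", "Para"): 0.7, ("switch", "encender"): 0.6, ("on", "encender"): 0.6,
           ("a", "la"): 0.2, ("printer:", "impresora:"): 0.8}
source = "Para encender la impresora:".split()
print(classify_suffix("To switch on a printer:".split(), source, lexicon, 0.5))
# -> [True, True, True, False, True]: only "a" is flagged, as in Figure 2
```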
Unfortunately, such a user study would be very costly. Because of this, we are forced to carry out experimentation simulating the human translators. This user simulation does not intend to exactly imitate the behaviour of real IMT users, but to test if confidence information may be useful for a human translator within the IMT process. Anyway, experimentation involving human translators will be carried out in the future.

3.1 User Simulation

We want to study the impact of using CMs within the IMT process. To do that, we simulate a human translator that absolutely relies on the confidence information to validate correct prefixes from the suffixes given by the IMT system. To simulate such a human translator, we make two assumptions. First, we assume that the CM makes no mistakes in classifying words. Second, we assume that the user is always able to correct a word without taking into account the context of this word.

The first assumption implies that the user checks the correctness of only those words that are classified as incorrect, skipping the words classified as correct. Confidence estimation is not perfect, therefore some of the words may be misclassified; as a result, the output generated by our user simulation is not guaranteed to be equal to the reference.

The second assumption is a consequence of the first one. If we skip words that may be incorrect, the user should be capable of correcting each incorrect word even when the context of this word may be erroneous. We use the reference sentence to correct the words classified as incorrect, i.e. if the second word of a suffix needs to be corrected, we correct it with the word in the same position in the corresponding reference sentence.

We are aware that the above described assumptions may seem unrealistic, but they are made to simplify the IMT scenario in which the impact of using confidence information is to be evaluated.

Our user simulation is exemplified in Figure 2. At iteration 0, the system has classified the word a as incorrect (words classified as incorrect are displayed underlined in the example). With this information the user focuses her attention directly on the word a and corrects it, skipping the words "To switch on" that the system considers to be correct. Word switch is different from the reference word power, so, in this scenario, the final translation error will be greater than zero. At the second iteration there are no words classified as erroneous, so the user accepts the suffix without checking any of the suffix words. Following the conventional IMT approach, the user has to check the correctness of 5 words and correct two of them to obtain the desired translation, while in our simulation, the user has to check the correctness of only one word and correct it to obtain the final translation. In spite of the fact that this final translation is different from the one the user has in mind, it is a correct translation of the source sentence.

It is worth noting that, in our user simulation, varying the value of the classification threshold allows ranging from a fully automatic SMT approach (threshold equal to 0.0, all words are classified as correct) to a conventional IMT approach (threshold equal to 1.0, all words are classified as incorrect). The classification threshold value allows us to control the trade-off between the user effort required by the IMT system and the expected final translation error, according to the requirements of the given translation task.
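The simulated user just described can be written down compactly. The sketch below reduces it to a single pass over one complete hypothesis rather than the full suffix-by-suffix loop: words whose confidence is above the threshold are trusted as they are, the remaining ones are replaced by the reference word at the same position. Data and confidence values are illustrative (they match the earlier sketch).

```python
def simulate_confidence_guided_user(hypothesis, reference, confidences, threshold):
    """Simplified simulation of the user of Section 3.1 on a single hypothesis:
    only words flagged as incorrect are inspected and corrected."""
    output, word_strokes = [], 0
    for i, word in enumerate(hypothesis):
        if confidences[i] > threshold or i >= len(reference):
            output.append(word)              # trusted word (may still be wrong)
        else:
            output.append(reference[i])      # one correction = one word-stroke
            word_strokes += 1
    return " ".join(output), word_strokes


hypothesis = "To switch on a printer:".split()
reference = "To power on the printer:".split()
confidences = [0.7, 0.6, 0.6, 0.2, 0.8]
print(simulate_confidence_guided_user(hypothesis, reference, confidences, 0.5))
# -> ('To switch on the printer:', 1): one word-stroke, one residual error (switch)
# A threshold of 0.0 keeps the raw SMT output, a threshold of 1.0 reproduces the
# reference, mirroring the two extremes discussed above.
```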
For any threshold value lower than 1.0, our user simulation does not guarantee error-free translations.

4 Experimentation

The aim of this experimentation was to study the impact of providing the user of an IMT system with confidence information. All the experiments were carried out using the user simulation described in Section 3.1.

4.1 System evaluation

Automatic evaluation of results is a difficult problem in MT. In fact, it has evolved into a research field with its own identity. This is due to the fact that, given an input sentence, a great number of correct and different output sentences may exist. Hence, there is no sentence which can be considered ground truth, as is the case in speech or text recognition. By extension, this problem is also applicable to our user simulation. Moreover, we additionally have to deal with the problem of measuring the user effort.

In this paper, we report our results as measured by Word Stroke Ratio (WSR) (Tomás and Casacuberta, 2006). WSR is used in the context of IMT to measure the effort required by the user to generate her translations. WSR is computed as the quotient between the number of word-strokes a user would need to perform in order to achieve the translation she has in mind and the total number of words in the sentence. In this context, a word-stroke is interpreted as a single action, in which the user types a complete word, and is assumed to have constant cost. Moreover, each word-stroke also takes into account the cost incurred by the user when reading the new suffix provided by the system.

In addition, and because our user simulation allows differences between its output and the reference translation, we will also present translation quality results in terms of Translation Edit Rate (TER) (Snover et al., 2006) and BiLingual Evaluation Understudy (BLEU) (Papineni et al., 2002). TER is calculated as the number of edit operations (insertions, deletions and substitutions of single words and shifts of word sequences) needed to convert the system translation into the reference translation. BLEU computes a geometric mean of the precision of n-grams multiplied by a factor to penalise short sentences.

Finally, to evaluate the performance of the selected CM we use the Classification Error Rate (CER). This metric is defined as the number of classification errors divided by the total number of classified words.

4.2 Experimental Setup

Our experiments were carried out on the EU corpora (Barrachina et al., 2009). The EU corpora were extracted from the Bulletin of the European Union, which is publicly available on the Internet. The EU corpora are composed of sentences given in three different language pairs. Here, we will focus on the Spanish–English part of the EU corpora. The corpus is divided into three separate sets: one for training, one for development, and one for test. The figures of the corpus can be seen in Table 1.

                               Spanish    English
Train   Sentences              214.5K
        Running words          5.8M       5.2M
        Vocabulary             97.4K      83.7K
Dev.    Sentences              400
        Running words          11.5K      10.1K
        Perplexity (trigrams)  46.1       59.4
Test    Sentences              800
        Running words          22.6K      19.9K
        Perplexity (trigrams)  45.2       60.8
Table 1: Statistics of the Spanish–English EU corpora. K and M denote thousands and millions of elements respectively.

As a first step, we built an SMT system to translate from Spanish into English. This was done by means of the Thot toolkit (Ortiz et al., 2005), which is a complete system for building phrase-based SMT models.
This toolkit involves the estimation from the training set of different statistical models, which are combined in a log-linear fashion by adjusting a weight for each of them by means of the MERT (Och, 2003) procedure, optimising the BLEU score on the development partition.

The IMT system which we have implemented relies on the use of word graphs (Ueffing et al., 2002) to efficiently compute the suffix for a given prefix. A word graph has to be generated for each sentence to be interactively translated. For this purpose, we used a multi-stack phrase-based decoder which will be distributed in the near future together with the Thot toolkit. We discarded the use of the state-of-the-art Moses toolkit (Koehn et al., 2007) because preliminary experiments performed with it revealed that the decoder by Ortiz-Martínez et al. (2005) performs clearly better when used to generate word graphs for their use in IMT. In addition, we performed an experimental comparison in regular SMT, and found that the performance difference was negligible. The decoder was set to only consider monotonic translation, since in real IMT scenarios considering non-monotonic translation leads to excessive response time for the user.

Finally, the obtained word graphs were used in our user simulation to produce the translations of the sentences in the test set, measuring WSR, TER and BLEU.

[Figure 3: plot of Classification Error Rate (%), roughly in the range 35–60, against the classification threshold (0 to 1); annotated values CER = 58.86, CER = 41.14 and CER = 37.01.]
Figure 3: CER for different classification threshold values when translating from Spanish into English.

4.3 Word Confidence Classification Results

We carried out an experimentation intended to study the performance of the CM in classifying the words as correct or incorrect. In order to evaluate the classification performance of the CM, a corpus is needed where each word is tagged as correct or incorrect. We carried out a conventional IMT session to produce the reference translations and used the user interactions with the system to tag the words as correct or incorrect. For example, in the IMT session in Figure 1, at iteration 1 word To is tagged as correct because the user marked it as a valid prefix and word switch is tagged as incorrect because the user corrects it with the word power. At iteration 2, word on is tagged as correct and word a as incorrect. Finally, word printer: is tagged as correct. Once the words are tagged, confidence classification is performed for a certain classification threshold and the CER score for this threshold is calculated.

[Figure 4: two plots against the classification threshold (0 to 1); left panel: Word Stroke Ratio and Translation Edit Rate, right panel: Word Stroke Ratio and BLEU; curves WSR IMT-CM, TER IMT-CM and BLEU IMT-CM, with baselines WSR IMT, TER SMT and BLEU SMT.]
Figure 4: TER (left) and BLEU (right) translation scores against WSR for different values of the confidence classification threshold when translating from Spanish into English.

Figure 3 displays CER for different values of the classification threshold. The two extreme values 0.0 and 1.0 imply that the CM does not add information about the correctness of the words in the suffix. Specifically, a threshold value equal to 0.0 classifies all the target words as correct, whereas a threshold value equal to 1.0 classifies all the target words as incorrect.

According to Figure 3, the best CER score was obtained for a threshold value of 0.75.
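A curve such as the one in Figure 3 is obtained by sweeping the classification threshold over words tagged as correct or incorrect and computing the CER at each point. The sketch below illustrates that sweep on invented (confidence, tag) pairs; with real data, the minimum of this curve is what selects the operating threshold.

```python
def classification_error_rate(confidences, tags, threshold):
    """CER: percentage of words whose thresholded decision (correct if the
    confidence exceeds the threshold) disagrees with the reference tag."""
    errors = sum((c > threshold) != tag for c, tag in zip(confidences, tags))
    return 100.0 * errors / len(tags)


# Invented tagged data: (confidence value, whether the word was actually correct).
tagged = [(0.9, True), (0.8, True), (0.7, False), (0.6, True),
          (0.4, False), (0.3, True), (0.2, False), (0.1, False)]
confidences, tags = zip(*tagged)
for threshold in (0.0, 0.25, 0.5, 0.75, 1.0):
    cer = classification_error_rate(confidences, tags, threshold)
    print(f"threshold {threshold:.2f}: CER = {cer:.1f}%")
```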
Thisthreshold\nvalue allows to achieve better CER score than that\nobtained using a threshold value of 1.0. Since a\nthreshold value of 1.0corresponds to the conven-\ntionalIMTsystem,weconcludethatprovidingthe\nuserwithconfidenceinformationisbetterthannot\nprovidingconfidence informationatall.\n4.4 UserSimulationIMT Results\nIn the previous section, we have seen that confi-\ndence information is useful to detect incorrectly\ntranslated words, and so, may make the user inter-\naction with the IMT system easier. One advantage\nof integrating CMs within an IMT system is their\nability to achieve a trade-off between the required\nusereffortand theexpectedfinal translationerror.\nIn this section, we present a series of ex-\nperiments ranging the value of the classification\nthresholdbetween 0.0(unsupervisedSMTsystem)\nand1.0(conventional IMT system). For eachthreshold value, we calculated the effort of our\nsimulated user in terms of WSR, and the transla-\ntionqualityofthefinaloutputasmeasuredbyTER\nand BLEU.\nFigure 4 shows WSR (WSR IMT-CM), TER\n(TER IMT-CM) and BLEU (BLEU IMT-CM)\nscores obtained by our user simulation for differ-\nent classification threshold values. Additionally,\nwe also show the TER and BLEU scores (TER\nSMT and BLEU SMT) obtained by a fully auto-\nmaticSMTsystemastranslationqualitybaselines,\nandtheWSRscore(WSRIMT)obtainedbyacon-\nventional IMTsystemasuser effortbaseline.\nFigure 4 shows a smooth transition between the\nunsupervised SMT system and the conventional\nIMT system. As we raised the threshold value,\nmore words were marked as incorrect, and there-\nfore,morewordsweresuitableforcorrection. Ac-\ncording to Figure 4, using the best threshold value\n(0.75)inFigure3,wecanachieveatranslationer-\nroraslowas 4TERpointsbycorrectingonly 30%\nabsolute of the words. This constitutes a WSR re-\nduction of 40%relative with respect to the stan-\ndard IMT approach and a BLEU improvement of\nalmost 60points with respect to the unsupervised\nSMTsystem.\nIt is worth of notice that the experimentation is\ncarried out simulating a user whose decisions are\nabsolutely guided by the confidence information.\nTheusereffortsavingsandtheimprovementsover\nthe SMT translation quality displayed in Figure 4,\nconfirm that confidence information can aid a hu-\nman translator in making her decisions within the\nIMTprocess.\n5 ConcludingRemarks\nInthiswork,weproposedtoenrichtheIMTframe-\nwork with confidence information. Since an ex-\nperimentationinvolvinghumanuserwouldbevery\ncostly,wewereforcedtodesignasimulationofthe\nhumanuserstotestourproposal. Thisusersimula-\ntionwasnotintendedtoreproducearealIMTuser,\nbuttotestifconfidenceinformationmaybeuseful\nfora realIMTuser.\nExperimentation results show that confidence\ninformationcan aid real users tolocate incorrectly\ntranslated words, making easier for them to val-\nidate correct prefixes within an IMT framework.\nAccordingtoourusersimulation,a 40%reduction\nin the WSR was obtained with respect to the con-\nventional IMT system. 
In addition, an improve-\nment of 60BLEU points is also achieved with re-\nspecttotheSMTsystem.\nAs future work, we plan to perform a human\nevaluation to verify the results obtained with our\nusersimulation.\nAcknowledgements\nWork supported by the EC (FEDER/FSE) and\nthe Spanish MEC/MICINN under the MIPRCV\n“Consolider Ingenio 2010” program (CSD2007-\n00018) and the FPU scholarship AP2006-00691.\nAlso supported by the Spanish MITyC under the\nerudito.com (TSI-020110-2009-439) project and\nby the Generalitat Valenciana under grant Prom-\neteo/2009/014.\nReferences\nArnold, Doug, 2003. Computers and Translation: A\ntranslator’sguide , chapter 8, pages 119–142.\nBarrachina, Sergio, Oliver Bender, Francisco Casacu-\nberta, Jorge Civera, Elsa Cubel, Shahram Khadivi,\nAntonio Lagarda, Hermann Ney, Jes ´us Tom´as, and\nEnrique Vidal. 2009. Statistical approaches to\ncomputer-assisted translation. Computational Lin-\nguistics,35(1):3–28.\nBlatz, Jonh, Erin Fitzgerald, George Foster, Simona\nGandrabur,CyrilGoutte,AlexKulesza,AlbertoSan-\nchis, and Nicola Ueffing. 2003. Confidence estima-\ntionfor machine translation.\nBlatz, Jonh, Erin Fitzgerald, George Foster, Simona\nGandrabur,CyrilGoutte,AlexKulesza,AlbertoSan-\nchis, and Nicola Ueffing. 2004. Confidence estima-\ntion for machine translation. In Proceedings of the\nInternationalConferenceonComputationalLinguis-\ntics,page 315.Brown, Peter F., Stephen A. Della Pietra, Vincent J.\nDella Pietra, and Robert L. Mercer. 1993. The\nMathematics of Statistical Machine Translation: Pa-\nrameter Estimation. Computational Linguistics ,\n19(2):263–311.\nFoster, George, Pierre Isabelle, and Pierre Plamon-\ndon. 1997. Target-textmediatedinteractivemachine\ntranslation. Machine Translation , 12:12–175.\nFoster, George, Philippe Langlais, and Guy Lapalme.\n2002. User-friendly text prediction for translators.\nInProceedingsoftheconferenceonEmpiricalmeth-\nods innatural language processing , pages 148–155.\nGandrabur, Simona and George Foster. 2003. Confi-\ndence estimation for text prediction. In Proceedings\nof the Conference on Computational Natural Lan-\nguage Learning , pages 315–321.\nHutchins, Jonh. 1999. Retrospect and prospect in\ncomputer-based translation. In Proceedings of the\nMachine Translation Summit , pages 30–44.\nIsabelle, Pierre and Ken Church. 1997. Special issue\non new tools for human translators. Machine Trans-\nlation, 12(1–2).\nKay, Martin. 1997. It’s still the proper place. Machine\nTranslation , 12(1-2):35–38.\nKoehn, Philipp, Hieu Hoang, Alexandra Birch, Chris\nCallison-Burch, Marcello Federico, Nicola Bertoldi,\nBrooke Cowan, Wade Shen, Christine Moran,\nRichard Zens, Chris Dyer, Ondrej Bojar, Alexandra\nConstantin, and Evan Herbst. 2007. Moses: Open\nsource toolkit for statistical machine translation. In\nProceedings of the Association for Computational\nLinguistics meeting , pages 177–180.\nLanglais,PhilippeandGuyLapalme. 2002. Transtype:\nDevelopment-evaluation cycles to boost translator’s\nproductivity. Machine Translation , 15(4):77–98.\nOch, Franz J. 2003. Minimum error rate training in\nstatisticalmachinetranslation. In Proceedingsofthe\nAssociation for Computational Linguistics meeting ,\npages 160–167.\nOrtiz, Daniel, Ismael Garc ´ıa-Varea, and Francisco\nCasacuberta. 2005. Thot: a toolkit to train phrase-\nbased statistical translation models. In Proceedings\nof the Machine Translation Summit , pages 141–148.\nPapineni,Kishore,SalimRoukos,ToddWard,andWei-\nJing Zhu. 2002. BLEU: a method for automatic\nevaluationofMT. 
In ProceedingsoftheAssociation\nfor Computational Linguistics meeting , pages 311–\n318.\nQuirk, Chris. 2004. Training a sentence-level ma-\nchine translation confidence metric. In Proceedings\nof the International Conference on Language Re-\nsources and Evaluation , pages 825–828.\nSanchis, Alberto, Alfons Juan, and Enrique Vidal.\n2007. Estimation of confidence measures for ma-\nchine translation. In Proceedings of the Machine\nTranslation Summit , pages 407–412.\nSlocum, Jonathan. 1985. A survey of machine transla-\ntion: Its history, current status, and future prospects.\nComputational Linguistics , 11(1):1–17.\nSnover, Mattew, Bonnie Dorr, Richard Schwartz, Lin-\nnea Micciulla, and John Makhoul. 2006. A study of\ntranslation edit rate with targeted human annotation.\nInProc of the Association for Machine Translation\nintheAmericas meeting , pages 223–231.\nSpecia, Lucia, Marco Turchi, Zhuoran Wang, John\nShawe-Taylor, and Craig Saunders. 2009. Improv-\ningtheconfidence ofmachinetranslationqualityes-\ntimates. In Proceedings of the Machine Translation\nSummit.\nTom´as, Jes´us and Francisco Casacuberta. 2006. Statis-\ntical phrase-based models for interactive computer-\nassisted translation. In Proceedings of the Inter-\nnational Conference on Computational Linguistics ,\npages 835–841.Ueffing, Nicola and Hermann Ney. 2005. Application\nofword-levelconfidencemeasuresininteractivesta-\ntistical machine translation. In Proceedings of the\nEuropean Association for Machine Translation con-\nference, pages 262–270.\nUeffing, Nicola and Hermann Ney. 2007. Word-level\nconfidenceestimationformachinetranslation. Com-\nputational Linguistics , 33(1):9–40.\nUeffing,Nicola,FranzJ.Och,andHermannNey. 2002.\nGeneration of word graphs in statistical machine\ntranslation. In Proceedings of the conference on\nEmpiricalMethodsinNaturalLanguageProcessing ,\npages 156–163.\nWhitelock, P.J., M. Wood, B.J. Chandler, N. Holden,\nand H.J. Horsfall. 1986. Strategies for interactive\nmachinetranslation: theexperienceandimplications\nof the umist japanese project. In Proceedings of\ntheAssociationforComputationalLinguistics ,pages\n329–334.", "main_paper_content": null }
{ "decision": "Unknown", "reviews": [] }
0
0
[]
[]
{ "id": "DP6bmSLHS3m", "year": null, "venue": "EAMT 2011", "pdf_link": "https://aclanthology.org/2011.eamt-1.35.pdf", "forum_link": "https://openreview.net/forum?id=DP6bmSLHS3m", "arxiv_id": null, "doi": null }
{ "title": "Bilingual segmentation for phrasetable pruning in Statistical Machine Translation", "authors": [ "Germán Sanchis-Trilles", "Daniel Ortiz-Martínez", "Jesús González-Rubio", "Jorge González" ], "abstract": null, "keywords": [], "raw_extracted_content": "Bilingual segmentation for phrasetable pruning\nin Statistical Machine Translation\nGerm ´an Sanchis-Trilles Daniel Ortiz-Mart ´ınez Jes ´us Gonz ´alez-Rubio\nJorge Gonz ´alez Francisco Casacuberta\nInstituto Tecnol ´ogico de Inform ´atica\nDepartamento de Sistemas Inform ´aticos y Computaci ´on\nUniversitat Polit `ecnica de Val `encia\nValencia, Spain\n{gsanchis,dortiz,jegonzalez,jgonzalez,fcn }@dsic.upv.es\nAbstract\nStatistical machine translation systems\nhave greatly improved in the last years.\nHowever, this boost in performance usu-\nally comes at a high computational cost,\nyielding systems that are often not suitable\nfor integration in hand-held or real-time\ndevices. We describe a novel technique\nfor reducing such cost by performing a\nViterbi-style selection of the parameters of\nthe translation model. We present results\nwith finite state transducers and phrase-\nbased models showing a 98% reduction of\nthe number of parameters and a 15-fold in-\ncrease in translation speed without any sig-\nnificant loss in translation quality.\n1 Introduction\nNowadays, the key step of the process of statisti-\ncal machine translation (SMT) involves inferring\na large table of phrase pairs that are translations\nof each other from a large corpus of aligned sen-\ntences. The set of all phrase pairs, together with es-\ntimates of conditional probabilities and other use-\nful features, is called phrasetable . Such phrases\nare applied during the decoding process, combin-\ning their target sides to form the final translation.\nA variety of algorithms to extract phrase pairs\nhas been proposed (Och and Ney, 2000; Marcu and\nWong, 2002; Zens et al., 2002; Och and Ney, 2003;\nV ogel, 2005). Typically, these algorithms heuristi-\ncally collect a highly redundant set of phrases from\neach training sentence pair generating phrasetables\nwith a huge number of elements.\nThis bulk comes at a cost. Large phrasetables\nlead to large data structures that require more re-\nc/circlecopyrt2011 European Association for Machine Translation.sources and more time to process. More impor-\ntantly, effort spent in handling large tables could\nlikely be more usefully employed in more features\nor more sophisticated search processes. Addition-\nally, this is the main restriction for the widespread\napplication of SMT techniques in small portable\ndevices like cell phones, PDAs or hand-held game\nconsoles; one can imagine many scenarios that\ncould benefit from a lightweight translation device:\ntourism, medicine, military, etc.\nIn this paper, we show that is possible to prune\nphrasetables by removing those phrase pairs that\nhave little influence on the final translation per-\nformance. Our approach consist in selecting only\nthose phrase pairs extracted from the most proba-\nble segmentation of the training sentences.\nThe technique presented here has several advan-\ntages. It does not depend on the actual algorithm\nused to extract the phrase pairs, therefore can be\napplied to every imaginable method that assigns\nprobabilities to phrase pairs. 
It provides a straight-\nforward method for pruning the phrasetables, with-\nout the need of adjusting any additional parameter.\nIt does not significantly affect translation quality,\nas measured by BLEU or TER scores, while very\nsubstantial savings in terms of computational re-\nquirements are reported.\nThe rest of the paper is organised as follows.\nSection 2 revised previously published techniques\nto prune the phrasetable. Section 3 introduces\nSMT and the different models used in the exper-\nimentation. Section 4 reviews the bilingual seg-\nmentation problem in order to present our tech-\nnique to filter the phrasetable. Section 5 describes\nthe experimentation carried out and presents the\nobtained results. The paper concludes with a sum-\nmary and discussion of the results.Mik el L. F orcada, Heidi Depraetere, Vincen t V andeghinste (eds.)\nPr o c e e dings of the 15th Confer enc e of the Eur op e an Asso ciation for Machine T r anslation , p. 257\u0015264\nLeuv en, Belgium, Ma y 2011\n2 Related work\nMost phrase-based decoders already include se-\nveral built-in thresholds in order to prune the size\nof phrasetables estimated from training corpora\n(Ortiz et al., 2005; Koehn et al., 2007). They are\nusually related either to absolute scores of phrase\npairs in the phrasetable or to relative scores be-\ntween the phrase pairs sharing their source phrase.\nApart from phrasetable threshold pruning tech-\nniques, which are usually employed in SMT, dif-\nferent complementary methods in order to reduce\neven more the size of phrasetables have been ex-\nplored within the last years. On the one hand,\nJohnson et al. (2007) propose to use significance\ntesting in order to select only those phrase pairs\nwhich are the most co-occurring ones in the train-\ning corpus. On the other hand, Eck et al. (2007)\nconsiders usage statistics of phrase pairs, also\nbased on either their scores or their ranks, in or-\nder to prune the ones below some minimal values.\nOur work however does not perform an explicit\nstatistical analysis of the phrases in phrasetables,\nbut instead uses the concept of bilingual segmen-\ntation of each sentence pair to greatly reduce the\nnumber of parameters to be included in the fi-\nnal phrasetable. Gonz ´alez et al. (2008) already\nproposed a segmentation-based technique using\nphrasetables which indirectly causes a reduction\nin their sizes. This technique was adopted by\nSanchis-Trilles and Casacuberta (2008) in order to\ntake advantage of the phrasetable pruning concept\nwithin a standard, phrasetable-based SMT system.\nSimilarly, Wuebker et al. (2010) propose the use\nof a single bilingual segmentation in order to re-\nestimate translation probabilities by leaving-one-\nout. As a side effect, the amount of model parame-\nters is also reduced. In our work however, the goal\nof reducing the size of phrasetables is directly tar-\ngeted, thus achieving much larger reductions.\n3 Statistical machine translation\nStatistical Machine Translation (SMT) was defined\nby Brown et al. (1993) as follows: given a sen-\ntence xfrom a certain source language, a corre-\nsponding sentence ˆyin a given target language\nthat maximises the posterior probability is to be\nfound. 
State-of-the-art SMT systems model the\ntranslation distribution p(y|x)via the log-linear\napproach (Och and Ney, 2002):\nˆy=argmax\nyPr(y|x) (1)≈argmax\nyM/summationdisplay\nm=1λmhm(x,y) (2)\nwherehm(x,y)is a function representing an im-\nportant feature for the translation of xintoy,Mis\nthe number of features (or models) and λmare the\nweights of the log-linear combination.\nCurrent SMT systems are strongly based on the\nconcept of phrase . A phrase is defined as a con-\nsecutive group of words of the source or the target\nsentences. In this work, we will conduct our exper-\niments on two different machine translation mod-\nels based on phrases: phrase-based (PB) models\nandphrase-based stochastic finite state transduc-\ners(PBSFSTs).\nPB models (Tomas and Casacuberta, 2001; Och\nand Ney, 2002; Marcu and Wong, 2002; Zens et\nal., 2002), constitute the core of the current state-\nof-the-art in SMT. The basic idea of PB models\nis to segment the source sentence into phrases,\nthen to translate each source phrase into a target\nphrase, and finally to reorder them in order to com-\npose the final translation in the target language.\nThe set of feature functions that compose the log-\nlinear model used by state-of-the-art PB-SMT sys-\ntems typically include an n-gram language model,\nphrase-based models estimated in both translation\ndirections and some additional components such\nas word or phrase penalties. The word and phrase\npenalties allow the SMT system to limit the num-\nber of words or target phrases, respectively, that\ncompose the translations of the source sentences.\nPBSFSTs (Gonz ´alez et al., 2008) are defined as\na set of states, a set of labelled transitions between\npairs of states (where labels are composed of a\nsource phrase and a target phrase), and probabilis-\ntic distributions for the initial and the final states,\nand for the labelled transitions (Vidal et al., 2005).\nThe inference of PBSFSTs is based on the use\nof monotonic bilingual segmentations of parallel\ntraining data and a language model of bilingual\nphrases (Casacuberta and Vidal, 2004). These\nmodels can also implement the log-linear approach\nas described for PB models, which the aforesaid\nPB bilingual language model is incorporated to as\nan additional feature.\n4 Phrasetable pruning by bilingual\nsegmentation\nThe problem of segmenting a bilingual sentence\npair in such a manner that the resulting segmen-\ntation is the one that contains, without overlap, the258\nsource phrase target phrase\nLa the\ncasa house\nverde green\ncasa verde green house\nLa casa verde the green house\n. .\ncasa verde . green house .\nLa casa verde . the green house .\nFigure 1: Consistent bilingual phrases (right) given a word alignment matrix (left).\nbest phrases that can be extracted from that pair is a\ndifficult problem. First, because of the huge num-\nber of possible segmentations that are to be con-\nsidered. Second, because a measure of optimality\nmust be established. Consider the example:\nSource: La casa verde .\nTarget: The green house .\nWhen considering this example, one would proba-\nbly state that a good segmentation for this bilingual\npair is{{La, The},{casa verde , green house },{.\n, .}}. However, why is such a segmentation bet-\nter than{{La , The},{casa verde . , green house\n.}}? As humans, we could argue with more or\nless convincing linguistic terms in favour of the\nfirst option, but that does not necessarily mean that\nsuch a segmentation is the most appropriate one\nfor SMT. 
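The phrase pairs listed in Figure 1 can be enumerated mechanically from the word alignment: a source span and a target span form a consistent pair when no alignment link connects a word inside either span to a word outside the other. The sketch below is a simplified re-implementation of that idea for the toy example above; it is not code from any of the cited phrase extraction toolkits, and it omits refinements such as the handling of unaligned words.

```python
def extract_consistent_phrases(src, tgt, alignment, max_len=4):
    """Enumerate phrase pairs consistent with a word alignment: every link touching
    the chosen source span must fall inside the induced target span, and vice versa."""
    pairs = []
    for i1 in range(len(src)):
        for i2 in range(i1, min(i1 + max_len, len(src))):
            linked = [j for (i, j) in alignment if i1 <= i <= i2]
            if not linked:
                continue
            j1, j2 = min(linked), max(linked)
            if j2 - j1 + 1 > max_len:
                continue
            if all(i1 <= i <= i2 for (i, j) in alignment if j1 <= j <= j2):
                pairs.append((" ".join(src[i1:i2 + 1]), " ".join(tgt[j1:j2 + 1])))
    return pairs


src = "La casa verde .".split()
tgt = "The green house .".split()
alignment = {(0, 0), (1, 2), (2, 1), (3, 3)}      # (source position, target position)
for s, t in extract_consistent_phrases(src, tgt, alignment):
    print(s, "|||", t)
# Prints the eight consistent pairs of Figure 1 (e.g. "casa verde ||| green house").
```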
Furthermore, one could possibly think of\nseveral linguistically motivated segmentations for\nthis small example.\nIn SMT, a variety of algorithms to extract phrase\npairs have been proposed (Tomas and Casacu-\nberta, 2001; Marcu and Wong, 2002; Och and\nNey, 2003; V ogel, 2005). Typically, the bilingual\nphrases that compose phrasetables are extracted\nby using a heuristic algorithm (Zens et al., 2002).\nSuch heuristic algorithm is driven by the following\nconstraint: bilingual phrases must be consistent\nwith their corresponding word alignment matrix.\nA phrase pair constitutes a consistent bilingual\nphrase if all aligned words in the source phrase\nare aligned with words of the target phrase and\nvice versa. Figure 1 exemplifies this phrase extrac-\ntion process, together with the bilingual phrases\nextracted for a simple sentence. As shown, this\nprocess generates huge phrasetables with highly\nredundant phrase pairs.\nThe main purpose of this paper is to reduce\nthe extremely high redundancy in the amount of\nphrase-pairs that current state-of-the-art SMT sys-\ntems contain. With this purpose, we examine two\ndifferent methods to obtain one single segmenta-tion per sentence pair. These two methods rely on\nthe concept of bilingual segmentation.\n4.1 Bilingual segmentation\nIn SMT, the concept of bilingual segmentation can\nbe easily derived from a phrase-based alignment,\nwhich can be stated formally as follows let xbe\na source sentence and ythe corresponding target\nsentence in a bilingual corpus. A phrase-alignment\nbetween xandyis defined as a set Sof ordered\nsegment pairs included in P(x)×P(y), where\nP(x)andP(y)are the set of all subsets of con-\nsecutive sequences of words, of xandy, respec-\ntively. In addition, the ordered pairs contained in\nShave to include all the words of both the source\nand target sentences, without overlap. A phrase-\nbased alignment ˜A(x,y)of lengthKof a sentence\npair(x,y)is defined as a specific one-to-one map-\nping ˜abetweenP(y)andP(x). Then, the prob-\nlem of finding the best PB-alignment ˜AV(x,y)(or\nViterbi phrase-alignment) between xandycan be\nstated formally as\n˜AV(x,y) =argmax\n˜ap(˜a|x,y) (3)\nOne would suggest that we can perform a search\nprocess using a regular SMT system which fil-\nters its PT to obtain those translations of xthat\nare compatible with y. Unfortunately, such prob-\nlem cannot be easily solved, since standard esti-\nmation tools such as Thot (Ortiz et al., 2005) and\nMoses (Koehn et al., 2007) do not guarantee com-\nplete coverage of sentence pairs seen in training\ndue to the large number of heuristic decisions in-\nvolved in the estimation process. This means that it\nis often the case that the SMT system is not able to\nproduce the correct output sentence y. This prob-\nlem is exemplified in Figure 2. In this example,\nwhich has been extracted from a real training pro-\ncedure, only three phrase pairs will be extracted,\nand the remaining words will not be included into\nthe PT. It is shown that words such as cannot259\nFigure 2: Example of word alignment that results\nin coverage problems. Maximum phrase length of\n7 is assumed. Black squares represent word align-\nments, whereas extracted phrases are marked with\na rectangle involving one or more squares.\npresent multiple alignments. In order to include\ntarget word cannot within a consistent align-\nment, one would need to include word puedo into\nthe alignment, but including word puedo implies\nthat word Iis also included. 
Including Ialso\nforces the two commas to be included, together\nwith whatever words appear between both. Con-\ntinuing with this procedure leads to the necessity\nof including the whole sentence pair (except for the\nfinal dot) as a phrase before being able to include\ncannot into a consistent alignment. However,\ndue to performance reasons, it is quite common to\nrestrict the maximum length of the phrases to be\nextracted. If such maximum is set to e.g. 7, the\ncomplete sentence pair will not be included into\nthe system, and cannot will remain unknown de-\nspite having been observed in training.\nWe propose two different solutions to this prob-\nlem. The first one pursues the goal of obtaining\ntrue phrase-based alignments between xandy,\nwhereas the second one focuses on the primary\ngoal of this work, i.e. reducing the amount of bilin-\ngual phrases derived from each sentence pair, lead-\ning to a source-driven bilingual segmentation.\n4.2 True bilingual segmentation\nAs described in the previous section, coverage\nproblems inherent to state-of-the-art SMT systems\nimply that it is often impossible to obtain the\nViterbi segmentation of a given sentence pair. For\nthis reason, a possible way of overcoming such\ncoverage problems is proposed in (Ortiz-Mart ´ınez\net al., 2008). In their work, the main idea is toconsider every source phrase of xas a possible\ntranslation of every target phrase of y. For this\npurpose, a general mechanism to assign probabili-\nties to phrase pairs is needed, regardless if they are\ncontained in the phrasetable or not.\nSuch mechanism can be implemented by means\nof the application of smoothing techniques over\nthe phrasetable. As shown in (Foster et al., 2006),\nwell-known language model smoothing techniques\ncan be imported into the PB translation framework,\nand these can also be applied to obtain phrase-\nlevel alignments. According to (Ortiz-Mart ´ınez\net al., 2008), the best smoothing techniques com-\nbine a maximum likelihood phrase-based model\nstatistical estimator with a lexical distribution\nby means of linear interpolation or backing-off.\nThe lexical distribution uses an IBM 1 alignment\nmodel (Brown et al., 1993) that allows to de-\ncompose phrase-to-phrase translation probabilities\ninto word-to-word translation probabilities. In our\nexperiments, we have combined a phrase-based\nstatistical estimator with a lexical distribution by\nmeans of linear interpolation. In addition, (Ortiz-\nMart ´ınez et al., 2008) also proposes the use of a\nlog-linear model to control different aspects of the\nsegmentation, such as the number of phrases in\nwhich the sentences are divided, the length of the\nsource and the target phrases, the re-orderings and\nso on. In this work we have also adopted this strat-\negy. Hence Equation 3 can be rewritten as:\n˜AV(x,y) = argmax\n˜ap(˜a|x,y)\n=argmax\n˜ap(˜a,y|x)\np(x|y)\n=argmax\n˜ap(˜a,y|x) (4)\nAlthough it might seem that Equation 4 matches\nexactly the decoding problem in SMT, this is not\nso, since the maximisation takes place only over\nphrase-alignments, and is subject to the constraint\nthatyis the actual reference sentence given.\nOnce the scoring function for phrase pairs has\nbeen defined, a search algorithm to find the bilin-\ngual segmentations is required. 
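One possible form of such a smoothed scoring function is sketched below: a relative-frequency phrase probability linearly interpolated with an IBM Model 1 style lexical score, so that segment pairs absent from the phrasetable still receive a non-zero probability. The decomposition, the interpolation weight and all table entries here are illustrative assumptions, not the exact configuration used in the cited work.

```python
import math


def ibm1_lexical_score(src_phrase, tgt_phrase, lexicon):
    """IBM Model 1 style decomposition of a phrase pair into word probabilities:
    p(t | s) ~ prod_i (1 / (J + 1)) * sum_j p(t_i | s_j), with a NULL source word."""
    src = ["NULL"] + src_phrase
    score = 1.0
    for t in tgt_phrase:
        score *= sum(lexicon.get((t, s), 1e-6) for s in src) / len(src)
    return score


def smoothed_phrase_score(src_phrase, tgt_phrase, phrase_table, lexicon, lam=0.9):
    """Linear interpolation of the relative-frequency phrase probability with the
    lexical score, so that unseen segment pairs can still be scored."""
    key = (" ".join(src_phrase), " ".join(tgt_phrase))
    p_phrase = phrase_table.get(key, 0.0)
    p_lex = ibm1_lexical_score(src_phrase, tgt_phrase, lexicon)
    return lam * p_phrase + (1.0 - lam) * p_lex


# Invented toy models, for illustration only.
phrase_table = {("casa verde", "green house"): 0.5}
lexicon = {("green", "verde"): 0.7, ("house", "casa"): 0.8, ("the", "la"): 0.6}
for src, tgt in [("casa verde", "green house"), ("la casa", "the house")]:
    p = smoothed_phrase_score(src.split(), tgt.split(), phrase_table, lexicon)
    print(f"{src} ||| {tgt}: log p = {math.log(p):.2f}")
```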
For this purpose,\na search strategy based on the well-known stack-\ndecoding algorithm (Jelinek, 1969) can be used.\nThe bilingual segmentation procedure that has\nbeen described above allows us to compute one\ntrue segmentation for each sentence pair. Once the\nsegmentations for every sentence pair have been\ncomputed, it is possible to build a phrasetable by260\nonly taking into account those segments that are\ncontained in the set of true segmentations.\n4.3 Source-driven bilingual segmentation\nAs it has been explained in Section 4.1, computing\n˜AV(x,y)according to a given phrasetable is not\nan easy task. Specifically, the phrase alignments\ncannot often be generated due to coverage prob-\nlems of the phrase-based alignment model. In the\nprevious section it has been shown how to com-\npute a true phrase-alignment between two given\nsentences. However, such method must bear with\nthe constraint of having the output sentence fixed.\nAlthough such restriction seems logical at training\ntime, it should not be underestimated that this will\nnot be the case in translation time, and such re-\nstriction may introduce a non-intended bias. The\nbilingual segmentation technique described in Sec-\ntion 4.2 allows to overcome coverage problems by\ncombining smoothing techniques with an appro-\npriate search algorithm. This is done at the cost\nof modifying the scoring function used during the\nsearch process due to the application of smoothing\ntechniques, and also by introducing new segment\npairs. As said in Section 3, phrase-extraction is\ntypically done by a heuristic algorithm, which has\nproved to provide appropriate bilingual segments,\nand altering such segments may not be a good idea.\nSince our goal is to discard unnecessary seg-\nment pairs contained in the phrasetable, we pro-\npose an alternative bilingual segmentation tech-\nnique that obtains source-driven bilingual segmen-\ntations, by relaxing the restriction considered in\nEquation 4, leading to\n˜AV(x)≈argmax\n˜a,yp(˜a,y|x) (5)\nwhere the output sentence yis allowed to be dif-\nferent from the true reference, and the segmenta-\ntion has been induced by taking into account only\nthe input sentence. By using ˜AV(x)instead of\n˜AV(x,y), we ensure that only segments present in\nthe current phrasetable are used, and no new seg-\nments are introduced.\nThe maximisation described in Equation 5 is\nexactly the same problem as the one of finding\nthe best translation of a source sentence within a\nphrase-based system. Hence, for computing ˜awe\nsimply translate each source training sentence and\ninclude into the phrasetable those phrase pairs that\ncompose the output hypothesis. We are aware that\ntranslating the source sentence will not necessarilyproduce the target sentence in the training pair, but\non the other hand no artificial bilingual segments\nwill be introduced into the phrasetable. In addi-\ntion, as shown in Section 5, experiments show that\nthis approach might be good enough to prune the\nPT without a significant loss in translation quality.\n5 Experimental Setup\nBoth true and source-driven segmentations were\nconducted by means of a yet unpublished exten-\nsion of the Thot (Ortiz et al., 2005) toolkit, which\nfeatures a log-linear model and includes a state-of-\nthe-art decoder and a phrase-based aligner, used\nhere to obtain true alignments. 
Although such\ntoolkit does not include lexical-based probabilities\nor a lexical-based distortion model, Sanchis-Trilles\nand Casacuberta (2008) show that the relationship\nbetween the baseline system and the reduced sys-\ntem via source-driven segmentation also holds for\nthe Moses toolkit. The weights of the log-linear\nmodel were optimised by means of MERT (Och,\n2003). This log-linear model includes direct and\ninverse phrase-based translation models, a lan-\nguage model and word and phrase penalties.\nOnce the source-driven or true segmentation is\nobtained, the new phrase pairs were used to build\nnew phrasetables and new PBSFSTs. The proba-\nbilities assigned to the extracted segment pairs are\nobtained by normalising for the whole set of pa-\nrameters resulting from the segmentation process.\nAlthough PBSFSTs have the potential to use\na log-linear combination of features to estimate\nPr(y|x), they were only used here to model the\njoint probability distribution Pr(x,y), allowing us\nto determine the baseline associated to the segmen-\ntation method employed.\n5.1 System evaluation and corpora\nIn this work, we measure the translation quality by\nmeans of BLEU and TER scores. BLEU measures\nthe precision of n-grams (Papineni et al., 2001),\nwhereas TER (Snover et al., 2006) is an error met-\nric that computes the minimum number of edits\nrequired to modify the system hypotheses so that\nthey match the references. In addition to this, we\nwill also report the number of parameters that are\nused by the translation system and the speedup of\nthe proposed system with respect to a conventional\nsystem. We define the speedup by means of the\nformulaSp=Tb/Tr, whereTbis the time taken\nby the baseline system and Tris the time taken by261\nSubset features De En Es EnTrainingSentences 751k 731k\nRun. words 15.3M 16.1M 15.7M 15.2M\nMean length 20.3 21.4 21.5 20.8\nV ocabulary 195k 66k 103k 64kDev.Sentences 2000 2000\nRun. words 55k 59k 61k 59k\nMean length 27.6 29.3 30.3 29.3\nOoV words 432 125 208 127TestSentences 3064 3064\nRun. words 82k 85k 92k 85k\nMean length 26.9 27.8 29.9 27.8\nOoV words 1020 488 470 502\nTable 1: Main figures of the Europarl corpus. OoV\nstands for Out of V ocabulary, k for thousands of\nelements, and M for millions of elements.\nthe system with reduced PT.\nWe conducted our experiments on the Europarl\ncorpus (Koehn, 2005), with the partition estab-\nlished in the Workshop on SMT of NAACL\n2006 (Koehn and Monz, 2006). The Europarl cor-\npus (Koehn, 2005) is built from the proceedings\nof the European Parliament published on the web,\nand was acquired in eleven different languages.\nWe will only focus on the German–English (De–\nEn) and Spanish–English (Es–En) tasks, since ex-\nperiments with other language pairs yielded sim-\nilar results. The corpus is divided into four sep-\narate sets: one for training, one for development,\none for test and another test set which was the one\nused in the workshop for the final evaluation and\nincluded a surprise out-of-domain subset. We per-\nformed experiments on both test sets, yielding sim-\nilar results for both of them. Because of this, and to\navoid an overwhelming number of results, we only\nreport those results obtained with the final evalua-\ntion test set, being these more interesting because\nof the out-of-domain data involved. The figures of\nthe corpus are shown in Table 1.\n6 Results\nIn the tables shown in this section, sizes are given\nin number of entries in the PT or number of tran-\nsitions of PBSFSTs. 
Speed is reported in words\nper second ( w/s), andSpstands for speedup , as\ndescribed in Section 5.1.\nConfidence intervals at a confidence level of\n95% were computed, following the bootstrap tech-\nnique described by Koehn (2004). These turned tobe, in every case and for BLEU and TER, around\n0.65 points, and are omitted for the sake of clarity.\n6.1 Phrase-based models\nWe carried out translation experiments using both\nsource-driven bilingual segmentation and true\nbilingual segmentation. Results for both propos-\nals and baseline system are displayed in Table 2.\nIn the case of the source-driven segmentation,\ntranslation quality is not significantly affected by\nthe reduction of the size of the phrasetable we pro-\npose. On the one hand BLEU scores, are slightly\nlower than those of the baseline system, although\nconfidence tests show that these differences are not\nstatistically significant. On the other hand, TER\nscores seem to remain completely unaltered, even\nthough a very slight variation can be observed\nAs for the number of parameters of the models\nused, it can be seen that such number is reduced\nin two orders of magnitude, i.e. the number of pa-\nrameters remaining in the phrase table after apply-\ning our pruning technique is only around 2% the\noriginal number of parameters. Moreover, transla-\ntion speed is increased by a factor between 9 and\n16, all this without a significant loss in translation\nquality.\nIn the case of true segmentation, and as opposed\nto source-driven segmentation, translation quality\ndoes drop significantly (although not consistently)\nwith respect to the baseline, ranging from 0.5to\n4.4BLEU points and from 0.2to5.1TER points.\n6.2 Phrase-based SFSTs\nSince our PBSFST estimation framework is based\non the use of monotonic bilingual segmentations,\nthere is no chance for the above-mentioned base-\nline setup to be applied given that it relies on mul-\ntiple overlapping segmentations for each bilingual\nsentence pair. However, both segmentation tech-\nniques proposed here could actually be employed.\nAs Section 6.1 has shown that source-driven\nsegmentation method performs best, only these\nexperiments were carried out then for PBSFSTs.\nThe corresponding results are presented in Table 3.\nAlthough baseline PB models are able to pro-\nvide better translation quality, it must be stressed\nthat, as described in Section 5, PBSFSTs were\nused to take into account only one feature model\nwhereas PB models were a combination of five.\nTherefore, the differences between PBSFSTs and\nPB models may be welcome as an interesting262\nBaseline Source-driven True\nPair BLEU TER size w/s BLEU TER size w/s S pBLEU TER size w/s S p\nEs–En 28.2 56.0 5.0 93 27.5 56.2 0.05 1500 16 23.8 60.8 0.07 380 4\nEn–Es 27.6 56.6 5.1 76 27.2 56.6 0.12 700 9 24.7 60.1 0.16 250 3\nDe–En 21.6 64.8 4.2 100 21.1 64.8 0.06 1500 15 17.5 69.9 0.22 280 3\nEn–De 15.2 70.9 5.5 46 15.1 70.2 0.14 400 9 14.7 71.1 0.31 170 4\nTable 2: Translation quality, number of model parameters, number of translated words per second and\nspeedup (Sp) obtained when using a PB translation system for both source-driven and true segmentation\ntechniques. Monotonic search was considered. 
6.2 Phrase-based SFSTs
Since our PBSFST estimation framework is based on the use of monotonic bilingual segmentations, the above-mentioned baseline setup cannot be applied, given that it relies on multiple overlapping segmentations for each bilingual sentence pair. However, both segmentation techniques proposed here could be employed.
Since Section 6.1 showed that the source-driven segmentation method performs best, only those experiments were carried out for the PBSFSTs. The corresponding results are presented in Table 3.

          Source-driven
Pair      BLEU  TER   size   w/s      Sp
Es–En     25.8  58.2  0.12   91730    986
En–Es     25.3  59.0  0.23   28411    374
De–En     18.8  68.3  0.12   41249    412
En–De     13.0  74.1  0.28   14205    309

Table 3: Translation quality, number of model parameters and number of translated words per second for the source-driven segmentation technique when using a PBSFST translation system. The size of the PBSFSTs is given in millions of single-word edges.

Although the baseline PB models are able to provide better translation quality, it must be stressed that, as described in Section 5, the PBSFSTs were used with only one feature model, whereas the PB models were a combination of five. Therefore, the differences between PBSFSTs and PB models may be welcome as an interesting trade-off to achieve acceptable quality with a further increase in translation speed. It must be remarked that the PBSFSTs are able to translate any of the test sets in just a few seconds (vs. tens of minutes taken by the baseline PB models).
7 Discussion and conclusions
In this paper, we have presented a technique to reduce the size of the phrasetables used in state-of-the-art SMT systems. Our approach consists of selecting the phrase pairs given by the most probable segmentation of the training sentences. We propose two different segmentation techniques. Both make it possible to obtain substantial reductions in the size of the phrasetables as well as in the time cost of the translation process. In particular, source-driven segmentation leads to important improvements in decoding speed without a significant loss in translation quality. We think that the reductions in the spatial and time costs of the proposed techniques can significantly help to implement state-of-the-art translation models on hand-held devices.
It is worth noting that, unexpectedly, in the experiments we carried out the true bilingual segmentation technique obtained worse results than the source-driven segmentation technique.
One key difference between the two proposed techniques lies in how similar the pruned phrasetables they produce are to the original phrasetable. Although true bilingual segmentation makes it possible to obtain a complete segmentation of the source and target sentences, this comes at the cost of introducing smoothing techniques. Hence, the resulting segmentations contain phrase pairs that are not present in the original phrasetable. In the experiments we carried out, the pruned phrasetables generated by the true bilingual segmentation contained a relatively high number of phrase pairs that were not present in the original phrasetables, ranging from 10% to 50% depending on the language pair. In contrast, the source-driven bilingual segmentation, since it merely consists in translating the source sentence, always generates a pruned phrasetable that is a true subset of the original phrasetable. This suggests that the true segmentation technique not only prunes the original phrasetable, but also plays an important role in the estimation of new model parameters, which could be the reason for the degradation of the translation quality.
Nev-\nertheless, a further analysis of the impact of the\nsmoothing techniques used by true bilingual seg-\nmentation is required to better understand why this\ntechnique is not performing as expected.\nAcknowledgements\nThis paper is based upon work supported by the\nEC (FEDER/FSE) and the Spanish MICINN un-\nder projects MIPRCV “Consolider Ingenio 2010”\n(CSD2007-00018) and iTrans2 (TIN2009-14511).\nAlso supported by the Spanish MITyC under\nthe erudito.com (TSI-020110-2009-439) project,\nby the Generalitat Valenciana under grant Prom-\neteo/2009/014, and by the UPV under grant\n20091027.\nThe authors would also like to thank the anony-\nmous reviewers for their constructive and detailed\ncomments.263\nReferences\nBrown, P.F., S.A. Della Pietra, V .J. Della Pietra, and\nR.L. Mercer. 1993. The mathematics of ma-\nchine translation. In Computational Linguistics , vol-\nume 19, pages 263–311, June.\nCasacuberta, F. and E. Vidal. 2004. Machine transla-\ntion with inferred stochastic finite-state transducers.\nComputational Linguistics , 30:205–225, June.\nEck, M., S. V ogel, and A. Waibel. 2007. Translation\nmodel pruning via usage statistics for statistical ma-\nchine translation. In Proc. of the North American\nChapter of the Association for Computational Lin-\nguistics , pages 21–24.\nFoster, G., R. Kuhn, and H. Johnson. 2006. Phrasetable\nsmoothing for statistical machine translation. In\nProc. of Empirical Methods in Natural Language\nProcessing , pages 53–61.\nGonz ´alez, J., G. Sanchis, and F. Casacuberta. 2008.\nLearning finite state transducers using bilingual\nphrases. In Proc. of Computational linguistics and\nintelligent text processing , pages 411–422.\nJelinek, F. 1969. Fast sequential decoding algorithm\nusing a stack. IBM Journal of Research Develop-\nment , 13:675–685, November.\nJohnson, J.H., J. Martin, G. Foster, and R. Kuhn. 2007.\nImproving translation quality by discarding most of\nthe phrasetable. In Proc. of Empirical methods in\nnatural language processing , pages 967–975.\nKoehn, P. and C. Monz. 2006. Manual and automatic\nevaluation of machine translation between european\nlanguages. In Proc. of Workshop on Statistical Ma-\nchine Translation , pages 102–121, June.\nKoehn, P., H. Hoang, A. Birch, C. Callison-Burch,\nM. Federico, N. Bertoldi, B. Cowan, W. Shen,\nC. Moran, R. Zens, C. Dyer, O. Bojar, A. Constantin,\nand E. Herbst. 2007. Moses: open source toolkit for\nstatistical machine translation. In Proc. of Associa-\ntion for Computational Linguistics , pages 177–180.\nKoehn, P. 2004. Statistical significance tests for ma-\nchine translation evaluation. In Proc. of Empirical\nmethods in natural language processing , pages 388–\n395.\nKoehn, P. 2005. Europarl: A Parallel Corpus for Statis-\ntical Machine Translation. In Proc. of the Machine\nTranslation Summit , pages 79–86.\nMarcu, D. and W. Wong. 2002. A phrase-based, joint\nprobability model for statistical machine translation.\nInProc. of Empirical methods in natural language\nprocessing , pages 133–139.\nOch, F.J. and H. Ney. 2000. Improved statistical align-\nment models. In Proc. of Association for Computa-\ntional Linguistics , pages 440–447.Och, F.J. and H. Ney. 2002. Discriminative training\nand maximum entropy models for statistical machine\ntranslation. In Proc. of Association for Computa-\ntional Linguistics , pages 295–302.\nOch, F.J. and H. Ney. 2003. A systematic comparison\nof various statistical alignment models. Computa-\ntional Linguistics , 29:19–51, March.\nOch, F.J. 2003. 
Minimum error rate training for statis-\ntical machine translation. In Proc. of Association for\nComputational Linguistics , pages 160–167, July.\nOrtiz, D., I. Garc ´ıa-Varea, and F. Casacuberta. 2005.\nThot: a toolkit to train phrase-based statistical trans-\nlation models. In Proc. of the Machine Translation\nSummit , pages 141–148.\nOrtiz-Mart ´ınez, D., I. Garc ´ıa-Varea, and F. Casacu-\nberta. 2008. Phrase-level alignment generation\nusing a smoothed loglinear phrase-based statistical\nalignment model. In Proc. of European Association\nfor Machine Translation , pages 158–167.\nPapineni, K., S. Roukos, T. Ward, and W. Jing-Zhu.\n2001. Bleu: A method for automatic evaluation\nof machine translation. In IBM Research Report\nRC22176 (W0109-022) .\nSanchis-Trilles, G. and F. Casacuberta. 2008. Increas-\ning translation speed in phrase-based models via sub-\noptimal segmentation. In Proc. of Workshop on Pat-\ntern Recognition in Information Systems , pages 135–\n143.\nSnover, M, B. Dorr, R. Schwartz, L. Micciulla, and\nJ. Makhoul. 2006. A study of translation edit rate\nwith targeted human annotation. In Proc. of Associ-\nation for Machine Translation in the Americas , pages\n223–231.\nTomas, J. and F. Casacuberta. 2001. Monotone statis-\ntical translation using word groups. In Proc. of the\nMachine Translation Summit , pages 357–361.\nVidal, E., F. Thollard, F. Casacuberta C. de la Higuera,\nand R. Carrasco. 2005. Probabilistic finite-state ma-\nchines - part II (in Section IV-A). IEEE Transac-\ntions on Pattern Analysis and Machine Intelligence ,\n27(7):1025–1039.\nV ogel, S. 2005. PESA: Phrase Pair Extraction as Sen-\ntence Splitting. In Proc. of the Machine Translation\nSummit , pages 251–258.\nWuebker, J., A. Mauser, and H. Ney. 2010. Training\nphrase translation models with leaving-one-out. In\nProc. of Association for Computational Linguistics ,\npages 475–484.\nZens, Richard, Franz Josef Och, and Hermann Ney.\n2002. Phrase-based statistical machine translation.\nInProc. of Advances in Artificial Intelligence , pages\n18–32.264", "main_paper_content": null }
{ "decision": "Unknown", "reviews": [] }
0
0
[]
[]
{ "id": "Raa02FcJgX5", "year": null, "venue": "EAMT 2008", "pdf_link": "https://aclanthology.org/2008.eamt-1.22.pdf", "forum_link": "https://openreview.net/forum?id=Raa02FcJgX5", "arxiv_id": null, "doi": null }
{ "title": "Phrase-level alignment generation using a smoothed loglinear phrase-based statistical alignment model", "authors": [ "Daniel Ortiz-Martínez", "Ismael García-Varea", "Francisco Casacuberta" ], "abstract": null, "keywords": [], "raw_extracted_content": "Phrase-level alignment generation using a\nsmoothed\nloglinear phrase-based statistical\nalignment model\nDaniel Ortiz-Mart´ ınez1, Ismael Garc´ ıa-Varea2, and Francisco Casacuberta1\n1Dpto. de Sist. Inf. y Comp., Univ. Polit´ ecnica de Valencia, 46071 Valencia, Spain\[email protected] [email protected]\n2Dpto. de Sist. Inf., Univ. de Castilla-La Mancha, 02071 Albacete, Spain\[email protected]\nAbstract. We present a phrase-based statistical alignment model togheter\nwith a set of different smoothing techniques to be applied when the best\nphrase-to-phrase alignment for a pair of sentences is to be computed.\nWe follow a loglinear approach, which allows us to introduce different\nscoring functions to control specific aspects of phrase-level alignments.\nExperimental results for a well-known shared task on word alignment\nevaluation are reported, showing the great importance of smoothing in\nthe generation of alignments. As a step forward, we also discuss the adap-\ntation of the proposed model for its use in a CAT (Computer Assisted\nTranslation) system.\n1 Introduction\nStatistical Machine translation (SMT) is an area of great interest in the NLP\ncommunity that deals with the transformation of text or speech from a source\nlanguage into a target language.\nFrom a purely statistical point of view, the translation process can be for-\nmulated as follows: A source language string fis to be translated into a target\nlanguage string e. Every target string is regarded as a possible translation for\nthe source language string with maximum a-posteriori probability Pr(e|f). Ac-\ncording to Bayes’ theorem, the target string ˆethat maximizes the product of\nboth the target language model Pr(e) and the string translation model Pr(f|e)\nmust be chosen. The equation that models this process is:\nˆeI\n1= arg max\ne/braceleftbig\nPr(e)·Pr(f|e)/bracerightbig\n(1)\nState-of-the-art statistical translation systems follow a phrase-based approach,\nthat is, the structural relations between source and target sentences are captured\nby means of phrases instead of isolated words.\nIn this paper we tackle the problem of generating alignments at phrase level\nby means of smoothed phrase-based statistical alignment models. As far as we\nknow the problem of finding the best alignment at phrase level has not been ex-\ntensively addressed in the literature. For example, in [1] three different techniques\n12th EAMT conference, 22-23 September 2008, Hamburg, Germany\n160\nfor obtaining phrase-level alignments are compared, but there is no mention at\nall of\nthe phrase-level aligment coverage problems that arise when real tasks and\napplications are used.\nDifferent applications can benefit from the techniques proposed here, ranging\nfrom phrase-based SMT systems to machine-aided NLP tools, as for example\nCAT [2]. Under the CAT framework, we are given a source sentence and a prefix\nof the target sentence, and the goal is to obtain the best suffix that constitutes\na complete translation. Then the first problem to be solved is how to align\nthe given prefix with the corresponding portion of the source sentence. 
2 Phrase-based SMT
Different translation models have been proposed depending on how the relation between the source and the target languages is structured; that is, the way a target sentence is generated from a source sentence. This relation is summarized using the concept of alignment; that is, how the constituents (typically words or groups of words) of a pair of sentences are aligned with each other.
For the translation model, Pr(f|e), in Eq. (1), Phrase-based Translation (PBT) can be explained from a generative point of view as follows [3]:
1. The target sentence e is segmented into K phrases (\tilde{e}_1^K).
2. Each target phrase \tilde{e}_k is translated into a source phrase \tilde{f}.
3. Finally, the source phrases are reordered in order to compose the source sentence \tilde{f}_1^K = f.
In PBT, it is assumed that the relations between the words of the source and target sentences can be explained by means of the hidden variable \tilde{a}_1^K, which contains all the decisions made during the generative story:

\Pr(f|e) = \sum_{K,\tilde{a}_1^K} \Pr(\tilde{f}_1^K, \tilde{a}_1^K | \tilde{e}_1^K) = \sum_{K,\tilde{a}_1^K} \Pr(\tilde{a}_1^K | \tilde{e}_1^K) \, \Pr(\tilde{f}_1^K | \tilde{a}_1^K, \tilde{e}_1^K)    (2)

where each \tilde{a}_k \in \{1, \ldots, K\} denotes the index of the target phrase \tilde{e} that is aligned with the k-th source phrase \tilde{f}_k.
Different assumptions can be made from the previous equation. For example, in [3] all possible segmentations have the same probability, and in [4] it is also assumed that the alignments must be monotonic. In both cases, the model parameters that have to be estimated are the translation probabilities between phrase pairs ({p(\tilde{f}|\tilde{e})}), which are typically estimated via relative frequencies as p(\tilde{f}|\tilde{e}) = N(\tilde{f},\tilde{e}) / N(\tilde{e}), where N(\tilde{f},\tilde{e}) is the number of times that \tilde{f} has been seen as a translation of \tilde{e} within the training corpus.
According to Eq. (2), and following a maximum approximation, the problem stated in Eq. (1) can be reframed as:

\hat{e} \approx \arg\max_{e,a} \{ p(e) \cdot p(f,a|e) \}    (3)

State-of-the-art statistical machine translation systems model p(f,a|e) following a loglinear approach [5], that is:

p(f,a|e) \propto \exp\left[ \sum_i \lambda_i f_i(f,e,a) \right]    (4)

where each f_i(f,e,a) is a feature function, and the weights \lambda_i are optimized using a minimum error rate training (MERT) criterion [6] to optimize a particular quality metric (for example, maximizing the BLEU metric for translation quality, or minimizing the Alignment Error Rate (AER) for alignment quality) on a development corpus.
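To make the log-linear formulation of Eq. (4) concrete, the sketch below combines a few weighted feature values for one candidate alignment; the particular features, values, and weights are placeholders rather than the ones actually estimated in this work (where the weights come from MERT).

```python
def loglinear_score(feature_values, weights):
    """Unnormalised log-linear score: sum_i lambda_i * f_i(f, e, a)."""
    return sum(weights[name] * value for name, value in feature_values.items())

# Hypothetical feature values for one candidate phrase alignment
# (log-probabilities summed over its phrase pairs, plus a length term).
feature_values = {
    "log_p_direct": -4.2,   # sum_k log p(e~_{a_k} | f~_k)
    "log_p_inverse": -3.9,  # sum_k log p(f~_k | e~_{a_k})
    "num_phrases": 5.0,     # segmentation length
}
weights = {"log_p_direct": 0.5, "log_p_inverse": 0.5, "num_phrases": -0.1}
print(loglinear_score(feature_values, weights))
```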
3 Phrase-based alignments
The problem of finding the best alignment at phrase level has not been extensively addressed in the literature. A first attempt can be found in [1]. The concept of phrase-based alignment can be stated formally as follows:
Let f ≡ f_1, f_2, ..., f_J be a source sentence and e ≡ e_1, e_2, ..., e_I the corresponding target sentence in a bilingual corpus. A phrase-alignment between f and e is defined as a set S of ordered pairs included in P(f) × P(e), where P(f) and P(e) are the sets of all subsets of consecutive sequences of words of f and e, respectively. In addition, the ordered pairs contained in S have to include all the words of both the source and target sentences.
A phrase-based alignment of length K (\tilde{A}_K) of a sentence pair (f,e) is defined as a triple \tilde{A}_K ≡ (\tilde{f}_1^K, \tilde{e}_1^K, \tilde{a}_1^K), where \tilde{a}_1^K is a specific one-to-one mapping between the K segments/phrases of both sentences (1 ≤ K ≤ min(J,I)).
Then, given a pair of sentences (f,e) and a phrase-based alignment model, we have to obtain the best phrase-alignment \tilde{A}_K (or Viterbi phrase-alignment V(\tilde{A}_K)) between them. Assuming a phrase-alignment of length K, V(\tilde{A}_K) can be computed as:

V(\tilde{A}_K) = \arg\max_{\tilde{A}_K} \{ p(\tilde{f}_1^K, \tilde{a}_1^K | \tilde{e}_1^K) \}    (5)

where, following the assumptions of [3], \Pr(\tilde{f}_1^K, \tilde{a}_1^K | \tilde{e}_1^K) can be efficiently computed as:

p(\tilde{f}_1^K, \tilde{a}_1^K | \tilde{e}_1^K) = \prod_{k=1}^{K} p(\tilde{f}_k | \tilde{e}_{\tilde{a}_k})    (6)

On the basis of Eq. (6), a very straightforward technique can be proposed for finding the best phrase-alignment of a sentence pair (f,e). This can be conceived as a sort of constrained translation. In this way, the search process only requires the use of a regular SMT system which filters its phrase-table in order to obtain those translations of f that are compatible with e.
In spite of its simplicity, this technique has no practical interest when applied to regular tasks. Specifically, the technique is not applicable when the alignments cannot be generated due to coverage problems of the phrase-based alignment model (i.e. one or more phrase pairs required to compose a given alignment have not been seen during the training process). This problem cannot be easily solved, since standard estimation tools such as THOT [7] and MOSES [8] do not guarantee the complete coverage of sentence pairs even if they are included in the training set; this is due to the great number of heuristic decisions involved in the estimation process.
One possible way to overcome the above-mentioned coverage problems requires the definition of an alternative technique that is able to consider every source phrase of f as a possible translation of every target phrase of e. Such a technique requires the following two elements:
1. A general mechanism to assign probabilities to phrase pairs, no matter whether they are contained in the phrase-table or not
2. A search algorithm that enables efficient exploration of the set of possible phrase-alignments for a sentence pair
The general mechanism for assigning probabilities to phrase pairs can be implemented by means of the application of smoothing techniques over the phrase-table. As shown in [9], well-known language model smoothing techniques can be imported into the PBT framework. As will be shown in Section 4, the PBT smoothing techniques described in [9] can also be adapted to the generation of phrase-based alignments.
Regarding the search algorithm to be used, different search strategies can be adopted, for example dynamic-programming-based or branch-and-bound algorithms. In this study, a branch-and-bound search strategy has been adopted. Our branch-and-bound search algorithm attempts to iteratively expand partial solutions, called hypotheses, until a complete phrase-alignment is found. The hypotheses are stored in a stack and ordered by their score. Since the number of possible alignments for a given sentence pair may become huge, it is necessary to apply heuristic prunings in order to reduce the search space. Such heuristic prunings include limiting the maximum number of hypotheses that can be stored in the stack, and also the maximum number of different target phrases that can be linked to an unaligned source phrase when expanding a partial hypothesis.
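The stack-based search just described can be sketched roughly as follows; this is a deliberately simplified illustration with placeholder scoring and expansion functions, not the actual implementation, with the two prunings mentioned above exposed as parameters.

```python
import heapq
from itertools import count

def phrase_align(src_words, trg_words, score_fn, expand_fn,
                 max_stack=100, max_expansions=10):
    """Best-first search over partial phrase alignments (hypotheses).
    score_fn(src_phrase, trg_phrase) -> log-probability of the phrase pair;
    expand_fn(src_words, trg_words, src_pos, covered) -> candidate
    (src_phrase, trg_indices, trg_phrase) expansions, best first."""
    tie = count()  # tiebreaker so the heap never compares hypothesis states
    heap = [(0.0, next(tie), 0, frozenset(), ())]
    while heap:
        neg_logp, _, src_pos, covered, pairs = heapq.heappop(heap)
        if src_pos == len(src_words) and len(covered) == len(trg_words):
            return -neg_logp, pairs  # first complete hypothesis popped
        for src_phrase, trg_idx, trg_phrase in expand_fn(
                src_words, trg_words, src_pos, covered)[:max_expansions]:
            new_logp = -neg_logp + score_fn(src_phrase, trg_phrase)
            heapq.heappush(heap, (-new_logp, next(tie),
                                  src_pos + len(src_phrase),
                                  covered | set(trg_idx),
                                  pairs + ((src_phrase, trg_phrase),)))
        if len(heap) > max_stack:  # prune the stack to the best hypotheses
            heap = heapq.nsmallest(max_stack, heap)
            heapq.heapify(heap)
    return None
```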
3.1 A loglinear approach to phrase-to-phrase alignments
The score for a given alignment can be calculated according to Eq. (6). This scoring function has an important disadvantage: it does not allow control of basic aspects of the phrase alignment, such as the lengths of the source and target phrases and the reorderings of phrase alignments. This problem can be alleviated following the approach stated in Eq. (4), thus introducing different feature functions as scoring components in a log-linear fashion.
We propose the following set of feature functions:
– f_1(f,a,e) = \prod_{k=1}^{K} p(\tilde{e}_{\tilde{a}_k} | \tilde{f}_k): direct phrase model probability
– f_2(f,a,e) = \prod_{k=1}^{K} p(\tilde{f}_k | \tilde{e}_{\tilde{a}_k}): inverse phrase model probability
– f_3(f,a,e) = \prod_{k=1}^{K} p(|\tilde{e}_k|): target phrase length model. This component can be modeled by means of a uniform distribution (penalizes the length of the segmentation) or a geometric distribution (penalizes the length of the source phrases)
– f_4(f,a,e) = \prod_{k=1}^{K} p(\tilde{f}_k | \tilde{f}_{k-1}): distortion model. This component is typically modeled by means of a geometric distribution (penalizes the reorderings)
– f_5(f,a,e) = \prod_{k=1}^{K} p(|\tilde{f}_k| \mid |\tilde{e}_{\tilde{a}_k}|): source phrase length model given the length of the target phrase. This component can be modeled by means of different distributions: uniform (does not take into account the relationship between the lengths of the source and target phrases), Poisson or geometric
The corresponding weights \lambda_i, i \in \{1, 2, \ldots, 5\}, can be computed by means of MERT training.
Regarding the probability distributions used to model feature functions f_3, f_4 and f_5, we tested all possible combinations of uniform, geometric, and Poisson distributions in the experiments that we describe in Section 5.
3.2 Application to CAT
The technique presented above for generating complete phrase-alignments can be easily adapted for the generation of partial alignments. As was mentioned in Section 1, a good example in which the generation of partial alignments is required is the Computer Assisted Translation (CAT) framework. Under this framework, we are given a source sentence f and a prefix of the target sentence, which we will call p, and the goal is to obtain the best suffix of p that constitutes a complete translation of f. The generation of the suffix in CAT can be seen as a two-stage process: first, we partially align the prefix p with only a part of f, and second, we translate the unaligned portion of f (if any). The formalism presented at the beginning of this section requires few modifications to allow the generation of partial alignments. Specifically, given f and p we have to obtain the set S′ of ordered pairs that contains all the words of p and only a subset of the words of f.
4 Smoothing techniques
As was mentioned in Section 3, the application of smoothing techniques is crucial in the generation of phrase-alignments.
Although smoothing is an important is-\nsue in language modeling and other areas of statistical NLP (see for example [10]\nfor more details), it has not received much attention from the SMT community.\nHowever, most of the well-known language model smoothing techniques can be\nimported to the SMT field and specifically to the PBT framework, as it is shown\nin [9].\nIn spite of the fact that PBT and the generation of phrase-alignments are\nsimilar tasks, it should be noted that the two problems differ in a key aspect.\n12th EAMT conference, 22-23 September 2008, Hamburg, Germany\n164\nWhile in PBT the probabilities of unseen events are not important (since the de-\ncoder only\nproposes phrase translations that are contained in the model, see [9]),\nin the generation of phrase alignments, assigning probabilities to unseen events\nis one of the most important problems that has to be solved.\nIn the rest of this section, we describe the smoothing techniques that we have\nimplemented. They are very similar to those proposed in [9], although in our case\nwe have strongly focused on the appropriate treatment of unseen events.\n4.1 Statistical estimators\nTraining data can be exploited in different ways to estimate statistical models.\nRegarding the phrase-based models, the standard estimation technique is based\non the relative frequencies of the phrase pairs (see section 2). Taking this stan-\ndard estimation technique as a starting point, a number of alternative estimation\ntechniques can be derived.\nPhrase-based model estimators We have implemented the following estima-\ntion techniques for phrase-based models:\n–Maximum-likelihood estimation (ML)\n–Good-Turing estimation (GT)\n–Absolute-discount estimation (AD)\n–Kneser-Ney smoothing (KN)\n–Simple discount (SD)\nAs was mentioned above, ML estimation uses the concept of relative fre-\nquency as a probability estimate. Once the counts of the phrase pairs have\nbeen obtained, three different well-known estimation techniques can be applied,\nnamely, GT estimation and two estimation techniques based on the subtraction\nof a fixed quantity from all non-zero counts: AD estimation and KN estimation\n(see [9] for more details). In addition, we have implemented a very simple esti-\nmation technique (labeled as SD) which works in a similar way to AD estimation\nbut it subtracts a fixed probability mass instead of a fixed count.\nLexical distributions A good way to tackle the problem of unseen events is the\nuse of probability distributions that decompose phrases into words. Two different\ntechniques are mentioned in [9] for this purpose: the noisy-or and an alternative\ntechnique which is based on alignment matrices. In our work we have applied\nanother technique which consists in obtaining the IBM 1 model probability as\ndefined in [11] for phrase pairs instead of sentence pairs (this distribution will\nbe referred to as LEX).\n4.2 Combining estimators\nThe statistical estimators described in the previous subsection can be combined\nin the hope of producing better models. In our work we have chosen three dif-\nferent techniques for combining estimators:\n12th EAMT conference, 22-23 September 2008, Hamburg, Germany\n165\n–Linear interpolation\n–Backing-off\n–Log-linear\ninterpolation\nThe linear interpolation technique consists of making a linear combination of\ndifferent estimators, ensuring that the weights of such combination determine a\nprobability function. We have implemented linear combinations of two estima-\ntors. 
One of them is a phrase-based model estimator and the second one is the\nlexical distribution described in section 4.1. This combination scheme has been\nspecially chosen to deal with unseen events.\nThe backing-off combination technique consults different models in order\ndepending on their specificity. Again, we have implemented backoff models which\ncombine two different estimators in the same way as has been described for the\ncase of linear interpolation. In this particular case, only GT and SD estimation\nwere implemented.\nFinally, phrase-based model estimators and lower order distributions can also\nbe combined by means of log-linear interpolation. In this case, the procedure\nconsists of adding new components to the initial log-linear model described in\nsection 3.1. Again, the main goal of the combination is to achieve good treatment\nof unseen events. For this purpose, lexical distributions in both directions are\nincorporated into the log-linear model as score components. In this case, only\nGT estimation was implemented.\n5 Experimental Results\nDifferent experiments were carried out in order to assess the proposed phrase-\nto-phrase alignment smoothing techniques.\n5.1 Corpora and evaluation\nThe experiments consisted of obtaining phrase-to-phrase alignments between\npairs of sentences following the different smoothing techniques described in the\nprevious section. Specifically, a test set containing several sentence pairs to be\naligned was used. The test set was taken from the shared tasks in word align-\nments developed in HLT/NAACL 2003 [12]. This shared task involved four dif-\nferent language pairs, but we only used English-French in our experiments.\nA subset of the Canadian Hansards corpus was used in the English-French\ntask. The English-French corpus is composed of 447 English-French test sen-\ntences and about a million training sentences.\nWe were interested in evaluating the quality of the phrase-to-phrase align-\nments obtained with the different phrase alignment smoothing techniques that\nwe proposed. Unfortunately, there does not exist a gold standard for phrase align-\nments, so we needed to refine the obtained phrase alignments to word alignments\nin order to compare them with other existing word alignment techniques.\nTaking these considerations into account, we proceeded as follows: Given a\npair of sentences to be aligned we first aligned them at phrase level, obtaining\n12th EAMT conference, 22-23 September 2008, Hamburg, Germany\n166\na phrase-to-phrase alignment. Afterwards, we obtained a word-to-word IBM1\nalignment\nfor each pair of aligned phrases. Finally, these “intra-phrase” word\nalignments were joined, resulting in a word level alignment for the whole sen-\ntence. 
We could thus make a fair comparison of the proposed smoothing techniques with the ones presented in the HLT/NAACL 2003 shared task.
To evaluate the quality of the final alignments obtained, different measures were taken into account: Precision, Recall, F-measure, and Alignment Error Rate. Given an alignment A and a reference alignment G (both A and G can be split into two subsets A_S, A_P and G_S, G_P, respectively representing Sure and Probable alignments), Precision (P_S, P_P), Recall (R_S, R_P), F-measure (F_S, F_P) and Alignment Error Rate (AER) were computed (see [12] for more details).
5.2 Alignment quality results
As described in [12], two different sets of evaluations were conducted:
– NULL alignments: given a word alignment a for a pair of sentences (f,e), if a word f_j (j \in \{1, \ldots, |f|\}) is not aligned with any e_i (i \in \{1, \ldots, |e|\}), or vice versa, that word is aligned with the NULL word.
– NO-NULL alignments: NULL alignments are removed from the test set and from the obtained alignments.
Table 1 presents the alignment quality results obtained with the different phrase-to-phrase alignment smoothing techniques, for NO-NULL and NULL alignments. It is worth mentioning that the figures for Sure alignments are identical for NO-NULL and NULL alignments. In the table, the first row shows the baseline, which consists of the results obtained using maximum likelihood estimation (ML) without smoothing. The rest of the rows correspond to different estimation techniques combined with linear interpolation, except in those cases where a back-off (BO) or a log-linear interpolation (LL) was used.
For the NO-NULL alignment experiment, significant improvements in all alignment quality measures were obtained for all the smoothing techniques compared with the baseline. The baseline system results were worse due to the great number of times in which the segmentation of a sentence pair could not be completed due to coverage problems (in our experiments, 86.5% of the test pairs presented this problem); in such situations, the baseline system aligned all the words of the source sentence with all the words of the target sentence. Finally, it is worth pointing out that all the experiments that included the LEX distribution outperformed the others, due to improved assignment of probabilities to unseen events.
With respect to the probability distributions used to model feature functions f_3 and f_5, we show the results corresponding to the use of a uniform distribution for f_3 and a geometric distribution for f_5, since these choices led to better results. As was mentioned in Section 3.1, the use of a uniform distribution for f_3 penalizes the length of the segmentation, and the use of a geometric distribution for f_5 makes it possible to establish a relationship between the lengths of the source and target phrases (the use of a Poisson distribution also worked well).
                 NO-NULL & NULL          NO-NULL                        NULL
Smooth. tech.    PS     RS     FS        PP     RP     FP     AER       PP     RP     FP     AER
ML               64.39  76.62  69.98     77.49  28.31  41.47  20.04     55.10  29.38  38.32  36.42
GT               71.58  79.59  75.38     87.80  27.02  41.32  14.82     52.45  28.84  37.22  39.11
AD               69.11  77.64  73.13     84.02  26.56  40.36  17.12     51.10  28.10  36.26  40.18
KN               68.62  77.91  72.97     83.71  26.66  40.44  17.23     51.49  28.19  36.44  39.83
ML+LEX           72.56  83.31  77.57     89.67  28.37  43.10  12.03     55.39  30.09  39.00  35.80
GT+LEX           72.64  83.18  77.56     89.42  28.24  42.93  12.23     55.07  29.98  38.82  36.07
AD+LEX           71.92  81.95  76.61     90.03  27.80  42.48  12.55     54.25  29.58  38.29  37.10
KN+LEX           71.31  82.12  76.34     89.93  28.01  42.72  12.46     54.80  29.76  38.58  36.60
GT+LEX+BO        71.74  85.98  78.22     91.55  29.64  44.78  09.77     58.78  31.37  40.91  32.49
SD+LEX+BO        72.07  86.16  78.44     91.52  29.57  44.70  09.77     59.09  31.45  41.05  32.18
GT+LEX+LL        71.37  84.72  77.48     89.82  29.10  43.96  11.21     57.43  30.80  40.09  33.78
Table 1. Comparative alignment quality results (in %) using different smoothing techniques for NO-NULL and NULL alignments.
It is also worth mentioning that, despite the fact that the phrase alignment techniques proposed here are not specifically designed to obtain word alignments, all the results are competitive with those presented in [12]. In the table, the best results for each column are highlighted, showing that GT+LEX+BO and SD+LEX+BO obtained the best results.
Regarding the results for the NULL alignment experiment, there were small relative improvements in 5 out of 9 smoothing techniques compared with the baseline. The differences between these results and those for the NO-NULL alignment experiment are due to the fact that the baseline generated a lot of alignments in which all words were aligned with all words due to coverage problems. In those situations, the IBM1 alignment model tended to align fewer words with the NULL word than when it was applied over intra-phrase alignments derived from successful segmentations of sentence pairs. If we compare column PP of both experiments, a significant reduction in precision is obtained in the case of the NULL alignment experiment. This makes our results less competitive than those presented in [12] for the NULL alignment experiment.
According to these results, more research is needed in order to improve the intra-phrase word alignments. One possible solution is to use higher-order word alignment models, for example HMM or IBM4 models.
6 Conclusions
We have presented a phrase-based statistical alignment model which can be used to obtain phrase-to-phrase alignments for pairs of sentences.
The proposed phrase-based statistical alignment model combines different smoothing techniques to overcome the coverage problems that the standard phrase-based models present.
The proposed system follows a loglinear approach which makes it possible to include different score components specifically designed to improve the phrase alignments.
Experimental results for a well-known shared task on word alignment evaluation have been reported. The results show the great impact of the smoothing techniques on alignment quality. As a step forward, we have also discussed the adaptation of the proposed model for its use in a CAT system.
Acknowledgments. This work has been partially supported by the Spanish research programme Consolider Ingenio 2010: MIPRCV (CSD2007-00018) and the EC (FEDER), the Spanish MEC under grant TIN2006-15694-CO2-01, the i3media project (CDTI 2007-1012) and the Spanish JCCM under grant PBI08-0210-7127.
References
1. 
Garc´ ıa-Varea, I., Ortiz, D., Nevado, F., G´ omez, P.A., Casacuberta, F.: Automatic\nsegmentation of bilingual corpora: A comparison of different techniques. In: Proc.\nof the 2nd IbPRIA. Volume 3523 of LNCS., Estoril (Portugal) (June 2005) 614–621\n2. Barrachina, S., Bender, O., Casacuberta, F., Civera, J., Cubel, E., Khadivi, S.,\nNey, A.L.H., Tom´ as, J., Vidal, E.: Statistical approaches to computer-assisted\ntranslation. Computational Linguistics (2008) In press\n3. Zens, R., Och, F.J., Ney, H.: Phrase-based statistical machine translation. In:\nAdvances in artificial intelligence. 25. Annual German Conference on AI. Volume\n2479 of LNCS. Springer Verlag (September 2002) 18–32\n4. Tom´ as, J., Casacuberta, F.: Monotone statistical translation using word groups.\nIn: Proc. of the MT Summit VIII, Santiago de Compostela, Spain (2001) 357–361\n5. Och, F.J., Ney, H.: Discriminative Training and Maximum Entropy Models for\nStatistical Machine Translation. In: Proc. of the 40th ACL, Philadelphia, PA\n(July 2002) 295–302\n6. Och, F.J.: Minimum error rate training in statistical machine translation. In: Proc.\nof the 41th ACL, Sapporo, Japan (July 2003) 160–167\n7. Ortiz, D., Garc´ ıa-Varea, I., Casacuberta, F.: Thot: a toolkit to train phrase-based\nstatistical translation models. In: Proc. of the Machine Translation Summit X,\nPhuket, Thailand (September 2005) 141–148\n8. Koehn, P., Hoang, H., Birch, A., Callison-Burch, C., Federico, M., Bertoldi, N.,\nCowan, B., Shen, W., Moran, C., Zens, R., Dyer, C., Bojar, O., Constantin, A.,\nHerbst, E.: Moses: Open source toolkit for statistical machine translation. In:\nACL, Prague, Czech Republic (June 2007) 177–180\n9. Foster, G., Kuhn, R., Johnson, H.: Phrasetable smoothing for statistical machine\ntranslation. In: Proc. of the EMNLP, Sydney, Australia, ACL (July 2006) 53–61\n10. Manning, C.D., Sch¨ utze, H.: Foundations of Statistical Natural Language Process-\ning. MIT Press, Cambridge, Massachusetts 02142 (2001)\n11. Brown, P.F., Della Pietra, S.A., Della Pietra, V.J., Mercer, R.L.: The mathe-\nmatics of statistical machine translation: Parameter estimation. Computational\nLinguistics 19(2) (1993) 263–311\n12. Mihalcea, R., Pedersen, T.: An evaluation exercise for word alignment. In Mihalcea,\nR., Pedersen, T., eds.: HLT-NAACL 2003 Workshop: Building and Using Parallel\nTexts: Data Driven Machine Translation and Beyond, Edmonton, Alberta, Canada,\nACL (May 31 2003) 1–10\n12th EAMT conference, 22-23 September 2008, Hamburg, Germany\n169", "main_paper_content": null }
{ "decision": "Unknown", "reviews": [] }
0
0
[]
[]
{ "id": "qX1cAtJm75", "year": null, "venue": "EAMT 2014", "pdf_link": null, "forum_link": "https://openreview.net/forum?id=qX1cAtJm75", "arxiv_id": null, "doi": null }
{ "title": "Efficient wordgraph for interactive translation prediction", "authors": [ "Germán Sanchis-Trilles", "Daniel Ortiz-Martínez", "Francisco Casacuberta" ], "abstract": null, "keywords": [], "raw_extracted_content": null, "main_paper_content": null }
{ "decision": "Unknown", "reviews": [] }
0
0
[]
[]
{ "id": "ra1y782tPc", "year": null, "venue": "EAMT 2012", "pdf_link": "https://aclanthology.org/2012.eamt-1.5.pdf", "forum_link": "https://openreview.net/forum?id=ra1y782tPc", "arxiv_id": null, "doi": null }
{ "title": "User Evaluation of Interactive Machine Translation Systems", "authors": [ "Vicent Alabau", "Luis A. Leiva", "Daniel Ortiz-Martínez", "Francisco Casacuberta" ], "abstract": null, "keywords": [], "raw_extracted_content": "User Evaluation of Interactive Machine Translation Systems\nVicent Alabau, Luis A. Leiva, Daniel Ortiz-Mart ´ınez, Francisco Casacuberta\nITI/DSIC – Universitat Polit `ecnica de Val `encia\nfvalabau,luileito,dortiz,fcng@fiti,dsicg.upv.es\nAbstract\nRecent developments in search algorithms\nand software architecture have enabled\nmulti-user web-based prototypes for Inter-\nactive Machine Translation (IMT), a tech-\nnology that aims to assist, rather than re-\nplace, the human translator. Surprisingly,\nformal human evaluations of IMT systems\nare highly scarce in the literature. To\nthis regard, we discuss experiences gained\nwhile testing IMT systems. We report\nthe lessons learned from two user evalua-\ntions. Our results can provide researchers\nand practitioners with several guidelines\ntowards the design of on-line IMT tools.\n1 Introduction\nResearch in machine translation (MT) aims to de-\nvelop computer systems which are able to translate\ndocuments without human intervention. However,\ncurrent translation technology has not been able to\ndeliver full automated error-free translations. Typ-\nical solutions to improve the quality of an MT sys-\ntem require manual post-editing. This serial pro-\ncess does not allow integrating the knowledge of\nthe human translator into the system decisions.\nOne alternative to take advantage of the ex-\nisting MT technologies is to apply the so-called\ninteractive machine translation (IMT) paradigm\n(Langlais et al., 2002). The IMT paradigm adapts\ndata driven MT techniques for its use in collab-\noration with human translators. Following these\nideas, Barrachina et al. (2009) proposed a new ap-\nproach to IMT, in which fully-fledged statistical\nMT systems are used to produce full target sen-\ntences hypotheses, or portions thereof, which can\nbe accepted or amended by a human translator.\nEach corrected text segment is then used by the\nMT system as additional information to achieve\nimproved suggestions. Figure 1 shows a minimal\nIMT session example.\nc\r2012 European Association for Machine Translation.source: Para ver la lista de recursos\nreference: To view a listing of resources\nsuggestion sTo view the resources list\ninteractionpTo view\nk a\ns listing of resources\naccept pTo view a listing of resources\nFigure 1: An IMT session example, using only 1\nkey stroke (k) to achieve the reference sentence.\nNotice that the user submits partial sentences (p)\nto the system, which tries to complete them (s).\nFollowing the IMT paradigm, recent develop-\nments in search algorithms and software architec-\nture have allowed multi-user web-based transla-\ntion prototypes. These systems have grown in fea-\ntures, e.g., allowing advanced multimodal interac-\ntion, which have also added extra complexity to\nthe prototypes. Then, their effectiveness should\nbe tested with respect to technology dissemination.\nWhile pure data-driven evaluations have already\nshown that IMT is a promising technology (Bar-\nrachina et al., 2009), surprisingly, formal human\nevaluations are highly scarce in the literature.\nIn this paper, we describe our experiences eval-\nuating two IMT prototypes with real users: an\ninitial, advanced version and a simplified but im-\nproved version. 
Our results identify important\ndesign issues, which open a discussion regarding\nhow IMT systems should be deployed.\n2 Related Work\nLanglais et al. (2002) performed a human evalua-\ntion on their IMT prototype. They emulated a real-\nistic working environment in which the users could\nobtain automatic completions for what they were\ntyping. Users reported an improvement in per-\nformance; however, raw productivity decreased by\n17%, although the users appreciated the tool and\nwere confident to improve their productivity after\nproper training. That work was extended in the\nTT2 project (Casacuberta et al., 2009), where the\nProceedings of the 16th EAMT Conference, 28-30 May 2012, Trento, Italy\n20\nperformance tended to increase as the participants\ngrew accustomed to the system, over a 18-month\nperiod. A slightly different approach was stud-\nied in (Koehn, 2010). There, monolingual users\nevaluated a translation interface supporting IMT\npredictions and the so-called ‘translation options’.\nWhen translating from undecipherable languages\n(as Chinese or Arabic for an English speaker),\nricher assistance improved user performance.\n3 User Interfaces and Evaluation\nPrevious research on multimodal interfaces in nat-\nural language processing have shown a compre-\nhensible tendency to choose an interactive collab-\norative environment over a manual system for non-\nexpert computer users (Leiva et al., 2011). We fol-\nlowed this approach to build a prototype with an\nIMT backend. We will refer to this system as the\nadvanced demonstrator (IMT-AD, Figure 2) since\nit implemented a number of complementary fea-\ntures, which conditioned the design of the inter-\nface; e.g., the use of one boxed text field per sen-\ntence word aimed to ease e-pen interaction.\n3.1 Evaluation of the Advanced Prototype\nThe goal of this evaluation was aimed to assess\nboth qualitatively and quantitatively IMT-AD, and\ncompare it to a state-of-the-art post-editing (PE)\nMT output. Translating from scratch was not con-\nsidered since this practice is being increasingly\ndisplaced by assistive technologies. Indeed, PE\nof MT systems is found frequently in a profes-\nsional translation workflow (TT2, 2001). Thus, in\naddition to IMT-AD, a post-editing version of the\ndemonstrator (PE-AD) was developed to make a\nfair comparison with state-of-the-art PE systems.\nPE-AD used the same interface as IMT-AD, but\nthe IMT engine was replaced by autocompletion-\nonly capabilities as found in popular text editors.\nDesign Both systems were evaluated on the ba-\nsis of the ISO 9241-11 standard (ergonomics of\nhuman-computer interaction). Three aspects were\nconsidered: efficiency, effectiveness, and user sat-\nisfaction. For the former, we computed the av-\nerage time in seconds that took to complete each\ntranslation. For the second, we evaluated the\nBLEU against the reference and a crossed multi-\nBLEU among users’ translations. For the latter, we\nadapted the system usability scale (SUS) question-\nnaire to score the user satisfaction, by asking 10\nquestions that users would assess in a 1–5 Likertscale (1:strongly disagree, 5:strongly agree), plus\na text area to submit free-form comments.\nParticipants A group of 10 users (3 females)\naged 26–43 from our research group volunteered\nto perform the evaluation as non-professional\ntranslators. 
All of them were proficient in Span-\nish and had an advanced knowledge of English.\nAlthough none had worked with IMT systems, all\nknew the basis of the IMT paradigm.\nApparatus Since participants were Spanish na-\ntives, we decided to perform translations from En-\nglish to Spanish. We chose a medium-sized cor-\npus, the EU corpus, typically used in IMT (Bar-\nrachina et al., 2009), which consists of legal docu-\nments. We built a glossary for each source word by\nusing the 5-best target words from a word-based\ntranslation model. We expected this would cover\nthe lack of knowledge for our non-expert trans-\nlators towards this particular task. In addition, a\nset of 9keyboard shortcuts was designed, aiming\nto simulate a real translation scenario, where the\nmouse is typically used sparingly. Furthermore,\nautocompletion was added to PE-AD, i.e., words\nwith more than 3 characters were autocompleted\nusing a task-dependent word list. In addition, IMT-\nAD was set up to predict at character level interac-\ntions. We disabled the complementary features to\nfocus the evaluation on basic IMT.\nProcedure Three disjoint sentence sets (C1, C2,\nC3) were randomly selected from the test dataset.\nEach set consisted of 20 sentence pairs and kept\nthe sequentiality of the original text. Sentences\nlonger that 40 words were discarded. C3 was used\nin a warm up session, where users gained expe-\nrience with the IMT system (5–10 min per user\non average) before carrying out the actual evalua-\ntion. Then, C1 and C2 were evaluated by two user\ngroups (G1, G2) in a counterbalanced fashion: G1\nevaluated C1 on PE-AD and C2 on IMT-AD, while\nG2 did C1 on IMT-AD and C2 in PE-AD.\nResults Although the results were not conclu-\nsive (there were no statistical differences between\ngroups), we observed some trends. First, the time\nspent (efficiency) per sentence on average in the\nIMT system was higher than in PE (67 vs. 62 s).\nHowever, the effectiveness was slightly higher for\nIMT in BLEU with respect to the reference (41:5\nvs. 40:7) and with respect to a cross-validation\nwith other user translations (78:9 vs.77:4). This\n21\nFigure 2: Detail of the advanced web-based interface with a boxed text field for each word.\nPE-AD IMT-AD\nAvg. time (s) 62 (SD = 51) 67 ( SD = 65)\nBLEU 40:7 (13 :4) 41 :5 (13:5)\nCrossed BLEU 77:4 (4 :5) 78 :9 (4:8)\nGlobal Satisfaction 2:5(1:2) 2:1(1 :2)\nTable 1: Summary of the results for the first test.\nsuggested that the IMT system helped to achieve\nmore consistent and standardized translations.\nFinally, users perceived the PE system more\nsatisfactorily than the IMT system, although the\nglobal scores were 2:5for PE and 2:1for IMT,\nwhich suggested that users were not comfortable\nwith none of the systems. IMT failed to succeed in\nquestions regarding the system being easy to use,\nconsistent, and reliable. This was corroborated by\nthe submitted comments. Users complained about\nhaving too many shortcuts and available edit oper-\nations, some operations not working as expected,\nthe word-box based interface, and some annoying\ncommon mistakes in the predictions of the IMT en-\ngine (e.g., inserting a whitespace instead of com-\npleting a word, which would be interpreted as two\ndifferent words). One user stated that the PE sys-\ntem “was much better than the [IMT] predictive\ntool”. Regarding PE, users mainly questioned the\nusefulness of the autocompletion feature.\n3.2 Simplified Web Based Prototype\nThe results from the first evaluation were quite\ndisappointing. 
Not only participants took more\ntime to complete the evaluation with IMT-AD, but\nthey also perceived that IMT-AD was more cum-\nbersome and unreliable than PE-AD. However, we\nstill observed that IMT-AD had been occasionally\nbeneficial, and probably the bloated UI was the\ncause for IMT to fail. Thus, we developed a sim-\nplified version of the original prototype (Figure 3).\nDesign In this case, the word-box based inter-\nface was changed to a simple text area. In addi-tion, the edit operations were simplified to allow\nonly word substitutions and single-click rejections.\nBesides, we expected that the simplification of the\ninterface logic would reduce some of the program-\nming bugs that bothered users in the first evalua-\ntion. The PE interface was simplified in the same\nway. Furthermore, the autocompletion feature was\nimproved to support n-grams of arbitrary length.\nParticipants Fifteen participants aged 23–34\nfrom university English courses (levels B2 and C1\nfrom the Common European Framework of Ref-\nerence for Languages) were paid to perform the\nevaluation (5 ¤each). A special price of 20 ¤was\ngiven to the participant who would contribute with\nthe most useful comments about both prototypes.\nIt was found that, following this method, partic-\nipants were more verbose when providing feed-\nback.\nApparatus In this case, a different set of sen-\ntences (C10, C20, C30) was randomly extracted\nfrom the EU corpus.\nProcedure To avoid the bias regarding which\nsystem was being used, sentences were presented\nin random order, and the type of system was hid-\nden to the participants. As a consequence, users\ncould not evaluate each system independently.\nTherefore, a reduced questionnaire with just two\nquestions was shown on a per-sentence basis. Q1\nasked if the system suggestions were useful. Q2\nasked if the system was cumbersome to use. A text\narea for free-form comments was also included.\nResults Still with no statistical significance, we\nfound that the IMT prototype was perceived now\nbetter than PE. First, interacting with IMT was\nmore efficient than with PE on average (55 s vs.\n69s). The number of interactions was also lower\n(79vs.94). Concerning user satisfaction, the IMT\nsystem was perceived as more helpful (3:5 vs.3:1)\nbut also more cumbersome (3:1 vs.2:9). However,\nin this case the differences were narrower. On the\n22\nFigure 3: Detail of the simplified web-based interface.\nPE-BD IMT-BD\nAvg. time (s) 69 (SD = 42) 55 ( SD = 37)\nNo. interactions 94 (60) 79 (55)\nQ1 (Likert scale) 3:1 (1:2) 3 :5 (1:1)\nQ2 (Likert scale) 2:9 (1:2) 3 :1 (1:3)\nTable 2: Summary of results for the second test.\nother hand, IMT received 16positive comments\nwhereas PE received only 5. Regarding negative\ncomments, the counts were 35(IMT) and 31(PE).\nWhile the number of negative comments is simi-\nlar, there was an important difference regarding the\npositive ones. Finally, the users’ complaints of the\nIMT system can be summarized in the following\nitems: a)system suggestions changed too often,\noffering very different solutions; b)while correct-\ning one mistake, subsequent words that were cor-\nrect were changed by a worse suggestion; c)sys-\ntem suggestions did not keep gender, number, and\ntime concordance; d)if the user goes back in the\nsentence and performs a correction, parts of the\nsentence already corrected were not preserved on\nsubsequent system suggestions.\n4 Discussion and Conclusions\nOur initial UI performed poorly when tested with\nreal users. 
However, when the UI design was\nadapted to the users’ expectations, the results were\nencouraging. Note that in both cases the same IMT\nengine was evaluated under the hood. This fact re-\nmarks the importance of the UI design when eval-\nuating a highly interactive system as IMT is.\nThe literature had reported good experimental\nresults in simulated-user scenarios, where IMT\nis focused on optimizing some automatic metric.\nHowever, user productivity is strongly related to\nhow the user interacts with the system and other UI\nconcerns. For instance, a suggestion that changes\non every key stroke might obtain better automatic\nresults, whereas the user productivity decreases\nbecause of the cognitive effort needed to processthose changes. Therefore, a new methodology is\nrequired for optimizing interactive systems (like\nIMT) towards the user.\nIn sum, the following issues should be addressed\nin an IMT system: 1)user corrections should not\nbe modified, since that causes frustration; 2)sys-\ntem suggestions should not change dramatically\nbetween interactions, in order to avoid confusing\nthe user; 3)the system should propose a new sug-\ngestion only when it is sure that it improves the\nprevious one.\nWe hope these considerations will reduce the\ngap between translators and researchers needs, so\nthat future developments can have an impact on the\ntranslation industry.\nAcknowledgments\nThis research has received funding from the EC’s 7th\nFramework Programme (FP7/2007-2013) under grant\nagreement No. 287576 - CasMaCat, and from the\nSpanish MEC/MICINN under the MIPRCV project\n(CSD2007-00018). We would also like to thank the\nparticipants and the Centro de Lenguas at the UPV .\nReferences\nBarrachina, S., O. Bender, F. Casacuberta, J. Civera,\nE. Cubel, S. Khadivi, A. L. Lagarda, H. Ney,\nJ. Tom ´as, and E. Vidal. 2009. Statistical ap-\nproaches to computer-assisted translation. Compu-\ntational Linguistics, 35(1):3–28.\nCasacuberta, F., J. Civera, E. Cubel, A. L. Lagarda,\nG. Lapalme, E. Macklovitch, and E. Vidal. 2009.\nHuman interaction for high quality machine transla-\ntion. Communications of the ACM, 52(10):135–138.\nKoehn, P. 2010. Enabling Monolingual Translators:\nPost-Editing vs. Options. In Proc. ACL-HLT.\nLanglais, P., G. Lapalme, and M. Loranger. 2002.\nTRANSTYPE: Development-Evaluation Cycles to\nBoost Translator’s Productivity. Machine Transla-\ntion, 15(4):77–98.\nLeiva, L. A., V . Romero, A. H. Toselli, and E. Vidal.\n2011. Evaluating an Interactive-Predictive Paradigm\non Handwriting Transcription: A Case Study and\nLessons Learned. In Proc. COMPSAC.\nTT2. 2001. TransType2 - Computer Assisted Transla-\ntion. Project Technical Annex. Information Society\nTechnologies (IST) Programme, IST-2001-32091.\n23", "main_paper_content": null }
{ "decision": "Unknown", "reviews": [] }
0
0
[]
[]
{ "id": "XG7ye8xTeE", "year": null, "venue": "EAMT 2023", "pdf_link": "https://aclanthology.org/2023.eamt-1.2.pdf", "forum_link": "https://openreview.net/forum?id=XG7ye8xTeE", "arxiv_id": null, "doi": null }
{ "title": "Tailoring Domain Adaptation for Machine Translation Quality Estimation", "authors": [ "Javad PourMostafa Roshan Sharami", "Dimitar Shterionov", "Frédéric Blain", "Eva Vanmassenhove", "Mirella De Sisto", "Chris Emmery", "Pieter Spronck" ], "abstract": "Javad Pourmostafa Roshan Sharami, Dimitar Shterionov, Frédéric Blain, Eva Vanmassenhove, Mirella De Sisto, Chris Emmery, Pieter Spronck. Proceedings of the 24th Annual Conference of the European Association for Machine Translation. 2023.", "keywords": [], "raw_extracted_content": "Tailoring Domain Adaptation for Machine Translation Quality Estimation\nJavad Pourmostafa Roshan Sharami, Dimitar Shterionov, Fr ´ed´eric Blain,\nEva Vanmassenhove, Mirella De Sisto, Chris Emmery, Pieter Spronck\nDepartment of Cognitive Science and Artificial Intelligence, Tilburg University\n{j.pourmostafa,d.shterionov,F.L.G.Blain,e.o.j.vanmassenhove,\nM.DeSisto,C.D.Emmery,p.spronck }@tilburguniversity.edu\nAbstract\nWhile quality estimation (QE) can play an\nimportant role in the translation process,\nits effectiveness relies on the availability\nand quality of training data. For QE in\nparticular, high-quality labeled data is of-\nten lacking due to the high cost and effort\nassociated with labeling such data. Aside\nfrom the data scarcity challenge, QE mod-\nels should also be generalizable; i.e., they\nshould be able to handle data from dif-\nferent domains , both generic and specific.\nTo alleviate these two main issues — data\nscarcity and domain mismatch — this pa-\nper combines domain adaptation and data\naugmentation in a robust QE system. Our\nmethod first trains a generic QE model\nand then fine-tunes it on a specific domain\nwhile retaining generic knowledge. Our\nresults show a significant improvement for\nall the language pairs investigated, better\ncross-lingual inference, and a superior per-\nformance in zero-shot learning scenarios\nas compared to state-of-the-art baselines.\n1 Introduction\nPredicting the quality of machine translation (MT)\noutput is crucial in translation workflows. Inform-\ning translation professionals about the quality of\nan MT system allows them to quickly assess the\noverall usefulness of the generated translations\nand gauge the amount of post-editing that will be\nrequired (Tamchyna, 2021; Murgolo et al., 2022).\nQuality estimation (QE) is an approach that aims\n© 2023 The authors. This article is licensed under a Creative\nCommons 4.0 licence, no derivative works, attribution, CC-\nBY-ND.to reduce the human effort required to analyze\nthe quality of an MT system by assessing the\nquality of its output without the need for reference\ntranslations.\nQE can be applied on word-, sentence- or\ndocument-levels. The goal of sentence-level QE,\nwhich is the focus of our work, is to predict a\nquality label based on a source sentences and\nits MT equivalents. This label, (i.e., the quality\nestimate), can be expressed in various ways such\nas TER/HTER (Snover et al., 2006), BLEU (Pa-\npineni et al., 2002) or any metric of interest to\nthe user. Training a sentence-level QE system\ntypically requires aligned data of the form: source\nsentence (SRC), target sentence (TRG), and\nquality gold label (LBL). 
However, most quality\nlabels are by-products of MT and post-editing —\na rather difficult and expensive process — limiting\nthe size of the available QE data (Rei et al., 2020;\nZouhar et al., 2023).\nThe WMT QE shared task (Specia et al., 2021;\nZerva et al., 2022) has been offered a platform to\ncompare different QE systems and to share QE\ndata. Despite efforts from initiatives like the QE\nshared task to publicly release QE datasets, such\nresources remain scarce across language pairs and,\nby extension, also have a limited coverage across\ndomains (Fomicheva et al., 2020a; Fomicheva et\nal., 2022). This can pose a challenge for all QE\nmodels, especially recent ones that utilize large\npre-trained language models (LLMs) (Ranasinghe\net al., 2020; Zerva et al., 2022), since fine-tuning\npre-trained models with small datasets has been\ndemonstrated to be quite unstable (Zhang et al.,\n2020; Rubino, 2020).\nFurthermore, QE models trained on specific\ndata do not generalize well to other domains that\nare outside of the training domain (Kocyigit et\nal., 2022). Domain mismatches lead to significant\ndecreases in the performance of QE models (de\nSouza et al., 2014a; Zouhar et al., 2023). To\nimprove the generalizability of QE models, it is\nimportant to establish the right balance between\ndomain-specific and generic training data. To date,\nonly a few attempts have been made to address\nthis challenge (de Souza et al., 2014b; Rubino,\n2020; Lee, 2020). Thus, the majority of QE\nmodels have difficulty with accurately estimating\nquality across different domains, whether they are\ngeneric or specific (Zouhar et al., 2023).\nIn this work, we propose to tackle both the\ndata scarcity and the domain mismatch challenge\nthat LLM-based QE models face. We propose a\nmethodology whereby a small amount of domain-\nspecific data is used to boost the overall QE pre-\ndiction performance. This approach is inspired\nby work on domain adaptation (DA) in the field\nof MT, where a large generic model is initially\ntrained and then fine-tuned with domain-specific\ndata (Chu and Wang, 2018; Pham et al., 2022).\nTo assess the validity of the proposed approach\nin QE, we conducted experiments using small\nand large, authentic and synthetic data in bilin-\ngual, cross-lingual, and zero-shot settings. We ex-\nperimented with publicly available language pairs\nfrom English (EN) into German (DE), Chinese\n(ZH), Italian (IT), Czech (CZ), and Japanese (JA)\nand from Romanian (RO) and Russian (RU) into\nEnglish (EN). 
We used the common test sets from\nthe WMT2021 QE shared tasks1.\nOur experiments show a statistically significant\nimprovement in the performance of QE models.\nOur findings also indicate that not only our im-\nplementation leads to better multi-/cross-lingual\nQE models (where multi-/cross-lingual data is pro-\nvided) but also zero-shot QE (where no data for the\nevaluated language pairs was provided at training).\nThe main contributions of our research are:\n• A QE methodology that employs DA and data\naugmentation (DAG), along with a novel QE\ntraining pipeline that supports this methodology.\n• An empirical demonstration of the pipeline’s ef-\nfectiveness, which highlights improvements in\nQE performance, and better cross-lingual infer-\nence.\n• A comparative analysis with state-of-the-art\n(SOTA) baseline methods that demonstrates the\n1https://www.statmt.org/wmt21/\nquality-estimation-task.htmleffectiveness of our approach in enhancing zero-\nshot learning (ZSL) for the task of QE.\n• Adaptable QE pipelines that can be tailored and\nimplemented for other language pairs; i.e., high\ngeneralizable QE pipelines.\nTo the best of our knowledge, this is the first QE\nmethodology to use DA and DAG. Furthermore,\nit is easily reusable and adaptable: (i) while we\nused XLM-R in our experiments, one can easily\nreplace it with any preferred LLM as long as the\ninput-output criteria are met; (ii) we built our tool\naround Hugging Face (HF) implementations of\nLLMs, meaning one can employ a certain generic\nmodel and apply it to any QE task by simply\nfine-tuning it on (newly-collected) QE data.\n2 Domain adaptation for specialized QE\nIn this section, we outline our methodology for\ntraining LLM-based QE models for a specific do-\nmain with limited available in-domain data. This\ninvolves: (i) a set of training steps that we found to\nbe particularly effective, and (ii) DAG techniques\nto improve the QE models’ specificity. Addition-\nally, we provide details on two different training\nmodes we implemented (with or without tags).\n2.1 Training steps\nWe implement the “mixed fine-tuning + fine-\ntuning” DA technique that proved promising for\nMT (Chu et al., 2017). We tailor this methodol-\nogy to suit our needs following the steps outlined\nbelow. A visualization of the steps involved can\nbe found in Appendix A.1. Our technique involves\nleveraging both in-domain (ID) and out-of-domain\n(OOD) QE data (see Section 3.1 for details on the\ndatasets).\nStep 1 We train a QE model using OOD data\nuntil it converges. We employ the experimental\nframework described in Section 3.2 in which an\nLLM is fine-tuned to predict QE labels. The goal\nof this step is two-fold: (i) leveraging the LLM’s\ncross-lingual reference capabilities and (ii) build-\ning a generic QE model. This way we ensure that\nthe model can estimate the quality of a broad range\nof systems, but with limited accuracy on ID data.\nStep 2 The model’s parameters are fine-tuned\nusing a mix of OOD and ID data. We use different\nID data, both authentic and synthetic according\nto the DAG approaches in Section 2.2. The\nobjective here is to ensure the model does not\nforget generic-domain knowledge acquired during\nthe first step while simultaneously improving its\nability to perform QE on the domain-specific\ndata. 
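As a rough sketch of the data mixing performed in this step (field names, sizes and the 1:1 mixing ratio below are illustrative assumptions, not the authors' exact recipe), the OOD set can be subsampled to roughly the size of the ID set and concatenated with it before fine-tuning:

```python
import random

def mix_ood_and_id(id_data, ood_data, ood_to_id_ratio=1.0, seed=8):
    """Return a shuffled mix of all ID examples and a random OOD subset.

    Illustrative only: the ratio controlling how much OOD data is kept is an
    assumption; the paper only states that a smaller OOD subset is
    concatenated with the ID data.
    """
    random.seed(seed)
    n_ood = min(len(ood_data), int(len(id_data) * ood_to_id_ratio))
    mixed = list(id_data) + random.sample(list(ood_data), n_ood)
    random.shuffle(mixed)
    return mixed
```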
This mixing step is often referred to as\n“oversampling” in DA literature, where a smaller\nsubset of OOD data is concatenated with ID data\nto allow the model to assign equal attention to\nboth datasets; it aims to further adapt the model to\nthe specific domain of interest.\nStep 3 We continue to train the QE model on a\nspecific ID dataset until convergence, resulting in a\nmore domain-specific QE model than that obtained\nin Step 2.\n2.2 Data augmentation for DA in QE\nIn our study, we explore two alternative ap-\nproaches to oversampling to optimize the utiliza-\ntion of available ID resources and assess the po-\ntential benefits of incorporating synthetic ID data\ninto the QE pipeline:\nApproach 1: Concatenating all available au-\nthentic ID data across all languages. The\nXLM-R model is multilingual, allowing us to ap-\nply it to different language pairs. When there is\nnot enough data to fine-tune it for a specific lan-\nguage, one can use multilingual data. In our work,\nto increase the amount of authentic data (given the\nsmall volume of parallel data for two languages),\nwe construct a multilingual ID dataset: we con-\ncatenate all available ID data, which includes dif-\nferent language pairs. The rationale behind this\napproach is to make use of all available authen-\ntic resources in order to improve the performance\nof the QE model by providing better cross-lingual\nreferences.\nApproach 2: Generating synthetic ID data.\nGiven that all available ID resources have been al-\nready utilized in Approach 1, we propose to sup-\nplement the existing data with artificially gener-\nated additional ID data using a trained MT model\nfor each language pair, inspired by the research\nconducted by Negri et al., (2018) and Lee (2020).\nThis approach aims to tackle the data scarcity\nproblem and further improve the QE model’s ac-\ncuracy. Let Dlpdenote the publicly available par-\nallel data (SRC, TRG) for a language pair lp, as\nidentified in Section 3.1. The approach consists\nof the following steps for each ID involved in the\npipeline:1. Randomly select Nsamples from Dlpto obtain\na set Slpof training samples. Divide Slpinto\ntwo equal sets S1andS2.\n2. Train a multilingual MT model MlponS1(de-\ntails of the model can be found in Section 3.2).\n3. Use Mlpto translate the sources-side of S2(or\na portion of it), obtaining a set Tlpof translated\nsamples.\n4. Compute quality labels (e.g., TER/HTER) by\ncomparing Tlpwith the reference ( TRG ) text\nfromS2.\nThe resulting three-part output of this approach\ncomprises the source-side of S2,Tlp, and\nTER/HTER obtained from the fourth step. A vi-\nsual representation of these steps can be found in\nAppendix A.3.\n2.3 Additional indication of domain\nIn NMT, in order to handle multiple domains and\nreduce catastrophic forgetting, DA has been con-\ntrolled using additional tags added at the begin-\nning or at the end of the sentence (Sennrich et\nal., 2016; Chu and Dabre, 2019). Following these\nstudies, we explore two training modes: (i) with\ntag (“TAG”), by appending either <OOD> or<ID>\nat the end of sentences based on the dataset domain\ntype (i.e., OOD or ID). 
The input format in this\nmode is <s> SRC </s> TRG <Tag> </s> ,\nwhere SRC and TRG represent source and target\nof the QE triplet, and <s> and</s> are the be-\nginning and separator tokens for the LLM used in\nthe pipeline; (ii) without tag (“NO TAG”), where\nthe training steps are the same as detailed in Sec-\ntion 2.1.\n3 Experiments\n3.1 Data\nWe conducted experiments on publicly available\ndata in different languages: from EN into DE, ZH,\nIT, CZ, and JA and from RO and RU into EN. We\ncategorize the data into three groups according to\ntheir use in our pipeline:\nGroup 1: for building IDandOOD QE mod-\nels. The IDdata is collected from WMT 2021\nshared task on QE (Specia et al., 2021), Task\n2, consisting of sentence-level post-editing efforts\nfor four language pairs: EN-DE, EN-ZH, RU-EN\nand RO-EN. For each pair there are train, de-\nvelopment (dev), and test sets of 7 K, 1K, 1K\nsamples, respectively. Additionally, as our OOD\ndata we used the eSCAPE (Negri et al., 2018)\ndataset with approximately 3.4 Mtokenized SRC,\nmachine-translated text (MTT), post-edited (PE)\nsentences. We used sacrebleu2(Post, 2018) to\ncalculate TER (Snover et al., 2006) from MTT and\nPE pairs. We split the data into train, dev, test sets\nvia the scikit-learn package3(Pedregosa et\nal., 2011) with 98%, 1%, and 1% of the total data,\nrespectively. To improve the generalization of our\nmodels and enable them to better adapt to specific\nQE through the ID dataset, we utilized a larger\nOOD dataset. This decision is in line with prior\nstudies on DA, which are described in the related\nwork section (Section 6).\nGroup 2: for building MT systems as a compo-\nnent of Approach 2 in the proposed DAG (Sec-\ntion 2.2). We collected parallel data — SRC and\nreference translations (REF) — from Opus (Tiede-\nmann, 2012) for each language pair used in ID:\nEN-DE, EN-ZH, RO-EN, and RU-EN. Next, we\ntrained MT models for Approach 2 of our method-\nology by selecting 4 Msamples and dividing them\ninto two equal parts, each with 2 Msamples. We\nsplit either of the two parts into train, dev, test\nsets. To save time during evaluation and inference,\nwe set the size of the dev and test splits to be the\nsame as the number of training samples in the ID\ndatasets, which is 7 K. Moreover, we randomly se-\nlected a portion of the SRC (7 Kout of 2 M) in the\nsecond split, which was not used for training. We\npassed this portion to the trained MT to get MTT.\nFinally, we computed the TER using the MTT and\nthe corresponding REF via sacrebleu . We set\nthe portion size 7 Kas the goal was to double the\nsize of the initial ID data.\nGroup 3: for testing the zero-shot capabili-\nties of the trained QE models in our proposed\nmethodology. We used two zero-shot test sets,\nnamely English to Czech (EN-CS) and English to\nJapanese (EN-JA), which were provided by WMT\n2021 shared task on QE for Task 2. Each test set\ncontained 1 Ksamples.\n3.2 Frameworks\nQuality Estimation. To train all QE models of\nour study, we developed a new QE framework with\nthe ability to invoke multilingual models from HF\nmodel repository. In all our experiments we chose\n2signature:nrefs:1 |case:lc |tok:tercom |punct:yes |version:2.3.1\n3random state/seed =8, shuffle =True, used for all splits.to use XLM-RoBERTa4(XLM-R) (Conneau et al.,\n2020), to derive cross-lingual embeddings, which\nhas shown success in prior studies such as Ranas-\ninghe et al., (2020). 
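For orientation, the following is a minimal, hedged sketch of how such an XLM-R-based sentence-level QE regressor can be assembled with the Hugging Face transformers library, including the <ID>/<OOD> domain tags introduced in Section 2.3. The regression head, the way the tag is packed into the input, and all hyperparameters are illustrative assumptions rather than the authors' exact implementation, which is described next.

```python
# Hedged sketch, not the authors' framework: an XLM-R regressor for sentence-level QE.
import torch
from transformers import AutoTokenizer, AutoModelForSequenceClassification

model_name = "xlm-roberta-large"
tokenizer = AutoTokenizer.from_pretrained(model_name)
tokenizer.add_special_tokens({"additional_special_tokens": ["<ID>", "<OOD>"]})

# num_labels=1 gives a single regression output (the predicted quality score after training).
model = AutoModelForSequenceClassification.from_pretrained(model_name, num_labels=1)
model.resize_token_embeddings(len(tokenizer))  # account for the two added tags

def encode(src: str, trg: str, tag: str = "<ID>") -> dict:
    # Approximates the "<s> SRC </s> TRG <Tag> </s>" layout by appending the tag
    # to the target segment of a standard sentence-pair encoding.
    return tokenizer(src, f"{trg} {tag}", truncation=True, return_tensors="pt")

with torch.no_grad():
    batch = encode("Das Haus ist klein.", "The house is small.")
    quality = model(**batch).logits.squeeze(-1)  # raw logit used directly as the estimate
print(float(quality))
```

In practice the regression head would be fine-tuned on (SRC, TRG, LBL) triplets such as the ones described in Section 1.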
The framework is simi-\nlar in architecture to “MonoTransQuest” (Ranas-\ninghe et al., 2020), but adapted to the needs of\nour experiments. The differences with “Mono-\nTransQuest” are the additional tokens ( <OOD> and\n<ID> ) added during the tokenization process, as\nwell as the resizing of the model’s token embed-\ndings in order to support the added tags. Addi-\ntionally, rather than computing the softmax, we di-\nrectly used logits to estimate the quality labels.\nTraining and evaluation details of QE models.\nIn Section 2.1 we describe our methodology for\ntraining and evaluating QE models. During Step\n1, we trained and evaluated an OOD QE model\nevery 1000 steps HF5using the train and dev sets\nfrom Group 1. In Step 2, we trained and evaluated\nQE mix models every 500 steps HFusing a mix\nof OOD and ID data from Group 1. For Step 3,\nwe evaluated the final domain-specific QE model\nafter 500 steps HFusing only an ID train and dev\nset. Throughout training, we used an early stop-\nping mechanism to halt the training process if there\nwas no improvement in the evaluation loss after\n5 evaluations. We adjusted the default evaluation\nsteps HFfrom 500 to 1000 for Step 1 due to the\nlarger number of training samples in that step.\nMachine Translation. Our approach to gener-\nating synthetic ID (Approach 2, Section 2.2) dif-\nfers from prior studies, such as Eo et al., (2021),\nwhich rely on a generic/common translation model\n(e.g., Google machine translate). Instead, we first\ntrained a separate NMT model on a subset of\nthe original dataset. This approach ensures that\nthe training data and the data used for translation\nhave similar vocabularies, cover comparable top-\nics, styles, and domains, which leads to higher\nquality translations.\nWe used an in-house MT framework to train\nour models, based on pre-trained mBART-50\n(Liu et al., 2020) from HF. We followed the\nSeq2SeqTraining arguments recommended by HF\nand trained the model for Approach 2, stopping the\ntraining if the evaluation loss did not improve after\n5 evaluations.\n4xlm-roberta-large\n5steps HFrefers to Hugging Face framework’s training or\nevaluation steps, which are different from the ones we de-\nscribed in Section 2.1.\nWe used default hyperparameters recommended\nby HF for QE and MT, and our frameworks\nwith modified hyperparameters are available\nathttps://github.com/JoyeBright/\nDA-QE-EAMT2023 to reproduce our results.\n4 Results\nTo assess the performance of our approach we\nevaluate output from the trained QE models\nin comparison to the reference quality metric\n(HTER/TER) on the test sets described in data\nGroups 1 and 3. We use Pearson’s coefficient\n(ρ∈ −1 : 1 , which we rescale to −100to100\nfor clarity) to correlate our predictions with the test\nset. We use the BLEU score as a metric to evaluate\nthe translation quality of our MT models.\n4.1 Baseline results\nTo establish a baseline for our study, we fine-tuned\nXLM-R with the ID data for each language pair as\nprovided by WMT 2021 shared task (Group 1 of\ndata). This is a conventional approach employed\nin prior research, such as Ranasinghe et al. (2020),\nwhere pre-trained models are utilized to provide\ncross-lingual reference for training QE models.\nWe also attempted to compare our work with the\nmodels of Rubino (2020) and Lee (2020). 
For the latter work, their experiments used the WMT 2020 test sets, while we used WMT 2021, which makes it difficult to compare our results to theirs directly. Furthermore, we could not replicate their models as no code is available (at the time of writing this paper). Our baseline results are presented in Table 1.
4.2 Main results
In Table 1 we present our results using the DAG approaches and the two training modes (Tag and No Tag). Additional details on the statistical tests for each language pair are available in Appendix A.2. The results in Table 1 show that, in general, all of the proposed DA methods performed better than the baseline for each language pair, except for Approach 1 in the RO-EN language pair. For this language pair, the use of a domain tag led to reduced performance, and the improvement achieved without such a tag was not statistically significant.

Language pair | Baseline | NO TAG, DAG 1 | NO TAG, DAG 2 | TAG, DAG 1 | TAG, DAG 2 | Increase %
EN-DE | 47.17 | 49.93 | 49.54 | 51.90 | 51.25 | 10.03
EN-ZH | 29.16 | 34.75 | 35.27 | 35.62 | 36.60 | 25.51
RO-EN | 83.63 | 83.67 | 83.74 | 83.37 | 84.40 | 00.92
RU-EN | 40.65 | 44.91 | 45.40 | 47.16 | 43.98 | 16.01

Table 1: Pearson correlation scores for the proposed QE models across 4 language pairs: EN-DE, EN-ZH, RO-EN, and RU-EN. For each language pair, the bold result indicates the highest-performing method compared to the baseline. Results for the first and second DAG approaches are reported under DAG 1 and DAG 2, respectively. The column labeled "Increase %" shows the percentage improvement for the highest-performing model (in bold) compared to the baseline.

We also observe that the increase in performance compared to the baseline for each language pair, shown as a percentage in the last column of Table 1, is substantial, except for RO-EN (only a 0.92% increase over the baseline). This is mainly due to the already high baseline performance (83.63), making it challenging to achieve significant improvements. Among the other language pairs, the EN-ZH pair had the largest increase in performance, just over 25%. The RU-EN and EN-DE pairs had the second and third highest increases, with improvements of around 16% and 10% over their respective baselines.
Additional indication of domain results. The results indicate that incorporating tags into the DA training pipeline was generally effective, although in some instances the improvement was not statistically significant compared to the models trained without tags. However, it was observed that at least one model outperformed the same language pair's models that were not trained with tags, when DAG techniques were used. Specifically, the EN-DE Approach 1 model trained with tags performed better compared to Approach 2 without tags, as did the EN-ZH Approach 1 model trained with tags relative to the same approach without tags. Finally, the RO-EN Approach 2 model trained with tags outperformed Approach 2 without tags, and the RU-EN Approach 1 model trained with tags exhibited better performance than Approach 1 without tags.
4.3 Data Augmentation results
Upon analyzing the integration of DAG techniques into the specialized QE pipeline, we observe that for most language pairs, both approaches showed better performance than their respective baselines. However, in situations where tags were not employed, Approach 2 only showed statistical significance over Approach 1 in the EN-ZH and RU-EN language pairs.
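The paper reports which differences are statistically significant (p < 0.05, Appendix A.2), but the test itself is not spelled out in this excerpt. One common way to compare two QE systems' Pearson correlations is paired bootstrap resampling over the test set, roughly as sketched below; the function and variable names are illustrative and this is not necessarily the authors' procedure.

```python
# Hedged illustration: paired bootstrap test for the difference in Pearson
# correlation between two QE systems' predictions against the gold labels.
import numpy as np

def bootstrap_pearson_diff(gold, pred_a, pred_b, n_boot=1000, seed=8):
    gold, pred_a, pred_b = map(np.asarray, (gold, pred_a, pred_b))
    rng = np.random.default_rng(seed)
    n, losses = len(gold), 0
    observed = np.corrcoef(gold, pred_a)[0, 1] - np.corrcoef(gold, pred_b)[0, 1]
    for _ in range(n_boot):
        idx = rng.integers(0, n, n)  # resample test sentences with replacement
        diff = (np.corrcoef(gold[idx], pred_a[idx])[0, 1]
                - np.corrcoef(gold[idx], pred_b[idx])[0, 1])
        if diff <= 0:
            losses += 1
    p_value = losses / n_boot  # fraction of resamples where system A does not beat B
    return observed, p_value
```

With 1K-sentence test sets such as those used here, this kind of resampling test is cheap to run.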
Moreover, when tags were\nused, Approach 2 lead to statistically significant\nimprovements only for EN-DE and EN-ZH. These\nfindings suggest that the choice of DAG approach\nand the use of tags should be carefully consid-\nered when applying DA in QE. Additionally, DAG\nwas observed to be significant for EN-ZH, for both\ncases — with or without tags.\n4.4 Zero-shot results\nIn order to evaluate the effectiveness of our QE\nmodels in the context of ZSL, we compared their\nperformance with the baseline models for the EN-\nCS and EN-JA language pairs (test sets). The re-\nsults of these tests are presented in Table 2.\nThe findings show that, for the EN-CS test\nset, the QE model trained solely on the EN-DE\ndataset achieved the highest performance among\nall QE baselines, with a Pearson correlation score\nof 46.97. Additionally, we observe that our pro-\nposed DA pipeline performed even better than the\nhighest-performing baseline for EN-CS, but only\nDAG approach 1 and 2 with tags were found to\nbe statistically significant. Likewise, for the EN-\nJA test set, the highest-performing QE baseline\nwas the one that was trained solely on the RU-EN\ndataset, with a Pearson correlation score of 20.32.\nIn contrast to EN-CS, none of the models that\nwere trained with our pipeline and with the RU-EN\ndataset outperformed the baselines. Nevertheless,\nwe observed that three models trained with EN-ZH\nand using our pipeline (Approach 1 with and with-\nout tag, and Approach 2 with tag) performed better\nthan the highest-performing baseline.\nOverall, these findings suggest that if a QE\nmodel is conventionally trained with and evaluated\non an unseen QE dataset, some extent of ZSL ca-\npabilities can be achieved due to the use of XLM-\nR. However, the proposed DA pipeline can signif-\nicantly increase this extent, whether through mod-\nels trained with the same dataset or other datasets\nused in the pipeline. Furthermore, we observed\nthat training a QE model conventionally using cer-\ntain language pairs may lead to decreased perfor-\nmance. For instance, a model trained exclusively\nwith the EN-DE language pair showed a Pearson\ncorrelation of approximately 10. In such cases, the\nproposed pipeline may enhance performance even\nwhen using the same training data.\n5 Additional observations\n5.1 Cross-lingual inference\nTable 3 presents data that shows that our pro-\nposed methodology has an overall advantage overTrained\nonTest set BaselineNO TAG TAG\nDAG 1 DAG 2 DAG 1 DAG 2\nEN-DEEN-CS 46.97 48.77 48.07 47.78 47.82\nEN-JA 09.67 18.16 08.00 16.12 17.36\nEN-ZHEN-CS 35.56 49.33 48.54 47.98 46.83\nEN-JA 13.13 22.77 19.87 22.24 21.54\nRO-ENEN-CS 26.33 39.10 39.79 39.20 40.41\nEN-JA 18.88 20.34 18.55 20.11 21.22\nRU-ENEN-CS 28.42 45.58 44.85 46.43 45.22\nEN-JA 20.32 17.64 17.04 17.26 19.63\nTable 2: Performance comparison of the proposed meth-\nods and the baseline model trained on the EN-DE, EN-ZH,\nRO-EN, and RU-EN datasets in the context of ZSL, with re-\nsults presented for EN-CS and EN-JA test sets. Results for\nthe first and second DAG approaches are reported under DAG\n1 and DAG 2, respectively.\nthe conventional training method of using a pre-\ntrained LLM and fine-tuning it with QE data (base-\nlines) in terms of cross-lingual inference. 
That\nis, the QE models trained with our proposed DA\npipeline not only perform significantly better than\nbaselines on their target domain and language pair\nbut can also estimate the quality of other language\npairs to some extent better than their correspond-\ning baseline.\nBy examining the data closely (bottom to top\nrow of the Table 3), we observe that XLM-R\nprovides a limited level of cross-lingual infer-\nence, which is insufficient for estimating qual-\nity labels due to the absence of prior knowl-\nedge about them. However, using Step 1 of our\npipeline, which utilizes little inference knowledge,\nthe model still achieves an acceptable level of gen-\neralization across all language pairs.\nSpecifically, the first step achieved an average\nPearson correlation score of approximately 39,\nwhich is higher than all baseline scores, except for\nthe RO-EN pair, which achieved around 42. Fur-\nthermore, the model trained using Step 1 of the\npipeline achieved a Pearson correlation of around\n70 when evaluated with the RO-EN test set. This\nresult can be attributed to the training of the model\nwith IT, which was used as OOD data. From a lin-\nguistic point of view, this result could be explained\nby the fact that IT and RO belong to the same lan-\nguage family, i.e., the “romance languages” (refer\nto Appendix A.5), which explains the high Pearson\ncorrelation score achieved by the model.\nAs we move up the table, we can observe that\nthe model built in Step 2 of our pipeline be-\ncomes more specific toward the task and the ID\ndatasets. Consequently, there is an average im-\nModelsTest SetsA VGEN-DE EN-ZH RO-EN RU-EN\nBaseline 47.17 19.67 44.96 32.91 36.17\nEN-DE 49.93 22.66 78.97 39.55 47.77\n∆ 02.76 02.99 34.01 06.64 11.60\nBaseline 30.34 29.16 47.55 36.87 35.98\nEN-ZH 43.46 34.75 80.51 42.67 50.34\n∆ 13.12 05.59 32.96 05.80 14.36\nBaseline 24.64 23.56 83.63 39.97 42.95\nRO-EN 43.02 24.31 83.67 38.74 47.43\n∆ 18.38 00.75 00.04 -01.23 04.48\nBaseline 22.40 24.67 57.17 40.69 36.23\nRU-EN 25.36 26.06 75.34 44.91 42.91\n∆ 02.96 01.39 18.17 04.22 06.68\nStep2 38.29 24.72 76.96 31.35 42.83\nStep1 30.80 16.57 70.14 39.93 39.36\nXLM-R -02.74 07.30 02.97 03.12 02.66\nTable 3: Performance comparison of proposed models and\nbaselines across all test sets using Pearson correlation as the\nmetric. ∆represents the difference between them. “A VG”\ncolumn shows the overall difference for each language model.\nStep 1: model trained with OOD. Step 2: model trained with\nDAG approach 1 and OOD. Approach 2 in Step 2 had similar\nresults, not included. XLM-R: model not being trained. Mod-\nels and baselines are color-coded for clarity, with bold num-\nbers indicating the average ∆across all language pairs, and\nunderlined numbers representing each model’s performance\non their respective test sets.\nprovement of around 3.5 Pearson correlation (from\n39.36 to 42.83) across the languages. This indi-\ncates that our DA pipeline is effective in improv-\ning more specific cross-lingual QE performance.\nUltimately, fine-tuning Step 2 with any of the ID\nlanguages provides a highly domain-specific QE\nmodel that is not only better estimates the qual-\nity of their language pair, but also performs better\ncross-lingual inference over its baseline.\n5.2 OOD Performance\nThe main goals of DA are to quickly create an\nadapted system and to develop a system that per-\nforms well on ID test data while minimizing per-\nformance degradation on a general domain. 
In our\nstudy, we showed that models from Step 1 or Step\n2 can be fine-tuned quickly using the user’s data\n(achieving the first of these goals). Our main focus\nwas on the assessment of ID QE. However, we test\nthe generalizability of our ID models on an OOD\ntest set. Our results, summarized in Table 4, in-\ndicate that all ID models outperformed the corre-\nsponding baselines on the OOD test set, and we\nobserve that incorporating ID data in Approaches\n1 and 2 did not compromise the performance with\nrespect to OOD. However, comparing the models’performance with models trained solely on OOD\nwe see a small performance drop, which is in-\nevitable and in most cases acceptable.\nTrained\nwithQE Models\nEN-DE EN-ZH RO-EN RU-EN OOD DAG 1 DAG 2\nBaseline 11.95 03.59 11.60 03.43\n64.33 65.24 64.76Our pipeline 54.62 59.30 52.51 47.36\n∆Baseline 42.67 55.71 40.91 43.93\n∆OOD -09.71 -05.03 -11.82 -16.97\nTable 4: Model comparison on OOD test set using Pearson\ncorrelation as the metric. The ∆Baseline values indicate the\nperformance difference relative to the corresponding baseline,\nwhile the ∆OOD values compare the models’ performance\nwith the one trained solely with OOD.\n6 Related Work\nData Scarcity in QE. The issue of data scarcity\nin MT QE has been explored in numerous previous\nstudies. The work of Rubino and Sumita (2020)\ninvolves the use of pre-training sentence encoders\nand an intermediate self-supervised learning step\nto enhance QE performances at both the sentence\nand word levels. This approach aims to facilitate\na smooth transition between pre-training and fine-\ntuning for the QE task. Similarly, Fomicheva et\nal., (2020b) proposed an unsupervised method for\nQE that does not depend on additional resources\nand obtains valuable data from MT systems.\nQiu et al. (2022) conducted a recent study on the\nthe impact of various types of parallel data in QE\nDAG, and put forward a classifier to differentiate\nthe parallel corpus. Their research revealed a sig-\nnificant discrepancy between the parallel data and\nreal QE data, as the most common QE DAG tech-\nnique involves using the target size of parallel data\nas the reference translation (Baek et al., 2020; Qiu\net al., 2022), followed by translation of the source\nside using an MT model, and ultimately generating\npseudo QE labels (Freitag et al., 2021). However,\nour study diverges from this conventional approach\nand concentrates on a straightforward yet effective\nDAG methods to mitigate this gap. Similarly, Ko-\ncyigit et al. (2022) proposed a negative DAG tech-\nnique to improve the robustness of their QE mod-\nels. They suggested training a sentence embedding\nmodel to decrease the search space and training it\non QE data using a contrastive loss.\nDomain Adaptation in QE. To tackle the chal-\nlenges with translating data when training data\ncomes from diverse domains, researchers have ex-\ntensively used DA in MT. DA involves training\na large generic model and then fine-tuning its\nparameters with domain-specific data (Chu and\nWang, 2018; Saunders, 2021; Pourmostafa Roshan\nSharami et al., 2021; Pham et al., 2022). 
In MT,\none way to achieve DA is by appending tags to sen-\ntences to handle different domains (Sennrich et al.,\n2016; Vanmassenhove et al., 2018; Chu and Dabre,\n2019) and reduce catastrophic forgetting.\nDespite being useful in MT, DA has not been\nwidely used in QE according to our knowledge.\nDongjun Lee (2020) proposed a two-step QE train-\ning process similar to our own, and Raphael Ru-\nbino (2020) pre-trained XLM and further adapted\nit to the target domain through intermediate train-\ning. Both studies demonstrated that adding a step\nbefore fine-tuning improves performance com-\npared to fine-tuning alone. However, unlike our\nmethodology, neither of them included sentence\ntags or conducted additional fine-tuning (such as\nStep 3 in our methodology). As a result, their QE\nmodels are not as specialized for the target domain\nas ours. A few researchers have made attempts to\nintegrate aspects of DA into QE. For instance, in\nan effort to improve QE performance in domain-\nspecific scenarios, Arda Tezcan (2022) included\nfuzzy matches into MonoTransQuest with the aid\nof XLM-RoBERTa model and data augmentation\ntechniques.\n7 Conclusion and future work\nThis paper addresses two key challenges related\nto quality estimation (QE) of machine transla-\ntion (MT): (i) the scarcity of available QE data and\n(ii) the difficulties in estimating translations across\ndiverse domains. The primary aim of this study is\nto enhance the performance of QE models by ad-\ndressing these challenges. To do so, we propose a\nsolution that utilizes domain adaptation (DA) tech-\nniques adopted from MT. We adapt the “mixed\nfine-tuning + fine-tuning” approach (Chu et al.,\n2017) and extend it with data augmentation as an\nalternative to the traditional oversampling tech-\nnique. We adopt a three-step training methodol-\nogy: (i) we fine-tune XLM-R, a language model,\nwith a large generic QE dataset, which enables\nthe model to generalize; (ii) we fine-tune the\nmodel with a mix of out-of-domain (OOD) and in-\ndomain (ID) data derived from two data augmen-\ntation (DAG) approaches; and (iii) we fine-tune\nthe model with a small amount of domain-specific\ndata, which leads to a more specific model. We\nevaluated models’ performance with and without\ndomain tags appended to the sentences.Our experiments show significant improvements\nacross all language pairs under consideration, in-\ndicating that our proposed solution has a benefi-\ncial impact in addressing the aforementioned chal-\nlenges. Our study also demonstrates the effective-\nness of both proposed DAG approaches and shows\nthat using domain tags improves the performance\nof the models. Additionally, we find that our model\noutperforms the baseline in the context of zero-\nshot learning and in cross-lingual inference.\nMoving forward, there are several directions for\nfuture work based on our findings. First, it would\nbe interesting to investigate the performance of our\npipeline on low-resource language pairs, where\nthere is limited ID data available. This is partic-\nularly relevant given the smaller coverage of QE\ndatasets compared to parallel data in MT. Second,\nwe only used one type of OOD data in our ex-\nperiments (EN-IT); it would be useful to explore\nother OOD data over different language pairs for\nQE. Third, it would be valuable to study the perfor-\nmance of other LLMs than XLM-R. 
Fourth, since\nthe choice of languages employed in the pipeline\nwas based on availability, we would suggest ex-\nploring a more regulated approach for selecting\nthe languages to be used in the proposed pipeline.\nSpecifically, the optimal transfer languages can be\nselected based on their data-specific features, such\nas dataset size, word overlap, and subword over-\nlap, or dataset-independent factors, such as genetic\n(see Appendix A.5) and syntactic distance (Lin et\nal., 2019).\nReferences\nBaek, Yujin, Zae Myung Kim, Jihyung Moon, Hyun-\njoong Kim, and Eunjeong Park. 2020. PATQUEST:\nPapago translation quality estimation. In Proceed-\nings of the Fifth Conference on Machine Translation ,\npages 991–998, Online, November. Association for\nComputational Linguistics.\nChu, Chenhui and Raj Dabre. 2019. Multilingual\nmulti-domain adaptation approaches for neural ma-\nchine translation. ArXiv , abs/1906.07978.\nChu, Chenhui and Rui Wang. 2018. A survey of do-\nmain adaptation for neural machine translation. In\nProceedings of the 27th International Conference on\nComputational Linguistics , pages 1304–1319, Santa\nFe, New Mexico, USA, August. Association for\nComputational Linguistics.\nChu, Chenhui, Raj Dabre, and Sadao Kurohashi.\n2017. An empirical comparison of domain adapta-\ntion methods for neural machine translation. In Pro-\nceedings of the 55th Annual Meeting of the Associa-\ntion for Computational Linguistics (Volume 2: Short\nPapers) , pages 385–391, Vancouver, Canada, July.\nAssociation for Computational Linguistics.\nConneau, Alexis, Kartikay Khandelwal, Naman Goyal,\nVishrav Chaudhary, Guillaume Wenzek, Francisco\nGuzm ´an, Edouard Grave, Myle Ott, Luke Zettle-\nmoyer, and Veselin Stoyanov. 2020. Unsupervised\ncross-lingual representation learning at scale. In\nProceedings of the 58th Annual Meeting of the Asso-\nciation for Computational Linguistics , pages 8440–\n8451, Online, July. Association for Computational\nLinguistics.\nde Souza, Jos ´e G.C., Marco Turchi, and Matteo Ne-\ngri. 2014a. Machine translation quality estima-\ntion across domains. In Proceedings of COLING\n2014, the 25th International Conference on Compu-\ntational Linguistics: Technical Papers , pages 409–\n420, Dublin, Ireland, August. Dublin City University\nand Association for Computational Linguistics.\nde Souza, Jos ´e G.C., Marco Turchi, and Matteo Negri.\n2014b. Towards a combination of online and mul-\ntitask learning for MT quality estimation: a prelim-\ninary study. In Workshop on interactive and adap-\ntive machine translation , pages 9–19, Vancouver,\nCanada, October 22. Association for Machine Trans-\nlation in the Americas.\nEo, Sugyeong, Chanjun Park, Jaehyung Seo, Hyeon-\nseok Moon, and Heuiseok Lim. 2021. A new tool\nfor efficiently generating quality estimation datasets.\nFomicheva, Marina, Shuo Sun, Erick Fonseca,\nChrysoula Zerva, Fr ´ed´eric Blain, Vishrav Chaud-\nhary, Francisco Guzm ´an, Nina Lopatina, Lucia Spe-\ncia, and Andr ´e F. T. Martins. 2020a. MLQE-PE:\nA Multilingual Quality Estimation and Post-Editing\nDataset. arXiv e-prints , page arXiv:2010.04480, Oc-\ntober.\nFomicheva, Marina, Shuo Sun, Lisa Yankovskaya,\nFr´ed´eric Blain, Francisco Guzm ´an, Mark Fishel,\nNikolaos Aletras, Vishrav Chaudhary, and Lucia\nSpecia. 2020b. Unsupervised quality estimation for\nneural machine translation. 
Transactions of the As-\nsociation for Computational Linguistics , 8:539–555.\nFomicheva, Marina, Shuo Sun, Erick Fonseca,\nChrysoula Zerva, Fr ´ed´eric Blain, Vishrav Chaud-\nhary, Francisco Guzm ´an, Nina Lopatina, Lucia Spe-\ncia, and Andr ´e F. T. Martins. 2022. MLQE-PE:\nA multilingual quality estimation and post-editing\ndataset. In Proceedings of the Thirteenth Language\nResources and Evaluation Conference , pages 4963–\n4974, Marseille, France, June. European Language\nResources Association.\nFreitag, Markus, Ricardo Rei, Nitika Mathur, Chi-kiu\nLo, Craig Stewart, George Foster, Alon Lavie, and\nOndˇrej Bojar. 2021. Results of the WMT21 met-\nrics shared task: Evaluating metrics with expert-\nbased human evaluations on TED and news domain.\nInProceedings of the Sixth Conference on Machine\nTranslation , pages 733–774, Online, November. As-\nsociation for Computational Linguistics.Kocyigit, Muhammed, Jiho Lee, and Derry Wijaya.\n2022. Better quality estimation for low resource cor-\npus mining. In Findings of the Association for Com-\nputational Linguistics: ACL 2022 , pages 533–543,\nDublin, Ireland, May. Association for Computational\nLinguistics.\nLee, Dongjun. 2020. Two-phase cross-lingual lan-\nguage model fine-tuning for machine translation\nquality estimation. In Proceedings of the Fifth Con-\nference on Machine Translation , pages 1024–1028,\nOnline, November. Association for Computational\nLinguistics.\nLin, Yu-Hsiang, Chian-Yu Chen, Jean Lee, Zirui Li,\nYuyan Zhang, Mengzhou Xia, Shruti Rijhwani,\nJunxian He, Zhisong Zhang, Xuezhe Ma, Antonios\nAnastasopoulos, Patrick Littell, and Graham Neubig.\n2019. Choosing transfer languages for cross-lingual\nlearning. In Proceedings of the 57th Annual Meet-\ning of the Association for Computational Linguistics ,\npages 3125–3135, Florence, Italy, July. Association\nfor Computational Linguistics.\nLiu, Yinhan, Jiatao Gu, Naman Goyal, Xian Li, Sergey\nEdunov, Marjan Ghazvininejad, Mike Lewis, and\nLuke Zettlemoyer. 2020. Multilingual Denoising\nPre-training for Neural Machine Translation. Trans-\nactions of the Association for Computational Lin-\nguistics , 8:726–742, 11.\nMurgolo, Elena, Javad Pourmostafa Roshan Sharami,\nand Dimitar Shterionov. 2022. A quality estimation\nand quality evaluation tool for the translation indus-\ntry. In Proceedings of the 23rd Annual Conference\nof the European Association for Machine Transla-\ntion, pages 307–308, Ghent, Belgium, June. Euro-\npean Association for Machine Translation.\nNegri, Matteo, Marco Turchi, Rajen Chatterjee, and\nNicola Bertoldi. 2018. ESCAPE: a large-scale\nsynthetic corpus for automatic post-editing. In\nProceedings of the Eleventh International Confer-\nence on Language Resources and Evaluation (LREC\n2018) , Miyazaki, Japan, May. European Language\nResources Association (ELRA).\nPapineni, Kishore, Salim Roukos, Todd Ward, and Wei-\nJing Zhu. 2002. Bleu: a method for automatic eval-\nuation of machine translation. In Proceedings of the\n40th Annual Meeting of the Association for Com-\nputational Linguistics , pages 311–318, Philadelphia,\nPennsylvania, USA, July. Association for Computa-\ntional Linguistics.\nPedregosa, Fabian, Ga ¨el Varoquaux, Alexandre Gram-\nfort, Vincent Michel, Bertrand Thirion, Olivier\nGrisel, Mathieu Blondel, Peter Prettenhofer, Ron\nWeiss, Vincent Dubourg, et al. 2011. Scikit-learn:\nMachine learning in python. Journal of machine\nlearning research , 12(Oct):2825–2830.\nPham, Minh-Quang, Josep Crego, and Franc ¸ois Yvon.\n2022. 
Multi-domain adaptation in neural ma-\nchine translation with dynamic sampling strategies.\nInProceedings of the 23rd Annual Conference of\nthe European Association for Machine Translation ,\npages 13–22, Ghent, Belgium, June. European As-\nsociation for Machine Translation.\nPost, Matt. 2018. A call for clarity in reporting BLEU\nscores. In Proceedings of the Third Conference on\nMachine Translation: Research Papers , pages 186–\n191, Belgium, Brussels, October. Association for\nComputational Linguistics.\nPourmostafa Roshan Sharami, Javad, Dimitar Shteri-\nonov, and Pieter Spronck. 2021. Selecting Parallel\nIn-domain Sentences for Neural Machine Transla-\ntion Using Monolingual Texts. arXiv e-prints , page\narXiv:2112.06096, December.\nQiu, Baopu, Liang Ding, Di Wu, Lin Shang, Yib-\ning Zhan, and Dacheng Tao. 2022. Original or\nTranslated? On the Use of Parallel Data for Trans-\nlation Quality Estimation. arXiv e-prints , page\narXiv:2212.10257, December.\nRanasinghe, Tharindu, Constantin Orasan, and Ruslan\nMitkov. 2020. TransQuest: Translation quality esti-\nmation with cross-lingual transformers. In Proceed-\nings of the 28th International Conference on Com-\nputational Linguistics , pages 5070–5081, Barcelona,\nSpain (Online), December. International Committee\non Computational Linguistics.\nRei, Ricardo, Craig Stewart, Ana C Farinha, and Alon\nLavie. 2020. COMET: A neural framework for MT\nevaluation. In Proceedings of the 2020 Conference\non Empirical Methods in Natural Language Process-\ning (EMNLP) , pages 2685–2702, Online, November.\nAssociation for Computational Linguistics.\nRubino, Raphael and Eiichiro Sumita. 2020. Inter-\nmediate self-supervised learning for machine trans-\nlation quality estimation. In Proceedings of the 28th\nInternational Conference on Computational Linguis-\ntics, pages 4355–4360, Barcelona, Spain (Online),\nDecember. International Committee on Computa-\ntional Linguistics.\nRubino, Raphael. 2020. NICT Kyoto submission for\nthe WMT’20 quality estimation task: Intermediate\ntraining for domain and task adaptation. In Proceed-\nings of the Fifth Conference on Machine Translation ,\npages 1042–1048, Online, November. Association\nfor Computational Linguistics.\nSaunders, Danielle. 2021. Domain adaptation and\nmulti-domain adaptation for neural machine transla-\ntion: A survey. J. Artif. Intell. Res. , 75:351–424.\nSennrich, Rico, Barry Haddow, and Alexandra Birch.\n2016. Controlling politeness in neural machine\ntranslation via side constraints. In Proceedings of\nthe 2016 Conference of the North American Chap-\nter of the Association for Computational Linguistics:\nHuman Language Technologies , pages 35–40, San\nDiego, California, June. Association for Computa-\ntional Linguistics.Snover, Matthew, Bonnie Dorr, Rich Schwartz, Lin-\nnea Micciulla, and John Makhoul. 2006. A study\nof translation edit rate with targeted human annota-\ntion. In Proceedings of the 7th Conference of the\nAssociation for Machine Translation in the Ameri-\ncas: Technical Papers , pages 223–231, Cambridge,\nMassachusetts, USA, August 8-12. Association for\nMachine Translation in the Americas.\nSpecia, Lucia, Fr ´ed´eric Blain, Marina Fomicheva,\nChrysoula Zerva, Zhenhao Li, Vishrav Chaudhary,\nand Andr ´e F. T. Martins. 2021. Findings of the\nWMT 2021 shared task on quality estimation. In\nProceedings of the Sixth Conference on Machine\nTranslation , pages 684–725, Online, November. As-\nsociation for Computational Linguistics.\nTamchyna, Ale ˇs. 2021. 
Deploying MT quality esti-\nmation on a large scale: Lessons learned and open\nquestions. In Proceedings of Machine Translation\nSummit XVIII: Users and Providers Track , pages\n291–305, Virtual, August. Association for Machine\nTranslation in the Americas.\nTezcan, Arda. 2022. Integrating fuzzy matches\ninto sentence-level quality estimation for neural ma-\nchine translation. Computational Linguistics in the\nNetherlands Journal , 12:99–123, Dec.\nTiedemann, J ¨org. 2012. Parallel data, tools and in-\nterfaces in OPUS. In Proceedings of the Eighth In-\nternational Conference on Language Resources and\nEvaluation (LREC’12) , pages 2214–2218, Istanbul,\nTurkey, May. European Language Resources Asso-\nciation (ELRA).\nVanmassenhove, Eva, Christian Hardmeier, and Andy\nWay. 2018. Getting gender right in neural machine\ntranslation. In Proceedings of the 2018 Conference\non Empirical Methods in Natural Language Process-\ning, pages 3003–3008, Brussels, Belgium, October-\nNovember. Association for Computational Linguis-\ntics.\nZerva, Chrysoula, Fr ´ed´eric Blain, Ricardo Rei, Piyawat\nLertvittayakumjorn, Jos ´e G. C. De Souza, Steffen\nEger, Diptesh Kanojia, Duarte Alves, Constantin\nOr˘asan, Marina Fomicheva, Andr ´e F. T. Martins, and\nLucia Specia. 2022. Findings of the WMT 2022\nshared task on quality estimation. In Proceedings\nof the Seventh Conference on Machine Translation\n(WMT) , pages 69–99, Abu Dhabi, United Arab Emi-\nrates (Hybrid), December. Association for Computa-\ntional Linguistics.\nZhang, Tianyi, Felix Wu, Arzoo Katiyar, Kilian Q.\nWeinberger, and Yoav Artzi. 2020. Revisiting Few-\nsample BERT Fine-tuning. arXiv e-prints , page\narXiv:2006.05987, June.\nZouhar, Vil ´em, Shehzaad Dhuliawala, Wangchunshu\nZhou, Nico Daheim, Tom Kocmi, Yuchen Jiang, and\nMrinmaya Sachan. 2023. Poor man’s quality esti-\nmation: Predicting reference-based mt metrics with-\nout the reference. ArXiv , abs/2301.09008.\nA Appendices\nA.1 Training Steps\nIn Figure 1, we present an overview of the pro-\nposed training steps for specialized QE.\nOOD QE \nDataset QE Framework \ncheckpoint : a \npre-trained LM OOD \nQE Model Step 1 \n+\nID QE \nDataset QE Framework \ncheckpoint : OOD \nQE Model initialization\nMixed FT \nQE Model \nQE Framework \ncheckpoint : Mixed FT \nQE Model initialization\nID \nQE Model \n Step 2 \n Step 3 \nFigure 1: Overview of the proposed training steps for spe-\ncialized QE. The “+” sign indicates the oversampling per-\nformed in Step 2 to balance the use of ID and OOD data. The\ndashed arrows indicate the source of the checkpoint used to\ninitialize the models in each stage.\nA.2 Statistically Significance Test Results\nThe statistical significance test results for the pre-\ndictions in Table 1 for the language pairs EN-DE,\nEN-ZH, RO-EN, and RU-EN are shown in Table 5.\nLanguage\npairModels NO TAG 1 NO TAG 2 TAG 1 TAG 2\nEN-DEBaseline Y Y Y Y\nNO TAG 1 - N N Y\nNO TAG 2 - - Y Y\nTAG 1 - - - Y\nEN-ZHBaseline Y Y Y Y\nNO TAG 1 - Y Y N\nNO TAG 2 - - N N\nTAG 1 - - - Y\nRO-ENBaseline N Y Y Y\nNO TAG 1 - N Y Y\nNO TAG 2 - - N N\nTAG 1 - - - N\nRU-ENBaseline Y Y Y Y\nNO TAG 1 - Y Y Y\nNO TAG 2 - - N Y\nTAG 1 - - - N\nTable 5: Statistically significant test results with a p-value\nless than 0.05. 
The letter "Y" in the table indicates that the corresponding prediction in Table 1 is statistically significant, while "N" indicates that it is not.
A.3 Data Augmentation: Approach 2
Figure 2 presents an overview of Approach 2 that is employed for data augmentation in the context of domain adaptation for QE.

Figure 2: Overview of Approach 2 (Generating synthetic ID) of data augmentation for domain adaptation in QE. The various steps involved in the approach are indicated close to the corresponding arrows. Arrow 1 represents subsampling. The abbreviations SRC, TRG, and Tlp stand for source, target, and machine-translated text, respectively. The final outputs, which include SRC, Tlp and quality labels (TER), are color-coded for clarity.

A.4 Machine Translation Performance
We utilized multilingual MT systems to generate synthetic ID data. Table 6 displays the results of the top-performing models used in generating this data.

Language pair | BLEU ↑ | Eval Loss ↓
EN-DE | 41.25 | 01.09
EN-ZH | 32.28 | 01.52
RO-EN | 49.60 | 00.96
RU-EN | 41.29 | 01.61

Table 6: MT performance used as a component of Approach 2 in the proposed DAG (Section 2.2).

A.5 Genetic Distance

Figure 3: Genetic distance between IT and other languages: DE, ZH, RO, RU, JA, and CZ (chart labels: DE 16%, ZH 25%, RO 8%, RU 13%, JA 25%, CZ 13%).

In MT, measuring the similarity between languages is important for effective cross-lingual learning. One such measure is the "genetic distance" between languages, which has been shown to be a good indicator of language similarity for independent data (Lin et al., 2019). To illustrate this, we calculate (using http://www.elinguistics.net/Compare_Languages.aspx) and present the genetic distance scores between Italian (used as OOD data) and the other languages included in our study in Figure 3. The genetic distance is represented as a numerical value ranging from 0 (indicating the same language) to 100 (the greatest possible distance).
A.6 Training time
Compared to the conventional approach of using a pre-trained LLM and fine-tuning it with QE data (baselines), our proposed DA methodology results in a significant improvement in performance, regardless of whether we include tags in the sentences or not. However, it requires two additional training steps: Step 1, training an OOD QE model, and Step 2, fine-tuning the model using a mix of OOD and ID QE data. These additional steps require more time. Step 1 and Step 2 (with both DAG approaches) are reused (i.e., not trained) for each language pair, and Step 3 of the pipeline took almost the same amount of time across all languages. That is why we present the consumed time for EN-ZH in Figure 4, and use it to discuss training times for other language pairs as well.

Figure 4: Training time (in hours) for models in the EN-ZH language pair, where Step X refers to the training step outlined in Section 2.1, and DAG X denotes the data augmentation approach used in the second step of the pipeline. The term "Baseline" denotes a model fine-tuned from XLM-R. The X and Y axes represent the training time in hours and the approaches used to train the model, respectively. Bar values as extracted, in order: 0.46, 3.41, 1.66, 1.11, 0.31, 0.32 for Baseline, Step 1, Step 2 DAG 1, Step 2 DAG 2, Step 3 DAG 1, Step 3 DAG 2.
Models trained with tagged data have a similar training time.
The data presented in Figure 4 indicates that Step 1 has the highest training time, with approximately 3.4 hours. It is noteworthy that this long training time is partly due to the fact that the model was evaluated after every 1000 steps HF, which consequently resulted in a longer running time in comparison to other models that were evaluated after every 500 steps HF. Furthermore, the model that was trained is publicly accessible, and other individuals can utilize it to fine-tune with new ID datasets, avoiding the need for retraining for each specific ID data. This applies to both DAG approaches, given that the target language pair was used in Step 2 of the pipeline. If not, Step 1 must be fine-tuned with a new set of QE data.", "main_paper_content": null }
{ "decision": "Unknown", "reviews": [] }
0
0
[]
[]
{ "id": "akIWRbZ7E2K", "year": null, "venue": "EAMT 2022", "pdf_link": "https://aclanthology.org/2022.eamt-1.60.pdf", "forum_link": "https://openreview.net/forum?id=akIWRbZ7E2K", "arxiv_id": null, "doi": null }
{ "title": "Curated Multilingual Language Resources for CEF AT (CURLICAT): overall view", "authors": [ "Tamás Váradi", "Marko Tadic", "Svetla Koeva", "Maciej Ogrodniczuk", "Dan Tufis", "Radovan Garabík", "Simon Krek", "Andraz Repar" ], "abstract": "Tamás Váradi, Marko Tadić, Svetla Koeva, Maciej Ogrodniczuk, Dan Tufiş, Radovan Garabík, Simon Krek, Andraž Repar. Proceedings of the 23rd Annual Conference of the European Association for Machine Translation. 2022.", "keywords": [], "raw_extracted_content": "Curated Multilingual Language Resources for CEF AT (CURLICAT): Overall View Tamás Váradi Research Institute for Linguistics, Budapest, Hungary [email protected] Marko Tadić University of Zagreb, Faculty of Humanities and Social Sciences, Zagreb, Croatia [email protected] Svetla Koeva Institute of Bulgarian Language, Bulgarian Academy of Sciences, Sofia, Bulgaria [email protected] Maciej Ogrodniczuk Institute of Computer Science, Polish Acad-emy of Sciences, Warsaw, Poland [email protected] Dan Tufiș RACAI, Romanian Academy, Bucharest, Romania [email protected] Radovan Garabík Ľ. Štúr Institute of Linguistics, Slovak Acad-emy of Sciences, Bratislava, Slovakia [email protected] Simon Krek, Andraž Repar Institute Jozef Stefan, Ljubljana, Slovenia [email protected], [email protected] Abstract The work in progress on the CEF action CURLICAT is presented. The general aim of the action is to compile curated monolingual datasets in seven languages of the consortium in domains of rele-vance to European Digital Service Infra-structures (DSIs) in order to enhance the eTranslation services. 1 Introduction The paper©presents the work in progress on the CEF action Curated Multilingual Language Resources for CEF AT (CURLICAT, which runs from 2020-06-01 till 2022-11-30). The aim of the action is to compile monolingual curated datasets in seven languages of the consortium (Bulgarian, Croatian, Hungarian, Polish, Romanian, Slovak, Slovenian) in domains of relevance to European Digital Service Infrastructures (DSIs) with a view to enhancing the eTranslation automated translation system. © 2022 The authors. This article is licensed under a Creative Commons 3.0 licence, no derivative works, attribution, CC- BY-ND. 2 Datasets The primary data come from national or refer-ence corpora of the above languages and it is planned to cover domains of interest for CEF DSIs such as eHealth, Europeana or eGovern-ment. When completed, the corpus will contain at least at least 2 million sentences from each language, i.e. 14 million sentences, estimated to number at least 140 million words, from domains including culture, health, science and econo-my/finances. For each language, it is expected to produce corpora in each of the above mentioned four domains with at least 500 000 sentences and 5 million words. In case that legally non-binding data with a clear licence allowing free redistribu-tion could not be found from the national corpora in the required quantities, additional data is in-cluded from other sources. 2.1 Annotation Apart from corpora being domain classified, data are linguistically annotated including sentence splitting, tokenisation, lemmatisation, part-of-speech/morphosyntactic-descriptor tagging, dep-endency parsing and NERC. The annotation \n \n follows the extended CoNLL-U Plus1 format presented by Váradi et al. (2020). 
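As a purely illustrative sketch of how such annotation can be serialised, the snippet below emits one token line using the ten standard CoNLL-U columns; the project-specific extra columns defined in Váradi et al. (2020) are not reproduced here, since their exact names are not repeated in this paper.

```python
# Illustrative only: emitting one token in CoNLL-U Plus style. The ten standard
# CoNLL-U columns are shown; CURLICAT's additional project-specific columns are
# omitted because their schema is defined elsewhere (Váradi et al., 2020).
COLUMNS = ["ID", "FORM", "LEMMA", "UPOS", "XPOS", "FEATS", "HEAD", "DEPREL", "DEPS", "MISC"]

def token_line(idx, form, lemma, upos, xpos="_", feats="_",
               head=0, deprel="root", deps="_", misc="_"):
    values = [str(idx), form, lemma, upos, xpos, feats, str(head), deprel, deps, misc]
    return "\t".join(values)

print("# global.columns = " + " ".join(COLUMNS))
print(token_line(1, "corpora", "corpus", "NOUN", feats="Number=Plur"))
```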
Additionally, terms from the most recent version of the IATE terminological database are identified and annotated, so that the language models built with the help of these corpora can take into account not only single words but also multi-word expressions, since these terms represent an additional layer of annotation in a stand-off manner. With this additional annotation these corpora can also serve as a valuable resource for terminological processing.
2.2 Intellectual Property Rights Issues and Anonymisation
The data are technically and legally cleaned by either of two procedures: 1) inclusion of text samples published under permissive licences, or for which consent was obtained from the content producer, or 2) scrambling of the order of sentences. In this way these corpora will be useful for producing language models up to the level of a sentence, while they will not be useful for language modelling at higher linguistic levels; even with this limitation, we see these corpora as a valuable resource for MT training. The metadata will specify whether the texts were scrambled or not. For legal reasons the data will also be anonymised through replacement of named entities by entities of the same kind and with similar phonological, morphological or graphemic structure (a process that is inherently language-dependent; e.g. for Romanian "Maria" becomes "_#PER#1_", while "Mariei" becomes "_#PER#1_ei"). To ensure a higher degree of privacy preservation, local pseudonymisation, i.e. the complete replacement of named entities by one or more artificial identifiers at document or sub-document level, is used.
During the course of the project, we will develop an anonymisation solution tailored to the specific needs of the CURLICAT corpus by leaning on existing European anonymisation initiatives (i.e. the Multilingual Anonymisation for Public Administrations (MAPA) project2 (Ajausks et al. 2020), which provided anonymisation support for all EU languages) and on local solutions developed by the project partners. Specifically, Hungarian, Romanian, Bulgarian and Slovak plan to implement local solutions, while Slovenian, Croatian and Polish will use a solution based on the MAPA project. The approaches for all seven languages will be combined in a single user interface and made available via the European Language Grid3 repository.
1 https://universaldependencies.org/ext-format.html
2 https://mapa-project.eu
3 Conclusions
Since an important aspect of today's neural machine translation technology is the quality of the language model, the envisaged seven language corpora, although monolingual datasets in themselves, can rightly be expected to make an impact on the quality of the eTranslation system through the enhanced language models built with them. Since these corpora in seven languages systematically cover the same four domains, they can also be regarded as comparable corpora for these domains and thus be used for further processing, e.g. in parallel terminology extraction. Moreover, the action addresses a gap in MT technology, which crucially depends on the provision of domain-specific, quality language resources for under-resourced languages.
Acknowledgements
The work reported here was supported by the European Commission in the CEF Telecom Programme (Action No: 2019-EU-IA-0034, Grant Agreement No: INEA/CEF/ICT/A2019/1926831) and by the Polish Ministry of Science and Higher Education (research project 5103/CEF/2020/2, funds for 2020–2022).
References

Váradi, Tamás; Koeva, Svetla; Yamalov, Martin; Tadić, Marko; Sass, Bálint; Nitoń, Bartłomiej; Ogrodniczuk, Maciej; Pęzik, Piotr; Barbu Mititelu, Verginica; Ion, Radu; Irimia, Elena; Mitrofan, Maria; Păiș, Vasile; Tufiș, Dan; Garabík, Radovan; Krek, Simon; Repar, Andraž; Rihtar, Matjaž; and Brank, Janez. 2020. The MARCELL legislative corpus. In Proceedings of the 12th Language Resources and Evaluation Conference (LREC 2020), pp. 3761–3768.

Ajausks, Ēriks; Arranz, Victoria; Bie, Laurent; Cerdà-i-Cucó, Aleix; Choukri, Khalid; Cuadros, Montse; Degroote, Hans; Estela, Amando; Etchegoyhen, Thierry; García-Martínez, Mercedes; García-Pablos, Aitor; Herranz, Manuel; Kohan, Alejandro; Melero, Maite; Rosner, Mike; Rozis, Roberts; Paroubek, Patrick; Vasiļevskis, Artūrs; Zweigenbaum, Pierre. 2020. The Multilingual Anonymisation Toolkit for Public Administrations (MAPA) Project. In Proceedings of the 22nd Annual Conference of the European Association for Machine Translation (EAMT 2020), pp. 471–472.

³ https://www.european-language-grid.eu", "main_paper_content": null }
{ "decision": "Unknown", "reviews": [] }
0
0
[]
[]
{ "id": "SDld4iK4Gu9", "year": null, "venue": "EAMT 2023", "pdf_link": "https://aclanthology.org/2023.eamt-1.11.pdf", "forum_link": "https://openreview.net/forum?id=SDld4iK4Gu9", "arxiv_id": null, "doi": null }
{ "title": "An Empirical Study of Leveraging Knowledge Distillation for Compressing Multilingual Neural Machine Translation Models", "authors": [ "Varun Gumma", "Raj Dabre", "Pratyush Kumar" ], "abstract": null, "keywords": [], "raw_extracted_content": "An Empirical Study of Leveraging Knowledge Distillation\nfor Compressing Multilingual Neural Machine Translation Models\nVarun Gumma1, Raj Dabre2, Pratyush Kumar3\nIndian Institute of Technology, Madras1,3Microsoft3AI4Bharat1,2,3\nNational Institute of Information and Communications Technology2\[email protected]@nict.go.jp\[email protected]\nAbstract\nKnowledge distillation (KD) is a well-\nknown method for compressing neural\nmodels. However, works focusing on dis-\ntilling knowledge from large multilingual\nneural machine translation (MNMT) mod-\nels into smaller ones are practically nonex-\nistent, despite the popularity and superior-\nity of MNMT. This paper bridges this gap\nby presenting an empirical investigation\nof knowledge distillation for compressing\nMNMT models. We take Indic to English\ntranslation as a case study and demonstrate\nthat commonly used language-agnostic\nand language-aware KD approaches yield\nmodels that are 4-5×smaller but also suf-\nfer from performance drops of up to 3.5\nBLEU. To mitigate this, we then experi-\nment with design considerations such as\nshallower versus deeper models, heavy pa-\nrameter sharing, multi-stage training, and\nadapters. We observe that deeper compact\nmodels tend to be as good as shallower\nnon-compact ones, and that fine-tuning a\ndistilled model on a High-Quality subset\nslightly boosts translation quality. Over-\nall, we conclude that compressing MNMT\nmodels via KD is challenging, indicating\nimmense scope for further research.\n1 Introduction\nNeural Machine Translation (NMT) (Bahdanau et\nal., 2015; Vaswani et al., 2017) is a state-of-the-\nart approach to machine translation that has gained\n© 2023 The authors. This article is licensed under a Creative\nCommons 4.0 licence, no derivative works, attribution, CC-\nBY-ND.\nFigure 1: A comparison of the major distillation techniques\nand models we experimented with. Note that the red incre-\nments in the bar plots denote the improvements due to HQ\nfine-tuning for those models.\nsignificant attention in recent years. With the avail-\nability of large corpora and compute, Multilingual\nNMT (MNMT) (Zhang et al., 2019; Firat et al.,\n2016; Aharoni et al., 2019) has gained popularity\nsince it enables a single model to translate between\nmultiple languages. Large MNMT models trained\non substantial data have shown higher levels of\nperformance. However, these models are impracti-\ncal for deployment on a commercial or production\nscale due to their size, which contains millions, if\nnot billions, of parameters. Therefore, they need\nto be compressed into smaller models for efficient\nand convenient usage.\nIn practice, models are compressed via two\nmethods: Firstly, by stripping unnecessary and re-\ndundant parameters from the existing model (Bu-\nciluˇa et al., 2006), and secondly, by transferring\nknowledge from the larger “teacher” model to a\nsmaller “student” model using distillation (Hin-\nton et al., 2015). This study focuses on the lat-\nter, as the former can be done post-hoc (Diddee\net al., 2022). 
Although existing literature mainly\ndiscusses bilingual-to-multilingual or bilingual-to-\nbilingual distillation, to the best of our knowledge,\nthere is no work in end-to-end multilingual-to-\nmultilingual knowledge distillation for compres-\nsion in a setting with a mix of low, medium, and\nhigh resource languages. Therefore, we aim to\ndistill a large MNMT model into a smaller one\ntaking Indic to English language translation as a\ncase study and perform an empirical investigation\nof prominent techniques such as language agnostic\nand language-wise word-level and sequence-level\ndistillation. We also look into architectural varia-\ntions, multi-stage training, and High-Quality data\nfiltering to improve our performance.\nOur contributions can be summarized as follows:\n1.We investigate the effect of existing distillation\ntechniques for compressing MNMT models and\nfind that all of them produce comparable results,\nindicating that the simplest methods are sufficient.\n2.We explore the outcome of language-specific\narchitectures such as Adapters and Language-\nQueues and conclude that they failed to sufficiently\nspecialize the models for significant gains.\n3.We analyze the performance gains due to multi-\nstage training and find that High-Quality fine-\ntuning boosts performance in a noisy scenario.\n4.We analyze the trade-off between width and\nheight for Transformers (Vaswani et al., 2017) and\ndetermine that thinner but deeper models comprise\nfewer parameters but perform comparably to wider\nbut shallower models.\n2 Related works\nThis paper focuses on Knowledge Distillation\n(KD) for compressing Multilingual Neural Ma-\nchine Translation (MNMT) models.\nMultilingual Neural Machine Translation\n(Zhang et al., 2019; Firat et al., 2016; Aharoni et\nal., 2019) is the favored approach for developing\nmachine translation systems that can handle\nmultiple languages. MNMT systems incorporate\nlanguage-specific information through the use\nof shared encoder and decoder architecture and\nlanguage-specific embeddings. MNMT systems\noften require less training data than separate\nbilingual models for each language, making it an\nattractive area of research. A detailed analysis\nof MNMT can be found in the survey paper by\n(Dabre et al., 2020).\nModel compression , which involves pruning or\nreparameterizing large models to reduce their\nsizes, has been explored in previous studies(Bucilu ˇa et al., 2006; Wang et al., 2020; Behnke\nand Heafield, 2020; Behnke et al., 2021). Or-\nthogonally, compression can be achieved by\nheavy parameter sharing, especially across lay-\ners (Dabre and Fujita, 2019). (Dabre et al.,\n2022) have investigated this in their IndicBART\nwork, demonstrating that a significant parameter\nreduction leads to decreased performance, but\nknowledge distillation can help overcome this\ngap. We also explore this parameter sharing across\nlayers, noting that we focus on compressing larger\nmodels in higher resource settings.\nKnowledge Distillation (Hinton et al., 2015;\nKim and Rush, 2016) is yet another orthogonal\napproach for model compression, to extract\nessential information from a larger model and\ntransfer it to a smaller model while minimizing\nthe drop in performance. (Dabre and Fujita, 2020)\npresent an approach leveraging Sequence-Level\nDistillation (Kim and Rush, 2016) with Transfer\nLearning for efficiently training NMT models in\na highly low-resource scenario. 
However, their\nsetup focused on relatively minor data scales,\nwhereas we mainly operate in a medium to high\nresource scenario with multilingualism. (Do and\nLee, 2022) propose a multilingual distillation\ntechnique but use multiple multilingual strong\nteacher models of similar languages, similar to the\nmethod of (Tan et al., 2019) where they employ\nbilingual teacher models to distill into a single\nmultilingual student. Our work differs from both\nin two aspects: (a) we do not use multiple bilin-\ngual/multilingual models as teachers, but instead\nfocus on distilling one single robust multilingual\nmodel into another multilingual model end-to-end\n(b) we aim to compress where they do not. We do\nnot use their techniques because our preliminary\ninvestigations showed that our teacher model was\nbetter than individual bilingual or multilingual\nmodels of similar languages.\nTo the best of our knowledge, previous research\non distillation has focused on distilling bilingual\nnetworks or training an equally sized student\nmodel from multiple strong bilingual/multilingual\nteacher models. Therefore, we believe our work\nis a first-of-its-kind introductory investigation in\nthe domain of end-to-end distillation of MNMT\nmodels for compression.\n3 Methodology\nThis section describes the KD approaches and de-\nsign considerations we focused on in this paper.\n3.1 KD Approaches\nWe describe the fundamental language-agnostic\nKD approaches, such as word and Sequence-Level\nKD and a language-aware KD approach using\nqueues.\nWord-Level Distillation (WLD) : Following (Hin-\nton et al., 2015), (Kim and Rush, 2016) pro-\nposed Word-Level Distillation, which aims to min-\nimize the KL-Divergence/Cross-Entropy between\nthe student and teacher models at each time-step.\nHowever, we did not test this method because\n(Kim and Rush, 2016) showed that it is not a good\napproximation of the sequential learning task, as it\nfocuses on the current timestep only and not on the\nentire sequence.\nSequence-Level Distillation (SLD) : (Kim and\nRush, 2016) argued that the student model should\ncapture the Sequence-Level distribution of the\nteacher model rather than the individual word-level\ndistribution at each timestep. Therefore, they pro-\nposed that capturing the best beam search output\nof the teacher, which can approximate the distribu-\ntion, can be used as hard pseudo-labels for the stu-\ndent. These hard pseudo-labels are called the dis-\ntilled targets. We extensively used this Sequence-\nLevel Distillation technique to train all our stu-\ndent models because it is easy to implement and\nhas been proven to give better results than regular\nword-level distribution.\nWord + Sequence-Level Distillation (W+S LD) :\n(Kim and Rush, 2016) further proposed that Word-\nLevel Distillation can be carried out in congruence\nwith Sequence-Level Distillation to aid the student\nmodel in capturing both the word-level distribu-\ntion at each timestep and the overall Sequence-\nLevel distribution. This allows the student model\nto mimic the generalization of the teacher better.\nHence, we applied this technique to determine if\nthere were any improvements in performance over\nvanilla Sequence-Level Distillation.\nSelective Distillation : (Wang et al., 2021) showed\nthat some samples are “hard” to distill and require\nadditional distillation signals to train, while others\nare “easy” and do not. 
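Before detailing the selection strategies further, the word- and sequence-level objectives above can be summarised in a short PyTorch-style sketch: (a) distilled targets are obtained by decoding the teacher with beam search, and (b) a word-level KL term is combined with cross-entropy on those distilled targets (the W+S setting). The model interfaces (`teacher.generate`, logits of shape batch x time x vocab) and the weighting scheme are assumptions for illustration, not the exact fairseq implementation used in the experiments.

```python
import torch
import torch.nn.functional as F

def make_distilled_targets(teacher, src_batch, beam_size=5):
    """Sequence-level KD: the teacher's best beam output becomes the hard pseudo-target."""
    # `teacher.generate` is an assumed interface returning token ids of the best hypothesis.
    with torch.no_grad():
        return teacher.generate(src_batch, beam_size=beam_size)

def word_plus_sequence_loss(student_logits, teacher_logits, distilled_targets,
                            pad_id, alpha=0.5, temperature=1.0):
    """Combine cross-entropy on distilled targets with a word-level KL term (W+S LD).

    student_logits, teacher_logits: (batch, time, vocab); distilled_targets: (batch, time).
    """
    vocab = student_logits.size(-1)
    mask = distilled_targets.ne(pad_id).float()

    # Sequence-level part: ordinary NLL against the teacher-generated hard targets.
    ce = F.cross_entropy(student_logits.reshape(-1, vocab),
                         distilled_targets.reshape(-1),
                         ignore_index=pad_id)

    # Word-level part: match the teacher's per-step output distribution.
    t = temperature
    kd = F.kl_div(F.log_softmax(student_logits / t, dim=-1),
                  F.softmax(teacher_logits / t, dim=-1),
                  reduction="none").sum(-1)
    kd = (kd * mask).sum() / mask.sum()

    return alpha * kd * (t ** 2) + (1.0 - alpha) * ce

# Toy shapes: batch of 2, 7 target positions, vocabulary of 11, pad id 0.
s = torch.randn(2, 7, 11)
t_ = torch.randn(2, 7, 11)
tgt = torch.randint(1, 11, (2, 7))
print(word_plus_sequence_loss(s, t_, tgt, pad_id=0))
```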
Therefore, they proposed\nthe idea of identifying “hard” samples from a batch\nand applying a word-level distillation loss specif-\nically to them. They further extended the Batch-Level selection to Global-Level selection, where\nthey select “hard” samples from a large queue\ncomparable in size to the entire dataset to better\napproximate the negative log-likelihood loss dis-\ntribution used to identify “hard” samples. Since\nwe operate with a mix of low, medium, and high-\nresource languages, we chose to investigate both\ntheir Batch-Level (BL) andGlobal-Level (GL)\nselection strategies to promote low-resource lan-\nguages, which might be challenging to distill due\nto their scarcity during training.\nGlobal-Language-wise Distillation (GLwD) :\nThe selection strategy proposed by (Wang et al.,\n2021) at the global level is designed for bilingual\nsettings. However, in multilingual settings with\nmixtures of languages with varying levels of\nabundance, a single global queue may not be\nsuitable because it may become populated with\nsamples mainly from high-resource languages. As\na result, the selection algorithm may be biased\ntoward resource-rich languages. Therefore, we\npropose a novel modification to this technique\ninvolving a language-wise selection strategy.\nSpecifically, we propose to push samples from\neach language into their respective global queues,\nremove the oldest samples to maintain the queue\nsize, and apply an additional distillation loss to the\n“harder” samples from each queue, similar to the\nGlobal-Level selection.\n3.2 Design Considerations\nApart from the core distillation approaches above,\nwe also explore the impact of several architectural\nand training pipeline design considerations. In par-\nticular, we focus on the impact of variable depth,\nextreme parameter-sharing, dataset filtering and\nmulti-stage training, and language-specific distil-\nlation via adapters.\nWidth vs. Height: Based on the findings of\n(Tay et al., 2022), we opted to analyze thinner but\ndeeper models, as we found these models to have\nfewer parameters than wider but shallower models.\nRecurrent-Stacking: We also train models on the\ndistilled data with recurrently stacked layers, fol-\nlowing the idea of (Dabre and Fujita, 2019) in\nwhich layer parameters are tied across layers. This\nlimited the number of parameters to 207M but gave\nthe effect of a model with multiple layers.\nMulti-stage Training with High-Quality Data:\nWe observed that the distilled data contained a\nfew noisy samples that hindered training. To ad-\nFigure 2: A flow chart depicting our set of experiments\ndress this issue, we experimented with a multi-\nstage training setup. First, we trained a smaller\nmodel on the complete dataset, and then we fine-\ntuned it on the High-Quality data filtered from the\ncomplete dataset. We filtered the data based on\nthe LaBSE1(Feng et al., 2022) cosine similarity\nscores, selecting only those translation pairs whose\nsimilarity score was greater than µL+kσLfor each\nlanguage, where uLandσLdenote the mean and\nstandard deviation of the translation scores for lan-\nguage L. We empirically chose kto limit the High-\nQuality data size to approximately 20% of the to-\ntal, with a uniform sampling of data from each lan-\nguage.\nAdapters: Adapters are small feed-forward mod-\nules introduced in pre-trained models and fine-\ntuned on a downstream task while freezing the\ntrained model’s parameters (Houlsby et al., 2019;\nBapna and Firat, 2019). 
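A minimal sketch of the batch-level and language-wise selection strategies described above (the adapter set-up is returned to below) might look as follows; the per-example loss, the queue size and the 50% "hard" ratio are illustrative assumptions rather than the exact criteria of Wang et al. (2021).

```python
from collections import defaultdict, deque
import torch

def select_hard_in_batch(per_example_nll, ratio=0.5):
    """Batch-level selection: mark the highest-loss examples in the batch as 'hard'.

    per_example_nll: (batch,) sentence-level NLL of the student on the distilled targets.
    Returns a boolean mask; the extra word-level KD loss is applied only where it is True.
    """
    k = max(1, int(ratio * per_example_nll.numel()))
    threshold = per_example_nll.topk(k).values.min()
    return per_example_nll >= threshold

class LanguageWiseQueues:
    """Global-Language-wise selection: one bounded NLL queue per source language,
    so high-resource languages cannot crowd out low-resource ones."""

    def __init__(self, max_size=4096, ratio=0.5):
        self.queues = defaultdict(lambda: deque(maxlen=max_size))
        self.ratio = ratio

    def update_and_select(self, langs, per_example_nll):
        hard = torch.zeros(len(langs), dtype=torch.bool)
        for i, (lang, nll) in enumerate(zip(langs, per_example_nll.tolist())):
            q = self.queues[lang]
            q.append(nll)  # oldest entries fall out automatically (bounded deque)
            cutoff = sorted(q, reverse=True)[max(0, int(self.ratio * len(q)) - 1)]
            hard[i] = nll >= cutoff
        return hard

# Toy usage with made-up losses: two Hindi and two Assamese examples.
print(select_hard_in_batch(torch.tensor([1.2, 0.4, 2.5, 0.9])))
queues = LanguageWiseQueues(max_size=8, ratio=0.5)
print(queues.update_and_select(["hi", "hi", "as", "as"],
                               torch.tensor([1.2, 0.4, 2.5, 0.9])))
```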
They add only a tiny frac-\ntion of parameters to the model but provide addi-\ntional parameterization for the model to adapt to\nadditional languages/domains independently with-\nout requiring complete fine-tuning. Adapters are\nparticularly useful for distillation, as they should\nhelp recover any loss in performance due to com-\npression via fewer additional parameters. Further-\nmore, they should help the model adjust to var-\nious languages’ specifics during translation. To\ninvestigate the effects of language similarity and\ncross-lingual inference on distillation, we have ex-\n1https://huggingface.co/\nsentence-transformers/LaBSEperimented with fine-tuning distilled models with\nadapters for individual languages and language\nfamilies (Chronopoulou et al., 2022).\n4 Experiments\nWe now focus on Indic-to-English translation as a\ncase study and describe experiments we conducted\nto compress IndicTrans, a 474M parameter model.\n4.1 Datasets\nWe use or create the following datasets:\nOriginal data : We use Samanantar (Ramesh et\nal., 2022) as the original (undistilled) dataset, the\nstatistics for which are in Table-1 in the column\n#Pairs. This dataset was used to train IndicTrans,\nour teacher model, and we use it for generating the\ndistilled data and conducting comparative studies.\nDistilled data : The distilled data used for train-\ning student models was generated by performing\nbeam search (with a beam size of 5) over Samanan-\ntar in the Indic-En direction with IndicTrans., i.e.,\nusing the Sequence-Level distillation technique of\n(Kim and Rush, 2016). The best beam output was\nthen utilized as the hard pseudo-labels for train-\ning smaller models. Following Section 3.2, we\nfilter this data to obtain a smaller, higher quality\nversion, the statistics for which are in the column\n#HQ-Pairs in Table-1.\nEvaluation data : We use Flores101 (Goyal et al.,\n2022) for evaluation, where the dev set ( 997pairs\nper language) is used for validation and the test set\n(1012 pairs) for testing.\nLang ISO code #Pairs #HQ Pairs\nAssamese as 0.1 0.02\nOdia or 1.0 0.2\nPunjabi pa 3.0 0.6\nGujarati gu 3.1 0.6\nMarathi mr 3.6 0.8\nKannada kn 4.1 0.9\nTelugu te 4.9 1.1\nTamil ta 5.3 1.0\nMalayalam ml 5.9 1.3\nBengali bn 8.6 1.7\nHindi hi 10.1 2.0\nTotal - 49.8 10.3\nTable 1: The number of original (#pairs) sentence pairs per\nlanguage (in millions) in the distilled (and original). #HQ-\nPairs indicates High-Quality distilled pairs. The languages\nare categorized into low, medium, and high-resource groups.\n4.2 Pre-Processing and Vocabulary\nWe follow (Ramesh et al., 2022) and transliterate\nall the Indic source sentences into Devanagari us-\ning the Indic-NLP-Library2before training, to take\nadvantage of the script-similarity between vari-\nous Indian languages. The dev-test set is likewise\ntransliterated, and language tags are added before\nevaluation. For consistency, we use the same vo-\ncabulary as IndicTrans, which contains 32K sub-\nwords for all 11Indic languages and separate 32K\nsubwords for English.\n4.3 Evaluation Metrics\nWe use BLEU (Papineni et al., 2002) as the pri-\nmary evaluation metric. We also report Chrf++\nscores (Popovi ´c, 2017) in the Appendix.\n4.4 Training setup\nWe train our models using fairseq3(Ott et al.,\n2019). We obtained the implementation for KD\nfrom LeslieOverfitting4. The Transformer archi-\ntecture (Vaswani et al., 2017) is used throughout\nour experiments. 
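A sketch of the per-language LaBSE filtering step described above is given below, using the sentence-transformers checkpoint referenced in the footnote. The value of k, the data handling and the exact model-loading call are illustrative assumptions rather than the paper's released filtering script.

```python
import numpy as np
from sentence_transformers import SentenceTransformer

def labse_scores(model, src_sents, tgt_sents, batch_size=64):
    """Cosine similarity between source and target embeddings for each sentence pair."""
    src = model.encode(src_sents, batch_size=batch_size, normalize_embeddings=True)
    tgt = model.encode(tgt_sents, batch_size=batch_size, normalize_embeddings=True)
    return (src * tgt).sum(axis=1)  # dot product of unit vectors = cosine similarity

def filter_high_quality(pairs_by_lang, model, k=1.0):
    """Keep pairs scoring above mu_L + k * sigma_L, computed separately per language L."""
    kept = {}
    for lang, (src_sents, tgt_sents) in pairs_by_lang.items():
        scores = labse_scores(model, src_sents, tgt_sents)
        threshold = scores.mean() + k * scores.std()
        keep = scores > threshold
        kept[lang] = [(s, t) for s, t, ok in zip(src_sents, tgt_sents, keep) if ok]
    return kept

# Schematic usage; in practice each language contributes millions of pairs and k is
# tuned so that roughly 20% of the distilled data survives the filter.
model = SentenceTransformer("sentence-transformers/LaBSE")
data = {"hi": (["वह घर जाता है ।"], ["He goes home."])}
hq = filter_high_quality(data, model, k=1.0)
```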
The hyperparameters used for\ntraining are presented in Appendix-A Table-9.\nUnlike IndicTrans, we use GELU activation\n(Hendrycks and Gimpel, 2016) instead of ReLU\nactivation. Additionally, pre-normalization is ap-\nplied to all modules, and layer normalization (Ba\net al., 2016) is applied to the embedding. These\nmodifications led to more stable training. Where\n2https://github.com/anoopkunchukuttan/\nindic_nlp_library\n3https://github.com/VarunGumma/fairseq\n4https://github.com/LeslieOverfitting/\nselective_distillationearly stopping for IndicTrans was done using loss\non the development set, we used BLEU score.\n4.5 Model Configurations\nWe trained models with various configurations (as\nlisted in Table-2). The smallest model is “base”,\nthe same as Transformer-base in (Vaswani et al.,\n2017). The largest is “huge” which is the same\nsize as IndicTrans, and “huge RS” is its equivalent\nwhere all layers have the same parameters.\nModel P d M dFF L H\nbase 95.4 512 2048 6 8\nbase12L 139.5 512 2048 12 8\nbase18L 183.7 512 2048 18 8\nbase24L 227.8 512 2048 24 8\nbig 278.9 1024 4096 6 16\nhugeRS 207.3 1536 4096 1 16\nhuge 474.9 1536 4096 6 16\nTable 2: The table presents the architectural description\nof various Transformer models that were tested. Here, the\ncolumns represent the number of parameters (P) in millions,\nthe dimension of the model (d M), the dimension of the feed-\nforward network (d FF), the number of layers (L) and the\nnumber of attention heads (H). It is worth noting that the\nhugeRSmodel contains only one unique layer, but it is recur-\nrently stacked 6times. This means the other 5layers in the\nencoder/decoder are simply references to the original layer.\n5 Results\nThis section presents the results of applying\nKnowledge Distillation (KD) approaches to com-\npress the IndicTrans Indic-to-English teacher\nmodel.\n5.1 Main Results\nTable-3 compares various distillation approaches\nusing a student model with the base configura-\ntion. As compared to a base model trained on\nthe original data, which is around 3.6BLEU be-\nlow the IndicTrans model, we can observe im-\nprovements for both low and high-resource lan-\nguages through the use of conventional distillation\nmethods. The simplest among these, Sequence-\nLevel distillation (SLD), shows an improvement of\n0.3BLEU on average compared to its undistilled\nequivalent. Significantly, low-resource languages\nsuch as Assamese and Odia and a few medium-\nresource languages like Kannada benefit the most.\nIn contrast, resource-rich languages like Hindi and\nBengali have comparable or a slight drop in perfor-\nmance. The Batch-Level selection approach (BL)\nwas the best among all distillation approaches and\nshowed the best results for 6out of 11languages.\nLang OG base IT SLD W+S LD BL GL GLwD\nas 18.4 23.3 19.7 19.8 20.5 20.3 20.5\nbn 28.9 31.8 28.8 28.9 29.1 28.3 28.7\ngu 30.6 34.1 30.6 31.5 31.7 31.3 30.9\nhi 34.3 37.5 34.1 34.2 34.7 34.4 34.6\nkn 25.2 28.7 26.1 25.8 25.9 26.0 25.8\nml 27.7 31.4 28.2 27.9 28.2 27.6 28.0\nmr 27.4 31.0 28.1 28.0 27.8 27.5 27.8\nor 26.3 29.8 26.8 27.0 27.0 27.1 26.5\npa 31.0 35.8 31.2 31.4 31.3 31.4 31.1\nta 25.3 28.4 25.1 25.1 25.4 25.2 25.2\nte 30.4 33.4 30.4 30.6 30.2 30.6 30.4\nAvg 27.8 31.4 28.1 28.2 28.3 28.2 28.1\nTable 3: BLEU scores of base model distilled with various\ndistillation techniques. Note that the scores of the base model\ntrained on the Original Samanantar data (OG base) and In-\ndicTrans (IT; huge ) in the first and second columns are for\nreference. 
The best scores of distilled models are bolded.\nOn the other hand, Global-Level selection (GL)\ndid not perform as well, indicating that adaptation\nis best done per batch since Global-Level selec-\ntion may update similar examples whereas Batch-\nLevel adaptation would choose diverse examples.\nFurther, we observed that the queue size should be\nmeticulously tuned in case of a mix of languages.\nTo our surprise, active distillation (W+S LD)\nfailed to significantly improve despite leverag-\ning distilled data and the parent model’s soft la-\nbels. Also, or adaptation of Global-Level selection\nto Global-Language-wise Distillation (GLwD) re-\nsulted in only minor variations when compared\nto the base model that was trained using regu-\nlar Sequence-Level distillation and Global-Level\ndistillation. Interested readers can check Chrf++\nscores in Appendix-B, Table-11, and observe that\nthey follow the same trend.\nNo matter the approach, however, the distilled\nmodel consistently underperforms the teacher, in-\ndicating the high difficulty of distilling MNMT\nmodels. Indeed, where the base model trained\nwithout distilled data was behind by 3.6BLEU,\nthe best-distilled model is behind by 3.1BLEU on\naverage. Going forward, for the ease of rapidly\nconducting large-scale experiments, we only re-\nport and discuss the results of remaining models\ntrained using Sequence-Level distillation, i.e., by\ndirectly training them on the distilled dataset.\n5.2 Analyses and Further Investigation\nWe now investigate factors that influence distil-\nlation. We analyze the quality of the distillation\ndata, the impact of different model architectures,\nand multi-stage training using High-Quality datafor further training models or with adapters with-\nout High-Quality data. These experiments can help\nus ascertain whether the poor performance of dis-\ntilled models can be remedied.\nDistilled Dataset Analysis: LaBSE cosine-\nsimilarity scores were used to assess the quality\nof translation pairs in the distilled data. The dis-\ntilled dataset was significantly better, as evidenced\nby higher mean and lower standard deviation of the\nLaBSE scores, as shown in Table-4.\nOG Distilled\nLang pair mean std dev mean std dev\nen-as 0.6460 0.2773 0.7850 0.1297\nen-bn 0.7974 0.1286 0.8446 0.0726\nen-gu 0.8007 0.1515 0.8487 0.0699\nen-hi 0.7988 0.1159 0.8524 0.0737\nen-kn 0.8129 0.1240 0.8469 0.0680\nen-ml 0.8018 0.1310 0.8432 0.0743\nen-mr 0.7886 0.1471 0.8472 0.0672\nen-or 0.8283 0.0877 0.8474 0.0666\nen-pa 0.7958 0.1383 0.8579 0.0726\nen-ta 0.7762 0.1691 0.8415 0.0771\nen-te 0.8152 0.1089 0.8448 0.0685\nTable 4: LaBSE cosine similarity scores between translation\npairs of Original and Distilled data\nImpact of Deeper vs. Shallower Models on Per-\nformance and Inference Time: Table-5 shows\nthat thinner but deeper networks perform compa-\nrably with the wider but shallower models while\nhaving fewer parameters. However, Table-6 also\nhighlights that the deeper models often suffer from\nlonger latency during inference due to the numer-\nous sequential transformations to the input in both\nthe encoder and decoder. Furthermore, we ob-\nserved diminishing returns in performance as we\nincreased the number of layers.\nImpact of extreme parameter sharing: From\nTable-5 we can see that recurrent stacking\n(hugeRS) is not particularly impactful. Note that\nthe key difference between the huge andhugeRS\nmodels is that the latter has shared layer param-\neters. 
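The parameter-sharing idea behind the hugeRS configuration can be illustrated with a small PyTorch sketch: a single encoder layer is either instantiated once and applied repeatedly, or instantiated six times independently. The dimensions below are scaled down and the code is a schematic stand-in, not the fairseq configuration that was actually trained.

```python
import torch
import torch.nn as nn

def count_params(module):
    return sum(p.numel() for p in module.parameters())

d_model, n_heads, ffn, depth = 256, 4, 1024, 6

# Standard deep encoder: six independent layers, six sets of parameters.
deep = nn.ModuleList([
    nn.TransformerEncoderLayer(d_model, n_heads, dim_feedforward=ffn, batch_first=True)
    for _ in range(depth)
])

# Recurrently stacked encoder: one layer's parameters reused at every depth step.
shared_layer = nn.TransformerEncoderLayer(d_model, n_heads, dim_feedforward=ffn,
                                          batch_first=True)

def recurrent_encode(x, layer=shared_layer, depth=depth):
    """Apply the same layer `depth` times: the input is transformed as often as in the
    deep model, so latency is similar, but only one layer's parameters are stored."""
    for _ in range(depth):
        x = layer(x)
    return x

x = torch.randn(2, 9, d_model)  # (batch, source length, model dim)
deep_out = x
for layer in deep:
    deep_out = layer(deep_out)
rs_out = recurrent_encode(x)

print("independent layers:", count_params(deep))        # roughly 6x the shared count
print("recurrently stacked:", count_params(shared_layer))
```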
(Dabre et al., 2022) showed that recur-\nrent stacking models, when trained with distilla-\ntion data, can reach the performance of the par-\nent model ( huge ), but this does not appear to be\nthe case in our setting. Note that, in our case,\nour training data is much larger than (Dabre et\nal., 2022), indicating that recurrent stacking mod-\nels might not be suitable here. Next, the infer-\nence time for hugeRSis almost the same as its\nhuge counterpart because the input is still trans-\nformed the same number of times, but just using\nthe same layer. Comparing with the deeper base\nmodels ( base12L ,base18L ,base24L ), increasing\nthe width of models increases parameters but re-\nsults in only a slight increase in inference times,\nunlike increasing the depth of the network.\nLang huge RS base 12L base 18L base 24L\nas 19.2 21.6 23.3 22.9\nbn 27.9 29.8 30.9 31.1\ngu 30.4 32.5 33.9 33.9\nhi 34.1 36.0 36.6 36.2\nkn 25.4 27.0 28.3 28.0\nml 26.7 29.3 29.8 30.5\nmr 26.7 29.5 30.4 30.6\nor 25.4 28.3 29.5 29.6\npa 31.2 33.0 34.0 34.2\nta 24.6 26.3 27.4 27.9\nte 29.6 31.4 33.0 33.0\nAvg 27.4 29.5 30.6 30.7\nTable 5: Performance of models with varying depth\nMulti-stage training: The rationale behind High-\nQuality data fine-tuning is that it enables the model\nto relearn the richer set of examples and disregard\nthe previously noisy examples, which hurt the per-\nformance. We observed that the performance of\nthe model improves with fine-tuning5an existing\ndistilled model with HQ data (see Table-7). The\nmaximum improvement was observed for the Re-\ncurrent Stacked model, which showed the weakest\nperformance thus far, given its size. Note the im-\nprovement of the base model from 28.1(SLD in\nTable 3) to 28.4, by 0.3BLEU. The previous gap\nbetween the parent (IndicTrans; huge ) and base\nmodel was 3.3, and it has now come down to 3.0,\nindicating that the gap can be overcome, but that\nmultilingual model compression is still very chal-\nlenging.\nThe increments resulting from High-Quality\nfine-tuning were averaged across multiple models\nand languages, and the findings are presented in\nFigure-3. It is observed in Figure-3 that multi-\nstage training had the least effect on high-resource\nlanguages such as Bengali and Hindi since the\nmodel well learned these languages due to the\nample amount of training data available. Con-\nversely, low-resource languages, such as Odia\nand Assamese, benefited from multi-stage train-\ning. Our analysis showed that Malayalam expe-\nrienced the most significant improvement with HQ\nfine-tuning.\n5For optimal fine-tuning, it is recommended to use a lower\nlearning rate ( 3e-5) and a smaller batch size ( 24K).Lang base base 12Lbase 18Lbase 24Lbig huge RShuge\nas 8.3 15.7 19.4 25.9 9.4 9.9 15.8\nbn 7.8 13.1 18.8 23.7 8.6 9.2 8.8\ngu 8.9 13.4 18.2 25.6 8.4 9.1 9.9\nhi 8.8 13.0 18.4 24.2 10.7 9.3 8.7\nkn 12.4 13.1 18.5 23.6 9.8 9.1 9.0\nml 8.7 13.8 20.7 26.2 9.7 9.0 9.0\nmr 9.1 12.9 18.0 24.4 8.9 9.2 8.9\nor 9.2 13.7 20.9 24.3 9.3 9.4 9.0\npa 8.9 13.7 19.3 24.7 8.9 9.2 9.0\nta 8.4 13.4 20.3 23.8 8.7 9.8 9.4\nte 8.0 13.0 20.1 26.1 8.6 10.2 9.0\nAvg 9.0 13.5 19.4 24.8 9.2 9.4 9.7\nTable 6: Inference time per language (in seconds) with a\nbatch size of 64on the Flores101 test set ( 1012 sentences\nper language). 
As seen from the above table, base24L has the\nhighest latency due to the highest number of layers in the en-\ncoder and decoder.\nLang base base 12Lbase 18Lbase 24Lbig huge RS\nas 0.6 0.7 0.3 0.3 -0.1 1.2\nbn 0.2 0.5 0.3 0.5 -0.1 0.7\ngu 0.6 0.6 0.1 0.2 0.4 1.1\nhi 0.2 0.1 0.2 0.4 0.0 1.0\nkn 0.3 0.6 0.2 0.5 0.2 0.8\nml 0.5 0.6 0.8 0.6 0.4 1.4\nmr 0.0 0.5 0.4 0.3 0.7 1.2\nor 0.5 0.6 -0.2 0.3 0.9 1.3\npa 0.3 0.3 0.4 0.6 -0.2 1.0\nta 0.2 0.6 0.1 0.2 0.3 0.8\nte 0.2 0.4 0.5 0.5 0.4 0.6\nAvg 0.3 0.5 0.3 0.4 0.3 1.0\nTable 7: Multistage training improvements. Once again,\nall these models were trained and fine-tuned on the distilled\ndataset. The absolute scores, i.e., score of model trained\non the distilled data + the increment by fine-tuning on HQ-\ndistilled data is available in Table-14 of Appendix-B\nAdapters: Adapters were introduced on top of the\ndistilled base model for each language and promi-\nnent language families, such as Eastern Indo-\nAryan (Assamese-Bengali-Odiya), Western Indo-\nAryan (Hindi-Gujarati-Punjabi-Marathi), and Dra-\nvidian (Kannada-Malayalam-Tamil-Telugu). No-\ntably, these adapters were again fine-tuned on the\nunfiltered distilled dataset. As presented in Table-\n8, the outcomes revealed that the language-wise\nand language-family adapters exhibited minimal\nor no improvement in the given setting. This lack\nof improvement could be attributed to the inad-\nequacy of the added parameters in learning new\nrepresentations from languages to enhance per-\nformance. Language-wise adapters outperformed\nlanguage-family adapters since high-resource lan-\nguages dominate the low-resource ones when\nbuilding language families. In other words, when\nFigure 3: Top: Comparative bar plot of improvements due to\nHQ fine-tuning averaged over various languages vs. Model\nBottom: Comparative bar plot of improvements due to HQ\nfine-tuning averaged over various models vs. Language\nworking with adapters, their limited capacity can\nonly handle limited data. Although we do not\nshow it, given our positive results with High-\nQuality data, we expect that fine-tuning on the\nsame might lead to higher improvements. The\nspecific hyperparameters used for language-wise\nand language-family adapters can be found in\nAppendix-A Table-10.\nLang base LW LF\nas 19.7 21.0 20.6\nbn 28.8 28.8 29.2\ngu 30.6 30.8 30.8\nhi 34.1 34.4 34.2\nkn 26.1 26.1 26.1\nml 28.2 28.2 27.9\nmr 28.1 28.0 27.7\nor 26.8 26.7 27.2\npa 31.2 31.3 31.2\nta 25.1 25.0 25.1\nte 30.4 30.7 30.4\nAvg 28.1 28.3 28.1\nTable 8: Results of language-wise (LW) and language-family\n(LF) adapter fine-tuning of base SLD model.\n5.3 Key Takeaways and Recommendations\nWe have the following lessons:1.The use of active learning techniques produced\ncomparable results, and no single approach stood\nout as the best. Batch-Level distillation exhibited\nthe strongest numerical performance, but the im-\nprovements were statistically insignificant.\n2.Multiple metrics should be used to evaluate\ntranslations. Paraphrases of the target did not score\nwell in BLEU but were rated highly with Chrf++.\n3.Multistage training, involving complete dataset\ntraining followed by fine-tuning on a High-Quality\nfraction, improves model performance. 
To main-\ntain consistent distribution, the proportions of\ntranslation pairs from each language should be\nsimilar during data filtering, and the length distri-\nbution should resemble the original dataset.\n4.The use of adapters did not improve model\nperformance, attributed to insufficient parameter-\nization. With learning rate and batch size tuning,\nequal language family proportions should be main-\ntained during multilingual adapter fine-tuning.\n5.Narrower but deeper models can achieve com-\nparable performance to wider but shallower mod-\nels, despite having fewer parameters. Increasing\ndepth by adding layers can lead to diminishing re-\nturns with increasing inference latency.\n6. Recurrently-stacked networks, despite their\npromise, do not deliver in multilingual settings like\nours with low to high-resource languages. How-\never, multi-stage training is recommended for such\nmodels and, generally, for lower-parameter ones.\n6 Conclusion and Future Work\nIn this paper we have empirically studied the com-\npression of MNMT models, taking Indic to En-\nglish translation as a case study, and explored the\neffectiveness of prominent knowledge distillation\napproaches. We have also studied the impact of\nmodel size, parameter sharing, multi-stage train-\ning, and quality of training data. We confirm the\nhigh difficulty of this task but make several rec-\nommendations that we expect will benefit practi-\ntioners. Having noted the positive impact of High-\nQuality data, we will explore this aspect in further\ndetail in the future. We will also expand to MNMT\nmodels focusing on other language groups. Fi-\nnally, the impact of post-training quantization ap-\nproaches and low-precision decoding will also be\ninvestigated.\n7 Acknowledgements\nWe sincerely thank Prof. Mitesh Khapra and Pran-\njal Agadh Chitale for their valuable insights and\ncomments on the paper. We also extend our ap-\npreciation to the Center for Development of Ad-\nvanced Computing6(CDAC) for providing us with\nthe necessary computing resources to conduct our\nexperiments.\nReferences\n[Aharoni et al.2019] Aharoni, Roee, Melvin Johnson,\nand Orhan Firat. 2019. Massively multilingual neu-\nral machine translation. In Proceedings of the 2019\nConference of the North American Chapter of the\nAssociation for Computational Linguistics: Human\nLanguage Technologies, Volume 1 (Long and Short\nPapers) , pages 3874–3884, Minneapolis, Minnesota,\nJune. Association for Computational Linguistics.\n[Ba et al.2016] Ba, Jimmy Lei, Jamie Ryan Kiros, and\nGeoffrey E. Hinton. 2016. Layer normalization.\n[Bahdanau et al.2015] Bahdanau, Dzmitry, Kyunghyun\nCho, and Yoshua Bengio. 2015. Neural machine\ntranslation by jointly learning to align and translate.\nIn Bengio, Yoshua and Yann LeCun, editors, 3rd\nInternational Conference on Learning Representa-\ntions, ICLR 2015, San Diego, CA, USA, May 7-9,\n2015, Conference Track Proceedings .\n[Bapna and Firat2019] Bapna, Ankur and Orhan Firat.\n2019. Simple, scalable adaptation for neural ma-\nchine translation. In Proceedings of the 2019 Con-\nference on Empirical Methods in Natural Language\nProcessing and the 9th International Joint Confer-\nence on Natural Language Processing (EMNLP-\nIJCNLP) , pages 1538–1548, Hong Kong, China,\nNovember. Association for Computational Linguis-\ntics.\n[Behnke and Heafield2020] Behnke, Maximiliana and\nKenneth Heafield. 2020. Losing heads in the lot-\ntery: Pruning transformer attention in neural ma-\nchine translation. 
In Proceedings of the 2020 Con-\nference on Empirical Methods in Natural Language\nProcessing (EMNLP) , pages 2664–2674, Online,\nNovember. Association for Computational Linguis-\ntics.\n[Behnke et al.2021] Behnke, Maximiliana, Nikolay Bo-\ngoychev, Alham Fikri Aji, Kenneth Heafield,\nGraeme Nail, Qianqian Zhu, Svetlana Tchistiakova,\nJelmer van der Linde, Pinzhen Chen, Sidharth\nKashyap, and Roman Grundkiewicz. 2021. Effi-\ncient machine translation with model pruning and\nquantization. In Proceedings of the Sixth Confer-\nence on Machine Translation , pages 775–780, On-\nline, November. Association for Computational Lin-\nguistics.\n6https://www.cdac.in/index.aspx?id=print_\npage&print=PN[Bucilu ˇa et al.2006] Bucilu ˇa, Cristian, Rich Caruana,\nand Alexandru Niculescu-Mizil. 2006. Model com-\npression. In Proceedings of the 12th ACM SIGKDD\nInternational Conference on Knowledge Discovery\nand Data Mining , KDD ’06, page 535–541, New\nYork, NY , USA. Association for Computing Machin-\nery.\n[Chronopoulou et al.2022] Chronopoulou, Alexandra,\nDario Stojanovski, and Alexander Fraser. 2022.\nLanguage-family adapters for multilingual neural\nmachine translation.\n[Dabre and Fujita2019] Dabre, Raj and Atsushi Fujita.\n2019. Recurrent stacking of layers for compact\nneural machine translation models. Proceedings\nof the AAAI Conference on Artificial Intelligence ,\n33(01):6292–6299, Jul.\n[Dabre and Fujita2020] Dabre, Raj and Atsushi Fujita.\n2020. Combining sequence distillation and trans-\nfer learning for efficient low-resource neural ma-\nchine translation models. In Proceedings of the Fifth\nConference on Machine Translation , pages 492–502,\nOnline, November. Association for Computational\nLinguistics.\n[Dabre et al.2020] Dabre, Raj, Chenhui Chu, and\nAnoop Kunchukuttan. 2020. Multilingual neural\nmachine translation. In Proceedings of the 28th In-\nternational Conference on Computational Linguis-\ntics: Tutorial Abstracts , pages 16–21, Barcelona,\nSpain (Online), December. International Committee\nfor Computational Linguistics.\n[Dabre et al.2022] Dabre, Raj, Himani Shrotriya,\nAnoop Kunchukuttan, Ratish Puduppully, Mitesh\nKhapra, and Pratyush Kumar. 2022. IndicBART:\nA pre-trained model for indic natural language\ngeneration. In Findings of the Association for\nComputational Linguistics: ACL 2022 , pages\n1849–1863, Dublin, Ireland, May. Association for\nComputational Linguistics.\n[Diddee et al.2022] Diddee, Harshita, Sandipan Danda-\npat, Monojit Choudhury, Tanuja Ganu, and Kalika\nBali. 2022. Too brittle to touch: Comparing the\nstability of quantization and distillation towards de-\nveloping low-resource MT models. In Proceedings\nof the Seventh Conference on Machine Translation\n(WMT) , pages 870–885, Abu Dhabi, United Arab\nEmirates (Hybrid), December. Association for Com-\nputational Linguistics.\n[Do and Lee2022] Do, Heejin and Gary Geunbae Lee.\n2022. Target-oriented knowledge distillation with\nlanguage-family-based grouping for multilingual\nnmt. ACM Trans. Asian Low-Resour. Lang. Inf. Pro-\ncess. , jun. Just Accepted.\n[Feng et al.2022] Feng, Fangxiaoyu, Yinfei Yang,\nDaniel Cer, Naveen Arivazhagan, and Wei Wang.\n2022. Language-agnostic BERT sentence embed-\nding. In Proceedings of the 60th Annual Meeting\nof the Association for Computational Linguistics\n(Volume 1: Long Papers) , pages 878–891, Dublin,\nIreland, May. Association for Computational\nLinguistics.\n[Firat et al.2016] Firat, Orhan, Kyunghyun Cho, and\nYoshua Bengio. 2016. 
Multi-way, multilingual neu-\nral machine translation with a shared attention mech-\nanism. In Proceedings of the 2016 Conference of the\nNorth American Chapter of the Association for Com-\nputational Linguistics: Human Language Technolo-\ngies, pages 866–875, San Diego, California, June.\nAssociation for Computational Linguistics.\n[Goyal et al.2022] Goyal, Naman, Cynthia Gao,\nVishrav Chaudhary, Peng-Jen Chen, Guillaume\nWenzek, Da Ju, Sanjana Krishnan, Marc’Aurelio\nRanzato, Francisco Guzm ´an, and Angela Fan.\n2022. The Flores-101 evaluation benchmark for\nlow-resource and multilingual machine translation.\nTransactions of the Association for Computational\nLinguistics , 10:522–538.\n[Hendrycks and Gimpel2016] Hendrycks, Dan and\nKevin Gimpel. 2016. Gaussian error linear units\n(gelus).\n[Hinton et al.2015] Hinton, Geoffrey, Oriol Vinyals,\nand Jeff Dean. 2015. Distilling the knowledge in\na neural network.\n[Houlsby et al.2019] Houlsby, Neil, Andrei Giurgiu,\nStanislaw Jastrzebski, Bruna Morrone, Quentin\nDe Laroussilhe, Andrea Gesmundo, Mona Attariyan,\nand Sylvain Gelly. 2019. Parameter-efficient trans-\nfer learning for NLP. In Chaudhuri, Kamalika and\nRuslan Salakhutdinov, editors, Proceedings of the\n36th International Conference on Machine Learn-\ning, volume 97 of Proceedings of Machine Learning\nResearch , pages 2790–2799. PMLR, 09–15 Jun.\n[Kim and Rush2016] Kim, Yoon and Alexander M.\nRush. 2016. Sequence-level knowledge distillation.\nInProceedings of the 2016 Conference on Empiri-\ncal Methods in Natural Language Processing , pages\n1317–1327, Austin, Texas, November. Association\nfor Computational Linguistics.\n[Ott et al.2019] Ott, Myle, Sergey Edunov, Alexei\nBaevski, Angela Fan, Sam Gross, Nathan Ng, David\nGrangier, and Michael Auli. 2019. fairseq: A\nfast, extensible toolkit for sequence modeling. In\nProceedings of the 2019 Conference of the North\nAmerican Chapter of the Association for Compu-\ntational Linguistics (Demonstrations) , pages 48–53,\nMinneapolis, Minnesota, June. Association for Com-\nputational Linguistics.\n[Papineni et al.2002] Papineni, Kishore, Salim Roukos,\nTodd Ward, and Wei-Jing Zhu. 2002. Bleu: a\nmethod for automatic evaluation of machine trans-\nlation. In Proceedings of the 40th Annual Meet-\ning of the Association for Computational Linguistics ,\npages 311–318, Philadelphia, Pennsylvania, USA,\nJuly. Association for Computational Linguistics.[Popovi ´c2017] Popovi ´c, Maja. 2017. chrF++: words\nhelping character n-grams. In Proceedings of the\nSecond Conference on Machine Translation , pages\n612–618, Copenhagen, Denmark, September. Asso-\nciation for Computational Linguistics.\n[Ramesh et al.2022] Ramesh, Gowtham, Sumanth Dod-\ndapaneni, Aravinth Bheemaraj, Mayank Jobanpu-\ntra, Raghavan AK, Ajitesh Sharma, Sujit Sahoo,\nHarshita Diddee, Mahalakshmi J, Divyanshu Kak-\nwani, Navneet Kumar, Aswin Pradeep, Srihari Na-\ngaraj, Kumar Deepak, Vivek Raghavan, Anoop\nKunchukuttan, Pratyush Kumar, and Mitesh Shan-\ntadevi Khapra. 2022. Samanantar: The largest pub-\nlicly available parallel corpora collection for 11 indic\nlanguages. Transactions of the Association for Com-\nputational Linguistics , 10:145–162.\n[Tan et al.2019] Tan, Xu, Yi Ren, Di He, Tao Qin, and\nTie-Yan Liu. 2019. Multilingual neural machine\ntranslation with knowledge distillation. 
In Interna-\ntional Conference on Learning Representations .\n[Tay et al.2022] Tay, Yi, Mostafa Dehghani, Jinfeng\nRao, William Fedus, Samira Abnar, Hyung Won\nChung, Sharan Narang, Dani Yogatama, Ashish\nVaswani, and Donald Metzler. 2022. Scale ef-\nficiently: Insights from pretraining and finetuning\ntransformers. In International Conference on Learn-\ning Representations .\n[Vaswani et al.2017] Vaswani, Ashish, Noam Shazeer,\nNiki Parmar, Jakob Uszkoreit, Llion Jones, Aidan N\nGomez, Ł ukasz Kaiser, and Illia Polosukhin. 2017.\nAttention is all you need. In Guyon, I., U. V on\nLuxburg, S. Bengio, H. Wallach, R. Fergus, S. Vish-\nwanathan, and R. Garnett, editors, Advances in Neu-\nral Information Processing Systems , volume 30. Cur-\nran Associates, Inc.\n[Wang et al.2020] Wang, Ziheng, Jeremy Wohlwend,\nand Tao Lei. 2020. Structured pruning of large\nlanguage models. In Proceedings of the 2020 Con-\nference on Empirical Methods in Natural Language\nProcessing (EMNLP) , pages 6151–6162, Online,\nNovember. Association for Computational Linguis-\ntics.\n[Wang et al.2021] Wang, Fusheng, Jianhao Yan, Fan-\ndong Meng, and Jie Zhou. 2021. Selective knowl-\nedge distillation for neural machine translation. In\nProceedings of the 59th Annual Meeting of the Asso-\nciation for Computational Linguistics and the 11th\nInternational Joint Conference on Natural Language\nProcessing (Volume 1: Long Papers) , pages 6456–\n6466, Online, August. Association for Computa-\ntional Linguistics.\n[Zhang et al.2019] Zhang, Wen, Yang Feng, Fandong\nMeng, Di You, and Qun Liu. 2019. Bridging the\ngap between training and inference for neural ma-\nchine translation. In Proceedings of the 57th Annual\nMeeting of the Association for Computational Lin-\nguistics , pages 4334–4343, Florence, Italy, July. As-\nsociation for Computational Linguistics.\nA Hyperparameter Details\nHyperparameter Value\nGlobal Batch size 64K\nDropout 0.2\nLabel smoothing 0.1\nGradient clipnorm 1.0\nEarly-stopping patience 5\nOptimizer Adam\nAdam betas (0.9,0.98)\nlearning rate 5e-4\nlrscheduler inverse-sqrt decay\nWarmup steps 4000\nTable 9: Hyperparameters employed for training the student\nmodels, identical to those used for training IndicTrans\nHyperparameter LW LF\nGlobal Batch size 2K (as), 8K 24K\nAdapter Dropout 0.1 0 .1\nAdapter Activation GELU GELU\nAdapter Bottleneck 256 256\nlearning rate 1e-3 1 e-3\nWarmup steps 1000 (as),2000 (gu),1600 (or),4000 4000\nTable 10: Hyperparameters employed for Adapter fine-\ntuning. Note that, the rest of the model hyperparameters are\nthe same as in Table-9\nB Additional Analysis\nThis section presents the remaining Chrf++ results\nfor Distillation techniques, Adapter fine-tuning,\nWidth-vs-Height Analysis, and Multistage train-\ning.\nLang OG base IT SLD W+S LD BL GL GLwD\nas 43.0 48.2 44.8 44.9 45.5 45.2 45.1\nbn 54.6 56.9 54.7 54.6 55.0 54.3 54.6\ngu 55.9 58.7 56.2 56.8 56.9 56.6 56.5\nhi 58.9 61.3 58.7 59.0 59.3 59.0 59.0\nkn 51.4 54.6 52.2 52.1 52.2 52.1 52.2\nml 53.6 57.2 54.3 54.3 54.6 53.9 54.4\nmr 53.2 56.4 54.0 53.9 54.2 53.7 53.6\nor 52.2 55.5 53.0 53.2 52.9 53 52.8\npa 56.2 60.0 56.4 56.7 56.9 56.8 56.7\nta 51.1 54.1 51.1 51.1 51.3 51.2 51.3\nte 55.3 58.2 55.7 55.9 55.7 55.8 55.8\nAvg 53.2 56.5 53.7 53.9 54.0 53.8 53.8\nTable 11: Chrf++ scores of base model distilled with various\ndistillation techniques. 
Note that the IndicTrans (IT) scores in\nthe first column are for reference.Lang base LW LF\nas 45.8 45.6 45.1\nbn 54.7 54.7 54.9\ngu 56.2 56.4 56.3\nhi 58.7 58.8 58.7\nkn 52.2 52.4 52.2\nml 54.3 54.2 54.1\nmr 54.0 53.8 53.7\nor 53.0 52.7 53.0\npa 56.4 56.3 56.2\nta 51.1 50.9 50.8\nte 55.7 55.9 55.6\nAvg 53.7 53.8 53.7\nTable 12: Chrf++ Results of language-wise (LW) and\nlanguage-family (LF) adapter fine-tuning of base SLD model.\nLang huge RS base 12L base 18L base 24L\nas 42.9 46.6 48.0 47.9\nbn 52.9 55.4 56.3 56.4\ngu 55.2 58.0 58.6 58.8\nhi 58.4 60.1 60.5 60.3\nkn 51.2 53.2 54.1 54.1\nml 52.5 55.4 55.8 56.3\nmr 52.0 55.1 55.9 56.2\nor 50.7 54.3 55.3 55.5\npa 56.1 58.1 58.7 59.0\nta 50.1 52.3 53.1 53.5\nte 54.2 56.6 57.7 57.9\nAvg 52.4 55.0 55.8 56.0\nTable 13: Chrf++ scores for Width-vs-Height analysis\nLang base base 12Lbase 18Lbase 24Lbig huge RS\nas 20.3 22.3 23.6 23.2 23.3 20.4\nbn 29.0 30.3 31.2 31.6 31.1 28.6\ngu 31.2 33.1 34.0 34.1 34.2 31.5\nhi 34.3 36.1 36.8 36.6 36.5 35.1\nkn 26.4 27.6 28.5 28.5 28.1 26.2\nml 28.7 29.9 30.6 31.1 30.6 28.1\nmr 28.1 30.0 30.8 30.9 31.2 27.9\nor 27.3 28.9 29.3 29.9 30.1 26.7\npa 31.5 33.3 34.4 34.8 34.3 32.2\nta 25.3 26.9 27.5 28.1 27.7 25.4\nte 30.6 31.8 33.5 33.5 33.3 30.2\nAvg 28.4 30.0 30.9 31.1 30.9 28.4\nTable 14: Absolute BLEU scores obtained by Multi-stage\ntraining.\nLang base base 12L base18L base24L big huge RS\nas 45.5 (0.7) 47.5 (0.9) 48.7 (0.7) 48.5 (0.6) 48.2 (0.1) 44.3 (1.4)\nbn 55.0 (0.3) 55.9 (0.5) 56.6 (0.3) 56.8 (0.4) 56.6 (0.2) 54.1 (1.2)\ngu 56.9 (0.7) 58.4 (0.4) 59.0 (0.4) 59.1 (0.3) 58.9 (0.5) 56.5 (1.3)\nhi 59.1 (0.4) 60.2 (0.1) 60.8 (0.3) 60.7 (0.4) 60.8 (0.4) 59.4 (1.0)\nkn 52.5 (0.3) 53.7 (0.5) 54.5 (0.4) 54.7 (0.6) 54.1 (0.3) 52.2 (1.0)\nml 54.9 (0.6) 56.1 (0.7) 56.6 (0.8) 57.0 (0.7) 56.8 (0.7) 54.0 (1.5)\nmr 54.3 (0.3) 55.9 (0.8) 56.4 (0.5) 56.6 (0.4) 56.7 (0.5) 53.6 (1.6)\nor 53.4 (0.4) 55.0 (0.7) 55.5 (0.2) 55.9 (0.4) 55.8 (0.9) 52.6 (1.9)\npa 56.9 (0.5) 58.3 (0.2) 59.2 (0.5) 59.6 (0.6) 59.2 (0.3) 57.2 (1.1)\nta 51.4 (0.3) 52.8 (0.5) 53.3 (0.2) 54.0 (0.5) 53.6 (0.4) 51.2 (1.1)\nte 56.1 (0.4) 57.2 (0.6) 58.1 (0.4) 58.4 (0.5) 58.1 (0.5) 55.2 (1.0)\nAvg 54.1 (0.5) 55.5 (0.5) 56.2 (0.4) 56.5 (0.5) 56.2 (0.4) 53.7 (1.3)\nTable 15: Multistage training Chrf++ results. The bracketed\nnumber denotes the Chrf++ improvement due to High-Quality\nfine-tuning.\nC Note on Evaluation\nThis paper mainly relies on BLEU and Chrf++,\nbut lately, COMET7is becoming popular. How-\never, COMET is unavailable for most Indic lan-\nguages we study. Therefore, we leave this for fu-\nture work.\n7https://unbabel.github.io/COMET/html/\nindex.html", "main_paper_content": null }
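For reference, the bottleneck adapter used in the language-wise and language-family fine-tuning runs (Appendix Table-10: bottleneck 256, GELU activation, dropout 0.1) can be sketched as follows. The host dimension and the way the module is wired into each Transformer layer are simplified assumptions; in the actual runs only the adapter parameters are updated while the distilled base model stays frozen.

```python
import torch
import torch.nn as nn

class BottleneckAdapter(nn.Module):
    """Down-project, non-linearity, up-project, residual connection (Houlsby-style)."""

    def __init__(self, d_model=512, bottleneck=256, dropout=0.1):
        super().__init__()
        self.down = nn.Linear(d_model, bottleneck)
        self.up = nn.Linear(bottleneck, d_model)
        self.act = nn.GELU()
        self.drop = nn.Dropout(dropout)

    def forward(self, hidden):
        return hidden + self.up(self.drop(self.act(self.down(hidden))))

def add_language_adapters(model, languages, d_model=512):
    """Attach one adapter per language and freeze everything else."""
    for p in model.parameters():
        p.requires_grad = False
    adapters = nn.ModuleDict({lang: BottleneckAdapter(d_model) for lang in languages})
    return adapters  # trained with the task loss; the base model stays fixed

# Toy check of shapes and per-adapter trainable parameter count.
adapter = BottleneckAdapter()
x = torch.randn(2, 7, 512)
print(adapter(x).shape)                              # torch.Size([2, 7, 512])
print(sum(p.numel() for p in adapter.parameters()))  # about 0.26M parameters per adapter
```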
{ "decision": "Unknown", "reviews": [] }
0
0
[]
[]
{ "id": "_rZCzIs7c7S", "year": null, "venue": "EAMT 2018", "pdf_link": "https://aclanthology.org/2018.eamt-main.23.pdf", "forum_link": "https://openreview.net/forum?id=_rZCzIs7c7S", "arxiv_id": null, "doi": null }
{ "title": "Translating Short Segments with NMT: A Case Study in English-to-Hindi", "authors": [ "Shantipriya Parida", "Ondrej Bojar" ], "abstract": null, "keywords": [], "raw_extracted_content": "Translating Short Segments with NMT: A Case Study in English-to-Hindi\nShantipriya Parida Ondřej Bojar\nCharles University, Faculty of Mathematics and Physics\nInstitute of Formal and Applied Linguistics\nMalostranské náměstí 25, 118 00 Prague, Czech Republic\n{parida,bojar}@ufal.mff.cuni.cz\nAbstract\nThis paper presents a case study in trans-\nlating short image captions of the Visual\nGenome dataset from English into Hindi\nusing out-of-domain data sets of varying\nsize. We experiment with three NMT mod-\nels: the shallow and deep sequence-to-\nsequence and the Transformer model as im-\nplemented in Marian toolkit. Phrase-based\nMoses serves as the baseline.\nThe results indicate that the Transformer\nmodel outperforms others in the large data\nsetting in a number of automatic met-\nrics and manual evaluation, and it also\nproduces the fewest truncated sentences.\nTransformer training is however very sen-\nsitive to the hyperparameters, so it requires\nmore experimenting. The deep sequence-\nto-sequence model produced more flawless\noutputs in the small data setting and it was\ngenerally more stable, at the cost of more\ntraining iterations.\n1 Introduction\nIn recent years, neural machine translation (NMT)\nsystems have been gaining more popularity due\nto their improved accuracy and even more flu-\nency compared with “classical” statistical ma-\nchine translation systems such as phrase-based MT\n(PBMT), see e.g. the shared tasks of WMT and\nIWSLT (Bojar et al., 2017; Cettolo et al., 2017).\nThe major advantages of NMT include the consid-\neration of the entire sentence, capturing similarity\n© 2018 The authors. This article is licensed under a Creative\nCommons 3.0 licence, no derivative works, attribution, CC-\nBY-ND.of words, and the capacity to learn complex rela-\ntionships between languages. At the same time, it\nhas been observed that NMT is more sensitive to\nthe shortage of or noise in the parallel training data\n(Koehn and Knowles, 2017).\nOur goal is to create the Hindi version of Visual\nGenome (Krishna et al., 2017).1\nHindi, with 260 million speakers, is the fourth\nmost widely spoken language on the planet (after\nChinese, Spanish and English). Hindi is a morpho-\nlogically rich language (MRL), with e.g. the gen-\nder category being reflected in the forms of nouns,\nverbs and also adjectives (Sreelekha S and Bhat-\ntacharyya, 2017). The structural and morphologi-\ncal differences between English and Hindi result in\ntranslation difficulties (Tsarfaty et al., 2010).\nVisual Genome is a dataset of images, captions\nand relations. As such, it is potentially useful for\nmany NLP and image processing applications. The\nHindi version would allow to exploit this dataset\ne.g. to create Hindi image labellers or other practi-\ncal tools.\nThe textual part of Visual Genome consists pri-\nmarily of short sentences or noun phrases that were\nmanually attached to rectangular regions in an in-\nput image. In the current version, Visual Genome\ncontains 108K distinct images with 5.4 million\nsuch labelled regions in total. On average, an im-\nage is thus associated with 50 text segments. 
Text\nsegments can repeat across images and indeed,\nwhen de-duplicated, the set of unique strings re-\nduces to 3.15 million unique segments.\nEven with this de-duplication, this set remains\ntoo big to be translated manually. It is thus natu-\nral to attempt to translate this dataset automatically\nand in this paper, we are trying to find the best base-\n1http://visualgenome.org/P\u0013 erez-Ortiz, S\u0013 anchez-Mart\u0013 \u0010nez, Espl\u0012 a-Gomis, Popovi\u0013 c, Rico, Martins, Van den Bogaert, Forcada (eds.)\nProceedings of the 21st Annual Conference of the European Association for Machine Translation , p. 229{238\nAlacant, Spain, May 2018.\nline translation. In the future, we want to include\nalso information available in the context of each of\nthe labels: either the text descriptions of nearby re-\ngions or directly the visual information in a form\nof multi-modal translation (Matusov et al., 2017;\nCalixto et al., 2012; Huang et al., 2016).\nThe paper is organized as follows. Section 2\nreviews related work on neural MT and English-\nHindi translation. Section 3 describes our experi-\nmental setting: data, models and their parameters.\nSection 4 provides automatic and manual evalua-\ntion of the translations and Section 5 discusses the\nresults in closer detail. We conclude in Section 6.\n2 Related Work\nSingh et al. (2017) have compared two neural ma-\nchine translation models, convolutional sequence\nto sequence (ConvS2S) and recurrent sequence to\nsequence (RNNS2S) for English ↔Hindi machine\ntranslation task. They have used the IITB corpus\nfor training (see Section 3.1) and also for devel-\nopment and test data. The RNNS2S model was\ntrained using Nematus (Sennrich et al., 2017) and\nConvS2S using Fairseq (Gehring et al., 2017), an\nopen source library developed by Facebook. In\ntheir evaluation, ConvS2S was better when tar-\ngetting English (BLEU scores: RNNS2S: 11.55,\nConvS2S: 13.76) but RNNS2S was better when\ntargetting Hindi (BLEU scores: RNNS2S: 12.23,\nConvS2S: 11.73). As our experiment scope is lim-\nited to English to Hindi translation, we have not\ntried the ConvS2S.\nWang et al. (2017) use the encoder-decoder\nframework with attention (Bahdanau et al., 2015)\nfor their submission to the Workshop on Asian\nTranslation (WAT) 2017 shared task and observe\nconsiderable gains for English-to-Hindi compared\nto PBMT. Similarly to other works, they benefit\nfrom subword units (Sennrich et al., 2016a) and\nback-translation (Sennrich et al., 2016b), as well\nas model ensembling.\nAgrawal and Misra Sharma (2017) evaluate\nEnglish-Hindi translation quality using several\nvariants of RNN-based neural network architecture\nand basic units (LSTMs, Hochreiter and Schmid-\nhuber, 1997, and GRUs, Cho et al., 2014b), in-\ncluding the attention mechanism by Bahdanau et al.\n(2015) and more layers in the encoder and decoder.\nThe bi-directional LSTM model with four layers\nand attention performs best.\nThe early models of NMT have suffered from\nTraining\nRed\nFigure 1: Overall experimental setting.\nlower translation quality for long sentences, see\ne.g. Cho et al. (2014a) and Bahdanau et al. (2015).\nA recent experiment by Beyer et al. (2017) has\nhowever suggested that NMT can perform worse\nthan PBMT also for short segments (insignifi-\ncantly). 
It is thus natural to evaluate the effect in\nour particular setting.\nWe note that monolingual data plays an impor-\ntant role in boosting the performance of the trans-\nlation in both PBMT (Brants et al., 2007; Bojar and\nTamchyna, 2011) and NMT (Sennrich et al., 2016b;\nDomhan and Hieber, 2017). We leave these exper-\niments for future work because we would first need\nto find or select Hindi texts closely matching to the\ndomain of Visual Genome texts.\n3 Experiments\nThe overall framework of our work is shown in\nFigure 1. The targeted dataset is English text de-\nscriptions from Visual Genome but no similar or re-\nlated data is available in Hindi. So far, we thus used\nVisual Genome only to select the development and\nthe test set.\nWe experimented with two parallel corpora as\nour training data, HindEnCorp and IITB Corpus\n(see Section 3.1), three NMT models and the\nPBMT baseline (Section 3.2).\nWe used the experiment management tool Eman\n(Bojar and Tamchyna, 2013)2for organizing and\nrunning the experiments.\n2http://ufal.mff.cuni.cz/eman230\nSet #Sentences #Tokens\nEn Hi\nTrain (HindEnCorp) 273.9k 3.8M 5.6M\nTrain (IITB) 1492.8k 20.8M 31.4M\nDev (Visual Genome) 898 4519 6219\nTest (Visual Genome) 1000 4909 6918\nTable 1: Statistics of our data.\n3.1 Dataset Description\nThis section describes the processing and usage\nof the training and development data. We have\nused HindEnCorp (Bojar et al., 2014) as the train-\ning dataset which contains 274k parallel sen-\ntences. Additionally, we have explored the very re-\ncent “IIT Bombay English-Hindi Parallel Corpus”\n(Kunchukuttan et al., 2018) which is supposedly\nthe largest publicly available English-Hindi paral-\nlel corpus. This corpus contain 1.49 million paral-\nlel segments and it includes HindEnCorp.\nThe development and test sentences were ex-\ntracted from the Visual Genome. The original\ndataset contains images and their region annota-\ntions and several other formally captured types of\ninformation (objects, attributes, relationships, re-\ngion graphs, scene graphs and question answer\npairs). We built our dataset by extracting only the\nregion descriptions, which are generally short sen-\ntences or phrases. We selected the development\nand test segments randomly and prepared the corre-\nsponding Hindi translation by manually correcting\nGoogle Translate outputs.\nThe training and test sets sizes are shown in Ta-\nble 1. Note that the token counts considerably dif-\nfer from those reported in the corpus descriptions.\nHere we report the token counts as obtained by the\nMoses tokenizer and used in all our experiments.\n3.2 MT Models Tested\nOne of the current most efficient NMT toolkits is\nMarian3(Junczys-Dowmunt et al., 2016), which\nis a pure C++ implementation of several popular\nNMT models. All our experiments thus use Mar-\nian models.\n3.2.1 Marian’s nematus Model (Bi-RNN)\nThe common baseline NMT architecture is\nthe (shallow) attentional encoder-decoder of Bah-\ndanau et al. (2015). 
A particularly popular imple-\nmentation of this model is available in the Nematus\ntoolkit (Sennrich et al., 2017),4which adds some\n3http://github.com/marian-nmt/marian\n4http://github.com/EdinburghNLP/nematusParameter Bi-RNN S2S Transformer\nbeam-size 12 12 12\ndec-cell gru lstm –\ndec-cell-base-depth 2 4 –\ndec-cell-high-depth 1 2 –\ndec-depth 1 4 6\ndecay-inv – – 16000\ndim-emb 512 512 512\ndim-rnn 1024 1024 1024\ndropout-rnn 0.2 0.2 –\ndropout-src 0.1 0.1 –\ndropout-trg 0.1 0.1 –\nearly-stopping 10 – –\nenc-cell gru lstm –\nenc-cell-depth 1 2 –\nenc-depth 1 4 6\nenc-type bidirectional alternating –\nexponential-smoothing – 0.0001 –\nheads – – 8\nlabel-smoothing – – 0.1\nlearning-rate 0.0001 0.0001 0.0003\nmax-length 50 50 100\nnormalize – – 0.6\noptimizer adam adam adam\ntransformer-dim-ffn – – 2048\ntransformer-dropout – – 0.1\ntransformer-dropout-attention – – 0\ntransformer-postprocess – – dhn\nwarm-up – – 16000\nTable 2: Model configurations.\nimplementation differences such as a different ini-\ntial hidden state, a different RNN cell and several\nothers.\nMarian implements both the training and in-\nference with the Nematus (Sennrich et al., 2017)\nmodel and in fact, it can load models trained by the\noriginal Nematus.\nWe call this setup “Bi-RNN” in the following\nand use it only in shallow (depth 1) setting.\n3.2.2 Marian’s Sequence-to-Sequence ( s2s)\nModel\nA more advanced variation of the RNN-based\nmodel allows to use deeper layers in both decoder\nand encoder and it also differs from the original\nNematus model in several features, such as a dif-\nferent layer normalization (Sennrich et al., 2017;\nJunczys-Dowmunt and Grundkiewicz, 2017).\nWe denote this model “S2S” in the following and\nuse it only in the deep (depth 4) setting.\n3.2.3 Marian’s transformer Model\nThe Transformer model (Vaswani et al., 2017)\nhas been recently proposed to avoid the expensive\ntraining of RNNs, relying on the attention mecha-\nnism.\nAs explored by Popel and Bojar (2018) with the231\n010203040\n30000 60000 900001 2 3 4 5 6 7 8\nHindEnCorp NMT IterationsHindEnCorp MERT Iterations\n010203040\n30000 60000 90000 1200001 2 3 4 5 6 7\nIITB NMT IterationsIITB MERT Iterations\nBi-RNN\nS2S\nTransformer\nPBMT\nFigure 2: Learning curves in terms of BLEU on dev set. The big black dots indicate which iteration was used for test set\ntranslation and evaluation.\noriginal Google implementation,5the model can\nbe more difficult to train but it will likely outper-\nform other architectures in both training time and\nfinal translation quality. Indeed, we needed to try\n9 different configuration settings for Transformer\nbefore we got any reasonable performance, com-\npared to just 3 for S2S and 1 for Bi-RNN.\nMarian’s implementation should be fully com-\npatible with the original Google one.\nThe configuration parameters used for training\nof the models are shown in Table 2.\n3.2.4 Common Settings\nIn all NMT experiments, we used the same BPE\n(Sennrich et al., 2016a), with 30k merges, joint for\nEnglish and Hindi and extracted from HindEnCorp\nonly. We also tried to extract the BPE from the re-\nspective training corpus (i.e. IITB for IITB mod-\nels) but the performance was lower, perhaps due to\ndomain differences between the corpora. 
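To make the subword setup above more concrete, the following sketch learns a single joint BPE model with 30k merge operations from the English and Hindi sides of HindEnCorp and applies it to both languages. It uses the subword-nmt package as one possible implementation; the paper does not state which tool was used, and all file paths are placeholders.

```python
# Sketch: joint English-Hindi BPE with 30k merges, learned from HindEnCorp only.
# Requires `pip install subword-nmt`; file names below are illustrative placeholders.
import codecs
from subword_nmt.learn_bpe import learn_bpe
from subword_nmt.apply_bpe import BPE

EN_TRAIN = "hindencorp.tok.en"   # tokenized English side (placeholder path)
HI_TRAIN = "hindencorp.tok.hi"   # tokenized Hindi side (placeholder path)
JOINT = "hindencorp.tok.en-hi"   # temporary concatenation of both sides
CODES = "bpe.codes.30k"          # learned joint merge operations

# 1) Concatenate both language sides so the codes are learned jointly.
with codecs.open(JOINT, "w", encoding="utf-8") as out:
    for path in (EN_TRAIN, HI_TRAIN):
        with codecs.open(path, encoding="utf-8") as f:
            out.writelines(f)

# 2) Learn 30k merge operations from the joint text.
with codecs.open(JOINT, encoding="utf-8") as infile, \
        codecs.open(CODES, "w", encoding="utf-8") as outfile:
    learn_bpe(infile, outfile, 30000)

# 3) Apply the learned codes to any corpus side (training, dev or test).
with codecs.open(CODES, encoding="utf-8") as codes:
    bpe = BPE(codes)

def segment(path_in, path_out):
    """Segment a tokenized corpus file with the joint BPE model."""
    with codecs.open(path_in, encoding="utf-8") as fin, \
            codecs.open(path_out, "w", encoding="utf-8") as fout:
        for line in fin:
            fout.write(bpe.process_line(line))

segment(EN_TRAIN, EN_TRAIN + ".bpe")
segment(HI_TRAIN, HI_TRAIN + ".bpe")
```

Learning the codes jointly on both sides means that strings shared across the two languages (numbers, names in Latin script) receive identical segmentations.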
The Hin-\ndEnCorp BPEs are thus used in all experiments re-\nported here.\n3.2.5 Moses PBMT Baseline\nFor the purposes of comparison, we also train\nMoses (Koehn et al., 2007) phrase-based MT sys-\ntem with a 5-gram LM and a lexicalized reorder-\ning model, trained with the standard MERT opti-\nmization towards BLEU. The alignment is based\non lowercase tokens, stemmed to the first 4 char-\nacters only.\n4 Results\nFigure 2 presents the learning curves for all the\nmodels evaluated on the development set using the\n5http://github.com/tensorflow/tensor2tensorBi-RNN S2S Transf. PBMTHindEnCorpBLEU 20.68 26.45 23.91 20.61\nchrF3 32.30 39.52 36.36 36.49\nnCDER 34.04 40.91 38.26 32.71\nnCharacTER 12.27 18.47 23.12 29.05\nnPER 41.76 49.05 47.01 50.40\nnTER 29.63 35.70 33.52 24.78IITB CorpusBLEU 31.78 32.81 38.31 25.06\nchrF3 42.63 44.50 51.08 43.09\nnCDER 44.49 44.91 51.78 37.54\nnCharacTER -14.76 -47.00 25.07 37.55\nnPER 51.86 52.04 59.60 55.17\nnTER 40.62 41.44 49.05 32.76\nTable 3: Results on the test set, multiplied by 100. Best model\naccording to each automatic metric in bold. Metrics with the\nprefix “n” were flipped ( 100−score) to make better scores\nhigher. The negative numbers for nCharacTER happen when\nthe original CharacTER score is over 1.\nBLEU score (Papineni et al., 2002). (PBMT train-\ning is displayed in terms of MERT iterations on the\nsecondary x axis.)\nFor NMT, we validated the model every 10000\niterations and ran the training until the cross-\nentropy has not improved for 10 consecutive val-\nidations. For each model, we selected the iteration\nwhere the highest BLEU score was reached and\ntranslated the test set with this model.\n4.1 Automatic Evaluation\nTable 3 provides automatic scores of the models in\nseveral metrics (Papineni et al., 2002; Snover et al.,\n2006; Leusch and Ney, 2008; Popović, 2015; Wang\net al., 2016).6We see that on the smaller HindEn-\n6Note that the exact scores are heavily dependent on the to-\nkenization. We collect outputs from all our system after\ndetokenization and tokenize if needed by the metric (chrF3\nand CharacTER do not expect tokenized text). We report\nthe scores when Moses tokenizer was used. Using e.g.\nthe Hindi tokenization from IndicNLP, http://github.com/\nanoopkunchukuttan/indic_nlp_library , leads to sub-232\nCorp, S2S performs best except in CharacTER and\nPER where the outputs of PBMT score best. On the\nlarger IITB Corpus, Transformer wins in all met-\nrics except again CharacTER. We suspect that the\ndifferent evaluation by CharacTER could be an ar-\ntifact of the Devanagari script used in Hindi.\nPER, position-independent error-rate, reflects\nthe overlap of exact word forms used in the ref-\nerence and the hypothesis, suggesting that PBMT\nperforms reasonably well in terms of preserving\nwords, although the fluency is probably worse.\nIt should be noted that the automatic scores can\nbe affected by the fact that our test set was created\nby manual revision of Google Translate outputs.\nThe underlying model of Google Translate is how-\never unknown. 
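For completeness, the checkpoint-selection rule applied above (validate every 10,000 iterations, stop once dev cross-entropy has not improved for 10 consecutive validations, then keep the iteration with the highest dev BLEU) can be written down compactly. The validation records below are made-up numbers purely for illustration; in practice they would be parsed from the training logs.

```python
# Sketch of the early-stopping and checkpoint-selection rule described above.
def select_checkpoint(records, patience=10):
    """records: iterable of (iteration, dev_cross_entropy, dev_bleu).
    Stop when cross-entropy has not improved for `patience` consecutive
    validations; return the iteration with the highest dev BLEU seen so far."""
    best_ce = float("inf")
    stalled = 0
    seen = []
    for it, ce, bleu in records:
        seen.append((it, bleu))
        if ce < best_ce:
            best_ce, stalled = ce, 0
        else:
            stalled += 1
            if stalled >= patience:
                break
    return max(seen, key=lambda x: x[1])   # (best iteration, its dev BLEU)

# toy validation log: (iteration, dev cross-entropy, dev BLEU)
log = [(10000, 55.1, 18.2), (20000, 52.3, 21.0), (30000, 51.8, 22.4),
       (40000, 51.9, 23.1), (50000, 52.0, 22.9)]
print(select_checkpoint(log, patience=10))   # -> (40000, 23.1)
```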
Also, we have only one reference\ntranslation and it is well known that with more ref-\nerence translations, automatic evaluations are more\nreliable (Finch et al., 2004; Bojar et al., 2013).\n4.2 Manual Evaluation\nTo validate the automatic scoring, we manually an-\nnotated 100 randomly selected segments as trans-\nlated by the NMT models.7\nIn this annotation, each annotated segment gets\nexactly one label from the following set:\nFlawless for translations without any error (type-\nsetting issues with diacritic marks due to dif-\nferent tokenization are ignored),\nGood for translations which are generally OK and\ncomplete but need a small correction,\nPartly Correct for cases where a part of the seg-\nment is correct but some words are mis-\ntranslated,\nAmbiguity for segments where the MT system\n“misunderstood” a word’s meaning, and\nIncomplete for segments that run well but stop too\nearly, missing some content words. This cat-\negory also includes the relatively rare cases\nwhere the NMT model produced just a single\nword, unrelated to the source.\nThe results are summarized in Figure 3.\nstantially lower scores, e.g. BLEU of 7 instead of 20. For-\ntunately, these BLEU scores correlate very well (Pearson of\n0.94) with our scores.\n7We excluded PBMT from this annotation because its BLEU\nscores were low; we are now reconsidering this decision given\nthe good performance in PER.\n(a) HindEnCorp-trained models\n(b) IITB-trained models\nFigure 3: Manual evaluation summary.\nThe manual annotation generally confirms the\nautomatic scores. On HindEnCorp, S2S has the\nhighest number of Flawless segments and Bi-RNN\nperforms worst, having the majority of outputs\nonly Partly Correct and suffering most from Am-\nbiguity.\nOn IITB, the performance of all the models is\ngenerally much better, with 40–60 of the 100 anno-\ntated segments falling into the Flawless category.\nTransformer is a clear winner here and S2S suffers\nfrom surprisingly many Incomplete segments.\nSome translation samples are shown in Figure 4.\n5 Analysis and Discussion\nWe assumed that PBMT may perform better on\nshort segments. In order to test this assumption, we\ndivided the 1000 test segments into 5 groups based\non the source segment length. Group boundaries\nwere chosen to achieve reasonably balance distri-\nbution and at least a minimal size for automatic\nscoring:\nSource length: 1–3 4 5 6 7–12\nSegment count: 73 380 282 165 100\nFigure 5 plots BLEU scores evaluated on each\ngroup of segments separately. We see that our as-\nsumption does not hold and that there is no clear\ntendency in translation quality based on source sen-\ntence length. In the small data setting (HindEn-\nCorp), PBMT scores well sentences of length 4 and233\nFlawless:\nA car on a street\nसडक पर एक कार\nGloss: A car on a street\nA white and yellow passenger car\nएक सफ े द और पीला यात ◌् रɍ कार\nGloss: A white and yellow passenger car\nWhite part of the chair\nक ु र ◌् सी का सफ े द भाग\nGloss: White part of the Chair\nPartly Correct:\nA man wearing white shorts\nएक आदमी सफ े द शॉर ◌् ट पहनना\nGloss: A man put on white short\n(output does not convey the intended\nmeaning in the target language)\nDog in a lake\nइस झील मȅ क ु त ◌् ते\nGloss: Dogs in this lake\n(grammar error: dog vs. 
dogs)\nAmbiguity:\nFaucet is above sink\nफ े सबुक Ȯस\u0000क से ऊपर ह ै\nGloss: Facebook is above sink\n(bad translation of the word “Faucet’)\nGreen bean in soup\nआत ◌् मा मȅ हरा\nGloss: Spirit in green\n(mis-translated words “bean”, and “soup”)\nFigure 4: Sample segment translations and their manual clas-\nsification.\nthen on sentences over 7 words. In other cases, S2S\nwins. With the IITB training corpus, Transformer\nwins and PBMT loses across all lengths.\nA generally interesting property of NMT is its\nability to correctly predict the sentence length (Shi\net al., 2016). We take a look at this by considering\nboth the relation of our candidate translations with\nthe source and with the reference.\nFigure 6 plots the length of the translation for in-\ndividual source segments sorted by length. We see\nthat the target length varies a lot across segments\nand also different NMT models. In general, out-\nputs are longer than sources but the length of the\nsource is not really followed by any of the models.\nWe observed on the HindEnCorp training data\nthat some of the NMT models tended to cut off\nsentences too short in early iterations. To exam-\nine this, we checked the difference in length be-\n(a) HindEnCorp-trained models\n(b) IITB-trained models\nFigure 5: Translation quality for groups of segments based\non their source length.\nFigure 6: Source and candidate translation lengths for indi-\nvidual segments in the subset of 100 manually-evaluated seg-\nments. Segments are sorted by source length. The models\nwere trained on the IITB corpus.\ntween the candidate and the reference throughout\nthe iterations. The distribution of length differ-\nences was however not skewed in any way and\nthe only observable pattern was that the differences\nget smaller as the training progresses. We plot the\ndifferences for the converged runs over the whole\n1000 segments in the test set in Figure 7. We see\nthat all the NMT models are very similar, produc-\ning output slightly longer (peak at +2) than the\nreference. The PBMT is optimized well and the\npeak is located at zero difference between the can-\ndidate and reference length. The interesting pattern\nin NMT outputs of slightly fewer segments with\nodd differences (+1, +3 and +5) has still to be ex-\nplained.\n6 Conclusion\nWe have applied the state-of-the-art neural ma-\nchine translation models and the phrase-based234\nFigure 7: Segment length difference (candidate vs reference)\nof the IITB-trained models. The positive numbers indicate\nthat candidate is longer than the reference.\nbaseline to English-to-Hindi translation. Our tar-\nget domain were relatively short segments appear-\ning in descriptions of image regions in the Visual\nGenome.\nThe results indicate that with smaller data (274k\nparallel segments, 3.8M English tokens), the deep\nsequence-to-sequence attentional model is the best\nchoice, although the PBMT baseline seemed to\nperform well in two of the tested automatic met-\nrics, CharacTER and PER. With large parallel data\navailable, Transformer should be preferred and all\nNMT models clearly outperform PBMT. We have\nnot yet explored the effect of adding monolingual\ndata.\nA deeper analysis has not revealed any differ-\nence in performance for shorter or longer segments,\nbut the manual annotation suggested that the per-\nformance of NMT models varies across individual\nsegments. 
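The per-length breakdown behind this observation (Section 5) can be reproduced roughly as follows: group the test segments into the buckets of 1–3, 4, 5, 6 and 7–12 source tokens and score each bucket separately. The paper evaluated with the Moses scorer; sacrebleu is used here purely for illustration, and the file names are placeholders.

```python
# Sketch: BLEU broken down by source-segment length, as in the length analysis above.
import sacrebleu

def length_bucket(n_tokens):
    """Map a source length to the buckets used in the paper (max length is 12)."""
    if n_tokens <= 3:
        return "1-3"
    if n_tokens <= 6:
        return str(n_tokens)          # buckets "4", "5", "6"
    return "7-12"

def bleu_by_length(sources, hypotheses, references):
    buckets = {}
    for src, hyp, ref in zip(sources, hypotheses, references):
        key = length_bucket(len(src.split()))
        hyps, refs = buckets.setdefault(key, ([], []))
        hyps.append(hyp)
        refs.append(ref)
    return {key: sacrebleu.corpus_bleu(hyps, [refs]).score
            for key, (hyps, refs) in buckets.items()}

# usage: one segment per line in each file (placeholder names)
with open("test.src") as s, open("test.hyp") as h, open("test.ref") as r:
    scores = bleu_by_length(s.read().splitlines(),
                            h.read().splitlines(),
                            r.read().splitlines())
for key in ("1-3", "4", "5", "6", "7-12"):
    print(key, round(scores.get(key, 0.0), 2))
```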
The overall performance is thus perhaps\ntoo crude and it would be suboptimal to decide for\na single model.\nIn the future, we will focus on the possibilities\nof multi-modal translation (Matusov et al., 2017;\nCalixto et al., 2012; Huang et al., 2016) to im-\nprove translation quality using the Visual Genome\nimages or other contextual information available.\nOur ultimate plan is to release a machine-translated\nHindi version of Visual Genome.\nAcknowledgement\nThis work has been supported by the grants\n18-24210S of the Czech Science Foundation,\nSVV 260 453 and “Progress” Q18+Q48 of Charles\nUniversity, and using language resources dis-\ntributed by the LINDAT/CLARIN project of the\nMinistry of Education, Youth and Sports of the\nCzech Republic (projects LM2015071 and OP\nVVV VI CZ.02.1.01/0.0/0.0/16 013/0001781).We thank Dr. Satyaranjan Dash and Miss Sneha\nShrivastav for their support in Development and\nTest Data preparation.\nReferences\nRuchit Agrawal and Dipti Misra Sharma. Building\nan Effective MT System for English-Hindi Us-\ning RNN’s. International Journal of Artificial\nIntelligence & Applications , 8:45–58, 09 2017.\nDzmitry Bahdanau, Kyunghyun Cho, and Yoshua\nBengio. Neural Machine Translation by Jointly\nLearning to Align and Translate. In Proceedings\nof ICLR , 2015.\nAnne Beyer, Vivien Macketanz, Aljoscha Bur-\nchardt, and Philip Williams. Can Out-of-the-box\nNMT Beat a Domain-trained Moses on Techni-\ncal Data? In Proceedings of EAMT User Studies\nand Project/Product Descriptions , pages 41–46,\nPrague, Czech Republic, 2017.\nOndřej Bojar and Aleš Tamchyna. The Design\nof Eman, an Experiment Manager. The Prague\nBulletin of Mathematical Linguistics , 99:39–58,\n2013. ISSN 0032-6585.\nOndřej Bojar and Aleš Tamchyna. Improving\nTranslation Model by Monolingual Data. In\nProceedings of the Sixth Workshop on Statisti-\ncal Machine Translation , pages 330–336, Ed-\ninburgh, Scotland, July 2011. Association for\nComputational Linguistics.\nOndřej Bojar, Matouš Macháček, Aleš Tamchyna,\nand Daniel Zeman. Scratching the Surface of\nPossible Translations. In Proc. of TSD 2013 ,\nLecture Notes in Artificial Intelligence, Berlin\n/ Heidelberg, 2013. Západočeská univerzita v\nPlzni, Springer Verlag.\nOndřej Bojar, V ojtěch Diatka, Pavel Rychlý, Pavel\nStraňák, Vít Suchomel, Aleš Tamchyna, and\nDaniel Zeman. HindEnCorp — Hindi-English\nand Hindi-only Corpus for Machine Transla-\ntion. In Nicoletta Calzolari (Conference Chair),\nKhalid Choukri, Thierry Declerck, Hrafn Lofts-\nson, Bente Maegaard, Joseph Mariani, Asun-\ncion Moreno, Jan Odijk, and Stelios Piperidis,\neditors, Proceedings of the Ninth International\nConference on Language Resources and Eval-\nuation (LREC’14) , pages 3550–3555, Reyk-\njavik, Iceland, may 2014. European Language\nResources Association (ELRA). ISBN 978-2-\n9517408-8-4.235\nOndřej Bojar, Rajen Chatterjee, Christian Fe-\ndermann, Yvette Graham, Barry Haddow,\nMatthias Huck, Philipp Koehn, Varvara Lo-\ngacheva, Christof Monz, Matteo Negri, Matt\nPost, Raphael Rubino, Lucia Specia, and Marco\nTurchi. Findings of the 2017 Conference on Ma-\nchine Translation (WMT17). In Proceedings of\nthe Second Conference on Machine Translation ,\nCopenhagen, Denmark, September 2017. Asso-\nciation for Computational Linguistics.\nThorsten Brants, Ashok C. Popat, Peng Xu,\nFranz J. Och, and Jeffrey Dean. Large Language\nModels in Machine Translation. 
In Proceedings\nof the Joint Conference on Empirical Methods\nin Natural Language Processing and Compu-\ntational Natural Language Learning (EMNLP-\nCoNLL) , pages 858–867, 2007.\nIacer Calixto, Teófilo Emídio de Campos, and Lu-\ncia Specia. Images as Context in Statistical Ma-\nchine Translation. In In The 2nd Annual Meeting\nof the EPSRC Network on Vision & Language\n(VL’12) , Sheffield, UK, 2012. EPSRC Vision\nand Language Network.\nMauro Cettolo, Marcello Federico, Luisa Ben-\ntivogli, Jan Niehues, Sebastian Stüker, Kat-\nsuhito Sudoh, Koichiro Yoshino, and Christian\nFedermann. Overview of the IWSLT 2017 Eval-\nuation Campaign. In Proceedings of the 14th\nInternational Workshop on Spoken Language\nTranslation (IWSLT) , pages 2–14, Tokyo, Japan,\n2017.\nKyunghyun Cho, Bart van Merrienboer, Dzmitry\nBahdanau, and Yoshua Bengio. On the Prop-\nerties of Neural Machine Translation: Encoder–\nDecoder Approaches. In Proceedings of SSST-\n8, Eighth Workshop on Syntax, Semantics and\nStructure in Statistical Translation , pages 103–\n111, Doha, Qatar, October 2014a. Association\nfor Computational Linguistics.\nKyunghyun Cho, Bart van Merrienboer, Caglar\nGulcehre, Dzmitry Bahdanau, Fethi Bougares,\nHolger Schwenk, and Yoshua Bengio. Learning\nPhrase Representations using RNN Encoder–\nDecoder for Statistical Machine Translation.\nInProceedings of the Conference on Empir-\nical Methods in Natural Language Process-\ning (EMNLP) , pages 1724–1734, Doha, Qatar,\nOctober 2014b. Association for Computational\nLinguistics.\nTobias Domhan and Felix Hieber. Using Target-side Monolingual Data for Neural Machine\nTranslation through Multi-task Learning. In\nProceedings of the Conference on Empiri-\ncal Methods in Natural Language Processing,\nEMNLP , pages 1500–1505, 2017.\nAndrew M. Finch, Yasuhiro Akiba, and Eiichiro\nSumita. How Does Automatic Machine Trans-\nlation Evaluation Correlate with Human Scor-\ning as the Number of Reference Translations In-\ncreases? In Proceedings of the Fourth Interna-\ntional Conference on Language Resources and\nEvaluation, LREC , 2004.\nJonas Gehring, Michael Auli, David Grangier, De-\nnis Yarats, and Yann N. Dauphin. Convolutional\nSequence to Sequence Learning. In Doina Pre-\ncup and Yee Whye Teh, editors, Proceedings\nof the 34th International Conference on Ma-\nchine Learning , volume 70 of Proceedings of\nMachine Learning Research , pages 1243–1252,\nInternational Convention Centre, Sydney, Aus-\ntralia, 06–11 Aug 2017. PMLR.\nSepp Hochreiter and Jürgen Schmidhuber. Long\nShort-Term Memory. Neural Comput. , 9(8):\n1735–1780, November 1997. ISSN 0899-7667.\ndoi: 10.1162/neco.1997.9.8.1735.\nPo-Yao Huang, Frederick Liu, Sz-Rung Shiang,\nJean Oh, and Chris Dyer. Attention-based Multi-\nmodal Neural Machine Translation. In Proceed-\nings of the First Conference on Machine Trans-\nlation, WMT , pages 639–645, 2016.\nMarcin Junczys-Dowmunt and Roman Grund-\nkiewicz. An Exploration of Neural Sequence-\nto-Sequence Architectures for Automatic Post-\nEditing. In Proceedings of the Eighth Interna-\ntional Joint Conference on Natural Language\nProcessing, IJCNLP , pages 120–129, 2017.\nMarcin Junczys-Dowmunt, Tomasz Dwojak, and\nHieu Hoang. Is Neural Machine Translation\nReady for Deployment? A Case Study on 30\nTranslation Directions. In Proceedings of the 9th\nInternational Workshop on Spoken Language\nTranslation (IWSLT) , Seattle, WA, 2016.\nPhilipp Koehn and Rebecca Knowles. Six Chal-\nlenges for Neural Machine Translation. 
In Pro-\nceedings of the First Workshop on Neural Ma-\nchine Translation , pages 28–39, Vancouver, Au-\ngust 2017. Association for Computational Lin-\nguistics.\nPhilipp Koehn, Hieu Hoang, Alexandra Birch,\nChris Callison-Burch, Marcello Federico,236\nNicola Bertoldi, Brooke Cowan, Wade Shen,\nChristine Moran, Richard Zens, Chris Dyer,\nOndřej Bojar, Alexandra Constantin, and Evan\nHerbst. Moses: Open Source Toolkit for Sta-\ntistical Machine Translation. In Proceedings of\nthe 45th Annual Meeting of the Association for\nComputational Linguistics (ACL) Companion\nVolume Proceedings of the Demo and Poster\nSessions , pages 177–180, Prague, Czech Repub-\nlic, June 2007. Association for Computational\nLinguistics.\nRanjay Krishna, Yuke Zhu, Oliver Groth, Justin\nJohnson, Kenji Hata, Joshua Kravitz, Stephanie\nChen, Yannis Kalantidis, Li-Jia Li, David A.\nShamma, Michael S. Bernstein, and Li Fei-Fei.\nVisual Genome: Connecting Language and Vi-\nsion Using Crowdsourced Dense Image Annota-\ntions. International Journal of Computer Vision ,\n123(1):32–73, May 2017. ISSN 1573-1405. doi:\n10.1007/s11263-016-0981-7.\nAnoop Kunchukuttan, Pratik Mehta, and Pushpak\nBhattacharyya. The IIT Bombay English-Hindi\nParallel Corpus. In Proceedings of LREC , 2018.\nIn print.\nGregor Leusch and Hermann Ney. BLEUSP, IN-\nVWER, CDER: Three improved MT evaluation\nmeasures. In NIST Metrics for Machine Transla-\ntion Challenge , Waikiki, Honolulu, Hawaii, Oc-\ntober 2008.\nEvgeny Matusov, Andy Way, Iacer Calixto, Daniel\nStein, Pintu Lohar, and Sheila Castilho. Us-\ning Images to Improve Machine-Translating E-\nCommerce Product Listings. In Proceedings of\nthe 15th Conference of the European Chapter of\nthe Association for Computational Linguistics,\nEACL , pages 637–643, 2017.\nKishore Papineni, Salim Roukos, Todd Ward, and\nWei-Jing Zhu. Bleu: a Method for Automatic\nEvaluation of Machine Translation. In Proceed-\nings of the 40th Annual Meeting of the Associa-\ntion for Computational Linguistics , pages 311–\n318, 2002.\nMartin Popel and Ondřej Bojar. Training Tips for\nthe Transformer Model. The Prague Bulletin of\nMathematical Linguistics , 110(1):43–70, 2018.\nMaja Popović. chrF: character n-gram F-score for\nautomatic MT evaluation. In Proceedings of the\nTenth Workshop on Statistical Machine Transla-\ntion, pages 392–395, Lisbon, Portugal, Septem-ber 2015. Association for Computational Lin-\nguistics.\nRico Sennrich, Barry Haddow, and Alexandra\nBirch. Neural Machine Translation of Rare\nWords with Subword Units. In Proceedings of\nthe 54th Annual Meeting of the Association for\nComputational Linguistics, ACL , 2016a.\nRico Sennrich, Barry Haddow, and Alexandra\nBirch. Improving Neural Machine Translation\nModels with Monolingual Data. In Proceedings\nof the 54th Annual Meeting of the Association\nfor Computational Linguistics, ACL , 2016b.\nRico Sennrich, Orhan Firat, Kyunghyun Cho,\nAlexandra Birch, Barry Haddow, Julian\nHitschler, Marcin Junczys-Dowmunt, Samuel\nLäubli, Antonio Valerio Miceli Barone, Jozef\nMokry, and Maria Nadejde. Nematus: a Toolkit\nfor Neural Machine Translation. In Proceed-\nings of the 15th Conference of the European\nChapter of the Association for Computational\nLinguistics, EACL; Software Demonstrations ,\npages 65–68, 2017.\nXing Shi, Kevin Knight, and Deniz Yuret. Why\nNeural Translations are the Right Length. In\nProceedings of the Conference on Empirical\nMethods in Natural Language Processing , pages\n2278–2282, Austin, Texas, November 2016. 
As-\nsociation for Computational Linguistics.\nSandhya Singh, Ritesh Panjwani, Anoop\nKunchukuttan, and Pushpak Bhattacharyya.\nComparing Recurrent and Convolutional Ar-\nchitectures for English-Hindi Neural Machine\nTranslation. In Proceedings of the 4th Workshop\non Asian Translation, WAT@IJCNLP , pages\n167–170, 2017.\nMatthew Snover, Bonnie Dorr, Richard Schwartz,\nLinnea Micciulla, and John Makhoul. A Study\nof Translation Edit Rate with Targeted Human\nAnnotation. In Proceedings AMTA , pages 223–\n231, August 2006.\nSreelekha S and Pushpak Bhattacharyya. Role of\nMorphology Injection in SMT: A Case Study\nfrom Indian Language Perspective. ACM Trans.\nAsian & Low-Resource Lang. Inf. Process. , 17\n(1):1:1–1:31, 2017. doi: 10.1145/3129208.\nReut Tsarfaty, Djamé Seddah, Yoav Goldberg,\nSandra Kübler, Yannick Versley, Marie Can-\ndito, Jennifer Foster, Ines Rehbein, and Lamia\nTounsi. Statistical Parsing of Morphologi-\ncally Rich Languages (SPMRL) What, How and237\nWhither. In Proceedings of the First Work-\nshop on Statistical Parsing of Morphologically-\nRich Languages, SPMRL@NAACL-HLT , pages\n1–12, 2010.\nAshish Vaswani, Noam Shazeer, Niki Parmar,\nJakob Uszkoreit, Llion Jones, Aidan N. Gomez,\nLukasz Kaiser, and Illia Polosukhin. Attention\nis All you Need. In Advances in Neural Infor-\nmation Processing Systems 30: Annual Confer-\nence on Neural Information Processing Systems ,\npages 6000–6010, 2017.\nBoli Wang, Zhixing Tan, Jinming Hu, Yidong\nChen, and Xiaodong Shi. XMU Neural Machine\nTranslation Systems for WAT 2017. In Proceed-\nings of the 4th Workshop on Asian Translation,\nWAT@IJCNLP , pages 95–98, 2017.\nWeiyue Wang, Jan-Thorsten Peter, Hendrik\nRosendahl, and Hermann Ney. CharacTER:\nTranslation Edit Rate on Character Level. In\nACL First Conference on Machine Translation\n(WMT) , Berlin, Germany, August 2016.238", "main_paper_content": null }
{ "decision": "Unknown", "reviews": [] }
0
0
[]
[]
{ "id": "fkKqFvUtx1", "year": null, "venue": "EAMT 2016", "pdf_link": "https://aclanthology.org/W16-3408.pdf", "forum_link": "https://openreview.net/forum?id=fkKqFvUtx1", "arxiv_id": null, "doi": null }
{ "title": "Pivoting Methods and Data for Czech-Vietnamese Translation via English", "authors": [ "Duc Tam Hoang", "Ondrej Bojar" ], "abstract": null, "keywords": [], "raw_extracted_content": "Baltic J. Modern Computing, V ol. 4 (2016), No. 2, pp. 190–202\nPivoting Methods and Data for\nCzech-Vietnamese Translation via English\nDuc Tam HOANG, Ond ˇrej BOJAR\nCharles University in Prague, Faculty of Mathematics and Physics,\nInstitute of Formal and Applied Linguistics\[email protected], [email protected]\nAbstract. The statistical approach to machine translation (MT) relies heavily on large parallel\ncorpora. For many language pairs, this can be a significant obstacle. A promising alternative is\npivoting, i.e. making use of a third language to support the translation. There are a number of\npivoting methods, but unfortunately, they were not evaluated in comparable settings. We focus on\none particular language pair, Czech $Vietnamese translation, with English as the pivoting lan-\nguage, and provide a comparison of several pivoting methods and the baseline (direct translation).\nBesides the experiments and analysis, another contribution is the datasets that we have collected\nand prepared for the three languages.\nKeywords: Statistical Machine Translation, Czech-Vietnamese, parallel corpus, pivoting meth-\nods, phrase table triangulation, system cascades\n1 Introduction\nLarge parallel corpora are of utmost importance for statistical machine translation (SMT)\nfor producing reliable translations. Unfortunately, for most pairs of living languages,\nthe amount of available parallel data is not sufficient. “Pivoting” methods make use of\na third language (“pivot language”) to support the translation.\nOver past years, a number of pivoting methods have been proposed. Most of the\nworks were conducted using multi-parallel corpora such as Europarl (Koehn, 2005),\nwhere the same text is available in more than two languages. In a realistic condition, the\ntwo corpora, source-pivot corpus and pivot-target corpus, are independent , i.e. coming\nfrom different sources. We expect that some of the approaches are more beneficial in\nonly one of the two conditions and for sure, some approaches utilizing multilingual\ncorpora are not applicable for independent corpora at all (Kumar et al., 2007; Chen et\nal., 2008).\nIn this work, we carry out experiments to directly compare several methods of piv-\noting. We select Czech and Vietnamese, a relatively unexplored language pair, for the\nPivoting Methods and Data for Czech-Vietnamese Translation via English 191\nexperiments. English is chosen for the role of pivot language because it offers the largest\nparallel corpora with both Czech and Vietnamese.\nThis paper has two main contributions. (1) The paper evaluates a wide range of\npivoting methods in a directly comparable setting and under the more realistic condition\nwhere the parallel corpora are independent (as opposed to multi-parallel). (2) It is the\nfirst study which focuses on machine translation between Czech and Vietnamese. It\ndescribes and publishes the corpora that we have collected and processed.\nThe remainder of this paper is organized as follows. Section 2 discusses related\nwork of pivoting methods. Section 3 describes the dataset that we collected, prepared\nand released. Section 4 presents experimental set up, results and discussions. 
Finally,\nSection 5 concludes the paper.\n2 Pivoting Methods\nPivoting is formulated as the translating task from a source language to a target language\nthrough one or more pivot languages. An important, yet mostly overlooked aspect in\npivoting is the relation between the source-pivot and pivot-target corpora. For example,\nChen et al. (2008) reduce the size of the phrase table by filtering out phrase pairs if they\nare not linked by at least one common pivot phrase. Kumar et al. (2007) combine word\nalignments using multiple pivot languages to correct the alignment errors trained on the\nsource-target parallel data. Both methods (implicitly) rely on the fact that the corpora\ncontain the same sentences available in multiple languages. While this is a reasonable\nassumption for a multi-parallel corpus , the methods are not applicable for independent\nparallel corpora .\nIn our study, we compare pivoting methods which can be applied under the per-\nhaps more realistic condition that the source-pivot and pivot-target corpora are indepen-\ndent (Tiedemann, 2012a; Tiedemann and Nakov, 2013). This section discusses such\nmethods and highlights their difference and potential. Each method has a number of\nconfiguration options which significantly affect the translation quality, we explore them\nempirically in Section 4 below.\n2.1 Synthetic Corpus/Phrase Table\nThe synthetic corpus method (Gispert and Mari ˜no, 2006; Galu ˇsˇc´akov ´a and Bojar, 2012)\nand the phrase table (PT) translation method (called synthetic phrase table) (Wu and\nWang, 2007) aim to generate training data from MT output. Specifically, an MT system,\nwhich translates pivot language into the source or target language, is employed to trans-\nlate a corpus or a phrase table of the other language pair. The result is a source !target\ncorpus or phrase table with one side “synthetic”, i.e. containing MT translated data. The\nsynthetic corpus or phrase table is then used to build the source !target MT system.\nUsing MT translated data is generally seen as a bad thing. The model can easily re-\nproduce errors introduced by the underlying MT system. In practice, however, machine-\ngenerated translations need not be always harmful, especially when they compensate for\nthe lack of direct bilingual training data. For example, Gispert and Mari ˜no (2006) re-\nport impressive English $Catalan translation results by translating the English-Spanish\n192 Hoang and Bojar\ncorpus using a Spanish !Catalan MT system. The results are on par with the transla-\ntion quality of English $Spanish translation. Similarly, Galu ˇsˇc´akov ´a and Bojar (2012)\nobserve that pivoting through Czech was better than direct translation from English to\nSlovak, due to a large difference in training data size.\nBetween the two methods, the task of translating a phrase table poses different chal-\nlenges compared to the task of translating a corpus. Phrasal input is generally much\nshorter than a sentence and a lot of contextual information is lost (even considering the\nlimited scope of existing language models).\n2.2 Phrase Table Triangulation\nThe phrase table triangulation method (Cohn and Lapata, 2007; Zhu et al., 2014),\nsometimes called simply triangulation, generates an artificial source-target phrase ta-\nble by directly joining two phrase tables (source-pivot and pivot-target) on common\npivot phrases.\nOnce the tables are combined, approaches to triangulating the two phrase tables\ndiverge in how they set the scores for the phrases. 
There are two options for estimating\nthe necessary feature scores of the new phrase table: multiplying the original posterior\nprobabilities or manipulating the original co-occurrence counts of phrases.\nThe first option views the triangulation as a generative probabilistic process on two\nsets of phrase pairs, s-tandp-t. Assuming the independent relations between three\nlanguages, the conditional distribution p(sjt)is estimated over source-target phrase pair\ns-tby marginalising out the pivot phrase p:\np(sjt) =X\npp(sjp;t)\u0002p(pjt)\n\u0019X\npp(sjp)\u0002p(pjt)(1)\nAfterwards, the feature values of identical phrases pairs are combined in the final\nphrase table. Either the scores are summed up or maximized (i.e. taking the higher of\nthe score values).\nThe second option estimates the co-occurrence count of the source and target phrases\nc(s;t)from the co-occurrence counts c(s;p)andc(p;t)of the component phrase pairs.\nAfterwards, the feature scores are estimated by the standard phrase extraction (Koehn,\n2010).\nc(s;t) =X\npf(c(s;p);c(p;t))(2)\nIn Equation 2, function fis the desired approximation function. Zhu et al. (2014)\nproposed four functions f: minimum, maximum, arithmetic mean and geometric mean.\nPhrase table triangulation methods have received much attention, yet they have not\nbeen tested with two disjoint and independent corpora.\nPivoting Methods and Data for Czech-Vietnamese Translation via English 193\n2.3 System Cascades\nA widely popular method, system cascades (Utiyama and Isahara, 2007), simply uses\ntwo black-box machine translation systems in a sequence. The first system translates\nthe input from the source language into the pivot language. The second system picks up\nthe pivot hypothesis and translates it into the target language.\nFormally, the problem of finding the best sentence ^efor a foreign input sentence fis\ndefined as maximizing the translation score from source sentence fto a pivot sentence\np, then frompto target sentence e:\n^e\u0019arg max\ne;p ipsmt(pijf)\u0002psmt(ejpi) (3)\nwherepiis a pivot hypothesis of the first MT system and serves as the input of the\nsecond system.\nBecause investigating all possible pivot sentences pis too expensive, pis chosen\nfrom the list of n-best translations of the source sentence. Sometimes, the first system is\nnot capable of providing a list of possible translations and pivot hypotheses are limited\nton= 1, taking the top candidate only.\n2.4 Phrase Table Interpolation for System Combination\nEach of the pivoting methods described above leads to a separate MT system. This\nopens a possibility of combining these systems, hoping that the strengths of one method\nwould offset the weaknesses of other methods. We choose to combine multiple systems\nby linearly interpolating translation models. This method, called “phrase table interpo-\nlation”, is defined as follows:\np(ejf;\u0015) =nX\ni=1\u0015ipi(ejf) (4)\nwhere\u0015iis the interpolation weight of translation model iand satisfies the conditionP\ni\u0015i= 1.\nWe note that the system cascades method does not have a single phrase table. It\ndirectly uses the two SMT systems, rather than building a new SMT system. It thus\ndoes not lend itself to this combination method. We circumvent the problem by creating\na synthetic phrase table from the development and test sets, each translated with the\ncascades method. We pair the translated text with the original text to create a small\nsynthetic corpus. 
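Before moving on, a concrete illustration of the triangulation in Equation 1 may help. The sketch below joins two toy phrase tables, represented as nested dictionaries of conditional probabilities, on their shared pivot phrases and marginalises the pivot out, with either the summation or the maximization variant. Real Moses phrase tables additionally carry inverse probabilities and lexical weights (handled analogously), and the example entries and probabilities here are invented.

```python
# Sketch of Equation 1: p(s|t) ~ sum_p p(s|p) * p(p|t), joined on common pivot phrases.
from collections import defaultdict

def triangulate(p_src_given_piv, p_piv_given_tgt, combine="sum"):
    """p_src_given_piv[pivot][src] = p(src|pivot);
    p_piv_given_tgt[tgt][pivot] = p(pivot|tgt).
    Returns p(src|tgt), combining identical pairs by summation or maximization."""
    p_src_given_tgt = defaultdict(dict)
    for tgt, pivots in p_piv_given_tgt.items():
        for piv, p_pt in pivots.items():
            for src, p_sp in p_src_given_piv.get(piv, {}).items():
                score = p_sp * p_pt
                old = p_src_given_tgt[tgt].get(src, 0.0)
                p_src_given_tgt[tgt][src] = (old + score if combine == "sum"
                                             else max(old, score))
    return p_src_given_tgt

# toy Czech-given-English and English-given-Vietnamese entries (made-up numbers)
cs_given_en = {"a dog": {"pes": 0.6, "psa": 0.3}, "the dog": {"pes": 0.7}}
en_given_vi = {"con chó": {"a dog": 0.5, "the dog": 0.4}}
print(triangulate(cs_given_en, en_given_vi)["con chó"])
# -> {'pes': 0.58, 'psa': 0.15} up to float rounding (0.6*0.5 + 0.7*0.4 = 0.58)
```

Note how every pivot phrase that is missing from one of the two tables contributes nothing, which is exactly why independent corpora with a small overlap lead to low phrase coverage.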
A phrase table is then extracted from the synthetic corpus and used in\nthe combination.\n194 Hoang and Bojar\n3 Dataset Created and Released\nCzech and Vietnamese are the national languages of the Czech Republic and Vietnam,\nrespectively. Furthermore, the two languages are not under-resourced on their own, but\nthe amount of bilingual corpora between them is very limited despite the large Viet-\nnamese community living in the Czech Republic. So far, no effort has been put into\ndeveloping an MT tool specifically for this language pair.\nWe wish to investigate the potential of pivoting methods for translating between\nCzech and Vietnamese. After carefully examining the potential of all possible pivot\nlanguages, we decide to select English as the sole pivot language. It is the only language\nthat provides sufficient resources to act as a bridge between Vietnamese and Czech.\nWe created and released two sets of multilingual datasets: a set of test data and a set\nof parallel corpora.\n3.1 WMT Test Data\nOur test set was derived from the WMT 2013 shared task,1which consists of 3000\naligned sentences from newspapers. We opted for the 2013 set, because more recent\nWMT test sets were no longer multi-parallel across all the languages. The WMT 2013\ntest set spanned across six languages (Czech, English, German, French, Spanish and\nRussian) and we extended it to include Vietnamese.\nTable 1. Statistics of test data\n# sentences # words\nCzech 3,000 48,472\nEnglish 3,000 56,089\nVietnamese 3,000 75,804\nOur contribution was created by human translators working in two stages. The first\nstage delivered a Vietnamese translation from the English side of the WMT 2013 test\nset, sometimes by post-editing machine-translated text. The second stage was a careful\ncheck to arrive at fluent Vietnamese text. Finally, we prepared a multi-lingual test set\nfor Czech, English and Vietnamese. Table 1 gives the statistics of the test set.\n3.2 Training Data\nThe training data is composed of parallel corpora among the source, target and pivot\nlanguages. For Czech-English language pair, we used CzEng 1.0, a Czech-English par-\nallel corpus (Bojar et al., 2012) to train the translation model. For Czech-Vietnamese\nand English-Vietnamese, we collected available bitexts from the Internet as there were\nno ready-made corpora sufficient to train the translation models.\n1http://www.statmt.org/wmt13\nPivoting Methods and Data for Czech-Vietnamese Translation via English 195\nTable 2. Statistics of Czech-Vietnamese training data\nOriginal Cleaned\nCzech Vietnamese Czech Vietnamese\n#sentences 1,337,199 1,337,199 1,091,058 1,091,058\n#words 9,128,897 12,073,975 6,718,184 7,646,701\n#unique words 224,416 68,237 195,446 59,737\nTable 3. Statistics of English-Vietnamese training data\nOriginal Cleaned\nEnglish Vietnamese English Vietnamese\n#sentences 2,035,624 2,035,624 1,113,177 1,113,177\n#words 16,638,364 17,565,580 8,518,711 8,140,876\n#unique words 91,905 78,333 69,513 58,286\nWe collected data from two main sources: OPUS2and TED talks.3OPUS is a grow-\ning multilingual corpus of translated open source documents. It covers over 90 lan-\nguages and includes data from several domains (Tiedemann, 2012b). The majority of\nVietnamese-English and Vietnamese-Czech bitexts in OPUS were subtitles from mo-\ntion pictures. As such, these bitexts were not always close translations; due to various\nconstraints of the domain, the texts were often just paraphrases. 
The later source con-\ntained selected TED talks which were provided in English and equipped with transcripts\nin Czech and/or Vietnamese. There were 1198 talks for which English and Vietnamese\ntranscripts are available. There were 784 TED talks for which Czech and Vietnamese\ntranscripts are available.\nOur preliminary analysis indicated that the collected datasets were noisy to the ex-\ntent that the noise would harm the performance of SMT approaches. Hence, we opted\nfor a semi-automatic cleanup of the corpora (both Czech-Vietnamese and English-\nVietnamese). We improved the corpus quality by two steps: normalizing and filtering.\nThe normalizing step cleaned up the corpora based on some typical formatting patterns\nin subtitles and transcripts (e.g. we tried to rejoin sentences spanning over multiple\nsubtitles). The filtering step relied on the filtering tool used in the development of the\nCzEng corpus (Bojar et al., 2012). We trained the tool on a set of 1,000 sentence pairs\nwhich had been selected randomly from the corpus and manually annotated. Overall, the\nnormalization and filtering reduced the size of the Czech-Vietnamese corpus by about\n32.25% and the size of the English-Vietnamese corpus by about 51.29% (the number\nof words). The statistics of the training data is shown in Table 2 and 3. Our analysis\nshowed that the cleaning phrase helped in improving the performance of the translation\nmodel trained on the collected datasets.\n2http://opus.lingfil.uu.se\n3https://www.ted.com/talks\n196 Hoang and Bojar\n4 Experiments\nWe empirically evaluate the pivoting methods in the context of Czech $Vietnamese\ntranslation. We also carry out a brief evaluation on the quality of Czech $English\nand English$Vietnamese translations. This provides an insight into the corpus qual-\nity, which affects the final performance of pivoting methods.\n4.1 Setup\nThe experiments are carried out using using Moses framework (Koehn et al., 2007).\nInstead of Moses standard EMS, we use Eman (Bojar and Tamchyna, 2013) to manage\nthe large number of experiments.\nWe use the standard phrase-based SMT approach which follows the log-linear model.\nThe model features include the translation model, language model, distance-based re-\nordering, word penalty and phrase penalty (no lexicalized reordering model). The trans-\nlation models are trained on the parallel data that we have prepared (see Section 3).\nWord alignments are created automatically on the bitexts using Giza++ (Och and Ney,\n2003), followed by the standard phrase extraction (Koehn et al., 2003). Three language\nmodels are trained using the KenLM language modeling toolkit (Heafield, 2011) with\nthe order of 5.\nFor the tuning and final evaluation, we split the prepared Czech-English-Vietnamese\nWMT 2013 set into two parts: the first 1500 sentences as the development set and the\nremaining 1500 sentences as the test set. The log-linear model is optimized by tuning\non the development data with minimum error rate training (MERT, Och (2003)) as the\ntuning method and BLEU as the tuning metric (Papineni et al., 2002).\nThe pivoting methods are implemented and processed using the available data that\nwe have. 
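For readers less familiar with phrase-based SMT, the log-linear model mentioned in the setup can be pictured as a weighted sum of feature values (translation model, language model, distance-based reordering, word penalty, phrase penalty) whose weights MERT tunes on the development set towards BLEU. The decoder applies this scoring during search; the sketch below only reranks a tiny n-best list to expose the arithmetic, and all feature values, weights and hypothesis texts are invented.

```python
# Sketch of log-linear scoring: score(e|f) = sum_i lambda_i * h_i(e, f).
FEATURES = ("tm", "lm", "distortion", "word_penalty", "phrase_penalty")

def loglinear_score(hypothesis_features, weights):
    return sum(weights[f] * hypothesis_features[f] for f in FEATURES)

def rerank(nbest, weights):
    """Pick the highest-scoring hypothesis from an n-best list."""
    return max(nbest, key=lambda h: loglinear_score(h["features"], weights))

# made-up weights (in practice the output of MERT) and made-up feature values
weights = {"tm": 0.3, "lm": 0.5, "distortion": 0.1,
           "word_penalty": -1.0, "phrase_penalty": -0.2}
nbest = [
    {"text": "hypothesis A", "features": {"tm": -4.2, "lm": -10.1, "distortion": -2.0,
                                          "word_penalty": 6, "phrase_penalty": 3}},
    {"text": "hypothesis B", "features": {"tm": -3.8, "lm": -11.5, "distortion": -1.0,
                                          "word_penalty": 7, "phrase_penalty": 4}},
]
print(rerank(nbest, weights)["text"])   # -> hypothesis A
```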
The experimental results are evaluated using BLEU (as implemented in Moses\nscorer; single-reference, lowercased, and in the tokenization used by the MT system).\nWe also carry out manual evaluation for the final results.\n4.2 Baseline Systems\nWe first build the SMT system by training on the direct parallel data for all 6translation\ndirections among Czech, English and Vietnamese. Of the 6component systems, we use\nthe SMT systems trained on the direct Czech $Vietnamese parallel data as the baseline\nsystem.\nTable 4 shows the experimental results of six component systems on the test set. We\ncan see that the Czech !Vietnamese and Vietnamese !Czech baseline systems attain\nvery low results (10.59 and 7.62 BLEU points). This is not surprising. Despite the\npreparation step, the Czech $Vietnamese training data is still noisy. The essence of\ntranscribed bitexts is paraphrasing, which may be correct in a particular context but\nincorrect in general. Furthermore, the properties of the examined languages (Czech\ninflective with very rich morphology, Vietnamese analytic with rather fixed word order)\nrender the Czech-Vietnamese translation as a difficult problem.\nOur analysis shows that the component systems for English $Vietnamese transla-\ntion perform relatively well. This is attributed by the similarity between English and\nPivoting Methods and Data for Czech-Vietnamese Translation via English 197\nTable 4. Performance of baseline systems by direct translation\nDirection Label BLEU\nCzech!English cs!en 23.23\nEnglish!Czech en!cs 15.26\nVietnamese!English vi!en 33.88\nEnglish!Vietnamese en!vi 34.45\nCzech!Vietnamese cs!vi 10.59\nVietnamese!Czech vi!cs 7.62\nVietnamese, notably the small number of inflectional morphemes. With the collected\ndataset, we attain competitive results compared to current English $Vietnamese MT\ntranslation.\n4.3 Results of Pivoting Methods\n4.3.1 Phrase Table Translation We choose to conduct the phrase table translation\nmethod, which is similar to the synthetic corpus method. To create synthetic Czech $Vietnamese\nPTs, there are two options:\n1. Translating the English side of English $Vietnamese phrase tables into Czech us-\ning the English!Czech component MT system.\n2. Translating the English side of Czech $English phrase tables into Vietnamese us-\ning the English!Vietnamese component MT system.\nAfter translation, the probabilities and lexical weights are kept from the original\nphrase tables.\nTable 5. Performance of synthetic phrase table method\nOption vi!cscs!vi\nTranslating English $Vietnamese phrase table 7.34 9.67\nTranslating Czech$English phrase table 8.40 12.09\nDirect Translation (Baseline) 7.62 10.59\nTable 5 shows the performance of the two options. We see that translating the large\nCzEng 1.0 phrase table by the small systems achieves better results than the other way\naround, regardless of the translation direction. We note that not only the CzEng 1.0 PT\nhas a better coverage, but also the English !Vietnamese system delivers translations\nof a relatively good quality. The English !Czech system faces the problem of incorrect\nword forms even though the morphemes are correct. We also note that the PT translation\n198 Hoang and Bojar\nmethod which involves translating Czech $English phrase table attains better results\nthan the baseline systems. This shows the potential of pivoting methods over the direct\ntranslation.\n4.3.2 Phrase Table Triangulation We followed two specific options to conduct\nphrase table triangulation. 
Each option in turn offers a number of ways to merge the\nfeature values of identical pivoted phrase pairs.\n1. Pivoting posterior probabilities, merging by the summation or maximization func-\ntion\n2. Pivoting the co-occurrence counts, approximating by the minimum, maximum,\narithmetic mean or geometric mean function\nFor each of the translation directions, these two options result in six phrase tables\nwhich have the same phrase pairs but different feature values. Table 6 shows the perfor-\nmance of all the setups.\nTable 6. Comparison between the six options of PT triangulation method\nOption Function vi!cscs!vi\n1 summation 7.44 10.28\n1 maximization 7.21 9.64\n2 minimum 7.24 9.86\n2 maximum 6.38 7.64\n2 arithmetic mean 6.25 6.95\n2 geometric mean 7.05 9.24\nDirect Translation (Baseline) 7.62 10.59\nFirst, we can see that both options of the triangulation method receive lower BLEU\nscores, compared to the phrase table translation method. The result is rather interesting\nbecause the triangulation method has an appealing description. It is generally consid-\nered a good system, sometimes outperforming direct translation. The primary reason\nfor the failure here is the high level of noise created by triangulation. The method dou-\nbles the amount of noise in both phrase tables, thus decreasing the overall performance.\nMoreover, as our corpora are independent, the overlapping part is small. This results in\na low coverage of phrases.\nSecond, re-estimating co-occurrence counts appears to be less effective than com-\nbining the probabilities directly. The primary reason is the difference between two\nphrase tables. The Czech-English phrase table is much larger than the English-Vietnamese\nphrase table. As the co-occurrence counts are biased either the large PT or the small PT,\nthus minimizing the difference between valid and noise phrase pairs. Hence, the noisy\npairs acquire probabilities as high as the valid pairs. When the co-occurrence counts are\nPivoting Methods and Data for Czech-Vietnamese Translation via English 199\nTable 7. Performance of system cascades method\nn 1 2 5 10 20 30 50 75 100\ncs!vi 9.05 9.19 9.33 9.50 9.70 9.70 9.80 9.82 9.82\nvi!cs 13.35 13.51 13.65 13.71 13.77 13.83 13.73 13.75 13.79\nbiased towards the large PT (i.e. the maximum and arithmetic mean functions), the high\nnumber of common phrases worsens the probabilities.\nAnother observation shows that computation of the new probability favours sum-\nmation over maximization. It is reasonable that the final probability of a source- target\npairs should be computed over all middle-phrases rather than just one phrase. One unit\n(word or phrase) may have more than one translation in other language.\n4.3.3 System Cascades For system cascades, we use the component systems to trans-\nlate each step of the process. There are two directions of translation, which lead to two\ndifferent settings for the system cascades method.\nFor Vietnamese!Czech system cascades method, we first use the Vietnamese !English\ncomponent MT system to translate the input from Vietnamese into English. We then\nuse the English!Czech component MT system to translate the English sentence into\nCzech.\nFor Czech!Vietnamese system cascades method, we first use the Czech !English\ncomponent MT system to translate the input from Czech into English. 
We then use\nthe English!Vietnamese component MT system to translate the English sentence into\nVietnamese.\nIn our experiments, we select nfromf1;2;5;10;20;30;50;75;100gto verify the\neffectiveness of using n-best translations instead of just selecting the top hypothesis.\nThe list of n-best translations allows the second system to compensate for errors of the\nfirst system’s single-best output, thus producing a better translation.\nTable 7 confirms our claim that the n-best list of hypotheses helps system cascades.\nFurthermore, the system cascades method achieves higher results than the baseline sys-\ntem and other pivoting methods. The promising performance of system cascades comes\nfrom the fact that the method uses complete translations. During the translation pro-\ncess, pivoting sentences are broken into phrases separately for each of the two phrase\ntables. Only a small portion of phrases remains intact during the process. In most of the\ncases, the segmentation into phrases is different for the pivot-target translation and for\nthe source-pivot translation.\n4.3.4 Combination through Phrase Table Interpolation We adopt the uniform\nweights to perform phrase table interpolation, which has shown to be robust (Cohn\nand Lapata, 2007). We adapt all four features of the standard Moses SMT translation\nmodel: the phrase translation probabilities and the lexical weights.\n200 Hoang and Bojar\nTable 8. Automatic evaluation of Czech $Vietnamese translation\nMethod PT Size vi!cscs!vi\nDirect Translation 8.70M 7.62 10.59\nPT Translation 53.21M 8.40 12.09\nPT Triangulation 61.50M 7.44 9.86\nSystem Cascades 0.08M 9.82 13.83\nCombination (PT Interpolation) 95.00M 10.12 13.80\nTable 8 summarizes our experimental results using automatic scoring. It includes\nthe results of the individual systems and the combined system, which is built based on\nthe interpolated phrase table.\nWe further conduct manual evaluation over the final results of Czech !Vietnamese\ntranslation. We perform relative ranking among 5 systems, the established practice of\nWMT. To interpret this 5-way ranking, we adopt the technique used by WMT until\n2013 (before TrueSkill): we extract the 10 pairwise comparisons from each ranking.\nFor a given system, we report the proportion of pairs in which the system was ranked\nequally or higher than its competitor (out of all pairs where the system was evaluated),\nsee the column “\u0015Others” in Table 9. Additionally, we report a simpler interpretation of\nthe 5-way ranking following Bojar et al. (2011). Each 5-way ranking is called a “block”\nand we report how often each system was among the winners in this block. Since we\nare comparing 5 systems, all our blocks include all systems, so “ \u0015All in block” simply\nmeans the rate of wins.\nTable 9. Manual evaluation of Czech !Vietnamese translation\nMethod \u0015Others\u0015All in Block\nDirect Translation 0.76 0.56\nPT Translation 0.71 0.48\nPT Triangulation 0.77 0.56\nSystem Cascades 0.86 0.56\nCombination (PT Interpolation) 0.85 0.60\nTables 8 and 9 provide the same picture: the system combination improves a little\nover the system cascades method.\nWe note that the performance of a specific method heavily depends on languages,\ndomains and corpora in question. 
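The interpretation of the 5-way rankings described above can also be written down directly: each annotation block is unfolded into its 10 pairwise comparisons, and for every system we count how often it was ranked equally or better than its competitor, as well as how often it was among the winners of the whole block. The two annotation blocks below are invented for illustration, the system labels are shorthand for the five compared systems, rank 1 is best and ties are allowed.

```python
# Sketch of turning 5-way rankings into ">= others" and ">= all in block" scores.
from itertools import combinations

def summarize(blocks):
    systems = list(blocks[0])
    wins = {s: 0 for s in systems}        # pairwise "ranked equally or better" counts
    pairs = {s: 0 for s in systems}       # pairwise comparisons each system took part in
    block_wins = {s: 0 for s in systems}  # how often the system tied for the best rank
    for block in blocks:
        best = min(block.values())
        for s, rank in block.items():
            if rank == best:
                block_wins[s] += 1
        for a, b in combinations(systems, 2):
            pairs[a] += 1
            pairs[b] += 1
            if block[a] <= block[b]:
                wins[a] += 1
            if block[b] <= block[a]:
                wins[b] += 1
    n = len(blocks)
    return {s: (wins[s] / pairs[s], block_wins[s] / n) for s in systems}

# two toy annotation blocks over the five systems (made-up ranks, 1 = best)
blocks = [
    {"direct": 3, "pt-trans": 4, "triang": 3, "cascade": 1, "combo": 2},
    {"direct": 2, "pt-trans": 3, "triang": 2, "cascade": 1, "combo": 1},
]
for system, (geq_others, geq_block) in summarize(blocks).items():
    print(f"{system:9s}  >=others {geq_others:.2f}   >=all-in-block {geq_block:.2f}")
```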
For example, system cascades achieved the best re-\nsults with our datasets and the performance of phrase table translation is better when\ntranslating the larger (Czech-English) phrase table with the smaller (English-Vietnamese)\nMT system than the other way around, regardless of the final translation direction\n(Czech$Vietnamese) using the translated phrase table.\nPivoting Methods and Data for Czech-Vietnamese Translation via English 201\n5 Conclusion\nWe carried our a set of experiments with baseline direct translation and three types\nof pivoting methods, optionally concluded by a last step that combines the different\napproaches to a single system, improving over each of the individual components. Our\ncomparative study suggests that in absence of a multi-parallel corpus, simple cascading\nof systems outperforms methods manipulating the phrase table.\nTo support further experiments in Czech $Vietnamese machine translation, we as-\nsembled and described two training corpora and created one test set. The corpora are\navailable in the Lindat repository:\n–http://hdl.handle.net/11234/1-1594 (WMT13 Vietnamese Test Set)\n–http://hdl.handle.net/11234/1-1595 (CsEnVi Pairwise Parallel Corpus)\nAcknowledgement\nThis work has received funding from the European Union’s Horizon 2020 research and\ninnovation programme under grant agreement no. 645452 (QT21).\nThis work has been using language resources developed, stored and distributed by\nthe LINDAT/CLARIN project of the Ministry of Education, Youth and Sports of the\nCzech Republic (project LM2010013).\nReferences\nBojar, Ond ˇrej and Ale ˇs Tamchyna. 2013. The design of Eman, an experiment manager. The\nPrague Bulletin of Mathematical Linguistics , 99:39–56.\nBojar, Ond ˇrej, Milo ˇs Ercegov ˇcevi´c, Martin Popel, and Omar Zaidan. 2011. A Grain of Salt for\nthe WMT Manual Evaluation. In Proceedings of the Sixth Workshop on Statistical Machine\nTranslation , pages 1–11, Edinburgh, Scotland, July. Association for Computational Linguis-\ntics.\nBojar, Ond ˇrej, Zden ˇekˇZabokrtsk ´y, Ond ˇrej Du ˇsek, Petra Galu ˇsˇc´akov ´a, Martin Majli ˇs, David\nMare ˇcek, Ji ˇr´ı Mar ˇs´ık, Michal Nov ´ak, Martin Popel, and Ale ˇs Tamchyna. 2012. The joy\nof parallelism with CzEng 1.0. In Proceedings of the 2012 International Conference on\nLanguage Resources and Evaluation .\nChen, Yu, Andreas Eisele, and Martin Kay. 2008. Improving statistical machine translation\nefficiency by triangulation. In Proceedings of the International Conference on Language\nResources and Evaluation .\nCohn, Trevor and Mirella Lapata. 2007. Machine translation by triangulation: Making effective\nuse of multi-parallel corpora. In Proceedings of the 45th Annual Meeting of the Association\nfor Computational Linguistics .\nGalu ˇsˇc´akov ´a, Petra and Ond ˇrej Bojar. 2012. Improving SMT by Using Parallel Data of a Closely\nRelated Language. In Proceedings of the Fifth International Conference Baltic Human Lan-\nguage Technologies , volume 247 of Frontiers in AI and Applications , pages 58–65, Amster-\ndam, Netherlands. IOS Press.\nGispert, Adri `a De and Jos ´e B. Mari ˜no. 2006. Catalan-english statistical machine translation\nwithout parallel corpus: Bridging through spanish. In Proceedings of 5th International Con-\nference on Language Resources and Evaluation , pages 65–68.\n202 Hoang and Bojar\nHeafield, Kenneth. 2011. KenLM: faster and smaller language model queries. 
In Proceedings of\nthe 2011 Sixth Workshop on Statistical Machine Translation .\nKoehn, Philipp, Franz Josef Och, and Daniel Marcu. 2003. Statistical phrase-based translation.\nInProceedings of the 2003 Conference of the North American Chapter of the Association for\nComputational Linguistics - Human Language Technologies .\nKoehn, Philipp, Hieu Hoang, Alexandra Birch, Chris Callison-Burch, Marcello Federico, Nicola\nBertoldi, Brooke Cowan, Wade Shen, Christine Moran, Richard Zens, Chris Dyer, Ond ˇrej\nBojar, Alexandra Constantin, and Evan Herbst. 2007. Moses: open source toolkit for statis-\ntical machine translation. In Proceedings of the 45th Annual Meeting of the Association for\nComputational Linguistics .\nKoehn, Philipp. 2005. Europarl: A parallel corpus for statistical machine translation. In MT\nSummit , volume 5, pages 79–86.\nKoehn, Philipp. 2010. Statistical Machine Translation . Cambridge University Press.\nKumar, Shankar, Franz Josef Och, and Wolfgang Macherey. 2007. Improving word alignment\nwith bridge languages. In Proceedings of the 2007 Joint Conference on Empirical Methods\nin Natural Language Processing and Computational Natural Language Learning .\nOch, Franz Josef and Hermann Ney. 2003. A systematic comparison of various statistical align-\nment models. Computational Linguistics , 29(1):19–51.\nOch, Franz Josef. 2003. Minimum error rate training in statistical machine translation. In\nProceedings of the 41st Annual Meeting on Association for Computational Linguistics .\nPapineni, Kishore, Salim Roukos, Todd Ward, and Wei-Jing Zhu. 2002. BLEU: a method for\nautomatic evaluation of machine translation. In Proceedings of the 40th Annual Meeting on\nAssociation for Computational Linguistics .\nTiedemann, J ¨org and Preslav Nakov. 2013. Analyzing the use of character-level translation with\nsparse and noisy datasets. In Proceedings of the International Conference Recent Advances\nin Natural Language Processing RANLP 2013 , pages 676–684, Hissar, Bulgaria, September.\nINCOMA Ltd. Shoumen, BULGARIA.\nTiedemann, J ¨org. 2012a. Character-based pivot translation for under-resourced languages and\ndomains. In Proceedings of the 13th Conference of the European Chapter of the Associa-\ntion for Computational Linguistics , pages 141–151, Avignon, France, April. Association for\nComputational Linguistics.\nTiedemann, Jorg. 2012b. Parallel data, tools and interfaces in OPUS. In Proceedings of the Eight\nInternational Conference on Language Resources and Evaluation .\nUtiyama, Masao and Hitoshi Isahara. 2007. A comparison of pivot methods for phrase-based\nstatistical machine translation. In Proceedings of the 2007 Conference of the North American\nChapter of the Association for Computational Linguistics Human Language Technologies .\nWu, Hua and Haifeng Wang. 2007. Pivot language approach for phrase-based statistical machine\ntranslation. In Proceedings of the 45th Annual Meeting of the Association for Computational\nLinguistics .\nZhu, Xiaoning, Zhongjun He, Hua Wu, Conghui Zhu, Haifeng Wang, and Tiejun Zhao. 2014.\nImproving pivot-based statistical machine translation by pivoting the co-occurrence count\nof phrase pairs. In Proceedings of the 2014 Conference on Empirical Methods in Natural\nLanguage Processing .\nReceived May 3, 2016 , accepted May 10, 2016", "main_paper_content": null }
{ "decision": "Unknown", "reviews": [] }
0
0
[]
[]
{ "id": "fshWMrOvagB", "year": null, "venue": "EAMT 2020", "pdf_link": "https://aclanthology.org/2020.eamt-1.3.pdf", "forum_link": "https://openreview.net/forum?id=fshWMrOvagB", "arxiv_id": null, "doi": null }
{ "title": "Efficiently Reusing Old Models Across Languages via Transfer Learning", "authors": [ "Tom Kocmi", "Ondrej Bojar" ], "abstract": null, "keywords": [], "raw_extracted_content": "Efficiently Reusing Old Models Across Languages via Transfer Learning\nTom Kocmi Ond ˇrej Bojar\nCharles University, Faculty of Mathematics and Physics\nInstitute of Formal and Applied Linguistics\nMalostranské nám ˇestí 25, 118 00 Prague, Czech Republic\n{kocmi,bojar}@ufal.mff.cuni.cz\nAbstract\nRecent progress in neural machine transla-\ntion is directed towards larger neural net-\nworks trained on an increasing amount of\nhardware resources. As a result, NMT mod-\nels are costly to train, both financially, due\nto the electricity and hardware cost, and en-\nvironmentally, due to the carbon footprint.\nIt is especially true in transfer learning for\nits additional cost of training the “parent”\nmodel before transferring knowledge and\ntraining the desired “child” model. In this\npaper, we propose a simple method of re-\nusing an already trained model for different\nlanguage pairs where there is no need for\nmodifications in model architecture. Our\napproach does not need a separate parent\nmodel for each investigated language pair,\nas it is typical in NMT transfer learning. To\nshow the applicability of our method, we\nrecycle a Transformer model trained by dif-\nferent researchers and use it to seed models\nfor different language pairs. We achieve\nbetter translation quality and shorter con-\nvergence times than when training from ran-\ndom initialization.\n1 Introduction\nNeural machine translation (NMT), the current\nprevalent approach to automatic translation, is\nknown to require large amounts of parallel training\nsentences and an extensive amount of training time\non dedicated hardware. The total training time sig-\nnificantly increases, especially when training strong\nc\r2020 The authors. This article is licensed under a Creative\nCommons 3.0 licence, no derivative works, attribution, CC-\nBY-ND.baselines, searching for best hyperparameters or\ntraining multiple models for various language pairs.\nSchwartz et al. (2019) analyzed 60 papers from\ntop AI conferences and found out that 80% of them\ntarget accuracy over efficiency, and only a small\nportion of papers argue for a new efficiency result.\nThey also noted that the increasing financial cost\nof the computations could make it difficult for re-\nsearchers to engage in deep learning research or\nlimit training strong baselines. Furthermore, in-\ncreased computational requirements have also an\nenvironmental cost. Strubell et al. (2019) estimated\nthat training a single Transformer “big” model pro-\nduces 87 kg of CO2and that the massive Trans-\nformer architecture parameter search produced 298\ntonnes of CO 2.1\nHowever, a lot of research has been already in-\nvested into cutting down the long training time by\nthe design of NMT model architectures, promot-\ning self-attentive (Vaswani et al., 2017) or convo-\nlutional (Gehring et al., 2017) over recurrent ones\n(Bahdanau et al., 2014) or the implementation of\nheavily optimized toolkits (Junczys-Dowmunt et\nal., 2018).\nIn this paper, we propose a novel view on re-\nusing already trained “parent” models without the\nneed to prepare a parent model in advance or mod-\nify its training hyper-parameters. 
Furthermore, we\npropose a second method based on a vocabulary\ntransformation technique that makes even larger\nimprovements, especially for languages using an\nalphabet different from the re-used parent model.\nOur transfer learning approach leads to better per-\nformance as well as faster convergence speed of\nthe “child” model compared to training the model\nfrom scratch. We document that our methods are\n1The paper reports numbers based on the U.S. energy mix.\nnot restricted only to low-resource languages, but\nthey can be used even for high-resource ones.\nPrevious transfer learning techniques (Neubig\nand Hu, 2018; Kocmi and Bojar, 2018) rely on\na shared vocabulary between the parent and child\nmodels. As a result, these techniques separately\ntrain parent model for each different child language\npair. In contrast, our approach can re-use one parent\nmodel for multiple various language pairs, thus\nfurther lowering the total training time needed.\nIn order to document that our approach is not\nrestricted to parent models trained by us, we re-use\nparent model trained by different researchers: we\nuse the winning model of WMT 2019 for Czech-\nEnglish language pair (Popel et al., 2019).\nThe paper is organized as follows: Section 2\ndescribes the method of Direct Transfer learning,\nincluding our improvement of vocabulary transfor-\nmation. Section 3 presents the model, training data,\nand our experimental setup. Section 4 describes the\nresults of our methods followed by the analysis in\nSection 5. Related work is summarized in Section 6\nand we conclude the discussion in Section 7.\n2 Transfer Learning\nIn this work, we present the use of transfer learning\nto reduce the training time and improve the per-\nformance in comparison to training from random\ninitialization even for high-resource language pairs.\nTransfer learning is an approach of using training\ndata from a related task to improve the accuracy of\nthe main task in question (Tan et al., 2018). One of\nthe first transfer learning techniques in NMT was\nproposed by Zoph et al. (2016). They used word-\nlevel NMT and froze several model parts, especially\nembeddings of words that are shared between par-\nent and child model.\nWe build upon the work of Kocmi and Bojar\n(2018), who simplified the transfer learning tech-\nnique thanks to the use of subword units (Wu et\nal., 2016) in contrast to word-level NMT transfer\nlearning (Zoph et al., 2016) and extended the appli-\ncability to unrelated languages.\nTheir only requirement, and also the main disad-\nvantage of the method, is that the vocabulary has\nto be shared and constructed for the given parent\nand child languages jointly, which makes the parent\nmodel usable only for the particular child language\npair. This substantially increases the overall train-\ning time needed to obtain the desired NMT system\nfor the child language pair.The method of Kocmi and Bojar (2018) con-\nsists of three steps: (1) construct the vocabulary\nfrom both the parent and child corpora, (2) train\nthe parent model with the shared vocabulary until\nconvergence, and (3) continue training on the child\ntraining data.\nNeubig and Hu (2018) call such approaches\nwarm-start, where we use the child language pair\nto influence the parent model. In our work, we\nfocus on the so-called cold-start scenario, where\nthe parent model is trained without a need to know\nthe language pair in advance. 
Therefore we cannot\nmake any modifications of the parent training to\nbetter handle the child language pair. The cold-start\ntransfer learning is expected to have slightly worse\nperformance than the warm-start approach. How-\never, it allows reusing one parent model for multiple\nchild language pairs, which reduces the total train-\ning time in comparison to the use of warm-start\ntransfer learning.\nWe present two approaches: Direct Transfer that\nignores child-specific vocabulary altogether; and\nTransformed V ocabulary, which modifies vocabu-\nlary of the already trained parent. Thus, one parent\nmodel can be used for multiple child language pairs.\n2.1 Direct Transfer\nDirect Transfer can be seen as a simplification of\nKocmi and Bojar (2018). We ignore the specifics\nof the child vocabulary and train the child model\nusing the parent vocabulary. We suppose that the\nsubword vocabulary can handle the child language\npair, although it is not optimized for it.\nWe take an already trained model and use it as\ninitialization for a child model using a different\nlanguage pair. We continue the training process\nwithout any change to the vocabulary or hyper-\nparameters. This applies even to the training param-\neters, such as the learning rate or moments.\nThis method of continued training on different\ndata while preserving hyper-parameters is used un-\nder the name “continued training” or “fine-tuning”\n(Hinton and Salakhutdinov, 2006; Miceli Barone et\nal., 2017), but it is mostly used as a domain adapta-\ntion within a given language pair.\nDirect Transfer relies on the fact that the current\nNMT uses subword units instead of words. The sub-\nwords are designed to handle unseen words or even\ncharacters, breaking the input into shorter units, pos-\nsibly down to individual bytes as implemented, for\nexample, by Tensor2Tensor (Vaswani et al., 2018).\nChild-specific EN-CS vocab.\nAvg. # per: Sent. Word Sent. Word\nOdia 95.8 3.7 496.8 19.1\nEstonian 26.0 1.1 56.2 2.3\nFinnish 22.9 1.1 55.9 2.6\nGerman 27.4 1.3 55.4 2.5\nRussian 33.3 1.3 134.9 5.3\nFrench 42.0 1.6 65.7 2.5\nTable 1: Average number of tokens per sentence (column\n“Sent.”) and average number of tokens per word (column\n“Word”) when the training corpus is segmented by child-\nspecific or parent-specific vocabulary. “Child-specific” repre-\nsents the effect of using vocabulary customized for examined\nlanguage. “EN-CS” corresponds to the use of English-Czech\nvocabulary.\nSegmented sentence\nOriginal Сьерра-Леоне\nEN-RU Сьерра_ -_Леоне_\nEN-CS Сьерра_-_\\1051;еоне_\nFigure 1: Illustration of segmentation of Russian phrase\n(gloss: Sierra Leone) with English-Czech and English-Russian\nvocabulary from our experiments. The character represents\nsplits.\nThis property ensures that the parent vocabulary\ncan, in principle, serve for any child language pair,\nbut it can be highly suboptimal, segmenting child\nwords into too many subwords.\nWe present an example of a Russian phrase\nand its segmentation based on English-Czech or\nEnglish-Russian vocabulary in Figure 1. When\nusing child-specific vocabulary, the segmentation\nworks as expected, splitting the phrase into three\ntokens. However, when we use a vocabulary that\ncontains only the Cyrillic alphabet2and not many\nlonger sequences of characters, the sentence is\nsplit into 13 tokens. 
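To make the over-segmentation effect concrete, the greedy longest-match behaviour of a wordpiece vocabulary with byte fall-back can be sketched in a few lines of Python; this is an illustrative toy segmenter with hypothetical miniature vocabularies, not the Tensor2Tensor implementation used in the experiments:

def wordpiece_segment(word, vocab):
    # Greedy longest-match segmentation of a single word; "_" marks the word end,
    # as in the examples of Figure 1. Characters missing from the vocabulary fall
    # back to their code point ("\1051;" for the Cyrillic letter L), which is what
    # inflates the token counts per word reported in Table 1.
    token, pieces, i = word + "_", [], 0
    while i < len(token):
        for j in range(len(token), i, -1):
            if token[i:j] in vocab:
                pieces.append(token[i:j])
                i = j
                break
        else:
            pieces.append("\\%d;" % ord(token[i]))
            i += 1
    return pieces

# Toy stand-ins: the child-specific vocabulary knows the word, the parent one does not.
ru_vocab = {"Леоне_", "е", "о", "н", "е_"}
cs_vocab = {"е", "о", "н", "е_"}
print(wordpiece_segment("Леоне", ru_vocab))   # ['Леоне_']
print(wordpiece_segment("Леоне", cs_vocab))   # ['\\1051;', 'е', 'о', 'н', 'е_']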
We can notice that English-\nCzech wordpiece vocabulary is missing a character\n“Л”, thus it breaks it into the byte representation\n“\\1051; ”.\nWe examine the influence of parent-specific vo-\ncabulary on the training dataset of the child. Table 1\ndocuments the segmenting effect of different vocab-\nularies. If we compare the child-specific and parent-\nspecific (“EN-CS”) vocabulary, the average number\nof tokens per sentence or per word increases more\nthan twice. For example, German has twice as many\ntokens per word compared to its child-specific vo-\ncabulary, and Russian has four times more tokens\n2This happened solely due to noise in the Czech-English parent\ntraining data.Input: Parent vocabulary (an ordered list of\nparent subwords) and the training cor-\npus for the child language pair.\nGenerate child-specific vocabulary with the\nmaximum number of subwords equal to the\nparent vocabulary size;\nforsubword S in parent vocabulary do\nifS in child vocabulary then\ncontinue;\nelse\nReplace position of S in the parent vo-\ncabulary with the first unused child\nsubword not contained in the parent;\nend\nend\nResult: Transformed parent vocabulary\nAlgorithm 1: Transforming parent vocabulary to\ncontain child subwords and match positions for\nsubwords common for both of language pairs.\ndue to Cyrillic. Odia is affected even more.\nThus, we see that ignoring the vocabulary mis-\nmatch introduces a problem for NMT models in the\nform of an increasing split ratio of tokens. As ex-\npected, this is most noticeable for languages using\ndifferent scripts.\n2.2 Vocabulary Transformation\nUsing parent vocabulary roughly doubles the num-\nber of subword tokens per word, as we showed in\nthe previous section. This problem would not hap-\npen with child-specific vocabulary. However, we\nare using an already trained parent with its vocab-\nulary. Therefore, we propose a vocabulary trans-\nformation method that replaces subwords in the\nparent wordpiece (Wu et al., 2016) vocabulary with\nsubwords from the child-specific vocabulary.\nNMT models associate each vocabulary item\nwith its vector representation (embedding). When\ntransferring the model from the parent to the child,\nwe decide which subwords should preserve their\nembedding as trained in the parent model and which\nembeddings should be remapped to new subwords\nfrom the child vocabulary. The goal is to preserve\nembeddings of subwords that are contained in both\nparent and child vocabulary. In other words, we\nreuse embeddings of subwords common to both\nparent and child vocabularies and reuse the vocabu-\nlary entries of subwords not occurring in the child\ndata for other, unrelated, subwords that the child\ndata need. Obviously, the embeddings for these\nsubwords will need to be retrained.\nOur Transformed V ocabulary method starts by\nconstructing the child-specific vocabulary with the\nsize equal to the parent vocabulary size (the parent\nmodel is trained, thus it has a fixed number of em-\nbeddings). Then, as presented in Algorithm 1, we\ngenerate an ordered list of child subwords, where\nsubwords known to the parent vocabulary are on\nthe same positions as in the parent vocabulary, and\nother subwords are assigned arbitrarily to places\nwhere parent-only subwords were stored.\nWe experimented with several possible mappings\nbetween the parent and child vocabulary. We tried\nto assign subwords based on frequency, by random\nassignment, or based on Levenshtein distance of\nparent and child subwords. 
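Whatever mapping is chosen for the new subwords, the positional re-assignment of Algorithm 1 itself is straightforward; a minimal Python sketch (our own helper name, assuming the child-specific vocabulary has already been built with the same size as the parent one) is:

def transform_vocabulary(parent_vocab, child_vocab):
    # parent_vocab / child_vocab: ordered lists of subwords of (roughly) equal size.
    # Subwords shared by both vocabularies keep their parent position, so the child
    # model reuses their trained embeddings; every parent-only position is handed
    # over to the next child subword unknown to the parent (its embedding will have
    # to be retrained).
    child_set, parent_set = set(child_vocab), set(parent_vocab)
    new_child_subwords = (s for s in child_vocab if s not in parent_set)
    transformed = []
    for subword in parent_vocab:
        if subword in child_set:
            transformed.append(subword)
        else:
            transformed.append(next(new_child_subwords, subword))
    return transformed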
However, all the ap-\nproaches reached comparable performance; neither\nof them significantly outperformed the others. One\nexception is when assigning all subwords randomly,\neven those that are shared between parent and child.\nThis method leads to worse performance, having\nseveral BLEU points lower than other approaches.\nAnother approach would be to use pretrained sub-\nword embeddings similarly as proposed Kim et al.\n(2019). However, in this paper, we focus on show-\ning, that transfer learning can be as simple as not\nusing any modifications at all.\n3 Experiments\nIn this section, we first provide the details of the\nNMT model used in our experiments and the ex-\namined set of language pairs. We then discuss the\nconvergence and a stopping criterion and finally\npresent the results of our method for recycling the\nNMT model as well as improvements thanks to the\nvocabulary transformation.\n3.1 Parent Model and its Training Data\nIn order to document that our method functions\nin general and is not restricted to our laboratory\nsetting, we do not train the parent model ourselves.\nInstead, we recycle two systems trained by Popel et\nal. (2019), namely the English-to-Czech and Czech-\nto-English winning models of WMT 2019 News\nTranslation Task. It is important to note, that we use\ntwo parent models and for experiments we always\nuse the parent model with English on the same side,\ne.g. English-to-Russian child has English-to-Czech\nas a parent. We leave experimenting with differentparents or various combinations for future works,\nbecause the goal of this work is to make approach\nmost simple.\nWe decided to use this model for several rea-\nsons. It is trained to translate into Czech, a high-\nresource language that is dissimilar from any of the\nlanguages used in this work.3At the same time,\nit is trained using the state-of-the-art Transformer\narchitecture as implemented in the Tensor2Tensor\nframework.4(Vaswani et al., 2018). We use Ten-\nsor2Tensor in version 1.8.0.\nThe model is described in Popel (2018). It is\nbased on the “Big GPU Transformer” setup as de-\nfined by Vaswani et al. (2017) with a few modifica-\ntions. The model uses reverse square root learning\nrate decay with 8000 warm-up steps and a learning\nrate of 1. It uses the Adafactor optimizer, the batch\nsize of 2900 subword units, disabled layer dropout.\nDue to the memory constraints, we drop training\nsentences longer than 100 subwords. We use child\nhyper-parameter setting equal to the parent model.\nHowever, some hyper-parameters like learning rate,\ndropouts, optimizer, and others could be modified\nfor the training of the child model. We leave these\nexperiments for future work.\nWe train models on single GPU GeForce 1080Ti\nwith 11GB memory. In this setup, 10000 training\nsteps take on average approximately one and a half\nhours. Popel et al. (2019) trained the model on\n8 GPUs for 928k steps, which means that on the\nsingle GPU, the parent model would need at least\n7424k steps, i.e. more than 45 days of training.\nIn our experiments, we train all child models up\nto 1M steps and then take the model with the best\nperformance on the development set. Because some\nof the language pairs, especially the low-resource\nones, converge within first 100k steps, we use a\nweak early stopping criterion that stops the training\nwhenever there was no improvement larger than\n0.5% of maximal reached BLEU over the past 50%\nof training evaluations (minimum of training steps\nis 100k). 
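One possible reading of this stopping rule, written out as code (the function and variable names are illustrative; this is our sketch, not the authors' training script):

def should_stop(dev_bleu_history, step, min_steps=100_000, threshold=0.005):
    # dev_bleu_history: BLEU of all development-set evaluations so far, oldest first.
    # Stop only after `min_steps` updates, and only if the best score in the most
    # recent 50% of evaluations does not exceed the best earlier score by more than
    # `threshold` (0.5%) of the maximal BLEU reached so far.
    if step < min_steps or len(dev_bleu_history) < 4:
        return False
    half = len(dev_bleu_history) // 2
    best_old = max(dev_bleu_history[:half])
    best_recent = max(dev_bleu_history[half:])
    return (best_recent - best_old) <= threshold * max(dev_bleu_history)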
This stopping criterion makes sure that no\nmodel is stopped prematurely.\n3The linguistically most similar language of our language se-\nlection is Russian, but we do not transliterate Cyrillic into\nLatin script. Therefore, the system cannot associate similar\nRussian and Czech words based on appearance.\n4https://github.com/tensorflow/\ntensor2tensor\nLanguage pair Pairs Training set Development set Test set\nEN - Odia 27k Parida et al. (2018) Parida et al. (2018) Parida et al. (2018)\nEN - Estonian 0.8M Europarl, Rapid WMT dev 2018 WMT 2018\nEN - Finnish 2.8M Europarl, Paracrawl, Rapid WMT 2015 WMT 2018\nEN - German 3.5M Europarl, News commentary, Rapid WMT 2017 WMT 2018\nEN - Russian 12.6M News Commentary, Yandex, and UN Corpus WMT 2012 WMT 2018\nEN - French 34.3MCommoncrawl, Europarl, Giga FREN,\nNews commentary, UN corpusWMT 2013 WMT dis. 2015\nTable 2: Corpora used for each language pair. The names specify the corpora from WMT 2018 News Translation Task data.\nColumn “Pairs” specify the total number of sentence pairs in training data.\nLanguage pair Baseline Direct Transfer Transformed V ocab\nBLEU Steps BLEU Steps BLEU Steps \u0001BLEU Speed-up\nEnglish-to-Odia 3.54 45k 0.26 47k 6.38z* 38k 2.84 16 %\nEnglish-to-Estonian 16.03 95k 20.75z75k 20.27z 75k 4.24 21 %\nEnglish-to-Finnish 14.42 420k 16.12z255k 16.73z*270k 2.31 36 %\nEnglish-to-German 36.72 270k 38.58z190k 39.28z* 110k 2.56 59 %\nEnglish-to-Russian 27.81 1090k 27.04 630k 28.65z* 450k 0.84 59 %\nEnglish-to-French 33.72 820k 34.41z660k 34.46z720k 0.74 12 %\nEstonian-to-English 21.07 70k 24.36z30k 24.64z*60k 3.57 14 %\nRussian-to-English 30.31 980k 23.41 420k 31.38z*700k 1.07 29 %\nTable 3: Translation quality and training time. “Baseline” is trained from scratch with its own vocabulary and child corpus only.\n“Direct Transfer” is initialized with parent model using the parent vocabulary and continues training. “Transformed V ocab” has\nthe same initialization but merges the parent and child vocabulary as described in Section 2.2. Best score and lowest training\ntime in each row in bold. The statistical significance is computed against the baseline ( z) or against “Direct Transfer” (*). Last\ntwo columns show improvements of Transformed V ocabulary in comparison to the baseline.\n3.2 Studied Language Pairs\nWe use several child language pairs to show that\nour approach is useful for various sizes of corpora,\nlanguage pairs, and scripts. To cover this range of\nsituations, we select languages in Table 2. Future\nworks could focus also on languages outside from\nIndo-European family, such as Chinese.\nAnother decision behind selecting these language\npairs is to include language pairs reaching vari-\nous levels of translation quality. This is indicated\nby automatic scores of the baseline setups ranging\nfrom 3.54 BLEU (English-to-Odia) to 36 BLEU\n(English-to-German)5, see Table 3.\nThe sizes of corpora are in Table 2. The small-\nest language pair is English-Odia, which uses the\nBrahmic writing script and contains only 27 thou-\nsand training pairs. 
The largest is the high-resource\nEnglish-French language pair.\nFor most of the language pairs, we use training\ndata from WMT (Bojar et al., 2018).6We use the\ntraining data without any preprocessing, not even\n5The systems submitted to WMT 2018 for English-to-German\ntranslation have better performance than our baseline due to\nthe fact, that we decided not to use Commoncrawl, which\nartificially made English-German parallel data less resourceful.\n6http://www.statmt.org/wmt18/tokenization.7See Table 2 for the list of used cor-\npora for each language pair. For some languages,\nwe have opted out from using all available corpora\nin order to experiment on languages containing var-\nious magnitudes of parallel sentences.\nFor high-resource English-French language pair,\nwe perform a corpora cleaning using language de-\ntection Langid.py (Lui and Baldwin, 2012). We\ndrop all sentences that are not recognized as the cor-\nrect language. It removes 6.5M (15.9 %) sentence\npairs from the English-French training corpora.\n4 Results\nAll reported results are calculated on the test data\nand evaluated with SacreBLEU (Post, 2018). The\nresults are in Table 3. We discuss separately the\ntraining time, automatically assessed translation\nquality using the parent and the Transformed V ocab-\nulary, and comparison to Kocmi and Bojar (2018)\nin the following sections.\nBaselines use the same architecture, and they\nare trained solely on the child training data with\nthe use of child-specific vocabulary. We compute\n7While the recommended best practice in past WMT evalua-\ntions was to use Moses tokenizer. It is not recommended for\nTensor2Tensor with its build-in tokenizer any more.\nstatistical significance with a paired bootstrap re-\nsampling (Koehn, 2004). We use 1000 samples and\na confidence level of 0.05. Statistically significant\nimprovements are marked by z.\n4.1 Direct Transfer Learning\nFirst, we compare the Direct Transfer learning in\ncontrast to the baseline. We see that Direct Transfer\nlearning is significantly better than the baseline in\nboth translation directions in all cases except for\nOdia and Russian, which we will discuss later. We\nget improvements for various language types, as\ndiscussed in Section 3.2. The largest improvement\nis of 4.72 BLEU for the low-resource language\npair of Estonian-English, but we also get an im-\nprovement of 0.69 BLEU for the high-resource pair\nFrench-English.\nThe results are even more surprising when we\ntake into account the fact that the model uses the\nparent vocabulary, and it is thus segmenting words\ninto considerably more subwords. This suggests\nthat the Transformer architecture generalizes very\nwell to short subwords. However, the worse per-\nformance of English-Odia and English-Russian can\nbe attributed to the different writing script. The\nOdia script is not contained in the parent vocabu-\nlary at all, leading to segmenting of each word into\nindividual bytes, the only common units with the\nparent vocabulary. Therefore, to avoid problems\nwith filtering, we increase the filtering limit of long\nsentences during training from 100 to 500 subwords\nfor these two language pairs (see Section 3.1).\n4.2 Results with Transformed Vocabulary\nAs the results in Table 3 confirm, Transformed V o-\ncabulary successfully tackles the problem of the\nchild language using a different writing script. 
We\nsee “Transformed V ocab” delivering the best per-\nformance for all language pairs except for English-\nto-Estonian, significantly improving over baseline\nand even over “Direct Transfer” in most cases.\n4.3 Training Time\nIn the introduction, we discussed that recent devel-\nopment in NMT focuses mainly on the performance\nover efficiency (Schwartz et al., 2019). Therefore,\nin this section, we discuss the amount of training\ntime required for our method to converge. We are\nreporting the number of updates (i.e. steps) needed\nto get the model used for evaluation.8\n8Another possibility would be to report wall-clock time. How-\never, that is influenced by server load and other factors. TheLanguage Transf. Warm\npair Baseline vocab StartBLEUTo Estonian 16.03 20.27 20.75\nTo Russian 27.81 28.65 29.03z\nFrom Estonian 21.07 24.64 26.00z\nFrom Russian 30.31 31.38 31.15StepsTo Estonian 95k 75k 735k\nTo Russian 1090k 450k 1510k\nFrom Estonian 70k 60k 700k\nFrom Russian 980k 700k 1465k\nTable 4: Comparison of our Transformed V ocabulary method\nwith Kocmi and Bojar (2018) (abridged as “Warm Start”). The\ntop half of table compares results in BLEU, the bottom half\nthe number of steps needed to convergence. Steps of Kocmi\nand Bojar (2018) method are reported as the sum of parent and\nchild training, due to the nature of the method.\nWe see in Table 3 that both our methods con-\nverged in a lower number of steps than the baseline.\nFor the Transformed V ocabulary method, we get a\nspeed-up of 12–59 %. The reduction in the number\nof steps is most visible in English-to-German and\nEnglish-to-Russian. It is important to note that the\nnumber of steps to the convergence is not precisely\ncomparable, and some tolerance must be taken into\naccount. It is due to the fluctuation in the training\nprocess. However, in neither of our experiments,\nTransformed V ocabulary is slower than baseline.\nThus we conclude that our Transformed V ocabulary\nmethod takes fewer training steps to finish training\nthan training a model from scratch.\n4.4 Comparison to Kocmi and Bojar (2018)\nWe replicated the experiments of Kocmi and Bojar\n(2018) with the identical framework and hyperpa-\nrameter setting in order to compare their method\nto ours. We experiment with Estonian-English and\nRussian-English language pair in both translation\ndirections. Their approach needs an individual par-\nent for every child model, so we train four models:\ntwo English-to-Czech and two Czech-to-English on\nthe same parent training data as Kocmi and Bojar\n(2018). All vocabularies contain 32k subwords. We\ncompare their method with our Transformed V ocab-\nulary. Furthermore, the results of Direct Transfer in\nTable 3 are also comparable with this experiment.\nIn Table 4, we see that their method reaches\na slightly better performance in three translation\nmodels, where English-to-Russian and Estonian-\nto-English are significantly ( z) better than Trans-\nformed V ocabulary technique; the other two are\non par with our method, which is understandable.\nThe Transformed V ocabulary cannot outperform\nnumber of steps is better for the comparison as long as the\nbatch size stays the same across experiments.\nDirect T. Transf. V . Direct T Transf. V14161820221\r 2\r 3\r 4\r\nFreeze only one Freeze all but oneBLEUEnglish-to-Estonian\nDirect T Transf. V Direct T. Transf. 
V .1520255\r 6\r 7\r 8\r\nFreeze only one Freeze all but oneEstonian-to-English\nEmbedings Encoder\nDecoder Attention\nTrain all\nFigure 2: Child BLEU scores when trained with some parameters frozen. The left plot shows English-to-Estonian and the right\nis Estonian-to-English. In both plots, the first two groups are experiments where one component is frozen and the second two are\nwhen all components but one are frozen.\nthe warm-start technique since the warm-start par-\nent model has the advantage of being trained with\nthe vocabulary prepared for the investigated child.\nHowever, when we compare the total number of\nsteps needed to reach the performance, both our\napproaches are significantly faster than Kocmi and\nBojar (2018). The most substantial improvements\nare roughly ten times faster for Estonian-to-English,\nand the smallest difference for English-to-Russian\nis two times faster. This is mostly because their\nmethod first needs to train the parent model that is\nspecialized for the child, while our method can di-\nrectly re-use any already trained model. Moreover,\nwe can see that their method is even slower than the\nbaseline model.\n5 Analysis by Freezing Parameters\nTo discover which transferred parameters are the\nmost helpful for the child model and which need to\nbe changed the most, we follow the analysis used\nby Thompson et al. (2018): When training the child,\nwe freeze some of the parameters.\nBased on the internal layout of the Transformer\nmodel in Tensor2Tensor, we divide the model into\nfour components. (i) Word embeddings (shared\nbetween encoder and decoder) map each subword\nunit to a dense vector representation. (ii) The en-\ncoder component includes all the six feed-forward\nlayers converting the input sequence to the deeper\nrepresentation. (iii) The decoder component con-\nsists again of six feed-forward layers preparing the\nchoice of the next output subword unit. (iv) The\nmulti-head attention is used throughout encoder and\ndecoder, as self-attention layers interweaved with\nthe feed-forward layers.\nWe run two sets of experiments: either freezeonly one out of the four components and leave the\nrest of the model updating or freeze everything but\nthe examined component. We also test it on two\ntranslation directions: English-to-Estonian in the\nleft hand part of Figure 2 and Estonian-to-English\nin the right hand part. In both cases, English-Czech\n(in the corresponding direction, i.e. with English\non the correct side) serves as the parent. We dis-\ncuss individual components separately, indexing\nthe experiments 1\rto8\r.\nSimilarly to Thompson et al. (2018) in domain\nadaptation, we observe that parent embeddings\nserve well in Direct Transfer, freezing them has\na minimal impact compared to the baseline in 1\r\nand 5\r. The frozen embeddings in Transformed V o-\ncabulary ( 2\r,6\r) results in significant performance\ndrops which can be attributed to the arbitrary as-\nsignment of embeddings to new subwords.\nThe comparison of all but embeddings frozen in\n4\rand 8\r(Transformed V ocabulary) is interesting.\nIn8\r, the performance of the network can be recov-\nered close to the baseline by retraining either parent\nsource embeddings or the encoder. These two com-\nponents can compensate for each other. 
This differs\nfrom the case with English reused in the source ( 4\r)\nwhere updating embeddings to the child language\nis insufficient: the decoder must be updated to pro-\nduce fluent output in the new target language and\neven with the decoder updated, the loss compared\nto the baseline is quite substantial.\nThe most important component for transfer learn-\ning is generally the component handling the new\nlanguage: decoder in English-to-Estonian and en-\ncoder in the reverse. With this component fixed, the\nperformance drops the most with this component\nfixed ( 1\r,2\r,5\r,6\r) and among the least with this\ncomponent free to update ( 3\r,4\r,7\r,8\r). This con-\nfirms that at least for examined language pair, the\nTransformer model lends itself very well to encoder\nor decoder re-use.\nOther results in Figure 2 reveal that the archi-\ntecture can compensate for some of the training\ndeficiencies. Freezing the encoder 1\r,2\r(resp. de-\ncoder for Estonian-to-English 5\r,6\r) or attention\nis not that critical as the frozen decoder (resp. en-\ncoder). The bad result of the encoder 3\r,4\r(resp.\ndecoder 7\r,8\r) being the only non-frozen compo-\nnent shows that model is not capable of providing\nall the needed capacity for the new language, unlike\nthe self-attention where the loss is not that large.\nThis behaviour correlates with our intuition that\nthe model needs to update the most the component\nthat handles the differing language with the parent\nmodel (in our case Czech).\nAll in all, these experiments illustrate the robust-\nness of the Transformer model that it is able to train\nand reasonably well utilize pre-trained weights even\nif they are severely crippled.\n6 Related Work\nThis paper focuses on re-using an existing NMT\nmodel in order to improve the performance in terms\nof training time and translation quality without any\nneed to modify the model or pre-trained weights.\nLakew et al. (2018) presented two model modifi-\ncations for multilingual MT and showed that trans-\nfer learning could be extended to transferring from\nthe parent to the first child, followed by the sec-\nond child and then the third one. They achieved\nimprovements with dynamically updating embed-\ndings for the vocabulary of a target language.\nThe use of other language pairs for improving\nresults for the target language pair has been ap-\nproached from various angles. One option is to\nbuild multilingual models (Liu et al., 2020), ideally\nso that they are capable of zero-shot, i.e. translat-\ning in a translation direction that was never part\nof the training data. Johnson et al. (2017) and Lu\net al. (2018) achieve this with a unique language\ntag that specifies the desired target language. The\ntraining data includes sentence pairs from multi-\nple language pairs, and the model implicitly learns\ntranslation among many languages. In some cases,\nit achieves zero-shot and can translate between lan-\nguages never seen together. Gu et al. (2018) tackled\nthe problem by creating universal embedding space\nacross multiple languages and training many-to-oneMT system. Firat et al. (2016) propose multi-way\nmulti-lingual systems. Their goal is to reduce the\ntotal number of parameters needed to train multiple\nsource and target models. In all cases, the methods\nare dependent on a special training schedule.\nThe lack of parallel data in low-resource lan-\nguage pairs can also be tackled by unsupervised\ntranslation (Artetxe et al., 2018; Lample et al.,\n2018). 
The general idea is to train monolingual\nautoencoders for both source and target languages\nseparately, followed by mapping both embeddings\nto the same space and training simultaneously two\nmodels, each translating in a different direction. In\nan iterative training, this pair of NMT systems is\nfurther refined, each system providing training data\nfor the other one by back-translating monolingual\ndata (Sennrich et al., 2016).\nFor very closely related language pairs, translit-\neration can be used to generate training data from\na high-resourced pair to support the low-resourced\none as described in Karakanta et al. (2018).\n7 Conclusion\nIn this paper, we focus on a setting where exist-\ning models are re-used without any preparation for\nknowledge transfer of original model ahead of its\ntraining. This is a relevant and prevailing situation\nin academia due to computing restrictions, and in-\ndustry, where updating existing models and scaling\nto more language pairs is essential. We evaluate\nand propose methods of re-using Transformer NMT\nmodels for any “child” language pair regardless of\nthe original “parent” training languages and espe-\ncially showing, that no modification is better than\ntraining from scratch.\nThe techniques are simple, effective, and appli-\ncable to models trained by others which makes it\nmore likely that our experimental results will be\nreplicated in practice. We showed that despite the\nrandom assignment of subwords, the Transformed\nV ocabulary improves the performance and shortens\nthe training time of the child model compared to\ntraining from random initialization.\nFurthermore, we showed that this approach is\nnot restricted to low-resource languages, and we\ndocumented that the highest improvements are (ex-\npectably) due to the shared English knowledge.\nMoreover, we confirmed the robustness of the\nTransformer and its ability to achieve good results\nin adverse conditions like very fragmented subword\nunits or parts of the network frozen.\nThe warm-start approach by Kocmi and Bojar\n(2018) performs slightly better than our Trans-\nformed V ocabulary, but it needs to be trained for a\nsignificantly longer time. This leaves room for ap-\nproaches that also focus on the efficiency of the\ntraining process. We perceive our approach as\na technique for increasing the performance of a\nmodel without an increase in training time. Thus,\nre-using older models in cold-start scenario of trans-\nfer learning can be used in standard NMT training\npipelines without any performance or speed losses\ninstead of random initialization as is the common\npractice currently.\nAcknowledgements\nThis study was supported in parts by the grants\n18-24210S of the Czech Science Foundation and\n825303 (Bergamot) of the European Union. This\nwork has been using language resources and tools\nstored and distributed by the LINDAT/CLARIN\nproject of the Ministry of Education, Youth and\nSports of the Czech Republic (LM2015071).\nReferences\nArtetxe, Mikel, Gorka Labaka, Eneko Agirre, and\nKyunghyun Cho. 2018. Unsupervised neural ma-\nchine translation. In Proceedings of the Sixth Inter-\nnational Conference on Learning Representations .\nBahdanau, Dzmitry, Kyunghyun Cho, and Yoshua Ben-\ngio. 2014. Neural machine translation by jointly\nlearning to align and translate. In ICLR 2015 .\nBojar, Ond ˇrej, Christian Federmann, Mark Fishel,\nYvette Graham, Barry Haddow, Matthias Huck,\nPhilipp Koehn, and Christof Monz. 2018. 
Find-\nings of the 2018 conference on machine translation\n(wmt18). In Proceedings of the Third Conference on\nMachine Translation, Volume 2: Shared Task Papers ,\npages 272–307, Belgium, Brussels, October. Associ-\nation for Computational Linguistics.\nFirat, Orhan, Kyunghyun Cho, and Yoshua Bengio.\n2016. Multi-Way, Multilingual Neural Machine\nTranslation with a Shared Attention Mechanism. In\nProceedings of the 2016 Conference of the North\nAmerican Chapter of the Association for Computa-\ntional Linguistics: Human Language Technologies ,\npages 866–875, San Diego, California, June. Associ-\nation for Computational Linguistics.\nGehring, Jonas, Michael Auli, David Grangier, De-\nnis Yarats, and Yann N Dauphin. 2017. Convolu-\ntional sequence to sequence learning. arXiv preprint\narXiv:1705.03122 .\nGu, Jiatao, Hany Hassan, Jacob Devlin, and Victor O.K.\nLi. 2018. Universal neural machine translation forextremely low resource languages. In Proceedings\nof the 2018 Conference of the North American Chap-\nter of the Association for Computational Linguistics:\nHuman Language Technologies, Volume 1 (Long Pa-\npers) , pages 344–354, New Orleans, Louisiana, June.\nAssociation for Computational Linguistics.\nHinton, Geoffrey E. and Ruslan R. Salakhutdinov.\n2006. Reducing the dimensionality of data with neu-\nral networks. science , 313(5786):504–507.\nJohnson, Melvin, Mike Schuster, Quoc Le, Maxim\nKrikun, Yonghui Wu, Zhifeng Chen, Nikhil Thorat,\nFernand a Vigas, Martin Wattenberg, Greg Corrado,\nMacduff Hughes, and Jeffrey Dean. 2017. Google’s\nmultilingual neural machine translation system: En-\nabling zero-shot translation. Transactions of the As-\nsociation for Computational Linguistics , 5:339–351.\nJunczys-Dowmunt, Marcin, Roman Grundkiewicz,\nTomasz Dwojak, Hieu Hoang, Kenneth Heafield,\nTom Neckermann, Frank Seide, Ulrich Germann, Al-\nham Fikri Aji, Nikolay Bogoychev, André F. T. Mar-\ntins, and Alexandra Birch. 2018. Marian: Fast neu-\nral machine translation in C++. In Proceedings of\nACL 2018, System Demonstrations , pages 116–121,\nMelbourne, Australia, July. Association for Compu-\ntational Linguistics.\nKarakanta, Alina, Jon Dehdari, and Josef van Genabith.\n2018. Neural machine translation for low-resource\nlanguages without parallel corpora. Machine Trans-\nlation , 32(1):167–189, Jun.\nKim, Yunsu, Yingbo Gao, and Hermann Ney. 2019. Ef-\nfective cross-lingual transfer of neural machine trans-\nlation models without shared vocabularies. In Korho-\nnen, Anna, David R. Traum, and Lluís Màrquez, ed-\nitors, Proceedings of the 57th Conference of the As-\nsociation for Computational Linguistics, ACL 2019,\nFlorence, Italy, July 28- August 2, 2019, Volume\n1: Long Papers , pages 1246–1257. Association for\nComputational Linguistics.\nKocmi, Tom and Ond ˇrej Bojar. 2018. Trivial Trans-\nfer Learning for Low-Resource Neural Machine\nTranslation. In Proceedings of the 3rd Conference\non Machine Translation (WMT) , Brussels, Belgium,\nNovember.\nKoehn, Philipp. 2004. Statistical significance tests for\nmachine translation evaluation. In Proceedings of\nEMNLP , volume 4, pages 388–395.\nLakew, Surafel M, Aliia Erofeeva, Matteo Negri, Mar-\ncello Federico, and Marco Turchi. 2018. Transfer\nlearning in multilingual neural machine translation\nwith dynamic vocabulary. IWSLT .\nLample, Guillaume, Myle Ott, Alexis Conneau, Lu-\ndovic Denoyer, and Marc’Aurelio Ranzato. 2018.\nPhrase-based & neural unsupervised machine trans-\nlation. 
In Proceedings of the 2018 Conference on\nEmpirical Methods in Natural Language Processing .\nLiu, Yinhan, Jiatao Gu, Naman Goyal, Xian Li, Sergey\nEdunov, Marjan Ghazvininejad, Mike Lewis, and\nLuke Zettlemoyer. 2020. Multilingual denoising\npre-training for neural machine translation. arXiv\npreprint arXiv:2001.08210 .\nLu, Yichao, Phillip Keung, Faisal Ladhak, Vikas Bhard-\nwaj, Shaonan Zhang, and Jason Sun. 2018. A\nneural interlingua for multilingual machine transla-\ntion. In Proceedings of the Third Conference on Ma-\nchine Translation: Research Papers , pages 84–92,\nBelgium, Brussels, October. Association for Compu-\ntational Linguistics.\nLui, Marco and Timothy Baldwin. 2012. langid.py:\nAn off-the-shelf language identification tool. In Pro-\nceedings of the ACL 2012 System Demonstrations ,\npages 25–30, Jeju Island, Korea, July. Association\nfor Computational Linguistics.\nMiceli Barone, Antonio Valerio, Barry Haddow, Ulrich\nGermann, and Rico Sennrich. 2017. Regularization\ntechniques for fine-tuning in neural machine trans-\nlation. In Proceedings of the 2017 Conference on\nEmpirical Methods in Natural Language Processing ,\npages 1489–1494, Copenhagen, Denmark, Septem-\nber. Association for Computational Linguistics.\nNeubig, Graham and Junjie Hu. 2018. Rapid adapta-\ntion of neural machine translation to new languages.\nInProceedings of the 2018 Conference on Empiri-\ncal Methods in Natural Language Processing , pages\n875–880, Brussels, Belgium, October. Association\nfor Computational Linguistics.\nParida, Shantipriya, Ondrej Bojar, and Satya Ranjan\nDash. 2018. Odiencorp: Odia-english and odia-only\ncorpus for machine translation. In Smart Computing\nand Informatics . Springer.\nPopel, Martin, Dominik Macha ˇcek, Michal\nAuersperger, Ond ˇrej Bojar, and Pavel Pecina.\n2019. English-czech systems in wmt19: Document-\nlevel transformer. In Proceedings of the Fourth\nConference on Machine Translation (Volume 2:\nShared Task Papers, Day 1) , pages 342–348, Flo-\nrence, Italy, August. Association for Computational\nLinguistics.\nPopel, Martin. 2018. CUNI Transformer Neural MT\nSystem for WMT18. In Proceedings of the Third\nConference on Machine Translation , pages 486–491,\nBelgium, Brussels, October. Association for Compu-\ntational Linguistics.\nPost, Matt. 2018. A call for clarity in reporting bleu\nscores. In Proceedings of the Third Conference on\nMachine Translation, Volume 1: Research Papers ,\npages 186–191, Belgium, Brussels, October. Asso-\nciation for Computational Linguistics.\nSchwartz, Roy, Jesse Dodge, Noah A Smith, and\nOren Etzioni. 2019. Green ai. arXiv preprint\narXiv:1907.10597 .Sennrich, Rico, Barry Haddow, and Alexandra Birch.\n2016. Improving Neural Machine Translation Mod-\nels with Monolingual Data. In Proceedings of the\n54th Annual Meeting of the Association for Compu-\ntational Linguistics (Volume 1: Long Papers) , pages\n86–96, Berlin, Germany, August. Association for\nComputational Linguistics.\nStrubell, Emma, Ananya Ganesh, and Andrew McCal-\nlum. 2019. Energy and policy considerations for\ndeep learning in NLP. In Proceedings of the 57th An-\nnual Meeting of the Association for Computational\nLinguistics , pages 3645–3650, Florence, Italy, July.\nAssociation for Computational Linguistics.\nTan, Chuanqi, Fuchun Sun, Tao Kong, Wenchang\nZhang, Chao Yang, and Chunfang Liu. 2018. A sur-\nvey on deep transfer learning. In International Con-\nference on Artificial Neural Networks , pages 270–\n279. 
Springer.\nThompson, Brian, Huda Khayrallah, Antonios Anasta-\nsopoulos, Arya D. McCarthy, Kevin Duh, Rebecca\nMarvin, Paul McNamee, Jeremy Gwinnup, Tim An-\nderson, and Philipp Koehn. 2018. Freezing subnet-\nworks to analyze domain adaptation in neural ma-\nchine translation. In Proceedings of the Third Con-\nference on Machine Translation: Research Papers ,\npages 124–132, Belgium, Brussels, October. Associ-\nation for Computational Linguistics.\nVaswani, Ashish, Noam Shazeer, Niki Parmar, Jakob\nUszkoreit, Llion Jones, Aidan N Gomez, Łukasz\nKaiser, and Illia Polosukhin. 2017. Attention is all\nyou need. In Guyon, I., U. V . Luxburg, S. Bengio,\nH. Wallach, R. Fergus, S. Vishwanathan, and R. Gar-\nnett, editors, Advances in Neural Information Pro-\ncessing Systems 30 , pages 6000–6010. Curran Asso-\nciates, Inc.\nVaswani, Ashish, Samy Bengio, Eugene Brevdo, Fran-\ncois Chollet, Aidan Gomez, Stephan Gouws, Llion\nJones, Lukasz Kaiser, Nal Kalchbrenner, Niki Par-\nmar, Ryan Sepassi, Noam Shazeer, and Jakob Uszko-\nreit. 2018. Tensor2Tensor for Neural Machine\nTranslation. In Proceedings of the 13th Conference\nof the Association for Machine Translation in the\nAmericas (Volume 1: Research Papers) , pages 193–\n199, Boston, MA, March. Association for Machine\nTranslation in the Americas.\nWu, Yonghui, Mike Schuster, Zhifeng Chen, Quoc V\nLe, Mohammad Norouzi, Wolfgang Macherey,\nMaxim Krikun, Yuan Cao, Qin Gao, Klaus\nMacherey, et al. 2016. Google’s neural ma-\nchine translation system: Bridging the gap between\nhuman and machine translation. arXiv preprint\narXiv:1609.08144 .\nZoph, Barret, Deniz Yuret, Jonathan May, and Kevin\nKnight. 2016. Transfer learning for low-resource\nneural machine translation. In Proceedings of the\n2016 Conference on Empirical Methods in Natu-\nral Language Processing , pages 1568–1575, Austin,\nTexas, November. Association for Computational\nLinguistics.", "main_paper_content": null }
{ "decision": "Unknown", "reviews": [] }
0
0
[]
[]
{ "id": "MN8gHkPTPU", "year": null, "venue": "EAMT 2006", "pdf_link": "https://aclanthology.org/2006.eamt-1.24.pdf", "forum_link": "https://openreview.net/forum?id=MN8gHkPTPU", "arxiv_id": null, "doi": null }
{ "title": "A Syntactic Skeleton for Statistical Machine Translation", "authors": [ "Bart Mellebeek", "Karolina Owczarzak", "Declan Groves", "Josef van Genabith", "Andy Way" ], "abstract": null, "keywords": [], "raw_extracted_content": "ASyntactic Skeleton forStatistical Machine\nTranslation\nBartMelleb eek,Karolina Owczarzak, Declan Groves,\nJosef VanGenabith andAndy Way\nDublin CityUniversity,National CentreforLanguage Technology ,Dublin 9,Ireland\nfmellebeek jowczarzak jdgroves jjosef jawayg@computing .dcu.ie\nAbstract\nWepresen tametho dforimproving statistical machinetranslation perfor-\nmance byusing linguistically motivatedsyntactic information. Ouralgo-\nrithm recursiv elydecomp osessource language sentences intosyntactically\nsimpler andshorter chunks, andrecomp osestheirtranslation toformtarget\nlanguage sentences. Thisimpro vesboththewordorder andlexical selection\nofthetranslation. Wereportstatistically signi\fcan trelativ eimpro vements\nof3.3% BLEU score inanexperimen t(English !Spanish) carried outon\nan800-sen tence testsetextracted fromtheEuroparl corpus.\n1Introduction\nAlmost allresearc hinMTbeingcarried out\ntodayiscorpus-based, withbyfarthemost\ndominan tparadigm beingphrase-based Sta-\ntistical MachineTranslation (SMT). Phrase-\nbased modelshaverecentlyachievedcon-\nsiderable impro vementsintranslation qual-\nity;however,theystillfacedi\u000ecult ywhen\nitcomes tomodeling long-distance depen-\ndencies ordi\u000berences inwordorder between\nsource andtarget languages. Anobvious\nwaytohelpovercome these obstacles isto\ntrytoaddasyntactic leveltothemod-\nels.While anumberofattempts havebeen\nmade toincorp orate syntactic knowledge\nintophrase-based SMT, thishasledtolittle\nimpro vementintranslation andthelossof\nlanguage-indep endence forthesystems.\nOurnovelapproac husesTransBo oster, a\nwrapp ertechnology designed toimpro vethe\noutput ofwide-co verage MTsystems (Melle-\nbeek,Khasin, VanGenabith, &Way,2005)\nbyexploiting thefactthatbothrule-based\nandstatistical MTsystems tend toper-\nformbetterattranslating shorter sentences\nthan longer ones. TransBo oster decom-\nposessource language sentences intosyntac-\ntically simpler andshorter chunks, sends the\nchunkstoabaseline MTsystem andrecom-posesthetranslated output intotarget lan-\nguage sentences. Ithasalready provedsuc-\ncessful inexperimen tswithrule-based MT\nsystems (Melleb eek, Khasin, Owczarzak,\nVanGenabith, &Way,2005). Inthispaper\nweapply theTransBo osterwrapp ertechnol-\nogytoastate-of-the-art phrase-based En-\nglish !Spanish SMT modelconstructed\nwith Pharaoh (Koehn,2004) andwere-\nportastatistically signi\fcan timpro vement\ninBLEU andNIST score.\nThepaperisorganised asfollows.Insec-\ntion2,wegiveashort overview ofthemost\nrelevantmetho dsthatincorp orate syntactic\nknowledge inSMT models.Weexplain our\napproac hinsection 3anddemonstrate it\nwithaworkedexample. Sections 4and5\ncontainthedescription, results andanaly-\nsisofourexperimen ts.Wesummarize our\n\fndings insection 6.\n2Related Researc h\nOneofthemajordi\u000eculties SMT faces is\nitsinabilit ytomodellong-distance depen-\ndencies andcorrect wordorder formany\nlanguage pairs. Inthisrespect,phrase-\nbased SMT systems faremuchbetterthan\nword-based systems, butarestillfarfrom\nperfect. 
Asreported by(Koehn,Och,&\nMarcu, 2003), evenincreasing phrase length\nabovethree wordsdoesnotleadtosigni\f-\ncantimpro vementsduetodatasparseness.\nTherefore, SMT isusually most accurate\ninverylocalised environmen tsandforlan-\nguage pairs thatdonotdi\u000ber toomuchin\nthesystematic ordering ofconstituen ts.A\nnumberofmorerecentMTmodelsattempt\ntoremedy theshortcomings ofSMT byin-\ntroducing adegree ofsyntactic information\nintotheprocess.Generally ,MTmodelsthat\ndoincorp orate syntaxdosoinalimited\nfashion, byusing syntaxoneither thesource\nortarget sidebutnotonboth. (Yamada\n&Knigh t,2001, 2002; Charniak, Knigh t,&\nYamada, 2003; Burbank etal.,2005)\nItshould alsobenoted that,todate, ap-\nproacheswhichhaveattempted toincor-\nporate more syntactic modeling intoSMT\nhaveonthewhole notyetresulted insigni\f-\ncantimpro vements.Previous approac hesin-\nclude thetree-to-string manipulation model\nof(Yamada &Knigh t,2001, 2002), andthe\nattempt tomarry PCFGlanguage models\ntoSMT (Charniak etal.,2003). Further-\nmore, theposthocreranking approac hof\n(Koehnetal.,2003) actually demonstrated\nthatadding syntaxharmed thequalityof\ntheirSMT system.\n(Chiang, 2005) presen tsanSMT model\nthatuseshierarc hical phrase probabilities.\nThis allowsforthecorrect treatmen tof\nhigher-lev eldependencies, suchasthedif-\nferentordering ofNP-mo difying relativ e\nclauses inChinese andEnglish. Inanexper-\nimentonMandarin toEnglish translation,\n(Chiang, 2005) reportsanrelativ eincrease\nof7.5% intheBLEU score overabase-\nlinePharaoh model.Although thismetho d\ncandealsuccessfully with certain notori-\nously problematic tasks andislanguage-\nindependen t,itinduces agrammar froma\nparallel textwithout relying onanylinguis-\nticannotations orassumptions. Ittherefore\ndoesnotmakeuseoflinguistic allymotivate d\nsyntax ,incontrasttoTransBo oster.3TransBo oster: Architec-\nture\nTransBo oster usesachunking algorithm to\ndivide input strings intosmaller andsim-\nplerconstituen ts,sends those constituen ts\ninaminimal necessary contexttoabaseline\nMTsystem andrecomp osestheMToutput\nchunkstoobtain theoveralltranslation of\ntheoriginal input string.\nOurapproac hpresupp osestheexistence\nofsome sortofsyntactic analysis ofthein-\nputsentence. Wereportexperimen tsonhu-\nmanparse-annotated sentences (thePennII\nTreebank (Marcus etal.,1994)) andonthe\noutput ofastate-of-the-art statistical parser\n(Charniak, 2000) insection 5.\nEssentially,eachTransBo ostercyclefrom\naparsed input string toatranslated output\nstring consists ofthefollowing5steps:\n1.Finding thePivot.\n2.Locating Argumen tsand Adjuncts\n(`Satellites') inthesource language.\n3.Creating andTranslating Skeletons and\nSubstitution Variables.\n4.Translating Satellites.\n5.Combining thetranslation ofSatellites\nintotheoutput string.\nWebrie\ry explain eachofthese steps by\nprocessing thefollowingsimple example sen-\ntence.\n(1) Thechairman, along-time rivalof\nBillGates, likesfastandcon\fden-\ntialdeals.\nBabelFish (English !Spanish) translates\n(1)as(2):\n(2) Elpresiden te,rivaldelargo plazo\ndeBillGates, gustos ayuna ylos\nrepartos con\fdenciales.\nSince thesystem haswrongly identi\fed\nfastasthemain verb(`ayunar' =`to\nfast')andhastranslated likesasanoun\n(`gustos' =`tastes' ),itisalmost impossi-\nbletounderstand theoutput. 
Thefollowing\nsections willshowhowTransBo oster inter-\nactswiththebaseline MTsystem tohelpit\nimpro veitsowntranslations.\n3.1Decomp osition ofInput\nIna\frststep,theinput sentence isdecom-\nposedintoanumberofsyntactically mean-\ningful chunksasin(3).\n(3) [ARG1] [ADJ 1]...[ARGL]\n[ADJ l]pivot [ARGL+1]\n[ADJ l+1]...[ARGL+R][ADJ l+r]\nwhere pivot=thenucleus ofthesentence,\nARG=argumen t,ADJ=adjunct, fl,rg=\nnumberofADJstoleft/righ tofpivot,and\nfL,Rg=numberofARGstoleft/righ tof\npivot.\nThepivotisthepartofthestring that\nmustremain unaltered during decomp osi-\ntioninorder toensure acorrect translation.\nInorder todetermine thepivot,wecom-\nputethehead ofthelocaltreebyadapt-\ningthehead-lexicalised grammar annotation\nscheme of(Magerman, 1995). Incertain\ncases, wederivea`complex pivot'consisting\nofthisheadterminal together withsome of\nitsneighbours,e.g.phrasal verbsorstrings\nofauxiliaries. Inthecaseoftheexample\nsentence (1),thepivotis`likes'.\nDuring thedecomp osition, itisessential\ntobeabletodistinguish betweenargumen ts\n(required elemen ts)andadjuncts (optional\nmaterial), asadjuncts cansafely beomit-\ntedfromthesimpli\fed string thatwesub-\nmittotheMTsystem. Theprocedure used\nforargumen t/adjunct location isanadapted\nversion ofHockenmaier's algorithm forCCG\n(Hockenmaier, 2003). Theresult ofthis\frst\nsteponatheexample sentence (1)canbe\nseenin(4).\n(4) [Thechairman, along-time rival\nofBillGates,] ARG 1[likes]pivot[fast\nandcon\fden tialdeals] ARG 2.\n3.2Skeletons and Substitution\nVariables\nInanextstep, wereplace theargumen ts\nbysimilar butsimpler strings, whichwe\ncall`Substitution Variables'. Thepurpose\nofSubstitution Variables is:(i)tohelpto\nreduce thecomplexit yoftheoriginal argu-\nments,whichoften leads toanimpro ved\ntranslation ofthepivot;(ii)tohelpkeep\ntrackofthelocation ofthetranslation oftheargumen tsintarget. Inchoosing an\noptimal Substitution Variable foracon-\nstituen t,there exists atrade-o\u000b betweenac-\ncuracy andretriev ability.`Static' orpre-\nviously de\fned Substitution Variables (e.g.\n`cars' toreplace theNP`fastandcon\fden-\ntialdeals') areeasytotrackintarget, since\ntheirtranslation byaspeci\fc MTengine is\nknowninadvance, buttheymightdistort\nthetranslation ofthepivotbecause ofsyn-\ntactic/seman ticdi\u000berences withtheoriginal\nconstituen t.`Dynamic' Substitution Vari-\nables comprise therealheads ofthecon-\nstituen t(e.g.`deals' toreplace theNP`fast\nandcon\fden tialdeals') guaran teeamax-\nimumsimilarit y,butaremore di\u000ecult to\ntrackintarget. Ouralgorithm emplo ysDy-\nnamic Substitution Variables \frstandbacks\no\u000btoStatic Substitution Variables ifprob-\nlemsoccur. Byreplacing theargumen tsby\ntheirSubstitution Variables andleavingout\ntheadjuncts in(1),weobtain theskeleton\nin(5)\n(5) [VARG 1]...[VARG L]pivot\n[VARG L+1]...[VARG L+R]\nwhere VARG iisthesimpler string substitut-\ningARGi\nTheresult ofthissecond steponthe\nworkedexample canbeseenin(6).\n(6) [The chairman] VARG1[likes]pivot\n[deals] VARG2.\nTransBo oster sends thissimple string to\nthebaseline MTsystem, whichthistimeis\nabletoproduceabettertranslation thanfor\ntheoriginal, more complex sentence, asin\n(7).\n(7) Elpresiden tetienegusto derepar-\ntos.\nThistranslation allowsus(i)toextract\nthetranslation ofthepivotand(ii)tode-\ntermine thelocation oftheargumen ts.This\nispossible because wedetermine thetrans-\nlations oftheSubstitution Variables (the\nchairman ,deals)atruntime. 
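The interaction of steps 1-3 with the baseline engine can be summarised in a short conceptual sketch; the function below is illustrative only (the translate argument stands in for a call to the baseline MT system, and all other names are ours, not the actual TransBooster code):

def translate_skeleton(left_vars, pivot, right_vars, translate):
    # left_vars / right_vars: the (dynamic) Substitution Variables that replace the
    # arguments to the left and right of the pivot, e.g. ["The chairman"], ["deals"].
    # The skeleton is translated as one simple sentence; because the translations of
    # the Substitution Variables are obtained at runtime, they can be searched for in
    # the skeleton translation to locate each argument's slot in the target string.
    skeleton = " ".join(left_vars + [pivot] + right_vars)
    skeleton_mt = translate(skeleton)            # e.g. "El presidente tiene gusto de repartos."
    slots = {}
    for var in left_vars + right_vars:
        var_mt = translate(var)
        slots[var] = skeleton_mt.find(var_mt)    # -1 if the variable's translation is not found
    if any(pos < 0 for pos in slots.values()):
        return None                              # triggers the back-off described below
    return skeleton_mt, slots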
If these translations are not found in (7), we replace the arguments by previously defined Static Substitution Variables. E.g. in (4), we replace 'The chairman, a long-time rival of Bill Gates' by 'The man' and 'fast and confidential deals' by 'cars'. In case the translations of the Static Substitution Variables are not found in (7), we interrupt the decomposition and have the entire input string (1) translated by the MT engine.

3.3 Translating Satellites

After finding the translation of the pivot and the location of the translation of the satellites in target, the procedure is recursively applied to each of the identified chunks 'The chairman, a long-time rival of Bill Gates' and 'fast and confidential deals'.

Since the chunk 'fast and confidential deals' contains fewer words than a previously set threshold - this threshold depends on the syntactic nature of the input - it is ready to be translated by the baseline MT system. Translating individual chunks out of context is likely to produce a deficient output or lead to boundary friction phenomena, so we need to ensure that each chunk is translated in a simple context that mimics the original. As in the case of the Substitution Variables, this context can be static (a previously established template, the translation of which is known in advance) or dynamic (a simpler version of the original context).

The dynamic context for ARG2 in (4) would be a simplified version of ARG1 followed by the pivot, 'The chairman likes', the translation of which is determined at runtime, as in (8):

(8) [The chairman likes] fast and confidential deals. → [El presidente tiene gusto de] repartos rápidos y confidenciales.

An example of a static context mimicking direct object position for simple NPs would be the string 'The man sees', which most of the time in Spanish would be translated as 'El hombre ve', as in (9):

(9) [The man sees] fast and confidential deals. → [El hombre ve] repartos rápidos y confidenciales.

Since the remaining chunk 'The chairman, a long-time rival of Bill Gates' contains more words than a previously set threshold, it is judged too complex for direct translation. The decomposition and translation procedure is now recursively applied to this chunk: it is decomposed into smaller chunks, which may or may not be suited for direct translation, and so forth.

3.4 Forming the Translation

As explained in subsection 3.3, the input decomposition procedure is recursively applied to each constituent until a certain threshold is reached. Constituents below this threshold are sent to the baseline MT system for translation. Currently, the threshold is related to the number of lexical items that each node dominates. Its optimal value depends on the syntactic environment of the constituent and the baseline MT system used. After all constituents have been decomposed and translated, they are recombined to yield the target string output to the user.

In example (1), the entire decomposition and recombination process leads to an improvement in translation quality compared to the original output by Systran in (2), as is shown in (10):

(10) El presidente, un rival de largo plazo de Bill Gates, tiene gusto de repartos rápidos y confidenciales.

4 Experimental Setup

For our experiments, the phrase-based SMT system (English→Spanish) was constructed using the Pharaoh phrase-based SMT decoder and the SRI Language Modeling toolkit.¹ We used an interpolated trigram language model with Kneser-Ney discounting.

The data used to train the system was taken from the English-Spanish section of the Europarl corpus (Koehn, 2005).
From this data, 501K sentence pairs were randomly extracted from the designated training section of the corpus and lowercased. Sentence length was limited to a maximum of 40 words for both Spanish and English, with sentence pairs having a maximum relative sentence length ratio of 1.5. From this data we used the method of (Och & Ney, 2003) to extract phrase correspondences from GIZA++ word alignments.

¹ http://www.speech.sri.com/projects/srilm/

For testing purposes two sets of data were used, each consisting of 800 English sentences. The first set was randomly extracted from section 23 of the WSJ section of the Penn-II Treebank; the second set consists of randomly extracted sentences from the test section of the Europarl corpus, which had been parsed with (Bikel, 2002).

We decided to use two different sets of test data instead of one because we are faced with two 'out-of-domain' phenomena that have an influence on the scores, one affecting the TransBooster algorithm, the other the phrase-based SMT system.

On the one hand, the TransBooster decomposition algorithm performs better on 'perfectly' parse-annotated sentences from the Penn Treebank than on the output produced by a statistical parser such as (Bikel, 2002), which introduces a certain amount of noise. On the other hand, Pharaoh was trained on data from the Europarl corpus, so it performs much better on translating Europarl data than out-of-domain Wall Street Journal text.

5 Results and Evaluation

We present results of an automatic evaluation using BLEU (Papineni, Roukos, Ward, & Zhu, 2002) and NIST (Doddington, 2002) against the 800-sentence test sets mentioned in section 4. In each case, the statistical significance of the results was tested by using the BLEU/NIST resampling toolkit described in (Zhang & Vogel, 2004).² We also conduct a manual evaluation of the first 200 sentences in the Europarl test set. Finally, we analyse the differences between the output of Pharaoh and TransBooster, and provide a number of example translations.

² http://projectile.is.cs.cmu.edu/research/public/tools/bootStrap/tutorial.htm

5.1 Automatic Evaluation

5.1.1 Europarl

English→Spanish        BLEU     NIST
Pharaoh                0.1986   5.8393
TransBooster           0.2052   5.8766
Percent. of Baseline   103.3%   100.6%

Table 1: TransBooster vs. Pharaoh: Results on the 800-sentence test set of Europarl

The comparison between TransBooster and Pharaoh on the Europarl test set is shown in Table 1. TransBooster improves on Pharaoh with a statistically significant relative improvement of 3.3% in BLEU and 0.6% in NIST score. These results show that the TransBooster approach not only works for sentences parse-annotated by humans, as reported in (Mellebeek et al., 2005), but also for previously unseen input after parsing with a statistical parser (Bikel, 2002).

5.1.2 Wall Street Journal

English→Spanish        BLEU     NIST
Pharaoh                0.1343   5.1432
TransBooster           0.1379   5.1259
Percent. of Baseline   102.7%   99.7%

Table 2: TransBooster vs. Pharaoh: Results on the 800-sentence test set of the WSJ

The comparison between TransBooster and Pharaoh on the Wall Street Journal test set is shown in Table 2. As with Europarl, TransBooster improves on Pharaoh according to the BLEU metric, but falls slightly short of Pharaoh's NIST score. In contrast to the scores on the Europarl corpus, these results are not statistically significant according to a resampling test (on 2000 resampled test sets) with the toolkit described in (Zhang & Vogel, 2004).

Although the input to TransBooster in this case consists of 'perfect' human parse-annotated sentences, we are not able to report statistically significant improvements over Pharaoh.
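The resampling test mentioned above can be approximated with a simple paired bootstrap over resampled test sets. The sketch below is not the Zhang & Vogel toolkit: it assumes a generic corpus-level scoring function (the `score` argument, which could be any BLEU or NIST implementation) and only illustrates the resampling logic on parallel lists of segment-level translations.

```python
import random

def paired_bootstrap(system_a, system_b, references, score, samples=2000, seed=0):
    """Estimate how often system_a beats system_b on resampled test sets.

    system_a, system_b and references are parallel lists of segments; `score`
    is a corpus-level metric such as BLEU or NIST (a stand-in here, not a
    specific toolkit).
    """
    rng = random.Random(seed)
    n = len(references)
    wins = 0
    for _ in range(samples):
        idx = [rng.randrange(n) for _ in range(n)]       # resample with replacement
        sample_a = [system_a[i] for i in idx]
        sample_b = [system_b[i] for i in idx]
        sample_r = [references[i] for i in idx]
        if score(sample_a, sample_r) > score(sample_b, sample_r):
            wins += 1
    # A win rate above roughly 0.95 is commonly read as a significant difference.
    return wins / samples
```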
The lack of a statistically significant improvement here can be explained by the fact that the performance of phrase-based SMT systems on out-of-domain text is very poor (items are left untranslated, etc.), as is described in (Koehn, 2005) and indicated by much lower absolute test scores in comparison to Table 1. In other words, in this case it is more difficult for TransBooster to help the SMT system to improve on its own output through syntactic guidance.

Table 3: Examples of improvements over Pharaoh: word order and lexical selection.

Original: Despite an impressive number of international studies, there is still no clear evidence of any direct link between violence and media consumption
Pharaoh: a pesar de los estudios internacionales, todavía no existe ninguna relación directa entre la violencia y media un número impresionante pruebas claras de consumo
TransBooster: pese a un número impresionante de estudios internacionales, todavía no hay pruebas claras de ninguna relación directa entre la violencia y los medios consumo
Analysis: word order: better placement of the translations of 'an impressive number' and 'clear evidence'

Original: The European Union is jointly responsible, with the countries of origin, for immigration and for organising those migration flows, which are so necessary for the development of the region.
Pharaoh: la unión europea es corresponsable de inmigración y de los flujos migratorios, que son necesarias para el desarrollo de la región, con los países de origen, organizador.
TransBooster: la unión europea es corresponsable, con los países de origen, de inmigración y de los flujos migratorios, que son necesarias para organizar el desarrollo de la región.
Analysis: word order: better placement of the translation of 'with the countries of origin' and 'organising'

Original: Presidency communication on the situation in the Middle East
Pharaoh: presidencia comunicación sobre la situación en el mediterráneo
TransBooster: presidencia comunicación sobre la situación en el cercano oriente
Analysis: lexical selection: improved translation of 'the Middle East'

Original: I am proud of the fact that the Committee on Budgetary Control has been able to agree unanimously on a draft opinion within a very short period of time.
Pharaoh: me alegra el hecho de que la comisión de presupuestos ha podido dar mi aprobación unánime sobre un proyecto dictamen en un periodo de tiempo muy corto.
TransBooster: estoy orgulloso del hecho que la comisión de presupuestos ha llevado a acuerdo unánime sobre un proyecto dictamen en un periodo de tiempo muy corto.
Analysis: lexical selection: improved translation of 'I am proud of' and 'agree unanimously'

5.2 Manual Evaluation

After a manual evaluation of the first 200 sentences of the Europarl test set, based on an average between accuracy and fluency, we considered 20% of these to be better when TransBooster was used, 7% being worse, and the remaining 73% adjudged to be similar.

The majority of improvements (70%) by invoking the TransBooster method on Pharaoh are caused by a better word order. This is because it is syntactic knowledge, and not a linguistically limited language model, that guides the placement of the translation of the decomposed input chunks. Moreover, smaller input chunks, as produced by TransBooster and translated in a minimal context, are more likely to receive correct internal ordering from the SMT language model.

The remaining 30% of improvements resulted from a better lexical selection.
This is caused not only by shortening the input, but mainly by TransBooster being able to separate the input sentences at points of least cohesion, namely, at major constituent boundaries. It is plausible to assume that probability links between the major constituents are weaker than inside them, due to data sparseness, so translating a phrase in the context of only the heads of neighbouring constituents might actually help.

Table 3 illustrates the main types of improvements with a number of examples.

6 Conclusions

We have shown that statistical machine translation improves when we add a level that incorporates syntactic information. TransBooster capitalises on the fact that MT systems generally deal better with shorter sentences, and uses syntactic annotation to decompose source language sentences into shorter, simpler chunks which have a higher chance of being correctly translated. The resulting translations are recomposed into target language sentences. The advantage of the TransBooster approach over other methods is that it is generic, being able to work with various MT systems, and that the syntactic information it uses is linguistically motivated. We show that the Pharaoh model coupled with TransBooster achieves a statistically significant relative improvement of 3.3% in BLEU score over Pharaoh alone, on English→Spanish translations of an 800-sentence test set extracted from the Europarl corpus.

References

Bikel, D. M. (2002). Design of a Multi-lingual, Parallel-processing Statistical Parsing Engine. In Proceedings of the Human Language Technology Conference (HLT 2002) (p. 24-27). San Diego, CA.

Burbank, A., Carpuat, M., Clark, S., Dreyer, M., Fox, P., Groves, D., Hall, K., Hearne, M., Melamed, D., Shen, Y., Way, A., Wellington, B., & Wu, D. (2005). Final Report of the Johns Hopkins Summer Workshop on Statistical Machine Translation by Parsing. In JHU Workshop 2005. Baltimore, MD.

Charniak, E. (2000). A maximum entropy inspired parser. In Proceedings of the First Annual Meeting of the North American Chapter of the Association for Computational Linguistics (NAACL 2000) (p. 132-139). Seattle, WA.

Charniak, E., Knight, K., & Yamada, K. (2003). Syntax-based Language Models for Statistical Machine Translation. In Proceedings of the Ninth Machine Translation Summit (p. 40-46). New Orleans, LO.

Chiang, D. (2005). A Hierarchical Phrase-Based Model for Statistical Machine Translation. In Proceedings of ACL 2005 (p. 263-270). Ann Arbor, MI.

Doddington, G. (2002). Automatic Evaluation of MT Quality using N-gram Co-occurrence Statistics. Human Language Technology, 128-132.

Hockenmaier, J. (2003). Parsing with Generative models of Predicate-Argument Structure. In Proceedings of the ACL 2003 (p. 359-366). Sapporo, Japan.

Koehn, P. (2004). Pharaoh: A beam search decoder for phrase-based statistical machine translation models. In AMTA (p. 115-124). Georgetown University, Washington DC.

Koehn, P. (2005). Europarl: A parallel Corpus for Evaluation of Machine Translation. In MT Summit X (p. 79-86). Phuket, Thailand.

Koehn, P., Och, F., & Marcu, D. (2003). Statistical Phrase-based Translation. In Proceedings of HLT-NAACL 2003 (p. 127-133). Edmonton, Canada.

Magerman, D. (1995). Statistical Decision-Tree Models for Parsing. In Proceedings of the 33rd Annual Meeting of the Association for Computational Linguistics (p. 276-283). Cambridge, MA.

Marcus, M., Kim, G., Marcinkiewicz, M., MacIntyre, R., Bies, A., Ferguson, M., Katz, K., & Schasberger, B. (1994). The Penn Treebank: Annotating Predicate Argument Structure.
In Proceedings of the ARPA Human Language Technology Workshop (p. 114-119).

Mellebeek, B., Khasin, A., Owczarzak, K., Van Genabith, J., & Way, A. (2005). Improving online Machine Translation Systems. In Proceedings of MT Summit X (p. 290-297). Phuket, Thailand.

Mellebeek, B., Khasin, A., Van Genabith, J., & Way, A. (2005). TransBooster: Boosting the Performance of Wide-Coverage Machine Translation Systems. In Proceedings of the 10th Annual Conference of the European Association for Machine Translation (p. 189-197). Budapest, Hungary.

Och, F. J., & Ney, H. (2003). A Systematic Comparison of Various Statistical Alignment Models. Computational Linguistics 29(1), 19-51.

Papineni, K., Roukos, S., Ward, T., & Zhu, W.-J. (2002). BLEU: A Method for Automatic Evaluation of Machine Translation. In Proceedings of the 40th Annual Meeting of the Association for Computational Linguistics (p. 311-318). Philadelphia, PA.

Yamada, K., & Knight, K. (2001). A Syntax-Based Statistical Translation Model. In Proceedings of the 39th Annual Conference of the Association for Computational Linguistics (p. 523-530). Toulouse, France.

Yamada, K., & Knight, K. (2002). A Decoder for Syntax-Based Statistical MT. In Proceedings of the 40th Annual Conference of the Association for Computational Linguistics (p. 303-310). Philadelphia, PA.

Zhang, Y., & Vogel, S. (2004). Measuring Confidence Intervals for the Machine Translation Evaluation Metrics. In Proceedings of the Tenth Conference on Theoretical and Methodological Issues in Machine Translation (p. 85-94). Baltimore, MD.", "main_paper_content": null }
{ "decision": "Unknown", "reviews": [] }
0
0
[]
[]
{ "id": "1NSEnIeBEx", "year": null, "venue": "EAMT 2005", "pdf_link": "https://aclanthology.org/2005.eamt-1.26.pdf", "forum_link": "https://openreview.net/forum?id=1NSEnIeBEx", "arxiv_id": null, "doi": null }
{ "title": "TransBooster: boosting the performance of wide-coverage machine translation systems", "authors": [ "Bart Mellebeek", "Anna Khasin", "Josef van Genabith", "Andy Way" ], "abstract": null, "keywords": [], "raw_extracted_content": "EAMT 2005 Conference Proceedings 189 TransBooster: Boosting the Pe rformance of Wide-Coverage \nMachine Translation Systems \nBart Mellebeek, Anna Khasin, Josef Van Genabith and Andy Way \nNational Centre for Language Technology \n School of Computing \nDublin City Universi ty, Dublin 9, Ireland. \n{mellebeek,akhasin,josef,away}@computing.dcu.ie \nAbstract. We propose the design, implementation and evaluation of a novel and modular \napproach to boost the translat ion performance of existing, wide-coverage, freely available \nmachine translation systems based on reliable and fast automatic decomposition of the translation input and corresponding composition of translation output. We provide details \nof our method, and experimental results compared to the MT systems SYSTRAN and \nLogomedia. While many avenues for further expe rimentation remain, to date we fall just \nbehind the baseline systems on the full 800-sentence testset, but in certain cases our method \ncauses the translation quality obtained via the MT systems to improve. \n1. Introduction \nA significant number of freely available, com-\nmercial, wide-coverage machine translation (MT) systems populate the market, including SDL In-ternational’s Enterprise Translation Server, Re-verso by Softissimo, Logomedia, Promt and perhaps best known BabelFish by AltaVista. \nSome of these systems are based on 1st gen-\neration ‘word-for-word’ direct translation tech-nology, and share a number of characteristics: (i) they are designed to translate wide-coverage, general language material; (ii) they are robust; and (iii) they perform comparatively limited linguistic analysis. These points are interrelated and bear further elaboration. Detailed automatic linguistic analysis of translation input is poten-tially costly (both in terms of processing time and required lingware such as computational grammars etc.), and in the past has often been inversely related to coverage and robustness. In other words, the more detailed the linguistic analysis, the smaller the coverage of the system, and conversely, the wider the coverage, the less detailed the linguistic analysis. This has led commercial, wide-coverage MT systems to con-centrate on ‘linguistics-lite’, robust design prin-ciples. In order to analyse translation input, they often consider only a limited linguistic context. A consequence of this is that existing commer-\ncial systems are much stronger when translating shorter sentences than they are on longer, more complex input. The reason behind this is sim-ple: the longer the input sentence to be trans-lated, the more likely that the automatic transla-tion system will be led astray by the complexi-ties in the source and target languages. \nWe contend that better performance in terms \nof output quality can be achieved than these systems can obtain by processing the texts that they are required to translate at any one time into smaller chunks. \nConsider the example in (1): \n(1) The chairman, a long-time rival of Bill \nGates, likes fast and confidential deals \nA reasonable translation into German (estab-\nlished by a human translator) is: \n(2) Der Vorsitzende, ein langfristiger Ri-\nvale von Bill Gates, mag schnelle und vertrauliche Abkommen. 
\nHowever, the translation produced by the Ba-\nbelFish MT system is (3): \n(3) Der Vorsitzende, ein langfristiger Ri-\nvale von Bill Gates, Gleiche fasten und vertrauliche Abkommen. \nMellebeek, et al. \n190 EAMT 2005 Conference Proceedings This involves a significant distortion of the in-\nput proposition, almost to the point of rendering it unrecognisable. The problem is that the Eng-lish verb likes is mistranslated as a noun ( Glei-\nche) and the adjective fast is completely mis-\nrecognised as a verb fasten (‘to fast’) . \nContrast what happens if you feed Babel-\nFish the shorter sentences in (4): \n(4) a. The chairman likes deals \nÆ Der Vor-\nsitzende mag Abkommen b. The chairman likes fast deals \nÆ Der \nVorsitzende mag schnelle Abkommen \nBoth German strings in (4) are perfectly accept-\nable translations of the English input and con-stitute no errors. \nThis small set of translation examples is in-\ndicative of a general trend: that commercially available, wide-coverage MT systems tend to be much better at translating short and simple input. They perform much worse on longer strings, as the extra context provided gives ample op-portunity for mistakes to be made. \nIn this paper, we present our method which \ntakes long input strings from the Penn-II Tree-bank and breaks them down recursively into smaller and simpler constituents, and translates those shorter parts individually. At the same time, we keep track of where those individual parts fit into the overall translation in order to stitch together the translation result for the en-tire input string. It is important to note that throughout the process, the MT engine itself \ndoes all the translation: we are essentially help-ing the system work to the best of its ability so as to generate better translations than would otherwise have been produced to the benefit of the end user. We use SYSTRAN\n1 because of its \nwidespread use in the industry and Logomedia2 \nsince it was deemed the better of the three on-\nine MT systems tested in (Way & Gough, 2003). Accordingly, we have set ourselves a rather challenging task: we anticipate that the poorer the MT engine, the larger the increase in translation quality to be seen from incorporat-ing the method described here. \nThe remainder of this paper is organised as \nfollows: in section 2, we provide details of re-\n \n \n1 http://www.systransoft.com \n2 http://www.lec.com lated research. In section 3, we illustrate the ap-\nproach we have taken to date. Section 4 shows a worked-out sample sentence. Section 5 includes the results of a number of experiments we have carried out on a testset of 800 sentences ran-domly extracted from Section 23 of the Penn-II Treebank, and translated by native speakers into Spanish. We provide both automatic and human evaluations of translation quality, using SYSTRAN \nand Logomedia as baseline systems. In section \n6, we note a number of possible improvements we wish to carry out in further research, and fi-nally we conclude. \n2. Related Research \nThe use of on-line systems is the biggest growth \narea in the use of MT: people are translating web pages (an area where MT provides the so-\nlution, as human translation of pages which need to be continually updated is not feasible) \nor communicating with one another in their own languages via email, using on-line MT sys-tems as the translation engine. 
\nSurprisingly, however, we are aware of very \nlittle research that has been carried out to try and investigate (a) how such systems work, and (b) how their obvious faults might be improved. The main point, of course, is that engines such as BabelFish are ‘black box’ systems, where any lexical and structural rules are hidden from the user; the only way to figure out how the sys-tem is working is to compare the input strings against the generated translations. \n(Pérez-Ortiz & Forcada, 2001) demonstrate \na laboratory experiment they created in order to show students new to MT that these on-line systems are rather more sophisticated than what they term a ‘Model 0’ MT system, a basic word-for-word version of these on-line engines. In so doing the students infer that by iteratively pro-viding the MT system with more and more con-text, certain rule-based processing is apparent. \nAs to seeking to improve on the output gen-\nerated by such systems, the only previous (yet unpublished) research that we know of took place at the University of Leuven in the late-80s. Researchers experimented with a pre-processing system named ‘Tarzan’ in which a human translator identified certain clearly de-fined syntactic units in the input sentence which could be replaced by a syntactically similar \nTransBooster: boosting the performance of wide-coverage machine translation systems \nEAMT 2005 Conference Proceedings 191 placeholder for the purposes of simplifying the \ntask of MT. \n3. Our Current Approach \n3.1. Overview \nIn the first phase of this project, we have used \nthe pre-parsed sentences in the WSJ section of the Penn-II Treebank (Marcus et al., 1994) as input to our decomposition algorithm. A further line of research will involve the possible use of statistical parsing techniques to produce this in-put from previously unseen data. \nIn order to prepare a Penn-II input sentence \nfor translation with TransBooster, the tree for that string is flattened into a simpler structure consisting of a pivot (meaningful head) and a number of satellites (argum ents and adjuncts). \nThe satellites are then replaced by syntactically similar strings, the translations of which are known in advance, as in (5) \n(5) SL: [SAT\n1] ... [SAT l] pivot [SAT l+1] ... \n[SAT l+r] \nl = number of satellites to left of pivot r = \nnumber of satellites to right of pivot \nThe string SL is then submitted to the client MT \nsystem which outputs TL. \n(6) TL: [SAT’ 1] ... [SAT’ l] pivot’ \n[SAT’ l+1] ... [SAT’ l+r] \nNote that the position of the translation SAT’ i \ndoes not necessarily have to be identical to the position of the constituent SAT\ni in the source. \nWe proceed to retrieve the translation of the pivot as well as the placement of each of the satellites. This process is applied recursively to \neach satellite found, after which the retrieved partial translations are recombined to yield the final target string corresponding to the input sentence. \nWe will extend each point of this process in \nthe subsequent sections and illustrate it with an example. \n3.2. 
Flattening Penn-II trees into \nTransBooster trees \nConsider the Penn-II tree of example (1): \n(7) (S (NP-SBJ (NP (DT the) (NN \nchairman)) (, ,) (NP (NP (DT a) (JJ long-time) (NN rival)) (PP (IN of) \n(NP (NNP Bill) (NNP Gates)))) (, ,)) (VP (VBZ likes) (NP (ADJP (JJ fast) (CC and) (JJ confidential)) (NNS deals)))) \nAfter finding the pivot ‘ likes’ (explained in sec-\ntion 3.3) and replacing the arguments ‘ the \nchairman, a long-time rival of Bill Gates ’ and \n‘fast and confidential deals ’ by adequate substi-\ntution variables (explained in section 3.5), we obtain the following flattened structure: \n(8) (S (NP-SBJ The man) (VBZ likes) \n(NP dogs)) \n3.3. Finding the Pivot \nThe pivot is most often the head terminal of the \nPenn-II node currently being examined. In cer-tain cases in English, in addition to the head, \nsome of its rightmost neighbours are used in the construction of the pivot, where we consider it too dangerous to translate either part out of con-\ntext. An obvious example is the use of auxilia-ries, as is shown in (9). \n(9) (VP (MD might) (VP (VB have) (S \n(NP-SBJ (-NONE- *-2)) (VP (TO to) (VP (VB buy) (NP (NP (DT a) (JJ large) (NN quantity)) (PP (IN of) (NP (NN sugar))))))))) \nHere the found pivot is ‘ might have to buy ’. \nAnother example would be an ADJP whose \nhead dominates a PP, as in (10). \n(10) (ADJP (JJ close) (PP (TO to) (NP \n(DT the) (NN utility) (NN industry)))) \nHere the found pivot is ‘ close to ’. \nIn the initial experiments presented here, \nonly contiguous pivots have been considered. In ongoing work, we intend to incorporate non-contiguous pivots in both source and target lan-guages. Phrasal verbs and verbs with auxiliaries can be non-contiguous pivots in the presence of intervening material. \nOne of the pivot search parameters is the \nmaximum length L of the pivot. If a head node N with L words or less in its coverage is arrived at during pivot search, the node N in its entirety is taken to be the pivot. If, on the other hand, the head node N contains too many leaf nodes (>L), we consider the head node N’ of node N \nMellebeek, et al. \n192 EAMT 2005 Conference Proceedings to be a pivot candidate, and so on, until a head \nwith L words or less in its coverage is found. This parameter allows us to experiment with varying maximum pivot lengths. Until now, the best results have been achieved for L = 4. \n3.4. Finding Arguments and Adjuncts \nin the Source Language \nWe have explained how the strings submitted to \nthe MT system comprise pivots, arguments, and adjuncts. We broaden the traditional notion of the term ‘argument’ to those nodes that are re-quired for the correct (or, at any rate, safe) translation of the parent node. The distinction between arguments and adjuncts is essential, since nodes labelled as adjuncts can be safely omitted in the SL string that we submit to the client MT system. \nFor example, in (1) a substitution of the ar-\nguments ‘ the chairman, a long-time rival of Bill \nGates ’ and ‘ fast and confidential deals ’ has to \nbe present in the string submitted to the client MT system in order to retrieve a correct transla-tion of pivot ‘ likes’. On the other hand, when \ntreating ‘ the chairman, a long-time rival of Bill \nGates ’, the apposition ‘ a long-time rival ’ can be \nsafely left out in the string submitted to the MT system. This is shown in more detail in the ex-ample in section 4. The omission of adjuncts is a simple and safe method to reduce the com-plexity of the SL candidate strings. 
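A compact way to picture the flattening, pivot-search and adjunct-dropping steps just described is the toy sketch below. The tree encoding (nested (label, children) tuples), the one-line head table and the argument test are heavily simplified assumptions for illustration only; they stand in for the Magerman-style head scheme and the adapted Hockenmaier argument/adjunct procedure used in the real system.

```python
# Toy sketch of flattening a parsed node into pivot + satellites (illustration only).

HEAD_CHILD = {"S": "VP", "VP": "VBZ", "NP": "NN"}   # assumed, heavily simplified head table

def leaves(node):
    label, rest = node
    return [rest] if isinstance(rest, str) else [w for c in rest for w in leaves(c)]

def flatten(tree, max_pivot_len=4):
    """Descend along head children until the head covers <= max_pivot_len words;
    the non-head siblings collected on the way become the satellites."""
    satellites, current = [], tree
    while not isinstance(current[1], str) and len(leaves(current)) > max_pivot_len:
        label, children = current
        head = next((c for c in children if c[0] == HEAD_CHILD.get(label)), children[0])
        satellites.extend(c for c in children if c is not head)
        current = head
    pivot = " ".join(leaves(current))
    # Toy argument/adjunct test: NPs count as arguments, everything else as adjuncts.
    args = [" ".join(leaves(c)) for c in satellites if c[0].startswith("NP")]
    adjs = [" ".join(leaves(c)) for c in satellites if not c[0].startswith("NP")]
    return pivot, args, adjs

tree = ("S", [("NP", [("DT", "The"), ("NN", "chairman")]),
              ("VP", [("VBZ", "likes"),
                      ("NP", [("JJ", "fast"), ("CC", "and"),
                              ("JJ", "confidential"), ("NNS", "deals")])])])
print(flatten(tree))  # ('likes', ['The chairman', 'fast and confidential deals'], [])
```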
Additional strategies for reducing the complexity of a sen-tence involve substituting simpler but syntacti-cally similar elements for constituents (as ex-plained in the following section) and are more hazardous. \nIn the current implementation, in cases of \ndoubt we have veered in favour of labelling nodes as arguments. We continue to experiment to see whether better translations can be ob-tained by labelling nodes as arguments only when we can be (reasonably) sure that they are indeed required by the local head. \nThe procedure used for argument/adjunct lo-\ncation is an adapted version of Hockenmaier’s algorithm for CCG (Hockenmaier, 2003). The nodes we label as arguments include all the nodes Hockenmaier labels as arguments to-gether with some of the nodes (e.g. VP children of S where S is headed by a modal verb; quanti-tative adjectives) whic h she describes as ad-juncts. In ongoing research, we wish to com-\npare this procedure with the annotation of Penn-\nII nodes with LFG functional information (Ca-hill et al., 2004). \n3.5. Substitution Variables and \nSkeletons \nWhen trying to find an appropriate substitution \nvariable for a satellite, we have to take into ac-count a trade-off between accuracy and re-trievability. On the one hand, non-word strings and certain acronyms are easy to retrieve be-cause their translation is known in advance, but often they don’t have the necessary syntactic properties to ensure a correct translation of the pivot. On the other hand, substitution variables that comprise the real head of the satellite that they substitute for are very accurate and will only in rare cases distort the translation of the pivot, but their translation is much more diffi-cult to retrieve. \nTo confirm the low accuracy of non-word \nstring substitution variables, we experimented with different kinds of substitution variables for the most frequent verb subcategorisation frames in the Penn-II Treebank (Cahill et al., 2004). We chose verbs belonging to the 10 most fre-quent subcategorization frames (8 in the active voice and 2 in the passive voice), so as to be able to handle the most frequently occurring syntactic contexts, and extracted the sentences in the Treebank which contained those verbs. This gave us 6559 frame-verb lemma pairs, for each of which we made test sentences with dummy arguments in the future and past tense. We replaced the arguments in these sentences with different kinds of substitution variables, ranging from non-word strings to syntactically similar constituents, and had these sentences translated by 4 MT systems (SYSTRAN, Logomedia, Promt\n3, and SDL4) into Spanish \nand German. We used a string comparison script to automatically check the 262360 ob-tained translations for the correctness of the lo-cation of the arguments in the target language and for the quality of the pivot translation. Al-though the results are dependent on the sub-\n \n \n3 http://www.online-transl ator.com/default.asp? \nlang=en \n4 http://www.freetranslation.com/ \nTransBooster: boosting the performance of wide-coverage machine translation systems \nEAMT 2005 Conference Proceedings 193 categorisation frame used, MT system and lan-\nguage pair, the overall results of the experiment confirmed our expectation that substitution vari-\nables with a syntactic structure similar to the one of the substituted constituent outperform simple non-word strings. 
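The automatic check used in the substitution-variable experiment above can be imagined as a small string-comparison routine along the following lines. This is a reconstruction for illustration, not the authors' script: it assumes we know, for a given engine, the expected translations of the two substitution variables and of the verb, and it simply tests whether those strings appear intact, in the expected order, in the MT output.

```python
# Hypothetical reconstruction of the checking script (illustration only).

def check_translation(mt_output, expected_left, expected_pivot, expected_right):
    """Check argument placement and pivot quality in one translated test sentence.

    expected_left / expected_right: known translations of the substitution
    variables; expected_pivot: the expected translation of the verb.
    Returns (arguments_located_correctly, pivot_translated_correctly).
    """
    left = mt_output.find(expected_left)
    right = mt_output.find(expected_right)
    args_ok = left != -1 and right != -1 and left < right
    if args_ok:
        between = mt_output[left + len(expected_left):right]
        pivot_ok = expected_pivot in between
    else:
        pivot_ok = expected_pivot in mt_output
    return args_ok, pivot_ok

# Example with a hypothetical engine output for "The man sees cars":
print(check_translation("El hombre ve los coches", "El hombre", "ve", "coches"))
# -> (True, True)
```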
\nInstead of taking an extreme position in the \ntrade-off between accuracy and retrievability, we have chosen to adopt a middle course: in or-der to find the position of the satellites in the target language, we replace each of them with a substitution variable with a syntactic structure similar (but not identical) to the satellites it re-places and whose possible translations are known beforehand in most cases. For example, a simple NP as ‘ the man ’ can replace certain \nNPs in singular, or simple clauses such as ‘ that \nthe man was sleeping ’ can be substituted for a \nmore complex SBAR. \nSubstitution variables are not only used to \nfind the location of the translated satellite in the target language. Their second function is to em-bed the pivot in a simplified context which we hope leads to an improvement in its translation. We call the string consisting of the pivot and its arguments, replaced by substitution variables, the ‘argument skeleton’.\n5 For example, the sen-\ntence in (1) takes as an argument skeleton the string in (11): \n(11) The man likes dogs. \nWe retrieve the translation of the pivot by sub-\nmitting this skeleton to the MT system and sub-tracting the known translations of the substitu-tion variables. For example, translating the ar-gument skeleton in (11) yields \n(12) Der Mann mag Hunde. \nIf we subtract the kn own translations ‘ Der Mann ’ \nand ‘ Hunde ’, we obtain the translation ‘ mag’ \nfor the pivot ‘ likes’. \nAs a safeguard, we verify that the retrieved \ntranslation of the pivot is present in the transla-tion of a ‘pivot skeleton’, which consists of the original source language string from which all adjuncts have been previously stripped. If our \n \n \n5 In a similar way ‘adjunct skeletons’ comprising \nthe argument skeleton together with the substitution variables for the adjuncts inserted in the appropriate positions are used to retrieve the position of all the adjuncts. candidate translation of the pivot is not found in \nthe translation of the pivot skeleton, the algo-rithm backs off to allow the MT system to translate the entire current node as is. Consider for example the pivot skeleton of the sentence in (1): \n(13) pivot skeleton = ‘The chairman likes \ndeals. ’ \nÆ ‘Der Vorsitzende mag Ab-\nkommen. ’ \nThe found pivot translation ‘ mag’ is present in \nthe translation of the pivot skeleton, so we con-tinue the process and focus now on the transla-tion of the satellites. \nIf the translation of the substitution variables \ncannot be found in the target language, the same order of arguments and adjuncts is as-sumed as in the source language. This is obvi-ously very simplistic, and a modicum of lin-guistic knowledge about how the target lan-guage relates to the s ource would improve the \ntarget word order in those cases. This remains an avenue for further investigation. \n3.6. Translation of Satellites \nA satellite is considered to be ready for transla-\ntion if the number of its leaf nodes is less than a predefined threshold N (the optimal N is estab-lished empirically and may vary according to the MT system and language pair). The satel-lites are then translated in a predefined template \n(derived based on the syntactic context of each satellite in its parsed and tagged Penn-II repre-sentation), and inserted where their replace-ments appear in the appropriate skeleton. If the number of leaf nodes of the satellite exceeds the threshold, the process is repeated recursively for the satellite in question. 
\nIn our example, in order to retrieve the cor-\nrect translation of ‘ fast and confidential deals ’, \nwe have to insert this constituent into a tem-plate that will force it to be interpreted as a di-rect object. One of these templates might be the string ‘ The man sees ’, which in a majority of \ncases will translate into the string ‘ Der Mann \nsieht ’, as in (14): \n(14) [The man sees] fast and confidential \ndeals. [Der Mann sieht] schnelle \nund vertrauliche Abkommen. \nMellebeek, et al. \n194 EAMT 2005 Conference Proceedings In case the translation of the template gets dis-\ntorted and cannot be retrieved, the satellite is translated without context. \n3.7. Deriving the Translation \nIt is easy to see how the above described proc-\ness can be applied recursively. If a node con-tains fewer or the same number of leaf nodes than the predefined threshold N mentioned in the previous section, that node is translated in its entirety (embedded in a context template that mimicks the original syntactic environment, if necessary). At this moment, we obtain the best results for N = 4. If the node contains more than N leaf nodes, we apply our decomposition proc-ess to each of its satellites, and so on, until all found satellites are considered small enough for translation. In the final recomposition step the translations of the pivots and satellites are re-combined to yield the translation of the original input sentence. \n4. Worked-out example \nIn this section, we will illustrate the entire proc-\ness on the example sentence in (1) \n‘The chairman, a long-time rival of Bill \nGates, likes fast and confidential deals’ \nAlgorithm : \n \nQUEUE = {S} \nWhile (QUEUE not empty) { \nNode N = shift QUEUE; \nIf (# leaf nodes of N <= 4) { translate N in context; \n} \nelse { \nfind pivot N; \nfind satellites N; \nsubstitute satellites; \nbuild skeleton(s); \ntranslate skeleton(s); find translation pivot; \nif (translation pivot not OK) { \n translate N in context; \n break; \n} \nfind location of translation satellites;\n \nadd satellites to QUEUE; \n } Recompose translations; Input to algorithm = \n(S (NP-SBJ (NP (DT The) (NN chairman)) (, \n,) (NP (NP (DT a) (JJ long-time) (NN rival)) (PP (IN of) (NP (NNP Bill) (NNP Gates)))) (, ,)) (VP (VBZ likes) (NP (ADJP (JJ fast) (CC and) (JJ confidential)) (NNS deals)))) \nQUEUE = {S} \nStep 1: \nƒ S contains more than 4 leaf nodes Æ not \nready for translation Æ decompose \nƒ Find pivot S \npivot = ‘likes’ \nƒ find satellites S \nARG1 = ‘The chairman, a long-time rival of Bill Gates’ ARG2 = ‘fast and confidential deals.’ \nƒ substitute satellites \nARG1_subst = ‘The man’ ARG2_subst = ‘dogs’ \nƒ build skeleton(s) \narg. skel = ‘The man likes dogs.’ \nƒ translate skeleton(s) \ntrans. arg. skel. = ‘Der Mann mag Hunde.’ \nƒ find translation pivot \ntrans. pivot = ‘mag’ \nƒ pivot skel = ‘The chairman likes deals.’ \ntrans pivot skel = ‘Der Vorsitzende mag Abkommen.’ ‘mag’ is present in trans pivot skel Æ con-\ntinue \nƒ find location of translation satellites \nARG1’ left of pivot’, ARG2’ right of pivot’ \nƒ add satellites to QUEUE \nQUEUE = {ARG1, ARG2} \nStep 2: \nƒ ARG1 ‘The chairman, a long-time rival of \nBill Gates’ contains more than 4 leaf nodes \nÆ not ready for translation Æ decompose \nƒ pivot = ‘The chairman’ \nƒ ADJ11 = ‘a long-time rival of Bill Gates’ \nƒ ... \nƒ QUEUE = {ADJ11, ARG2} \nStep 3: \nƒ ADJ11 contains more than 4 leaf nodes Æ \nnot ready for translation Æ decompose \nƒ pivot = ‘a long-time rival’ \nƒ ADJ111 = ‘of Bill Gates’ \nƒ ... 
\nƒ QUEUE = {ADJ111, ARG2} \nTransBooster: boosting the performance of wide-coverage machine translation systems \nEAMT 2005 Conference Proceedings 195 Step 4: \nƒ ADJ111 ‘of Bill Gates’ contains less than 5 \nleaf nodes Æ ready for translation Æ trans-\nlate in context \nƒ ‘The car of Bill Gates’ Æ ‘ D a s A u t o v o n \nBill Gates.’ \nƒ ADJ111’ = ‘von Bill Gates’ \nƒ QUEUE = {ARG2} \nStep 5: \nƒ ARG2 ‘fast and confidential deals’ contains \nless than 5 leaf nodes Æ ready for transla-\ntion Æ translate in context \nƒ ‘The man sees fast and confidential deals’ \nÆ ‘Der Mann sieht die schnellen und ver-\ntraulichen Abkommen.’ \nƒ ARG2’ = ‘die schnellen und vertraulichen \nAbkommen.’ \nƒ QUEUE = {} \nStep 6: \nƒ Recompose translation: \n‘Der Vorsitzende, ein langfristiger Rivale von Bill Gates, mag die schnellen und ver-traulichen Abkommen.’ \nƒ Original translation by Babelfish: \n‘Der Vorsitzende, ein langfristiger Rivale von Bill Gates, Gleiche fasten und vertrau-liche Abkommen.’ \n5. Results and Evaluation \nThe effectiveness of our algorithm is measured \nagainst an 800-sentence testset (min. 1 word, max. 54 words, ave. 19.75 words) from Section 23 of the Penn-II Treebank using a range of automatic MT evaluation metrics. (The toolkit we used, mteval , is obtainable from http://www. \nnist.gov/speech/tests/mt/resources/scoring.htm.) Given the requirements of our other research projects, the testset comprises all sentences in the PARC-700 (Riezler et al., 2002) and DCU-105 (Cahill et al., 2004) testsets for LFG. Al-though our approach is largely language-indepen-dent, for practical purposes we use English Æ, \nSpanish as our evaluation language pair. Groups of 200 sentences from the testset were trans-lated by four native speakers of Spanish, each of whom was a certified translator, in order to obtain a set of reference tr anslations for use with \nthe automatic evaluation metrics. Tables 5.1 and 5.2 contain the test results for \nEnglishÆSpanish using Logomedia and \nSYSTRAN respectively. \nMT system Cutoff \nlength BLEU NIST GTM \nLogomedia - 0.310 7.342 0.574 \nTB-01 4 0.213 5.867 0.342 \nTB-08 4 0.309 7.322 0.566 \nTB-12 4 0.268 6.995 0.498 \nTable 5.1. Transbooster vs Logomedia \nMT system Cutoff \nlength BLEU NIST GTM \nSYSTRAN - 0.296 7.178 0.563 \nTB-08 4 0.290 7.104 0.549 \nTB-12 4 0.264 6.756 0.494 \nTable 5.2. Transbooster vs SYSTRAN \nThe baseline Logomedia system scored 0.31 \nBLEU (Papineni et al, 2002), 7.342 NIST (Dod-dington, 2002) and 57.4% F-Score using the GTM (Turian et al., 2003) on this testset for this language pair. The first version of TransBooster \n(TB-01) scored just 0.21 BLEU, 5.867 NIST, and 34.2% F-Score. Our best results (TB-08) for a 4-word cutoff length show scores of 0.309 BLEU, 7.32 NIST and 56.7% F-Score.\n6 The \nimprovements from our initial effort to these better figures are due to using enhanced substi-tution variables to embed translations of pivots, better pivot-finding routines and improving the addition of context in which to embed the trans-lation of satellites. \nThe scores obtained by using SYSTRAN as \nour baseline system (Table 5.2) are comparable to the ones obtained by using Logomedia . More-\nover, the scores for Logomedia and SYSTRAN \non their own show Logomedia slightly outper-\nforming SYSTRAN . \nDue to our safety measure of backing off to \nthe original translation of the sentence in case the translation of the pivot is not found in the translation of the pivot skeleton (cf. 
Section \n \n \n6 When the cutoff length (the number of leaf \nnodes below which we consider a node ready for translation) increases, all scores slightly improve. The implentation of further improvements will lead to fewer backoffs, which will make these results mo-re meaningful. \nMellebeek, et al. \n196 EAMT 2005 Conference Proceedings 3.5), we back off in 85 % of the cases in version \nTB-08. Improvements in the pivot finding methods have reduced this backoff to 40% in our latest version, but have caused other errors (mainly due to a faulty substitution or context) rise to the surface, which explains the slight drop in performance. We are confident, though, that further enhancements (cf. Section 6) will lead us to improve on the baseline systems by isolating those routines which contribute posi-tively to the automatic evaluation scores from those that cause these to deteriorate. \nWe also carried out a manual inspection of \nthe translations obtained via the baseline sys-tems and our method, and there are cases such as (15) and (16) where TransBooster’s interven-tion caused translation quality to improve: \n(15) [Source] ‘Our goal is to create more \nprograms with an individual iden-\ntity,’ says Paul Amos, CNN executive \nvice president for programming. \n[LogoMedia] ‘Nuestro objetivo es \ncrear más programas con una iden-\ntidad individual, ’ Paul Amos, Vice-\npresidente Ejecutivo de CNN para la \nprogramación dice. \n [TransBooster] „Nuestro objetivo es \ncrear más programas con una iden-tidad individual , „Dice Paul Amos , Vicepresidente Ejecutivo de CNN para la programación. \nThe reduction of the arguments of ‘says’ by \nTransBooster forces Logomedia to keep the verb ‘dice’ (‘says’) and subjec t ‘Paul Amos’ together, \nwhich results in an improvement in word order. \n(16) [Source] ‘Some early selling is like-\nly to stem from investors and portfo-\nlio managers who want to lock in this year’s fat profits.’ \n[SYSTRAN] ‘Algo temprano que ven-\nde es probable provenir a los inver-sionistas y a los encargados de lista que desean trabarse en beneficios gordos relativos a este año.’ \n [TransBooster] ‘Una cierta venta \ntemprana es para provenir proba-blemente a los inversionistas y a los encargados de lista que desean tra-barse en beneficios gordos relativos \na este año.’ \nThe translation of ‘Some early selling’ in a simp-\nlified context causes its translation by Trans-Booster ( ‘Una cierta venta temprana’ ) to out-\nperform the original translation by SYSTRAN \n(‘Algo temprano que vende’) \n6. Improvements \nWe expect that improvements to the labelling of \nnodes as adjuncts and arguments, involving the refinement of the syntactic contexts handled, will reduce the error rate of TransBooster in two ways: firstly, arguments which are cur-rently mislabelled as adjuncts will no longer be omitted from the (argument) skeleton; sec-ondly, with fewer nodes defaulting to argument status, the argument skeletons will be less clut-tered than they are now. This will allow the baseline MT systems to do what we think they do best, namely process a concise, syntactically simple skeleton with a reasonable expectation of a good translation. We expect further im-provements from incorporating a named entity recogniser into the algorithm, either by creating one ourselves via the Penn-II tags for nouns, or by incorporating an independently developed module. 
\nFurthermore, more elaborate variable-substi-\ntution and context-generation routines are ex-pected to lead to a reduction in the number of cases when the translations of constituents can-not be found in the respective skeletons. \nA refinement to the matching process of the \ntranslations of substition variables may include matching stems (rather than surface forms, as at present). This is expected to lead to more match-es. \nHandling non-contiguous pivots in the source \nand target will further extend the number of syntactic contexts handled adequately by Trans-Booster. \n7. Concluding Remarks \nThe translation quality obtained from on-line \nMT systems deteriorates wi th longer input strings. \nWe have presented a method where we recur-sively break down sentences from the Penn-II Treebank into smaller and smaller constituents, and confront the MT system with these shorter \nTransBooster: boosting the performance of wide-coverage machine translation systems \nEAMT 2005 Conference Proceedings 197 sub-strings. We keep track of where those indi-\nvidual parts fit into the overall translation in or-der to stitch together the translation result for the entire input string. Throughout the process the commercial MT engine does all the transla-tion itself : our method helps the system to im-\nprove its own output translations. \nTo date the quality obtained via our approach \nfalls just below the baseline systems SYSTRAN \nand Logomedia , with a BLEU score of 0.268 \nagainst the best baseline Logomedia ’s 0.310, \nbacking off in 40% of cases. We have identified a number of research avenues which we feel will lead to further improvements, especially when we test against poorer systems. Further-more, if we isolate those cases where our algo-rithm does produce better translations than the baseline systems and exclude cases where our intervention causes translation quality to deteri-orate, then we expect to be able to improve the translation quality available from commercial, wide-coverage MT systems. \n8. References \nCAHILL, A., M. BURKE, R. O’DONOVAN, J. VAN \nGENABITH and A. WAY (2004) ‘Long-Distance \nDependency Resolution in Automatically Acquired \nWide-Coverage PCFG-Based LFG Approximations’. In Proc. 42nd Annual Meeting of the Association for \nComputational Linguistics (ACL’04) , Barcelona, \nSpain: 319–326. \nDODDINGTON, George (2002) . ‘Automatic evalua-\ntion of machine translation quality using n-gram co-\noccurrence statistics’. In Proc . Human Language \nTechnology 2003: 3rd Meeting of the NAACL , San \nDiego, CA.:128–132. HOCKENMAIER, Julia (2003). ‘Parsing with Gen-\nerative models of Predicat e-Argument Structure’. In \nProc. 41st Annual Conference of the Association for \nComputational Linguistics (ACL’03) , Sapporo, Ja-\npan: 359–366. \nMARCUS, M., G. KIM, M.A. MARCINKIEWICZ, \nR. MACINTYRE, M. FERGUSON, K. KATZ and \nB. SCHASBERGER (1994). ‘The Penn Treebank: \nAnnotating Predicate Argume nt Structure’. In Proc. \n1994 ARPA Human Language Technology Workshop , \nPrinceton, NJ: 110–115. \nPAPINENI, K., S. ROUKOS, T. WARD and W-J. \nZHU. 2002. BLEU: ‘A Method for Automatic Evalua-\ntion of Machine Translation’. In Proc . 40th Annual \nMeeting of the Association for Computational Lin-guistics (ACL’02) , Philadelphia, PA.: 311–318. \nPÉREZ-ORTIZ, J. & M. FORCADA (2001). ‘Dis-\ncovering Machine Translation Strategies: Beyond Word-for-Word Translation: a Laboratory Assign-\nment’. In Proc. 
of the Workshop on Teaching Ma-\nchine Translation, MT Summit VIII , Santiago de Com-\npostela, Spain: 57–60. \nRIEZLER, S., T. KING, R. KAPLAN, R. CROUCH, \nJ. MAXWELL, and M. JOHNSON (2002). ‘Parsing the Wall Street Journal using a Lexical-Functional \nGrammar and Discriminative Estimation Techniques’. \nIn Proc . 40th Annual Meeting of the Association for \nComputational Linguistics (ACL’02) , Philadelphia, \nPA.: 271–278. \nTURIAN, J., L. SHEN. and D. MELAMED (2003). \n‘Evaluation of Machine Tr anslation and its Evalua-\ntion’. MT Summit IX , New Orleans, LA.: 386–393. \nWAY, A. and N. GOUGH (2003). ‘ wEBMT : Devel-\noping and Validating an Example-Based Machine \nTranslation System using the World Wide Web’. \nComputational Linguistics 29 (3): 421–457", "main_paper_content": null }
{ "decision": "Unknown", "reviews": [] }
0
0
[]
[]
{ "id": "TiGCvDotkN", "year": null, "venue": "EAMT 2008", "pdf_link": "https://aclanthology.org/2008.eamt-1.10.pdf", "forum_link": "https://openreview.net/forum?id=TiGCvDotkN", "arxiv_id": null, "doi": null }
{ "title": "Packed rules for automatic transfer-rule induction", "authors": [ "Yvette Graham", "Josef van Genabith" ], "abstract": null, "keywords": [], "raw_extracted_content": "PackedRules forAutomatic Transfer-rule Induction\nYve\ntteGraham andJosefvanGenabith\nNationalCentreforLanguageTechnology,\nSchoolofComputing,\nDublinCity University,\nDublin9, ´Eire\njosef,[email protected]\nAbstract\nWe present a method of encoding transfer\nrulesin a highlyefficient packedstructure us-\ning contextualized constraints (Maxwell and\nKaplan, 1991), an existing method of encod-\ning adopted from LFG parsing (Kaplan and\nBresnan, 1982; Bresnan, 2001; Dalrymple,\n2001). Thepackedrepresentationallowsusto\nencode O(2n)transferrulesinasinglepacked\nrepresentation only requiring O(n)storage\nspace. Besides reducing space requirements,\nthe representation also has a high impact on\nthe amount of time taken to load large num-\nbers of transfer rules to memory with very\nlittle trade-off in time needed to unpack the\nrules. We include an experimental evaluation\nwhichshowsaconsiderablereductioninspace\nand time requirements for a large set of auto-\nmaticallyinducedtransferrulesbystoringthe\nrulesinthe packedrepresentation.\n1 Introduction\nProbabilistic Transfer-Based MachineTranslation is\none of several current approaches to machine trans-\nlation that combine data-driven statistical methods\nwith the use of linguistic information (Quirk et al.,\n2005; Koehn and Hoang, 2007; Ding and Palmer,\n2005; Charniak et al., 2003; Lavie, 2008; Riezler\nand Maxwell, 2006; Bojar and Hajiˇ c, 2008).\nTraditionally, transfer rules were manually de-\nveloped. Recently, methods of automatically in-\nducing transfer rules from bilingual corpora have\nemerged (Hajiˇ cet al.,2002; Eisner, 2003; Bojar and\nHajiˇ c,2008;RiezlerandMaxwell,2006). Acquiring\ntransfer rules automatically from bilingual corporahas several advantages. One obvious advantage is\nthat automatic methods of rule induction are much\nquicker than manual rule development. This means\nthat amuchlarger quantity oftransfer rulescannow\nbe produced.\nRiezlerandMaxwell(2006)usefeaturestructures\nof the Lexical Functional Grammar (LFG) formal-\nism (Kaplan and Bresnan, 1982; Bresnan, 2001;\nDalrymple, 2001) for deep transfer. They impose a\nlimit of a maximum of three primitive rules to con-\nstruct acomplexrule1. We believe removing arbi-\ntrary limits on the number of transfer rules induced\ncould result in improved translations, and therefore\nwe wish to induce as many different size rules as\npossible from a pair of parsed training sentences2.\nShort rules3are needed for high coverage of un-\nseen sentences, but where possible larger rules4are\npreferred so as to increase the likelihood of a flu-\nent target language sentence (all other things being\nequal).\nAnother issue for transfer rule induction is the\namount oflinguistic information that should bekept\ninthetransferrules. Wewouldliketoinvestigatethe\neffects of keeping all or most of the linguistic infor-\nmation in the rules. 
If we both increase the number\nof induced rules and increase the amount of infor-\n1Riezler and Maxwell (2006) construct primitive transfer\nrulesusingSMTphrasesandthenconstructlargerrulesbycom-\nbining contiguous primitive rules.\n2The notion of different size rules we refer to is related to\ndifferent length phrases inPhrase-based SMT.\n3Rules that cover a small part of the source language struc-\nture.\n4Rules that cover a large part of the target language struc-\nture.\n12th EAMT conference, 22-23 September 2008, Hamburg, Germany\n57\nmation contained in the rules, storing the rules in\nthe\nconventional way of enumerating each rule sep-\narately will require large amounts of storage space\nand time to load the rules to memory. We address\nthese problems byproviding apacked data structure\nthat efficiently stores large numbers oflinguistically\nrichtransferrules,greatlyreducingboththerequired\nstorage space and load time.\nThe paper is structured as follows. In Section 2\nwe describe dependency-based transfer rules, Sec-\ntion 3 describes in detail our packed transfer rule\nrepresentation, Section 4 describes an algorithm for\nunpacking the transfer rules. Finally, in Section 5\nwe report an experimental evaluation in which we\nextract a large number of transfer rules automati-\ncallyfrom abilingual corpus andcomparethespace\nand time requirements of the packed representation\nto that of storing each rule separately. Section 6 de-\nscribes our conclusions and future work.\n2 Dependency-Based Transfer-Rules\nIn our research we use LFG f-structures as the in-\ntermediate representation for transfer. F-structures\nare attribute-value structure encodings of bilexical\nlabelled dependencies. In order to automatically\ninduce transfer rules from a source and target f-\nstructure pair, correspondences between pairs of\nsource and target local f-structures are drawn us-\ning the predicate (PRED) of the local f-structures.\nFigure 1(a) shows an example f-structure pair with\ncorrespondences betweenlocalf-structures depicted\nby lines linking the predicates. F-structures encode\nthe grammatical relations between the words of a\nsentence and this motivates their use as a represen-\ntation for transfer-based machine translation. Sen-\ntences often contain long distance dependencies be-\ntween words. One advantage of using f-structures\nfor transfer-based machine translation is that two\nnon-adjacent dependent words in a sentence are ad-\njacent in the f-structure representation. In addition\nto these grammatical dependencies, the f-structure\nalso contains information about the atomic gram-\nmatical features of words, such as case,number,\npersonandtense. On the LHS of a transfer rule\ntheatomicfeaturesareusefultoguidetranslation by\nchoosing arule that appropriately fitsthe f-structure\nof the source language sentence, and on the RHS ofFigure3: ExampleConstraint-basedEncodingforTrans-\nfer\nRuleofFigure2(a)\nthe rules the atomic features are needed to correctly\ninflectthewordsinthetargetlanguagesentencedur-\ning generation.\nThere are many ways to visualize an f-structure.\nInFigure1(a)thef-structureisshownintheconven-\ntional LFGformat5. Figure 1(b) shows asimplified\ngraph-based visualization weuseformostoftheex-\namples in this paper. 
Each local f-structure is rep-\nresented by a node in the dependency structure la-\nbelled by its predicate value, with branches labelled\nwiththegrammatical dependencies between local f-\nstructures6.\nRiezlerandMaxwell(2006) automatically induce\ntransfer rules composed of a snippet of the original\nsource language f-structure on the LHS and a snip-\npet of the target language f-structure on the RHS.\nFigure 2 shows a subset of the rules that can be in-\nducedfromthef-structurepairshowninFigure1. In\na transfer rule, corresponding leaf-level arguments\ncan be replaced by a variable, Xi, on either side\nof the rule to map equivalent arguments in the LHS\nstructure to the appropriate place in the RHS struc-\nture. For example, the rule in Figure 2(a) maps the\nsubject of spiegelnto the subject of reflectand the\nobjectofspiegelntotheobjectof reflect. F-structure\nbased transfer rules are each stored as two sets of\nconstraints, encoding the LHS and RHS of the rule,\nrespectively. For every dependency relation that ex-\nists between two words in the sentence, a constraint\nwill encode this relation. Figure 3 shows the trans-\nfer rule in Figure 2(a) represented in terms of con-\nstraints7.\n5Without atomic features and values.\n6Atomic features and reentrancy are left out of the simpli-\nfiedrepresentation. Figure1(a)showsanexampleofreentrancy\n(Germanlocalf-structure1,valueofTOPIC).Thetransferrules\nwe induce docontain the atomic grammatical feature and reen-\ntrancyinformation.\n7Withatomic features andvalues.\n12th EAMT conference, 22-23 September 2008, Hamburg, Germany\n58\nFigure1: Example(a)F-structurePair,(b)DependencyRelatio nsinSimplifiedRepresentation,(c)ConstraintEncod-\ning for the parsedSentences ”Sprachenspiegeln die Vielfalt der Europ ¨aischenUnionwider.” and”Languagesreflect\nthediversity ofthe EuropeanUnion.”\n12th EAMT conference, 22-23 September 2008, Hamburg, Germany\n59\nFigure 2: ExampleTransfer Rules: A subset of the transfer rules automaticallyinducedfrom training f-structurepair\nshownin Figure1.\n12th EAMT conference, 22-23 September 2008, Hamburg, Germany\n60\nFor automatic rule induction, multiple transfer\nrul\nes are extracted from each f-structure pair in the\ncorpus. When several rules are extracted from a\nsingle f-structure pair, the resulting set of rules of-\nten contains a large amount of duplicated data. The\nonly existing method of encoding transfer rules, to\nthebest ofour knowledge, involves enumerating the\nentire set of LHS and RHS constraints of each rule\nseparately (Figure 3). This method of encoding re-\nsultsinalargenumber ofconstraints beingrecorded\nrepeatedly, once for each transfer rule they end up\nin. Figure 2(h) shows the transfer rule that maps the\nentire source language set of constraints to the en-\ntire target language set of constraints. Every other\nrule induced from the f-structure pair will consist of\na subset of these constraints. Recording each sub-\nsequent ruleseparatelyinvolvesduplicating thecon-\nstraintsalreadyrecordedinrule2(h). Sincethenum-\nberoftransferrulesthatcanbeinducedfromagiven\nf-structure pair is O( 2n), where nis the number of\nlocal f-structures, storing the rules by enumerating\neach rule separately ishighly inefficient.\n3 Packed Representations for\nDependency-Based Transfer Rules\nOur method of storing transfer rules involves pack-\ningallthetransferrulesinducedfromthesametrain-\ning f-structure pair into asingle packed transfer rule\ndata structure. 
Our packing method can encode\nO(2n) transfer rules without duplicating any con-\nstraints. The packed rule representation uses con-\ntextualizedconstraints(MaxwellandKaplan,1991),\na well-established method of encoding grammars in\nLFG parsing (Kaplan and Bresnan, 1982). Con-\nstraints are contextualized to improve the efficiency\nof processing disjunctive constraints of a grammar\nand thus simplify the encoding of grammatical pos-\nsibilities, byallowing disjunctive statements ascon-\nstraints. For example, the following constraint for\nthe German word dieis taken from (Maxwell and\nKaplan, 1991):\ncase(die, nom) ∨case(die, acc)\nIn this example, the value of the atomic feature,\ncase, of the word diecan be either nominative or\naccusative , depending onagiven context.\nWe adopt this approach but adapt it for our own\npurposes of translation as opposed to parsing, andFigure4: (a)PackedTransferRulewithContextVariables\n(b)\nSimplifiedRepresentationofTransferRule\nuse contextualized constraints to encode that each\nconstraint of the original f-structure pair may be in-\ncluded or excluded from a transfer rule, depending\non the context. A packed rule contains a single in-\nstance of each of the constraints of the original f-\nstructure pair with each constraint being assigned a\ncontextvariable. ThisenablestheencodingofO( 2n)\nrules in a single O(n)size packed structure. Figure\n4(a) shows an example packed rule structure8and\nFigure 4(b) shows the same rule using the simpli-\nfied visualisation. The entire set of source language\nf-structure constraints forms the LHS of the packed\nrule, and the whole set of constraints of the target\nf-structure forms the RHS. Each constraint is given\na context variable, Ai, which is used to determine\nwhether or not the constraint should be included or\nexcluded from aparticular transfer rule.\n3.1 AssigningContext Variables tothe\nF-structure Constraints\nTheconstraints of each local f-structure on the LHS\nof the packed rule is labelled with a context vari-\nable(seevariablesA0-A6onLHSoftheruleinFig-\n8Atomic features and values have been left out of this dia-\ngram tosave space.\n12th EAMT conference, 22-23 September 2008, Hamburg, Germany\n61\nFigure 5: Context Variable instantiations producing a\nTra\nnsferRule: (a) thevalueseachcontextvariableof the\npacked rule is instantiated to, (b) the instantiated packed\nrule structure (c) the transfer rule produced by the vari-\nableinstantiations\nure 4). The constraints of the corresponding local\nf-structure on the RHS is given the context variable\nof its LHS counterpart (see variables A0-A5 on the\nRHSof the rule inFigure 4). Theconstraints of any\nremaining unaligned local f-structures on the RHS\nareeachassigned another distinct variable (see vari-\nable A7 in Figure 4). Extracting a particular trans-\nfer rule from the packed structure now simply in-\nvolves assigning the value trueto the constraints of\nthe extracted rule and falseto the constraints that\narenotpart of therule. Figure5(a) showsoneofthe\npossible combinations of boolean values for the set\nof context variables given to the constraints of the\npacked rule shown in Figure 4. 
Figure 5(b) shows\nthe packed rule with context variables instantiated\nand Figure 5(c) showsthe rule that results by taking\nthe constraints labelled truefor this particular com-\nbination of boolean values9.\n9Notation:true=1andfalse=0.Figure6: Parent-DependentRelationStatementFormat\n3.2\nContextualizing theContext Variables\nThe variables are given a context in order to con-\nstrain the types of rules that can be induced. Al-\nthough it is possible to encode O( 2n) transfer rules\nwithin the packed structure, many of these rules are\nactuallyundesirable, andwethereforegiveacontext\ntothe variables toeliminate such rules.\nRiezler and Maxwell (2006) define a contiguity\nconstraint for transfer rules, that states that neither\nside of a rule may contain any gaps in the structure.\nTo enforce this constraint we encode the relations\nbetween the context variables in a series of parent-\ndependent relation statements . Arelation statement\nconsists of the context variable of a single local f-\nstructure, which we call the parentf-structure and\nspecifies twolists, eachcontaining context variables\nbelonging to the dependent f-structures of the par-\nent f-structure. Dependents of a parent are split into\ntwo disjoint sets, optionaldependents and obliga-\ntorydependents. The inclusion in a transfer rule\nof the constraints labelled by an obligatory depen-\ndentcontext variable is entailed by the inclusion of\nthe constraints of its parentin the rule. The con-\nstraints of an optional dependent , ontheother hand,\nmay either be included or omitted from a rule that\nincludes its parent’s constraints. The distinction be-\ntweenobligatory andoptionaldependents is useful\nto permit a rule induction algorithm to constrain the\nrulessothattheinclusionofagivenlocalf-structure\nin the rule entails the inclusion of one or more of\nits dependents10. Figure 6 shows the format of the\nrelation statements and Figure 7 shows the relation\nstatements that constrain the rules encoded in the\npacked structure of Figure 4.\n4 Unpacking the Rules\nUnpacking a transfer rule involves instantiating the\ncontext variables of the constraints that are part of\nthe rule to true and the rest of the context variables\nin the packed structure to false. Unpacking all of\n10However,ifsuchconstraintsarenotrequired,thenitispos-\nsible tomake all of the dependents inthe rules optional.\n12th EAMT conference, 22-23 September 2008, Hamburg, Germany\n62\nFigure 7: Node Relations for Packed Rules of Depen-\nden\ncyStructuresin Figure3\nAlgorithm:\nInput: f-structure f and\nrule root r\nOutput: vector v of possible boolean\nvalues for the context\nvariable of f\n// A\nif f==r then\nv = { true }\n// B\nelse if parent(f)==false then\nv = { false }\n// C\nelse if obligatory(f) then\nv = { true }\n// D\nelse\nv = { true , false }\nend if\nFigure 8: Algorithm for Unpacking Transfer Rules from\nthePackedRuleRepresentation.\nthe rules from the structure involves assigning all\npossible combinations of true and false values to\nthe context variables with respect to the contiguity\nconstraint (Riezler and Maxwell, 2006) and relation\nstatements. The algorithm in Figure 8 is applied to\neach constraint variable recursively in a top-down\nfashion starting with the context variable of the out-\nermost f-structure. 
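To make the unpacking step concrete, the following is a minimal, illustrative Python sketch of the Figure 8 procedure; it is not the implementation used in this work. It assumes a packed rule is modelled as a small tree of context variables, each carrying its constraints plus its obligatory and optional dependents in the sense of the relation statements of Figure 6. The class name, function names and toy constraints are purely illustrative.

```python
from itertools import product

class ContextVar:
    """One context variable (local f-structure) of a packed rule."""
    def __init__(self, name, constraints, obligatory=(), optional=()):
        self.name = name
        self.constraints = list(constraints)   # constraints labelled by this variable
        self.obligatory = list(obligatory)     # dependents entailed by including this node
        self.optional = list(optional)         # dependents that may be included or left out

def unpack(rule_root):
    """Return all rules (as frozensets of constraints) rooted at rule_root.

    Mirrors the algorithm of Figure 8: the rule root is instantiated to true,
    obligatory dependents of an included node are true, optional dependents
    branch into true/false, and dependents of an excluded node stay false,
    which enforces the contiguity constraint.
    """
    def expand(node):
        choices = []
        for dep in node.obligatory:
            choices.append(expand(dep))                  # must be included
        for dep in node.optional:
            choices.append(expand(dep) + [frozenset()])  # may also be excluded
        rules = []
        for combo in product(*choices):
            constraints = set(node.constraints)
            for part in combo:
                constraints |= part
            rules.append(frozenset(constraints))
        return rules

    return expand(rule_root)

# Toy example loosely following Figure 4: a head with one obligatory
# and one optional dependent yields exactly two rules.
obj = ContextVar("A2", ["obj(reflect, X2)"])
subj = ContextVar("A1", ["subj(reflect, X1)"])
head = ContextVar("A0", ["pred(f0, reflect)"], obligatory=[subj], optional=[obj])

for rule in unpack(head):
    print(sorted(rule))
```

Rules rooted at other local f-structures can be enumerated by running the same routine on the corresponding node; the cost of this enumeration is what the experimental evaluation below reports as unpacking time.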
The set of solutions of the algo-\nrithm retrieves all unpacked rules from the packed\nrepresentation.\n5 Experimental Evaluation\nIn order to evaluate the effects of using the packed\nrule representation on the space required to store\ntransfer rules, we ran an automatic transfer rule in-\nduction algorithm on sentences of the Europarl cor-\npus. WerestrictedthetestcorpustoGerman-English\nsentences within the length range of 5 to 15 words.\nThis resulted in 219,666 sentence pairs. We re-\nserved2000ofthesesentencesasadevelopmentset.\nEach side of the corpus was parsed with a monolin-\ngual LFG grammar (Butt et al., 2002; Riezler et al.,2002). The automatic rule induction algorithm used\na bilingual dictionary (Richter, 2007) and Giza++\nword alignments (Och and Ney, 2000) to align lo-\ncalf-structures. Apackedtransferrepresentation for\neach input f-structure pair was induced. All of the\nrules were then unpacked and counted. Our rule in-\nduction algorithm induced 5,148,874 transfer rules\nfromthetrainingdataf-structurepairs. Thisresulted\ninanaverageof23.65rulesbeinginducedfromeach\naligned f-structure pair11. The total time taken for\nthe rule extraction algorithm was approximately 3.5\nhoursrunning thealgorithm on8parallel processors\n(28 CPUhours).\nInorder todetermine theeffect ofthepacked rep-\nresentation we randomly selected 10 sets of 1000\nsentences from the training data and examined the\namount of space required to store the rules induced\nfrom these sentences in the packed representation\nand in the conventional way of storing rules, i.e.\nenumerating each rule separately. Time and space\nrequirements were recorded for each of the 10 sets.\nThe results for each set of rules are shown in Ta-\nble 1, as well as the average of these results and an\nAll Rules Estimate , i.e. an estimate of results for\nrules extracted from the entire training corpus12.\nThe average number of rules induced from a set of\n1,000 training sentences was 23,955. The packed\nrepresentation reduces the average disk space re-\nquired to store rules extracted from 1,000 training\nsentencesfrom95.96Mto7.17M,andtheestimated\ndiskspacerequiredfor rulesinduced fromtheentire\ntrainingcorpusisreducedfrom20.4Gto1.52G.The\naverageamountoftimetakentoload23,955rulesto\nmemory is reduced from 207.4 seconds to 18.1 sec-\nonds,andthereductioninloadtimeforthe AllRules\nEstimate is from 12 hours 32 minutes to 1 hour 6\nminutes. The average time to retrieve 23,955 rules\nfrom memory as expected is slightly increased from\n1.8seconds fortheenumerated representation to2.6\nseconds for the packed representation, with the All\nRules Estimate increasing from 6 minutes 31 sec-\nonds to 9 minutes 24 seconds. The average time to\n11We refer to the number of rule tokens as opposed to types\nhere.\n12Resources did not permit unpacking all of the induced\nrules, therefore estimates were calculated. The All Rules Es-\ntimatewascalculatedbymultiplyingtheaverageresultforaset\nof 1,000 sentence pairs by217.666.\n12th EAMT conference, 22-23 September 2008, Hamburg, Germany\n63\nDiskSpace WriteTime LoadTime Unpacking Time\nSetNo. Rules Enum. Packed Enum. Packed Enum. Packed Enum. 
Packed\n1 24,121 96.37M 7.2M 144s 128s 211s 17s 2s 3s\n2 24,486 98.89M 7.16M 145s 127s 215s 19s 2s 3s\n3 23,650 93.58M 7.17M 142s 133s 200s 18s 2s 2s\n4 23,882 96.83M 7.22M 149s 118s 210s 18s 2s 2s\n5 24,146 98.03M 7.15M 148s 128s 212s 17s 2s 3s\n6 23,355 91.75M 7.1M 140s 128s 198s 21s 2s 3s\n7 23,620 94.55M 7.21M 141s 142s 204s 18s 2s 2s\n8 23,687 94.02M 7.11M 137s 124s 201s 17s 1s 3s\n9 23,534 94.95M 7.12M 142s 120s 204s 17s 1s 3s\n10 25,069 100.66M 7.26M 152s 231s 219s 19s 2s 2s\nAverage 23,955 95.96M 7.17M 144s137.9s 207.4s 18.1s 1.8s 2.6s\nAll\nRules 5,214,189 20.4G 1.52G 8h43m 8h20m 12h32m 1h06m 6m31s 9m24s\nEstimate\nTable1: SpaceandTimeComparisonofEnumeratedRules(Enum.) V ersusPackedRepresentation(Packed): Results\nshown are for rules induced from 10 randomly selected sets of 1000 training sentence pairs. An average result for a\nset of1000sentencepairsisalsoincludedandanestimateofthespaceandtime requirementsforinducingrulesfrom\ntheAll RulesEstimate (M =megabytes,G =gigabytes,h= hours,m=minutes,s= seconds).\nrecord 23,955 rules to disk was decreased from 144\nseconds to137.9seconds, andfrom 8hours 43min-\nutes to 8 hours 20 minutes for All Rules Estimate .\nTheAll Rules Estimate for the total number of rules\nis 5,214,189, which is close to the actual no. of in-\nduced rules mentioned above.\nOur experimental evaluation clearly shows using\nthe packed representation of transfer rules has two\nmajor advantages. Both the required disk space and\ntimeneededtoloadtherulestomemoryarereduced\nby more than a factor of 10. This is achieved with\nvery little trade-off in the time taken by the unpack-\ning algorithm that retrieves the rules from memory,\nas the estimate increase in time taken to retrieve\n23,955rulesfrommemoryislessthanasecond, and\ntheAllRulesEstimate showsanincreaseoflessthan\nthreeminutestoretrieveoverfivemillionrulesfrom\nmemory.\n6 Conclusions\nWe presented a new method of encoding\ndependency-based transfer rules in an effi-\ncient packed representation. The method is a\nstraight-forward approach that uses contextualized\nconstraints and achieves the ability to encodeO(2n)transfer rules in a O(n)size data structure.\nOur experimental evaluation shows an impressive\nreduction in the amount of disk space required to\nstore the transfer rules as well as a great reduction\ninthetimeneeded toloadalarge number of rulesto\nmemory.\nThismethod of packing transfer rules iscurrently\nusedatthestageoftransferruleinduction. However,\nwe believe the packing scheme could be used for\npacking rules in the transfer chart. This could pro-\nvideameansofreducingthememoryneededforde-\ncodingandallowalarger beamsizeforbeamsearch\ndecoding,whichcouldresultinimprovedtranslation\nquality. In addition, the method could be applied to\nconstrain factoring of linguistic features contained\nin transfer rules. We plan to carry out this research\ninthe future.\n7 Acknowledgements\nThe work presented in this paper was partly funded\nby a Science Foundation Ireland PhD studentship\nP07077-60101. We would like to thank Mary\nHearne and John Maxwell for their assistance and\ncomments.\n12th EAMT conference, 22-23 September 2008, Hamburg, Germany\n64\nReferences\nOnd\nˇ rej Bojar and Martin ˇCmejrek. 2007. Mathematical\nModel of Tree Transformations. Project Euromatrix\nDeliverable3.2 ,Ufal,CharlesUniversity,Prague.\nOndˇ rej Bojar and Jan Hajiˇ c. 2008. Phrase-Based and\nDeep Syntactic English-to-Czech Statistical Machine\nTranslation. 
In Proceedings of the third Workshop\non Statistical Machine Translation , Columbus, Ohio,\nJune2008.\nJoan Bresnan. 2001. Lexical-FunctionalSyntax., Black-\nweelmOxford,2001.\nMiriam Butt, Helge Dyvik, Tracy H. King, Hiroshi Ma-\nsuichiandChristianRohrer. 2002. TheParallelGram-\nmar Project. (grammar version 2005) In Proceed-\ningsofthe19thInternationalConferenceonComputa-\ntional Linguistics (COLING’02), Workshop on Gram-\nmar Engineering and Evaluation , pages 1-7. Tapei,\nROC.\nEugene Charniak, Kevin Knight and Kenji Yamada.\n2003. Syntax-based Language Models for Statistical\nMachine Translation. In Proceedings of the Machine\nTranslationSummitIX2003\nIlyasCicekli and Altay Gvenir. 2003. LearningTransla-\ntion Templates from Bilingual Translation Examples.\nInRecentAdvancesinExample-basedMachineTrans-\nlation,pages255-286,M. CarlandA.Way (eds.)\nMary Dalrymple. Lexical-Functional Grammar, Aca-\ndemicPress,San Diego,CA; London. 2001.\nSteve DeNeefe, Kevin Knight, Wei Wang and Daniel\nMarcu. 2007. What Can Syntax-based MT Learn\nfrom Phrase-based MT? In Proceedings of the 2007\nJoint Conference on Empirical Methods in Natural\nLanguageProcessingandComputational\nYuan Ding and Martha Palmer. 2005. Machine Transla-\ntion Using ProbabilisticSynchronousDependencyIn-\nsertionGrammars. In Proceedingsof the 43rdAnnual\nMeeting of the Association of ComputationalLinguis-\ntics(ACL) ,pages541-548,AnnArbor,June2005.\nJason Eisner. 2003. Learning non-isomorphictree map-\npings for machine translation. In Proceedings of the\n41st Annual Meeting of the Association of Computa-\ntionalLinguistics(ACL) ,pages205-208,Sappora,July\n2003.\nJan Hajiˇ c, Martin ˇCmejrek, Bonnie Dorr,Yuan Ding, Ja-\nson Eisner, Daniel Gildea, Terry Koo, Kristen Parton,\nGerald Penn, Dragomir Radev and Owen Rambow.\n2002. Natural LanguageGeneration in the Context of\nMachine Translation. Technical Report. NLP WS’02 ,\nfinalreport.\nRonald M. Kaplan, Tracy H. King and John T. Maxwell.\n2002. Adapting existing grammars: the XLE experi-\nence. In Proceedings of COLING 2002 , Taipei, Tai-\nwan.\nRonald Kaplan and Joan Bresnan. 1982. Lexical Func-\ntional Grammar, a Formal System for GrammaticalRepresenation. InBresnan, J. editor, The MentalRep-\nresentationofGrammaticalRelations ,pages173-281,\nMITPress, Cambridge,MA.\nPhilippKoehn,FranzJosefOchandDanielMarcu. 2003.\nStatistical Phrase-based Translation. In Proceedings\nof the HLT-NAACL 2003 , pages 48-54, Edmonton,\nMay/June2003.\nPhilippKoehnandHieuHoang. 2007. FactoredTransla-\ntionModels. In Proceedingsofthe2007JointConfer-\nenceonEmpiricalMethodsinNaturalLanguagePro-\ncessing and Computational Natural Language Learn-\ning,pages868876,Prague,June2007.\nAlon Lavie. 2008. . Stat-XFER: A General Search-\nBased Syntax-Driven Framework for Machine Trans-\nlation. In ProceedingsoftheConferenceonIntelligent\nText ProcessingandComputationalLinguistics ,pages\n362-375,Haifa,Israel,2008.\nJohn T. Maxwell III and Ronald M. Kaplan. 1991. A\nMethod for Disjunctive Constraint Satisfaction. In\nCurrent Issues in Parsing Technology ,Masaru Tomita\neditor,pages173-190,KluwerAcademicPublishers.\nFranz Josef Och, Christoph Tillmann Hermann and Ney.\n2000. Improved alignment modesl for statistical ma-\nchine translation. In Proceedings of the 1999 Confer-\nenceonEmpiricalMethodsinNaturalLanguagePro-\ncessing(EMNLP’99).CollegePark,MD,pages20-28.\nFranzJosefOchandHermannNey. 2000. ImprovedSta-\ntistical AlignmentModels. 
In Proceedings of the 38th Annual Meeting of the Association of Computational Linguistics (ACL), pages 440-447.
Chris Quirk, Arul Menezes and Colin Cherry. 2005. Dependency Treelet Translation: Syntactically Informed Phrasal SMT. In Proceedings of the 43rd Annual Meeting of the Association of Computational Linguistics (ACL), Ann Arbor, June 2005, pages 271-279.
Franz Richter. 2007. The German-English word list. http://ftp.tu-chemnitz.de/pub/Local/urz/ding/de-en (Copyright (c) Frank Richter 1995-2007).
Stefan Riezler, Tracy H. King, Ronald M. Kaplan, Richard Crouch, John T. Maxwell, and Mark Johnson. 2002. Parsing the Wall Street Journal using Lexical Functional Grammar and discriminative estimation techniques. (grammar version 2005) In Proceedings of the 40th Annual Meeting of the Association of Computational Linguistics (ACL), Philadelphia, July 2002.
Stefan Riezler and John T. Maxwell III. 2006. Grammatical Machine Translation. In Proceedings of HLT-ACL, pages 248-255, New York.
Petr Sgall, Eva Hajicova and Jarmila Panevova. 1986. The Meaning of the Sentence and its Semantic and Pragmatic Aspects. Dordrecht: Reidel and Prague: Academia 1986.", "main_paper_content": null }
{ "decision": "Unknown", "reviews": [] }
0
0
[]
[]
{ "id": "nZE1t_OzLmr", "year": null, "venue": "EAMT 2015", "pdf_link": "https://aclanthology.org/W15-4921.pdf", "forum_link": "https://openreview.net/forum?id=nZE1t_OzLmr", "arxiv_id": null, "doi": null }
{ "title": "Re-assessing the WMT2013 Human Evaluation with Professional Translators Trainees", "authors": [ "Mihaela Vela", "Josef van Genabith" ], "abstract": null, "keywords": [], "raw_extracted_content": "Re-assessing the WMT2013 Human Evaluation with Professional\nTranslators Trainees\nMihaela Vela\nSaarland University\[email protected] van Genabith\nGerman Research Center for Artificial Intelligence\[email protected]\nAbstract\nThis paper presents experiments on the\nhuman ranking task performed during\nWMT2013. The goal of these experiments\nis to re-run the human evaluation task with\ntranslation studies students and to compare\nthe results with the human rankings per-\nformed by the WMT development teams\nduring WMT2013. More specifically, we\ntest whether we can reproduce, and if yes\nto what extent, the WMT2013 ranking\ntask and whether specialised knowledge\nfrom translation studies influences the re-\nsults in terms of intra- and inter-annotator\nagreement as well as in terms of system\nranking. We present two experiments on\nthe English-German WMT2013 machine\ntranslation output. Analysis of the data\nfollows the methods described in the of-\nficial WMT2013 report. The results in-\ndicate a higher inter- and intra-annotator\nagreement, less ties and slight differences\nin ranking for the translation studies stu-\ndents as compared to the WMT develop-\nment teams.\n1 Introduction\nMachine translation evaluation is an important el-\nement in the process of building MT systems.\nThe Workshop for Statistical Machine Translation\n(WMT) compares new techniques for MT through\nhuman and automatic MT evaluation and provides\nalso tracks for evaluation metrics, quality estima-\ntion of MT as well as post-editing of MT.\nTo date, the most popular MT evaluation met-\nrics essentially measure lexical overlap between\nreference and hypothesis translation such as IBM\nc�2015 The authors. This article is licensed under a Creative\nCommons 3.0 licence, no derivative works, attribution, CC-\nBY-ND.BLEU (Papineni et al., 2002), NIST (Dodding-\nton, 2002), Meteor (Denkowski and Lavie, 2014),\nWER (Levenshtein, 1966), position-independent\nerror rate metric PER (Tillmann et al., 1997) and\nthe translation edit rate metric TER (Snover et al.,\n2006) and TERp (Snover et al., 2009). Gonzàlez et\nal. (2014) as well as Comelles and Atserias (2014)\nintroduce their fully automatic approaches to ma-\nchine translation evaluation using lexical, syntac-\ntic and semantic information when comparing the\nmachine translation output with reference transla-\ntions.\nHuman machine translation evaluation can be\nperformed with different methods. Lo and Wu\n(2011) propose HMEANT, a metric based on\nMEANT (Lo et al., 2012) that measures mean-\ning preservation between hypothesis and reference\ntranslation on the basis of verb frames and their\nrolefillers. Another method is HTER (Snover\net al., 2006) which produces targeted reference\ntranslations by post-editing MT output. Another\nmethod is HTER (Snover et al., 2006) which\nproduces targeted reference translations by post-\nediting MT output. Human evaluation can also be\nperformed by measuring post-editing time, or by\nasking evaluators to assess thefluency and ade-\nquacy of a hypothesis translation on a Likert scale.\nAnother popular human evaluation method is rank-\ning: ordering a set of translation hypotheses ac-\ncording to their quality. 
This is also the method ap-\nplied during the recent WMTs, where humans are\nasked to rank machine translation output by using\nAPPRAISE (Federmann, 2012), a software tool\nthat integrates facilities for such a ranking task. In\nWMT, human MT evaluation is carried out by the\nMT development teams, usually computer scien-\ntists or computational linguists, sometimes involv-\ning crowd-sourcing based on Amazon’s Mechani-\ncal Turk.\nBeing aware of the two communities, machine\ntranslation and translation studies, we took the161\navailable online data from the WMT20131and\ntried to reproduce the ranking task with translation\nstudies students for the English to German transla-\ntions. The three questions we want to answer are:\n•Can we reproduce at all the WMT2013 results\nfor the language pair English-German?\n•Are translation studies students (future trans-\nlators) evaluating different from the WMT de-\nvelopment teams, or in other words does spe-\ncialised knowledge from translation studies\ninfluence the outcome of the ranking task?\n•Are translation studies students more consis-\ntent as a group and with themselves in terms\nof intra- and inter-agreement?\nWe concentrate on English-German data since\nthe majority of our evaluators were native speak-\ners of German and since, from a translation studies\npoint of view, professional translation should be\nperformed only into the mother tongue.\n2 The WMT2013 English-German Data\nBefore presenting the experimental setting and\noutcomes, we present the WMT data. We are\naware of the fact that the main objective of the\nWMT is to evaluate the state-of-the-art in machine\ntranslation. In this context evaluation plays an im-\nportant role, since a robust and reliable evaluation\nmethod makes it easier to perform a more in-depth\ndifferentiation between different machine transla-\ntion outputs.\nIn 2013 during the WMT human evaluation\ncampaign, the evaluation was performed both by\nthe WMT development teams (further named re-\nsearchers) and by turkers. The researcher group\ncomprised all the participants in the WMT ma-\nchine translation task. The turkers group was com-\nposed of non-experts on Amazon’s Mechanical\nTurk (MTurk). Both groups were asked to rank\nrandomly selected machine translation outputs, or-\nganised as quintuples of 5 outputs produced by dif-\nferent MT systems. The researchers were asked to\nrank quintuples for 300 source sentences whereas\nthe turkers were paid per MTurk unit. Such a\nunit is called a human intelligence Task (HIT) and\nconsisted of three source sentences and the corre-\nsponding quintuples. For each HIT turkers were\npaid $0.25.\n1http://www.statmt.org/wmt13/In our experiments we focus on the language\npair English-German, we compare our results with\nthose obtained in the English-German human eval-\nuation task. We concentrate on the evaluation per-\nformed by researchers, assuming that translation\nstudies students will be at least as consistent as re-\nsearchers and having in mind that intra- and inter-\nannotator agreement for the turkers’ group was\nlower than for the researchers’ group. Researchers\nare a well defined group, or at least a better defined\ngroup, than the turkers about whom we had no in-\nformation.\nFrom the WMT2013 English-German data,\nwhich we took as reference for our experiments,\nwe observed that there were in total 38 researchers\ntaking part in the English-German manual eval-\nuation task. 
The range of the evaluated source\nsentences and their quintuples is from 3 to 1059.\nFrom the 38 evaluators 12 evaluated the same sen-\ntences more than once, the range in this case be-\ning from 3 to 240 repeated sentences. From here\nwe can conclude that for the English-German task\njust 12 researchers can be considered for the intra-\nannotator agreement. The sentence overlap be-\ntween researchers (relevant for the inter-annotator\nagreement) has also a wide range: from sentences\nevaluated in common with 2 researchers to sen-\ntences evaluated in common with 36 researchers.\nIn total the researchers in WMT2013 produced\n39582 ranking pairs, without counting ties, based\non which thefinal agreement scores and the system\nranking was computed.\nAnother observation from the WMT2013 data\nis related to the systems researchers had to rank.\nThe data shows that researchers ranked only 14 out\nof the 21 participating systems. The anonymised\ncommercial and online systems were excluded\nfrom the human evaluation task.\nThe main criticism towards this kind of evalua-\ntion of MT output is that the evaluation does not\nprovide evidence of the absolute quality of the MT\noutput, but evidence of the quality of a machine\ntranslation system compared to other MT systems.\nIf the evaluators had to decide on the ranking of\n5 bad MT outputs, it might happen that even the\nMT system rankedfirst, scores bad in terms of ad-\nequacy andfluency. On the other hand, in such\nranking tasks the specific skills, required for ex-\nample in translation studies, are not necessary acti-\nvated, since the ranking task is in fact a comparison\ntask. Therefore, we assume that researchers and162\ntranslations studies students will achieve at least\ncomparable scores since no task-specific knowl-\nedge is required and the two groups, different from\nthe turkers’ group, can be considered homoge-\nneous groups.\n3 Experimental Design\nWe conducted the experiments as similar as\npossible to the manual ranking task in WMT2013.\nLike in WMT2013, evaluators were presented\nwith a source sentence, a reference translation\nandfive outputs produced byfive anonymised and\nrandomised machine translations systems. The\ninstructions for the evaluators remained the same\nas in WMT2013:\nYou are shown a source sentence followed by\nseveral candidate translations. Your task is to\nrank the translations from best to worst (ties are\nallowed)\nFor performing the ranking task we imple-\nmented the Java-based ranking tool depicted in\nFigure 1.2Similar to APPRAISE (Federmann,\n2012) the ranking can be performed on a scale\nfrom 1 to 5, with 1 being the best translation and 5\nbeing the worst translation.\nFor a given source sentence, each ranking of the\nfive MT outputs has the potential to produce 10\nranking pairs. Before applying the corresponding\nformulas on the data, the ranking pairs from all\nevaluators and for all systems are collected in a\nmatrix like the one in Table 1. The matrix records\nthe number of times system S iwas ranked better\nthan S jand vice-versa.\nFor example, if we look at the two systems S 1\nand S 3in the matrix, we can see that S 3was ranked\n2 times higher (from the left triangle) and 4 times\nlower (from the right triangle) than system S 1.\nFrom the matrix, thefinal score for each sys-\ntem - as defined by Koehn (2012) and applied in\nWMT2013 - can be computed. 
From the matrix in Table 1 the score for system S1 is computed by counting, for each pair of systems (S1, S2), (S1, S3), (S1, S4), (S1, S5), the number of times S1 was ranked higher than the other system divided by the total number of rankings for each pair. The results for each pair of systems including S1 are then summed and divided by the number of systems, this being the final score for S1.

[2] The implementation of a new tool was motivated by the accessibility of a server for the evaluators. This way each evaluator had his own evaluation set containing both the tool and the data set.

      S1  S2  S3  S4  S5
S1     0   3   4   2   2
S2     0   0   1   0   1
S3     2   2   0   2   2
S4     4   3   4   0   5
S5     1   2   1   1   0

Table 1: Representation of the ranking pairs as a matrix

Consider a system Si from a set of systems S of size k, and a set of rankings for each system pair (Si, Sj), where j = 1...k, Sj ∈ S and i ≠ j. The score for Si is defined as follows:

score(Si) = (1/k) * Σ_{j=1..k, j≠i} |Si > Sj| / ( |Si > Sj| + |Si < Sj| )

Based on Koehn's (2012) formula each system gets a score and a ranking among the set of systems. After performing the ranking, the systems are clustered using bootstrap resampling, thus returning the final score and the cluster for each system.

Different from WMT2013, we ran two evaluation rounds for the ranking task. The first round was a pilot study in which all evaluators had to evaluate the same set of randomised and anonymised sentences selected from the published WMT2013 ranking task data set. The set contained 200 source sentences and five anonymised and randomised MT outputs for each source sentence. In the pilot study we selected, as in WMT2013, only the above-mentioned 14 machine translation systems for evaluation, disregarding the remaining anonymised commercial and online systems.

Regarding the sampling of the data, the second evaluation round followed the ranking task performed in WMT2013: each evaluator ranked a different randomised and anonymised sample consisting of 200 source sentences and five anonymised and randomised MT outputs for each source sentence. The individual samples were built out of all 21 machine translation outputs of the 3000 source sentences provided for the translation task.

Figure 1: The Java-based ranking tool.

3.1 The Pilot Study

During the pilot study, the translation studies students had to manually rank 200 source sentences and their 5 corresponding randomised and anonymised translations. The specifics of the pilot was that each evaluator received the same data set for evaluation. In fact we randomly retrieved 180 sentences and their 5 corresponding machine translation outputs from the WMT2013 manual evaluation data set, from the rankings performed by the researchers. Out of the 180 sentences we randomly selected 20 sentences which were repeated in the data set. Based on the 200 source sentences, out of which 10% were repeated, we could compute both the inter-annotator agreement and the intra-annotator agreement. For the inter-annotator agreement we took all 200 sentences into consideration, whereas for the intra-annotator agreement we considered the preselected 20 sentences which were repeated in the data set.

During the pilot study 25 translation students and a translation lecturer took part in the experiment. Except for three students, the remaining 23 evaluators were native speakers of German with at least a B2 level [3] for English. 
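As an aside on the scoring step defined above: the score(Si) formula can be computed directly from a pairwise count matrix such as Table 1. The following short Python sketch is purely illustrative (it is not the evaluation code used in WMT2013 or in this study); the toy counts are those of Table 1, and the guard against unobserved pairs is a practical detail of the sketch, not part of the formula.

```python
def expected_wins(counts):
    """Compute score(Si) for every system from a pairwise count matrix.

    counts[i][j] holds how often system i was ranked strictly better
    than system j; ties are ignored, as in the definition above.
    """
    k = len(counts)
    scores = []
    for i in range(k):
        total = 0.0
        for j in range(k):
            if i == j:
                continue
            wins, losses = counts[i][j], counts[j][i]
            if wins + losses > 0:          # skip pairs never compared
                total += wins / (wins + losses)
        scores.append(total / k)           # divided by the number of systems
    return scores

# Toy counts from Table 1 (systems S1..S5).
table1 = [
    [0, 3, 4, 2, 2],
    [0, 0, 1, 0, 1],
    [2, 2, 0, 2, 2],
    [4, 3, 4, 0, 5],
    [1, 2, 1, 1, 0],
]
for name, s in zip(["S1", "S2", "S3", "S4", "S5"], expected_wins(table1)):
    print(name, round(s, 3))
```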
The three non-native\n3http://en.wikipedia.org/wiki/Common_European_Framework_\nof_Reference_for_Languages#Common_reference_levelsspeakers of English had at least a C1 knowledge\nlevel of German and B2 for English. Out of the\n26 evaluators 14 completed the task by ranking\nthe quintuples for all 200 source sentences, the re-\nmaining group evaluated between 2 and 26 source\nsentences. In total we collected 25780 ranking\npairs in the pilot study.\nBased on the collected rankings the intra-\nannotator agreement could be computed just for 17\nevaluators, the ones who evaluated sentences more\nthan once. On the other hand, the inter-agreement\nwas computed pairwise between all evaluators, the\nfact that all evaluators received the same set of sen-\ntences made this possible.\nBoth types of agreement (intra and inter) were\nmeasured by computing Cohen’s kappa coeffi-\ncient (Cohen, 1960), as it was defined by Bojar et\nal. (2013)\nκ=Pagree(Si, Sj)−P chance (Si, Sj)\n1−P chance (Si, Sj)(1)\nwhere P agree(Si,Sj) is the proportion of times\nthat evaluators agree on the ranking of the sys-\ntems S iand S j(Si<S jor S i= Sjor S i>S j) and164\nPchance (Si,Sj) is the number of times they agree by\nchance. P chance (Si,Sj) itself is defined as\nPchance (Si, Sj) =\nP(S i> S j)2+P(S i=S j)2+P(S i< S j)2\n(2)\nTable 2 list the values for P agree, Pchance and\nκ. Thefinalκis then the arithmetic mean of\nthe fourth column, resulting in an overall intra-\nannotator agreement of 0.745 as compared to\n0.649 during WMT2013.\nUser P agree Pchance κ\nuds1 1.000 0.431 1.000\nuds2 0.915 0.387 0.861\nuds3 0.674 0.157 0.613\nuds4 0.661 0.148 0.602\nuds5 1.000 0.360 1.000\nuds6 0.746 0.271 0.651\nuds7 0.710 0.199 0.637\nuds8 0.638 0.142 0.578\nuds9 1.000 0.467 1.000\nuds10 0.520 0.095 0.469\nuds11 0.974 0.392 0.957\nuds12 0.884 0.373 0.815\nuds13 0.792 0.302 0.702\nuds14 0.710 0.172 0.649\nuds15 0.792 0.302 0.702\nuds19 0.900 0.352 0.845\nuds25 0.666 0.190 0.579\nTable 2: Intra-annotator agreement for the pilot\nstudy.\nFor the inter-annotator agreementκis computed\nby comparing each evaluator with other evaluators\nwith whom she/he shared sentences in the ranking\ntask. Each evaluator has been compared with the\nother 25 evaluators, the pairwise comparison of the\n26 evaluators resulting in 325 evaluators pairs. For\neach of these pairs we calculated Cohen’sκ, the\noverall inter-annotator agreement being the arith-\nmetic mean from the inter-annotator agreement of\nthe evaluator pairs. In the pilot study the inter-\nannotator agreement achieved a value of 0.494 as\ncompared to 0.454 during WMT2013.\nThe system scores were calculated according\nto Koehn (2012). The results are listed in Ta-\nble 3. In this stage we performed no clustering,since the experiments with bootstrap resampling\nhave shown, that the cluster varied a lot depend-\ning on the sample size. Since we had no informa-\ntion about the sample size during bootstrap resam-\npling performed during WMT2013 and because\nwe collected less rankings (25780 vs. 
39582 dur-\ning WMT2013), we stopped here with the compu-\ntation of system rankings.\nRank Score System\n1 0.647 PROMT\n2 0.572 UEDIN-SYNTAX\n3 0.546 ONLINE-B\n4 0.516 LIMSI-SOUL\n5 0.505 STANFORD\n6 0.504 UEDIN\n7 0.490 KIT\n8 0.462 CU-ZEMAN\n9 0.456 TUBITAK\n10 0.453 MES-REORDER\n11 0.404 JHU\n12 0.331 SHEF-WPROA\n13 0.314 RWTH-JANE\n14 0.294 UU\nTable 3: System ranking in the pilot study without\nbootstrap resampling\nThe pilot study proved that performing the re-\nranking of the English to German MT output from\nWMT2013 is a feasible task. Moreover, theκ\nscores indicate that translation studies students are\nmore consistent when ranking MT output.\n3.2 Main Study\nIn the main phase of our re-ranking experiment\neach evaluator received a different sample consist-\ning of 200 source sentences, the reference transla-\ntion for each source sentence andfive anonymised\nand randomised machine translation outputs. Be-\ncause we sampled the data from the 3000 source\nsentences and the 21 available system outputs, dur-\ning the main study we collected information about\nall systems and ignored the fact, that in WMT2013\nevaluators were shown only preselected systems.\nThe software as well as the requirements for per-\nforming the ranking task remained the same as in\nthe pilot study.\nSimilar to the pilot study, in each sample con-\nsisting of the 200 source sentences and the cor-\nresponding 5 machine translation outputs 10%\nof the data was repeated, in order to compute165\nthe intra-annotator agreement. For inter-annotator\nagreement we selected 20 source sentences and\ntheir corresponding reference translation as well\nas the corresponding 5 machine translation outputs\nwhich were common to each sample. In this phase\nwe had 37 evaluators, all of them being 2nd or 3rd\nBA translation studies students. With the excep-\ntion of 3 students, all of the students were native\nspeakers of German with at least a B2 level of En-\nglish. The three non-native speaker of German had\na C1 level of English. From the 37 students, 19\nranked all 200 sentences completing the task. The\nother 18 students ranked between between 20 and\n60 sentences. From all the rankings performed\nby the evaluators in the main study we collected\n37318 ranking pairs4, a comparable number to the\n39582 ranking pairs collected during WMT2013.\nFrom the collected data we computed Cohen’s\nκfor the intra-annotator agreement based on the\nrankings collected from 22 evaluators. We obtain a\nκof 0.772 for the intra-annotator agreement. From\nall possible pairs of evaluators, here 666, only 536\npairs had ranked sentences in common and had\ntherefore an inter-annotatorκgreater than 0. The\narithmetic mean of these pairs gave us the overall\ninter-annotator agreement resulting inκof 0.510.\nSince in the second run of the experiment we\ncollected almost the same number of ranking pairs\nas during WMT2013, we performed the ranking\nof the systems with and without bootstrap resam-\npling. Table 4 lists the ranking scores without\nbootstrap resampling.\nFor bootstrap resampling we sampled from the\nset of pairwise rankings (S i, Sj) collected from all\nevaluators and computed the score for each system\nwith the formula in equation 3. By iterating this\nprocedure a 1000 times, we determined the range\nof ranks into which a system falls in 95% of the\ncases5, corresponding to a p-level ofp≤0.05.\nThe systems with overlapping ranges we clustered\nby taking into account that Bojar et al. 
(2013) rec-\nommend to build the largest set of clusters. Actu-\nally we performed the bootstrap resampling twice,\nonce by picking 100 rankings pairs from each eval-\nuator6, and once by selecting 200 ranking pairs for\neach evaluator. The results show that the difference\nbetween 100 and 200 ranking pairs had no impact\n4For the 14 systems evaluated by researchers during\nWMT2013 we collected 24202 ranking pairs\n5This means that the best and worst 2.25% scores for a system\nare not taken into consideration\n6Repetitions were allowed.Rank Score System\n1 0.593 ONLINE-B\n3 0.573 UEDIN-SYNTAX\n4 0.552 PROMT\n5 0.541 UEDIN\n6 0.511 KIT\n7 0.480 MES-REORDER\n8 0.478 LIMSI-SOUL\n9 0.465 CU-ZEMAN\n10 0.463 STANFORD\n11 0.426 TUBITAK\n12 0.422 JHU\n13 0.352 UU\n14 0.345 SHEF-WPROA\n15 0.311 RWTH-JANE\nTable 4: System ranking in the main study without\nbootstrap resampling\non thefinal ranking of the systems, and a mini-\nmal one on the way how systems were grouped to\nclusters. On the right side of Table 5 we present\nthe ranking and clustering results based on sam-\nples build of 100 randomly picked rankings pairs\nper evaluator.\n4 Discussion on Results\nThe motivation for running the experiments pre-\nsented in the previous sections was guided by the\nmain question whether future translators, in our\ncase translations studies students, would rank MT\noutput differently than the WMT2013 develop-\nment teams. Being aware that translation studies\nstudents are language and translation experts, we\nexpected them to be more consistent and more dis-\ncriminative in their decisions as the WMT devel-\nopment teams.\nWith this in mind, we conducted two experi-\nments, a pilot study and a main study, for the lan-\nguage pair English-German investigating whether\ntranslation studies students would evaluate MT\noutput very differently from the WMT devel-\nopment teams and if yes, to what extent and\nhow could we quantify these differences. Dur-\ning the pilot study we observed that the results\nare similar to those from WMT2013, achieving an\nintra-annotator agreement of 0.745 and an inter-\nannotator agreement of 0.494 as compared to\n0.649 and 0.457 during WMT2013, we run the\nmain study described in Section 3.2. The results\nfrom the main experiment show that translation166\nWMT2013 Main Study\nRank Score System Rank Score System\n1 0.637 ONLINE-B 1 0.594 ONLINE-B\n0.636 PROMT 2 0.572 UEDIN-SYNTAX\n3 0.614 UEDIN-SYNTAX 0.556 PROMT\n0.571 UEDIN 0.540 UEDIN\n0.571 KIT 6 0.510 KIT\n7 0.523 STANFORD 7 0.482 MES-REORDER\n8 0.507 LIMSI-SOUL 0.480 LIMSI-SOUL\n9 0.477 MES-REORDER 0.460 STANFORD\n0.476 JHU 0.459 CU-ZEMAN\n0.460 CU-ZEMAN 11 0.427 TUBITAK\n0.453 TUBITAK 0.426 JHU\n13 0.361 UU 13 0.351 UU\n14 0.329 SHEF-WPROA 0.344 SHEF-WPROA\n0.323 RWTH-JANE 15 0.308 RWTH\nTable 5: System ranking with bootstrap resampling in WMT2013 and in the main study\nstudies students achieve an intra-annotator agree-\nment of 0.772 and an inter-annotator agreement of\n0.510. The values are slightly higher than the ones\nof the researchers during WMT2013, but the dif-\nferences are not really that pronounced. One in-\nterpretation of these results is that this task did not\nrequire specialised knowledge neither from the re-\nsearchers nor from the translation studies students.\nAlthough researchers are probably not so famil-\niar with translation studies theories and translation\nstudents are not specialists in machine translation,\nfrom the results, we notice an overlap in decision\ntaking/making between the two groups. 
This over-\nlap can be, as mentioned before, due to the nature\nof the evaluation task, since evaluators from both\ngroups had to rank the machine translation output\ngiven the source text and the reference translation\nand the knowledge about the source and target lan-\nguage was enough.\nThe higher agreement values for the students’\ngroup can be an indicator that students ranked the\nmachine translation output more thoroughly, a fact\nthat was confirmed also by the non-formal feed-\nback we got from the evaluators. Most of them\nthem complained that it was very difficult to rank\nmachine translation output of roughly similar over-\nall quality. They reported that they hadfirst to rank\nfor themselves the errors they saw in the machine\ntranslation output before ranking the sentences.\nAnother aspect which probably influenced the\nresults is the number of evaluators (for intra-\nannotator agreement) and evaluator pairs (for theinter-annotator agreement) considered in the com-\nputation ofκ. The lower the number of evaluators\nand evaluator pairs the higher the influence of each\nevaluator and pair on thefinalκ.\nConcerning the system rankings presented\nby Bojar et al. (2013) and computed based on the\nexpected wins described by Koehn (2012), we can\nremark a shifting of ranks between the systems\nlisted in the WMT2013 report and the rankings\nobtained by the translation studies students. Still,\nthis rank shifting is more preeminent in the mid-\ndle part of the table, than at the bottom, prov-\ning that systems with similar quality of MT out-\nput are harder to rank than MT output which is\nvery different. Table 5 gives an overview of the\nWMT2013 system rankings as well as of the sys-\ntem rankings in our main experiment. ONLINE-\nB was ranked by both groups as the best system,\nUEDIN-SYNTAX and UEDIN kept their ranks as\nwell as KIT, UU, SHEF-WPROA and RWTH. Al-\nthough the other systems changed their rankings\nby moving up or down, there is no real striking\nposition change in the ranking list. From Table 5\nwe can also notice that the scores for the systems\nhave suffered a slight decrease in our main exper-\niment as compared to the WMT2013 results. This\nis due to the fact that students made a clearer dis-\ntinction between good and bad translations by try-\ning to avoid ties, this being reflected into thefinal\nsystems scores.167\nWMT2013 Pilot Main Study\nTotal number of evaluators 38 26 37\nTotal number of rankings pairs 39582 25780 37318\nEvaluators considered for intra-annotator agreement 12 16 22\nκ(Intra-annotator agreement) 0.649 0.745 0.772\nEvaluators pairs considered for inter-annotator agreement 372 325 536\nκ(Inter-annotator agreement) 0.457 0.494 0.510\nTable 6: Overview over collected data and Cohen’sκfor the language pair English-German\n5 Conclusion\nFrom our pilot study as well as from our main\nexperiment on evaluating machine translation by\nranking sentence level machine translation out-\nput we found that the MT development teams in\nWMT2013 are not so different from the transla-\ntion studies students we had as evaluators in our\nexperiments. Turning back to the questions we\nasked in Section 1, we can say that our experi-\nments overall reproduced the WMT2013 ranking\ntask with some differences in the results. Indeed,\nwe observed that the group of students achieved\nhigher agreement scoreκmeaning that they were\nmore consistent individually and as a group. 
On\nthe other hand, from the computation of the sys-\ntem rankings the students confirmed at least the\nfirst and last places in the WMT2013 system rank-\ning, although the scores achieved by all systems\nwere slightly lower. The slight decrease of ranking\nscores is due to the fact that translation studies stu-\ndents were more discriminative and produced less\nties. Based on the results presented in the previ-\nous sections we consider that the human ranking\ntask does not required any specialised knowledge.\nMoreover, we argue that a homogeneous group and\na good command of the source and target language\nare enough to replicate the results of the ranking\ntask in the WMT2013.\nReferences\nBojar, Ond ˇrej, Christian Buck, Chris Callison-Burch, Barry\nHaddow, Philipp Koehn, Christof Monz, Matt Post, Hervé\nSaint-Amand, Radu Soricut, and Lucia Specia, editors.\n2013.Proceedings of the 8th Workshop on SMT. ACL.\nCohen, Jacob. 1960. A Coefficient of Agreement for Nomi-\nnal Scales.Educational and Psychological Measurement,\n20(1):37–46, April.\nComelles, Elisabet and Jordi Atserias. 2014. Verta partici-\npation in the wmt14 metrics task. InProceedings of the\nNinth Workshop on Statistical Machine Translation, pages\n368–375, Baltimore, Maryland, USA, June. Association\nfor Computational Linguistics.\nDenkowski, Michael and Alon Lavie. 2014. Meteor univer-\nsal: Language specific translation evaluation for any targetlanguage. InProceedings of the EACL 2014 Workshop on\nStatistical Machine Translation.\nDoddington, George. 2002. Automatic evaluation of\nmachine translation quality using n-gram co-occurrence\nstatistics. InProceedings of the 2nd International Con-\nference on HLT, pages 138–145.\nFedermann, Christian. 2012. Appraise: An open-source\ntoolkit for manual evaluation of machine translation out-\nput.PBML, 98:25–35, 9.\nGonzàlez, Meritxell, Alberto Barrón-Cedeño, and Lluís\nMàrquez. 2014. Ipa and stout: Leveraging linguistic and\nsource-based features for machine translation evaluation.\nInProceedings of the Ninth Workshop on Statistical Ma-\nchine Translation, pages 394–401, Baltimore, Maryland,\nUSA, June. Association for Computational Linguistics.\nKoehn, Philipp. 2012. Simulating human judgment in ma-\nchine translation evaluation campaigns. InIWSLT, pages\n179–184.\nLevenshtein, Vladimir Iosifovich. 1966. Binary codes capa-\nble of correcting deletions, insertions and reversals.Soviet\nPhysics Doklady, 10(8):707–710.\nLo, Chi-Kiu and Dekai Wu. 2011. MEANT: An inexpensive,\nhigh-accuracy, semi-automatic metric for evaluating trans-\nlation utility based on semantic roles. InProceedings of\nthe 49th Annual Meeting of the ACL, pages 220–229.\nLo, Chi-kiu, Anand Karthik Tumuluru, and Dekai Wu. 2012.\nFully automatic semantic MT evaluation. InProceedings\nof the Seventh Workshop on Statistical Machine Transla-\ntion, pages 243–252, Montréal, Canada, June. Association\nfor Computational Linguistics.\nPapineni, Kishore, Salim Roukos, Todd Ward, and Wei-Jing\nZhu. 2002. Bleu: a method for automatic evaluation of\nmachine translation. InProceedings of the 40th Annual\nMeeting of the ACL, pages 311–318.\nSnover, Matthew, Bonnie Dorr, Richard Schwartz, Linnea\nMicciulla, and John Makhoul. 2006. A study of trans-\nlation edit rate with targeted human annotation. InPro-\nceedings of AMTA, pages 223–231.\nSnover, Matthew, Nitin Madnani, Bonnie Dorr, and Richard\nSchwartz. 2009. Fluency, adequacy, or HTER? Exploring\ndifferent human judgments with a tunable MT metric. 
In Proceedings of the 4th Workshop on SMT, pages 259–268.
Tillmann, Christoph, Stephan Vogel, Hermann Ney, Alexander Zubiaga, and Hassan Sawaf. 1997. Accelerated DP based search for statistical translation. In Proceedings of the EUROSPEECH, pages 2667–2670.", "main_paper_content": null }
{ "decision": "Unknown", "reviews": [] }
0
0
[]
[]
{ "id": "Y2rbYqGVLqf", "year": null, "venue": "EAMT 2012", "pdf_link": "https://aclanthology.org/2012.eamt-1.38.pdf", "forum_link": "https://openreview.net/forum?id=Y2rbYqGVLqf", "arxiv_id": null, "doi": null }
{ "title": "Domain Adaptation of Statistical Machine Translation using Web-Crawled Resources: A Case Study", "authors": [ "Pavel Pecina", "Antonio Toral", "Vassilis Papavassiliou", "Prokopis Prokopidis", "Josef van Genabith" ], "abstract": "Pavel Pecina, Antonio Toral, Vassilis Papavassiliou, Prokopis Prokopidis, Josef van Genabith. Proceedings of the 16th Annual conference of the European Association for Machine Translation. 2012.", "keywords": [], "raw_extracted_content": "Domain Adaptation of Statistical Machine Translation\nusing Web-Crawled Resources: A Case Study\n1Faculty of Mathematics and Physics\nCharles University in Prague\nCzech Republic\[email protected] Pecina1, Antonio Toral2, Vassilis Papavassiliou3, Prokopis Prokopidis3, Josef van Genabith2\n2School of Computing\nDublin City Universiy\nDublin 9, Ireland\n{atoral,josef}@computing.dcu.ie3Institute for Language and\nSpeech Processing, Athena RIC\nAthens, Greece\n{vpapa,prokopis}@ilsp.gr\nAbstract\nWe tackle the problem of domain adapta-\ntion of Statistical Machine Translation by\nexploiting domain-specific data acquired\nby domain-focused web-crawling. We de-\nsign and evaluate a procedure for auto-\nmatic acquisition of monolingual and par-\nallel data and their exploitation for train-\ning, tuning, and testing in a phrase-based\nStatistical Machine Translation system. We\npresent a strategy for using such resources\ndepending on their availability and quan-\ntity supported by results of a large-scale\nevaluation on the domains of Natural En-\nvironment and Labour Legislation and two\nlanguage pairs: English–French, English-\n-Greek. The average observed increase of\nBLEU is substantial at 49.5% relative.\n1 Introduction\nRecent advances of Statistical Machine Transla-\ntion (SMT) have improved Machine Translation\n(MT) quality to such an extent that it can be suc-\ncessfully used in industrial processes (Flournoy\nand Duran, 2009). However, this mostly happens\nin very specific domains for which ample train-\ning data is available (Wu et al., 2008). Using\nin-domain1data for training has a substantial ef-\nfect on the final translation quality: SMT, as any\nother machine-learning application, is not guaran-\nteed to perform optimally if the data for training\nand testing are not identically (and independently)\ndistributed, which is often the case in practice. The\nmain problem is usually vocabulary coverage: spe-\ncific domain texts typically contain vocabulary that\nis not likely to be found in texts from other do-\nmains (Banerjee et al., 2010). Other problems can\nbe caused by divergence in style or genre where the\ndifference is not only in lexis but also in grammar.\n© 2012 European Association for Machine Translation.\n1In this work, in-domain always refers to the domain of test data.In order to achieve optimal performance, an\nSMT system should be trained on data from the\nsame domain, genre, and style as it is applied to.\nFor many domains, though, in-domain data of\na size sufficient to train a full system is hard to find.\nRecent experiments have shown that even small\namounts of such data can be used to adapt a sys-\ntem to the domain of interest (Koehn et al., 2007).\nIn this work, we present a strategy for automatic\nweb-crawling and cleaning of domain-specific\ndata. 
Further, our exhaustive experiments, car-\nried out for the Natural Environment (env) and\nLabour Legislation (lab) domains and English–\nFrench (EN–FR) and English–Greek (EN–EL ) lan-\nguage pairs (in both directions), demonstrate how\nthe crawled data improves SMT quality.\nAfter an overview of related work, we discuss\nthe possibility of adapting a general-domain SMT\nsystem by using various types of in-domain data.\nThen, we present our web-crawling procedure fol-\nlowed by a description of a series of experiments\nexploiting the data we acquired. Finally, we report\non the results and conclude with recommendations\nfor similar attempts to domain adaptation in SMT.\n2 Related work and state of the art\n2.1 Domain-focused web crawling\nA key challenge for a focused crawler that as-\npires to build domain-specific web collections is\nthe prioritisation of the links to follow. Several\nalgorithms have been exploited for selecting the\nmost promising links. The Best-First algorithm\n(Cho et al., 1998) sorts the links with respect\nto their relevance scores and selects a predefined\namount of them as the seeds for the next crawl-\ning cycle. Menczer and Belew (2000) proposed an\nadaptive population of agents, called InfoSpiders,\nand searched for pages relevant to a domain us-\ning evolving query vectors and Neural Networks\nto decide which links to follow. Hybrid models\nand modifications of these crawling strategies have\nProceedings of the 16th EAMT Conference, 28-30 May 2012, Trento, Italy\n145\nlanguage pair (L1–L2) dom set source sentence pairs L1 tokens / vocabulary L2 tokens / vocabulary\nEnglish–French gen train Europarl 5 1,725,096 47,956,886 73,645 53,262,628 103,436\ndev WPT 2005 2,000 58,655 5,734 67,295 6,913\ntest WPT 2005 2,000 57,951 5,649 66,200 6,876\nEnglish–Greek gen train Europarl 5 964,242 27,446,726 61,497 27,537,853 173,435\ndev WPT 2005 2,000 58,655 5,734 63,349 9,191\ntest WPT 2005 2,000 57,951 5,649 62,332 9,037\nTable 1: Detailed statistics of the general-domain data sets obtained from the Europarl corpus and the WPT 2005 workshop.\nalso been proposed (Gao et al., 2010) with the aim\nof reaching relevant pages rapidly.\nApart from the crawling algorithm, classifica-\ntion of web content as relevant to a domain or\nnot also affects the acquisition of domain-specific\nresources, on the assumption that relevant pages\nare more likely to contain links to more pages in\nthe same domain. Qi and Davison (2009) review\nfeatures and algorithms used in web page clas-\nsification. In most of the algorithms reviewed,\non-page features (i.e. textual content and HTML\ntags) are used to construct a corresponding fea-\nture vector and then, several machine-learning ap-\nproaches, such as SVMs, Decision Trees, and Neu-\nral Networks, are employed (Yu et al., 2004).\nConsidering the Web as a parallel corpus,\nResnik and Smith (2003) proposed the STRAND\nsystem, in which they used Altavista to search for\nmultilingual websites and examined the similarity\nof the HTML structures of the fetched web pages\nin order to identify pairs of potentially parallel\npages. Similarly, Esplà-Gomis and Forcada (2010)\nproposed Bitextor, a system that exploits shallow\nfeatures (file size, text length, tag structure, and\nlist of numbers in a web page) to mine paral-\nlel documents from multilingual web sites. 
Be-\nsides structure similarity, other systems either filter\nfetched web pages by keeping only those contain-\ning language markers in their URLs (Désilets et al.,\n2008), or employ a predefined bilingual wordlist\n(Chen et al., 2004), or a naive aligner (Zhang et al.,\n2006) in order to estimate the content similarity of\ncandidate parallel web pages.\n2.2 Domain adaptation in SMT\nThe first attempt towards domain adaptation in\nSMT was made by Langlais (2002) who integrated\nin-domain lexicons into the translation model.\nEck et al. (2004) presented a language model\nadaptation technique applying an information re-\ntrieval approach based on selecting similar sen-\ntences from available training data. Hildebrand et\nal. (2005) applied the same approach on the trans-\nlation model. Wu et al. (2005) proposed an align-ment adaptation approach to improve domain-\n-specific word alignment. Munteanu and Marcu\n(2005) automatically extracted in-domain bilin-\ngual sentence pairs from large comparable (non-\n-parallel) corpora to enlarge the in-domain bilin-\ngual corpus. Koehn and Schroeder (2007) in-\ntegrated in-domain and out-of-domain language\nmodels as log-linear features in the Moses (Koehn\net al., 2007) phrase-based SMT system with mul-\ntiple decoding paths for combining multiple do-\nmain translation tables. Nakov (2008) combined\nin-domain translation and reordering models with\nout-of-domain models into Moses. Finch and\nSumita (2008) employed a probabilistic mixture\nmodel combining two models for questions and\ndeclarative sentences with a general model. They\nused a probabilistic classifier to determine a vector\nof probability representing class membership.\nIn general, all approaches to domain adapta-\ntion of SMT depend on the availability of domain-\n-specific data. If the data is available, it can be\ndirectly used to improve components of the MT\nsystem. 
Otherwise, it can be extracted from a pool\nof texts from different domains or even from the\nweb, which is also the case in our work.\n3 Resources and their acquisition\nIn this section, we review the existing resources we\nused for training the general-domain systems and\npresent the acquisition procedures of in-domain\ndata used for domain adaptation of these systems.\n3.1 Existing general domain data\nFor the baseline, a general-domain system, we\nexploited the widely used data provided for the\nSMT workshops (WPT 2005 – WMT 2010): the\nEuroparl parallel corpus (Koehn, 2005) as training\ndata for translation and language models, and\nWPT 2005 development and test sets as develop-\nment and test data for general-domain parameter\noptimization and testing, respectively (Table 1).\nEuroparl is extracted from the European Parliament\nproceedings and for practical reasons we consider\nthis corpus to contain general-domain texts.\n146\ninitial phase main phase\nlanguage dom sites pages stored / sampled / acc (%) sites pages visited / stored (\u0001%) / dedup (\u0001 %) t (h)\nEnglish env 146 505 224 92.9 3,181 90,240 34,572 38.3 28,071 18.8 47\nlab 150 461 215 91.6 1,614 121,895 22,281 18.3 15,197 31.8 50\nFrench env 106 543 232 95.7 2,016 160,059 35,488 22.2 23,514 33.7 67\nlab 64 839 268 98.1 1,404 186,748 45,660 27.2 26,675 41.6 72\nGreek env 112 524 227 97.4 1,104 113,737 31,524 27.7 16,073 49.0 48\nlab 117 481 219 88.1 660 97,847 19,474 19.9 7,124 63.4 38\nAverage 94.0 25.6 39.7\nTable 2: Statistics from the initial (focused on domain-classification accuracy estimation) and main phases of crawling mono-\nlingual data: stored refers to the visited pages classified as in-domain, dedup refers to pages after near-duplicate removal, time\nis the total duration (in hours), accis accuracy estimated on the sampled pages,\u0001refers to reduction w.r.t. pages visited.\nlanguage dom paragraphs all / clean (\u0001%) / unique (\u0001%) sentences tokens vocabulary\nEnglish env 5,841,059 1,088,660 18.6 693,971 11.9 1,700,436 44,853,229 225,650\nlab 3,447,451 896,369 26.0 609,696 17.7 1,407,448 43,726,781 136,678\nFrench env 4,440,033 1,069,889 24.1 666,553 15.0 1,235,107 42,780,009 246,177\nlab 5,623,427 1,382,420 24.6 822,201 14.6 1,232,707 46,992,912 180,628\nGreek env 3,023,295 672,763 22.3 352,017 11.6 655,353 20,253,160 324,544\nlab 2,176,571 521,109 23.9 284,872 13.1 521,358 15,583,737 273,602\nAverage 23.3 14.0\nTable 3: Statistics from the cleaning stage of the monolingual data acquisition procedure and of the final data set: clean refers\nto paragraphs classified as non-boilerplate, unique to those kept after duplicate removal, \u0001to reduction w.r.t. paragraphs all.\n3.2 Web-crawling for monolingual data\nTo acquire monolingual in-domain corpora used in\nimproving language models, we enhanced a work-\nflow described in Pecina et al. (2011). Consid-\nering the small size of crawled data in that work\n(repeated here as col. 3–6 in Table 2), we imple-\nmented a focused monolingual crawler that adopts\na distributed computing architecture based on Bixo\n(2011), an open source web mining toolkit. 
Moreover, an out-link relevance score $l$ was calculated as $l = p/N + \sum_{i=1}^{M} n_i \cdot w_i$, where p is the relevance score of its source page as in Pecina et al. (2011), N is the amount of links originating from the source page, M is the number of entries in a domain definition consisting of relevant terms extracted from Eurovoc (http://eurovoc.europa.eu/), n_i denotes the number of occurrences of the i-th term in the text surrounding the link and w_i is the weight of the i-th term. Further processing steps include boilerplate detection and language identification at paragraph level. These enhancements resulted in acquiring much more in-domain data (col. 8 in Table 2). In addition, the evolutions of the crawls were satisfactory since the ratio of pages classified as in-domain with the visited ones is 25.6% on average (col. 9 in Table 2).
Then, near-duplicates were removed by employing the deduplication strategy included in the Nutch framework (http://nutch.apache.org). The relatively high percentages of documents removed (col. 13 in Table 2) are in accordance with Baroni et al.'s (2009) observation that during building of the Wacky corpora the amount of documents was reduced by more than 50% after deduplication. Another observation is that the percentages of duplicates for the lab domain are much higher than the ones for env. This can be explained by the fact that lab web pages are mainly legal documents or press releases replicated on many websites.
Final processing of the monolingual data (see Table 3) concerned the exclusion of paragraphs annotated as not in the targeted language or as boilerplate, which reduced their total amount to 23.3% on average (col. 5). Removal of duplicate paragraphs then reduced their total number to 14.0% on average (col. 7). However, most of the removed paragraphs were very short chunks of text (such as navigation links). In terms of tokens, the reduction is only to 50.6%. The last three columns in Table 3 refer to the final monolingual data sets used for training language models. For EN and FR, we acquired about 45 million tokens for each domain; for EL, which is less frequent on the web, we obtained only about 15–20 million tokens.
3.3 Web-crawling for parallel data
Some steps involved in parallel data acquisition (including language identification and cleaning) were discussed in the previous subsection as a part of the monolingual data acquisition. To guide the focused bilingual crawler we used sets of bilingual topic definitions.
language pair / dom / sites / docs / sentences all / paired (Δ%) / good (Δ%) / unique (Δ%) / sampled / corrected
English–French env 6 559 19,042 14,881 78.1 14,079 73.9 13,840 72.7 3,600 3,392
English–French lab 4 900 35,870 31,541 87.9 27,601 76.9 23,861 66.5 3,600 3,411
English–Greek env 14 288 17,033 14,846 87.2 14,028 82.4 13,253 77.8 3,600 3,000
English–Greek lab 7 203 13,169 11,006 83.6 9,904 75.2 9,764 74.1 2,700 2,506
Average 84.2 77.1 72.8
Table 4: Statistics from the parallel data acquisition: document pairs (docs), source sentences (sentences all), aligned sentence pairs (paired), those of sufficient translation quality (good), those kept after duplicate removal (unique), sentences randomly selected for manual correction (sampled) and those really corrected (corrected). Δ always refers to percentages w.r.t. the previous step.
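A literal transcription of the out-link relevance score from Section 3.2 can look as follows; the Eurovoc-style domain definition is invented for illustration, and plain substring counting is a simplification of whatever term matching the original crawler used.

```python
def outlink_relevance(parent_score: float, n_outlinks: int,
                      surrounding_text: str, domain_terms: dict) -> float:
    """l = p/N + sum_i n_i * w_i, with n_i counted in the text
    surrounding the link and w_i taken from the domain definition."""
    score = parent_score / n_outlinks if n_outlinks else 0.0
    text = surrounding_text.lower()
    for term, weight in domain_terms.items():
        score += text.count(term.lower()) * weight
    return score

# Invented Eurovoc-style domain definition for the env domain: term -> weight.
env_terms = {"climate change": 2.0, "emission": 1.5, "biodiversity": 1.5}
link_score = outlink_relevance(parent_score=0.8, n_outlinks=40,
                               surrounding_text="Read more about CO2 emission targets",
                               domain_terms=env_terms)
```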
In order to construct the\nlist of seed URLs we selected web pages that\nwere collected during the monolingual crawls and\noriginated from in-domain multilingual web sites.\nSince it is likely that these multilingual sites con-\ntain parallel documents, we initialize the crawler\nwith these seed URLs and force the crawler to fol-\nlow only links internal to these sites. After down-\nloading in-domain pages from the selected web\nsites, we employed Bitextor to identify pairs of\ndocuments that could be considered parallel.\n3.4 Parallel sentence extraction\nAfter identification of parallel documents, the next\nsteps aimed at extraction of parallel sentences.\nFor each document pair free of boilerplate para-\ngraphs, we applied these steps: sentence split-\nting and tokenization by the Europarl tools, and\nsentence alignment by Hunalign (Varga et al.,\n2005). Hunalign implements a heuristic, language-\n-independent method for identification of parallel\nsentences in parallel texts which can be improved\nby providing an external bilingual dictionary of\nword forms. Without having such dictionaries for\nEN–FR andEN–EL at hand, we realign data in\nthese languages from Europarl by Hunalign and\nused the dictionaries produced by this tool.\nFor each sentence pair identified as parallel, Hu-\nnalign provides a confidence score which reflects\nthe level of parallelness. We manually investigated\na sample of sentence pairs extracted by Hunalign\nfrom the pool data (about 50 sentence pairs for\neach language pair and domain), by relying on the\njudgement of native speakers, and estimated that\nsentence pairs with a score above 0.4 are of a good\ntranslation quality. We kept sentence pairs with 1:1\nalignment only (one sentence on each side) and re-\nmoved those with scores below this threshold. Fi-\nnally, we also removed duplicate sentence pairs.\nThe statistics from the parallel data acquisition\nprocedure are given in Table 4. On average, 84.2%\nof the source sentences extracted from the parallel\ndocuments were aligned in the 1:1 fashion (col. 7),10% of them were removed due to low translation\nquality, and after discarding duplicate sentences\npairs we acquired 72.8% of the original source sen-\ntences aligned to their target sides (col. 11).\nThe translation quality of the parallel sentences\nobtained by the procedure described above is not\nguaranteed in any sense. Tuning the procedure and\nfocusing on high-quality translations is possible\nbut leads to a trade-off between quality and quan-\ntity. For translation model training, high transla-\ntion quality of the data is not as essential as for\ntesting. Bad phrase pairs can be removed from\nthe translation tables based on their low translation\nprobabilities. However, a development set contain-\ning sentence pairs which are not good translations\nof each other might lead to sub-optimal values of\nmodel weights which would harm system perfor-\nmance. If such sentence pairs are used in the test\nset, the evaluation would clearly be unreliable.\nIn order to create reliable test and development\nsets for each language pair and domain, we per-\nformed the following low-cost procedure. 
From\nthe data obtained by the steps described in the\nprevious section, we selected a random sample of\n3,600 sentence pairs (2,700 for EN–EL in the lab\ndomain, for which less data was available) and\nasked native speakers to check and correct them.\nThe task consisted of checking that the sentence\npairs belonged to the right domain, the sentences\nwithin a sentence pair were equivalent in terms of\ncontent, and the translation quality was adequate\nand (if needed) correcting it. The goal was to ob-\ntain at least 3,000 correct sentence pairs for each\ndomain and language pair; thus the correctors did\nnot have to correct every sentence pair. They were\nallowed to skip (remove) misaligned sentence pairs\nand asked to remove those sentence pairs that were\nobviously from a very different domain (despite\nbeing correct translations). The number of cor-\nrected sentences is in the last column of Table 4.\nAccording to the human judgements (see Table\n5), 53–72% of sentence pairs were accurate trans-\nlations, 22–34% needed only minor corrections, 1–\n148\ncategory EN–EL / env EN–FR / lab\n1. perfect translation 53.49 72.23\n2. minor corrections done 34.15 21.99\n3. major corrections needed 3.00 0.33\n4. misaligned sentence pair 5.09 1.58\n5. wrong domain 4.28 3.86\nTable 5: Results (%) of the manual correction of parallel data.\n3% would require major corrections (which was\nnot necessary, as the accurate sentence pairs to-\ngether with those requiring minor corrections were\nenough to reach our goal of at least 3,000 sentence\npairs in most cases), 2–5% of sentence pairs were\nmisaligned and would have had to be translated\ncompletely, and about 4% were from a different\ndomain (despite being correct translations).\nFurther, we selected 2,000 pairs from the cor-\nrected sentences for the test set and left the re-\nmaining part for the development set. The paral-\nlel sentences which were not selected for correc-\ntions were used as training sets. See further statis-\ntics in Table 6. The correctors confirmed that the\nmanual corrections were about 5–10 times faster\nthan translating the sentences from scratch, so this\ncan be viewed as low-cost method for acquiring\nin-domain test and development sets for SMT.\n4 Domain adaptation experiments\nIn this section, we present experiments that exploit\nall the acquired in-domain data in eight different\nevaluation scenarios involving two domains (env,\nlab) and two language pairs (EN–FR, EN–EL ) in\nboth directions. Our primary evaluation measure\nis BLEU (Papineni et al., 2002). For detailed anal-\nysis we also present NIST (Doddington, 2002) and\nMETEOR (Banerjee and Lavie, 2005) in Table 8.\n4.1 System description\nOur MT system is based on Moses (Koehn et al.,\n2007). For training the baseline system, training\ndata is tokenized and lowercased using the Eu-\nroparl tools. The original (non-lowercased) target\nsides of the parallel data are kept for training the\nMoses recaser. The lowercased versions of the tar-\nget sides are used for training an interpolated 5-\n-gram language model with Kneser-Ney discount-\ning using the SRILM toolkit (Stolcke, 2002).\nTranslation models are trained on the relevant parts\nof the Europarl corpus, lowercased and filtered on\nsentence level; we kept all sentence pairs having\nless than 100 words on each side and with length\nratio within the interval h0.11,9.0i. 
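The filters just mentioned (fewer than 100 words per side, length ratio within the interval ⟨0.11, 9.0⟩) can be combined with the Hunalign constraints from Section 3.4 (1:1 alignments scoring at least 0.4). A minimal sketch, with an invented record layout:

```python
def keep_for_training(src: str, tgt: str) -> bool:
    """Length-based filter applied before translation model training."""
    src_len, tgt_len = len(src.split()), len(tgt.split())
    if src_len == 0 or tgt_len == 0:
        return False
    if src_len >= 100 or tgt_len >= 100:
        return False
    return 0.11 <= src_len / tgt_len <= 9.0

def keep_aligned_pair(score: float, n_src: int, n_tgt: int,
                      threshold: float = 0.4) -> bool:
    """Keep only confident 1:1 Hunalign alignments."""
    return n_src == 1 and n_tgt == 1 and score >= threshold

# Invented alignment records: (source, target, Hunalign score, #src, #tgt).
pairs = [
    ("The emission limits were revised .",
     "Les limites d'émission ont été révisées .", 0.71, 1, 1),
]
training_data = [(s, t) for s, t, score, n_s, n_t in pairs
                 if keep_aligned_pair(score, n_s, n_t) and keep_for_training(s, t)]
```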
The maximumpair dom set sents L1 tokens / voc L2 tokens / voc\nenv train 10,240 300,760 10,963 362,899 14,209\ndev 1,392 41,382 4,660 49,657 5,542\ntest 2,000 58,865 5,483 70,740 6,617\nlab train 20,261 709,893 12,746 836,634 17,139\ndev 1,411 52,156 4,478 61,191 5,535English–French test 2,000 71,688 5,277 84,397 6,630\nenv train 9,653 240,822 10,932 267,742 20,185\ndev 1,000 27,865 3,586 30,510 5,467\ntest 2,000 58,073 4,893 63,551 8,229\nlab train 7,064 233,145 7,136 244,396 14,456\ndev 506 15,129 2,227 16,089 3,333English–Greek test 2,000 62,953 4,022 66,770 7,056\nTable 6: Details of the in-domain parallel data sets obtained\nby web-crawling and manual correction: sentence pairs (sents ),\nsource (L1 ) and target (L2 ) tokens and vocabulary size (voc ).\nlength of aligned phrases is set to 7 and the re-\nordering models are generated using parameters:\ndistance, orientation-bidirectional-fe. The model\nparameters are optimized by Minimum Error Rate\nTraining (Och, 2003, MERT) on development sets.\nFor decoding, test sentences are tokenized, low-\nercased, and translated by the tuned system. Letter\ncasing is then reconstructed by the recaser and ex-\ntra blank spaces in the tokenized text are removed\nin order to produce human-readable text.\n4.2 Using out-of-domain test data\nA number of previous experiments (Wu et al.,\n2008; Banerjee et al., 2010, e.g.) showed signif-\nicant degradation of translation quality if an SMT\nsystem was applied to out-of-domain data. In or-\nder to verify this observation we trained and tuned\nour system on general-domain data and compared\nits performance on test sets from general (gen) and\nspecific (env, lab) domains (the results are referred\nto as vXandv0in Table 7, respectively). The aver-\nage decrease in BLEU is 44.3%: while on general-\n-domain test sets we observe scores in the interval\n42.24–57.00, the scores on the specific-domain test\nsets are in the range 20.20–31.79. This is presum-\nably caused by the divergence of training and test\ndata: the out-of-vocabulary (OOV) rate increased\nfrom 0.25% to 0.90% (see col. 4 and 16 in Table 7).\n4.3 Using in-domain development data\nOptimization of parameters of the SMT log-linear\nmodels is known to have a big influence on the\nperformance. The first step towards domain adap-\ntation of a general-domain system it to use in-\n-domain development data. Such data usually\ncomprises of a small set of parallel sentences\nwhich are repeatedly translated while the model\nparameters are adjusted towards their optimal val-\n149\ndirection dom vX / OOV dom v0 / OOV v1 /\u0001% v2 /\u0001% v3 /\u0001% v4 /\u0001% / OOV\nEnglish–Fench gen 49.12 0.11 env 28.03 0.98 35.81 27.8 39.23 40.0 40.53 44.6 40.72 45.3 0.65\nlab 22.26 0.85 30.84 35.6 34.00 52.7 39.55 77.7 39.35 76.8 0.48\nFench–English gen 57.00 0.11 env 31.79 0.81 39.04 22.5 40.57 27.6 42.23 32.8 42.17 32.7 0.54\nlab 27.00 0.68 33.52 23.7 38.07 41.0 44.14 63.5 43.85 62.4 0.38\nEnglish–Greek gen 42.24 0.22 env 20.20 1.15 26.18 29.1 32.06 58.7 33.83 67.5 34.50 70.8 0.82\nlab 22.92 0.47 28.79 25.7 33.59 46.6 33.54 46.3 33.71 47.1 0.40\nGreek–English gen 44.15 0.56 env 29.23 1.53 34.15 16.8 36.93 26.3 39.13 33.9 39.18 34.0 1.20\nlab 31.71 0.69 37.55 18.4 40.17 26.7 40.44 27.5 40.33 27.2 0.62\nAverage 0.25 0.90 25.5 40.0 49.2 49.5 0.64\nTable 7: BLEU scores from domain adaptation of the baseline general-domain systems (v0) by exploiting: corrected devel. 
data\n(v1), monolingual training data (v2), parallel training data (v3), both monolingual and parallel training data (v4); vXrefers to\nthe baseline systems applied to general-domain test sets, OOV to out-of-vocabulary rates, \u0001to relative improvement over v0.\nues. The minimum number of development sen-\ntences is not strictly given. The only requirement\nis that the optimization procedure (MERT in our\ncase) must converge, which might not happen if\nthe set is too small. By using the parallel data\nacquisition procedure (see Section 3.2), we ac-\nquired development sets (506–1,411 sentence pairs\nin each) which proved to be very beneficial: com-\npared to the baseline systems trained and tuned on\ngeneral-domain data only (v0), systems trained on\ngeneral-domain data and tuned on in-domain data\n(v1) improved BLEU scores by 25.5% on aver-\nage. Taking into account that the development sets\ncontain only several hundreds of parallel sentences\neach, such improvement is remarkable (compare\ncolumns v0andv1in Table 7).\n4.4 Adding in-domain monolingual data\nImproving an SMT system by adding in-domain\nmonolingual training data cannot reduce the rel-\natively high OOV rate observed when general-\n-domain systems were applied on test sets from\nspecific domains. However, such data can im-\nprove the language models and contribute to bet-\nter estimations of probabilities of n-grams consist-\ning of known words. To verify this hypothesis,\nwe trained systems (v2) on general-domain paral-\nlel training data, in-domain development data, and\na concatenation of general-domain and in-domain\nmonolingual data described in Section 3.2.1 (com-\nprising 15–45 million words). Compared to the\nsystems v1, the BLEU scores were improved by\nadditional 14.5% absolute on average. In compari-\nson with the baseline systems v0, the total increase\nof BLEU is 40.0% on average. The most substan-\ntial improvement over the system v1is achieved\nfor translations to Greek (23.0% for env, 16.2% for\nlab) despite the smallest size of the monolingual\ndata acquired for this language (Table 3) which is\nprobably due to the complex Greek morphology.4.5 Adding in-domain parallel training data\nParallel data is essential for building translation\nmodels of SMT systems. While a good language\nmodel can improve an SMT system by preferring\nbetter translation options in given contexts, it has\nno effect if the translation model offers no trans-\nlation at all, which is the case for OOV words.\nIn the next experiment, we use in-domain parallel\ntraining data acquired as described in Section 3.2.3\n(7–20 thousand sentence pairs). First, we trained\nsystems (v3) on a concatenation of general-domain\nand in-domain parallel training data, in-domain de-\nvelopment data, and a general-domain monolin-\ngual data only which outperformed the previous\nsystems (v2) by additional 9.2% absolute on aver-\nage (49.2% over the baseline). In some scenarios,\nthe overall improvement was above 70%.\nTo provide a complete picture we also trained\nfully adapted systems (v4) using both general-\n-domain and in-domain sets of parallel and mono-\nlingual data and tuned on the corrected in-domain\ndevelopment sets. In most scenarios the difference\nof results of these systems compared to systems v3\nare not statistically significant (p=0.05). The aver-\nage relative improvement over the baseline (v0) is\n49.5%, which is almost identical to 49.2% from the\nprevious experiment (v3). 
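The significance test behind the p=0.05 statement is not specified; one common choice for comparing BLEU scores is paired bootstrap resampling (Koehn, 2004). A sketch of that test, assuming sacrebleu is available and that system outputs and references are sentence-aligned lists:

```python
import random
import sacrebleu  # assumed available; provides corpus_bleu

def bootstrap_win_rate(sys_a, sys_b, refs, n_samples=1000, seed=12345):
    """Fraction of resampled test sets on which system A scores above B."""
    rng = random.Random(seed)
    n = len(refs)
    wins = 0
    for _ in range(n_samples):
        idx = [rng.randrange(n) for _ in range(n)]
        a = [sys_a[i] for i in idx]
        b = [sys_b[i] for i in idx]
        r = [refs[i] for i in idx]
        if sacrebleu.corpus_bleu(a, [r]).score > sacrebleu.corpus_bleu(b, [r]).score:
            wins += 1
    return wins / n_samples

# With sentence-aligned outputs of, say, v4 and v3 on the same test set,
# a win rate below 0.95 would be consistent with "not significant at p=0.05".
```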
In practice, this means\nthat using additional monolingual in-domain data\non top of the in-domain parallel data has no ef-\nfect on the translation quality. Although additional\nexperiments would verify whether larger monolin-\ngual data could bring any additional improvement\nor not, it seems that parallel data is more important.\n5 Conclusions\nWe presented two methods for the acquisition\nof domain-specific monolingual and parallel data\nfrom the web. They employ existing open-source\ntools for normalization, language identification,\n150\nNatural Environment Labour Legislation\nsys BLEU / \u0001% NIST / \u0001% MET / \u0001% WER / \u0001% BLEU / \u0001% NIST / \u0001% MET / \u0001% WER / \u0001%\nv0 28.03 0.0 7.03 0.0 63.32 0.0 63.70 0.0 22.26 0.0 6.27 0.0 56.73 0.0 69.93 0.0\nv1 35.81 27.7 8.10 15.2 68.44 8.0 53.78 -15.5 30.84 38.5 7.42 18.3 62.94 10.9 57.99 -17.0\nv2 39.23 39.9 8.43 19.9 70.35 11.1 51.34 -19.4 34.00 52.7 7.68 22.4 65.56 15.5 57.06 -18.4\nv3 40.53 44.6 8.61 22.4 71.10 12.2 50.04 -21.4 39.55 77.6 8.37 33.4 69.82 23.0 52.04 -25.5English-Frenchv4 40.72 45.2 8.63 22.7 71.23 12.4 49.92 -21.6 39.35 76.7 8.34 33.0 69.79 23.0 52.29 -25.2\nv0 31.79 0.0 7.77 0.0 66.25 0.0 57.09 0.0 27.00 0.0 7.07 0.0 59.90 0.0 61.57 0.0\nv1 39.04 22.8 8.75 12.6 69.17 4.4 48.26 -15.4 33.52 24.1 7.98 12.8 63.70 6.3 53.39 -13.2\nv2 40.57 27.6 8.90 14.5 70.23 6.0 47.19 -17.3 38.07 41.0 8.47 19.8 66.88 11.6 50.35 -18.2\nv3 42.23 32.8 9.09 16.9 71.40 7.7 46.07 -19.3 44.14 63.4 9.22 30.4 71.24 18.9 45.49 -26.1French-Englishv4 42.17 32.6 9.09 16.9 71.32 7.6 46.05 -19.3 43.85 62.4 9.17 29.7 71.07 18.6 45.81 -25.6\nv0 20.20 0.0 5.73 0.0 82.81 0.0 67.83 0.0 22.92 0.0 5.93 0.0 87.27 0.0 65.88 0.0\nv1 26.18 29.6 6.57 14.6 84.19 1.6 60.80 -10.3 28.79 25.6 6.80 14.6 87.91 0.7 58.20 -11.6\nv2 32.06 58.7 7.24 26.3 84.52 2.0 56.68 -16.4 33.59 46.5 7.36 24.1 88.34 1.2 54.71 -16.9\nv3 33.83 67.4 7.63 33.1 86.10 3.9 53.47 -21.1 33.54 46.3 7.34 23.7 89.55 2.6 54.68 -17.0English-Greekv4 34.50 70.7 7.57 32.1 85.91 3.7 54.16 -20.1 33.71 47.0 7.34 23.7 89.42 2.4 54.71 -16.9\nv0 29.23 0.0 7.50 0.0 60.57 0.0 54.69 0.0 31.71 0.0 7.76 0.0 62.42 0.0 52.34 0.0\nv1 34.16 16.8 8.01 6.8 64.98 7.2 51.15 -6.4 37.55 18.4 8.28 6.7 67.36 7.9 49.02 -6.3\nv2 36.93 26.3 8.27 10.2 66.60 9.9 49.40 -9.6 40.17 26.6 8.58 10.5 68.67 10.0 47.03 -10.1\nv3 39.13 33.8 8.55 14.0 68.24 12.6 47.94 -12.3 40.44 27.5 8.61 10.9 68.91 10.4 46.78 -10.6Greek-Englishv4 39.18 34.0 8.54 13.8 68.19 12.5 47.94 -12.3 40.33 27.1 8.60 10.8 68.83 10.2 47.00 -10.2\nTable 8: Complete results of the domain adaptation experiments. With the exception of NIST, all scores are percentages; MET\ndenotes METEOR, system identifiers refer to those in Table 7, and \u0001to relative improvement over the baseline systems v0.\ncleaning, deduplication, and parallel sentence ex-\ntraction. These methods were applied to acquire\nmonolingual and parallel data for two language\npairs and two domains with only minimal manual\nintervention (domain definitions and seed URLs).\nThe acquired resources were then successfully\nused to adapt general-domain SMT systems to\nthe new domains. The average relative improve-\nment of BLEU achieved in eight scenarios was a\nsubstantial 49.5%. Based on our experiments\nwe made the following observations: even small\namounts of in-domain parallel data is more im-\nportant for translation quality than large amounts\nof in-domain monolingual data. 
As few as 500–\n1,000 sentence pairs can be used as development\ndata with expected 25% relative improvement of\nBLEU. Additional parallel data can be used to im-\nprove translation models: 7,000–20,000 sentences\npairs in our experiments increased BLEU by other\n25% relative on average. If such data is not avail-\nable, a general-domain system can benefit from us-\ning additional in-domain monolingual data, how-\never quite large amounts (tens of million words)\nare necessary to obtain a moderate improvement.\nAcknowledgments\nThis research was supported by the EU FP7\nproject PANACEA (contract no. 7FP-ITC-248064)\nand by the Czech Science Foundation (grant no.P103/12/G084). We thank Victoria Arranz, Olivier\nHamon, and Khalid Choukri for their help with\nmanual correction of the EN–FR data; Maria Gi-\nagkou and V oula Giouli for construction of the do-\nmain definitions and correction of the EN–EL data.\nReferences\nBanerjee, S. and A. Lavie. 2005. METEOR: An Au-\ntomatic Metric for MT Evaluation with Improved\nCorrelation with Human Judgments. In Proc. of the\nACL Workshop on Intrinsic and Extrinsic Evaluation\nMeasures for Machine Translation and/or Summa-\nrization, pp 65–72, Ann Arbor, Michigan.\nBanerjee, P., J. Du, B. Li, S. Naskar, A. Way, and J. van\nGenabith. 2010. Combining Multi-Domain Statis-\ntical Machine Translation Models using Automatic\nClassifiers. In The Ninth Conference of the Associa-\ntion for MT in the Americas, pp 141–150.\nBaroni, M., S. Bernardini, A. Ferraresi, and\nE. Zanchetta. 2009. The WaCky Wide Web: a\ncollection of very large linguistically processed\nweb-crawled corpora. Language Resources and\nEvaluation, 43(3):209–226.\nBixo. 2011. Web mining toolkit. http://openbixo.org/.\nChen, J., R. Chau, and C.-H. Yeh. 2004. Discover-\ning parallel text from the World Wide Web. In Proc.\nof the 2nd workshop on Australasian information se-\ncurity, Data Mining and Web Intelligence, and Soft-\nware Internationalisation, volume 32, pp 157–161,\nDarlinghurst, Australia.\nCho, J., H. Garcia-Molina, and L. Page. 1998. Ef-\nficient crawling through URL ordering. Comput.\nNetw. ISDN Syst., 30:161–172.\n151\nDésilets, A., B. Farley, M. Stojanovic, and G. Pate-\nnaude. 2008. WeBiText: Building Large Heteroge-\nneous Translation Memories from Parallel Web Con-\ntent. In Proc. of Translating and the Computer (30),\nLondon, UK.\nDoddington, G. 2002. Automatic evaluation of ma-\nchine translation quality using n-gram co-occurrence\nstatistics. In Proc. of the second international con-\nference on Human Language Technology Research,\npp 138–145, San Diego, California.\nEck, M., S. V ogel, and A. Waibel. 2004. Language\nModel Adaptation for Statistical Machine Transla-\ntion based on Information Retrieval. In International\nConference on Language Resources and Evaluation,\nLisbon, Portugal.\nEsplà-Gomis, M. and M. L. Forcada. 2010. Com-\nbining Content-Based and URL-Based Heuristics to\nHarvest Aligned Bitexts from Multilingual Sites with\nBitextor. The Prague Bulletin of Mathemathical Lin-\ngustics, 93:77–86.\nFinch, A. and E. Sumita. 2008. Dynamic model inter-\npolation for statistical machine translation. In Proc.\nof the Third Workshop on Statistical Machine Trans-\nlation, pp 208–215, Columbus, Ohio, USA.\nFlournoy, R. and C. Duran. 2009. Machine translation\nand document localization at Adobe: from pilot to\nproduction. In MT Summit XII: proc. of the twelfth\nMachine Translation Summit, pp 425–428.\nGao, Z., Y . Du, L. Yi, Y . Yang, and Q. Peng. 
2010.\nFocused Web Crawling Based on Incremental Learn-\ning.Journal of Comp. Information Systems, 6:9–16.\nHildebrand, A. S., M. Eck, S. V ogel, and A. Waibel.\n2005. Adaptation of the Translation Model for Sta-\ntistical Machine Translation based on Information\nRetrieval. In Proc. of the 10th Annual Conference of\nthe European Association for Machine Translation,\npp 133–142, Budapest, Hungary.\nHua, W., W. Haifeng, and L. Zhanyi. 2005. Alignment\nmodel adaptation for domain-specific word align-\nment. In 43rd Annual Meeting on Association for\nComputational Linguistics, pp 467–474, Ann Arbor,\nMichigan, USA.\nKoehn, P. and J. Schroeder. 2007. Experiments in do-\nmain adaptation for statistical machine translation.\nInProc. of the Second Workshop on Statistical Ma-\nchine Translation, pp 224–227, Prague, Czech Rep.\nKoehn, P., H. Hoang, A. Birch, C. Callison-Burch,\nM. Federico, N. Bertoldi, B. Cowan, W. Shen,\nC. Moran, R. Zens, C. Dyer, O. Bojar, A. Constantin,\nand E. Herbst. 2007. Moses: open source toolkit for\nstatistical machine translation. In Proc. of the 45th\nAnnual Meeting of the ACL on Interactive Poster and\nDemo Sessions, pp 177–180, Prague, Czech Rep.\nKoehn, P. 2005. Europarl: A Parallel Corpus for Sta-\ntistical Machine Translation. In Conference Proc.:\nthe tenth Machine Translation Summit, pp 79–86,\nPhuket, Thailand.\nKohlschütter, C., P. Fankhauser, and W. Nejdl. 2010.\nBoilerplate detection using shallow text features. InProc. of the 3rd ACM International Conference on\nWeb Search and Data Mining, pp 441–450, NY .\nLanglais, P. 2002. Improving a general-purpose Statis-\ntical Translation Engine by terminological lexicons.\nInCOLING-02 on COMPUTERM 2002: second in-\nternational workshop on computational terminology\n- Volume 14, pp 1–7, Taipei, Taiwan.\nMenczer, F. and R. K. Belew. 2000. Adaptive Retrieval\nAgents: Internalizing Local Contextand Scaling up\nto the Web. Machine Learning, 39:203–242.\nMunteanu, D. S. and D. Marcu. 2005. Improving Ma-\nchine Translation Performance by Exploiting Non-\nParallel Corpora. Comput. Linguist., 31:477–504.\nNakov, P. 2008. Improving English-Spanish statistical\nmachine translation: experiments in domain adapta-\ntion, sentence paraphrasing, tokenization, and recas-\ning. In Proc. of the Third Workshop on Statistical\nMachine Translation, pp 147–150, Columbus, USA.\nOch, F. J. 2003. Minimum error rate training in statis-\ntical machine translation. In 41st Annual Meeting on\nAssociation for Computational Linguistics, pp 160–\n167, Sapporo, Japan.\nPapineni, K., S. Roukos, T. Ward, and W.-J. Zhu. 2002.\nBLEU: a method for automatic evaluation of ma-\nchine translation. In 40th Annual Meeting on Asso-\nciation for Computational Linguistics, pp 311–318,\nPhiladelphia, USA.\nPecina, P., A. Toral, A. Way, V . Papavassiliou, P. Proko-\npidis, and M. Giagkou. 2011. Towards Using Web-\nCrawled Data for Domain Adaptation in Statistical\nMachine Translation. In Proc. of the 15th Annual\nConference of the European Associtation for Ma-\nchine Translation, pp 297–304, Leuven, Belgium.\nQi, X. and B. D. Davison. 2009. Web page classifi-\ncation: Features and algorithms. ACM Computing\nSurveys, 41:12:1–12:31.\nResnik, P. and N. A. Smith. 2003. The Web as a paral-\nlel corpus. Computational Linguistics, 29:349–380.\nStolcke, A. 2002. SRILM-an extensible language\nmodeling toolkit. In Proc. of International Confer-\nence on Spoken Language Processing, pp 257–286,\nDenver, Colorado, USA.\nVarga, D., L. Németh, P. Halácsy, A. Kornai, V . 
Trón,\nand V . Nagy. 2005. Parallel corpora for medium\ndensity languages. In Recent Advances in Natural\nLanguage Processing, pp 590–596.\nWu, H., H. Wang, and C. Zong. 2008. Domain adap-\ntation for statistical machine translation with do-\nmain dictionary and monolingual corpora. In Proc.\nof the 22nd International Conference on Computa-\ntional Linguistics - Volume 1, pp 993–1000.\nYu, H., J. Han, and K. C.-C. Chang. 2004. PEBL:\nWeb Page Classification without Negative Examples.\nIEEE Transactions on Knowledge and Data Engi-\nneering, 16(1):70–81.\nZhang, Y ., K. Wu, J. Gao, and P. Vines. 2006. Auto-\nmatic Acquisition of Chinese-English Parallel Cor-\npus from the Web. In Proc. of the 28th European\nConference on Information Retrieval, pp 420–431.\n152", "main_paper_content": null }
{ "decision": "Unknown", "reviews": [] }
0
0
[]
[]
{ "id": "wZsGdvI3eoa", "year": null, "venue": "EAMT 2015", "pdf_link": "https://aclanthology.org/W15-4916.pdf", "forum_link": "https://openreview.net/forum?id=wZsGdvI3eoa", "arxiv_id": null, "doi": null }
{ "title": "Searching for Context: a Study on Document-Level Labels for Translation Quality Estimation", "authors": [ "Carolina Scarton", "Marcos Zampieri", "Mihaela Vela", "Josef van Genabith", "Lucia Specia" ], "abstract": "Carolina Scarton, Marcos Zampieri, Mihaela Vela, Josef van Genabith, Lucia Specia. Proceedings of the 18th Annual Conference of the European Association for Machine Translation. 2015.", "keywords": [], "raw_extracted_content": "Searching for Context: a Study on Document-Level Labels for Translation\nQuality Estimation\nCarolina Scarton1, Marcos Zampieri2,3, Mihaela Vela2, Josef van Genabith2,3and Lucia Specia1\n1University of Sheffield / Regent Court, 211 Portobello, Sheffield, UK\n2Saarland University / Campus A2.2, Saarbr ¨ucken, Germany\n3German Research Centre for Artificial Intelligence / Saarbr ¨ucken, Germany\n{c.scarton,l.specia}@sheffield.ac.uk\n{marcos.zampieri,m.vela}@uni-saarland.de\njosef.van [email protected]\nAbstract\nIn this paper we analyse the use of pop-\nular automatic machine translation evalu-\nation metrics to provide labels for qual-\nity estimation at document and paragraph\nlevels. We highlight crucial limitations of\nsuch metrics for this task, mainly the fact\nthat they disregard the discourse structure\nof the texts. To better understand these\nlimitations, we designed experiments with\nhuman annotators and proposed a way of\nquantifying differences in translation qual-\nity that can only be observed when sen-\ntences are judged in the context of entire\ndocuments or paragraphs. Our results in-\ndicate that the use of context can lead to\nmore informative labels for quality anno-\ntation beyond sentence level.\n1 Introduction\nQuality estimation (QE) of machine translation\n(MT) (Blatz et al., 2004; Specia et al., 2009) is\nan area that focuses on predicting the quality of\nnew, unseen machine translation data without rely-\ning on human references. This is done by training\nmodels using features extracted from source and\ntarget texts and, when available, from the MT sys-\ntem, along with a quality label for each instance.\nMost current work on QE is done at the sentence\nlevel. A popular application of sentence-level QE\nis to support post-editing of MT (He et al., 2010).\nAs quality labels, Likert scores have been used for\npost-editing effort, as well as post-editing time and\nedit distance between the MT output and thefinal\nversion – HTER (Snover et al., 2006).\nc�2015 The authors. This article is licensed under a Creative\nCommons 3.0 licence, no derivative works, attribution, CC-\nBY-ND.There are, however, scenarios where quality pre-\ndiction beyond sentence level is needed, most no-\ntably in cases when automatic translations without\npost-editing are required. This is the case, for ex-\nample, of quality prediction for an entire product\nreview translation in order to decide whether or not\nit can be published as is, so that customers speak-\ning other languages can understand it.\nThe quality of a document is often seen as some\nform of aggregation of the quality of its sentences.\nWe claim, however, that document-level quality\nassessment should consider more information than\nsentence-level quality. This includes, for exam-\nple, the topic and structure of the document and\nthe relationship between its sentences. While cer-\ntain sentences are considered perfect in isolation,\ntheir combination in context may lead to incoher-\nent text. 
Conversely, while a sentence can be con-\nsidered poor in isolation, when put in context, it\nmay benefit from information in surrounding sen-\ntences, leading to a document that isfit for pur-\npose.\nDocument-level quality prediction is a rather\nunderstudied problem. Recent work has looked\ninto document-level prediction (Scarton and Spe-\ncia, 2014; Soricut and Echihabi, 2010) using au-\ntomatic metrics such as BLEU (Papineni et al.,\n2002) and TER (Snover et al., 2006) as quality\nlabels. However, their results highlighted issues\nwith these metrics for the task at hand: the evalu-\nation of the scores predicted in terms of mean er-\nror was inconclusive. In most cases, the predic-\ntion model only slightly improves over a simple\nbaseline where the average BLEU or TER score of\nthe training documents is assigned to all test docu-\nments.\nOther studies have considered document-level\ninformation in order to improve, analyse or au-121\ntomatically evaluate MT output (not for QE pur-\nposes). Carpuat and Simard (2012) report that MT\noutput is overall consistent in its lexical choices,\nnearly as consistent as manually translated texts.\nMeyer and Webber (2013) and Li et al. (2014)\nshow that the translation of connectives differs\nfrom humans to MT, and that the presence of\nexplicit connectives correlates with higher HTER\nvalues. Guzm ´an et al. (2014) explore rhetori-\ncal structure (RST) trees (Mann and Thompson,\n1987) for automatic evaluation of MT into English,\noutperforming traditional metrics at system-level\nevaluation.\nThus far, no previous work has investigated\nways to provide a global quality score for an entire\ndocument that takes into account document struc-\nture, without access to reference translations. Pre-\nvious work on document-level QE use automatic\nevaluation metrics as quality labels that do not con-\nsider document-level structures and are developed\nfor inter-system rather than intra-system evalua-\ntion. Also, previous work on evalution of MT does\nnot focus on complete evaluation at document-\nlevel.\nIn this paper, we show that the use of BLEU\nand other automatic metrics as quality labels do\nnot help to successfully distinguish different qual-\nity levels. We discuss the role of document-wide\ninformation for document-level quality estimation\nand present two experiments with human annota-\ntors.\nIn thefirst experiment, translators are asked to\nsubjectively assess paragraphs in terms of cohe-\nsion and coherence (herein, SUBJ). In the second\nexperiment, a two-pass post-editing experiment is\nperformed in order to measure the difference be-\ntween corrections made with and without wider\ncontexts (the tow passes are called PE1 and PE2,\nrepectively).\nThe task of assessing paragraphs according to\ncohesion and coherence is highly subjective and\nthus the results of thefirst study did not show\nhigh agreement among annotators. The results of\nthe two-stage post-editing experiment showed sig-\nnificant differences from the post-editing of sen-\ntences without context to the second stage where\nsentences were further corrected in context. This\nis an indication that certain translation issues can\nonly be solved by relying on wider contexts, which\nis a crucial information for document-level QE. A\nmanual analysis was conducted to evaluate differ-ences between PE1 and PE2. 
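The comparison between the two post-editing rounds rests on HTER, an edit distance between PE1 and PE2. The sketch below is an approximation only: it uses a plain word-level Levenshtein distance without the block shifts of true TER, and naive whitespace tokenisation.

```python
def word_edit_distance(hyp, ref):
    """Word-level Levenshtein distance (insertions, deletions, substitutions)."""
    prev = list(range(len(ref) + 1))
    for i in range(1, len(hyp) + 1):
        cur = [i] + [0] * len(ref)
        for j in range(1, len(ref) + 1):
            cost = 0 if hyp[i - 1] == ref[j - 1] else 1
            cur[j] = min(prev[j] + 1, cur[j - 1] + 1, prev[j - 1] + cost)
        prev = cur
    return prev[-1]

def approximate_hter(pe1: str, pe2: str) -> float:
    """Edits needed to turn the context-free post-edit (PE1) into the
    in-context post-edit (PE2), normalised by the length of PE2."""
    hyp, ref = pe1.split(), pe2.split()
    return word_edit_distance(hyp, ref) / len(ref) if ref else 0.0

print(approximate_hter("Das ist falsch .", "Das ist nicht gut ."))  # 0.4
```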
Although several of\nthe changes were found to be related to style or\nother non-discourse related phenomena, many dis-\ncourse related changes were performed that were\nonly possible given the wider context available.\nIn the remainder of this paper wefirst present\nrelated work in Section 2. In Section 3 we discuss\nthe use of BLEU-style metrics for QE at document\nlevel. Section 4 describes the experimental set up\nused in the paper. Section 5 presents thefirst study\nwere the annotators assess quality in terms of co-\nhesion and coherence, while Section 6 shows the\ntwo-pass post-editing experiment and its results.\nThe conclusions and future work are presented in\nSection 7.\n2 Related work\nThe research reported here is about quality esti-\nmation at document-level. Therefore, work on\ndocument-level features and document-level qual-\nity prediction are both relevant, as well as studies\non how discourse phenomena manifest in the out-\nput of MT systems.\nSoricut and Echihabi (2010) propose document-\nlevel features to predict document-level quality for\nranking purposes, having BLEU as quality label.\nWhile promising results were reported for ranking\nof translations for different source documents, the\nresults for predicting absolute scores proved incon-\nclusive. For two out of four domains, the predic-\ntion model only slightly improves over a baseline\nwhere the average BLEU score of the training doc-\numents is assigned to all test documents. In other\nwords, most documents have similar BLEU scores,\nand therefore the training mean is a hard baseline\nto beat.\nScarton and Specia (2014) propose a number\nof discourse-informed features in order to predict\nBLEU and TER at document level. They also\nfound the use of these metrics as quality labels\nproblematic: the error scores of several QE mod-\nels were very close to that obtained by the train-\ning mean baseline. Even when mixing translations\nfrom different MT systems, BLEU and TER were\nnot found to be discriminative enough.\nCarpuat and Simard (2012) provide a detailed\nevaluation of lexical consistency in translations of\ndocuments produced by a statistical MT (SMT)\nsystem, i.e., on the consistency of words and\nphrases in the translation of a given source text.\nSMT was found to be overall consistent in its lexi-122\ncal choices, nearly as consistent as manually trans-\nlated texts.\nMeyer and Webber (2013) present a study on\nimplicit discourse connectives in translation. The\nphenomenon is evaluated using human references\nand machine translations for English-French and\nEnglish-German. They found that humans trans-\nlated explicit connectives in the source (English)\ninto implicit connectives in the target (German and\nFrench) in18%of the cases. MT systems trans-\nlated explicit connectives into implicit ones less\noften.\nLi et al. (2014) study connectives in order\nto improve MT for Chinese-English and Arabic-\nEnglish. They show that the presence of ex-\nplicit connectives correlates with high HTER\nfor Chinese-English only. Chinese-English also\nshowed correlation between ambiguous connec-\ntives and higher HTER. When comparing the pres-\nence of discourse connectives in translations and\npost-editions, they found that cases of connectives\nonly appearing in the translation or post-edition\nalso show correlation with high HTER scores.\nGuzm ´an et al. 
(2014) explore RST trees (Mann\nand Thompson, 1987) for automatic evaluation of\nMT into English, with a discourse parser to anno-\ntate RST trees at sentence level in English. They\ncompare the discourse units of machine transla-\ntions with those in the references by using tree ker-\nnels to compute the number of common subtrees\nbetween the two trees. This metric outperformed\nothers at system-level evaluation.\nIn summary, no previous work has investigated\nways to provide a global quality score for an entire\ndocument that takes into account document struc-\nture, neither for evaluation nor for estimation pur-\nposes.\n3 Automatic evaluation metrics as\nquality labels for document-level QE\nAs discussed in Section 2, although the use\nof BLEU-style metrics as quality scores for\ndocument-level QE clearly seems inadequate, pre-\nvious work resorted to these automatic metrics be-\ncause of the lack of better labels. In order to\nbetter understand this problem, we conducted an\nexperiment with French-English translations from\nthe LIG corpus (Potet et al., 2012). We took the\nfirst part of the corpus containing119source doc-\numents on the news domain (from various WMT\nnews test sets), their MT by a phrase-based SMTsystem, a post-edited version of these translations\nby a human translator, and a reference transla-\ntion. We used a range of automatic metrics such\nas BLEU, TER, METEOR-ex (exact match) and\nMETEOR-st (stem match), which are based on a\ncomparison between machine translations and hu-\nman references, and the “human-targeted” version\nof BLEU and TER, where machine translations are\ncompared against their post-editions: HBLEU and\nHTER. Table 1 shows the results of the average\nscore (A VG) for each metric considering all docu-\nments, as well as the standard deviation (STDEV).\nA VG STDEV\nBLEU (↑) 0.27 0.05\nTER (↓) 0.53 0.07\nMETEOR-ex (↑) 0.29 0.03\nMETEOR-st (↑) 0.30 0.03\nHTER (↓) 0.21 0.03\nHBLEU (↑) 0.64 0.05\nTable 1: Average metric scores in the LIG corpus.\nWe conducted a similar analysis on the English-\nGerman (EN-DE) news test set from WMT13 (Bo-\njar et al., 2013), which contains52documents,\nboth at document and paragraph levels. Three MT\nsystems were considered in this analysis:UEDIN\n(an SMT system),PROMT(a hybrid system) and\nRBMT-1(a rule-based system). Average metric\nscores are shown in Table 2.\nFor all the metrics and corpora, the STDEV val-\nues for documents are very small (below0.1), in-\ndicating that all documents are considered similar\nin terms of quality according to these metrics (the\nscores are all very close to the mean).\nAt paragraph level (Table 2), the scores variation\nincreases, with BLEU showing the highest varia-\ntion. However, the very high STDEV values for\nBLEU (very close to the actual average score for\nall documents) is most likely due to the fact that\nBLEU does not perform well for short segments\nsuch as a paragraph due to the n-gram sparsity\nat this level, as shown in Stanojevi ´c and Sima’an\n(2014).\nOverall, it is important to emphasise that BLEU-\nstyle metrics were created to evaluate different MT\nsystems based on the same input, as opposed to\nevaluating different outputs of a single MT system,\nas we do here. 
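The analysis behind Tables 1 and 2 reduces to the mean and standard deviation of per-document metric scores. A minimal sketch, assuming the per-document scores have already been computed with an external tool (the values below are invented):

```python
from statistics import mean, stdev

def summarise(doc_scores: dict) -> tuple:
    """AVG and STDEV of a document-level metric over a test set."""
    values = list(doc_scores.values())
    return mean(values), stdev(values)

# Invented document-level BLEU scores for one MT system.
bleu_by_doc = {"doc001": 0.21, "doc002": 0.19, "doc003": 0.24, "doc004": 0.18}
avg, sd = summarise(bleu_by_doc)
print(f"AVG = {avg:.2f}, STDEV = {sd:.3f}")  # a small STDEV means the documents look alike
```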
The experiments in Section 6 at-\ntempt to shed some light on alternative ways to ac-\ncurately measure document-level quality, with an\nemphasis on designing a label for document-level\nquality prediction.123\nUEDIN PROMT RBMT-1\nDocument Paragraph Document Paragraph Document Paragraph\nA VG STDEV A VG STDEV A VG STDEV A VG STDEV A VG STDEV A VG STDEV\nBLEU (↑) 0.2 0.048 0.2 0.16 0.19 0.05 0.2 0.16 0.15 0.04 0.16 0.14\nTER (↓) 0.62 0.063 0.63 0.24 0.61 0.07 0.62 0.25 0.66 0.06 0.67 0.23\nMETEOR-ex (↑) 0.37 0.056 0.37 0.16 0.36 0.06 0.37 0.16 0.32 0.05 0.33 0.15\nMETEOR-st (↑) 0.39 0.058 0.39 0.16 0.38 0.06 0.39 0.16 0.34 0.05 0.35 0.15\nTable 2: Average metric scores for automatic metrics in the WMT13 EN-DE corpus.\n4 Experimental settings\nIn the following experiments, we consider a para-\ngraph as a “document”. This decision was made\nto make the annotation feasible, given the time and\nresources available. Although the datasets are dif-\nferent for the two subtasks, they were taken from\nthe same larger corpus and annotated by the the\nsame group of translators.\n4.1 Methods\nThe SUBJ experiment (Section 5) consists in as-\nsessing the quality of paragraphs in terms of co-\nhesion and coherence. We define cohesion as the\nlinguistic marks (cohesive devices) that connect\nclauses, sentences or paragraphs together; coher-\nence captures whether clauses, sentences or para-\ngraphs are connected in a logical way, i.e. whether\nthey make sense together (Stede, 2011). In or-\nder to assess these two phenomena, we propose a\n4-point scale. For coherence: 1=Completely co-\nherent; 2=Mostly coherent; 3=Little coherent, and\n4=Incoherent; for cohesion: 1=Flawless; 2=Good;\n3=Disfluent and 4=Incomprehensible.\nPE1 and PE2 (Section 6) consist in objective\nassessments through the post-editing of MT sen-\ntences in two rounds: in isolation and in context.\nIn thefirst round (PE1), annotators were asked to\npost-edit sentences which were shown to them out\nof context. In the second round (PE2), they were\nasked to further post-edit the same sentences now\ngiven in context andfix any other issues that could\nonly be solved by relying on information beyond\nindividual sentences. For this, each annotator was\ngiven as input the output of their PE1, i.e. the sen-\ntences they had previously post-edited themselves.\n4.2 Data\nThe datasets were extracted from the test set of\nthe EN-DE WMT13 MT shared task. EN-DE was\nchosen given the availability of in-house annota-\ntors for this language pair. Outputs of theUEDIN\nSMT system were chosen as this was the best par-ticipating system for this language pair (Bojar et\nal., 2013). For the SUBJ experiment, paragraphs\nwere randomly selected from the full corpus.\nFor PE1 and PE2, only source (English) para-\ngraphs with3-8sentences were selected (filter S-\nNUMBER) to ensure that there is enough infor-\nmation beyond sentence-level to be evaluated and\nmake the task feasible for the annotators. These\nparagraphs were furtherfiltered to select those\nwith cohesive devices. Cohesive devices are lin-\nguistic units that play a role in establishing co-\nhesion between clauses, sentences or paragraphs\n(Halliday and Hasan, 1976). Pronouns and dis-\ncourse connectives are examples of such devices.\nA list of pronouns and the connectives from Pitler\nand Nenkova (2009) was considered for that. 
Fi-\nnally, paragraphs were ranked according to the\nnumber of cohesive devices they contain and the\ntop200paragraphs were selected (filter C-DEV).\nTable 3 shows the statistics of the initial corpus and\nthe resulting selection after eachfilter.\nNumber of Number of\nParagraphs Cohesive devices\nFULL CORPUS 1,215 6,488\nS-NUMBER 394 3,329\nC-DEV 200 2,338\nTable 3: WMT13 English source corpus.\nFor the PE1 experiment, the paragraphs in C-\nDEV were randomised. Then, sets containing\nseven paragraphs each were created. For each\nset, the sentences of its paragraphs were also ran-\ndomised in order to prevent annotators from hav-\ning access to wider context when post-editing. The\nguidelines made it clear to annotators that the sen-\ntences they were given were not related, not nec-\nessarily part of the same document, and that there-\nfore they should not try tofind any relationships\namong them. For PE2, sentences were put together\nin their original paragraphs and presented to the\nannotators as a complete paragraph.124\n4.3 Annotators\nThe annotators for both experiments are students\nof “Translation Studies” courses (TS) in Saarland\nUniversity, Saarbr ¨ucken, Germany. All students\nwere familiar with concepts of MT and with post-\nediting tools. They were divided in two sets:\n(i)Undergraduate students (B.A.), who are na-\ntive speakers of German; and (ii)Master students\n(M.A.), the majority of whom are native speak-\ners of German. Non-native speakers have at least\nseven years of German language studies. B.A. and\nM.A. students have on average10years of En-\nglish language studies. Only the B.A. group did\nthe SUBJ experiment. PE1 and PE2 were done by\nall groups.\nPE1 and PE2 were done using three CAT tools:\nPET (Aziz et al., 2012), Matecat (Federico et al.,\n2014) and memoQ.1These tools operate in very\nsimilar ways in terms of their post-editing func-\ntionalities, and therefore the use of multiple tools\nwas only meant to make the experiment more in-\nteresting for students and did not affect the results.\nSUBJ was done without the help of tools.\n5 Coherence/cohesion judgements\nOurfirst attempt to access quality beyond sentence\nlevel was to explicitly guide annotators to consider\ndiscourse, where the notion of “discourse” covers\nvarious linguistic phenomena observed across dis-\ncourse units. Discourse units can be clauses (intra-\nsentence), sentences or paragraphs.\nSix sets with17paragraphs each were randomly\nselected from FULL CORPUS and given to25an-\nnotators from the B.A. group (each annotator eval-\nuated one set). The task was to assess the para-\ngraphs in terms of cohesion and coherence, using\nthe scale given. The annotators could also rely on\nthe source paragraphs. The agreement for the task\nin terms of Spearman’s rank correlation and the\nnumber of students per set are presented in Table\n4. The number of annotators per set is different\nbecause some of them did not complete the task.\nSet 1 Set 2 Set 3 Set 4 Set 5 Set 6\nAnnotators 3 3 4 7 6 2\nCoherence 0.07 0.05 0.16 0.16 0.28 0.58\nCohesion 0.38 0.43 0.28 0.09 0.38 0.12\nTable 4: Spearman’s correlation for the SUBJ task.\nA low agreement in terms of Spearman’sρrank\n1https://www.memoq.com/correlation was found for both cohesion (ranging\nfrom0.09to0.43) and coherence (ranging from\n0.05to0.28, having0.58as an outlier) evaluations.\nNaturally, these concepts are very abstract, even\nfor humans, offering substantial room for subjec-\ntive interpretations. 
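Agreement figures like those in Table 4 can be reproduced by correlating the scores of each annotator pair and averaging. The per-pair averaging is an assumption, since the aggregation is not spelled out, and scipy is assumed to be available.

```python
from itertools import combinations
from scipy.stats import spearmanr

def mean_pairwise_spearman(annotations: dict) -> float:
    """Average Spearman's rho over all annotator pairs; `annotations` maps
    an annotator id to that annotator's scores for the same ordered paragraphs."""
    rhos = []
    for a, b in combinations(annotations, 2):
        rho, _ = spearmanr(annotations[a], annotations[b])
        rhos.append(rho)
    return sum(rhos) / len(rhos)

# Invented 4-point coherence judgements from three annotators on five paragraphs.
coherence = {"ann1": [1, 2, 2, 4, 3], "ann2": [2, 2, 1, 4, 3], "ann3": [1, 3, 2, 3, 3]}
print(f"coherence agreement: {mean_pairwise_spearman(coherence):.2f}")
```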
In addition, the existence of\n(often many) errors in the MT output can hinder\nthe understanding of the text altogether, rendering\njudgements on any specific quality dimension dif-\nficult to make.\n6 Quality assessment as a two-stage\npost-editing task\nUsing HTER, we measured the edit distance be-\ntween the post-edited versions with and without\ncontext. The hypothesis is that differences be-\ntween the two versions are likely to be corrections\nthat could only be performed with information be-\nyond sentence level.\nFor PE1, paragraphs from C-DEV set were di-\nvided in sets of seven and the sentences were ran-\ndomised in order to prevent annotators from hav-\ning access to context when post-editing. For PE2,\nsentences were put together in their original para-\ngraphs and presented to annotators in context. A\ntotal of112paragraphs were evaluated in16differ-\nent sets, but only sets where more than two annota-\ntors completed the task are presented here (SET1,\nSET2, SET7, SET9, SET14 and SET15).2\n6.1 Task agreement\nTable 5 shows the agreement for the PE1 and PE2\ntasks using Spearman’sρrank correlation. It was\ncalculated by comparing the HTER values of PE1\nagainst MT and PE2 against PE1. “Annotators”\nshows the number of annotators per set.\nThe HTER values of PE1 against PE2 are low,\nas expected, since the changes from PE1 to PE2\nare only expected to reflect discourse related is-\nsues. In other words, no major changes were ex-\npected during the PE2 task. The correlation in\nHTER between PE1 and MT varies from0.22to\n0.56, whereas the correlation in HTER between\nPE1 and PE2 varies between−0.14and0.39. The\nnegativefigures mean that the annotators strongly\ndisagreed regarding the changes made from PE1 to\nPE2. This can be related to stylistic choices made\nby annotators, although further analysis is needed\nto study that (see Section 6.3).\n2Sets with only two annotators are difficult to interpret.125\nSET1 SET2 SET5 SET6 SET9 SET10 SET14 SET15 SET16\nAnnotators 3 3 3 4 4 3 3 3 3\nPE1 x MT - HTER 0.63 0.57 0.22 0.32 0.28 0.18 0.30 0.24 0.18\nPE1 x PE2 - HTER 0.05 0.07 0.05 0.03 0.10 0.06 0.09 0.07 0.05\nPE1 x MT - Spearman 0.52 0.50 0.52 0.56 0.37 0.41 0.71 0.22 0.46\nPE2 x PE1 - Spearman 0.38 0.39 −0.03 −0.14 0.25 0.15 0.14 0.18 −0.02\nTable 5: HTER values for PE1 against MT and PE1 against PE2 and Spearman’s rank correlation values\nfor PE2 against PE1.\n6.2 Issues beyond sentence level\nThe values for HTER among annotators in PE2\nagainst PE1 were averaged in order to provide a\nbetter visualisation of changes made in the para-\ngraphs from PE1 to PE2. Figure 1 shows the re-\nsults for individual paragraphs in all sets. The ma-\njority of the paragraphs were edited in the second\nround of post-editions. This clearly indicates that\ninformation beyond sentence-level can be helpful\nto further improve the output of MT systems. Be-\ntween0and19%of the words have changed from\nPE1 to PE2 (on average7%of the words changed).\nAn example of changes from PE1 to PE2 related\nto discourse phenomena is shown in Table 6. In\nthis example, two changes are related to the use of\ninformation beyond sentence level. Thefirst is re-\nlated to the substitution of the sentence“Das ist\nfalsch”- literal translation of“This is wrong”-\nby“Das ist nicht gut”, whichfits better into the\ncontext. The other change is related to explici-\ntation of information. 
An example of changes from PE1 to PE2 related to discourse phenomena is shown in Table 6. In this example, two changes are related to the use of information beyond the sentence level. The first is related to the substitution of the sentence "Das ist falsch" - a literal translation of "This is wrong" - by "Das ist nicht gut", which fits better into the context. The other change is related to explicitation of information. The annotator decided to change "Hier ist diese Schicht ist dünn" - a literal translation of "Here, this layer is thin" - to "Hier ist die Anzahl solcher Menschen gering", a translation that better fits the context of the paragraph - "Here, the number of such people is low".

PE1:
St. Petersburg bietet nicht viel kulturelles Angebot, Moskau hat viel mehr Kultur, es hat eine Grundlage.
Es ist schwer für die Kunst, sich in unserem Umfeld durchzusetzen.
Wir brauchen das kulturelle Fundament, aber wir haben jetzt mehr Schriftsteller als Leser.
Das ist falsch.
In Europa gibt es viele neugierige Menschen, die auf Kunstausstellungen, Konzerte gehen.
Hier ist diese Schicht ist dünn.

PE2:
St. Petersburg bietet nicht viel kulturelles Angebot, Moskau hat viel mehr Kultur, es hat eine Grundlage.
Es ist schwer für die Kunst, sich in unserem Umfeld durchzusetzen.
Wir brauchen das kulturelle Fundament, aber wir haben jetzt mehr Schriftsteller als Leser.
Das ist nicht gut.
In Europa gibt es viele neugierige Menschen, die auf Kunstausstellungen, Konzerte gehen.
Hier ist die Anzahl solcher Menschen gering.

SRC:
St. Petersburg is not a cultural capital, Moscow has much more culture, there is bedrock there.
It's hard for art to grow on our rocks.
We need cultural bedrock, but we now have more writers than readers.
This is wrong.
In Europe, there are many curious people, who go to art exhibits, concerts.
Here, this layer is thin.

Table 6: Example of changes from PE1 to PE2.

6.3 Manual analysis
In order to better understand the changes made by the annotators from PE1 to PE2 and also to better explain the negative values in Table 5, we manually inspected the post-edited data. This analysis was done by senior translators who were not involved in the actual post-editing experiments. They counted the modifications performed and categorised them into three classes:

Discourse/context changes: changes related to discourse phenomena, which could only be made by having the entire paragraph text.
Stylistic changes: changes related to the translator's stylistic or preferential choices. These changes can be associated with the paragraph context, although they are not strictly necessary under our post-editing guidelines.
Other changes: changes that could have been made without the paragraph context (PE1), but were only performed during PE2.

The results are shown in Table 7. Low agreement in the number of changes and the type of changes among annotators is found in most sets. Although annotators were asked not to make unnecessary changes (stylistic), some of them made changes of this type (especially annotators 2 and 3 from sets 5 and 6, respectively). These sets are also the ones that show negative values in Table 5. Since stylistic changes do not follow a pattern and are related to the background and preferences of the translator, the high number of this type of change for these sets can be the reason for the negative correlation figures. In the case of SET6, annotator 2 also performed several changes classified as "other changes". This may have also led to negative correlation values. However, the reasons behind the negative values in SET16 could include other phenomena, since overall the variation in the changes performed is low. Further analysis considering the quality of the post-editing needs to be done in order to better explain these results.

                   SET1  SET2  SET5  SET6  SET9  SET10  SET14  SET15  SET16
Annotators         12312312312341234123123123123
Discourse/context  23106221022001710400101212011
Stylistic          201101311 00393510 13122600332213
Other              12402222606012042102201121110
Total errors       556185714 6211 94817 65624902665334
Table 7: Manual analysis of PE1 and PE2.

7 Conclusions
This paper focused on judgements of translation quality at the document level with the aim to produce labels for QE datasets. We highlighted issues with the use of automatic evaluation metrics for the task, and proposed and experimented with two methods for collecting labels using human annotators.

Our pilot study for quality assessment of paragraphs in terms of coherence and cohesion proved a very subjective and difficult task. Definitions of cohesion and coherence are vague and the annotators' previous knowledge can play an important role during the annotation task.
Our second method for collecting labels using human annotators is based on post-editing and showed promising results on uncovering issues that rely on wider context to be identified (and fixed). Although some annotators did not follow the task specification and made unnecessary modifications or did not correct relevant errors at the sentence level, overall the results showed that several issues could only be solved with paragraph-wide context. Moreover, even though stylistic changes can be considered unnecessary, some of them could only be made based on wider context.

We will now turn to studying how to use the information reflecting differences between the two rounds of post-editing as labels for QE at the document level. One possibility is to use the HTER between the second and first rounds directly, but this can lead to many "0" labels, i.e. no edits made. Another idea is to devise a function that combines the HTER without context (PE1 x MT) and the difference between PE1 and PE2.

Our findings reveal important discourse dependencies in translation that go beyond QE, with relevance for MT evaluation and MT in general.

Acknowledgments
This work was supported by the EXPERT (EU Marie Curie ITN No. 317471) project.

References
Aziz, Wilker, Sheila Castilho Monteiro de Sousa, and Lucia Specia. 2012. Cross-lingual Sentence Compression for Subtitles. In The 16th Annual Conference of the European Association for Machine Translation, pages 103–110, Trento, Italy.
Blatz, John, Erin Fitzgerald, George Foster, Simona Gandrabur, Cyril Goutte, Alex Kulesza, Alberto Sanchis, and Nicola Ueffing. 2004. Confidence Estimation for Machine Translation. In The 20th International Conference on Computational Linguistics, pages 315–321, Geneva, Switzerland.
Bojar, Ondřej, Christian Buck, Chris Callison-Burch, Christian Federmann, Barry Haddow, Philipp Koehn, Christof Monz, Matt Post, Radu Soricut, and Lucia Specia. 2013. Findings of the 2013 Workshop on Statistical Machine Translation. In The Eighth Workshop on Statistical Machine Translation, pages 1–44, Sofia, Bulgaria.
Carpuat, Marine and Michel Simard. 2012. The Trouble with SMT Consistency.
In The Seventh Workshop on Statistical Machine Translation, pages 442–449, Montreal, Quebec, Canada.
Federico, Marcello, Nicola Bertoldi, Mauro Cettolo, Matteo Negri, Marco Turchi, Marco Trombetti, Alessandro Cattelan, Antonio Farina, Domenico Lupinetti, Andrea Martines, Alberto Massidda, Holger Schwenk, Loïc Barrault, Frederic Blain, Philipp Koehn, Christian Buck, and Ulrich Germann. 2014. The MateCat Tool. In The 25th International Conference on Computational Linguistics: System Demonstrations, pages 129–132, Dublin, Ireland.
Guzmán, Francisco, Shafiq Joty, Lluís Màrquez, and Preslav Nakov. 2014. Using Discourse Structure Improves Machine Translation Evaluation. In The 52nd Annual Meeting of the Association for Computational Linguistics, pages 687–698, Baltimore, MD.
Halliday, Michael A. K. and Ruqaiya Hasan. 1976. Cohesion in English. English Language Series. Longman, London, UK.
He, Yifan, Yanjun Ma, Josef van Genabith, and Andy Way. 2010. Bridging SMT and TM with Translation Recommendation. In The 48th Annual Meeting of the Association for Computational Linguistics, pages 622–630, Uppsala, Sweden.
Li, Junyi Jessy, Marine Carpuat, and Ani Nenkova. 2014. Assessing the Discourse Factors that Influence the Quality of Machine Translation. In The 52nd Annual Meeting of the Association for Computational Linguistics, pages 283–288, Baltimore, MD.
Mann, William C. and Sandra A. Thompson. 1987. Rhetorical Structure Theory: A Theory of Text Organization. Cambridge University Press, Cambridge, UK.
Meyer, Thomas and Bonnie Webber. 2013. Implicitation of Discourse Connectives in (Machine) Translation. In The Workshop on Discourse in Machine Translation, pages 19–26, Sofia, Bulgaria.
Papineni, Kishore, Salim Roukos, Todd Ward, and Wei-Jing Zhu. 2002. BLEU: a Method for Automatic Evaluation of Machine Translation. In The 40th Annual Meeting of the Association for Computational Linguistics, pages 311–318, Philadelphia, PA.
Pitler, Emily and Ani Nenkova. 2009. Using Syntax to Disambiguate Explicit Discourse Connectives in Text. In The Joint Conference of the 47th Annual Meeting of the Association for Computational Linguistics and the Fourth International Joint Conference on Natural Language Processing of the Asian Federation of Natural Language Processing, pages 13–16, Suntec, Singapore.
Potet, Marion, Emmanuelle Esperança-Rodier, Laurent Besacier, and Hervé Blanchon. 2012. Collection of a Large Database of French-English SMT Output Corrections. In The 8th International Conference on Language Resources and Evaluation, pages 23–25, Istanbul, Turkey.
Scarton, Carolina and Lucia Specia. 2014. Document-level translation quality estimation: exploring discourse and pseudo-references. In The 17th Annual Conference of the European Association for Machine Translation, pages 101–108, Dubrovnik, Croatia.
Snover, Matthew, Bonnie Dorr, Richard Schwartz, Linnea Micciulla, and John Makhoul. 2006. A Study of Translation Edit Rate with Targeted Human Annotation. In Proceedings of the Seventh Biennial Conference of the Association for Machine Translation in the Americas, AMTA 2006, pages 223–231, Cambridge, MA.
Soricut, Radu and Abdessamad Echihabi. 2010. TrustRank: Inducing Trust in Automatic Translations via Ranking. In The 48th Annual Meeting of the Association for Computational Linguistics, pages 612–621, Uppsala, Sweden.
Specia, Lucia, Marco Turchi, Nicola Cancedda, Marc Dymetman, and Nello Cristianini. 2009. Estimating the Sentence-Level Quality of Machine Translation Systems. In The 13th Annual Conference of the European Association for Machine Translation, pages 28–37, Barcelona, Spain.
Stanojević, Miloš and Khalil Sima'an. 2014. Fitting Sentence Level Translation Evaluation with Many Dense Features. In 2014 Conference on Empirical Methods in Natural Language Processing, pages 202–206, Doha, Qatar.
Stede, Manfred. 2011. Discourse Processing, volume 4 of Synthesis Lectures on Human Language Technologies. Morgan & Claypool Publishers.
{ "decision": "Unknown", "reviews": [] }
0
0
[]
[]
{ "id": "FpmDluIvkpr", "year": null, "venue": "EAMT 2015", "pdf_link": "https://aclanthology.org/W15-4905.pdf", "forum_link": "https://openreview.net/forum?id=FpmDluIvkpr", "arxiv_id": null, "doi": null }
{ "title": "Can Translation Memories afford not to use paraphrasing?", "authors": [ "Rohit Gupta", "Constantin Orasan", "Marcos Zampieri", "Mihaela Vela", "Josef van Genabith" ], "abstract": "Rohit Gupta, Constantin Orăsan, Marcos Zampieri, Mihaela Vela, Josef van Genabith. Proceedings of the 18th Annual Conference of the European Association for Machine Translation. 2015.", "keywords": [], "raw_extracted_content": "Can Translation Memories afford not to use paraphrasing?\nRohit Gupta1, Constantin Or ˘asan1, Marcos Zampieri2,3, Mihaela Vela2, Josef van Genabith2,3\n1Research Group in Computational Linguistics, University of Wolverhampton, UK\n2Saarland University, Germany\n3German Research Center for Artificial Intelligence (DFKI)\n{r.gupta, c.orasan}@wlv.ac.uk\n{marcos.zampieri, m.vela}@uni-saarland.de\njosef.van [email protected]\nAbstract\nThis paper investigates to what extent the\nuse of paraphrasing in translation mem-\nory (TM) matching and retrieval is use-\nful for human translators. Current trans-\nlation memories lack semantic knowledge\nlike paraphrasing in matching and re-\ntrieval. Due to this, paraphrased seg-\nments are often not retrieved. Lack of se-\nmantic knowledge also results in inappro-\npriate ranking of the retrieved segments.\nGupta and Or ˘asan (2014) proposed an im-\nproved matching algorithm which incorpo-\nrates paraphrasing. Its automatic evalua-\ntion suggested that it could be beneficial\nto translators. In this paper we perform\nan extensive human evaluation of the use\nof paraphrasing in the TM matching and\nretrieval process. We measure post-editing\ntime, keystrokes, two subjective evalua-\ntions, and HTER and HMETEOR to assess\nthe impact on human performance. Our re-\nsults show that paraphrasing improves TM\nmatching and retrieval, resulting in trans-\nlation performance increases when trans-\nlators use paraphrase enhanced TMs.\n1 Introduction\nOne of the core features of a TM system is the\nretrieval of previously translated similar segments\nfor post-editing in order to avoid translation from\nscratch when an exact match is not available. How-\never, this retrieval process is still limited to edit-\ndistance based measures operating on surface form\nc⃝2015 The authors. This article is licensed under a Creative\nCommons 3.0 licence, no derivative works, attribution, CC-\nBY-ND.(or sometimes stem) matching. Most of the com-\nmercial systems use edit distance (Levenshtein,\n1966) or some variation of it, e.g. the open-source\nTM OmegaT1uses word-based edit distance with\nsome extra preprocessing. Although these mea-\nsures provide a strong baseline, they are not suf-\nficient to capture semantic similarity between the\nsegments as judged by humans.\nGupta and Or ˘asan (2014) proposed an edit dis-\ntance measure which incorporates paraphrasing in\nthe process. In the present paper, we perform\na human-centred evaluation to investigate the use\nof paraphrasing in translation memory matching\nand retrieval. We use the same system as Gupta\nand Or˘asan (2014) and investigate the following\nquestions: (1) how much of an improvement can\nparaphrasing provide in terms of retrieval? (2)\nWhat is the quality of the retrieved segments and\nits impact on the work of human translators? 
These\nquestions are answered using human centred eval-\nuations.\nTo the best of our knowledge, this paper presents\nthefirst work on assessing the quality of any type\nof semantically informed TM fuzzy matches based\non post-editing time or keystrokes.\n2 Related Work\nSeveral researchers have used semantic or syntac-\ntic information in TMs, but their evaluations were\nshallow and most of the time limited to subjective\nevaluation carried out by the authors. This makes\nit hard to judge how much a semantically informed\nTM matching system can benefit a translator.\nExisting research (Planas and Furuse, 1999;\nHod´asz and Pohl, 2005; Pekar and Mitkov, 2007;\nMitkov, 2008) pointed out the need for similarity\n1http://www.omegat.org35\ncalculations in TMs beyond surface form compar-\nisons. Both Planas and Furuse (1999) and Hodasz\nand Pohl (2005) proposed to use lemma and parts\nof speech along with surface form comparison.\nHodasz and Pohl (2005) also extend the matching\nprocess to a sentence skeleton where noun phrases\nare either tagged by a translator or by a heuristic\nNP aligner developed for English-Hungarian trans-\nlation. Planas and Furuse (1999) tested a prototype\nmodel on 50 sentences from the software domain\nand 75 sentences from a journal with TM sizes of\n7,192 and 31,526 segments respectively. A fuzzy\nmatch retrieved was considered usable if less than\nhalf of the words required editing to match the\ninput sentence. The authors concluded that the\napproach gives more usable results compared to\nTrados Workbench used as a baseline. Hodasz\nand Pohl (2005) claimed that their approach stores\nsimplified patterns and hence makes it more prob-\nable tofind a match in the TM. Pekar and Mitkov\n(2007) presented an approach based on syntactic\ntransformation rules. On evaluation of the pro-\ntotype model using a query sentence, the authors\nfound that the syntactic rules help in retrieving\nbetter segments.\nRecently, work by Utiyama et al. (2011) and\nGupta and Or ˘asan (2014) presented approaches\nwhich use paraphrasing in TM matching and re-\ntrieval. Utiyama et al. (2011) proposed an ap-\nproach using afinite state transducer. They eval-\nuate the approach with one translator andfind that\nparaphrasing is useful for TM both in terms of\nprecision and recall of the retrieval process. How-\never, their approach limits TM matching to exact\nmatches only. Gupta and Or ˘asan (2014) also use\nparaphrasing at the fuzzy match level and they\nreport an improvement in retrieval and quality of\nretrieved segments. The quality of retrieved seg-\nments was evaluated using the machine translation\nevaluation metric BLEU (Papineni et al., 2002).\nSimard and Fujita (2012) used different MT eval-\nuation metrics for similarity calculation as well as\nfor testing the quality of retrieval. For most of the\nmetrics, the authorsfind that, the metric which is\nused in evaluation gives better score to itself (e.g.\nBLEU gives highest score to matches retrieved\nusing BLEU as similarity measure).\nKeystroke and post-editing time analysis are not\nnew for TM and MT. Keystroke analysis has been\nused to judge translators’ productivity (Langlais\nand Lapalme, 2002; Whyman and Somers, 1999).Koponen et al. (2012) suggested that post-editing\ntime reflects the cognitive effort in post-editing the\nMT output. Sousa et al. (2011) evaluated different\nMT system performances against translating from\nscratch. 
Their study also concluded that subjective evaluations of MT system output correlate with the post-editing time needed. Zampieri and Vela (2014) used post-editing time to compare TM and MT translations.

3 Our Approach and Experiments
We have used the approach presented in Gupta and Orăsan (2014) to include paraphrasing in the TM matching and retrieval process. The approach classifies paraphrases into different types for efficient implementation, based on the matching of the words between the source and the corresponding paraphrase. Using this approach, the fuzzy match score between segments can be calculated in polynomial time despite the inclusion of paraphrases. The method uses dynamic programming along with greedy approximation. The method calculates the fuzzy match score as if the appropriate paraphrases had been applied. For example, if the translation memory has a segment "What is the actual aim of this practice ?" and the paraphrase database has the paraphrases "the actual" ⇒ "the real" and "aim of this" ⇒ "goal of this", then for the input sentence "What is the real goal of this mission ?" the approach will give an 89.89% fuzzy match score (only one word, "practice", needs substitution with "mission") rather than 66.66% using simple word-based edit distance.

In TM, the performance of retrieval can be measured by counting the number of segments or words retrieved. However, NLP techniques are not 100% accurate and, most of the time, there is a tradeoff between the precision and recall of this retrieval process. This is also one of the reasons that TM developers shy away from using semantic matching. One cannot measure the gain unless retrieval benefits the translator.
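As a toy illustration of the example above, one can score a TM segment against the input before and after rewriting the TM side with applicable paraphrases. The sketch below, with hypothetical names, only approximates the actual algorithm of Gupta and Orăsan (2014), which folds paraphrases into the edit-distance computation itself, so the exact percentages it produces differ slightly from those reported above.

```python
# Greedy "rewrite then re-score" approximation of paraphrase-aware fuzzy matching.
def fuzzy_score(a, b):
    a, b = a.lower().split(), b.lower().split()
    prev = list(range(len(b) + 1))          # word-level Levenshtein distance
    for i, wa in enumerate(a, 1):
        cur = [i]
        for j, wb in enumerate(b, 1):
            cur.append(min(prev[j] + 1, cur[j - 1] + 1, prev[j - 1] + (wa != wb)))
        prev = cur
    return 1.0 - prev[-1] / max(len(a), len(b))

paraphrases = {"the actual": "the real", "aim of this": "goal of this"}

def apply_paraphrases(segment, query):
    # apply a paraphrase only when its target side actually occurs in the query
    for src, tgt in paraphrases.items():
        if src in segment.lower() and tgt in query.lower():
            segment = segment.lower().replace(src, tgt)
    return segment

tm_seg = "What is the actual aim of this practice ?"
query = "What is the real goal of this mission ?"
print(round(fuzzy_score(tm_seg, query), 2))                             # plain score
print(round(fuzzy_score(apply_paraphrases(tm_seg, query), query), 2))   # with paraphrasing
```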
Section 3.1 describes the\nsettings and measures used for post-editing evalua-\ntion, and Sections 3.2 and 3.3 describe the settings\nfor the subjective evaluations.\n3.1 Post-editing Time (PET) and Keystrokes\n(KS)\nIn this evaluation, the translators were presented\nwith fuzzy matches and the task was to post-edit\nthe segment in order to obtain a correct translation.\nThe translators were presented with an input En-\nglish segment, the German segment retrieved from\nthe TM for post-editing and the English segment\nused for matching in TM.\nIn this task, we recorded post-editing time\n(PET) and keystrokes (KS). The post-editing time\ntaken for the wholefile is calculated by summing\nup the time taken on each segment. Only one\nsegment is visible on screen. The segment is\nonly visible after clicking and the time is recorded\nfrom when the segment becomes visible until the\ntranslatorfinishes post-editing and goes to the next\nscreen. The next screen is a blank screen so that\nthe translator can have a rest after post-editinga segment. The translators were aware that the\ntime is being recorded. Each translator post-edited\nhalf of the segments retrieved using simple edit\ndistance (ED) and half of the segments retrieved\nusing paraphrasing (PP). The ED and PP matches\nwere presented one after the other (ED at odd\npositions and PP at even positions or vice versa).\nHowever, the same translator did not post-edit the\nmatch retrieved using PP and ED for the same\nsegment: insteadfive different translators post-\nedited the segment retrieved using PP and another\nfive different translators post-edited the match re-\ntrieved using ED.\nPost-editing time (PET) for each segment is the\nmean of the normalised time (N) taken by all\ntranslators on this segment. Normalisation is ap-\nplied to account for both slow and fast translators.\nPET j=n∑\ni=1Nij\nn(1)\nNij=Tij×Avg time on thisfile by all translators\nm∑\nj=1Tij\n(2)\nIn the equations 1 and 2 above, PET jis the post\nediting time for each segmentj,nis the number of\ntranslators,N ijis the normalised time of translator\nion segmentj,mis the number of segments in the\nfile, andT ijis the actual time taken by a translator\nion a segmentj.\nAlong with the post-editing time, we also\nrecorded all printable keystrokes, whitespace and\nerase keys pressed. For our analysis, we consid-\nered average keystrokes pressed by all translators\nfor each segment.\n3.2 Subjective Evaluation with Two Options\n(SE2)\nIn this evaluation, we carried out subjective evalu-\nation with two options (SE2). We presented fuzzy\nmatches retrieved using both paraphrasing (PP)\nand simple edit distance (ED) to the translators.\nThe translators were unaware of the details (ED\nor PP) of how the fuzzy matches were obtained.\nTo neutralise any bias, half of the ED matches\nwere tagged as A and the other half as B, with\nthe same applied to PP matches. The translator\nhas to choose between two options: A is better;\nor B is better. 17 translators participated in this\nexperiment. Finally, the decision of whether ‘ED37\nis better’ or ‘PP is better’ is made on the basis of\nhow many translators choose one over the other.\n3.3 Subjective Evaluation with Three Options\n(SE3)\nThis evaluation is similar to Evaluation SE2 except\nthat we provided one more option to translators.\nTranslators can choose among three options: A is\nbetter; B is better; or both are equal. 
3.2 Subjective Evaluation with Two Options (SE2)
In this evaluation, we carried out a subjective evaluation with two options (SE2). We presented fuzzy matches retrieved using both paraphrasing (PP) and simple edit distance (ED) to the translators. The translators were unaware of the details (ED or PP) of how the fuzzy matches were obtained. To neutralise any bias, half of the ED matches were tagged as A and the other half as B, with the same applied to the PP matches. The translator has to choose between two options: A is better, or B is better. 17 translators participated in this experiment. Finally, the decision of whether 'ED is better' or 'PP is better' is made on the basis of how many translators choose one over the other.

3.3 Subjective Evaluation with Three Options (SE3)
This evaluation is similar to evaluation SE2 except that we provided one more option to translators. Translators can choose among three options: A is better; B is better; or both are equal. 7 translators participated in this experiment.

4 Corpus, Tool and Translators' Expertise
As TM and test data, we have used English-German pairs of the Europarl V7.0 (Koehn, 2005) corpus with English as the source language and German as the target language. From this corpus we have filtered out segments of fewer than seven words and greater than 40 words, to create the TM and test datasets. Tokenization of the English data was done using the Berkeley Tokenizer (Petrov et al., 2006). We have used the lexical and phrasal paraphrases from the PPDB corpus (Ganitkevitch et al., 2013) of L size. In these experiments, we have not paraphrased any capitalised words (but we lowercase them for both the baseline and paraphrasing similarity calculations). This is to avoid paraphrasing any named entities. Table 1 shows our corpus statistics.

              TM          Test Set
Segments      1565194     9981
Source words  37824634    240916
Target words  36267909    230620
Table 1: Corpus Statistics

The translators involved in our experiments were third-year bachelor or masters translation students who were native speakers of German with English language level C1, in the age group of 21 to 40 years, with a majority of female students. Our translators were not expert in any specific technical or legal field. For this reason we did not use such a corpus. In this way we avoid any bias from unfamiliarity or familiarity with domain-specific terms.

4.1 Familiarisation with the Tool
We used the PET tool (Aziz et al., 2012) for all our human experiments. However, settings were changed depending on the experiment. To familiarise translators with the PET tool we carried out a pilot experiment before the actual experiment with the Europarl corpus. This experiment was done on a corpus (Vela et al., 2007) different from Europarl. 18 segments were used in this experiment. While the findings are not included in this paper, they informed the design of our main experiments.

5 Results and Analysis
The retrieval results are given in Table 2. The table shows the similarity threshold for TM (TH), the total number of segments retrieved using the baseline approach (EDR), the additional number of segments retrieved using the paraphrasing approach (+PPR), the percentage improvement in retrieval obtained over the baseline (Imp), the number of segments that changed their ranking and rose to the top because of paraphrasing (RC), and the number of unique paraphrases used to retrieve +PPR (NP) and RC (NPRC).

TH      100     [85, 100)   [70, 85)   [55, 70)
EDR     117     98          225        703
+PPR    16      30          98         311
%Imp    13.67   30.61       43.55      44.23
RC      9       14          55         202
NP      24      49          169        535
NPRC    14      24          92         356
Table 2: Results of Retrieval

Table 2 shows that when using paraphrasing we obtain around a 13.67% increase in retrieval for exact matches and more than 30% and 43% increases in the intervals [85, 100) and [70, 85), respectively. This is a clear indication that paraphrasing significantly improves the retrieval results. We have also observed that different paraphrases are used to bring about this improvement. In the interval [70, 85), 169 different paraphrases are used to retrieve 98 additional segments.

To check the quality of the retrieved segments, human evaluations are carried out.
The sets’ distri-\nbution for human evaluation is given in the Table 3.\nThe sets contain randomly selected segments from\nthe additionally retrieved segments using para-\nphrasing which changed their top ranking.2\nTH 100 [85, 100) [70, 85) Total\nSet1 2 6 6 14\nSet2 5 4 7 16\nTotal 7 10 13 30\nTable 3: Test Sets for Human Experiments\n2The sets are constructed so that a translator can post-edit a\nfile in one sitting. There is no differentiation between the\nevaluations based on sets and all evaluations are carried out\nin both sets in a similar fashion with different translators.38\nPost-editing Subjective Evaluations\nPET KS SE2 (2 Options) SE3 (3 options)\nSeg # ED PP ED PP EDB PPB EDB PPB BEQ\n1 42.98 41.30↑ ↑↑42.40.4↑ ↑↑1 16↑ ↑↑0 7↑↑↑0\n2!+ 13.72 10.65↑ ↑↑2.8 2.4↑ ↑↑10 7↓ ↓↓2 2 3\n3*! 13.88 12.62↑ ↑↑2.0 3.6↓ ↓↓12 5↓ ↓↓4 1↓↓↓2\n4 37.9717.64↑ ↑↑26.26.2↑ ↑↑1 16↑ ↑↑0 6↑↑↑1\n5!+ 21.52 17.69↑ ↑↑22.4 13.2↑ ↑↑13 4↓ ↓↓2 3↑↑↑2\n6!+ 41.14 42.74↓ ↓↓13.2 34.4↓ ↓↓4 13↑ ↑↑2 0 5\n7!+ 33.69 31.59↑ ↑↑34.0 33.4↑ ↑↑10 7↓ ↓↓1 0 6\n8 47.1423.41↑ ↑↑61.66.4↑ ↑↑0 17↑ ↑↑0 7↑↑↑0\n9 22.8914.20↑ ↑↑37.22.2↑ ↑↑0 17↑ ↑↑0 6↑↑↑1\n10 46.89 38.20↑ ↑↑77.6 65.6↑ ↑↑1 16↑ ↑↑0 1 6\n11 58.25 53.65↑ ↑↑82.8 58.8↑ ↑↑0 17↑ ↑↑0 3 4\n12!+ 34.04 45.03↓ ↓↓36.8 39.6↓ ↓↓2 15↑ ↑↑0 6↑↑↑1\n13 30.3421.12↑ ↑↑54.8 39.2↑ ↑↑7 10↑ ↑↑1 1 5\n14!+ 75.50 96.54↓ ↓↓38.8 50.8↓ ↓↓5 12↑ ↑↑0 3 4\nSet1-subtotal 520.02 466.44 532.60 356.20 66 172 12 46 40\n15 24.149.18↑ ↑↑24.00.0↑ ↑↑5 12↑ ↑↑1 5↑↑↑1\n16*+ 28.30 29.20↓ ↓↓23.4 15.4↑ ↑↑11 6↓ ↓↓2 2 3\n17*! 65.64 53.49↑ ↑↑6.2 22.4↓ ↓↓10 7↓ ↓↓2 3↑↑↑2\n18 41.9120.98↑ ↑↑28.02.0↑ ↑↑1 16↑ ↑↑0 6↑↑↑1\n19 29.81 19.71↑ ↑↑23.86.8↑ ↑↑7 10↑ ↑↑2 3↑↑↑2\n20 41.2515.42↑ ↑↑39.03.8↑ ↑↑0 17↑ ↑↑1 5↑↑↑1\n21*! 42.0465.44↓ ↓↓39.4 36.0↑ ↑↑7 10↑ ↑↑1 2 4\n22 29.28 35.87↓ ↓↓17.0 33.4↓ ↓↓12 5↓ ↓↓5 0↓↓↓2\n23 32.6449.49↓ ↓↓11.450.8↓ ↓↓11 6↓ ↓↓2 2 3\n24!+ 59.35 54.54↑ ↑↑79.6 79.2↑ ↑↑17 0↓ ↓↓5 0↓↓↓2\n25 62.51 61.30↑ ↑↑71.0 54.0↑ ↑↑2 15↑ ↑↑0 3 4\n26*! 36.82 41.06↓ ↓↓55.023.4↑ ↑↑1 16↑ ↑↑0 6↑↑↑1\n27!+ 27.2144.02↓ ↓↓24.448.8↓ ↓↓4 13↑ ↑↑1 5↑↑↑1\n28 40.9933.08↑ ↑↑39.624.6↑ ↑↑5 12↑ ↑↑3 4↑↑↑0\n29 52.0131.55↑ ↑↑50.623.4↑ ↑↑2 15↑ ↑↑0 6↑↑↑1\n30*! 43.76 38.76↑ ↑↑38.2 44.6↓ ↓↓15 2↓ ↓↓1 1 5\nSet2-subtotal 657.75 603.17 570.6 468.59 110 162 26 53 33\nTotal 1177.77 1069.61 1103.2 824.79 176 334 38 99 73\nTable 4: Results of Human Evaluation on Set1 (1-14) and Set2 (15-30)\n39\nResults for human evaluations (PET, KS, SE2\nand SE3) on both sets (Set1 and Set2) are given\nin Table 4. Here ‘Seg #’ represents the segment\nnumber, ‘ED’ represents the match retrieved using\nsimple edit distance and ‘PP’ represents the match\nretrieved after incorporating paraphrasing. ‘EDB’,\n‘PPB’ and ‘BEQ’ in Subjective Evaluations repre-\nsent the number of translators who judge ‘ED is\nbetter’, ‘PP is better’ and ‘Both are equal’, respec-\ntively.\n5.1 Results: Post-editing Time (PET) and\nKeystrokes (KS)\nAs we can see in Table 4, improvements were\nobtained for both sets.↑ ↑↑demonstrates cases in\nwhich PP performed better than ED and↓ ↓↓shows\nwhere ED performed better than PP. Entries in bold\nfor PET, KS and SE2 indicate where the results are\nstatistically significant3.\nFor Set1, translators made 356.20 keystrokes\nand 532.60 keystrokes when editing PP and ED\nmatches, respectively. Translators took 466.44\nseconds for PP as opposed to 520.02 seconds for\nED matches. 
This means that by using PP matches,\ntranslators edit 33.12% less (49.52% more using\nED), which saves 10.3% time .\nFor Set2, translators made 468.59 keystrokes\nand 570.6 keystrokes when editing PP and ED\nmatches respectively. Translators took 603.17 sec-\nonds for PP as opposed to 657.75 seconds for ED\nmatches. This means that by using PP matches,\ntranslators edit 17.87% less (21.76% more using\nED), which saves 8.29% time.\nIn total, combining both the sets, translators\nmade 824.79 keystrokes and 1103.2 keystrokes\nwhen editing PP and ED matches, respectively.\nTranslators took 1069.61 seconds for PP as op-\nposed to 1177.77 seconds for ED matches. There-\nfore, by using PP matches, translators edit 25.23%\nless, which saves time by 9.18%. In other words,\nED matches require 33.75% more keystrokes and\n10.11% more time. We observe that the percent-\nage improvement obtained by keystroke analysis\nis smaller compared to the improvement obtained\nby post-editing time. One of the reasons for this\nis that the translator spends a fair amount of time\nreading a segment before starting editing.\n3p<0.05, one tailed Welch’s t-test for PET and KS,χ2test\nfor SE2. Because of the small sample size for SE3, no\nsignificance test was performed on individual segment basis.5.2 Results: Using post-edited references\nWe also calculated the human-targeted transla-\ntion error rate (HTER) (Snover et al., 2006)\nand human-targeted METEOR (HMETEOR)\n(Denkowski and Lavie, 2014). HTER and\nHMETEOR was calculated between ED and PP\nmatches presented for post-editing and references\ngenerated by editing the corresponding ED and\nPP match. Table 5 lists H TER5 and H METEOR 5,\nwhich usefive corresponding ED or PP references\nonly and H TER10 and H METEOR 10, which use all\nten references generated using ED and PP.\nTable 5 shows improvements in both the H TER5\nand H METEOR 5 scores. For Set-1, H METEOR 5\nimproved from 59.82 to 81.44 and H TER5 im-\nproved from 39.72 to 17.634. For Set-2, H ME-\nTEOR 5 improved from 69.81 to 80.60 and H TER5\nimproved from 27.81 to 18.71. We also observe\nthat while ED scores of Set1 and Set2 differ sub-\nstantially (59.82 vs 69.81 and 39.72 vs 27.81), PP\nscores are nearly the same (81.44 vs 80.60 and\n17.63 vs 18.71). 
This suggests that paraphrasing\nnot only brings improvement but may also improve\nconsistency.\nSet-1 Set-2\nED PP ED PP\nHMETEOR 5 59.82 81.44 69.81 80.60\nHTER5 39.72 17.63 27.81 18.71\nHMETEOR 10 59.82 81.44 69.81 80.61\nHTER10 36.93 18.46 27.26 18.40\nTable 5: Results using human targeted references\n5.3 Results: Subjective evaluations\nThe subjective evaluations also show significant\nimprovements.\nIn subjective evaluation with two options (SE2)\nas given in Table 4, from a total of 510 (30×17)\nreplies for 30 segments from both sets by 17 trans-\nlators, 334 replies tagged ‘PP is better’ and 176\nreplies tagged ‘ED is better’5.\nIn subjective evaluation with three options\n(SE3), from a total of 210 (30×7) replies for 30\nsegments from both sets by 7 translators, 99 replies\ntagged ‘PP is better’, 73 replies tagged ‘both are\nequal’ and 38 replies tagged ‘ED is better’6.\n4For HMETEOR, higher is better and for HTER lower is\nbetter\n5statistically significant,χ2test,p <0.001\n6statistically significant,χ2test,p <0.00140\n5.4 Results: Segment wise analysis\nA segment wise analysis of 30 segments from both\nsets shows that 21 segments extracted using PP\nwere found to be better according to PET eval-\nuation and 20 segments using PP were found to\nbe better according to KS evaluation. In subjec-\ntive evaluations, 20 segments extracted using PP\nwere found to be better according to SE2 eval-\nuation whereas 27 segments extracted using PP\nwere found to be better or equally good according\nto SE3 evaluation (15 segments were found to be\nbetter and 12 segments were found to be equally\ngood).\nWe have also observed that not all evaluations\ncorrelate with each other on segment-by-segment\nbasis. ‘!, ‘+ and ‘* next to each segment num-\nber in Table 4 indicate conflicting evaluations: ‘!’\ndenotes that PET and SE2 contradict each other,\n‘+’ denotes that KS and SE2 contradict each other\nand ‘*’ denotes that PET and KS contradict each\nother. In twelve segments where KS evaluation or\nPET evaluation show PP as statistically significant\nbetter, except for two cases all the evaluations also\nshows them better.7For Seg #13 SE3 shows ‘Both\nare equal’ and for Seg #26, PET is better for ED,\nhowever for these two sentences also all the other\nevaluations show PP as better.\nIn three segments (Seg #’s 21, 23, 27) KS evalu-\nation or PET evaluation show ED as statistically\nsignificant better, but none of the segment are\ntagged better by all the evaluations. In Seg #21 all\nthe evaluations with the exception of PET show PP\nas better. In Seg #23, SE3 shows ‘both are equal’.\nSeg #23 is given as follows:\nInput:The next item is the Commission dec-\nlaration on Belarus .\nED:The next item is the Commission State-\nment on AIDS .//Als n ¨achster Punkt folgt die\nErkl¨arung der Kommission zu AIDS.\nPP:The next item is the Commission state-\nment on Haiti .//Nach der Tagesordnung folgt\ndie Erkl ¨arung der Kommission zu Haiti.\nIn Seg #23, apart from “AIDS” and “Haiti” the\nsource side does not differ but the German side\ndiffers. The reason for PP match retrieval was\nthat “statement on” in lower case was paraphrased\nas “declaration on” while in the other segment\n7In this section all evaluations refer to all four evaluations viz\nPET, KS, SE2 and SE3.“Statement” was capitalised and hence was not\nparaphrased. 
If we look at the German side of both\nED and PP, “Nach der Tagesordnung” requires a\nbroader context to accept it as a translation of “The\nnext item” whereas “Als n ¨achster Punkt” does not\nrequire much context.\nIn Seg #27, we observe contradictions between\npost-editing evaluations and subjective evalua-\ntions. Seg #27 is given below (EDPE and PPPE\nare post-edited translations of ED and PP match\nrespectively):\nInput:That would be an incredibly important\nsignal for the whole region .\nED:That could be an important signal for the\nfuture .//Dies k ¨onnte ein wichtiges Signal f ¨ur\ndie Zukunft sein.\nPP:That really would be extremely important\nfor the whole region .//Und das w ¨are wirklich\nf¨ur die ganze Region extrem wichtig.\nEDPE:Dies k ¨onnte ein unglaublich\nwichtiges Signal f ¨ur die gesamte Region\nsein.\nPPPE:Das w ¨are ein unglaublich wichtiges\nSignal f ¨ur die ganze Region.\nIn subjective evaluations, translators tagged PP as\nbetter than ED. But, post-editing suggests that it\ntakes more time and keystrokes to post-edit the PP\ncompare to ED.\nThere is one segment, Seg #22, on which all\nthe evaluations show that ED is better. Seg #22\nis given below:\nInput:I would just like to comment on one\npoint.\nED:I would just like to emphasise one\npoint.//Ich m ¨ochte nur eine Sache betonen.\nPP:I would just like to concentrate on one\nissue.//Ich m ¨ochte mich nur auf einen Punkt\nkonzentrieren.\nIn segment 22, the ED match is clearly closer to\nthe input than the PP match. Paraphrasing “on\none point” as “on one issue” does not improve the\nresult. Also, “konzentrieren” being a long word\ntakes more time and keystrokes in post-editing.41\n6 Conclusion\nOur evaluation answers the two questions previ-\nously raised. We conclude that paraphrasing sig-\nnificantly improves retrieval. We observe more\nthan 30% and 43% improvement for the threshold\nintervals [85, 100) and [70, 85), respectively. The\nquality of the retrieved segment is also signifi-\ncantly better, which is evident from all our hu-\nman translation evaluations. On average on both\nsets used for evaluation, compared to paraphrasing\nsimple edit distance takes 33.75% more keystrokes\nand 10.11% more time when evaluating the seg-\nments who changed their top rank and come up in\nthe threshold intervals because of paraphrasing.\nAcknowledgement\nThe research leading to these results has received\nfunding from the People Programme (Marie Curie\nActions) of the European Union’s Seventh Frame-\nwork Programme FP7/2007-2013/ under REA\ngrant agreement no. 317471.\nReferences\nAziz, Wilker, S Castilho, and Lucia Specia. 2012.\nPET: a Tool for Post-editing and Assessing Machine\nTranslation. InProceedings of LREC.\nDenkowski, Michael and Alon Lavie. 2014. Meteor\nuniversal: Language specific translation evaluation\nfor any target language. InProceedings of WMT-\n2014 Workshop.\nGanitkevitch, Juri, Van Durme Benjamin, and Chris\nCallison-Burch. 2013. Ppdb: The paraphrase\ndatabase. InProceedings of NAACL-HLT, pages\n758–764, Atlanta, Georgia.\nGupta, Rohit and Constantin Or ˘asan. 2014. Incorporat-\ning Paraphrasing in Translation Memory Matching\nand Retrieval. InProceedings of EAMT.\nHod´asz, G ´abor and G ´abor Pohl. 2005. MetaMorpho\nTM: a linguistically enriched translation memory. In\nIn International Workshop, Modern Approaches in\nTranslation Technologies.\nKoehn, Philipp. 2005. Europarl: A parallel corpus\nfor statistical machine translation. 
InMT summit,\nvolume 5, pages 79–86.\nKoponen, Maarit, Wilker Aziz, Luciana Ramos, and\nLucia Specia. 2012. Post-editing time as a measure\nof cognitive effort. InWorkshop on Post-Editing\nTechnology and Practice in AMTA-2012, pages 11–\n20.Langlais, Philippe and Guy Lapalme. 2002. Trans\ntype: Development-evaluation cycles to boost trans-\nlator’s productivity.Machine Translation, 17(2):77–\n98.\nLevenshtein, Vladimir I. 1966. Binary codes capable\nof correcting deletions, insertions, and reversals. In\nSoviet physics doklady, volume 10, pages 707–710.\nMitkov, Ruslan. 2008. Improving Third Genera-\ntion Translation Memory systems through identifi-\ncation of rhetorical predicates. InProceedings of\nLangTech2008.\nPapineni, Kishore, Salim Roukos, Todd Ward, and Wei-\nJing Zhu. 2002. Bleu: a method for automatic\nevaluation of machine translation. InProceedings\nof the ACL, pages 311–318.\nPekar, Viktor and Ruslan Mitkov. 2007. New\nGeneration Translation Memory: Content-Sensivite\nMatching. InProceedings of the 40th Anniversary\nCongress of the Swiss Association of Translators,\nTerminologists and Interpreters.\nPetrov, Slav, Leon Barrett, Romain Thibaux, and Dan\nKlein. 2006. Learning accurate, compact, and\ninterpretable tree annotation. InProceedings of the\nCOLING/ACL, pages 433–440.\nPlanas, Emmanuel and Osamu Furuse. 1999. Formal-\nizing Translation Memories. InProceedings of the\n7th Machine Translation Summit, pages 331–339.\nSimard, Michel and Atsushi Fujita. 2012. A Poor Man\ns Translation Memory Using Machine Translation\nEvaluation Metrics . InProceedings of AMTA.\nSnover, Matthew, Bonnie Dorr, Richard Schwartz, Lin-\nnea Micciulla, and John Makhoul. 2006. A study of\ntranslation edit rate with targeted human annotation.\nInProceedings of AMTA, pages 223–231.\nSousa, Sheila C.M. de, Wilker Aziz, and Lucia Specia.\n2011. Assessing the post-editing effort for automatic\nand semi-automatic translations of dvd subtitles. In\nProceedings of RANLP, pages 97–103.\nUtiyama, Masao, Graham Neubig, Takashi Onishi, and\nEiichiro Sumita. 2011. Searching Translation Mem-\nories for Paraphrases. InMachine Translation Sum-\nmit XIII, pages 325–331.\nVela, Mihaela, Stella Neumann, and Silvia Hansen-\nSchirra. 2007. Querying multi-layer annotation and\nalignment in translation corpora. InProceedings of\nthe Corpus Linguistics Conference CL.\nWhyman, Edward K and Harold L Somers. 1999.\nEvaluation metrics for a translation memory system.\nSoftware-Practice and Experience, 29(14):1265–84.\nZampieri, Marcos and Mihaela Vela. 2014. Quantify-\ning the influence of MT output in the translators per-\nformance: A case study in technical translation. In\nWorkshop on Humans and Computer-assisted Trans-\nlation.42", "main_paper_content": null }
{ "decision": "Unknown", "reviews": [] }
0
0
[]
[]
{ "id": "AZZzysMJqx", "year": null, "venue": "EAMT 2005", "pdf_link": "https://aclanthology.org/2005.eamt-1.18.pdf", "forum_link": "https://openreview.net/forum?id=AZZzysMJqx", "arxiv_id": null, "doi": null }
{ "title": "Augmenting a statistical translation system with a translation memory", "authors": [ "Sanjika Hewavitharana", "Stephan Vogel", "Alex Waibel" ], "abstract": null, "keywords": [], "raw_extracted_content": "126 EAMT 2005 Conference Proceedings Augmenting a Statistical Translation System \nwith a Translation Memory \nSanjika Hewavitharana, Ste phan Vogel, Alex Waibel \nLanguage Technologies Institute \nCarnegie Mellon University, Pittsburgh \nU.S.A. \n{sanjika, vogel+, ahw}@cs.cmu.edu \nAbstract. In this paper, we present a translation memory (TM) based system to augment a \nstatistical translation (SMT) system. It is used for translating sentences which have close \nmatches in the training corpus. Given a test sentence, we first extract sentence pairs from the training corpus, whose source side is similar to the test sentence. Then, the TM system \nmodifies the translation of the sentences by a sequence of substitution, deletion and inser-\ntion operations, to obtain the desired result. Statistical phrase alignment model of the SMT system is used for this purpose. The syst em was evaluated using a corpus of Chinese-\nEnglish conversational data. For close matching sentences, the translations produced by the \ntranslation memory approach were compared with the translations of the statistical decoder. \n1. Introduction \nSpoken language translation has received more \nattention in recent times. Some of the notable attempts include Verbmobil (Wahlster, 2000) and Nespole (Metze et at., 2002). Many corpora have been compiled for this purpose covering various domains, includi ng conversations in \ntravel and medical domains. Typically, these cor-pora contain shorter sentences. For example, in the Basic Travel Expression Corpus (BTEC) cor-pus (Takezawa et al., 2002), the sentences have 6-7 words on average. Another noticeable fea-ture is that they have sentence with similar pat-terns, as shown in Figure 1 with Spanish-Eng-lish sentence pairs from the BTEC corpus. \n \nen qué tipo de trabajo\n estás interesado ? \nwhat kind of job are you interested in ? \nen qué tipo de cosas estás interesado ? \nwhat kind of things are you interested in ? \nen qué tipo de excursiones estás interesado ? \nwhat kind of tour are you interested in ? \nFigure 1: Similar patterns in sentences \nThese three sentences differ only in one word in both Spanish sentences as well as their English translations. For a given test sentence, we often find in the training corpus, a very similar sen-\ntence with few mismatching words; sometimes even an exact matching sentence. Translation memory (TM) systems typically work well in these situations. In its pure form, a TM system is simply a database of past translations, stored as sentence pairs in sour ce and target languages. \nWhenever an exact match is found for a new sentence to be translated, the desired translation is extracted from the translation memory. TM systems have been succes sfully used in Com-\nputer Aided Translations (CAT) as a tool for human translators. \nThere have been attempts to combine trans-\nlation memory with other machine translation approaches. In (Marcu, 2001) an automatically derived TM is used along with a statistical mo-del to obtain translations of higher probability than those found using only a statistical model. Sumita (2001) describes an example-based tech-nique which extracts similar translations and modifies them using a bilingual dictionary. 
Wa-tanabe and Sumita (2003) proposed an exam-ple-based decoder that start with close matching example translations, and then modify them us-ing a greedy search algorithm. Instead of extract-ing complete sentences from the TM, Langlais and Simard, (2002) work on sub sentential le-\nAugmenting a statistical translation sy stem with a translation memory \nEAMT 2005 Conference Proceedings 127 vel. Translations for word sequences are ex-\ntracted from a TM and then fed into a statistical engine to generate the desired translation. \nIn this paper, we present an experiment where \nwe attempted to augment a statistical translation system with a translation memory. For a sen-tence which has a close match in the training corpus, the idea is to start with the available translation and apply specific modifications to produce the desired translation. By a close match, we mean a very similar sentence with only a few mismatching words. \nGiven a test sentence, we extract sentence \npairs from the bilingual training corpus, whose source side is similar to the test sentence. If a close matching sentence is found, we use our TM system to translate it. For each mismatch-ing word in the source side of the close match-ing pair, we identify its translation in the target side. Then a sequence of substitution, deletion and insertion operations is applied to the target side to produce the correct translation. If a close match is not found in the training corpus, we use a statistical translation system to generate the translation. \nThe system was evaluated using a subset of \nthe Chinese-English BTEC corpus. For those close matching sentences, the translations pro-duced by the TM system were compared with the translations produced by the statistical de-coder. In our current experiments TM system did not show an improvement in terms of auto-matic evaluation metrics. However, a subjective human evaluation found that, in several instances, the TM system produced better translations than the statistical decoder. \nIn the following section we explain the TM \nsystem in detail. We also describe the phrase extraction method we used to identify alignments between source words and target words, which is a modified version of the IBM1 alignment model (Brown et al. 1993). In Section 3, we pre-sent the experimental setting and the results of the evaluation. It is followed by a discussion in section 4, and conclusions in section 5. We have identified a number of improvements to the cur-rent system, some of which are already in pro-gress. 2. Translation Memory System \n2.1. Extracting Similar Sentences \nFor each new test sentence F, we find a set of \nsimilar source sentences { F1, F2, …} from the \ntraining corpus. The similarity is measured in terms of the standard edit distance criterion with equal penalties for insert ion, deletion and sub-\nstitution operations. The corresponding set of translations { E\n1, E2,…} is also extracted from \nthe bilingual training corpus. \nFollowing are some close matching sen-\ntences we extracted for the Spanish sentence estoy nerviosa . \ni. estoy resfriado (i have a cold) \nii. estoy cansada (i am tired) \niii. estoy resfriado (i feel chilled) \nIf we select the first match as input to the TM \nsystem, it will generate the translation, i have \nnervous . If instead we select the second match, \nwe get, I am nervous , which is the correct trans-\nlation. Selecting the first best does not always produce better results. 
Therefore, for each test sentence, we select the 10 best matching sentence pairs as candidates for the next step.

If we find an exact match among the extracted sentences, we terminate the search and output its translation as the desired translation of the test sentence. In the case of multiple exact matches (which might have different meanings on the target side), we score each sentence pair (Fk, Ek) using a translation model and a language model and select the best one.

2.2. Modifying Translations of Close Matching Sentences
If an exact match is not found, but a close matching sentence pair (Fk, Ek) is found, then the translation Ek is slightly altered using a statistical translation model to produce the correct result. We start by identifying the words in Fk that have to be changed and the sequence of substitution, deletion, or insertion operations1 required to make it the same as F. For each of these words, we then identify its alignment in the target side Ek. Finally, the aligned words are modified with the identified operations to produce the desired translation. Figure 2 illustrates the substitution operation for a single word.

1 Since there can be many such sequences with the same edit distance, the sequence is not unique.

Figure 2: Steps in the substitution operation: (i) identify the mismatch, (ii) find the alignment, (iii) find the translation, (iv) substitute.

The underlying assumption here is that the same sequence of operations that resolves the mismatch between the test sentence F and the source sentence Fk would produce the correct translation E from Ek. Therefore, it is important to reliably identify the alignments of the words in the source sentence. Our initial experiments with word-to-word alignment did not produce correct translations, since a word on the source side sometimes corresponds to more than one word on the target side.

Therefore, we used a phrase-to-phrase alignment method which allows us to do phrase-level operations. The term phrase is used throughout the paper to indicate any sequence of words, not necessarily in the linguistic sense. We used the same method to identify the candidate translations of the mismatching words in F. In the next section, we explain the PESA phrase extraction method (Vogel et al., 2004) used in the experiments.

2.3. Phrase Extraction via Sentence Alignment (PESA)
Suppose we are searching for a good translation for the source phrase f = f_1 ... f_k, and that we found a sentence in the bilingual corpus containing the same word sequence. We are now interested in identifying a sequence of words e = e_1 ... e_l in the target sentence which is an optimal translation of the source phrase. Although any sequence of words in the target sentence can be a candidate translation, most of them would be deemed incorrect. Some of them would be partial translations, while a small number of candidates would be acceptable or good translations. We want to find these good candidates.

The IBM1 word alignment model aligns each word in the source phrase to all the words in the target phrase with varying probabilities. Typically, only one or two words will have a high alignment probability, which for the IBM1 model is just the lexicon probability.
We now modify the IBM1 alignment with the following constraints:

- for words inside the source phrase we sum probabilities only over the words inside the candidate target phrase, and for words outside the source phrase we sum probabilities only over the words outside the candidate target phrase;
- the position alignment probability, which for the standard IBM1 alignment is 1/I, where I is the number of words in the target sentence, is modified to 1/l inside the source phrase and to 1/(I - l) outside the source phrase.

More formally, we calculate the constrained alignment probability

p_{i_1,i_2}(f \mid e) = \prod_{j=1}^{j_1 - 1} \sum_{i \notin (i_1 .. i_2)} p(f_j \mid e_i) \cdot \prod_{j=j_1}^{j_2} \sum_{i=i_1}^{i_2} p(f_j \mid e_i) \cdot \prod_{j=j_2+1}^{J} \sum_{i \notin (i_1 .. i_2)} p(f_j \mid e_i)

and optimize over the target side boundaries i_1 and i_2,

(i_1, i_2) = \arg\max_{i_1, i_2} \{ p_{i_1,i_2}(f \mid e) \}

where J is the number of words in the source sentence and j_1 .. j_2 are the positions of the source phrase in it. Since word alignment models are asymmetric with respect to aligning one-to-many words, it gives better results when the alignments are calculated for both directions. Similarly, we calculate the alignment probabilities for the other direction:

p_{i_1,i_2}(e \mid f) = \prod_{i=1}^{i_1 - 1} \sum_{j \notin (j_1 .. j_2)} p(e_i \mid f_j) \cdot \prod_{i=i_1}^{i_2} \sum_{j=j_1}^{j_2} p(e_i \mid f_j) \cdot \prod_{i=i_2+1}^{I} \sum_{j \notin (j_1 .. j_2)} p(e_i \mid f_j)
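Before the two directions are interpolated as described next, here is a small sketch of how the constrained score for one direction could be evaluated and maximised over candidate target spans; the lexicon p_lex and the smoothing floor are illustrative assumptions, not part of the original system.

```python
# Constrained IBM1-style score for a candidate target span (i1, i2): source
# positions inside the matched phrase [j1, j2] sum lexicon probabilities only
# over target positions i1..i2, all other source positions only over target
# positions outside that span.
def constrained_score(src, tgt, j1, j2, i1, i2, p_lex, floor=1e-7):
    score = 1.0
    for j, f in enumerate(src):
        if j1 <= j <= j2:
            positions = range(i1, i2 + 1)
        else:
            positions = [i for i in range(len(tgt)) if not i1 <= i <= i2]
        score *= sum(p_lex.get(f, {}).get(tgt[i], floor) for i in positions) or floor
    return score

def best_target_span(src, tgt, j1, j2, p_lex):
    # exhaustive argmax over candidate boundaries, as in the equation above
    return max((constrained_score(src, tgt, j1, j2, i1, i2, p_lex), (i1, i2))
               for i1 in range(len(tgt)) for i2 in range(i1, len(tgt)))
```

The reverse direction and the log-linear interpolation described below can be added in the same way.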
When more than one close matching sentence is found, the above process is iteratively applied to all of them and the best one is selected as the resultant translation.

3. Evaluation

3.1. Corpus

For the evaluation we used a subset of the BTEC which contains travel conversations in Chinese and English. The corpus was originally created in Japanese and English by ATR (Takezawa et al., 2002) and was later extended to other languages including Chinese. Our training set contained 20,000 sentence pairs, where the Chinese sentences were already word segmented. Table 1 summarizes the statistics of the training set.

             Chinese    English
Sentences     20,000     20,000
Words        182,902    188,935
Vocabulary     7,645      7,181
LM PP              —       68.6
Table 1: Training data statistics

We used a development set (Dev) to tune the parameters of the system and a final test set (Test) to evaluate the tuned system. It was assumed that the word segmentation of the test data matches the word segmentation of the training data. 16 reference translations per sentence were used for the evaluation. Table 2 gives the details of the two test sets.

                Dev     Test
Sentences       506      500
Words          3515     4108
Vocabulary      870      893
Unknown Words   160      104
Table 2: Test data statistics (Chinese)

3.2. Language Model

A standard trigram language model was used to evaluate the translations produced by the TM system, as well as in the statistical decoder. We used the SRI language model toolkit (SRI-LM Toolkit) to build the language model using the English data of the training set. Table 1 also contains the language model perplexity (LM PP).

3.3. Statistical Translation System

We used a statistical machine translation (SMT) decoder which allows phrase-to-phrase translation using the phrase extraction method explained in section 2.3. The decoding process works in two stages: First, the word-to-word and phrase-to-phrase translations and, if available, other specific information like named entity translation tables are used to build a translation lattice. A standard n-gram language model is then applied to find the best path in this lattice. Standard pruning strategies are employed to keep the decoding time within reasonable bounds. Details of the system are described in (Vogel et al., 2003) and (Vogel, 2003).

Our SMT system and the TM system are closely connected, since we use the same IBM1 translation lexicon, language model and phrase extraction method in both systems. This contrasts our approach with a multi-engine approach, where results from different, often independent, translation systems are integrated.

3.4. Evaluation

We extracted similar sentences from the training data using the edit distance criterion. Table 3 gives the similarity statistics for both development and test set, based on the best match.

               Dev    Test
Exact match     27      30
1 mismatch     103     104
> 1 mismatch   376     366
Table 3: Best matching cases

For the 506-sentence development set, 5% of the sentences had an exact match in the training corpus. Another 20% of the sentences could be matched with one substitution, deletion or insertion. For the 500-sentence test set, these values were 6% and 20% respectively.

We tested the TM system for the test sentences that have exact matches or close matches with only one mismatching word. There are 130 sentences in the development set which satisfy this condition, and in the test set there are 134 sentences.
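The best-match statistics in Table 3 can be gathered with a simple word-level edit-distance pass over the training corpus; the following sketch (an assumed illustration, not the authors' code) shows the idea:

# Minimal sketch (assumed, not the authors' code) of how the best-match
# statistics in Table 3 can be collected: for each test sentence, find the
# training source sentence with the smallest word-level edit distance.

def edit_distance(a, b):
    """Word-level Levenshtein distance between token lists a and b."""
    d = list(range(len(b) + 1))
    for i, wa in enumerate(a, 1):
        prev, d[0] = d[0], i
        for j, wb in enumerate(b, 1):
            cur = min(d[j] + 1, d[j - 1] + 1, prev + (wa != wb))
            prev, d[j] = d[j], cur
    return d[len(b)]

def best_match_statistics(test_sentences, train_sentences):
    """Count exact matches, single mismatches, and everything else."""
    counts = {"exact": 0, "1 mismatch": 0, "> 1 mismatch": 0}
    for sent in test_sentences:
        best = min(edit_distance(sent, t) for t in train_sentences)
        if best == 0:
            counts["exact"] += 1
        elif best == 1:
            counts["1 mismatch"] += 1
        else:
            counts["> 1 mismatch"] += 1
    return counts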
Translation results are reported in Bleu (Papineni, 2001) and NIST mteval (Mteval, 2002) scores. The NIST mteval script version 11a was used to calculate both the NIST and Bleu scores.

We used the SMT system to generate translations for the complete data set. Parameters of the SMT system were tuned to generate translations with high NIST scores. (Footnote 2: This discrepancy between Bleu and NIST scores is due to the different method used to calculate the length penalty for the Bleu metric in the current mteval script. This problem arises only when several reference translations are available and when they are very short, as is the case with the BTEC data.) The translations corresponding to exact matches or one mismatch were then replaced by those produced by the TM system. We tested the system with two different settings: only considering the single best matching sentence (TM 1-Best), and considering up to 10 best matching sentences (TM n-Best). Table 4 gives the final translation results.

            Dev            Test
            Bleu    NIST   Bleu    NIST
TM 1-Best   38.8    7.84   36.8    8.16
TM n-Best   39.3    7.86   37.8    8.27
SMT Alone   39.1    7.90   37.9    8.31
Table 4: Translation results

4. Discussion

As can be seen in Table 4, the translation memory did not produce improved results in terms of NIST score. For the development set it has slightly better results with respect to the Bleu score. Use of the n-best list of close matching sentences, rather than only the best matching sentence, did produce better results. Still, there is a small drop in NIST scores. However, the differences are not statistically significant. (Footnote 3: The 95% confidence intervals for the data set are: for Bleu [-3.0, +3.0], i.e. ±8% relative difference; for NIST [-0.4, +0.4], i.e. ±5% relative difference.)

When the translations of the two methods are compared, in several instances the TM system produced better quality translations than the SMT system. Some of the notable examples are given in Figure 3.

Ref:  how much does it cost to send this to japan
SMT:  please send this to japan how much is it
TM:   what is the cost for sending this to japan

Ref:  do i have to transfer to get there
SMT:  i'd like to change trains to get there
TM:   do i have to change buses to get there

Ref:  could you repeat that please
SMT:  would you please say it again please
TM:   would you say it again please

Ref:  what is today's date
SMT:  what is today's number
TM:   what 's the date today

Ref:  i don't know my size
SMT:  i don't know my size
TM:   nobody knows size

Ref:  where's the ladies' restroom
SMT:  where's ladies' bathroom
TM:   where are the restrooms

Figure 3: Sample translations

For each sentence, one reference translation, the result of the SMT system and the result of the TM system are provided. The top part of Figure 3 contains examples where the TM system generated better translations than the SMT system. In the last 2 examples, the SMT translation is better than the TM translation.

We conducted a subjective evaluation to compare the quality of the translations. Two persons were asked to compare the translations of the TM system and the SMT system for those 130 sentences with exact matches or only one mismatch. They were asked to mark each sentence with one of the following:

A – Translation 1 is better than Translation 2.
B – Translation 2 is better than Translation 1.
C – Both translations are comparable in quality.
Evaluators were not aware of which system generated which result. We also shuffled the translations for each sentence so as to further remove any bias towards a particular system. Table 5 gives the subjective evaluation results for evaluators E1 and E2.

              E1           E2
              #     %      #     %
SMT Better    11     8     17    13
TM Better     37    29     37    29
Comparable    82    63     76    58
Table 5: Subjective evaluation results

According to the subjective evaluations, for 29% of the sentences the TM system produced better quality translations than the SMT system. On average, 11% of the sentences are better translated by the SMT system. Nearly 60% of the time both systems produced translations of comparable quality.

When the full data set is considered, 5% of the sentences have better translations after combining the TM system with the SMT system.

Why is this improvement not reflected in the automatic evaluation scores? A possible explanation is as follows: the differences we observe in the subjective evaluations are at the sentence level, whereas automatic metrics work on the word level. Therefore, these metrics might not be able to capture the subtle differences in quality between the two systems, as in the cases listed in Figure 3. For example, consider the n-gram precision of the Bleu metric for example 1 in Figure 3. The TM system translation has 1 trigram, 2 bigram and 4 unigram matches. The SMT system translation has 1 4-gram, 2 trigram, 4 bigram and 7 unigram matches. This would give a higher Bleu score to the SMT translation than to the TM translation, although the TM translation is clearly the better one.

The phrase extraction method used in the SMT system allows alignments of any length. For sentences that are close or exact matches to those found in the training corpus, this allows the extraction of longer phrases, or even the full translation. Therefore, the SMT system can generate the desired translation fairly easily with less re-ordering. In other words, in these situations the SMT system works as a translation memory. This makes it a stronger baseline. Further, scoring the translations produced by the TM system using an SMT-based translation model might introduce a bias towards translations that are closer to those produced by the SMT system.

5. Conclusions and Future Work

In this paper we presented a translation memory system which can enhance the translations of a statistical machine translation system. We also presented a phrase alignment approach which finds the target phrase for a given source phrase by optimizing the alignment probability for the entire sentence pair. The TM system did not show an improvement over the SMT baseline in terms of automatic evaluation metrics. However, a subjective evaluation found that the TM system generates better quality translations, resulting in a 5% overall improvement for the combined system. We plan to extend this work in a number of directions: 1. Allow more than one mismatch between the test sentence and the sentences in the training corpus, especially for longer sentences. 2. Use additional information, such as parts of speech, for more discriminative matching between sentences. 3. Integrate the SMT system and the TM system using a better criterion than just the number of mismatches.

Perhaps a more interesting direction would be to use the TM system within the phrase search itself.
The current phrase search only extracts exact matching phrases. Using the same repair operations we use in our TM system, we would be able to extract close matching phrases, repair them and use them in the SMT decoder.

6. References

Peter F. Brown, Stephen A. Della Pietra, Vincent J. Della Pietra, and Robert L. Mercer (1993). The mathematics of statistical machine translation: Parameter estimation. Computational Linguistics, 19(2):263-311.

Philippe Langlais and Michel Simard (2002). Merging Example-Based and Statistical Machine Translation: An Experiment. In Proceedings of the 5th Conference of the Association for Machine Translation in the Americas (AMTA), pp. 104-114, Tiburon, California, October.

Daniel Marcu (2001). Towards a Unified Approach to Memory- and Statistical-Based Machine Translation. In Proceedings of the 39th Annual Meeting of the Association for Computational Linguistics (ACL), pp. 378-385, Toulouse, France, July.

Florian Metze, J. McDonough, H. Soltau, C. Langley, A. Lavie, L. Levin, T. Schultz, A. Waibel, R. Cattoni, G. Lazzari, N. Mana, F. Pianesi, E. Pianta (2002). The NESPOLE! Speech-to-Speech Translation System. In Proceedings of HLT, San Diego, California, USA, March.

MTeval (2002). NIST MT Evaluation Kit Version 11a. Available at: http://www.nist.gov/speech/test/mt/.

Kishore Papineni, Salim Roukos, Todd Ward and Wei-Jing Zhu (2001). Bleu: a Method for Automatic Evaluation of Machine Translation. Technical Report RC22176 (W0109-022), IBM Research Division, T.J. Watson Research Center.

SRI-LM. The SRI Language Modeling Toolkit. SRI Speech Technology and Research Laboratory. http://speech.sri.com/projects/srilm/

Eiichiro Sumita (2001). Example-based machine translation using DP-matching between word sequences. DDMT workshop at the 39th Annual Meeting of the Association for Computational Linguistics (ACL), pp. 1-8.

Toshiyuki Takezawa, Eiichiro Sumita, Fumiaki Sugaya, Hirofumi Yamamoto, and Seiichi Yamamoto (2002). Towards a broad-coverage bilingual corpus for speech translation of travel conversations in the real world. In Proceedings of LREC 2002, pp. 147-152, Las Palmas, Canary Islands, Spain, May.

Stephan Vogel (2003). SMT Decoder Dissected: Word Reordering. In Proceedings of the 2003 IEEE International Conference on Natural Language Processing and Knowledge Engineering, pp. 561-566, Beijing, China, October.

Stephan Vogel, Ying Zhang, Fei Huang, Alicia Tribble, Ashish Venugopal, Bing Zhao, Alex Waibel (2003). The CMU Statistical Translation System. In Proceedings of MT Summit IX, New Orleans, LA, USA, September.

Stephan Vogel, Sanjika Hewavitharana, Muntsin Kolss and Alex Waibel (2004). The ISL Statistical Translation System for Spoken Language Translation. In Proceedings of the International Workshop on Spoken Language Translation (IWSLT), pp. 65-72, Kyoto, Japan, September.

Wolfgang Wahlster, ed. (2000). Verbmobil: Foundations of Speech-to-Speech Translation. Springer.

Taro Watanabe and Eiichiro Sumita (2003). Example-based Decoding for Statistical Machine Translation. In Proceedings of MT Summit IX, New Orleans, LA, USA, September.", "main_paper_content": null }
{ "decision": "Unknown", "reviews": [] }
0
0
[]
[]
{ "id": "w3NerSo9FGx", "year": null, "venue": "EAMT 2006", "pdf_link": "https://aclanthology.org/2006.eamt-1.12.pdf", "forum_link": "https://openreview.net/forum?id=w3NerSo9FGx", "arxiv_id": null, "doi": null }
{ "title": "A Flexible Online Server for Machine Translation Evaluation", "authors": [ "Matthias Eck", "Stephan Vogel", "Alex Waibel" ], "abstract": null, "keywords": [], "raw_extracted_content": "A Flexible Online Server for Machine Translation Evaluation \nMatthias Eck, Stephan Vogel, and Alex Waibel \nInterACT Research \nCarnegie Mellon University \nPittsburgh, PA, 15213, USA \n{matteck, vogel, waibel}@cs.cmu.edu \nAbstract. We present an Online Server for Machine Translation Evaluation that offers \nimprovements over the standard usage of the typical scoring scripts. Users are able to \ninteractively define their own test sets, experiments and pre-processing steps. Several \nscores are automatically calculated for submitted translations and the hypotheses and scores \nare organized and archived for later review. The server offers a nice web based user \ninterface.\n1. Introduction \nEvaluating machine translation hypotheses is a \nvery important part of the ongoing research. \nAutomatic scoring metrics allow a fast \nevaluation of translations and a quick turn-\naround for experiments. Researchers rely on the \nevaluation metrics to measure performance \nimprovements gained by new approaches. \nThe well-known automatic scores for \nmachine translation are BLEU and NIST but a \nvariety of other scores is available (Papineni, \nRoukos, Ward, and Zhu, 2002; Doddington, \n2001; Banerjee and Lavie, 2005). Most of the \nscores rely on software programs or scripts that \nexpect a variety of parameters including the \nhypothesis and reference files. The software \nthen calculates the appropriate score. For most \napplications the files have to be in a special \nSGML file format that tags the different parts \nof the hypothesis or reference file. \nIt is especially difficult for newcomers or for \npeople who just want to get a glimpse of the \npossibilities to use these software programs. An \nexperienced developer will most probably have \na sophisticated setup for translation scoring but \nthis will take a while for a beginner. \nThe web server application presented here \ntries to circumvent some of the difficulties of \nscoring machine translation output. The online \nuser interface offers an interactive environment \nin which test sets and experiments can be \ndefined and hypotheses can be scored. The \nserver stores the submitted translations for later review. It also offers directly accessible web \nservices that allow score calculation in scripts \nand software programs based on the defined test \nsets. \n2. Related Work \nOnline Servers for Competitive Evaluations \nDifferent online servers have been used to \nevaluate translations for a variety of \ncompetitive evaluations. Especially notable are \nthe evaluation servers for the NIST MT \nEvaluations (NIST, 2001-2006) and for the \nEvaluations in the International Workshops for \nSpoken Language Translation (IWSLT) in the \nyears 2004 and 20051 (Akiba, Federico, Kando, \nNakaiwa, Paul, and Tsuji, 2004; Eck and Hori, \n2005). All of these evaluation servers were \ngeared towards the needs of a competitive \nevaluation. The main goal was to make it easier \nfor the organizers to handle a large amount of \ntranslation submissions and not necessarily to \nsupport the research of the participants. The \nservers did for example not show any scores \nduring the actual evaluation period so that \ntuning the systems was impossible. The servers \nalso did not provide any possibility to the \nparticipants to set up their own test sets. 
\n \n1 The server presented here was developed based on \nthe server used for IWSLT 2005. \n \n \nEvaluation Application \nAnother similar work is the EvalTrans tool \npresented in Nießen, Och, Leusch, and Ney \n(2000). Here the focus is on a locally installed \ntool that allows better and faster human \nevaluation by having a nice interface to support \nthe evaluators. This tool is able to automatically \nextrapolate known human scores to similar \nsentences and give a prediction of the actual \nhuman score. Automatic evaluation scores can \nalso be calculated. \n3. Standard Scoring Routine \nSGML File Format \nFor most scoring software the first step is to \nconvert the hypothesis (candidate translation), \nreference and sometimes source files into an \nSGML defined format. SGML here offers \nadditional flexibility compared to standard text \nfiles, mainly, the possibility of having different \nreference translations for a given sentence. \nFigure 1 shows how a simple SGML-tagged \nhypothesis could look like (with the appropriate \nvalues filled in). \n \n<TSTSET setid=\"setid\" trglang=\"language\" srclang=\"language\"> \n<DOC docid=\"docid\" sysid=\"sysid\"> \n<SEG id=1>hypothesis sentence 1</SEG> \n<SEG id=2>hypothesis sentence 2</SEG> \n… \n</DOC> \n</TSTSET> \nFigure 1: SGML tagged translation hypothesis \nThe main difference for an SGML-tagged \nreference file is <REFSET> that replaces \n<TSTSET> as the main tag. It is also possible to \nhave different <DOC> tags within one file that \ncan be used to provide more than one reference \ntranslation per sentence (see Figure 2). Some \nscripts also expect the original source file to be \nin SGML format. \n \n<REFSET setid=\"setid\" trglang=\"language\" srclang=\"language\"> \n<DOC docid=\"docid\" sysid=\"reference1\"> \n<SEG id=1>reference 1 sentence 1</SEG> \n<SEG id=2>reference 1 sentence 2</SEG> \n… \n</DOC> \n<DOC docid=\"docid\" sysid=\"reference1\"> \n<SEG id=1>reference 2 sentence 1</SEG> \n<SEG id=2>reference 2 sentence 2</SEG> \n… \n</DOC> \n</REFSET> \nFigure 2: SGML tagged reference translation \nInvoke Scoring Software \nAfter this step the actual command to execute \nthe scoring script is similar to: \n$ scoretranslation -r referencefile -s sourcefile -t hypothesisfile \n \nMost machine translation scoring procedures \nfollow this setup with slight changes and \npossibly additional parameters and options. \nAnnoyances \nWhile none of these steps is very inconvenient \nthere are a number of little annoyances in the \nwhole process as SGML files have to be \nprepared and scoring scripts downloaded and \ninstalled. It is also necessary to find out the \ncorrect usage of the scoring scripts via user \nunfriendly command line interfaces. Files tend \nto be distributed over several directories with \nlong pathnames which makes it especially hard \nto find the translations after a couple of months. \nIt is also necessary to make sure that the same \npreprocessing steps are always applied. \n \n4. Server for Online Evaluation \n4.1. Requirements \nTypical researchers in machine translation will \nhave a number of training and test sets. After \nimplementing new ideas or changing any part \nof the pre- or post-processing, training or \ndecoding they will compare the automatic \nscores on a test set with the baseline score. \nSystematically trying different parameter \n \n settings for the new approach and comparing \nthe results leads to maximizing its impact. 
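To make the SGML preparation step of Section 3 more tangible, a plain-text hypothesis file can be wrapped programmatically into the format shown in Figure 1. The following minimal Python sketch is our own illustration (not part of the server); attribute values are placeholders and real scoring scripts may expect additional attributes:

# Minimal sketch of wrapping a plain-text hypothesis file into the SGML test-set
# format shown in Figure 1 above. The helper name and escaping are ours; real
# scoring scripts may expect additional attributes.
from xml.sax.saxutils import escape

def wrap_hypothesis(lines, setid, srclang, trglang, docid, sysid):
    out = [f'<TSTSET setid="{setid}" trglang="{trglang}" srclang="{srclang}">',
           f'<DOC docid="{docid}" sysid="{sysid}">']
    for i, line in enumerate(lines, start=1):
        out.append(f'<SEG id={i}>{escape(line.strip())}</SEG>')
    out += ['</DOC>', '</TSTSET>']
    return "\n".join(out)

# e.g. open("hyp.sgm", "w").write(
#     wrap_hypothesis(open("hyp.txt"), "myset", "es", "en", "doc1", "smt1"))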
While every experiment could use a different test set, it is common practice to reuse test sets to be able to compare the scores to earlier experiments. The public availability of test sets from the well-known competitive evaluations also allows other researchers to easily compare published scores with their own results.

The goal of the server presented here is to support researchers during their work with fast automatic evaluation scores. The user should be able to define test sets and experiments and score translations without any need to know anything about the inner workings of the scoring scripts, their parameters or file formats. The application in its current form is mainly geared towards the support of machine translation research but could also be extended or used for other text-based scores, most notably evaluations of Automatic Speech Recognition systems.

4.2. Design and Implementation

The initial requirement is that there is a concept of a test set with reference and source that can be used to score translations. We also decided to add an additional layer of abstraction with the introduction of the “Experiment” concept. An “Experiment” consists of a test set and additional information about which pre-processing steps to take and which scores to calculate. But it also, and especially, serves as a means of organizing translations for different approaches that use the same test set. The overall design is shown in Figure 3. This diagram illustrates the relationships and variables for the concepts “Test Set”, “Experiment” and “Translation”. The underlying database is modeled according to this diagram.

[Figure 3: General design of the underlying data structure. A Test Set (reference file, source file, optional reference SGML, source language, target language) is used by an Experiment (preprocessing steps, scores to calculate), which in turn collects Translations (hypothesis file, calculated scores).]

Figure 4 shows the practical application of this design. The same test set can be used in three different experiments. Two of these experiments use the same preprocessing while the third experiment applies different preprocessing.

[Figure 4: Example test set used in 3 experiments. Experiments 1 and 2 share pre-processing A, Experiment A uses pre-processing B; each experiment collects its own systems (baseline, improved, best, ...).]

User Interface and Web Services

The online user interface is intended to be clean and simple and to give the user easy and intuitive access to the functions of the server. The use of the web interface will be described in Section 4.3.

A more advanced way to access the functions has also been implemented. While a web interface is very convenient for the user, it is very hard to use from the scripts or programs involved in producing a number of translations. Thus, there is also a direct way to score translations using typical programming languages with predefined test sets. The web service technology offers an easy way to accomplish this. Using the SOAP protocol, a web server can provide functions that every programming language with the necessary SOAP libraries can directly access. An example of invoking web services will be given at the end of section 4.3.
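As a rough, non-authoritative illustration of the Test Set / Experiment / Translation relationship in Figure 3, the data model could be expressed as follows; class and field names are paraphrased from the figure and are not the server's actual MySQL schema:

# Rough illustration of the data model described in Section 4.2 (Figure 3).
# Names are paraphrased from the figure; this is not the server's real schema.
from dataclasses import dataclass, field
from typing import Dict, List, Optional

@dataclass
class TestSet:
    name: str
    source_file: str
    reference_file: str
    reference_sgml: Optional[str]      # optional multi-reference SGML file
    source_language: str
    target_language: str

@dataclass
class Translation:
    hypothesis_file: str
    scores: Dict[str, float] = field(default_factory=dict)   # e.g. {"BLEU": ..., "NIST": ...}

@dataclass
class Experiment:
    name: str
    test_set: TestSet
    preprocessing_steps: List[str]     # e.g. ["lowercase", "remove-punctuation"]
    scores_to_calculate: List[str]
    translations: List[Translation] = field(default_factory=list)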
\nImplementation Considerations \nThe server has been implemented as a web \napplication using PHP scripts on an Apache \nwebserver. The database used is MySQL. The \nscoring scripts mainly use Perl and are directly \ncalled from within the PHP scripts. \n4.3. Practical application \nGeneral Information \nThe scoring server is available at: \nhttp://www.is.cs.cmu.edu/mteval \nAll major web browsers can be used to access \nthis website. The following description is lim-\nited to the most important functions. For a more \nin depth description please check the web site \nfor the latest documentation. \nSupported Scores \nThe scoring server right now supports the \ncalculation of the following scores: \n \n· BLEU (Papineni et al., 2002) \n· NIST (Doddington, 2001) version 11b, \navailable from \nhttp://www.nist.gov/speech/tests/mt/resources/ \nscoring.htm \n· 95% confidence intervals for NIST and \nBLEU scores based on 1000 samples \n(Zhang and Vogel, 2004). \n· mWER, mPER (word and position \nindependent error rate based on multiple \nreferences). \n· METEOR (Banerjee and Lavie, 2005) \nversion 0.4.3, available from \nhttp://www.cs.cmu.edu/~alavie/METEOR/ \n \nThe user can select any combination of these \nscores. Especially the confidence intervals can \ntake some time to compute so it might be \nreasonable to not calculate those for every \ntranslation submitted. Missing scores can simply be recalculated for interesting submis-\nsions (e.g. baseline, best systems). \nAdditional scoring metrics can be added to \nthe application if they support the standard \nSGML format with multiple references. Feed-\nback from users will be especially appreciated \nhere. \nRegistering a New User \nFirst a new user has to be registered. After \nentering the required information and a user-\nname and password the user gets access to the \nevaluation server. \nMain Functions \nThe main menu on top of the screen offers the \nthree main functions offered by the server: \n· Submit Translation \n· Define Experiments \n· Define Test sets \nIt also offers the administrative functions to edit \nthe user information and to log out. \nDefining New Test Sets \nA new user will not have any private test sets \nyet, so the first step is to define a new test set. \nA new user will however have access to test \nsets that were defined as public by other users. \nThe form to define a new test set is shown in \nFigure 5. A test set is identified by its name. \nThe user also has the option to give additional \ncomments. The first reference translation and \nthe source file have to be uploaded in plain text \nand the target and source language have to be \nidentified. If it is intended to use more than one \nreference, an additional reference SGML file \ncan be uploaded as well. \nThe test set can either be private or public. A \nprivate test set can only be accessed and used \nby the user who originally defined it while a \npublic test set is accessible by every user. It is \nnecessary to ensure that there are no copyright \nlimitations before a test set can be public. \nDefining New Experiments \nAfter a test set has been defined it can be used \nto define a new experiment (Form in Figure 6). \nAn experiment is also identified by a name \nand the users select one of their test sets as the \nbasis for this experiment. \n \n \n \n \nFigure 5: Form to define new test sets \nThe next step is to define the pre-processing of \nthe uploaded candidate translations. 
It is \npossible to convert the hypothesis to lower case \nand remove or separate the standard punc-\ntuation marks. The users can also enter arbitrary \ncharacters that should be removed or separated. \nThis will be especially useful for languages \nwith a different set of punctuation marks. In the \nlast part of the form the user selects which \nscores should be calculated for this experiment. \n \n \nFigure 6: Form to define new experiments Submitting Translation Hypotheses \nFinally with a defined experiment it is possible \nto submit actual translation hypotheses and \ncalculate the selected scores (Figure 7). \nAfter the translation hypothesis has been \nsubmitted the server will calculate the requested \nscores for the selected experiment. After all \nscores have been calculated the new hypothesis \nwill show up in the list of submitted hypotheses \nwith the respective scores. \n \n \nFigure 7: Form to submit translation hypotheses \nArchiving of Previous Scores \nThis view gives the user a summary of the \nsubmitted translations and scores. It is also \npossible to calculate other scores by clicking on \nthe “-calc-” links or to directly compare the \nhypothesis with the first reference by clicking \non the hypothesis filename. This automatic \narchiving of the previously calculated scores \nand the respective hypotheses is one of the \nmain advantages of the server presented here. \nFigure 9 shows an example overview with 3 \ndifferent experiments and a number of submis-\nsions for each experiment. \nUsage via Web Services \nThe web services defined for this online \nevaluation server allow a direct call of the \nscoring functions from virtually any program-\nming language. \nThe following example (Figure 8) in PHP \nuses the NuSOAP module to call the provided \nfunction. The web site interface will provide \nthe necessary testsetid. For more detailed \ndescriptions please consult the online docu-\nmentation. \n \n \n//Load hypothesis file from disk \n$file=\"hypothesis\"; \n$tstFileHandle = fopen($file,\"r\"); \n$tstFileContent = fread($tstFileHandle, filesize($file)); \n \n \n //Define necessary parameters \n$parameters = array( 'hypothesis' => $tstFileContent, \n 'testsetid' => testsetid \n 'score'=>'BLEU', ); \n//Connect to Web Service \n$soapclient = new soapclient_nusoap \n('http://moon.is.cs.cmu.edu:8080/EvaluationServer2/ \nwebservice.php'); \n//Call Web Service \n$score=$soapclient->call('score',$parameters); \nFigure 8: Web service invocation with PHP 5. Conclusions \nThe web application presented here offers a \nconvenient interface for the evaluation of \nmachine translation hypotheses compared to the \nstandard techniques. The functions are also \navailable via web services for handy usage in \ntypical programming languages. \nWe intend to continue to further improve the \nserver by adding other scores and giving more \ndetailed outputs as well as improved statistical \nanalysis. The hope is that with more feedback \nwe will get a better understanding of what the \nusers actually expect from such a tool and we \nwill try to incorporate those findings. \n \n \nFigure 9: Translation Score Overview Table \n6. References \nAKIBA, Yasuhiro, FEDERICO, Marcello, KANDO, \nNoriko, NAKAIWA, Hiromi, PAUL, Michael and \nTSUJI, Jun’ichi (2004). 'Overview of the IWSLT04 \nEvaluation Campaign'. Proceedings of IWSLT 2004, \nKyoto, Japan. \nBANERJEE, Satanjeev, LAVIE, Alon (2005). \n'METEOR: An Automatic Metric for MT Evaluation \nwith Improved Correlation with Human Judgments'. 
\nACL 2005 Workshop on Intrinsic and Extrinsic \nEvaluation Measures for Machine Translation and/or \nSummarization, Ann Arbor, MI, USA. \nDODDINGTON, George (2001). 'Automatic \nEvaluation of Machine Translation Quality using n-\nGram Co-occurrence Statistics'. NIST Washington, \nDC, USA. \nECK, Matthias and HORI, Chiori (2005). 'Overview \nof the IWSLT 2005 Evaluation Campaign'. \nProceedings of IWSLT 2005, Pittsburgh, PA, USA. \n \nNIEßEN, Sonja, OCH, Franz Josef, LEUSCH, Gregor, \nand NEY, Hermann (2000). 'An Evaluation Tool for Machine Translation: Fast Evaluation for MT \nResearch'. In Proceedings of 2nd International \nConference on Language Resources and Evaluation \n(LREC 2000), Athens, Greece. \nNIST MT Evaluation information and schedule \nhttp://www.nist.gov/speech/tests/mt/schedule.htm \nNational Institute of Standards and Technology, \n2001-2006. \nNuSOAP SOAP Toolkit for PHP \nhttp://sourceforge.net/projects/nusoap/ \nPAPINENI, Kishore, ROUKOS, Salim, WARD, Todd, \nand ZHU, Wei-Jing (2002). 'BLEU: a Method for \nAutomatic Evaluation of Machine Translation'. \nProceedings of ACL 2002, Philadelphia, PA, USA. \nZHANG, Ying and VOGEL, Stephan (2004). \n'Measuring Confidence Intervals for the Machine \nTranslation Evaluation Metrics'. Proceedings of the \nInternational Conference on Theoretical and \nMethodological Issues in Machine Translation (TMI \n2004), Baltimore, MD, USA.", "main_paper_content": null }
{ "decision": "Unknown", "reviews": [] }
0
0
[]
[]
{ "id": "yvj0bXc8egZ", "year": null, "venue": "EAMT 2005", "pdf_link": "https://aclanthology.org/2005.eamt-1.19.pdf", "forum_link": "https://openreview.net/forum?id=yvj0bXc8egZ", "arxiv_id": null, "doi": null }
{ "title": "Adaptation of the translation model for statistical machine translation based on information retrieval", "authors": [ "Almut Silja Hildebrand", "Matthias Eck", "Stephan Vogel", "Alex Waibel" ], "abstract": null, "keywords": [], "raw_extracted_content": "EAMT 2005 Conference Proceedings 133 Adaptation of the Translation Model for Statistical Machine \nTranslation based on Information Retrieval \nAlmut Silja Hildebrand, Matthias Eck, Stephan Vogel and Alex Waibel \nInteractive Systems Laboratory \nCarnegie Mellon University \n5000 Forbes Avenue \nPittsburgh, PA, 15213, USA \[email protected], [email protected], st [email protected], [email protected] \nAbstract. In this paper we present experiments concerning translation model adaptation for \nstatistical machine translation. We develop a method to adapt translation models using in-\nformation retrieval. The approach selects sentences similar to th e test set to form an adapted \ntraining corpus. The method allows a better use of additionally available out-of-domain \ntraining data or finds in-domain data in a mixed corpus. The adapted translation models \nsignificantly improve the translation perfor mance compared to co mpetitive baseline sys-\ntems. \n1. Introduction \nThe goal of this research is to improve the trans-\nlation performance for a Statistical Machine Translation system. The basic approach is to adapt the translation models. \nStatistical machine translation can be de-\nscribed in a more formal way as follows: \n)()|( maxarg)|( maxarg*tPtsP stP t\nt t⋅ = = \nHere t is the target sentence, and s is the source \nsentence. P(t) is the target language model and \nP(s|t) is the translation model used in the de-\ncoder. Statistical machine translation searches for the best target sentence from the space de-fined by the target language model (LM) and the translation model (TM). \nStatistical translation models are usually ei-\nther phrase- or word-based and include most notably IBM1 to IBM4 and HMM (Brown et al., 1993; Vogel et al., 1996). All models use available bilingual training data in the source and target language to estimate their parameters and approximate the translation probabilities. \nTypically, the more data is used to estimate \nthe parameters of the translation model, the bet-ter it can approximate the “true” translation probabilities. This will obvious ly lead to a higher \ntranslation performance. However if a signifi-cant amount of out-of-domain data is added to \nthe training data, translation quality can drop. One reason for this is that a general translation model P(s|t), that was trained on in-domain and \nout-of-domain data, does not fit the topic or style of individual texts. Unfortunately the mean-\ning of quite a number of words and phrases is ambiguous; this results in the fact that their translation highly depends on the topic and con-text they are used in. \nFor example the word ‘leg’ is usually thought \nof as a body part (‘He broke his leg’). In sports, especially bicycling, the word ‘leg’ can also have the meaning of ‘stage’ (‘US Postal wins fourth leg’). Similar to this meaning is the use in aviation with the phrase ‘single leg airline’. \nThis fact would not be a problem if the trans-\nlations for ‘leg’ were the same in every case. But this is rarely true. German for example uses different words for the upper three meanings of ‘leg’. So a translation that might be totally ac-ceptable for one specific topic, applied to test data in another topic will lead to an error in the translation. 
1.1. Basic Idea

Our approach is similar to recent approaches to language model adaptation. We try to find sentences from the training data which are similar to the test sentences. Then we train the translation system only on this selection. This reduced training data hopefully matches the test data better in domain, topic and style, thus improving translation performance.

1. for each test sentence
   • use the test sentence to select the n most similar sentences in the training data
2. build the translation model using only the training sentences found for each test sentence
3. translate with the adapted translation model

1.2. Previous Work

The main idea is based on work that was done for language model adaptation. Mahajan et al. (1999) used similar techniques for language model adaptation in speech recognition. This was applied to Statistical Machine Translation by Eck et al. (2004) and further refined by Zhao et al. (2004). Kim and Khudanpur (2003) used a similar idea for their language model adaptation and introduced the idea of using the likelihood of their first-pass speech recognition result according to the adapted language model to find the optimal number of retrieved documents to use.

There have not been many publications on the adaptation of the translation model for Statistical Machine Translation yet. One method for the adaptation of the translation model was proposed by Wu and Wang (2004). Wu and Wang focus on the actual word alignment and improve it by training different alignment models from in-domain and out-of-domain data. It is necessary for this approach to have at least a small separate amount of in-domain data available.

2. Translation Model Adaptation

2.1. Selecting Sentences using Information Retrieval

For information retrieval we used the source language part of the bilingual training data as the document collection, each sentence representing one document. Using only the source language for the information retrieval has the advantage that it is independent from the quality of the translation system, as no first-pass translation is necessary.

Each sentence from the test data was used as one separate query.

For most of the experiments we used the cosine distance similarity measure with TF-IDF term weights to determine the relevance of a query to a document.

TF-IDF term weighting is widely used in information retrieval. Each document $D_i$ is represented as a vector $(w_{i1}, w_{i2}, \dots, w_{ik})$, where $k$ is the size of the vocabulary. The entry $w_{ij}$ is calculated as:

$$w_{ij} = tf_{ij} \cdot \log(idf_j)$$

$tf_{ij}$ is the weighted term frequency of the j-th word in the vocabulary in the document $D_i$, i.e. the number of occurrences. $idf_j$ is the inverse document frequency of the j-th term, given as

$$idf_j = \frac{\#\,\text{documents}}{\#\,\text{documents containing the } j\text{-th term}}$$

The similarity between two documents is then defined as the cosine of the angle between the two vectors.

2.2. Training an Adapted Translation System

We use the top n similar sentences for each sentence from the test data to train the translation model.

We do not train separate translation models for each sentence, but put all retrieved sentences together to form the new training set. The reason for this is that a translation model trained from only a few hundred sentences is unlikely to give robust probabilities.
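A simplified sketch of this selection step is given below (our illustration; the experiments later in the paper actually use the Lemur toolkit for retrieval). It scores training sentences against each test sentence with TF-IDF weighted cosine similarity and pools the top n hits into one adapted training set, in which duplicates may occur across queries:

# Simplified sketch (not the Lemur-based setup used in the experiments) of the
# selection step in Sections 2.1-2.2.
import math
from collections import Counter

def tfidf_vector(tokens, idf):
    tf = Counter(tokens)
    return {w: tf[w] * math.log(idf.get(w, 1.0)) for w in tf}

def cosine(u, v):
    dot = sum(u[w] * v.get(w, 0.0) for w in u)
    nu = math.sqrt(sum(x * x for x in u.values()))
    nv = math.sqrt(sum(x * x for x in v.values()))
    return dot / (nu * nv) if nu and nv else 0.0

def build_adapted_training_set(test_sents, train_sents, n, keep_duplicates=True):
    # document frequencies over the source side of the training data
    df = Counter(w for s in train_sents for w in set(s))
    num_docs = len(train_sents)
    idf = {w: num_docs / df[w] for w in df}
    train_vecs = [tfidf_vector(s, idf) for s in train_sents]

    selected = []
    for query in test_sents:
        q = tfidf_vector(query, idf)
        ranked = sorted(range(num_docs), key=lambda i: cosine(q, train_vecs[i]),
                        reverse=True)
        selected.extend(ranked[:n])
    if not keep_duplicates:
        selected = list(dict.fromkeys(selected))   # dedupe, keep order
    return selected     # indices into the bilingual corpus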
It can also be expected that a smaller test set will not change its domain so rapidly that a phrase or word translation that is correct at the beginning of the document would be wrong in later sentences. If a particular test set consists of parts from different domains, a solution could be to train separate translation models for these parts of the test set.

It is also relevant to note that this training set can contain duplicate sentences, as the top n retrieval results for different test sentences can contain the same training sentence. (It will certainly contain duplicate sentences for higher values of n, as the adapted training set becomes larger than the amount of available sentences.)

It is questionable whether the duplicates help the translation performance. The duplicates force the translation probabilities towards the more often seen words, which could help, but the adaptation should already take care of this.

We re-did all experiments with removed duplicates in the first experiment to see how the duplicate sentences affect the translations.

In the first experiment we always use a language model built from the entire training data. In some sense this language model does not match the translation models, which were adapted. The general language model does not further support this adaptation. It is even possible that the general language model contradicts a correct translation for a specific topic and another – wrong – path is chosen in the decoding process.

We tried to resolve this mismatch by changing the language model training data as well and used the English part of the adapted training set to train a new adapted language model in a second experiment.

2.3. Language Model Perplexity for Measuring Selection Quality

One unsolved question at this point is how many sentences to select for the adapted training corpus.

As shown in the experiments (see sections 3-5), the optimal size of the adapted training corpus is different for different language pairs, training corpora or test sets. To be able to do a grid search for the optimal selection size, it is necessary to use a development test set with reference translations. The estimate for the optimal selection size then has to be transferable to the actual test set. The optimal number of sentences to select for training might also vary for each individual test sentence.

It would be very useful not to be forced to compare translation scores for many experiments to estimate the selection size.

Following an idea introduced by Kim and Khudanpur (2003), to judge how well a selection of training data fits the test sentence, we measure the perplexity (PPL) of a language model built from this selection against the test sentence. Then we find the perplexity minimum to determine the optimal selection size for each test sentence. Still, the main selection criterion is TF-IDF information retrieval, as we look only at e.g. the top 1000 sentences ranked by TF-IDF retrieval.

Diagram 1 shows the behavior of the perplexity of the language model (LM) built from the top 10, 20, 30, ..., 1000 sentences against the respective test sentence. (For the diagram we randomly chose 4 sentences from the Spanish-English experiment setting; see section 4.)
[Diagram 1: LM perplexities for all selection sizes. Perplexity of the selection LM against four randomly chosen test sentences, plotted for selection sizes from 10 to 1000 sentences.]

Unfortunately, the perplexity curve does not show a nice convex shape for most sentences. There are even sentences where the perplexity minimum is at the first or second batch. The previous experiments have shown that the optimal selection size is definitely bigger than 10 sentences per test sentence. So picking the selection with the lowest perplexity seems not to be reasonable in many cases.

Because information retrieval ranks the sentences according to their term weights, while language model perplexity gives information about matching word order, some of the 10-sentence batches added early to the selection make the perplexity worse, while some 10-sentence batches ranked lower in the TF-IDF retrieval improve the perplexity.

To exploit that additional information, we use the perplexity change each batch of sentences causes as an additional measure for ranking sentences among the top n sentences retrieved by TF-IDF. Because the size of the selection increases over the testing run, the changes in perplexity are not comparable, so the batches cannot be completely re-ranked according to the perplexity change. The batches are only classified as 'good' or 'bad' during the pass. All the 'bad' batches are taken out of the list and are shuffled to the end. Among the good as well as the bad batches we keep the original TF-IDF ranking.

After re-ranking once, the shape of the perplexity curve is already smoother and has a considerably lower perplexity minimum than the original order. After re-ranking a second time, the measured perplexities are even lower (Diagram 2). For most sentences there are already almost no 'bad' batches before the minimum and few 'good' batches after it after re-ranking twice. So it is not worth the computation time to re-rank a third time.

[Diagram 2: Perplexity re-ranking. Perplexity curves for one test sentence without re-ranking, after re-ranking once, and after re-ranking twice.]

After re-ranking, the selection size was determined for each sentence by picking the selection with the lowest perplexity.

This technique can do without any development test set, translation run or even a reference translation to adapt the translation model.

3. Experiments

3.1. Overview

We did our experiments for two different corpora and setups. The first setup, translating Spanish to English in the medical domain, was used to test the basic idea and check different settings.

The experiment translating Chinese to English proves that the ideas can be applied to another domain (tourism) and an overall different scenario, as the out-of-domain data there is much larger than the available in-domain data.

Both experiments use a small amount of in-domain training data and an additional larger amount of out-of-domain data. In both cases just adding the out-of-domain data does not significantly improve the performance of a baseline system that was trained on the in-domain data only.

The adaptation can then be viewed in two different ways: The adaptation can improve word translations by using translations that are more appropriate for the topic. This is the case for the baseline systems that use all available in-domain and out-of-domain data.
For baseline systems that have only been trained on the available small in-domain data, the goal of the adaptation is to cover unknown words. Words that are covered by the available in-domain data can be translated fairly well. The hope is that the additionally selected data will cover previously unknown words.

3.2. Translation System

The applied statistical machine translation system uses IBM1 lexicon transducers and different types of phrase transducers (Zhang et al., 2003; Vogel et al., 1996; Vogel et al., 2003). The language model is a trigram language model with Kneser-Ney discounting built with the SRI Toolkit (SRI, 1995-2004) using only the English part of the training data. This system was used for all experiments.

The best scores for the NIST (Doddington, 2001) or BLEU (Papineni et al., 2002) evaluation metrics are usually achieved using considerably different tuning parameters for the translation system. In the experiments for the Spanish-English translation the system was only tuned towards NIST; in the Chinese-English experiments we tuned the system towards both NIST and BLEU respectively.

4. Experiments Spanish - English

4.1. Test and Training Data

The test data for the Spanish-English experiments consisted of 329 lines of medical dialogues (6 doctor-patient dialogues). It contains 3,399 English words and 3,065 Spanish words (tokens) with one reference translation.

We had 3 different corpora of bilingual training data available. 25,077 lines of medical dialogues can be regarded as in-domain data. Additional out-of-domain data were 2,323 lines of tourism dialogues and 123,416 lines of BTEC data (also tourism domain, general tourist sentences and phrases) described in Takezawa et al. (2002).

Training sets                        #lines     #words (English)   #words (Spanish)
Medical dialogues (in-domain)        25,077     218,788            208,604
Tourism dialogues (out-of-domain)     2,323      26,600             24,375
BTEC data (out-of-domain)           123,416     903,525            852,364
Overall                             150,816   1,148,913          1,085,343
Table 1: Training data sizes for the Spanish-English experiments

4.2. Baseline Systems

We trained two different baseline systems. The first system only uses the medical data. In some sense this is an oracle experiment, because it might not always be known what part of the available data is the actual in-domain data. The second baseline system uses all available training data.

The scores show that the baseline system that only uses the available in-domain data is not necessarily better than the system that uses all data. The best NIST score is actually a little higher for the second baseline system (but not statistically significant). There may be two possible reasons for the improvement using the additional data:

1. It covers 27% of the previously unknown words (36 of 132).
2. It consists of dialogues like the medical data. Those dialogues cover a different topic, but they still might be helpful for the translation as the sentence structure is fairly similar.

System                                  NIST
only in-domain data                     5.1820
in-domain and out-of-domain data        5.2074
Table 2: Baseline system results

In this experiment the translation system was only tuned towards the NIST score.
4.3. Experiment 1: Distinct and Non-distinct Retrieval

For the Spanish-English setting we built the information retrieval index using the Spanish part of all available in-domain and out-of-domain data. (We used the Lemur Toolkit (Lemur) for all information retrieval tasks.) The top n similar sentences for each Spanish test sentence, for n = 30, 50, 100, 200, 300, ..., 1000, were then retrieved from the index, using TF-IDF as the similarity measure.

For n = 50 the selection for the entire test set contained 40% duplicates, 75% for n = 1000.

It is also important to note that the Lemur toolkit sometimes retrieved fewer sentences than was asked for. This happens especially for short sentences, when all remaining sentences have no TF-IDF weight because not even one word matches.

This training set was used to train the new adapted translation models. The LM was trained on the entire training data.

Diagram 3 illustrates the results. The numbers in parentheses on the x-axis denote the number of distinct sentences that were used to train this particular system. The non-distinct training set contained some of those distinct sentences more than once.

[Diagram 3: Distinct and non-distinct retrieval for Spanish-English (NIST scores). NIST scores for the two baselines and for adapted systems built from the top 30 to top 1000 retrieved sentences, with and without duplicates.]

The highest NIST score for this experiment in the non-distinct case was 5.3026 at top 800 retrieved sentences. This training set has about 250,000 sentences (with duplicates) and about 75,000 distinct sentences, which is about half the size of the original training data.

In the distinct case, when the duplicate sentences were removed for the actual training, the highest NIST score was 5.2878 for top 900 (about 80,000 sentences).

4.4. Experiment 2: TM and LM Adaptation

As noted earlier, we always used the baseline (baseline 2) language model for the translations in experiment 1.

In experiment 2 we changed the language model training data as well and used the English part of the adapted training set to train the new language model. This had a bigger impact on the smaller systems, as the adapted and the general LM become more similar for larger selection sizes.

This further improved the best NIST score to 5.3264 (top 200, with about 64,000 sentences of overall training data and just about 32,000 distinct sentences).

Diagram 4 illustrates the results in NIST score. All these experiments were done without removing the duplicate sentences.

[Diagram 4: TM and LM adaptation for Spanish-English (NIST scores). NIST scores for the adapted systems with the general LM versus the adapted LM.]

4.5. Experiment 3: Perplexity-based Selection Size Determination

To find the optimal selection size for the adapted training corpus, we re-ranked the top 1000 sentences retrieved via TF-IDF retrieval. The perplexity was calculated after adding sentences in batches of 10.

In this experiment we always built the language model from the adapted training data, as this worked well for the previous experiments.
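For concreteness, the batch-wise perplexity re-ranking of Section 2.3 that is applied here can be sketched as follows. This is our schematic reading of the description, not the authors' implementation; perplexity(selection, sentence) stands in for building a language model on the selection and measuring the perplexity of the test sentence with it:

# Schematic sketch (not the authors' implementation) of the batch-wise
# perplexity re-ranking from Section 2.3. `batches` is the TF-IDF ranked list
# split into fixed-size batches (10 sentences here, 20 for Chinese-English).

def rerank_once(batches, sentence, perplexity):
    """Keep batches that lower the perplexity ('good') in place and move the
    rest ('bad') to the end, preserving the TF-IDF order within each group."""
    good, bad, selection = [], [], []
    prev_ppl = float("inf")
    for batch in batches:
        selection = selection + batch
        ppl = perplexity(selection, sentence)
        (good if ppl < prev_ppl else bad).append(batch)
        prev_ppl = ppl
    return good + bad

def pick_selection(batches, sentence, perplexity, rerank_passes=2):
    for _ in range(rerank_passes):
        batches = rerank_once(batches, sentence, perplexity)
    # choose the prefix (selection size) with the lowest perplexity
    best, best_ppl, selection = [], float("inf"), []
    for batch in batches:
        selection = selection + batch
        ppl = perplexity(selection, sentence)
        if ppl < best_ppl:
            best, best_ppl = list(selection), ppl
    return best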
Diagram 5 shows the NIST scores for selection sizes picked at the perplexity minimum before re-ranking and after re-ranking once and twice, in comparison to the baselines and the best scores from the previous experiments. The best NIST score of 5.3807 was reached after re-ranking twice.

[Diagram 5: PPL-based selection size and re-ranking for Spanish-English (NIST scores). NIST scores for both baselines, the previous best adapted system, and the perplexity-based selections without re-ranking, re-ranked once, and re-ranked twice (distinct and non-distinct).]

4.6. Summary

The differences between the systems with or without duplicate sentences are not significant. The highest NIST score was reached using a training set that contained duplicates.

Training the language model on a matching adapted data selection clearly improves the performance.

The selection automatically found by perplexity-based selection size determination was able to achieve about the same scores as the best one of a whole set of selection sizes; PPL re-ranking improved slightly over them.

System                              NIST
baseline 1: in-domain data          5.1820
baseline 2: all data                5.2074
best TF-IDF with duplicates         5.3026
best TF-IDF distinct                5.2878
best with LM adaptation             5.3264
best with PPL re-ranking            5.3807
Table 3: Results for each experiment: Spanish-English

5. Experiments Chinese - English

5.1. Test and Training Data

The test data for the Chinese-English experiments consisted of 506 lines of tourism dialogues. The test data contains 3510 Chinese words. There are 16 English references per test sentence available.

The in-domain training data consisted of exactly 20,000 lines of tourism dialogues, also from the BTEC data.

We used an additional 9.1 million lines of TIDES data (mainly Chinese newswires and speeches) to build the index and retrieve the additional data.

Training sets                 #lines        #words (English)   #words (Chinese)
BTEC data (in-domain)         20,000        188,935            175,284
TIDES data (out-of-domain)    9.1 million   144 million        135 million
Table 4: Training data sizes for the Chinese-English experiments

5.2. Baseline System

The baseline system was only trained on the available in-domain data and had a NIST score of 8.1129 and a BLEU score of 0.4621.

It was known from earlier results that a system using all available training data does not improve over this baseline. The vocabulary coverage certainly improves (89 unknown words in the baseline, 4 with the complete TIDES corpus), but the out-of-domain data introduces too many wrong translations. We did not explicitly train another baseline from all data for this reason.

In this in-/out-of-domain data scenario one could argue that adding some data to the small initial system will improve the translation performance, no matter what data is selected. So we selected different numbers of sentences randomly from the complete training corpus and compared the translation results to our adaptive selection. From different random selections only small ones could improve over the baseline (two examples are given in Table 2).
System                                             BLEU     NIST
only in-domain data (20k lines)                    0.4621   8.1129
Randomly selected out-of-domain data, 15k lines    0.4850   8.2262
Randomly selected out-of-domain data, 75k lines    0.4501   7.9482

Table 2: Baseline system results: Chinese-English

This shows the trade-off between a small domain-specific model that cannot cover all words and a larger system that might introduce wrong out-of-domain translations.

5.3. Experiment 4: In-domain/out-of-domain data scenario

With this small amount of in-domain training data at hand we built the index for the out-of-domain data only. The top n similar sentences for each Chinese test sentence, for n = 10, 20, 30, 40, 60, 70, 80, 100, 125, 150, 175, 200, 250 and 300, were then retrieved from the index, using TF-IDF as the similarity measure.

We then added the retrieved sentences from the out-of-domain data to the in-domain data for the training of the translation model.

As we felt that the available in-domain data was too poorly represented, especially as more and more training data was added for larger values of n, we removed the duplicates in all cases. In additional translation runs we also weighted the in-domain data three times (instead of once) in the training to get more robust probabilities for the words already known to be in-domain (denoted by 'weight 3:1' in the diagrams). As expected this especially helped with the larger selection sizes.

The overall best scores were 8.3398 (NIST) and 0.4931 (BLEU). Both scores were obtained with the changed weight for the in-domain data: the best NIST score for the top 60 retrieved sentences, and the best BLEU score for the top 80 retrieved sentences. Diagrams 6 and 7 illustrate the further results. (The number in parentheses on the x-axis denotes the amount of training data in lines that was added to the available in-domain data of 20,000 lines to form the overall training data.)

Diagram 6: Chinese-English: different selection sizes (NIST scores); x-axis: Baseline, Top 10 (6k) through Top 300 (143k), for weight 1:1 and weight 3:1.

Diagram 7: Chinese-English: different selection sizes (BLEU scores); x-axis: Baseline, Top 10 (6k) through Top 300 (143k), for weight 1:1 and weight 3:1.

5.4. Experiment 5: Perplexity based Selection Size Determination

In this data setting we chose a batch size of 20 and re-ranked only the top 800 retrieved sentences in the first and the top 600 in the second perplexity re-ranking run, because of runtime issues due to the big data collection and vocabulary size.

Diagrams 8 and 9 show the NIST and BLEU scores for selection sizes picked at the perplexity minimum before re-ranking and after re-ranking once and twice, in comparison to the baselines and the best scores from the previous experiments.
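The perplexity-based selection size determination used in Experiments 3 and 5 can be sketched as follows. The helpers train_lm and perplexity are placeholders for an n-gram language modelling toolkit and are not part of the original implementation; the batch size corresponds to the values used above (10 for Spanish-English, 20 for Chinese-English).

# Sketch of perplexity-based selection size determination: grow the adapted
# corpus in batches of retrieved sentences and keep the size at which the
# language-model perplexity of the test set is minimal.  train_lm and
# perplexity are hypothetical stand-ins for an LM toolkit.
def select_by_perplexity(ranked_sentences, test_set, batch_size=20):
    best_size, best_ppl = 0, float("inf")
    selection = []
    for start in range(0, len(ranked_sentences), batch_size):
        selection.extend(ranked_sentences[start:start + batch_size])
        lm = train_lm(selection)              # placeholder: build an n-gram LM
        ppl = perplexity(lm, test_set)        # placeholder: score the test data
        if ppl < best_ppl:
            best_size, best_ppl = len(selection), ppl
    return ranked_sentences[:best_size]

# Re-ranking (applied once or twice above) can reuse the LM built from the
# current selection to re-order the candidate sentences, e.g. by ascending
# sentence-level perplexity, before running the size determination again.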
Diagram 8: Perplexity determined selection size and re-ranking (NIST scores); x-axis: baseline, previous best (35k/75k), no re-ranking (72k), re-ranked once (54k), re-ranked twice (50k), for weight 1:1 and weight 3:1.

Diagram 9: Perplexity determined selection size and re-ranking (BLEU scores); x-axis: baseline, previous best (16k/40k), no re-ranking (72k), re-ranked once (54k), re-ranked twice (50k), for weight 1:1 and weight 3:1.

In this experimental setting the re-ranking itself gave no real improvement, but the automatic determination of the selection size was able to reach the same results achieved by trying various selection sizes. The reason might be that the 3:1 weight for in- and out-of-domain data is far from optimal for these selection sizes. The weighting had a big impact on the scores, drowning out the possibly positive effect of re-ranking.

5.5. Summary & Example translations

System                         BLEU     NIST
baseline: in-domain data       0.4621   8.1129
best random                    0.4850   8.2262
best weight 1:1                0.4871   8.2132
best weight 3:1                0.4931   8.3398
best with PPL selection size   0.4924   8.3812

Table 5: Results for each experiment: Chinese-English

Table 6 shows some example translations comparing the reference with the baseline and best system (according to NIST score).

Reference    no-smoking, please.
Baseline     i 'd like a seat please
Best system  i 'd like a no smoking seat please

Reference    can i have a medical certificate?
Baseline     could you give me a medical open
Best system  could you give me a medical certificate

Reference    three glasses of melon juice, please.
Baseline     please give me three of those melon juice please
Best system  please give me three glasses of melon juice please

Reference    excuse me. could you tell me how to get to the getty museum?
Baseline     excuse me could you tell me the way to the art museum yosemite san diego please
Best system  excuse me could you tell me how to get to the museum

Table 6: Example translations: Chinese-English

6. Further results

There are several other similarity measures that are widely used in information retrieval. We compared results using the Okapi similarity measure instead of TF-IDF and found no significant difference in translation quality. Looking at the retrieval result for the whole test set, the portion of retrieved sentences from the TF-IDF retrieval that can be found in the Okapi retrieval result amounts to over 75% for the top 300 sentences per query and over 90% for the top 1000 retrieved sentences per query.

7. Future Work

Different things could be done to further investigate this approach to translation model adaptation. We already tried the TF-IDF and Okapi similarity measures, but those only focus on unigrams. It could be helpful to develop a more sophisticated similarity measure that matches phrases, too. It was demonstrated in Zhao et al. (2004) that language model adaptation could benefit from such an advanced similarity measure and it is certainly possible to apply these ideas here. Other information retrieval techniques like stemmers, the usage of a stop-word list or pseudo feedback could be applied, too.

It might also be beneficial to use training algorithms that allow sentences to have fractional weights.
Section 5.4 showed that tuning the weights for in- and out-of-domain data can give improvements. Determining the best weight in each situation would certainly be helpful and it could be interesting to further investigate this behavior.

Another possible experiment could be to train separate translation models for the in-domain and retrieved out-of-domain data and interpolate those models.

The LM adaptation in the presented experiments is always based on the source side. It is possible that target side LM adaptation approaches as presented in Eck et al. (2004) and Zhao et al. (2004), combined with the TM adaptation as presented in this paper, could further improve the translation performance.

8. Conclusions

We show that it is possible to adapt translation models for statistical machine translation by selecting similar sentences from the available training data. There are improvements in translation performance on two different language pairs and overall different test conditions.

The results show that it is helpful to support this adaptation method by analogically adapting the language model, as this further improves the translation quality.

Using language model perplexity to determine the selection size automatically renders a development test set with reference translations unnecessary. Re-ranking the retrieval result according to LM perplexity even improved translation quality slightly in one of the cases.

With more investigation, especially into optimizing the weights between in- and out-of-domain data, it will hopefully be possible to further improve the translation performance.

References

BROWN, Peter E., DELLA PIETRA, Stephen A., DELLA PIETRA, Vincent J. and MERCER, Robert L. (1993). 'The mathematics of statistical machine translation: Parameter estimation', Computational Linguistics, 19(2), pp. 263-311.

DODDINGTON, George (2001). 'Automatic Evaluation of Machine Translation Quality using n-Gram Co-occurrence Statistics'. NIST, Washington, DC, USA.

ECK, Matthias, VOGEL, Stephan and WAIBEL, Alex (2004). 'Language Model Adaptation for Statistical Machine Translation based on Information Retrieval', Proceedings of LREC 2004, Lisbon, Portugal, May 2004.

KIM, Woosung and KHUDANPUR, Sanjeev (2003). 'Language Model Adaptation Using Cross-Lingual Information', Proceedings of Eurospeech 2003, Geneva, Switzerland, Sept. 2003.

'The LEMUR Toolkit for Language Modeling and Information Retrieval', http://www.cs.cmu.edu/~lemur/

MAHAJAN, Milind, BEEFERMAN, Doug and HUANG, X.D. (1999). 'Improved Topic-Dependent Language Modeling Using Information Retrieval Techniques', IEEE International Conference on Acoustics, Speech and Signal Processing 1999, Phoenix, AZ.

PAPINENI, Kishore, ROUKOS, Salim, WARD, Todd and ZHU, Wei-Jing (2002). 'BLEU: a Method for Automatic Evaluation of Machine Translation', Proceedings of the ACL 2002, Philadelphia, USA.

TAKEZAWA, Toshiyuki, SUMITA, Eiichiro, SUGAYA, Fumiaki, YAMAMOTO, Hirofumi and YAMAMOTO, Seiichi (2002). 'Toward a Broad-coverage Bilingual Corpus for Speech Translation of Travel Conversation in the Real World', LREC 2002 (Third International Conference on Language Resources and Evaluation), Vol. 1, pp. 147-152.

VOGEL, Stephan, NEY, Hermann and TILLMANN, Christoph (1996). 'HMM-based Word Alignment in Statistical Translation', Proceedings of COLING 1996, Copenhagen, August 1996.
VOGEL, Stephan, ZHANG, Ying, HUANG, Fei, TRIBBLE, Alicia, VENUGOPAL, Ashish, ZHAO, Bing and WAIBEL, Alex (2003). 'The CMU Statistical Translation System', Proceedings of MT-Summit IX 2003, New Orleans, LA, Sept. 2003.

WU, Hua and WANG, Haifeng (2004). 'Improving Domain-Specific Word Alignment for Computer Assisted Translation', Proceedings of ACL 2004, Barcelona, Spain, July 2004.

ZHANG, Ying, VOGEL, Stephan and WAIBEL, Alex (2003). 'Integrated Phrase Segmentation and Alignment Algorithm for Statistical Machine Translation', Proceedings of the International Conference on Natural Language Processing and Knowledge Engineering 2003, Beijing, China, Oct. 2003.

ZHAO, Bing, ECK, Matthias and VOGEL, Stephan (2004). 'Language Model Adaptation for Statistical Machine Translation via Structured Query Models', Proceedings of Coling 2004, Geneva, Switzerland, Aug. 2004.", "main_paper_content": null }
{ "decision": "Unknown", "reviews": [] }
0
0
[]
[]
{ "id": "oWneH1of0-i", "year": null, "venue": "EAMT 2020", "pdf_link": null, "forum_link": "https://openreview.net/forum?id=oWneH1of0-i", "arxiv_id": null, "doi": null }
{ "title": "Sockeye 2: A Toolkit for Neural Machine Translation", "authors": [ "Felix Hieber", "Tobias Domhan", "Michael J. Denkowski", "David Vilar" ], "abstract": "Felix Hieber, Tobias Domhan, Michael Denkowski, David Vilar. Proceedings of the 22nd Annual Conference of the European Association for Machine Translation. 2020.", "keywords": [], "raw_extracted_content": null, "main_paper_content": null }
{ "decision": "Unknown", "reviews": [] }
0
0
[]
[]
{ "id": "xNOy04hjz3", "year": null, "venue": "EAMT 2011", "pdf_link": "https://aclanthology.org/2011.eamt-1.10.pdf", "forum_link": "https://openreview.net/forum?id=xNOy04hjz3", "arxiv_id": null, "doi": null }
{ "title": "SMT-CAT integration in a Technical Domain: Handling XML Markup Using Pre & Post-processing Methods", "authors": [ "Arda Tezcan", "Vincent Vandeghinste" ], "abstract": null, "keywords": [], "raw_extracted_content": "SMT-CAT integration in a Technical Domain: \nHandling XML Markup Using Pre & Post-processing Methods \nArda Tezcan \nKatholike Universiteit Leuven \nLeuven, Belgium \nardatezcan @gmail.com Vincent Vandeghinste \nCentre for Computational Linguistics \nKatholike Universiteit Leuven \nLeuven, Belgium \[email protected] \n/c03Abstract \nThe increasing use of eXtensible Markup \nLanguage (XML) is bringing additional \nchallenges to statistical machine transla-\ntion (SMT) and computer assisted trans-\nlation (CAT) workflow integration in the \ntranslation industry. This paper analyzes \nthe need to handle XML markup as a part \nof the translation material in a technical \ndomain. It explores different ways of \nhandling such markup by applying trans-\nducers in pre and post-processing steps. \nA series of experiments indicates that \nXML markup needs a specific treatment \nin certain scenarios. One of the proposed \nmethods not only satisfies the SMT-CAT \nintegration need, but also provides \nslightly improved translation results on \nEnglish-to-Spanish and English-to-\nFrench translations, compared to having \nno additional pre or post-processing \nsteps. \n1 Introduction \nAlthough it took decades for machine transla-\ntion (MT) to find its way into the translation \nbusiness, and although the goal of perfect trans-\nlation performed by machines alone seems to be \nunfeasible, MT systems are increasingly being \nintegrated into the translation and localization \nworkflows. A translation memory (TM) is one of \nthe primary components of current CAT tools in \n/c57/c52/c47/c44/c5c/cb6/c56/c03/c57/c55/c44/c51/c56/c4f/c44/c57/c4c/c52/c51/c03/c44/c51/c47/c03/c4f/c52/c46/c44/c4f/c4c/c5d/c44/c57ion needs (Gar-\ncia, 2005). After being commercialized, TMs \nwere basically used as a database of translation \nunits, which are able to retrieve existing transla-\ntions for the sentences that need to be translated \nagain, increasing the efficiency and productivity \nof a translation task. \n \n \n© 2011 European Association for Machine Translation. \n \nDriven by competition, the translation industry \nintegrated new MT systems with the widely used \nTM technology. Today, as one of several ap-\nproaches, SMT systems prove to be successful, \nespecially when they are integrated in post-\nediting workflows and are trained with TM data \n(He et al., 2010). \nWhile the translation industry follows the \nscientific developments of MT closely, it faces \nits own specific problems. Although there is \nmuch effort put into scientific research topics in \nthe field of SMT (Yamada and Knight, 2000; \nOch and Ney, 2002; Koehn et al., 2003; Koehn et \nal., 2007; Koehn, 2010) this paper introduces the \nXML markup problem in SMT-CAT integration \nand proposes practical solutions. Section 2 \npresents background information about XML in \ntranslation and post-editing workflows, and ex-\nplains the challenges XML markup brings. Sec-\ntion 3 refers to related work and motivates this \npaper. Section 4 introduces several methods to \nhandle the XML markup. Section 5 reports and \nanalyzes the experiments that were conducted. \nSection 6 concludes, looking into the possibili-\nties for future work. 
2 XML in Translation Workflows

An important challenge in SMT-CAT integration is the use of a TM database for the creation of the necessary corpora for an SMT system. The parallel text stored in a TM can be used after it is exported. Although exporting can be done in different formats depending on the tool that is used, there is an XML based standard defined for this purpose by the Localization Industry Standards Association (LISA)1. TMX (Translation Memory eXchange) is a vendor-neutral open XML standard for the exchange of TM data and can be created directly within the TM software. It is evident that this file format can be used for other circumstances than data exchange purposes, such as for the creation of training corpora for SMT systems. TMX contains its own XML structure as a first layer of markup. A second layer, consisting of the translation units that are stored in TMs, can also contain XML, HTML, SGML and/or RTF markup. This second layer of XML markup and the challenges it brings are the main focus of this paper.

1 http://www.lisa.org/fileadmin/standards/tmx1.4/tmx.htm

There are several reasons why it is possible to have XML tags inside the sentences, such as protecting text from being translated or for automatic replacements. When XML markup is involved in post-editing, the SMT system should be able to satisfy additional requirements to keep this process as efficient as possible.

First of all, XML markup should appear in the translation without any loss of (meta) data, keeping its well-formedness. A TMX including non-well-formed XML tags can be imported into a TM without any warnings, if the correct TMX markup is provided.

Besides the integrity of the structure, the system should also be able to preserve the content of the tags, as an output lacking the correct content of tags risks passing the post-editing stage undetected. As part of a commercial business, such systems cannot afford to make mistakes on the correctness of the content of XML elements in the absence of extra checking mechanisms that work on the meta-data level.

To sum up, the existence of XML markup inside sentences brings some challenges to the SMT system, such as: 1) preserving the validity and the content of the XML tags in the SMT output; 2) poor coverage of unseen data, considering that the attributes of such elements can create a big vocabulary of a set of alphanumeric characters; 3) poor word alignments; and 4) poor word reordering, due to a significantly increased number of tokens in the sentences. Seeing the importance and the challenges that XML tags can add to translation workflows, it is essential to pay extra attention to handling these tags in the SMT system to be integrated within the CAT workflows.

3 Related Work and Motivation

Although one of the options of handling XML tags in a corpus and an SMT system might simply be removing them from the data (and reconstructing them afterwards), this paper will focus on approaches that will preserve the XML markup as part of the training and translation material due to the word-like use inside sentences.
Unlike the TMX markup, the tags that are used inside sentences can be more than placeholders, containing/representing words or phrases.

An increasing amount of work is being invested in producing successful SMT-CAT integration and several approaches improve integration and translation quality. Vogel et al. (2000) present a hierarchical TM system, in which the bilingual corpus is converted into a set of patterns by applying transducers to convert words to category labels, and recursively, to convert sets of labels to more complex expressions. The translation takes place with this modified TM, followed by the use of a complete cascade of transducers to recursively convert complex expressions to sets of labels and finally produce text from labels. This approach is reported to provide good coverage for unseen data.

Du et al. (2010) propose different methods for treating TMX markup, in a scenario of SMT-TM integration. This study focuses on the "first layer" of XML markup, and suggests that an SMT system can handle such markup well, when the markup is kept as a part of the training data.

Leplus et al. (2004) show that TM data is more successful as training material for an MT system when simple alterations are made on numbers, names of days, etc. This study proposes adding pre and post-processing steps to the actual translation process, altering the training material and the input for translation so that all numbers are represented with an "__INT__" token, resulting in an output containing the same tokens. The post-processing step finalizes the translation by replacing the token with the correct number.

A similar idea is used by Eck et al. (2004) to improve SMT results in the medical domain. They use the semantic type information that is defined in the UMLS Semantic Network2 to generalize the training data. A set of transducers is used in pre-processing to alter words like "head", "arm" and "knee" to "@BODYPART" tokens. After translation, these dummy words are changed back into the actual word in the target language.

2 http://www.nlm.nih.gov/research/umls/

This paper goes one step further than the existing studies that are directly or indirectly related to SMT-CAT integration. We provide methods that can directly be applied within the current workflows, facing the challenges of XML markup that are explained in section 2. From a more general perspective this work will also help the translation industry, considering that current academic research is more focused on general purpose approaches and not on specific domains, specific types of data, or specific types of problems that occur in the day-to-day real life translation process.

4 Handling XML Markup in SMT

We propose four different methods for treating XML markup inside the sentences of the TM database, and an additional method for solving a problem that can be introduced while handling XML markup.
All the methods have been tested on TM data, which was collected from the TMX and cleaned from the TMX markup. This cleaning process is beyond the scope of this paper. All methods assume that the data does not include any TMX markup.

4.1 Method 1: Full Tokenization

This approach represents our baseline and consists of the default tokenizer script of Moses (Koehn et al., 2007). This method produces an SMT system where all the meta-data is treated as plain text. This is a "dirty" approach since all the XML characters are tokenized, increasing the token size of the sentences dramatically in some cases.

This baseline is interesting as it shows how well Moses can handle the translation of the actual material and the ordering of the XML reserved characters (or entity references). Any slight change in the order of such characters may result in a non-well-formed XML structure. This method provides a wide range of results, from showing that no special treatment is needed for XML markup to how important it is to properly treat such markup. Moreover, as a consequence of tokenizing the XML elements in preprocessing, this method requires a post-processing stage to reconstruct the XML tags from the separate tokens. This process is essential to provide the same XML structure as in the source segments.

4.2 Method 2: Full Markup Normalization

As mentioned in section 3, normalizing certain words and/or numbers has been used in the past to improve SMT results. Considering that an XML tag in a TM can contain up to 50 tokens as a result of "full tokenization" (Method 1), a similar normalization of XML markup solves most of the problems of that approach.

We add pre and post-processing steps to the SMT flow. The pre-processing step transforms all the different types of tags (and their contents) into a general token "@tag@". The corpus is transformed prior to training and the input files are similarly modified. The content of the tag is then injected into the output, replacing the corresponding "@tag@" token during post-processing.

Besides possibly solving most of the problems that are subject to SMT-TM integration, such as poor coverage, poor alignments and preserving the content and the structure of the XML, this method causes an additional challenge: the alignment task of the "correct" XML tags in the input segment with the tags in the output, in the case that the output order of the tags is different from the input order. In the technical data subject to our experiments 10% of the sentences in English-Spanish (En-Sp) and 9% in English-French (En-Fr) included at least one XML tag. 16% (En-Sp) and 20% (En-Fr) of these sentences (with at least one tag) contained multiple tags. We focus on retrieving the tag contents and the tag alignment problem in section 4.5.
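A minimal sketch of the pre- and post-processing transducers behind Method 2 (and the role-based variant of Method 3, described next) is given below. It is an illustration only, not the scripts used in these experiments, and it assumes the decoder does not reorder the tags; the reordering case is treated in section 4.5.

# Sketch of Methods 2/3: replace XML tags by placeholder tokens before
# translation and re-inject the original tags afterwards (illustrative only).
import re

TAG_RE = re.compile(r"<[^<>]+?/?>")

def normalize(segment, role_based=False):
    tags = TAG_RE.findall(segment)                 # original tags, source order
    def repl(match):
        if role_based:                             # Method 3: "@xref@", ...
            name = re.match(r"</?\s*([\w-]+)", match.group(0)).group(1)
            return "@%s@" % name.lower()
        return "@tag@"                             # Method 2: one generic token
    return TAG_RE.sub(repl, segment), tags

def denormalize(translation, tags):
    # assumes the decoder kept the tag order; reordering is treated in 4.5
    for tag in tags:
        translation = re.sub(r"@[\w-]+@", lambda m, t=tag: t, translation, count=1)
    return translation

# src = 'install the transfer ( See page <xref href="AZE0033XSZLM"/> ) .'
# norm, tags = normalize(src, role_based=True)     # ... ( See page @xref@ ) .
# denormalize(mt_output, tags)                     # restores the original tag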
4.3 Method 3: Role-Based Markup Normalization

When altering the XML markup to "@tag@" tokens we are decreasing the vocabulary size noticeably. However, we might actually get a poorer translation system by normalizing different tags (that contain different contextual information, words and phrases) to one single type of tag and creating an overgeneralization. As an alternative method, role-based markup normalization modifies the tags based on the element names. As different tags are used in different contexts, just like words, this method helps to distinguish certain contextual differences while creating better phrase and word alignments.

Different pre and post-processing steps are applied to alter the XML tags to tokens based on their names (in the format "@tag-name@"), and to take the corresponding tag content from the source after the input file is translated. In our data 16 different tags were converted to different tokens before the segments were passed to the Moses decoder.

Although the risk of having problems with the alignment of tags is reduced, it still exists.

4.4 Method 4: Role-Based Markup Normalization – XML Input

Unlike the previous methods, this method avoids introducing additional problems by experimenting with a feature of Moses, as an addition to the method described in section 4.3.

Moses offers an "-xml-input" flag, which can be turned on during the decoding process3. The decoder has an XML markup scheme that allows the specification of translations for parts of the sentence. This is used to plug translations into the decoder, without changing the model. There are four different values that are associated with this flag (exclusive, inclusive, pass-through, and ignore). During these experiments we use the "exclusive" value, which only uses the translation specified in the XML structure of the input phrase.

The specified translation is treated just like any other translation, being scored with the language model (LM)4.

Instead of only using the tokens in the input file, the translation of the actual content of these tags is forced by wrapping the tokens with the Moses XML markup, keeping the actual tag available at decoding time so it is plugged into the translation. This method requires only a change in the treatment of the input file, compared to the method in section 4.3.

Although the idea is rather simple, Moses still refuses such an XML tag (including XML reserved characters) when wrapped with Moses markup, as this results in non-well-formed XML.
Therefore, we add another step to pre-processing, to convert the XML reserved characters of the data-specific tags to entity references or other tokens (for example "<" could be replaced by "@arro@", to represent the character "arrow opening"), so that they are treated as text by Moses. As a result, the post-processing step requires converting these entity references or tokens back to XML reserved characters.

We test this method with two different LMs, one (same as in Method 3) keeping the tokens that represent the tags (as Method 4a), and the other keeping the tags in fully tokenized format (same as in Method 1) in which the XML-specific characters are replaced by additional tokens (as Method 4b). The reason for this additional experiment is to make sure which type of LM yields better results, considering that in this case, the translation (the XML tags) would already have been plugged into the decoder (with the use of the "-xml-input" flag) and that it would still be scored by the LM. A final overview of the four different methods is shown in Figure 1.

3 http://www.statmt.org/moses/
4 Moses Support, 2010: http://www.mail-archive.com/[email protected]/msg02618.html

4.5 Retrieving the Content of Tags and Reordering

In the translations produced with Methods 2 and 3, if the order of (multiple) tags is different in the input and output, the necessary reordering should be externally applied to the tokens in the output, as normalization cuts down all the connections with the tokens, the corresponding tags, and their contents. One of the additional steps that can be applied for this purpose is the "-report-segmentation"5 functionality of Moses, to report phrase segmentation in the output. Figure 2 illustrates a sample output of Moses6 with this functionality turned on during the decoding, in which the translation "a" was generated from the German word "ein" (0-0). A similar interpretation applies to the other words in the translation.

The additional information stored in the output shows the alignment of source and target tokens. Using this information, a post-processing step can be applied to transfer the actual tags to the output, in the correct order.

An alternative approach can be constructed by running two decoding processes in parallel with two different input files. Besides the input file that is used in Method 3, we translate another file, modified additionally with the same use of the "-xml-input" functionality that has been discussed in Method 4.
With this parallel translation, we can align the two output files and transfer the tags from the second output file, as in this file the XML tags will be present explicitly, due to the use of the "-xml-input" flag. Figure 3 shows the workflow of the two different approaches to give a better overview.

5 Further information at: http://www.statmt.org/moses/?n=Moses.Tutorial
6 Example taken from Moses tutorial page: http://www.statmt.org/moses/?n=Moses.Tutorial

echo 'ein haus ist das' | moses -f moses.ini -t -d 0
> this |3-3| is |2-2| a |0-0| house |1-1|

Figure 2: An example translation of Moses using the "-report-segmentation" flag.

5 Experiments and Analysis

In the experiments we use two TM exports (TMX) from the automotive domain for the language pairs English-Spanish and English-French. These TMs include domain specific data and are heavily tagged with XML. 41,145 segments out of 400,912 (En-Sp) included one or more tags. 36,540 segments out of 400,360 (En-Fr) included one or more tags.

Figure 3: Representation of tag content retrieval and reordering methods.

5.1 Corpora, System and Evaluation

From this data 912 and 871 pairs of segments are extracted respectively as test sets, leaving 400,000 and 399,489 fragments as training data for the SMT system. As these TMs do not contain any duplicate translation pairs, there is no overlap between the test set and the training set. The TMX exports are cleaned from TMX markup prior to the training, leaving two aligned files per TM (on the sentence level), for source and target segments.

For the SMT system, we use the Moses toolkit consisting of Moses, GIZA++ (Och and Ney, 2003) and SRILM (Stolcke, 2002). The LMs were trained with five-grams, applying interpolation and Kneser-Ney discounting, and the phrase based translation models with a maximum phrase length of seven.

Besides the first set of experiments, focusing on the proposed methods in Section 4, a second set of experiments was conducted with half and quarter size of the initial corpora, consisting of randomly selected sentences from the original training set, to see possible changes in the results. Table 1 shows the numbers of sentence pairs present in different sets of training data.

Table 1: Different training sets represented with number of sentence pairs used for both language pairs.

To evaluate the SMT results, we use automatic evaluation metrics such as BLEU (Papineni et al., 2000), NIST (Doddington, 2002) and METEOR (Banerjee and Lavie, 2005) and a human translator for judging the MT outputs for tag reordering. The training data, the input, the output, and the reference files are all tokenized (with the Moses tokenizer) and all the tags in all outputs of different methods are normalized to "@tag@", to avoid possible score differences caused by comparing different types of output and reference translations regarding the XML tags.

5.2 Results

Table 2 shows the scores obtained by the proposed methods for the two language pairs.

Table 2: Automatic evaluation scores for English-Spanish and English-French.
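Before looking at the detailed results, the input preparation for Method 4 (section 4.4, illustrated in Figure 1 below) can be sketched as follows. The escape tokens @arro@, @arrc@ and @dbq@ and the <np translation="..."> wrapping come from the example in Figure 1; the helper names and the exact escaping map are assumptions made for illustration, not the scripts used in the experiments.

# Sketch of the Method 4 input preparation: escape the XML reserved
# characters inside a tag and wrap the placeholder with Moses XML-input
# markup so the decoder plugs the escaped tag into the translation.
ESCAPES = [("<", "@arro@"), (">", "@arrc@"), ('"', "@dbq@")]

def escape_tag(tag):
    for char, token in ESCAPES:
        tag = tag.replace(char, " %s " % token)
    return " ".join(tag.split())          # collapse whitespace

def wrap_for_xml_input(placeholder, original_tag):
    # e.g. '<np translation="@arro@ xref href = @dbq@ at01 @dbq@ / @arrc@">@xref@</np>'
    return '<np translation="%s">%s</np>' % (escape_tag(original_tag), placeholder)

def unescape(text):
    for char, token in ESCAPES:
        text = text.replace(token, char)
    return text

# The prepared input file is then decoded with the -xml-input flag set to
# "exclusive" (section 4.4); post-processing applies unescape() to restore
# the XML reserved characters in the output.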
Method 1: Full tokenization
install the transfer ( See page < xref attribute = " at01 " href = " AZE0033XSZLM " / > ) .
reposer la boîte de transfert ( cf. page < xref attribute = " at01 " href = " aze0033xszlm " / > ) .

Method 2: Full markup normalization
install the transfer ( see page @tag@ ) .
reposer la boîte de transfert ( cf. page @tag@ ) .

Method 3: Role based markup normalization
install the transfer ( See page @xref@ ) .
reposer la boîte de transfert ( cf. page @xref@ ) .

Method 4a and 4b: Role based markup normalization – XML input
install the transfer ( see page <np translation="@arro@ xref label = @dbq@ at01 @dbq@ href = @dbq@ aze0033xszlm @dbq@ / @arrc@">@xref@</np> ) .
reposer la boîte de transfert ( cf. page @arro@ xref attribute = @dbq@ at01 @dbq@ href = @dbq@ aze0033xszlm @dbq@ / @arrc@ ) .

Figure 1: Sample input and output segments, when a Moses system is built and run in four different ways.

The most striking outcome is how the tags were handled by the "full tokenization" method. The systems actually handle the XML tags quite well, making no mistakes on the XML structure itself. However, when the tags included words or phrases, the systems provide "unnecessary" translations (for phrases of three or more tokens). These "unnecessary" translations damage the integrity of some of the tags in general, proving the potential risks of this method for translation tasks that are sensitive to such errors.

We also observe how the role-based normalization of tags improves the scores slightly, compared to a strong baseline, by relative values of 1% and 1.2% on BLEU, and 0.06%, 1% and 1.5% on METEOR, for Spanish and French translations respectively. For a more in-depth analysis we score the translations once more, after dividing the test sets into two. In one set we keep only the sentences that include at least one tag and in the second set we keep only the sentences without tags (forming almost equal sizes of sub test sets). Table 3 shows the BLEU results of the divided test sets for both language pairs.

From these scores, it is clear that when the sentences include tag(s), the results improve more (relative to baseline, by 1.5% for Spanish and 1.6% for French on BLEU) compared to when the sentences do not include any tags at all, showing that the improvement is minimal for the segments without tags. These results indicate that "overgeneralization" of tags (Method 2) actually decreases the quality of non-tagged segment translations.

Table 3: BLEU results for the split test sets.
For Method 4, we use two different LMs to score the translations, where "LM tags" represents the LM that was used in the baseline method, with full tokenization (XML characters were additionally converted to tokens to match the output of Moses), and "LM tokens" represents the LM that was used in Method 3. As shown in tables 1 and 2, the systems handling the tokens with the "-xml-input" flag score worse than Method 3 in both cases. Although this flag helps us protect the XML structure, as all the tags are scored with the LM in both cases, we can still expect a poor coverage and mismatches between both LMs and the output. However, considering which LM performs best, the results are not conclusive, although they point to an improvement in Spanish translation quality when using "LM tags".

A final observation can be made about the tag reordering capabilities of the different systems, when the tags are normalized. First of all, both TMs are analyzed to see how often the tag order is changed in the translations compared to the source segments. In eight translations of the English-Spanish TM, and in 14 translations of the English-French TM (less than 0.001% of the number of sentences stored in these TMs), the order of tags in translation is different than in the source. This low number indicates that an additional reordering task is almost never necessary for this specific data. Still, we remove five of these translations from each database, retrain the systems and translate the source segments, using Methods 1 and 3. As a result, all translations fail a potentially necessary reordering. Although this result might indicate incorrect translations, a human translator judges that all translations are still correct as a whole. We have to mention that the result could potentially be different for "less similar" language pairs, or even for the same language pairs in another scenario, due to the difference in the use of tags.

5.3 Adding data vs. RBMN

The aim of a second set of experiments is to compare the improvement using role-based markup normalization with the improvement obtained by adding data. This is another straightforward way of improving the translation results in the case of SMT-TM integration. For this purpose, the tests are repeated (using Methods 1 and 3) with "half" and "quarter" size data. Creating two more systems per language pair, the BLEU scores are shown in Table 4.
It is accepted that more data implies better translations (Fraser and Marcu, 2007) and increasing the corpus size results in a decreasing growth rate in the quality of an SMT system (Koehn, 2002). The systems that are subject to these experiments are no exception, as it can clearly be seen that reducing the size of the corpus by half and three quarters (by removing random sentences) decreases the translation quality similarly. The most interesting part of this experiment is to see what size of additional data is necessary to improve the system as much as the (role-based) normalization of tags. This additional information enables us to see the scale of improvement from a more practical point of view in the translation business, compared to analyzing the improvement purely on metric scores. When the translations of Method 3 are compared to the translations of Method 1 for different sizes of data, it can be seen that the improvement made by such normalization becomes greater as the size of the data increases. This can be considered an important aspect when larger sizes of data are subject to such a method, considering that the rate of improvement on the translation quality would decrease as the data size increases. Table 5 and Table 6 show the rate of improvement in BLEU scores that Method 3 provides on top of the baseline, and compare these results with the effect of doubling the size of our data.

Table 4: Comparison of BLEU scores using Method 1 and Method 3, for different sizes of data.

Table 5: The improvement in BLEU points of Method 3 over the baseline system, for different sizes of data (quarter size, half size, full size; SPA and FRE).

Table 6: Comparison of the improvements provided by doubling the size of training data and by applying Method 3 instead (M1 quarter to M1 half, M1 quarter to M3 quarter, M1 half to M1 full, M1 half to M3 half, M1 full to M3 full; SPA and FRE). Although the improvement on translation is expected to decrease when the size of the full data is doubled, the exact amount of improvement remains unknown.

6 Conclusions and Future Work

This paper proposes four different methods for handling XML markup in the training data with pre and post-processing steps that consist of transducers. Each of these methods is evaluated using the automated MT metrics.

The first set of the experiments shows that, although Moses can handle XML markup well when basic tokenization is applied, it still occasionally fails to deliver the original content of the XML tags. However, this result is strictly related to how the tags are used in a certain TM and the type of content they have. In the case of a similar use, such as using the XML tags to protect text from being translated or to wrap text for automatic replacements, letting Moses handle tags as part of text might not be the best option for serious translation business.

The best results were obtained when the XML tags were normalized based on their roles. Although this improvement is not stunning, the results suggest that the importance of this improvement might be greater for larger data sets and that these improvements are comparable to the effects in the range of increasing the size of the data by half (En-Sp) to doubling it (En-Fr).
Considering that increasing the data on this scale is rarely a realistic scenario for large TMs in the translation industry, using the method of role-based normalization can lead to cost savings and increased productivity, while also ensuring the integrity of the XML elements.

Additionally, it can be observed that, although the use of the proposed methods has the potential to impose new challenges like reordering the tags, such a reordering is almost never necessary in our test data. If reordering is necessary, using the segmentation report of Moses or performing a parallel translation, as explained in section 4.5, can provide accurate results.

Further improvement on the translation quality of an SMT system integrated with CAT tools can possibly be achieved by including more tasks in pre and post-processing stages. If the properties of specific domains and types of TMs are analyzed carefully, they could be exploited to supply better systems. Some of these properties could be: the use of abbreviations and their explicit counterparts; the existence of phrases that should not be translated; and the use of alphanumeric formulas and codes for normalization methods. Furthermore, repeating the experiments with other language pairs would help give a better overview on the results. A human evaluation would also be necessary and helpful to confirm the results that are obtained in this paper.

7 Acknowledgement

This work has been supported by ITP nv, Belgium (http://www.itp-europe.com/).

8 References

Banerjee, S., Lavie, A. 2005. METEOR: An Automatic Metric for MT Evaluation with Improved Correlation with Human Judgments. In Proceedings of the ACL 2005 Workshop on Intrinsic and Extrinsic Evaluation Measures for MT and/or Summarization.

Doddington, G. 2002. Automatic evaluation of machine translation quality using n-gram co-occurrence statistics. In Proceedings of the ARPA Workshop on Human Language Technology.

Du, J., Roturier, J., Way, A. 2010. TMX Markup: A Challenge When Adapting SMT to the Localisation Environment. In Proceedings of the 14th Annual EAMT conference, pp. 253-260, Saint-Raphaël, France.

Eck, M., Vogel, S., Waibel, A. 2004. Improving statistical machine translation in the medical domain using the unified medical language system. In Coling 2004: Proceedings of the 20th international conference on Computational Linguistics, pp. 792-798, Geneva, Switzerland.

Fraser, M., Marcu, D. 2007. Measuring word alignment quality for statistical machine translation. Computational Linguistics, 33(3):293-303.

Garcia, I. 2005. Long term memories: Trados and TM turn 20. Journal of Specialized Translation 4, pp. 18-31.

He, Y., Ma, Y., Way, A., van Genabith, J. 2010. Integrating N-best SMT Outputs into a TM System. In Proceedings of Coling 2010, pp. 374-382, Beijing, China.

Koehn, P. 2010. Statistical Machine Translation. Cambridge University Press.

Koehn, P. 2002. Europarl: A Multilingual Corpus for Evaluation of Machine Translation, Draft, Unpublished. http://people.csail.mit.edu/~koehn/publications/europarl.ps

Koehn et al. 2007. Moses: Open source toolkit for statistical machine translation. In Proceedings of ACL, Demonstration Session.

Koehn, P., Och, F. J., Marcu, D. 2003. Statistical phrase-based translation. In Proceedings of NAACL, pp. 48-54.
Leplus, T., Langlais, P., Lapalme, G. 2004. Weather report translation using a translation memory. In Proceedings of the 6th AMTA, pp. 154-163, Washington DC, USA.

Och, F. J., Ney, H. 2002. Statistical machine translation. In Proc. of Workshop of the European Association for Machine Translation, pp. 39-46, Ljubljana, Slovenia.

Och, F. J., Ney, H. 2003. A systematic comparison of various statistical alignment models. Computational Linguistics, 29(1):19-51.

Papineni et al. 2002. BLEU: A Method for Automatic Evaluation of Machine Translation. In Proceedings of the 20th Annual Meeting of the ACL.

Stolcke, A. 2002. SRILM - an extensible language modeling toolkit. In Proceedings of the International Conference on Spoken Language Processing, Denver, Colorado, September.

Vogel, S., Ney, H. 2000. Construction of a Hierarchical Translation Memory. 18th International Conference on Computational Linguistics, pp. 1131-1135, Saarbrucken, Germany.

Yamada, K., Knight, K. 2001. A syntax-based statistical translation model. In Proceedings of the 39th Annual Meeting on ACL, pp. 523-530, Toulouse, France.", "main_paper_content": null }
{ "decision": "Unknown", "reviews": [] }
0
0
[]
[]
{ "id": "-Dm0qawKWe", "year": null, "venue": "EAMT 2010", "pdf_link": "https://aclanthology.org/2010.eamt-1.38.pdf", "forum_link": "https://openreview.net/forum?id=-Dm0qawKWe", "arxiv_id": null, "doi": null }
{ "title": "Bottom-up Transfer in Example-based Machine Translation", "authors": [ "Vincent Vandeghinste", "Scott Martens" ], "abstract": null, "keywords": [], "raw_extracted_content": "Bottom-up transfer in Example-based Machine Translation\nV\nincent Vandeghinste\nCentrum voor Computerlingu ¨ıstiek\nKatholieke Universiteit Leuven\nBelgium\[email protected] Martens\nCentrum voor Computerlingu ¨ıstiek\nKatholieke Universiteit Leuven\nBelgium\[email protected]\nAbstract\nThis paper describes the transfer compo-\nnentofasyntax-basedExample-basedMa-\nchine Translation system. The source sen-\ntence parse tree is matched in a bottom-up\nfashion with the source language side of a\nparallelexampletreebank,whichresultsin\na target forest which is sent to the target\nlanguage generation component. The re-\nsults on a 500 sentences test set are com-\npared with a top-down approach to trans-\nferofthesamesystem,withthebottom-up\napproach yielding much better results.\n1 Introduction\nIn machine translation, the use of linguistics had\nall but disappeared during the rise of the statisti-\ncal machine translation (SMT) paradigm, but as\n“pure” SMT is reaching its ceiling, the pendulum\nis swinging back towards the use of linguistics in\nMT, even within the SMT paradigm, as demon-\nstrated by the Workshops on Syntax and Structure\nin SMT (among others Wu and Chiang, 2009).\nThe MT engine which is used in this paper is a\nsyntax-based example-based machine translation\n(EBMT) system.\nItisexample-based asitusesalargesetoftrans-\nlationexamples(aparallelcorpus)astrainingdata\ntobaseitsdecisionsonanditis syntax-based asthe\ndata in the parallel corpus is annotated with syn-\ntacticparsetrees,bothonthesourceandthetarget\nside. Input sentences are syntactically analysed,\nand the system generates target language parse\ntrees where all ordering information is removed.\nThese serve as input for the target language gen-\nc/circlecopyrt2010European Association for Machine Translation.eration component, which determines the output\nsentences.\nThe system can also be considered a rule-based\ntransfersystem, as it conforms to the general ar-\nchitectureofarule-basedsystem,usingsourcelan-\nguage syntactic analysis, syntactic transfer rules\nand a dictionary (lexical transfer rules), and a tar-\nget language generation component. Both syntac-\ntic and lexical transfer rules are automatically in-\nduced from a parallel corpus.\n2 Related Work\nWe compare the transfer component described in\nthis paper with the transfer component of Van-\ndeghinste and Martens (2009).\nThe general approach towards MT is quite sim-\nilar to Data-Oriented translation (DOT) (Poutsma,\n1998; Hearne, 2005), differing in the fact that we\nuserule-basedorprobabilisticcontext-freeparsers\nwhereas they use Data-Oriented Parsing (Bod,\n1992), and the DOT approach was only tested on\nsmall corpora and a limited domain, whereas we\nintend a general news domain using large corpora.\nTherearealsosimilaritieswiththeworkofAm-\nbati et al. (2009). They use synchronous context-\nfree grammars (SCFGs) (Aho and Ullman, 1969),\nwhich limit the depth of the transfer rules to 2,\nwhereas the approach described in this paper does\nnot set a limit to the maximum depth of a trans-\nfer rule, just like the synchronous tree-substitution\ngrammars (STSGs), as described in Zhang et al.\n(2007). 
The difference between our system and STSGs is the fact that we build a target language tree without using any ordering information, since this is handled in the decoding step: the target language generation component (Vandeghinste, 2009).

Our general approach is also similar to the example-based MT engine described by Kurohashi (2009), differing in the fact that Kurohashi uses dependency trees and we combine information from phrase structure trees and dependency trees.

3 System Description

The example-based machine translation system has an architecture very similar to that of rule-based transfer systems. An input sentence is analyzed by a source language parser. The source language parse tree is converted by the transfer component into a target language forest that represents all possible target language parses that are considered translation candidates. The target language generation component turns this forest into a ranked set of sentences, each with their weight.

3.1 Syntactic Analysis

The system reuses existing parsers for both source and target language analysis. As we are translating from Dutch to English, the system uses the Alpino parser (Van Noord, 2006) for Dutch, which outputs results in an XML format combining phrase structure information with dependency information, and the system uses the Stanford parser (Klein and Manning, 2003) for English, which gives a phrase structure tree and an additional dependency tree (de Marneffe et al., 2006). Both parsers are freely available.

3.2 Preprocessing the parallel corpus

The system was trained on the sentence-aligned Europarl corpus version 3 (Koehn, 2005).

The source language parser is used to parse the source side of the parallel corpus in preprocessing, as well as the input sentence during actual translation. The target language parser is only used to preprocess the target side of the parallel corpus. This results in a parallel treebank, on which more details can be found in Tiedemann and Kotzé (2009a). This treebank is word aligned with GIZA++ (Och and Ney, 2003) and node aligned using a discriminative approach to tree alignment (Tiedemann and Kotzé, 2009b).

3.3 Bottom-up transfer

The transfer component takes the source language parse tree and matches the nodes in that tree with nodes on the source language side of the parallel treebank. The corresponding target side fragments are recombined into target language trees. All possible output trees of this component are merged into a target language forest.

Vandeghinste and Martens (2009) describe a top-down transfer component which leads to unsatisfactory results. We have investigated a bottom-up transfer component instead. As this paper is mainly about this transfer component, we describe it more thoroughly in section 4, and compare the scores of this bottom-up approach with the scores of Vandeghinste and Martens (2009) on the top-down approach, using the same test set.

3.4 Target language generation

Target language generation (TLG) is the component that converts the target language forest into a set of target language sentences, ordered by their confidence weight. The system uses the same target language generation component as Vandeghinste and Martens (2009). It is an improved version of the TLG module of Vandeghinste (2009).

The target language forest, which is the output of the transfer component, does not contain any target language ordering information.
For each par-\nentnode,theactualorderofthedaughtersisdeter-\nminedbythetargetlanguagemodel. Itdetermines\nwordandconstituentorderingandplaysanimpor-\ntant role in lexical selection.\nThe target language model is trained on the\nEnglish part of Europarl. From the parse trees\nin the treebank, a set of context-free rules is\nextracted, using the phrase category labels on\nthe left-hand side. The TLG model is not re-\nstricted to parts-of-speech, the phrase category la-\nbels or the tokens on the right-hand side as differ-\nent abstraction levels are distinguished. For En-\nglish, these are: dependency relations (Rels), syn-\ntactic categories for non-terminal nodes and the\nparts-of-speech for terminal nodes (Cat/Pos), the\ndependency relations together with the syntactic\ncategories/parts of speech (Cat+Rel), the depen-\ndency relations together with the syntactic cate-\ngories/parts of speech as well as the head token\ninformation (Cat+Rel+Token).\nTraversing the target language trees in the for-\nest depth-first the TLG module checks whether\nit finds rewrite rules at the least abstract level\n(Cat+Rel+Token). If this is not the case, it checks\nat the next level and soon until a solution is found\nallowingtoestimatetheprobabilityofdifferentor-\nderingsofthedaughterofthenodeitislookingat.\nDependingonthebeamsizeandanumberofother\ncutoffparameters,itselectsthe nmostprobableby\nlooking at the relative frequency of occurrence of\nthe different patterns in the training data.\nThe parameters that allow us to investigate the\ntrade-off between quality and time of processing\nare the following:\n•Beam size , a.k.a. histogram pruning;\n•Cutoff factor , a.k.a. threshold pruning;\n•Maximum Combinations sets a limit to the\nnumberofcombinationsinvestigated,ordered\nby weight. Imagine a node with three daugh-\nters, and for each daughter an average of ten\nsolutions where found, then this combines\ninto 1000 combinations.\n•Maximum Permutations sets a limit to the\nnumber of permutations under investigation,\nwhen no solutions are found in the database.\nAll permutations of that node are generated.\nThis can lead to very high numbers, as the\nnumber of permutations is the faculty of the\nnumber of daughters of a node\nFor each of these parameters, before any cutoff\nhappens all alternatives are ordered according to\ntheir weights.\n4 Bottom-up Transfer\nIn response to the shortcomings of the top-down\nmodel of Vandeghinste and Martens (2009), we\nproposed and implemented an alternative trans-\nfer strategy, one that proceeds from the bottom\nupwards, starting with translations of words and\nphrases and then selecting among the translations\nfurther up in the parse tree on the basis of the\ntranslations discovered at the bottom. The logic\nof this approach is that it would be better to con-\nfidently translate words and phrases in source sen-\ntences,andthenusethosetranslationstoconstrain\nthe choice of structures above. 
In this way, er-\nrors might propagate upwards but not downwards,\nwhere they had proven to force the transfer engine\nto make unlikely and unacceptable translations.\n4.1 Indexed treebanks and virtual rules\nTo do this, we did not extract rules of prede-\ntermined depths of trees like Vandeghinste and\nMartens (2009) but embarked on constructing a\nsystemofvirtualrules,inwhichthetreebankitselfwould be consulted, on the fly, to identify transla-\ntions at all levels.\nFor lexicalized translations, where an entire\nphrase that appears as a constituent in the parse\ntree also appears in the treebank, we deployed\nthe solution originally proposed by Luccio et al.\n(2004). Trees are reordered so that the children\nof each node in the parse tree appear in a fixed\nlexicographicorder,ignoringtheoriginalwordor-\nder. These trees are then rewritten as strings, us-\ning what Luccio et al. (among others) refer to as a\ndepth-firstorder. Ifthetreeundersomenodeinthe\nparse tree is identical to the tree under some node\ninthetreebank,andifbothareconvertedtodepth-\nfirst format, then there is a substring in the tree-\nbank that is identical to the one representing that\nportion of the parse tree. This is called bottom-up\nsubtree matching. Given two trees, a bottom-up\nsubtree (Figure 1) match is one that, if it matches\nany node, also matches all the descendants of that\nnode.\nFigure 1: “of the Minutes” is an example of a\nbottom-up subtree\nPerforming bottom-up subtree matching is sim-\nilar to the ideas behind subsentential translation\nmemories: each match is to a linguistically moti-\nvated phrase within sentences, and where a match\nis found and that match aligns to some subtree\nin the target language, translation can proceed by\ncopying that target language subtree.\nFinding string matches quickly in large texts\nhas a well-known solution: the suffix array, which\nidentifies matches in indexed strings in sublinear\ntime(ManberandMyers,1990). Byconvertingthe\nproblem of subtree discovery into a string match-\ningproblem,wecanextracttransferrulesfromthe\ntreebank for any node very quickly.\nFor transfer of the upper portions of the parse\ntree, we found that we could generalize the rule\nconstructionmethoddescribedfortop-downtrans-\nFigure2: Constructingandsortingbreadth-firstrepresentationsofthesubtreesoftheexampleparsefrom\nFigure 1.\nfer as described by Vandeghinste and Martens\n(2009) by modifying the string matching tech-\nniquesusedforbottom-upmatching,andthendis-\npensing with rule-sets and using the treebank to\nperform those transfers as well. Instead of con-\nvertingtreesintostringsusingdepth-firstrepresen-\ntations, we took each non-leaf node in the source\nlanguagetreebankandconverteditintoastringus-\ning a breadth-first method inspired by Chi et al.\n(2005).\nConverting a tree to a breadth-first string repre-\nsentation(BFSR)requirestwoextrasymbols-rep-\nresented here by “#” and “$” - one to indicate the\nexhaustion of the children of some node, and the\nsecond to indicate the exhaustion of the nodes at a\nparticular depth in the tree. The process proceeds\nby reading breadth-first through the tree starting\nat the root, appending node labels to an initially\nemptystring. Whenallthechildrenofanodehave\nbeenexhausted,the“#”symbolisadded,andwhen\nall the nodes at some depth have been exhausted,\nthe “$” is added. This maps each source language\nnode in the treebank to a string, as shown in Fig-\nure 2. 
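To make the construction concrete, the following short Python sketch (our own reconstruction for illustration; the exact treatment of the root and of the leaf level is not fully specified here and is therefore an assumption, and labels conflate categories and tokens only for readability) converts a small labelled tree into such a breadth-first string representation:

def to_bfsr(root):
    """root is a (label, children) pair, children a list of such pairs.
    Returns the breadth-first string representation of the tree."""
    symbols = [root[0], "$"]          # the root is the only node at depth 0
    level = [root]
    while level:
        next_level = []
        for _label, children in level:
            for child in children:
                symbols.append(child[0])
                next_level.append(child)
            symbols.append("#")       # this node's children are exhausted
        symbols.append("$")           # all nodes at this depth are exhausted
        level = next_level
    return " ".join(symbols)

# Toy tree loosely following Figure 1 ("of the Minutes"):
pp = ("pp", [("of", []),
             ("np", [("the", []), ("Minutes", [])])])
print(to_bfsr(pp))
# pp $ of np # $ # the Minutes # $ # # $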
These string representations can be trivially\nconvertedbackintotreesandstandinaone-to-one\ncorrespondence with the trees that generate them.\nNote that BFSRs are sortable and that if any two\nsubtrees are identical from the root down to some\nparticulardepth,thentheBFSRsofthosetwosub-\ntrees share a common prefix. By organizing them\ninto a sorted array, we can quickly match any sub-\ntree in a new parse tree to all subtrees in the tree-\nbank that share the same upper part. This makes\nsearch using string indexing methods feasible.\nInthisimplementation,aBFSRwasconstructed\nfor each non-terminal node in the treebank, con-\nsuming space proportionate to the mean square of\nthe size of each sentence. Then these string rep-\nresentations were sorted using quicksort (Hoare1962). Alternative and possibly more efficient\nstrategies for indexing these representations are\nalso feasible, based on the expansive literature on\nsuffix tree and suffix array construction. These\nwould be equivalent in terms of results, and come\nwith various tradeoffs in preprocessing time and\nspace.\n4.2 Matching the source language tree with\nthe examples\nThe system proceeds by, first, checking for\nbottom-up matches in the source language tree\nindices. Finding one is equivalent to finding a\nsubsentential match in a translation memory sys-\ntem. Figure 3 shows a possible set of bottom-up\nmatches.\nFigure 3: Bottom-up matching finds all phrases\nand words that have matches in the treebank\nThe transfer engine then tries to identify top-\ndown matches for the remaining upper portion of\nthe tree, and rejects all matches that are incompat-\nible with the bottom-up matches already found, as\nin Figure 4. Top-down matching proceeds by con-\nstructing a BFSR for each unmatched node in the\nsource parse tree, as described in section 4. For\neach such BFSR, the transfer engine searches the\nsorted index of BFSRs from the treebank for the\nFigure 4: Top-down matching looks for structures\nin the source language treebank matching the re-\nmaining part of the translation. Note that the\nleaves of the subtree being matched using top-\ndown methods must all be at the same depth.\nFigure 5: Each top-down match is finally con-\nnected to the bottom-up matches\nentries that share the longest common prefix with\nit.\nThe treebank alignment information discussed\nin 3.2 is used to align the source language nodes\npointed to by the sorted index with their corre-\nsponding target language nodes. Those target lan-\nguage parses are then directly searched for sub-\ntrees that can join together the bottom-up matches\nalready found. When there are too many match-\ning nodes, a random sample is searched. The re-\nsultingtargetlanguagesubtreesarethencombined\nwiththebottom-upmatchesalreadyfoundtoform\na target language tree, as in Figure 5.\nThisprocedureisperformedrecursivelyoverthe\nparse tree, until the entire tree is translated.\nWhere a word is missing from the treebank, or\nhas no target language alignment, the fall-back\ntranslationstrategyisthesameasforthetop-down\napproach from Vandeghinste and Martens (2009):\nThe part-of-speech or other information is trans-\nlated and the word copied over directly. 
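Returning briefly to the sorted index described above, a minimal sketch (ours; the function names, the sentinel character and the toy index are invented, the comparison is character-level rather than depth-level, and real symbol labels are assumed not to be prefixes of one another) of how all treebank entries sharing the longest common prefix with a query BFSR can be located by binary search:

import bisect

def entries_with_prefix(sorted_bfsrs, prefix):
    """All entries of a sorted list of BFSR strings that start with prefix;
    they form one contiguous slice, delimited by two binary searches."""
    lo = bisect.bisect_left(sorted_bfsrs, prefix)
    hi = bisect.bisect_right(sorted_bfsrs, prefix + "\uffff")  # sentinel above ASCII
    return sorted_bfsrs[lo:hi]

def longest_prefix_matches(sorted_bfsrs, query_bfsr):
    """Shorten the query prefix symbol by symbol until some entry shares it."""
    symbols = query_bfsr.split(" ")
    for cut in range(len(symbols), 0, -1):
        prefix = " ".join(symbols[:cut])
        hits = entries_with_prefix(sorted_bfsrs, prefix)
        if hits:
            return prefix, hits
    return "", []

index = sorted(["np $ det noun # $",
                "pp $ of np # $ # det noun # $",
                "pp $ of np # $ # the map # $"])
print(longest_prefix_matches(index, "pp $ of np # $ # a card # $"))
# returns the shared prefix 'pp $ of np # $ #' together with the two pp entries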
However,\nthe search for structural translations of the upperparts of parse trees may also fail to find a match.\nIn those cases, two strategies are considered.\nFirst, a special target language index is con-\nstructedthatcontainsthelabels-phrasecategories\nor parts-of-speech - for each non-leaf node in\nthe target language treebank. When no top-down\nmatch can be found, this database is searched for\nany target language node whose children are iden-\ntical to the labels of bottom-up matches whose\nroots are siblings in the source language parse\nand whose own label is a likely match for the\nsource language phrase label that appears above\nthem. The transfer engine then uses that shallow,\ntwoleveltreetotranslatethecorrespondingsource\nsubtree.\nFor example, if there was no transfer found for\nthe upper portion of the tree in Figure 4, the trans-\nfer engine would look for nodes in the target lan-\nguage index that have an IN and an NP as chil-\ndren,andthatarelikelytocorrespondtotheDutch\nphrase category label pp.\nThis fall-back strategy tends to produce trees\nthat closely hew to the structure of the source.\nWheneventhisstrategyfails,thetransferengine\nassumes that nodes that are siblings in the source\nhave translations that are siblings in the target lan-\nguage. So, when no other transfer rule is avail-\nable, it selects the target language node label that\nmost corresponds to the source language parent,\nandthenguesseswhichofthetargetlanguagechild\nnodes is likely to be the head of that phrase, based\non what labels are usually heads for that type of\nphrase.\nTranslatingfromthebottom-upinthismanneris\ncloselyrelatedtoclassicalparsingstrategieswhich\nbuild tree structures up from the bottom.\n5 Evaluation\nWe evaluated our system, using well-known au-\ntomated MT metrics, like BLEU (Papineni et al.\n2002),NIST(Doddington2002),andTER(Snover\net al. 2006), as well as WER (word error rate),\nPER (position independent word error rate), and\nCER(charactererrorrate). Wehaveusedthesame\nevaluation test set as Vandeghinste and Martens\n(2009) consisting of 500 Dutch sentences, with\ntwo reference translations for each sentence. To\ngive an idea about the difficulty of the test set, it\nscored 29.96 BLEU on Moses (Koehn et al. 2007)\ntrained on the same sentences of Europarl as used\nin our system and 38.82 BLEU on Google trans-\nlate.\nWe evaluated the bottom-up system in three\nconditions:\n1. Smallbeam: In target language generation,\nwe use a beam size of 10, a cutoff factor of\n50,amaxcombof100andamaxpermof100\n2. Largebeam: In target language generation,\nwe use a beam size of 100, cutoff factor of\n500, maxcomb of 200 and a maxperm of 200\n3. Dummy: Only bottom-up transfer of match-\ningwords,asdescribedinsection4.2. Source\nword order is retained and the target lan-\nguage generation module favours orders that\nare close to the source order, when all else is\nequal.\nThe results are shown in Table 1. The results of\nVandeghinste and Martens (2009) are added in the\nTop-down row.\nWe also compared with Moses (Koehn et al.,\n2007) trained on the same data, and using the\nsame word-alignments. Due to the source lan-\nguage parser of our system which puts all punctu-\nationoutsidetheactualparsetree,oursystemdoes\nnot handle punctuation (yet). To get a better com-\nparison with the state-of-the art of Moses, in ta-\nble 1 we remove all punctuation from its output as\nwell.\nTheresultsforthebottom-upapproachtotrans-\nferarealotbetterthantheresultsforthetop-down\nsystem. 
There is a relative rise of 52.7% in BLEU\nscore when comparing the best conditions of the\ntop-down and the bottom-up approach.\nFurthermore,wecanseethatthedifferencewith\nMoses (without punctuation) has become very\nsmallwhenconsideringthePERmetric,whichin-\ndicates the position-independent word-error rate.\nThis is important as it indicates the fact that con-\ncerning lexical selection our early prototype sys-\ntem scores only marginally worse than the state-\nof-the-art.\nComparing the scores with the Dummycondi-\ntion gives an indication of the influence of struc-\nturaltransferinbothlexicalselectionaswellasre-\nordering of the output. All scores consistently in-\ndicate that structural transfer contributes substan-\ntially to better lexical selection. When comparing\nthe PER score of the Dummycondition with the\nPER score of the Top-down approach, it is clearthat lexical selection is better bottom-up, even\nwhen the influence of structural transfer has been\nremoved as is the case in the Dummycondition.\nConcerning the beam size in target language\ngeneration we can say that there is no significant\ndifference in results of the two conditions, but it is\nsignificantlyfastertoprocessthesentenceintarget\nlanguage generation for the Smallbeam condition.\n6 Conclusions and Future Work\nAnimportantconclusionfromtheresultsisthefact\nthat in lexical selection, our results are similar to\nthose of Moses. There are still a few differences,\nfor instance in the treatment of separable verbs,\nand we have implemented solutions for this which\nare not yet reflected in these results. This will re-\nquireacompletereprocessingandrealigningofthe\nparallel treebank, which is a very time consuming\nand computationally heavy process.\nThe influence of the structural transfer is large\nandpositive,andthereforeindicatesthatweshould\nwork on that aspect of our engine more: we can\ntest different parameter settings, and in future ver-\nsionsofthesystem,wealsowanttoincludepartial\nsubtree matching, which should greatly improve\nthe coverage of the parallel corpus with respect to\nstructural transfer.\nImprovementstothevirtualtransferrulesystem\nare a major research direction for this project. The\ncurrent scheme, which searches the aligned tree-\nbank directly, using sampling in many cases, is in\nthe worst case linear in performance time on the\nsize of the treebank or the sample size where sam-\npling is used. Using subtree indexing (Chi et al.,\n2005;Martens,2009),wehopetoreducethistime\ndramatically.\nThe virtual rule system implemented here con-\nstitutes a regular tree grammar (RTG) , which\nis weakly equivalent in generating capacity to\na context-free grammar (CFG) (Thatcher, 1967;\nRounds, 1970), that is to say that the trees gener-\nated by every RTG yield a set of strings for which\nsome CFG exists that generates them. Its princi-\npal benefit is that, by generating trees, it separates\nthe generation of target language strings from the\ninduction of target language linguistic structures.\nHowever, the limitations of CFGs and comparable\ntree grammars are well-known. Context-free tree\ngrammars are weakly equivalent to indexed gram-\nmars(Rounds,1970),whichprovideamuchlarger\nset of options, at the cost of NP-complete process-\nCondition BLEU NIST WER CER PER TER\nTop-down 13.53 5.70 76.20 61.91 52.39 70.36\nDummy 12.49 6.01 78.75 63.83 50.05 70.69\nSmallbeam 20.65 6.44 70.34 55.37 48.9663.72\nLargebeam 20.59 6.43 70.10 55.12 48.9863.54\nMoses No Punct. 
26.72 6.94 60.53 45.65 47.82 58.07\nTable 1: Evaluation Results\ning times, just like the indexed grammars. The\ntree-adjoining grammar (TAG) formalism (Joshi\net al., 1975) limits context-sensitive generation to\nthemonadiccontext-freetreegrammars(M ¨onnich,\n1997; Fujiyoshi,2004), and other subsets of tree\ngrammars are available for linguistic formalisms\n(Knight and Graehl, 2005).\nExtending the machinery for syntactic transfer\nbeyond RTGs to more powerful formalisms is a\nmajor future research area for this project. No-\ntably, work is in progress to extend the virtual\ntransfer rule system to support non-deleting tree\nrules(KnightandGraehl,2005)whichcanbeeffi-\ncientlyextractedfromalignedtreebanksusingdata\nmining techniques (Martens, 2009).\nThere is also room for improvement in subsen-\ntential alignments. We will investigate whether\nthereareotheralignmentspossiblewhichwilllead\nto better results. For now we are using a first ver-\nsion of the alignments, but the work we have done\nup to now has given us a great deal of informa-\ntion about how we might improve the alignment.\nThis is not reflected in the alignments as they are\nnow,asthisrequirestoreapplythetimeconsuming\nalignment processing of the parallel data.\nThe evaluation of this system has shown some\nencouragingresults,anddetailederroranalysishas\nshown some of the paths to follow in the future.\nWewillfirstofalltryourapproachonotherlan-\nguage pairs and see whether the conclusions still\nhold.\nApart from that, we are working on an index-\ning system which will allow us to work with par-\ntial subsentential matches instead of full subtree\nmatching, which will have a large effect on the\ncoverage of the parallel treebank, as well as on\nthe speed of the transfer engine, which is rather\nslow as it is now. This will also solve a number of\ntranslationissuesforwhichthecurrentsystemcan-\nnot generate a correct translation unless the whole\nphrase is found in the parallel treebank.\nWe will also investigate the effect of enlarg-ing the treebanks used, both parallel and monolin-\ngual, including the translation memories we have\nreceived from a translation company.\nIn general, we can conclude that we have come\nto a point where we are reasonably satisfied with\nthetransferengine,whichcanserveinthefirstver-\nsion of the MT system, but there is plenty that re-\nmains to be done to further improve the system.\n7 Acknowledgements\nThe research presented in this paper was done in\nthe PaCo-MT project, sponsored by the STEVIN-\nprogramme of the Dutch Language Union and by\ntheAMASS++projectsponsoredbyIWT-Vlaan-\nderen.\nReferences\nAbouelhoda, M., Kurtz, S., and Ohlebusch, E. (2004).\nReplacing Suffix Trees with Enhanced Suffix Arrays.\nJournal of Discrete Algorithms , 2(1):53-86.\nAho,A.,andUllman,J.(1969). Syntaxdirectedtranslations\nandthepushdownassembler. JournalofComputerand\nSystem Sciences , volume 3(1):37-56.\nAmbati, V., Lavie, A., and Carbonell, J. (2009). Extraction\nof Syntactic Translation Models from Parallel Data us-\ning Syntax from Source and Target Languages. In: MT\nSummit XII Proceedings of the twelfth Machine Trans-\nlation Summit . Ottawa, Ontario, Canada. pp. 190-197.\nBod, R. (1992). A Computational Model of Language\nPerformance: Data-Oriented Parsing. In Proceedings\nof the fifteenth International Conference on Computa-\ntional Linguistics (COLING’92) . International Com-\nmittee on Computational Linguistics. Nantes, France.\npp. 855-859.\nChi, Y., Nijssen, S., Muntz, R., and Kok, J. (2005). 
Fre-\nquent Subtree Mining An Overview. Fundamental In-\nformatics,SpecialIssueonGraphandTreeMining . pp.\n1001-1038.\nde Marneffe, M., MacCartney, B., and Manning, C. (2006).\nGenerating Typed Dependency Parses from Phrase\nStructure Parses. In: Proceedings of the 5th edition of\nthe International Conference on Language Resources\nand Evaluation (LREC) . Genoa, Italy.\nDoddington, G. (2002). Automatic Evaluation of Ma-\nchineTranslationQualityusingN-gramCo-occurrence\nStatistics. In: Proceedings of the Second Human Lan-\nguage Technology Conference (HLT) . Morgan Kauf-\nmann. San Diego, USA. pp. 138-145.\nFujiyoshi A. (2004). Epsilon-free grammars and lexicalized\ngrammars that generate the class of the mildly context-\nsensitivelanguages. In: Proceedingsofthe7thInterna-\ntional Workshop on Tree Adjoining Grammar and Re-\nlated Formalisms . Vancouver, pp. 16-23.\nHearne, M. (2005). Data-Oriented Models of Parsing and\nTranslation . PhD thesis. Dublin City University.\nDublin, Ireland.\nHoare,C.(1962)Quicksort, ComputerJournal 5,pp. 10-15.\nJoshi, A., Levy, L., and Takahashi, M. (1975) Tree adjunct\ngrammars, Journal of Computer and System Sciences\n10, pp. 136-163.\nK¨arkk¨ainen J., and Sanders P. (2003) Simple linear work\nsuffix array construction, In: Proceedings of the 30th\nInternationalColloquiumonAutomata,Languagesand\nProgramming (ICALP’03) . Eindhoven, Netherlands.\npp. 943-955.\nKlein, D., and Manning, C. (2003). Accurate Unlexicalized\nParsing. In: Proceedings of 41st Annual Meeting of the\nAssociation of Computational Linguistics (ACL) . Sap-\nporo, Japan. pp. 423-430.\nKnight, K. and Graehl, J. (2005). An Overview of Proba-\nbilisticTreeTransducersforNaturalLanguageProcess-\ning. In:Proceedings of the Sixth International Confer-\nence on Intelligent Text Processing and Computational\nLinguistics\nKoehn, P. (2005). Europarl. A parallel corpus for statistical\nmachine translation. In: MT Summit X: Proceedings of\nthetenthMachineTranslationSummitX .Phuket,Thai-\nland. pp. 79-97.\nKoehn, P., Hoang, H., Birch, A., Callison-Burch, C., Fed-\nerico, M., Bertoldi, N., Cowan, B., Shen, W., Moran,\nC., Zens, R., Dyer D., Bojar, O., Constantin, A., and\nHerbst, E. (2007). Moses: Open source toolkit for sta-\ntisticalmachinetranslation. In: Proceedingsofthe45th\nAnnual Meeting of the Association for Computational\nLinguistics (ACL) . Prague, Czech Republic. pp. 177-\n180.\nKurohashi, S. (2009). Fully syntactic example-based ma-\nchine translation (abstract). In Proceedings of the 3rd\nInternational Workshop on Example-based Machine\nTranslation (EAMT). Dublin City University, Dublin,\nIreland. p. 1.\nLuccio, F., Enriquez, A., Rieumont, P., and Pagl, L. (2004).\nBottom-up subtree isomorphism for unordered labeled\ntrees.Technical Report TR-04-13, Universit Di Pisa.\nPisa, Italy.\nManber, U., and Myers, G. (1990). Suffix arrays: a new\nmethod for on-line string searches. In: SODA 90: Pro-\nceedings of the first annual ACM-SIAM symposium on\nDiscrete algorithms . Philadelphia. pp.319-327.\nMartens, S. (2009). Quantitative Analysis of Treebanks us-\ning frequent subtree mining methods. In: Proceed-\nings of the 2009 Workshop on Graph-based Methods\nfor Natural Language Processing (TextGraphs-4) . pp.\n84-92.M¨onnich, U. (1997). Adjunction as substitution. In: G.-J.\nKruiff, G. Morrill, and D. Oehrle (eds.) Formal Gram-\nmar. pp. 169-178.\nOch, F., and Ney, H. (2003). A Systematic Comparison of\nVarious Statistical Alignment Models. Computational\nLinguistics 29 (1), pp. 
19-51.\nPapineni, K., Roukos, S., Ward, T., and Zhu, W. (2002).\nBLEU: a method for automatic evaluation of Machine\nTranslation. In: Proceedings of the 40th Annual Meet-\ning of the Association for Computational Linguistics\n(ACL). Philadelphia, USA. pp. 311-318.\nPoutsma, A. (1998). Data-Oriented Translation. In: Ninth\nConferenceofComputationalLinguisticsintheNether-\nlands (CLIN). Leuven, Belgium.\nRounds, W. (1970). Tree oriented proofs of some theorems\nincontext-freeandindexedlanguages. In: Proceedings\nof the 2nd ACM Symposium on Theory on Computing ,\n109-116.\nSnover, M., Dorr, B., Schwartz, R., Micciula, L, and\nMakhoul,J.(2006). Astudyoftranslationeditratewith\ntargeted human annotation. In: Proceedings of the 7th\nConference of the Association for Machine Translation\nin the Americas (AMTA) . Cambridge, USA. pp. 223-\n231.\nThatcher, J. (1967). Characterizing derivation trees of\ncontext-free grammars through a generalization of fi-\nnite automata theory” Journal of Computer and System\nSciences, volume 1, 317-322.\nTiedemann, J., and Kotz ´e, G. (2009a). Building a Large\nMachine-Aligned Parallel Treebank. In: Proceedings\nof the 8th International Workshop on Treebanks and\nLinguistic Theories (TLT) . Milan, Italy. pp. 197-208.\nTiedemann,J.,andKotz ´e,G.(2009b). ADiscriminativeAp-\nproach to Tree Alignment. In: Proceedings of Recent\nAdvances in Natural Language Processing . Borovets,\nBulgaria. pp. 33-39.\nVandeghinste, V., and Martens, S. (2009). Top-down Trans-\nfer in Example-based MT. In Proceedings of the 3rd\nInternational Workshop on Example-based Machine\nTranslation. Dublin City University, Dublin, Ireland.\npp. 69-76.\nVandeghinste,V.(2009). Tree-basedTargetLanguageMod-\neling. In: Proceedings of the 13th Annual Conference\nof the European Association for Machine Translation\n(EAMT). Barcelona, Spain. pp. 152-159.\nvan Noord, G. (2006). At Last Parsing Is Now Opera-\ntional. In: Proceedings of Traitement Automatique des\nLangues Naturelles (TALN) , Leuven, Belgium. pp. 20-\n42.\nWeiner, P. (1973). Linear pattern matching algorithm. In:\nProceedings of the 14th Annual IEEE Symposium on\nSwitching and Automata Theory , pp. 1-11.\nWu, D., and Chiang, D. (2009). Proceedings of the 3rd\nWorkshop on Syntax and Structure in Statistical Trans-\nlation.\nZhang, M., Jiang, H., Aw, A., Sun, J., Li, S., and Tan, C.\n(2007). Atree-to-treealignment-basedmodelforstatis-\ntical machine translation. In: Proceedsing of MT Sum-\nmit XI, pp. 535-542.", "main_paper_content": null }
{ "decision": "Unknown", "reviews": [] }
0
0
[]
[]
{ "id": "0NWrAg9N0ao7", "year": null, "venue": "EAMT 2009", "pdf_link": "https://aclanthology.org/2009.eamt-1.21.pdf", "forum_link": "https://openreview.net/forum?id=0NWrAg9N0ao7", "arxiv_id": null, "doi": null }
{ "title": "Tree-Based Target Language Modeling", "authors": [ "Vincent Vandeghinste" ], "abstract": null, "keywords": [], "raw_extracted_content": "Proceedings of the 13th Annual Conference of the EAMT , pages 152–159,\nBarcelona, May 2009\nTree-based TargetLanguage Modeling\nVincent Vandeghinste\nCentre forComputational Linguistics -KULeuv en\nLeuv en,Belgium\[email protected]. be\nAbstract\nInthispaper wedescribe anapproach to\ntargetlanguage modeling which isbased\nonalargetreebank. Weassume abagof\nbags asinput forthetargetlanguage gener -\nation component, leaving ituptothiscom-\nponent todecide upon wordandphrase or-\nder.Anexperiment with Dutch astarget\nlanguage showsthatthisapproach tocan-\ndidate translation reranking outperforms\nstandard n-gram modeling, when measur -\ningoutput quality with BLEU, NIST ,and\nTER metrics.\n1Ackno wledgements\nThe development ofthissystem andresearch is\nmade possible bytheSTEVIN-programme ofthe\nDutch Language Union, Project Nr.STE-07007,\nwhich issponsored bytheFlemish and Dutch\nGovernments, andbytheSBO-programme ofthe\nFlemish IWT,Project Nr.060051.\n2Introduction\nInthispaper wedescribe anapproach totargetlan-\nguage modeling using largetreebanks. This intro-\nduction starts with adescription oftheMTsystem\nforwhich thistargetlanguage modeling compo-\nnent isintented andcontinues with ashort descrip-\ntionofrelated research.\nInsection 3wedescribe thedetails ofthetarget\nlanguage modeling component andinsection 4we\ndescribe anevaluation experiment forthiscompo-\nnent. Section 5drawsconclusions andsketches\nfuture work.\n2.1 System description\nWearedeveloping adata-dri venhybrid approach\ntowards machine translation, reusing asmuch as\nc\n\u00002009 European Association forMachine Translation.possible already existing tools andresources toset\nupanMT architecture much likeaclassic rule-\nbased transfer system. Instead ofmanually design-\ningtherules, weintend toderivethem from large\nparallel andmonolingual (uncorrected) treebanks.\nThe system requires asource language parser\nand aparallel treebank, aligned from thesen-\ntence leveluptothewordlevel(Och andNey,\n2003), including sub-sentential alignment (Tiede-\nmann, 2003; Tinsle yetal.,2007, Mack enand\nDaelemans, 2008). Togetaparallel treebank we\nparse both thesource and targetlanguage com-\nponents ofparallel corpora \u001ealaEuroparl (Koehn,\n2005). Each treepair,sub-tree pair orwordpair\npresents anexample translation pair,andbecomes\nadictionary entry .This wayweareremo ving the\nconceptual distinction between adictionary anda\nparallel corpus, likeVandeghinste (2007).\nInasimilar fashion, butmaking abstraction of\ntheconcrete words, wederiveasetoftransfer rules\nfrom theavailable alignments. Atranslation model\nisbuiltbycounting thefrequencies ofoccurrence\nofallthese alignments.\nThe source language sentence issyntactically\nparsed, and theparse tree (and itssub-trees) is\nmatched with thesource language side parse trees\nofthedictionary/parallel treebank. The retrie ved\ntargetfragments arethen restructured according to\ntheinformation inthetransfer rules resulting ina\ntargetlanguage bagofbags, which isstructured\nlikeaparse tree, butwithout implying anysurface\norder inthedaughters ofeach node. 
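As a rough illustration (ours; the field names are invented, and section 3.1 below gives the actual XML representation used by the system), such a node can be pictured as a labelled node whose daughters are present but whose surface order is still open:

from dataclasses import dataclass, field
from typing import List

@dataclass
class BagNode:
    """One node of the target-language bag of bags: the daughters are known,
    but their surface order is left to target-language generation. Fragments
    retrieved as one piece keep their order, which the flag below records."""
    label: str
    daughters: List["BagNode"] = field(default_factory=list)
    ordered: bool = False

np = BagNode("np", [BagNode("det"), BagNode("noun")])
clause = BagNode("sv1", [BagNode("verb"), BagNode("adv"), np, BagNode("pp")])
# a multi-word fragment copied whole from the dictionary would instead be
# built with ordered=True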
When larger\nunits areretrie vedfrom thedictionary ,their sur-\nfaceorder ispreserv ed,implying thatsome nodes\ninthebagofbags arenotbags buttrees, with or-\ndered daughters.\nItisuptothetargetlanguage generation com-\nponent todetermine thelexical selection (which\ntranslation alternati vesarepreferred) andoptimal\nsurfaceordering using thetargetlanguage tree-\nbank. Itisthiscomponent which wedescribe and\n152\nevaluate intherestofthispaper .\nWhen thesystem hasgenerated atranslation, it\nisuptothehuman post-editor toaccept thetrans-\nlation ortocorrect it.Forthispurpose aweb-based\npost-editing interf aceisbeing designed, which al-\nlowsadding, deleting, substituting, and moving\nwords andphrases. The post-editor canchoose\namongst several translation alternati vesforthe\nsentence, orforcertain parts ofthesentence. When\nasentence isaccepted thepost-editing information\nisfedback into thesystem' sdatabases, updating\ntheweights ofboth thetranslation model andthe\ntargetlanguage generation model.\n2.2 Related Resear ch\nThe hybrid MTsystem described intheprevious\nsection issimilar totheData-Oriented Transla-\ntion(DOT)approach, which was\u0002rstproposed by\nPoutsma (1998) andfurther researched byHearne\n(2005). DOTuses Data-Oriented Parse Trees\n(Bod, 1992), whereas weuseeither rule-based\nparsers based onasetoflinguistic rules and a\nstochastic disambiguation component orweuse\nstochastic parsers trained onamanually parsed or\ncorrected treebank. TheDOTapproach only uses\nsmall corpora andalimited domain, whereas we\nintend touselargecorpora andageneral domain\n(news).\nThe targetlanguage generation approach is\nsome what similar tothefeatur etemplates used by\nthetranslation candidate reranking component of\nVelldal (2007), although there aresome important\ndifferences: Velldal' sfeature templates canhavea\nhigher depth, whereas thepatterns weextract can\nbeseen asconte xt-free rewrite rules, only captur -\ninginformation about amother anditsimmediate\ndaughters. This canbeattrib uted tothefactthatthe\nLOGON system (Lønning etal.,2004) forwhich\nVelldal builtthecomponent isalimited domain\nMTsystem (Tourist information) whereas wein-\ntend tobuildalargedomain system (News), sowe\nareusing much largercorpora. Storing informa-\ntionatasimilar levelasVelldal isnotfeasible with\nsuch largetreebanks.\nFurthermore, oursystem borro wsideas forcom-\nbining targetlanguage fragments from theMETIS-\nIIsystem (Carl etal.,2008; Vandeghinste, 2008).\nOursystem isbeing implemented from Dutch to\nEnglish andFrench, andvice versa. Intherestof\nthispaper ,weassume Dutch asthetargetlanguage.3The TargetLanguage Generation\nComponent\nThis section describes theapproach weusefortar-\ngetlanguage modeling. Insection 3.1wedescribe\ntheinput thiscomponent expects, section 3.2de-\nscribes thetraining procedure andthepreprocess-\ningsteps applied onthetraining data, andsection\n3.3describes howthetargetlanguage generation\ncomponent actually works.\nThe targetlanguage generation component is\nbased onalargetargetlanguage treebank. 
Thein-\nputisassumed tobeasource language indepen-\ndent bagofbags, asallelements inthisbagare\ncoming from thetargetlanguage sideofthedictio-\nnary,andthestructure ofthebagofbags ismapped\nonto thetargetlanguage structure through thedic-\ntionary andthetransfer rules.\n3.1 Bag ofBags asinput\nWede\u0002ne abagofbags asasetofsets,orinour\ncase, asaparse tree representing thetargetlan-\nguage sentence, inwhich foreach node,1thesur-\nfaceorder ofthedaughters ofthatbagisundeter -\nmined, representing allpermutations ofthelistof\ndaughters. Itisuptothetargetlanguage genera-\ntioncomponent toresolv ethese bags andcome up\nwith thebestsolution.\nIn\u0002gure 1you \u0002nd anexample ofabag of\nbags inxml-format representing theDutch sen-\ntence “Zieookhetkaartje hieronder .”[Eng: Also\nseethemap below.].Aregular parse treeforthis\nsentence ispresented in\u0002gure 2.Figure 1repre-\nsents besides thissentence numerous ( \u0001\u0003\u0002\u0005\u0004\u0007\u0006\b\u0002\u0005\u0004\t\u0001\u0003\u0002\u000b\n\f\u000e\r)other surfacestrings, each apermutation ofthe\nwords inthesentence.\nNote thatin\u0002gure 1weleftoutsome features\ninthe<bag> tags ofthebagofbags forclarity\nandpresentational purposes. The bagofbags is\nexactly thesame asthexmloutput ofthesyntac-\nticparse forthesame sentence generated bythe\nAlpino parser (vanNoord, 2006), apart from the\nfactthatthe<node> tags intheparse treehave\nbeen replaced by<bag> tags inthebagofbags,\nindicating thatthese bags stillneed toberesolv ed,\nandfrom thefactthatitdoes notcontain position\ninformation.\nTheAlpino parser istheparser weuseforDutch\nsyntactic analysis. Itisaparser which isbased\nonhead-dri venphrase structure grammar (Pollard\n1Some ofthesub-trees arecoming straight from thedictio-\nnary,sotheyarenotsub-bags anddonotneed toberesolv ed.153\nFigure 1:Anexample bagofbags\n<bagcat=\"top\" rel=\"top\">\n<bagcat=\"sv1\" rel=\"--\">\n<bagframe=\"verb(hebben,sg1,\ntransitive_ndev_ndev)\"\npos=\"verb\" rel=\"hd\" word=\"Zie\"/>\n<bagframe=\"sentence_adverb\" pos=\"adv\"\nrel=\"mod\" word=\"ook\"/>\n<bagcat=\"np\" rel=\"obj1\">\n<bagframe=\"determiner(het,nwh,nmod ,\npro,nparg,wkpro)\"\npos=\"det\" rel=\"det\" word=\"het\"/>\n<bagframe=\"noun(het,count,sg)\"\npos=\"noun\" rel=\"hd\"\nword=\"kaartje\"/>\n</bag>\n<bagframe=\"er_adverb(onder)\" pos=\"pp\"\nrel=\"mod\" word=\"hieronder\"/>\n</bag>\n<bagframe=\"punct(punt)\" pos=\"punct\"\nrel=\"--\" word=\".\"/>\n</bag>\nFigure 2:Parse tree fortheexample sentence\n(without frames)\nandSag, 1994) giving both phrase structure and\ndependenc yinformation.\nResolving thebagofbags inabottom-up fash-\nion, we\u0002rst resolv ethenoun phrase (NP) “het\nkaartje ”[Eng: themap]. There aretwopossible\npermutations forthisNP,andwewantto\u0002ndthe\nmost probable. Howthisisdone isexplained in\nsection 3.3.\nWhen theNPisresolv ed,weneed toresolv e\nthesv1,which stands forasentence with theverb\nin\u0002rstposition .The sv1hasfour daughters, so\nthisamounts to24(\u0006\b\u0002)different possible surface\norders.2One ofthese daughters hastwopossible\noutcomes, sothisalready totals 48translation al-\nternati vesunder investig ation.\nThis procedure isapplied onallnon-terminal\nbags.\n2Because wetreat allcategories thesame, wedonotmakeuse\nofthefactthatforansv1weknowthattheverbshould be,by\nde\u0002nition, in\u0002rstposition.3.2 Training thetargetlanguage generation\ncomponent\nInorder toresolv ethebags, wetrain thetar-\ngetlanguage generation component onalarge\ntreebank. 
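As a side note, the combinatorics quoted in section 3.1 can be checked with a few lines of Python (an illustration of ours, not part of the system):

from itertools import permutations
from math import factorial

# The unresolved NP bag from Figure 1 has two daughters, hence two orders.
np_daughters = ["het", "kaartje"]
print(list(permutations(np_daughters)))
# [('het', 'kaartje'), ('kaartje', 'het')]

# The sv1 bag has four daughters, hence 4! = 24 orders; combined with the two
# NP orders this already gives 24 * 2 = 48 alternatives under investigation.
print(factorial(4), factorial(4) * 2)
# 24 48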
ForDutch, this treebank wasau-\ntomatically annotated bythe Alpino parser\n(vanNoord, 2006), and isavailable online at\nhttp://www .let.rug.nl/ \u000fvannoord/trees/.\nItconsists, amongst others, ofthefollowing cor-\npora: theSpok enDutch corpus (CGN) (Oostdijk\netal.,2002), theLassy corpus (vanNoord etal.,\n2006), theDutch partofEuroparl (Koehn, 2005),\nandtheDutch wikipedia.\nThetotal corpus used intheexperiments insec-\ntion4consists of290,658,861 words in18,048,702\nsentences, averaging 16.10 words persentence.\nFrom each ofthese sentences, wecollect the\nrewrite rules atdifferent levelsofabstraction. For\ninstance, fortheexample sentence “Zieookhet\nkaartje hieronder ”,wewould collect theinforma-\ntionintable 1.3\nNote thatweabbre viated some oftheframes to\n\u0002tinthetable andthatweuse“ \u0010”asa\u0002eld sep-\narator between thedifferent kinds ofinformation\nrepresented inourrewrite rules. Consecuti veele-\nments ontheright-hand side oftherules arewrit-\ntenwith aspace inbetween oronanewline. Forin-\nstance, thesv1 rulehasfour right-hand sidesym-\nbols oneveryabstraction level.\nWedistinguish several different levelsofab-\nstraction, going from veryabstract (Level1:Rela-\ntions) toveryconcrete (Level7:Head +Frame/Cat\n+Relations).\n1.Relations (Rel): Containing thedependenc y\nrelations andthefunction information.\n2.Part-of-speech/Cate gory (Pos/Cat): contain-\ningtheparts-of-speech ofterminal nodes and\nthecategory fornon-terminal nodes.\n3.Pos/Cat +Rel: containing thecombinations\nofparts-of-speech/cate gory information and\ndependenc yinformation.\n4.Frame/Cat: Containing frame information for\nterminal nodes andthecategory information\nfornon-terminals. Frames aregenerated by\ntheAlpino parser ,andareavery\u0002ne-grained\npart-of-speech tag.\n3This sentence hasaparse treeexactly liketheexample bag\nofbags, apart from replacing the<bag> tags with<node>\ntags.154\nTable 1:Extracting information from asentence at\ndifferent abstraction levels\nLevel1:Relations\ntop:----\nsv1:hdmodobj1mod\nnp:dethd\nLevel2:Pos/Cat\ntop:sv1punct\nsv1:verbadvnppp\nnp:detnoun\nLevel3:Pos/Category +Relations\ntop:sv1 \u0011--punct \u0011--\nsv1:verb \u0011hdadv \u0011modnp \u0011obj1pp \u0011mod\nnp:det\u0011detnoun\u0011hd\nLevel4:Frame/Category\ntop:sv1punct(punt)\nsv1:verb(hebben,sg1,transitive...)\nsentence adverb\nnp\npp\nnp:determiner(het,nwh,nmod,pro. ..)\nnoun(het,count,sg)\nLevel5:Frame/Category +Relations\ntop:sv1\u0011--punct(punt)\u0011\nsv1:verb(hebben,sg1,transitive...) \u0011hd\nsentence adverb\u0011mod\nnp\u0011obj1\npp \u0011mod\nnp:determiner(het,nwh,nmod,pro. ..) \u0011det\nnoun(het,count,sg)\u0011hd\nLevel6:Head +Pos/Cat +Relations\ntop:sv1\u0011--\u0011Ziepunct\u0011--\u0011.\nsv1:verb \u0011hd \u0011Zie\nadv \u0011mod \u0011ook\nnp\u0011obj1\u0011kaartje\npp \u0011mod \u0011hieronder\nnp:det \u0011det \u0011hetnoun \u0011hd \u0011kaartje\nLevel7:Head +Frame/Cat +Relations\ntop:sv1 \u0011-- \u0011Ziepunct(punt) \u0011-- \u0011.\nsv1:verb(hebben,sg1,...)\u0011hd\u0011Zie\nsentence adverb\u0011mod\u0011ook\nnp \u0011obj1 \u0011kaartje\npp\u0011mod\u0011hieronder\nnp:determiner(het,nwh...) 
\u0011det \u0011het\nnoun(het,count,sg) \u0011hd \u0011kaartjeTable 2:Number ofdifferent labels andbags\nAbstraction Level Labels Bags\n1Rel 32 50,233\n2Pos/Cat 48 568,299\n3Pos+Rel 510 1,584,535\n4Frame 36,729 9,764,647\n5Frame +Rel 50,130 10,251,079\n6Head +Pos+Rel 22,924,782 60,753,604\n7Head +Frame +Rel 26,400,004 61,283,814\n5.Frame/Cat +Rel: containing thecombina-\ntions offrame/cate gory andrelation informa-\ntion.\n6.Head +Pos/Cat +Rel: containing thecom-\nbination ofthehead wordofanode with the\nparts-of-speech /cate gory andrelation.\n7.Head +Frame/Cat +Rel: containing thecom-\nbination ofthehead wordofanode with the\nframe andrelation.\nIntable 2wepresent some information about\nourdatabase forthetotal corpus sizeof18million\nsentences. The second column (Labels) indicates\nthenumber ofdifferent labels (types) forthatab-\nstraction level.Thethird column (Bags) showsthe\nnumber ofdifferent bags atthatlevel.Ifthecor-\npuscontains twoormore permutations ofthesame\nbag, then these arecounted asonebag.\nAllthisdata iscollected overthewhole tree-\nbank, andputinadatabase, precalculating which\npatterns arepermutations ofeach other ,andadding\nthefrequenc yofoccurrence foreach ofthese per-\nmutations.\nWehaveone database table percategory per\nabstraction level,andwehave25categories for\nDutch, resulting in175tables. Each ofthese tables\ncontains onerowperbagandonecolumn persub-\ncorpus. Foreach bagandeach corpus, westore\nthesurfaceorder ofthebagelements andtheir fre-\nquenc y,allowing multiple surfaceorders andfre-\nquencies perdatabase cell.\nTheuseofseparate columns forsub-corpora al-\nlowsustoeasily activateanddeacti vecertain parts\nofthetotal corpus. Itisadesign choice thatfacil-\nitates adapting theMTsystem tospeci\u0002c domains\nbyactivating theappropriate columns.155\n3.3 Matching theBag ofbags with the\ntraining data\nWewanttoresolv ethenoun phrase-bag “het\nkaartje ”,knowing thatthere aretwopossible per-\nmutations.\nWestart ofonthemost concrete level,looking\nfortheoccurrence inthetraining data ofeither\nnp:det(...) \u0010det \u0010het\nnoun(...) \u0010hd \u0010kaartje\nor\nnp:noun(...) \u0010hd \u0010kaartje\ndet(...)\u0010det\u0010het\nIfoneorboth ofthese occur inthetraining data,\nthen weusetheir relati vefrequencies asweights\nforthesolution. When neither ofthem occurs\ninthetraining data, wegotoamore abstract\nlevel,hoping to\u0002ndinformation regarding therel-\nativehigher occurrence ofonepermutation over\ntheother ,cascading overthedifferent abstraction\nlevels,until thebagisresolv ed.Intherare case\nthatnone oftheabstraction levelscanresolv ethe\nbag, allpermutations getthesame weight.\nWeuseasetofcut-of fparameters tolimit the\nnumber ofalternati veanalyses under consideration\ntoamanageable number .Currently ,wekeeponly\ntrack ofthe10best scoring alternati ves. When\nnoinformation orequal frequencies arefound, and\nthebagwould generate more than 30permutations,\nwecutoffat30.This isespecially required inthe\nexperimental conditions where thecorpus size is\nstilllow(cfsection 4).Westop processing anal-\nternati vesolution ifitsweight is10times lower\nthan theweight ofthecurrent best solution, and\nforeach node, weallowamaximum of100com-\nbinations ofthesolutions ofthedaughters. 
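A condensed sketch (ours; the separator, the table layout and all frequencies are invented, and only the n-best pruning is shown, with the other cut-offs described above omitted) of this cascade over abstraction levels might look as follows:

from itertools import permutations

def resolve_bag(daughters_per_level, tables, n_best=10):
    """daughters_per_level[lvl] lists the daughters of one bag expressed at
    abstraction level lvl (index 0 = most concrete here); tables[lvl] maps a
    canonical key (the sorted daughter labels) to the surface orders observed
    in the treebank with their frequencies."""
    for lvl, daughters in enumerate(daughters_per_level):
        observed = tables[lvl].get(tuple(sorted(daughters)))
        if observed:
            total = sum(observed.values())
            ranked = sorted(observed.items(), key=lambda kv: -kv[1])
            return [(order, count / total) for order, count in ranked[:n_best]]
    # No level resolves the bag: all permutations get the same weight.
    orders = set(permutations(daughters_per_level[-1]))
    return [(order, 1.0 / len(orders)) for order in orders]

# Toy tables for the "het kaartje" NP at two levels (frequencies invented).
concrete = {("det|det|het", "noun|hd|kaartje"):
            {("det|det|het", "noun|hd|kaartje"): 37}}
abstract = {("det", "noun"): {("det", "noun"): 9000, ("noun", "det"): 12}}
print(resolve_bag([["det|det|het", "noun|hd|kaartje"], ["det", "noun"]],
                  [concrete, abstract]))
# [(('det|det|het', 'noun|hd|kaartje'), 1.0)]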
Asthe\nsystem iscurrently fastenough, wehavenotyetin-\nvestig ated different values forthese cut-of fparam-\neters, butitisclear thatcutting offsooner would\nlead tofaster processing butloweraccurac y.Most\nofthese cut-of fparameters come inaction only at\nlowcorpus sizes and/or inexperimental conditions\nwith only high abstraction levels.\n4Experiment\nInthissection wedescribe anexperiment inwhich\nweevaluate thetargetlanguage generation compo-\nnent ofourMTsystem inisolation, excluding fac-\ntorsthatmight contrib utetothetranslation quality\ningood orbadsense thatarenotpartofthetarget\nlanguage model.\nSection 4.1describes themethodology that isused fortheexperiment, andsection 4.2describes\ntheevaluation results.\n4.1 Methodology\nInaway,wearetranslating from Dutch toDutch,\nonly evaluating theordering mechanism used in\nthetargetlanguage generation component.\nWetested thequality oftheoutput ofthetar-\ngetlanguage generation component bycomparing\nittotheinput sentence from which thebag of\nbags originates, which servesasareference trans-\nlation when evaluating with BLEU (Papineni et\nal.,2002), NIST (Doddington, 2002), and TER\n(Snoveretal.,2006).\nAdditionally wealso measured thenumber of\nexact matches: those cases inwhich theoutput\nsentence isidentical totheinput sentence.\nWehaveconstructed atestsetof575real-life\nsentences from arealtranslation conte xtthatwere\nparsed with Alpino andconverted intobags.\nWehaveseveraltestconditions intwodimen-\nsions:\n1.Corpus size:expressed innumber ofsen-\ntences. Thetreebank consists ofseveralsub-\ncorpora, andwetested thesystem while grad-\nually adding these sub-corpora. The size of\nthese sub-corpora servesasdata points onthe\nX-axis in\u0002gures 3,4,5,6,and7.\n2.Abstr action level:wehavedescribed the\nsevenabstraction levelsforDutch insection\n3.2. Wetested thesystem with only thedata\nforthemost abstract levelavailable, gradually\nadding lessabstract levels.These arethedata\nseries 1to7inthelegend.\nAsabaseline, wealso calculated thequality of\natrigramlangua gemodels .Weused theSRILM\ntoolkit (Stolck e,2002) totrain aback offtrigram\nmodel. Additional baseline testing with afourgram\nmodel with Chen and Goodman' s(1998) modi-\n\u0002ed Kneser -Neydiscounting didnotyield better\nresults. Asitisnotfeasible togenerate allper-\nmutations andthen calculate their likelihood, we\nimplemented abranch andbound approach. For\neach sub-bag, allpermutations were generated and\nthese were ordered according totheir likelihood,\nkeeping only the10best foreach sub-bag. When\nanyofthese permutations contained more than \u0012\nwords, asliding windo wofsize \u0012wasused toes-\ntimate their likelihood. 
This procedure wasrecur -\nsivelyapplied until thewhole bagisresolv ed.156\nFigure 3:Effect ofcorpus size and abstraction\nlevelonBLEU score\nFigure 4:Effect ofcorpus size and abstraction\nlevelonNIST score\nForexact match wecalculated abaseline by\ncounting thetotal number ofpossible permutations\nandtheprobability ofranodmly picking anexact\nmatch.\n4.2 Results\nWhen looking at\u0002gure 3itisclear thattheaddition\noftheleast abstract levelsyields thebest results,\nalthough there isnotmuch difference between lev-\nels6and7.Atthelargest corpus size, level6\nevenoutperforms level7.This canbeexplained by\nthefactthatthere isonly arelati velysmall differ-\nence ingranularity between levels6and7,which\nisclear when looking attable 2.There isareduc-\ntion ofnumbers ofbags ofless than 1%, sotheFigure 5:Effect ofcorpus size and abstraction\nlevelonTER score\nFigure 6:Effect ofcorpus size and abstraction\nlevelonpercentage ofExact Matches\nabstraction isverylimited. Infuture versions of\nthesystem, wemight omit level7asitdoes not\naddanyaccurac y.\nItisalso clear thatforallcorpus sizes, abstrac-\ntionlevels4,5,6,and7outperform thebaseline.\nThe results areconsistent fortheNIST scores\nshownin\u0002gure 4.\nWhen looking attheTER scores in\u0002gure 5,the\nsame observ ations arestilltrue. Note thatTER ex-\npresses anerror rate, solowerscores arebetter .\nAsome what unexpected result isthefactthat\nlevel1consistently outperforms levels2and3.We\nassume some kind ofartefactandwillinvestig ate\nthisfurther .\nThe percentage ofexact matches, aspresented157\nin\u0002gure 6con\u0002rms theresults from theother met-\nrics. Note thattheprobability ofrandomly picking\noneofthepossible permutations oftheinput bagof\nbagasitssolution would result inanexact match\nbaseline of0.0000911%, soallexperimental con-\nditions impro veoverthisbaseline.\n5Conclusions andFutur eWork\nWehavesetupatranslation generation component\nforaparse andcorpus-based MT system. This\ncomponent requires abagofbags asinput, each\nbagandsub-bag representing allpermutations of\ntheir respecti vedaughters.\nWetrained thecomponent onalargetargetlan-\nguage treebank (with fully automatic parses) sowe\ncanlook upforeach ofthebags whether itoccurs\ninthecorpus, inwhat surfaceorder ,andwith what\nfrequenc y.\nComparing oursystem toastandard n-gram\nmodel wecanconclude thatoursystem clearly out-\nperforms thisbaseline.\nAlthough theresults oftheexperiment suggest\nthatwehavereached some kind ofceiling intrans-\nlation quality ,weintend toatleast double thesize\nofthetargetlanguage treebank andtestwhether\nwecanbreak through these ceilings.\nFigure 7showsthepercentages ofnewbags to\nbeadded tothedatabase foreach oftheabstrac-\ntionlevelswhen gradually adding thesubcorpora.\nAdding newcorpora seems toaddrelati velylittle\nnewinformation tothemost abstract levels,butfor\nthemore concrete levels,growthpercentages are\nstillmore than 50%, meaning thatmore than 50%\nofthebags found inthenewcorpus were unseen\nintheprevious corpora.\nWesetupthisexperiment inorder toestimate\ntheupper bound ofourMTsystem. 
Connecting\nthiscomponent totheother components ofourMT\nsystem willrevealitstruequality ,buttheresults up\ntonowareveryencouraging.\nWewill also implement thisapproach forthe\nother languages inourMTsystem, butprobably\nwith lessabstraction levels.Forinstance, forEn-\nglish weusetheStanford parser (Klein andMan-\nning, 2003), which generates parts-of-speech, de-\npendenc yrelations, categories, andwords, butnot\nframes oranything equivalent.\nRefer ences\nBod, R.(1992). AComputational Model ofLan-\nguage Performance: Data-Oriented Parsing.Figure 7:Growthpercentage foreach abstraction\nlevel\nIn. C.Boˆ\u0011tet(ed.), Proceedings ofthe\u0002f-\nteenth International Confer ence onCompu-\ntational Linguistics (COLING'92) .Interna-\ntional Committee onComputational Linguis-\ntics. Nantes, France. pp.855-859.\nCarl, M.,Melero, M.,Badia, T.,Vandeghinste,\nV.,Dirix, P.,Schuurman, I.,Markantona-\ntou,S.,So\u0002anopoulos, S.,Vassiliou, M.,and\nYannoutsou, O.(2008). METIS-II: LowRe-\nsources Machine Translation :Background,\nImplementation, Results, andPotentials. Ma-\nchine Translation 22(1). pp.69-99. Springer .\nChen, S.F.,andGoodman, J.(1998). AnEmpiri-\ncalStudy ofSmoothing Techniques forLan-\nguage Modeling. Technical Report TR-10-98 .\nComputer Science Group, Harv ardU.,Cam-\nbridge, MA.\nDoddington, G.(2002). Automatic Evaluation\nofMachine Translation Quality using N-gram\nCo-occurrence Statistics. InProceedings\noftheSecond Human Langua geTechnolo gy\nConfer ence (HLT).MorganKaufmann. San\nDiego,USA. pp.138-145.\nHearne, M.(2005). Data-Oriented Models of\nParsing andTranslation .PhD thesis. Dublin\nCity University .Ireland.\nKlein, D.,andManning, C.(2003). Accurate Un-\nlexicalized Parsing. InProceedings of41st\nAnnual Meeting oftheAssociation ofCom-\nputational Linguistics (ACL).Sapporo, Japan.\npp.423-430.158\nKoehn, P.(2005). Europarl. Aparallel corpus for\nstatistical machine translation. InProceed-\nings ofMTSummit X.Phuk et,Thailand. pp.\n79-97.\nLønning, J.T.,Oepen, S.,Beermann, D.,Hel-\nlan,L.,Carroll, J.,Dyvik, H.,Flickinger ,D.,\nJohannsen, J.B., Meurer ,P.,Nordg \tard, T.,\nRos´en,V.,andVelldal, E.(2004). LOGON.\nANorwe gian MTeffort. InProceedings of\ntheWorkshop inRecent Advances inScandi-\nnavian Machine Translation .Uppsala, Swe-\nden.\nMack en,L.,andDaelemans, W.(2009). Aligning\nlinguistically motivated phrases. InCompu-\ntational Linguistics intheNetherlands 2007:\nSelected paper sfromtheeighteenth CLIN\nmeeting .LOTNetherlands Graduate School\nofLinguistics. Utrecht. pp.37-52\nOch, F.,and Ney,H.(2003). ASystematic\nComparison ofVarious Statistical Alignment\nModels. Computational Linguistics 29(1),\npp.19-51.\nOostdijk, N.,Goedertier ,W.,VanEynde, F.,\nBoves,L.,Martens, J.P.,Moortg at,M.,and\nBaayen, H.(2002). Experiences form the\nSpok enDutch Corpus Project. InProceed-\nings ofthe3rdInternational confer ence on\nLangua geResour cesandEvaluation (LREC) .\nLasPalmas, Spain. pp.340-347.\nPapineni, K.,Rouk os,S.,Ward,T.,andZhu, W.\n(2002). BLEU: amethod forautomatic eval-\nuation ofMachine Translation. InProceed-\nings ofthe40th Annual Meeting oftheAsso-\nciation forComputational Linguistics (ACL).\nPhiladelphia, USA. pp.311-318.\nPollard, C.,and Sag, I.(1994). Head-driven\nPhraseStructur eGrammar .CSLI Stanford.\nUniversity ofChicago Press. Stanford, USA.\nPoutsma, A.(1998). Data-Oriented Translation.\nPresented attheNinth Confer ence ofCompu-\ntational Linguistics intheNetherlands .Leu-\nven,Belgium.\nSnover,M.,Dorr,B.,Schw artz, R.,Micciula, L,\nandMakhoul, J.(2006). 
Astudy oftrans-\nlation editratewith targeted human annota-\ntion. InProceedings ofthe7thConfer enceoftheAssociation forMachine Translation\ninthe Americas (AMT A).Cambridge, USA.\npp.223-231.\nStolck e,A.(2002). SRILM -AnExtensible\nLanguage Modeling Toolkit. InProceed-\nings oftheInternational Confer ence onSpo-\nkenLangua geProcessing .Denver,Colorado,\nSeptember 2002.\nTiedemann, J.(2003). Recycling Translations -\nExtraction ofLexical Data fromParallel Cor-\nporaand their Application inNatur alLan-\nguageProcessing .PhD. Studia Linguistica\nUpsaliensia 1.\nTinsle y,J.,Zeche v,V.,Hearne, M.,andWay,A.\n(2007). RobustLanguage Pair-Independent\nSub-T reeAlignment. Proceedings ofMT\nSummit XI.Copenhagen. pp.467-474.\nVandeghinste, V.(2007). Remo ving theDistinc-\ntionBetween aTranslation Memory ,aBilin-\ngual Dictionary andaParallel Corpus. InPro-\nceedings ofTranslating and theComputer ,\n29.ASLIB. London, UK.\nVandeghinste, V.(2008). AHybrid Modular Ma-\nchine Translation System .PhD. Katholiek e\nUniversiteit Leuv en.LOTNetherlands Grad-\nuate School ofLinguistics. Utrecht.\nvanNoord, G.,Schuurman, I.,and Vandegh-\ninste, V.(2006). Syntactic Annotation of\nLargeCorpora inSTEVIN. InProceedings\nofthe5thInternational confer ence onLan-\nguageResour cesandEvaluation LREC .Gen-\nova,Italy.\nvanNoord, G.(2006). AtLast Parsing IsNow\nOperational. InProceedings ofTALN,Leu-\nven,Belgium.159", "main_paper_content": null }
{ "decision": "Unknown", "reviews": [] }
0
0
[]
[]
{ "id": "_zCCCCan7C", "year": null, "venue": "EAMT 2018", "pdf_link": "https://aclanthology.org/2018.eamt-main.27.pdf", "forum_link": "https://openreview.net/forum?id=_zCCCCan7C", "arxiv_id": null, "doi": null }
{ "title": "A Comparison of Different Punctuation Prediction Approaches in a Translation Context", "authors": [ "Vincent Vandeghinste", "Lyan Verwimp", "Joris Pelemans", "Patrick Wambacq" ], "abstract": null, "keywords": [], "raw_extracted_content": "A Comparison of Different Punctuation Prediction Approaches\nin a Translation Context\nVincent Vandeghinste\nCCL – KU Leuven\[email protected]\nJoris Pelemans\nApple Inc.\[email protected] Verwimp\nESAT-PSI – KU Leuven\[email protected]\nPatrick Wambacq\nESAT-PSI – KU Leuven\[email protected]\nAbstract\nWe test a series of techniques to pre-\ndict punctuation and its effect on ma-\nchine translation (MT) quality. Sev-\neral techniques for punctuation prediction\nare compared: language modeling tech-\nniques, such as n-grams and long short-\nterm memories (LSTM), sequence labeling\nLSTMs (unidirectional and bidirectional),\nand monolingual phrase-based, hierarchi-\ncal and neural MT. For actual translation,\nphrase-based, hierarchical and neural MT\nare investigated. We observe that for punc-\ntuation prediction, phrase-based statistical\nMT and neural MT reach similar results,\nand are best used as a preprocessing step\nwhich is followed by neural MT to perform\nthe actual translation. Implicit punctuation\ninsertion by a dedicated neural MT system,\ntrained on unpunctuated source and punc-\ntuated target, yields similar results.\n1 Introduction\nIn speech translation, the first step often consists\nof automatic speech recognition (ASR). Most ASR\nsystems output an unsegmented stream of words,\napart from some form of acoustic segmentation\nwhich splits a transcript into so-called utterances .\nTranslating this stream of words, using off-the-\nshelf MT, results in a lower translation quality\ncompared to translating punctuated input, as MT\nsystems are usually trained on properly punctuated\nand segmented source and target text. End-to-end\nspeech translation systems, that do not suffer from\nc/circlecopyrt2018 The authors. This article is licensed under a Creative\nCommons 3.0 licence, no derivative works, attribution, CC-\nBY-ND.this problem, have recently achieved high-quality\nresults too (Weiss et al., 2017), but these models\nrequire infrastructure (in terms of GPUs and train-\ning time) that is not available to everyone.\nWe compare several techniques and approaches\nfor punctuation prediction in a translation context,\nstarting from an input that already contains the cor-\nrect sentence boundaries. All techniques and ap-\nproaches are trained on the same dataset, allowing\nus to fully attribute different results to the specific\ntechniques and approaches used. Thus, the main\ncontribution of this paper is not introducing new\nmethods for punctuation prediction, but a thorough\ncomparison of methods previously used, since ex-\ntensive comparisons are often lacking in related\nwork. 
We compare three families of approaches\nfor punctuation prediction: (1) language modeling,\n(2) sequence modeling, and (3) monolingual MT.\nThese approaches are combined in three differ-\nent architectures resulting in translated and punc-\ntuated output: (1) Preprocessing adds punctua-\ntion before translating with a normal MT system,\ntrained on punctuated source and punctuated target\ndata; (2) Implicit insertion adds punctuation dur-\ning MT, which is trained on unpunctuated source\nand punctuated target data; and (3) Postprocess-\ningadds punctuation after MT, which is trained on\nunpunctuated source and unpunctuated target data.\nFigure 1 shows these different strategies, together\nwith the baseline strategy, in which the unpunc-\ntuated data is translated by a regular MT system\ntrained on punctuated source and target data.\n2 Related work\nIn this section we discuss work that explicitly tries\nto predict punctuation marks like we do. We do\nnot consider sentence boundary prediction.\nPunctuation prediction is first described inP\u0013 erez-Ortiz, S\u0013 anchez-Mart\u0013 \u0010nez, Espl\u0012 a-Gomis, Popovi\u0013 c, Rico, Martins, Van den Bogaert, Forcada (eds.)\nProceedings of the 21st Annual Conference of the European Association for Machine Translation , p. 269{278\nAlacant, Spain, May 2018.\nFigure 1: The different punctuation prediction strategies in a\ntranslation context.\n(Beeferman et al., 1998), who use a lexical hid-\nden Markov model to predict comma insertion\nin ASR output. Several other models have also\nbeen investigated, such as a decision tree clas-\nsifiers (Kim and Woodland, 2001; Zhang et al.,\n2002), finite state models and multi-layer percep-\ntrons (Christensen et al., 2001), a maximum en-\ntropy model (Huang and Zweig, 2002) and condi-\ntional random fields (Lu and Ng, 2010; Ueffing et\nal., 2013).\nGravano et al. (2009) use a purely text-based\nn-gram language model but do not compare with\npreviously published methods. Several researchers\nuse recurrent neural networks (RNNs) to tackle\nthe problem as a sequence labeling task. Tilk\nand Alum ¨ae (2015; 2016) use a two-stage LSTM\n(Hochreiter and Schmidhuber, 1997) to predict\npunctuation based on textual and prosodic fea-\ntures. Mor ´o and Szasz ´ak (2017) only use prosodic\ninformation to train a bidirectional LSTM while\nGale and Parthasarathy (2017) compare several\ncharacter-level convolutional and LSTM architec-\ntures, of which a simple LSTM with delay per-\nforms the best, although not consistently better\nthan word-level bidirectional models. Pahuja et\nal. (2017) train a bidirectional RNN to jointly pre-\ndict the correlated tasks of punctuation and capi-\ntalization.\nAs far as we know, only Tilk and Alum ¨ae (2016)\ndirectly compare unidirectional and bidirectional\nword-level LSTMs for sequence labeling: even\nthough their unidirectional model is smaller than\ntheir bidirectional model1, the bidirectional one\ndoes not consistently outperform the unidirectional\none. As we will see in section 4.1, we observe a\nsimilar trend.\n1A hidden size of 100 (Tilk and Alum ¨ae, 2015) vs. 256 (Tilk\nand Alum ¨ae, 2016), while it is not clear whether both the for-\nward and the backward have 256 units or whether each of\nthem have 128.In the context of MT, Matusov et al. (2006) and\nPeitz et al. (2011) present the three strategies for\npunctuation prediction we also use (as shown in\nfigure 1). Lee and Roukos (2006) use a prepro-\ncessing approach, and Hassan et al. 
(2007) present\na postprocessing apprach. Peitz et al. (2011) com-\nbine the outputs of the different strategies and find\nthat “the translation-based punctuation prediction\noutperformed the LM based approach as well as\nimplicit method in terms of BLEU and TER on the\nIWSLT 2011 SLT task”. Combining outputs from\ndifferent approaches through system combination\nyields even better results (Matusov et al., 2006b).\nIf we examine the comparisons with previously\npublished methods in related work, we see that\nsome do no compare their approach at all (Beefer-\nman et al., 1998; Huang and Zweig, 2002; Hassan\net al., 2007; Mor ´o and Szasz ´ak, 2017), others com-\npare with either n-gram LMs (Kim and Woodland,\n2001; Zhang et al., 2002; Lu and Ng, 2010; Peitz\net al., 2011; Ueffing et al., 2013; Tilk and Alum ¨ae,\n2015; Tilk and Alum ¨ae, 2016), CRF (Gale and\nParthasarathy, 2017) or CRF and LSTM sequence\nlabeling (Pahuja et al., 2017). We are not aware\nof a systematic comparison of MT approaches, n-\ngram LMs, LSTM LMs and LSTM sequence la-\nbeling. Especially a direct comparison of two of\nthe most promising approaches, LSTM sequence\nlabeling and monolingual MT, is lacking.\n3 Methodology\nWe test several methods, keeping the data for train-\ning, tuning, and testing constant. Section 3.1 de-\nscribes the data, section 3.2 discusses the mod-\nels for punctuation prediction and section 3.3 the\nbilingual translation models. Finally section 3.4\nexplains how the quality of the punctuation pre-\ndiction and translation is measured.\n3.1 Data\nAs training data, we use the Dutch (source) and\nEnglish (target) components of the Europarl cor-\npus, version 7 (Koehn, 2005). The training data\ncontains 55M words or 2M sentences (per lan-\nguage). As development set and test set we use\nthe data of Vandeghinste et al. (2013). The devel-\nopment set consists of 574 sentences with one ref-\nerence translation, randomly selected from actual\ntranslations made by a language service provider.\nAs test set, we use 500 sentences with three refer-\nence translations, made by three different transla-270\ntors.2\nAll the data are tokenized, truecased, and de-\npending on the experimental condition, cleaned\nwith the Moses toolkit (Koehn et al., 2007). We\ncompare the full dataset with a dataset in which\nall sentences longer than 80 words have been re-\nmoved.3\nWe predict the following punctuation symbols:\ndot (.), comma ( ,), question mark ( ?), exclama-\ntion mark ( !), colon ( :), semicolon ( ;), opening\nand closing brackets ( ()), slash ( /) and dash ( -).\nNote that our punctuation set is much larger than\nmost previous work that we are aware of: for ex-\nample Gale and Parthasarathy (2017) and Pahuja et\nal. (2017) only focus on predicting periods, com-\nmas and question marks.\n3.2 Punctuation prediction\nWe apply punctuation prediction in the preprocess-\ning as well as in the postprocessing punctuation\nstrategy, i.e. in Dutch and in English.\n3.2.1 Punctuation prediction using language\nmodeling\nWe train two types of LMs: n-gram models and\nLSTMs. The models for preprocessing are trained\non Dutch and those for postprocessing on English.\nThen-gram models are 4-gram LMs (5-grams did\nnot improve the performance) with interpolated\nmodified Kneser-Ney smoothing (Chen and Good-\nman, 1999), trained with the SRILM toolkit (Stol-\ncke, 2002). 
We compare the results for using the\nleft context only (forward fw) with those for using\nboth the left and the right context (forward + back-\nward fw+bw ), where both the preceding 3 words\nand the following 3 words can be used.\nThe LSTM LMs are trained with Tensor-\nFlow (Abadi et al., 2015) and consist of 1 layer\nof 512 cells, initialized randomly with a uniform\ndistribution between -0.05 and 0.05. They are op-\ntimized with Adagrad (Duchi et al., 2011) with a\nlearning rate of 0.1, early stopping is applied if\nthe validation perplexity has not improved 3 times.\nOtherwise, the maximum number of epochs is 39.\nWe train on batches of size 20 and unroll the net-\nwork for maximum 35 time steps during backprop-\nagation through time. With respect to regulariza-\ntion, the norms of the gradients are clipped at 5\nand we apply 50% dropout (Srivastava et al., 2014)\n2These test sets are freely available upon request.\n3For the development and test set, cleaning does not make any\ndifference, as they are hand-made and are clean to begin with.during training. We use sampled softmax (Jean et\nal., 2014) to speed up training. Due to a lack of re-\nsources, we did not apply an exhaustive hyperpa-\nrameter optimization, but started from settings that\nhave proven to work well for similar datasets.4\nFor punctuation prediction with LMs, we pro-\nceed as follows: we train the LMs on punctuated\ndata and test on unpunctuated data. Given a non-\npunctuated input sentence, we determine the most\nprobable token after every word. If a punctuation\nsymbol is predicted, it is inserted at the current po-\nsition in the input sentence and the updated sen-\ntence is used during the rest of the prediction. We\ncontinue the prediction until the end of the sen-\ntence is reached, including the position after the\nlast token.\nThe full vocabulary of the training set consists\nof approximately 280k words for Dutch and 130k\nwords for English (Dutch has much more com-\npounding than English). Since models with that\nvocabulary size do not fit on our GPUs and since\nthe large vocabulary also considerably slows down\ntraining of the LSTMs, we limit the vocabulary\nsize to 50k. For fair comparison, we report results\nforn-grams models with the same vocabulary, but\nalso for n-gram models with the full vocabulary\nin order to investigate the effect of the vocabulary\nsize on the performance. All words not in the vo-\ncabulary are mapped to an unknown-word -class.\n3.2.2 Punctuation prediction using sequence\nlabeling\nBesides LSTM LMs, we investigate LSTM se-\nquence labeling (‘LSTM seq’): we train an LSTM\nthat takes as input a word and the previous state\nand predicts in the output whether the word is fol-\nlowed by a punctuation symbol or not ( /angbracketleftnopunct /angbracketright-\nclass). There are several advantages to this ap-\nproach compared to language modeling: firstly, we\ntrain the LSTM on unpunctuated text and test it on\nunpunctuated text, so there is no mismatch in train-\ning and test conditions. Secondly, the models are\ndirectly optimized for punctuation prediction and\nthey are easier to train since we do not have the\nlarge output weight matrix of an LM and we only\n4We do not use bidirectional LSTM LMs for this task, since\nduring training, the backward LSTM will have seen punctua-\ntion symbols following the current token and the model will\nlearn to make use of those symbols. 
However, for applications\nsuch as speech translation, the input for the punctuation pre-\ndiction model will have no punctuation at all, and hence the\nmodel that has learned to make use of subsequent symbols\nwill not be optimal.271\nhave to compute the softmax function over a small\nnumber of output classes. Finally, we can train\nbidirectional LSTMs without causing a mismatch\nbetween training input and testing input (see foot-\nnote 4). A disadvantage of these models is that the\ninput is not punctuated, and hence the model can-\nnot exploit punctuation in other parts of the sen-\ntence that is previously predicted. Note that, as op-\nposed to Tilk and Alum ¨ae (2016), we do not insert\nend-of-sentence symbols for the LSTM sequence\nlabeling, because this would be an (unfair) advan-\ntage for bidirectional models – the probability of\nseeing an end-of-sentence punctuation mark right\nbefore an end-of-sentence symbol is naturally very\nhigh.\nThe hyperparameters of these models are the\nsame as for the LSTM LMs, except that we use\na full softmax in the output layer since we do not\nhave to deal with a large vocabulary anymore. The\nbidirectional LSTMs consist of one forward LSTM\nof 256 cells and one backward LSTM of 256 cells,\nin total giving the same amount of LSTM cells and\nparameters as for the unidirectional LSTM (512).\n3.2.3 Punctuation prediction using machine\ntranslation\nWe can model the punctuation prediction as an\nMT problem, treating the non-punctuated version\nof our text as source language, and the punctuated\nversion as target language: we build such mono-\nlingual MT systems for Dutch (preprocessing) and\nEnglish (postprocessing).\nThe phrase-based statistical MT ( PBSMT ) con-\ndition uses the Moses decoder (Koehn et al., 2007)\nin its phrase-based mode, with a 5-gram LM, and\ngrow-diag-final-and as phrase alignment criterion.\nFor other parameters we use the default settings.\nThe data is word-aligned using GIZA++ (Och and\nNey, 2003). We do not allow reordering, setting\nthedistortion-limit to0. The PBSMT clean condi-\ntion is equal to PBSMT, but removing all sentences\nlonger than 80 words from the training data.\nTheHiero condition uses Moses in hierarchical\nmode (Chiang, 2007), with a glue grammar and a\nmaximum phrase length of 5. The other param-\neters are the same as for the PBSMT condition.\nAll Moses systems are tuned using Minimum Error\nRate Training (Och, 2003), maximizing on BLEU.\nFor the Neural MT (NMT) models, we use the\nOpenNMT framework (Klein et al., 2017), trained\nwith the default settings, i.e. 500 LSTM cells,\nseq2seq model type, a vocabulary of 50k for bothsource and target language, a general global atten-\ntion model (Luong et al., 2015), 13 epochs, a batch\nsize of 64, and optimization through stochastic\ngradient descent (SGD). The initial learning rate is\n1, except for the English model trained with SGD:\nsince the training got stuck in a local minimum, we\nuse 0.9 instead. 
The learning rate is decreased with\na decay factor of 0.7, a beam size of 5, and replace-\nments of unknowns , based on the highest attention\nweight.5We also try a variant with optimization\nAdam (Kingma and Ba, 2014) and a learning rate\nof 0.0002.\nVariants of the systems trained with byte pair en-\ncoding have not been included in this study as ini-\ntial tests only showed worse results than without\nbyte pair encoding.\n3.3 Translation Methods\nWe use the same MT systems as described in sec-\ntion 3.2.3, but now trained on the bilingual version\nof Europarl. Different from section 3.2.3 is that\nwe now do allow phrase reordering for the phrase-\nbased model, setting the distortion limit to 6.\n3.4 Evaluation\nWe measure the quality of punctuation prediction\nwith precision, recall and F1-score. The precision\nover all punctuation symbols is calculated as fol-\nlows:\nprecision all=/summationdisplay\ni∈PTPi\nTPi+FPi(1)\nwithPthe class of all punctuation symbols, TPi\nthe number of true positives for a certain punctua-\ntion symbol and FPithe number of false positives.\nRecall is calculated analogously. If a certain punc-\ntuation symbol has been predicted but the target\nis another punctuation symbol, we count this as a\nfalse negative.\nAdditionally, we use three common MT eval-\nuation metrics, i.e. BLEU (Papineni et al.,\n2002), TER (Snover et al., 2006) and ME-\nTEOR (Denkowski and Lavie, 2014) with syn-\nonyms, comparing the test set with predicted punc-\ntuation with the reference text (original text includ-\ning punctuation). These metrics give us informa-\ntion on the quality of the entire output (and not\nonly the punctuation prediction), which can be an\n5Replacing the unknowns by their most probable aligned\nsource language word.272\nissue in MT models that allow reordering, such as\nHiero and NMT.\nWe measure the translation quality with the\nsame three MT evaluation metrics. Note that, as\ndescribed in section 3.1, we use an evaluation set\nwith three references, ensuring a higher correlation\nof BLEU with human judgment, than when only\none reference is used.6\nWe perform significance testing by bootstrap re-\nsampling for BLEU scores (Koehn, 2004) and F1\nscores.7\n4 Results\nSection 4.1 describes the results of punctuation\nprediction and section 4.2 describes the results of\nMT of unpunctuated input.\n4.1 Punctuation Prediction\n4.1.1 Dutch\nTable 1 shows the results of the punctuation pre-\ndiction experiments for Dutch. All MT approaches\nscore significantly better on F1 and BLEU scores\n(p < . 001) than the LM approaches. They\nalso score significantly better on F1-score than\nthe LSTM seq approaches ( p < . 001), but only\nPBSMT ,PBSMT clean andHiero score better on\nBLEU score ( p < . 001) than any of the LSTM\nseq approaches. PBSMT scores significantly better\n(p < . 05) than the other MT approaches on BLEU,\nbut on F1-score it scores only significantly better\nthan PBSMT clean (p < . 001). This difference be-\ntween BLEU and F1 score can be explained by the\nfact that the non-PBSMT approaches can reorder\nthe words and perform unwanted transformations\nother than inserting punctuation (mainly affecting\nBLEU). This is why we consider the PBSMT ap-\nproach to punctuation insertion the best approach\nfor this experimental setup.\nOf the LM approaches, n-gram fw+bw scores\nsignificantly better than the other approaches on\nBLEU and F1 ( p < . 001). 
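(The p-values in this and the following paragraphs come from the bootstrap resampling test described in section 3.4. A minimal sketch of such a paired bootstrap test for the F1 of punctuation insertion is given below; it assumes per-sentence true-positive/false-positive/false-negative counts for two systems, aligned sentence by sentence, and the number of resamples is an illustrative choice rather than a value taken from the paper.)

import random

def f1(counts):
    # counts: list of (tp, fp, fn) tuples, one per test sentence
    tp = sum(c[0] for c in counts)
    fp = sum(c[1] for c in counts)
    fn = sum(c[2] for c in counts)
    if tp == 0:
        return 0.0
    precision = tp / (tp + fp)
    recall = tp / (tp + fn)
    return 2 * precision * recall / (precision + recall)

def paired_bootstrap_f1(counts_a, counts_b, samples=1000, seed=1):
    # Resample the test sentences with replacement and count how often
    # system B fails to outperform system A on the resampled F1.
    rng = random.Random(seed)
    n = len(counts_a)
    not_better = 0
    for _ in range(samples):
        idx = [rng.randrange(n) for _ in range(n)]
        if f1([counts_a[i] for i in idx]) >= f1([counts_b[i] for i in idx]):
            not_better += 1
    return not_better / samples  # approximate p-value for "B is better than A"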
Increasing the vocabu-\nlary size has only a minor influence on the results:\nit decreases precision but increases recall, and has\nno significant effects on BLEU nor on F1. These\n6The original BLEU paper (Papineni et al., 2002) also uses\nmultiple references.\n7We adapted the perl implementation by\nMark Fishel for BLEU bootstrap resampling,\nwhich is available at https://github.com/moses-\nsmt/mosesdecoder/blob/master/scripts/analysis/bootstrap-\nhypothesis-difference-significance.pl to work for F-scores on\npunctuation insertion.Table 1: Results of punctuation prediction in Dutch\nMethod Prec. Recall F1 BLEU TER MET.\nn-gram LMs\nfw 50k 22.63 27.89 24.98 68.69 14.10 87.10\nfw full 22.08 28.45 24.86 70.56 13.33 87.69\nfw+bw 50k 54.49 78.59 64.36 79.63 8.32 92.88\nfw+bw full 53.57 79.15 63.89 80.86 7.78 93.28\nLSTM LM fw 44.75 31.83 37.20 83.90 7.97 92.42\nLSTM seq\nfw 72.03 11.97 20.53 86.11 8.42 91.12\nfw opt 43.70 32.25 37.11 83.45 9.31 90.86\nfw+bw 50.23 15.07 23.18 86.84 9.17 90.39\nfw+bw opt 41.28 16.34 23.41 86.13 9.74 89.94\nPBSMT 92.36 74.93 82.74 94.20 2.85 97.14\nclean 93.88 71.27 81.02 93.76 3.06 96.88\nHiero 83.16 80.70 81.92 93.54 3.11 97.16\nNMT SGD 84.53 79.30 81.83 85.71 6.88 91.79\nAdam 82.43 79.30 80.83 85.04 7.12 91.61\nTable 2: Results of punctuation prediction in English\nMethod Prec. Recall F1 BLEU TER MET.\nn-gram LM\nfw 50k 12.51 28.31 17.35 50.27 23.04 48.24\nfw full 23.69 30.37 26.61 71.96 13.64 54.93\nfw+bw 50k 42.62 73.60 53.96 69.21 10.41 53.30\nfw+bw full 51.30 79.53 62.35 79.78 8.21 58.87\nLSTM fw 35.23 25.10 29.31 80.76 10.37 57.21\nLSTM seq\nfw 69.88 7.50 13.54 90.19 8.86 59.75\nfw opt 32.88 32.44 32.66 78.79 11.48 56.66\nfw+bw 41.53 13.31 20.20 86.34 10.02 58.31\nfw+bw opt 39.10 13.83 20.42 85.21 9.93 58.36\nPBSMT 86.09 77.15 81.37 94.76 3.12 66.12\nclean 83.46 76.77 79.77 93.77 3.45 65.36\nHiero 76.48 79.87 77.97 92.18 3.91 64.32\nNMT SGD 91.41 82.59 86.76 93.53 3.44 64.97\nAdam 90.62 82.63 86.43 93.78 3.35 65.19\nmodels tend to overgenerate punctuation, which\ncan be seen from their low precision.\nLSTM sequence labeling ( LSTM seq ) does not\nscore better than the LM approaches, mainly be-\ncause of the low recall. The bidirectional LSTM\nhas a lower precision but a slightly higher recall\nthan the unidirectional LSTM. The n-gram fw+bw\n50k andn-gram fw-bw full methods result in a\nsignificantly better F1 score ( p < . 001) than any\nof the LSTM seq methods. In BLEU scores, all\nLSTM seq methods are significantly better than all\nn-gram LM approaches. This reflects the fact that\nBLEU is a precision metric. Only the difference\nbetween LSTM fw andLSTM seq fw opt is not sig-\nnificant.\nWe tested two methods to improve recall for se-\nquence labeling: thresholding for the probability\ndistribution and weighted cross-entropy. Thresh-\nolding means that if /angbracketleftnopunct /angbracketrightis predicted but the\nratio of the probability of the second most probable\noutput over the probability of /angbracketleftnopunct /angbracketrightis higher\nthan a certain threshold, we assign the second most\nprobable token as prediction. This method indeed\nimproves the recall of the model but lowers the\nprecision: we report the result after optimizing the\nthreshold for F1 score (‘opt’ in the table). We also273\nobserve that optimizing for F1 does not result in\nbetter quality according to the MT metrics. The\noptimal threshold for the unidirectional model was\n0.3 and for the bidirectional model 0.6. 
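The thresholding rule just described can be written down in a few lines; the sketch below assumes the sequence labeller returns a probability distribution over the output classes for each position (the data structures are illustrative and not taken from the authors' implementation).

NO_PUNCT = "<nopunct>"

def apply_threshold(class_probs, threshold):
    # class_probs: dict mapping output class -> probability for one position.
    # If <nopunct> is the most probable class but the runner-up is close enough
    # (probability ratio above the threshold), predict the runner-up instead.
    ranked = sorted(class_probs.items(), key=lambda kv: kv[1], reverse=True)
    best, second = ranked[0], ranked[1]
    if best[0] == NO_PUNCT and second[1] / best[1] > threshold:
        return second[0]
    return best[0]

# Example with threshold 0.3 (the optimal value reported for the unidirectional model):
probs = {NO_PUNCT: 0.60, ",": 0.25, ".": 0.15}
print(apply_threshold(probs, 0.3))  # -> ","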
Trading\noff precision and recall had a much smaller effect\non the bidirectional model than on the unidirec-\ntional one. Training with weighted cross-entropy,\nwhere more weight is given to the punctuation\nsymbols since they are much less frequent than the\n/angbracketleftnopunct /angbracketright-class, has similar effects but has the dis-\nadvantage of having to re-train the model and opti-\nmize the weights per output class, while the thresh-\nold can be optimized during testing.\n4.1.2 English\nTable 2 shows the results of punctuation predic-\ntion for English. As we had three reference sets\nin the original test set, we present averaged results\nover punctuation prediction on each of these three\nsets (we calculate the result for each set separately\nand average over the three datasets). For BLEU\nscores we used all three references.\nAll MT approaches score significantly better\nthan the LM approaches ( p < . 001) They also\nscore significantly better than the LSTM seq meth-\nods (at least p < . 005). Similar to punctuation in-\nsertion for Dutch, PBSMT reaches the best BLEU\nscores, although not significantly better than PB-\nSMT clean , but significantly better than NMT SGD\nandNMT Adam (both p < . 05). With respect to the\nF1-score, we see that there is no significant differ-\nence between NMT SGD andNMT Adam , but NMT\nSGD scores significantly better ( p < . 001) than the\nother MT methods. NMT Adam scores better than\nPBSMT (p < 0.05) and PBSMT clean (p < 0.001\nfor two of the three test sets, not significant for the\nthird one), and Hiero (p < . 001).\nFor LM and sequence labeling, we see similar\nresults as for Dutch, with the exception that lim-\niting the vocabulary to only 50k words decreased\nthe performance much more for English than for\nDutch. This might seem surprising given that the\nDutch dataset has a much larger vocabulary, but it\nhas many more words that occur only once or a few\ntimes (ca. 200k types have a frequency of 5 or less\nin Dutch, as opposed to ca. 80k in English).\nThen-gram LM approaches score much better\non F1 score, but they overgenerate, as can be seen\nfrom the low precision and lower BLEU scores,\nwhen compared to LSTM seq approaches, which\nseem to undergenerate.To conclude, we observe that for both Dutch and\nEnglish the MT approaches work best for punc-\ntuation prediction as an isolated task. Since we\nare mainly interested in punctuation prediction in\nthe context of speech translation, the phrase-based\napproach is the most promising since it does not\ncause any reordering of the words, giving the best\nresults according to the MT metrics. We will now\nexamine which approach achieves the best (bilin-\ngual) translation quality.\n4.2 Translation of unpunctuated input\nTable 3 shows the different experimental condi-\ntions that are evaluated and will be further ex-\nplained in the next subsections. The best scores\nper punctuation strategy are marked in bold, the\nbest scores per translation system are underlined.\n4.2.1 Baselines\nIn the baseline conditions, we train the MT sys-\ntems on normal, punctuated, tokenized, and true-\ncased source and target text, and tune them on\nthe normal, punctuated, tokenized and truecased\ndevelopment set. We remove all the punctuation\nfrom the test set, and let the MT systems translate\nit. 
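A minimal helper for producing this unpunctuated input (and, analogously, the unpunctuated source side on which the monolingual punctuation-insertion systems of section 3.2.3 are trained) could look as follows, assuming whitespace-tokenized text and the punctuation set listed in section 3.1:

PUNCT = {".", ",", "?", "!", ":", ";", "(", ")", "/", "-"}

def strip_punctuation(tokenized_sentence):
    # Drop every token that belongs to the punctuation set; the input is
    # assumed to be a whitespace-tokenized (Moses-style) sentence.
    return " ".join(tok for tok in tokenized_sentence.split() if tok not in PUNCT)

print(strip_punctuation("the commission , however , rejected the proposal ."))
# -> "the commission however rejected the proposal"

Translating this stripped test set with the regular, punctuation-trained systems gives the Baselines row of Table 3.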
It hence constitutes the lower bound .\nNMT SGD gets the highest BLEU score, but\nnot significantly better than PBSMT andPBSMT\nclean .Hiero andNMT Adam score significantly\nworse than the other three conditions ( p < . 001).\n4.2.2 Upper Bounds\nIn the upper bounds conditions, we use the same\nMT systems as in the baselines, and evaluate them\non the normal, punctuated, tokenized and true-\ncased test set, to see how well the MT systems\nwould do with “perfect” input.\nEach of the upper bound scores is significantly\nbetter ( p < 0.001) than the same approach in the\nbaseline condition, so using MT without any form\nof punctuation insertion results in a significant loss\nin translation quality.\nComparing the different MT systems, NMT\nSGD is significantly better than PBSMT (p < . 01),\nPBSMT clean andNMT Adam (both p < . 001).\nThere is no significant difference between PBSMT ,\nPBSMT clean , and NMT Adam , but all score signif-\nicantly better than Hiero (p < . 001). Remarkable\nis the higher METEOR score for PBSMT .274\nTable 3: Results of punctuation insertion + translation.Punctuation Translation System\nInsertion PBSMT PBSMT clean Hiero NMT SGD Adam\nMethod BLEU TER METEOR BLEU TER METEOR BLEU TER METEOR BLEU TER METEOR BLEU TER METEOR\nBaselines 43.22 44.69 37.27 42.97 44.37 37.31 39.81 46.69 35.70 44.01 43.80 36.40 38.91 46.85 33.02\nUpper bounds 46.19 39.15 39.08 46.55 38.80 39.01 42.72 41.59 37.35 49.13 37.66 38.26 46.30 39.41 37.59\nPreprocessing\nn-gram\nfw 50K 36.64 48.53 35.36 35.36 48.11 34.93 35.68 49.35 34.73 36.00 52.76 31.50 34.33 53.05 31.01\nfw full 37.63 47.36 36.62 37.46 46.98 36.77 36.19 48.51 35.55 38.76 50.99 33.45 36.60 51.34 32.74\nfw+bw 50K 38.98 44.91 36.46 38.25 44.51 36.14 37.97 46.05 35.71 43.51 43.14 35.78 40.43 45.30 34.97\nfw+bw full 40.41 43.46 37.81 40.08 43.42 37.83 38.33 45.36 36.40 44.36 42.01 36.68 42.54 43.18 36.45\nLSTM fw 39.10 46.08 34.48 38.06 45.52 34.15 37.11 47.24 33.64 38.23 49.25 32.95 35.91 50.85 31.37\nLSTM seq\nfw 42.75 44.25 37.22 42.17 44.02 37.15 39.56 46.25 35.76 43.49 43.54 36.19 39.43 46.55 33.43\nfw opt 40.44 45.55 36.99 40.29 45.01 37.09 38.88 45.88 35.93 41.81 45.97 35.22 39.21 46.97 34.00\nfw+bw 42.01 45.35 37.03 41.57 45.26 36.98 39.96 45.57 35.96 43.78 43.87 36.15 39.69 47.11 33.96\nfw+bw opt 41.73 45.83 36.93 10.99 45.91 36.87 40.13 45.40 36.08 43.13 44.94 35.95 39.69 47.11 33.96\nPBSMT 45.61 40.26 38.59 45.27 40.26 38.56 42.11 42.60 37.01 47.37 39.18 37.55 45.49 40.79 37.10\nclean 45.54 39.98 38.51 45.19 40.27 38.58 41.87 42.70 36.95 47.04 39.40 37.44 45.29 41.05 36.97\nHiero 44.94 40.44 38.57 42.97 44.37 37.32 39.81 46.69 35.70 46.41 39.72 37.12 45.39 40.96 37.08\nNMT SGD 44.99 40.49 37.49 44.81 40.56 37.64 40.97 42.90 35.83 47.17 39.35 37.22 45.19 40.77 37.04\nAdam 44.65 40.77 37.40 44.53 40.35 37.60 40.94 43.14 35.84 47.05 39.77 37.34 45.23 40.95 37.13\nImplicit 44.47 41.65 38.11 37.37 44.68 34.77 41.89 42.37 36.78 47.12 38.99 37.46 44.78 41.24 36.56\nUnpunctuated 44.81 42.00 38.55 44.26 41.58 38.62 40.27 45.20 36.77 46.86 41.08 37.55 43.71 43.57 36.52\nPostprocessing\nn-gram\nfw 50K 29.62 55.08 35.28 29.05 55.35 35.34 26.75 58.08 33.84 31.36 53.62 34.09 29.23 55.68 33.33\nfw full 36.77 48.15 36.47 36.25 47.71 36.45 33.71 50.69 35.02 38.29 47.02 35.22 35.35 49.90 34.32\nfw+bw 50K 36.59 46.18 37.08 30.06 45.80 37.26 33.19 49.13 35.54 39.17 45.06 35.97 36.19 47.54 35.05\nfw+bw full 38.54 44.62 37.34 38.11 44.74 37.52 34.86 47.72 35.82 41.53 43.47 36.37 38.07 46.33 35.40\nLSTM fw 39.67 46.51 36.74 39.04 46.15 36.80 35.81 
49.42 35.10 41.78 45.53 35.70 39.20 47.73 34.92\nLSTM seq\nfw 41.71 44.90 36.76 41.00 44.43 36.72 37.57 47.64 35.09 42.69 44.72 35.68 39.86 46.69 34.70\nfw opt 40.66 46.44 36.43 37.68 46.77 36.57 34.46 50.24 34.98 39.62 46.52 35.40 37.17 48.84 34.48\nfw+bw 40.66 46.44 36.43 39.88 46.29 36.40 36.93 49.14 34.81 41.86 45.95 35.34 39.12 47.98 34.34\nfw+bw opt 40.51 46.67 36.41 39.66 46.54 36.38 36.71 49.52 34.81 41.70 46.09 35.30 39.01 48.07 34.32\nPBSMT 45.10 40.94 38.31 44.75 40.41 38.44 41.11 43.62 36.63 46.73 40.21 37.34 43.58 42.44 36.33\nclean 45.05 40.83 38.30 44.73 40.33 38.43 41.06 43.58 36.61 46.64 40.20 37.30 43.60 42.45 36.30\nHiero 44.41 41.61 38.25 43.85 41.11 38.39 40.38 44.27 36.59 46.04 40.73 37.21 43.14 43.08 36.26\nNMT SGD 44.87 40.62 38.34 43.95 41.15 38.40 40.65 43.45 36.66 47.00 39.68 37.48 44.17 41.65 36.46\nAdam 44.54 40.78 38.10 44.04 40.61 38.25 40.50 43.30 36.50 46.89 39.68 37.38 44.21 41.57 36.47.275\n4.2.3 Preprocessing\nIn the preprocessing conditions, we first insert\npunctuation, as described in section 4.1, before\ntranslating. The output of the punctuation inser-\ntion is then translated using a regular MT system,\ntrained on punctuated data.\nUsing LM and LSTM seq as preprocessing ap-\nproach never helps significantly over the baseline,\nonly in the case of ngram fw+bw full + NMT Adam\n(p < . 001). Using monolingual MT as prepro-\ncessing nearly always helps ( p < . 005), except\nwhen using Hiero as preprocessing or as transla-\ntion engine. Whether PBSMT orPBSMT clean are\nused as preprocessors does not make a significant\ndifference. When using NMT SGD as translation\nmethod, the kind of monolingual MT (apart from\nHiero) does not play a significant role.\nThe best preprocessing results (using PBSMT as\npunctuation inserter) score still significantly lower\nthan the upper bound scores when using the same\ntranslation system ( p < . 05forPBSMT ,PBSMT\nclean andNMT Adam ,p < . 01forHiero andp <\n.001forNMT SGD ).\n4.2.4 Implicit Punctuation Insertion\nWe remove punctuation from the source side of\nthe parallel corpus and train the MT engines on\nthese data, so they should be well-suited to trans-\nlate source text without punctuation in target text\nwith punctuation.\nThe score for implicit translation using NMT\nSGD is not significantly worse than preprocess-\ningPBSMT + NMT SGD .NMT SGD scores sig-\nnificantly better ( p < . 005) than all other implicit\npunctuation insertion methods.\n4.2.5 Unpunctuated\nWe have tested the MT systems trained on un-\npunctuated data both in the source and the target,\nand evaluated against references from which the\npunctuation is also removed. As we use a differ-\nent version of the references, we cannot apply sig-\nnificance testing. We present these results as they\nprovide an indication about the maximum score we\ncan expect for the postprocessing approach.\nEven without punctuation inserted, it is clear\nthat the scores are much lower than the Upper\nbounds presented earlier. The presence of punctua-\ntion thus improves the bilingual translation quality\nin general.4.2.6 Postprocessing\nIn the postprocessing approach, we translate\nusing MT systems trained on unpunctuated data\n(both source and target), resulting in a translation\nthat does not contain punctuation. 
The postpro-\ncessing step consists of punctuation insertion, sim-\nilar to the preprocessing punctuation insertion step,\nbut now for English.\nPostprocessing with LM and LSTM seq does\nnot yield any improvements over the baseline.\nWith monolingual MT we reach significance in all\ncases where we use PBSMT (p < . 001),PBSMT\nclean (p < . 001) and NMT SGD (p < . 05).NMT\nAdam also improves over the baseline ( p < . 05),\nexcept when combined with the Hiero system.\n4.2.7 General results\nWe note the lack of significant difference be-\ntween pre- and postprocessing in the cases where\npunctuation insertion consists of PBSMT ,PBSMT\nclean ,NMT SGD orNMT Adam and translation\nconsists of PBSMT ,PBSMT clean ,NMT SGD or\nNMT Adam .\nWhen considering how much we can close the\ngap between upper bound andbaseline using the\nbest scoring combination of methods for each of\nthe translation systems, we note gap closure of\n80% for PBSMT , 64% for PBSMT clean , 79%\nforHiero , 66% for NMT SGD and 89% for NMT\nAdam .\n5 Conclusions and Future Work\nWe set out to compare different approaches to\npunctuation prediction in the context of transla-\ntion. We test several different architectures and\nmethods for punctuation prediction as well as for\nMT, all trained on the exact same data sets, and\nevaluate the punctuation prediction quality as a\nmonolingual phenomenon, as well as its effect on\nMT quality.\nWhile there is a clear deterioration of MT qual-\nity when working with unpunctuated input, this\ngap can be closed for 66% in the case of our best\nbilingual MT system, NMT, by applying monolin-\ngual MT as punctuation insertion, or by using a\ndedicated implicit insertion MT system.\nWhether we use pre- or postprocessing did, in\nmost cases, not result in a significant difference,\nindicating that the general punctuation prediction\nquality for Dutch is similar to that of English.276\nIn future work, we would like to develop a sim-\nilar experiment for segmentation prediction , and\ntest the results on real speech signals in order to\ndetermine the usefulness of the results in a more\nrealistic setting. A possible improvement would\nbe to use NMT as punctuation prediction model,\nbut constrain the word order with the help of the\nattention weights, thus combining the advantage\nof neural MT with the constraints on reordering of\nPBSMT.\nAcknowledgements\nThis research was done in the context of the\nSCATE project, funded by the Flemish Agency\nfor Innovation and Entrepreneurship (IWT project\n13007).\nReferences\nAbadi, Mart ´ın, Ashish Agarwal, Paul Barham, Eugene\nBrevdo, Zhifeng Chen, Craig Citro, Greg S. Corrado,\nAndy Davis, Jeffrey Dean, Matthieu Devin, Sanjay\nGhemawat, Ian Goodfellow, Andrew Harp, Geoffrey\nIrving, Michael Isard, Rafal Jozefowicz, Yangqing\nJia, Lukasz Kaiser, Manjunath Kudlur, Josh Lev-\nenberg, Dan Man ´e, Mike Schuster, Rajat Monga,\nSherry Moore, Derek Murray, Chris Olah, Jonathon\nShlens, Benoit Steiner, Ilya Sutskever, Kunal Talwar,\nPaul Tucker, Vincent Vanhoucke, Vijay Vasudevan,\nFernanda Vi ´egas, Oriol Vinyals, Pete Warden, Mar-\ntin Wattenberg, Martin Wicke, Yuan Yu and Xiao-\nqiang Zheng. 2015. TensorFlow: Large-scale ma-\nchine learning on heterogeneous systems. Software\navailable from tensorflow.org .\nBeeferman, Doug, Adam Berger and John Lafferty.\n1998. Cyberpunc: A lightweight punctuation an-\nnotation system for speech. IEEE Conference on\nAcoustics, Speech and Signal Processing. 689–692.\nKim, Ji-Hwan and P.C. Woodland 2001. 
The use of\nprosody in a combined system for punctuation gen-\neration and speech recognition. Proceedings of Eu-\nroSpeech.\nChristensen, Heidi, Yoshihiko Gotoh and Steve Re-\nnals. 2001. Punctuation Annotation using Statistical\nProsody Models. Proceedings of ISCA Workshop on\nProsody in Speech Recognition and Understanding.\nHuang, Jing and Geoffrey Zweig. 2002. Maximum en-\ntropy model for punctuation annotation from speech.\nProceedings of ICSLP .\nGravano, Agust ´ın, Martin Jansche and Michiel Bacchi-\nani. 2009. Restoring punctuation and capitalization\nin transcribed speech. Proceedings ICASSP .Chen, Stanley F. and Joshua Goodman. 1999. An\nempirical study of smoothing techniques for lan-\nguage modeling. Computer Speech and Language ,\n17:359–394.\nChiang, David. 2007. Hierarchical Phrase-Based\nTranslation. Computational Linguistics , 33(2):201–\n228.\nDenkowski, Michael and Alon Lavie. 2014. Meteor\nUniversal: Language Specific Translation Evalua-\ntion for Any Target Language. Proceedings of WMT.\n376–380.\nDuchi, John, Elad Hazan and Yoram Singer. 2011.\nAdaptive Subgradient Methods for Online Learning\nand Stochastic Optimization. Journal of Machine\nLearning Research , 12:2121–2159.\nGale, William and Sarangarajan Parthasarathy. 2017.\nExperiments in Character-level Neural Network\nModels for Punctuation. Proceedings Interspeech.\n2794–2798.\nHassan, Hany, Yanjun Ma and Andy Way. 2007.\nMatrex: the DCU machine translation system for\nIWSLT 2007. Proceedings IWSLT. 69–75.\nHochreiter, Sepp and J ¨urgen Schmidhuber. 1997.\nLong Short-term Memory. Neural Computation ,\n9(8):1735–1780.\nJean, S ´ebastien, Kyunghyun Cho, Roland Memisevic\nand Yoshua Bengio. 2014. On Using Very Large\nTarget V ocabulary for Neural Machine Translation.\narXiv preprint arxiv:1412.2007 .\nKlein, Guillaume, Yoon Kim, Yuntian Deng, Jean\nSenellart and Alexander M. Rush. 2017. Open-\nNMT: Open-Source Toolkit for Neural Machine\nTranslation. arXiv preprint arxiv:1701.02810 .\nKingma, Diederik P. and Jimmy Ba. 2014. Adam: A\nMethod for Stochastic Optimization. arXiv preprint\narxiv:1412.6980 .\nKoehn, Philip. 2004. Statistical Significance Tests\nfor Machine Translation Evaluation. Proceedings\nEMNLP . 388–395.\nKoehn, Philip. 2005. Europarl: A Parallel Corpus\nfor Statistical Machine Translation. Proceedings MT\nSummit X. 79–86.\nKoehn, Philip, Hieu Hoang, Alexandra Birch, Chris\nCallison-Burch, Marcello Federico, Nicola Bertoldi,\nBrooke Cowan, Wade Shen, Christine Moran,\nRichard Zens, Chris Dyer, Ond ˇrej Bojar, Alexandra\nConstantin, and Evan Herbst. 2007. Moses: Open\nSource Toolkit for Statistical Machine Translation.\nProceedings of ACL Demonstration Sessions. 177–\n180.\nLee, Young-suk and Salim Roukos 2006. IBM Spo-\nken Language Translation System. Proceedings TC-\nSTAR Workshop on Speech-to-Speech Translation.\n13–18.277\nLu, Wei and Hwee Tou Ng. 1998. Better punctuation\nprediction with dynamic conditional random fields.\nProceedings EMNLP . 177–186.\nLuong, Minh-Thang, Hieu Pham and Christopher D.\nManning. 2015. Effective approaches to attention-\nbased neural machine translation. arXiv preprint\narXiv:1508.04025.\nMatusov, Evgeny, Arne Mauser and Hermann Ney.\n2006. Automatic sentence segmentation and punc-\ntuation prediction for spoken language translation.\nProceedings IWSLT. 158–165.\nMatusov, Evgeny, Nicolas Ueffing and Hermann Ney.\n2006. Computing Consensus Translation from Mul-\ntiple Machine Translation Systems Using Enhanced\nHypotheses Alignment. Proceedings EACL. 
33–40.\nMor´o, Anna and Gy ¨orgy Szasz ´ak. 2017. A phonologi-\ncal phrase sequence modelling approach for resource\nefficient and robust real-time punctuation recovery.\nProceedings Interspeech. 558–562.\nOch, Franz Josef and Hermann Ney. 2003. A Sys-\ntematic Comparison of Various Statistical Alignment\nModels. Computational Linguistics , 29(1):19–51.\nOch, Franz Josef. 2003. Minimum Error Rate Training\nin Statistical Machine Translation. Proceedings of\nACL. 160–167.\nPahuja, Vardaan, Anirban Laha, Shachar Mirkin, Vikas\nRaykar, Lili Kotlerman and Guy Lev. 2017. Joint\nLearning of Correlated Sequence Labeling Tasks Us-\ning Bidirectional Recurrent Neural Networks. Pro-\nceedings Interspeech. 548–552.\nPapineni, Kishore, Salim Roukos, Todd Ward and Wei-\nJing Zhu. 2002. BLEU: a Method for Automatic\nEvaluation of Machine Translation. Proceedings\nACL. 311–318.\nPeitz, Stephan, Markus Freitag, Arne Mauser and Her-\nmann Ney. 2011. Modeling Punctuation Prediction\nas Machine Translation. Proceedings IWSLT. 238–\n245.\nSnover, Matthew, Bonnie Dorr, Richard Schwartz, Lin-\nnea Micciulla and John Makhoul. 2006. A Study of\nTranslation Edit Rate with Targeted Human Annota-\ntion. Proceedings of AMTA. 223–231.\nSrivastava, Nitish, Geoffrey Hinton, Alex\nKrizhevsky,Ilya Sutskever and Ruslan Salakhutdi-\nnov. 2014. Dropout: A Simple Way to Prevent\nNeural Networks from Overfitting. Journal of\nMachine Learning Research , 15:1929–1958.\nStolcke, Andreas. 2002. SRILM an extensible lan-\nguage modeling toolkit. Proceedings International\nConference Spoken Language Processing. 901–904.\nTilk, Ottokar and Tanel Alum ¨ae. 2015. LSTM for\nPunctuation Restoration in Speech Transcripts. Pro-\nceedings Interspeech. 683–687.Tilk, Ottokar and Tanel Alum ¨ae. 2013. Bidirectional\nRecurrent Neural Network With Attention Mecha-\nnism for Punctuation Restoration Transcripts. Pro-\nceedings Interspeech. 3047–3051.\nUeffing, Nicola, Maximilian Bisani and Paul V ozila.\n2013. Improved models for automatic punctuation\nprediction for spoken and written text. Proceedings\nInterspeech. 3097–3101.\nVandeghinste, Vincent, Scott Martens, Gideon Kotz ´e,\nJ¨org Tiedemann, Joachim Van den Bogaert, Koen\nDe Smet, Frank Van Eynde and Gertjan van Noord.\n2013. Parse and Corpus-based Machine Transla-\ntion. Essential Speech and Language Technology for\nDutch , chapter 17, Peter Spyns and Jan Odijk (eds.),\n305–319.\nWeiss, Ron, Jan Chorowski, Navdeep Jaitly, Yonghui\nWu and Zhifeng Chen. 2017. Sequence-to-\nSequence Models Can Directly Translate Foreign\nSpeech. Proceedings Interspeech. 2625–2629.\nZhang, Zhu, Michael Gamon, Simon Corston-Oliver\nand Eric Ringger. 2002. Intra-sentence punctuation\ninsertion in natural language generation. Microsoft\nTechnical Report.278", "main_paper_content": null }
{ "decision": "Unknown", "reviews": [] }
0
0
[]
[]
{ "id": "Ws_GFifjOg", "year": null, "venue": "EAMT 2018", "pdf_link": "https://aclanthology.org/2018.eamt-main.7.pdf", "forum_link": "https://openreview.net/forum?id=Ws_GFifjOg", "arxiv_id": null, "doi": null }
{ "title": "M3TRA: integrating TM and MT for professional translators", "authors": [ "Bram Bulté", "Tom Vanallemeersch", "Vincent Vandeghinste" ], "abstract": null, "keywords": [], "raw_extracted_content": "M3TRA: integrating TM and MT for professional translators\nBram Bult ´e\nCCL – KU LeuvenTom Vanallemeersch\nCCL – KU Leuven\[email protected] Vandeghinste\nCCL – KU Leuven\nAbstract\nTranslation memories (TM) and machine\ntranslation (MT) both are potentially use-\nful resources for professional translators,\nbut they are often still used independently\nin translation workflows. As translators\ntend to have a higher confidence in fuzzy\nmatches than in MT, we investigate how to\ncombine the benefits of TM retrieval with\nthose of MT, by integrating the results of\nboth. We develop a flexible TM-MT in-\ntegration approach based on various tech-\nniques combining the use of TM and MT,\nsuch as fuzzy repair, span pretranslation\nand exploiting multiple matches. Results\nfor ten language pairs using the DGT-TM\ndataset indicate almost consistently better\nBLEU, METEOR and TER scores com-\npared to the MT, TM and NMT baselines.\n1 Introduction\nWhile software for professional translators has in-\ncluded translation memories (TMs) since several\ndecades, especially in the context of specialized\ndocuments, the use of machine translation (MT) in\nsuch software is more recent. Even though certain\ncommercial translation tools now offer function-\nalities such as automatic fuzzy match repair, TM\nand MT technologies are often still used indepen-\ndently, i.e. either a match for a query sentence or\nan MT output is provided. This is not ideal, as\ntranslators tend to have a higher confidence in ‘hu-\nman’ TM than in MT. It has to be kept in mind,\nhowever, that only exact matches provide a trans-\nc/circlecopyrt2018 The authors. This article is licensed under a Creative\nCommons 3.0 licence, no derivative works, attribution, CC-\nBY-ND.lation of the query sentence; ‘fuzzy’ matches of-\nfer a translation of a similar sentence. In contrast,\nMT systems provide a translation for any sentence,\nbut they have problems with a number of, often\nlinguistic, issues, such as complex morphologi-\ncal phenomena, long distance dependencies and\nword order (Bisazza and Federico, 2016; Sudoh\net al., 2010). We investigate how to combine the\nconfidence in fuzzy match retrieval with full sen-\ntence translation by integrating TM and MT out-\nput. We develop M3TRA,1a method which per-\nforms a TM match preprocessing step before run-\nning a standard phrase-based statistical MT (PB-\nSMT) system trained on the TM. M3TRA com-\nbines different approaches, and is flexible in sev-\neral respects: it applies various fuzzy match score\nthresholds, allows for more than one match to be\nused per query sentence, and can use several fuzzy\nmetrics. It comprises two main components: (a)\nfuzzy repair , automatically editing high-scoring\nfuzzy matches, and (b) span pretranslation , con-\nstraining MT output by including certain consis-\ntently aligned spans of one or more TM matches.\nWe perform tests on ten language pairs which\ninvolve multiple language families, using the\nDGT-TM dataset (Steinberger et al., 2013). We ap-\nply PBSMT without span pretranslation as a base-\nline, as well as ‘pure’ TM and a standard NMT\nsystem, and evaluate the translations using several\nmetrics. 
M3TRA is integrated in a prototype trans-\nlation interface providing translators with more\n‘informed’ MT output (Coppers et al., 2018).\nThe following sections describe the research\ncontext, system architecture, experimental design\nand results. The final sections contain a discussion,\noverview of work in progress and conclusions.\n1MeMory + M achine TRA nslationP\u0013 erez-Ortiz, S\u0013 anchez-Mart\u0013 \u0010nez, Espl\u0012 a-Gomis, Popovi\u0013 c, Rico, Martins, Van den Bogaert, Forcada (eds.)\nProceedings of the 21st Annual Conference of the European Association for Machine Translation , p. 69{78\nAlacant, Spain, May 2018.\n2 Research context\nThe baseline approach to TM-MT integration uses\nMT to translate a query sentence in case no suf-\nficiently similar translation unit is found in the\nTM (Simard and Isabelle, 2009). This can be aug-\nmented by using an estimation of the usefulness\nof MT and TM output (He, 2011). Other stud-\nies focus on correcting close matches from a TM\nusing PBSMT, based on a set of learned edit op-\nerations (Hewavitharana et al., 2005). Ortega et\nal. (2016) propose a patching approach to cor-\nrect TM matches with any kind of SMT system,\nand Espla-Gomis et al. (2015) a more translator-\noriented method that offers word keeping recom-\nmendations based on information coming from an\nMT system. Example-based MT systems have\nalso been used to leverage sub-segmental TM\ndata (Simard and Langlais, 2001).\nOf particular relevance are approaches that con-\nstrain a PBSMT system to use relevant parts of a\nfuzzy match (Zhechev and Van Genabith, 2010),\nfor example by adding XML markup to Moses in-\nput (He, 2011; Koehn and Senellart, 2010; Ma\net al., 2011) or by using a constrained word lat-\ntice (Li et al., 2016). Related to these are meth-\nods that augment the translation table of a PB-\nSMT system with aligned spans from a retrieved\nTM match, yet without forcing the SMT system\nto incorporate (parts of) these aligned spans (Bi-\ncici and Dymetman, 2008; Simard and Isabelle,\n2009). Alternatively, information from the fuzzy\nmatches can also be integrated in the SMT system\nitself (Wang et al., 2013), for example using sparse\nfeatures (Li et al., 2017). Recent studies focus\non how to leverage TM information for NMT sys-\ntems. These approaches work, for example, by im-\nposing lexical constraints on the search algorithms\nused by NMT (Hokamp and Liu, 2017), by aug-\nmenting NMT systems with an additional lexical\nmemory (Feng et al., 2017), or by explicitly pro-\nviding the NMT system with access to retrieved\nTM matches (Gu et al., 2017).\nM3TRA combines different elements from\nthese approaches, which is its main novelty. In\nthis paper we focus on (a) repairing close fuzzy\nmatches, and (b) augmenting the MT input with\ninformation derived from the parallel corpus (the\nTM) used to train the MT system, thus constrain-\ning the translation of certain (parts of) sentences.\nWe use a PBSMT system as basis for TM-MT inte-\ngration because SMT allows a straightforward ap-plication of pretranslation (e.g. explicit alignment\ninformation is used in the process).\n3 System architecture\nM3TRA consists of four components: (a) a TM\nsystem, (b) a PBSMT engine, (c) a system for\nfuzzy repair (FR) and (d) a system for pretrans-\nlation span search (PSS). We elaborate on each of\nthese components in the following sections. 
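Before turning to the individual components, it helps to fix an intuition for the fuzzy match score that drives the routing between them. Matches are retrieved with Levenshtein distance and METEOR (section 3.1); the sketch below assumes a word-level Levenshtein distance normalised by the length of the longer sentence, which is one common way of turning the distance into a similarity in [0, 1] — the exact normalisation used by M3TRA is not specified here, so this is purely illustrative.

def levenshtein(a, b):
    # Word-level edit distance via dynamic programming (two rolling rows).
    prev = list(range(len(b) + 1))
    for i, tok_a in enumerate(a, start=1):
        curr = [i]
        for j, tok_b in enumerate(b, start=1):
            cost = 0 if tok_a == tok_b else 1
            curr.append(min(prev[j] + 1,        # deletion
                            curr[j - 1] + 1,    # insertion
                            prev[j - 1] + cost  # substitution
                            ))
        prev = curr
    return prev[-1]

def fuzzy_score(query, source):
    # Similarity in [0, 1]; 1.0 corresponds to an exact match.
    q, s = query.split(), source.split()
    if not q and not s:
        return 1.0
    return 1.0 - levenshtein(q, s) / max(len(q), len(s))

print(fuzzy_score("the commission rejected the proposal",
                  "the commission accepted the proposal"))  # -> 0.8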
The\nsentence to translate can follow a number of routes,\ndepending on the fuzzy match score of the best re-\ntrieved match and the success or failure of certain\nattempted operations (see Figure 1). First, FR is at-\ntempted for sentences that have at least one match\nwhich meets the relevant threshold ( θFR). If FR\nis performed, it may modify the translation of the\nfuzzy match by deleting, inserting or substituting\nwords. In case FR is not performed or fails, there\nare three options: (a) if the score of the highest\nmatch satisfies the TM threshold ( θTM), the trans-\nlation of the TM match becomes the final output,\n(b) if the score is between the TM and MT thresh-\nolds, PSS is attempted, and (c) if the score is be-\nlow the MT threshold ( θMT), or PSS fails (i.e. the\nquery sentence as such becomes input to MT), the\n‘pure’ MT output is used as final output.\nEach of the four M3TRA components is de-\nscribed in detail below, followed by an overview\nof the parameter tuning process.\n3.1 Translation Memory System\nThe TM is defined as a set Mconsisting of tuples\nof source and target sentences (s,t), i.e. transla-\ntion units. Let qbe the sentence to be translated\n(query sentence). It is looked up in the TM using a\nsimilarity function Sim , according to Equation 1,\nresulting in a setMqof translation units the source\nsentencesof which is sufficiently similar to q, ac-\ncording to threshold θSim. The best match for qis\ndetermined according to Equation 2.2\nMq={(s,t)∈M :Sim(q,s)≥θSim}(1)\n(sb,tb) = arg max\n(s,t)∈MqSim(q,s) (2)\nMatches are retrieved from the TM using\ntwo different similarity metrics: Levenshtein dis-\ntance (Levenshtein, 1966) and METEOR (Lavie\n2In case there are several matches with the same score, the\nfirst match encountered in the TM is taken as best match.70\nTM\nSYSTEMPRETRANSLATION\nSPAN SEARCH (PSS)MT\nENGINE\nFUZZY REPAIR\n(FR)no match\nbest match<θFR\nbest match≥θFRbest match<θMT\nθMT≤best match<θTM\nbest match≥θTM(no) pretranslation\nfailure of repair\npunctuation repair / deletion of part\npretranslation with insertion/substitution\nFigure 1: M3TRA workflow\nand Agarwal, 2007). We limit the size of Mqton,\ni.e. we only keep the tuples with the nbest matches\n(plus any additional tuples with matches that have\nthe same score as the nth best match).\nAs shown in Figure 1, we compare Sim(q,sb)\nto thresholds like θFRto decide whether to send q\nto FR or to PSS.\n3.2 MT engine\nWe train a Moses PBSMT system (Koehn et al.,\n2007) from the TM sentence pairs.3We build\na5-gram KenLM language model, set the dis-\ntortion limit to 6, and apply a maximal phrase\nlength of 7.4During decoding, we set the max-\nimum phrase length to 100. This is necessary to\nbe able to pretranslate long word sequences using\nXML markup. The GIZA++ word alignment (us-\ning the grow-diag-final heuristic), the lexical prob-\nabilities and the principle of consistently aligned\nspans (Koehn, 2009) based on which the Moses\nphrase table is constructed are also used in the\nFR and PSS components (with an additional con-\nstraint, as explained later on).\n3.3 Fuzzy repair\nLetMFRbe the set of high-scoring translation\nunits retrieved for q.5Three types of editing op-\nerations are attempted to arrive at the final output\no:substitution ,deletion andinsertion . 
First, how-\never, a number of specific operations aimed at re-\npairing punctuation are performed.\n3Minus the development set used for tuning the parameters.\n4These are ‘default’ settings.\n5To limit potential negative effects of erroneously aligned\ntranslation units,MFRis filtered by imposing a threshold on\nthe percentage of aligned source tokens per translation unit.Punctuation repair: since (simple) punctuation\nis arguably different from other linguistic phenom-\nena, it is tackled by a dedicated subcomponent.\nWe rank the tuples (s,t)∈ M FR, according to\nSim(q,s), and iterate through the ranked list in\norder to verify whether simple punctuation issues\ncan be resolved to produce o:\n•if the only difference between qandsis due\nto casing, or one additional comma, we con-\nsider them as identical sentences, and set oto\nt; hence, we could say this is a type of ‘void’\nrepair;\n•ifqends in punctuation,6and bothsandt\ndo not, we set ototfollowed by the corre-\nsponding punctuation; if, however, talready\ncontains punctuation in final position, we set\notot(another type of ‘void’ repair);\n•ifsandtend in punctuation, and qdoes not,\nwe setototminus the final punctuation.\nWe stop iterating as soon as we produced o. In\ncase of failure, we look at the more general mech-\nanisms of substitution ( sub), deletion (del) and in-\nsertion (ins). Since both delandinscan be con-\nsidered more specific versions of sub(i.e. replace-\nment of a part of sortby the empty string), we\nfocus onsubfirst.\nSubstitution: the basic idea behind the subop-\neration is to translate non-matching tokens of qand\nsin the context of tokens in t.subis attempted\nwhen bothqandscontain one sequence of one\nor more unmatched tokens qj\niandsj/prime\nithat end at\npotentially different positions jandj/prime. We check\nwhethersj/prime\niis consistently aligned to a sequence\n6One of the tokens .,?!:;-71\nFigure 2: Examples of (attempted) substitution\ntl\nk, i.e. whether each token in sj/prime\niis either aligned\nto a token in tl\nkor unaligned, and vice versa.7In\naddition, we impose the condition that the first and\nlast token of sj/prime\nibe aligned; the same goes for the\nfirst and last token of tl\nk. We assume that an align-\nment satisfying this condition, which we will call\naborder-link alignment in the remainder of this ar-\nticle, increases the likelihood of translation equiv-\nalence between sequences.\nThesuboperation is illustrated by the simplified\nexamples in Figure 2. In the first example, both\nqandscontain a one-word sequence that is not\nshared ( rejects andrejected respectively). In both\ncases, this sequence starts at the second position.\nThe word rejected is aligned with the adjacent\nFrench target tokens aandrejet´e, which in turn are\nonly aligned with rejected . This allows for trans-\nlating rejects in the context of Ilandtout. In the\nsecond example, substitution fails since rejected\nis aligned with two Dutch target words, heeft and\nverworpen , which do not form an uninterrupted\nsequence. In the third example, substitution is\nimpossible: sj/prime\niconsists of Commission , which\nis aligned with Kommissionsvorschlag , while the\nGerman word is aligned with both Commission\nandproposal , the latter word not being part of sj/prime\ni.\nTo translate a span of qin the context of tokens\noft, we proceed as follows. 
We block all retained\ntokens from tas pretranslation, by annotating qi−1\n1\nwith the tokens of tk−1\n1using XML markup (unless\ni= 1), and annotating qv\nj+1with the tokens of tw\nl+1,\nunlessjequalsv;vandwstand for the number of\ntokens inqandt. The annotated qis then sent to\nthe MT system, which translates qj\niin the context\noftk−1\n1and/ortw\nl+1(Ilandtoutin Figure 2).\nTo verify multiple potential substitutions, a slid-\ning window is applied by a stepwise decrease of\niand increase of jandj/prime. Eachoresulting from\na successful substitution is scored using the lan-\nguage model of the PBSMT system, in order to\npick the best alternative o. The size of the sliding\n7With the understanding that at least one token in sj/prime\niis\naligned.window is a model parameter. Two additional pa-\nrameters8are put in place to limit the applicability\nofsuboperations: a threshold for the maximum\nlength of the span tl\nkand one for the maximum\npercentage of unaligned tokens within that span.\nDeletion: thedeloperation consists of removing\na sequence from tto yieldo. Ifsis identical to q,\napart from one additional sequence sj\ni(which may\nbe a prefix, infix or suffix of s), and the latter has\na border-link alignment with a target sequence tl\nk,\nthe target sequence can be deleted. Two safeguard\nrules control the modification. If the token tk−1\nis not aligned with a token in s, it is also deleted.\nThe second rule is optional and ensures that tl\nkis\nnot removed if it consists of only one token with\nless than 4 characters;9this leadsoto be equal to\nt, which is another instance of ‘void’ repair.\nThe two safeguard rules are illustrated in Figure\n3. In the leftmost example, the first occurrence of\nthe Dutch word de, which precedes the sequence\nidentified for deletion, is not aligned with any to-\nken ins. It is therefore also deleted. The rightmost\nexample shows that the only difference between q\nandsis the token the, which has less than 4 char-\nacters.tis thus left unchanged.\nFigure 3: Examples of (attempted) deletion\nInsertion: theinsoperation can be performed\nwhenqis identical to s, apart from a sequence qj\ni\n(which may be a prefix, infix or suffix of q). Key\ntoinsis determining where to insert the transla-\ntion ofqj\niint. For this to be possible, all of the\nfollowing conditions need to be satisfied: (a) the\ntokensi−1is aligned to one or more tokens, the\nrightmost of which we call tk, (b)siis aligned to\n8Added after a qualitative analysis of development set output.\n9This heuristic was implemented to deal with articles in par-\nticular, in the absence of part-of-speech information.72\none or more tokens, the leftmost of which we call\ntl, and (c)kandlare adjacent (i.e. l=k+ 1).\nIf we found the insertion position k, we annotate\nqi−1\n1with the tokens in tk\n1, and annotate qv\nj+1with\nthe tokens in tw\nk+1. This is illustrated in Figure 4.\nqcontains an additional sequence compared to s\n(European ), starting at the second position. We\nverify with which German word the first source to-\nken (si−1,the) is aligned, and with which word\nthe second source token ( Parliament ) is aligned.\nAs the aligned German words are adjacent, thecan\nbe annotated with dasandParliament with Parla-\nment .\nFigure 4: Example of insertion\nIfiis 1 (i.e. the non-matching part qj\niis the\nprefix of the sentence), we apply a different proce-\ndure. 
If token s1is aligned with one or more target\ntokens, we annotate the sequence qv\nj+1withtw\nk,k\nbeing the position of the leftmost aligned token. If\njisv(i.e. the non-matching part is the suffix of the\nsentence), and the last token of sis aligned to one\nor more target tokens, we annotate the sequence\nqi−1\n1withtk\n1,kbeing the position of the rightmost\naligned token.\nFor anyqthat is not repaired and for which\nSim(q,sb)≥θTM, we setoto the most frequent\ntb. Otherwise, qis sent to PSS.\n3.4 Pretranslation span search\nPSS consists of annotating (pretranslating) spans\nofqbased on matches in Mq, and subsequently\nconstraining the MT system to respect the trans-\nlations of these spans while producing o. PSS is\napplied in case the following condition is satisfied:\nθMT≤Sim(q,sb)<θTM(see Figure 1). If so, a\nsubsetMpis established according to Equation 3.\nMp={(s,t)∈M q:Sim(q,s)≥θPSS}(3)\nBased on the sentence pairs in Mp, we de-\nfine another setPq, which contains pretranslation\ntuples (s,t,i,j,i/prime,j/prime,k,l). These are tuples for\nwhich all of the following conditions are valid: (a)\nthe sentence pair belongs to Mp, (b)qj\nimatchesthe source span sj/prime\ni/prime10and (c)sj/prime\ni/primehas a border-link\nalignment with the target span tl\nk. A specific pair\nof source and target span may occur in multiple\nsentence pairs (see the frequency check below).\nSome of the tuples in Pqwill be used for pretrans-\nlation, as described below.\nFiltering pretranslation tuples: a tuplep∈Pq\nis filtered out if it satisfies one of the following\nconditions: (a) given all tuples P/prime\nq⊆P qthat in-\nvolve the sentence pair of p, the total length of\nthe source and target spans in P/prime\nqdoes not satisfy\na minimum length, (b) the length of the source\nand/or target span in pdoes not satisfy a mini-\nmum value, (c) the source and/or target span in p\ndo not contain any content word (i.e. noun, adjec-\ntive, verb or adverb), (d) the percentage of words\naligned between the source and target span in pis\ntoo low, or (e) the one-to-many alignment score of\np, defined in Equation 4, is too low. In this equa-\ntion,yxrepresents the number of tokens aligned to\nsx, a token in the source span sj/prime\ni/primeofp.\n1\nj−i+ 1j/summationdisplay\nx=i1\nyx(4)\nCombining pretranslation tuples: after filter-\ning, each tuple p∈Pqis scored according to the\nweighted sum of (a) the length of the target span,\n(b) the frequency of the pair of source and target\nspan, i.e. the number of tuples in Pqin which\nthe pair occurs, and (c) the maximal fuzzy match\nscore for the span pair, i.e. the maximal similarity\nSim(q,s)for all tuples in which the span pair oc-\ncurs. The weights of the three above factors are\nmodel parameters. Subsequently, the tuples are\nranked according to score, and used in the fol-\nlowing iterative procedure. The spans of the first\nranked tuple are used for pretranslation, i.e. the\nspantl\nkis used to annotate the qj\nispan. This tu-\nple is removed from Pq. The system then looks for\nthe first ranked tuple in which the qj\nispan does not\noverlap with the already annotated span of q. This\nprocess is repeated until Pqonly contains tuples\nwith overlapping spans, or until the threshold for\nnumber of annotations has been reached. Figure 5\n10Matchingqtosgiven some similarity function leads to the\nidentification of a number of matching parts. These parts are\ntypically sequences which are identical in qands. 
A match-\ning spanqj\nirefers to such a matching part, or one of its pre-\nfixes, infixes or suffixes. For instance, if two sentences have\na matching part The EC was , matching spans include The EC\nwas,The EC ,ECetc.73\nFigure 5: Example of pretranslation span search\nprovides an example of how two non-overlapping\nspans of a query sentence ( the news spread , and\nto obtain the results . ) are pretranslated by two\nDutch target spans ( het nieuws zich verspreidde ,\nandde resultaten te bekomen . ) originating from\ntwo different translation units. The PBSMT sys-\ntem is constrained to use these target spans in its\nfinal output.\n3.5 Parameter setting and tuning\nMany of M3TRA’s components involve parame-\nters (such as θFR) that can either be manually fixed\nor whose optimal value can be determined on the\nbasis of an automated parameter tuning process.\nInitial tests were run on subsets of the development\nsets using random parameter initializations. Man-\nual spot-checks of system outputs with different\nconfigurations were performed to verify the qual-\nity of the resulting translations (in comparison to\npure MT output). To make the spot checks poten-\ntially more informative, differences in METEOR\nscores (compared to the MT baseline) were used as\na criterion to select sentences with pretranslations\nthat either led to large gains in translation quality\nor that appeared to result in worse translations.\nIn addition, a local hill-climbing algorithm was\nused to help determine the best parameter settings.\nThe methodology followed here involved a step-\nwise narrowing of the search interval per parame-\nter based on a combination of random initializa-\ntions and runs of the hill-climber (with increas-\ningly small step size). BLEU scores (Papineni et\nal., 2002) were used as tuning criterion.\n4 Experimental design\nThis section describes the empirical tests that were\ncarried out. We first describe the dataset and eval-\nuation procedures, before turning to the results.\n4.1 Data\nWe use the TM of the Directorate-General for\nTranslation of the European Commission (Stein-\nberger et al., 2013), for 5 language pairs in 2 di-rections: EN↔NL, FR, DE, HU, PL.11To en-\nsure consistency, we only use the cross-section of\neach of these datasets, resulting in 1.6 million sen-\ntence pairs per language combination. 2000 sen-\ntence pairs are set aside for development, and the\ntest set consists of 3207 sentences.12We tok-\nenized and lowercased all sentences before train-\ning Moses and tuning its parameters.\nTable 1 shows the percentage of q’s categorised\non the basis of Sim(q,sb). For only 5 to 7% of q’s\nno match is found in the TM. 
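As a small, purely illustrative sketch of how the match ranges of Table 1 can be computed (the distribution itself is discussed directly below), the snippet assigns each query to a bucket based on the score of its best TM match. The helper names are ours, difflib's ratio is used only as a convenient stand-in for the normalised token-level Levenshtein similarity used as Sim, the minimal retrieval score of 0.2 merely echoes the fixed value of the similarity threshold reported with the tuning results, and the precise criterion behind the 'None' column is an assumption.

    from difflib import SequenceMatcher   # stand-in for a token-level Levenshtein ratio

    def sim(q_tokens, s_tokens):
        # Similarity in [0, 1]; the paper uses a Levenshtein-based Sim over tokens.
        return SequenceMatcher(None, q_tokens, s_tokens).ratio()

    def match_bucket(query, tm_sources, min_score=0.2):
        """Assign a query to one of the match ranges used in Table 1."""
        q_tok = query.lower().split()
        best = max((sim(q_tok, s.lower().split()) for s in tm_sources), default=0.0)
        if best < min_score:
            return "None"          # treated here as 'no match retrieved'
        if best < 0.70:
            return "<70"
        if best < 0.80:
            return "70-79"
        if best < 0.90:
            return "80-89"
        if best < 1.00:
            return "90-99"
        return "exact"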
For the majority a\nmatch below 70% is retrieved, but for around 28-\n35% a high-scoring match ( >70%) exists.\nNone<70 70-79 80-89 90-99\nEN 5.9% 59.0% 9.4% 13.6% 12.1%\nNL 5.0% 62.5% 8.9% 11.4% 12.3%\nPL 6.7% 64.5% 8.0% 12.1% 8.7%\nDE 6.3% 62.9% 9.6% 12.0% 9.2%\nFR 4.5% 67.2% 9.3% 11.2% 7.8%\nHU 6.6% 64.8% 8.7% 11.1% 8.9%\nTable 1: Percentage of test sentences per match range\n4.2 Baseline systems\nWe use three baselines to compare M3TRA with:\n(a) ‘pure’ TM matching, which involves selecting\nthe (most frequent) tbforqaso,13(b) the ‘pure’\nMoses PBSMT system, and (c) a standard neural\ntranslation model.\nFor the neural MT model, we use Open-\nNMT (Klein et al., 2017) with default settings, i.e.\na seq2seq RNN model with global attention con-\nsisting of 50000 words on the source as well as the\ntarget side, word embeddings of 500 dimensions,\na hidden layer of 500 LSTM nodes, and learning\nthrough stochastic gradient descent with a learning\nrate of 1, and we ran the model for 20 epochs. We\nchose the best performing model, selected using a\ndevelopment set (different from the validation set)\n11Note that the original source language may differ and that\nnot all EC documents are translated directly.\n12We were strict in filtering the test sets: any qfor which a\n100% match existed in any source language was left out for\nall language pairs.\n13If no match is found in the TM, no translation is provided.74\nEN-NL\nEN-PL\nEN-DE\nEN-FR\nEN-HU\nNL-EN\nPL-EN\nDE-EN\nFR-EN\nHU-EN\nθTM 0.79 0.87 0.79 0.83 0.70 0.79 0.93 0.71 0.72 0.70\nθFR 0.77 0.63 0.55 0.54 0.39 0.52 0.57 0.53 0.49 0.40\nMin % aligned tok FR 0.83 0.85 0.63 0.63 0.65 0.70 0.64 0.66 0.66 0.50\nWindow shift L 2 2 2 2 1 1 2 1 3 4\nWindow shift R 0 3 3 2 1 1 0 2 3 1\nMax % non-aligned tok FR 0.50 0.42 0.74 0.24 0.72 0.53 0.75 0.48 0.44 0.67\nθPSS 0.48 0.45 0.43 0.73 0.45 0.50 0.69 0.52 0.24 0.35\nMin span length PSS 4 6 4 12 4 8 9 5 9 3\nMin % aligned tok PSS 74 67 67 56 75 53 58 76 55 62\nMin alignment score PSS 0.83 0.64 0.62 0.79 0.64 0.64 0.59 0.55 0.78 0.71\nTable 2: Parameter settings after tuning\nwhich was evaluated on BLEU, TER (Snover et al.,\n2006) and METEOR. The model that scored best\non the majority of the metrics was chosen. When\nall three metrics differ, we chose the best scoring\nmodel according to BLEU.\n4.3 Evaluation\nBLEU scores are used as main evaluation crite-\nrion.14In addition, we report TER and METEOR\nscores to verify whether related yet different met-\nrics point to similar trends. We only use one refer-\nence translation. To verify whether differences in\nBLEU scores between the baselines and M3TRA\nare statistically significant, we use the bootstrap re-\nsampling method described by Koehn (2004).\n5 Results\n5.1 Tuning\nTable 2 provides an overview of the parameter set-\ntings that were found to lead to the highest BLEU\nscores on the development sets. We retained ten\nfree parameters, the others were either fixed at cer-\ntain values or disabled.15The results for METEOR\nas a fuzzy metric were found to be similar to the re-\nsults using Levenshtein. For the current study, we\ndecided to continue with Levenshtein as metric.\nLooking more closely at the retained parameter\nsettings, some observations can be made. First,\nθTMvaries between 0.70 and 0.93. Second, the\nvalue ofθFRlies between 0.39 and 0.77. Third, for\nany language pair at least half of the source tokens\nin a translation unit need to be aligned to perform\nFR. 
Fourth, for all language pairs, working with\na sliding window for substitution was beneficial.\nFifth, between 3 and 12 tokens per span are needed\n14We acknowledge that using BLEU is not ideal, especially\nwhen comparing SMT and NMT (Shterionov et al., 2017).\n15θSim = 0.2;n-best matches = 15; PSS weights: length = 0;\nfrequency = 0.83; match score = 0.17.to provide beneficial pretranslations. Sixth, impos-\ning restrictions on alignments proved to be positive\nfor translation quality. Finally, the imposed thresh-\nold for minimum percentage of aligned words at\nsource side varied between 50 and 83%.\n5.2 Tests\nTable 3 provides an overview of the evalua-\ntion scores for the ten language combinations of\nM3TRA compared to three baselines: pure TM,\npure SMT, and NMT. For 9 of the 10 language\ncombinations, M3TRA scores significantly better\nthan the best baseline (SMT) in terms of BLEU.\nThe increase in BLEU varies between 0.2 (for EN-\nPL; non-significant difference) and 5.47 points (for\nEN-HU). METEOR scores actually decrease for\nFR-EN, and are practically unchanged for EN-PL\n(+0.06). For EN-HU they increase with 3 points.\nTER scores consistently decrease for all language\npairs. The decrease lies between 0.25 points (for\nEN-PL) and 5.33 points (EN-HU). Compared to\nthe baseline SMT system, M3TRA affects between\n9 and 39% of the sentences in the test set.\nLooking at BLEU (see also Figure 6), baseline\nSMT also consistently outperforms baseline NMT,\nwith the exception of EN-HU. With TER as evalu-\nation criterion, NMT scores better for EN-HU and\nFR-EN. In terms of METEOR, SMT consistently\noutperforms baseline NMT. The quality of pure\nTM is estimated to be the lowest for all language\npairs, which is not surprising, since e.g. a qfor\nwhichMqis empty is left untranslated.\nFigure 7 presents the performance of the dif-\nferent systems for different subsets defined on the\nbasis ofSim(q,sb)for one language pair (DE-\nEN).16WithSim(q,sb)below 70%, M3TRA does\nnot lead to better scores compared to SMT. Pure\n16For reasons of space we restrict ourselves to one language\npair. For the other languages, similar trends are observed.75\nFigure 6: Overview BLEU scores\nFigure 7: BLEU scores per match range (DE-EN)\nTM starts scoring better than SMT in the range\n80-89%. Thanks to FR, M3TRA also outperforms\npure TM in the two highest match ranges.\n6 Discussion\nThe main novelty of M3TRA is in its adaptable\nparameters, threshold values and safeguards, as\nwell as in its combination of various features that\nare present in a number of approaches described\nin Section 2. Most notably, the use of XML\nmarkup to add pretranslation spans to input sen-\ntences is also used by He (2011), Koehn and Senel-\nlart (2010) and Ma et al. (2011). In M3TRA,\nMoses is constrained to include these pretrans-\nlated spans in the final output (the so-called ex-\nclusive mode is used). The fuzzy repair feature is\nclosely related to the work of Ortega et al. (2016).\nAlso the option to simply use TM target matches\nabove a certain match score threshold has been\nimplemented before (Simard and Isabelle, 2009).\nMoreover, by making use of the information ob-\ntained during the alignment process, M3TRA canbe adapted easily to provide translators with in-\nformation on the origin of parts of the proposed\ntranslations, possibly indicating which sentences\nshould most likely be post-edited (Espla-Gomis et\nal., 2015). 
Finally, the combination of information from different fuzzy matches is also present in previous research (Wang et al., 2013; Li et al., 2016).

The test results show that integrating TM with MT can lead to better MT output, provided that sufficient high-scoring matches are retrieved from the TM. We argue that M3TRA is especially beneficial in a context with enough repetition and where the focus is (at least to a certain extent) on consistency and formulaic language use. Looking at the results for the different language pairs, the potential for improvement is highest for EN-HU and HU-EN,[17] which is most likely due to the (morphological) structure of the Hungarian language and its associated problems for (S)MT. The significant improvements for almost all language combinations indicate that M3TRA potentially works with different language families (Germanic, Romance, Finno-Ugric). The smallest improvement was found for the only Slavic language we tested (Polish).

[17] We realise one has to be careful when comparing BLEU scores across (target) languages.

              NMT     TM      SMT     TM-MT     Altered
EN-NL  BLEU   49.02   40.66   53.91   55.72**   25.5%
       TER    38.16   56.57   36.90   34.96
       MET.   67.67   52.37   71.04   72.25
EN-PL  BLEU   46.64   36.31   52.18   52.38     17.87%
       TER    39.57   60.85   37.79   37.54
       MET.   35.45   26.39   38.67   38.73
EN-DE  BLEU   42.57   38.37   47.32   49.59**   30.50%
       TER    44.81   59.13   44.43   41.95
       MET.   55.56   45.05   60.11   61.71
EN-FR  BLEU   52.76   41.00   59.08   59.65*    19.15%
       TER    35.79   57.63   32.96   32.22
       MET.   67.31   50.16   72.97   73.45
EN-HU  BLEU   37.75   34.33   35.71   41.18**   39.16%
       TER    48.01   61.72   55.31   49.98
       MET.   55.23   45.66   55.67   58.67
NL-EN  BLEU   52.55   43.17   59.00   60.63**   20.95%
       TER    35.11   55.13   32.32   30.56
       MET.   41.65   30.28   44.95   45.51
PL-EN  BLEU   52.21   42.49   61.95   62.57**   9.17%
       TER    35.28   55.54   29.42   28.86
       MET.   42.17   29.94   46.60   46.85
DE-EN  BLEU   47.59   42.50   55.44   57.17**   25.69%
       TER    39.90   55.73   36.49   34.67
       MET.   38.70   30.17   43.05   43.46
FR-EN  BLEU   55.42   43.11   56.39   57.12**   23.57%
       TER    32.42   55.14   35.33   34.23
       MET.   44.02   30.37   45.81   45.70
HU-EN  BLEU   45.09   41.51   48.62   52.10**   35.11%
       TER    43.35   56.13   44.25   40.37
       MET.   37.51   29.60   40.10   40.93
(* p<0.01; ** p<0.001)
Table 3: Results (significance tests for SMT vs TM-MT). Altered: % of sentences affected by TM-MT vs SMT.

With regard to the relatively low scores obtained by our NMT baseline, a number of comments are in order. First, we only tested certain standard/recommended settings in OpenNMT. It is likely that higher scores can be reached by tuning other NMT hyperparameters to better fit the dataset used. Second, SMT uses BLEU scores as tuning criterion, whereas in NMT perplexity is used to train the system. Third, BLEU evaluation focuses on precision (arguably the strength of SMT), and less on fluency (NMT's forte).[18] Finally, it is possible that SMT is more suited than NMT for contexts in which there is a considerable amount of repetition, and where adequacy and precision are crucial.

This study is limited in a number of ways: (a) the coverage of certain M3TRA components could still be improved, such as fuzzy repair, which could be extended to cover multiple edits per TM match or to also target non-sequential tokens, (b) only one dataset was used for testing, (c) only automatic metrics were used for evaluation, (d) BLEU scores were used for both training and testing, (e) no previously developed TM-MT integration method was used as baseline, and (f) the time spent on developing the NMT baseline was restricted.

(A minimal sketch of the paired bootstrap test behind the significance marks in Table 3 follows below.)
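The significance marks in Table 3 stem from the paired bootstrap resampling test of Koehn (2004) referred to in Section 4.3. The sketch below is a simplified illustration of that procedure, with names of our own choosing: for brevity it compares sums of per-sentence scores, whereas the test as described recomputes the corpus-level BLEU score on every resampled test set.

    import random

    def paired_bootstrap(scores_a, scores_b, samples=1000, seed=7):
        """Fraction of resampled test sets on which system B outscores system A.

        scores_a, scores_b : per-sentence quality scores of two systems on the same test set.
        1 minus the returned fraction roughly approximates the p-value of 'B is not better than A'.
        """
        rng = random.Random(seed)
        n, wins = len(scores_a), 0
        for _ in range(samples):
            idx = [rng.randrange(n) for _ in range(n)]   # resample sentences with replacement
            if sum(scores_b[k] for k in idx) > sum(scores_a[k] for k in idx):
                wins += 1
        return wins / samples

With 1000 resamples, system B coming out ahead in at least 99% or 99.9% of the samples roughly corresponds to the p < 0.01 and p < 0.001 marks used in Table 3.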
These\nlimitations can be seen as suggestions for future re-\nsearch. For example, it would be interesting to see\nhow professional translators appreciate M3TRA’s\n18It can be argued, however, that BLEU scores are a good\nevaluation metric in a context in which precision is important.output and indications of the origin of proposed\ntranslations, and what effect this has on transla-\ntion efficiency. Some preliminary tests have been\ncarried out (Coppers et al., 2018), but an in-depth\nstudy is still lacking. Such a study would also re-\nquire us to take issues such as the positioning of\nformatting (and other types of tags) into consider-\nation, which was outside the scope of the current\npaper. The same holds for a more qualitative eval-\nuation of M3TRA’s output (e.g. paying attention\nto certain morphological features).\n7 Conclusions\nWe designed and tested a system for the integration\nof MT and TM, M3TRA, with a view to increasing\nthe quality of MT output. M3TRA contains two\nmain components, fuzzy repair and span pretrans-\nlation, which both make use of a TM with fuzzy\nmatching techniques and an SMT system with re-\nlated alignment information. The system uses the\noption to add XML markup to sentences sent to a\nMoses SMT system. Tests on ten language combi-\nnations using the DGT-TM dataset showed that it is\nclear that this approach has potential. Significantly\nhigher BLEU scores for 9 of the 10 language com-\nbinations were observed, and METEOR and TER\nscores showed comparable patterns. In a next step,\nM3TRA has to be evaluated in an actual translation\nenvironment involving professional translators.\nAcknowledgements\nThis research was done in the context of the\nSCATE project, funded by the Flemish Agency\nfor Innovation and Entrepreneurship (IWT project\n13007).\nReferences\nBic ¸ici, E. and M. Dymetman. 2008. Dynamic transla-\ntion memory: using statistical machine translation to\nimprove translation memory fuzzy matches. Inter-\nnational Conference on Intelligent Text Processing\nand Computational Linguistics , 454–465.\nBisazza, A. and M. Federico. 2016. A survey of word\nreordering in statistical machine translation: Com-\nputational models and language phenomena. Com-\nputational Linguistics, 42 (2), 163-205.\nCoppers, S., J. Van den Bergh, K. Luyten, I. van der\nLek-Ciudin, T. Vanallemeersch and V . Vandeghin-\nste. 2018. Intellingo: An Intelligible Translation\nEnvironment. ACM conference on Human Factors\nin Computing Systems , 1–13.77\nEspla-Gomis, M., F. S ´anchez-Mart ´ınez and M.L. For-\ncada. 2015. Using machine translation to provide\ntarget-language edit hints in computer aided transla-\ntion based on translation memories. Journal of Arti-\nficial Intelligence Research, 53 (1), 169–222.\nFeng, Y ., S. Zhang, A. Zhang, D. Wang and A. Abel.\n2017. Memory-augmented Neural Machine Trans-\nlation. arXiv preprint arXiv:1708.02005 .\nGu, J., Y . Wang, K. Cho and V .O. Li. 2017. Search En-\ngine Guided Non-Parametric Neural Machine Trans-\nlation. arXiv preprint arXiv:1705.07267 .\nHe, Y . 2011. The Integration of Machine Transla-\ntion and Translation Memory . Doctoral dissertation.\nDublin City University.\nHewavitharana, S., S. V ogel and A. Waibel. 2005.\nAugmenting a statistical translation system with a\ntranslation memory. 10th Annual Conference of\nthe European Association for Machine Translation ,\n126–132.\nHokamp, C. and Q. Liu. 2017. Lexically Constrained\nDecoding for Sequence Generation Using Grid\nBeam Search. 
arXiv preprint arXiv:1704.07138 .\nKlein, G., Y . Kim, Y . Deng, J. Senellart and A.M.\nRush. 2017. OpenNMT: Open-source toolkit\nfor neural machine translation. arXiv preprint\narXiv:1701.02810 .\nKoehn, P. 2004. Statistical significance tests for ma-\nchine translation evaluation. Proceedings of EMNLP\n2004 , 388-395.\nKoehn, P. 2009. Statistical machine translation . Cam-\nbridge: Cambridge University Press.\nKoehn, P., H. Hoang, A. Birch, C. Callison-Burch, M.\nFederico, N. Bertoldi, ... and C. Dyer. 2007. Moses:\nOpen source toolkit for statistical machine transla-\ntion. 45th annual meeting of the Association of Com-\nputational Linguistics , 177–180.\nKoehn, P. and J. Senellart. 2010. Convergence of trans-\nlation memory and statistical machine translation.\n2nd Joint EM+/CNGL Workshop Bringing MT to the\nUser: Research on Integrating MT in the Translation\nIndustry , 21–31.\nLavie, A. and A. Agarwal. 2002. METEOR: An au-\ntomatic metric for MT evaluation with high levels of\ncorrelation with human judgments. 2nd Workshop\non Statistical Machine Translation , 228–231.\nLevenshtein, V .I. 1966. Binary codes capable of cor-\nrecting deletions, insertions, and reversals. Soviet\nPhysics Doklady, 10 (8), 707–710.\nLi, L., C.P. Escart ´ın, A. Way and Q. Liu. 2017. Com-\nbining translation memories and statistical machine\ntranslation using sparse features. Machine Transla-\ntion, 30 (3), 183–202.Li, L., A. Way and Q. Liu. 2016. Phrase-level com-\nbination of SMT and TM using constrained word\nlattice. 54th Annual Meeting of the Association for\nComputational Linguistics , 275–280.\nMa, Y ., Y . He, A. Way and J. van Genabith. 2011. Con-\nsistent translation using discriminative learning - A\ntranslation memory-inspired approach. 49th Annual\nMeeting of the Association for Computational Lin-\nguistics , 1239-1248.\nOrtega, J.E., F. S ´anchez-Mart ´ınez, and M.L. Forcada.\n2016. Fuzzy-match repair using black-box machine\ntranslation systems: what can be expected? 12th\nBiennial Conference of the Association for Machine\nTranslation in the Americas , V ol. 1, 27–39.\nPapineni, K., S. Roukos, T. Ward and W.J. Zhu. 2002.\nBLEU: a method for automatic evaluation of ma-\nchine translation. 40th Annual Meeting of the As-\nsociation for Computational Linguistics , 311–318.\nShterionov, D., P. Nagle, L. Casanellas, R. Superbo and\nT. ODowd. 2017. Empirical evaluation of NMT\nand PBSMT quality for large-scale translation pro-\nduction. 20th Annual Conference of the European\nAssociation for Machine Translation , 74–79.\nSimard, M. and P. Isabelle. 2009. Phrase-based\nmachine translation in a computer-assisted transla-\ntion environment. Machine Translation Summit XII ,\n120–127.\nSimard, M. and P. Langlais. 2001. Sub-sentential ex-\nploitation of translation memories. Machine Trans-\nlation Summit VIII , 335–339.\nSnover, M., B. Dorr, R. Schwartz, L. Micciulla and J.\nMakhoul. 2006. A study of translation edit rate with\ntargeted human annotation. Proceedings of the As-\nsociation for Machine Translation in the Americas ,\nV ol. 200, No. 6.\nSteinberger, R., A. Eisele, S. Klocek, S. Pilos and\nP. Schl ¨uter. 2013. DGT-TM: A freely available\ntranslation memory in 22 languages. arXiv preprint\narXiv:1309.5226 .\nSudoh, K., K. Duh, H. Tsukada, T. Hirao and M. Na-\ngata. 2010. Divide and translate: improving long\ndistance reordering in statistical machine translation.\nJoint Fifth Workshop on Statistical Machine Transla-\ntion and Metrics , 418–427.\nWang, K., C. Zong and K.Y . Su. 2013. 
Integrating translation memory into phrase-based machine translation during decoding. 51st Annual Meeting of the Association for Computational Linguistics, 11–21.
Zhechev, V. and J. Van Genabith. 2010. Seeding statistical machine translation with translation memory output through tree-based structural alignment. 4th Workshop on Syntax and Structure in Statistical Translation, 43–51.", "main_paper_content": null }
{ "decision": "Unknown", "reviews": [] }
0
0
[]
[]
{ "id": "LbdcOEs5mKo", "year": null, "venue": "EAMT 2015", "pdf_link": "https://aclanthology.org/W15-4920.pdf", "forum_link": "https://openreview.net/forum?id=LbdcOEs5mKo", "arxiv_id": null, "doi": null }
{ "title": "Assessing linguistically aware fuzzy matching in translation memories", "authors": [ "Tom Vanallemeersch", "Vincent Vandeghinste" ], "abstract": null, "keywords": [], "raw_extracted_content": "Assessing linguistically aware fuzzy matching in translation memories\nTom Vanallemeersch, Vincent Vandeghinste\nCentre for Computational Linguistics, University of Leuven\nBlijde Inkomststraat 13\nB-3000 Leuven, Belgium\n{tom,vincent}@ccl.kuleuven.be\nAbstract\nThe concept of fuzzy matching in trans-\nlation memories can take place using lin-\nguistically aware or unaware methods, or a\ncombination of both.\nWe designed aflexible and time-efficient\nframework which applies and combines\nlinguistically unaware or aware metrics in\nthe source and target language.\nWe measure the correlation of fuzzy\nmatching metric scores with the evaluation\nscore of the suggested translation tofind\nout how well the usefulness of a sugges-\ntion can be predicted, and we measure the\ndifference in recall between fuzzy match-\ning metrics by looking at the improve-\nments in mean TER as the match score de-\ncreases. We found that combinations of\nfuzzy matching metrics outperform single\nmetrics and that the best-scoring combina-\ntion is a non-linear combination of the dif-\nferent metrics we have tested.\n1 Introduction\nComputer-aided translation (CAT) has become an\nessential aspect of translators’ working environ-\nments. CAT tools speed up translation work, create\nmore consistent translations, and reduce repetitive-\nness of the translation work. One of the core com-\nponents of a CAT tool is the translation memory\nsystem (TMS). It contains a database of already\ntranslated fragments, called the translation mem-\nory (TM), which consists of translation units: seg-\nments of a text together with their translation.\nc�2015 The authors. This article is licensed under a Creative\nCommons 3.0 licence, no derivative works, attribution, CC-\nBY-ND.Given a sentence to be translated, the traditional\nTMS looks for source language sentences in a TM\nwhich are identical (exact matches) or highly sim-\nilar (fuzzy matches), and, upon success, suggests\nthe translation of the matching sentence to the\ntranslator (Sikes, 2007).\nFormally, a TM consists of a set of source sen-\ntencesS 1, ..., S nand target sentencesT 1, ..., T n,\nwhere(S i, Ti)form a translation unit. Let us call\nthe sentence that we want to translateQ(the query\nsentence).\nThe TMS checks whetherQalready occurs in\nthe TM, i.e. whether∃S i∈S 1, ..., S n:Q=S i.\nIf this is the case,Qneeds no new translation and\nthe translationT ican be retrieved and used as the\ntranslation ofQ. This is an exact match. If the\nTMS cannotfind a perfect match, fuzzy matching\nis applied using some functionSim, which calcu-\nlates the best matchS bofQin the TM, i.e. the\nmost similar match, as in (1):\nSb= max\nSiSim(Q, S i)(1)\nIfSim(Q, S b)>=θ(a predefined minimal\nthreshold, which is typically 0.7 in CAT tools)1),\nTb, the translation ofS b, is retrieved from the TM\nand provided as a suggestion for translatingQ. If\nthe threshold is not reached, the TMS assumes that\nTbis of no value for the translator and does not\nprovide it as a translation suggestion.\nSimilarity calculation can be done in many\nways. In current TMS systems, fuzzy matching\ntechniques mainly consider sentences as simple se-\nquences of words and contain very limited linguis-\ntic knowledge. 
The latter is for instance present in\n1In CAT tool interfaces, this is usually expressed as a percent-\nage, 70%, which may be modified by the user. Developers\nmay determine the threshold empirically, but their use of a\n70% threshold may also be a matter of convention.153\nthe form of stop word lists. Few tools use more\nelaborate linguistic knowledge.2\n2 Related work\nThere is a large variety of methods that can be\nused for comparing sentences to each other. They\nare designed for comparing any pair of sequences\nor trees (not necessarily sentences and their parse\ntrees), for fuzzy matching in a TM, or for com-\nparing machine translation (MT) output to a ref-\nerence translation. As pointed out by Simard and\nFujita (2012), the third type, MT automatic eval-\nuation metrics, can also be used in the context\nof TMs, both as fuzzy matching metric and as\nmetric for comparing the translation of a fuzzy\nmatch with the desired translation. Some match-\ning methods specifically support the integration of\nfuzzy matches within an MT system (example-\nbased or statistical MT); see for instance Aramaki\net al. (2005), Smith and Clark (2009), Zhechev and\nvan Genabith (2010), Ma et al. (2011).\nSome matching methods are linguistically un-\naware. Levenshtein distance (Levenshtein, 1966),\nwhich calculates the effort needed to convert one\nsequence into another using the operations inser-\ntion, deletion and substitution, is the most com-\nmonly used fuzzy matching method (Bloodgood\nand Strauss, 2014). Tree edit distance (Klein,\n1998) applies this principle to trees; another tree\ncomparison method is tree alignment (Jiang et al.,\n1995).3\nTo allow using string-based matching methods\non trees, there are several ways of converting trees\ninto strings without information loss, as described\nin Li et al. (2008), who applies a method de-\nsigned by Pr ¨ufer (1918) and based on post-order\ntree traversal.\nExamples of matching methods specifically de-\nsigned for fuzzy matching are percent match and\nngram precision (Bloodgood and Strauss, 2014),\nwhich act on unigrams and longer ngrams. Bald-\nwin (2010) compares bag-of-words fuzzy match-\ning metrics with order-sensitive metrics, and word-\nbased with character-based metrics. Examples of\nwell-known MT evaluation metrics are BLEU (Pa-\n2One example of such a tool isSimilis(http://www.similis.\norg), which determines constituents in sentences and allows\nto retrieve(S i, Ti)whenS ishares constituents withQ.\n3We implemented the Tree Edit Distance algorithm of Klein\nfrom its description in Bille (2005), as well as the Tree Align-\nment Distance algorithm. However, both were too slow to be\nuseful for parse trees unless severe optimization takes place.pineni et al., 2002) and TER, i.e. Translation Error\nRate4(Snover et al., 2006).\nLinguistically aware matching methods make\nuse of several layers of information. The ”subtree\nmetric” of Liu and Gildea (2005) compares sub-\ntrees of phrase structure trees. We devised a sim-\nilar method,shared partial subtree matching, de-\nscribed in Section 3.1.2. Matching can also involve\ndependency structures, as in the approach of Smith\nand Clark (2009), head word chains (Liu and\nGildea, 2005), semantic roles, as in the HMEANT\nmetric (Lo and Wu, 2011), and semantically simi-\nlar words or paraphrases, as in the MT evaluation\nmetric Meteor (Denkowski and Lavie, 2014). 
The\nlatter aligns MT output to one or more reference\ntranslations, not only by comparing word forms,\nbut also through shallow linguistic knowledge, i.e.\nby calculating the stem of words (in some cases\nusing language-specific rules), and by using lists\nwith function words, synonyms and paraphrases.\nSome MT evaluation metrics, such as\nVERTa (Comelles et al., 2014) and LAY-\nERED (Gautam and Bhattacharyya, 2014), and\nsome fuzzy matching methods, like the one\nof Gupta et al. (2014), are based on multiple\nlinguistic layers. The layers are assigned weights\nor combined using a support vector machine.\nDifferent types of metrics can be combined in\norder to join their strengths. For instance, the\nAsiya toolkit (Gim ´enez and M ´arquez, 2010) con-\ntains a large number of matching metrics of dif-\nferent origins and applies them for MT evaluation.\nAn optimal metric set is determined by progres-\nsively adding metrics to the set if that increases the\nquality of the translation.\n3 Experimental setup\n3.1 Independent variables\nIn Sections 3.1.1, 3.1.2, and 3.1.3, we describe the\nindependent variables of our experiment.\n3.1.1 Linguistically unaware metrics\nLevenshtein (baseline)Given the Levenshtein\ndistanceΔ LEV(S, T i), we define Levenshtein\nscore (i.e. similarity) as in (2):\nSim LEV(Q, S i) = 1−ΔLEV(Q, S i)\nmax(|Q|,|S i|)(2)\n4While the developers of TER call itTranslation Edit Rate,\nthe nameTranslation Error Rateis often used, through the\ninfluence of the metric nameWord Error Rate, which is used\nin automatic speech recognition.154\nLevenshtein distance, which is based on three\noperation types (insertion, deletion and substitu-\ntion), and its variants, assign a specific cost to each\ntype of operation. Typically, each type has a cost\nof 1. Certain costs may be changed in order to\nobtain a specific behaviour. For instance, the cost\nof a substitution may depend on the similarity of\nwords.\nTranslation Error RateGiven a sentenceQ\noutput by an MT system and a reference trans-\nlationR, TER keeps on applying shifts toQas\nlong asΔ LEV(Q, R)keeps decreasing.5The TER\ndistanceΔ TER(Q, R), which equalsΔ LEV(Q, R)\nplus the cost of the shifts, is normalized as in (3):\nScore TER(Q, R) =ΔTER(Q, R)\n|R|(3)\nWe convertScore TERinto a similarity score\nbetween 0 and 1 as in (4). This formula assumes a\nvery high upper bound forScore TER.6\nSim TER(Q, R) = 1−log(1 +Score TER(Q, R))\n3(4)\nPercent matchcalculates the percent of uni-\ngrams inQthat are found inS i, as in (5):7\nSim PM(Q, S i) =|Q1grams ∩Si,1grams |\n|Q1grams |(5)\nNgram precisioncomparesngrams, i.e. subse-\nquences of one or more elements, of length 1 up\ntillN, as in (6), where the precision forngrams of\nlengthnis calculated as in (7).\nSim NGP(Q, S i) =N�\nn=11\nNpn (6)\npn=|Qngrams ∩Si,ngrams |\nZ∗|Q ngrams |+ (1−Z)∗|S i,ngrams |\n(7)\n5An implementation of TER can be found here: http://www.\ncs.umd.edu/˜snover/tercom. We used version 0.7.25 for our\nexperiment.\n6Setting the denominator to 3 ensures thatSim TERis a non-\nnegative number unlessScore TERexceeds the upper bound\nof 19. We chose this arbitrary bound in order to have an inte-\nger as denominator in the formula.\n7This metric is similar to the metric PER,position-\nindependent word error rate(Tillmann et al., 1997). 
The dif-\nference between both metrics lies in the fact that PER takes\naccount of multiple occurrences of a token in a sentence, does\nnot calculate a normalized value between 0 and 1, and does\nnot ignore words which are present inS ibut not inQ.Qngrams is the set ofngrams inQ.S i,ngrams\nis the set ofngrams inS i, andZis a parameter to\ncontrol normalization. SettingZto a high value\nprefers longer translations.8\nBloodgood and Strauss propose weighted vari-\nants for theSim PMandSim NGPmetrics, us-\ning IDF weights, which reflect the relevance of the\nmatching words. We will refer to one of these vari-\nants later on, calling itSim PMIDF .\n3.1.2 Linguistically aware metrics\nAdaptations of linguistically unaware metrics\nWe investigated Levenshtein, percent match, and\nTER not only on sequences of word forms, but\nalso on sequences of lemmas. We will refer\nto these lemma-based metrics asSim LEV LEM ,\nSim PMLEM andSim TERLEM .\nShared partial subtree matchingWe devised a\nmethod which aims specifically at comparing two\nparse trees. In order to perform this comparison in\nan efficient way, we apply the following steps: (1)\ncheck whether pairs of subtrees in the two parses\nshare a partial subtree; (2) determine the scores of\nthe shared partial subtrees, based on lexical and\nnon-lexical similarity of the nodes, on the rele-\nvance of the words, and on the number of nodes\nin the shared partial subtree; (3) perform a greedy\nsearch for the best combination of shared partial\nsubtrees.\nBased on the scores of the partial subtrees in\nthefinal combination, we determine the shared\npartial subtree similarity, as in (8). In this equa-\ntion,Score SPS(Q, S i)stands for the sum of the\nscores of the partial subtrees in the combination\nandMaxScore SPS(Q, S i)stands for the score we\nobtain ifQandS iare equal.\nSim SPS(Q, S i) =Score SPS(Q, S i)\nMaxScore SPS(Q, S i)(8)\nLevenshtein for Pr ¨ufer sequencesExtracting\ninformation from a tree and gathering it into a\nsequence allows us to apply string-based meth-\nods, which are less time-costly than tree-based me-\nthods, and which come in a great variety.\nWhen comparing the structures in two Pr ¨ufer se-\nquences, we may use either a cost of 0 (identity of\nstructures) or 1. However, some structures which\n8For our experiment, we set N to 4 and Z to 0.5. Setting its\nvalue experimentally, however, would be more appropriate.155\nare not identical may have some degree of similar-\nity (for instance, a terminal node with equal part-\nof-speech but different lemmas). Therefore, we as-\nsign costs between 0 and 1 when calculating the\nLevenshtein distance. We refer to Levenshtein cal-\nculation on Pr ¨ufer sequences asSim LEV PRFC .\nNgram precision for head word chainsHead\nword chains can be considered as ngrams. There-\nfore, we apply a variant of ngram precision to them\nwhich we callSim NGPHWC .\nMeteorFor brevity’s sake we do not provide the\nformulas on whichSim METEOR is based. We use\nthe standard settings including shallow linguistic\nknowledge and paraphrases.9\n3.1.3 Combinations of metrics\nCould a combination of matching metrics per-\nform better than the metrics on their own? We\nchecked this by creating regression trees.10The\ntraining examples provided for building the tree\nare the matches of sentences to translate, the fea-\ntures (independent variables) are matching met-\nrics, and their values are the matching score. 
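Before turning to how these metric scores are combined, the sketch below spells out several of the definitions above in plain Python: the normalised Levenshtein score of Equation (2), the TER-to-similarity mapping of Equation (4), percent match as in Equation (5), and ngram precision as in Equations (6)-(7) with the reported settings N = 4 and Z = 0.5. The function names are ours, the snippet expects pre-tokenised input, and it is meant only to clarify the definitions, not to reproduce the implementation used in the experiments.

    import math

    def levenshtein(a, b):
        # Standard dynamic-programming edit distance over token sequences.
        prev = list(range(len(b) + 1))
        for i, x in enumerate(a, 1):
            cur = [i]
            for j, y in enumerate(b, 1):
                cur.append(min(prev[j] + 1,                 # deletion
                               cur[j - 1] + 1,              # insertion
                               prev[j - 1] + (x != y)))     # substitution (cost 0 if equal)
            prev = cur
        return prev[-1]

    def sim_lev(q_tokens, s_tokens):
        # Equation (2): 1 minus the edit distance normalised by the longer sentence.
        m = max(len(q_tokens), len(s_tokens))
        return 1.0 - levenshtein(q_tokens, s_tokens) / m if m else 1.0

    def sim_ter(score_ter):
        # Equation (4): map a TER score (Equation 3: edit operations / |R|) onto [0, 1].
        # The denominator 3 assumes Score_TER stays under the bound of 19 (footnote 6).
        return 1.0 - math.log(1.0 + score_ter) / 3.0

    def sim_pm(q_tokens, s_tokens):
        # Equation (5): fraction of unigram types of Q that also occur in S_i.
        q_set = set(q_tokens)
        return len(q_set & set(s_tokens)) / len(q_set) if q_set else 0.0

    def ngrams(tokens, n):
        return {tuple(tokens[k:k + n]) for k in range(len(tokens) - n + 1)}

    def sim_ngp(q_tokens, s_tokens, N=4, Z=0.5):
        # Equations (6)-(7): average n-gram precision for n = 1..N with N = 4 and Z = 0.5.
        total = 0.0
        for n in range(1, N + 1):
            q_n, s_n = ngrams(q_tokens, n), ngrams(s_tokens, n)
            denom = Z * len(q_n) + (1 - Z) * len(s_n)
            total += len(q_n & s_n) / denom if denom else 0.0
        return total / N

    print(round(sim_lev("the cat sat".split(), "the cat sits".split()), 3))  # -> 0.667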
The\nregression trees model decisions for predicting the\nevaluation score of the translation of the match (the\ndependent variable) in a non-linear way. We con-\nsider the predicted evaluation score as a new fuzzy\nmatch score.\n3.2 Dependent variable\nThe dependent variable of our experiment is the\nevaluation score of the translation suggestion. We\nuseSim TERas evaluation metric. It reflects the\neffort required to change a translation suggestion\ninto the desired translation. It should be noted that\nthe usefulness of a translation suggestion should\nultimately be determined by a translator working\nwith a CAT tool. However, human evaluation is\ntime-consuming. We therefore use an automatic\nevaluation metric as a proxy for human evaluation,\nsimilarly to the modus operandi in the develop-\nment of MT systems.\nIn order to assess the usefulness of an indvidual\nor combined fuzzy matching metric, we apply a\nleave-one-out test to a set of parallel sentences and\ninvestigate how well each metric correlates with\n9We use version 1.5 of Meteor. See http://www.cs.cmu.edu/\n˜alavie/METEOR.\n10We used complexity parameter 0.001, retained 500 competi-\ntor splits in the output and applied 100 cross-validations.the evaluation score. For eachQ i∈Q 1, . . . , Q n,\nwe select the best match produced by the metric,\nwhich we callS b,i. We call its match scoreM b,i\nand its translation in the TMT b,i. We callQ i’s\ntranslationR i. The evaluation score of the transla-\ntion isE b,i=Sim TER(Tb,i, Ri). We compute the\nPearson correlation coefficient betweenMandE.\nA higher coefficient indicates a more useful fuzzy\nmatching metric.\nA second way for assessing the usefulness of\nmetrics is considering their mean evaluation score\nand investigating the significance of the differ-\nence between metrics through bootstrap resam-\npling. This approach consists of taking a large\nnumber of subsets of test sentences and compar-\ning the mean evaluation score of their best matches\nacross metrics. For instance, if one metric has a\nhigher mean in at least 95% of the subsets than\nanother one, thefirst metric is significantly better\nthan the second one at confidence level 0.05.\nA third way we study the usefulness of metrics\nis by investigating the degree to which the mean\nevaluation score decreases as we keep adding sen-\ntences with diminishing match score. If the de-\ncrease in mean evaluation score is slower in one\nmetric than in another, thefirst metric has a higher\nrecall than the second metric, as we need to put\nless effort in editing the translation suggestions to\nreach the desired translation.\n3.3 Speed of retrieval\nWe developed afilter calledapproximate query\ncoverage(AQC). Its purpose is to select candi-\ndate sentences in the TM which are likely to reach\na minimal matching threshold when submitting\nthem to a fuzzy matching metric, in order to in-\ncrease the speed of matching. 
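Returning for a moment to the metric combination of Section 3.1.3 before the details of the AQC filter: the regression-tree combination can be prototyped with any off-the-shelf tree learner. The sketch below uses scikit-learn's DecisionTreeRegressor purely as an illustration (the settings quoted above, i.e. complexity parameter, competitor splits and cross-validations, point to an rpart-style implementation rather than scikit-learn), and all feature and target values are invented placeholders.

    import numpy as np
    from sklearn.tree import DecisionTreeRegressor

    # X[i, k]: score assigned by metric k to the best match of training query i (invented values).
    # y[i]   : Sim_TER of that match's target side against the reference translation (invented).
    X_train = np.array([[0.81, 0.77, 0.64],
                        [0.42, 0.39, 0.31],
                        [0.93, 0.90, 0.88]])
    y_train = np.array([0.85, 0.40, 0.95])

    # min_impurity_decrease loosely mirrors the complexity parameter of 0.001 quoted above;
    # the correspondence is only approximate.
    tree = DecisionTreeRegressor(min_impurity_decrease=1e-3, random_state=0)
    tree.fit(X_train, y_train)

    # The predicted evaluation score is then used as a new, combined fuzzy match score.
    print(round(float(tree.predict(np.array([[0.70, 0.66, 0.58]]))[0]), 3))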
A candidate sen-\ntence is a sentence which shares one or more\nngrams of a minimal lengthNwith Q, and which\nshares enoughngrams with Q so as to cover the\nlatter sufficiently.\nThe implementation of thefilter uses a suffix ar-\nray (Manber and Myers, 1993), which allows for a\nvery efficient search for sentences sharingngrams\nwithQ.11This approach is similar to the one used\nin the context of fuzzy matching by Koehn and\nSenellart (2010).\nIn order to measure the usefulness of the AQC\nfilter, we measured the tradeoff between the gain\n11We used the SALM toolkit (Zhang and V ogel, 2006) for\nbuilding and consulting suffix arrays in our experiment.156\nin speed and the loss of potentially useful matches.\nWe used a sample of about 30,000 English-Dutch\nsentence pairs selected from Europarl (Koehn,\n2005), and a threshold of 0.2. After applying\na leave-one-out test, which consists of consider-\ning eachS iin the sample as aQand comparing\nit to all the otherS iin the sample, it appeared\nthat the AQCfilter selected about 9 candidate sen-\ntences perQ. The gain in speed is very signifi-\ncant: afterfiltering, a fuzzy matching metric like\nSim LEVonly needs to be applied to 0.03% of the\nsentences in the sample. As for the loss of po-\ntentially useful matches, we considered eachS i\nfor whichSim LEV(Q, S i)>= 0.3to be such a\nmatch. It appears that most of theseS iare still\navailable afterfiltering: 93% of all pairs(Q, S i)\nwith aSim LEVvalue between 0.3 and 0.4 have an\nAQC score>= 0.2. For pairs between 0.4 and 0.6,\nthis is 98%, and for pairs above 0.6 100%. Hence,\nthere is a very good tradeoff between gain in speed\nand loss of potentially useful matches.\n3.4 Preprocessing data\nWe use the Stanford parser (Klein and Manning,\n2003) to parse English sentences. We divide a\nsample of sentences into two equally sized sets: a\ntraining set, from which regression trees are built,\nand a test set, to which individual metrics and com-\nbined metrics derived from regression trees are ap-\nplied. We derive IDF weights from the full sample.\n4 Results\nWe tested the setup described in the previous sec-\ntion on a sample of 30,000 English-Dutch sentence\npairs from Europarl. We built regression trees for\ndifferent combinations of metrics. The combined\nmetrics either involve the baseline and an individ-\nual metric or a larger set of metrics. The results are\nshown in Table 1. The leftmost column shows the\nmetric used:\n•Individual metrics: LEV (Levenshtein), TER,\nMETEOR ,PM(percent match), PMIDF (per-\ncent match with weights), NGP (ngram preci-\nsion), NGPHWC (head word chains), LEVLEM\n(lemma-based Levenshtein), PMLEM ,TER-\nLEM,SPS(shared partial subtree matching)\n•Combination of baseline and individual met-\nric: TER+LEV ,SPS+LEV , . . 
.\n•Combination of all linguistically aware met-\nrics: LINGTable 1: Comparison of metrics with baseline\nSim Corr(M b,i, Eb,i)Score TER\nBaseline\nLEV 0.278 1.007\nLinguistically aware metrics\nLEVLEM 0.279 1.009\nLEVPRFC 0.283 0.983\nMETEOR 0.058 1.066\nNGPHWC 0.291 1.028\nPMLEM 0.420 0.927∗\nSPS 0.275 0.987\nTERLEM 0.500 0.926∗\nLinguistically unaware metrics\nNGP 0.222 1.035\nPM 0.4240.926∗\nPMIDF 0.335 0.963∗\nTER 0.502 0.926∗\nMetrics combined using regression tree\nLEVLEM+LEV 0.3620.869∗\nLEVPRFC+LEV 0.386 0.905∗\nMETEOR+LEV 0.391 0.910∗\nNGPHWC+LEV 0.3470.869∗\nPMLEM+LEV 0.478 0.916∗\nSPS+LEV 0.363 0.908∗\nTERLEM+LEV 0.562 0.894∗\nNGP+LEV 0.376 0.903∗\nPM+LEV 0.455 0.906∗\nPMIDF+LEV 0.405 0.906∗\nTER+LEV 0.561 0.894∗\nLING 0.564 0.899∗\nNONLING 0.5710.889∗\nALL 0.563 0.899∗\n∗p <0.05\n•Combination of all linguistically unaware\nmetrics (except for the baseline): NONLING\n•Combination of all metrics (including the\nbaseline): ALL\nThe middle column of Table 1 shows the Pear-\nson correlation coefficient between the match\nscore and the evaluation score (Sim TER). The\nrightmost column shows the means ofScore TER\nvalues (which reflect the estimated editing effort)\ninstead of the means of theSim TERvalues. We\nused the latter primarily to facilitate the calculation\nof certain statistics regarding TER, such as corre-\nlations.\nLet usfirst have a look at the individual metrics\nin Table 1. TheSim PMandSim TERmetrics,\nand their lemma-based variants, have the highest\ncorrelation with the evaluation score; their corre-\nlation is markedly higher than that of the baseline.\nInterestingly, IDF weights do not seem to help per-\ncent match, on the contrary. The correlation of\nmost other individual metrics is close to that of\nthe baseline. Looking at the worst-performing two\nmetrics,Sim NGPandSim METEOR , it is strik-\ning that the latter has an extremely low correla-157\ntion compared to the baseline. This needs fur-\nther investigation. The high score ofSim TER\nandSim TERLEM raises the question whether an\nevaluation metric favors a fuzzy matching metric\nwhich is identical or similar to it.\nThe means of theScore TERvalues for individ-\nual metrics more or less confirm the differences\nobserved for correlation.Sim PM,Sim TERand\ntheir lemma-based variants have the lowest mean.\nAs shown by the asterisks in the table, the differ-\nence in mean with the baseline is significant at the\n0.05 level for about half of the individual metrics.\nLooking at the combined metrics in Table 1, we\nsee that all of them have a higher correlation with\nthe evaluation score than the baseline, and a lower\nScore TERmean; the difference in mean with the\nbaseline is always significant. Of all two-metric\ncombinations, the ones involvingSim TERand its\nlemma-based variant perform the best. The com-\nbinationsSim LING andSim NONLING perform\nslightly better than the best two-metric combina-\ntions.Sim ALL, which includes the baseline itself,\ncomes close toSim LINGandSim NONLING but\ndoes not exceed their performance. From the re-\ngression tree involving the combination of all met-\nrics, it appears that it uses 9 of the 12 individ-\nual metrics, including the baseline, to predict the\nevaluation score. There is no clearcut association\nbetween the correlation values of the combined\nmetrics and theirScore TERmean. For instance,\nSim NGPHWC has the lowest correlation but also\nthe lowestScore TERmean.\nFigure 1 shows the meanScore TERincrease\n(i.e. 
increase in editing effort) that we obtain when\nadding baseline matches with decreasing match\nscore. When we order all test sentences according\nto the baseline score of their best match, the mean\nScore TERof thefirst 1000 sentences (the 1000 top\nsentences) is 0.74. When we order the test sen-\ntences according toSim ALL, the meanScore TER\nof the 1000 top sentences is 0.67. As we add more\nsentences to the top list, theScore TERmean for\nthe baseline increases more strongly than that of\nSim ALL. The recall ofSim ALLincreases, as we\nneed to put less effort in editing the translation\nsuggestions of the top list. For instance, the re-\ncall for 1000 sentences is 10% lower for the base-\nline (0.74/0.67=1.10). For 2000 sentences, the dif-\nference increases to 11%, and for 3000 to 13%.\nTheoracleline in Figure 1 indicates the mean\nScore TERincrease in case we know the evalua-tion score of the best match beforehand; this is the\nupper bound for a matching metric.\nFrom the results in Table 1, we can conclude\nthat, though linguistically unaware metrics help a\nlong way in improving on the baseline, linguistic\nmetrics clearly have added value. A question that\narises here, and to which we already pointed pre-\nviously, is whether the use of an identical metric\nfor fuzzy matching and for evaluation favors that\nfuzzy matching metric with respect to others. If\nthat is the case, it may be better to optimize fuzzy\nmatching methods towards a combination of eval-\nuation metrics rather than a single metric. Ideally,\nhuman judgment of translation should also be in-\nvolved in evaluation.\n5 Conclusion and future\nOur comparison of the baseline matching metric,\nLevenshtein distance, with linguistically aware\nand unaware matching metrics, has shown that the\nuse of linguistic knowledge in the matching pro-\ncess provides clear added value. This is especially\nthe case when several metrics are combined into a\nnew metric using a regression tree. The correla-\ntion of combined metrics with the evaluation score\nis much stronger than the correlation of the base-\nline. Moreover, significant improvement is ob-\nserved in terms of mean evaluation score, and the\ndifference in recall with the baseline increases as\nmatch scores decrease.\nConsidering the fact that there is added value in\nlinguistic information, we may further improve the\nperformance of matching metrics by testing more\nmetric configurations, by using additional metrics\nor metric combinations built for MT evaluation,\nand by building regression trees using larger train-\ning set sizes. Testing on an additional language,\nfor instance a highly inflected one, may also shed\nlight on the value of fuzzy metrics.\nOur experiments were performed using a sin-\ngle evaluation metric, TER. We may also use other\nmetrics for evaluation, such as percent match, Me-\nteor or shared partial subtree matching, in order to\nassess to which degree the use of an identical met-\nric for fuzzy matching and for evaluation affects\nresults. In this respect, we will also investigate the\nlow correlation between Meteor as a fuzzy match-\ning metric and TER as an evaluation score, and\nselect a new metric which we use for evaluation\nonly and which applies matching techniques ab-\nsent from the other metrics. An example of such a158\nFigure 1: MeanScore TERincrease\nmetric is the recently developed BEER (Stanojevi ´c\nand Sima’an, 2014), which is based on permuta-\ntion of tree nodes. 
Human judgment of translation\nsuggestions will also be taken into account.\nLast but not least, we would like to point out\nthat we have created an innovative fuzzy match-\ning framework with powerful features: integration\nof matching metrics with different origins and lev-\nels of linguistic information, support for different\ntypes of structures (sequences, trees, trees con-\nverted into sequences), combination of metrics us-\ning regression trees, use of any metric in the source\nor target language (fuzzy matching metric or eval-\nuation metric), and fastfiltering through a suffix\narray.\n6 Acknowledgements\nThis research is funded by the Flemish govern-\nment agency IWT (project 130041, SCATE). See\nhttp://www.ccl.kuleuven.be/scate.\nReferences\nAramaki, Eiji, Sadao Kurohashi, Hideki Kashioka\nand Naoto Kato. 2005. Probabilistic Model for\nExample-based Machine Translation.Proceedings\nof the 10th Machine Translation Summit, Phuket,\nThailand. pp. 219–226.\nBaldwin, Timothy. 2010. The Hare and the Tortoise:\nSpeed and Accuracy in Translation Retrieval.Ma-\nchine Translation, 23(4):195–240.\nBille, Philip. 2005. A Survey on Tree Edit Distanceand Related Problems.Theoretical Computer Sci-\nence, 337(1-3):217–239.\nBloodgood, Michael and Benjamin Strauss. 2014.\nTranslation Memory Retrieval Methods.Procee-\ndings of the 14th Conference of the European Asso-\nciation for Computational Linguistics, Gothenburg,\nSweden. pp. 202–210.\nComelles, Elisabet, Jordi Atserias, Victoria Arranz,\nIrene Castell ´on, and Jordi Ses ´e. 2014. VERTa: Fa-\ncing a Multilingual Experience of a Linguistically-\nbased MT Evaluation.Proceedings of the 9th In-\nternational Conference on Language Resources and\nEvaluation, Reykjavik, Iceland. pp. 2701–2707.\nDenkowski, Michael and Alon Lavie. 2014. Meteor\nUniversal: Language Specific Translation Evalua-\ntion for Any Target Language.Proceedings of the\n9th Workshop on Statistical Machine Translation,\nBaltimore, Maryland, USA. pp. 376–380.\nGautam, Shubham and Pushpak Bhattacharyya. 2014.\nLAYERED: Metric for Machine Translation Eva-\nluation.Proceedings of the 9th Workshop on Sta-\ntistical Machine Translation, Baltimore, Maryland,\nUSA. pp. 387-393.\nGim´enez, Jes ´us and Llu ´ıs M ´arquez. 2010. Asiya:\nAn Open Toolkit for Automatic Machine Translation\n(Meta-)Evaluation.The Prague Bulletin of Mathe-\nmatical Linguistics, 94:77–86.\nGupta, Rohit, Hanna Bechara and Constantin Orasan.\n2014. Intelligent Translation Memory Matching and\nRetrieval Metric Exploiting Linguistic Technology.\nProceedings of Translating and the Computer 36,\nLondon, UK. pp. 86–89.\nJiang, Tao, Lushen Wang, and Kaizhong Zhang. 1995.\nAlignment of Trees – An Alternative to Tree Edit.\nTheoretical Computer Science, 143(1):137-148.159\nKlein, Dan and Christopher Manning. 2003. Fast Ex-\nact Inference with a Factored Model for Natural Lan-\nguage Parsing.Advances in Neural Information Pro-\ncessing Systems 15 (NIPS), MIT Press. pp. 3–10.\nKlein, Philip. 1998. Computing the Edit Distance\nbetween Unrooted Ordered Trees.Proceedings of\nthe 6th Annual European Symposium on Algorithms,\nVenice, Italy. pp. 91–102.\nKoehn, Philipp. 2005. Europarl: A Parallel Corpus for\nStatistical Machine Translation.Proceedings of the\n10th Machine Translation Summit, Phuket, Thailand.\npp. 79–86.\nKoehn, Philipp and Jean Senellart. 2010. 
Fast Ap-\nproximate String Matching with Suffix Arrays and\nA*Parsing.Proceedings of the 9th Conference\nof the Association for Machine Translation in the\nAmericas, Denver, Colorado. 9 pp. [http://www.mt-\narchive.info/AMTA-2010-Koehn.pdf]\nLevenshtein, Vladimir I. 1966. Binary Codes Capable\nof Correcting Deletions, Insertions, and Reversals.\nSoviet Physics Doklady, 10(8):707–710.\nLi, Guoliang, Xuhui Liu, Jianhua Feng, and Lizhu\nZhou. 2008. Efficient Similarity Search for Tree-\nStructured Data.Proceedings of the 20th Inter-\nnational Conference on Scientific and Statistical\nDatabase Management, Hong Kong, China. pp.\n131–149.\nLiu, Ding and Daniel Gildea. 2005. Syntactic Fea-\ntures for Evaluation of Machine Translation.Pro-\nceedings of ACL 2005 Workshop on Intrinsic and\nExtrinsic Evaluation Measures for Machine Trans-\nlation and/or Summarization, Ann Arbor, Michigan,\nUSA. pp. 25–32.\nLo, Chi-kiu and Dekai Wu. 2011. MEANT: An Inex-\npensive, High-accuracy, Semi-automatic Metric for\nEvaluating Translation Utility via Semantic Frames.\nProceedings of the 49th Annual Meeting of the Asso-\nciation for Computational Linguistics: Human Lan-\nguage Technologies – Volume 1, Portland, Oregon,\nUSA. pp. 220–229.\nMa, Yanjun, Yifan He, Andy Way, and Josef van Gen-\nabith. 2011. Consistent Translation using Discrimi-\nnative Learning: a Translation Memory-inspired Ap-\nproach.Proceedings of the 49th Annual Meeting of\nthe Association for Computational Linguistics: Hu-\nman Language Technologies – Volume 1, Portland,\nOregon. pp. 1239–1248.\nManber, Udi and Gene Myers. 1993. Suffix Arrays:\nA New Method for On-line String Searches.SIAM\nJournal on Computing, 22:935–948.\nPapineni, Kishore, Salim Roukos, Todd Ward, and Wei-\nJing Zhu. 2002. BLEU: a Method for Automatic\nEvaluation of Machine Translation.Proceedings of\nthe 40th Annual Meeting of the Association for Com-\nputational Linguistics, Philadelphia, Pennsylvania,\nUSA. pp. 311–318.Pr¨ufer, Heinz. 1918. Neuer Beweis eines Satzes ¨uber\nPermutationen.Archiv der Mathematik und Physik,\n27:742–744.\nSikes, Richard. 2007. Fuzzy Matching in Theory and\nPractice.Multilingual, 18(6):39–43.\nSimard, Michel and Atsushi Fujita. 2012. A Poor\nMan’s Translation Memory Using Machine Trans-\nlation Evaluation Metrics.Proceedings of the\n10th Conference of the Association for Machine\nTranslation in the Americas, San Diego, California,\nUSA. 10 pp. [http://www.mt-archive.info/AMTA-\n2012-Simard.pdf]\nSmith, James and Stephen Clark. 2009. EBMT for\nSMT: a new EBMT-SMT hybrid.Proceedings of the\n3rd International Workshop on Example-Based Ma-\nchine Translation, Dublin, Ireland. pp. 3–10.\nSnover, Matthew, Bonnie Dorr, Richard Schwartz, Lin-\nnea Micciula, and John Makhoul. 2006. A Study of\nTranslation Edit Rate with Targeted Human Annota-\ntion.Proceedings of the 7th Conference of the As-\nsociation for Machine Translation in the Americas,\nCambridge, Massachusetts, USA. pp. 223–231.\nStanojevi ´c, Milo ˇs and Khalil Sima’an. 2014. BEER:\nBEtter Evaluation as Ranking.Proceedings of the\n9th Workshop on Statistical Machine Translation,\nBaltimore, Maryland, USA. pp. 414–419.\nTillmann, Christoph, Stephan V ogel, Hermann Ney,\nAlex Zubiaga, and Hassan Sawaf. 1997. Ac-\ncelerated Dp Based Search For Statistical Transla-\ntion.Proceedings of the 5th European Conference\non Speech Communication and Technology, Rhodes,\nGreece. pp. 2667–2670.\nZhang, Ying and Stephan V ogel. 2006. 
Suffix Array and its Applications in Empirical Natural Language Processing. Technical Report CMU-LTI-06-010, Language Technologies Institute, School of Computer Science, Carnegie Mellon University.
Zhechev, Ventsislav and Josef van Genabith. 2010. Maximising TM Performance through Sub-Tree Alignment and SMT. Proceedings of the 9th conference of the Association for Machine Translation in the Americas, Denver, Colorado, USA. 10 pp. [http://www.mt-archive.info/AMTA-2010-Zhechev.pdf]", "main_paper_content": null }
{ "decision": "Unknown", "reviews": [] }
0
0
[]
[]
{ "id": "PFIoepUX9Ra", "year": null, "venue": "EAMT 2023", "pdf_link": "https://aclanthology.org/2023.eamt-1.54.pdf", "forum_link": "https://openreview.net/forum?id=PFIoepUX9Ra", "arxiv_id": null, "doi": null }
{ "title": "GoSt-ParC-Sign: Gold Standard Parallel Corpus of Sign and spoken language", "authors": [ "Mirella De Sisto", "Vincent Vandeghinste", "Lien Soetemans", "Caro Brosens", "Dimitar Shterionov" ], "abstract": null, "keywords": [], "raw_extracted_content": "GoSt-ParC-Sign\nGold Standard Parallel Corpus of Sign and spoken language\nMirella De Sisto∗, Vincent Vandeghinste†, Lien Soetemans‡, Caro Brosens§, Dimitar Shterionov∗\n∗Tilburg University,†Instituut voor de Nederlandse Taal,‡KU Leuven,§Vlaams Gebarentaalcentrum\[email protected], [email protected],\[email protected], [email protected],\[email protected]\n1 Introduction\nIn the last decade, there has been an increasing in-\nterest in extending MT from only focusing on Spo-\nken Languages (SpLs) to also targeting Sign Lan-\nguages (SLs); nevertheless, the advances of this\nfield are still limited, and this is due to a number\nof reasons (e.g. challenges related to data avail-\nability, lack of notation conventions, etc.).\nBesides the technological gap between SpLMT\nand SLMT, a severe difference lies in the avail-\nability of high-quality (training) data. SpLMT can\ncount on open and free datasets, such as Europarl\n(Koehn, 2005) and OPUS (Tiedemann and Ny-\ngaard, 2004), and on several MT platforms which\nallow training on specific datasets.1The availabil-\nity of sufficient amounts of high-quality (training)\ndata drives the MT performance up. Furthermore,\nwell-designed test sets allow to adequately assess\nquality and fairly compare MT systems.\nFor SLs, instead, training data is scarce and\nscattered. Parallel datasets, with one side in a\nSL and the other in a SpL, are extremely lim-\nited. In addition, most of the available datasets\nconsist in broadcasts with subtitles/autocues as a\nwritten form of a SpL as the source and interpreta-\ntion into a SL as the target (Camgoz et al., 2018);\nthis leads to various concerns related to their qual-\nity: SL as the result of interpretation or transla-\ntion is heavily influenced by the source language2\nas well as by the interpreting process; in addition,\n© 2023 The authors. This article is licensed under a Creative\nCommons 4.0 licence, no derivative works, attribution, CC-\nBY-ND.\n1See, for instance, Nematus ( https://github.\ncom/EdinburghNLP/nematus ), OpenNMT\n(https://opennmt.net/ ), MarianMT ( https:\n//marian-nmt.github.io/ ),\n2This phenomenon is referred to as translationese (Graham et\nal., 2020)even though in some cases hearing interpreters are\nCODA’s (children of deaf adults), most often the\ninterpretation is made by a hearing interpreter for\nwhom the SL is the L2.\nIn some cases, corpora with SL as source\nare available, such as the Corpus Vlaamse\nGebarentaal3(VGT) (Van Herreweghe et al., 2015)\n(Corpus of Flemish Sign Language); nevertheless,\nas annotation of the data is ongoing, the transla-\ntions available are too insufficient for quality (au-\ntomatic) SL translation (SLT). Additionally, as the\ndata contain videos of the signer’s faces, strict\nGDPR rules apply, and signed informed consent\nforms are required from each of the signers.\nThe SignON project4aims to build SLT engines\nand hence gathers available SL data; throughout\nthis process, we faced a number of issues,5which\nled us to identify the need for a gold standard par-\nallel corpus of SL - SpL. 
The collection, organisation and (public) release of such a corpus will provide a common ground for advancing the field of SLT.
2 GoSt-ParC-Sign
The goal of this project is to create a gold standard parallel corpus of authentic VGT as source and a translation into written Dutch as target language. This 12-month project, running between February 2023 and January 2024, consists of three phases: (1) Collection of existing source SL videos in VGT and of informed consent forms from their signers.6 (2) Manual translation of the SL into written Dutch, performed by a mixed team of deaf and hearing professional VGT translators; this will optimize the translation process, preserve the content of the original message, and ensure good quality of the Dutch text. This phase will consist of 133 hours of translation work,7 resulting in approximately 9–10 hours of video being translated.8 Translations will be created in ELAN (Sloetjes and Wittenburg, 2008) and arranged into a "Translation" tier in the ELAN Annotation Format (EAF) file of each corresponding video. Since there is no sign-to-word correspondence between VGT and Dutch, alignment is at the sentence or message level. (3) Quality control by members of the Flemish deaf community and L1 Dutch language users, which will ensure that the translations convey the same message as the original videos. All phases will be overseen by the Vlaams GebarenTaalCentrum (VGTC) and KU Leuven, both members of SignON, in order to ensure data and translation quality. The final corpus will be made publicly available (under a Creative Commons BY licence) through the CLARIN infrastructure at the Instituut voor de Nederlandse Taal (INT), and through the European Language Grid.
3https://www.corpusvgt.be/
4https://signon-project.eu/
5For an overview of data-related challenges of SLMT, see (De Sisto et al., 2022)
6Informed consent for the voice-over will not be needed, since audio will not be included in our corpus.
3 Current and future steps
In this initial phase of GoSt-ParC-Sign, approximately 10 hours of authentic VGT videos to be translated into written Dutch have been identified. The videos cover different topics and genres: 5 hours of free conversation, a 1.5-hour panel discussion about linguistic change in the community, over 2 hours of a deaf-led talk, a game show to celebrate 15 years of recognition for VGT, and 45 minutes of semi-spontaneous vlogs about typical language uses in VGT. They all constitute content originally produced for a signing audience. VGTC has recruited translators and we are currently collecting signed informed consent forms from the videos' owners. After phase 1, the translation phase will start; the quality control, i.e. phase 3, will follow between August and December 2023. In the final month of the project we will prepare and release
7This amount was calculated based on the funding available and the translators' average hourly rate (60 euro).
8This estimate was made by consulting professional SL-to-SpL translators: 15 minutes of translation work correspond roughly to one minute of video translation.
In terms of re-\nsulting text, we could estimate, based on a recently concluded\ncorpus project, that the translation of these videos into written\nDutch might correspond approximately to 50.000 words.all the data and documentation.\nAcknowledgements\nThe GoSt-ParC-Sign project has been awarded the\nEAMT Sponsorship of Activities 2022 and par-\ntially by the SignON project, funded by the Eu-\nropean Union’s Horizon 2020 Research and In-\nnovation Programme under Grant Agreement No.\n101017255.\nReferences\nCamgoz, Necati Cihan, Simon Hadfield, Oscar Koller,\nHermann Ney, and Richard Bowden. 2018. Neu-\nral sign language translation. In Proceedings of the\nIEEE Conference on Computer Vision and Pattern\nRecognition (CVPR) , Salt Lake City, USA, 18 – 22\nJune. IEEE.\nDe Sisto, Mirella, Vincent Vandeghinste, Santiago\nEgea G ´omez, Mathieu De Coster, Dimitar Shteri-\nonov, and Horacio Saggion. 2022. Challenges with\nsign language datasets for sign language recogni-\ntion and translation. In Proceedings of the Thir-\nteenth Language Resources and Evaluation Confer-\nence, pages 2478–2487, Marseille, France, June. Eu-\nropean Language Resources Association.\nGraham, Yvette, Barry Haddow, and Philipp Koehn.\n2020. Statistical Power and Translationese in Ma-\nchine Translation Evaluation. In Proceedings of the\n2020 Conference on Empirical Methods in Natural\nLanguage Processing (EMNLP) , pages 72–81, On-\nline. Association for Computational Linguistics.\nKoehn, Philipp. 2005. Europarl: A parallel corpus\nfor statistical machine translation. In Proceedings of\nMachine Translation Summit X: Papers , pages 79–\n86, Phuket, Thailand, September 13–15.\nSloetjes, Han and Peter Wittenburg. 2008. Annotation\nby category: ELAN and ISO DCR. In Proceedings\nof the Sixth International Conference on Language\nResources and Evaluation (LREC’08) , Marrakech,\nMorocco, May. European Language Resources As-\nsociation (ELRA).\nTiedemann, J ¨org and Lars Nygaard. 2004. The OPUS\ncorpus - parallel and free: http://logos.uio.\nno/opus . In Proceedings of the Fourth Interna-\ntional Conference on Language Resources and Eval-\nuation (LREC’04) , pages 1183–1186, Lisbon, Por-\ntugal, May. European Language Resources Associa-\ntion (ELRA).\nVan Herreweghe, Mieke, Myriam Vermeerbergen,\nEline Demey, Hannes De Durpel, Hilde Nyf-\nfels, and Sam Verstraete. 2015. Het Cor-\npus VGT. Een digitaal open access corpus van\nvideo’s and annotaties van Vlaamse Gebarentaal, on-\ntwikkeld aan de Universiteit Gent ism KU Leuven.\nhttps://www.corpusvgt.ugent.be/.", "main_paper_content": null }
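For orientation, the corpus described above stores each Dutch translation in a "Translation" tier of the video's ELAN Annotation Format (EAF) file, aligned at the sentence or message level. A downstream user might read those aligned segments roughly as sketched below; the pympi-ling library, the tier name and the file name are assumptions made for illustration and are not part of the project specification.

```python
# Sketch: extract sentence/message-level Dutch translations from an ELAN .eaf file.
# Assumes the pympi-ling package (pip install pympi-ling) and a tier named "Translation";
# both are illustrative choices, not prescribed by GoSt-ParC-Sign.
from pympi.Elan import Eaf

def read_translation_tier(eaf_path, tier_name="Translation"):
    eaf = Eaf(eaf_path)
    if tier_name not in eaf.get_tier_names():
        raise ValueError(f"Tier {tier_name!r} not found in {eaf_path}")
    # Each annotation is (start_ms, end_ms, value): a time span of the source VGT video
    # paired with its written Dutch translation.
    segments = eaf.get_annotation_data_for_tier(tier_name)
    return sorted(segments, key=lambda seg: seg[0])

if __name__ == "__main__":
    for start_ms, end_ms, dutch_text in read_translation_tier("example_video.eaf"):
        print(f"[{start_ms:>8}-{end_ms:>8} ms] {dutch_text}")
```

Because the alignment is to time spans rather than to individual signs, each extracted record corresponds to one signed message and one Dutch sentence, which is the granularity at which the released corpus can be used for translation experiments.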
{ "decision": "Unknown", "reviews": [] }
0
0
[]
[]
{ "id": "yt6XLGVdeay", "year": null, "venue": "EAMT 2020", "pdf_link": "https://aclanthology.org/2020.eamt-1.29.pdf", "forum_link": "https://openreview.net/forum?id=yt6XLGVdeay", "arxiv_id": null, "doi": null }
{ "title": "Terminology-Constrained Neural Machine Translation at SAP", "authors": [ "Miriam Exel", "Bianka Buschbeck", "Lauritz Brandt", "Simona Doneva" ], "abstract": null, "keywords": [], "raw_extracted_content": "Terminology-Constrained Neural Machine Translation at SAP\nMiriam Exel Bianka Buschbeck Lauritz Brandt\nSAP SE\nDietmar-Hopp-Allee 16, 69190 Walldorf\nGermany\[email protected] Doneva\u0003\nUniversity of Mannheim\n68131 Mannheim\nGermany\[email protected]\nAbstract\nThis paper examines approaches to bias a\nneuralmachinetranslationmodeltoadhere\nto terminology constraints in an industrial\nsetup. In particular, we investigate varia-\ntionsoftheapproachbyDinuetal.(2019),\nwhich uses inline annotation of the target\nterms in the source segment plus source\nfactor embeddings during training and in-\nference, and compare them to constrained\ndecoding. Wedescribethechallengeswith\nrespect to terminology in our usage sce-\nnarioatSAPandshowhowfartheinvesti-\ngatedmethodscanhelptoovercomethem.\nWe extend the original study to a new lan-\nguagepairandprovideanin-depthevalua-\ntion including an error classification and a\nhuman evaluation.\n1 Introduction\nWith over one billion words per year, SAP deals\nwith a huge translation volume; covering prod-\nuct localization and translation of documentation,\ntraining materials or support instructions for up\nto 85 languages. With a wide range of prod-\nuctlinesindifferentindustries,translationsettings\nare diverse. There are over 100 active transla-\ntiondomainsforwhichwemaintaintranslationre-\nsources such as translation memories and termi-\nnologies. At SAP we usually train multi-domain\nneural machine translation (NMT) engines, whose\ninputconsistsofamultitudeofdatasourcesinclud-\ning the contents of the company-internal transla-\ntion memories from various domains. The result-\n\u0003Employed as a working student at SAP during this project.\n©2020Theauthors. ThisarticleislicensedunderaCreative\nCommons 3.0 licence, no derivative works, attribution, CC-\nBY-ND.ing NMT system produces high-quality technical\ntranslations, but has difficulties generating appro-\npriate and coherent terminology in specific con-\ntexts. Given the great importance of correct and\nconsistentterminologyintechnicaltranslation,this\nisanuisanceforthetranslatorsthatworkinapost-\neditingscenarioaswellasforusersconsumingma-\nchine translation (MT) in a self-service scenario.\nIn our translation environment, translators are\nassigned projects along with the relevant transla-\ntion domain’s terminology. To achieve term con-\nsistency, SAP maintains SAPterm1, a large termi-\nnology database which also specifies viable term\ntranslations. Translators can easily select target\nterms from SAPterm in a computer-assisted trans-\nlation (CAT) environment, but applying terminol-\nogy constraints in NMT is a challenge. As we do\nnot have reliable term recognition or morphologi-\ncalinflectiongenerationtoolsforallourproductive\nlanguages at our disposal, we require an approach\nthat not only enforces the correct terminology but\nalso learns its contextually appropriate inflections.\nTo that end, we investigate the approach pre-\nsented in Dinu et al. (2019), which combines in-\nline annotation with source factors (Sennrich and\nHaddow, 2016), that provide an additional in-\nput stream with terminology annotation, to show\nhow domain-specific terminology can be enforced\nin a multi-domain NMT model. 
The approach\nshouldbecapableofhandlingunseenterminology\nwhile retaining NMT’s ability to produce fluent\noutput sequences without the need for additional\nresources such as morphological generators and\nwithout drastically reducing decoding speed. We\nwill present results for variations of this approach\nwhich were not investigated in Dinu et al. (2019),\n1http://www.sapterm.com/\nbut could be of interest to users of NMT who plan\ntoimplementthatapproachinaproductivesystem.\nWhiletheWMTnewstranslationtaskthatDinu\net al. (2019) evaluate on is a viable test bed for\nnew methods, we aim to validate that the method\nis also applicable to other scenarios, such as the\ntranslation of texts from the business and IT con-\ntextofSAP,whenconstrainingitwithentriesfrom\nSAPterm. We furthermore extend the original\nstudytoanewlanguagepair(English–Russian)and\nprovide an in-depth evaluation including a human\nassessment. Our study yields very promising re-\nsults, amongst others improvements of up to 11\nBLEU points on terminology data, and paves the\nway to the customization of NMT at SAP: a se-\nlected SAPterm glossary can be applied directly\nwhen producing MT proposals for a translation\nproject. Thisyieldsbettertranslationquality,helps\nto reduce post-editing costs and eases translators’\nfrustration with correcting terms.\n2 Related Work\nSeveral approaches to make NMT adapt to a\ndomain-specific terminology have been proposed\nintheliterature. Fine-tuningonin-domaintraining\ndata on-the-fly (Farajian et al., 2018; Huck et al.,\n2019) is shown to improve translation quality and\nterm accuracy but creates additional technological\nchallengesformodelmanagementandincreasesin-\nfrastructure costs. Additionally, terminology con-\nstraints cannot be specified on a sentence or docu-\nmentlevel,butinsteadneedtobedistinctlypresent\nin the available training data, which often is not\nthe case in a productive scenario. The latter ar-\ngument also holds for domain-aware MT (Kobus\net al., 2017), where a multi-domain model distin-\nguishesthetranslationdomainsusingadomaintag,\nwhich is prepended to the source segment.\nSince terminology databases are available in\nmost translation environments, integrating them\ninto NMT at run-time to enable domain-specific\ntranslation is an ongoing research topic. Early ap-\nproaches use placeholder tokens for source and\ntarget (for example (Crego et al., 2016)). Place-\nholder approaches often suffer from disfluency as\nthe NMT model does not have access to the term\nand therefore has difficulties creating a fluent and\nmorphologically sound translation.\nConstrained decoding is one of the most promi-\nnentapproachestoenforcingterminologyinNMT.\nThe decoder is subject to a set of constraints thatarestrictlyenforcedduringdecoding(Hokampand\nLiu, 2017; Chatterjee et al., 2017). Some issues\nwith constrained decoding have already been ad-\ndressed, such as better positioning of target terms\nby exploiting the correspondence between source\nand target terms (Hasler et al., 2018), and improv-\ning performance for the base approach (Post and\nVilar, 2018; Hu et al., 2019). Nevertheless, the in-\ncreaseindecodingtimecomparedtounconstrained\ndecoding is still considerable (cf. Section 5). Also\nthe output surface form is enforced exactly as pro-\nvidedbytheconstraintandnomorphologicaladap-\ntationisappliedbythedecoder. 
Thisleadstomis-\nplaced constraints and broken sentences (Burlot,\n2019) as well as special cases where surface form\nvariants of an enforced term are being produced\nbythedecoderbutnotpickedupbytheconstraint,\nleading to a duplication as the constraint produces\nthe terminology again (Dinu et al., 2019).\nDinu et al. (2019) offer a different approach to\napplyingterminologyconstraintsinNMT. Thetar-\ngettermsareinsertedintothesourcestringduring\ntraining and decoding, and thus the model learns\na copying behavior. An indication of which words\nare source terms, target terms or no terms is pro-\nvided to the model via an additional input stream.\nThis input is encoded as source factors, in the\nsame way that linguistic features can be encoded\n(Sennrich and Haddow, 2016). For the English–\nGerman WMT 2018 news translation task, mod-\nerate improvements in BLEU and term accuracies\n>90% are reported. The zero-shot nature of this\napproach enables the application of unseen termi-\nnologyattesttime. Furthermore,Dinuetal.(2019)\nreport cases of generating morphological variants\nof terminology entries in the output, while decod-\ning times are not increased compared to the base\nmodel. As the ability to apply terminology con-\nstraints is trainedinto the NMT model byeither\nappending the target term to the source term or by\nreplacingit,Dinuetal.(2019)refertotheirmodels\nastrain-bymodels,andwewillcontinuedoingso.\nMany commercial providers of MT offer an op-\ntion to upload a user dictionary in order to cus-\ntomize the NMT output to enforce a certain termi-\nnology.2This is a feature that users became ac-\n2Accessed on February 21st, 2020:\nAmazonTranslate : https://aws.amazon.com/blogs/machine-\nlearning/introducing-amazon-translate-custom-terminology/\nGoogle Translate : https://cloud.google.com/translate/docs/\nadvanced/glossary\nMicrosoft Translator : https://docs.microsoft.com/en-\nus/azure/cognitive-services/translator/dynamic-dictionary\ncustomed to in rule-based and statistical MT, and\nconsequentlytheyexpectasimilarfunctionalityfor\nNMTaswell. Naturally,thecommercialproviders\nusually leave us in the dark about the technology\nthat is used for the implementation of that feature.\nSuch custom terminology features are described\nmore for marketing purposes rather than from an\nobjective technical viewpoint. Usually, no trans-\nparentevaluationresultsareavailable. Someprod-\nuctdescriptionsareneverthelessfairenoughtode-\nscribe the limitations of the feature and best prac-\ntices.\n3 Methodology\nWe experiment with variants of the train-byap-\nproachintroducedbyDinuetal.(2019),whichisa\nform of inline term annotation. Target terms ttare\ninsertedintoasourcesentencebyeitherappending\nthem to the source term ts(append) or by replac-\ning tscompletely( replace). Anadditionalsignalis\nprovidedbyatermannotationforeachinputtoken,\nwhere1meanspartofasourceterm,2meanspart\nof a target term and 0 is the default. An example\nfor the input is provided in Table 1.\nThe term annotations are presented as source\nfactors and have their own embedding vectors,\nwhich are combined with the respective (sub-)\nword embeddings to represent the input of the\nencoder in an encoder-decoder NMT architecture\n(SennrichandHaddow,2016). Thetwoembedding\nvectors can be combined by either concatenating\n(concat) or summing ( sum) them. This makes the\ndimensionality of the source factor embedding ei-\ntheravariable-sized( concat)orafixedsized( sum)\nvector. 
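To make the append/replace annotation and the accompanying factor stream concrete, the following sketch produces both for a single whitespace-tokenized sentence and one terminology entry, using the 0/1/2 labelling just described (0 = regular token, 1 = source-term token, 2 = injected target-term token). It is an illustrative re-implementation on plain tokens; in the paper the annotation is propagated to the BPE level and term occurrences are found with fuzzy matching, both of which are omitted here.

```python
# Sketch of the inline term annotation used by the train-by approach:
# "append" keeps the source term and appends the target term, "replace" substitutes it.
# A parallel factor stream marks tokens as 0 (regular), 1 (source term) or 2 (target term).
def annotate(tokens, src_term, tgt_term, mode="append"):
    src = src_term.split()
    tgt = tgt_term.split()
    out_tokens, out_factors = [], []
    i = 0
    while i < len(tokens):
        if [t.lower() for t in tokens[i:i + len(src)]] == [t.lower() for t in src]:
            if mode == "append":                 # keep the source term tokens
                out_tokens += tokens[i:i + len(src)]
                out_factors += [1] * len(src)
            out_tokens += tgt                     # inject the target term (base form)
            out_factors += [2] * len(tgt)
            i += len(src)
        else:
            out_tokens.append(tokens[i])
            out_factors.append(0)
            i += 1
    return out_tokens, out_factors

sentence = "This indicator is only necessary for manual depreciation and write-ups .".split()
toks, facs = annotate(sentence, "manual depreciation", "manuelle Abschreibung", mode="append")
print(list(zip(toks, facs)))
# [('This', 0), ..., ('manual', 1), ('depreciation', 1),
#  ('manuelle', 2), ('Abschreibung', 2), ('and', 0), ('write-ups', 0), ('.', 0)]
```

In the replace mode the two lines that keep the source-term tokens are skipped, so only the factor-2 target tokens remain in the annotated source, matching the replace row of Table 1 below.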
WhileDinuetal.(2019)onlyreportresults\nfor the concatenation strategy with an embedding\nsize of 16, we investigate an embedding size of 8\nas well as the vector summarization combination.\nWearealsointerestedintheimpactofthesource\nfactors themselves, and thus investigate whether\nthe additionally provided annotation is actually\nnecessary by using only the inline annotation and\nno term factor annotation.\nThesourcesentencesareannotatedasdescribed\nforallterminologyentries ¹ts;ttº,when tsispresent\nin the source and ttoccurs in the reference. To\nSDL: https://www.sdl.com/about/news-media/press/\n2018/sdls-neural-machine-translation-sets-new-industry-\nstandards-with-state-of-the-art-dictionary-and-image-\ntranslation-features.html\nSystran: https://blog.systransoft.com/our-neural-network-\njust-learned-syntax/check whether a term occurs in a sentence, we use\na matching strategy that also covers morphologi-\ncalvariants. Thisisessentialasourterminological\ndatabasecontainsbaseformsonly. Notethatwein-\nsert ttintothesourceinitsbaseform,becausethis\nwill also be the scenario at test time.\nDuringtraining,themodellearnstocopythein-\njected target terms to the output. We expect to see\nmorphologicalvariantsofthebasetermsintheout-\nputinaccordancewiththecontextofthesentence,\nas is reported in Dinu et al. (2019).\n4 Experimental Setup\nWe evaluate the application of terminology con-\nstraints in the usage scenario of MT at SAP, for\ntwo language pairs English–German (en–de) and\nEnglish-Russian (en–ru). We use target languages\nthatarerelativelymorphologicallyrichbecausewe\nwantto investigatewhether theapproach isable to\nproducethetargettermsinanappropriatemorpho-\nlogical form.\n4.1 Data and Data Preparation\nCorpus Our parallel data consists of a large\ncollection of proprietary translation memories\nfrom within SAP. It is a multi-domain corpus\ncovering different content types, such as doc-\numentation, user interface strings and training\nmaterial in relation to various SAP products.\nFor all our training/validation/test sets we use\n5,000,000/2,000/3,000 parallel segments respec-\ntively. We use two test sets, where the first is tar-\ngeted towards the evaluation of terminology and\ncontainsatleastoneterminologyentrypairineach\nsentence,whereastheotherdoesnothaveterminol-\nogy annotated. We will refer to them as terminol-\nogyandno-terminology test sets respectively.\nTerminology SAPterm is organized into con-\ncepts where terms that are translations of each\nother are linked. A concept can cover different\nterm types, such as a main term entry, its syn-\nonyms, acronyms or abbreviations. To generate\na high-quality glossary, we only consider source-\ntarget term pairs consisting of main term entries\nand their synonyms. To avoid common words and\nspurious entries, we filter out high-frequency and\nlow-frequency entries.3We therefore only select\na subset of all entries in SAPterm, consisting of\n3We filter out term pairs where the English side occurs more\nthan5,000timesorlessthan100timesinalargecorpus(>20\nmillion sentences) of proprietary SAP data.\nappend en This 0indicator 0is0only 0necessary 0for0manual 1depreciation 1manuelle 2Abschreibung 2and0\nwrite-ups 0.0\nreplace en This 0indicator 0is0only 0necessary 0for0manuelle 2Abschreibung 2and0write-ups 0.0\nRef. de Das Kennzeichen wird nur für manuelle Abschreibungen und Zuschreibungen benötigt .\nTable 1: Example input for the two term injection methods appendandreplace. Source factors are indicated as indices. 
The\nterminology entry is (manual depreciation, manuelle Abschreibung).\nen–de en–ru\ntrain 784,666 582,281\nvalidation 303 238\nterminology test 4,868 3,510\nno-terminology test 0 0\nTable 2: Number of term annotations\n116,188 entries for English–Russian and 153,417\nentries for English–German.\nWe apply a fuzzy matching strategy to find and\nannotatethetermsinourdata,asmotivatedinSec-\ntion 3. Specifically, we lemmatize4on the English\nside, and allow for differences of two characters\non the target side. In case of multiple overlapping\nmatches, we keep only the longest match. Inspired\nby Dinu et al. (2019), we strictly separate training\nand testing terminology entries and select our par-\nalleldataaccordinglytodemonstratethezero-shot\nlearning capabilities of the model. For train-by\nmethodsweannotate10%ofthetrainingandvali-\ndation segments with terminology using the train-\ning terms. The term annotation statistics can be\nfound in Table 2.\nPreprocessing We tokenize all data using\nNLTK5and perform a joint source and target\nBPE encoding (Sennrich et al., 2016) using 89.5k\nmergeoperations. Wefurthermoreinjectthetarget\ntermsforannotatedtermsaccordingtothe append\nandreplacemethods and generate source factors\non BPE-level accordingly (cf. Table 1).\n4.2 NMT Models\nWe make use of the Sockeye toolkit (Hieber et al.,\n2018)forthisinvestigation. Itsupportssourcefac-\ntors and constrained decoding out-of-the-box.6\nFor all our experiments, we use a transformer\nnetwork (Vaswani et al., 2017). We configure two\nencoding and two decoding layers, unless stated\notherwise. We also conduct experiments with a\n4http://www.nltk.org/api/nltk.stem.html#module-\nnltk.stem.wordnet\n5https://www.nltk.org/api/nltk.tokenize.html\n6https://awslabs.github.io/sockeye/training.htmlsixlayersetup( 6 layers),whichcorrespondstothe\nbase configuration of Vaswani et al. (2017). The\nearly stopping criterion is computed on the vali-\ndation data (32 validation runs without improve-\nment). All evaluations are performed with beam\nsize 5.\nFor both the appendandreplacemethod, we\ntrain and evaluate models in which the embedding\nof the term annotation is added or concatenated to\nthecorrespondingsubwordembedding. Weexper-\niment with embedding sizes of 8 and 16 for con-\ncatenation. To investigate the impact of the term\nannotation in the form of source factors, we also\ntrain and evaluate models without source factors\n(nofactors), while still using the term injection of\ntheappendandreplacemethod.\nFor comparison, we train a baseline without\ninjected terms and source factors. We further\ncompareagainstSockeye’simplementationofcon-\nstrained decoding, which is based on Post and Vi-\nlar(2018). Forthis,weusethebaselinemodeland\nconstrain the output to contain the target terms of\nthe terminology entries that are annotated in the\nterminology test set.\n5 Automatic Evaluation\nInthissectionwepresenttheresultsofourexperi-\nments using automatic evaluation.\n5.1 Metrics\nTo automatically assess the translation quality, we\nreport BLEU (Papineni et al., 2002) and CHRF\n(Popović,2015)onde-BPEedoutput,usingtheim-\nplementation in NLTK7. To evaluate how well the\nmodels adhere to the terminology constraints, we\nreport term rates (TR),computedasthepercentage\noftimesthetargettermisgeneratedintheMTout-\nputoutofthetotalnumberoftermannotations. We\nalso employ the previously used fuzzy matching\nstrategy to match the words in the output against\nthe annotated terms in the reference. 
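A rough sketch of the fuzzy matching and term rate computation just described: English source terms are lemmatized with NLTK's WordNet lemmatizer, target-side matches may differ from the glossary base form by up to two characters, and the term rate is the share of annotated entries whose target term is found in the MT output. The paper's exact rules (longest-match resolution, handling of overlaps and BPE) are only approximated here.

```python
# Sketch: fuzzy term matching (lemmatized English source, <=2 character edits on the
# target side) and the resulting term rate. Approximates, not reproduces, the paper's rules.
# Requires nltk.download("wordnet") for the lemmatizer.
import nltk
from nltk.stem import WordNetLemmatizer

lemmatizer = WordNetLemmatizer()

def source_term_present(src_tokens, src_term):
    term_lemmas = [lemmatizer.lemmatize(w.lower()) for w in src_term.split()]
    sent_lemmas = [lemmatizer.lemmatize(w.lower()) for w in src_tokens]
    n = len(term_lemmas)
    return any(sent_lemmas[i:i + n] == term_lemmas for i in range(len(sent_lemmas) - n + 1))

def target_term_present(hyp_tokens, tgt_term, max_edits=2):
    n = len(tgt_term.split())
    for i in range(len(hyp_tokens) - n + 1):
        window = " ".join(hyp_tokens[i:i + n]).lower()
        if nltk.edit_distance(window, tgt_term.lower()) <= max_edits:
            return True
    return False

def term_rate(annotations, hypotheses):
    """annotations: list of (sentence_id, tgt_term); hypotheses: sentence_id -> tokenized MT output."""
    hits = sum(target_term_present(hypotheses[sid], term) for sid, term in annotations)
    return 100.0 * hits / len(annotations)
```

The variant term rate reported in the following section is the same count, except that an annotation also scores as a hit when any of the alternative SAPterm translations of the source term is found in the output.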
Note that we\nare not interested in generating the exact morpho-\n7https://www.nltk.org/api/nltk.translate.html\nlogical form of the term that occurs in the refer-\nence or in the terminology database, but we want\nthe term in whatever form is required in the sen-\ntential context of the MT output. We also report\nthevariant term rate (variant TR), in which a tar-\nget term is also counted as correct if it coincides\nwith one of the other possible translations of the\nsource term according to SAPterm. We are aware\nthatthosetermratesonlyapproximatethetruth,as\ndoallautomaticMTevaluationmetrics. Hencewe\nquantifysomeshortcomingsinSection7.2andadd\na human evaluation in Section 6.\n5.2 Results\nResults for en–de and en–ru can be found in Ta-\nbles 3 and 4 respectively. Our train-bysystems\nare labeled according to whether they use the ap-\npendorreplacemethod from Dinu et al. (2019)\nand which kind of source factor embedding strat-\negytheyemploy. Wepresentresultsforthetestsets\nterminology andno-terminology separately. The\nfirstallowsustodemonstratehowthedifferentap-\nproaches fare in terms of translation quality and\nterm accuracy, while the latter serves as a san-\nity check to make sure that the general translation\nquality does not suffer for data without terminol-\nogy.\nThe first thing to note is that BLEU scores for\nen–ru on the terminology data set are a lot higher\nthan for en–de. This can be explained by the test\nsets that differ in sentence length and grammati-\ncalcomplexity. Withanaverageof17.7words,the\nen–de data contains a large number of longer sen-\ntences with a higher term density. The en–ru data\nin contrast contains many short simple sentences\nwith an average of 9.04 words per segment with\nmostly only one term.\nTerminologytestdata Itcanbeeasilyseenthat,\nfor both language pairs, all train-bymodels out-\nperform the baseline in terms of translation qual-\nityandtermratebyawidemargin. Comparingthe\ntermratewiththevarianttermratefortheindivid-\nual models reveals that, while the baseline some-\ntimeschoosesanalternativetranslationforaterm,\nthis does not hold for the train-bymodels where\nthe two term rates are basically the same. Over-\nall, the results show that the train-byapproach is\neffectiveinimprovingthetranslationqualityusing\nterminologyconstraintsintheevaluatedusagesce-\nnarioofSAPdataannotatedwithterminologyfrom\nSAPterm.Taking all results into account, the append\nmethod works better than the replacemethod for\nour experimental setup. Looking only at the ap-\npendmethod results, concatenation of the two\nembedding vectors works better than summariza-\ntion. From the approaches that use source factors,\ntheappend-concat16 settingconsistentlyperforms\nbest,bothintermsofoveralltranslationqualityand\nterm rate. This finding holds for both language\npairs.\nWe rerun the most promising setting as well as\nthe baseline with the six-layer transformer for en–\nde. As expected, both show an improvement for\nall metrics over their respective two-layer coun-\nterpart. The finding that the append-concat16 ap-\nproach outperforms the baseline in terms of trans-\nlation quality and term rate by a wide margin thus\nholds for the shallow model as well as for the\ndeeper model.\nSomewhat surprisingly, we can observe that the\nimpact of source factors is small for en–de and\nnonexistent, or even slightly detrimental for en–\nru. 
It seems that the model has learned the code\nswitching that happens in the source sentence and\nthe intended copy behavior of the injected terms\nto the output, without requiring the additional in-\nputsignal. Wehypothesizethatthedifferentscripts\nof English and Russian, Latin and Cyrillic, are the\nreason why the model picks up the code switching\nbetter than for en-de, which both use the Latin al-\nphabet.\nFinally,whencomparingthe train-bymethodsto\nconstraineddecoding,weobservethateventhough\nconstrained decoding reaches almost perfect term\nrates( >99%),theoveralltranslationqualitythatis\nachieved with the train-bymodels is clearly supe-\nrior. The decrease in BLEU further confirms ob-\nservations that have previously been made in the\nliterature (cf. Section 2), namely that constrained\ndecodingcansometimesleadtoquestionabletrans-\nlation quality. In addition, it is important to note\nthat constrained decoding caused an approximate\nsixfold increase in translation time in our experi-\nments, while no such impact was observed for the\ntrain-bymodels.\nTest data without terminology The results of\nthe individual approaches on the no-terminology\ntestdatashowslightdifferencesintranslationqual-\nity as measured by BLEU and CHRF. We deem\nthosetobewithintheregularvariationthatwesee\namongst different training runs with the same data\nterminology no-terminology\nBLEU CHRF TR Variant TR BLEU CHRF\nBaseline 42.74 72.11 71.20 76.73 48.02 71.87\nConstrained decoding 41.81 73.91 99.51 99.65 –”– –”–\nAppend-concat16 47.08 76.06 96.40 96.52 48.22 72.01\nAppend-concat8 46.72 75.81 96.30 96.50 47.67 71.59\nAppend-sum 46.45 75.74 96.24 96.42 47.83 71.62\nReplace-concat16 45.41 75.31 96.30 96.34 47.79 71.67\nReplace-sum 45.75 75.46 96.44 96.50 48.21 71.99\nAppend-nofactors 46.19 75.58 95.06 95.43 47.26 71.56\nReplace-nofactors 45.50 75.16 95.37 95.52 48.04 72.13\nBaseline (6 layers) 43.50 72.66 71.98 77.31 48.66 72.52\nAppend-concat16 (6 layers) 47.45 76.60 96.87 97.16 48.98 72.79\nTable 3: Results for English–German on the terminology andno-terminology test sets\nterminology no-terminology\nBLEU CHRF TR Variant TR BLEU CHRF\nBaseline 50.24 72.57 64.10 69.09 41.79 63.21\nConstrained decoding 42.10 78.08 99.12 99.23 –”– –”–\nAppend-concat16 61.23 81.06 95.72 95.81 41.80 63.02\nAppend-sum 60.94 80.91 95.30 95.32 41.77 62.99\nReplace-concat16 60.30 80.46 94.92 94.92 42.04 63.11\nReplace-sum 60.29 80.33 95.10 95.10 41.87 63.15\nAppend-nofactors 61.47 81.48 96.07 96.18 41.98 63.14\nReplace-nofactors 60.83 80.67 95.33 95.33 41.78 62.99\nTable 4: Results for English–Russian on the terminology andno-terminology test sets\nandconfiguration. Wethusconcludethatthe train-\nbyapproach in the investigated setting generally\ndoes not seem to have a negative impact on data\nwithout terminology constraints.\n6 Translators’ Assessment\nAs we apply MT in post-editing scenarios, it is of\nimportancethatourtranslatorsapproveofourpro-\nposedsolutionofenforcingSAP-specificterminol-\nogy. Takingtheshortcomingsofautomaticmetrics\nforMTintoaccount,wethereforealsoconducteda\nhuman evaluation.\n6.1 Setup\nFor the human evaluation, we chose to compare\nthe baseline and the two best-performing train-\nbymodels append-concat16 andappend-nofactors\nfrom the automatic evaluation. The latter scored\nsurprisingly well, requires less involved prepro-\ncessing and a simpler network architecture, which\nis appealing in a commercial setup. We selected\n100 segments from the terminology test set (cf.\nSection 4.1). 
As we were primarily interested in\nthedifferencesbetweenthethreesystems,wemade\nsure that none of the three translations are identi-caltoeachotherortothereferencetranslation. We\nmadesurethat35ofthetestsentencescontainmore\nthanonetermannotation,toalsocoverthispartic-\nular case.\nFor both language pairs, we had three testers\nwhoevaluatedthesame300translationsinablind\nevaluation using our in-house MT evaluation tool.\nTesterswereshownthesourcewithhighlightedter-\nminology,therelevantterminologyentriesandone\ntranslation at a time in random order. They were\naskedtoratethetargettermaccuracyandtheover-\nall translation quality, both on a scale from one\n(poor) to six (excellent). Note that the human tar-\nget term accuracy does not directly correspond to\nthe automatic term rates (cf. Section 5), as testers\nwereadvisedtoalsoconsiderwhethertargetterms\nappear in the expected syntactic position and fit\nmophologically into their context.\n6.2 Results\nTo consolidate the results of the human evalua-\ntion, the accuracy and quality ratings of all testers\nwere averaged for each evaluated segment. Ta-\nble 5 shows the respective results. Generally, they\nconfirmthefindingsoftheautomaticevaluationin\nTerm accuracy Transl. quality\nen–de en–ru en–de en–ru\nBaseline 4.52 4.99 4.40 4.90\nAppend-concat16 5.74 5.70 4.54 4.98\nAppend-nofactors 5.79 5.69 4.50 4.90\nTable 5: Results of human evaluation: term accuracy rating\nand translation quality rating\nRating baseline nofactors concat16\nende enru ende enru ende enru\nexcellent 50% 53% 86% 80% 87% 77%\nvery good 6% 12% 9% 13% 7% 14%\ngood 5% 15% 2% 2% 0% 4%\nmedium 13% 8% 0% 0% 1% 2%\npoor 14% 8% 1% 3% 2% 3%\nvery poor 12% 4% 2% 2% 3% 0%\nTable 6: Distribution of term accuracy ratings for baseline\nandappendsystems\nSection 5. In addition, Table 6 shows the distribu-\ntion of the average term accuracy ratings.\nThe accuracy of the term translations of the\nbaseline model clearly lags behind the train-by\nmodels for both language pairs. The results how-\never also show that terminology is quite well cov-\nered by the baseline model already.\nThe term accuracies for append-concat16 and\nappend-nofactors approachthemaximumscorefor\nboth language pairs, and are very close to each\nother. Thisgivesrisetotheconclusionthattheap-\nproachworkssimilarlywellforenforcingterminol-\nogy on both morphologically average (de) as well\nas rich (ru) target languages.\nIntermsofoveralltranslationquality,thediffer-\nencebetweenthebaselineandthe appendsystems\nislesspronouncedthansuggestedbytheautomatic\nscores. Forbothlanguagepairs,thequalityratings\nof the appendmodels are comparable. Term en-\nforcement does not seem to have noticeable nega-\ntive side effects on overall translation quality.\nHuman evaluation also reveals that there is no\nquality loss when more than one term is injected\ninto a sentence. In the 35% of test segments\nwithmultipleterms,termaccuraciesofthe append\nmodels are even sightly higher than for sentences\nwithoneterm. Thisalsohasaneffectontheoverall\ntranslation quality. For append-concat16 , for ex-\nample,weseeapositivedifferenceof0.13(en–de)\nand 0.18 (en–ru) points between the average qual-\nityratingsofsentenceswithoneandwithmultiple\nterms.7 Examples & Discussion\nInthissection,wepresentexamplesofcorrectterm\ntranslations as well as an in-depth human analysis\nof the terms that were not produced according to\nthe automatic evaluation. 
Examples for en–de and\nen–ru are displayed in Table 7.\n7.1 Analysis of Term Translations\nWiththehightermratesofall train-bymodels(cf.\nTables3and4)itisexpectedthatthemodelsadhere\nwelltotheterminologyconstraints. Whentakinga\ncloserlookintotheoutputof append-concat16 ,we\nmake the following observations (examples taken\nfrom Table 7):\n•Terminology integrates smoothly into the\ncontext of the target language using correct\nmorphological forms (ex. 2). This is espe-\ncially important for a highly inflecting lan-\nguage like Russian where case information is\nproperly transferred (ex. 5, 6)\n•Single terms can build natural compound\nwords in German (ex. 3).\n•When enforcing nominal terminology, En-\nglish verb-noun ambiguities are often re-\nsolvedtowardsnouns,whichisreflectedinthe\ntranslation (ex. 5 compared to baseline). An-\nothereffectistheverbaltranslationofEnglish\nimperatives instead of using its nominaliza-\ntion (ex. 7 compared to baseline).\n•Enforcing nominal terminology leads\nto less compounding and prevents over-\ncompounding in German target (ex. 4).\n•Abbreviationsinthetranslationareprevented.\nIn our case, they are caused by large amounts\noftrainingdatafromheavilyabbreviatedcon-\ntent (ex. 4 reference and ex. 8 baseline).\n•The baseline translation often uses synonyms\nof the expected term (ex. 2, 6). This means\nthat the translation does not adhere to the ter-\nminology constraint, but that it is not com-\npletely wrong either.\n7.2 Missed Term Translations\nWe also analyzed sentences for which term en-\nforcementdidnotworkasexpected,i.e.theremain-\ning 3.6% and 4.3% from append-concat16 in Ta-\nbles 3 and 4 respectively. For this, 75 segments\nwithmissingtermtranslationsaccordingtotheau-\ntomatic evaluation were analyzed manually. The\nresults of this investigation are shown in Table 8.\n(1)product substitution – Produktsubstitution\nlocation substitution – Lokationsfindung\nSource Product Substitution e.g. nolocation substitution for oversea customer\nBaseline Produktersetzung, z.B. keine Lokationsersetzung für ÜberseeKunde\nAppend-concat16 Produktsubstitution z.B. 
keine Lokationsfindung für Überseekunden\nReference Produktsubstitution ; Beispiel: keine Lokationsfindung für Überseekunden\n(2)budget hierarchy – Haushaltsstruktur\nbudget – Haushalt\nSource Defining a budget hierarchy is the first step in setting up an overall budget.\nBaseline Die Definition einer Budgethierarchie ist der erste Schritt bei der Einrichtung eines Gesamtbudgets.\nAppend-concat16 Die Definition einer Haushaltsstruktur ist der erste Arbeitsschritt im Aufbau eines Haushalts .\nReference Der Aufbau einer Haushaltsstruktur ist der erste Schritt beim Einrichten eines Haushalts .\n(3)inconsistency – Inkonsistenz\nprogram error – Programmfehler\nSource The table inconsistency is probably due to a program error.\nBaseline Wahrscheinlich liegt ein Programmfehler vor.\nAppend-concat16 Die Tabellen inkonsistenz wird wahrscheinlich durch einen Programmfehler verursacht.\nReference Die Tabellen inkonsistenz ist vermutlich durch einen Programmfehler entstanden.\n(4) processing time – Bearbeitungszeit\nSource Field: Goods receipt processing time\nBaseline Feld: Wareneingangs bearbeitungszeit\nAppend-concat16 Feld: Bearbeitungszeit für den Wareneingang\nReference Field: WE Bearbeitungszeit\n(5)release order – отзыв\npackage number – номер пакета\nSource Purchase order: releaseorder package number\nBaseline Заказ на поставку: деблокировать номер пакета заказов\nAppend-concat16 Заказ на поставку: номер пакета отзыва\nReference Заказ на поставку: номер пакета отзыва\n(6)logical port – логический порт\nproxy class – прокси-класс\nSource No logical port found for the proxy class &1\nBaseline Логический порт для класса прокси &1 не найден\nAppend-concat16 Логический порт дляпрокси-класса &1 не найден\nReference Логический порт дляпрокси-класса &1 не найден\n(7)markup – надбавка\nreplacement category – тип замены\nSource To define a relative markup, choose the Relative Markup value in the Replacement Category field.\nBaseline Для определения относительной наценки выберите значение Относительная наценка в поле Тип\nзамены .\nAppend-concat16 Чтобы определить относительную надбавку , выберите значение Относительная надбавка в поле\nТип замены .\nReference Чтобы определить относительную надбавку , выберите значение Относительная надбавка в поле\nТип замены .\n(8) table key – ключ таблицы\nSource Table Key > 70 Char\nBaseline Табл. ключ > 70 символов\nAppend-concat16 Ключ таблицы > 70 символов\nReference Ключ таблицы > 70 символов\nTable 7: Examples for en–de and en–ru. Terminology constraints are provided above each example. Underlining is used to\nhighlight linguistic aspects described in Section 7.1.\nType of term match en–de en–ru\nTrue negative (unmatched) 56% 55%\nFalse negative (matched) 44% 45%\nTable 8: Results of analysis of negative term rate samples\nIt was found that among the analyzed examples\nthere are many false negatives, i.e. the expected\nterm translations were indeed produced. The rea-\nson is that our fuzzy term matching strategy on\nwhichthetermratesarebaseddoesnotcoverthem.\nIn the investigated examples, for both languages,\naround 45% of the terms were not recognized by\nthe term rate for the following reasons:\n•The term occurs in an inflected form that es-\ncapesthefuzzymatchofthetermrate(ex.7).\n•The term is part of a compound word that es-\ncapesthefuzzymatchofthetermrate(ex.3).\nWhenanalyzingtrulyproblematicterms,i.e.the\ntruenegativesthatwerenotgeneratedinthetrans-\nlationatall,patternsthathintatareasonareharder\nto detect. 
Generally, there are three types of behavior: most of the time, the term in question is translated by a synonym, sometimes it is mistranslated, and in rare cases it is dropped. For en–ru, there are a few terms in our test set that were not produced by the NMT model, for example transaction control - управление транзакциями. The problem also occurs for en–de but to a lesser extent. All those missed terms are properly annotated in the source text and, like the other terms in the test set, all segments containing these terms were removed from the training data. Without looking at the decoder in detail, we cannot draw any conclusions for now. It is possible that some translations are not enforced because another translation is too "strong", or because the target word does not exist in the training data and is therefore difficult to assemble and produce. We also noticed some problems in compounding, for example an incorrect connecting element on non-head words.
From our analysis we conclude that term enforcement using the train-by method does not always work perfectly, but we also know that MT in general does not always work perfectly either. Nevertheless, we have shown that the term rate is higher than what we have reported in Tables 3 and 4. This is due to the large number of false negatives of the term rate caused by the automatic evaluation strategy.
7.3 Considerations for a Production Setting
With the high term rates paired with improved translation quality and no negative impact on translation speed, the train-by method, specifically the append variant, offers a good trade-off for terminology enforcement in a production setting, particularly compared to current alternatives in the class of constrained decoding. Whether term rates are high enough for a productive scenario obviously depends on the specific requirements on the MT system and cannot be answered universally.
Note that we did not perform a human analysis of segments without terminology and only interpret the automatic scores. It remains to be seen whether the inline annotation, particularly if used without source factors, is reliable enough to not apply the learned copy mechanism on unsuitable occasions. Clearly, the results of this approach depend to a high extent on the quality of the term dictionary. Grammatical and lexical ambiguity of terms as well as the quality of translation correspondences are to be considered. Performance and precision of the term recognition mechanism are additional key factors for making this approach work.
8 Conclusion
We have investigated a new approach for terminology integration into NMT, originally proposed by Dinu et al. (2019), in a real-world setup. Our experimental setting was IT-related corporate data from SAP with terminology from SAP's terminology database, for two language pairs with rather morphologically rich target languages. Our study yields positive results, namely term rates >95% and improvements in translation quality compared to a baseline model as well as constrained decoding, with neither impacting the translation speed nor the translation quality on data without terminology. The improvements in term accuracy were furthermore confirmed in a human evaluation for both language pairs.
In an additional manual in-\nvestigation,weinspectedtheproblematiccasesand\nfound that almost half of them are false negatives,\nmeaning that term rates are in fact even higher.\nWe have furthermore confirmed that with this ap-\nproach the term translations are used flexibly in\nthesurfaceformrequiredbythesententialcontext.\nOverall, it seems to be a promising approach for\napplying terminology constraints.\nReferences\nBurlot,Franck. 2019. LinguacustodiaatWMT’19: At-\ntemptstocontrolterminology. In Proceedings of the\nFourth Conference on Machine Translation , pages\n147–154, Florence, Italy. Association for Computa-\ntional Linguistics.\nChatterjee, Rajen, Matteo Negri, Marco Turchi, Mar-\ncello Federico, Lucia Specia, and Frédéric Blain.\n2017. Guiding neural machine translation decoding\nwith external knowledge. In Proceedings of the Sec-\nond Conference on Machine Translation ,pages157–\n168, Copenhagen, Denmark. Association for Com-\nputational Linguistics.\nCrego,Josep,JungiKim,GuillaumeKlein,AnabelRe-\nbollo, Kathy Yang, Jean Senellart, Egor Akhanov,\nPatriceBrunelle,AurelienCoquard,YongchaoDeng,\nSatoshi Enoue, Chiyo Geiss, Joshua Johanson, Ar-\ndas Khalsa, Raoum Khiari, Byeongil Ko, Catherine\nKobus,JeanLorieux,LeidianaMartins,Dang-Chuan\nNguyen,AlexandraPriori,ThomasRiccardi,Natalia\nSegal,ChristopheServan,CyrilTiquet,BoWang,Jin\nYang, Dakun Zhang, Jing Zhou, and Peter Zoldan.\n2016. Systran’s pure neural machine translation sys-\ntems.\nDinu, Georgiana, Prashant Mathur, Marcello Federico,\nand Yaser Al-Onaizan. 2019. Training neural ma-\nchinetranslationtoapplyterminologyconstraints. In\nProceedings of the 57th Annual Meeting of the Asso-\nciation for Computational Linguistics , pages 3063–\n3068,Florence,Italy.AssociationforComputational\nLinguistics.\nFarajian, M. Amin, Nicola Bertoldi, Matteo Negri,\nMarco Turchi, and Marcello Federico. 2018. Eval-\nuation of terminology translation in instance-based\nneuralMTadaptation. In Proceedings of the 21st An-\nnual Conference of the European Association for Ma-\nchine Translation , pages 149–158, Alacant, Spain.\nHasler,Eva,AdriàdeGispert,GonzaloIglesias,andBill\nByrne. 2018. Neural machine translation decoding\nwith terminology constraints. In Proceedings of the\n2018 Conference of the North American Chapter of\nthe Association for Computational Linguistics ,pages\n506–512, New Orleans, Louisiana. Association for\nComputational Linguistics.\nHieber, Felix, Tobias Domhan, Michael Denkowski,\nDavid Vilar, Artem Sokolov, Ann Clifton, and Matt\nPost. 2018. The Sockeye Neural Machine Trans-\nlation toolkit at AMTA 2018. In Proceedings of\nthe 13th Conference of the Association for Machine\nTranslation in the Americas ,pages200–207,Boston,\nMA. Association for Machine Translation in the\nAmericas.\nHokamp, Chris and Qun Liu. 2017. Lexically con-\nstraineddecodingforsequencegenerationusinggrid\nbeam search. In Proceedings of the 55th Annual\nMeeting of the Association for Computational Lin-\nguistics, pages 1535–1546, Vancouver, Canada. As-\nsociation for Computational Linguistics.Hu, J. Edward, Huda Khayrallah, Ryan Culkin, Patrick\nXia, Tongfei Chen, Matt Post, and Benjamin\nVan Durme. 2019. Improved lexically constrained\ndecoding for translation and monolingual rewriting.\nInProceedings of the 2019 Conference of the North\nAmerican Chapter of the Association for Compu-\ntational Linguistics , pages 839–850, Minneapolis,\nMinnesota. 
Association for Computational Linguis-\ntics.\nHuck, Matthias, Viktor Hangya, and Alexander Fraser.\n2019. Better OOV translation with bilingual termi-\nnology mining. In Proceedings of the 57th Annual\nMeeting of the Association for Computational Lin-\nguistics, pages 5809–5815, Florence, Italy. Associ-\nation for Computational Linguistics.\nKobus, Catherine, Josep Crego, and Jean Senellart.\n2017. Domain control for neural machine transla-\ntion. In Proceedings of the International Conference\nRecent Advances in Natural Language Processing ,\npages 372–378, Varna, Bulgaria. INCOMA Ltd.\nPapineni,Kishore,SalimRoukos,ToddWard,andWei-\nJingZhu. 2002. Bleu: amethodforautomaticevalu-\nation of machine translation. In Proceedings of the\n40th Annual Meeting of the Association for Com-\nputational Linguistics ,pages311–318,Philadelphia,\nPennsylvania, USA. Association for Computational\nLinguistics.\nPopović, Maja. 2015. chrF: character n-gram f-score\nfor automatic MT evaluation. In Proceedings of\nthe Tenth Workshop on Statistical Machine Transla-\ntion, pages 392–395, Lisbon, Portugal. Association\nfor Computational Linguistics.\nPost, Matt and David Vilar. 2018. Fast lexically con-\nstrained decoding with dynamic beam allocation for\nneural machine translation. In Proceedings of the\n2018 Conference of the North American Chapter of\nthe Association for Computational Linguistics ,pages\n1314–1324,NewOrleans,Louisiana.Associationfor\nComputational Linguistics.\nSennrich, Rico and Barry Haddow. 2016. Linguis-\nticinputfeaturesimproveneuralmachinetranslation.\nInProceedings of the First Conference on Machine\nTranslation , pages 83–91, Berlin, Germany. Associ-\nation for Computational Linguistics.\nSennrich, Rico, Barry Haddow, and Alexandra Birch.\n2016. Neuralmachinetranslationofrarewordswith\nsubword units. In Proceedings of the 54th Annual\nMeeting of the Association for Computational Lin-\nguistics, pages 1715–1725, Berlin, Germany. Asso-\nciation for Computational Linguistics.\nVaswani, Ashish, Noam Shazeer, Niki Parmar, Jakob\nUszkoreit, Llion Jones, Aidan N Gomez, Łukasz\nKaiser, and Illia Polosukhin. 2017. Attention is all\nyou need. In Guyon, I., U. V. Luxburg, S. Bengio,\nH.Wallach,R.Fergus,S.Vishwanathan,andR.Gar-\nnett, editors, Advances in Neural Information Pro-\ncessing Systems 30 ,pages5998–6008.CurranAsso-\nciates, Inc.", "main_paper_content": null }
{ "decision": "Unknown", "reviews": [] }
0
0
[]
[]
{ "id": "hjfWhBSQt5", "year": null, "venue": "EAMT 2022", "pdf_link": "https://aclanthology.org/2022.eamt-1.22.pdf", "forum_link": "https://openreview.net/forum?id=hjfWhBSQt5", "arxiv_id": null, "doi": null }
{ "title": "\"Hi, how can I help you?\" Improving Machine Translation of Conversational Content in a Business Context", "authors": [ "Bianka Buschbeck", "Jennifer Mell", "Miriam Exel", "Matthias Huck" ], "abstract": null, "keywords": [], "raw_extracted_content": "“Hi, how can I help you?”\nImproving Machine Translation of Conversational Content\nin a Business Context\nBianka Buschbeck∗Jennifer Mell∗Miriam Exel Matthias Huck\nSAP SE\nDietmar-Hopp-Allee 16, 69190 Walldorf, Germany\[email protected]\nAbstract\nThis paper addresses the automatic trans-\nlation of conversational content in a busi-\nness context, for example support chat dia-\nlogues. While such use cases share charac-\nteristics with other informal machine trans-\nlation scenarios, translation requirements\nwith respect to technical and business-\nrelated expressions are high. To succeed\nin such scenarios, we experimented with\ncurating dedicated training and test data,\ninjecting noise to improve robustness, and\napplying sentence weighting schemes to\ncarefully manage the influence of the dif-\nferent corpora. We show that our approach\nimproves the performance of our models\non conversational content for all 18 in-\nvestigated language pairs while preserv-\ning translation quality on other domains –\nan indispensable requirement to integrate\nthese developments into our MT engines at\nSAP.\n1 Introduction\nAt SAP we build machine translation systems\nto cope with a huge translation volume, cover-\ning product localization and translation of docu-\nmentation, training materials or support instruc-\ntions for up to 85 languages. We usually train\nmixed-domain neural machine translation (MT)\nengines, whose training input consists of a mul-\ntitude of data sources including the contents of the\ncompany-internal translation memories from vari-\nous domains. The resulting MT systems produce\n∗Equal contribution.\n© 2022 The authors. This article is licensed under a Creative\nCommons 3.0 licence, no derivative works, attribution, CC-\nBY-ND.high-quality technical translations but have diffi-\nculties generating appropriate output for conversa-\ntional content, required for multilingual chatbots\nor product support chats. For an enhanced user ex-\nperience, it becomes more and more important that\nour customers can communicate in the language of\ntheir choice. Therefore, we started a project to im-\nprove translation quality for business-related con-\nversations. This includes better translations, but\nalso more robustness towards typos and poor input\nquality. We focus on methods that allow for im-\nprovements in one area without degrading quality\nin others, since we aim to deliver a single model\nper language pair.\nThe remainder of this paper is structured as fol-\nlows. Section 2 introduces the particular chal-\nlenges to be addressed when training an MT sys-\ntem for corporate conversational content. The\nmethods we explored are presented in Section 3\nand experimental results on selected language\npairs can be found in Section 4. 
Section 5 con-\ntains results on 18 language pairs of major interest\nat SAP using the final experimental configuration.\nWe point out open research questions in Section 6.\nRelated problems and approaches are highlighted\nin Section 7 before we summarize our findings in\nSection 8.\n2 Challenges\nOur baseline MT system is built on large amounts\nof technical documents and UI-style training data\nwith the consequence that it performs unsatisfac-\ntorily in business conversations, where the content\nis technical, but style is conversational. An exam-\nple conversation is given in Table 1 (column Orig-\ninal Conversation in English ). It contains a variety\nof features that are common in conversational con-\ntent, with major challenges posed by incomplete or\nAgent Original Conversation in English Baseline MT Improved MT\nUser hello anyone Bonjour `a tous Bonjour `a tous\nUser why my gdm is not working pourquoi mon gdm ne fonctionne\npasPourquoi mon gdm ne marche pas\n?\nUser it says .service file is not there cela indique que le fichier .service\nn’existe pas.C ¸ a dit que le fichier .service n’est\npas l `a\nSupport what are you doing exactly quefaites-vous exactement Que faites-vous exactement ?\nSupport where is it exactly o`u c’est exactement O`u est-ce exactement ?\nSupport i didnt evn see the question.. i didnt voir la question... Je n’ai m ˆeme pas vu la question.\nUser i have dsl connection my isp pro-\nvides me and i got device not man-\naed so i edit.conf file of nmi a dsl connexion mon isp me four-\nnit et i get device not manaed so i\nedit .conf file of nmJ’ai une connexion dsl que mon\nisp me fournit et je n’ai pas\ng´er´e l’appareil donc je modifie le\nfichier .conf de nm\nUser yeah it is oui, Oui, c’est c ¸a\nSupport id leave it alone. et laissez-le tranquille. Je le laisse tranquille.\nUser i thinking to filter that port i penser `afiltrer ce port Je pense filtrer ce port.\nSupport i never noticed it being open to the\nnetwork. just to localhosti n’a jamais remarqu ´e qu’ elle´etait\nouverte au r´eseau. Il s’agit sim-\nplement d’un h ˆote localJe n’ai jamais remarqu ´e qu’ il´etait\nouvert au r´eseau. Juste `a local-\nhost\nUser ok OK OK\nSupport so you m ay be worried about noth-\ningVous vous inqui ´etez donc de ne\nrienvoirDonc tu es toujours inquiet pour\nrien.\nSupport seems its not really an security is-\nsue and it makes loookups quickersemble qu’il ne s’agit pas vrai-\nment d’un probl `eme de s ´ecurit ´e et\nqu’il acc´el`ere les loookupsn’est pas vraiment un probl `eme de\ns´ecurit ´e et acc ´el`ere les recherches\nUser thanks remerciements Merci\nTable 1: Excerpt of an English conversation (from the Ubuntu Dialogue Corpus (Lowe et al., 2015)) translated to French using\nthe baseline and our improved MT model.\nPhenomenon Examples\nSpelling\nTypos thansk, tanks, thanx\nCasing cpu, i, aws\nSpacing ofcourse, any one, Id o\nLack of punctuation Hi are you there\nConversational word forms dunno, gotcha, doin’\nConversational variants hey, hey hi, hiya, howdy\nAbbreviations\nWord/phrase abbreviations plz, thx, np, omg, ttyl\nLetter/number homophones u r, I c, c u, u 2, some1\nParalinguistic features\nEmoticons :D ;-) :(\nEmotional expressions uh, hmm, oh, ah, whoa\nEmphasis - duplication no no no, oh noooo\nEmphasis - typography it’s URGENT, It broke\n*EVERYTHING*!\nExpletives damn!, crap, sh*t\nTable 2: Typical phenomena in conversational data.\nungrammatical sentences and high contextual de-\npendency. 
Conversational expressions (hello anyone, thanks) and syntactic structures such as questions and utterances in the first and second person singular are typical of conversational style. Technical documents do not provide good coverage of these phenomena. Support chats, moreover, exhibit other challenging phenomena that are summarized in Table 2 based on an initial exploration of in-domain data. While most of the listed linguistic issues could be corrected, paralinguistic phenomena, which are a kind of textual equivalent to verbal prosodic features or facial expressions, are more difficult. Emphasis expressed by word or letter duplication or by typography is highly language-specific and cannot be easily transferred. Even emoticons are not used in the same way across languages.
3 Methods
In this section, we describe the methods we investigated to address some of these challenges.
3.1 High-quality Parallel Data
The most straightforward way to improve the translation quality of conversational content would be to add appropriate training data. However, bilingual data in this domain is hard to find. Even largely conversational datasets, such as OpenSubtitles (Lison and Tiedemann, 2016), are not well suited for this purpose, as business conversations are highly technical.
Thus, we manually select and translate appropriate sentences to enrich our available training data with conversational style segments (Section 2). To collect suitable source segments, we draw on different resources such as support dialogues and expressions used for intents in our chatbots. But the most valuable resource is the Ubuntu Dialogue Corpus (UDC) (Lowe et al., 2015), a publicly available dataset that contains almost one million two-person conversations extracted from Ubuntu technical support chat logs between 2004 and 2015. We create a list of utterances and their frequency from the UDC that helps us extract the following:
• Utterances that cover greetings, agreement, affirmations, refusal, uncertainty, wishes, regrets, hold-on expressions, thanks and responses to them, etc.
• Utterances starting with WH words and inverted questions (Are you, Do you, Does that, etc.), frequent in support dialogues but underrepresented in technical documentation.
• Utterances that contain the pronouns "I" and "you" to improve first- and second-person coverage.
• Frequent single-word utterances, as they are especially problematic.
We mainly focus on short expressions that do not contain vocabulary specific to the UDC. The resulting list of approximately 10,000 English segments is then normalized, since it contains too many variants of the same expression, differing only in spelling, punctuation, and casing, which would increase translation costs without resulting in more varied training data. The final corpus consists of 7,000 segments that we have had manually translated by our professional translators into the required target languages. Source variations are later created using the methods described in Section 3.4.
3.2 Domain Adaptation
We define domain adaptation as the task of optimizing a natural language processing system's parameters towards improved quality on a specific text domain. A text domain typically exhibits particular characteristics with respect to aspects such as genre, topic, style, terminology, and so on.
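The utterance-selection heuristics of Section 3.1 above lend themselves to a simple frequency-plus-pattern filter. The sketch below is an illustrative approximation: the phrase list, regular expressions and thresholds are placeholders, not the actual in-house selection rules.

```python
# Sketch: select candidate conversational utterances from a (utterance -> frequency) map,
# e.g. built from the Ubuntu Dialogue Corpus. Patterns and thresholds are illustrative only.
import re
from collections import Counter

GREETING_LIKE = {"hello", "hi", "hey", "thanks", "thank you", "ok", "sure", "sorry",
                 "no problem", "hold on", "one moment", "you're welcome"}
WH_OR_INVERTED = re.compile(
    r"^(what|why|how|when|where|who|which|are you|do you|does that|can you|is it)\b",
    re.IGNORECASE)
FIRST_SECOND_PERSON = re.compile(r"\b(i|you)\b", re.IGNORECASE)

def select_utterances(utterance_counts: Counter, min_freq=50, max_tokens=12):
    selected = []
    for utt, freq in utterance_counts.items():
        tokens = utt.split()
        if freq < min_freq or len(tokens) > max_tokens:
            continue  # keep only short, frequent utterances
        if (utt.lower().strip(" !?.") in GREETING_LIKE
                or WH_OR_INVERTED.search(utt)
                or FIRST_SECOND_PERSON.search(utt)
                or len(tokens) == 1):
            selected.append((utt, freq))
    return sorted(selected, key=lambda item: -item[1])
```

In practice the selected list would still need the normalization step described above (collapsing casing, punctuation and near-duplicate spelling variants) before being sent for translation.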
Domain adap-\ntation for MT is an established field of study (Chu\nand Wang, 2018), with fine-tuning nowadays be-\ning one of the prevalent paradigms for neural MT\nmodels (Freitag and Al-Onaizan, 2016; Huck et\nal., 2017). In fine-tuning, training of a generic MT\nmodel is continued using in-domain data. The pit-\nfalls of this method are overfitting and quality loss\non out-of-domain data (Huck et al., 2015; Thomp-\nson et al., 2019). We found that sentence weighting\n(Chen et al., 2017; Rieß et al., 2021; Wang et al.,2017) suits our purpose of adapting towards con-\nversational content better while at the same time\nnot sacrificing translation quality on other text do-\nmains, thus keeping overall system performance\nstable. We apply a straightforward up-weighting\ntechnique by giving higher instance weights to\nsubsections of the training set which contain con-\nversational content. Experimental results on this\nwill be reported in Section 4.3.\n3.3 Error-sensitive Back-translation Scoring\nThe amount of conversational training data for MT\nmodels can be increased by employing synthetic\nbitext from back-translation (Huck et al., 2011;\nSchwenk, 2008; Sennrich et al., 2016a). We back-\ntranslate the UDC dataset with the aim of bene-\nfiting conversational style and vocabulary cover-\nage without harming grammaticality and spelling\nof MT output. To that end, we first clean the\ndataset using in-house scripts, resulting in 4.6 mil-\nlion English sentences. We then machine-translate\nthe English sentences into the source languages of\nthe models which we intend to improve, using our\nexisting engines for back-translation in the reverse\ndirection. Experiments are thus only carried out\non language directions with English target (Sec-\ntion 4.6).\nWe assume that grammatical and correctly\nspelled input sentences result in better back-\ntranslations, which in turn will lead to better per-\nformance of the final model. Furthermore, we re-\nquire the final model to produce grammatical sen-\ntences despite the training references containing\nuser-generated text. We therefore use Acrolinx1\nto measure the acceptability of a segment in terms\nof grammaticality and spelling. Acrolinx is AI-\npowered software that improves the quality and\nimpact of enterprise content. Using a customized\nversion of Acrolinx specialized for the techni-\ncal support domain, we extract grammaticality,\nspelling, and clarity scores for every sentence and\naggregate them into a sentence-level acceptability\nscore. We further include sentence length into each\nsentence-level score since exploratory analysis has\nshown that longer sentences tend to achieve lower\nAcrolinx scores. The sentence-level scores will be\nused in Section 4.6 to either filter or weight the\nback-translated UDC training data.\n1https://www.acrolinx.com/\n3.4 Noise Injection\nTo improve and assess model robustness beyond\nthe addition of conversational style segments, we\ninject noise into the in-domain subsets of training\nand test data. We replicate some typical chat phe-\nnomena (Table 2) by injecting noise in the form\nof(1.)typos, (2.)common chat variants and word\nforms, (3.) lowercasing and (4.) punctuation re-\nmoval on the source side only. The required lan-\nguage data for typo injection and generation of\nchat variants (described below) is only available\nin English, restricting experiments to language di-\nrections with English source. Table 3 gives an\noverview of all generated variants. 
They are gener-\nated from the unmodified source data, except vari-\nants of conversational data (Section 3.1), which are\nbased on the normalized dataset.\nFor typo generation we apply an approach sim-\nilar to Shah and de Melo (2020) and compute\na model of real-world typos based on a collec-\ntion of character-level typos found in individual\ntokens. Typos are grouped into four categories:\ninsertion (ex.: threre), deletion (ex.: particu ar),\nsubstitution (ex.: favulous ) and transposition (ex.:\ncorce ct). For each error category and each char-\nacter, we calculate probability distributions based\non corpus occurrences. They constitute a statisti-\ncal model of typos in the English language which\nwe refer to as the typo model . For details on the\ncomputation of the probabilities, please see Shah\nand de Melo (2020).\nFor every token in a source sentence, we sam-\nple from a token corruption probability ( c) to de-\ntermine whether any noise will be injected. If a\ntoken is chosen for noise injection, we iterate over\nits characters and decide according to a typo prob-\nability ( t) whether an error will be inserted at the\ncurrent character. Using the typo model as a noise\nfunction, we sample from the calculated probabil-\nity distributions to generate one of the four types\nof errors.\nWe inject spelling errors using two approaches.\nSimply applying the typo model and method as\ndescribed above results in the artificial variants.\nAdditionally, we inject typos and further filter\nthe generated errors by checking corrupted tokens\nagainst token-level typo lists. This yields the real\nvariants which are modified with real-world typos\nonly.\nTable 4 contains the hyperparameters used to\ngenerate three different misspelling levels for bothVariant\n1 Low real typo injection\n2 Medium real typo injection\n3 High real typo injection\n4 Low artificial typo injection\n5 Medium artificial typo injection\n6 High artificial typo injection\n7 Colloquial replacements\n8 Lowercasing\n9 Punctuation removal\n10 Lowercasing and punctuation removal\nTable 3: List of generated source-side variants for a single\ndataset.\nartificial real\nc t c t\nLow 0.2 0.025 1.0 0.1\nMedium 0.3 0.05 1.0 0.2\nHigh 0.5 0.075 1.0 0.3\nTable 4: Token corruption probability ( c) and typo probabil-\nity (t) for injecting noise using the typo model.\napproaches. They are based on preliminary ex-\nperiments and settings reported by Shah and de\nMelo (2020). The parameters for the realapproach\nwere chosen such that, after the restrictive filtering\nstep, the level of noise was comparable to that of\nthe corresponding artificial variant. Comparability\nwas assessed via the distribution of typos per sen-\ntence and manual checks of the resulting variants.\nWe thus obtain a total of six variants from injecting\ntypos for a single dataset (Table 3, rows 1–6).\nAdditionally, we create a variant of the dataset\nwhere we replace standard language with typical\nconversational expressions, abbreviations and ho-\nmophones (Table 3, row 7) using an in-house ex-\npression mapping. For example, “ thanks ” is re-\nplaced with “ thx”, “give me ” turns into “ gimme ”,\n“are you ” becomes “ r u” etc.\nLastly, we generate three additional variants of\nthe data by lowercasing it and/or removing punc-\ntuation (Table 3, rows 8–10).\n4 Experiments\nWe now empirically evaluate the methods intro-\nduced in Section 3, with the goal of improving\nMT quality on conversational content. 
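Before moving on to the experiments, the following simplified sketch illustrates the typo-injection procedure of Section 3.4. It is an illustration only, not the in-house implementation: a uniform choice over the four error categories stands in for the corpus-derived typo model of Shah and de Melo (2020), and the filtering of the real variants against lists of attested typos is omitted. The values c=0.5 and t=0.075 in the example correspond to the high artificial setting of Table 4.

```python
import random
import string

def corrupt_char(word, i, rng):
    """Apply one of the four error categories at character position i."""
    op = rng.choice(["insertion", "deletion", "substitution", "transposition"])
    c = rng.choice(string.ascii_lowercase)
    if op == "insertion":
        return word[:i] + c + word[i:]
    if op == "deletion":
        return word[:i] + word[i + 1:]
    if op == "substitution":
        return word[:i] + c + word[i + 1:]
    if i + 1 < len(word):  # transposition with the following character
        return word[:i] + word[i + 1] + word[i] + word[i + 2:]
    return word

def inject_typos(tokens, c=0.3, t=0.05, rng=random):
    """Corrupt a token with probability c; within a chosen token, insert an
    error at each character position with probability t (cf. Table 4)."""
    noisy = []
    for tok in tokens:
        if rng.random() < c:
            i = 0
            while i < len(tok):
                if rng.random() < t:
                    tok = corrupt_char(tok, i, rng)
                i += 1
        noisy.append(tok)
    return noisy

print(" ".join(inject_typos("why my gdm is not working".split(),
                            c=0.5, t=0.075, rng=random.Random(7))))
```

The colloquial replacements, lowercasing and punctuation removal (Table 3, rows 7–10) can be generated with analogous string operations on the source side only.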
We focus\non conducting detailed experiments and presenting\nresults for two language pairs per method, one be-\ning rather close languages, the other rather distant.\nThese are English to French and Japanese ( en–fr ,\nen–ja ) for up-weighting and noise injection, and\nItalian and Japanese to English ( it–en ,ja–en ) for\nback-translation. In Section 5 we will demonstrate\nthat our main findings generalize to other language\npairs.\n4.1 Experimental Setup\nFor training we use large amounts of company-\ninternal parallel data that mostly consists of doc-\numentation, training materials, UI strings and sup-\nport instructions. We also utilize some publicly\navailable datasets. The training data amounts to\nabout 25 M parallel segments per language pair.\nThe data is tokenized using a simple tokenization\nscheme based on whitespace and punctuation, then\nsegmented into subwords using byte-pair encoding\n(Sennrich et al., 2016b).\nWe make use of the Marian toolkit (Junczys-\nDowmunt et al., 2018) for this investigation. For\nall our experiments, we use a Transformer network\nin the standard base configuration (Vaswani et al.,\n2017) and train it on the training data of the cor-\nresponding language pair. The early stopping cri-\nterion is computed on a dedicated validation set of\n4,000 parallel segments.\n4.2 Test Corpora\nTargeted changes to MT systems require mean-\ningful test sets to guide experimentation and to\nmeasure improvement. As it is hard to find pub-\nlicly available test data that reflects the technical\nsupport dialogue content we are interested in, we\ncreated new test sets consisting of customer sup-\nport dialogues and some dialogues taken from the\nUDC. In contrast to the conversational training\ndata, we kept the dialogue structure for the test data\nand selected a total of 21 dialogues, consisting of\nabout 1,000 sentences, that were also translated by\nprofessional translators after normalization.\nTo measure performance on noisy input, we\ncreated ten variations of the normalized English\nsource text of the support dialogues using the noise\ninjection techniques introduced in Section 3.4, see\nTable 3. While we analyzed scores on the individ-\nual test set variants in the experimental phase, we\nwill only present results on all variants combined\nhere. Obviously, the impact of the methods on the\nindividual test set variants differs but as we intend\nto cover different phenomena, the combined score\nalso helps to select the best overall configuration.\nWe use three groups of test data for in-domain\nand out-of-domain testing in this study:en–fr en–ja\nWeight CHRF2 B LEU CHR F2 B LEU\n1 59.4 36.3 41.1 34.1\n5 59.5 36.3 41.9 34.8\n10 59.8 36.9 42.1 35.2\n20 59.9 37.0 42.2 35.6\n30 59.9 37.2 42.1 35.2\n40 60.0 36.9 42.3 35.4\n50 59.8 36.9 42.3 35.5\nTable 5: CHRF2 and B LEU scores on the conversational test\nset with different weighting of the in-domain corpus. Best\nresults are highlighted in bold.\nConversational comprises the original and nor-\nmalized support dialogue test sets, their ten\nvariants (Table 3) and two additional related\npublicly available test sets.\nCorporate refers to a set of about 10 test sets with\ndiverse SAP-internal content.\nGeneric groups together public test sets from\nnews, Wikipedia, UN and EU sources.\nEach of these groups contains about 10,000–\n15,000 test segments, amounting to a total of about\n40,000 per language pair. 
We evaluate using case-\nsensitive CHRF2 (Popovi ´c, 2016) and B LEU (Pap-\nineni et al., 2002) and, in view of its better corre-\nlation with human judgment (Mathur et al., 2020),\nrely on CHRF2 for system choice. We report scores\naveraged over all test sets per group.\n4.3 Sentence Weighting Experiments\nThe amount of conversational training data we\nhave at our disposal is tiny compared to the rest\nof the training data. It corresponds to 0.02% for\nen–fr and to 0.06% for en–ja. Our first target is to\neffectively use the new in-domain training data de-\nscribed in Section 3.1 to adapt the model to the tar-\nget domain of conversational content. We thus fo-\ncus initially on conversational test sets, results on\nout-of-domain test data are reported in Section 4.5.\nInstead of fine-tuning, we use sentence weight-\ning, giving the in-domain training data more\nweight, see Section 3.2. We explore the up-\nweighting factor empirically (Table 5). A weight\nof 1 constitutes the baseline. Increasing the weight\nmultiplier yields a small but steady improvement.\nA factor of 40 delivers the best performance for\nen–fr and is almost equal to the best CHRF2 for en–\nja. For the purpose of applying a common weight\nsetting across language pairs, we keep the factor of\n40 fixed for subsequent experiments.\nTypos Lc. Punct. Colloq.\nLevel Corpus real art.\n0 None – – – – –\n1 Conv. ✓ – ✓ ✓ ✓\n2 Conv. ✓ ✓ ✓ ✓ ✓\n3Conv. ✓ ✓ ✓ ✓ ✓\nTatoeba ✓ ✓ (low)✓ ✓ –\nTable 6: Configurations of the different noise levels used in\nnoise injection experiments. Conv. denotes the conversational\ncorpus; Lc., Punct. and Colloq. refer to the lowercased, punc-\ntuation and colloquial variants; art. abbreviates artificial.\nen–fr en–ja\nLevel CHRF2 B LEU CHR F2 B LEU\n0 60.0 36.9 42.3 35.4\n1 60.7 37.9 42.5 36.3\n2 60.8 38.3 42.8 36.8\n3 61.4 38.6 43.4 36.9\n3 + Tatoeba 3x 61.5 39.1 43.5 37.4\nTable 7: Results of the noise injection experiments. The\nconversational corpus has a fixed weight multiplier of 40x.\nTatoeba 3x indicates addition of the Tatoeba corpus with a 3x\nweight multiplier. Best results are highlighted in bold.\n4.4 Noise Injection Experiments\nAs described in Section 3.4, noisy variants are in-\njected into the training and test data on the English\nsource only. The target remains in its original form\nso that the model learns to correct and translate\nat the same time. We categorize the noise injec-\ntion experiments into three levels (Table 6) where\nwe successively add more misspelled or wrongly\ncased data to the source of the training data. The\nadditional noisy data is weighted with a factor of 1.\nBesides the newly created conversational dataset\nwe also involve the Tatoeba corpus (Tiedemann,\n2020) that was already part of our training data and\nis rich in conversational expressions.\nThe results on the conversational test sets com-\nbined are shown in Table 7. As the test sets cover\ndifferent noise variants, we see a nice improvement\nwith the highest noise level 3, and conclude that\nwe gain in robustness of our MT system. Finally,\nwe also up-weight the original Tatoeba corpus by\na factor of 3. This gives an additional small, but\nconsistent improvement on the conversational test\ndata. Thus we select this configuration for further\ntrainings and evaluations.\n4.5 Out-of-domain Performance\nAs we want to integrate the selected configuration\ninto a mixed-domain “one-size-fits-all” model, we\nneed to make sure that the overall system quality\nremains stable. 
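For reference, the up-weighting applied in Sections 4.3 and 4.4 simply multiplies the loss of every training sentence by an instance weight (40 for the conversational corpus, 3 for Tatoeba, 1 otherwise). The sketch below shows one way to realize such a sentence-weighted cross-entropy; it is an illustrative PyTorch-style formulation, not the mechanism of the Marian toolkit used in the actual experiments.

```python
import torch
import torch.nn.functional as F

def sentence_weighted_loss(logits, targets, sent_weights, pad_id=0):
    """logits: [batch, seq_len, vocab]; targets: [batch, seq_len];
    sent_weights: [batch] instance weights, e.g. 40.0 for the
    conversational corpus, 3.0 for Tatoeba, 1.0 for everything else."""
    vocab = logits.size(-1)
    # per-token negative log-likelihood, with padding positions zeroed out
    nll = F.cross_entropy(logits.reshape(-1, vocab), targets.reshape(-1),
                          ignore_index=pad_id, reduction="none")
    nll = nll.view_as(targets)
    mask = (targets != pad_id).float()
    # every token of a sentence is scaled by that sentence's weight
    weighted = nll * mask * sent_weights.unsqueeze(1)
    return weighted.sum() / mask.sum()

# toy batch: two sentences, the second one is in-domain (weight 40)
logits = torch.randn(2, 5, 100, requires_grad=True)
targets = torch.randint(1, 100, (2, 5))
loss = sentence_weighted_loss(logits, targets, torch.tensor([1.0, 40.0]))
loss.backward()
```

Because only the loss scaling changes, decoding at inference time is unaffected.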
To check whether up-weighting ornoise injection harms translation quality on non-\nconversational test data, we measure the perfor-\nmance of the systems that perform best on con-\nversational test data on all other test sets, grouped\ninto corporate and generic test sets, as explained\nin Section 4.2. The results are reported in Ta-\nble 8. They show clear improvements on the con-\nversational test sets of over 2.0 CHRF2 points and\naround 3.0 B LEU points for both en–fr and en–\nja. Furthermore, the improvements do not lead to\ndegradations on other test sets. These findings sup-\nport the claim that the quality on all other test sets\nstayed quite stable.\n4.6 Error-sensitive Back-translation Scoring\nExperiments\nFor language pairs targeting English, we experi-\nment with adding different configurations of the\nUDC to the training data of the baseline systems:\nFull adds the entire back-translated UDC to the\ntraining data of the baseline.\nFilter adds only those pairs from the UDC where\nthe source segment’s acceptability score ex-\nceeds a set threshold.\nWeight adds the entire UDC, but assigns a weight\nbetween 0.2 and 1 to all segments based on\ntheir acceptability score.\nThe filtering threshold was set based on manual\nexploration of resulting filtered corpora for a small\ndevelopment set of UDC sentences. The filtered\nUDC dataset contains roughly 840,000 parallel\nsentences. For the weighting approach, we decide\nto down-weight noisy segments rather than up-\nweight correct segments due to the user-generated\nnature of the dataset. Table 9 shows the number of\nUDC sentences per weight.\nTable 10 contains the CHRF2 and B LEU scores\non all test sets for it–en and ja–en. Adding the\nentire UDC data ( full) improves performance for\nboth language pairs on in-domain test data. This\nindicates that the back-translations are of sufficient\nquality to provide training signals despite the do-\nmain mismatch of the translation system used to\nobtain them. For generic test sets, performance re-\nmains stable, while there is a slight drop in quality\non corporate test sets.\nComparing the filtering method ( filter ) with full,\nit performs similarly on generic and corporate test\nsets but does not achieve the same performance in-\ncrease on the conversational test sets. It should be\nnoted that filtering results in less than 20% of the\nCHR F2 B LEU\nLanguage pair Test domain Baseline Final version Baseline Final version\nen–frconversational 59.4 61.5 36.3 39.1\ngeneric 67.0 67.0 43.1 43.1\ncorporate 81.5 81.4 63.8 63.7\nen–jaconversational 41.1 43.5 34.1 37.4\ngeneric 33.9 34.5 35.8 36.3\ncorporate 67.8 68.0 69.8 70.0\nTable 8: Results on all test sets when adding the noise-injected and up-weighted conversa-\ntional training data to the baselines.Weight # segments\n0.2 3,636\n0.4 123,185\n0.6 727,263\n0.8 2,073,784\n1.0 1,622,266\nTable 9: Number of seg-\nments by weight for the\nweight experiment.\nCHR F2 B LEU\nLanguage pair Test domain Baseline full filter weight Baseline full filter weight\nit–enconversational 64.3 65.7 65.1 65.5 41.9 43.6 42.8 43.4\ngeneric 65.9 66.0 65.9 65.9 43.1 43.5 43.4 43.4\ncorporate 80.8 80.5 80.6 80.8 63.2 62.8 62.9 63.1\nja–enconversational 45.6 46.3 45.9 46.3 20.5 21.2 20.7 21.1\ngeneric 51.8 51.9 51.9 51.9 22.3 22.3 22.4 22.1\ncorporate 74.9 74.9 74.9 74.9 51.2 51.1 51.4 51.2\nTable 10: Results on all test sets when adding back-translated UDC data to the training data of the baselines. 
Best results are\nhighlighted in bold.\nUDC being added to the training data. However,\nfurther experiments with larger subsets of UDC\ndata have also not outperformed the fullmodel.\nWeighting the UDC data ( weight ) leads to in-\ndomain improvements comparable to full. Addi-\ntionally, adding the weighted UDC to the training\ndata does not compromise performance in other\ndomains. This may be on account of the down-\nweighting of ungrammatical segments, enabling\nthe weighting model to learn from conversational\ndata while preserving output quality.\n5 From Experiments to Production\nThe experimental results from Section 4 motivated\nus to use the same data assembling techniques and\nconfigurations for other language pairs that had not\nbeen previously tested. For the translation direc-\ntions with English source, Table 11 lists the lan-\nguage pairs and shows the gain in case-sensitive\nCHRF2 and B LEU for the three groups of test sets\n(see Section 4.2). Base constitutes the baseline, to\nwhich New adds up-weighted parallel data noise-\ninjected using the best configuration found in Sec-\ntion 4. Note that the scores for en–fr and en–\nja are slightly different from those in Table 8 as\nthe overall setup and training data composition of\nthe experimental and final systems are not exactly\nidentical. Across all language pairs there is con-\nsiderable improvement on the conversational test\nsets, while on the other domains (corporate and\ngeneric) the performance remains stable on aver-age, according to both automatic metrics. Thus,\nour approach works similarly well for the other\nseven language pairs as for English to French and\nEnglish to Japanese, showing that we can deliver\nhigh-quality business conversation MT broadly for\nmany languages without compromising translation\nquality of other text types.\nThe results of adding the back-translated UDC\ndata with error-sensitive weight factors for systems\ntranslating into English are shown in Table 12. Al-\nthough the impact is less pronounced than for the\nother language direction, it is consistent and visi-\nble. It is quite surprising that the large amount of\nback-translated data is not harming the translation\nquality in other domains.\nTo illustrate the differences, we refer back to Ta-\nble 1, comparing the French MT output after the\nquality improvements with the baseline engine’s\noutput on the English example dialogue. The ex-\nample demonstrates that robustness to typos has\nimproved, and that punctuation is placed correctly.\nFewer words remain untranslated and the MT out-\nput is more fluent.\n6 Outlook\nAlthough we see nice improvements, the trans-\nlation quality in technical business conversations\ncould be further improved. 
We point out the main\nopen issues in this section, leaving them for future\nwork and calling for new methods to address them.\nCHR F2 B LEU\nTest domain Base New Base New\nen–deconversational 55.3 57.1 29.4 31.5\ngeneric 66.2 66.4 40.4 40.7\ncorporate 77.1 76.9 53.6 53.6\nen–esconversational 65.4 68.0 44.0 47.3\ngeneric 70.0 70.0 48.4 48.5\ncorporate 81.6 81.6 64.4 64.3\nen–frconversational 58.8 61.7 35.7 39.0\ngeneric 67.2 67.2 43.4 43.4\ncorporate 81.8 81.8 64.2 64.3\nen–itconversational 59.3 63.0 34.6 39.1\ngeneric 67.2 67.4 42.0 42.1\ncorporate 81.9 81.5 62.9 62.1\nen–jaconversational 41.6 43.9 34.2 37.5\ngeneric 33.8 34.2 35.3 36.1\ncorporate 70.5 71.0 72.1 72.5\nen–koconversational 44.1 46.3 20.2 22.5\ngeneric 65.9 65.2 44.0 43.1\ncorporate 72.9 72.5 57.2 56.7\nen–ptconversational 68.5 71.5 46.4 51.0\ngeneric 69.6 69.9 45.5 46.1\ncorporate 84.3 84.3 68.3 68.3\nen–ruconversational 50.3 52.9 27.5 29.9\ngeneric 64.9 65.0 38.8 38.9\ncorporate 76.2 76.3 54.8 54.9\nen–zhconversational 48.9 49.0 35.3 37.5\ngeneric 42.6 43.3 45.6 46.2\ncorporate 70.9 71.8 72.1 73.0\nTable 11: CHRF2 and B LEU scores on test sets from all do-\nmains for the translation directions with English source.CHR F2 B LEU\nTest domain Base New Base New\nde–enconversational 60.1 60.7 36.1 36.6\ngeneric 67.0 67.6 44.1 44.7\ncorporate 81.7 81.5 65.4 65.0\nes–enconversational 67.2 68.3 45.0 46.5\ngeneric 69.2 69.8 46.3 47.2\ncorporate 81.0 80.9 63.8 63.4\nfr–enconversational 62.5 63.2 39.7 40.8\ngeneric 67.7 67.4 44.8 44.5\ncorporate 79.3 78.2 61.1 59.2\nit–enconversational 63.5 65.1 40.8 43.1\ngeneric 67.1 67.9 44.0 45.2\ncorporate 82.6 82.5 66.2 65.9\nja–enconversational 44.1 45.7 19.1 20.5\ngeneric 53.5 54.8 23.8 24.7\ncorporate 74.5 75.2 50.9 51.9\nko–enconversational 50.8 52.8 24.2 26.3\ngeneric 57.7 57.9 33.4 33.6\ncorporate 75.8 76.1 52.9 53.8\npt–enconversational 69.5 70.6 47.8 49.4\ngeneric 72.3 72.9 50.5 51.5\ncorporate 84.6 84.7 69.6 69.6\nru–enconversational 56.5 57.5 32.8 33.8\ngeneric 64.9 64.9 39.0 39.0\ncorporate 75.9 75.8 55.7 55.2\nzh–enconversational 52.4 53.6 27.0 28.5\ngeneric 60.3 60.5 31.9 32.2\ncorporate 78.9 79.1 57.5 57.7\nTable 12: CHRF2 and B LEU scores on test sets from all do-\nmains for the translation directions with English target.\nIn order to enhance robustness with respect to\nmisspellings, casing, chat-typical conversational\nforms, or abbreviations, a normalization step in\npreprocessing could be investigated (Chitrapriya et\nal., 2018; Clark and Araki, 2011). This would sup-\nport subsequent MT. However, text normalization\nor automatic spelling correction (Peitz et al., 2013)\nis highly text-type specific and prone to over-\ngeneration when applied to non-conversational\ntext, especially for technical documentation with\nlots of acronyms and technical abbreviations. This\nis one of the reasons why we decided for the noise\ninjection approach targeted at conversational con-\ntent only.\nChat language includes other specific phenom-\nena which we did not specifically address in this\nwork, one of them being capitalization for em-\nphasis, which could be tackled, e.g., using a fac-\ntored representation for source and target (Garc ´ıa-\nMart ´ınez et al., 2016; Niehues et al., 2016; Wilken\nand Matusov, 2019). Another frequent phe-\nnomenon is emoticons, where one would need to\ndecide whether they should just be copied over, orwhether they also need to be localized to the target\nlanguage. 
For expletives in conversations, appli-\ncable methods largely depend on the expectations\nin specific use cases, i.e., should a swearword be\ntranslated to its counterpart in the target language,\nshould it be removed, or masked with asterisks?\nOur MT model operates on the sentence level,\nand we treat each utterance as one sentence. How-\never, in chat conversations, sentences are some-\ntimes spread over multiple utterances, meaning the\nsource is actually over-segmented, leading to poor\ntranslation quality. This could be improved by a\ndifferent segmentation paradigm, and/or by an MT\nmodel that takes dialogue context beyond the sen-\ntence level into account (Liang et al., 2021). The\nlatter should also improve the coherent use of pro-\nnouns and verbal forms within a dialogue.\nLevels of politeness and their expression in con-\nversations differ between cultures and languages.\nAccordingly, this also poses challenges for MT, es-\npecially when the target language has more fine-\ngrained distinctions than the source language.\n7 Related Work\nOur work has focused on four methods: (1.)In-\ntegrating parallel high-quality conversational con-\ntent into the training corpus, (2.)creating synthetic\nin-domain data via back-translation, (3.)data aug-\nmentation to make the model more robust to noisy\ninput, and (4.)model adaptation towards the style\nof conversational content in the business domain.\nPrior work by other researchers has pursued aims\nrelated to ours while often employing slightly dif-\nferent techniques. For instance, high-quality paral-\nlel data is oftentimes identified by means of pseudo\nin-domain data selection (Axelrod et al., 2011);\nback-translation can be improved by sampling or\nnoisy synthetic data (Edunov et al., 2018); better\nrobustness towards noisy input may be achieved\nwith a stochastically corrupted subword segmen-\ntation procedure (Provilkov et al., 2020); or do-\nmain adaptation might be feasible even in a semi-\nsupervised or unsupervised manner in certain sce-\nnarios (Dou et al., 2019; Niu et al., 2018). We are\nconfident that many of the existing related tech-\nniques are complementary to our work and will\nhelp further improve MT quality of conversational\ncontent in the business domain.\n8 Conclusion\nWe have shown that an MT model specialized in\nthe IT and business domains can be enhanced to\nalso cover conversational content well. This bal-\nancing act is highly relevant in scenarios such as\nproduct support chats or multilingual chatbots. We\nhave achieved that by curating high-quality paral-\nlel data to address phenomena where the model\nexhibited the most devastating shortcomings. We\nfurther add back-translated data from the dialogue\ndomain, inject typos, punctuation and capitaliza-\ntion variants to make the model more robust, and\ncarefully manage the influence of the different cor-\npora using a sentence weighting scheme. 
We have\ndemonstrated that promising results from experi-\nments involving only a few language pairs gen-\neralize well to the main languages in our produc-\ntion scenario at SAP, achieving an improvement of\n2.4 CHRF2 / 3.1 B LEU on average for language\npairs from English and 1.2 CHRF2 / 1.5 B LEU\nfor language pairs to English on our conversational\ntest sets, while the performance on other domains\nand test sets remains stable.Acknowledgments\nWe thank Vincent Asmuth, Nathaniel Berger, and\nDominic Jehle for proofreading and valuable dis-\ncussions, as well as the four anonymous reviewers\nfor their feedback and helpful comments.\nReferences\nAxelrod, Amittai, Xiaodong He, and Jianfeng Gao.\n2011. Domain Adaptation via Pseudo In-Domain\nData Selection. In Proc. of EMNLP , pages 355–362,\nEdinburgh, Scotland, UK, July.\nChen, Boxing, Colin Cherry, George Foster, and\nSamuel Larkin. 2017. Cost Weighting for Neural\nMachine Translation Domain Adaptation. In Proc.\nof the Workshop on Neural Machine Translation ,\npages 40–46, Vancouver, Canada, August.\nChitrapriya, N., Md. Ruhul Islam, Minakshi Roy, and\nSujala Pradhan. 2018. A Study on Different Nor-\nmalization Approaches of Word. In Advances in\nElectronics, Communication and Computing , pages\n239–251, Singapore.\nChu, Chenhui and Rui Wang. 2018. A Survey of Do-\nmain Adaptation for Neural Machine Translation. In\nProc. of COLING , pages 1304–1319, Santa Fe, NM,\nUSA, August.\nClark, Eleanor and Kenji Araki. 2011. Text Normaliza-\ntion in Social Media: Progress, Problems and Appli-\ncations for a Pre-Processing System of Casual En-\nglish. Procedia - Social and Behavioral Sciences ,\n27:2–11.\nDou, Zi-Yi, Junjie Hu, Antonios Anastasopoulos, and\nGraham Neubig. 2019. Unsupervised Domain\nAdaptation for Neural Machine Translation with\nDomain-Aware Feature Embeddings. In Proc. of\nEMNLP-IJCNLP , pages 1417–1422, Hong Kong,\nChina, November.\nEdunov, Sergey, Myle Ott, Michael Auli, and David\nGrangier. 2018. Understanding Back-Translation at\nScale. In Proc. of EMNLP , pages 489–500, Brussels,\nBelgium, October/November.\nFreitag, Markus and Yaser Al-Onaizan. 2016. Fast\nDomain Adaptation for Neural Machine Translation.\nCoRR , abs/1612.06897.\nGarc ´ıa-Mart ´ınez, Mercedes, Lo ¨ıc Barrault, and Fethi\nBougares. 2016. Factored Neural Machine Transla-\ntion Architectures. In Proc. of IWSLT , Seattle, WA,\nUSA, December.\nHuck, Matthias, David Vilar, Daniel Stein, and Her-\nmann Ney. 2011. Lightly-Supervised Training for\nHierarchical Phrase-Based Machine Translation. In\nProc. of the First Workshop on Unsupervised Learn-\ning in NLP , pages 91–96, Edinburgh, Scotland, UK,\nJuly.\nHuck, Matthias, Alexandra Birch, and Barry Haddow.\n2015. Mixed-Domain vs. Multi-Domain Statistical\nMachine Translation. In Proc. of MT Summit , pages\n240–255, Miami, FL, USA, October.\nHuck, Matthias, Fabienne Braune, and Alexander\nFraser. 2017. LMU Munich’s Neural Machine\nTranslation Systems for News Articles and Health\nInformation Texts. In Proc. of WMT, Vol. 2: Shared\nTask Papers , pages 315–322, Copenhagen, Den-\nmark, September.\nJunczys-Dowmunt, Marcin, Roman Grundkiewicz,\nTomasz Dwojak, Hieu Hoang, Kenneth Heafield,\nTom Neckermann, Frank Seide, Ulrich Germann,\nAlham Fikri Aji, Nikolay Bogoychev, Andr ´e F. T.\nMartins, and Alexandra Birch. 2018. Marian: Fast\nNeural Machine Translation in C++. In Proc. 
of\nACL, System Demonstrations , pages 116–121, Mel-\nbourne, Australia, July.\nLiang, Yunlong, Fandong Meng, Yufeng Chen, Jinan\nXu, and Jie Zhou. 2021. Modeling Bilingual Con-\nversational Characteristics for Neural Chat Transla-\ntion. In Proc. of ACL-IJCNLP (Vol. 1: Long Papers) ,\npages 5711–5724, Online, August.\nLison, Pierre and J ¨org Tiedemann. 2016. OpenSub-\ntitles2016: Extracting Large Parallel Corpora from\nMovie and TV Subtitles. In Proc. of LREC , pages\n923–929, Portoro ˇz, Slovenia, May.\nLowe, Ryan, Nissan Pow, Iulian Serban, and Joelle\nPineau. 2015. The Ubuntu Dialogue Corpus: A\nLarge Dataset for Research in Unstructured Multi-\nTurn Dialogue Systems. In Proc. of SIGDIAL , pages\n285–294, Prague, Czech Republic, September.\nMathur, Nitika, Timothy Baldwin, and Trevor Cohn.\n2020. Tangled up in BLEU: Reevaluating the Evalu-\nation of Automatic Machine Translation Evaluation\nMetrics. In Proc. of ACL , pages 4984–4997, Online,\nJuly.\nNiehues, Jan, Thanh-Le Ha, Eunah Cho, and Alex\nWaibel. 2016. Using Factored Word Representation\nin Neural Network Language Models. In Proc. of\nWMT: Vol. 1, Research Papers , pages 74–82, Berlin,\nGermany, August.\nNiu, Xing, Sudha Rao, and Marine Carpuat. 2018.\nMulti-Task Neural Models for Translating Between\nStyles Within and Across Languages. In Proc. of\nCOLING , pages 1008–1021, Santa Fe, NM, USA,\nAugust.\nPapineni, Kishore, Salim Roukos, Todd Ward, and Wei-\nJing Zhu. 2002. Bleu: a Method for Automatic\nEvaluation of Machine Translation. In Proc. of ACL ,\npages 311–318, Philadelphia, PA, USA, July.\nPeitz, Stephan, Saab Mansour, Matthias Huck, Markus\nFreitag, Hermann Ney, Eunah Cho, Teresa Her-\nrmann, Mohammed Mediani, Jan Niehues, Alex\nWaibel, Alexander Allauzen, Quoc Khanh Do,\nBianka Buschbeck, and Tonio Wandmacher. 2013.Joint WMT 2013 Submission of the QUAERO\nProject. In Proc. of WMT , pages 185–192, Sofia,\nBulgaria, August.\nPopovi ´c, Maja. 2016. chrF deconstructed: beta pa-\nrameters and n-gram weights. In Proc. of WMT: Vol.\n2, Shared Task Papers , pages 499–504, Berlin, Ger-\nmany, August.\nProvilkov, Ivan, Dmitrii Emelianenko, and Elena V oita.\n2020. BPE-Dropout: Simple and Effective Subword\nRegularization. In Proc. of ACL , pages 1882–1892,\nOnline, July.\nRieß, Simon, Matthias Huck, and Alex Fraser. 2021. A\nComparison of Sentence-Weighting Techniques for\nNMT. In Proc. of MT Summit , pages 176–187, Vir-\ntual, August.\nSchwenk, Holger. 2008. Investigations on Large-Scale\nLightly-Supervised Training for Statistical Machine\nTranslation. In Proc. of IWSLT , pages 182–189,\nWaikiki, HI, USA, October.\nSennrich, Rico, Barry Haddow, and Alexandra Birch.\n2016a. Improving Neural Machine Translation\nModels with Monolingual Data. In Proc. of ACL ,\npages 86–96, Berlin, Germany, August.\nSennrich, Rico, Barry Haddow, and Alexandra Birch.\n2016b. Neural Machine Translation of Rare Words\nwith Subword Units. In Proc. of ACL , pages 1715–\n1725, Berlin, Germany, August.\nShah, Kshitij and Gerard de Melo. 2020. Correcting\nthe Autocorrect: Context-Aware Typographical Er-\nror Correction via Training Data Augmentation. In\nProc. of LREC , Marseille, France, May.\nThompson, Brian, Jeremy Gwinnup, Huda Khayrallah,\nKevin Duh, and Philipp Koehn. 2019. Overcom-\ning Catastrophic Forgetting During Domain Adap-\ntation of Neural Machine Translation. In Proc. of\nNAACL-HLT , pages 2062–2068, Minneapolis, MN,\nUSA, June.\nTiedemann, J ¨org. 2020. 
The Tatoeba Translation Chal-\nlenge – Realistic Data Sets for Low Resource and\nMultilingual MT. In Proc. of WMT , pages 1174–\n1182, Online, November.\nVaswani, Ashish, Noam Shazeer, Niki Parmar, Jakob\nUszkoreit, Llion Jones, Aidan N. Gomez, Łukasz\nKaiser, and Illia Polosukhin. 2017. Attention is All\nyou Need. In Advances in Neural Information Pro-\ncessing Systems 30 , pages 5998–6008. Curran Asso-\nciates, Inc.\nWang, Rui, Masao Utiyama, Lemao Liu, Kehai Chen,\nand Eiichiro Sumita. 2017. Instance Weighting for\nNeural Machine Translation Domain Adaptation. In\nProc. of EMNLP , pages 1482–1488, Copenhagen,\nDenmark, September.\nWilken, Patrick and Evgeny Matusov. 2019. Novel Ap-\nplications of Factored Neural Machine Translation.\nCoRR , abs/1910.03912.", "main_paper_content": null }
{ "decision": "Unknown", "reviews": [] }
0
0
[]
[]
{ "id": "CQc-MhIU54", "year": null, "venue": "EAMT 2023", "pdf_link": "https://aclanthology.org/2023.eamt-1.8.pdf", "forum_link": "https://openreview.net/forum?id=CQc-MhIU54", "arxiv_id": null, "doi": null }
{ "title": "Enhancing Supervised Learning with Contrastive Markings in Neural Machine Translation Training", "authors": [ "Nathaniel Berger", "Miriam Exel", "Matthias Huck", "Stefan Riezler" ], "abstract": null, "keywords": [], "raw_extracted_content": "Enhancing Supervised Learning with Contrastive Markings\nin Neural Machine Translation Training\nNathaniel Berger∗, Miriam Exel‡, Matthias Huck‡andStefan Riezler†,∗\n∗Computational Linguistics &†IWR, Heidelberg University, Germany\n‡SAP SE, Dietmar-Hopp-Allee 16, 69190 Walldorf, Germany\n{berger, riezler }@cl.uni-heidelberg.de\n{miriam.exel, matthias.huck }@sap.com\nAbstract\nSupervised learning in Neural Machine\nTranslation (NMT) typically follows a\nteacher forcing paradigm where reference\ntokens constitute the conditioning context\nin the model’s prediction, instead of its\nown previous predictions. In order to\nalleviate this lack of exploration in the\nspace of translations, we present a sim-\nple extension of standard maximum like-\nlihood estimation by a contrastive mark-\ning objective. The additional training sig-\nnals are extracted automatically from ref-\nerence translations by comparing the sys-\ntem hypothesis against the reference, and\nused for up/down-weighting correct/incor-\nrect tokens. The proposed new training\nprocedure requires one additional transla-\ntion pass over the training set per epoch,\nand does not alter the standard inference\nsetup. We show that training with con-\ntrastive markings yields improvements on\ntop of supervised learning, and is espe-\ncially useful when learning from postedits\nwhere contrastive markings indicate hu-\nman error corrections to the original hy-\npotheses. Code is publicly released1.\n1 Introduction\nDue to the availability of large parallel data sets\nfor most language pairs, the standard training pro-\ncedure in Neural Machine Translation (NMT) is\n© 2023 The authors. This article is licensed under a Creative\nCommons 4.0 licence, no derivative works, attribution, CC-\nBY-ND.\n1https://www.cl.uni-heidelberg.de/\nstatnlpgroup/contrastive_markings/supervised learning of a maximum likelihood ob-\njective where reference tokens constitute the target\nhistory in the conditional language model, instead\nof the model’s own predictions. Feeding back\nthe reference history in model training, known as\nteacher forcing (Williams and Zipser, 1989), en-\ncourages the sequence model to stay close to the\nreference sequence, but prevents the model to learn\nhow to predict conditioned on its own history,\nwhich is the actual task at inference time. This\nlack of exploration in learning has been dubbed\nexposure bias by Ranzato et al. (2016). It has\nbeen tackled by techniques that explicitly inte-\ngrate the model’s own prediction history into train-\ning, e.g. scheduled sampling (Bengio et al., 2015),\nminimum risk training (Shen et al., 2016), rein-\nforcement learning (Bahdanau et al., 2017), im-\nitation learning (Lin et al., 2020), or ramp loss\n(Jehl et al., 2019), amongst others. In most of\nthese approaches, feedback from a human ex-\npert is simulated by comparing a system transla-\ntion against a human reference according to an\nautomatic evaluation metric, and by extracting a\nsequence- or token-level reward signal from the\nevaluation score.\nIn this paper, we present a method to incorpo-\nratecontrastive markings of differences between\nthe model’s own predictions and references into\nthe learning objective. 
Our approach builds on pre-\nvious work on integrating weak human feedback\nin form of error markings as supervision signal in\nNMT training (Kreutzer et al., 2020). This work\nwas conceptualized for reducing human annotation\neffort in interactive machine translation, however,\nit can also be used on simulated error markings ex-\ntracted from an automatic evaluation score. It al-\nlows the model to extract a contrastive signal from\nthe reference translation that can be used to re-\ninforce or penalize correct or incorrect tokens in\nthe model’s own predictions. Such a reward signal\nis more fine-grained than a sequence-level reward\nobtained by a sequence-level automatic evaluation\nmetric, and less noisy than token-based rewards\nobtained by reward shaping (Ng et al., 1999).\nOur hypothesis is that such contrastive mark-\nings should be especially useful in learning se-\ntups where human postedits are used as reference\nsignals. In such scenarios, contrastive markings\nare likely to indicate erroneous deviations of ma-\nchine translations from human error corrections,\ninstead of penalizing correct translations that hap-\npen to deviate from independently constructed hu-\nman reference translations. We confirm this hy-\npothesis by simulating a legacy machine transla-\ntion system for which human postedits are avail-\nable by performing knowledge distillation (Kim\nand Rush, 2016) on the stored legacy machine\ntranslations. We define a “legacy” machine trans-\nlation system as a system which was previously\nused in production and produced translations for\nwhich human feedback was gathered, but which\nis no longer productive. Knowledge distillation\nis required because the legacy system is a black-\nbox system that is unavailable to us, but its out-\nputs are available. For comparison, we apply our\nframework to standard parallel data where refer-\nence translations were generated from scratch. Our\nexperimental results show that on both datasets,\ncombining teacher forcing on postedits with learn-\ning from error markings, improves results with re-\nspect to TER on test data, with larger improve-\nments for the knowledge-distilled model that emu-\nlates outputs of the legacy system.\nA further novelty of our approach is the true\nonline learning setup where new error markings\nare computed after every epoch of model train-\ning, instead of using constant simulated markings\nthat are pre-computed from fixed machine trans-\nlation outputs as in previous work (Petrushkov et\nal., 2018; Grangier and Auli, 2018; Kreutzer et al.,\n2020). Online error markings can be computed in\na light-weight fashion by longest common subse-\nquence calculations. The overhead incurred by the\nnew training procedure is one additional transla-\ntion pass over the training set, whereas at inference\ntime the system does not require additional infor-\nmation, but can be shown to produce improved\ntranslations based on the proposed improved train-\ning setup.2 Related Work\nMost approaches to remedy the exposure bias\nproblem simulate a sentence-level reward or cost\nfunction from an automatic evaluation metric and\nincorporate it into a reinforcement- or imitation-\nlearning setup (Ranzato et al., 2016; Shen et al.,\n2016; Bahdanau et al., 2017; Lin et al., 2020; Jehl\net al., 2019; Gu et al., 2019; Xu and Carpuat,\n2021).\nMethods that are conceptualized to work di-\nrectly with human postedits integrate the human\nfeedback signal more directly, without the mid-\ndleman of an automatic evaluation heuristic. 
The standard learning paradigm is supervised learning where postedits are treated as reference translations (see, for example, Turchi et al. (2017)). Most approaches to learning from error markings adapt the supervised learning objective to learn from correct tokens in partial translations (Marie and Max, 2015; Petrushkov et al., 2018; Domingo et al., 2017; Kreutzer et al., 2020).
The QuickEdit approach (Grangier and Auli, 2018) uses the hypothesis produced by an NMT system and token-level markings as an extra input to an automatic postediting (APE) system, and additionally requires markings on the system output at inference time. This requires a dual-encoder architecture with the decoder attending to both the source and hypothesis encoders; in this case, the convolutional encoders and decoders of Gehring et al. (2017) are used.
Our approach builds upon the work of Petrushkov et al. (2018) and Kreutzer et al. (2020), who incorporate token-level markings as a learning signal into NMT training. In contrast to Grangier and Auli (2018), who compute markings offline before training and require them for inference, we only require them during training and calculate markings online. Furthermore, instead of presenting markings to the system as an extra input, they are integrated into the objective function as a weight. While Petrushkov et al. (2018) simulate markings from reference translations by extracting deletion operations from longest common subsequence calculations, Kreutzer et al. (2020) show how to learn from markings solicited from human annotators. In contrast to these approaches, we integrate markings to enhance supervised learning in a true online fashion.

Source:     To remove the highlighting , un@@ mark the menu entry .
Hypothesis: Um die Her@@ vor@@ hebung zu entfernen , mark@@ ieren Sie den Menü@@ ein@@ trag .
Reference:  Um die Her@@ vor@@ hebung auszu@@ schalten , de@@ aktivieren Sie diesen Menü@@ ein@@ trag .
Markings:   1 1 1 1 1 0 0 1 0 0 1 0 1 1 1 1

Table 1: An example of a source, hypothesis, and reference triple along with the contrastive markings generated by comparing the hypothesis to the reference. Markings of 1 indicate a correct subword token, while 0 indicates an incorrect subword token. We used byte-pair encoding (Sennrich et al., 2016), and the "@@" indicates that a token is part of the same word as the following token. (In the original paper, incorrect tokens and their corresponding markings are underlined and colored red.)

Figure 1: Left: The WMT21 APE dataset is created by having a black-box NMT system generate hypothesis translations. These logged hypotheses are then given to human reviewers to postedit, creating triples of (source, hypothesis, postedit). Right: Because the system that generated the hypotheses is not available for us to fine-tune, we try to emulate it with knowledge distillation: the pre-trained model is trained to reproduce the original hypotheses by using them as targets with a cross-entropy loss, producing an emulated legacy model.

3 Methods

3.1 Learning Objectives
Let x = x_1 . . . x_S be a sequence of indices over a source vocabulary V_SRC, and y = y_1 . . . y_T a sequence of indices over a target vocabulary V_TRG. The goal of sequence-to-sequence learning is to learn a function for mapping an input sequence x into an output sequence y. For the example of machine translation, y is a translation of x, and a model parameterized by a set of weights θ is optimized to maximize p_θ(y | x). This quantity is further factorized into conditional probabilities over single tokens,

p_θ(y | x) = ∏_{t=1}^{T} p_θ(y_t | x; y_{<t}),

where the latter distribution is defined by the neural model's softmax-normalized output vector:

p_θ(y_t | x; y_{<t}) = softmax(NN_θ(x; y_{<t})).   (1)

There are various options for building the architecture of the neural model NN_θ, such as recurrent (Bahdanau et al., 2015), convolutional (Gehring et al., 2017) or attention-based (Vaswani et al., 2017) encoder-decoder architectures.
Standard supervised learning from postedits treats a postedited output translation y* for an input x the same as a human reference translation (Turchi et al., 2017) by maximizing the likelihood of the user-corrected outputs,

L_PE(θ) = ∑_{(x, y*)} ∑_{t=1}^{T} log p_θ(y*_t | x; y*_{<t}),   (2)

using stochastic gradient descent techniques (Bottou et al., 2018).
Petrushkov et al. (2018) suggested learning from error markings δ^m_t of tokens t in a machine-generated output ŷ. Denote the marking by δ^+_t if the token is marked as correct, and by δ^−_t otherwise; then a model with δ^+_t = 1 and δ^−_t = 0 will reward correct tokens and ignore incorrect outputs. The objective of the learning system is to maximize the likelihood of the correct parts of the output,

L_M(θ) = ∑_{(x, ŷ)} ∑_{t=1}^{T} δ^m_t log p_θ(ŷ_t | x; ŷ_{<t}).   (3)

The tokens ŷ_t that receive δ_t = 1 are part of the correct output y*, so the model receives a strong signal of how a corrected output should look. Although the likelihood of the incorrect parts of the sequence does not weigh into the sum, they are contained in the context of the correct parts (in ŷ_{<t}). Alternatively, it might be beneficial to penalize incorrect tokens, with e.g. δ^−_t = −0.5, and reward correct tokens with δ^+_t = 0.5, which aligns with the findings of Lam et al. (2019).
Our final combined objective is a linear interpolation of the log-likelihood of postedits L_PE and the log-likelihood of markings L_M:

L(θ) = α L_PE + (1 − α) L_M.   (4)

3.2 Simulating Markings
Error markings are simulated by comparing the hypothesis to the reference and marking the longest common subsequence as correct, as proposed by Petrushkov et al. (2018). We show an example of a data point in Table 1. Markings were extracted from the longest common subsequence calculation: for every token in the model hypothesis there is a corresponding reward, which is 0 when the token is not present in the reference and 1 when the token was kept in the reference. (A code sketch of this procedure and of the marking loss is given below.)

3.3 Knowledge Distillation
We want to showcase the advantage of our technique of enhancing supervised learning both from human reference translations and from human postedits. In order to take advantage of the fact that human postedits indicate errors in machine translations, instead of differences between machine translations and independent human references, we need to simulate the legacy machine translation system that produced the translations that were postedited. For this purpose we use APE data consisting of sources, MT outputs, and postedits.
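To make the marking simulation of Section 3.2 and the loss of Equations (2)–(4) concrete, here is a minimal sketch in plain Python/PyTorch with δ+ = 0.5 and δ− = −0.5 as above. It is our illustration only, not the authors' released JoeyNMT implementation.

```python
import torch
import torch.nn.functional as F

def lcs_markings(hyp, ref):
    """Mark each hypothesis token with 1 if it lies on a longest common
    subsequence with the reference, else 0 (cf. Table 1)."""
    m, n = len(hyp), len(ref)
    dp = [[0] * (n + 1) for _ in range(m + 1)]
    for i in range(m):
        for j in range(n):
            if hyp[i] == ref[j]:
                dp[i + 1][j + 1] = dp[i][j] + 1
            else:
                dp[i + 1][j + 1] = max(dp[i][j + 1], dp[i + 1][j])
    marks, i, j = [0] * m, m, n
    while i > 0 and j > 0:  # backtrace one LCS alignment
        if hyp[i - 1] == ref[j - 1]:
            marks[i - 1] = 1
            i, j = i - 1, j - 1
        elif dp[i - 1][j] >= dp[i][j - 1]:
            i -= 1
        else:
            j -= 1
    return marks

def combined_loss(pe_logits, pe_targets, hyp_logits, hyp_targets, marks,
                  alpha=0.5, delta_pos=0.5, delta_neg=-0.5):
    """L = alpha * L_PE + (1 - alpha) * L_M for one sentence,
    written as a loss to be minimized."""
    l_pe = F.cross_entropy(pe_logits, pe_targets)                     # Eq. (2)
    token_nll = F.cross_entropy(hyp_logits, hyp_targets, reduction="none")
    deltas = torch.tensor([delta_pos if m else delta_neg for m in marks])
    l_m = (deltas * token_nll).mean()                                 # Eq. (3)
    return alpha * l_pe + (1.0 - alpha) * l_m                         # Eq. (4)

hyp = "Um die Her@@ vor@@ hebung zu entfernen , mark@@ ieren Sie den".split()
ref = "Um die Her@@ vor@@ hebung auszu@@ schalten , de@@ aktivieren Sie diesen".split()
print(lcs_markings(hyp, ref))  # -> [1, 1, 1, 1, 1, 0, 0, 1, 0, 0, 1, 0]
```

In the actual setup (Section 5.1), these token-level weights travel with the hypothesis targets in the combined batches, while the postedit targets are trained with the standard unweighted loss of Equation (2).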
Since the legacy system is a black box\nto us, we carry out sequence-level knowledge dis-\ntillation (Kim and Rush, 2016) on the machine\ntranslations provided in the train split of the APE\ndataset (cf. Section 4). This allows us to emu-\nlate the legacy system by knowledge distillation\nand to consider the postedits in the APE dataset\nas feedback on the knowledge-distilled model. We\npresent an overview of this process in Figure 1.\nAs shown in Table 2, after fine-tuning on the\nMT outputs in the train split of the APE data,\nwe are able to produce translations that are more\nsimilar to the black-box systems than those of the\npre-trained baseline system. Additionally, because\nthe APE dataset’s postedits were generated by cor-\nrecting those MT outputs, Table 3 shows that the\nknowledge-distilled system’s performance on the\npostedits is closer to the black-box system’s per-\nformance than before distillation.3.4 Online Learning\nOur learning setup performs standard stochastic\ngradient descent learning on mini-batches. After\nevery epoch, new system translations are produced\nand error markings are extracted by comparing the\ntranslations to references. This process is shown\nin Figure 2, showing that we produce error mark-\nings by comparing the model’s output with the\npostedits and then use the marked hypotheses and\nthe postedits to train the system.\nIn preliminary experiments we found that com-\nputing error markings from a fixed initial set of\nsystem translations and using them as learning sig-\nnals in iterative training appeared to bring initial\nimprovements. Continued training, however, led to\ndecreased performance. We conjecture that learn-\ning from constant marking signals can work for\nvery small datasets (for example, Kreutzer et al.\n(2020) used fewer than 1,000 manually created\nmarkings for training), but it leads to divergence\nof parameter estimates on datasets that are one or\ntwo orders of magnitude larger, as in this work.\n4 Data\nWe use the WMT17 En-De dataset2for pre-\ntraining. Our data is pre-processed using the\nMoses tokenizer and punctuation normalization\nfor both English and German implemented in\nSacremoses3.\nWe first test our ideas on the IWSLT14 En-De\ndataset4(Cettolo et al., 2012). We download and\npre-process the data using joey scripts5. The En-\nDe dataset consists of transcribed TED talks and\nvolunteer provided reference translations into the\ntarget languages.\nThe APE dataset is from the WMT automatic\npostediting shared task 2021 (Akhbardeh et al.,\n2021). The legacy system that produced the origi-\nnal MT outputs is based on a standard Transformer\narchitecture (Vaswani et al., 2017) and follows\nthe implementation described by Ott et al. (2018).\nThis system was trained on publicly available MT\ndatasets, including Paracrawl (Ba ˜n´on et al., 2020)\nand Europarl (Koehn, 2005), totalling 23.7M par-\nallel sentences for English-German. The APE\n2https://www.statmt.org/wmt17/\ntranslation-task.html\n3https://github.com/alvations/sacremoses\n4https://sites.google.com/site/\niwsltevaluation2014/data-provided\n5https://github.com/joeynmt/joeynmt/blob/\nmain/scripts/get_iwslt14_bpe.sh\nSystem Train Dev Test\nBLEU TER BLEU TER BLEU TER\nAPE MT Outputs 100.0 0.0 100.0 0.0 100.0 0.0\nBaseline Model 48.0 31.8 49.0 31.0 46.2 33.8\nKD Model 88.9 5.8 56.0 25.9 55.8 26.7\nTable 2: Systems outputs compared to APE data MT outputs . BLEU and TER scores indicate distance of system outputs to MT\noutputs that were shown to human posteditors. 
Results show that Knowledge Distillation (KD) on the APE MT outputs improves distances (higher BLEU, lower TER), enabling improved approximation of the MT system that generated the hypotheses used in the APE dataset. Baseline and Knowledge Distillation systems are evaluated with a beam size of 5.

System            Train (BLEU / TER)   Dev (BLEU / TER)   Test (BLEU / TER)
APE MT Outputs    70.8 / 18.1          69.1 / 18.9        71.5 / 17.9
Baseline Model    42.4 / 36.9          43.3 / 35.8        41.7 / 37.8
KD Model          66.0 / 20.8          49.1 / 31.2        49.6 / 31.6

Table 3: System outputs compared to APE data postedits. Results show that Knowledge Distillation (KD) on APE MT outputs also reduces the distance to APE postedits (higher BLEU, lower TER). Baseline and KD systems are evaluated with a beam size of 5.

Figure 2: Once per epoch, we have our model run inference on all source sentences to generate hypothesis sentences. These then get compared to the postedits using the Longest Common Subsequence algorithm, with tokens contained in the subsequence marked as good and those not in the subsequence marked as bad. Both the marked hypotheses and the postedits are used as targets with a weighted cross-entropy loss function. The NMT model that generates the hypotheses and the model we train are the same model.

data consists of source, MT output, and postedit triples. The source data was selected from English Wikipedia articles. The MT outputs were provided by the legacy system and were postedited by professional translators. The sizes of the datasets are given in Table 4.

5 Experiments

5.1 Experimental Setup
We implement our loss function and data-loading on top of JoeyNMT (Kreutzer et al., 2019; https://github.com/joeynmt/joeynmt). All that needs to be changed, in addition to adding weighting to the loss function, is a way of loading data and constructing combined batches such that each batch contains sources, hypotheses, weights, and postedits. To do this, we duplicate each source twice in the batch and pair the first copy with the hypothesis and the second copy with the postedit. From the point of view of the model and loss function, the batch constructed for the combined objective does not differ from a normal batch with token-level weights. Batches constructed this way and in the usual manner can both contain the same number of tokens, but half of the target sequences in the combined batches come from the model's own translation of the training data.
Our baseline system is a standard Transformer model (Vaswani et al., 2017), pre-trained on WMT17 data for English-to-German translation (Bojar et al., 2017), and available through JoeyNMT. The model uses 6 layers in both the en-

Dataset                  Train       Dev     Test
WMT17 (pre-train)        5,919,142   –       –
IWSLT14 (fine-tune)      158,794     7,216   6,749
WMT21 APE (fine-tune)    7,000       1,000   1,000

Table 4: Size of En-De datasets used for pre-training and fine-tuning: the WMT17 and IWSLT14 data consist of pairs of source and target sentences; the WMT21 APE data consists of triples of source, MT output, and postedited sentences.

System   References   Online markings   TER
a        1.0          0.0               48.2
b        0.9          0.1               48.1
c        0.7          0.3               48.0^{a,f}
d        0.5          0.5               47.8^{a,f}
e        0.3          0.7               48.3
f        ∅            ∅                 51.3

Table 5: Results from fine-tuning the WMT17 News model on out-of-domain IWSLT references.
Numbers in the References\nand Online markings columns refer to interpolation weights given to that loss. The bottom row is the unchanged system, hence\nits interpolation values are ∅. The results show that, up to a threshold, increasing the weight given to Online markings improves\nTER scores. Superscripts denote statistically significant differences to indicated system at p-value <0.05.\ncoder and decoder with 8 attention heads each, and\nhyper-parameters as specified in the pre-trained\nJoeyNMT model’s configuration file.\nWe compare the combined objective given in\nEquation (4) to standard supervised fine-tuning by\ncontinued training on references or postedits and\nto the pre-trained model.\nAll systems share the same hyper-parameters\nexcept for the weighting of target tokens. The stan-\ndard supervised learning method does not account\nfor token-level weights and therefore all weights\nin the loss-function are set to 1. For the contrastive\nmarking method, we experimented with a range of\ninterpolation values αon the IWSLT14 dataset to\nselect the best value. The weighting of the tokens\nwere set to −0.5,0.5in correspondence with the\nresults from Kreutzer et al. (2020).\n5.2 Experimental Results\nSince our work is concerned with learning from\ntoken-based feedback, we evaluate all systems ac-\ncording to Translation Edit Rate (TER) (Snover et\nal., 2006). Furthermore, we provide the Sacre-\nBLEU (Post, 2018) signatures8for evaluation con-\nfigurations of evaluation metrics. Statistical sig-\nnificance is tested using a paired approximate ran-\ndomization test (Riezler and Maxwell, 2005).\n8TER: nrefs:1 |ar:10000 |seed:12345 |case:lc |tok:tercom |\nnorm:no |punct:yes |asian:no |version:2.0.0Table 5 shows results from fine-tuning on inde-\npendently created human references. A baseline\nmodel trained on WMT17 data (line f) is fine-tuned\non references (line a) or on a combination of ref-\nerences and online markings (lines b-e, using dif-\nferent interpolation weights) from the TED talks\ndomain. We see that up to a threshold, increasing\nthe interpolation weight given to learning from on-\nline markings significantly improves TER scores\nup to 3.5points (line d) compared to the baseline\n(line f), and up to 0.5points compared to training\nfrom references only (line a).\nTable 6 gives an experimental comparison of\nfine-tuning experiments on human postedits. A\nbaseline model trained on news data is fine-tuned\non postedit data from the Wikipedia domain. The\npostedit data is feedback on real MT outputs that\nwe have trained on using knowledge distillation\nto emulate. Line a shows TER results for fine-\ntuning on postedits. This result can be improved\nsignificantly by 0.6TER by combined learning on\npostedits and online markings, using an interpola-\ntion weight of 0.5(line b). Lines c and d perform\nthe same comparison of objectives for a model that\nhas been trained via knowledge distillation (KD) of\nthe legacy machine translations that were the input\ndata for postediting. Comparing line d to line a,\nwe see that by combined learning of a KD system\non postedits and markings even larger gains, close\nto1TER point, can be obtained. The improve-\nSystem TER\na Baseline + Postedits 31.3\nb Baseline + Postedits + Online Markings 30.7a\nc Baseline + KD + Postedits 30.8\nd Baseline + KD + Postedits + Online Markings 30.4ac\nTable 6: Fine-tuned systems compared to WMT APE postedit test data. 
Results show that Online markings, when combined\nwith learning from references, are able to improve our systems more than references alone. Even larger improvements are\ngained by systems trained by knowledge distillation (KD) on legacy translations. Interpolation weights are set to 0.5. Super-\nscripts indicate a significant improvement p <0.05over the indicated system.\nments due to adding online markings are signifi-\ncant over training from postedits alone in all cases,\nand nominally, results for models adapted to the\nlegacy machine translations via KD are better than\nfor unchanged models trained on postedits.\nAn example showing the learning progress of\nthe different approaches during the first epochs is\ngiven in Table 7. The results of epoch 0 are given\nin the first block. It shows the system outputs\nof the models trained with knowledge distillation\nand the baselines before learning from postedits\nor markings. The KD models, given in lines c\nand d, already show better terminology translation\n(superstructure - ¨Uberbau, bases - Fundamente)\nthan the baselines in lines a and b (superstruc-\nture - Superstruktur, bases - St ¨utzpunkte). After\none epoch, contrastive learning (lines b and d) and\nlearning from postedits (lines a and c) correct ”ar-\nmored - gewagelt” and ”armored - getrieben” to\n”armored - gepanzert”, but only for KD models or\nif contrastive learning is used. Furthermore, con-\nstrastive learning of a KD model (line d) also cor-\nrects the translation of ”funnel” from ”Funnels” to\n”Trichter”.\n6 Discussion\nOur experimental results in Table 6 show that\nonline markings combined with references or\npostedits bring greater improvements than super-\nvised learning on references or postedits alone, and\nmoreover, the knowledge distilled models benefit\nmore from the provided feedback. This suggests\nthat the more related the feedback is to the sys-\ntem’s own output, the more can be learned from\nthe feedback.\nFurthermore, this result has implications for\nhow to best use postedits. Postedits are of-\nten treated as new reference translations for the\nsources and used to train new systems, whereas the\noriginal MT outputs are discarded. However, fine-tuning the original system on the postedits may\nyield larger improvements than training a new, un-\nrelated model on the source and postedit alone.\nLastly, we believe that our results can be inter-\npreted as the effect of mitigating exposure bias.\nThe pre-trained model is exposed not only to refer-\nence translations, but to its own trajectories. Even\nif the model’s trajectory is far from the gold ref-\nerence and multiple tokens in its history are incor-\nrect, it will be rewarded if it predicts a token that\nis in the output. This may enable it to return to a\nmore rewarding trajectory.\n7 Conclusion\nIn this work we present a way to combine postedits\nand word-level error markings extracted from the\nedit operations between the postedit and the MT\noutput to learn more than what the postedit alone\nis able to provide. Experimentally, we try this on\nsystems unrelated to the legacy system, whose out-\nputs were originally postedited, and on a simula-\ntion of the legacy system we create via knowledge\ndistillation. We show that these contrastive mark-\nings are able to bring significant improvements to\nTER scores and we hypothesize this is because\nthey are able to target insertion errors that con-\ntribute to higher TER scores. 
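To make the marking procedure of Figure 2 and Section 5.1 concrete, a minimal sketch is given below. It is an illustration only, not the JoeyNMT implementation used in the experiments; the helper names and the unnormalized loss are simplifications, while the ±0.5 token weights and the worked example are taken from Section 5.1 and Table 7.

```python
import torch

GOOD, BAD = 0.5, -0.5   # token weights for marked hypotheses (Section 5.1)

def mark_hypothesis(hyp, postedit):
    """Mark each hypothesis token GOOD if it lies on a longest common subsequence
    (LCS) with the postedit, and BAD otherwise (the marking step of Figure 2)."""
    m, n = len(hyp), len(postedit)
    dp = [[0] * (n + 1) for _ in range(m + 1)]        # standard LCS table
    for i in range(m):
        for j in range(n):
            dp[i + 1][j + 1] = (dp[i][j] + 1 if hyp[i] == postedit[j]
                                else max(dp[i][j + 1], dp[i + 1][j]))
    weights, i, j = [BAD] * m, m, n                   # backtrace through the table
    while i > 0 and j > 0:
        if hyp[i - 1] == postedit[j - 1]:
            weights[i - 1] = GOOD
            i, j = i - 1, j - 1
        elif dp[i - 1][j] >= dp[i][j - 1]:
            i -= 1
        else:
            j -= 1
    return weights

def weighted_xent(log_probs, targets, token_weights):
    """Token-level weighted cross-entropy; log_probs is (T, V), targets and
    token_weights are length-T tensors. Weights of 1.0 recover the usual loss."""
    picked = log_probs.gather(1, targets.unsqueeze(1)).squeeze(1)
    return -(token_weights * picked).sum()

# Example adapted from Table 7 (epoch 0): only "wurde" survives the LCS.
hyp = "die Superstruktur wurde getrieben".split()
postedit = "der Überbau wurde gepanzert".split()
print(mark_hypothesis(hyp, postedit))   # [-0.5, -0.5, 0.5, -0.5]
```

In the combined objective, this marking loss on the model's own hypotheses is interpolated with the standard cross-entropy on the references or postedits (whose tokens all keep the weight 1.0), using the interpolation weights reported in Tables 5 and 6.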
Additionally, learn-\ning from the model’s own output may allow it to\nlearn how to correct itself after making an error if\nit is later rewarded for correct outputs.\nReferences\nAkhbardeh, Farhad, Arkady Arkhangorodsky, Mag-\ndalena Biesialska, Ond ˇrej Bojar, Rajen Chatter-\njee, Vishrav Chaudhary, Marta R. Costa-jussa,\nCristina Espa ˜na-Bonet, Angela Fan, Christian Fe-\ndermann, Markus Freitag, Yvette Graham, Ro-\nman Grundkiewicz, Barry Haddow, Leonie Har-\nter, Kenneth Heafield, Christopher Homan, Matthias\nHuck, Kwabena Amponsah-Kaakyire, Jungo Kasai,\nDaniel Khashabi, Kevin Knight, Tom Kocmi, Philipp\nSource the superstructure was armored to protect the bases of the turrets , the funnels and the ventilator ducts in what he\ntermed a breastwork .\nPostedit der ¨Uberbau wurde gepanzert , um die Fundamente der T ¨urme , der Trichter und der Ventilatorkan ¨ale in dem\nBereich zu sch ¨utzen , den er als Brustwehr bezeichnete .\nEpoch 0\nSystem Hypothesis\na die Superstruktur wurde getrieben , um die St ¨utzpunkte der Turm- , der Funn ¤ rn- und der Ventilator die Herde\nin dem , was er als die Brustst besteigung bezeichnet hatte zu sch ¨utzen .\nb die Superstruktur wurde getrieben , um die St ¨utzpunkte der Turm- , der Funn ¤ rn- und der Ventilator die Herde\nin dem , was er als die Brustst besteigung bezeichnet hatte zu sch ¨utzen .\nc der ¨Uberbau wurde gewagelt , um die Fundamente der T ¨urme , die Funnels und die Ventilatoren kan¨ale in einem\nBrustwerk zu sch ¨utzen .\nd der ¨Uberbau wurde gewagelt , um die Fundamente der T ¨urme , die Funnels und die Ventilatoren kan¨ale in einem\nBrustwerk zu sch ¨utzen .\nEpoch 1\nSystem Hypothesis\na die Superstruktur wurde gezeichnet , um die St ¨utzen der Turrets , der Funnels und der Ventilator in seiner Art\nBrustwork zu sch ¨utzen .\nb die ¨Uberbauung war gepanzert , um die Grundst ¨ucke der Turrets , der Funnels und der Vaterfun kanten in dem ,\nwaser als Brustwerk nannte , zu sch ¨utzen .\nc der Super bau wurde gepanzert , um die St ¨utzpunkte der Turrets , der Funnels und der Ventilatorent ¨otungen in\neiner so genannten Brustarbeit zu sch ¨utzen .\nd der ¨Uberbau wurde gepanzert , um die Fundamente der T ¨urme , der Trichter und der Ventilatorkan kan¨ale zu\nsch¨utzen , was er als Brustwerk nannte .\nTable 7: Here we show the beginning of a training trajectory for a single example from the APE dataset. Above is the source\nand the postedit from the dataset, after which follows the first three epochs. Because translations and markings are generated\nbefore the beginning of an epoch, epoch 0 contains outputs from the knowledge distilled (KD) (lines c and d) and baseline\nsystems (lines a and b). The systems letters correspond to those in Table 6, indicating learning from postedits in lines a and\nc, and learning additionally from the contrastive markings in lines b and d. Models c and d have seen the MT side of this\ndataset beforehand and are already more capable of translating terminology such as ”superstructure” to ” ¨Uberbau”. After one\nepoch, we see that the KD models and the contrastive learning objective models are able to correct ”gewagelt” and ”getrieben”\nto ”gepanzert” as the translation of ”armored”. 
Because we use subword tokens, we have markings on portions of words.\nAlthough ” ¨Uberbau” is a part of ” ¨Uberbauung”, the subwords used to construct them differ, leading to ”bau” in ” ¨Uberbauung”\nbeing marked as incorrect.\nKoehn, Nicholas Lourie, Christof Monz, Makoto\nMorishita, Masaaki Nagata, Ajay Nagesh, Toshi-\naki Nakazawa, Matteo Negri, Santanu Pal, Allah-\nsera Auguste Tapo, Marco Turchi, Valentin Vydrin,\nand Marcos Zampieri. 2021. Findings of the 2021\nConference on Machine Translation (WMT21). In\nProceedings of the Sixth Conference on Machine\nTranslation , pages 1–88, Online, November. Asso-\nciation for Computational Linguistics.\nBahdanau, Dzmitry, Kyunghyun Cho, and Yoshua Ben-\ngio. 2015. Neural machine translation by jointly\nlearning to align and translate. In Proceedings of\nthe International Conference on Learning Represen-\ntations (ICLR) , San Diego, CA.\nBahdanau, Dzmitry, Philemon Brakel, Kelvin Xu,\nAnirudh Goyal, Ryan Lowe, Joelle Pineau, Aaron\nCourville, and Yoshua Bengio. 2017. An actor-critic\nalgorithm for sequence prediction. In Proceedings of\nthe 5th International Conference on Learning Repre-\nsentations (ICLR) , Toulon, France.\nBa˜n´on, Marta, Pinzhen Chen, Barry Haddow, Ken-\nneth Heafield, Hieu Hoang, Miquel Espl `a-Gomis,Mikel L. Forcada, Amir Kamran, Faheem Kirefu,\nPhilipp Koehn, Sergio Ortiz Rojas, Leopoldo\nPla Sempere, Gema Ram ´ırez-S ´anchez, Elsa Sarr ´ıas,\nMarek Strelec, Brian Thompson, William Waites,\nDion Wiggins, and Jaume Zaragoza. 2020.\nParaCrawl: Web-scale acquisition of parallel cor-\npora. In Proceedings of the 58th Annual Meeting of\nthe Association for Computational Linguistics , pages\n4555–4567, Online, July. Association for Computa-\ntional Linguistics.\nBengio, Samy, Oriol Vinyals, Navdeep Jaitly, and\nNoam Shazeer. 2015. Scheduled sampling for se-\nquence prediction with recurrent neural networks.\nInProceedings of the 28th International Conference\non Neural Information Processing Systems (NIPS) ,\nMontreal, Canada.\nBojar, Ond ˇrej, Rajen Chatterjee, Christian Federmann,\nYvette Graham, Barry Haddow, Shujian Huang,\nMatthias Huck, Philipp Koehn, Qun Liu, Varvara Lo-\ngacheva, Christof Monz, Matteo Negri, Matt Post,\nRaphael Rubino, Lucia Specia, and Marco Turchi.\n2017. Findings of the 2017 Conference on Ma-\nchine Translation (WMT17). In Proceedings of the\nSecond Conference on Machine Translation, Vol-\nume 2: Shared Task Papers , pages 169–214, Copen-\nhagen, Denmark, September. Association for Com-\nputational Linguistics.\nBottou, Leon, Frank E. Curtis, and Jorge Nocedal.\n2018. Optimization methods for large-scale machine\nlearning. SIAM Review , 60(2):223–311.\nCettolo, Mauro, Christian Girardi, and Marcello Fed-\nerico. 2012. WIT3: Web inventory of transcribed\nand translated talks. In Proceedings of the 16th An-\nnual conference of the European Association for Ma-\nchine Translation , pages 261–268, Trento, Italy, May\n28–30. European Association for Machine Transla-\ntion.\nDomingo, Miguel, ´Alvaro Peris, and Francisco\nCasacuberta. 2017. Segment-based interactive-\npredictive machine translation. Machine Transla-\ntion, 31(4):163–185.\nGehring, Jonas, Michael Auli, David Grangier, Denis\nYarats, and Yann Dauphin. 2017. Convolutional se-\nquence to sequence learning. In Proceedings of the\n55th Annual Meeting of the Association for Compu-\ntational Linguistics (ACL) , Vancouver, Canada.\nGrangier, David and Michael Auli. 2018. 
QuickEdit:\nEditing text & translations by crossing words out.\nInProceedings of the 2018 Conference of the North\nAmerican Chapter of the Association for Computa-\ntional Linguistics: Human Language Technologies,\nVolume 1 (Long Papers) , pages 272–282, New Or-\nleans, Louisiana, June. Association for Computa-\ntional Linguistics.\nGu, Jiatao, Changhan Wang, and Junbo Zhao. 2019.\nLevenshtein transformer. Advances in Neural Infor-\nmation Processing Systems , 32.\nJehl, Laura, Carolin Lawrence, and Stefan Riezler.\n2019. Learning neural sequence-to-sequence mod-\nels from weak feedback with bipolar ramp loss.\nTransactions of the Association for Computational\nLinguistics , 7:233–248.\nKim, Yoon and Alexander M. Rush. 2016. Sequence-\nlevel knowledge distillation. In Proceedings of the\n2016 Conference on Empirical Methods in Natural\nLanguage Processing , Austin, Texas.\nKoehn, Philipp. 2005. Europarl: A parallel corpus\nfor statistical machine translation. In Proceedings of\nMachine Translation Summit X: Papers , pages 79–\n86, Phuket, Thailand, September 13-15.\nKreutzer, Julia, Jasmijn Bastings, and Stefan Riezler.\n2019. Joey NMT: A minimalist NMT toolkit for\nnovices. In Proceedings of the 2019 Conference on\nEmpirical Methods in Natural Language Processing\nand the 9th International Joint Conference on Natu-\nral Language Processing (EMNLP-IJCNLP): System\nDemonstrations , pages 109–114, Hong Kong, China,\nNovember. Association for Computational Linguis-\ntics.Kreutzer, Julia, Nathaniel Berger, and Stefan Riezler.\n2020. Correct me if you can: Learning from er-\nror corrections and markings. In Proceedings of the\n22nd Annual Conference of the European Associa-\ntion for Machine Translation , pages 135–144, Lis-\nboa, Portugal, November. European Association for\nMachine Translation.\nLam, Tsz Kin, Shigehiko Schamoni, and Stefan Riezler.\n2019. Interactive-predictive neural machine transla-\ntion through reinforcement and imitation. In Pro-\nceedings of the Machine Translation Summit (MT-\nSUMMIT XVII) , Dublin, Ireland.\nLin, Alexander, Jeremy Wohlwend, Howard Chen, and\nTao Lei. 2020. Autoregressive knowledge distilla-\ntion through imitation learning. In Proceedings of\nthe 2020 Conference on Empirical Methods in Natu-\nral Language Processing (EMNLP) , Online.\nMarie, Benjamin and Aur ´elien Max. 2015. Touch-\nbased pre-post-editing of machine translation out-\nput. In Proceedings of the Conference on Empirical\nMethods in Natural Language Processing (EMNLP) ,\nLisbon, Portugal.\nNg, Andrew Y ., Daishi Harada, and Stuart J. Russell.\n1999. Policy invariance under reward transforma-\ntions: Theory and application to reward shaping. In\nProceedings of the Sixteenth International Confer-\nence on Machine Learning (ICML) , Bled, Slovenia.\nOtt, Myle, Sergey Edunov, David Grangier, and\nMichael Auli. 2018. Scaling neural machine trans-\nlation. In Proceedings of the Third Conference on\nMachine Translation: Research Papers , pages 1–9,\nBrussels, Belgium, October. Association for Com-\nputational Linguistics.\nPetrushkov, Pavel, Shahram Khadivi, and Evgeny Ma-\ntusov. 2018. Learning from chunk-based feedback\nin neural machine translation. In Proceedings of the\n56th Annual Meeting of the Association for Compu-\ntational Linguistics (ACL) , Melbourne, Australia.\nPost, Matt. 2018. A call for clarity in reporting BLEU\nscores. In Proceedings of the Third Conference on\nMachine Translation: Research Papers , pages 186–\n191, Belgium, Brussels, October. 
Association for\nComputational Linguistics.\nRanzato, Marc’Aurelio, Sumit Chopra, Michael Auli,\nand Wojciech Zaremba. 2016. Sequence level train-\ning with recurrent neural networks. In Proceedings\nof the International Conference on Learning Repre-\nsentation (ICLR) , San Juan, Puerto Rico.\nRiezler, Stefan and John T. Maxwell. 2005. On some\npitfalls in automatic evaluation and significance test-\ning for MT. In Proceedings of the ACL Workshop\non Intrinsic and Extrinsic Evaluation Measures for\nMachine Translation and/or Summarization , pages\n57–64, Ann Arbor, Michigan, June. Association for\nComputational Linguistics.\nSennrich, Rico, Barry Haddow, and Alexandra Birch.\n2016. Neural machine translation of rare words with\nsubword units. In Proceedings of the 54th Annual\nMeeting of the Association for Computational Lin-\nguistics (Volume 1: Long Papers) , pages 1715–1725,\nBerlin, Germany, August. Association for Computa-\ntional Linguistics.\nShen, Shiqi, Yong Cheng, Zongjun He, Wei He, Hua\nWu, Maosong Sun, and Yang Liu. 2016. Minimum\nrisk training for neural machine translation. In Pro-\nceedings of the 54th Annual Meeting of the Associ-\nation for Computational Linguistics (ACL) , Berlin,\nGermany.\nSnover, Matthew, Bonnie Dorr, Rich Schwartz, Lin-\nnea Micciulla, and John Makhoul. 2006. A study\nof translation edit rate with targeted human annota-\ntion. In Proceedings of the 7th Conference of the\nAssociation for Machine Translation in the Ameri-\ncas: Technical Papers , pages 223–231, Cambridge,\nMassachusetts, USA, August 8-12. Association for\nMachine Translation in the Americas.\nTurchi, Marco, Matteo Negri, M. Amin Farajian, and\nMarcello Federico. 2017. Continuous learning\nfrom human post-edits for neural machine transla-\ntion. The Prague Bulletin of Mathematical Linguis-\ntics (PBML) , 1(108):233–244.\nVaswani, Ashish, Noam Shazeer, Niki Parmar, Jakob\nUszkoreit, Llion Jones, Aidan N Gomez, Ł ukasz\nKaiser, and Illia Polosukhin. 2017. Attention is all\nyou need. In Guyon, I., U. V on Luxburg, S. Ben-\ngio, H. Wallach, R. Fergus, S. Vishwanathan, and\nR. Garnett, editors, Advances in Neural Information\nProcessing Systems , volume 30. Curran Associates,\nInc.\nWilliams, Ronald J. and David Zipser. 1989. A learn-\ning algorithm for continually running fully recurrent\nneural networks. Neural Computation , 1(2):270–\n280.\nXu, Weijia and Marine Carpuat. 2021. EDITOR: An\nedit-based transformer with repositioning for neu-\nral machine translation with soft lexical constraints.\nTransactions of the Association for Computational\nLinguistics , 9:311–328.", "main_paper_content": null }
{ "decision": "Unknown", "reviews": [] }
0
0
[]
[]
{ "id": "fGot-ZJckWg", "year": null, "venue": "EAMT 2022", "pdf_link": "https://aclanthology.org/2022.eamt-1.12.pdf", "forum_link": "https://openreview.net/forum?id=fGot-ZJckWg", "arxiv_id": null, "doi": null }
{ "title": "Multilingual Neural Machine Translation With the Right Amount of Sharing", "authors": [ "Taido Purason", "Andre Tättar" ], "abstract": null, "keywords": [], "raw_extracted_content": "Multilingual Neural Machine Translation With the Right Amount of\nSharing\nTaido Purason\nUniversity of Tartu\nTartu, Estonia\[email protected] T ¨attar\nUniversity of Tartu\nTartu, Estonia\[email protected]\nAbstract\nLarge multilingual Transformer-based ma-\nchine translation models have had a pivotal\nrole in making translation systems avail-\nable for hundreds of languages with good\nzero-shot translation performance. One\nsuch example is the universal model with\nshared encoder-decoder architecture. Ad-\nditionally, jointly trained language-specific\nencoder-decoder systems have been pro-\nposed for multilingual neural machine\ntranslation (NMT) models. This work in-\nvestigates various knowledge-sharing ap-\nproaches on the encoder side while keep-\ning the decoder language- or language-\ngroup-specific. We propose a novel ap-\nproach, where we use universal, language-\ngroup-specific and language-specific mod-\nules to solve the shortcomings of both\nthe universal models and models with\nlanguage-specific encoders-decoders. Ex-\nperiments on a multilingual dataset set\nup to model real-world scenarios, includ-\ning zero-shot and low-resource translation,\nshow that our proposed models achieve\nhigher translation quality compared to\npurely universal and language-specific ap-\nproaches.\n1 Introduction\nMultilingual neural machine translation has been a\nfundamental topic in recent years, especially for\nzero- and few-shot translation scenarios. Tradi-\ntionally, universal NMT models (see Fig. 1a) have\n© 2022 The authors. This article is licensed under a Creative\nCommons 3.0 licence, no derivative works, attribution, CC-\nBY-ND.\nUniv. Univ.\ntgt lang+(a) Universal\nmodel\ngem gem\nroa roa\ntgt lang+(b)Model with\nlanguage-group-\nspecific modules\nen\nde\nesen\nesde(c) Model\nwith language-\nspecific modules\nFigure 1: Different granularities of the modular architecture.\nroa– Romance; gem – Germanic; tgt lang – Target language\ntoken added to indicate the language of the output sentence.\nbeen used to produce zero-shot or low-resource\ntranslations (Johnson et al., 2016). However, pre-\nvious research has established that universal NMT\nmodels with shared encoder-decoder architecture\nhave some disadvantages: (1) high-resource lan-\nguage pairs tend to suffer loss in translation qual-\nity (Arivazhagan et al., 2019); (2) the vocabulary\nof the model increases greatly, especially for lan-\nguages that do not share an alphabet such as En-\nglish and Japanese; (3) the need to retrain from\nscratch when a new language does not share the\nmodel’s vocabulary.\nRecently, there has been renewed interest in\nmultilingual systems, which have jointly trained\nlanguage-specific encoders-decoders (see Fig. 1c)\nwhich we call the modular architecture (Lyu et\nal., 2020). 
The goal of these modular models has been to achieve a better overall translation quality compared to universal or uni-directional NMT models. However, there is a disadvantage: lower zero-shot translation quality compared to universal models. To combat this problem, shared encoder/decoder layers (also called interlingua layers) have been proposed (Liao et al., 2021).

[Figure 2 diagram with three panels: (a) Language group sharing; (b) Language group and universal sharing (tiered); (c) Universal sharing (between all languages).]
Figure 2: Different types of encoder layer sharing in the modular architecture. Note that the width of layers in the figure does not correspond to the actual width but rather reflects the sharing extent, i.e. all layers in the encoder have the same width dimension. U – universal, G – Germanic, R – Romance.

In this paper, we focus on improving the overall translation quality by using different knowledge- and layer-sharing methods. More specifically, we investigate the effect of sharing encoder layers to improve the generalizability and quality of NMT models. Secondly, we present novel language-group-based models that are inspired by the universal and modular systems. We propose (1) various degrees of granularity (or specificity) of modules (illustrated in Fig. 1); (2) layer sharing, including combining layers of various granularities into a tiered architecture (illustrated by Fig. 2). Our methods show better translation quality in all testing scenarios compared to the universal model without increasing training or inference time, by having variable degrees of modularity or sharing in the encoder.

Our research looks beyond zero-shot and high-resource NMT performance – we set up our experiments to investigate model performance for many data scenarios like zero-shot and low- to high-resource settings. We use a combination of Europarl (Koehn, 2005), EMEA (Tiedemann, 2012), and JRC-Acquis (Steinberger et al., 2006) datasets for training and evaluation, with six languages grouped into two language groups: Germanic (German, English, Danish) and Romance (French, Spanish, Portuguese). The results show that our approaches can provide an improvement over universal models in all data scenarios. Furthermore, our approaches improve the zero-shot and low-resource translation quality of the modular architecture without harming the high-resource language translation quality.

The main contributions of our paper are:
• We introduce a novel language-group-specific modular encoder and decoder architecture (Fig. 1b).
• We show that different architectures of shared encoder layers (Fig. 2) improve the low-resource MT quality of the modular model while also improving the high-resource MT quality that suffers in the universal NMT setting.
• We empirically show what effect sharing encoder layers has and present a detailed analysis that supports layer sharing.

2 Related Works
Multilingual neural machine translation models follow the encoder-decoder architecture, and approaches following this architecture can vary in the amount of parameter sharing (Dabre et al., 2020). The most straightforward approach with no parameter sharing would be having a system of uni-directional models. While it is feasible with a small number of high-resource languages, it becomes problematic in scenarios with low-resource languages or a large number of languages.
Firstly,\nthe number of uni-directional models in the sys-\ntem grows quadratically with the number of lan-\nguages, harming maintainability. Secondly, there\nis no transfer learning between language pairs due\nto separate models, which means that low-resource\nlanguages generally have low translation quality.\nThese issues are addressed by pivoting with some\nsuccess, however, it does not come without trade-\noffs (Habash and Hu, 2009). The main problem\nwith pivoting is that it is not possible to fully uti-\nlize all the training data since we only use training\ndata that contains the pivot language. Furthermore,\ndue to multiple models being potentially used for\na translation, the translation is slower, and there is\na chance of error propagation and loss of informa-\ntion.\nThe most widely used approach in multilingual\nNMT uses a fully shared (universal) model, which\nhas a single encoder and decoder shared between\nall the languages and uses a token added to the in-\nput sentence to indicate the target language (John-\nson et al., 2016). Arivazhagan et al. (2019) iden-\ntified that the universal model suffers from the\ncapacity bottleneck: with many languages in the\nmodel, the translation quality begins to deteriorate.\nThis especially harms the translation quality of\nhigh-resource language pairs. Zhang et al. (2020)\nfurther confirmed this and suggested deeper and\nlanguage-aware models as an improvement. Still,\nthe problem of low maintainability remains, since\nadding the languages to the model is not possible\nwithout retraining the whole model. Furthermore,\nadding languages with different scripts likely re-\nsults in lower translation quality since the vocabu-\nlary can not be altered.\nEscolano et al. (2019) suggested a proof-of-\nconcept model with language-specific encoders\nand decoders that started bilingual and was in-\ncrementally trained to include other languages.\nEscolano et al. (2020) further improved on it\nand proposed a joint training procedure that pro-\nduced a model that outperformed the universal\nmodel in translation quality. Furthermore, their\nproposed model is expandable by incrementally\nadding new languages without affecting the ex-\nisting languages’ translation quality. Lyu et al.\n(2020) investigated the performance of the mod-\nular model from the industry perspective. They\nfound that the modular model often outperforms\nsingle direction models thanks to transfer learning\nwhile being a competitor to the universal model\nas well due to the additional capacity of language-\nspecific modules.\nModular models can contain shared modules as\nwell. Liao et al. (2021) set out to improve the zero-\nshot performance of modular models, which is of-\nten worse than the zero-shot performance of uni-\nversal models. They achieve this by sharing up-\nper layers of language-specific encoders between\nall languages. The current paper is an extension of\nthat work. While Liao et al. (2021) used English-\ncentric training data and denoising autoencoder\ntask to achieve universal interlingua, in this paper\nwe are not using an autoencoder task, since our\ndata is not one language centric.\nIntroducing language-specific modules into a\nuniversal model can be a good way to increase\nthe capacity of the model without significantly in-\ncreasing training or inference time. An example of\na system that utilizes this is described in Fan et al.\n(2020). 
They use language-specific and language\ngroup layers in the decoder of the model following\nthe universal architecture model to provide more\ncapacity. They also note that language-specific\nlayers are more effective when applied to the de-\ncoder. Liao et al. (2021) also found that sharingin decoder is not beneficial when there are shared\nlayers in the encoder. These are also the main mo-\ntivations for focusing on sharing encoder layers in\nthis paper.\n3 Experiment setup\n3.1 Data\nOur aim was to create a dataset that resembles\na real-world scenario where language pairs with\nvarying amounts of data are encountered. The data\nis collected from Europarl (Koehn, 2005), EMEA\n(Tiedemann, 2012), and JRC-Acquis (Steinberger\net al., 2006). The training dataset is created by\nsampling from the aforementioned datasets so that\nthe training dataset is composed of 70% Europarl,\n15% EMEA, and 15% JRC-Acquis. The test set is\ncomposed of completely multi-parallel sentences.\nLanguage\ncombinationDirection (lang. group)\nintra inter\nhigh–high 1,000,000 1,000,000\nhigh–mid 500,000 500,000\nmid–mid 500,000 100,000\nlow–high 100,000 10,000\nlow–mid 100,000 0\nlow–low 0 0\nTable 1: Dataset size rules per language type pair and lan-\nguage group. intra – translation within language group, inter\n– translating between language groups\nThe dataset is composed of English, German,\nDanish, French, Spanish, and Portuguese. For cre-\nating the dataset and defining models, these are\ndivided into Germanic (English, German, Dan-\nish) and Romance (French, Spanish, Portugese)\nlanguage groups. We define high-resource (En-\nglish, German, French), medium-resource (Span-\nish), and low-resource (Danish, Portuguese) lan-\nguages that produce high-resource (1,000,000\nlines), higher medium resource (500,000 lines),\nlower medium resource (100,000 lines), low-\nresource (10,000 lines), and zero-shot (0 lines)\nlanguage pairs when combined according to the\nrules in Table 1. With these rules, we also give\nlow and medium resource language directions less\ntraining sentences if they consist of languages\nfrom different language groups compared to the\npairs consisting of the same language group lan-\nguages. The resulting dataset composition from\nthese rules is visible in Table 2. The test set\nconsists of 2000 multi-parallel sentences for each\nlanguage pair from the same distribution as the\ntraining data. Since the training dataset is cre-\nsrctgt\nen de da fr es pt all\nen – 1,000,000 100,000 1,000,000 500,000 10,000 2,610,000\nde 1,000,000 – 100,000 1,000,000 500,000 10,000 2,610,000\nda 100,000 100,000 – 10,000 0 0 210,000\nfr 1,000,000 1,000,000 10,000 – 500,000 100,000 2,610,000\nes 500,000 500,000 0 500,000 – 100,000 1,600,000\npt 10,000 10,000 0 100,000 100,000 – 220,000\nall 2,610,000 2,610,000 210,000 2,610,000 1,600,000 220,000 9,860,000\nTable 2: Dataset sizes (number of sentence pairs) per language pair.\nated by randomly sampling data for each lan-\nguage pair, it is not completely multi-parallel,\nhowever, it probably contains many multi-parallel\nlines. The validation dataset is created for all\nnon-zero-shot pairs with size per language pair de-\nfined by ntest(langpair) = max( ntrain(langpair) ·\n0.0006,100) .\nThe dataset size is quite small compared to data\nused for training state-of-the-art models mainly\ndue to limited computational resources. 
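Expressed as code, the size rules above are compact. The sketch below is an illustration only: the function names, the rounding behaviour, and the corpus-reading step are assumptions, while the numbers come from Table 1 and the validation-size rule from Section 3.1; the 70/15/15 Europarl/EMEA/JRC-Acquis mix and the actual sentence sampling are not shown.

```python
# Data-size rules from Table 1 plus the validation-size rule from Section 3.1.
RESOURCE = {"en": "high", "de": "high", "fr": "high", "es": "mid", "da": "low", "pt": "low"}
GROUP = {"en": "gem", "de": "gem", "da": "gem", "fr": "roa", "es": "roa", "pt": "roa"}
RANK = {"high": 0, "mid": 1, "low": 2}
SIZES = {  # (resource A, resource B) -> (intra-group lines, inter-group lines)
    ("high", "high"): (1_000_000, 1_000_000),
    ("high", "mid"):  (500_000, 500_000),
    ("mid", "mid"):   (500_000, 100_000),
    ("high", "low"):  (100_000, 10_000),
    ("mid", "low"):   (100_000, 0),
    ("low", "low"):   (0, 0),
}

def train_size(src, tgt):
    """Training lines for one translation direction, following Table 1."""
    a, b = sorted((RESOURCE[src], RESOURCE[tgt]), key=RANK.get)
    intra, inter = SIZES[(a, b)]
    return intra if GROUP[src] == GROUP[tgt] else inter

def dev_size(n_train):
    """Validation-set size: max(0.0006 * n_train, 100)."""
    return max(round(n_train * 0.0006), 100)

print(train_size("en", "de"), train_size("en", "pt"), dev_size(500_000))
# 1000000 10000 300
```

Applying this rule to all 30 directions reproduces the per-direction counts listed in Table 2.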
However,\nwe believe that it still allows us to draw conclu-\nsions that can be applied at larger scales.\n3.2 Model architecture\nPrevious research has investigated sharing layers\nof the modular architecture (Liao et al., 2021). In\nthis work, we mainly focus on layer sharing in the\nencoders. The layers are shared in 2 ways: (1) in-\nside language groups (Fig. 2a), and (2) between all\nlanguages (universally, Fig. 2c). These two meth-\nods are also combined into a tiered architecture\n(Fig. 2b). We also experiment with different levels\nof granularity of modules and introduce language-\ngroup-specific modules referred to as group mod-\nular model (Fig. 1b).\nAs baselines, we use a modular architecture\nwithout layer sharing (Fig. 1c) and a universal ar-\nchitecture with one encoder and decoder shared\nbetween all languages (Fig. 1a).\nAll of the models in our experiments follow\nthe transformer base architecture (Vaswani et al.,\n2017) (6 encoder layers, 6 decoder layers). In\naddition to dropout of 0.1, attention and activa-\ntion dropout of 0.1 are used. The embeddings\nare shared within a language module (encoder-\ndecoder) for language-specific modular models\nand within a language group module for group\nmodular models. For the universal model, all em-\nbeddings are shared.3.3 Segmentation model training\nWe use Byte Pair Encoding (BPE) (Sennrich et\nal., 2016) implemented in SentencePiece (Kudo\nand Richardson, 2018) as the segmentation algo-\nrithm. For the language-specific encoder-decoder\napproach, we train a BPE model with a vocabu-\nlary size of 16,000 for each of the languages. In\nthe group-specific approach, we have a BPE model\nfor each of the language groups with a vocabulary\nsize of 32,000. For the universal model, we have a\nsingle unified BPE model with vocabulary size of\n32,000. For training the BPE models, we use char-\nacter coverage of 1.0 and training data consisting\nof the training set of the corresponding languages.\n3.4 Model training\nFairseq (Ott et al., 2019) is used to implement\ntraining and models. We made the code for our\ncustom implementations publicly available1.\nFor the following experiments, we set the con-\nvergence criteria to be 5 epochs of no improvement\nin the validation set loss. To evaluate the experi-\nments, we always use the best epoch according to\nthe validation loss.\nThe learning rate is selected from {0.0002,\n0.0004, 0.0008 }by the highest BLEU score on the\nvalidation set after 20 training epochs. Gradient\naccumulation frequency is selected using BLEU\nscore on the validation set after convergence from\n8, 16, 32, 48. For all experiments in this paper, the\ntotal maximum batch size is 384,000 tokens (max\ntokens in a batch multiplied by the gradient accu-\nmulation frequency and the number of GPUs).\nFrom the initial experiments, learning rate of\n0.0004 and gradient accumulation frequency of\n48 is selected. 
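To make the layer-sharing configurations of Section 3.2 concrete, the schematic PyTorch sketch below assembles one tiered encoder: the two lowest layers are language-specific, the next two are shared within a language group, and the top two are shared between all languages. This is an illustration only, not the released Fairseq implementation; it omits embeddings, positional encoding, masking, and the decoder, and uses standard transformer-base dimensions.

```python
import torch
import torch.nn as nn

GROUPS = {"en": "gem", "de": "gem", "da": "gem", "fr": "roa", "es": "roa", "pt": "roa"}

def base_layer():
    # Transformer-base encoder layer (d_model 512, 8 heads, FFN 2048, dropout 0.1)
    return nn.TransformerEncoderLayer(d_model=512, nhead=8, dim_feedforward=2048, dropout=0.1)

class TieredEncoder(nn.Module):
    """Schematic tiered encoder: layers 1-2 language-specific, layers 3-4 shared
    within the language group, layers 5-6 shared between all languages."""

    def __init__(self):
        super().__init__()
        self.lang_layers = nn.ModuleDict(
            {lang: nn.ModuleList([base_layer() for _ in range(2)]) for lang in GROUPS})
        self.group_layers = nn.ModuleDict(
            {grp: nn.ModuleList([base_layer() for _ in range(2)]) for grp in set(GROUPS.values())})
        self.univ_layers = nn.ModuleList([base_layer() for _ in range(2)])

    def forward(self, x, src_lang):
        # x: embedded source of shape (seq_len, batch, 512); the module choice
        # depends only on the source language, the decoder side is not shown.
        for layer in (*self.lang_layers[src_lang],
                      *self.group_layers[GROUPS[src_lang]],
                      *self.univ_layers):
            x = layer(x)
        return x

encoder = TieredEncoder()
print(encoder(torch.zeros(7, 2, 512), "de").shape)   # torch.Size([7, 2, 512])
```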
For all experiments, Adam opti-\nmizer (Kingma and Ba, 2015), inverse square root\nlearning-rate scheduler with 4,000 warm-up steps,\nand label smoothing (Szegedy et al., 2016) of 0.1\n1https://github.com/TartuNLP/fairseq/\ntree/modular-layer-sharing\nArchitectureLanguage pair resource\nzero-shot low medium-low medium-high high all\nUniversal 33.62 38.12 39.64 43.64 42.32 39.87\nGroup modular (GM)\nEA3–6 35.03 39.48 40.89 44.66 43.31 41.06\nEA5–6 34.52 39.23 40.78 44.59 43.19 40.88\nNo sharing 33.76 38.90 40.75 44.60 43.32 40.73\nLanguage modular (LM)\nEA3–6 34.73 38.79 40.91 44.68 43.36 40.90\nEG3–4 EA5–6 34.57 38.61 40.76 44.91 43.59 40.90\nEG 3–6 34.37 38.56 40.56 44.90 43.42 40.78\nEA5–6 33.81 38.28 40.32 44.75 43.38 40.54\nEG5 EA6 33.51 38.07 40.33 44.72 43.41 40.46\nEG5–6 33.59 37.85 40.32 44.69 43.44 40.43\nNo sharing 32.14 37.19 39.92 44.74 43.50 40.02\nTable 3: Average test set BLEU scores per language pair resource. EG - encoder layer shared within language group, EA -\nencoder layer shared between all languages. Best score(s) per resource (column) in bold.\nare used.\nThe training approach is similar to the propor-\ntional approach in Lyu et al. (2020). The batches\nare created according to the granularity of the\nmodules, so that the correct module can be cho-\nsen for each batch. For the modular models with\nlanguage-specific encoders-decoders, each batch\ncontains only samples from one language pair. For\nthe group-specific models, the batch contains data\nfrom one group pair. We determined by prelim-\ninary experiments that gradient accumulation is\nnecessary for the modular models to learn, which\nwe speculate is due to language-specific modules\nand the aforementioned batch creation strategy.\nSince the universal model does not have that con-\nstraint, a lower gradient accumulation frequency of\n8 is used. For group-specific and universal models,\ntarget language tokens are added to the input sen-\ntence.\nWe used one NVIDIA A100 GPU for training\nthe models. All models were trained with mixed\nprecision.\n3.5 Evaluation\nBLEU (Papineni et al., 2001) score is used as the\nprimary metric for translation quality. It is cal-\nculated using SacreBLEU2(Post, 2018). Beam\nsearch with beam size of 5 is used for decoding.\nSince there are 30 language pairs in total, we group\nthe languages depending on the size of the lan-\nguage pair dataset and mostly look at average test\nset BLEU scores for analysis.4 Results\n4.1 Main results\nAs a baseline, we trained a universal and a modu-\nlar model. We then trained modular models with\n2 uppermost or 4 uppermost layers of the encoder\nshared universally, language-group-specifically or\ntiered (bottom half of the shared layers shared\ngroup-specifically, the rest universally). We also\nexplore language-group-specific modules (group\nmodular model). The main results are visible in\nTable 3 (evaluation results of individual directions\nare in Appendix B). Note that the ordering of rows\nin the table corresponds to the increasing order of\ntotal number of parameters which can be found in\nAppendix A.\n4.1.1 No sharing\nWe can firstly observe that the modular model\nwithout any sharing (LM No sharing) performs\nworse on zero-shot and low-resource language\npairs than the universal model (by 1.48 and 0.93\nBLEU points, respectively). However, when look-\ning at the medium-high and high resource di-\nrections, the modular model performs achieves a\nhigher translation quality (by 1.10 and 1.18 BLEU\npoints, respectively). 
The translation quality in the\nmedium-low language pairs is similar between the\nuniversal and baseline modular model.\n4.1.2 Sharing 2 layers\nCompared to the baseline modular model (LM\nNo sharing), the modular model with 2 shared\nencoder layers (LM EA5–6) performs better on\n2signature: refs:1|case:mixed|eff:no|tok:13a|\nsmooth:exp|version:2.0.0\nzero-shot, low, and medium-low resource language\npairs on average, with medium-high and high re-\nsource language translation quality only slightly\ndecreasing. Overall, we can observe 0.52 BLEU\npoint increase in translation quality of the shared\nlayer model compared to the modular model.\nWe can also see that with sharing 2 upper lay-\ners in language groups (LM EG5–6) or tiered (LM\nEG5 EA6), the results are similar, but on average\nlower by 0.11 and 0.08 BLEU points, respectively.\nSharing layers group-specifically gives a similar\neffect to sharing layers between all languages on\naverage. With group-specific sharing, the lower\nresource languages have a slightly lower BLEU\nscore, and the higher resource languages have a\nslightly higher BLEU score compared to the uni-\nversal layer sharing. We can see the same trend\nwith tiered sharing.\nComparing the language modular models with\n2 shared layers to the universal model, the group\nsharing (LM EG5–6) and tiered (LM EG5 EA6)\nhave slightly worse translation quality in zero- and\nlow-resource language pairs on average, however\nthey outperform the universal model in all of the\nother higher resource directions. The model with\n2 universally shared layers outperforms the uni-\nversal model in all resource levels. On average,\nthe universally shared modular model (LM EA5–\n6) outperforms the universal model by 0.67 BLEU\npoints.\n4.1.3 Sharing 4 layers\nWe can see that sharing 4 layers provides better\ntranslation quality on average than sharing 2 lay-\ners. All of the models (LM EG3–6, LM EG3–\n4 EA5–6, LM EA3–6) outperform the universal\nmodel in all resource types. The universally shared\nmodel (LM EA3–6) performs the best out of the\nthree on average in the zero, low, and medium-low\nresource directions, while the tiered model (LM\nEG3–4 EA5–6) has the best higher resource per-\nformance, even outperforming the baseline modu-\nlar model, although only by a small margin. Over-\nall, the two aforementioned models have the high-\nest average BLEU score of the language modu-\nlar models, outperforming the baseline modular\nmodel by 0.88 points and the universal model by\n1.03 points. Both of them outperform the univer-\nsal model in the zero-shot direction: the univer-\nsally shared modular model (LM EA3–6) by 1.11\nBLEU points and the tiered modular model (LM\nEG3–4 EA5–6) by 0.95 BLEU points.4.1.4 Group modules\nWhen looking at models with group-specific\nmodules (group modular in Table 3), we can see\nthat they outperform the universal model and the\nbaseline language modular model (LM No shar-\ning) on average. The improvement over the base-\nline modular model comes mostly from the in-\ncrease in translation quality in low-resource di-\nrections and the improvement over the universal\nmodel from higher-resource directions, as we also\nobserved in the previous results. We can also ob-\nserve that the group modular models outperform\nthe universal model at all resource levels.\nThe group modular model also benefits from\nhaving layers shared between all languages. 
The\naverage BLEU score increases when shared lay-\ners are added to the group modular model, which\ncan mainly be attributed to the increase in zero-\nshot and low resource translation quality.\nThe group modular model with 4 encoder lay-\ners (GM EA3–6) shared is the best performing\nmodel in zero-shot and low-resource directions,\noutperforming the universal model by 1.41 BLEU\npoints in zero-shot and 1.36 BLEU points in low-\nresource directions on average. On average, it\noutperforms the baseline language modular model\nby 1.04 BLEU points and the baseline universal\nmodel by 1.19 BLEU points. Complete evaluation\nresults are presented in Appendix B.\nAlthough we used language group modules and\nlanguage group sharing in our experiments, we\nfailed to find any meaningful effect on the trans-\nlation quality when translating between language\ngroups versus translating between languages in the\nsame group.\n4.2 Sharing between all languages\nThe previous experiments have shown that group\nsharing and tiered architectures were only slightly\ndifferent from sharing between all languages. Fur-\nthermore, the number of shared layers affects the\nresult more than the type of sharing. Hence, we\ncontinue with experiments on sharing the language\nmodular model layers between all languages to\nfurther study the effect of number of encoder lay-\ners shared on BLEU scores. The results can be\nseen in Table 4.\nWe can see that, on average, sharing more lay-\ners increases the BLEU score steadily until 5 up-\nper encoder layers are shared. Compared to shar-\ning 5 upper layers, sharing all 6 layers slightly de-\nEnc. shared layer(s)Language pair resource\nzero-shot low medium-low medium-high high all\nNo sharing 32.14 37.19 39.92 44.74 43.50 40.02\n6 33.07 37.63 40.09 44.67 43.35 40.23\n5–6 33.81 38.28 40.32 44.75 43.38 40.54\n4–6 34.16 38.43 40.41 44.85 43.43 40.68\n3–6 34.73 38.79 40.91 44.68 43.36 40.90\n2–6 34.97 39.03 40.81 44.94 43.44 41.03\n1–6 34.61 38.70 40.79 44.60 43.23 40.80\nTable 4: Average test set BLEU scores for experiments with encoder layer sharing between all languages in the language\nmodular model.\ncreases the BLEU scores in all language resource\ntypes. This could be attributed to: (1) 1 language-\nspecific layer can better transform the language-\nspecific embeddings to a joint representation than\nnone or (2) more capacity with 5 layers shared and\n1 language-specific compared to sharing all 6.\nThe modular model with encoder layers 2–6\nshared provides a very close BLEU score to the\nbest performing model from the previous set of ex-\nperiments (GM EA3–6). It should be noted how-\never that none of the shared layer models outper-\nform the plain modular model in high resource lan-\nguages on average, although the difference is quite\nsmall. Detailed evaluation results with all transla-\ntion directions for this model are available in Ap-\npendix B.\n4.3 Effect of joint embeddings\nSince the universal model uses joint embed-\ndings and vocabulary and the modular model\nuses language-specific embeddings, we investigate\nwhether this could be the reason for the better\nperformance of the latter. We train a modular\nmodel with shared embeddings, vocabulary, and\nencoder layers while still using language-specific\ndecoders. The results in Table 5 show that on av-\nerage the modular model with shared encoder lay-\ners still outperforms the universal model in all re-\nsource types even with shared vocabulary and em-\nbeddings. 
Although the selection of training data\nfor the SentencePiece model did not take the lan-\nguage data imbalance into account, we can see that\nusing a unified segmentation model and vocabu-\nlary does not significantly decrease the translation\nquality.\n5 Discussion and future work\nMultilingual NMT is a complex problem. On\nthe one hand, we face the problem of poor low-\nresource MT performance of the fully modular\nmodel, and on the other hand, we have the capac-ity issues of the universal model. Our experiments\nshow that we can achieve the best of both worlds\nwith models that combine aspects of both universal\nand modular NMT architectures.\nAlthough including shared layers in the modu-\nlar model has kept the translation quality of higher\nresource language pairs the same or slightly de-\ncreased it, there has been a substantial improve-\nment in the translation quality of low and zero re-\nsource language pairs compared to the plain mod-\nular model. Furthermore, compared to the univer-\nsal model, these shared layer modular models sub-\nstantially increase translation quality in all types of\nlanguage resource directions.\nLanguage-group-specific modules are worth\nconsidering as an architecture, as they provide\nbetter translation quality in all language resource\ntypes compared to the universal model while hav-\ning fewer parameters in total than models with\nlanguage-specific modules. Even with language\ngroup modules, the zero-shot and low-resource\ntranslation quality benefits from layers shared be-\ntween all languages.\nThe layer sharing strategy ultimately depends\non the available computational and data resources.\nHaving language-specific modules could become\nmemory inefficient in massively multilingual sce-\nnarios. Hence, having language group modules or\nlayer sharing is a good compromise between ca-\npacity and model size. Approaching the problem\nfrom the perspective of the universal model, using\nsome degree of modularization is a good way of\nincreasing capacity without sacrificing zero-shot\nperformance or training time.\nOur work also leaves room for future research.\nWhile we focused on encoder layer sharing, de-\ncoder layer sharing is a direction that we want to\ninvestigate in future work comprehensively. In-\ncrementally adding languages is also an important\naspect of modular models and should be inves-\ntigated. In our work, we had a relatively small\nArchitectureLanguage pair resource\nzero-shot low medium-low medium-high high all\nUniversal 33.62 38.12 39.64 43.64 42.32 39.87\nLanguage modular\nshared enc. + emb. + voc. 34.65 39.01 40.67 44.43 43.06 40.77\nshared enc. 34.61 38.70 40.79 44.60 43.23 40.80\nTable 5: Average test set BLEU scores for embedding sharing experiments. shared enc. – shared encoder; shared enc. + emb.\n+ voc. – shared encoder, shared embeddings (incl. decoder embeddings) and joint vocabulary.\ndataset compared to many state-of-the-art systems,\nso it would be beneficial to see how our approaches\nwork in a scenario with significantly more data.\nAs previously mentioned, using significantly more\nlanguages in the system could also set more con-\nstraints on our approaches and would be a promis-\ning direction for future works since it could high-\nlight differences between our proposed methods\nbetter.\n6 Conclusion\nIn this paper, we propose multiple ways of improv-\ning universal models and models with language-\nspecific encoders-decoders by combining features\nof both. 
We experimented with language- and\nlanguage-group-specific modules and sharing lay-\ners of the encoders between all languages, groups\nof languages, or combining them into a tiered ar-\nchitecture. We found that having some layers uni-\nversally shared (between all languages) benefits\nthe zero-shot and low-resource translation qual-\nity of the modular architectures while not hurt-\ning the translation quality of high-resource direc-\ntions. The modular models with some universally\nshared layers outperform the universal models in\nall language-resource types (from zero to high).\nOur best model outperforms the baseline language\nmodular model by 1.04 BLEU points and the uni-\nversal model by 1.19 BLEU points on average.\nReferences\nArivazhagan, Naveen, Ankur Bapna, Orhan Firat,\nDmitry Lepikhin, Melvin Johnson, Maxim Krikun,\nMia Xu Chen, Yuan Cao, George Foster, Colin\nCherry, Wolfgang Macherey, Zhifeng Chen, and\nYonghui Wu. 2019. Massively Multilingual Neu-\nral Machine Translation in the Wild: Findings and\nChallenges. 7.\nDabre, Raj, Chenhui Chu, and Anoop Kunchukuttan.\n2020. A Survey of Multilingual Neural Machine\nTranslation. ACM Computing Surveys , 53(5).\nEscolano, Carlos, Marta R. Costa-juss `a, and Jos ´e A. R.Fonollosa. 2019. From Bilingual to Multilingual\nNeural Machine Translation by Incremental Train-\ning. In Proceedings of the 57th Annual Meeting of\nthe Association for Computational Linguistics: Stu-\ndent Research Workshop , pages 236–242, Strouds-\nburg, PA, USA. Association for Computational Lin-\nguistics.\nEscolano, Carlos, Marta R. Costa-juss `a, Jos ´e A. R.\nFonollosa, and Mikel Artetxe. 2020. Multilin-\ngual Machine Translation: Closing the Gap between\nShared and Language-specific Encoder-Decoders. 4.\nFan, Angela, Shruti Bhosale, Holger Schwenk, Zhiyi\nMa, Ahmed El-Kishky, Siddharth Goyal, Man-\ndeep Baines, Onur Celebi, Guillaume Wenzek,\nVishrav Chaudhary, Naman Goyal, Tom Birch, Vi-\ntaliy Liptchinsky, Sergey Edunov, Edouard Grave,\nMichael Auli, and Armand Joulin. 2020. Beyond\nEnglish-Centric Multilingual Machine Translation.\n10.\nHabash, Nizar and Jun Hu. 2009. Improving Arabic-\nChinese Statistical Machine Translation using En-\nglish as Pivot Language. In EACL 2009 - 4th Work-\nshop on Statistical Machine Translation, Proceed-\nings of theWorkshop .\nJohnson, Melvin, Mike Schuster, Quoc V . Le, Maxim\nKrikun, Yonghui Wu, Zhifeng Chen, Nikhil Tho-\nrat, Fernanda Vi ´egas, Martin Wattenberg, Greg Cor-\nrado, Macduff Hughes, and Jeffrey Dean. 2016.\nGoogle’s Multilingual Neural Machine Translation\nSystem: Enabling Zero-Shot Translation. 11.\nKingma, Diederik P. and Jimmy Lei Ba. 2015. Adam:\nA method for stochastic optimization. In 3rd Inter-\nnational Conference on Learning Representations,\nICLR 2015 - Conference Track Proceedings .\nKoehn, Philipp. 2005. Europarl : A Parallel Corpus for\nStatistical Machine Translation. MT Summit , 11.\nKudo, Taku and John Richardson. 2018. Sentence-\nPiece: A simple and language independent subword\ntokenizer and detokenizer for neural text processing.\nInEMNLP 2018 - Conference on Empirical Methods\nin Natural Language Processing: System Demon-\nstrations, Proceedings .\nLiao, Junwei, Yu Shi, Ming Gong, Linjun Shou, Hong\nQu, and Michael Zeng. 2021. Improving Zero-shot\nNeural Machine Translation on Language-specific\nEncoders- Decoders. In 2021 International Joint\nConference on Neural Networks (IJCNN) , pages 1–\n8. IEEE, 7.\nLyu, Sungwon, Bokyung Son, Kichang Yang, and\nJaekyoung Bae. 2020. 
Revisiting Modularized\nMultilingual NMT to Meet Industrial Demands. In\nProceedings of the 2020 Conference on Empirical\nMethods in Natural Language Processing (EMNLP) ,\npages 5905–5918, Online, 11. Association for Com-\nputational Linguistics.\nOtt, Myle, Sergey Edunov, Alexei Baevski, Angela\nFan, Sam Gross, Nathan Ng, David Grangier, and\nMichael Auli. 2019. Fairseq: A fast, extensible\ntoolkit for sequence modeling. In NAACL HLT 2019\n- 2019 Conference of the North American Chapter of\nthe Association for Computational Linguistics: Hu-\nman Language Technologies - Proceedings of the\nDemonstrations Session .\nPapineni, Kishore, Salim Roukos, Todd Ward, and Wei-\nJing Zhu. 2001. BLEU: a Method for Automatic\nEvaluation of Machine Translation. In Proceedings\nof the 40th Annual Meeting on Association for Com-\nputational Linguistics - ACL ’02 , page 311, Morris-\ntown, NJ, USA. Association for Computational Lin-\nguistics.\nPost, Matt. 2018. A Call for Clarity in Reporting\nBLEU Scores. In WMT 2018 - 3rd Conference\non Machine Translation, Proceedings of the Confer-\nence, volume 1.\nSennrich, Rico, Barry Haddow, and Alexandra Birch.\n2016. Neural machine translation of rare words with\nsubword units. In 54th Annual Meeting of the As-\nsociation for Computational Linguistics, ACL 2016 -\nLong Papers , volume 3.\nSteinberger, Ralf, Bruno Pouliquen, Anna Widiger,\nCamelia Ignat, Toma ˇz Erjavec, Dan Tufis ¸, and\nD´aniel Varga. 2006. The JRC-Acquis: A multilin-\ngual aligned parallel corpus with 20+ languages. In\nProceedings of the 5th International Conference on\nLanguage Resources and Evaluation, LREC 2006 .\nSzegedy, Christian, Vincent Vanhoucke, Sergey Ioffe,\nJon Shlens, and Zbigniew Wojna. 2016. Rethinking\nthe Inception Architecture for Computer Vision. In\nProceedings of the IEEE Computer Society Confer-\nence on Computer Vision and Pattern Recognition ,\nvolume 2016-December.\nTiedemann, J ¨org. 2012. Parallel data, tools and inter-\nfaces in OPUS. In Proceedings of the 8th Interna-\ntional Conference on Language Resources and Eval-\nuation, LREC 2012 .\nVaswani, Ashish, Noam Shazeer, Niki Parmar, Jakob\nUszkoreit, Llion Jones, Aidan N. Gomez, Łukasz\nKaiser, and Illia Polosukhin. 2017. Attention is all\nyou need. In Advances in Neural Information Pro-\ncessing Systems , volume 2017-December.Zhang, Biao, Philip Williams, Ivan Titov, and Rico\nSennrich. 2020. Improving Massively Multilingual\nNeural Machine Translation and Zero-Shot Transla-\ntion.\nA Number of parameters\nThe number of parameters of the models can be\nseen in Table 6.\nArchitecture Total params. 
Inference params.\nUniversal 60,526,080 60,526,080\nGroup modular\nEA3–6 108,442,624 60,526,080\nEA5-6 114,747,392 60,526,080\nNo sharing 121,052,160 60,526,080\nLanguage modular\nEA3-6 250,938,368 52,331,008\nEA5-6 EG3-4 257,243,136 52,331,008\nEG3–6 263,547,904 52,331,008\nEA5-6 282,462,208 52,331,008\nEA6 EG5 285,614,592 52,331,008\nEG5-6 288,766,976 52,331,008\nNo sharing 313,986,048 52,331,008\nTable 6: Number of parameters\nB Detailed evaluation results\nTables 7, 8, 9, 10, and 11 provide detailed evalua-\ntion results for selected experiments.\nsrctgt\nen de da fr es pt\nen – 38.84 40.39 48.60 51.07 45.32\nde 46.41 – 32.44 38.60 39.08 34.41\nda 45.60 30.57 – 36.77 37.32 32.77\nfr 49.28 32.19 31.65 – 42.95 39.65\nes 52.06 32.66 32.63 44.02 – 41.13\npt 49.17 31.37 31.74 43.25 44.09 –\nTable 7: Universal model test set BLEU scores.\nsrctgt\nen de da fr es pt\nen – 1.30 2.14 1.25 1.30 -0.30\nde 1.44 – 0.98 1.31 1.15 -0.38\nda 0.56 -0.32 – -1.56 -1.60 -2.93\nfr 1.07 0.73 1.03 – 1.04 0.16\nes 1.61 0.98 1.17 0.50 – 0.12\npt -1.49 -2.84 -2.55 -0.77 -0.60 –\nTable 8: Improvement of the baseline language modular\nmodel over the universal model on test set in BLEU points.\nsrctgt\nen de da fr es pt\nen – 0.76 1.78 1.44 0.55 1.29\nde 1.00 – 1.52 1.12 1.13 1.37\nda 0.98 0.91 – 1.41 0.87 1.28\nfr 0.79 0.82 1.62 – 0.75 1.51\nes 1.31 1.11 1.87 1.25 – 0.98\npt 1.38 1.14 1.65 1.34 0.95 –\nTable 9: Improvement of the group modular model with layers 3–6 shared (group modular EA3–6) over the universal model\non test set in BLEU points.\nsrctgt\nen de da fr es pt\nen – 0.84 1.75 1.49 1.10 -0.62\nde 1.40 – 1.30 1.19 1.43 -0.44\nda 2.30 1.25 – 1.93 1.59 0.35\nfr 0.94 0.88 2.10 – 1.26 0.18\nes 1.70 1.06 1.79 1.26 – 0.22\npt 1.73 0.80 1.70 1.07 1.33 –\nTable 10: Improvement of the modular model with layers 2–6 shared (EA2–6) over the universal model on test set in BLEU\npoints.\nLang. 
pair UniversalGroup modular Language modular\nEA3–6 EA5–6 – EA3–6 EG3–4 EA5–6 EG3–6 EA5–6 EG5 EA6 EG5–6 –\nen–de 38.84 39.6 39.57 39.77 39.96 40.11 39.8 39.67 39.96 39.83 40.14\nde–en 46.41 47.41 47.25 47.32 47.76 47.8 47.78 47.88 47.56 47.72 47.85\nen–da 40.39 42.17 41.99 42.37 42.36 42.65 42.5 42.52 42.45 42.68 42.53\nda–en 45.6 46.58 46.77 46.62 47.86 47.91 47.52 46.93 47 47.23 46.16\nen–fr 48.6 50.04 50.04 49.9 49.78 50.15 49.78 49.77 50.08 49.84 49.85\nfr–en 49.28 50.07 49.84 50.32 50.43 50.56 50.49 50.57 50.27 50.45 50.35\nen–es 51.07 51.62 52.03 52.01 51.92 52.22 52.34 52.18 52.03 52.07 52.37\nes–en 52.06 53.37 53.27 53.58 53.72 53.77 53.84 53.89 53.69 53.7 53.67\nen–pt 45.32 46.61 46.49 46.12 45.11 44.73 44.58 45.04 45.07 44.54 45.02\npt–en 49.17 50.55 50.39 50.53 50.13 49.95 49.95 48.97 48.82 48.87 47.68\nde–da 32.44 33.96 33.66 33.56 34.08 34.11 33.67 33.93 33.75 33.58 33.42\nda–de 30.57 31.48 31.42 31.21 31.89 31.53 31.27 30.85 30.8 30.95 30.25\nde–fr 38.6 39.72 39.7 39.7 39.56 39.92 39.72 39.77 39.72 39.97 39.91\nfr–de 32.19 33.01 32.72 32.93 32.68 32.98 32.97 32.64 32.89 32.83 32.92\nde–es 39.08 40.21 40.12 40.2 39.94 40.44 40.28 40.18 40.07 40.06 40.23\nes–de 32.66 33.77 33.61 33.29 33.44 33.63 33.76 33.66 33.55 33.45 33.64\nde–pt 34.41 35.78 35.72 35.14 34.27 34.35 34.28 34.59 34.33 34.18 34.03\npt–de 31.37 32.51 32.35 32.17 31.55 31.51 31.52 30.38 30.03 30.02 28.53\nda–fr 36.77 38.18 37.91 37.94 37.99 38 38.26 37.03 36.78 36.82 35.21\nfr–da 31.65 33.27 32.54 31.49 33.67 33.11 32.8 33.65 33.37 32.66 32.68\nda–es 37.32 38.19 38.31 37.84 38.47 38.56 38.59 37.39 37.09 37.52 35.72\nes–da 32.63 34.5 33.41 32.46 34.52 34.81 33.78 34.62 34.14 34.23 33.8\nda–pt 32.77 34.05 33.78 33.5 33.19 32.57 32.72 31.79 31.66 31.74 29.84\npt–da 31.74 33.39 32.57 31.24 32.76 32.34 32.38 31.44 31.13 30.86 29.19\nfr–es 42.95 43.7 43.78 43.78 43.86 44.18 44.09 43.73 43.83 43.86 43.99\nes–fr 44.02 45.27 44.74 44.76 45.18 45.21 45.08 44.88 45.14 44.98 44.52\nfr–pt 39.65 41.16 41.08 40.84 40.13 39.57 39.79 39.88 39.97 39.64 39.81\npt–fr 43.25 44.59 44.27 44.24 44.19 43.94 43.79 43.16 43.14 42.99 42.48\nes–pt 41.13 42.11 42.15 42.38 41.65 41.39 41.36 41.19 41.42 41.04 41.25\npt–es 44.09 45.04 44.88 44.78 45.09 44.95 44.61 44.06 44.1 44.46 43.49\nTable 11: Test set BLEU scores for the main experiments. The best result of each row is in bold.", "main_paper_content": null }
{ "decision": "Unknown", "reviews": [] }
0
0
[]
[]
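The record above closes with per-direction BLEU matrices (Table 7 and Table 11) and with improvement matrices over the universal model (Tables 8-10). For readers who want to recompute such deltas, a minimal sketch is given below; it is not taken from the paper in the record, and the language list, the dictionary layout and the function name are illustrative assumptions.

```python
# Minimal sketch (assumed layout, not from the record above): per-direction
# BLEU deltas in the style of Tables 8-10, i.e. system score minus universal score.
def improvement_matrix(universal, system):
    """Both inputs map (src, tgt) language pairs to BLEU in percent.
    Returns the delta for every direction present in both score tables."""
    shared = universal.keys() & system.keys()
    return {pair: round(system[pair] - universal[pair], 2) for pair in shared}

# One direction copied from the tables above as a sanity check:
universal = {("en", "de"): 38.84}                  # universal model, Table 7 / Table 11
language_modular_baseline = {("en", "de"): 40.14}  # last Language modular column of Table 11
print(improvement_matrix(universal, language_modular_baseline))
# {('en', 'de'): 1.3}  -- consistent with the en->de cell of Table 8 (1.30)
```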
{ "id": "pkMMH7cpvbz", "year": null, "venue": "EAMT 2011", "pdf_link": "https://aclanthology.org/2011.eamt-1.37.pdf", "forum_link": "https://openreview.net/forum?id=pkMMH7cpvbz", "arxiv_id": null, "doi": null }
{ "title": "Advancements in Arabic-to-English Hierarchical Machine Translation", "authors": [ "Matthias Huck", "David Vilar", "Daniel Stein", "Hermann Ney" ], "abstract": null, "keywords": [], "raw_extracted_content": "Advancements in Arabic-to-English Hierarchical Machine Translation\nMatthias Huck1andDavid Vilar1,2andDaniel Stein1andHermann Ney1\n1Human Language Technology and Pattern2DFKI GmbH\nRecognition Group, RWTH Aachen University Berlin, Germany\n<surname>@cs.rwth-aachen.de [email protected]\nAbstract\nIn this paper we study several advanced\ntechniques and models for Arabic-to-\nEnglish statistical machine translation. We\nexamine how the challenges imposed by\nthis particular language pair and transla-\ntion direction can be successfully tack-\nled within the framework of hierarchical\nphrase-based translation.\nWe extend the state-of-the-art with a novel\ncross-system and cross-paradigm lightly-\nsupervised training approach. In addition,\nfor following recently developed tech-\nniques we provide a concise review, an em-\npirical evaluation, and an in-depth analy-\nsis: soft syntactic labels, a discriminative\nword lexicon model, additional reorder-\nings, and shallow rules. We thus bring to-\ngether complementary methods that previ-\nously have only been investigated in iso-\nlation and mostly on different language\npairs.\nCombinations of the methods yield signifi-\ncant improvements over a baseline using a\nusual set of models. The resulting hierar-\nchical systems perform competitive on the\nlarge-scale NIST Arabic-to-English trans-\nlation task.\n1 Introduction\nSince its introduction in (Chiang, 2005), hierar-\nchical phrase-based translation has become a stan-\ndard approach in statistical machine translation.\nMany additional features and enhancements to\nthe hierarchical paradigm have been proposed or\nc/circlecopyrt2011 European Association for Machine Translation.adopted from the conventional phrase-based ap-\nproach, but the effect of the various methods is\ntypically merely evaluated separately. Neither are\nthey compared to each other, nor is it clear whether\ncombining the methods would be beneficial.\nThe aim of the work presented in this pa-\nper is to explore the effectiveness of a state-of-\nthe-art hierarchical phrase-based system for large-\nscale Arabic-to-English statistical machine trans-\nlation (SMT). Within this framework, we inves-\ntigate the impact of several recently developed\nmethods on the translation performance. Not only\ndo we analyze them separately, but also exam-\nine whether their combination further increases the\noutput quality.\nMore specifically, we focus on three models:\nFirst, we integrate syntactic information in order\nto improve the linguistic structure of the transla-\ntion. Second, we utilize a discriminatively trained\nextended word lexicon to obtain a better lexical\nselection based on global source sentence con-\ntext. Third, we introduce a jump model which is\nbased on reordering enhancements to the hierar-\nchical grammar to allow for more flexibility during\nthe search process.\nThe Arabic-English language pair is known to\nbehave more monotone than other language pairs,\ne.g. Urdu-English or Chinese-English. In a con-\ntrastive experiment done by Birch et al. 
(2009),\na hierarchical system does not outperform a con-\nventional phrase-based system for Arabic-English.\nOn the other hand, a lattice-based hierarchical\nsystem (de Gispert et al., 2010) has been the\nbest-performing system at the 2009 NIST Arabic-\nEnglish evaluation campaign.1Noticing these\n1http://www.itl.nist.gov/iad/mig/tests/\nmt/2009/ResultsRelease/currentArabic.\nhtmlMik el L. F orcada, Heidi Depraetere, Vincen t V andeghinste (eds.)\nPr o c e e dings of the 15th Confer enc e of the Eur op e an Asso ciation for Machine T r anslation , p. 273\u0015280\nLeuv en, Belgium, Ma y 2011\nfacts, we also want to investigate to what extent the\ntranslation quality relies on the recursion depth for\nhierarchical rules. In order to separate the effect\nof the recursion level, we conduct all experiments\nwith an unrestricted hierarchical grammar as well\nas with a depth-restricted one.\nFinally, we perform a novel cross-system and\ncross-paradigm variant of lightly-supervised train-\ning (Schwenk, 2008). We make use of bitexts\nthat have been built by automatic translation of\nlarge amounts of monolingual data with a conven-\ntional phrase-based system to improve our transla-\ntion model. We propose to integrate this kind of\ndata as purely lexicalized rules solely while stick-\ning to the set of hierarchical rules that is extracted\nfrom the more reliable human-generated parallel\ndata.\n2 Overview\nThe paper is structured as follows: First we give\nan outline of some previous work that is related to\nours (Section 3). We then present the methods we\napply in the following sections:\nWe introduce soft syntactic labels in Section 4,\nan approach to integrate syntactic information in\na non-obtrusive manner into hierarchical search as\nan additional model. The discriminatively trained\nextended word lexicon model that is employed in\nthis work is discussed in Section 5. Section 6 con-\ntains a description of the reordering enhancement\nwe apply to the hierarchical phrase-based model.\nIn Section 7 we describe the limitation of the re-\ncursion depth for hierarchical rules. Section 8\npresents an effective and easily implementable\nway to integrate information extracted from unsu-\npervised training data into the translation model of\na hierarchical phrase-based system.\nWe present the experimental setup and discuss\nthe results obtained with the various configurations\nin Section 9. Finally we sum up our findings in\nSection 10.\n3 Related Work\nHierarchical phrase-based translation has been pi-\noneered by David Chiang (Chiang, 2005) with his\nHiero system. He induces a weighted synchronous\ncontext-free grammar from parallel text, the search\nis typically carried out using the cube pruning al-\ngorithm.\nSoft syntactic labels. Soft syntactic labels have\nbeen first introduced by Venugopal et al. (2009)as an extension to their previous SAMT approach.\nIn SAMT, the generic non-terminal of the hier-\narchical model is substituted with syntactic cat-\negories. Using soft syntactic labels, these addi-\ntional non-terminals are considered in a probabilis-\ntic way, no hard constraints are imposed. Many\nother groups have presented similar approaches to\naugment hierarchical systems with syntactic infor-\nmation recently, e.g. Chiang (2010), Hoang and\nKoehn (2010), Stein et al. (2010), and Baker et al.\n(2010), among others. Results on Arabic-English\ntasks are rarely reported.\nDiscriminative word lexicon. 
Several variants of\ndiscriminatively trained extended lexicon models\nhave been utilized effectively within quite differ-\nent statistical machine translation systems. Mauser\net al. (2009) integrate a discriminative as well\nas a trigger-based extended lexicon model into a\nphrase-based system, Huck et al. (2010) report re-\nsults within hierarchical decoding, and Jeong et\nal. (2010) use a discriminative lexicon model with\nmorphological and dependency features in a treelet\ntranslation system.\nReordering extensions. Some techniques to ma-\nnipulate the reordering capabilities of hierarchi-\ncal systems by modifying the grammar have been\npublished lately. Iglesias et al. (2009) investigate\na maximum phrase jump of 1 (MJ1) reordering\nmodel. They include a swap rule, but withdraw all\nhierarchical phrases. He et al. (2010) combine an\nadditional BTG-style swap rule with a maximum\nentropy based lexicalized reordering model and\nachieve improvements on a Chinese-English task.\nVilar et al. (2010) apply IBM-style reordering en-\nhancements successfully to a German-English Eu-\nroparl task.\nShallow rules. The way to restrict the parsing\ndepth we apply in this work has been introduced by\nIglesias et al. (2009), along with methods to filter\nthe hierarchical rule set.\nLightly-supervised training. Large-scale\nlightly-supervised training for SMT as we define\nit in this paper has been introduced by Schwenk\n(2008). Schwenk automatically translates a large\namount of monolingual data with an initial Moses\n(Koehn et al., 2007) baseline system from French\ninto English. He uses the resulting unsupervised\nbitexts as additional training corpora to improve\nthe baseline system. In Schwenk’s original work,\nan additional bilingual dictionary is added to\nthe baseline. With lightly-supervised training,274\nSchwenk achieves improvements of around one\nBLEU point over the baseline. In a later work\n(Schwenk and Senellart, 2009) he applies the\nsame method for translation model adaptation on\nan Arabic-French task. We extend this line of\nresearch by investigating the impact of lightly-\nsupervised training across different SMT systems\nand translation paradigms.\n4 Soft Syntactic Labels\nA possibility to enhance the hierarchical model is\nto extend the set of non-terminals from the origi-\nnal generic symbol to a richer, syntax-oriented set.\nHowever, augmenting the set of non-terminals also\nrestricts the parsing space and thus we alter the\nset of possible translations. Furthermore, it can\nhappen that no parse can be found for some in-\nput sentences. To address this issue, our extrac-\ntion is extended in a similar way as in the work\nof Venugopal et al. (2009): for every rule in the\ngrammar, we store information about the possible\nnon-terminals that can be substituted in place of\nthe generic non-terminal X, together with a prob-\nability for each combination of non-terminal sym-\nbols (cf. Figure 1).\nDuring decoding, we compute two additional\nquantities for each derivation d. The first one is\ndenoted by ph(Y|d)(hfor “head”) and reflects\nthe probability that the derivation dunder con-\nsideration of the additional non-terminal symbols\nhasYas its starting symbol. This quantity is\nneeded for computing the probability psyn(d)that\nthe derivation conforms with the extended set of\nnon-terminals. Let rbe the top rule in deriva-\ntiond, withnnon-terminal symbols. For each of\nthese non-terminal symbols we substitute the sub-\nderivationsd1,...,d ninr. 
Denoting with S the extended set of non-terminals, p_syn(d) is defined as\np_{syn}(d) = \sum_{s \in S^{n+1}} \Big( p(s \mid r) \cdot \prod_{k=2}^{n+1} p_h(s[k] \mid d_{k-1}) \Big) .   (1)\nWe use the notation [·] to address the elements of a vector.\nThe probability p_h is computed in a similar way, but the summation index is restricted only to those vectors of non-terminal substitutions where the left-hand side is the one for which we want to compute the probability:\np_h(Y \mid d) = \sum_{s \in S^{n+1} : s[1] = Y} \Big( p(s \mid r) \cdot \prod_{k=2}^{n+1} p_h(s[k] \mid d_{k-1}) \Big) .   (2)\n[Figure 1 graphic: a rule X → ⟨u X v X w⟩ applied in a derivation d whose two gaps are filled by the sub-derivations d1 and d2; the rule carries the label-vector probabilities p(A → u D v C w | r), p(B → u A v B w | r), p(C → u C v D w | r), and the sub-derivations carry the head-label probabilities {p(A|d1), p(D|d1)} and {p(B|d2), p(C|d2), p(E|d2)}.]\nFigure 1: Visualization of the soft syntactic labels approach (Section 4). For each derivation, the probabilities of non-terminal labels are computed.\n5 Discriminative Word Lexicon\nWe integrate a discriminative word lexicon (DWL) model that is very similar to the one presented by Mauser et al. (2009). This type of extended lexicon model accounts for global source sentence context to make predictions of target words. It goes beyond the capabilities of the standard model set of typical hierarchical systems as word lexicons and phrase models (even with hierarchical phrases) normally do not consider context beyond the phrase boundaries.\nThe DWL model acts as a classifier that predicts the words contained in the translation from the words given in the source sentence. The sequential order or any other structural interdependencies between the words on the source side as well as on the target side are ignored.\nLet V_F be the source vocabulary and V_E be the target vocabulary. Then, we represent the source side as a bag of words by employing a count vector F = (..., F_f, ...) of dimension |V_F|, and the target side as a set of words by employing a binary vector E = (..., E_e, ...) of dimension |V_E|. Note that F_f is a count and E_e is a bit. The model estimates the probability p(E|F), i.e. that the target sentence consists of a set of target words given a bag of source words. For that purpose, individual models p(E_e|F) are trained for each target word e ∈ V_E (i.e. target word e should be included in the sentence, or not), which decomposes the problem into many separate two-class classification problems in the way shown in Equation (3).\np(E \mid F) = \prod_{e \in V_E} p(E_e \mid F)   (3)\nEach of the individual classifiers is modeled as a log-linear model\np(E_e \mid F) = \frac{e^{g(E_e, F)}}{\sum_{\tilde{E}_e \in \{0,1\}} e^{g(\tilde{E}_e, F)}}   (4)\nwith the function\ng(E_e, F) = E_e \lambda_e + \sum_{f \in V_F} E_e F_f \lambda_{ef} ,   (5)\nwhere the \lambda_{ef} represent lexical weights and the \lambda_e are prior weights. Though the log-linear model offers a high degree of flexibility concerning the kind of features that may be used, we simply use the source words as features. The feature weights for the individual classifiers are trained with the improved RProp+ algorithm (Igel and Hüsken, 2003).\n6 IBM-style Reorderings for Hierarchical Phrase-based Translation\nWe extend the hierarchical phrase-based system with a jump model as proposed by Vilar et al. (2010), to permit jumps across whole blocks of symbols, and to facilitate a less restricted placement of phrases within the target sequence. The model is made up of additional, non-lexicalized rules and a distance-based jump cost, and allows for constrained reorderings. 
It is comparable to\nconventional phrase-based IBM-style reordering\n(Zens et al., 2004).\nThe hierarchical model comprises hierarchi-\ncal rules with up to two non-neighboring non-\nterminals on their right-hand side as built-in re-\nordering mechanism. An initial rule\nS→/angbracketleftX∼0,X∼0/angbracketright (6)\nis engrafted, as well as a special glue rule that the\nsystem can use for serial concatenation of phrases\nas in monotonic phrase-based translation (Chiang,\n2005):\nS→/angbracketleftS∼0X∼1,S∼0X∼1/angbracketright (7)Sdenotes the start symbol of the grammar, the\nXsymbol is a generic non-terminal which is used\non all left-hand sides of the rules that are extracted\nfrom the training corpus and as a placeholder for\nthe gaps within the right-hand side of hierarchi-\ncal rules.∼defines a one-to-one relation between\nthe non-terminals within the source part and the\nnon-terminals within the target part of hierarchical\nrules.\nTo enable IBM-style reorderings with a window\nlength of 1, we replace the two rules from Equa-\ntions (6) and (7) by the rules given in Equation (8):\nS→/angbracketleftM∼0,M∼0/angbracketright\nS→/angbracketleftM∼0S∼1,M∼0S∼1/angbracketright†\nS→/angbracketleftB∼0M∼1,M∼1B∼0/angbracketright‡\nM→/angbracketleftX∼0,X∼0/angbracketright\nM→/angbracketleftM∼0X∼1,M∼0X∼1/angbracketright†\nB→/angbracketleftX∼0,X∼0/angbracketright\nB→/angbracketleftB∼0X∼1,B∼0X∼1/angbracketright†(8)\nIn these rules, the Mnon-terminal represents a\nblock that will be translated in a monotonic way,\nand theBis a “back jump”. Although these two\nsymbols could be joined into one (the production\nrules are the same for both), it is useful to keep\nthem separate to facilitate the computation of the\ndistortion costs. The reordering extensions can\neasily be adapted to the shallow grammar that will\nbe described in the following section.\nWe add a binary feature that fires for the rules\nthat act analogous to the glue rule (†). Addition-\nally, a distance penalty based on the jump width\nis computed during decoding when the back jump\nrule (‡) is applied.\n7 Deep Rules vs. Shallow Rules\nIn order to constrain the search space of the de-\ncoder, we can modify the grammar so that the\ndepth of the hierarchical recursion is restricted to\none (Iglesias et al., 2009).\nWe replace the generic non-terminal Xby two\ndistinct non-terminals XH andXP. By changing\nthe left-hand sides of the rules, we allow lexical\nphrases only to be derived from XP, and hierar-\nchical phrases only from XH. On all right-hand\nsides of hierarchical rules, the Xis replaced by\nXP. Gaps within hierarchical phrases can thus\nonly be filled with purely lexicalized phrases, but\nnot a second time with hierarchical phrases.276\nNote that the initial rule (Eqn. 6) has to be sub-\nstituted with\nS→/angbracketleftXP∼0,XP∼0/angbracketright\nS→/angbracketleftXH∼0,XH∼0/angbracketright,(9)\nand the glue rule (Eqn. 7) has to be substituted with\nS→/angbracketleftS∼0XP∼1,S∼0XP∼1/angbracketright\nS→/angbracketleftS∼0XH∼1,S∼0XH∼1/angbracketright.(10)\nWe refer to this kind of rule set and the parses\nproduced with such a grammar as shallow , in con-\ntrast to the standard rule set and parses which we\ndenote as deep .\n8 Improving the Translation Model with\nLightly-supervised Training\nIn this section, we propose a novel cross-system\nand cross-paradigm variant of lightly-supervised\ntraining. 
More specifically, we extend the trans-\nlation model of the hierarchical system using un-\nsupervised parallel training data derived from au-\ntomatic translations produced with a conventional\nphrase-based system. The additional bitexts are\ncreated by translating large amounts of monolin-\ngual source language data with a conventional\nphrase-based system. Word alignments are trained\nto be able to extract phrases from the data. Note\nthat, unlike Schwenk (2008), we do not try to im-\nprove the same system which was used to create\nthe unsupervised data but rather change the trans-\nlation paradigm, in order to combine the strengths\nof both approaches.\nConventional phrase-based systems are usually\nable to correctly translate short sequences in a lo-\ncal context, but often have problems in producing\na fluent sentence structure across long distances\nThus, we decided to include lexical phrases from\nthe unsupervised data, but to restrict the set of\nphrases with non-terminals to those that were de-\nrived from the more reliable human-generated par-\nallel data.\nTo our knowledge, this is the first time that\nlightly-supervised training is applied to a hierar-\nchical system.\n9 Experiments\nWe use the open source Jane toolkit (Vilar et al.,\n2010) for our experiments, a hierarchical phrase-\nbased translation software written in C++. We give\na detailed description of our setup to ease repro-\nduction by the scientific community.9.1 Experimental Setup\nThe phrase table of the baseline system has been\nproduced from a parallel training corpus of 2.5M\nArabic-English sentence pairs. Word alignments\nin both directions were trained with GIZA ++and\nsymmetrized according to the refined method that\nwas proposed by Och and Ney (2003). To reduce\nthe size of the phrase table, a minimum count cut-\noff of one and an extraction pruning threshold of\n0.1 have been applied to hierarchical phrases.\nArabic English\nSentences 2 514 413\nRunning words 54 324 372 55 348 390\nV ocabulary 264 528 207 780\nSingletons 115 171 91 390\nTable 1: Data statistics for the preprocessed\nArabic-English parallel training corpus. In the cor-\npus, numerical quantities have been replaced by a\nspecial category symbol.\nThe models integrated into our baseline sys-\ntem are: phrase translation probabilities and lex-\nical translation probabilities at phrase level, each\nfor both translation directions, length penalties on\nword and phrase level, three binary features for hi-\nerarchical phrases, glue rule, and rules with non-\nterminals at the boundaries, a binary feature that\nfires if the phrase has a source length of only one\nword, three binary features marking phrases that\nhave been seen at least two, four, or six times, re-\nspectively, and an n-gram language model.\nOur setups use a 4-gram language model with\nmodified Kneser-Ney smoothing. It was created\nwith the SRILM toolkit (Stolcke, 2002) and was\ntrained on a large collection of monolingual data\nincluding the target side of the parallel corpus and\nthe LDC Gigaword v4 corpus. We measured a per-\nplexity of 96.9 on the four reference translations of\nMT06.\nThe scaling factors of the log-linear model com-\nbinations have been optimized with MERT on the\nMT06 NIST test corpus. MT08 was employed as\nheld-out test data. 
Detailed statistics about the par-\nallel training data are given in Table 1, for the de-\nvelopment and the test corpus in Table 2.\nTo obtain the syntactic annotation for the soft\nsyntactic labels, the Berkeley Parser (Petrov et al.,\n2006) has been applied.\nThe DWL model has been trained on a manually\nselected high-quality subset of the parallel data of277\ndev (MT06) test (MT08)\nSentences 1 797 1 360\nRunning words 49 677 45 095\nV ocabulary 9 274 9 387\nOOV [%] 0.5 0.4\nTable 2: Data statistics for the preprocessed Arabic\npart of the dev and test corpora. In the corpus, nu-\nmerical quantities have been replaced by a special\ncategory symbol.\n277 234 sentence pairs. The number of features\nper target word which are considered during train-\ning is equal to the size of the source vocabulary of\nthe training corpus, i.e. 122 592 in this case. We\ncarried out 100 training iterations per target word\nwith the improved RProp+ algorithm. After train-\ning, the full DWL model was pruned with a thresh-\nold of 0.1. The pruned model contains on average\n80 features per target word.\n9.2 Unsupervised Data\nThe unsupervised data that we integrate has been\ncreated by automatic translations of parts of the\nArabic LDC Gigaword corpus (mostly from the\nHYT collection) with a conventional phrase-based\nsystem. Translating the monolingual Arabic data\nhas been performed by LIUM, Le Mans, France.\nWe thank Holger Schwenk for kindly providing the\ntranslations.\nThe score computed by the decoder for each\ntranslation has been normalized with respect to the\nsentence length and used to select the most reliable\nsentence pairs. We report the statistics of the unsu-\npervised data in Table 3. Word alignments for the\nunsupervised data have been produced in the same\nway as for the baseline bilingual training data.\nArabic English\nSentences 4 743 763\nRunning words 121 478 207 134 227 697\nV ocabulary 306 152 237 645\nSingletons 130 981 102 251\nTable 3: Data statistics for the Arabic-English un-\nsupervised training corpus after selection of the\nmost reliable sentence pairs. In the corpus, nu-\nmerical quantities have been replaced by a special\ncategory symbol.\nUsing the unsupervised data in the way de-\nscribed in Section 8 increases the number of non-hierarchical phrases by roughly 30%, compared to\nthe baseline system where the phrase table is ex-\ntracted from the human-generated bitexts only.\n9.3 Translation Results\nThe empirical evaluation of all our systems is pre-\nsented in Table 4. All methods are evaluated\non the two standard metrics BLEU and TER and\nchecked for statistical significance over the base-\nline. The confidence intervals have been computed\nusing bootstrapping for BLEU and Cochran’s ap-\nproximate ratio variance for TER (Leusch and Ney,\n2009). We report experimental results on both\nthe development and the test corpus (MT06 and\nMT08, respectively). The figures with deep and\nwith shallow rules are set side by side in separate\ncolumns to facilitate a direct comparison between\nthem. All the setups given in separate rows exist in\na deep and a shallow variant.\nOne of the objectives is to compare the deep and\nshallow setups. This has an important effect in\npractice, as the shallow setup is much more effi-\ncient in terms of computational effort, with speed-\nups of 5 to 10 when compared to the (standard)\ndeep setup. 
We found that the shallow system\ntranslation quality is comparable to the deep sys-\ntem.\nThe inclusion of the unsupervised data leads to\na gain on the unseen test set of +0.7% BLEU / -\n0.6% TER absolute in the deep setup and +0.8%\nBLEU / -0.2% TER absolute in the shallow setup.\nThis shows that the proposed approach is benefi-\ncial and allows to use available monolingual data\nto improve the performance of the system.\nA further clear increase in translation quality\nis achieved by adding the extended word lexicon\nmodel. Both the deep and the shallow setup ben-\nefit from the incorporation of the discriminative\nword lexicon, with gains of about the same or-\nder of magnitude (+0.7% BLEU / -0.7% TER with\ndeep rules, +0.6% BLEU / -1.0% TER with shal-\nlow rules). Combining the unsupervised training\ndata and the extended word lexicon we arrive at an\nimprovement that is significant at the 95% confi-\ndence level.\nThe two other approaches investigated in this\npaper do not really help improving the transla-\ntion quality. The syntactic labels improve the\nBLEU score only slightly in the deep approach,\nand even degrade the translation quality in the shal-\nlow setup. The additional reorderings have nearly278\ndev (MT06) test (MT08)\ndeep shallow deep shallow\nBLEU TER BLEU TER BLEU TER BLEU TER\n[%] [%] [%] [%] [%] [%] [%] [%]\nHPBT Baseline 43.9 50.2 44.1 49.9 44.3±1.150.0±0.944.4±1.149.4±0.9\n+ Unsup 45.2 48.9 45.1 49.1 45.0 49.4 45.2 49.2\n+ Unsup + DWL 45.8 48.3 45.8 48.4 45.7 48.7 45.8 48.2\n+ Unsup + Syntactic Labels 45.1 49.0 45.2 49.1 45.2 49.3 45.0 49.0\n+ Unsup + Reorderings 45.4 48.8 45.3 49.0 45.3 49.1 45.3 48.9\n+ Unsup + DWL + Syntactic Labels 46.2 48.0 46.1 48.2 46.0 48.2 45.8 48.3\n+ Unsup + DWL + Reorderings 46.1 47.9 46.1 48.2 45.7 48.7 45.9 48.2\nTable 4: Results for the NIST Arabic-English translation task (truecase). The 95% confidence interval is\ngiven for the baseline systems. Results in bold are significantly better than the baseline.\nno effect on the translation.\nThese results, although a bit disappointing, were\nto be expected. As stated above, the Arabic-\nEnglish language pair is rather monotonic and\nthese two last approaches are more useful when\ndealing with translation directions where the word\norder in the languages is rather different. The\ndegradation in translation quality in the shallow\nsetup can be explained by the restriction in the\nparse trees that are constructed during the trans-\nlation process. By restricting their depth they can\nnot conform with the syntax trees derived from lin-\nguistic parsing.\nThe best results are obtained with a deep sys-\ntem including all the advanced methods at once,\nwith the exception of the additional reorderings. It\nachieves an improvement of +1.7% BLEU / -1.8%\nTER over the baseline. For the shallow system, the\ncombination of the methods does not improve over\nthe unsupervised data and discriminative word lex-\nicon alone. The final result does not exceed the\ntranslation quality of the best deep setup, but re-\nmember that the computation time is significantly\ndecreased.\n10 Conclusion\nWe presented a cross-system and cross-paradigm\nlightly-supervised training approach. We demon-\nstrated that improving the non-hierarchical part of\nthe translation model with lightly-supervised train-\ning is a very effective technique. On the NIST\nArabic-English task, we evaluated various recently\ndeveloped methods separately as well as in combi-\nnation. 
Our results suggest that soft syntactic la-\nbels and IBM-style reordering extensions are less\nhelpful. By including the discriminative word lex-icon model, we have been able to increase the per-\nformance of the hierarchical system significantly.\nOur experiments with shallow rules confirm that\na deep recursion for hierarchical rules is not es-\nsential to achieve competitive performance for the\nArabic-English language pair, while dramatically\ndecreasing the computational effort.\nAcknowledgments\nThe authors would like to thank Holger Schwenk\nfrom LIUM, Le Mans, France, for making the au-\ntomatic translations of the Arabic LDC Gigaword\ncorpus available. This work was partly realized as\npart of the Quaero Programme, funded by OSEO,\nFrench State agency for innovation, and also partly\nbased upon work supported by the Defense Ad-\nvanced Research Projects Agency (DARPA) under\nContract No. HR0011-08-C-0110. Any opinions,\nfindings and conclusions or recommendations ex-\npressed in this material are those of the authors and\ndo not necessarily reflect the views of the DARPA.\nReferences\nBaker, Kathryn, Michael Bloodgood, Chris Callison-\nBurch, Bonnie Dorr, Nathaniel Filardo, Lori\nLevin, Scott Miller, and Christine Piatko. 2010.\nSemantically-Informed Syntactic Machine Transla-\ntion: A Tree-Grafting Approach. In Conf. of the\nAssoc. for Machine Translation in the Americas\n(AMTA) , Denver, CO, October/November.\nBirch, Alexandra, Phil Blunsom, and Miles Osborne.\n2009. A Quantitative Analysis of Reordering Phe-\nnomena. In Proc. of the Workshop on Statistical Ma-\nchine Translation , pages 197–205, Athens, Greece,\nMarch.279\nChiang, David. 2005. A Hierarchical Phrase-Based\nModel for Statistical Machine Translation. In Proc.\nof the 43rd Annual Meeting of the Assoc. for Com-\nputational Linguistics (ACL) , pages 263–270, Ann\nArbor, MI, June.\nChiang, David. 2010. Learning to Translate with\nSource and Target Syntax. In Proc. of the Annual\nMeeting of the Assoc. for Computational Linguistics\n(ACL) , pages 1443–1452, Uppsala, Sweden, July.\nde Gispert, Adri `a, Gonzalo Iglesias, Graeme Black-\nwood, Eduardo R. Banga, and William Byrne.\n2010. Hierarchical Phrase-Based Translation with\nWeighted Finite-State Transducers and Shallow-n\nGrammars. Computational Linguistics , 36(3):505–\n533.\nHe, Zhongjun, Yao Meng, and Hao Yu. 2010. Extend-\ning the Hierarchical Phrase Based Model with Maxi-\nmum Entropy Based BTG. In Conf. of the Assoc. for\nMachine Translation in the Americas (AMTA) , Den-\nver, CO, October/November.\nHoang, Hieu and Philipp Koehn. 2010. Improved\nTranslation with Source Syntax Labels. In ACL 2010\nJoint Fifth Workshop on Statistical Machine Trans-\nlation and Metrics MATR , pages 409–417, Uppsala,\nSweden, July.\nHuck, Matthias, Martin Ratajczak, Patrick Lehnen, and\nHermann Ney. 2010. A Comparison of Various\nTypes of Extended Lexicon Models for Statistical\nMachine Translation. In Conf. of the Assoc. for Ma-\nchine Translation in the Americas (AMTA) , Denver,\nCO, October/November.\nIgel, Christian and Michael H ¨usken. 2003. Empirical\nEvaluation of the Improved Rprop Learning Algo-\nrithm. Neurocomputing , 50:2003.\nIglesias, Gonzalo, Adri `a de Gispert, Eduardo R. Banga,\nand William Byrne. 2009. Rule Filtering by Pattern\nfor Efficient Hierarchical Translation. In Proc. of the\n12th Conf. of the Europ. Chapter of the Assoc. 
for\nComputational Linguistics (EACL) , pages 380–388,\nAthens, Greece, March.\nJeong, Minwoo, Kristina Toutanova, Hisami Suzuki,\nand Chris Quirk. 2010. A Discriminative Lexi-\ncon Model for Complex Morphology. In Conf. of\nthe Assoc. for Machine Translation in the Americas\n(AMTA) , Denver, CO, October/November.\nKoehn, P., H. Hoang, A. Birch, C. Callison-Burch,\nM. Federico, N. Bertoldi, B. Cowan, W. Shen,\nC. Moran, R. Zens, et al. 2007. Moses: Open Source\nToolkit for Statistical Machine Translation. In Proc.\nof the Annual Meeting of the Assoc. for Compu-\ntational Linguistics (ACL) , pages 177–180, Prague,\nCzech Republic, June.\nLeusch, Gregor and Hermann Ney. 2009. Edit dis-\ntances with block movements and error rate confi-\ndence estimates. Machine Translation , December.Mauser, Arne, Sa ˇsa Hasan, and Hermann Ney. 2009.\nExtending Statistical Machine Translation with Dis-\ncriminative and Trigger-Based Lexicon Models. In\nProc. of the Conf. on Empirical Methods for Natu-\nral Language Processing (EMNLP) , pages 210–218,\nSingapore, August.\nOch, Franz Josef and Hermann Ney. 2003. A Sys-\ntematic Comparison of Various Statistical Alignment\nModels. Computational Linguistics , 29(1):19–51,\nMarch.\nPetrov, Slav, Leon Barrett, Romain Thibaux, and Dan\nKlein. 2006. Learning Accurate, Compact, and In-\nterpretable Tree Annotation. In Proc. of the 21st In-\nternational Conference on Computational Linguis-\ntics and 44th Annual Meeting of the Assoc. for Com-\nputational Linguistics , pages 433–440, Sydney, Aus-\ntralia, July.\nSchwenk, Holger and Jean Senellart. 2009. Transla-\ntion Model Adaptation for an Arabic/French News\nTranslation System by Lightly-Supervised Training.\nInMT Summit XII , Ottawa, Ontario, Canada, August.\nSchwenk, Holger. 2008. Investigations on Large-Scale\nLightly-Supervised Training for Statistical Machine\nTranslation. In Proc. of the Int. Workshop on Spo-\nken Language Translation (IWSLT) , pages 182–189,\nWaikiki, Hawaii, October.\nStein, Daniel, Stephan Peitz, David Vilar, and Hermann\nNey. 2010. A Cocktail of Deep Syntactic Features\nfor Hierarchical Machine Translation. In Conf. of\nthe Assoc. for Machine Translation in the Americas\n(AMTA) , Denver, CO, October/November.\nStolcke, Andreas. 2002. SRILM – an Extensible Lan-\nguage Modeling Toolkit. In Proc. of the Int. Conf.\non Spoken Language Processing (ICSLP) , volume 3,\nDenver, CO, September.\nVenugopal, Ashish, Andreas Zollmann, N.A. Smith,\nand Stephan V ogel. 2009. Preference Grammars:\nSoftening Syntactic Constraints to Improve Statisti-\ncal Machine Translation. In Proc. of the Human Lan-\nguage Technology Conf. / North American Chapter\nof the Assoc. for Computational Linguistics (HLT-\nNAACL) , pages 236–244, Boulder, CO, June.\nVilar, David, Daniel Stein, Matthias Huck, and Her-\nmann Ney. 2010. Jane: Open Source Hierarchi-\ncal Translation, Extended with Reordering and Lex-\nicon Models. In ACL 2010 Joint Fifth Workshop on\nStatistical Machine Translation and Metrics MATR ,\npages 262–270, Uppsala, Sweden, July.\nZens, Richard, Hermann Ney, Taro Watanabe, and Ei-\nichiro Sumita. 2004. Reordering Constraints for\nPhrase-Based Statistical Machine Translation. In\nCOLING ’04: The 20th Int. Conf. on Computational\nLinguistics , pages 205–211, Geneva, Switzerland,\nAugust.280", "main_paper_content": null }
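Section 5 of the paper in this record defines the discriminative word lexicon (DWL) as one independent binary classifier per target word, combined as in Equations (3)-(5). Below is a minimal, illustrative sketch of how such classifiers could be evaluated; it is not the authors' implementation, the data structures and function names are assumptions, and the actual training (improved RProp+, 100 iterations per target word, pruning with threshold 0.1) is not shown.

```python
# Hypothetical sketch of the DWL scoring of Eqs. (3)-(5): one binary
# log-linear classifier per target word over the bag of source words.
import math
from collections import Counter

def p_word_included(e, source_words, prior, lex):
    """p(E_e = 1 | F): probability that target word e appears in the translation.
    prior[e] plays the role of lambda_e, lex[(e, f)] the role of lambda_{e,f}."""
    counts = Counter(source_words)                   # source side as a count vector F
    g1 = prior.get(e, 0.0) + sum(c * lex.get((e, f), 0.0) for f, c in counts.items())
    # g(E_e = 0, F) = 0, because every term of Eq. (5) carries the factor E_e.
    return math.exp(g1) / (math.exp(g1) + 1.0)

def p_target_set(target_words, target_vocab, source_words, prior, lex):
    """p(E | F) of Eq. (3): product of the per-word classifiers over the target vocabulary."""
    included = set(target_words)
    p = 1.0
    for e in target_vocab:
        p1 = p_word_included(e, source_words, prior, lex)
        p *= p1 if e in included else (1.0 - p1)
    return p
```

In decoding one would typically query only the classifiers of the target words actually hypothesized; the full product over the vocabulary is kept here to mirror Equation (3).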
{ "decision": "Unknown", "reviews": [] }
0
0
[]
[]
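The soft syntactic labels of Section 4 in the record above score a derivation through the recursion of Equations (1) and (2): every rule stores a distribution over label vectors, and a derivation's head-label distribution combines that distribution with the head-label distributions of the sub-derivations filling its gaps. The sketch below spells that recursion out under assumed data structures (the Rule and Derivation containers are hypothetical); a real decoder would memoize these quantities on the shared hypergraph instead of recomputing them per derivation.

```python
# Hypothetical sketch of the p_h / p_syn recursion from Eqs. (1)-(2).
from collections import defaultdict

class Rule:
    """Carries p(s | r): a dict mapping a label vector
    s = (head_label, label_of_gap_1, ..., label_of_gap_n) to its probability."""
    def __init__(self, label_probs):
        self.label_probs = label_probs

class Derivation:
    """A rule application whose n gaps are filled by the sub-derivations d_1..d_n."""
    def __init__(self, rule, children=()):
        self.rule = rule
        self.children = list(children)

def p_head(d):
    """p_h(Y | d): distribution over possible head labels of derivation d (Eq. 2)."""
    dist = defaultdict(float)
    for s, p_s in d.rule.label_probs.items():
        prob = p_s
        for k, child in enumerate(d.children):
            prob *= p_head(child).get(s[k + 1], 0.0)  # label required for the k-th gap
        dist[s[0]] += prob
    return dict(dist)

def p_syn(d):
    """p_syn(d) of Eq. (1): total probability that d conforms with the extended label set."""
    return sum(p_head(d).values())

# Toy example: a lexical sub-derivation with two possible labels, plugged into
# a hierarchical rule with one gap.
leaf = Derivation(Rule({("NP",): 0.7, ("NN",): 0.3}))
top = Derivation(Rule({("S", "NP"): 0.6, ("VP", "NN"): 0.2}), [leaf])
print(p_syn(top))  # ~0.48 = 0.6 * 0.7 + 0.2 * 0.3
```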
{ "id": "vt30iwTmgYv", "year": null, "venue": "EAMT 2012", "pdf_link": "https://aclanthology.org/2012.eamt-1.66.pdf", "forum_link": "https://openreview.net/forum?id=vt30iwTmgYv", "arxiv_id": null, "doi": null }
{ "title": "Discriminative Reordering Extensions for Hierarchical Phrase-Based Machine Translation", "authors": [ "Matthias Huck", "Stephan Peitz", "Markus Freitag", "Hermann Ney" ], "abstract": "Matthias Huck, Stephan Peitz, Markus Freitag, Hermann Ney. Proceedings of the 16th Annual conference of the European Association for Machine Translation. 2012.", "keywords": [], "raw_extracted_content": "Discriminative Reordering Extensions\nfor Hierarchical Phrase-Based Machine Translation\nMatthias Huck andStephan Peitz andMarkus Freitag andHermann Ney\nHuman Language Technology and Pattern Recognition Group\nComputer Science Department\nRWTH Aachen University\nD-52056 Aachen, Germany\n<surname>@cs.rwth-aachen.de\nAbstract\nIn this paper, we propose novel exten-\nsions of hierarchical phrase-based systems\nwith a discriminative lexicalized reorder-\ning model. We compare different fea-\nture sets for the discriminative reorder-\ning model and investigate combinations\nwith three types of non-lexicalized re-\nordering rules which are added to the hi-\nerarchical grammar in order to allow for\nmore reordering flexibility during decod-\ning. All extensions are evaluated in stan-\ndard hierarchical setups as well as in se-\ntups where the hierarchical recursion depth\nis restricted. We achieve improvements\nof up to +1.2 %B LEU on a large-scale\nChinese!English translation task.\n1 Introduction\nLexicalized reordering models are a common com-\nponent of standard phrase-based machine trans-\nlation systems. In hierarchical phrase-based\nmachine translation, reordering is modeled im-\nplicitely as part of the translation model. Hierar-\nchical phrase-based decoders conduct phrase re-\norderings based on a one-to-one relation between\nthe non-terminals on source and target side within\nhierarchical translation rules. Non-terminals on\nsource and target side are linked if they result from\nthe same valid phrase being cut out at their posi-\ntion during phrase extraction. Usually neither ex-\nplicit lexicalized reordering models nor additional\nmechanisms to perform reorderings that do not re-\nsult from the application of hierarchical rules are\nintegrated into hierarchical decoders.\nc\r2012 European Association for Machine Translation.In this work, we augment the grammar with\nmore flexible reordering mechanisms based on\nadditional non-lexicalized reordering rules and\nintegrate a discriminative lexicalized reordering\nmodel. This kind of model has been shown to\nperform well when being added to the log-linear\nmodel combination of standard phrase-based sys-\ntems. We present an extension of a hierarchical\ndecoder with the discriminative reordering model\nand evaluate it in setups with the usual hierarchical\ngrammar as well as in setups with a shallow hier-\narchical grammar. The shallow grammar restricts\nthe depth of the hierarchical recursion. Two dif-\nferent feature sets for the discriminative reorder-\ning model are examined. We report experimental\nresults on the large-scale NIST Chinese!English\ntranslation task. The best translation quality is\nachieved with combinations of the extensions with\nadditional reordering rules and with the discrim-\ninative reordering model. The overall improve-\nment over the respective baseline system is +1.2\n%B LEU / -0.6 %T ERabsolute in the standard setup\nand +1.2 %B LEU / -0.5 %T ERabsolute in the shal-\nlow setup.\n2 Related Work\nHierarchical phrase-based translation was pro-\nposed by Chiang (2005). Iglesias et al. 
(2009) and\nin a later journal publication Gispert et al. (2010)\npresent a way to limit the recursion depth for hi-\nerarchical rules by means of a modification to the\nhierarchical grammar. Their work is of interest to\nus as a limitation of the recursion depth affects the\nsearch space and in particular the reordering capa-\nbilities of the system. It is therefore basically an-\ntipodal to some of the techniques presented in this\npaper, which allow for even more flexibility during\nthe search process by extending the grammar with\nProceedings of the 16th EAMT Conference, 28-30 May 2012, Trento, Italy\n313\nspecific non-lexicalized reordering rules. Combi-\nnations of both techniques are possible, though,\nand in fact Iglesias et al. (2009) also investigate\na maximum phrase jump of 1 (MJ1) reordering\nmodel. In the MJ1 experiment, they include a swap\nrule, but simultaneously withdraw all hierarchical\nphrases.\nVilar et al. (2010) extend a hierarchical phrase-\nbased system with non-lexicalized rules that per-\nmit jumps across whole blocks of symbols and\nreport improvements on a German!English Eu-\nroparl task. Their technique is inspired by conven-\ntional phrase-based IBM-style reordering (Zens et\nal., 2004). In an Arabic!English NIST setup,\nHuck et al. (2011) try a similar reordering exten-\nsion, but conclude that it is less helpful for their\ntask. Other groups attempt to attain superior mod-\neling of reordering effects in their hierarchical sys-\ntems by examining syntactic annotation, e.g. Gao\net al. (2011).\nHe et al. (2010a) combine an additional BTG-\nstyle swap rule with a maximum entropy based\nlexicalized reordering model and achieve improve-\nments on the Chinese!English NIST task. Their\napproach is comparable to ours, but their reorder-\ning model requires the training of different classi-\nfiers for different rule patterns (He et al., 2010b).\nExtracting training instances separately for several\npatterns of hierarchical rules yields a dependence\non the phrase segmentation. In the more general\napproach we propose, the definition of the fea-\ntures is independent of the phrase boundaries on\nthe source side.\nIn standard phrase-based systems, lexicalized\nreordering models are a commonly included com-\nponent. A widely used variant is the orientation\nmodel as implemented in the Moses toolkit (Till-\nmann, 2004; Koehn et al., 2007) which distin-\nguishes monotone, swap, and discontinuous phrase\norientations. Galley and Manning (2008) suggest\na refinement of the same model. A discrimina-\ntively trained lexicalized reordering model as the\none employed by us has been exmanined in a stan-\ndard phrase-based setting by Zens and Ney (2006).\n3 Shallow-1 Grammar\nGispert et al. (2010) propose a limitation of the re-\ncursion depth for hierarchical rules with shallow-n\ngrammars. The main benefit of the limitation is a\ngain in decoding efficiency. Moreover, the mod-\nification of the grammar to a shallow version re-stricts the search space of the decoder and may\nbe convenient to prevent overgeneration. We will\ninvestigate reordering extensions to both standard\nhierarchical systems and systems with a shallow-1\ngrammar, i.e. a grammar which limits the depth of\nthe hierarchical recursion to one. 
We refer to this\nkind of rule set and the parses produced with such\na grammar as shallow, in contrast to the standard\nrule set and parses which we denote as deep.\nIn a shallow-1 grammar, the generic non-\nterminal Xof the standard hierarchical approach\nis replaced by two distinct non-terminals XH and\nXP. By changing the left-hand sides of the rules,\nlexical phrases are allowed to be derived from XP\nonly, hierarchical phrases from XH only. On all\nright-hand sides of hierarchical rules, the Xis re-\nplaced by XP. Gaps within hierarchical phrases\ncan thus be filled with contiguous lexical phrases\nonly, not with hierarchical phrases. The initial rule\nis substituted with\nS!hXP\u00180;XP\u00180i\nS!hXH\u00180;XH\u00180i;(1)\nand the glue rule is substituted with\nS!hS\u00180XP\u00181; S\u00180XP\u00181i\nS!hS\u00180XH\u00181; S\u00180XH\u00181i:(2)\n4 Reordering Rules\nIn this section we describe three types of reorder-\ning extensions to the hierarchical grammar. All\nof them add specific non-lexicalized reordering\nrules which facilitate a more flexible arrangement\nof phrases in the hypotheses. We first present a\nsimple swap rule extension (Section 4.1), then we\nsuggest two different extensions with several ad-\nditional rules that allow for more complex jumps\n(Section 4.2) or very constrained jumps (Sec-\ntion 4.3). Furthermore, variants for deep and shal-\nlow grammars are proposed.\n4.1 Swap Rule\n4.1.1 Swap Rule for Deep Grammars\nIn a deep grammar, we can bring in more re-\nordering capabilities by adding a single swap rule\nX!hX\u00180X\u00181;X\u00181X\u00180i (3)\nsupplementary to the standard initial rule and glue\nrule. The swap rule allows adjacent phrases to be\ntransposed.\n314\nAn alternative with a comparable effect would\nbe to remove the standard glue rule and to add\ntwo rules instead, one of them being as in Equa-\ntion (3) and the other a monotonic concatenation\nrule for the non-terminal Xwhich is symmetric to\nthe swap rule. The latter rule acts as a replace-\nment for the glue rule. This is the approach He et\nal. (2010a) take. Our approach to keep the stan-\ndard glue rule has however one advantage: We are\nstill able to apply a maximum length constraint to\nX. The maximum length constraint restricts the\nlength of the yield of a non-terminal. The lexical\nspan covered by Xis typically restricted to 10 to\nmake decoding less demanding in terms of com-\nputational resources. We would still be able to add\na monotonic concatenation rule to our grammar in\naddition to the standard glue rule. Its benefit is\nthat it entails more symmetry in the grammar. In\nour variant, sub-derivations which result from ap-\nplications of the swap rule can fill the gap within\nhierarchical phrases, while no mechanism to carry\nout the same in a monotonic manner is available.\nIn the deep grammar, we refrain from adding a\nmonotonic concatenation rule as recursive embed-\ndings are possible anyway. We nevertheless tried\nthe variant with the additional monotonic concate-\nnation rule in a supplementary experiment (cf. Sec-\ntion 6.2.2) to make sure that our assumption that\nthis rule is dispensable is correct. We were not\nable to obtain improvements over the setup with\nthe swap rule only.\n4.1.2 Swap Rule for Shallow Grammars\nIn a shallow grammar, several directions of in-\ntegrating swaps are possible. 
We decided to add a\nswap rule and a monotonic concatenation rule\nXP!hXP\u00180XP\u00181;XP\u00181XP\u00180i\nXP!hXP\u00180XP\u00181;XP\u00180XP\u00181i(4)\nsupplementary to the standard shallow initial rules\nand glue rules. The swap rule allows adjacent lex-\nical phrases to be transposed, but not hierarchi-\ncal phrases. Here, we could as well have used\nXH as the left-hand side of the rules. As we\nchoseXP and thus allow for embedding of sub-\nderivations resulting from applications of the swap\nrule into hierarchical phrases, which is not pos-\nsible with sub-derivations resulting from applica-\ntions of hierarchical rules in a shallow grammar,\nwe also include the monotonic concatenation rule\nfor symmetry reasons. A constraint can again beapplied to the number of terminals spanned by both\nXP andXH. With a length constraint, building\nsub-derivations of arbitrary length by applying the\nrules from Equation (4) is impossible.\n4.2 Jump Rules, Variant 1\nInstead of employing a swap rule that transposes\nadjacent phrases, we can adopt more complex ex-\ntensions to the grammar that implement jumps\nacross blocks of symbols. Our first jump rules vari-\nant is inspired by Vilar et al. (2010), but is a gen-\neralization that facilitates an arbitrary number of\nblocks per sentence to be jumped across.\n4.2.1 Jump Rules for Deep Grammars\nIn a deep grammar, to enable block jumps, we\ninclude the rules\nS!hB\u00180X\u00181; X\u00181B\u00180iy\nS!hS\u00180B\u00181X\u00182; S\u00180X\u00182B\u00181iy\nB!hX\u00180; X\u00180i\nB!hB\u00180X\u00181; B\u00180X\u00181iz(5)\nin addition to the standard initial rule and glue rule.\nThe rules marked withyare jump rules that put\njumps across blocks (B ) on source side into ef-\nfect. The rules with Bon their left-hand side en-\nable blocks that are skipped by the jump rules to be\ntranslated, but without further jumps. Reordering\nwithin these windows is just possible with hierar-\nchical rules. Note that our rule set keeps the con-\nvenient property of the standard hierarchical gram-\nmar that the initial symbol Sneeds to be expanded\nin the leftmost cells of the CYK chart only.\nA binary jump feature for the two jump rules (y)\nmay be added to the log-linear model combination\nof the decoder, as well as a binary feature that fires\nfor the rule that acts analogous to the glue rule,\nbut within blocks that is being jumped across (z).\nA maximum jump width can be established by ap-\nplying a length constraint to the non-terminal B. A\ndistance-based distortion model can also easily be\nimplemented by computing the span width of the\nnon-terminal Bon the right-hand side of the jump\nrules at each application of one of them.\n4.2.2 Jump Rules for Shallow Grammars\nIn a shallow grammar, block jumps are realized\nin the same way as in a deep one, but the number\nof rules that are required is doubled.\n315\nWe include\nS!hB\u00180XP\u00181; XP\u00181B\u00180iy\nS!hB\u00180XH\u00181; XH\u00181B\u00180iy\nS!hS\u00180B\u00181XP\u00182; S\u00180XP\u00182B\u00181iy\nS!hS\u00180B\u00181XH\u00182; S\u00180XH\u00182B\u00181iy\nB!hXP\u00180; XP\u00180i\nB!hXH\u00180; XH\u00180i\nB!hB\u00180XP\u00181; B\u00180XP\u00181iz\nB!hB\u00180XH\u00181; B\u00180XH\u00181iz(6)\nin addition to the standard shallow initial rules and\nglue rules.\n4.3 Jump Rules, Variant 2\nAs a second jump rules variant, we try an approach\nthat follows (Huck et al., 2011) and allows for very\nconstrained reorderings. 
At most one contiguous\nblock per sentence can be jumped across in this\nvariant.\nIn a deep grammar, to enable constrained block\njumps with at most one jump per sentence, we re-\nplace the initial and glue rule by the rules given in\nEquation (7):\nS!hM\u00180; M\u00180i\nS!hS\u00180M\u00181; S\u00180M\u00181iz\nS!hB\u00180M\u00181; M\u00181B\u00180iy\nM!hX\u00180; X\u00180i\nM!hM\u00180X\u00181; M\u00180X\u00181iz\nB!hX\u00180; X\u00180i\nB!hB\u00180X\u00181; B\u00180X\u00181iz(7)\nIn these rules, the Mnon-terminal represents a\nblock that will be translated in a monotonic way,\nand the Bis a “back jump”. We omit the exposi-\ntion for shallow grammars as deducing the shallow\nfrom the deep version of the rules is straightfor-\nward from our previous explanations.\nWe add a binary feature that fires for the rules\nthat act analogous to the glue rule (z). We further\nconform to the approach of Huck et al. (2011) by\nadditionally including a distance-based distortion\nmodel (dist. feature ) that is computed during de-\ncoding whenever the back jump rule (y) is applied.\n5 Discriminative Reordering Model\nOur discriminative reordering extensions for hi-\nerarchical phrase-based machine translation sys-\ntems integrate a discriminative reordering modele1e2e3\nf1 f2 f3\nFigure 1: Illustration of an embedding of a lexical\nphrase (light) in a hierarchical phrase (dark), with\norientations scored with the neighboring blocks.\nthat tries to predict the orientation of neighboring\nblocks. We use two orientation classes leftand\nright, in the same manner as described by Zens\nand Ney (2006). The reordering model is applied\nat the phrase boundaries only, where words which\nare adjacent to gaps within hierarchical phrases are\ndefined as boundary words as well. The orienta-\ntion probability is modeled in a maximum entropy\nframework. We investigate two models that differ\nin the set of feature functions:\ndiscrim. RO (src word) The feature set of this\nmodel consists of binary features based on the\nsource word at the current source position.\ndiscrim. RO (src+tgt word+class) The feature\nset of this model consists of binary features\nbased on the source word and word class\nat the current source position and the target\nword and word class at the current target\nposition.\nUsing features that depend on word classes pro-\nvides generalization capabilities. We employ 100\nautomatically learned word classes which are ob-\ntained with the mkcls tool on both source and tar-\nget side.1The reordering model is trained with the\nGeneralized Iterative Scaling (GIS) algorithm with\nthe maximum class posterior probability as train-\ning criterion, and it is smoothed with a gaussian\nprior.\nFor each rule application during hierarchical\ndecoding, we apply the reordering model at all\n1mkcls is distributed along with the GIZA ++package:\nhttp://code.google.com/p/giza-pp/\n316\nboundaries where lexical blocks are placed side\nby side within the partial hypothesis. For this\npurpose, we need to access neighboring bound-\nary words and their aligned source words and\nsource positions. Note that, as hierarchical phrases\nare involved, several block joinings may take\nplace at once during a single rule application.\nFigure 1 gives an illustration with an embed-\nding of a lexical phrase (light) in a hierarchi-\ncal phrase (dark). The gap in the hierarchical\nphrase hf1f2X\u00180; e1X\u00180e3iis filled with the lex-\nical phrase hf3; e2i. 
The discriminative reordering\nmodel scores the orientation of the lexical phrase\nwith regard to the neighboring block of the hier-\narchical phrase which precedes it within the target\nsequence (here: right orientation), and the block of\nthe hierarchical phrase which succeeds the lexical\nphrase with regard to the latter (here: left orienta-\ntion).\nThe way we interpret reordering in hierarchi-\ncal phrase-based translation keeps our model sim-\nple. We are basically able to treat the orientation\nof contiguous lexical blocks in almost exactly the\nsame way as the orientation of phrases in stan-\ndard phrase-based translation. We avoid the usage\nof multiple reordering models for different source\nand target patterns of rules that is done by He et al.\n(2010b).\n6 Experiments\nWe present empirical results obtained with the ad-\nditional swap rule, the jump rules and the discrim-\ninative reordering model on the Chinese!English\n2008 NIST task.2\n6.1 Experimental Setup\nWe employ the freely available hierarchical trans-\nlation toolkit Jane (Vilar et al., 2010) to set up our\nsystems. In our experiments, we use the cube prun-\ning algorithm (Huang and Chiang, 2007) to carry\nout the search. A maximum length constraint of 10\nis applied to all non-terminals but the initial sym-\nbolS. We work with a parallel training corpus of\n3.0M Chinese-English sentence pairs (77.5M Chi-\nnese / 81.0M English running words). Word align-\nments are created by aligning the data in both di-\nrections with GIZA ++and symmetrizing the two\ntrained alignments (Och and Ney, 2003). The lan-\nguage model is a 4-gram with modified Kneser-\n2http://www.itl.nist.gov/iad/mig/tests/\nmt/2008/Ney smoothing which was trained with the SRILM\ntoolkit (Stolcke, 2002).\nModel weights are optimized against B LEUwith\nMinimum Error Rate Training on 100-best lists.\nWe employ MT06 as development set to tune the\nmodel weights, MT08 is used as unseen test set.\nThe performance of the systems is evaluated using\nthe two metrics B LEU and T ER. The results on the\ntest set are checked for statistical significance over\nthe baseline. Confidence intervals have been com-\nputed using bootstrapping for B LEUand Cochran’s\napproximate ratio variance for T ER(Leusch and\nNey, 2009).\n6.2 Experimental Results\nThe empirical evaluation of our reordering exten-\nsions is presented in Table 1. We report translation\nresults on both the development and the test cor-\npus. The figures with deep and with shallow rules\nare set side by side in separate columns to facilitate\na direct comparison between them. All the setups\ngiven in separate rows exist in a deep and a shallow\nvariant.\nThe shallow baseline is a bit worse than the\ndeep baseline. Adding discriminative reorder-\ning models to the baselines without additional re-\nordering rules results in an improvement of up to\n+0.6 %B LEU / -0.6 %T ER(in the deep setup).\nThe src+tgt word+class feature set for the dis-\ncriminative reordering model altogether seems to\nperform slightly better than the src word feature\nset. Adding reordering rules in isolation can also\nimprove the systems, in particular in the deep\nsetup with the swap rule or the second jump\nrules variant. However, extensions with both re-\nordering rules and discriminative lexicalized re-\nordering model provide the best results, e.g. +1.0\n%B LEU / -0.5 %T ERwith the system with deep\ngrammar, swap rule, binary swap feature and dis-\ncrim. 
RO (src+tgt word+class) and +1.2 %B LEU /\n-0.5 %T ERwith the system with shallow gram-\nmar, swap rule, binary swap feature and discrim.\nRO (src+tgt word+class). The second jump rules\nvariant performs particularly well in combination\nwith a deep grammar and the discrim. RO (src+tgt\nword+class) model, with an improvement of +1.2\n%B LEU / -0.6 %T ERabsolute over the deep base-\nline. This system provides the best translation\nquality of all the setups investigated in this paper.\nWith a shallow grammar, the combinations of the\ndiscrim. RO with the swap rule outperforms both\n317\nMT06 (Dev) MT08 (Test)\ndeep shallow deep shallow\nBLEU TER BLEU TER BLEU TER BLEU TER\n[%] [%] [%] [%] [%] [%] [%] [%]\nBaseline 32.6 61.2 31.4 61.8 25.2 66.6 24.9 66.6\n+ discrim. RO (src word) 32.9 61.3 31.6 61.8 25.4 66.3 25.2 66.6\n+ discrim. RO (src+tgt word+class) 33.0 61.3 31.6 61.6 25.8 66.0 25.1 66.3\n+ swap rule 32.8 61.7 31.8 62.1 25.8 66.6 25.0 67.0\n+ discrim. RO (src word) 33.0 61.2 32.5 61.4 25.8 66.1 26.0 66.2\n+ discrim. RO (src+tgt word+class) 33.1 61.2 32.6 61.4 26.0 66.1 26.1 66.3\n+ binary swap feature 33.2 61.0 32.1 61.8 25.9 66.2 25.7 66.5\n+ discrim. RO (src word) 33.1 61.3 32.4 61.4 26.0 66.1 26.1 66.3\n+ discrim. RO (src+tgt word+class) 33.2 61.3 32.9 61.0 26.2 66.1 26.1 66.1\n+ jump rules, variant 1 32.9 61.3 32.1 62.4 25.6 66.4 25.1 67.5\n+ discrim. RO (src word) 32.9 61.1 31.9 62.0 25.8 66.0 25.1 66.9\n+ discrim. RO (src+tgt word+class) 33.2 61.0 32.1 62.0 25.9 66.1 25.6 66.5\n+ binary jump feature 32.8 61.3 31.9 61.7 25.7 66.3 25.2 66.7\n+ discrim. RO (src word) 32.8 61.3 32.2 61.9 25.8 66.1 25.2 66.7\n+ discrim. RO (src+tgt word+class) 33.1 61.2 32.3 62.0 26.0 66.1 25.5 66.7\n+ jump rules, variant 2 + dist. feature 33.0 61.5 31.5 62.0 25.8 66.5 25.3 66.3\n+ discrim. RO (src word) 33.2 60.8 31.6 61.9 26.2 65.8 25.2 66.4\n+ discrim. RO (src+tgt word+class) 33.2 61.0 31.7 62.1 26.4 66.0 25.5 66.3\nTable 1: Experimental results for the NIST Chinese!English translation task (truecase). On the test set,\nbold font indicates results that are significantly better than the baseline (p < : 1).\njump rules variants.\nWe proceed with discussing some supplemen-\ntary results obtained with the deep grammar that\nare not included in Table 1. The results for Sec-\ntions 6.2.2 through 6.2.4 can be found in Table 2.\n6.2.1 Dropping Length Constraints\nIn order to find out if we lose performance by\napplying the maximum length constraint of 10 to\nall non-terminals but the initial symbol Sduring\ndecoding, we optimized systems with no length\nconstraints. When we drop the length constraint in\nthe baseline setup, we observe no improvement on\nthe dev set and +0.3 %B LEU improvement on the\ntest set. Dropping the length constraint in the sys-\ntem with deep grammar, swap rule, discrim. RO\n(src+tgt word+class) and binary jump feature re-\nsults in +0.2 %B LEU / -0.2 %T ERon the dev set,\nbut no improvement on the test set.\n6.2.2 Monotonic Concatenation Rule\nIn this experiment, we add a monotonic concate-\nnation rule\nX!hX\u00180X\u00181;X\u00180X\u00181i (8)\nas discussed in Section 4.1.1 to the system with\ndeep grammar, swap rule, binary swap feature anddiscrim. RO (src+tgt word+class). As we pre-\nsumed, the monotonic concatenation rule does not\nimprove the performance of our system.\n6.2.3 Distance-Based Distortion Feature\nOur second jump rules variant includes a\ndistance-based distortion feature (dist. feature). 
To\nmake sure that the good performance of the jump\nrules variant 2 extension compared to jump rules\nvariant 1 is not simply due to this feature, we also\ntested it in the best setup with our first jump rules\nvariant. Adding the distance-based distortion fea-\nture does not yield an improvement over that setup.\nWe tried such a feature with the swap rule as well\nby just computing the length of the yield of the\nleft-hand side non-terminal at each swap rule ap-\nplication. Here again, adding the distance-based\ndistortion feature does not yield an improvement.\n6.2.4 Discriminative Reordering for\nReordering Rules Only\nInstead of applying the discriminative reorder-\ning model at all rule applications, the model can\nas well be used to score the orientation of blocks\nonly if they are placed side by side within the tar-\nget sequence by selected rules. We conducted ex-\n318\ndeep\nMT06 (Dev) MT08 (Test)\nBLEU TER BLEU TER\n[%] [%] [%] [%]\nBaseline 32.6 61.2 25.2 66.6\n+ no length contraints 32.6 61.5 25.5 66.6\n+ swap rule + bin. swap feat. + discrim. RO (src+tgt word+class) 33.2 61.3 26.2 66.1\n+ no length contraints 33.4 61.1 26.2 66.3\n+ monotonic concatenation rule 33.2 61.6 26.0 66.4\n+ dist. feature 33.4 61.4 26.2 66.2\n+ discrim. RO scoring restricted to swap rule 33.1 61.4 26.0 66.4\n+ jump rules 1 + bin. jump feat. + discrim. RO (src+tgt word+class) 33.1 61.2 26.0 66.1\n+ dist. feature 33.2 61.1 25.9 66.1\n+ discrim. RO scoring restricted to jump rules 32.8 61.3 25.9 66.3\nTable 2: Supplementary experimental results with the deep grammar (truecase).\ndeep shallow\nBaseline Best Swap System Baseline Best Swap System\nused hierarchical phrases 25.8% 32.0% 17.8% 24.0%\nused lexical phrases 45.8% 40.0% 47.6% 44.7%\nused initial and glue rules 28.4% 26.8% 34.6% 29.5%\nused swap rules - 1.2% - 1.8%\napplied swap rule in sentences - 295 (22%) - 446 (33%)\nTable 3: Statistics on the rule usage for the single best translation of the test set (MT08).\nperiments in which the discriminative reordering\nscoring is restricted to the swap rule or the explicit\njump rules (marked asyin Eq. 5), respectively. The\nresult is in both setups slightly worse than the re-\nsult with the discriminative reordering model ap-\nplied to all rules.\n6.3 Investigation of the Rule Usage\nTo figure out the influence of the swap rule on the\nusage of different types of rules in the translation\nprocess, we compare in Table 3 the baseline sys-\ntems (deep and shallow) with the systems using\nthe swap rule, binary swap feature and discrim. RO\n(denoted as Best Swap System in the table). As ex-\npected, the deep systems use in general more hi-\nerarchical phrases compared to the shallow setups.\nHowever, adding the swap rule causes an increased\nusage of hierarchical phrases and less applications\nof the glue rule. The swap rule by itself makes up\nthe smallest part, but is employed in 22% (deep)\nand 33% (shallow) respectively of the 1357 test\nsentences.\n6.4 Translation Examples\nFigure 2 depicts a translation example along withits decoding tree from our system with deep gram-\nmar, swap rule, binary swap feature and discrim.\nRO (src+tgt word+class). The example is taken\nfrom the MT08 set, with the four reference trans-\nlations “But it is actually very hard to do that. ”,\n“However, it is indeed very difficult to achieve. ”,\n“But to achieve this point is actually very diffi-\ncult. ” and“But to be truly frank is, in fact, very\ndifficult. ”. 
The hypothesis does not match any of the references, but still is a fully convincing English translation. Note how the application of the swap rule affects the translation. Our baseline system with deep grammar translates the sentence as “but to do this , it is in fact very difficult . ”.\n7 Conclusion\nWe presented novel extensions of hierarchical phrase-based systems with a discriminative lexicalized reordering model. We investigated combinations with three variants of additional non-lexicalized reordering rules. Our approach shows significant improvements (up to +1.2% BLEU) over the respective baselines with both a deep and a shallow-1 hierarchical grammar on a large-scale Chinese→English translation task.\nFigure 2: Translation example from the system with deep grammar, swap rule, binary swap feature and discrim. RO (src+tgt word+class).\nAcknowledgments\nThis work was partly achieved as part of the Quaero Programme, funded by OSEO, French State agency for innovation, and partly funded by the European Union under the FP7 project T4ME Net, Contract No. 249119.\nReferences\nChiang, D. 2005. A Hierarchical Phrase-Based Model for Statistical Machine Translation. In Proc. of the ACL, pages 263–270, Ann Arbor, MI, June.\nGalley, M. and C. D. Manning. 2008. A Simple and Effective Hierarchical Phrase Reordering Model. In Proc. of the EMNLP, pages 847–855, Honolulu, Hawaii, October.\nGao, Y., P. Koehn, and A. Birch. 2011. Soft Dependency Constraints for Reordering in Hierarchical Phrase-Based Translation. In Proc. of the EMNLP, pages 857–868, Edinburgh, Scotland, UK, July.\nGispert, A. de, G. Iglesias, G. Blackwood, E. R. Banga, and W. Byrne. 2010. Hierarchical Phrase-Based Translation with Weighted Finite-State Transducers and Shallow-n Grammars. Computational Linguistics, 36(3):505–533.\nHe, Z., Y. Meng, and H. Yu. 2010a. Extending the Hierarchical Phrase Based Model with Maximum Entropy Based BTG. In Proc. of the AMTA, Denver, CO, October/November.\nHe, Z., Y. Meng, and H. Yu. 2010b. Maximum Entropy Based Phrase Reordering for Hierarchical Phrase-based Translation. In Proc. of the EMNLP, pages 555–563, October.\nHuang, L. and D. Chiang. 2007. Forest Rescoring: Faster Decoding with Integrated Language Models. In Proc. of the ACL, pages 144–151, Prague, Czech Republic, June.\nHuck, M., D. Vilar, D. Stein, and H. Ney. 2011. Advancements in Arabic-to-English Hierarchical Machine Translation. In Proc. of the EAMT, pages 273–280, Leuven, Belgium, May.\nIglesias, G., A. de Gispert, E. R. Banga, and W. Byrne. 2009. Rule Filtering by Pattern for Efficient Hierarchical Translation. In Proc. of the EACL, pages 380–388, Athens, Greece, March.\nKoehn, P., H. Hoang, A. Birch, C. Callison-Burch, M. Federico, N. Bertoldi, B. Cowan, W. Shen, C. Moran, R. Zens, et al. 2007. Moses: Open Source Toolkit for Statistical Machine Translation. In Proc. of the ACL, pages 177–180, Prague, Czech Republic, June.\nLeusch, G. and H. Ney. 2009. Edit distances with block movements and error rate confidence estimates. Machine Translation, December.\nOch, F. J. and H. Ney. 2003. A Systematic Comparison of Various Statistical Alignment Models. Computational Linguistics, 29(1):19–51, March.\nStolcke, A. 2002. SRILM – an Extensible Language Modeling Toolkit. In Proc. of the ICSLP, Denver, CO, September.\nTillmann, C. 2004.
A Unigram Orientation Model for Statistical Machine Translation. In Proc. of the HLT-NAACL: Short Papers, pages 101–104.\nVilar, D., D. Stein, M. Huck, and H. Ney. 2010. Jane: Open Source Hierarchical Translation, Extended with Reordering and Lexicon Models. In Proc. of the ACL/WMT, pages 262–270, Uppsala, Sweden, July.\nZens, R. and H. Ney. 2006. Discriminative Reordering Models for Statistical Machine Translation. In Proc. of the HLT-NAACL, pages 55–63, New York City, June.\nZens, R., H. Ney, T. Watanabe, and E. Sumita. 2004. Reordering Constraints for Phrase-Based Statistical Machine Translation. In COLING '04: The 20th Int. Conf. on Computational Linguistics, pages 205–211, Geneva, Switzerland, August.
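As a closing illustration of the discriminative lexicalized reordering model described in this paper: at its core it is a log-linear classifier over orientation classes, conditioned on words (and word classes) at the boundaries of the blocks being placed. The sketch below is a toy re-creation under that reading, not the authors' code; the feature names and weights are invented, and the real model is trained as described in Zens and Ney (2006).

```python
import math

def orientation_probability(features, weights, orientation):
    """Toy log-linear scorer for a binary left/right orientation decision.

    features: iterable of active feature names, e.g. ("src_word=de", "tgt_word=of", "src_class=12")
    weights:  dict mapping (feature_name, orientation) -> learned weight
    orientation: "left" or "right"
    """
    scores = {}
    for o in ("left", "right"):
        scores[o] = sum(weights.get((f, o), 0.0) for f in features)
    normalizer = sum(math.exp(s) for s in scores.values())
    return math.exp(scores[orientation]) / normalizer  # P(orientation | features)
```

In decoding, the log of such a probability would typically be added as one more feature of the log-linear translation model whenever two blocks are placed next to each other in the target sequence.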
{ "decision": "Unknown", "reviews": [] }
0
0
[]
[]
{ "id": "GC2SGchlNvH", "year": null, "venue": "EAMT 2020", "pdf_link": "https://aclanthology.org/2020.eamt-1.10.pdf", "forum_link": "https://openreview.net/forum?id=GC2SGchlNvH", "arxiv_id": null, "doi": null }
{ "title": "Low-Resource Unsupervised NMT: Diagnosing the Problem and Providing a Linguistically Motivated Solution", "authors": [ "Lukas Edman", "Antonio Toral", "Gertjan van Noord" ], "abstract": "Lukas Edman, Antonio Toral, Gertjan van Noord. Proceedings of the 22nd Annual Conference of the European Association for Machine Translation. 2020.", "keywords": [], "raw_extracted_content": "Low-Resource Unsupervised NMT: Diagnosing the Problem and\nProviding a Linguistically Motivated Solution\nLukas Edman, Antonio Toral, Gertjan van Noord\nCenter for Language and Cognition\nUniversity of Groningen\nfj.l.edman, a.toral.ruiz, g.j.m.van.noord [email protected]\nAbstract\nUnsupervised Machine Translation has\nbeen advancing our ability to translate\nwithout parallel data, but state-of-the-art\nmethods assume an abundance of mono-\nlingual data. This paper investigates the\nscenario where monolingual data is lim-\nited as well, finding that current unsuper-\nvised methods suffer in performance un-\nder this stricter setting. We find that the\nperformance loss originates from the poor\nquality of the pretrained monolingual em-\nbeddings, and we propose using linguis-\ntic information in the embedding train-\ning scheme. To support this, we look at\ntwo linguistic features that may help im-\nprove alignment quality: dependency in-\nformation and sub-word information. Us-\ning dependency-based embeddings results\nin a complementary word representation\nwhich offers a boost in performance of\naround 1.5 BLEU points compared to stan-\ndard WORD 2VEC when monolingual data\nis limited to 1 million sentences per lan-\nguage. We also find that the inclusion of\nsub-word information is crucial to improv-\ning the quality of the embeddings.\n1 Introduction\nMachine Translation (MT) is a rapidly advancing\nfield of Natural Language Processing, where there\nis an ever-increasing number of claims of MT sys-\ntems reaching human parity (Hassan et al., 2018;\nBarrault et al., 2019). However, most of the fo-\ncus has been on MT systems under the assumption\nc\r2020 The authors. This article is licensed under a Creative\nCommons 3.0 licence, no derivative works, attribution, CC-\nBY-ND.that there is a large amount of parallel data avail-\nable, which is only the case for a select number of\nlanguage pairs.\nRecently, there have been approaches that do\naway with this assumption, requiring only mono-\nlingual data, with the first methods based solely\naround neural MT (NMT), using aligned pre-\ntrained embeddings to bootstrap the translation\nprocess, and refining the translation with a neural\nmodel via denoising and back-translation (Artetxe\net al., 2017b; Lample et al., 2017). More re-\ncently, statistical MT (SMT) approaches as well\nas hybrid approaches, combining SMT and NMT,\nhave proven more successful (Lample et al., 2018;\nArtetxe et al., 2019).\nWhile the unsupervised approaches so far have\ndone away with the assumption of parallel data,\nthey still assume an abundance of monolingual\ndata for the two languages, typically assuming\nat least 10 million sentences per language. This\namount of data is not available for every language,\nnotably languages without much of a digital pres-\nence. For example, Fulah is a language spoken in\nWest and Central Africa by over 20 million peo-\nple, however there is a scarce amount of data freely\navailable online. 
This motivates a new paradigm\nin unsupervised MT: Low-Resource Unsupervised\nMT (LRUMT).\nIn this paper, we investigate the reasons why\ncurrent unsupervised NMT methods fail in the\nlow-resource setting, addressing the source of the\nissue, and we propose a potential solution to make\nunsupervised NMT more robust to the lack of\navailability of monolingual data.\nWe start by giving a brief overview of the work\nso far in unsupervised MT in Section 2, estab-\nlishing the general pipeline used to train an unsu-\npervised system. We then identify the source of\nthe performance problem in LRUMT in Section 3,\nand propose potential improvements in Section 4.\nLastly, in Section 5, we present our conclusions\nand lines for future work.\n2 An Unsupervised MT Overview\nThe typical unsupervised NMT pipeline can be\nbroken down into 3 sequential steps:\n1. Train monolingual embeddings for each lan-\nguage\n2. Align embeddings with a mapping algorithm\n3. Train NMT system, initialized with aligned\nembeddings\nIn the first step, monolingual embeddings (which\nwe will also refer to as pretrained embed-\ndings) are most often trained in the style of\nWORD 2VEC’s skip-gram algorithm (Mikolov et\nal., 2013). To incorporate sub-word information,\nLample et al. (2018) use FAST TEXT (Bojanowski\net al., 2017), which formulates a word’s embed-\nding as the sum of its character n-gram embed-\ndings. Artetxe (2019) uses a WORD 2VEC exten-\nsion PHRASE 2VEC (Artetxe et al., 2018b), which\nlearns embeddings of word n-grams up to trigrams,\neffectively creating embeddings for phrases.\nThe second step involves the alignment of the\ntwo monolingual embeddings such that the em-\nbeddings of words with identical or similar mean-\ning across language appear close in the shared em-\nbedding space. Artetxe et al. achieve this using\nVECMAP(Artetxe et al., 2018a), which learns a\nlinear transformation between the two embeddings\ninto a shared space. If there is a large shared vo-\ncabulary between the two languages, it is also pos-\nsible to concatenate the monolingual corpora and\ntrain a single embedding for both languages, ef-\nfectively completing steps 1 and 2 simultaneously\n(Lample et al., 2018).\nThe third and final step is to train the NMT\nmodel. The architecture can be any encoder-\ndecoder model, with the condition that it can trans-\nlate in both directions. Models typically share an\nencoder and decoder for both languages, with a\nlanguage token provided only to the decoder. Two\nobjectives are used to train the model: denois-\ning and on-the-fly back-translation. Denoising is\nmonolingual; the model is given an altered sen-\ntence (e.g. with word order shuffling or word re-\nmoval) and trained to reconstruct the original, un-altered sentence. On-the-fly back-translation in-\nvolves first translating a sentence from the source\nlanguage ( ssrc) to the target language ( s0\ntgt). This\ncreates a pseudo-parallel sentence pair ( s0\ntgt,ssrc),\nso the output s0\ntgtis translated back to the source\nlanguage (creating s00\nsrc), and the model is trained\nto reconstruct the original source sentence, mini-\nmizing the difference between s00\nsrcand ssrc. De-\nnoising and back-translation are carried out alter-\nnately during training.\nThe unsupervised SMT approach is fairly simi-\nlar, with a replacement of step 3 (or in the hybrid\napproach, a step added between steps 2 and 3). In\nArtetxe et al. 
(2019) for example, a phrase-based\nSMT model is built by constructing a phrase table\nthat is initialized using the aligned cross-lingual\nphrase embeddings, and tuning it using an unsu-\npervised variant of the Minimum Error Rate Train-\ning (Och, 2003) method. For the hybrid model, the\nSMT system can then create pseudo-parallel data\nused to train the NMT model, alongside denois-\ning and back-translation. In the remainder of this\npaper, we focus on the purely NMT approach to\nunsupervised MT.\n3 The Role of Pretrained Embeddings in\nUnsupervised MT\nWith the pipeline established, we now turn to the\nLRUMT setting. In LRUMT, the existing un-\nsupervised approaches fail somewhere along the\npipeline, but simply measuring MT performance\ndoes not make it clear where this failure occurs.\nWe speculate that the failure is relative to the qual-\nity of the pretrained word embeddings, and subse-\nquent quality of the cross-lingual alignment. We\ntest this hypothesis in this section.\nThe aligned pretrained embeddings of an un-\nsupervised NMT system are what jump-starts the\nprocess of translation. From aligned pretrained\nembeddings alone, we can effectively do word-for-\nword translation, which is commonly measured\nusing Bilingual Lexicon Induction (BLI). With-\nout well-aligned pretrained embeddings, denoising\nand back-translation alone are not enough to pro-\nduce meaningful translations.\nFor our following experiments1, we train on En-\nglish and German sentences from the WMT Mono-\nlingual News Crawl from years 2007 to 2017,\nusenewstest 2015 for development and newstest\n1Our code for running our experiments can be found at:\nhttps://github.com/Leukas/LRUMT\nFigure 1: English!German BLEU scores of unsupervised\nNMT systems where the amount of training data used for the\npre-trained embedding training and the amount used for the\nNMT model training is varied.\n2016 for testing, following Lample et al. (2018).\nThe training data is filtered such that sentences\nthat contain between 3-80 words are kept. We\nthen truncate the corpora to sizes ranging from\n0.1 to 10 million sentences per language, speci-\nfied as necessary. We used UDP IPE(Straka and\nStrakov ´a, 2017) for tokenization2, MOSES (Koehn\net al., 2007) for truecasing, and we apply 60 thou-\nsand BPE joins (following Lample et al. (2018))\nacross both corpora using fastBPE.3,4We train the\nword embeddings using the WORD 2VEC skipgram\nmodel, with the same hyperparameters as used in\nArtetxe et al. (2017b), except using an embedding\ndimension size of 512.5For embedding align-\nment, we use the completely unsupervised version\nof V ECMAPwith default parameters. We then\ntrain our unsupervised NMT models using Lam-\nple et al. (2018)’s implementation, using the de-\nfault parameters, with the exception of 10 back-\ntranslation processors rather than 30 due to hard-\nware limitations. We use the early stopping crite-\nrion from Lample et al. 
(2018).6\nTo demonstrate the importance of a large\namount of training data, we vary the amount of\nmonolingual data used for training the embeddings\nas well as the amount used for training the NMT\n2We use UDP IPE’s tokenizer over the commonly used\nMOSES as UDP IPElearns tokenization from gold-standard\nlabels based on the UD tokenizing standard, allowing for\nhigher-quality dependency parsing (which will be used in\nSection 4).\n3https://github.com/glample/fastBPE\n4BPE is not applied when measuring BLI or word similarity.\n5We use a dimension size of 512 to match the embedding size\nused in Lample et al. (2018)’s Transformer model.\n6We also limit training to 24 hours. On the GPU we used to\ntrain our experiments, an Nvidia V100, limiting the training\ntime only affected systems which used 10 million sentences\nper language.\nFigure 2: BLI of standard WORD 2VEC using various amounts\nof training data, measured with precision at 1, 5, and 10.\nsystem in Figure 1.7Even if we then use 10 million\nsentences per language to train the NMT system,\nusing only 100 thousand sentences per language to\ntrain the embeddings results in a BLEU score be-\nlow 1. Conversely, the NMT system can achieve a\nBLEU score of around 6 using embeddings trained\non 10 million sentences, even when the NMT sys-\ntem is only trained on 100 thousand sentences per\nlanguage.\nWe also provide Figure 2, showing the\nBLI scores of the aligned embeddings (using\nthe English !German test set from Artetxe et\nal. (2017a)8) as we vary the amount of training data\nused for the embeddings. We can see that the BLI\nscores decrease dramatically as the amount of sen-\ntences decreases, matching the trend of the results\nfrom Figure 1. Although BLI has been criticized\nfor not always correlating with downstream tasks\n(Glavas et al., 2019), in this case, poor alignment\ncorresponds to poor MT performance.\nIn these experiments, we use V ECMAPfor\naligning embeddings. V ECMAP’s algorithm be-\ngins by initializing a bilingual dictionary, which\nuses a word’s relations to the other words in the\nsame language, with the idea being that “apple”\nwould be close to “pear” but far from “motorcy-\ncle” in every language, for example. However, if\nthe quality of embeddings is poor, the random ini-\ntialization of embeddings has a greater dampening\neffect. Using embedding similarity tasks (shown\nin Table 1), we find this to be the case.\nWe measure the quality of the monolingual em-\nbeddings using 3 similarity datasets for English:\n7Although we only show results for an unsupervised NMT\nsystem, the state-of-the-art SMT systems also require initial-\nization from pretrained embeddings. Therefore, we expect the\nsame trend would appear.\n8We modify the test set by truecasing it in order to match our\nmodels.\nWord SimilarityAmount of Data (M)\n0.1 1 10\nEN - MEN 0.138 0.421 0.705\nEN - WS353 0.018 0.461 0.628\nEN - SIMLEX 0.011 0.232 0.300\nDE - SIMLEX DE 0.017 0.051 0.293\nTable 1: The Spearman correlation of the similarity of word\npairs (measured by cosine similarity) and human evalua-\ntion. Evaluation done using: https://github.com/\nkudkudak/word-embeddings-benchmarks\nMEN (Bruni et al., 2014), WS353 (Agirre et al.,\n2009), and SIMLEX999 (Hill et al., 2015). We\nalso use Multilingual SIMLEX999 (Leviant and\nReichart, 2015) for German and denote this as\nSIMLEX_DE .\nAs we can see in Table 1, the correlation to hu-\nman judgment on similarity tasks decreases dra-\nmatically as the amount of data used to train the\nmodels decreases. 
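As an aside, the kind of correlation reported in Table 1 just below can be computed in a few lines. The sketch assumes the monolingual embeddings are available as a plain word-to-vector dictionary; the helper names are ours, not part of the evaluation toolkit cited in the table caption.

```python
import numpy as np
from scipy.stats import spearmanr

def cosine(u, v):
    return float(np.dot(u, v) / (np.linalg.norm(u) * np.linalg.norm(v)))

def similarity_correlation(pairs, embeddings):
    """pairs: list of (word1, word2, human_score); embeddings: dict word -> np.ndarray.
    Returns the Spearman correlation between cosine similarities and human judgments."""
    model_scores, human_scores = [], []
    for w1, w2, gold in pairs:
        if w1 in embeddings and w2 in embeddings:   # skip out-of-vocabulary pairs
            model_scores.append(cosine(embeddings[w1], embeddings[w2]))
            human_scores.append(gold)
    rho, _ = spearmanr(model_scores, human_scores)
    return rho
```

Pairs with out-of-vocabulary words are simply skipped here; the benchmark tool cited in the caption of Table 1 may handle them differently.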
The poor correlation when data\nis limited explains V ECMAP’s poor alignment, as\nit relies on word similarity being relatively equiva-\nlent across languages for its initialization step.\n4 Getting More out of Scarce Data\nWith the source of the problem established as the\ndrop in quality of embeddings, we ask ourselves:\nhow can we prevent this drop in a low-resource\nscenario, where considerably less monolingual\ndata is available? We argue that the conventional\nword embedding methods (i.e. WORD 2VEC) do\nnot use all of the information present within sen-\ntences during the training process.\nWord embedding algorithms typically define a\ncontext-target pair as a word and its neighbor-\ning words in a sentence, respectively. While this\nmethod works with a large amount of data avail-\nable, it relies on the fact that a word is seen in sev-\neral different contexts in order to be represented\nin the embedding space with respect to its mean-\ning. When data is limited, the contexts contain too\nmuch variability to allow for a meaningful repre-\nsentation to be learned.\nTo test this, we use an embedding strategy\nthat has a different definition of the context:\ndependency-based word embeddings (Levy and\nGoldberg, 2014). These embeddings model the\nsyntactic similarity between words rather than se-\nmantic similarity, providing an embedding repre-\nsentation complementary to standard embeddings.\nThis section presents our findings using\nFigure 3: Example of a dependency-parsed sentence.\ndependency-based embeddings (4.1). We also\nconsider the effect of using sub-word information\nvia FAST TEXT (4.2). With the previous two\napproaches, we find that ensembling models\ncan be useful, and investigate this further (4.3).\nFinally, we vary context window size and report\non its effect (4.4).\n4.1 Dependency-Based Embeddings\nDependency parsing offers a formalization of the\ngrammatical relationship between the words in a\nsentence. For each sentence, a dependency parser\nwill create a tree in which words are connected if\nthey have a dependency relation between them. As\nshown in Figure 3, the nsubj relation denotes the\nsubject-to-verb relation between she andowns ,\nfor example.\nLevy and Goldberg (2014) use dependency in-\nformation to train word embeddings, defining the\ncontext as the parent and child relation(s) of the\ntarget word. This has two effects that distin-\nguish dependency-based embeddings from stan-\ndard embeddings. Firstly, the context is limited\nto syntactically-related words. For example, deter-\nminers are always limited to a context of a noun.\nTherefore, words of the same part-of-speech tend\nto be closer in the embedding space, since they\nhave similar contexts. Secondly, the context is not\nlimited by the distance between words in a sen-\ntence. For example, Figure 4 shows a long-range\ndependency between item andrack . This rela-\ntion would only be captured by a standard word\nembedding algorithm with a large context window\nof length 14 or greater, whereas in the dependency-\nbased version rack is one of 4 tokens in item ’s\ncontext, and item is one of 6 tokens in rack ’s\ncontext.\nLevy and Goldberg (2014) also require the em-\nbedding model to predict the relation between the\ntarget word and a context word, and whether it is\na parent or child relation. This explicitly trains the\nmodel to understand the syntactic relationship be-\ntween two words, which provides information on\nthe function of a word in a sentence. 
For example,\nreferring back to Figure 3, the fact that owns has\nFigure 4: Example of a sentence with a long-range dependency, in this case, an nsubj relation between item andrack .\nadobj relation means that owns is a transitive\nverb. Although this information could be learned\nimplicitly by regular WORD 2VEC, as the amount\nof training data decreases, it becomes much harder\nto learn without explicit labels.\nDue to their reduced context variability and their\nexplicit learning of linguistic information, we ex-\npect dependency-based embeddings to achieve a\nbetter alignment in the low-resource setting.\nIn the following experiments, we use the same\nsettings as mentioned in Section 3, apart from\nthose explicitly mentioned. With the addition of\ndependency parsing into the pipeline, we apply a\nparser on the tokenized sentences, while truecas-\ning is learned prior to but applied after parsing.\nWe use the StanfordNLP parser (Qi et al., 2019),\nusing the pretrained English and German models\nprovided to parse our data.\nAlthough the dependency parser that we use is\nsupervised, therefore requiring dependency data, it\nis possible to train a dependency parser in an un-\nsupervised fashion (He et al., 2018). Regardless, a\ndependency parser extracts linguistic information\nthat is present in a sentence, thus our dependency-\nbased method can still show whether using such\nlinguistic information for training embeddings is\nuseful for their alignment.\nFor training dependency-based word embed-\ndings, we apply Levy and Goldberg (2014)’s\ndependency-based WORD 2VEC, and compare this\nagainst the standard WORD 2VEC. For the\ndependency-based embeddings, we use the same\nhyperparameters as we use for WORD 2VEC.\nTo achieve considerable results in unsupervised\nNMT, it is necessary that we apply Byte-Pair En-\ncoding (BPE) (Gage, 1994). In the dependency-\nbased pipeline, this is learned after truecasing and\napplied after dependency parsing. In order to apply\nBPE to dependency-parsed sentences, any words\nthat are split into multiple sub-word units will have\nabpe relation or relations connecting them. We\nconnected sub-word units from left-to-right, where\nthe leftmost unit was the parent of all other units.9\n9We experimented with several methods of connecting the re-Amount (M) Reg DP Reg+DP\n0.1 0.00% 0.00% 0.00%\n0.4 0.27% 0.18% 0.62%\n1 2.49% 5.05% 9.64%\n2 15.28% 11.32% 18.66%\n10 35.86% 25.03% 36.06%\nTable 2: BLI P@5 scores for aligned standard (Reg),\ndependency-based (DP), and hybrid (Reg+DP) WORD 2VEC\nembeddings. The best scores are shown in bold.\nIn addition to the standard and dependency-\nbased word embeddings, we also combine the two\napproaches, forming a hybrid embedding. This\nis done by training word embeddings using both\nmethods separately with half the embedding di-\nmension size (i.e. 256), concatenating them, and\naligning them with V ECMAP. We use the +sym-\nbol to denote a combined model.\nTable 2 shows the BLI accuracies for the\nstandard WORD 2VEC (Reg), dependency-based\nWORD 2VEC (DP), and hybrid (Reg+DP) embed-\ndings as we vary the amount of monolingual sen-\ntences available to the embedding algorithms. We\ncan see that the hybrid model outperforms the\nother two models at each threshold for data, apart\nfrom 100 thousand, where all three models fail en-\ntirely. 
Although the dependency-based model per-\nforms relatively poorly in cases where more than\n1 million sentences are available, we see that the\nhybrid model still outperforms the regular model,\nwhich would indicate that the dependency-based\nmodel is providing complementary information to\nthe regular model.\nWe also include Table 3, which shows\nthe English !German BLEU scores10of our\nNMT systems using the pretrained standard,\ndependency-based, and hybrid embeddings. Here,\nwe see that the standard embeddings outperform\nthe other two models when they are given 2 mil-\nlion or more sentences to train on. We suspect\nlations, considering token length and frequency, but we found\nthat the connection method had little impact on the resulting\nBLEU scores.\n10We report the German !English BLEU scores in Table 8 in\nAppendix A.\nAmount (M) Reg DP Reg+DP\n0.1 0.44 0.97 0.4\n0.4 1.58 2.56 3.26\n1 5.41 5.9 6.99\n2 9.31 7.82 8.82\n10 12.9 10.28 11.41\nTable 3: English!German BLEU scores for NMT models\nusing pretrained standard (Reg), dependency-based (DP), and\nhybrid (Reg+DP) embeddings. The best scores are shown in\nbold.\nthis difference in performance is due to the in-\nclusion of BPE, as that is the only difference in\npreprocessing. When adding the bpe relation to\nour dependency-parsed sentences, we may inad-\nvertently isolate some sub-word units from their\nnatural contexts. As we treat the leftmost unit as\nthe parent, the other units will only have a relation\nto the leftmost unit, limiting their context and po-\ntentially adversely affecting their embedded repre-\nsentation.\nDespite the potentially adverse effects of BPE,\nwe see that dependency-based embeddings and hy-\nbrid embeddings outperform standard embeddings\nwhen monolingual data is limited to 1 million sen-\ntences per language or fewer.\n4.2 Considering Sub-word Information\nAs Lample et al. (2018) and Artetxe et al. (2019)\nestablished, considering sub-word information\nproves very effective in increasing the performance\nof unsupervised MT systems. We follow Lam-\nple et al. (2018) and achieve this by using FAST -\nTEXT. As FAST TEXT represents words as a sum-\nmation of character n-grams, rarer words can have\na meaningful representation if they are composed\nof common character n-grams. So as data becomes\nmore scarce, FAST TEXT effectively relies on mor-\nphemes to represent words.\nFor FAST TEXT, we use the same hyperparam-\neters as used for the regular WORD 2VEC, apart\nfrom the context size, in which we follow Lam-\nple et al. (2018) and use a size of 5. Additionally,\nwe create hybrid models of FAST TEXT and regu-\nlarWORD 2VEC concatenated (Fast+Reg), as well\nasFAST TEXT and dependency-based WORD 2VEC\nconcatenated (Fast+DP). The resulting BLI scores\nare shown in Table 4.\nWe can see that the inclusion of sub-word in-\nformation via FAST TEXT has a very large impact\non the alignment quality in general: for FAST -Amount (M) Fast Fast+Reg Fast+DP\n0.1 0.24% 0.36% 1.45%\n0.4 0.18% 1.06% 19.98%\n1 0.78% 29.86% 25.66%\n2 34.09% 35.64% 29.98%\n10 47.36% 50.61% 50.34%\nTable 4: BLI P@5 scores for aligned FAST TEXT (Fast),\nand two hybrid models consisting of FAST TEXT with reg-\nular (Fast+Reg) and FAST TEXT with dependency-based\n(Fast+DP) WORD 2VEC embeddings. 
The best scores are\nshown in bold.\nAmount (M) Fast Fast+Reg Fast+DP\n0.1 0.77 1.94 1.16\n0.4 7.47 7.28 5.32\n1 10.37 9.37 7.48\n2 11.49 11.48 10.12\n10 13.98 13.89 11.77\nTable 5: English!German BLEU scores for aligned FAST -\nTEXT (Fast), and two hybrid models consisting of FAST TEXT\nwith regular (Fast+Reg) and FAST TEXT with dependency-\nbased (Fast+DP) WORD 2VEC embeddings. The best scores\nare shown in bold.\nTEXTalone, the alignment scores improve over the\nregular and dependency-based models, provided\nthere are 2 million or more sentences. Unlike with\nregular embeddings, the Fast+DP model does not\nprovide improvements when there are at least 1\nmillion sentences available. With all three FAST -\nTEXT-based models, we see a drastic improvement\nfrom 0-2% up to 20-35% when the amount of data\nis increased, however the Fast+DP model has this\nincrease with less data, which may indicate that\ndependency information is useful in the lower re-\nsource setting.\nFor 100 thousand sentences, we do see some im-\nprovement, but with a P@5 of less than 2%, it is\nclear that none of the embedding methods tested\nare capable of providing embeddings of a high\nenough quality to allow for a decent unsupervised\nalignment.\nWhile the inclusion of sub-word information\nviaFAST TEXT outperforms the dependency-based\nembeddings alone, the two are not mutually exclu-\nsive: it is feasible to train a variant of FAST TEXT\nthat uses contexts based on dependency relations to\nget the best of both worlds. From simple concate-\nnation, the Fast+DP hybrid embeddings proved\nuseful for cases where only 100-400 thousand sen-\ntences per language were available.\nTable 5 shows the resulting BLEU scores for\nFAST TEXT and the two previously described hy-\nbrid models.1112With at least 400 thousand sen-\ntences available, we see that the non-hybrid model\nand the Fast+Reg hybrid perform similarly, but\nthe Fast+DP hybrid performs worse than the other\ntwo. With only 100 thousand sentences available,\nboth hybrid models perform better than the non-\nhybrid model, with Fast+Reg giving the best per-\nformance.\nThe BLEU scores from Table 5 as well as Table\n3 seem to indicate that hybridization does not nec-\nessarily lead to better translation quality, despite\noften giving a higher BLI score. The BLEU score\nof the Fast+DP model trained on 400 thousand sen-\ntences per language stands out in particular, as the\ncorresponding BLI score appears to indicate that\nthe quality of the alignment should be much better\nthan the other two models. We speculate that this\ncould be due to one of two things: either it is due to\nthe inclusion of BPE (as we previously discussed),\nor it is an artifact of V ECMAP’s training. Concern-\ning the latter, V ECMAPmay be aligning the em-\nbeddings to the point where they are close enough\nfor the NMT system to understand which words\ncorrespond to which, but not to the point where a\nlarge number of words will have their correspond-\ning words in the other language close enough to be\ncounted for the BLI precision at 5 score. There-\nfore, the large jump in BLI scores can be mislead-\ning in terms of alignment quality for unsupervised\nNMT.\nOverall, the performance of FAST TEXT indi-\ncates that the use of sub-word information is very\nimportant to the performance of the NMT sys-\ntem, as we see both BLI and BLEU score im-\nprovements when comparing FAST TEXT to stan-\ndard WORD 2VEC. 
Along with the performance of\nthe dependency-based embeddings, this supports\nthe idea that linguistic information as a whole can\nbe useful in improving translation quality in unsu-\npervised NMT.\n11We report the German !English BLEU scores in Table 9 in\nAppendix A.\n12The BLEU scores are not directly comparable to the results\nof Lample et al. (2018) for a couple of reasons (apart from\nthe hardware limitation previously mentioned): 1. We use\nVECMAPto align embeddings, whereas they concatenate cor-\npora and train a singular embedding. 2. We use a maximum of\n10 million sentences per language, they use the entire WMT\nNews Crawl dataset, which is well over 100 million sentences\nper language.4.3 Ensembling of Embeddings\nAs our hybrid embeddings have shown to have an\nincrease in performance, we note that this could be\ndue to the effect of ensembling two embeddings\nwith different random weight initializations rather\nthan due to the differences between the embedding\nalgorithms. To test this, we train two embeddings\nusing the same algorithm (but different weight\ninitializations) and concatenate them in the same\nmanner as the hybrid models. Using this method,\nwe produce Reg+Reg, DP+DP, and Fast+Fast, and\nwe compare them to our hybrid models in Table 6.\nThe scores show that the improvement found in\nReg+DP is greater than the improvement found\nby ensembling either of its two constituent mod-\nels. This indicates that there is a complemen-\ntary relationship between regular and dependency-\nbased WORD 2VEC. As for Fast+Fast, the model\nperforms better than the two hybrid models using\nFAST TEXT when the number of sentences ranges\nfrom 400 thousand to 2 million, with the great-\nest improvement found at 400 thousand sentences\nper language. While there is a greater improve-\nment from Fast+Fast compared to Fast+Reg and\nFast+DP, this may be more due to the poor qual-\nity of the Reg and DP components of the hy-\nbrid models, whose contribution may be hinder-\ning the alignment rather than helping. Overall,\nensembling 2 embeddings from the same embed-\nding algorithm yields marginal improvements in\nalignment quality, whereas ensembling 2 embed-\ndings from different algorithms can potentially\nyield greater benefits.\n4.4 Context Size\nSeeing as the context plays a role in the alignment\nquality of embeddings, we vary the context win-\ndow size of WORD 2VEC and FAST TEXT embed-\ndings to see its effect. Additionally, using a context\nsize of 1 with WORD 2VEC produces embeddings\nwhich are better suited for inducing part-of-speech\ntags (Lin et al., 2015), which could also aid with\nalignment. As such we test on context sizes of 1,\n3, 5, and 10.\nThe results overwhelmingly indicate that a\nlarger context size is better for alignment when\nthere are at least 1 million sentences per language\navailable. This may explain why the dependency-\nbased embeddings do not perform well relative to\nthe standard WORD 2VEC and FAST TEXT embed-\ndings. 
In the sentence in Figure 4, for example, the\nAmount (M) Reg+Reg DP+DP Reg+DP Fast+Fast Fast+Reg Fast+DP\n0.1 0.00% 0.00% 0.00% 0.84% 0.36% 1.45%\n0.4 0.09% 0.44% 0.62% 24.14% 1.06% 19.98%\n1 6.07% 4.67% 9.64% 31.26% 29.86% 25.66%\n2 15.50% 11.46% 18.66% 35.86% 35.64% 29.98%\n10 35.93% 25.30% 36.06% 47.16% 50.61% 50.34%\nTable 6: BLI comparison of ensemble models (Reg+Reg, DP+DP, and Fast+Fast), to the aforementioned hybrid models\n(Reg+DP, Fast+Reg, and Fast+DP).\nAmount (M)WORD 2VEC FAST TEXT\n1 3 5 10 1 3 5 10\n0.1 0.00% 0.12% 0.00% 0.00% 0.12% 0.60% 0.24% 0.00%\n0.4 0.00% 0.00% 0.00% 0.27% 0.18% 0.27% 0.18% 0.35%\n1 0.00% 0.08% 1.48% 2.49% 0.00% 0.23% 0.78% 28.07%\n2 3.16% 5.66% 13.15% 15.28% 23.14% 32.33% 34.09% 35.05%\n10 27.06% 32.27% 33.90% 35.86% 39.92% 45.20% 47.36% 48.58%\nTable 7: BLI P@5 scores for aligned FAST TEXT, and WORD 2VEC, with varying window sizes of 1, 3, 5, and 10.\nlargest context is 6 for the word rack , and the av-\nerage context size is 1.83. Given the increases we\nsee from WORD 2VEC and FAST TEXT with a larger\ncontext size, it is likely we will see a large increase\nin alignment quality for dependency-based embed-\ndings as well if they can be trained with a larger\ncontext.\n5 Conclusion and Future Work\nUnsupervised NMT has made great strides in mak-\ning MT more accessible for language pairs that\nlack parallel corpora. We attempt to further this ac-\ncessibility by introducing LRUMT, where mono-\nlingual data is also limited. Our results show\nthat, in the current state-of-the-art pipeline, the\nquality of the pretrained word embeddings is the\nmain issue, and that using syntactically-motivated\ndependency-based embeddings has the potential to\nimprove performance when monolingual data is\nlimited.\nWe also see that the inclusion of sub-word infor-\nmation for training word embeddings provides a\ncrucial performance increase, which provides fur-\nther evidence that using the latent linguistic in-\nformation in a sentence can improve embedding\nalignment quality.\nFinally, on the topic of context size, we find that\na larger context size is almost always better, most\nnoticeably when more data is available. This helps\nexplain the poorer performance of the dependency-\nbased embeddings on larger amounts of data.\nTo improve upon dependency-based embed-dings for unsupervised NMT, we consider two\navenues to explore: including sub-word infor-\nmation and increasing the context size. To in-\nclude sub-word information, it should be possi-\nble to combine the training methods of FAST TEXT\nand dependency-based WORD 2VEC. To increase\nthe context size, one might consider including a\nword’s grandparent, grandchildren, and siblings\n(its parent’s other children) as part of the context.\nWe also note that we currently use a pretrained\ndependency parser, trained on labelled dependency\ndata, which is often harder to come by than parallel\ndata. We plan to switch to using unsupervised de-\npendency parsing techniques to ensure this method\nis accessible for all languages.\nFurthermore, there are several potential meth-\nods for incorporating more linguistic information\ninto embeddings. One such possibility would be\nto use a morphological segmenter such as M OR-\nFESSOR (Virpioja et al., 2013) rather than BPE,\nwhich would likely provide better results for more\nmorphologically-rich languages. 
As we only test\non English–German, our future work will test this\nnew paradigm on other language pairs, particu-\nlarly those in which unsupervised NMT fails to\nperform such as English into morphologically-rich\nlanguages.\nReferences\nAgirre, Eneko, Enrique Alfonseca, Keith Hall, Jana\nKravalova, Marius Pasca, and Aitor Soroa. 2009.\nA study on similarity and relatedness using distribu-\ntional and wordnet-based approaches.\nArtetxe, Mikel, Gorka Labaka, and Eneko Agirre.\n2017a. Learning bilingual word embeddings with\n(almost) no bilingual data. In Proceedings of the\n55th Annual Meeting of the Association for Compu-\ntational Linguistics (Volume 1: Long Papers) , pages\n451–462.\nArtetxe, Mikel, Gorka Labaka, Eneko Agirre, and\nKyunghyun Cho. 2017b. Unsupervised neural ma-\nchine translation. arXiv preprint arXiv:1710.11041 .\nArtetxe, Mikel, Gorka Labaka, and Eneko Agirre.\n2018a. A robust self-learning method for fully un-\nsupervised cross-lingual mappings of word embed-\ndings. In Proceedings of the 56th Annual Meeting of\nthe Association for Computational Linguistics (Vol-\nume 1: Long Papers) , pages 789–798.\nArtetxe, Mikel, Gorka Labaka, and Eneko Agirre.\n2018b. Unsupervised statistical machine translation.\nInProceedings of the 2018 Conference on Empirical\nMethods in Natural Language Processing , Brussels,\nBelgium, November. Association for Computational\nLinguistics.\nArtetxe, Mikel, Gorka Labaka, and Eneko Agirre.\n2019. An effective approach to unsupervised ma-\nchine translation. arXiv preprint arXiv:1902.01313 .\nBarrault, Lo ¨ıc, Ond ˇrej Bojar, Marta R Costa-juss `a,\nChristian Federmann, Mark Fishel, Yvette Gra-\nham, Barry Haddow, Matthias Huck, Philipp Koehn,\nShervin Malmasi, et al. 2019. Findings of the\n2019 conference on machine translation (wmt19). In\nProceedings of the Fourth Conference on Machine\nTranslation (Volume 2: Shared Task Papers, Day 1) ,\npages 1–61.\nBojanowski, Piotr, Edouard Grave, Armand Joulin, and\nTomas Mikolov. 2017. Enriching word vectors with\nsubword information. Transactions of the Associa-\ntion for Computational Linguistics , 5:135–146.\nBruni, Elia, Nam-Khanh Tran, and Marco Baroni.\n2014. Multimodal distributional semantics. Journal\nof Artificial Intelligence Research , 49:1–47.\nGage, Philip. 1994. A new algorithm for data compres-\nsion. The C Users Journal , 12(2):23–38.\nGlavas, Goran, Robert Litschko, Sebastian Ruder, and\nIvan Vulic. 2019. How to (properly) evaluate cross-\nlingual word embeddings: On strong baselines, com-\nparative analyses, and some misconceptions. arXiv\npreprint arXiv:1902.00508 .\nHassan, Hany, Anthony Aue, Chang Chen, Vishal\nChowdhary, Jonathan Clark, Christian Feder-\nmann, Xuedong Huang, Marcin Junczys-Dowmunt,\nWilliam Lewis, Mu Li, et al. 2018. Achieving\nhuman parity on automatic chinese to english news\ntranslation. arXiv preprint arXiv:1803.05567 .He, Junxian, Graham Neubig, and Taylor Berg-\nKirkpatrick. 2018. Unsupervised learning of syntac-\ntic structure with invertible neural projections. arXiv\npreprint arXiv:1808.09111 .\nHill, Felix, Roi Reichart, and Anna Korhonen. 2015.\nSimlex-999: Evaluating semantic models with (gen-\nuine) similarity estimation. Computational Linguis-\ntics, 41(4):665–695.\nKoehn, Philipp, Hieu Hoang, Alexandra Birch, Chris\nCallison-Burch, Marcello Federico, Nicola Bertoldi,\nBrooke Cowan, Wade Shen, Christine Moran,\nRichard Zens, et al. 2007. Moses: Open source\ntoolkit for statistical machine translation. 
In Pro-\nceedings of the 45th annual meeting of the associa-\ntion for computational linguistics companion volume\nproceedings of the demo and poster sessions , pages\n177–180.\nLample, Guillaume, Alexis Conneau, Ludovic De-\nnoyer, and Marc’Aurelio Ranzato. 2017. Unsuper-\nvised machine translation using monolingual corpora\nonly. arXiv preprint arXiv:1711.00043 .\nLample, Guillaume, Myle Ott, Alexis Conneau, Lu-\ndovic Denoyer, and Marc’Aurelio Ranzato. 2018.\nPhrase-based & neural unsupervised machine trans-\nlation. arXiv preprint arXiv:1804.07755 .\nLeviant, Ira and Roi Reichart. 2015. Separated by an\nun-common language: Towards judgment language\ninformed vector space modeling. arXiv preprint\narXiv:1508.00106 .\nLevy, Omer and Yoav Goldberg. 2014. Dependency-\nbased word embeddings. In Proceedings of the 52nd\nAnnual Meeting of the Association for Computa-\ntional Linguistics (Volume 2: Short Papers) , vol-\nume 2, pages 302–308.\nLin, Chu-Cheng, Waleed Ammar, Chris Dyer, and Lori\nLevin. 2015. Unsupervised pos induction with word\nembeddings. arXiv preprint arXiv:1503.06760 .\nMikolov, Tomas, Ilya Sutskever, Kai Chen, Greg S Cor-\nrado, and Jeff Dean. 2013. Distributed representa-\ntions of words and phrases and their compositional-\nity. In Advances in neural information processing\nsystems , pages 3111–3119.\nOch, Franz Josef. 2003. Minimum error rate training\nin statistical machine translation. In Proceedings of\nthe 41st Annual Meeting on Association for Compu-\ntational Linguistics-Volume 1 , pages 160–167. Asso-\nciation for Computational Linguistics.\nQi, Peng, Timothy Dozat, Yuhao Zhang, and Christo-\npher D Manning. 2019. Universal dependency pars-\ning from scratch. arXiv preprint arXiv:1901.10457 .\nStraka, Milan and Jana Strakov ´a. 2017. Tokenizing,\npos tagging, lemmatizing and parsing ud 2.0 with\nudpipe. In Proceedings of the CoNLL 2017 Shared\nTask: Multilingual Parsing from Raw Text to Univer-\nsal Dependencies , pages 88–99, Vancouver, Canada,\nAugust. Association for Computational Linguistics.\nVirpioja, Sami, Peter Smit, Stig-Arne Gr ¨onroos, Mikko\nKurimo, et al. 2013. Morfessor 2.0: Python imple-\nmentation and extensions for morfessor baseline.\nA German !English Results\nWe report the BLEU scores for German !English\nin Tables 8 and 9. Comparing these BLEU scores\nto the respective English !German BLEU scores\nin Tables 3 and 5, we see that the best perform-\ning models are the same for both translation di-\nrections. This suggests that the translation direc-\ntion is not important for evaluating the relative dif-\nferences unsupervised NMT systems. However,\nsince English and German are related languages,\nthis could also simply be a feature of this language\npair.\nAmount (M) Reg DP Reg+DP\n0.1 0.54 1.20 0.57\n0.4 1.95 2.91 3.71\n1 6.99 7.14 8.74\n2 11.90 10.03 11.44\n10 16.97 12.95 15.07\nTable 8: German!English BLEU scores for NMT models\nusing pretrained standard (Reg), dependency-based (DP), and\nhybrid (Reg+DP) embeddings. The best scores are shown in\nbold.\nAmount (M) Fast Fast+Reg Fast+DP\n0.1 1.11 2.39 1.35\n0.4 10.01 9.98 7.10\n1 13.68 12.38 9.99\n2 15.27 14.82 13.15\n10 18.40 18.31 15.16\nTable 9: German!English BLEU scores for aligned FAST -\nTEXT (Fast), and two hybrid models consisting of FAST TEXT\nwith regular (Fast+Reg) and FAST TEXT with dependency-\nbased (Fast+DP) WORD 2VEC embeddings. The best scores\nare shown in bold.", "main_paper_content": null }
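As a closing illustration of the hybrid embeddings used throughout this paper (Section 4.1): two embedding sets of half the target dimensionality (256 each) are trained separately and concatenated into a single 512-dimensional representation before cross-lingual alignment. The sketch below is ours, not the authors' released code, and assumes both embedding sets are loaded as word-to-vector dictionaries.

```python
import numpy as np

def build_hybrid_embeddings(emb_a, emb_b):
    """Concatenate two embedding sets (e.g. standard and dependency-based word2vec)
    into one hybrid embedding, keeping only the vocabulary shared by both.

    emb_a, emb_b: dict word -> np.ndarray (e.g. 256-dimensional each).
    Returns a dict word -> concatenated np.ndarray (e.g. 512-dimensional)."""
    shared_vocab = sorted(set(emb_a) & set(emb_b))
    hybrid = {}
    for word in shared_vocab:
        hybrid[word] = np.concatenate([emb_a[word], emb_b[word]])
    return hybrid
```

The concatenated vectors are then passed to VecMap exactly like any other monolingual embeddings, so the hybrid construction requires no change to the rest of the unsupervised NMT pipeline.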
{ "decision": "Unknown", "reviews": [] }
0
0
[]
[]
{ "id": "xzpGsgQinQ", "year": null, "venue": "EAMT 2023", "pdf_link": "https://aclanthology.org/2023.eamt-1.58.pdf", "forum_link": "https://openreview.net/forum?id=xzpGsgQinQ", "arxiv_id": null, "doi": null }
{ "title": "CorCoDial - Machine translation techniques for corpus-based computational dialectology", "authors": [ "Yves Scherrer", "Olli Kuparinen", "Aleksandra Miletic" ], "abstract": null, "keywords": [], "raw_extracted_content": "CorCoDial – Machine translation techniques\nfor corpus-based computational dialectology\nYves Scherrer Olli Kuparinen\nDepartment of Digital Humanities, University of Helsinki, Finland\[email protected] Mileti ´c\nAbstract\nThis paper presents CorCoDial, a research\nproject funded by the Academy of Fin-\nland aiming to leverage machine transla-\ntion technology for corpus-based compu-\ntational dialectology. In this paper, we\nbriefly present intermediate results of our\nproject-related research.\n1 Introduction\nDialectology is concerned with the study of\nlanguage variation across space. Over the\nlast decades, dialectologists have collected large\ndatasets, which typically consist of transcribed in-\nterviews with informants. Unfortunately, these\ninterviews cannot easily be compared with each\nother as they differ considerably in length and con-\ntent. If informant Adoes not use word x, this does\nnot necessarily mean that the word does not exist\ninA’s dialect. It may just be that Achose to talk\nabout topics that did not require the use of word\nx. The CorCoDial ( Corpus-based computational\ndialectology ) project aims to introduce compara-\nbility in dialect corpora with the help of machine\ntranslation techniques. CorCoDial is funded by the\nAcademy of Finland during the period 2021–2025.\nThe core of the project focuses on the dialect-\nto-standard normalization process, which is a\nsequence-to-sequence task that maps the phonetic\ntranscriptions to the standardized spellings. We are\nnot only interested in the results of the normaliza-\ntion process, but also in the emerging representa-\ntions of dialects and speakers that the (statistical or\nneural) normalization models learn. These repre-\nsentations allow us to provide new visualisations\nof dialect landscapes and to confirm or challenge\ntraditional dialect classifications.\nTraditional dialect corpora are costly to pro-\nduce: informants need to be found and inter-\n© 2023 The authors. This article is licensed under a Creative\nCommons 4.0 licence, no derivative works, attribution, CC-\nBY-ND.viewed, and the recorded interviews need to be\ntranscribed and annotated. To circumvent this data\nbottleneck, researchers have increasingly turned to\nuser-generated content (UGC), i.e., to texts pub-\nlished by laypeople on social media. We also\ninvestigate to what extent normalization methods\ntrained on “clean” data transcribed by dialectolo-\ngists generalize to noisier UGC datasets.\nThe main goals of the CorCoDial project are:\n1. to improve the automatic normalization of di-\nalect texts by using state-of-the-art machine\ntranslation methods,\n2. to extract, visualize, compare and interpret\nthe dialectal patterns emerging from the nor-\nmalization models, and\n3. to contrast the patterns found in traditional\ndialectological corpora with those found in\nuser-generated content.\nIn the following sections, we present some re-\nsults of our ongoing research.\n2 Benchmarking dialect-to-standard\nnormalization systems\nIn contrast to historical text normalization (Boll-\nmann, 2019; Bawden et al., 2022) and UGC stan-\ndardization, there have not been any multilin-\ngual evaluations of dialect-to-standard normaliza-\ntion systems. 
In order to establish dialect normal-\nization as a distinct task, we compiled a multi-\nlingual benchmark dataset from existing sources,\ncovering Finnish, Norwegian, Swiss German and\nSlovene.\nWe evaluate different sequence-to-sequence\nmodels that have been previously employed for\nnormalization tasks:1statistical machine transla-\ntion with character-level segmentation; neural ma-\nchine translation with RNN and Transformer ar-\nchitectures, character-level and BPE segmentation,\n1Note that normalization tasks, in contrast to other translation\ntasks, are monotonic. Although specific monotonic NMT ar-\nchitectures have been proposed, we follow earlier evaluations\nand focus on vanilla architectures. We leave the evaluation of\nnormalization-specific architectures to future work.\nand full-sentence and word-trigram windows; and\nthe pre-trained multilingual ByT5 model using\nbyte-level segmentation.\nOur results show that the Transformer is the\nmost successful model architecture on all four\ndatasets. This is somewhat surprising since re-\ncent related work (Bollmann, 2019; Partanen et al.,\n2019; Bawden et al., 2022) found SMT and RNN-\nNMT to be competitive. Using word trigram win-\ndows instead of full sentences, as in Partanen et al.\n(2019), is also effective in our setup, although the\ngap towards full-sentence models is considerably\nlower than in their work. Finally, the pre-trained\nByT5 model only outperforms vanilla Transform-\ners on the Norwegian dataset.\n3 Analyzing speaker representations in\nmulti-dialectal NMT\nLanguage labels are often used in multilingual\nneural language modeling and machine translation\nto inform the model of the language(s) of each\nsample. As a result of the training process, the\nmodels learn embeddings of these language labels,\nwhich in turn reflect the relationships between the\nlanguages ( ¨Ostling and Tiedemann, 2017). Fol-\nlowing Abe et al. (2018), we apply this idea to\nthe Finnish and Norwegian parts of the normaliza-\ntion dataset introduced in the previous section. We\nuse distinct labels for each speaker in the corpus\nand analyze their representations obtained by the\nTransformer-based normalization models.\nWe find that (1) the speaker label embeddings\nof two speakers coming from the same village are\nvery similar, and that (2) the embeddings of all\nspeaker labels taken together reflect the traditional\ndialect classifications precisely. Detailed results of\nthis analysis are given in Kuparinen and Scherrer\n(2023).\n4 Collecting Finnish dialect tweets\nIn order to extend our dialectological research to\nmore modern and realistic types of data, we col-\nlected and annotated a dataset of dialectal Finnish\ntweets. We take advantage of Murreviikko (‘di-\nalect week’), a Twitter campaign initiated at the\nUniversity of Eastern Finland, which promotes the\nuse of dialects on Finnish social media. The cam-\npaign lasts for a week in October and has run for\nthree years (2020–2022). 
We collected tweets con-\ntaining the keyword murreviikko or#murreviikko\nvia the Twitter API from all three years.This collection resulted in a total of 465 tweets,\n344 of which were written in a dialect of Finnish.\nThe tweets were manually annotated by a dialec-\ntologist with the dialect region and normalized to\nStandard Finnish on sentence level.\nIn contrast to the “clean” Finnish dialect dataset\nused in our benchmark (Section 2), the Murrevi-\nikko data is much noisier.2In terms of normaliza-\ntion performance, the SMT model has been found\nto perform best, followed by the pre-trained ByT5\nmodel. These two approaches turned out to be\nmuch more robust to noise than the vanilla Trans-\nformers.\nThe corpus collection process, the normaliza-\ntion results and the modalities of access are de-\nscribed in detail in Kuparinen (2023).3\nReferences\nAbe, Kaori, Yuichiroh Matsubayashi, Naoaki Okazaki,\nand Kentaro Inui. 2018. Multi-dialect neural ma-\nchine translation and dialectometry. In Proceedings\nof PACLIC , pages 1–10, Hong Kong, China.\nBawden, Rachel, Jonathan Poinhos, Eleni Kogkitsidou,\nPhilippe Gambette, Beno ˆıt Sagot, and Simon Gabay.\n2022. Automatic normalisation of early Modern\nFrench. In Proceedings of LREC , pages 3354–3366,\nMarseille, France.\nBollmann, Marcel. 2019. A large-scale comparison of\nhistorical text normalization systems. In Proceed-\nings of NAACL-HLT , pages 3885–3898, Minneapo-\nlis, Minnesota, USA.\nKuparinen, Olli and Yves Scherrer. 2023. Dialect rep-\nresentation learning with neural dialect-to-standard\nnormalization. In Proceedings of VarDial , pages\n200–212, Dubrovnik, Croatia.\nKuparinen, Olli. 2023. Murreviikko - a dialectolog-\nically annotated and normalized dataset of Finnish\ntweets. In Proceedings of VarDial , pages 31–39,\nDubrovnik, Croatia.\n¨Ostling, Robert and J ¨org Tiedemann. 2017. Continu-\nous multilinguality with language vectors. In Pro-\nceedings of EACL , pages 644–649, Valencia, Spain.\nPartanen, Niko, Mika H ¨am¨al¨ainen, and Khalid Alnaj-\njar. 2019. Dialect text normalization to normative\nstandard Finnish. In Proceedings of W-NUT , pages\n141–146, Hong Kong, China.\n2The Murreviikko tweet authors are laypersons who do not\nfollow any transcription conventions used by trained dialec-\ntologists. Some of the tweets also mix dialectal and standard\nfeatures. Finally, the tweets contain a lot of social-media spe-\ncific artifacts (emojis, hashtags, etc.) that are completely ab-\nsent from the clean dataset.\n3The public part of the corpus is available at https://\ngithub.com/Helsinki-NLP/murreviikko .", "main_paper_content": null }
{ "decision": "Unknown", "reviews": [] }
0
0
[]
[]
{ "id": "PMUuzCztmBe", "year": null, "venue": "EAMT 2023", "pdf_link": "https://aclanthology.org/2023.eamt-1.12.pdf", "forum_link": "https://openreview.net/forum?id=PMUuzCztmBe", "arxiv_id": null, "doi": null }
{ "title": "Empirical Assessment of kNN-MT for Real-World Translation Scenarios", "authors": [ "Pedro Henrique Martins", "João Alves", "Tânia Vaz", "Madalena Gonçalves", "Beatriz Silva", "Marianna Buchicchio", "José G. C. de Souza", "André F. T. Martins" ], "abstract": "Pedro Henrique Martins, João Alves, Tânia Vaz, Madalena Gonçalves, Beatriz Silva, Marianna Buchicchio, José G. C. de Souza, André F. T. Martins. Proceedings of the 24th Annual Conference of the European Association for Machine Translation. 2023.", "keywords": [], "raw_extracted_content": "Empirical Assessment of kNN-MT for Real-World Translation Scenarios\nPedro Henrique Martins∗1, Jo˜ao Alves∗1,\nTˆania Vaz1, Madalena Gonc ¸alves1, Beatriz Silva1, Marianna Buchicchio1,\nJos´e G. C. de Souza1, Andr ´e F. T. Martins1,2,3\n1Unbabel, Lisbon, Portugal,\n2Instituto de Telecomunicac ¸ ˜oes, Lisbon, Portugal\n3Instituto Superior T ´ecnico, University of Lisbon, Portugal\nAbstract\nThis paper aims to investigate the effec-\ntiveness of the k-Nearest Neighbor Ma-\nchine Translation model ( kNN-MT) in\nreal-world scenarios. kNN-MT is a\nretrieval-augmented framework that com-\nbines the advantages of parametric mod-\nels with non-parametric datastores built us-\ning a set of parallel sentences. Previous\nstudies have primarily focused on evaluat-\ning the model using only the B LEU met-\nric and have not tested kNN-MT in real-\nworld scenarios. Our study aims to fill this\ngap by conducting a comprehensive analy-\nsis on various datasets comprising different\nlanguage pairs and different domains, us-\ning multiple automatic metrics and expert-\nevaluated Multidimensional Quality Met-\nrics (MQM). We compare kNN-MT with\ntwo alternate strategies: fine-tuning all the\nmodel parameters and adapter-based fine-\ntuning. Finally, we analyze the effect of the\ndatastore size on translation quality, and\nwe examine the number of entries neces-\nsary to bootstrap and configure the index.\n1 Introduction\nThe remarkable advances in neural models have\nbrought significant progress in the field of machine\ntranslation (Sutskever et al., 2014; Bahdanau et al.,\n2015; Vaswani et al., 2017). However, current sys-\ntems rely heavily on a fully-parametric approach,\nwhere the entire training data is compressed into\n∗Equal contribution.\nContact: {pedro.martins, joao.alves }@unbabel.com.\n∗© 2023 The authors. This article is licensed under a Creative\nCommons 4.0 licence, no derivative works, attribution, CC-\nBY-ND.the model parameters. This can lead to inade-\nquate translations when encountering rare words\nor sentences outside of the initial training do-\nmain (Koehn and Knowles, 2017), requiring sev-\neral stages of fine-tuning to adapt to data drift or to\nnew domains.\nBy combining the advantages of parametric\nmodels with non-parametric databases built from\nparallel sentences, retrieval-augmented models\nshowed to be a promising solution, particularly\nin domain adaptation scenarios (Gu et al., 2018;\nZhang et al., 2018; Bapna and Firat, 2019; Meng\net al., 2021; Zheng et al., 2021; Jiang et al., 2021;\nMartins et al., 2022a; Martins et al., 2022b).\nOne notable example is the k-Nearest Neighbor\nMachine Translation model ( kNN-MT) (Khandel-\nwal et al., 2021), known for its simplicity and very\npromising results. 
The model first creates a token-\nlevel datastore using parallel sentences, and then it\nretrieves similar examples from the database dur-\ning inference, enhancing the generation process\nvia interpolation of probability distributions.\nHowever, despite its potential, the kNN-MT\nmodel has yet to be tested in real-world scenar-\nios. Previous studies have primarily focused on\nevaluating it using only the B LEU metric, which\ncorrelates poorly with human judgments. In or-\nder to gain a deeper understanding of when and\nhow kNN-MT can be effective, we conduct a\nthorough analysis on various datasets which com-\nprise 4 different language pairs and 3 different do-\nmains, using B LEU (Papineni et al., 2002; Post,\n2018), C OMET (Rei et al., 2020), and Multidi-\nmensional Quality Metrics (MQM) – quality as-\nsessments obtained from the identification of error\nspans in translation outputs by experts (Lommel et\nal., 2014; Freitag et al., 2021).\nTo sum up, our main contributions are:\nParametric\ncomponentNeighbors\nSoftmax\nInterpolationNN\nDatastore0\n1\n2\nk - 1...Figure 1: Diagram of the kNN-MT model.\n• We compare using kNN-MT with directly us-\ning a pre-trained multilingual model, fine-\ntuning all the model parameters, and with\nadapter-based fine-tuning, reporting results in\nseveral automatic metrics.\n• We analyze the effect of the datastore size\non the quality of kNN-MT’s translations and\nexamine the number of entries necessary to\nbootstrap and configure the datastore’s index.\n• We perform MQM evaluation of the transla-\ntions generated by a pre-trained model with\nand without retrieval, and by a fully fine-\ntuned model with and without retrieval.\n2k-Nearest Neighbor Machine\nTranslation\nIn machine translation, the goal is to take a sen-\ntence or document in a source language, repre-\nsented as x= [x1, . . . , x L], and generate a cor-\nresponding translation in a target language, rep-\nresented as y= [y1, . . . , y N]. This is typi-\ncally achieved using a fully-parametric sequence-\nto-sequence model (Sutskever et al., 2014; Bah-\ndanau et al., 2015; Vaswani et al., 2017). In these\nmodels, the encoder takes in the source sentence\nand outputs a set of hidden states. The decoder\nthen generates the target translation one token at a\ntime by attending to these hidden states and out-\nputting a probability distribution over the vocab-\nulary for each step, pNMT(yt|y<t,x). Finally, a\nsearch procedure, such as beam search (Reddy,\n1977), is applied using these probability distribu-\ntions to generate the final translation.\nThe k-nearest neighbor machine translation\nmodel ( kNN-MT) (Khandelwal et al., 2021), il-\nTranslation contexts \n Targets \n \nJ'ai été à Paris. I have\n \nJ'avais été à la maison. I \n \nJ'apprécie l’été. I enjoy\n \n...\n \nJ'ai ma propre chambre. been \n had \n summer \n ... \n have ... been \n had \n summer \n ... \n haveDatastore \nKeys Values \n Figure 2: Diagram of the kNN-MT datastore.\nlustrated in Figure 1, is a retrieval-augmented\nmodel. It combines a standard sequence-to-\nsequence model as the one described above, with\nan approximate nearest neighbor retrieval mecha-\nnism, that allows the model to access a datastore\nof examples at inference time.\n2.1 Building the Datastore\nBuilding kNN-MT’s datastore, D, requires a par-\nallel corpus, S, with the desired source and tar-\nget languages, process illustrated in Figure 2. 
The\ndatastore is a key-value memory, where each key is\nthe decoder’s output representation of the context\n(source and ground-truth translation until current\nstep), f(x,y<t)∈Rd. The value is the corre-\nsponding target token yt∈ V:\nD={(f(x,y<t), yt)∀t|(x,y)∈ S} .(1)\nTherefore, to construct the datastore, we simply\nneed to perform force-decoding on the parallel cor-\npusSand store the context vector representations\nand their corresponding ground-truth target tokens.\nSource Reference\nEn-Tr The Company has a 65+ year track record in sup-\nplying high quality pharmaceutical products across\noral solid and liquid forms.S ¸irket, oral katı ve sıvı formlarda y ¨uksek kaliteli\nilac ¸ ¨ur¨unleri tedarikinde 65 yılı as ¸kın gec ¸mis ¸e\nsahiptir.\nEn-Ko A South Korean detective looks into the reason for\nhis counterparts visit.ᄂ ᅡ ᆷ한ᄋ ᅴ형ᄉ ᅡ는ᄀ ᅳᄀ ᅡᄂ ᅡ ᆷ한ᄋ ᅦᄑ ᅡ견된ᄋ ᅵᄋ ᅲ를ᄋ ᅡ ᆯᄋ ᅡ\nᄂ ᅢᄀ ᅩᄌ ᅡ한ᄃ ᅡ.\nEn-De (1) When I track your order it seems like it is lost in\ntransit, I am so sorry about this.Wenn ich Ihre Bestellung schicke, scheint es, als ob\nsie beim Versandverfahren verloren gegangen ist.\nEs tut mir sehr leid.\nEn-De (2) I have put the request in to cancel the order. Ich habe um eine Stornierung der Bestellung\ngebeten.\nEn-Fr Sorry to hear about your domains, you can move\nthem, so we can look at that together.D´esol´e d’apprendre ce qui s’est pass ´e pour vos do-\nmaines, vous pouvez les d ´eplacer, afin que nous\npuissions examiner cela ensemble.\nTable 1: Datasets translation examples.\n2.2 Searching for k-NN\nTo find the closest examples in the datastore, the\nstandard approach is to use a library for efficient\nsimilarity search such as FAISS (Johnson et al.,\n2019) to perform k-nearest neighbor search. To do\nthis, a searchable index that encapsulates the datas-\ntore vectors must first be created. Since exact kNN\nsearch is computationally expensive, an approxi-\nmate kNN search is performed by segmenting the\ndatastore. This can be done by defining V oronoi\ncells in the d-dimensional space, which are defined\nby a centroid, and assigning each datastore key to\none of these cells using k-means clustering (Mac-\nQueen, 1967). Then, during inference, the model\nsearches the index hierarchically to approximately\nretrieve the set of knearest neighbors N.\n2.3 Combining kNN with the NMT model\nAfter retrieving the knearest neighbors, we need\na way to leverage this information. In kNN-MT\nthis is done by computing a probability distribu-\ntion based on the neighbors’ values, which is then\ncombined with the parametric component’s distri-\nbution, at each step of the generation.\nThe retrieval distribution, pkNN(yt|y< t,x),\nis calculated using the neighbors’ distance\nto the current decoder’s output representation,\nd(f(x,y< t),·):\npkNN(yt|y<t,x) = (2)P\n(kj,vj)∈N 1yt=vjexp (−d(kj,f(x,y<t))/T)\nP\n(kj,vj)∈Nexp (−d(kj,f(x,y<t))/T),\nwhere Tis the softmax temperature, kjdenotes the\nkey of the jthneighbor and vjits value.Finally, the retrieval distribution,\npNMT(yt|y<t,x)and the parametric compo-\nnent distribution, pkNN(yt|y<t,x), are combined,\nby performing interpolation, to obtain the final dis-\ntribution, which is used to generate the translation\nthrough beam search:\np(yt|y<t,x) = (1 −λ)pNMT(yt|y<t,x)(3)\n+λ pkNN(yt|y<t,x),\nwhere λ∈[0,1]is a hyperparameter that controls\nthe weight given to the two distributions. 
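Equations (2) and (3) amount to a softmax over the negative, temperature-scaled neighbor distances, aggregated per retrieved target token and then mixed with the decoder's distribution. A minimal PyTorch sketch of this step, with illustrative tensor names rather than the authors' actual implementation:

```python
import torch

def knn_interpolate(p_nmt, neighbor_dists, neighbor_values, lam, temperature):
    """Combine the NMT distribution with the kNN retrieval distribution.

    p_nmt:           (V,) probabilities from the parametric decoder
    neighbor_dists:  (k,) distances d(k_j, f(x, y_<t)) of the retrieved keys
    neighbor_values: (k,) long tensor of the retrieved target-token ids v_j
    """
    # Eq. (2): weight each neighbor by exp(-d/T), normalized over all k,
    # and sum the weights of neighbors that share the same target token.
    weights = torch.softmax(-neighbor_dists / temperature, dim=0)
    p_knn = torch.zeros_like(p_nmt)
    p_knn.scatter_add_(0, neighbor_values, weights)

    # Eq. (3): interpolate the two distributions with coefficient lambda.
    return (1.0 - lam) * p_nmt + lam * p_knn
```

At each decoding step this interpolated distribution simply replaces the parametric one inside beam search; setting lam to 0 recovers the plain NMT model.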
This in-\nterpolation allows the model to benefit from the\nstrengths of both the parametric component and\nthe retrieval component.\n3 Experimental Settings\nIn order to analyze how kNN-MT performs in real-\nworld scenarios, we performed experiments using\ndatasets from several domains and different lan-\nguage pairs (as described in §3.1). We compared\nthe results with that of a pre-trained multilingual\nmodel (referred to as the base model; see §3.2),\nfine-tuning all the parameters of the base model (as\ndiscussed in §3.3), and using adapter-based fine-\ntuning (as described in §3.4). The specific settings\nofkNN-MT are detailed in §3.5 and the automatic\nmetrics employed are described in §3.6.\n3.1 Datasets\nIn our experiments, we use 5 proprietary datasets\nacross 4 different language pairs: English-\nTurkish (En-Tr), English-Korean (En-Ko),\nEnglish-German (En-De (1) and En-De (2)), and\nEnglish-French (En-Fr). The En-Tr and En-Ko\ndatasets are composed of sentences related to press\nEn-Tr En-ko En-De (1) En-De (2) En-Fr\nk λ T k λ T k λ T k λ T k λ T\nkNN-MT 16 0.4 10 16 0.5 10 4 0.5 100 4 0.5 10 4 0.6 10\nFine-tuned (Adapters) + kNN-MT 16 0.3 10 16 0.3 10 4 0.3 100 8 0.3 10 8 0.4 10\nFine-tuned (Full) + kNN-MT 8 0.5 100 4 0.3 10 4 0.3 10 16 0.2 100 16 0.3 1\nTable 2: Hyperparameters values: number of neighbors k, interpolation coefficient λ, and retrieval softmax temperature T.\nreleases and media descriptions, respectively. The\nEn-De (1), En-De (2) and En-Fr datasets belong\nto the customer service domain. We provide some\ntranslation examples in Table 1 as well as the data\nsplits for each dataset in Table 3.\nTrain set Validation Set Test set\nEn-Tr 10,281 944 492\nEn-Ko 197,945 973 496\nEn-De (1) 10,599 1000 2000\nEn-De (2) 556,972 1000 2000\nEn-Fr 1,353,257 1000 2000\nTable 3: Number of sentences in each dataset split.\n3.2 Base Model\nThe mBART50 model (Tang et al., 2020) serves as\nthe base model for our study. Its “one-to-many”\nvariation is pre-trained to translate English into 49\nother languages, including the languages used in\nour study. The model architecture is a transformer-\nbased encoder-decoder, with 12 layers in the en-\ncoder, 12 layers in the decoder, a hidden layer di-\nmension of 1024 and 16 heads, encompassing a\ntotal of approximately 610 million parameters. It\nwas first trained on a denoising task using mono-\nlingual data from 25 languages (mBART; (Liu et\nal., 2020)), and then further pre-trained on a larger\nset of monolingual data from 50 languages. It was\nthen fine-tuned on parallel data for all 50 languages\nto adapt the model to the machine translation task.\n3.3 Fine-tuning\nWe compare applying kNN-MT with fine-tuning\nall the base model parameters. To do so, we fine-\ntune the base model for each dataset, using its\ntraining set, the Adam optimizer with a learning\nrate of 3×10−5, a batch size of 16, and gradient ac-\ncumulation of 8 steps. 
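The paper does not state which training framework was used for this fine-tuning; purely as a sketch, the reported optimiser settings would map onto Hugging Face Seq2SeqTrainingArguments roughly as follows (the output directory is a placeholder):

```python
from transformers import Seq2SeqTrainingArguments

# Only the optimiser settings reported above are filled in; the output
# directory is a placeholder, and early stopping on the validation set
# (described next) would be configured on top of this.
training_args = Seq2SeqTrainingArguments(
    output_dir="mbart50-finetuned",
    learning_rate=3e-5,
    per_device_train_batch_size=16,
    gradient_accumulation_steps=8,
)
```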
We perform early stopping\non the validation set, with a patience of 5check-\npoints, being the validation step computed every\n100 steps for the En-Tr and En-De (1) datasets,\nevery 500steps for the En-Ko dataset, and every\n1000 steps for the En-De (2) and En-Fr datasets.3.4 Adapter-based Fine-tuning\nWe also explore the use of adapter-based fine-\ntuning as a method of light-weight adaptation.\nAdapters (Houlsby et al., 2019) are small resid-\nual layers inserted into the middle of a pre-trained\nmodel and are used to adapt the model to a\nnew task, in this case, adapting the model to the\ndataset’s domain. As it is possible to incorporate\nadapters corresponding to different datasets to the\nsame model, this method is an efficient solution in\nterms of model parameters, since we only need to\nsave one set of parameters for multiple datasets.\nFor each domain we add adapters with 12.5M pa-\nrameters, approximately 2% of the total number of\nparameters of the pretrained model (610M). To im-\nplement it, we employ the same hyper-parameters\nand training settings as previously described in\nthe methodology section for fine-tuning the entire\nmodel. This allows a fair comparison of the effec-\ntiveness of adapter-based fine-tuning versus tradi-\ntional fine-tuning methods.\n3.5 kNN-MT\nFor the kNN-MT we build the token-based\ndatastores using the training sets’ parallel sen-\ntences. To set the parameters for kNN-MT,\nwe conduct a grid search on the validation set\nfor the interpolation coefficient λ, the temper-\nature T, and the number of retrieved neigh-\nbors k. The grid search is performed on\nλ∈ {0.1,0.2,0.3,0.4,0.5,0.6,0.7,0.8},T∈\n{1,10,100}, and k∈ {4,8,16}. The chosen val-\nues for each dataset are listed in Table 2. To per-\nform the kNN search, we use the FAISS library\n(Johnson et al., 2019) with the IVFPQ index and\nset the number of centroids to 2000, the code size\nto 64, and perform the search over 32 partitions.\n3.6 Automatic Metrics\nTo evaluate the model we use two automatic met-\nrics: B LEU (Papineni et al., 2002; Post, 2018) – n-\ngram matching based metric – and C OMET (Rei et\nal., 2020) – metric based on fine-tuned pre-trained\nlanguage models.\nEn-De (1) En-De (2) En-Fr Average\nBLEU COMET BLEU COMET BLEU COMET BLEU COMET\nBase Model 42.6 0.534 38.0 0.492 49.1 0.716 43.2 0.581\nkNN-MT 48.0 0.668 49.2 0.673 71.2 0.945 56.1 0.762\nFine-tuned (Adapters) 53.2 0.737 53.9 0.720 78.9 1.009 62.0 0.822\nFine-tuned (Full) 53.5 0.742 52.4 0.720 76.8 1.004 61.5 0.822\nFine-tuned (Adapters) + kNN-MT 53.2 0.748 54.7 0.724 78.5 1.014 62.1 0.829\nFine-tuned (Full) + kNN-MT 54.1 0.751 53.2 0.724 77.5 1.011 61.6 0.829\nTable 4: BLEU and C OMET scores on the English-German and English-French customer-service test sets.\nEn-Tr En-Ko Average\nBLEU COMET BLEU COMET BLEU COMET\nBase Model 24.5 0.672 7.9 0.273 16.2 0.473\nkNN-MT 31.1 0.857 19.2 0.545 25.2 0.701\nFine-tuned (Adapters) 33.8 0.912 20.9 0.574 27.4 0.743\nFine-tuned (Full) 35.7 0.931 23.0 0.612 29.4 0.772\nFine-tuned (Adapters) + kNN-MT 35.1 0.927 22.6 0.597 28.9 0.762\nFine-tuned (Full) + kNN-MT 36.2 0.956 24.0 0.626 30.1 0.791\nTable 5: BLEU and C OMET scores on the English-Turkish and English-Korean test sets.\n4 Results with Automatic Metrics\nWe report the results of our experiments using au-\ntomatic metrics in Tables 4 and 5, which we dis-\ncuss in the following sections.\n4.1 Does kNN-MT improve the base model’s\nperformance?\nWhen comparing the performance of kNN-MT to\nthe base model (mBART50) using automatic 
met-\nrics, we see that kNN-MT leads to significant im-\nprovements in all datasets. Specifically, by retriev-\ning examples from a datastore, kNN-MT results in\nan average increase of 12.9 B LEU points and 0.181\nCOMET points for the customer service datasets,\nand 9 B LEU points and 0.228 C OMET points for\nthe En-Tr and En-Ko datasets.\n4.2 Is kNN-MT better than fine-tuning?\nWhen comparing with fine-tuning all the model\nparameters or performing adapter-based fine-\ntuning (using each dataset’s training data), kNN-\nMT falls short, according to the automatic metrics.\nHowever, MQM evaluation leads to different con-\nclusions, as we will see in §5.\nOn average, for the customer-service datasets,\nkNN-MT results in a decrease of 5.9 B LEU points\nand 0.060 C OMET points compared to adapter-\nbased fine-tuning and of 5.4 B LEU points and\n0.060 C OMET points compared to fine-tuning the\nentire model. For the remaining datasets, kNN-\nMT shows an average decrease of 2.2 B LEU points\nand 0.042 C OMET points compared to adapter-based fine-tuning and of 4.2 B LEU points and\n0.071 C OMET points compared to full fine-tuning.\nDespite these findings, applying kNN-MT can be\ncomputationally cheaper, since it reduces the need\nto fine-tune the model, and avoids having different\nmodels (or adapters) for each dataset.\n4.3 Does kNN-MT improve fine-tuned model\nperformance?\nApplying kNN-MT to fine-tuned models results\nin small improvements. On customer-service\ndatasets, it increases B LEU by 0.1 points and\nCOMET by 0.007 points compared to adapter-\nbased fine-tuning and fine-tuning the entire model.\nOn other datasets, kNN-MT shows an average\nincrease of 1.5 B LEU points and 0.019 C OMET\npoints compared to adapter-based fine-tuning, and\n0.7 B LEU points and 0.019 C OMET points com-\npared to fine-tuning the entire model.\n4.4 How does the datastore size influences the\ntranslation quality?\nWe analyzed the effect of the number of entries\nin the datastore on the translation quality of the\nmodel by using the base model (mBART50) ex-\ntended with kNN-MT on the En-De (2) and En-Fr\ntest sets. We calculated the C OMET score for dif-\nferent datastore sizes and plotted the results in Fig-\nure 3. The results show that, for both datasets, as\nthe number of entries in the datastore increases, the\nCOMET score also improves. The rate of improve-\nment is steepest for small datastore sizes but still\npresent as the size increases. 
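Both this analysis and §4.5 below hinge on the FAISS index that wraps the datastore. A sketch of building it with the settings reported in §3.5 (2000 centroids, 64-byte PQ codes, search over 32 partitions); the key dimensionality and file name are assumptions:

```python
import faiss
import numpy as np

d = 1024                                  # assumed: mBART50 hidden size
keys = np.load("datastore_keys.npy").astype("float32")   # placeholder path

quantizer = faiss.IndexFlatL2(d)
index = faiss.IndexIVFPQ(quantizer, d, 2000, 64, 8)   # nlist=2000, 64-byte codes

index.train(keys)       # the "index training" step studied in Section 4.5
index.add(keys)
index.nprobe = 32       # search over 32 Voronoi cells at query time

query = keys[:1]                          # one decoder state, for illustration
dists, ids = index.search(query, 16)      # retrieve the k=16 nearest neighbors
```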
Additionally, we ob-\n0.0 0.2 0.4 0.6 0.8 1.0 1.2\nDatastore size 1e70.450.500.550.600.650.70COMET score\nkNN-MT\nBase Model\n0.0 0.5 1.0 1.5 2.0 2.5 3.0 3.5\nDatastore size 1e70.700.750.800.850.900.951.00COMET score\nkNN-MT\nBase ModelFigure 3: COMET scores when varying the number of entries on the datastore for the En-De (2) and En-Fr datasets, respectively.\n5000 100000 200000 250000\nNumber of entries used to train FAISS index0.5000.5250.5500.5750.6000.6250.6500.6750.700COMET score\nkNN-MT\nBase Model\n5000 200000 500000 1000000\nNumber of entries used to train FAISS index0.450.500.550.600.650.70COMET score\nkNN-MT\nBase Model\nFigure 4: COMET scores when varying the number of entries used to train the FAISS index for the En-De (1) and En-De (2)\ndatasets, respectively.\nserved that even using small datastores (250,000\nand 1,000,000 entries for the En-De (2) and En-\nFr datasets) already leads to a substantial improve-\nment when compared to the base model.\n4.5 How many entries are needed to train\ndatastore index?\nWe also investigated the optimal number of entries\nto use for training the FAISS index for hierarchical\napproximate k-nearest neighbor search. We evalu-\nated the performance of the kNN-MT model on the\nEn-De (1) and En-De (2) datasets by measuring the\nCOMET score using different numbers of entries\nfor training the index. The results, as shown in\nFigure 4, indicate that a relatively small number of\nentries is sufficient for achieving the best C OMET\nscores. For example, in the left plot, we can see\nthat using only 2,000 or 5,000 entries leads to a re-\nduction in C OMET score, but increasing the num-\nber of entries to 10,000 results in a similar score\nas using the entire number of entries (261,669).\nSimilarly, in the right plot, we see that even when\nusing only 5,000 entries, the translation quality isalready comparable to using the entire number of\nentries (1,000,000). This suggests that it is possi-\nble to create a datastore and train its index with a\nlimited amount of data, and then add more entries\nas more data becomes available.\n5 Results with MQM Assessments\nTo complement this analysis, we evaluated the per-\nformance of the pre-trained model with and with-\nout retrieval, as well as the fully fine-tuned model\nwith and without retrieval using Multidimensional\nQuality Metrics (MQM) – quality assessments ob-\ntained from the identification of error spans in\ntranslation outputs (Lommel et al., 2014; Freitag\net al., 2021). To conduct this assessment, we had\nprofessional linguists assessing the models’ trans-\nlations for the En-Ko, En-De (2), and En-Fr test\nsets. We asked the annotators to identify all er-\nrors and independently label them with an error\ncategory (accuracy, fluency, and style, each with\na specific set of subcategories) and a severity level\n(neutral, minor, major, and critical).\nTable 6 presents the MQM results while Fig-\nEn-De (2) En-Fr En-Ko\nMINOR MAJOR CRITICAL MQM M INOR MAJOR CRITICAL MQM M INOR MAJOR CRITICAL MQM\nBase Model 1301 896 439 61.24 499 237 266 88.42 713 185 28 75.23\nkNN-MT 928 417 75 86.22 335 116 137 93.77 527 95 6 85.72\nFine-tuned 982 471 72 85.03 377 131 3 97.14 513 101 3 85.56\nFine-tuned + kNN-MT 800 391 62 88.03 363 118 5 96.87 466 99 5 85.97\nTable 6: Error severity counts and MQM scores.\nFigure 5: Error typology and severity level breakdown for the En-De (2) test set.\nures 5, 6, and 7 provide a breakdown of the er-\nror typology distribution. 
The MQM assessment\nindicates that both fine-tuning and kNN-MT sig-\nnificantly improve translation performance when\ncompared to the base model, resulting in a substan-\ntial increase in MQM score and a notable reduction\nin critical, major, and minor errors. Interestingly,\naccording to the MQM scores and in contrast to the\nautomatic metric scores, kNN-MT slightly outper-\nforms fine-tuning in two out of the three datasets.\nMoreover, in the customer service datasets (En-Fr\nand En-De (2)), kNN-MT proved to be useful in\nmitigating source sentence errors, which are preva-\nlent in this domain and can adversely impact the\ntranslation quality (Gonc ¸alves et al., 2022). Addi-\ntionally, combining kNN-MT with fine-tuning re-\nsults in marginal improvements for two datasets.6 Related Work\nIn recent years, retrieval-augmented models have\ngained attention for their effectiveness in vari-\nous text generation tasks. One such model is\nthek-nearest neighbor language model ( kNN-\nLM; (Khandelwal et al., 2019)), which combines\na parametric model with a retrieval component.\nOther works have proposed methods to integrate\nthe retrieved tokens using gating mechanisms (Yo-\ngatama et al., 2021) or cross-attention (Borgeaud\net al., 2021), and techniques to improve the ef-\nficiency of the kNN-LM by performing datastore\npruning, adaptive retrieval (He et al., 2021) and\nadding pointers to the next token on the original\ncorpus to the datastore entries (Alon et al., 2022).\nRetrieval-augmented models have also been ex-\nplored in the field of machine translation. Ear-\nFigure 6: Error typology and severity level breakdown for the En-Fr test set.\nFigure 7: Error typology and severity level breakdown for the En-Ko test set.\nlier works have proposed using a search engine\nto retrieve similar sentence pairs and incorporat-\ning them through shallow and deep fusion (Gu et\nal., 2018) or attention mechanisms (Bapna and Fi-\nrat, 2019), or retrieving n-grams to up-weight to-\nken probabilities (Zhang et al., 2018). More re-\ncently, the kNN-MT model has been proposed as\nan adaptation of the kNN-LM for machine trans-\nlation (Khandelwal et al., 2021), and was then ex-\ntended with a network that determines the num-\nber of retrieved tokens to consider (Zheng et al.,\n2021). As kNN-MT can be up to two orders of\nmagnitude slower than a fully-parametric model,\n(Meng et al., 2021) and (Wang et al., 2021) pro-\nposed the Fast and Faster kNN-MT, in which the\nmodel has a higher decoding speed by creating a\ndifferent datastore based on the source sentence\nfor each example. (Martins et al., 2022a) proposed\nefficient kNN-MT by adapting the methods intro-\nduced by (He et al., 2021) to machine translation\nand introducing a retrieval distributions cache to\nspeed-up decoding. (Martins et al., 2022b) pro-\nposed retrieving chunks of tokens instead of single\ntokens. However, most of these methods have been\nevaluated on a limited number of datasets and lan-\nguage pairs, and using only the B LEU metric. Our\npaper addresses this gap by evaluating kNN-MT\nacross five “real-world” datasets and four language\npairs using C OMET and MQM evaluation.\n7 Conclusions\nIn this paper, we conducted a study to assess the\nperformance k-Nearest Neighbor Machine Trans-\nlation ( kNN-MT) in real-world scenarios. 
To do\nso, we augmented a pre-trained multilingual model\nwith kNN-MT’s retrieval component and com-\npared it against using the pre-trained model, per-\nforming fine-tuning, and doing adapter-based fine-\ntuning on five datasets comprising four language\npairs and three different domains. The results on\nautomatic metrics, C OMET and B LEU, revealed\nthat while kNN-MT significantly improves the\ntranslation quality over the pre-trained language\nmodel, it falls short when compared to fine-tuning\nand adapter-based fine-tuning. Furthermore, we\nobserved that incorporating kNN-MT’s retrieval\ncomponent into a fine-tuned model resulted in\nsmall improvements. We also assessed the kNN-\nMT model using Multidimensional Quality Met-\nrics (MQM) by having professional linguists eval-\nuate the translations for the En-Ko, En-De (2), andEn-Fr test sets. The MQM scores revealed a signif-\nicant improvement in the kNN-MT model over the\nbase model, with kNN-MT slightly outperform-\ning fine-tuning in two out of three language pairs.\nCombining kNN-MT with a fine-tuned model re-\nsulted in minor improvements. Additionally, we\nanalyzed the effect of the number of entries in the\ndatastore on translation quality and the number of\nentries required to train the FAISS index. Our\nfindings suggest that having larger datastores im-\nproves translation quality, with the improvement\nsteepness being higher when increasing the size of\na small datastore. The number of entries used to\ntrain the FAISS index has a small impact on the\nfinal translation quality, which is relevant when\ncreating a dynamic datastore that can be updated\nwhen more data becomes available.\nAcknowledgements\nThis work was supported by EU’s Horizon Eu-\nrope Research and Innovation Actions (UTTER,\ncontract 101070631), by the Portuguese Recovery\nand Resilience Plan through project C645008882-\n00000055 (NextGenAI, Center for Responsible\nAI), and by the Fundac ¸ ˜ao para a Ci ˆencia e Tec-\nnologia through contract UIDB/50008/2020.\nReferences\n[Alon et al.2022] Alon, Uri, Frank F Xu, Junxian He,\nSudipta Sengupta, Dan Roth, and Graham Neubig.\n2022. Neuro-Symbolic Language Modeling with\nAutomaton-augmented Retrieval. In Proc. ICML .\n[Bahdanau et al.2015] Bahdanau, Dzmitry, Kyunghyun\nCho, and Yoshua Bengio. 2015. Neural machine\ntranslation by jointly learning to align and translate.\nInProc. ICLR .\n[Bapna and Firat2019] Bapna, Ankur and Orhan Firat.\n2019. Non-Parametric Adaptation for Neural Ma-\nchine Translation. In Proc. NAACL .\n[Borgeaud et al.2021] Borgeaud, Sebastian, Arthur\nMensch, Jordan Hoffmann, Trevor Cai, Eliza\nRutherford, Katie Millican, George van den Driess-\nche, Jean-Baptiste Lespiau, Bogdan Damoc, Aidan\nClark, et al. 2021. Improving language models by\nretrieving from trillions of tokens.\n[Freitag et al.2021] Freitag, Markus, George Foster,\nDavid Grangier, Viresh Ratnakar, Qijun Tan, and\nWolfgang Macherey. 2021. Experts, Errors, and\nContext: A Large-Scale Study of Human Evaluation\nfor Machine Translation. Transactions of the Asso-\nciation for Computational Linguistics .\n[Gonc ¸alves et al.2022] Gonc ¸alves, Madalena, Marianna\nBuchicchio, Craig Stewart, Helena Moniz, and Alon\nLavie. 2022. Agent and User-Generated Content\nand its Impact on Customer Support MT. In Proc.\nEAMT .\n[Gu et al.2018] Gu, Jiatao, Yong Wang, Kyunghyun\nCho, and Victor OK Li. 2018. Search engine guided\nneural machine translation. In Proc. AAAI .\n[He et al.2021] He, Junxian, Graham Neubig, and Tay-\nlor Berg-Kirkpatrick. 
2021. Efficient Nearest\nNeighbor Language Models. In Proc. EMNLP .\n[Houlsby et al.2019] Houlsby, Neil, Andrei Giurgiu,\nStanislaw Jastrzebski, Bruna Morrone, Quentin\nDe Laroussilhe, Andrea Gesmundo, Mona Attariyan,\nand Sylvain Gelly. 2019. Parameter-Efficient Trans-\nfer Learning for NLP. In Proc. ICML .\n[Jiang et al.2021] Jiang, Qingnan, Mingxuan Wang, Jun\nCao, Shanbo Cheng, Shujian Huang, and Lei Li.\n2021. Learning Kernel-Smoothed Machine Trans-\nlation with Retrieved Examples. In Proc. EMNLP .\n[Johnson et al.2019] Johnson, Jeff, Matthijs Douze, and\nHerv ´e J´egou. 2019. Billion-scale similarity search\nwith gpus. IEEE Transactions on Big Data .\n[Khandelwal et al.2019] Khandelwal, Urvashi, Omer\nLevy, Dan Jurafsky, Luke Zettlemoyer, and Mike\nLewis. 2019. Generalization through Memoriza-\ntion: Nearest Neighbor Language Models. In Proc.\nICLR .\n[Khandelwal et al.2021] Khandelwal, Urvashi, Angela\nFan, Dan Jurafsky, Luke Zettlemoyer, and Mike\nLewis. 2021. Nearest neighbor machine translation.\nInProc. ICLR .\n[Koehn and Knowles2017] Koehn, Philipp and Rebecca\nKnowles. 2017. Six Challenges for Neural Machine\nTranslation. In Proceedings of the First Workshop\non Neural Machine Translation .\n[Liu et al.2020] Liu, Yinhan, Jiatao Gu, Naman Goyal,\nXian Li, Sergey Edunov, Marjan Ghazvininejad,\nMike Lewis, and Luke Zettlemoyer. 2020. Multi-\nlingual Denoising Pre-training for Neural Machine\nTranslation. Transactions of the Association for\nComputational Linguistics .\n[Lommel et al.2014] Lommel, Arle, Hans Uszkoreit,\nand Aljoscha Burchardt. 2014. Multidimensional\nquality metrics (MQM): A framework for declaring\nand describing translation quality metrics. Revista\nTradum `atica: tecnologies de la traducci ´o.\n[MacQueen1967] MacQueen, J. 1967. Classification\nand analysis of multivariate observations. In 5th\nBerkeley Symp. Math. Statist. Probability .\n[Martins et al.2022a] Martins, Pedro Henrique, Zita\nMarinho, and Andr ´e F. T. Martins. 2022a. Efficient\nMachine Translation Domain Adaptation. In Proc.\nACL 2022 Workshop on Semiparametric Methods in\nNLP: Decoupling Logic from Knowledge .[Martins et al.2022b] Martins, Pedro Henrique, Zita\nMarinho, and Andr ´e FT Martins. 2022b. Chunk-\nbased Nearest Neighbor Machine Translation. In\nProc. EMNLP .\n[Meng et al.2021] Meng, Yuxian, Xiaoya Li, Xiayu\nZheng, Fei Wu, Xiaofei Sun, Tianwei Zhang, and\nJiwei Li. 2021. Fast Nearest Neighbor Machine\nTranslation.\n[Papineni et al.2002] Papineni, Kishore, Salim Roukos,\nTodd Ward, and Wei-Jing Zhu. 2002. Bleu: a\nmethod for automatic evaluation of machine trans-\nlation. In Proc. ACL .\n[Post2018] Post, Matt. 2018. A Call for Clarity in Re-\nporting BLEU Scores. In Proc. Third Conference on\nMachine Translation .\n[Reddy1977] Reddy, Raj. 1977. Speech understanding\nsystems: summary of results of the five-year research\neffort at Carnegie-Mellon University.\n[Rei et al.2020] Rei, Ricardo, Craig Stewart, Ana C Far-\ninha, and Alon Lavie. 2020. COMET: A Neural\nFramework for MT Evaluation. In Proc. EMNLP .\n[Sutskever et al.2014] Sutskever, Ilya, Oriol Vinyals,\nand Quoc V Le. 2014. Sequence to sequence learn-\ning with neural networks. In Proc. NeurIPS .\n[Tang et al.2020] Tang, Yuqing, Chau Tran, Xian Li,\nPeng-Jen Chen, Naman Goyal, Vishrav Chaudhary,\nJiatao Gu, and Angela Fan. 2020. 
Multilingual\nTranslation with Extensible Multilingual Pretraining\nand Finetuning.\n[Vaswani et al.2017] Vaswani, Ashish, Noam Shazeer,\nNiki Parmar, Jakob Uszkoreit, Llion Jones, Aidan N\nGomez, Łukasz Kaiser, and Illia Polosukhin. 2017.\nAttention is all you need. In Proc. NeurIPS .\n[Wang et al.2021] Wang, Shuhe, Jiwei Li, Yuxian\nMeng, Rongbin Ouyang, Guoyin Wang, Xiaoya Li,\nTianwei Zhang, and Shi Zong. 2021. Faster Nearest\nNeighbor Machine Translation.\n[Yogatama et al.2021] Yogatama, Dani, Cyprien\nde Masson d’Autume, and Lingpeng Kong. 2021.\nAdaptive Semiparametric Language Models. Trans-\nactions of the Association for Computational\nLinguistics , 9:362–373.\n[Zhang et al.2018] Zhang, Jingyi, Masao Utiyama, Ei-\nichiro Sumita, Graham Neubig, and Satoshi Naka-\nmura. 2018. Guiding Neural Machine Translation\nwith Retrieved Translation Pieces. In Proc. NAACL .\n[Zheng et al.2021] Zheng, Xin, Zhirui Zhang, Junliang\nGuo, Shujian Huang, Boxing Chen, Weihua Luo,\nand Jiajun Chen. 2021. Adaptive Nearest Neighbor\nMachine Translation.", "main_paper_content": null }
{ "decision": "Unknown", "reviews": [] }
0
0
[]
[]
{ "id": "FmvE4rjJl_iA", "year": null, "venue": "EAMT 2015", "pdf_link": "https://aclanthology.org/W15-4932.pdf", "forum_link": "https://openreview.net/forum?id=FmvE4rjJl_iA", "arxiv_id": null, "doi": null }
{ "title": "HimL (Health in my Language)", "authors": [ "Barry Haddow" ], "abstract": null, "keywords": [], "raw_extracted_content": "HimL (Health in my Language)\nFunding agency: European Union\nFunding call identification: H2020-ICT-2014-1\nType of project: Innovation Action\nProject ID number: 644402 \nhttp://www.himl.eu\nList of partners\nUniversity of Edinburgh, United Kingdom (coordinator) \nCharles University, Prague, Czech Republic\nLMU Munich, Germany\nLingea, Czech Republic\nNHS 24, United Kingdom\nCochrane, United Kingdom\nProject duration: February 2015 — January 2018\nSummary\nTo an ever-increasing extent, web-based services are providing a frontline for healthcare in-\nformation in Europe. They help citizens find answers to their questions and help them under-\nstand and find the local services they need. However, due to the number of languages spoken\nin Europe, and the mobility of its population, there is a high demand for these services to be\navailable in many languages. In order to satisfy this demand, we need to rely on automatic\ntranslation, as it is infeasible to manually translate into all languages requested. The aim of\nHimL is to use recent advances in machine translation to create and deploy a system for the\nautomatic translation of public health information, with a special focus on meaning preserva-\ntion. In particular, we will include recent work on domain adaptation, translation into morpho-\nlogically rich languages, terminology management, and semantically enhanced machine trans-\nlation to build reliable machine translation for the health domain. The aim will be to create us -\nable, reliable, fully automatic translation of public health information, initially testing with\ntranslation from English into Czech, Polish, Romanian and German. In the HimL project we\nwill iterate cycles of incorporating improvements into the MT systems, with careful evaluation\nand user acceptance testing.\n214", "main_paper_content": null }
{ "decision": "Unknown", "reviews": [] }
0
0
[]
[]
{ "id": "mbjS_XRngI", "year": null, "venue": "EAMT 2023", "pdf_link": "https://aclanthology.org/2023.eamt-1.4.pdf", "forum_link": "https://openreview.net/forum?id=mbjS_XRngI", "arxiv_id": null, "doi": null }
{ "title": "Unsupervised Feature Selection for Effective Parallel Corpus Filtering", "authors": [ "Mikko Aulamo", "Ona de Gibert", "Sami Virpioja", "Jörg Tiedemann" ], "abstract": null, "keywords": [], "raw_extracted_content": "Unsupervised Feature Selection for Effective Parallel Corpus Filtering\nMikko Aulamo, Ona de Gibert, Sami Virpioja, J ¨org Tiedemann\nDepartment of Digital Humanities\nUniversity of Helsinki, Helsinki / Finland\n{name.surname }@helsinki.fi\nAbstract\nThis work presents an unsupervised\nmethod of selecting filters and threshold\nvalues for the OpusFilter parallel corpus\ncleaning toolbox. The method clusters\nsentence pairs into noisy and clean cate-\ngories and uses the features of the noisy\ncluster center as filtering parameters.\nOur approach utilizes feature importance\nanalysis to disregard filters that do not\ndifferentiate between clean and noisy\ndata. A randomly sampled subset of a\ngiven corpus is used for filter selection\nand ineffective filters are not run for the\nfull corpus. We use a set of automatic\nevaluation metrics to assess the quality\nof translation models trained with data\nfiltered by our method and data filtered\nwith OpusFilter’s default parameters. The\ntrained models cover English-German and\nEnglish-Ukrainian in both directions. The\nproposed method outperforms the default\nparameters in all translation directions for\nalmost all evaluation metrics.\n1 Introduction\nNeural machine translation (NMT) is dependent\non large parallel text corpora. Available train-\ning data can often be noisy, especially if the data\nis retrieved by the common method of extract-\ning bitexts from web crawls (Espl `a-Gomis et al.,\n2019; Schwenk et al., 2021; Ba ˜n´on et al., 2020).\nTraining NMT on noisy data can be detrimental\nto the translation models. Ensuring that the train-\n© 2023 The authors. This article is licensed under a Creative\nCommons 4.0 licence, no derivative works, attribution, CC-\nBY-ND.ing examples are clean sentence pairs leads to bet-\nter translation quality and more efficient training\n(Khayrallah and Koehn, 2018). If clean paral-\nlel corpora are not readily available, a common\npractice is to refine a noisy corpus by filtering\nout low quality training examples. The amount\nand type of noise varies between different cor-\npora. Selecting the kind of filters that are optimal\nfor cleaning a specific parallel corpus can take a\nlot of trial and error. Several methods and tools\nfor corpus cleaning have been proposed and de-\nveloped (Taghipour et al., 2011; Carpuat et al.,\n2017; Ram ´ırez-S ´anchez et al., 2020). OpusFilter\n(Aulamo et al., 2020) is one such toolkit. It pro-\nvides a selection of configurable filters, but suffers\nfrom the same issue of having to manually choose\nthe filters and their parameters. In this work, we\npropose an unsupervised method of selecting ef-\nfective filters and filtering thresholds based on the\nproperties of a given corpus. Our method automat-\nically generates a filtering configuration file which\nserves as a solid starting point for finding the op-\ntimal settings for an OpusFilter corpus cleaning\npipeline. We assess the proposed method by com-\nparing the translation quality of models trained\nwith data filtered with default parameters from\nOpusFilter and data filtered with autogenerated pa-\nrameters. 
Our implementation of the filter selec-\ntion method is available at https://github.\ncom/Helsinki-NLP/OpusFilter .\n2 Related work\nCorpus cleaning has been a part of training\npipelines since the statistical machine translation\n(SMT) era. Some of the most common and most\nstraightforward methods include sentence length\nbased methods, for example removing too short\nand too long sentences and sentence pairs where\nthe ratio of source and target lengths is above a\ngiven threshold. The Moses toolkit (Koehn et al.,\n2007) offers commonly used scripts for this pur-\npose. Taghipour et al. (2011) map sentence pairs\ninto an N-dimensional space and filter out the out-\nliers. Cui et al. (2013) propose a graph-based\nrandom walk filtering method which is based on\nthe idea that better sentence pairs lead to better\nphrase extraction and that good sentence pairs con-\ntain more frequent phrase pairs. The Zipporah data\ncleaning system (Xu and Koehn, 2017) maps sen-\ntence pairs into a feature space and uses logistic\nregression to classify good and bad data. As the\nfeatures, they use bag-of-word translation scores\nand n-gram language model scores.\nTraining data quality has a strong effect on NMT\nperformance. Khayrallah and Koehn (2018) study\nseveral types of noise and their impact on trans-\nlation quality. They report that NMT is less ro-\nbust against noisy data than SMT. Rikters (2018)\npoints out common problems in parallel corpora\nthat can result in low quality NMT and provides\nfilters to overcome these issues. These problems\ninclude mismatch of non-alphabetic characters be-\ntween source and target segments, wrong language\nand repeating tokens.\nRam´ırez-S ´anchez et al. (2020) present two tools\nfor more careful corpus cleaning with NMT in\nmind: Bifixer and Bicleaner. Bifixer is a restora-\ntive cleaner; it only removes sentence pairs with\neither side being empty but otherwise it fixes text-\nrelated issues in place. Bifixer corrects char-\nacter encoding and orthography issues, conducts\nre-splitting of the sentences and identifies dupli-\ncates. Bicleaner consists of filtering rules, lan-\nguage model scoring and a classification part. The\nfiltering rules are predefined, but other steps of\nBicleaner require training a language model and\na classifier. However, pretrained models are pro-\nvided for many language pairs.\nOpusFilter (Aulamo et al., 2020) is a config-\nurable parallel corpus cleaning toolbox. OpusFil-\nter provides a variety of data selection, text pro-\ncessing, filtering and classification features that\ncan be combined into a reproducible corpus clean-\ning pipeline. An important step in constructing this\npipeline is to choose which filters to use and with\nwhat parameters. The filters work by producing\na score for a sentence pair and checking whether\nthe score exceeds a threshold value. OpusFilter\ndefines default threshold values for each filter, butthere is no guarantee that these values are optimal\nfor a given corpus and language pair.\nWe propose an unsupervised method to choose\nfilters that are useful in differentiating between\nclean and noisy sentence pairs and to initialize\nthreshold values based on features extracted from\na parallel corpus. The approach consists of cluster-\ning sentence pairs into noisy and clean categories\nand using the features of the noisy cluster center\nas the threshold values. 
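Before the detailed description in Section 3, the whole procedure can be sketched in a few lines of scikit-learn (the toy random score matrix, variable names and sign-adjustment vector below are illustrative; the real features come from OpusFilter scores computed on a corpus sample):

```python
import numpy as np
from sklearn.preprocessing import StandardScaler
from sklearn.cluster import KMeans
from sklearn.ensemble import RandomForestClassifier
from sklearn.inspection import permutation_importance

# One row per sentence pair, one column per filter score (toy placeholder).
scores = np.random.rand(100_000, 10)
# +1 where a high score means "clean", -1 where it means "noisy", so that
# cluster means are comparable across filters.
direction = np.ones(scores.shape[1])

X = StandardScaler().fit_transform(scores)
labels = KMeans(n_clusters=2, init="k-means++", n_init=10,
                random_state=0).fit_predict(X)

# The cluster with the lower direction-adjusted mean score is "noisy";
# its centre in the original score space supplies the filter thresholds.
means = [(scores[labels == c] * direction).mean() for c in (0, 1)]
noisy = int(np.argmin(means))
thresholds = scores[labels == noisy].mean(axis=0)

# Turn the clustering into a classification task to rank the features:
# shuffle one feature at a time and see how much accuracy drops.
clf = RandomForestClassifier(random_state=0).fit(X, labels)
imp = permutation_importance(clf, X, labels, n_repeats=5,
                             random_state=0).importances_mean
keep = imp > 0.1 * imp.mean()     # rejection coefficient of 0.1
```

Only the filters flagged by keep, with the corresponding entries of thresholds, would end up in the generated configuration.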
This method is especially\nuseful in setting initial OpusFilter parameters that\nare adapted to the characteristics of a given corpus.\n3 Method\nOur proposed method of selecting relevant filters\nand useful threshold values for OpusFilter is based\non clustering sentence pairs into clean and noisy\ncategories and using the features of the noisy clus-\nter center as our filtering parameters. To select the\nfilters that are actually useful in detecting noisy\nsentence pairs, we convert the clustering task into\na classification task and find the features that af-\nfect classification accuracy the most. For cluster-\ning, classification and feature importance inspec-\ntion, we use the scikit-learn Python package\n(Pedregosa et al., 2011).\n3.1 Filter scores as features\nIn order to extract features from a parallel cor-\npus, we select a set of filters and use them to pro-\nduce scores for sentence pairs with OpusFilter’s\nscore function. We conduct this procedure on a\nrandomly sampled subset of 100k sentence pairs\nfrom the training corpus in order to keep the con-\nfiguration generation reasonably fast even for large\ncorpora. In this work, we use the following filter\nscores as features:\n• AlphabetRatioFilter: The proportion of al-\nphabetic characters in the segments.\n• CharacterScoreFilter: The proportion of char-\nacters in a valid script.\n• LanguageIdFilter: A confidence score from\ncld2 language identifier.1\n• LengthRatioFilter: The ratio between the\nsource and target segment lengths. We use\ntwo versions of this score: one with charac-\nters and one with tokens as the length unit.\n1https://github.com/CLD2Owners/cld2\n• NonZeroNumeralsFilter: The similarity of\nnumerals in the source and target segments\n(V´azquez et al., 2019).\n• TerminalPunctuationFilter: A penalty score\nfor terminal punctuation co-occurrence in the\nsource and target segments (V ´azquez et al.,\n2019).\nThese features are chosen as they are inexpensive\nto produce and easy to interpret, but our approach\ncan be expanded to use any filter that produces\nscores ranging from noisy to clean.\n3.2 Clustering\nWe train k-means clustering with the filter scores\nas features and we cluster the sentence pairs\ninto two categories: noisy and clean. We use\nthe k-means++ algorithm for centroid initializa-\ntion (Arthur and Vassilvitskii, 2007). All feature\nscores are standardized by removing the mean and\nscaling to unit variance before clustering. After\ntraining the clustering algorithm, we look at the\ncentroids of each cluster to recognize the two cat-\negories. The cluster center which has lower mean\nfeature score represents the noisy cluster. For some\nfilters, low values represent clean sentence pairs\nand in those cases we use the value’s additive in-\nverse when calculating the mean. The features of\nthe noisy cluster center are used as the generated\nfiltering threshold parameters.\n3.3 Feature importance\nNot all features are useful in differentiating be-\ntween noisy and clean sentence pairs. The k-\nmeans clustering algorithm does not directly indi-\ncate which of the features are important. In order\nto determine the feature importance, we convert\nthe unsupervised clustering task into a supervised\nclassification task similarly to Ismaili et al. 
(2014).\nWe train a random forest classifier with the same\nfeatures as extracted for clustering, and as the la-\nbels we use the categories assigned to each sen-\ntence pair by the clustering step.\nOnce the classifier is trained, we find the im-\nportant features using permutation feature impor-\ntance scores which show how much the classifi-\ncation accuracy is affected by shuffling the values\nof a given feature (Breiman, 2001). In order to\ndetermine which features are important enough to\nkeep in the filtering configuration, we compare the\nimportance value of each feature to the mean ofall importance values. The importance threshold\nthat each feature has to cross is the mean multi-\nplied by a rejection coefficient. This coefficient is\nused to lower the threshold in order to accept all\nfeatures in cases where all importance values are\nclose to the mean. In our preliminary experiments,\nwe found using 0.1 as the coefficient to work in\nrejecting features that do not differentiate between\nnoisy and clean sentence pairs. The default value\nfor the coefficient is 0.1 but it can be set to other\nvalues. Finding the optimal value is not trivial as\nthis would require examining the results of running\nthe filters on full datasets and possibly training MT\nsystems to assess the datasets. Finding a more ro-\nbust approach for rejecting filters remains for fu-\nture work.\nNoisy Clean Importance\nAlphabetRatio.src 0.74 0.82 0.086\nAlphabetRatio.trg 0.76 0.84 0.104\nCharacterScore.src 1.0 1.0 0.0\nCharacterScore.trg 0.99 1.0 0.010\nLanguageID.src 0.94 0.92 0.001\nLanguageID.trg 0.91 0.92 0.001\nLengthRatio.char 1.18 1.17 0.001\nLengthRatio.word 1.21 1.21 0.001\nNonZeroNum 0.67 0.99 0.088\nTerminalPunctuation -0.67 -0.05 0.063\nTable 1: Feature selection for English-Ukrainian. The ta-\nble shows the feature values of the noisy and clean cluster\ncenters. The rightmost column shows the importance val-\nues determined by the random forest classification task. The\nmean importance is 0.036 and rejection coefficient is set to\n0.1. Thus, the threshold to be considered an important fea-\nture is 0.0036. Five of the features are rejected as they do not\ncross this threshold. Rejected importance values have a grey\nbackground.\nTable 1 shows an example of feature selection\nfor the English-Ukrainian training set used in our\ntranslation experiments in Section 4. Five of the\nten features are rejected as they do not cross the\nimportance score threshold. The features that are\nrejected appear to have similar values in both the\nnoisy and clean cluster centers. On the other hand,\nthe character score on the target side is not rejected\ndespite having values very close to each other in\nboth clusters. 
This can be explained by the fact that\nthe importance values take into account the whole\ndistribution of feature scores, while the cluster cen-\nters only represent the means of each feature.\n4 Translation experiments\nIn order to assess the impact of our data filtering\nmethod, we train translation models for English-\nGerman (en-de) and English-Ukrainian (en-uk) in\nDefault Autogen Default Autogen\nen-de en-uk en-de en-uk en-de en-uk\nAlphabetRatio 0.75, 0.75 0.73, 0.76 0.74, 0.76 13.5% 16.2% 10.6% 15.0%\nCharacterScore 1, 1 –, – –, 0.99 0.1% 14.1% – 11.1%\nLanguageId 0, 0 –, 0.85 –, – 8.5% 10.6% 8.7% –\nLengthRatio.char 3 – – 0.0% 0.0% – –\nLengthRatio.word 3 – – 0.0% 0.0% – –\nNonZeroNumeral 0.5 0.60 0.67 7.9% 7.8% 9.6% 11.9%\nTerminalPunctuation -2 -0.66 -0.67 0.8% 0.7% 19.1% 14.9%\nTable 2: The left side shows the default thresholds and the generated thresholds for each filter. The default thresholds are the\nsame for both language pairs. AlphabetRatio, CharacterScore and LanguageId filters each have two threshold values: one for\nthe source and one for the target sentence. The right side shows the proportions of data that each filter would remove with these\nthresholds if ran individually. The hyphens indicate filters that have been rejected by the autogeneration method.\nboth translation directions. These language pairs\nare chosen as the latest WMT shared transla-\ntion task (Kocmi et al., 2022) provides develop-\nment and test data for them and there is available\nParaCrawl data for both language pairs (Espl `a-\nGomis et al., 2019; Ba ˜n´on et al., 2020). We train\nmodels with three different training datasets: one\nunfiltered set, one cleaned with the default param-\neters from OpusFilter, and one cleaned with filters\nand parameters selected by our proposed configu-\nration generation method. We compare the transla-\ntion quality of the resulting models with automatic\nmetrics.\n4.1 Experiment setting\nFor our experiments, we use ParaCrawl v9 data,\nwhich has been previously shown to contain a good\namount of noise (Kreutzer et al., 2022). To con-\nduct basic initial cleaning on our training datasets,\nwe remove duplicates and filter out sentences by\nlength (we remove sentences shorter than 3 words\nand longer than 100 words). The en-uk training\nset has 12,605,229 sentence pairs after the initial\nfiltering. For en-de, we take a sample of 30M sen-\ntence pairs from the initially filtered set to serve as\nthe training data.\nOur translation models, trained using the Mar-\nianNMT toolkit (Junczys-Dowmunt et al., 2018),\nare transformer-base with an encoder and decoder\ndepth of 6. We train SentencePiece (Kudo and\nRichardson, 2018) unigram tokenizers for each\nmodel and restrict the vocabulary size to 32k fol-\nlowing Gowda and May (2020). For en-de we\nchoose a shared vocabulary, while for en-uk we\nchoose to have separate vocabularies of 32k for\neach script. All models are trained until conver-\ngence with early-stopping on development data,\nfor which we use Flores-101 (Goyal et al., 2022).\nFlores-101 is the only development set for en-uk\nin WMT22 and we aim to create consistent train-ing conditions for all our experiments. Therefore,\nwe use Flores-101 development data for en-de as\nwell. 
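As a side note, the tokenizer training mentioned above is essentially a one-liner with the SentencePiece Python bindings; the input file and model prefix below are placeholders:

```python
import sentencepiece as spm

# Shared 32k unigram vocabulary for en-de (separate models per script
# would be trained analogously for en-uk).
spm.SentencePieceTrainer.train(
    input="train.en-de.txt",       # concatenated source and target text
    model_prefix="ende.spm32k",
    model_type="unigram",
    vocab_size=32000,
)
```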
We use 1 single NVIDIA V olta V100 GPU\nfor training.\nWe train models in both translation directions\nfor each language pair based on three different data\nfiltering methods:\n•baseline : raw data deduplicated and fil-\ntered by length.\n•default : data filtered with OpusFilter’s de-\nfault parameters.\n•autogen : data filtered with OpusFilter con-\nfiguration files produced with the proposed\nautogeneration method.\n4.2 Corpus filtering\nWe filter the training sets for both language pairs\nwith two different methods: using the default\nparameters from OpusFilter and using automat-\nically generated parameters. In both methods,\nwe use the filters defined in Section 3.1. Ta-\nble 2 shows the default thresholds for each filter\nas well as the thresholds generated by the auto-\ngeneration method. Many filtering thresholds are\nrejected as the configuration generation procedure\ndoes not consider them useful for differentiating\nbetween noisy and clean sentence pairs. For exam-\nple, the length ratio score distributions are similar\nin the noisy and clean clusters for both language\npairs and consequently, the length ratio filters are\ndropped for both language pairs. Language iden-\ntification scores are not found important for en-uk\nbut for the en-de training set, the threshold for the\nGerman side is kept. All character score thresh-\nolds are rejected except for the Ukrainian side of\nthe en-uk set.\nTable 2 also shows how much data each filter\nwould remove with default and autogenerated pa-\nrameters if each filter was run individually. The\nBLEU chrF COMET\nen-uk uk-en en-de de-en en-uk uk-en en-de de-en en-uk uk-en en-de de-en\nBaseline 11.1 21.3 24.6 24.1 35.3 45.8 52.6 49.6 -0.395 -0.177 0.198 0.152\nDefault 15.8 28.9 b24.6 24.6 43.4 53.2 b52.5 50.9 0.027 0.108 b0.201 0.202\nAutogen 16.3 29.9 25.5 d24.6 44.2 54.4 53.7 d50.8 0.065 0.164 0.230 d0.212\nTable 3: Results of the translation experiments. When the results from default parameters or autogenerated parameters are not\nsignificantly different from the baseline results, we prefix them with b. When the results from autogenerated parameters are not\nsignificantly different from the default parameter results, we prefix them with d.\nproportion of sentence pairs removed by the four\nlength ratio filters with default thresholds ranges\nfrom none at all to 0.0005%. This supports the hy-\npothesis that length ratio values are not useful for\nfinding noisy data in these training sets. Similarly,\nthe character score filter with default parameters\nremoves only 0.1% of the en-de set and the filter\nis not present in the generated configuration. On\nthe other hand, the language identification score\nfor the en-uk set does not follow this trend: the\ndefault thresholds filter out a substantial portion of\nthe data, 10.6%, but it is still rejected by the auto-\ngeneration method.\nIn total, filtering with default values keeps\n22,586,611 (75.3%) sentence pairs for the en-\nde set and 8,069,599 (64.0%) for the en-uk set.\nIn turn, after filtering with the autogenerated\nthreshold parameters, the dataset size for en-de\nis 19,417,755 (64.7%) and for en-uk 8,316,491\n(66.0%) sentence pairs. The en-de training sets\nhave 19,031,231 overlapping sentence pairs which\nis 84.3% of the default set and 98.0% of the auto-\ngeneration set. 
For en-uk, the number of overlap-\nping sentence pairs is 7,280,959 which is 90.2% of\nthe default set and 87.5% of the autogeneration set.\n4.3 Results\nThe trained translation models are evaluated with\nthree evaluation metrics: BLEU (Papineni et al.,\n2002), chrF (Popovi ´c, 2015) and COMET (Rei et\nal., 2020). We use SacreBLEU (Post, 2018) to\ncalculate BLEU and chrF. COMET is computed\nwith the unbabel-comet Python package2us-\ning evaluation model wmt20-comet-da. Addi-\ntionally, we conduct significance testing by us-\ning paired bootstrap resampling (Koehn, 2004) to\ncompare the filtered training sets to the baseline,\nand to compare the default and autogeneration\nmethods to each other. Results are shown in Ta-\nble 3 for the WMT22 general test sets (Kocmi et\nal., 2022).\nAutogeneration performs better than the base-\n2https://github.com/Unbabel/COMETline for all metrics and language pairs. The perfor-\nmance gains are especially noticeable for the en-\nuk and uk-en translation pairs. Default filtering\nscores are higher than the baseline in all transla-\ntion directions except en-de where the scores are\nnot significantly different from the baseline by any\nmetric. Autogeneration outperforms default filter-\ning in all language pairs except de-en for which\nthere are no significant performance differences\nbetween the two approaches.\nThese results suggest that the proposed method\nis able to improve the translation quality of mod-\nels trained on parallel corpora that are filtered by\nextracting and clustering corpus-specific features.\nAdditionally, our method makes the corpus filter-\ning phase more efficient. We select the filters and\ntheir thresholds based on a 100k sentence pair sam-\nple of a much larger corpus. This allows us to\navoid unnecessarily running filters that do not re-\nmove noisy sentence pairs on the whole corpus. In\nour experiments, running the filters with default\nparameters took 1h3m12s for en-de and 31m21s\nfor en-uk. Using the generated configurations, the\nfiltering times were 47m4s (25.5% faster) for en-de\nand 18m35s (40.7% faster) for en-uk. Generating\nthe filtering parameters takes one to two minutes.\nThe filters used in this work are quite inexpensive\nand fast to run but our method can be easily ex-\npanded to more demanding cleaning.\n5 Conclusion\nWe propose an unsupervised method for selecting\nfilters and filtering thresholds for OpusFilter. We\nevaluate our method in translation tasks where we\ntrain models on data filtered with the default pa-\nrameters of OpusFilter and another set of mod-\nels trained on data filtered with generated filter-\ning configuration files. The autogeneration method\noutperforms the default parameters in almost all\ncases. Additionally, our method makes corpus fil-\ntering more efficient as we only run useful filters\nwith appropriate parameters on the full training set.\nIn future work, we will evaluate our method in a\nlarger variety of corpus cleaning scenarios to con-\nfirm our findings. One point of interest is to test\nthe method for corpora with different proportions\nof noisy data. We will also conduct tests in low-\nresource language settings. Additionally, we will\nevaluate the effects of expanding our approach by\nintegrating a larger range of different filters. 
In or-\nder to improve the autogeneration method, more\ncareful analysis of the feature selection process\nwill be performed, for example manual evalua-\ntion of sentence pairs in noisy and clean categories\nin order to assess the clustering accuracy. We\nwill also explore using statistical inference (e.g.\nWelch’s t-test) for finding effective filters as an al-\nternative for the feature importance analysis. Re-\nlying on statistical significance could be a more ro-\nbust approach for discarding filters than the current\nrejection coefficient method.\nAcknowledgements\nThis work was supported by the HPLT project\nwhich has received funding from the European\nUnion’s Horizon Europe research and innovation\nprogramme under grant agreement No 101070350.\nThe contents of this publication are the sole re-\nsponsibility of its authors and do not necessarily\nreflect the opinion of the European Union.\nThis work was also supported by the FoTran\nproject, funded by the European Research Council\n(ERC) under the European Union’s Horizon 2020\nresearch and innovation programme under grant\nagreement No 771113.\nReferences\nArthur, David and Sergei Vassilvitskii. 2007. K-\nmeans++: The advantages of careful seeding. In\nProceedings of the Eighteenth Annual ACM-SIAM\nSymposium on Discrete Algorithms , SODA ’07, page\n1027–1035, USA. Society for Industrial and Applied\nMathematics.\nAulamo, Mikko, Sami Virpioja, and J ¨org Tiedemann.\n2020. OpusFilter: A configurable parallel corpus fil-\ntering toolbox. In Proceedings of the 58th Annual\nMeeting of the Association for Computational Lin-\nguistics: System Demonstrations , pages 150–156,\nOnline, July. Association for Computational Lin-\nguistics.\nBa˜n´on, Marta, Pinzhen Chen, Barry Haddow, Ken-\nneth Heafield, Hieu Hoang, Miquel Espl `a-Gomis,\nMikel L Forcada, Amir Kamran, Faheem Kirefu,\nPhilipp Koehn, et al. 2020. Paracrawl: Web-scale\nacquisition of parallel corpora. In Proceedings of the58th Annual Meeting of the Association for Compu-\ntational Linguistics , pages 4555–4567.\nBreiman, Leo. 2001. Random forests. Machine learn-\ning, 45:5–32.\nCarpuat, Marine, Yogarshi Vyas, and Xing Niu. 2017.\nDetecting cross-lingual semantic divergence for neu-\nral machine translation. In Proceedings of the First\nWorkshop on Neural Machine Translation , pages\n69–79, Vancouver, August. Association for Compu-\ntational Linguistics.\nCui, Lei, Dongdong Zhang, Shujie Liu, Mu Li, and\nMing Zhou. 2013. Bilingual data cleaning for\nSMT using graph-based random walk. In Proceed-\nings of the 51st Annual Meeting of the Association\nfor Computational Linguistics (Volume 2: Short Pa-\npers) , pages 340–345, Sofia, Bulgaria, August. As-\nsociation for Computational Linguistics.\nEspl`a-Gomis, Miquel, Mikel L Forcada, Gema\nRam´ırez-S ´anchez, and Hieu Hoang. 2019.\nParacrawl: Web-scale parallel corpora for the lan-\nguages of the EU. In Proceedings of Machine Trans-\nlation Summit XVII: Translator, Project and User\nTracks , pages 118–119.\nGowda, Thamme and Jonathan May. 2020. Finding the\noptimal vocabulary size for neural machine transla-\ntion. In Findings of the Association for Computa-\ntional Linguistics: EMNLP 2020 , pages 3955–3964,\nOnline, November. Association for Computational\nLinguistics.\nGoyal, Naman, Cynthia Gao, Vishrav Chaudhary,\nPeng-Jen Chen, Guillaume Wenzek, Da Ju, San-\njana Krishnan, Marc’Aurelio Ranzato, Francisco\nGuzm ´an, and Angela Fan. 2022. The Flores-101\nEvaluation Benchmark for Low-Resource and Mul-\ntilingual Machine Translation. 
Transactions of the\nAssociation for Computational Linguistics , 10:522–\n538, 05.\nIsmaili, Oumaima Alaoui, Vincent Lemaire, and An-\ntoine Cornu ´ejols. 2014. A supervised methodol-\nogy to measure the variables contribution to a clus-\ntering. In Neural Information Processing: 21st\nInternational Conference, ICONIP 2014, Kuching,\nMalaysia, November 3-6, 2014. Proceedings, Part I\n21, pages 159–166. Springer.\nJunczys-Dowmunt, Marcin, Roman Grundkiewicz,\nTomasz Dwojak, Hieu Hoang, Kenneth Heafield,\nTom Neckermann, Frank Seide, Ulrich Germann,\nAlham Fikri Aji, Nikolay Bogoychev, Andr ´e F. T.\nMartins, and Alexandra Birch. 2018. Marian: Fast\nneural machine translation in C++. In Proceed-\nings of ACL 2018, System Demonstrations , pages\n116–121, Melbourne, Australia, July. Association\nfor Computational Linguistics.\nKhayrallah, Huda and Philipp Koehn. 2018. On the\nimpact of various types of noise on neural machine\ntranslation. In Proceedings of the 2nd Workshop on\nNeural Machine Translation and Generation , pages\n74–83, Melbourne, Australia, July. Association for\nComputational Linguistics.\nKocmi, Tom, Rachel Bawden, Ond ˇrej Bojar, An-\nton Dvorkovich, Christian Federmann, Mark Fishel,\nThamme Gowda, Yvette Graham, Roman Grund-\nkiewicz, Barry Haddow, Rebecca Knowles, Philipp\nKoehn, Christof Monz, Makoto Morishita, Masaaki\nNagata, Toshiaki Nakazawa, Michal Nov ´ak, Martin\nPopel, and Maja Popovi ´c. 2022. Findings of the\n2022 conference on machine translation (WMT22).\nInProceedings of the Seventh Conference on Ma-\nchine Translation (WMT) , pages 1–45, Abu Dhabi,\nUnited Arab Emirates (Hybrid), December. Associa-\ntion for Computational Linguistics.\nKoehn, Philipp, Hieu Hoang, Alexandra Birch, Chris\nCallison-Burch, Marcello Federico, Nicola Bertoldi,\nBrooke Cowan, Wade Shen, Christine Moran,\nRichard Zens, Chris Dyer, Ond ˇrej Bojar, Alexandra\nConstantin, and Evan Herbst. 2007. Moses: Open\nsource toolkit for statistical machine translation. In\nProceedings of the 45th Annual Meeting of the ACL\non Interactive Poster and Demonstration Sessions ,\nACL ’07, page 177–180, USA. Association for Com-\nputational Linguistics.\nKoehn, Philipp. 2004. Statistical significance tests\nfor machine translation evaluation. In Proceed-\nings of the 2004 Conference on Empirical Methods\nin Natural Language Processing , pages 388–395,\nBarcelona, Spain, July. Association for Computa-\ntional Linguistics.\nKreutzer, Julia, Isaac Caswell, Lisa Wang, Ahsan Wa-\nhab, Daan van Esch, Nasanbayar Ulzii-Orshikh, Al-\nlahsera Tapo, Nishant Subramani, Artem Sokolov,\nClaytone Sikasote, Monang Setyawan, Supheak-\nmungkol Sarin, Sokhar Samb, Beno ˆıt Sagot, Clara\nRivera, Annette Rios, Isabel Papadimitriou, Sa-\nlomey Osei, Pedro Ortiz Suarez, Iroro Orife, Kelechi\nOgueji, Andre Niyongabo Rubungo, Toan Q.\nNguyen, Mathias M ¨uller, Andr ´e M ¨uller, Sham-\nsuddeen Hassan Muhammad, Nanda Muhammad,\nAyanda Mnyakeni, Jamshidbek Mirzakhalov, Tapi-\nwanashe Matangira, Colin Leong, Nze Lawson,\nSneha Kudugunta, Yacine Jernite, Mathias Jenny,\nOrhan Firat, Bonaventure F. P. Dossou, Sakhile\nDlamini, Nisansa de Silva, Sakine C ¸ abuk Ballı,\nStella Biderman, Alessia Battisti, Ahmed Baruwa,\nAnkur Bapna, Pallavi Baljekar, Israel Abebe Azime,\nAyodele Awokoya, Duygu Ataman, Orevaoghene\nAhia, Oghenefego Ahia, Sweta Agrawal, and Mofe-\ntoluwa Adeyemi. 2022. Quality at a Glance: An Au-\ndit of Web-Crawled Multilingual Datasets. 
Transac-\ntions of the Association for Computational Linguis-\ntics, 10:50–72, 01.\nKudo, Taku and John Richardson. 2018. Sentence-\nPiece: A simple and language independent subword\ntokenizer and detokenizer for neural text processing.\nInProceedings of the 2018 Conference on Empirical\nMethods in Natural Language Processing: System\nDemonstrations , pages 66–71, Brussels, Belgium,November. Association for Computational Linguis-\ntics.\nPapineni, Kishore, Salim Roukos, Todd Ward, and Wei-\nJing Zhu. 2002. Bleu: a method for automatic eval-\nuation of machine translation. In Proceedings of the\n40th Annual Meeting of the Association for Com-\nputational Linguistics , pages 311–318, Philadelphia,\nPennsylvania, USA, July. Association for Computa-\ntional Linguistics.\nPedregosa, F., G. Varoquaux, A. Gramfort, V . Michel,\nB. Thirion, O. Grisel, M. Blondel, P. Prettenhofer,\nR. Weiss, V . Dubourg, J. Vanderplas, A. Passos,\nD. Cournapeau, M. Brucher, M. Perrot, and E. Duch-\nesnay. 2011. Scikit-learn: Machine learning in\nPython. Journal of Machine Learning Research ,\n12:2825–2830.\nPopovi ´c, Maja. 2015. chrF: character n-gram F-score\nfor automatic MT evaluation. In Proceedings of the\nTenth Workshop on Statistical Machine Translation ,\npages 392–395, Lisbon, Portugal, September. Asso-\nciation for Computational Linguistics.\nPost, Matt. 2018. A call for clarity in reporting BLEU\nscores. In Proceedings of the Third Conference on\nMachine Translation: Research Papers , pages 186–\n191, Belgium, Brussels, October. Association for\nComputational Linguistics.\nRam´ırez-S ´anchez, Gema, Jaume Zaragoza-Bernabeu,\nMarta Ba ˜n´on, and Sergio Ortiz-Rojas. 2020. Bi-\nfixer and Bicleaner: two open-source tools to clean\nyour parallel data. In Proceedings of the 22nd An-\nnual Conference of the European Association for\nMachine Translation , pages 291–298, Lisboa, Por-\ntugal, November. European Association for Machine\nTranslation.\nRei, Ricardo, Craig Stewart, Ana C Farinha, and Alon\nLavie. 2020. COMET: A neural framework for MT\nevaluation. In Proceedings of the 2020 Conference\non Empirical Methods in Natural Language Process-\ning (EMNLP) , pages 2685–2702, Online, November.\nAssociation for Computational Linguistics.\nRikters, Mat ¯ıss. 2018. Impact of corpora quality on\nneural machine translation. In Human Language\nTechnologies–The Baltic Perspective , pages 126–\n133. IOS Press.\nSchwenk, Holger, Guillaume Wenzek, Sergey Edunov,\nEdouard Grave, Armand Joulin, and Angela Fan.\n2021. CCMatrix: Mining billions of high-quality\nparallel sentences on the web. In Proceedings of the\n59th Annual Meeting of the Association for Compu-\ntational Linguistics and the 11th International Joint\nConference on Natural Language Processing (Vol-\nume 1: Long Papers) , pages 6490–6500, Online, Au-\ngust. Association for Computational Linguistics.\nTaghipour, Kaveh, Shahram Khadivi, and Jia Xu. 2011.\nParallel corpus refinement as an outlier detection al-\ngorithm. In Proceedings of Machine Translation\nSummit XIII: Papers , Xiamen, China, September 19-\n23.\nV´azquez, Ra ´ul, Umut Sulubacak, and J ¨org Tiedemann.\n2019. The University of Helsinki submission to the\nWMT19 parallel corpus filtering task. In Proceed-\nings of the Fourth Conference on Machine Transla-\ntion (Volume 3: Shared Task Papers, Day 2) , pages\n294–300, Florence, Italy, August. Association for\nComputational Linguistics.\nXu, Hainan and Philipp Koehn. 2017. 
Zipporah: a fast and scalable data cleaning system for noisy web-crawled parallel corpora. In Proceedings of the 2017 Conference on Empirical Methods in Natural Language Processing, pages 2945–2950.
{ "decision": "Unknown", "reviews": [] }
0
0
[]
[]
{ "id": "JtlWH18R0IGO", "year": null, "venue": "EAMT 2006", "pdf_link": "https://aclanthology.org/2006.eamt-1.22.pdf", "forum_link": "https://openreview.net/forum?id=JtlWH18R0IGO", "arxiv_id": null, "doi": null }
{ "title": "Obtaining Word Phrases with Stochastic Inversion Translation Grammars for Phrase-based Statistical Machine Translation", "authors": [ "Joan-Andreu Sánchez", "José-Miguel Benedí" ], "abstract": null, "keywords": [], "raw_extracted_content": "Obtaining WordPhrases withStochastic Inversion\nTransduction Grammars forPhrase-based\nStatistical MachineTranslation\u0003\nJ.A.S\u0013anchezandJ.M.Bened \u0013\u0010\nDSIC Universidad Polit\u0013ecnica deValencia, 46022 Valencia, Spain\nfjandreu jjbenedi [email protected]\nAbstract\nPhrase-based statistical translation systems arecurren tlyproviding excel-\nlentresults inrealmachinetranslation tasks. Inphrase-based statistical\ntranslation systems, thebasic translation units arewordphrases. Anim-\nportantproblem thatisrelated totheestimation ofphrase-based statistical\nmodelsistheobtaining ofwordphrases fromanaligned bilingual training\ncorpus. Inthiswork,weproposeobtaining wordphrases bymeans ofa\nStochastic Inversion Transduction Grammar. Preliminary experimen tshave\nbeencarried outonrealtasks andpromising results havebeenobtained.\n1Introduction\nMachineTranslation isaproblem thatcan\nbeaddressed bymeans ofstatistical tech-\nniques (Brown,Pietra, Pietra, &Mercer,\n1993). Inthisapproac h,theprocessofhu-\nmanlanguage translation ismodeled sta-\ntistically bymeans ofstatistical translation\nmodels.\nInorder toestimate these statistical\ntranslation models,severalapproac heshave\nbeenproposedintheliterature: \fnite-state\ntechniques (Bangalore &Riccardi, 2001;\nCasacub erta&Vidal, 2004); alignmen t\ntechniques (Brownetal.,1990, 1993; Zens,\nOch,&Ney,2002; Vogeletal.,2003; Koehn,\n2004; Och&Ney,2004); andsyntax-based\ntechniques (Wu,1997; Yamada &Knigh t,\n2001). Phrase-based techniques arebased\nonthealignmen tofwordphrases (Marcu\n&Wong,2002; Zens etal.,2002; Vogel\netal.,2003; Koehn,2004; Tom\u0013as,Lloret, &\nCasacub erta,2005). Phrase-based statisti-\ncaltranslation systems arecurren tlyprovid-\ningexcellen tresults inrealmachinetransla-\ntiontasks. Inphrase-based statistical trans-\nlation systems, thebasic translation units\n\u0003Thisworkhasbeenpartially supported bythe\nUniversidad Polit\u0013ecnicadeValencia withtheILETA\nproject.arewordphrases.\nAnimportantproblem thatisrelated to\nphrase-based statistical translation istoau-\ntomatically obtain bilingual wordphrases\nfromparallel corpora.Severalmetho dshave\nbeende\fned fordealing withthisproblem\n(Och&Ney,2003). Inthiswork,westudy a\nmetho dtoobtain wordphrases thatisbased\nonStochastic Inversion Transduction Gram-\nmarsthatwasproposedin(Wu,1997).\nStochastic Inversion Transduction Gram-\nmars (SITG) canbeviewedasare-\nstricted Stochastic Context-F reeSyntax-\nDirected Transduction Scheme (Aho &Ull-\nman, 1972; Maryanski &Thomason, 1979;\nCasacub erta,1995). SITGs canbeusedto\ncarry outasimultaneous parsing ofboth\ntheinput string andtheoutput string. In\nthiswork,weproposetoapply thisidea\ntoobtain aligned wordphrases tobeused\ninphrase-based translation systems. 
Some works along this idea have been proposed elsewhere (Zhang & Gildea, 2005).
In Section 2, we review the phrase-based machine translation approach. SITGs are reviewed in Section 3. In Section 4, we present preliminary experiments with two real tasks.

2 Phrase-based Statistical Machine Translation
The translation units in a phrase-based statistical translation system are bilingual phrases rather than simple paired words. Several systems that follow this approach have been presented in recent works (Zens et al., 2002; Koehn, 2004; Tomás et al., 2005). These systems have demonstrated excellent translation performance in real tasks.
The word-based statistical machine translation systems present some problems. One of these problems is that the classical formulation presented in (Brown et al., 1993) does not have a direct translation method. Another of these problems is the reordering problem that occurs between languages with different word orders. Finally, there is the problem of the unit size, which must be increased in order to improve the performance of the systems. These problems can be alleviated through the use of word phrases. These larger units allow us to represent bilingual contextual information in an explicit and easy way.
The basic idea of a phrase-based statistical machine translation system consists of the following steps (Zens et al., 2002):
1. The source sentence is segmented into phrases.
2. Each source phrase is translated into a target phrase.
3. The target phrases are reordered in order to compose the target sentence.
Bilingual translation phrases are an important component of a phrase-based system. Different methods have been defined to obtain bilingual translation phrases, mainly from word-based alignments and from syntax-based models (Yamada & Knight, 2001).
In this work, we focus on learning bilingual word phrases by using Stochastic Inversion Transduction Grammars (SITGs) (Wu, 1997). This formalism allows us to obtain bilingual word phrases in a natural way from the bilingual parsing of two sentences. In addition, the SITGs allow us to easily incorporate many desirable characteristics into word phrases, such as length restrictions, selection according to the word alignment probability, bracketing information, etc. We review this formalism in the following section.

3 Stochastic Inversion Transduction Grammars
Stochastic Inversion Transduction Grammars (SITGs) (Wu, 1997) can be viewed as a restricted subset of Stochastic Syntax-Directed Transduction Grammars (Aho & Ullman, 1972; Maryanski & Thomason, 1979). They can be used to simultaneously parse two strings. SITGs are closely related to Stochastic Context-Free Grammars.
Formally, a SITG in Chomsky Normal Form¹ τ_s can be defined as a tuple (N, S, W1, W2, R, p), where: N is a finite set of non-terminal symbols; S ∈ N is the axiom of the SITG; W1 is a finite set of terminal symbols of language 1; and W2 is a finite set of terminal symbols of language 2. R is a finite set of: lexical rules of the type A → x/ε, A → ε/y, A → x/y; direct syntactic rules that are noted as A → [BC]; and inverse syntactic rules that are noted as A → ⟨BC⟩, where A, B, C ∈ N, x ∈ W1, y ∈ W2 and ε is the empty string. When a direct syntactic rule is used in a parsing, both strings are parsed with the syntactic rule A → BC. When an inverse rule is used in a parsing, one string is parsed with the syntactic rule A → BC, and the other string is parsed with the syntactic rule A → CB. Term p of the tuple is a function that attaches a probability to each rule.
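A minimal sketch of how this rule inventory might be represented is given below. The non-terminal names, the Spanish–English word pair and all probabilities are illustrative placeholders, not values from the paper.

```python
# Minimal sketch of the SITG rule types defined above (not a full parser):
# lexical rules A -> x/y (x or y may be empty), straight rules A -> [B C], and
# inverted rules A -> <B C>, each carrying a probability p.
from dataclasses import dataclass
from typing import Optional

EPS = None  # stands for the empty string in lexical rules

@dataclass(frozen=True)
class LexicalRule:          # A -> x / y
    lhs: str
    src: Optional[str]      # terminal of language 1, or EPS
    tgt: Optional[str]      # terminal of language 2, or EPS
    prob: float

@dataclass(frozen=True)
class SyntacticRule:        # A -> [B C] (straight) or A -> <B C> (inverted)
    lhs: str
    left: str
    right: str
    inverted: bool
    prob: float

# Toy grammar fragment with a single non-terminal, as in Section 4.1.1;
# the probabilities here are made up.
rules = [
    LexicalRule("A", "casa", "house", 0.6),
    LexicalRule("A", "casa", EPS, 0.01),                     # deletion
    SyntacticRule("A", "A", "A", inverted=False, prob=0.2),  # A -> [A A]
    SyntacticRule("A", "A", "A", inverted=True, prob=0.05),  # A -> <A A>
]
print(len(rules))
```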
An efficient Viterbi-like parsing algorithm that is based on a Dynamic Programming scheme is proposed in (Wu, 1997). The algorithm is similar to the stochastic version of the CYK algorithm for Stochastic Context-Free Grammars. An extension of this algorithm will be presented below. It allows us to obtain the most probable parsing tree that simultaneously analyzes two strings, x and y. The proposed algorithm has a time complexity of O(|x|³|y|³|R|). It is important to note that this time complexity restricts the use of the algorithm to real tasks with short strings.
¹ A Normal Form for SITGs can be defined (Wu, 1997) by analogy to the Chomsky Normal Form for Stochastic Context-Free Grammars.
If a bracketed corpus is available, then a modified version of the parsing algorithm can be defined in order to take into account the bracketing of the strings. The modifications are similar to those proposed in (Pereira & Schabes, 1992) for the inside algorithm. Following the notation that is presented in (Pereira & Schabes, 1992), we can define a partially bracketed corpus as a set of sentence pairs that is annotated with parentheses that mark constituent frontiers. More precisely, a bracketed corpus Ω is a set of tuples (x, B_x, y, B_y), where x and y are strings, B_x is the bracketing of x, and B_y is the bracketing of y. Let d_xy be a parsing of x and y with the SITG τ_s. If the SITG does not have useless symbols, then each non-terminal that appears in each sentential form of the derivation d_xy generates a pair of substrings x_i···x_j of x, 1 ≤ i ≤ j ≤ |x|, and y_k···y_l of y, 1 ≤ k ≤ l ≤ |y|, and defines a span (i, j) of x and a span (k, l) of y. A derivation of x and y is compatible with B_x and B_y if all the spans defined by it are compatible with B_x and B_y. This compatibility can be easily defined by the function:

c(i,j,k,l) = \begin{cases} 1 & \text{if } (i,j) \text{ does not overlap any } b \in B_x \text{ and } (k,l) \text{ does not overlap any } b \in B_y, \\ 0 & \text{otherwise.} \end{cases}

This function filters those derivations (or partial derivations) whose parsing is not compatible with the bracketing defined in the sample.
The parsing algorithm is based on the definition of

\delta_{ijkl}(A) = \Pr(A \Rightarrow^{*} x_{i+1}\cdots x_j / y_{k+1}\cdots y_l),

the probability that the non-terminal symbol A simultaneously generates the substrings x_{i+1}···x_j and y_{k+1}···y_l.
Following the notation of (Wu, 1997), the parsing algorithm can be adequately modified in order to take into account only those partial parses that are compatible with the bracketing defined on the strings:

1. Initialization:
\delta_{i-1,i,k-1,k}(A) = p(A \to x_i/y_k),  1 \le i \le |x|, 1 \le k \le |y|;
\delta_{i-1,i,k,k}(A) = p(A \to x_i/\varepsilon),  1 \le i \le |x|, 0 \le k \le |y|;
\delta_{i,i,k-1,k}(A) = p(A \to \varepsilon/y_k),  0 \le i \le |x|, 1 \le k \le |y|.

2. Recursion. For all A ∈ N and i, j, k, l such that 0 ≤ i < j ≤ |x|, 0 ≤ k < l ≤ |y| and j−i+l−k > 2:
\delta_{ijkl}(A) = c(i+1,j,k+1,l)\,\max\big(\delta^{[\,]}_{ijkl}(A),\ \delta^{\langle\rangle}_{ijkl}(A)\big)
where
\delta^{[\,]}_{ijkl}(A) = \max_{\substack{B,C \in N;\ i \le I \le j;\ k \le K \le l;\ (I-i)(j-I)+(K-k)(l-K) \ne 0}} p(A \to [BC])\,\delta_{iIkK}(B)\,\delta_{IjKl}(C)
\delta^{\langle\rangle}_{ijkl}(A) = \max_{\substack{B,C \in N;\ i \le I \le j;\ k \le K \le l;\ (I-i)(j-I)+(K-k)(l-K) \ne 0}} p(A \to \langle BC\rangle)\,\delta_{iIKl}(B)\,\delta_{IjkK}(C).

This algorithm can be implemented to compute only those subproblems in the Dynamic Programming scheme that are compatible with the bracketing. Thus, the time complexity is O(|x|³|y|³|R|) for an unbracketed string, while the time complexity is O(|x||y||R|) for a fully bracketed string. It is important to note that the last time complexity allows us to work with real tasks with longer strings.
By keeping the argument of the maximization, the parse tree can be efficiently obtained.
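Before turning to how word phrases are read off the parse tree, here is a small sketch of the compatibility check c(i, j, k, l) used to prune the dynamic program above. The reading of "overlap" as crossing brackets (intersecting without containment) follows Pereira and Schabes (1992) and is an assumption; the example brackets are made up.

```python
# Sketch of the compatibility function c(i, j, k, l): a span is taken to
# "overlap" a bracket when the two cross, i.e. they intersect but neither
# contains the other.
from typing import Iterable, Tuple

Span = Tuple[int, int]  # (i, j) covers the words x_{i+1} ... x_j

def crosses(span: Span, bracket: Span) -> bool:
    (i, j), (a, b) = span, bracket
    return i < a < j < b or a < i < b < j

def compatible(span: Span, brackets: Iterable[Span]) -> bool:
    return not any(crosses(span, b) for b in brackets)

def c(i: int, j: int, k: int, l: int, brackets_x, brackets_y) -> int:
    return int(compatible((i, j), brackets_x) and compatible((k, l), brackets_y))

# Example: with the source side bracketed as (0,2) and (2,5), the span (1,3)
# crosses a bracket and is pruned, while (0,2) is kept.
bx, by = [(0, 2), (2, 5)], []
print(c(1, 3, 0, 1, bx, by))   # 0 -> incompatible
print(c(0, 2, 0, 1, bx, by))   # 1 -> compatible
```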
Eachnodeinthetreerelates two\nwordphrases ofthestrings beingparsed.\nTherelated wordphrases canbeconsidered\ntobethetranslation ofeachother. These\nwordphrases canbeusedtocompute the\ntranslation table ofaphrase-based machine\nstatistical translation system.\n4Experimen ts\nInthissection, wedescrib epreliminary\nexperimen tsthatwerecarried outusing\nSITGs. Twodi\u000beren tcorporawereused\nintheexperimen ts,theEuTrans -Icor-\npus(Casacub erta&Vidal, 2004) andthe\nXRCEcorpus (TT2, 2002). TheEuTrans -\nIisacorpus with asmall vocabulary\nthathasbeensemi-automatically generated.\nThiscorpus allowedustocarry outacom-\nprehensiv esetofexperimen ts.TheXRCE\ncorpus isarealcorpus thathasbeentaken\nfrommanualsofXeroxprinters.\nASITG wasobtained foreveryexperi-\nmentinthissection. TheSITG wasused\ntoparse paired sentences inatraining sam-\nplebyusing theparsing algorithm describ ed\ninSection 3.Allpairsofwordphrases that\nwerederivedfromeachinternal nodeinthe\nparse tree,except therootnode,werecon-\nsidered forthephrase-based machinetrans-\nlation system. Atranslation table wasob-\ntained frompaired wordphrases, bycount-\ningthenumberoftimes thateachpairap-\npeared inthephrases. These values were\nthenappropriately normalized.\nInalltheexperimen tsinthissection, the\nPharaoh software(Koehn,2004) wasused\nasphrase-based translation system. Thede-\nfault values wereusedforthetranslation\nprocess,andatrigram modelwasusedas\nlanguage model.Thistrigram modelwas\ntrained withtheSRILM toolkitusing the\nsame parameters describ edinthePharaoh\nsystem manual.Weusedtheworderrorrate\n(WER) andtheBLEU scoretomeasure the\nresults.\n4.1Experimen ts with the\nEuTrans -Icorpus\nTheEuTrans -Icorpus consists ofqueries,\nrequests, andcomplain tsmade attherecep-\ntiondeskofahotel (Casacub erta&Vidal,\n2004). Thecorpus wassemi-automatically\ngenerated using travelbooklets. Thiscorpus\nhasasmall vocabulary andalotofrepeated\nstrings. Forthese experimen ts,thetransla-\ntionwasfromSpanish toEnglish. Themain\ncharacteristics ofthiscorpus canbeseenin\nTable1.Table1:Characteristics oftheEuTrans -Icor-\npus\nTraining\nSpanish English\nSentence pairs 10,000\nRunning words 97,131 99,292\nVocabulary 683 513\nTest\nSentence pairs 3,000\nRunning words 35,067 35,630\n3-gram test-set perp. 3.7 3.0\n4.1.1 Obtaining aSITG from an\naligned corpus\nForthisexperimen t,aSITG wascon-\nstructed asfollows: the GIZA++\ntoolkit (Och&Ney,2000) wasused\ntoobtain atranslation table andthe\ncorresp onding probabilit yPr(fje).The\nalignmen twascarried outinbothdi-\nrections inorder tohavebothinsertions\nanddeletions available. This table was\nusedtocomposelexical rules oftheform\nA!e=f.Then, twoadditional rules of\ntheformA![AA]andA!hAAiwith\nlowprobabilit ywereadded. Therules\nwerethen adequately normalized. This\nSITG wasused toobtain wordphrases\nfrom thetraining corpus byparsing each\npairofaligned sentences. Then these word\nphrases wereusedbythePharaoh system to\ntranslate thetestset.Theresults obtained\nforthisexperimen twere19.1% WER and\n0.72BLEU.\nItisimportanttopointoutthatthecon-\nstructed SITG didnotparse allthetraining\nsentences. Eventheinsertions anddeletions\nincluded intheSITG didnotsolvethisprob-\nlem.Therefore, themodelwassmoothedby\nadding alltheremaining rulesoftheform\nA!e=\u000fandA!\u000f=fwithlowprobabilit y,\nsothatallthetraining sentences could be\nparsed. 
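A rough sketch of this grammar construction is given below: lexical rules taken from a word translation table Pr(f|e), the two structural rules A → [AA] and A → ⟨AA⟩, and low-probability insertion/deletion rules for smoothing. The lexicon entries and the smoothing constant are made-up placeholders, and the normalization shown (over all rules sharing the single left-hand side) is only one reading of the paper's "adequately normalized".

```python
# Sketch of the SITG construction of Section 4.1.1 (not the authors' code).
def build_sitg(lexicon, vocab_src, vocab_tgt, low=1e-4):
    """lexicon: dict mapping (e, f) -> Pr(f|e), e.g. read from GIZA++ output."""
    rules = {}
    for (e, f), p in lexicon.items():            # A -> e/f
        rules[("lex", e, f)] = p
    for e in vocab_src:                           # A -> e/eps  (deletion, smoothing)
        rules.setdefault(("lex", e, None), low)
    for f in vocab_tgt:                           # A -> eps/f  (insertion, smoothing)
        rules.setdefault(("lex", None, f), low)
    rules[("straight", "A", "A")] = low           # A -> [A A]
    rules[("inverted", "A", "A")] = low           # A -> <A A>
    total = sum(rules.values())                   # renormalize for the single lhs "A"
    return {rule: p / total for rule, p in rules.items()}

lexicon = {("habitación", "room"): 0.7, ("habitación", "rooms"): 0.1}
grammar = build_sitg(lexicon, {"habitación"}, {"room", "rooms"})
print(len(grammar))
```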
Theresults obtained withthisnew\nSITG were14.6% WER and0.79BLEU.\nNote thattheWER results decreased no-\ntably.Thereason forthiswasthatmore\nphrases wereobtained (anincrease of100%)\nandtheirprobabilit ywasbetterestimated.\nThefollowingexperimen tswerecarried out\nwithonlysmoothedSITGs.\n4.1.2 Using bracketing information\nintheparsing\nAsSection 3shows,theparsing algorithm\nforSITGs canbeadequately modi\fed inor-\ndertotakebracketedsentences intoaccoun t.\nIfthebracketing respectslinguistically mo-\ntivatedstructures, thenaligned phrases with\nlinguistic information canbeused. Note\nthatthisapproac hrequires havingquality\nparsed corporaavailable. Thisproblem can\nbereduced byusing automatically learned\nparsers.\nThisexperimen twascarried outtode-\ntermine theperformance ofthetranslation\nwhen some kindofstructural information\nwasincorp orated intheparsing. Since the\ntraining datawasnotbracketed,weparsed\ntheEnglish partofthecorpus with the\nCharniak parser (Charniak, 2000). Onlythe\nbracketing waskeptinthecorpus andthe\nother information (POStags andsyntactic\ntags) wasremoved.Wethenobtained word\nphrases according tothebracketing byus-\ningthesameSITG thatwasdescrib edinthe\nprevious section. Theobtained phrases were\nusedwiththePharaoh system. Theresults\ninthisexperimen twere10.7% WER and\n0.83BLEU. Theresults impro vednotably\nbyincorp orating bracketing information in\nthetraining corpus. Thissuggests thatus-\ningsome structural information could lead\ntoimportantimpro vements.\n4.1.3 Increasing thenumberofnon-\nterminal symbolsintheSITG\nNote that theSITG describ edinSec-\ntion4.1.1wasveryrestricted sinceonlyone\nnon-terminal symbolshould bemodeling the\nstructural relations ofbothstrings. Inthis\nexperimen t,wetriedtodetermine whether\nmoderately increasing thenumberofnon-\nterminal symbolswouldleadtoimpro ve-\nmentssincetheSITG could havemore\rex-\nibilitytomodelstructural relations.\nGiventhecomplexit yoftheparsing algo-\nrithms, onlysmall values weretested. We\ngenerated allthesyntactic rules(direct and\ninverse)thatcould begenerated witha\fxed\nnumberofnon-terminal symbols,except for\nonenon-terminal symbolthatonlygener-\natedlexical rules. Probabilities ofthesyn-tactic rules wererandomly generated and\nwerethenconvenientlynormalized.\nFirst, weparsed thecorpus thatdidnot\ninclude anylinguistic information. Second,\nweparsed thecorpus thatincluded brack-\neting information. Theresults obtained are\nshowninTable2.Note thatthe\frstrow\ncorresp ondstotheexperimen tsinSections\n4.1.2and4.1.3.\nNote thattheresults impro vedasthe\nnumberofnon-terminal symbolsincreased.\nThese results con\frm ourhypothesis inthe\nsense that better phrases wereobtained\nwhen more\rexibilit yinmodeling structural\nrelations wasgiventothemodel.\nItshould alsobenoted thatbetterresults\nwereobtained when thephrases wereob-\ntained fromthenonbracketedcorpus. The\nreason forthiscould bethatinthecase\nofphrases obtained fromthenon-brac keted\ncorpus, themodelhadmore \rexibilit yto\npairwordphrases. Thisway,thenumber\nofdi\u000beren tphrases decreased (seecolumn\n#param.inTable2).Thus,theprobabil-\nitiesofthephrases werebetter estimated.\nInthecaseofphrases obtained from the\nbracketedcorpus, thebracketingmaybeim-\nposing hardrestrictions andmanyphrases\nwerepaired inaforced manner. Thus,the\nnumberofdi\u000beren tphrases didnotdecrease\nasthenumberofnon-terminal symbolsin-\ncreased. 
Therefore, theprobabilities ofthe\nphrases werenotwellestimated.\nFinally ,weconsidered thecombination of\nbothkinds ofsegmen ts.Theresults canbe\nseenintheCombine dcolumn inTable2.\nThistable showsthattheresults impro ved\ninallcases. Thereason forthiscould bethat\nbothkinds ofsegmen tsweredi\u000beren tinna-\nture,and,therefore, thenumberofsegmen ts\n(column #param.)increased notably .\n4.1.4 Using aSITG from anim-\nprovedtranslation table\nOnepossible waytoimpro vethequality\nofthetranslation table consists ofaligning\nthesource andtarget sentences inbothdi-\nrections, andthenchoosing thealignmen ts\nthatappearinbothdirections (Och&Ney,\n2003). Alignmen tsthatappearinthein-\ntersection areassumed tobeofbetterqual-\nTable2:Results obtained when thenumberofnon-terminal symbols(jNj)intheSITG wasincreased.\nNonbracketed Bracketed Combined\njNjWER BLEU #param. WER BLEU #param. WER BLEU #param.\n114.6 0.79 37,508 10.7 0.83 35,300 10.5 0.84 63,292\n5 9.5 0.88 28,028 10.1 0.86 37,828 8.3 0.89 58,452\n10 8.7 0.89 30,260 9.2 0.86 37,372 7.6 0.88 59,833\nity.Thisheuristic hasdemonstrated toim-\nprovetheresults inphrase-based translation\nsystems (Tom\u0013asetal.,2005). Wetested\nthisheuristic bycomputing thealignmen ts\nthatappeared intheintersection, andthen\nwesmoothedthemodelasdescrib edinSec-\ntion4.1.1. Theobtained results areshown\ninTable3.\nItshould bepointedoutthatsimilar re-\nsultswereobtained using thebracketedcor-\npusandusing thenonbracketedcorpus.\nThisbehaviorsuggests thatthisapproac h\ncanbeveryuseful when abracketedcor-\npusisnotavailable. Notethatnoimpro ve-\nmentswereobtained when thenumberof\nnon-terminal symbolswasincreased. The\nresults fromthistable didnotimpro vethe\nresults obtained inSection 4.1.3. Thereason\nforthiscould bethatifthenumberoflexical\nrulesisreduced, thenfewerwordphrases are\nobtained andtheyarenotwellestimated.\nThebestresult reported forthistaskwas\n4.4%WER, whichwasobtained byusing the\nalignmen ttemplates approac h(Och&Ney,\n2000). However,thatresult cannot becom-\npared exactly withtheresults achievedin\nthisworkbecause thestatistical templates\napproac husedanexplicit (automatic) cate-\ngorization ofthesource andthetarget words\nandourapproac husedonlytherawword\nforms. Acomparable result totheonesob-\ntained herecanbeseenin(Casacub erta&\nVidal, 2004), whichwas6.7%WER and0:90\nBLEU. However,itshould benoted thatwe\ncarried outalltheexperimen tsbyusing the\ndefault parameters ofthePharaoh system.\nWhen weslightlytuned theparameters for\ntheexperimen tinTable2,theCombine dcol-\numn, row10,weobtained aWER of7.3%.4.2Experimen tswiththeXRCE\ncorpus\nThiscorpus consisted ofmanualsofXerox\nprinters.Thisisareduced-domain taskthat\nhasbeende\fned intheTransT ype2project\n(TT2, 2002). Theusage manualswereorigi-\nnallywritten inEnglish andwerethentrans-\nlated toSpanish, German, andFrench.For\nthese experimen ts,thetranslation wasfrom\nSpanish toEnglish. Themain characteris-\nticsofthiscorpus areshowninTable4.\nTable4:Characteristics oftheXRCEcorpus\nTraining\nSpanish English\nSentence pairs 55,761\nRunning words 752,469 665,388\nVocabulary 11,051 7,957\nTest\nSentence pairs 1,125\nRunning words 10,106 8,370\n3gram test-set perp. 31 45\nGiventhesizeofthiscorpus andthecom-\nplexityofthealgorithms, onlypreliminary\nexperimen tsthatwereanalogous tothose\nofSection 4.1.4werecarried out.Thelex-\nicalrules ofthemodelwereobtained by\naligning thesource sentence andthetarget\nsentence inbothdirections andthenchoos-\ningthealignmen tsthatappearintheinter-\nsection. 
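The intersection heuristic just described amounts to keeping only the alignment links produced by both directional runs; a minimal sketch follows. The link sets are made-up examples, and in practice they would be read from the output of the word aligner.

```python
# Sketch of the bidirectional alignment intersection: keep only links found by
# both the source->target and the target->source alignment runs.
def intersect_alignments(src2tgt, tgt2src):
    """Both arguments are sets of (source_position, target_position) links,
    with the target->source links already flipped into the same order."""
    return src2tgt & tgt2src

a_fwd = {(0, 0), (1, 2), (2, 1), (3, 3)}
a_bwd = {(0, 0), (1, 2), (3, 4)}
print(sorted(intersect_alignments(a_fwd, a_bwd)))   # [(0, 0), (1, 2)]
```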
Wordphrases werethenobtained\nwiththeSITG constructed frombracketed\nsentences andtheSITG constructed from\nunbracketedtraining sentences, whichhad\nbothbeenusedinthePharaoh system. The\nresults obtained areshowninTable5.\nSeveralresults forthistaskwerereported\nin(Tom\u0013asetal.,2005). Inthatwork,sev-\neralwaysofobtaining wordphrases were\nTable3:Results obtained withtheSITGs.\nNonbracketed Bracketed Combined\njNjWER BLEU #param. WER BLEU #param. WER BLEU #param.\n110.4 0.87 31,413 10.0 0.85 35,257 8.2 0.87 57,963\n510.0 0.87 28,095 10.1 0.86 37,781 8.6 0.89 58,368\nTable5:Results obtained withtheXRCEcorpus.\nNonbracketed Bracketed Combined\nWER BLEU #param. WER BLEU #param. WER BLEU #param.\n32.6 0.57 397,284 33.2 0.57 389,286 32.9 0.57 661,479\ndescrib ed.Thebestreported result was\n26.2% WER andthenumberofdi\u000beren t\nwordphrases thatwereusedforthebest\nresult wasabout2.5M. Theobtained word\nphrases wereusedinaphrase-based machine\ntranslation system thatisdi\u000beren ttothe\noneusedinourwork.With ourproposal,\nthenumberofdi\u000beren twordphrases was\nabout0.4M. Thedi\u000beren tnumberofparam-\neters mightexplain thebetter results ob-\ntained inTom\u0013as'swork.Inhisexperimen t,\naWER ofabout31%wasobtained when\nanumberofparameters ofabout0.4M was\nused. When weslightlytuned theparame-\ntersofthePharaoh system fortheexperi-\nmentinTable5,theCombine dcolumn, we\nobtained aWER of31.5%.\n5Conclusions\nInthiswork,wehaveexplored theproblem\nofobtaining wordphrases forphrase-based\nmachinetranslation systems from SITGs.\nWehavepresen tedhowtheparsing algo-\nrithms forthisformalism canbemodi\fed in\norder totakeintoaccoun tabracketedcor-\npus.Experimen tswerereported fortwodif-\nferenttasks, andtheresults obtained were\nverypromising.\nForfuture work,weproposetoworkalong\ndi\u000beren tlines. First, toincorp oratenewlin-\nguistic information inboththeparsing algo-\nrithm andinthealigned corpus. Second, to\nobtain betterSITGs fromaligned bilingual\ncorpora.Third, toimpro vetheSITG byes-\ntimating thesyntactic rules. Inaddition, we\nalsointendtoaddress other machinetrans-lation tasks.\nReferences\nAho, A.,&Ullman, J.(1972). Thethe-\noryofparsing, translation, andcompiling.\nvolumen i:parsing. Prentice-Hall.\nBangalore, S.,&Riccardi, G.(2001). A\n\fnite-state approac htomachinetransla-\ntion. InProc.ofthenaacl.\nBrown,P.,Cocke,J.,Pietra, S.D.,Pietra,\nV.D.,Jelinek, F.,La\u000bert y,J.,Mercer, R.,\n&Roossin, P.(1990). Astatistical ap-\nproachtomachinetranslation. Computa-\ntional Linguistics ,16(2),79{85.\nBrown,P.,Pietra, S.D.,Pietra, V.D.,&\nMercer, R.(1993). Themathematics of\nstatistical machinetranslation: parame-\nterestimation. Computational Linguis-\ntics,19(2),263{311.\nCasacub erta, F.(1995). Probabilistic es-\ntimation ofstochastic regular syntax-\ndirected translation schemes. InA.Calvo\n&R.Medina (Eds.), Proc.vispanishsym-\nposium onpattern recognition andim-\nageanalysis (pp.201{207). C\u0013ordoba,\nEspa~na.\nCasacub erta,F.,&Vidal, E.(2004). Ma-\nchinetranslation withinferred stochastic\n\fnite-state transducers. Computational\nLinguistics ,30(2),205{225.\nCharniak, E.(2000). Amaxim um-en tropy-\ninspired parser. InProc.ofnaacl-2000\n(pp.132{139).\nKoehn,P.(2004). Pharaoh: abeamsearch\ndecoderforphrase-based statistical ma-\nchine translation models. InProc.of\namta.\nMarcu, D.,&Wong,W.(2002). Aphrase-\nbased, jointprobabilit ymodelforstatis-\nticalmachinetranslation. 
InProc.ofthe\nconferenceonempiric almethodsinnatu-\nrallanguage processing.\nMaryanski, F.,&Thomason, M.(1979).\nProperties ofstochastic syntax-directed\ntranlation schemata. Journal ofCom-\nputerandInformation Scienc es,8(2),89{\n110.\nOch,F.,&Ney,H.(2003). Asystematic\ncomparison ofvarious statistical align-\nmentmodels.Computational Linguistics ,\n29(1),19{52.\nOch,F.,&Ney,H.(2004). Thealign-\nmenttemplate approac htostatistical ma-\nchinetranlation. Computational Linguis-\ntics,30(4),417{450.\nOch,F.J.,&Ney,H.(2000). Impro ved\nstatistical alignmen tmodels. InProc.of\nacl(pp.440{447). Hongk ong,China.\nPereira, F.,&Schabes,Y.(1992). Inside-\noutside reestimation frompartially brack-\netedcorpora.InProceedings ofthe30th\nannual meetingoftheassociation forcom-\nputational linguistics (pp.128{135).\nTom\u0013as,J.,Lloret, J.,&Casacub erta, F.\n(2005). Phrase-based alignmen tmodels\nforstatistical machine translation. In\nIberian conferenceonpattern recognition\nandimage analysis (Vol.3523, pp.605{\n613). Estoril (Portugal): Springer-V erlag.\nTT2. (2002). Transtype2computer assisted\ntranslation (tt2). technical report.infor-\nmation societytechnologies (ist)program.\nist-2001-32091.\nVogel, S.,Zhang, Y.,Huang, F.,Tribble,\nA.,Venugopal, A.,Zhao, B.,&Waibel,\nA.(2003). Thecmustatistical machine\ntranslation system. InProc.oftheninth\nmachine translation summit.\nWu,D.(1997). Stochastic inversion trans-\nduction grammars andbilingual parsing\nofparallel corpora.Computational Lin-\nguistics ,23(3),377{404.\nYamada, K.,&Knigh t,K. (2001).\nAsyntax-based statistical translationmodel.InProc.ofthe39thannual meet-\ningoftheassociation ofcomputational\nlinguistics (pp.523{530).\nZens, R.,Och,F.,&Ney,H.(2002).\nPhrase-based statistical machinetransla-\ntion. InProc.ofthe25thannual german\nconferenceonarti\fcial intelligence(pp.\n18{32).\nZhang, H.,&Gildea, D.(2005). Stochastic\nlexicalized inversion transduction gram-\nmarforalignmen t.InProceedings of\nthe43rdannual conferenceoftheasso-\nciation forcomputational linguistics (acl-\n05).AnnArbor,MI.", "main_paper_content": null }
{ "decision": "Unknown", "reviews": [] }
0
0
[]
[]
{ "id": "rsAOfpXd5P", "year": null, "venue": "EAMT 2020", "pdf_link": "https://aclanthology.org/2020.eamt-1.5.pdf", "forum_link": "https://openreview.net/forum?id=rsAOfpXd5P", "arxiv_id": null, "doi": null }
{ "title": "When and Why is Unsupervised Neural Machine Translation Useless?", "authors": [ "Yunsu Kim", "Miguel Graça", "Hermann Ney" ], "abstract": null, "keywords": [], "raw_extracted_content": "When and Why is Unsupervised Neural Machine Translation Useless?\nYunsu Kim Miguel GraçayHermann Ney\nHuman Language Technology and Pattern Recognition Group\nRWTH Aachen University, Aachen, Germany\n{surname}@cs.rwth-aachen.de\nAbstract\nThis paper studies the practicality of the\ncurrent state-of-the-art unsupervised meth-\nods in neural machine translation (NMT).\nIn ten translation tasks with various data\nsettings, we analyze the conditions un-\nder which the unsupervised methods fail\nto produce reasonable translations. We\nshow that their performance is severely af-\nfected by linguistic dissimilarity and do-\nmain mismatch between source and tar-\nget monolingual data. Such conditions\nare common for low-resource language\npairs, where unsupervised learning works\npoorly. In all of our experiments, super-\nvised and semi-supervised baselines with\n50k-sentence bilingual data outperform the\nbest unsupervised results. Our analyses\npinpoint the limits of the current unsuper-\nvised NMT and also suggest immediate re-\nsearch directions.\n1 Introduction\nStatistical methods for machine translation (MT)\nrequire a large set of sentence pairs in two lan-\nguages to build a decent translation system (Resnik\nand Smith, 2003; Koehn, 2005). Such bilingual\ndata is scarce for most language pairs and its\nquality varies largely over different domains (Al-\nOnaizan et al., 2002; Chu and Wang, 2018). Neu-\nral machine translation (NMT) (Bahdanau et al.,\n2015; Vaswani et al., 2017), the standard paradigm\nof MT these days, has been claimed to suffer from\nthe data scarcity more severely than phrase-based\nMT (Koehn and Knowles, 2017).\nUnsupervised NMT, which trains a neural trans-\nlation model only with monolingual corpora, was\nyThe author is now at DeepL GmbH.\nc\r2020 The authors. This article is licensed under a Creative\nCommons 3.0 licence, no derivative works, attribution, CC-\nBY-ND.proposed for those scenarios which lack bilingual\ndata (Artetxe et al., 2018b; Lample et al., 2018a).\nDespite its progress in research, the performance\nof the unsupervised methods has been evalu-\nated mostly on high-resource language pairs, e.g.\nGerman$English or French$English (Artetxe et\nal., 2018b; Lample et al., 2018a; Yang et al., 2018;\nArtetxe et al., 2018a; Lample et al., 2018b; Ren et\nal., 2019b; Artetxe et al., 2019; Sun et al., 2019;\nSen et al., 2019). For these language pairs, huge\nbilingual corpora are already available, so there\nis no need for unsupervised learning in practice.\nEmpirical results in these tasks do not carry over\nto low-resource language pairs; they simply fail to\nproduce any meaningful translations (Neubig and\nHu, 2018; Guzmán et al., 2019).\nThis paper aims for a more comprehensive and\npragmatic study on the performance of unsuper-\nvised NMT. 
Our experiments span ten translation\ntasks in the following five language pairs:\n\u000fGerman$English: similar languages, abun-\ndant bilingual/monolingual data\n\u000fRussian$English: distant languages, abun-\ndant bilingual/monolingual data, similar sizes\nof the alphabet\n\u000fChinese$English: distant languages, abun-\ndant bilingual/monolingual data, very differ-\nent sizes of the alphabet\n\u000fKazakh$English: distant languages, scarce\nbilingual data, abundant monolingual data\n\u000fGujarati$English: distant languages, scarce\nbilingual/monolingual data\nFor each task, we compare the unsupervised per-\nformance with its supervised and semi-supervised\ncounterparts. In addition, we make the monolin-\ngual training data vary in size and domain to cover\nmany more scenarios, showing under which con-\nditions unsupervised NMT works poorly.\nHere is a summary of our contributions:\n\u000fWe thoroughly evaluate the performance of\nstate-of-the-art unsupervised NMT in numer-\nous real and artificial translation tasks.\n\u000fWe provide guidelines on whether to employ\nunsupervised NMT in practice, by showing\nhow much bilingual data is sufficient to out-\nperform the unsupervised results.\n\u000fWe clarify which factors make unsupervised\nNMT weak and which points must be im-\nproved, by analyzing the results both quan-\ntitatively and qualitatively.\n2 Related Work\nThe idea of unsupervised MT dates back to word-\nbased decipherment methods (Knight et al., 2006;\nRavi and Knight, 2011). They learn only lexicon\nmodels at first, but add alignment models (Dou et\nal., 2014; Nuhn, 2019) or heuristic features (Naim\net al., 2018) later. Finally, Artetxe et al. (2018a)\nand Lample et al. (2018b) train a fully-fledged\nphrase-based MT system in an unsupervised way.\nWith neural networks, unsupervised learning of\na sequence-to-sequence NMT model has been pro-\nposed by Lample et al. (2018a) and Artetxe et al.\n(2018b). Though having slight variations (Yang et\nal., 2018; Sun et al., 2019; Sen et al., 2019), un-\nsupervised NMT approaches commonly 1) learn\na shared model for both source !target and\ntarget!source 2) using iterative back-translation,\nalong with 3) a denoising autoencoder objective.\nThey are initialized with either cross-lingual word\nembeddings or a cross-lingual language model\n(LM). To further improve the performance at the\ncost of efficiency, Lample et al. (2018b), Ren et\nal. (2019b) and Artetxe et al. (2019) combine un-\nsupervised NMT with unsupervised phrase-based\nMT. On the other hand, one can also avoid the\nlong iterative training by applying a separate de-\nnoiser directly to the word-by-word translations\nfrom cross-lingual word embeddings (Kim et al.,\n2018; Pourdamghani et al., 2019).\nUnsupervised NMT approaches have been so\nfar evaluated mostly on high-resource language\npairs, e.g. French !English, for academic pur-\nposes. In terms of practicality, they tend to un-\nderperform in low-resource language pairs, e.g.\nAzerbaijani!English (Neubig and Hu, 2018) or\nNepali!English (Guzmán et al., 2019). To the\nbest of our knowledge, this work is the first to\nsystematically evaluate and analyze unsupervised\nlearning for NMT in various data settings.3 Unsupervised NMT\nThis section reviews the core concepts of the re-\ncent unsupervised NMT framework and describes\nto which points they are potentially vulnerable.\n3.1 Bidirectional Modeling\nMost of the unsupervised NMT methods share\nthe model parameters between source !target and\ntarget!source directions. 
They also often share a\njoint subword vocabulary across the two languages\n(Sennrich et al., 2016b).\nSharing a model among different translation\ntasks has been shown to be effective in multilin-\ngual NMT (Firat et al., 2016; Johnson et al., 2017;\nAharoni et al., 2019), especially in improving per-\nformance on low-resource language pairs. This\nis due to the commonality of natural languages;\nlearning to represent a language is helpful to rep-\nresent other languages, e.g. by transferring knowl-\nedge of general sentence structures. It also pro-\nvides good regularization for the model.\nUnsupervised learning is an extreme scenario\nof MT, where bilingual information is very weak.\nTo supplement the weak and noisy training signal,\nknowledge transfer and regularization are crucial,\nwhich can be achieved by the bidirectional sharing.\nIt is based on the fact that a translation problem is\ndual in nature; source !target and target!source\ntasks are conceptually related to each other.\nPrevious works on unsupervised NMT vary in\nthe degree of sharing: the whole encoder (Artetxe\net al., 2018b; Sen et al., 2019), the middle layers\n(Yang et al., 2018; Sun et al., 2019), or the whole\nmodel (Lample et al., 2018a; Lample et al., 2018b;\nRen et al., 2019a; Conneau and Lample, 2019).\nNote that the network sharing is less effective\namong linguistically distinct languages in NMT\n(Kocmi and Bojar, 2018; Kim et al., 2019a). It still\nworks as a regularizer, but transferring knowledge\nis harder if the morphology or word order is quite\ndifferent. We show how well unsupervised NMT\nperforms on such language pairs in Section 4.1.\n3.2 Iterative Back-Translation\nUnsupervised learning for MT assumes no bilin-\ngual data for training. A traditional remedy for the\ndata scarcity is generating synthetic bilingual data\nfrom monolingual text (Koehn, 2005; Schwenk,\n2008; Sennrich et al., 2016a). To train a bidirec-\ntional model of Section 3.1, we need bilingual data\nof both translation directions. Therefore, most un-\nsupervised NMT methods back-translate in both\ndirections, i.e. source and target monolingual data\nto target and source language, respectively.\nIn unsupervised learning, the synthetic data\nshould be created not only once at the beginning\nbut also repeatedly throughout the training. At the\nearly stages of training, the model might be too\nweak to generate good translations. Hence, most\nmethods update the training data as the model gets\nimproved during training. The improved model\nfor source!target direction back-translates source\nmonolingual data, which improves the model for\ntarget!source direction, and vice versa. This cy-\ncle is called dual learning (He et al., 2016) or itera-\ntive back-translation (Hoang et al., 2018). Figure 1\nshows the case when it is applied to a fully shared\nbidirectional model.\nencoder decoder source/target \njoint vocabulary \nsource/target \njoint vocabulary \nsource \nsentence target \ntranslation target \ntranslation source \nsentence \n1) 2)\n(a)\nencoder decoder source/target \njoint vocabulary \nsource/target \njoint vocabulary \ntarget \nsentence source \ntranslation source \ntranslation target \nsentence 1) 2)\n(b)\nFigure 1: Iterative back-translation for training a bidirec-\ntional sequence-to-sequence model. The model first translates\nmonolingual sentences (solid arrows), and then gets trained\nwith the translation as the input and the original as the out-\nput (dashed arrows). 
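A schematic sketch of this bidirectional back-translation loop, made iterative as described next, is shown below. It is not XLM's actual training loop: `model.translate`, `model.train_step` and the batch iterators are hypothetical stand-ins for whatever NMT toolkit is used.

```python
# Schematic sketch of iterative back-translation with a shared bidirectional
# model (cf. Figure 1). All model methods here are hypothetical placeholders.
def iterative_backtranslation(model, src_batches, tgt_batches, steps):
    for _ in range(steps):
        src_batch = next(src_batches)   # monolingual source sentences
        tgt_batch = next(tgt_batches)   # monolingual target sentences

        # (a) translate source -> target, then train on target -> source
        #     with the synthetic translation as input and the original as output
        synth_tgt = model.translate(src_batch, direction="src2tgt")
        model.train_step(inputs=synth_tgt, outputs=src_batch, direction="tgt2src")

        # (b) the mirror image for the other direction
        synth_src = model.translate(tgt_batch, direction="tgt2src")
        model.train_step(inputs=synth_src, outputs=tgt_batch, direction="src2tgt")
    return model
```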
This procedure alternates between (a)\nsource!target and (b)target!source translations.\nOne can tune the amount of back-translations\nper iteration: a mini-batch (Artetxe et al., 2018b;\nYang et al., 2018; Conneau and Lample, 2019; Ren\net al., 2019a), the whole monolingual data (Lam-\nple et al., 2018a; Lample et al., 2018b; Sun et\nal., 2019), or some size in between (Artetxe et al.,\n2019; Ren et al., 2019b).\nHowever, even if carefully scheduled, the itera-\ntive training cannot recover from a bad optimum if\nthe initial model is too poor. Experiments in Sec-\ntion 4.5 highlight such cases.3.3 Initialization\nTo kickstart the iterative training, the model should\nbe able to generate meaningful translations already\nin the first iteration. We cannot expect the training\nto progress from a randomly initialized network\nand the synthetic data generated by it.\nCross-lingual embeddings give a good starting\npoint for the model by defining a joint continu-\nous space shared by multiple languages. Ideally, in\nsuch a space, close embedding vectors are seman-\ntically related to each other regardless of their lan-\nguages; they can be possible candidates for transla-\ntion pairs (Mikolov et al., 2013). It can be learned\neither in word level (Artetxe et al., 2017; Conneau\net al., 2018) or in sentence level (Conneau and\nLample, 2019) using only monolingual corpora.\nIn the word level, we can initialize the em-\nbedding layers with cross-lingual word embed-\nding vectors (Artetxe et al., 2018b; Lample et al.,\n2018a; Yang et al., 2018; Lample et al., 2018b;\nArtetxe et al., 2019; Sun et al., 2019). On the other\nhand, the whole encoder/decoder parameters can\nbe initialized with cross-lingual sequence training\n(Conneau and Lample, 2019; Ren et al., 2019a;\nSong et al., 2019).\nCross-lingual word embedding has limited per-\nformance among distant languages (Søgaard et al.,\n2018; Nakashole and Flauger, 2018) and so does\ncross-lingual LM (Pires et al., 2019). Section 4.5\nshows the impact of a poor initialization.\n3.4 Denoising Autoencoder\nInitializing the word embedding layers furnishes\nthe model with cross-lingual matching in the lex-\nical embedding space, but does not provide any\ninformation on word orders or generation of text.\nCross-lingual LMs encode word sequences in dif-\nferent languages, but they are not explicitly trained\nto reorder source words to the target language syn-\ntax. Both ways do not initialize the crucial param-\neters for reordering: the encoder-decoder attention\nand the recurrence on decoder states.\nAs a result, an initial model for unsupervised\nNMT tends to generate word-by-word translations\nwith little reordering, which are very non-fluent\nwhen source and target languages have distinct\nword orders. 
Training on such data discourages the\nmodel from reordering words, which might cause\na vicious cycle by generating even less-reordered\nsynthetic sentence pairs in the next iterations.\nAccordingly, unsupervised NMT employs an\nde-en ru-en zh-en kk-en gu-en\nGerman English Russian English Chinese English Kazakh English Gujarati English\nLanguage family Germanic Germanic Slavic Germanic Sinitic Germanic Turkic Germanic Indic Germanic\nAlphabet Size 60 52 66 52 8,105 52 42 52 91 52\nMonolingualSentences 100M 71.6M 30.8M 18.5M 4.1M\nWords 1.8B 2.3B 1.1B 2.0B 1.4B 699M 278.5M 421.5M 121.5M 93.8M\nBilingualSentences 5.9M 25.4M 18.9M 222k 156k\nWords 137.4M 144.9M 618.6M 790M 440.3M 482.9M 1.6M 1.9M 2.3M 1.5M\nTable 1: Training data statistics.\nadditional training objective of denoising autoen-\ncoding (Hill et al., 2016). Given a clean sentence,\nartificial noises are injected, e.g. deletion or per-\nmutation of words, to make a corrupted input. The\ndenoising objective trains the model to reorder the\nnoisy input to the correct syntax, which is essen-\ntial for generating fluent outputs. This is done for\neach language individually with monolingual data,\nas shown in Figure 2.\nencoder decoder source/target \njoint vocabulary \nsource/target \njoint vocabulary \nnoisy \nsource noisy \ntarget source \nsentence target \nsentence \nFigure 2: Denoising autoencoder training for source or target\nlanguage.\nOnce the model is sufficiently trained for de-\nnoising, it is helpful to remove the objective or re-\nduce its weight (Graça et al., 2018). At the later\nstages of training, the model gets improved in re-\nordering and translates better; learning to denoise\nmight hurt the performance in clean test sets.\n4 Experiments and Analysis\nData Our experiments were conducted on\nWMT 2018 German $English and Russian $En-\nglish, WMT 2019 Chinese $English, Kazakh$\nEnglish, and Gujarati $English (Table 1). We pre-processed the data using the M OSES1tokenizer\nand a frequent caser. For Chinese, we used the\nJIEBA segmenter2. Lastly, byte pair encoding\n(BPE) (Sennrich et al., 2016b) was learned jointly\nover source and target languages with 32k merges\nand applied without vocabulary threshold.\nModel We used 6-layer Transformer base ar-\nchitecture (Vaswani et al., 2017) by default:\n512-dimension embedding/hidden layers, 2048-\ndimension feedforward sublayers, and 8 heads.\nDecoding and Evaluation Decoding was done\nwith beam size 5. We evaluated the test perfor-\nmance with S ACRE BLEU (Post, 2018).\nUnsupervised Learning We ran X LM3by\nConneau and Lample (2019) for the unsupervised\nexperiments. The back-translations were done\nwith beam search for each mini-batch of 16k to-\nkens. The weight of the denoising objective started\nwith 1 and linearly decreased to 0.1 until 100k up-\ndates, and then decreased to 0 until 300k updates.\nThe model’s encoder and decoder were both\ninitialized with the same pre-trained cross-lingual\nLM. We removed the language embeddings from\nthe encoder for better cross-linguality (see Section\n4.6). Unless otherwise specified, we used the same\nmonolingual training data for both pre-training and\ntranslation training. For the pre-training, we set the\nbatch size to 256 sentences (around 66k tokens).\nTraining was done with Adam (Kingma and Ba,\n2014) with an initial learning rate of 0.0001, where\ndropout (Srivastava et al., 2014) of probability 0.1\nwas applied to each layer output and attention\ncomponents. 
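Two ingredients from the description above can be sketched compactly: simple word-level noise for the denoising objective (deletion plus local permutation) and the linearly decayed weight of that objective quoted in the setup (1 to 0.1 over the first 100k updates, then to 0 by 300k updates). The noise parameters below are illustrative, not the exact values used by XLM.

```python
# Sketch: word-level noise for the denoising autoencoder and the piecewise
# linear schedule for the denoising weight described above.
import random

def add_noise(words, drop_prob=0.1, shuffle_window=3, rng=random):
    """Randomly drop words, then permute the rest only within a small window."""
    kept = [w for w in words if rng.random() > drop_prob] or words[:1]
    keys = [i + rng.uniform(0, shuffle_window) for i in range(len(kept))]
    return [w for _, w in sorted(zip(keys, kept))]

def denoising_weight(update):
    if update <= 100_000:
        return 1.0 - 0.9 * update / 100_000          # 1.0 -> 0.1
    if update <= 300_000:
        return 0.1 * (300_000 - update) / 200_000    # 0.1 -> 0.0
    return 0.0

print(add_noise("ein kleines Beispiel für künstliches Rauschen".split()))
print(denoising_weight(50_000), denoising_weight(200_000), denoising_weight(400_000))
```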
With a checkpoint frequency of 200k sentences, we stopped the training when the validation perplexity (pre-training) or BLEU (translation training) was not improved for ten checkpoints. We extensively tuned the hyperparameters for a single GPU with 12GB memory, which is widely applicable to moderate industrial/academic environments. All other hyperparameter values follow the recommended settings of XLM.

Supervised Learning. Supervised experiments used the same hyperparameters as the unsupervised learning, except 12k tokens for the batch size, 0.0002 for the initial learning rate, and 10k batches for each checkpoint.
If the bilingual training data contains less than 500k sentence pairs, we reduced the BPE merges to 8k, the batch size to 2k, and the checkpoint frequency to 4k batches; we also increased the dropout rate to 0.3 (Sennrich and Zhang, 2019).

Semi-supervised Learning. Semi-supervised experiments continued the training from the supervised baseline with back-translations added to the training data. We used 4M back-translated sentences for the low-resource cases, i.e. if the original bilingual data has less than 500k lines, and 10M back-translated sentences otherwise.

4.1 Unsupervised vs. (Semi-)Supervised
We first address the most general question of this paper: for NMT, can unsupervised learning replace semi-supervised or supervised learning? Table 2 compares the unsupervised performance to simple supervised and semi-supervised baselines.

                   BLEU [%]
Approach           de-en  en-de  ru-en  en-ru  zh-en  en-zh  kk-en  en-kk  gu-en  en-gu
Supervised         39.5   39.1   29.1   24.7   26.2   39.6   10.3   2.4    9.9    3.5
Semi-supervised    43.6   41.0   30.8   28.8   25.9   42.7   12.5   3.1    14.2   4.0
Unsupervised       23.8   20.2   12.0   9.4    1.5    2.5    2.0    0.8    0.6    0.6
Table 2: Comparison among supervised, semi-supervised, and unsupervised learning. All bilingual data was used for the (semi-)supervised results and all monolingual data was used for the unsupervised results (see Table 1). All results are computed on newstest2019 of each task, except for de-en/en-de and ru-en/en-ru on newstest2018.

In all tasks, unsupervised learning shows much worse performance than (semi-)supervised learning. It produces readable translations in two high-resource language pairs (German↔English and Russian↔English), but their scores are only around half of the semi-supervised systems. In the other three language pairs, unsupervised NMT fails to converge at any meaningful optimum, reaching less than 3% BLEU. Note that, in these three tasks, source and target languages are very different in the alphabet, morphology, word order, etc.

[Figure 3: Supervised and semi-supervised learning over bilingual training data size. Unsupervised learning (horizontal line) uses all monolingual data of Table 1. Panels: (a) German→English, (b) Russian→English; x-axis: bilingual training sentence pairs (10^4 to 10^7); y-axis: BLEU [%].]
The results in Kazakh↔English and Gujarati↔English show that the current unsupervised NMT cannot be an alternative to (semi-)supervised NMT in low-resource conditions.
To discover the precise condition where unsupervised learning is useful in practice, we vary the size of the given bilingual training data for (semi-)supervised learning and plot the results in Figure 3. Once we have 50k bilingual sentence pairs in German↔English, simple semi-supervised learning already outperforms unsupervised learning with 100M monolingual sentences in each language. Even without back-translations (supervised), 100k-sentence bilingual data is sufficient to surpass unsupervised NMT.
In the Russian↔English task, the unsupervised learning performance can be achieved even more easily, with only 20k bilingual sentence pairs using semi-supervised learning. This might be because Russian and English are more distant from each other than German and English, so the bilingual training signal is more crucial for Russian↔English.
Note that for these two language pairs, the bilingual data for supervised learning are from many different text domains, whereas the monolingual data are from exactly the same domain as the test sets. Even with such an advantage, the large-scale unsupervised NMT cannot compete with supervised NMT with tiny out-of-domain bilingual data.

4.2 Monolingual Data Size
In this section, we analyze how much monolingual data is necessary to make unsupervised NMT produce reasonable performance. Figure 4 shows the unsupervised results with different amounts of monolingual training data. We keep the same size for source and target data, and the domain is also the same for both (web-crawled news).

[Figure 4: Unsupervised NMT performance over the size of monolingual training data, where source and target sides have the same size. Curves: de-en, ru-en; x-axis: monolingual training sentences (10^4 to 10^8); y-axis: BLEU [%], 0 to 25.]

For German→English, training with only 1M sentences already gives a reasonable performance, which is only around 2% BLEU behind the 100M-sentence case. The performance starts to saturate already after 5M sentences, with only marginal improvements from using more than 20M sentences. We observe a similar trend in Russian→English.
This shows that, for the performance of unsupervised NMT, using a massive amount of monolingual data is not as important as the similarity of source and target languages. Compared to supervised learning (see Figure 3), the performance saturates faster when increasing the training data, given the same model size.

4.3 Unbalanced Data Size
What if the size of available monolingual data is largely different for source and target languages? This is often the case for low-resource language pairs involving English, where there is plenty of data for English but not for the other side.
Our experiments so far intentionally use the same number of sentences for both sides. In Figure 5, we reduced the source data gradually while keeping the large target data fixed. To counteract the data imbalance, we oversampled the smaller side to make the ratio of source-target 1:1 for BPE learning and mini-batch construction (Conneau and Lample, 2019).
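One simple way to realize this 1:1 oversampling is sketched below; the exact scheme used in the experiments is not specified in the paper, and the toy corpora are placeholders.

```python
# Sketch: repeat (and randomly top up) the smaller monolingual corpus so that
# both sides contribute equally to BPE learning and mini-batch construction.
import random

def oversample(smaller, larger, rng=random):
    factor = len(larger) // len(smaller)
    remainder = len(larger) - factor * len(smaller)
    return smaller * factor + rng.sample(smaller, remainder)

src = ["a", "b", "c"]                 # 3 "sentences" on the low-resource side
tgt = [f"t{i}" for i in range(8)]     # 8 "sentences" on the high-resource side
balanced_src = oversample(src, tgt)
print(len(balanced_src), len(tgt))    # 8 8
```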
We compare such un-\nbalanced data settings to the previous equal-sized\nsource/target settings.\n104105106107\nmonolingual training sentences0510152025BLEU [%]de-en (equal)\nde-en (unbalanced)\nru-en (equal)\nru-en (unbalanced)\nFigure 5: Unsupervised NMT performance over source train-\ning data size, where the target training data is fixed to 20M\nsentences (dashed line). Solid line is the case where the target\ndata has the same number of sentences as the source side.\nInterestingly, when we decrease the target data\naccordingly (balanced, solid line), the performance\nis similar or sometimes better than using the full\ntarget data (unbalanced, dashed line). This means\nthat it is not beneficial to use oversized data on one\nside in unsupervised NMT training.\nIf the data is severely unbalanced, the distribu-\ntion of the smaller side should be much sparser\nthan that of the larger side. The network tries to\ngeneralize more on the smaller data, reserving the\nmodel capacity for smoothing (Olson et al., 2018).\nThus it learns to represent a very different distribu-\ntion of each side, which is challenging in a shared\nmodel (Section 3.1). This could be the reason for\nno merit in using larger data on one side.\n4.4 Domain Similarity\nIn high-resource language pairs, it is feasible to\ncollect monolingual data of the same domain on\nboth source and target languages. However, for\nlow-resource language pairs, it is difficult to match\nthe data domain of both sides on a large scale.\nFor example, our monolingual data for Kazakh is\nmostly from Wikipedia and Common Crawl, while\nthe English data is solely from News Crawl. In\nthis section, we study how the domain similarity\nof monolingual data on the two sides affects the\nperformance of unsupervised NMT.\nIn Table 3, we artificially change the domain of\nthe source side to politics (UN Corpus4) or random\n(Common Crawl), while keeping the target domain\nfixed to newswire (News Crawl). The results show\nthat the domain matching is critical for unsuper-\nvised NMT. For instance, although German and\nEnglish are very similar languages, we see the per-\nformance of German $English deteriorate down to\n-11.8% B LEU by the domain mismatch.\nDomain Domain B LEU [%]\n(en) ( de/ru) de-en en-de ru-en en-ru\nNewswireNewswire 23.3 19.9 11.9 9.3\nPolitics 11.5 12.2 2.3 2.5\nRandom 18.4 16.4 6.9 6.1\nTable 3: Unsupervised NMT performance where source and\ntarget training data are from different domains. The data size\non both sides is the same (20M sentences).\nTable 4 shows a more delicate case where we\nkeep the same domain for both sides (newswire)\nbut change the providers and years of the news\narticles. Our monolingual data for Chinese (Ta-\nble 1) consist mainly of News Crawl (from years\n2008-2018) and Gigaword 4th edition (from years\n1995-2008). We split out the News Crawl part\n(1.7M sentences) and trained an unsupervised\nNMT model with the same amount of English\nmonolingual data (from News Crawl 2014-2017).\nSurprisingly, this experiment yields much better\nresults than using all available data. 
Even if the\nsize is small, the source and target data are col-\nlected in the same way (web-crawling) from sim-\nilar years (2010s), which seems to be crucial for\nunsupervised NMT to work.\nOn the other hand, when using the Gigaword\npart (28.6M sentences) on Chinese, unsupervised\n4https://conferences.unite.un.org/uncorpusYears Years #sents B LEU [%]\n(en) ( zh) ( en/zh)zh-en en-zh\n2014-20172008-2018 1.7M 5.4 15.1\n1995-2008 28.6M 1.5 1.9\nTable 4: Unsupervised NMT performance where source and\ntarget training data are from the same domain (newswire) but\ndifferent years.\nlearning again does not function properly. Now the\nsource and target text are from different decades;\nthe distribution of topics might be different. Also,\nthe Gigaword corpus is from traditional newspaper\nagencies which can have a different tone from the\nonline text of News Crawl. Despite the large scale,\nunsupervised NMT proves to be sensitive to a sub-\ntle discrepancy of topic, style, period, etc. between\nsource and target data.\nThese results agree with Søgaard et al. (2018)\nwho show that modern cross-lingual word embed-\nding methods fail in domain mismatch scenarios.\n4.5 Initialization vs. Translation Training\nThus far, we have seen a number of cases where\nunsupervised NMT breaks down. But which part\nof the learning algorithm is more responsible for\nthe performance: initialization (Section 3.3) or\ntranslation training (Section 3.2 and 3.4)?\nIn Figure 6, we control the level of each of\nthe two training stages and analyze its impact on\nthe final performance. We pre-trained two cross-\nlingual LMs as initializations of different quality:\nbad (using 10k sentences) and good (using 20M\nsentences). For each initial point, we continued the\ntranslation training with different amounts of data\nfrom 10k to 20M sentences.\n104105106107\nmonolingual training sentences0510152025BLEU [%]de-en (init 20M)\nde-en (init 10k)\nFigure 6: Unsupervised NMT performance over the training\ndata size for translation training, where the pre-training data\nfor initialization is fixed (10k or 20M sentences).\nFrom the bad initialization, unsupervised learn-\ning cannot build a reasonable NMT model, no mat-\nTask B LEU [%] Source input System output Reference output\nde-en23.8Seit der ersten Besichtigung wurde die\n1.000 Quadratfuß große ...Since the first Besichtigung, the 3,000\nsquare fueled ...Since the first viewing, the 1,000sq\nft flat has ...\n10.4München 1856: Vier Karten, die Ihren\nBlick auf die Stadt verändernAustrailia 1856: Eight things that can\nkeep your way to the UKMunich 1856: Four maps that will\nchange your view of the city\nru-en 12.0В ходе первоочередных оператив-\nно-следственных мероприятий ус-\nтановлена личность роженицыTheпервоочередных оператив-\nно-следственных мероприятий\nhave been established by the dolphinThe identity of the mother was de-\ntermined during preliminary inves-\ntigative and operational measures\nzh-en 1.5...调整要兼顾生产需要和消费需求。 ...调整要兼顾生产需要and消费需\n求.... adjustment must balance produc-\ntion needs with consumer demands.\nTable 5: Problematic translation outputs from unsupervised NMT systems ( input copying, ambiguity in the same context ).\nter how much data is used in translation training.\nWhen the initial model is strong, it is possible to\nreach 20% B LEU by translation training with only\n100k sentences. Using 1M sentences in transla-\ntion training, the performance is already compa-\nrable to its best. 
Once the model is pre-trained\nwell for cross-lingual representations, fine-tuning\nthe translation-specific components seems man-\nageable with relatively small data.\nThis demonstrates the importance of initializa-\ntion over translation training in the current unsu-\npervised NMT. Translation training relies solely\non model-generated inputs, i.e. back-translations,\nwhich do not reflect the true distribution of the in-\nput language when generated with a poor initial\nmodel. On Figure 7, we plot all German !English\nunsupervised results we conducted up to the pre-\nvious section. It shows that the final performance\ngenerally correlates with the initialization quality.\n242526272829210\ninitial LM perplexity0510152025BLEU [%]\nFigure 7: Unsupervised NMT performance over the valida-\ntion perplexity of the initial cross-lingual LM ( de-en ).\n4.6 Qualitative Examples\nIn this section, we analyze translation outputs of\nunsupervised systems to find out why they record\nsuch low B LEU scores. Do unsupervised systems\nhave particular problems in the outputs other than\nlimited adequacy/fluency?Table 5 shows translation examples from the un-\nsupervised systems. The first notable problem is\ncopying input words to the output. This happens\nwhen the encoder has poor cross-linguality, i.e.\ndoes not concurrently model two languages well\nin a shared space. The decoder then can easily de-\ntect the input language by reading the encoder and\nmay emit output words in the same language.\nA good cross-lingual encoder should not give\naway information on the input language to the de-\ncoder. The decoder must instead rely on the ouptut\nlanguage embeddings or an indicator token (e.g.\n<2en> ) to determine the language of output to-\nkens. As a simple remedy, we removed the lan-\nguage embeddings from the encoder and obtained\nconsistent improvements, e.g. from 4.3% to 11.9%\nBLEU in Russian!English. However, the problem\nstill remains partly even in our best-performing un-\nsupervised system (the first example).\nThe copying occurs more often in inferior sys-\ntems (the last example), where the poor initial\ncross-lingual LM is the main reason for the worse\nperformance (Section 4.5). Note that the auto-\nencoding (Section 3.4) also encourages the model\nto generate outputs in the input language.\nAnother problem is that the model cannot distin-\nguish words that appear in the same context. In the\nsecond example, the model knows that Vier in Ger-\nman ( Four in English) is a number, but it generates\na wrong number in English ( Eight ). The initial LM\nis trained to predict either Four orEight given the\nsame surrounding words (e.g. 1856, things) and\nhas no clue to map Four toVier.\nThe model cannot learn these mappings by itself\nwith back-translations. This problem can be partly\nsolved by subword modeling (Bojanowski et al.,\n2017) or orthographic features (Riley and Gildea,\n2018; Artetxe et al., 2019), which are however not\neffective for language pairs with disjoint alphabets.\n5 Conclusion and Outlook\nIn this paper, we examine the state-of-the-art un-\nsupervised NMT in a wide range of tasks and data\nsettings. We find that the performance of unsuper-\nvised NMT is seriously affected by these factors:\n\u000fLinguistic similarity of source and target lan-\nguages\n\u000fDomain similarity of training data between\nsource and target languages\nIt is very hard to fulfill these in low-/zero-resource\nlanguage pairs, which makes the current unsuper-\nvised NMT useless in practice. 
We also find that\nthe performance is not improved by using massive\nmonolingual data on one or both sides.\nIn practice, a simple, non-tuned semi-supervised\nbaseline with only less than 50k bilingual sen-\ntence pairs is sufficient to outperform our best\nlarge-scale unsupervised system. At this moment,\nwe cannot recommend unsupervised learning for\nbuilding MT products if there are at least small\nbilingual data.\nFor the cases where there is no bilingual data\navailable at all, we plan to systematically com-\npare the unsupervised NMT to pivot-based meth-\nods (Kim et al., 2019b; Currey and Heafield, 2019)\nor multilingual zero-shot translation (Johnson et\nal., 2017; Aharoni et al., 2019).\nTo make unsupervised NMT useful in the future,\nwe suggest the following research directions:\nLanguage-/Domain-agnostic LM We show in\nSection 4.5 that the initial cross-lingual LM actu-\nally determines the performance of unsupervised\nNMT. In Section 4.6, we argue that the poor perfor-\nmance is due to input copying, for which we blame\na poor cross-lingual LM. The LM pre-training\nmust therefore handle dissimilar languages and do-\nmains equally well. This might be done by careful\ndata selection or better regularization methods.\nRobust Translation Training On the other\nhand, the current unsupervised NMT lacks a mech-\nanism to bootstrap out of a poor initialization. In-\nspired by classical decipherment methods (Section\n2), we might devalue noisy training examples or\nartificially simplify the problem first.\nReferences\nAharoni, Roee, Melvin Johnson, and Orhan Firat.\n2019. Massively multilingual neural machine trans-\nlation. In NAACL-HLT , pages 3874–3884.Al-Onaizan, Yaser, Ulrich Germann, Ulf Hermjakob,\nKevin Knight, Philipp Koehn, Daniel Marcu, and\nKenji Yamada. 2002. Translation with scarce bilin-\ngual resources. Machine Translation , 17(1):1–17.\nArtetxe, Mikel, Gorka Labaka, and Eneko Agirre.\n2017. Learning bilingual word embeddings with (al-\nmost) no bilingual data. In ACL, pages 451–462.\nArtetxe, Mikel, Gorka Labaka, and Eneko Agirre.\n2018a. Unsupervised statistical machine translation.\nInEMNLP , page 3632–3642.\nArtetxe, Mikel, Gorka Labaka, Eneko Agirre, and\nKyunghyun Cho. 2018b. Unsupervised neural ma-\nchine translation. In ICLR .\nArtetxe, Mikel, Gorka Labaka, and Eneko Agirre.\n2019. An effective approach to unsupervised ma-\nchine translation. In ACL, pages 194–203.\nBahdanau, Dzmitry, Kyunghyun Cho, and Yoshua Ben-\ngio. 2015. Neural machine translation by jointly\nlearning to align and translate. In ICLR .\nBojanowski, Piotr, Edouard Grave, Armand Joulin, and\nTomas Mikolov. 2017. Enriching word vectors with\nsubword information. TACL , 5:135–146.\nChu, Chenhui and Rui Wang. 2018. A survey of do-\nmain adaptation for neural machine translation. In\nCOLING , pages 1304–1319.\nConneau, Alexis and Guillaume Lample. 2019. Cross-\nlingual language model pretraining. In NeurIPS ,\npages 7057–7067.\nConneau, Alexis, Guillaume Lample, Marc’Aurelio\nRanzato, Ludovic Denoyer, and Hervé Jégou. 2018.\nWord translation without parallel data. In ICLR .\nCurrey, Anna and Kenneth Heafield. 2019. Zero-\nresource neural machine translation with monolin-\ngual pivot data. In WNGT , pages 99–107.\nDou, Qing, Ashish Vaswani, and Kevin Knight. 2014.\nBeyond parallel data: Joint word alignment and deci-\npherment improves machine translation. In EMNLP ,\npages 557–565.\nFirat, Orhan, Kyunghyun Cho, and Yoshua Bengio.\n2016. 
Multi-way, multilingual neural machine\ntranslation with a shared attention mechanism. In\nNAACL-HLT , pages 866–875.\nGraça, Miguel, Yunsu Kim, Julian Schamper, Jiahui\nGeng, and Hermann Ney. 2018. The RWTH aachen\nuniversity English-German and German-English un-\nsupervised neural machine translation systems for\nWMT 2018. In WMT .\nGuzmán, Francisco, Peng-Jen Chen, Myle Ott, Juan\nPino, Guillaume Lample, Philipp Koehn, Vishrav\nChaudhary, and Marc’Aurelio Ranzato. 2019.\nThe FLORES evaluation datasets for low-resource\nmachine translation: Nepali–English and Sinhala–\nEnglish. In EMNLP-IJCNLP , pages 6097–6110.\nHe, Di, Yingce Xia, Tao Qin, Liwei Wang, Nenghai Yu,\nTie-Yan Liu, and Wei-Ying Ma. 2016. Dual learning\nfor machine translation. In NIPS , pages 820–828.\nHill, Felix, Kyunghyun Cho, and Anna Korhonen.\n2016. Learning distributed representations of sen-\ntences from unlabelled data. In NAACL-HLT , pages\n1367–1377.\nHoang, Vu Cong Duy, Philipp Koehn, Gholamreza\nHaffari, and Trevor Cohn. 2018. Iterative back-\ntranslation for neural machine translation. In WNGT ,\npages 18–24.\nJohnson, Melvin, Mike Schuster, Quoc V Le, Maxim\nKrikun, Yonghui Wu, Zhifeng Chen, Nikhil Thorat,\nFernanda Viégas, Martin Wattenberg, Greg Corrado,\net al. 2017. Google’s multilingual neural machine\ntranslation system: Enabling zero-shot translation.\nTACL , 5(1):339–351.\nKim, Yunsu, Jiahui Geng, and Hermann Ney. 2018.\nImproving unsupervised word-by-word translation\nwith language model and denoising autoencoder. In\nEMNLP , pages 862–868.\nKim, Yunsu, Yingbo Gao, and Hermann Ney. 2019a.\nEffective cross-lingual transfer of neural machine\ntranslation models without shared vocabularies. In\nACL, pages 1246–1257.\nKim, Yunsu, Petre Petrov, Pavel Petrushkov, Shahram\nKhadivi, and Hermann Ney. 2019b. Pivot-based\ntransfer learning for neural machine translation be-\ntween non-English languages. In EMNLP-IJCNLP ,\npages 866–876.\nKingma, Diederik P and Jimmy Ba. 2014. Adam: A\nmethod for stochastic optimization.\nKnight, Kevin, Anish Nair, Nishit Rathod, and Kenji\nYamada. 2006. Unsupervised analysis for decipher-\nment problems. In COLING/ACL , pages 499–506.\nKocmi, Tom and Ond ˇrej Bojar. 2018. Trivial transfer\nlearning for low-resource neural machine translation.\nInWMT , pages 244–252.\nKoehn, Philipp and Rebecca Knowles. 2017. Six chal-\nlenges for neural machine translation. In WNMT ,\npages 28–39.\nKoehn, Philipp. 2005. Europarl: A parallel corpus for\nstatistical machine translation. In MT Summit , pages\n79–86.\nLample, Guillaume, Ludovic Denoyer, and\nMarc’Aurelio Ranzato. 2018a. Unsupervised\nmachine translation using monolingual corpora only.\nInICLR .\nLample, Guillaume, Myle Ott, Alexis Conneau, Lu-\ndovic Denoyer, and Marc’Aurelio Ranzato. 2018b.\nPhrase-based & neural unsupervised machine trans-\nlation. In EMNLP , pages 5039–5049.\nMikolov, Tomas, Kai Chen, Greg Corrado, and Jeffrey\nDean. 2013. Efficient estimation of word represen-\ntations in vector space.\nNaim, Iftekhar, Parker Riley, and Daniel Gildea. 2018.\nFeature-based decipherment for machine translation.\nComputational Linguistics , 44(3):525–546.\nNakashole, Ndapandula and Raphael Flauger. 2018.\nCharacterizing departures from linearity in word\ntranslation. In ACL, pages 221–227.\nNeubig, Graham and Junjie Hu. 2018. Rapid adapta-\ntion of neural machine translation to new languages.\nInEMNLP , pages 875–880.\nNuhn, Malte. 2019. Unsupervised Training with Appli-\ncations in Natural Language Processing . Ph.D. 
the-\nsis, Computer Science Department, RWTH Aachen\nUniversity.\nOlson, Matthew, Abraham Wyner, and Richard Berk.\n2018. Modern neural networks generalize on small\ndata sets. In NIPS , pages 3619–3628.\nPires, Telmo, Eva Schlinger, and Dan Garrette. 2019.\nHow multilingual is multilingual bert? In ACL,\npages 4996–5001.Post, Matt. 2018. A call for clarity in reporting bleu\nscores. In WMT , pages 186–191.\nPourdamghani, Nima, Nada Aldarrab, Marjan\nGhazvininejad, Kevin Knight, and Jonathan May.\n2019. Translating translationese: A two-step\napproach to unsupervised machine translation. In\nACL, pages 3057–3062.\nRavi, Sujith and Kevin Knight. 2011. Deciphering for-\neign language. In ACL, pages 12–21.\nRen, Shuo, Yu Wu, Shujie Liu, Ming Zhou, and Shuai\nMa. 2019a. Explicit cross-lingual pre-training\nfor unsupervised machine translation. In EMNLP-\nIJCNLP , pages 770–779.\nRen, Shuo, Zhirui Zhang, Shujie Liu, Ming Zhou, and\nShuai Ma. 2019b. Unsupervised neural machine\ntranslation with smt as posterior regularization.\nResnik, Philip and Noah A. Smith. 2003. The web\nas a parallel corpus. Computational Linguistics ,\n29(3):349–380.\nRiley, Parker and Daniel Gildea. 2018. Orthographic\nfeatures for bilingual lexicon induction. In ACL,\npages 390–394.\nSchwenk, Holger. 2008. Investigations on large-\nscale lightly-supervised training for statistical ma-\nchine translation. In IWSLT .\nSen, Sukanta, Kamal Kumar Gupta, Asif Ekbal, and\nPushpak Bhattacharyya. 2019. Multilingual unsu-\npervised NMT using shared encoder and language-\nspecific decoders. In ACL, pages 3083–3089.\nSennrich, Rico and Biao Zhang. 2019. Revisiting low-\nresource neural machine translation: A case study.\nInACL, pages 211–221.\nSennrich, Rico, Barry Haddow, and Alexandra Birch.\n2016a. Improving neural machine translation mod-\nels with monolingual data. In ACL, pages 86–96.\nSennrich, Rico, Barry Haddow, and Alexandra Birch.\n2016b. Neural machine translation of rare words\nwith subword units. In ACL, pages 1715–1725.\nSøgaard, Anders, Sebastian Ruder, and Ivan Vuli ´c.\n2018. On the limitations of unsupervised bilingual\ndictionary induction. In ACL, pages 778–788.\nSong, Kaitao, Xu Tan, Tao Qin, Jianfeng Lu, and\nTie-Yan Liu. 2019. Mass: Masked sequence to\nsequence pre-training for language generation. In\nICML , pages 5926–5936.\nSrivastava, Nitish, Geoffrey Hinton, Alex Krizhevsky,\nIlya Sutskever, and Ruslan Salakhutdinov. 2014.\nDropout: a simple way to prevent neural networks\nfrom overfitting. The journal of machine learning\nresearch , 15(1):1929–1958.\nSun, Haipeng, Rui Wang, Kehai Chen, Masao Utiyama,\nEiichiro Sumita, and Tiejun Zhao. 2019. Unsuper-\nvised bilingual word embedding agreement for unsu-\npervised neural machine translation. In ACL, pages\n1235–1245.\nVaswani, Ashish, Noam Shazeer, Niki Parmar, Jakob\nUszkoreit, Llion Jones, Aidan N Gomez, Łukasz\nKaiser, and Illia Polosukhin. 2017. Attention is all\nyou need. In NIPS , pages 5998–6008.\nYang, Zhen, Wei Chen, Feng Wang, and Bo Xu.\n2018. Unsupervised neural machine translation with\nweight sharing. In ACL, pages 46–55.", "main_paper_content": null }
{ "decision": "Unknown", "reviews": [] }
0
0
[]
[]
{ "id": "OorGgUn6iU_", "year": null, "venue": "EAMT 2009", "pdf_link": "https://aclanthology.org/2009.eamt-1.8.pdf", "forum_link": "https://openreview.net/forum?id=OorGgUn6iU_", "arxiv_id": null, "doi": null }
{ "title": "Improving a Catalan-Spanish Statistical Translation System using Morphosyntactic Knowledge", "authors": [ "Mireia Farrús", "Marta R. Costa-jussà", "Marc Poch", "Adolfo Hernandez", "José B. Mariño" ], "abstract": null, "keywords": [], "raw_extracted_content": "Proceedings of the 13th Annual Conference of the EAMT , pages 52–57,\nBarcelona, May 2009\nImproving a Catalan-Spanish Statistical Translation System\nusing Morphosyntactic Knowledge\nMireia Farrús, Marta R. Costa-jussà, Marc Poch, Adolfo Hernández, and José B. Mariño\nTALP Research Centre, Department of Signal Theory and Communications\nUniversitat Politècnica de Catalunya\nC/ Jordi Girona 1-3, 08034 Barcelona, Spain\n{mfarrus,mruiz,mpoch,adolfohh,canton}@gps.tsc.upc.edu\nAbstract\nIn this paper, a human evaluation of a\nCatalan-Spanish Ngram-based statistical\nmachine translation system is used to de-\nvelop specific techniques based on the use\nof grammatical categories, lexical cate-\ngorisation and text processing, for the en-\nhancement of the final translation. The\nsystem is successfully improved when test-\ning with ad hoc and general corpora, as it\nis shown in the final automatic evaluation.\n1 Introduction\nStatistical Machine Translation (SMT) nowadays\nhas become one of the most popular Machine\nTranslation paradigms. The SMT approach allows\nto build a translator with open-source tools as long\nas a parallel corpus is available. If the languages\ninvolved in the translation belong to the same lin-\nguistic family, the translation quality can be sur-\nprisingly nice. Furthermore, one of the most at-\ntractive reasons to build an statistical system in-\nstead of an standard rule-based system is the little\nhuman effort required.\nTheoretically, when using SMT, no linguistic\nknowledge is required. In practice, once the sys-\ntem is built and specially, if the translation qual-\nity is high, then the linguistic knowledge becomes\nnecessary to make further improvements (Niessen\nand Ney, 2000; Popovi ´c and Ney, 2004; Popovi ´c\net al., 2006). In fact, the main question that arose\nat the beginning of this work was: which are the\nsteps to follow when the intention is to improve a\nhigh quality statistical translation?\nLet’s consider a high quality statistical trans-\nlation defined as the system which has a BLEU\nc/circlecopyrt2009 European Association for Machine Translation.around 75% with a single reference in an in-\ndomain test. This is a relatively unusual situation\nas most of the statistical translation systems have\nmuch lower performance. This study is devoted to\ndevelop this stage in the Catalan-Spanish pair in\nboth directions.\nThe study starts from a high quality Ngram-\nbased statistical translation baseline system,\ntrained with the aligned Spanish-Catalan parallel\ncorpus taken from El Periódico newspaper, which\ncontains 1.7 million sentences. A human error\nanalysis of the translation is then performed and\nused to further improve the translation by introduc-\ning statistical techniques and linguistic rules.\nThis paper is organised as follows. Section 2 de-\nscribes the Ngram-based statistical translation sys-\ntem used as baseline system. Section 3 reports the\nhuman error analysis and evaluation of the base-\nline system, whose solutions based on statistical\ntechniques, linguistic rules and text processing are\nexplained in section 4. In section 5, an automatic\nevaluation of the new system is performed and dis-\ncussed. 
Finally, Section 6 sums up the conclusions.

2 Ngram-based statistical translation system

An Ngram-based SMT system regards translation as a stochastic process. In recent systems, this process is modelled with a general maximum entropy approach in which a log-linear combination of multiple feature functions is implemented (Och, 2003). This approach leads to maximising a linear combination of feature functions:

\tilde{t} = \arg\max_{t} \left\{ \sum_{m=1}^{M} \lambda_m h_m(t, s) \right\}   (1)

where the argmax operation denotes the search problem, i.e. the generation of the output sentence in the target language, h_m(t, s) are the feature functions and \lambda_m are their corresponding weights.

The main feature function (and the only one in our baseline system) is the Ngram-based translation model, which is trained on bilingual n-grams. This model constitutes a language model of a particular bi-language composed of bilingual units (translation units) which are referred to as tuples. In this way, the translation model probabilities at the sentence level are approximated by using n-grams of tuples.

The Ngram-based approach is monotonic in that its model is based on the sequential order of tuples during training. Therefore, the baseline system with only one feature function may be especially appropriate for pairs of languages with relatively similar word order schemes. Further details can be found in Mariño et al. (2006).

3 Linguistic error analysis

In this section we report the linguistic error analysis performed over the Ngram-based baseline output. The analysis was performed by a native Catalan and Spanish linguist at the level of syntax, semantics and morphology, using out-of-domain text. The set of errors is listed and briefly described next.

Obligation  The Spanish obligation expression tener que (have to) was literally translated as *tenir que into Catalan, instead of haver de.

Solo confusion  The term solo in Spanish can be related to three distinct parts of speech (POS): adverb (only), adjective (alone) or noun (solo). Since the translation into Catalan depends on the POS, the translated term becomes erroneous when the Spanish POS is not well recognised, which happens especially between the adverb and the adjective.

Apostrophe  In the Spanish-Catalan translation, the apostrophe rules for the Catalan articles el, la and the preposition de in front of vowels are not fulfilled.

Geminated l (l·l)  Although the Catalan geminated l should always be written with a middle dot (·), it is very frequent to find it written with a normal dot, which leads to erroneous translations into Spanish.

Omission of prepositions  The preposition de is frequently omitted when translating the Spanish verb deber (must) into the phrasal verb haver de. On the other hand, Spanish normally uses the preposition a in front of a direct object while Catalan does not, so that such a preposition is usually omitted in the Catalan-Spanish translation.

a, en prepositions  These prepositions are used in very distinct ways in Catalan and Spanish, so it becomes difficult to achieve correct translations in both directions.

Possessive pronouns and adjectives  In Catalan, possessive pronouns and adjectives are expressed with the same term, whereas Spanish does not.
This ambiguity in Catalan leads to\nconfusion in the translation to Spanish.\nConjunction perquè This conjunction is ambigu-\nous in the Catalan-Spanish translation since it\ndepends on whether the conjunction is causal,\nin which case corresponds to porque (be-\ncause), or final, where it corresponds to para\nque(in order to).\nVerb soler The conjugated forms solandsols of\nthe verb soler (to use to) can be confused\nby the adjective meaning alone that uses the\nsame term.\nConjunctions i,oThese Catalan conjunctions\nmust be translated into Spanish as eand o\ninstead of yanduwhen the following word\nbegins with iandu, respectively.\nNumbers Many numeric expressions are not in-\ncluded in the training corpus, so that no trans-\nlation can be generated in any of the target\nlanguages.\nHours Catalan and Spanish time expressions dif-\nfer significantly, being usually impossible to\nuse literal translations. The main difference\nis found in the use of the quarters: where\nSpanish hours express the quarters that pass\nfrom a specific hour, Catalan uses the follow-\ning hour. E.g. Las cuatro y cuarto (four and\na quarter) in Spanish would correspond to Un\nquart de cinc (a quarter of five) in Catalan.\nPronominal clitics Frequently, the translation\nfails in the combination of the pronominal\nclitic and the corresponding verb.53\nCuyo relative pronoun The relative construc-\ntions involving the Spanish pronoun cuyo\nare subject to a lexical reordering in the\ntranslation into Catalan and viceversa. E.g.\nthe Spanish expression la mesa cuyo propi-\netario es (the table whose owner is) would\ncorrespond to la taula el propietari del qual\nés.\nGender concordance A masculine Spanish term\ncan correspond to a feminine Catalan term,\nand viceversa. E.g. la señal (Spanish fem.,\nthe signal) corresponds to el senyal (Catalan\nmasc.).\nUnknown words Apart from the numbers, there\nare other words that are not found in the train-\ning corpus due to the fact that they appear\nonly at the beginning of the sentence in cap-\nital letters, so that the same words written in\nlower case letters are not translated.\n4 Applying improvement techniques\nIn order to solve some of the problems described\nin the previous section, three different techniques\nhave been applied, based on the use of the gram-\nmatical category of the words, lexical categorisa-\ntion and direct text processing, respectively.\n4.1 Grammatical category-based techniques\nGrammatical categories have been successfully\nimplemented in statistical machine translation in\norder to deal with some problems such as reorder-\ning (Crego and Mariño, 2007) and automatic error\nanalysis (Popovi ´c and Ney, 2006). The aim is to\nadd the grammatical category (tag) corresponding\nto the word we are dealing with, so that the sta-\ntistical model will be able to distinguish the words\naccording to its category and to learn from context.\nHomonymy disambiguation\nIn the translation task, it is common to find\ntwo words in the source language with the same\nspelling and different meaning that correspond to\ntwo different words in the target language, which\nleads to incorrect translations. 
When equal words\nin the source language differ from each other by\ntheir grammatical category or associated tag (they\nare homonymous), such tag can be used for disam-\nbiguation.\nIn the case of the Catalan verb soler , instead of\ngenerating a series of rules to detect whether solandsolsare verbal conjugations of soler , the tag is\ndirectly taken from Freeling tool (Carreras et al.,\n2004).\nHowever, in some cases, the tag information\ngiven by the FreeLing tool is not correct, and some\nadditional processing is needed in order to perform\nthe word disambiguation. In the solo case, a se-\nries of context-based rules have been designed to\nidentify the solo adverb from the solo adjective\nin the doubtful cases. The rules are applied over\nthe source language and the corresponding tag is\nadded to the word in question. Thus, a source lan-\nguage sentence such as venía solo (he was coming\nalone) is transformed into venía solo_<ADJ> , so\nthat the statistical model will be able to distinguish\nbetween both cases.\nA similar process is performed in the Catalan\npossessives: a set of rules has been designed in or-\nder to assign a tag indicating the category of the\nword (adjective or pronoun), and the tags are then\nimplemented in the source language. Some ex-\namples of the resulting translations after applying\nhomonymy disambiguation can be found in Table\n1.\nSoler (S) La CR soldisposar de quatre.\n(T1) La CR * solo disponer de cuatro.\n(T2) La CR suele disponer de cuatro.\nSolo (S) Era solo un niño.\n(T1) Era * solun nen.\n(T2) Només era un nen.\nPoss. (S) Els meus amics no són els teus .\n(T1) Mis amigos no están * tus.\n(T2) Mis amigos no son los tuyos .\nTable 1: Examples of correction after homonymy\ndisambiguation.\nPronominal clitics\nThe pronominal clitics are initially detected and\nseparated from the verb by using the Freeling tool.\nAfter translating them, they are combined again\nwith the corresponding verb. In order to solve the\nerrors in this combination process, a set of rules\nis defined, in which two grammatical aspects are\nconsidered: the Spanish accentuation rules and the\npronoun-verb combination in Catalan. In Spanish,\nfor instance, the stressed syllable position changes\nwhen adding an enclitic pronoun to the verb:\nvende + lo→vénde lo(sellit)54\nwhile in Catalan, the accentuation rules are not\naltered and the pronoun-verb combination is per-\nformed by using apostrophes or hyphens:\nseguir + lo→seguir- lo(follow it)\ncompra + el→compra’ l(buy it)\nel+ aixecava →l/primeaixecava (lifted it)\nApostrophe\nA series of rules have been applied in order to\nfulfil the Catalan apostrophe rules. 
The basic apos-\ntrophe rule states that the singular articles el, la\nand the preposition demust be apostrophised when\npreceding a word that begins with a vowel or an\nunsounded h(in Catalan language the letter his\nnot pronounced):\nel + arbre →l’arbre (the tree)\nla + hora →l’hora (the hour)\nde + eines →d’eines (of tools)\nSome exceptions to these rules have also been\nincluded:\n•The articles and the preposition are not apos-\ntrophised when they precede terms beginning\nwith semiconsonantic ioru(including hi,\nhu):el uombat (the wombat), la hiena (the\nhyena), de iogurt (of yoghurt).\n•The feminine article is not apostrophised\nwhen precedes a word that begins with atonic\nioru(including hiandhu):la universitat (the\nuniversity), la Irene .\n•The feminine article and the preposition are\nnot apostrophised when preceding the nega-\ntive prefix a:la anormalitat (the anormality),\nde asimètric (of asimètric).\n•La una [hora](one o’clock), la ira (the wrath),\nla host (the host) and the names of letters( la\ne,la hac ,la erra , etc.) are not apostrophised.\nSome examples of clitics and apostrophe correc-\ntion can be found in Table 2.\nCapital letters at sentence beginning\nIt was also seen in section 3 that some of the un-\nknown words appear in the training corpus only in\ncapital letters, since they are found only at the be-\nginning of sentences. In order to solve this prob-\nlem, all those words that appear at the sentence be-\nginning are changed to lower case words, except\nfor proper nouns, common nouns and adjectives,Clitics (S) No quiero ver temás por aquí.\n(T1) No vull veure * etmés per aquí.\n(T2) No vull veure’ tmés per aquí.\nApostr. (S) La acepta hasta el final.\n(T1) * La accepta fins al final.\n(T2) L’accepta fins al final.\nTable 2: Examples of clitics and apostrophe cor-\nrection.\nsince common nouns and adjectives could be also\nproper nouns, and they are usually not found at\nsentence beginnings. Therefore, those words that\nappeared only in capital letters will be translated\nwhen writing them in lower case. An example of\nthis type of correction can be found in Table 3.\n(S) No entenc per què no hi assisteixes .\n(T1) No entiendo por qué no * assisteixes .\n(T2) No entiendo por qué no asistes .\nTable 3: Example of capital letter unknown word\ncorrection.\nGender concordance\nIn order to improve the translation of those\nwords that change the gender between Catalan and\nSpanish, a tag containing the part-of-speech in-\nformation has been used. This technique bene-\nfits those word sequences that maintain the gender\ncoherence; for instance: pilota _FN verda _FAdj\n(where FN is feminine noun and FAdj feminine\nadjective) will have a higher probability that pi-\nlota_FN verd_MAdj (where MAdj is a mascu-\nline adjective), since the tags model will have\nseen more time the sequence FN-FAdj than the se-\nquence FN-MAdj.\nNevertheless, the tags model will be useful\nonly if the language model (i.e. the tuples in-\ncluded in the training corpus) allows it. 
Thus, the\ntranslation of senyal _MN blanc _MAdj will remain\nasseñal _FN blanco _MAdj instead of señal _FN\nblanca _FAdj, since the tuple blanc#blanca is not\ncontained in the translation model.\nCuyo\nIn order to solve the problem of the relative\npronoun cuyo , a preprocessing rule was applied\nto transform the Spanish structure into a literal\ntranslation of the Catalan structure del qual ; i.e.\nthe sentences containing cuyo or some of its other\nforms ( cuya ,cuyos ,cuyas ), were transformed to55\nsentences containing del cual or its corresponding\nforms ( de la qual ,de los cuales ,de las cuales ), so\nthat the alignment was easier, and some translation\nerrors related to this pronoun were avoided.\nTable 4 shows some examples of gender concor-\ndance and cuyo correction.\nGender (S) Me encantan las espinacas .\n(T1) M’encanten * les espinacs .\n(T2) M’encanten els espinacs .\nCuyo (S) Un pueblo cuyo nombre es largo.\n(T1) Un poble * amb un nom és llarg.\n(T2) Un poble el nom del qual és llarg.\nTable 4: Examples of gender concordance and\ncuyo relative pronoun correction.\n4.2 Numbers and time categorisation\nAs it was seen in section 3, many numeric expres-\nsions are not included in the training corpus and\nthey appear as unknown words in the translation\nprocess. In order to solve this problem, the nu-\nmeric expressions are detected in the source lan-\nguage, codified, and generated again in the target\nlanguage.\nIn order to detect the numbers in the source lan-\nguage, two issues must be considered: the struc-\nture of the numeric expressions (compound words,\nuse of dashes, etc.) and the gender of the num-\nber, if applicable. Then, a specific codification is\ndefined in order to maintain the coherence of the\ndetected expression. Numbers like un/una (one),\nnou(nine) and deu(ten) have not been categorised\nbecause they can be related to non numeric expres-\nsions.\nOn the other hand, it was also seen in section 3\nthat time expressions differ in Catalan and Span-\nish languages. Since the training corpus contains\nfew examples related to time expressions, it is dif-\nficult to learn from context and to obtain correct\ntranslations. As in the numbers, time expressions\nare detected (considering three possible expression\nstructures), codified and generated in the target\nlanguage. In some cases, where a verb exists, this\nchanges in the translation, so that it becomes nec-\nessary to include it in the detection step. In the fol-\nlowing Catalan-Spanish example: són dos quarts\nde dues (it’s half past one), which is translated into\nes la una media , the verb changes from plural to\nsingular; thus, the verb must also be included in\nthe detected structure.Some examples of the correction after number\nand time categorisation can be found in Table 5.\nNumbers (S) L’alliberament de quatre-cents\nquaranta-un presoners.\n(T1) La liberación de * quatre-cents\n*quaranta-un prisioneros.\n(T2) La liberación de cuatrocientos\ncuarenta y un prisioneros.\nHours (S) Són tres quarts de vuit .\n(T1) Son *tres cuartos de ocho .\n(T2) Son las ocho menos cuarto .\nTable 5: Examples of correction after number and\ntime categorisation.\n4.3 Text processing\nSome of the errors need to be solved by perform-\ning a text processing before or after the translation.\nThe geminated l, for instance, have been treated\nbefore the translation, by normalising the writing\nof the middle dot. 
In other cases such as the oblig-\nation tener que and the conjunctions yandohave\nbeen treated as a postprocessing after the transla-\ntion. Some examples correction by text processing\ncan be found in Table 6.\nGemin. (S) Reformat a Brussel.les .\nl (T1) Reformado en * Bruselas. las .\n(T2) Se ha reformado en Bruselas .\nObligat. (S) Nos lo tenemos que creer.\n(T1) Ens ho * tenim que creure.\n(T2) Ens ho hem de creure.\ny/o (S) Com a Blanes oOlot.\n(T1) Como Blanes * oOlot.\n(T2) Como Blanes uOlot.\nTable 6: Some examples of text processing correc-\ntion.\n5 Evaluation\nIn order to evaluate the final system after applying\nthe grammatical rules and statistical techniques de-\nscribed in the current paper, a test corpus contain-\ning the above-mentioned problematic cases was\ndeveloped. The built corpus contains 636 sen-\ntences for each of the source and target languages,\nwhere the problems to deal with can be found in\na balanced proportion. In addition, an evaluation\nwith a 2000-sentence test extracted from El Per-56\niódico itself was also performed. The obtained re-\nsults are shown in Table 7.\nSent. ES>CA CA>ES\nBaseline N-II63675.91 73.50\nImproved N-II 81.35 76.12\nBaseline N-II200083.80 83.01\nImproved N-II 83.91 83.23\nTable 7: BLEU results in both directions of trans-\nlation.\nThe results obtained with the 636-sentence test\ncorpus show that the problems we were focusing\non are being solved better than in the baseline sys-\ntem. A slight improvement is also observed when\nusing the El Periódico test set, although the im-\nprovement is not so obvious since the corpus does\nnot contain explicitly the error cases we were deal-\ning with. Additionally, the following points could\nexplain some reasons why the improvement was\nnot higher:\n1. The improved translation has an additional\nknowledge with respect to the corpus. There-\nfore, some translations from the improved\nsystem are correct but differ from the refer-\nence while the baseline system outputs the\nreference as it is. E.g. EUA està (...) instead\nofEls EUA estan (...)\n2. The CA >ES translation from the improved\nsystem contains more words than the CA >ES\ntranslation from the baseline system. It must\nbe taken into account that BLEU measures\nthe precision and not the recall.\n6 Conclusions\nThe initial aim of the current paper was to improve\nan Ngram-based statistical machine system. Once\na set of common errors were detected through a\nhuman evaluation, a set of techniques based on the\nused of grammatical category, lexical categorisa-\ntion and text processing have been applied.\nWhen using an ad hoc built test corpus, the re-\nsults show that the use of grammatical informa-\ntion and the correction of the text as a pre- and\npostprocessing are useful techniques in order to\nachieve this goal, as it has been shown in the auto-\nmatic evaluation: the BLEU of the improved N-II\nis higher with respect to the baseline system.\nA higher performance in terms of BLEU is also\nreflected in the improved N-II when using a gen-eral corpus extracted from EL Periódico , although\nthe relative improvement is less than the previous\none, since the corpus does not contain explicitly\nthe problems we were tackling in the current paper.\nAdditionally, possible causes for the less improve-\nment observed have been analysed.\nReferences\nCarreras, Xavier , Chao, Isaac , Padró, Lluís, and Padró,\nMuntsa. 2004. FreeLing: An Open-Source Suite of\nLanguage Analyzers. 
Proceedings of the Conference\non Language Resources and Evaluation , Lisboa.\nCrego, Josep M. and Mariño, José B. 2007. Improving\nSMT by coupling reordering and decoding. Machine\nTranslation , 20:3:199–215.\nMariño, José B. , Banchs, Rafael E. , Crego, Josep\nM. , de Gispert, Adrià , Lambert, Patrick , Fonol-\nlosa, J.A.R. and Costa-jussà, Marta R. 2006. N-\ngram Based Machine Translation. Computational\nLinguistics , 32:4:527–549.\nNiessen, S., Ney, H. 2000. Improving SMT quality\nwith morpho-syntactic analysis. Proceedings of the\nInternational conference on Computational Linguis-\ntics, Saarbrücken, Germany.\nOch, Franz Josef 2003. Minimum Error Rate Train-\ning in Statistical Machine Translation. Proceedings\nof the 41st Meeting of the Association for Computa-\ntional Linguistics , Sapporo, Japan. 160–167.\nPopovi ´c, M., Ney, H. 2004. Towards the Use of Word\nStems and Suffixes for Statistical Machine Transla-\ntion. Proceedings of International Conference on\nLanguage Resources and Evaluation , Lisbon, Por-\ntugal.\nPopovi ´c, M., Ney, H. 2006. POS-based Word Reorder-\nings for Statistical Machine Translation. Proceed-\nings of International Conference on Language Re-\nsources and Evaluation , Genoa, Italy.\nPopovi ´c, M., de Gispert, A., Gupta, D., Lambert, P.,\nNey, H., Mariño, J.B. y Banchs, R. 2006. Morpho-\nsyntactic Information for Automatic Error Analysis\nof Statistical Machine Translation Output. Proceed-\nings of the HLT/NAACL Workshop on Statistical Ma-\nchine Translation , New York.57", "main_paper_content": null }
{ "decision": "Unknown", "reviews": [] }
0
0
[]
[]
{ "id": "xzUSkTPbk2zB", "year": null, "venue": "EAMT 2010", "pdf_link": "https://aclanthology.org/2010.eamt-1.12.pdf", "forum_link": "https://openreview.net/forum?id=xzUSkTPbk2zB", "arxiv_id": null, "doi": null }
{ "title": "Linguistic-based Evaluation Criteria to identify Statistical Machine Translation Errors", "authors": [ "Mireia Farrús", "Marta R. Costa-jussà", "José B. Mariño", "José A. R. Fonollosa" ], "abstract": null, "keywords": [], "raw_extracted_content": "Linguistic-based Evaluation Criteria to identify\nStatistical Machine Translation Errors\nMireia Farr ´us*, Marta R. Costa-juss `a**, Jos ´e B. Mari ˜no* and Jos ´e A.R. Fonollosa*\n*Universitat Polit `ecnica de Catalunya, TALP Research Center\nC/Jordi Girona 1-3, 08034 Barcelona, Spain\nfmfarrus,canton,[email protected]\n** Barcelona Media Innovation Center\nAv. Diagonal 177, 08018 Barcelona, Spain\[email protected]\nAbstract\nMachine translation evaluation methods\nare highly necessary in order to analyze the\nperformance of translation systems. Up to\nnow, the most traditional methods are the\nuse of automatic measures such as BLEU\nor the quality perception performed by na-\ntive human evaluations. In order to com-\nplement these traditional procedures, the\ncurrent paper presents a new human evalu-\nation based on the expert knowledge about\nthe errors encountered at several linguistic\nlevels: orthographic, morphological, lexi-\ncal, semantic and syntactic. The results ob-\ntained in these experiments show that some\nlinguistic errors could have more influence\nthan other at the time of performing a per-\nceptual evaluation.\n1 Introduction\nOne of the aims in the research community is to\nfind accurate evaluation methods that allow ana-\nlyzing and comparing the performance of these\ntranslation systems. The most commonly used\nevaluation methods are the standard automatic\nmeasures such as BLEU (Papineni et al., 2002),\nNIST (Doddington, 2002), TER (Snover and Dorr,\n2006) and WER (McCowan, 2004 et al.), as well\nas the use of human native evaluators that analyze\nand compare translated sentences according to a\ngeneral perception of the linguistic quality.\nIn this paper, these evaluation methods are used\nto evaluate and compare two translation systems\nbased on the statistical approaches in the Catalan-\nto-Spanish language pair: Google Translate and N-\nc\r2010 European Association for Machine Translation.II; this one developed at the Universitat Polit `ecnica\nde Catalunya (UPC).\nIn addition, a new human evaluation method is\napplied, based on an expert linguistic evaluation,\nwhich provides information about the errors clas-\nsified according the level they are encountered: or-\nthographic, morphological, lexical, semantic and\nsyntactic. The number of errors found in each\nlevel is then used to compare both human evalu-\nations: linguistic and perceptual. Since the aim is\nto achieve a good human perception in our final\ntranslation, one of the main points is to see which\nlinguistic errors have more impact in the human\nevaluation.\nThe structure of this paper is as follows. Next\nsection presents a brief summary of the related\nwork. Section 3 presents an overview of the sta-\ntistical machine translation approach. Section 4\nincludes the description of the systems and the hu-\nman evaluations used in the experiments. Section 5\nshows the results obtained in each of the evalua-\ntions, and finally, conclusions are presented in sec-\ntion 6.\n2 Related work\nAutomatic and human evaluation has been widely\ninvestigated by the scientific community. Having\nan automatic evaluation is a must in order to opti-\nmize a MT system. 
Actually, there are many inter-\nesting measures, for example the ones which have\nbeen presented and evaluated in the Annual Work-\nshop of Machine Translation (WMT)1. Some mea-\nsures include linguistic knowledge and do corre-\nlate with human criteria. However, as mentioned\nin the introduction, in this area, BLEU is still the\nmost widely used measure by most MT research\n1http://www.statmt.org/wmt09/\n[EAMT May 2010 St Raphael, France]\ngroups. Some of the main problems in automatic\nevaluation are that: the measure depends on the\nquality of the references; and, the measure do not\nbehave objectively among different types of MT\ntranslation systems. Given that a source sentence\nmay have multiple correct target sentences, it is\ndifficult to compose a test set which covers all of\nthem.\nHuman evaluation is time consuming. One of\nthe main problems here is that the criteria changes\nfor each annotator. People do not have the same\ncriteria when evaluating or ranking one transla-\ntion. Recently, in the GALE project, one effec-\ntive way to evaluate was asking annotators to edit\nthe translation. In that sense, the less number of\neditions, the better the translation. In (Callison-\nBurch, 2009), they proposed to edit the translation\noutput as fluent as possible which reflects the an-\nnotators’ understanding of the sentence.\nApart from the inconveniences mentioned\nabove, both automatic and human evaluation pro-\nvide little information about the linguistic errors\ncommitted by the system, which would help fur-\nther research. In this paper, we propose a linguistic\nevaluation which aims at being objective over any\ntranslation output and at specifying the type of er-\nrors committed by the system in order to help MT\ndevelopers to improve it.\nSome proposals regarding evaluation classifica-\ntion schemas can be found in the literature. (Vi-\nlar et al., 2006), for instance, propose a 5-category\nschema that does not use linguistic criteria. The\nclassification presented in the current paper offers\nmore linguistic information about the type of error;\ne.g. (Vilar et al., 2006) use the concept of incor-\nrect words that can be related to multiple linguistic\nlevels: lexical, semantic and morphological. On\nthe other hand, Flanagan classification (Flanagan,\n1994) lists a series of errors that are pair language-\ndependent. In the current paper, a similar list\nof subcategories for Catalan-Spanish is presented.\nHowever, these subcategories are included in a 5-\ncategory schema, which is language-independent.\n3 Statistical Machine Translation\nNowadays, Statistical Machine Translation (SMT)\nhas become one of the most popular machine\ntranslation paradigms. The SMT approach allows\nbuilding a translation system by means of open-\nsource tools as long as a parallel corpus is avail-\nable. Moreover, one of the most attractive reasonsto build a statistical system is that, unlike standard\nrule-based system, little human effort is required.\nIn SMT, statistical weights are used to decide\nthe most likely translation of a word. Mod-\nern SMT systems are phrase-based rather than\nword-based, and assemble translations using the\noverlap in phrases. 
Thus, given a source string\nsJ\n1=s1:::s j:::s Jto be translated into a target\nstringtI\n1=t1:::ti:::tI, the aim is to choose,\namong all possible target strings, the string with\nthe highest probability:\n~tI\n1=argmax\ntI\n1P(tI\n1jsJ\n1)\nwhereIandJare the number of words of the\ntarget and source sentence, respectively.\nThe first SMT systems were reformulated using\nBayes’ rule. In recent systems, such an approach\nhas been expanded to a more general maximum en-\ntropy approach in which a log-linear combination\nof multiple feature functions is implemented (Och,\n2003). This approach leads to maximising a linear\ncombination of feature functions:\n~t=argmax\ntnPM\nm=1\u0015mhm(t;s)o\n.\nGiven a target sentence and a foreign sentence,\nthe translation model tries to assign a probability\nthattI\n1generatessJ\n1. While these probabilities can\nbe estimated by thinking about how each individ-\nual word is translated, modern statistical MT is\nbased on the intuition that a better way to compute\nthese probabilities is by considering the behavior\nof phrases (sequences of words). The intuition of\nphrase-based statistical MT is to use phrases as\nwell as single words as the fundamental units of\ntranslation. Phrases are estimated from multiple\nsegmentation of the aligned bilingual corpora by\nusing relative frequencies.\nThe translation problem has also been ap-\nproached from the finite-state perspective as the\nmost natural way for integrating speech recog-\nnition and machine translation into a speech-\nto-speech translation system (Vidal, 1997; Ban-\ngalore and Riccardi, 2001; Casacuberta, 2001).\nThe Ngram-based system implements a transla-\ntion model based on this finite-state perspective\n(de Gispert and Mari ˜no, 2002) which is used along\nwith a log-linear combination of additional feature\nfunctions (Mari ˜no, 2006 et al.).\nIn addition to the translation model, SMT sys-\ntems use the language model, which is usually for-\nmulated as a probability distribution over strings\nthat attempts to reflect how likely a string occurs\ninside a language (Chen and Goodman, 1998).\nStatistical MT systems make use of the same n-\ngram language models as do speech recognition\nand other applications. The language model com-\nponent is monolingual, so acquiring training data\nis relatively easy.\nThe lexical models allow the SMT systems\nto compute another probability to the translation\nunits based on the probability of translating word\nper word of the unit. The probability estimated by\nlexical models tends to be in some situations less\nsparse than the probability given directly by the\ntranslation model. Many additional feature func-\ntions can also be introduced in the SMT frame-\nwork to improve the translation, like the word or\nthe phrase bonus.\nAlthough SMT systems provide, in general,\ngood performance, it has been demonstrated in re-\ncent papers that the addition of linguistic infor-\nmation can be highly useful in this kind of sys-\ntems (Niessen and Ney, 2000; Popovi ´c and Ney,\n2004; Popovi ´c and Ney, 2006; Popovi ´c et al.,\n2006).\n4 Experimental Framewok\nMachine translation systems can be evaluated by\nmeans of human judgments in many different\nways. The main objective of this work is to utilize\nthree kinds of evaluations (automatic, perceptual\nand linguistic) and see whether they are somehow\ncorrelated or not. The three evaluations have been\nperformed over two SMT systems: Google and N-\nII. 
This section includes an overview of both sys-\ntems and a brief description of the human evalua-\ntions used in the current work.\n4.1 Systems Description\nGoogle Translate2has been developed by\nGoogle’s research group on multiple pairs of lan-\nguages. This system feeds the computer with bil-\nlions of text words, including monolingual text in\nthe target language, as well as aligned text consist-\ning of examples of human translations between the\nlanguages. Then, statistical learning techniques\nare applied in order to build a translation model.\nThe accuracy of the automatic language detection\nincreases with the amount of text entered.\nGoogle is constantly working to support more\nlanguage in order to introduce them as soon as the\n2http://translate.google.comautomatic translation meets their standards. Large\namounts of bilingual texts are needed to further de-\nvelop new systems.\nN-II3, developed at the UPC mainly for the\nSpanish-Catalan pair, is an engine based on an N-\ngram translation model integrated in an optimized\nlog-linear combination of additional features. Al-\nthough it is mainly statistical, additional linguis-\ntic rules are included in order to solve some errors\ncaused by the statistical translation, such as ambi-\nguity in adjective and possessive pronouns, ortho-\ngraphic errors or time expressions, among others.\nTime expressions, which differ largely in both\nlanguages, are solved by detecting them, codify-\ning them as numeric expressions, and generating\nthem in the target language (Farr ´us, 2004 et al.).\nThe same procedure is used in the numbers, since\nmany of them were not included in the training cor-\npus. Other unknown words apart from numbers are\nsolved by including a dictionary as a post-process\nafter the translation, and a spell checker in order to\navoid wrong-written words in the input.\n4.2 Perceptual and Linguistic Evaluations\nHuman evaluations of the systems can be per-\nformed in different ways. The most commonly\nused, is the one called perceptual in the current pa-\nper. It consists in selecting a reasonable number\nof evaluators, which are not necessary linguistic\nexperts but having a good knowledge of the lan-\nguage in question. Such evaluators are then asked\nto compare translations output by two or more sys-\ntems. In addition, another human evaluation is pre-\nsented in this paper, consisting of a linguistic anal-\nysis made by an expert linguist. Next, both evalu-\nations are briefly described.\n4.2.1 Perceptual Evaluation\nThe comparison between different translation\nsystem outputs was performed by ten different hu-\nman evaluators. All of them were bilingual in both\nCatalan and Spanish languages, therefore no refer-\nence of translation was shown to them, in order to\navoid any bias in their evaluation.\nEach evaluator was asked to make a system-to-\nsystem (pairwise) comparison, where the system\npairs were randomized, so that the evaluator did\nnot know which system was being judged. Each\njudge evaluated 100 randomly extracted transla-\ntion pairs, and assessed, in each case, whether\n3http://www.n-ii.org/\none system produced a better translation than the\nother one, or whether both outputs were equiv-\nalent. Therefore, a total number of 1000 judge-\nments was collected. Next, an example of an out-\nput shown to the evaluators is presented:\nSource: Cal que hi hagi oferta per a tothom.\n(1): Hace falta que haya ofrecida para todo el\nmundo.\n(2): Es necesario que haya oferta para todos.\nWhich translation was better? 
4.2.2 Linguistic Evaluation

In order to evaluate the translations by means of linguistic criteria, rather than relying only on the common knowledge of the language speakers, a linguistic error classification was proposed in order to linguistically evaluate the encountered errors. The error-annotation process was very time consuming. Since the linguistic evaluation guidelines were very specific, only one evaluator was required and no inter-annotator agreement was needed. The system order was randomised, so that the annotator did not know which system was being judged.
The errors are reported according to the different linguistic levels involved: orthographic, morphological, lexical, semantic and syntactic, and according to the specific cases that can be found in a Catalan to Spanish (and vice versa) translation task.
The annotation guidelines are described in detail in a journal paper ("Overcoming statistical machine translation limitations: error analysis and proposed solutions for the Catalan-Spanish language pair"), submitted and pending acceptance at the time of writing this paper. The guidelines include a detailed description of the linguistic levels, indicating the kinds of errors that can be encountered at each level and giving examples of each of them.
The annotation guidelines are summarised next. For each linguistic level, the most common errors encountered for the Spanish-Catalan pair are briefly described.

• Orthographic errors include punctuation marks, erroneous accents, letter capitalisation, joined words, spare blanks coming from a wrong detokenisation, apostrophes, conjunctions and errors in foreign words.
An apostrophe error, for instance, can be seen in the following example, where the pronoun in Spanish is not apostrophised in Catalan as it should be:
Source (es): la acepta.
Incorrect T (ca): *la acepta.
Correct T (ca): l'accepta.

• Morphological errors include lack of gender and number concordance, apocopes, errors in verbal morphology (inflection) and lexical morphology (derivation and compounding), and morphosyntactic changes due to changes in syntactic structures.
The next example shows an error regarding lack of gender concordance. The feminine Spanish term señal (signal) must be translated into a masculine term in Catalan:
Source (es): la señal.
Incorrect T (ca): *la senyal.
Correct T (ca): el senyal.

• Lexical errors include no correspondence between source and target words, non-translated source words, missing target words, and proper nouns that are not translated or are translated when not necessary.
The next example shows a source word left untranslated:
Source (es): el número dieciséis.
Incorrect T (ca): el número *dieciséis.
Correct T (ca): el número setze.

• Semantic errors include polysemy, homonymy, and expressions used in a different way in the source and target languages.
Next, an example of a homonymy problem is shown. The word solo in Spanish can be an adverb or an adjective. In the Catalan translation, the wrong category was chosen: it was translated as an adjective when in that context it should have been taken as an adverb:
Source (es): era solo un niño.
Incorrect T (ca): era *sol un nen.
Correct T (ca): era només un nen.
• Syntactic errors include errors in prepositions, errors in relative clauses, verbal periphrases, clitics, missing or spare articles in front of proper nouns, and syntactic element reordering.
Two examples of syntactic errors are presented next. The first one shows a wrong combination of a pronominal clitic with the verb. The second one shows an error in the translation of a relative clause involving the relative pronoun cuyo.
Source (es): quiero verte.
Incorrect T (ca): vull veure *et.
Correct T (ca): vull veure't.
Source (es): un pueblo cuyo nombre es largo.
Incorrect T (ca): un poble *amb un nom és llarg.
Correct T (ca): un poble el nom del qual és llarg.

5 Evaluation Results

This section shows the results obtained in the automatic evaluation and in both human evaluations described above: the perceptual (non-expert) evaluation and the linguistic (expert) evaluation.
The test set selected for the current evaluation is as follows. The Spanish source test corpus consists of 711 sentences extracted from the El País and La Vanguardia newspapers, while the Catalan source test corpus consists of 813 sentences extracted from the Avui newspaper plus transcriptions from the TV programme Àgora. For each set and each direction of translation, two manual references were provided. Table 1 shows the number of sentences, words and vocabulary for each language.

              Spanish   Catalan
sentences     711       813
words         15974     17099
vocabulary    5702      5540

Table 1: Corpus statistics for the Catalan-Spanish test.

5.1 Automatic Evaluation Results

Table 2 presents the results obtained with two standard measures, BLEU and TER, for both systems and both directions of translation: Spanish to Catalan (es2ca) and Catalan to Spanish (ca2es). BLEU (Bilingual Evaluation Understudy) computes lexically matched accumulated precision for n-grams up to length four, while TER (Translation Error Rate) measures the number of edits required to change a system output into one of the references.

              es2ca                ca2es
Metric        Google    N-II       Google    N-II
BLEU          86.10     86.54      92.37     88.58
TER           11.32     10.76      5.70      7.80

Table 2: Automatic evaluation measures for both statistical systems and both directions of translation.
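As a present-day illustration of how the two automatic metrics in Table 2 can be computed at corpus level, the sketch below uses the sacrebleu package. This is purely illustrative: the original evaluation predates this tool and its exact tokenisation and scoring options are not stated in the paper, so scores obtained this way would not necessarily reproduce Table 2. The file names are placeholders.

# pip install sacrebleu  (corpus-level BLEU and TER, roughly as described in Section 5.1)
from sacrebleu.metrics import BLEU, TER

def read_lines(path):
    with open(path, encoding="utf-8") as f:
        return [line.rstrip("\n") for line in f]

hypotheses = read_lines("system_output.ca")      # one translation per line (placeholder path)
references = [read_lines("reference1.ca"),        # the evaluation used two manual references
              read_lines("reference2.ca")]

bleu = BLEU()   # n-gram precision up to length four, with brevity penalty
ter = TER()     # edit operations needed to turn the hypothesis into a reference

print(bleu.corpus_score(hypotheses, references))
print(ter.corpus_score(hypotheses, references))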
5.2 Perceptual Evaluation Results

Table 3 presents the results obtained in the perceptual evaluation, for both systems and both directions of translation: Spanish to Catalan (es2ca) and Catalan to Spanish (ca2es).

direction     Google    N-II
es2ca         48%       52%
ca2es         53%       47%

Table 3: Human judgements after the system-to-system comparison, showing the percentage of cases in which each system was found better than the other one.

The results show the percentage of cases in which each system was perceived as better than the other by the human evaluators. In the es2ca direction of translation, the N-II translation system was perceived as performing better than Google. Opposite results were found in the ca2es direction, where the Google system performed better than N-II in terms of the evaluators' perception. All the results obtained in this evaluation seem to be consistent with the results obtained in the automatic evaluation.

5.3 Linguistic Evaluation Results

The results of the linguistic evaluation are shown in Table 4. It can clearly be seen that, in the es2ca translation, the N-II system largely outperformed Google: the latter made roughly twice as many errors in total. This is consistent with the results obtained in the perceptual evaluation, where N-II was found better than Google in 52% of the cases. The same consistency is found in the automatic evaluation, where BLEU and TER for the N-II system equal 86.54 and 10.76, respectively, slightly better than for the Google system, where BLEU and TER equal 86.10 and 11.32, respectively.
Nevertheless, despite these consistencies, the difference in quality between the two systems observed in the linguistic evaluation is reflected neither in the perceptual evaluation nor in the automatic evaluation: in both of these, the difference in performance is smaller. These two evaluations are mutually consistent and, in consequence, they differ from the linguistic evaluation in the same way.
In the opposite direction of translation (ca2es), the evaluation results differ from the es2ca translation: N-II outperforms the Google translator only at three linguistic levels (orthographic, morphological and syntactic), while at the lexical and semantic levels the Google system outperforms the N-II translator.

                  es2ca                ca2es
Errors            Google    N-II       Google    N-II
orthographic      169       62         102       82
morphological     80        29         40        37
lexical           113       65         54        67
semantic          101       61         50        65
syntactic         183       79         111       87
total             646       295        357       338

Table 4: Number and type of errors encountered in both systems and both directions of translation.

The total number of errors in this direction is similar for both translation systems, although it is slightly lower for the N-II system (338 against 357 for the Google system). Nevertheless, the perceptual evaluation is not consistent with these results, since the evaluators judged the Google output as better than the N-II output in 53% of the cases.
The presented results can be interpreted in different ways. First, it seems that some linguistic errors have more influence than others when a perceptual evaluation is performed, and that lexical and semantic errors (which are, in turn, highly related) could carry a higher weight. Second, the two human evaluations are not really consistent with each other and are thus largely independent, since the evaluators may not rely on any specific linguistic error level when performing their judgements.
It therefore seems that further experiments using other corpora, other languages and other translation approaches should be performed in order to see whether a real correlation exists between all the evaluation methods included. Nevertheless, the proposed linguistic human-expert evaluation gives more detailed information regarding the types of errors that occur, and therefore provides a more specific starting point for improving the translation system in the future.

6 Conclusions

In this paper, a new evaluation method has been proposed in order to evaluate two statistical machine translation systems. System evaluation is a decisive task when trying to improve a system of such characteristics.
Therefore, a lot of effort has\nbeen put into trying to find the best or the most\naccurate and consistent evaluation method.\nThe evaluation procedure proposed in this paper\ntakes into account the type of errors encountered in\neach system, by classifying them into different lin-\nguistic levels: orthographic, morphological, lexi-\ncal, semantic and syntactic. When comparing the\nresults obtained through this classification to the\nones obtained by performing a traditional human\nevaluation, it could be stated that some levels (the\nlexical and the semantic levels) have more influ-\nence in the way how the human evaluators perceive\nthe errors. In the same way, both lexical and se-\nmantic errors seem to be also consistent with the\nautomatic evaluation measures BLEU and TER.\nNevertheless, the experiments in the current pa-\nper where only carried out within one pair of lan-\nguages (Spanish-Catalan). Further experiments\nshould be performed in order to analyze more ac-\ncurately this possible correlation and whether ex-\nists or not a dependency with the languages used\nin the translation.\nAcknowledgments\nThe N-II machine translation system developed at\nthe UPC has been funded by the European Union\nunder the integrated project TC-STAR: Technol-\nogy and Corpora for Speech to Speech Transla-\ntion (IST-2002-FP6-506738), the Spanish Govern-\nment under the BUCEADOR project (TEC2009-\n14094-C04-01), and partially funded by the Span-\nish Department of Education and Science through\ntheJuan de la Cierva fellowship program.\nReferences\nBangalore, Srinivas, and Giuseppe Riccardi. 2001.\nFinite-state models for lexical reordering in spoken\nlanguage translation. Proceedings of the ICSLP ,\n4:422–425, Beijing, China.\nCallison-Burch, Chris, Philipp Koehn, Christof Monz\nand Josh Schroeder. 2009. Findings of the 2009\nWorkshop on Statistical Machine Translation. Pro-\nceedings of the Fourth Workshop on Statistical Ma-\nchine Translation , 1–28, Athens, Greece.\nCasacuberta, Francisco. 2001. Finite-state transduc-\ners for speech-input translation. Proceedings of\nthe IEEE Automatic Speech Recognition and Under-\nstanding Workshop , 375–380, Trento, Italy.\nChen, Stanley F., and Joshua Goodman. 1998. An em-\npirical study of smoothing techniques for language\nmodeling. Technical Report TR-10-98 , Harvard Uni-\nversity.\nDoddington, George. 2002. Automatic evaluation\nof machine translation quality using n-gram co-\noccurrence statistics. Proceedings of the HLT-\nNAACL , 138–145, San Diego.\nFarr´us, Mireia, Marta R. Costa-juss `a, Marc Poch,\nAdolfo Hern ´andez, and Jos ´e B. Mari ˜no. 2009.\nImproving a Catalan-Spanish Statistical Translation\nSystem using Proceedings of the EAMT , 52–57,\nBarcelona.\nFlanagan, Mary A. 2002. Error classification for MT\nevaluation. Proceedings of the AMTA Conference ,\n65–72, Columbia, Maryland.\nde Gispert, Adri `a, and Jos ´e B. Mari ˜no. 2002. Using X-\ngrams for speech-to-speech translation. Proceedings\nof the ICSLP , 1885–1888, Denver, Colorado.\nMari ˜no, Jos ´e B., Rafael E. Banchs, Josep M. Crego,\nJosep M., Adri `a de Gispert, Patrick Lambert, Jos ´e\nA.R. Fonollosa and Marta R. Costa-juss `a. 2006. N-\ngram Based Machine Translation. Computational\nLinguistics , 32:4:527–549.\nMcCowan, Iain, Darren Moore, John Dines, Daniel\nGatica-Perez, Mike Flynn, Pierre Wellner and Herv ´e\nBourlard. 2004. On the Use of Information Re-\ntrieval Measures for Speech Recognition Evalua-\ntion. 
Technical Report of the IDIAP , 73, Martigny,\nSwitzerland.\nNiessen, Sonja, and Hermann Ney. 2000. Improving\nSMT quality with morpho-syntactic analysis. Pro-\nceedings of the International conference on Compu-\ntational Linguistics , Saarbr ¨ucken, Germany.\nOch, Franz Josef. 2003. Minimum Error Rate Train-\ning in Statistical Machine Translation. Proceedings\nof the 41st Meeting of the Association for Computa-\ntional Linguistics , 160–167, Sapporo, Japan.\nPapineni, Kishore, Salim Roukos, Todd Ward, and Wei-\njing Zhu. 2002. BLEU: A method for automatic\nevaluation of machine translation. Proceedings of\nthe 40th Meeting of the Association for Computa-\ntional Linguistics , 311–318, Philadelphia, Pennsyl-\nvania.\nPopovi ´c, Maja, and Hermann Ney. 2004. Towards\nthe Use of Word Stems and Suffixes for Statistical\nMachine Translation. Proceedings of International\nConference on Language Resources and Evaluation ,\nLisbon, Portugal.Popovi ´c, Maja and Hermann Ney. 2006. POS-based\nWord Reorderings for Statistical Machine Transla-\ntion. Proceedings of International Conference on\nLanguage Resources and Evaluation , Genoa, Italy.\nPopovi ´c, Maja, Adri `a de Gispert, Deepa Gupta, Patrick\nLambert, Hermann Ney, Jos ´e B. Mari ˜no and Rafael\nE. Banchs. 2006. Morpho-syntactic Information\nfor Automatic Error Analysis of Statistical Machine\nTranslation Output. Proceedings of the HLT/NAACL\nWorkshop on Statistical Machine Translation , New\nYork.\nSnover, Matthew, and Bonnie Dorr. 2006. A Study of\nTranslation Edit Rate with Targeted Human Annota-\ntion. Proceedings of the AMTA , Boston, USA.\nVidal, Enrique. 1997. Finite-state speech-to-speech\ntranslation. Proceedings of the ICASSP , 111–114,\nMunich, Germany.\nVilar, David, Jia Xu, Luis Fernando D’Haro, and Her-\nmann Ney. 2006. Error analysis of statistical ma-\nchine translation output. Proceedings of the LREC ,\nGenoa, Italy.", "main_paper_content": null }
{ "decision": "Unknown", "reviews": [] }
0
0
[]
[]
{ "id": "o0h7Da2-MoE", "year": null, "venue": "EAMT 2014", "pdf_link": "https://aclanthology.org/2014.eamt-1.7.pdf", "forum_link": "https://openreview.net/forum?id=o0h7Da2-MoE", "arxiv_id": null, "doi": null }
{ "title": "Data selection for discriminative training in statistical machine translation", "authors": [ "Xingyi Song", "Lucia Specia", "Trevor Cohn" ], "abstract": null, "keywords": [], "raw_extracted_content": "Data Selection for Discriminative Training in\nStatistical Machine Translation\nXingyi Song andLucia Specia\nDepartment of Computer Science\nUniversity of Sheffield\nS1 4DP, UK\nfxsong2,l.specia [email protected] Cohn\nComputing and Information Systems\nThe University of Melbourne\nVIC 3010, Australia\[email protected]\nAbstract\nThe efficacy of discriminative training in\nStatistical Machine Translation is heavily\ndependent on the quality of the develop-\nment corpus used, and on its similarity\nto the test set. This paper introduces a\nnovel development corpus selection algo-\nrithm – the LA selection algorithm. It fo-\ncuses on the selection of development cor-\npora to achieve better translation quality\non unseen test data and to make training\nmore stable across different runs, particu-\nlarly when hand-crafted development sets\nare not available, and for selection from\nnoisy and potentially non-parallel, large\nscale web crawled data. LA does not re-\nquire knowledge of the test set, nor the de-\ncoding of the candidate pool before the se-\nlection. In our experiments, development\ncorpora selected by LA lead to improve-\nments of over 2.5 BLEU points when com-\npared to random development data selec-\ntion from the same larger datasets.\n1 Introduction\nDiscriminative training – also referred to as tuning\n– is an important step in log-linear model in Sta-\ntistical Machine Translation (SMT) (Och and Ney,\n2002). The efficacy of training is closely related\nto the quality of training samples in the develop-\nment corpus, and to a certain extent, to the prox-\nimity between this corpus and the test set(s). Hui\net al. (2010) in their experiments show that by us-\ning different development corpora to train the same\nc\r2014 The authors. This article is licensed under a Creative\nCommons 3.0 licence, no derivative works, attribution, CC-\nBY-ND.SMT system, translation performance can vary up\nto 2.5 BLEU points (Papineni et al., 2002) with a\nstandard phrase-based system (Koehn et al., 2007).\nHow to build a ‘suitable’ development corpus is a\nimportant problem in SMT discriminative training.\nA suitable development corpus should aid dis-\ncriminative training achieve higher quality mod-\nels, and thus yield better translations. Previous re-\nsearch on selecting training samples for the devel-\nopment corpus can be grouped into two categories:\ni) selecting samples based on the test set (trans-\nductive learning), or ii) selecting samples without\nknowing the test set (inductive learning). Research\nin the first category focuses on how to find simi-\nlar samples to the ones the system will be tested\non. Li et al. (2010), Lu et al. (2008), Zheng et\nal. (2010), and Tamchyna et al. (2012) measure\nsimilarity based on information retrieval methods,\nwhile Zhao et al. (2011) selects similar sentences\nbased on edit distance. These similarity based ap-\nproaches have been successfully applied to the lo-\ncal discriminative algorithm proposed in (Liu et\nal., 2012). The limitation of these approaches is\nthat the test set needs to be known before model\nbuilding, which is rarely true in practice.\nOur research belongs to the second category.\nPrevious work on development data selection for\nunknown test sets include Hui et al. (2010). 
They\nsuggest that training samples with high oracle\nBLEU scores1will lead to better training qual-\nity. Cao and Khudanpur (2012) confirmed this and\nfurther showed that better training data will offer\nhigh variance in terms of BLEU scores and feature\nvector values between oracle and non-oracle hy-\npotheses, since these are more easily separable by\n1Oracle BLEU scores are those computed for the closest can-\ndidate translation to the reference in the n-best list of the de-\nvelopment set.\n45\nthe machine learning algorithms used for tuning.\nBoth of the above studies achieved positive results,\nbut these approaches require decoding the candi-\ndate development data to obtain BLEU scores and\nfeature values, which may be difficult apply if the\npool for data selection is extremely large.\nAnother potential way of improving training\nquality based on a development corpus is to in-\ncrease the size of this corpus. However, high-\nquality sentence aligned parallel corpora are ex-\npensive to obtain. In contrast to data used for rule\nextraction in SMT, data used for SMT discrimi-\nnative training is required to be of better quality\nfor reliable training. Development data is therefore\noften created by professional translators. In addi-\ntion, increasing the corpus size also increases the\ncomputational cost and the time required to train\na model. Therefore, finding out how much data is\nenough to build a suitable development corpus is\nalso an important question. Web crawled or crowd-\nsourcing data are much cheaper than profession-\nally translated data, and research towards exploit-\ning such type of data (Zaidan and Callison-Burch,\n2011; Uszkoreit et al., 2010; Smith et al., 2010;\nResnik and Smith, 2003; Munteanu and Marcu,\n2005) has already been successfully applied to ma-\nchine translation, both in phrase extraction and dis-\ncriminative training. However, they do not provide\na direct comparison between their selected data\nand professionally built development corpora.\nIn order to address these problems, in this pa-\nper we introduce a novel development corpus se-\nlection algorithm, the LA Selection algorithm. It\ncombines sentence length, bilingual alignment and\nother textual clues, as well as data diversity for\nsample sentence selection. It does not rely on\nknowledge of the test sets, nor on the decoding of\nthe candidate sentences. Our results show that the\nproposed selection algorithm achieves improve-\nments of over 2.5 BLEU points compared to ran-\ndom selection. We also present experiments with\ndevelopment corpora for various datasets to shed\nsome light on aspects that might have an impact\non translation quality, namely showing a substan-\ntial effect of the sentence length in the develop-\nment corpus, and that with the right selection pro-\ncess large development corpora offer little benefits\nover smaller ones.\nThe remainder of this paper is structured as fol-\nlows: We will describe our novel LA selection al-\ngorithm in Section 2. 
Experimental settings and results are presented in Sections 3 and 4, respectively, where we also discuss training quality and scalability over different corpus sizes.

2 Development Corpus Selection Algorithm

The proposed development corpus selection algorithm has two main steps: (i) selecting training sentence pairs by sentence Length, and (ii) selecting training sentence pairs by Alignment and other textual clues. We call it LA selection. It also has a further step to reward diversity in the set of selected sentences in terms of the words they contain. The assumption of the LA algorithm is that a good training sample should have a "reasonable" length, be paired with a good quality translation, as mostly indicated by the word alignment clues between the candidate pair, and add to the existing set in terms of diversity.
LA selection is shown in Algorithm 1.

Algorithm 1 Development Data Selection
Require: Data pool D = {(f_t, r_t, a_t)} for t = 1..T, number of words N, length limits λ_low and λ_top
1:  Select = []; Cand = []; L = 0
2:  for d_i = (f_i, r_i, a_i) in D do
3:      if λ_low < length(f_i) < λ_top then
4:          Calculate feature score s_i = score(f_i, r_i, a_i)
5:          Add (s_i, d_i) to Cand
6:      end if
7:  end for
8:  Sort Cand by score from high to low
9:  while selected length L < N do
10:     for d_i in Cand do
11:         if maxSim(f_i, Select[f_j] for j = J-200..J) < 0.3 and sim(f_i, r_i) < 0.6 then
12:             Add (f_i, r_i) to Select
13:             L = L + length(f_i)
14:         end if
15:     end for
16: end while
17: return Select

Assume that we have T sentence pairs in our data set D. Each sentence pair d_i in D contains a foreign sentence f_i, a translation of the foreign sentence r_i and the word alignment between them a_i. We first filter out sentence pairs below the low length threshold λ_low and above the high length threshold λ_top (Line 3). Sentence length has a major impact on word alignment quality, which constitutes the basis for the set of features we use in the next step. Shorter sentences tend to be easier to align than longer sentences, and therefore our algorithm would naturally be biased towards selecting shorter sentences. However, as we show later in our experiments, sentences that are either too short or too long often harm model accuracy. It is therefore important to set both bottom and top limits on sentence length. Based on empirical results, we suggest setting λ_low = 10 and λ_top = 50, as we will further discuss in Section 4.1.
After filtering out sentences by the length thresholds, the next step is to extract the feature values for each remaining candidate sentence pair. The features used in this paper are listed in Table 1. The first column of the table indicates the sign of the feature value: a negative sign indicates that the feature returns a negative value, and a positive sign indicates that the feature returns a positive value. The actual features, described below, are given in the second column. They include word alignment features, which are computed from GIZA++ alignments for the candidate development set, and simpler textual features.

+/-   Alignment Features
+     Source/Target alignment ratio
-     Source/Target top three fertilities ratio
+     Source/Target largest contiguous span ratio
-     Source/Target largest discontiguous span
      Text-only Features
+     Source and target length ratio
-     Target function word penalty

Table 1: Features used to score candidate sentence pairs.
The alignment features used here are mostly adapted from (Munteanu and Marcu, 2005).
The alignment ratio is the ratio between the number of aligned words and the length of the sentence in words:

    Alignment Ratio = No. Aligned Words / Sentence Length

A low alignment ratio means that the data is most likely non-parallel, or else a highly non-literal translation. Either way, such pairs are likely to prove detrimental.
Word fertility is the number of foreign words aligned to each target word. The word fertility ratio is the ratio between word fertility and sentence length. We use the three largest fertility ratios as three features:

    Fertility Ratio = - Word Fertility / Sentence Length

This feature can detect garbage collection, where the aligner uses a rare word to erroneously account for many difficult words in the parallel sentence.
Our definition of contiguous span differs from that in (Munteanu and Marcu, 2005): we define it as a substring in which all words have an alignment to words in the other language. A discontiguous span is defined as a substring in which no word has an alignment to any word in the other language. The contiguous span ratio, CSR, is the length of the largest contiguous span over the length of the sentence:

    CSR = L_C / Sentence Length

The discontiguous span ratio, DCSR, is the length of the largest discontiguous span over the length of the sentence:

    DCSR = - L_DC / Sentence Length

where L_C is the length of the contiguous span and L_DC is the length of the discontiguous span.
In addition to the word alignment features, we use the source and target length ratio, LR, to measure how close the source and target sentences in the pair are in terms of length:

    LR = TL / SL   if SL > TL
    LR = SL / TL   if TL > SL

where TL is the target sentence length and SL is the source sentence length.
Finally, the target function word penalty, FP, penalises sentences with a large proportion of function words or punctuation:

    FP = - exp( - n_func / TL )

where n_func is the number of function words and punctuation symbols, and TL is the target sentence length. We only consider a target language penalty, but a source language penalty could also be used.
Once we have obtained these feature values for all candidate sentence pairs, we apply two approaches to calculate an overall score for the candidate. The first is a heuristic approach, which simply sums the scores of all features for each sentence (with some features negated, as shown in Table 1). The second approach uses machine learning to combine these features, similar to what was done in (Munteanu and Marcu, 2005) to distinguish between parallel and non-parallel sentences. Here a binary SVM classifier is trained to predict samples that are more similar to professionally created sentences. The labelling of the data was therefore done by contrasting professionally created translations against badly aligned translations from web-crawled data. The heuristic approach achieved better performance than the machine learning approach, as we will discuss in Section 4.2.
Lines 8 through 16 in Algorithm 1 describe the sentence pair selection procedure based on this overall feature score. The candidate sentence pairs and their features are stored in the Cand list and sorted from high to low according to their overall feature scores. The algorithm takes candidate sentence pairs from the Cand list until the number of words in the selected training corpus Select reaches the limit N.
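The feature definitions above translate fairly directly into code. The sketch below is a simplified re-implementation for illustration only: it assumes each candidate pair comes with a GIZA++-style word alignment given as (source_index, target_index) pairs and a precompiled set of target-language function words, and it shows only the heuristic scoring (the plain sum of signed feature values), with the target-side variants of the alignment features.

import math

def alignment_ratio(alignment, length):
    """Share of target words that are aligned at all (positive feature)."""
    aligned = {tgt for _, tgt in alignment}
    return len(aligned) / length

def top_fertility_ratios(alignment, length, k=3):
    """The k largest fertilities (source words aligned to one target word),
    each divided by sentence length and negated, as in the Fertility Ratio."""
    counts = {}
    for _, tgt in alignment:
        counts[tgt] = counts.get(tgt, 0) + 1
    top = sorted(counts.values(), reverse=True)[:k]
    top += [0] * (k - len(top))                 # pad if fewer than k aligned words
    return [-fert / length for fert in top]

def span_ratios(alignment, length):
    """Largest fully aligned span (positive) and largest fully unaligned span
    (negative), both relative to sentence length."""
    aligned = {tgt for _, tgt in alignment}
    best_contig = best_gap = run_contig = run_gap = 0
    for i in range(length):
        if i in aligned:
            run_contig, run_gap = run_contig + 1, 0
        else:
            run_contig, run_gap = 0, run_gap + 1
        best_contig = max(best_contig, run_contig)
        best_gap = max(best_gap, run_gap)
    return best_contig / length, -best_gap / length

def length_ratio(src_len, tgt_len):
    return min(src_len, tgt_len) / max(src_len, tgt_len)

def function_word_penalty(target_tokens, function_words):
    # FP = -exp(-n_func / TL), reproduced as defined in the paper.
    n_func = sum(1 for tok in target_tokens if tok in function_words)
    return -math.exp(-n_func / len(target_tokens))

def heuristic_score(src_tokens, tgt_tokens, alignment, function_words):
    """Signed sum of the Table 1 features (target-side variants only here)."""
    tl = len(tgt_tokens)
    features = [alignment_ratio(alignment, tl)]
    features += top_fertility_ratios(alignment, tl)
    contig, discontig = span_ratios(alignment, tl)
    features += [contig, discontig,
                 length_ratio(len(src_tokens), tl),
                 function_word_penalty(tgt_tokens, function_words)]
    return sum(features)

# Toy example: a 4-token target sentence with one unaligned token.
align = [(0, 0), (1, 1), (2, 3)]
print(heuristic_score(["w1", "w2", "w3"], ["t1", "t2", "t3", "t4"], align, {"t1"}))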
If the candidate sentence pair\npasses the condition in Line 11, the sentence pair\nis added to the selected corpus Select .\nLine 11 has two purposes: first, it aims at in-\ncreasing the diversity of the selected training cor-\npus. Based on our experiments, candidate sentence\npairs with similar feature scores (and thus simi-\nlar rankings) may be very similar sentences, with\nmost of their words being identical. We therefore\nonly select a sentence pair whose source sentence\nhas less than 0:3BLEU similarity as compared to\nthe source sentences in last 200selected sentence\npairs.2The second purpose is to filter out sen-\ntence pairs that are not translated, i.e., sentence\npairs with same words in the source and target\nsides. Untranslated or partially untranslated sen-\ntence pairs are common in web crawled data. We\ntherefore filter out the sentence pairs whose source\nand target have a BLEU similarity score of over\n0:6.\n3 Experimental Settings\nSMT system: We build standard phrase-based\nSMT systems for each corpus using Moses with\nits 14 default features. The word alignment and\n2The 200 sentence pair limit is used to reduce the runtime on\nlarge datasets.language models were learned using GIZA++ and\nIRSTLM with Moses default settings. A trigram\nlanguage model was trained on English side of the\nparallel data. For discriminative training we use\nthe popular MERT (Och, 2003) algorithm.\nTwo language pairs are used in the experiments,\nFrench to English and Chinese to English, with the\nfollowing corpora:\nFrench-English Corpora: To build a French to\nEnglish system we used the Common Crawl cor-\npus (Smith et al., 2013). We filtered out sentence\nwith length over 80 words and split the corpus\ninto training (Common Crawl training) and tun-\ning (Common Crawl tuning). The training sub-\nset was used for phrase table, language model and\nreordering table training. It contains 3;158;523\nsentence pairs (over 161M words) and average\nsource sentence length of 27words. The tun-\ningsubset is used as “Noisy Data Pool” to test\nour LA selection algorithm. It contains 31;929\nsentence pairs (over 1:6M words), and average\nsource sentence length of 27words. We com-\npare the performance of our selected corpora\nagainst a concatenation of four professionally cre-\nated development corpora (Professional Data Pool)\nfor the news test sets distributed as part of the\nWMT evaluation (Callison-Burch et al., 2008;\nCallison-Burch et al., 2009; Callison-Burch et\nal., 2010): ‘newssyscomb2009’, ‘news-test2008’,\n‘newstest2009’ and ‘newstest2010’. Altogether,\nthey contain 7;518 sentence pairs (over 392K\nwords) with average source sentence length of 27\nwords. As test data , we take the WMT13 (average\nsource sentence length = 24 words) and WMT14\n(average source sentence length = 27 words) news\ntest sets.\nChinese-English Corpora: To build the Chi-\nnese to English translation system we use the non-\nUN and non-HK Hansards portions of the FBIS\n(LDC2003E14) training corpus ( 1;624;512 sen-\ntence pairs, over 83M words, average source sen-\ntence = 24) and tuning (33;154 sentence pairs,\nover 1:7M words, average sentence length = 24).\nThe professionally created development corpus in\nthis case is the NIST MT06 test set3(1;664sen-\ntence pairs, 86K words, average sentence length\n= 23 words). 
As test data , we use the NIST\n3It contains 4 references, but we only apply the first reference\nto make it comparable to our selection algorithm.\n48\nMT08 test set (average source sentence length =\n24 words).\nNote that for both language pairs, the test sets\nand professionally created development corpora\nbelong to the same domain: news, for both French-\nEnglish and Chinese-English. In addition, the test\nand development corpora for each language pair\nhave been created in the same fashion, following\nthe same guidelines. Our pool of noisy data, how-\never, includes not only a multitude of domains dif-\nferent from news, but also translations created in\nvarious ways and noisy data.\n4 Results\nOur experiments are split in three parts: Section\n4.1 examines how sentence length in development\ncorpora affects the training quality. Section 4.2\ncompares our LA selection algorithm against ran-\ndomly selected corpora and against professionally\ncreated corpora. Section 4.3 discusses the effect\nof development corpus size by testing translation\nperformance with corpora of different sizes.\n4.1 Selection by Sentence Length\nIn order to test how sentence length affects the\nquality of discriminative training, we split the\ntuning corpus into six parts according to source\nsentence length ranges (in words): [1-10], [10-\n20], [20-30], [30-40], [40-50] and [50-60]. For\neach range, we randomly select sentences to total\n30;000words as a small training set, train a dis-\ncriminative model based on the small training set,\nand test the translation performance on WMT13\nand NIST MT08 test sets. We repeat the random\nselection and training procedure five times and re-\nport average BLEU scores in Table 2.\nThe top half of Table 2 shows the results for\nFrench-English translation. From this Table, we\ncan see that corpora with sentence lengths of [30-\n40] and [30-50] lead to better translation quality\nthan random selection, with a maximum average\nBLEU score of 25.62 for sentence length [30-40],\noutperforming random length selection by 1.26\nBLEU points. Corpora with sentences in [10-20]\nand [20-30] perform slightly worse than random\nselection. The worst performance is obtained for\ncorpora with very short or very long sentences.\nThe lower half of Table 2 shows the results for\nChinese-English translation. Lengths [10-20], [20-\n30], [30-40] and [40-50] lead to better transla-\ntion performance than random selection. As forFrench-English translation, the worst performance\nis obtained for corpora with very short or very long\nsentences, with a lower BLEU score than random\nselection.\nAccording to above results, the best sentence\nlength for discriminative training is not fixed, as\nit may depend on language pairs and corpus type.\nHowever, sentences below 10 words or above 50\nwords lead to poor results for both language pairs.\nWe conduct another experiment selecting develop-\nment corpora excluding sentences with length be-\nlow 10 or above 50. Results are shown in col-\numn [10-50] of both Tables. Compared to ran-\ndom selection, [10-50] improved BLEU scores by\n1.18 for French-English, and by 0.54 for Chinese-\nEnglish. 
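The bucketing-and-sampling procedure behind these length experiments can be sketched as follows. The length ranges and the 30,000-word budget come from the description above; the shuffling, the treatment of the range boundaries and the way the budget boundary is handled are assumptions made for illustration.

import random

def sample_by_length(pairs, low, high, word_budget, seed=0):
    """Randomly draw sentence pairs whose source length falls in [low, high)
    until roughly `word_budget` source words have been collected."""
    rng = random.Random(seed)
    bucket = [(src, tgt) for src, tgt in pairs if low <= len(src.split()) < high]
    rng.shuffle(bucket)
    selected, words = [], 0
    for src, tgt in bucket:
        if words >= word_budget:
            break
        selected.append((src, tgt))
        words += len(src.split())
    return selected

# One 30,000-word sample per length range, as in Section 4.1; the paper repeats
# this five times with fresh random draws to report means and standard deviations.
ranges = [(1, 10), (10, 20), (20, 30), (30, 40), (40, 50), (50, 60)]
# tuning_pool = load_tuning_pool(...)   # hypothetical loader for the tuning pool
# samples = {r: sample_by_length(tuning_pool, r[0], r[1], 30000, seed=1) for r in ranges}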
Note that our systems were developed\non corpora with average sentence length of around\n25 words, which is typical in most freely avail-\nable training corpora,4the thresholds may differ\nfor corpora with very different sentence lengths.\n4.2 Selection by LA Algorithm\nIn what follows we compare the performance of\nour LA selection algorithm against randomly se-\nlected and professionally created corpora. We set\n\u0015low= 10 and\u0015top= 50 and select a development\ncorpus with no more than 30;000words. Results\nare reported in Table 3, again with averages over\nfive runs.\nConsidering first the results for the French-\nEnglish WMT13 test set, the LA selection im-\nproves BLEU by 1.36 points compared with ran-\ndom selection, and also improves over sentence\nlength-based selection (10-50). The performance\nof the LA selected corpus is only slightly lower\n(0.1 BLEU) than that of the professionally cre-\nated corpus (Prof.), but the system is much more\nrobust with much lower standard deviation (std).\nThis is a surprising outcome as the professionally\ncreated development sets are drawn from the same\ndomain as the test sets (news), and were created us-\ning the same translation guidelines as the test set,\nand therefore better results were expected for these\ncorpora. We have similar findings for the French-\nEnglish WMT14 and Chinese-English MT08 test\nsets. Systems trained on corpora selected by LA\nincrease 1.21 and 2.53 BLEU points over ran-\ndom selection, respectively. For the WMT14 test\nset, the corpus selected by LA show slight im-\n4For example, both Europarl and News-Commentary WMT\ncorpora have an average of 25 words on their English side.\n49\nRand. 1-10 10-20 20-30 30-40 40-50 50-60 10-50\nWMT13avg. 24.36 22.85 23.61 24.43 25.62 24.62 22.94 25.54\nstd. 0.84 0.65 0.80 0.51 0.40 1.06 0.99 0.84\nMT08avg. 18.79 18.11 20.00 19.63 18.85 19.29 18.53 19.33\nstd. 0.83 0.29 1.45 1.00 0.85 1.38 0.81 1.16\nTable 2: Average BLEU scores and standard deviation on French to English (WMT13) and Chinese to English (MT08) test\nsets for different ranges of sentence length. The leftmost Rand. column has no length restrictions.\nRand.\n10-50\nLA 10\u000050\nProf.\nWMT13avg.\n24.36\n25.54\n25.72\n25.82\nstd.\n0.84\n0.84\n0.01\n0.23\nWMT14avg.\n25.19\n25.31\n26.40\n26.31\nstd.\n0.30\n0.14\n0.04\n0.16\nMT08avg.\n18.79\n19.33\n21.32\n23.49\nstd.\n0.83\n1.16\n0.83\n0.31\nTable 3: Average BLEU scores and standard deviation\nfor French-English (WMT13, WMT14) news test sets and\nChinese-English (MT08) test set with development corpora\nselected by length (10-50), LA algorithm (LA 10\u000050), ran-\ndomly (Rand.), or created by professionals (Prof.).\nprovements over the professionally created corpus\n(26.40 vs. 26.31) with a lower variance.\nWe also experiment with using the SVM clas-\nsifier to combine features in the LA selection al-\ngorithm, as previously discussed. The classifier\nwas trained using the SVMlight5toolkit with RBF\nkernel with its default parameter settings. We se-\nlected 30;000words from the professionally cre-\nated WMT development corpus as positive training\nsamples, and used as negative examples 30;000\nwords from our corpus with the lowest LA se-\nlection score. Different from the LA selection\nmethod, here sentence length is not limited to 10-\n50, but rather the sentence length is provided as a\nfeature to the classifier. The motivation was to test\nthe ability of the algorithm in learning a suitable\nsentence length for tuning. 
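The classifier-based variant just described can be sketched roughly as below, using scikit-learn rather than SVMlight (the toolkit actually used). The feature extraction is assumed to produce the Table 1 feature vectors plus sentence length, the positive/negative labelling follows the description above, and ranking candidates by the SVM decision value is one plausible reading of "highest classification scores".

from sklearn.svm import SVC

def train_quality_classifier(professional_feats, noisy_feats):
    """professional_feats: feature vectors from the WMT development sets (positive class);
    noisy_feats: vectors for the lowest-scoring crawled pairs (negative class)."""
    X = professional_feats + noisy_feats
    y = [1] * len(professional_feats) + [0] * len(noisy_feats)
    clf = SVC(kernel="rbf")          # RBF kernel with default parameters, as in the paper
    clf.fit(X, y)
    return clf

def rank_candidates(clf, candidate_feats):
    """Order candidate pairs by the classifier's decision value for the positive class;
    selection then proceeds from the top until the word budget is reached."""
    scores = clf.decision_function(candidate_feats)
    return sorted(range(len(candidate_feats)), key=lambda i: scores[i], reverse=True)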
Nevertheless, the selected sentences have similar average lengths: 16 words for the corpus selected with the SVM classifier against 18 words for the corpus selected with the heuristic method. Results for sentence selection using the highest classification scores are shown in Table 4.

5 http://svmlight.joachims.org/

          WMT13     WMT14
avg.      25.42     26.08
std.      0.08      0.08

Table 4: Average BLEU scores and standard deviation for SVM-based LA selection on French-English WMT13 and WMT14 test sets.

LA selection with the SVM classifier outperforms random selection, but does worse than our heuristic approach (compare to LA 10-50 in Table 3). The reason may be the quality of the training data: both our positive and negative training examples will contain considerable noise. The WMT professionally created corpora include some odd translations, so the alignment features will be less reliable. Also, we stress that this is a harder problem than the one introduced in (Munteanu and Marcu, 2005), since their pool of candidate samples contained either parallel or non-parallel sentences, which are easier to label and to distinguish based on word alignment features. Our pool of candidate samples is assumed to be parallel, and our selection procedure aims at selecting the highest quality translations from it.

[Figure 1 (plot not reproduced): BLEU against corpus size (sentences), with curves for Random, LA Selected and Professional development data.]
Figure 1: BLEU score changes for development corpora of different sizes with the French-English WMT13 corpus. The horizontal axis shows corpus size, and the vertical axis BLEU scores. Points show the mean results and whiskers denote ± one standard deviation.

4.3 Effect of Training Corpus Size

Next, we consider the question of how much development data is needed to train a phrase-based SMT system. To test this we experiment with corpora ranging in size from 10,000 words to 150,000 words, in incremental steps of 10,000 words. At each step we run MERT training five times and report the average BLEU scores. The test set is WMT13.
Figure 1 shows how BLEU changes as we increase the training corpus size. The three lines represent the BLEU scores of three systems: random selection from the French-English tuning dataset (blue line), LA selection from the same pool (red line), and the WMT professionally created development corpus (green line). According to this figure, performance increases as corpus size increases, for all techniques, but only up to 70,000 words, after which performance is stable. The professionally created corpus achieves the best performance for any corpus size. Note however that the LA selection technique is only slightly worse, with less than 0.1 BLEU difference, for corpus sizes of 30,000 words or more. Random selection clearly performs poorly compared to both.
Also shown in Figure 1 are the standard deviations from the five runs of the experiment. Random selection presents the largest standard deviation (greater than 0.6 BLEU) for training corpora of sizes below 50,000 words. The maximum standard deviation is 1.93 at 30,000 words. With larger training corpus sizes, the standard deviation of random selection is still higher than that of the LA selected and professional data. LA selection has a much lower average standard deviation, even lower than the professionally created data.
This is impor-\ntant for real application settings, where repeated\nruns are not practical and robust performance from\na single run is imperative.\nThese results confirm some findings of previ-\nous research (Hui et al., 2010), namely that enlarg-\ning the tuning corpus leads to more accurate mod-\nels. However we find that increasing the amount\nof data is not the best solution when creating a\ndevelopment corpus: much greater improvements\nare possible by instead focusing on selecting better\nquality data. Using data selection reduces the need\nfor large development sets, in fact as few as 70k\nwords is sufficient for robust tuning.5 Conclusions\nIn this paper we have shown how the choice of the\ndevelopment corpus is critical for machine trans-\nlation systems’ performance. The standard prac-\ntice of resorting to expensive human translations\nis not practical for many SMT application scenar-\nios, and consequently making better use of exist-\ning parallel resources is paramount. Length is the\nmost important single criterion for selecting effec-\ntive sentences for discriminative training: overly\nshort and overly long training sentences often harm\ntraining performance. Using large development\nsets brings only small improvements in accuracy,\nand a modest development set of 30k-70k words\nis sufficient for good performance. The key in-\nnovation in this paper was the LA sentence selec-\ntion algorithm, which selects high quality and di-\nverse sentence pair for translation. We have shown\nlarge improvements over random selection, of up\nto 2.53 BLEU points (Chinese-English). The ap-\nproach is competitive with using manually trans-\nlated development sets, despite having no knowl-\nedge of the test set, test set domain, nor using\nexpensive expert translators. In future work, we\nplan to improve the classification technique for\nautomatically predicting training quality through\nalternative methods for extracting training exam-\nples and additional features to distinguish between\ngood and bad translations.\n6 Acknowledgement\nDr. Specia has received funding from the Euro-\npean Union’s Seventh Framework Programme for\nresearch, technological development and demon-\nstration under grant agreement no. 296347 (QT-\nLaunchPad). Dr. Cohn is the recipient of an\nAustralian Research Council Future Fellowship\n(project number FT130101105).\nReferences\nCallison-Burch, Chris, Cameron Fordyce, Philipp Koehn,\nChristof Monz, and Josh Schroeder. 2008. Further meta-\nevaluation of machine translation. In Proceedings of the\nThird Workshop on Statistical Machine Translation , pages\n70–106, Columbus, Ohio, June. Association for Computa-\ntional Linguistics.\nCallison-Burch, Chris, Philipp Koehn, Christof Monz, and\nJosh Schroeder. 2009. Findings of the 2009 Workshop\non Statistical Machine Translation. In Proceedings of\nthe Fourth Workshop on Statistical Machine Translation ,\npages 1–28, Athens, Greece, March. Association for Com-\nputational Linguistics.\n51\nCallison-Burch, Chris, Philipp Koehn, Christof Monz, Kay\nPeterson, Mark Przybocki, and Omar Zaidan. 2010. Find-\nings of the 2010 joint workshop on statistical machine\ntranslation and metrics for machine translation. In Pro-\nceedings of the Joint Fifth Workshop on Statistical Ma-\nchine Translation and MetricsMATR , pages 17–53, Up-\npsala, Sweden, July. Association for Computational Lin-\nguistics. Revised August 2010.\nCao, Yuan and Sanjeev Khudanpur. 2012. Sample selection\nfor large-scale mt discriminative training. 
In AMTA .\nHui, Cong, Hai Zhao, Yan Song, and Bao-Liang Lu. 2010.\nAn empirical study on development set selection strat-\negy for machine translation learning. In Proceedings of\nthe Joint Fifth Workshop on Statistical Machine Transla-\ntion and MetricsMATR , WMT ’10, pages 67–71, Uppsala,\nSweden. Association for Computational Linguistics.\nKoehn, Philipp, Hieu Hoang, Alexandra Birch, Chris\nCallison-Burch, Bertoldi Nicola Federico, Marcello,\nBrooke Cowan, Wade Shen, Christine Moran, Richard\nZens, Chris Dyer, Ondrej Bojar, Alexandra Constantin,\nand Evan Herbst. 2007. Moses: Open source toolkit\nfor statistical machine translation. In Proceedings of ACL\n2007, Demonstration Session , Prague, Czech Republic.\nLi, Mu, Yinggong Zhao, Dongdong Zhang, and Ming Zhou.\n2010. Adaptive development data selection for log-linear\nmodel in statistical machine translation. In Proceedings\nof the 23rd International Conference on Computational\nLinguistics , COLING ’10, pages 662–670, Beijing, China.\nAssociation for Computational Linguistics.\nLiu, Lemao, Hailong Cao, Taro Watanabe, Tiejun Zhao,\nMo Yu, and CongHui Zhu. 2012. Locally training the\nlog-linear model for smt. In Proceedings of the 2012 Joint\nConference on Empirical Methods in Natural Language\nProcessing and Computational Natural Language Learn-\ning, EMNLP-CoNLL ’12, pages 402–411, Jeju Island, Ko-\nrea.\nLu, Yajuan, Jin Huang, and Qun Liu. 2008. Improving sta-\ntistical machine translation performance by training data\nselection and optimization.\nMunteanu, Dragos Stefan and Daniel Marcu. 2005. Improv-\ning machine translation performance by exploiting non-\nparallel corpora. Comput. Linguist. , 31(4):477–504, De-\ncember.\nOch, Franz Josef and Hermann Ney. 2002. Discriminative\ntraining and maximum entropy models for statistical ma-\nchine translation. In Proceedings of the 40th Annual Meet-\ning on Association for Computational Linguistics , ACL\n’02, pages 295–302, Philadelphia, Pennsylvania.\nOch, Franz Josef. 2003. Minimum error rate training in sta-\ntistical machine translation. In Proceedings of the 41st An-\nnual Meeting on Association for Computational Linguis-\ntics - Volume 1 , ACL ’03, pages 160–167, Sapporo, Japan.\nPapineni, Kishore, Salim Roukos, Todd Ward, and Wei-Jing\nZhu. 2002. Bleu: a method for automatic evaluation\nof machine translation. In Proceedings of the 40th An-\nnual Meeting on Association for Computational Linguis-\ntics, ACL ’02, pages 311–318, Philadelphia, Pennsylvania.\nResnik, Philip and Noah A. Smith. 2003. The web as a paral-\nlel corpus. Comput. Linguist. , 29(3):349–380, September.Smith, Jason R., Chris Quirk, and Kristina Toutanova. 2010.\nExtracting parallel sentences from comparable corpora us-\ning document level alignment. In Human Language Tech-\nnologies: The 2010 Annual Conference of the North Amer-\nican Chapter of the Association for Computational Lin-\nguistics , HLT ’10, pages 403–411, Los Angeles, Califor-\nnia.\nSmith, Jason R., Philipp Koehn, Herve Saint-Amand, Chris\nCallison-Burch, Magdalena Plamada, and Adam Lopez.\n2013. Dirt cheap web-scale parallel text from the com-\nmon crawl. In Proceedings of the 2013 Conference of the\nAssociation for Computational Linguistics (ACL 2013) .\nTamchyna, Ale ˇs, Petra Galu ˇsˇc´akov ´a, Amir Kamran, Milo ˇs\nStanojevi ´c, and Ond ˇrej Bojar. 2012. Selecting data for\nenglish-to-czech machine translation. 
In Proceedings of\nthe Seventh Workshop on Statistical Machine Translation ,\nWMT ’12, pages 374–381, Montreal, Canada.\nUszkoreit, Jakob, Jay M. Ponte, Ashok C. Popat, and Moshe\nDubiner. 2010. Large scale parallel document mining\nfor machine translation. In Proceedings of the 23rd Inter-\nnational Conference on Computational Linguistics , COL-\nING ’10, pages 1101–1109, Beijing, China.\nZaidan, Omar F. and Chris Callison-Burch. 2011. Crowd-\nsourcing translation: professional quality from non-\nprofessionals. In Proceedings of the 49th Annual Meeting\nof the Association for Computational Linguistics: Human\nLanguage Technologies - Volume 1 , HLT ’11, pages 1220–\n1229, Portland, Oregon.\nZhao, Yinggong, Yangsheng Ji, Ning Xi, Shujian Huang, and\nJiajun Chen. 2011. Language model weight adaptation\nbased on cross-entropy for statistical machine translation.\nInProceedings of the 25th Pacific Asia Conference on Lan-\nguage, Information and Computation , pages 20–30, Singa-\npore, December. Institute of Digital Enhancement of Cog-\nnitive Processing, Waseda University.\nZheng, Zhongguang, Zhongjun He, Yao Meng, and Hao Yu.\n2010. Domain adaptation for statistical machine transla-\ntion in development corpus selection. In Universal Com-\nmunication Symposium (IUCS), 2010 4th International .\n52", "main_paper_content": null }
{ "decision": "Unknown", "reviews": [] }
0
0
[]
[]
{ "id": "s6RHzEYWg2L", "year": null, "venue": "EAMT 2012", "pdf_link": "https://aclanthology.org/2012.eamt-1.35.pdf", "forum_link": "https://openreview.net/forum?id=s6RHzEYWg2L", "arxiv_id": null, "doi": null }
{ "title": "Evaluating User Preferences in Machine Translation Using Conjoint Analysis", "authors": [ "Katrin Kirchhoff", "Daniel Capurro", "Anne M. Turner" ], "abstract": null, "keywords": [], "raw_extracted_content": "Evaluating User Preferences in Machine Translation Using Conjoint\nAnalysis\nKatrin Kirchhoff\nDepartment of Electrical Engineering\nUniversity of Washington\nSeattle, WA, USA\[email protected] Capurro, Anne Turner\nDepartment of Medical Education\nand Biomedical Informatics\nUniversity of Washington\nSeattle, WA, USA\[email protected]\[email protected]\nAbstract\nIn spite of much ongoing research on ma-\nchine translation evaluation there is little\nquantitative work that directly measures\nusers’ intuitive or emotional preferences\nregarding different types of machine trans-\nlation errors. However, the elicitation and\nmodeling of user preferences is an im-\nportant prerequisite for future research on\nuser adaptation and customization of ma-\nchine translation engines. In this paper we\nexplore the use of conjoint analysis as a\nformal quantitative framework to gain in-\nsight into users’ relative preferences for\ndifferent translation error types. Using\nEnglish-Spanish as the translation direc-\ntion we conduct a crowd-sourced conjoint\nanalysis study and obtain utility values for\nindividual error types. Our results indicate\nthat word order errors are clearly the most\ndispreferred error type, followed by word\nsense, morphological, and function word\nerrors.\n1 Introduction\nCurrent work in machine translation (MT) evalu-\nation research falls into three different categories:\nautomatic evaluation, human evaluation, and em-\nbedded application evaluation. Much effort has\nfocused on the first category, i.e. on designing eval-\nuation metrics that can be computed automatically\nfor the purpose of system tuning and development.\nThese include e.g. BLEU (Papineni et al., 2002),\nposition-independent word error rate (PER), ME-\nTEOR (Lavie and Agarwal, 2007), or translation\nerror rate (TER) (Snover et al., 2006). Human\nc\r2012 European Association for Machine Translation.evaluation (see (Denkowskie and Lavie, 2010) for\na recent overview) typically involves rating trans-\nlation output with respect to fluency and adequacy\n(LDC, 2005), or directly comparing and ranking\ntwo or more translation outputs (Callison-Burch et\nal., 2007). All of these evaluation techniques pro-\nvide a global assessment of overall translation per-\nformance without regard to different error types.\nMore fine-grained analyses of individual MT er-\nrors often include manual or (semi-) automatic er-\nror annotation to gain insights into the strengths\nand weaknesses of MT engines (Vilar et al., 2006;\nPopovic and Ney, 2011; Condon et al., 2010; Far-\nreus et al., 2012). There have also been studies of\nhow MT errors influence the work of post-editors\nwith respect to productivity, speed, etc. (Krings,\n2001; O’Brien, 2011) or the performance of back-\nend applications like information retrieval (Parton\nand McKeown, 2010).\nIn contrast to this line of research, there is\nsurprisingly little work that directly investigates\nwhich types of errors are intuitively the most dis-\nliked by users of machine translation. 
Although\nthere is ample anecdotal evidence of users’ reac-\ntions to machine translation, it is difficult to find\nformal, quantitative studies of how users perceive\nthe severity of different translation errors and what\ntrade-offs they would make between different er-\nrors if they were given a choice. User prefer-\nences might sometimes diverge strongly from the\nsystem development directions suggested by auto-\nmatic evaluation procedures. Most automatic pro-\ncedures do not take into consideration factors such\nas the cognitive effort required for the resolution\nof different types of errors, or the emotional re-\nactions they provoke in users. For example, er-\nrors that are inadvertently comical or culturally of-\nfensive might provoke strong negative user reac-\nProceedings of the 16th EAMT Conference, 28-30 May 2012, Trento, Italy\n119\ntions and should thus be weighted more strongly\nby system developers when user acceptance is a\nkey factor in the intended application. On the other\nhand, most users might expect, and thus be forgiv-\ning of, minor grammatical errors. A deeper insight\ninto which errors are perceived as the most egre-\ngious for a particular machine translation appli-\ncation (depending on language pair, domain, etc.)\nis therefore crucial for improving user acceptance.\nIn addition, user adaptation and customization of\nMT engines are emerging as important future di-\nrections for machine translation research, and it is\nnecessary to develop principled strategies for elic-\niting and modeling user preferences. However, de-\nspite a wealth of existing research on computa-\ntional preference elicitation techniques little of it\nhas been applied to machine translation evaluation\nresearch.\nIn this paper we explore the use of conjoint anal-\nysis(CA) to gain knowledge of users’ preferences\nregarding different types of machine translation\nerrors. Conjoint analysis is a formal framework\nfor preference elicitation that was originally de-\nveloped in mathematical psychology and is widely\nused in marketing research (Green and Srinivasan,\n1978). Its typical application is to determine the\nreasons for consumers’ purchasing choices. In\nconjoint analysis studies, participants are asked to\nchoose from, rate, or rank a range of products char-\nacterized by different combinations of attributes.\nStatistical modeling, typically some form of multi-\nnomial regression analysis, is then used to infer the\nvalues (“utilities” or “part-worths”) consumers at-\ntach to different attributes. In a typical marketing\nsetup the attributes might be price, packaging, per-\nformance, etc. In our case the attributes represent\ndifferent types of machine translation errors and\ntheir frequencies. The outcome of conjoint anal-\nysis is a list of values attached to different error\ntypes across a group of users, along with statistical\nsignificance values.\nIn the remainder of this paper we will first give\nan overview of the basic techniques of conjoint\nanalysis (Section 2), followed by a description of\nthe data set (Section 3) and experimental design\n(Section 4). Results and discussion are provided in\nSection 5. Section 6 concludes.\n2 Conjoint Analysis\nConjoint analysis is based on discrete choice the-\nory and studies how the characteristics of a prod-uct or service influence users’ choices and prefer-\nences. 
It is typically used to evaluate and predict\npurchasing decisions in marketing research but\nhas also been used in analyzing migration trends\n(Christiadi and Cushing, 2007), decision-making\nin healthcare settings (Philips et al., 2002), and\nmany other fields. The assumption is that each\nproduct or “concept” can be described by a set of\ndiscrete attributes and their values or “levels”. For\nexample, a laptop can be described by CPU type,\namount of RAM, price, battery life, etc. CA gen-\nerates different concepts by systematically varying\nthe combination of attributes and values and letting\nrespondents choose their preferred one. Clearly,\nthe most preferred and least preferred combina-\ntions are known (e.g. a laptop with maximum CPU\npower, RAM and battery life at the minimum price\nwould be the most preferred). The value of CA\nderives from studying intermediate combinations\nbetween these extremes since they shed light on\nthe trade-offs users are willing to make. In an ap-\npropriately designed CA study, each attribute level\nis equally likely to occur. For a small number of\nattributes and levels, the total number of possible\nconcepts (defined by different combinations of at-\ntributes) is generated and tested exhaustively; if\nthe number of possible combinations is too large,\nsampling techniques are used. The total set of re-\nsponses is then evaluated for main effects (i.e. the\nrelative importance of each individual attribute)\nand for interactions between attributes.\nVarious different approaches to CA have been\ndeveloped. The traditional full-profile CA requires\nrespondents to rate or rank all concepts presented.\nIn choice-based conjoint analysis (CBC) (Lou-\nviere and Woodworth, 1983) several different con-\ncepts are presented, and respondents are required\nto choose one of them. Finally, adaptive con-\njoint analysis dynamically adapts and changes the\nset of concepts presented to respondents based on\ntheir previous choices. CBC is currently the most\nwidely used method of conjoint analysis, due to\nits simplicity: respondents merely need to choose\none of a set of proposed concepts, as task which\nis similar to many real-life decision-making prob-\nlems. The disadvantage is that the elicitation pro-\ncess is less efficient: respondents need to process\nthe entirety of information presented before mak-\ning a choice; therefore, it is advisable to only in-\nclude a small number of concepts to choose from\nin any given task. CBC is thus appropriate for con-\n120\ncepts involving a small number of attributes.\nThe most frequently-used underlying statistical\nmodel for CBC is McFadden’s conditional logit\nmodel (McFadden, 1974). The conditional logit\nmodel specifies the npossible concept choices as\na categorial dependent variable Ywith outcomes\n1;:::;n. The decision of an individual respondent\niin favor of the j0thoutcome is based on a util-\nity valueuij, which must exceed the utility val-\nues for all other outcomes k= 1;:::;n;k6=j.\nIt is assumed that uijdecomposes into a system-\natic or representative part vijand a random part\n\"ij;uij=vij+\"ij. 
A further assumption is that the random components are independent and identically distributed according to the extreme value distribution with cumulative density function

F(\varepsilon_{ij}) = e^{-e^{-\varepsilon_{ij}}}    (1)

The systematic part v_{ij} is modeled as a linear combination \beta' X, where X = \{x_1, \ldots, x_m\} is a vector of m observed predictor variables (the attributes of the alternatives) and \beta is a vector of coefficients indicating the importance of the attributes. Then, the probability that the i-th individual chooses the j-th outcome, P(j|i), can be defined as:

P(j|i) = \frac{e^{\beta' X_{ij}}}{\sum_{k=1}^{n} e^{\beta' X_{ik}}}    (2)

The \beta parameters are typically estimated by maximizing the conditional likelihood using the Newton-Raphson method. For basic CBC an aggregate logit model is used, where responses are pooled across respondents. In this case a single set of \beta parameters is used to represent the average preferences of an entire market, rather than individuals' preferences. This implicitly assumes that respondents form a homogeneous group, which is typically not correct. This oversimplification can be circumvented by applying latent class analysis (Goodman, 1974), which groups respondents into homogeneous subsets and estimates different utility values for each one.
There are numerous advantages to using a formal analysis framework of this type rather than simply questioning users about their experience. First, for a complex "product" like machine translation output, users are notoriously poor at analyzing their own judgments and stating them in explicit terms, especially when they lack linguistic training. It has been noted in the past that it is often difficult for human evaluators to assign consistent ratings for fluency and adequacy, leading to low inter-annotator agreement (Callison-Burch et al., 2007). Requiring users to rank the output from different systems has proven easier but, as discussed in (Denkowskie and Lavie, 2010), it is still difficult for evaluators to produce consistent rankings. By contrast, the CA framework used here only requires the choice of one out of several possibilities. Users are not asked to provide an objective ranking of several translation possibilities but a single, personal choice, which is an easier task. Furthermore, the choice-based design provides a way of observing trade-offs users make with respect to different types and numbers of errors. For instance, from the user's point of view, do three morphological errors in one sentence count as much, more, or less than a single word-sense error? Second, CA provides numerical values ("utilities" or "part-worths") indicating the relative importance of different features of a machine translation output. These might be helpful in machine translation system tuning provided that different error types can be classified automatically. Third, it is also possible to analyze interactions between different attributes, e.g. the effect that a certain combination of errors (e.g. both word order and word sense error present in one sentence) has vs. other combinations. Fourth, different techniques exist to segment the population into different user types (or 'market segments') and estimate different utility values for each.
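The \beta parameters of Equation (2) can be recovered with a few lines of numerical optimization. The following minimal Python sketch is not the code used in this study (the responses here were analyzed with the conditional logit implementation in R, see Section 4); the choice data, attribute names and simulated respondent behaviour are synthetic and purely illustrative. It fits an aggregate conditional logit to choice tasks with three alternatives and reports exp(\beta) as odds ratios:

```python
# Aggregate conditional logit (McFadden, 1974) fit for choice-based conjoint data.
# Illustrative sketch on synthetic data; the study itself used the R implementation.
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(0)

# 500 choice tasks, 3 alternative translations each, 4 attributes per alternative
# (error levels for word order O, word sense S, morphology M, function words F).
n_tasks, n_alts, n_attrs = 500, 3, 4
X = rng.integers(0, 3, size=(n_tasks, n_alts, n_attrs)).astype(float)

# Synthetic respondents who dislike errors (negative true utilities) plus Gumbel
# noise, matching the extreme value assumption of Equation (1).
beta_true = np.array([-1.1, -0.6, -0.4, -0.1])
choices = (X @ beta_true + rng.gumbel(size=(n_tasks, n_alts))).argmax(axis=1)

def neg_cond_loglik(beta):
    scores = X @ beta                                              # (tasks, alts)
    log_probs = scores - np.logaddexp.reduce(scores, axis=1, keepdims=True)
    return -log_probs[np.arange(n_tasks), choices].sum()          # Equation (2)

fit = minimize(neg_cond_loglik, x0=np.zeros(n_attrs), method="BFGS")
for name, b in zip(["O", "S", "M", "F"], fit.x):
    print(f"{name}: beta = {b:+.3f}   exp(beta) = {np.exp(b):.3f}")
```

As discussed in Section 5, exp(\beta) can be read as the multiplicative change in the odds of a translation being chosen for a unit increase in the corresponding error level, holding the other error levels constant.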
How-\never, in this paper only aggregate conjoint analysis\nwill be used, where preferences are analyzed for\nthe entire population surveyed.\n2.1 Conjoint analysis for eliciting machine\ntranslation user preferences\nWhen applying the conjoint analysis framework to\nmachine translation evaluation we treat different\nmachine translations as different products or ”con-\ncepts” between which users may choose. We as-\nsume that users clearly prefer some machine trans-\nlations over others, and that these preferences are\ndependent on the types and frequencies of the er-\nrors present in the translation. Thus, error types\nserve as the attributes of our concepts and the\n(discretized) error frequencies (e.g. high, medium,\nlow) are the levels. Note that there may be other\nfeatures of a translation (e.g. sentence length) that\nmay affect a user’s choice – these are not consid-\nered in this study but they could easily be included\nin future studies.\n121\nIn contrast to most standard applications of con-\njoint analyis a particular combination of attributes\ndefines not only a single concept but a large set of\nconcepts (alternative translations of a single sen-\ntence, or multiple sentences). It is therefore useful\nto consider a representative sample of sentences\nfor each combination of attributes. Thus, com-\npared Eq. 2 we have another conditioning variable\nsranging over sentences:\nP(jji;s) =e\f0Xijs\nPn\nk=1e\f0Xijs(3)\nOur procedure for this study is as follows. First,\nwe select the error types to be investigated. This\nis done by manually annotating machine transla-\ntion errors in our data set and selecting the most\nfrequent error types. The different error frequen-\ncies are quantized into a small number of levels for\neach error type. We then generate different profiles\n(combinations of attributes/levels) and group them\ninto choice tasks – these are the combinations of\nprofiles from which respondents will choose one.\nRespondents’ choices are gathered through Me-\nchanical Turk. Finally, we estimate a single set of\nmodel parameters, aggregating over both respon-\ndents and sentences, and compute statistical sig-\nnificance values. Additionally, we perform predic-\ntion experiments, using the estimated utility values\nto predict users’ choices on held-out data.\n3 Data\nThe data used for the present study was collected\nas part of a research project on applying machine\ntranslation to the public health domain. It con-\nsists of information materials on general health and\nsafety topics (e.g. HIV , STDs, vaccinations, emer-\ngency preparedness, maternal and child health, di-\nabetes, etc.) collected from a variety of English-\nlanguage public health websites. The documents\nwere translated into Spanish by Google Translate\n(http://www.google.com/translate). 60 of these\ndocuments were then manually annotated for er-\nrors by two native speakers of Spanish. Our error\nannotation scheme is similar to other systems used\nfor Spanish (Vilar et al., 2006) and comprises the\nfollowing categories:\n1.Untranslated word. 
These are original En-\nglish words that have been left untranslated\nby the MT engine and that are not proper\nnames or English words in use in Spanish.Type % Subtypes %\nMorphology 28.2 Verbal 15.8\nNominal 12.4\nMissing word 16.7 Function word 12.6\nContent word 4.1\nWord sense error 16.1\nWord order error 9.7 short range 8.0\nlong range 1.7\nPunctuation 9.1\nOther 5.9\nSpelling 5.1\nSuperfluous word 4.7 Function word 3.8\nContent word 0.9\nCapitalization 2.7\nUntranslated word 1.1 medical term 0.0\nproper name 0.2\nother 0.9\nPragmatic 1.0\nDiacritics 0.2\nTotal 100.0\nTable 1: Error statistics from manual consensus\nannotation of 25 documents. The two right-hand\ncolumns show error subtypes.\n2.Missing word. A word necessary in the out-\nput is missing – a further distinction is made\nbetween missing function words and missing\ncontent words.\n3.Word sense error. The translation reflects a\nword sense of the English word that is wrong\nor inappropriate in the present context.\n4.Morphology. The morphological features of\na word in the translation are wrong.\n5.Word order error. The word order is\nwrong – a further distinction is made between\nshort-range errors (within a linguistic phrase,\ne.g. adjective-noun ordering errors) and long-\nrange errors (spanning a phrase boundary).\n6.Spelling. Orthographic error.\n7.Superfluous word. A word in the translation\nis redundant or superfluous.\n8.Diacritics. The diacritics are faulty (missing,\nsuperfluous, or wrong).\n9.Punctuation. Punctuation signs are missing,\nwrong, or superfluous.\n10.Capitalization. Missing or superfluous capi-\ntalization.\n11.Pragmatic/Cultural error. The translation\nis unacceptable for pragmatic or cultural rea-\nsons, e.g. offensive or comical.\n12.Other. Anything not covered by the above\ncategories.\nAnnotators were linguistically trained and were su-\npervised in their annotation efforts.\nFor a subset of 25 of these documents (1804\nsentences), the annotators were instructed to create\n122\na consensus error annotation, and to subsequently\ncorrect the errors, thus producing consensus refer-\nence translations. Computing BLEU/PER scores\nagainst the corrected output yields a BLEU score\nof 65.8 and a PER of 19.8%. Unsurprisingly, these\nscores are very good since the reference transla-\ntions are corrections of the original output rather\nthan independently created translations – however,\nannotators independently judged the overall trans-\nlation quality as quite good as well. The detailed\nerrors statistics computed from the 25 documents\nis shown in Table 1. The most frequent error types\nare, in order: morphological errors, word sense er-\nrors, missing function words, and word order er-\nrors. Based on this we defined four error types to\nbe used as the attributes in our conjoint analysis\nstudy: word sense errors (S), morphology errors\n(M), word order errors (O) and function word er-\nrors (F) – the latter includes both missing and su-\nperfluous function words. For word sense, word\norder, and function word errors we defined two\nvalues (levels): high (H) and low (L). Since mor-\nphology errors are much more frequent than others\nwe use a three-valued attribute in this case (high,\nmedium (M), and low).\nFrom these documents we selected 40 sen-\ntences, each of which contained a minimum of one\ninstance each of sense, order and function word\nerrors, and a minimum of two instances of mor-\nphological errors. 
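Using the S/M/O/F attributes and levels just defined, the full set of attribute-level combinations (the profiles into which each sentence is edited) can be enumerated directly. A small illustrative sketch, with the level names taken from the description above:

```python
# Enumerate all attribute/level combinations for the four error-type attributes:
# word sense (S), word order (O) and function words (F) are binary (H/L),
# morphology (M) has three levels (H/M/L). Illustrative sketch only.
from itertools import product

LEVELS = {"S": ["H", "L"], "M": ["H", "M", "L"], "O": ["H", "L"], "F": ["H", "L"]}

profiles = [dict(zip(LEVELS, combo)) for combo in product(*LEVELS.values())]

print(len(profiles))   # 2 * 3 * 2 * 2 = 24 profiles
print(profiles[0])     # {'S': 'H', 'M': 'H', 'O': 'H', 'F': 'H'}
```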
Based on the error annotations\nand their manual corrections, each sentence can be\nedited selectively to reflect different attribute lev-\nels, i.e. different numbers of errors of a given type.\nFor example, different versions of a sentence are\ncreated that exhibit a high, medium, or low level\nof morphological errors. The variable number of\nerrors are mapped to the discrete attribute levels as\nfollows: If the total number of errors for a given\ntype is\u00142, then H = 2 errors and L = 0 errors\nfor the binary attributes, and H=2, M=1, L=0 for\nthe three-valued attribute. When the number of er-\nrors is larger than 2, the interval size for each level\nis defined by the number of errors divided by the\nnumber of levels, rounded to the nearest integer.\nThe number of all possible different combina-\ntions of attributes/levels is 24; thus, for each sen-\ntence, 24 concepts or “profiles” are constructed. A\npartial example is shown in Table 2.4 Experiments\nWe chose a full factorial experiment design,\ni.e. each of the 24 possible profiles was utilized\nfor each of the 40 sentences. Each partially-edited\nsentence represents a different profile. However,\nnot all 24 profiles can be presented simultaneously\nto a single respondent – typically, CBC surveys\nneed to be kept as small and simple as possible to\nprevent respondents from resorting to simplifica-\ntion strategies and delivering noisy response data.\nProfiles were grouped into choice tasks with three\nalternatives each, representing a balanced distribu-\ntion of attribute levels.\nFor each survey, 4 choice tasks were randomly\nselected from the total set of choice tasks. The\nquestions in the survey thus included profiles per-\ntaining to different sentences, which was intended\nto avoid respondent fatigue. Surveys were pre-\nsented to respondents on the Amazon Mechanical\nTurk platform. For each choice task, Turkers were\ninstructed to carefully read the original source sen-\ntence and the translations provided, then choose\nthe one they liked best (an obligatory choice ques-\ntion with the possibility of choosing exactly one\nof the alternatives provided), and to state the rea-\nson for their preference (an obligatory free-text an-\nswer). The latter was included as a quality con-\ntrol step to prevent Turkers from making random\nchoices. The set of Turkers was limited to those\nwho had previously delivered high-quality results\nin other Spanish translation and annotation HITs\nwe had published on Mechanical Turk. In total we\npublished 240 HITs (surveys) with 4 choice tasks\nand 3 assignments each, resulting in a total of 2880\nresponses. A total of 29 workers completed the\nHITs, with a variable number of HITs per worker.\nThe responses were analyzed using the conditional\nlogit model implementation in the R package.1\n5 Results and Discussion\nWe first measured the overall agreement among\nthe three different responses per choice task us-\ning Fleiss’s Kappa (Fleiss, 1971). The kappa co-\nefficient was 0.35, which according to (Landis\nand Koch, 1977) constitutes “fair agreement” but\ndoes indicate that there is considerable variation\namong workers regarding their preferred transla-\ntion choice. We next estimated the coefficients of\nthe conditional logit model considering main ef-\n1http://www.r-project.org\n123\nNo. 
Attributes Sentence\n1 S=H:M=H:O=H:F=H Planear con anticipaci ´on y tomar un atajo pocos ahorrar su tiempo y su dinero para alimentos.\n2 S=H:M=H:O=H:F=L Planear con anticipaci ´on y tomar un atajo le pocos ahorrar su tiempo y su dinero para la alimentos.\n3 S=H:M=H:O=L:F=H Planear con anticipaci ´on y tomar un pocos atajo ahorrar su tiempo y su dinero para alimentos.\n4 S=H:M=H:O=L:F=L Planear con anticipaci ´on y tomar un pocos atajo le ahorrar su tiempo y su dinero para la alimentos.\n5 S=H:M=M:O=H:F=H Planear con anticipaci ´on y tomar un atajo pocos ahorrar su tiempo y su dinero para alimentos.\n6 S=H:M=M:O=H:F=L Planear con anticipaci ´on y tomar un atajo le pocos ahorrar su tiempo y su dinero para la alimentos.\n7 S=H:M=M:O=L:F=H Planear con anticipaci ´on y tomar un pocos atajo ahorrar ´a su tiempo y su dinero para alimentos.\n8 S=H:M=M:O=L:F=L Planear con anticipaci ´on y tomar un pocos atajo le ahorrar ´a su tiempo y su dinero para la alimentos.\n9 S=H:M=L:O=H:F=H Planear con anticipaci ´on y tomar unos atajos pocos ahorrar ´a su tiempo y su dinero para alimentos.\n10 S=H:M=L:O=H:F=L Planear con anticipaci ´on y tomar unos atajos le pocos ahorrar ´a su tiempo y su dinero para la alimentos.\netc. etc.\n24 S=L:M=L:O=L:F=L Planear con anticipaci ´on y realizar unos pocos recortes le ahorrar ´a su tiempo y su dinero para la comida.\nTable 2: Examples of the 24 attribute combinations and corresponding partially-edited translations for\nthe English input sentence Planning ahead and taking a few short cuts will save both your time and your\nfood dollars.\n.\nVariable \f exp(\f )\u000b\nO -1.125 0.3246 0.001\nS -0.6302 0.5325 0.001\nM -0.4034 0.6680 0.001\nF -0.1211 0.8859 0.001\nTable 3: Estimated coefficients in the conditional\nlogit model and associated significance levels (\u000b )\n– main effects. O = word order, S = word sense, M\n= morphology, F = function words.\nfects only. The model’s \fcoefficients, exponenti-\nated\f’s, and significance values are shown in Ta-\nble 3. It is easiest to interpret the exponentiated\n\fcoefficients: these represent the change in the\nodds (i.e. odds ratios) of the error type being as-\nsociated with the chosen translation, for each unit\nincrease in the error level and while holding other\nerror levels constant. For example, if the level\nof word sense errors is increased by 1 (i.e. goes\nfrom low to high) while other error types are be-\ning held constant, the odds of the corresponding\ntranslation being chosen decrease by a multiplica-\ntive factor of 0.5325 (i.e. roughly 50%). Overall\nwe see that word order errors are the most dispre-\nferred, followed by word sense, morphology, and\nfunction word errors. All values are highly sig-\nnificant (p < 0:001, two-sided z-test). We next\ntested all pairwise interactions between individual\nattributes. An interaction between two attributes\nmeans that the impact of one attribute on the out-\ncome is dependent on the level of the other at-\ntribute. We found two statistically significant in-\nteractions, between word sense and function wordVariable \f exp(\f )\u000b\nO -1.149e+00 3.169e-01 0.001\nS -1.079e+00 3.398e-01 0.001\nM -6.971e-01 4.980e-01 0.001\nF -8.932e-01 4.094e-01 0.001\nM:F 2.081e-01 1.231e+00 0.001\nS:F 2.649e-01 1.303e+00 0.01\nTable 4: Estimated coefficients in the conditional\nlogit model and associated significance values (\u000b )\n– interactions. O = word order, S = word sense,\nM = morphology, F = function words. 
Variables\ncontaining “:” denote interaction terms.\nerrors, and between morphological and function\nword errors. The meaning of the coefficients in\nTable 4 changes with the introduction of interac-\ntion terms, and they cannot directly be compared\nto those in Table 3. In particular, the exp(\f )for\nM:F and S:F now need to be interpreted as ratios\nof odds ratios for unit increases in the attribute lev-\nels. The values (> 1) indicate that the odds ratio\nof a positive choice associated with a unit increase\nin function word error level actually increases as\nthe level of M or S errors rises – e.g. the odds ratio\nfor S=high is 0.4462 (exp(\f S+\fS:F) vs. 0.3398\nfor S=low). This means that function word errors\nhave a stronger impact on respondents’ choices at\nlow levels of morphological or word sense errors;\nby contrast, when the level of the latter is high,\nrespondents are less sensitive to function word er-\nrors. This effect is also observable for word order\nand function word errors but it is not statistically\nsignificant.\n124\nAccuracy (%) Stddev\nClogit 54.68 1.99\nFewest errors 49.49 2.70\nRandom 33.33 0.0\nTable 5: Average cross-validation accuracy and\nstandard deviation of conditional logit model,\nfewest-errors-baseline, and random baseline.\nA standard way of validating the overall ex-\nplanatory power of the model is to perform predic-\ntion on a held-out data set. To this end we compute\nthe probability of each choice in a set according to\nEq. 3 by inserting the estimated \fcoefficients and\ntake the max over j, which can be simplified as:\nj\u0003=maxj\f0Xijs (4)\n(5)\nThe percentage of correctly identified outcomes\n(the “hit rate” or accuracy) is then used to assess\nthe quality of the model.\nWe performed 8-fold cross-validation. For each\nfold one eighth of the data for each sentence was\nassigned to the test set; the rest was assigned to\nthe training set. Table 5 shows the average accura-\ncies for our conditional logit model as well as two\nbaselines. The first is the random baseline – each\ntraining/test sample is a choice task with 3 alterna-\ntives; thus, choosing one alternative randomly re-\nsults in a baseline accuracy of 33.3%. The second\nbaseline consists of choosing the alternative with\nthe lowest number of errors overall. This leads to\naccuracies ranging from 45.75%-53.75%, with an\naverage of 49.59%. The accuracies obtained by\nour model with the fitted coefficients range from\n53.00%-58.75%, with an average of 54.06%. This\nis significantly better than the random baseline and\nclearly better (though not statistically significant)\nthan the fewest-errors baseline. Nevertheless there\nclearly is room for improvement in the predictive\naccuracy of the model. The model shows virtu-\nally the same performance (54.04% accuracy on\naverage) on the training data; thus, generalization\nability is not the problem here. Rather, the diffi-\nculty lies in the underlying variability of the data to\nbe modelled, in particular the diversity of the user\ngroup and the sentence materials. For example,\nno distinction has been made between short-range\nand long-range word order errors, although it may\nbe assumed that long-range word order errors are\nconsidered more severe by users than short-rangeerrors. Another source of variability is the respon-\ndent population itself – since we only used aggre-\ngate conjoint analysis in this study, preferences are\naveraged over the entire population, ignoring po-\ntential sub-classes of users. 
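The prediction rule of Equation (4) and the baseline comparison in Table 5 amount to very little code. A hedged sketch follows: the held-out choice tasks and observed choices are synthetic, only the coefficients are taken from Table 3, and the fewest-errors baseline simply sums the error levels of each alternative.

```python
# Hit-rate evaluation of a fitted conditional logit model against two baselines,
# following Equation (4) and Table 5. Synthetic held-out data; illustrative only.
import numpy as np

rng = np.random.default_rng(1)
beta_hat = np.array([-1.125, -0.6302, -0.4034, -0.1211])     # O, S, M, F (Table 3)

X_test = rng.integers(0, 3, size=(200, 3, 4)).astype(float)  # (tasks, alternatives, attributes)
y_test = rng.integers(0, 3, size=200)                         # observed choices (synthetic here)

pred_model = (X_test @ beta_hat).argmax(axis=1)               # Eq. (4): argmax_j beta' X_ijs
pred_fewest = X_test.sum(axis=2).argmin(axis=1)               # fewest-errors baseline
pred_random = rng.integers(0, 3, size=200)                    # random baseline

for name, pred in [("clogit", pred_model), ("fewest errors", pred_fewest), ("random", pred_random)]:
    print(f"{name}: hit rate = {np.mean(pred == y_test):.3f}")
```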
It may well be pos-\nsible that some user types are more accepting of\ne.g. word-order errors than word sense errors, or\nvice versa – recall that the agreement coefficient\non the top choice was only 0.35. Finally, another\nconfounding factor might be the quality of the Me-\nchanical Turk data. Although we took several steps\nto ensure reasonable results, responses may not be\nas reliable as in a face-to-face study with respon-\ndents.\n6 Conclusions and Future Work\nWe have studied the use of conjoint analysis to\nelicit user preferences for different types of ma-\nchine translation errors. Our results confirms that,\nat least for the language pair and population stud-\nied, users do not necessarily rely on the overall\nnumber of errors when expressing their prefer-\nences for different machine translation outputs. In-\nstead, some error types affect users’ choices more\nstrongly than others. Of the different error types\nconsidered in this study, word order errors have\nthe lowest frequency in our data but are the most\ndispreferred error type, followed by word sense er-\nrors. The most frequent error type in our data, mor-\nphology errors, is ranked third, and function word\nerrors are the most tolerable. The viability of the\nconjoint analysis framework was demonstrated by\nshowing that the prediction accuracy of the fitted\nmodel exceeds that of a random or fewest-errors\nbaseline.\nIn future work the overall predictive power of\nthe model could be improved by more fine-grained\nmodeling of different sources of variability in the\ndata. Specifically, we plan to compare the present\nresults to results from face-to-face experiments, in\norder to gauge the reliability of crowd-sourced data\nfor conjoint analysis. In addition, latent class anal-\nysis will be used in order to obtain preference mod-\nels for different user types. In the long run, such\nmodels could be exploited for rapid user adapta-\ntion of machine translation engines after eliciting a\nfew basic preferences from the user. Utility values\nobtained by conjoint analysis might also be used\nin MT system tuning, by appropriately weighting\ndifferent error types in proportion to their utility\nvalues; however, this would require high-accuracy\n125\nautomatic classification of different error types.\nAnother way of extending the present analysis\nis to elicit user preferences in the context of a spe-\ncific task to be accomplished; for instance, users\ncould be asked to indicate their preferred transla-\ntion when faced with the tasks of postediting or\nextracting information from the translation. Fi-\nnally, it is also possible to investigate a larger set\nof error types than those considered in this study.\nThese may include different types of word order\nerrors (long-range vs. short-range), consistency er-\nrors (where a source term is not translated con-\nsistently in the target language throughout a doc-\nument), or named-entity errors.\nAcknowledgments\nWe are grateful to Aurora Salvador Sanchis and\nLorena Ruiz Marcos for providing the error anno-\ntations and corrections. This study was funded by\nNLM grant 1R01LM010811-01.\nReferences\nCallison-Burch, C., C. Fordyce, P. Koehn, C. Monz,\nand J. Schroeder. 2007. (Meta-)evaluation of ma-\nchine translation. In Proceedings of WMT, pages\n136–158.\nChristiadi and B. Cushing. 2007. Conditional logit,\nIIA, and alternatives for estimating models of inter-\nstate migration. 
In Proceedings of the 46th Annual\nMeeting of the Southern Regional Science Associa-\ntion.\nCondon, S., D. Parvaz, J. Aberdeen, C. Doran, A. Free-\nman, and M. Awad. 2010. Evaluation of machine\ntranslation errors in English and Iraqi Arabic. In\nProceedings of LREC.\nDenkowskie, M. and A. Lavie. 2010. Choosing the\nright evaluation for machine translation: an examina-\ntion of annotator and automatic metric performance\non human judgment tasks. In Proceedings of AMTA.\nFarreus, M., M.R. Cosa-Jussa, and M. Popovic Morse.\n2012. Study and correlation analysis of linguistic,\nperceptual, and automatic machine translation eval-\nuations. Journal of the American Society for Infor-\nmation Science and Technology, 63(1):174–184.\nFleiss, J.L. 1971. Measuring nominal scale agree-\nment among many raters. Psychological Bulletin,\n76(5):378–382.\nGoodman, L.A. 1974. Exploratory latent structure\nanalysis using both identifiable and unidentifiable\nmodels. Biometrika, 61(2):215–231.\nGreen, P. and V . Srinivasan. 1978. Conjoint analysis in\nconsumer research: Issues and outlook. Journal of\nConsumer Research, 5:103–123.Krings, H. 2001. Empirical Investigations of Machine\nTranslation Post-Editing Processes. Kent State Uni-\nversity Press.\nLandis, J.R. and G.G. Koch. 1977. The measurement\nof observer agreement for categorical data. Biomet-\nrics, 33:159174.\nLavie, A. and A. Agarwal. 2007. Meteor: An au-\ntomatic metric for MT evaluation with high levels\nof correlation with human judgments. In Proceed-\nings of the Second Workshop on Statistical Machine\nTranslation, pages 28–231.\nLDC. 2005. Linguistic data annotation specification:\nAssessment of fluency and adequacy in translations.\nrevision 1.5. Technical report, LDC.\nLouviere and Woodworth. 1983. Design and analysis\nof simulated consumer choice experiments: an ap-\nproach based on aggregate data. Journal of Market-\ning Research, 20(4):350–67.\nMcFadden, D.L. 1974. Conditional logit analysis of\nqualitative choice behavior. In Zarembka, P., edi-\ntor,Frontiers in Econometrics, pages 105–142. Aca-\ndemic Press: New York.\nO’Brien, S., editor. 2011. Cognitive Explorations of\nTranslation: Eyes, Keys, Taps. Continuum.\nPapineni, K., S. Roukos, and T. Ward. 2002. BLEU: a\nmethod for automatic evaluation of machine transla-\ntion. In Proceedings of ACL, pages 311–318.\nParton, K. and K. McKeown. 2010. MT error detec-\ntion for cross-lingual question answering. In Pro-\nceedings of Coling.\nPhilips, K., T. Maddala, and F.R. Johnson. 2002. Mea-\nsuring preferences for health care interventions using\nconjoint analysis. Health Services Research, pages\n1681–1705.\nPopovic, M. and H. Ney. 2011. Towards automatic\nerror analysis of machine translation output. Com-\nputational Linguistics, 37(4):657–688.\nSnover, M., B. Dorr, R. Schwartz, L. Micciulla, and\nJ. Makhoul. 2006. A study of translation edit rate\nwith targeted human annotation. In Proceedings of\nAMTA.\nVilar, D., J. Xiu, L.F. D’Haro, and H. Ney. 2006. Error\nanalysis of statistical machine translation output. In\nProceedings of LREC.\n126", "main_paper_content": null }
{ "decision": "Unknown", "reviews": [] }
0
0
[]
[]
{ "id": "2mV8BJUxzoX", "year": null, "venue": "EAMT 2023", "pdf_link": "https://aclanthology.org/2023.eamt-1.20.pdf", "forum_link": "https://openreview.net/forum?id=2mV8BJUxzoX", "arxiv_id": null, "doi": null }
{ "title": "State Spaces Aren't Enough: Machine Translation Needs Attention", "authors": [ "Ali Vardasbi", "Telmo Pessoa Pires", "Robin M. Schmidt", "Stephan Peitz" ], "abstract": null, "keywords": [], "raw_extracted_content": "State Spaces Aren’t Enough: Machine Translation Needs Attention\nAli Vardasbi†∗\nUniversity of Amsterdam\[email protected] Pessoa Pires†Robin M. Schmidt Stephan Peitz\nApple\n{telmo, robin_schmidt, speitz}@apple.com\nAbstract\nStructured State Spaces for Sequences (S4)\nis a recently proposed sequence model with\nsuccessful applications in various tasks,\ne.g. vision, language modeling, and au-\ndio. Thanks to its mathematical formula-\ntion, it compresses its input to a single hid-\nden state, and is able to capture long range\ndependencies while avoiding the need for\nan attention mechanism. In this work, we\napply S4 to Machine Translation (MT), and\nevaluate several encoder-decoder variants\non WMT’14 and WMT’16. In contrast\nwith the success in language modeling, we\nfind that S4 lags behind the Transformer by\napproximately 4BLEU points, and that it\ncounter-intuitively struggles with long sen-\ntences. Finally, we show that this gap is\ncaused by S4’s inability to summarize the\nfull source sentence in a single hidden state,\nand show that we can close the gap by in-\ntroducing an attention mechanism.\n1 Introduction\nThe Transformer (Vaswani et al., 2017) is the most\npopular architecture for state-of-the-art Natural\nLanguage Processing (NLP) (Devlin et al., 2019;\nBrown et al., 2020; NLLB Team et al., 2022). How-\never, the attention mechanism on which it is built\nis not well suited for capturing long-range depen-\ndencies due to its quadratic complexity (Ma et al.,\n2023). Recently, Structured State Spaces for Se-\nquences (S4) was shown to be on par with the\n©2023 The authors. This article is licensed under a Creative\nCommons 4.0 licence, no derivative works, attribution, CC-\nBY-ND.\n†Equal contribution.\n∗Work done during an internship at Apple.Transformer on various sequence modelling tasks,\nincluding time series forecasting, language model-\ning (Gu et al., 2022), and audio generation (Goel et\nal., 2022); and to surpass the Transformer on tasks\nrequiring reasoning over long range dependencies,\nlike the Long Range Arena (Tay et al., 2021).\nInternally, S4 keeps a state-space based represen-\ntation. Due to the way its weights are initialized,\nit is able to approximately “memorize” the input\nsequence, removing the need for an attention mech-\nanism. Indeed, the results from Gu et al. (2022)\nshow that the self-attention layers can be replaced\nby S4 layers without losing accuracy, and that it is\nable to effectively model long-range dependencies\nin data. Moreover, one of the key advantages of the\nS4 kernel is that its forward step can be formulated\nboth as a convolution and as a recurrence formula,\nallowing fast implementation during training, when\nthe convolution method is used, while the recur-\nrence formula is used to generate the output step by\nstep during inference.\nS4’s competitive performance in Language Mod-\neling (LM) promises an alternative to the Trans-\nformer for other sequence modeling tasks, such as\nMachine Translation (MT). In this work, we ex-\nplore S4-based architectures for MT. Our goal is\nto find the best performing S4 architecture, and we\nstudy the impact of several architectural choices\non translation accuracy, namely the effect of model\ndepth, the number of S4 blocks, and the importance\nof the encoder. 
Despite our best efforts, our top per-\nforming attention-free S4 model lags significantly\n(∼4BLEU points) behind the Transformer, with\nthe gap increasing with input length. We hypothe-\nsize this is due to the fact that S4 compresses the\nsource sentence to a fixed-size representation, and\nthus lacks a way to access the token-level states\nof the source, which is important for MT. As the\ninput length increases, it becomes increasingly hard\nfor the model to accurately store the full source\nsentence in a single hidden state. In contrast, the\ndecoder cross-attention in the Transformer acts as a\nretrieval mechanism, allowing to accurate retrieval\nof the source sentence during decoding. Armed\nwith this observation, we enhance S4 with cross-\nattention, and show this is enough to close the gap\nto the Transformer. Finally, we combine the Trans-\nformer and S4 into an hybrid architecture that out-\nperforms both of them.\nTo summarize, the main contributions of the present\nwork are:\n1. We present an in-depth study of S4 for MT.\n2.We provide evidence that S4 learns self-\ndependencies , i.e. dependencies between the\ntokens of a single sequence, but struggles to\ncapture cross-dependencies , i.e. dependencies\nbetween the tokens of two sequences, as it\nlacks a way to retrieve prior states.\n3.We show that extending S4 with an attention\nmechanism allows it to more accurately cap-\nture cross-dependencies and to close the gap\nto the Transformer on MT.\n2 Background\nIn this section, we provide a brief overview of S4\nand Machine Translation.\n2.1 Structured State Space Models\nThe continuous state space model (SSM) is defined\nby:\nx′(t) =Ax(t) +Bu(t)\ny(t) =Cx(t) +Du(t),(1)\nwhere u(t)is a 1D input signal that is mapped to\nthe latent state x(t)and finally to the output y(t).\nA,B,C, andDare learned parameters. Similar\nto Gu et al. (2022), we assume D= 0 since it is\nequivalent to a residual connection.\nDiscretization Following Gu et al. (2022), we\ndiscretize Equation (1) to apply it to discrete se-\nquences:\nxk=Axk−1+Buk\nyk=Cxk,(2)\nwhere A∈RN×N,B∈RN×1,C∈R1×Nare\ncomputed using a bilinear approximation with stepsize∆1:\nA= (I−∆/2·A)−1(I+ ∆/2·A)\nB= (I−∆/2·A)−1∆B\nC=C,(3)\nandu(t)is sampled at uk=u(k∆).\nEquation (2) is designed to handle 1D input signals.\nIn practice, inputs are rarely 1D, but rather high-\ndimensional feature vectors, such as embeddings.\nTo handle multiple features, Gu et al. (2022) use\none independent SSM per dimension. These inde-\npendent SSMs are then concatenated and mixed\nusing a linear layer. For example, if a model has a\nstate size of 64and a hidden size of 512, it will con-\ntain512independent SSMs (Equation (1)). Each of\nthese SSMs has a size of 64and processes a single\nfeature. The 1D outputs of these 512models are\nconcatenated, and a linear transformation is applied.\nThis process is referred to as an S4 block , which\ninvolves concatenating all the independent SSMs\n(one Equation (2) for each feature), followed by\na mixing layer, a residual connection, and Layer\nNormalization (Ba et al., 2016).\nHiPPO Matrix A careful initialization of the A\nmatrix is necessary to reduce exploding/vanishing\ngradient (Gu et al., 2022). Gu et al. (2020) proposed\nHiPPO-LegS matrices, which allow the state x(t)\nto memorize the history of the input u(t):\nAnk=−\n\n(2n+ 1)1/2(2k+ 1)1/2ifn > k\nn+ 1 ifn=k\n0 ifn < k\nwhere Ankis the entry on row nand column k.\nFollowing Gu et al. 
(2022), we initalize Awith the\nabove equation but train it freely afterwards.\nStructured State Spaces (S4) Finally, Gu et al.\n(2022) introduced a set of techniques to make the\ntraining of the above architecture more efficient.\nThese include directly computing the output se-\nquence at training time using a single convolution\n(denoted with ∗):\ny=K∗uk. (4)\nwhere Kis a kernel given by:\nK:=\u0010\nCAiB\u0011\ni∈[L]\n=\u0010\nCB,CAB , . . . ,CAL−1B\u0011\n,(5)\n1Since for Machine Translation the step size does not change,\nwe use ∆ = 1 .\nEmb\n Emb\n Emb\nMulti-Head Self-Attention\nAdd & Norm\nAdd & Norm\nSoftmax\nEmb\n Emb\n EmbEncoder\nStack\nMLP\nOutput Projection\nMasked Multi-Head Self-Attention\nAdd & Norm\nAdd & Norm\nMLP\nMulti-Head Cross-Attention\nAdd & Norm\nDecoder\nStack(a) Transformer (T R-TR)\nEmb\n Emb\n Emb\nAdd & Norm\nSoftmax\nEmb\n Emb\n EmbEncoder\nStack\nMLP\nOutput Projection\nAdd & Norm\nMLP\nDecoder\nStack\nHiPPO Kernel\nMLP\nAdd & NormS4 Blocks\nHiPPO Kernel\nMLP\nAdd & Norm\nConcat\nAdd & Norm\nMulti-Head Cross-AttentionAttention only\nenabled for\nS4A variant\nS4 Blocks(b) State Spaces (S4-S4 and ∅-S4)\nFigure 1: Overview of the architectures used. The Transformer architecture (a) is compared to a\nS4 architecture with an optional encoder (b). “Add & Norm” represents the residual connection and\nnormalization blocks used. The attention module is used only for the S4 Avariant (see Section 4.4).\nandLis the sequence length. At inference time,\nEquation (2) is applied step-by-step. For more de-\ntails, see Gu et al. (2022).\n2.2 Machine Translation (MT)\nLet(x1:n, y1:m)be a source and target sentence\npair. The negative log-likelihood of ygiven xcan\nbe written as:\n−logp(y1:m|x1:n) =−mX\ni=1logp(yi|x1:n, y<i),(6)\nwhere p(yi|x1:n, y<i)is modeled using a neural\nnetwork. In encoder-decoder models, such as the\nTransformer (Vaswani et al., 2017), the model has\ntwo main components: an encoder, responsible for\ncapturing source-side dependencies, and a decoder,\nwhich captures both target-side and source-target\ndependencies.\nAlternatively, MT can be treated as a Language\nModeling task, where the (decoder-only) model\nis trained on the concatenated source and target\nsentences, separated with a special [SEP] token in\nbetween (Wang et al., 2021; Gao et al., 2022). Fol-\nlowing this approach, the negative log-likelihood is\nwritten as:\n−logp(y1:m, x1:n) =LAE\nz }| {\n−nX\nj=1logp(xj|x<j) +\n−mX\ni=1logp(yi|x1:n, y<i)\n| {z }\nLMT.(7)\nTheLAEterm corresponds to the source reconstruc-\ntion loss, while LMTis identical to Equation (6).Since our focus is on MT, we only need to optimize\nthe second term, i.e., LMT. In our experiments, in-\ncluding both loss terms degraded translation quality\n(see Appendix A). Therefore, for our decoder-only\nmodels using only the second term, LMT.\n2.3 Transformer\nTransformers (Vaswani et al., 2017) are the state-of-\nthe-art architecture for MT. We show a typical ar-\nchitecture in Figure 1a. In particular, both encoder\nand decoder layers have self-attention and multi-\nlayer perceptron (MLP) modules, and the decoder\nlayer has an extra cross-attention module.\nTo simplify the text, we will refer to the architec-\ntures we discuss as [ENC]-[D EC], where [ENC]\nand[DEC]refer to the architecture used. For ex-\nample, the Transformer model in Figure 1a will be\nreferred to as TR-TR, since both the encoder and\ndecoder are from the Transformer.\n3 S4 for Machine Translation\n3.1 Base Architecture\nFollowing Gu et al. 
(2022), our architectures are\nbased on the Transformer, but with the S4 block\n(Section 2) replacing self-attention. In our initial ex-\nperiments, we intentionally omitted the use of cross-\nattention in our models to determine whether S4’s\ninternal states alone suffice in capturing long-range\ndependencies for MT. We call the Bconsecutive\nS4 blocks together with the MLP layer, followed\nby a residual connection and normalization, one S4\nlayer . Gu et al. (2022) use B= 2.\nWe consider two approaches (Figure 1b): a decoder-\nonly model ( ∅−S4), and an encoder-decoder archi-\ntecture ( S4-S4). Our decoder-only model is based\non Gu et al. (2022), which was shown to perform\nwell in language modeling. This model is designed\nto predict the next target token by taking as input the\nconcatenated source and the previously predicted\ntarget tokens. Our S4-S4encoder-decoder architec-\nture consists of LES4encoder layers and LDS4\ndecoder layers, without cross-attention. Instead, we\nuse a simple method to propagate information be-\ntween the encoder and the decoder: concatenating\nthe encoder outputs with the shifted target sequence.\nThis way, the decoder processes both the encoder\noutputs and the target tokens.2\nFinally, for some of the latter experiments, we con-\nsider the case where encoder is bidirectional, which\nwe will refer to as S4BI. In this configuration, the\nS4 blocks have two sets of parameters ( A,Band\nC), one per direction.\n3.2 S4 with Cross-Attention\nIn our later experiments, we employ a modified\nS4 decoder architecture, S4A (S4 with Attention).\nS4A can be used with either a Transformer or S4\nencoder. It incorporates a multihead cross-attention\nmodule on top of the HiPPO kernel, as shown in\nFigure 1b. Specifically, cross-attention is inserted\nabove the “Add & Norm” layer in the S4 block,\nfollowed by another “Add & Norm” layer, similar to\nthe Transformer architecture. When cross-attention\nis employed, we no longer concatenate the encoder\noutputs to the shifted target sequence.\n4 Results\nIn this section, we describe the experimental setup,\nand discuss our results.\n4.1 Experimental Setup\nData We run experiments on WMT’14\nEnglish ↔German ( EN↔DE,4.5M sentence pairs),\nand WMT’16 English ↔Romanian ( EN↔RO,\n610K sentence pairs), allowing us to measure\nperformance on four translation directions. For\nour analysis, we focus on EN→DE. We tokenize\nall data using the Moses tokenizer and apply the\nMoses scripts (Koehn et al., 2007) for punctuation\n2Ideally, we would initialize the S4decoder state spaces with\nthe last state of the encoder. However, this is non-trivial to\nimplement, since the forward step is executed as a single convo-\nlution during training. We leave the exploration of this method\nto future work.normalization. We use Byte-pair encoding\n(BPE, Sennrich et al. (2016)) with 40,000merge\noperations, and the WMT’16 provided scripts\nto normalize EN↔ROfor the ROside, and to\nremove diacritics when translating RO→EN.\nTranslations into Romanian keep diacritics to\ngenerate accurate translations. We evaluate using\nsacreBLEU3version 2.1.0 (Post, 2018), with\nsignature nrefs:1 | case:mixed | eff:no |\ntok:13a | smooth:exp . We run all experiments\nusing FAIRSEQ (Ott et al., 2019), onto which we\nported the code from Gu et al. (2022)4.\nUnless stated otherwise, we report BLEU scores on\nthe WMT’14 E N→DEvalidation set.\nHyperparameters We optimize using ADAM\n(Kingma and Ba, 2015). 
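As a concrete illustration of Section 2.1, the following toy NumPy check shows that stepping the recurrence of Equation (2) and convolving the input with the kernel of Equations (4) and (5) give the same output. It uses a small random stable SSM rather than the HiPPO initialization and is not the implementation used in these experiments:

```python
# Toy check that the recurrent and convolutional views of a discrete SSM agree.
# Random stable SSM instead of the HiPPO initialization; illustrative only.
import numpy as np

rng = np.random.default_rng(0)
N, L = 8, 16                                   # state size, sequence length
A = np.diag(rng.uniform(-0.9, 0.9, N))         # toy stable discretized state matrix
B = rng.normal(size=(N, 1))
C = rng.normal(size=(1, N))
u = rng.normal(size=L)                         # 1D input signal

# Recurrent view (Equation (2)): x_k = A x_{k-1} + B u_k, y_k = C x_k
x = np.zeros((N, 1))
y_rec = []
for k in range(L):
    x = A @ x + B * u[k]
    y_rec.append((C @ x).item())

# Convolutional view (Equations (4)-(5)): K_i = C A^i B, y = K * u (causal)
K = np.array([(C @ np.linalg.matrix_power(A, i) @ B).item() for i in range(L)])
y_conv = [float(np.dot(K[: k + 1][::-1], u[: k + 1])) for k in range(L)]

print(np.allclose(y_rec, y_conv))              # True
```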
After careful tuning, we\nfound the best results with a learning rate of 0.005\nfor the S4 models, 0.001for the Transformer mod-\nels, and 0.002for the hybrid models. We train for\n100epochs ( 28 000 steps), by which point our mod-\nels had converged, and average the last 10check-\npoints. We use 4 000 warm-up steps and an in-\nverse square root learning rate scheduler (Vaswani\net al., 2017). We used a dropout rate of 0.1for\nEN↔DE, and 0.3forEN↔RO. Unless stated oth-\nerwise, all models have layer and embedding sizes\nof512, the hidden size of the feed-forward layers\nis2048 , and we use 8attention heads for the Trans-\nformer. For both the Transformer and S4, we use\npost-normalization5. Following Gu et al. (2022)\nwe use GeLU activation (Hendrycks and Gimpel,\n2016) after the S4 modules and GLU activation\n(Dauphin et al., 2017) after the linear layer.\nS4-specific Training Details During our explo-\nration, we experimented with several choices that\nhad a marginal effect on performance:\n(i)Module-specific learning rates. Gu et al.\n(2022) suggested different learning rates for\nthe matrices in eq. (2) and the neural layer, but\nwe did not observe any significant difference.\n(ii)Trainable AandB.In line with Gu et al.\n(2022), freezing AandBdid not cause a no-\nticeable performance drop.\n(iii) State dimension. We varied the size of the\nstate ( xkin Equation (2)), but found that that\n3https://github.com/mjpost/sacrebleu\n4https://github.com/HazyResearch/state-spaces\n5In our experiments, we didn’t observe any difference between\npre and post-normalization.\nincreasing it dimension beyond 64did not no-\nticeably affect translation quality. Therefore,\nsimilarly to Gu et al. (2022), we set the state\ndimension to 64in our experiments. Note that\nthis parameter should not be confused with\nthe model’s hidden size, which we examine in\nSection 4.2. Increasing the state dimension in-\ncreases the modeling capacity of the S4 kernel\nforeach input dimension, but the output is\nstill collapsed to the hidden size, making the\nlatter the bottleneck.\n(iv)Learning rate scheduler. We observed no sig-\nnificant difference between using the inverse\nsquare root scheduler and the cosine scheduler\nsuggested in (Gu et al., 2022).\n4.2 Parameter Allocation and Scaling\nEncoder Scaling To explore the effect of param-\neter allocation on performance, we compare the\ntranslation quality of different encoder-decoder con-\nfigurations with the same total number of parame-\nters (roughly 65M). In Figure 2a, the xaxis repre-\nsents the ratio of encoder layers to the total num-\nber of layers (encoder + decoder). Starting with\na decoder-only model ( ratio = 0), we gradually\nincrease the number of encoder layers, and end\nwith a model containing only a single decoder layer.\nTwo results stand out: first, there is a wide gap be-\ntween the best S4 and Transformer models: 20.7\nand26.4BLEU, respectively. Second, and consis-\ntent with prior work, we find that an even split of\nparameters between the encoder and decoder ( 6\nencoder layers and 6decoder layers, i.e., Trans-\nformer base) yields the best translation quality for\nthe Transformer (Vaswani et al., 2017), whereas\nno encoder produces the best results for S4. Based\non this finding, we focus on the S4 decoder-only\nvariant for the next experiments.\nNumber of S4 Blocks per Layer Prior research\nset the number of S4 blocks, B, to2(Gu et al.,\n2022). 
We found that increasing Bis beneficial\nas S4 blocks are responsible for capturing depen-\ndencies between tokens. In Table 1 we vary B\nwhile keeping the parameter count roughly constant.\nIncreasing Bleads to noticeable quality improve-\nments until B= 10 . This architecture achieves a\nscore of 22.7BLEU, but the gap to the Transformer\nis still substantial: 3.7BLEU points. From here on-\nward we use B= 10 and6layers for the decoder-\nonly model, unless stated otherwise.B L D|θS4| | θ| BLEU\n1 17 10 M 66M 20.0\n2 14 20 M 66M 20.7\n3 12 21 M 66M 21.2\n4 10 23 M 64M 21.5\n6 8 28 M 64M 22.1\n10 6 35 M 67M 22.7\n16 4 37 M 65M 22.0\n22 3 38 M 64M 22.2\n35 2 40 M 64M 22.5\nTable 1: Effect of number of S4 blocks per layer\non the decoder-only architecture. Bis the number\nof S4 blocks, LDthe number of decoder layers,\n|θS4|are the parameters allocated for S4 inside the\nHiPPO kernels, and |θ|are the total parameters.\nShort Medium LongOverall[1,17] [18 ,29] [30 ,117]\nTR-TR 25.9 26 .8 26 .4 26 .4\nS4-Normal 24.0 24 .3 21 .4 22 .7\nS4-Reverse 23.2 24 .2 22 .5 23 .1\nTable 2: Translation quality of S4, trained on reg-\nular and reversed source sentences, compared to\nTransformer on the WMT’14 EN-DEvalidation\nset, for different reference sentence lengths. Each\nbucket has approximately 1k sentences.\nDepth Scaling In Figure 2b we show BLEU as\nwe increase the number of layers. The xaxis shows\nthe total number of parameters of each architecture,\nand the numbers next to each data point indicate the\narchitecture (e.g., 1-2means a 1layer encoder and 2\nlayer decoder). There is a clear gap in performance\nbetween the two models, which is decreasing as\nmore layers are added, i.e. S4 seems to benefit more\nfrom increasing the number of layers.\nWidth Scaling In Figure 2c we examine the in-\nfluence of the hidden size on both S4 and Trans-\nformer, for the 0-6and6-6architectures, respec-\ntively. While S4’s performance improves with in-\ncreasing width, the returns are diminishing, and the\ngap to the Transformer does not go way.\n4.3 Translation Quality Comparison\nDespite our extensive tuning of the S4 architecture,\na gap of almost 4BLEU points to the Transformer\nremains. In this section, we delve deeper into S4’s\nresults to determine why it is struggling.\n0.0 0.2 0.4 0.6 0.81417202326BLEU\nS4\nTransformer(a) Encoder parameter allocation ( ratio ).\n50M 100M 150M17202326\n0-20-30-60-90-120-140-20\n1-23-36-69-9 12-12 15-15 21-21\nS4\nTransformer (b) Number of parameters ( depth ).\n256 512 102417202326\n22M67M222M21M65M218M\nS4\nTransformer (c) Hidden size ( width ).\nFigure 2: Scaling plots for S4 and the Transformer. We explore shifting the parameter allocation between\nthe encoder (a), depth scaling (with a fixed hidden size of 512), symmetrically for the encoder-decoder\nTransformer, and on the decoder for S4 (b), and hidden size (width) scaling (c), with 0-6and6-6layers of\nS4 and Transformer, respectively.\nSentence Length In Table 2, we split the source\nsentences into 3buckets according to their length6,\nand show the BLEU scores for both S4 and the\nTransformer. There is a clear gap between the\ntwo models, which increases with sentence length.\nSpecifically, the gap is 1.9and2.5BLEU for short\nand medium-length sentences, respectively, but it\nincreases to 5for the longest bucket. This observa-\ntion is not entirely surprising: S4 uses a fixed-size\nvector to compress the full source sentence and the\nprevious target tokens, which is not enough for long\nsentences. 
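The length-bucketed scores in Table 2 can be reproduced with a short evaluation script. A hedged sketch: file names are placeholders, whitespace token counts are used only to assign buckets, and scoring uses sacreBLEU as in Section 4.1.

```python
# Bucket validation sentences by reference length and score each bucket with sacreBLEU.
# Illustrative sketch; paths and the bucketing tokenization are placeholders.
import sacrebleu

def read_lines(path):
    with open(path, encoding="utf-8") as f:
        return [line.rstrip("\n") for line in f]

hyps = read_lines("hyp.detok.txt")   # detokenized system outputs, one per line
refs = read_lines("ref.detok.txt")   # detokenized references, aligned with hyps

buckets = {"short [1,17]": (1, 17), "medium [18,29]": (18, 29), "long [30,117]": (30, 117)}
for name, (lo, hi) in buckets.items():
    idx = [i for i, r in enumerate(refs) if lo <= len(r.split()) <= hi]
    bleu = sacrebleu.corpus_bleu([hyps[i] for i in idx], [[refs[i] for i in idx]])
    print(f"{name}: {len(idx)} sentences, BLEU = {bleu.score:.1f}")
```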
The Transformer, on the other hand, has\nno such constraint, as its attention mechanism lets\nit retrieve previous states as needed.\nReversing Source Sentences To further investi-\ngate whether the limited representation size is caus-\ning the poor performance of the model, we applied\na technique from the earlier neural MT literature.\nBefore the introduction of attention (Bahdanau et\nal., 2015), it was observed that reversing the source\nsequence could improve performance by decreasing\nthe distance between cross-language dependencies\n(Sutskever et al., 2014). We trained a model on\nreversed source sentences, and report the results in\nTable 2 as S4-Reverse. Compared with the regu-\nlar model, we get a small overall improvement of\n0.4BLEU points, but a large improvement of 1.1\nBLEU on long sentences. This observation suggests\nthat although the HiPPO matrix has promising tem-\nporal characteristics, S4 is not able to adequately\nrepresent the source sentence and utilize its content\nduring the decoding phase.\n6To limit spuriousness issues, we chose the buckets so that\neach bucket has roughly 1k sentences.4.4 The Importance of Attention\nIn the previous section, we showed that S4 struggles\nto translate long sentences. In this section, we study\nthe influence of each source token on the output of\nthe model.\nAttention Heatmaps To investigate the extent to\nwhich S4 captures dependencies between source\nand target words, we use a method from He et al.\n(2019). For each generated target token, we mask\nout the source tokens, one by one, and replace them\nwith padding tokens. Then, we measure the relative\nchange in the decoder’s final layer activation caused\nby this intervention using L2 distance. By repeating\nthis process for each source token, we obtain a two-\ndimensional matrix measuring the impact of each\nsource token on each target token. Similarly, we\ncan perform the same procedure by masking the\nprevious target tokens to obtain a similar plot for\ntarget-side self-dependencies.\nWe show the heatmaps for both S4 and the Trans-\nformer7in Figure 3. As shown, the differences are\nstark. The Transformer is focused on just a few\nwords (sharp diagonal in fig. 3b), while S4 is is\nmuch more “blurred” and unable to appropriately\nattend to specific parts of the source sentence. The\ndifference is not as pronounced for short sentences\n(see Figure 4), indicating that a single hidden state\nis not enough to capture all the information the\nmodel needs for longer sentences.\nIn Appendix B, we explore how Bimpacts the\nheatmaps. We find that increasing Bsharpens the\nheatmaps, although they never get as sharp as those\nof the Transformer.\n7The plots are qualitatively similar to the usual attention\nweights heatmaps for the Transformer. We show these “mask-\ning” maps for both models for fair comparison.\nmasked sourcetarget\n0.00.20.40.60.81.0\nmasked targettarget\n0.00.20.40.60.81.0\n(a)∅-S4\nmasked sourcetarget\n0.00.20.40.60.81.0\nmasked targettarget\n0.00.20.40.60.81.0\n (b) T R-TR\nFigure 3: Change in the final decoder hidden state for each generated token when masking out source and\ntarget tokens in one long sample of EN-DE(109tokens), for the decoder-only S4 (a) and the Transformer\n(b). 
While the latter can discriminate between source words very accurately (sharp diagonal in b), S4 fails\nto do so.\nmasked sourcetarget\n0.00.20.40.60.81.0\nmasked targettarget\n0.00.20.40.60.81.0\n(a)∅-S4\nmasked sourcetarget\n0.00.20.40.60.81.0\nmasked targettarget\n0.00.20.40.60.81.0\n (b) TR-TR\nFigure 4: Change in the final decoder hidden state for each generated token when masking out source and\ntarget tokens in one short sample of EN-DE(11tokens) for the decoder-only S4 and the Transformer. In\nthe case of short sentences, S4 is able to more accurately align source and target words.\n4.5 Attention-enhanced Architectures\nIn the previous experiments, we found that S4\nunderperforms on long sentences, and hypothe-\nsized that this is due to its fixed-size representa-\ntion, which makes it unable to recall the full source\nsentence. To address this, we now extend the S4\ndecoder with an attention mechanism, which al-\nlows us to use an encoder-decoder setup, S4-S4A.\nFor more details on the attention mechanism, see\nSection 3.2.\nWe conducted experiments similar to those in Sec-\ntion 4.2 to determine the optimal Band how to al-\nlocate layers to the encoder and the decoder, while\nkeeping the total number of parameters constant.\nWe summarize the findings in Tables 3 and 4. We\nfound the best results with a balanced architecture,\n5−5, and B= 3. This model improves perfor-\nmance by almost 3BLEU points on the WMT’14\nvalidation set, from 22.7to25.6. From here on-\nward, encoders and decoders have 5layers for S4\nand6layers for Transformer.\nIn Table 5 we compare the performance of S4-S4A\nand the Transformer ( TR-TR) for short, medium,\nand long sentences. Although there is a noticeable\nimprovement over the attention-free S4 model ( ∅-B L ELD|θ| BLEU\n2 6 6 66 M 24.9\n3 5 5 64 M 25.4\n5 4 4 64 M 25.4\n8 3 3 63 M 25.2\nTable 3: Effect of number of Band number of\nencoder ( LE) and decoder ( LD) layers for the S4-\nS4Aencoder-decoder architecture.\nLE 1 2 3 4 5 6 7 8 9\nLD 9 8 7 6 5 4 3 2 1\nBLEU 24.5 24.8 25.1 25.125.425.1 25.1 25.1 23.7\nTable 4: Effect of allocating layers to the encoder\nor to the decoder on the S4-S4Aarchitecture, with\nB= 3. The models have a total of 10layers be-\ntween the encoder and decoder.\nS4), especially for longer sentences, there is still\ngap between the two models. One possible expla-\nnation for the comparatively poorer performance\nofS4-S4Ais the unidirectional nature of the S4\nencoder. This results in subpar representations for\nthe initial words in the source sentence. Indeed,\nwhen using a S4 encoder with a Transformer de-\nShort Medium LongOverall[1,17] [18 ,29] [30 ,117]\n∅-S4 24.0 24 .3 21 .4 22 .7\nTR-TR 25.9 26.8 26 .4 26 .4\nS4-T R 24.7 25 .5 25 .2 25 .2\nS4-S4 A 25.0 26 .5 25 .3 25 .6\nS4BI-TR 25.5 25 .9 25 .6 25 .7\nS4BI-S4 A 25.3 26 .5 25 .8 25 .9\nTR-S4 24.2 24 .8 22 .9 23 .7\nTR-S4 A 25.6 26.9 26 .5 26 .5\nTable 5: Translation quality of different attention-\nenhanced models on the WMT’14 EN-DEvalida-\ntion set for different source sentence lengths. Each\nbucket has approximately 1k sentences. 
[Figure 5: one pair of heatmaps for TR-S4A, same layout and colour scale as Figure 3]
Figure 5: Comparison of TR-S4A's change in the final decoder hidden state for each generated token when masking out source tokens for one long sample of EN-DE (the same sample as Figure 3). Enhancing S4 with attention helps it to focus on the source tokens, similar to TR-TR.

Finally, in Figure 5 we show the attention heatmaps for the TR-S4A architecture, which were generated in the same way as those in Figure 3. These plots show that the model is now capable of accurately aligning source and target words, and are qualitatively similar to those of the Transformer.

Why does S4 perform well on LM but not MT?  A natural question to ask is why S4 performs well on LM (Gu et al., 2022), but not on MT. Our intuition is that MT is a more challenging task. For LM, the model only needs to consider a shorter context to accurately predict the next token, whereas for MT, it requires accurate access to the source sentence representations. As the length of the source sentence increases, a fixed-size state is insufficient to capture fine-grained representations of the source, and thus the model's performance suffers. This is in line with the observations made by Vig and Belinkov (2019), who argue that Transformer LMs tend to pay more attention to the previous few tokens, emphasizing the importance of short-term memory over long-term memory.

4.6 Results for Other Language Pairs
In the previous sections, we focused on EN-DE. In this section, we compare the different S4 architectures for other language pairs (DE-EN, EN-RO, and RO-EN) and summarize the results in Table 6. These numbers are on the test sets of the respective language pairs. The results align with our previous findings. Without attention, there is a significant gap between S4 and the Transformer models, which is reduced significantly by adding it. Interestingly, the best performing architecture for all language pairs is the hybrid TR-S4A, which provides a small but statistically significant [8] improvement over the Transformer for all but DE→EN.

           EN-DE   DE-EN   EN-RO   RO-EN
∅-S4       22.1    25.4    12.8    19.7
S4BI-S4A   26.1    29.5    22.7    31.0
TR-S4A     27.3†   31.4    24.1†   33.6†
TR-TR      26.9    31.4    23.8    33.2
Table 6: BLEU scores on the test set for each architecture in 4 different language pairs. The † on TR-S4A indicates statistically significant results.

[8] We performed statistical significance tests using paired bootstrap resampling (Koehn, 2004) and a significance level of 5%.
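Footnote 8 refers to paired bootstrap resampling (Koehn, 2004). A generic reimplementation, not the authors' script, might look as follows; it assumes sentence-aligned hypothesis and reference lists and uses sacrebleu for BLEU.

```python
import random
import sacrebleu

def paired_bootstrap(hyps_a, hyps_b, refs, n_samples=1000, seed=1):
    """Paired bootstrap resampling (Koehn, 2004): repeatedly resample the test
    set with replacement and count how often system A outscores system B.
    A fraction >= 0.95 corresponds to significance at the 5% level."""
    rng = random.Random(seed)
    indices = list(range(len(refs)))
    wins_a = 0
    for _ in range(n_samples):
        sample = [rng.choice(indices) for _ in indices]
        score_a = sacrebleu.corpus_bleu([hyps_a[i] for i in sample],
                                        [[refs[i] for i in sample]]).score
        score_b = sacrebleu.corpus_bleu([hyps_b[i] for i in sample],
                                        [[refs[i] for i in sample]]).score
        wins_a += score_a > score_b
    return wins_a / n_samples
```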
5 Conclusion and Future Work
In this work, we explored the application of S4 to machine translation and conducted an investigation into the best architecture and hyperparameters. Despite our efforts, we found that S4's translation accuracy lagged behind the Transformer, and the performance gap widened for longer sentences. We then showed that this was due to the limitations of the fixed-size representation used by S4, which had to compress the entire prior context, including the source sentence and previous output tokens. Finally, we showed that the performance gap can be closed by incorporating attention.

Since we did our investigation into S4, numerous new SSM models have been proposed. Of particular note are S5 (Smith et al., 2023), which utilizes a multi-input multi-output SSM, instead of one single-input single-output SSM per feature as S4 does, and H3 (Dao et al., 2023), which is faster and better at LM than S4. We hope future research explores how well these models perform on MT. Additionally, it is worth noting MEGA (Ma et al., 2023), which incorporates SSMs into the Transformer attention, and is effective in MT, albeit at the expense of quadratic complexity.

6 Acknowledgements
We would like to thank António V. Lopes, Hendra Setiawan, and Matthias Sperber for their suggestions and feedback. Their contributions significantly improved the final work.

References
Ba, Jimmy Lei, Jamie Ryan Kiros, and Geoffrey E. Hinton. 2016. Layer normalization. arXiv preprint arXiv:1607.06450.

Bahdanau, Dzmitry, Kyunghyun Cho, and Yoshua Bengio. 2015. Neural machine translation by jointly learning to align and translate. In Bengio, Yoshua and Yann LeCun, editors, 3rd International Conference on Learning Representations, ICLR 2015, San Diego, CA, USA, May 7-9, 2015, Conference Track Proceedings.

Brown, Tom B., Benjamin Mann, Nick Ryder, Melanie Subbiah, Jared Kaplan, Prafulla Dhariwal, Arvind Neelakantan, Pranav Shyam, Girish Sastry, Amanda Askell, Sandhini Agarwal, Ariel Herbert-Voss, Gretchen Krueger, Tom Henighan, Rewon Child, Aditya Ramesh, Daniel M. Ziegler, Jeffrey Wu, Clemens Winter, Christopher Hesse, Mark Chen, Eric Sigler, Mateusz Litwin, Scott Gray, Benjamin Chess, Jack Clark, Christopher Berner, Sam McCandlish, Alec Radford, Ilya Sutskever, and Dario Amodei. 2020. Language models are few-shot learners. In Larochelle, Hugo, Marc'Aurelio Ranzato, Raia Hadsell, Maria-Florina Balcan, and Hsuan-Tien Lin, editors, Advances in Neural Information Processing Systems 33: Annual Conference on Neural Information Processing Systems 2020, NeurIPS 2020, December 6-12, 2020, virtual.

Dao, Tri, Daniel Y. Fu, Khaled K. Saab, Armin W. Thomas, Atri Rudra, and Christopher Ré. 2023. Hungry Hungry Hippos: Towards language modeling with state space models. In International Conference on Learning Representations.

Dauphin, Yann N., Angela Fan, Michael Auli, and David Grangier. 2017. Language modeling with gated convolutional networks. In Precup, Doina and Yee Whye Teh, editors, Proceedings of the 34th International Conference on Machine Learning, ICML 2017, Sydney, NSW, Australia, 6-11 August 2017, volume 70 of Proceedings of Machine Learning Research, pages 933-941. PMLR.

Devlin, Jacob, Ming-Wei Chang, Kenton Lee, and Kristina Toutanova. 2019. BERT: Pre-training of deep bidirectional transformers for language understanding. In Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long and Short Papers), pages 4171-4186, Minneapolis, Minnesota, June. Association for Computational Linguistics.

Gao, Yingbo, Christian Herold, Zijian Yang, and Hermann Ney. 2022. Is encoder-decoder redundant for neural machine translation?
In He, Yulan, Heng Ji,\nYang Liu, Sujian Li, Chia-Hui Chang, Soujanya Po-\nria, Chenghua Lin, Wray L. Buntine, Maria Liakata,\nHanqi Yan, Zonghan Yan, Sebastian Ruder, Xiaojun\nWan, Miguel Arana-Catania, Zhongyu Wei, Hen-Hsen\nHuang, Jheng-Long Wu, Min-Yuh Day, Pengfei Liu, and\nRuifeng Xu, editors, Proceedings of the 2nd Conference\nof the Asia-Pacific Chapter of the Association for Com-\nputational Linguistics and the 12th International Joint\nConference on Natural Language Processing, AACL/I-\nJCNLP 2022 - Volume 1: Long Papers, Online Only,\nNovember 20-23, 2022 , pages 562–574. Association for\nComputational Linguistics.\nGoel, Karan, Albert Gu, Chris Donahue, and Christopher\nRé. 2022. It’s raw! audio generation with state-space\nmodels. In Chaudhuri, Kamalika, Stefanie Jegelka,\nLe Song, Csaba Szepesvári, Gang Niu, and Sivan Sabato,\neditors, International Conference on Machine Learn-\ning, ICML 2022, 17-23 July 2022, Baltimore, Maryland,\nUSA, volume 162 of Proceedings of Machine Learning\nResearch , pages 7616–7633. PMLR.\nGu, Albert, Tri Dao, Stefano Ermon, Atri Rudra, and\nChristopher Ré. 2020. Hippo: Recurrent memory with\noptimal polynomial projections. In Larochelle, Hugo,\nMarc’Aurelio Ranzato, Raia Hadsell, Maria-Florina Bal-\ncan, and Hsuan-Tien Lin, editors, Advances in Neural\nInformation Processing Systems 33: Annual Confer-\nence on Neural Information Processing Systems 2020,\nNeurIPS 2020, December 6-12, 2020, virtual .\nGu, Albert, Karan Goel, and Christopher Re. 2022. Ef-\nficiently modeling long sequences with structured state\nspaces. In International Conference on Learning Repre-\nsentations .\nHe, Shilin, Zhaopeng Tu, Xing Wang, Longyue Wang,\nMichael Lyu, and Shuming Shi. 2019. Towards under-\nstanding neural machine translation with word impor-\ntance. In Proceedings of the 2019 Conference on Empir-\nical Methods in Natural Language Processing and the\n9th International Joint Conference on Natural Language\nProcessing (EMNLP-IJCNLP) , pages 953–962, Hong\nKong, China, November. Association for Computational\nLinguistics.\nHendrycks, Dan and Kevin Gimpel. 2016. Gaus-\nsian error linear units (gelus). arXiv preprint\narXiv:1606.08415 .\nKingma, Diederik P. and Jimmy Ba. 2015. Adam: A\nmethod for stochastic optimization. In Bengio, Yoshua\nand Yann LeCun, editors, 3rd International Conference\non Learning Representations, ICLR 2015, San Diego,\nCA, USA, May 7-9, 2015, Conference Track Proceed-\nings.\nKoehn, Philipp, Hieu Hoang, Alexandra Birch, Chris\nCallison-Burch, Marcello Federico, Nicola Bertoldi,\nBrooke Cowan, Wade Shen, Christine Moran, Richard\nZens, Chris Dyer, Ond ˇrej Bojar, Alexandra Constantin,\nand Evan Herbst. 2007. Moses: Open source toolkit\nfor statistical machine translation. In Proceedings of the\n45th Annual Meeting of the Association for Computa-\ntional Linguistics Companion Volume Proceedings of\nthe Demo and Poster Sessions , pages 177–180, Prague,\nCzech Republic, June. Association for Computational\nLinguistics.\nKoehn, Philipp. 2004. Statistical significance tests\nfor machine translation evaluation. In Proceedings of\nthe 2004 Conference on Empirical Methods in Natural\nLanguage Processing , pages 388–395, Barcelona, Spain,\nJuly. Association for Computational Linguistics.\nMa, Xuezhe, Chunting Zhou, Xiang Kong, Junxian He,\nLiangke Gui, Graham Neubig, Jonathan May, and Luke\nZettlemoyer. 2023. Mega: Moving average equipped\ngated attention. 
In The Eleventh International Confer-\nence on Learning Representations .\nNLLB Team, Marta R. Costa-jussà, James Cross, Onur\nÇelebi, Maha Elbayad, Kenneth Heafield, Kevin Hef-\nfernan, Elahe Kalbassi, Janice Lam, Daniel Licht,\nJean Maillard, Anna Sun, Skyler Wang, Guillaume\nWenzek, Al Youngblood, Bapi Akula, Loic Barrault,\nGabriel Mejia Gonzalez, Prangthip Hansanti, John Hoff-\nman, Semarley Jarrett, Kaushik Ram Sadagopan, Dirk\nRowe, Shannon Spruit, Chau Tran, Pierre Andrews,\nNecip Fazil Ayan, Shruti Bhosale, Sergey Edunov,\nAngela Fan, Cynthia Gao, Vedanuj Goswami, Fran-\ncisco Guzmán, Philipp Koehn, Alexandre Mourachko,\nChristophe Ropers, Safiyyah Saleem, Holger Schwenk,\nand Jeff Wang. 2022. No language left behind: Scal-\ning human-centered machine translation. arXiv preprint\narXiv:2207.04672 .\nOtt, Myle, Sergey Edunov, Alexei Baevski, Angela Fan,\nSam Gross, Nathan Ng, David Grangier, and Michael\nAuli. 2019. fairseq: A fast, extensible toolkit for se-\nquence modeling. In Proceedings of the 2019 Confer-\nence of the North American Chapter of the Association\nfor Computational Linguistics (Demonstrations) , pages\n48–53, Minneapolis, Minnesota, June. Association for\nComputational Linguistics.\nPost, Matt. 2018. A call for clarity in reporting BLEU\nscores. In Proceedings of the Third Conference on Ma-\nchine Translation: Research Papers , pages 186–191,\nBrussels, Belgium, October. Association for Computa-\ntional Linguistics.\nSennrich, Rico, Barry Haddow, and Alexandra Birch.\n2016. Neural machine translation of rare words with\nsubword units. In Proceedings of the 54th Annual Meet-\ning of the Association for Computational Linguistics(Volume 1: Long Papers) , pages 1715–1725, Berlin,\nGermany, August. Association for Computational Lin-\nguistics.\nSmith, Jimmy T.H., Andrew Warrington, and Scott Lin-\nderman. 2023. Simplified state space layers for se-\nquence modeling. In The Eleventh International Confer-\nence on Learning Representations .\nSutskever, Ilya, Oriol Vinyals, and Quoc V . Le. 2014.\nSequence to sequence learning with neural networks.\nIn Ghahramani, Zoubin, Max Welling, Corinna Cortes,\nNeil D. Lawrence, and Kilian Q. Weinberger, editors,\nAdvances in Neural Information Processing Systems 27:\nAnnual Conference on Neural Information Processing\nSystems 2014, December 8-13 2014, Montreal, Quebec,\nCanada , pages 3104–3112.\nTay, Yi, Mostafa Dehghani, Samira Abnar, Yikang Shen,\nDara Bahri, Philip Pham, Jinfeng Rao, Liu Yang, Se-\nbastian Ruder, and Donald Metzler. 2021. Long range\narena : A benchmark for efficient transformers. In 9th\nInternational Conference on Learning Representations,\nICLR 2021, Virtual Event, Austria, May 3-7, 2021 , pages\n1–19.\nVaswani, Ashish, Noam Shazeer, Niki Parmar, Jakob\nUszkoreit, Llion Jones, Aidan N. Gomez, Lukasz Kaiser,\nand Illia Polosukhin. 2017. Attention is all you need.\nIn Guyon, Isabelle, Ulrike von Luxburg, Samy Bengio,\nHanna M. Wallach, Rob Fergus, S. V . N. Vishwanathan,\nand Roman Garnett, editors, Advances in Neural Infor-\nmation Processing Systems 30: Annual Conference on\nNeural Information Processing Systems 2017, December\n4-9, 2017, Long Beach, CA, USA , pages 5998–6008.\nVig, Jesse and Yonatan Belinkov. 2019. 
Analyzing the structure of attention in a transformer language model. In Linzen, Tal, Grzegorz Chrupala, Yonatan Belinkov, and Dieuwke Hupkes, editors, Proceedings of the 2019 ACL Workshop BlackboxNLP: Analyzing and Interpreting Neural Networks for NLP, BlackboxNLP@ACL 2019, Florence, Italy, August 1, 2019, pages 63-76. Association for Computational Linguistics.

Wang, Shuo, Zhaopeng Tu, Zhixing Tan, Wenxuan Wang, Maosong Sun, and Yang Liu. 2021. Language models are good translators. arXiv preprint arXiv:2106.13627.

A Influence of LAE
In our experiments with the decoder-only architecture, we intentionally excluded the loss term LAE from Equation (6) as it is not necessary for MT. In Table 7 we show the effect of including this loss during training: performance degradation of around 4 BLEU points for both architectures.

B    LD   |θ|     w/ LAE   w/o LAE
6    8    65 M    17.9     22.3
10   6    68 M    18.6     22.5
Table 7: Impact of the autoencoder loss (LAE) on translation quality on the WMT'14 validation set for two decoder-only architectures. B is the number of S4 blocks, LD the number of decoder layers (this is a decoder-only architecture), and |θ| is the number of parameters.

B Effect of B in the Cross-Attention Heatmaps
Using the methodology described in Section 4.4, Figure 6 shows the cross-attention heatmaps for the models in Table 1. All models have roughly the same number of parameters, and differ only in B and the number of layers (LD). As in Figure 3, the source sentence has 109 tokens. A noticeable pattern emerges: as B increases, the heatmap sharpens, meaning it is easier for S4 to retrieve the source states. It is worth noting, however, that these heatmaps never get as sharp as those of the models with attention.

[Figure 6: nine cross-attention heatmaps over masked source vs. target tokens (colour scale 0.0-1.0), for (a) B=1, LD=17; (b) B=2, LD=14; (c) B=3, LD=12; (d) B=4, LD=10; (e) B=6, LD=8; (f) B=10, LD=6; (g) B=16, LD=4; (h) B=22, LD=3; (i) B=35, LD=2]
Figure 6: Cross-attention heatmaps for the models in Table 1. Increasing B (while keeping the total number of parameters roughly constant) makes the heatmaps less blurry, which means it is easier for the model to retrieve source states.", "main_paper_content": null }
{ "decision": "Unknown", "reviews": [] }
0
0
[]
[]
{ "id": "ctGBchBD9ORT", "year": null, "venue": "EAMT 2015", "pdf_link": "https://aclanthology.org/W15-4941.pdf", "forum_link": "https://openreview.net/forum?id=ctGBchBD9ORT", "arxiv_id": null, "doi": null }
{ "title": "CRACKER: Cracking the Language Barrier", "authors": [ "Georg Rehm" ], "abstract": null, "keywords": [], "raw_extracted_content": null, "main_paper_content": null }
{ "decision": "Unknown", "reviews": [] }
0
0
[]
[]
{ "id": "j3XcQcjOeWx", "year": null, "venue": "EAMT 2022", "pdf_link": "https://aclanthology.org/2022.eamt-1.66.pdf", "forum_link": "https://openreview.net/forum?id=j3XcQcjOeWx", "arxiv_id": null, "doi": null }
{ "title": "Overview of the ELE Project", "authors": [ "Itziar Aldabe", "Jane Dunne", "Aritz Farwell", "Owen Gallagher", "Federico Gaspari", "Maria Giagkou", "Jan Hajic", "Jens Peter Kückens", "Teresa Lynn", "Georg Rehm", "German Rigau", "Katrin Marheinecke", "Stelios Piperidis", "Natália Resende", "Tea Vojtechová", "Andy Way" ], "abstract": "Itziar Aldabe, Jane Dunne, Aritz Farwell, Owen Gallagher, Federico Gaspari, Maria Giagkou, Jan Hajic, Jens Peter Kückens, Teresa Lynn, Georg Rehm, German Rigau, Katrin Marheinecke, Stelios Piperidis, Natalia Resende, Tea Vojtěchová, Andy Way. Proceedings of the 23rd Annual Conference of the European Association for Machine Translation. 2022.", "keywords": [], "raw_extracted_content": "Overview of the ELE Project
Itziar Aldabe,4 Jane Dunne,1 Aritz Farwell,4 Owen Gallagher,1 Federico Gaspari,1 Maria Giagkou,5 Jan Hajic,3 Jens Peter Kückens,2 Teresa Lynn,1 Georg Rehm,2 German Rigau,4 Katrin Marheinecke,2 Stelios Piperidis,5 Natalia Resende,1 Tea Vojtěchová,3 Andy Way1
1 ADAPT Centre, School of Computing, Dublin City University, Dublin 9, Ireland
2 Deutsches Forschungszentrum für Künstliche Intelligenz (DFKI) GmbH, Alt-Moabit 91c, 10559 Berlin, Germany
3 Charles University (CUNI), Ovocný trh 5, Prague 1, 116 36, Czech Republic
4 Universidad del País Vasco / Euskal Herriko Unibertsitatea (University of the Basque Country) UPV/EHU, Barrio Sarriena s/n, 48940 Leioa, Bizkaia
5 Athina - Erevnitiko Kentro Kainotomias Stis Technologies Tis Pliroforias, Ton Epikoinonion Kai Tis Gnosis (ILSP), Artemidos 6 & Epidavrou, GR-151 25 Maroussi, Athens, Greece

Abstract
This paper presents the ongoing European Language Equality (ELE) project, an 18-month action funded by the European Commission. The primary goal of the ELE project is to prepare the ELE programme, in the form of a strategic research, innovation and implementation agenda and roadmap for achieving full digital language equality in Europe by 2030.

1. Background
Twenty-four official languages and more than 60 regional and minority languages constitute the fabric of the EU's linguistic landscape. However, language barriers still hamper communication and the free flow of information across the EU. Multilingualism is a key cultural cornerstone of Europe and signifies what it means to be and to feel European. The landmark 2018 European Parliament resolution "Language equality in the digital age" found a striking imbalance in terms of support through language technologies (LTs) and so issued a call to action. Starting in January 2021, ELE answered this call and is laying the foundations for a strategic research, innovation and implementation agenda (SRIA) and roadmap to make full digital language equality (DLE) a reality in Europe by 2030.

© 2022 The authors. This article is licensed under a Creative Commons 3.0 licence, no derivative works, attribution, CC-BY-ND.

Developing an SRIA and roadmap for achieving full DLE in Europe by 2030 involves many stakeholders with different perspectives. Accordingly, the ELE project, led by DCU, and with DFKI, Charles University, ILSP and EHU/UPV as core members, has put together a large consortium of 52 partners, who together with the wider European LT community, are preparing the different parts of the SRIA and roadmap, for all European languages: official, regional and minority languages.
2. Achievements & Ongoing Activities
Ensuring appropriate technology support for all European languages will create jobs, growth and opportunities in the digital single market. Equally crucial, overcoming language barriers in the digital environment is essential for an inclusive society and for providing unity in diversity for many years to come.

To date, we have concentrated on two distinct aspects: (i) collecting the current state of play (2021/2022) of LT support for the more than 70 languages under investigation, largely by the 32 National Competence Centres in our sister project European Language Grid (ELG); [2] and (ii) strategic and technological forecasting, i.e. estimating and envisioning the future situation in 2030 and beyond. Furthermore, we distinguish between two main stakeholder groups: LT developers (industry and research) and LT users as well as consumers. Both groups are represented in ELE by several networks (e.g. EFNIL, ELEN, ECSPM) and associations (e.g. ELDA, LIBER) who each produce a report highlighting their own individual requirements towards DLE. The project's industry partners produce four "deep dives" with the needs, wishes and visions of the European LT industry regarding machine translation, speech technology, text analytics as well as data, all available on the project website. We have also organised a larger number of surveys and consultations with stakeholders who are not represented in the consortium.

[2] https://www.european-language-grid.eu/

We have formulated a preliminary working definition of DLE to drive our activities, namely: "Digital Language Equality is the state of affairs in which all languages have the technological support and situational context necessary for them to continue to exist and to prosper as living languages in the digital age."

This DLE definition allows us to compute an easy-to-interpret metric (a "DLE score") for individual languages, which enables the quantification of the level of technological support for a language and, crucially, the identification of gaps and shortcomings that hamper the achievement of full DLE. This approach enables direct comparisons across languages, tracking their advancement towards the goal of DLE, and facilitates the prioritization of needs, especially to fill existing gaps. The metric is computed for each language on the basis of various factors, grouped into technological factors (technological support, e.g. available language resources, tools and technologies) and contextual factors (e.g. societal, economic, educational, industrial).
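Purely as an illustration of the idea of a factor-based score (the project's actual factors and weighting are defined in the ELE deliverables and are not reproduced here), a toy aggregation could look like the following; all factor names, values and weights below are invented.

```python
def dle_score(technological, contextual, w_tech=0.5, w_context=0.5):
    """Toy DLE-style score: average each factor group (values assumed to be
    normalised to [0, 1]) and combine with illustrative weights."""
    t = sum(technological.values()) / len(technological)
    c = sum(contextual.values()) / len(contextual)
    return w_tech * t + w_context * c

# Hypothetical factor values for a single language
score = dle_score(
    technological={"corpora": 0.4, "tools_and_services": 0.3, "models": 0.2},
    contextual={"economy": 0.6, "education": 0.5, "industry": 0.4},
)
```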
Our systematic collection of language resources, i.e. data (corpora, lexical resources, models) and LT tools/services for Europe's languages has resulted in more than 6,000 metadata records, which will be imported into the ELG catalogue and complement the existing, constantly growing inventory of ELG resources, thus providing information on the availability of more than 11,000 language resources and tools. All languages investigated by ELE are covered.

Using this collection as a firm empirical foundation for further investigation, we computed a DLE score for each language. We will present these results in full at the conference, but unsurprisingly, English was clearly shown as having the best context for the development of LTs and language resources. English is followed by German and French, and then by Italian and Spanish. After these five leading languages, variations between the configurations begin to be seen. Mostly, Swedish, Dutch, Danish, Polish, Croatian, Hungarian, Greek and Finnish are ranked in the upper half of the official EU languages. The official EU languages with the lowest scores are mostly Latvian, Lithuanian, Bulgarian, Romanian and Maltese.

Among the group of official national languages which are not recognised as official EU languages, Serbian is always the top performer, achieving a similar score to those of the lower-scoring official EU languages, while Manx is always presented as a downward outlier. Norwegian, Luxembourgish, Faroese and Icelandic achieve better scores than Albanian, Turkish, Macedonian and Bosnian. The regional and minority languages are usually led by South and Skolt Saami.

These and other perhaps unexpected results will be explained at the conference. The results from our various surveys will also be shown, including the novel survey which targeted European citizens per se, where we look set to surpass 25,000 respondents from all over the continent.

3. Future Plans
ELE is on track to achieve its ambitious objectives with the consortium currently working on the SRIA, which will be ready at the end of the project in June. The DLE metric has proven to be an extremely useful tool to demonstrate how prepared European languages are for the digital age, and what needs to be done to get them to the point where all such languages are digitally equal by 2030. As an extension of this work, we will soon publish our interactive DLE dashboard that makes use of the metadata records available in the ELG platform.

Acknowledgements
ELE is co-financed by the European Union under the grant agreement № LC-01641480 - 101018166 (ELE).

Reference
Georg Rehm, Federico Gaspari, German Rigau, Maria Giagkou, Stelios Piperidis, Natalia Resende, Jan Hajic, Andy Way. 2022. The European Language Equality Project: Enabling Digital Language Equality for all European Languages by 2030. EFNIL Annual Publication Series, Cavtat, Croatia (in press).", "main_paper_content": null }
{ "decision": "Unknown", "reviews": [] }
0
0
[]
[]
{ "id": "bk32V9CS3pF", "year": null, "venue": "EAMT 2012", "pdf_link": "https://aclanthology.org/2012.eamt-1.39.pdf", "forum_link": "https://openreview.net/forum?id=bk32V9CS3pF", "arxiv_id": null, "doi": null }
{ "title": "Relevance Ranking for Translated Texts", "authors": [ "Marco Turchi", "Josef Steinberger", "Lucia Specia" ], "abstract": null, "keywords": [], "raw_extracted_content": "Relevance Ranking for Translated Texts\nMarco Turchi, Josef Steinberger\nEuropean Commission JRC\nIPSC - GlobeSec\nVia Fermi 2749,\n21020 Ispra (V A), Italy\[email protected] Specia\nDepartment of Computer Science\nUniversity of Sheffield\nRegent Court, 211 Portobello\nSheffield, S1 4DP, UK\[email protected]\nAbstract\nThe usefulness of a translated text for gist-\ning purposes strongly depends on the over-\nall translation quality of the text, but espe-\ncially on the translation quality of the most\ninformative portions of the text. In this\npaper we address the problems of rank-\ning translated sentences within a document\nand ranking translated documents within a\nset of documents on the same topic accord-\ning to their informativeness and translation\nquality. An approach combining quality\nestimation and sentence ranking methods\nis used. Experiments with French-English\ntranslation using four sets of news com-\nmentary documents show promising re-\nsults for both sentence and document rank-\ning. We believe that this approach can be\nuseful in several practical scenarios where\ntranslation is aimed at gisting, such as mul-\ntilingual media monitoring and news anal-\nysis applications.\n1 Introduction\nReading and understanding the main ideas behind\ndocuments written in different languages can be\nnecessary or desirable in a number of scenarios.\nExisting online translation systems such as Google\nTranslate andBing Translator1serve to this pur-\npose, mitigating the language barrier effects. De-\nspite the large improvements in translation qual-\nity in recent years, translated documents are still\naffected by the presence of sentences which are\nnot correctly translated and in the extreme case,\nc\r2012 European Association for Machine Translation.\n1translate.google.com/ and www.\nmicrosofttranslator.com/whose original meaning has been lost. These sen-\ntences can compromise the readability and reliabil-\nity of translated documents, especially if they are\nthe ones that should convey the most important in-\nformation in the document.\nQuality estimation methods can flag incorrect\ntranslations without access to reference sentences,\nhowever the informativeness of these sentences is\nnot taken into account. On the other hand, sentence\nranking methods are able to identify the most rel-\nevant sentences in a given language for tasks such\nas document summarisation. However, the perfor-\nmance of sentence ranking algorithms for machine\ntranslated texts can be significantly degraded due\nto the introduction of errors by the translation pro-\ncess, as it has been shown for other language pro-\ncessing tasks, e.g. in information retrieval (Savoy\nand Dolamic, 2009). 
Moreover, particularly in\nthe case of supervised ranking methods, these may\nonly be available for the source language.\nIn this paper we propose combining quality esti-\nmation and relevance sentence ranking methods in\norder to identify the most relevant translated texts.\nWe experiment with two ranking tasks:\n\u000fThe ranking of translated sentences within a\ndocument; and\n\u000fThe ranking of documents within a set of doc-\numents on the same topic.\nAn evaluation with French-English translations\nin groups of news commentary documents in dif-\nferent domains has shown promising results for\nboth sentence and document ranking.\n2 Related work\nA considerable amount of work has been dedicated\nin recent years to estimating the quality of ma-\nProceedings of the 16th EAMT Conference, 28-30 May 2012, Trento, Italy\n153\nchine translated texts, i.e., the problem of predict-\ning the quality of translated text without access to\nreference translations. Most related work focus on\npredicting different types of sentence-level qual-\nity scores, including automatic and semi-automatic\nMT evaluation metrics such as TER (He et al.,\n2010), HTER (Specia and Farzindar, 2010; Bach\net al., 2011), post-editing effort scores and post-\nediting time (Specia, 2011). At document level,\nsimilar to this paper, Soricut and Echihabi (2010)\nfocus on the ranking translated documents accord-\ning to their estimated quality so that the top ndoc-\numents can be selected for publishing. A range of\nindicators from the MT system, source and transla-\ntion texts have been used in previous work. How-\never, none of these include the notion of informa-\ntiveness of the texts.\nThe sentence ranking problem has been widely\nstudied in particular for document summarization,\nwhere different approaches have been proposed to\nquantify the amount of information contained in\neach sentence. In (Goldstein et al., 1999), a tech-\nnique called Maximal Marginal Relevance (MMR)\nwas introduced to measure the relevance of each\nsentence in a document according to a user pro-\nvided query. Other approaches represent a docu-\nment as a set of trees and take the position of a\nsentence in a tree is indicative of its importance\n(Carlson et al., 2001). Graph theory has been ex-\ntensively used to rank sentences (Yeh et al., 2008)\nor keywords (Mihalcea, 2004), with their impor-\ntance determined using graph connectivity mea-\nsures such as in-degree or PageRank. A sentence\nextraction method based on Singular Value De-\ncomposition over term-by-sentence matrices was\nintroduced in (Gong and Xin, 2002).\nThe combination of relevance and translation\nquality scores has been recently proposed in the\ncontext of cross-language document summariza-\ntion. In (Wan et al., 2010), sentences in a docu-\nment were ranked using the product of quality esti-\nmation and relevance scores, both computed using\nthe source text only. The best five sentences were\nadded to a summary, and then translated to the\ntarget language. 
(Boudin et al., 2010) used both source and target language features for quality estimation and targeted multi-document summarization, selecting sentences from different translated documents to generate a summary.

This paper extends previous work in the attempt to rank translated sentences within documents, but with a different objective: instead of selecting a pre-defined number of sentences to compose a summary, we aim at obtaining a global ranking of sentences within a document according to their informativeness and translation quality, and use this ranking to assign a global score to each document for the ranking of groups of documents. This requires different evaluation strategies from those used in the text summarization field, as we will discuss in Section 5.2.

3 Quality estimation method
The quality estimation method used in this paper is that proposed in (Specia, 2011). A sentence-level model is built using a Support Vector Machines regression algorithm with radial basis function kernel from the LIBSVM package (Chang and Lin, 2011) and a number of shallow and MT system-independent features. These features are extracted from the source sentences and their corresponding translations, and from monolingual and parallel corpora. They include source & translation sentence lengths, source & translation sentence language model probabilities, average number of translations per source word, as given by probabilistic dictionaries, percentages of numbers, content-/non-content words in the source & translation sentences, among others. The regression algorithm is trained on examples of translations and their respective human judgments for translation quality (Section 5.1).
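A minimal sketch of such a regression-based quality estimation model is given below, using scikit-learn's SVR (which wraps LIBSVM) with an RBF kernel; the feature values here are random stand-ins for the shallow features listed above, so the snippet only illustrates the training and prediction flow rather than the actual system.

```python
import numpy as np
from sklearn.svm import SVR

rng = np.random.default_rng(0)

# Toy stand-ins for the shallow features of Section 3 (sentence lengths,
# LM probabilities, average translations per source word, ...): one row of
# features per (source, translation) pair, and a 1-4 quality score as target.
X_train, y_train = rng.random((200, 8)), rng.uniform(1, 4, 200)
X_test, y_test = rng.random((50, 8)), rng.uniform(1, 4, 50)

qe = SVR(kernel="rbf", C=1.0, epsilon=0.1)   # scikit-learn's SVR wraps LIBSVM
qe.fit(X_train, y_train)

pred = qe.predict(X_test)
rmse = float(np.sqrt(np.mean((pred - y_test) ** 2)))  # RMSE, as used in Section 6.1
```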
4 Sentence ranking methods

4.1 Co-occurrence-based ranking
Originally proposed by (Gong and Xin, 2002) and later improved by (Steinberger and Ježek, 2004), this is an unsupervised method based on the application of Singular Value Decomposition (SVD) to individual documents or sets of documents on the same topic. It has been reported to have the best performance in the multilingual multi-document summarization task at TAC 2011. The method first builds a term-by-sentence matrix from the text, then applies SVD and uses the resulting matrices to identify and extract the most salient sentences. SVD is aimed at finding the latent (orthogonal) dimensions, which would correspond to the different topics discussed in the set of documents.

More formally, we first build a matrix A where each column represents the weighted term-frequency vector of a sentence j in a given document or set of documents. The weighting schemes found to work best in (Steinberger and Ježek, 2009) are a binary local weight and an entropy-based global weight.

After that step, SVD is applied to the matrix as A = U S V^T, and subsequently a matrix F = S · V^T reduced to r dimensions [2] is derived.

Sentence selection starts with measuring the length of the sentence vectors in F. This length can be viewed as a measure of the importance of that sentence within the top topics (the most important dimensions). In other words, the length corresponds to the combined weight across the most important topics. We call it the co-occurrence sentence score. The sentence with the largest score is selected as the most informative (its corresponding vector in F is denoted by f_best). To prevent selecting a sentence with similar content in the next step, the topic/sentence distribution in matrix F is changed by subtracting the information contained in the selected sentence:

F^(it+1) = F^(it) − (f_best · f_best^T / |f_best|^2) · F^(it)

The vector lengths of similar sentences are thus decreased, which avoids selecting the same/similar sentences. We call this a redundancy filter. After this subtraction, the process continues with the sentence which has the largest co-occurrence sentence score computed on the updated matrix F^1 (the first update of the original matrix F^0). The process is repeated until all the sentences of the document(s) are annotated with their co-occurrence sentence score.

Since it is unsupervised, in our work this method was applied to both the source language texts and the translated texts.

[2] The degree of importance of each 'latent' topic is given by the singular values, and the optimal number of latent topics (i.e., dimensions) r can be tuned on some development data.

4.2 Profile-based ranking
The supervised profile-based ranking algorithm by (Pouliquen et al., 2003) was proposed for addressing the multi-label categorization problem using the Eurovoc thesaurus. [3] Models for thousands of categories were trained using only positive samples for each category. The training process consisted in identifying a list of representative words and associating to each of them a log-likelihood weight, using the training set as the reference corpus. A new document was represented as a vector of words with their frequency in the document. The most appropriate categories for the new document were found by ranking the category vector representations (the profiles) according to their cosine similarity to the vector representation of the new document.

In this paper we are primarily interested in the ranking of sentences, as opposed to the ranking of categories. Since we know beforehand which category (a topic of interest) a document belongs to, a profile vector is created for that category using human labeled data. The cosine similarity between each sentence in the document and the category vector is computed and all the sentences are ranked according to their cosine value.

In our work this method was applied to the source language sentences only.

[3] Eurovoc.europa.eu/
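The co-occurrence scoring and redundancy filter of Section 4.1 can be sketched with NumPy as follows; the term weighting (binary local weight, entropy-based global weight) is assumed to have been applied when building A, and the function is a simplified illustration rather than the authors' implementation.

```python
import numpy as np

def co_occurrence_rank(A, r):
    """Sentence scores from a term-by-sentence matrix A (terms x sentences):
    SVD A = U S V^T, keep the top-r latent topics, score sentences by the
    length of their vectors in F = S V^T, and after each selection subtract
    the chosen sentence's contribution (the redundancy filter).
    Returns the selection order and the co-occurrence sentence scores."""
    _, S, Vt = np.linalg.svd(A, full_matrices=False)
    F = (np.diag(S) @ Vt)[:r, :]            # r x n_sentences
    n = F.shape[1]
    scores = np.zeros(n)
    order = []
    for _ in range(n):
        lengths = np.linalg.norm(F, axis=0)
        lengths[order] = -np.inf             # ignore already-chosen sentences
        best = int(np.argmax(lengths))
        order.append(best)
        scores[best] = lengths[best]
        f = F[:, best:best + 1]
        denom = (f * f).sum() or 1.0
        F = F - (f @ f.T) @ F / denom        # redundancy filter update
    return order, scores
```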
5 Experimental settings

5.1 Corpora

Relevance ranking training  The profile-based method (Section 4.2) is trained using 1,000 French news documents for each of our four topics of interest. These documents were selected using an in-house news categorization system (Steinberger et al., 2009), where category definitions are created by humans. Articles are said to fall into a given category if they satisfy the category definition, which consists of Boolean operators with optional vicinity operators and wild cards. Alternative classifiers can also be trained using the Eurovoc human labeled multi-lingual resource.

Quality estimation training  To train the regression algorithm for the quality estimation model we use the French-English corpus created in (Specia, 2011), which is freely available. [4] This corpus contains 2,525 French news sentences from the WMT news-test2009 dataset and their translations into English using a statistical machine translation system built from the Moses toolkit. [5] These sentences were scored by a human translator according to the effort necessary to correct them: 1 = requires complete retranslation; 2 = requires some retranslation; 3 = very little post-editing needed; 4 = fit for purpose. An average human score of 2.83 was reported.

[4] www.dcs.shef.ac.uk/~lucia/resources.html
[5] www.statmt.org/wmt10/

Evaluation corpus  To evaluate the performance of our approach we use the multilingual summary evaluation dataset created by Turchi et al. (2010). [6] It contains four sets of documents covering four topics: Israeli-Palestinian conflict (IPC), Malaria (M), Genetics (G) and Science and Society (SS). Each set contains five documents, here in French. All sentences (amounting to 789) in these documents were annotated by four human annotators with binary labels indicating whether or not it is informative to that topic. Therefore, the final score for each sentence is a discrete number ranging from 0 (uninformative) to 4 (very informative). These French sentences were then translated using the same Moses system as in the training set for quality estimation and annotated for quality using the 1-4 scoring scheme. The average human quality scores are shown in Table 2.

[6] langtech.jrc.it/JRC_Resources.html

5.2 Evaluation metrics for ranking
Our goal is to find the best possible ranking of translated sentences and documents according to their relevance and translation quality. While the ranked sentences/documents could be used for many applications, including cross-lingual summarization, we are interested in a more general ranking approach, and therefore our evaluation is task-independent. We use the following metrics:

Sentence ranking  Sentences in the system output and gold standard documents are first ordered according to their combined score for relevance and translation quality (or relevance score only, for the monolingual ranking evaluation, Table 1). We then compute the Spearman's rank correlation coefficient (ρ) between the two rankings. Additionally, inspired by the vBLEU∆ metric (Soricut and Echihabi, 2010), we compute Avg∆, a metric that measures the relative gain (or loss) in performance obtained from selecting the top k% sentences ranked according to the predicted scores, as compared to the performance obtained from randomly selecting k% sentences:

Avg∆ = Avg_sys − Avg_gold

where Avg_gold is the average gold-standard score for all sentences in the test set (i.e., the approximate score if sentences are randomly taken) and Avg_sys is the average gold-standard score for the top k% sentences from the test set ranked according to the predicted (system) scores.

Intuitively, the smaller the k, the higher the upper bound Avg∆, but the harder the ranking task becomes. Larger values of k should result in smaller values for Avg∆. For k = 100, Avg∆ = 0. In this paper we compute Avg∆ over different values of k: 10, 25 and 50, and consider the arithmetic mean over these values of k as our final metric, Avg∆all.
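A small sketch of the two sentence-ranking metrics (Spearman's ρ and Avg∆) is given below; the toy scores are invented and serve only to show the computation.

```python
import numpy as np
from scipy.stats import spearmanr

def avg_delta(gold, predicted, ks=(0.10, 0.25, 0.50)):
    """Avg-Delta of Section 5.2: mean gold score of the top-k% sentences
    (ranked by predicted score) minus the mean gold score of all sentences,
    averaged over k = 10, 25 and 50 (Avg-Delta_all)."""
    gold = np.asarray(gold, dtype=float)
    order = np.argsort(np.asarray(predicted, dtype=float))[::-1]  # best predicted first
    baseline = gold.mean()
    deltas = [gold[order[: max(1, int(len(gold) * k))]].mean() - baseline for k in ks]
    return float(np.mean(deltas))

# Toy example: gold relevance scores (0-4) and predicted combined scores
gold_scores = [3, 0, 2, 4, 1, 4]
pred_scores = [0.9, 0.1, 0.4, 0.8, 0.2, 0.7]
print(avg_delta(gold_scores, pred_scores), spearmanr(gold_scores, pred_scores)[0])
```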
6 Experiments and results

In what follows we show the results of the quality estimation and relevance ranking methods on their own and then we present the results obtained with the combination of these two methods.

6.1 Quality estimation

The performance of the quality estimation method is shown in Table 2. The average regression error is measured using the Root Mean Squared Error,

RMSE = sqrt( (1/N) Σ_{i=1}^{N} (y_i - ŷ_i)^2 )

where N is the number of test sentences, ŷ_i is the predicted score and y_i is the actual score for that test sentence. The performance is generally lower than what has been reported in (Specia, 2011) for French-English and similar settings (RMSE = 0.662). The decrease in performance is most likely due to the difference in the text domain of the training and test datasets. The training dataset covers main news stories from September to October 2008, while the test set covers news commentaries on specific topics from 2005 to 2009.

Table 2: Average human score and regression error of the quality estimation approach.

Topic   Avg. human score   RMSE
IPC     3.29               0.696
G       3.00               0.755
M       3.14               0.734
SS      2.89               0.712
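The regression errors in Table 2 follow directly from the RMSE definition above; a minimal helper (ours, for illustration only):

```python
import math

def rmse(gold, predicted):
    """Root Mean Squared Error between gold and predicted quality scores."""
    assert len(gold) == len(predicted) and len(gold) > 0
    return math.sqrt(sum((y - y_hat) ** 2 for y, y_hat in zip(gold, predicted)) / len(gold))

# One value per topic, computed over that topic's test sentences, e.g.
# rmse(gold_scores["IPC"], predicted_scores["IPC"])  # reported as 0.696 in Table 2
```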
6.2 Monolingual relevance ranking

The performance of the relevance ranking methods on the original, source-language texts is shown in Table 1. For the unsupervised co-occurrence ranking (Co), we run a number of experiments with different settings. We perform a greedy search on the number of dimensions to be used: 1, and 2%, 5%, 10%, 25% or 40% of the total. We run several experiments enabling (R) and disabling (NR) the sentence redundancy filter, and on the full set of documents (S) or on a single document (D). We report here the settings that work best across the different topics. For the profile-based ranking (PB), based on our previous experience with this method, we chose to use the following numbers of words defining the profile vector: 1,000, 2,000 and 5,000.

Table 1: Performance of the sentence ranking methods on monolingual data. PB: profile-based ranker; Co: co-occurrence-based ranker; R/NR: redundancy reduction enabled/disabled; D/S: ranking based on individual documents or on sets of documents on the same topic of interest. The Oracle values are obtained using the gold-standard ranking, while the Lower bound values consider the inverted gold-standard ranking.

              G                 IPC               M                 SS                Macro Av.
              AvgΔall   ρ       AvgΔall   ρ       AvgΔall   ρ       AvgΔall   ρ       AvgΔall   ρ
InvPos        -0.254    -0.088  -0.08     0.006   -0.22     0.012   0.132     0.015   -0.105    -0.013
Length        0.287     0.328   0.322     0.278   0.75      0.541   0.156     0.113   0.378     0.315
PB 1000       0.312     0.285   0.358     0.321   0.329     0.286   0.227     0.072   0.307     0.242
PB 2000       0.568     0.401   0.568     0.338   0.385     0.303   0.154     0.141   0.419     0.296
PB 5000       0.478     0.249   0.503     0.31    0.607     0.451   0.046     0.095   0.409     0.271
CoRS 25       0.293     0.364   0.469     0.301   0.544     0.428   0.203     0.244   0.377     0.335
CoNRS 2       0.267     0.269   0.388     0.236   0.28      0.389   0.607     0.367   0.386     0.316
CoNRS 5       0.12      0.224   0.605     0.3     0.394     0.389   0.412     0.365   0.382     0.32
CoRD 25       0.292     0.295   0.53      0.362   0.589     0.461   0.18      0.208   0.398     0.332
CoRD 5        0.271     0.263   0.446     0.335   0.546     0.41    0.183     0.296   0.362     0.326
Oracle        1.559     1       1.623     1       1.453     1       1.5       1
Lower bound   -0.94     -1      -0.898    -1      -0.726    -1      -0.9      -1

To define the gold-standard scores for the evaluation at sentence level, we use the number of annotators who selected the sentence as relevant (0-4). The results in Table 1 are the average performance over all documents within a set of documents for each topic. They are compared against baselines proposed in (Kennedy and Szpakowicz, 2011):

- Inverse position (InvPos): each sentence is associated with the inverse of its position in the document. The ranking of the sentences thus corresponds to their position in the document, and the inverse position is used as their relevance score.
- Sentence length (Length): each sentence is associated with the number of words that it contains. Longer sentences are deemed more informative.

The proposed baselines are highly competitive, in particular Length. This reflects the fact that longer sentences are naturally better candidates to be more informative, simply because they contain more words. Both methods in all settings outperform the InvPos ranker. Except for the M topic, most settings of the co-occurrence method and at least one setting of the profile-based method outperform Length according to AvgΔall.

The last column of the table shows that, on average over all topics, the profile-based method seems to be slightly better suited for ranking the top 50% documents, with better AvgΔall, while the co-occurrence-based method seems to be better for producing a global ranking of all sentences in the dataset, with a better ρ coefficient. While the performance of the variations of the co-occurrence-based method seems to be highly dependent on the topic of the documents, it can be observed that, on average across the different topics, all these variations perform similarly.

We used the same methods (except the InvPos baseline, which clearly performs very poorly) and settings to assess the ranking of translated documents.
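For reference, the two baselines in the list above amount to the following toy scorers (ours):

```python
def invpos_scores(sentences):
    """Inverse position: earlier sentences receive higher relevance scores."""
    return [1.0 / (i + 1) for i in range(len(sentences))]

def length_scores(sentences):
    """Sentence length: longer sentences are deemed more informative."""
    return [float(len(s.split())) for s in sentences]
```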
6.3 Relevance ranking for translated texts

We combine the translation quality and sentence ranking scores for each translated sentence t_i by taking their product:

score(t_i) = relevance(s_i) × quality(t_i)

where relevance(s_i) is given by either the co-occurrence (Co) or profile-based (PB) method applied to the source language sentence s_i, and quality(t_i) is given by the quality estimation method applied to the translation of s_i.
This is done for both the gold-standard annotation and the systems' predictions. The ranges of these two values are different, but this difference is not relevant, since we are only interested in the ranking of the sentences, as opposed to their absolute scores.
Using the product for combining scores is however not ideal: a translation with very low quality but high relevance can receive a score comparable to that of a translation with high quality but low relevance. We have also experimented with using quality estimates as a filter for the relevance rankings, i.e. setting a threshold on the translation quality scores below which a translated sentence is ranked at the bottom of the list even if its corresponding source is highly relevant. This strategy, however, was strongly affected by the choice of the threshold and resulted in generally poorer performance. Due to space constraints, we only present the results using the product of the two scores.
In the first set of experiments we evaluate the ability of our approach to rank translated sentences within a document. We combine the quality and the relevance scores at sentence level as explained above. As an alternative approach, we apply the unsupervised co-occurrence-based method (Co-Tr) to directly estimate the relevance of the translated text without any quality filtering; in this case, score(t_i) = relevance(t_i). This approach does not explicitly address translation performance. Nevertheless, it can account for some translation problems implicitly, particularly words left untranslated or translated incorrectly. In all cases, the evaluation is performed by comparing the system outputs against the combined (product) gold-standard. Results are shown in Table 3. The Length baseline is the same as in the monolingual setting and does not include the quality estimation filter; it is also compared against the combined gold-standard.

Table 3: Performance of the approaches combining informativeness and quality estimation for sentence ranking. Co-Tr: co-occurrence-based ranker applied directly to translated sentences; PB: profile-based ranker combined with quality estimates; Co: co-occurrence-based ranker applied to source texts and combined with quality estimates. R/NR and D/S as in Table 1.

              G                 IPC               M                 SS                Macro Av.
              AvgΔall   ρ       AvgΔall   ρ       AvgΔall   ρ       AvgΔall   ρ       AvgΔall   ρ
Length        0.593     0.272   0.886     0.259   2.075     0.512   0.365     0.089   0.981     0.283
Length QE     0.853     0.28    1.02      0.258   2.156     0.518   0.5       0.096   1.132     0.288
Co-Tr RS 25   0.374     0.177   1.527     0.31    1.843     0.398   0.607     0.197   1.087     0.27
Co-Tr NRS 5   0.574     0.276   1.284     0.302   0.832     0.341   1.196     0.344   0.971     0.315
Co-Tr NRS 2   0.945     0.282   1.518     0.242   1.393     0.377   1.174     0.313   1.257     0.303
Co-Tr RD 25   0.834     0.217   1.577     0.323   1.668     0.44    0.99      0.246   1.267     0.306
Co-Tr RD 5    0.752     0.238   1.598     0.289   1.536     0.341   1.101     0.274   1.246     0.285
PB 1000       0.853     0.262   1.018     0.304   0.726     0.268   0.657     0.06    0.814     0.224
PB 2000       1.78      0.386   1.375     0.318   1.19      0.318   0.642     0.12    1.247     0.286
PB 5000       1.455     0.239   1.589     0.279   1.926     0.41    0.06      0.062   1.258     0.248
CoRS 25       0.728     0.327   1.521     0.299   1.768     0.405   0.665     0.222   1.171     0.314
CoNRS 5       0.443     0.198   1.494     0.275   1.262     0.361   0.947     0.349   1.037     0.296
CoNRS 2       0.981     0.241   1.121     0.23    0.944     0.369   1.383     0.34    1.108     0.295
CoRD 25       0.729     0.262   2.163     0.341   1.481     0.402   0.68      0.172   1.264     0.294
CoRD 5        0.77      0.21    1.326     0.317   1.344     0.384   0.534     0.23    0.994     0.286
Oracle        5.249     1       4.109     1       3.854     1       3.707     1
Lower bound   -2.859    -1      -2.335    -1      -1.844    -1      -2.097    -1

It is interesting to note that quality estimation has a positive impact even for the baseline (Length QE), confirming that long sentences are often badly translated. Most settings of the co-occurrence and profile-based methods outperform the baselines, except for the M topic, as in the monolingual experiments. On average, the co-occurrence method on translated and source data provides better performance than the profile-based method in terms of ρ, while all methods are comparable according to AvgΔall. This seems to indicate that the profile-based method is good at ranking good-quality informative sentences, but fails at ranking informative but poorly translated sentences. A possible reason is that it scores each sentence independently from the others and relies on the quality of the training data.
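A minimal sketch of the score combination defined at the beginning of this section (ours; the relevance and quality scores are assumed to be pre-computed lists aligned with the sentences):

```python
def combined_ranking(relevance_src, quality_tgt):
    """score(t_i) = relevance(s_i) * quality(t_i); returns sentence indices
    ranked from best to worst combined score, plus the scores themselves."""
    scores = [r * q for r, q in zip(relevance_src, quality_tgt)]
    order = sorted(range(len(scores)), key=lambda i: -scores[i])
    return order, scores

def cotr_ranking(relevance_tgt):
    """Co-Tr alternative: rank directly by the relevance of the translated
    sentences, score(t_i) = relevance(t_i), with no quality component."""
    return sorted(range(len(relevance_tgt)), key=lambda i: -relevance_tgt[i])
```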
The best settings of the co-occurrence-based method applied to the source language texts outperform the best settings of the same method applied to translated texts. This is more evident in terms of AvgΔ than of ρ. This seems to indicate that the combination strategy based on the product of the translation quality and relevance scores may not be the most appropriate for fine-grained ranking. Although the monolingual (Table 1) and cross-lingual (Table 3) results are not directly comparable because of their different upper and lower bounds (due to the different gold-standard values in each of these experiments), we can note similar trends with respect to the two ranking methods, Co and PB.
In the second set of experiments we assess the task of ranking documents within a set of documents on the same topic. To produce a unique score for each document, the sentence scores are scaled into [0, 1] and averaged. Documents are then ranked according to their average values within their respective groups. The same process is performed using the gold-standard scores and κ is computed, as shown in Table 4.

Table 4: Kappa coefficient of the various approaches combining informativeness and quality estimation for document ranking.

              G     IPC   M     SS
Length        0.4   0.4   0.2   0.8
Length QE     0.4   0.6   0.2   0.6
Co-Tr RS 25   0.6   0.4   0.4   0.6
Co-Tr NRS 5   0.8   0.0   0.4   0.4
Co-Tr NRS 2   0.4   0.0   0.4   0.2
Co-Tr RD 25   0.6   0.0   0.4   0.2
Co-Tr RD 5    1.0   0.4   0.0   0.4
PB 1000       0.8   0.4   0.2   0.0
PB 2000       0.8   0.6   0.0   0.0
PB 5000       0.8   0.6   0.2   0.2
CoRS 25       0.8   0.0   0.2   0.4
CoNRS 5       0.6   0.4   0.2   0.0
CoNRS 2       0.6   0.2   0.2   0.4
CoRD 25       0.2   0.0   0.4   0.2
CoRD 5        1.0   0.0   0.0   0.0

The best scores of the proposed approaches vary from moderate to substantial. For the G, IPC and M topics, the best settings of the co-occurrence-based method on the source language outperform the baselines and are superior or equal to the other methods. For the SS topic, the Length baseline is the best method. The co-occurrence method applied directly to the translated sentences is often as good as the two proposed methods that use the source language data. The co-occurrence methods on translated text can in fact be better for heterogeneous sets of documents such as M, but in general the usage of source language text can be beneficial.
Overall, the experiments in this paper show significant variations in performance for different methods and settings of the same method over different topics. We believe this is mostly due to the differences in the level of homogeneity of the documents within each topic. Nevertheless, if we consider only the average results over the four topics, we find that most methods/settings perform similarly. This average result however hides significant differences between the methods/settings and opens the way for future research into a better understanding of how to select the best methods and settings for different types of corpora.
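The document-level step just described can be sketched as follows (ours; the min-max scaling over all sentence scores and the tie-based chance-agreement estimate are simplifications of the procedure described in Section 5.2):

```python
from itertools import combinations

def document_scores(sentence_scores_per_doc):
    """Scale all sentence scores into [0, 1] and average them per document."""
    all_scores = [s for doc in sentence_scores_per_doc for s in doc]
    lo, hi = min(all_scores), max(all_scores)
    span = (hi - lo) or 1.0
    return [sum((s - lo) / span for s in doc) / len(doc)
            for doc in sentence_scores_per_doc]

def pairwise_kappa(gold_doc_scores, sys_doc_scores):
    """Cohen's kappa over pairwise document orderings,
    kappa = (P(A) - P(E)) / (1 - P(E)), with P(E) estimated empirically
    from the observed frequency of ties (a simplified version of the
    procedure of Callison-Burch et al., 2011)."""
    pairs = list(combinations(range(len(gold_doc_scores)), 2))
    def rel(a, b):                     # -1 / 0 / +1 ordering of one pair
        return (a > b) - (a < b)
    agree = sum(rel(gold_doc_scores[i], gold_doc_scores[j]) ==
                rel(sys_doc_scores[i], sys_doc_scores[j]) for i, j in pairs)
    p_a = agree / len(pairs)
    def tie_rate(scores):
        return sum(rel(scores[i], scores[j]) == 0 for i, j in pairs) / len(pairs)
    ties = (tie_rate(gold_doc_scores) + tie_rate(sys_doc_scores)) / 2
    p_less = p_more = (1 - ties) / 2
    p_e = p_less ** 2 + p_more ** 2 + ties ** 2
    return (p_a - p_e) / (1 - p_e) if p_e < 1 else 1.0
```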
7 Conclusions and future work

We have proposed combining source relevance information and translation quality estimates to rank translated sentences and documents within groups of texts on the same topic. The approach has shown promising results and is potentially useful in different scenarios. These include applications where large numbers of documents with redundant information are clustered together according to certain criteria, for example news on a given topic in media monitoring and news analysis applications, or reviews on a given product/service, and are then machine translated to be published in other languages. In this scenario, it would be wise to select for publication only a subset of those documents whose translations are both relevant and of good quality. Additionally, the identification of relevant and high-quality sentences in documents can be used to highlight portions of a document that can be relied upon for gisting purposes, especially in cases where the reader does not have access to the source document.
In future work, we plan to investigate better ways of combining the translation quality and relevance scores, as well as to further investigate the effects of methods and settings on different topics.

References

Bach, N., F. Huang, and Y. Al-Onaizan. 2011. Goodness: A Method for Measuring Machine Translation Confidence. In Proceedings of the 49th Annual Meeting of the Association for Computational Linguistics, pages 211-219, Portland.

Boudin, F., S. Huet, and J.M. Torres-Moreno. 2010. A graph-based approach to cross-language multi-document summarization. Research journal on Computer science and computer engineering with applications (Polibits), 1:21-24.

Callison-Burch, C., P. Koehn, C. Monz, and O. Zaidan. 2011. Findings of the 2011 workshop on statistical machine translation. In Proceedings of the Sixth Workshop on Statistical Machine Translation, pages 22-64, Edinburgh, Scotland.

Carlson, L., J.M. Conroy, D. Marcu, D.P. O'Leary, M.E. Okurowski, A. Taylor, and W. Wong. 2001. An empirical study of the relation between abstracts, extracts, and the discourse structure of texts. In Proceedings of the Document Understanding Conference.

Chang, C. and C. Lin. 2011. LIBSVM: A library for support vector machines. ACM Transactions on Intelligent Systems and Technology, 2:27-27. Software available at http://www.csie.ntu.edu.tw/~cjlin/libsvm.

Cohen, J. 1960. A coefficient of agreement for nominal scales. Educational and Psychological Measurement, 20(1):37-46, April.

Goldstein, J., M. Kantrowitz, V. Mittal, and J.G. Carbonell. 1999. Summarizing text documents: sentence selection and evaluation metrics. Computer Science Department, page 347.

Gong, Y. and L. Xin. 2002. Generic text summarization using relevance measure and latent semantic analysis. In Proceedings of ACM SIGIR, New Orleans, US.

He, Y., Y. Ma, J. van Genabith, and A. Way. 2010. Bridging SMT and TM with translation recommendation. In Proceedings of the 48th Annual Meeting of the Association for Computational Linguistics, pages 622-630, Uppsala, Sweden, July.

Kennedy, A. and S. Szpakowicz. 2011. Evaluation of a sentence ranker for text summarization based on Roget's thesaurus. In Text, Speech and Dialogue, pages 101-108. Springer.

Mihalcea, R. 2004. Graph-based ranking algorithms for sentence extraction, applied to text summarization. In Proceedings of the ACL 2004 Interactive Poster and Demonstration Sessions, page 20.

Pouliquen, B., R. Steinberger, and C. Ignat. 2003. Automatic annotation of multilingual text collections with a conceptual thesaurus. In Proceedings of the workshop Ontologies and Information Extraction at EUROLAN'2003, Bucharest, Romania.
Savoy, J. and L. Dolamic. 2009. How effective is Google's translation service in search? Communications of the ACM, 52(10):139-143.

Soricut, R. and A. Echihabi. 2010. TrustRank: Inducing trust in automatic translations via ranking. In Proceedings of the 48th Annual Meeting of the Association for Computational Linguistics, pages 612-621, Uppsala, Sweden, July.

Specia, L. and A. Farzindar. 2010. Estimating machine translation post-editing effort with HTER. In Proceedings of the AMTA-2010 Workshop Bringing MT to the User: MT Research and the Translation Industry, Denver, Colorado.

Specia, L. 2011. Exploiting objective annotations for measuring translation post-editing effort. In 15th Conference of the European Association for Machine Translation, pages 73-80, Leuven, Belgium.

Steinberger, J. and K. Ježek. 2004. Text summarization and singular value decomposition. In Proceedings of the 3rd ADVIS Conference, Izmir, Turkey.

Steinberger, J. and K. Ježek. 2009. Update summarization based on novel topic distribution. In Proceedings of the 9th ACM Symposium on Document Engineering, Munich, Germany.

Steinberger, R., B. Pouliquen, and E. Van der Goot. 2009. An introduction to the Europe Media Monitor family of applications. In Information Access in a Multilingual World, Proceedings of the SIGIR 2009 Workshop (SIGIR-CLIR 2009), pages 1-8.

Turchi, M., J. Steinberger, M. Kabadjov, and R. Steinberger. 2010. Using parallel corpora for multilingual (multi-document) summarisation evaluation. Multilingual and Multimodal Information Access Evaluation, pages 52-63.

Wan, X., H. Li, and J. Xiao. 2010. Cross-language document summarization based on machine translation quality prediction. In Proceedings of the 48th Annual Meeting of the Association for Computational Linguistics, pages 917-926.

Yeh, J.Y., H.R. Ke, and W.P. Yang. 2008. iSpreadRank: Ranking sentences for extraction-based summarization using feature weight propagation in the sentence similarity network. Expert Systems with Applications, 35(3):1451-1462.", "main_paper_content": null }
{ "decision": "Unknown", "reviews": [] }
0
0
[]
[]
{ "id": "cPyXlWS2rQqs", "year": null, "venue": "EAMT 2018", "pdf_link": "https://aclanthology.org/2018.eamt-main.15.pdf", "forum_link": "https://openreview.net/forum?id=cPyXlWS2rQqs", "arxiv_id": null, "doi": null }
{ "title": "Evaluation of Terminology Translation in Instance-Based Neural MT Adaptation", "authors": [ "M. Amin Farajian", "Nicola Bertoldi", "Matteo Negri", "Marco Turchi", "Marcello Federico" ], "abstract": null, "keywords": [], "raw_extracted_content": "Evaluation of Terminology Translation\nin Instance-Based Neural MT Adaptation\nM. Amin Farajian1,2, Nicola Bertoldi1, Matteo Negri1, Marco Turchi1, Marcello Federico1\n1Fondazione Bruno Kessler, Trento, Italy\n2University of Trento, Trento, Italy\n{farajian,bertoldi,negri,turchi,federico }@fbk.eu\nAbstract\nWe address the issues arising when a neu-\nral machine translation engine trained on\ngeneric data receives requests from a new\ndomain that contains many specific tech-\nnical terms. Given training data of the\nnew domain, we consider two alterna-\ntive methods to adapt the generic system:\ncorpus-based andinstance-based adapta-\ntion. While the first approach is compu-\ntationally more intensive in generating a\ndomain-customized network, the latter op-\nerates more efficiently at translation time\nand can handle on-the-fly adaptation to\nmultiple domains. Besides evaluating the\ngeneric and the adapted networks with\nconventional translation quality metrics, in\nthis paper we focus on their ability to prop-\nerly handle domain-specific terms. We\nshow that instance-based adaptation, by\nfine-tuning the model on-the-fly, is capable\nto significantly boost the accuracy of trans-\nlated terms, producing translations of qual-\nity comparable to the expensive corpus-\nbased method.\n1 Introduction\nWhen deployed in production lines, machine trans-\nlation (MT) systems need to serve requests from\nvarious domains ( e.g. legal, medical, finance,\nsports, etc.) with a variety of structural and lexi-\ncal differences. Considering that technical transla-\ntion ( e.g. user guides, medical reports, etc.) rep-\nresents the largest share in the translation indus-\ntry (Kingscott, 2002) and that a significant part\nc/circlecopyrt2018 The authors. This article is licensed under a Creative\nCommons 3.0 licence, no derivative works, attribution, CC-\nBY-ND.of it deals with domain-specific terms, it is im-\nportant that machine translation delivers not only\ngeneric quality but also accurate translations of\nterms. The possibility of bearing different mean-\nings in different contexts increases the difficulty\nof translating terms, making it an interesting and\nchallenging topic in MT. Table 1 shows two ex-\namples in which Google Translate1(GT) and Bing\ntranslator2(BT) wrongly translate domain termi-\nnology. In the first example, the English word ap-\npleis wrongly recognized and translated as a term\nof the computer domain ( apple ) while it actually\nrefers to the fruit type ( mele ). In the second ex-\nample, on the contrary, Bing fails to recognize the\nmulti-word term broken Windows by producing in-\nstead a literal translation that departs from the orig-\ninal sense. These examples show that existing MT\nsystems still have difficulties in handling domain-\nspecific terms, which calls for solutions to improve\nthis aspect of MT.\nIdeal solutions for this real-world multi-domain\ntranslation scenario should be scalable enough to\nenable the industrial deployment at a reasonable\ncost, while guaranteeing a high level of flexibility\nin delivering good-quality translations for all (or\nmost of) the domains. 
This is of higher impor-\ntance for the neural approach, where building the\nsystems usually requires expensive GPU machines\ntrained for several days to weeks on large amounts\nof parallel data.\nIn this paper we analyze the ability of instance-\nbased adaptation strategy in handling domain ter-\nminology (technical terms) and compare its per-\nformance with a non-adaptive generic neural MT\n(NMT) system trained on a large pool of parallel\ndata, and a corpus-based adaptive NMT system as\n1https://translate.google.com\n2https://www.bing.com/translatorP\u0013 erez-Ortiz, S\u0013 anchez-Mart\u0013 \u0010nez, Espl\u0012 a-Gomis, Popovi\u0013 c, Rico, Martins, Van den Bogaert, Forcada (eds.)\nProceedings of the 21st Annual Conference of the European Association for Machine Translation , p. 149{158\nAlacant, Spain, May 2018.\nSrc. Composition and nutritive value of apple products.\nGT Composizione e valore nutritivo dei prodotti apple.\nRef. Composizione e valore nutritivo dei prodotti a base di mele.\nSrc. It also contains system recovery tools you can use to repair broken Windows.\nBT Esso contiene anche gli strumenti di ripristino del sistema possibile utilizzare per riparare le finestre rotte.\nRef. Esso contiene anche gli strumenti di ripristino del sistema che possibile utilizzare per riparare Windows non\nfunzionante.\nTable 1: Examples of incorrectly translating technical terms from English into Italian by online translation engines. Translation\nqueries submitted on 29/03/2018. GT and BT refer to Google Translate and Bing translator, respectively.\na strong (and expensive) term of comparison.\nOur results show that, in contrast to the generic\nand corpus-based adaptive solutions which com-\npromise either the translation quality or the ar-\nchitectural cost, recently proposed instance-based\nadaptation methods (Farajian et al., 2017b) pro-\nvide a flexible solution at reasonable costs. This\nadaptive system is based on a retrieval mechanism\nthat, given a test sentence to be translated, extracts\nfrom the pool of parallel data the top ( source ,tar-\nget) pairs in terms of similarity between the source\nand the test sentence. Using this small set of re-\ntrieved pairs, it then fine-tunes the model, and ap-\nplies it to translate the input sentence. As shown in\n(Farajian et al., 2017b), by applying local adapta-\ntion to few training instances, not only the system\nis able to improve the performance of the generic\nNMT but, in some domains, it can also outper-\nform strong specialized corpus-based NMT sys-\ntems trained for several epochs on the correspond-\ning domain-specific data.\nIn this paper, we further explore the effective-\nness of the instance-based adaptation method re-\nporting, in addition to global corpus-level BLEU\nscores, empirical results on how they perform in\ntranslating domain terminology. To this aim, we\ndivide the terms into two categories of single- and\nmulti-word phrases, and study the systems’ be-\nhaviour in each class separately. Unsurprisingly,\nin both cases corpus-based adaptation improves\nthe performance of the generic model by a large\nmargin. Such improvements, however, come at\nthe cost of computationally intensive adaptation on\nall the in-domain data. In contrast, instance-based\nadaptation achieves comparable results with a less\ndemanding strategy based on adapting the model\nto few training instances retrieved from the pool\nof data on the fly. 
This empirical proof, focused\non the proper treatment of domain terms in NMT\nadaptation, is the main contribution of this paper.2 Related works\nWhen exposed to new domains (Koehn and\nKnowles, 2017) or applied in multi-domain sce-\nnarios (Farajian et al., 2017a), machine transla-\ntion systems in general and neural MT in partic-\nular, experience performance degradations due to\nthe distance between the target domain and the\ndomain(s) on which they were trained. Previous\nstudies on multi-domain MT provide solutions for\nthis issue, making it possible to cover more than\none domain while reducing performance degrada-\ntions in the target domains (Luong and Manning,\n2015; Chen et al., 2016; Zhang et al., 2016; Fre-\nitag and Al-Onaizan, 2016; Chu et al., 2017; Fara-\njian et al., 2017b; Kobus et al., 2017). These so-\nlutions can be categorized into static andadap-\ntiveapproaches. To train one single model using\nheterogeneous data from many domains, static ap-\nproaches assume to have simultaneous access to\nall the training data and their corresponding do-\nmain/topic information. This information, which\nis either manually assigned or automatically in-\nferred from the data, is passed as additional sig-\nnal to the MT system, helping it to produce higher\nquality translations for the desired target domain.\nExisting solutions in the field of NMT propose to\nincorporate this topic/domain information only on\nthe source side ( i.e.to support the encoder) (Kobus\net al., 2017), only on the target side ( i.e.to support\nthe decoder) (Chen et al., 2016), or on both sides\n(Zhang et al., 2016). Although consistent and sig-\nnificant improvements have been reported by this\napproach, its application to new domains is not\ntrivial, mostly due to the fact that it requires per-\nforming expensive NMT and topic model adapta-\ntions using the original multi-domain data and the\ntraining set for the new domain.\nAdaptive approaches, on the other hand, pro-\npose to fine tune an existing MT system, trained\neither on another domain or pool of parallel data,\nto the new domain or task. While Luong and\nManning (2015) report significant improvements150\nby this approach on the new target domain, Freitag\nand Al-Onaizan (2016) observe a significant drop\nin system’s performance on the original domain,\nwhich is due to the severe overfitting of the model\nto the new domain. To solve this issue, they pro-\npose a slightly different approach, which performs\nensemble decoding using both the adapted and the\ngeneric model. The mixed fine tuning method pro-\nposed by Chu et al. (2017) is another approach for\nkeeping under control the performance degrada-\ntion on the original out-domain data while adapt-\ning the model to the new domain. Given the out-\ndomain and in-domain training sets and a model\npre-trained only on the out-domain data, this ap-\nproach continues the training on a parallel corpus\nthat is a mix of the two training corpora, in which\nthe smaller in-domain corpus is oversampled to\nhave the same size as the larger out-domain corpus.\nThe specialized models obtained by these adapta-\ntion techniques are empirically shown to be effec-\ntive, improving the translation quality of a generic\nNMT system on the target domains. 
However, the\npractical adoption of this approach results in devel-\noping and maintaining multiple specialized NMT\nengines (one model per domain), which increases\nthe infrastructure’s costs and limits its scalability\nin real-world application scenarios.\nTo combine the advantages of the two worlds,\n(i.e. to get close to the high quality of corpus-\nbased adaptation still keeping the scalability of\none single model), Farajian et al. (2017b) in-\ntroduce an instance-based adaptation method for\nNMT inspired by Hildebrand et al. (2005). In-\nstead of adapting the original generic model to\nthe whole in-domain training corpus, the instance-\nbased method retrieves from the pool of parallel\ndata a small set of sentence pairs in which the\nsource side is similar to the test sentence. Then,\nit fine-tunes the generic model on-the-fly by us-\ning the set of retrieved samples. This makes the\ninstance-based adaptive approach a reasonable so-\nlution for real-world production lines, in which the\nMT system needs to cover a wide range of appli-\ncation domains while keeping under control the ar-\nchitecture’s cost.\nIn addition to the architectural costs of NMT\ndeployment in multi-domain application scenar-\nios, there is another important factor that has to\nbe considered, that is their ability in translating\ndomain-specific words and phrases ( i.e. terms).\nBased on their nature, these expressions can be fre-quently observed in their corresponding domains,\nbeing at the same time infrequent or even out of\nvocabulary (OOV) in the other domains. Never-\ntheless, data-driven MT systems need to be trained\non large amounts of training data, which is gen-\nerally collected from different sources. This fur-\nther reduces the relative frequency of these words,\nmaking them less probable to be translated cor-\nrectly by the system. This makes it even more\nchallenging for NMT approach where rare and\nOOV words are either segmented into their cor-\nresponding sub-word units (Sennrich et al., 2016)\nor mapped to a special “ unk” token (Luong and\nManning, 2015). However, in the relatively re-\ncent history of NMT, there are few works that an-\nalyze its behavior focusing on domain terminol-\nogy. Chatterjee et al. (2017) achieve significant\nimprovements with a guide mechanism that helps\nthe NMT decoder to prioritize and adequately han-\ndle translation options obtained from terminology\nlists. Ar ˇcan and Buitelaar (2017) empirically show\nthat offline adaptation of a generic NMT system to\nthe new domain improves its performance in trans-\nlating domain-specific terms. However, they dis-\ncuss only corpus-based adaptation techniques that,\ncompared to instance-based methods, are less suit-\nable for real-world application. Moreover, they\nmostly work in a setting in which domain termi-\nnology has to be translated in isolation without any\ncontext, while in our working scenario we work\nwith full sentences.\n3 Neural machine translation adaptation\nIn this section we first briefly review the state-\nof-the-art sequence-to-sequence neural machine\ntranslation and then describe the two corpus-based\nand instance-based adaptation approaches.\n3.1 Neural machine translation\nWe build our adaptive systems on top of the state-\nof-the-art attention-based encoder-decoder neural\nMT (Bahdanau et al., 2015) in which given the\nsource sentence x=x1,...,x M, the translation is\nmodeled as a two-step process. 
The source sen-\ntencexis first encoded into a sequence of hid-\nden states by means of a recurrent neural network\n(RNN). Then, another RNN decodes the source\nhidden sequence into the target string. In partic-\nular, at each time step the decoder predicts the\nnext target word from the previously generated\ntarget word, the last hidden state of the decoder,151\nand a weighted combination of the encoder hidden\nstates, where the weights are dynamically com-\nputed through a feed-forward network, called at-\ntention model.\nTraining of the presented NMT architecture is\ngenerally carried out via maximum-likelihood es-\ntimation, in which the model parameters such as\nword embedding matrices, hidden layer units in\nboth the encoder and decoder, and the attention\nmodel weights are optimized over a large collec-\ntion of parallel sentences. In particular, starting\nfrom a random initialisation of the parameters, op-\ntimization is performed via stochastic gradient de-\nscent (Goodfellow et al., 2016), in which at each\niteration a randomly selected batch βis extracted\nfrom the data and each parameter θis moved one\nstep in the opposite direction of the mean gradient\nof the log-likelihood ( L), evaluated on the entries\nofβ:\n∆θ=−η1\n|β|/summationdisplay\n(x,y)∈β∂L(x,y)\n∂θ(1)\nwhereηis a hyperparameter moderating the size of\nthe step ∆θand is usually referred to as the learn-\ning rate. It can either be fixed for all parameters\nand all iterations, or vary along one or both dimen-\nsions (Goodfellow et al., 2016). During training,\nthe optimization is performed by going through\nseveral so-called epochs, i.e.the number of times\nthe whole training data is processed.\n3.2 Corpus-based adaptation in neural MT\nGiven an existing NMT model, trained either on\nanother domain or on a generic pool of parallel\ndata, corpus-based adaptation methods fine-tune\nthe model parameters by applying the same train-\ning procedure described in Section 3.1. Depending\non the application scenario, the optimization is per-\nformed by iterating over a combination of both the\ncurrent and new data (Chu et al., 2017) or only the\ntraining data of the new domain (Luong and Man-\nning, 2015). The former is usually used when the\ngoal is to adapt the model to the new domain while\navoiding performance degradation in the domain\non which the model was initially trained. Other-\nwise, only the training data of the new domain is\nused. In this paper, we opt for the latter solution\nbecause we are interested only in the performance\nof the system in the new domain.\nThese solutions, however, require a few hours to\nfine-tune the system to the target domains, which\nis scarcely compatible with application scenariosin which users need to instantly start interacting\nwith the MT system. In spite of this (and the inher-\nent cost and scalability-related issues), the compet-\nitiveness of this solution motivates its adoption as\na strong term of comparison in this paper.\n3.3 Instance-based adaptation in neural MT\nInstance-based adaptation (Farajian et al., 2017b)\nis an extension of the aforementioned adaptation\nmethod, in which, instead of adapting the model\nto all the available in-domain training data, only\nfew instances ( i.e.sentence pairs) are used to tune\nthe model. 
In particular, given an already exist-\ning NMT model, the pool of in-domain parallel\ndata, and a sentence to be translated, it performs\nthe following three steps: 1) retrieve from the pool\na set of ( source, target ) pairs in which the source is\nsimilar to the test sentence; 2) locally adapt the pa-\nrameters of the model using the retrieved sentence\npairs; 3) translate the given test sentence by ap-\nplying the resulting locally-tuned model. The dia-\ngram of the approach is shown in Figure 1. In order\nto leverage more effectively the information of the\nretrieved training samples, Farajian et al. (2017b)\npropose to set the hyperparameters of the training\nprocess ( i.e.learning rate and number of epochs)\nproportional to the similarity of the retrieved set to\nthe test. This results in fine tuning the model with\nlarger learning rates and for more epochs when the\nretrieved sentence pairs are highly similar to the\ntest and vice versa.\nInput Retrieve \nParallel Data Adapt Translate Output \nAdapted \nNMT Model \nGeneric \nNMT Model \nFigure 1: Instance-based NMT adaptation.\n4 Experimental setup\n4.1 Data\nThe experiments of this paper are carried out in\nthe English-Italian translation direction. The train-152\nSegments Tokens Types\nGeneric 20.8M 373.5M 1.7M\nGnome 76.5K 685.2K 36.0K\nKDE4 179.5K 2.1M 75.3K\nTable 2: Statistics of the Italian side of the training corpora.\nGeneric data is used for training the generic NMT system,\nwhile the domain-specific data ( i.e.Gnome and KDE4) are\nused only in the adaptation step.\ning set consists of a large collection of propri-\netary data collected from several industrial trans-\nlation projects in different domains ( i.e.medical,\nsoftware documentations, user guides, etc.). The\nstatistics of the training corpus are presented in Ta-\nble 2 (first row).\nTo evaluate the performance of the systems in\ntranslating domain-specific terms we need test sets\nin which the terms are annotated. Moreover, both\nadaptive systems need in-domain training data in\norder to fine tune the generic model to the given\ndomain. This further increases the difficulty of\nfinding the evaluation data. The BitterCorpus3\n(Arˇcan et al., 2014) is a collection of parallel\nEnglish-Italian documents in the information tech-\nnology (IT) domain (extracted from Gnome and\nKDE4 projects) in which technical terms are man-\nually marked and aligned. However, this corpus is\nnot ready to be used in our task as-is , since: i)it\ncontains only the annotated test data without any\nin-domain training set, and ii)test data are aligned\nat document level, while in our experiments we\nneed sentence-level aligned corpora.\nIn order to compile an evaluation package that\naddresses our needs, we used the publicly available\nGnome and KDE4 corpora4which are sentence-\nlevel aligned, divided them into training and test\nsets, and then automatically annotated the termi-\nnologies in the test by means of the term list ex-\ntracted from the BitterCorpus5. The statistics of\nthe Italian side of the training and test corpora\nare reported in Table 2 and 3. Some examples of\nthe English terms and their corresponding Italian\ntranslations are presented in Table 4. As we see,\nthere are several cases where, in addition to the\nspecific translations used in IT domain (marked\nwith *), the English term can have other transla-\ntions that are valid in other domains. 
For example,\ndepending on the domain, the English word but-\n3https://hlt-mt.fbk.eu/technologies/\nbittercorpus\n4http://opus.lingfil.uu.se/\n5https://gitlab.com/farajian/TermTraGSSeg.Avg. Terms\nLen. single multiallword word\nGnome 2000 20.5 2,660 183 2,843\nKDE4 2000 25.7 3,767 256 4,023\nTable 3: Statistics of the Italian side of the test corpora.\nEnglish Italian\nlist lista*, elenco*\npath path*, percorso*, indirizzo*\nbutton pulsante*, bottone\ntoolbar barra degli strumenti\nwrapping a capo automatico*, avvolgere\ntitle bar barra del titolo\nwildcards caratteri jolly\ntree view vista ad albero\nkonversation konversation\nmouse pointer puntatore del mouse\ndestination folder cartella di destinazione\nregular expression espressione regolare\nright mouse button tasto destro del mouse\nTable 4: Examples of term pairs in our test corpora. Words\nmarked with * represent in-domain translations of the term.\ntoncan refer to the object used to fasten something\n(i.e.in Italian referred to as bottone ), or the elec-\ntrical/electronic equipment that is pressed to turn\non or off a device ( i.e. translated as pulsante in\nItalian). This ambiguity is usually observed in the\ncase of single-word terms, while multi-words often\ndisambiguate each other.\n4.2 NMT systems\nWe conducted the experiments with our in-\nhouse developed and maintained branch of the\nOpenNMT-py toolkit (Klein et al., 2017), which is\nan implementation of the attention-based encoder-\ndecoder architecture (Bahdanau et al., 2015).\nOur code is integrated with the open source\nModernMT project6, and is highly optimized and\nalready deployed for production systems. In our\nexperiments, we segmented the infrequent words\ninto their corresponding sub-word units by apply-\ning the byte pair encoding (BPE) approach (Sen-\nnrich et al., 2016). In order to increase the con-\nsistency between the source and target segmenta-\ntions, we learned the BPE merge rules from the\nconcatenation of the source and target side of the\ntraining data. We set the number of merge rules\nto 32K, resulting in vocabularies of size 34K and\n35K respectively for English and Italian. We used\n2-layered LSTMs in both the encoder and decoder,\neach of which containing 500 hidden units. We\n6www.modernmt.eu153\nset the word embedding size to 500 for both the\nsource and target languages. The parameters are\noptimized with SGD using the initial learning rate\nof 1.0 with a decaying factor of 0.9. The batch size\nis set to 64, and the model is evaluated after each\nepoch. We trained the system for 11 epochs and\nselected the model with the highest BLEU score\non our development set.\nOur reimplementation of the instance-based\nadaptive system uses the open source Lucene li-\nbrary (McCandless et al., 2010) to store the train-\ning samples ( i.e.pool of the generic and domain-\nspecific data). Similar to (Farajian et al., 2017b),\ngiven the test sentence it retrieves the most similar\ninstance from the pool ( i.e.top-1) and adapts the\naforementioned generic model accordingly. It sets\nthe hyperparameters of the fine-tuning process pro-\nportional to the similarity of the retrieved instance\nand the test sentence. For example, it fine-tunes the\nmodel with the learning rate of 0.2 for 10 iterations\nif the similarity of the retrieved instance to the test\nis 1.0. 
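The retrieve-adapt-translate loop just described can be summarised in a short sketch. This is an illustration of the procedure only, not the actual implementation: retrieve_similar, similarity, fine_tune and translate stand for the retrieval index, the similarity measure and the NMT engine (in our setup, the Lucene index, the similarity measure and the OpenNMT-py based engine described in this section), and the linear similarity-to-hyperparameter mapping shown here is a placeholder for the schedule of Farajian et al. (2017b).

```python
def adaptive_translate(test_sentence, generic_model, retrieve_similar,
                       similarity, fine_tune, translate,
                       max_lr=0.2, max_epochs=10):
    """Instance-based adaptation: retrieve the most similar (source, target)
    pair, fine-tune a copy of the generic model on it with hyperparameters
    proportional to the similarity, then translate the test sentence.
    The linear mapping below is a simplified placeholder."""
    src, tgt = retrieve_similar(test_sentence)   # top-1 training instance
    sim = similarity(test_sentence, src)         # e.g. sentence-level BLEU in [0, 1]
    if sim == 0.0:
        # nothing useful retrieved: fall back to the unadapted generic model
        return translate(generic_model, test_sentence)
    adapted = fine_tune(generic_model,
                        [(src, tgt)],
                        learning_rate=max_lr * sim,
                        epochs=max(1, round(max_epochs * sim)))
    return translate(adapted, test_sentence)
```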
In this work we used sentence-level BLEU\n(Chen and Cherry, 2014) as the similarity measure.\nIn our experiments, the average time for updating\nthe model was about 0.5 seconds per sentence.\nThe corpus-based adapted NMT systems are\nmultiple instances of the generic system each\nof which trained on the corresponding domain-\nspecific training data for several epochs, until no\nimprovement is observed in the model perplex-\nity on our development set for four consecutive\nepochs. We then used, for each domain, the model\nwith minimum perplexity on the development set\n(i.e. model obtained after 26 and 4 epochs re-\nspectively for Gnome and KDE4). We used the\nsame settings as the generic system for training\nthese systems. However, for fine tuning we started\nwith a learning rate of 0.5. In our experiments, the\ncorpus-based adaptation of the model took about\n3:00 and 1:15 hours for Gnome and KDE4 do-\nmains, respectively.7\n4.3 Evaluation metrics\nWe evaluate the systems’ performance both in\nterms of BLEU (Papineni et al., 2002) and term hit\nrate (THR). While the former measures the over-\nall quality of the translations with respect to the\nmanually-translated reference, the latter analyzes\nthe ability of the system in learning the vocabulary\n7We carried out the experiments on Azure instances with\nNVIDIA Tesla k80 GPUs.of each specific domain. To this aim it computes\nthe proportion of terms in the reference that are\ncorrectly translated by the MT system. However,\nin order to avoid assigning higher scores to the sys-\ntems which over-generate the same term, it clips\nthe counts of the matched terms by their frequency\nin the reference (2).\nTHR =/summationtext\nterm∈refcountclip(term )\n/summationtext\nterm∈refcountref(term )(2)\nSince there are two types of single-word and\nmulti-word terms in our test sets, in order to have\na more detailed analysis of systems’ behaviour we\nreport the scores for each class separately in addi-\ntion to the overall THR score.\n5 Analysis and discussion\nIn this section we present a detailed analysis of the\nresults obtained by different systems and compare\nthe systems in translating the technical terms in\nGnome and KDE4.\n5.1 Translation quality\nTable 5 reports the performance of the generic,\ninstance-based, and corpus-based adaptive sys-\ntems on Gnome and KDE4 test sets in terms of\nBLEU. As the results show (first two rows), the\ninstance-based system significantly improves the\nperformance of the generic system by +7.80 and\n+6.55 BLEU points. However, it obtains a lower\nBLEU score compared to its corpus-based counter-\nparts. In our investigations, we noticed that in al-\nmost all the cases where the application domain is\nnew ( i.e.the training data of the target domain was\nnot included in the pool of data used for training\nthe generic system), the corpus-based adapted sys-\ntem produces translations of higher quality com-\npared to the instance-based system. Neverthe-\nless, this comes at the cost of training the system\nfor several hours, instead of instantly starting the\ntranslation process.\nAnother interesting phenomenon that we ob-\nserved in these experiments is the correlation of the\nperformance gain and the average similarity of the\nretrieved samples to the test sentences. We noticed\na larger performance gain in the case of Gnome\ncompared to KDE4 (+7.80 vs. 
+6.55) while the\naverage similarity of the retrieved sentence pairs\nin this domain is lower (0.36 compared to 0.56).\nComparing the ratio of the successful retrievals in154\nAvg. Sim. GenericInstance Corpus\nbased based\nGnome 0.36 35.97 43.77 49.79\nKDE4 0.56 35.09 41.64 46.26\nGnome† 0.43 38.06 51.36 56.00\nKDE4† 0.61 36.99 51.84 48.95\nTable 5: BLEU score of the generic and adaptive NMTs on\nthe test sets. The corpora marked with †are subsets of the\noriginal corpora for which a similar instance is retrieved.\nthe two systems partially explains this behaviour:\nin the case of Gnome, in 83.9% of the cases the\nsystem is able to find training samples similar to\nthe test while in KDE4 this figure decreases to\n75.8%. Moreover, by limiting our analysis to these\ncases, i.e.sentences for which the system has suc-\ncessfully retrieved a similar instance (last two rows\nin Table 5), we see a correlation higher than 0.9\nbetween the performance gain and the similarity.\nEven more surprisingly, we observe that on this\nsubset of KDE4 corpus the instance-based system\noutperforms its corpus-based counterpart. This is\nmostly due to the fact that retrieved instances in\nthis case are highly similar to the test sentences.\n5.2 Term translation\nTable 6 presents the performance of the systems\non both Gnome and KDE4 data. Since a large\nportion of the generic training data belongs to\nthe IT domain we observe a reasonably high per-\nformance by the generic system in the studied\ndomains, in particular on the single-word terms\n(79.58 and 73.70 on Gnome and KDE4 domains,\nrespectively). However, translating multi-word\nterms is more challenging for all the systems as it\ninvolves producing sequences of words that might\nhave several translations in different context. For\nexample the English words bar,path, and mouse\nare usually translated into bar,indirizzo , and topo\nwhile their contextual translations in the techni-\ncal terms title bar ,full path andmouse pointer is\nbarra ,path, and mouse . This makes the transla-\ntion more difficult for the systems, resulting in a\nsignificant performance drop compared to the case\nof single-word terms.\n5.3 Instance selection effect\nIn addition to the similarity of the retrieved sam-\nples to the test discussed in (Farajian et al., 2017b),\nthe presence of domain terms in the retrieved sen-\ntence pairs is another important factor for instance-\nbased adaptation. As Table 7 shows, in about 30%Term Type GenericInstance Corpus\nbased based\nGnomeSingle-word 79.58 82.16 86.55\nMulti-word 62.79 70.54 80.62\nOverall 78.59 81.48 86.20\nKDE4Single-word 73.70 79.48 81.94\nMulti-word 48.15 58.52 61.48\nOverall 72.24 78.28 80.78\nTable 6: Performance of the generic and adaptive NMTs on\nthe test sets, in terms of THR.\nTerm Type English Italian\nGnomeSingle-word 70.1 62.0\nMulti-word 60.0 51.6\nOverall 69.7 61.4\nKDE4Single-word 71.7 59.5\nMulti-word 68.2 45.5\nOverall 71.5 58.7\nTable 7: Percentage of the retrieved samples that contain the\ndesired terms.\nof the cases the retrieved English sentence does not\ncontain the desired term. This proportion is even\nhigher if we look at the target side of the retrieved\ninstances, in which around 40% of the desired term\ntranslations are missing. However, this is expected\nsince the retrieval is performed only based on the\nsource side information ( i.e. in our experiments\nEnglish), with no additional filters based on the\ntarget side of the retrieved instance. 
Measuring the\nperformance of the adaptive system in correcting\nthe terms which are missed by the generic system\nshows that the instance-based system effectively\nlearns the vocabulary of the application domain,\ncorrecting up to 76.64% of the mistakes made by\nthe generic system if the desired term translation\nexists in the retrieved instance (Table 8).\n6 Further analysis\nIn addition to the automatic evaluations we per-\nformed further manual analysis on the outputs of\nthe instance-based adaptive system. The results of\nthis analysis indicate that, compared to the generic\nsystem, its behavior differs in two main aspects:\ni)learning to translate the terms that are missed\nor wrongly translated by the generic system, ii)\nadapting to different style of the translation. When\nrun on new domains, for which it has not seen any\nSingle-word Multi-word Overall\nGnome 64.33 52.94 63.22\nKDE4 76.92 73.91 76.64\nTable 8: Percentage of the terms corrected by the instance-\nbased adaptive system.155\nin-domain training data, it is highly probable that\nthe generic system receives translation requests\ncontaining terms which are OOV or infrequently\nobserved in the training data. In such cases, even\nafter applying BPE, it might not be able to pro-\nduce proper translations. As an example, the En-\nglish word dolphin , which is rarely observed in\nthe generic training data, is always translated in\nthe Italian word delfino which refers to the animal.\nHowever, in the KDE4 domain it corresponds to\na proper noun that indicates a file manager appli-\ncation. As we see in Table 9, the generic system\nwrongly translates it into delphin . By accessing in-\ndomain training data ( i.e.either the full corpus or\njust one single, highly similar instance), both the\nadaptive systems are able to correctly translate it.\nThe English terms Control Center andmouse\ncursor are two interesting examples of learning\ndomain-specific translation styles. While in the\ngeneric training data these terms are usually trans-\nlated into Control Center andcursore del mouse ,\nin the KDE4 domain the human translators prefer\nthem to be translated into centro di controllo and\npuntatore del mouse . As we see in the examples\nof Table 9, the generic system produces their com-\nmonly used translations, while the instance-based\nsystem is able to learn and produce the desired\ndomain-specific translations.\nWe also observed a few cases in which the\ninstance-based approach learns to properly gener-\nate Italian terms in the translation while there is no\ncorresponding source English term in the given test\nsentence. The Italian word pulsante in the fourth\nexample provided in Table 9 is one of these cases.\nAs we see, the input English sentence does not\ncontain the word button , hence both the generic\nand corpus-based adapted NMT systems do not\nproduce any translation for it. On the contrary,\nthe instance-based system, being trained on a very\nsimilar instance which contains the word pulsante ,\nlearns the pattern and produces a translation that is\ncloser to the reference.\nFinally, we noticed that inconsistent translations\nof the terms can affect the instance-based adaptive\nsystem, resulting in translations which are differ-\nent than the manually produced references. The\nlast example provided in Table 9 shows one of\nthese cases. As we see, the English term pack-\nages can be translated into either pacchetti orpack-\nage. 
So, based on the suggestion provided by the\nretrieval module, the instance-based system learnsto translate it into package which is another valid\ntranslation of this term. This, however, does not\naffect the global performance of the system due to\nthe small amount of similar situations.\n7 Conclusions\nWe investigated the application of instance-based\nadaptive NMT in a real-world scenario where\ntranslation requests come from new domains that\ncontain many technical terms. In particular, we\nanalyzed its ability to properly handle domain ter-\nminology, comparing its output against the trans-\nlations produced by a generic (unadapted) NMT\nsystem and a corpus-based specialized NMT sys-\ntem. Overall, our experiments with Gnome and\nKDE4 data reveal that the two adaptation meth-\nods significantly improve the performance of the\ngeneric system both in terms of global BLEU score\nandterm translation accuracy . Unsurprisingly, by\nperforming a computationally intensive fine tuning\non the full in-domain training data, corpus-based\nadaptation produces specialized NMT systems that\nachieve better results at the cost of reduced scal-\nability. However, the less demanding instance-\nbased adaptation (performed on one parallel sen-\ntence pair retrieved from a pool of data based on\nits similarity to the test sentence), is able to ef-\nfectively learn domain terms’ translations, even\nfor expressions that were never observed by the\ngeneric model. Such capability allows instance-\nbased adaptation to significantly reduce the gap be-\ntween generic and corpus-based specialized NMT\nmodels at manageable costs.\nAcknowledgements\nThis work has been partially supported by the EC-\nfunded H2020 projects QT21 (grant no. 645452)\nand ModernMT (grant no. 645487). This work\nwas also supported by The Alan Turing Institute\nunder the EPSRC grant EP/N510129/1 and by a\ndonation of Azure credits by Microsoft.\nReferences\n[Arˇcan and Buitelaar2017] Ar ˇcan, Mihael and Paul\nBuitelaar. 2017. Translating domain-specific ex-\npressions in knowledge bases with neural machine\ntranslation. CoRR. http://arxiv.org/abs/\n1709.02184 .\n[Arˇcan et al.2014] Ar ˇcan, Mihael, Marco Turchi, Sara\nTonelli, and Paul Buitelaar. 2014. Enhancing sta-\ntistical machine translation with bilingual terminol-156\nSource [...]Files may be dragged and dropped onto & kwrite; from the Desktop, the filemanager & dolphin;[...]\nReference [...]I file possono essere trascinati e rilasciati su & kwrite; dal Desktop, & dolphin;[...]\nRet. Src. [...]Files may be dragged and dropped onto & kate; from the Desktop, the filemanager & dolphin;[...]\nRet. Trg. [...]I file possono essere trascinati e rilasciati in & kate; dal Desktop, dal gestore di file & dolphin;[...]\nGeneric [...]I file possono essere trascinati e rilasciati su & kwrite; dal Desktop, dal file manager e dal delphin;[...]\nInstance-based [...]I file possono essere trascinati e rilasciati in & kwrite; dal Desktop, dal gestore di file & dolphin;[...]\nCorpus-based [...]I file possono essere trascinati e caduti su kwrite; dal desktop, dal gestore file & dolphin;[...]\nSource [...]The mouse cursor is identified in the status bar.[...]\nReference [...]Al puntatore del mouse `e identificato nella barra di stato.[...]\nRet. Src. [...]When you hold the mouse cursor still for a moment[...]\nRet. Trg. 
[...]Mantenendo fermo per qualche istante il puntatore del mouse[...]\nGeneric [...]Il cursore del mouse viene identificato nella barra di stato.[...]\nInstance-based [...]Il puntatore del mouse viene identificato nella barra di stato.[...]\nCorpus-based [...]Il cursore del mouse `e identificato nella barra di stato.[...]\nSource Exiting the kde Control Center\nReference Uscire dal centro di controllo di kde\nRet. Src. The kde Control Center Screen\nret. Trg. Lo schermo del centro di controllo di kde;\nGeneric Uscita da kde Control Center\nInstance-based Uscita dal centro di controllo di kde\nCorpus-based Uscita dal centro di controllo di kde\nSource This saves the settings and closes the configuration dialog.\nReference Questo pulsante salva le impostazioni e chiude la finestra di configurazione.\nRet. Src. This saves the settings without closing the configuration dialog.\nRet. Trg. Questo pulsante salva le impostazioni senza chiudere la finestra di configurazione.\nGeneric pulsante Salva le impostazioni e chiude la finestra di configurazione.\nInstance-based Questo pulsante salva le impostazioni e chiude la finestra di configurazione.\nCorpus-based Questo pulsante salva le impostazioni e chiude la finestra di configurazione.\nSource Automatically scan project’s packages\nReference Analizzare automaticamente i pacchetti del progetto\nRet. Src. View or modify the UML system’s packages.\nRet. Trg. Consente di visualizzare o modificare i package di sistema UML.\nGeneric Scansione automatica dei pacchetti del progetto\nInstance-based Scansione automatica dei package di progetto\nCorpus-based Scansiona automaticamente i pacchetti del progetto\nTable 9: Translation examples produced by generic and adaptive NMT systems.157\nogy in a CAT environment. In Proc. ofAMTA’14,\nVancouver, BC, Canada, October.\n[Bahdanau et al.2015] Bahdanau, Dzmitry, Kyunghyun\nCho, and Yoshua Bengio. 2015. Neural machine\ntranslation by jointly learning to align and translate.\nInProc. ofICLR’15, San Diego, CA, USA, May.\n[Chatterjee et al.2017] Chatterjee, Rajen, Matteo Negri,\nMarco Turchi, Marcello Federico, Lucia Specia, and\nFrederic Blain. 2017. Guiding neural machine trans-\nlation decoding with external knowledge. In Proc. of\nWMT’17, pages 157–168, Copenhagen, Denmark,\nSeptember.\n[Chen and Cherry2014] Chen, Boxing and Colin\nCherry. 2014. A systematic comparison of smooth-\ning techniques for sentence-level BLEU. In Proc.\nofWMT’14, pages 362–367, Baltimore, Maryland,\nUSA, June.\n[Chen et al.2016] Chen, Wenhu, Evgeny Matusov,\nShahram Khadivi, and Jan-Thorsten Peter. 2016.\nGuided Alignment Training for Topic-Aware Neural\nMachine Translation. In Proc. ofAMTA’16, pages\n121–134, Austin, Texas, October.\n[Chu et al.2017] Chu, Chenhui, Raj Dabre, and Sadao\nKurohashi. 2017. An empirical comparison of sim-\nple domain adaptation methods for neural machine\ntranslation. In Proc. ofACL’17-Short Papers, pages\n385–391, Vancouver, Canada, July, August.\n[Farajian et al.2017a] Farajian, M. Amin, Marco Turchi,\nMatteo Negri, Nicola Bertoldi, and Marcello Fed-\nerico. 2017a. Neural vs. phrase-based machine\ntranslation in a multi-domain scenario. In Proc. of\nEACL’17, pages 280–284, Valencia, Spain, April.\n[Farajian et al.2017b] Farajian, M. Amin, Marco\nTurchi, Matteo Negri, and Marcello Federico.\n2017b. Multi-domain neural machine translation\nthrough unsupervised adaptation. In Proc. of\nWMT’17, pages 127–137, Copenhagen, Denmark,\nSeptember.\n[Freitag and Al-Onaizan2016] Freitag, Markus and\nYaser Al-Onaizan. 
2016. Fast domain adap-\ntation for neural machine translation. CoRR.\nhttps://arxiv.org/abs/1612.06897 .\n[Goodfellow et al.2016] Goodfellow, Ian, Yoshua Ben-\ngio, and Aaron Courville. 2016. Deep Learning.\nMIT Press.\n[Hildebrand et al.2005] Hildebrand, Almut Silja, Mat-\ntias Eck, Stephan V ogel, and Alex Waibel. 2005.\nAdaptation of the translation model for statistical\nmachine translation based on information retrieval.\nInProc. ofEAMT’05, pages 133–142, Budapest,\nHungary, May.\n[Kingscott2002] Kingscott, Geoffrey. 2002. Techni-\ncal translation and related disciplines. Perspectives,\n10(4):247–255.[Klein et al.2017] Klein, Guillaume, Yoon Kim, Yun-\ntian Deng, Jean Senellart, and Alexander M. Rush.\n2017. Opennmt: Open-source toolkit for neural ma-\nchine translation. In Proc. ofACL’17, pages 67–72,\nVancouver, Canada, July, August.\n[Kobus et al.2017] Kobus, Catherine, Josep Maria\nCrego, and Jean Senellart. 2017. Domain Con-\ntrol for Neural Machine Translation. In Proc.\nofRANLP’17, pages 372–378, Varna, Bulgaria,\nSeptember.\n[Koehn and Knowles2017] Koehn, Philipp and Rebecca\nKnowles. 2017. Six challenges for neural ma-\nchine translation. In Proc. of1stWorkshop on\nNeural Machine Translation, pages 28–39, Vancou-\nver, Canada, August.\n[Luong and Manning2015] Luong, Minh-Thang and\nChristopher D Manning. 2015. Stanford Neural\nMachine Translation Systems for Spoken Language\nDomains. In Proc. ofIWSLT’15, pages 76–79, Da\nNang, Vietnam, December.\n[McCandless et al.2010] McCandless, Michael, Erik\nHatcher, and Otis Gospodnetic. 2010. Lucene in\nAction. Manning Publications Co., Greenwich, CT,\nUSA.\n[Papineni et al.2002] Papineni, Kishore, Salim Roukos,\nTodd Ward, and Wei-Jing Zhu. 2002. Bleu: a\nmethod for automatic evaluation of machine transla-\ntion. In Proc. ofACL’02, pages 311–318, Philadel-\nphia, USA, July.\n[Sennrich et al.2016] Sennrich, Rico, Barry Haddow,\nand Alexandra Birch. 2016. Neural Machine Trans-\nlation of Rare Words with Subword Units. In Proc.\nofACL’16, pages 1715–1725, Berlin, Germany, Au-\ngust.\n[Zhang et al.2016] Zhang, Jian, Liangyou Li, Andy\nWay, and Qun Liu. 2016. Topic-informed neural\nmachine translation. In Proc. ofCOLING’16, pages\n1807–1817, Osaka, Japan, December.158", "main_paper_content": null }
{ "decision": "Unknown", "reviews": [] }
0
0
[]
[]
{ "id": "4lFNk0AXIpUF", "year": null, "venue": "EAMT 2018", "pdf_link": "https://aclanthology.org/2018.eamt-main.1.pdf", "forum_link": "https://openreview.net/forum?id=4lFNk0AXIpUF", "arxiv_id": null, "doi": null }
{ "title": "Contextual Handling in Neural Machine Translation: Look behind, ahead and on both sides", "authors": [ "Ruchit Agrawal", "Marco Turchi", "Matteo Negri" ], "abstract": null, "keywords": [], "raw_extracted_content": "Contextual Handling in Neural Machine Translation:\nLook Behind, Ahead and on Both Sides\nRuchit Agrawal(1,2), Marco Turchi(1), Matteo Negri(1)\n(1)Fondazione Bruno Kessler, Italy\n(2)University of Trento, Italy\n{ragrawal, turchi, negri }@fbk.eu\nAbstract\nA salient feature of Neural Machine Trans-\nlation (NMT) is the end-to-end nature of\ntraining employed, eschewing the need of\nseparate components to model different\nlinguistic phenomena. Rather, an NMT\nmodel learns to translate individual sen-\ntences from the labeled data itself. How-\never, traditional NMT methods trained on\nlarge parallel corpora with a one-to-one\nsentence mapping make an implicit as-\nsumption of sentence independence. This\nmakes it challenging for current NMT sys-\ntems to model inter-sentential discourse\nphenomena. While recent research in\nthis direction mainly leverages a single\nprevious source sentence to model dis-\ncourse, this paper proposes the incorpora-\ntion of a context window spanning previ-\nous as well as next sentences as source-\nside context and previously generated out-\nput as target-side context, using an effec-\ntive non-recurrent architecture based on\nself-attention. Experiments show improve-\nment over non-contextual models as well\nas contextual methods using only previous\ncontext.\n1 Introduction\nNeural Machine Translation (Kalchbrenner and\nBlunsom, 2013; Sutskever et al., 2014; Bahdanau\net al., 2014; Cho et al., 2014) has consistently out-\nperformed other MT paradigms across a range of\ndomains, applications and training settings (Ben-\ntivogli et al., 2016; Castilho et al., 2017; Toral\nc/circlecopyrt2018 The authors. This article is licensed under a Creative\nCommons 3.0 licence, no derivative works, attribution, CC-\nBY-ND.and S ´anchez-Cartagena, 2017), thereby emerging\nas the de facto standard in Machine Translation.\nNMT models are typically trained at the sentence\nlevel (Cho et al., 2014), whereby the probability of\nan output sentence given an input sentence is max-\nimized, implicitly making an assumption of sen-\ntence independence across the dataset. This works\nwell for the translation of stand-alone sentences or\ndatasets containing shuffled sentences, which are\nnot connected with each other in terms of discur-\nsive dependencies. However, in real life situations,\nwritten text generally follows a sequential order\nfeaturing a number of cross-sentential phenomena.\nAdditionally, speech-like texts (Bawden, 2017)\nexhibit the trait of contextual dependency and se-\nquentiality as well, often containing a greater num-\nber of references that require a common knowl-\nedge ground and discourse understanding for cor-\nrect interpretation. Figure 1 shows an example\nof such inter-sentential dependencies. These de-\npendencies are not fully leveraged by the majority\nof contemporary NMT models, owing to the treat-\nment of sentences as independent units for transla-\ntion.\nIn order to perform well on sequential texts,\nNMT models need access to extra information,\nwhich could serve as the disambiguating context\nfor better translation. 
Recent work in this direction (Zoph and Knight, 2016; Jean et al., 2017; Tiedemann and Scherrer, 2017; Bawden et al., 2017; Wang et al., 2017) has primarily focused on previous source-side context for disambiguation. Since all of these approaches utilize recurrent architectures, adding context comprising more than a single previous sentence can be challenging due to either (i) the increased number of estimated parameters and training time, in case of the multi-encoder approach (Jean et al., 2017), or (ii) performance drop due to very long inputs (Koehn and Knowles, 2017), in case of extended translation units (Tiedemann and Scherrer, 2017). Hence, the impact of utilizing a large-sized context window on the source as well as the target side remains unclear. Additionally, the impact of incorporating the next sentences as context on the source side also needs to be examined, owing to discourse phenomena like cataphora and gender agreement, illustrated in Figure 1.

Figure 1: Inter-sentential dependencies requiring previous (source and target) and next (source) context

We address this gap and investigate the contribution of a context window looking behind as well as ahead on the source side, combined with previous target-side context, in an efficient non-recurrent “Transformer” architecture with self-attention (hereafter Transformer), recently proposed by Vaswani et al. (2017). We choose this architecture due to its effective handling of long-range dependencies and easily achievable computational parallelization. These characteristics are due to the fact that the Transformer is based entirely on self-attention, as opposed to LSTMs or GRUs. The non-recurrent architecture enables effective parallelization, which is not possible with RNNs due to their sequentiality, thereby reducing the computational complexity considerably. We perform experiments using differently sized context windows on the source and target side. This is the first effort towards contextual NMT with Transformer to the best of our knowledge.
On\nthe English-Italian data from the IWSLT 2017\nshared task (Cettolo et al., 2017), the best of our\nmodels achieves a 2.3% increase in BLEU score\nover a baseline Transformer model trained without\nany inter-sentential context and a 2.6% increase in\nBLEU score over a multi-source BiLSTM model\ntrained using a previous source sentence as addi-tional context.\nThe major contributions of this paper are sum-\nmarized below:\n•We demonstrate that looking ahead at the\nfollowing text in addition to looking behind\nat the preceding text on the source-side im-\nproves performance.\n•We demonstrate that both source-side context\nas well as target-side context help to improve\ntranslation quality, the latter however is more\nprone to error propagation.\n•We demonstrate that looking further beyond\na single previous sentence on the source-side\nresults in better performance, especially in\nabsence of target-side context.\n•We show that a simple method like concate-\nnation of the multiple inputs, when used with\nthe Transformer, generates efficient transla-\ntions, whilst being trained more than three\ntimes faster than an RNN based architecture.\nThe rest of the paper is organized as follows: We\ndescribe an outline of the related work in Section\n2, and provide a theoretical background in Section\n3. Section 4.1 briefly describes the discourse phe-\nnomena which we would like to capture using our\ncontextual NMT models. Our approach to model\ndiscourse and the experiments conducted are de-\nscribed in Section 4. Section Section 5 presents\nthe results obtained by our models, along with a\nlinguistic analysis of the implications therein. We\npresent the conclusions of the present research and\nhighlight possible directions for future work in\nSection 6.12\n2 Related Work\nDiscourse modeling has been explored to a sig-\nnificant extent for Statistical Machine Translation\n(Hardmeier, 2012), using methods like discrim-\ninative learning (Gim ´enez and M `arquez, 2007;\nTamchyna et al., 2016), context features (Gim-\npel and Smith, 2008; Costa-Juss `a et al., 2014;\nS´anchez-Mart ´ınez et al., 2008; Vintar et al.,\n2003), bilingual language models (Niehues et al.,\n2011), document-wide decoding (Hardmeier et\nal., 2012; Hardmeier et al., 2013) and factored\nmodels (Meyer et al., 2012). The majority of these\nworks, however, look mainly at intra-sentential\ndiscourse phenomena, owing to the limited capa-\nbility of SMT models to exploit extra-sentential\ncontext. The neural MT paradigm, on the other\nhand, offers a larger number of avenues for look-\ning beyond the current sentence during translation.\nRecent work on incorporating contextual infor-\nmation into NMT models has delved primarily into\nmulti-encoder models (Zoph and Knight, 2016;\nJean et al., 2017; Bawden et al., 2017), hierarchy\nof RNNs (Wang et al., 2017) and extended transla-\ntion units containing the previous sentence (Tiede-\nmann and Scherrer, 2017). These approaches build\nupon the multi-task learning method proposed by\nLuong et al. (2015), adapting it specifically for\ntranslation. Zoph and Knight (2016) propose a\nmulti-source training method, which employs mul-\ntiple encoders to represent inputs coming from dif-\nferent languages. Their method utilizes the sources\navailable in two languages in order to produce bet-\nter translations for a third language. 
Jean et al.\n(2017) use the multi-encoder framework, with one\nset of encoder and attention each for the previous\nand the current source sentence as an attempt to\nmodel context. However, this method would be\ncomputationally expensive with an increase in the\nnumber of contextual sentences owing to the in-\ncrease in estimated parameters.\nWang et al. (2017) employ a hierarchy of RNNs\nto summarize source-side context (previous three\nsentences). This method addresses the computa-\ntional complexity to an extent, however it does\nnot incorporate target-side context, which has been\nshown to be useful by (Bawden et al., 2017).\nBawden et al. (2017) present an in-depth analysis\nof the evaluation of discourse phenomena in NMT\nand the challenges faced thereof. They provide a\nhand-crafted test set specifically aimed at captur-\ning discursive dependencies. However, this set iscreated with the assumption that the disambiguat-\ning context lies in the previous sentence, which is\nnot always the case (Scarton et al., 2015).\nOur work is most similar to (Tiedemann and\nScherrer, 2017), who employ the standard NMT\narchitecture without multiple encoders, but using\nlarger blocks containing the previous and the cur-\nrent sentence as input for the encoder, as an at-\ntempt to better model discourse phenomena. The\nprimary limitation of this method is the inability to\nadd larger context due to the ineffective handling\nof long-range dependencies by RNNs (Koehn and\nKnowles, 2017). Additionally, this method does\nnot look at the following source-text, due to which\nphenomena like cataphora and lexical cohesion are\nnot captured well.\nWhile the above-mentioned works employ the\nprevious source text, we propose employing a con-\ntext window spanning previous as well as next\nsource sentences in order to model maximal dis-\ncourse phenomena. On the target-side, we decode\nthe previous and current sentence while looking\nat the source-window, thereby employing target-\nside context as well. Additionally, we employ the\nTransformer for our contextual models, as opposed\nto the above-mentioned works using RNNs, due to\nthe enhanced long-range performance and compu-\ntational parallelization.\n3 Background\n3.1 NMT with RNNs and Transformer\nNeural MT employs a single neural network\ntrained jointly to provide end-to-end translation\n(Kalchbrenner and Blunsom, 2013; Sutskever et\nal., 2014; Bahdanau et al., 2014). NMT mod-\nels typically consist of two components - an en-\ncoder and a decoder. The components are gener-\nally composed of Stacked RNNs (Recurrent Neu-\nral Networks), using either Long Short Term Mem-\nory (LSTM) (Sundermeyer et al., 2012) or Gated\nRecurrent Units (GRU) (Chung et al., 2015). The\nencoder transforms the source sentence into a vec-\ntor from which the decoder extracts the proba-\nble targets. Specifically, NMT aims to model\nthe conditional probability p(y|x)of translating a\nsource sentence x = x1,x2...xuto a target sentence\ny =y1,y2,...yv. 
Let s be the representation of the source sentence as computed by the encoder. Based on the source representation, the decoder produces a translation, one target word at a time, and decomposes the conditional probability as:

\log p(y|x) = \sum_{j=1}^{v} \log p(y_j \mid y_{<j}, s)   (1)

The entire model is jointly trained to maximize the (conditional) log-likelihood of the parallel training corpus:

\max_{\theta} \frac{1}{N} \sum_{n=1}^{N} \log p_{\theta}(y^{(n)} \mid x^{(n)}, \theta)   (2)

where (y^{(n)}, x^{(n)}) represents the n-th sentence pair in a parallel corpus of size N and \theta denotes the set of all tunable parameters.

Research in NMT recently witnessed a major breakthrough in the Transformer architecture proposed by Vaswani et al. (2017). This architecture eschews the recurrent as well as convolution layers, both of which are integral to the majority of contemporary neural network architectures. Instead, it uses stacked multi-head attention as well as positional encodings to model the complete sequential information encoded by the input sentences. The decoder comprises a similar architecture, using masked multi-head attention followed by softmax normalization to generate the output probabilities over the target vocabulary. The positional encodings are added to the input as well as output embeddings, enabling the model to capture the sequentiality of the input sentence without having recurrence. The encodings are computed from the position (pos) and the dimension (i) as follows:

PE_{(pos, 2i)} = \sin\left( pos / 10000^{2i/d_{model}} \right)   (3)

PE_{(pos, 2i+1)} = \cos\left( pos / 10000^{2i/d_{model}} \right)   (4)

where PE stands for positional encodings and d_{model} is the dimensionality of the vectors resulting from the embeddings learned from the input and output tokens. Thus, each dimension of the encoding (i) corresponds to a sinusoid.

3.2 Inter-sentential discourse phenomena

Coherence in a text is implicitly established using a variety of discourse relations. Contextual information can help in handling a variety of discourse phenomena, mainly involving lexical choice, linguistic agreement, coreference - anaphora (Hardmeier and Federico, 2010) as well as cataphora - and lexical coherence. Spoken language especially contains a large number of such dependencies, due to the presence of an environment facilitating direct communication between the parties (Pierrehumbert and Hirschberg, 1990), where gestures and a common ground/theme are often used as the disambiguating context, thereby rendering the need for explicit mentions in the text less important. A reasonable amount of noun phrases are established deictically, and the theme persists until it is taken over by another theme.

Deictic references are challenging to resolve for NMT models that consider only the current sentence pair, and possible errors involving gender usage as well as linguistic agreement can be introduced in the translation. For instance, for English→Italian translation, establishing the linguistic features of the noun under consideration is crucial for translation. The coordination with the adjective (buona vs buono), pronominal references (lui vs lei), the past participle verb form (sei andato vs sei andata) as well as articles (il vs la) all depend on the noun.

Establishing the noun under consideration could improve MT quality significantly; an example of this is shown in (Babych and Hartley, 2003), wherein Named Entity Recognition benefits translation.
This would eventually lead to less post-editing effort, which is significant for correcting coreference-related errors (Daems et al., 2015). Other inter-sentential phenomena we would like to capture include temporality (precedence, succession), causality (reason, result), condition (hypothetical, general, unreal, factual), implicit assertion, contrast (juxtaposition, opposition) and expansion (conjunction, instantiation, restatement, alternative).

4 Experiments

4.1 Context integration

We model discourse using context windows on the source as well as the target side. For the source, we use one, two and three previous sentences and one next sentence as additional context. For the target, we use one and two previous sentences as additional context (increasing beyond this caused a drop in performance in our preliminary experiments). We choose the Transformer for our experiments. The non-recurrent architecture enables it to better handle longer sequences, without an additional computational cost. This is made possible by using a multi-headed self-attention mechanism. The attention is a mapping from (query, key, value) tuples to an output vector. For the self-attention, the query, key and value come from the previous encoder layer, and the attention is computed as:

SA(Q, K, V) = \mathrm{softmax}\left( \frac{Q K^{T}}{\sqrt{d_k}} \right) V   (5)

where Q is the query matrix, K is the key matrix and V is the value matrix, d_k is the dimensionality of the queries and keys, and SA is the computed self-attention. This formulation ensures that the net path length between any two tokens, irrespective of their position in the sequence, is O(1).

The multi-head attention makes it possible for the Transformer to model information coming in from different positions simultaneously. It employs multiple attention layers in parallel, with each head using different linear transformations and thereby learning different relationships, to compute the net attention:

MH(Q, K, V) = \mathrm{Concat}(head_1, \ldots, head_h) W^{O}   (6)

where MH is the multi-head attention, h is the number of attention layers (also called “heads”), head_i is the self-attention computed over the i-th attention layer and W^{O} is the parameter matrix of dimension h d_v \times d_{model}. In this case, queries come from the previous decoder layer, and the key-value pairs come from the encoder output.

For training the contextual models, we investigate the usage of all the possible combinations from the following configurations for modeling context on both sides:

• Source side configuration:
  – Previous sentence, previous two sentences, previous three sentences, previous and next sentence, previous two and next sentence.
• Target side configuration:
  – Previous sentence, previous two sentences.

For our experiments using the Transformer model, we concatenate the contextual information in our training and validation sets using a BREAK token, inspired by (Tiedemann and Scherrer, 2017). Since the Transformer has positional encodings, it encodes position information inherently, and using just a single BREAK token worked better than appending a feature for each token specifying the sentence it belongs to.
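The paper does not include the preprocessing script itself, so the following is only a rough illustrative sketch (in Python) of how such BREAK-delimited examples could be assembled from a sentence-aligned corpus kept in document order; the function names, the literal separator string and the default window sizes are assumptions made here for illustration, not the authors' code.

    # Sketch: build context-augmented source/target pairs joined by a separator token.
    # `src_sents` and `tgt_sents` are parallel lists of sentences in document order.
    BREAK = "_BREAK_"  # placeholder separator; the actual token used in the paper may differ

    def make_example(src_sents, tgt_sents, i, n_prev_src=1, n_next_src=1, n_prev_tgt=1):
        """Return one (source, target) training pair centred on sentence i."""
        lo = max(0, i - n_prev_src)
        hi = min(len(src_sents), i + n_next_src + 1)
        source = f" {BREAK} ".join(src_sents[lo:hi])       # previous + current + next source sentences
        t_lo = max(0, i - n_prev_tgt)
        target = f" {BREAK} ".join(tgt_sents[t_lo:i + 1])  # previous + current target sentences
        return source, target

    def make_corpus(src_sents, tgt_sents, **window_sizes):
        # One context-augmented example per sentence position in the document.
        return [make_example(src_sents, tgt_sents, i, **window_sizes)
                for i in range(len(src_sents))]

At test time, the sentence of interest would then be recovered by splitting the decoder output on the separator and keeping the final segment, which corresponds to the extraction step described in Section 4.4.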
The models\nare referred to by the following label subsequently:\nPrevm+Curr +Nextn→Prevp+Curr\nwhere mis the number of previous sentences used\nas source-side context, nis the number of next sen-\ntences used as source-side context, and pis the\nnumber of previous sentences used as target-side\ncontext.Curr refers to the current sentence on\nboth sides.\nFor comparison with RNN based techniques, we\ntrained baseline as well as contextual models using\na BiLSTM architecture. We employed the previ-\nous sentence as source-side context for the contex-\ntual models, integrated using the methods of con-\ncatenation and multi-encoder RNN proposed by\nTiedemann and Scherrer (2017) and Jean et al.\n(2017) respectively. These are denoted by the la-\nbelsconcat andMulti−Source . For the concate-\nnation, theBREAK token was used, similar to\nthe Transformer experiments. We also compared\nthe performance using target-side context (Tiede-\nmann and Scherrer, 2017; Bawden et al., 2017).\nThe contextual models using only source-context\nare labeled “2 to 1”, while those using the previ-\nous target sentence as context are labeled “2 to 2”.\n4.2 Dataset\nFor our experiments, we employ the IWSLT 2017\n(Cettolo et al., 2012) dataset, for the language di-\nrection English→Italian (en→it). The dataset\ncontains parallel transcripts of around 1000 TED\ntalks, spanning various genres like Technology,\nEntertainment, Business, Design and Global is-\nsues.2We use the “train” set for training, the\n“tst2010” set for validation, and the “tst2017” set\nfor testing. The statistics for the training, valida-\ntion and test splits are as given in Table 1. For\ntraining the models, the sentences are first tok-\nenized, following by segmentation of the tokens\ninto subword units (Sennrich et al., 2015) us-\ning Byte Pair Encoding (BPE). The number of\nBPE operations is set to 32,000 and the frequency\nthreshold for the vocabulary filter is set to 35.\n2This dataset is publicly available at https://wit3.fbk.eu/15\nPhase Training Validation Test\n#Sentences 221,688 1,501 1,147\n#Tokens-en 4,073,526 27,191 21,507\n#Tokens-it 3,799,385 25,131 20,238\nTable 1: Statistics for the IWSLT dataset\n4.3 Model Settings\nWe employ OpenNMT-tf (Klein et al., 2017) for\nall our experiments.3For training the Transformer\nmodels, we use the Lazy Adam optimizer, with a\nlearning rate of 2.0 , model dimension of 512, la-\nbel smoothing of 0.1, beam width of 4, batch size\nof 3,072 tokens, bucket width of 1 and stopping\ncriteria at 250,000 steps or plateau in BLEU, in\ncase of the larger context models, since we ob-\nserved some instability in the convergence behav-\nior of the Transformer, especially for the contex-\ntual models. The maximum source length is set to\nbe 70 for the baseline model, increasing linearly\nwith more context. The maximum target length is\nset to be 10% more than the source length.4For\ntraining the RNN models, we employ the stochas-\ntic gradient descent optimizer, with a learning rate\nof 1.0, decay rate 0.7 with an exponential decay,\nbeam width of 5, batch size 64, bucket width 1\nand stopping criteria 250,000 steps or plateau in\nBLEU, whichever occurs earlier.\n4.4 Evaluation\nThe evaluation of discourse phenomena in MT is\na challenging task (Hovy et al., 2002; Carpuat\nand Simard, 2012), requiring specialized test sets\nto quantitatively measure the performance of the\nmodels for specific linguistic phenomena. 
One\nsuch test set was created by (Bawden et al., 2017)\nto measure performance on coreference, cohesion\nand coherence respectively. However, the test set\nwas created with the assumption that the disam-\nbiguating context always lies in the previous sen-\ntence, which is not necessarily the case. Tradi-\ntional automatic evaluation metrics do not cap-\nture discourse phenomena completely (Scarton et\nal., 2015), and using information about the dis-\ncourse structure of a text improves the quality of\nMT evaluation (Guzm ´an et al., 2014). Hence,\nalternate methods for evaluation have been pro-\n3The code is publicly available at\nhttps://github.com/OpenNMT/OpenNMT-tf\n4This is done to ensure no loss in target-side information, a\nknown sensitivity of the Transformer architecture.Configuration BLEU TER\n(i) BiLSTM, no context 28.2 52.9\n(ii) BiLSTM, Concat, 2 to 1 26.3 53.7\n(iii) BiLSTM, Multi-Source, 2 to 1 28.9 52.6\n(iv) BiLSTM, Concat, 2 to 2 25.4 53.4\n(v) BiLSTM, Multi-Source, 2 to 2 28.9 52.5\nTable 2: Performance using RNN based approaches\nModel Configuration BLEU TER\n(i)Curr→Curr 29.2 52.8\n(ii)Prev 1+Curr→Curr 29.4 52.5\n(iii)Prev 2+Curr→Curr 29.8 51.9\n(iv)Prev 3+Curr→Curr 29.2 52.8\n(v)Curr +Next 1→Curr 29.7 51.9\n(vi)Prev 1+Curr +Next 1→Curr 30.6 51.1\n(vii)Prev 2+Curr +Next 1→Curr 29.8 51.4\nTable 3: Results of our models using only source-side con-\ntext, on en→it, IWSLT 2017\nposed (Mitkov et al., 2000; Fomicheva and Bel,\n2016) However, these methods do not look at the\ndocument as a whole, but mainly model intra-\nsentential discourse. Developing an evaluation\nmetric that considers document-level discourse re-\nmains an open problem. Hence, we perform a pre-\nliminary qualitative analysis in addition to the au-\ntomatic evaluation of our outputs.\nFor automatic evaluation, we measure the per-\nformance of our models using two standard\nmetrics: BLEU (Papineni et al., 2002) and\nTER (Snover et al., 2006). For comparison with\nthe test set, we extract the current sentence sepa-\nrated by the BREAK tokens from the output gen-\nerated by the contextual models. We also measure\nthe percentage of sentences for which the contex-\ntual models improve over the baseline model. This\nis done by computing the sentence-level TER for\neach generated output sentence, and comparing it\nwith the corresponding one in the test set.\n5 Results and Discussion\n5.1 Performance on automatic evaluation\nmetrics\nTables 3 and 4 show the results obtained by the\ndifferent configurations of our models using the\nTransformer architecture. 
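As an aside, the sentence-level comparison behind Table 5 (Section 4.4) amounts to counting how often a contextual system's sentence-level TER is at most that of the baseline. A minimal sketch of that computation is given below; it is not the authors' evaluation script, and sentence_ter is a hypothetical stand-in for whichever sentence-level TER implementation is used.

    # Sketch: share of test sentences where the contextual output scores a TER
    # less than or equal to the baseline output (cf. Table 5).
    def win_rate(context_hyps, baseline_hyps, references, sentence_ter):
        assert len(context_hyps) == len(baseline_hyps) == len(references)
        wins = sum(
            1
            for ctx, base, ref in zip(context_hyps, baseline_hyps, references)
            if sentence_ter(ctx, ref) <= sentence_ter(base, ref)
        )
        return 100.0 * wins / len(references)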
For comparison with\nprevious approaches, we also train four contextual\nconfigurations using RNN-based models, and re-\nport the results in Table 2.\nThe RNN results confirm that:\n•Adding contextual information is useful for\nRNN models, provided that it is incorporated\nusing a multi-encoder architecture ( ≈28.916\nModel Configuration BLEU TER\n(i)Prev 1+Curr→Prev 1+Curr 29.5 52.1\n(ii)Prev 2+Curr→Prev 1+Curr 29.8 51.9\n(iii)Prev 2+Curr→Prev 2+Curr 29.7 52.1\n(iv)Prev 3+Curr→Prev 1+Curr 29.2 52.2\n(v)Prev 3+Curr→Prev 2+Curr 28.9 52.9\n(vi)Prev 1+Curr +Next 1→Prev 1+Curr 31.5 49.7\n(vii)Prev 2+Curr +Next 1→Prev 1+Curr 31.1 50.5\n(viii)Prev 2+Curr +Next 1→Prev 2+Curr 30.2 51.2\nTable 4: Results of our models using source as well as target\nside context, on en →it, IWSLT 2017\nModel Configuration % sentences\nPrev 1+Curr→Curr 62.8\nCurr +Next 1→Curr 61.3\nPrev 1+Curr +Next 1→Curr 67.2\nTable 5: Percentage of sentences for which TER score is\nless than or equal to the baseline model, depending upon the\nsource-context used\nBLEU score with multi-source, ≈0.8 more\nthan the baseline BLEU score of 28.18).\n•RNNs are sensitive to the length of the sen-\ntence, both on the source and target side (Ta-\nble 2, (ii) and (iv)). This can be attributed to a\nvanishing signal between very long-range de-\npendencies, despite the gating techniques em-\nployed.\n•The RNN models need more sophisticated\ntechniques than concatenation, like multi-\nsource training, to leverage the information\nfrom the previous sentence (Table 2, (iii),\n(v)). This can be attributed to the drop in per-\nformance on very long sequences (Cho et al.,\n2014; Koehn and Knowles, 2017)5, owing to\nconcatenation.\nFor the Transformer architecture, the contex-\ntual models achieve an increase of 1-2% in BLEU\nscore over a baseline model trained without any\ninter-sentential context (Tables 3 and 4).\nThe results suggest that:\n•Looking further ahead at the next sentence\ncan help in disambiguation, evident from the\nimproved performance of the configurations\ninvolving both previous as well as next sen-\ntences on the source side than those looking\nonly at previous context (Table 3, (v) - (vii)).\n•Target-side context also helps to improve per-\nformance (Table 4, (i)-(v) vs. Table 3. (ii)-\n(iv)). as also suggested by (Bawden et al.,\n5On manual inspection, we observed frequent short, incom-\nplete predictions in this case.2017). However, a larger context window on\nthe source side and a window with one pre-\nvious sentence on the target side generally\nworks better. Our intuition is that going be-\nyond one previous sentence on the target side\nincreases the risk of error propagation (Table\n4, (viii)).\n•The Transformer performs significantly better\nthan RNN’s for very long inputs (Table 2, (iv)\nvs. Table 4, (i)). This can be attributed to\nthe multi-head self-attention, which captures\nlong-range dependencies better.\n•Contextual information does not necessarily\ncome from the previous one sentence. Incor-\nporating more context, especially on source-\nside, helps on TED data (Table 4, (vi), (vii)),\nand can be effectively handled with Trans-\nformer.\n•The self-attention mechanism of the Trans-\nformer architecture enables a simple strategy\nlike concatenation of a context window to\nwork better than multi-encoder RNN based\napproaches.\nAdditionally, the training time for the Trans-\nformer models was significantly shorter than the\nRNN based ones (≈30 hours and≈100 hours re-\nspectively). 
This can be attributed to the fact that\nthe positional encodings capture the sequentiality\nin the absence of recurrence, and the multi-head\nattention makes it easily parallelizable. In addi-\ntion to the corpus level scores, we also compute\nsentence level TER scores, in order to estimate the\npercentage of sentences which are better translated\nusing cross sentential source-side context. These\nare given in Table 5.\n5.2 Qualitative analysis\nIn addition to the performance evaluation using the\nautomatic evaluation metrics, we also analyzed a\nrandom sample of outputs generated by our mod-\nels, in order to have a better insight as to which lin-\nguistic phenomena are handled better by our con-\ntextual NMT models. Tables 6 and 7 compare the\noutputs of our best-performing contextual models\n(Table 4, (vi)) with the baseline model. The con-\ntextual models in general make better morphosyn-\ntactic choices generating more coherent transla-\ntions than the baseline model. For instance, in the\noutput of the contextual model (Table 6, (iii)), the17\nSource I went there with my friend . She was amazed to see that it had multiple floors. Each one had\na number of shops.\n(i) Baseline\nTransformerArrivai li con il mio amico . Rimaneva meravigliato di vedere che aveva una cosa piu incredibile.\nOgnuna aveva tanti negozi.\n(ii) Contextual\nTransformer\n(Prev)Arrivai la con il mio amico . Era sorpresa vedere che aveva diversi piani. Ognuno aveva un\ncerto numero di negozi.\n(iii) Contextual\nTransformer\n(Prev + Next)Sono andato con la mia amica . Fu sorpresa nel vedere che aveva piu piani. Ognuno aveva tanti\nnegozi.\nReference Sono andato la’ con la mia amica. E’ rimasta meraviglia nel vedere che aveva piu’ piani.\nOgnuno aveva tanti negozi.\nTable 6: Qualitative analysis - Improvement for cataphora, anaphora and gender agreement\nSource OK, I need you to take out your phones. Now that you have your phone out, I’d like you to\nunlock your phone.\n(i) Baseline\nTransformerOk, devo tirare fuori i vostri cellulari. Ora che avete il vostro telefono, vorrei che bloccaste il\nvostro telefono.\n(ii) Contextual\nTransformer\n(Prev)OK, dovete tirare i vostri cellulari . Ora che avete il vostro telefono, vorrei che faceste sbloccare\nil vostro telefono.\n(iii) Contextual\nTransformer\n(Prev + Next)Ok, ho bisogno che tiriate fuori i vostri telefoni . Ora che avete il vostro telefono, vorrei che\nsbloccaste il vostro telefono.\nReference Ok, ho bisogno che tiriate fuori i vostri telefoni. Ora che avete il vostro telefono davanti vorrei\nche lo sbloccaste.\nTable 7: Qualitative analysis - Improvement for lexical cohesion and verbal inflections\nphrase sono andato employs the passato prossimo\n(“near past”) verb form andato , which is more ap-\npropriate than the passato remoto (“remote past”)\nform arrivai , since the latter refers to events oc-\ncurred far in the past, while the former refers to\nmore recent ones. Additionally, the cataphor my\nfriend is successfully disambiguated to refer to the\npostcedent she, apparent from the correctly pre-\ndicted gender of the translated phrase la mia amica\n(feminine) as opposed to il mio amico (masculine).\nSimilarly, the anaphora Each one is resolved ( og-\nnuna as opposed to ognuno ). In the second ex-\nample from Table 7, improved lexical choice - che\ntiriate (second person plural subjunctive), bisogno\n(“I need”) as opposed to devo (“I must”) and lex-\nical cohesion cellulari (“mobile phones”) vs. 
tele-\nfoni(“phones”) can be observed.\nWhile our models are able to incorporate con-\ntextual information from the surrounding text, they\ncannot leverage the disambiguating context which\nlies very far away from the current sentence being\ntranslated. In such cases, concatenating the sen-\ntences would be non-optimal, since there is a high\npossibility of irrelevant information overpowering\ndisambiguating context. This is also evident from\nour experiments using n >2 previous sentences as\nadditional context using concatenation (Table 3,\n(iv)).6 Conclusion\nNeural MT methods, being typically trained at sen-\ntence level, fail to completely capture implicit dis-\ncourse relations established at the inter-sentential\nlevel in the text. In this paper, we demonstrated\nthat looking behind as well as peeking ahead in\nthe source text during translation leads to better\nperformance than translating sentences in isola-\ntion. Additionally, jointly decoding the previous\nas well as current text on the target-side helps to\nincorporate target-side context, which also shows\nimprovement in translation quality to a certain ex-\ntent, albeit being more prone to error propagation\nwith increase in the size of the context window.\nMoreover we showed that using the Transformer\narchitecture, a simple strategy like concatenation\nof the context yields better performance on spo-\nken texts than non-contextual models, whilst being\ntrained significantly faster than recurrent architec-\ntures. Contextual handling using self-attention is\nhence a promising direction to explore in the fu-\nture, possibly with multi-source techniques in con-\njugation with the Transformer architecture. In the\nfuture, we would like to perform a fine-grained\nanalysis on the improvement observed for specific\nlinguistic phenomena using our extended context\nmodels.18\nReferences\nBabych, Bogdan and Anthony Hartley. 2003. Im-\nproving machine translation quality with automatic\nnamed entity recognition. In Proceedings of the\n7th International EAMT workshop on MT and other\nLanguage Technology Tools, Improving MT through\nother Language Technology Tools: Resources and\nTools for Building MT , pages 1–8. Association for\nComputational Linguistics.\nBahdanau, Dzmitry, Kyunghyun Cho, and Yoshua Ben-\ngio. 2014. Neural machine translation by jointly\nlearning to align and translate. arXiv preprint\narXiv:1409.0473 .\nBawden, Rachel, Rico Sennrich, Alexandra Birch,\nand Barry Haddow. 2017. Evaluating discourse\nphenomena in neural machine translation. arXiv\npreprint arXiv:1711.00513 .\nBawden, Rachel. 2017. Machine translation of speech-\nlike texts: Strategies for the inclusion of context. In\n19es REncontres jeunes Chercheurs en Informatique\npour le TAL (RECITAL 2017) .\nBentivogli, Luisa, Arianna Bisazza, Mauro Cettolo, and\nMarcello Federico. 2016. Neural versus phrase-\nbased machine translation quality: a case study.\nInProceedings of the 2016 Conference on Empiri-\ncal Methods in Natural Language Processing , pages\n257–267.\nCarpuat, Marine and Michel Simard. 2012. The trou-\nble with smt consistency. In Proceedings of the Sev-\nenth Workshop on Statistical Machine Translation ,\npages 442–449. Association for Computational Lin-\nguistics.\nCastilho, Sheila, Joss Moorkens, Federico Gaspari,\nRico Sennrich, Vilelmini Sosoni, Panayota Geor-\ngakopoulou, Pintu Lohar, Andy Way, Antonio Va-\nlerio Miceli Barone, and Maria Gialama. 2017. 
A\ncomparative quality evaluation of pbsmt and nmt us-\ning professional translators.\nCettolo, Mauro, Girardi Christian, and Federico Mar-\ncello. 2012. Wit3: Web inventory of transcribed and\ntranslated talks. In Conference of European Associ-\nation for Machine Translation , pages 261–268.\nCettolo, Mauro, Federico Marcello, Bentivogli Luisa,\nNiehues Jan, St ¨uker Sebastian, Sudoh Katsuitho,\nYoshino Koichiro, and Federmann Christian. 2017.\nOverview of the iwslt 2017 evaluation campaign. In\nInternational Workshop on Spoken Language Trans-\nlation , pages 2–14.\nCho, Kyunghyun, Bart Van Merri ¨enboer, Dzmitry Bah-\ndanau, and Yoshua Bengio. 2014. On the properties\nof neural machine translation: Encoder-decoder ap-\nproaches. arXiv preprint arXiv:1409.1259 .\nChung, Junyoung, Caglar Gulcehre, Kyunghyun Cho,\nand Yoshua Bengio. 2015. Gated feedback recur-\nrent neural networks. In International Conference\non Machine Learning , pages 2067–2075.Costa-Juss `a, Marta R, Parth Gupta, Paolo Rosso, and\nRafael E Banchs. 2014. English-to-hindi system de-\nscription for wmt 2014: deep source-context features\nfor moses. In Proceedings of the Ninth Workshop on\nStatistical Machine Translation , pages 79–83.\nDaems, Joke, Sonia Vandepitte, Robert Hartsuiker, and\nLieve Macken. 2015. The impact of machine trans-\nlation error types on post-editing effort indicators. In\n4th Workshop on Post-Editing Technology and Prac-\ntice (WPTP4) , pages 31–45. Association for Ma-\nchine Translation in the Americas.\nFomicheva, Marina and N ´uria Bel. 2016. Using con-\ntextual information for machine translation evalua-\ntion.\nGim´enez, Jes ´us and Llu ´ıs M `arquez. 2007. Context-\naware discriminative phrase selection for statistical\nmachine translation. In Proceedings of the Second\nWorkshop on Statistical Machine Translation , pages\n159–166. Association for Computational Linguis-\ntics.\nGimpel, Kevin and Noah A Smith. 2008. Rich source-\nside context for statistical machine translation. In\nProceedings of the Third Workshop on Statistical\nMachine Translation , pages 9–17. Association for\nComputational Linguistics.\nGuzm ´an, Francisco, Shafiq Joty, Llu ´ıs M `arquez, and\nPreslav Nakov. 2014. Using discourse structure im-\nproves machine translation evaluation. In Proceed-\nings of the 52nd Annual Meeting of the Association\nfor Computational Linguistics (Volume 1: Long Pa-\npers) , volume 1, pages 687–698.\nHardmeier, Christian and Marcello Federico. 2010.\nModelling pronominal anaphora in statistical ma-\nchine translation. In IWSLT (International Workshop\non Spoken Language Translation); Paris, France;\nDecember 2nd and 3rd, 2010. , pages 283–289.\nHardmeier, Christian, Joakim Nivre, and J ¨org Tiede-\nmann. 2012. Document-wide decoding for phrase-\nbased statistical machine translation. In Proceedings\nof the 2012 Joint Conference on Empirical Methods\nin Natural Language Processing and Computational\nNatural Language Learning , pages 1179–1190. As-\nsociation for Computational Linguistics.\nHardmeier, Christian, Sara Stymne, J ¨org Tiedemann,\nand Joakim Nivre. 2013. Docent: A document-level\ndecoder for phrase-based statistical machine transla-\ntion. In ACL 2013 (51st Annual Meeting of the Asso-\nciation for Computational Linguistics); 4-9 August\n2013; Sofia, Bulgaria , pages 193–198. Association\nfor Computational Linguistics.\nHardmeier, Christian. 2012. Discourse in statistical\nmachine translation. a survey and a case study. Dis-\ncours. 
Revue de linguistique, psycholinguistique et\ninformatique. A journal of linguistics, psycholinguis-\ntics and computational linguistics , (11).19\nHovy, Eduard, Margaret King, and Andrei Popescu-\nBelis. 2002. Principles of context-based ma-\nchine translation evaluation. Machine Translation ,\n17(1):43–75.\nJean, Sebastien, Stanislas Lauly, Orhan Firat, and\nKyunghyun Cho. 2017. Does neural machine trans-\nlation benefit from larger context? arXiv preprint\narXiv:1704.05135 .\nKalchbrenner, Nal and Phil Blunsom. 2013. Recurrent\ncontinuous translation models. In Proceedings of the\n2013 Conference on Empirical Methods in Natural\nLanguage Processing , pages 1700–1709.\nKlein, Guillaume, Yoon Kim, Yuntian Deng, Jean\nSenellart, and Alexander M Rush. 2017. Opennmt:\nOpen-source toolkit for neural machine translation.\narXiv preprint arXiv:1701.02810 .\nKoehn, Philipp and Rebecca Knowles. 2017. Six chal-\nlenges for neural machine translation. arXiv preprint\narXiv:1706.03872 .\nLuong, Minh-Thang, Quoc V Le, Ilya Sutskever, Oriol\nVinyals, and Lukasz Kaiser. 2015. Multi-task\nsequence to sequence learning. arXiv preprint\narXiv:1511.06114 .\nMeyer, Thomas, Andrei Popescu-Belis, Najeh Ha-\njlaoui, and Andrea Gesmundo. 2012. Machine\ntranslation of labeled discourse connectives. In Pro-\nceedings of the Tenth Biennial Conference of the As-\nsociation for Machine Translation in the Americas\n(AMTA) , number EPFL-CONF-192524.\nMitkov, Ruslan, Richard Evans, Constantin Orasan,\nCatalina Barbu, Lisa Jones, and Violeta Sotirova.\n2000. Coreference and anaphora: developing\nannotating tools, annotated resources and annota-\ntion strategies. In Proceedings of the Discourse,\nAnaphora and Reference Resolution Conference\n(DAARC2000) , pages 49–58. Citeseer.\nNiehues, Jan, Teresa Herrmann, Stephan V ogel, and\nAlex Waibel. 2011. Wider context by using bilin-\ngual language models in machine translation. In\nProceedings of the Sixth Workshop on Statistical Ma-\nchine Translation , pages 198–206. Association for\nComputational Linguistics.\nPapineni, Kishore, Salim Roukos, Todd Ward, and Wei-\nJing Zhu. 2002. Bleu: a method for automatic eval-\nuation of machine translation. In Proceedings of\nthe 40th annual meeting on association for compu-\ntational linguistics , pages 311–318. Association for\nComputational Linguistics.\nPierrehumbert, Janet and Julia Bell Hirschberg. 1990.\nThe meaning of intonational contours in the inter-\npretation of discourse. Intentions in communication ,\npages 271–311.\nS´anchez-Mart ´ınez, Felipe, Juan Antonio P ´erez-Ortiz,\nand Mikel L Forcada. 2008. Using target-languageinformation to train part-of-speech taggers for ma-\nchine translation. Machine Translation , 22(1-2):29–\n66.\nScarton, Carolina, Marcos Zampieri, Mihaela Vela,\nJosef van Genabith, and Lucia Specia. 2015.\nSearching for context: a study on document-level la-\nbels for translation quality estimation. In Proceed-\nings of the 18th Annual Conference of the European\nAssociation for Machine Translation .\nSennrich, Rico, Barry Haddow, and Alexandra Birch.\n2015. Neural machine translation of rare words with\nsubword units. arXiv preprint arXiv:1508.07909 .\nSnover, Matthew, Bonnie Dorr, Richard Schwartz, Lin-\nnea Micciulla, and John Makhoul. 2006. A study of\ntranslation edit rate with targeted human annotation.\nInProceedings of association for machine transla-\ntion in the Americas , volume 200.\nSundermeyer, Martin, Ralf Schl ¨uter, and Hermann Ney.\n2012. 
Lstm neural networks for language modeling.\nInThirteenth Annual Conference of the International\nSpeech Communication Association .\nSutskever, Ilya, Oriol Vinyals, and Quoc V Le. 2014.\nSequence to sequence learning with neural networks.\nInAdvances in neural information processing sys-\ntems, pages 3104–3112.\nTamchyna, Ale ˇs, Alexander Fraser, Ond ˇrej Bojar, and\nMarcin Junczys-Dowmunt. 2016. Target-side con-\ntext for discriminative models in statistical machine\ntranslation. arXiv preprint arXiv:1607.01149 .\nTiedemann, J ¨org and Yves Scherrer. 2017. Neural ma-\nchine translation with extended context. In Proceed-\nings of the Third Workshop on Discourse in Machine\nTranslation , pages 82–92.\nToral, Antonio and V ´ıctor M S ´anchez-Cartagena.\n2017. A multifaceted evaluation of neural versus\nphrase-based machine translation for 9 language di-\nrections. arXiv preprint arXiv:1701.02901 .\nVaswani, Ashish, Noam Shazeer, Niki Parmar, Jakob\nUszkoreit, Llion Jones, Aidan N Gomez, Łukasz\nKaiser, and Illia Polosukhin. 2017. Attention is all\nyou need. In Advances in Neural Information Pro-\ncessing Systems , pages 6000–6010.\nVintar, ˇSpela, Ljup ˇco Todorovski, Daniel Sonntag, and\nPaul Buitelaar. 2003. Evaluating context features\nfor medical relation mining. Data Mining and Text\nMining for Bioinformatics , page 64.\nWang, Longyue, Zhaopeng Tu, Andy Way, and Qun\nLiu. 2017. Exploiting cross-sentence context\nfor neural machine translation. arXiv preprint\narXiv:1704.04347 .\nZoph, Barret and Kevin Knight. 2016. Multi-source\nneural translation. arXiv preprint arXiv:1601.00710 .20", "main_paper_content": null }
{ "decision": "Unknown", "reviews": [] }
0
0
[]
[]
{ "id": "1kfdysCu_7v", "year": null, "venue": "EAMT 2022", "pdf_link": "https://aclanthology.org/2022.eamt-1.67.pdf", "forum_link": "https://openreview.net/forum?id=1kfdysCu_7v", "arxiv_id": null, "doi": null }
{ "title": "LITHME: Language in the Human-Machine Era", "authors": [ "Maarit Koponen", "Kais Allkivi-Metsoja", "Antonio Pareja-Lora", "Dave Sayers", "Márta Seresi" ], "abstract": null, "keywords": [], "raw_extracted_content": "LITHME: Language in the Human–Machine Era\nMaarit Koponen\nUniversity of Eastern Finland\[email protected] Allkivi-Metsoja\nTallinn University\[email protected] Pareja-Lora\nUniversidad de Alcal ´a\[email protected]\nDave Sayers\nUniversity of Jyv ¨askyl ¨a\[email protected]´arta Seresi\nE¨otv¨os Lor ´and University\[email protected]\nAbstract\nThe LITHME COST Action brings to-\ngether researchers from various fields of\nstudy focusing on linguistics and tech-\nnology. We present the overall goals\nof LITHME and the network’s working\ngroups focusing on diverse questions re-\nlated to language and technology. As an\nexample of the work addressing machine\ntranslation within LITHME, we discuss\nthe activities of the working group on lan-\nguage work and language professionals.\n1 Introduction\nLanguage in the Human–Machine Era (LITHME)\nis a research and innovation network funded by\nCOST (European Cooperation in Science and\nTechnology). It is coordinated by the University of\nJyv¨askyl ¨a, Finland, and has more than 300 mem-\nbers from universities, research institutions and\ncompanies in 52 countries (all 27 EU states and\n25 other countries worldwide).\nThe network brings together researchers, de-\nvelopers and other specialists with diverse back-\ngrounds with the goal of sharing insights about\nhow new and emerging technologies will im-\npact interaction and language use. By “human–\nmachine era”, we envision a time when humans\nwill be interacting and conversing with artificial in-\ntelligence (AI) technology that is not confined only\nto mobile devices but integrated with our senses\nthrough virtual and augmented reality. Machine\ntranslation (MT) is one of the key technologies en-\nabling communication across languages.\nc\r2022 The authors. This article is licensed under a Creative\nCommons 3.0 licence, no derivative works, attribution, CC-\nBY-ND.LITHME focuses on two aspects which are\nshaping human communication (Sayers et al.,\n2021). On the one hand, we will increasingly be\nspeaking through technology, which can translate\nbetween languages in real time as well as alter\nvoices and facial movements. On the other hand,\nwe will also be speaking totechnology, which\nwill understand both the content and the context\nof natural language. This will lead to increas-\ningly substantive and meaningful real-time con-\nversations with devices like smart assistants. En-\nhanced virtual reality featuring lifelike characters\nwill enable learning and even socialising among\nintelligent and responsive artificial partners.\nThroughout its four-year duration (2020–2024),\nthe LITHME network of researchers aims to ex-\nplore the impact that various technologies, includ-\ning MT, have on language and communication. 
We investigate the opportunities, the new ways to talk, to translate, to remember, and to learn, but also the uncertainties and potential inequalities or other adverse effects.

Deliverables consist of open-access forecast reports, the first of which was published in 2021 (Sayers et al., 2021), multimedia presentations, guidelines on ethics, safety, equality and accessibility for emerging language technologies, and interim reports of activities on the LITHME website.1 LITHME organises an annual conference and a training school focusing on language and technology, workshops, short-term scientific missions2 and invited talks. In addition to collaboration between researchers, LITHME aims to facilitate the involvement of stakeholders outside of academia, such as corporate and non-profit technology developers.

1 https://lithme.eu/
2 https://lithme.eu/short-term-scientific-missions/

2 LITHME Working Groups

LITHME features eight working groups3 (WGs) which focus on different areas of research related to language and technology.

• WG1 Computational linguistics
• WG2 Language and law
• WG3 Language rights
• WG4 Language diversity, vitality and endangerment
• WG5 Language learning and teaching
• WG6 Ideologies, beliefs, attitudes
• WG7 Language work, language professionals
• WG8 Language variation

At the centre of LITHME, WG1 aims to produce forecasts of various relevant technologies, and the other WGs focus on how these technologies are influencing specific areas of language use. The development of MT is of course one of the issues closely followed in WG1, and MT can be seen to play a role in all of these areas covered by the working groups. The focus on MT, specifically, is perhaps clearest in WG7, as the work of language professionals such as translators is one area where the impacts of MT have been most pronounced. We next discuss the aims of this working group in more detail.

3 More detailed descriptions of the WGs and their activities: https://lithme.eu/working-groups

3 Language professionals in the human–machine era

The LITHME working group 7 brings together researchers and practitioners with expertise in diverse areas of interest, from translation and interpreting to clinical linguistics, from terminology to copywriting and language technology, to examine how the field is being shaped by MT as well as other technologies. As professionals involved in working with language have varied titles and profiles, one of the key tasks for WG7 is to map and conceptualise what “language work” is, who “language professionals” are, and how technology is changing their work.

For various types of language professionals, technology is already a significant part of their everyday work. A typical case might be that of translators interacting with MT, which is an increasingly common process and has had profound effects on the field. Professionals also communicate and interact through technology, for example, using remote interpreting solutions or collaborative platforms. In the future, the use of speech and touch interfaces, as well as augmented and virtual reality, also seems poised to take a larger role in the professionals' interaction with their tools. While technology can be a useful tool, for example, for supporting wider accessibility, it may also bring potential adverse effects to working conditions or create new barriers.
WG7 aims to form a deeper understanding of how MT and other technologies are used in language work, how they affect the future roles of professionals and machines in language work, and how the training of future language professionals can adapt to these changes.
Activities of WG7 include regular meetings and invited talks from various areas of language industry, conceptual mapping of language professionals, a meta-survey of the use of MT by translators, and a survey focusing on the use of MT by language professionals other than translators or interpreters. Based on this work, the working group aims to produce reports and forecasts on the implications of technology for theory, practice, ethics and training in the area of language work.
Acknowledgements
The COST Action “Language in the Human–Machine Era” LITHME (CA19102) is supported by COST (European Cooperation in Science and Technology).
References
Sayers, Dave, Rui Sousa-Silva, Sviatlana Höhn, et al. 2021. The Dawn of the Human–Machine Era: A Forecast of New and Emerging Language Technologies. Report for EU COST Action CA19102 ‘Language In The Human–Machine Era’. https://doi.org/10.17011/jyx/reports/20210518/1", "main_paper_content": null }
{ "decision": "Unknown", "reviews": [] }
0
0
[]
[]
{ "id": "cmX-sGHSR-o", "year": null, "venue": "EAMT 2018", "pdf_link": "https://aclanthology.org/2018.eamt-main.56.pdf", "forum_link": "https://openreview.net/forum?id=cmX-sGHSR-o", "arxiv_id": null, "doi": null }
{ "title": "Project PiPeNovel: Pilot on Post-editing Novels", "authors": [ "Antonio Toral", "Martijn Wieling", "Sheila Castilho", "Joss Moorkens", "Andy Way" ], "abstract": null, "keywords": [], "raw_extracted_content": "Project PiPeNovel: Pilot on Post-editing Novels\nAntonio Toral, Martijn Wieling\nCenter for Language and Cognition\nFaculty of Arts\nUniversity of Groningen, The Netherlands\n{a.toral.ruiz,m.b.wieling} @rug.nlSheila Castilho, Joss Moorkens, Andy Way\nADAPT Centre\nSchool of Computing\nDublin City University, Ireland\nfirstname.secondname @adaptcentre.ie\n Abstract\nGiven (i) the rise of a new paradigm to\nmachine translation based on neural net -\nworks that results in more fluent and less\nliteral output than previous models and\n(ii) the maturity of machine-assisted\ntranslation via post-editing in industry,\nproject PiPeNovel studies the feasibility\nof the post-editing workflow for literary\ntext conducting experiments with profes -\nsional literary translators.\nMachine translation (MT) has progressed enor -\nmously over the last years and it is widely used\nnowadays for gisting purposes. However, its use\nin professional translation is still largely confined\nto the post-editing of technical and legislative\ntext. The aim of PiPeNovel is to carry out a pilot\nstudy to assess the feasibility of broadening the\nuse of the post-editing workflow to literary text,\nin particular to novels. The translation direction\ncovered in the project is English-to-Catalan. Now\nPiPeNovel is about to finish and we present the\nthree main activities conducted in the project:\n(1) MT. First, we built a literary-adapted neu -\nral MT (NMT) system and evaluated it against a\nsystem pertaining to the previous dominant para -\ndigm in MT: statistical phrase-based MT (PB -\nSMT) (Toral and Way, 2018). Both systems were\ntrained on over 1,000 novels. We conducted a hu -\nman evaluation on three novels by Orwell, Rowl -\ning and Salinger; between 17% and 34% of the\ntranslations, depending on the book, produced by\nNMT (versus 8% and 20% with PBSMT) were\nperceived by native speakers of the target lan -\nguage to be of equivalent quality to translations\nproduced by a professional human translator.\n(2) Post-editing effort . Subsequently, using\nthese MT systems, we conducted a post-editing\n © 2018 The authors. This article is licensed under a Cre -\native Commons 3.0 licence, no derivative works, attribution,\nCC-BY-ND.study with six professional literary translators on\na fantasy novel (Toral et al., 2018). We analysed\ntemporal effort and found that both MT ap -\nproaches result in increases in translation produc -\ntivity: PBMT by 18%, and NMT by 36%. Post-\nediting also led to reductions in the number of\nkeystrokes (technical effort): by 9% with PBMT,\nand by 23% with NMT. Finally, regarding cogni -\ntive effort, post-editing resulted in fewer (29%\nand 42% less with PBMT and NMT respectively)\nbut longer pauses (14% and 25%).\n(3) Translators’ perceptions . Finally, we ana-\nlysed the perceptions of the translators that took\npart in the post-editing experiment (Moorkens et\nal., 2018), which were collected via question -\nnaires and a debrief session. While, as stated be -\nfore, all participants were faster when post-edit -\ning NMT, they all still stated a preference for\ntranslation from scratch, as they felt less con -\nstrained and could be more creative. 
When comparing MT systems, participants found NMT output to be more fluent and adequate.
Acknowledgements
PiPeNovel is funded by the European Association for Machine Translation through its 2015 sponsorship of activities programme. The ADAPT Centre at Dublin City University is funded under the Science Foundation Ireland Research Centres Programme (Grant 13/RC/2106).
References
Moorkens, Joss, Antonio Toral, Sheila Castilho and Andy Way. 2018. Perceptions of Literary Post-editing using Statistical and Neural Machine Translation. Translation Spaces (under review).
Toral, Antonio and Andy Way. 2018. What Level of Quality can Neural Machine Translation Attain on Literary Text? In Translation Quality Assessment. Springer (in press).
Toral, Antonio, Martijn Wieling, and Andy Way. 2018. Post-editing Effort of a Novel with Statistical and Neural Machine Translation. Frontiers in Digital Humanities (in press).", "main_paper_content": null }
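A note on the effort figures reported above: temporal effort is a productivity rate (words per unit time), technical effort is a keystroke count, and the percentages compare each post-editing condition against translation from scratch. The sketch below only illustrates how such relative figures are computed from logged sessions; the field names and numbers are hypothetical and do not come from the PiPeNovel data or tooling.

```python
# Hypothetical illustration of the effort arithmetic; not PiPeNovel code or data.
from dataclasses import dataclass

@dataclass
class Session:
    condition: str    # e.g. "scratch", "pbsmt", "nmt"
    words: int        # source words processed
    seconds: float    # time spent
    keystrokes: int   # keys pressed while translating/post-editing

def words_per_hour(s: Session) -> float:
    return s.words / (s.seconds / 3600.0)

def relative_change(mt_value: float, scratch_value: float) -> float:
    """Percentage change of a post-editing condition relative to translation from scratch."""
    return 100.0 * (mt_value - scratch_value) / scratch_value

# Made-up numbers chosen only to mirror the order of magnitude of the reported NMT results.
scratch = Session("scratch", words=500, seconds=3600, keystrokes=3200)
nmt = Session("nmt", words=500, seconds=2650, keystrokes=2450)

print(f"productivity change: {relative_change(words_per_hour(nmt), words_per_hour(scratch)):+.1f}%")
print(f"keystroke change:    {relative_change(nmt.keystrokes, scratch.keystrokes):+.1f}%")
```

With these invented sessions the script prints roughly +35.8% productivity and -23.4% keystrokes, which is the shape of comparison the project reports (a gain in speed, a reduction in typing) rather than its actual measurements.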
{ "decision": "Unknown", "reviews": [] }
0
0
[]
[]
{ "id": "d9ZazmeIbR", "year": null, "venue": "EAMT 2005", "pdf_link": "https://aclanthology.org/2005.eamt-1.2.pdf", "forum_link": "https://openreview.net/forum?id=d9ZazmeIbR", "arxiv_id": null, "doi": null }
{ "title": "The Language Translation Interface", "authors": [ "Dominique Estival" ], "abstract": null, "keywords": [], "raw_extracted_content": "EAMT 2005 Conference Proceedings 1 The Language Translation Interface \nDominique Estival \nDefence Science and Technol ogy Organisation, Australia \[email protected] \nAbstract. The Language Translation Interface (LTI) is a prototype developed for the Aus-\ntralian Defence Organisation. The aim is provide a single, simple, interface to a variety of \nMT tools and utilities for personnel who need to produce translations when they have no \neasy access to human translators. Now that the LTI has been demonstrated and trialled at \nseveral military exercises, we are gathering us er requirements to further develop it as the \nLanguage Translation Tools Suite. This paper describes the functionalities of the LTI and \nreports on our experience with users during development, leading to future improvements. \n1. Introduction \nI am very pleased to have been invited to give \nthis opening talk at EAMT 2005, although I re-gret that Harry Somers cannot be with us in Bu-dapest. Australia is a long way from Hungary and he is still enjoying his sabbatical there, but it was when I was talking to Harry a few months ago about some aspects of my work on MT at DSTO that he thought it would be interesting for the EAMT audience. I want to tell you about how we have been able to build a translation system without doing MT and about the way we dealt with the difficulties of getting access to users in our specific environment. \nBefore I tell you what I do at DSTO, I need \nto say a few words about what it is. DSTO stands for Defence Scie nce and Technology Or-\nganisation and is it the R&D organisation for the Australian Defence Organisation. Our custom-ers and end users are primarily the ADF (Aus-tralian Defence Force, the military side of De-fence) and the ADO (Australian Defence Or-ganisation, which also includes the civilian side of Defence), but also the Australian Govern-ment more generally. As an R&D organisation, DSTO may be more “small r and big D” than in earlier times, but it is committed to exploring and utilising technical innovations. In fact, our main role is to give advice on new technologies and to build prototypes to show what advantages these technologies can bring to the end users. \nI am sure everyone knows where Australia \nis, but it is always interesting to see on a map what the world looks like from our perspective. \nAustralia is geographically isolated, certainly far \naway from Europe and North America, and fur-ther from Japan than people realise. We are part of the Pacific-Asia region, with very different linguistic neighbours than our traditional allies, the UK and the US. Australia is a \"small\" coun-try in spite of its size, with a population of just over 20 million. Our resources are not huge, es-pecially in terms of personnel. The environ-ment, which is very harsh on most of the conti-nent, also means that ou r technological require-\nments – and traditions – are quite different from those of European or North American countries. This is one of the reasons why Australia has a Defence R&D organisation, because technolo-gies that may be appropriate for other countries need to be evaluated for our environment and sometimes new solutions need to be developed to meet Australian requi rements. And we can ar-\ngue that this is in fact the case when we look at MT and our linguistic environment. 
The lan-guages spoken in our part of the world are not those that have traditionally been worked on for MT and, for many of them, NLP tools or re-sources are not even available. \nIn that context, at DSTO, I am now leading a \nresearch programme in language technologies for a variety of purposes. These include spoken dialogue systems (Estival et al., 2003), multi-modal interaction in a virtual environment (Es-tival et al., 2004), document classification (Carr & Estival, 2003), semantic clustering and lan-guage translation tools, which is the one I will \nDominique Estival \n2 EAMT 2005 Conference Proceedings discuss today. That particular project started \nvery small and was not considered particularly important in the beginning. It is still quite a small project in terms of its size, but I think it is inter-esting to see what we have been able to accom-plish in that area, because that can be seen as an indication of the need for MT and of the value MT can bring to organisations which may not have been aware of their needs for it. This is where I hope my talk today will be of most in-terest to you, because I will talk about our ex-perience in bringing some awareness of MT and of the need for MT to an organisation with no previous history of using language processing tools. I will describe the prototype tool we de-veloped and how we went about to show it to potential users who do not have much time to play with software. \n2. Project scope \nInitially this project was only a small part of a \nlarger project on Speech and Language Tech-nologies\n1 which was very much focussed on the \nspeech aspects and which aimed at delivering speech interfaces in Headquarters environments. During demonstrations of the speech interface, people would ask the usual question: Can you do that in other languages? So, when I arrived at DSTO in March 2002, I was given the re-sponsibility for looking at how that might be done and for assessing whether there was any potential for MT tools in the ADO. I was very lucky that at the same time, one of the students that we take every year for year-long projects, Jennifer Biggs, started at the same time and that she was very interested in that topic. At the time, \nJenny had no background in machine transla-tion, language processing, linguistics or compu-tational linguistics and there was no one else working with us on this project. So I am fairly proud of the fact that, three years later, Jenny is still with me on a full-time contract, the LTI has been successfully demonstrated and it is being adopted by some sections of the ADO. In fact, Jenny is the person who actually built the LTI and, without her, the project would not have got off the ground. So what I want to talk about is what we did and how we did it. \n \n \n1 This project was initiated and led by Dr Ahmad Ha-\nshemi-Sakhtsari. Having worked in MT before – in industry \nat Weidner in the US in the late 80s and in re-search at ISSCO in Geneva in the 90s –, my first assessment was that there was no point in us trying to build an MT system. We would have failed and not produced anything worth-while. MT evaluation was another option, where the aim would have been to provide advice on what MT systems to purchase. That was not a very satisfactory proposition either: there was not enough funding to pu rchase systems to evalu-\nate and more importantly, not enough trained personnel to perform the evaluation. 
I can look at French of course, Jenny knows Japanese and the task manager (Dr Ahmad Hashemi-Sakhtsari) could deal with Farsi, but again, it would have been very time-consumi ng and the results would \nprobably not have even been worth reporting. So we settled for a survey of the tools available and for designing a way to make some of those tools accessible to our potential users. This re-sulted in the LTI (Language Translation Inter-face) and the LTDB (Language Translation Da-taBase). \nThe LTDB was a useful exercise for finding \nout what was available and we now use the in-formation we collected to choose appropriate systems from within the LTI. \n \nFigure 1. LTDB: Matrix of systems for language pairs \nIn the rest of this talk, I want to tell you about the technical design of the LTI and describe the functionalities of the two prototypes we devel-oped; then I will talk about our experience in scoping out user requirements and setting up a trial system. I will conclude with what we have learned so far and where we are going with the continuation of this project, which we are now calling the Language Translation Tool Suite \nThe Language Translation Interface \nEAMT 2005 Conference Proceedings 3 (LTTS). But, first, I want to discuss why it would \nbe worthwhile for the ADO to have translation tools in the first place. \n3. Why would the ADO want \nLanguage Translation Tools? \nThe first point to make is that there is a growing \nrecognition of the need for translation services in the ADO. This is a global issue which has only recently started to affect Australia but the ADO, like other Australian government agen-cies, is facing an increased demand for dealing with documents and in formation in languages \nother than English. This is especially true be-cause the shift of focus for the ADO from “De-fence of Australia” to “National Security” im-plies an increased awareness of the international environment around Australia. Other sources of demands for dealing with documents or com-munications in foreign languages include: intel-ligence gathering, coalition operations and for-eign operations. \nIntelligence gathering \nI will not discuss intelligence gathering in great \ndetail here, I imagine everyone in 2005 is aware of the intelligence failures which have been shown to precede the tragedy of 9/11 in the US and the ensuing discussions about the urgent need for better and more timely intelligence. The requests for more translators and for tools to help them have been widely publicised and Australia is in the same situation as all other countries in this respect. Of course, the Bali bombing in September 2002 and the bombing of the Australian embassy in Jakarta in October 2004 mean that there are also specific threats and concerns for Australia, with particular lin-guistic implications for us. \nCoalition operations \nTraditionally, our main allies are other English-\nspeaking countries, such as the UK, the US, Canada and New-Zealand and, apart from the regular jokes about mutual unintelligibility of the various English dialects, there is not much need for translation between those countries. However, Australia also has strong ties with other nations in the Pacific region, and these coun-tries do not all have English as their first lan-guage. It is also the case that military exercises have become increasingly multi-national and that Australia is often involved in operations \nwith a number of coalition partners whose first language is not English. 
Recent international \nexercises have included such countries as Ja-pan, South Korea, Thailand or France, to name only a few. \nThe need for translation is not greatly felt in \nthose exercises, because communications are assumed to be conducted in English. However, now that the technology allows e-mail commu-nication not only in other languages but also, crucially, in other scripts, it is no longer the case that all communications during an opera-tion will necessarily all be conducted in Eng-lish, and it can be argued that Australians who are monolingual speakers of English will find themselves at a disadvantage when their coali-tion partners can choose to communicate in se-veral other languages. \nForeign operations \nAlthough Australia has participated in both \nGulf Wars, over the past couple of decades, the ADF has been more involved in peace-keeping, humanitarian and relief operations in the Asia-Pacific region than in combat operations. For instance in the past few years, there have been operations in the Solomon Islands, in East Timor and in Aceh (Indonesia) after the tsu-nami. In this type of operations, there is a need not only to communicate with the population, but also to disseminate information, for instance by distributing leaflets or making radio broad-casts. From a technological point of view, the problem is that many, if not most, languages of the region are not covered by developments ef-forts for NLP and there are few, if any, compu-tational linguistic resources for those languages. From the point of view of MT, it is not even possible to resort to bu ilding Translation Memo-\nries because there may not be enough texts avail-able to build Translation Memories. \nDuring foreign operations, there may also be \nsituations on the ground where defence person-nel might come into possession of documents or media (CDs, diskettes, computer hard-drives, etc.) which may contain crucial information. For ex-ample, when entering a building and seizing com-puters or filing cabinets. One issue here is the identification of the language or languages prior to translation, but there is also the issue of speed of access to translation services, whether \nDominique Estival \n4 EAMT 2005 Conference Proceedings it be sending the document to a human transla-\ntor in the field or back at home, or access to tools that could be used in the field or over a network. \nSo it is clear that there are great translation \nneeds for an organisation like the ADO, and these needs have become apparent even to the more old-fashioned officers from a generation that used to consider English was all they needed. The question is: Can these needs be met by hu-man translators? \nFirst, we can make a comparison with the \nUS. The US Department of Defence has a long tradition of training linguists and language spe-cialists at the Monterey Defence Language In-stitute and, after 9/11, the FBI set up the Na-tional Virtual Translation Center to serve as a “clearinghouse for human translators” to “pro-vide translation of foreign intelligence”. Never-theless, the DoD also saw the need for develop-\ning the Phraselator (followed by the Speechla-tor), a PDA with limited speech translation ca-pabilities which was first used in Afghanistan in 2002. The “Basic Language Translation Ser-vice” (BLTS) project, which is part of the larger “Horizontal Fusion” programme, now aims at developing automated language translation ca-pabilities to meet the growing need for lan-guage translation in the battlefield (DoD, 2004). 
Looking at future research, on 18 March 2005, DARPA issued a Call for Proposals for a new research project, GALE (Global Autonomous Language Exploitation), w hose goals are phrased \nas “eliminating the need for linguists and ana-lysts” and “automatically … interpret[ing] huge volumes of speech and text in multiple lan-guages” (GALE, 2005). \nIn Australia, the ADO has also long recog-\nnised the need for personnel with linguistic skills and has its own tr aining of linguists and \ntranslators, at the ADF School of Languages. Personnel receive training for spoken and writ-ten language skills in a number of languages that have been recognised to be of interest. How-ever, these skills are mainly geared towards field operations and the training does not neces-sarily equip the linguists with specific transla-tion skills. \nAs we all know, with the advent of e-mail \nand the internet, the number of documents which are of potential interest for intelligence gather-ing has increased exponentially in the past dec-\nade. At the same time, the global growth of the internet and the development of electronic me-dia for a large number of languages have eroded the dominance of English: although English is still the language of the majority of web pages, it is no longer the first language of the majority of web users. Many web sites and electronic communication channels (email, chat rooms, etc) now use other languages. These constitute sources of information which have to be taken into ac-count by analysts. At the same time, these new media also constitute alternative channels for the dissemination of information to local popu-lations during humanitarian and relief opera-tions. \nThe problem is that it is not possible for the \nADF School of Languages to train new lin-guists and translators for all the languages that might be of interest in the future.\n2 It takes one \nto two years to train a linguist to attain a level of fluency in a language such that they can function using the spoken language. Training a translator/interpreter who can produce good trans-lations may take another two to three years, de-pending on the language. However, it is very difficult to predict which languages are going to be of interest in a three year timeframe and even more difficult to predict the extent of the potential demand for translation for those lan-guages. It would be impractical to train linguists in all the languages that might become of inter-est. Without expanding the size of the ADF, it is not possible to increase the number of re-cruits to be trained as linguists, because the ex-isting personnel are alread y needed for other tasks \nand operations. However, the population of Aus-tralia is not of a size that can support a larger ADF at this time. In summary, given the size of the Australian population, there will never be enough personnel available to be trained and the \nrange of languages of interest cannot be predicted \n \n \n2 In this respect, it is interesting to note the wide va-\nriety of languages spoken in Australia. This is not only due to the number of Aboriginal languages, \nwhich are of great interest linguistically but not so \nrelevant for us at this time, but because of the large immigration from all over the world. As a result, \nthere is in fact a sizeable pool of native speakers for \nmany languages in Australia, but they would not all be available as translat ors for the ADO and their \nlanguages may not be those that are of interest. 
\nThe Language Translation Interface \nEAMT 2005 Conference Proceedings 5 in time to perform the training required to pro-\nduce skilled translators in those languages. \nSo, given this situation, we have argued that \nautomated translation tools can alleviate that problem by providing rough but usable transla-tions which can either be used directly, for in-stance in the case of information gathering or of coalition operations, or which can be sent to a human translator for further editing if neces-sary, for instance in the case of foreign opera-tions. Fortunately, this fits in quite well with re-cent ADO requirements for “increased efficiency through the use of automation in headquarters” and “the ability to work in multilingual environ-ments”. This has been expressed as “computer aided comprehension of languages other than English”, and this is now part of the description \nof our project deliverables. \nSince the start of this project, an overriding \nissue has been the constraint that neither the ADO nor the DSTO can realistically envisage to de-velop their own machine translation systems. Therefore we are limited to using existing sys-tems, whether commercial off-the shelf (COTS) or freely available. Our focus is on developing easy access to existing translation engines and our main concern has been to make that access transparent to the users. The intention is to make available to ADO personnel existing tools which may increase the efficiency of current translation work and which would be appropri-ate in situations where there is a need for rapid translation and where no human translators are readily available. \n4. The LTI \nWe have now produced and demonstrated sev-\neral versions of the LTI. Two of them, the Translation Comparison Tool and the Web Trans-lation Tool, deserve to be described separately because they illustrate quite different function-alities and because their interfaces look very dif-ferent. We demonstrated them at several events within the last year and, after I describe the functionalities of the LTI, I will explain what those events were, who our audience was and what the outcomes were. \nFirst, as I mentioned before, the LTI is not a \ntranslation system, but an interface to transla-tion tools (Biggs and Estival, 2002; Estival and Biggs, 2003). The main idea was to provide a single, simple, interface to as many translation \nsystems as possible. We did not want to assume that our users would be trained translators, that they would know any other language besides English, or that they would be computer ex-perts. We expect our users to be military or de-fence personnel, who are computer literate in that they know how to use a computer for basic e-mail, word processing and data entry, but not necessarily more. We first defined our users to be personnel who find th emselves in positions \nwhere they have to get a translation for some form of document (for instance, participating in a coalition exercise or in a foreign operation) and in situations where they may not have ac-cess to a human translator (for instance, if there are no translators in the ADO for that language, or when there is not enough time to send the documents to a human translators). We also wanted the same tool to be useful to translators (mili-tary “linguists”) who could use it to get quick translation drafts and to build translation memo-ries. \nThe first version of the LTI was the Transla-\ntion Comparison Tool (TCT), shown in Figure 2. \n \nFigure 2. 
The LTI: Tran slation Comparison Tool \nWith this interface, the aim is to provide a set of \ntranslation results from as many translation sys-tems as are available for the required language pair. The idea here is that if the users can view results from a number of systems, even if they have no knowledge of the other language, they may be able to make some useful comparison and select the most likely translation output. This is fraught with potential problems, which I \nDominique Estival \n6 EAMT 2005 Conference Proceedings do not have to detail to you,3 but the main idea \nremains sound and it was met with great inter-est when we demonstrated it. The point here is not to dwell on the shortcomings of the individ-ual systems, but to build upon the useable parts, if any, of the different outputs. \nThe emphasis for this tool was on the ease of \nuse, at the three different stages of 1) input, 2) processing and 3) output. For ease of input, the user can choose to type text directly in the input window, or either load from a file or cut and paste from a file, or load a web page or an email message. Since most translation systems work best if the input is seg mented into discrete sen-\ntences, when a file is loaded, it is first passed to a sentence segmenter. The sentence segmenter produces a list of sentences which are then used as input to the translation systems. \nFor ease of processing, all the systems are \naccessed in the same way. That is, from the user’s point of view, by ticking the systems that are shown as available for that language pair. From the point of view of the LTI, the access to all the available systems is specified in an \"ini\" \nfile, which gives all the information necessary so the users do not have to know how to access each separate system. For instance, access to Ba-belfish over the internet or access to the Indone-sian-English Kataku system, which has to be installed on a Linux machine on a local net-work, look exactly the same to the users and the users do not have to know the difference. In the list of systems available for a language pair, we include the use of Translation Memories which may have been built for that language pair and which, from the point of view of the user, are just another translation tool. \nRegarding the production of the translation \noutput, the main issue has been the design of the output document. First, although translation is performed sentence by sentence, the user can choose to have the results presented either as continuous input and output texts or sentence by sentence. Second, the user can choose to ac-cept all the translation results at once and then edit the output file. We found that this is what our users preferred to do when there is only one \n \n \n3 This process would be worth studying and we in-\ntend to include an evaluation of its merits when we gather user requirements in the next phase of the \nproject. translation system available. Alternatively, they \ncan edit the translation results sentence by sen-\ntence within the LTI and then send the edited result to the output file. This is the mode which is probably the best when there are two or more translation systems available for a language pair. \nIn the LTI screen shown in Figure 2 above, \nthere are several translation outputs, with one of the outputs highlighted for editing. The user can then choose one of the translation results for each input sentence and editing in place seems to be more convenient. 
When the user edits the translation result through the LTI, this is re-corded in the output file. Figure 3 shows the de-fault layout for the output document which is automatically generated when the translation results have been accepted. In this example, we have three translations for the Japanese input and the output document records information about which translation systems have been used and whether the output has been post-edited. Our users have already asked that the segments that have been post-edited be indicated in a dif-ferent colour or highlighted, and this will be done for the next version. \nUsers can choose to have all the translation \nresults included in the output file, or just the re-sult they consider the best. The flexibility we wanted to offer the users reflects the range of possible situations in which they would need to produce a translation and the range of language skills they might have. \nLanguage Translation Interface session output: 5/05/2005 \n11:23:04 AM \nUser: biggsj \n \nSource: Japanese \n自爆攻撃は、警官募集にも使われているクルド民主党の事\n務所で起きた。 \n \nTarget: English; Translation engine: WTS; Post editor: BiggsJ \n \nThe suicidal attack occurred in the Kurd Democratic Party office \nwhich is also currently used as a policeman enlistment post. \n \nTarget: English; Translation engine: WTS; Post editor: None \n \nSuicidal explosion attack occurre d in the office of Kurd Democ-\nratic Party which is used al so as policeman collection. \n \n(continued on next page) \n(Continued from previous page) \nThe Language Translation Interface \nEAMT 2005 Conference Proceedings 7 Target: English; Translation engine: Linguatech; Post editor: \nNone \n \nSuicidal explosion attack occurre d in the office of the ??? Democ-\nratic Party which is used also as policeman collection. . \n \nTarget: English; Translation e ngine: AmikaiOCN; Post editor: \nNone \n \nThe suicide bomb attack broke out in the office of the Kurd De-\nmocratic Party currently used also for policeman collection. \n \nTarget: English; Translation engine: WorldLingo; Post editor: \nNone \n \nSuicide bombing attack occurred wi th the office of the Kurd De-\nmocratic party which is used even in officer collection. \n \nTarget: English; Translation e ngine: Mail2World; Post editor: \nNone \n \nA crashing itself attack occurred in an office of クルド Democ-\nratic Party used by police officer enlistment also. \nFigure 3. Example output document \nWe presented the TCT version of the LTI at a \nmulti-nation military exercise in June 2004. I will explain in more detail what was involved, but the main point is that these exercises serve as a trial for new technologies and the LTI was one of two systems presented by DSTO for Australia. The actual exercise takes place over a period of three weeks, but the preparation of this trial took several months and that in itself gave us a good exposure. The result from the exercise, that is the f eedback we collected dur-\ning and after it, was then the starting point for the next development of the tool. 
We had de-signed the TCT to be as widely useful as possi-ble and, during that exercise, we showed that it could be used in a range of situations and for a range of purposes: coalition exercises with the translation of email from a South Korean ship (South Korea being one of the exercise coali-tion partners), information gathering for situa-tion awareness for regional exercises with the translation of news sites from Arabic and Indo-nesian, and humanitarian operations with the production of a draft pamphlet in Tetun (one of the national languages of East Timor).\n4 Those \nlanguage pairs were chos en both for experimen-\ntal purposes, taking into account the availability of MT tools and the environment, and to fit in with the general exercise scenario. \n \n \n4 While Portuguese is the official language of East \nTimor, Tetun serves as the lingua franca and Bahasa \nIndonesia is another language used in the area. During the three weeks of the exercise, we \ncollected feedback from bo th users and visitors \nto the exercise. Then, a new version of the tool was specifically developed for a particular envi-ronment, with automated web access being the main priority. This is the Web Translation Tool, shown in Figure 4. \n \nFigure 4. The LTI: Web Translation Tool \n \nFigure 5. Keywor d statistics \nThis second version of the LTI answered spe-\ncific requests from users for new functionalities. With the Translation Comparison Tool, we had concentrated on the access to translation sys-tems and on making it simp le for users to deal \nwith different types of input: typing input di-rectly in the input window, loading a file or auto-matic access to e-mail. With the Web Transla-tion Tool, the emphasis is on automating access to web pages and producin g batch translation of \nthose web pages. The users wanted to be able to have a list of web pages to be accessed and translated regularly. Also in answer to requests for \nnew functionalities, we added other utilities so that users can create lists of keywords they want to monitor and they can get statistics on those \nDominique Estival \n8 EAMT 2005 Conference Proceedings keywords when they are found in the source \ndocuments or in their translation (see Figure 5). \n5. Access to users: Exercises and \ntrials \nThis is always a problem for software design-\ners: how do you get access to real users when you don't have a real system for them to try out? I have experienced that problem in a num-ber of other projects, especially in research pro-jects when you don't even necessarily know who the potential users might be, but also in in-dustry when you already have a user base. There, the main issue is often that users are too busy to be interviewed or to be asked to partici-pate in trials. For speech recognition, for in-stance, you may have to organise data collec-tion projects, and you may try to entice users to give some of their time in return for a prize in a prize draw. In our case, not only are our users too busy in their daily job to be asked to par-ticipate in surveys or experiments, but they also change all the time. This is because of the post-ing cycle in the military. People may be as-signed to positions for 12, 18 or 24 months and they may not be there when you come back to talk to them. \nSo, in our case, we first had to imagine who \nour users might be, then try to understand what they would want, and then take advantage of opportunities to get some of them to try out the system. 
We also have to organise how we can take advantage of those opportunities and make sure the few users we reach can give us feed-back, so we can see whether we are on the right track. These opportunities to reach our users are demonstrations and trials during “open days” and military exercises where new technology is presented to various levels of the military. \nOur first opportunity was a multi-nation coa-\nlition exercise (JWID 2004)\n5. In this exercise, \nour users were military personnel untrained in the use of language tools, who as part of their role-playing in the exercise had to produce trans-lations for different types of input texts. This exercise runs on a scenario which is broken down into a number of “events” which recur at specific times throughout the five days of the \n \n \n5 http://jitc.fhu.disa.mil/washops/jtca/jwid.html. demonstration. Each event demonstrates a par-\nticular capability for the trials. \nThe LTI was an Australian trial and the LTI \nevents only concerned Australian role players within the Australian exercise Headquarters. The four events for demonstrating translation capabilities were chosen to exemplify a range of situations where translation would be useful or even necessary: \nƒ coalition exercises, with the translation of \nemail from Korean; \nƒ information gathering for situation aware-\nness, with the translation of web news arti-cles from Arabic and Indonesian; \nƒ humanitarian operations, with the produc-\ntion of a draft pamphlet in Tetun, giving in-formation on voting procedures. \nThis gave us four events with four language pairs. \nFor some events, only one system was available for that language pair, e.g. only TM for English-Tetun, while for others we had two translation outputs, e.g. both TM and Kataku for Indone-\nsian-English. \nThe whole exercise runs for three weeks. The \nfirst week is for training of the role players and rehearsal of all the events. In the second week, the role-players run through the complete sce-nario over five days. In the third week, they run through the complete scenario again, but with visitors attending and being given demonstra-\ntions throughout the even ts. Our users, the role-\nplayers who were running through the transla-tion events, were monolingual English speakers who had never thought about translation. We had ample time to get to know them during the first week of training and rehearsal and to ap-preciate the job they were doing and what their background was. Although they were “role-playing”, they were representative of our intended users in real operations and their comments and feedback was extremely valuable. \nThey were very interested in the trial, found \nthe LTI very easy to use, and said they could see that if they were asked to perform those du-ties, the LTI would be useful to them. The LTI also attracted a lot of interest from other role-players, who were not meant to have to use it but who asked to try it for themselves during down-time. Those other people also gave us useful feedback and suggestions. At the end of \nThe Language Translation Interface \nEAMT 2005 Conference Proceedings 9 the exercise, there was a formal assessment re-\nport, compiling comments collected from an on-line questionnaire. The assessment for the LTI was that it was considered to be of “significant value” and that the trial yielded “useful results”, with recommendation for further development and integration. \n6. 
MT Tools available via the LTI \nTurning to the MT tools we have made avail-\nable with the LTI, the first point is that, as I mentioned at the beginning of this talk, we did not have the resources to buy many MT sys-tems for demonstration, so we focussed on pro-viding uniform access to as many free systems as possible. The drawback, of course, is that the translation quality is not as high as with com-mercial systems, but we managed to keep the emphasis on the flexibility and useability of the tools. We insisted on the fact that this was a de-monstration prototype for an interface, not a testbed for people to evaluate the quality of the translation. So we provided access to a fairly large number of systems over the internet and con-centrated on the issues of ensuring the smooth input and display of all writing systems, with all character encodings made available. Again, the emphasis was on making this invisible to the user, so they do not have to know how to switch between Arabic and Latin characters or between the different character encoding systems for Japanese. This effort h as paid off because, pre-\ndictably, it is always the first question we are asked: “Can you deal with other writing sys-tems?”. So, our standard demo is to show Japa-nese and Arabic, as well as Indonesian – or French when we cannot access our Indonesian MT system. This leads me to my second point about the tools we have made available through the LTI and that is the issue of network con-straints. \nOur first prototype coul d only access free MT \nsystems over the internet\n6 but it soon became \nobvious that this was never going to be the way it would be used in reality. In fact, it could not even be used that way wh en we got to the point \nof participating in trials and military exercises. One reason is that Defence uses secure net-\n \n \n6 For example, the always popular Babelfish: http://ba-\nbelfish.altavista.com/. works and does not allow unrestricted access to \nthe internet. Another is that it would not meet security requirements to send potentially sensi-tive data over the internet to be translated on a public site and then sent back to us. \nFor our first trial with real users, we had to \nrun the whole exercise on a secure local net-work. Access to the intern et was out of the ques-\ntion, so we had to find systems that we could ei-ther integrate on a local machine or access over that secure local network. In the end, we were able to have local access to the IBM WebSphere \nTranslation Server ;\n7 w e w e r e a b l e t o b u y t w o \nlanguage pairs from a commercial-off-the shelf system, AppTek's Transphere ;\n8 and we were \nalso able to use a re search license for access to \nanother commercial system, ToggleText's Ka-\ntaku.9 In addition, with the purchase of Word-\nfast, we were able to build and demonstrate the use of Translation Memories.\n10 So, currently, \nwe are able to demonstrate the LTI with the fol-lowing translation systems: \nƒ IBM WebSphere Translation Server ( WTS ), \nunder a Defence-wide license. WTS pro-vides translation for a number of languages, including Korean and Japanese. \nƒ AppTek TranSphere , for Korean and Ara-\nbic. We purchased the English/Arabic and English/Korean language pairs and were granted a free temporary license for the API for the few months leading to JWID and for use during JWID. \nƒ Wordfast , a Translation Memories (TM) \napplication operating within Microsoft Word. We bought a license and we built TMs for Indonesian/English and for Tetun/English. 
\nƒ ToggleText Kataku for Indonesian-English. \nWe have an NDA with ToggleText, an Australian company, for research at DSTO and they have granted us permission to use Kataku during demonstrations and exer-\ncises. \nThe first three systems are all available on a \nsingle workstation, while Kataku is accessed over \n \n \n7 IBM WebSphere: http://www-3.ibm.com/soft-\nware/pervasive/ws_translation_server \n8 AppTek: http://www.apptek.com/ \n9 ToggleText: http://www.toggletext.com \n10 Wordfast: http://www.wordfast.net \nDominique Estival \n10 EAMT 2005 Conference Proceedings a Local Area Network (LAN) via scripting \ncommands within a telnet connection. For in-stallation at a customer's site, of course, com-mercial licenses have to be purchased. In our role of providing advice on the choice and pur-chase of technology for the ADO, we are still looking at other systems which might be better suited to our customers' requirements for spe-cific language pairs. \nBesides the question of which MT tools we \ncan make available through the LTI, another important issue for us is access to users, so we can assess their actual needs and requirements. \nDuring the first two weeks of the exercise, \nwe had shown the LTI to most of the partici-pants in the Australian exercise Headquarters and, apart from some network connection issues, there had been no problem for any of the LTI demonstrations. That in itself and the accep-tance by the military personnel were very posi-tive results. Then, during the last week, when the \nrole-players themselves had to do the demon-strations for the visitors (developers are not al-lowed to intervene), we received more feedback and very positive responses. \nWe then developed the Web Translation Tool \nI described earlier, to meet the specific user re-quirements which we received as a result of that first trial. A few months later, we brought the new Web Translation Tool in the particular Head-quarters where people had said they wanted to try it for real. This allowed us to have new users try it in their own environment. They used it to translate websites that were of interest to them and they were positive about the results they were getting. What was interesting and very en-couraging was that, although the system could not have been trained or tuned to the documents they wanted to translate, they found that the quality of the translation was enough for them to get the information they needed. \nWe had several opportunities to demonstrate \nthe LTI again, first at a multi-national coalition exercise and then in a Headquarters exercise. These exercises did not involve users trying out the system, but we received very positive re-sponses from the higher-level people who were attending. We made furthe r improvements to the \nLTI, mostly to ensure the system was more se-cure and reliable, and we were then able to run a new trial in the Headquarters which had ex-pressed strong interest in it. This time, the LTI \nwas used over a period of several days by differ-ent analysts than those who had tried it earlier. \nWhen I talked to Harry Somers about this \nproject, I wanted to ask his advice on how best to utilise the opportunities we get to have users try the LTI for themselves. His first comment was that one must provide MT users with back-ground reading on MT pitfalls and shortcom-ings and that one must give them training be-fore letting them loose. 
I agreed this would be ideal but unfortunately this is not always feasi-ble, for our users do not have much time to read background material before using new tools, they expect the tools to be useable right away. \nTo help with that problem, we have produced very short user guides, in which we do warn us-ers about what can go wrong, but it would be unrealistic to expect that the users will devote much time to those tools, at least until there is wider acceptance of the technology and the di-rective comes from the top that those tools must be used. \n7. Conclusions from user trials \nand experiments \nThe main goals of the Language Translation In-\nterface (LTI) project were the identification of requirements for automated translation within the ADF and the development of tools to meet these requirements. The development of a new translation engine require s enormous efforts and \nresources and is beyond the scope of a research project at DSTO. In any case it is not possible to predict which languages might become of in-terest and the results of such efforts would most likely not meet actual needs. It is interesting to note that these two issues are exactly parallel to the problems faced by the training of linguists and translators: it takes between one and three years to train a linguist in a new language, and languages of interest change according to world events and demands on the ADF. We had iden-tified the development of a single interface to existing translation tools as filling the need for rapid and easy automated translation tools when human translators are not available and the LTI was first developed as a concept demonstrator providing users with a single interface for ac-cessing a range of language translation systems and tools. \nThe Language Translation Interface \nEAMT 2005 Conference Proceedings 11 Following participation in military exercises \nand trials, two versions of the LTI, the Transla-tion Comparison Tool and the Web Translation Tool are now fully integrated into one seamless system. However, the most important results from this interaction with potential users have been the exposure of the technology to those prospective users and the feedback we received from them. This exposure has raised awareness of the need for access to information in lan-guages other than English, even in an English-speaking country such as Australia. To ADO members who were already aware of this need, but who had previously been reliant on human-only translations, this was an opportunity to see what can already be achieved with computer-assisted language translation and it raised an awareness of the need to develop tools to proc-ess documents in other languages. Finally, we were able to establish fruitful contacts with a user community for the LTI and we are now building upon them. \n8. Future Work \nFollowing these positive contacts, the main is-\nsue is to manage customers' and users' expecta-tions. We have argued that relying on human translators to meet all the translation require-ments of the ADO is not a viable option in the long run and that translation tools would help meet those needs. What we are now proposing is the development of a prototype with new ca-pabilities, the Language Translation Tool Suite (LTTS), which will build on and extend the LTI. So, what we need to show is that the LTTS will in fact help meet those needs and not put an increased burden on the ADO translators or on defence personnel who would use the LTTS. 
\nThe prototype LTI tool is being tested in \nHeadquarters, and this will give us more feed-back from military personnel. We have also re-cently started the process of gathering user re-quirements for the new LTTS at the ADF School of Languages, with the goal of ensuring that these requirements coincide with those already established in Headquarters. The next phase of the project is to validate all these user require-ments and to produce a report for transition to an operational system. We can already say that, from the point of view of the ADO, the LTTS is in line with the requireme nts for increased auto-mation in Headquarters and for the ability to \nwork in multi-lingual environments. It would also meet the stated ADF School of Languages goals of improving language tr aining and efficiency, \nby improving the range of language skill train-ing for students, with limited operational costs and limited additional training. \nWe are arguing that the LTTS would be a \nsuperior option to acquiring individual MT sys-tems when the need for tools for a particular language pair arises, beca use the installation of \nthe LTI (or the LTTS) is a one-off operation, which gives seamless access to all subsequent MT systems that might be added for new trans-lation requirements. Furthermore, training of operators and of language students would be subs-tantially lower than if separate systems were pur-chased on an ad hoc basis, because the same in-terface will be used for all systems, so the initial training for using the LTTS would cover all ad-ditional translation te chnologies accessed through \nthe LTTS. \nAnother important advantage of the LTTS \nover having separate transl ation tools is the abil-\nity to combine the outputs of several MT systems for a language pair. We expect that this will in-crease the quality of translation output, espe-cially when the systems also include Transla-tion Memories. Although the MT systems we have so far made available through the LTI are primarily translation engines, we have been ar-guing that Translation Memory technology should be an important component of the LTTS: TMs give the ability to store translations and re-use them and this will both reduce translation time and contribute to building an organisation-wide database of translations. Use of this database will increase the translators' efficiency and im-prove the quality of translation. It is true that the creation of TMs requires resources but we hope that once the general tool is adopted, the translators will be able to build their own TMs and share them with other. More importantly, TMs will allow us to build new translation sys-tems for languages with no existing MT engines, which is the case for many of the languages of interest in our region. \nOn the technical front, we plan to make im-\nprovements to the interface after feedback from users. We already know that this will include the addition of a number of utilities, in particu-\nDominique Estival \n12 EAMT 2005 Conference Proceedings lar keyword synonym recognition, in addition \nto the keyword facility I mentioned we already developed for the WTC. We also plan to obtain significant improvements to translation quality by better text pre-processing. This will include spell-checking and limited named entity recog-nition, e.g. place names, dates, groups and indi-viduals in the languages of interest. 
However, the first item on our list has to be the integration of military vocabulary, including acronyms, first for English then for the other languages of interest. This leads to a very challenging area of research, because what we want to develop is a general “Vocabulary Update” functionality which would provide users with the same simple inter-face to enter new vocabulary in the same way for all the systems. This can be seen as an ex-tension of the idea of sharing linguistic data, ei-ther dictionaries or previous translations, be-tween the tools for one language pair. \nUsers have already requested that the system \nbe able to take input from OCR and we plan to include OCR and spell-checkers utilities. We demonstrated a couple of years ago the use of output from speech recognition, but I am not convinced that we are ready to offer this func-tionality yet. However, a simple utility to add is language detection and au tomatic selection of \nthe appropriate translation tools and this will render the LTTS absolutely transparent to the users. \nFurther extensions include the integration of \nmultilingual linguistic resources, e.g. dictionar-ies, Part-of-Speech taggers and extension of multi-lingual capabilities for named entity rec-ognition. Ultimately, what we aim to do is no less than cross-language information retrieval and multi-lingual document classification, but we still have some way to go. 9. References \nCARR, Oliver and Dominique ESTIVAL. (2003). \n“Document Classification in Structured Military Mes-\nsages”. Australasian Language Technology Work-\nshop , Melbourne, Australia. pp-73-81. \nDoD/CIO (2004). Horizontal Fusion FY2004 After \nAction Report Version 1.0, Department of Defense, \nAssistant Secretary of Defe nce for Networks and In-\nformation Integration / DoD CIO: 83. \nESTIVAL, Dominique and Jennifer BIGGS. (2003). \n“The Language Translation Interface and automated language translation tools for the ADO”. Eighth In-\nternational Command and Control Research and Tech-\nnology Symposium, Washington, DC, USA. \nESTIVAL, Dominique, Michael BROUGHTON, And-\nrew ZSCHORN and Elizabeth PRONGER. (2003). \n“Spoken Dialogue for Virtual Advisers in a semi-im-mersive Command and Control environment”. 4th SIG-\ndial Workshop on Discourse and Dialogue . ACL \n2003, Sapporo, Japan. pp-125-134. \nESTIVAL, Dominique, Chris NOWAK and Andrew \nZSCHORN. (2004). “Towards Ontology-based Natu-\nral Language Processing”. RDF/RDFS and OWL in \nLanguage Technology: 4th Workshop on NLP and \nXML (NLPXML-2004. ACL 2004, Barcelona, Spain. \npp.59-66. \nGALE (2005). http://www2.eps.gov/spg/ODA/ DAR-\nPA/CMOBAA05%2D28/SynopsisP.html \nHyperwave (2005). Hyperwave and the Horizontal \nFusion Portfolio Initiative. Washington DC, Hyper-\nwave Government Solutions: 7. \nJOHNSON, T. (2003). ARL technology to protect \nSoldiers, improve communication. RDECOM. Decem-\nber 2003: 12. \nOSTERHOLZ, J. L. (2003). Net-Centric Operations \n& Warfare. 6th Annual System Engineering Confer-\nence, San Diego, CA (DTIC – external site), Na-\ntional Defence Industrial Association (NDIA).", "main_paper_content": null }
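The LTI described above exposes a single interface that dispatches a request to whatever translation engines are registered for a language pair (the access details live in an "ini" file), and the planned LTTS adds language detection with automatic tool selection. The sketch below illustrates only that config-driven dispatch pattern; the configuration keys, the detection heuristic and the engine names are hypothetical and are not the DSTO implementation.

```python
# Hypothetical sketch of a config-driven engine dispatcher; not the actual LTI/LTTS code.
import configparser

def load_engines(path: str) -> dict:
    """Read an ini-style file that lists, per language pair, the engines available for it."""
    config = configparser.ConfigParser()
    config.read(path)
    # Assumed layout: one section per pair, e.g. [ja-en] with "engines = engine_a, engine_b".
    return {pair: [name.strip() for name in config[pair]["engines"].split(",")]
            for pair in config.sections()}

def detect_language(text: str) -> str:
    """Toy stand-in for a language-identification step; a real system would call a detector."""
    return "ja" if any("\u3040" <= ch <= "\u30ff" for ch in text) else "en"

def call_engine(engine: str, text: str) -> str:
    """Stub so the sketch runs; each real engine is reached differently (local, LAN, web)."""
    return f"[{engine} output for: {text}]"

def translate(text: str, target: str, engines_by_pair: dict) -> dict:
    """Send the text to every engine registered for the detected pair; return all outputs."""
    pair = f"{detect_language(text)}-{target}"
    return {engine: call_engine(engine, text)
            for engine in engines_by_pair.get(pair, [])}

if __name__ == "__main__":
    engines_by_pair = {"ja-en": ["engine_a", "engine_b"]}  # or load_engines("engines.ini")
    print(translate("これはテストです", "en", engines_by_pair))
```

Presenting the resulting dictionary of outputs side by side with the source sentence is essentially what the Translation Comparison Tool does, and swapping or adding engines only means editing the configuration, not retraining users.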
{ "decision": "Unknown", "reviews": [] }
0
0
[]
[]
{ "id": "mCw7cfxmcL0", "year": null, "venue": "EAMT 2022", "pdf_link": "https://aclanthology.org/2022.eamt-1.7.pdf", "forum_link": "https://openreview.net/forum?id=mCw7cfxmcL0", "arxiv_id": null, "doi": null }
{ "title": "Passing Parser Uncertainty to the Transformer. Labeled Dependency Distributions for Neural Machine Translation", "authors": [ "Dongqi Pu", "Khalil Sima'an" ], "abstract": null, "keywords": [], "raw_extracted_content": "Passing Parser Uncertainty to the Transformer: Labeled Dependency\nDistributions for Neural Machine Translation\nDongqi Liu Khalil Sima’an\[email protected] [email protected]\nInstitute for Logic, Language and Computation\nUniversity of Amsterdam\nAbstract\nExisting syntax-enriched neural machine\ntranslation (NMT) models work either\nwith the single most-likely unlabeled parse\nor the set of n-best unlabeled parses com-\ning out of an external parser. Passing a\nsingle or n-best parses to the NMT model\nrisks propagating parse errors. Further-\nmore, unlabeled parses represent only syn-\ntactic groupings without their linguisti-\ncally relevant categories. In this paper\nwe explore the question: Does passing\nboth parser uncertainty and labeled syn-\ntactic knowledge to the Transformer im-\nprove its translation performance? This\npaper contributes a novel method for in-\nfusing the whole labeled dependency dis-\ntributions (LDD) of the source sentence’s\ndependency forest into the self-attention\nmechanism of the encoder of the Trans-\nformer. A range of experimental results on\nthree language pairs demonstrate that the\nproposed approach outperforms both the\nvanilla Transformer as well as the single\nbest-parse Transformer model across sev-\neral evaluation metrics.\n1 Introduction\nNeural Machine Translation (NMT) models based\non the seq2seq schema, e.g., Kalchbrenner and\nBlunsom (2013); Cho et al. (2014); Sutskever et\nal. (2014); Bahdanau et al. (2014), first encode the\nsource sentence into a high-dimensional content\nvector before decoding it into the target sentence.\n© 2022 The authors. This article is licensed under a Creative\nCommons 3.0 licence, no derivative works, attribution, CC-\nBY-ND.Several prior studies (Shi et al., 2016; Belinkov\nand Bisk, 2018) have pointed out that although\nNMT models may induce aspects of syntactic re-\nlations, they still cannot capture the subtleties of\nsyntactic structure that should be useful for accu-\nrate translation, particularly by bridging long dis-\ntance relations.\nPrevious work provides support for the hypoth-\nesis that explicit incorporation of source syntactic\nknowledge could result in better translation per-\nformance, e.g., Eriguchi et al. (2016); Bastings et\nal. (2017). Most models condition translation on a\nsingle best parse syn:\narg max\ntP(t|s,syn) (1)\nwhere sandtare the source and target sentences\nrespectively. Other models incorporate the n-best\nparses or forest (without parser probabilities and\nlabels), e.g., Neubig and Duh (2014). The idea\nhere is that the syntactically richer input (s, syn)\nshould be better than the bare sequential word or-\nder of s, leading to a more accurate and sharper\ntranslation distribution P(t|s,syn).\nWhile most syntax-enriched strategies result in\nperformance improvements, there are two note-\nworthy gaps in the literature addressing source\nsyntax. Firstly, none of the existing works con-\nditions on the probability distributions over source\nsyntactic relations. And secondly, none of the ex-\nisting approaches conditions on the dependency\nlabels, thereby conditioning only on the binary\nchoice whether there is an unlabeled dependency\nrelation between two words.\nTu et al. (2010); Ma et al. 
(2018); Zaremoodi and Haffari (2018) showed that the whole dependency forest provides better performance than a single best-parse approach. In this paper we go one step further and propose that a syntactic parser is more useful if it also conveys its remaining uncertainty to the NMT model, expressed as the whole probability distributions over dependency relations rather than a mere forest.
To the best of our knowledge, there is no published work that incorporates a parser's distributions over dependency relations into the Transformer model (Vaswani et al., 2017), let alone distributions over labeled dependency relations into NMT models at large.
This paper contributes a generic approach for infusing labeled dependency distributions into the encoder's self-attention layer of the Transformer. We represent a labeled dependency distribution as a three-dimensional tensor of parser probabilities, where the first and second dimensions concern word positions and the third concerns the dependency labels.
The resulting tensor is infused into the computation of the multi-head self-attention, where every head is made to specialize in a specific dependency class. We contribute empirical evidence that passing uncertainty to the Transformer and passing labeled dependencies both give better performance than passing a single unlabeled parse, or an unlabeled/labeled set of dependency relations with uniform probabilities.
2 Related Work
The role of source syntactic knowledge in better reordering was appreciated early on, during the Statistical Machine Translation (SMT) era. For example, Mylonakis and Sima'an (2011) propose that source language parses should play a crucial role in guiding the reordering within translation, and do so by integrating constituency labels of varying granularity into the source language. Although NMT encoders have been claimed to have the ability to learn syntax, work on RNN-based models shows the value of external source syntax in improving translation performance, e.g., Eriguchi et al. (2016), who refine the encoder component, leading to a combination of a tree-based encoder and a sequential encoder.
It is worth recalling that the attention mechanism was originally aimed at capturing all word-to-word relations, including syntactic-semantic relations. Nevertheless, the work of Bastings et al. (2017) has shown that a single unlabeled dependency parse, encoded using Graph Convolutional Networks (GCNs), can help improve MT performance. Ma et al. (2018) and Zaremoodi and Haffari (2018) attempt to incorporate parse forests into RNN-based NMT models, mitigating parsing errors by providing more candidate options. However, these two works rely only on the binary (unlabeled) relations in all the sub-trees, ignoring the elaborate probability relations between word positions and the types of these relations.
Although the Transformer (Vaswani et al., 2017) is considered to have a better ability to implicitly learn relations between words than RNN-based models, existing work (Zhang et al., 2019; Currey and Heafield, 2019) shows that even incorporating a single best parse could improve the Transformer's translation performance.
Followup\nwork (Bugliarello and Okazaki, 2020; Peng et\nal., 2021) provides similar evidence by changing\nthe Transformer’s self-attention mechanism based\non the distance between the input words of de-\npendency relations, exploiting the single best un-\nlabeled dependency parse.\nThe work of Pham et al. (2019) suggests that\nthe benefits of incorporating a single (possibly\nnoisy) parse (using data manipulation, linearized\nor embedding-based method) can be explained as\na mere regularization effect of the model, which\ndoes not help the Transformer to exploit the ac-\ntual syntactic knowledge. Interestingly, Pham et\nal. (2019) arrive at a similar hypothesis, but they\nconcentrate on exploring how to train one of the\nheads of the self-attention in the Transformer for a\ncombined objective of parsing and translation. The\nparsing-translation training objective focuses the\nself-attention of a single head at learning the distri-\nbution of unlabeled dependencies while learning to\ntranslate as well, i.e., the distribution is not taken\nas source input but as a gold training objective. By\ntraining a single head with syntax, they leave all\nother heads without direct access to syntax.\nOur work confirms the intuition of Pham et\nal. (2019) regarding the utility of the parser’s full\ndependency distributions, but in our model these\ndistributions are infused directly into the self-\nattention while maintaining a single training ob-\njective (translation). Furthermore, we propose that\nonly when the full probability distribution matri-\nces over labeled dependency relations is infused\ndirectly into the transformer’s self-attention mech-\nanism (not as training objective), syntax has a\nchance to teach the Transformer to better learn\nsyntax-informed self-attention weights.\n3 Proposed Approach\nA parser can be seen as an external expert sys-\ntem that provides linguistic knowledge to assist the\nNMT models in explicitly taking into account syn-\ntactic structure. For some sentences, the parser\ncould be rather uncertain and spread its proba-\nbility over multiple parses almost uniformly, but\nin the majority of cases the parser could have a\nrather sharp distribution over the alternative parses.\nTherefore, simply passing a dependency forest\namounts merely to passing all alternative parses\naccompanied with zero information on parser con-\nfidence (maximum perplexity) to the Transformer\nNMT model, which does not help it to distinguish\nbetween the parsing information of the one input\nfrom that of another. This could increase the com-\nplexity of learning the NMT model unnecessarily.\nAn alternative is then to use for each sentence\na dependency distribution in the form of condi-\ntional probabilities, which could be taken to rep-\nresent the degree of confidence of the parser in the\nindividual dependency relations. Furthermore, we\npropose that each dependency relation type (label),\nprovides a more granular local probability distri-\nbution that could assist the Transformer model in\nmaking more accurate estimation of the context\nvector. This might enhance the quality of encod-\ning the source sentence, particularly because the\nTransformer model relies on a weak notion or word\norder, which is input in the form of positional en-\ncoding outside the self-attention mechanism.\nNote that the word-to-word dependency proba-\nbilities is not equivalent to using a distribution over\ndependency parses. 
This is because in some cases the word-to-word dependencies (just like word-to-word attention) could combine together into general graphs (not necessarily trees). We think that using relations between pairs of words (rather than upholding strict tree or forest structures) fits well with the self-attention mechanism.
3.1 Dependency Distributions
Denote with |T| the target sentence length and with encode(·) the NMT model's encoder. We contrast different syntax-driven models:
$P(t \mid s, syn) \approx \prod_{i=1}^{|T|} P(t_i \mid t_{<i}, \mathrm{encode}(s, syn))$   (2)
with syn ∈ {{L,U}DD, U{L,U}DD, {L,U}DP}, where {L,U}DD is the labeled/unlabeled dependency distribution (the unlabeled dependency distribution is the sum of the labeled dependency distributions along the z-axis, which is the same as the 1-best unlabeled dependency parse), U{L,U}DD is the uniform labeled/unlabeled dependency distribution (used only for the ablation experiments; every point of the 3-dimensional tensor has an identical value), and {L,U}DP is the 1-best labeled/unlabeled dependency parse. We also use LDA to stand for a model where the attention weights are fixed equal to LDD (i.e., not learned).
Our primary idea is to exert a soft influence on the self-attention in the encoder of the Transformer to allow it to fit its parameters with both syntax and translation awareness together. For infusing the labeled dependency distributions, we start with "matrixization" of the labeled dependency distributions, which results in a compact tensor representation suitable for NMT models.
Figure 1: Labeled dependency distributions
Figure 1 illustrates by example how we convert the labeled dependency distribution (LDD) into a three-dimensional LDD tensor. The x-axis and y-axis of the tensor are the words in the source sentence, and the z-axis represents the type of dependency relation. Each point represents a conditional probability p(i, j, l) = p(s_j, l | s_i) ∈ [0, 1] of source word s_i modifying another source word s_j with relation l.
LDD matrix for a specific label l: The matrix LDD^l extracted from the LDD tensor for a dependency label l is defined as the matrix in which every entry (i, j) contains the probability of a word s_i modifying word s_j with dependency relation l.
3.2 Parser-Infused Self-attention
Inspired by Bugliarello and Okazaki (2020), we propose a novel Transformer NMT model that incorporates the LDD into the first layer of the encoder side. Figure 2 shows our LDD sub-layer.
The standard self-attention layer employs a multi-head attention mechanism with h heads. For an input sentence of length T, the input of self-attention head h_i in the LDD layer is the word embedding matrix X ∈ R^{T×d_model} and the dependency distribution matrix LDD^{l_i} ∈ R^{T×T} for the label l_i assigned uniquely to head h_i (we group the original dependency labels into 16 alternative group labels; the grouping is provided in Appendix A). Hence, when we refer to head h_i, we refer also to its uniquely assigned dependency label l_i, but we omit l_i to avoid complicating the notation.
As usual in multi-head self-attention (h being the number of heads), head h_i first linearly maps the three input vectors q, k, v ∈ R^{1×d_model} of each token, resulting in three matrices Q^{h_i} ∈ R^{T×d}, K^{h_i} ∈ R^{T×d}, and V^{h_i} ∈ R^{T×d}, where d_model is the dimension of the input vectors and d = d_model / h. Subsequently, an attention weight for each position is obtained by:
$S^{h_i} = \frac{Q^{h_i} \cdot {K^{h_i}}^{\top}}{\sqrt{d}}$   (3)
At this point we infuse the resulting self-attention weight matrix S^{h_i} for head h_i with the specific LDD matrix LDD^{l_i} for label l_i using element-wise multiplication. Assuming that d^{l_i}_{p,q} ∈ LDD^{l_i}, this is to say:
$n^{h_i}_{p,q} = s^{h_i}_{p,q} \times d^{l_i}_{p,q}, \quad \text{for } p, q = 1, \dots, T$   (4)
The purpose of the element-wise multiplication is to learn weights that optimize the translation objective but also diverge the least from the parser probabilities in the dependency distribution matrix.
Next, the resulting weights are softmaxed to obtain the final syntax-infused distribution matrix for head h_i and the label l_i attached to this head:
$N^{h_i} = \mathrm{softmax}(S^{h_i} \odot LDD^{l_i})$   (5)
We stress that every attention head is infused with a different dependency relation matrix LDD^{l_i} for a particular dependency relation l_i. By focusing every head on a different label we hope to "soft label", or specialize, it for that label.
Now that we have the syntax-infused weights N^{h_i}, we multiply them with the value matrix V^{h_i} to get the attention weight matrix of attention head h_i for the relation l_i:
$M^{h_i} = N^{h_i} \cdot V^{h_i}$   (6)
Subsequently, the multi-head attention linearly maps the concatenation of all the heads with a parameter matrix W^o ∈ R^{d_model×d_model}, and sends this hidden representation to the standard Transformer encoder layers for further computations:
$\mathrm{MultiHead}(Q, K, V) = \mathrm{Concat}(M^{h_1}, \dots, M^{h_m}) W^o$   (7)
Finally, the objective function for training our model with syntax knowledge is identical to that of the vanilla Transformer (Vaswani et al., 2017):
$\mathrm{Loss} = -\sum_{t=1}^{T} \left[ y_t \ln(o_t) + (y_t - 1) \ln(1 - o_t) \right]$   (8)
where y_t and o_t are, respectively, the true and the model-predicted value at state t, and T represents the number of states. The syntactic distribution matrices are not an object of optimization in the model; they are incorporated into the model as parameter-free matrices.
4 Experiments and Analysis
Experimental Setup: We establish seven distinct sets of experiments; refer to Table 1. To be specific, we conduct experiments to validate the empirical performance under both medium-size and small-size training parallel corpora. Apart from the different network structures used in the models, the number of network layers is identical for all models in the same language-pair translation experiments. Additionally, the seven models in each experiment use the same parameter settings, loss function, and optimizer algorithm. The experiments employ the BLEU-{1,4} score (Papineni et al., 2002), the RIBES score (Isozaki et al., 2010), the TER score (Snover et al., 2006), and the BEER score (Stanojević and Sima'an, 2014) as criteria for evaluating the models' effectiveness.
Figure 2: Labeled dependency distribution sub-layer (LDD^{l_i} for head h_i)
Parser: We employ an external dependency parser, SuPar (Zhang et al., 2020), to automatically parse the source sentences. Since this parser was trained using the biaffine method (Dozat and Manning, 2016), we can extract dependency distributions by changing its source code.
Data: We evaluate the translation tasks for three language pairs from three different language families: English-Chinese (En→Zh), English-Italian (En→It), and English-German (En→De). We chose dev2010 and test2010 as our validation and test datasets for the IWSLT2017 En→De and En→It tasks. For En→Zh, we randomly selected a 110K subset from the IWSLT2015 dataset as the training set and used dev2010 as the validation set and tst2010 as the test set.
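To make the label-specific infusion of Section 3.2 (Equations 3 to 7) concrete, the sketch below shows one possible PyTorch-style implementation of a single LDD-infused attention layer. The module name, the tensor shapes, and the assumption that a per-sentence LDD tensor has already been computed and padded to length T are illustrative choices for this sketch rather than a description of our released code; padding masks and dropout are omitted.

```python
# Minimal sketch of one LDD-infused multi-head attention layer (Eqs. 3-7).
# Names and shapes are illustrative assumptions, not the authors' released code.
import math
import torch
import torch.nn as nn


class LDDInfusedSelfAttention(nn.Module):
    def __init__(self, d_model: int = 512, n_heads: int = 16):
        super().__init__()
        assert d_model % n_heads == 0
        self.n_heads = n_heads
        self.d_head = d_model // n_heads
        self.q_proj = nn.Linear(d_model, d_model)
        self.k_proj = nn.Linear(d_model, d_model)
        self.v_proj = nn.Linear(d_model, d_model)
        self.o_proj = nn.Linear(d_model, d_model)  # W^o in Eq. (7)

    def forward(self, x: torch.Tensor, ldd: torch.Tensor) -> torch.Tensor:
        # x:   (batch, T, d_model) source word embeddings
        # ldd: (batch, n_heads, T, T); ldd[:, i] holds the label-specific matrix
        #      LDD^{l_i} assigned to head h_i, i.e. p(s_j, l_i | s_i) entries.
        b, t, _ = x.shape

        def split(m: torch.Tensor) -> torch.Tensor:
            # (batch, T, d_model) -> (batch, heads, T, d_head)
            return m.view(b, t, self.n_heads, self.d_head).transpose(1, 2)

        q, k, v = split(self.q_proj(x)), split(self.k_proj(x)), split(self.v_proj(x))
        s = q @ k.transpose(-2, -1) / math.sqrt(self.d_head)   # Eq. (3)
        n = torch.softmax(s * ldd, dim=-1)                      # Eqs. (4)-(5)
        m = n @ v                                               # Eq. (6)
        m = m.transpose(1, 2).contiguous().view(b, t, -1)       # concatenate heads
        return self.o_proj(m)                                   # Eq. (7)
```

In a full encoder, a sub-layer of this kind would replace only the first self-attention layer, with the remaining Transformer layers left unchanged.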
Table 2 exhibits the division and statistics of the datasets.
For training only, we first filtered out the source sentences that SuPar cannot parse and sentences that exceed 256 tokens in length. Then, we used SuPar (https://github.com/yzhangcs/parser) to parse each source-language sentence to obtain the labeled dependency distributions, and applied Spacy (https://spacy.io/) to tokenize the source and target languages, respectively. Finally, we replaced words in the corpus with "<unk>" for words occurring fewer than two times; for each mini-batch of sentences we added "<bos>" and "<eos>" tokens at the beginning and end; and for sentences with inconsistent lengths within a mini-batch we added a corresponding number of "<pad>" tokens at the end of the sentences to keep the batch length consistent.
Hyperparameters: In the low-resource experiments, the batch size was 256, the number of layers for the encoder and decoder was 4, and the number of warm-up steps was 400. In the medium-resource experiments, their values were 512, 6 and 4000, respectively. For the rest, we use the base configuration of the Transformer (Vaswani et al., 2017). All experiments were optimized using Adam (Kingma and Ba, 2015) (where β1 was 0.9, β2 was 0.98, and ε was 10^-9), and the initial learning rate was set to 0.0001, gradually reduced during training as follows:
$lr = d_{model}^{-0.5} \cdot \min(\mathrm{step\_num}^{-0.5},\ \mathrm{step\_num} \cdot \mathrm{warmup\_steps}^{-1.5})$   (9)
The number of heads in the multi-head attention was set to 8 (16 in the LDD layer), the dimension of the model was 512, the dimension of the inner fully-connected layers was set to 2048, and the loss function was the cross-entropy loss. The checkpoint with the highest BLEU-4 score on the validation set was saved for model testing during training. The number of epochs was set to 50 (one epoch represents a complete pass over the training data). In order to prevent over-fitting, we set the dropout rate (also in our LDD layer) to 0.1.
Table 1: Five sets of experimental group descriptions
Experimental group | Description
Baseline (BL) | The original Transformer model.
+Labeled dependency attention only (LDA) | Replace the S matrix directly with the labeled dependency distributions.
+1-best labeled dependency parse (LDP) | Incorporate the 1-best dependency tree with a specific (e.g. l1) label.
+1-best unlabeled dependency parse (UDP) | Incorporate the 1-best dependency tree (regardless of the type of dependency relations).
+Uniform labeled dependency distributions (ULDD) | Incorporate uniform labeled dependency distributions.
+Uniform unlabeled dependency distributions (UUDD) | Incorporate uniform unlabeled dependency distributions.
+Labeled dependency distributions (LDD) | Incorporate labeled dependency distributions with standard Transformer self-attention.
Table 2: Dataset statistics
Task | Corpus | Training set | Validation set | Test set
English→German | Multi30k | 29000 | 1014 | 1000
English→German | IWSLT 2017 | 206112 | 888 | 1568
English→Italian | IWSLT 2017 | 231619 | 929 | 1566
English→Chinese | IWSLT 2015 | 107860 | 802 | 1408
4.1 Experimental Results
The experimental results for each model under the low- and medium-resource scenarios are shown in Tables 3 to 6. The first group represents the baseline model, while the remaining groups represent the control models.
It is necessary to note that the\nlast group is the model proposed in this paper.\nAs compared to the baseline model, either form\nof modeling the syntactic knowledge of the source\nlanguage could be beneficial to the NMT models.\nWhether it was in the choice of lexical (BLEU-\n1) or in the order of word (RIBES), there was a\ncertain degree of improvement, which also sup-\nports the validity and rationality of incorporating\nsyntactic knowledge. The proposed model (LDD)\nachieved the best score in at least three of the five\ndifferent evaluation metrics, regardless of the lan-\nguage translation tasks. The proposed model con-\nsistently reached the highest results on BLEU-4,Table 3: Multi30k evaluation results (En →De)\nModel BLEU-1 RIBES BLEU-4 TER BEER\nBL 58.13 78.86 30.14 62.95 0.59\n+LDA 54.10 80.10 30.49 63.47 0.61\n+LDP 54.26 79.58 30.71 79.58 0.61\n+UDP 55.84 78.96 31.05 63.38 0.60\n+ULDD 52.20 79.50 27.80 63.02 0.59\n+UUDD 53.38 79.75 29.09 63.34 0.60\n+LDD 55.65 79.97†‡31.29†‡62.66†‡0.61\nLDD compared to BL −∆2.48 +∆1.11 +∆1.15 +∆0.29 +∆0.02\nLDD compared to UDP −Φ0.19 +Φ1.01 +Φ0.24 +Φ0.72 +Φ0.01\n1The black bold in the table represents the best experimental\nresults under the same test set.\n2∆andΦrepresent the improvement of our model compared\nto baseline and 1-best unlabeled parse system respectively.\n3†and‡indicate statistical significance (p <0.05) against\nbaseline and 1-best unlabeled parse system via T-test and\nKolmogorov-Smirnov test respectively.\nTable 4: IWSLT2017 evaluation results (En →De)\nModel BLEU-1 RIBES BLEU-4 TER BEER\nBL 51.63 68.64 26.13 83.34 0.53\n+LDA 49.89 69.04 26.16 83.53 0.53\n+LDP 51.12 68.91 26.38 83.93 0.53\n+UDP 50.90 69.20 26.39 84.65 0.53\n+ULDD 50.80 69.56 25.10 82.76 0.53\n+UUDD 48.85 68.90 25.41 86.19 0.53\n+LDD 54.98†‡68.83†27.78†‡81.85†‡0.54\nLDD compared to BL +∆3.35 +∆0.19 +∆1.65 +∆1.49 +∆0.01\nLDD compared to UDP +Φ4.08 −Φ0.37 +Φ1.39 +Φ2.80 +Φ0.01\n1The black bold in the table represents the best experimental\nresults under the same test set.\n2∆andΦrepresent the improvement of our model compared\nto baseline and 1-best unlabeled parse system respectively.\n3†and‡indicate statistical significance (p <0.05) against\nbaseline and 1-best unlabeled parse system via T-test and\nKolmogorov-Smirnov test respectively.\nwhich increased by at least one point when com-\npared to the baseline model, with an average in-\ncrease rate of more than 5%. Furthermore, in most\ntranslation experiments, incorporating labeled de-\npendency distributions provided better outcomes\nthan the 1-best unlabeled dependency parse system\n(UDP)6. This indicates the efficacy of providing\nmore parsing information, particularly the depen-\ndency probabilities. In the low resource scenarios,\nthe models of incorporating syntactic knowledge\n6All previous work uses only 1-best unlabeled parse, which is\nalso our main comparison object. 
We will refer to it as 1-best\nparse or 1-best tree below.\nTable 5: IWSLT2017 evaluation results (En →It)\nModel BLEU-1 RIBES BLEU-4 TER BEER\nBL 54.14 68.58 27.11 77.52 0.56\n+LDA 51.25 69.90 26.13 81.23 0.56\n+LDP 51.72 68.26 25.65 80.03 0.55\n+UDP 53.17 69.90 28.13 76.18 0.56\n+ULDD 51.30 67.83 25.23 80.62 0.54\n+UUDD 54.00 66.83 25.23 78.41 0.55\n+LDD 56.73†‡69.69†29.34†‡76.34†0.57\nLDD compared to BL +∆2.59 +∆1.11 +∆2.23 +∆1.18 +∆0.01\nLDD compared to UDP +Φ3.56 −Φ0.21 +Φ1.21 −Φ0.16 +Φ0.01\n1The black bold in the table represents the best experimental\nresults under the same test set.\n2∆andΦrepresent the improvement of our model compared\nto baseline and 1-best unlabeled parse system respectively.\n3†and‡indicate statistical significance (p <0.05) against\nbaseline and 1-best unlabeled parse system via T-test and\nKolmogorov-Smirnov test respectively.\nTable 6: IWSLT2015 evaluation results (En →Zh)\nModel BLEU-1 BLEU-4 TER BEER\nBL 46.53 18.31 67.96 0.20\n+LDA 44.91 18.25 70.96 0.20\n+LDP 47.34 18.85 70.02 0.20\n+UDP 46.92 19.71 67.29 0.20\n+ULDD 40.67 17.89 77.04 0.19\n+UUDD 34.14 18.05 79.27 0.18\n+LDD 47.62†‡20.25†‡67.38†0.20\nLDD compared to BL +∆1.09 +∆1.94 +∆0.58 +∆0.00\nLDD compared to UDP +Φ0.70 +Φ0.54 −Φ0.09 +Φ0.00\n1The black bold in the table represents the best exper-\nimental results under the same test set.\n2∆andΦrepresent the improvement of our model\ncompared to baseline and 1-best unlabeled parse sys-\ntem respectively.\n3†and‡indicate statistical significance (p <0.05)\nagainst baseline and 1-best unlabeled parse sys-\ntem via T-test and Kolmogorov-Smirnov test respec-\ntively.\npaid less attention to the neighboring words in\nthe corpus sentence because syntactic knowledge\nmay assist models in focusing on distant words\nwith syntactic relations, which was reflected in the\ndecrease of BLEU-1 scores. This problem was\nalleviated in the richer-resource scenarios, which\nalso showed that the robustness of the models im-\nproved.\nFor ablation experiments, passing the uniform\ndependency distributions verifies our hypothesis.\nA uniform probability tensor cannot provide valu-\nable information to the Transformer model and\nrisks misleading the model, resulting in the worst\nperformance. Another notable finding is that sim-\nply incorporating labeled dependency distributions\n(replacing the KandQmatrices in the attention\nmatrices) as dependency attention outperformed\nthe baseline model on average. The benefit of this\nstrategy is that by replacing KandQmatrices and\ntheir associated calculation process can drasticallydecrease the number of parameters and computing\nrequirements.\n4.2 Qualitative Analysis\nBLEU-4 Scores Comparison: We also at-\ntempted to visualize the results to understand the\nperformance of the proposed model better. In Fig-\nure 3, although the 1-best parse model performs\nbetter than the baseline model, the model we pro-\npose has higher scores than the baseline model\nand the 1-best parse model in all the median, up-\nper and lower quartile scores. 
From the original\nscatter diagram, we can observe the scatter distri-\nbution of the proposed model at the upper posi-\ntion in general, indicating that, our model can earn\nhigher scores for translated results than the base-\nline model and 1-best parse model.\nFigure 3: Box plot of baseline model, 1-best tree model and\nproposed model results\nImpact of Sentence Length: We investigated\ntranslation performance for different target sen-\ntence lengths, by grouping the target sentences in\nthe IWSLT datasets by sentence length intervals.\nWe choose to group the target sentence lengths\nrather than source sentence lengths because, cf.\nMoore (2002), the source sentence and target sen-\ntence lengths are proportional. Second, since the\ntarget languages are different, and the source lan-\nguage is English, we are particularly concerned\nabout the change in the length of sentences across\ndifferent target languages.\nOverall, our model outperformed the baseline\nsystem and 1-best parse system, as shown in Fig-\nure 4. Among them, the increase in the length\nrange (20,30], (30,40] and (40,50] were more pro-\nnounced over the baseline system and 1-best parse\nsystem. The BLEU-4 scores of both our model\nand 1-best parse model were in danger of slipping\nFigure 4: BLEU-4 comparison in sentences length\nbelow the baseline model in the sentence length\ninterval (0,10]. Corpus analysis shows that this\nlength interval contains many fragments, remain-\ning after slicing long sentences. Because the syn-\ntactic structures of these fragments were incom-\nplete, they may negatively impact on the model’s\ntranslation performance. As sentence length in-\ncreased further, all models saw substantial declines\nin BLEU-4 scores, following similar downward\npatterns. When the sentence length exceeds 50,\nthe BLEU-4 scores of our method remained sig-\nnificantly different from both the baseline model\nand the 1-best parse model. These showed that\nour proposed model has better translation perfor-\nmance in lengthy sentences, but BLEU-4 scores\nwere still relatively low, indicating that the NMT\nmodels have much room for improvement.\nAttention Weights Visualization: The final\nlayer’s attention weights of the 1-best parse model\nand the model we proposed are depicted in Figures\n5 and 6, respectively. Judging from the compar-\nison of the figures, we find that there are certain\nconsistencies; for example, each word has higher\nattention weights to the words around it. However,\nthe distinction is also discernible.\nSpecifically, for the word “A”, the word “A” and\nthe word “man” have a syntactic relation, which\nwas represented in both figures. However, the 1-\nbest parse model also provided “staring” a higher\nFigure 5: An example of 1-best parse model’s attention\nweights\nFigure 6: An example of proposed model’s attention weights\nattention weight, which is contrary to the syntac-\ntic structures, and the model we proposed resolved\nthis problem. For the word “man”, the 1-best parse\nmodel did not pay proper attention to distance but\nwith syntactic relation word “staring”, on the con-\ntrary, in the proposed model, “staring” was paid at-\ntention with a very high value. 
In a nutshell, both\nthe 1-best parse model and the proposed model are\nbetter than the baseline model in terms of attention\nalignment which demonstrates that the syntactic\nknowledge contained in dependency distributions\ncan guide the weight computation of the attention\nmechanism, directing it to pay more attention to\nwords with syntactic relations, thereby improving\nthe alignment quality to a certain extent.\n5 Conclusion\nThis paper presented a novel supervised con-\nditional labeled dependency distributions Trans-\nformer network (LDD-Seq). This method primar-\nily improves the self-attention mechanism in the\nTransformer model by converting the dependency\nforest to conditional probability distributions; each\nself-attention head in the Transformer learns a de-\npendency relation distribution, allowing the Trans-\nformer to learn source language’s dependency con-\nstraints, and generates attention weights that are\nmore in line with the syntactic structures. The\nexperimental outcomes demonstrated that the pro-\nposed method was straightforward, and it could\neffectively leverage the source language depen-\ndency syntactic structures to improve the Trans-\nformer’s translation performance without increas-\ning the complexity of the Transformer network or\ninterfering with the highly parallelized character-\nistic of the Transformer model.\nReferences\nBahdanau, Dzmitry, Kyunghyun Cho, and Yoshua Ben-\ngio. 2014. Neural machine translation by jointly\nlearning to align and translate. arXiv preprint\narXiv:1409.0473 .\nBastings, Jasmijn and Ivan Titov and Wilker Aziz and\nDiego Marcheggiani and Khalil Sima’an. 2017.\nGraph Convolutional Encoders for Syntax-aware\nNeural Machine Translation. Proceedings of the\n2017 Conference on Empirical Methods in Natural\nLanguage Processing . 1957–1967.\nBelinkov, Yonatan and Yonatan Bisk. 2018. Syn-\nthetic and Natural Noise Both Break Neural Machine\nTranslation. International Conference on Learning\nRepresentations .\nBugliarello, Emanuele and Naoaki Okazaki. 2020.\nEnhancing Machine Translation with Dependency-\nAware Self-Attention. Proceedings of the 58th An-\nnual Meeting of the Association for Computational\nLinguistics , Online. 1618–1627.\nChen, Kehai and Rui Wang and Masao Utiyama and\nEiichiro Sumita and Tiejun Zhao. 2018. Syntax-\ndirected attention for neural machine translation.\nProceedings of the AAAI Conference on Artificial In-\ntelligence .\nCho, Kyunghyun and Bart van Merri ´enboer and\nCaglar Gulcehre and Dzmitry Bahdanau and Fethi\nBougares and Holger Schwenk and Yoshua Ben-\ngio. 2014. Learning Phrase Representations us-\ning RNN Encoder–Decoder for Statistical Machine\nTranslation. Proceedings of the 2014 Conference on\nEmpirical Methods in Natural Language Processing\n(EMNLP) . 1724–1734.\nCurrey, Anna and Kenneth Heafield. 2019. Incorpo-\nrating Source Syntax into Transformer-Based Neu-\nral Machine Translation. Proceedings of the FourthConference on Machine Translation (Volume 1: Re-\nsearch Papers . 24–33.\nDeguchi, Hiroyuki and Akihiro Tamura and Takashi\nNinomiya. 2019. Dependency-based self-attention\nfor transformer NMT. Proceedings of the Interna-\ntional Conference on Recent Advances in Natural\nLanguage Processing (RANLP 2019) . 239–246.\nDozat, Timothy and Christopher D Manning. 2016.\nDeep biaffine attention for neural dependency pars-\ning. arXiv preprint arXiv:1611.01734 .\nDuan, Sufeng and Hai Zhao and Junru Zhou and Rui\nWang. 2019. Syntax-aware transformer encoder\nfor neural machine translation. 
2019 International\nConference on Asian Language Processing (IALP) .\nIEEE. 396–401.\nEriguchi, Akiko and Kazuma Hashimoto and Yoshi-\nmasa Tsuruoka. 2016. Tree-to-Sequence Atten-\ntional Neural Machine Translation. Proceedings\nof the 54th Annual Meeting of the Association for\nComputational Linguistics (Volume 1: Long Papers) ,\nBerlin, Germany 823–833.\nIsozaki, Hideki and Tsutomu Hirao and Kevin Duh and\nKatsuhito Sudoh and Hajime Tsukada. 2010. Au-\ntomatic evaluation of translation quality for distant\nlanguage pairs. Proceedings of the 2010 Conference\non Empirical Methods in Natural Language Process-\ning. 944–952.\nKalchbrenner, Nal and Phil Blunsom. 2013. Recurrent\nContinuous Translation Models. Proceedings of the\n2013 Conference on Empirical Methods in Natural\nLanguage Processing . 1700–1709.\nKingma, Diederik P and Jimmy Ba. 2015. Adam: A\nMethod for Stochastic Optimization. ICLR (Poster) .\nMa, Chunpeng and Akihiro Tamura and Masao\nUtiyama and Tiejun Zhao and Eiichiro Sumita.\n2018. Forest-Based Neural Machine Translation.\nProceedings of the 56th Annual Meeting of the As-\nsociation for Computational Linguistics (Volume 1:\nLong Papers) , Melbourne, Australia. 1253–1263.\nMoore, Robert C. 2002. Fast and accurate sentence\nalignment of bilingual corpora. Conference of the\nAssociation for Machine Translation in the Ameri-\ncas. Springer. 135–144.\nOmote, Yutaro and Akihiro Tamura and Takashi Ni-\nnomiya. 2019. Dependency-based relative posi-\ntional encoding for transformer NMT. Proceed-\nings of the International Conference on Recent Ad-\nvances in Natural Language Processing (RANLP\n2019) . 854–861.\nMylonakis, Markos and Khalil Sima’an. 2011. Learn-\ning hierarchical translation structure with linguistic\nannotations. Proceedings of the 49th Annual Meet-\ning of the Association for Computational Linguis-\ntics: Human Language Technologies . 642–652.\nNeubig, Graham and Kevin Duh. 2014. On the ele-\nments of an accurate tree-to-string machine transla-\ntion system. Proceedings of the 52nd Annual Meet-\ning of the Association for Computational Linguistics\n(Volume 2: Short Papers) . 143–149.\nPapineni, Kishore and Salim Roukos and Todd Ward\nand Wei-Jing Zhu. 2002. Bleu: a method for au-\ntomatic evaluation of machine translation. Proceed-\nings of the 40th annual meeting of the Association\nfor Computational Linguistics . 311–318.\nPeng, Ru and Tianyong Hao and Yi Fang. 2021.\nSyntax-aware neural machine translation directed by\nsyntactic dependency degree. Neural Computing\nand Applications . 16609–16625.\nPham, Thuong Hai and Dominik Mach ´aˇcek and Ond ˇrej\nBojar. 2019. Promoting the Knowledge of Source\nSyntax in Transformer NMT Is Not Needed. Com-\nputaci ´on y Sistemas . 923–934.\nShi, Xing and Inkit Padhi and Kevin Knight. 2016.\nDoes String-Based Neural MT Learn Source Syn-\ntax? Proceedings of the 2016 Conference on Em-\npirical Methods in Natural Language Processing ,\nAustin, Texas. 1526–1534.\nSnover, Matthew and Bonnie Dorr and Richard\nSchwartz and Linnea Micciulla and John Makhoul.\n2006. A study of translation edit rate with targeted\nhuman annotation. Proceedings of the 7th Confer-\nence of the Association for Machine Translation in\nthe Americas: Technical Papers . 223–231.\nStanojevi ´c, Milo ˇs and Khalil Sima’an. 2014. Fitting\nSentence Level Translation Evaluation with Many\nDense Features. Proceedings of the 2014 Confer-\nence on Empirical Methods in Natural Language\nProcessing (EMNLP) , Doha, Qatar. 
202–206.\nSutskever, Ilya and Oriol Vinyals and Quoc V Le. 2014.\nSequence to sequence learning with neural networks.\nAdvances in neural information processing systems .\nTu, Zhaopeng and Yang Liu and Young-Sook Hwang\nand Qun Liu and Shouxun Lin. 2010. Dependency\nforest for statistical machine translation. Proceed-\nings of the 23rd International Conference on Com-\nputational Linguistics (Coling 2010) . 1092–1100.\nVaswani, Ashish and Noam Shazeer and Niki Parmar\nand Jakob Uszkoreit and Llion Jones and Aidan\nN Gomez and Łukasz Kaiser and Illia Polosukhin.\n2017. Attention is all you need. Advances in neural\ninformation processing systems . 5998–6008.\nZaremoodi, Poorya and Gholamreza Haffari. 2018.\nIncorporating Syntactic Uncertainty in Neural Ma-\nchine Translation with a Forest-to-Sequence Model.\nProceedings of the 27th International Conference on\nComputational Linguistics . 1421–1429.Zhang, Tianfu and Heyan Huang and Chong Feng and\nLongbing Cao. 2021. Self-supervised bilingual syn-\ntactic alignment for neural machine translation. Pro-\nceedings of the AAAI Conference on Artificial Intel-\nligence . 14454–14462.\nZhang, Meishan and Zhenghua Li and Guohong Fu and\nMin Zhang. 2019. Syntax-Enhanced Neural Ma-\nchine Translation with Syntax-Aware Word Repre-\nsentations. Proceedings of the 2019 Conference of\nthe North American Chapter of the Association for\nComputational Linguistics: Human Language Tech-\nnologies, Volume 1 (Long and Short Papers , Min-\nneapolis, Minnesota. 1151–1161.\nZhang, Yu and Zhenghua Li and Min Zhang. 2020.\nEfficient Second-Order TreeCRF for Neural Depen-\ndency Parsing. Proceedings of the 58th Annual\nMeeting of the Association for Computational Lin-\nguistics , Online. 3295–3305.\nA Appendix: Dependency group labels\nTable A: 16 alternative dependency group labels\nDependency group labels Original dependency labels\nl1 root\nl2 aux, auxpass, cop\nl3 acomp, ccomp, pcomp, xcomp\nl4 dobj, iobj, pobj\nl5 csubj, csubjpass\nl6 nsubj, nsubjpass\nl7 cc\nl8 conj, preconj\nl9 advcl\nl10 amod\nl11 advmod\nl12 npadvmod, tmod\nl13 det, predet\nl14 num, number, quantmod\nl15 appos\nl16 punct", "main_paper_content": null }
{ "decision": "Unknown", "reviews": [] }
0
0
[]
[]
{ "id": "rj3YQHQwAx", "year": null, "venue": "EAMT 2022", "pdf_link": "https://aclanthology.org/2022.eamt-1.16.pdf", "forum_link": "https://openreview.net/forum?id=rj3YQHQwAx", "arxiv_id": null, "doi": null }
{ "title": "Auxiliary Subword Segmentations as Related Languages for Low Resource Multilingual Translation", "authors": [ "Nishant Kambhatla", "Logan Born", "Anoop Sarkar" ], "abstract": null, "keywords": [], "raw_extracted_content": "Auxiliary Subword Segmentations as Related Languages\nfor Low Resource Multilingual Translation\nNishant Kambhatla Logan Born Anoop Sarkar\nSchool of Computing Science\nSimon Fraser University\n8888 University Drive, Burnaby BC, Canada\n{nkambhat, loborn, anoop }@sfu.ca\nAbstract\nWe propose a novel technique of combin-\ning multiple subword tokenizations of a\nsingle source-target language pair for use\nwith multilingual neural translation train-\ning methods. These alternate segmenta-\ntions function like related languages in\nmultilingual translation, improving trans-\nlation accuracy for low-resource languages\nand producing translations that are lex-\nically diverse and morphologically rich.\nWe also introduce a cross-teaching tech-\nnique which yields further improvements\nin translation accuracy and cross-lingual\ntransfer between high- and low-resource\nlanguage pairs. Compared to other strong\nmultilingual baselines, our approach yields\naverage gains of +1.7 BLEU across the\nfour low-resource datasets from the multi-\nlingual TED-talks dataset. Our technique\ndoes not require additional training data\nand is a drop-in improvement for any ex-\nisting neural translation system.\n1 Introduction\nMultilingual neural machine translation (NMT,\nDong et al. 2015; Johnson et al. 2017) models are\ncapable of translating from multiple source and\ntarget languages. Besides allowing efficient pa-\nrameter sharing (Aharoni et al., 2019) these mod-\nels facilitate inherent transfer learning (Zoph et al.,\n2016; Firat et al., 2016) that can especially bene-\nfit low resource languages (Nguyen and Chiang,\n2017; Gu et al., 2018; Neubig and Hu, 2018;\n© 2022 The authors. This article is licensed under a Creative\nCommons 3.0 licence, no derivative works, attribution, CC-\nBY-ND.Tan et al., 2019). A common technique to ad-\ndress lexical sharing and complex morphology in\nmultilingual NMT is to decompose longer words\ninto shorter subword units (Sennrich et al., 2016).\nSince subword units are produced using heuris-\ntic methods, not all subwords are created equally.\nThis can put low- and extremely low-resource lan-\nguages at a disadvantage, even when these lan-\nguages are paired with a suitable high resource lan-\nguage. To diminish the impact of rare subwords\nin NMT, Kambhatla et al. (2022) leverage cipher-\ntexts to augment the training data by constructing\nmultiple-views of the source text. “Soft” decom-\nposition methods based on transfer learning (Wang\net al., 2018) address the problem of sub-optimal\nword segmentation with shared character-level lex-\nical and sentence representations across multiple\nsource languages (Gu et al., 2018). Wang et al.\n(2021) addressed this problem with a multiview-\nsubword regularization technique that also im-\nproves the effectiveness of cross-lingual transfer\nin pretrained multilingual representations by si-\nmultaneously finetuning on different input seg-\nmentations from a heuristic and a probabilistic to-\nkenizer. 
While subword-regularization methods\n(Kudo, 2018; Provilkov et al., 2020) have been\nwidely explored in NMT, this work is the first\nto study them together with multilingual training\nmethods.\nConcretely, we construct pairs of “related lan-\nguages” by segmenting an input corpus twice, each\ntime with a different vocabulary size and algorithm\nfor finding subwords; we use these “languages”\n(really, views of the same language) for multi-\nlingual training of an NMT model. We propose\nMulti-Sub training , a method that combines multi-\nlingual NMT training methods with a diverse set\nof auxiliary subword segmentations which func-\nНа@@ т ура@@ льна , мы працу ем , мы рых@@ т у@@ ем\nнастаўні@@ к аў . Мы вык ла@@ дае м правы ж анчын ,\nправы чалав ека , прынцы@@ пы д э@@ ма@@ кра@@\nты@@ і , прав а@@ пара@@ дак . Мы прав о@@ дзім\nразнаст ай@@ ныя трэні@@ н@@ гі . \n▁На тура льна ▁ , ▁ мы ▁ працу ем ▁ , ▁мы ▁ ры х т у ем\n▁настаў нік аў ▁. ▁Мы ▁ вы к лада е м ▁ пра вы ▁ жанчын\n▁, ▁ пра вы ▁ чалав ека ▁, ▁прынцы пы ▁ дэ ма кра ты і \n▁, ▁ прав а пара дак ▁ . ▁ Мы ▁ прав одзі м ▁ разнаст ай ныя\n▁трэ ні н гі ▁ .But of course , we &apos;re doing all our work , we were giving\nteacher training . We were training women &apos;s rights ,\nhuman rights , de@@ mo@@ cr@@ acy , rule of law . W e\nwere giving all kind@@ s of training . \n▁But ▁ of ▁ course ▁ , ▁ we ▁ & apos ; re ▁ doing ▁ all ▁ our\n▁work ▁ , ▁we ▁ were ▁ giving ▁ teach er ▁ train ing ▁. ▁ We\n▁were ▁ train ing ▁ women ▁ & apos ; s ▁ right s ▁ , ▁ human\n▁right s ▁ , ▁ dem oc r acy ▁ , ▁ r ule ▁ of ▁ law ▁ . ▁ We\n▁were ▁ giving ▁ all ▁ kinds ▁ of ▁ train ing ▁ . SPBPE BPE\nSP[2bpe]\n[2bpe][2sp]\n[2sp]Figure 1: An illustration of the interaction between the primary (BPE) and auxiliary (SP) subwords for the same sample from\nthebe-en dev set where each type of segmentation is treated as a separate language. The model is taught to translate into\na specific segmentation via multilingual training using the target “language” tags [2bpe] and[2sp] . The sentence in bold\ntype font shows both variants of the source sentence translating to the same target sentence. The colored spans show different\nsegmentations of the same word(s) in source/target.\ntion like related languages in a multilingual setting\nsince they have distinct but partially-overlapping\nvocabularies and share the same underlying lexi-\ncal and grammatical features. Our model is able to\ntransfer information between segmentations analo-\ngous to the way information is transferred between\ntypologically similar languages.\nWe also introduce a cross-teaching technique in\nwhich a model is trained to translate source sen-\ntences from one subword tokenization into target\nsentences from a different subword tokenization.\nBy using Multi-Sub training together with cross-\nteaching, we obtain strong results on four low-\nresource languages in the multilingual TED talks\ndataset outperforming strong multilingual base-\nlines, with the most significant improvements in\nthe lowest-resource languages. In addition to im-\nproving the BLEU scores, our technique captures\nword compositionality better leading to improved\nlexical diversity and morphological richness in the\ntarget language. 
Multi-Sub with cross-teaching is\nbetter at clustering different languages in the sen-\ntence embedding space than previous methods in-\ncluding Multi-Sub without cross-teaching.\n2 Auxiliary Segmentation as a Related\nLanguage\nPairing related languages is common in multilin-\ngual NMT1: Nguyen and Chiang (2017) combine\nUzbek/Turkish and Uzbek/Uyghur; Johnson et al.\n(2017) study multilingual translation to and from\nEnglish with pairs such as Spanish/Portuguese or\nJapanese/Korean. Neubig and Hu (2018) pair low\nresource languages like Azerbaijani with a related\n1Here we do not distinguish between languages which are re-\nlated in the linguistic sense (having some genetic affiliation)\nand those which are related in a more pragmatic sense of hav-\ning high lexical overlap.“helper” language like Turkish.\nWe take these techniques as motivation for the\npresent work. Our principal contribution is to re-\nthink what it means to use “related” languages in\na multilingual translation model. Beyond simply\nemploying other languages from the same fam-\nily, or those with high lexical overlap, we show\nthat a model trained on different segmentations of\nthe same language can produce improvements in\ntranslation quality.\nRather than segmenting a corpus with a single\ntokenizer prior to training a translation model, we\nproduce multiple segmentations using different to-\nkenizers. Consider the example sentences in Fig-\nure 1. On both the source and target sides, the same\nsentence is represented using both Byte-pair En-\ncodings (BPEs, Sennrich et al. 2016, with a “ @@”\nseparator) and in parallel as sentencepieces (SP,\nKudo 2018, with a “ ” separator). Each segmenta-\ntion uses a different vocabulary size, which guar-\nantees that their subword sequences are to some\nextent distinct. The two tokenizations still resem-\nble one other in many ways: (i) they have a non-\ntrivial degree of lexical overlap (mostly between\nsubwords which do not fall along word bound-\naries); (ii) they share the same grammatical struc-\nture, as both represent the same underlying lan-\nguage; and (iii) both sequences have the same se-\nmantic interpretation. We thus refer to the two seg-\nmentations as a pair of “related languages”.\nApplying two segmentations to a parallel cor-\npus yields a total of four “languages”: the source\nand target represented as BPE subwords, and the\nsame represented using SP subwords. We obtain\ntwo source “languages” (each containing data from\nboth high and low resource languages) and two tar-\nget “languages”. Using this four way configura-\ntion, we train a model following a common multi-\nlingual training method (Johnson et al., 2017): de-\npending on the segmentation we want to translate\ninto, we prepend a target token [2bpe] or[2sp]\nto the source side. We explore two different multi-\nlingual training configurations:\n[BPE+SP]: In this setting, a source sentence in\na particular segmentation is translated into the tar-\nget with the same segmentation. Specifically, this\nmodel is trained multilingually on the pairs\nBPE [src] →BPE [tgt]\nSP [src] →SP [tgt]\nCross-teaching: In addition to [BPE+SP], in\nthis setting, each source sentence with a particu-\nlar segmentation is translated into the target with\nalternate segmentation. 
This multilingual model is\ntherefore trained on the following pairs:\nBPE [src] →SP [tgt]\nSP [src] →BPE [tgt]\nUsing multilingual training, our model is able to\ntransfer information between BPE and SP segmen-\ntations in much the same way that conventional\nmultilingual models transfer information between\nlanguages with a shared linguistic affiliation. Un-\nlike data augmentation techniques which gener-\nate synthetic training data, Multi-Sub training uses\nonly the content of the original training corpus.\nFurthermore, contrary to other works which em-\nploy multiple segmentations (Wang et al., 2018;\nWu et al., 2020), Multi-Sub training and cross-\nteaching do not affect model architecture and do\nnot require specialised training. Thus Multi-Sub\ntraining can be used as a simple, drop-in improve-\nment to an existing neural translation model.\n3 Experiments\n3.1 Experimental Setup\nData Following prior work on low-resource and\nmultilingual NMT (Neubig and Hu, 2018; Wang\net al., 2018) we use the multilingual Ted talks\ndataset (Qi et al., 2018). We use four low re-\nsource languages (LRL): Azerbaijani (az), Belaru-\nsian (be), Galician (gl) and Slovak (sk), and four\nhigh resource languages (HRL): Turkish (tr), Rus-\nsian (ru), Brazilian-Portuguese (pt), and Czech\n(cs). In all experiments and baselines, each LRL\nis paired with the related HRL and English is the\ntarget language.\nTable 1 shows general statistics for each dataset.\nBased on the size of the training data, we consider\naz, be and gl as extremely low-resource while sk is\na slightly higher-resource dataset.LRL #train #dev #test HRL #train\naz 5.9k 671 903 tr 182k\nbe 4.5k 248 664 ru 208k\ngl 10.0k 682 1007 pt 185k\nsk 61.5k 2271 2445 cs 103k\nTable 1: Statistics from our low resource language (LRL) and\nhigh resource language (HRL) datasets.\nModel Details Our model comprises a single\nbi-directional LSTM as encoder and decoder,\nwith 128-dimensional word embeddings and 512-\ndimensional hidden states. We are careful to\nkeep this configuration consistent with our base-\nline model (Neubig and Hu, 2018) to ensure a fair\ncomparison. We use fairseq2to implement the\nbaseline as well as our proposed models. We set\ndropout probability to 0.3, and use an adam opti-\nmizer with a learning rate of 0.001. In practice,\nwe train a Multi-Sub model until convergence,\nand then use this model to continue training on\ncross-teaching data until convergence. For infer-\nence, we use beam size 5 with length penalty. We\nusesacrebleu3(Post, 2018) to report BLEU\n(Papineni et al., 2002) scores on the detokenized\ntranslations. We perform statistical significance\ntests for our results based on bootstrap resampling\n(Koehn, 2004) using compare-mt toolkit.4\nFor fair comparison with prior work, we use\nBPE (Subword-nmt, Sennrich et al. 2016) as our\nprimary segmentation toolkit and sentencepiece\n(SP, Kudo 2018) as our auxiliary tokenizer. We\nonly use the BPE segmentations to tune our model\nvia validation. In other words, while we train on\nboth BPE and SP, we save model checkpoints that\nare optimized for BPE tokenized inputs.5\nFollowing Neubig and Hu (2018), we separately\nlearn 8k BPE subwords on each of the source and\ntarget languages. When combining an LRL and a\nHRL, we take the union of the vocabulary on the\nsource side and the target side separately. We use\nthe same procedure with the SP tokenizer using a\nsubword vocabulary size of 4k. 
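As an illustration of how the [BPE+SP] and cross-teaching pairs above can be assembled, the sketch below builds the tagged training examples from a raw bitext. The helper names (multisub_pairs, segment_bpe, segment_sp) are placeholders for any pre-trained BPE and sentencepiece segmenters; this is a sketch under those assumptions, not our actual preprocessing scripts.

```python
# Minimal sketch of assembling Multi-Sub (and cross-teaching) training pairs.
from typing import Callable, Iterable, List, Tuple


def multisub_pairs(
    bitext: Iterable[Tuple[str, str]],        # raw (source, target) sentences
    segment_bpe: Callable[[str], str],        # e.g. applies the 8k BPE merges
    segment_sp: Callable[[str], str],         # e.g. applies the 4k SP model
    cross_teaching: bool = False,
) -> List[Tuple[str, str]]:
    pairs: List[Tuple[str, str]] = []
    for src, tgt in bitext:
        views = {
            "[2bpe]": (segment_bpe(src), segment_bpe(tgt)),
            "[2sp]": (segment_sp(src), segment_sp(tgt)),
        }
        # [BPE+SP]: each segmentation is translated into the same segmentation;
        # the tag names the target segmentation, as in multilingual training.
        for tag, (s, t) in views.items():
            pairs.append((f"{tag} {s}", t))
        if cross_teaching:
            # Cross-teaching: each segmentation is translated into the other one.
            pairs.append((f"[2sp] {views['[2bpe]'][0]}", views["[2sp]"][1]))
            pairs.append((f"[2bpe] {views['[2sp]'][0]}", views["[2bpe]"][1]))
    return pairs
```

Each raw sentence pair thus contributes two (or, with cross-teaching, four) training examples while using only the content of the original corpus, without any additional parallel data.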
To train BPE and\nSP together, we take the union of the vocabularies\n2https://github.com/pytorch/fairseq\n3SacreBLEU signature: BLEU+ CASE .MIXED +NUMREFS .1\n+SMOOTH .EXP+TOK.13A+VERSION .1.4.14\n4https://github.com/neulab/compare-mt\n5Our model can handle sentencepiece inputs as well. For a\nmodel that performs equally well on BPE and SP, construct\na validation set with equal number of source sentences with\nboth segmentations and save the checkpoints optimized for\nthe validation metric. We chose BPE segments for validation\nto be comparable with previous work.\nLex Unit Model tr/az ru/be pt/gl cs/sk\nWord Lookup 7.66 13.03 28.65 25.24\nSub-joint Lookup 9.40 11.72 22.67 24.97\nSub-sep UniEnc (Gu et al., 2018) 4.80 8.13 14.58 12.09\nSub-sep Lookup (Neubig and Hu, 2018)610.8 16.2 27.7 28.4\nSub-sep Adaptation (All →Bi) (ibid.) 11.7 18.3 28.8 28.2\nWord SDE (Wang et al., 2018) 11.82 18.71 30.30 28.77\nSub-sep SDE (ibid.) 12.35 16.30 28.94 28.35\nMulti-Sub Lookup [BPE + SP] (Ours) 12.0∗18.5∗∗28.6∗28.8†\n(BPE 8k + SP 4k) Lookup + Cross-teaching (Ours) 12.7∗∗18.8∗∗29.6∗∗28.6†\nTable 2: All models are trained on a LRL and a related HRL with English as the target language with LSTMs. BLEU scores\nare reported on the test set of the LRL. The sub-sep lookup model (Neubig and Hu, 2018) is our primary baseline (shaded in\ngrey). Our best results compared to the baseline are underlined. Bolding indicates best overall results on the datasets. We\nindicate statistical significance w.r.t primary baseline with †(p < 0.05),∗(p < 0.001) and∗∗(p < 0.0001 ).\nof the source and target sides separately, resulting\nin a vocabulary which is union of the BPE and SP\nsubword vocabularies of each side.\n3.2 Main results\nWe compare the results of our Multi-Sub models\nagainst various baselines in Table 2. Sub-sep mod-\nels use a union of subword vocabularies learned\nseparately for each of the source and target lan-\nguages; the union is performed separately for the\nsource and target sides yielding two separate vo-\ncabularies. Sub-joint refers to subword vocabular-\nies learned jointly on the concatenation of all of\nthe source and target languages. Such models con-\nsistently perform worse than their sub-sep counter-\nparts for all datasets, as the HRL tends to occupy a\nlarger share of the vocabulary and leaves the LRL\nwith both a smaller vocabulary as well as smaller\nsubwords. Our reimplementation of the sub-sep\nmodel (Neubig and Hu, 2018) mitigates this by\n(separately) learning the same number of subwords\nfor the HRL and LRL. Using words instead of sub-\nwords performs on par with the sub-sep model for\ngl→enbut worse for other languages.\nWe see that our model, Multi-Sub, handily out-\nperforms all of these baselines. Compared to\nthede-facto sub-sep model (highlighted in grey,\nand used as the baseline in the rest of the pa-\nper), Multi-Sub without cross-teaching gains +1.2\nBLEU points on azandbe, and +0.9 on gl. The\nimprovement on csis not large, but is significant\nat +0.4 BLEU.\n6The numbers are from our reimplementation of Neubig and\nHu (2018). Original BLEU scores on this dataset were az:\n10.9, be: 15.8, gl: 27.3, sk: 25.5 while a reimplementation\nby Wang et al. (2018) yields az: 10.9, be: 16.17, gl: 28.1,\nsk: 28.5. Our implementation matches the performance on all\ntest sets except for gl where we lag by 0.5 points.We also compare our approach against more so-\nphisticated models, such as soft decoupled encod-\ning (SDE, Wang et al. 
2018) which shares lexi-\ncal and latent semantic representations across mul-\ntiple source languages. Our modest Multi-Sub\nmodel with cross-teaching outperforms SDE (with\nwords as lexical units) on three out of four lan-\nguages, with the largest gain being +0.9 BLEU\nonaz→en. Multi-Sub consistently and signif-\nicantly outperforms subword -level SDE on all lan-\nguage pairs with gains ranging from +0.4 BLEU to\n+2.5 BLEU. Note that although Multi-Sub is -0.7\nBLEU behind word-level SDE on gl, it outper-\nforms sub-sep by +2.6 BLEU and subword-level\nSDE by +2.5 BLEU.\nOverall, our models are consistently better than\nthe sub-sep baseline. For most languages, substan-\ntial improvements over the baseline come when the\nMulti-Sub model is combined with cross-teaching.\n3.3 Comparison with Subword\nRegularization\nTable 3 contrasts Multi-Sub against BPE-dropout\n(Provilkov et al., 2020), a subword regularization\ntechnique.7For comparison we report results from\nthe baseline sub-sep model with and without sub-\nword regularization. Our implementation applies\nBPE-dropout to the training data with probability\np= 0.1, and the model and training are otherwise\nidentical to sub-sep.\nAlthough subword regularization improves\nupon the baseline model, the difference is small,\nlikely because of the small amount of data avail-\n7Using only one tokenizer (either BPE or SP) with different\nsubword sizes closely resembles subword regularization. Us-\ning SP and BPE, on the other hand, results in different word-\nboundary markers that makes our technique distinct.\ntr/az ru/be pt/gl cs/sk\nSub-sep 10.8 16.2 27.7 28.4\n+ SR 11.0 16.6 28.4 28.2\nMulti-sub 12.7 18.8 29.6 28.8\nTable 3: Comparing subword regularization (SR) with our\nbest results. We use BPE-dropout (Provilkov et al., 2020) at\np= 0.1.\nable for the LRLs. By contrast our Multi-Sub tech-\nnique yields much larger gains.\nDiscussion BPE-dropout (Provilkov et al., 2020)\nis a subword regularization technique that exposes\nthe model to learn better word compositionalities\nby probabilistically producing multiple segmenta-\ntions for each word. Multi-Sub, on the other hand,\nuses a secondary subword segmentation of lower\nvocabulary size and leverages its compositional-\nities as a related language to learn better repre-\nsentations. In Multi-Sub with cross-teaching, the\nmodel learns to produce four way translations on\nthe same source and target languages: BPE [src]\n→ {BPE [tgt] , SP [tgt] }and SP [src] → {BPE\n[tgt] , SP [tgt] }. Although this method is determin-\nistic, and the model learns from only two unique\nsubword sequences instead of one (e.g. sub-sep),\nthis inter-segmentation interaction through multi-\nlingual training helps the model learn better com-\npositionalities and morphology. See Section 4.2\nfor a discussion on the linguistic complexity of the\noutput translations.\n3.4 Choice of Auxiliary Subwords\nOur primary subword tokenizer is BPE with 8000\nsubwords; we use sentencepiece (SP) as our auxil-\niary subword tokenizer. To choose the right auxil-\niary subword vocabulary size, we experiment with\nthree different sizes (6k, 4k and 2k) on tr/az and\nru/be datasets. To determine the optimal vocab-\nulary size, we focus on two key aspects of the can-\ndidate segmentations: translation quality and aver-\nage sentence length. 
3.4 Choice of Auxiliary Subwords

Our primary subword tokenizer is BPE with 8000 subwords; we use sentencepiece (SP) as our auxiliary subword tokenizer. To choose the right auxiliary subword vocabulary size, we experiment with three different sizes (6k, 4k and 2k) on the tr/az and ru/be datasets. To determine the optimal vocabulary size, we focus on two key aspects of the candidate segmentations: translation quality and average sentence length. Figure 2 presents a summary of our results.

Figure 2: Effect of auxiliary subword vocabulary size on BLEU (a) and on source (b) and target (c) sentence lengths in tr/az and ru/be.

On both datasets, subword vocabularies of sizes 6k and 4k yield slightly lower BLEU scores than the baseline with 8k subwords; the drop is minimal (az: 10.4 vs. 10.1, be: 15.6 vs. 15.5 for 6k and 4k). Performance is substantially worse on the same datasets with 2k subwords (7.2 for az and 14.1 for be), so we reject the 2k setting.

Next, we compare the average sentence lengths in the subword-tokenized training data (both source and target sides) across the different subword vocabulary sizes. At a vocabulary size of 6k, sentence length does not vary substantially from the length found with 8k subwords (Figure 2(b, c)). 4k subwords yield a more substantial increase in sentence length on both the source (tr/az: +9, ru/be: +10) and target sides of both datasets. This is favourable, since it yields as many new subwords as possible in the sentence without increasing its length dramatically. On the basis of these results, we have chosen 4k SP subwords for our auxiliary segmentations.
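The sentence-length criterion can be reproduced with a short script. The sketch below is illustrative only (file and model names are hypothetical): it trains candidate SentencePiece models and reports the mean number of pieces per sentence, which is the quantity compared across the 6k/4k/2k settings.

```python
import sentencepiece as spm


def mean_pieces_per_sentence(corpus_path: str, vocab_size: int, prefix: str) -> float:
    """Train an auxiliary SP model of the given size on the corpus and return the
    average number of pieces per sentence when re-encoding that same corpus."""
    spm.SentencePieceTrainer.train(
        input=corpus_path,
        model_prefix=f"{prefix}_{vocab_size}",
        vocab_size=vocab_size,
    )
    sp = spm.SentencePieceProcessor(model_file=f"{prefix}_{vocab_size}.model")
    pieces, sentences = 0, 0
    with open(corpus_path, encoding="utf-8") as fh:
        for line in fh:
            pieces += len(sp.encode(line.strip(), out_type=str))
            sentences += 1
    return pieces / max(sentences, 1)


if __name__ == "__main__":
    # Hypothetical source-side training file for the tr/az dataset.
    for size in (6000, 4000, 2000):
        print(size, round(mean_pieces_per_sentence("train.tr-az.src", size, "aux_sp"), 2))
```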
4 Analysis

4.1 Correlation to Data Availability

Using a secondary subword model as a related language yields different degrees of improvement for different languages. We investigate whether these variations correlate with the degree to which the LRL is "low-resource".

We report in Table 4 the amount of training data available for each LRL, the word-level vocabulary size of each LRL (vLRL), and the ratio of this size to the vocabulary size of the corresponding HRL (vHRL). The ratio vLRL/vHRL is directly proportional to the number of training samples in the LRLs. This ratio has a generally negative correlation with the BLEU gains of our models: the more training data is available, the smaller the improvements. This strongly suggests that using auxiliary subwords as a foreign language is a technique best suited to low-resource languages.

#train  vLRL  vLRL/vHRL  BLEU ∆
az  5.94k  13.1k  11.29  +1.90
be  4.50k  9.9k  11.43  +2.61
gl  10.03k  10.9k  27.69  +1.90
sk  61.50k  48.5k  80.01  +0.40
Table 4: Comparison of the size of the LRL training data with the BLEU improvements. Column 4 shows the ratio of the word vocabulary of the LRL (vLRL) to that of the HRL (vHRL). The ratios are multiplied by 100 for readability.

4.2 Linguistic Complexity

While estimating linguistic complexity is a multifarious task, lexical and morphological diversity are two of its major components. In this section we perform an exhaustive assessment of our models' translations using lexical diversity metrics (Section 4.2.1) and morphological inflectional diversity metrics (Section 4.2.2).

4.2.1 Lexical Richness

We use several metrics to quantify lexical diversity across translations from different models.[8] The metrics include the type-token ratio (TTR) and its variants, namely Root TTR (RTTR, Guiraud 1960), Log TTR (LTTR) and the moving-average TTR (MATTR, Covington and McFall 2010), as well as the hypergeometric distribution D (HD-D, McCarthy and Jarvis 2007), the measure of textual lexical diversity (MTLD, McCarthy 2005) and Yule's K (Yule, 2014). The scores for these measures are presented in Table 5 for our model outputs and for the reference human translations.

[8] The intent of this section is not to claim that lexical diversity metrics are indicators of proficiency, quality or sophistication; they simply represent qualities which may be desirable for certain applications, cf. Vanmassenhove et al. (2021).

Model  BLEU  TTR  RTTR  LTTR  MTTR↓  HD-D  MTLD  MTLD-A  MTLD-Bi  Yule's K↓
Az→En  Reference  –  0.1845  22.98  0.8248  0.0417  0.8738  106.60  108.47  108.17  80.68
1 Base  10.8  0.0855  10.9615  0.7466  0.0600  0.7750  33.9342  38.3466  38.1259  170.4321
2 BPE 8k + SP 4k  12.0  0.0971  12.2866  0.7591  0.0572  0.7936  40.0937  44.7958  44.8005  152.0778
3 2 + Cross-teach  12.7  0.0993  12.4746  0.7610  0.0569  0.7961  41.3529  45.4622  45.3590  149.4563
Be→En  Reference  –  0.1863  20.83  0.8219  0.0434  0.8687  102.95  104.44  104.3692  85.73
1 Base  16.2  0.1149  13.0503  0.7714  0.0556  0.8045  51.1452  52.4293  52.6571  139.7345
2 BPE 8k + SP 4k  18.5  0.1225  13.7806  0.7777  0.0542  0.8017  51.9363  52.9719  53.0382  147.5613
3 2 + Cross-teach  18.8  0.1249  14.0746  0.7799  0.0536  0.8071  54.8368  55.6391  55.7884  142.6042
Gl→En  Reference  –  0.1484  19.45  0.8043  0.0462  0.8643  91.22  94.81  94.67  87.92
1 Base  27.7  0.1329  17.1629  0.7924  0.0492  0.8312  72.9798  73.9316  73.8523  120.5782
2 BPE 8k + SP 4k  28.6  0.1365  17.6551  0.7952  0.0485  0.8328  76.0790  75.5915  75.5815  119.1850
3 2 + Cross-teach  29.6  0.1366  17.7624  0.7955  0.0484  0.8307  74.6902  73.7315  73.7201  112.5075
Sk→En  Reference  –  0.1253  25.5328  0.8047  0.0423  0.8689  95.38  102.52  102.24  86.20
1 Base  28.4  0.0935  18.9185  0.7769  0.0484  0.8383  72.7529  74.8386  74.9117  112.8484
2 BPE 8k + SP 4k  28.8  0.0954  19.3010  0.7787  0.0480  0.8411  74.5821  76.1596  76.2799  110.8807
3 2 + Cross-teach  28.6  0.0947  19.3118  0.7784  0.0480  0.8379  72.8657  74.7803  74.8770  114.8330
Table 5: Lexical diversity of the reference human translations vs. model outputs in different settings for each LRL.

On average, Multi-Sub training with cross-teaching significantly improves the lexical diversity of the generated translations. Improvements in lexical diversity correlate with BLEU scores in all languages (which need not be the case, cf. Vanmassenhove et al. 2021), implying that our methods produce translations that are not only more accurate, but also richer and more varied in terms of vocabulary. These effects are most pronounced in the lowest-resource languages, az and be, where cross-teaching yields improvements on every metric relative to both the baseline and Multi-Sub training without cross-teaching. In gl, cross-teaching yields improvements on all metrics except MTLD and its variants, which are optimized by Multi-Sub training without cross-teaching. Sk is unique in that the greatest improvements on most metrics come from Multi-Sub training without cross-teaching. This parallels the pattern observed in the BLEU scores (Table 4), and confirms our earlier claim that cross-teaching is most effective in cases of extreme data scarcity, while Multi-Sub training without cross-teaching works better for higher-resource languages.
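For reference, the simpler measures in Table 5 can be computed directly from the system outputs; the sketch below (an illustrative reimplementation, with a hypothetical output file name) covers TTR, RTTR, LTTR and Yule's K. MTLD and HD-D follow McCarthy (2005) and McCarthy and Jarvis (2007) and are omitted for brevity.

```python
import math
from collections import Counter


def lexical_diversity(tokens):
    """TTR, root TTR, log TTR and Yule's K for a list of tokens (lower K = more diverse)."""
    n = len(tokens)
    freqs = Counter(tokens)
    v = len(freqs)
    ttr = v / n
    rttr = v / math.sqrt(n)               # Guiraud's root TTR
    lttr = math.log(v) / math.log(n)      # logarithmic TTR
    # Yule's K = 10^4 * (sum_i i^2 * V_i - N) / N^2, with V_i = number of types of frequency i
    spectrum = Counter(freqs.values())
    yules_k = 1e4 * (sum(i * i * v_i for i, v_i in spectrum.items()) - n) / (n * n)
    return {"TTR": ttr, "RTTR": rttr, "LTTR": lttr, "Yule's K": yules_k}


if __name__ == "__main__":
    # Hypothetical system-output file, one detokenized hypothesis per line.
    with open("hyps.az-en.txt", encoding="utf-8") as fh:
        tokens = [tok.lower() for line in fh for tok in line.split()]
    print(lexical_diversity(tokens))
```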
4.2.2 Morphological Richness

To examine the morphological complexity of the translations produced by our models, we average the inflectional diversity of the lemmas. Following Vanmassenhove et al. (2021), we used the spacy-udpipe lemmatizer to retrieve all lemmas.[9]

[9] https://github.com/TakeLab/spacy-udpipe

Shannon Entropy (H, Shannon 1948) is used to measure the variety of inflected forms associated with a given lemma (higher entropy means more variation). Entropy is averaged across the lemmas in the model outputs.

Simpson's Diversity Index (D, Simpson 1949) measures the probability that two randomly-sampled items have the same label; large values imply homogeneity (most items belong to the same category). We measure morphological diversity by computing the probability that two instances of a given lemma represent the same inflected form.

Model  BLEU  H↑  D↓
Az→En  Reference  –  69.26  54.75
1 Base  10.8  64.12  59.14
2 BPE 8k + SP 4k  12.0  63.67  59.67
3 2 + Cross-teach  12.7  65.62  57.97
Be→En  Reference  –  71.24  53.97
1 Base  16.2  64.12  59.14
2 BPE 8k + SP 4k  18.5  67.32  67.78
3 2 + Cross-teach  18.8  67.78  57.52
Gl→En  Reference  –  68.27  55.88
1 Base  27.7  66.64  56.95
2 BPE 8k + SP 4k  28.6  66.93  56.95
3 2 + Cross-teach  29.6  66.20  56.92
Sk→En  Reference  –  69.03  55.41
1 Base  28.4  62.96  59.18
2 BPE 8k + SP 4k  28.8  63.41  58.91
3 2 + Cross-teach  28.6  62.50  59.37
Table 6: Morphological diversity measures comparing our model outputs against the human references.

The results in Table 6 parallel the lexical diversity evaluation: in the extremely low-resource languages az and be, cross-teaching yields a clear improvement in both the entropy and the diversity index of the output translations. The model thus employs a greater variety of inflectional forms, which provides more choices to the decoder (Vanmassenhove et al., 2021) (cf. Fig. 8). In the slightly higher-resource languages gl and sk, the impact of cross-teaching is less pronounced: in gl, the best diversity index comes from cross-teaching, but Multi-Sub training without cross-teaching yields the best entropy, and Multi-Sub training without cross-teaching also yields the greatest degree of morphological diversity in sk.
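A sketch of the per-lemma computation (our reading of the procedure, not the original scripts): lemmatize the outputs with spacy-udpipe, collect the surface forms observed for each lemma, and average Shannon entropy and Simpson's index over lemmas. The exact scaling used in Table 6 is not specified here, so the sketch reports unscaled averages.

```python
import math
from collections import Counter, defaultdict

import spacy_udpipe


def inflectional_diversity(sentences, lang="en"):
    """Average Shannon entropy (H) and Simpson's index (D) of the surface forms
    observed per lemma; higher H and lower D indicate more varied inflection."""
    spacy_udpipe.download(lang)            # fetches the UDPipe model on first use
    nlp = spacy_udpipe.load(lang)
    forms = defaultdict(Counter)           # lemma -> counts of its inflected forms
    for sentence in sentences:
        for tok in nlp(sentence):
            if tok.is_alpha:
                forms[tok.lemma_.lower()][tok.text.lower()] += 1
    h_values, d_values = [], []
    for counts in forms.values():
        total = sum(counts.values())
        h_values.append(-sum((c / total) * math.log2(c / total) for c in counts.values()))
        # Simpson's D: chance that two draws of the same lemma share an inflected form
        d = sum(c * (c - 1) for c in counts.values()) / (total * (total - 1)) if total > 1 else 1.0
        d_values.append(d)
    return sum(h_values) / len(h_values), sum(d_values) / len(d_values)
```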
Model  gl  sk
Base  0.39  0.11
Multi-Sub/Cross-teaching  0.51*†  0.12†
Table 7: F1 scores on zero-shot NER in sk and gl. † means the best result comes from cross-teaching; * means the best result comes without cross-teaching.

4.3 Improved Cross-lingual Transfer

Downstream Task: NER. Multi-Sub training improves the usefulness of subword embeddings for downstream tasks. We train NER models on pt and cs using the pre-trained embeddings from our translation models; then, following Sharoff (2017), we evaluate each of these models on the corresponding LRL.[10] Since the NER models are never trained on LRL data, this is a zero-shot evaluation in which model performance should reflect the degree of multilinguality in the pre-trained embeddings. Table 7 reports F1 scores for this task.

[10] cs training data taken from Ševčíková et al. (2007), sk test data from Piskorski et al. (2017), and pt/gl training and test data from Garcia and Gamallo (2014).

We observe that Multi-Sub training on its own can yield significant performance improvements (as in gl), but cross-teaching is sometimes required to obtain optimal results (as in sk). Together with the results in Figure 3, this suggests that cross-teaching can play a crucial role in facilitating cross-lingual transfer.

Figure 3: PCA decomposition of Galician sentence representations in the baseline (left), Multi-Sub (center), and cross-teaching (right) settings; panel (a) plots BPE[src]→BPE[tgt] (red) and SP[src]→SP[tgt] (blue), panel (b) plots BPE[src]→SP[tgt] (red) and SP[src]→BPE[tgt] (blue). Multi-Sub training can reduce the separation between tokenizations, while the addition of cross-teaching eliminates the separation entirely.

Visualizations of Sentence Embeddings. We find that cross-teaching significantly reduces the separation between different tokenizations in the sentence representations of certain languages. Figure 3 shows the distribution of sentence representations produced by our two tokenizers. In the baseline, BPE-tokenized sentences are clearly separated from (parallel) SP-tokenized sentences; in the Multi-Sub setting we observe less separation, although distinct clusters of BPE and SP inputs are still clearly visible. By contrast, in the cross-teaching setting, there is significant overlap between the representations of BPE and SP inputs. This suggests that cross-teaching serves to eliminate "monolingual" subspaces (that is, subspaces representing a single tokenization) in favor of representing all input languages in the same joint space. On the basis of this result, we argue that cross-teaching is an effective technique for increasing the degree of multilinguality in a translation model.[11]

[11] In this respect, cross-teaching has a similar effect to BPE-dropout (Provilkov et al., 2020), which serves to eliminate monolingual subspaces at the level of subword embeddings (but recall our comments on the distinction between BPE-dropout and Multi-Sub in Section 3.3).
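A visualization in the spirit of Figure 3 can be produced along the following lines. This is a sketch that assumes row-aligned sentence vectors have already been extracted from the trained encoder (for example by mean-pooling its hidden states over each validation sentence); that extraction step is model-specific and omitted here.

```python
import numpy as np
from sklearn.decomposition import PCA
import matplotlib.pyplot as plt


def plot_tokenization_overlap(bpe_vecs: np.ndarray, sp_vecs: np.ndarray, out_path: str):
    """Project row-aligned BPE- and SP-encoded sentence vectors (n_sentences x dim)
    into 2D with a shared PCA and colour them by tokenization."""
    pca = PCA(n_components=2).fit(np.vstack([bpe_vecs, sp_vecs]))
    bpe_2d, sp_2d = pca.transform(bpe_vecs), pca.transform(sp_vecs)
    plt.figure(figsize=(4, 4))
    plt.scatter(bpe_2d[:, 0], bpe_2d[:, 1], s=4, c="red", label="BPE input")
    plt.scatter(sp_2d[:, 0], sp_2d[:, 1], s=4, c="blue", label="SP input")
    plt.legend()
    plt.tight_layout()
    plt.savefig(out_path, dpi=150)
    plt.close()
```

The degree of overlap between the two point clouds then gives a quick visual check of how far the two tokenizations share a joint representation space.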
5 Qualitative Analysis

We list translations from the baseline sub-sep and SDE models along with our Multi-Sub model in Table 8. While sub-sep produces an entirely unrelated translation of the gl word climatólogo, SDE produces a related word, weather. Multi-Sub, however, produces the accurate translation climatologist.

gl (src): Se queres saber sobre o clima, preguntas a un climatólogo.
en (ref.): If you want to know about climate, you ask a climatologist.
sub-sep: If you want to know about climate, you're asking a college friend.
SDE: If you want to know about climate, they ask for a weather.
multi-sub + cross-teach: If you want to know about the climat, you ask a climatologist.
Table 8: Example translations of the same source sentence from the gl→en test set with different models.

6 Related Work

Several techniques have been proposed to improve lexical representations for multilingual machine translation. Zoph et al. (2016) propose to first train an HRL parent model, then transfer some of the learned parameters to the LRL child model to initialize and constrain training. Similarly, Nguyen and Chiang (2017) pair related languages together and transfer source word embeddings from parent-HRL words to their child-LRL equivalents. Johnson et al. (2017) and Neubig and Hu (2018), on the other hand, learn a joint vocabulary over several languages and train a single NMT model on the concatenated data. Gu et al. (2018) introduce a latent embedding space shared by all languages to enhance parameter sharing in the lexical representation. Wang et al. (2018) and Gao et al. (2020) use a similar idea but rely on character n-gram encodings (SDE) instead of conventional subword/word embeddings. By contrast, Multi-Sub does not involve any architectural changes and improves the representation of low-resource languages by training on multiple segmentations of the same corpus.

Subword-regularization methods (Kudo, 2018; Provilkov et al., 2020) share the motivation of alleviating sub-optimal subwords by exposing a model to multiple segmentations of the same word. However, our method is substantially different in that (i) we use two completely different subword algorithms with different vocabulary sizes (contra Wang et al. 2021), and (ii) we do not rely on expensive sampling procedures (contra Kudo 2018) or additional data to learn an LM. Especially for low-resource languages, our method not only improves translation quality but also enhances a model's cross-lingual transfer capabilities. Finally, this simple architecture-agnostic technique can act as a drop-in improvement for existing methods.

7 Conclusion

This work introduces Multi-Sub training with cross-teaching, a novel technique that combines multiple alternative subword tokenizations of a source-target language pair to improve the representation of low-resource languages. Our proposed methods obtain significant gains on low-resource datasets from multilingual TED talks. We performed an exhaustive analysis to show that our methods also increase the lexical and morphological diversity of the output translations, and produce better multilingual representations, which we demonstrate by performing zero-shot NER with representations from a high-resource language. Multi-Sub training and cross-teaching are simple, architecture-agnostic steps which can easily be applied to existing single or multilingual neural machine translation models and do not require any external data.

Acknowledgements

N.K. would like to thank Kumar Abhishek for the numerous discussions that helped shape this paper. The research was partially supported by the Natural Sciences and Engineering Research Council of Canada grants NSERC RGPIN-2018-06437 and RGPAS-2018-522574 and a Department of National Defence (DND) and NSERC grant DGDND-2018-00025 to the third author.

References

Aharoni, Roee, Melvin Johnson, and Orhan Firat. 2019. Massively multilingual neural machine translation. In Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies.

Covington, Michael A, and Joe D McFall. 2010. Cutting the Gordian knot: The moving-average type-token ratio (MATTR). Journal of Quantitative Linguistics, 17(2):94-100.

Dong, Daxiang, Hua Wu, Wei He, Dianhai Yu, and Haifeng Wang. 2015. Multi-task learning for multiple language translation. In Proceedings of the 53rd Annual Meeting of the Association for Computational Linguistics and the 7th International Joint Conference on Natural Language Processing, Beijing, China.

Firat, Orhan, Kyunghyun Cho, and Yoshua Bengio. 2016. Multi-way, multilingual neural machine translation with a shared attention mechanism. In Proceedings of the 2016 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies.

Gao, Luyu, Xinyi Wang, and Graham Neubig. 2020. Improving target-side lexical transfer in multilingual neural machine translation. In Findings of the Association for Computational Linguistics: EMNLP 2020.

Garcia, Marcos, and Pablo Gamallo. 2014. Multilingual corpora with coreferential annotation of person entities. In Proceedings of the Ninth International Conference on Language Resources and Evaluation (LREC'14), Reykjavik, Iceland. European Language Resources Association (ELRA).

Gu, Jiatao, Hany Hassan, Jacob Devlin, and Victor O.K. Li. 2018. Universal neural machine translation for extremely low resource languages. In Proceedings of the 2018 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies.

Guiraud, P. 1960. Problèmes et Méthodes de la Statistique Linguistique. Presses universitaires de France.

Johnson, Melvin, Mike Schuster, Quoc V Le, Maxim Krikun, Yonghui Wu, Zhifeng Chen, Nikhil Thorat, Fernanda Viégas, Martin Wattenberg, Greg Corrado, et al. 2017. Google's multilingual neural machine translation system: Enabling zero-shot translation. Transactions of the Association for Computational Linguistics, 5:339-351.

Kambhatla, Nishant, Logan Born, and Anoop Sarkar. 2022. CipherDAug: Ciphertext Based Data Augmentation for Neural Machine Translation. In Proceedings of the 60th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers).

Koehn, Philipp. 2004. Statistical significance tests for machine translation evaluation. In Proceedings of the 2004 Conference on Empirical Methods in Natural Language Processing, Barcelona, Spain.

Kudo, Taku. 2018. Subword regularization: Improving neural network translation models with multiple subword candidates.
Multi-Sub training and cross-teaching are\nsimple architecture-agnostic steps which can be\neasily applied to existing single or multilingual\nneural machine translation models and do not re-\nquire any external data.\nAcknowledgements\nN.K would like to thank Kumar Abhishek for\nthe numerous discussions that helped shape this\npaper. The research was partially supported by\nthe Natural Sciences and Engineering Research\nCouncil of Canada grants NSERC RGPIN-2018-\n06437 and RGPAS-2018-522574 and a Depart-\nment of National Defence (DND) and NSERC\ngrant DGDND-2018-00025 to the third author.\nReferences\nAharoni, Roee, Melvin Johnson, and Orhan Firat.\n2019. Massively multilingual neural machine\ntranslation. In Proceedings of the 2019 Con-\nference of the North American Chapter of the\nAssociation for Computational Linguistics: Hu-\nman Language Technologies .\nCovington, Michael A, and Joe D McFall. 2010.\nCutting the gordian knot: The moving-average\ntype–token ratio (mattr). Journal of quantitative\nlinguistics , 17(2):94–100.\nDong, Daxiang, Hua Wu, Wei He, Dianhai Yu, and\nHaifeng Wang. 2015. Multi-task learning for\nmultiple language translation. In Proceedings\nof the 53rd Annual Meeting of the Association\nfor Computational Linguistics and the 7th Inter-\nnational Joint Conference on Natural Language\nProcessing , Beijing, China.\nFirat, Orhan, Kyunghyun Cho, and Yoshua Ben-\ngio. 2016. Multi-way, multilingual neural ma-\nchine translation with a shared attention mecha-\nnism. In Proceedings of the 2016 Conference\nof the North American Chapter of the Asso-\nciation for Computational Linguistics: Human\nLanguage Technologies .\nGao, Luyu, Xinyi Wang, and Graham Neubig.\n2020. Improving target-side lexical transfer\nin multilingual neural machine translation. In\nFindings of the Association for Computational\nLinguistics: EMNLP 2020 .\nGarcia, Marcos, and Pablo Gamallo. 2014. Multi-\nlingual corpora with coreferential annotation of\nperson entities. In Proceedings of the Ninth In-\nternational Conference on Language Resources\nand Evaluation (LREC’14) , Reykjavik, Ice-\nland. European Language Resources Associa-\ntion (ELRA).\nGu, Jiatao, Hany Hassan, Jacob Devlin, and\nVictor OK Li. 2018. Universal neural ma-\nchine translation for extremely low resource lan-\nguages. In Proceedings of the 2018 Confer-\nence of the North American Chapter of the Asso-\nciation for Computational Linguistics: Human\nLanguage Technologies .Guiraud, P. 1960. Probl `emes et M ´ethodes de la\nStatistique Linguistique. Presses universitaires\nde France.\nJohnson, Melvin, Mike Schuster, Quoc V Le,\nMaxim Krikun, Yonghui Wu, Zhifeng Chen,\nNikhil Thorat, Fernanda Vi ´egas, Martin Wat-\ntenberg, Greg Corrado, et al. 2017. Google’s\nmultilingual neural machine translation system:\nEnabling zero-shot translation. Transactions of\nthe Association for Computational Linguistics ,\n5:339–351.\nKambhatla, Nishant, Logan Born, and Anoop\nSarkar. 2022. CipherDAug: Ciphertext Based\nData Augmentation for Neural Machine Trans-\nlation. In Proceedings of the 60th Annual Meet-\ning of the Association for Computational Lin-\nguistics (Volume 1: Long Papers) .\nKoehn, Philipp. 2004. Statistical significance tests\nfor machine translation evaluation. In Pro-\nceedings of the 2004 Conference on Empiri-\ncal Methods in Natural Language Processing ,\nBarcelona, Spain.\nKudo, Taku. 2018. Subword regularization: Im-\nproving neural network translation models with\nmultiple subword candidates. 
In Proceedings of\nthe 56th Annual Meeting of the Association for\nComputational Linguistics .\nMcCarthy, Philip M. 2005. An assessment of the\nrange and usefulness of lexical diversity mea-\nsures and the potential of the measure of textual,\nlexical diversity (MTLD) . Ph.D. thesis, The Uni-\nversity of Memphis.\nMcCarthy, Philip M, and Scott Jarvis. 2007. vocd:\nA theoretical and empirical evaluation. Lan-\nguage Testing , 24(4):459–488.\nNeubig, Graham, and Junjie Hu. 2018. Rapid\nadaptation of neural machine translation to new\nlanguages. In Proceedings of the 2018 Con-\nference on Empirical Methods in Natural Lan-\nguage Processing .\nNguyen, Toan Q, and David Chiang. 2017. Trans-\nfer learning across low-resource, related lan-\nguages for neural machine translation. In Pro-\nceedings of the Eighth International Joint Con-\nference on Natural Language Processing .\nPapineni, Kishore, Salim Roukos, Todd Ward, and\nWei-Jing Zhu. 2002. Bleu: a method for auto-\nmatic evaluation of machine translation. In Pro-\nceedings of the 40th annual meeting of the As-\nsociation for Computational Linguistics , pages\n311–318.\nPiskorski, Jakub, Lidia Pivovarova, Jan ˇSnajder,\nJosef Steinberger, and Roman Yangarber. 2017.\nThe first cross-lingual challenge on recognition,\nnormalization, and matching of named entities\nin slavic languages. In Proceedings of the 6th\nWorkshop on Balto-Slavic Natural Language\nProcessing , Valencia, Spain.\nPost, Matt. 2018. A call for clarity in reporting\nBLEU scores. In Proceedings of the Third Con-\nference on Machine Translation: Research Pa-\npers, Belgium, Brussels. Association for Com-\nputational Linguistics.\nProvilkov, Ivan, Dmitrii Emelianenko, and Elena\nV oita. 2020. Bpe-dropout: Simple and effec-\ntive subword regularization. In Proceedings of\nthe 58th Annual Meeting of the Association for\nComputational Linguistics , pages 1882–1892.\nQi, Ye, Devendra Sachan, Matthieu Felix, Sar-\nguna Padmanabhan, and Graham Neubig. 2018.\nWhen and why are pre-trained word embed-\ndings useful for neural machine translation?\nInProceedings of the 2018 Conference of the\nNorth American Chapter of the Association for\nComputational Linguistics: Human Language\nTechnologies, Volume 2 (Short Papers) , pages\n529–535.\nSennrich, Rico, Barry Haddow, and Alexandra\nBirch. 2016. Neural machine translation of rare\nwords with subword units. In Proceedings of\nthe 54th Annual Meeting of the Association for\nComputational Linguistics , pages 1715–1725.\nSevc ´ıkov´a, Magda, Zdenek Zabokrtsk ´y, and\nOldrich Kruza. 2007. Named entities in czech:\nAnnotating data and developing NE tagger. In\nText, Speech and Dialogue, 10th International\nConference, TSD 2007, Pilsen, Czech Repub-\nlic, September 3-7, 2007, Proceedings , volume\n4629 of Lecture Notes in Computer Science ,\npages 188–195. Springer.\nShannon, Claude E. 1948. A mathematical the-\nory of communication. Bell Syst. Tech. J. ,\n27(3):379–423.\nSharoff, Serge. 2017. Toward pan-Slavic NLP:\nSome experiments with language adaptation.\nInProceedings of the 6th Workshop on Balto-\nSlavic Natural Language Processing , Valencia,\nSpain. Association for Computational Linguis-\ntics.Simpson, Edward H. 1949. Measurement of diver-\nsity. nature , 163(4148):688.\nTan, Xu, Jiale Chen, Di He, Yingce Xia, Tao\nQin, and Tie-Yan Liu. 2019. Multilingual\nneural machine translation with language clus-\ntering. 
In Proceedings of the 2019 Confer-\nence on Empirical Methods in Natural Lan-\nguage Processing and the 9th International\nJoint Conference on Natural Language Process-\ning (EMNLP-IJCNLP) , pages 963–973.\nTweedie, Fiona J, and R Harald Baayen. 1998.\nHow variable may a constant be? measures of\nlexical richness in perspective. Computers and\nthe Humanities , 32(5):323–352.\nVanmassenhove, Eva, Dimitar Shterionov, and\nMatthew Gwilliam. 2021. Machine transla-\ntionese: Effects of algorithmic bias on linguis-\ntic complexity in machine translation. In Pro-\nceedings of the 16th Conference of the Euro-\npean Chapter of the Association for Computa-\ntional Linguistics: Main Volume , pages 2203–\n2213, Online..\nWang, Xinyi, Hieu Pham, Philip Arthur, and Gra-\nham Neubig. 2018. Multilingual neural machine\ntranslation with soft decoupled encoding. In In-\nternational Conference on Learning Represen-\ntations .\nWang, Xinyi, Sebastian Ruder, and Graham Neu-\nbig. 2021. Multi-view subword regularization.\nInProceedings of the 2021 Conference of the\nNorth American Chapter of the Association for\nComputational Linguistics: Human Language\nTechnologies , pages 473–482, Online.\nWu, Lijun, Shufang Xie, Yingce Xia, Yang Fan,\nJian-Huang Lai, Tao Qin, and Tieyan Liu. 2020.\nSequence generation with mixed representa-\ntions. In Proceedings of the 37th International\nConference on Machine Learning , volume 119\nofProceedings of Machine Learning Research ,\npages 10388–10398. PMLR.\nYule, C Udny. 2014. The statistical study of liter-\nary vocabulary . Cambridge University Press.\nZoph, Barret, Deniz Yuret, Jonathan May, and\nKevin Knight. 2016. Transfer learning for low-\nresource neural machine translation. In Pro-\nceedings of the 2016 Conference on Empiri-\ncal Methods in Natural Language Processing ,\npages 1568–1575.", "main_paper_content": null }