As shown in Figure 2, our approach first obtains a word-level edit matrix through three neural layers. Then, based on this word-level edit matrix, it applies a generation algorithm to produce the rewritten utterance. Since the model yields a U-shaped architecture (illustrated later), we name our approach Rewritten U-shaped Network (RUN). To construct the word-level edit matrix, our model passes through three neural layers: a context layer, an encoding layer and a subsequent segmentation layer. The context layer produces a context-aware representation for each word in both c and x; based on these representations, the encoding layer forms a feature map matrix F to capture word-to-word relevance. Finally, a segmentation layer is applied to emit the word-level edit matrix.

Context Layer As shown on the left of Figure 2, the concatenation of c and x first passes through the word embedding φ to obtain a representation for each word in both utterances. The embedding is initialized with GloVe (Pennington et al., 2014) and then updated along with the other parameters. On top of the joint word embedding sequence φ(c_1), ..., φ(c_M), φ(x_1), ..., φ(x_N), a Bidirectional Long Short-Term Memory network (BiLSTM) (Hochreiter and Schmidhuber, 1997; Schuster and Paliwal, 1997) is applied to capture contextual information within and across utterances. Although c and x are jointly encoded by the BiLSTM (see the left of Figure 2), below we distinguish their hidden states for clarity: for a word c_m (m = 1, ..., M) in c, its BiLSTM hidden state is denoted by u_m, while h_n denotes the hidden state of the word x_n (n = 1, ..., N) in the incomplete utterance.

Encoding Layer On top of the context-aware hidden states, we consider several similarity functions to encode the word-to-word relevance. Concretely, for each word x_n in the incomplete utterance and c_m in the context utterances, their relevance is captured by a D-dimensional feature vector F(x_n, c_m). It is produced by concatenating the element-wise similarity (Ele Sim.), cosine similarity (Cos Sim.) and learned bi-linear similarity (Bi-Linear Sim.) between them:

F(x_n, c_m) = [ h_n ⊙ u_m ; cos(h_n, u_m) ; h_n W u_m ],   (1)

where W is a learned parameter. These similarity functions are expected to model word-to-word relevance from different perspectives, which is important for the follow-up edit type classification. However, they concentrate on local rather than global information (see the discussion in Section 5.3). To capture global information, a segmentation layer is proposed.

Segmentation Layer Taking the feature map matrix F ∈ R^{M×N×D} as a D-channel image, the segmentation layer predicts the word-level edit matrix Y ∈ R^{M×N}, analogous to a pixel-level mask. Inspired by UNet (Ronneberger et al., 2015), the layer is formed as a U-shaped structure: two down-sampling blocks and two up-sampling blocks with skip connections. A down-sampling block contains two separate "Conv" modules and a subsequent max pooling, and each down-sampling block doubles the number of channels. Intuitively, the down-sampling blocks expand the receptive field of each cell, hence providing rich global information for the final decision. An up-sampling block contains two separate "Conv" modules and a subsequent deconvolution; each up-sampling block halves the number of channels and concatenates the correspondingly cropped feature map from down-sampling with its output (skip connection in Figure 2).
Finally a feedforward neural network is employed to map each feature vector to one of three edit types, obtaining the word-level edit matrix Y . By incorporating an encoding layer and a segmentation layer, our model is able to capture both local and global information.BERT Enhanced Embedding Since pretrained language models have been proven to be effective on several tasks, we also experiment with employing BERT (Devlin et al., 2019) to augment our model via BERT enhanced embedding.Once a word-level edit matrix is emitted, a subsequent generation algorithm is applied for producing the rewritten utterance. As indicated in Figure 1 , to apply edit operations without ambiguity, we assume each edit region in Y is a rectangle. However, the predicted Y is not guaranteed to meet this requirement, indicating the need for a standardization step. Therefore, the overall procedure of generation is divided into two stages: first the algorithm delimits standard edit regions via searching minimal covering rectangles for each connected region; then it manipulates the incomplete utterance based on these standard edit regions to produce the rewritten utterance. Since the second step has been illustrated in Section 3, in the following we concentrate on the first standardization step.In the standardization step, we employ the twopass algorithm (also known as Hoshen-Kopelman algorithm) to find connected regions (Hoshen and Kopelman, 1976) . In a nutshell, the algorithm makes two passes over the word-level edit matrix. The first pass is to assign temporary cluster labels and record equivalences between clusters in an order of left to right and top to down. Concretely, for each cell, if its neighbors (i.e. left or top cells with the same edit type) have been assigned temporary cluster labels, it is labeled as the smallest neighboring label. Meanwhile, its neighboring clusters are recorded as equivalent. Otherwise, a new temporary cluster label is created for the cell. The second pass is to merge temporary cluster labels which are recorded as equivalent. Finally, cells with the same label form a connected region. For each connected region, we use its minimal covering rectangle to serve as the output of our model.As mentioned in Section 3, the expected supervision for our model is the word-level edit matrix, but existing datasets only contain rewritten utterances. Therefore, we use a procedure to automatically derive (noisy) word-level edit matrices (i.e. distant supervision), and use these examples to train our model. We use the following process to build our training set. First, we find a Longest Common Subsequence (LCS) between x and x * . Then, for each word in x * , if it is not in LCS, it is marked as ADD. Conversely, for each word in x but not in LCS, it is marked as DEL. Contiguous words with the same mark are merged into one span. By a span-level comparison, any ADD span in x * with a DEL counterpart (i.e. under the same context) relates it to Substitute. Otherwise, the span is inserted into x, corresponding to Insert.Taking the example from Table 1 , given x as "为什么总是这样"(Why is always this) and x * as "北京为什么总是阴天"(Why is Beijing always cloudy), their longest common subsequence is "为什么总是"(Why is always). Therefore, with "这样"(this) in x being marked as DEL and "阴 天"(cloudy) in x * being marked as ADD, they cor-respond to the edit type Substitute. In comparison, since "北京"(Beijing) cannot find a counterpart, it is related to the edit type Insert.
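The span-level derivation of the distant labels described above can be sketched with a standard LCS-style alignment. The snippet below is only an illustration, not the authors' code: it uses Python's difflib (as a stand-in for an explicit LCS computation) to align x and x*, maps paired ADD/DEL spans to Substitute and unpaired ADD spans to Insert, and uses placeholder romanised tokens for the Chinese example.

```python
from difflib import SequenceMatcher

def derive_edit_spans(x_tokens, xstar_tokens):
    """Derive (noisy) edit labels by aligning the incomplete utterance x with
    the rewritten utterance x*; spans outside the common subsequence become edits."""
    ops = SequenceMatcher(None, x_tokens, xstar_tokens).get_opcodes()
    edits = []
    for tag, i1, i2, j1, j2 in ops:
        if tag == "replace":      # a DEL span in x paired with an ADD span in x*
            edits.append(("Substitute", x_tokens[i1:i2], xstar_tokens[j1:j2]))
        elif tag == "insert":     # an ADD span in x* with no DEL counterpart
            edits.append(("Insert", (i1, i1), xstar_tokens[j1:j2]))
        # pure deletions (DEL spans with no ADD counterpart) are ignored in this sketch
    return edits

# Example from Table 1, with romanised placeholders for the Chinese tokens:
x  = ["why", "is", "always", "this"]
xs = ["Beijing", "why", "is", "always", "cloudy"]
print(derive_edit_spans(x, xs))
# [('Insert', (0, 0), ['Beijing']), ('Substitute', ['this'], ['cloudy'])]
```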
The principle approaches for constructing wordnets are the merge approach or the expand approach. In the merge approach, the synsets and relations are built independently and then aligned with WordNet. The drawbacks of the merge approach are that it is time-consuming and requires a lot of manual effort to build. On the contrary in the expand model, wordnet can be created automatically by translating synsets using different strategies, whereby the synsets are built in correspondence with the existing wordnet synsets. We followed the expand approach and created a machine translation systems to translate the sentences, which contained the WordNet senses in English to the target languageIn the following section, we takes as a baseline a parallel text, that has been aligned at the sentence level. To obtain the translations, we use Moses SMT toolkit with of baseline setup with 5-gram language model created using the training data by KenLM (Heafield, 2011) . The baseline SMT system was built for three language pairs, English-Tamil, English-Telugu, and English-Kannada. The test set mentioned in Section 3.3 was used to evaluate our system. From Table 1 and Table 2 we can see that size of the parallel corpus has an impact on the BLEU score for test set which is evaluation criteria for the translation model.Since manual translation of wordnets using the extend approach is a very time consuming and expensive process, we apply SMT to automatically translate WordNet entries into the targeted Dravidian languages. While an domain-unadapted SMT system can only return the most frequent translation when given a term by itself, it has been observed that translation quality of single word expressions improves when the word is given in an disambiguated context of a sentence (Arcan et al., 2016a; Arcan et al., 2016b) . Therefore existing translations of WordNet senses in other languages than English were used to select the most relevant sentences for wordnet senses from a large set of generic parallel corpora. The goal is to identify sentences that share the same semantic information in respect to the synset of the Word-Net entry that we want to translate. To ensure a broad lexical and domain coverage of English sentences, existing parallel corpora for various language pairs were merged into one parallel data set, i.e., Europarl (Koehn, 2005) , DGT -translation memories generated by the Directorate-General for Translation (Steinberger et al., 2014) , Mul-tiUN corpus (Eisele and Chen, 2010) , EMEA, KDE4, OpenOffice (Tiedemann, 2009) , OpenSub-titles2012 (Tiedemann, 2012) . Similarly, wordnets in a variety of languages, provided by the Open Multilingual Wordnet web page, 9 were used. As a motivating example, we consider the word vessel, which is a member of three synsets in Princeton WordNet, whereby the most frequent translation, e.g., as given by Google Translate, is Schiff in German and nave in Italian, corresponding to i60833 10 'a craft designed for water transportation'. For the second sense, i65336 'a tube in which a body fluid circulates', we assume that we know the German translation for this sense is Gefäß and we look in our approach for sentences in a parallel corpus, where the words vessel and Gefäß both occur and obtain a context such as 'blood vessel' that allows the SMT system to translate this sense correctly. 
This alone is not sufficient as Gefäß is also a translation of i60834 'an object used as a container', however in Italian these two senses are distinct (vaso and recipiente respectively), thus by using as many languages as possible we maximize our chances of finding a well disambiguated context.Code-switching and code-mixing is a phenomenon found among bilingual communities all 9 http://compling.hss.ntu.edu.sg/omw/ 10 We use the CILI identifiers for synsets (Bond et al., 2016) English-Tamil over the world (Ayeomoni, 2006; Yoder et al., 2017) . Code-mixing is mixing of words, phrases, and sentence from two or more languages with in the same sentence or between sentences. In many bilingual or multilingual communities like India, Hong Kong, Malaysia or Singapore, language interaction often happens in which two or more languages are mixed. Furthermore, it increasingly occurs in monolingual cultures due to globalization. In many contexts and domains, English is mixed with native languages within their utterance than in the past due to Internet boom. Due to the history and popularity of the English language, on the Internet Indian languages are more frequently mixed with English than other native languages (Chanda et al., 2016) .A major part of our corpora comes from movie subtitles and technical documents, which makes it even more prone to code-mixing of English in the Dravidian languages. In our corpus, movie speeches are transcribed to text and they differ from that in other written genres: the vocabulary is informal, non-linguistics sounds like ah, and mixing of scripts in case of English and native languages (Tiedemann, 2008) . Two example of codeswitching are demonstrated in Figure 1 .The parallel corpus is initially segregated into English script and native script. All of the annotations are done using an automatic process. All words from a language other than the native script of our experiment are taken out on both sides of corpus if it occurs in native language side of the parallel corpus. The sentences are removed from both sides if the target language side does not contain native script words in it. Table 3 show the percentage of code-mixed text removed from original corpus. The goal of this approach is to investigate whether code-mixing criteria and corresponding training are directly related to the improvement of the translation quality measured with automatic evaluation and manual evaluation. We assumed that code-mixed text can be found by different scripts and did not evaluate the code-mixing written in the native script or Latin script to write the native language as was done by (Das and Gambäck, 2013)
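The script-based segregation and filtering described above can be approximated with a simple Unicode-range check. The sketch below is our own simplified illustration, assuming Tamil as the native script (Unicode block U+0B80–U+0BFF) and filtering only the target side of each sentence pair: foreign-script tokens are stripped, and pairs whose target side retains no native-script token are removed.

```python
import re

TAMIL = re.compile(r'[\u0B80-\u0BFF]')   # Tamil Unicode block

def filter_code_mixed(pairs):
    """pairs: list of (english_sentence, target_sentence) tuples."""
    kept, removed = [], 0
    for en, ta in pairs:
        # keep only tokens containing at least one native-script character
        native = [t for t in ta.split() if TAMIL.search(t)]
        if not native:           # no native script left on the target side
            removed += 1
            continue
        kept.append((en, " ".join(native)))
    return kept, removed

pairs = [("hello world", "ஹலோ world"), ("ok", "ok thanks")]
clean, n_removed = filter_code_mixed(pairs)
print(clean, n_removed)          # [('hello world', 'ஹலோ')] 1
```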
The analysis consists of three steps:
1. enumerate possible segmentations of an input compound noun by consulting headwords of the thesaurus (BGH);
2. assign thesaurus categories to all words;
3. calculate the preferences of every structure of the compound noun according to the frequencies of category collocations.
We assume that a structure of a compound noun can be expressed by a binary tree. We also assume that the category of the right branch of a (sub)tree represents the category of the (sub)tree itself. This assumption holds because Japanese is a head-final language: a modifier is on the left of its modifiee. With these assumptions, a preference value of a structure is calculated by a recursive function p, where p(t) = 1 if t is a leaf, and otherwise p(t) is computed from the preference values of the left and right subtrees, p(l(t)) and p(r(t)), together with the collocational frequency of their categories.
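Since the full definition of p is not preserved in the text above, the sketch below is only one plausible reading of it, under the stated assumptions: the preference of a leaf is 1, an internal node multiplies the preferences of its subtrees by a collocation score between their categories, and the category of a (sub)tree is taken from its right branch. The tree encoding and the collocation table are our own placeholders.

```python
def category(tree):
    """A tree is either a category string (leaf) or a (left, right) pair."""
    return tree if isinstance(tree, str) else category(tree[1])  # head-final: right branch

def preference(tree, colloc_freq):
    """colloc_freq maps (modifier_category, head_category) -> collocation score."""
    if isinstance(tree, str):
        return 1.0                                   # p(t) = 1 if t is a leaf
    left, right = tree
    return (preference(left, colloc_freq) * preference(right, colloc_freq)
            * colloc_freq.get((category(left), category(right)), 0.0))

# Two bracketings of a three-word compound with categories A, B, C:
freq = {("A", "B"): 0.3, ("B", "C"): 0.5, ("A", "C"): 0.1}
print(preference((("A", "B"), "C"), freq))   # ((A B) C) -> 0.15
print(preference(("A", ("B", "C")), freq))   # (A (B C)) -> 0.05
```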
In this section we explain our method for extracting support verbs for nominalizations. We suppose that we are given a pair of words: ayerb and its nominalized form. As explained in the previous section, we are interested in extracting only nominalized forms which have not become concrete nouns, and that this will be done by comparing syntactic structures attached to the verb and noun forms. In order to extract corpus evidence related to these phenomena, we proceed as follows:1. We generate all the morphologically related forms of the word pair using a lexical transducer for English (Karttunen et al., 1992) . This list of words will be used as corpus filter.2. The lines of the corpus are tokenized (Grefenstette and Tapanainen, 1994) , and only sentences containing one of the word forms in the filter are retained.3. The corpus lines retained are part-of-speech tagged (Cutting et al., 1992) . This allows us to divide the corpus evidence into verb evidence and noun evidence.4. Using a robust surface parser (Grefenstette, 1994) , we derive the local syntactic patterns involving the verbal form and the nominalized form.5. Considering that nominalized forms retain some of the verbal characteristics of the underlying predicate, we want to extract the most common argument/adjunct structures found around verbal uses of the predicate. As an approximation, we extract here all the prepositional phrases found after the verb.6. For nominal forms, we select only those uses which involve argument/adjunct structures similar to phrases extracted in the previous step. For these selected nominalized forms, we extract the verbs of which these forms are the direct Figure 1 : The most common nouns preceding the most common prepositions following 'propose', and appearing in the same environment.object. We sort these verbs by frequency.7. This sorted list is the list of candidate support verbs for the nominalization.This method assumes that the verb and the nominalized form of the verb are given. We have experimented with automatically extracting the nominalized form by using the prepositional patterns extracted for the verb in step 5. We extracted 6 megabytes of newspaper articles containing a form of the verb propose: propose, proposes, proposed, proposing. Since one use of nominalization is to avoid repetition of the verb form, we suppose that the nominalization of propose is likely to appear in the same articles. We extracted the three most common prepositions following a form of propose (step 5). We then extracted the nouns appearing in these same artigles and which preceded these prepositions.~oThe results 4 appear in figure 1. Since a nomilmlized form is normally morphologically related to the verb form, almost any morphological comparison method will pick proposal from this list.4Further experimentation has confirmed these results, but indicate that it may sufficient to simply tag a text, and perform morphological comparison with the most commonly cooccurring nouns in order to extract the nominalized forms of verbs. 4 Experiment withWe have taken for example the case of the verb appeal which was interesting since its corresponding deverbal noun shares the same surface form appeal. 
In order to extract corpus evidence, we used a lexical transducer of English that, given the surface word appeal, produced all the inflected forms: appeal, appeal's, appealing, appealed, appeals and appeals'. Using these surface forms as a filter, we scanned 134 megabytes of tokenized Associated Press newswire stories from the year 1989, corresponding to 20 million words of text. As a result of filtering, 6704 sentences (1 Mbyte of text) were extracted. This text was part-of-speech tagged using the Xerox HMM tagger (Cutting et al., 1992). The lexical entries corresponding to appeal were tagged as follows: as a noun (3910 times), as an active or infinitival verb (1417), as a progressive verb (292), and as a past participle (400).

This tagged text was then parsed by a low-level dependency parser (Grefenstette, 1994) [Chap 3]. From the output of the dependency parser we extracted all the lexically normalized verbs of which appeal was tagged as a direct object. The most common of these verbs are shown in Figure 2. Our speaker's intuition tells us that the support verb for the nominalized use of appeal is make. But this data alone does not give us enough information to make this judgement, since concrete uses of appeal as a separate entity are not distinguishable from nominalizations of the verb. In order to separate nominalized uses of the predicate appeal from concrete uses, we refer to the linguistic discussion presented in the introduction, which says that nominalizations retain some of the argument/adjunct structure of the verbal predicate. This is verified in the corpus, since we find many parallel structures involving appeal both as a verb and as a noun, such as:

Vice President Salvador Laurel said today that an ailing Ferdinand Marcos may not survive the year and appealed to President Corazon Aquino to allow her ousted predecessor to die in his homeland.

Mrs. Marcos made a public appeal to President Corazon Aquino to allow Marcos to return to his homeland to die.

Indeed, if we examine a common nominalization transformation, i.e. that of transforming the direct object of a verb into a Norman genitive of the nominalized form, we find a great overlap in the lexical arguments (we decided not to use this type of data in our experiments, because matching lexical arguments requires much larger corpora than the ones we had extracted for the other verbs tested).

The parser's output allowed us to extract patterns involving prepositional phrases following noun phrases headed by appeal, as well as those following verb sequences headed by appeal. The most common prepositional phrases found after appeal as a verb began with the prepositions to (466 times), for (145), in (18), on (12), with (5), etc. We ignored prepositional phrases headed by by as probable passivizations, since our parser does not recognize passive patterns involving by. The prepositional phrases following appeal as a noun are headed by to (321 times), for (253), in (200), of (134), from (78), on (34), etc. The correspondence between the most frequent prepositions allowed us to consider that the patterns of a noun phrase headed by appeal followed by one of these prepositional phrases (i.e., beginning with to, for, or in) constituted true nominalizations. Here we used only part of the corpus evidence that was available: other patterns of nominalizations of appeal, e.g. Saxon genitives like the criminal's appeal, may well exist in the corpus.
There were 774 instances of these patterns. The parser's output further allowed us to extract the verbs of which these nominalizations were considered the direct objects. 318 of these nominal syntactic patterns including to, for and in were found. Among these patterns, the main verbs supporting the objective nominalizations are shown in Figure 3 (most common verbs supporting the structure NP PP where 'appeal' heads the NP and where one of {to, for, in} begins the PP): make (63), have (16), and issue (15). These results suggest that the support verb for the nominalization of appeal is make.
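The pipeline described in this section (filter, tag, parse, count supporting verbs) can be approximated with an off-the-shelf dependency parser. The sketch below uses spaCy purely as a stand-in for the tagger and surface parser of the original work; the preposition filter and counting logic follow the description, while the model name and the exact dependency labels are assumptions.

```python
from collections import Counter
import spacy

nlp = spacy.load("en_core_web_sm")   # stand-in for the POS tagger + surface parser

def support_verb_candidates(sentences, nominalization="appeal",
                            preps=("to", "for", "in")):
    """Count verbs taking the nominalization as direct object, keeping only
    noun uses that are followed by one of the selected prepositional patterns."""
    counts = Counter()
    for doc in nlp.pipe(sentences):
        for tok in doc:
            if tok.lemma_ != nominalization or tok.pos_ != "NOUN":
                continue
            # require a PP headed by one of the prepositions seen with the verb
            if not any(c.dep_ == "prep" and c.lower_ in preps for c in tok.children):
                continue
            if tok.dep_ == "dobj" and tok.head.pos_ == "VERB":
                counts[tok.head.lemma_] += 1
    return counts.most_common()

print(support_verb_candidates(["Mrs. Marcos made a public appeal to the president."]))
# expected to surface ('make', 1) as the supporting verb
```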
Our methodology is depicted in Figure 1 . In a nutshell, it can be described as follows. For both datasets, we extract four feature sets: LF, SE, BF, and RF. The details of each feature set are described in more detail in these working notes. Next, we train a neural network model for each feature set. We use these neural networks to build a new model based on ensemble learning. This new model combines the predictions of each model. Besides, we also evaluate a knowledge integration strategy. With the knowledge integration strategy, a new neural network is trained with all the feature sets at once. For this, we connect each feature set to a input layer and combine their weights in a new hidden layer. Finally, we select the best strategy and obtain the predictions of the official test split. Next, the feature sets are explained in detail. The first feature set (LF) is a subset of languageindependent linguistic features from the UMU-TextStats tool 1 (García-Díaz et al., 2021b; García-Díaz and Valencia-García, 2022). These features include stylometric features (for instance, word and sentence average and Type-Token Ratio), emojis, and Part-of-Speech features. The second feature set (SE) are non-contextual sentence embeddings from FastText . It is worth noting that FastText has a model for Tamil . FastText provides a tool to extract sentence embeddings. These embeddings are made up of the average of all the words in each document. The embeddings obtained from FastText are non contextual (they ignore word order). The third and forth feature sets are sentence embeddings from BERT (BF) (Devlin et al., 2018) and RoBERTa (RF) (Liu et al., 2019) . In case of Tamil, we use multilingual BERT (Devlin et al., 2018) and XLM RoBERTa (Conneau et al., 2019) .To extract the sentence embeddings from BERT and RoBERTa we conduct a hyperparameter se-lection stage that consisted in the evaluation of 10 models with Tree of Parzen Estimators (TPE) (Bergstra et al., 2013) . We evaluate a weight decay between 0 and .3, 2 batch sizes (8 and 16 2 ), four warm-up speeds (between 0 and 1000 with steps of 250), from 1 to 5 epochs, and a learning rate between 1e-5 and 5e-5. Once we obtained the best configuration for BERT and for RoBERTa, we extract their sentence embeddings extracting the [CLS] token (Reimers and Gurevych, 2019) .The next step in our pipeline is the training of the neural network models. For this, we conduct several hyperparameter optimisation stages with Tensorflow and RayTune (Liaw et al., 2018) . This stage is used for each feature set (LF, SE, BF, RF) and for the knowledge integration strategy (LF + SE + BF + RF). Each hyperparameter optimisation stage evaluated 20 shallow neural networks and 5 deep neural networks. The shallow neural networks contains one or two hidden layers max with the same number of neurons per layer. For these, we evaluate linear, ReLU, sigmoid, and tanh as activation functions. The deep-learning networks can be from 3 to 8 layers. Besides, each hidden layer can have different number of neurons. These hidden layers and their neurons are arranged in shapes, namely brick, triangle, diamond, rhombus, and funnel. For the deep neural networks we evaluated sigmoid, tanh, SELU and ELU as activation functions. In these experiments, we test two learning rates: 10e-03 and 10e-04. We also evaluate large batch sizes (128, 256, 512) due to class imbalance. Our objective is that every batch has sufficient number of instances of all classes. 
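A minimal sketch of the knowledge-integration network described above, assuming Keras/TensorFlow: each feature set (LF, SE, BF, RF) gets its own input layer, the four branches are concatenated into a shared hidden layer, and a softmax layer produces the class probabilities. The layer sizes, dropout ratio, learning rate, and number of classes below are placeholders, not the values actually selected with RayTune.

```python
import tensorflow as tf
from tensorflow.keras import layers, Model

def build_knowledge_integration(dim_lf, dim_se, dim_bf, dim_rf, n_classes,
                                hidden=128, dropout=0.2):
    # one input branch per feature set (LF, SE, BF, RF)
    inputs = [layers.Input(shape=(d,), name=name)
              for name, d in [("LF", dim_lf), ("SE", dim_se),
                              ("BF", dim_bf), ("RF", dim_rf)]]
    branches = [layers.Dense(hidden, activation="relu")(x) for x in inputs]
    merged = layers.Concatenate()(branches)          # knowledge-integration layer
    merged = layers.Dropout(dropout)(merged)
    merged = layers.Dense(hidden, activation="relu")(merged)
    out = layers.Dense(n_classes, activation="softmax")(merged)
    model = Model(inputs=inputs, outputs=out)
    model.compile(optimizer=tf.keras.optimizers.Adam(1e-3),
                  loss="sparse_categorical_crossentropy", metrics=["accuracy"])
    return model

# placeholder dimensions: 300-d LF/SE features, 768-d BERT/RoBERTa embeddings
model = build_knowledge_integration(300, 300, 768, 768, n_classes=9)
model.summary()
```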
Besides, we also include a regularisation mechanism based on dropout, testing ratios between .1 and .3. Due to page length restrictions, we only report the results achieved with the knowledge integration strategy, as it is the neural network that we use for our official participation. The results achieved on the validation split are shown in Table 2. We report a macro F1-score of 49.834% for Code-mixed and 46.167% for Tamil-script. Concerning the individual labels, the best results are obtained with the none-of-the-above label (the majority class). Documents labelled as transphobic in Tamil-script (66.667%) achieved promising results, whereas their counterparts in Code-mixed achieved limited results (24.561%). This behaviour is explained by the limited number of examples of this label in Code-mixed. In fact, the results are usually better for Tamil, except for documents labelled as xenophobia, for which our model achieved very good precision in Code-mixed (80.357%) but limited precision in Tamil (48.936%).

We also include the confusion matrices for Code-mixed (top) and Tamil-script (bottom) in Figure 2. From the confusion matrices, we can observe which wrong classifications are made by each model. As expected, none-of-the-above (that is, the neutral label) is the label with the largest number of wrong classifications. In the case of Tamil-script, documents labelled as hope-speech are commonly misclassified.

Table 2: Precision, recall, and f1-score for Code-mixed (left) and Tamil-script (right). These results are obtained with the knowledge integration strategy that combines LF, SE, BF, and RF.
Our multi-task model consists of three main components: a BERT encoder, a multi-task attention interaction module, and two task classifiers.

Fine-tuning the Bidirectional Encoder Representations from Transformers (BERT) model on downstream tasks has shown a new wave of state-of-the-art performances in many NLP applications (Devlin et al., 2019). BERT's architecture consists of multiple transformer encoders for learning a contextualized word embedding of a given input text. It is trained on large textual corpora using two self-supervised objectives, namely the Masked Language Model (MLM) and Next Sentence Prediction (NSP). The encoder of our MTL model is the pre-trained MARBERT (Abdul-Mageed et al., 2020). MARBERT is fed with the sequence of wordpieces [t_1, t_2, ..., t_n] of the input tweet, where n is the sequence length. It outputs the tweet embedding h_[CLS] (the [CLS] token embedding) and the contextualized word embeddings of the input tokens H = [h_1, h_2, ..., h_n] ∈ R^{n×d}. Both h_[CLS] and h_i have the same hidden dimension d.

The multi-task attention interaction module consists of two task-specific attention layers (task-specific context-rich representations) and a sigmoid task-interaction layer. The task-specific sentence representation v* ∈ R^{1×d} (e.g. v_sarc and v_sent) is obtained using an attention mechanism over the contextualized word embedding matrix H:

C = tanh(H W_a),  α = softmax(C^T W_α),  v* = α H,

where W_a ∈ R^{d×1} and W_α ∈ R^{n×n} are the learnable parameters of the attention mechanism, C ∈ R^{n×1}, and α ∈ [0, 1]^n weights the words' hidden representations according to their relevance to the task. The task-interaction mechanism (Lan et al., 2017) is performed using a learnable shared matrix W_i ∈ R^{d×d} and a bias vector b_i ∈ R^d. The interactions of the two tasks are given by:

EQUATION

where v_sarc and v_sent are the outputs of the sarcasm task-specific attention layer and the sentiment task-specific attention layer, respectively, and ⊙ is the element-wise product.

We employ two task classifiers, F_sarc and F_sent, for sarcasm detection and SA, respectively. Each classifier consists of one hidden layer and one output layer. They are fed with the concatenation of the pooled output embedding and the task output of the multi-task attention interaction module v* (e.g. v_sarc and v_sent). The outputs of the task classifiers are given by:

EQUATION

EQUATION

3.4 Multi-task learning objective We train our MTL model to jointly minimize the binary cross-entropy loss L_BCE for sarcasm detection and the cross-entropy loss L_CE for SA. The total loss is given by:

L = L_BCE(y_sarc, ŷ_sarc) + L_CE(y_sent, ŷ_sent),   (5)

where ŷ_* is the predicted output and y_* is the ground-truth label.
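Because the interaction equation itself is elided above, the module below is only one plausible instantiation: a task-specific attention layer following C = tanh(H W_a), α = softmax(C^T W_α), v* = α H, and a gated element-wise interaction built from the shared matrix W_i, the bias b_i and a sigmoid. The gating form is our assumption, consistent with the description but not necessarily the paper's exact formula.

```python
import torch
import torch.nn as nn

class TaskAttention(nn.Module):
    def __init__(self, d, n):
        super().__init__()
        self.w_a = nn.Parameter(torch.randn(d, 1) * 0.02)       # W_a in R^{d x 1}
        self.w_alpha = nn.Parameter(torch.randn(n, n) * 0.02)   # W_alpha in R^{n x n}

    def forward(self, H):                 # H: (n, d) contextualized token embeddings
        C = torch.tanh(H @ self.w_a)                        # (n, 1)
        alpha = torch.softmax(C.T @ self.w_alpha, dim=-1)   # (1, n) attention weights
        return alpha @ H                                    # v*: (1, d)

class TaskInteraction(nn.Module):
    """Assumed gated interaction built from the shared W_i and bias b_i."""
    def __init__(self, d):
        super().__init__()
        self.w_i = nn.Parameter(torch.randn(d, d) * 0.02)
        self.b_i = nn.Parameter(torch.zeros(d))

    def forward(self, v_sarc, v_sent):    # both (1, d)
        gate_sarc = torch.sigmoid(v_sent @ self.w_i + self.b_i)
        gate_sent = torch.sigmoid(v_sarc @ self.w_i + self.b_i)
        return v_sarc * gate_sarc, v_sent * gate_sent        # element-wise product
```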
Suppose we have two non-parallel corpora X and Y with styles S_x and S_y. The goal is to train two transferrers, each of which can (i) transfer a sentence from one style (either S_x or S_y) to the other (i.e., transfer intensity); and (ii) preserve the style-independent content during the transformation (i.e., preservation). Specifically, we denote the two transferrers f and g. f : X → Y transfers a sentence x ∈ X with style S_x to y* with style S_y. Likewise, g : Y → X transfers a sentence y ∈ Y with style S_y to x* with style S_x. To obtain good style transfer performance, f and g need to achieve both a high transfer intensity and a high preservation, which can be formulated as follows: for all x, x′ ∈ X and all y, y′ ∈ Y,

y* = f(x) ∈ Y,  x* = g(y) ∈ X,   (1)

D(y*||x) ≤ D(y′||x),  D(x*||y) ≤ D(x′||y),   (2)

where D(x||y) is a function that measures the abstract distance between sentences in terms of the minimum edit distance, and the editing operations Φ include word-level replacement, insertion, and deletion (i.e., the Hamming distance or the Levenshtein distance). On the one hand, Eq. 1 requires that the transferred text fall within the target style space (i.e., X or Y). On the other hand, Eq. 2 constrains the transferred text from changing too much, i.e., it must preserve the style-independent information.

Inspired by CycleGAN (Zhu et al., 2017), our model (sketched in Figure 1) is trained by a cyclic process: for each transferrer, a text is transferred to the target style, and then back-transferred to the source style using the other transferrer. In order to transfer a sentence to a target style while preserving the style-independent information, we formulate two sets of training objectives: one set ensures that the generated sentence preserves the input as much as possible (detailed in §2.1), and the other is responsible for transferring the input text to the target style (detailed in §2.2).

This section discusses the loss function which enforces our transferrers to preserve the style-independent information of the input. A common solution to this problem is to use the reconstruction loss of autoencoders (Dai et al., 2019), also known as the identity loss (Zhu et al., 2017). However, too much emphasis on preserving the content would hinder the style transferring ability of the transferrers. To balance our model's capability in content preservation and transfer intensity, we instead first train our transferrers as denoising autoencoders (DAE; Vincent et al., 2008), which has been shown to help preserve the style-independent content of the input text (Shen et al., 2020). More specifically, we train f (or g; we use f as an example in the rest of this section) by feeding it a noisy sentence ẙ as input, where ẙ is noisified from y ∈ Y, and f is expected to reconstruct y.

Different from previous works which use DAE in style transfer or MT (Artetxe et al., 2018; Lample et al., 2019), we propose a novel sentence noisification approach, named neighbourhood sampling, which introduces noise to each sentence dynamically. For a sentence y, we define U_α(y, γ) as a neighbourhood of y, which is a set of sentences consisting of y and all variations of noisified y with the same noise intensity γ (explained later). The size of the neighbourhood U_α(y, γ) is determined by the proportion (denoted by m) of tokens in y that are modified using the editing operations in Φ.
Here the proportion m is sampled from a Folded Normal Distribution F. We hereby define that the average value of m (i.e., the mean of F) is the noise intensity γ. Formally, m is defined as:EQUATIONThat said, a neighbourhood U α (y, γ) would be constructed using y and all sentences that are created by modifying (m × length(y)) words in y, from which we sampleẙ, i.e., a noisified sentence of y: y ∼ U α (y, γ). Analogously, we could also construct a neighbourhood U β (x, γ) for x ∈ X and samplex from it. Using these noisified data as inputs, we then train our transferrers f and g in the way of DAE by optimising the following recon-struction objectives:EQUATIONWith Eq. 4, we essentially encourages the generator to preserve the input as much as possible.Making use of non-parallel datasets, we train f and g in an iterative process. Let M = {g(y)|y ∈ Y } be the range of g when the input is all sentences in the training set Y . Similarly, we can define N = {f (x)|x ∈ X}. During the training cycle of f , g will be kept unchanged. We first feed each sentence y (y ∈ Y ) to g, which tries to transfer y to the target style X (i.e. ideally x * = g(y) ∈ X ). In this way, we obtain M which is composed of all x * for each y ∈ Y . Next, we samplex * (a noised sentence of x * ) based on x * via the neighbourhood sampling, i.e.,x * ∼ U α (x * , γ) = U α (g(y), γ). We useM to represent the collection ofx * . Similarly, we obtains N andN using the aforementioned procedures during the training cycle for g. Instead of directly using the sentences from X for training, we useM to train f by forcing f to transfer eachx * back to the corresponding original y. In parallel,N is utilised to train g. We represent the aforementioned operation as the transfer objective.EQUATIONThe main difference between Eq. 4 and Eq. 5 is how U α (•, γ) and U β (•, γ) are constructed, i.e., U α (y, γ) and U β (x, γ) in Eq. 4 compared to U α (g(y), γ) and U β (f (x), γ) in Eq. 5. Finally, the overall loss of DGST is the sum of the four partial losses:EQUATIONDuring optimisation, we freeze g when optimising f , and vice versa. Also with the reconstruction objective, x * must to be sampled first, and then passedx * into f ; in contrast, it is not necessary to sample according to y when we obtain x * = g(y). (Yelp), which consists of restaurants and business reviews together with their sentiment polarity (i.e., positive or negative), and the IMDb Movie Review Dataset (IMDb), which consists of online movie reviews. For Yelp, we split the dataset following , who also provided human produced reference sentences for evaluation. For IMDb, we follow the pre-processing and data splitting protocol of Dai et al. (2019) . Detailed dataset statistics is given in Table 1 . Evaluation Protocol. Following the standard evaluation practice, we evaluate the performance of our model on the textual style transfer task from two aspects: (1) Transfer Intensity: a style classifier is employed for quantifying the intensity of the transferred text. In our work, we use Fast-Text (Joulin et al., 2017 ) trained on the training set of Yelp;
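Returning to the neighbourhood sampling described earlier in this section, here is a sketch under our own simplifying assumptions: the proportion m is drawn from a folded normal whose location parameter is set to the noise intensity γ (so its mean is only approximately γ), and round(m·|y|) randomly chosen positions are edited by replacement, insertion, or deletion using a placeholder vocabulary.

```python
import random
import numpy as np

def sample_from_neighbourhood(tokens, gamma, vocab, sigma=0.1, rng=None):
    """Return a noisified copy of `tokens` with roughly gamma * len(tokens)
    positions edited (replace / insert / delete), i.e. a sample from U(y, gamma)."""
    rng = rng or random.Random()
    m = abs(np.random.normal(loc=gamma, scale=sigma))   # folded normal, mean ~ gamma
    n_edits = min(len(tokens), round(m * len(tokens)))
    noisy = list(tokens)
    for _ in range(n_edits):
        op = rng.choice(["replace", "insert", "delete"])
        pos = rng.randrange(len(noisy)) if noisy else 0
        if op == "replace" and noisy:
            noisy[pos] = rng.choice(vocab)
        elif op == "insert":
            noisy.insert(pos, rng.choice(vocab))
        elif op == "delete" and noisy:
            del noisy[pos]
    return noisy

print(sample_from_neighbourhood("the food was great".split(), gamma=0.25,
                                vocab=["tasty", "really", "awful"]))
```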
For each of the seven identified skills, we defined an ablation method, as shown in Table 1. The design of these methods is based on the fact that explicit discourse relations are expressed using explicit discourse connectives (Webber et al., 2019). The scope of the proposed methodology hence captures only relations represented by explicit connectives, rather than all discourse-relation-related features of the datasets. We assume that by shuffling the order of the sentences with connectives in the context, as well as by dropping these connectives, the corresponding relations will be broken. After applying an ablation method to the development set of an MRC dataset, if the performance of the model does not change significantly, we can say that most of the questions in the dataset are solvable even without the given skill; hence, the dataset does not sufficiently evaluate models with respect to that skill. On the contrary, if the performance gap between the original and the modified dataset is large, we may infer that a substantial proportion of the questions require that skill. Nonetheless, should the model perform badly on the ablated dataset, we cannot take this as evidence that the model in fact acquired the investigated reasoning capability, as the bad performance can stem from many different factors (e.g., a distribution shift induced by dropping numerous words).
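A minimal sketch of the two ablations described above, assuming contexts are given as lists of sentences and using a small hand-listed set of explicit connectives purely for illustration: one function drops the connectives, the other shuffles only those sentences that contain one.

```python
import random
import re

CONNECTIVES = {"because", "however", "therefore", "although", "meanwhile"}  # illustrative subset
CONN_RE = re.compile(r"\b(" + "|".join(sorted(CONNECTIVES)) + r")\b,?", re.IGNORECASE)

def drop_connectives(sentences):
    """Remove explicit discourse connectives from every context sentence."""
    return [re.sub(r"\s{2,}", " ", CONN_RE.sub("", s)).strip() for s in sentences]

def shuffle_connective_sentences(sentences, seed=0):
    """Shuffle the positions of sentences containing a connective,
    leaving the remaining sentences where they are."""
    idx = [i for i, s in enumerate(sentences) if CONN_RE.search(s)]
    shuffled = idx[:]
    random.Random(seed).shuffle(shuffled)
    out = list(sentences)
    for src, dst in zip(idx, shuffled):
        out[dst] = sentences[src]
    return out
```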
Our domain-agnostic approach is based on two aspects: a simple but informative paralinguistic feature set which can be easily extracted for speech signals from different domains and a deep learning approach which can discover temporal regularities in the data.Creating textual transcripts of speech recordings is an expensive and time-consuming task. It requires either thorough manual work or a sophisticated acoustic model trained on large corpora for automatic speech recognition (Xiong et al., 2016) . In contrast to previous work, we rely only on the basic speech signal in order to evaluate whether satisfactory prediction quality can be reached even without transcripts.Since Hosman et al. (2002) find that powerful speeches are more persuasive and Pérez-Rosas et al. (2013b) analyze that the energy level of the voice is predictive for opinion mining, we aim at representing the speech signal by paralinguistic features from the power spectrum. Our auditory system is very sensitive to changes in the frequency of an acoustic wave when the frequency is low, but more robust to changes in higher frequency ranges. The mel-scale is a scale which corresponds to our perception on frequency changes (Stevens et al., 1937) . We use mel-frequency cepstral coefficients (MFCCs) from 13 different frequency ranges as our representation unit because they are a good approximation of the human auditory perception (Davis and Mermelstein, 1980) . The MFCCs are obtained by dividing the speech signal into frames and applying a discrete fourier transform. Based on a filter-bank analysis with mel-scaled frequency bins, the cepstral coefficients can then be determined with a cosine discrete transform. Using only one basic operationalization for speech that can be calculated automatically, it keeps our feature extraction effort small and allows us to apply our approach to different domains. These coefficients are usually interpreted as a good generic indicator for different tasks in speech processing, such as speaker identification (Ren et al., 2016) and claim identification in debates (Lippi and Torroni, 2016) .Deep learning architectures have the power to learn high-level abstractions from raw features and are strongly used in vision, language and speech (Bengio, 2009) . To account for the sequential nature of speech signals, we apply an LSTM architecture which has been developed for processing time series (Hochreiter and Schmidhuber, 1997) . LSTM networks are based on recurrent neural networks and use memory cells to keep track of long-term dependencies by the usage of gate units. The network directly processes the extracted features from each frame and automatically learns high-level abstractions. Using this architecture, we avoid the effort of manually defining task-specific statistics over the frame level features which has usually been necessary for speech labeling tasks.The MFCCs were extracted using the python library python speech features. 8 The window size was 25 ms with a sliding window of 10 ms. The Keras framework 9 was used for implementing the LSTMs. The code from both experiments is available on GitHub. 10 Opinion Mining The audio files from this dataset have a sampling rate of 44,100 Hz. We have implemented a bi-directional LSTM with 128 nodes at each hidden layer. The batch size is 128 and the dataset is divided into 10 folds in order to perform cross-validation. Each utterance is preprocessed, and sequences with a length greater than 236 were truncated. 
Adam is used as optimizer and binary cross-entropy is used as loss function. We use hyperbolic tangent as activation function for all hidden layers and for the merging layer. The last fully connected layer which assigns the binary label to the sequence uses sigmoid as activation function. All hyperparameters were set based on empirical evidence obtained from experiments on a single fold.We extracted the speech signal for each debater with FFmpeg. 11 The audio segments have a sampling rate of 48,000 Hz. In contrast to the input sequences from the MOUD dataset which were split into utterances and lasted only a few seconds, the segments in the Intelligence Squared dataset last a few minutes resulting in up to 25,000 frames. We apply padding to the shorter sequences.We implemented an LSTM network with hidden layers containing 64 nodes in the Keras framework. We use hyperbolic tangent as activation function and a dropout of 0.2 for both the matrix and the recurrent weights. The last layer is a fully connected layer with a single node and a sigmoid activation function which assigns the label to the sequence. The label indicates whether the debater belongs to the winning or the losing team. We use binary cross-entropy as loss function, RMSProp as optimizer, and a batch size of 1. The hyperparameters were set based on empirical evidence from experiments on a single fold. Like Brilman and Scherer (2015) , we perform a leave-one-debate-out cross-validation to avoid a topic-specific bias. The data is split into 30 different folds, each using 29 debates for training and the remaining debate for testing.
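The feature extraction and model setup described above can be sketched as follows, assuming the python_speech_features and Keras APIs named in the text. The 13 MFCCs, 25 ms window with 10 ms step, 236-frame truncation, and 128-unit bidirectional LSTM come from the description; the padding scheme and other details are our own simplifications.

```python
import numpy as np
from scipy.io import wavfile
from python_speech_features import mfcc
import tensorflow as tf

def extract_mfcc(path, max_frames=236, n_mfcc=13):
    rate, signal = wavfile.read(path)
    feats = mfcc(signal, samplerate=rate, winlen=0.025, winstep=0.01, numcep=n_mfcc)
    feats = feats[:max_frames]                              # truncate long utterances
    pad = np.zeros((max_frames - len(feats), n_mfcc))       # zero-pad short ones
    return np.vstack([feats, pad]).astype("float32")

def build_bilstm(max_frames=236, n_mfcc=13):
    model = tf.keras.Sequential([
        tf.keras.layers.Masking(mask_value=0.0, input_shape=(max_frames, n_mfcc)),
        tf.keras.layers.Bidirectional(tf.keras.layers.LSTM(128, activation="tanh")),
        tf.keras.layers.Dense(1, activation="sigmoid"),     # binary label per sequence
    ])
    model.compile(optimizer="adam", loss="binary_crossentropy", metrics=["accuracy"])
    return model
```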
Color data We employ the Color Lexicon of American English, which provides extensive data on color naming. The lexicon consists of 51 monolexemic color name judgements for each of the 330 Munsell Chart color chips 4 (Lindsey and Brown, 2014). The color terms are solicited through a free-naming task, resulting in 122 terms.

Perceptual color space Following previous work (Regier et al., 2007; Zaslavsky et al., 2018; Chaabouni et al., 2021), we map colors to their corresponding points in the 3D CIELAB space, where the first dimension L expresses lightness, the second A expresses the position between red and green, and the third B expresses the position between blue and yellow. Distances between colors in this space correspond to their perceptual difference.

Language models Our analysis is conducted on three widely used language models (LMs): BERT (Devlin et al., 2019) and RoBERTa (Liu et al., 2019), both of which employ a masked language modelling objective, and ELECTRA (Clark et al., 2020), which is trained instead with a discriminative token replacement detection objective. 5

Figure 2: Our experimental setup. In the center is a Munsell color chart. Each chip in the chart is represented in the CIELAB space (right) and has 51 color term annotations. Color term embeddings are extracted through various methods. In the Representation Similarity Analysis experiments, a corresponding color chip centroid is computed in the CIELAB space. In the Linear Mapping experiments, a color term embedding centroid is computed per chip.

Baselines In addition to the aforementioned language models, we consider two different baselines:
• PMI statistics, which are computed 6 for the color terms in Common Crawl, using window sizes of 1 (pmi-1), 2 (pmi-2), and 3 (pmi-3). The result is a vocabulary-length vector quantifying the likelihood of co-occurrence of the color term with every other vocabulary item within that window.
• Word-type FastText embeddings trained on Common Crawl (Bojanowski et al., 2017).

We follow Bommasani et al. (2020) and Vulić et al. (2020) in defining configurations for the extraction of word-type representations from LM hidden states. In the first configuration (NC), a color term is encoded without context, with the appropriate delimiter tokens attached (e.g. [CLS] red [SEP] for BERT). In the second, S sentential contexts that include the color term are encoded and the hidden states representing these contexts are mean-pooled. These S contexts are either randomly sampled from Common Crawl (RC), or deterministically generated to allow for control over contextual variation (CC). If a color term is split by an LM's tokenizer into more than one token, the subword token encodings are averaged over. For each color term and configuration, an embedding vector of hidden-state dimension d_LM is extracted per layer, per model.

To control for the effect of variation in the sentence contexts used to construct color term representations, we employ a templative approach to generate a set of identical contexts for all color terms. When generating controlled contexts, we create three frames in which the terms can appear:
• COPULA: the <obj> is <col>
• POSSESSION: i have a <col> <obj>
• SPATIAL: the <col> <obj> is there
We use these frames in order to limit the contextual variation across colors (<col>) and to isolate their representations amidst as little semantic interference as possible, all while retaining a naturalistic quality to the input.
We also aggregate over numerous object nouns (<obj>), which the color terms are used to describe. We select objects from the McRae et al. (2005) data which are labelled in the latter as plausibly occurring in many colors and which are stratified across 13 category sets, e.g. fan ∈ APPLIANCES, skirt ∈ CLOTHING, etc. Collapsing over categories, we generate sentences combinatorially across frames, objects and color terms, resulting in 3 × 122 × 18 = 6588 sentences, 366 per term.
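The controlled-context (CC) construction and mean-pooled extraction can be sketched as below, assuming the Hugging Face transformers API. The model name, layer index, and object list are placeholders, and for brevity this sketch mean-pools over whole sentences rather than isolating the color-term subword positions as described above.

```python
from itertools import product
import torch
from transformers import AutoModel, AutoTokenizer

FRAMES = ["the {obj} is {col}", "i have a {col} {obj}", "the {col} {obj} is there"]

def controlled_contexts(color, objects):
    return [f.format(col=color, obj=o) for f, o in product(FRAMES, objects)]

tok = AutoTokenizer.from_pretrained("bert-base-uncased")
model = AutoModel.from_pretrained("bert-base-uncased", output_hidden_states=True)

@torch.no_grad()
def color_embedding(color, objects, layer=8):
    """Mean-pool the hidden states of one layer over all controlled contexts."""
    sents = controlled_contexts(color, objects)
    batch = tok(sents, padding=True, return_tensors="pt")
    hidden = model(**batch).hidden_states[layer]        # (n_sents, seq_len, d)
    mask = batch["attention_mask"].unsqueeze(-1)        # ignore padding tokens
    pooled = (hidden * mask).sum(1) / mask.sum(1)       # one vector per context
    return pooled.mean(0)                               # one vector per color term

objects = ["fan", "skirt", "car"]   # placeholder subset of the McRae objects
print(color_embedding("red", objects).shape)            # torch.Size([768])
```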
We present a series of experiments performed with BATS dataset. Although there are more results on analogy task published with Google test than with BATS, Google test only contains 15 types of linguistic relations, and these happen to be the easier ones . relations (98,000 questions in total). BATS covers most relations in the Google set, but it adds many new and more difficult relations, balanced across derivational and inflectional morphology, lexicographic and encyclopedic semantics (10 relations of each type). Thus BATS provides a less flattering, but more accurate estimate of the capacity for analogical reasoning in the current VSMs. We use pre-trained GloVe vectors by Pennington et al. 2014, released by the authors 2 and trained on Gigaword 5 + Wikipedia 2014 (300 dimensions, window size 10). We also experiment with Word2Vec vectors (Mikolov et al., 2013b) released by the authors 3 , trained on a subcorpus of Google news (also with 300 dimensions).The evaluation with 3CosAdd and LRCos methods was conducted with the Python script that accompanies BATS. We also added an implementation of 3CosMul, a multiplicative objective proposed by Levy and Goldberg (2014) , now available in the same script 4 . Since 3CosMul requires normalization, we used normalized GloVe and Word2Vec vectors in all experiments.Questions with words not in the model vocabulary were excluded (0.01% BATS questions for GloVe and 0.016% for Word2Vec).Let us remember that 3CosAdd as initially formulated by Mikolov et al. (2013c) excludes the three source vectors a, a and b from the pool of possible answers. Linzen 2016showed that if that is not done, the accuracy drops dramatically, hitting zero for 9 out of 15 Google test categories.Let us investigate what happens on BATS data, split by 4 relation types. The rows of Fig. 2 represent all questions of a given category, with darker color indicating higher percentage of predicted vectors being the closest to a, a , b, b , or any other vector. shows that if we do not exclude the source vectors, b is the most likely to be predicted; in derivational and encyclopedic categories a is also possible in under 30% of cases. b is as unlikely to be predicted as a, or any other vector.This experiment suggests that the addition of the offset between a and a typically has a very small effect on the b vector -not sufficient to induce a shift to a different vector on its own. This would in effect limit the search space of 3CosAdd to the close neighborhood of the b vector.It explains another phenomenon pointed out by Linzen (2016): for the plural noun category in the The numerical values for all data can be found in the Appendix. Google test set 70% accuracy was achieved by simply taking the closest neighbor of the vector b, while 3CosAdd improved the accuracy by only 10%. That would indeed be expected if most singular (a) and plural (a ) forms of the same noun were so similar, that subtracting them would result in a nearly-null vector which would not change much when added to b.Levy and Goldberg (2014, p.173) suggested that 3CosAdd method is "mathematically equivalent to seeking a word (b ) which is similar to b and a but is different from a." We examined the similarity between all source vector pairs, looking not only at the actual, top-1 accuracy of the 3CosAdd (i.e. the vector the closest to the hypothetical vector), but also at whether the correct answer was found in the top-3 and top-5 neighbors of the predicted vector. 
For each similarity bin we also estimated how many questions of the whole BATS dataset there were. The results are presented in Fig. 3. Our data indicates that, indeed, for all combinations of source vectors, the accuracy of 3CosAdd decreases as their distance in vector space increases. It is the most successful when all three source vectors are relatively close to each other and to the target vector. This is in line with the above evidence from the "honest" 3CosAdd: if the offset is typically small, then for it to lead to the target vector, that target vector should be close.

Consider also the ranks of the b′ vectors in the neighborhood of b, shown in Fig. 3f. For nearly 40% of the successful questions b′ was within 10 neighbors of b, while over 40% of low-accuracy questions were over 90 neighbors away.

As predicted by Levy et al., the b′ and a vectors do not exhibit the same clear trend of higher accuracy with higher similarity that is observed in all other cases (Fig. 3f). However, in experiments with only 20 morphological categories we did observe the same trend for b′ and a as for the other vector pairs (see Fig. 4). This is counter-intuitive and requires further examination.

The observed correlation between the accuracy of 3CosAdd and the distance to the target vector could explain in particular the overall lower performance on BATS derivational morphology questions (only 0.08% top-1 accuracy) as opposed to inflectional morphology (0.59%) or encyclopedic semantics (0.26%). The vectors for man and woman could be expected to be reasonably similar distributionally, as they combine with many of the same verbs: both men and women sit, sleep, drink, etc. However, the same could not be said of words derived with affixes that change the part of speech. Going from happy to happiness, or from govern to government, is likely to take us further in the vector space.

To make sure that the above trend is not specific to GloVe, we repeated these experiments with Word2Vec, which exhibited the same trends. All data is presented in Appendix A.1.

Note that the dependence of 3CosAdd on similarity is not entirely straightforward: Fig. 3b shows that for the highest similarity (0.9 and more) there is actually a drop in accuracy. The same trend was observed with Word2Vec (Fig. 10 in Appendix A.1). Theoretically, it could be attributed to there not being much data in the highest similarity range; but BATS has 98,000 questions, and even 0.1% of that is considerable. The culprit is the "dishonesty" of 3CosAdd: as discussed above, it excludes the source vectors a, a′, and b from the pool of possible answers. Not only does this mask the real extent of the difference between a and a′, but it also creates a fundamental difficulty with categories where the source vectors may themselves be the correct answers. This is what explains the unexpected drops in accuracy at the highest similarity between vectors b and a′: in a question whose expected answer (e.g. white) coincides with one of the source vectors, the correct answer would a priori be excluded. In BATS data, this factor affects several semantic categories, including country:language, thing:color, animal:young, and animal:shelter.

If solving proportional analogies with word vectors is like shooting, the farther away the target vector is, the more difficult it should be to hit. We can also hypothesize that the more crowded a particular region is, the more difficult it should be to hit a particular target. However, the density of vector neighborhoods is not as straightforward to measure as vector similarity.
We could look at average similarity between, e.g., top-10 ranking neighbors, but that could misrepresent the situation if some neighbors were very close and some were very far.In this experiment we estimate density as the similarity to the 5th neighbor. The higher it is, the more highly similar neighbors a word vector has. This approach is shown in Fig. 5 . The results seem counter-intuitive: denser neighborhoods actually yield higher accuracy (although there are virtually no cases of very tight neighborhoods). One explanation could be its reverse correlation with distance: if the neighborhood of b is sparse, the closest word is likely to be relatively far away. But that runs contrary to the above findings that closer source vectors improve the accuracy of 3CosAdd. Then we could expect lower accuracy in sparser neighborhoods.In this respect, too, GloVe and Word2Vec behave similarly (Fig. 15 ).
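A compact sketch of the measures discussed in this section, assuming a row-normalised embedding matrix E and a word-to-index vocabulary (both placeholders): 3CosAdd with a switch between the standard setting, which excludes the source words from the answer pool, and the "honest" one, which keeps them; and a density estimate taken as the similarity to the 5th nearest neighbour.

```python
import numpy as np

def three_cos_add(E, vocab, a, a_prime, b, exclude_sources=True):
    """E: (V, d) row-normalised embeddings; vocab: word -> row index."""
    inv = {i: w for w, i in vocab.items()}
    target = E[vocab[a_prime]] - E[vocab[a]] + E[vocab[b]]
    target /= np.linalg.norm(target)
    scores = E @ target                              # cosine similarities to b' candidates
    if exclude_sources:                              # standard ("dishonest") setting
        scores[[vocab[a], vocab[a_prime], vocab[b]]] = -np.inf
    return inv[int(np.argmax(scores))]

def density(E, vocab, word, k=5):
    """Similarity to the k-th nearest neighbour (higher = denser neighbourhood)."""
    sims = E @ E[vocab[word]]
    sims[vocab[word]] = -np.inf                      # exclude the word itself
    return np.sort(sims)[-k]
```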
We construct our representations from visual objects. We illustrate an overview of our representation construction in Figure 1 . Based on the representations, we introduce hypernymy measures to measure the generality of word meanings. We then explain how the LE task is solved.We follow the procedure described in the work by . We represent a word w as a vector w ∈ R D , where D is a dimensionality of a vector. We construct a vector from a set of images associated with the word. We extract a feature that includes object labels from an image. The vector w is constructed by aggregating a matrix W ∈ R D×L by using an aggregation function g (See Section 4.1.2), in which each column in W corresponds to a feature extracted from an image.We describe how to construct our representation step by step in the following.We first collect images relevant to a word as a (visual) context. We use image search as our image source to collect the L most relevant images V = {v i |i = 1 . . . L} for a word. An image search returns images for a textual query based on the relevancy. and Kastner et al. (2019) have shown that publicly-available image searches such as Google or Bing Image Search can return images so that the images associated with a more general word have a greater visual variability than a more specific word. Figure 2 shows example images retrieved by queries animal, carnivore, and tiger through Google Image Search 2 . We can see that the variability of visual objects actually decreases as we see narrower concepts, as in carnivore or tiger.Next, we extract visual object labels as a discrete feature by using image recognition. We can use any recognizers that generate a list of object labels with confidence scores such as CNNs or image recognition systems provided by vendors (e.g., Google Cloud Vision 3 ). We represent a feature extracted from the i-th image in V as an n-hot vector v i ∈ R D , in which each dimension represents a visual object, and confidence scores obtained by a recognizer are stored in the corresponding dimensions. By concatenating L vectors, we obtain W ∈ R D×L :EQUATIONwhere [; ] denotes a concatenation of vectors. A representation for a word is obtained by a row-wise aggregation function g: w = g(W ).Since current image recognizers can achieve comparable accuracy with humans (He et al., 2016) , we can expect to obtain reasonably accurate labels. The main reason for using object labels is because we consider that object labels are more discriminative than continuous, more abstract features brought from the middle layers of neural networks. For example, when we have two images that show a dog and cat, respectively, the continuous features are likely close to each other, while the discrete features represented by the object labels are treated differently from one another. Still, the similarity of the discrete features between dog and cat could be higher than one between more dissimilar concepts such as dog and table. This can be explained as follows. An image recognizer often generates more general object labels (e.g., carnivore or animal) in addition to specific labels such as dog and cat to the objects shown in the dog and cat images. The recognizer also generates labels for co-occurring objects (e.g., grass or tree) because similar concepts tend to share these labels in their discrete features while dissimilar concepts do not. 
This results in a moderately higher similarity between similar concepts.We use a measure to quantify the extent of the hypernymy of a word w and call it the hypernymy measure. To validate whether the DIH holds on visual objects, we adopt a measure based on the informativeness of the contexts of a word. The measure was originally introduced by Santus et al. (2014) . It has been obtained by the median entropy of the n most associated contexts of the word, and the association strength has been calculated with Local Mutual Information (LMI) (Evert, 2005) . However, because the original measure highly depends on the amount of a textual corpus used 4 , we use a modified version proposed by Shwartz et al. (2017) :EQUATIONWe obtain p(w i ) with w i ||w|| , where w i indicates the i-th element of w and ||w|| is the vector length. We consider only the positive values in w in the computation. We call this measure entropy (ent).From the definition, the entropy increases as the vector w forms closer to a uniform distribution, which means that different labels uniformly appear in an image set V for a word. We can see this tendency in Figure 2 . Consequently, a broader word is likely less informative (i.e., higher entropy).Based on hypernymy measures, we measure the difference in the generality of meaning between two words. Santus et al. (2014) used the ratio of the informativeness of a word x to the other y:EQUATIONin which w x and w y are representations of x and y, respectively. The above function returns a positive value if y is a hypernym of x.In addition to detecting hypernyms, we have to detect pairs in hypernym-hyponym relations from other relations. Similarity functions such as cosine similarity or Jensen-Shannon (JS) divergence have been used to distinguish the pairs from others to date. However, such functions cannot distinguish well hypernym relations from certain relations, such as co-hyponyms 5 . Therefore, we propose a new function to distinguish pairs in hypernym relations from others:EQUATIONwhere sim(x, y) measures the similarity of the meaning between two words. We can use cosine similarity and JS divergence as sim(x, y). The proposed function hrel(x, y) has a larger value if and only if two words are in a hypernym relation (i.e., similar in meaning but dissimilar in the generality of meaning) and conversely, a smaller value if and only if two words are in a reversed hypernym relation, in which x should be a hypernym of y. For this generalized function, we can use any combination of sim(x, y) and diff(x, y) unless the value of sim(x, y) becomes larger when the two words are closer in meaning, and the value of diff(x, y) becomes larger when the two words are different in their generalities of meaning. When we detect word pairs in both hypernym and reversed hypernym relations, we take the absolute value of diff(x, y): sim(x, y)|diff(x, y)|. In our experiment (Section 4.1), we tested as hrel(x, y) cosine similarity (cos), JS divergence (JS), cos • diff, JS • diff, cos|diff|, and JS|diff|.We introduce two thresholds, α rel and α hyp , to detect word pairs in a hypernym relation and hypernyms in the detected word pairs. We regard a word pair (x, y) such that hrel(x, y) ≥ α rel is in a hypernym relation. Likewise, we consider a word y in a word pair (x, y) in hypernym relation such that diff(x, y) ≥ α hyp is a hypernym of x. Otherwise,x is marked as a hypernym of y. We explain how to optimize these thresholds in Section 4.1.2.
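The measures above can be summarized in a short sketch. The entropy follows the stated normalization p(w_i) = w_i/||w|| over the positive elements of w; the form of diff (a ratio-based generality difference that is positive when y is more general than x) and the product sim · diff for hrel are our reading of the text and should be treated as assumptions.

```python
# A rough sketch of the hypernymy measures described above: entropy of the
# normalized positive vector, the generality difference diff, and the combined
# score hrel = sim * diff. The exact form of diff is reconstructed from the
# text and is an assumption, not the paper's verbatim equation.
import numpy as np

def ent(w):
    w = np.clip(np.asarray(w, dtype=float), 0.0, None)   # keep only positive values
    p = w / np.linalg.norm(w)                             # p(w_i) = w_i / ||w||
    p = p[p > 0]
    return float(-(p * np.log(p)).sum())

def diff(wx, wy):
    # larger (positive) when y is more general, i.e. less informative, than x
    return 1.0 - ent(wx) / ent(wy)

def cos(wx, wy):
    return float(np.dot(wx, wy) / (np.linalg.norm(wx) * np.linalg.norm(wy)))

def hrel(wx, wy):
    # large iff the words are similar in meaning but differ in generality
    return cos(wx, wy) * diff(wx, wy)
```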
2
In order to study gender representation within speech resources, let us start by defining what gender is. In this work, we consider gender as a binary category (male and female speakers). Nevertheless, we are aware that gender as an identity also exists outside of these two categories, but we did not find any mention of non-binary speakers within the corpora surveyed in our study. Following work by Doukhan et al. (2018) , we wanted to explore the corpora looking at the number of speakers of each gender category as well as their speech duration, considering both variables as good features to account for gender representation. After the download, we manually extracted information about gender representation in each corpus.The first difficulty we came across was the general absence of information. As gender in technology is a relatively recent research interest, most of the time gender demographics are not made available by the resources creators. So, on top of the further-mentioned general corpus characteristics (see Section 3.3), we also report in our final table where the gender information was found and whether it was provided in the first place or not. The provided attribute corresponds to whether gender info was given somewhere, and the found in attribute corresponds to where we extracted the gender demographics from. The different modalities are paper, if a paper was explicitly cited along the resource, metadata if a metadata file was included, indexed if the gender was explicitly indexed within data or if data was structured in terms of gender and manually if the gender information are the results of a manual research made by ourselves, trying to either find a paper describing the resources, or by relying on regularities that seems like speaker ID and listening to the recordings. We acknowledge that this last method has some methodological shortcomings: we relied on our perceptual stereotypes to distinguish male from female speakers, most of the time for languages we have no knowledge of, but considering the global lack of data, we used it when corpora were small enough in order to increase our sample size.The second difficulty regards the fact that speech time information are not standardised, making impossible to obtain speech time for individual speakers or gender categories. When speech time information is provided, the statistics given do not all refer to the same measurements. Some authors report speech duration in hours e.g. (Panayotov et al., 2015; Hernandez et al., 2018) , some the number of utterances (e.g (Juan et al., 2015) ) or sentences (e.g. (Google, 2019)), the definition of these two terms never being clearly defined. We gathered all information available, meaning that our final table contains some empty cells, and we found that there was no consistency between speech duration and number of utterances, excluding the possibility to approximate one by the other. As a result, we decided to rely on the size of the corpora as a (rough) approximation of the amount of speech data available, the text files representing a small proportion of the resources size. This method however has drawbacks as not all corpora used the same file format, nor the same sampling rate. Sampling rate has been provided as well in the final table, but we decided to rely on qualitative categories, a corpus being considered small if its size is under 5GB, medium if it is between 5 and 50GB and large if above. 4The final result consists of a table 5 reporting all the characteristics of the corpora. 
The chosen features are the following:
• the resource identifier (id) as defined on OpenSLR
• the language (lang)
• the dialect or accent if specified (dial)
• the total number of speakers as well as the number of male and female speakers (#spk, #spk m, #spk f)
• the total number of utterances as well as the total number of utterances for male and female speakers (#utt, #utt m, #utt f)
• the total duration, or speech time, as well as the duration for male and female speakers (dur, dur m, dur f)
• the size of the resource in gigabytes (sizeGB) as well as a qualitative label (size, taking its value between "big", "medium", "small")
• the sampling rate (sampling)
• the speech task targeted for the resource (task)
• is it elicited speech or not: we define as non-elicited speech data which would have existed without the creation of the resources (e.g. TedTalks, audiobooks, etc.); other speech data are considered as elicited
• the language status (lang status): a language is considered either as high- or low-resourced. The language status is defined from a technological point of view (i.e. are there resources or NLP systems available for this language?). It is fixed at the language granularity (hence the name), regardless of the dialect or accent (if provided).
• the year of the release (year)
• the authors of the resource (producer)
4. Analysis
2
Our algorithm (Algorithm 1) works by intrinsically using the Phrase2VecGLM model (Section 4.2) for query expansion, to discover concepts that are similar in the shared local contexts that they occur in, within documents ranked as top-K relevant to a query document, and using one of two options for specified threshold criteria to tag the document, as described below. Thus our algorithm consists of two main parts: 1) a document scoring and ranking module that directly applies the phrasal-embeddings-based general language model described in Sections 4.2 and 5.1 and Algorithm 1, and 2) a concept selection module that draws the concepts used to tag the query document from the set of top-ranked documents matching it in step 1. There are a couple of different variations implemented for the concept selection scheme: (i) selecting the top TF-IDF term from each of the top-K matching documents as the set of diverse concepts representative of the query document, and (ii) selecting the top-similar concept terms matching each of the representative query document terms, using word2vec/Phrase2Vec similarities on the top-ranked set of documents (Mikolov et al., 2013). The code for the corpus pre-processing, model building and inference (semantically tagging documents) is made available online 1 and the dataset is available publicly 2.
In the pseudocode given by Algorithm 1, <docStats> represents a set of tuples containing various pre-computed document-level frequency and similarity statistics, with elements like docTermsFreqsRawCounts, docTermsTFIDFs, and docTermPairSimilaritySums. <collStats> represents a similar set of collection-level frequency and similarity measures, with elements like collTermsFreqRawCountsIDFs and collTermPairSimilaritySums. The procedure also assumes that the precomputed hashtable dqTerms, holding the top TF-IDF terms for each document d and used for querying into the GLM, is available. We have excluded the implementation details for the methods selectConceptsEmbeddingsModel and selectConceptsTFIDF, and also for the GLM method (which essentially computes Equations (1) and (5)) for the query document to be tagged with concepts.
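To make concept selection variant (i) concrete, here is a minimal sketch, not the released implementation, that tags a query document with the top TF-IDF term of each of its top-K matching documents. The use of plain scikit-learn TF-IDF and the toy document list are illustrative assumptions.

```python
# A simplified sketch of concept selection variant (i) described above: take the
# documents already ranked by the GLM and keep the top TF-IDF term of each of
# the top-K matches as the set of diverse concepts. Plain scikit-learn TF-IDF
# and the toy documents are assumptions for illustration only.
from sklearn.feature_extraction.text import TfidfVectorizer

def select_concepts_tfidf(query_ranked_docs, k=3):
    """query_ranked_docs: documents already ranked by the GLM, best first."""
    top_docs = query_ranked_docs[:k]
    vectorizer = TfidfVectorizer()
    tfidf = vectorizer.fit_transform(top_docs)
    terms = vectorizer.get_feature_names_out()
    # the top TF-IDF term of each of the K documents forms the concept set
    return {terms[row.toarray().argmax()] for row in tfidf}

docs = ["patient shows sepsis and fever", "renal failure with sepsis", "acute renal injury"]
print(select_concepts_tfidf(docs, k=2))
```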
2
In this paper, we study the problem of logical reasoning on the task of multiple choice question answering (MCQA). Specifically, given a passage P, a question Q and a set of K options O = {O_1, ..., O_K}, the goal is to select the correct option O_y, where y ∈ [1, K]. Notably, to tackle this task, we devise a novel pre-training method equipped with contrastive learning, where the abundant knowledge contained in the large-scale Wikipedia documents is explored. We then transfer the learned knowledge to the downstream logical reasoning task.
In a sense, in MCQA for logical reasoning, both the given context (i.e., passage and question) and the options express certain relations between different logical variables (Figure 1). Going a step further, following Equation 2, the relation triplet contained in the correct option should be deducible from the given context through a reasoning path, while those in the wrong options should not. In other words, the context is logically consistent with the correct option only. In light of this, the training instances for our contrastive learning based pre-training should be in the form of a context-option pair, where the context consists of multiple sentences and expresses the relations between the included constituents, while the option should illustrate the potential relations between parts of the constituents. Nevertheless, it is non-trivial to derive such instance pairs from a large-scale unlabeled corpus like Wikipedia due to the redundant constituents, e.g., nouns and predicates. In order to address this, we propose to take the entities contained in unlabeled text as logical variables, and Equation 2 can be transformed as:

⟨e_i, r_{i,j}, e_j⟩ ← (e_i −r_{i,i+1}→ e_{i+1} · · · −r_{j-1,j}→ e_j).    (4)

As can be seen, the right part above is indeed a meta-path connecting e_i and e_j as formulated in Equation 3, indicating an indirect relation between e_i and e_j through intermediary entities and relations. In order to aid the logical consistency conditioned on entities to be established, we posit the assumption that, under the same context (in the same passage), the definite relation between a pair of entities can be inferred from the contextual indirect one, or at least does not logically contradict it. Taking the passage in Figure 2 as an example, it can be concluded from the sentences s_1 and s_5 that the director McKean has cooperated with Stephanie Leonidas. Therefore, the logic is consistent between {s_1, s_5} and s_3. This can be viewed as a weaker constraint than the original one in Equation 2 for logical consistency, yet it can be further enhanced by constructing negative candidates violating the logic. Motivated by this, given an arbitrary document D = {s_1, ..., s_m}, where s_i is the i-th sentence, we first build an entity-level graph, denoted as G = (V, E), where V is the set of entities contained in D and E denotes the set of relations between entities. Notably, to comprehensively capture the relations among entities, we take into account both the external relations from the knowledge graph and the intra-sentence relations. As illustrated in Figure 2(a), there will be an intra-sentence relation between two entities if they are mentioned in a common sentence.
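A toy sketch of the entity-level graph construction just described is given below. Nodes are entities, and an intra-sentence edge links two entities mentioned in the same sentence; knowledge-graph relations are omitted and entity spotting is reduced to exact string matching, both simplifying assumptions. networkx is used only for convenience.

```python
# A toy sketch of the entity-level graph described above: nodes are entities,
# and an intra-sentence edge links two entities mentioned in a common sentence.
# External knowledge-graph relations are omitted and entity spotting is reduced
# to exact string matching, both simplifying assumptions.
import itertools
import networkx as nx

def build_entity_graph(sentences, entities):
    g = nx.Graph()
    g.add_nodes_from(entities)
    for idx, sent in enumerate(sentences):
        mentioned = [e for e in entities if e in sent]
        for e1, e2 in itertools.combinations(mentioned, 2):
            # record which sentence carries the intra-sentence relation
            seen = g.get_edge_data(e1, e2, {}).get("sentences", [])
            g.add_edge(e1, e2, sentences=seen + [idx])
    return g

sents = ["McKean directed MirrorMask.", "Stephanie Leonidas starred in MirrorMask."]
graph = build_entity_graph(sents, ["McKean", "MirrorMask", "Stephanie Leonidas"])
print(list(graph.edges(data=True)))   # meta-path McKean - MirrorMask - Stephanie Leonidas
```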
Thereafter, we can derive the pre-training instance pairs according to the meta-paths extracted from the graph, which will be detailed in the following subsections.As defined in Equation 4, in the positive instances, the answer should contain a relation triplet that is logically consistent with the given context. Since we take the intra-sentence relationship into consideration, given a pair of entities contained in the document, we first collect the sentences mentioning both of them as the set of answer candidates. Accordingly, we then try to find a meta-path connecting the entity pair and hence derive the corresponding logically consistent context. In particular, as shown in Figure 2 (b) , given an entity pair e i , e j , we denote the collected answer candidates as A + , and then we use Depth-First Search (Tarjan, 1972) to find a meta-path linking them on G, following Equation 3. Thereafter, the context sentences S corresponding to the answer candidates in A + are derived by retrieving those sentences undertaking the intra-sentence relations during the search algorithm. Finally, for each answer candidate a ∈ A + , the pair (S, a) is treated as a positive context-answer pair to facilitate our contrastive learning. The details of positive instance generation algorithm are described in Appendix A.In order to obtain the negative instances (i.e., negative context-option pairs) where the option is not logically consistent with the context, the most straightforward way is to randomly sample the sentences from different documents. However, this approach could lead to trivial solutions by simply checking whether the entities involved in each option are the same as those in the given context. In the light of this, we resort to directly breaking the logical consistency of the positive instance pair by modifying the relation rather than the entities in the context or the option, to derive the negative instance pair.In particular, given a positive instance pair (S, a), we devise two negative instance generation methods: the context-oriented and the optionoriented method, focusing on generating negative pairs by modifying the relations involved in the context S and answer a of the positive pair, respectively. Considering that the relation is difficult to be extracted, especially the intra-sentence relation, we propose to implement this reversely via the entity replacement. In particular, for the optionoriented method, suppose that e i , e j is the target entity pair for retrieving the answer a, we first randomly sample a sentence z that contains at least one different entity pair e a , e b from e i , e j as the relation provider. We then obtain the negative option by replacing the entities e a and e b in z with e i and e j , respectively. The operation is equivalent to replacing the relation contained in a with that in z. Formally, we denote the operation asa − = Relation_Replace(z → a).Pertaining to the context-oriented negative instance generation method, we first randomly sample a sentence s i ∈ S, and then conduct the modification process as follows,s − i = Relation_Replace(z → s i ),where the entity pair to be replaced in s i should be contained in the meta-path corresponding to the target entity pair e i , e j . Accordingly, the negative context can be written as S − = S \ {s i } ∪ {s − i }.According to Ko et al. (2020) ; Guo et al. (2019) ; Lai et al. (2021) ; Guo et al. 
(2022), neural models are adept at finding a trivial solution through illusory statistical information in datasets to make correct predictions, which often leads to inferior generalization. In fact, this issue can also occur in our scenario. In particular, the correct answer comes from a natural sentence and describes a real-world fact, while the negative option is synthesized by entity replacement and may therefore conflict with commonsense knowledge. As a result, the pretrained language model tends to identify the correct option directly by judging its factuality rather than its logical consistency with the given context. For example, as shown in Figure 2 (d) (left), the language model deems a as correct simply because the other, synthetic option a− conflicts with world knowledge.
To overcome this problem, we develop a simple yet effective counterfactual data augmentation method to further improve the capability of logical reasoning (Zeng et al., 2020b). Specifically, given the entities P that are involved in the meta-path, we randomly select some entities from P and replace their occurrences in the context and the answer of the positive instance pair (S, a) with entities extracted from other documents. In this manner, the positive instance also contradicts world knowledge. Notably, considering that the positive and negative instance pairs should keep the same set of entities, we also conduct the same replacement for a− or S− if they mention the selected entities, as illustrated in Figure 2.
As discussed in the previous subsection, there are two contrastive learning schemes: option-oriented CL and context-oriented CL. Let A− be the set of all constructed negative options with respect to the correct option a. The option-oriented CL can be formulated as: EQUATION In addition, given C− as the set of all generated negative contexts corresponding to S, the objective of context-oriented CL can be written as: EQUATION To avoid the catastrophic forgetting problem, we also add the MLM objective during pre-training, and the final loss is: EQUATION
4.6 Fine-tuning
During the fine-tuning stage, to approach the task of MCQA, we adopt the following loss function: EQUATION where O_y is the ground-truth option for the question Q, given the passage P. Figure 3 shows the overall training scheme of our method. f is the model to be optimized; θ, ω_0, ω_1 and φ are parameters of different modules. During pre-training, we use a 2-layer MLP as the output layer. The parameters of the output layer are denoted as ω_0, and θ represents the pre-trained Transformer parameters. As for the fine-tuning stage, we employ two schemes. For simple fine-tuning, we follow Devlin et al. (2019) to add another 2-layer MLP with randomly initialized parameters ω_1 on top of the pre-trained Transformer. In addition, to take full advantage of the knowledge acquired during the pre-training stage, we choose to directly fine-tune the pre-trained output layer, optimizing both θ and ω_0. In order to address the discrepancy that the question is absent during pre-training, the prompt-tuning technique (Lester et al., 2021) is employed. Specifically, some learnable embeddings with randomly initialized parameters φ are appended to the input to transform the question in downstream tasks into a declarative constraint.
Table 1: The overall results on ReClor and LogiQA.
We adopt accuracy as the evaluation metric, and all baselines are based on RoBERTa unless otherwise stated. For each model, we repeated training 5 times using different random seeds and report the average results. ‡: results reproduced by ourselves. max: results of the model achieving the best accuracy on the test set.
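For readers who want a concrete picture of the option-oriented contrastive objective described above (its exact equation is omitted in the text), the sketch below implements a generic InfoNCE-style version: the logically consistent (context, answer) pair should be scored above every (context, negative option) pair. The InfoNCE form and the temperature are assumptions, not the paper's verbatim loss.

```python
# A hedged sketch of an option-oriented contrastive objective consistent with
# the description above: the positive (S, a) pair competes against all
# (S, a-) pairs built by entity replacement. The InfoNCE-style formulation and
# the temperature are assumptions, not the paper's exact equation.
import torch
import torch.nn.functional as F

def option_contrastive_loss(pos_score, neg_scores, temperature=1.0):
    """pos_score: (batch,) scores f(S, a); neg_scores: (batch, num_neg) scores f(S, a-)."""
    logits = torch.cat([pos_score.unsqueeze(1), neg_scores], dim=1) / temperature
    targets = torch.zeros(logits.size(0), dtype=torch.long)   # index 0 holds the positive pair
    return F.cross_entropy(logits, targets)

pos = torch.tensor([2.3, 1.1])
neg = torch.tensor([[0.2, -0.5, 0.1], [0.9, 0.0, -1.2]])
print(option_contrastive_loss(pos, neg))
```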
2
Our primary technical contribution in this paper is the development of a novel approach to identifying structured information embedded within natural language texts. Our approach treats each occurrence of a structured region independently, breaking the problem down into two parts. First, we identify the location of each region within the corpus of text documents. Second, for each region, we identify the records, cells, and cell groupings associated with its ad hoc structure.
Our presentation follows our two-part breakdown of the problem. In section 4.1., we describe generic text preprocessing that is a prerequisite to the implementation. Then, in section 4.2. we present our approach to structured region identification. The approach centers on the use of a probabilistic text segmentation algorithm. Input to the algorithm is a token-level representation of the text built from spatial and token class features. As output, it produces a segmentation of the text that groups contiguous lines into either structured or unstructured segments. Finally, in section 4.3., we present our approach to identifying each cell in the region, along with its field assignment (i.e., its "column"). (The annotated corpus is available at http://www.ai.sri.com/ yeh/lrec-structure.) To do this, we assume that one record in the region has been annotated manually. We then combine a distributional representation of tokens with a sequence alignment algorithm to infer cell boundaries and field assignments for the entire region.
Tokenization is the first preprocessing step. The input text is split into a sequence of contiguous substrings using the regular expression ([A-Za-z]+|[0-9]+|(.)\1*). The resulting tokens are either maximal sequences of alphabetic characters (e.g., [YouTube] or [com]), maximal sequences of digits (e.g., [05]), or maximal repetitions of any single non-alphanumeric character (e.g., [@@@] or [\n]). All subsequent processing ignores individual characters, instead considering these tokens as the atomic elements of the text. We then apply a battery of tagging algorithms, each of which assigns a label to certain token sequences. Table 1 lists some of the taggers along with example labeled token sequences, implemented primarily with fast regular expressions. In addition to those shown, we also tag filenames, hostnames, currency, decimal numbers, percentages, fractions, phone numbers, paths, URLs, units, and xml tags. Our approach favors recall over precision, and we use as many taggers from as wide a range of domains as possible. We also augment our tag inventory with parts of speech, as extracted by a maximum entropy tagger (Toutanova et al., 2003). As an example, consider an input text whose first token [G] takes the offset 0. Tagging this text would produce a set of overlapping tags like those shown in Table 2. As implied, tokens may be labeled with multiple tags.
Table 2: An example list of token sequence tags, each represented by a label and a token offset interval ⟨LABEL, interval⟩:
⟨ALPHA, 0⟩ alphabetic; ⟨OTHER, 1⟩ non-alphanumeric; ⟨SPACE, 2⟩ space; ⟨ALPHA, 3⟩ alphabetic; ⟨ACAPS, 0⟩ all caps; ⟨ICAPS, 3⟩ initial caps; ⟨INIT, 0..1⟩ initials; ⟨ABRV, 0..1⟩ abbreviation; ⟨SURNAM, 3⟩ surname; ⟨CITY, 3⟩ city; ⟨PERS, 0..3⟩ person.
The next step is to separate structured from unstructured regions, as illustrated in Figure 1. In this study, we considered entire lines as being structured or unstructured, a simplification that sufficed for most documents, leaving sub-line structure to future work.
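Returning to the tokenization step described above, here is a minimal sketch of a tokenizer built around the quoted regular expression (the pattern works unchanged in Python, since the single capturing group is group 1).

```python
# A small sketch of the tokenization step described above: maximal alphabetic
# runs, maximal digit runs, or maximal repetitions of a single non-alphanumeric
# character. The regex mirrors the one quoted in the text.
import re

TOKEN_RE = re.compile(r"[A-Za-z]+|[0-9]+|(.)\1*", re.DOTALL)  # DOTALL so newlines form tokens too

def tokenize(text):
    return [m.group(0) for m in TOKEN_RE.finditer(text)]

print(tokenize("YouTube.com 05 @@@"))
# ['YouTube', '.', 'com', ' ', '05', ' ', '@@@']
```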
For each tagged token produced by the previous step, we record the associated set of spatial skip bigrams. These skip bigrams encode both the horizontal and vertical character offsets between a given anchor token and each of its target neighbors, along with their tag assignments. The intent here is simply to capture how semantic categories of tokens are spatially arranged with respect to each other.
As we have found that structured regions tended both to be contiguous and to exhibit content and spatial regularities that differ significantly from unstructured text, we identified them using a labeled variant of a maximum-likelihood segmentation algorithm (Utiyama and Isahara, 2001). Our decision to base our approach upon this particular segmentation algorithm is rooted in a previous study that found it produced the most accurate and unbiased results in text segmentation (Niekrasz and Moore, 2010). For a given document, we aim to identify a segmentation and a labeling per segment that maximize the likelihood

arg max_{S,L} Pr(S, L | W), where Pr(S, L | W) = [ ∏_{i=1}^{m} Pr(W_i | L, S) ] · Pr(L) · Pr(S) / Pr(W).

Here, S refers to a segmentation of the document and L refers to the labeling of each of those segments as structured or unstructured. S is a vector of integer pairs, each pair consisting of a start and stop sentence index that describes the segment. L is a corresponding vector that identifies each segment as being structured or unstructured. W_i represents the pool of spatial skip bigrams within the i-th segment. The conditional Pr(W_i | L, S) expresses the probability of observing the collection of spatial skip bigrams within segment number i, as computed by treating the skip bigrams as being mutually independent and dependent only upon the segment label. In this model, labelings are treated as being i.i.d. While prior work treated Pr(S) as a penalty on segment size, we found that a uniform Pr(S) yielded better performance. Our corpus includes a number of documents that have very large segments, drawing into question the validity of any fixed expectations about segment size.
The next step is to decompose structured regions into their primitive constituents (cells) and to determine the functional relations over these cells. Our process, for a given structured region, is illustrated in Figure 2. Using the set of overlapping tags for a given token in the region, we generated lattices describing that token's context. We then apply information theoretic co-clustering (Dhillon et al., 2003) to generate a distributed representation of these contexts, which is used to compute a similarity measure between tokens. We then solicit supervision in the form of a single seed sequence A_{k,l} = (a_k, a_{k+1}, ..., a_{k+l}), where each a_i ∈ F represents the assignment of a token x_i to a member of the set of possible record fields F (e.g., "surname", "zipcode", or even "field 3"). The token similarity measure is used by a soft sequence aligner to match candidate tokens c_m in the rest of the structured region with the seed tokens a_k. Candidate tokens are then assigned the record fields of their seed tokens.
In order to establish a basis for computing soft sequence alignments, we assembled "tag lattices," the set of overlapping tags to the left and right of a token, as a source of information about each token's context. The tags used are given in Table 2. Lattices are acyclic directed graphs where each tag is represented as a node and two nodes are connected by an edge if and only if they are contiguous in the text with one another.
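A toy sketch of the spatial skip-bigram features recorded at the start of this subsection is shown below: for each anchor token, its tag is paired with a neighbor's tag plus the horizontal and vertical offsets between them. The token records, the neighborhood window, and the tag names are illustrative assumptions.

```python
# A toy sketch of spatial skip-bigram extraction as described above: each
# bigram pairs an anchor token's tag with a neighbor's tag plus the horizontal
# and vertical offsets between them. The window size and the token record
# format (line index plus character offset) are illustrative assumptions.
from collections import Counter

def spatial_skip_bigrams(tokens, window=3):
    """tokens: list of dicts with 'tag', 'line', and 'col' (character offset in the line)."""
    bigrams = Counter()
    for i, anchor in enumerate(tokens):
        for target in tokens[i + 1 : i + 1 + window]:
            dx = target["col"] - anchor["col"]     # horizontal character offset
            dy = target["line"] - anchor["line"]   # vertical (line) offset
            bigrams[(anchor["tag"], target["tag"], dx, dy)] += 1
    return bigrams

toks = [
    {"tag": "ICAPS", "line": 0, "col": 0},
    {"tag": "SURNAM", "line": 0, "col": 8},
    {"tag": "CITY", "line": 1, "col": 0},
]
print(spatial_skip_bigrams(toks))
```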
We then traverse the lattice and record all possible unidirectional paths up to a specified maximum length (7 is used as a default). We accumulate context statistics for each token by considering all paths that start or end on a tag in which the token is included. For example, the token [.] from our example has (among others) the contexts (0, OTHER), (0, INITIAL), (-1, ALLCAPS), and (+2, SURNAM). Here, the numeral represents the length of the path and the sign represents whether the contextual tag occurs at the end or beginning of the path. The result is a co-occurrence matrix where rows represent tokens and columns represent unique contexts, e.g. (+1, SURNAM). Our soft sequence aligner requires a measure of the similarity between two arbitrary tokens, which itself may be impacted by sparsity in the raw contextual features. To alleviate this, we induce a dense representation of these contexts by applying information theoretic co-clustering to the token-to-tag-context co-occurrence matrix. The co-clustering procedure is described as follows. Let X = {x_1, x_2, ..., x_L} represent the set of L tokens in the text, and let Y = {y_1, y_2, ..., y_{N_Y}} represent the set of N_Y = w·N_T unique context features, where N_T is the number of unique context feature labels T, and w is the maximum path length used during lattice traversal. The co-occurrence matrix is thus a set of non-negative integers n_{x_i,y_j} for every pair of symbols (x_i, y_j) in X × Y. The output of co-clustering is two partitions X* = {x*_1, x*_2, ..., x*_{N_{X*}}} and Y* = {y*_1, y*_2, ..., y*_{N_{Y*}}} of the sets X and Y.
Figure 1: For a given document (upper left), we label each token by its word class, drawn from a mix of part-of-speech and word categories such as month indicators (lower left). For each token we encode its spatial skip bigrams, describing both the content and spatial arrangement with its neighbors (lower right). Using these features, we label each line as being governed by a structured schema, or as regular unstructured text (upper right).
Figure 2: Overview of the alignment procedure. We first establish tag lattices, which capture the tag context for a given token. We then generate the basis for a contextual similarity measure by co-clustering the token and context co-occurrence matrix. This is then used to align tokens in the structured region with tokens in the annotated sequence.
Co-clustering seeks to maximize the mutual information between X* and Y* given the constraints N_{X*} and N_{Y*}, making it a sensible method for compressing our context features while preserving as much information as possible for distinguishing the types of tokens. Namely, it allows us to represent each token x_i in the text as a categorical distribution over the N_{Y*} clustered context features, p_{x_i} = (n_{x_i,y*_1}, n_{x_i,y*_2}, ..., n_{x_i,y*_{N_{Y*}}}), where n_{x_i,y*_j} = Σ_{y_j ∈ y*_j} n_{x_i,y_j}. These vectors are the basis for measuring token similarity as discussed in the next section. Figure 3 shows the result of applying our tagging and co-clustering steps to an example text.
The colorization of the tokens indicates their assignment to a particular token cluster, with each token cluster x*_i being assigned a unique color.
Our procedure for aligning tokens with record fields is illustrated in Figure 4. We first solicit a manually annotated seed representing the field assignments for a single record from the user. This is represented as a contiguous sequence of tokens from the structured region, with the field assignment for each token represented as an integer. The field values have no semantics other than to indicate which tokens belong to which field in the underlying schema. We use the manually annotated seed record and apply sequence alignment and distributional similarity measures to identify other occurrences of records in the text. This is done by iterating a sliding window, of the same length l as the annotated sequence, through the entire text token by token. At each iteration m ∈ (1, 2, ..., L − l + 1) (where L is the length of the text in tokens), the sequence C_{m,l} = (p_{x_m}, p_{x_{m+1}}, ..., p_{x_{m+l}}) of clustered context distributions is aligned with the sequence C_{k,l} corresponding to the annotated record. Alignment is performed using the Needleman-Wunsch sequence alignment algorithm (Needleman and Wunsch, 1970), where the score (penalty) for insertions and deletions is defined as −1 and the score for a pair of aligned elements is defined as 1 − D(p_m, p_k), where D(p, q) is the Hellinger distance between distributions p and q. Alignment produces a set of pairs mapping each token in some subset of tokens in one sequence to a subset of tokens in the other. Tokens may remain unmapped, resulting in gaps. Pairings must also be sequential, so they cannot cross. At each iteration of the window, if the resulting alignment score is positive, a field value is assigned from the annotated sequence A_{k,l} to each of the tokens in the window that are part of an alignment pair. This is represented graphically in the figure by following the arrows from the top row to the bottom row. Since each window iteration overlaps with the previous one, any single token may have multiple field values assigned to it. Therefore, we consider each of these assignments as a "vote" and choose the majority vote as the final field assignment for the token.
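The alignment scoring just described can be sketched compactly: Needleman-Wunsch with gap penalty −1 and a match score of 1 minus the Hellinger distance between the tokens' clustered-context distributions. Only the optimal score is computed here; the traceback needed to read off the actual token pairings is omitted for brevity, so this is a sketch rather than the full procedure.

```python
# A compact sketch of the soft alignment scoring described above: Needleman-
# Wunsch with gap penalty -1 and match score 1 - Hellinger(p, q) between the
# clustered-context distributions of the two tokens. Traceback is omitted.
import numpy as np

def hellinger(p, q):
    return float(np.sqrt(0.5 * np.sum((np.sqrt(p) - np.sqrt(q)) ** 2)))

def nw_score(seq_a, seq_b, gap=-1.0):
    n, m = len(seq_a), len(seq_b)
    dp = np.zeros((n + 1, m + 1))
    dp[:, 0] = gap * np.arange(n + 1)
    dp[0, :] = gap * np.arange(m + 1)
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            match = dp[i - 1, j - 1] + (1.0 - hellinger(seq_a[i - 1], seq_b[j - 1]))
            dp[i, j] = max(match, dp[i - 1, j] + gap, dp[i, j - 1] + gap)
    return dp[n, m]

a = [np.array([0.7, 0.2, 0.1]), np.array([0.1, 0.8, 0.1])]   # seed record distributions
b = [np.array([0.6, 0.3, 0.1]), np.array([0.1, 0.7, 0.2])]   # candidate window distributions
print(nw_score(a, b))   # positive score: assign the seed's field values to the window
```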
2
The architecture of our proposed model is presented in Figure 2. The model is divided into two parts: the autoencoder model for the content, and the style embedding model. In the autoencoder model, the latent representation from the encoder and the style representation are combined, and the decoder uses the resulting expression to generate the output text. During training, the autoencoder model's objective is to reconstruct the input text using the combined latent representation. The style embedding model is responsible for controlling the style of the input text. The style embedding model learns the representation for each style, and the classifier predicts the style using this embedding information. The objective of this part is to learn the style expression by classifying the style label. The separation of these two parts simplifies the generation model and makes it possible to use adaptive style expressions. Additionally, each model can concentrate solely on its task.
Figure 1: Comparison between models at the training phase. (a) Most style overwriting-based models are trained by the combination of three losses: reconstruction, cycle, and adversarial losses. As a result, many tasks are assigned to a single generation model. Additionally, the task is to make the output y follow the given target style s', not to learn the style expression. Therefore, most overwriting models cannot control how much style is changed in the text. (b) Our model is trained by the reconstruction loss only, and this simplicity can make the model focus more on generating sentences. Learning style expressions and adding them to the latent text representation makes it possible to control the output attribute.
Transformer-based Autoencoder. Most text generation tasks are based on a sequence-to-sequence model, which consists of an encoder and a decoder (Johnson et al., 2017; Lample et al., 2018). We use the Transformer-based sequence-to-sequence model to capture the various meanings of the same word. That is, the same word can have slightly different meanings in sentimental text; to capture these subtle differences, relational information is used in the Transformer (Vaswani et al., 2017). In our approach, the Transformer-based autoencoder model is trained to reconstruct the input text x with its own style s. Pairs of (x, s) and (x', s') are not used in the training phase. The generation model does not need to make the output follow a given attribute like other methods (Shen et al., 2017; Hu et al., 2017; Dai et al., 2019). Therefore, we do not need to add constraints such as the adversarial loss or cycle loss. Instead, the generation model focuses only on the reconstruction when the combined latent representation is given. To train this plain autoencoder, we use label smoothing regularization to improve the performance (Szegedy et al., 2016). The loss of the autoencoder model is expressed as follows: EQUATION where v is the vocabulary size and ε represents the smoothing parameter. p and p̃ denote the predicted probability distribution and the true probability distribution over the vocabulary, respectively. This reconstruction loss does not affect the style embedding part and affects only the Transformer model.
Learning the style embedding. We propose a style embedding module to learn the general style representation. When the text is represented as a compressed dimension z ∈ R^d, the style information and content information are hard to separate. Therefore, we do not disentangle the latent representation z into style and content.
Instead, we train the common representation depending on the style, and this common expression becomes the style embedding. The set of style embeddings is S = {S_1, ..., S_k} ∈ R^{d×k}, where k is the number of styles. The style embedding module uses a style classifier.
Figure 2: The architecture of the two modules within our proposed model. The input sentence is represented as x, and s is the style label of the input. z* denotes the combined latent representation, and 'sim' means similarity. In the image, the flow of the gradient is marked with red arrows. (a) The sentence generation model. The encoder takes a sentence input x and generates a compressed expression z. The style embedding for the input text style s is added to this representation. Using this added latent representation, the decoder generates the reconstructed input. (b) The style embedding is obtained through the similarity between the input style and the latent representation of the input. The similarity is calculated by the dot product. Based on this similarity, the classifier predicts the style, and the ground truth is the input style label.
The classifier consists of a linear projection layer W ∈ R^{k×1} that calculates the probability corresponding to each style. The input of the style classifier is the similarity between the latent representation z and the style embeddings S. The similarities are calculated by the dot product, sim_{z,S} = {sim_{z,S_1}, ..., sim_{z,S_k}}. Based on the calculated similarities, the style classifier predicts the style label. Hence, the objective of training is to make the similarity corresponding to the ground-truth style of the latent representation the highest among all the calculated similarities. By making the similarity of the input style the highest, the corresponding style embedding obtains a more proper representation of the given input style. For instance, if there are only two styles, such as positive and negative, the two styles are labeled 0 and 1; then, the style classifier contains the sigmoid function. This makes the negative style embedding a negative-valued representation. Similarly, the positive style embedding obtains a positive-valued expression. The classifier and the style embedding are trained by the classification loss L_se as follows: EQUATION where C_θc indicates the style classifier in the style embedding model, s_i denotes the input text's style label, and sim_{z,s_i} represents the similarity between the input's latent representation and the style label. The back-propagation procedure with respect to the classification loss does not affect the autoencoder model and the latent space z, only the style embedding result.
Combining the latent space and style embedding. We finally modify the latent representation from the encoder by adding the learned style embedding. The modified latent representation, which includes information of the original text and style, becomes the input of the decoder D. The added latent representation is expressed as follows:

z* = z + w · S_s,

where S_s is the style embedding of the input style s. The hyperparameter w reflecting the style strength modulates how much of the style will be changed in the sentence. In addition to the encoder output, the style embedding can be used to adjust the style part in the sentence. During training, the value of the strength is only 1, making the input representation slightly more inclusive of its own style. At this time, only the style of the input sentence is used, not other styles.
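A minimal sketch of this style-embedding part is given below: dot-product similarities between z and the k learned style embeddings feed a small classifier, and the chosen style embedding is added to z with strength w to form the decoder input. The module shapes, the Linear classifier over similarities, and the dimensions are illustrative assumptions rather than the paper's exact configuration.

```python
# A minimal sketch (shapes and classifier details are assumptions) of the style
# embedding module described above: similarities between the latent code z and
# k learned style embeddings feed a classifier trained on the style label, and
# the selected embedding is added to z with strength w for the decoder.
import torch
import torch.nn as nn

class StyleEmbedding(nn.Module):
    def __init__(self, dim, num_styles):
        super().__init__()
        self.styles = nn.Parameter(torch.randn(num_styles, dim))   # S in R^{k x d}
        self.classifier = nn.Linear(num_styles, num_styles)        # predicts style from similarities

    def forward(self, z, style_id, w=1.0):
        sim = z @ self.styles.t()                   # dot-product similarities, shape (batch, k)
        logits = self.classifier(sim)               # trained with the input style label as target
        z_star = z + w * self.styles[style_id]      # combined latent representation z* = z + w * S_s
        return z_star, logits

module = StyleEmbedding(dim=16, num_styles=2)
z = torch.randn(4, 16)
z_star, logits = module(z, style_id=torch.tensor([0, 1, 0, 1]), w=1.0)
print(z_star.shape, logits.shape)
```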
With this added latent space z*_x, the decoder reconstructs the input x as follows: EQUATION
Figure 3: Visualization of representations with different style weights w. The direct output of the encoder is represented as source, and the one after adding the style embedding with the style strength w is indicated as a transferred sample. PCA is used to project the vectors into two-dimensional space. In the image of the source data, the left side of the projected space represents a positive style, while the right side implies a negative style according to the source image. As the weight increases, we can observe that the original negative samples move toward the positive position.
This approach slightly adds style information but does not significantly affect the reconstruction. In fact, the input sentence is reconstructed by the decoder if no additional style information is used. As is evident, the reconstruction model and the style model focus only on their own tasks. Owing to this architecture, the style expression used inside the generation model can be adapted without any other constraints. At test time, by adjusting the value of w, we can generate different output style strengths as desired. The larger the value of w is, the more the text style changes.
2
In this section, we outline our procedure for automatic acquisition of patterns. We employ a cascading procedure, as shown in Figure 3. First, the original documents are processed by a morphological analyzer and NE-tagger. Then the system retrieves the relevant documents for the scenario as a relevant document set. The system further selects a set of relevant sentences as a relevant sentence set from those in the relevant document set. Finally, all the sentences in the relevant sentence set are parsed, and the paths in the dependency tree are taken as patterns.
Morphological analysis and Named Entity (NE) tagging are performed on the training data at this stage. We used JUMAN [2] for the former and an NE system based on a decision tree algorithm [5] for the latter. The part-of-speech information given by JUMAN is also used in the later stages.
The system first retrieves the documents that describe the events of the scenario of interest, called the relevant document set. A set of narrative sentences describing the scenario is selected to create a query for the retrieval. For this experiment, we set the size of the relevant document set to 300 and retrieved the documents using CRL's stochastic-model-based IR system [3], which performed well in the IR task in IREX, the Information Retrieval and Extraction evaluation project in Japan. All the sentences used to create the patterns are retrieved from this relevant document set.
The system then calculates the TF/IDF-based score of relevance to the scenario for each sentence in the relevant document set and retrieves the n most relevant sentences as the source of the patterns, where n is set to 300 for this experiment. The retrieved sentences will be the source for pattern extraction in the next subsection.
First, the TF/IDF-based score for every word in the relevant document set is calculated. The TF/IDF score of word w is

score(w) = TF(w) · log((N + 0.5) / DF(w)) / log(N + 1)   if w is a noun, verb, or named entity, and 0 otherwise,

where N is the number of documents in the collection, TF(w) is the term frequency of w in the relevant document set, and DF(w) is the document frequency of w in the collection. Second, the system calculates the score of each sentence based on the scores of its words. However, unusually short sentences and unusually long sentences will be penalized. The TF/IDF score of sentence s is

score(s) = ( Σ_{w∈s} score(w) ) / ( length(s) + |length(s) − AVE| ),

where length(s) is the number of words in s, and AVE is the average number of words in a sentence.
Based on the dependency tree of the sentences, patterns are extracted from the relevant sentences retrieved in the previous subsection. Figure 4 shows the procedure. First, the retrieved sentence is parsed into a dependency tree by KNP [1] (Stage 1). This stage also finds the predicates in the tree. Second, the system takes all the predicates in the tree as the roots of their own subtrees (Stage 2). Then each path from the root to a node is extracted, and these paths are collected and counted across all the relevant sentences. Finally, the system takes those paths with frequency higher than some threshold as extracted patterns. Figure 5 shows examples of the acquired patterns.
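The pattern extraction stage can be sketched as follows: treat each predicate as the root of its dependency subtree and emit every path from that root to a descendant as a candidate pattern, counting frequencies across sentences. The tree encoding (child lists keyed by word) and the toy sentence are illustrative assumptions.

```python
# A toy sketch of the pattern extraction stage described above: each predicate
# is taken as the root of its dependency subtree, and every root-to-node path
# becomes a candidate pattern whose frequency is counted across sentences.
# The tree encoding (child lists keyed by word) is an illustrative assumption.
from collections import Counter

def extract_patterns(tree, predicates):
    """tree: dict mapping a word to the list of its dependents."""
    patterns = Counter()

    def walk(node, path):
        for child in tree.get(node, []):
            patterns[tuple(path + [child])] += 1
            walk(child, path + [child])

    for pred in predicates:
        walk(pred, [pred])
    return patterns

# e.g. "X sue Y for damages": 'sue' governs 'X', 'Y', 'for'; 'for' governs 'damages'
tree = {"sue": ["X", "Y", "for"], "for": ["damages"]}
print(extract_patterns(tree, predicates=["sue"]))
# paths such as (sue, X), (sue, Y), (sue, for), (sue, for, damages)
```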
2
Aphasic speech data can be collected in mainly two ways: as a free form discussion between a PWA and an interviewer or a PWA reading a set of provided scripts. While a PWA reading from scripts is conducive to supervised learning methods, it is rarely the case in real life. Hence, our goal is to perform paraphasia detection and classification in the wild i.e. without any target scripts. Another motivation for classification in the wild is the lack of labeled English aphasic speech data. Further, the available speech data has a class imbalance (phonemic and neologistic paraphasias account for 12.0 and 6.4 percent respectively). Low-resource languages such as Hindi, Greek etc. have a serious lack of aphasia speech data and almost non-existent labeled speech data. Using transfer-learning approaches similar to (Le et al., 2017) , would not allow extending it to such low-resource languages. Hence, it was necessary to investigate unsupervised approaches for paraphasia classification. In this section, we outline our proposed unsupervised method which consists of first creating speech embeddings of non-aphasic speech data and then performing soft clustering to further classify the type of paraphasia detected.In order to classify phonemic and neologistic paraphasia, capturing phoneme placement in a word is necessary.Previous work, used features such as Goodness of Pronunciation and Phoneme Edit-Distance to do the same. Hence, we adopt speech embeddings which focus on phoneme pronunciation.In particular, we use the Audio-Word2Vec embeddings outlined in (Chung et al., 2016) as they have demonstrated good performance in distinguishing utterances that have large (>3) phoneme sequence edit distance and grouping utterances with low phoneme sequence edit distance (0 to 2). These speech embeddings are created in an unsupervised fashion. Each word utterance is passed through a sequence-to-sequence encoder and reconstructed via a decoder. This process preserves the acoustic information in the embedding. (Chung et al., 2016) further demonstrated that sequential phoneme structure is preserved in the vector space. This property can be exploited using density based clustering, the next step of our proposed method.Classifying semantic paraphasia requires different approaches which cannot be encompassed in methods used to classify phonemic and neologistic paraphasia and hence is left as future work.Unsupervised word embeddings can be improved further and geared specifically for aphasic speech, but in order to understand what these embeddings are capturing it is important to probe them. Taking inspiration from (Conneau et al., 2018) , we create probing tasks specifically for paraphasia. Probing tasks are simple classification tasks for embeddings. We detail three probing tasks specifically for phonemic and neologistic paraphasia.1. Phoneme-Movement: Phonemic paraphasia is often characterized with phoneme movement, usually involving a shift in the position of one or two phonemes. In this binary classification task, the embeddings are used to determine if a phoneme shift took place or not.2. Phoneme-Add/Delete: The addition or deletion of a phoneme is seen in phonemic paraphasia. We use the generated embeddings to determine if the word utterance has a phoneme addition/deletion or is unchanged.3. In-Dictionary: In this task, we check if the embeddings can classify if the word is in the language's dictionary or not. 
Neologistic paraphasia occurs when PWAs substitute target words with non-words. These three probing tasks, while not exhaustive, can be used to determine how well the speech embeddings can perform for paraphasia detection.
As our method is unsupervised, we do not have access to whether each word utterance is a paraphasia (and, further, of what type) or not. To classify each utterance, we use techniques similar to anomaly detection. Firstly, the embeddings generated for each word represent only non-paraphasia words. This is because the dataset used to create these embeddings consists of only correct word utterances. We cluster these non-paraphasia embeddings into distinct clusters where the members of each cluster are embeddings of the same word. We use individual words as centroids rather than phoneme-based centroids. This is because phoneme-based centroid choices such as monophones, senones, etc. create a surjective mapping from embeddings to centroids (e.g., both the words cat and hat contain the same phoneme ae, hence both would be assigned to the same centroid), whereas word-based centroids give a bijective mapping. Secondly, we use HDBSCAN (McInnes et al., 2017) to perform density-based clustering, as it allows for cluster densities of varying size. The two most influential parameters, namely minimum cluster size and minimum samples, are chosen so as to produce a number of clusters equal to the vocabulary size of the dataset. Lastly, we exploit the soft clustering property of HDBSCAN to detect paraphasias. We use simple rule-based methods to perform classification. When a word utterance is correct, i.e., it is not a paraphasia, the top-1 cluster probability should be high, as the embedding should have a core distance of 0. Hence, if the utterance satisfies top-1 probability ≥ α, it is classified as a correct word. We use α = 0.75 in our experiments. Now, if a word utterance is a phonemic paraphasia, HDBSCAN returns near-similar cluster membership probabilities for 2 to 3 clusters (e.g., lat will be clustered close to the correct words bat, late, etc.): EQUATION If a word utterance satisfies equation 1, we can classify it as a phonemic paraphasia. We use β = 0.2 in our experiments. For a neologistic paraphasia, the cluster membership probabilities are evenly low, as the word utterance is a non-word and was never seen by HDBSCAN while clustering. Hence, an utterance that satisfies Σ_{i=1}^{k} top_i ≤ γ is classified as a neologistic paraphasia. In our experiments, k = 5 and γ = 0.5. This clustering-based method does not violate the unsupervised nature of the proposed goal. Our reasoning is validated by the empirical evaluations performed in further sections.
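The rules above can be sketched as a single decision function over the HDBSCAN soft-cluster membership probabilities. The phonemic rule is written here as "the top-1 and top-2 memberships are within β of each other", which is our reading of Equation 1, and the rule ordering (neologistic checked before phonemic) is our choice; both are assumptions rather than the paper's exact logic.

```python
# A sketch of the rule-based classification over HDBSCAN soft-cluster membership
# probabilities. The phonemic rule ("top-1 and top-2 within beta") is our
# reading of Equation 1, and checking the neologistic rule first is our choice;
# both should be treated as assumptions.
import numpy as np

def classify_utterance(membership_probs, alpha=0.75, beta=0.2, gamma=0.5, k=5):
    probs = np.sort(np.asarray(membership_probs))[::-1]      # descending membership probabilities
    if probs[0] >= alpha:                                     # confidently inside one word cluster
        return "correct word"
    if probs[:k].sum() <= gamma:                              # evenly low memberships: unseen non-word
        return "neologistic paraphasia"
    if probs[0] - probs[1] <= beta:                           # near-equal top clusters (assumed form)
        return "phonemic paraphasia"
    return "unclassified"

print(classify_utterance([0.90, 0.05, 0.02, 0.01, 0.01, 0.01]))   # correct word
print(classify_utterance([0.35, 0.30, 0.10, 0.05, 0.05, 0.02]))   # phonemic paraphasia
print(classify_utterance([0.10, 0.08, 0.07, 0.05, 0.04, 0.03]))   # neologistic paraphasia
```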
2
Before we formulate the problem, we will first give some formal definitions. The set of relations R is defined as {r_1, r_2, ..., r_m}, where each r_i is a tuple (r^p_i, r^s_i, r^o_i) corresponding to the predicate, subject, and object; and the set of attributes E is represented as {e_1, e_2, ..., e_n}, where each e_i is a tuple (e^o_i, e^p_i) corresponding to the object and attribute. We introduce z as a discrete vector (z_1, z_2, ..., z_{m+n}), where z_i ∈ {0, 1} represents the hidden explainable variable. z is interpreted as an evidence selector: z_i = 1 means the corresponding relation/attribute justifies the target action a. We define A as the vocabulary of target actions. Based on all these definitions, our goal is to jointly select evidence z and predict the target action a ∈ A, in other words, to learn the probability p(a, z|R, E).
The variational autoencoder (VAE) (Kingma and Welling, 2013) was proposed as a generative model that combines the power of directed continuous or discrete graphical models and neural networks with latent variables. The VAE models the generative process of a random variable x as follows: first the latent variable z is generated from a prior probability distribution p(z), then a data sample x is generated from a conditional probability distribution p(x|z). The CVAE (Zhao et al., 2017) is a natural extension of the VAE: both the prior distribution and the conditional distribution are now conditioned on an additional context c: p(z|c) and p(x|z, c).
In our task, we decompose the inference problem p(a, z|R, E) into two smaller problems. The first sub-problem is to infer p(a|R, E), which acts as a performer. The second is to infer p(z|a, R, E), which acts as an explainer. These two problems are closely coupled, hence we model them jointly. The probability distribution p(a|R, E) can be written as:

p(a|R, E) = Σ_z p_θ(a|z, R, E) p(z|R, E).

Directly optimizing this conditional probability is not feasible. Usually the Evidence Lower Bound (ELBO) (Sohn et al., 2015) is optimized, which can be derived as follows: EQUATION The first KL divergence term is to minimize the distance between the posterior distribution and the prior distribution. The second term is to maximize the expectation of the target action based on the posterior latent distribution.
In most previous work using VAEs, there is no explicit meaning for the hidden representation z, thus it is hard for humans to interpret. For example, z is simply assumed to follow a Gaussian distribution or a categorical distribution. In order to have a more explicit representation for the purpose of explanation, our latent discrete variable z is used to indicate whether the corresponding relation or attribute can be used for justifying the action.
The whole system architecture is shown in Figure 2. From an image, we first extract a candidate relation set R and an attribute set E. Every relation r and attribute e is embedded using a Gated Recurrent Neural Network (Chung et al., 2014):

r_emb = GRU([r^p, r^s, r^o]),  e_emb = GRU([e^o, e^p]).

The action a is represented by a GloVe embedding (Pennington et al., 2014), followed by another non-linear layer:

a_emb = ReLU(W_i a_glove + b_i),

where a_glove ∈ R^k is the pre-trained GloVe embedding. Then the latent variable z can be calculated as:

q_φ(z|a, R, E) = softmax(W_z [U; a_emb] + b_z),

where U = [r_emb_1, ..., r_emb_m, e_emb_1, ..., e_emb_n], [U; a_emb] denotes the concatenation of U and a_emb,
and W_z ∈ R^{2×2k}, as we assume each z_i belongs to one of the two classes {0, 1}.
The prior distribution can be calculated as:

p_θ(z|R, E) = softmax(W_z U + b_z).

The KL divergence between the prior random variable z_prior from p_θ(z|R, E) and the posterior random variable z_posterior from q_φ(z|a, R, E) is:

KL(z_prior, z_posterior) = −p_i log(p'_i / p_i) − (1 − p_i) log((1 − p'_i) / (1 − p_i)),

where z_prior ∼ Bern(p_i) and z_posterior ∼ Bern(p'_i).
Another challenge is that z is a discrete variable, which blocks the gradient and makes end-to-end training infeasible. Gumbel-Softmax (Jang et al., 2016) is a re-parameterization trick to deal with discrete variables in a neural network. We use this trick to sample the discrete z. Then we do a weighted sum pooling between the discretized z and U:

h_z = ReLU(Σ_i z_i · U_i),
h = ReLU(W_h h_z + b_h),
p_θ(a|z, R, E) = softmax(W h + b).

During training, we also add a sparsity regularization on the latent variable z besides the ELBO. So our final training objective is: EQUATION
During testing, we have two objectives. First, we want to infer the target action a, which can be computed through sampling: EQUATION where z_s ∼ p(z|R, E) and S is the number of samples. After obtaining the predicted action â, the posterior explanation is inferred as q_φ(z|â, R, E). In the supervised setting, we assume we have supervision for the discrete latent variable z, which makes it more like a multi-task setting. We optimize both the action prediction loss and the evidence selection loss. The final loss function is defined as:

L_SV = λ L_CVAE + (1 − λ) L_evidence, where L_evidence = − Σ_k (z_k log p(ẑ_k) + (1 − z_k) log(1 − p(ẑ_k))),

in which z_k ∈ {0, 1} is the ground-truth label, ẑ_k is the predicted label, and λ is a hyper-parameter.
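Going back to the discrete evidence selector, the sketch below shows how the per-relation/attribute binary logits can be relaxed with Gumbel-Softmax and how the selected indicators weight a sum-pooling over the embeddings U before the action classifier. Dimensions and the straight-through (hard) setting are illustrative assumptions.

```python
# A minimal sketch of the discrete evidence selection described above: each
# relation/attribute gets a 2-class logit, relaxed with Gumbel-Softmax so the
# selection stays differentiable, and the "selected" indicator weights a sum
# pooling over the embeddings U. Dimensions and the hard setting are assumptions.
import torch
import torch.nn.functional as F

def select_and_pool(logits, U, tau=1.0):
    """logits: (n, 2) per-item class logits; U: (n, d) relation/attribute embeddings."""
    z = F.gumbel_softmax(logits, tau=tau, hard=True)   # (n, 2), one-hot but differentiable
    selected = z[:, 1].unsqueeze(1)                    # indicator of class "1" (evidence selected)
    h_z = F.relu((selected * U).sum(dim=0))            # weighted sum pooling h_z = ReLU(sum_i z_i * U_i)
    return h_z

n, d = 6, 32
logits = torch.randn(n, 2)
U = torch.randn(n, d)
print(select_and_pool(logits, U).shape)   # torch.Size([32])
```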
2
Deep neural networks, with or without word embeddings, have recently shown significant improvements over traditional machine learning-based approaches when applied to various sentence- and document-level classification tasks. Kim (2014) has shown that CNNs outperform traditional machine learning-based approaches on several tasks, such as sentiment classification, question type classification, and subjectivity classification, using simple static word embeddings and tuning of hyper-parameters. Character-level CNNs have also been proposed for text classification. Lai et al. (2015) and Visin et al. (2015) proposed recurrent CNNs, while Johnson and Zhang (2015) proposed a semi-supervised CNN for the text classification task. Palangi et al. (2016) proposed sentence embeddings using an LSTM network for the information retrieval task. Zhou et al. (2016) proposed attention-based bidirectional LSTM networks for the relation classification task. RNNs model text sequences effectively by capturing long-range dependencies among the words. LSTM-based approaches built on RNNs capture the sequences in the sentences more effectively than the CNN- and SVM-based approaches. In the subsequent subsections, we describe our proposed CNN- and LSTM-based approaches for multi-class dialect classification.
Collobert et al. (2011) adapted the original CNN proposed by LeCun and Bengio (1995) for modelling natural language sentences. Following Kim (2014), we present a variant of the CNN architecture with four layer types: an input layer, a convolution layer, a max pooling layer, and a fully connected softmax layer. Each dialect in the input layer is represented as a sentence (dialect) comprised of distributional word embeddings. Let v_i ∈ R^k be the k-dimensional word vector corresponding to the i-th word in the dialect: EQUATION
In the convolution layer, for a given word sequence within a dialect, a convolutional word filter P is defined. Then, the filter P is applied to each word in the dialect to produce a new set of features. We use a non-linear activation function such as the rectified linear unit (ReLU) for the convolution process and max-over-time pooling (Collobert et al., 2011; Kim, 2014) at the pooling layer to deal with the variable dialect size. After a series of convolutions with different filters of different heights, the most important features are generated. Then, this feature representation, Z, is passed to a fully connected penultimate layer that outputs a distribution over the different labels:

y = softmax(W Z + b),

where y denotes a distribution over the different dialect labels, W is the weight vector learned from the input word embeddings from the training corpus, and b is the bias term.
In the case of CNNs, concatenating words with various window sizes works like an n-gram model but does not capture long-distance word dependencies with shorter window sizes. A larger window size can be used, but this may lead to a data sparsity problem. In order to encode long-distance word dependencies, we use long short-term memory networks, which are a special kind of RNN capable of learning long-distance dependencies. LSTMs were introduced by Hochreiter and Schmidhuber (1997) in order to mitigate the vanishing gradient problem (Gers et al., 2000; Gers, 2001; Graves, 2013; Pascanu et al., 2013).
The model illustrated in Figure 2 is composed of a single LSTM layer followed by an average pooling and a softmax regression layer. Each dialect is represented as a sentence (S) in the input layer.
Thus, from an input sequence, S i,j , the memory cells in the LSTM layer produce a representation sequence h i , h i+1 , . . . , h j . Finally, this representation is fed to a softmax layer to predict the dialect classes for unseen input dialects. We modeled dialect classification as a sentence classification task. We tokenized the corpus with white space tokenizer. We performed multi-class 5-way classification on the given arabic data set containing 5 language dialects. We used Kim's (2014) Theano implementation of CNN 2 for training the CNN model and a variant of the standard Theano implementation 3 for training the LSTM network. We initialized and used the randomly generated embeddings in both the CNN and LSTM models in the range [−0.25, 0.25] .We used 80% of the training set for training and 20% of the data for validation set and performed 5-fold cross validation in CNN. In LSTM, we used 80% of the given training set for building the model and rest 20% of the data is used as development set. We updated input embedding vectors during the training.In the CNN approach, we used a stochastic gradient descent-based optimization method for minimizing the cross entropy loss during the training with the Rectified Linear Unit (ReLU) non-linear activation function. We used default window filter sizes set at [3, 4, 5] . In the case of LSTM, model was trained using an adaptive learning rate optimizer-adadelta (Zeiler, 2012) over shuffled mini-batches with the sigmoid activation function at input, output and forget gates and tanh non-linear activation function at cell state. Post competition we performed experiments without and with average pooling using LSTM networks and reported the results as shown in tables 5 and 6.Hyper Parameters. We used hyper-parameters such as drop-out for avoiding over-fitting), and batch size and learning rates on 20% of the cross-validation/development set. We varied batch sizes, drop-out rate, embedding sizes, and learning rate on development set. We obtained the best CNN performance with learning rate decay 0.95, batch size 50, drop-out 0.5, and embedding size 300 and ran 20 epochs on cross validated dataset. For LSTM, we got the best results on development set with learning rate 0.001, drop-out 0.5, and embedding size 300, batch-size of 32 and at 12 epochs. We used same settings similar to the development set but varied drop-out rate over [0.5,0.6,0.7] and obtained best results on test set using drop-out 0.7. We obtained best results on test set with drop-out 0.5 using average pooling.Pre-compiled Embeddings. We used the gensim (ehek and Sojka, 2010) word2vec program to compile embeddings from the given training corpus. We compiled 300-dimensional embedding vectors for the words that appear at least 3 times in the Arabic dialect corpus, and for rest of the vocabulary, embedding vectors are assigned uniform distribution in the range of [−0.25, 0.25] . We used these pre-compiled embeddings in LSTM and reported run2 results in the test set.
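As a rough illustration of the Kim (2014)-style CNN described above, here is a sketch in PyTorch (the paper itself used Kim's Theano implementation). The filter windows [3, 4, 5], embedding size 300, dropout 0.5, random initialization in [−0.25, 0.25], and the 5 dialect classes follow the reported settings; the number of filters per window (100) is an assumption.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class DialectCNN(nn.Module):
    def __init__(self, vocab_size, num_classes=5, emb_dim=300,
                 windows=(3, 4, 5), num_filters=100, dropout=0.5):
        super().__init__()
        # Randomly initialized embeddings in [-0.25, 0.25], updated during training.
        self.emb = nn.Embedding(vocab_size, emb_dim)
        nn.init.uniform_(self.emb.weight, -0.25, 0.25)
        self.convs = nn.ModuleList(
            [nn.Conv1d(emb_dim, num_filters, kernel_size=w) for w in windows])
        self.dropout = nn.Dropout(dropout)
        self.fc = nn.Linear(num_filters * len(windows), num_classes)

    def forward(self, tokens):                        # tokens: (batch, seq_len)
        x = self.emb(tokens).transpose(1, 2)          # (batch, emb_dim, seq_len)
        # ReLU convolutions followed by max-over-time pooling for each window size.
        pooled = [F.relu(conv(x)).max(dim=2).values for conv in self.convs]
        z = self.dropout(torch.cat(pooled, dim=1))    # feature representation Z
        return self.fc(z)                             # distribution over the 5 dialects
```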
2
We use two types of models, a feature-based model and an neural-based model, that could be applied to document-level understanding of relations between entities in order to investigate which models are suitable to GxE recognition and whether or not there are important issues particularly in this new task. There are three variants based on the neural-based model: (1) an attentive reader (Hermann et al., 2015), (2) a sequence-tosequence model (Sutskever et al., 2014) , and (3) a static RNN decoder. We envision that different characteristics of these models would lead to different performance, according to the types of task.In our experiment, the three models relied neither on any prior knowledge nor on external tools for collecting candidate environment terms. Even though such words as 'smoking' or 'alcohol' can be considered to have a higher probability to be a biological environment than other words, we did not use such information to prevent error propaga-tion and to investigate the possibility of handling newly introduced terms.We combined two models proposed by Chen et al. (2016) and Xu et al. (2016) : a model that adapts an entity-centric approach to the RC task, and a feature-based model that extracts chemicaldisease relations on a document level.Inspired by these two models, we use the following feature sets that we expect are suitable to our task. We describe each feature in detail below, where g, d, and e indicate gene terms, disease terms, and candidate environment terms, respectively: (1) shortest distance from e to g and d in the abstract, (2) whether e and g pair in the same sentence, (3) whether e and d pair in the same sentence, (4) whether e, d, and g pair in the same sentence, (5) whether e is included in MeSH (Medical Subject Headings) terms, (6) the frequency of e in an abstract, (7) the frequency of e in all abstracts, (8) whether e and g are connected by the dependency parser (De Marneffe et al., 2006) , and (9) whether e and d are connected by the dependency parser (De Marneffe et al., 2006) .Using these features, the model tried to classify all terms that are present in the abstract. If the model assigns 1 to a term, we regard it as an environment. If the model classifies all terms for a particular gene-disease combination as 0, we assume that there is no environment for this combination in the abstract.We propose three neural-based models; 1) an attentive reader, 2) a sequence-to-sequence model, and 3) a static RNN decoder. The three models comprise two parts: converting text to vector representation, called encoding, and predicting the vector to answer, called decoding. The encoding is the same in all the three models. We look over the encoding and then compare each decoding part of the three models.Our encoding with attention is based on the model proposed by Chen et al. (2016) , which shows better performance than any other encoders. The model runs in two steps, text encoding and attention, described in detail as follows.Text encoding: All words are mapped to ddimensional vectors using the PubMed/PMC word embedding model (Pyysalo et al., 2013) with a limited dictionary size (V ). We include special tokens, '<NOE>', that stands for no environment terms for the combination and '<UNK>', that stands for terms that are not included in the dictionary. The sequence of words in an abstract excluding stop words and special characters is encoded as p 1 , ..., p m ∈ R d where m is the number of words in the abstract. 
Then, we pass the sequence p 1 , ..., p m to bi-directional RNN:− → h i = RN N ( − → h i−1 , p i ) ∈ R h , i = 1, ..., m ← − h i = RN N ( ← − h i+1 , p i ) ∈ R h , i = m, ..., 1 p i = concat( − → h i , ← − h i ) = − → h i ← − h i ∈ R 2h , i = 1, ..., mwhere h is the dimension of hidden units of RNN. From p 1 , ..., p m , the model extracts marked gene and disease names. Let the set of gene names be {g 1 , ...g n } where n is the number of gene names in the abstract. Let the set of disease names be {d 1 , ...d l } where l is the number of disease names in the abstract. Then, we make the gene-disease combination vector c by elementwise summation of concatenated vectors:c = W T c ( g 1 d 1 ⊕ g 2 d 1 ... ⊕ g n d l )where W c ∈ R 2d×2h is the weight vector for gene and disease. Attention: In order to enable the model to focus more on evidence for identifying environment terms in the abstract, we used the attention mechanism. In the QA task, the vector of questions is projected to a document for calculating the probability of relevance degree between a question and a document. Likewise, we project the gene-disease combination vector (c) to the sequence of word vectors (p 1 , ...,p m ). We applied a bilinear term, a variant of attention mechanism, to combine the combination vector and the sequence of vectors:a = sof tmax(c T W bpi ), i = 1, ..., m where W b ∈ R 2h×2h .And then, we generated an attention vector by summation of projecting the bilinear term to the sequence of vectors: a = i ap i , i = 1, ..., mIn this section, we describe each decoding of the three models. Figure 2 illustrates an overview of three models. o a = W T aãwhere W a ∈ R 2h×V :We choose terms that come from their conjunction showing the top values of the output vector and that are represented in the abstract, and consider them as environment. However, if the top value of the output vector indicates '<NOE>', we conclude that there is no environment.(b) A sequence-to-sequence model A decoder in the sequence-to-sequence model dynamically generates tokens from '<SOE>' (start of token) to '<EOE>' (end of token). The model is based on a previous hidden vector, a previous token vector and an encoding vector that is an output vector of the encoding. The previous token vector is computed by projecting a token generated in previous time step to an embedding layer. We try to set the attention vector (ã) to the encoding vector as we expect that the attention vector is more properly tuned to extract terms depending on the gene-disease combinations than the original encoding vector:t i−1 = W T e o i−1 y i = RN N ( − → h i−1 , t i−1 ,ã) ∈ R 2h , i = 1, ..., ewhere e is the number of environment terms and W T e is an embedding layer.o i = argmax(W T s y i ), i = 1, ..., ewhere W T s ∈ R 2h×V . The o i is the index of the vocabulary and a sequence of tokens, (o 1,...,e ), is regarded as environment terms predicted by the model.If the first decoding token indicates '<NOE>' in the output sequence, we assume that there is no environment.As a modification to the sequence-to-sequence model, we suggest that the model uses a static RNN decoder, which does not use a previous token vector (t i−1 ). In particular, the model used randomly normalized token vectors. Because the model is needed to set the length of the decoder in advance, it seems to statically generate environment terms, which is an outstanding feature in comparison to the sequence-to-sequence model. 
Because our answer tokens are usually atomic and spread over the abstract, the previous output state, which is usually used when generating a long sequence of tokens, is not useful for our task.

y_i = RNN(h_{i−1}, t_i, ã) ∈ R^{2h}, i = 1, ..., e,

where e is the number of environment tokens, which is set in advance.

o_i = argmax(W_r^T y_i), i = 1, ..., e,

where W_r ∈ R^{2h×V}. Each o_i is an index into the vocabulary, and the sequence of tokens (o_1, ..., o_e) is regarded as the environment terms predicted by the model. As in the sequence-to-sequence model, if the first decoded token is '<NOE>', we assume that there is no environment.
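The bilinear attention and the gene-disease combination vector at the heart of the shared encoder can be sketched in PyTorch as follows; all dimensions, tensor names, and the toy inputs are illustrative assumptions rather than values from the paper.

```python
import torch
import torch.nn.functional as F

def combination_vector(genes, diseases, W_c):
    # genes: (n, d), diseases: (l, d) encoded mention vectors; W_c: (2d, 2h).
    # Element-wise sum over all concatenated gene-disease pairs, then project.
    pairs = torch.cat([genes.unsqueeze(1).expand(-1, diseases.size(0), -1),
                       diseases.unsqueeze(0).expand(genes.size(0), -1, -1)], dim=-1)
    return pairs.reshape(-1, pairs.size(-1)).sum(dim=0) @ W_c          # c: (2h,)

def attend(P, c, W_b):
    # P: (m, 2h) bi-RNN states p_1..p_m; c: (2h,); W_b: (2h, 2h).
    scores = P @ (W_b.t() @ c)        # c^T W_b p_i for each position i
    a = F.softmax(scores, dim=0)      # attention distribution over the abstract
    return a @ P                      # attention vector ã, shape (2h,)

# Toy usage with assumed dimensions (2h = 128, d = 100):
P = torch.randn(30, 128); genes = torch.randn(2, 100); diseases = torch.randn(3, 100)
W_c = torch.randn(200, 128); W_b = torch.randn(128, 128)
a_tilde = attend(P, combination_vector(genes, diseases, W_c), W_b)
```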
2
As the dataset was fully annotated at token-level, we consider the document layout analysis task as a text-based sequence labeling task. Under this setting, we evaluate three representative pre-trained language models on our dataset including BERT, RoBERTa and LayoutLM to validate the effectiveness of DocBank. To verify the performance of the models from different modalities on DocBank, we train the Faster R-CNN model on the object detection format of DocBank and unify its output with the sequence labeling models to evaluate.The BERT Model BERT is a Transformer-based language model trained on large-scale text corpus. It consists of a multi-layer bidirectional Transformer encoder. It accepts a token sequence as input and calculates the input representation by summing the corresponding token, segment, and position embeddings. Then, the input vectors pass multi-layer attention-based Transformer blocks to get the final contextualized language representation.The RoBERTa Model RoBERTa ) is a more powerful version of BERT, which has been proven successfully in many NLP tasks. Basically, the model architecture is the same as BERT except for the tokenization algorithm and improved training strategies. By increasing the size of the pretraining data and the number of training steps, RoBERTa gets better performance on several downstream tasks.The LayoutLM Model LayoutLM is a multi-modal pre-trained language model that jointly models the text and layout information of visually rich documents. In particular, it has an additional 2-D position embedding layer to embed the spatial position coordinates of elements. In detail, the LayoutLM model accepts a sequence of tokens with corresponding bounding boxes in documents. Besides the original embeddings in BERT, LayoutLM feeds the bounding boxes into the additional 2-D position embedding layer to get the layout embeddings. Then the summed representation vectors pass the BERT-like multilayer Transformer encoder. Note that we use the LayoutLM without image embeddings and more details are provided in the Section 4.2.The Faster R-CNN Model Faster R-CNN is one of the most popular object detection networks. It proposes the Region Proposal Network (RPN) to address the bottleneck of region proposal computation. RPN shares convolutional features with the detection network using 'attention' mechanisms, which leads to nearly cost-free region proposals and high accuracy on many object detection benchmarks.LayoutLM chooses the Masked Visual-Language Model(MVLM) and Multi-label Document Classication(MDC) as the objectives when pre-training the model. For the MVLM task,its procedure is to simply mask some of the input tokens at random keeping the corresponding position embedding and then predict those masked tokens. In this case, the final hidden vectors corresponding to the mask tokens are fed into an output softmax over the vocabulary. For the MDC task, it uses the output context vector of [CLS] token to predict the category labels. With these two training objectives, the LayoutLM is pre-trained on IIT-CDIP Test Collection 1.0 3 (Lewis et al., 2006) , a large document image collection.We organize the DocBank dataset using the reading order, which means that we sort all the text boxes (a hierarchy level higher than text line in PDFMiner) and non-text elements from top to bottom by their top border positions. The text lines inside a text box are already sorted top-to-bottom. We tokenize all the text lines in the left-to-right order and annotate them. 
Basically, all the tokens are arranged top-to-bottom and left-to-right, and this ordering is also applied to the columns of multi-column documents.

We fine-tune the pre-trained model on the DocBank dataset. As document layout analysis is treated as a sequence labeling task, each token is assigned the label with the maximum output probability. The number of output classes equals the number of semantic structure types.
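A minimal sketch of this token-level sequence labeling setup, using the Hugging Face implementation of BERT (LayoutLM additionally takes token bounding boxes as input). The label list is a hypothetical stand-in for DocBank's semantic structure types, and the snippet only runs inference on toy tokens rather than full fine-tuning.

```python
import torch
from transformers import BertTokenizerFast, BertForTokenClassification

# Hypothetical label set; DocBank assigns one label per semantic structure type.
LABELS = ["abstract", "author", "caption", "equation", "figure", "footer",
          "list", "paragraph", "reference", "section", "table", "title"]

tokenizer = BertTokenizerFast.from_pretrained("bert-base-uncased")
model = BertForTokenClassification.from_pretrained("bert-base-uncased",
                                                   num_labels=len(LABELS))

# Tokens are fed in reading order (top-to-bottom, left-to-right).
tokens = ["1", "Introduction", "Document", "layout", "analysis", "..."]
enc = tokenizer(tokens, is_split_into_words=True, return_tensors="pt",
                truncation=True, max_length=512)
with torch.no_grad():
    logits = model(**enc).logits              # (1, seq_len, num_labels)
pred = logits.argmax(dim=-1)                  # label with maximum probability per token
```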
2
The first objective of our work is to detect emotions expressed in customer turns and the second is to predict the emotional technique in agent turns. We treated these two objectives as two classification tasks. We generated a classifier for each task, where the classification output of one classifier can be part of the input to the other classifier. While both classifiers work at the level of turns, i.e., classify the current turn to emotions ex-pressed in it, they are inherently different. When detecting emotions in a customer turn, the turn's content is available at classification time (as well as the history of the dialogue) -meaning, the customer has already provided her input and the system must now understand what is the emotion being expressed. Whereas, when predicting the emotional technique for an agent turn, the turn's content is not available during classification time, but only the agent action and the history of the dialogue since the agent did not respond yet. This difference stems from the fact that in order to train an automated service agent to respond based on customer input, the agent's emotional technique needs to be computed before the agent generates its response sentence.We defined a different set of relevant emotion classes for each party in the dialogue (customer or agent), based on our above survey of research on customer service (e.g., (Gelbrich, 2010) ). Relevant customer emotions to be detected are: Confusion, Frustration, Anger, Sadness, Happiness, Hopefulness, Disappointment, Gratitude, and Politeness. Relevant agent emotional techniques to be predicted are: Empathy, Gratitude, Apology, and Cheerfulness.We utilized the context of the dialogue to extract informative features that we refer to as dialogue features. Using these features for emotion classification in written dialogues is novel, and as our experimental results show, it improves performance compared to a model based only on features extracted from the turn's text.We used the following features in our models.Comprises three contextual feature families: integral, emotional, and temporal. A feature can be global, namely its value is constant across an entire dialogue or it can be a local, meaning that its value may change at each turn. In addition, a feature can be historical (as will be discussed below).The integral family of features includes three sets of features:1. Dialogue topic: a set of global binary features representing the intent of the customer who initiated the support inquiry. Multiple intents can be assigned to a dialogue from a taxonomy of popular topics, which are adapted to the specific service. Examples of topics include ac-count issues, payments, technical problem and more 2 . This feature set captures the notion that customer emotions are influenced by the event that led the customer to contact the customer service (Steunebrink et al., 2009) . 2. Agent essence: a set of local binary features that represent the action used by the agent to address the last customer turn, independently of any emotional technique expressed. We refer to these actions as the essence of the agent turn. Multiple essences can be assigned to an agent turn from a predefined taxonomy. For instance, "asking for more information" and "offering a solution" are possible essences 3 . This feature set captures the notion that customer emotions are influenced by actions of agents (Little et al., 2013) . 3. 
Turn number: a local categorical feature representing the number of the turn.The emotional family of features includes Agent emotion and Customer emotion: these two sets of local binary features represent emotions predicted for previous turns. Our model generates predictions of emotions for each customer and agent turn, and uses these predictions as features to classify a later customer or agent turn with emotion expression.The temporal family of features includes the following features extracted from the timeline of the dialogue: 1. Customer/agent response time: two local features that indicate the time elapsed between the timestamp of the last customer/agent turn and the timestamp of the subsequent turn. This is a categorical feature with values low, medium or high (using categorical values yielded better results than using a continuous value). 2. Median customer/agent response time: two local categorical features defined as the median of the customer/agent response times preceding the current turn. The categories are the same as the previous temporal features.2 Currently this feature is not supported in social media. In other channels, for example, customer support on the phone, the customer is requested to provide a topic before she is connected to a support agent (usually using an IVR system). As this feature is inherent in other customer support channels, we assume that in the future it will also be supported in social media. 3 We assume that if the agent is human, then this input is known to her e.g., based on company policies. For the automated service agent case, we assume that the dialogue system will manage and provide this input. When history = 1, the historical features are the agent essence of turn t i−1 and the agent emotion predicted for turn t i−1 (purple solid line). When history = 2, we also add the customer emotion detected in turn t i−2 (red dashed line). Finally, if we set history = 3, then we also add the agent essence of turn t i−3 and the agent emotion predicted for turn t i−3 (blue dotted line), so in total we have 5 historical features. Notice that the customer emotion and agent essence features have different values based on their turn number. When representing a turn, t i , as a feature vector, we added some features originating in previous turns j < i to t i . These features, that are historical, include the emotional features family and local integral features (namely agent emotions, customer emotions and agent essence). We do not include the turn number of previous turns, as this is dependent on the turn number of t i . We denote these features as historical features. The value of history, that is a parameter of our models, defines the number of sequential turns that precede t i which propagate historical features to t i . Figure 3 shows an example of the historical features in relation to the classification of customer turn t i , for history size between 1 and 3.These features are extracted from the text of a customer turn, without considering the context of the dialogue. We use various state-of-the-art text based features that have been shown to be effective for the social media domain (Mohammad, 2012; Roberts et al., 2012) . These features include various n-grams, punctuation and social media features. Namely, unigrams, bigrams, NRC lexicon features (number of terms in a post associated with each affect label in NRC lexicon), and presence of exclamation marks, question marks, usernames, links, happy emoticons, and sad emoticons. 
We note that these are the features we used in our baseline model detailed below, in the description of our experiments.For both of the agent and customer turn classification tasks, we implemented two different models which incorporate all of the feature sets we have detailed above. We considered these tasks as multi-label classification tasks. This captures the notion that a party can express multiple emotions (e.g., confusion and anger) in a turn. We chose to use a problem transformation approach which maps the multi-label classification task into several binary classification tasks, one for each emotion class which participates in the multi-label problem (Tsoumakas and Katakis, 2006) . For each emotion e, a binary classifier is created using the one-vs.-all approach which classifies a turn as expressing e or not. A test sample is fully classified by aggregating the classification results from all independent binary classifiers. We next define our two modeling approaches.In our first approach we trained an SVM classifier for each emotion class as explained above. The feature vector we used to represent a turn incorporates dialogue and textual features. The history size is also a parameter of this model. Feature extraction for a training/testing feature vector representing a turn t i , works as follows. Textual features are extracted for t i if it is a customer turn, or for t i−1 if it is an agent turn (recall that the system does not have the content of agent turn t i at classification time). The temporal features are also extracted using time lapse values between previous turns as explained above. As discussed above, agent essence is assumed to be an input to our module, while agent emotion and customer emotion features are propagated from classification results of previous turns during testing (or from ground truth labels during training), where the number of previous turns is determined according to the value of history. These historical features are also appended to the feature vector of t i , similarly to (Kim et al., 2010) where this method was used for classifying dialogue acts.Our second approach to classifying dialogue turns is to use a sequence classification method (SVM-HMM), which classifies a sample sequence into its most probable tag sequence. For instance (Kim et al., 2010; Tavafi et al., 2013) used SVM-HMM and Conditional Random Fields for dialogue act classification. Since emotions expressed in customer and agent turns are different, we treated them as different classification tasks (like in our previous approach) and trained a separate classifier for each emotion. We made the following changes when using SVM-HMM:(1) We treated the emotion classification problem of turn t i as a sequence classification problem of the sequence t 1 , t 3 , ..., t i (i.e., only customer turns) if t i is a customer turn and t 2 , t 4 , ..., t i (i.e., only agent turns) if it is an agent turn. 2The SVM-HMM classifier generates models that are isomorphic to a k th -order hidden Markov model. Under this model, dependency in past classification results is captured internally by modeling transition probabilities between emotion states. Thus, we removed historical customer emotion (resp. agent emotion) feature sets when representing a feature vector for a customer (resp. agent) turn. 
(3) We note that in our setting we provide classifications in real time as the dialogue progresses, so at classification time we have access only to previous turns and global information, and we cannot change classification decisions for past turns. Thus, we tagged a test turn, t_i, by classifying the sequence that ends in t_i; t_i was then tagged with its sequence classification result.
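The problem-transformation step (one binary one-vs.-all classifier per emotion) can be sketched roughly in scikit-learn as below; the paper's own SVM implementation is not specified, and the feature vectors and labels here are toy stand-ins for the textual, dialogue, and historical features.

```python
import numpy as np
from sklearn.svm import LinearSVC
from sklearn.multiclass import OneVsRestClassifier
from sklearn.preprocessing import MultiLabelBinarizer

CUSTOMER_EMOTIONS = ["confusion", "frustration", "anger", "sadness", "happiness",
                     "hopefulness", "disappointment", "gratitude", "politeness"]

# Toy stand-ins: one feature vector per customer turn and the set of emotions
# annotated for that turn (a turn may express several emotions at once).
X = np.random.rand(6, 20)
y = [{"anger"}, {"confusion", "anger"}, {"gratitude"}, {"politeness"},
     {"frustration"}, {"happiness"}]

mlb = MultiLabelBinarizer(classes=CUSTOMER_EMOTIONS)
Y = mlb.fit_transform(y)

# Problem transformation: one independent binary one-vs.-all SVM per emotion class.
clf = OneVsRestClassifier(LinearSVC())
clf.fit(X, Y)
predicted = mlb.inverse_transform(clf.predict(np.random.rand(2, 20)))
```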
2
The overall methodology is split into three phases: preprocessing of data, extraction of features and finally evaluation of models and feature sets.Due to the unstructured format of the text used in social media, a set of filters were employed to reduce the noise while not losing useful information.1. A tweet-tokenizer was used 8 to parse the tweet and replace every username mentions, hashtags, and urls with <mention>, <hash-tag>and <url>respectively.2. The tokenized text then underwent stopword removal and was used as an input to Word-Net Lemmatizer provided by nltk (Bird and Loper, 2004) .3. Using Lancaster Stemmer, provided by nltk (Bird and Loper, 2004 ) stemmed text was also generated to be used as inputs for some feature extraction methods.The features extracted from the data set can be broadly classified into four types: Text-based features, tweet metadata features, User Historical tweets features and Social Graph-based features.• TF-IDF: Term Frequency-Inverse Document Frequency was used with the unigrams and bigrams from the stemmed text, using a total of 2000 features chosen by the tf-idf scores across the training dataset. The tf-idf scores were l2 normalized.• POS: Parts of Speech counts for each lemmatized text using The Penn Tree Bank (Marcus et al., 1993) from the Averaged Perceptron Tagger in nltk is used to extract 34 features.• GloVe Embeddings: The word embeddings for each word present in the pre-trained GloVe embeddings trained on Twitter (Pennington et al., 2014) were extracted, and for each tweet, the average of these is taken.• NRC Emotion: The NRC Emotion Lexicon (Mohammad and Turney, 2013) is a publicly available lexicon that contains commonly occurring words along with their affect category (anger, fear, anticipation, trust, surprise, sadness, joy, or disgust) and two polarities (negative or positive). The score along these 10 features was computed for each tweet.• LDA: Topic Modelling using the probability distribution over the most commonly occurring 100 topics was used as a feature for each tweet. LDA features were extracted by using scikit-learn's Latent Dirichlet Allocation module (Pedregosa et al., 2011) . Only those tokens were considered which occurred at least 10 times in the entire corpus.The count of hashtags, mentions, URLs, and emojis along with the retweet count and favorite count of every tweet was extracted and used as a feature to gain information about the tweets response by the authors environment.User Historical tweets: To gain information about the behavior of the author and their stylistic choices, a collection of their tweets were preprocessed, and stylistic and semantic features such as the averaged GloVe embeddings, NRC sentiment scores and Parts of Speech counts were extracted.Social Graph Features: Grover and Leskovec (2016) describe an algorithm node2vec for converting nodes in a graph (weighted or unweighted) into feature representations.This method has been employed by Mishra et al. (2018) in the task of abuse detection in tweets. node2vec vectors were generated for each of the graphs as introduced in Section 3.2.
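A rough scikit-learn sketch of two of the text-based features above, the uni/bigram TF-IDF vectors and the 100-topic LDA distribution; the toy tweets and the relaxed min_df are illustrative assumptions (on the full corpus, min_df=10 would enforce the 10-occurrence cutoff described above).

```python
import numpy as np
from sklearn.feature_extraction.text import TfidfVectorizer, CountVectorizer
from sklearn.decomposition import LatentDirichletAllocation

tweets = ["<mention> lov the new updat <url>",        # toy, stemmed and normalized
          "so tire of this <hashtag> weather"]

# 2000 unigram/bigram TF-IDF features over the stemmed text, l2-normalized.
tfidf = TfidfVectorizer(ngram_range=(1, 2), max_features=2000, norm="l2")
X_tfidf = tfidf.fit_transform(tweets)

# Topic distribution over 100 topics; min_df is relaxed here for the toy example.
counts = CountVectorizer(min_df=1).fit_transform(tweets)
lda = LatentDirichletAllocation(n_components=100, random_state=0)
X_lda = lda.fit_transform(counts)

X = np.hstack([X_tfidf.toarray(), X_lda])             # combined per-tweet features
```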
2
In this section, we briefly introduce several methods for news recommendation, including general recommendation methods and news-specific recommendation methods. These methods were developed in different settings and on different datasets. Some of their implementations can be found in Microsoft Recommenders open source repository 12 . We will compare them on the MIND dataset.LibFM (Rendle, 2012), a classic recommendation method based on factorization machine. Besides the user ID and news ID, we also use the content features 13 extracted from previously clicked news and candidate news as the additional features to represent users and candidate news. DSSM (Huang et al., 2013) , deep structured semantic model, which uses tri-gram hashes and multiple feed-forward neural networks for query-document matching. We use the content features extracted from previous clicked news as query, and those from candidate news as document. Wide&Deep (Cheng et al., 2016), a two-channel neural recommendation method, which has a wide linear transformation channel and a deep neural network channel. We use the same content features of users and candidate news for both channels. DeepFM (Guo et al., 2017) , another popular neural recommendation method which synthesizes deep neural networks and factorization machines. The same content features of users and candidate news are fed to both components.DFM (Lian et al., 2018) , deep fusion model, a news recommendation method which uses an inception network to combine neural networks with different depths to capture the complex interactions between features. We use the same features of users and candidate news with aforementioned methods. GRU (Okura et al., 2017) , a neural news recommendation method which uses autoencoder to learn latent news representations from news content, and uses a GRU network to learn user representations from the sequence of clicked news. DKN (Wang et al., 2018) , a knowledge-aware news recommendation method. It uses CNN to learn news representations from news titles with both word embeddings and entity embeddings (inferred from knowledge graph), and learns user representations based on the similarity between candidate news and previously clicked news. NPA (Wu et al., 2019b) , a neural news recommendation method with personalized attention mechanism to select important words and news articles based on user preferences to learn more informative news and user representations. NAML (Wu et al., 2019a) , a neural news recommendation method with attentive multi-view learning to incorporate different kinds of news information into the representations of news articles. LSTUR , a neural news recommendation method with long-and short-term user interests. It models short-term user interest from recently clicked news with GRU and models longterm user interest from the whole click history. NRMS (Wu et al., 2019c) , a neural news recommendation method which uses multi-head selfattention to learn news representations from the words in news text and learn user representations from previously clicked news articles.
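As a point of reference for the GRU-based baselines above (e.g., Okura et al., 2017, and the short-term component of LSTUR), a user vector can be read off the final GRU state over clicked-news representations and matched against candidates by dot product. The sketch below is a generic illustration with assumed dimensions, not the implementation used in the Microsoft Recommenders repository.

```python
import torch
import torch.nn as nn

class GRUUserModel(nn.Module):
    """Score candidate news by the dot product with a GRU-derived user vector."""
    def __init__(self, news_dim=256):
        super().__init__()
        self.gru = nn.GRU(news_dim, news_dim, batch_first=True)

    def forward(self, clicked_news, candidate_news):
        # clicked_news: (batch, history_len, news_dim) clicked-news representations
        # candidate_news: (batch, num_candidates, news_dim)
        _, h = self.gru(clicked_news)          # user interest from the click sequence
        user = h[-1]                           # (batch, news_dim)
        return torch.bmm(candidate_news, user.unsqueeze(-1)).squeeze(-1)  # click scores
```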
2
In order to find qualia relations for entities in REO, we looked for ways to automatically extract them from SUMO. By examining around a hundred nodes in SUMO, a number of relations were found to be useful in extracting qualia. For instance, the relation hasPurpose in SUMO directly specifies the purpose (hence the telic quale) of a given entity. If a given entity is the second argument of the relation instrument, then it is a tool used to bring about an event which is the first argument of the relation instrument in the same axiom (e.g., (instrument ?TRANSFER ?ARTERY), where ARTERY is a tool enabling a TRANSFER event). Therefore, the first argument could be extracted as the telic quale of this entity. In the same way, a number of relations, including part and all its sub-relations, were found to lead us to the constitutive quale. For instance, we find that Syllable is a part of a Word, through an axiom roughly like (part Syllable Word). Though the sub-relations of part have fine-grained semantic differences, they all have the same weight here for our purposes. These include relations such as initialPart, initiallyContainsPart, partTypes, typicalPart, etc. The SUMO relation result leads us to the agentive quale, as its second argument is the output (or the product) of its first argument (e.g., (result ?WRITE ?TEXT) where TEXT is the output of WRITE event).In addition to using SUMO relations, we used each node's documentation in SUMO to automatically extract telic quale using fairly straightforward regular expressions, since a number of entities were found in our initial examination to have their purpose described in their documentation, but not in their axioms. For instance, we extracted parts of the documentation that occur after terms such as 'purpose is,' 'intended to,' 'designed for,' etc. as potential candidates for the telic quale.As the formal quale is basically an IS-A relationship, we extracted the parent of each node as its formal quale. Of course the inheritance is always at work for all the four types of qualia relations. For instance, an Adverb is a Word (parent), but is also a LinguisticExpression (grandparent).The output of the function that automatically extracts qualia 1 currently can have two forms. If they are extracted from SUMO axioms, we have a set of functions to extract the actual SUMO node that represents the desired quale. The second format of the output occurs when we extract qualia from the documentation. In this case, what we extract is actually plain English -a part of a sentence. In writing functions to extract sensible parts of sentences, we decided to sacrifice recall in favor of precision. At the end, we managed to extract 112 agentive, 762 telic, and 481 constitutive qualia relations. We also have a separate function that takes a SUMO entity as input and returns all possible qualia found in SUMO for that entity. For instance, Building has 13 total qualia found in SUMO, including 13 constitutive, 1 telic, and 1 agentive qualia relations. Table 1 In order to evaluate the automatically extracted qualia relations from SUMO, we designed an on-line annotation system, where we asked the annotators to decide whether the automatically extracted quale relation was reasonable or unreasonable, or indicate if they're not sure. We also had a comment box for each entry, and asked the annotators to provide comments. 
For instance, for the telic quale, our instruction for commenting was as follows: If you feel the given function is reasonable but it's not at all how you would phrase or describe the function, then please provide your own description/phrasing of the function in the comment box. You can also use to comment box to suggest a function for the unreasonable cases. If you're unsure, comment on what makes you unsure. In collecting such comments in addition to the reasonable judgments, we hope to gain insights into better alternatives for the quale relation.We also provided SUMO documentation for the entity in question for cases where the annotator might not be familiar with the entity. The need for this was revealed in our pilot testing of the annotation system, where one of the annotators had not heard of some entities such as Lanai, which according to SUMO "refers to a roofed outdoor area Adjacent to a Building often furnished and used as a living room." So we decided to include SUMO documentation for each entity just in case the annotators are not familiar with it. Figure 2 shows a sample entry in our evaluation system. To ensure that the annotators are not judging haphazardly and without thinking, we inserted 3 random attention tests in each page of annotation. The attention tests were completely unreasonable possibilities, in which the extracted function for one entity was paired with a totally different entity (e.g., Entity: AerobicExerciseDevice, Telic: to attach one thing to something else). So in each page, we had 25 real tasks and 3 attention tests. Examining the accuracy of each annotator, we had to throw out the data from one of them with only 73% accuracy on the attention tasks; those tasks were re-annotated by another annotator.It would be prohibitively difficult to have a measure of recall for this task of automatic extraction, so we limit our evaluation to the precision of the results, using human annotators' judgments as the gold standard. The 'unreasonable' judgment for the constitutive relation was mostly applied to the ones taken from SUMO axioms, where the constitutive relation was too general for our purposes. For instance, the extraction found Object as a component part of WireCoil, and Physical as what constitutes Solenoid. Despite being true, they're too general to be accepted by a human as a reasonable part-whole relation. Another reason for marking the extracted constitutive relation as unreasonable was the jargon used in particular professions with which an annotators was not familiar, such as biology or chemistry. For instance, an AtomicGroup is part of a Molecule, but it's been marked as unreasonable with the comment: "part whole switch." Some other errors were due to an ordering mistake in SUMO axioms, such as (part Penne Hole) (which means Penne is part of Hole), whereas the reverse is true. Still others were due to a bug in the extraction, which ignored negation in axioms when finding constitutive relations, such as BloodTypeB which does NOT contain AntigenA according to SUMO axioms. We extracted it as a part by ignoring the negation. Thus, the results of the annotation were illuminating: helping to pinpoint where SUMO is too general for our purposes, or where our extraction script needs refinement. The unreasonable judgments for the telic quale were mostly due to not yet capturing and combining inherited relations from SUMO. At any particular node, SUMO underspecifies the definition because it assumes inheritance from ancestor nodes. 
Although we assume this as well, the qualia relations we have extracted are limited to the ones found with direct mentioning of that entity. For instance, for the entity MilitaryVehicle, the telic role extracted was "MilitaryOrganization uses it," which was marked as unreasonable with the suggested alternative "Provide transportation for any military organization." However, MilitaryVehicle has Vehicle as its parent and inherits from it. Therefore, the telic quale for Vehicle, which is extracted as "Translocation," would be inherited by MilitaryVehicle.This sort of error confirms our plan to inherit qualia relations and add the quale for each entity to all its children. Not only will these types of errors be eliminated, the number of entities with qualia relations will increase significantly. Currently, many lower level entities have no specific qualia to be extracted at their SUMO node but have very informative quale that could be inherited from parent nodes. Thus, we would have a significant increase in coverage (recall), while precision is guaranteed to remain high.Yet other instances of unreasonable telic quale were due to the wording used in SUMO. For instance, the following pairs have been judged as unreasonable. Entity: HearingProtection -Function: protect Human from Injuring caused by RadiatingSound Entity: PerformanceStage -Function: location of Demonstrating Entity: Campground -Function: to have MobileResidences These may not be how a human would describe the functions, but SUMO has tried to maximize the grounding of its definitions by using its other defined entities within them, leading to a more interconnected network of concepts. Demonstrating, for instance, is not the word people use to talk about the function of a performance stage, but according to the documentation of Demonstrating in SUMO, it would cover 'software demos, theatrical plays, lectures, dance and music recitals, museum exhibitions, etc.' Given the connection of REO concepts to SUMO, we need not be overly concerned about these types of seemingly inaccurate results.6 Incorporating Qualia Relations into REO
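The documentation-based part of the telic extraction described earlier boils down to cue-phrase regular expressions followed by some clean-up; a minimal sketch is given below. The cues 'purpose is', 'intended to', and 'designed for' are quoted from the text above, while the remaining cues and the simplified clean-up are assumptions.

```python
import re

# Cue phrases after which SUMO documentation tends to state an entity's purpose.
TELIC_CUES = r"(?:purpose is|intended to|designed for|used as|used to|used for)"
TELIC_RE = re.compile(TELIC_CUES + r"\s+(.+?)(?:[.;]|$)", re.IGNORECASE)

def extract_telic(documentation):
    """Return a candidate telic quale from a SUMO documentation string, or None."""
    match = TELIC_RE.search(documentation)
    return match.group(1).strip() if match else None

doc = ("refers to a roofed outdoor area adjacent to a Building, "
       "often furnished and used as a living room.")
print(extract_telic(doc))   # -> "a living room"
```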
2
The CAM design integrates multiple matching strategies at different levels of representation and various abstractions from the surface form to compare meanings across a range of response variations. The approach is related to the methods used in machine translation evaluation (e.g., Banerjee and Lavie, 2005; Lin and Och, 2004) , paraphrase recognition (e.g., Brockett and Dolan, 2005; Hatzivassiloglou et al., 1999) , and automatic grading (e.g., Leacock, 2004; Marín, 2004) .To illustrate the general idea, consider the example from our corpus in Figure 2 . We find one string identical match between the token was occurring in the target and the learner response. At the noun chunk level we can match home with his house. And finally, after pronoun resolution it is possible to match Bob Hope with he.The overall architecture of CAM is shown in Figure 3. Generally speaking, CAM compares the learner response to a stored target response and decides whether the two responses are possibly different realizations of the same semantic content. The design relies on a series of increasingly complex comparison modules to "align" or match compatible concepts. Aligned and unaligned concepts are used to diagnose content errors. The CAM design supports the comparison of target and learner responses on token, chunk and relation levels. At the token level, the nature of the comparison includes abstractions of the string to its lemma (i.e., uninflected root form of a word), semantic type (e.g., date, location), synonyms, and a more general notion of similarity supporting comparison across part-of-speech.The system takes as input the learner response and one or more target responses, along with the question and the source reading passage. The comparison of the target and learner input pair proceeds first with an analysis filter, which determines whether linguistic analysis is required for diagnosis. Essentially, this filter identifies learner responses that were copied directly from the source text.Then, for any learner-target response pair that requires linguistic analysis, CAM assessment proceeds in three phases -Annotation, Alignment and Diagnosis. The Annotation phase uses NLP tools to enrich the learner and target responses, as well as the question text, with linguistic information, such as lemmas and part-of-speech tags. The question text is used for pronoun resolution and to eliminate concepts that are "given" (cf. Halliday, 1967, p. 204 and many others since). Here "given" information refers to concepts from the question text that are reused in the learner response. They may be necessary for forming complete sentences, but contribute no new information. For example, if the question is What is alliteration? and the response is Alliteration is the repetition of initial letters or sounds, then the concept represented by the word alliteration is given and the rest is new. For CAM, responses are neither penalized nor rewarded for containing given information. Table 1 contains an overview of the annotations and the resources, tools or algorithms used. 
The choice of the particular algorithm or implementation was primarily based on availability and performance on our development corpus -other implementations could generally be substituted without changing the overall approach.Language Processing Tool Sentence Detection, MontyLingua (Liu, 2004 ) Tokenization, Lemmatization Lemmatization PC-KIMMO (Antworth, 1993) Edit distance (Levenshtein, 1966) , SCOWL word list (Atkinson, 2004 ) Part-of-speech Tagging TreeTagger (Schmid, 1994) Noun Phrase Chunking CASS (Abney, 1997) Lexical Relations WordNet (Miller, 1995) Similarity Scores PMI-IR (Turney, 2001; Mihalcea et al., 2006 ) Dependency Relations Stanford Parser (Klein and Manning, 2003) After the Annotation phase, Alignment maps new (i.e., not given) concepts in the learner response to concepts in the target response using the annotated information. The final Diagnosis phase analyzes the alignment to determine whether the learner re- sponse contains content errors. If multiple target responses are supplied, then each is compared to the learner response and the target response with the most matches is selected as the model used in diagnosis. The output is a diagnosis of the input pair, which might be used in a number of ways to provide feedback to the learner.To combine the evidence from these different levels of analysis for content evaluation and diagnosis, we tried two methods. In the first, we handwrote rules and set thresholds to maximize performance on the development set. On the development set, the hand-tuned method resulted in an accuracy of 81% for the semantic error detection task, a binary judgment task. However, performance on the test set (which was collected in a later quarter with a different instructor and different students) made clear that the rules and thresholds thus obtained were overly specific to the development set, as accuracy dropped down to 63% on the test set. The handwritten rules apparently were not general enough to transfer well from the development set to the test set, i.e., they relied on properties of the development set that where not shared across data sets. Given the variety of features and the many different options for combining and weighing them that might have been explored, we decided that rather than hand-tuning the rules to additional data, we would try to machine learn the best way of combining the evidence collected. We thus decided to explore machine learning, even though the set of development data for training clearly is very small.Machine learning has been used for equivalence recognition in related fields. For instance, Hatzivassiloglou et al. (1999) trained a classifier for paraphrase detection, though their performance only reached roughly 37% recall and 61% precision. In a different approach, Finch et al. (2005) found that MT evaluation techniques combined with machine learning improves equivalence recognition. They used the output of several MT evaluation approaches based on matching concepts (e.g., BLEU) as features/values for training a support vector machine (SVM) classifier. Matched concepts and unmatched concepts alike were used as features for training the classifier. Tested against the Microsoft Research Paraphrase (MSRP) Corpus, the SVM classifier obtained 75% accuracy on identifying paraphrases. 
But it does not appear that machine learning techniques have so far been applied to or even discussed in the context of language learner corpora, where the available data sets typically are very small.To begin to address the application of machine learning to meaning error diagnosis, the alignment data computed by CAM was converted into features suitable for machine learning. For example, the first feature calculated is the relative overlap of aligned keywords from the target response. The full list of features are listed in Table 2 . Percent of token alignments that were token-identical 9. Similarity Match Percent of token alignments that were similarity-resolved 10. Type Match Percent of token alignments that were type-resolved 11. Lemma Match Percent of token alignments that were lemma-resolved 12. Synonym Match Percent of token alignments that were synonym-resolved 13. Variety of Match Number of kinds of token-level (0-5) alignments Table 2 were used to train the detection classifier. For diagnosis, a fourteenth feature -a detection feature (1 or 0 depending on whether the detection classifier detected an error) -was added to the development data to train the di-agnosis classifier. Given that token-level alignments are used in identifying chunk-and triple-level alignments, that kinds of alignments are related to variety of matches, etc., there is clear redundancy and interdependence among features. But each feature adds some new information to the overall diagnosis picture.The machine learning suite used in all the development and testing runs is TiMBL (Daelemans et al., 2007) . As with the NLP tools used, TiMBL was chosen mainly to illustrate the approach. It was not evaluated against several learning algorithms to determine the best performing algorithm for the task, although this is certainly an avenue for future research. In fact, TiMBL itself offers several algorithms and options for training and testing. Experiments with these options on the development set included varying how similarity between instances was measured, how importance (i.e., weight) was assigned to features and how many neighbors (i.e., instances) were examined in classifying new instances. Given the very small development set available, making empirical tuning on the development set difficult, we decided to use the default learning algorithm (knearest neighbor) and majority voting based on the top-performing training runs for each available distance measure.
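Since TiMBL's default learner is k-nearest neighbor, a rough scikit-learn analogue of the final diagnosis classifier over the 14 features would look like the sketch below; the feature values here are toy stand-ins, and the neighborhood size and distance weighting are assumptions.

```python
import numpy as np
from sklearn.neighbors import KNeighborsClassifier

# Toy stand-ins for the 14-feature diagnosis vectors (13 alignment features plus
# the binary detection feature); real values come from the CAM alignment phase.
X_train = np.random.rand(40, 14)
y_train = np.random.randint(0, 2, size=40)     # 1 = content error, 0 = no error

# TiMBL's default learner is k-nearest neighbor; this mirrors that choice.
knn = KNeighborsClassifier(n_neighbors=5, weights="distance")
knn.fit(X_train, y_train)
diagnosis = knn.predict(np.random.rand(1, 14))
```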
2
Our method has a very strong assumption, which oversimplifies the problem but it also gives the chance of recognizing some patterns. The assumption is that all lemmas and their inflections have the following form for all languages STEM +SUFFIX → STEM +SUFFIX as illustrated in the following examples for English and Spanish:play → playing play+ → play+ing jugar → jugando jug+ar → jug+andoWe use a pipeline that includes four different steps. These are described below.In the first step, for each lemma l in the lemma list L and each word w in the corpus/dictionary D, all possible splitsl 1 1 +l |l| 2 , l 2 1 +l |l| 3 ,.., l |l| 1 + , and w 1 1 +w |w| 2 , w 2 1 +w |w| 3 ,.., w |w| 1 + are generated. (We use v j i , with 0 ≤ i ≤ j ≤ |v|, to denote the sub- string v i ..v j of a string v.)We assume the stem (the hypothesized STEM) to be nonempty but allow the suffix to be empty. For the Spanish lemma jugar we thus get j +ugar , ju+gar , jug+ar , juga+r , and jugar+ .In the second step, we determine the inflections of the regular verbs of the language. These will be used for the estimation of the morphological richness r m of the lemmas (verbs) in the third step. The morphological richness of the lemmas can be identified with the number of combinations of those tense, aspect, mood, and agreement features that can be distinctively morphologically realized. Because the morphological richness of the lemmas (verbs) does not tend to vary much across the different lemmas (verbs), even if they inflect semi-irregularly or irregularly, we assume that each lemma has r m different inflections. r m thus provides an upper bound on the number of cells of the paradigms of the language/corpus.For determining r m we identify the inflections of the lemmas with regular inflection. First, we determine for each splitted lemma l = r+s the number of potential inflections of the hypothesized stem r, that is r+s , in D. This is the set S r+s = {s | r+s ∈ D}. Then, for regularly inflecting lemmas, S r+s will be large for the actual split but also for any split within the stem. This is illustrated for the German lemma spielen (play) with the actual split spiel+en below.S spiele+n = { , n} S spiel+en = {e, st, t, en, ...} S spie+len = {le, lst, lt, len, ...} To accommodate for this deficiency, we also consider pairs of splitted lemmas l = r+s, l = r +s with distinct stem endings r |r| = r |r | and we determine the splitî of s that yields the maximum number of common inflections:i = max i∈{0,..,|s|} |S rs i 0 +s |s| i+1 ∩ S r s i−1 0 +s |s| i |We choose for each lemma pair l, l the splitŝ r+ŝ andr +ŝ, withr = rsî 0 ,r = r sî 0 , and s = s |s| i+1 , and consider their common suffixes in D: Sr +ŝ ∩ Sr +ŝ .Because regularly inflecting verbs tend to share their inflections, this lemma pairing allows us to reliably predict that, for example, the stems of spielen and gehen are spiel and geh.S spiele+n ∩ S gehe+n = { , n} S spiel+en ∩ S geh+en = {e, st, t, en, ...} S spie+len ∩ S ge+hen = ∅ Finally, for all splitted lemmasr+ŝ we collect the suffixes in Sr +ŝ in one bag.The goal of this step is to group different realizations of the same suffix. The previous step captures relevant suffixes, but in some cases, some parts of the stem are also included in these suffixes, or there might be some slight differences, because of morphophonological changes. 
In order to group them, we employ K-Means.When using K-means we need a function that calculates the distance between the elements, and based on this distance, the instances will be clustered. We decided to employ a modified version of Minimum Edit Distance. Our modified version tries to punish changes that are made at the end of the suffix. The assumption in this case, is that changes at the beginning of the suffix are more likely to be caused by the stem (and they could be the same suffix). On the other hand, if there are changes at the end, it would be a different suffix. Our edit distance algorithm allows insertion and deletion as possible changes. We also assume that it is worse to substitute a vowel with a consonant, than changing a vowel with a vowel. Therefore, this would happen:Distance (era, bra) > Distance (era, ara)ntar ntaron aron ar ntar 0.000 0.939 0.778 0.094 ntaron -0.000 0.015 0.832 aron --0.000 0.656 ar ---0.000We estimate that the number of paradigms (r m ) in a language is approximately the third of the number of different suffixes found in the previous step. This number was estimated based on the behaviour of the model considering Swedish data. Therefore, K-means will reduce the number of possible suffixes to the third (this is a parameter that will be tuned in the future). For example, one of the clustered groups found in this step considering the Spanish data would be this: {rá, erá, derá, ará, irá}. This corresponds to the suffix of future simple, third person singular.In the previous steps we will have generated possible suffixes for each cell in a paradigm. Now, the goal is to make a guess of how a word form should be generated. For example, in Spanish, if we have the lemma sanar, and we want to build the first person singular of the future simple tense (sanaré), we could expect the lemma to be combined with suffixes likeé, ré, aré, iré, and so on. These suffixes would be the output of the previous step.First of all, for each lemma, the model needs to decide the position in which we will split the lemma, as following the previous assumption a word will have this shape: STEM+SUFFIX. In order make that decision, we check how often we associate each lemma with a specific stem in the output of step 2, and use the most frequently occurring stem for all the suffixes. For example, for the verb sanar, in Spanish, we get these frequencies: san:15, sana:21, sa:1, and therefore, we would use the stem sana.We, then, try to join that stem with the clustered suffixes. Each stem will be joined with one suffix from each cluster. In order to decide which is the best suffix, we use a bigram character-level language model to estimate the probability of the output sequences, trained on the input bible. These are the probabilities that we get if we consider the example of the stem sana (from sanar) and suffixeś e, ré, aré and iré in Spanish.Candidate output Probability sanaé 0.0 sanaré 4.097e − 07 sanaaré 1.272e − 10 sanairé 2.201e − 10Obviously, in this case, the conjugation sanaré would be returned.At this point, the model produced a little amount of suffixes. Then, we decided to extend the list of input lemmas, so that it can find new suffixes and increase, therefore, the recall of the model. We obtain new lemmas by training a very simple verb classifier. We create a simple dataset with the input lemmas and some random words from the corpus. The input lemmas will be tagged as verbs and the random words will be tagged as nonverbs. 
We then train a simple Logistic Regression model, using character uni-, bi- and trigrams to represent each word. We also include word boundary symbols. For instance, in Spanish we would have cases like:

Word    Features (trigrams)          Class
comer   <co, com, ome, mer, me>      V
plaza   <pl, pla, laz, aza, za>      NV

Using this approach we obtain new verbs that can be used in our pipeline. The model that uses the extended list of lemmas for extracting suffixes is called the Flexible model, while the initial model (the one that uses only the initial lemmas as input) is called the Non-flexible model. Unfortunately, we could not surpass the baseline model in any of the languages. Among the development languages, Portuguese and Swedish are the ones best captured by the Non-flexible model. Among the test languages, Spanish and English are the ones best modeled by the Non-flexible model. It also seems that while the Flexible model might have better recall, the obtained result is not good enough, and therefore it still requires some filtering.
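A minimal scikit-learn sketch of the verb classifier just described; the toy words, the exact vectorizer settings, and the boundary symbols '<' and '>' are assumptions consistent with the example above.

```python
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# Toy training data: input lemmas labelled V, random corpus words labelled NV.
words  = ["comer", "jugar", "sanar", "plaza", "casa", "ayer"]
labels = ["V", "V", "V", "NV", "NV", "NV"]

def add_boundaries(word):
    # Word-boundary symbols so that n-grams such as "<co" and "er>" are captured.
    return "<" + word + ">"

clf = make_pipeline(
    CountVectorizer(analyzer="char", ngram_range=(1, 3), preprocessor=add_boundaries),
    LogisticRegression(max_iter=1000),
)
clf.fit(words, labels)

# Candidate corpus words predicted as verbs extend the lemma list (Flexible model).
new_verbs = [w for w in ["cantar", "mesa"] if clf.predict([w])[0] == "V"]
```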
2
The experiments are designed for supervised classification on the type level, i.e., we do not try to decide whether a particular verb coordination in a given context is an SPC, but rather whether the verb coordination, given all its contexts, tends to function as a pseudo-coordination. For this we need a labeled data set and a suitable set of features. These features were derived from previous work and adapted to our settings. The values for each feature are based on all evidence for a verb coordination in the current data set.Once we have trained and tested our classifier on the labeled data set, we then apply the classifier on unknown instances and evaluate the top SPC candidates according to the classifier, i.e., try to use the classifier as an SPC discovery procedure.Using the Weka tool (Hall et al., 2009) , we experimented with different types of machine learning algorithms, all with similar results. A requirement was that the classifier should be able to produce a real-valued classification to enable ranking. For no other strong reason, we ended up using a random forest classifier (Breiman, 2001) . A random forest classifier consists of a combination of decision trees where features are randomly extracted to build a set of decision trees. A decision tree is a tree-structured graph where each node corresponds to a test on a feature. A path from the root to a leaf represents a classification rule.The features are decided upon beforehand and the values for each node are learned based on training data, with the aim to best separate the positive instances from the negative instances. In our case, the instances to be classified are the verb coordinations,(V i ,V j ), that are considered positive if they are in the class SPC, and negative if they do not.The classifier is trained and tested on labeled data from both the positive and negative class. Training and testing are performed on mutually exclusive parts of the labeled data in a stratified ten-fold cross validation. The classification results are then averaged over all ten folds.The result according to the test data is presented in a confusion matrix with four classes: true positive (TP), true negative (TN), false positive (FP), and false negative (FN). The true positive and true negative classes contain those instances that have been correctly classified. The false positive class contains all non-SPC instances that have been misclassified as SPC, and conversly, the false negative class contains all SPC instances misclassified as non-SPC.For each verb coordination (V i ,V j ), we derived a set of features based on the evidence in our data set. Our features were derived from Hilpert and Koops (2008) , Teleman et al. (1999) , Tsvetkov and Wintner (2011) (a work on classifying multiword expressions, a task similar to this one), as well as our own observations. The features generally measured closeness and order, as well as represented negative tests, and the features were real-valued features rather than binary, i.e., a test like "is the word både 'both' used before the verb coordination?" was translated into "how often is the word både used before the verb coordination?". In particular when working with unedited text such as blogs, real-valued features can help reduce the effects of noise.The features used by our classifier are described below.1. frequency Frequency of (V 1 ,V 2 ), normalized by• the maximum frequency of any verb coordination • the average frequency of all verb coordinations 2. 
closeness How often are V 1 and V 2 separated by words other than och 'and'?3. inverse order How often does (V 2 ,V 1 ) occur in relation to the frequency of (V 1 ,V 2 )?4. inverse frequency Frequency of (V 2 ,V 1 ), normalized using the maximum frequency of any verb coordination.5. inverse closeness Similar to test 2, but for(V 2 ,V 1 ).6. both How often is både 'both' used in conjunction to the verb coordination?7. between How many words appear on average between V 1 and V 2 ?8. spread How many different V can be found with V 1 ? Normalized by the maximum spread of all V 1 .9. PMI Pointwise mutual information aslog(p(V 1 ,V 2 )/(p(V 1 ) * p(V 2 ))) where p(V i )is the relative frequency of verbV i and p(V 1 ,V 2 )is the relative frequency of the verb coordination.10. not How often does the word inte 'not' fol-low V 1 : V 1 inte och V 2 ?11. tense How often do V 1 and V 2 share the same tense?12. pos tags before Distribution of the three most common pos-tags before the verb coordination.13. pos tags after Distribution of the three most common pos-tags after the verb coordination.Since the classification is done on the type level, it is unavoidable that we sometimes misclassify individual instances. Moreover, since the extraction of verb coordinations is currently done without any sophistication, some chains of verb coordinations can be misinterpreted, e.g., Jag var ute och gick och hittade min bok 'I was out walking and found my book' will probably be misclassified as SPC, since gick och hittade is erroneously extracted, a verb coordination that tends to be an SPC.
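As an illustration of how two of these features can be computed from raw counts, the sketch below derives the PMI feature (feature 9) and the inverse-order feature (feature 3); the count dictionaries are hypothetical stand-ins for the statistics gathered from the corpus.

import math

def pmi(pair_counts, verb_counts, v1, v2):
    # Pointwise mutual information of a verb coordination (V1, V2):
    # log( p(V1,V2) / (p(V1) * p(V2)) ), using relative frequencies.
    n_pairs = sum(pair_counts.values())
    n_verbs = sum(verb_counts.values())
    p_pair = pair_counts[(v1, v2)] / n_pairs
    p_v1 = verb_counts[v1] / n_verbs
    p_v2 = verb_counts[v2] / n_verbs
    return math.log(p_pair / (p_v1 * p_v2))

def inverse_order(pair_counts, v1, v2):
    # How often (V2, V1) occurs relative to the frequency of (V1, V2).
    return pair_counts.get((v2, v1), 0) / max(pair_counts[(v1, v2)], 1)

The resulting real-valued feature vectors can then be fed to any classifier that outputs a ranking score, for instance scikit-learn's RandomForestClassifier with predict_proba under stratified ten-fold cross-validation, mirroring the Weka setup described above.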
2
In MUKAYESE, we focus on under-researched tasks of NLP in the Turkish language. After defining the task and assessing its importance, we construct the following three key elements for each benchmark:

Datasets are the first element to consider when it comes to a benchmark. We define the minimum requirements of a benchmark dataset as follows: (i) accessible and of reasonable size; (ii) of satisfactory quality; (iii) publicly shareable and compliant with applicable regulations (e.g., GDPR and licensing). We chose the dataset sizes in a task-specific manner: unless used in a few-shot setting, benchmarks with small datasets lack generalizability, and models trained on them might suffer from overfitting. On the other hand, training models on enormous datasets might be costly and inefficient (Ethayarajh and Jurafsky, 2020). Another feature to assess is the quality of the dataset. A manually annotated dataset with a low Inter-annotator Agreement (IAA) rate is not suitable for benchmarking. Moreover, to build a generalizable benchmark, we need to consider using a dataset representing the general domain. For instance, sentence segmentation methods for editorial texts do not work on user-generated content such as social media posts, as we show in Subsection 6.4.

Metrics are the second element of benchmarks. We need to decide on one or more evaluation metrics to evaluate and compare methodologies. In order to do so, we have to answer the following questions: (a) Does this metric measure what our task aims to do? (b) How well does it correlate with human judgment? (c) Are there any issues/bugs to consider in these metrics? (For example, using accuracy to measure performance on an unbalanced set does not give a representative idea of model performance.)

Baselines are the final element of benchmarking. In order to characterize the performance of different methodologies, it is better to diversify our baselines as much as possible. For instance, we can compare pretrained vs. non-pretrained approaches, rule-based systems vs. trained systems, or unsupervised vs. supervised models.
2
Following prior work , we translate the i-th source sentence x i into the i-th target sentence y i in the presence of extra source contexts c = (x i−1 , x i+1 ), where x i−1 and x i+1 refer to the predecessor and successor of x i respectively. We adopt Transformer as the model architecture of pre-training and machine translation. The model is trained by minimizing the negative log-likelihood of target sequence y conditioned on the source sequence x, i.e., L = −logp(y|x). Readers can refer to Vaswani et al. (2017) for more details. We introduce our approach based on EN→DE Doc-MT. Figure 1 shows the sketch of our context-interactive pre-training approach, elaborated on as follows.Cross Sentence Translation (CST) When translating the i-th sentence x i and the source context c = (x i−1 , x i+1 ) into the i-th target sentence y i , prior approaches tend to pay most attention on x i (Li et al., 2019) , resulting in the neglect of c. To maximize the use of the source context c, we propose cross sentence translation (CST) to encourage the model to more effectively utilize the valuable information contained in c. We mask the whole source sentence x i in the model input, and enforce the model to generate the target sentence y i only based on c = (x i−1 , x i+1 ). To be specific, we pack both the source context c and the mask token [mask] as a continue span, and employ a special token </s> to indict the end of each sentence. To distinguish texts from different languages, we add language identifier (e.g., <en> for English and <de> for German) to the ends of both the source input and target output. Figure 1(A) presents the illustration of this task on EN-DE translation, where the input of Transformer is the concatenation of (x i−1 , <mask>, x i+1 ) and the target output is y i .Inter Sentence Generation (ISG) Voita et al. (2019b) has demonstrated that the cross-sentence dependency within the target document can effec-tively improve the translation quality. Transformer decoder should be able to model the corresponding historical information to improve coherence or lexical cohesion and other aspects during translation. Motivated by this, here we propose inter sentence generation (ISG) to capture the cross-sentence dependency among the target output. The ISG task aims to predict the inter sentence y i based on its surrounding predecessor y i−1 and successor y i+1 . In this way, the model is trained to capture the interactions between the sentences in the target document. Besides, the training of ISG only requires the monolingual document corpora of the target language, which effectively alleviates the lack of doclevel parallel data in Doc-MT. Figure 1(B) presents the detailed illustration, where the model input is the concatenation of (y i−1 , <mask>, y i+1 ) and the target output is y i . Both source and target language identifiers are <de>.In practice, the available sent-level parallel corpora usually present larger scale than doc-level parallel corpora. Thus, here we introduce parallel sentence translation (PST) performing context-agnostic sentence translation, which only requires sent-level parallel data. This further alleviates the lack of the doclevel parallel data in Doc-MT. The illustration of PST is presented in Figure 1(C) , where the input is the concatenation of (<none>, x i , <none>) and the target output is y i . 1 The source and target language identifiers are <en> and <de>, respectively. EWC-Based Fine-Tuning. 
After finishing the pre-training, the pre-trained Transformer is used as the model initialization for subsequent fine-tuning on downstream datasets. As shown in Figure 1, the input of the Transformer in this scenario is (x_i−1, x_i, x_i+1), i.e., the concatenation of the i-th source sentence x_i and its surrounding context c = (x_i−1, x_i+1). The desired output is the i-th target sentence y_i. The source and target language identifiers are the same as in PST. However, obvious catastrophic forgetting has been observed during fine-tuning: as fine-tuning continues, model performance degrades. Due to the large model capacity and the limited downstream datasets, pre-trained models usually suffer from overfitting. To remedy this, we introduce Elastic Weight Consolidation (EWC) regularization (Kirkpatrick et al., 2016). EWC regularizes the weights individually based on their importance to that task, which forces the model to remember the original language modeling tasks. Formally, the EWC regularization is computed as:

R = Σ_i (λ/2) F_i (θ_i − θ*_i)²    (1)

where λ is a hyperparameter weighting the importance of the old LM tasks compared to the new MT task, and i labels each parameter. The final loss J for fine-tuning is the sum of the negative log-likelihoods of all pre-training tasks and the newly introduced R, i.e., J = L_CST + L_ISG + L_PST + R.

We summarize the key information of our approach in Table 1, which also shows the available data for the different tasks. (Table 1: The detailed illustration of different tasks. "SLI" and "TLI" denote the source and target language identifiers, respectively. "Use Mono-Doc", "Use Bi-Doc" and "Use Bi-Sent" mean that the corresponding task can use monolingual doc-level, bilingual doc-level, and bilingual sent-level corpora, respectively.)
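A minimal PyTorch-style sketch of the EWC penalty in Eq. (1); it assumes the diagonal Fisher estimates F_i and a snapshot of the pre-trained parameters θ* have already been stored, and all names are illustrative rather than taken from the authors' code.

import torch

def ewc_penalty(model, star_params, fisher, lam):
    # R = sum_i (lam/2) * F_i * (theta_i - theta*_i)^2   (Eq. 1)
    # star_params / fisher: dicts from parameter name to tensors saved after
    # pre-training (theta* and the diagonal Fisher estimate).
    penalty = 0.0
    for name, theta in model.named_parameters():
        if name in fisher:
            penalty = penalty + (lam / 2.0) * (
                fisher[name] * (theta - star_params[name]) ** 2
            ).sum()
    return penalty

# one fine-tuning step (sketch):
# loss = nll_cst + nll_isg + nll_pst + ewc_penalty(model, theta_star, fisher, lam)
# loss.backward(); optimizer.step()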
2
In our method, the predominant sense for a target word is determined from a prevalence ranking of the possible senses for that word. The senses come from a predefined inventory (which might be a dictionary or WordNet-like resource). The ranking is derived using a distributional thesaurus automatically produced from a large corpus, and a semantic similarity measure defined over the sense inventory. The distributional thesaurus contains a set of words that are "nearest neighbors" to the target word with respect to similarity of the way in which they are distributed. (Distributional similarity is based on the hypothesis of Harris, 1968 , that words which occur in similar contexts have related meanings.) The thesaurus assigns a distributional similarity score to each neighbor word, indicating its closeness to the target word. For example, the nearest 10 neighbors of sandwich might be: salad, pizza, bread, soup... and the nearest neighbors of the polysemous noun star 11 might be:actor, footballer, planet, circle...These neighbors reflect the various senses of the word, which for star might be:r a celebrity r a celestial body r a shape r a sign of the zodiac 12 We assume that the number and distributional similarity scores of neighbors pertaining to a given sense of a target word will reflect the prevalence of that sense in the corpus from which the thesaurus was derived. This is because the more prevalent senses of the word will appear more frequently and in more contexts than other, less prevalent senses. The neighbors of the target word relate to its senses, but are themselves word forms rather than senses. The senses of the target word are predefined in a sense inventory and we use a semantic similarity score defined over the sense inventory to relate the neighbors to the various senses of the target word. The two semantic similarity scores that we use in this article are implemented in the WordNet similarity package. One uses the overlap in definitions of word senses, based on Lesk (1986) , and the other uses a combination of corpus statistics and the WordNet hyponym hierarchy, based on Jiang and Conrath (1997) . We describe these fully in Section 4.2. We now describe intuitively the measure for ranking the senses according to predominance, and then give a more formal definition.The measure uses the sum total of the distributional similarity scores of the k nearest neighbors. This total is divided between the senses of the target word by apportioning the distributional similarity of each neighbor to the senses. The contribution of each neighbor is measured in terms of its distributional similarity score so that "nearer" neighbors count for more. The distributional similarity score of each neighbor is divided between the various senses rather than attributing the neighbor to only one sense. This is done because neighbors can relate to more than one sense due to relationships such as systematic polysemy. For example, in the thesaurus we describe subsequently in Section 4.1 acquired from the BNC, chicken has neighbors duck and goose which relate to both the meat and animal senses. We apportion the contribution of a neighbor to each of the word senses according to a weight which is the normalized semantic similarity score between the sense and the neighbor. We normalize the semantic similarity scores because some of the semantic similarity scores that we use, described in Section 4.2, can get disproportionately large. 
Because we normalize the semantic similarity scores, the sum of the ranking scores for a word equals the sum of the distributional similarity scores. To summarize, we rank the senses of the target word, such as star, by apportioning the distributional similarity scores of the top k neighbors between the senses. Each distributional similarity score (dss) is weighted by a normalized semantic similarity score (sss) between the sense and the neighbor. This process is illustrated in Figure 1 .More formally, to find the predominant sense of a word (w) we take each sense in turn and obtain a prevalence score. Let N w = {n 1 , n 2 ...n k } be the ordered set of the top scoring k neighbors of w from the distributional thesaurus with associated scores {dss(w, n 1 ), dss(w, n 2 ), ...dss(w, n k )}. Let senses(w) be the set of senses of w in the sense inventory. For each sense of w (s i ∈ senses(w)) we obtain a prevalence score by summing over the dss(w, n j ) of each neighbor (n j ∈ N w ) multiplied by a weight. This weight is the sss between the target sense (s i ) and n j divided by the sum of all sss scores for senses(w)The prevalence ranking process for the noun star. and n j . sss is the maximum WordNet similarity score (sss ) between s i and the senses of n j (s x ∈ senses(n j )). 13 Each sense s i ∈ senses(w) is therefore assigned a score as follows:EQUATIONwhereEQUATIONWe describe dss and sss in Sections 4.1 and 4.2. Note that the dss for a given neighbor is shared between the different senses of w depending on the weight given by the normalized sss.Measures of distributional similarity take into account the shared contexts of the two words. Several measures of distributional similarity have been described in the literature. In our experiments, dss is computed using Lin's similarity measure (Lin 1998a) .We set the number of nearest neighbors to equal 50. 14 We use three different sources of data for our first two experiments, resulting in three distributional thesauruses. These are described in the next section. We use domain-specific data for our third and fourth experiments. The data sources for these are described in Sections 6.3 and 6.4. A word, w, is described by a set of features, f , each with an associated frequency, where each feature is a pair r, x consisting of a grammatical relation name and the other word in the relation. We computed distributional similarity scores for every pair of words of the same PoS where each word's total feature frequency was at least 10. A thesaurus entry of size k for a target word w is then defined as the k most similar words to w.A large number of distributional similarity measures have been proposed in the literature (see Weeds 2003 for a review) and comparing them is outside the scope of this work. However, the study of Weeds and Weir (2005) provides interesting insights into what makes a "good" distributional similarity measure in the contexts of semantic similarity prediction and language modeling. In particular, weighting features by pointwise mutual information (Church and Hanks 1989) appears to be beneficial. The pointwise mutual information (I(w, f )) between a word and a feature is calculated asEQUATIONIntuitively, this means that the occurrence of a less-common feature is more important in describing a word than a more-common feature. 
For example, the verb eat is more selective and tells us more about the meaning of its arguments than the verb be.We chose to use the distributional similarity score described by Lin (1998a) because it is an unparameterized measure which uses pointwise mutual information to weight features and which has been shown (Weeds 2003) to be highly competitive in making predictions of semantic similarity. This measure is based on Lin's information-theoretic similarity theorem (Lin 1997 The similarity between A and B is measured by the ratio between the amount of information needed to state the commonality of A and B and the information needed to fully describe what A and B are.In our application, if T(w) is the set of features f such that I(w, f ) is positive, then the similarity between two words, w and n, isdss(w, n) = f ∈T(w)∩T(n) I(w, f ) + I(n, f ) f ∈T(w) I(w, f ) + f ∈T(n) I(n, f ) (4)However, due to this choice of dss and the openness of the domain, we restrict ourselves to only considering words with a total feature frequency of at least 10. Weeds et al. (2005) do show that distributional similarity can be computed for lower frequency words but this is using a highly specialized corpus of 400,000 words from the biomedical domain. Further, it has been shown (Weeds et al. 2005; Weeds and Weir 2005) that performance of Lin's distributional similarity score decreases more significantly than other measures for low frequency nouns. We leave the investigation of other distributional similarity scores and the application to smaller corpora as areas for further study.WordNet is widely used for research in WSD because it is publicly available and there are a number of associated sense-tagged corpora (Miller et al. 1993; Cotton et al. 2001; Preiss and Yarowsky 2001; Mihalcea and Edmonds 2004) available for testing purposes. Several semantic similarity scores have been proposed that leverage the structure of WordNet; for sss we experiment with two of these, as implemented in the WordNet Similarity Package .The WordNet Similarity Package implements a range of similarity scores. McCarthy et al. (2004b) experimented with six of these for the sss used in the prevalence score, Equation (2). In the experiments reported here we use the two scores that performed best in that previous work. We briefly summarize them here; Patwardhan, Banerjee, and Pedersen (2003) give a more detailed discussion. The scores measure the similarity between two WordNet senses (s1 and s2).lesk This measure (Banerjee and Pedersen 2002) maximizes the number of overlapping words in the gloss, or definition, of the senses. It uses the glosses of semantically related (according to WordNet) senses too. We use the default version of the measure in the package with no normalizing for gloss length, and the default set of relations:EQUATIONwhere definitions(s) is the gloss definition of sense s concatenated with the gloss definitions of the senses related to s where the relationships are defined by the de-fault set of relations in the relations.dat file supplied with the WordNet Similarity package. W ∈ definition(s) is the set of words from the concatenated definitions. jcn This measure (Jiang and Conrath 1997) uses corpus data to populate classes (synsets) in the WordNet hierarchy with frequency counts. Each synset is incremented with the frequency counts (from the corpus) of all words belonging to that synset, directly or via the hyponymy relation. 
The frequency data is used to calculate the "information content" (IC; Resnik 1995) of a class as follows:EQUATIONJiang and Conrath specify a distance measure:EQUATIONwhere the third class (s3) is the most informative, or most specific, superordinate synset of the two senses s1 and s2. This is converted to a similarity measure in the WordNet Similarity package by taking the reciprocal as in Equation 8(which follows). For this reason, the jcn values can get very large indeed when the distances are negligible, for example where the neighbor has a sense which is a synonym. This is a motivation for our normalizing the sss in Equation 1.EQUATIONThe IC data required for the jcn measure can be acquired automatically from raw text. We used raw data from the BNC to create the IC files. There are various parameters that can be set in the WordNet Similarity Package when creating these files; we used the RESNIK method of counting frequencies in WordNet (Resnik 1995) , the stop words provided with the package, and no smoothing.The lesk score is applicable to all parts of speech, whereas the jcn is applicable only to nouns and verbs because it relies on IC counts which are obtained using the hyponym links and these only exist for nouns and verbs. 15 However, we did not use jcn for verbs because in previous experiments (McCarthy et al. 2004c ) the lesk measure outperformed jcn because the structure of the hyponym hierarchy is very shallow for verbs and the measure is therefore considerably less informative for verbs than it is for nouns.We illustrate the application of our measure with an example. For star, if we set 16 k = 4 and have the dss for the previously given neighbors as in the first row of Table 6 , and The prevalence score for each of the senses would be:prevalence score(celebrity) = 0 .3145 prevalence score(celestial body) = 0.0687 prevalence score(shape) = 0 .0277 prevalence score(zodiac) = 0 .0390 so the method would select celebrity as the predominant sense.
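Putting the pieces together, the prevalence score (Equation (2)) can be sketched as follows; the dss scores come from the distributional thesaurus, sss is a WordNet-based similarity such as lesk or jcn returning the maximum similarity between a sense and any sense of the neighbour, and all function names are illustrative.

def prevalence_scores(word_senses, neighbours, sss):
    # Rank the senses of a target word by predominance.
    # word_senses: senses of the target word
    # neighbours:  list of (neighbour_word, dss_score) pairs (top-k thesaurus entries)
    # sss(sense, neighbour_word): max semantic similarity between `sense` and
    #   any sense of the neighbour (e.g., lesk or jcn).
    scores = {s: 0.0 for s in word_senses}
    for n_j, dss_wn in neighbours:
        sims = {s: sss(s, n_j) for s in word_senses}
        total = sum(sims.values())
        if total == 0:
            continue                                  # neighbour unrelated to any sense
        for s in word_senses:
            scores[s] += dss_wn * sims[s] / total     # weight = normalised sss
    return scores

# predominant = max(scores, key=scores.get)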
2
We assign users in the IT dataset to two groups, Yes and No, based on the quantity nu,yes nu,yes+nu,no , where n u,yes is the number of tweets in which user u has used at least one of the Yes hashtags and none of the No hashtags in Table 1 ; and n u,no is the number of tweets in which u has used at least one No hashtag and none of the Yes hashtags. The Yes group consists of all users for whom this quantity is greater than or equal to 0.75, while the No group consists of all users for whom it is less than or equal to 0.25. Users for whom the value lies between 0.25 and 0.75 (as well as those for whom our dataset does not contain any tweets with Yes or No hashtags), are not assigned to either group. The Yes group Table 3. contains 4,513 users, while the No group contains 1,356 users, which is consistent with the general perception at the time that the Yes campaign was much more vocal than the No campaign. To test our hypothesis that the probability of choosing Scottish variants is, on average, greater for users in the Yes group than for users in the No group, we estimate the difference between the two groups in the average probability of choosing Scottish variants, and conduct a permutation test to approximate the distribution of this difference under the null hypothesis. We first test whether the Yes group are more likely than the No group to use Scottish variants in tweets which contain hashtags that indicate a stance on the referendum. Subsequently, we test whether the Yes group are more likely than the No group to use Scottish variants in general across all of their tweets.Let U g be the set of all users in group g ∈ {yes, no} who have used at least one of the variables in Table 3 . For a given user u ∈ U g , let V be the set of all variables that u has used in at least one tweet. We estimate the probability of user u choosing a Scottish variant of variablev ∈ V aŝ p u,v = nu,vscot nu,v, where n u,vscot is the token count of Scottish variants of v in user u's tweets, and n u,v is the token count of all variants of v in user u's tweets. Averaging across variables, we obtainp u = 1 V v∈Vp u,v. We then average across users to obtain the group mean,p g = 1 U u∈Ugp u . Our test statistic is the difference between the two group means, d =p yes −p no .We randomly shuffle users between the two groups (maintaining each group's original number of users), and re-compute the value of d using these permuted groups. We repeat this procedure 100,000 times in order to approximate the distri- Table 4 : Number of users and tweets included per group in the two analyses in Study 1 bution of differences in group means that would be observable were the difference independent of the assignment of users to groups. The proportion of permuted differences which are greater than or equal to the observed difference between the original group means provides an approximate p-value.For a tweet to be included in the analysis, it must contain at least one of the variables in Table 3 . Hence not all users contribute data to the test statistic, as some have not used any of the variables in their tweets. The number of tweets and users included in each analysis are shown in Table 4 .The results for the first analysis are shown in the left column of Table 5 . The difference between the two groups in their average probability of choosing Scottish variants in tweets that contain polarised referendum hashtags is statistically significant (p < 0.002). Results for the second analysis are shown in the right column of Table 5 . 
Once again, the difference between the two groups is statistically significant (p < 0.001). The results show that the Yes group do use Scottish variants at a significantly higher rate than the No group, both when using Yes or No hashtags, and in general. The stronger significance level for the 'All tweets' dataset (Table 5: Results of the two analyses in Study 1) is partly due to its larger size (see Table 4), which enables better estimates of the usage rates. While the rates are very low overall, the relative differences are large: the Yes group rate is more than three times the No group rate when we include only tweets with Yes or No hashtags, and approximately twice as big when we include all tweets. The higher rates in the 'All tweets' dataset suggest that both groups of users chose Scottish variants less often when discussing the referendum than in their other tweets. However, the test we used does not provide a significance value for the difference in usage rates across the two datasets. To establish whether users do modulate their usage of Scottish variants when discussing the referendum, we will need a more careful paired design.

6 Study 2: Effects of topic and audience on Scotland-specific vocabulary usage

Do tweeters choose Scottish variants at a different rate when using referendum-related hashtags than in their other tweets?
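The permutation procedure used in Study 1 (and needed, in a paired form, for Study 2) amounts to the following sketch; the per-user average probabilities are assumed to be precomputed as described above, and the variable names are illustrative.

import random

def permutation_test(p_yes, p_no, n_perm=100_000, seed=0):
    # Approximate one-sided p-value for d = mean(p_yes) - mean(p_no) under the
    # null hypothesis that group assignment is irrelevant.
    # p_yes / p_no: per-user average probabilities of choosing Scottish variants.
    rng = random.Random(seed)
    observed = sum(p_yes) / len(p_yes) - sum(p_no) / len(p_no)
    pooled = list(p_yes) + list(p_no)
    n_yes = len(p_yes)
    count = 0
    for _ in range(n_perm):
        rng.shuffle(pooled)
        d = (sum(pooled[:n_yes]) / n_yes
             - sum(pooled[n_yes:]) / (len(pooled) - n_yes))
        if d >= observed:
            count += 1
    return count / n_perm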
2
We use a hierarchical recurrent neural network (Serban et al., 2016) to model the current utterances u_1, …, u_N (Figure 1a). In other words, a recurrent neural network (RNN) captures the meaning of a sentence; another LSTM-RNN aggregates the sentence information into a fixed-size vector. For simplicity, we use the RNN's last state as the current utterances' representation (u in Equation 2).

In the rest of this section, we investigate content-based and temporal-based prediction in Subsections 3.1 and 3.2; the spirit is similar to the "dynamic" and "static" models, respectively, in Ouchi and Tsuboi (2016). We combine content-based and temporal-based prediction using gating mechanisms in Subsection 3.3.

In the content-based method, we model a speaker by what he or she has said, i.e., content. Figure 1b illustrates the content-based model: a hierarchical RNN (which is the same as in Figure 1a) yields a vector s_i for each speaker, based on his or her nearest several utterances. The speaker vector s_i is multiplied by the current utterances' vector u for softmax-like prediction (Equation 2). We pick the candidate speaker that has the highest probability. It is natural to model a speaker by his/her utterances, which provide illuminating information about the speaker's background, stance, etc. As will be shown in Section 4, content-based prediction achieves significantly better performance than random guessing. This also verifies that speaker classification is feasible, being a meaningful surrogate task for speaker modeling.

In the temporal-based approach, we sort all speakers in descending order according to the last time he or she speaks, and assign a vector (embedding) to each index in the list, following the "static model" in Ouchi and Tsuboi (2016). Each speaker vector is randomly initialized and optimized as a parameter during training. The predicted probability of a speaker is also computed by Equation 2. The temporal vector is also known as a position embedding in other NLP literature (Nguyen and Grishman, 2015). Our experiments show that temporal information provides a strong bias: nearer speakers tend to speak more; hence, it is also useful for speaker modeling.

As both content and temporal information provide important evidence for speaker classification, we propose to combine them by interpolating or gating mechanisms (illustrated in Figure 1d). In particular, we have

p = g · p^(content) + (1 − g) · p^(temporal)

Here, g is known as a gate, balancing these two aspects. We investigate three strategies to compute the gate.

1. Interpolating after training. The simplest approach, perhaps, is to train the two predictors separately, and interpolate after training by validating the hyperparameter g.

[Figure 1: Hybrid content- and temporal-based speaker classification with a gating mechanism.]

… from DNCs; however, the gate here is not based on the input (i.e., u in our scenario), but on the result of content prediction p^(content). Formally,

g = σ(w · std[p^(content)] + b)

where we compute the standard deviation (std) of p^(content); w and b are parameters that scale std[p^(content)] to a sensitive region of the sigmoid function.
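A sketch of the hybrid prediction with the self-adaptive gate described above, in PyTorch; w and b correspond to the scaling parameters in the text, and the reconstruction of the combination formula is an interpretation of the (unnumbered) equations rather than the authors' exact code.

import torch

def hybrid_prediction(p_content, p_temporal, w, b):
    # p = g * p_content + (1 - g) * p_temporal,
    # g = sigmoid(w * std(p_content) + b)
    g = torch.sigmoid(w * p_content.std() + b)
    return g * p_content + (1.0 - g) * p_temporal

# predicted_speaker = hybrid_prediction(p_content, p_temporal, w, b).argmax()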
2
Writing academic paper by referencing examples (e.g., We illustrate the method ...) often does not work very well, because learners may fail to generalize from examples and apply them to their own situations. Often, there are too many examples to choose from and to adapt to match the need of the learner writers. To help the learner in writing, a promising approach is to extract a set of representative sentential patterns consisting of keywords and categories that are expected to assist learners to write better.We focus on the extracting process of sentential patterns for various rhetoric functions: identifying a set of candidate patterns with keywords and categories. These candidate patterns are then statistically analyzed, filtered and finally returned as the output of the system. The returned patterns can be directly examined by the learner, alternatively they can be used to annotate rhetoric moves. Thus, it is crucial that the extracted patterns cover all semantic categories of interest. At the same time, the set of extracted patterns of a semantic category cannot be too large that it overwhelms the writer or the tagging process of the subsequent move. Therefore, our goal is to return a reasonable-sized set of sentential patterns that, at the same time, must cover all rhetoric moves. We now formally state the problem that we are addressing.Problem Statement: We are given a raw corpus CORP (e.g., Citeseer x ) as well as an annotated corpus TAGGED-CORP in a specific genre and domain, and a list of semantic categories (e.g., PAPER = { paper, article}, PRESENT = { present, describe, introduce }). Our goal is to retrieve a set of tagged sentential patterns, p 1 , ... , p m , consisting of keywords and categories from CORP. For this, we convert all sentences in CORP and TAGGED-CORP into candidate patterns (e.g., In this PAPER, we PRESENT Teufel and Moens (2002 , 2004 , 2006 Naive Bayes 7 moves + scientific papers Anthony (2003) Naive Bayes BPGMRC scientific papers McKnight and Srinivasan (2003) Support Vector Machine OMRC MEDLINE Shimbo et al. (2003) Support Vector Machine OMRC MEDLINE Yamamoto and Takagi (2005) Support Vector Machine BPMRC MEDLINE Wu et al. 2006Hidden Markov Model BPMRC Citeseer Lin et al. (2006) Hidden Markov Model OMRC MEDLINE Hirohata et al. (2008) Conditional , such that these candidates can be statistically analyzed and filtered to generate common and representative patterns.In the rest of this section, we describe our solution to this problem. First, we define a strategy for transforming sentences from academic corpora into candidate patterns (Section 3.2.1). This strategy relies on a set of candidate patterns derived from sentences of patterns (which we will describe in detail in Section 3.2.3). In this section, we also describe our method for extracting the most representative of the candidate patterns for each semantic category of interest. Finally, we show how WriteAhead displays patterns at run-time (Section 3.3).We attempt to find transformations from sentences into patterns, consisting of keywords and categories expected to characterize rhetoric moves in academic writings. Our learning process is shown in Figure 4 . 
3.2.1 Extracting Candidate Patterns.In the first stage of the extracting process (Step (1) in Figure 5 ), we tokenize sentences in the given corpus, and assign to each word its syntactic information including lemma, part of speech, and phrase group (represented using the B-I-O notation to mark the beginning, inside, and outside of some phrase group).See Table 2 for an example of tagged sentence. In order to identify the head of a phrase, we convert the B-I-O notation to I-H-O notation with H denoting the headword of a phrase. Using the I-H-O notation allows us to directly identify the headword of a phrase chunk. Then, we convert every word in a sentence into elements of a candidate pattern. The • Semantic categories (See Table 3 ) : typical domain specific concepts and words, • Lexical symbols: a list of common prepositions, pronouns, adverbs, and determinants, • Noun phrase and verb phrase: head words that are not classified in a category are represented as something or do.Note that determinants (e.g., the, an, a) or adjectives need to be represented in a pattern. For those words, we add " " (ignored) to the element list. The ignored elements will be deleted before patterns are analyzed and filtered (as shown in Table 2 ). We design the scope of extracted patterns, as from the beginning of the sentence, to the object phrase after the main verb.Finally, we combine elements for a sentence to generate pattern candidates (Step (2) in Figure 5 ). Table 2 shows those elements associated with words and how they combine to form pattern candidates.In Step (3), we use semantic categories to generalize words and generate formulaic patterns. As will be described in Section 4, we used a Teufel manually analyzed research article to device a set of categories of words (Teufel, 1999) . Table 2 , the sentence "In this paper, we propose a method that accurately reports timing information by accounting for intrusion introduced by monitoring." will be transformed into the candidate pattern "In this PAPER, we PRESENT WORK".The input to this state is a set of sentences. These sentences constitute the data for generating the candidate patterns, that can be used in the next step.The output of this stage is a set of candidate patterns that can be statistically analyzed and filter in a later step. See Table 4 for example candidate patterns extracted from some sentences.In the second stage of the process (Step (4) in Figure 5) , we filter candidate patterns to generate representative patterns. Once patterns and instances are generated, they are sorted and grouped by category.Then, we count the number of instances of each pattern within the category (in Step (5)), and the average and standard deviation of these counts for each category (in Step (6)).In Step (7), we select patterns with an instance count exceeding the average count by Min-STDThreshold standard deviation.Consider the partial sentence "In this paper, we propose a method" Table 2 shows elements of each word, pattern candidates anchored each word. Note that the candidate (e.g., In this PAPER, we PRESENT WORK associated with the instance of In this paper, we propose a method) are valid patterns. In the third and final stage (Step (8) in Figure 5 ), we count, sort, and filter patterns, essentially using the frequency counts from CORP with the tags in TAGGED-CORP (See Tables 4). Figure 5 shows the algorithm for ranking a set of corresponding sentential patterns for all semantic categories. 
See Table 6 for an example of the move tag AIM and its corresponding sentential patterns.Once the patterns and examples are automatically extracted for each category in the given corpus, they are stored and indexed by category that can be annotated with corresponding rhetoric moves. At run-time in a writing session, WriteAhead detects a rhetoric move tag Move in the text box. With the tag as a query, WriteAhead retrieves and sorts all relevant patterns and examples (Pattern and Example) by frequency, aiming to display the most common information toward the top.
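Steps (5)-(7) of the filtering stage can be sketched as follows; the Min-STDThreshold default and the data layout are assumptions for illustration.

from collections import Counter, defaultdict
from statistics import mean, stdev

def filter_patterns(candidates, min_std_threshold=1.0):
    # Keep, per semantic category, the patterns whose instance count exceeds
    # the category's average count by `min_std_threshold` standard deviations.
    # `candidates` is an iterable of (category, pattern) pairs, one per
    # extracted instance.
    counts = defaultdict(Counter)
    for category, pattern in candidates:
        counts[category][pattern] += 1

    selected = {}
    for category, pattern_counts in counts.items():
        values = list(pattern_counts.values())
        mu = mean(values)
        sigma = stdev(values) if len(values) > 1 else 0.0
        cutoff = mu + min_std_threshold * sigma
        selected[category] = [p for p, c in pattern_counts.items() if c > cutoff]
    return selected

# filter_patterns([("PRESENT", "In this PAPER , we PRESENT WORK"), ...])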
2
In this section, we present our approach for estimating the value of actions. Our approach casts the problem as a supervised learning-to-rank problem between pairs of actions. Given, a textual description of an action a, we want to estimate its value magnitude v. We represent the action a via a set of features that are extracted from the description of the action. We use a linear model that combines the features into a single scalar value for the valueEQUATIONwhere x a is the feature vector for action description a and w is a learned weight vector. The goal is to learn a suitable weight vector w that approximates the true relationship between textual expressions of actions and their magnitude of value. Instead of estimating the value directly, we take an alternative approach and consider the task of learning the relative ranking of pairs of actions. We follow the pairwise approach to ranking (Herbrich et al., 1999; Cao et al., 2007) that reduces ranking to a binary classification problem. Ranking the values v 1 and v 2 of two actions a 1 and a 2 is equivalent to determining the sign of the dot product between the weight vector w and the difference between the feature vectors x a 1 and xEQUATIONFor each ranking pair of actions, we create two complimentary classification instances:(x a 1 − x a 2 , l 1 ) and (x a 2 − x a 1 , l 2 ), where the labels are l 1 = +1, l 2 = −1 if the first challenge has higher value than the second challenge and l 1 = −1, l 2 = +1 otherwise. We can train a standard linear classifier on the generated training instances to learn the weight vector w.In the case of the IWIYW data, there is no explicit ranking between actions. However, we are able to create ranking pairs for the IWIYW data in the following way. As we have seen, there is only a small set of different You Will challenges that are reciprocal actions for a diverse set of I Will challenges. Thus, many I Will challenges will end up having the same You Will challenge. We can use the You Will challenges as a pivot to effectively "join" the I Will challenges. The number of required people to perform Y induces a natural ordering between the values of the I Will actions where a higher number of required participants means that the I Will task has higher value.For example, for the challenges displayed in Table 1, we can use the common You Will challenges to create the following ranked challenge pairs. I will quit smoking < I will adopt a panda I will dye my hair red < I will learn Java 3According to the examples, adopting a panda has higher value than quitting smoking and learning Java has higher value than dying ones hair red. The third challenge does not share a common You Will challenge with any other challenge and therefore no ranking pairs can be formed with it.As the IWIYW challenges are created online in a non-controlled environment, we have to expect that there is some noise in the automatically created ranked challenges. However, a robust learning algorithm has to be able to handle a certain amount of noise. We note that our method is not limited to the IWIYW data set but can be applied to any data set of actions where relative rankings are provided or can be induced.The choice of appropriate feature representations is crucial to the success of any machine learning method. We start by parsing each I Will If You Will challenge with a constituency parser. 
Because each challenge has the same I Will If You Will structure, it is easy to identify the subtrees that correspond to the I Will and You Will parts of the challenge. An example parse tree of a challenge is shown in Figure 1 . The yield of the You Will subtree serves as a pivot to join different I Will challenges. To represent the I Will action a as a feature vector x a , we extract the following lexical and syntax features from the I Will subtree of the sentence.• Verb: We extract the verb of the I Will clause as a feature. To identify the verb, we pick the left-most verb of the I Will subtree based on its part-of-speech (POS) tag. We extract the lowercased word token as a feature. For example, for the sentence in Figure 1 , the verb feature is verb=quit. If the verb is negated (the left sibling of the I Will subtree spans exactly the word not), we add the postfix NOT to the verb feature, for example verb=quit NOT.• Object: We take the right sibling of the I will verb as the object of the action. If the right sibling is a particle with constituent label PRT, e.g., travel around the UK on bike, we skip the particle and take the next sibling as the object. If the object is a prepositional phrase with constituent tag PP, e.g., go without electricity for a month, we take the second child of the prepositional phrase as the object phrase. We then extract two features to represent the object. First, we extract the lowercased head word of the object as a feature. Second, we extract the concatenation of all the words in the yield of the object node as a single feature to capture the complete argument for longer objects. In our example sentence, the object head feature and the complete object feature are identical: object head=smoking and object=smoking.• Unigram: We take all lowercased words that are not stopwords in the I Will part of the sentence as binary features. In our example sentence, the unigram features unigr quit and unigr smoking would be active.• Bigram: We take all lowercased bigrams in the I Will part of the sentence as binary features. We do not remove stopwords for bigram features. In our example sentence, the bigram features bigr quit smoking would be active.We note that our method is not restricted to these feature templates. More sophisticated features, like tree kernels (Collins and Duffy, 2002) or se-mantic role labeling (Palmer et al., 2010) , can be imagined.
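A sketch of the pairwise reduction: each ranked pair of actions yields two complementary training instances, and any linear classifier recovers the weight vector w used for scoring. The featurize function stands in for the lexical and syntactic feature extraction described above.

import numpy as np
from sklearn.linear_model import LogisticRegression  # any linear classifier works

def make_pairwise_instances(ranked_pairs, featurize):
    # ranked_pairs: iterable of (a_low, a_high) action descriptions, where
    # a_high is assumed to have the higher value.
    X, y = [], []
    for a_low, a_high in ranked_pairs:
        x_low, x_high = featurize(a_low), featurize(a_high)
        X.append(x_high - x_low); y.append(+1)
        X.append(x_low - x_high); y.append(-1)
    return np.array(X), np.array(y)

# clf = LogisticRegression().fit(*make_pairwise_instances(pairs, featurize))
# w = clf.coef_[0]            # value estimate: v(a) = w . x_a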
2
We discuss several different metrics we developed for human-level clue-giving ability as well as a baseline metric for automatic clue-giving ability in order to provide more context to our machine learning experiment results. We perform machine learning experiments in order to determine the predictive value of simple textual features for determining a clue's effectiveness (capability to elicit a correct guess). We use the Weka Machine Learning Library's naive bayes classifier in our experiments (Hall et al., 2009) .We perform 10-fold cross validation with our folds stratified across classes. We introduce some general notation in Equation 1 before discussing the previously mentioned metrics.N = total # of clues (given or in corpus) c = total # of single clues leading to a correct guess(1)Baseline We use random clue selection as the baseline for the effective clue prediction task. Random selection here represents a completely naive clue-giving agent that only has the ability to randomly select a clue from the population of automatically generated clues for a given target word. In order to compute the likelihood for random selection selecting an effective clue we simply compute Equation (2).# clues that elicit corr. guess f rom turkers N (2)Human-level In order to provide more context for the results of our automatic method it is necessary to consider the effectiveness of clues produced by a typical human clue-giver. Since we are ultimately interested in full RDG-Phrase gameplay, in which multiple clues can be given for a target, we used examples from the corpus from (Paetzel et al., 2014) , with annotations from , identifying clues and correctness of response. However, we need a way of estimating the quality of individual clues given this data, and there is not an obvious scoring methodology to use to discern effective clues (generated by a human clue-giver) from ineffective ones (as any clue given prior to a correct guess can contribute to the receiver's arrival at the correct guess). We have therefore defined three measures to try to approximate clue quality, given appearance in a clue sequence. We include an upper-bound measure, a lower-bound measure and expected guessability measure, that assign different weights to a clue that appears in a successful sequence. The annotations for 8 rounds of the RDG-Phrase game (involving 4 different human cluegivers) that are discussed in were used to calculate these statistics. An upper-bound score for a human clue's effectiveness can be arrived at if each clue in a clue sequence leading to a correct or partially correct guess is considered effective and given a score of 1. Implicit in this upper-bound is that each clue could elicit a correct guess from a receiver on its own (an analysis of the RDG-Phrase corpus shows this to be a very generous (unrealistic) assumption -see further comments in section 7. If a target-word is skipped or time runs out before a correct guess is made each clue in that sequence is considered ineffective and receives a score of 0. Using these optimistic assumptions the chance of a human clue-giver generating an effective clue can be calculated by equation 3.# of clues part of a correct clue sequence NA lower-bound score for each clue's effectiveness can be computed by giving only clues that elicited correct guesses without being preceded by additional clues a score of 1 and all clues that were in a sequence of more than one clue a score of 0. 
This assumes that a clue sequence's effectiveness can be totally attributed to synergies from the combination of clues in the sequence rather than to any single clue's effectiveness (unless of course the clue sequence is of length one). These pessimistic assumptions provide yet another way, shown in equation 4, to compute the likelihood that a human clue-giver's next clue is effective.EQUATIONAs a compromise between these extremes, we define an expected guessability score for each clue in a sequence leading to a correct or partially correct guess, where partial credit (between the above extremes of 0 and 1) is given for each clue in the sequence. For simplicity, for sequences larger than 1, we define the expected guessability to be 1/(t + 1) where t represents the total number of clues in the sequence. For single clues, we assign a value of 1, as in both of the above measures. The intuition behind this measure is that the method distributes the credit equally between each clue and a synergistic combination of clues. If a target-word is skipped or time runs out before a correct guess is made each clue in that sequence is considered ineffective and receives a score of 0. Taking the weighted average of the clue's effectiveness scores then provides an alternative measure of how likely a human clue-giver is to generate an effective clue; this calculation can be found in equation 5. (5)It is interesting to note that our definitions for human lowerbound, upper-bound, and expected guessability converge to the same value in the case where a correct guess comes after one clue. Feature Selection We perform feature selection using Weka's attribute selection method ChiSquaredAttributeEval which ranks the attributes based on computing an attribute's chi-square statistic with respect to the class. We then use a greedy approach where we start with all attributes and remove the lowest remaining ranked attribute from the ChiSquaredAttributeEval one by one as long as effective clue classification precision is increasing (we discuss why we focus on effective clue precision in 6.).Features We have extracted some simple textual features from the clues utilized in the mechanical turk experiment. These features are listed in Table 4 . A + indicates that this feature is part of the optimal feature set found by our feature selection method. The features include: the clue source (WordNet, Wikipedia, or Dictionary.com), the clue type as discussed in 3., a binary feature of value 1 if the original clue contained the target word otherwise of value 0, as well as Point-wise mutual information (for the words in the clue and the clue's target word) features. The model utilized to calculate the PMI features was built on a corpora containing millions of web blog entries, it is a subset of the spinn3r dataset discussed in (Burton et al., 2009) . The point-wise mutual information features for a clue were calculated in two ways. An average PMI for each clue was calculated by taking the average of the average PMI of all constituent clue-words with the target word for the clue, as shown in equation 6, and a max PMI for each clue was calculated by taking the maximum value of equation 6for all the constituent clue-words. The optimal feature set includes every feature but the clue source. 
Although the feature set we use in these experiments does not satisfy the assumption of conditional independence made by the Naive Bayes classifier, previous work has shown that the NB classifier yields promising results in other text classification tasks even when the features utilized are not completely independent of one another (Dumais et al., 1998). Our results, presented in Section 6, are also consistent with this observation.

(PMI(clueWord, target) + PMI(target, clueWord)) / 2    (6)
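For concreteness, the expected guessability measure and the PMI features built on Equation (6) can be sketched as below; pmi(a, b) is assumed to be backed by the co-occurrence model built from the blog corpus, and the names are illustrative.

def expected_guessability(sequence_length, correct):
    # Per-clue credit: 0 if the target was never guessed; 1 for a single-clue
    # success; 1/(t+1) for each clue in a successful sequence of t clues.
    if not correct:
        return 0.0
    return 1.0 if sequence_length == 1 else 1.0 / (sequence_length + 1)

def clue_pmi_features(clue_words, target, pmi):
    # Average and max PMI features; each clue word contributes
    # (PMI(clueWord, target) + PMI(target, clueWord)) / 2  (Eq. 6).
    scores = [(pmi(w, target) + pmi(target, w)) / 2.0 for w in clue_words]
    return sum(scores) / len(scores), max(scores)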
2
Our approach on MeasEval consisted of a cascade system composed of individual subsystems for each of the problems in the first two subtasks, and then jointly solving the last three subtasks with a single subsystem.The subtask of identifying quantities in text was formalized as a sequence labeling problem with Inside-Outside-Beginning (IOB) tags (Ramshaw and Marcus, 1999) that were predicted by a pretrained language model with a CRF on top of predicted logits, as proposed by Avram et al. (2020) . The architecture is depicted in Figure 1 . More formally, we project each output embedding e i produced by the pretrained language model into probability logits l i by using a feedforward network with a ReLU activation asl i = ReLU (W T l e i + b l ),where W l is the corresponding weight matrix and b l is the corresponding bias. Then, we model the output conditional probabilities for each tag y i by using the CRF learning algorithm, as depicted in Eq. 1:p(y|l) = 1 Z exp n i=1 W T y i−1 ,y i l i + b y i−1 ,y i (1)where W y i−1 ,y i and b y i−1 ,y i are the weight matrix and the bias of the CRF, and Z is a normalization constant such that the probabilities sum up to one. The entire subsystem is trained to maximizing the log-likelihood of the data, while the Viterbi algorithm (Forney, 1973) is used during inference to find the most likely sequence of tags.For the second subtask, a character-level BiLSTM extracts the units from quantities and classifies their corresponding value modifiers. We approached the unit extraction in a similar way as the quantity identification, by treating the problem as a sequence tagging; however, the pretrained language model was replaced with BiLSTM cells. Moreover, instead of predicting a label for each character (token), we instead averaged the BiLSTM hidden states and projected their average in an eleven-dimensional vector (i.e., number of possible value modifiers) for the classification. Then, a sigmoid activation function was applied to obtain a vector that contains the probability of the quantity to belong to a class at each index. The architectures used for unit extraction and value modifiers classification are depicted in Figure 2 .Subtask Grouping. The last three subtasks were grouped into a single subtask where a pretrained language model was fine-tuned to jointly identify three elements: the span of the measured entities, the measured properties, and their corresponding qualifiers. The model extracts the relations between the three elements and the previously extracted quantities using a multi-turn question answering (QA) architecture, as proposed by Li et al. (2019) . The pretrained language models used for this task were identical to the ones from the quantity identification subtask. Question Templates. The input to the subsystem is created by appending a question before the text that denotes a possible relation between a given and a target entity. There are a total of six question templates that can be filled with the corresponding entities that cover all the possible relations, as depicted in Table 1 . Then, the questions are asked in a specific order to correctly identify the relations and the span of the entities. First, starting with a given quantity, the model is asked to identify its measured properties. If a measured property is found, the model marks its span and links it to the quantity with the HasQuantity relation (question 1). 
Second, the model is asked to identify the measured entity with that corresponding measured property, linking the two with the HasProperty relation (question 2). Third, if no measured property is found for a given quantity, the model is asked to directly identify the measured entity, marking the relation between the measured entity and the quantity directly with HasQuantity (question 3). Finally, once all quantities, measured entities, and properties are identified, the model is asked to identify the corresponding qualifiers and mark the relations accordingly (questions 4-6 in the table).

Model Output. The architecture proposed by Devlin et al. (2019) for SQuAD 2.0 (Rajpurkar et al., 2018) is employed to create the output of the subtasks; as such, two vectors are used for fine-tuning: a starting vector S and an ending vector E. The probability of token i being the start of a span is computed as a dot-product between the embedding T_i and the start vector S, followed by a softmax applied over all the tokens of the input: P = softmax(T_i · S). An analogous formula computes the end probabilities of a span. Then, the indices i and j are taken to compute the most probable span for an entity, where i ≤ j maximizes the sum of log-likelihoods T_i · S + T_j · E. For each query, we compare the previously defined maximum sum with the sum of the start and end log-likelihoods of the [CLS] token, since there can be questions without an answer. If the latter sum is higher, then there is no such type of relationship for that entity. A threshold added to s_null is considered in order to provide a higher granularity between questions with and without answers; it was tuned on the development set to maximize the F1-score. Figure 3 introduces our architecture for entity recognition and relation extraction. The question tokens, marked with Qst, and the paragraph tokens, marked with Tok, are fed as input, while the start (S) and end (E) logits are produced as output.
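A sketch of the span-selection step described under Model Output; the quadratic search over (i, j) is written naively for clarity, and the null-threshold handling follows the description above rather than the authors' exact implementation.

import torch

def best_span(token_embs, S, E, null_threshold=0.0):
    # Pick the most probable answer span, SQuAD-2.0 style.
    # token_embs: (seq_len, hidden); S, E: (hidden,). Position 0 is assumed to
    # be [CLS]; returns None for unanswerable queries (no such relation).
    start_logits = token_embs @ S             # T_i . S
    end_logits = token_embs @ E               # T_j . E
    s_null = start_logits[0] + end_logits[0]  # score of the no-answer span

    best_i, best_j, best_score = None, None, float("-inf")
    for i in range(1, len(token_embs)):
        for j in range(i, len(token_embs)):
            score = start_logits[i] + end_logits[j]
            if score > best_score:
                best_i, best_j, best_score = i, j, score
    if best_score <= s_null + null_threshold:
        return None
    return best_i, best_j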
2
Annotated Data We use data from the Parallel Meaning Bank (PMB 3.0.0, Abzianidze et al., 2017) . The documents in this PMB release are sourced from seven different corpora from a wide range of genres. For one of these corpora, Tatoeba, Chinese translations already exist, and we added them to the PMB data. For the remaining texts that had no Chinese translation, we translated the English documents into Chinese using the Baidu API, manually verified the results and, when needed, corrected the translations. Only a few translations needed major corrections. About a hundred translated sentences lacked past or future tense or used uncommon Chinese expressions. Special care was given to the translation of named entities, ambiguous words, and proverbs, and required about a thousand changes. For economical reasons the silver part of the data was only checked on grammatical fluency. Chinese Meaning Representations We start from the English-Chinese sentence pairs with the DRSs originally annotated for English. Interestingly, the DRSs in the PMB can be conceived as language-neutral. Even though the English Word-Net synsets present in the DRS are reminiscent of English, they really represent concepts, not words. Similarly, the VerbNet roles have English names, but are universal thematic roles. An exception is formed by named entities, that are grounded by the orthography used in the source language. In sum, we assume that the translations are, by and large, meaning preserving, and project English to Chinese DRSs by changing all English named entities to Chinese ones as they appeared in the Chinese input (see Figure 1 ). This semantic annotation projection method bears strong similarities and is inspired by Damonte et al. (2017) and Min et al. (2019) .We consider five types of input representations, outlined in Table 1 : (i) raw characters, (ii) continuous characters (i.e., without spaces), (iii) tokenised characters, (iv) tokenised words, and (v) byte-pair encoded text (BPE, Sennrich et al., 2016) . Note that for Chinese, the first two options amount to the same kind of input. For BPE, we experiment with the number of merges (1k, 5k and 10k) and found in preliminary experiments that it was preferable to not add the indicator "@" for Chinese. For English character input we use an explicit "shift" symbol (ˆ) to indicate uppercased characters, to keep the vocabulary size low. Moreover, the | symbol represents an explicit word boundary. For tokenisation we use the Moses tokenizer (Koehn et al., 2007) for English, while we use the default mode of the Jieba tokenizer 2 to segment the Chinese sentences. To fairly compare these different input representations, we do not employ pretrained embeddings.Output Representation Appendix B shows how DRSs are represented for the purpose of training neural models, following van Noord et al. (2018b) . Variables are replaced by indices, and the DRSs are coded in either a linearised character-level or wordlevel clause format. For Chinese, we experimented with both representations and found that the output representation had little effect on parsing performance. To follow previous work (van Noord et al., 2018b) and to allow a fair comparison between the languages, we therefore use the character-level DRS representation for both languages.Data Splits We distinguish between gold (manually corrected meaning representations) and silver (automatically generated and partially corrected meaning representations) data. 
There are a total of 8,403 English-Chinese documents with gold data, of which 885 are used as development set and 898 as test set. The silver data (97,597 documents) is only used to augment the training data, following van Noord et al. (2018b) . We use a fine-tuning approach to effectively use high-quality data in our experiments: first training the system with silver and gold data, then restarting the training to finetune on only the gold data.Neural Architecture We use a recurrent sequence-to-sequence neural network with two bi-directional LSTM layers (Hochreiter and Schmidhuber, 1997) as implemented by Marian (Junczys-Dowmunt et al., 2018), similar to van Noord et al. (2019). 3 Specific hyper-parameters are shown in Appendix A. We also experimented with the Transformer model (Vaswani et al., 2017) , as implemented in the same framework. However, similar to van Noord et al. (2020), none of our experiments reached the performance of the bi-LSTM model. We will therefore only show results of the bi-LSTM model in this paper.Evaluation DRS output is evaluated by using Counter (van Noord et al., 2018a) , a tool that calculates the micro precision and recall of matching DRS clauses. Counter has been widely used in the evaluation of DRS parsers (Abzianidze et al., 2019) . The generated DRSs have to be syntactically as well as semantically well-formed, as checked by the Referee tool (van Noord et al., 2018b) , and are otherwise penalised with an F-score of 0. 4 3 Code to reproduce our experiments is available at:https://github.com/wangchunliu/ Chinese-DRS-parsing 4 For all our models, this only happened <1% of the time. Table 3 shows the average of five runs for each input representation type. Generally, performance on English is significantly better than on Chinese, which is not surprising as the DRSs are based on English input using English WordNet synsets as concepts (see Figure 1) . Given the situation, it is remarkable that Chinese reaches high scores given the differences between the languages in how they convey meaning (Levy and Manning, 2003) . In general, F-scores start to decrease when sentences get longer (Figure 2 ), though there is no clear difference between the character and wordlevel models. This is in line with the findings of van Noord et al. (2018b) . For English, the input types based on characters outperform those based on words. BPE approaches character-level performance for small amounts of merges (1k), but never surpasses it. This too is in line with van Noord et al. (2018b), but also with previous work on NMT for Chinese (Li et al., 2019) . There is a small benefit (0.5) for tokenizing the input text before converting the input to character-level format, though the continuous character representation also works surprisingly well. For Chinese, characterbased input shows the best performance too, though for a very small amount of merges BPE obtains a similar score. As opposed to English, tokenizing the Chinese input is not beneficial when using a character-level representation, though it also does not hurt performance. In general, character-level models seem the most promising for Chinese DRS parsing. Similar results were obtained by Min et al. (2019) Figure 3 shows detailed scores for the characterbased (raw) model on the Chinese and English dev set, categorizing operators (e.g., negation, presupposition or modalities), VerbNet roles (e.g., Agent, Theme), predicates, and senses. Modifiers, especially adverbs, get a systematic lower score in Chinese compared to English. 
This is interesting: an examination of the data reveals that English adverbs are regularly translated as Chinese noun phrases (e.g., slightly → a little). This lowers the F-score even though the meaning is preserved, merely expressed in a semantically different way.
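As an illustration of the character-level input representation described above (the shift symbol ˆ for uppercase letters and | as an explicit word boundary), the following Python sketch converts a tokenised English sentence into that format; the exact conventions of the authors' preprocessing scripts may differ.

def to_char_representation(sentence, shift="^", boundary="|"):
    """Convert a tokenised English sentence into a character-level sequence,
    marking uppercase letters with a shift symbol and word boundaries with '|'.
    This mirrors the representation described above; details are illustrative."""
    out = []
    for token in sentence.split():
        for ch in token:
            if ch.isupper():
                out.append(shift)        # explicit "shift" symbol
                out.append(ch.lower())
            else:
                out.append(ch)
        out.append(boundary)             # explicit word boundary
    return " ".join(out[:-1])            # drop the trailing boundary

print(to_char_representation("Tom visited Beijing"))
# ^ t o m | v i s i t e d | ^ b e i j i n g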
2
In order to compare text simplification corpora in different languages and domains, we have chosen eight corpora in five languages and three domains (see Section 3.1). For the analysis, we use in sum 104 language-independent features (see Section 3.2). In order to analyze relevance of the features per corpus, language, and domain, we conduct several statistical tests (see Section 3.3).Most text simplification research focuses on English, but also research in other languages exist, e.g., Bulgarian, French, Danish, Japanese, Korean. However, due to limited access, now-defunct links, non-parallel-versions, or a missing statement regarding availability, we focus on the following four non-English text simplification corpora:• German (DE) web data corpus (Klaper et al., 2013) , • Spanish (ES) news corpus Newsela (Xu et al., 2015) 2 , 2 https://newsela.com/data/• Czech (CS) newspaper corpus COSTRA (Barančíková and Bojar, 2019) 3 , and• Italian (IT) web data corpus PaCCSS (Brunato et al., 2016) 4 .In contrast, several freely available corpora for English text simplification exist. We decided to use the following four:• TurkCorpus (Xu et al., 2016) 5 ,• QATS corpus (Štajner et al., 2016) 6 , and• two current used versions of the Newsela corpus (Xu et al., 2015) 7 .The first version of Newsela (2015-03-02) (Xu et al., 2015) is already sentence-wise aligned whereas the second version (2016-01-29) is not aligned. Therefore, the alignment is computed on all adjacent simplification levels (e.g., 0-1, 1-2, .., 4-5) with the alignment algorithm MASSAlign proposed in Paetzold et al. 20178 using a similarity value α of 0.2 for the paragraph as well as for the sentence aligner. In addition to the language variation, the corpora chosen for this purpose differ in their domains, i.e., newspaper articles, web data, and Wikipedia data. An overview, including the license, domain, size, and alignment type of the corpora, is provided in Table 1 . As illustrated in Table 1 , the corpora largely differ in their size of pairs (CS-Costra: 293, EN-Newsela-15: 141,582) as well as in the distribution of simplification transformations (see Table 1 ), e.g., 15% of only syntactic simplifications in EN-QATS but only 0.03% in EN-Newsela-15.For the analysis, overall, 104 language-independent features are measured per corpus, domain, or language. 43 features, further called single features, are measured per item in the complex-simplified pair. For the domain and language comparison, the difference of each of the same 43 features between the complex and simplified text is measured, further called difference features. The remaining 18 features, paired features, describe respectively one feature per complex-simplified pair. The implementation of the features is in Python 3 and is based on the code provided by Martin et al. (2018) . In contrast to them, we are offering the usage of SpaCy 9 and Stanza 10 instead of NLTK for pre-processing. In comparison to SpaCy, Stanza is slower but has a higher accuracy and supports more languages. In the following, the results using SpaCy are presented. The pre-processing with SpaCy includes sentence-splitting, tokenization, lemmatizing, POS-tagging, dependency parsing, named entity recognition, and generating word embeddings. The SpaCy word embeddings are replaced in this study by pre-trained word embeddings of FastText (Grave et al., 2018) to achieve a higher quality 11 . 
Unless otherwise stated, this data is used to measure the used features.The single features are grouped into proportion of part of speech (POS) tags, proportion of clauses & phrases, length of phrases, syntactical, lexical, word frequency, word length, sentence length, and readability features. An overview is provided in Table 2 .Proportion of POS Tags Features. Gasperin et al. (2009) and Kauchak et al. (2014) name the proportion of POS tags per sentence as a relevant feature for text simplification. According to Kercher (2013), a higher proportion of verbs in German indicates for instance a simpler text because it might be more colloquial. POS tag counts are normalized by dividing them by the number of tokens per text, as in Kauchak et al. (2014) . A list of all used POS tags features is provided in Table 2 .Proportion of Clauses and Phrases Features. Gasperin et al. (2009) and recommend using the proportion of clauses and phrases. The clauses and phrases extend and complex a sentence, so they are often split (Gasperin et al., 2009) . The proportion of the clauses and phrases is measured using the dependency tree of the texts and differentiated, as shown in Table 2 .Length of Phrases Features. In a study regarding sentence splitting prediction (Gasperin et al., 2009) , the length of noun, verb, and prepositional phrases are used as features because the longer a phrase, the more complex the sentence and the higher the amount of processing. Syntactic Features. We use six syntactic features, computed based on the SpaCy dependency trees and POS tags. Inspired by Niklaus et al. (2019) , we measure whether the head of the text is a verb (Feature 1). If the text contains more than one sentence, at least one root must be a verb. Following Universal Dependencies 12 , a verb is most likely to be the head of a sentence in several languages. So, sentences whose heads are not verbs might be ungrammatical or hard to read due to their uncommon structure. Therefore, the feature of whether the head of the sentence is a noun is added (2). Niklaus et al. (2019) also state that a sentence is more likely to be ungrammatical and, hence, more difficult to read if no child of the root is a subject (3). According to Collins-Thompson (2014), a sentence with a higher parse tree is more difficult to read, we therefore add the parse tree height as well (4). Feature (5) indicates whether the parse tree is projective; a parse is non-projective if dependency arcs cross each other or, put differently, if the yield of a subtree is discontinuous in the sentence. In some languages, e.g., German and Czech, non-projective dependency trees are rather frequent, but we hypothesize that they decrease readability. Gasperin et al. (2009) suggest passive voice (6) as a further feature because text simplification often includes transforming passive to active, as recommended in easy-to-read text guidelines, because the agent of the sentence might get clearer. Due to different dependency label sets in SpaCy for some languages, this feature is only implemented for German and English.Lexical Features. Further, six features are grouped into lexical features. The lexical complexity (Feature 1) might be a relevant feature because a word might be more familiar for a reader the more often it occurs in texts. In order to measure the lexical complexity of the input text, the third quartile of the log-ranks of each token in the frequency table is used (Alva-Manchego et al., 2019). 
The lexical density -type-token-ratio-(2) is calculated using the ratio of lexical items to the total number of words in the input text (Martin et al., 2018; Collins-Thompson, 2014; Hancke et al., 2012; Scarton et al., 2018) . It is assumed that a more complex text has a larger vocabulary than a simplified text (Collins-Thompson, 2014).Following Collins-Thompson (2014) , the proportion of function words is a relevant feature for readability and text simplification. In this study, function words (3) are defined using the universal dependency labels "aux", "cop", "mark" and "case". Additionally, we added the proportion of multi-word expressions (MWE, 4) using the dependency labels "flat", "fixed", and "compound" because it might be difficult for non-native speakers to identify and understand the separated components of an MWE, especially when considering long dependencies between its components. The ratio of referential expressions (5) is also added based on POS tags and dependency labels. The more referential expression, the more difficult the text because the reader has to connect previous or following tokens of the same or even another sentence. Lastly, the ratio of named entities (6) is examined because they might be difficult to understand for non-natives or non-experts of the topic.Word Frequency Features. As another indication for lexical simplification, the word frequency can be used (Martin et al., 2018; Collins-Thompson, 2014) . Complex words are often infrequent, so word frequency features may help to identify difficult sentences. The frequency of the words is based on the ranks in the FastText Embeddings (Grave et al., 2018) . The average position of all tokens in the frequency table is measured as well as the position of the most infrequent word. The paired features (see Table 3 ) are grouped into lexical, syntactic, simplification, word embeddings, and machine translation features.Lexical Features. Inspired by Martin et al. (2018) and Alva-Manchego et al. (2019) , the following proportions relative to the simplified or complex texts are included as lexical features:• Added Lemmas: Additional words can make the simplified sentence more precise and comprehensible by enriching it with, e.g., decorative adjectives or term definitions. • Deleted Lemmas: Deleting complex words might contribute to ease of readability. • Kept Lemmas: Keeping words, on the other hand, might contribute to preserving the meaning of the text (but also its complexity). Kept lemmas describe the words which occur in both texts but might be differently inflected. • Kept Words: Kept Words are a portion of kept lemmas, they describe the proportion of words which occur exactly in the same inflection in both texts. • Rewritten Words: Words which are differently inflected in the simplified text, compared to the complex one, but have the same lemma are called rewritten words. Granted that complex words are rewritten, a higher amount of rewritten words represents a more simplified text.The compression ratio is similar to the Levenshtein Distance and measures how many characters are left in the simplified text compared to the complex text. The Levenshtein Similarity measures the difference between complex and simplified texts by insertions, substitutions, or deletions of characters in the texts.Syntactic Features. The idea of the features of split and joined sentences are based on Gasperin et al. (2009) , both show an applied simplification transaction. 
The sentence is counted as split if the number of sentences of the complex text is lower than of the simplified text. The sentence is counted as joined if the number of sentences of the complex text is higher than of the simplified text.Simplification Features. In order to address more simplification transactions, we measure lexical, syntactical, and no changes. A complex-simplified-pair is considered as a lexical simplification if tokens are added or rewritten in the simplified text. A complex-simplified-pair is considered as a syntactic simplification if the text is split or joined. Also, a change from non-projective to projective, passive to active, and a reduction of the parse tree height are considered as syntactic simplifications. A complex-simplifiedpair is considered as identical if both texts are the same, so no simplification has been applied. As each pair is solely analyzed, the standard text simplification evaluation metric SARI (Xu et al., 2016) , which needs several gold references, cannot be considered in the analysis.Word Embedding Features. The similarity between the complex and the simplified text (Martin et al., 2018) is measured using pre-trained FastText embeddings (Grave et al., 2018) . We consider cosine similarity, and also the dot product (Martin et al., 2018) . The higher the value, the more similar the sentences, the more the meaning might be preserved and the higher the simplification quality might be. Machine Translation (MT) Features. Lastly, three MT features are added to the feature set, i.e., BLEU, ROUGE-L, and METEOR. As text simplification is a monolingual machine translation task, evaluation metrics from MT, in particular the BLEU score, are often used in text simplification. Similar to the word embedding features, the higher the value the more meaning of the complex text is preserved in the simplified text. The BLEU score is a well-established measurement for MT based on n-grams. We use 12 different BLEU implementations, 8 from the Python package NLTK and 4 implemented in Sharma et al. (2017) .The research questions stated in Section 1 will be answered using non-parametric statistical tests using the previously described features on the eight corpora.In order to answer the first research question regarding differences between the simplified and the complex text, the complexity level is the dependent variable (0: complex, 1: simple). The features previously named are the independent variables and the values per complex-simple pairs are the samples. To evaluate whether the feature values differ between the simplified and complex texts, we use nonparametric statistical hypothesis test for dependent samples, i.e., Wilcoxon signed-rank tests. Afterwards, we measure the effect size r, where r>=0.4 represents a strong effect, 0.25<=r<0.4 a moderate effect and 0.1<=r<0.25 a low effect. For the analysis of the research questions 2 and 3 regarding differences between the corpora regarding domains or languages, Kruskal-Wallis one-way analyses of variance are conducted. Therefore, the dependent variables are the languages or domains and the independent variables are the paired and difference features. For the analysis within domains and languages, the tests are evaluated against all corpora of one domain or language, e.g., for Wikipedia data the values of EN-QATS and EN-TurkCorpus are analyzed. For the analysis within and across languages and domains, the tests are evaluated against stacked corpora. 
All corpora assigned to the same language or domain are stacked into one large corpus, e.g., the German corpus and IT-PaCCSS are stacked into a web data corpus and tested against the stacked Wikipedia corpus and the stacked news article corpus. If there is a significant difference between the groups, a Dunn-Bonferroni post-hoc test is applied to identify the pair(s) responsible for the difference. Afterwards, the effect size is again measured using the same interpretation levels as for the Wilcoxon signed-rank tests.
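The statistical procedure above can be sketched in Python with SciPy; the effect size r is recovered here from the normal approximation of the Wilcoxon statistic (r = |z|/√n), which is one common way to compute it and an assumption about the exact procedure used, and all data below are toy values.

import numpy as np
from scipy import stats

def wilcoxon_effect_size(complex_vals, simple_vals):
    """Paired Wilcoxon signed-rank test plus effect size r = |z| / sqrt(n).
    z is recovered from the two-sided p-value via the normal approximation,
    an assumption rather than the paper's exact computation."""
    stat, p = stats.wilcoxon(complex_vals, simple_vals)
    n = len(complex_vals)
    z = stats.norm.ppf(p / 2)                 # two-sided p -> (negative) z score
    r = abs(z) / np.sqrt(n)
    return p, r

# Toy example: one feature measured on 30 complex/simplified pairs.
rng = np.random.default_rng(1)
complex_feat = rng.normal(20, 4, size=30)           # e.g. sentence length, complex side
simple_feat = complex_feat - rng.normal(3, 1, 30)   # simplified side tends to be shorter
p, r = wilcoxon_effect_size(complex_feat, simple_feat)
print(f"p={p:.4f}, effect size r={r:.2f}")          # r >= 0.4 would be a strong effect

# Kruskal-Wallis across groups (e.g. difference features stacked per domain).
news, web, wiki = rng.normal(0, 1, 40), rng.normal(0.5, 1, 40), rng.normal(1, 1, 40)
print(stats.kruskal(news, web, wiki))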
2
We perform a series of methodologies for narrative analysis. Figure 1 illustrates the main components that are used to analyse news and create the website.Preprocessing. First, we perform co-reference and anaphora resolution on each U.S Election article. This is based on the ANNIE plugin in GATE (Cunningham, 2002) . Next, we ex-tract Subject-Verb-Object (SVO) triplets using the Minipar parser output (Lin, 1998) . An extracted triplet is denoted for example like "Obama(S)-Accuse(V)-Republicans(O)". We found that news media contains less than 5% of passive sentences and therefore it is ignored. We store each triplet in a database annotated with a reference to the article from which it was extracted. This allows us to track the background information of each triplet in the database.Key Actors. From triplets extracted, we make a list of actors which are defined as subjects and objects of triplets. We rank actors according to their frequencies and consider the top 50 subjects and objects as the key actors.Polarity of Actions. The verb element in triplets are defined as actions. We map actions to two specific action types which are endorsement and opposing. We obtained the endorsement/opposing polarity of verbs using the Verbnet data (Kipper et al, 2006) ).Extraction of Relations. We retain all triplets that have a) the key actors as subjects or objects; and b) an endorse/oppose verb. To extract relations we introduced a weighting scheme. Each endorsement-relation between actors a, b is weighted by w a,b :EQUATIONwhere f a,b (+) denotes the number of triplets between a, b with positive relation and f a,b (−) with negative relation. This way, actors who had equal number of positive and negative relations are eliminated. Endorsement Network. We generate a triplet network with the weighted relations where actors are the nodes and weights calculated by Eq. 1 are the links. This network reveals endorse/oppose relations between key actors. The network in the main page of ElectionWatch website, illustrated in Fig. 2 , is a typical example of such a network.Network Partitioning. By using graph partitioning methods we can analyse the allegiance of actors to a party, and therefore their role in the political discourse. The Endorsement Network is a directed graph. To perform its partitioning we first omit directionality by calculating graph B = A + A T , where A is the adjacency matrix of the Endorsement Network. We computed eigenvectors of the B and selected the eigenvector that Figure 1 : The Pipeline correspond to the highest eigenvalue. The elements of the eigenvector represent actors. We sort them by their magnitude and we obtain a sorted list of actors. In the website we display only actors that are very polarised politically in the sides of the list. These two sets of actors correlate well with the left-right political ordering in our experiments on past US Elections. Since in the first phase of the campaign there are more than two sides, we added a scatter plot using the first two eigenvectors.Subject/Object Bias of Actors. The Subject/Object bias S a of actor a reveals the role it plays in the news narrative. It is computed as:EQUATIONA positive value of S for actor a indicates that the actor is used more often as a subject and a negative value indicates that the actor is used more often as an object.
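A minimal sketch of the network-partitioning step, assuming a small toy adjacency matrix of net endorsement weights: it symmetrises the matrix as B = A + A^T, takes the eigenvector of the largest eigenvalue, and orders actors by their eigenvector entries (interpreting "sort by magnitude" as sorting the signed values so that the two polarised sides end up at the extremes of the list).

import numpy as np

# Toy weighted endorsement network: A[i, j] = net endorsement weight from actor i to j.
actors = ["Obama", "Democrats", "Romney", "Republicans"]
A = np.array([[ 0,  3, -2, -4],
              [ 2,  0, -1, -3],
              [-2, -1,  0,  4],
              [-3, -2,  3,  0]], dtype=float)

# Drop directionality as described above: B = A + A^T.
B = A + A.T

# Leading eigenvector of the symmetric matrix B (eigh returns ascending eigenvalues).
eigvals, eigvecs = np.linalg.eigh(B)
leading = eigvecs[:, np.argmax(eigvals)]

# Order actors by their eigenvector entry; the two ends of the ordering
# correspond to the two polarised sides of the endorsement network.
for idx in np.argsort(leading):
    print(f"{actors[idx]:12s} {leading[idx]:+.3f}")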
2
The proposed MTS architecture is graphically shown in figure 2. It takes four separate inputs: (i) discussed topic, (ii) first statement, (iii) second statement, and (iv) their stance toward the topic. The final output is the similarity score of the fed in statements with respect to the main context. In the remainder of this section, we would like to describe three main components of MTS: encoding, context integration and statement encoding layers.We observe that a small percentage of the arguments (4.71%) belong to two or more key points, while the rest are matched with at most one. For that reason, a straightforward idea is gathering arguments, which belong to the same key point, and label the clusters in order. In other words, each cluster is represented by a key point K i , contains K i and its matching arguments. Our clustering technique results in the fact that there are a small number of arguments that belong to multiple clusters. Arguments that do not match any of the key points are grouped into the NON-MATCH set.Intuitively, if two different arguments support the same key point, they tend to convey similar meanings and should be considered as a matching pair of statements. Conversely, statements from different clusters are considered non-match in our approach. This pseudo-label method thus utilizes the similar semantic of within-cluster documents and enhances the model robustness. In the remainder of this paper, those arguments that come from the same cluster are referred to as positive pairs, otherwise, they are negative pairs.During training, we use each key point and its matching/non-matching arguments (based on the annotation in the ArgKP-2021 dataset) in a minibatch. Moreover, we also sample a small proportion of the NON-MATCH arguments and merge them into the mini-batch. Specifically, all the NON-MATCH arguments are considered to come from different and novel clusters. Because the definition of positive/negative statement pairs is well-defined, we can easily compute the loss in each mini-batch with a usual metric learning loss (Chopra et al., 2005; Yu and Tao, 2019) .We first extract the contextualized representation for textual inputs using the RoBERTa (Liu et al., 2019) model. We adopt a canonical method (Sun et al., 2019) to achieve the final embedding of a given input, which is concatenating the last four hidden states of the [CLS] token. These embeddings are fed into the context integration layer as an aggregate representation for topics, arguments and key points. For example, a statement vector at this point is denoted as 3 :h X = [ h X 1 , h X 2 , . . . , h X 4×768 ] (h X i ∈ R) = [ h X 1 , h X 2 , . . . , h X 3072 ]with 768 is the number of hidden layers produced by the RoBERTa-base model. For the stance encoding, we employ a fullyconnected network with no activation function to map the scalar input to a N -dimensional vector space. The representation of each topic, statement and stance are denoted as h T , h X and h S , respectively.After using the RoBERTa backbone and a shallow neural network to extract the embeddings acquired from multiple inputs, we conduct a simple concatenation with the aim of incorporating the topic (i.e. context) and stance information into its argument/key point representations. 
After this step, the obtained vector for each statement is (the [ ; ] notation indicates the concatenation operator): v_X = [h_S; h_T; h_X], where v_X ∈ R^(N + 2×3072). The statement encoding component has another fully-connected network on top of the context integration layer to get the final D-dimensional embeddings for key points or arguments: e_X = v_X W + b, where W ∈ R^((N+6144)×D) and b ∈ R^D are the weight and bias parameters. Concretely, training our model is equivalent to learning a function f(S, T, X) that maps similar statements onto close points and dissimilar ones onto distant points in R^D. In each iteration, we consider each input statement from the incoming mini-batch as an anchor document and sample positive/negative documents from within/across clusters. For calculating the matching score between two statements, we compute the cosine distance of their embeddings:

D_cosine(e_{X_1}, e_{X_2}) = 1 − cos(e_{X_1}, e_{X_2}) = 1 − (e_{X_1}^T e_{X_2}) / (||e_{X_1}||_2 ||e_{X_2}||_2)    (1)

Empirical results show that cosine distance yields the best performance compared to Manhattan distance (||e_{X_1} − e_{X_2}||_1) and Euclidean distance (||e_{X_1} − e_{X_2}||_2). Hence, we use cosine as the default distance metric throughout our experiments. We also revisit several loss functions, such as contrastive loss (Chopra et al., 2005), triplet loss (Dong and Shen, 2018) and tuplet margin loss (Yu and Tao, 2019). Unlike previous work, Yu and Tao (2019) use another distance metric, which will be described below. Assume that a mini-batch consists of k + 1 samples {X_a, X_p, X_{n_1}, X_{n_2}, ..., X_{n_{k−1}}} which satisfy the tuplet constraint: X_p is a positive statement of the anchor X_a, whereas the X_{n_i} are its negative statements. Mathematically, the tuplet margin loss function is defined as:

L_tuplet = log(1 + Σ_{i=1}^{k−1} exp(s (cos θ_{an_i} − cos(θ_{ap} − β))))

where θ_{ap} is the angle between e_{X_a} and e_{X_p}, and θ_{an_i} is the angle between e_{X_a} and e_{X_{n_i}}. β is the margin hyper-parameter, which imposes the distance between negative pairs to be larger than β. Finally, s acts as a scaling factor. Additionally, Yu and Tao (2019) also introduced the intra-pair variance loss, which was theoretically proven to mitigate intra-pair variation and improve generalizability. In MTS, we use a weighted combination of both tuplet margin and intra-pair variance as our loss function. The formulation of the latter is:

L_pos = E[(1 − ε) E[cos θ_{ap}] − cos θ_{ap}]_+^2
L_neg = E[cos θ_{an} − (1 + ε) E[cos θ_{an}]]_+^2
L_intra-pair = L_pos + L_neg

where [•]_+ = max(0, •). As pointed out by Hermans et al. (2017) and Wu et al. (2017), training these siamese neural networks raises issues regarding the bias towards easy/uninformative examples. In fact, if we keep feeding random pairs, more easy ones are included and prevent the model from training effectively. Hence, a hard mining strategy becomes crucial for avoiding learning from such redundant pairs. In MTS, we adapt the multi-similarity mining from Wang et al. (2019), which identifies a sample's hard pairs using its neighbors. Given a pre-defined threshold ε, we select the negative pairs if they have a cosine similarity greater than that of the hardest positive pair, minus ε. For instance, let X_a be a statement whose positive and negative sets of statements are denoted by P_{X_a} and N_{X_a}, respectively.
A negative pair of statements {X_a, X_n} is chosen if:

cosine(e_{X_a}, e_{X_n}) ≥ min_{X_i ∈ P_{X_a}} cosine(e_{X_a}, e_{X_i}) − ε

Such pairs are referred to as hard negative pairs; we carry out a similar process to form hard positive pairs. A positive pair {X_a, X_p} is selected if:

cosine(e_{X_a}, e_{X_p}) ≤ max_{X_i ∈ N_{X_a}} cosine(e_{X_a}, e_{X_i}) + ε

At inference time, we pair up the arguments and key points that debate on a topic under the same stance. Afterward, we compute the matching score based on the angle between their embeddings. For instance, an argument A and a key point K will have a matching score of:

score(e_A, e_K) = 1 − D_cosine(e_A, e_K) = cos(e_A, e_K)

The right-hand side function squashes the score into the probability interval of [0, 1) and is compatible with the loss function presented in Section 4.5.
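The mining rule and the inference-time matching score can be sketched as follows; the function names and the toy embeddings are illustrative, and the loop-based implementation is a simplification of the batched multi-similarity mining actually used.

import numpy as np

def cosine(u, v):
    return float(u @ v / (np.linalg.norm(u) * np.linalg.norm(v)))

def mine_hard_pairs(anchor, positives, negatives, eps=0.1):
    """Select hard pairs for one anchor embedding following the rule above:
    keep a negative if its similarity exceeds the hardest (least similar)
    positive minus eps, and a positive if its similarity falls below the
    hardest (most similar) negative plus eps. A sketch, not the MTS code."""
    pos_sims = [cosine(anchor, p) for p in positives]
    neg_sims = [cosine(anchor, n) for n in negatives]
    hard_negs = [i for i, s in enumerate(neg_sims) if s >= min(pos_sims) - eps]
    hard_poss = [i for i, s in enumerate(pos_sims) if s <= max(neg_sims) + eps]
    return hard_poss, hard_negs

def matching_score(e_arg, e_kp):
    """Inference-time matching score: 1 - D_cosine = cos(e_A, e_K)."""
    return cosine(e_arg, e_kp)

rng = np.random.default_rng(0)
anchor = rng.normal(size=8)
pos = [anchor + rng.normal(scale=0.3, size=8) for _ in range(3)]
neg = [rng.normal(size=8) for _ in range(5)]
print(mine_hard_pairs(anchor, pos, neg))
print(round(matching_score(anchor, pos[0]), 3))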
2
The analysis is divided into two parts. First, a list of commonly used adjectives was compiled based on their frequencies in the corpus. The second part involves a statistical analysis based on the categorization of the semantic and syntactic features of the adjectives. The British National Corpus was used through the BNCWeb platform to gather the corpus data. Prior to the query search, several subcorpora were created by using the user-defined function of BNCWeb to select four academic genres, including two non-science genres, humanities and social sciences, and two science genres, natural sciences and medicine, as listed in Table 2. A query string (_AJ0 of) was used for each subcorpus and, after the frequency breakdown function was applied, the results were downloaded in a text file. All the results were then pooled together in a metafile for further analysis. The top 20 most frequent results from each subcorpus were examined and compared. Any misplaced items were removed from the lists and replaced with the item ranked next in the frequency breakdown lists. For example, important of is listed in all four subcorpora within the top 20 ranking. However, a closer examination of the concordance lines showed that these instances did not carry a predicative role, and they were therefore removed. The same procedure was also applied to a few mis-tagged items (e.g., centile of, Carboniferous of). After obtaining the query results from the BNCWeb as described in Section 3.1, the results were randomized and 100 instances of the concordance lines were extracted from each subcorpus and copied onto an Excel file for further analysis. The data were re-grouped into sciences (natural sciences and medicine) and non-sciences (humanities and social sciences) to avoid sparse data for Chi-squared tests. Next, the data were annotated according to their semantic classes, syntactic roles, and presence or absence of adverbial premodification, as shown in Table 3. In the process of annotation, about one quarter of the data (49 instances for non-sciences and 51 for sciences) were excluded. Most of these either contained a mis-tagged form (e.g., revealing of) or were found to be a preposed adjective (e.g., the most important of). The remaining 151 and 149 instances were subjected to Pearson's Chi-squared test and graphically visualized using the vcd package (Meyer, Zeileis, & Hornik, 2017) in the R program.
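The original analysis was run in R with the vcd package; a rough Python equivalent of the Pearson chi-squared test on a genre-by-category contingency table looks like the sketch below, where the counts are invented for illustration only and Cramer's V is added as one common effect-size choice.

import numpy as np
from scipy.stats import chi2_contingency

# Hypothetical contingency table: rows = genre group (sciences / non-sciences),
# columns = annotated categories of the "ADJ + of" pattern. The counts are
# made up; the real counts come from the annotated concordance lines.
table = np.array([[62, 45, 42],    # sciences
                  [48, 60, 43]])   # non-sciences

chi2, p, dof, expected = chi2_contingency(table)
print(f"chi2={chi2:.2f}, dof={dof}, p={p:.4f}")

# Cramer's V as an effect-size measure for the association.
n = table.sum()
cramers_v = np.sqrt(chi2 / (n * (min(table.shape) - 1)))
print(f"Cramer's V={cramers_v:.3f}")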
2
To evaluate the performance of our speech-driven retrieval system, we used the IREX collection 4 . This test collection, which resembles one used in the TREC ad hoc retrieval track, includes 30 Japanese topics (information need) and relevance assessment (correct judgement) for each topic, along with target 4 http://cs.nyu.edu/cs/projects/proteus/irex/index-e.html documents. The target documents are 211,853 articles collected from two years worth of "Mainichi Shimbun" newspaper (1994) (1995) .Each topic consists of the ID, description and narrative. While descriptions are short phrases related to the topic, narratives consist of one or more sentences describing the topic. Figure 2 shows an example topic in the SGML form (translated into English by one of the organizers of the IREX workshop).However, since the IREX collection does not contain spoken queries, we asked four speakers (two males/females) to dictate the narrative field. Thus, we produced four different sets of 30 spoken queries. By using those queries, we compared the following different methods:1. text-to-text retrieval, which used written narratives as queries, and can be seen as a perfect speech-driven text retrieval, 2. speech-driven text retrieval, in which only words listed in the dictionary were modeled in the language model (in other words, the OOV word detection and query completion modules were not used),3. speech-driven text retrieval, in which OOV words detected in spoken queries were simply deleted (in other words, the query completion module was not used), 4. speech-driven text retrieval, in which our method proposed in Section 3 was used.In cases of methods 2-4, queries dictated by four speakers were used independently. Thus, in practice we compared 13 different retrieval results. In addition, for methods 2-4, ten years worth of Mainichi Shimbun Japanese newspaper articles (1991) (1992) (1993) (1994) (1995) (1996) (1997) (1998) (1999) (2000) were used to produce language models. However, while method 2 used only 20,000 high-frequency words for language modeling, methods 3 and 4 also used syllables extracted from lower-frequency words (see Section 4). Following the IREX workshop, each method retrieved 300 top documents in response to each query, and non-interpolated average precision values were used to evaluate each method.<TOPIC><TOPIC-ID>1001</TOPIC-ID> <DESCRIPTION>Corporate merging</DESCRIPTION> <NARRATIVE>The article describes a corporate merging and in the article, the name of companies have to be identifiable. Information including the field and the purpose of the merging have to be identifiable. Corporate merging includes corporate acquisition, corporate unifications and corporate buying.</NARRATIVE></TOPIC> Figure 2 : An English translation for an example topic in the IREX collection.First, we evaluated the performance of detecting OOV words. In the 30 queries used for our evaluation, 14 word tokens (13 word types) were OOV words unlisted in the dictionary for speech recognition. Table 1 shows the results on a speaker-byspeaker basis, where "#Detected" and "#Correct" denote the total number of OOV words detected by our method and the number of OOV words correctly detected, respectively. 
In addition, "#Completed" denotes the number of detected OOV words that were corresponded to correct index terms in 300 top documents.It should be noted that "#Completed" was greater than "#Correct" because our method often mistakenly detected words in the dictionary as OOV words, but completed them with index terms correctly. We estimated recall and precision for detecting OOV words, and accuracy for query completion, as in Equation (4).EQUATIONLooking at Table 1 , one can see that recall was generally greater than precision. In other words, our method tended to detect as many OOV words as possible. In addition, accuracy of query completion was relatively low. Figure 3 shows example words in spoken queries, detected as OOV words and correctly completed with index terms. In this figure, OOV words are transcribed with syllables, where ":" denotes a long vowel. Hyphens are inserted between Japanese words, which inherently lack lexical segmentation. Second, to evaluate the effectiveness of our query completion method more carefully, we compared retrieval accuracy for methods 1-4 (see Section 7.1). Table 2 shows average precision values, averaged over the 30 queries, for each method 5 . The average precision values of our method (i.e., method 4) was approximately 87% of that for text-to-text retrieval.By comparing methods 2-4, one can see that our method improved average precision values of the other methods irrespective of the speaker. To put it more precisely, by comparing methods 3 and 4, one can see the effectiveness of the query completion method. In addition, by comparing methods 2 and 4, one can see that a combination of the OOV word detection and query completion methods was effective.It may be argued that the improvement was relatively small. However, since the number of OOV words inherent in 30 queries was only 14, the effect of our method was overshadowed by a large number of other words. In fact, the number of words used as query terms for our method, averaged over the four speakers, was 421. Since existing test collections for IR research were not produced to explore the OOV problem, it is difficult to derive conclusions that are statistically valid. Experiments using larger-scale test collections where the OOV problem is more crucial need to be further explored.Finally, we investigated the time efficiency of our method, and found that CPU time required for the query completion process per detected OOV word was 3.5 seconds (AMD Athlon MP 1900+). However, an additional CPU time for detecting OOV words, which can be performed in a conventional speech recognition process, was not crucial.To facilitate retrieving information by spoken queries, the out-of-vocabulary problem in speech recognition needs to be resolved. In our proposed method, out-of-vocabulary words in a query are detected by speech recognition, and completed with terms indexed for text retrieval, so as to improve the recognition accuracy. In addition, the completed query is used to improve the retrieval accuracy. We showed the effectiveness of our method by using dictated queries in the IREX collection. Future work would include experiments using larger-scale test collections in various domains.In Japanese, kanji (or Chinese character) is the idiogram, and katakana and hiragana are phonograms.http://winnie.kuis.kyoto-u.ac.jp/dictation 3 http://chasen.aist-nara.ac.jpAverage precision is often used to evaluate IR systems, which should not be confused with evaluation measures in Equation (4).
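For reference, non-interpolated average precision for a single query can be computed as in the following sketch, assuming a ranked list of retrieved document IDs and a set of relevant IDs; this mirrors the standard definition rather than any IREX-specific tooling.

def average_precision(ranked_doc_ids, relevant_ids):
    """Non-interpolated average precision for one query: the mean of the
    precision values measured at each rank where a relevant document is
    retrieved (relevant documents never retrieved contribute zero)."""
    relevant_ids = set(relevant_ids)
    hits, precision_sum = 0, 0.0
    for rank, doc_id in enumerate(ranked_doc_ids, start=1):
        if doc_id in relevant_ids:
            hits += 1
            precision_sum += hits / rank
    return precision_sum / len(relevant_ids) if relevant_ids else 0.0

# Toy example: a ranked list of retrieved documents, 4 relevant overall.
ranked = ["d3", "d7", "d1", "d9", "d4", "d2", "d8", "d5", "d6", "d0"]
relevant = {"d3", "d9", "d5", "d99"}   # d99 was never retrieved
print(round(average_precision(ranked, relevant), 3))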
2
All our translation experiments were conducted with Moses' EMS toolkit (Koehn et al., 2007) , which in turn uses gizapp (Och and Ney, 2003) and SRILM (Stolcke, 2002) .As a test bed, we used the 200 bilingual tweets we acquired that were not used to follow urls, as described in Sections 2.1 and 2.3. We kept each feed separate in order to measure the performance of our system on each of them. Therefore we have 12 test sets.We tested two configurations: one in which an out-of-domain translation system is applied (without adaptation) to the translation of the tweets of our test material, another one where we allowed the system to look at in-domain data, either at training or at tuning time. The in-domain material we used for adapting our systems is the URL corpus we described in section 2.3. More precisely, we prepared 12 tuning corpora, one for each feed, each containing 800 heldout sentence pairs. The same number of sentence pairs was considered for out-domain tuning sets, in order not to bias the results in favor of larger sets. For adaptation experiments conducted at training time, all the URL material extracted from a specific feed (except for the sentences of the tuning sets) was used. The language model used in our experiments was a 5-gram language model with Kneser-Ney smoothing.It must be emphasized that there is no tweet material in our training or tuning sets. One reason for this is that we did not have enough tweets to populate our training corpus. Also, this corresponds to a realistic scenario where we want to translate a Twitter feed without first collecting tweets from this feed.We use the BLEU metric (Papineni et al., 2002) as well as word-error rate (WER) to measure translation quality. A good translation system maximizes BLEU and minimizes WER. Due to initially poor results, we had to refine the tokenizer mentioned in Section 2.1 in order to replace urls with serialized placeholders, since those numerous entities typically require rule-based translations. The BLEU and WER scores we report henceforth were computed on such lowercased, tokenized and serialized texts, and did not incur penalties that would have Table 3 : Performance of generic systems versus systems adapted at tuning time for two particular feeds. The tune corpus "in" stands for the URL corpus specific to the feed being translated. The tune corpora "hans" and "euro" are considered out-of-domain for the purpose of this experiment.otherwise been caused by the non-translation of urls (unknown tokens), for instance. Table 3 reports the results observed for the two main configurations we tested, in both translation directions. We show results only for two feeds here: canadabusiness, for which we collected the largest number of sentence pairs in the URL corpus, and DFAIT MAECI for which we collected very little material. For canadabusiness, the performance of the system trained on Hansard data is higher than that of the system trained on Europarl (∆ ranging from 2.19 to 5.28 points of BLEU depending on the configuration considered). For DFAIT MAECI , suprisingly, Europarl gives a better result, but by a more narrow margin (∆ ranging from 0.19 to 1.75 points of BLEU). Both tweet feeds are translated with comparable performance by SMT, both in terms of BLEU and WER. When comparing BLEU performances based solely on the tuning corpus used, the in-domain tuning corpus created by mining urls yields better results than the out-domain tuning corpus seven times out of eight for the results shown in Table 3 . 
The complete results are shown in Figure 2 , showing BLEU scores obtained for the 12 feeds we considered, when translating from English to French. Here, the impact of using in-domain data to tune Figure 2 : BLEU scores measured on the 12 feed pairs we considered for the English-to-French translation direction. For each tweet test corpus, there are 4 results: a dark histogram bar refers to the Hansard training corpus, while a lighter grey bar refers to an experiment where the training corpus was Europarl. The "in" category on the x-axis designates an experiment where the tuning corpus was in-domain (URL corpus), while the "out" category refers to an out-of-domain tuning set. The out-of-domain tuning corpus is Europarl or Hansard, and always matches the nature of training corpora.the system is hardly discernible, which in a sense is good news, since tuning a system for each feed is not practical. The Hansard corpus almost always gives better results, in keeping with its status as a corpus that is not so out-of-domain as Europarl, as mentioned above. The results for the reverse translation direction show the same trends.In order to try a different strategy than using only tuning corpora to adapt the system, we also investigated the impact of training the system on a mix of out-of-domain and in-domain data. We ran one of the simplest adaptation scenarios where we concatenated the in-domain material (train part of the URL corpus) to the out-domain one (Hansard corpus) for the two feeds we considered in Table 3 . The results are reported in Table 4 .We measured significant gains both in WER and BLEU scores in conducting training time versus tuning time adaptation, for the canadabusiness feed (the largest URL corpus). For this corpus, we observe an interesting gain of more than 6 absolute points in BLEU scores. However, for the DFAIT MAECI (the smallest URL corpus) we note a very modest loss in translation quality when translating from French and a significant gain in the other translation direction. These figures could show that mining parallel sentences present in URLs is a fruitful strategy for adapting the translation engine for feeds like canadabusiness that display poor performance otherwise, without harming the translation quality for feeds that per- Table 4 : Performance of systems trained on a concatenation of out-of-domain and in-domain data. All systems were tuned on in-domain data. Absolute gains are shown in parentheses, over the best performance achieved so far (see Table 3 ).form reasonably well without additional resources. Unfortunately, it suggests that retraining a system is required for better performance, which might hinder the deployment of a standalone translation engine. Further research needs to be carried out to determine how many tweet pairs must be used in a parallel URL corpus in order to get a sufficiently good in-domain corpus.
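Word error rate, one of the two metrics reported here, can be computed with a standard Levenshtein formulation as sketched below (BLEU is usually taken from an existing toolkit and is not re-implemented); the example sentences are invented.

def word_error_rate(hypothesis, reference):
    """WER = minimum number of word insertions, deletions and substitutions
    needed to turn the hypothesis into the reference, divided by the
    reference length (standard Levenshtein formulation; lower is better)."""
    hyp, ref = hypothesis.split(), reference.split()
    d = [[0] * (len(ref) + 1) for _ in range(len(hyp) + 1)]
    for i in range(len(hyp) + 1):
        d[i][0] = i
    for j in range(len(ref) + 1):
        d[0][j] = j
    for i in range(1, len(hyp) + 1):
        for j in range(1, len(ref) + 1):
            cost = 0 if hyp[i - 1] == ref[j - 1] else 1
            d[i][j] = min(d[i - 1][j] + 1,         # deletion
                          d[i][j - 1] + 1,         # insertion
                          d[i - 1][j - 1] + cost)  # substitution / match
    return d[len(hyp)][len(ref)] / max(len(ref), 1)

print(word_error_rate("small business week starts today",
                      "small business week officially starts today"))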
2
We frame the problem as a translation task from English to Bash. In Section 3.1 we describe our approach to incorporate the structure in modelling natural language invocations. Section 3.2 describes the proposed architecture and an analysis of its computational complexity at inference time.The constituency tree represents the syntactic structure of a sentence based on phrase structure grammar (Chomsky, 1956) . We propose a simple method that utilizes the constituency tree for segmenting natural language invocations. Our method is outlined in Figure 1 . First, we normalize the invocation to replace patterns and file paths with their types. Next, we parse the normalized English invocation to obtain its constituency parse tree. For all the experiments reported in this work, we use the Stanford CoreNLP parser (Manning et al., 2014) . Let the height of a node be defined as the number of edges on the longest path from the node to a leaf in the node's subtree (as shown in Figure 1 ). Then we perform a depth-first traversal on the tree in the left to right order of nodes. While performing the depth-first traversal, we cut the tree at the first node with a height less than a threshold and do not expand the search on this node further. As a result, we obtain various subtrees, where each subtree corresponds to a segment composed of the tokens in the leaves of the subtree. Finally, all segments are collected from the subtrees to obtain the segmented invocation.Let the nlc = [t 1 , t 2 , . . . t n ] be composed of n tokens. The invocation segmentation procedure takes the constituency tree for nlc and the threshold height as inputs and returns k seg-ments [s 1 , s 2 , . . . s k ], where each segment s i = [t j , t j+1 , . . . t j+n i −1 ] is composed of n i tokens such that k i=1 n i = n.We use a Transformer (Vaswani et al., 2017) based architecture and modify the Transformer encoder to capitalize on the segmentation information obtained from the constituency tree. Specifically, an averaging layer is introduced before the Transformer encoder to capture the local structure (Section 2). From the embedded token sequence comprising of n vectors, the averaging layer computes a sequence of k segment embeddings. The input to the averaging layer consists of n vectors, each resulting from the sum of token embedding and the corresponding sinusoidal position embedding. These are grouped into k segments, and the averaging layer then computes the mean over each segment to produce a sequence of k embedding vectors, one for each segment. On the decoder side, we use the standard Transformer decoder. We name this architecture Segmented Invocation Transformer (SIT), and it is shown in Figure 2 . The model is trained by back-propagation on the crossentropy loss with label smoothing of 0.1.Complexity Analysis Next, we analyze the computational complexity of the cross-attention of the decoder during inference to point out the improvement over the vanilla Transformer. The decoding occurs in discrete time steps. We shall consider a single time step in this analysis. At each time step, in the cross-attention layer of the decoder, we first construct the keys, query and values and then perform a softmax over the product of keys matrix with the query vector to get the cross-attention scores. Let the dimension of the embedding vectors be d. Considering a single head for simplicity, the construction of values matrix takes O(kd 2 ) time (from the multiplication of R k×d and R d×d matrices), where k is the number of segments. 
Similarly, the construction of the query vector takes O(d^2) time (from the multiplication of an R^d vector with an R^{d×d} matrix). Multiplying the keys matrix (R^{k×d}) with the query vector (R^d), followed by a softmax over the k attention scores, takes O(kd + k) time. This step is followed by a weighted aggregation of the k values, each being d-dimensional, in O(kd) time. Hence, the overall complexity of the cross-attention layer is O(kd^2 + d^2 + kd). Since the dimension d is a constant, this can be simplified to O(k). A vanilla Transformer would incur O(n) time. Therefore, our method provides a constant-factor improvement per decoding time step. This advantage adds up over the multiple decoding time steps needed during the inference phase. The time benchmarks (Section 5.2) show these differences in practice.
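A small sketch of the segmentation procedure, assuming a constituency parse is already available as an nltk Tree (the paper uses the Stanford CoreNLP parser): height is computed here as the number of edges to the deepest leaf, matching the definition above, and the threshold and example parse are illustrative.

from nltk import Tree

def edge_height(node):
    """Height = number of edges on the longest path from the node to a leaf."""
    if not isinstance(node, Tree):
        return 0
    return 1 + max(edge_height(child) for child in node)

def segment(tree, threshold=3):
    """Depth-first, left-to-right traversal that cuts the tree at the first
    node whose height falls below the threshold; each cut subtree yields one
    segment (its leaves). A sketch of the procedure described above."""
    segments = []

    def dfs(node):
        if not isinstance(node, Tree):
            segments.append([node])
        elif edge_height(node) < threshold:
            segments.append(node.leaves())    # cut here, do not expand further
        else:
            for child in node:
                dfs(child)

    dfs(tree)
    return segments

# A pre-parsed (illustrative) constituency tree for a normalized invocation.
parse = Tree.fromstring(
    "(S (VP (VB find) (NP (DT all) (NNS files)) "
    "(PP (IN in) (NP (DT the) (NN directory)))))")
print(segment(parse, threshold=3))
# [['find'], ['all', 'files'], ['in'], ['the', 'directory']]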
2
The proposed DDI classification approach con-sists of four main steps. The first is Drug Name Normalization, which used RxNorm (Nelson, et al., 2011) to normalize drug synonyms in order calculate odds ratio more accurately. Following is the Odds Ratio step, in which we calculate the odds ratio of drug-drug pair matched DDI templates to only one drug matched. Next, Features for Classification presents the DDI classification features. Lastly, the Classifier Ensemble divides positive and negative training data into equal size, and training five classifiers in the different sets, then used a voting strategy to ensemble classification results.A drug might be represented as its generic name or branded name in the text. Here we refer them as drug synonyms. To normalize the synonyms can make the calculation of odds ratio more accurately. RxNorm is a tool developed by National Library of Medicine (NIH). It contains the normalized drug names and links them to many drug vocabularies which are commonly used in pharmacy management and drug interaction software. RxNorm can links the drug names between different systems which do not use the same software and vocabulary. Before calculating the odds ratio, we will use RxNorm to normalize the drug name d into its generic name g. If the d cannot be normalized to any generic name, then we will use d as its normalized name.The odds is the ratio r of the probability p 1 that the event of interest occurs to the probability p 2 that it does not occur. This is often estimated by the ratio of the number of times t 1 that the event of interest occurs to the number of times t 2 that it does not. In this paper, the odds ratio refers to the ratio or of the odds r 1 that the drug d 1 interacts with the drug d 2 to the odds r 2 that d 1 or d 2 interacts with the other drugs. For example, the odds r 1 that a drug d 1 interacts with the drug d 2 is 4 and the odds r 2 that d 1 or d 2 interacts with the other drugs is 2. The odds ratio or of d 1 and d 2 will be 4/2 = 2. The higher or indicates the higher odds that d 1 and d 2 have interaction than they interact with the other drugs. While calculating or, whether d 1 interacts with d 2 is obtained by the DDI templates which we will introduce in section 2.3.2.Our classifier uses basic, template and odds ratio features. The basic and template features utilized the immediate context of the drugs pair as features, whereas odds ratio features used the corpus-level information.The basic features comprised words, Part-ofspeech (POS) and syntactic features. There are two sets of word features used in our system, each with a different feature label. Inter-Drugs ngrams set includes all word unigrams and bigrams located between drugs. If none is present, the feature is given a "NULL" value. Surrounding Words set includes the two words before the first drug and the two after the second drug. If there are no words before or after both NEs, a "NULL" value is set. All words are treated as bag-of-words. That is, the order of these words is not considered. Similarly, the unigrams of POS tags between drugs are also used as POS features. We also parse each sentence with a fullsentence syntactic parser (Roark, et al., 2006) to generate its full parse tree. We use the syntactic path through the parse tree from the drug d 1 to the drug d 2 as a feature.Our template generation (TG) algorithm, which extracts word patterns for drugs pairs using Smith and Waterman's local alignment algorithm (Smith and Waterman, 1981) . 
Firstly, we pair all sentences containing positive relations. The sentence pairs are then aligned word-by-word, and a pattern satisfying the alignment result is created. Each slot in the template is given by the corresponding constraint information expressed in the form of a word (e.g. "associated"). If two aligned sentences have nothing in common for a given slot, the TG algorithm puts a wildcard in that position. The complete TG algorithm is described with pseudo code in the Algorithm below. The similarity function used to compare two tokens in the local alignment algorithm is defined as:

Sim(x, y) = 1, if x = y; 0, otherwise

where x and y are tokens in sentences s_i and s_j, respectively. The similarity of two sentences is calculated by the local alignment algorithm on the basis of this token-level similarity function.

INPUT: A set of sentences S = {s_1, ..., s_k}
1: T = {};
2: for s_i in s_1 to s_{k-1}
3:     for s_j in s_{i+1} to s_k
4:         if the similarity of s_i and s_j is above the threshold
5:         then generate template t from s_i and s_j
6:             T ← T ∪ {t};
7:     end;
8: end;
9: return T
OUTPUT: A set of templates T = {t_1, ..., t_k}

The odds ratio is the ratio of one odds to another, and it is larger than zero. In our experiment, we use different thresholds as odds ratio features, namely 1.0, 1.5, 2.0 and 2.5. The real value of the odds ratio is also used as one of the odds ratio features. The number of negative DDI pairs is higher than that of positive ones in both the DDI corpus and the real world. The support vector machine model (Chang and Lin, 2011) used in our experiment suffers from this problem. To tackle it, we propose a classifier ensemble approach to training our classifiers. Firstly, we randomly divide the negative data into five unique subsets, since the ratio of negative pairs to positive pairs is approximately 5 in the experimental training corpus. Secondly, we construct five training datasets, each containing all positive data and one negative subset. Thirdly, we train five base classifiers with SVM. Here we use the Gaussian kernel. Once the classifiers are constructed, new DDI pairs are classified by the classifiers, and their results are aggregated to form the final ensemble decision output. The voting method is used in this paper. Given classifiers C_i, i = 1, 2, ..., N_C, and DDI labels L_j, j = 1, 2, ..., N_L, where N_C is the ensemble size and N_L is the number of DDI labels, the final aggregated decision is the label that receives the highest number of votes across all classifiers. If a tie occurs, the label with the highest predicted value is assigned.
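The classifier-ensemble step can be sketched with scikit-learn as follows, assuming precomputed feature vectors; the tie-breaking by highest predicted value is omitted for brevity, and all names and toy data are illustrative.

import numpy as np
from sklearn.svm import SVC

def train_ensemble(X_pos, y_pos, X_neg, y_neg, n_models=5, seed=0):
    """Train n_models SVMs (Gaussian/RBF kernel), each on all positive pairs
    plus one of n_models random negative subsets, as described above."""
    rng = np.random.default_rng(seed)
    neg_splits = np.array_split(rng.permutation(len(X_neg)), n_models)
    models = []
    for idx in neg_splits:
        X = np.vstack([X_pos, X_neg[idx]])
        y = np.concatenate([y_pos, y_neg[idx]])
        models.append(SVC(kernel="rbf").fit(X, y))
    return models

def vote(models, X):
    """Majority vote over the ensemble (simple version; the paper's
    tie-breaking by the highest predicted value is not implemented here)."""
    preds = np.array([m.predict(X) for m in models])          # (n_models, n_samples)
    out = []
    for col in range(preds.shape[1]):
        labels, counts = np.unique(preds[:, col], return_counts=True)
        out.append(labels[np.argmax(counts)])
    return np.array(out)

# Toy imbalanced data: 40 positive vs. 200 negative feature vectors.
rng = np.random.default_rng(1)
X_pos, X_neg = rng.normal(1, 1, (40, 10)), rng.normal(-1, 1, (200, 10))
models = train_ensemble(X_pos, np.ones(40), X_neg, np.zeros(200))
print(vote(models, rng.normal(1, 1, (5, 10))))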
2